Audit Trails That Actually Help

Designing for the person being decided about

Most audit trails are designed for the institution that owns the system. This is a silent architectural decision that shapes almost every decision system built today, and it has consequences that are worth naming.

An audit trail that exists to protect the institution is a defensive record. It documents what the system did and why, in terms that satisfy an auditor or a regulator examining the institution's compliance posture. It is a legal artifact. It speaks the vocabulary of the institution. It is stored in systems the institution controls and released to outside parties only under conditions the institution specifies.

An audit trail that exists to help the person being decided about is a fundamentally different object. It documents the same facts, but with a different primary user in mind. It is written in the vocabulary of the person, not the institution. It is accessible to them on their own terms, not on terms the institution dictates. It answers the questions they actually ask, which are usually different from the questions the institution cares about.

Both kinds of audit trails are technically possible in a deterministic decision system. In fact, the same underlying data can serve both purposes, because the facts of what happened are shared. The question is whether the system is designed to surface those facts to both audiences, or only to one.

This post argues that the second design choice, building audit trails that serve the person being decided about, is not charity. It is a better product decision for institutions that care about defensibility, it is a better architectural decision for systems that want to survive regulatory scrutiny, and it is a better ethical decision for anyone building infrastructure that makes decisions about people's lives.

The post describes what this design actually looks like in code and in product, using examples from healthcare prior authorization and logistics exception handling, which are the two domains I have the most direct experience with. The pattern generalizes to any domain where a decision system operates between an institution and an individual, which is most of them.

The problem with institution-first audit trails

Every audit trail captures a set of facts about a decision. In a typical healthcare prior authorization system, those facts include what was submitted, what criteria were evaluated, what the evaluation result was, who approved or denied, and what documentation supported the decision. The underlying facts are the same regardless of which audience consumes them.

The difference is who can see them, in what form, and under what conditions. In an institution-first design:

  • The audit trail lives in the institution's systems. A patient asking about their own prior authorization decision is not reading the audit trail. They are reading a summary letter prepared from it, usually by a staff member, at the institution's convenience.
  • The vocabulary is institutional. The audit trail speaks in payer codes, criterion identifiers, and policy references. The patient sees something translated into generic language that loses the specificity that would help them challenge the decision.
  • The timing favors the institution. The audit trail is generated synchronously with the decision. The patient receives information about it days or weeks later, sometimes only after requesting it, sometimes only in the form of an appeal denial notice.
  • The completeness is gated. The full audit trail contains details the institution considers proprietary or sensitive. The patient receives a curated excerpt chosen by the institution, not the full record that would enable them to assess the decision on their own terms.

The cumulative effect of these design choices is a system in which the person affected by a decision has strictly less information about it than the system has. This asymmetry is sometimes justified on privacy, security, or commercial grounds. More often, it is a default that nobody examined. The audit trail was designed to satisfy auditors, and auditors are the institution's auditors, so the design optimized for them.

This is a solvable problem. Most of the asymmetry is architectural rather than essential.

What a person-oriented audit trail looks like

Here is a specific example. In our spine surgery prior authorization product, when the engine produces a determination, it generates a data structure that captures every factor that went into the decision. A simplified version looks like this:

{
  "decision_id": "auth_7f3a9b2c",
  "timestamp": "2026-04-15T14:32:07Z",
  "overall_status": "APPROVED_WITH_CONDITIONS",
  "rule_pack": {
    "id": "cigna_spine_fusion",
    "version": "3.2.1",
    "effective_date": "2026-01-01",
    "content_hash": "sha256:a4f2..."
  },
  "criteria_evaluated": [
    {
      "criterion_id": "conservative_therapy_duration",
      "description": "At least 6 weeks of documented conservative therapy",
      "status": "SATISFIED",
      "evidence": {
        "document": "PT_note_2025_12_10.pdf",
        "text_excerpt": "Patient completed 8 weeks of physical therapy...",
        "document_date": "2025-12-10"
      },
      "rule_trace": "8 weeks documented, exceeds 6-week threshold"
    },
    // ... additional criteria
  ],
  "operator_confirmations": [...],
  "integrity_hash": "sha256:b7e3..."
}

Every element of this record was already being captured for institutional audit purposes. The architectural question is who can see it, in what form, at what time.
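The integrity_hash field deserves a brief note. A sketch of how such a hash might be computed, assuming canonical JSON serialization (sorted keys, no extra whitespace) and excluding the hash field itself; the function name is illustrative, not Avectic's actual implementation:

```python
import hashlib
import json

def integrity_hash(record: dict) -> str:
    """Hash a decision record so later tampering is detectable.

    Assumes canonical serialization: sorted keys, compact separators.
    The hash field itself is excluded before hashing.
    """
    body = {k: v for k, v in record.items() if k != "integrity_hash"}
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
    return "sha256:" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()

record = {
    "decision_id": "auth_7f3a9b2c",
    "overall_status": "APPROVED_WITH_CONDITIONS",
}
record["integrity_hash"] = integrity_hash(record)

# Anyone holding the record can recompute the hash over the same
# canonical form and confirm nothing was altered after the decision.
assert integrity_hash(record) == record["integrity_hash"]
```

The point of the hash is that both audiences, the institution's auditors and the person affected, can verify they are looking at the same unaltered record.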

In an institution-first design, this record lives in the payer's or provider's database. It is exposed to auditors on request. The patient sees a letter that says something like "Your request has been approved. You met all criteria." The letter may not even list the criteria. If the decision had gone the other way, the letter would say "Your request has been denied. You did not meet all criteria," with perhaps a single policy citation and no detail about what was reviewed or why it fell short.

In a person-oriented design, the patient (or their representative) has real-time access to the full record, translated into their vocabulary, from the moment the decision is made. The translation is not dumbed down; it is precisely as specific as the institutional version, just phrased for the reader. Where the institutional version says "criterion conservative_therapy_duration satisfied: 8 weeks documented, exceeds 6-week threshold," the person-oriented version says "Your insurance required at least 6 weeks of physical therapy before approving surgery. The records show you completed 8 weeks. This requirement was met."

The same specificity. The same evidence. The same rule trace. Different framing of who the audience is.

Why this matters more than it appears

The difference between a person-oriented and an institution-oriented audit trail looks small at first. It becomes large once you look at what each enables in a contested decision.

Consider a denial. In the institution-first design, the patient receives a letter: "Your prior authorization request has been denied. You did not meet the following criterion: imaging documentation." The patient has very little to work with. They do not know what imaging was reviewed. They do not know what the reviewer looked for. They do not know what specifically was missing. If they want to appeal, they have to request the full record, which takes time and may require legal support.

In the person-oriented design, the same denial comes with a complete trace: "Your insurance required an MRI or CT scan dated within 6 months of this request that shows pathology at the surgical level. The records reviewed included your MRI from March 2025, which is 13 months before this request. This document is outside the 6-month window. A new imaging study within the past 6 months would satisfy this requirement."

The second version does three things the first cannot:

It tells the patient exactly what went wrong. Not "you did not meet the criterion," but "your MRI is 13 months old; the rule requires 6 months or less." That is the difference between knowing you failed and knowing why, and knowing why is what makes the failure fixable.

It tells the patient exactly what would fix it. "A new imaging study within the past 6 months" is an actionable next step. The patient can schedule an MRI and resubmit, knowing that this specific remediation directly addresses the gap.

It shifts the conversation from adversarial to collaborative. The institution-first denial feels like an accusation of insufficiency. The person-oriented denial feels like a joint problem statement: here is what the rules require, here is what your records show, here is the gap, here is how to close it. The underlying decision is the same. The relationship is different.

This is not a rhetorical improvement. It is a direct consequence of designing the audit trail for the person. The information that lets the patient act was already in the system. It was just not being handed to them.
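A remediation path like this can be derived mechanically from the rule trace rather than written by hand per case. A hypothetical sketch of a recency criterion that emits an actionable remediation string on failure; the function and field names are illustrative, not the product's actual API:

```python
from datetime import date

def evaluate_recency(doc_date: date, request_date: date, window_months: int) -> dict:
    """Evaluate a date-window criterion and, on failure, say what would fix it."""
    age_months = ((request_date.year - doc_date.year) * 12
                  + (request_date.month - doc_date.month))
    if age_months <= window_months:
        return {"status": "SATISFIED",
                "trace": f"document is {age_months} months old, "
                         f"within {window_months}-month window"}
    return {
        "status": "NOT_SATISFIED",
        "trace": f"document is {age_months} months old, "
                 f"exceeds {window_months}-month window",
        # The remediation is computed from the same threshold the rule applied,
        # so it is always consistent with the rule that produced the denial.
        "remediation": (f"A new imaging study dated within the past "
                        f"{window_months} months would satisfy this requirement."),
    }

# The MRI from the example: dated March 2025, request dated April 2026.
result = evaluate_recency(date(2025, 3, 10), date(2026, 4, 15), window_months=6)
print(result["status"])  # NOT_SATISFIED
print(result["remediation"])
```

Because the remediation string is generated from the same threshold the rule evaluated, it cannot drift out of sync with the policy the way a hand-written letter template can.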

The same principle across domains

The pattern generalizes. The specifics vary by domain but the architectural principle is the same: the system has the information that would help the person affected; the question is whether the system surfaces that information to them or keeps it in the institutional workflow.

Logistics

A shipment exception occurs. The carrier's system flags it and applies the contract's SLA rules to determine the appropriate response. In an institution-first design, the shipper receives a notification: "Shipment 8842-C delayed. Carrier will deliver on next business day." That is it. The decision has been made. The shipper does not know which SLA clause was invoked, does not know what alternatives the carrier considered, does not know whether they were owed a credit or have grounds to dispute.

In a person-oriented design, the same decision comes with the full trace: "Your shipment was delayed due to carrier hub congestion in Memphis. Under your contract section 4.2(b), weather and infrastructure delays extend the delivery window by one business day without credit. Alternative routing would have added 36 hours and is not triggered under the current SLA because total delay is within the allowed window. If the shipment had been routed through Louisville, delivery would have occurred on time, but this routing requires prior authorization under your contract."

The shipper now understands the decision, understands the alternatives, understands the specific clauses that produced the outcome, and is in a position to make contract adjustments for future shipments if they choose. None of this information was unavailable. It was just not part of the default notification surface.

Benefits determination

A Medicaid applicant submits documentation. The state agency's system evaluates eligibility against the current rules. In an institution-first design, the applicant receives a letter: "Your application has been denied. You do not meet the income requirements." The applicant does not know what income figure was used, does not know which documents were considered, does not know whether a correction would change the outcome.

In a person-oriented design: "Your application was reviewed against the 2026 income limits for a household of 3 in your county, which is $27,750 per year. The income documentation reviewed showed combined household income of $29,400 from your pay stubs dated January through March 2026. If your income has decreased since then, or if household members have changed, an updated application with current documentation may be eligible. You have 60 days to file an appeal and 90 days to file an updated application."

Same decision. Different relationship between the applicant and the system. The applicant who receives the second version can, in most cases, resolve the issue without external help. The applicant who receives the first version usually cannot.

Implementation: what the architecture requires

A deterministic decision engine with the bridge pattern architecture (covered in prior posts: The Bridge Pattern and Encoding Expert Rules) already produces most of what a person-oriented audit trail needs. The rule trace is there. The evidence citations are there. The threshold comparisons are there. The recency evaluations are there. The specific clauses that drove the outcome are captured.

Three things are typically missing, and each is more an intentional architectural commitment than a large engineering effort:

A translation layer

Institutional audit data uses institutional vocabulary. A person-oriented audit trail requires a translation layer that maps institutional terms into the vocabulary the person would recognize. This is not simply a dictionary replacement. It requires understanding what level of detail to preserve, what context to add, and what framing to adopt.

The translation layer is a small amount of domain-specific mapping work. For a given rule pack, someone with domain expertise writes the human-facing description for each criterion, and the system uses that description when rendering the audit trail for the affected person. The description is authored once per criterion, not once per decision, so the authoring cost scales with the rule set, not with the case volume.

The discipline is to write the descriptions honestly. It is tempting, under an institution-first mindset, to soften the descriptions to avoid admitting the specific threshold that was applied. "You did not meet the imaging requirement" is softer than "your MRI is 13 months old and the rule requires 6 months or less." Softer is worse. Specificity is the thing the person needs to act, and removing specificity in the name of protecting the institution's position is exactly the wrong trade.
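A minimal sketch of what this authoring model might look like: one human-facing template per criterion, written once per rule pack and filled from each decision's trace values. The mapping and field names here are hypothetical, chosen to match the earlier example:

```python
# One person-facing template per criterion, authored once per rule pack
# by someone with domain expertise. Keyed by the criterion_id that the
# institutional audit trail already uses.
PERSON_FACING = {
    "conservative_therapy_duration": (
        "Your insurance required at least {threshold_weeks} weeks of physical "
        "therapy before approving surgery. The records show you completed "
        "{observed_weeks} weeks. This requirement was {outcome}."
    ),
}

def render_for_person(criterion: dict) -> str:
    """Render one evaluated criterion in the affected person's vocabulary."""
    template = PERSON_FACING[criterion["criterion_id"]]
    outcome = "met" if criterion["status"] == "SATISFIED" else "not met"
    # The same threshold and observed values the rule engine used are
    # substituted directly, so specificity is preserved, not softened.
    return template.format(outcome=outcome, **criterion["values"])

criterion = {
    "criterion_id": "conservative_therapy_duration",
    "status": "SATISFIED",
    "values": {"threshold_weeks": 6, "observed_weeks": 8},
}
print(render_for_person(criterion))
```

Note that the authoring cost scales with the number of criteria in the rule pack, while the rendering cost is near zero per decision, which is what makes the translation layer cheap to maintain.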

Access mechanisms

A person-oriented audit trail requires a way for the person to actually access it. This is a product and integration question, not just an engineering one. The person needs to know the audit trail exists, needs to have a mechanism to view it, and needs to have it in a format they can use.

The simplest implementation is a dedicated web view, accessed through a link included in the decision notification. The link resolves to a page that shows the translated audit trail with the full rule trace, the evidence citations, and the specific remediation paths if the decision was adverse. More sophisticated implementations integrate with patient portals, shipper dashboards, or benefits-applicant systems that the person already uses.

The key product principle is that the person-oriented view must exist by default, not by request. Requiring a person to request the audit trail reintroduces the institutional gatekeeping that the architecture was supposed to remove. The default path must be: decision is made, person is notified, full audit trail is immediately accessible to them on terms they can use.

Real-time generation

Institution-first audit trails are often generated in batch, updated nightly, or assembled on demand when a case is disputed. Person-oriented audit trails must be generated in the same transaction as the decision, so that the person's view is available at the same moment the decision is communicated.

This is easier than it sounds. A deterministic decision engine already produces the trace as part of generating the decision. The architectural step is to serialize the trace in both institution-facing and person-facing forms at the point of decision, and to expose the person-facing form through the same notification path that tells them the decision was made.
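As an illustrative sketch, assuming hypothetical function and field names and a placeholder URL, the decision-finalization step might produce both views and the notification link in one pass:

```python
def finalize_decision(trace: dict) -> dict:
    """Produce institutional record, person-facing view, and notification together.

    Generating all three in the same step as the decision means the
    person-facing trail exists by default, never assembled later on request.
    """
    person_view = {
        "decision": trace["overall_status"],
        "explanations": [c["person_facing_text"]
                         for c in trace["criteria_evaluated"]],
    }
    notification = {
        "message": f"A decision has been made on request {trace['decision_id']}.",
        # The full person-facing trail is linked by default, not by request.
        # URL is a placeholder for whatever portal the person already uses.
        "detail_url": f"https://example.invalid/decisions/{trace['decision_id']}",
    }
    return {"institutional": trace,
            "person": person_view,
            "notification": notification}

trace = {
    "decision_id": "auth_7f3a9b2c",
    "overall_status": "APPROVED_WITH_CONDITIONS",
    "criteria_evaluated": [
        {"criterion_id": "conservative_therapy_duration",
         "person_facing_text": "The physical therapy requirement was met."},
    ],
}
out = finalize_decision(trace)
print(out["notification"]["detail_url"])
```

The design choice worth noticing is that the person-facing serialization is a peer output of the decision, not a downstream report derived from it later.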

What this unlocks

Systems that ship with person-oriented audit trails change the dynamic of the decisions they make in several concrete ways.

Appeal rates on incorrect denials go up, and appeal rates on correct denials go down. Both of these are good outcomes. Incorrect denials get surfaced faster because the person has the information to spot them. Correct denials are accepted more often because the person understands the rule that applies and can see why their situation does not meet it. The system becomes less adversarial and more collaborative.

Remediation cycles shorten. When the audit trail specifies exactly what would resolve an adverse decision, the person can take the specific action required without having to guess, consult, or engage professional help. This lowers the cost of the decision system as a whole, not just for the affected person but for the institution that has to process re-submissions and appeals.

Trust in the system increases. A person who receives a specific, well-reasoned, traceable explanation of a decision is more likely to accept it (if adverse) or appreciate it (if favorable) than one who receives an opaque notification. Trust is a real operational asset in decision systems, and institution-first designs consistently erode it while person-oriented designs build it.

Compliance posture improves. Regulators increasingly expect that decisions affecting individuals come with individual-level explanations. A system that already ships with person-oriented audit trails is positioned to meet these requirements as they land, rather than having to retrofit them later.

The objection worth addressing

The most common objection to person-oriented audit trails is that they expose information the institution would prefer to keep internal. There are legitimate versions of this concern and illegitimate versions. It is worth separating them.

The legitimate version is about information that genuinely should be private: other patients' data, other applicants' submissions, proprietary underwriting formulas, or security-sensitive details. Nothing in the person-oriented architecture requires exposing these. The audit trail the person sees contains information about their own case and the specific rules applied to it, not about other cases or proprietary internals.

The illegitimate version is about information the institution would prefer to keep opaque for strategic reasons: the specific thresholds applied, the specific documents reviewed, the specific rule version that drove the decision. This information is not proprietary in any meaningful sense. The rule pack is often derived from a publicly available policy. The thresholds are written down in the policy. Keeping them opaque to the person affected is a choice that benefits the institution at the person's expense, and it is not a choice the architecture should enable by default.

When an institution pushes back against surfacing this kind of information, the honest response is usually that the pushback reflects an assumption about the relationship between the institution and the person that is worth reexamining. A decision system that operates between two parties should not default to hiding the decision's basis from one of those parties.

The discipline this requires

Building person-oriented audit trails is not technically harder than building institution-first audit trails. The underlying data is the same. The architectural choices that produce one versus the other are small and local: a translation layer, an access mechanism, a real-time generation path.

What it requires is a specific design discipline. When building a decision system, ask at every stage who the audit trail is for. If the answer is always "the institution," the system will drift toward opacity by default. If the answer includes "the person affected," the system will make different product choices at many small junctures, and those small choices compound into a system that treats the person as a participant rather than as a subject.

The discipline is not expensive. It pays for itself quickly through better appeal outcomes, shorter remediation cycles, and stronger compliance posture. But it does require that someone on the team holds the question as a recurring lens on product decisions, because the default pressure of institutional customers is always to keep more information internal, and the default pressure of engineering is to ship what satisfies the stated requirement, which is almost always the institutional one.

Closing

Audit trails are architectural artifacts. The way they are designed reflects an assumption about who the decision system serves. An audit trail designed for the institution serves the institution. An audit trail designed for both the institution and the person affected serves both. The second design is not meaningfully more expensive to build. It is just a different default.

Every deterministic decision engine captures, as a byproduct of producing defensible decisions, enough information to explain those decisions to anyone. The question is whether the engine's product surface makes that information accessible to the person on the receiving end, or whether it routes the information exclusively to the institution.

For engineers, product leaders, and executives building in this space, the choice is not between a complicated ethical framework and shipping a product. The choice is between two implementations that are technically almost identical and that produce very different relationships between a system and the people its decisions affect. Pick the one that treats the person as a participant. It is better product design. It is better architecture. It also happens to be better ethics, but the ethics follow from the product choice rather than competing with it.

Audit trails that actually help are the ones that help both audiences. Build for both.

Ryan Kamykowski is the CEO and co-founder of Avectic Corporation, which builds deterministic decision infrastructure for AI-assisted workflows. For questions or discussion, email info@avectic.com.