CodaFend is Cast Net Technology's chart coding product for Medicare Advantage risk adjustment teams. Healthcare is the only domain we work in where every automated decision can end up in an audit, so CodaFend is the only product we explicitly engineered around provenance, determinism, and operator control from the first commit. This page describes the six engineering practices we follow when building it, because chart coding is the one domain where "confident and wrong" gets your customer audited.
Most software gets away with the "ship fast, fix later" posture because the cost of a bug is low and correction is quick. A wrong color on a landing page is fine. A wrong character in a tooltip is fine. Fix on deploy, move on.
Healthcare chart coding doesn't work like that. A confidently wrong ICD-10 binding on a Medicare Advantage chart is a clinically and financially consequential error. CMS can audit it. The plan can lose reimbursement. The coder can lose trust in the software. And unlike a landing page bug, a wrong code lives in a record for years.
So when we started building CodaFend, we made a deliberate choice: every design decision would favor "flag this for review" over "guess confidently". Every finding would be traceable to a page and line in the source document. Every code path would be testable against a synthetic evaluation pack. That's the posture the rest of this page describes.
Note: the practices below apply specifically to CodaFend. Our other products (Mnemosyne and RuleBrief) use different engineering approaches scoped to their own risk profiles.
In concrete terms: an auditor can trace any CodaFend finding back to the exact PDF page it came from, a regression test catches any silent change in behavior before release, and a coder can override the system at any point without fighting it.
These aren't abstract values. Each one is an architecture requirement we enforce on every code change to CodaFend, and we apply the relevant ones selectively to our other products when the risk profile warrants.
Every output must trace back to a source. Whether it's a character offset in a chart PDF, a market data snapshot, or an operator action in a workflow, every assertion has a citable origin. Nothing is asserted without evidence.
Applied to: chart findings (page/offset), ranked market candidates (scoring inputs), accounting events (order logs), inventory records (API source + operator edit).
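One way to make that requirement structural rather than aspirational is to make provenance a required field of the output type itself. The sketch below is illustrative, not CodaFend's actual schema: the `Finding` and `SourceSpan` names and fields are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SourceSpan:
    document_id: str   # stable ID of the ingested chart PDF
    page: int          # 1-based page number in the source document
    char_start: int    # character offset where the evidence begins
    char_end: int      # character offset where the evidence ends

@dataclass(frozen=True)
class Finding:
    icd10_code: str
    evidence_text: str
    provenance: SourceSpan  # required: a finding cannot be constructed without a citable origin

finding = Finding(
    icd10_code="E11.9",
    evidence_text="type 2 diabetes mellitus without complications",
    provenance=SourceSpan(document_id="chart-0042", page=7, char_start=1180, char_end=1226),
)
```

Because `provenance` has no default, the type system refuses to build a finding without one, which is the "nothing asserted without evidence" rule enforced at construction time.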
Given the same inputs, the system produces the same outputs. This enables reproducible audit, regression testing, and meaningful comparison of outputs across time or across versions. Non-deterministic components are isolated, bounded, and explicitly labeled.
Applied to: PHI-safe synthetic chart evaluation packs; reproducible session replays for decision pipelines; SQLite effective-config snapshots for research workflows.
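A minimal way to check that property is to fingerprint each run's output in a canonical form, so two runs on the same input can be compared byte-for-byte. This is a sketch, and the function name is an assumption, not CodaFend's internals:

```python
import hashlib
import json

def output_fingerprint(findings: list[dict]) -> str:
    # Canonicalize before hashing: sort the findings and every dict's keys,
    # so incidental ordering nondeterminism upstream can't change the digest.
    canonical = json.dumps(
        sorted(findings, key=lambda f: (f["code"], f["page"])),
        sort_keys=True,
        separators=(",", ":"),
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

run_a = [{"code": "E11.9", "page": 7}, {"code": "I10", "page": 3}]
run_b = [{"code": "I10", "page": 3}, {"code": "E11.9", "page": 7}]  # same findings, different order
assert output_fingerprint(run_a) == output_fingerprint(run_b)
```

Storing the fingerprint alongside each release makes "did this version change behavior on this input?" a single string comparison.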
Automation is introduced only after the problem is bounded. Edge cases, failure modes, and adversarial inputs are enumerated before code is written. The constraint set defines the boundary of safe operation; behavior outside that boundary triggers a flag or a halt.
Applied to: detection confidence thresholds; grid regime gates; liquidity/spread risk gates; OCR quality minimums.
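The boundary logic can be sketched as a small gate function with three outcomes. The threshold values and names below are illustrative assumptions, not production settings:

```python
from enum import Enum

class Decision(Enum):
    AUTO_ACCEPT = "auto_accept"
    FLAG_FOR_REVIEW = "flag_for_review"
    HALT = "halt"

CONFIDENCE_FLOOR = 0.90   # below this, never auto-accept (illustrative value)
OCR_QUALITY_MIN = 0.80    # below this, the input itself is out of bounds (illustrative value)

def gate(confidence: float, ocr_quality: float) -> Decision:
    if ocr_quality < OCR_QUALITY_MIN:
        return Decision.HALT              # outside the safe boundary: stop, don't guess
    if confidence < CONFIDENCE_FLOOR:
        return Decision.FLAG_FOR_REVIEW   # favor "flag for review" over "guess confidently"
    return Decision.AUTO_ACCEPT
```

Note the ordering: input quality is checked before model confidence, because a high-confidence score on unreadable OCR is exactly the "confident and wrong" case the gate exists to catch.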
Policy layers, review gates, and kill switches are first-class features. The operator configures the system, approves outputs, and retains the right to override, pause, or halt any automated behavior. The system advises; the operator decides.
Applied to: document review workflows, pipeline review gates, compliance alert approval queues.
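In code, "the system advises; the operator decides" means automated output lands in a queue and nothing leaves it without approval, with a halt available at any time. A minimal sketch with hypothetical names:

```python
class ReviewedPipeline:
    """Illustrative operator-control primitives: a kill switch and an approval queue."""

    def __init__(self) -> None:
        self.halted = False
        self.review_queue: list[dict] = []

    def halt(self) -> None:
        # Operator kill switch: all automated proposals stop immediately.
        self.halted = True

    def propose(self, finding: dict) -> None:
        if self.halted:
            raise RuntimeError("pipeline halted by operator")
        self.review_queue.append(finding)   # the system advises...

    def approve(self, index: int) -> dict:
        return self.review_queue.pop(index)  # ...the operator decides
```

The key design choice is that approval is the only path out of the queue: there is no code path where a proposal becomes an accepted finding without an operator action.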
New automation logic runs in parallel with existing behavior, read-only, against live or production-equivalent data. The shadow output is observed and compared to the baseline before promotion. No new logic enters production without a shadow validation period.
Applied to: detection model updates in document pipelines; regulatory source classification updates.
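The shadow comparison itself can be sketched in a few lines: the candidate runs on the same inputs as the baseline, its output is never emitted, and only disagreements are recorded for review. Function names here are illustrative assumptions:

```python
def shadow_compare(charts, baseline_fn, candidate_fn):
    """Run candidate_fn read-only alongside baseline_fn; collect disagreements."""
    disagreements = []
    for chart in charts:
        live = baseline_fn(chart)      # this result is what production uses
        shadow = candidate_fn(chart)   # this result is observed, never emitted
        if shadow != live:
            disagreements.append(
                {"chart": chart, "baseline": live, "candidate": shadow}
            )
    return disagreements

charts = ["chart-1", "chart-2", "chart-3"]
baseline = lambda c: {"chart-1": "E11.9", "chart-2": "I10", "chart-3": "N18.3"}[c]
candidate = lambda c: {"chart-1": "E11.9", "chart-2": "I10", "chart-3": "N18.4"}[c]

diffs = shadow_compare(charts, baseline, candidate)
assert len(diffs) == 1 and diffs[0]["chart"] == "chart-3"
```

Promotion then becomes a judgment on the disagreement list: each diff is either an intended improvement or a regression, and the candidate only replaces the baseline once every diff is accounted for.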
Behavioral changes must be intentional. PHI-safe synthetic evaluation packs, deterministic test sets, and baseline comparisons ensure that a code change cannot silently alter system behavior. Regressions are caught before deployment, not discovered in production.
Applied to: document intelligence releases; regulatory intelligence classifier updates.
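The baseline-comparison step can be sketched as a check that fails the build on any unplanned diff against a stored snapshot. The evaluation cases and names below are synthetic illustrations, not CodaFend's actual packs:

```python
def regression_check(eval_pack: list[dict], run_fn, baseline: dict) -> list[str]:
    """Run each synthetic case and report every divergence from the stored baseline."""
    failures = []
    for case in eval_pack:
        got = run_fn(case["input"])
        expected = baseline[case["id"]]
        if got != expected:
            failures.append(f"{case['id']}: expected {expected!r}, got {got!r}")
    return failures

eval_pack = [
    {"id": "synthetic-001", "input": "dm2 no complications"},
    {"id": "synthetic-002", "input": "essential hypertension"},
]
baseline = {"synthetic-001": "E11.9", "synthetic-002": "I10"}
run_fn = lambda text: {"dm2 no complications": "E11.9",
                       "essential hypertension": "I10"}[text]

assert regression_check(eval_pack, run_fn, baseline) == []
```

When a behavioral change is intentional, the baseline is updated in the same change set, so the diff in review shows exactly which outputs moved and why.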
"Writing healthcare software that holds up in audit isn't slower — it's just different. The work goes into making every decision visible and reversible, instead of making every decision fast and automatic."— The CodaFend team at Cast Net Technology