Primary-care appointments in many U.S. hospitals are booked for 30 minutes, yet clinicians spend 36.2 minutes per visit inside the electronic health record (EHR), plus another six minutes of “pajama time” after hours.
This digital drag is a major driver of burnout: a 2024 American Medical Association survey found that 48% of physicians reported burnout, the first time since the COVID-19 pandemic that the rate dipped below 50%.
Reformers had hoped that 2021 CMS billing rule changes would shrink notes, but a review of 1.7 billion charts shows the opposite: average note length grew 8.1% in just three years, and many fear that AI is making it worse.
Ambient AI “scribes” promised relief, and early pilots do trim about 20% of note-writing time. Still, most systems record an encounter and dump a monolithic draft that clinicians must fix line by line, trading keyboard time for proofreading time. Nearly one-third of clinicians using AI report spending up to three extra hours per week correcting AI-generated notes.
A different playbook: Recursive reasoning
Copenhagen-based Corti believes the way out is not more transcription but better reasoning. Its new AI reasoning pipeline, FactsR, treats a consultation like a stream of evidence, not a wall of text. According to the company’s technical report, the pipeline works in four tight loops:
- Listen & extract: Small sliding windows capture candidate facts (vitals, symptoms, orders) before the growing transcript can swamp the model’s context.
- Evaluate & refine: A second model scores each fact, rewriting or discarding low-confidence items.
- Clinician in the loop: Every fact carries a timestamp and transcript link. One tap approves, edits, or deletes it.
- Note generation: When the visit ends, FactsR stitches the verified list into a standard template (e.g. a SOAP note) that downstream systems can process easily; a sketch of the whole flow follows below.
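Corti has not published reference code, but the four loops map naturally onto a small data structure plus a driver. The Python below is a minimal sketch under that reading of the report; `Fact`, `facts_r_loop`, `extract`, `evaluate`, `review`, and the 0.6 confidence cutoff are all illustrative names and values, not Corti’s API.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List, Optional

@dataclass
class Fact:
    text: str                # e.g. "BP 128/84"
    section: str             # SOAP section proposed by the extractor
    timestamp: float         # seconds into the consultation audio
    transcript_span: str     # the utterance the fact links back to
    confidence: float = 0.0  # evaluator score in [0, 1]
    approved: bool = False   # flipped by the clinician's one-tap review

def facts_r_loop(
    windows: Iterable[str],
    extract: Callable[[str], List[Fact]],
    evaluate: Callable[[Fact], float],
    review: Callable[[Fact], Optional[Fact]],
    min_confidence: float = 0.6,
) -> List[Fact]:
    verified: List[Fact] = []
    for window in windows:                    # 1. Listen & extract
        for fact in extract(window):
            fact.confidence = evaluate(fact)  # 2. Evaluate & refine
            if fact.confidence < min_confidence:
                continue                      # low-confidence facts are discarded
            decision = review(fact)           # 3. Clinician in the loop
            if decision is not None and decision.approved:
                verified.append(decision)
    return verified

def to_soap_note(facts: List[Fact]) -> str:
    # 4. Note generation: stitch verified facts into a SOAP template
    lines = []
    for section in ("Subjective", "Objective", "Assessment", "Plan"):
        lines.append(f"{section}:")
        lines.extend(f"  - {f.text}" for f in facts if f.section == section)
    return "\n".join(lines)
```

Because extraction, evaluation, and review all run per window, the verified list grows during the visit, which is what lets the clinician steer in real time rather than at sign-off.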
Because FactsR keeps reasoning throughout the consultation, and the reasoning happens while the patient is still in the room, the clinician can correct course in real time instead of at 10 p.m., without splitting attention between patient and screen.
What the numbers say
Corti tested FactsR on the public Primock57 benchmark, pitting it against a strong few-shot GPT-style scribe, with the clinician-review step enabled.

[Figure: error rates for FactsR vs. the few-shot baseline on Primock57, with clinician review enabled]
The slight rise in ungrounded lines reflects physician additions that never appeared verbatim in the dialogue, a trade clinicians already make when they summarize care plans.
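For the curious, “ungrounded” here means a note line with no verbatim support in the dialogue. One simple way such a check could be implemented, not necessarily Corti’s metric, is a fuzzy match of each note line against transcript utterances; the 0.8 threshold below is an arbitrary illustration.

```python
import difflib

def ungrounded_lines(note_lines, transcript_utterances, threshold=0.8):
    """Flag note lines that lack close verbatim support in the transcript.

    SequenceMatcher and the 0.8 cutoff are illustrative choices,
    not the metric Corti reports.
    """
    flagged = []
    for line in note_lines:
        best = max(
            (difflib.SequenceMatcher(None, line.lower(), u.lower()).ratio()
             for u in transcript_utterances),
            default=0.0,
        )
        if best < threshold:
            flagged.append(line)  # no utterance comes close: ungrounded
    return flagged
```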
Why it matters
- Safer summaries: Every fact is provenance-backed, giving auditors and malpractice insurers a clear trail.
- Less cognitive overhead: Early user tests found doctors removed six irrelevant facts and added four missing ones per visit, all without rewinding the entire transcript.
- Model-agnostic: The pipeline is an API layer; hospitals can swap in their preferred large language model without costly retraining.
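Concretely, a model-agnostic layer usually means the reasoning loops call the model only through one narrow interface, so the model is a constructor argument rather than a hard dependency. A hypothetical Python sketch (class and method names are assumptions, not Corti’s API):

```python
from typing import Protocol

class LLMBackend(Protocol):
    """Minimal contract the reasoning loops need from any model."""
    def complete(self, prompt: str) -> str: ...

class HostedModelAdapter:
    """Adapter for a vendor-hosted model; the actual call is elided."""
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("wire up the hospital's hosted deployment here")

class LocalModelAdapter:
    """Adapter for an on-prem model, e.g. under sovereign-cloud rules."""
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("wire up the self-hosted model here")

def refine_fact(backend: LLMBackend, fact_text: str) -> str:
    # The pipeline only ever sees the narrow interface, so swapping
    # models is a one-line change at construction time.
    return backend.complete(f"Rewrite this clinical fact concisely: {fact_text}")
```

Under this design, moving to the self-hosted option the company plans would be a matter of passing a different adapter, with no change to the reasoning code.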
The bigger picture
Investors poured nearly a billion dollars into ambient scribe start-ups last year. Yet most solutions still work in hindsight, fixing yesterday’s note. FactsR points to a future where AI becomes an analytical partner, catching what the ear misses, editing itself, and handing clinicians a chart they can sign without dread.
Corti is opening the platform to developers this week with HIPAA and GDPR guards in place. A self-hosted option for institutions with sovereign-cloud rules lands later this summer.
If the early data hold up, the real-time reasoning approach could shift ambient AI from a clerical aid to a clinical colleague, one that never gets tired, never forgets, and above all, lets physicians spend their evenings being human again.