Can an #AI #Doctor partner with clinicians? Can we please move past the AI-versus-clinician comparisons (taking board exams, solving diagnostically challenging cases, providing more empathetic online responses to patients) and instead focus on improving patient care and outcomes? The authors of a recent study, Hashim Hayat, Adam Oskowitz, et al. at the University of California, San Francisco, may be hinting at this: they envision an agentic model (Doctronic) "used in sequence with a clinician" to expand access while letting doctors focus on high‑touch, high‑complexity care, supporting the notion that AI's "main utility is augmenting throughput" rather than replacing clinicians (https://lnkd.in/e-y3CnuF).

In their study:
▪️ More than 100 cooperating LLM agents autonomously handled history evaluation, differential diagnosis, and plan development.
▪️ Performance was assessed with predefined LLM‑judge prompts plus human review.
▪️ The primary diagnosis matched clinicians in 81% of cases, and at least one of the top four diagnoses matched in 95%, with no fabricated diagnoses or treatments.
▪️ AI and clinicians produced clinically compatible care plans in 99.2% of cases (496/500).
▪️ In discordant outputs, expert reviewers judged the AI superior 36% of the time vs. 9% for clinicians (the remainder were equivalent).

Some key #healthcare AI concepts to consider:
🟢 Cognitive back-up: in this study, the model surfaced overlooked guideline details (seen in the 36% of discordant cases where the model followed guidelines the clinicians missed).
🟢 Human perception: clinicians sense nuances that AI cannot perceive (body language, social determinants of health).
🟢 Workflow relief: automating history-taking and structured documentation, which this study shows is feasible, returns precious time to bedside interactions.
🟢 Safety net through complementary error profiles: humans misdiagnose for different reasons than #LLMs, so using both enables cross-checks that neither could execute alone and may have a synergistic effect.

Future research would benefit from trials that directly quantify team performance (clinician/team alone vs. clinician/team + AI) rather than head-to-head contests, aligning study design with the real clinical objective: better outcomes through collaboration.

Ryan McAdams, MD Scott J. Campbell MD, MPH George Ferzli, MD, MBOE, EMBA Brynne Sullivan Ameena Husain, DO Alvaro Moreira Kristyn Beam Spencer Dorn Hansa Bhargava MD Michael Posencheg Bimal Desai MD, MBI, FAAP, FAMIA Jeffrey Glasheen, MD

Thoughts? #UsingWhatWeHaveBetter
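For readers curious how "predefined LLM‑judge prompts plus human review" can work in practice, here is a minimal sketch of the general LLM-as-judge pattern. It is an illustration only, not the study's actual protocol: the prompt wording, verdict labels, and model name are assumptions, and only discordant verdicts would go on to human expert review.

```python
# Minimal LLM-as-judge sketch (illustrative; not the Doctronic study's actual prompts).
# Assumes the OpenAI Python SDK; model name, rubric, and labels are placeholders.
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """You are an expert physician reviewer.
Compare the AI-generated assessment to the clinician's assessment for the same visit.
Answer with one word: COMPATIBLE, AI_SUPERIOR, CLINICIAN_SUPERIOR, or DISCORDANT,
followed by a one-sentence justification."""

def judge_case(ai_assessment: str, clinician_assessment: str) -> str:
    """Ask an LLM judge to grade agreement between two diagnoses/care plans."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[
            {"role": "system", "content": JUDGE_PROMPT},
            {"role": "user", "content": (
                f"AI assessment:\n{ai_assessment}\n\n"
                f"Clinician assessment:\n{clinician_assessment}"
            )},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

# Hypothetical example; non-compatible verdicts would be routed to human reviewers.
print(judge_case("Acute uncomplicated cystitis; nitrofurantoin for 5 days.",
                 "UTI; ciprofloxacin for 7 days."))
```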
AI Tools to Support Clinicians
Explore top LinkedIn content from expert professionals.
Summary
AI tools designed to support clinicians are transforming healthcare by enhancing diagnostic accuracy, improving patient care, and streamlining workflows. These tools function as collaborative aids, offering insights, automating tasks, and serving as safety nets to complement, not replace, human expertise.
- Focus on collaboration: Use AI tools to handle tasks like history-taking and documentation, freeing up time for clinicians to prioritize patient interactions and high-complexity care.
- Leverage complementary strengths: Rely on AI for data-driven insights while combining them with clinicians’ ability to assess non-verbal cues and social factors for comprehensive care.
- Adopt a step-by-step approach: Start by integrating AI in specific workflows, such as real-time transcription or diagnostic support, and expand usage as tools become more refined.
-
While taking my daily AI vitamin (💊 The AI Daily Brief podcast), I started exploring the functionality of 𝗮𝗺𝗯𝗶𝗲𝗻𝘁 𝗔𝗜 𝗮𝗴𝗲𝗻𝘁𝘀 and what they could mean for healthcare. Chatting back and forth with an LLM can feel forced: I know what I want, but I must steer the model toward it. Agentic coding tools (e.g., Anthropic's Claude Code, Replit, OpenAI's Codex) flipped that script: I describe an outcome, then watch multiple agents write, test, and publish code while I step in only when needed. Imagine that same fluidity inside an EHR.

𝗔 𝗱𝗮𝘆 𝗶𝗻 𝗰𝗹𝗶𝗻𝗶𝗰 𝘄𝗶𝘁𝗵 𝗮𝗻 𝗮𝗺𝗯𝗶𝗲𝗻𝘁 𝗮𝗴𝗲𝗻𝘁 𝘀𝘁𝗮𝗰𝗸

𝟭𝟮 𝗣𝗠 – 𝗻𝗶𝗴𝗵𝘁 𝗯𝗲𝗳𝗼𝗿𝗲
• 🩺 𝘗𝘢𝘵𝘪𝘦𝘯𝘵 𝘴𝘯𝘢𝘱𝘴𝘩𝘰𝘵 𝘢𝘨𝘦𝘯𝘵: reviews notes, labs, etc., and builds a one-page brief. ✅ Epic Outpatient Insights, Synopsis, and ambient dictation vendors are developing pre-charting workflows.
• 📋 𝘎𝘶𝘪𝘥𝘦𝘭𝘪𝘯𝘦 & 𝘊𝘋𝘚 𝘴𝘸𝘦𝘦𝘱 𝘢𝘨𝘦𝘯𝘵: flags gaps (vaccines, overdue labs) and cross-checks meds against alerts. ✅ Most EHRs expose CDS Hooks today.

𝟲𝟬 𝗺𝗶𝗻 𝗽𝗿𝗲 𝘃𝗶𝘀𝗶𝘁
• 🗂️ 𝘏𝘰𝘷𝘦𝘳 𝘤𝘩𝘦𝘤𝘬𝘭𝘪𝘴𝘵 𝘢𝘨𝘦𝘯𝘵: a persistent hover card that ranks open issues (🔴 critical → 🟡 routine). 🔭 Doesn't exist yet, but the building blocks (APIs) are there.

𝗗𝘂𝗿𝗶𝗻𝗴 𝘃𝗶𝘀𝗶𝘁
• 🎙️ 𝘈𝘮𝘣𝘪𝘦𝘯𝘵 𝘥𝘪𝘤𝘵𝘢𝘵𝘪𝘰𝘯 + 𝘤𝘩𝘦𝘤𝘬𝘭𝘪𝘴𝘵 𝘴𝘺𝘯𝘤 𝘢𝘨𝘦𝘯𝘵: live transcription while a checklist auto-ticks or adds items. ✅ Several AI vendors handle transcription (Abridge, Microsoft, Ambience Healthcare); the auto checklist is a great opportunity (🔭).
• 📝 𝘖𝘳𝘥𝘦𝘳 & 𝘤𝘰𝘥𝘦 𝘴𝘶𝘨𝘨𝘦𝘴𝘵𝘪𝘰𝘯𝘴 𝘢𝘨𝘦𝘯𝘵: real-time drafts of orders plus CPT/ICD codes. ✅ Rolling out with current ambient dictation vendors.

𝗔𝗳𝘁𝗲𝗿 𝘃𝗶𝘀𝗶𝘁
• 💌 𝘗𝘢𝘵𝘪𝘦𝘯𝘵 𝘧𝘰𝘭𝘭𝘰𝘸 𝘶𝘱𝘴 𝘢𝘨𝘦𝘯𝘵𝘴: patient-friendly instructions plus scheduling assistance. ✅ Emerging in ambient platforms; Hyro, for example, already handles smart scheduling.

𝗕𝗲𝘁𝘄𝗲𝗲𝗻 𝘃𝗶𝘀𝗶𝘁𝘀
• 🔔 𝘉𝘢𝘤𝘬𝘨𝘳𝘰𝘶𝘯𝘥 𝘮𝘰𝘯𝘪𝘵𝘰𝘳𝘪𝘯𝘨 𝘢𝘨𝘦𝘯𝘵: scans the inbox, new labs, and RPM feeds, then bundles outstanding items into a daily digest. 🔭 Pieces exist (message rules, pop-health dashboards), but true cross-system agent orchestration is lacking.

This could scale to additional agents that, for example, scan recent literature on a patient's conditions (OpenEvidence) or pre-pilot prior-auth forms with patient data and information scoured from payer portals. None of this is impossible:
• Coding agents already show that LLMs can plan, iterate, and act on entire workflows autonomously.
• EHR-native AI plugins are landing every quarter, proving the plumbing is ready.

What's missing is the connective, ambient layer that stitches today's point solutions into one continuous, event-driven loop, with clinicians setting the course instead of steering everything.
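The connective layer described above is essentially an event-driven dispatcher that routes EHR events to specialized agents and hands their drafts back to the clinician. Here is a minimal sketch of that pattern under stated assumptions: the event names, agent functions, and orchestrator class are hypothetical and not any vendor's API.

```python
# Minimal sketch of an event-driven ambient agent loop.
# Illustrative assumptions only: EHREvent, the agents, and the orchestrator are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EHREvent:
    kind: str        # e.g. "visit_scheduled", "lab_resulted", "visit_closed"
    patient_id: str
    payload: dict

class AmbientOrchestrator:
    """Routes EHR events to the agents registered for them; clinicians review every draft."""
    def __init__(self) -> None:
        self.handlers: dict[str, list[Callable[[EHREvent], str]]] = {}

    def register(self, kind: str, agent: Callable[[EHREvent], str]) -> None:
        self.handlers.setdefault(kind, []).append(agent)

    def dispatch(self, event: EHREvent) -> list[str]:
        # Each agent returns a draft artifact (brief, checklist, order draft) for sign-off.
        return [agent(event) for agent in self.handlers.get(event.kind, [])]

# Hypothetical agents corresponding to the pre-visit steps in the post.
def patient_snapshot_agent(event: EHREvent) -> str:
    return f"One-page brief for patient {event.patient_id}"

def cds_sweep_agent(event: EHREvent) -> str:
    return f"Care-gap flags for patient {event.patient_id}"

orchestrator = AmbientOrchestrator()
orchestrator.register("visit_scheduled", patient_snapshot_agent)
orchestrator.register("visit_scheduled", cds_sweep_agent)

# A visit scheduled for tomorrow triggers both pre-visit agents the night before.
drafts = orchestrator.dispatch(EHREvent("visit_scheduled", "pt-001", {"time": "tomorrow 9 AM"}))
print(drafts)
```

The design choice mirrored here is the post's point: the agents run in the background off events, and the clinician only steps in to review or correct the drafts rather than driving every step.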
-
Be bold. Lead the change. Your clinicians and your patients deserve it.

This week, I witnessed history at Presbyterian Healthcare Services. Our expanded launch to more clinics wasn't just another tech rollout; it was one of the most rewarding moments of my professional life.

Significant impact, 10–30 minutes → seconds: time for complex differential diagnosis, point-of-care hyper-personalized treatments, care summaries, insurance justifications, and AVS creation dropped from minutes to seconds.

Clinicians are embracing RhythmX AI as a trusted partner, and here's what they're saying:
▪ "Practicing medicine made so much easier"
▪ "When I showed him [the patient] the history of everything that we did [with RhythmX AI], the questions I asked with RhythmX AI, he was blown away. I was blown away too!"
▪ "RhythmX AI helped me think about stuff I wasn't thinking about... So I think it was a big win for that patient to protect her from further falls."
▪ "We see really complex patients with a lot going on... this helps make sure we aren't missing anything"
▪ "You have to have a product I can trust... I've never seen a wrong answer out of your product. And I think that's really important."
▪ "It is quite helpful when I am in an exam room with a patient and need a quick answer about their work up, etc. It is quite accessible and allows me to engage the patient while we search for the answer together. This is perhaps my most favorite part of Rhythm X AI."
▪ "This [RhythmX AI] is great, this is the best I have seen"
▪ "I'm looking forward to incorporating it into my workflow with every patient, and then using it in the room with our patients as questions come up"
▪ "It took about 10 seconds to get all of that information with one simple question."
▪ "...And then I can use that information to make my point in the HCC documentation that I put. Sometimes there's 10 or 11 diagnoses that need to be documented like that"
▪ "I do think this is even more of a game changer than Dax. With my workflow... the potential is huge"
▪ "This is great, a good pickup and something I didn't know on a complicated patient, good for the patient."

This isn't hype. This is live, in production, changing lives today. Trustworthy AI, clinically validated, embedded where it matters most (your EHR workflows) and hyper-personalized to each patient.

If you've been asking, "When is the right time?", the answer is clear: now. For the sake of your clinicians and patients. PHS leaders have set the stage for a global revolution in how medicine is practiced. It's bold, visionary, and deeply human.

Lori Walker Brad Cook, MBA, FHFMA Keith Rivera

#AgenticAI #HealthcareInnovation #ClinicianExperience #Hyper-personalizedCare #PrimaryCare Revolution
-
Penda Health and OpenAI released a paper yesterday. Here are a few takeaways I pulled together from an implementation lens:

📍 Location: Nairobi, Kenya
🏥 Clinics involved: 15
🧑‍⚕️ Providers involved: 100+ mid-level providers
👥 Patients included: 39,849 total visits (across AI and non-AI groups)
📊 External review: 108 independent physician reviewers (29 based in Kenya) reviewed a random sample of 5,666 patient visits to assess diagnostic and treatment errors. Used for evaluation only.

🛠️ Product implementation: AI Consult
- The earliest version of AI Consult was discontinued; it required clinicians to actively prompt it, which disrupted workflow
- In 2025, Penda redesigned the tool to act as a passive "safety net" during diagnosis and treatment planning. Clinicians remained in control and had the final say
- A traffic-light system (red/yellow/green) flagged potential errors in real time, grounded in Kenyan national clinical guidelines, Penda's internal protocols, and Kenyan epidemiological context (a hedged sketch of this pattern follows below)
- Clinician notes (with patient identifiers removed) were shared with the OpenAI API at key points during the visit to generate feedback. Patients provided consent and could withdraw their data at any time
- GPT-4o was selected for its lower latency, enabling faster responses during live patient sessions. At the time of implementation, more advanced models hadn't been released; o3 has since become the highest-performing model on HealthBench
- A key implementation challenge: deciding when to deliver feedback. Some clinicians document asynchronously, so timing affected whether suggestions were helpful or disruptive

📈 Results
- In the AI group, clinicians made fewer mistakes across the board, from asking the right questions to ordering tests, making diagnoses, and choosing treatments
- Projected impact: ~22,000 diagnostic and ~29,000 treatment errors potentially averted annually
- Red alerts dropped from ~45% to ~35% of visits over time, suggesting clinicians learned with use
- 100% of surveyed clinicians said AI Consult helped, though the survey method wasn't detailed
- No statistically significant difference was found in longer-term patient outcomes (measured 8 days later)

A follow-up thought: the paper argues that implementation, not model capability, is now the bigger challenge. That seems true for the context of this study, but model capability would still have to be evaluated safely in more specialized care settings like oncology or neurology.

Thanks Karan Singhal, Robert Korom, Ethan Goh, MD for sharing this work publicly! Paper in the comments.
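As promised above, here is a minimal sketch of what a passive traffic-light check on a clinician note could look like. It is an assumption-laden illustration, not Penda's actual AI Consult: the prompt text, labels, and integration points are invented, and only the use of the OpenAI API and a red/yellow/green output are taken from the post.

```python
# Minimal sketch of a traffic-light "safety net" review of a de-identified clinical note.
# Illustrative only: prompt, labels, and wiring are assumptions, not Penda Health's implementation.
from openai import OpenAI

client = OpenAI()

SAFETY_NET_PROMPT = """You are a clinical decision-support reviewer.
Given a de-identified visit note, flag potential problems with history-taking,
diagnosis, tests ordered, or treatment. Respond with exactly one label on the first line:
GREEN (no concerns), YELLOW (moderate concerns), or RED (likely error),
followed by brief, guideline-based reasoning."""

def review_note(deidentified_note: str) -> tuple[str, str]:
    """Return (traffic_light_label, rationale) for a de-identified clinician note."""
    response = client.chat.completions.create(
        model="gpt-4o",  # the study cites GPT-4o for low latency; still a placeholder here
        messages=[
            {"role": "system", "content": SAFETY_NET_PROMPT},
            {"role": "user", "content": deidentified_note},
        ],
        temperature=0,
    )
    text = response.choices[0].message.content.strip()
    label, _, rationale = text.partition("\n")
    return label.strip(), rationale.strip()

# Hypothetical usage: the clinician keeps the final say; a RED label only prompts a second look.
label, rationale = review_note(
    "5-year-old with fever and cough for 2 days. Plan: amoxicillin; no malaria test ordered."
)
print(label, "-", rationale)
```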