Case study: AI supporting healthcare workers

Webinar case study: Where do healthcare professionals stand on Artificial Intelligence?
Updated on
25/3/2026

On March 17, 2026, I hosted a webinar that brought together over 90 healthcare professionals around a simple question: where do you stand on Artificial Intelligence?

The 30-Second Essentials

| Question | Short answer | Key takeaway |
| --- | --- | --- |
| Are caregivers really using AI? | Yes, but unevenly | 75% of participants already use at least one AI tool in their practice. |
| What is the most used tool? | Administrative chatbots | ChatGPT/Gemini lead (24%), surpassing voice dictation (21%). |
| Primary source of stress? | Fear of medical error | 43% cite "making a mistake that disables or kills" as their top concern. |
| Time lost to admin? | 1 to 2 hours per day | 45% spend 1-2 hours daily on admin; 16% exceed 2 hours. |
| Is ambient listening adopted? | No, not yet | Daily usage is currently at 0%, indicating a significant adoption barrier. |
| Can AI replace judgment? | No | LLMs can hallucinate data with apparent consistency, making human oversight vital. |
| What is Galeon doing? | Automation and sovereignty | Deployed in 19 hospitals, focusing on predictive biomarkers and secure prescription. |

Introduction

I have been working on medical AI since 2016, when I founded Galeon based on one observation: medical data was poorly structured and unusable for training algorithms in hospitals. Galeon developed a shared patient record system that is now deployed in 19 hospitals, with 3 million patient files and over 10,000 active caregivers. This field experience gives me a direct view of the gap between the surrounding discourse on AI and what caregivers experience every day.

Which tools are caregivers actually using?

| AI tool | Declared usage |
| --- | --- |
| Administrative chatbot (ChatGPT, Gemini…) | 24% |
| Voice dictation | 21% |
| Chatbot for medical decision support | 18% |
| Diagnostic assistance | 12% |
| Ambient listening | 0% |

The dominant use is administrative, not medical. Ambient listening, heavily publicized in tech circles, sits at zero in real practice. Globally, 94% of the population has never used generative AI: we live in an echo chamber, yet our patients increasingly arrive at consultations with diagnoses provided by ChatGPT.

What is the real source of stress for caregivers?

| Source of stress | % of respondents |
| --- | --- |
| Making a mistake that disables or kills a patient | 43% |
| Administrative tasks | 27% |
| Pressure from administrative bodies | 14% |
| Work overload | 11% |
| Delays in consultations | 5% |

The fear of error crushes everything else. This result should be engraved in the minds of everyone who designs health AI tools: a tool that saves time but hallucinates dosages is not a solution, it is an additional risk. The CNOM (2024 Atlas) and the HAS (quality-of-working-life framework, 2022) confirm that administrative burden and the pressure of traceability are among the leading factors of medical burnout in France.

How much time is lost to administration?

| Daily administrative time | % of respondents |
| --- | --- |
| Less than 30 minutes | 5% |
| 30 minutes to 1 hour | 33% |
| 1 to 2 hours | 45% |
| 2 to 3 hours | 9% |
| More than 3 hours | 7% |

61% of our participants exceed one hour per day. DREES (2023) estimates that a French general practitioner devotes between 20% and 25% of their time to non-clinical tasks. IGAS (2023) points to the multiplication of traceability obligations as the main cause of deteriorating working conditions in public hospitals.

Which tools to recommend to caregivers?

OpenEvidence remains the international reference: $735 million raised, partnerships with the NEJM and JAMA, systematically cited sources, and more than 100 million queries per month in the United States. Gemini is useful for quickly accessing verifiable French and international primary sources. MedGPT (Synapse Medicine) is relevant for common pathologies on French corpora, but limited on rare diseases.

On ChatGPT Health, I will be direct: independent evaluations have shown misrouting errors in nearly half of the serious patient cases tested. These companies tout medical-exam pass rates while forgetting to mention that the tool was often trained on that very exam. My advice: always use several tools in parallel, cross-reference the answers, and demand the sources.
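As a minimal illustration of that habit, here is a Python sketch. Everything in it is a placeholder of my own: `query_tool` stands in for whatever client each assistant actually exposes, and the canned answers are fabricated solely to show how divergence between assistants gets surfaced for verification.

```python
# Hypothetical sketch: no real API is wired in, and the canned answers
# below are fabricated placeholders, not medical guidance.
def query_tool(tool: str, question: str) -> str:
    canned = {
        "assistant_a": "Drug X, 1 g twice daily",
        "assistant_b": "Drug X, 1 g three times daily",
    }
    return canned[tool]

def cross_check(question: str, tools: list[str]) -> dict[str, str]:
    """Ask every tool the same question and flag any disagreement."""
    answers = {t: query_tool(t, question) for t in tools}
    if len(set(answers.values())) > 1:
        print("Answers diverge -- verify against primary sources.")
    return answers

print(cross_check("Standard adult dose of drug X?",
                  ["assistant_a", "assistant_b"]))
```

The point of the pattern is not the code but the reflex: a single confident answer proves nothing, while two diverging answers force you back to the sources.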

Limits and challenges: what no one says often enough

Hallucinations are structural, not bugs. An LLM generates what is statistically consistent, not what is true. In an ambient-listening report, it can invent a weight for an anorexic patient that was never mentioned, rendered in impeccable clinical style. The HAS warns of this risk in its guide on clinical decision support systems (2024).
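To make the oversight point concrete, here is a deliberately naive Python sketch, an assumption of mine rather than the HAS guidance or Galeon's pipeline: it flags any number in a generated report that never appears in the source transcript, so a clinician reviews it before it enters the record.

```python
import re

def unsupported_numbers(transcript: str, report: str) -> list[str]:
    """Return numeric values in an AI-generated report that never
    appear in the source transcript (e.g. an invented weight)."""
    seen = set(re.findall(r"\d+(?:[.,]\d+)?", transcript))
    return [n for n in re.findall(r"\d+(?:[.,]\d+)?", report)
            if n not in seen]

# Fabricated example: the transcript never mentions a weight.
transcript = "Patient reports fatigue and loss of appetite for 3 weeks."
report = "Anorexic patient, weight 38 kg, fatigue and appetite loss for 3 weeks."

for value in unsupported_numbers(transcript, report):
    print(f"Unverified value '{value}' -- hold for clinician review")
```

A real system would need unit normalization and entity matching, but even this naive check catches the invented weight in the example, which is exactly the kind of guardrail the hallucination risk calls for.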

An MIT study (2024) measured up to 40% less brain activity in ChatGPT users, in areas related to reasoning and working memory, with a deficit persisting one month after stopping. Applied to clinical reasoning, this result deserves serious attention. These tools are powerful, but you must keep a critical mind. Responsibility remains with the doctor, whatever the software vendors say (Council of State, 2022).

Conclusion

Caregivers use AI, but mainly for administration. Their primary source of stress remains the fear of error, which makes tool reliability non-negotiable. And time lost to non-medical tasks represents between one and three hours per day for the majority of them. What I have been building with Galeon since 2016 is exactly that: an infrastructure that structures data at the source, protects medical secrecy through decentralized architecture, and deploys validated tools in 19 hospitals. Useful medical AI is not the one that replaces the doctor. It is the one that gives them back the time to care.

Would you like to watch the webinar recording?

Click here

