Doctors who use AI may ‘forget’ some skills

The AI Report
Daily AI, ML, LLM and agents news
Artificial intelligence can enhance diagnostic accuracy and streamline patient care. Yet as AI systems grow more sophisticated, a critical question emerges: could reliance on these tools inadvertently diminish human expertise? A recent study offers a cautionary glimpse, suggesting that technology designed to augment our abilities might subtly erode the very skills it is meant to support.
The Surprising Findings from Poland
A Polish study examined gastroenterologists’ colonoscopy performance with and without an AI system that highlights polyps in real time. While AI assistance initially improved detection, a critical insight emerged when researchers analyzed how the doctors performed once the AI was switched off.
A 20% Decline in Independent Detection
Strikingly, after a period of working with AI assistance, doctors’ independent ability to spot polyps declined by approximately 20%. Dr. Marcin Romańczyk, the study leader, theorized that doctors might subconsciously become less vigilant, waiting for the AI’s "green box" rather than actively scrutinizing every detail. This points to a "safety-net effect": when clinicians perceive a technological backup, they lean less on their own core skills.
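To make sense of the headline number, it helps to separate an absolute drop in detection rate from a relative one. The short sketch below uses purely hypothetical rates (the study's exact figures are not reproduced here) to show how a fall of six percentage points from a 30% baseline works out to roughly a 20% relative decline.

    # Purely illustrative arithmetic with hypothetical rates (not the study's published figures)
    baseline_rate = 0.30   # hypothetical polyp detection rate before AI exposure
    post_ai_rate = 0.24    # hypothetical rate when later working without AI assistance

    absolute_drop = baseline_rate - post_ai_rate      # 0.06 -> 6 percentage points
    relative_drop = absolute_drop / baseline_rate     # 0.20 -> roughly a 20% relative decline

    print(f"Absolute drop: {absolute_drop * 100:.0f} percentage points")
    print(f"Relative decline: {relative_drop:.0%}")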
Navigating AI Integration: Nuance and Skepticism
The findings are thought-provoking, but they call for a nuanced reading. Critics such as researcher Johan Hulleman question whether three months is long enough for skills honed over decades to deteriorate, and he suggests that statistical variation, potentially driven by differences in patient demographics, could account for some of the decline. The study also could not confirm the medical significance of every "missed" polyp, raising questions about the clinical impact.
Augment, Don't Replace
These are valid points, yet the core concern persists: as AI permeates routine screening and imaging, from ophthalmology to oncology, how do we ensure it augments human capability without fostering over-reliance? Thoughtful integration is crucial, especially for doctors new to AI. Dr. Romańczyk himself believes AI improves his work; the objective is to understand its full impact on human performance.
Strategies for Responsible AI Adoption
To harness AI responsibly, healthcare professionals and system developers must account for its psychological effects. Practitioners need continuous skill maintenance, whether through periodic "AI-free" practice or targeted training that emphasizes fundamental diagnostic abilities. Institutions must design AI systems and protocols that promote active human engagement and critical thinking rather than passive acceptance. Structured training on interacting with AI, understanding its limitations, and verifying its suggestions will be paramount.
Prioritizing Human Expertise
This study is a vital reminder that while AI is an invaluable partner, human expertise remains the cornerstone of patient care. The goal is a symbiotic relationship in which AI enhances performance without eroding the cognitive skills and critical judgment that only a human professional provides. More real-world studies of the AI-human interplay will guide us toward a future where technology truly empowers clinicians without diminishing the healer's art.
Moving Forward with Intentionality
As AI evolves, its deployment must be intentional and informed. Let’s champion research that celebrates AI’s capabilities while rigorously examining its impact on human skill, building a healthcare system that is both technologically advanced and deeply human-centered. Our vigilance today will safeguard the quality of care for generations to come.
