Responsible Artificial Intelligence governance in oncology

  • #cancer
  • #ethics
  • #machine_learning
  • #artificial_intelligence

Governing Artificial Intelligence in Cancer Care: Building a Responsible Framework

Artificial intelligence (AI) is rapidly transforming healthcare, particularly in oncology. From aiding diagnostics to optimizing treatment plans and streamlining operations, AI offers immense potential. However, its increasing integration brings crucial questions about safety, ethics, and effectiveness that demand careful consideration. While general healthcare AI governance frameworks exist, the unique complexities of cancer care necessitate a tailored approach. This post outlines one Comprehensive Cancer Center's experience in designing and implementing a Responsible AI (RAI) governance model specifically built for the oncology landscape.

Why Responsible AI Governance is Vital in Oncology

AI models in cancer care present unique opportunities, such as predicting tumor behavior, refining treatment pathways, and analyzing intricate data. But they also introduce significant risks. Bias embedded in training data can exacerbate existing health disparities, potentially leading to inequitable access or outcomes. Ensuring AI models perform reliably, transparently, and safely within this complex, rapidly evolving clinical environment is paramount. Without robust governance structures, institutions risk deploying tools that inadvertently harm patients or erode clinician trust. Oncology requires a framework acutely aware of these specific challenges.

Our Approach: Designing and Implementing RAI Governance

Our journey began with an AI Task Force (AITF) to map current AI activities and identify strategic priorities, highlighting the need for a dedicated governance body. This led to the AI Governance Committee (AIGC), intentionally embedded within existing digital governance to balance promoting AI use with ensuring responsibility. Guided by ethical AI principles, the AIGC developed practical tools and processes.

A central component is the iLEAP (Legal, Ethics, Adoption, Performance) lifecycle management framework. iLEAP guides AI models from idea through development, testing, and monitoring, providing structured paths for research, in-house, and acquired models via defined "decision gates." We also created a Model Information Sheet (MIS) as a "nutrition card" detailing model purpose, data, and risks, and a structured Risk Assessment tool balancing risks and mitigation measures. A Model Registry tracks all models institution-wide.
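The post does not reproduce these artifacts, so here is a minimal Python sketch of how an MIS, the iLEAP decision gates, and a Model Registry entry might be represented. Apart from G5, which the post names, the gate labels and all field names are illustrative assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum

class Gate(Enum):
    # Only G5 (post-deployment monitoring) is named in the post;
    # the earlier gate labels are assumptions for illustration.
    G1_INTAKE = "intake"
    G2_RISK_REVIEW = "risk review"
    G3_VALIDATION = "validation"
    G4_DEPLOYMENT = "deployment"
    G5_MONITORING = "monitoring"

@dataclass
class ModelInformationSheet:
    """The 'nutrition card' summarizing a model; field names are illustrative."""
    name: str
    intended_use: str
    origin: str                          # "research", "in-house", or "acquired"
    training_data_summary: str
    known_risks: list[str] = field(default_factory=list)

@dataclass
class RegistryEntry:
    mis: ModelInformationSheet
    gate: Gate = Gate.G1_INTAKE

class ModelRegistry:
    """Single institution-wide source of truth, keyed by model name."""
    def __init__(self) -> None:
        self._entries: dict[str, RegistryEntry] = {}

    def register(self, mis: ModelInformationSheet) -> None:
        self._entries[mis.name] = RegistryEntry(mis)

    def advance(self, name: str, gate: Gate) -> None:
        self._entries[name].gate = gate
```

A registry along these lines turns portfolio reporting (for example, how many models sit at G5) into a simple query rather than a manual audit.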

Putting Governance into Practice: Key Results and Learnings

Over its first year, the AIGC managed a dynamic portfolio, registering and monitoring 26 AI models (including LLMs), overseeing 2 ambient AI pilots, and reviewing 33 nomograms. This demonstrated the feasibility of comprehensive AI governance at scale.

Key takeaways from our implementation:

  • AI demand is growing rapidly, requiring scalable governance processes.
  • Balancing speed and safety is crucial. An "Express Pass" process expedites review for models meeting low-risk/best-practice criteria (e.g., human-in-the-loop, existing QA); it was applied successfully in case studies such as an FDA-approved radiology model and an in-house radiation oncology segmentation model. A triage sketch follows this list.
  • Ongoing monitoring is critical. The G5 stage of iLEAP tracks technical performance (e.g., drift), clinician adoption/trust (using tools like TrAAIT), success metrics, and AI-induced Adverse Events (aiAEs) integrated with safety reporting.
  • Retrospective review of existing tools like nomograms adds value, ensuring alignment with current evidence.
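The post does not spell out the Express Pass criteria or the G5 drift thresholds, so the sketch below assumes a simple rule-based triage and a naive rate-based drift check; the criteria keys and the 5% tolerance are hypothetical.

```python
def review_track(model: dict) -> str:
    """Route an intake to expedited ('express') or full committee review.

    Criteria mirror the low-risk/best-practice examples in the post
    (human-in-the-loop, existing QA); the exact rules are assumptions.
    """
    eligible = (
        model.get("human_in_the_loop", False)
        and model.get("existing_qa_program", False)
        and model.get("risk_rating") == "low"
    )
    return "express" if eligible else "full"

def drift_alert(baseline_rate: float, recent_rate: float, tol: float = 0.05) -> bool:
    """G5-style check: flag when the recent positive-prediction rate
    deviates from the validation baseline by more than `tol`
    (the 5% tolerance is illustrative, not from the post)."""
    return abs(recent_rate - baseline_rate) > tol
```

In a setup like this, drift alerts would feed the same G5 review that tracks adoption surveys (e.g., TrAAIT) and aiAE reports, so one monitoring pass sees all three signals.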

Actionable Insights for Other Institutions

For institutions establishing or maturing RAI governance in oncology, consider these steps:

  • Form a Multidisciplinary Committee: Include diverse expertise (clinical, technical, ethical, legal, operational).
  • Develop a Clear Intake Process: Standardize how new AI models enter the pipeline.
  • Implement a Risk Assessment Framework: Use a structured tool to evaluate model risk and mitigation strategies.
  • Create a Central Model Registry: Maintain a single source of truth for all AI models and their status.
  • Prioritize Lifecycle Management and Monitoring: Governance extends beyond initial deployment; plan for ongoing surveillance and evaluation.
  • Embed with Existing Systems: Integrate AI governance with existing IT, compliance, quality & safety, and research review structures.
  • Secure Leadership Support: Ensure leadership champions the initiative and grants appropriate authority.

Even institutions without extensive in-house development need a pragmatic governance approach focused on acquired vendor solutions. The minimum components are a multidisciplinary committee, an intake process, risk assessment, a model registry, vendor data access, linkage to safety reporting, and leadership support.
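The post describes the Risk Assessment tool only at a high level, so the following is a sketch of a common likelihood-by-severity rubric with mitigations; the scales, thresholds, and tier actions are all assumptions, not the Center's actual rubric.

```python
from dataclasses import dataclass

SEVERITY = {"negligible": 1, "minor": 2, "major": 3, "catastrophic": 4}
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3, "frequent": 4}

@dataclass
class Risk:
    description: str
    severity: str
    likelihood: str              # before mitigation
    mitigation: str
    residual_likelihood: str     # after the mitigation is applied

    def residual_score(self) -> int:
        return SEVERITY[self.severity] * LIKELIHOOD[self.residual_likelihood]

def overall_rating(risks: list[Risk]) -> str:
    """Summarize residual risk across all identified risks;
    thresholds and actions are illustrative."""
    worst = max((r.residual_score() for r in risks), default=0)
    if worst >= 12:
        return "high"      # escalate to full committee review
    if worst >= 6:
        return "moderate"  # additional mitigations required
    return "low"           # candidate for expedited handling
```

Under a rubric like this, a "low" overall rating is what would make a model a candidate for Express Pass style review.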

The Path Forward

Establishing Responsible AI governance in oncology is an evolving process. As AI becomes ubiquitous and deeply embedded as a feature of clinical systems, scaling governance while maintaining safety will be challenging. Refining processes such as tiered review, addressing talent needs, and educating future clinicians are ongoing priorities. Our experience demonstrates that a structured, multidisciplinary, integrated governance model is essential to navigate AI's complexities safely and effectively in cancer care, and ultimately to harness its power to improve patient outcomes responsibly.

Written by: The AI Report