Using artificial intelligence to improve health

The AI Report
Daily AI, ML, LLM and agents news
In a world increasingly shaped by artificial intelligence, the true measure of its power lies not just in its intelligence, but in its ability to serve humanity with trustworthiness and equity. This ethos defines the work of Nikhil Vytla, a Harvard T.H. Chan School of Public Health alumnus, whose journey into AI for social good began with a deeply personal motivation: helping his grandfather, who lost his eyesight to age-related macular degeneration, access the news he loved.
From Personal Inspiration to Public Health Innovation
Vytla's first foray into software development was a mobile app that translated news articles and read them aloud, directly addressing his grandfather's challenge. This early success solidified his belief that technology could solve real human problems, especially for vulnerable populations who stand to gain the most from accessible and accurate tools. This passion led him to pursue computer science and statistics, eventually culminating in a Master of Science in Health Data Science.
During his undergraduate studies, Vytla co-founded a chapter of Computer Science + Social Good, where student teams collaborated with nonprofits and startups. A standout project involved developing virtual reality (VR) software for immunocompromised children confined to isolated hospital wings. Through VR headsets, these children could embark on virtual field trips, explore underwater worlds, and play interactive games. Vytla recounts, "Seeing a child’s face light up as they virtually swam with dolphins while confined to a hospital bed—that’s when you realize technology isn’t just about algorithms and code. It’s about restoring joy and possibility to people when they need it most."
The Imperative of Trustworthy AI in Healthcare
After college, Vytla’s career took a critical turn into AI explainability, first at TruEra and now at Snowflake. He explains the challenge: "AI is a black box. How can you know what features or influences are most important to the model in terms of making a decision?" This question is particularly vital in healthcare, where AI decisions can literally mean the difference between life and death. Understanding how AI models arrive at their conclusions is essential for building trust, and even more critically, for ensuring fairness.
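To make the "black box" question concrete, here is a minimal sketch of one widely used explainability technique, permutation importance: shuffle each input feature and measure how much the model's accuracy suffers. The data, model, and feature names are invented for illustration; this is not TruEra's or Snowflake's actual tooling.

```python
# Minimal sketch: asking "which features matter most to the model?" via
# permutation importance. A toy illustration of explainability in spirit,
# not any production tool; the data and feature names are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["age", "systolic_bp", "heart_rate", "lab_marker"]
X = rng.normal(size=(1000, 4))
# Synthetic outcome driven mainly by the first two features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, mean_drop in sorted(zip(feature_names, result.importances_mean),
                              key=lambda pair: -pair[1]):
    print(f"{name}: accuracy drop {mean_drop:.3f}")
```

Per-patient attribution methods such as SHAP ask the same question for an individual prediction rather than for the model as a whole.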
The urgency of trustworthy AI in healthcare is underscored by troubling findings: studies have shown that some medical AI systems exhibit racial bias, such as algorithms underestimating pain levels in Black patients, or diagnostic tools trained primarily on data from white populations producing less accurate diagnoses for people of color. Vytla’s goal is clear: "My goal isn’t just to make AI smarter—it’s to make AI that works equitably for everyone. I want to bridge the gap between cutting-edge AI research and practical tools that could actually improve patient care in real clinical settings."
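Equity claims like these can be tested directly. A minimal sketch of such an audit, on entirely synthetic data (not drawn from any of the studies above), is to compare a model's false-negative rate, meaning how often it misses a real condition, across demographic groups:

```python
# Minimal fairness-audit sketch on invented data: compare a model's
# miss rate (false-negative rate) across hypothetical demographic groups.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
group = rng.choice(["A", "B"], size=n)      # hypothetical demographic groups
y_true = rng.integers(0, 2, size=n)         # condition truly present / absent
# Simulate a model that under-detects the condition in group B
p_detect = np.where(group == "A", 0.9, 0.7)
y_pred = np.where((y_true == 1) & (rng.random(n) < p_detect), 1, 0)

for g in ["A", "B"]:
    mask = (group == g) & (y_true == 1)
    fnr = 1 - y_pred[mask].mean()           # fraction of true cases missed
    print(f"group {g}: false-negative rate {fnr:.2f}")
# A large gap between groups is a red flag that the model is not working
# equitably and needs retraining, recalibration, or better data.
```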
Revolutionizing Trauma Care with AI
At Harvard Chan School, Vytla's capstone project focused on a pressing issue in emergency medicine: improving diagnoses for patients with traumatic injuries. Clinical decision-making in trauma care often suffers from subjectivity and variability, potentially leading to missed injuries, delayed treatments, or inconsistent outcomes. Vytla developed an AI model designed to complement a surgeon’s expertise, not override it.
The model incorporates several kinds of input: patient demographics, physical exam results, and, most notably, medical imaging reports. Because clinicians use diverse terminology, the model's first step is to convert this free-text data into standardized diagnostic terms. From this combined input, the AI generates a list of potential missed diagnoses and recommends follow-up tests. Crucially, the model was designed to "err on the side of safety." Vytla explains, "In trauma care, false positives are far preferable to false negatives. An extra CT scan might be inconvenient, but a missed internal injury could be fatal. We deliberately designed the system to be cautious—better to be thorough than to miss something critical." This cautious design shows how AI can improve diagnostic accuracy and patient safety in practice.
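As a rough illustration of those two design ideas, the sketch below pairs a tiny invented term map with a toy classifier whose decision threshold is lowered until it catches at least 95% of true injuries. None of this is the capstone's actual code, data, or vocabulary.

```python
# Sketch of two ideas from the capstone description, with invented data:
# (1) normalize free-text imaging findings to standardized terms, and
# (2) pick a cautious decision threshold so true injuries are rarely missed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

# (1) Map clinicians' varied phrasing onto standardized diagnostic terms.
TERM_MAP = {
    "splenic lac": "splenic_laceration",
    "laceration of spleen": "splenic_laceration",
    "ptx": "pneumothorax",
    "collapsed lung": "pneumothorax",
}

def normalize(report: str) -> set[str]:
    report = report.lower()
    return {term for phrase, term in TERM_MAP.items() if phrase in report}

print(normalize("CT: small PTX, possible laceration of spleen"))
# {'pneumothorax', 'splenic_laceration'}

# (2) Train a toy injury classifier, then choose the highest threshold that
# still catches >= 95% of true injuries: extra follow-up scans are accepted
# as the price of not missing an internal injury.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))  # stand-in for demographics + exam + imaging terms
y = (X @ np.array([1.0, 0.8, 0.0, 0.0, 0.3]) + rng.normal(size=2000) > 1).astype(int)
probs = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]

for threshold in np.arange(0.5, 0.0, -0.01):
    if recall_score(y, (probs >= threshold).astype(int)) >= 0.95:
        break
print(f"cautious threshold: {threshold:.2f}; "
      f"{(probs >= threshold).mean():.1%} of patients flagged for follow-up")
```

Lowering the threshold trades more false alarms for fewer missed injuries, which is exactly the asymmetry Vytla describes.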
Shaping the Future of AI Transparency
Vytla now continues this work at Snowflake, focusing on the trustworthiness of large language models (LLMs) such as ChatGPT and Claude. He is developing methods to trace how these models reach their conclusions, so that they can cite sources and express uncertainty, making their responses more transparent and verifiable. He probes fundamental questions: "Do models say what they’re really thinking—and what does it actually mean for a model to think?"
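One simple form a source-grounding check can take is scoring each sentence of a model's answer against the passage it cites and flagging sentences the source does not support. The sketch below uses plain word overlap on invented text; production systems use far more sophisticated measures, but the shape of the check is the same.

```python
# Minimal groundedness-check sketch (not Snowflake's or TruEra's actual
# implementation): score each sentence of a model's answer by its word
# overlap with the cited source, and flag apparently unsupported claims.
import re

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z]+", text.lower()))

def groundedness(sentence: str, source: str) -> float:
    """Fraction of the sentence's words that also occur in the source."""
    sent = tokens(sentence)
    return len(sent & tokens(source)) / len(sent) if sent else 0.0

source = ("The trial enrolled 400 patients and found a 12% reduction "
          "in readmissions with the new discharge protocol.")
answer = [
    "The trial found a 12% reduction in readmissions.",
    "The protocol also cut mortality in half.",   # unsupported claim
]

for sentence in answer:
    score = groundedness(sentence, source)
    verdict = "supported" if score >= 0.7 else "needs citation / uncertain"
    print(f"{score:.2f}  {verdict}: {sentence}")
```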
Nikhil Vytla’s journey exemplifies the transformative potential of AI when driven by a commitment to social good and ethical design. His work offers a blueprint for developing AI systems that are not only intelligent but also equitable, transparent, and genuinely beneficial for all, especially in critical fields like public health and medicine. The future of healthcare AI hinges on these principles, promising a landscape where technology truly serves humanity.
