Mission

The KEV.AI Lab (Knowledge Engineering & Validation for AI) advances the infrastructure that makes medical AI trustworthy.

We believe the hardest problems in healthcare AI are not modeling problems. They are knowledge problems: how to acquire expert judgment, how to represent it without distortion, how to verify it survives the passage from expert to algorithm to clinical use, and how to keep it aligned as the system scales.


Intellectual Lineage

Knowledge engineering has a long history in medicine. The term traces back to Edward Feigenbaum’s work at Stanford and the original medical expert system, MYCIN, developed in the 1970s to support clinical decision-making in infectious disease. That tradition — acquiring expert knowledge, encoding it in computable form, and validating it against ground truth — is the intellectual backbone of our lab.

Modern large language models have transformed what’s possible, but the core challenge has not changed. AI systems for medicine still require:

  1. Knowledge acquisition — eliciting and structuring what experts actually know.
  2. Knowledge representation — encoding that expertise in ways machines can reason with.
  3. Knowledge validation — verifying that deployed systems preserve expert judgment across new cases, new contexts, and new users.

Vanderbilt has genuine heritage in this space, from its Department of Biomedical Informatics to its long history of clinical decision support research. KEV.AI builds on that lineage and extends it to the era of foundation models.


Approach

We are a lab of builder-scholars. Our members ship products, file patents, found companies, and negotiate enterprise contracts — and we believe that direct experience of building deployable systems is what distinguishes useful research from performative research.

Our methodology braids three strands:

  • Implementation science — studying what makes AI interventions take root (or fail to) inside real clinical workflows.
  • Ontology and semantic interoperability — building the representational scaffolds that make clinical knowledge computable across systems.
  • Validation at scale — designing evaluation frameworks that move beyond benchmark performance to measure real-world fidelity, fairness, and safety.

Why This Work Matters

Medical AI is expanding faster than the infrastructure to trust it. Systems are being deployed before their knowledge bases are audited, before their outputs are validated against expert consensus, and before their failure modes are understood. The cost of that gap is borne by patients and learners.

KEV.AI exists to close that gap — not as critics from the sidelines, but as builders working in the same pipelines and tools the field is actually using.


Lab Leadership

Kevin W. Sexton, MD, FACS — Director. Helen and Nicholas Abumrad Director, Vice Chair for Innovation, Section of Surgical Sciences. Professor of Surgery and Biomedical Informatics.

Also see the full team →


Affiliations