CDS STUDENT SEMINAR SERIES

Join us every Friday at the Boston University Faculty of Computing & Data Sciences (CDS) for cutting-edge research presentations by CDS PhD students across data science, AI, and beyond.

Fridays • 12–1 PM • Duan Family Center for Computing & Data Sciences 1646

What We Do

We are a student-run initiative within the PhD program at the Boston University Faculty of Computing & Data Sciences, dedicated to fostering knowledge sharing and academic growth across our community.

Our Mission?

Create a space where students can explore, present, and discuss the research topics they're passionate about in a supportive, collaborative environment.

Every Friday from 12:00 to 1:00 PM in CDS 1646, CDS researchers present on work that excites them—whether it's their current research, an inspiring paper they've discovered, or a hands-on workshop in their area of expertise. From artificial intelligence to the biological sciences, our seminars cover the full breadth of computing and data sciences.

Meet the Organizers

Freddy Reiber

PhD student in CDS studying how society influences technology and how technology influences society.

Lingyi Xu

PhD student in CDS addressing the challenge of modality missingness in multimodal learning across visual, tabular, and textual data.

Yan (Stella) Si

PhD student in CDS working at the intersection of cognitive science and AI.

COMING UP

AI • Friday, February 13, 2026

Quantitative evaluation frameworks for the trustworthiness of large language model outputs in medical domains

by Yi Liu

Although large language model (LLM)-based tools have become increasingly popular, deployment in real-world clinical settings, where the cost of diagnostic errors is substantial, demands a much higher level of precision and reliability. Clinicians currently remain skeptical about relying on LLMs for clinical decision-making, largely due to the lack of rigorous evidence supporting individual model outputs and a limited understanding of how such outputs are generated. Even when an LLM produces a correct answer, clinicians often find it difficult to trust the result without transparent justification. Addressing this trust gap is therefore an urgent need. In her first project, Yi proposes a scalable, entity-centric evaluation framework for medical question answering that assesses the clinical alignment and informativeness of LLM-generated responses by tracing and verifying clinically relevant medical entities within patient-specific contexts. This framework enables more faithful and interpretable evaluation of medical LLM outputs beyond surface-level correctness. Building on this work, Yi's ongoing research explores interpretability methods for analyzing the decision flow of LLMs, examining how patient information is processed through internal model representations and transformed into diagnostic summaries or clinical decisions. Together, these efforts aim to improve the transparency and trustworthiness of LLMs for clinical applications.
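As a rough illustration of the entity-centric idea (this is not Yi's actual framework; the keyword-based extractor and the precision/recall scoring below are hypothetical stand-ins for a real clinical NER pipeline), one can score a model answer by how well its clinically relevant entities overlap those in a reference answer:

```python
# Toy sketch of entity-centric evaluation: score an LLM answer by the overlap
# of clinically relevant entities with a reference answer. The extractor here
# is a simple keyword matcher; a real system would use a clinical NER model.

def extract_entities(text: str, vocabulary: set[str]) -> set[str]:
    """Return the clinically relevant entities found in the text (toy keyword match)."""
    lowered = text.lower()
    return {term for term in vocabulary if term in lowered}

def entity_scores(answer: str, reference: str, vocabulary: set[str]) -> dict[str, float]:
    """Precision loosely tracks clinical alignment; recall tracks informativeness."""
    pred = extract_entities(answer, vocabulary)
    gold = extract_entities(reference, vocabulary)
    tp = len(pred & gold)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    return {"precision": precision, "recall": recall}

# Example with a hypothetical mini-vocabulary of medical entities.
vocab = {"hypertension", "metformin", "type 2 diabetes", "ace inhibitor"}
ref = "Patient with type 2 diabetes and hypertension; continue metformin."
ans = "Likely type 2 diabetes; recommend metformin and monitor hypertension."
print(entity_scores(ans, ref, vocab))  # {'precision': 1.0, 'recall': 1.0}
```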

Location: CDS 1646 • Time: 12:00–1:00 PM

Get Involved

Ready to join our community of learners, researchers, and innovators?