About

I am a second-year M.Sc. research student in Computer Science at Université de Montréal, Mila and CRCHUM, co-advised by Prof. Bang Liu and Dr. Quoc Nguyen as part of the Applied Computational Linguistics Lab.

I am a researcher in machine learning interpretability, representation engineering, and AI safety. My current work explores how large language models (LLMs) can be aligned and calibrated by leveraging insights from model interpretability, with a focus on concept-based explanations.

I am broadly interested in:

  • Representation learning
  • Concept-based Explainability (TCAV, CAVs, Activation Steering)
  • Actionable Interpretability
  • LLM Alignment

During my undergraduate and graduate studies, I led the UdeM AI undergraduate club, organizing its participation in networking and conference events, and took part in several volunteering initiatives. Outside of research, I am a big fan of racket sports.

Research

Activation Editing for Conditional Molecular Generation

Using concept bottleneck models and activation steering, we enable conditional molecular generation by directly manipulating internal LLM representations. We show that our method significantly improves the alignment of LLM generations with the conditioned properties. Read more
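The core idea of activation steering can be illustrated with a minimal numpy sketch: add a scaled concept direction to a hidden state at some layer. The function name, the fixed scale `alpha`, and the random toy vectors here are illustrative assumptions, not the actual implementation used in this work.

```python
import numpy as np

def steer(hidden: np.ndarray, concept_dir: np.ndarray, alpha: float = 4.0) -> np.ndarray:
    """Add a scaled unit-norm concept direction to a hidden state,
    nudging the representation toward the target property."""
    unit = concept_dir / np.linalg.norm(concept_dir)
    return hidden + alpha * unit

rng = np.random.default_rng(0)
hidden = rng.normal(size=(1, 768))    # toy hidden state (batch of one)
concept_dir = rng.normal(size=768)    # toy direction for a target property
steered = steer(hidden, concept_dir)  # shifted by alpha along the unit direction
```

In practice the steering vector would be derived from model activations (e.g. via a concept bottleneck or probe) and injected at a chosen transformer layer during generation.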


Calibrating Large Language Models with Concept Activation Vectors for Medical QA

We propose a novel framework for calibrating LLM uncertainty through Concept Activation Vectors. This improves the safety and calibration of LLMs in high-stakes medical decision-making. Read more
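The concept-activation-vector idea behind this line of work can be sketched in a few lines of numpy: fit a direction that separates activations on concept examples from activations on random examples, then score new activations by projecting onto it. This sketch uses a simplified mean-difference CAV on synthetic data; the helper name `concept_score` and this construction are illustrative assumptions, not the framework proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy activations: "concept" examples are shifted along one hidden axis.
true_dir = np.zeros(64)
true_dir[0] = 1.0
concept_acts = rng.normal(size=(50, 64)) + 3.0 * true_dir
random_acts = rng.normal(size=(50, 64))

# Mean-difference CAV: a simplified stand-in for the linear-probe CAVs
# used in TCAV-style analyses.
cav = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
cav /= np.linalg.norm(cav)

def concept_score(activation: np.ndarray, cav: np.ndarray) -> float:
    """Projection onto the CAV; larger values indicate stronger concept
    presence, which can serve as one signal for calibrating confidence."""
    return float(activation @ cav)
```

A calibration pipeline could then relate such concept scores on a model's internal states to the empirical accuracy of its medical answers.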


Atypicality-Aware Calibration of LLMs for Medical QA

We propose a novel method for eliciting LLM confidence in medical QA by leveraging insights from atypical medical presentations. Read more