Research Overview

My long-term research is guided by the following question: "How can we create new technologies that bridge the current gaps between humans and AI tools and democratize AI beyond tech-savvy communities?" As a researcher, I work at the intersection of Natural Language Processing (NLP) and Information Retrieval (IR), with a broad interest in the wider areas of Artificial Intelligence and Data Science (see the figure below).

Although Artificial Intelligence (AI) has existed for decades, its widespread accessibility is a relatively recent development, driven largely by systems like ChatGPT that enable natural, human-like interactions. This accessibility offers a tremendous opportunity to democratize AI among the general population. However, today's AI systems still have several key limitations that hinder this democratization, including misalignment with end-users' goals, biased content generation, and a lack of assurance in real-world applications. Accordingly, my current research focuses on addressing these challenges along three thrusts (illustrated in the figure below): 1) Decision Advantage with AI, 2) AI Assurance, and 3) AI Alignment.

[Figure: research areas at the intersection of NLP and IR, with current focus on Decision Advantage with AI, AI Assurance, and AI Alignment]


Current Projects

2025

Decision Advantage with AI

5 minute read


Urgent Decision-Making refers to the process of swiftly selecting an appropriate course of action under conditions of intense time pressure, high stakes, and often incomplete, fragmented, or unreliable information. These scenarios demand not only speed but also precision. Effective decision-making in such contexts hinges on rapidly synthesizing diverse and sometimes conflicting data sources into reliable, holistic summaries that support timely and informed responses. Technically, this task presents multiple challenges: information may arrive in real time from heterogeneous sources (e.g., news reports, social media, radio dispatches), its credibility may be uncertain, and the operational environment may evolve faster than systems can adapt. While recent advances in AI have shown remarkable capabilities in general-purpose summarization, current methods fall short in urgent contexts, where the integrity of available information is often in question and a delay of even a few minutes can result in irreversible consequences. To address these limitations, our group is currently focusing on the following two core research problems.
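
To make this pipeline view of the problem concrete, the sketch below shows one simple way incoming reports could be filtered and ordered by recency and credibility before being handed to a summarizer. This is a minimal, hypothetical illustration rather than our actual system: the Report fields, source names, credibility scores, and thresholds are all assumptions made for the example.

```python
# Hypothetical sketch: filter and rank streaming reports before summarization.
# Source names, credibility values, and thresholds are illustrative only.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Report:
    source: str          # e.g., "news", "social_media", "radio_dispatch"
    timestamp: datetime   # arrival time of the report
    credibility: float    # assumed prior in [0, 1]; estimating this is itself an open problem
    text: str


def select_evidence(reports, now, window_minutes=30, min_credibility=0.5, max_items=5):
    """Keep only recent, sufficiently credible reports, most credible first."""
    recent = [
        r for r in reports
        if now - r.timestamp <= timedelta(minutes=window_minutes)
        and r.credibility >= min_credibility
    ]
    recent.sort(key=lambda r: (r.credibility, r.timestamp), reverse=True)
    return recent[:max_items]


if __name__ == "__main__":
    now = datetime(2025, 1, 1, 12, 0)
    reports = [
        Report("radio_dispatch", now - timedelta(minutes=2), 0.9, "Bridge on Route 9 closed."),
        Report("social_media", now - timedelta(minutes=1), 0.4, "Huge explosion downtown?!"),
        Report("news", now - timedelta(minutes=20), 0.8, "Flooding reported near the river."),
    ]
    evidence = select_evidence(reports, now)
    # The selected evidence would then be passed to a summarization model.
    print("\n".join(f"[{r.source}] {r.text}" for r in evidence))
```

In practice, estimating credibility and adapting the time window are themselves open research questions rather than fixed parameters, which is part of what makes the urgent setting harder than general-purpose summarization.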

AI Assurance

2 minute read


AI assurance is vital to ensuring that systems act reliably and ethically, especially in the generative AI era. As AI gains autonomy in generating text and images and in making decisions, assurance provides confidence that models behave as intended, respect societal norms, and avoid misinformation and bias. It safeguards against misuse, ensures transparency and accountability, and verifies that generative systems uphold accuracy, fairness, and trustworthiness, protecting both users and institutions in an increasingly AI-driven world. To address these challenges, our lab focuses on three distinct themes under AI Assurance.

AI Alignment

3 minute read


On the alignment front, our group has focused on developing innovative methods that improve AI alignment without requiring deep technical expertise from end users. One such idea is Alignment via Conversation, where users engage in a natural dialogue with an AI agent to explain their alignment goals, and the agent handles the rest, including fine-tuning and prompt engineering. We also introduced TELeR, a standardized taxonomy for designing and categorizing prompts in LLM benchmarking; it enables consistent comparisons across studies and improves understanding of how prompt design affects AI performance on complex tasks.
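
As a rough illustration of what TELeR-style prompt levels look like in practice, the sketch below builds prompts for the same task at increasing levels of directive detail. The exact level definitions come from the TELeR paper; the helper function, prompt wording, and example task here are hypothetical and only meant to convey the idea.

```python
# Simplified illustration of the idea behind TELeR-style prompt levels:
# the same task expressed with increasing detail in the directive.
# The prompts and helper below are hypothetical, not our benchmarking code.
def teler_style_prompts(task: str, subtasks: list[str], criteria: str) -> dict[int, str]:
    """Return example prompts at increasing levels of directive detail."""
    return {
        0: task,  # no real directive beyond the raw task
        1: f"Please perform the following task: {task}",
        2: f"Please perform the following task: {task} "
           f"Make sure to cover these aspects: {', '.join(subtasks)}.",
        3: f"Please perform the following task: {task} "
           f"Address each sub-task: {', '.join(subtasks)}. "
           f"Your answer will be evaluated on: {criteria}.",
    }


if __name__ == "__main__":
    prompts = teler_style_prompts(
        task="Summarize the attached meeting transcript.",
        subtasks=["key decisions", "action items", "open questions"],
        criteria="coverage, factual consistency, and brevity",
    )
    for level, prompt in prompts.items():
        print(f"--- Level {level} ---\n{prompt}\n")
```

Making the level of prompt detail explicit in this way is what allows benchmarking studies to report results that are comparable across papers, since the same model can behave very differently under sparse versus richly specified directives.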

Our Sponsors

[Image: sponsor logos]