About Me
I lead the Reliable and Secure AI program at ServiceNow Research. My research interests are diverse and span several research areas and applications; the common thread through most of my work is the use of mathematical tools (particularly from mathematical optimization and control theory) to improve the robustness and security of real-world systems.
Recent Projects

Attacks against AI systems
I am interested in attacks that expose vulnerabilities of AI systems and quantify the risks of deploying AI. Examples include work demonstrating how to "steal" parts of production-grade LLMs through public-facing APIs, quantifying the robustness of the Gemini family of models to jailbreaks and prompt injection attacks, and quantifying the risk of catastrophic failure in AI agents.

Human AI collaboration and Human Factors in Aligning AI
Modern AI systems are often best used as assistants. This raises the question: how should AI be designed to collaborate and communicate with humans most effectively? What are the right modes of communication, and should communication be restricted to specific forms to best facilitate this collaboration? In a series of works over the past few years, we have made progress on these questions: developing systems that optimally integrate predictions from human clinicians and AI for breast cancer and TB diagnosis, showing how human perception of the AI and the AI's understanding of human uncertainty affect collaborative performance, and studying the forms human feedback should take when collecting data for AI alignment.

Certifiably Secure AI
AI systems are increasingly deployed in agentic scenarios with access to sensitive information and the ability to take consequential actions on behalf of a user. These deployments create serious privacy and security risks. Anticipating exactly what attackers may do is hard as the possibilities multiply, so this project seeks to develop mathematical guarantees on the worst-case behavior of AI and AI-powered systems. Two recent examples are an approach to certifying the robustness of learning algorithms against adaptive and dynamic data poisoning attacks, and the development of superior correlated noise mechanisms for differentially private machine learning.
Selected Publications
Please visit my Google Scholar page for a complete list of publications; selected representative publications are listed below.
Mentoring & Tutorials
Tutorials & Educational Resources
I enjoy giving tutorials on areas of research I have worked on; some recent ones are listed below:
Mentoring Activities
-
2024
Katherine Collins (PhD student at University of Cambridge)
Krishna Pillutla (Postdoc at Google Research -> Faculty at IIT Madras)
Saiyue Lu (PhD student at UBC Vancouver)
Joshua Kazdan (PhD student at Stanford)
-
2023
Stephan Rabanser (PhD student at University of Toronto)
Rishabh Tiwari (Predoctoral fellow at Google Research -> PhD Student at UC Berkeley)
Suhas Kotha (Undergraduate at CMU -> PhD Student at Stanford)
-
2022
Martin Strobel (PhD student at NUS)
Rishav Chourasia (PhD Student at NUS)
-
2021
Elizabeth Bondi-Kelly (PhD student at USC -> Faculty at University of Michigan)
Harkirat Behl (PhD Student at Oxford -> Researcher at Microsoft Research)
David Stutz (PhD Student at Max Planck -> Researcher at Google DeepMind)
-
2019
Lily Weng (PhD student at USC -> Faculty at UCSD)
Sumanth Dathathri (PhD Student at Caltech -> Researcher at Google DeepMind)
Johannes Welbl (PhD Student at UCL -> Researcher at Google DeepMind)
-
2018
Rudy Bunel (PhD student at Oxford -> Researcher at Google DeepMind)
Chenglong Wang (PhD Student at University of Washington -> Researcher at Microsoft Research)
-
2017
Haoxiang Yang (PhD student at Northwestern -> Faculty at CUHK)
Ben Rapone (PhD Student at Washington State University -> ?)
Thiagarajan Ramachandran (Postdoc at PNNL -> Researcher at PNNL)
Past and current advisees
I have been fortunate to work with some wonderful student collaborators over the years (if you should be on this list but aren't, please reach out):
Past Affiliations
-
Google DeepMind
Researcher (2017-2024)
-
Pacific Northwest National Laboratory
Researcher (2016-2017)
-
Center for Mathematics of Information, Caltech
Postdoctoral Fellow