I am an Assistant Professor of Computer Science at the UCLA Samueli School of Engineering and am affiliated with the Bunche Center for African-American Studies. Previously, I was an NYU Faculty Fellow and MIT CSAIL Postdoctoral Fellow working with the wonderful Prof. Marzyeh Ghassemi. I received my PhD from the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where I was very fortunate to be advised by Prof. Yejin Choi and Prof. Franziska Roesner. My work focuses on measuring the factuality, intent, and potential harm of human-written language. At UCLA, I run the Misinformation, AI and Responsible Society (MARS) Lab. You can find out more about my research agenda here. Sadly, I receive too many emails to respond to each one individually, but you can fill out this form if you are an undergrad or master's student at UCLA interested in collaborating with us.
February 2025: Invited talk at Stanford NLP Seminar.
December 2024: Invited talk at the Healthy ML Group at MIT.
November 2024: Guest lectures in Speech Processing (UCLA) and NLP (MIT).
November 2024: I'll see everyone in Miami for EMNLP! We have three accepted papers on LLM-based misinformation interventions, multimodal fact-checking with open models, and evaluating LLMs for mental health support.
October 2024: My research group received funding from the UCLA Initiative to Study Hate and Racial/Social Justice Program for new work on equitable content moderation. Details coming soon.
July 2024: Invited talk at UCLA Summer Institute in Computational Social Science.
April 2024: Guest lecture in Ethical Machine Learning In Human Deployments at MIT.
March 2024: Invited talk at UCLA Statistics and Data Science Seminar.
March 2024: Guest lecture in Computational Ethics at CMU on LLMs and factuality.
January 2024: In Berkeley co-organizing the Scalable Oversight Workshop.
December 2023: Tutorial co-chair for NeurIPS 2023.
November 2023: Honored to be named to the Forbes 30 Under 30 list in Science.
November 2023: Invited talk at NYU CDS Seminar.
November 2023: Guest lecture in NLP at MIT.
November 2023: Invited talk at Northeastern.
November 2023: Presenting at NYU-KAIST Inclusive AI Workshop.
October 2023: Invited talk at Mount Holyoke College.
October 2023: Guest lecture on AI Ethics at Oakton College.
September 2023: Co-teaching my first class as a professor (NYU Data Science Capstone).
September 2023: Thank you to MIT (Generative AI Impact Award) and Cohere for $61,000 in grant support over the summer. I look forward to discussing the funded projects!
August 2023: New paper on LLMs for mental health prediction.
August 2023: New paper and dataset (Socratis) exploring capabilities of multimodal models for understanding emotional reactions to images.
June 2023: Panelist at CHIL 2023 on LLMs for healthcare.
June 2023: Talk at Spotify NYC.
April 2023: Invited talks at UCLA, MIT and Princeton.
March 2023: Guest lectures at the University of Washington (Undergraduate NLP, CSE 447) and Carnegie Mellon University (Computational Ethics, CS 11-830).
March 2023: Invited talks at the University of Chicago, Northeastern and Cornell.
February 2023: Invited talks at the University of Pittsburgh, University of Michigan, UMass Amherst, Boston University and Johns Hopkins.
January 2023: Invited talks at Heriot-Watt and Emory.
October 2022: New paper on testing the robustness of NLI and hate speech classifiers with generated adversaries accepted to EMNLP Findings!
August 2022: Guest lecture in UW Intro to Machine Learning course (CSE 416).
July 2022: Named an outstanding reviewer for NAACL 2022.
July 2022: Socio-Cultural Inclusion co-chair for NAACL 2022.
May 2022: Our team's proposal to investigate misinformation and social biases will be part of a new TACC high-performance computing program initiative.
April 2022: Invited talk at Cornell JEDI dialogues seminar.
February 2022: Two papers accepted to ACL 2022 main conference!
February 2022: DARPA SemaFor keynote talk on Misinfo Reaction Frames.
December 2021: Invited talk at Stanford NLP Seminar.
October 2021: Presenting at MIT EECS Rising Stars Workshop.
July 2021: Co-organizing Safety for E2E Conversational AI at SIGDIAL 2021.
May 2021: Work on evaluating the effectiveness of factuality metrics for summarization (GO FIGURE) accepted to ACL 2021 Findings!
April 2021: New preprint on defending against misinformation.
January 2021: Invited talk at UMass Amherst Rising Stars Seminar.
December 2020: Paragraph-level Commonsense Transformers accepted to AAAI 2021.
December 2020: Presenting at the NeurIPS 2020 Resistance AI Workshop.
October 2020: Presented on Social and Power Implications of Language at the UW colloquium.
September 2020: Presented on summarization with cooperative generator-discriminator networks and detection of implicit social biases in text at BBN Technologies.
July 2020: Presented as part of a Voice Tech Global panel on implicit bias towards the Black community and conversational AI.
Advancing Equality: Harnessing Generative AI to Combat Systemic Racism
Saadia Gabriel, Jessy Xinyi Han, Eric Liu, Isha Puri, Wonyoung So, Fotini Christia, Munther Dahleh, Catherine D’Ignazio, Marzyeh Ghassemi, Peko Hosoi, and Devavrat Shah.
MIT Press (2024).
[Preprint]. MIT Generative AI Impact Award.
Generalization in Healthcare AI: Evaluation of a Clinical Large Language Model
Salman Rahman, Lavender Yao Jiang, Saadia Gabriel, Yindalon Aphinyanaphongs, Eric Karl Oermann, Rumi Chunara.
arXiv 2024.
[Preprint]
Socratis: Are large multimodal models emotionally aware?
Katherine Deng, Arijit Ray, Reuben Tan, Saadia Gabriel, Bryan Plummer, Kate Saenko.
ICCV 2023 WECEIA.
[Paper]
An Empirical Investigation of Machines' Capabilities for Moral Judgment with the Delphi Experiment
Liwei Jiang, Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Jenny Liang, Jesse Dodge, Keisuke Sakaguchi, Maxwell Forbes, Jon Borchardt, Saadia Gabriel, Yulia Tsvetkov, Oren Etzioni, Maarten Sap, Regina Rini, Yejin Choi.
Conditionally accepted to Nature Machine Intelligence 2024.
[Preprint]
Mental-LLM: Leveraging Large Language Models for Mental Health Prediction via Online Text Data
Xuhai Xu, Bingsheng Yao, Yuanzhe Dong, Saadia Gabriel, Hong Yu, James Hendler, Marzyeh Ghassemi, Anind K. Dey, Dakuo Wang.
IMWUT 2024.
[Preprint]
MisinfoEval: Generative AI in the Era of "Alternative Facts"
Saadia Gabriel, Liang Lyu, James Siderius, Marzyeh Ghassemi, Jacob Andreas, Asu Ozdaglar.
EMNLP 2024.
[Paper][Talk][Environment/Code/Data]. MIT Generative AI Impact Award.
Can AI Relate: Testing Large Language Model Response for Mental Health Support
Saadia Gabriel, Isha Puri, Xuhai Xu, Matteo Malgaroli, Marzyeh Ghassemi.
EMNLP 2024 Findings.
[Paper][Talk][Code/Data].
How to Train Your Fact Verifier: Knowledge Transfer with Multimodal Open Models
Jaeyoung Lee, Ximing Lu, Jack Hessel, Faeze Brahman, Youngjae Yu, Yonatan Bisk, Yejin Choi, Saadia Gabriel.
EMNLP 2024 Findings.
[Preprint].
NaturalAdversaries: Can Naturalistic Adversaries Be as Effective as Artificial Adversaries?
Saadia Gabriel, Hamid Palangi, Yejin Choi.
EMNLP 2022 Findings.
[Paper]
Misinfo Reaction Frames: Reasoning about Readers’ Reactions to News Headlines
Saadia Gabriel, Skyler Hallinan, Maarten Sap, Pemi Nguyen, Franziska Roesner, Eunsol Choi, Yejin Choi.
ACL 2022.
[Paper][Data/Models]
ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection
Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, Ece Kamar.
ACL 2022.
[Paper][Data/Models]
GO FIGURE: A Meta Evaluation of Factuality in Summarization
Saadia Gabriel, Asli Celikyilmaz, Rahul Jha, Yejin Choi, Jianfeng Gao.
ACL 2021 Findings.
[Paper]
Discourse Understanding and Factual Consistency in Abstractive Summarization
Saadia Gabriel, Antoine Bosselut, Jeff Da, Ari Holtzman, Jan Buys, Kyle Lo, Asli Celikyilmaz, Yejin Choi.
EACL 2021.
[Paper]
Paragraph-level Commonsense Transformers with Recurrent Memory
Saadia Gabriel, Chandra Bhagavatula, Vered Shwartz, Ronan Le Bras, Maxwell Forbes, Yejin Choi.
AAAI 2021.
[Paper] [Project Page]
Detecting and Tracking Communal Bird Roosts in Weather Radar Data
Zezhou Cheng, Saadia Gabriel, Pankaj Bhambhani, Daniel Sheldon, Subhransu Maji, Andrew Laughlin, David Winkler.
AAAI 2020.
[Paper]
The Risk of Racial Bias in Hate Speech Detection
Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, Noah A. Smith.
ACL 2019. Best Paper Nominee.
[Paper]
Early Fusion for Goal Directed Robotic Vision
Aaron Walsman, Yonatan Bisk, Saadia Gabriel, Dipendra Misra, Yoav Artzi, Yejin Choi, Dieter Fox.
IROS 2019. Best Paper Nominee.
[Paper]