Hi! I'm Saadia Gabriel.

Announcement: In July 2024, I will join the UCLA Samueli School of Engineering as an Assistant Professor 🌴! I am actively looking for motivated students to start in Fall 2024. If you're interested, apply to UCLA's CS PhD program and mention me as a potential advisor in your application. You can also send me an email (skgabrie@cs.ucla.edu), though I can't promise individual responses and will not consider applications until after December. You can find out more about my research agenda here. At NYU and UCLA, I'll be running the Misinformation, AI and Responsible Society (MARS) Lab.

I am an NYU Data Science Faculty Fellow affiliated with ML2, the Center for Responsible AI (RAI) and the Alignment Research Group (ARG). I am also proud to be affiliated with the Bunche Center for African-American Studies at UCLA. Previously, I worked with the wonderful Prof. Marzyeh Ghassemi as an MIT CSAIL Postdoctoral Fellow. I received my PhD from the Paul G. Allen School of Computer Science & Engineering at the University of Washington. I was very fortunate to be advised by Prof. Yejin Choi and Prof. Franziska Roesner. My work focuses on measuring the factuality and intent of human-written language. Two key dimensions of machine reasoning that excite me are social commonsense reasoning and fairness in NLP. During my PhD, I interned at SRI, the AI2 Mosaic group and MSR.


News

Fall 2024: Invited talk in the Healthy ML Group at MIT.
July 2024: Invited Talk at UCLA Summer Institute in Computational Social Science.
April 2024: Guest lecture in Ethical Machine Learning in Human Deployments at MIT.
March 2024: Invited talk at UCLA Statistics and Data Science Seminar.
March 2024: Guest lecture in Computational Ethics at CMU on LLMs and factuality.
January 2024: In Berkeley for Scalable Oversight Workshop co-organization.
December 2023: Tutorial co-chair for NeurIPS 2023.
November 2023: Honored to be named to the Forbes 30 Under 30 list in Science.
November 2023: Invited talk at NYU CDS Seminar.
November 2023: Guest lecture in NLP at MIT.
November 2023: Invited talk at Northeastern.
November 2023: Presenting at NYU-KAIST Inclusive AI Workshop.
October 2023: Invited talk at Mount Holyoke College.
October 2023: Guest lecture on AI Ethics at Oakton College.
September 2023: Co-teaching my first class as a professor (NYU Data Science Capstone).
September 2023: Thank you to MIT (Generative AI Impact Award) and Cohere for $61,000 of grant support over the summer. I look forward to discussing the funded projects!
August 2023: New paper on LLMs for mental health prediction.
August 2023: New paper and dataset (Socratis) exploring capabilities of multimodal models for understanding emotional reactions to images.
June 2023: Panelist at CHIL 2023 on LLMs for healthcare.
June 2023: Talk at Spotify NYC.
April 2023: Invited talks at UCLA, MIT and Princeton.
March 2023: Guest lectures at the University of Washington (Undergraduate NLP, CSE 447) and Carnegie Mellon University (Computational Ethics, CS 11-830).
March 2023: Invited talks at the University of Chicago, Northeastern and Cornell.
February 2023: Invited talks at the University of Pittsburgh, University of Michigan, UMass Amherst, Boston University and Johns Hopkins.
January 2023: Invited talks at Heriot-Watt and Emory.
October 2022: New paper on testing robustness of NLI and hate speech classifiers with generated adversaries accepted to EMNLP Findings!
August 2022: Guest lecture in UW Intro to Machine Learning course (CSE 416).
July 2022: Named an outstanding reviewer for NAACL 2022.
July 2022: Socio-Cultural Inclusion co-chair for NAACL 2022.
May 2022: Our team's proposal to investigate misinformation and social biases will be part of a new TACC high-performance computing program initiative.
April 2022: Invited talk at Cornell JEDI dialogues seminar.
February 2022: Two papers accepted to ACL 2022 main conference!
February 2022: DARPA SemaFor keynote talk on Misinfo Reaction Frames.
December 2021: Invited talk at Stanford NLP seminar.
October 2021: Presenting at MIT EECS Rising Stars Workshop.
July 2021: Co-organizing Safety for E2E Conversational AI at SIGDIAL 2021.
May 2021: Work on evaluating the effectiveness of factuality metrics for summarization (GO FIGURE) accepted to ACL 2021 Findings!
April 2021: New preprint on defending against misinformation.
January 2021: Invited talk at UMass Amherst Rising Stars Seminar.
December 2020: Paragraph-level Commonsense Transformers accepted to AAAI 2021.
December 2020: Presenting at NeurIPS 2020 Resistance AI Workshop.
October 2020: Presented on Social and Power Implications of Language at UW colloquium.
September 2020: Presented on summarization with cooperative generator-discriminator networks and detection of implicit social biases in text at BBN Technologies.
July 2020: Presented as part of Voice Tech Global panel on implicit bias towards the Black community and conversational AI.


Preprints

Can AI Relate: Testing Large Language Model Response for Mental Health Support
Saadia Gabriel, Isha Puri, Xuhai Xu, Matteo Malgaroli, Marzyeh Ghassemi.
ArXiv 2024.

Advancing Equality: Harnessing Generative AI to Combat Systemic Racism
Saadia Gabriel, Jessy Xinyi Han, Eric Liu, Isha Puri, Wonyoung So, Fotini Christia, Munther Dahleh, Catherine D'Ignazio, Marzyeh Ghassemi, Peko Hosoi, Devavrat Shah.
MIT Press (2024).
[Preprint]. MIT Generative AI Impact Award.

Generative AI in the Era of "Alternative Facts"
Saadia Gabriel, Liang Lyu, James Siderius, Marzyeh Ghassemi, Jacob Andreas, Asu Ozdaglar.
MIT Press (2024).
[Preprint]. MIT Generative AI Impact Award.

Generalization in Healthcare AI: Evaluation of a Clinical Large Language Model
Salman Rahman, Lavender Yao Jiang, Saadia Gabriel, Yindalon Aphinyanaphongs, Eric Karl Oermann, Rumi Chunara.
ArXiv 2024.

Can Machines Learn Morality? The Delphi Experiment
Liwei Jiang, Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Jenny Liang, Jesse Dodge, Keisuke Sakaguchi, Maxwell Forbes, Jon Borchardt, Saadia Gabriel, Yulia Tsvetkov, Oren Etzioni, Maarten Sap, Regina Rini, Yejin Choi.
ArXiv 2022.

Workshop Papers

Socratis: Are large multimodal models emotionally aware?
Katherine Deng, Arijit Ray, Reuben Tan, Saadia Gabriel, Bryan Plummer, Kate Saenko.

Journal Papers

Mental-LLM: Leveraging Large Language Models for Mental Health Prediction via Online Text Data
Xuhai Xu, Bingsheng Yao, Yuanzhe Dong, Saadia Gabriel, Hong Yu, James Hendler, Marzyeh Ghassemi, Anind K. Dey, Dakuo Wang.
IMWUT 2024.

Conference Papers

NaturalAdversaries: Can Naturalistic Adversaries Be as Effective as Artificial Adversaries?
Saadia Gabriel, Hamid Palangi, Yejin Choi.
EMNLP 2022 Findings.

Misinfo Reaction Frames: Reasoning about Readers’ Reactions to News Headlines
Saadia Gabriel, Skyler Hallinan, Maarten Sap, Pemi Nguyen, Franziska Roesner, Eunsol Choi, Yejin Choi.
ACL 2022.

ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection
Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, Ece Kamar.
ACL 2022.

GO FIGURE: A Meta Evaluation of Factuality in Summarization
Saadia Gabriel, Asli Celikyilmaz, Rahul Jha, Yejin Choi, Jianfeng Gao.
ACL 2021 Findings.

Discourse Understanding and Factual Consistency in Abstractive Summarization
Saadia Gabriel, Antoine Bosselut, Jeff Da, Ari Holtzman, Jan Buys, Kyle Lo, Asli Celikyilmaz, Yejin Choi.
EACL 2021.

Paragraph-level Commonsense Transformers with Recurrent Memory
Saadia Gabriel, Chandra Bhagavatula, Vered Shwartz, Ronan Le Bras, Maxwell Forbes, Yejin Choi.
AAAI 2021.
[Paper] [Project Page]

Social Bias Frames: Reasoning about Social and Power Implications of Language
Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A. Smith, Yejin Choi.
ACL 2020.
Also presented at West Coast NLP Summit (WeCNLP) 2020 and awarded Best Paper.
[Paper] [Data]

Detecting and Tracking Communal Bird Roosts in Weather Radar Data
Zezhou Cheng, Saadia Gabriel, Pankaj Bhambhani, Daniel Sheldon, Subhransu Maji, Andrew Laughlin, David Winkler.
AAAI 2020.

The Risk of Racial Bias in Hate Speech Detection
Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, Noah A. Smith.
ACL 2019. Best Paper Nominee.

MathQA: Towards Interpretable Math Word Problem Solving with Operation-Based Formalisms
Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, Hannaneh Hajishirzi.
NAACL 2019.
[Paper] [Data]

Early Fusion for Goal Directed Robotic Vision
Aaron Walsman, Yonatan Bisk, Saadia Gabriel, Dipendra Misra, Yoav Artzi, Yejin Choi, Dieter Fox.
IROS 2019. Best Paper Nominee.