UCLA CS 269, Winter 2025
M/W 4-5:50pm, Boelter Hall 5264
Instructor: Saadia Gabriel
Email: skgabrie@cs.ucla.edu
Office: Eng VI 295A
Office Hours: 1-2pm on Mondays
Course Description: Large language models (LLMs) are becoming ubiquitous in our society. They are used in many real-world applications, ranging from content moderation and online advertisement to healthcare. Given their increasing role in what we see, how we think, and what is publicly known about us, it is critical to consider the ethical ramifications of deploying LLM-based systems. This seminar will provide a lens on historical and current computational ethics problems in natural language processing (NLP). We will read and discuss literature about how large-scale language modeling is impacting domains such as privacy, healthcare, and political science. These discussions will be accompanied by guest lectures from domain experts. There will be a group project in which students develop critical thinking and problem-solving skills by writing short perspective pieces on a future AI policy framework to mitigate ethical risks observed in the literature review.
Schedule:
Date | Topic | Description | Assignment(s) |
---|---|---|---|
1/6 | Intro | We will go over the syllabus, schedule, reading list and course expectations. There will be an overview of historical challenges. | |
1/8 | Security & Privacy | Guest Lecture: Niloofar Mireshghallah (UW) | |
1/13 | Student Presentations | 4 open slots | Sign up |
1/15 | Student Presentations | 4 open slots | Sign up |
1/20 | Holiday | No class | |
1/22 | Avoiding Algorithmic Monoculture | Guest Lecture: Ashia Wilson (MIT) | |
1/27 | Student Presentations | 4 open slots | Sign up |
1/29 | Auditing Deployed Systems: Healthcare | Guest Lecture: Deb Raji (UC Berkeley) | |
2/3 | Student Presentations | 4 open slots | Sign up |
2/5 - 2/10 | No class | | |
2/12 | Factuality & Hallucinations | Guest Lecture: Homa Hosseinmardi (UCLA) | |
2/17 | Holiday | No class | |
2/19 | Student Presentations | 4 open slots | Sign up |
2/24 | Harmful Biases in LLMs | Guest Lecture: Maarten Sap (CMU) | |
2/26 | Student Presentations | 4 open slots | Sign up |
3/3 | Data Provenance & Accountability | Fireside Chat: Ece Kamar (MSR) | |
3/5 | Final Presentations | Schedule TBD | |
3/10 | Final Presentations | Schedule TBD | |
3/12 | Final Presentations | Schedule TBD | |
Resources:
We will be using Perusall for collaborative paper note-taking and course discussion.
Grading:
Detailed guidelines for assignments will be released later in the quarter.
Course Policies:
Late Policy. Out of courtesy to peers, students are expected to complete reading assignments on time, but each student may turn in one reading assignment up to a week late without penalty. Since the final project is a group assignment, there are no late days, but extensions will be considered under extraordinary circumstances. Students are expected to communicate potential presentation conflicts (e.g., illness, conference travel) to the instructor in advance.
Academic Honesty. Reading assignments are expected to be completed individually outside of the paper presentation and the instructor will check for overlap between posted comments/questions. For all assignments, any collaborators or other sources of help should be explicitly acknowledged. Violations of academic integrity (please consult the student conduct code) will be handled based on UCLA guidelines.
Accommodations. Our goal is to have a fair and welcoming learning environment. Students should contact the instructor at the beginning of the quarter if they will need special accommodations or have any concerns.
Use of ChatGPT and Other LLM Tools. Students are expected to first draft writing without any LLMs, and all ideas presented must be their own. Students may use LLMs for grammar correction and minimal editing, provided they acknowledge this use. Any work suspected to be entirely AI-generated will be given a grade of 0.
Acknowledgements: This course was very much inspired by two UW courses: Yulia Tsvetkov's Ethics in AI course and Amy X. Zhang's Social Computing course. It was also inspired by Marzyeh Ghassemi's Ethical ML in Human Deployments course at MIT.