Computational Ethics, Large Language Models and the Future of NLP

UCLA CS 269, Winter 2025

M/W 4-5:50pm, Boelter Hall 5264

Instructor: Saadia Gabriel
Email: skgabrie@cs.ucla.edu
Office: Eng VI 295A
Office Hours: 1-2pm on Mondays

Course Description: Large language models (LLMs) are becoming ubiquitous in our society. They are used in many real-world applications ranging from content moderation and online advertisement to healthcare. Given their increasing role in what we see, how we think, and what is publicly known about us, it is critical to consider the ethical ramifications of deploying LLM-based systems. This seminar will provide a lens on historical and current computational ethics problems in natural language processing (NLP). We will read and discuss literature on how large-scale language modeling is impacting domains such as privacy, healthcare, and political science. These discussions will be accompanied by guest lectures from domain experts. There will be a group project in which students will develop critical thinking and problem-solving skills by writing short perspective pieces on a future AI policy framework to mitigate ethical risks observed in the literature review.

Schedule:

Date Topic Description Assignment(s)
1/6 Intro   We will go over the syllabus, schedule, reading list and course expectations. There will be an overview of historical challenges.
  • Reading assignment #1, due by 1/12 11:59pm PT.
1/8 Security & Privacy   Guest Lecture: Niloofar Mireshghallah (UW)
  • Reading assignment #2, due by 1/14 11:59pm PT.
1/13 Student Presentations   4 open slots Sign up
1/15 Student Presentations   4 open slots Sign up
1/20 Holiday   No class
1/22 Avoiding Algorithmic Monoculture    Guest Lecture: Ashia Wilson (MIT)
  • Reading assignment #3, due by 1/26 11:59pm PT.
1/27 Student Presentations   4 open slots Sign up
1/29 Auditing Deployed Systems: Healthcare   Guest Lecture: Deb Raji (UC Berkeley)
  • Reading assignment #4, due by 2/2 11:59pm PT.
2/3 Student Presentations   4 open slots
  • Final project group matching
  • Sign up
2/5 - 2/10 No class
  • Final project abstract, due by 2/10 11:59pm PT.
2/12 Factuality & Hallucinations   Guest Lecture: Homa Hosseinmardi (UCLA)
  • Reading assignment #5, due by 2/18 11:59pm PT.
2/17 Holiday   No class
2/19 Student Presentations   4 open slots
  • Reading assignment #6, due by 2/25 11:59pm PT.
  • Sign up
2/24 Harmful Biases in LLMs   Guest Lecture: Maarten Sap (CMU)
  • Fireside Chat Q&A, due by 3/2 11:59pm PT.
2/26 Student Presentations   4 open slots Sign up
3/3 Data Provenance & Accountability   Fireside Chat: Ece Kamar (MSR)
  • Final presentation schedule released, slides due
3/5 Final Presentations Schedule TBD
3/10 Final Presentations Schedule TBD
3/12 Final Presentations Schedule TBD
  • Final write-ups due 3/14 by 11:59pm PT.

Resources:

We will be using Perusall for collaborative paper note-taking and course discussion.

Grading:

Detailed guidelines for assignments will be released later in the quarter.

  • Reading assignments (40%)
    • Students will read the assigned papers and post an original comment or question for each paper on Perusall. (36%)
    • In pairs, students will sign up to present one of the assigned papers and summarize online discussion from Perusall. Each student will only present once. (4%)
  • Project (55%)
    • Students will form groups and write a short (max 5 pg) perspective piece on an AI policy framework for addressing concerns raised during one of the reading assignments.
    • This will be graded based on a mid-quarter abstract (10%), final in-person presentations (15%) and a final write-up (30%).
  • Peer Feedback (5%)
    • Students will be asked to provide short, constructive feedback on their peers' final presentations that can aid in editing project write-ups.

Course Policies:

Late Policy. Out of courtesy to peers, students are expected to complete reading assignments on time, but each student may turn in one reading assignment up to a week late without penalty. Since the final project is a group assignment, there are no late days, but extensions will be considered under extraordinary circumstances. Students are expected to communicate potential presentation conflicts (e.g., illness, conference travel) to the instructor in advance.

Academic Honesty. Apart from the paper presentation, reading assignments are expected to be completed individually, and the instructor will check for overlap between posted comments/questions. For all assignments, any collaborators or other sources of help should be explicitly acknowledged. Violations of academic integrity (please consult the student conduct code) will be handled based on UCLA guidelines.

Accommodations. Our goal is to have a fair and welcoming learning environment. Students should contact the instructor at the beginning of the quarter if they will need special accommodations or have any concerns.

Use of ChatGPT and Other LLM Tools. Students are expected to write first drafts without any LLMs, and all ideas presented must be their own. Students may use LLMs for grammar correction and minimal editing if they add an acknowledgement of this use. Any work suspected to be entirely AI-generated will be given a grade of 0.

Acknowledgements: This course was very much inspired by two UW courses: Yulia Tsvetkov's Ethics in AI course and Amy X. Zhang's Social Computing course. It was also inspired by Marzyeh Ghassemi's Ethical ML in Human Deployments course at MIT.