Thursday January 9, 2025 8:55am - 10:10am PST
WRC
Thanks to significant advances in machine learning and data science, decision makers are embracing advanced algorithms and statistical models to assist with or fully automate difficult tasks across society. With examples ranging from advertising and finance to healthcare and criminal justice, machine learning tools have become ubiquitous. While these tools often deliver significant improvements in speed and performance, they come with increased complexity that can make the decision-making process opaque and difficult to evaluate. How did your model make that prediction? Why? Are the decisions it makes fair? How can we quantify fairness? In this activity, we will discuss real-world examples of automated algorithmic decision making, along with the practical and ethical problems these systems can face. We will explore the ideas of bias, fairness, safety, and interpretability.
Facilitators
Ben Seiler
Stanford University
Ben Seiler is a postdoctoral research fellow in the Department of Epidemiology and Population Health at the Stanford School of Medicine. He specializes in developing and deploying interpretable statistical learning methods. As part of the Stanford Human Trafficking Data Lab, Ben currently...