As predictive algorithms become widespread, how do we approach machine bias?
Jul 25, 2017
Ideally, predictive algorithms are stone-cold, rational, big-data-crunching tools that can assist humans in their flawed decision-making. The caveat: they often reflect the biases of their creators.
Maschinenmensch (machine-human) on display at the preview of the Science Museum's Robots exhibition in London, England, on February 7, 2017. (Ming Yeung/Getty Images)

According to Laura Hudson in her FiveThirtyEight piece “Technology Is Biased Too. How Do We Fix It?”, algorithmic bias is a growing problem, as organizations increasingly use algorithms as a factor in deciding whether to give someone a loan, offer someone a job, or even whether to convict a defendant or grant them parole.

But fixing these algorithms presents a philosophical quandary: how do we define fairness? And if biases are impossible to avoid entirely, which ones are less harmful than others?

So how are problematic algorithms already being used today? How, if at all, can they be made “fair”? And how can we use algorithms responsibly?
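The definitional quandary can be made concrete. The toy Python sketch below (the data is invented for illustration, not drawn from the segment) compares two fairness criteria that researchers commonly discuss: demographic parity (both groups receive positive decisions at the same rate) and equal opportunity (both groups' truly qualified members are approved at the same rate). A single classifier can satisfy one while clearly violating the other, which is why “fair” has no single technical meaning.

```python
def positive_rate(preds):
    """Fraction of people in a group who received a positive decision."""
    return sum(preds) / len(preds)

def true_positive_rate(labels, preds):
    """Among truly qualified people (label == 1), fraction approved."""
    qualified = [p for y, p in zip(labels, preds) if y == 1]
    return sum(qualified) / len(qualified)

# Hypothetical toy data for two demographic groups (illustrative only):
# labels = 1 if the person was actually qualified, preds = 1 if approved.
labels_a, preds_a = [1, 1, 0, 0], [1, 1, 1, 0]
labels_b, preds_b = [1, 0, 0, 0], [1, 0, 0, 0]

# Demographic parity gap: group A is approved 75% of the time, group B 25%.
dp_gap = abs(positive_rate(preds_a) - positive_rate(preds_b))

# Equal-opportunity gap: every qualified person in BOTH groups was approved.
eo_gap = abs(true_positive_rate(labels_a, preds_a)
             - true_positive_rate(labels_b, preds_b))

print(f"demographic parity gap: {dp_gap}")  # large gap: 0.5
print(f"equal opportunity gap:  {eo_gap}")  # no gap: 0.0
```

Here the classifier is perfectly “fair” by the equal-opportunity measure yet badly skewed by the demographic-parity measure, so choosing which definition to enforce is itself a value judgment, not a purely technical one.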

Guest:

Suresh Venkatasubramanian, professor of computing at the University of Utah and a member of the board of directors for the ACLU Utah; he studies algorithmic fairness

Credits
Host, AirTalk
Host, Morning Edition, AirTalk Friday, The L.A. Report A.M. Edition
Senior Producer, AirTalk with Larry Mantle
Producer, AirTalk with Larry Mantle
Producer, AirTalk with Larry Mantle
Associate Producer, AirTalk & FilmWeek
Associate Producer, AirTalk
Apprentice News Clerk, AirTalk
Apprentice News Clerk, FilmWeek