Research & Ideas

Mutale Nkonde: How to get more truth from social media

A sociologist and former journalist warns that the artificial intelligence behind much of today’s social media is inherently biased, but it’s not too late to do something about it.

It’s time for greater scrutiny of the algorithms that profile users and filter ideas in social media. | iStock/VectorFun

The old maxim holds that a lie spreads much faster than the truth, but it has taken the global reach and lightning speed of social media to lay that truth bare before the world.

One problem of the age of misinformation, says sociologist and former journalist Mutale Nkonde, a fellow at the Stanford Center on Philanthropy and Civil Society (PACS), is that the artificial intelligence algorithms used to profile users and disseminate information to them, truthful or not, are inherently biased against minority groups, who are underrepresented in the historical data on which those algorithms are based.

Now, Nkonde and others like her are holding social media companies' feet to the fire to push them to root out bias from their algorithms. One approach she promotes is the Algorithmic Accountability Act, which would authorize the Federal Trade Commission (FTC) to create regulations requiring companies under its jurisdiction to assess the impact of new and existing automated decision systems. Another approach she favors, called "strategic silence," seeks to deny untruthful users and groups the media exposure that amplifies their false claims and helps them attract new adherents.

Nkonde explores the hidden biases of the age of misinformation in this episode of Stanford Engineering’s The Future of Everything podcast, hosted by bioengineer Russ Altman. Listen and subscribe here.