How do you give voters a greater voice in their local governments?
In countless towns across the United States, municipal budgets and other important decisions are made by a relatively small number of elected officials who have been swayed by well-connected interest groups.
But eight years ago, a team of Stanford researchers launched an experiment to see if there was a way to give residents of cities and towns a more direct voice in how their local governments spend money.
The result was the Participatory Budgeting Platform, in which residents are typically asked how they would allocate a specific pot of money among a list of competing potential projects – a new playground, for example, versus new bicycle lanes or a new library. People must keep their preferences within a budget, so they have to weigh the tradeoffs in much the same way that elected officials do.
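In spirit, the tallying resembles a knapsack vote: each ballot is a set of projects whose combined cost fits the budget, and the most-approved projects are funded in order until the money runs out. The sketch below is an illustration of that general idea only, not the platform's actual algorithm; the project names, costs, and ballots are hypothetical.

```python
# Minimal sketch of knapsack-style participatory budgeting aggregation.
# Project names, costs, and ballots are invented for illustration.

BUDGET = 500_000  # total pot of money to allocate (hypothetical)

projects = {              # project -> estimated cost
    "playground": 200_000,
    "bike_lanes": 150_000,
    "library_upgrade": 300_000,
}

# Each ballot is the set of projects a resident chose; the platform would
# enforce that each ballot's combined cost stays within BUDGET.
ballots = [
    {"playground", "bike_lanes"},
    {"library_upgrade"},
    {"playground", "bike_lanes"},
]

def tally(ballots, projects, budget):
    """Fund the most-approved projects, in order, until the budget runs out."""
    votes = {p: sum(p in b for b in ballots) for p in projects}
    funded, remaining = [], budget
    for p in sorted(projects, key=votes.get, reverse=True):
        if projects[p] <= remaining:
            funded.append(p)
            remaining -= projects[p]
    return funded

print(tally(ballots, projects, BUDGET))  # -> ['playground', 'bike_lanes']
```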
Today, cities and towns across the nation are embracing it. More than 100 local governments have used the Stanford platform in 120 participatory budgeting elections, from New York City and Chicago to Oakland, California. Those surveys have been used to allocate about $75 million, and the numbers keep climbing.
But now, Ashish Goel, a professor of management science and engineering, and his collaborators are wrestling with new questions: How much can cities do to seek equitable participation, by reaching out to minorities and older adults who are often underrepresented? Should they worry about an unbalanced result if one demographic group is so passionate about an issue that it seems to overwhelm everybody else?
These issues were on dramatic display in Austin, Texas, which enlisted Goel’s group in 2020 to carry out an unusually ambitious effort in participatory budgeting.
Austin officials surveyed residents about the city’s entire budget – each source of tax revenue as well as each department budget, from the police to public works. How much more – or less – were people willing to pay in taxes? What kind of taxes? Should particular agencies – police, fire, schools, public works – get either more money or less? How much?
As it happened, two major events coincided to produce unusually lopsided results.
The first was the COVID-19 pandemic, which forced Austin officials and the Stanford team to shift their focus from in-person to online voting. That raised new challenges about engaging groups that are often underrepresented, such as minority communities and older adults.
With in-person voting, Austin and many other local governments have a host of strategies to reach underrepresented groups. Those include sending poll workers to particular neighborhoods and holding voting events in libraries, churches and senior citizen centers.
Those strategies couldn’t work during the pandemic lockdown, but online efforts at outreach hadn’t been effective in previous budget surveys. In the North Carolina cities of Greensboro and Durham, the Stanford team had tried reaching Black residents through targeted Facebook ads. Yet it actually cost more to get additional Black responses through targeted ads than through advertising to the public at large.
In Austin, white residents voted in disproportionately high numbers relative to their share of the population, while Latinx and Black residents voted in disproportionately low numbers. Likewise, people older than 45 were deeply underrepresented.
“What would really move the needle is to get the social media platforms involved,” says Goel. “Advertising agencies know exactly how to do it.”
The second major event was the murder on May 25, 2020, of George Floyd, a Black man, by a white police officer in Minneapolis. Floyd’s death sparked angry protests and calls for “defunding” the police in Austin and many other cities.
In the two weeks after Floyd’s death, voting in Austin’s survey soared to almost unimaginable levels – from fewer than 40 responses per day before May 25 to more than 3,300. Younger residents between 18 and 34 accounted for 72% of the responses, more than double that group’s share of the city’s population.
The result was a thundering call to slash the police budget. Nearly half of the respondents called for a reduction of 5%, the maximum reduction allowed on the survey. In the aggregate, residents called for a reduction of 2.88%, or about $12.5 million.
The Austin City Council actually went further, cutting the police budget by $21 million and shifting another $130 million to civilian functions like forensic science and alternative approaches to public safety. It isn’t clear how much the survey played a role in the Austin City Council’s decision. But Goel says the heavily disproportionate voting highlighted new questions about what really constitutes a “representative” result.
“It is not clear what the best approach is. Should we go out of our way to solicit a demographically balanced sample, even if that dilutes the voices of some groups?” he asked. “Or should we respect the passion that a motivated and mobilized section of society brings to an issue, and not make any attempt to dilute their voices by demographic rebalancing?”
Given the lopsided numbers of younger people voting, did the results accurately represent the views of the overall population? Or were the results distorted by an organized campaign through social media?
To see if the disproportionate share of young people had thrown off the results, the Stanford team supplemented the raw results with age-adjusted tables that brought the results more closely in line with the city’s age distribution. Not much changed: The age-adjusted tally called for cutting the police budget by 2.24% instead of 2.88%. In both cases, the police budget was the only one that people wanted to cut.
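The age adjustment amounts to reweighting each response so that every age bracket counts in proportion to its share of the city's population rather than its share of the sample, a standard post-stratification step. The sketch below shows that reweighting with made-up brackets and numbers, not the Austin data.

```python
# Sketch of age-based post-stratification: reweight responses so each age
# bracket contributes in proportion to its population share.
# Brackets, shares, and responses below are illustrative, not Austin's data.

population_share = {"18-34": 0.33, "35-44": 0.18, "45+": 0.49}

# Each response: (age bracket, proposed change to the police budget in percent)
responses = [
    ("18-34", -5.0), ("18-34", -5.0), ("18-34", -3.0),
    ("35-44", -2.0), ("45+", 1.0),
]

# Weight = population share / sample share for each bracket.
n = len(responses)
sample_counts = {b: sum(1 for a, _ in responses if a == b) for b in population_share}
weights = {b: population_share[b] / (sample_counts[b] / n) for b in population_share}

raw = sum(v for _, v in responses) / n
adjusted = (sum(weights[a] * v for a, v in responses)
            / sum(weights[a] for a, _ in responses))

print(f"raw mean change: {raw:.2f}%, age-adjusted: {adjusted:.2f}%")
# -> raw mean change: -2.80%, age-adjusted: -1.30%
```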
“I’m still not sure we did the right thing by creating the age-adjusted tables, even though in this case it didn’t change the results,” Goel says.
Perhaps surprisingly, the Stanford team found little evidence of a centralized mobilization campaign by Black or progressive groups. Although there was passion on social media about Black Lives Matter and police reform in Austin, Lodewijk Gelauff, a doctoral student at Stanford who oversaw much of the work, said he didn’t see signs that the voting surge had been directed from specific sites.
Indeed, there wasn’t much time to organize a campaign. The voting rate began to skyrocket the day after Floyd’s death. The team also concluded, by checking zip codes and looking at internet addresses, that most of the voting did indeed come from people in Austin rather than from outside the city.
“The voting was triggered by a lot of tweets and retweets, but as far as we could tell they weren’t coming from a single source,” says Gelauff. “It seemed more spontaneous than organized.”
He added that intense emotions and disproportionate turnout among some groups may simply be natural features of participatory budgeting. “Equitable participation doesn’t mean that everything has to be evenly represented,” said Gelauff. “Some people care more about certain things, and that’s important to know.”
Goel says there may be a better alternative: Instead of seeking a statistically more balanced result, he and his colleagues are looking at ways to analyze lumpy “clusters” of opinions.
“Technologically, we are excited about finding what we call ‘opinion minorities,’” he says. “We can report not just opinions segregated by demographics, but also identify and report on opinion clusters that don’t take demographics into account. These can become part of the decision-making process, which would help ensure that we don’t miss minority voices.”
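One plausible way to surface such opinion clusters is to group the response vectors themselves – each resident's proposed changes across departments – and then inspect the smaller groups, without touching any demographic fields. The sketch below uses k-means clustering from scikit-learn on invented response vectors; it is an assumption about how such an analysis might look, not the team's method.

```python
# Sketch of finding "opinion minorities" by clustering response vectors
# (each resident's proposed budget changes across departments), ignoring
# demographics entirely. All data below is invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

# Columns: proposed % change for (police, parks, transit); one row per respondent.
X = np.array([
    [-5.0,  2.0,  3.0],
    [-5.0,  3.0,  2.0],
    [-4.0,  2.0,  2.0],
    [ 1.0, -1.0,  0.0],
    [ 2.0,  0.0, -1.0],
    [ 0.5,  4.0, -4.0],   # a small group with a distinct spending pattern
    [ 0.0,  5.0, -5.0],
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Report each cluster's size and average position; small clusters with a
# distinctive centroid are candidate "opinion minorities" worth reporting.
for label in range(kmeans.n_clusters):
    members = X[kmeans.labels_ == label]
    print(f"cluster {label}: {len(members)} respondents, "
          f"mean changes = {members.mean(axis=0).round(1)}")
```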
Put another way, he suggests, the key to making participatory budgeting more representative may lie in embracing the messiness of democracy itself.