
Fixing Silicon Valley Is a Civil Rights Emergency. Here’s One Way to Begin.

Jumana Abu-Ghazaleh
Demonstrators protesting police violence on June 2, 2020, in New York City. Photo: Ira L. Black/Corbis via Getty Images

Countering anti-Blackness within our society is a target-rich environment; white supremacist thinking is ingrained in every industry, which means each one needs to be reformed or rebuilt entirely. That includes Silicon Valley and the technology it’s building to fuel our future.

Big Tech’s complicity in structural, systemic anti-Black racism and in the harm done to disadvantaged communities has long been evident, and little has improved. Recent protests over the murders of George Floyd, Breonna Taylor, and too many other Black Americans at the hands of police have forced tech workers and executives alike to take that complicity seriously.


Silicon Valley is deeply, deeply implicated in the issues the Black Lives Matter movement is demanding be addressed.

No matter how many woke-sounding statements Amazon releases, CEO Jeff Bezos has supported anti-Black policies, including the militarization of police forces that leaves them acting more like occupying armies than community servants. Then there is Facebook CEO Mark Zuckerberg, who, to protect profits, openly pretends that President Donald Trump quoting the words of George Wallace, the famed 1960s arch-segregationist, is racially neutral, a stance that has led to rolling worker protests at Facebook. This is in addition to the platform actively spreading misinformation about Black Lives Matter demonstrations.

Meanwhile, we’ve known for years that Google’s YouTube auto-play funnels people toward racist, far-right content — and the company’s efforts to change that algorithm have been paltry at best.

These are a few specific examples of bad behavior from tech’s biggest players, but the reality is these problems are deep-seated in every aspect of the tech industry. As Cathy O’Neil has catalogued for years, virtually all algorithms are racist, discriminating against people of color in countless ways, from job searches to apartment listings to the beauty standards we consume. That is partly due to the abysmally low representation of Black people working in tech and how little VC funding goes to Black startup founders.

These emergencies have needed fixing for years. For so long, we’ve been confronted daily with yet another Silicon Valley tragedy, another scandal, another failure of leadership, another breach of trust. Our world turns on the private decisions of a select few unelected programmers who craft algorithms that in turn craft human society. As Jaron Lanier, the founding father of virtual reality, puts it: “It is impossible to work in information technology without also engaging in social engineering.”

This standstill isn’t necessarily the fault of individual tech workers; the fault lies with the system they’re working in. But tech workers have a responsibility to work toward a better system, and we know they have collective power. After all, the tech giants know the power their own workers hold; why else would they spend so much energy trying to suppress their organizing?

Thankfully, those who work in tech can do something right now, at this very moment, to set the industry on a broader course toward an ethical standard we can be proud of:

When building a product, feature or service, don’t just think about the helpful impact you’ll have on your target audience; also consider the unintended or potentially harmful impact it could have on disadvantaged communities, including communities of color.

Silicon Valley must take responsibility for the potential outcomes of the technology it builds and actively work, up front, to minimize any harm the technology could cause.

A duty of care for the tech industry

In the immediate aftermath of the 9/11 attacks and the anthrax attacks shortly thereafter, the American Medical Association updated the pledge taken by new doctors upon their graduation from medical school. The opening preamble is worth reading at length in this moment:

Never in the history of human civilization has the well-being of each individual been so inextricably linked to that of every other. Plagues and pandemics respect no national borders in a world of global commerce and travel… The unprecedented scope and immediacy of these universal challenges demand concerted action and response by all. As physicians, we are bound in our response by a common heritage of caring for the sick and the suffering… Only by acting together across geographic and ideological divides can we overcome such powerful threats. Humanity is our patient.

There’s a centuries-spanning story behind that declaration. It includes how Western medicine went from a Wild West of unlicensed freelance practitioners to a professionalized endeavor, one in which individual doctors are held to ironclad standards of training and conduct. And, crucially, they face real consequences for falling outside those standards.

It also includes the development of the notion of a duty of care.

When people think of a duty of care, they are likely to think first of doctors, who are given whole infrastructures of accreditation, training, and certification to ensure that their treatment is informed by that duty. But it’s not limited to the medical field. Engineers typically have a duty of care, that is, a duty to take care to avoid harm. So do public defenders. The idea in these fields is that their workers are held to universal or near-universal norms and standards, and when they fall short of those norms and standards, there are institutions to hold them in line.

This raises the question: What if people working in tech had a duty of care to the end user drilled into them?

If doctors need to be held to professional standards because their collective effect on humanity itself is so profound, then why aren’t the technologists whose tools hold such immense social, political, economic, and health consequences for billions?

Perhaps it’s time for the tech industry to come up with its own duty-of-care pledges and drill them into its workers. Tech is creating and shaping the systems of our future; why shouldn’t its builders be held to the same kind of standards?

Start with the Dr. Evil test

While the restructuring of Silicon Valley to make it an anti-racist and more just industry is a monumental task, there is an action we can all take immediately. It applies to every group in software development: product managers, designers, engineers, programmers, everyone.

Provisionally, call it the Dr. Evil test. Or ethical future-proofing, or future scanning. Or perhaps the right term to use is fear-testing. (I can imagine a team leader somewhere asking, “Before we hit the next item in our checklist, did we F-test this?”)

Ultimately, that’s what we’re talking about: surfacing our own fears about how what we build could be used against those we love, the vulnerable, and the disadvantaged. We know from past research that calmly mentally simulating worst-case scenarios can often help us cope with such scenarios when they arise. But if our fears about our own tools remain unexpressed, there’s no chance to ensure they don’t materialize later on.
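To make fear-testing a little more concrete, here is one minimal sketch of how a team might record its feared misuse scenarios and refuse to ship until each one has a planned mitigation. Everything in it, from the class names to the severity scale, is a hypothetical illustration of the idea, not an established tool or any company’s actual process.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a structured record of "what could go wrong, and for whom"
# that a team reviews before shipping. Field names and the rule are illustrative.

@dataclass
class FearScenario:
    description: str       # how the feature could be used against someone
    affected_groups: list  # who would be harmed
    severity: int          # 1 (annoyance) to 5 (physical danger, civil-rights harm)
    mitigation: str = ""   # what the team will build or change to reduce the risk

@dataclass
class FTestReview:
    feature: str
    scenarios: list = field(default_factory=list)

    def ready_to_ship(self) -> bool:
        """The feature passes the F-test only if every feared scenario has a mitigation."""
        return all(s.mitigation for s in self.scenarios)


review = FTestReview(feature="public event listings")
review.scenarios.append(FearScenario(
    description="Strangers use listings to organize large, unsafe gatherings",
    affected_groups=["attendees", "surrounding communities"],
    severity=4,
    mitigation="",  # unmitigated, so this should block the release
))

assert not review.ready_to_ship()
```

The point isn’t the code itself; it’s that the fears get written down where a team has to answer for them, rather than living unspoken in individual engineers’ heads.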

Fear is the easy one to talk about. Love is the harder one — too at risk of sounding woo-woo or unserious. Thankfully, there’s a body of thorough academic research on what happens when we talk about who or what we love and our willingness to protect them.

It’s currently hard to imagine a coding team within one of the big four tech giants checking in with each other, at multiple codified steps in the process, about how a tool would affect their loved ones. Thankfully, adopting a new practice like this has a cumulative power.

We’ve seen the effects of tech getting into the wrong hands time and time again; take the people who hijack Zoom calls to shout slurs, exploiting the fact that most meetings lack a password prompt.

We ought to consider how other, untargeted audiences might use our products along with the unintended consequences this might generate.

We must consider: What “choke points” exist to squash bad intentions? At present, what exists is a patchwork, deployed mostly as retroactive PR to protect brand integrity rather than as proactive measures to protect users. What might it look like for it to be arranged differently?

What if we built software as though we knew, for a fact, that someone we love and want to keep safe from harm — a child, parent, or grandparent, perhaps — would be using it? We might have a Facebook that got rid of in-person event listings proactively instead of after there were scads of Covid-spreading gatherings. We might have a Google that took the spread of Covid-19 conspiracies on YouTube seriously. We might have an Amazon that listened to its workers, ensuring basic workplace safety even if only to ensure basic customer safety.

What else might we be doing differently? Would we be on the verge of what some researchers consider a tech-fueled mental health crisis among teenagers? Would Facebook be experimenting with making sadness and depression contagious among its users?

YouTube’s video recommendation algorithm has fueled the spread of neo-Nazi-adjacent content for years — and YouTube’s owner, Google/Alphabet, has been slow at best to even acknowledge the concern. It’s hard to imagine that status quo remaining in place under a new paradigm, where we build around the worst-case use scenario.

Of course, this is just a start, and one major problem here is that if white tech employees envision only the people they love, those imagined users will likely still be white and/or privileged. So it’s important to go a step further and envision what the impact of your product will be on disadvantaged communities, communities that may not be your target audience but may suffer unintended consequences from your technology if you don’t think about it up front.

Thinking about and protecting all users

Creating “personas” is a common practice in the tech industry: inventing imagined product users to guide the creation of software. A persona can be extremely elaborate, complete with a biography, photographs, and weekly routines, or it can be supremely pared down. Increasingly, there’s been a push within the industry to base these personas on actual people as much as possible.

Alan Cooper, a software designer and “father of (the programming language) Visual Basic,” was the first to attempt user personas in software development, and his analysis of their purpose is instructive. “Personas allow us to see the scope and nature of the design problem… They are the bright light under which we do surgery,” he writes.

Under tech’s ruling order of “move fast and break things,” the personas in use today are typically reductive, creating distance between the builders and the users. They are used explicitly to make software easier to use for nontechnical users. Not better. Not safer. Not more just or fair. Easier.


Instead of building exclusively for business goal–driven personas, we should consider how other, untargeted audiences might use our products, as well as what the unintended consequences might be, especially for disadvantaged communities. The example of algorithmic sexism and racism comes to mind, as does the Islamophobic violence that’s been fostered by social media news tools.
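As one illustration of what that could look like in practice, the persona records teams already keep could carry explicit fields for untargeted users, bad actors, and the harms involved. The sketch below is hypothetical; the field names and the review rule at the end are assumptions for the sake of the example, not anyone’s actual methodology.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of a persona record extended beyond the target customer.
# "Bad actor" and "untargeted user" personas get reviewed alongside the primary one.

@dataclass
class Persona:
    name: str
    role: str                  # "target user", "untargeted user", or "bad actor"
    goals: List[str]
    potential_harms: List[str] = field(default_factory=list)

personas = [
    Persona("Primary customer", "target user",
            goals=["find local events quickly"]),
    Persona("Harasser", "bad actor",
            goals=["find open meetings to disrupt"],
            potential_harms=["racist abuse shouted at attendees"]),
    Persona("Over-policed community member", "untargeted user",
            goals=["avoid being surveilled"],
            potential_harms=["location data shared with law enforcement"]),
]

# A design review might require at least one non-target persona per feature.
assert any(p.role != "target user" for p in personas)
```

The format matters far less than the habit: the people who could be hurt become named, visible parts of the design conversation instead of an afterthought.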

It is also important to note that all of these efforts are futile if diversity numbers remain as low as they are in tech. Personas created by a given group of engineers are going to reflect those engineers’ own experiences. Which makes it all the more important that hires are diverse and given space to advance. This is about more than merely adding a perfunctory “Oh, by the way, this consumer composite? She’s an African American woman” to a user persona. It’s about articulating our own fears and desires in what we build — and making that “our” an inclusive one.

This is something we can do right now

Tech workers deserve better than to work in apathetic environments. But while we’re fighting for a better industry, there are things all of us can do: Use the power we have to insist on personas that account for our tools being used not just by our best friends but also by our worst enemies, those who wish us harm, and those who might use our tools to harm innocent others.

The end user is not just an end user; they are a person. The tech we build can hurt that person.

Medical professionals regard everyone as their patient and treat them with respect. As social technologists, we should treat our end users with the same respect.

