From Cameras to Airport Scanners, Technology Has a Race Problem

(Illustration: Joel Louzado)

When Nikon’s S630 digital camera was released in 2009, customers soon realized it had a hard time “reading” Asian faces. Its blink-detection feature misread many Asian users’ open eyes as closed, so it would prompt them to stop blinking or squinting in order to take the perfect picture. A few years later, Google faced intense backlash when its photo software, which is used on everything from computers to phones, categorized Black people as “gorillas.” In 2019, a joint study by the University of Toronto and MIT found Amazon’s facial-recognition software had a hard time recognizing and identifying non-white faces. And earlier this month, ProPublica reported that full-body scanners at American airports “are prone to false alarms for hairstyles popular among women of color”—like braids and dreadlocks. (Travellers who wear turbans or wigs also face a higher rate of false alarms.)

These are just a few examples of the ways technology can betray minority consumers—and it has done so for years. If you’re not dealing with these types of microaggressions on a daily basis, it’s easy to dismiss them as one-offs, or at least as minor inconveniences. But when technology doesn’t work the way it’s supposed to for POC, it’s a sign of something far worse: today’s tech companies have a problem with race.

Advocates have been saying this for years. Tech’s race problem can be found in the industry’s products, yes, but we also live in a time when governments and companies alike rely on high-tech AI and machine learning to determine who to hire, how consumers save money and even who goes to prison. If some of us—the ones who aren’t white or male—aren’t “seen” by the technology that surrounds us, or worse, are “seen” in inequitable ways, it could easily create a system where minorities are permanently relegated to second-class status.

“Physical and digital realities are different, obviously, but we’re seeing similar practices or discrimination that happen in the physical world being replicated in digital environments,” explains Nasma Ahmed, director of Canada’s Digital Justice Lab. “We have to be thinking about the larger systemic issues that are occurring in our society [racism, sexism, discrimination] and how that feeds into the tools we’re creating.”

Biased technology affects everyone

In many ways, invisible algorithms run most of our lives already. Predictive show suggestions? That’s AI. Voice-activated assistants? AI too. Smart home applications? Yep, that’s AI.

While critics often worry about future job redundancies courtesy of artificial intelligence, Akwasi Owusu-Bempah, assistant professor of sociology at the University of Toronto, believes a more troubling aspect of the technology is how it can be biased against certain groups. “I think inevitably this is becoming an increasing reality in our lives… it’s being used to determine what is marketed to us, it’s being used to determine what news we see, what appears in our social media feeds.”

He isn’t wrong. ProPublica recently discovered that companies were skirting anti-discrimination laws by showing job and real estate ads only to users of certain races or genders. A 2014 Cornell University study found similarly dismaying results: Google’s online ad system was more likely to show ads for high-paying jobs to men than to women.

How can technology even be biased?

AI tools, even the most advanced examples, aren’t sentient—but researchers have shown that these tools can mimic or mirror racist prejudices. And since most employees in the tech industry and at the leading institutions researching artificial intelligence are white and male, that homogeneity can mean a whole host of issues for women, children and people of colour around the world.

How serious is it? When MIT set out to investigate potential bias in artificial-intelligence algorithms, researchers found that machines were more likely to associate positive words with white faces and negative words with Black faces.
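For the technically curious, here’s a rough sketch of the kind of association test researchers use to surface this sort of bias. Every number and label below is invented for illustration; real studies measure learned embeddings with hundreds of dimensions, not the toy two-dimensional vectors shown here.

```python
# A toy illustration of an embedding association test (in the spirit of the
# Word Embedding Association Test). All vectors here are made up; real
# studies measure learned, high-dimensional embeddings.
import numpy as np

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical embeddings, as if learned from biased data.
embeddings = {
    "pleasant":   np.array([1.0, 0.1]),
    "unpleasant": np.array([-1.0, 0.1]),
    "white_face": np.array([0.9, 0.3]),
    "black_face": np.array([-0.8, 0.4]),
}

def association(target):
    """Positive score: target sits closer to 'pleasant' than 'unpleasant'."""
    t = embeddings[target]
    return cosine(t, embeddings["pleasant"]) - cosine(t, embeddings["unpleasant"])

for target in ("white_face", "black_face"):
    print(target, round(association(target), 2))

# In an unbiased embedding both scores would sit near zero; a large gap like
# the one printed here is the statistical fingerprint of biased training data.
```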

As AI-influenced technology gets smarter and moves from the hypothetical into the real world, it could lead to real-life consequences. Here’s how it works: machines are only as good as the data they receive. If you give computers imperfect or biased datasets, the results they come up with are likely to be less than helpful.
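To make that mechanism concrete, here’s a minimal, hypothetical sketch: a toy classifier trained on data where one group vastly outnumbers another. None of this is any company’s real system; the data is simulated purely to show how under-representation in a training set translates into worse results for the under-represented group.

```python
# A minimal simulation: a model trained mostly on "group A" performs well
# for group A and poorly for the under-represented "group B". The data is
# synthetic; only the mechanism is the point.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Sample n examples; the pattern that predicts the label sits in a
    different region of feature space for each group (controlled by shift)."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] > shift).astype(int)
    return X, y

# Group A dominates the training data; group B barely appears.
Xa, ya = make_group(1000, shift=0.0)
Xb, yb = make_group(20, shift=3.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Score fresh, equal-sized samples from each group.
for name, shift in (("group A", 0.0), ("group B", 3.0)):
    X_test, y_test = make_group(500, shift)
    print(f"{name}: accuracy = {model.score(X_test, y_test):.2f}")
# Typical output: group A scores near 0.95, group B near coin-flip levels.
```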

Why diversifying tech’s workforce won’t solve the problem

For years, institutions have said that a diverse workforce is the solution to ending bias in the tech industry. Of course, diverse representation is important, but it’s not a perfect fix, especially if workers at these companies aren’t able to voice their opinions or take action, and if management doesn’t play an active role in changing company culture.

For instance, a 2017 US study found that underrepresented minorities who worked in the tech sector—including LGBTQ+, Hispanic, Black and Asian employees—were more likely to leave their jobs due to discrimination and unfair treatment. Even when diverse employees were hired at tech-focused companies, many left the industry within a few years. Clearly, hiring diverse employees to create more equitable products can’t work if the workplace isn’t set up for them to succeed.

“Despite the increased focus on workforce diversity, employee retention tends to be overlooked in the analyses of diversity within tech companies and the ecosystem as a whole,” the report stated. “… Put simply, the diversity numbers may not be changing at least in part because tech companies have become a revolving door for underrepresented groups. Without a nuanced and accurate analysis of the problem, and a comprehensive roadmap for solutions, these disparities will remain largely unchanged.”

Some companies—and cities—are trying to fix the problem

In New York City, officials are slowly coming to terms with how AI impacts everything from neighbourhood improvements to local project approvals. In response, council members are doing something quite novel: pushing for transparency at the civic level. NYC Mayor Bill de Blasio recently announced that the city would analyze its AI-influenced algorithms, or “automated decision systems,” which advise council members and officials on everyday decisions, to ensure fair and equitable treatment for all residents. The move should also give marginalized communities—who are all-too-often left out of policy and technology decisions—a chance to learn more about these systems and play a more active role in how they’re used.

Jodie Wallis, managing director of artificial intelligence at Accenture Canada, a global consulting and services company, is also trying to make a positive change for underrepresented groups. Accenture created a Fairness Tool that identifies gaps in the machine-learning solutions often used to power business (and some government) decisions. The toolkit is offered to clients to help them catch bias in the underlying data that powers their products or software. “Bias occurs when you have a data set that over-represents or under-represents a particular group,” she explains. “The [fairness] tool looks at your data set and points out what you might have missed.”
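Accenture hasn’t published the tool’s internals, but the basic idea Wallis describes (comparing a data set’s make-up against what it should look like) can be sketched in a few lines. Everything below, from the column name to the reference shares and the threshold, is a hypothetical stand-in, not the actual Fairness Tool.

```python
# A hypothetical sketch of a representation check, not Accenture's actual
# Fairness Tool: flag groups whose share of a data set falls well below
# a reference (e.g. census) distribution.
from collections import Counter

def representation_gaps(records, attribute, reference_shares, tolerance=0.05):
    """Flag groups under-represented by more than `tolerance` versus the
    reference distribution."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    flags = {}
    for group, expected in reference_shares.items():
        actual = counts.get(group, 0) / total
        if actual < expected - tolerance:
            flags[group] = {"expected": expected, "actual": round(actual, 3)}
    return flags

# Hypothetical training records for, say, a hiring model.
data = [{"gender": "man"}] * 800 + [{"gender": "woman"}] * 200
print(representation_gaps(data, "gender", {"man": 0.5, "woman": 0.5}))
# -> {'woman': {'expected': 0.5, 'actual': 0.2}}
```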

Is there anything *we* can do?

Whether a solution comes from government or the private sector, Canadians have to understand what’s happening, too. To create equitable change and avoid bias, it’s crucial to teach communities how these technologies shape their world. Explaining how artificial-intelligence software works, along with its weaknesses and real-world impacts, is part of helping people advocate for themselves and their rights.

“First and foremost we need to increase public awareness about the use of and infiltration of these technologies into our lives,” Owusu-Bempah explains. “I think there should be more of an effort made to help individuals understand how their lives are impacted by these technologies in order for us to have a decent sense about how we can ensure that they’re used positively.”

As long as there are human beings, bias will always be an issue—we’re prone to it by nature, unconscious or not. But as new technologies like AI are integrated into society at an accelerating rate, the stakes become much higher. If we don’t recognize the problem and take steps to remove that bias, it could easily become so widespread that it impacts not just this generation, but all those that come after us.

