A colorblind future


Are the machines making us better, or are we making them worse?

By Cassandra Morrison

As a human race, we have been dreaming of the future since we had the wherewithal to distinguish it from the past and present. With hope, optimism, and sometimes trepidation, we have made our own predictions and plans about how we see it unfolding.

While we’re not driving the flying cars of the Jetsons to work in the morning, airplanes fly us around the world and spaceships fly us out of it. And although we cannot teleport, virtual reality glasses can seemingly take us millions of miles away. And robots? Well, robots and artificial intelligence first rose to prominence in the 1980s, and their uses continue to transform how we live, for better or worse.

Your new machine is great!

The appeal of a robotic vacuum is easy to understand: the time you’ll save cleaning! The advantages of artificial intelligence and machine learning are just as clear: high-stakes decisions can be made on immense sets of data instead of subjective opinion. Programs can scrutinize billions of data points, spot correlations, and make predictions, all without a perspective clouded by personal experience and human fallibility. They can give us solutions to our problems, and fast.

This is why AI continues to expand into new industries in the 21st century. The data-driven decisions it helps make stretch across many fields, from screening resumes for a company, to determining whether to grant someone a loan, to leading a police investigation based on facial recognition. Even our health care system relies on these algorithms, and these critical decisions have lasting impacts.

“It’s hard to imagine any industry not already touched in some ways by machine learning and AI,” says Daniel Sieberg, a Saybrook board member who works and writes about the tech and AI sector. “That would run counter to the ethos that’s helped humanity evolve to this stage. Thanks to innovation, iteration, and invention, there are many ways that AI could help save time and maybe even help us be better humans. That’s a best-case scenario though—it won’t always happen, and it’s going to be fraught with challenges.”

One of these challenges, or rather problems, is the bias and racial prejudice seemingly coded into these futuristic solutions. Objectivity, once thought of as machine learning’s most powerful and desirable quality, can, when left unchecked, produce decisions that are biased or unfair.

But your new machine is a little racist

While racism among humans is a social problem, racism in a computer program is an engineering problem. “Programmers, of course, have some amount of license and authority within the creation of any code. They work with other programmers, developers, and engineers who are managed by a project lead or reviewed by a director and other colleagues and so on,” Sieberg says. “There’s a whole network of implementation that should be designed to mitigate any malicious or accidental bias into an algorithm.”

Yet problems continue to arise.

In our criminal justice system, risk assessments were once hailed as a new dawn, an opportunity to wipe away bias. By using AI to estimate a defendant’s likelihood of re-offending, the courts could make more informed sentencing decisions. However, statistical evidence has shown that these algorithms often get it wrong, particularly along racial lines. A 2016 ProPublica study found that a program used in courtrooms rated black defendants as higher risk for re-offense than white defendants. Yet of the 7,000 people the program predicted would reoffend in 2013 and 2014, only 20 percent actually did.

The data the program “learned” from showed that black defendants were at greater risk of recidivism, but what the machine didn’t learn is that our criminal justice system has historically mistreated and over-policed black and minority communities.

In 2014, then U.S. Attorney General Eric Holder warned that these risk assessments might be failing the system. “Although these measures were crafted with the best of intentions, I am concerned that they inadvertently undermine our efforts to ensure individualized and equal justice,” he said, adding, “they may exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system and in our society.”


And your new machine makes you sicker

Another large field that utilizes risk assessment is health care. When doctors ask how you’re feeling on a scale of one to 10, or how sad you’ve been over the past two weeks, on a scale from unable to get out of bed to somewhat blue, they are gathering important data to determine their next steps. That data is often fed into a program that uses statistical information to determine which patients need more attention, more tests, or more medication, and who should be sent home.

Algorithms that help make these decisions are used in the treatment of more than 200 million people in the United States each year. With a health care system that is overtaxed and a projected physician shortage of more than 122,000 by 2032, finding ways to streamline patient treatment is necessary. But as with many solutions, new problems arise.

A study released in 2019 found that the algorithm it examined was more likely to refer white patients than equally sick black patients for additional treatment. The algorithm relied heavily on health care costs as a proxy for illness, and in its data, black patients usually generated lower health care costs. Upon further review of the dataset the program was analyzing, however, the average black patient was substantially sicker than the average white patient, with a greater prevalence of conditions such as diabetes, anemia, kidney failure, and high blood pressure. Compared with white patients who had the same chronic health problems, black patients cost an average of $1,800 less.

And while health care should not be based on cost, cost was weighted heavily in the results, and the study found that black patients had to be sicker than white patients to be referred for any additional treatment or attention. Only 17.7% of the patients the algorithm assigned to receive extra care were black; the researchers estimated that the proportion would have been 46.5% if the algorithm were unbiased.
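The proxy problem the study describes is easy to reproduce in a few lines of code. The sketch below (with made-up patient names and dollar figures, not the study’s actual model or data) ranks patients by past cost rather than by illness, and shows how two equally sick patients can be treated differently:

```python
# Hypothetical sketch of cost-as-proxy bias: the "risk score" is past
# spending, so a patient who generates lower costs looks healthier
# than an equally sick patient who generates higher costs.

def refer_for_extra_care(patients, top_fraction=0.5):
    """Refer the highest-'risk' patients, where risk is proxied by cost."""
    ranked = sorted(patients, key=lambda p: p["annual_cost"], reverse=True)
    cutoff = int(len(ranked) * top_fraction)
    return {p["name"] for p in ranked[:cutoff]}

# patient_a and patient_b have identical chronic-condition counts
# (equal illness burden), but patient_b generates ~$1,800 less in
# yearly costs, mirroring the disparity the study reported.
patients = [
    {"name": "patient_a", "chronic_conditions": 4, "annual_cost": 9800},
    {"name": "patient_b", "chronic_conditions": 4, "annual_cost": 8000},
    {"name": "patient_c", "chronic_conditions": 1, "annual_cost": 9000},
    {"name": "patient_d", "chronic_conditions": 1, "annual_cost": 7500},
]

referred = refer_for_extra_care(patients)
# Cost, not illness, drives the ranking: patient_a is referred while the
# equally sick patient_b is not, and the much healthier patient_c is
# referred ahead of patient_b.
print(referred)
```

Nothing in the code mentions race; the unfairness enters entirely through the choice of cost as the training signal, which is why audits of the data a model learns from matter as much as audits of the model itself.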

So can you return this new machine?

The good news: AI has no bias of its own.

The bad news: it is very easy to train an AI system to be racist. Computers learn to be racist, sexist, and prejudiced much the way children do: from their creators.

“Diversity and inclusion are not new challenges within the technology field. But as artificial intelligence and machine learning are incorporated into more and more aspects of daily life, we need to be mindful of the inherent biases that could be unknowingly, unwittingly, or unintentionally included,” Sieberg says.

AI learns about how the world has always been. It analyzes data from the past. It doesn’t know how the world should be—or how it could be in the future. Bias has left a dark thumbprint on many aspects of our history—whether based on gender or race or anything else.

The futuristic solutions we dreamed of for decades have solved many problems but come with a set of their own. Just as humans must work to overwrite our own deep-seated misogyny and racism, we have to take the same time and care to make sure the machines we’ve created don’t reflect our basest characteristics.

“There needs to be continued oversight by outside agencies and individuals to monitor what develops within machine learning and AI efforts, even if they’re not the ones creating it. In some cases, sexist or racist or discriminatory behavior may contain malintent—other times it could be a teachable moment,” Sieberg says. “Transparency is critical to the future of AI and ensuring that there are ethicists and trained professionals installed at companies to tackle these concerns is a crucial first step. Shining a light is the only way to try to eliminate the dark corners.”
