
Despite being a mathematician’s dream word, algorithms — or sets of instructions that humans or, most commonly, computers execute — have cemented themselves as an integral part of our daily lives. They are working behind the scenes when we search the web, read the news, discover new music or books, apply for health insurance and search for an online date. To put it simply, algorithms are a way to automate routine or information-heavy tasks.

However, some “routine” tasks have serious implications, such as determining credit scores, cultural or technical “fit” for a job, or the perceived level of criminal risk. While algorithms are largely designed with society’s benefit in mind, they are mathematical or logical models meant to reflect reality, which is often more nuanced than any model can capture. For instance, some students aren’t eligible for loans because lending models deem them too risky by virtue of their zip codes, which can trap them in a cycle of limited educational opportunity and poverty.

Algorithms are often positive forces for society, improving human services, reducing errors and identifying potential threats. However, algorithms are built by humans and thus reflect their creators’ imperfections and biases. To ensure algorithms help society rather than discriminate, disparage or perpetuate hate, we need to be more transparent and accountable in how they are designed and developed. Given how important algorithms have become in our daily lives, we will share a few examples of biased algorithms and offer perspective on how to improve algorithmic accountability.

How Computers Learn Biases

Much has been written on how cognitive biases influence everyday decisions. Humans use these mental shortcuts to reduce cognitive load, often without being aware of it. For instance, we tend to think that the likelihood of an event is proportional to the ease with which we can recall an example of it happening. So if someone decides to continue smoking because they know a smoker who lived to be 100, despite significant evidence demonstrating the harms of smoking, that person is relying on what is called the availability bias.

Humans have trained computers to take over routine tasks for decades. Initially these were very simple tasks, such as performing calculations over large sets of numbers. As the computer and data science fields have expanded, computers are being asked to take on more nuanced problems through new tools such as machine learning. Over time, researchers have found that algorithms often replicate and even amplify the prejudices of those who create them. Because algorithms require humans to define exhaustive, step-by-step instructions, their creators’ perspectives and assumptions can unintentionally build in bias. Beyond bias in development, algorithms can also be biased if they are trained on incomplete or unrepresentative data. Common facial recognition training datasets, for example, are roughly 75% male and 80% white, which leads the resulting systems to exhibit both skin-type and gender biases, with higher error and misclassification rates for underrepresented groups.
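
To make that mechanism concrete, here is a minimal sketch in Python using scikit-learn and entirely synthetic data (not any production system): when one group makes up only a small share of the training data and the relationship between features and labels differs between groups, a single model fits the majority and misclassifies the underrepresented group far more often.

```python
# Synthetic illustration only: an imbalanced training set leads to
# systematically higher error rates for the underrepresented group.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_group(n, center, threshold):
    """One group's data: a single feature, and a label defined by that group's threshold."""
    x = rng.normal(loc=center, scale=1.0, size=(n, 1))
    y = (x[:, 0] > threshold).astype(int)
    return x, y

# Majority group supplies 80% of the data; the underrepresented group's
# true decision threshold differs from the majority's.
X_maj, y_maj = make_group(8000, center=0.0, threshold=0.0)
X_min, y_min = make_group(2000, center=0.75, threshold=1.5)

X = np.vstack([X_maj, X_min])
y = np.concatenate([y_maj, y_min])
group = np.array(["majority"] * len(y_maj) + ["underrepresented"] * len(y_min))

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0, stratify=group)

model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

for g in ["majority", "underrepresented"]:
    mask = g_te == g
    err = (pred[mask] != y_te[mask]).mean()
    print(f"{g} group error rate: {err:.1%}")
# The underrepresented group's error rate comes out several times higher,
# mirroring the pattern audits have found in commercial facial recognition systems.
```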

At the individual level, a biased algorithm can significantly harm a person’s life (e.g. longer prison time based on race). Spread across an entire population, these harms magnify existing inequalities and have lasting effects on whole communities. Here are a few examples.

Searching and Displaying Information

Google, one of the most well-known companies in the world, shapes how millions of people find and interact with information through its search algorithms. For many years, Googling “Black girls” would yield sexualized search results. Google’s engineers are largely white and male, and their biases and viewpoints may be unintentionally (or intentionally) reflected in the algorithms they build. This illustrates the consequences of unquestioningly trusting algorithms and shows that data discrimination is a real problem. By 2016, after the issue drew widespread attention, Google had modified the algorithm to surface more diverse images of Black girls in its image search results.

Recruiting

Many companies use machine learning algorithms to scan resumes and make suggestions to hiring managers. Amazon scrapped an internal machine learning recruiting engine after realizing it favored men’s resumes. To train the system, the team had used resumes collected over the previous 10 years in order to identify patterns; because the workforce skewed male, most of those resumes came from men. Had the model not been questioned and reviewed, it would only have reinforced the existing male dominance of Amazon’s technical workforce.

In addition to gender bias, tech companies are known for low levels of diversity and discriminatory hiring practices. Black and Latino students are graduating from college with computer science degrees in growing numbers, yet they remain underemployed in the industry. Hiring practices shaped by these biases reinforce white privilege and discriminate against people of color.

Health Care

The U.S. health care system uses commercial algorithms to guide health decisions, and algorithms help doctors identify and treat patients with complex health needs. A good example is the CHA₂DS₂-VASc score, which estimates the risk of stroke in patients with atrial fibrillation and helps guide preventive treatment.
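
As a concrete, intentionally simplified illustration of how such a score works, here is a sketch in Python of the published CHA₂DS₂-VASc point system. The function name and signature are our own, and this is for illustration only, not clinical use.

```python
# Simplified sketch of the CHA2DS2-VASc scoring rules (illustration only,
# not a clinical tool): each risk factor adds a fixed number of points,
# and the total helps guide decisions about preventive anticoagulation.
def cha2ds2_vasc_score(age, is_female, has_chf, has_hypertension,
                       has_diabetes, prior_stroke_or_tia, has_vascular_disease):
    score = 0
    score += 1 if has_chf else 0                            # C: congestive heart failure
    score += 1 if has_hypertension else 0                   # H: hypertension
    score += 2 if age >= 75 else (1 if age >= 65 else 0)    # A2 / A: age bands
    score += 1 if has_diabetes else 0                       # D: diabetes mellitus
    score += 2 if prior_stroke_or_tia else 0                # S2: prior stroke / TIA / thromboembolism
    score += 1 if has_vascular_disease else 0               # V: vascular disease
    score += 1 if is_female else 0                          # Sc: sex category (female)
    return score

# Example: a 70-year-old woman with hypertension scores 3.
print(cha2ds2_vasc_score(age=70, is_female=True, has_chf=False,
                         has_hypertension=True, has_diabetes=False,
                         prior_stroke_or_tia=False, has_vascular_disease=False))
```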

However, Science published a study in which researchers found “significant racial bias” in one of these widely used algorithms, which consistently and dramatically underestimated Black patients’ health care needs. Practitioners use this algorithm to identify patients for “high-risk care management” programs, which seek to improve the care of patients with complex health needs by providing additional resources, greater attention from trained providers and more coordinated care.

The algorithm uses health care costs as a proxy for health needs, when variables like “active chronic conditions” would be more accurate. Without the algorithm’s bias, the percentage of Black patients receiving extra health care services would jump from 17.7% to 46.5%, which would likely improve their health and recovery outcomes.
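
To see why the choice of label matters, here is a toy simulation in Python (synthetic numbers of our own invention, not the study’s data): when spending stands in for need and one group spends less at the same level of sickness, ranking patients by cost selects far fewer of them for the high-risk program than ranking by underlying need would.

```python
# Synthetic sketch of label-choice bias: two groups are equally sick,
# but one historically incurs lower costs due to unequal access to care,
# so a cost-based ranking under-selects that group for extra services.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group_b = rng.random(n) < 0.3                   # 30% of patients belong to group B
need = rng.poisson(lam=3, size=n)               # "active chronic conditions" (true need)
access = np.where(group_b, 0.6, 1.0)            # group B incurs lower costs at equal need
cost = need * access * 1_000 + rng.normal(0, 300, n)  # observed annual spending

top_k = n // 10  # the program enrolls the top 10% "highest risk" patients

by_cost = np.argsort(-cost)[:top_k]             # what a cost-proxy ranking selects
by_need = np.argsort(-need)[:top_k]             # what a need-based ranking selects

print(f"Share of group B selected using the cost proxy: {group_b[by_cost].mean():.1%}")
print(f"Share of group B selected using true need:      {group_b[by_need].mean():.1%}")
```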

Policing

From arrest through bail, trial and sentencing, algorithmic inequality shows up at every stage. Police in Detroit recently arrested Robert Julian-Borchak Williams based on a false facial recognition match; he was detained for 30 hours and interrogated for a crime someone else committed. The charges were ultimately dropped due to insufficient evidence, but the case signals the beginning of an uncertain chapter. Joy Buolamwini, an MIT researcher and founder of the Algorithmic Justice League, noted: “The threats to civil liberties posed by mass surveillance are too high a price. You cannot erase the experience of 30 hours detained, the memories of children seeing their father arrested, or the stigma of being labeled criminal.”

Algorithms inform decisions around granting or denying bail and handing down sentences. They generate reoffense risk scores used to decide whether to direct additional police resources toward ‘high-risk’ individuals. Additionally, “hot spot policing” uses machine learning to analyze crime data and determine where to concentrate police patrols at different times of the day and night.

The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) algorithm, used by judges to decide whether defendants should be detained or released on bail pending trial, was found to be biased against African-Americans. Using arrest records, defendant demographics and other variables, the algorithm assigns a risk score reflecting a defendant’s likelihood of committing a future offense. Compared to white defendants who were equally likely to re-offend, African-Americans were more likely to be assigned a higher risk score and spent longer periods in detention while awaiting trial.
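
Disparities like this are typically surfaced by auditing scores against observed outcomes. Below is a minimal sketch in Python of such an audit; the table, column names and numbers are hypothetical, not the actual COMPAS data. It asks: among people who did not go on to reoffend, how often was each group nonetheless labeled high risk?

```python
# Hypothetical audit table: one row per defendant.
import pandas as pd

df = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "B", "A"],
    "high_risk":  [1,   0,   0,   1,   1,   0,   1,   1],   # score above the "high risk" cutoff
    "reoffended": [1,   0,   0,   0,   1,   0,   0,   1],   # observed outcome within two years
})

# False positive rate: labeled high risk among those who did NOT reoffend.
did_not_reoffend = df[df["reoffended"] == 0]
fpr_by_group = did_not_reoffend.groupby("group")["high_risk"].mean()
print(fpr_by_group)
# A large gap between groups, at equal actual reoffense rates, is the kind
# of disparity described above.
```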

It’s Not All Bad: Initial Progress in Technology

Following the highly publicized killing of George Floyd, white Americans are beginning to acknowledge the significant racial inequalities in the U.S. In light of the civil unrest, large tech companies are beginning to respond. IBM announced it is stopping all facial recognition work, and Amazon paused sales of its facial recognition tool to law enforcement for one year. Microsoft President Brad Smith announced the company would not sell facial recognition to police “until we have a national law in place, grounded in human rights, that will govern this technology.”

Other companies are also taking matters into their own hands. Six Los Angeles tech companies shared how they are taking action, from fostering and elevating the dialogue around race in America and adopting more assertive diversity hiring and recruitment practices to providing additional mental health services and donating to organizations fighting for racial equality. The Stop Hate for Profit campaign is an excellent example of how the public can pressure tech companies to move away from “neutrality” that is in fact biased. Netflix launched a Black Lives Matter collection, a strong example of amplifying creative Black voices.

Where Do We Go From Here?

In many situations, algorithms make our lives easier. However, as the examples above show, algorithms can encode bias that disproportionately harms certain populations. To keep improving how algorithms serve society, we need to demand more accountability and transparency. Recognizing that there is a problem is the first step; from there, society needs to demand action. Vox shared an algorithmic bill of rights to protect people from the risks that artificial intelligence is introducing into their lives.

Citizens have a right to know how and when an algorithm is making a decision that affects them, as well as the factors and data it uses to reach that decision. The Association for Computing Machinery has also developed transparency and accountability principles for algorithms. We can support these organizations in raising awareness, and we can advocate and lobby for greater transparency and accountability from both companies and governments.

Tech companies must become more inclusive and diverse so their teams are more representative of the population they serve. Policymakers need to educate themselves on the risks of opaque algorithms and proactively regulate them.

In our work with LA Tech4Good (@LATech4Good), we are exploring ways to amplify technology organizations in the greater Los Angeles area that are reducing algorithmic bias and improving diversity in technology. If you know of organizations working toward these goals or want to get involved, please let us know at hello@latech4good.org.

Free Resource: Have you downloaded the “State of Artificial Intelligence in the Nonprofit Sector” report yet? It’s the most comprehensive analysis of AI for social good available. 

About the authors:  

Meghan Wenzel is a senior UX researcher at 15Five, an employee engagement platform focused on creating highly engaged, high-performing organizations by helping people become their best selves. Before joining 15Five, Meghan founded the UX research team at Factual, a startup focused on building tools to make data more accessible and actionable. Meghan writes UX research-focused content on Medium, as well as books and research briefs on education, mindfulness, and neuroscience with The Center for Educational Improvement.

Jared is the CEO of PwrdBy, a speaker, and a published author. PwrdBy empowers nonprofits to fundraise smarter through artificial intelligence apps such as Amelia and NeonMoves. Before joining PwrdBy, Jared was a Senior Consultant in Deloitte’s Sustainability practice, with experience helping Fortune 500 companies design social and environmental sustainability strategies. He is a Lean Six Sigma Black Belt.