We have been trained to think that all bias is bad, and that we should do all we can to rid ourselves of it to avoid being unfair. What we often fail to realise is that bias is a shortcut that allows us to cut through the noise and zero in on the data we need to process.
This is a link-enhanced version of an article that first appeared in the Mint. You can read the original here. For the full archive of all the Ex Machina articles, please visit the website.
We often think of bias as a flaw that needs to be purged. It’s not. It’s a cognitive shortcut that we need to fine-tune. We are constantly bombarded by shapes, colours, sounds and smells. Even though the human sensory system can collect this information at a rate of approximately 10 million bits per second, our conscious mind can only process 50 bits of information per second. That we are such a successful species, even though we can only process 0.0005% of all the data we collect, is a testament to our brain’s ability to quickly sort the important from the trivial.
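For what it's worth, that last figure follows directly from the two estimates above:

$$\frac{50\ \text{bits/s}}{10{,}000{,}000\ \text{bits/s}} = 5 \times 10^{-6} = 0.0005\%$$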
We do this using heuristics, simple rules of thumb that help us decide which information to prioritise based on context, experience and human evolutionary adaptations. This is how we can tell, just from the colour and appearance of certain things, that they should not be put in our mouths. It's how we recognise a dangerous neighbourhood the moment we enter it, even though we've never been there before. When we hear something rustling in the dark, the fight-or-flight response we instantly feel is because, at some point in humanity's past, that sound indicated the presence of a predator.
We use heuristics like these to make all sorts of decisions. It is how we zero in on relevant facts when cognitive limitations, incomplete information and time constraints deny us the luxury of processing all the information we need. We use different heuristics, depending on the circumstances. For instance, when we meet new people, we invoke the representativeness heuristic that allows us to evaluate them based on their similarity to certain stereotypes we have already formed. Every time we take an instant liking to someone, it has more to do with how certain superficial traits they possess align with our preconceived notions than any real knowledge of who they are.
These heuristics keep us going, helping us navigate life in a complex world. But they sometimes backfire, and when that happens, we call them biases. The ‘availability bias,’ for instance, over-weights events that are vivid, recent and emotionally charged, even though the underlying heuristic is usually sound: recently processed information tends to be representative of similar information we have encountered before. When we are accused of ‘confirmation bias,’ it means we have favoured information that supports our existing beliefs, even though this, too, is a useful heuristic, one that keeps us from constantly questioning what we already know.
Biases can also result in unfair outcomes. Recruiters have been known to overlook deserving candidates, and even supposedly impartial judges have allowed their subconscious prejudices to affect their decisions. To address this, we put in place automated decision-making systems wherever we could, with the aim of ensuring that critical decisions were made objectively, based solely on the available data. This, we believed, would reduce the harms resulting from bias.
We soon discovered that the algorithms we were using had biases of their own. The conclusions they drew were unexpectedly skewed by gender and ethnicity, even though neither was explicitly recorded in the data. Automated résumé screeners systematically down-ranked the CVs of women applicants because historic hiring patterns suggested that men were better candidates. Since these biases were implicit in the training data, they were encoded into the heuristics these algorithms developed for the tasks they were given. What we thought was objective logic concealed an entirely different set of inequities.
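What follows is a small illustrative sketch, not any actual screening system: the data is entirely synthetic and the "proxy" feature is a hypothetical stand-in for something like an all-women's college on a CV. It only shows the mechanism by which a model trained on historically skewed decisions ends up penalising a proxy for gender, even though gender itself never appears as a field.

```python
# Synthetic illustration (not any real screening system): a model trained on
# historically biased hiring decisions learns to penalise a proxy for gender.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

years_experience = rng.normal(5, 2, n).clip(0)  # legitimate signal
proxy = rng.integers(0, 2, n)                   # hypothetical feature correlated with gender

# Historical labels: candidates were hired more often with experience, but past
# recruiters also discriminated against the group the proxy stands in for.
logits = 0.8 * (years_experience - 5) - 1.5 * proxy
hired = rng.random(n) < 1 / (1 + np.exp(-logits))

X = np.column_stack([years_experience, proxy])
model = LogisticRegression().fit(X, hired)

print(dict(zip(["experience", "proxy"], model.coef_[0].round(2))))
# The proxy gets a large negative weight: the bias in the historical decisions
# is now encoded in the model, with no explicit gender feature anywhere.
```

The specific numbers mean nothing; the point is that whatever skew sits in the historical labels is exactly what the model learns to reproduce.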
Today, algorithmic bias is considered one of the most significant risks of artificial intelligence (AI). Now that we know these systems can conceal biases that are hard to detect, we treat their decisions with suspicion. As a result, most AI governance efforts are focused on eliminating every last trace of possible bias.
This approach presumes that the ideal state is radical neutrality—complete freedom from bias of all sorts. This, however, misunderstands both the nature of bias and the process of cognition.
Biases aren’t bugs in our system; they are tools we have developed to help us quickly arrive at the conclusions we need, given the constraints of real-time decision-making. If we did not invoke them, we would simply be incapable of processing the nearly infinite amount of information available in the time we have to decide. These are the trade-offs we have to make to avoid being paralysed by the amount of data we would otherwise have to consider.
If we think of biases as a set of heuristics that are essential to extracting useful signals from the fire-hose of information, it becomes easier to reconcile ourselves to the trade-offs involved in deploying them. Rather than trying to build bias-free algorithms, we need to better understand how these biases work so that we can mitigate the harms that might result from their use.
One way to do this might be to develop tools that can detect these harms on the margin—early warning systems designed to indicate when the decisions of an AI system have started to deviate from the norm in unreasonable ways. This will allow us to redesign our systems before the unintended consequences of these heuristics result in widespread harm.
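To make the idea concrete, here is a minimal sketch of what such an early-warning check could look like, assuming a simple rolling 'selection-rate ratio' between two groups as the disparity metric (loosely modelled on the four-fifths rule of thumb used in employment contexts). The class, threshold and group names are all illustrative assumptions, not a prescription.

```python
# Hedged sketch of an early-warning monitor: track a rolling disparity metric
# for an automated decision system and flag when it drifts past a threshold.
from collections import deque

class DisparityMonitor:
    """Rolling selection-rate ratio between two groups (illustrative only)."""

    def __init__(self, window: int = 1000, threshold: float = 0.8):
        self.threshold = threshold              # e.g. the four-fifths rule of thumb
        self.decisions = deque(maxlen=window)   # (group, selected) pairs

    def record(self, group: str, selected: bool) -> None:
        self.decisions.append((group, selected))

    def selection_rate(self, group: str) -> float:
        rows = [s for g, s in self.decisions if g == group]
        return sum(rows) / len(rows) if rows else float("nan")

    def alert(self, group_a: str, group_b: str) -> bool:
        """True when one group's selection rate falls below threshold x the other's."""
        ra, rb = self.selection_rate(group_a), self.selection_rate(group_b)
        if ra != ra or rb != rb or max(ra, rb) == 0:   # not enough data yet
            return False
        return min(ra, rb) / max(ra, rb) < self.threshold

# Usage sketch: feed decisions in as they are made; the alert is meant to fire
# well before a skew becomes widespread harm.
monitor = DisparityMonitor(window=500, threshold=0.8)
for i in range(200):
    monitor.record("group_a", i % 2 == 0)    # ~50% selection rate
    monitor.record("group_b", i % 4 == 0)    # ~25% selection rate
print(monitor.alert("group_a", "group_b"))   # True: 0.25 / 0.5 is below 0.8
```

In practice the metric, the window and the escalation path would all depend on the system being monitored; the point is only that drift can be watched for continuously rather than audited after the fact.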
Biases are an essential element of human intelligence. If we need these cognitive tools to extract meaning from a universe too vast to completely comprehend, we should embrace the trade-offs necessary for their use in the age of AI.