Issue #11: Artificial Intelligence

Artificial Intelligence, Bias, And Opportunity

I do not want to sound like a tin-foil hat conspiracy theorist, but your country, your neighbourhood, your job…your entire life is already controlled by artificial intelligence. If you do not like the bit in Star Trek: Discovery where we find that Control has taken over Starfleet, then I have bad news: an AI is running the military, too.

The kind of AI used in these contexts supports decisions by sorting inputs into categories. Usually the rules that govern these categorisations are derived by learning from many existing cases. When a new case arises that appears not to fit the existing rules, there is often some process by which the AI can update its rules to take the case into account.

The type of AI I am talking about, the one which already runs huge swathes of society, is the bureaucracy. A decision support system that encourages many people to make compatible decisions in support of a common goal. The usual implementation of artificial intelligence inside such an AI product is the policy. A policy is a decision tree that categorises inputs, where the categories are frequently described in terms of the next action to take. Bureaucracies change their policies to take into account updated information from sources such as management escalations, complaints, lawsuits, and legislative changes. In other words, they can be trained and they learn from new data.
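To make the analogy concrete, here is a minimal sketch of a policy as a decision tree whose leaves are next actions rather than mere labels, and of "training" as grafting on a new branch in response to a complaint. All the names, thresholds, and fields are invented for illustration; they are not drawn from any real bureaucracy.

```python
# Illustrative sketch only: a bureaucratic "policy" as a decision tree
# whose categories are described in terms of the next action to take.
# Every field name and threshold here is an invented example.

def benefits_policy(case):
    """Toy policy: categorise a case into the next action to take."""
    if case["income"] > 30_000:
        return "reject"
    if case["documents_complete"]:
        return "approve"
    return "request more documents"

def amended_policy(case):
    """The 'trained' policy: a complaint or lawsuit adds a new branch."""
    if case.get("appeal_upheld"):  # new rule learned from an escalation
        return "approve"
    return benefits_policy(case)  # otherwise, fall through to the old tree
```

The amendment does not rewrite the old tree; it wraps it, which is much how real policies accrete exceptions and precedents over time.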

Companies are bureaucracies, therefore companies are AIs. So are governments, civil service departments, universities, charities, local councils, sports clubs and hobby societies. The world has been running on AI for thousands of years.

Bureaucracies exhibit bias. If your department of motor vehicles is racist, if your department of social services exhibits class bias, or if your company pays women less than men, then what you have is a biased artificial intelligence. The reasons for the bias may not be clear; indeed, the specific actions or decisions that introduce the bias can be hidden. It is the outcome that is important, and the outcome can be biased.

This is why thoughtlessly replacing a bureaucracy with a computer-based AI system preserves the existing biases. The AI only has the existing system and its outputs to learn from; the outputs are the biased outcomes of the bureaucracy; therefore the AI learns to exhibit the same biases.
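A tiny sketch shows the mechanism: a "learner" that does nothing but copy the most common past decision per group will faithfully reproduce whatever skew was in the record. The groups and decisions below are invented data, and the learner is deliberately simplistic, but the same dynamic applies to more sophisticated models trained on the same kind of history.

```python
# Minimal sketch: a learner trained on a bureaucracy's past decisions
# reproduces the bureaucracy's bias. All data here are invented.
from collections import Counter

past_decisions = [
    ("group_a", "approve"), ("group_a", "approve"), ("group_a", "reject"),
    ("group_b", "reject"),  ("group_b", "reject"),  ("group_b", "approve"),
]

def train(history):
    """Learn the most common outcome per group -- nothing else."""
    by_group = {}
    for group, outcome in history:
        by_group.setdefault(group, Counter())[outcome] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = train(past_decisions)
# The model mirrors the skewed record it was trained on:
# model == {"group_a": "approve", "group_b": "reject"}
```

No line of this code mentions bias, yet the trained model is biased, because the only signal it ever saw was the biased outcome.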

And this is why the popularisation of computer-based AI is a huge opportunity. With each system, we get to evaluate the system it is replacing and ask ourselves whether we want to perpetuate the biases of the existing bureaucracy, or build a new system that is free from those biases. The AI transformation is not at all about the technology, and all about the people and how they interact.

Cover photo by sk on Unsplash.

Graham is a senior Research Software Engineer at Oxford University. He got hooked on making quality software in front of a NeXT TurboStation Color, and still has a lot to learn.