media update's Nikita Geldenhuys takes a closer look at this question.

Some researchers have been calling for stronger ethical standards in AI. They cite examples of AI technology harming people, and of AI systems that show racial, gender or socioeconomic bias.

What they’re asking is that AI product developers take the time to understand the effects their technology could have, and how it could harm people. With that understanding, companies can build AI products that are harder to misuse and less likely to go off the rails.

Is it too soon to think about ethics in AI?

It’s easy to fall into the trap of assuming that we don’t have to worry about ethical rules for AI now. After all, we don’t have ‘humanoid’, artificially intelligent robots walking among us yet.

But building ethics into the AI technology that we’re developing now means that our understanding of ethical AI can grow as this field advances.

Also, the AI systems being implemented in the world today already have an impact on people, workplaces and industries. We can influence, to some extent, whether this impact is good or bad – which comes down to whether we’re ready to talk about ethics in AI.

To understand why ethical AI is a good thing for everyone, you need to understand some of the dangers of AI without ethics:


AI can become really, really biased

Bias creeps into AI – and specifically into machine learning algorithms – when human preferences or prejudices are reflected in the training data that machines learn from.

Training data that is skewed or incomplete can seriously affect the information that machine learning systems deliver – and put people at risk.

In some places, machine learning algorithms are being used to screen job applicants, approve loans and even recommend parole for incarcerated individuals. When these algorithms create models based on data that isn’t representative, they can treat people unfairly and turn down job, loan or parole applications based on race, gender and other factors.
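To make this concrete, here is a minimal sketch in Python (using NumPy and scikit-learn, with entirely synthetic, hypothetical data) of how historical bias in training labels gets absorbed by a model:

```python
# A minimal sketch, assuming NumPy and scikit-learn are installed.
# The data is synthetic and hypothetical: past approval decisions
# were biased against one group, and the model learns that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Feature 0: a genuine qualification score.
# Feature 1: a protected attribute (e.g. group membership) that
# should be irrelevant to the decision.
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Historical decisions penalised group 1 regardless of skill, so the
# training labels encode that unfairness.
approved = (skill + rng.normal(scale=0.5, size=n) - 0.8 * group) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, approved)

print("weight on skill:", round(model.coef_[0][0], 2))
print("weight on protected attribute:", round(model.coef_[0][1], 2))
# The second weight is strongly negative: the model has learned to
# discriminate, even though nothing in the code told it to.
```

Nothing in this sketch instructs the model to discriminate; it simply reproduces the pattern in its training data – which is exactly how real-world systems end up biased.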

Ethical developers should reduce, and where possible eliminate, bias in their machine learning-powered products.
Read more about this dilemma in our article, Bias in machine learning – it’s real and it’s bad.



AI can spin out of control, with no way to stop it

Without a kill-switch or a fail-safe mechanism, AI can wreak havoc. Without ethical standards, AI developers won’t be obliged to build systems in a way that allows them to be safely shut down.

Most AI product developers want to give their customers a product they can trust – which means a product they have control over. But what if a company rushes its product to market and doesn’t spend time developing a safe way for users to switch it off?

An article on TechEmergence, by Daniel Faggella, describes a possible scenario where an autonomous car is hacked while driving on the highway. If the car manufacturer didn’t design the car’s kill-switch carefully, then pressing the switch could cause the car to spin out of control and harm people.
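The design difference matters. Here is a hypothetical sketch in Python – not based on any real vehicle system – contrasting a naive kill-switch with a fail-safe that brings the system to a controlled stop:

```python
# A hypothetical sketch of kill-switch design. All names and numbers
# are illustrative, not drawn from any real autonomous vehicle.

class VehicleController:
    def __init__(self, speed_kmh: float):
        self.speed_kmh = speed_kmh
        self.stopping = False

    def hard_kill(self) -> None:
        # Naive kill-switch: cut control instantly. At highway speed
        # this leaves the vehicle unguided - the failure mode the
        # hacked-car scenario warns about.
        self.speed_kmh = 0.0

    def request_safe_stop(self) -> None:
        # Fail-safe: flag a shutdown and let the control loop bring
        # the vehicle to a halt gradually, with steering still active.
        self.stopping = True

    def tick(self) -> None:
        # One step of the control loop: decelerate smoothly once a
        # safe stop has been requested.
        if self.stopping and self.speed_kmh > 0:
            self.speed_kmh = max(0.0, self.speed_kmh - 5.0)

car = VehicleController(speed_kmh=120.0)
car.request_safe_stop()
while car.speed_kmh > 0:
    car.tick()
print("Vehicle brought to a controlled stop.")
```

The point of the sketch is the second method: shutting an AI system down safely is a design decision that has to be made up front, not bolted on afterwards.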

Thinking about the ways AI technology can fail, and how to stop these systems safely when it does, is simply the ethical thing to do.

AI can make decisions that humans can’t understand

Why is this a problem? If an AI starts behaving in a way it wasn’t designed to, humans have to be able to diagnose the problem. If a self-driving car causes an accident it could have avoided, its creators need to understand why its AI ‘brain’ made the decisions that led to the accident.

But some AI systems, especially those that are built to make complicated decisions, follow intricate and sophisticated processes that are nearly impossible for humans to understand.  

Brian Green from the Markkula Center for Applied Ethics uses this example to explain just how complicated some AI processes can be for humans to grasp: “In 2014 a computer proved a mathematical theorem called the ‘Erdős discrepancy problem’, using a proof that was, at the time at least, longer than the entire Wikipedia encyclopedia.”

He says that explanations like this proof might be accurate, but humans have no way of verifying that. And if developers can’t understand why their technology behaves the way it does, they won’t be able to fix it when it goes wrong.
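By contrast, some simpler models can be inspected directly. This sketch (assuming scikit-learn is installed) shows the kind of human-readable decision trace that transparent models offer – and that enormous, machine-generated proofs and deep neural networks do not:

```python
# A minimal sketch, assuming scikit-learn is installed. A small
# decision tree can print its entire decision process as a few
# human-readable rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(data.data, data.target)

# Every prediction the model makes can be traced through these rules.
print(export_text(tree, feature_names=list(data.feature_names)))
```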

AI can have a positive impact on people, but that requires us to decide what we want our future with AI technology to look like. It’s not too early to insist that AI products be developed according to ethical standards. AI is already here, it’s already having an impact – and ethics will help determine whether it does more good than harm.

Stay up-to-date with our articles by subscribing to our newsletters.
The AI technology dreamt up for sci-fi movies is very different from AI products we use today. Learn more about the two types of AI in our article, What’s the difference between weak and strong artificial intelligence?
Image designed by Freepik