To be ethical, AI must be explainable. Here's why

In the limelight

The ethical implications of AI have been a hot topic of debate for years now—and for good reason. 

As AI becomes more advanced, it’s being relied upon to make increasingly important, and often automated, decisions with ever wider-reaching impacts on people, planet and profit. So it’s essential that we’re able to understand why these systems make the decisions they do.

Unlike black-box AI, explainable AI is built in a way that enables users to understand why it produces the outputs it does and how it reaches its decisions. Explainability applies to the whole system: the way the model has been designed and developed, the data it uses, how it has been deployed and the outputs it produces.

Explainability allows humans to understand why an AI produces a specific output.

3 reasons why explainability is an integral part of ethical AI

1. Improving the quality of outputs

Quite simply, if you don’t know why an AI is producing low-quality or biased outputs, fixing them is going to be very difficult. An explanation of how the AI came to a particular decision enables data scientists to pinpoint the root cause of an issue and take corrective action, both during development and throughout the ongoing maintenance of the system. In fact, explainable AI also improves the efficiency of development and MLOps teams by surfacing issues quickly and catching them early.
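
To make this concrete, here’s a minimal, hypothetical sketch (not Datasparq’s own tooling) of one common explainability technique, permutation importance. It measures how much a model leans on each input, which is often the first clue when tracing a low-quality or biased output back to its source. The dataset and feature names below are invented.

```python
# Minimal sketch: ranking feature influence with permutation importance.
# Synthetic data and made-up feature names stand in for a real system.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
feature_names = ["age", "income", "tenure", "postcode_risk", "usage", "channel"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much performance drops:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[idx]:<14} {result.importances_mean[idx]:.3f}")
```

If a feature that shouldn’t matter much, such as a postcode-derived one, dominates the ranking, that’s the place to start investigating.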

2. Building trust with users

Explainable AI helps users understand where their data is being used and how it affects the outputs of the system. 

It allows users to challenge the results if they feel they have been treated unfairly by the algorithm. 

It enables those responsible for AI-driven decisions to verify that they are indeed the right decisions to take, and that they meet the organisation’s ethics code, business objectives and regulatory requirements.

It supports the adoption of new technologies and systems by businesses and the public, helping to make investment in AI worthwhile. Winning users’ trust is essential for any company that uses AI.

How bias can unintentionally occur

Jeremy Bradley, Chief Data Scientist at Datasparq, shared a fascinating insight with me.

“Bias in AI isn’t necessarily the result of a model singling out a demographic trait. It often forms through secondary data points. If there are lots of people with overlapping demographic traits living in the same area, for instance, a model might exhibit bias against a group by discriminating on the basis of a postcode.”

That’s why companies, when collecting data, often include an optional section asking users to share demographic information. It’s understandable that users might be wary; it can look as though this is exactly how bias gets built in. But it’s actually the opposite.

“It might seem odd, but having this additional demographic data makes bias more preventable. When we have the primary data, we can find and fix any bias resulting from the secondary data.”
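
As a minimal sketch of what that looks like in practice (the data and column names here are made up), having the optional demographic field makes a simple disparity check possible: compare outcomes across groups and flag any gap that a proxy such as postcode may be introducing.

```python
# Hypothetical sketch: with demographic data collected, we can check whether
# decisions driven by a proxy (e.g. postcode) fall unevenly on one group.
import pandas as pd

decisions = pd.DataFrame({
    "postcode_area": ["N1", "N1", "E5", "E5", "E5", "SW3", "SW3", "N1"],
    "demographic_group": ["A", "A", "B", "B", "B", "A", "A", "B"],
    "approved": [1, 1, 0, 0, 1, 1, 1, 0],
})

# Approval rate per demographic group: a large gap flags possible proxy bias
# that would stay hidden if the demographic column were never collected.
rates = decisions.groupby("demographic_group")["approved"].mean()
print(rates)
print("Approval-rate gap:", rates.max() - rates.min())
```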

3. Sharing learnings

At Datasparq, we believe that AI works best when it works with humans—as part of the team.

So, just as you wouldn’t expect a colleague to refuse to tell you why they set a certain price for an airline ticket, you shouldn’t accept an AI that refuses, or simply isn’t able, to give you an answer either.

Understanding why AI has made a particular suggestion can help a human identify the best way to practically use it, too. 

For example, understanding what has led to a prediction that a specific risk will occur provides insight into which business change or intervention could address and mitigate that risk.
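
As a rough illustration, assuming a simple linear risk model and invented feature names, the per-feature contributions to a single prediction can point to the factor an intervention should target.

```python
# Hypothetical sketch: breaking one risk prediction into per-feature
# contributions so a team can see which factor an intervention should target.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["late_payments", "contract_age_months", "support_tickets"]

# Tiny synthetic training set standing in for real risk data
X = np.array([[3, 2, 5], [0, 36, 1], [4, 6, 7], [1, 24, 0], [5, 3, 9], [0, 48, 2]])
y = np.array([1, 0, 1, 0, 1, 0])
model = LogisticRegression().fit(X, y)

# For a linear model, each feature's contribution to the log-odds is simply
# coefficient * value; the largest positive term is the biggest risk driver.
customer = np.array([2, 4, 6])
contributions = model.coef_[0] * customer
for name, value in sorted(zip(feature_names, contributions), key=lambda t: -t[1]):
    print(f"{name:<22} {value:+.2f}")
```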

When AI is explainable, we humans can learn from its decision-making and build on it, and likewise, intervene and course-correct AI systems when we know something that the machine doesn’t.

Incoming legislation

Explainable AI is already playing an important role in helping organisations demonstrate that legal, regulatory and compliance requirements are being met.

Given the importance of explainability in AI, it won’t come as a surprise to hear that the principle will likely be entrenched in law in the coming years.

The EU is developing its AI Act, set to enshrine the principles discussed above in law. Though it’s still in the early stages of development, it could become law by the latter half of 2024.

Similarly, the UK has proposed an AI rulebook in its AI Action Plan. 

So it’s worth making sure that any AI systems you’re currently using are prepared for the new guidelines.

Wrapping up

Explainable AI is key to ethical AI. It helps identify and fix bias, builds trust with users and allows us to learn from and build on the decisions that machines make.

And with legislation set to make explainability a legal requirement, now is the time to start making sure your systems are ready.
