The importance of explainable AI in a digital world

European Union (EU) lawmakers recently introduced new rules that will shape how companies use artificial intelligence (AI). At the heart of the legislation is the need to maximise the potential of AI without infringing on privacy laws or fundamental human rights. But to do this effectively, we must be able to explain the outcomes of the AI systems being built – and the decisions they make. Eduardo Gonzalez, Chief Innovation Officer at Global AI Ecosystem Builder, Skymind, shares his insight.

What is Explainable AI and why does it matter? 

Explainable AI is artificial intelligence where the results of the solution can be understood and explained by humans. As a problem, however, explainable AI is as old as AI itself. 

Artificial intelligence is becoming a significant part of our daily lives in our digital world, from fingerprint identification and facial recognition to predictive analytics. We are finding ourselves in a position where we have no choice but to trust the outcomes of these AI-driven systems.

But how did the AI application come up with the decision it made? 

You can always ask a person to explain themselves. If they make a decision, you can ask them why, and they can then verbalise their reasoning – but that is exactly what is missing from AI.

There are a few ways that we have tried to make AI systems 'explainable'. The most basic is through artificial neural networks, where algorithms are used to recognise relationships within data sets and we interpret the results based on what the input data was.

However, when it comes to things like images, machine learning can pick up on features in your dataset that you weren’t focused on and, therefore, won’t give you the desired outcomes.

For example, suppose you are training a system to distinguish between a fox and a dog, because the animal is what is important for the AI to identify. In that case, you have to find a way to explain what the model is actually looking at. If you take an image containing a fox and a dog, cover up the fox, and ask the system what animal it is looking at, and it still says 'fox', then the system is not basing its decision on the animal at all – it is using something else in the image.
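One common way to run this kind of "cover it up and see what changes" test in practice is occlusion sensitivity. The sketch below is a minimal, hypothetical version: it assumes you already have a trained classifier exposed as a `predict_fn` that returns class probabilities, and the patch size, stride, and fill value are illustrative placeholders rather than values from any real system.

```python
import numpy as np

def occlusion_map(image, predict_fn, target_class, patch=16, stride=8, fill=0.5):
    """Slide a grey patch across the image and record how much the predicted
    probability of `target_class` drops at each position. Large drops mark
    regions the model actually relies on for its decision."""
    h, w = image.shape[:2]
    baseline = predict_fn(image)[target_class]
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            y, x = i * stride, j * stride
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill  # mask this region
            heat[i, j] = baseline - predict_fn(occluded)[target_class]
    return heat  # high values = evidence the model was using that region
```

If masking the region that contains the fox barely changes the 'fox' score, the model is evidently keying on something else in the image, such as the background.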

The most famous case of this kind of unexpected outcome came when the Pentagon wanted to use artificial neural networks to automatically detect camouflaged enemy tanks. They trained a neural network on photos of camouflaged tanks among trees and photos of trees without tanks. The researchers got the results they wanted – the system worked well for them – but when the Pentagon tested it in real life, it failed, because it was looking at the weather, not the tanks. The photos of camouflaged tanks had been taken on cloudy days, while the photos of the plain forest had been taken on sunny days. The neural network had learned to tell cloudy days from sunny days instead of distinguishing camouflaged tanks from an empty forest.

AI, as it stands now, can't handle things that aren't linear. If we could simply ask the model what it is looking at, and it could tell us, this would be easy – but it can't communicate yet. This is why we need something explainable.

What should business leaders who are looking to embrace this technology look out for?

Leaders should first and foremost look at overcoming unconscious and conscious bias in explainable AI.

Unconscious bias is already a big problem in AI – so we need to find ways to curtail this – otherwise, businesses and countries will miss out on reaching their full economic potential.

Bias in AI can lead to all kinds of problems, for example in recruiting talent. Systems can now be trained to screen resumes, and if there is any bias in the datasets, the model will learn it and discriminate against candidates.

For example, an applicant might have a feminine-sounding name on a CV that a system is screening. Because of some implicit human bias against that name in the data the model was trained on, it picks that bias up, discards the CV, and decides not to hire a potential candidate as an engineer.

When training a model, there are ways to prevent this outcome by weighting certain things, such as giving the system a terrible score if it shows gender bias of any kind. The other way to negate this is by removing the kind of data that leads to problems: if you remove the name field from a CV, you don't have to worry about the model learning that bias.
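As a rough illustration of both approaches – dropping the fields that leak the bias, and penalising biased behaviour during training – here is a small, hypothetical sketch. The column names, data, and penalty term are assumptions made for illustration only, not part of Skymind's actual pipeline.

```python
import pandas as pd

# Hypothetical CV data – column names and values are illustrative only.
cvs = pd.DataFrame({
    "name":      ["Alex", "Maria", "Sam", "Priya"],
    "gender":    ["m", "f", "m", "f"],
    "years_exp": [4, 6, 2, 5],
    "hired":     [1, 0, 1, 0],
})

# Mitigation 1: remove the fields that carry the bias directly,
# so the model never sees names or gender at all.
features = cvs.drop(columns=["name", "gender", "hired"])
labels = cvs["hired"]

# Mitigation 2: measure how differently the model treats each group,
# and penalise it for any gap.
def selection_rate_gap(predictions: pd.Series, groups: pd.Series) -> float:
    """Absolute difference in positive-prediction rate between groups."""
    rates = predictions.groupby(groups).mean()
    return float(rates.max() - rates.min())

# During training, the gap can be added to the loss so a biased model
# gets a "terrible score", e.g.:
#   total_loss = prediction_loss + penalty_weight * selection_rate_gap(preds, cvs["gender"])
```

Which mitigation fits best depends on the data: removing a name field is cheap, but proxies for gender can remain elsewhere, which is where a penalty on the outcome gap helps.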

Explainable AI – the call for a good system

Many use cases demand explainable AI, and we will not get very far if we don't have a good system for it. Anything that touches the legal system in the future will require explainable AI, for example, and without it many court cases could end up being thrown out.

A good example is self-driving cars: if an accident happens and a court hearing requires an explanation of why the computer made the decision it did, that decision needs to be evidenced. If the car's developers cannot evidence it, then the case will be dismissed.

Where will we need explainable AI the most?

Where we need explainable AI the most is in the medical field. One of the things we are doing is developing a dental AI system. We are implementing the system from a dentist's perspective, and what they want to see is evidence that helps them explain why certain medical actions should be taken.

For example, the impacted-tooth metric in our system grades each case according to how difficult it will be to address. A dentist can look at the X-ray and, in a few seconds, see what they need to do, while the system makes the case for whether or not the patient needs surgery. The system is trained to check certain rules – such as whether the tooth's root touches the nerve in the jaw, which can complicate things, or whether the tooth and the nerve intersect.
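The rule-checking behind such an explanation can be pictured with a small sketch. The one below is hypothetical: it assumes the system has already segmented the tooth root and the nerve canal into boolean masks, and it is not the actual implementation described here – just an illustration of how "does the root touch or intersect the nerve" can be turned into an explicit, explainable rule.

```python
import numpy as np

def grade_impaction(tooth_mask: np.ndarray, nerve_mask: np.ndarray) -> str:
    """Apply simple geometric rules to same-shape boolean segmentation masks
    for a tooth root and the jaw nerve canal."""
    if np.any(tooth_mask & nerve_mask):
        return "intersect"   # most complicated case: tooth and nerve overlap
    # Dilate the tooth mask by one pixel to detect whether the regions touch.
    padded = np.pad(tooth_mask, 1)
    dilated = (tooth_mask |
               padded[:-2, 1:-1] | padded[2:, 1:-1] |
               padded[1:-1, :-2] | padded[1:-1, 2:])
    if np.any(dilated & nerve_mask):
        return "touching"    # root touches the nerve, which complicates surgery
    return "clear"           # safely separated from the nerve
```

Because each grade maps to an explicit geometric condition, the dentist can check the same condition on the X-ray themselves, which is what makes the output explainable rather than a black-box score.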

The dentist can see where the AI system made its prediction, so they can verify and trust the machine's decision. It decreases the dentist's workload and saves them the time of writing the report by giving them that critical second opinion.

Using the X-ray, the AI system can also flag other potential conditions not typically diagnosed with X-rays, such as cavities. Instead of the dentist spending ten minutes sweeping the image with a magnifying glass, the AI system can instantly identify things they might have missed and order another test to get a better look. AI can also predict whether a difficult surgery is required, which helps the dentist choose an appropriate specialist to refer the patient to.


Explainable AI, Diversity and the Future

AI will play a much bigger part in our lives in the near future, but we need to do more to make sure that the outcomes are beneficial to the society we aim to serve. That can only happen if we continue to develop better use cases and prioritise diversity when creating AI systems. This will help mitigate conscious and unconscious biases and deliver a better overall picture of the real-world issues we are trying to address.
