By the year 2030, 6G services are expected to demand a staggering 1,000 times today’s data rates, manage diverse service requirements and host a wide array of solutions including augmented reality (AR), virtual reality (VR), robotics, telematics, autonomous vehicles, brain-computer interfaces, and smart cities. The result is an exceptionally complex 6G network architecture that requires AI-based solutions for automation and the creation of a native AI network. However, the lack of transparency in AI-driven network operations raises concerns, such as a limited understanding of how these networks function and vulnerability to adversarial attacks.
What is Explainable AI?
Explainable AI (XAI) is a set of techniques and processes that allow humans to understand the results and outputs of artificial intelligence (AI) models. AI models are often complex and opaque, which makes it hard for humans to confidently trust the decisions and recommendations they produce.
A major problem, however, is that deep-learning methods are largely uninterpretable, a property commonly referred to as black-box AI. Black-box systems produce results from a dataset without revealing their inner workings to end users: the model draws conclusions from the data it is given, but the specific mechanisms behind those conclusions remain hidden, even to the expert engineers who write the algorithms and understand many of AI’s complexities.
To combat this, white-box AI puts explainability at the core of the model, allowing humans to understand and trust how the model reached its answer, why it predicted a given outcome and which data it used to arrive at its results. Users of the AI service can therefore see what the model based its conclusion on and validate the answer it has produced. This is particularly attractive to businesses: if an AI system produces misleading output with no way of seeing how it reached its conclusion, the repercussions for reputation and consumer trust can be serious.
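To make the contrast concrete, a simple model-agnostic technique such as permutation feature importance can show which inputs a trained model actually relied on. The sketch below is purely illustrative: the data is synthetic and the feature names are hypothetical, and it is not tied to any particular 6G model.

```python
# Minimal sketch of feature attribution with permutation importance.
# Synthetic data and hypothetical feature names, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical inputs: three features, only the first two actually drive the target.
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's score drops, giving a model-agnostic view of what the model used.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["feature_0", "feature_1", "feature_2"], result.importances_mean):
    print(f"{name}: importance ~ {score:.3f}")
```

In this toy setup the third feature should come out with an importance close to zero, which is exactly the kind of insight that lets a user validate what the model based its answer on.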

Building Trust and Ethics in AI
When we understand how AI systems operate, we naturally feel more confident in their decision-making abilities. Explainable AI (XAI) acts as a bridge, unlocking the full capabilities of AI models while keeping them safe and understandable to use. But XAI does more than demystify AI; it also helps identify and address bias in AI systems. These biases can sneak in through the data AI systems are trained on, reflecting the biases present in our society.
Additionally, XAI encourages responsible and ethical AI use. When people have a clearer understanding of how AI systems impact our world, they’re more likely to use them in ways that benefit society as a whole. In simple terms, XAI is our guide in ensuring that AI reaches its potential while keeping things ethical and trustworthy, all while bridging the gap between humans and the AI that surrounds us.

ENABLE 6G’s role
The ENABLE 6G project is researching how Explainable AI can be used in the next generation of wireless networks. One of the primary motivations for embracing XAI in wireless networks is to ensure confidence and trust in the network system that will run AI models. The lack of transparency is a significant challenge in Deep Neural Networks (DNNs), especially when combined with Reinforcement Learning (RL), where hidden-layer dynamics further obscure the AI’s decision-making processes. In this pursuit, a recently published study titled “Towards Native Explainable and Robust AI in 6G Networks” marks a significant step towards enhancing the transparency and trustworthiness of AI in the context of 6G networks.

Example 1: Forecasting via Machine Learning
ML techniques, specifically Long Short-Term Memory (LSTM) recurrent neural networks (RNNs), are applied to perform forecasts essential for network management, including predictions related to traffic load, cell congestion, and routing. LSTM networks are particularly well suited to tasks involving sequential data, such as natural language processing, speech recognition and time series analysis. This example offers insights into mobile traffic collected at regular intervals, demonstrating the potential of ML models. Additionally, the ENABLE 6G team developed a tool capable of explaining LSTM model operations and identifying the causes of AI model errors, providing transparency and interpretability in network forecasting.
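The project’s datasets and explanation tool are not reproduced here, but the forecasting side can be sketched in a few lines. The example below trains an off-the-shelf Keras LSTM on a synthetic, periodic traffic series; the traffic shape, window length and model size are illustrative assumptions only.

```python
# Minimal sketch of LSTM-based traffic forecasting on synthetic data.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(42)

# Hypothetical mobile-traffic series: a daily-looking periodic load plus noise.
t = np.arange(1000)
traffic = 50 + 20 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 2, size=t.size)

# Turn the series into sliding windows: predict the next sample from the last 24.
window = 24
X = np.stack([traffic[i:i + window] for i in range(len(traffic) - window)])
y = traffic[window:]
X = X[..., np.newaxis]  # shape (samples, timesteps, features)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# One-step-ahead forecast from the most recent window.
last_window = traffic[-window:].reshape(1, window, 1)
print("next-interval traffic forecast:", float(model.predict(last_window, verbose=0)[0, 0]))
```

An explanation layer such as the one described above would then sit on top of a model like this, attributing forecast errors to particular inputs or time steps.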
Example 2: Slicing with Deep Reinforcement Learning
Deep Reinforcement Learning (DRL) is a type of AI that can be used to train agents to make decisions in complex environments. In the context of 6G networks, DRL can be used to manage network slices in a way that optimises performance for all users. The DRL agent closely monitors Key Performance Indicators (KPIs) such as transmitted bitrate, transmitted packets and downlink buffer size, observed across various network slices including eMBB, eMTC and URLLC, and adjusts the network as needed. Explainable Reinforcement Learning (XRL) is then used to help network operators understand the decisions made by DRL agents and to suggest alternative actions that may improve user KPIs through programmable policies.
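The project’s DRL agent and XRL policies are not reproduced here. As a rough illustration of the underlying idea, value-based reinforcement learning plus exposing the learned values behind a decision, the sketch below uses tabular Q-learning rather than a deep network, on a made-up slice-congestion environment; the states, actions and rewards are illustrative assumptions.

```python
# Toy sketch of RL-based slice management with a rudimentary explanation step.
# Tabular Q-learning stands in for a deep network; the environment is made up.
import numpy as np

rng = np.random.default_rng(0)
slices = ["eMBB", "eMTC", "URLLC"]

n_states = len(slices)   # state: index of the slice currently missing its KPI target
n_actions = len(slices)  # action: which slice receives extra radio resources
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def step(state, action):
    # Toy reward: +1 for helping the congested slice, -1 otherwise.
    reward = 1.0 if action == state else -1.0
    next_state = rng.integers(n_states)  # the next congested slice appears at random
    return reward, next_state

state = rng.integers(n_states)
for _ in range(5000):
    action = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[state]))
    reward, next_state = step(state, action)
    # Standard Q-learning update.
    Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
    state = next_state

# Rudimentary "explanation": expose the learned action values behind a decision.
congested = 2  # pretend URLLC is missing its latency KPI
best = int(np.argmax(Q[congested]))
print(f"{slices[congested]} congested -> allocate to {slices[best]}")
for a, name in enumerate(slices):
    print(f"  expected value of boosting {name}: {Q[congested, a]:.2f}")
```

Printing the learned action values is only the crudest form of explanation; XRL methods go much further, but the principle of surfacing why an action was chosen is the same.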
Legal requirements
The EU AI Act, introduced by the Commission in 2021, is a regulatory framework for AI systems operating within the European Union. However, due to the intricacy of implementing new regulations, it won’t officially come into force until 2025. This marks a significant milestone, as it’s the first comprehensive AI law presented by a major regulatory body. While several countries, such as the US, China and India, are also working on AI regulation, the EU AI Act stands out for its comparatively rigorous approach.
Explainability will play a significant role in AI companies’ compliance with the EU AI Act. Companies that develop AI models, particularly deep-learning ones, without ensuring transparency and explainability could face tougher approval processes and even fines of up to €30,000,000 or up to 6% of their total worldwide annual turnover for the preceding financial year if found to breach the regulation.
The EU AI Act classifies AI applications into 4 risk categories.
- Unacceptable Risk: This category includes AI systems that manipulate human behaviour or exploit vulnerabilities leading to harm, such as the government-run social scoring system seen in China.
- High-Risk Applications: Examples of high-risk applications are tools like CV-scanning algorithms used to rank job applicants. These applications are subject to specific legal requirements to ensure transparency and fairness in their operations.
- Limited Risk: This category includes AI systems presenting manageable risks, which must meet specific transparency measures to inform users that they are interacting with AI, but otherwise operate with a relatively lighter regulatory touch that helps foster innovation.
- Minimal Risk: Most AI applications fall under this category, where safety and fundamental rights risks are minimal, promoting innovation without imposing specific legal obligations under the EU AI Act. This comprehensive categorisation aims to strike a balance between fostering innovation and safeguarding the welfare of society.

Conclusion
Explainable AI is a rapidly developing field, and new XAI techniques are being developed all the time. As AI models become more complex and powerful, XAI will become increasingly important for ensuring that AI is used responsibly and ethically. There is a growing awareness of the need to strike a balance between the benefits and potential drawbacks of AI models. Given the expected size, complexity and importance of 6G networks, it becomes even more vital to understand and trust the AI models on which they will depend.
| Acronym | Term | Definition |
|---|---|---|
| XAI | Explainable AI | AI systems designed in a way that their decisions and reasoning can be easily understood and explained to humans. |
| VR | Virtual Reality | A technology that creates a computer-generated, immersive environment, allowing users to interact with and experience a simulated reality. |
| AR | Augmented Reality | Technology that overlays digital information or virtual elements onto the real world, enhancing a person’s perception of their surroundings. |
| DNN | Deep Neural Networks | A type of artificial neural network with multiple layers (deep layers) used for complex tasks like image and speech recognition. |
| RL | Reinforcement Learning | A machine learning approach where an agent learns to make decisions by interacting with an environment and receiving feedback in the form of rewards or penalties. |
| ML | Machine Learning | A field of AI that enables machines to learn from data and make predictions or decisions without being explicitly programmed. |
| LSTM | Long Short-Term Memory | A type of recurrent neural network (RNN) designed to capture long-term dependencies in data, often used in natural language processing and time series analysis. |
| RNN | Recurrent Neural Networks | A type of neural network designed for sequences and time series data, with connections that allow information to flow backward and forward through the network. |
| DRL | Deep Reinforcement Learning | Combining deep learning and reinforcement learning to train AI agents for complex tasks, often with deep neural networks as function approximators. |
| KPI | Key Performance Indicators | Metrics used to measure the performance or effectiveness of a process, system, or organisation. |
| eMBB | Enhanced Mobile Broadband | An advanced wireless communication technology that offers faster data speeds and improved connectivity for mobile devices. |
| eMTC | Enhanced Machine-Type Communication | A category of communication in 5G and beyond networks tailored for efficient communication between Internet of Things (IoT) devices. |
| URLLC | Ultra-Reliable Low Latency Communications | A communication standard that provides high-reliability and low-latency connections, crucial for applications like AR, VR, autonomous vehicles and industrial automation. |
| XRL | Explainable Reinforcement Learning | An approach to reinforcement learning that focuses on making the decision-making process of AI agents more interpretable and understandable. |