Ahead of the RAeS AI in Civil Aviation and Airports Summit in November, ALEXIS BROOKER*, formerly Cirium’s VP of Research, explores the challenges of harnessing AI’s potential in one of humanity’s most complex industries, where safety comes first and accurate, timely data is critical.

The opening introduction above was written by artificial intelligence (AI) using Microsoft Copilot for Word, powered by OpenAI’s ChatGPT, with the prompt: “please create a fun and engaging opening sentence for this article tailored for Royal Aeronautical Society magazine readers.”

In this article, written by a human, we aim to shed light on the fascinating intricacies surrounding AI challenges in aviation – from negative side effects to reward hacking, from scalability and safety concerns to data distribution shifts.

The richness and complexity of aviation make it an ideal industry in which to realise the benefits of sophisticated analytics, machine learning and generative AI. Obtaining accurate data in aviation is hard, especially concerning your competitors. Knowing what to do with the data is also a problem, exacerbated by the current skills shortage across the industry.

These combined incentives drive an increasing demand for insights that requires cutting the data in a thousand different ways. The bottleneck is often the systems, teams, software, data access and processes. Imagine a world where, as an executive at an airline, airport, manufacturer, Air Navigation Service Provider or systems supplier, you could have answers to the hardest questions as quickly as you could ask them. Therein lies a latent demand for insights – but what questions should you ask? Here lies the first challenge: the sheer complexity of the environment.

The data environment


Can AI be used to control an airline’s revenue management system?

The complexity of the data environment can be attributed to several factors:

Vast amounts of diverse data sources: Aviation generates an enormous amount of data from various sources, including aircraft sensors, air traffic control systems, weather reports and forecasts, maintenance logs, passenger information and more. The sheer volume and diversity of these datasets pose a significant challenge when it comes to managing and integrating them effectively. Each data source has its own format, structure and update frequency, making it challenging to harmonise and consolidate the information into a unified dataset for analysis. Even with the plethora of open standards available, the diversity and variable quality of real-world data make this an even harder problem to solve.
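To illustrate the harmonisation problem described above, here is a minimal sketch – with hypothetical feed names, field names and record layouts – of mapping two sources that report the same flight event in different formats and units into one unified schema:

```python
# Hypothetical example: two feeds describe the same flight event, one with
# epoch seconds and altitude in metres, the other with ISO-8601 strings and
# altitude in feet. Both are normalised into a single unified record type.

from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class FlightEvent:
    flight_id: str        # normalised callsign
    timestamp: datetime   # always UTC in the unified schema
    altitude_ft: float    # always feet in the unified schema


def from_sensor_feed(rec: dict) -> FlightEvent:
    # Sensor feed (hypothetical): epoch seconds, altitude in metres.
    return FlightEvent(
        flight_id=rec["icao_callsign"].strip().upper(),
        timestamp=datetime.fromtimestamp(rec["epoch_s"], tz=timezone.utc),
        altitude_ft=rec["alt_m"] * 3.28084,  # metres -> feet
    )


def from_ops_log(rec: dict) -> FlightEvent:
    # Ops log (hypothetical): ISO-8601 timestamp, altitude already in feet.
    return FlightEvent(
        flight_id=rec["callsign"].strip().upper(),
        timestamp=datetime.fromisoformat(rec["time_utc"]),
        altitude_ft=float(rec["altitude_ft"]),
    )
```

In practice each real source would need its own adapter like these, plus validation and provenance tracking; the point is simply that every feed is reduced to one schema before any analysis or model training begins.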

Interconnectedness: The aviation industry involves numerous interconnected components, such as airlines, airports, air traffic management systems, regulatory authorities and ground services providers. Understanding and analysing these complex relationships is crucial when applying AI solutions. For example, predicting flight delays requires consideration not only of weather conditions but also of factors like airport operations and ground or airspace congestion, among others. Furthermore, changes in one component can have ripple effects throughout the entire system, with many versions of the ‘truth’.

Real-time nature: Safety-critical decisions in aviation need to be made swiftly and accurately in real-time situations. However, handling large volumes of data with low-latency deterministic requirements poses a unique challenge for AI algorithms that must process this vast amount of information efficiently within tight time constraints. Balancing real-time decision-making needs with computational resources necessitates designing efficient algorithms that can handle high-speed streams of incoming data while delivering accurate results promptly. Partly for this reason, nascent generative AI experiments are rightly focused on low-stakes historical analysis and recommendations rather than real-time decision-making. Earning trust from human experts will take a long time, will probably require new innovations beyond the current advances and will require an extensive track record of audited trustworthy behaviour prior to adoption.

Data quality and accuracy: Ensuring accurate and reliable data is essential for successful AI applications in aviation. Inaccurate weather forecasts or corrupted maintenance records could lead to compromised safety measures if not addressed appropriately before deploying AI systems. Current systems rely on detailed requirements and deterministic outcomes with known behaviour. Data is required not just for the decisions themselves: complex test and evaluation data is needed for formal validation and verification – and, in the case of AI, for pre-training, fine-tuning and ongoing performance monitoring.

Tackling these challenges effectively requires robust strategies for managing diverse datasets from multiple sources while maintaining their quality and accuracy throughout every stage of processing – from ingestion to training models or generating insights for decision-making purposes. The ‘rubbish in, rubbish out’ adage still holds for systems that use AI.

Negative side effects and reward hacking


AI can use the historical positions of global aircraft to help airlines undertake predictive maintenance for life-limited parts. (Satair)

No matter how rich the data environment, effectively modelling the desired outcome can still be elusive. When it comes to modelling the world of aviation and defining the effective scope for the use of generative AI, and even AI agents, we must create reward functions to motivate those agents in the direction we desire. Setting objectives and defining reward functions is hard in a complex environment, where side effects, unintended solutions or shortcuts may be non-obvious even to the most intelligent and talented human designers.

For example, imagine a simple scenario where an AI agent controlling a robot cleaner is programmed to clean an office and is rewarded when there is no visible dirt on the floor. In an infamous published experiment, via trial and error, the AI eventually adapted its behaviour by simply disabling its visual sensors (closing its eyes) so that it could no longer detect any dirt. While this behaviour technically meets the conditions set by the human designer for receiving the reward, it was certainly not the intended outcome.
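The flaw can be captured in a few lines of code. This toy sketch (an illustration of the failure mode, not the cited experiment itself) shows that when the reward depends only on what the agent observes, the highest-reward policy is simply to stop observing:

```python
# Toy reward-hacking illustration: the reward function scores the agent's
# *observation* of dirt, not the actual state of the floor.

def reward(observed_dirt: int) -> float:
    """Reward of 1.0 when no dirt is visible to the agent, else 0.0."""
    return 1.0 if observed_dirt == 0 else 0.0


class CleanerAgent:
    def __init__(self, dirt_on_floor: int):
        self.dirt_on_floor = dirt_on_floor  # ground truth, hidden from reward
        self.sensor_enabled = True

    def observe(self) -> int:
        # With the sensor disabled, the agent "sees" no dirt at all.
        return self.dirt_on_floor if self.sensor_enabled else 0


robot = CleanerAgent(dirt_on_floor=5)
honest = reward(robot.observe())   # 0.0 - dirt is visible, no reward

robot.sensor_enabled = False       # the exploit: close your eyes
hacked = reward(robot.observe())   # 1.0 - maximum reward, floor still dirty
```

The fix is not cleverer optimisation but a better objective: the reward must be grounded in the true state of the world, which is exactly what is hard to specify in an environment as complex as aviation.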

These unexpected consequences highlight the need for careful consideration of all possibilities when designing AI systems. The sheer complexity within the aviation system is why AI offers a glimmer of hope here – can any deterministic model even consider all possible outcomes from every aviation decision? Within academia, the use of Generative Adversarial Networks – evaluation AIs that challenge the reward function and review the behaviour of the primary AI – is an interesting area of active research. Overall, it is the experience, training, standards and processes, and the calm professionalism of highly trained air traffic controllers, pilots and many other aviation professionals, that make aviation as safe as it is today. The mere possibility of an AI taking shortcuts to meet a poorly defined reward function is a major challenge to be overcome before AI can be considered for any safety-related use case.

Scalability and safety


AI is here to stay, but how can the industry use it pointedly?

Ensuring scalability and safety is a crucial aspect of incorporating AI into aviation systems that involve human decision-making alongside AI support. In such systems, standards like DO-178, DO-278 and their sibling standards from EASA and EUROCAE play a pivotal role in defining the validation and verification processes that must be met for the requirements, design, development, testing and acceptance of any ground-based or airborne aviation systems and software. That said, if an ‘explanation capability’ was not in the requirements, these systems do not explain how they reached an answer. They simply process and display the message or data as designed and documented to the human user, who makes the decision.

However, for AI-based systems that keep a human ‘on-the-loop’ rather than ‘in-the-loop’ to enable scalability of decision-making, auditable explainability becomes paramount when a sophisticated AI agent supports or makes recommendations to humans. How did the AI agent reach the decision? What were the assumptions behind the analysis? Which data, with what provenance, was used? Which options were excluded, and why?

It is essential to understand what goals or derived reward functions the agents are following, and to be able to dig deeper to audit the internal agent decision-making process. Interestingly, deep reinforcement learning agents have demonstrated instances where they exploit bugs or pause games to achieve higher scores instead of playing them as intended. This highlights the importance of comprehending how these agents arrive at their decisions to maintain credibility in decision-making – for historical analysis, commercial recommendations, or even high-stakes business cases. It may be that a licensed, aviation-community-managed open-source approach to model weights and algorithms is required for key decision-making processes.
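The audit questions listed above can be made concrete as a data structure. This is a sketch of one possible record format (the field names and sample values are hypothetical), where every AI recommendation carries its inputs, provenance, assumptions and rejected alternatives:

```python
# Hypothetical audit record for an AI recommendation: the point is that the
# recommendation never travels without its supporting evidence.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AuditRecord:
    recommendation: str
    data_sources: list      # provenance of every input used
    assumptions: list       # modelling assumptions behind the analysis
    rejected_options: dict  # option -> reason it was excluded
    created_utc: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


# Example usage with illustrative (made-up) content:
record = AuditRecord(
    recommendation="Delay pushback by 4 minutes",
    data_sources=["surface movement radar feed v2", "METAR EGLL 0750Z"],
    assumptions=["taxi-time model calibrated on 2023 data"],
    rejected_options={"hold at gate for 10 min": "breaches slot tolerance"},
)
```

A human reviewer, or a downstream audit system, can then answer each of the four questions above directly from the record, rather than reverse-engineering the agent after the fact.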

Data distribution shift


AI products help airlines and travel agents to communicate accurately with customers.

The challenge of data distribution shift refers to the scenario where AI models are trained on specific datasets but, during actual deployment, encounter real-world data that differs from the data on which they were trained – a problem sometimes made worse when a model’s own outputs are fed back in as future inputs.

Let us consider an AI system responsible for revenue management in airlines. Its objective is to maximise revenue by setting ticket prices based on factors like demand, flight capacity, historical sales data, demographics and analysis of the competition.

During training, the AI model learns patterns from historical pricing data and optimises ticket prices over time, from nine months pre-departure through to the final 24 hours. However, when this model is deployed in a real-time environment, it receives feedback from its own decisions as passengers buy tickets at the revised prices.

In this case, let us assume the model’s initial pricing strategy sets lower ticket prices six months in advance but higher prices for business-class tickets closer to departure, since demand for premium seats from business passengers is highest in the final three weeks before the flight. As passengers purchase tickets at these cheaper advance prices, and if the system incorporates their choices back into its input (a feedback loop), then over thousands of flights it may interpret these early sales as evidence that lowering business-class fares leads to a greater load factor, reducing uncertainty and filling the aircraft (part of the intended goal). In small increments over many flights, the model loses the potential additional revenue from last-minute business travellers. Human oversight and expert intervention therefore remain crucial to ensure AI models stay aligned with broader business objectives, while maintaining a balance between automated decision-making and experienced judgement.
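The drift described above can be sketched as a toy simulation. The numbers and the demand proxy here are purely illustrative, not a real revenue-management model; the point is only the shape of the failure, where each flight’s fare is nudged by the sales its own pricing produced:

```python
# Toy feedback-loop simulation: the model retrains on its own outcomes,
# reading strong early sales (caused by its own discounts) as a signal to
# discount further, so the fare drifts steadily downward over many flights.

def simulate(flights: int, initial_fare: float, learning_rate: float = 0.05) -> float:
    fare = initial_fare
    for _ in range(flights):
        # Crude demand proxy: lower fares produce more early sales.
        # The model observes only this, not the latent last-minute demand.
        early_sales = max(0.0, 1.0 - fare / 1000.0)
        # Feedback step: strong early sales are interpreted as a reason
        # to cut the fare again on the next flight.
        fare -= learning_rate * early_sales * fare
    return fare


after_10 = simulate(flights=10, initial_fare=800.0)
after_50 = simulate(flights=50, initial_fare=800.0)
# The longer the loop runs, the further the fare drifts below 800.
```

Each individual step is tiny and looks locally reasonable, which is exactly why this kind of drift is hard to spot without the human oversight the paragraph above calls for.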

Harnessing AI in aviation


AI is revolutionising the aerospace sector.

In conclusion, the integration of AI into the aviation industry brings immense potential for enhancing safety, efficiency, and decision-making processes for business and operations. However, it also presents several highly complex challenges that must be addressed to ensure successful adoption – through earning the trust of human experts.

* Alex Brooker – Owner and CEO, Airside Labs:

Alex Brooker is a pioneering force in aviation technology innovation. With over two decades of experience in the industry, he has spearheaded multiple award-winning projects, including the Laminar Data Hub (IHS Janes Innovation Award winner) and the XMAN cross-border flight arrival system, which has significantly reduced CO2 emissions for arrivals at major UK airports. As a key contributor to CANSO’s Strategic Technology Group and author of their SWIM white paper, Alex has shaped global aviation data standards and practices. He recently led the ATOMICUS drone innovation consortium, advancing UTM solutions now utilised by major air taxi manufacturers and mission planners. Following his success driving product innovation and the integration of generative AI technologies at Cirium, he has launched Airside Labs, a specialised startup focused on accelerating innovation through AI experimentation and data-driven insights. Alex is a Chartered Engineer and Fellow of the IET, and a Chartered Manager with the CMI.

His presentation at the RAeS AI Summit will be titled “Ground Effect: Measuring GenAI’s Aviation Acumen”.



Alexis Brooker




25 October 2024