What is the difference between black box AI and model-based AI?
AI and machine learning have both been successfully applied to a wide range of applications. While these are powerful computational tools, adopted by many businesses and organisations, one key issue remains: the ‘black box’ of AI.
What is the black box of AI?
This is not the ‘black box’ we immediately think of, the recorder that logs flight data. Black box AI involves collecting a large amount of historical transaction data, then identifying patterns in that data. It then simulates how the organisation will perform based on the patterns identified. The system generates a result, but the user has no understanding of how that result was reached.
Because of the unknowns in those processes, or the user’s inability to comprehend them, the user comes to doubt the results.
How does model-based AI differ from black box AI?
Model-based AI, on the other hand, is a predictive tool that enables the user to interrogate both the data and the conclusions drawn from it. It represents the concepts, objects and ideas present in the real world with meaningful computational representations known as ‘agents’.
Model-based AI can be defined by these characteristics:
It operates according to a behavioural model, where each key element in a business or organisational system — such as an asset, a facility or a decision-maker — can be represented as an agent and configured to act in a particular way.
The model is the result of these individual agents operating and interacting with each other to create accurate simulations.
The agents create data that can be compiled into a set of outputs to determine key performance indicators such as cost, availability, capacity or utilisation.
This can then be compared against real-world scenarios to see whether the agent made the correct decision given the currently available options.
Changing the way an agent is configured — for example, increasing the operational activity of an aircraft or changing the availability and location — provides numerous alternative hypothetical approaches which can be examined and compared using ‘what if?’ analysis.
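As a rough illustration, the characteristics above can be sketched in a few lines of code. The agent class, KPI names and figures below are entirely hypothetical, not Aerogility’s implementation; the sketch simply shows agents acting within a behavioural model, their outputs compiled into KPIs, and a reconfigured agent driving a ‘what if?’ comparison.

```python
from dataclasses import dataclass

@dataclass
class AircraftAgent:
    """An illustrative agent: one key element of the system, configured to act."""
    name: str
    flying_hours_per_week: float
    available: bool = True

    def act(self) -> float:
        # Each agent contributes flying hours only while it is available.
        return self.flying_hours_per_week if self.available else 0.0

def simulate(fleet: list[AircraftAgent], weeks: int) -> dict[str, float]:
    """Run the model and compile the agents' outputs into simple KPIs."""
    total_hours = sum(agent.act() for agent in fleet) * weeks
    availability = sum(a.available for a in fleet) / len(fleet)
    return {"total_flying_hours": total_hours, "availability": availability}

# Baseline scenario: three aircraft, all in service.
fleet = [AircraftAgent("A1", 30), AircraftAgent("A2", 25), AircraftAgent("A3", 20)]
baseline = simulate(fleet, weeks=4)

# 'What if?' scenario: reconfigure one agent (A2 withdrawn from service).
fleet[1].available = False
what_if = simulate(fleet, weeks=4)

print(baseline)  # KPIs with the full fleet
print(what_if)   # KPIs after withdrawing A2
```

Comparing the two KPI dictionaries shows exactly how much capacity the hypothetical withdrawal costs, and every number can be traced back to a named agent’s behaviour.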
Utilising model-based AI
The aerospace and defence industries increasingly rely on AI to schedule and plan complex, multi-year maintenance operations across thousands of aircraft.
Model-based AI allows a user to ask the system hypothetical questions, for example:
If engine X or aircraft Y was removed from service, what impact would that have on the rest of the fleet?
To answer such a question, model-based AI uses labelling so that the system recognises the concept of ‘a flight’ in the same way we understand it. The system can house multiple entities, for example an aircraft, an engine or a maintenance operative, and represents each of them computationally. Because the relationships between these entities can be inspected and verified, they can be trusted.
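A minimal sketch of the idea: when entities and the relationships between them are modelled explicitly, a hypothetical question like the one above is answered by tracing the model rather than by trusting an opaque prediction. The entity names and data here are invented for illustration only.

```python
# Entities represented computationally, with explicit relationships.
engines = {"E1": "A1", "E2": "A2"}                     # engine -> aircraft it is fitted to
flights = {"F100": "A1", "F101": "A2", "F102": "A1"}   # flight -> assigned aircraft

def impact_of_removing_engine(engine: str) -> list[str]:
    """Which scheduled flights lose their aircraft if this engine is withdrawn?"""
    grounded_aircraft = engines[engine]
    # The answer is traceable: follow the engine->aircraft->flight relationships.
    return [f for f, ac in flights.items() if ac == grounded_aircraft]

print(impact_of_removing_engine("E1"))  # flights affected by grounding A1
```

Every step of the answer corresponds to a verifiable relationship in the model, which is what makes the output explainable.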
The key to avoiding the black box problem? Model-based AI
Using a model-based AI approach removes the mistrust associated with the black box. Each decision can be traced and monitored, giving users a greater understanding of the reasoning applied. The results are ‘safe’ because there is an underlying guarantee that the system will perform the task that is required.
There is no question that AI can sort through immense amounts of data to provide results faster than any human counterpart. However, when there are external factors to consider, those same human operators need more than just the what. They need the reassurance of the why, which model-based AI provides, giving them confidence in the output.
To find out more about Aerogility’s use of model-based AI, take a look at our video series with Professor Mike Luck and Nick Jennings.