Some AI-based services and tasks today are relatively trivial – such as a song recommendation on a streaming music platform.
However, AI is playing an expanding role in other areas with far greater human impact. Imagine you’re a doctor using AI-enabled sensors to examine a patient, and the system comes up with a diagnosis demanding urgent invasive treatment.
In situations such as this, an AI-driven decision on its own is not enough. We also need to know the reasons and rationale behind it. In other words, the AI has to “explain” itself, by opening up its reasoning to human scrutiny.
Explainable AI, ready for takeoff
The transition to Explainable AI is already underway, and within three years, we expect it to dominate the AI landscape for businesses. It will empower humans to take corrective actions, if needed, based on the explanations machines give them. But how will it do this?
There are three ways of conveying the reasoning behind the decisions machines make:

1. Using the data behind the machine learning – comparisons with other examples justify the decisions.
2. Using the model itself – explanations mimic the learning model by abstracting it through rules or combining it with semantics.
3. A hybrid approach combining both data and model – offers metadata- and feature-level explanations.

“The future of AI lies in enabling people to collaborate with machines to solve complex problems. Like any efficient collaboration, this requires good communication, trust and understanding.”
– FREDDY LECUE, Explainable AI Research Lead, Accenture Labs
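The first, data-driven approach can be illustrated with a minimal sketch: justify a classification by showing the most similar past cases. All data, feature values and function names below are invented for illustration; a production system would use a learned model rather than raw nearest-neighbour voting.

```python
import math

# Toy labeled history of past decisions: (feature vector, label).
# Features and labels here are invented for illustration.
HISTORY = [
    ((120.0, 2.0), "normal"),
    ((950.0, 1.0), "abnormal"),
    ((110.0, 3.0), "normal"),
    ((880.0, 1.0), "abnormal"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def explain_by_example(query, k=2):
    """Decide by majority vote of the k nearest past cases,
    and return those cases as the 'explanation'."""
    ranked = sorted(HISTORY, key=lambda item: distance(query, item[0]))
    neighbours = ranked[:k]
    labels = [label for _, label in neighbours]
    decision = max(set(labels), key=labels.count)
    return decision, neighbours

decision, evidence = explain_by_example((900.0, 1.0))
print(decision)
for features, label in evidence:
    print(features, label)
```

The explanation here is the evidence itself: "this case was judged abnormal because it closely resembles these past abnormal cases", which is exactly the comparison-based justification described in the first approach.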
Two use cases for Explainable AI
No. 1 – Detecting abnormal travel expenses
Most existing systems for reporting travel expenses apply pre-defined views, such as time period, service or employee group. While these systems aim to detect abnormal expenses systematically, they usually fail to explain why the claims singled out are judged to be abnormal.
To address this lack of visibility into the context of abnormal travel expense claims, Accenture Labs designed and built a travel expenses system incorporating Explainable AI. By combining knowledge graph and machine learning technologies, the system delivers insight to explain any abnormal claims in real time.
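The idea of flagging a claim together with the reason it was flagged can be sketched with a simple statistical rule. The claim values, thresholds and function names below are assumptions for illustration, not the report's actual system, which combines knowledge graphs with machine learning.

```python
import statistics

# Hypothetical expense claims (claim id -> amount); values are invented.
CLAIMS = {
    "C-101": 120.0, "C-102": 135.0, "C-103": 110.0,
    "C-104": 128.0, "C-105": 940.0, "C-106": 115.0,
}

def flag_abnormal(claims, threshold=2.0):
    """Flag claims whose z-score exceeds `threshold`, and attach a
    human-readable explanation of why each one was flagged."""
    amounts = list(claims.values())
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    flagged = {}
    for claim_id, amount in claims.items():
        z = (amount - mean) / stdev
        if abs(z) > threshold:
            flagged[claim_id] = (
                f"amount {amount:.0f} is {z:.1f} standard deviations "
                f"from the peer mean of {mean:.0f}"
            )
    return flagged

print(flag_abnormal(CLAIMS))
```

The point of the sketch is the return value: not just a list of suspect claim IDs, but a sentence per claim stating which comparison made it abnormal, which is the visibility the pre-defined-view systems lack.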
No. 2 – Project risk management
Most large companies manage hundreds, if not thousands, of projects every year across multiple vendors, clients and partners. Because of the complexity and risks inherent in these critical contracts, outcomes often diverge from the original estimates.
This means decision-makers need systems that not only predict the risk tier of each contract or project, but also give them an actionable explanation of these predictions. To address the challenges, Accenture Labs applied Explainable AI and developed a five-stage process to explain the risk tier of projects and contracts.
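An actionable risk-tier explanation can be sketched with transparent scoring rules, where the rules that fired become the rationale. The rule descriptions, weights and tier cut-offs below are illustrative assumptions, not the report's actual five-stage process.

```python
# Each rule: (human-readable description, predicate on a contract, weight).
# Thresholds and feature names are invented for illustration.
RULES = [
    ("contract value above 10M", lambda c: c["value_musd"] > 10, 2),
    ("more than 3 subcontractors", lambda c: c["subcontractors"] > 3, 1),
    ("new client relationship", lambda c: c["new_client"], 1),
]

def risk_tier(contract):
    """Score a contract against the rules and return both the tier
    and the list of rules that fired (the explanation)."""
    score = 0
    reasons = []
    for description, predicate, weight in RULES:
        if predicate(contract):
            score += weight
            reasons.append(description)
    tier = "high" if score >= 3 else "medium" if score >= 1 else "low"
    return tier, reasons

tier, reasons = risk_tier(
    {"value_musd": 25, "subcontractors": 5, "new_client": False}
)
print(tier, reasons)
```

Because each fired rule names a concrete contract attribute, a decision-maker can act on the explanation directly, for example by reducing the number of subcontractors, rather than merely being told a contract is high risk.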
Eight measures can be applied to assess the value and effectiveness of an explanation. These measures capture what people need from an explanation, though not all of them can necessarily be achieved at once. While Explainable AI will use and expose techniques that address these questions, we as humans should still expect a trade-off between value and effectiveness.
- How much effort is needed for a human to interpret it?
- How concise is it?
- How actionable is the explanation? What can we do with it?
- Could it be interpreted or reused by another AI system?
- How accurate is the explanation?
- Does the “explanation” explain the decision completely, or only partially?
A technology revolution with people at its heart
Explanation is fundamental to human reasoning, guiding our actions, influencing our interactions with others and driving efforts to expand our knowledge. AI promises to help us identify dangerous industrial sites, warn us of impending machine failures, recommend medical treatments, and take countless other decisions.
The promise of these systems won’t be realized unless we understand, trust and act on the recommendations they make. To make this possible, high-quality explanations are essential.
Source: Accenture Labs AI report, 2018
Accenture Labs, in a new report, details how we can meet the need for more information by giving AI applications the ability to explain to humans not just what decisions they made, but also why they made them.