Is Explainable AI (xAI) the Next Step, or Just Hype?
Recent years have seen artificial intelligence expand into an array of industries with varying levels of disruption. Once a horizon technology (perhaps similar to how we now view quantum computing), AI has officially entered everyday life, and informed opinions are no longer reserved for tech enthusiasts and elite data scientists. Stakeholders now include executives, investors, managers, governments, and, ultimately, customers.
While conversations about Explainable AI (xAI) date back decades, the concept re-emerged with renewed vigor in late 2019, when Google announced a new set of xAI tools for developers. The concept of xAI is relatively simple: historically, machine learning models have operated within a “black box,” with outcomes determined by millions of interwoven parameters too complex to explain. The goal of xAI is to engineer transparency and literal explanations into models, so that final outcomes arrive equipped with context. For example, xAI might determine an image to be a wolf and offer the explanation: it is an animal with sharp teeth and fur, and there is snow in the background.
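To make the idea concrete, here is a minimal sketch of one common flavor of explanation, feature attribution, applied to a deliberately simple stand-in for the wolf classifier. The feature names, the synthetic dataset, and the use of scikit-learn’s permutation importance are illustrative assumptions, not a description of Google’s tooling or of any particular production system.

```python
# Minimal sketch: explain a "black box" classifier by measuring how much
# each feature matters to it (feature attribution via permutation importance).
# Feature names and data are hypothetical stand-ins for the wolf example.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = ["sharp_teeth", "fur", "snow_background", "collar"]

# Synthetic data standing in for image-derived features.
X, y = make_classification(n_samples=1000, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": hundreds of decision trees voting together.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops;
# the bigger the drop, the more the model relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

The output is a ranked list of the features the model leans on most: a crude but honest answer to the question “why did you call this a wolf?”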
Although xAI is considered a technology, it can equally be understood as a best practice. Artificial intelligence is outperforming humans in fields steeped in ethical dilemmas, such as healthcare, finance, and law. While the promise of using technology to reduce human bias and increase efficiency is alluring, organizations are responsible for their decisions, whether made by a human or a machine, and if they cannot explain a decision, they are vulnerable to multiple liabilities. AI may be able to set bail more fairly and consistently than a judge; however, even AI can be misled by poor data or overfitting, and when AI causes unfair sentencing, declined mortgage applications, or misdiagnosed cancer, problems inevitably arise. Mistakes are inevitable, but explaining those mistakes is also necessary in any of these high-stakes environments.
Outside of extreme circumstances, xAI provides an expanded category of features for companies to promote and sell. With Gartner expecting the global AI economy to grow from $1.2 trillion in 2019 to $3.9 trillion by 2022, every company should expect to differentiate its models on more than their promised outcomes. Providing a black box AI model that guarantees an improvement may be tempting, but identifying specific advanced features gives organizations concrete talking points that strengthen their own marketing and their clients’ awareness.
The rise of xAI
The timing of xAI’s popularity is not coincidental. Public opinion of the tech industry has plummeted in recent years, with only 50% of US survey participants believing that tech companies have a positive impact, down from over 70% just four years earlier. While many companies are slow to adapt to this shift, attuned leaders recognize the growing demand for accountability and trust. Implementing xAI moves tech-forward companies in this direction and shows initiative on an issue that may eventually become unavoidable policy.
In 2017, Google’s decision to announce its “AI-first” strategy seemed bold; however, just a few years later, the sight of tech executives tightly embracing AI seems almost expected. Since the world’s first company opened its doors (according to Google, the Dutch East India Company in 1602, at least among publicly traded companies), leaders have relied on financially informed decisions. In recent years, the rise of big data and the IoT opened the floodgates to previously unavailable insights, and executives adapted their language to include “data-informed” decisions. The next natural evolution is AI-backed decisions. Leaders are expected to discuss and defend their decisions to stakeholders, the public, the press, and the law; that expectation does not dissolve with the introduction of complex AI.
Making xAI effective
For xAI to be reliable, it cannot be an ad-hoc addition or an afterthought: developers and engineers must build explainability into the design and architecture of the application itself. It is also worth noting that not every AI project requires explanation; xAI may be cumbersome and cost-prohibitive in video games, entertainment, or certain kinds of production analysis.
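As a sketch of what “designed in” can look like, the hypothetical wrapper below returns an explanation alongside every prediction rather than treating it as a separate, after-the-fact step. The class and field names are invented for illustration, and the linear model is chosen only because its coefficient-times-value contributions are easy to read; a real system would substitute its own model and attribution method.

```python
# Sketch of building explanation into the prediction interface by design.
# ExplainedModel and Prediction are hypothetical names, not an established API.
from dataclasses import dataclass

import numpy as np
from sklearn.linear_model import LogisticRegression


@dataclass
class Prediction:
    label: int
    contributions: dict  # per-feature contribution toward the decision


class ExplainedModel:
    def __init__(self, feature_names):
        self.feature_names = feature_names
        self.model = LogisticRegression()

    def fit(self, X, y):
        self.model.fit(X, y)
        return self

    def predict_one(self, x):
        label = int(self.model.predict([x])[0])
        # For a linear model, coefficient * feature value is a natural
        # per-feature contribution; every caller gets the "why" with the "what".
        contributions = dict(zip(self.feature_names,
                                 self.model.coef_[0] * np.asarray(x, dtype=float)))
        return Prediction(label=label, contributions=contributions)


# Toy usage: a two-feature credit decision, purely illustrative.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]])
y = np.array([1, 0, 1, 0])
model = ExplainedModel(["income", "debt"]).fit(X, y)
print(model.predict_one([1.0, 0.5]))
```

The point is the interface rather than the particular model: because the explanation travels with the prediction, it cannot quietly be dropped later.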
A recurring phenomenon in AI and machine learning is that the operations inside the “black box” that produce certain ideal outcomes cannot be explained. Members of the developer community have expressed skepticism about the promises of xAI, arguing that some models are so complex they are impossible to explain, and that forcing explanations would hamper creative progress. In certain cases, this is undoubtedly true. Whether that reality stems from our still-primitive understanding of the technology or from a more pervasive, unavoidable limit is up for debate.
Moving toward xAI does not require engineers or architects to stop producing black box models; it simply raises the standard for the most critical public-facing technologies that operate in fields heavily reliant on sound ethics. The desired outcomes and expectations of any AI should be determined in an early meeting, and xAI should be part of that discussion.
Some projects may require uniquely complex black box models designed for performance without explanation, while others may not hold value without explanation. Each project has unique needs, and xAI offers one more layer of possibilities.