
Explainable AI Using Expressive Boolean Formulas


The explosion in artificial intelligence (AI) and machine learning applications is permeating nearly every industry and slice of life.

But its growth doesn’t come without irony. While AI exists to simplify and/or speed up decision-making or workflows, the methodology for doing so is usually extremely complex. Indeed, some “black box” machine learning algorithms are so intricate and multifaceted that they defy easy explanation, even by the computer scientists who created them.

That can be quite problematic when certain use cases – such as in the fields of finance and medicine – are governed by industry best practices or government regulations that require transparent explanations of the inner workings of AI solutions. And if these applications are not expressive enough to meet explainability requirements, they may be rendered useless regardless of their overall efficacy.

To address this conundrum, our team at the Fidelity Center for Applied Technology (FCAT) — in collaboration with the Amazon Quantum Solutions Lab — has proposed and implemented an interpretable machine learning model for Explainable AI (XAI) based on expressive Boolean formulas. Such an approach can include any operator that can be applied to one or more Boolean variables, thus providing higher expressivity compared to more rigid rule-based and tree-based approaches.

You can read the complete paper here for comprehensive details on this project.

Our hypothesis was that since models — such as decision trees — can get deep and difficult to interpret, finding an expressive rule with low complexity but high accuracy was an intractable optimization problem that needed to be solved. Further, by simplifying the model through this advanced XAI approach, we could achieve additional advantages, such as exposing biases that are important in the context of ethical and responsible usage of ML, while also making the model easier to maintain and improve.

We proposed an approach based on expressive Boolean formulas because they define rules with tunable complexity (or interpretability) according to which input data are classified. Such a formula can include any operator that can be applied to one or more Boolean variables (such as And or AtLeast), thus providing higher expressivity compared to more rigid rule-based and tree-based methodologies.
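To make the idea concrete, here is a minimal sketch of evaluating such a formula as a small expression tree. The operator names And, Or, and AtLeast follow the article’s examples, but the tree representation and `evaluate` function are our own hypothetical illustration, not the paper’s implementation:

```python
# Sketch: evaluating an expressive Boolean formula represented as a tuple tree.
# ("var", name) is a leaf; ("AtLeast", k, child1, ...) is true when at least
# k children are true; "And"/"Or" behave as usual.

def evaluate(node, assignment):
    """Recursively evaluate a formula tree against a dict of Boolean inputs."""
    op, *args = node
    if op == "var":
        return assignment[args[0]]
    if op == "AtLeast":
        k, *children = args
        return sum(evaluate(c, assignment) for c in children) >= k
    values = [evaluate(c, assignment) for c in args]
    if op == "And":
        return all(values)
    if op == "Or":
        return any(values)
    raise ValueError(f"unknown operator: {op}")

# AtLeast2( x1, And(x2, x3), x4 ): true when at least two subformulas hold.
formula = ("AtLeast", 2,
           ("var", "x1"),
           ("And", ("var", "x2"), ("var", "x3")),
           ("var", "x4"))
print(evaluate(formula, {"x1": True, "x2": True, "x3": False, "x4": True}))  # True
```

Here x1 and x4 hold while And(x2, x3) does not, so two of three subformulas are true and the AtLeast2 rule fires; the whole rule remains readable as a single sentence, which is the point of the approach.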

In this problem we have two competing objectives: maximizing the performance of the algorithm while minimizing its complexity. Thus, rather than taking the standard approach of applying one of two optimization methods – combining multiple objectives into one or constraining one of the objectives – we chose to include both in our formulation. In doing so, and without loss of generality, we mainly use balanced accuracy as our overarching performance metric.
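Balanced accuracy averages the true-positive rate (sensitivity) and the true-negative rate (specificity), so a classifier cannot score well just by predicting the majority class. A minimal computation of the standard definition (not code from the paper):

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity (recall on positives) and specificity (recall on negatives)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return (sensitivity + specificity) / 2

# On a 90/10 class split, always predicting the majority class gets 0.9
# plain accuracy but only 0.5 balanced accuracy.
y_true = [1] * 9 + [0]
y_pred = [1] * 10
print(balanced_accuracy(y_true, y_pred))  # 0.5
```

This robustness on imbalanced data is why it is a natural overarching metric for datasets like credit defaults or rare medical conditions, where the positive class is scarce.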

Also, by including operators like AtLeast, we were motivated by the idea of addressing the need for highly interpretable checklists, such as a list of medical symptoms that signify a particular condition. It is conceivable that a decision would be made using such a checklist of symptoms, in which a minimum number would need to be present for a positive diagnosis. Similarly, in finance, a bank may determine whether or not to offer credit to a customer based on the presence of a certain number of factors from a larger list.
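Such a checklist rule is just a single AtLeast operator over the items. A toy sketch — the symptom names and threshold below are invented for illustration, not medical guidance:

```python
# Hypothetical AtLeast-style checklist: flag a positive result when at
# least THRESHOLD of the listed symptoms are present. Names and threshold
# are illustrative placeholders.
SYMPTOMS = ["fever", "cough", "fatigue", "headache", "sore_throat"]
THRESHOLD = 3

def checklist_positive(patient):
    """True when >= THRESHOLD of SYMPTOMS are present in the patient record."""
    return sum(patient.get(s, False) for s in SYMPTOMS) >= THRESHOLD

print(checklist_positive({"fever": True, "cough": True, "fatigue": True}))  # True
print(checklist_positive({"fever": True}))                                  # False
```

The rule is trivially auditable: a clinician or credit officer can read the list and the threshold directly, which is exactly the kind of transparency that regulated settings demand.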

We successfully implemented our XAI model, and benchmarked it on some public datasets for credit, customer behavior, and medical conditions. We found that our model is generally competitive with other well-known alternatives. We also found that our XAI model can potentially be powered by special-purpose hardware or quantum devices for solving fast Integer Linear Programming (ILP) or Quadratic Unconstrained Binary Optimization (QUBO) problems. The addition of QUBO solvers reduces the number of iterations – thus leading to a speedup through the fast proposal of non-local moves.
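For readers unfamiliar with QUBO: the problem is to find the binary vector x minimizing x^T Q x for a given matrix Q. A tiny brute-force solver shows the form of the problem (illustrative only — the paper’s solvers and any quantum hardware are far more sophisticated, and brute force is exponential in the number of variables):

```python
from itertools import product

def solve_qubo(Q):
    """Brute-force the binary vector x minimizing sum_ij Q[i][j]*x[i]*x[j].
    Exponential in len(Q); only suitable for tiny illustrative instances."""
    n = len(Q)
    best_x, best_e = None, float("inf")
    for x in product((0, 1), repeat=n):
        e = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Toy instance: diagonal terms reward selecting each variable, while the
# off-diagonal term penalizes selecting both together.
Q = [[-1, 2],
     [0, -1]]
print(solve_qubo(Q))  # ((0, 1), -1): picking exactly one variable is optimal
```

In this formulation, the quality of a candidate rule change is encoded in Q, and a fast QUBO solver can propose a good joint setting of many binary decisions at once — the "non-local moves" mentioned above.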

As noted, explainable AI models using Boolean formulas can have many applications in healthcare and in Fidelity’s field of finance (such as credit scoring, or assessing why some customers may have chosen a product while others did not). By creating these interpretable rules, we can attain higher levels of insight that can lead to future improvements in product development or refinement, as well as to optimized marketing campaigns.

Based on our findings, we have determined that Explainable AI using expressive Boolean formulas is both appropriate and desirable for use cases that mandate further explainability. Moreover, as quantum computing continues to develop, we foresee the opportunity to gain potential speedups by using it and other special-purpose hardware accelerators.

Future work may center on applying these classifiers to other datasets, introducing new operators, or applying these concepts to other use cases.
