
Interpretable AI

  • published November 21, 2020

AI has been making rapid advances, and even if we choose not to use AI, it touches our lives in many ways.

Over the years, AI models have become very complex. Some of the models have more than 100 million parameters. If we use such a complex model, it is hard to explain how the model arrived at its results.

Why bother about model interpretability?

If we use AI to solve a problem like recommending products to customers or sorting mail based on postal codes, we do not need to worry about the model’s interpretability. But if we are dealing with decisions that impact people in a significant way, we not only want the model to be fair, but also to be able to explain its decision-making process.

Here are some examples where we need to explain the rationale behind the decision to the people involved:

  • Credit decisions
  • Forensic analysis
  • College admissions
  • Medicine research
  • Demand from regulatory bodies

The need for interpretable AI is quite real. In 2018, Amazon scrapped an AI-based resume selection tool because it showed a bias against women. Any model is only as good as the data we use to train it. So, the demand for interpretable AI is healthy not just for society but also for business.

There are many approaches to interpreting a complex model. I will explain two popular methods.

Local Interpretable Model-Agnostic Explanations (LIME)

A complex model means that the decision boundary is non-linear. For the sake of simplicity, let us assume that we have only two input variables and we want to classify the data points into two classes. This simple assumption makes visualisation easy. Let us look at the following diagram.

In the diagram above, let us assume that we have a data set of people with two input variables, Age and Income, and we want to classify whether a person has diabetes or not. A red dot means that the person has diabetes, and a green one indicates that the person does not. You will notice that the decision boundary is non-linear.

If we need to explain why the model classified a person as diabetic, we can create a proxy function that is linear and works well in a small region.

The red straight line at the bottom right is the proxy decision boundary. Note that this linear proxy is local (hence the word local in LIME). For points that are not in the vicinity of the proxy function, we will need another proxy function.
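
To make this concrete, here is a minimal sketch in Python using the open-source lime library. This is my own illustrative code with a synthetic Age/Income dataset, not the exact model behind the diagram: we train a non-linear classifier and ask LIME to fit a local linear proxy around one person's prediction.

```python
# Illustrative sketch only: synthetic data, not a real medical model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
age = rng.integers(20, 80, 500)
income = rng.integers(20_000, 150_000, 500)
X = np.column_stack([age, income]).astype(float)
y = ((age > 50) & (income < 60_000)).astype(int)   # synthetic "diabetic" label

# A non-linear "complex" model
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=["Age", "Income"],
    class_names=["non-diabetic", "diabetic"],
    mode="classification",
)

# LIME perturbs the data around this one person and fits a weighted linear
# model (the local proxy) to the complex model's predictions in that region.
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=2)
print(exp.as_list())   # (feature condition, weight) pairs of the local linear model
```

The printed weights are the coefficients of the local linear model: how much Age and Income pushed this particular prediction towards or away from ‘diabetic’.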

Shapley Additive Explanations (SHAP)

The idea of SHAP is an extension of Shapley values, which were introduced by Lloyd Shapley in 1953, and the concept is borrowed from game theory. Imagine a rowing race where there are five rowers in each boat. Once the race is over, how should the prize money be divided among the winning team members?
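
For reference, the classical answer from game theory is the Shapley value: each player's fair share is their average marginal contribution over all the orders in which the team could have been assembled. In standard notation, where N is the set of players and v(S) is the prize a sub-team S would win on its own:

\[
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N| - |S| - 1)!}{|N|!} \bigl( v(S \cup \{i\}) - v(S) \bigr)
\]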

You could think of an AI model as a similar collaborative game. In our example, we can think of Age and Income as the players and the ‘decision’ of being diabetic or non-diabetic as the outcome. Using SHAP, we can assign a contribution to each variable (i.e. Age and Income), either for a single prediction at the local level or across all predictions at the global level. The math behind SHAP is a bit involved, so I will not elaborate on it here.
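
Here is a similarly minimal sketch using the open-source shap library on the same kind of synthetic Age/Income model (again, illustrative code rather than a production setup); the values it prints are the per-variable contributions described above.

```python
# Illustrative sketch only: synthetic data, not a real medical model.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
age = rng.integers(20, 80, 500)
income = rng.integers(20_000, 150_000, 500)
X = np.column_stack([age, income]).astype(float)
y = ((age > 50) & (income < 60_000)).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X)
# Depending on the shap version, sv is a list with one array per class or a
# single (samples, features, classes) array; pick the "diabetic" class either way.
sv_pos = sv[1] if isinstance(sv, list) else sv[..., 1]

print(sv_pos[0])                    # local: Age and Income contributions for one person
print(np.abs(sv_pos).mean(axis=0))  # global: mean absolute contribution per feature
```

Unlike LIME's local proxy, these contributions add up exactly to the difference between the model's prediction for a person and its average prediction, which is what makes them ‘additive’.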

If you would like to know more about interpretable AI, please reach out to us.

 

You can read and follow our publications on Medium

Prabhash Thakur

Director, Data Science
