
Interpretable AI

AI has been making rapid advances, and even if we choose not to use it directly, it touches our lives in many ways.

Over the years, AI models have become very complex; some have more than 100 million parameters. With such a complex model, it is hard to explain how it arrived at its results.

Why bother with model interpretability?

If we use AI to solve a problem like recommending products to customers or sorting mail by postal code, we do not need to worry about the model’s interpretability. But if we are dealing with decisions that affect people in a significant way, we not only want the model to be fair, we also want to be able to explain its decision-making process.

Here are some examples where we need to explain the rationale behind a decision to the people involved:

  • Credit decisions
  • Forensic analysis
  • College admissions
  • Medical research
  • Demand from regulatory bodies

The need for interpretable AI is quite real. In 2018, Amazon scrapped an AI-based resume screening tool because it showed a bias against women. Any model is only as good as the data used to train it. So the demand for interpretable AI is healthy not just for society but also for business.

There are many approaches to interpreting a complex model. I will explain two popular methods.

Local Interpretable Model-Agnostic Explanations (LIME)

A complex model typically has a non-linear decision boundary. For the sake of simplicity, let us assume that we have only two input variables and we want to classify the data points into two classes. This assumption makes visualisation easy. Let us look at the following diagram.

[Figure: scatter plot of Age vs Income, with red (diabetic) and green (non-diabetic) points separated by a non-linear decision boundary]

In the diagram above, assume we have a data set of people with two input variables, Age and Income, and we want to classify whether or not a person has diabetes. A red dot means the person has diabetes; a green dot means the person does not. Notice that the decision boundary is non-linear.

If we need to explain why the model classified a particular person as diabetic, we can create a proxy function that is linear and works well in a small region around that person.

[Figure: the same scatter plot, with a short red straight line at the bottom right marking the local linear proxy boundary]

The red straight line at the bottom right is the proxy decision boundary. Note that this linear proxy is local (hence the ‘local’ in LIME): for points that are not in its vicinity, we need a different proxy function.
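To make this concrete, here is a minimal sketch of LIME in Python, assuming a scikit-learn classifier and the open-source `lime` package. The Age/Income data set, its labels, and all variable names are invented for illustration; LIME itself only needs the training data and a prediction function.

```python
# A minimal LIME sketch on a synthetic Age/Income data set.
# Assumes: pip install scikit-learn lime
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X = rng.uniform([20, 10_000], [80, 150_000], size=(500, 2))  # Age, Income
y = ((X[:, 0] > 50) & (X[:, 1] < 60_000)).astype(int)        # toy "diabetic" label

model = RandomForestClassifier(random_state=0).fit(X, y)     # the complex model

explainer = LimeTabularExplainer(
    X,
    feature_names=["Age", "Income"],
    class_names=["non-diabetic", "diabetic"],
    mode="classification",
)

# Perturb points around one person, fit a local linear proxy, and report
# the proxy's weight for each feature.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=2)
print(explanation.as_list())  # e.g. [('Age > ...', w1), ('Income <= ...', w2)]
```

The printed weights belong to the local linear proxy, not to the random forest itself; a different person would get a different proxy.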

Shapley Additive Explanations (SHAP)

SHAP extends Shapley values, introduced by the game theorist Lloyd Shapley in 1953, and borrows its core idea from cooperative game theory. Imagine a rowing race with five rowers in each boat. Once the race is over, how should the prize money be divided among the winning team members?
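Shapley’s answer is to pay each player the average of their marginal contribution over all the sub-teams they could have joined. Here is a toy sketch with three rowers and an invented coalition value function `v` (how much prize money each sub-team would win on its own); all numbers are made up for illustration.

```python
# Toy Shapley values for the rowing analogy, with invented prize values.
from itertools import combinations
from math import factorial

players = ["A", "B", "C"]
v = {  # hypothetical prize money for each possible sub-team
    (): 0,
    ("A",): 10, ("B",): 20, ("C",): 30,
    ("A", "B"): 60, ("A", "C"): 70, ("B", "C"): 90,
    ("A", "B", "C"): 120,
}

def shapley(player):
    """Weighted average of `player`'s marginal contribution to every coalition."""
    n = len(players)
    others = [p for p in players if p != player]
    total = 0.0
    for r in range(len(others) + 1):
        for S in combinations(others, r):
            weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            marginal = v[tuple(sorted(S + (player,)))] - v[S]
            total += weight * marginal
    return total

for p in players:
    print(p, round(shapley(p), 2))  # shares sum to v[("A", "B", "C")] = 120
```

The shares always add up to the full prize, which is what makes the split feel fair.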

You can think of an AI model as a similar collaborative game. In our example, Age and Income are the players and the prediction of diabetic or non-diabetic is the prize. Using SHAP, we can assign each variable a contribution to a single prediction (a local explanation) or average those contributions across all predictions (a global explanation). The math behind SHAP is a bit involved, so I will not elaborate on it here.
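In code, the open-source `shap` package does the heavy lifting. Here is a minimal sketch that reuses the `model` and `X` from the LIME example above; note that the shape of the returned attributions differs across shap versions, which the sketch hedges against.

```python
# A minimal SHAP sketch, reusing `model` and `X` from the LIME example.
# Assumes: pip install shap
import shap

# TreeExplainer computes exact Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Older shap versions return one array per class for sklearn classifiers;
# newer ones return a single (samples, features, classes) array.
diabetic = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]

# Local explanation: how much Age and Income pushed one prediction.
print(diabetic[0])

# Global explanation: mean absolute contribution of each feature.
print(abs(diabetic).mean(axis=0))
```

Each person’s attributions, added to the model’s base value, recover that person’s prediction, mirroring the prize-splitting idea above.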

If you are interested in learning more about interpretable AI, please reach out to us.

You can read and follow our publications on Medium.

Prabhash Thakur

Director, Data Science

November 21, 2020