Master's thesis: Towards a Quantitative Evaluation Metric for XAI

Finding a quantitative evaluation metric that can be used to estimate the usefulness of ML model explanations: that will be the goal of your research in this thesis.

Apply now

Required interest(s)

  • Explainable AI
  • Artificial Intelligence
  • Machine Learning

What do you get

  • A challenging assignment within a practical environment
  • € 1000 compensation, or € 500 plus a lease car, or € 600 plus living space
  • Professional guidance
  • Courses aimed at your graduation period
  • Support from our academic Research center at your disposal
  • Two vacation days per month

What you will do

  • 65%  Research
  • 10%  Analyze, design, realize
  • 25%  Documentation

As machine learning (ML) systems take a more prominent and central role in life-impacting decisions, ensuring their trustworthiness and accountability is of utmost importance. Explanations sit at the core of these desirable attributes of an ML system. The emerging field is frequently called "Explainable AI (XAI)" or "Explainable ML." The goal of explainable ML is to intuitively explain the predictions of an ML system while adhering to the needs of various stakeholders. Many explanation techniques have been developed, with contributions from both academia and industry. However, several existing challenges have not garnered enough interest and serve as roadblocks to the widespread adoption of explainable ML.

It is difficult to determine which explainable AI (XAI) technique is most useful in a given scenario. A proper quantitative evaluation metric to determine this does not exist at the moment. As a result, it is unknown whether the explanations created for a certain model are sufficient and usable.

The goal of your research is to find a quantitative evaluation metric that can be used to estimate the usefulness of ML model explanations. This should be a stable metric that could even be utilized in an automated MLOps flow to select the best possible set of explanations for a given ML model. Your results will assist us in implementing XAI solutions for our clients.
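To make the goal concrete: one candidate family of quantitative metrics measures the faithfulness of a feature-attribution explanation by deleting features in order of attributed importance and tracking how quickly the model's prediction drops. A faithful ranking produces a steep drop. Below is a minimal sketch in plain Python; the function name, the zero baseline, and the area-under-curve normalization are illustrative assumptions, not an established API.

```python
def deletion_faithfulness(predict, x, attributions, baseline=0.0):
    """Hypothetical 'deletion' faithfulness score for a feature-attribution
    explanation: replace features with a baseline value in order of
    attributed importance and measure how fast the prediction decays.

    predict      -- callable mapping a feature list to a scalar prediction
    x            -- input features (list of floats)
    attributions -- importance score per feature (same length as x)
    Returns a score in [0, 1]; higher means a steeper drop, i.e. a more
    faithful importance ranking (under these assumptions).
    """
    # Most important features first.
    order = sorted(range(len(attributions)),
                   key=lambda i: attributions[i], reverse=True)
    scores = [predict(x)]
    x_del = list(x)
    for i in order:
        x_del[i] = baseline          # "delete" the feature
        scores.append(predict(x_del))
    # Normalized area under the deletion curve (trapezoidal rule);
    # a lower area means a steeper drop, so return 1 - AUC.
    dx = 1.0 / (len(scores) - 1)
    auc = sum((a + b) / 2.0 for a, b in zip(scores, scores[1:])) * dx
    return 1.0 - auc / max(scores[0], 1e-12)
```

As a sanity check with a linear model, attributions matching the true weights yield a steeper deletion curve, and hence a higher score, than a reversed ranking; a stable metric of this kind is the sort of building block an automated MLOps flow could use to compare explanation techniques.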

About Info Support Research Center

We anticipate upcoming and future challenges and ensure our engineers develop cutting-edge solutions based on the latest scientific insights. Our research community proactively tackles emerging technologies. We do this in cooperation with renowned scientists, making sure that research teams are positioned and embedded throughout our organisation and our community, so that their insights are directly applied to our business. We truly believe in sharing knowledge, so we want to do this without any restrictions.



  1. Introductory meeting

    Discuss your (study) career, interests, and ambitions, and get an introduction to Info Support.

  2. Assessments

    Assessment of professional knowledge and personality (capacity, competencies, and motives).

  3. Selection interview

    A deeper look at your professional knowledge and personality.

  4. Signing the contract

    Contract offer and invitation to the signing session.