Finding an eXplainable AI Feedback Loop

Establishing a feedback loop between Machine Learning models and their explainers. Your research should make it easier to actually act on ML explanations, instead of simply viewing them as interesting observations.

Required interest(s)

  • Explainable AI
  • Artificial Intelligence
  • Machine Learning

What do you get

  • A challenging assignment within a practical environment
  • € 1000 compensation, € 500 + lease car, or € 600 + living space
  • Professional guidance
  • Courses aimed at your graduation period
  • Support from our academic research center
  • Two vacation days per month

What you will do

  • 65% Research
  • 10% Analysis, design, realization
  • 25% Documentation

As machine learning (ML) systems take a more prominent and central role in contributing to life-impacting decisions, ensuring their trustworthiness and accountability is of utmost importance. Explanations sit at the core of these desirable attributes of an ML system. The emerging field is frequently called “Explainable AI (XAI)” or “Explainable ML.” The goal of explainable ML is to intuitively explain the predictions of an ML system while adhering to the needs of various stakeholders. Many explanation techniques have been developed with contributions from both academia and industry. However, several existing challenges have not garnered enough interest and still serve as roadblocks to the widespread adoption of explainable ML.

Two of the most widely used frameworks for eXplainable AI are LIME and SHAP. The first produces local explanations, describing on what basis a particular prediction was made. The second is typically applied on a global scale, and thus tries to explain the overall impact a feature has on the model’s outcome.
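For illustration only, here is a minimal Python sketch of how the two explainers are typically invoked on a tabular classifier; the dataset, the random-forest model and all parameter values are assumptions made for the example, not part of the assignment.

```python
# Minimal sketch: a local explanation with LIME and a global view with SHAP.
# The dataset, model and parameters below are illustrative assumptions.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# LIME (local): why did this one test sample get its prediction?
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names))
lime_explanation = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)
print(lime_explanation.as_list())

# SHAP (global): aggregate per-feature contributions over many samples.
shap_values = shap.TreeExplainer(model).shap_values(X_test)
# Depending on the shap version this is a list per class or a 3-D array;
# take the values for the positive class either way.
sv = shap_values[1] if isinstance(shap_values, list) else (
    shap_values[..., 1] if shap_values.ndim == 3 else shap_values)
mean_impact = np.abs(sv).mean(axis=0)  # mean |SHAP| ~ overall feature impact
for name, impact in sorted(zip(data.feature_names, mean_impact),
                           key=lambda t: -t[1])[:5]:
    print(f"{name}: {impact:.3f}")
```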

These explainers describe the behavior of Machine Learning models but do not give pointers for improving them. For instance, when a model misclassifies a certain sample, how could the output of the explainers be used to give feedback to the model? Or when an explainer determines that a model meant to recognize cats is actually looking for balls of fur in the image, how could that observation be used to improve the model?

Your research aims to establish a feedback loop between Machine Learning models and their explainers. It should use the results of the explainers to train the models better, in either an automated or a supervised scenario. This research should make it easier to actually act on ML explanations, instead of simply viewing them as interesting observations.
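As a purely hypothetical illustration of what one iteration of such a loop could look like, the sketch below uses global SHAP importances to single out the feature the model leans on most, neutralizes it as if a domain expert had flagged it as spurious, and retrains. The “ablate the most influential feature” remedy and the dataset are assumptions made for the example, not the proposed research design.

```python
# Hypothetical feedback-loop sketch: explain -> intervene -> retrain -> evaluate.
# The "ablate the most influential feature" remedy is an illustrative assumption.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

def most_influential_feature(model, X):
    """Index of the feature with the largest mean |SHAP| value."""
    sv = shap.TreeExplainer(model).shap_values(X)
    sv = sv[1] if isinstance(sv, list) else (sv[..., 1] if sv.ndim == 3 else sv)
    return int(np.abs(sv).mean(axis=0).argmax())

# Step 1: explain -- which feature does the model rely on the most?
suspect = most_influential_feature(model, X_test)
print("Most influential feature:", data.feature_names[suspect])

# Step 2: intervene -- pretend a domain expert flagged that feature as spurious,
# neutralize it (replace with its training mean) and retrain.
fill = X_train[:, suspect].mean()
X_train_fixed = X_train.copy()
X_train_fixed[:, suspect] = fill
X_test_fixed = X_test.copy()
X_test_fixed[:, suspect] = fill
model_fixed = RandomForestClassifier(n_estimators=100, random_state=0)
model_fixed.fit(X_train_fixed, y_train)

# Step 3: evaluate -- compare models, closing one iteration of the loop.
print("original accuracy: ", model.score(X_test, y_test))
print("retrained accuracy:", model_fixed.score(X_test_fixed, y_test))
```

In a supervised scenario the “flag as spurious” step would be a human decision based on the explanations; in an automated scenario some other signal would have to drive the intervention, which is exactly the kind of question this research would explore.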

About Info Support Research Center

We anticipate upcoming and future challenges and ensure our engineers develop cutting-edge solutions based on the latest scientific insights. Our research community proactively tackles emerging technologies. We do this in cooperation with renowned scientists, making sure that research teams are positioned and embedded throughout our organisation and our community, so that their insights are directly applied to our business. We truly believe in sharing knowledge, so we want to do this without any restrictions.

Sign up for this assignment

  • Accepted file types: docx, doc, txt, pdf.

Application procedure

  1. Introductory meeting

    Discuss your (study) career, interests and ambitions, and an introduction to Info Support.

  2. Review

    Assessment of professional knowledge and personality (capacity, competences and motives).

  3. Selection interview

    Deepen professional knowledge and personality.

  4. Signing of the contract

    Contract offer and invitation for the signing moment.