SHAP and LIME are not efficient enough to operate on big data. In this thesis, rather than falling back on a subset of the data or a few dozen predictions, your research aims to solve this problem by creating a scalable eXplainable AI algorithm.
- Explainable AI
- Artificial Intelligence
- Machine Learning
What you get
- A challenging assignment within a practical environment
- € 1000 compensation, € 500 plus a lease car, or € 600 plus living space
- Professional guidance
- Courses tailored to your graduation period
- The support of our academic Research Center at your disposal
- Two vacation days per month
What you will do
- 65% Research
- 10% Analyze, design, realize
- 25% Documentation
As machine learning (ML) systems take a more prominent and central role in contributing to life-impacting decisions, ensuring their trustworthiness and accountability is of utmost importance. Explanations sit at the core of these desirable attributes of an ML system. The emerging field is frequently called “Explainable AI (XAI)” or “Explainable ML.” The goal of explainable ML is to intuitively explain the predictions of an ML system while adhering to the needs of various stakeholders. Many explanation techniques have been developed, with contributions from both academia and industry. However, several existing challenges have not garnered enough interest and serve as roadblocks to the widespread adoption of explainable ML.
Two of the most widely used frameworks for eXplainable AI are LIME and SHAP. The first produces local explanations, describing on what basis a particular prediction was made. The second can also be applied at a global scale, and thus tries to explain the overall impact a feature has on the model’s outcome.
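To make the local-explanation idea concrete, the sketch below implements the core mechanism behind LIME in a few lines: perturb the instance of interest, query the black-box model, and fit a proximity-weighted linear surrogate whose coefficients serve as local feature importances. This is a minimal illustration only; names such as `black_box` and `explain_local` are our own, not the actual LIME API, which additionally handles discretization, feature selection, and categorical data.

```python
import numpy as np

def black_box(X):
    # Hypothetical opaque model: a nonlinear function of two features.
    return X[:, 0] ** 2 + 3.0 * X[:, 1]

def explain_local(f, x0, n_samples=500, width=0.5, seed=0):
    """Fit a weighted linear surrogate around x0; return its coefficients."""
    rng = np.random.default_rng(seed)
    # Perturb the instance of interest with Gaussian noise.
    X = x0 + rng.normal(scale=width, size=(n_samples, x0.size))
    y = f(X)
    # Proximity kernel: perturbations closer to x0 get higher weight.
    d2 = ((X - x0) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * width ** 2))
    # Weighted least squares with an intercept column.
    A = np.hstack([np.ones((n_samples, 1)), X]) * np.sqrt(w)[:, None]
    b = y * np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef[1:]  # per-feature local importances (intercept dropped)

x0 = np.array([1.0, 0.0])
weights = explain_local(black_box, x0)
# Near x0 = (1, 0) the model's local slopes are roughly 2 and exactly 3,
# so the surrogate's coefficients should land close to those values.
```

Note that the cost here scales with the number of perturbed samples per explanation; explaining every prediction in a large dataset this way is exactly the workload that does not fit current single-machine implementations.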
The current implementations of SHAP and LIME are not efficient enough to operate on big data. Explaining models is compute-intensive, and the implementations are not built to be run on distributed systems. Therefore, explainability is usually carried out using a subset of the data, or a few dozen predictions. This is insufficient to properly apply eXplainable AI.
Your research aims to solve this problem by creating a scalable eXplainable AI algorithm. Ideally this would be a more efficient or highly scalable version of LIME or SHAP, but other approaches are welcome as well. Our hope is that your research assists us in explaining complex ML models on large datasets at our clients, which is currently unfeasible.
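One reason the problem is tractable is that post-hoc explanations are embarrassingly parallel: each instance can be explained independently, so the workload shards naturally across workers. The sketch below shows only that structure, using threads for brevity; a genuinely scalable version would target processes or a cluster framework such as Spark or Dask. The function `toy_explain` is a placeholder for a real SHAP or LIME call, not their actual API.

```python
from concurrent.futures import ThreadPoolExecutor

def toy_explain(instance):
    # Stand-in for one (expensive) explanation: score each feature by its
    # contribution to a simple additive model (placeholder logic only).
    return [v * w for v, w in zip(instance, (2.0, -1.0, 0.5))]

instances = [(1.0, 2.0, 4.0), (0.0, 1.0, 2.0), (3.0, 0.0, 1.0)]

# Shard the independent per-instance explanations across a worker pool.
with ThreadPoolExecutor(max_workers=4) as pool:
    explanations = list(pool.map(toy_explain, instances))
# Each row holds one instance's per-feature attributions.
```

The design point is that no coordination between workers is needed until the (cheap) aggregation step, which is what makes a distributed SHAP or LIME plausible in the first place.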
About Info Support Research Center
We anticipate upcoming and future challenges and ensure our engineers develop cutting-edge solutions based on the latest scientific insights. Our research community proactively tackles emerging technologies. We do this in cooperation with renowned scientists, making sure that research teams are positioned and embedded throughout our organisation and our community, so that their insights are applied directly to our business. We truly believe in sharing knowledge, so we do this without any restrictions.
Read more about Info Support Research here.