Create an AutoML-like solution that picks the most interpretable model from a range of good-enough candidate models.
- Artificial Intelligence
- Machine Learning
What do you get
- A challenging assignment within a practical environment
- € 1000 compensation, € 500 + lease car or € 600 + living space
- Professional guidance
- Courses aimed at your graduation period
- Support from our academic Research Center
- Two vacation days per month
What you will do
- 65% Research
- 10% Analyze, design, realize
- 25% Documentation
Explainable artificial intelligence (XAI) and machine learning models are used to justify a model's decision-making process. This added transparency aims to improve users' performance and their understanding of the underlying model. In practice, however, explainable systems still face many open questions and challenges.
Decision trees, SVMs, and linear and logistic regression models can be trained with different hyperparameters and even different feature sets. The explainability of the trained model varies with these parameters and features. For example, decision trees are not stable, so many equally accurate trees exist; that means we can pick the one that is most interpretable. We want to automatically search for the most explainable model.
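The search described above can be sketched in a few lines. This is a hypothetical illustration, not Info Support's implementation: it trains decision trees over a small hyperparameter grid, keeps every model whose test accuracy is within a tolerance of the best, and then picks the simplest one, using the number of leaves as a stand-in for interpretability. The dataset, the grid, and the 2% tolerance are all assumptions for the example.

```python
# Sketch (assumed setup, not the assignment's actual solution): search a small
# decision-tree hyperparameter grid, keep the "good enough" models, and return
# the most interpretable one (fewest leaves as an interpretability proxy).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

candidates = []
for max_depth in (2, 3, 4, 6, None):
    for min_leaf in (1, 5, 20):
        tree = DecisionTreeClassifier(
            max_depth=max_depth, min_samples_leaf=min_leaf, random_state=0
        ).fit(X_tr, y_tr)
        candidates.append((tree.score(X_te, y_te), tree))

best_acc = max(acc for acc, _ in candidates)
tolerance = 0.02  # "good enough": within 2 percentage points of the best model
good_enough = [t for acc, t in candidates if acc >= best_acc - tolerance]

# Among the good-enough models, prefer the simplest (most interpretable) one.
chosen = min(good_enough, key=lambda t: t.get_n_leaves())
print(f"best accuracy: {best_acc:.3f}, leaves in chosen tree: {chosen.get_n_leaves()}")
```

A full solution would generalize the same idea across model families (trees, SVMs, linear and logistic regression) and feature subsets, with a richer interpretability measure than leaf count.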
About Info Support Research Center
We anticipate upcoming and future challenges and ensure our engineers develop cutting-edge solutions based on the latest scientific insights. Our research community proactively tackles emerging technologies. We do this in cooperation with renowned scientists, and we make sure research teams are positioned and embedded throughout our organisation and community so that their insights are applied directly to our business. We truly believe in sharing knowledge, so we do this without any restrictions.
Read more about Info Support Research here.