
About Marcel Spitzer

The author has not yet provided any details.
So far, Marcel Spitzer has written 2 blog posts.

Machine Learning Interpretability: Explaining Blackbox Models with LIME (Part II)

2019-06-04

The idea behind the model-agnostic technique LIME is to approximate a complex model locally with an interpretable model and to use that simple surrogate to explain the prediction for a particular instance of interest.

This is the second part of our series about Machine Learning interpretability. We want to describe LIME (Local Interpretable Model-Agnostic Explanations), a popular technique …
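A minimal sketch of the local-surrogate idea summarized above, using the Python `lime` package together with scikit-learn. The dataset, the random forest "blackbox" and all parameter values are illustrative assumptions, not taken from the article itself:

```python
# Sketch only: dataset and model are assumptions for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target

# The complex "blackbox" model whose individual predictions we want to explain.
blackbox = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# LIME perturbs the instance of interest, queries the blackbox on the
# perturbed samples and fits a simple, interpretable surrogate locally.
explainer = LimeTabularExplainer(
    X,
    feature_names=data.feature_names,
    class_names=data.target_names,
    discretize_continuous=True,
)

explanation = explainer.explain_instance(
    X[0], blackbox.predict_proba, num_features=5
)
print(explanation.as_list())  # top local feature contributions for this instance
```

The printed list gives the features that drive the blackbox prediction for this one instance, which is exactly the kind of local explanation the post discusses.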


Machine Learning Interpretability: Do You Know What Your Model Is Doing?

2019-04-02

Unlike the usual performance metrics, fairness, safety and transparency of machine learning models are much harder, if not impossible, to quantify. Here are some techniques (and examples) that provide interpretability and make decision systems understandable not only for their creators, but also for their customers and users.

Machine learning has great potential to improve data products and business processes. It is used to propose products and news articles that we might be interested in …
