0000003075 20W 2SWS SE Seminar: Opening the Black Box, Trends in Explainable AI (IN2107)

Course - Detail View

General Information
Seminar: Opening the Black Box, Trends in Explainable AI (IN2107)
Wintersemester 2020/21
Informatics 4 - Chair of Software & Systems Engineering (Prof. Pretschner)
Assignments: 1
Course Details
The rapid deployment of systems with artificial intelligence (AI) components into all aspects of daily life requires that they be explainable. All involved stakeholders should be able to understand the decisions taken by AI systems (e.g., machine learning applications). Because of the complexity of these systems and their black-box nature, it is often hard to explain their results. Recently, however, the public has become increasingly interested in explaining the decisions, recommendations, or predictions that result from such systems [2, 3]. Further, explainability and interpretability in AI (xAI) and machine learning systems are now necessary for regulatory compliance, system improvement, and trust enhancement [3].

Many aspects of xAI are currently being researched by academics and practitioners. Examples include: How can we approximate the true criteria of classifiers? What is a good explanation for non-experts? What is an interpretable model of an artificial neural network? In this seminar, we look at similar questions and review the literature around them. We study approaches, tools, and examples related to xAI.
How does the seminar work?
a. Each student will study and analyze the literature around one aspect (a preliminary list will be provided later).
b. The literature should provide a general understanding of the approaches and tools for a specific aspect. Taxonomies or meta-models are a good starting point.
c. Each student will summarize the scientific progress in tackling one aspect in a paper at the end of the semester. Throughout the semester, the student will formulate research questions tailored to their topic.
A background in machine learning algorithms, deep learning, declarative AI, causality modeling, or any field specified in the list of topics is desirable but not required.
The seminar aims to review the literature around xAI and possibly identify research gaps in that domain. This seminar also aims to introduce students to the scientific method of critically reading, understanding, analyzing, explaining, and presenting existing scientific literature.
Students will read papers that are assigned to them by their supervisors. They are required to find further relevant research on the topic. Understanding the central statements of a paper includes highlighting, complementing, and explaining assumptions, as well as identifying deliberately or accidentally incomplete chains of argumentation, typically followed by examples. This understanding should be reflected in the written exposé. The exposé must include the problem tackled by the selected papers, as well as their respective central assumptions, arguments, and results. A highly motivated student is expected to come up with a classification scheme within which all selected publications can be neatly organized and their core ideas discussed in the context of the corresponding problem.
Possible topics
The preliminary list of related fields includes (but is not limited to):
i. Requirements, stakeholders, concepts, and taxonomies of xAI
ii. Local explanation
iii. Global explanation
iv. Case-based explanation
v. Argument-based explanation
vi. Counterfactual-based explanation
vii. Tools and case studies
viii. Deep learning explanation
To register for participation, you must identify yourself as a student in TUMonline.
Note: • Slides must be discussed with the supervisor at least one week before the presentation. The presentation must be held in English!
• Participation and attendance in all seminar presentations are mandatory. Students must read the final submissions of their colleagues and participate in the discussions.
• Registration for the seminar takes place via the TUM Online Matching System.
• Once successfully registered for the seminar:
o Students select at most three available individual seminar topics of their choice.
o Send the selected topics via email (subject: "xAI seminar") in order of preference, from 1 (most preferred topic) to 3, to Amjad Ibrahim.
• Once allotted a topic (you will receive a confirmation email):
o Students must acknowledge their acceptance of the topic and participation in the seminar by TBA at the latest.
o Students who wish to quit the seminar must send a cancellation email by TBA; otherwise, they will receive a grade of 5.0.
2. Miller, Tim. "Explanation in Artificial Intelligence: Insights from the Social Sciences." Artificial Intelligence 267 (2019): 1–38.
3. Mittelstadt, Brent, Chris Russell, and Sandra Wachter. "Explaining Explanations in AI." Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 279–288. ACM, 2019.
4. Molnar, Christoph. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable.
Online Materials
E-Learning Course (Moodle)