Working Paper Series: Fairness in Artificial Intelligence Reasoning
Artificial intelligence (AI) increasingly permeates all areas of social life. The education sector faces a twofold challenge. On the one hand, it must ensure the training of highly qualified AI experts. On the other hand, AI-based systems (e.g., learning analytics, drop-out detection systems) are profoundly changing research and teaching at universities. While proponents of such technologies expect AI to improve the quality of university education and to strengthen the overall efficiency of higher education institutions, critics fear that automated systems could reproduce or even amplify social inequalities. If decisions about access to educational programs or academic success are increasingly made by AI, central questions of fairness, accountability, and transparency arise.
This is where the interdisciplinary FAIR/HE project comes in: it investigates the technological and social conditions under which a fair and socially acceptable implementation of AI-based systems at German universities can succeed. The project distinguishes "two faces" of fairness: (1) objective discrimination caused by unfair data input or flawed algorithms, and (2) subjectively perceived (un)fairness on the part of those affected. To adequately study both forms of fairness and their interplay, cooperative research between computer scientists and social scientists is indispensable. The interdisciplinary FAIR/HE consortium helps prepare German universities for the challenges and opportunities of AI. The project will develop procedures and solutions for the fair handling of data, tools for designing non-discriminatory and comprehensible algorithms, and valuable design knowledge about the cognitive and emotional reactions of those affected.
Project leaders: Frank Marcinkowski, Ulrich Rosar, Christopher Starke (Institut für Sozialwissenschaften), Stefan Conrad, Stefan Harmeling, Michael Leuschel (Institut für Informatik)
Working Paper No. 1
By Birte Keller, Janine Baleis, Christopher Starke, and Frank Marcinkowski
Abstract:
This report provides an overview of some promising applications of artificial intelligence (AI) in German universities. Although the AI sector is booming, the higher education sector seems to have benefited little from this boom thus far. Schools and universities remain a lower priority in the development of new AI-based systems than, for example, medical diagnostics or individual transport. Against this opaque backdrop, the report aims to provide initial insights into the relevant applications currently being developed at and for universities in Germany. To this end, it draws on a methodological triangulation. In the first step, relevant literature on AI in the university sector and existing state-of-the-art reports from other countries were evaluated. In the second step, the official documents of German universities pertaining to AI and digitization strategies were analyzed, as far as such papers were available. In the third step, 13 guideline-based expert interviews were conducted to confirm and extend the impressions gained from the literature and the document analysis. On this empirical basis, a selection of the AI systems currently in use at tertiary education institutions is presented, the opportunities and risks associated with their use are discussed, and likely lines of future development are outlined. While the report cannot claim to provide a complete picture of the domain, it highlights important fields of application and lines of development related to AI.
Working Paper No. 2
By Jannik Dunkelau and Michael Leuschel
Abstract:
We provide an overview of the state of the art in fairness-aware machine learning and examine a wide variety of research articles in the area. We survey different fairness notions as well as algorithms for pre-, in-, and post-processing of the data and models, and conclude with an overview of available frameworks.
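To make the idea of a "fairness notion" concrete, the minimal Python sketch below computes the demographic-parity difference of a binary classifier, one of the standard criteria covered in such surveys. The function name and the toy data are illustrative assumptions, not taken from the paper or from any specific framework.

```python
import numpy as np

# Minimal sketch of one widely used fairness notion: demographic
# (statistical) parity. All names here are illustrative and do not
# refer to the paper or to any particular fairness framework.

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    0.0 means the classifier satisfies demographic parity exactly;
    larger values indicate a larger disparity between the groups.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive rate in group 0
    rate_1 = y_pred[group == 1].mean()  # positive rate in group 1
    return abs(rate_0 - rate_1)

# Toy example: six applicants, binary predictions, binary group label.
predictions = [1, 0, 1, 1, 0, 0]
groups = [0, 0, 0, 1, 1, 1]
print(demographic_parity_difference(predictions, groups))  # ~0.33
```

Pre-, in-, and post-processing approaches differ in where they intervene to reduce such a gap: before training (rewriting the data), during training (constraining the model), or after training (adjusting the predictions).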
Working Paper No. 3
By Janine Baleis, Birte Keller, Christopher Starke, and Frank Marcinkowski
Abstract:
Artificial intelligence is increasingly used to make decisions that can have a significant impact on people's lives. These decisions can disadvantage certain groups of individuals. A central question that follows is how fairness can be achieved in AI applications. This requires considering which demands such applications have to meet and where the transfer of social norms to algorithmic contexts still falls short. Previous research on discrimination comes from different disciplines, each of which sheds light on the problem from its own perspective and on the basis of its own definitions. An interdisciplinary approach to the topic is still lacking, which is why it seems sensible to systematically summarise research findings across disciplines in order to identify parallels and combine common fairness requirements. This endeavour is the aim of this paper. The systematic review shows that the individual perception of fairness in AI applications is strongly context-dependent. In addition, transparency, trust and individual moral concepts demonstrably influence the individual perception of fairness in AI applications. Within the interdisciplinary scientific discourse, fairness is conceptualized through various definitions, which is why there is no consensus on a uniform definition of fairness in the scientific literature to date.