
Working Paper: FAIR

Working Paper Series: Fairness in Artificial Intelligence Reasoning

Working Paper No. 1

By Birte Keller, Janine Baleis, Christopher Starke, and Frank Marcinkowski

Abstract:
This report provides an overview of some promising applications of artificial intelligence (AI) at German universities. Although the AI sector is booming, the higher education sector seems to have benefited little from this boom thus far. In any case, schools and universities are lower-priority targets for the development of new AI-based systems than, for example, medical diagnostics or individual transport. Against this opaque backdrop, the report aims to provide initial insights into the relevant applications currently being developed at and for universities in Germany. To this end, the report is based on a methodological triangulation. In the first step, relevant literature on AI in the university sector and existing state-of-the-art reports from other countries were evaluated. In the second step, an analysis of the official documents of German universities pertaining to AI and digitization strategies was carried out, as far as such papers were available. In the third step, 13 guideline-based expert interviews were conducted to confirm and extend the impressions gained from the relevant literature and from the document analysis. On this empirical basis, a number of the AI systems currently in use at tertiary education institutions are presented, the opportunities and risks associated with their use are discussed, and an outlook on the future development of such systems is given. Even if we cannot claim that this report provides a complete picture of the domain, it does highlight important fields of application and lines of development related to AI.
(Download)

Working Paper No. 2

By Jannik Dunkelau and Michael Leuschel

Abstract:
We provide an overview of the state of the art in fairness-aware machine learning and examine a wide variety of research articles in the area. We survey different fairness notions and algorithms for pre-, in-, and post-processing of data and models, and give an overview of available frameworks.
(Download)
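As a minimal illustration of one of the fairness notions covered by such surveys, the following Python sketch computes the demographic parity difference, i.e. the gap in positive-prediction rates between two protected groups. The function and the toy data are hypothetical and are not taken from the working paper.

    from typing import Sequence

    def demographic_parity_difference(y_pred: Sequence[int], group: Sequence[int]) -> float:
        """Absolute gap in positive-prediction rates between group 0 and group 1."""
        def rate(g: int) -> float:
            # Collect binary predictions for members of group g and average them.
            members = [p for p, a in zip(y_pred, group) if a == g]
            return sum(members) / len(members) if members else 0.0
        return abs(rate(0) - rate(1))

    # Toy example: binary predictions for six individuals, three per group.
    print(demographic_parity_difference([1, 0, 1, 1, 1, 1], [0, 0, 0, 1, 1, 1]))  # approx. 0.33

A value of 0 would indicate that both groups receive positive predictions at the same rate; pre-, in-, and post-processing methods differ in where in the pipeline they intervene to reduce such gaps.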

Working Paper No. 3

By Janine Baleis, Birte Keller, Christopher Starke, and Frank Marcinkowski

Abstract:
Artificial intelligence is increasingly used to make decisions that can have a significant impact on people's lives. These decisions can disadvantage certain groups of individuals. A central question that follows is whether justice is feasible in AI applications. It should therefore be considered which demands such applications have to meet and where the transfer of social order to algorithmic contexts still needs to be revised. Previous research efforts in the context of discrimination come from different disciplines and shed light on problems from specific perspectives on the basis of various definitions. An interdisciplinary approach to this topic is still lacking, which is why it seems sensible to systematically summarise research findings across disciplines in order to find parallels and combine common fairness requirements. This endeavour is the aim of this paper. As a result of the systematic review, it can be stated that the individual perception of fairness in AI applications is strongly context-dependent. In addition, transparency, trust, and individual moral concepts demonstrably influence the individual perception of fairness in AI applications. Within the interdisciplinary scientific discourse, fairness is conceptualised through various definitions, which is why there is as yet no consensus on a uniform definition of fairness in the scientific literature.
(Download)
