
Research on digitalization

Like every technological revolution, digitalization and the increasing spread of AI systems harbor not only great opportunities but also major risks. These include data security, which is becoming salient as countless data traces are collected, aggregated, and analyzed in almost all areas of life. Likewise, questions of ethics and fairness are coming into focus against the backdrop of discriminatory algorithms. Nevertheless, the delegation of numerous tasks and decisions to AI systems seems to be viewed largely uncritically, as evidenced by the willingness of many people, young people in particular, to make their personal data available. This is by no means self-evident; on the contrary, it calls for explanation. Why do many people either not perceive the dangers of digitalization at all or dismiss them as negligible collateral damage? Under which conditions do attitudes toward digitalization change, and when do they become relevant for action? (How) can socially responsible AI systems be designed? How fair are algorithmic decisions perceived to be? These are some of the central questions addressed in this research field. Research in this area provides the basic knowledge necessary for the overarching goal of securing and strengthening democracy in the digital society.


Opinion Monitor Artificial Intelligence - Meinungsmonitor Künstliche Intelligenz [MeMo:KI]

04/2021 – 03/2024

Externally funded research project

Stiftung Mercator

At the end of the 2010s, artificial intelligence (AI) appeared to be experiencing an "AI spring".
It is widely seen as having immense potential for change in various areas of society, encompassing both opportunities and risks. Political strategy papers state human-centered AI design, following principles of social and democratic compatibility, as the goal. Drawing on a political-economy premise, the project assumes that this goal can only be achieved by politicizing the topic. The project therefore monitors the evolution of public and published opinion on AI using regular population surveys, manual and automated media content analyses, and analyses of social media discourse. The data is made available to a wide audience on the project website.

Project website: cais.nrw/memoki

Prof. Dr. Frank Marcinkowski, Pero Došenović, Kimon Kieslich

In research partnership with the Center for Advanced Internet Studies (CAIS)

Responsible Academic Performance Prediction [RAPP]

03/2021 – 02/2024

Externally funded research project

German Federal Ministry of Education and Research [Bundesministerium für Bildung und Forschung (BMBF)]

Academic Performance Prediction (APP) systems, i.e., AI-based systems that predict student performance in higher education, promise the early detection of potential failures, enabling universities to deploy resources in a targeted manner and prevent dropouts through individual support measures. However, a study conducted at Heinrich Heine University shows that students view the use of such AI-based systems as problematic where their own data and academic planning are concerned. This represents a serious obstacle to the deployment and success of such systems.

The aim of this project is therefore a socially acceptable use of AI systems; to this end, ethical aspects and their perception by those affected are researched. Our colleagues from computer science are developing an AI system for Academic Performance Prediction in which a rule-based explanation component creates extensive transparency for those affected. At the same time, our work at the division of 'Communication and Media Studies I' focuses on the use of this system, on the data required for prediction from technical and ethical perspectives, and on student perceptions, which we study in laboratory and field experiments. From this, recommendations for the use of such systems will ultimately be derived in collaboration with the responsible bodies within the university.


Project website: https://rapp.hhu.de/en/



Prof. Dr. Frank Marcinkowski, Birte Keller, Dr. Marco Lünich

In interdisciplinary cooperation with Prof. Dr. Stefan Conrad (computer science, HHU), Prof. Dr. Michael Leuschel (computer science, HHU), Prof. Dr. Ulrich Rosar (sociology, HHU), Dr. Johannes Krause (sociology, HHU), Dr. Christopher Starke (media studies, University of Amsterdam)

On the Legitimacy of the Implementation of Artificial Intelligence Systems [working title]

03/2021 – expected completion 2025

Dissertation project


The PhD project focuses on the question of how legitimacy for the use of artificial intelligence (AI) systems can be created despite the many risks associated with the technology.
In the scientific literature, this question is frequently discussed in terms of the design of AI systems, and numerous ethical guidelines on algorithmic design have been developed. Against the backdrop of various known risks (e.g., discrimination, data security, lack of human autonomy), however, it is surprising that only questions of "how" are raised, but not questions of "whether". The sociology of justification therefore serves as the theoretical foundation of the thesis, shedding light, first theoretically and then empirically, on the bases of justification used to legitimize the use of AI systems. The university is chosen as the exemplary application context for AI.

Birte Keller

Big Data Belief System

09/2014 – 08/2020

Completed dissertation project


The emergence of the digital society is accompanied by powerful narratives such as that of the knowledge society. In this context, the possibility of collecting and analyzing large amounts of digital data (i.e., Big Data) is emphasized. Big Data is associated with expectations of producing social knowledge and deriving benefits from it. The literature assumes that conviction about the consequences of quantifying the world and the social also takes hold in people's minds and becomes relevant to attitudes. In two sequential studies, this dissertation explores the assumption of widespread beliefs about the quality and efficacy of digital data and analyzes collective beliefs in gains in knowledge and utility from Big Data. For this purpose, a standardized survey instrument measuring a Big Data Belief System (BDBS) is designed and tested. Applying this instrument in diverse research contexts involving artificial intelligence technologies, the dissertation examines the extent to which the presence of the BDBS explains attitudes toward phenomena of digitalization.

The dissertation was published in 2022 by Springer Verlag under the title "Der Glaube an Big Data. Eine Analyse gesellschaftlicher Überzeugungen von Erkenntnis- und Nutzengewinnen aus digitalen Daten" (The Belief in Big Data: An Analysis of Societal Beliefs in Gains in Knowledge and Utility from Digital Data).

Dr. Marco Lünich

Fair Artificial Intelligence Reasoning (FAIR)

05/2019 – 04/2020

Completed externally funded research project


The goal of the completed project "Fair Artificial Intelligence Reasoning (FAIR)", funded within the Volkswagen Foundation's "Artificial Intelligence and Its Impact on Tomorrow's Society" funding line, was to examine decisions based on artificial intelligence (AI) in terms of fairness. The project focused on higher education as an application area: on the one hand, an increase in potential AI applications can be expected there; on the other hand, there is a great risk of discrimination when algorithms influence the future of students, for example by deciding on their admission to a degree program or on their grades. In collaboration with computer scientists and sociologists from Heinrich Heine University Düsseldorf, a common working definition of fair AI was first developed through a systematic review of the relevant scientific literature in the social sciences and computer science. In addition, potential application areas in which fair algorithms become salient in a university context were identified. These efforts laid the groundwork for the subsequently acquired RAPP project.

Prof. Dr. Frank Marcinkowski, Janine Baleis, Birte Keller, Dr. Christopher Starke

In interdisciplinary cooperation with Prof. Dr. Stefan Conrad (computer science, HHU), Prof. Dr. Stefan Harmeling (computer science, HHU), Prof. Dr. Michael Leuschel (computer science, HHU), Prof. Dr. Ulrich Rosar (sociology, HHU)
