

Two presentations at the ACM Conference on Fairness, Accountability, and Transparency (FAccT '24) in Brazil now published

Members of the chair presented two papers at the 7th ACM FAccT conference, which took place from June 3 to 6 in Rio de Janeiro, Brazil.

In the first paper, Kimon Kieslich and Marco Lünich examine the demand for the regulation of biometric remote identification in German society. In a factorial survey, they analyze the effects of trust in AI and in the law enforcement system as well as perceptions of discrimination on support for regulation in four different use cases.

In the second paper, Marco Lünich and Birte Keller deal with students' fairness perceptions of AI-based performance prediction systems. This study likewise used a factorial survey design, analyzing how two characteristics of the decision trees used for student performance prediction, accuracy and simplicity, influence students' informational and distributive fairness perceptions. In addition, the roles of understanding the causal relationships of the decision-making (causability) and the level of institutional trust were considered as mediating and moderating factors.

Both papers were published after the conference:

Kieslich, K., & Lünich, M. (2024). Regulating AI-Based Remote Biometric Identification. Investigating the Public Demand for Bans, Audits, and Public Database Registrations. In ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT ’24), June 3–6, 2024, Rio de Janeiro, Brazil. https://doi.org/10.1145/3630106.3658548

Lünich, M., & Keller, B. (2024). Explainable Artificial Intelligence for Academic Performance Prediction. An Experimental Study on the Impact of Accuracy and Simplicity of Decision Trees on Causability and Fairness Perceptions. In ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT ’24), June 3–6, 2024, Rio de Janeiro, Brazil. https://doi.org/10.1145/3630106.3658953


New Publication in Technology, Knowledge and Learning: On students' perception of the fairness of different distributive justice norms

Lünich, M., Keller, B., & Marcinkowski, F. (2023). Fairness of Academic Performance Prediction for the Distribution of Support Measures for Students: Differences in Perceived Fairness of Distributive Justice Norms. Technology, Knowledge and Learning. https://doi.org/10.1007/s10758-023-09698-y

Artificial intelligence in higher education is becoming more prevalent as it promises to improve and accelerate administrative processes concerning student support, aiming to increase student success and graduation rates. For instance, Academic Performance Prediction (APP) provides individual feedback and serves as the foundation for distributing student support measures. However, the use of APP with all its challenges (e.g., inherent biases) significantly impacts the future prospects of young adults. Therefore, it is important to weigh the opportunities and risks of such systems carefully and involve affected students in the development phase. This study addresses students’ fairness perceptions of the distribution of support measures based on an APP system. First, we examine how students evaluate three different distributive justice norms, namely, equality, equity, and need. Second, we investigate whether fairness perceptions differ between APP based on human or algorithmic decision-making, and third, we address whether evaluations differ between students studying science, technology, engineering, and math (STEM) or social sciences, humanities, and the arts for people and the economy (SHAPE), respectively. To this end, we conducted a cross-sectional survey with a 2 × 3 factorial design among n = 1378 German students, in which we utilized the distinct distribution norms and decision-making agents as design factors. Our findings suggest that students prefer an equality-based distribution of support measures, and this preference is not influenced by whether APP is based on human or algorithmic decision-making. Moreover, the field of study does not influence the fairness perception, except that students of STEM subjects evaluate a distribution based on the need norm as more fair than students of SHAPE subjects. Based on these findings, higher education institutions should prioritize student-centric decisions when considering APP, weigh the actual need against potential risks, and establish continuous feedback through ongoing consultation with all stakeholders.

AI | Conflicts | Conventions - Political Communication Conference in Düsseldorf

This year's Joint Annual Conference of the Section "Communication and Politics" of the German Communication Association (DGPuK), the Working Group "Politics and Communication" of the German Political Science Association (DVPW), and the Section "Political Communication" of the Swiss Association of Communication and Media Research (SGKM) is organized by the Düsseldorf Institute for Internet and Democracy (DIID) at Heinrich Heine University Düsseldorf with the participation of Marco Lünich and will take place from June 28 to 30, 2023, at the 'Haus der Universität'. The conference will focus on the metatrend of advancing digitalization in political communication research under the title "AI | Conflicts | Conventions" [KI | Konflikte | Konventionen].

With Marco Lünich and Birte Keller, two members of the chair will present their research at the conference. While Marco Lünich's talk will deal with the temporality of data and evidence-based political communication and decision-making processes as well as their problematization, theorization, and desiderata, Birte Keller will present a document analysis of educational policy justifications for the use of artificially intelligent systems.

Registration for the conference is still possible until June 21, 2023.

Upcoming Presentation at Learning AID Conference in Bochum in August

Marco Lünich, Birte Keller and Frank Marcinkowski will present at this year's conference 'Learning Analytics, Artificial Intelligence and Data Mining in Higher Education' (Learning AID) on August 28-29 in Bochum, as part of the session 'Privacy, Ethics and Policy', giving a talk titled "Student perceptions of learning analytics and their consequences for attitudes, preferences, and behavioral intentions using Academic Performance Prediction as an example - results of a representative survey and implications for the adoption of AI in higher education" [Die studentische Wahrnehmung von Learning Analytics und ihre Konsequenzen für Einstellungen, Präferenzen und Verhaltensintentionen am Beispiel von Academic Performance Prediction - Ergebnisse einer Repräsentativbefragung und Implikationen für die Einführung von KI an der Hochschule].



Presentation at the 68th Annual Conference of the German Communication Association (DGPuK) in May 2023

Researchers of the chair will give two presentations at the 68th Annual Conference of the German Communication Association (DGPuK), which will take place in Bremen from May 18 to 20, 2023.

Albina Maxhuni, Marco Lünich, Birte Keller, and Frank Marcinkowski will present a paper entitled "Hegemonic Technology Implementation in Higher Education - A Qualitative Analysis of Affected Students' Perceptions of Harm in the Adoption of Dropout Detection". The presentation focuses on the results of a qualitative content analysis that looks at student perceptions of performance prediction systems in the higher education sector and investigates which technical and social as well as individual and societal risks emanating from AI applications are feared by those affected.

In addition, Jule Roth, Marco Lünich, and Christopher Starke will give a talk presenting the findings of an empirical study on the perceived legitimacy of algorithmic decision-making processes. The title of the talk is "With AI through the Crisis? Legitimacy Perceptions of AI-Assisted Energy Policy Decision Processes."

New Publication in Big Data & Society - Fairness perceptions of algorithmic decision-making: A systematic review of the empirical literature

Starke, C., Baleis, J., Keller, B., & Marcinkowski, F. (2022). Fairness perceptions of algorithmic decision-making: A systematic review of the empirical literature. Big Data & Society, 9(2). https://doi.org/10.1177/20539517221115189

Algorithmic decision-making increasingly shapes people's daily lives. Given that such autonomous systems can cause severe harm to individuals and social groups, fairness concerns have arisen. A human-centric approach demanded by scholars and policymakers requires considering people's fairness perceptions when designing and implementing algorithmic decision-making. We provide a comprehensive, systematic literature review synthesizing the existing empirical insights on perceptions of algorithmic fairness from 58 empirical studies spanning multiple domains and scientific disciplines. Through thorough coding, we systemize the current empirical literature along four dimensions: (1) algorithmic predictors, (2) human predictors, (3) comparative effects (human decision-making vs. algorithmic decision-making), and (4) consequences of algorithmic decision-making. While we identify much heterogeneity around the theoretical concepts and empirical measurements of algorithmic fairness, the insights come almost exclusively from Western-democratic contexts. By advocating for more interdisciplinary research adopting a society-in-the-loop framework, we hope our work will contribute to fairer and more responsible algorithmic decision-making.

Presentation of the RAPP project at the workshop "(Un)Fairness of Artificial Intelligence" in Amsterdam (October 27–28, 2022)

On October 27–28, 2022, the project team of the Research Priority Area Human(e) AI at the University of Amsterdam hosted an interdisciplinary workshop on "(Un)Fairness of Artificial Intelligence". The RAPP project team of the chair, consisting of Marco Lünich, Birte Keller, and Frank Marcinkowski, followed the call for papers and took the opportunity to present a paper entitled "The effects of students' distributive justice norm preferences on the evaluation of Artificial Intelligence in higher education" to the attending researchers from all over Europe.

MeMo:KI at KI.Forum.NRW 2022 on AI and sustainability

At the KI.Forum.NRW 2022 on October 26, Prof. Dr. Frank Marcinkowski will report which connections citizens make between ecological sustainability and artificial intelligence - and which they do not. The data comes from our project Meinungsmonitor Künstliche Intelligenz (MeMo:KI). All further info on the event can be found here: www.kiforum.nrw.

German Publication in "Grenzen, Probleme und Lösungen bei der Stichprobenziehung": Die Auswahl von Zeiträumen und Startzeitpunkten in der Zeitreihenanalyse

Kohler, S. (2022). Die Auswahl von Zeiträumen und Startzeitpunkten in der Zeitreihenanalyse. In J. Jünger, U. Gochermann, C. Peter, & M. Bachl (Eds.), Grenzen, Probleme und Lösungen bei der Stichprobenziehung (pp. 353–378). Köln: Herbert von Halem-Verlag.

The variable of time plays a central role in communication science, yet discussions of and engagement with this variable are rather rare. With reference to agenda-setting research, the chapter first discusses which criteria in the scientific research process can influence the selection of time periods. It then examines two decisions in the analysis of time-based data through secondary analysis: the starting point and the length of time periods when aggregating time series. The results show that extreme events in particular, such as a terrorist attack, require special consideration in order to capture temporal trends adequately.

MeMo:KI and DIID at the Night of Science

On September 9, 2022, from 5 pm to midnight, citizens will have the opportunity to learn about various research fields at numerous information booths, lectures, and talk sessions at Schadowplatz and the Haus der Universität in Düsseldorf's city center. Our project Meinungsmonitor Künstliche Intelligenz (Opinion Monitor Artificial Intelligence) will also be on site, providing insights into the new dashboard on public opinion as well as analyses of media coverage and Twitter communication about artificial intelligence. Esther Laukötter will be available to answer questions at the joint booth with the Düsseldorf Institute for Internet and Democracy (DIID).

Benefits and detriments of interdisciplinarity on early career scientists' performance - Publication in PLOS ONE

Unger, S., Erhard, L., Wiczorek, O., Koß, C., Riebling, J., & Heiberger, R. H. (2022). Benefits and detriments of interdisciplinarity on early career scientists' performance. An author-level approach for U.S. physicists and psychologists. PLOS ONE, Online First. https://doi.org/10.1371/journal.pone.0269991

Is the pursuit of interdisciplinary or innovative research beneficial or detrimental for the impact of early career researchers? We focus on young scholars as they represent an understudied population who have yet to secure a place within academia. Which effects promise higher scientific recognition (i.e., citations) is therefore crucial for the high-stakes decisions young researchers face. To capture these effects, we introduce measurements for interdisciplinarity and novelty that can be applied to a researcher’s career. In contrast to previous studies investigating research impact on the paper level, hence, our paper focuses on a career perspective (i.e., the level of authors). To consider different disciplinary cultures, we utilize a comprehensive dataset on U.S. physicists (n = 4003) and psychologists (n = 4097), who graduated between 2008 and 2012, and traced their publication records. Our results indicate that conducting interdisciplinary research as an early career researcher in physics is beneficial, while it is negatively associated with research impact in psychology. In both fields, physics and psychology, early career researchers focusing on novel combinations of existing knowledge are associated with higher future impact. Taking some risks by deviating to a certain degree from mainstream paradigms seems therefore like a rewarding strategy for young scholars.

How Is Socially Responsible Academic Performance Prediction Possible? - Publication in "Strategy, Policy, Practice, and Governance for AI in Higher Education Institutions"

Keller, B., Lünich, M., & Marcinkowski, F. (2022). How Is Socially Responsible Academic Performance Prediction Possible?: Insights From a Concept of Perceived AI Fairness. In F. Almaraz-Menéndez, A. Maz-Machado, C. López-Esteban, & C. Almaraz-López (Eds.), Strategy, Policy, Practice, and Governance for AI in Higher Education Institutions (pp. 126–155). IGI Global. https://doi.org/10.4018/978-1-7998-9247-2.ch006

The availability of big data at universities enables the use of artificial intelligence (AI) systems in almost all areas of the institution: from administration to research, to learning and teaching, the use of AI systems is seen as having great potential. One promising area is academic performance prediction (APP), which is expected to provide individual feedback for students, improve their academic performance and ultimately increase graduation rates. However, using an APP system also entails certain risks of discrimination against individual groups of students. Thus, the fairness perceptions of affected students come into focus. To take a closer look at these perceptions, this chapter develops a framework of the “perceived fairness” of an ideal-typical APP system, which asks critical questions about input, throughput and output, and based on the four-dimensional concept of organizational justice, sheds light on potential (un-)fairness perceptions from the students' point of view.

The Effect of Science-Related Populism on Vaccination Attitudes and Decisions: Publication in Journal of Behavioral Medicine

Kohler, S., & Koinig, I. (2022). The Effect of Science-Related Populism on Vaccination Attitudes and Decisions. Journal of Behavioral Medicine. https://doi.org/10.1007/s10865-022-00333-2

As the COVID-19 pandemic has sadly shown, the decision against vaccination is often linked to political ideologies and populist messages among specific segments of the population: People do not only have concerns about a potential health risk associated with vaccination but seem to have also adopted more populist attitudes towards science. In this study, the relationship between science-related populism and individuals’ attitudes towards vaccination was examined, presuming that scientific-related populism also influences individual responses towards different vaccinations. As different types of diseases and their vaccines might be perceived rather distinctively by the public, different vaccinations were considered. The survey is based on responses from 870 people from Germany and Austria. Results indicate that science-related populism influences responses towards some vaccination types, especially for those that receive extensive media coverage such as COVID-19 and measles (MMR). There was no significant impact of science-related populism on individuals’ vaccination intentions for other vaccines like seasonal influenza, human papillomavirus, or tick-borne encephalitis. In conclusion, limitations and directions for future research are addressed.

MeMo:KI [Opinion Monitor Artificial Intelligence] at the Theme Development Workshop of the Vision4AI project

At the Theme Development Workshop of the EU-funded Vision4AI project "AI: Mitigating Bias & Disinformation", Pero Došenović has been invited as an expert in the breakout session "Science Communication with and on AI". There, he will discuss with participants the challenges of reaching a potential audience for science communication about AI, drawing on data from the Opinion Monitor Artificial Intelligence.

AI-Ethics by Design: Publication in Big Data & Society

Kieslich, K., Keller, B. & Starke, C. (2022). AI-Ethics by Design. Evaluating Public Perception on the Importance of Ethical Design Principles of AI. Big Data & Society, 1-15. https://doi.org/10.1177/20539517221092956

Despite the immense societal importance of ethically designing artificial intelligence, little research on the public perceptions of ethical artificial intelligence principles exists. This becomes even more striking when considering that ethical artificial intelligence development has the aim to be human-centric and of benefit for the whole society. In this study, we investigate how ethical principles (explainability, fairness, security, accountability, accuracy, privacy, and machine autonomy) are weighted in comparison to each other. This is especially important, since simultaneously considering ethical principles is not only costly, but sometimes even impossible, as developers must make specific trade-off decisions. In this paper, we give first answers on the relative importance of ethical principles given a specific use case—the use of artificial intelligence in tax fraud detection. The results of a large conjoint survey (n = 1099) suggest that, by and large, German respondents evaluate the ethical principles as equally important. However, subsequent cluster analysis shows that different preference models for ethically designed systems exist among the German population. These clusters substantially differ not only in the preferred ethical principles but also in the importance levels of the principles themselves. We further describe how these groups are constituted in terms of sociodemographics as well as opinions on artificial intelligence. Societal implications, as well as design challenges, are discussed.

Exploring the roles of trust and social group preference on the legitimacy of algorithmic decision-making vs. human decision-making for allocating COVID-19 vaccinations: Publication in AI & Society

Lünich, M., & Kieslich, K. (2022). Exploring the roles of trust and social group preference on the legitimacy of algorithmic decision-making vs. human decision-making for allocating COVID-19 vaccinations. AI & Society, 1-19. https://doi.org/10.1007/s00146-022-01412-3

In combating the ongoing global health threat of the COVID-19 pandemic, decision-makers have to take actions based on a multitude of relevant health data with severe potential consequences for the affected patients. Because of their presumed advantages in handling and analyzing vast amounts of data, computer systems of algorithmic decision-making (ADM) are implemented and substitute humans in decision-making processes. In this study, we focus on a specific application of ADM in contrast to human decision-making (HDM), namely the allocation of COVID-19 vaccines to the public. In particular, we elaborate on the role of trust and social group preference on the legitimacy of vaccine allocation. We conducted a survey with a 2 × 2 randomized factorial design among n = 1602 German respondents, in which we utilized distinct decision-making agents (HDM vs. ADM) and prioritization of a specific social group (teachers vs. prisoners) as design factors. Our findings show that general trust in ADM systems and preference for vaccination of a specific social group influence the legitimacy of vaccine allocation. However, contrary to our expectations, trust in the agent making the decision did not moderate the link between social group preference and legitimacy. Moreover, the effect was also not moderated by the type of decision-maker (human vs. algorithm). We conclude that trustworthy ADM systems do not necessarily lead to the legitimacy of ADM systems.

Luisa-Sophie Lasenga supports KMW I in the secretary's office

Since April 11, 2022, Luisa-Sophie Lasenga has been supporting the team of the chair Communication and Media Studies I in the secretariat.

Big Data Belief System: German publication of the dissertation of Dr. Marco Lünich

Lünich, M. (2022). Der Glaube an Big Data. Eine Analyse gesellschaftlicher Überzeugungen von Erkenntnis- und Nutzengewinnen aus digitalen Daten. Springer Fachmedien Wiesbaden. https://doi.org/10.1007/978-3-658-36368-0

The emergence of the digital society is accompanied by powerful narratives such as that of the knowledge society. In this context, the possibility of collecting and analyzing large stocks of digital data, known as big data, is emphasized. Big data is associated with expectations regarding societal knowledge production and the benefits derived from it. The scholarly literature suggests that convictions about the consequences of a quantification of the world and of the social also take hold in people's minds and become relevant to their attitudes. In two consecutive studies, the dissertation investigates the assumption of widespread beliefs about the quality and efficacy of digital data and analyzes collective convictions about gains in knowledge and utility from big data. To this end, a measurement of a Big Data Belief System (BDGS) for standardized survey research is designed and tested. By employing this instrument in various research contexts concerning the use of artificial intelligence technologies, the dissertation examines to what extent the presence of the BDGS explains attitudes towards phenomena of digitalization.

German publication in Studies in Communication and Media: Validierung von NER-Verfahren zur automatisierten Identifikation von Akteuren in deutschsprachigen journalistischen Texten

Buz, C., Promies, N., Kohler, S. & Lehmkuhl, M. (2021). Validierung von NER-Verfahren zur automatisierten Identifikation von Akteuren in deutschsprachigen journalistischen Texten. Studies in Communication and Media, 10(4), 590 - 627. doi.org/10.5771/2192-4007-2021-4-590

The aim of this paper is the validation of a method that can be used to automate a sub-step in the content analysis of text data. The method under investigation is called Named Entity Recognition (NER) and is specialized in the automated identification and extraction of proper names (persons, organizations, places) in texts. In communication science, automated methods are increasingly used for the analysis of large amounts of text, but there are hardly any studies dealing with the validity of the automatically obtained results. The aim of the study presented here is to test the suitability of such a procedure for future, extensive actor analyses. These allow comprehensive, cross-media comparisons of general news coverage, as well as the quantitative analysis of the occurrence, frequency, and diversity of the named actors or institutions over long periods of time. Since these NER methods are developed and trained using specific annotated text data, it is uncertain whether they achieve precise and correct identification of entities in previously unseen journalistic news articles. To evaluate this, this work applies three different NER methods and compares the outcome of these automated analyses with the results of a manual content analysis. The results show that there is a high concordance between the manually and automatically identified personal names. For the automated identification of the names of organisations, the match rate with the manual codings appears to be lower.

Workshop "AI in Political Communication Research: Theoretical Perspectives and Empirical Questions"

As part of DGPuK 22, Marco Lünich, Carina Weinmann, Pero Došenović and Kimon Kieslich initiate the workshop "AI in Political Communication Research: Theoretical Perspectives and Empirical Questions".

The objectives of this workshop, which is aimed at communication scholars with a research interest in artificial intelligence (AI) technologies in political communication, include: creating an overview of current research questions and approaches, promoting scholarly exchange, forming networks and cross-institutional research projects, and formulating and pursuing new research questions.

The workshop will take place on February 22, 2022, at the Haus der Universität in Düsseldorf from 12:00 to 16:00. Registration is possible via the online portal of DGPuK 2022: www.moodle-dgpuk22.de/login/index.php


New publication in research area Political Communication

Marcinkowski, F. (2020). Systemtheorie und politische Kommunikation. In I. Borucki, K. Kleinen-von-Königslöw, S. Marschall, & T. Zerback (Hrsg.) Handbuch Politische Kommunikation. Wiesbaden: Springer VS. https://doi.org/10.1007/978-3-658-26242-6_5-1

First results of the Opinion Monitor Artificial Intelligence published

Since the beginning of May, the Opinion Monitor Artificial Intelligence (Meinungsmonitor Künstliche Intelligenz [MeMo:KI]) has been examining the public's attitudes toward AI issues every two weeks. How intensively do citizens engage with this technology? How is the use of AI in different fields of application evaluated? And what significance does AI have for future election decisions? First data from the MeMo:KI population surveys are now available in a beta version of our interactive dashboard.

In addition to the recurring questions, the regular surveys also repeatedly address special topics. Currently, society as a whole is concerned with the Corona pandemic. Worldwide, various AI applications for fighting the virus are in development or already in use, and some of these technological solutions are also conceivable in Germany. But what about public acceptance of such applications? About 2,000 people were interviewed in mid-May and mid-June. For most applications, approval was in part considerably higher than for an unspecific question on the use of AI in the healthcare sector. Especially in the field of anti-corona research, the use of AI is supported by a broad majority: three-quarters of the respondents support the use of AI in the research of active substances. There is less support for applications that are intended to move closer into people's private sphere or to answer existential questions. For further information, please contact us directly.

New publication in research area Political Online Communication

Marcinkowski, F., & Dosenovic, P. (2020). From incidental exposure to intentional avoidance: Psychological reactance to political communication during the 2017 German national election campaign. New Media & Society. https://doi.org/10.1177/1461444820902104

Staff member with a contribution at the Conference on Fairness, Accountability, and Transparency of the Association for Computing Machinery (ACM FAT*)

From January 27 to 30, 2020, the second Conference on Fairness, Accountability, and Transparency of the ACM will take place in Barcelona, Spain. The chair is represented with a contribution at this computer science conference with an interdisciplinary focus:

Marcinkowski, F., Kieslich, K., Starke, C., & Lünich, M. (2020, in press). Implications of AI (Un-)Fairness in Higher Education Admissions: The Effects of Perceived AI (Un-)Fairness on Exit, Voice and Organizational Reputation. Proceedings of the Conference on Fairness, Accountability, and Transparency of the Association for Computing Machinery (ACM FAT*). https://doi.org/10.1145/3351095.3372867

The article will also be available as a free download in the conference proceedings.

Two presentations at the Annual General Conference of the European Consortium for Political Research (ECPR) in Wroclaw

This year's General Conference of the ECPR will take place in Wroclaw, Poland, from September 4 to 7, 2019. The team is represented with a total of two contributions:

Kieslich, K., & Marcinkowski, F. (2019, September). Involuntary Media Populism – How German Mainstream Media Inadvertently Facilitate Populist Movements by Evoking Fear and Anger. Presentation at the Annual General Conference of the European Consortium for Political Research (ECPR) in Wroclaw (04.-07.09.2019).

Kieslich, K., Marcinkowski, F., & Starke, C. (2019, September). Do not Trust the Elite! – Correlates Among Populist Attitudes and Trust in Politics and Media in Four Western-European Countries. Presentation at the Annual General Conference of the European Consortium for Political Research (ECPR) in Wroclaw (04.-07.09.2019).

New publication in research area Sport & Media

Lünich, M., Starke, C., Marcinkowski, F., & Dosenovic, P. (2019). Double Crisis: Sport Mega Events and the Future of Public Service Broadcasting. Communication & Sport. https://doi.org/10.1177/2167479519859208

Three presentations at 69th Annual Meeting of the International Communication Association in Washington, D.C.

The 69th Annual Meeting of the International Communication Association (ICA) will be held in Washington D.C. from 24 to 28 May 2019. The team will be represented with a total of three presentations at the largest international conference for communication scientists:

Dosenovic, P., & Marcinkowski, F. (2019, May). From Incidental Exposure to Intentional Avoidance: Psychological Reactance to Political Communication During the 2017 German National Election Campaign. Presentation at the 69th Annual Meeting of the International Communication Association (ICA), Interest Group Political Communication in Washington, D.C. (USA) (24.-28.05.2019).

Geise, S., Hänelt, M., & Dosenovic, P. (2019, May). What Follows "Fake News"? Effects of Alleged "Fake News" Perception on Self- and Social-Related Follow-Up Participation. Presentation at the 69th Annual Meeting of the International Communication Association (ICA), Interest Group Mass Communication in Washington, D.C. (USA) (24.-28.05.2019).

Hase, V., Kieslich, K., & Engelke, K. (2019, May). The Things We Fear – Using Automated Content Analysis to Uncover How UK and US Media Construct Fear over Time (1990-2017). Presentation at the 69th Annual Meeting of the International Communication Association (ICA), Interest Group Journalism Studies in Washington, D.C. (USA) (24.-28.05.2019).

Successful application for third-party funding as part of the research initiative "Artificial Intelligence and the Society of the Future" of the Volkswagen Foundation

As part of the research initiative "Artificial Intelligence and the Society of the Future" of the Volkswagen Foundation, a planning grant was obtained for a total of nine months. The interdisciplinary project Fair Artificial Intelligence Reasoning (FAIR) will discuss the integration of fairness concepts into the programming of artificial intelligence and examine to what extent this influences the perceived legitimacy of algorithm-based decision-making. Applications in higher education serve as examples. Together with colleagues from sociology and computer science, a full proposal will be prepared within the framework of the planning grant. Janine Baleis and Birte Keller, two new project staff members at the chair, were recruited for this project.

Successfully completed doctoral procedure of Christopher Starke

On August 30, 2018, Christopher Starke successfully defended his dissertation entitled "United in Diversity? The Effects of Media Identity Framing on Individual European Solidarity" and was awarded magna cum laude. We congratulate Christopher Starke and look forward to continuing to work with him as a postdoc within the team.

New Publication in research area "Digital Society"

Lünich, M., & Marcinkowski, F. (2018). Der Facebook-Datenskandal im Spiegel der öffentlichen Meinung. Précis für das Düsseldorfer Institut für Internet und Demokratie (DIID). Available at (Download)
