Monthly Archives: September 2025

Three new NERDS publications: Polarization, image-to-text mapping, and candidate recommendation

We have three new publications out, as always on a variety of topics!

  1. Estimating affective polarization on a social network, by Marilena Hohmann and Michele Coscia, published in PLOS ONE.

    Concerns about polarization and hate speech on social media are widespread. Affective polarization, i.e., hostility among partisans, is crucial in this regard as it links political disagreements to hostile language online. However, only a few methods are available to measure how affectively polarized an online debate is, and the existing approaches do not jointly investigate two defining features of affective polarization: hostility and social distance. To address this methodological gap, we propose a network-based measure of affective polarization that combines both aspects while also allowing them to be studied independently. We show that our measure accurately captures how the level of disagreement relates both to the hostility expressed towards others (affective component) and to whom individuals choose to interact with or avoid (social distance component). Applying our measure to a large-scale Twitter data set on COVID-19, we find that affective polarization was low in February 2020 and increased to high levels as more users joined the Twitter discussion in the following months. A toy illustration of these two components on a small network is sketched after this list.
    See also Michele’s blog post: https://www.michelecoscia.com/?p=2466
  2. Leveraging VLLMs for Visual Clustering: Image-to-Text Mapping Shows Increased Semantic Capabilities and Interpretability, by Luigi Arminio, Matteo Magnani, Matías Piqueras, Luca Rossi, and Alexandra Segerberg, published in Social Science Computer Review.

    We test an approach that leverages the ability of Vision-and-Large-Language-Models (VLLMs) to generate image descriptions that incorporate connotative interpretations of the input images. In particular, we use a VLLM to generate connotative textual descriptions of a set of images related to the climate debate, and we cluster the images based on these textual descriptions. In parallel, we cluster the same images using a more traditional approach based on CNNs. We then compare the connotative semantic validity of the clusters generated using VLLMs with that of the clusters produced using CNNs, and we assess their interpretability. The results show that the VLLM-based approach greatly improves the quality score for connotative clustering. Moreover, VLLM-based approaches, which use textual information as an intermediate step towards clustering, offer a high level of interpretability of the results. A minimal sketch of the describe-embed-cluster idea appears after this list.
  3. Mapping Stakeholder Needs to Multi-Sided Fairness in Candidate Recommendation for Algorithmic Hiring, by Mesut Kaya and Toine Bogers, published in RecSys ’25: Proceedings of the Nineteenth ACM Conference on Recommender Systems.

    Past analyses of fairness in algorithmic hiring have been restricted to single-sided fairness, ignoring the perspectives of other stakeholders. In this paper, we address this gap and present a multi-stakeholder approach to fairness in a candidate recommender system that recommends relevant candidate CVs to human recruiters in a human-in-the-loop algorithmic hiring scenario. We conducted semi-structured interviews with 40 different stakeholders (job seekers, companies, recruiters, and other job portal employees). We used these interviews to explore their lived experiences of unfairness in hiring and to co-design definitions of fairness, as well as metrics that might capture these experiences. Finally, we attempt to reconcile and map these different (and sometimes conflicting) perspectives and definitions to existing (categories of) fairness metrics that are relevant for our candidate recommendation scenario.
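
For readers curious how the two components of affective polarization might be operationalized, here is a toy sketch on a tiny made-up interaction network. It is not the measure from the paper: the example graph, the "stance" and "hostility" attributes, and the simple averages are all illustrative assumptions.

```python
# Toy sketch only (not the paper's measure): one simple way to separate an
# affective component from a social-distance component on a tiny, made-up
# interaction network. Node attribute "stance" and edge attribute "hostility"
# are hypothetical.
import networkx as nx

G = nx.Graph()
G.add_nodes_from([(1, {"stance": "A"}), (2, {"stance": "A"}),
                  (3, {"stance": "B"}), (4, {"stance": "B"})])
G.add_edges_from([(1, 2, {"hostility": 0.1}),   # within-group, friendly
                  (1, 3, {"hostility": 0.8}),   # cross-group, hostile
                  (2, 4, {"hostility": 0.7})])  # cross-group, hostile

cross = [d["hostility"] for u, v, d in G.edges(data=True)
         if G.nodes[u]["stance"] != G.nodes[v]["stance"]]
within = [d["hostility"] for u, v, d in G.edges(data=True)
          if G.nodes[u]["stance"] == G.nodes[v]["stance"]]

# Affective component: how much more hostile cross-group interactions are
# than within-group interactions.
affective = sum(cross) / len(cross) - sum(within) / len(within)

# Social-distance component: the share of interactions that stay within the
# same camp (higher means partisans interact less across the divide).
social_distance = len(within) / G.number_of_edges()

print(f"affective: {affective:.2f}, social distance: {social_distance:.2f}")
```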
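
Similarly, here is a minimal sketch of the text-based clustering step in the second paper's pipeline. The connotative descriptions below are hypothetical stand-ins for VLLM output (the post does not say which model or prompt was used), and the sentence encoder and cluster count are arbitrary choices rather than the study's setup.

```python
# Minimal sketch of the "describe, embed, cluster" idea; the VLLM captioning
# step is replaced by hypothetical descriptions it might produce.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# Hypothetical connotative descriptions of climate-debate images.
descriptions = [
    "a protest sign conveying urgency and moral outrage about rising sea levels",
    "a crowded march expressing collective hope and demands for climate action",
    "an industrial smokestack framed to suggest blame for environmental harm",
    "a flooded street evoking loss and vulnerability to extreme weather",
]

# Embed the texts and cluster the embeddings.
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works
embeddings = embedder.encode(descriptions)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)

for text, label in zip(descriptions, labels):
    print(f"cluster {label}: {text}")
```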

Roberta Sinatra and Vedran Sekara talk at Rigsrevisionen

In August, NERDS faculty Roberta Sinatra and Vedran Sekara were invited to Rigsrevisionen, the Danish national audit office, to give a talk on bias in algorithms.

They presented their latest research project, focusing on how algorithmic bias can affect decisions in sensitive areas of public policy. In particular, they discussed their FAccT paper about a Decision Support System used by the Danish social sector, where algorithms have been tested to assess the risk of maltreatment for children. Their work showed that the algorithm was biased with respect to age: for instance, a 16-year-old shoplifter was rated at higher risk than a 2-month-old baby living with two parents struggling with substance abuse. This illustrates how algorithmic outputs can amplify bias if not critically examined, and why human oversight remains crucial.

The talk at Rigsrevisionen also raised the broader question of whether algorithms can be used responsibly for risk assessments in complex social contexts, and emphasized the need for careful scrutiny when deploying algorithmic solutions, especially AI-driven, in the public sector.

The event was organized by Rigsrevisionen’s internal data analytics network and drew a large audience, underscoring the relevance of these issues for public accountability and governance.

Read Rigsrevisionen’s post about the talk on LinkedIn →

Read the paper: “Failing Our Youngest: On the Biases, Pitfalls, and Risks in a Decision Support Algorithm Used for Child Protection” →

Read the Danish news piece “Kan algoritmer se ind i et barns fremtid?” →

Morten Boilesen has joined NERDS

We welcome our latest NERDS member: Morten Boilesen.

Morten joins as a new PhD student, bringing degrees in mathematics, musicology, and engineering, as well as experience as a start-up data scientist.

Morten will work with Jonas L. Juul on the InForM project (funded by the Novo Nordisk Foundation), using Danish register data to study how COVID-19 spread in Denmark. Three exciting years ahead! We are delighted to have you with us, Morten. Welcome!

New NERDS publication on hitting the music charts

We have an exciting new publication out! 🎸

Is it getting harder to make a hit? Evidence from 65 years of US music chart history, by Marta Ewa Lech, Sune Lehmann & Jonas L. Juul, published in EPJ Data Science.

We show that the dynamics of the Billboard Hot 100 chart have changed significantly since the chart’s founding in 1958, and, in particular, in the past 15 years. Whereas most songs spend less time on the chart now than songs did in the past, we show that top-1 songs have tripled their chart lifetime since the 1960s, and the highest-ranked songs maintain their positions for far longer than previously. At the same time, churn has increased drastically, and the lowest-ranked songs are replaced more frequently than ever. Together, these observations support two competing and seemingly contradictory theories of digital markets: The Winner-takes-all theory and the Long Tail theory. Who occupies the chart has also changed: In recent years, fewer new artists make it into the chart and more positions are occupied by established hit makers. Finally, investigating how song chart trajectories have changed over time, we show that historical song trajectories cluster into clear trajectory archetypes characteristic of the time period they were part of. Our results are interesting in the context of collective attention: Whereas recent studies have documented that other cultural products such as books, news, and movies fade in popularity more quickly in recent years, music hits seem to last longer now than in the past.
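
For the curious, here is an illustrative sketch (not the authors' code) of how two of the quantities discussed above, chart lifetime and week-to-week churn, could be computed from a long-format table of weekly chart entries. The column names and the tiny example table are assumptions.

```python
# Illustrative sketch: chart lifetime per song and week-to-week churn from a
# long-format table of weekly chart entries. The example data and column
# names ("week", "song", "rank") are hypothetical.
import pandas as pd

chart = pd.DataFrame({
    "week": ["2020-01-04", "2020-01-04", "2020-01-11", "2020-01-11", "2020-01-18"],
    "song": ["Song A", "Song B", "Song A", "Song C", "Song A"],
    "rank": [1, 2, 1, 2, 1],
})

# Chart lifetime: number of weeks each song appears on the chart.
lifetime = chart.groupby("song")["week"].nunique()
print(lifetime)

# Churn: fraction of this week's chart occupied by songs absent the week before.
weeks = sorted(chart["week"].unique())
for prev, curr in zip(weeks, weeks[1:]):
    prev_songs = set(chart.loc[chart["week"] == prev, "song"])
    curr_songs = set(chart.loc[chart["week"] == curr, "song"])
    churn = len(curr_songs - prev_songs) / len(curr_songs)
    print(f"{curr}: churn = {churn:.2f}")
```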