What users see – and don’t see

15 December 2025

Jana Lasser wants to understand not just what people do on social media, but also what they see in their personalised feeds. For her, this is central to grasping how recommendation algorithms shape civic discourse – and where current regulation still falls short. 

Lasser points to a continuing challenge for her research group in studying recommendation algorithms: they still cannot observe what is called ‘private exposure information’ – the personalised feeds that social media platforms show to individual users.

Under Article 40.12 of the Digital Services Act (DSA), scholars can request access to specific public data such as posts, replies, and sometimes shares, but this represents only a thin slice of what really matters for democratic debate. ‘To grasp the effects of recommendation algorithms,’ she argues, ‘we need to see the complete outcomes they produce in terms of what content they show to individual users.’

Her ERC Starting Grant-funded project DeSiRe focuses on how today’s recommendation systems may undermine the conditions for discursive democracy. For public debate to function, ‘people need to be exposed to a diversity of perspectives, discussions need to maintain at least a basic level of constructiveness, information needs to be reasonably reliable, and everyone needs to have the opportunity to be heard and help shape the discourse,’ she says. 

Social media platforms, she notes, can undermine these conditions by creating echo chambers and filter bubbles, facilitating the spread of misinformation and hate speech, and directing disproportionate attention to a small number of highly influential accounts.

Designing algorithms for democratic resilience

The DeSiRe project aims to quantify these effects by examining which content current recommendation algorithms amplify and de-amplify, relative to the content and distribution characteristics needed to sustain healthy civic discourse, Lasser explains. ‘After establishing the current level of risk major platforms such as X pose to civic discourse, we aim to design novel content-recommendation algorithms that reduce this risk by redistributing attention.’
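
One common way to make amplification measurable – used here purely as an illustration, not as the project's chosen definition – is to compare how often a piece of content is shown under the algorithmic feed with how often it would be shown under a neutral baseline, such as a reverse-chronological feed. A minimal sketch in Python:

```python
def amplification_ratio(algo_impressions: int, baseline_impressions: int) -> float:
    """Compare exposure under the algorithmic feed with a neutral baseline,
    e.g. a reverse-chronological feed. A ratio above 1 means the algorithm
    amplifies the content; below 1 means it de-amplifies it."""
    if baseline_impressions == 0:
        return float("inf") if algo_impressions > 0 else 1.0
    return algo_impressions / baseline_impressions

# Example: a post shown 1200 times algorithmically but only 400 times chronologically
print(amplification_ratio(1200, 400))  # 3.0 -> amplified threefold
```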

Instead of maximising engagement at all costs, Lasser’s team asks what platforms would look like if they optimised for individual and societal flourishing.  

In practice, this means algorithms that highlight content that fosters constructive exchanges – alongside humour, creativity, and everyday sociality – while down-ranking rage-bait, polarising outrage, and dehumanising speech. 

To achieve this, the team combines approaches from social science and computer science. They study how current systems influence exposure, attention, and interaction patterns, and then use those insights to build alternative recommendation models that distribute attention more fairly and sustainably. The aim is not to eliminate conflict or emotion, but to design systems in which disagreement does not automatically spiral into toxicity and fragmentation.
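
As a rough illustration of what such a re-ranking step could look like – the weights, the toxicity and constructiveness signals, and all names below are hypothetical, not taken from the DeSiRe codebase – here is a minimal Python sketch:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_engagement: float   # expected clicks/replies, in [0, 1]
    constructiveness: float       # hypothetical classifier score, in [0, 1]
    toxicity: float               # hypothetical classifier score, in [0, 1]

def rerank(posts: list[Post], w_engage: float = 0.4,
           w_construct: float = 0.4, w_toxic: float = 0.6) -> list[Post]:
    """Order candidate posts by a blended score instead of engagement alone.
    The weights are illustrative: they trade engagement against
    constructiveness and penalise toxic content."""
    def score(p: Post) -> float:
        return (w_engage * p.predicted_engagement
                + w_construct * p.constructiveness
                - w_toxic * p.toxicity)
    return sorted(posts, key=score, reverse=True)

# Example: a high-engagement rage-bait post drops below a constructive thread.
feed = [
    Post("rage-bait", predicted_engagement=0.9, constructiveness=0.1, toxicity=0.8),
    Post("constructive-thread", predicted_engagement=0.5, constructiveness=0.9, toxicity=0.05),
    Post("cute-cat", predicted_engagement=0.7, constructiveness=0.3, toxicity=0.0),
]
for post in rerank(feed):
    print(post.post_id)
```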

A participatory, experimental approach

A core element of DeSiRe is participation. Rather than imposing a single, top-down vision of ‘healthy discourse,’ Lasser’s team works with users to understand what they value in different contexts. They plan to solicit people’s preferences for trade-offs between, for example, diversity and personal relevance, or safety and maximal freedom of expression, in scenarios such as public health crises or elections. These preferences inform new algorithmic metrics that can be tuned to different democratic needs. 
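
One way such elicited preferences might be encoded – the scenarios, metric names, and weights below are invented purely for illustration – is as scenario-specific presets that tune a composite score:

```python
# Hypothetical presets: each scenario maps to weights on the properties
# users said they value in that context (numbers are made up).
SCENARIO_WEIGHTS = {
    "everyday":      {"relevance": 0.5, "diversity": 0.3, "safety": 0.2},
    "election":      {"relevance": 0.2, "diversity": 0.5, "safety": 0.3},
    "health_crisis": {"relevance": 0.2, "diversity": 0.2, "safety": 0.6},
}

def discourse_score(post_metrics: dict[str, float], scenario: str) -> float:
    """Combine per-post metrics (each in [0, 1]) using the weights elicited
    for the given scenario, so the same feed can be tuned to different needs."""
    weights = SCENARIO_WEIGHTS[scenario]
    return sum(weights[k] * post_metrics[k] for k in weights)

post = {"relevance": 0.8, "diversity": 0.2, "safety": 0.9}
print(discourse_score(post, "everyday"))       # weights personal relevance more heavily
print(discourse_score(post, "health_crisis"))  # weights safety more heavily
```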

Because live experimentation on commercial platforms is tightly controlled, the project develops open-source ‘digital twins’ of social media platforms: simulated environments where alternative recommendation algorithms can be tested safely and transparently. This approach allows the team to observe how different designs affect exposure and conversation dynamics, without depending on the goodwill of large platform companies.
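
Conceptually, such a digital twin can be thought of as an agent-based simulation with a pluggable recommender. The toy loop below – with simplified synthetic users, posts, and metrics that are assumptions rather than the project's actual implementation – sketches how two recommenders could be compared on how they spread attention across topics:

```python
import random
from collections import Counter

def simulate_feed_loop(users, posts, recommend, steps=200, feed_size=10):
    """Toy platform simulation with a pluggable recommender: at each step a
    synthetic user requests a feed, and we record which topics they saw."""
    exposure = {u: [] for u in users}
    for _ in range(steps):
        user = random.choice(users)
        feed = recommend(user, posts)[:feed_size]
        exposure[user].extend(p["topic"] for p in feed)
    return exposure

def by_popularity(user, posts):
    # Baseline: rank purely by past engagement.
    return sorted(posts, key=lambda p: p["likes"], reverse=True)

def with_topic_cap(user, posts, max_per_topic=2):
    # Alternative: same ranking, but no single topic may dominate the feed.
    ranked, seen = [], Counter()
    for p in by_popularity(user, posts):
        if seen[p["topic"]] < max_per_topic:
            ranked.append(p)
            seen[p["topic"]] += 1
    return ranked

posts = [{"topic": random.choice(["politics", "sport", "science"]),
          "likes": random.randint(0, 1000)} for _ in range(50)]
users = ["u1", "u2", "u3"]
for name, recommender in [("popularity-only", by_popularity), ("topic-capped", with_topic_cap)]:
    exposure = simulate_feed_loop(users, posts, recommender)
    print(name, Counter(t for topics in exposure.values() for t in topics))
```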

From risk to regulation

The DSA requires Very Large Online Platforms (VLOPs) to assess and mitigate systemic risks, including threats to civic discourse. Lasser sees her work as translating these abstract obligations into concrete design choices. The project will develop a scenario-based risk assessment framework for algorithms, helping policymakers and regulators evaluate which interventions genuinely reduce harms and which might have unintended side effects. 

Ultimately, Lasser’s research aims to offer both technical prototypes and policy-relevant guidance: open-source tools for experimenting with new recommendation systems, metrics for assessing their impact on public debate, and recommendations for how regulation can steer platforms toward becoming infrastructures that strengthen, rather than weaken, democratic societies. 

‘I hope that content recommendation algorithms are adapted to not only optimise for time spent on the platform, but also for other things that are meaningful for individual and societal flourishing. For my research specifically, this would mean that recommendation algorithms emphasise content that inspires constructive exchanges next to funny memes and cute cats and de-emphasise rage bait and hateful content.’

Jana Lasser

Jana Lasser is a Professor of Data Analysis at the University of Graz. She leads the research group on Complex Social & Computational Systems at the interdisciplinary research centre IDea_Lab and is an Associate Faculty member at the Complexity Science Hub Vienna.