Why people share disinformation
11 January 2023
Fake News And Real People – Using Big Data To Understand Human Behaviour

Fake news spreads faster, further and deeper than real news. Why are some people more likely to share disinformation than others? ERC grantee Joana Gonçalves de Sá is using data science and behavioural psychology to disentangle personal traits and contexts of digital users, as well as the workings of social media platforms. Her research will result in a concrete tool to audit search engines in an effort to detect disinformation in real time.

Disinformation, fake news and propaganda have always existed. What sets today apart from the past is the rapid dissemination and global reach through digital technologies. Once users access content, they are pulled back to it over and over again.

Search engines use past interactions, cookies and other tracking tools to draw users back to platforms by offering tailored advertisements, serving content that is likely to get the most attention. Algorithms are designed to create meaningful interaction rather than meaningful content: what users are exposed to when they enter a platform is based on the data that is being collected and processed about them. Hence, platforms target users because of their data, creating ‘social bubbles’.

“This is something I deeply care about,” says Joana Gonçalves de Sá. “These bubbles are about much more than just seeing the same advertisements over and over again. Users can become subconsciously trapped in an information bubble, leading them to believe certain truths. In this way, social networks and especially the algorithmic logic of the platforms are prone to initiate a series of human cognitive biases: certain inclinations for or against an idea, object or person, potentially resulting in social inequalities and polarisation. We need to burst the bubbles!”

Petri dish

The FARE project focuses on the act of sharing content rather than on the disinformation itself. “Fake news sharing offers a great ‘lab’ for studying how we think and behave”, says Gonçalves de Sá. “We are basically using disinformation as a Petri dish: what type of bias or contexts make people more likely to believe an untrue post? Which subjects and circumstances can be more polarising? The idea is to study how humans make decisions by trying to understand the context and their individual traits together.”

The research team is gathering an impressive set of data: more than 10,000 news pieces and thousands of Twitter user profiles and account histories, complemented with surveys conducted across four continents. Theory from information systems and from epidemiology (the study of how disease spreads) is used to make sense of the data.

Gonçalves de Sá calls the combination of the various data sets (social media posts, online searches, digital traces) and tools (mostly statistical models) “a macroscope”, by which “we will be able to see what is going on in the digital world based on all the data floating around. Through our approach it becomes possible to ‘follow’ disinformation and, similar to epidemiological models, identify susceptibility, contagion rates, and prevalence in specific populations.”
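The epidemiological analogy can be made concrete. As an illustration only (the parameter values and model form below are invented, not the project's actual model), a minimal SIR-style simulation treats users as susceptible, 'infected' (actively sharing a piece of disinformation), or recovered (no longer sharing), with a contagion rate and a recovery rate playing the roles the quote describes:

```python
# Minimal SIR-style sketch of disinformation spread (illustrative only;
# beta and gamma values are invented, not taken from the FARE project).

def simulate(beta=0.3, gamma=0.1, s=0.99, i=0.01, r=0.0, steps=100):
    """Discrete-time SIR: beta = contagion rate, gamma = recovery rate.

    s, i, r are the fractions of the population that are susceptible,
    'infected' (sharing), and recovered (no longer sharing).
    """
    history = [(s, i, r)]
    for _ in range(steps):
        new_infections = beta * s * i   # susceptible users exposed to sharers
        recoveries = gamma * i          # sharers who stop spreading
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
        history.append((s, i, r))
    return history

history = simulate()
# The 'prevalence' the quote mentions corresponds to the peak sharing fraction:
peak_sharing = max(i for _, i, _ in history)
```

Fitting such a model to observed sharing data is what lets susceptibility and contagion rates be estimated for specific populations.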

The use of big data in the social sciences brings ethical questions to the forefront. In line with the ERC’s requirement to ensure the highest ethical research standards, Gonçalves de Sá and her team are developing testing methods aimed at protecting the privacy of users. Examples include testing with very little data, not requiring human data, or using human data that does not leave anyone’s computer, similar to federated learning (a machine learning technique that does not require the exchange of data samples). If successful, these research approaches could be used by others.

Rational agents

Sharing fake news is at odds with the common assumption that humans are rational agents. The expectation is that when individuals encounter a piece of disinformation, they will not share it. However, people are more likely to share content that confirms what they already think, believe or consider to be true, and this choice is often based on an overestimation of their own knowledge. “People who are most susceptible to fake news might overestimate how much they know about a subject, and this can be tested”, Gonçalves de Sá argues.

In addition to such cognitive bias, the research group also takes environmental factors into account. An individual might be susceptible to disinformation, but they will only share such content if people around them are open to it or will forward it. In this sense, disinformation is, according to Gonçalves de Sá, like a virus that ‘infects’ people, who then transmit it to others. “If I am not exposed to the virus, I can continue with my life even though I will be susceptible if this virus comes close to me.”

The ‘contagious mechanisms’ that are behind the fake news epidemic are further determined by someone’s socio-economic context. The ability to discriminate between what is true and false weighs heavily on the individual. Gonçalves de Sá comments on this as “quite a cognitive burden”, as it requires attention. “What if you are extremely tired? You had a long day. How much energy do you still have left to think critically through information you are exposed to?”

Search engines

Although her research on social networks is still ongoing, Gonçalves de Sá has already received ERC Proof of Concept funding to explore FARE’s potential to help our society. This spin-off project – FARE_AUDIT – seeks to understand and curtail the role of search engines in the prominence of disinformation.

“We know that our browsing history influences the advertisements that we see on search engines or social media platforms. Algorithms filter and select displayed advertisements so they become more targeted. Search engines work in the same way, and potentially contribute to creating disinformation bubbles. Each time a user visits a low-credibility website, cookies are placed, which may in turn be used by search engines and influence future query results.”

FARE_AUDIT is developing a tool to detect and monitor disinformation in real time. Together with graduate student Íris Damião and postdoctoral researcher José Reis, Gonçalves de Sá is simulating online search queries to see how results differ according to users’ online profile. The researchers are using bots and web crawlers that visit different websites, and then make the same queries on search engines. “Are the bots that visited low-credibility websites more likely to be directed to other low-credibility content? Do they remain inside a ‘bubble’?”

“The spread of fake news can hardly be stopped”, Gonçalves de Sá continues. “However, by understanding the susceptibility of individuals in different contexts, and the role of online platforms, we can try to single out the abetting factors and learn how to minimise individuals’ exposure to fake news.”


Joana Gonçalves de Sá coordinates the Social Physics and Complexity (SPAC) group at LIP-Laboratory of Instrumentation and Experimental Particle Physics in Portugal. She has a degree in Physics Engineering from the University of Lisbon, and a PhD in Systems Biology from ITQB-NOVA University of Lisbon, having developed her thesis at Harvard University, USA. Her research uses quantitative methods to study problems at the interface between Biomedicine, Social Sciences, and Computation, with a large ethical and societal focus. She coordinated the Science for Society Initiative at Instituto Gulbenkian de Ciência and was the founder and director of the Graduate Program Science for Development (PGCD).

Project information

Fake News And Real People – Using Big Data To Understand Human Behaviour
Host institution:
Laboratory of Instrumentation and Experimental Particle Physics (LIP)
Call details
ERC-2019-STG, SH3
ERC Funding
1 499 844 €