Artificial intelligence for living texts
24 May 2023
While existing AI tools help people search texts and extract information from them, the technology is still basic. Iryna Gurevych is developing advanced AI tools for what she calls “living texts” — texts that dynamically evolve through revisions over time.
Modelling Text as a Living Object in Cross-Document Context

Artificial intelligence for language has until now focused on static, standalone texts. “But if you think about it, real texts are not like that,” Iryna Gurevych says. “Every text is written in a context, evolving over time and often updated after publication.” Real texts are connected to related documents and to their own revised versions. “These connections are exactly what we need to help people follow information through sources or collaborate more efficiently,” she says.

General conceptual models, datasets, and AI tools for text as a “living object” are lacking. Gurevych and her team are working to address this gap by combining theoretical work on intertextuality with cutting-edge deep learning technology. “We plan to develop new large-scale datasets of interconnected, evolving documents, analyse the relationships between them, and create a new generation of AI models that will support humans in day-to-day textual work beyond mere information extraction and search.”
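To make the notion of a “living object” more concrete, such a text can be pictured as a document that carries its own revision history together with typed links to related documents. The following minimal sketch is illustrative only; the class and field names (LivingDocument, Revision, CrossDocLink) are assumptions made for this article, not the project’s actual data model.

```python
# Illustrative sketch of a "living text": a document with a revision
# history and typed links to related documents. All names here are
# hypothetical and do not reflect the InterText project's schema.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Revision:
    text: str                # full text of this version
    timestamp: datetime      # when this version was created

@dataclass
class CrossDocLink:
    target_doc_id: str       # identifier of the related document
    relation: str            # e.g. "reviews", "cites", "updates"

@dataclass
class LivingDocument:
    doc_id: str
    revisions: list[Revision] = field(default_factory=list)
    links: list[CrossDocLink] = field(default_factory=list)

    def latest(self) -> str:
        """Return the text of the most recent revision."""
        return max(self.revisions, key=lambda r: r.timestamp).text
```

In such a representation, following information through sources becomes a traversal of the link graph, and studying how a text evolves becomes a comparison of successive revisions.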

 

What led you to focus on academic writing and disinformation in your case studies?


“Academic writing is the main way science is communicated, and scientists are very particular about the accuracy and quality of texts. To be considered scientifically valid, most academic publications undergo peer review, where multiple anonymous researchers evaluate an article for methodological rigour, the significance of its findings, and the quality of its writing. This makes peer review an excellent example of ‘living texts’, since articles are discussed and revised based on reviewers’ suggestions. We therefore aim to apply our technology to peer review to make scientific communication more efficient and robust.

In recent years, we have witnessed several information-related crises where public opinion has been misled, ranging from COVID and 5G conspiracy theories to election-related manipulations. We noticed that in many such cases, identifying the source of the information would help debunk the disinformation. The problem, however, is that people rarely cite their sources on social media, and manually analysing information sources is labour-intensive. We want to apply our technology to help people track information across sources and over time, making this process easier. Hopefully, this will help to fight disinformation.”


Which results from your research have surprised you the most?
 

“It was surprising and encouraging to see how the same models of text evolution can be applied to completely different areas. A scientific peer review talking about a research article is in a way similar to a Reddit post talking about a tweet. The way a Wikipedia page is updated with new facts is similar to the way scientific papers are revised during peer review, or the way comments are made in a Google Doc or in a PDF. In all cases, it is a small text ‘talking’ about another text. We are looking forward to exploring a whole range of different applications with our new general model.

However, an unexpected challenge came from the data: we did not realise how difficult it would be to collect clearly licensed data consisting of living texts. For static, isolated texts, there are well-established licensing, authorship, and distribution practices. But what is the licence of a document draft? How do you attribute an anonymous peer review? Who owns a highlight on a PDF, and what about the underlying text? We had to work with lawyers and the research community to work out sound solutions to these questions.”
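The shared abstraction described above, a small text ‘talking’ about (a span of) another text, can be sketched in the same illustrative spirit. Again, the names below (AnchoredComment and its fields) are assumptions made for this article rather than the project’s model.

```python
# Illustrative sketch: very different platforms fit one structure,
# a small text anchored to (a span of) another text. Names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AnchoredComment:
    source_genre: str                        # the kind of commenting text
    target_genre: str                        # the kind of text commented on
    target_span: Optional[tuple[int, int]]   # character offsets, if anchored
    text: str                                # the commenting text itself

# A peer review discussing part of a research article ...
review = AnchoredComment("peer review", "research article", (1024, 2048),
                         "The evaluation in Section 4 lacks a baseline.")

# ... has the same shape as a Reddit post discussing a tweet.
post = AnchoredComment("reddit post", "tweet", None,
                       "The original claim misreads the study it cites.")
```

A single model trained over pairs of this shape could, in principle, transfer between peer review, social media, and collaborative editing, which is the kind of generality the answer above points to.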


What do you see as the primary benefits and drawbacks of AI tools such as ChatGPT, which are currently receiving much attention?


“ChatGPT excels at handling simple questions but struggles when it comes to complex tasks that require careful reasoning and factual correctness. The problem is that a complex task for an AI tool can be simple for a human, and vice versa. 

Another limitation of ChatGPT is that it lacks the ability to understand the relationships between textual documents, context, and time, since it only processes small snippets of text. This can sometimes result in incorrect information being generated.

On the other hand, ChatGPT is trained on a vast amount of text documents, giving it a memory that surpasses that of humans millions of times over. This makes it challenging for us humans to check whether the answers it produces are correct, given our own memory limitations and lack of access to such a large body of information.

It is crucial to inform individual users and society about the limitations of AI models like ChatGPT. But even researchers don’t have a full picture here yet. It is a primary objective of the scientific community in AI for text to understand the mechanisms behind the impressive performance of ChatGPT and similar tools, but equally to highlight their shortcomings. We hope that our model will help to create a new generation of AI tools that are more factually robust and can assist humans in reading, understanding, and writing text more efficiently – and can better account for context.”


Looking to the future, how do you expect AI to develop further in your field of research?

 

“In the coming years, there are several areas in AI for text that we need to address. One significant challenge is the need to process structured texts that contain multimedia such as images, videos, and links. These types of texts require a different approach than the traditional analysis of small amounts of text without any multimedia.

Improving the robustness of language models to domain shift is another crucial area that requires our attention. Unlike humans, who can seamlessly transition between reading news articles and tweets, AI tools still struggle with the substantial disparity between these types of text.

Additionally, deep learning-based models are notoriously opaque, and we need new ways to explain why models make certain predictions and to influence their future behaviour. ChatGPT’s popularity has shown that society has a keen interest in using such tools. This is why more work is needed to ensure that this kind of AI is ethically and legally sound, preserves privacy, avoids bias, and is indeed ready for practical application.”

 

Biography


Iryna Gurevych is Professor of Computer Science and Director of the Ubiquitous Knowledge Processing (UKP) Lab at the Technical University (TU) of Darmstadt in Germany. In addition, she is an Adjunct Professor at the NLP Department of MBZUAI, Abu Dhabi. She obtained her PhD from the University of Duisburg-Essen in Germany in 2003. Her main research interests include machine learning for large-scale language understanding and text semantics. In 2022, she won an ERC Advanced Grant for the project “InterText – Modelling Text as a Living Object in Cross-Document Context”.
 

Project information

InterText
Modelling Text as a Living Object in Cross-Document Context
Researcher: Iryna Gurevych
Host institution: Technical University of Darmstadt, Germany
Call details: ERC-2021-ADG, PE6
ERC funding: 2 499 721 €