Project acronym THINKBIG
Project "Patterns in Big Data: Methods, Applications and Implications"
Researcher (PI) Nello Cristianini
Host Institution (HI) UNIVERSITY OF BRISTOL
Call Details Advanced Grant (AdG), PE6, ERC-2013-ADG
Summary "The availability of huge amounts of data has revolutionized many sectors of society, enabling engineers to bypass complex modeling steps, scientists to find shortcuts to new knowledge, and businesses to explore novel business models. For all its success, this field is still very young, and in need of systematic attention. Both risks and opportunities are very significant at this stage. They can be organized into three interconnected areas, which need to be addressed in a coordinated way: methods, applications and implications. By this we mean the interconnected needs to 1) develop new technology to take advantage of this resource; 2) explore the domains where this technology can make a significant impact; and 3) develop a set of cultural, legal and technical tools to reduce the risks associated with the application of these technologies to science and society. This project is about understanding, exploiting and managing the paradigm shift that is under way. It will address these three areas at the same time, 1) developing new types of algorithms and software architectures to take full advantage of this opportunity; 2) exploring new areas of opportunity for big-data to make an impact, with particular attention to the growing field of computational social sciences; and 3) investigating the ethical and epistemological challenges that arise from the transition towards a data-driven way of running society, business and science. We build on a strong track record in each of these areas. We have secured access to a valuable resource for historians, the collection of all UK newspapers from the past 200 years, which we will analyze with our tools, and we will greatly expand our current work on social media mining, working closely with colleagues from other disciplines. It is our intention to impact the social sciences, the general public and the law makers, besides our field of engineering."
Max ERC Funding
2 112 798 €
Duration
Start date: 2014-03-01, End date: 2019-02-28
Project acronym THIRDWAVEHCI
Project Third Wave HCI: Methods, Domains and Concepts
Researcher (PI) William Gaver
Host Institution (HI) GOLDSMITHS' COLLEGE
Call Details Advanced Grant (AdG), PE6, ERC-2008-AdG
Summary This proposal is for interdisciplinary research that will help bring to maturity the emerging paradigm of third-wave HCI, which addresses interaction as situated meaning-making in everyday life. With my established interdisciplinary research team, I will design prototypes that show how third-wave thinking is relevant for domains of recognised importance, to help bring this paradigm to the centre of HCI. We will develop an integrated set of tactics and orienting concepts based on our practice to elucidate and support research and design in third-wave HCI. Crucially, we will develop a new methodology for this research, based on the deployment and study of 50-100 batch-produced prototypes in real-world situations. This will mark a significant leap forward, allowing prototype technologies to be studied using social scientific and design-led methods in field trials several orders of magnitude larger than normal, a development from which third-wave HCI, with its commitment to multiple, local appropriations, will benefit enormously. The project will be centred around two Case Studies in which we will develop robust and highly finished prototypes, batch-produce them in large numbers, deploy them in large-scale field studies with members of the general public as well as specialist commentators, and use a variety of traditional and experimental methods to capture their experiences. The first Case Study will produce a suite of electromechanically extended sensors that provide resources for environmental awareness in the home without being judgmental or didactic. The second Case Study will develop mobile devices that display readymade, location-based information to provide a behind-the-scenes view of local neighbourhoods. When dozens of these prototypes are in use simultaneously, we will be able to observe as communities of practice form and a hundred different stories emerge, leading to a transformative coming-of-age for third-wave HCI.
Max ERC Funding
2 439 757 €
Duration
Start date: 2009-04-01, End date: 2014-12-31
Project acronym TransModal
Project Translating from Multiple Modalities into Text
Researcher (PI) Maria Lapata
Host Institution (HI) THE UNIVERSITY OF EDINBURGH
Call Details Consolidator Grant (CoG), PE6, ERC-2015-CoG
Summary Recent years have witnessed the development of a wide range of computational methods that process and generate natural language text. Many of these have become familiar to mainstream computer users, such as tools that retrieve documents matching a query, perform sentiment analysis, and translate between languages. Systems like Google Translate can instantly translate between any pair of over fifty human languages, allowing users to read web content that wouldn't otherwise have been available. The accessibility of the web could be further enhanced with applications that translate within the same language, between different modalities, or between different data formats. There are currently no standard tools for simplifying language, e.g., for low-literacy readers or second language learners. The web is rife with non-linguistic data (e.g., databases, images, source code) that cannot be searched, since most retrieval tools operate over textual data. In this project we maintain that in order to render electronic data more accessible to individuals and computers alike, new types of models need to be developed. Our proposal is to provide a unified framework for translating from comparable corpora, i.e., collections consisting of data in the same or different modalities that address the same topic without being direct translations of each other. We will develop general and scalable models that can solve different translation tasks and learn the necessary intermediate representations of the units involved in an unsupervised manner, without extensive feature engineering. Thanks to recent advances in deep learning, we will induce representations for different modalities, their interactions, and their correspondence to natural language. Beyond addressing a fundamental aspect of the translation problem, the proposed research will lead to novel internet-based applications that simplify and summarize text, produce documentation for source code, and generate meaningful descriptions for images.
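As a hedged illustration of the shared-intermediate-representation idea (not the project's actual models), the sketch below trains two small encoders to map paired feature vectors from two synthetic "modalities" into a common space with a contrastive loss; cross-modal "translation" then reduces to nearest-neighbour retrieval. It assumes PyTorch, and all data are synthetic stand-ins.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class SharedSpace(nn.Module):
    def __init__(self, dim_a: int, dim_b: int, dim_shared: int = 64):
        super().__init__()
        self.enc_a = nn.Linear(dim_a, dim_shared)  # e.g. an image-feature encoder
        self.enc_b = nn.Linear(dim_b, dim_shared)  # e.g. a text-feature encoder

    def forward(self, a, b):
        za = nn.functional.normalize(self.enc_a(a), dim=-1)
        zb = nn.functional.normalize(self.enc_b(b), dim=-1)
        return za, zb

# Synthetic "comparable" pairs: both views derive from a shared latent topic.
n, latent = 256, 32
topic = torch.randn(n, latent)
view_a = topic @ torch.randn(latent, 100) + 0.1 * torch.randn(n, 100)
view_b = topic @ torch.randn(latent, 80) + 0.1 * torch.randn(n, 80)

model = SharedSpace(dim_a=100, dim_b=80)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    za, zb = model(view_a, view_b)
    logits = za @ zb.T / 0.1                  # similarity of every cross-modal pair
    loss = nn.functional.cross_entropy(logits, torch.arange(n))
    opt.zero_grad()
    loss.backward()
    opt.step()

# "Translation" as retrieval: does each item in A find its partner in B?
with torch.no_grad():
    za, zb = model(view_a, view_b)
    acc = ((za @ zb.T).argmax(dim=1) == torch.arange(n)).float().mean()
print(f"cross-modal pairing accuracy: {acc.item():.2f}")
```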
Max ERC Funding
1 900 778 €
Duration
Start date: 2016-09-01, End date: 2021-08-31
Project acronym VERIWARE
Project From Software Verification to Everyware Verification
Researcher (PI) Marta Zofia Kwiatkowska
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Call Details Advanced Grant (AdG), PE6, ERC-2009-AdG
Summary In the words of Adam Greenfield, the age of ubiquitous computing is here: "a computing without computers, where information processing has diffused into everyday life, and virtually disappeared from view". Conventional hardware and software have evolved into "everyware": sensor-enabled electronic devices, virtually invisible and wirelessly connected, on which we increasingly often rely for everyday activities and access to services such as banking and healthcare. The key component of everyware is embedded software, continuously interacting with its environment by means of sensors and actuators. Ubiquitous computing must deal with the challenges posed by the complex scenario of communities of everyware, in the presence of environmental uncertainty and resource limitations, while at the same time aiming to meet high-level expectations of autonomous operation, predictability and robustness. This calls for the use of quantitative measures, stochastic modelling, discrete and continuous dynamics and goal-driven approaches, which the emerging field of quantitative software verification is unable to address at present. The central premise of the proposal is that there is a need for a paradigm shift in verification to enable everyware verification, which can be achieved through a model-based approach that admits discrete and continuous dynamics, the replacement of offline methods with online techniques such as machine learning, and the use of game-theoretic and planning techniques. The project will significantly advance quantitative probabilistic verification in new and previously unexplored directions. I will lead a team of researchers investigating the fundamental principles of everyware verification, developing algorithms and prototype implementations, and experimenting with case studies. I will also provide continued scientific leadership in the area of ubiquitous computing.
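For readers unfamiliar with quantitative probabilistic verification, the core question it answers can be made concrete with a small sketch: computing the probability of reaching a goal state in a discrete-time Markov chain by value iteration. The four-state chain below is an illustrative toy, not a model from the project.

```python
import numpy as np

def reachability(P: np.ndarray, goal: set, tol: float = 1e-10) -> np.ndarray:
    """P[i, j] = Pr(move i -> j); returns Pr(eventually reach goal) per state."""
    n = P.shape[0]
    goal_idx = list(goal)
    x = np.zeros(n)
    x[goal_idx] = 1.0
    while True:
        x_new = P @ x
        x_new[goal_idx] = 1.0           # goal states count as already there
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new

# 4-state chain: from 0 go to 1 or 2; from 1 back to 0 or on to 3;
# states 2 (failure) and 3 (goal) are absorbing.
P = np.array([
    [0.0, 0.5, 0.5, 0.0],
    [0.3, 0.0, 0.0, 0.7],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
print(reachability(P, goal={3}))        # e.g. state 1 reaches the goal w.p. ~0.82
```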
Max ERC Funding
2 060 360 €
Duration
Start date: 2010-05-01, End date: 2016-04-30
Project acronym ViAjeRo
Project ViAjeRo: Virtual and Augmented Reality passenger experiences
Researcher (PI) Stephen BREWSTER
Host Institution (HI) UNIVERSITY OF GLASGOW
Call Details Advanced Grant (AdG), PE6, ERC-2018-ADG
Summary ViAjeRo will radically improve passenger journeys using immersive Virtual and Augmented Reality to support entertainment, work and collaboration on the move. In Europe, people travel an average of 12,000 km per year on private and public transport, in cars, buses, planes and trains. These journeys are often repetitive and wasted time. This total will rise with the arrival of fully autonomous cars, which free drivers to become passengers. The potential to recover this lost time is impeded by 3 significant challenges:
- Confined spaces: These limit interactivity, and force us to rely on small displays such as phones or seatback screens.
- Social acceptability: We may share the space with others, inducing a pressure to conform, inhibiting technology use.
- Motion sickness: Many people get sick when they read or play games in vehicles. Once experienced, it can take hours for symptoms to resolve.
VR/AR headsets could allow passengers to use their travel time in new, productive, exciting ways, but only if bold research is undertaken to overcome these fundamental challenges. ViAjeRo will use VR/AR to do adventurous multidisciplinary work, unlocking the untapped potential of passengers. They will be able to use large virtual displays for productivity; escape the physical confines of the vehicle and become immersed in virtual experiences; and communicate with distant others through new embodied forms of communication – all whilst travelling. This will be of great benefit to European society and open a new area for products and services. Our vision requires groundbreaking contributions at the intersection of HCI, neuroscience and sensing to:
1. Develop novel interaction techniques for confined, seated spaces
2. Support safe, socially acceptable use of VR/AR, providing awareness of others and the travel environment
3. Overcome motion sickness through novel multimodal countermeasures and neurostimulation
4. Tailor the virtual and physical passenger environment to support new,
Max ERC Funding
2 443 657 €
Duration
Start date: 2019-09-01, End date: 2024-08-31
Project acronym VISCUL
Project Visual Culture for Image Understanding
Researcher (PI) Vittorio Ferrari
Host Institution (HI) THE UNIVERSITY OF EDINBURGH
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary The goal of computer vision is to interpret complex visual scenes, by recognizing objects and understanding their spatial arrangement within the scene. Achieving this involves learning categories from annotated training images. In the current paradigm, each category is learned from scratch without any previous knowledge. This is in contrast with how humans learn: they accumulate knowledge about visual concepts and reuse it to help learn new concepts.
The goal of this project is to develop a new paradigm where computers learn visual concepts on top of what they already know, as opposed to learning every concept from scratch. We propose to progressively learn a vast body of visual knowledge, coined Visual Culture, from a variety of available datasets. We will acquire models of the appearance and shape of categories in general, models of specific categories, and models of their spatial organization into scenes. We will start learning from datasets with a high degree of supervision and then gradually move to datasets with lower degrees. At each stage we will employ the current body of knowledge to support learning with less supervision. After acquiring Visual Culture from existing datasets, the machine will be ready to learn further with little or no supervision, for example from the Internet. Visual Culture is related to ideas in other fields, but no similar endeavor has yet been undertaken in computer vision.
This project will make an important step toward mastering the complexity of the visual world, by advancing the state of the art in terms of the number of categories that can be localized and the variability covered by each model. Moreover, Visual Culture is more than a mere collection of isolated categories: it is a web of object, background, and scene models connected by spatial relations and sharing visual properties. This will bring us closer to image understanding, the automatic interpretation of complex novel images.
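A minimal sketch of the "learn on top of what you already know" idea, under strong simplifications: a frozen network stands in for previously acquired Visual Culture, and only a small new head is trained for a new concept. The extractor and data below are synthetic stand-ins, not the project's models; PyTorch is assumed.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A frozen extractor standing in for previously acquired visual knowledge.
prior_knowledge = nn.Sequential(nn.Linear(256, 128), nn.ReLU())
for p in prior_knowledge.parameters():
    p.requires_grad_(False)             # reuse the old knowledge, don't relearn it

new_head = nn.Linear(128, 2)            # the only part learned from scratch

# Synthetic data for a new two-class concept that is simple to express
# in the prior feature space (simulating useful accumulated knowledge).
x = torch.randn(400, 256)
with torch.no_grad():
    y = (prior_knowledge(x) @ torch.randn(128) > 0).long()

opt = torch.optim.Adam(new_head.parameters(), lr=1e-2)
for _ in range(300):
    loss = nn.functional.cross_entropy(new_head(prior_knowledge(x)), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    acc = (new_head(prior_knowledge(x)).argmax(1) == y).float().mean()
print(f"accuracy of the new concept head: {acc.item():.2f}")
```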
Max ERC Funding
1 481 516 €
Duration
Start date: 2013-01-01, End date: 2017-12-31
Project acronym VISREC
Project Visual Recognition
Researcher (PI) Andrew Zisserman
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Call Details Advanced Grant (AdG), PE6, ERC-2008-AdG
Summary Our goal is to develop the fundamental knowledge to design a visual system that is able to learn, recognize and retrieve quickly and accurately thousands of visual categories, including materials, objects, scenes, human actions and activities. A "visual google" for images and videos: able to search for the "nouns" (objects, scenes), "verbs" (actions/activities) and "adjectives" (materials, patterns) of visual content. The time is right for making great progress in automated visual recognition: imaging geometry is well understood, image features are now highly developed, and the relevant statistical models and machine learning algorithms are well advanced. Our goal is to make a quantum leap in the capabilities of visual recognition in real-life scenarios. The outcomes of this research will impact any application where visual recognition is useful, and will enable entirely new applications: effortlessly searching and annotating home image and video collections by their visual content; searching and annotating large commercial image and video archives (e.g. YouTube); surveillance; and using an image, rather than text, to query the web and hence identify its visual content.
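To make the retrieval half of a "visual google" concrete, here is a minimal sketch assuming precomputed image descriptors (random vectors stand in for real visual features): the collection is L2-normalised so that a dot product gives cosine similarity, and a query returns the nearest images.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_index(features: np.ndarray) -> np.ndarray:
    """L2-normalise descriptors so a dot product equals cosine similarity."""
    return features / np.linalg.norm(features, axis=1, keepdims=True)

def search(index: np.ndarray, query: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k images most similar to the query."""
    q = query / np.linalg.norm(query)
    return np.argsort(-(index @ q))[:k]

collection = rng.normal(size=(10_000, 128))           # stand-in image descriptors
index = build_index(collection)
query = collection[42] + 0.05 * rng.normal(size=128)  # a near-duplicate of image 42
print(search(index, query))                           # expect 42 to rank first
```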
Max ERC Funding
1 872 056 €
Duration
Start date: 2009-01-01, End date: 2014-12-31