Building a lie detector for social media

Published 20 February 2014

In our digital age, rumors — both true and false — spread fast, often with far-reaching consequences. The ability to quickly verify information spread on the Internet and track its provenance would enable governments, emergency services, health agencies, and the private sector to respond more effectively.

Scientists determine means to filter truth from lies in social media // Source: army.mil

An international group of researchers, led by the University of Sheffield, is aiming to build a system that will automatically verify online rumors as they spread around the globe.

Social networks have been used to spread accusations of vote-rigging in Kenyan elections, to allege that Barack Obama is a Muslim, and to claim that animals were set free from London Zoo during the 2011 riots. In all of these cases — and many more — the ability to quickly verify information and track its provenance would enable journalists, governments, emergency services, health agencies, and the private sector to respond more effectively.

A University of Sheffield release reports that lead researcher, Dr. Kalina Bontcheva, from the Department of Computer Science in the University of Sheffield’s Faculty of Engineering, explains: “There was a suggestion after the 2011 riots that social networks should have been shut down, to prevent the rioters using them to organize. But social networks also provide useful information — the problem is that it all happens so fast and we can’t quickly sort truth from lies. This makes it difficult to respond to rumors, for example, for the emergency services to quash a lie in order to keep a situation calm. Our system aims to help with that, by tracking and verifying information in real time.”

The EU-funded project aims to classify online rumors into four types: speculation — such as whether interest rates might rise; controversy — as over the MMR vaccine; misinformation, where something untrue is spread unwittingly; and disinformation, where it is done with malicious intent.
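The four-way taxonomy described above could be modeled along two axes: whether the claim is known to be false, and whether it is spread deliberately. A minimal sketch (the class and function names here are illustrative, not taken from the Pheme project):

```python
from enum import Enum

class RumourType(Enum):
    """The four rumour categories the article describes (names are mine)."""
    SPECULATION = "speculation"        # e.g. whether interest rates might rise
    CONTROVERSY = "controversy"        # e.g. the MMR vaccine debate
    MISINFORMATION = "misinformation"  # untrue, spread unwittingly
    DISINFORMATION = "disinformation"  # untrue, spread with malicious intent

def truth_and_intent(rtype: RumourType) -> tuple:
    """Separate the two 'untrue' categories by the spreader's intent;
    for speculation and controversy the truth value is still open."""
    if rtype is RumourType.MISINFORMATION:
        return (False, "unwitting")
    if rtype is RumourType.DISINFORMATION:
        return (False, "malicious")
    return (None, "undetermined")
```

The key design point is that misinformation and disinformation share a truth value but differ in intent, which is why a verification system needs account-history signals and not just fact-checking.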

The system will also automatically categorize sources to assess their authority, such as news outlets, individual journalists, experts, potential eye witnesses, members of the public, or automated “bots.” It will also look for a history and background, to help spot where Twitter accounts have been created purely to spread false information.
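One way to picture the source-authority step is a category weight combined with an account-history check, so that freshly created accounts are discounted. The categories below are those named in the article, but the weights and the seven-day heuristic are invented purely for illustration — this is not Pheme's actual scoring:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Source categories from the article; the weights are illustrative only.
AUTHORITY_WEIGHTS = {
    "news_outlet": 0.9,
    "journalist": 0.8,
    "expert": 0.8,
    "eyewitness": 0.6,
    "public": 0.4,
    "bot": 0.1,
}

@dataclass
class Source:
    category: str
    account_created: datetime

def authority_score(src: Source, now: datetime) -> float:
    """Base weight by category, halved for very new accounts — a crude
    stand-in for the 'history and background' check the article mentions."""
    score = AUTHORITY_WEIGHTS.get(src.category, 0.4)
    if now - src.account_created < timedelta(days=7):
        score *= 0.5  # accounts created days ago are treated as suspect
    return score
```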

It will search for sources that corroborate or deny the information, and plot how the conversations on social networks evolve, using all of this information to assess whether it is true or false. The results will be displayed to the user in a visual dashboard, to enable them to easily see whether a rumor is taking hold.
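The aggregation step described above — weighing corroborations against denials by source authority — could be sketched as a weighted average. This toy scheme is my own illustration of the idea, not the project's method:

```python
def veracity_estimate(signals):
    """signals: list of (stance, authority) pairs, where stance is
    +1 for a corroboration and -1 for a denial, and authority is in [0, 1].
    Returns a score in [-1, 1]: the sign suggests true vs. false, the
    magnitude how strongly the weighted evidence leans one way."""
    total_authority = sum(a for _, a in signals)
    if total_authority == 0:
        return 0.0
    return sum(s * a for s, a in signals) / total_authority

# Two authoritative corroborations against one weak denial:
score = veracity_estimate([(+1, 0.9), (+1, 0.8), (-1, 0.4)])
```

A dashboard could then plot this score over time as the conversation evolves, letting a user see at a glance whether a rumor is taking hold or being debunked.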

Dr. Bontcheva adds: “We can already handle many of the challenges involved, such as the sheer volume of information in social networks, the speed at which it appears and the variety of forms, from tweets, to videos, pictures and blog posts. But it’s currently not possible to automatically analyze, in real time, whether a piece of information is true or false and this is what we’ve now set out to achieve.”

Throughout the project, the system will be evaluated in two real-world domains. For digital journalism, it will be tested by the online arm of the Swiss Broadcasting Corporation, swissinfo.ch. For healthcare, it will be tested by the Institute of Psychiatry at King’s College London, which aims to look at new recreational drugs trending in online discussions and then find out how quickly these feature in patients’ medical records and discussions with doctors.

The release notes that the three-year project, called Pheme, is a collaboration between five universities — Sheffield, Warwick, King’s College London, Saarland in Germany, and MODUL University Vienna in Austria — and four companies — ATOS in Spain, iHub in Kenya, Ontotext in Bulgaria, and swissinfo.ch.

The project is named after the Pheme of Greek mythology, who was said to have “pried into the affairs of mortals and gods, then repeated what she learned, starting off at first with just a dull whisper, but repeating it louder each time, until everyone knew.” She is described as the “personification of fame and notoriety, her favour being notability, her wrath being scandalous rumors.”

Following the 2011 U.K. riots, one of the project partners, Professor Rob Procter from the University of Warwick, worked with the LSE and the Guardian’s interactive team to manually analyze the spread of rumors on Twitter during the riots. This took several months (see “Reading the Riots,” Guardian, 7 December 2011). The Pheme system will aim to do something similar, but automatically and immediately.