By: Rob Waugh
Social networks such as Facebook and Twitter are using AI and machine learning, alongside human review, as a crucial tool to help detect and limit the spread of fake news
Most social networks rely on a mix of technology and human effort. Facebook employs third-party fact-checkers, has expanded that service to more countries, and has begun reviewing photos and videos, not just text and links. Facebook and Twitter have also said that AI and machine learning are becoming another crucial tool to spot and limit the spread of fake news.
The fight against fake content
Facebook already uses AI tools to highlight potentially false stories and refer them to human fact-checkers. In September last year, a photo circulated on social media in Brazil, following the stabbing in Juiz de Fora of the then presidential candidate Jair Bolsonaro.
It showed an image of a man standing next to Senator Gleisi Hoffmann, and claimed that the man was Bolsonaro’s attacker.
Facebook’s machine-learning model identified the image as potentially false, and Brazilian fact-checking organisation Aos Fatos confirmed that the image had been taken in a completely different city. Facebook then used photo-detection technology to “demote” thousands of identical copies of the image.
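Facebook has not published the details of its photo-detection technology, but one common way to find near-identical copies of an image at scale is perceptual hashing: compute a compact fingerprint of each image and compare fingerprints rather than pixels. The sketch below, using a toy "average hash" over small grayscale grids (the images, pixel values and threshold are all illustrative), shows the idea:

```python
def average_hash(pixels):
    """Simple perceptual hash: one bit per pixel, recording whether
    that pixel is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of bits where two hashes disagree."""
    return sum(a != b for a, b in zip(h1, h2))

def near_duplicate(img_a, img_b, threshold=5):
    """Treat images as copies if their hashes are nearly identical."""
    return hamming(average_hash(img_a), average_hash(img_b)) <= threshold

# Toy 4x4 grayscale "image" and a slightly re-encoded copy of it
original = [[10, 200, 30, 220], [15, 190, 25, 210],
            [240, 20, 230, 10], [250, 30, 235, 5]]
copy = [[12, 198, 28, 222], [14, 192, 27, 208],
        [238, 22, 228, 12], [248, 28, 233, 7]]

print(near_duplicate(original, copy))  # True
```

Because the hash ignores small pixel-level differences, re-saved or lightly edited copies of a flagged photo map to the same fingerprint, which is what allows thousands of identical copies to be demoted once one has been fact-checked.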
“Machine learning enables us to make predictions about a lot of data, without having to use humans to review it,” said Tessa Lyons, Facebook’s head of news feed integrity, in a Facebook documentary entitled Facing Facts.
Twitter also states that AI is a key tool in the battle against fake news, saying that the company is “investing heavily” in technology to tackle fake accounts and manipulation.
A spokesperson said: “We’ve integrated behaviour-based signals to help our machine learning tools more proactively challenge problematic accounts and behaviour – as early as the point of account creation. This will remain a critical area of focus for the company in 2019.”
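Twitter has not disclosed which behaviour-based signals it uses. As a purely hypothetical illustration of the approach (scoring an account at creation time on suspicious sign-up behaviour), a rule-based risk score might look like this; every signal name and threshold here is invented for the example:

```python
def account_risk_score(signals):
    """Toy risk score over hypothetical sign-up signals.
    The real features platforms use are not public."""
    score = 0
    if signals.get("signups_from_same_ip", 0) > 10:
        score += 2  # many accounts created from one IP address
    if not signals.get("email_verified", False):
        score += 1  # unverified contact details
    if signals.get("follows_in_first_hour", 0) > 100:
        score += 2  # aggressive automated following
    return score

# A bot-like sign-up scores high and could be challenged immediately
print(account_risk_score({"signups_from_same_ip": 50,
                          "email_verified": False,
                          "follows_in_first_hour": 500}))  # 5
```

In practice such hand-written rules would be replaced or supplemented by a trained model, but the principle is the same: act on behavioural signals at the point of account creation, before any content is posted.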
The efforts appear to be paying off: research by the University of Michigan suggests that the sharing of content from dubious sites has now dropped to pre-2016 levels on Facebook.
The fake news arms race
Last summer, Facebook acquired a “natural language” AI company, Bloomsbury AI, and its AI system, Cape, which reads documents and “understands” the content. Facebook said that the acquisition “will strengthen Facebook’s efforts in natural language processing research”.
Natural language processing could be key in the “arms race” between fake news creators and social networks, helping to spot and flag fake news almost instantly, says Iain Brown, head of data science at SAS UK & Ireland.
Brown says: “Professional text mining tools, adding Natural Language Processing (NLP) to the mix, can be implemented to identify fake news. Ultimately, AI can help identify fake news almost immediately when it appears.”
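One simple text-mining approach of the kind Brown describes is a bag-of-words classifier trained on articles already labelled as fake or genuine. The sketch below is a minimal naive Bayes classifier built with only the Python standard library; the four-document "corpus" is invented for illustration, and a real system would train on far more data and richer features:

```python
import math
from collections import Counter

def train(labeled_docs):
    """Count word frequencies per label from (text, label) pairs."""
    counts = {"fake": Counter(), "real": Counter()}
    doc_totals = Counter()
    for text, label in labeled_docs:
        doc_totals[label] += 1
        counts[label].update(text.lower().split())
    vocab = set(counts["fake"]) | set(counts["real"])
    return counts, doc_totals, vocab

def classify(text, counts, doc_totals, vocab):
    """Pick the label with the highest Laplace-smoothed log-probability."""
    scores = {}
    for label in counts:
        total = sum(counts[label].values())
        score = math.log(doc_totals[label] / sum(doc_totals.values()))
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

docs = [
    ("shocking miracle cure doctors hate", "fake"),
    ("you won't believe this one weird trick", "fake"),
    ("parliament passes budget after debate", "real"),
    ("central bank holds interest rates steady", "real"),
]
model = train(docs)
print(classify("miracle trick doctors hate", *model))  # fake
```

Because classification is just a pass over the words of an article, a model like this can score new content the moment it appears, which is the speed advantage NLP brings to the arms race.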
AI and humans working together
It’s clear artificial intelligence isn’t a magic bullet to stop fake news: an MIT study in October 2018 found that AI was only 65 per cent effective in spotting fake news articles.
But the important advantage of artificial intelligence is speed, says Rob Clyde, board chair of international IT governance association ISACA.
“Crowd-sourced vetting is useful, but it takes time, during which the untrue ‘news’ will spread,” says Clyde.
Clyde says that “supervised learning” can help AI to become more accurate over time. He continues: “Humans can feed the AI examples of already categorised fake and non-fake news. The human trainers will see the AI’s decision and provide feedback, and this feedback will allow the AI to become even more accurate.”
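The feedback loop Clyde describes can be sketched very compactly: the model makes a call, a human reviewer supplies the correct label, and the model's parameters are nudged whenever the two disagree. The perceptron-style word-weight model below is an illustrative assumption, not how any platform actually trains its systems:

```python
def score(text, weights):
    """Sum learned word weights; a positive total means 'fake'."""
    return sum(weights.get(w, 0.0) for w in text.lower().split())

def feedback_update(text, is_fake, weights, lr=1.0):
    """Perceptron-style update: when the model's prediction disagrees
    with the human reviewer's label, nudge the weights of the words
    in the article towards the correct label."""
    predicted_fake = score(text, weights) > 0
    if predicted_fake != is_fake:
        delta = lr if is_fake else -lr
        for w in text.lower().split():
            weights[w] = weights.get(w, 0.0) + delta

weights = {}
# Human fact-checkers supply labelled examples; the model learns from each
training = [
    ("miracle cure shocks doctors", True),
    ("council approves road repairs", False),
    ("shocking secret they hide", True),
]
for text, label in training:
    feedback_update(text, label, weights)

print(score("miracle cure revealed", weights) > 0)  # True: flagged as fake
```

Each round of human feedback corrects the model only where it was wrong, which is why accuracy improves over time as more reviewed examples accumulate.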
Artificial intelligence is not perfect, but it’s better than anything else out there, says Zhewei Zhang, assistant professor of information systems at Warwick Business School.
Zhang says: “Deliberately manufactured fake news may fool the AI system by altering a few key points. Last year, researchers fooled a Google AI algorithm into thinking a rifle was a helicopter. After changing a few pixels, two pictures that may be seen as identical by a human will be identified as two completely different objects by AI.”
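The rifle/helicopter result Zhang refers to is an adversarial example: a tiny, targeted perturbation of the input that flips a model's output while looking unchanged to a human. The mechanism can be shown on a toy linear classifier, using a fast-gradient-sign-style nudge; the weights, input and epsilon below are all invented for illustration:

```python
def predict(x, w, b):
    """Linear classifier: positive score means one class, negative the other."""
    return sum(xi * wi for xi, wi in zip(x, w)) + b

def fgsm_perturb(x, w, eps):
    """Fast-gradient-sign-style attack: shift every feature by eps
    in whichever direction pushes the score towards the boundary."""
    return [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

w = [0.5, -0.3, 0.8]   # toy learned weights
b = -0.1
x = [0.2, 0.1, 0.1]    # original input: score is just above zero

adversarial = fgsm_perturb(x, w, eps=0.1)

print(predict(x, w, b) > 0)            # True:  original classification
print(predict(adversarial, w, b) > 0)  # False: tiny change flips the label
```

A change of 0.1 per feature is imperceptible in a real image with thousands of pixels, yet because every nudge is aligned with the model's weights, the small changes add up and push the score across the decision boundary.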
Zhang says he believes artificial intelligence is “absolutely necessary” in the battle against fake news, and will work alongside humans in the long term.
He explains: “I think AI/ML (machine learning) will generally outperform crowd-sourced vetting, but I don’t think it can replace the human effort. They will be complementary to each other. Human intelligence will be more effective in detecting new breeds of fake news, and provide important training data to improve the AI.”