Disinformation nation

15 Dec 2020

Rapid technological advances in recent years have accelerated the move online and the use of social media as a means of exchanging views and ideas. The ability to connect directly with anybody else in the world in real time brings significant advantages, but it also makes it harder to separate fact from fiction. More traditional sources of information such as newspapers and television have typically been subject to scrutiny by regulatory bodies or professional codes of conduct, with remedial action required for any infraction. The internet is slowly catching up, but self-regulation is in its early days, meaning there remains ample opportunity to spread misinformation and disinformation for political or economic gain.

Instances of ‘fake news’ and misleading claims continue to be brought to light. The Washington Post believes it has identified over 23,000 false or misleading claims made by President Donald Trump between the beginning of his presidency on 20th January 2017 and 11th September 2020. President Trump continues to fight against his ousting and has published a raft of social media posts alleging voter manipulation at the recent US election, many of which have been flagged or removed across several platforms. A post on 28th November 2020 by Amit Malviya, National Head of Information and Technology for the ruling BJP political party in India, was flagged by Twitter as ‘manipulated media’, the first such move against Narendra Modi’s party in one of Twitter’s biggest markets. Twitter also suspended tens of thousands of accounts in the summer, saying they were part of a misinformation campaign run by the Chinese government to spread pro-Chinese messages about Hong Kong.

Manipulated videos known as ‘deepfakes’, typically featuring politicians or celebrities, are proliferating online; one example from 2019 purported to show former Italian prime minister Matteo Renzi making obscene gestures towards other Italian politicians. It was later revealed that his likeness had been spliced onto footage of a comedian using sophisticated computer algorithms. Generative adversarial networks, or GANs, are a related technique: two artificial intelligence models are trained against each other, one generating synthetic images or video and the other learning to distinguish them from the real thing. More worryingly, they can create convincing images of people who do not exist, producing synthetic faces that most casual observers cannot detect as fake. These images can then be used to front fake social media accounts, with machine learning techniques generating new and original content and honing the language over time so that it appears to have been written by a real person. Such programs can be used in benign contexts but also to push more nefarious agendas.
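
To make the mechanism concrete, the sketch below shows the adversarial pairing at the heart of a GAN. It is a minimal illustration only: the framework (PyTorch), the network sizes and the image dimensions are all assumptions made for the example, not details of any real deepfake system.

```python
# Minimal sketch of a GAN's generator/discriminator tug-of-war.
# PyTorch, layer sizes and image dimensions are illustrative assumptions.
import torch
import torch.nn as nn

LATENT_DIM = 64       # size of the random noise vector the generator consumes
IMAGE_DIM = 28 * 28   # hypothetical flattened image size (e.g. 28x28 greyscale)

# Generator: maps random noise to a synthetic image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMAGE_DIM), nn.Tanh(),
)

# Discriminator: scores how likely an image is to be real rather than generated.
discriminator = nn.Sequential(
    nn.Linear(IMAGE_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    """One adversarial update: the discriminator learns to tell real from
    fake, then the generator learns to fool the updated discriminator."""
    batch = real_images.size(0)
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise)

    # Discriminator step: real images labelled 1, generated images labelled 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real_images), torch.ones(batch, 1)) \
           + loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()

# Example: one update on a batch of placeholder "real" images.
train_step(torch.rand(16, IMAGE_DIM) * 2 - 1)
```

Run at far greater scale, this same contest is what pushes the generator's output towards images that casual observers cannot tell from photographs.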

A report published on 19th March 2020 by the World Health Organisation identified what it called an ‘infodemic’: an over-abundance of information relating to the Covid-19 pandemic, including rumour and myth perpetuated online. The Oxford Internet Institute has suggested that publicly available information should be treated as a social determinant of health, the non-medical factors estimated to account for 30-55% of health outcomes, alongside more familiar determinants such as education and housing; on that basis, online misinformation could directly contribute to poorer health outcomes. As the UK looks to roll out the world’s first Covid-19 vaccination programme, social media is rife with conspiracy theories, ranging from claims of malicious interference by Microsoft founder and billionaire Bill Gates to more traditional ‘anti-vaxx’ propaganda alleging links to conditions such as autism or other harmful side effects. Given that herd immunity requires a large proportion of the population to be vaccinated, and not all individuals are eligible due to underlying conditions or co-morbidities, a sufficiently large anti-vaccination movement could endanger the efficacy of the entire exercise. This would have far-reaching human and economic consequences.
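
The arithmetic behind that concern is straightforward. As a rough sketch, the standard herd-immunity threshold is 1 - 1/R0, where R0 is the number of people one infected person goes on to infect; the snippet below works through a hypothetical example (the R0 and efficacy figures are illustrative assumptions, not estimates from this article).

```python
# Hedged illustration of the standard herd-immunity threshold, 1 - 1/R0,
# adjusted for imperfect vaccine efficacy. All figures are hypothetical.
def herd_immunity_coverage(r0: float, efficacy: float) -> float:
    """Fraction of the population that must be vaccinated so that the
    immunity threshold (1 - 1/R0) is still reached despite efficacy < 1."""
    threshold = 1 - 1 / r0
    return threshold / efficacy

# Example: an R0 of 3 and 90% efficacy imply roughly 74% coverage,
# so every person dissuaded from vaccinating narrows the margin.
print(f"{herd_immunity_coverage(r0=3.0, efficacy=0.9):.0%}")
```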

So, what can be done in the fight against misinformation? In the context of the coronavirus, the Cabinet Office’s Rapid Response Unit was tasked with fighting fake news from the beginning of the pandemic, rebutting inaccurate claims and asking for posts to be removed. Such is the scale of the threat to the vaccine operation that the army’s 77th Brigade information warfare unit has been asked to help identify any disinformation originating from overseas. Any risks identified would be referred to the intelligence agency GCHQ so that the danger can be addressed.

Legislators are slowly catching up. In the UK, the Online Harms White Paper, which is designed to protect internet users and the digital economy, is currently going through a consultation period; part of its scope is to target misinformation and reduce the harm it causes. In the United States, legislation to weaken Section 230 of the 1996 Communications Decency Act is being debated in Congress, with the aim of holding social media platforms accountable for the content their users share. The current law protects the companies; the proposal is to treat them as publishers of that content, responsible for its accuracy, rather than allowing them to claim that responsibility rests solely with the individual posting and to wash their hands of the consequences.

In response to impending changes, social media companies have taken to self-regulation, attempting to balance freedom of speech and freedom from censorship against the need to minimise the harm caused by deliberate or negligent proliferation of misinformation. This often leaves them in an invidious position, with action, or inaction, likely to anger one side or the other. Facebook agreed to remove Covid-19 related misinformation only where it could contribute to imminent physical harm, preferring instead to label false information with a warning where fact-checkers could identify it. More recently, the company has indicated it will remove vaccine-related conspiracy theories and has banned adverts that discourage people from being vaccinated.

Artificial intelligence can also be used to combat the problem. Just as computers can create false accounts and posts from non-existent people, they can also detect them, which helps in sifting through the sheer volume of content published every day. YouTube employs a 10,000-strong team to filter content on its platform, but when the pandemic took many of them offline, the automated systems supporting the team were given greater autonomy to take down videos without human oversight.
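
As a hedged illustration of the idea, the toy example below trains a simple text classifier to flag suspect posts for human review. The library (scikit-learn), the tiny labelled sample and the threshold are all assumptions made for the sketch; real platform systems are vastly larger and combine many more signals.

```python
# Toy sketch of automated content triage: a classifier scores posts and
# queues high-risk ones for human review. Data and threshold are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = misleading, 0 = benign.
posts = [
    "miracle cure doctors don't want you to know",
    "vaccine microchips confirmed by insiders",
    "trial results published in peer-reviewed journal",
    "local clinic extends opening hours for vaccinations",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Score new content; anything above the threshold is flagged for review
# rather than removed outright, keeping a human in the loop.
REVIEW_THRESHOLD = 0.5
for post in ["secret cure suppressed by doctors"]:
    score = model.predict_proba([post])[0][1]  # probability of class 1
    if score > REVIEW_THRESHOLD:
        print(f"flag for review ({score:.2f}): {post}")
```

The design choice in this sketch, scoring for triage rather than automatic deletion, mirrors the trade-off in the YouTube example above: removing the human reviewers increases speed but also the rate of wrongful takedowns.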

A vast amount of information is shared online every day, and unfortunately this includes misinformation and disinformation that can harm society and our economy. However, a great deal of work is being undertaken to understand the sources of this misinformation and to hold the individuals involved to account for their actions.

 

Richard O'Sullivan, Investment Research Manager, RSMR

