Disinformation and misinformation, what’s in a name?


Jason Shepherd
Contributor

As has been written in these pages recently, disinformation is a global problem and one which is becoming ever more pressing.

The impact on public discourse has shaped the trajectory of European and US politics; information management in Africa and the Middle East is very much an active front in Russia’s war in Ukraine; and disinformation has arguably deepened the existential risk posed by global warming.

Mis- and dis-information both cost lives at the height of the pandemic. This isn’t about hurt feelings or the ‘bottom half of the internet’ – this matters, a lot, right now.

Tackling the spread of inaccurate information on the internet is hard. The first problem is just the sheer volume of data. We each live in our information bubbles, and we can’t keep up with everything flying past us; trying to manage the whole seething foam of bubbles is a fool’s errand.

Or so we’re told.

While it is possible to verify specific stories as they are highlighted, we recognise this approach will struggle to scale. The more challenging second problem is that there is rarely a ‘yes/no’ answer to the question ‘is this statement accurate?’

Context matters, as does the nature of the statement itself. ‘Is the Earth warming?’ is an objective question that can be answered (erm, yes, btw); the answer to ‘Is it due to human activity?’ is now generally accepted, but still argued over. ‘Is there a consensus among those who’ve actively studied it that it is due to humans?’ is once again objective (again, yes).

But all those nuances matter when it comes to assessing the veracity of the statement.

Rather than assessing the veracity of a statement, individual people tend to look at how much – and why – they trust the source. If a professor of epidemiology tells me about a pandemic, I will trust that rather more than I will if it’s a TV star.

Not everyone makes that distinction, I realise, and sometimes it takes a TV star to seize someone’s attention in the first place, but it’s worth checking who’s passing them their speaking notes.

Let’s turn to the distinction between misinformation and disinformation. The former is a mistake, where inaccurate information is propagated with good intent. The latter is a lie, knowingly told for effect. To tell such a lie, the teller has to present themselves as just plausible and trustworthy enough.

When it comes to making up the background chorus, that plausibility can be paper thin. All those posts on Twitter that chip away at a government policy don’t have to be very trustworthy; they just have to make it sound reasonable to believe that masks make the virus worse, or that sunspots explain weather changes over the last twenty years.

If they can get it amplified as misinformation – and passed on with innocent enthusiasm – so much the better.

Longer pieces, which set out the meat of the issue, need to be better dressed up. Unfortunately, it’s very easy to dress up and look respectable on the internet – especially on the bigger platforms, which are designed with a very low cost of entry.

How might this line of thought help us?

Well, the first thought is that a very small degree of ‘know your customer’ – raising the bar to entry for accounts on the big platforms – might go a very long way to impeding the automatic amplification of disinformation.

There are very good arguments against this, however; anonymous posting is hugely empowering for sizeable communities that otherwise lack a voice and are frequently suppressed.

However, the online communications environment is a scale-free network, with key ‘super-connecting’ nodes that have huge reach. These are the ‘influencers’ who work professionally, putting out content and trying to attract as many views as possible.

They are in a commercial relationship with the platforms – and, often, with sponsors who use them to get information broadcast. They are not just ‘innocent’ users, and they have significant power without the associated responsibility.
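
As a rough illustration of that scale-free structure, the short Python sketch below (a toy preferential-attachment model, not anything drawn from real platform data) shows how a handful of hub accounts come to hold a disproportionate share of connections:

```python
# Toy illustration of a scale-free network (not real platform data):
# preferential attachment naturally produces a few 'super-connecting' hubs.
import networkx as nx

# 10,000 accounts; each newcomer links to 3 existing accounts,
# preferring those that are already well connected.
G = nx.barabasi_albert_graph(n=10_000, m=3, seed=42)

degrees = sorted((d for _, d in G.degree()), reverse=True)
hub_share = sum(degrees[:10]) / sum(degrees)

# A tiny fraction of nodes carries an outsized share of the edges;
# these are the network's 'influencers'.
print(f"Top 10 of 10,000 accounts hold {hub_share:.1%} of all connections")
```

This is why interventions aimed at the few professional hubs can have outsized leverage compared with policing the long tail of ordinary accounts.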

We think that the lack of transparency around sponsorship is a key weakness in the system, and that addressing it will go a long way to providing information consumers with what they need to come to their own judgement on what, and whom, to trust.

On the inside of the platforms, the distinction between private and commercial users has become blurred, but they know it matters. On the outside, we’ll continue to find ways to follow the money and find out who’s paying the piper.

Just as in politics in the real world, knowing who’s paying to lobby your elected representatives on what issues is a critical part of accountability in public life. Knowing who’s paying to fill your feed with opinion and poorly contextualised facts is just as important. Given the impact that it’s had, we think that’s worth getting after.

Thomson Reuters Special Services International (TRSSI) works with partners such as Refinitiv to extract information by combining disparate commercial, proprietary, and public datasets. There are two keys to our approach: first, focussing on relationships between entities as much as, or more than, on the characteristics of the entities themselves; and second, looking for hidden relationships by combining multiple data sources. When we apply this philosophy to understanding disinformation actors, we call it ‘Full Spectrum Counter Disinformation’.
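
To make the idea concrete, here is a minimal sketch of that second key: joining two datasets on a shared attribute to surface a relationship that neither shows on its own. All records, names, and fields here are invented for illustration; this is not the TRSSI or Refinitiv pipeline.

```python
# Two toy datasets that look unrelated until joined on a shared attribute.
# All records and field names are invented for illustration.
company_registry = [
    {"company": "Acme Media Pty", "contact_email": "ops@example.net"},
    {"company": "Bluebird Holdings", "contact_email": "info@example.org"},
]
domain_whois = [
    {"domain": "truth-news.example", "registrant_email": "ops@example.net"},
    {"domain": "daily-facts.example", "registrant_email": "admin@example.com"},
]

# The hidden relationship: a company and a domain sharing a contact email.
links = [
    (c["company"], d["domain"])
    for c in company_registry
    for d in domain_whois
    if c["contact_email"] == d["registrant_email"]
]
print(links)  # [('Acme Media Pty', 'truth-news.example')]
```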

To identify controlling actors in disinformation campaigns, we look at three domains:

  • Physical. Real-world infrastructure such as servers and hardware, and the institutions that own or control that technology and infrastructure
  • Logical. Digital infrastructure such as domains or online platforms
  • Conceptual. Users’ online presence and the content they generate/share

Crucially, we can look not only at relationships within each domain, but between them. When we look at entities – such as individual accounts, ideas, hosting companies, platform providers, moderators, and so forth – in terms of their relationships, we can see two distinct risk flags.

One is a link between an ostensibly good actor and a bad actor. We can frequently see who pays someone, or who provides services to someone, or who steals ideas from someone.

The other is a pattern of relationships that flags an entity as looking like other ‘bad’ entities. Just as people learn to recognise the signs of risk when dealing with other people (‘someone dressed like that, in this location, with those behaviours, makes me worried’), so analysts learn to recognise signs of risk in internet actors. Ducks look, and walk, like ducks.
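
Below is a minimal sketch of both flags on an invented cross-domain graph of accounts, hosts, platforms, and sponsors. The entities, similarity measure, and threshold are our assumptions for illustration, not TRSSI’s actual model.

```python
# Sketch of the two risk flags on an invented cross-domain graph.
# Illustrative assumptions only; not TRSSI's actual model or data.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("acct_A", "host_1"), ("acct_A", "platform_P"), ("acct_A", "sponsor_X"),
    ("acct_B", "host_1"), ("acct_B", "platform_P"), ("acct_B", "sponsor_X"),
    ("acct_C", "host_1"), ("acct_C", "platform_P"), ("acct_C", "sponsor_Y"),
    ("acct_D", "host_2"), ("acct_D", "platform_Q"), ("acct_D", "sponsor_Y"),
])
known_bad = {"sponsor_X"}

# Flag 1: a direct link between an ostensibly good actor and a bad one.
flagged = {n for bad in known_bad for n in G.neighbors(bad)}

# Flag 2: a pattern of relationships resembling already-flagged entities
# (Jaccard overlap of neighbourhoods - 'walks like a duck').
def jaccard(u, v):
    nu, nv = set(G.neighbors(u)), set(G.neighbors(v))
    return len(nu & nv) / len(nu | nv)

accounts = ["acct_A", "acct_B", "acct_C", "acct_D"]
lookalikes = {
    v for v in accounts
    if v not in flagged and any(jaccard(u, v) >= 0.5 for u in flagged)
}
print(flagged)     # {'acct_A', 'acct_B'}  - paid by a known bad sponsor
print(lookalikes)  # {'acct_C'}            - same host/platform footprint
```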

This doesn’t tell us whether the content of a mass personal message is factual, but it does tell us how suspicious we should be of the poster’s general trustworthiness. Indeed, this approach provides a leading indicator – a new swathe of posts on a given topic hosted on a given platform, amplified by a diverse range of influencers that share a common hosting or sponsorship route, can be flagged as potentially damaging before the content is even examined.
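
As a sketch of how such an indicator could be computed (over invented post records; not an actual detection rule), one could count how many ostensibly independent amplifiers of a topic share a single sponsorship route:

```python
# Leading-indicator sketch: flag a topic when many distinct amplifiers
# share one sponsorship route, before reading any content.
# Records and threshold are invented for illustration.
from collections import defaultdict

posts = [
    {"topic": "anti-mask", "amplifier": "infl_A", "sponsor": "agency_Z"},
    {"topic": "anti-mask", "amplifier": "infl_B", "sponsor": "agency_Z"},
    {"topic": "anti-mask", "amplifier": "infl_C", "sponsor": "agency_Z"},
    {"topic": "gardening", "amplifier": "infl_D", "sponsor": "seed_co"},
]

amplifiers_by_route = defaultdict(set)
for p in posts:
    amplifiers_by_route[(p["topic"], p["sponsor"])].add(p["amplifier"])

# Many ostensibly independent voices, one paymaster: a coordination flag.
suspicious = [route for route, amps in amplifiers_by_route.items()
              if len(amps) >= 3]
print(suspicious)  # [('anti-mask', 'agency_Z')]
```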

Advertising standards have long asserted that people need to know when they’re being sold something – people also deserve to be told that the personal message of support they’ve received, encouraging them to riot, throw away their mask, march against net zero policies, and so forth, is actually part of a coordinated campaign with a single guiding mind. Then they are able to engage with the content as an equal, and not as a victim.

Translating this intelligence-led investigative approach into a policy framework that encourages the right social and technical innovation is not an easy path, but judging honesty and intent may still prove easier than judging factual truth at population scale.

Jason Shepherd is the Senior Director of International Strategy at Thomson Reuters Special Services International and a Senior Associate Fellow at the Royal United Services Institute (RUSI). He joined Thomson Reuters in 2021 after a twenty-three-year career in the UK national security community, during which he contributed to interoperability both among the Five Eyes (FVEY) partners and across UK agencies and government departments. For more information on the services covered in this article, please contact Phillip Malcolm at Phillip.malcolm@trssintl.com.

This byline was produced as part of a commercial partnership between InnovationAus.com and Refinitiv.
