Disinformation: the lurking threat no emergency leader can ignore
Your agency’s reputation and ability to respond are on the line in the Next Big Event - but there is a simple approach to avoiding the pain.
Thirteen years ago this month, residents of the east coast of the United States from Florida to Maine were hit by Superstorm Sandy.
They were also hit with fake images of sharks swimming in inland streets, photos that had been altered to exaggerate storm damage, emergency instructions that were wrong, news that the New York Stock Exchange was under 3 feet of water (it wasn’t), and that the power company had disconnected electricity to Manhattan (it hadn’t).
Superstorm Sandy turned out to be Ground Zero for natural hazard disinformation.
Hurricanes Harvey and Irma (both 2017) saw disinformation evolve dramatically as fake news cross-pollinated with wild conspiracy theories and was reposted by the President and other politicians. Like the one that ‘illegal’ immigrants were getting all the disaster relief. Or, spread by others, that the government was geo-engineering the weather. And then there was the one about Black Lives Matter blocking disaster aid; the Houston mayor missing in action; and mosques refusing shelter to non-Muslim Houston residents and hoarding aid. You can see a whole list here.
From a distance it is ridiculous and hilarious.
But it has real impact on response and recovery operations, and agencies and their communication teams need to pay attention.
In Kerala, India, in 2018, widespread flooding killed nearly 500 people, and some of those deaths were thought to be a result of the way people and agencies responded to disinformation spread on WhatsApp.
Among the worst: that a dam was about to burst; that the Peechi Dam gates were about to be opened; that Kerala didn’t need money; that most of the affected people were from rich or middle-class families and therefore didn’t need help; that donations to the Chief Minister’s Disaster Relief Fund would be misused; plus sundry scams encouraging people to donate to bogus relief organisations.
Emergency operations were halted, and worried householders clogged roads as they unnecessarily evacuated out of what they thought would be the path of the dam water. False reports of road closures also hindered rescuers trying to reach the worst-affected areas.
Disinformation during the floods in Valencia, Spain, last year trotted out geoengineering conspiracies and the story that dams upstream had been deliberately destroyed. Just this year in Portugal and Greece, disinformation has built fake narratives about a benevolent Russia providing firefighting support. Much of the Valencia flood disinformation also originated from Russia.
It’s spreading, and it WILL come to a disaster near you.
In Australia we feel fairly protected, but we regularly see cloud seeding, government weather manipulation and ‘chemtrails’ narratives pop up in online conversations, especially about weather hazards. The old ‘plague of arsonists’ climate crisis denial narrative comes out every bushfire season.
AAP and other sites do a great job of identifying and debunking fake news, like last month’s fake ‘breaking news’ of Australian storms with video of a typhoon in the Philippines standing in for a storm in Brisbane.
From the AAP FactCheck site, viewed October 4, 2025: https://www.aap.com.au/factcheck/overseas-disaster-footage-used-in-fake-breaking-news-of-australian-storms/
But politicians and agencies in many countries are strangely silent, even though there are a number of important ways to get ahead of these narratives, as the Indian government demonstrated during the Kerala floods in 2018.
Dealing with disinformation in disaster
There are two key activities, tightly linked, which should kick off BEFORE the impact phase. These are pretty simple to integrate into existing communication approaches: inoculation and its key tool, prebunking.
1. Inoculation
Inoculation is literally a ‘vaccine against brainwash’.
In one study testing inoculation against climate change fake news, a ‘brief’ communication effort (such as social media posts, media advisories and a spokesperson warning of what to expect) decreased acceptance of disinformation by 33%.
In the same study, a more in-depth method decreased belief and sharing by 75%!
A more in-depth method could be a workshop, a regular community engagement program, or a gamified community approach via conventional socials and media.
Inoculation and prebunking are much the same thing, although inoculation is the strategy, while prebunking is its key tool.
All you need is a record of what’s happened in the past in both your country and the USA, a bit of imagination relating to your own environment (like proximity to Russia, or your local media environment), and a creative team who can brainstorm potential disinformation scenarios as a season or event approaches.
Then you need platforms you can use to flag these scenarios, and a creative way of implementing your prebunking efforts.
Gamification is one of these creative ways that has been shown to work.
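To make the ‘record of what’s happened’ idea concrete, here is a minimal sketch of what a prebunking scenario register might look like if your team kept it in code rather than a spreadsheet. Everything here (the field names, the `ScenarioRegister` class, the example entries) is a hypothetical illustration, not a prescribed tool; the point is simply that past narratives, the hazards they attach to, and the prebunking message you would deploy can live in one simple structure reviewed as a season approaches.

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    """One disinformation narrative your team expects might resurface."""
    narrative: str           # the false claim, in plain words
    hazard: str              # hazard type it tends to attach to
    seen_before: list[str]   # past events where it appeared
    prebunk: str             # the message you'd push out ahead of time

@dataclass
class ScenarioRegister:
    """A season-by-season record the comms team reviews and updates."""
    scenarios: list[Scenario] = field(default_factory=list)

    def for_hazard(self, hazard: str) -> list[Scenario]:
        """Pull every scenario relevant to an approaching hazard."""
        return [s for s in self.scenarios if s.hazard == hazard]

# Example entries drawn from narratives described in this article.
register = ScenarioRegister([
    Scenario(
        narrative="Arsonists, not climate, caused the fires",
        hazard="bushfire",
        seen_before=["Australia 2019-20"],
        prebunk="Expect claims blaming waves of arsonists; flag in advance "
                "that this narrative returns every bushfire season.",
    ),
    Scenario(
        narrative="A dam upstream is about to burst / was destroyed",
        hazard="flood",
        seen_before=["Kerala 2018", "Valencia 2024"],
        prebunk="Dam status will only ever be announced via official "
                "channels; treat any other source as suspect.",
    ),
])

# As flood season approaches, surface the relevant prebunking messages.
for s in register.for_hazard("flood"):
    print(f"Prebunk now: {s.prebunk}")
```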
2. Fact checking/debunking
Your agencies need to call out disinformation the minute it emerges. The more eyeballs on that message before it is debunked, the harder it is to stop the virus.
DO THIS DEBUNKING AT EVERY OPPORTUNITY.
Share new disinformation at joint information committees, incident management meetings and across your comms teams.
Have your highly trusted spokespeople talk about it at every press briefing, video and live stream. Agency heads should take the lead (SUPPORTED by politicians, who are low on the trust scale).
Don’t fall into the trap of thinking that by talking about disinformation you will create copycats. It is already ubiquitous and you should trust the science on what works.
Where does it come from?
Like all good concepts, disinformation has a great framework behind it: the C5 interaction model. This 2025 model describes the context, causes, content, cycle of amplification and consequences of disinformation, and how these interact. You can check out the model yourself for the detail.
The ‘causes’ component of this model helps us understand both the creators of disinformation and their motives.
Creators can be classified along three dimensions: human/non-human, individual/organisation and non-state/state actor.
So Donald Trump pushing hydroxychloroquine as a cure for COVID-19 was a human/individual/state actor.
A cluster of Russian bots pushing the narrative of Russian aerial firefighting support in Portugal was a non-human/organisational/state actor.
News Corp Australia outlets pushing the arsonist theory to drown out the climate crisis consequences during the Australian 2019-20 bushfires were human/organisational/non-state actors.
Motives are classified as ideological (political, religious, some other belief system such as libertarian or sovereign citizen) or financial.
Cloud seeding narratives are ideological, coming from a conspiracy theory. Claims that only immigrants will get flood relief are based on political ideology, and scamming ‘donations for flood survivors’ is financially motivated.
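If it helps to see the taxonomy laid out, here is a small sketch encoding the three creator dimensions and the two motive classes as Python enums, with the three examples above tagged accordingly. The class and field names are my own illustrative choices, not part of the C5 model itself, and the motive assignments for the examples are my reading rather than the model’s.

```python
from dataclasses import dataclass
from enum import Enum

# The three creator dimensions described above.
class Agency(Enum):
    HUMAN = "human"
    NON_HUMAN = "non-human"          # e.g. bot networks

class Scale(Enum):
    INDIVIDUAL = "individual"
    ORGANISATION = "organisation"

class StateLink(Enum):
    STATE = "state actor"
    NON_STATE = "non-state actor"

# The two motive classes described above.
class Motive(Enum):
    IDEOLOGICAL = "ideological"      # political, religious, other belief system
    FINANCIAL = "financial"

@dataclass
class Creator:
    description: str
    agency: Agency
    scale: Scale
    state_link: StateLink
    motive: Motive                   # assignments below are my assumption

examples = [
    Creator("Trump pushing hydroxychloroquine",
            Agency.HUMAN, Scale.INDIVIDUAL, StateLink.STATE,
            Motive.IDEOLOGICAL),
    Creator("Russian bots pushing aerial firefighting narrative",
            Agency.NON_HUMAN, Scale.ORGANISATION, StateLink.STATE,
            Motive.IDEOLOGICAL),
    Creator("News Corp outlets pushing the arsonist theory",
            Agency.HUMAN, Scale.ORGANISATION, StateLink.NON_STATE,
            Motive.IDEOLOGICAL),
]

for c in examples:
    print(f"{c.description}: {c.agency.value}/{c.scale.value}/"
          f"{c.state_link.value}, motive {c.motive.value}")
```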
Usually, creating classifications or archetypes helps us solve a communication problem, but in the case of disinformation the treatment is the same whatever the strain and whoever posted it.
Inoculation is a very simple approach that can easily fold into your normal agency communication programs. It just seems daunting because disinformation, especially the bot-driven variety, looks unbeatable, and we want to believe that it can’t happen during a disaster.
But take heart: your communities are intelligent and caring, and given the right tools to deal with disinformation, they WILL step up.
Dr Barbara Ryan is a former journalist, communication professional and emergency communication academic who has been studying emergency communication and community information seeking since 2008. She offers workshops to help government communication teams develop disinformation inoculation strategies.