It’s exercise season - but do they work?

As we in the southern hemisphere exit severe weather season, Australian agencies are ramping up large-scale, cross-agency emergency management exercises at state and Federal levels in May.

So three things:

1. Do they work?

2. What makes a good exercise?

3. How do they measure the messy stuff (that is, ‘the general public’)?

Do they work?

Looking through the research, there is not a great deal to be found.

Professor Jim McLennan’s team did a literature review on training to improve emergency management decision-making, to find out what the science tells us. They found studies on three groups of exercises: discussion-based exercises such as workshops, tactical decision games (TDGs) and tabletop exercises; operations-based training like drills, EOC exercises and field exercises; and e-based exercises, such as simulations and virtual reality exercises.

Very few of those studies looked at evaluating the activities or the decision-making within them.

Very briefly, workshops, TDGs, drills and EOC exercises have not been evaluated as learning tools. Computer simulations have been evaluated, but at the individual response level – they don’t seem to have been tested in larger-scale or inter-organisational settings. VR is more effective than other training tools, but measurement is again at individual or small-group levels.

So do exercises work or not? The answer to that seems to have been held in-house across agencies – always a dangerous habit that doesn’t look good at the post-event inquiry.

Tabletop exercises have been measured (Dausey, Buehler and Lurie looked at this in 2007 in health exercises), and while the team recorded both improvements in agency responses AND ideas for improving tabletop design (we’ll come back to this), the study didn’t measure whether the exercises actually improved decision-making, or whether participants gained knowledge, skills or connections.

Field exercises are another popular format, especially for cross-agency activities, but in the one evaluation McLennan’s team could find, participants reported that they learned little from them. That study, of 10 Swedish multi-agency collaborations, produced a framework that the study team then tested to see if it improved the learning and usefulness of field exercises overall.

This framework featured collaboration at each step of exercise development and implementation, two short in-process feedback sessions, and a repeat of the first stages of the scenario after the first feedback session. When the team tested it, they found improvements in both learning and cross-agency collaboration.

So the bottom line is that only field exercises have been measured for their effect on organisational relationships and learning, and even these are only successful if the exercise is conducted a certain way.

 

What makes a good exercise?

Coming back to the Dausey team’s tabletop exercise design framework – this lays out a series of principles that I think could be applied to any of the three groups of exercises, and many of them seem to be applied to exercises around the world already. Most are no-brainers, but two are interesting:

1. Exercises should be designed to achieve a specific objective.

2. Exercises should be as realistic as possible while remaining logistically feasible.

3. Tabletop exercises should be developed around issues rather than scenarios. Determine the issues, then build the scenario to fit.

4. Decision-making must be forced, time-limited and targeted at one of the issues.

5. Exercises should involve a limited number of participants (no clues are provided on ideal numbers or the point at which it becomes too big).

6. Exercise design and execution may benefit from collaborative engagement of representatives from participating agencies, and of external developers and facilitators.

All that remains is for the research into their effects to be done!

 

Do exercises measure the messy stuff?

Debriefs are the main way of identifying what went well in an exercise and which problems need to be fixed. The way a debrief is facilitated, and the many ways information can be collected, can raise (or lower) the standard of learning and the effect of the exercise. Both hot debriefs (onsite, immediately after) and cold debriefs (within six weeks) are recommended.

When we say debrief, we mean a meeting that asks three questions, using the format used by Queensland’s Ergon Energy:

What can we sustain?  What can we improve? What can we fix?

These questions are general and can cover every small problem or win, but debriefs can often be driven by the number of people who have experienced a certain issue, the loudest voices, the larger agencies and the session’s time constraints.

Small but important issues can arise but not make it to the debrief – maybe only one person experienced them, maybe they were put forward by a smaller agency without much volume, maybe the session ran out of time. These need to be picked up by post-exercise review tools.

Debriefs can be held in a range of different ways – games, journal writing, questionnaires, panel discussions, dialogues, interviews, even a Delphi study that collects and prompts as it goes around to the participants.

Regarding the messy stuff – such as participants’ coping, media involvement, and the involvement of the community or community organisations that expose tricky obstacles – most of the research focuses on emergency agencies. We know that for the operational people, dealing with ‘the general public’ is not their thing. Community exercises are conducted separately, and often successfully measured, but with no operational component. The California ShakeOut, flood drills in the Philippines, and bushfire evacuation drills in Victoria are examples.

The media are rarely included, despite their partnerships with agencies during incidents.

Because of this, I reckon the Dausey team’s list for a good tabletop needs a couple more aspects to cover off the messy business of people:

7. Include a panel of community representatives (including vulnerable communities).

8. Involve the media.

We need research

It’s a bit concerning that emergency agencies are relying on techniques that haven’t been well tested for their effects.

We need research at the basic level of “well, duh, we knew that all along” in order to progress to testing trickier and more nuanced ideas on the same topic.

What you think works is very different to what you know works.

Next

Our biggest import from the USA is disaster disinformation