
Planning for the unexpected

Chris Berry, Founder of the risk management software company Initsys (a CRJ Partner), analyses recent events that should have taught us how crucial it is to plan for the unexpected, and how a lack of foresight and imagination can hinder risk planning.

Image: Looking ahead and assessing potential risks that might evolve quickly can highlight gaps in communication or operational procedures. Perhaps the previous generation had a better understanding of incident management and the necessity for planning?

We all plan, every day, and for almost everything. Yet over the last 30 years, the risk industry has treated planning as almost a secondary concern. Instead, there is a reactive purchase of new equipment that improves 'apparent readiness', but no one seems to plan or learn adequately.
 
This lack of foresight has been a factor in all significant incidents in recent history. 
 
At a government level, risk-minded people have tried to get at least the major things prepared. Still, a lack of funding, political will and external interference has caused systemic failure, with the same issues occurring time and again. It has to stop, and it starts at the top.
 
During the Cold War, the UK had the best preparedness for a national emergency. A generation forged by WW2 had seen how good planning made the difference. There was enough PPE for any national crisis (to the standards of the day), enough food (nutritional packs defined from experience) and a ready stock of emergency vehicles for an army prepared for a civilian role. Out-of-date equipment, food and protective clothing were released to the public via a well-oiled group of entrepreneurs.
 
It was a virtually self-funding wheel of government purchase, hold, release and replace.
 
Gradually, the Government changed direction, with costly consequences. PPE came to be acquired under a just-in-time (JIT) model that failed to recognise how a pandemic affects the whole world at once, and sourcing it essentially became the responsibility of individual hospital trusts. Large retailers assured the government that their supply chains were so efficient that it was not necessary to hold emergency food stocks at all. And a weighted combination of cost savings and pressure from some unions meant that the stored specialised emergency vehicles, and the training for those who would have operated them, were removed.
 
Covid has exposed these weaknesses. JIT did not overcome the lack of PPE, and retailers could barely maintain the supply of fundamentals; when food parcels were required, nutrition seemed to take a back seat. Unions had to back down and accept that emergency workers would have to cross roles – they did so successfully. Vehicles were purchased rapidly or borrowed from charities.
 
We didn't plan. And we failed to recognise the risk, despite the warnings issued by WHO and others. We certainly cannot say that Covid was a force majeure event. It was forecast, therefore not a surprise.
 
I think the generation before had a better understanding of incident management and the necessity for planning.
 
I do not think that we need to go back to holding significant food stocks (though should we at least have the basics?), but I do believe that PPE for the health service should be a centrally controlled purchase.
 
I also think it's true that the missed opportunity of UK combined emergency control rooms – proposed and built in the 1990s at great expense – would have paid dividends in both the Covid era and recent terror incidents. 
 
During the IRA terror campaign, the UK built a preparedness plan, constructed out of necessity. One simple factor was critical: just as in WW2, the control rooms represented all the emergency services, so a bomb warning would bring together a police element, a bomb disposal unit, and the ambulance and fire services. Communication was simple because they were all present.
 
Yet in Manchester, on May 22, 2017, a failure in the most basic communication between the emergency services caused a lack of response. Luckily, there was some reaction when, strictly, there should have been none: a 'Plato' order (active killer on the premises), correctly issued by Greater Manchester Police, was not passed to North West Ambulance Service, and was then not rescinded to Greater Manchester Fire Service for over two hours.
 
There is no central messaging system; this is something that is easily remedied with existing software.
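
As a rough illustration only, such a shared messaging layer need not be complex: an order issued by one control room is visible to every service, and any service that has not acknowledged it is flagged at once. The sketch below (in Python) uses entirely hypothetical names and is not a description of any existing product.

    # Minimal sketch of a shared order/messaging layer between control rooms.
    # All names (Order, MessageBus, the service list) are illustrative only.
    from dataclasses import dataclass, field
    from datetime import datetime

    SERVICES = {"police", "fire", "ambulance"}

    @dataclass
    class Order:
        name: str                       # e.g. "PLATO"
        issued_by: str
        issued_at: datetime
        acknowledged: set = field(default_factory=set)

    class MessageBus:
        def __init__(self):
            self.orders = []

        def issue(self, name, issued_by):
            order = Order(name, issued_by, datetime.utcnow())
            self.orders.append(order)
            return order

        def acknowledge(self, order, service):
            order.acknowledged.add(service)

        def unacknowledged(self, order):
            # Services that have not seen the order are flagged immediately,
            # rather than being discovered hours later.
            return sorted(SERVICES - order.acknowledged)

    bus = MessageBus()
    plato = bus.issue("PLATO", issued_by="police")
    bus.acknowledge(plato, "police")
    print(bus.unacknowledged(plato))    # ['ambulance', 'fire']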
 
There is some criticism of the venue and its security. A member of the security team did not pass a report of a potential terrorist, made by a member of the public, to the control room because the officer could not make contact by radio.
 
A pre-flight plan should have covered security, with a properly trained team on the ground and working communication with a control facility. Like the checks adopted by pilots, a pre-flight check would show that whatever might be required is in place, working and ready.
 
No pre-flight, no flight, no event.
 
Bear in mind that this site is a central transportation hub as well as a concert venue. Every day there should be a pre-flight check; but does this happen?
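
As a sketch of how simple such a check could be – the items and the helper below are hypothetical, chosen only to illustrate the 'no pre-flight, no flight' rule:

    # Minimal sketch of a pre-flight (pre-event) check. The items listed are
    # illustrative assumptions, not a recommended or complete checklist.
    PRE_FLIGHT = [
        "security team briefed and in position",
        "radios tested against the control facility",
        "CCTV coverage of entrances confirmed",
        "emergency exits unlocked and walked",
    ]

    def pre_flight(results):
        """results maps each check to True/False. No pre-flight, no flight."""
        failures = [item for item in PRE_FLIGHT if not results.get(item, False)]
        if failures:
            print("Event must not open. Outstanding items:")
            for item in failures:
                print(" -", item)
        return not failures

    pre_flight({
        "security team briefed and in position": True,
        "radios tested against the control facility": False,
        "CCTV coverage of entrances confirmed": True,
        "emergency exits unlocked and walked": True,
    })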
 
I doubt there had been any prior imagination of this potential 'scene' (the term risk planners used in the 1970s). If there had, then I am sure the basic errors committed that night would not have happened. There is no doubt in my mind that a planned scene (which could have been a sandbox exercise) would have highlighted the communication-related mistakes and led to the development of a proper operating procedure.
 
There appeared to be no active high-level risk management in any of the emergency services, partly owing to budget, but more likely because the culture no longer exists.
 
Staying with Manchester, let's compare 2017 to the bombing of June 15, 1996, some 21 years earlier. The IRA had issued a warning 90 minutes before the detonation; it took over 20 minutes to check that the notification was genuine and despatch all but a handful of police responders. However, good communication and pre-planning in the control room meant that fire, ambulance and bomb disposal resources mobilised quickly. Over 75,000 people were evacuated safely in under an hour. There was an evacuation plan and it worked perfectly, to the point where, once the combined services issued a 'clear' order, bomb disposal withdrew from trying to disarm what turned out to be the largest bomb detonated in Great Britain since WW2 – the risk to bomb disposal personnel no longer being justified once the risk to the public had been removed.
 
And by plan, I mean a plan. Everyone knew what to do, as there had been an exercise at the Fire Services Technical College exploring both the scenario of an explosion without warning and a pre-incident evacuation. Almost every town had such a plan. The 1.1d inspections that the Fire Service carried out for all significant risks knitted together with the regional plan, and then with the national programme (there was even some international planning – a job now done by charities).
 
To the casual observer, it would appear that as far as an incident response is concerned, we haven't gone forward but backwards.

Image: As for planning, it isn't what people imagine it to be. The currently widely used risk matrix throws up the expected. What it doesn't show is the evolving risk that is around us all the time.

Let's go back to Manchester, 1979 this time. Woolworths Manchester is the group's flagship department store, the largest in Europe. 
 
A little after 12:45 hrs, 140 people were packed into the restaurant and others were queuing. In all, 1,100 people were inside, either working or shopping.
 
An overloaded light fitting in the furniture department burst into flames, dropping melting plastic onto the furniture below. The fire grew and spread to rubbish beneath the escalator that led directly up to the restaurant floor.
 
At about 12:50 hrs, people started heading for the exits. There was no fire alarm or evacuation warning at this stage, and the business was not required to install sprinklers. Some of those evacuating found emergency exits locked.
 
At about 13:10 hrs, reports indicate that the fire alarm was sounding.
 
However, it was not until 13:30 hrs, when a taxi driver reported smoke billowing from the building, that the fire brigade was called; it attended immediately. Many staff waited on the pavement for the arriving emergency services, and multiple reports added to the confusion. People were trapped but still safe, and reaching them drew effort away from firefighting.
 
Ten people died. Some were found near the abandoned tills, food trays in hand; killed not directly by the fire, but by smoke.
 
This fire started a chain reaction in legislation. 
 
The investigation into the cause of the fire found that the foam used to fill the budget furniture was to blame for the smoke. Fire Officer Bob Graham led that investigation.
 
Afterwards, he joined other campaigners to persuade the government to change the law and oblige furniture makers to use flame-resistant foam. In 1988, the Furniture and Furnishings (Fire) (Safety) Regulations forced manufacturers to make furniture fillings and covers from safer materials.
 
Fast forward to 1981. Woolworths now has better training for its staff and, more importantly, a plan. It is workable and is combined with automatic signalling to the fire service. It is successful; despite the many false alarms, fires are less severe and there are no significant losses. Woolworths leads the way in pressing for safer furniture and no longer stocks items of risk.
 
Wimbledon Broadway Woolworths is a modernised three-storey department store, about half the size of the Manchester store. Fire breaks out in a storeroom, building heat until flashover occurs when the door is breached.
 
Despite it being a much fiercer fire, destroying the building, staff and customers evacuate safely. Workers shepherd the customers out, checking each floor as they move downwards. It is an example of a plan that works.
 
However, the fire service is unaware that the store has not applied fire-protective cladding to the main spars, as recommended three years earlier following the replacement of a ceiling. It is recorded on the Fire Service 1.1d as a risk.
 
Firefighters are trapped, and one is killed, when the upper floor collapses. Fire investigation work never proved that the lack of structural protection caused the collapse, but anecdotal discussion has always held that the crews would never have been deployed had that risk been known. It was known; it just wasn't visible.
 
Woolworths' reputation in the UK was in tatters. As a consequence of the poor publicity caused by these events and a significant increase in insurance costs, its parent group decided to sell its UK operation to the Paternoster group (later Kingfisher).
 
The Woolworths case is an example of evolving risk: the risk to the public was reduced by proactive work in staff training and physical fire protection, while the risk to the responders, the building and eventually the business was increased by the failure to undertake the recommended remedial construction.
 
Today's risk and crisis management tend to compartmentalise risk into natural, artificial and social. In the Woolworths example, the fire would be 'natural', albeit human-caused; the 'artificial' would be the failure to adopt the fire protection advised; and the 'social' would be the damage to reputation. However, this compartmentalisation is not appropriate. As can be seen, all three are in evidence in the one incident, yet the consequences are challenging to imagine.
 
Maybe this is where software and machine learning can play their part; the machine learns that a maintenance issue has consequences and makes them known.
 
Let's look briefly at the history leading to the Grenfell fire.
 
It had been known for some time that cladding spreads fire; the Lakanal House fire in 2009 was attributed to refurbishment work that included similar materials to those used at Grenfell. Yet 'remain in place' remained in place, despite many in the Fire Service knowing it was wrong.
 
It was the same for Summerland in 1973, Knowsley Heights in 1991 and others.
 
Would a centralised database of buildings and risks be an advantage? Something like a Wikipedia for structures and public spaces, showing all the dangers and defects in a facility? Establishments and public spaces could be classified by machine learning, and fire risk assessors (adequately trained and certified) could extend their purpose to cover all risks, especially in public spaces. Responsible people could add risks to the database, as could pre-flight checks, and the facility manager would have to answer or clear each one.
 
The database would then be made visible to emergency services on demand.
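
As a very rough sketch of what one record in such a database might hold – the field names, statuses and methods below are my own assumptions for illustration, not a proposed standard:

    # Sketch of the 'Wikipedia for structures' idea: each facility carries a
    # list of risks, each of which must be answered or cleared by the facility
    # manager, and the open items can be shown to emergency services on demand.
    # Field names and statuses are assumptions for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class Risk:
        description: str
        source: str           # e.g. "assessor", "pre-flight", "responsible person"
        status: str = "open"  # "open", "answered" or "cleared"

    @dataclass
    class Facility:
        name: str
        classification: str
        risks: list = field(default_factory=list)

        def add_risk(self, description, source):
            self.risks.append(Risk(description, source))

        def clear_risk(self, risk, note):
            risk.status = "cleared"
            risk.description += f" (cleared: {note})"

        def open_risks(self):
            # The view an attending emergency service would be given on demand.
            return [r for r in self.risks if r.status == "open"]

    store = Facility("Wimbledon Broadway store", "retail / public space")
    store.add_risk("fire-protective cladding to main spars not fitted", "assessor")
    print([r.description for r in store.open_risks()])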
 
Machines can also classify incidents and 'fork' each phase as necessary when triggers occur. For example, a flood is natural and might be defined as an emergency, a disaster or a crisis. It could have an artificial element, such as a broken pipe, or, in the case of government, a social component, forked again into rehousing and publicity.
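
A minimal sketch of that forking, using the flood example above – the categories, severities and triggers are illustrative assumptions rather than a formal taxonomy:

    # Sketch of forking an incident into further elements as triggers occur.
    # Categories and severities here are illustrative, not a formal taxonomy.
    incident = {
        "type": "flood",
        "category": "natural",
        "severity": "emergency",   # could later escalate to disaster or crisis
        "forks": [],
    }

    def fork(incident, element, category):
        # A trigger (a new report, a sensor, an operator) adds a forked element.
        incident["forks"].append({"element": element, "category": category})

    fork(incident, "broken pipe feeding the flood", "artificial")
    fork(incident, "rehousing of displaced residents", "social")
    fork(incident, "publicity and public information", "social")

    for f in incident["forks"]:
        print(f["category"], "->", f["element"])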
 
But how to classify these incidents? What about the fire precautions work required at Woolworths Wimbledon Broadway?
 
Is maintenance an incident? And is it an emergency, disaster or crisis? What is the effect on the business?
 
No doubt it was expensive, and that was probably a factor in the management's decision-making process at the time; the work was probably pushed aside because of the business's advances in its evacuation process. I know that many companies and governments delay this type of work where other factors appear to offset it. I have first-hand experience of a business that actively expected to lose one or two stores a year to fire and did nothing to prevent it.
 
In my view, maintenance is a risk and must be planned, no matter how inconsequential, and it should be for the directors to decide whether the risk is acceptable. A reasonably well-trained risk assessor should be able to identify and mark a maintenance item as non-critical, emergency, disaster or crisis, as can software. By adding consequence (threat), an algorithm can assist in identifying those items that need urgent attention – the evolving risk.
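
A very simple sketch of such an algorithm – the severity weightings, consequence scale and threshold below are illustrative assumptions only:

    # Sketch of scoring maintenance items by severity and consequence so that
    # the items needing urgent attention (the evolving risk) float to the top.
    # Weightings, consequence scale and threshold are illustrative only.
    SEVERITY = {"non-critical": 1, "emergency": 2, "disaster": 3, "crisis": 4}

    def score(item):
        # Consequence (threat) multiplies the assessor's severity grading.
        return SEVERITY[item["severity"]] * item["consequence"]

    def urgent(items, threshold=6):
        return sorted((i for i in items if score(i) >= threshold),
                      key=score, reverse=True)

    items = [
        {"name": "structural fire protection to main spars not fitted",
         "severity": "disaster", "consequence": 3},
        {"name": "security team radios not working",
         "severity": "emergency", "consequence": 3},
        {"name": "flickering light in stockroom",
         "severity": "non-critical", "consequence": 1},
    ]

    for item in urgent(items):
        print(score(item), item["name"])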
 
If we go back to the 2017 Manchester Arena bomb, a pre-flight (event) plan would have identified that the radios issued to the security staff were not working. Poor communication is a maintenance event and has a consequence that would raise the threat level. It is unlikely that any multiagency sandbox exercise would have picked this up, but it would have been highlighted via the pre-flight plan and reported to the event manager. 
 
Mitigation might have been a request for better radios, more police presence, or shepherding visitors away from exiting via the concourse. Of course, the reported risk (had it reached the SCR) would have required action.
 
It is worth noting that significant strides have been made in machine learning and object classification, and in the analysis of metadata to create a risk. Adding this technology to existing CCTV is now possible. A person carrying a rucksack into an area unrelated to the transportation hub would have caused an alert, because the obvious question would be: why a rucksack? This is precisely the questioning that prompted the member of the public who raised concerns; for a long time, we have been told to act if something doesn't feel right. Machines can do this as well.
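
As a sketch of the kind of rule such analytics might apply – the detection format, zone names and camera identifier are assumed purely for illustration:

    # Sketch of a rule applied to object detections from CCTV analytics,
    # raising an alert when an object appears in a zone where it has no
    # obvious reason to be. The detection format and zone names are assumed.
    ZONES_WHERE_RUCKSACK_EXPECTED = {"station concourse", "platform access"}

    def assess(detection):
        """detection: {'object': str, 'zone': str, 'camera': str}"""
        if (detection["object"] == "rucksack"
                and detection["zone"] not in ZONES_WHERE_RUCKSACK_EXPECTED):
            return {"alert": True,
                    "question": "Why a rucksack here?",
                    "camera": detection["camera"]}
        return {"alert": False}

    print(assess({"object": "rucksack",
                  "zone": "arena mezzanine",
                  "camera": "CAM-14"}))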
 
But there is no doubt that the control room operators would have required some guidance on what was expected of them in such a case and, again, technology provides the answer. Easily accessed, predetermined rules allow a plan to flow without hesitation.
 
Planning for any incident is essential. It needs to be proportionate to the risk, and there are times when nothing you do will be enough. That's expected and is part of the learning process. As long as it is 'learned'. And that is the point of the machine in the process. We tend to forget as time passes. A machine does not.

There are similarities between Hillsborough and Wembley in July 2021; early reports indicate that Wembley could so very nearly have become another Hillsborough had the venue held a capacity crowd. I want to ask whether counter-terrorism officers' social media intelligence found an anarchist plan to breach the perimeter (as reported). Was that evolving risk passed to the command and control team? If so, what did they do about it?
 
It isn't the first time that evolving risk has had an impact. 
 
Bradford Football Stadium (1985) – accumulating rubbish, maintenance and poor management of exits caused the deaths of 56 people. Lesson learned? No.
 
The King's Cross fire (1987) was a catalogue of mistakes and maintenance issues that caused a flashover and 31 deaths. Many of the potential problems involved in the flashover had been noted and ignored – the operations director had detailed the risk of fire from 22 layers of paint, and the risk of rubbish accumulating beneath the escalator had likewise been recorded. The similarities to the event in Bradford are now apparent. It is entirely feasible for software trained to the task to recognise the threat of accumulated rubbish immediately.
 
I am as firmly behind the efforts of survivors against terror as anyone can be. I support Martyn's Law.
 
However, the responsibility discussed must extend to all incidents and all planning, and to that of the responsible government department. There has been improvement in equipment supply owing to inquests and inquiries – defined recently as 'symbolic readiness' – including the addition of CCTV and intruder alarms. But what is the purpose of this in an active incident, other than the recovery of images post-attack, unless the CCTV is operationally managed?
 
Unfortunately, there remains no impetus towards simulation training in planning for any public space, despite the necessity, no focus on pre-flight checks or post-incident learning (the so-called lucky escape), and absolutely no focus on evolving risk.
 
Addressing these failures must surely be a high priority for any crisis manager.
