Today, within minutes of a disaster, the public begins to self-manage its response via Facebook, Twitter, OpenStreetMap, and other social media systems. But emergency responders and business continuity managers are still taught the “command and control” response model, which runs counter to this reality. The result is that these social communities are left on the sidelines of formal disaster response: public and formal response systems run independently of one another, and never the twain shall meet.
To start closing the gulf between response groups, we set up a social media mock disaster exercise at the recent Continuity Insights Management Conference. The goal was to give the response community a taste of public response to disasters, using Facebook as a tool to coordinate response. Unlike the one or two earlier attempts at such an exercise, we made ours as real as possible by using a live Facebook page, with profiles and Twitter feeds.
We first created the Facebook page “Mock Disaster--Wisconsin Cheesemakers” that represented a hypothetical company in “Packerville, Wisconsin.” We then created 10 Facebook profiles, each representing a company employee, such as “Business Continuity Manager” and “Delivery Truck Driver.”
We set up 10 tables, with one profile per table. Each profile had wants (e.g., “Find out if my son in school is OK”) and resources (e.g., “I own a chainsaw”). Participants took on the persona of their table's profile and, through interactions on Facebook, acted out their response to a tornado that hit the company, just as in a real disaster.
We fed “injects” into the event via Twitter; participants downloaded Twitter monitoring software before the exercise. Someone at each table was assigned to monitor Twitter for injects and relay them to the others at the table. The hashtag #MOCKDISASTER preceded each tweet so that members of the public would not be alarmed by the tweets.
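To make that monitoring role concrete, here is a minimal Python sketch of the filter-and-relay step each table performed by hand. It assumes tweets arrive as plain strings from whatever monitoring tool the table downloaded; the sample tweets are invented for illustration.

```python
# Sketch of the inject-monitoring role: filter the incoming tweet stream
# down to exercise injects and relay them. print() stands in for reading
# the inject aloud to the rest of the table.

EXERCISE_TAG = "#MOCKDISASTER"

def relay_injects(tweets):
    """Pass along only tweets marked as exercise injects."""
    for tweet in tweets:
        if tweet.upper().startswith(EXERCISE_TAG):
            # Strip the marker so the table sees only the scenario text.
            print("INJECT:", tweet[len(EXERCISE_TAG):].strip())

# Example feed mixing exercise injects with ordinary public tweets.
relay_injects([
    "#MOCKDISASTER Tornado sighted two miles west of the plant.",
    "Great cheese curds at the Packerville diner today!",
    "#MOCKDISASTER Power is out on the factory floor.",
])
```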
In essence, each table created a mini “Virtual Operations Support Team,” both gathering and sending information through the systems and agreeing on a response.
Needless to say, the experience was new to all participants and we learned a lot. Here are some of the lessons learned:
1. Teach People How to Use the Technology Well in Advance
People who grew up in the pre-Internet age need time to be set up on the systems and taught how to use them. Public response will be almost instant; people will be tweeting about the disaster as it happens. You don't want to spend hours downloading monitoring software and figuring out how to use it while others are managing the event without you. We will cover the various systems that need to be monitored, and how to use them, later in this series.
2. Develop Layers of Communication Protocols
Communication during a disaster is best thought of as concentric circles, each with internal systems for speaking within the circle and external systems for speaking with other circles. The innermost circle is the response team, which might use SMS, email, and a document-sharing and editing system such as Google Docs or Dropbox to communicate among themselves (again, these systems will be explained later in the series). The second circle is the employees, who might communicate via an enterprise social networking solution such as Jive or Yammer. Finally, communication with the public is best done via Facebook, Twitter, Google+, and the like.
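One way to picture the layering is as a simple routing table that maps each circle to its agreed channels. The Python sketch below uses the tools named above purely as placeholders; a real plan would list whatever systems the organization has actually set up and rehearsed.

```python
# Illustrative routing table for the three concentric circles described above.
# Channel assignments mirror the examples in the text and are placeholders,
# not recommendations.

CHANNELS = {
    "response_team": ["SMS", "email", "Google Docs"],    # innermost circle
    "employees":     ["Yammer"],                         # enterprise social network
    "public":        ["Facebook", "Twitter", "Google+"], # outermost circle
}

def channels_for(audience):
    """Return the agreed communication systems for a given circle."""
    return CHANNELS.get(audience, [])

print(channels_for("employees"))  # ['Yammer']
```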
3. Information Comes Faster than Expected
Participants found that information comes fast and furious; most were overwhelmed by the speed of the flow, even though only ten profiles were active at once. In a real disaster, thousands of people will be using the systems. A system is needed for keeping up with the information and feeding it up, down, and across organizations. Again, this requires an intimate knowledge of how each system works. For instance, tweets arrive chronologically, which makes new ones easy to spot, but Facebook responses appear under the original postings, which may sit many pages below the most recent activity. Without understanding how this works, postings are easy to miss.
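As a sketch of what “keeping up” might mean in practice, the Python below normalizes posts from differently ordered systems into one time-sorted review queue, so a threaded Facebook reply can no longer hide pages below the newest activity. The record format and sample posts are hypothetical.

```python
from datetime import datetime

# Hypothetical posts from systems with different orderings: chronological
# tweets versus Facebook replies threaded under old postings.
posts = [
    {"source": "Twitter",  "time": datetime(2013, 4, 22, 10, 5), "text": "Roof damage on Building B"},
    {"source": "Facebook", "time": datetime(2013, 4, 22, 10, 2), "text": "Reply under an old post: delivery driver is safe"},
    {"source": "Twitter",  "time": datetime(2013, 4, 22, 10, 7), "text": "Wallatoola River rising"},
]

# Merge everything into a single time-sorted queue for review.
for post in sorted(posts, key=lambda p: p["time"]):
    print(post["time"].strftime("%H:%M"), post["source"], "-", post["text"])
```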
4. Create a Method for Connecting the Dots
Two separate reports were sent to intelligence agencies before 9/11 warning of Arab nationals enrolled in U.S. flight schools who were behaving oddly. By itself, each was just one of thousands of leads that go nowhere. Together, they showed a pattern. Pattern recognition is the key to understanding a situation.
The speed and diversity of incoming information made connecting the dots a challenge. For instance, the set-up stated that lovely Packerville, Wisconsin was home to “Lafollette College, with 2,000 on-campus students.” An inject announced that classes were cancelled due to the tornado. Finally, the company announced that the Wallatoola River was rising and needed to be sandbagged. Nobody connected the dots to see that 2,000 bright-eyed, bushy-tailed college students were sitting around town with nothing to do, able to lend a hand. Maybe it was the ingrained assumption that response is managed with one's own resources, not with community resources. Maybe the groups were simply not geared up to connect the dots. Either way, the emergency response and business continuity community needs to change its methods and mindset if it is to get into the modern game of emergency response.
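As a thought experiment, here is what a first pass at dot-connecting could look like in software: tag each report with what it needs or offers, then match on shared tags. The facts paraphrase the exercise injects; the tagging scheme is invented and deliberately naive, meant only to show the idea.

```python
# Toy dot-connecting: match announced needs against idle resources by shared
# tags. Tags are assigned by hand here; in practice a team would tag reports
# as they are logged.

needs = [
    {"what": "Sandbag the rising Wallatoola River", "tags": {"labor"}},
]
resources = [
    {"what": "2,000 Lafollette College students, classes cancelled", "tags": {"labor"}},
    {"what": "Chainsaw owned by the delivery truck driver", "tags": {"debris"}},
]

for need in needs:
    for res in resources:
        if need["tags"] & res["tags"]:
            print("MATCH:", res["what"], "->", need["what"])
```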
For more images from the Social Media Tabletop Exercise, go to: http://www.facebook.com/media/set/?set=a.351765071552677.82311.348967925165725&type=1
Orlando will conduct a one-day pre-conference workshop on Creating & Running A Virtual Operations Support Team (VOST) at the 11th Annual Continuity Insights Management Conference, April 22-24, 2013 at the Sheraton San Diego Hotel & Marina. See http://www.cimanagementconference.com/pre-post-conference-workshops for more information.
The next article in the series focuses on Structured Networks & Self-Coordinated Disaster Response.