By John Orlando

This article is the fourth installment in Orlando's series on virtual operations support teams (VOSTs). The first installment, Lessons Learned From The Social Media Tabletop Exercise, is available here. The second installment, Structured Networks & Self-Coordinated Disaster Response, is available here. The third installment, Harnessing The Wisdom Of Crowds, is available here.

Crisis Information From Social Media

In recent months, much coverage has been devoted to instances of misinformation on social media during a crisis. Most notably, Shashank Tripathi, the campaign manager for a New York State Senate candidate, tweeted politically motivated misinformation about Hurricane Sandy. His candidate got creamed in the election, by the way.

These reports might lead some to think that social media should not be used to gather information during a crisis, but this would be a mistake for a number of reasons:

  1. No source is infallible. Mass media outlets routinely get it wrong. Tripathi’s false claims were immediately questioned on Twitter, but were spread by the mass media. Similarly, mass media spread a number of false reports during the Sandy Hook incident, and even official reports routinely get information wrong. [1]
  2. No information is also dangerous. As Patrick Meier put it, “False information can cost lives. But no information can also cost lives, especially in a crisis zone. Indeed, information is perishable so the potential value of information must be weighed against the urgency of the situation. Correct information that arrives too late is useless.” [2]
  3. Social media can correct false information from other sources: Jeanette Sutton demonstrated that crises breed a network of fact-checkers who crop up around the world and patrol social networks to correct misinformation provided by social media, the mass media and official reports. In one case they corrected misinformation about the Tennessee Coal Pond disaster that was provided by official sources, leading those sources to retract the claims. [3]
  4. Social media tends to self-correct: Study after study, including one carried out by the Defense Advanced Research Projects Agency, demonstrates that social networks of all varieties tend to self-correct for misinformation. [4] As one commentator put it:

    “A redeeming feature of Twitter is the relative speed with which its users manage to sniff out and debunk the most widely circulated falsehoods. On Friday, for instance, word that the media had fingered the wrong suspect was circulating on Twitter while TV networks were still running with the false reports. The New Yorker's Sasha Frere-Jones has called the site a ‘self-cleaning oven.’ In Sandy's wake, Buzzfeed's John Herrman declared it a ‘truth machine.’” [5]

    While cynics assume that social media must amplify false reports, research suggests the opposite. A study of over 4,700,000 tweets related to the earthquake in Chile found that:

    “About 95% of tweets related to confirmed reports validated that information. In contrast, only 0.03% of tweets denied the validity of these true cases. … [Meanwhile], about 50% of tweets will deny the validity of false reports.” [6]

    Most people are not trying to mislead during a disaster, and the vast majority of good intentions will drown out the small number of bad intentions.
  5. Social media provides more information: Both official and mass media accounts by necessity summarize information, causing the loss of valuable detail. If you live near a wildfire, you are less interested in the total number of acres burned than where the fire is in relation to your street. That’s why crowdmapping has become such a valuable tool for disaster response -- it preserves the millions of details about the contours of the situation that are critical to decision-making.
  6. We are already doing it: Patrick Meier points out that emergency responders already crowdsource information through 911. Responders do not even verify the information from a 911 call before responding; they just go. Social media analytics is 911 writ large, with the added ability to gather much more information and to verify that information, as we will see below.

Authenticating Reports

While a number of commentators have focused on the question of how to verify information from social media, the considerations above show that the question is also “How to use social media to verify information from all sources -- social media, mass media, official, etc.?” Imagine that you are a fire chief with 20 firefighters positioned along a ridge fighting a wildfire. One official source reports that they are not in harm’s way, but suddenly you are flooded with hundreds of independent reports from citizens claiming to see that the fire has circled around and threatens to engulf your firefighters. What would you do?

In fact, the real question is “How to gather, authenticate, and integrate information from a variety of sources -- citizen, mass media, official, etc. -- to develop situational awareness?” This is the major new task of disaster managers, and the virtual operations support team (VOST) has arisen to do just that.

We will learn how to set up and run a VOST in the pre-conference workshop at the 11th Annual Continuity Insights Management Conference in April 2013. See the conference website for more information.

More importantly, we will learn how to gather and authenticate information from a variety of sources during an incident. A host of research is emerging on how to authenticate information from different sources during a disaster, and we will apply these findings to a real crisis that is occurring somewhere in the world during the workshop.

Participants will practice gathering information from a variety of sources, applying verification criteria, categorizing and aggregating the information, and then evaluating it to form a picture of the situation. This will not only provide participants with experience using the tools, but also experience in verification techniques that they would apply to their own crisis management.

Verification techniques fall into two categories: reliability of the source and outside confirmation. Within them are principles developed from a variety of studies and real-life applications. For instance, the Standby Task Force is a 700-person group of volunteers from around the world who assist the United Nations in responding to events by gathering, organizing and verifying information from a variety of sources, including social media. They have developed fairly sophisticated principles and procedures for judging the credibility of information. We will use many of these principles in our own mock VOST. These principles include: [7]

  • Location of the source: Eyewitness or far away? Is it a retweet or an original post?
  • Type of source: Journalist, ordinary citizen, diplomat, etc.?
  • Language used: URLs provided, positive vs. negative language, profanity, adjectives, grammar, etc.
  • Quantity of information on the source: The Standby Task Force asks questions such as “Does the source provide a name, picture, bio and any links to their own blog, identity, profession … does searching for this name on Google provide clues to the person’s identity? Perhaps a Facebook page, a professional email address, a LinkedIn profile?”
  • Followers: How many followers does the source have? Are the followers in the affected area, indicating a relation to the scene (and thus care for those who are reading the information, making it more likely to be genuine)? How many people does the source follow? What type of people are they?
  • Timing of the information: Is the information in real-time or delayed, and is the timing suspicious?
  • Visual evidence: Is there a photo or video accompanying the report that can be evaluated?

Outside confirmation includes:

  • Number of independent reports: As I discussed in an earlier article, the independence of reports is critical to creating “the wisdom of crowds.” Many eyes independently reporting on a situation have been shown to create a highly accurate collective picture of the event. Mob mentality is created when only a few voices are heard and influence the opinions of others sequentially. Independence is a major criterion for validation.
  • Coherence with reports from the same area: Is the report consistent with other reports from the same area?
  • Can others verify?: One simple crowdsourcing tool is to feed a report back into the system and ask if others can produce similar reports.
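As a concrete illustration, the two checklists above can be folded into a simple scoring heuristic. The sketch below is hypothetical: the field names, weights, and thresholds are my own illustration of how these criteria might be encoded, not an official Standby Task Force formula.

```python
# Hypothetical credibility-scoring sketch for a social media report.
# All fields, weights, and caps below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Report:
    is_eyewitness: bool = False          # location of the source
    is_original: bool = True             # original post vs. retweet
    source_has_profile: bool = False     # name, bio, links, searchable identity
    followers_in_area: int = 0           # followers located near the event
    has_media: bool = False              # photo/video that can be evaluated
    independent_corroborations: int = 0  # similar reports from other sources
    consistent_with_area: bool = False   # coheres with other reports nearby

def credibility_score(r: Report) -> float:
    """Return a rough 0-1 score; the weights are purely illustrative."""
    score = 0.0
    # Reliability-of-source signals.
    score += 0.15 if r.is_eyewitness else 0.0
    score += 0.10 if r.is_original else 0.0
    score += 0.15 if r.source_has_profile else 0.0
    score += min(r.followers_in_area, 50) / 50 * 0.10
    score += 0.10 if r.has_media else 0.0
    # Outside-confirmation signals: independent reports weigh most heavily,
    # mirroring the "wisdom of crowds" point above.
    score += min(r.independent_corroborations, 5) / 5 * 0.30
    score += 0.10 if r.consistent_with_area else 0.0
    return round(score, 2)

weak = Report()  # anonymous retweet-era default: original but unconfirmed
strong = Report(is_eyewitness=True, source_has_profile=True,
                followers_in_area=40, has_media=True,
                independent_corroborations=4, consistent_with_area=True)
print(credibility_score(weak), credibility_score(strong))
```

Giving independent corroboration the single largest weight reflects the point above that many independent eyes, not any one source attribute, are what make a collective picture trustworthy.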

A world of resources is now available to the emergency manager for gathering information about a crisis and creating a picture of unfolding events. These resources can be overwhelming to the business continuity or disaster manager, but using them simply requires training and practice. Please join us to learn how they can be applied within your organization.


[1] “Beyond Sandy Hook: Why it's OK for the media to be wrong (for a while),” Chris Seper.

[2] “Information Forensics: Five Case Studies on How to Verify Crowdsourced Information from Social Media,” Patrick Meier, iRevolution.

[3] “Twittering Tennessee: Distributed Networks and Collaboration Following a Technological Disaster,” Jeanette Sutton.

[4] DARPA Network Challenge Project Report.

[5] “Building a Better Truth Machine,” Will Oremus, Slate, December 14, 2012.

[6] “Analyzing the Veracity of Tweets during a Major Crisis,” Patrick Meier, iRevolution, September 19, 2010.

[7] This sampling of principles is drawn from a variety of sources, especially “Verifying Crowdsourced Social Media Reports for Live Crisis Mapping: An Introduction to Information Forensics,” Patrick Meier, iRevolution. The quoted questions in the list above are also from that work.