Tag Archives: Information

Quantifying Information Flow During Emergencies

I was particularly pleased to see this study appear in the top-tier journal, Nature. (Thanks to my colleague Sarah Vieweg for flagging it). Earlier studies have shown that “human communications are both temporally & spatially localized following the onset of emergencies, indicating that social propagation is a primary means to propagate situational awareness.” In this new study, the authors analyze crisis events using country-wide mobile phone data, including the communication patterns of mobile phone users outside the affected area. So the question driving this study is this: how do the communication patterns of non-affected mobile phone users differ from those of affected users? Why ask this question? Because understanding the communication patterns of mobile phone users outside the affected areas sheds light on how situational awareness spreads during disasters.

Nature graphs

The graphs above (click to enlarge) depict the change in call volume for three crisis events and one non-emergency event for the two types of mobile phone users. The set of users directly affected by a crisis is labeled G0 while the users they contact during the emergency are labeled G1. Note that G1 users are not themselves affected by the crisis. Since the study seeks to assess how G1 users change their communication patterns following a crisis, one logical question is this: does the call volume of G1 users increase like that of G0 users? The graphs above reveal that G1 and G0 users have instantaneous and corresponding spikes for crisis events. This is not the case for the non-emergency event.

“As the activity spikes for G0 users for emergency events are both temporally and spatially localized, the communication of G1 users becomes the most important means of spreading situational awareness.” To quantify the reach of situational awareness, the authors study the communication patterns of G1 users after they receive a call or SMS from the affected set of G0 users. They find 3 types of communication patterns for G1 users, as depicted below (click to enlarge).

Nature graphs 2

Pattern 1: G1 users call back G0 users (orange edges). Pattern 2: G1 users call forward to G2 users (purple edges). Pattern 3: G1 users call other G1 users (green edges). Which of these 3 patterns is most pronounced during a crisis? Pattern 1, call backs, constitutes 25% of all G1 communication responses. Pattern 2, call forwards, constitutes 70% of communications. Pattern 3, calls between G1 users, represents only 5% of all communications. This means that the spikes in call volumes shown in the above graphs are overwhelmingly driven by Patterns 1 and 2: call backs and call forwards.
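The pattern tallies above boil down to a simple classification of each G1-originated call by who the recipient is. Here is a minimal sketch of that logic; the event format and user labels are illustrative, not the study's actual dataset or code.

```python
from collections import Counter

def classify_pattern(caller, callee, g0, g1):
    """Classify a call placed by a G1 user into one of the three patterns."""
    if caller not in g1:
        return None            # only G1-originated calls are classified
    if callee in g0:
        return "callback"      # Pattern 1: back to affected G0 users
    if callee in g1:
        return "lateral"       # Pattern 3: between G1 users
    return "forward"           # Pattern 2: outward to new (G2) users

g0 = {"a", "b"}                # users directly affected by the crisis
g1 = {"x", "y", "z"}           # their contacts during the emergency
calls = [("x", "a"), ("y", "q"), ("z", "x"), ("x", "r")]

counts = Counter(classify_pattern(c, e, g0, g1) for c, e in calls)
# counts -> {"forward": 2, "callback": 1, "lateral": 1}
```

Applied to a full call-detail-record dataset, the relative shares of the three labels would correspond to the 25% / 70% / 5% split reported in the study.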

The graphs below (click to enlarge) show call volumes by communication patterns 1 and 2. In these graphs, Pattern 1 is the orange line and Pattern 2 the dashed purple line. In all three crisis events, Pattern 1 (call backs) has clear volume spikes. “That is, G1 users prefer to interact back with G0 users rather than contacting with new users (G2), a phenomenon that limits the spreading of information.” In effect, Pattern 1 is a measure of reciprocal communications and indeed social capital, “representing correspondence and coordination calls between social neighbors.” In contrast, Pattern 2 measures the “dissemination of situational awareness, corresponding to information cascades that penetrate the underlying social network.”

Nature graphs 3

The histogram below shows average levels of reciprocal communication for the 4 events under study. These results clearly show a spike in reciprocal behavior for the three crisis events compared to the baseline. The opposite is true for the non-emergency event.

Nature graphs 4

In sum, a crisis early warning system based on communication patterns should seek to monitor changes in the following two indicators: (1) Volume of Call Backs; and (2) Deviation of Call Backs from baseline. Given that access to mobile phone data is near-impossible for the vast majority of academics and humanitarian professionals, one question worth exploring is whether similar communication dynamics can be observed on social networks like Twitter and Facebook.
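Those two indicators could be monitored with very little machinery: track call-back volume over a rolling window and flag large deviations from a pre-event baseline. The sketch below uses a z-score threshold; the threshold and the hourly numbers are my own assumptions, not values from the study.

```python
from statistics import mean, stdev

def callback_anomaly(baseline_counts, current_count):
    """Return the current call-back volume and its deviation (z-score)
    from the baseline distribution."""
    mu, sigma = mean(baseline_counts), stdev(baseline_counts)
    z = (current_count - mu) / sigma if sigma else 0.0
    return current_count, z

baseline = [100, 110, 95, 105, 102]     # hourly call-backs on normal days
volume, z = callback_anomaly(baseline, 240)

# Flag a possible emergency when the deviation is extreme
alert = z > 3.0
```

A real early warning system would also need to handle daily and weekly seasonality in call volumes, which this two-line baseline ignores.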


Can Official Disaster Response Apps Compete with Twitter?

There are over half-a-billion Twitter users, with an average of 135,000 new users signing up on a daily basis (1). Can emergency management and disaster response organizations win over some Twitter users by convincing them to use their apps in addition to Twitter? For example, will FEMA’s smartphone app gain as much “market share”? The app’s new crowdsourcing feature, “Disaster Reporter,” allows users to submit geo-tagged disaster-related images, which are then added to a public crisis map. So the question is, will more images be captured via FEMA’s app or from Twitter users posting Instagram pictures?

fema_app

This question is perhaps poorly stated. While FEMA may not get millions of users to share disaster-related pictures via their app, it is absolutely critical for disaster response organizations to explicitly solicit crisis information from the crowd. See my blog post “Social Media for Emergency Management: Question of Supply and Demand” for more information on the importance of demand-driven crowdsourcing. The advantage of soliciting crisis information via a smartphone app is that the sourced information is structured and thus easily machine readable. For example, the pictures taken with FEMA’s app are automatically geo-tagged, which means they can be automatically mapped if need be.

While many more pictures may be posted on Twitter, these may be more difficult to map. The vast majority of tweets are not geo-tagged, which means more sophisticated computational solutions are necessary. Instagram pictures are geo-tagged, but this information is not publicly available. So smartphone apps are a good way to overcome these challenges. But we shouldn’t overlook the value of pictures shared on Twitter. Many can be geo-tagged, as demonstrated by the Digital Humanitarian Network’s efforts in response to Typhoon Pablo. Moreover, about 40% of pictures shared on Twitter in the immediate aftermath of the Oklahoma Tornado had geographic data. In other words, while the FEMA app may have 10,000 users who submit a picture during a disaster, Twitter may have 100,000 users posting pictures. And while only 40% of the latter pictures may be geo-tagged, this would still mean 40,000 pictures compared to FEMA’s 10,000. Recall that over half-a-million Instagram pictures were posted during Hurricane Sandy alone.
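The back-of-the-envelope arithmetic above is worth making explicit: a much lower geo-tagging rate can still yield more mappable pictures when the crowd is large enough. The numbers below are the illustrative ones used in the text, not measured figures.

```python
# Dedicated app: fewer users, but every photo is auto-geo-tagged
app_users, app_geotag_rate = 10_000, 1.0

# Twitter: far more users, but only a fraction of pictures carry location
twitter_users, twitter_geotag_rate = 100_000, 0.4

app_yield = int(app_users * app_geotag_rate)            # 10,000 mappable
twitter_yield = int(twitter_users * twitter_geotag_rate)  # 40,000 mappable
```

Even at a 40% geo-tag rate, the larger platform produces four times as many mappable pictures in this scenario.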

The main point, however, is that FEMA could also solicit pictures via Twitter and ask eyewitnesses to simply geo-tag their tweets during disasters. They could also speak with Instagram and perhaps ask them to share geo-tag data for solicited images. These strategies would render tweets and pictures machine-readable and thus automatically mappable, just like the pictures coming from FEMA’s app. In sum, the key issue here is one of policy, and the best solution is to leverage multiple platforms to crowdsource crisis information. The technical challenge is how to deal with the high volume of pictures shared in real-time across multiple platforms. This is where microtasking comes in and why MicroMappers is being developed. For tweets and images that are not automatically geo-tagged, MicroMappers includes a microtasking app specifically developed to crowdsource the manual tagging of images.

In sum, there are trade-offs. The good news is that we don’t have to choose one solution over the other; they are complementary. We can leverage both a dedicated smartphone app and very popular social media platforms like Twitter and Facebook to crowdsource the collection of crisis information. Either way, a demand-driven approach to soliciting relevant information will work best, both for smartphone apps and social media platforms.


Tweets, Crises and Behavioral Psychology: On Credibility and Information Sharing

How we feel about the content we read on Twitter influences whether we accept and share it—particularly during disasters. My colleague Yasuaki Sakamoto at the Stevens Institute of Technology (SIT) and his PhD students analyzed this dynamic more closely in their recent study entitled “Perspective Matters: Sharing of Crisis Information in Social Media”. Using a series of behavioral psychology experiments, they examined “how individuals share information related to the 9.0 magnitude earthquake, which hit northeastern Japan on March 11th, 2011.” Their results indicate that individuals were more likely to share crisis information (1) when they imagined that they were close to the disaster center, (2) when they were thinking about themselves, and (3) when they experienced negative emotions as a result of reading the information.

stevens1

Yasu and team are particularly interested in “the effects of perspective taking – considering self or other – and location on individuals’ intention to pass on information in a Twitter-like environment.” In other words: does empathy influence information sharing (retweeting) during crises? Does thinking of others in need eliminate the individual differences in perception that arise when thinking of one’s self instead? The authors hypothesize that “individuals’ information sharing decision can be influenced by (1) their imagined proximity, being close to or distant from the disaster center, (2) the perspective that they take, thinking about self or other, and (3) how they feel about the information that they are exposed to in social media, positive, negative or neutral.”

To test these hypotheses, Yasu and company collected one year’s worth of tweets posted by two major news agencies and five individuals following the Japan Earthquake on March 11, 2011. They randomly sampled 100 media tweets and 100 tweets produced by individuals, resulting in a combined sample of 200 tweets. Sampling from these two sources (media vs. user-generated) enables Yasu and team to test whether people treat the resulting content differently. Next, they recruited 468 volunteers from Amazon’s Mechanical Turk and paid them a nominal fee for their participation in a series of three behavioral psychology experiments.

In the first experiment, the “control” condition, volunteers read through the list of tweets and simply rated the likelihood of sharing a given tweet. The second experiment asked volunteers to read through the list and imagine they were in Fukushima. They were then asked to document their feelings and rate whether they would pass along a given message. Experiment three introduced a hypothetical person John based in Fukushima and prompted users to describe how each tweet might make John feel and rate whether they would share the tweet.

empathy

The results of these experiments suggest that, “people are more likely to spread crisis information when they think about themselves in the disaster situation. During disasters, then, one recommendation we can give to citizens would be to think about others instead of self, and think about others who are not in the disaster center. Doing so might allow citizens to perceive the information in a different way, and reduce the likelihood of impulsively spreading any seemingly useful but false information.” Yasu and his students also found that “people are more likely to share information associated with negative feelings.” Since rumors tend to evoke negativity, they spread more quickly. The authors entertain possible ways to manage this problem such as “surrounding negative messages with positive ones,” for example.

In conclusion, Yasu and his students consider the design principles that ought to be considered when designing social media systems to verify and counter rumors. “In practice, designers need to devote significant efforts to understanding the effects of perspective taking and location, as shown in the current work, and develop techniques to mitigate negative influences of unproved information in social media.”


For more on Yasu’s work, see:

  • Using Crowdsourcing to Counter False Rumors on Social Media During Crises [Link]

Using Rapportive for Source and Information Verification

I’ve been using Rapportive for several weeks now and have found the tool rather useful for assessing the trustworthiness of a source. Rapportive is an extension for Gmail that allows you to automatically visualize an email sender’s complete profile information right inside your inbox.

So now, when receiving emails from strangers, I can immediately see their profile picture, short bio, twitter handle (including latest tweets), links to their Facebook page, Google+ account, LinkedIn profile, blog, SkypeID, recent pictures they’ve posted, etc. As explained in my blog posts on information forensics, this type of meta-data can be particularly useful when assessing the importance or credibility of a source. To be sure, having a source’s entire digital footprint on hand can be quite revealing (as marketers know full well). Moreover, this type of meta-data was exactly what the Standby Volunteer Task Force was manually looking for when they sought to verify the identity of volunteers during the Libya Crisis Map project with the UN last year.

Obviously, the use of Rapportive alone is not a silver bullet to fully determine the credibility of a source or the authenticity of a source’s identity. But it does add contextual information that can make a difference when seeking to better understand the reliability of an email. I’d be curious to know whether Rapportive will be available as a stand-alone platform in the future so it can be used outside of Gmail: a simple web-based search box that allows one to search by email address, twitter handle, etc., and returns a structured profile of that individual’s entire digital footprint. Anyone know whether similar platforms already exist? They could serve as ideal plugins for platforms like CrisisTracker.

From Crowdsourcing Crisis Information to Crowdseeding Conflict Zones (Updated)

Friends Peter van der Windt and Gregory Asmolov are two of the sharpest minds I know when it comes to crowdsourcing crisis information and crisis response. So it was a real treat to catch up with them in Berlin this past weekend during the “ICTs in Limited Statehood” workshop. An edited book of the same title is due out next year and promises to be an absolute must-read for all interested in the impact of Information and Communication Technologies (ICTs) on politics, crises and development.

I blogged about Gregory’s presentation following last year’s workshop, so this year I’ll relay Peter’s talk on research design and methodology vis-a-vis the collection of security incidents in conflict environments using SMS. Peter and mentor Macartan Humphreys completed their Voix des Kivus project in the DRC last year, which ran for just over 16 months. During this time, they received 4,783 text messages on security incidents using the FrontlineSMS platform. These messages were triaged and rerouted to several NGOs in the Kivus as well as the UN Mission there, MONUSCO.

How did they collect this information in the first place? Well, they considered crowdsourcing but quickly realized this was the wrong methodology for their project, which was to assess the impact of a major conflict mitigation program in the region. (Relaying text messages to various actors on the ground was not initially part of the plan). They needed high-quality, reliable, timely, regular and representative conflict event-data for their monitoring and evaluation project. Crowdsourcing is obviously not always the most appropriate methodology for the collection of information—as explained in this blog post.

Peter explained the pros and cons of using crowdsourcing by sharing the framework above. “Knowledge” refers to the fact that only those who have knowledge of a given crowdsourcing project will know that participating is even an option. “Means” denotes whether or not an individual has the ability to participate. One would typically need access to a mobile phone and enough credit to send text messages to Voix des Kivus. In the case of the DRC, the size of subset “D” (no knowledge / no means) would easily dwarf the number of individuals comprising subset “A” (knowledge / means). In Peter’s own words:

“Crowdseeding brings the population (the crowd) from only A (what you get with crowdsourcing) to A+B+C+D: because you give phones&credit and you go to and inform the phoneholds about the project. So the crowd increases from A to A+B+C+D. And then from A+B+C+D one takes a representative sample. So two important benefits. And then a third: the relationship with the phone holder: stronger incentive to tell the truth, and no bad people hacking into the system.”

In sum, Peter and Macartan devised the concept of “crowdseeding” to increase the crowd and render that subset a representative sample of the overall population. In addition, the crowdseeding methodology they developed generated more reliable information than crowdsourcing would have and did so in a way that was safer and more sustainable.

Peter traveled to 18 villages across the Kivus and in each identified three representatives to serve as the eyes and ears of the village. These representatives were selected in collaboration with the elders and always included a female representative. They were each given a mobile phone and received extensive training. A code book was also shared which codified different types of security incidents. That way, the reps simply had to type the number corresponding to a given incident (or several numbers if more than one incident had taken place). Anyone in the village could approach these reps with relevant information which would then be texted to Peter and Macartan.

The table above is the first page of the codebook. Note that the numerous security risks of doing this SMS reporting were discussed at length with each community before embarking on the selection of 3 village reps. Each community voted to participate despite the risks. Interestingly, not a single village voted against launching the project. However, Peter and Macartan chose not to scale the project beyond 18 villages for fear that it would get the attention of the militias operating in the region.

A local field representative would travel to the villages every two weeks or so to individually review the text messages sent out by each representative and to verify whether these incidents had actually taken place by asking others in the village for confirmation. The fact that there were 3 representatives per village also made the triangulation of some text messages possible. Because the 18 villages were randomly selected as part of the randomized control trial (RCT) for the monitoring and evaluation project, the text messages were relaying a representative sample of information.
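With three reps per village, the triangulation step amounts to cross-checking incident codes across reporters. A minimal sketch of that idea follows; the incident codes, the report format, and the confirmation threshold are hypothetical, not taken from the Voix des Kivus codebook.

```python
from collections import Counter

def triangulate(reports, min_confirmations=2):
    """Keep incident codes reported by at least `min_confirmations`
    independent representatives in the same period."""
    counts = Counter(code for rep_codes in reports for code in set(rep_codes))
    return {code for code, n in counts.items() if n >= min_confirmations}

# Incident codes texted by a village's three reps during one reporting period
reports = [["07", "12"], ["07"], ["12", "07", "03"]]
confirmed = triangulate(reports)   # {"07", "12"}
```

Codes reported by only one rep (here "03") would still be followed up by the field representative's in-person verification rather than discarded outright.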

But what was the incentive? Why did a total of 54 village representatives from 18 villages send thousands of text messages to Voix des Kivus over a year and a half? On the financial side, Peter and Macartan devised an automated way to reimburse the cost of each text message sent on a monthly basis, and they provided an additional $1.5/month on top of that. The only ask they made of the reps was that each had to send at least one text message per week, even if that message had the code 00, which referred to “no security incident”.

The figure above depicts the number of text messages received throughout the project, which formally ended in January 2011. In Peter’s own words:

“We gave $20 at the end to say thanks but also to learn a particular thing. During the project we heard often: ‘How important is that weekly $1.5?’ ‘Would people still send messages if you only reimburse them for their sent messages (and stop giving them the weekly $1.5)?’ So at the end of the project [...] we gave the phone holder $20 and told them: the project continues exactly the same, the only difference is we can no longer send you the $1.5. We will still reimburse you for the sent messages, we will still share the bulletins, etc. While some phone holders kept on sending textmessages, most stopped. In other words, the financial incentive of $1.5 (in the form of phonecredit) was important.”

Peter and Macartan have learned a lot during this project, and I urge colleagues interested in applying their project to get in touch with them–I’m happy to provide an email introduction. I wish Swisspeace’s Early Warning System (FAST) had adopted this methodology before running out of funding several years ago. But the leadership at the time was perhaps not forward thinking enough. I’m not sure whether the Conflict Early Warning and Response Network (CEWARN) in the Horn has fared any better vis-a-vis demonstrated impact or lack thereof.

To learn more about crowdsourcing as a methodology for information collection, I recommend the following three articles:

Predicting the Future of Global Geospatial Information Management

The United Nations Committee of Experts on Global Geospatial Information Management (GGIM) recently organized a meeting of thought-leaders and visionaries in the geospatial world to identify the future of this space over the next 5-10 years. These experts came up with some 80+ individual predictions. I’ve included some of the more interesting ones below.

  • The use of Unmanned Aerial Vehicles (UAVs) as a tool for rapid geospatial data collection will increase.
  • 3D and even 4D geospatial information, incorporating time as the fourth dimension, will increase.
  • Technology will move faster than legal and governance structures.
  • The link between geospatial information and social media, plus other actor networks, will become more and more important.
  • Real-time info will enable more dynamic modeling & response to disasters.
  • Free and open source software will continue to grow as viable alternatives both in terms of software, and potentially in analysis and processing.
  • Geospatial computation will increasingly be non-human consumable in nature, with an increase in fully-automated decision systems.
  • Businesses and Governments will increasingly invest in tools and resources to manage Big Data. The technologies required for this will enable greater use of raw data feeds from sensors and other sources of data.
  • In ten years time it is likely that all smart phones will be able to film 360 degree 3D video at incredibly high resolution by today’s standards & wirelessly stream it in real time.
  • There will be a need for geospatial use governance in order to discern the real world from the virtual/modelled world in a 3D geospatial environment.
  • Free and open access to data will become the norm and geospatial information will increasingly be seen as an essential public good.
  • Funding models to ensure full data coverage even in non-profitable areas will continue to be a challenge.
  • Rapid growth will lead to confusion and lack of clarity over data ownership, distribution rights, liabilities and other aspects.
  • In ten years, there will be a clear dividing line between winning and losing nations, dependent upon whether the appropriate legal and policy frameworks have been developed that enable a location-enabled society to flourish.
  • Some governments will use geospatial technology as a means to monitor or restrict the movements and personal interactions of their citizens. Individuals in these countries may be unwilling to use LBS or applications that require location for fear of this information being shared with authorities.
  • The deployment of sensors and the broader use of geospatial data within society will force public policy and law to move into a direction to protect the interests and rights of the people.
  • Spatial literacy will not be about learning GIS in schools but will be more centered on increasing spatial awareness and an understanding of the value of understanding place as context.
  • The role of National Mapping Agencies as an authoritative supplier of high quality data and of arbitrator of other geospatial data sources will continue to be crucial.
  • Monopolies held by National Mapping Agencies in some areas of specialized spatial data will be eroded completely.
  • More activities carried out by National Mapping Agencies will be outsourced and crowdsourced.
  • Crowdsourced data will push National Mapping Agencies towards niche markets.
  • National Mapping Agencies will be required to find new business models to provide simplified licenses and meet the demands for more free data from mapping agencies.
  • The integration of crowdsourced data with government data will increase over the next 5 to 10 years.
  • Crowdsourced content will decrease cost, improve accuracy and increase availability of rich geospatial information.
  •  There will be increased combining of imagery with crowdsourced data to create datasets that could not have been created affordably on their own.
  • Progress will be made on bridging the gap between authoritative data and crowdsourced data, moving towards true collaboration.
  • There will be an accelerated take-up of Volunteer Geographic Information over the next five years.
  • Within five years the level of detail on transport systems within OpenStreetMap will exceed virtually all other data sources & will be respected/used by major organisations & governments across the globe.
  • Community-based mapping will continue to grow.
  • There is unlikely to be a market for datasets like those currently sold to power navigation and location-based services solutions in 5 years, as they will have been superseded by crowdsourced datasets from OpenStreetMaps or other comparable initiatives.

Which trends have the experts missed? Do you think they’re completely off on any of the above? The full set of predictions on the future of global geospatial information management is available here as a PDF.

Does the Humanitarian Industry Have a Future in The Digital Age?

I recently had the distinct honor of being on the opening plenary of the 2012 Skoll World Forum in Oxford. The panel, “Innovation in Times of Flux: Opportunities on the Heels of Crisis,” was moderated by Judith Rodin, CEO of the Rockefeller Foundation. I’ve spent the past six years creating linkages between the humanitarian space and technology community, so the conversations we began during the panel prompted me to think more deeply about innovation in the humanitarian industry. Clearly, humanitarian crises have catalyzed a number of important innovations in recent years. At the same time, however, these crises extend the cracks that ultimately reveal the inadequacies of existing organizations, particularly those resistant to change; and “any organization that is not changing is a battlefield monument” (While 1992).

These cracks, or gaps, are increasingly filled by disaster-affected communities themselves thanks in part to the rapid commercialization of communication technology. Question is: will the multi-billion dollar humanitarian industry change rapidly enough to avoid being left in the dustbin of history?

Crises often reveal that “existing routines are inadequate or even counter-productive [since] response will necessarily operate beyond the boundary of planned and resourced capabilities” (Leonard and Howitt 2007). More formally, “the ‘symmetry-breaking’ effects of disasters undermine linearly designed and centralized administrative activities” (Corbacioglu 2006). This may explain why “increasing attention is now paid to the capacity of disaster-affected communities to ‘bounce back’ or to recover with little or no external assistance following a disaster” (Manyena 2006).

But disaster-affected populations have always self-organized in times of crisis. Indeed, first responders are by definition those very communities affected by disasters. So local communities—rather than humanitarian professionals—save the most lives following a disaster (Gilbert 1998). Many of the needs arising after a disaster can often be met and responded to locally. One doesn’t need 10 years of work experience with the UN in Darfur or a Masters degree to know basic first aid or to pull a neighbor out of the rubble, for example. In fact, estimates suggest that “no more than 10% of survival in emergencies can be attributed to external sources of relief aid” (Hilhorst 2004).

This figure may be higher today since disaster-affected communities now benefit from radically wider access to information and communication technologies (ICTs). After all, a “disaster is first of all seen as a crisis in communicating within a community—that is as a difficulty for someone to get informed and to inform other people” (Gilbert 1998). This communication challenge is far less acute today because disaster-affected communities are increasingly digital, and thus more and more the primary source of information communicated following a crisis. Of course, these communities were always sources of information but being a source in an analog world is fundamentally different than being a source of information in the digital age. The difference between “read-only” versus “read-write” comes to mind as an analogy. And so, while humanitarian organizations typically faced a vacuum of information following sudden onset disasters—limited situational awareness that could only be filled by humanitarians on the ground or via established news organizations—one of the major challenges today is the Big Data produced by disaster-affected communities themselves.

Indeed, vacuums are not empty and local communities are not invisible. One could say that disaster-affected communities are joining the quantified self (QS) movement given that they are increasingly quantifying themselves. If information is power, then the shift of information sourcing and sharing from the select few—the humanitarian professionals—to the masses must also engender a shift in power. Indeed, humanitarians rarely have access to exclusive information any longer. And even though affected populations are increasingly digital, some groups believe that humanitarian organizations have largely failed at communicating with disaster-affected communities. (Naturally, there are important and noteworthy exceptions).

So “Will Twitter Put the UN Out of Business?” (Reuters), or will humanitarian organizations cope with these radical changes by changing themselves and reshaping their role as institutions before it’s too late? Indeed, “a business that doesn’t communicate with its customers won’t stay in business very long—it’ll soon lose track of what its clients want, and clients won’t know what products or services are on offer,” whilst other actors fill the gaps (Reuters). “In the multi-billion dollar humanitarian aid industry, relief agencies are businesses and their beneficiaries are customers. Yet many agencies have muddled along for decades with scarcely a nod towards communicating with the folks they’re supposed to be serving” (Reuters).

The music and news industries were muddling along as well for decades. Today, however, they are facing tremendous pressures and are undergoing radical structural changes—none of them by choice. Of course, it would be different if affected communities were paying for humanitarian services but how much longer do humanitarian organizations have until they feel similar pressures?

Whether humanitarian organizations like it or not, disaster affected communities will increasingly communicate their needs publicly and many will expect a response from the humanitarian industry. This survey carried out by the American Red Cross two years ago already revealed that during a crisis the majority of the public expect a response to needs they communicate via social media. Moreover, they expect this response to materialize within an hour. Humanitarian organizations simply don’t have the capacity to deal with this surge in requests for help, nor are they organizationally structured to do so. But the fact of the matter is that humanitarian organizations have never been capable of dealing with this volume of requests in the first place. So “What Good is Crowdsourcing When Everyone Needs Help?” (Reuters). Perhaps “crowdsourcing” is finally revealing all the cracks in the system, which may not be a bad thing. Surely by now it is no longer a surprise that many people may be in need of help after a disaster, hence the importance of disaster risk reduction and preparedness.

Naturally, humanitarian organizations could very well choose to continue ignoring calls for help and decide that communicating with disaster-affected communities is simply not tenable. In the analog world of the past, the humanitarian industry was protected by the fact that their “clients” did not have a voice because they could not speak out digitally. So the cracks didn’t show. Today, “many traditional humanitarian players see crowdsourcing as an unwelcome distraction at a time when they are already overwhelmed. They worry that the noise-to-signal ratio is just too high” (Reuters). I think there’s an important disconnect here worth emphasizing. Crowdsourced information is simply user-generated content. If humanitarians are to ignore user-generated content, then they can forget about two-way communications with disaster-affected communities and drop all the rhetoric. On the other hand, “if aid agencies are to invest time and resources in handling torrents of crowdsourced information in disaster zones, they should be confident it’s worth their while” (Reuters).

This last comment is … rather problematic for several reasons (how’s that for being diplomatic?). First of all, this kind of statement continues to propel the myth that we the West are the rescuers and aid does not start until we arrive (Barrs 2006). Unfortunately, we rarely arrive: how many “neglected crises” and so-called “forgotten emergencies” have we failed to intervene in? This kind of mindset may explain why humanitarian interventions often have the “propensity to follow a paternalistic mode that can lead to a skewing of activities towards supply rather than demand” and towards informing at the expense of listening (Manyena 2006).

Secondly, the assumption that crowdsourced data would be for the exclusive purpose of the humanitarian cavalry is somewhat arrogant and ignores the reality that local communities are by definition the first responders in a crisis. Disaster-affected communities (and Diasporas) are already collecting (and yes, crowdsourcing) information to create their own crisis maps in times of need, as a forthcoming report shows. And they’ll keep doing this whether or not humanitarian organizations approve or leverage that information. As my colleague Tim McNamara has noted, “Crisis mapping is not simply a technological shift, it is also a process of rapid decentralization of power. With extremely low barriers to entry, many new entrants are appearing in the fields of emergency and disaster response. They are ignoring the traditional hierarchies, because the new entrants perceive that there is something that they can do which benefits others.”

Thirdly, humanitarian organizations are far more open to using free and open source software than they were just two years ago. So the resources required to monitor and map crowdsourced information need not break the bank. Indeed, the Syria Crisis Map uses a free and open source data-mining platform called HealthMap, which has been monitoring some 2,000 English-based sources on a daily basis for months. The technology powering the map itself, Ushahidi, is also free and open source. Moreover, the team behind the project is comprised of just a handful of volunteers doing this in their own free time (for almost an entire year now). And as a result of this initiative, I am collaborating with a colleague from UNDP to pilot HealthMap’s data mining feature for conflict monitoring and peacebuilding purposes.

Fourth, other than UN Global Pulse, humanitarian agencies are not investing time and resources to manage Big (Crisis) Data. Why? Because they have neither the time nor the know-how. To this end, they are starting to “outsource” and indeed “crowdsource” these tasks—just as private sector businesses have been doing for years in order to extend their reach. Anyone actually familiar with this space and developments since Haiti already knows this. The CrisisMappers Network, Standby Volunteer Task Force (SBTF), Humanitarian OpenStreetMap (HOT) and Crisis Commons (CC) are four volunteer/technical networks that have already collaborated actively with a number of humanitarian organizations since Haiti to provide the “surge capacity” requested by the latter; this includes UN OCHA in Libya and Colombia, UNHCR in Somalia and WHO in Libya, to name a few. In fact, these groups even have their own acronym: Volunteer & Technical Communities (V&TCs).

As the former head of OCHA’s Information Services Section (ISS) noted after the SBTF launched the Libya Crisis Map, “Your efforts at tackling a difficult problem have definitely reduced the information overload; sorting through the multitude of signals on the crisis is no easy task” (March 8, 2011). Furthermore, the crowdsourced social media information mapped on the Libya Crisis Map was integrated into official UN OCHA information products. I dare say activating the SBTF was worth OCHA’s while. And it cost the UN a grand total of $0 to benefit from this support.

Credit: Chris Bow

The rapid rise of V&TCs has catalyzed the launch of the Digital Humanitarian Network (DHN), formerly called the Humanitarian Standby Task Force (H-SBTF). Digital Humanitarians is a network-of-networks catalyzed by the UN and comprising some of the most active members of the volunteer & technical community. The purpose of the Digital Humanitarian platform (powered by Ning) is to provide a dedicated interface for traditional humanitarian organizations to outsource and crowdsource important information management tasks during and in-between crises. OCHA has also launched the Communities of Interest (COIs) platform to further leverage volunteer engagement in other areas of humanitarian response.

These are not isolated efforts. During the massive Russian fires of 2010, volunteers launched their own citizen-based disaster response agency that was seen by many as more visible and effective than the Kremlin’s response. In Egypt, volunteers used IntaFeen.com to crowdsource and coordinate their own humanitarian convoys to Libya, for example. LinkedIn has also taken innovative steps to enable the matching of volunteers with various needs: the company recently added a “Volunteer and Causes” field to its member profile page, which is now available to 150 million LinkedIn users worldwide. Sparked.com is yet another group engaged in matching volunteers with needs. The company is the world’s first micro-volunteering network, sending registered volunteers challenges targeted to their skill sets and the causes they are most passionate about.

It is not farfetched to envisage how these technologies could be repurposed or simply applied to facilitate and streamline volunteer management following a disaster. Indeed, researchers at the University of Queensland in Australia have already developed a new smart phone app to help mobilize and coordinate volunteer efforts during and following major disasters. The app not only provides information on preparedness but also gives real-time updates on volunteering opportunities by local area. For example, volunteers can register for a variety of tasks including community response to extreme weather events.

Meanwhile, the American Red Cross just launched a Digital Operations Center in partnership with Dell Labs, which allows them to leverage digital volunteers and Dell’s social media monitoring platforms to reduce the noise-to-signal ratio. This is a novel “social media-based operation devoted to humanitarian relief, demonstrating the growing importance of social media in emergency situations.” As part of this center, the Red Cross also “announced a Digital Volunteer program to help respond to questions from and provide information to the public during disasters.”

While important challenges do exist, there are many positive externalities to leveraging digital volunteers. As the Deputy High Commissioner of UNHCR noted about this UNHCR-volunteer project in Somalia, these types of projects create more citizen engagement and raise awareness of humanitarian organizations and projects. This in part explains why UNHCR wants more, not less, engagement with digital volunteers. Indeed, these volunteers also develop important skills that will be increasingly sought after by humanitarian organizations recruiting for junior full-time positions. Humanitarian organizations are likely to become smarter and more up to speed on humanitarian technologies and digital humanitarian skills as a result. This change should be embraced.

So given the rise of “self-quantified” disaster-affected communities and digitally empowered volunteer communities, is there a future for traditional humanitarian organizations? Of course, anyone who suggests otherwise is seriously misguided and out of touch with innovation in the humanitarian space. Twitter will not put the UN out of business. Humanitarian organizations will continue to play some very important roles, especially those relating to logistics and coordination. These organizations will continue outsourcing some roles but will also take on some new roles. The issue here is simply one of comparative advantage. Humanitarian organizations used to have a comparative advantage in some areas, but this has shifted for all the reasons described above. So outsourcing in some cases makes perfect sense.

Interestingly, organizations like UN OCHA are also changing some of their own internal information management processes as a result of their collaboration with volunteer networks like the SBTF, which they expect will lead to a number of efficiency gains. Furthermore, OCHA is behind the Digital Humanitarians initiative and has also been developing a check-in app for humanitarian professionals to use in disaster response—clear signs of innovation and change. Meanwhile, the UK’s Department for International Development (DfID) has just launched a $75+ million fund to leverage new technologies in support of humanitarian response; this includes mobile phones, satellite imagery, Twitter as well as other social media technologies, digital mapping and gaming technologies. Given that crisis mapping integrates these new technologies and has been at the cutting edge of innovation in the humanitarian space, I’ve invited DfID to participate in this year’s International Conference on Crisis Mapping (ICCM 2012).

In conclusion, and as argued two years ago, the humanitarian industry is shifting towards a more multi-polar system. The rise of new actors, from digitally empowered disaster-affected communities to digital volunteer networks, has been driven by the rapid commercialization of communication technology—particularly the mobile phone and social networking platforms. These trends are unlikely to change soon and crises will continue to spur innovations in this space. This does not mean that traditional humanitarian organizations are becoming obsolete. Their roles are simply changing and this change is proof that they are not battlefield monuments. Of course, only time will tell whether they change fast enough.

Why Bounded Crowdsourcing is Important for Crisis Mapping and Beyond

I coined the term “bounded crowdsourcing” a couple of years back to distinguish the approach from other methodologies for information collection. As tends to happen, some Muggles (in the humanitarian community) ridiculed the term. They freaked out about the semantics instead of trying to understand the underlying concept. It’s not their fault though, they’ve never been to Hogwarts and have never taken Crowdsourcery 101 (joke!).

Open crowdsourcing or “unbounded crowdsourcing” refers to the collection of information with no intentional constraints. Anyone who hears about an effort to crowdsource information can participate. This definition is in line with the original description put forward by Jeff Howe: outsourcing a task to a generally large group of people in the form of an open call.

In contrast, the point of “bounded crowdsourcing” is to start with a small number of trusted individuals and to have each of them invite, say, 3 additional individuals to join the project—individuals who they fully trust and can vouch for. After joining and working on the project, these individuals in turn invite 3 additional people they fully trust. And so on and so forth, at an exponential rate if desired. Just like crowdsourcing is nothing new in the field of statistics, neither is “bounded crowdsourcing”: its analog is snowball sampling.
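The invite-only growth just described is easy to model. Below is a minimal sketch (my own illustration, not from the original post) of how a bounded crowd grows when each new participant invites a fixed number of trusted contacts exactly once; real invite counts and acceptance rates would of course vary:

```python
def bounded_crowd_size(seeds: int, invites_per_person: int, rounds: int) -> int:
    """Total participants after a given number of invitation rounds,
    assuming every newcomer recruits invites_per_person trusted
    contacts exactly once (illustrative model only)."""
    total = seeds
    newcomers = seeds
    for _ in range(rounds):
        newcomers *= invites_per_person  # only the latest joiners send invites
        total += newcomers
    return total

# 5 trusted seeds, 3 invites each, 4 rounds of invitations:
# 5 + 15 + 45 + 135 + 405 = 605 participants
print(bounded_crowd_size(5, 3, 4))  # → 605
```

Even with these conservative numbers, the “boundary” expands to hundreds of vouched-for participants within a few rounds, which is exactly why the approach can scale while retaining trust.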

In snowball sampling, a number of individuals are identified who meet certain criteria but, unlike in purposive sampling, they are asked to recommend others who meet the same criteria—thus expanding the network of participants. Although these “bounded” methods are unlikely to produce representative samples, they are more likely to produce trustworthy information. In addition, there are times when snowball sampling may be the best—or indeed only—method available. Incidentally, a recent study that analyzed various field research methodologies for conflict environments concluded that snowball sampling was the most effective method (Cohen and Arieli 2011).

I introduced the concept of bounded crowdsourcing to the field of crisis mapping in response to concerns over the reliability of crowdsourced information. One excellent real-world case study of bounded crowdsourcing for crisis response is this remarkable example from Kyrgyzstan. The “boundary” in bounded crowdsourcing is dynamic and can grow exponentially very quickly. Participants may not all know each other (just like in open crowdsourcing), so in some ways they become a crowd, but one bounded by an invite-only criterion.

I have since recommended this approach to several groups using the Ushahidi platform, like the #OWS movement. The statistical method known as snowball sampling is decades old. So I’m not introducing a new technique, simply applying a conventional approach from statistics to the field of crisis mapping and calling it bounded to distinguish the methodology from regular crowdsourcing efforts. What is different and exciting about combining snowball sampling with crowdsourcing is that a far larger group can be sampled, a lot more quickly and also more cost-effectively, given today’s real-time, free social networking platforms.

Google+ for Crowdsourcing Crisis Information, Crisis Mapping and Disaster Response

Facebook is increasingly used to crowdsource crisis information and response, as is Twitter. So is it just a matter of time until we see similar use cases with Google+? Another question I have is whether such use cases will simply reflect more of the same or whether we’ll see new, unexpected applications and dynamics. Of course, it may be premature to entertain the role that Google+ might play in disaster response just days after its private beta launch, but the company seems fully committed to making this new venture succeed. Entertaining how Google+ (G+) might be used as a humanitarian technology thus seems worthwhile.

The fact that G+ is open and searchable is probably one of the starkest differences with the walled-garden that is Facebook; that, and their Data Liberation policy. This will make activity on G+ relatively easier to find—Google is the King of Search, after all. This openness will render serendipity and synergies more likely.

The much talked about “Circles” feature is also very appealing for the kind of organic and collaborative crowdsourcing work that we see emerging following a crisis. Think about these “Circles” not only as networks but also as “honeycombs” for “flash” projects—i.e., short-term and temporary—very much along the lines that Skype is used for live collaborative crisis mapping operations.

Google+’s new Hangout feature could also be used instead of Skype chat and video, with the advantage of having multi-person video-conferencing. With a little more work, the Sparks feature could facilitate media monitoring—an important component of live crisis mapping. And then there’s Google+ mobile, which is accessible on most phones with a browser and already includes a “check-in” feature as well as geo-referenced status updates. The native app for the Android is already available and the iPhone app is coming soon.

Clicking on my status update above produces the Google Maps page below. What’s particularly telling about this is how “underwhelming” the use of Google Maps currently is within G+. There’s no doubt this will change dramatically as G+ evolves. The Google+ team has noted that they already have dozens of new features ready to be rolled out in the coming months. So expect G+ to make full use of Google’s formidable presence on the Geo Web—think MapMaker+ and Earth Engine+. This could be a big plus for live crowdsourced crisis mapping, especially of the multimedia kind.

One stark difference with Facebook’s status updates and check-ins is that G+ allows you to decide which Circles (or networks of contacts) to share your updates and check-ins with. This is an important difference that could allow for more efficient information sharing in near real-time. You could set up your Circles as different teams, perhaps even along UN Cluster lines.

As the G+ mobile website reveals, the team will also be integrating SMS, which is definitely key for crisis response. I imagine there will also be a way to connect your Twitter feed with Google+ in the near future. This will make G+ even more compelling as a mobile humanitarian technology platform. In addition, I expect there are also plans to integrate Google News, Google Reader, Google Groups, Google Docs and Google Translate with G+. GMail, YouTube and Picasa are already integrated.

One feature that will be important for humanitarian applications is offline functionality. Google Reader and GMail already have this feature (Google Gears), which I imagine could be added to G+’s Stream and perhaps eventually with Google Maps? In addition, if Google can provide customizable uses of G+, then this could also make the new platform more compelling for humanitarian organizations, e.g., if OCHA could have their own G+ (“iG+”) by customizing and branding their G+ interface; much like the flexibility afforded by the Ning platform. One first step in that direction might be to offer a range of “themes” for G+, just like Google does with GMail.

Finally, the ability to develop third party apps for G+ could be a big win. Think of a G+ store (in contrast to an App Store). I’d love to see a G+ app for Ushahidi and OSM, for example.

If successful, G+ could be the best example of “What Technology Wants” to date. G+ is convergence technology par excellence. It is a hub that connects many of Google’s excellent products and from the looks of it, the G+ team is just getting warmed up with the converging.

I’d love to hear from others who are also brainstorming about possible applications of Google+ in the humanitarian space. Am I off on any of the ideas above? What am I missing? Maybe we could set up a Google+ 4 Disaster Response Circle and get on Hangout to brainstorm together?

Information Sharing During Crisis Management in Hierarchical vs. Network Teams

The month of May turned out to be ridiculously busy, so much so that I haven’t been able to blog. And when that happens, I know I’m doing too much. So my plan for June is to slow down, prioritize and do more of what I enjoy, e.g., blog.

In the meantime, the Journal of Contingencies and Crisis Management just published an interesting piece on “Information Sharing During Crisis Management in Hierarchical vs. Network Teams.” The topic and findings have implications for digital activism as well as crisis management.

Here’s the abstract:

This study examines the differences between hierarchical and network teams in emergency management. A controlled experimental environment was created in which we could study teams that differed in decision rights, availability of information, information sharing, and task division. Thirty-two teams of either two (network) or three (hierarchy) participants (N=80 in total) received messages about an incident in a tunnel with high-ranking politicians possibly being present. Based on experimentally induced knowledge, teams had to decide as quickly and as accurately as possible what the likely cause of the incident was: an attack by Al Qaeda, by anti-globalists, or an accident. The results showed that network teams were overall faster and more accurate in difficult scenarios than hierarchical teams. Network teams also shared more knowledge in the difficult scenarios, compared with the easier scenarios. The advantage of being able to share information that is inherent in network teams is thus contingent upon the type of situation encountered.

The authors define a hierarchical team as one in which members pass on information to a leader, but not to each other. In a network team, members can freely exchange information with each other. Here’s more on the conclusions derived by the study:
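To make this structural difference concrete, here is a toy information-sharing model (my own illustration, not the paper’s actual experiment): each team member starts with one unique clue, and each round a member learns everything a permitted contact knew at the start of that round. In the hierarchy, members pass information only to the leader; in the network, everyone exchanges with everyone.

```python
def simulate(edges, n, rounds):
    """Each of n members starts with one unique clue. Each round, every
    directed edge (a, b) lets b learn all clues a held at the start of
    the round. Returns each member's final clue count."""
    knowledge = [{i} for i in range(n)]
    for _ in range(rounds):
        snapshot = [set(k) for k in knowledge]  # start-of-round knowledge
        for a, b in edges:
            knowledge[b] |= snapshot[a]
    return [len(k) for k in knowledge]

n = 4  # member 0 acts as the team leader in the hierarchy
hierarchy = [(i, 0) for i in range(1, n)]  # members report only to the leader
network = [(a, b) for a in range(n) for b in range(n) if a != b]

print(simulate(hierarchy, n, 3))  # → [4, 1, 1, 1]: leader knows all, members stay isolated
print(simulate(network, n, 3))    # → [4, 4, 4, 4]: everyone shares full knowledge
```

Even in this crude sketch, the hierarchy concentrates knowledge in the leader while members remain siloed, whereas the network distributes specialist knowledge across the whole team—consistent with the study’s finding that knowledge transfer was greater in network teams.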

Our goal with the present study was to focus on a relatively simple comparison between a classic hierarchical structure and a network structure. The structures differed in terms of decision rights, availability of information, information sharing, and task division. Although previous research has not found unequivocal support in terms of speed or accuracy for one structure or the other, we expected our network structure to perform better and faster on the decision problems. We also expected the network teams to learn faster and exchange more specialist knowledge than the hierarchical teams.

Our hypotheses are partially supported. Network teams are indeed faster than hierarchical teams. Further analyses showed that network teams were, on average, as fast as the slowest working individual in the hierarchical teams. Analyses also showed that network teams very early on converged on a rapid mode of arriving at a decision, whereas hierarchical teams took more time. The extra time needed by hierarchical teams is therefore due to the time needed by the team leader to arrive at his or her decision.

We did not find an overall effect of team structure on the quality of team decision, contrary to our prediction. Interestingly, we did find that network teams were significantly better than hierarchical teams on the Al Qaeda scenarios (as compared with the anti-globalist scenarios). The Al Qaeda scenarios were the most difficult scenarios. Furthermore, scores on the Post-test showed that there was a larger transfer of knowledge on Al Qaeda from the specialist to the nonspecialist in the network condition as compared with the hierarchical condition. These results indicate that a high level of team member interaction leads to shared specialist knowledge, particularly in difficult scenarios. This in turn leads to more accurate decisions.

This study focused on the information assessment part of crisis management, not on the operative part. However, there may not be that much of a difference in terms of the actual teamwork involved. When team members have to carry out particular tasks, they may frequently also have to share specialist knowledge. Wilson, Salas, Priest, and Andrews (2007) have studied how teamwork breakdowns in the military may contribute to fratricide, the accidental shooting of one’s own troops rather than the enemy. This is obviously a very operative part of the military task. Teamwork breakdowns are subdivided into communication, coordination and cooperation, with information exchange, mutual performance monitoring, and mutual trust as representative teamwork behaviours for each category (Wilson et al., 2007).

We believe that it is precisely these behaviours that are fostered by network structures rather than hierarchical structures. Network structures allow teams to exchange information quickly, monitor each other’s performance, and build up mutual trust. This is just as important in the operative part of crisis management work as it is in the information assessment part.

In conclusion, then, network teams are faster than hierarchical teams, while at the same time maintaining the same level of accuracy in relatively simple environments. In relatively complex environments, on the other hand, network teams arrive at correct decisions more frequently than hierarchical teams. This may very likely be due to a better exchange of knowledge in network teams.

Patrick Philippe Meier