
Humanitarian Crisis Computing 101

Disaster-affected communities are increasingly becoming “digital” communities. That is, they increasingly use mobile technology & social media to communicate during crises. I often refer to this user-generated content as Big (Crisis) Data. Humanitarian crisis computing seeks to rapidly identify informative, actionable and credible content in this growing stack of real-time information. The challenge is akin to finding the proverbial needle in the haystack, since the vast majority of reports posted on social media are often not relevant for humanitarian response. This is largely a result of the demand versus supply problem described here.

In any event, the few “needles” of information that are relevant can relay information that is vital and indeed life-saving for relief efforts—both traditional top-down efforts and more bottom-up grassroots ones. When disaster strikes, we increasingly see social media traffic explode. We know there are important “pins” of relevant information hidden in this growing stack, but how do we find them in real time?

Humanitarian organizations are ill-equipped to manage this deluge of Big Crisis Data. They tend to sift through the stack of information manually, which means they aren’t able to process more than a small volume of information. This is represented by the dotted green line in the picture below. Big Data is often described as a problem of filter failure: our manual filters cannot manage the large volume, velocity and variety of information posted on social media during disasters. So all the information above the dotted line, the Big Data, is simply ignored.

This is where Advanced Computing comes in. Advanced Computing uses Human and Machine Computing to manage Big Data and reduce filter failure, thus allowing humanitarian organizations to process a larger volume, velocity and variety of crisis information in less time. In other words, Advanced Computing helps us push the dotted green line up the information stack.

In the early days of digital humanitarian response, we used crowdsourcing to search through the haystack of user-generated content posted during disasters. Note that said content can also include text messages (SMS), as in Haiti. Crowdsourcing crisis information is not as much fun as the picture below would suggest, however. In fact, crowdsourcing crisis information was (and can still be) quite a mess and a big pain in the haystack. Needless to say, crowdsourcing is not the best filter to make sense of Big Crisis Data.

Recently, digital humanitarians have turned to microtasking crisis information as described here and here. The UK Guardian and Wired have also written about this novel shift from crowdsourcing to microtasking.

Microtasking basically turns the haystack into small stacks. Each micro-stack is then processed by one or more digital humanitarian volunteers. Unlike crowdsourcing, a microtasking approach to filtering crisis information is highly scalable, which is why we recently launched MicroMappers.

The smaller the micro-stack, the easier the tasks and the faster they can be carried out by a greater number of volunteers. For example, instead of having 10 people classify 10,000 tweets based on the Cluster System, microtasking makes it very easy for 1,000 people to classify 10 tweets each. The former would take hours while the latter takes mere minutes. In response to the recent earthquake in Pakistan, for example, some 100 volunteers used MicroMappers to classify 30,000+ tweets in about 30 hours.
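To make the arithmetic concrete, here is a minimal sketch (in Python, with stand-in data and hypothetical function names, not MicroMappers’ actual code) of how a haystack of tweets can be chunked into micro-stacks for volunteers:

```python
def make_microtasks(tweets, batch_size=10):
    """Split a large stack of tweets into small "micro-stacks" that a single
    volunteer can classify in a minute or two."""
    return [tweets[i:i + batch_size] for i in range(0, len(tweets), batch_size)]

# Stand-in data: 10,000 tweets split into micro-stacks of 10 tweets each.
tweets = [f"tweet {i}" for i in range(10_000)]
batches = make_microtasks(tweets)

print(len(batches))      # 1000 micro-stacks, i.e. enough for 1,000 volunteers
print(len(batches[0]))   # 10 tweets per micro-stack
```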

Machine Computing, in contrast, uses natural language processing (NLP) and machine learning (ML) to “quantify” the haystack of user-generated content posted on social media during disasters. This enables us to automatically identify relevant “needles” of information.

An example of a Machine Learning approach to crisis computing is the Artificial Intelligence for Disaster Response (AIDR) platform. Using AIDR, users can teach the platform to automatically identify relevant information from Twitter during disasters. For example, AIDR can be used to automatically identify individual tweets that relay urgent needs from a haystack of millions of tweets.
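As a rough illustration of the kind of supervised learning AIDR relies on (a minimal sketch assuming scikit-learn and made-up labels, not AIDR’s actual code), a text classifier can be trained on a handful of volunteer-labeled tweets and then applied to the incoming stream:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical volunteer-labeled tweets (label 1 = relays an urgent need).
labeled_tweets = [
    ("we urgently need drinking water in tacloban", 1),
    ("bridge collapsed, people trapped, please send help", 1),
    ("thoughts and prayers for everyone affected", 0),
    ("amazing sunset tonight despite the storm", 0),
]
texts, labels = zip(*labeled_tweets)

# TF-IDF features + logistic regression: a basic supervised text classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score new, unlabeled tweets from the haystack.
new_tweets = ["no food or water here since yesterday", "great storm coverage on tv"]
print(model.predict(new_tweets))   # e.g. [1 0]
```

In practice, a platform like this would need far more labeled examples and would retrain its classifiers as volunteers tag new tweets.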

The pictures above are taken from the slide deck I put together for a keynote address I recently gave at the Canadian Ministry of Foreign Affairs.

Analyzing Crisis Hashtags on Twitter (Updated)

Update: You can now upload your own tweets to the Crisis Hashtags Analysis Dashboard here

Hashtag footprints can be revealing. The map below, for example, displays the top 200 locations in the world with the most Twitter hashtags. The top 5 are São Paulo, London, Jakarta, Los Angeles and New York.

A recent study (PDF) of 2 billion geo-tagged tweets and 27 million unique hashtags found that “hashtags are essentially a local phenomenon with long-tailed life spans.” The analysis also revealed that hashtags triggered by external events like disasters “spread faster than hashtags that originate purely within the Twitter network itself.” Like other metadata, hashtags can be informative in and of themselves. For example, they can provide early warning signals of social tensions in Egypt, as demonstrated in this study. So might they also reveal interesting patterns during and after major disasters?

Tens of thousands of distinct crisis hashtags were posted to Twitter during Hurricane Sandy. While #Sandy and #hurricane featured most prominently, thousands more were also used. For example: #SandyHelp, #rallyrelief, #NJgas, #NJopen, #NJpower, #staysafe, #sandypets, #restoretheshore, #noschool, #fail, etc. #NJpower, for example, “helped keep track of the power situation throughout the state. Users and news outlets used this hashtag to inform residents where power outages were reported and gave areas updates as to when they could expect their power to come back” (1).

My colleagues and I at QCRI are studying crisis hashtags to better understand the variety of tags used during and in the immediate aftermath of major crises. Popular hashtags used during disasters often overshadow more hyperlocal ones, making the latter less discoverable. Other challenges include the “proliferation of hashtags that do not cross-pollinate and a lack of usability in the tools necessary for managing massive amounts of streaming information for participants who needed it” (2). To address these challenges and analyze crisis hashtags, we’ve just launched a Crisis Hashtags Analytics Dashboard. As displayed below, our first case study is Hurricane Sandy. We’ve uploaded about half a million tweets posted between October 27 and November 7, 2012 to the dashboard.

Users can visualize the frequency of tweets (orange line) and hashtags (green line) over time using different time-steps, ranging from 10-minute to 1-day intervals. They can also “zoom in” to capture more fine-grained changes in the number of hashtags per time interval. (The dramatic drop on October 30th is due to a server crash, so if you have access to tweets posted during those hours, I’d be grateful if you could share them with us.)

In the second part of the dashboard (displayed below), users can select any point on the graph to display the top “K” most frequent hashtags. The default value for K is 10 (i.e., the top-10 most frequent hashtags), but users can change this by typing in a different number. In addition, the 10 least-frequent hashtags are displayed, as are the 10 “middle-most” hashtags. The top-10 newest hashtags posted during the selected time are also displayed, as are the hashtags that have seen the largest increase in frequency. These latter two metrics, “New K” and “Top Increasing K”, may provide early warning signals during disasters. Indeed, the appearance of a new hashtag can reveal a new problem or need, while a rapid increase in the frequency of certain hashtags can denote the spread of a problem or need.
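For illustration, the dashboard’s counting logic can be sketched in a few lines of Python (a toy example with made-up tweet bins, not QCRI’s implementation): bin tweets into time intervals, then compute the top-K, newly appearing, and fastest-growing hashtags for a selected interval.

```python
import re
from collections import Counter

HASHTAG = re.compile(r"#\w+")

def hashtag_counts(tweets):
    """Count hashtag occurrences in a list of tweet texts."""
    return Counter(tag.lower() for text in tweets for tag in HASHTAG.findall(text))

def interval_metrics(current_tweets, previous_tweets, k=10):
    """Top-K, new-K, and top-increasing-K hashtags for one time interval."""
    now, before = hashtag_counts(current_tweets), hashtag_counts(previous_tweets)
    top_k = now.most_common(k)
    new_k = [tag for tag in now if tag not in before][:k]
    increasing_k = sorted(now, key=lambda t: now[t] - before.get(t, 0), reverse=True)[:k]
    return top_k, new_k, increasing_k

# Hypothetical 10-minute bins of tweet texts.
prev_bin = ["#sandy making landfall in NJ", "#sandy stay safe everyone"]
curr_bin = ["#njpower out in Newark #sandy", "#njpower outage map", "#sandyhelp needed downtown"]
print(interval_metrics(curr_bin, prev_bin, k=3))
```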

The third part of the dashboard allows users to visualize and compare the frequency of top hashtags over time. This feature is displayed in the screenshot below. Patterns that arise from diverging or converging hashtags may indicate important developments on the ground.

We’re only at the early stages of developing our hashtags analytics platform (above), but we hope the tool will provide insights during future disasters. For now, we’re simply experimenting and tinkering. So feel free to get in touch if you would like to collaborate and/or suggest some research questions.

Acknowledgements: Many thanks to QCRI colleagues Ahmed Meheina and Sofiane Abbar for their work on developing the dashboard.

Crowdsourcing Critical Thinking to Verify Social Media During Crises

My colleagues and I at QCRI and the Masdar Institute will be launching Verily in the near future. The project has already received quite a bit of media coverage—particularly after the Boston marathon bombings. So here’s an update. While major errors were made in the crowdsourced response to the bombings, social media can help to quickly find individuals and resources during a crisis. Moreover, time-critical crowdsourcing can also be used to verify unconfirmed reports circulating on social media.

The errors made following the bombings were the result of two main factors:

(1) the crowd is digitally illiterate
(2) the platforms used were not appropriate for the tasks at hand

The first factor has to do with education. Most of us are still in Kindergarten when it comes to the appropriate use of social media. We lack the digital or media literacy required for the responsible use of social media during crises. The good news, however, is that the major backlash from the mistakes made in Boston is already serving as an important lesson to many in the crowd, who are very likely to think twice about retweeting certain content or making blind allegations on social media in the future. The second factor has to do with design. Tools like Reddit and 4Chan that are useful for posting photos of cute cats are not always the tools best designed for finding critical information during crises. The crowd is willing to help, this much has been proven. The crowd simply needs better tools to focus and rationalize the goodwill of its members.

Verily was inspired by the DARPA Red Balloon Challenge, which leveraged social media & social networks to find the locations of 10 red weather balloons planted across the continental USA (3 million square miles) in under 9 hours. Verily uses the same time-critical mobilization approach—a recursive incentive mechanism—to rapidly collect evidence around a particular claim during a disaster, such as “The bridge in downtown LA has been destroyed by the earthquake.” Users of Verily can share such a verification challenge directly from the Verily website (e.g., share via Twitter, Facebook or email), which posts a link back to the Verily claim page.

This time-critical mobilization & crowdsourcing element is the first main component of Verily. Because disasters are far more geographically bounded than the continental US, we believe that relevant evidence can be crowdsourced in a matter of minutes rather than hours. Indeed, while the degree of separation in the analog world is 6, that number falls closer to 4 on social media, and we believe it falls even further in bounded geographical areas like urban centers. This means that the 20+ people living opposite that bridge in LA are only 2 or 3 hops away in your social network and could be tapped via Verily to take pictures of the bridge from their window, for example.

The second main component is to crowdsource critical thinking, which is key to countering the spread of false rumors during crises. The interface for posting evidence on Verily is modeled along the lines of Pinterest, but for each piece of content (text, image, video), users are required to add a sentence or two explaining why they think or know that piece of evidence is authentic or not. Others can comment on said evidence accordingly. This workflow prompts users to think critically rather than blindly share/RT content on Twitter without much thought, context or explanation. Indeed, we hope that with Verily more people will share links back to Verily pages rather than to out-of-context and unsubstantiated images, videos and claims.

In other words, we want to redirect traffic to a repository of information that incentivises critical thinking. This means Verily is also intended to be an educational tool; we’ll have simple mini-guides on information forensics available to users (drawn from the BBC’s UGC, NPR’s Andy Carvin, etc). While we’ll include dig ups/downs on the perceived authenticity of evidence posted to Verily, this is not the main focus of the platform. Dig ups/downs are similar to retweets and simply do not capture or explain whether said digger has voted based on her/his expertise or any critical thinking.
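A minimal sketch of the kind of data model this claim-and-evidence workflow implies (hypothetical field names, not Verily’s actual schema): each claim page collects pieces of evidence, and every piece of evidence must carry an explanation before others comment on it.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    """One piece of evidence posted to a claim page."""
    author: str
    content_url: str     # link to the text, image or video
    explanation: str     # required: why the poster believes it is (in)authentic
    comments: List[str] = field(default_factory=list)

@dataclass
class Claim:
    """A verification challenge, e.g. a reported bridge collapse."""
    text: str
    evidence: List[Evidence] = field(default_factory=list)

claim = Claim("The bridge in downtown LA has been destroyed by the earthquake")
claim.evidence.append(Evidence(
    author="@neighbor",
    content_url="http://example.com/photo.jpg",
    explanation="Photo taken from my window five minutes ago; the bridge is intact.",
))
```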

If you’re interested in supporting this project and/or sharing feedback, then please feel free to contact me at any time. For more background information on Verily, kindly see this post.

Social Media for Emergency Management: Question of Supply and Demand

I’m always amazed by folks who dismiss the value of social media for emergency management based on the perception that said content is useless for disaster response. By that logic, libraries are also useless: the few books you’re actually looking for rarely represent more than 1% of all the books available in a major library. Does that mean libraries are useless? Of course not. Is social media useless for disaster response? Of course not. Even if only 0.01% of the 20+ million tweets posted during Hurricane Sandy were useful, and only half of these were accurate, this would still mean some 1,000 real-time and informative tweets, or roughly 15,000 words—i.e., the equivalent of a 25-page, single-spaced document composed exclusively of fully relevant, actionable & timely disaster information.
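To make that back-of-the-envelope calculation explicit (assuming roughly 15 words per tweet and about 600 words per single-spaced page):

```python
total_tweets   = 20_000_000   # tweets posted during Hurricane Sandy
useful_share   = 0.0001       # 0.01% judged useful
accurate_share = 0.5          # half of the useful tweets assumed accurate

useful_tweets = total_tweets * useful_share * accurate_share   # ~1,000 tweets
words = useful_tweets * 15                                     # ~15,000 words
pages = words / 600                                            # ~25 single-spaced pages
print(useful_tweets, words, pages)
```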

Empirical studies clearly prove that social media reports can be informative for disaster response. Numerous case studies have also described how social media has saved lives during crises. That said, if emergency responders do not actively or explicitly create demand for relevant and high quality social media content during crises, then why should supply follow? If the 911 emergency number (999 in the UK) were never advertised, then would anyone call? If 911 were simply a voicemail inbox with no instructions, would callers know what type of actionable information to relay after the beep?

While the majority of emergency management centers do not create the demand for crowdsourced crisis information, members of the public are increasingly demanding that said responders monitor social media for “emergency posts”. Most responders fear that opening up social media as a crisis communication channel with the public will result in an unmanageable flood of requests. The London Fire Brigade seems to think otherwise, however. So let’s carefully unpack the fear of information flooding.

First of all, New York City’s 911 operators receive over 10 million calls every year that are accidental, false or hoaxes. Does this mean we should abolish the 911 system? Of course not. Now, assuming that 10% of these calls take an operator 10 seconds each to manage, this represents close to 3,000 hours, or 115 days’ worth of “wasted work”. But this filtering is absolutely critical and requires human intervention. In contrast, “emergency posts” published on social media can be automatically filtered and triaged thanks to Big Data Analytics and Social Computing, which could save operators time. The Digital Operations Center at the American Red Cross is currently exploring this automated filtering approach. Moreover, just as it is illegal to report false emergency information to 911, there’s no reason why the same laws could not apply to social media when these communication channels are used for emergency purposes.

Second, if individuals prefer to share disaster-related information and/or needs via social media, this means they are less likely to call in as well. In other words, double reporting is unlikely to occur and could also be discouraged and/or penalized, so the volume of emergency reports from “the crowd” need not increase substantially after all. Those who use the phone to report an emergency today may in the future opt for social media instead. The only significant change here is the ease of reporting for the person in need. Again, the question is one of supply and demand. Even if relevant emergency posts were to increase without a comparable fall in calls, this would simply reveal that the current voice-based system creates a barrier to reporting that discriminates against certain users in need.

Third, not all emergency calls/posts require immediate response by a paid professional with 10+ years of experience. In other words, the various types of needs can be triaged and responded to accordingly. As part of their police training or internships, new cadets could be tasked to respond to less serious needs, leaving the more seasoned professionals to focus on the more difficult situations. While this approach certainly has some limitations in the context of 911, these same limitations are far less pronounced for disaster response efforts in which most needs are met locally by the affected communities themselves anyway. In fact, the Filipino government actively promotes the use of social media reporting and crisis hashtags to crowdsource disaster response.

In sum, if disaster responders and emergency management professionals are not content with the quality of crisis reporting found on social media, then they should do something about it by implementing the appropriate policies to create the demand for higher quality and more structured reporting. The first emergency telephone service was launched in London some 80 years ago in response to a devastating fire. At the time, the idea of using a phone to report emergencies was controversial. Today, the London Fire Brigade is paving the way forward by introducing Twitter as a reporting channel. This move may seem controversial to some today, but give it a few years and people will look back and ask what took us so long to adopt new social media channels for crisis reporting.

Tweets, Crises and Behavioral Psychology: On Credibility and Information Sharing

How we feel about the content we read on Twitter influences whether we accept and share it—particularly during disasters. My colleague Yasuaki Sakamoto at the Stevens Institute of Technology (SIT) and his PhD students analyzed this dynamic more closely in a recent study entitled “Perspective Matters: Sharing of Crisis Information in Social Media”. Using a series of behavioral psychology experiments, they examined “how individuals share information related to the 9.0 magnitude earthquake, which hit northeastern Japan on March 11th, 2011.” Their results indicate that individuals were more likely to share crisis information (1) when they imagined that they were close to the disaster center, (2) when they were thinking about themselves, and (3) when they experienced negative emotions as a result of reading the information.

Yasu and team are particularly interested in “the effects of perspective taking – considering self or other – and location on individuals’ intention to pass on information in a Twitter-like environment.” In other words: does empathy influence information sharing (retweeting) during crises? Does thinking of others in need eliminate the individual differences in perception that arise when thinking of one’s self instead? The authors hypothesize that “individuals’ information sharing decision can be influenced by (1) their imagined proximity, being close to or distant from the disaster center, (2) the perspective that they take, thinking about self or other, and (3) how they feel about the information that they are exposed to in social media, positive, negative or neutral.”

To test these hypotheses, Yasu and company collected one year’s worth of tweets posted by two major news agencies and five individuals following the Japan Earthquake of March 11, 2011. They randomly sampled 100 media tweets and 100 tweets produced by individuals, resulting in a combined sample of 200 tweets. Sampling from these two sources (media vs user-generated) enabled Yasu and team to test whether people treat the resulting content differently. Next, they recruited 468 volunteers from Amazon’s Mechanical Turk and paid them a nominal fee for their participation in a series of three behavioral psychology experiments.

In the first experiment, the “control” condition, volunteers read through the list of tweets and simply rated the likelihood of sharing a given tweet. The second experiment asked volunteers to read through the list and imagine they were in Fukushima. They were then asked to document their feelings and rate whether they would pass along a given message. Experiment three introduced a hypothetical person John based in Fukushima and prompted users to describe how each tweet might make John feel and rate whether they would share the tweet.

The results of these experiments suggest that “people are more likely to spread crisis information when they think about themselves in the disaster situation. During disasters, then, one recommendation we can give to citizens would be to think about others instead of self, and think about others who are not in the disaster center. Doing so might allow citizens to perceive the information in a different way, and reduce the likelihood of impulsively spreading any seemingly useful but false information.” Yasu and his students also found that “people are more likely to share information associated with negative feelings.” Since rumors tend to evoke negativity, they spread more quickly. The authors entertain possible ways to manage this problem, such as “surrounding negative messages with positive ones,” for example.

In conclusion, Yasu and his students consider the design principles that ought to be considered when designing social media systems to verify and counter rumors. “In practice, designers need to devote significant efforts to understanding the effects of perspective taking and location, as shown in the current work, and develop techniques to mitigate negative influences of unproved information in social media.”

For more on Yasu’s work, see:

  • Using Crowdsourcing to Counter False Rumors on Social Media During Crises [Link]

Using #Mythbuster Tweets to Tackle Rumors During Disasters

The massive floods that swept through Queensland, Australia in 2010/2011 put an area almost twice the size of the United Kingdom under water. And now, a year later, Queensland braces itself for even worse flooding:

More than 35,000 tweets with the hashtag #qldfloods were posted during the height of the flooding (January 10-16, 2011). One of the most active Twitter accounts belonged to the Queensland Police Service Media Unit: @QPSMedia. Tweets from (and to) the Unit were “overwhelmingly focussed on providing situational information and advice” (1). Moreover, tweets between @QPSMedia and followers were “topical and to the point, significantly involving directly affected local residents” (2). @QPSMedia also “introduced innovations such as the #Mythbuster series of tweets, which aimed to intervene in the spread of rumor and disinformation” (3).

On the evening of January 11, @QPSMedia began to post a series of tweets with #Mythbuster in direct response to rumors and misinformation circulating on Twitter. Along with official notices to evacuate, these #Mythbuster tweets were the most widely retweeted @QPSMedia messages. They were especially successful. Here is a sample: “#mythbuster: Wivenhoe Dam is NOT about to collapse! #qldfloods”; “#mythbuster: There is currently NO fuel shortage in Brisbane. #qldfloods.”

This kind of pro-active intervention reminds me of the #fakesandy hashtag and FEMA’s rumor control initiative, both used during Hurricane Sandy. I expect to see greater use of this approach by professional emergency responders in future disasters. There’s no doubt that @QPSMedia will provide this service again with the coming floods, and it appears that @QLDonline is already doing so (above tweet). Brisbane’s City Council has also launched this Crowdmap marking the latest road closures, flood areas and sandbag locations. Hoping everyone in Queensland stays safe!

In the meantime, here are some relevant statistics on the crisis tweets posted during the 2010/2011 floods in Queensland:

  • 50-60% of #qldfloods messages were retweets (passing along existing messages, and thereby making them more visible); 30-40% of messages contained links to further information elsewhere on the Web.
  • During the crisis, a number of Twitter users dedicated themselves almost exclusively to retweeting #qldfloods messages, acting as amplifiers of emergency information and thereby increasing its reach.
  • #qldfloods tweets largely managed to stay on topic and focussed predominantly on sharing directly relevant situational information, advice, news media and multimedia reports.
  • Emergency services and media organisations were amongst the most visible participants in #qldfloods, especially because of the widespread retweeting of their messages.
  • More than one in every five shared links in the #qldfloods dataset was to an image hosted on one of several image-sharing services; users overwhelmingly depended on Twitpic and other Twitter-centric image-sharing services to upload and distribute the photographs taken on their smartphones and digital cameras.
  • The tenor of tweets during the latter days of the immediate crisis shifted more strongly towards organising volunteering and fundraising efforts: tweets containing situational information and advice, and news media and multimedia links were retweeted disproportionately often.
  • Less topical tweets were far less likely to be retweeted.

The Problem with Crisis Informatics Research

My colleague ChaTo at QCRI recently shared some interesting thoughts on the challenges of crisis informatics research vis-a-vis Twitter as a source of real-time data. The way he drew out the issue was clear, concise and informative. So I’ve replicated his diagram below.

What Emergency Managers Need: actionable tweets that provide situational awareness relevant to decision-making.
What People Tweet: tweets posted during a crisis that are freely available via Twitter’s API (a very small fraction of the Twitter Firehose).
What Computers Can Do: the computational ability of today’s algorithms to parse and analyze natural language at scale.

A: The small fraction of tweets containing valuable information for emergency responders that computer systems are able to extract automatically.
B: Tweets that are relevant to disaster response but are not able to be analyzed in real-time by existing algorithms due to computational challenges (e.g. data processing is too intensive, or requires artificial intelligence systems that do not exist yet).
C: Tweets that can be analyzed by current computing systems, but do not meet the needs of emergency managers.
D: Tweets that, if they existed, could be analyzed by current computing systems, and would be very valuable for emergency responders—but people do not write such tweets.
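One way to make the A–D regions above concrete is to treat the diagram’s three circles as sets and label each region with simple set operations (a toy sketch with made-up tweet IDs):

```python
# Three hypothetical sets of tweet IDs, one per circle in ChaTo's diagram.
needed     = {1, 2, 3, 4}   # what emergency managers need
tweeted    = {2, 3, 5, 6}   # what people actually tweet
computable = {3, 4, 6, 7}   # what today's algorithms can extract

A = needed & tweeted & computable       # relevant, exists, and machine-extractable
B = (needed & tweeted) - computable     # relevant and exists, but too hard to analyze
C = (tweeted & computable) - needed     # analyzable, but not useful to responders
D = (needed & computable) - tweeted     # useful and analyzable, but nobody tweets it

print(A, B, C, D)   # {3} {2} {6} {4}
```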

These limitations are not just academic. They make it more challenging to develop next-generation humanitarian technologies. So one question that naturally arises is this: How can we expand the size of A? One way is for governments to implement policies that expand access to mobile phones and the Internet, for example.

Area C is where the vast majority of social media companies operate today, collecting business intelligence and performing sentiment analysis for private sector companies by combining natural language processing and machine learning methodologies. But this analysis rarely focuses on tweets posted during a major humanitarian crisis. Reaching out to these companies to let them know they could make a difference during disasters would help to expand the size of A + C.

Finally, Area D is composed of information that would be very valuable for emergency responders and that could be automatically extracted from tweets, but that Twitter users are simply not posting during emergencies (for now). Here, government and humanitarian organizations can develop policies to incentivise disaster-affected communities to tweet about the impact of a hazard and resulting needs in a way that is actionable, for example. This is what the Philippine Government did during Typhoon Pablo.

Now recall that the circle “What People Tweet About” is actually a very small fraction of all posted tweets. The advantage of this small sample of tweets is that they are freely available via Twitter’s API. But said API limits the number of downloadable tweets to just a few thousand per day. (For comparative purposes, there were over 20 million tweets posted during Hurricane Sandy). Hence the need for data philanthropy for humanitarian response.

I would be grateful for your feedback on these ideas and the conceptual framework proposed by ChaTo. The point to remember, as noted in this earlier post, is that today’s challenges are not static; they can be addressed and overcome to various degrees. In other words, the sizes of the circles can and will change.

 

To Tweet or Not To Tweet During a Disaster?

Yes, only a small percentage of tweets generated during a disaster are directly relevant and informative for disaster response. No, this doesn’t mean we should dismiss Twitter as a source for timely, disaster-related information. Why? Because our efforts ought to focus on how that small percentage of informative tweets can be increased. What incentives or policies can be put in place? The following tweets by the Filipino government may shed some light.

The above tweet was posted three days before Typhoon Bopha (designated Pablo locally) made landfall in the Philippines. In the tweet below, the government directly and publicly encourages Filipinos to use the #PabloPH hashtag and to follow the Philippine Atmospheric, Geophysical & Astronomical Services Administration (PAGASA) twitter feed, @dost_pagasa, which has over 400,000 followers and also links to this official Facebook page.

The government’s official Twitter handle (@govph) is also retweeting tweets posted by The Presidential Communications Development and Strategic Planning Office (@PCDCSO). This office is the “chief message-crafting body of the Office of the President.” In one such retweet (below), the office encourages those on Twitter to use different hashtags for different purposes (relief vs rescue). This mimics the use of official emergency numbers for different needs, e.g., police, fire, ambulance, etc.

Given this kind of enlightened disaster response leadership, one would certainly expect that the quality of tweets received will be higher than without government endorsement. My team and I at QCRI are planning to analyze these tweets to determine whether or not this is the case. In the meantime, I expect we’ll see more examples of self-organized disaster response efforts using these hashtags, as per the earlier floods in August, which I blogged about here: Crowdsourcing Crisis Response following the Philippine Floods. This tech-savvy self-organization dynamic is important since the government itself may be unable to follow up on every tweeted request.

Launching a Library of Crisis Hashtags on Twitter

I recently posted the following question on the CrisisMappers list-serve: “Does anyone know whether a list of crisis hashtags exists?”

There are several reasons why such a hashtag list would be of added value to the CrisisMappers community and beyond. First, an analysis of Twitter hashtags used during crises over the past few years could be quite insightful; interesting new patterns may be evolving. Second, the resulting analysis could be used as a guide to find (and create) new hashtags when future crises unfold. Third, a library of hashtags would make it easier to collect historical datasets of crisis information shared on Twitter for the purposes of analysis & social computing research. To be sure, without this data, developing more sophisticated machine learning platforms like the Twitter Dashboard for the Humanitarian Cluster System would be a serious challenge indeed.
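As a rough illustration of that third use case (a minimal sketch with hypothetical file names, not an actual QCRI pipeline), a hashtag library makes it straightforward to filter an archive of tweets down to crisis-related content:

```python
import csv

def load_hashtag_library(path):
    """Load a one-hashtag-per-row CSV, e.g. exported from the shared spreadsheet."""
    with open(path, newline="") as f:
        return {row[0].strip().lower() for row in csv.reader(f) if row}

def filter_by_crisis_hashtags(tweets, hashtags):
    """Keep only tweets that mention at least one known crisis hashtag."""
    return [t for t in tweets if any(tag in t.lower() for tag in hashtags)]

# Hypothetical inputs; in practice the library would come from the shared spreadsheet.
library = {"#qldfloods", "#sandy", "#pabloph"}   # or load_hashtag_library("crisis_hashtags.csv")
archive = ["#qldfloods road closed at Ipswich", "nice weather in Brisbane today"]
print(filter_by_crisis_hashtags(archive, library))   # ['#qldfloods road closed at Ipswich']
```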

After posting my question on CrisisMappers and Twitter, it was clear that no such library existed. So my colleague Sara Farmer launched a Google Spreadsheet to crowdsource an initial list. Since I was working on a similar list, I’ve created a combined spreadsheet which is available and editable here. Please do add any other crisis hashtags you may know about so we can make this the most comprehensive and up-to-date resource available to everyone. Thank you!

Whilst doing this research, I came across two potentially interesting and helpful hashtag websites: Hashonomy.com and Hashtags.org.

Become a (Social Media) Data Donor and Save a Life

I was recently in New York where I met up with my colleague Fernando Diaz from Microsoft Research. We were discussing the uses of social media in humanitarian crises and the various constraints of social media platforms like Twitter vis-a-vis their Terms of Service. And then this occurred to me: we have organ donation initiatives and organ donor cards that many of us carry around in our wallets. So why not become a “Data Donor” as well in the event of an emergency? After all, it has long been recognized that access to information during a crisis is as important as access to food, water, shelter and medical aid.

This would mean having a setting that gives others the right, during a crisis and for a limited time, to use your public tweets or Facebook status updates for the express purpose of supporting emergency response operations, such as live crisis maps. Perhaps switching this setting on would also come with the provision that the user confirms that s/he will not knowingly spread false or misleading information as part of their data donation. Of course, the other option is to simply continue doing what many have been doing all along, i.e., keep using social media updates for humanitarian response regardless of whether or not they violate the various Terms of Service.