Category Archives: Social Computing

Live Crisis Map of Disaster Damage Reported on Social Media

Update: See early results of MicroMappers deployment here

Digital humanitarian volunteers have been busy tagging images posted to social media in the aftermath of Typhoon Yolanda. More specifically, they’ve been using the new MicroMappers ImageClicker to rate the level of damage they see in each image. Thus far, they have clicked over 7,000 images. Those tagged as “Mild” and “Severe” damage are then geolocated by members of the Standby Volunteer Task Force (SBTF), who have partnered with GISCorps and ESRI to create this live Crisis Map of the disaster damage tagged using the ImageClicker. The map takes a few seconds to load, so please be patient.

YolandaPH Crisis Map 1

The more pictures are clicked using the ImageClicker, the more populated this crisis map will become. So please help out if you have a few seconds to spare—that’s really all it takes to click an image. If there are no pictures left to click or the system is temporarily offline, then please come back a while later as we’re uploading images around the clock. And feel free to join our list-serve in the meantime if you wish to be notified when humanitarian organizations need your help in the future. No prior experience or training necessary. Anyone who knows how to use a computer mouse can become a digital humanitarian.

The SBTF, GISCorps and ESRI are members of the Digital Humanitarian Network (DHN), which my colleague Andrej Verity and I co-founded last year. The DHN serves as the official interface for direct collaboration between traditional “brick-and-mortar” humanitarian organizations and highly skilled digital volunteer networks. The SBTF Yolanda Team, spearheaded by my colleague Justine Mackinnon, for example, has also produced this map based on the triangulated results of the TweetClicker:

YolandaPH Crisis Map 2
There’s a lot of hype around the use of new technologies and social media for disaster response. So I want to be clear that our digital humanitarian operations in the Philippines have not been perfect. This means that we’re learning (a lot) by doing (a lot). Such is the nature of innovation. We don’t have the luxury of locking ourselves up in a lab for a year to build the ultimate humanitarian technology platform. This means we have to work extra, extra hard when deploying new platforms during major disasters—because not only do we do our very best to carry out Plan A, but we often have to carry out Plans B and C in parallel just in case Plan A doesn’t pan out. Perhaps Samuel Beckett summed it up best: “Ever tried. Ever failed. No matter. Try again. Fail again. Fail better.”


Digital Humanitarians: From Haiti Earthquake to Typhoon Yolanda

We’ve been able to process and make sense of a quarter of a million tweets in the aftermath of Typhoon Yolanda. Using both AIDR (still under development) and Twitris, we were able to collect these tweets in real-time and use automated algorithms to filter for both relevancy and uniqueness. The resulting ~55,000 tweets were then uploaded to MicroMappers (still under development). Digital volunteers from the world over used this humanitarian technology platform to tag tweets and now images from the disaster (click image below to enlarge). At one point, volunteers tagged some 1,500 tweets in just 10 minutes. In parallel, we used machine learning classifiers to automatically identify tweets referring to both urgent needs and offers of help. In sum, the response to Typhoon Yolanda is the first to make full use of advanced computing, i.e., both human computing and machine computing to make sense of Big (Crisis) Data.

ImageClicker YolandaPH

We’ve come a long way since the tragic Haiti Earthquake. There was no way we would’ve been able to pull off the above with the Ushahidi platform. We weren’t able to keep up with even a few thousand tweets a day back then, not to mention images. (Incidentally, MicroMappers can also be used to tag SMS). Furthermore, we had no trained volunteers on standby back when the quake struck. Today, not only do we have a highly experienced network of volunteers from the Standby Volunteer Task Force (SBTF) who serve as first (digital) responders, we also have an ecosystem of volunteers from the Digital Humanitarian Network (DHN). In the case of Typhoon Yolanda, we also had a formal partner, the UN Office for the Coordination of Humanitarian Affairs (OCHA), that officially requested digital humanitarian support. In other words, our efforts are directly in response to clearly articulated information needs. In contrast, the response to Haiti was “supply based” in that we simply pushed out all information that we figured might be of use to humanitarian responders. We did not have a formal partner from the humanitarian sector going into the Haiti operation.

Yolanda Prezi

What this new digital humanitarian operation makes clear is that preparedness, partnerships & appropriate humanitarian technology go a long way to ensuring that our efforts as digital humanitarians add value to the field-based operations in disaster zones. The above Prezi by SBTF co-founder Anahi (click on the image to launch the presentation) gives an excellent overview of how these digital humanitarian efforts are being coordinated in response to Yolanda. SBTF Core Team member Justine Mackinnon is spearheading the bulk of these efforts.

While there are many differences between the digital response to Haiti and Yolanda, several key similarities have also emerged. First, neither was perfect, meaning that we learned a lot in both deployments; taking a few steps forward, then a few steps back. Such is the path of innovation, learning by doing. Second, as in Haiti, there’s no way we could do this digital response work without Skype. Third, our operations were affected by telecommunications going offline in the hardest hit areas. We saw an 18.7% drop in relevant tweets on Saturday compared to the day before, for example. Fourth, while the (very) new technologies we are deploying are promising, they are still under development and have a long way to go. Fifth, the biggest heroes in the response to Haiti were the volunteers—both from the Haitian Diaspora and beyond. The same is true of Yolanda, with hundreds of volunteers from the world over (including the Philippines and the Diaspora) mobilizing online to offer assistance.

A Filipino humanitarian worker in Quezon City, Philippines, for example, is volunteering her time on MicroMappers. As is a customer care advisor from Eurostar in the UK and a fire officer from Belgium who recruited his uniformed colleagues to join the clicking. We have other volunteer Clickers from Makati (Philippines), Cape Town (South Africa), Canberra & Gold Coast (Australia), Berkeley, Brooklyn, Citrus Heights & Hinesburg (US), Kamloops (Canada), Paris & Marcoussis (France), Geneva (Switzerland), Sevilla (Spain), Den Haag (Holland), Munich (Germany) and Stokkermarke (Denmark) to name just a few! So this is as much a human story as it is one about technology. This is why online communities like MicroMappers are important. So please join our list-serve if you want to be notified when humanitarian organizations need your help.


Typhoon Yolanda: UN Needs Your Help Tagging Crisis Tweets for Disaster Response (Updated)

Final Update 14 [Nov 13th @ 4pm London]: Thank you for clicking to support the UN’s relief operations in the Philippines! We have now completed our mission as digital humanitarian volunteers. The early results of our collective online efforts are described here. Thank you for caring and clicking. Feel free to join our list-serve if you want to be notified when humanitarian organizations need your help again during the next disaster—which we really hope won’t be for a long, long time. In the meantime, our hearts and prayers go out to those affected by this devastating Typhoon.

-

The United Nations Office for the Coordination of Humanitarian Affairs (OCHA) just activated the Digital Humanitarian Network (DHN) in response to Typhoon Yolanda, which has already been described as possibly one of the strongest Category 5 storms in history. The Standby Volunteer Task Force (SBTF) was thus activated by the DHN to carry out a rapid needs & damage assessment by tagging reports posted to social media. So Ji Lucas and I at QCRI (+ Hemant & Andrew) and Justine Mackinnon from SBTF have launched MicroMappers to microtask the tagging of tweets & images. We need all the help we can get given the volume we’ve collected (and are continuing to collect). This is where you come in!

TweetClicker_PH2

You don’t need any prior experience or training, nor do you need to create an account or even login to use the MicroMappers TweetClicker. If you can read and use a computer mouse, then you’re all set to be a Digital Humanitarian! Just click here to get started. Every tweet will get tagged by 3 different volunteers (to ensure quality control) and those tweets that get identical tags will be shared with our UN colleagues in the Philippines. All this and more is explained in the link above, which will give you a quick intro so you can get started right away. Our UN colleagues need these tags to better understand who needs help and what areas have been affected.
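To give a rough sense of how that triple-tagging quality control works in principle, here is a minimal Python sketch (not the actual MicroMappers code); the tag names and data layout are invented for illustration, and only tweets whose three volunteer tags agree would be forwarded on.

```python
from collections import Counter

# Hypothetical volunteer tags: tweet_id -> tags from 3 different volunteers.
# Tag names are illustrative, not the actual MicroMappers categories.
volunteer_tags = {
    "tweet_001": ["Needs", "Needs", "Needs"],
    "tweet_002": ["Damage", "Not Relevant", "Damage"],
    "tweet_003": ["Not Relevant", "Not Relevant", "Not Relevant"],
}

def consensus_tags(tags_by_tweet, required_agreement=3):
    """Keep only tweets whose tags are identical across all volunteers."""
    agreed = {}
    for tweet_id, tags in tags_by_tweet.items():
        most_common_tag, count = Counter(tags).most_common(1)[0]
        if count >= required_agreement:
            agreed[tweet_id] = most_common_tag
    return agreed

# Only unanimously tagged tweets would be shared with humanitarian partners.
print(consensus_tags(volunteer_tags))  # tweet_001 and tweet_003 survive
```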

ImageClicker YolandaPH

It only takes 3 seconds to tag a tweet or image, so if that’s all the time you have then that’s plenty! Better yet, share this link with family, friends, colleagues, etc., and invite them to tag along. We have also launched the ImageClicker to tag images by level of damage. So please stay tuned. What we need is the World Wide Crowd to mobilize in support of those affected by this devastating disaster. So please spread the word. And keep in mind that this is only the second time we’re using MicroMappers, so we know it is not (yet) perfect : ) Thank you!


p.s. If you wish to receive an alert next time MicroMappers is activated for disaster response, then please join the MicroMappers list-serve here. Thanks!

Previous updates:

Update 1: If you notice that all the tweets (tasks) have been completed, then please check back in 1/2 hour as we’re uploading more tweets on the fly. Thanks!

Update 2: Thanks for all your help! We are getting lots of traffic, so the Clicker is responding very slowly right now. We’re working on improving speed, thanks for your patience!

Update 3: We collected 182,000+ tweets on Friday from 5am-7pm (local time) and have automatically filtered this down to 35,175 tweets based on relevancy and uniqueness. These 35K tweets are being uploaded to the TweetClicker a few thousand tweets at a time. We’ll be repeating all this for just one more day tomorrow (Saturday). Thanks for your continued support!

Update 4: We/you have clicked through all of Friday’s 35K tweets and are currently clicking through today’s 28,202 tweets; we are about 75% of the way through. Many thanks for tagging along with us, please keep up the top class clicking, we’re almost there! (Sunday, 1pm NY time)

Update 5: Thanks for all your help! We’ll be uploading more tweets tomorrow (Monday, November 11th). To be notified, simply join this list-serve. Thanks again! [updated post on Sunday, November 10th at 5.30pm New York]

Update 6: We’ve uploaded more tweets! This is the final stretch, thanks for helping us on this last sprint of clicks!  Feel free to join our list-serve if you want to be notified when new tweets are available, many thanks! If the system says all tweets have been completed, please check again in 1/2hr as we are uploading new tweets around the clock. [updated Monday, November 11th at 9am London]

Update 7 [Nov 11th @ 1pm London]: We’ve just launched the ImageClicker to support the UN’s relief efforts. So please join us in tagging images to provide rapid damage assessments to our humanitarian partners. Our TweetClicker is still in need of your clicks too. If the Clickers are slow, then kindly be patient. If all the tasks are done, please come back in 1/2hr as we’re uploading content to both clickers around the clock. Thanks for caring and helping the relief efforts. An update on the overall digital humanitarian effort is available here.

Update 8 [Nov 11th @ 6.30pm NY]: We’ll be uploading more tweets and images to the TweetClicker & ImageClicker by 7am London on Nov 12th. Thank you very much for supporting these digital humanitarian efforts, the results of which are displayed here. Feel free to join our list-serve if you want to be notified when the Clickers have been fed!

Update 9 [Nov 12th @ 6.30am London]: We’ve fed both our TweetClicker and ImageClicker with new tweets and images. So please join us in clicking away to provide our UN partners with the situational awareness they need to coordinate their important relief efforts on the ground. The results of all our clicks are displayed here. Thank you for helping and for caring. If the Clickers are empty or offline temporarily, please check back again soon for more clicks.

Update 10 [Nov 12th @ 10am New York]: We’re continuing to feed both our TweetClicker and ImageClicker with new tweets and images. So please join us in clicking away to provide our UN partners with the situational awareness they need to coordinate their important relief efforts on the ground. The results of all our clicks are displayed here. Thank you for helping and for caring. If the Clickers are empty or offline temporarily, please check back again soon for more clicks. Try different browsers if the tweets/images are not showing up.

Update 11 [Nov 12th @ 5pm New York]: Only one more day to go! We’ll be feeding our TweetClicker and ImageClicker with new tweets and images by 7am London on the 13th. We will phase out operations by 2pm London, so this is the final sprint. The results of all our clicks are displayed here. Thank you for helping and for caring. If the Clickers are empty or offline temporarily, please check back again soon for more clicks. Try different browsers if the tweets/images are not showing up.

Update 12 [Nov 13th @ 9am London]: This is the last stretch, Clickers! We’ve fed our TweetClicker and ImageClicker with new tweets and images. We’ll be refilling them until 2pm London (10pm Manila) and phasing out shortly thereafter. Given that MicroMappers is still under development, we are pleased with how well this deployment has gone. The results of all our clicks are displayed here. Thank you for helping and for caring. If the Clickers are empty or offline temporarily, please check back again soon for more clicks. Try different browsers if the tweets/images are not showing up.

Update 13 [Nov 13th @ 11am London]: Just 3 hours left! Our UN OCHA colleagues have just asked us to prioritize the ImageClicker, so please focus on that Clicker. We’ll be refilling the ImageClicker until 2pm London (10pm Manila) and phasing out shortly thereafter. Given that MicroMappers is still under development, we are pleased with how well this deployment has gone. The results of all our clicks are displayed here. Thank you for helping and for caring. If the ImageClicker is empty or offline temporarily, please check back again soon for more clicks. Try different browsers if images are not showing up.

Automatically Identifying Eyewitness Reporters on Twitter During Disasters

My colleague Kate Starbird recently shared a very neat study entitled “Learning from the Crowd: Collaborative Filtering Techniques for Identifying On-the-Ground Twitterers during Mass Disruptions” (PDF). As she and her co-authors rightly argue, “most Twitter activity during mass disruption events is generated by the remote crowd.” So can we use advanced computing to rapidly identify Twitter users who are reporting from ground zero? The answer is yes.

twitter-disaster-test

An important indicator of whether or not a Twitter user is reporting from the scene of a crisis is the number of times they are retweeted. During the Egyptian revolution in early 2011, “nearly 30% of highly retweeted Twitter users were physically present at those protest events.” Kate et al. drew on this insight to study tweets posted during the Occupy Wall Street (OWS) protests in September 2011. The authors manually analyzed a sample of more than 2,300 Twitter users to determine which were tweeting from the protests. They found that 4.5% of Twitter users in their sample were actually onsite. Using this dataset as training data, Kate et al. were able to develop a classifier that can automatically identify Twitter users reporting from the protests with an accuracy of just shy of 70%. I expect that more training data could very well help increase this accuracy score. 
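For readers wondering what such a classifier might look like in practice, here is a hedged sketch using scikit-learn. It is not the authors’ model: the features (times retweeted, follower count, ratio of locally specific terms) and the toy numbers are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-user features: [times_retweeted, follower_count, local_term_ratio]
# Labels: 1 = tweeting from the ground, 0 = remote crowd. All values are invented.
X = np.array([
    [120,   800, 0.40],
    [  5, 15000, 0.02],
    [ 60,   300, 0.35],
    [  2,  9000, 0.01],
    [ 90,  1200, 0.50],
    [  1, 20000, 0.00],
])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression()
model.fit(X, y)

# Score a new, unseen user; in a real deployment these features would be
# extracted from the live Twitter stream during the event.
new_user = np.array([[75, 500, 0.30]])
print(model.predict_proba(new_user)[0, 1])  # estimated probability of being on the ground
```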

In any event, “the information resulting from this or any filtering technique must be further combined with human judgment to assess its accuracy.” As the authors rightly note, “this ‘limitation’ fits well within an information space that is witnessing the rise of digital volunteer communities who monitor multiple data sources, including social media, looking to identify and amplify new information coming from the ground.” To be sure, “For volunteers like these, the use of techniques that increase the signal to noise ratio in the data has the potential to drastically reduce the amount of work they must do. The model that we have outlined does not result in perfect classification, but it does increase this signal-to-noise ratio substantially—tripling it in fact.”

I really hope that someone will leverage Kate’s important work to develop a standalone platform that automatically generates a list of Twitter users who are reporting from disaster-affected areas. This would be a very worthwhile contribution to the ecosystem of next-generation humanitarian technologies. In the meantime, perhaps QCRI’s Artificial Intelligence for Disaster Response (AIDR) platform will help digital humanitarians automatically identify tweets posted by eyewitnesses. I’m optimistic since we were able to create a machine learning classifier with an accuracy of 80%-90% for eyewitness tweets. More on this in our recent study.

MOchin - talked to family

One question that remains is how to automatically identify tweets like the one above. This person is not an eyewitness but was likely on the phone with her family, who are closer to the action. How do we develop a classifier to catch these “second-hand” eyewitness reports?


Analyzing Fake Content on Twitter During Boston Marathon Bombings

As iRevolution readers already know, the application of Information Forensics to social media is one of my primary areas of interest. So I’m always on the lookout for new and related studies, such as this one (PDF), which was just published by colleagues of mine in India. The study by Aditi Gupta et al. analyzes fake content shared on Twitter during the Boston Marathon Bombings earlier this year.

bostonstrong

Gupta et al. collected close to 8 million unique tweets posted by 3.7 million unique users between April 15-19th, 2013. The table below provides more details. The authors found that rumors and fake content comprised 29% of the content that went viral on Twitter, while 51% of the content constituted generic opinions and comments. The remaining 20% relayed true information. Interestingly, approximately 75% of fake tweets were propagated via mobile devices, compared to 64% of true tweets.

Table1 Gupta et al

The authors also found that many users with high social reputation and verified accounts were responsible for spreading the bulk of the fake content posted to Twitter. Indeed, the study shows that fake content did not travel rapidly during the first hour after the bombing. Rumors and fake information only went viral after Twitter users with large numbers of followers started propagating the fake content. To this end, “determining whether some information is true or fake, based on only factors based on high number of followers and verified accounts is not possible in the initial hours.”

Gupta et al. also identified close to 32,000 new Twitter accounts created between April 15-19 that also posted at least one tweet about the bombings. About 20% (6,073 accounts) of these new accounts were subsequently suspended by Twitter. The authors found that 98.7% of these suspended accounts did not include the word Boston in their names and usernames. They also note that some of these deleted accounts were “quite influential” during the Boston tragedy. The figure below depicts the number of suspended Twitter accounts created in the hours and days following the blast.

Figure 2 Gupta et al

The authors also carried out some basic social network analysis of the suspended Twitter accounts. First, they removed from the analysis all suspended accounts that did not interact with each other, which left just 69 accounts. Next, they analyzed the network typology of these 69 accounts, which produced four distinct graph structures: Single Link, Closed Community, Star Typology and Self-Loops. These are displayed in the figure below (click to enlarge).

Figure 3 Gupta et al

The two most interesting graphs are the Closed Community and Star Typology graphs—the second and third graphs in the figure above.

Closed Community: Users that retweet and mention each other, forming a closed community as indicated by the high closeness centrality values produced by the social network analysis. “All these nodes have similar usernames too, all usernames have the same prefix and only numbers in the suffixes are different. This indicates that either these profiles were created by same or similar minded people for posting common propaganda posts.” Gupta et al. analyzed the content posted by these users and found that all were “tweeting the same propaganda and hate filled tweet.”

Star Typology: Easily mistakable for the authentic “BostonMarathon” Twitter account, the fake account “BostonMarathons” created plenty of confusion. Many users propagated the fake content posted by the BostonMarathons account. As the authors note, “Impersonation or creating fake profiles is a crime that results in identity theft and is punishable by law in many countries.”

The automatic detection of these network structures on Twitter may enable us to detect and counter fake content in the future. In the meantime, my colleagues and I at QCRI are collaborating with Aditi Gupta et al. to develop a “Credibility Plugin” for Twitter based on this analysis and earlier peer-reviewed research carried out by my colleague ChaTo. Stay tuned for updates.
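As a back-of-the-envelope illustration (and not the method used by Gupta et al.), here is how one might flag star-like structures and densely connected clusters among interacting accounts using the networkx library; the account names and edges below are invented.

```python
import networkx as nx

# Hypothetical retweet/mention interactions among suspended accounts.
edges = [
    ("fakeA1", "fakeA2"), ("fakeA2", "fakeA3"), ("fakeA3", "fakeA1"),  # closed triangle
    ("hub", "s1"), ("hub", "s2"), ("hub", "s3"), ("hub", "s4"),        # star shape
]
G = nx.Graph(edges)

for component in nx.connected_components(G):
    sub = G.subgraph(component)
    n = sub.number_of_nodes()
    degrees = dict(sub.degree())
    max_degree = max(degrees.values())
    # Star: one hub connected to everyone else, spokes connected only to the hub.
    if n > 2 and max_degree == n - 1 and sorted(degrees.values())[:-1] == [1] * (n - 1):
        print("Star-like structure around:", max(degrees, key=degrees.get))
    # Closed community: most possible links among members are present.
    elif nx.density(sub) > 0.5:
        print("Densely connected cluster:", sorted(component))
```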


See also:

  • Boston Bombings: Analyzing First 1,000 Seconds on Twitter [link]
  • Taking the Pulse of the Boston Bombings on Twitter [link]
  • Predicting the Credibility of Disaster Tweets Automatically [link]
  • Auto-Ranking Credibility of Tweets During Major Events [link]
  • Auto-Identifying Fake Images on Twitter During Disasters [link]
  • How to Verify Crowdsourced Information from Social Media [link]
  • Crowdsourcing Critical Thinking to Verify Social Media [link]

World Disaster Report: Next Generation Humanitarian Technology

This year’s World Disaster Report was just released this morning. I had the honor of authoring Chapter 3 on “Strengthening Humanitarian Information: The Role of Technology.” The chapter focuses on the rise of “Digital Humanitarians” and explains how “Next Generation Humanitarian Technology” is used to manage Big (Crisis) Data. The chapter complements the groundbreaking report “Humanitarianism in the Network Age” published by UN OCHA earlier this year.

The key topics addressed in the chapter include:

  • Big (Crisis) Data
  • Self-Organized Disaster Response
  • Crowdsourcing & Bounded Crowdsourcing
  • Verifying Crowdsourced Information
  • Volunteer & Technical Communities
  • Digital Humanitarians
  • Libya Crisis Map
  • Typhoon Pablo Crisis Map
  • Syria Crisis Map
  • Microtasking for Disaster Response
  • MicroMappers
  • Machine Learning for Disaster Response
  • Artificial Intelligence for Disaster Response (AIDR)
  • American Red Cross Digital Operations Center
  • Data Protection and Security
  • Policymaking for Humanitarian Technology

I’m particularly interested in getting feedback on this chapter, so feel free to pose any comments or questions you may have in the comments section below.


See also:

  • What is Big (Crisis) Data? [link]
  • Humanitarianism in the Network Age [link]
  • Predicting Credibility of Disaster Tweets [link]
  • Crowdsourced Verification for Disaster Response [link]
  • MicroMappers: Microtasking for Disaster Response [link]
  • AIDR: Artificial Intelligence for Disaster Response [link]
  • Research Agenda for Next Generation Humanitarian Tech [link]

Humanitarian Crisis Computing 101

Disaster-affected communities are increasingly becoming “digital” communities. That is, they increasingly use mobile technology & social media to communicate during crises. I often refer to this user-generated content as Big (Crisis) Data. Humanitarian crisis computing seeks to rapidly identify informative, actionable and credible content in this growing stack of real-time information. The challenge is akin to finding the proverbial needle in the haystack since the vast majority of reports posted on social media are often not relevant for humanitarian response. This is largely a result of the demand versus supply problem described here.

bd0

In any event, the few “needles” of information that are relevant can relay information that is vital and indeed life-saving for relief efforts—both traditional top-down efforts and more bottom-up grassroots efforts. When disaster strikes, we increasingly see social media traffic explode. We know there are important “pins” of relevant information hidden in this growing stack of information but how do we find them in real-time?

bd2

Humanitarian organizations are ill-equipped to manage the deluge of Big Crisis Data. They tend to sift through the stack of information manually, which means they aren’t able to process more than a small volume of information. This is represented by the dotted green line in the picture below. The challenge of Big Data is often described as one of filter failure: our manual filters cannot manage the large volume, velocity and variety of information posted on social media during disasters. So all the information above the dotted line, Big Data, is completely ignored.

bd3

This is where Advanced Computing comes in. Advanced Computing uses Human and Machine Computing to manage Big Data and reduce filter failure, thus allowing humanitarian organizations to process a larger volume, velocity and variety of crisis information in less time. In other words, Advanced Computing helps us push the dotted green line up the information stack.

bd4

In the early days of digital humanitarian response, we used crowdsourcing to search through the haystack of user-generated content posted during disasters. Note that said content can also include text messages (SMS), like in Haiti. Crowd-sourcing crisis information is not as much fun as the picture below would suggest, however. In fact, crowdsourcing crisis information was (and can still be) quite a mess and a big pain in the haystack. Needless to say, crowdsourcing is not the best filter to make sense of Big Crisis Data.

bd5

Recently, digital humanitarians have turned to microtasking crisis information as described here and here. The UK Guardian and Wired have also written about this novel shift from crowdsourcing to microtasking.

bd6

Microtasking basically turns a haystack into little blocks of stacks. Each micro-stack is then processed by one or more digital humanitarian volunteers. Unlike crowdsourcing, a microtasking approach to filtering crisis information is highly scalable, which is why we recently launched MicroMappers.

bd7

The smaller the micro-stack, the easier the tasks and the faster that they can be carried out by a greater number of volunteers. For example, instead of having 10 people classify 10,000 tweets based on the Cluster System, microtasking makes it very easy for 1,000 people to classify 10 tweets each. The former would take hours while the latter mere minutes. In response to the recent earthquake in Pakistan, some 100 volunteers used MicroMappers to classify 30,000+ tweets in about 30 hours, for example.
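To make that arithmetic concrete, here is a tiny, purely illustrative Python sketch (not MicroMappers code) of how a large batch of tweets can be split into micro-stacks and handed out to many volunteers in parallel.

```python
def micro_stacks(items, stack_size=10):
    """Split a big haystack of items into small micro-stacks."""
    return [items[i:i + stack_size] for i in range(0, len(items), stack_size)]

# 10,000 placeholder tweets split into stacks of 10 tweets each:
tweets = [f"tweet_{i}" for i in range(10_000)]
stacks = micro_stacks(tweets, stack_size=10)
print(len(stacks))  # 1,000 micro-stacks, i.e., one small task per volunteer
print(stacks[0])    # the first micro-stack of 10 tweets
```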

bd8

Machine Computing, in contrast, uses natural language processing (NLP) and machine learning (ML) to “quantify” the haystack of user-generated content posted on social media during disasters. This enables us to automatically identify relevant “needles” of information.

bd9

An example of a Machine Learning approach to crisis computing is the Artificial Intelligence for Disaster Response (AIDR) platform. Using AIDR, users can teach the platform to automatically identify relevant information from Twitter during disasters. For example, AIDR can be used to automatically identify individual tweets that relay urgent needs from a haystack of millions of tweets.
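By way of a simple, hedged illustration of the general idea (not AIDR’s actual code), a text classifier can be trained on a handful of labeled tweets and then applied to new, unseen ones; the example tweets and labels below are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training examples: 1 = urgent need, 0 = not relevant.
tweets = [
    "we urgently need drinking water and medicine in tacloban",
    "families trapped, need rescue boats now",
    "beautiful sunset over manila bay tonight",
    "watching the game with friends",
    "no food or shelter here, please send help",
    "new phone arrived today, so happy",
]
labels = [1, 1, 0, 0, 1, 0]

classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(tweets, labels)

print(classifier.predict(["urgent: we need clean water and blankets"]))  # likely [1]
```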

bd11
The pictures above are taken from the slide deck I put together for a keynote address I recently gave at the Canadian Ministry of Foreign Affairs.


AIDR: Artificial Intelligence for Disaster Response

Social media platforms are increasingly used to communicate crisis information when major disasters strike. Hence the rise of Big (Crisis) Data. Humanitarian organizations, digital humanitarians and disaster-affected communities know that some of this user-generated content can increase situational awareness. The challenge is to identify relevant and actionable content in near real-time to triangulate with other sources and make more informed decisions on the spot. Finding potentially life-saving information in this growing stack of Big Crisis Data, however, is like looking for the proverbial needle in a giant haystack. This is why my team and I at QCRI are developing AIDR.

haystpic_pic

The free and open source Artificial Intelligence for Disaster Response platform leverages machine learning to automatically identify informative content on Twitter during disasters. Unlike the vast majority of related platforms out there, we go beyond simple keyword search to filter for informative content. Why? Because recent research shows that keyword searches can miss over 50% of relevant content posted on Twitter. This is very far from optimal for emergency response. Furthermore, tweets captured via keyword search may not be relevant since words can have multiple meanings depending on context. Finally, keywords are restricted to one language only. Machine learning overcomes all these limitations, which is why we’re developing AIDR.

So how does AIDR work? There are three components of AIDR: the Collector, Trainer and Tagger. The Collector simply allows you to collect and save a collection of tweets posted during a disaster. You can download these tweets for analysis at any time and also use them to create an automated filter using machine learning, which is where the Trainer and Tagger come in. The Trainer allows one or more users to train the AIDR platform to automatically tag tweets of interest in a given collection of tweets. Tweets of interest could include those that refer to “Needs”, “Infrastructure Damage” or “Rumors” for example.

AIDR_Collector

A user creates a Trainer for tweets-of-interest by: 1) Creating a name for their Trainer, e.g., “My Trainer”; 2) Identifying topics of interest such as “Needs”, “Infrastructure Damage”, “Rumors” etc. (as many topics as the user wants); and 3) Classifying tweets by topic of interest. This last step simply involves reading collected tweets and classifying them as “Needs”, “Infrastructure Damage”, “Rumor” or “Other,” for example. Any number of users can participate in classifying these tweets. That is, once a user creates a Trainer, she can classify the tweets herself, or invite her organization to help her classify, or ask the crowd to help classify the tweets, or all of the above. She simply shares a link to her training page with whoever she likes. If she chooses to crowdsource the classification of tweets, AIDR includes a built-in quality control mechanism to ensure that the crowdsourced classification is accurate.

As noted here, we tested AIDR in response to the Pakistan Earthquake last week. We quickly hacked together the user interface displayed below, so functionality rather than design was our immediate priority. In any event, digital humanitarian volunteers from the Standby Volunteer Task Force (SBTF) tagged over 1,000 tweets based on the different topics (labels) listed below. As far as we know, this was the first time that a machine learning classifier was crowdsourced in the context of a humanitarian disaster. Click here for more on this early test.

AIDR_Trainer

The Tagger component of AIDR analyzes the human-classified tweets from the Trainer to automatically tag new tweets coming in from the Collector. This is where the machine learning kicks in. The Tagger uses the classified tweets to learn what kinds of tweets the user is interested in. When enough tweets have been classified (20 minimum), the Tagger automatically begins to tag new tweets by topic of interest. How many classified tweets is “enough”? This will vary but the more tweets a user classifies, the more accurate the Tagger will be. Note that each automatically tagged tweet includes an accuracy score—i.e., the probability that the tweet was correctly tagged by the automatic Tagger.

The Tagger thus displays a list of automatically tagged tweets updated in real-time. The user can filter this list by topic and/or accuracy score—display all tweets tagged as “Needs” with an accuracy of 90% or more, for example. She can also download the tagged tweets for further analysis. In addition, she can share the data link of her Tagger with developers so the latter can import the tagged tweets directly into their own platforms, e.g., MicroMappers, Ushahidi, CrisisTracker, etc. (Note that AIDR already powers CrisisTracker by automating the classification of tweets). In addition, the user can share a display link with individuals who wish to embed the live feed into their websites, blogs, etc.
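To sketch what this Trainer-to-Tagger loop could look like conceptually, here is a toy Python example. The 20-example minimum mirrors the threshold mentioned above, but the data layout, model choice and confidence handling are assumptions for illustration, not AIDR’s actual implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

MIN_TRAINING_EXAMPLES = 20  # auto-tagging only starts once enough tweets are classified

def train_tagger(labeled_tweets):
    """labeled_tweets: list of (tweet_text, topic_label) pairs from the Trainer."""
    if len(labeled_tweets) < MIN_TRAINING_EXAMPLES:
        raise ValueError("Need at least 20 human-classified tweets before auto-tagging.")
    texts, topics = zip(*labeled_tweets)
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(texts, topics)
    return model

def tag_new_tweets(model, tweets, min_confidence=0.9):
    """Auto-tag incoming tweets, keeping only tags above a confidence threshold."""
    tagged = []
    for tweet in tweets:
        probabilities = model.predict_proba([tweet])[0]
        best = probabilities.argmax()
        if probabilities[best] >= min_confidence:
            tagged.append((tweet, model.classes_[best], probabilities[best]))
    return tagged
```

In this sketch, filtering the output by topic and accuracy score (e.g., only “Needs” above 90%) simply amounts to choosing the label and the min_confidence threshold before displaying or downloading the results.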

In sum, AIDR is an artificial intelligence engine developed to power consumer applications like MicroMappers. Any number of other tools can also be added to the AIDR platform, like the Credibility Plugin for Twitter that we’re collaborating on with partners in India. Added to AIDR, this plugin will score individual tweets based on the probability that they convey credible information. To this end, we hope AIDR will become a key node in the nascent ecosystem of next-generation humanitarian technologies. We plan to launch a beta version of AIDR at the 2013 CrisisMappers Conference (ICCM 2013) in Nairobi, Kenya this November.

In the meantime, we welcome any feedback you may have on the above. And if you want to help as an alpha tester, please get in touch so I can point you to the Collector tool, which you can start using right away. The other AIDR tools will be open to the same group of alpha testers in the coming weeks. For more on AIDR, see also this article in Wired.

AIDR_logo

The AIDR project is a joint collaboration with the United Nations Office for the Coordination of Humanitarian Affairs (OCHA). Other organizations that have expressed an interest in AIDR include the International Committee of the Red Cross (ICRC), American Red Cross (ARC), Federal Emergency Management Agency (FEMA), New York City’s Office for Emergency Management and their counterpart in the City of San Francisco. 


Note: In the future, AIDR could also be adapted to take in Facebook status updates and text messages (SMS).

Developing MicroFilters for Digital Humanitarian Response

Filtering—or the lack thereof—presented the single biggest challenge when we tested MicroMappers last week in response to the Pakistan Earthquake. As my colleague Clay Shirky notes, the challenge with “Big Data” is not information overload but rather filter failure. We need to make damned sure that we don’t experience filter failure again in future deployments. To ensure this, I’ve decided to launch a stand-alone and fully interoperable platform called MicroFilters. My colleague Andrew Ilyas will lead the technical development of the platform with support from Ji Lucas. Our plan is to launch the first version of MicroFilters before the CrisisMappers conference (ICCM 2013) in November.

MicroFilters

A web-based solution, MicroFilters will allow users to upload their own Twitter data for automatic filtering purposes. Users will have the option of uploading this data using three different formats: text, CSV and JSON. Once uploaded, users can elect to perform one or more automatic filtering tasks from this menu of options:

[   ]  Filter out retweets
[   ]  Filter for unique tweets
[   ]  Filter tweets by language [English | Other | All]
[   ]  Filter for unique image links posted in tweets [Small | Medium | Large | All]
[   ]  Filter for unique video links posted in tweets [Short | Medium | Long | All]
[   ]  Filter for unique image links in news articles posted in tweets  [S | M | L | All]
[   ]  Filter for unique video links in news articles posted in tweets [S | M | L | All]

Note that “unique image and video links” refer to the long URLs not shortened URLs like bit.ly. After selecting the desired filtering option(s), the user simply clicks on the “Filter” button. Once the filtering is completed (a countdown clock is displayed to inform the user of the expected processing time), MicroFilters provides the user with a download link for the filtered results. The link remains live for 10 minutes after which the data is automatically deleted. If a CSV file was uploaded for filtering, the file format for download is also in CSV format; likewise for text and JSON files. Note that filtered tweets will appear in reverse chronological order (assuming time-stamp data was included in the uploaded file) when downloaded. The resulting file of filtered tweets can then be uploaded to MicroMappers within seconds.
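Since MicroFilters is still on the drawing board, here is only a rough Python sketch of what a few of these filters (retweet removal, duplicate removal and image-link extraction) could look like; the field names assume a simple CSV/JSON export with a “text” column and are not the platform’s actual schema.

```python
import re

IMAGE_EXTENSIONS = (".jpg", ".jpeg", ".png", ".gif")
URL_PATTERN = re.compile(r"https?://\S+")

def remove_retweets(tweets):
    """Drop anything that looks like a retweet."""
    return [t for t in tweets if not t["text"].lower().startswith("rt @")]

def unique_tweets(tweets):
    """Keep only the first occurrence of each (lightly normalized) tweet text."""
    seen, unique = set(), []
    for t in tweets:
        key = re.sub(r"\s+", " ", t["text"].strip().lower())
        if key not in seen:
            seen.add(key)
            unique.append(t)
    return unique

def unique_image_links(tweets):
    """Collect unique long URLs that point directly at image files."""
    links = set()
    for t in tweets:
        for url in URL_PATTERN.findall(t["text"]):
            if url.lower().endswith(IMAGE_EXTENSIONS):
                links.add(url)
    return sorted(links)

# Example: tweets loaded from a CSV/JSON export as dicts with a "text" field.
sample = [{"text": "RT @someone: flooding in tacloban http://example.com/photo1.jpg"},
          {"text": "flooding in tacloban http://example.com/photo1.jpg"}]
print(unique_image_links(unique_tweets(remove_retweets(sample))))
```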

In sum, MicroFilters will be invaluable for future deployments of MicroMappers. Solving the “filter failure” problem will enable digital humanitarians to process far more relevant data and in a more timely manner. Since MicroFilters will be a standalone platform, anyone else will also have access to these free and automatic filtering services. In the meantime, however, we very much welcome feedback, suggestions and offers of help, thank you!


Results of MicroMappers Response to Pakistan Earthquake (Updated)

Update: We’re developing & launching MicroFilters to improve MicroMappers.

About 47 hours ago, the UN Office for the Coordination of Humanitarian Affairs (OCHA) activated the Digital Humanitarian Network (DHN) in response to the Pakistan Earthquake. The activation request was for 48 hours, so the deployment will soon phase out. As already described here, the Standby Volunteer Task Force (SBTF) teamed up with QCRI to carry out an early test of MicroMappers, which was not set to launch until next month. This post shares some initial thoughts on how the test went along with preliminary results.

Pakistan Quake

During ~40 hours, 109 volunteers from the SBTF and the public tagged just over 30,000 tweets that were posted during the first 36 hours or so after the quake. We were able to automatically collect these tweets thanks to our partnership with GNIP and specifically filtered for said tweets using half-a-dozen hashtags. Given the large volume of tweets collected, we did not require that each tweet be tagged at least 3 times by individual volunteers to ensure data quality control. Out of these 30,000+ tweets, volunteers tagged a total of 177 tweets as noting needs or infrastructure damage. A review of these tweets by the SBTF concluded that none were actually informative or actionable.

Just over 350 pictures were tweeted in the aftermath of the earthquake. These were uploaded to the ImageClicker for tagging purposes. However, none of the pictures captured evidence of infrastructure damage. In fact, the vast majority were unrelated to the earthquake. This was also true of pictures published in news articles. Indeed, we used an automated algorithm to identify all tweets with links to news articles; this algorithm would then crawl these articles for evidence of images. We found that the vast majority of these automatically extracted pictures were related to politics rather than infrastructure damage.
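To give a sense of how that image-extraction step could be implemented (the algorithm we actually used is not reproduced here, so treat this as an assumed approach), one could follow the links in each tweet and pull the image tags out of the linked pages, for example with requests and BeautifulSoup.

```python
import re
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

URL_PATTERN = re.compile(r"https?://\S+")

def extract_article_images(tweet_text, timeout=10):
    """Follow links in a tweet and collect image URLs found in the linked pages."""
    image_urls = set()
    for url in URL_PATTERN.findall(tweet_text):
        try:
            response = requests.get(url, timeout=timeout)
            response.raise_for_status()
        except requests.RequestException:
            continue  # skip dead or slow links
        soup = BeautifulSoup(response.text, "html.parser")
        for img in soup.find_all("img"):
            src = img.get("src")
            if src:
                image_urls.add(urljoin(response.url, src))
    return sorted(image_urls)

# Hypothetical usage: the returned URLs could then be uploaded to the
# ImageClicker for volunteer tagging.
# print(extract_article_images("quake damage photos http://example.com/article"))
```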

Pakistan Quake2

A few preliminary thoughts and reflections from this first test of MicroMappers. First, however, a big, huge, gigantic thanks to my awesome QCRI team: Ji Lucas, Imran Muhammad and Kiran Garimella; to my outstanding colleagues on the SBTF Core Team including but certainly not limited to Jus Mackinnon, Melissa Elliott, Anahi A. Iaccuci, Per Aarvik & Brendan O’Hanrahan (bios here); to the amazing SBTF volunteers and members of the general public who rallied to tag tweets and images—in particular our top 5 taggers: Christina KR, Leah H, Lubna A, Deborah B and Joyce M! Also bravo to volunteers in the Netherlands, UK, US and Germany for being the most active MicroMappers; and last but certainly not least, big, huge and gigantic thanks to Andrew Ilyas for developing the algorithms to automatically identify pictures and videos posted to Twitter.

So what did we learn over the past 48 hours? First, the disaster-affected region is a remote area of south-western Pakistan with a very light social media footprint, so there was practically no user-generated content directly relevant to needs and damage posted on Twitter during the first 36 hours. In other words, there were no needles to be found in the haystack of information. This is in stark contrast to our experience when we carried out a very similar operation following Typhoon Pablo in the Philippines. Obviously, if there’s little to no social media footprint in a disaster-affected area, then monitoring social media is of no use at all to anyone. Note, however, that MicroMappers could also be used to tag 30,000+ text messages (SMS). (Incidentally, since the earthquake struck around 12noon local time, there was only about 18 hours of daylight during the 36-hour period for which we collected the tweets).

Second, while the point of this exercise was not to test our pre-processing filters, it was clear that the single biggest problem was ultimately with the filtering. Our goal was to upload as many tweets as possible to the Clickers and stress-test the apps. So we only filtered tweets using a number of general hashtags such as #Pakistan. Furthermore, we did not filter out any retweets, which probably accounted for 2/3 of the data, nor did we filter by geography to ensure that we were only collecting and thus tagging tweets from users based in Pakistan. This was a major mistake on our end. We were so pre-occupied with testing the actual Clickers that we simply did not pay attention to the pre-processing of tweets. This was equally true of the images uploaded to the ImageClicker.

Pakistan Quake 3

So where do we go from here? Well we have pages and pages worth of feedback to go through and integrate in the next version of the Clickers. For me, one of the top priorities is to optimize our pre-processing algorithms and ensure that the resulting output can be automatically uploaded to the Clickers. We have to refine our algorithms and make damned sure that we only upload unique tweets and images to our Clickers. At most, volunteers should not see the same tweet or image more than 3 times for verification purposes. We should also be more careful with our hashtag filtering and also consider filtering by geography. Incidentally, when our free & open source AIDR platform becomes operational in November, we’ll also have the ability to automatically identify tweets referring to needs, reports of damage, and much, much more.

In fact, AIDR was also tested for the very first time. SBTF volunteers tagged about 1,000 tweets, and just over 130 of the tags enabled us to create an accurate classifier that can automatically identify whether a tweet is relevant for disaster response efforts specifically in Pakistan (80% accuracy). Now, we didn’t apply this classifier on incoming tweets because AIDR uses streaming Twitter data, not static, archived data which is what we had (in the form of CSV files). In any event, we also made an effort to create classifiers for needs and infrastructure damage but did not get enough tags to make these accurate enough. Typically, we need a minimum of 20 or so tags (i.e., examples of actual tweets referring to needs or damage). The more tags, the more accurate the classifier.

The reason there were so few tags, however, is because there were very few to no informative tweets referring to needs or infrastructure damage during the first 36 hours. In any event, I believe this was the very first time that a machine learning classifier was crowdsourced for disaster response purposes. In the future, we may want to first crowdsource a machine learning classifier for disaster relevant tweets and then upload the results to MicroMappers; this would reduce the number of unrelated tweets  displayed on a TweetClicker.

As expected, we have also received a lot of feedback vis-a-vis user experience and the user interface of the Clickers. Speed is at the top of the list. That is, making sure that once I’ve clicked on a tweet/image, the next tweet/image automatically appears. At times, I had to wait more than 20 seconds for the next item to load. We also need to add more progress bars such as the number of tweets or images that remain to be tagged—a countdown display, basically. I could go on and on, frankly, but hopefully these early reflections are informative and useful to others developing next-generation humanitarian technologies. In sum, there is a lot of work to be done still. Onwards!
