Category Archives: Crowdsourcing

Using AIDR to Collect and Analyze Tweets from Chile Earthquake

Wish you had a better way to make sense of Twitter during disasters than this?

Type in a keyword like #ChileEarthquake in Twitter’s search box above and you’ll see more tweets than you can possibly read in a day, let alone keep up with for more than a few minutes. Wish there were an easy, free and open source solution? Well, you’ve come to the right place. My team and I at QCRI are developing the Artificial Intelligence for Disaster Response (AIDR) platform to do just this. Here’s how it works:

First you login to the AIDR platform using your own Twitter handle (click images below to enlarge):

AIDR login

You’ll then see your collection of tweets (if you already have any). In my case, you’ll see I have three. The first is a collection of English language tweets related to the Chile Earthquake. The second is a collection of Spanish tweets. The third is a collection of more than 3,000,000 tweets related to the missing Malaysia Airlines plane. A preliminary analysis of these tweets is available here.

AIDR collections

Let’s look more closely at my Chile Earthquake 2014 collection (see below, click to enlarge). I’ve collected about a quarter of a million tweets in the past 30 hours or so. The label “Downloaded tweets (since last re-start)” simply refers to the number of tweets I’ve collected since adding a new keyword or hashtag to my collection. I started the collection yesterday at 5:39am my time (yes, I’m an early bird). Under “Keywords” you’ll see all the hashtags and keywords I’ve used to search for tweets related to the earthquake in Chile. I’ve also specified the geographic region I want to collect tweets from. Don’t worry, you don’t actually have to enter geographic coordinates when you set up your own collection; you simply highlight (on a map) the area you’re interested in and AIDR does the rest.

AIDR - Chile Earthquake 2014

You’ll also note in the above screenshot that I’ve chosen to collect only tweets in English, but you can collect tweets in all languages if you’d like, or just a select few. Finally, the Collaborators section simply lists the colleagues I’ve added to my collection. This gives them the ability to add new keywords/hashtags and to download the tweets collected, as shown below (click to enlarge). More specifically, collaborators can download the most recent 100,000 tweets (and also share the link with others). The 100K tweet limit is based on Twitter’s Terms of Service (ToS). If collaborators want all the tweets, Twitter’s ToS allows for sharing the TweetIDs for an unlimited number of tweets.

AIDR download CSV

So that’s the AIDR Collector. We also have the AIDR Classifier, which helps you make sense of the tweets you’re collecting (in real-time). That is, your collection of tweets doesn’t stop, it continues growing, and as it does, you can make sense of new tweets as they come in. With the Classifier, you simply teach AIDR to classify tweets into whatever topics you’re interested in, like “Infrastructure Damage”, for example. To get started with the AIDR Classifier, simply return to the “Details” tab of our Chile collection. You’ll note the “Go To Classifier” button on the far right:

AIDR go to Classifier

Clicking on that button allows you to create a Classifier, say on the topic of disaster damage in general. So you simply create a name for your Classifier, in this case “Disaster Damage” and then create Tags to capture more details with respect to damage-related tweets. For example, one Tag might be, say, “Damage to Transportation Infrastructure.” Another could be “Building Damage.” In any event, once you’ve created your Classifier and corresponding tags, you click Submit and find your way to this page (click to enlarge):

AIDR Classifier Link

You’ll notice the public link for volunteers. That’s basically the interface you’ll use to teach AIDR. If you want to teach AIDR by yourself, you can certainly do so. You also have the option of “crowdsourcing the teaching” of AIDR. Clicking on the link will take you to the page below.

AIDR to MicroMappers

So, I called my Classifier “Message Contents,” which is not particularly insightful; I should have labeled it something like “Humanitarian Information Needs.” But bear with me and let’s click on that Classifier. This will take you to the following Clicker on MicroMappers:

MicroMappers Clicker

Now this is not the most awe-inspiring interface you’ve ever seen (at least I hope not); the reason being that this is simply our very first version. We’ll be providing different “skins” like the official MicroMappers skin (below) as well as a skin that allows you to upload your own logo, for example. In the meantime, note that AIDR shows every tweet to at least three different volunteers. Only if all three volunteers agree on how to classify a given tweet does AIDR take it into consideration when learning. In other words, AIDR wants to ensure that humans are really sure about how to classify a tweet before it decides to learn from that lesson. Incidentally, the MicroMappers smartphone app for iPhone and Android will be available in the next few weeks. But I digress.

Yolanda TweetClicker4
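The agreement rule described above can be sketched in a few lines of Python. This is a simplified illustration of the idea, not AIDR's actual code:

```python
from collections import Counter

def consensus_label(volunteer_labels, required_agreement=3):
    """Accept a tweet's tag only when enough volunteers agree unanimously.

    Per the post, AIDR shows each tweet to at least three volunteers and
    only learns from it when all of them pick the same tag.
    """
    if len(volunteer_labels) < required_agreement:
        return None  # not enough judgments collected yet
    counts = Counter(volunteer_labels)
    _, top_votes = counts.most_common(1)[0]
    if top_votes == len(volunteer_labels):  # unanimity
        return volunteer_labels[0]
    return None  # volunteers disagree: do not learn from this tweet

# Three unanimous volunteers: the tweet becomes training data.
assert consensus_label(["Building Damage"] * 3) == "Building Damage"
# Any disagreement: the tweet is discarded for learning purposes.
assert consensus_label(["Building Damage", "Building Damage", "Not Relevant"]) is None
```

Requiring unanimity trades coverage for quality: fewer tweets qualify as training data, but those that do are far less noisy.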

As you and/or your volunteers classify tweets based on the Tags you created, AIDR starts to learn—hence the AI (Artificial Intelligence) in AIDR. AIDR begins to recognize that all the tweets you classified as “Infrastructure Damage” are indeed similar. Once you’ve tagged enough tweets, AIDR will decide that it’s time to leave the nest and fly on its own. In other words, it will start to auto-classify incoming tweets in real time. (At present, AIDR can auto-classify some 30,000 tweets per minute; compare this to the peak rate of 16,000 tweets per minute observed during Hurricane Sandy.)
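To make the learning step concrete, here is a toy word-count classifier that returns a label plus a confidence score. AIDR's real classifiers are considerably more sophisticated, so treat this purely as an illustration of the underlying idea:

```python
import math
from collections import Counter

class TinyNaiveBayes:
    """Minimal multinomial Naive Bayes over tweet words.

    Purely illustrative: AIDR's actual classifiers are more sophisticated.
    """

    def fit(self, tweets, labels):
        self.class_counts = Counter(labels)
        self.word_counts = {c: Counter() for c in self.class_counts}
        self.vocab = set()
        for text, c in zip(tweets, labels):
            words = text.lower().split()
            self.word_counts[c].update(words)
            self.vocab.update(words)

    def predict(self, tweet):
        """Return (best_label, confidence) using Laplace-smoothed counts."""
        log_scores = {}
        total_examples = sum(self.class_counts.values())
        for c in self.class_counts:
            total_words = sum(self.word_counts[c].values())
            logp = math.log(self.class_counts[c] / total_examples)
            for w in tweet.lower().split():
                logp += math.log((self.word_counts[c][w] + 1)
                                 / (total_words + len(self.vocab)))
            log_scores[c] = logp
        # Normalize log scores into a rough confidence value.
        m = max(log_scores.values())
        exp_scores = {c: math.exp(s - m) for c, s in log_scores.items()}
        best = max(exp_scores, key=exp_scores.get)
        return best, exp_scores[best] / sum(exp_scores.values())

nb = TinyNaiveBayes()
nb.fit(
    ["bridge collapsed downtown", "roads flooded and blocked",
     "sending prayers to chile", "thinking of everyone affected"],
    ["Infrastructure Damage", "Infrastructure Damage",
     "Not Damage", "Not Damage"],
)
label, confidence = nb.predict("the main bridge is blocked")
```

With just four training tweets, the classifier already leans toward “Infrastructure Damage” for the query above because it shares words like “bridge” and “blocked” with that class.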

Of course, AIDR’s first solo “flights” won’t always go smoothly. But not to worry, AIDR will let you know when it needs a little help. Every tweet that AIDR auto-tags comes with a confidence level. That is, AIDR will let you know: “I am 80% sure that I correctly classified this tweet.” If AIDR has trouble with a tweet, i.e., if its confidence level is 65% or below, then it will send the tweet to you (and/or your volunteers) so it can learn from how you classify that particular tweet. In other words, the more tweets you classify, the more AIDR learns, and the higher AIDR’s confidence levels get. Fun, huh?
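The confidence-based routing just described can be sketched as follows. The 65% threshold comes from the post, but the classifier and volunteer interfaces are assumptions made for this sketch:

```python
CONFIDENCE_THRESHOLD = 0.65  # the post's cutoff: 65% or below goes to humans

def route_tweet(tweet, classifier, ask_volunteers):
    """Auto-tag confident predictions; send uncertain ones to volunteers.

    `classifier.predict` returns (label, confidence) and `ask_volunteers`
    collects a human tag; both interfaces are assumptions for this sketch.
    """
    label, confidence = classifier.predict(tweet)
    if confidence > CONFIDENCE_THRESHOLD:
        return {"tweet": tweet, "tag": label,
                "confidence": confidence, "source": "machine"}
    # Low confidence: a human tags it, and that answer can be fed back
    # into training so the classifier's confidence improves over time.
    return {"tweet": tweet, "tag": ask_volunteers(tweet),
            "confidence": None, "source": "human"}

class StubClassifier:
    """Hypothetical stand-in for an AIDR classifier."""
    def predict(self, tweet):
        return ("Infrastructure Damage", 0.9 if "bridge" in tweet else 0.4)

auto = route_tweet("the bridge collapsed", StubClassifier(), lambda t: "Human Tag")
manual = route_tweet("so scared right now", StubClassifier(), lambda t: "Not Relevant")
```

This is the classic active-learning loop: the machine handles what it is sure about and asks humans exactly where its uncertainty is highest.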

To view the results of the machine tagging, simply click on the View/Download tab, as shown below (click to enlarge). The page shows you the latest tweets that have been auto-tagged along with the Tag label and the confidence score. (Yes, this too is the first version of that interface; we’ll make it more user-friendly in the future, not to worry.) In any event, you can download the auto-tagged tweets in a CSV file and also share the download link with your colleagues for analysis and so on. At some point in the future, we hope to provide a simple data visualization output page so that you can easily see interesting data trends.

AIDR Results

So that’s basically all there is to it. If you want to learn more about how it all works, you might fancy reading this research paper (PDF). In the meantime, I’ll simply add that you can re-use your Classifiers. If (when?) another earthquake strikes Chile, you won’t have to start from scratch. You can auto-tag incoming tweets immediately with the Classifier you already have. Plus, you’ll be able to share your classifiers with your colleagues and partner organizations if you like. In other words, we’re envisaging an “App Store” of Classifiers based on different hazards and different countries. The more we re-use our Classifiers, the more accurate they will become. Everybody wins.

And voila, that is AIDR (at least our first version). If you’d like to test the platform and/or want the tweets from the Chile Earthquake, simply get in touch!

bio

Note:

  • We’re adapting AIDR so that it can also classify text messages (SMS).
  • AIDR Classifiers are language specific. So if you speak Spanish, you can create a classifier to tag all Spanish language tweets/SMS that refer to disaster damage, for example. In other words, AIDR does not only speak English : )

Launching a Search and Rescue Challenge for Drone / UAV Pilots

My colleague Timothy Reuter (of AidDroids fame) kindly invited me to co-organize the Drone/UAV Search and Rescue Challenge for the DC Drone User Group. The challenge will take place on May 17th near Marshall in Virginia. The rules for the competition are based on the highly successful Search/Rescue challenge organized by my new colleague Chad with the North Texas Drone User Group. We’ll pretend that a person has gone missing by scattering (over a wide area) various clues such as pieces of clothing & personal effects. Competitors will use their UAVs to collect imagery of the area and will have 45 minutes after flying to analyze the imagery for clues. The full set of rules for our challenge is listed here but may change slightly as we get closer to the event.

searchrescuedrones

I want to try something new with this challenge. While previous competitions have focused exclusively on the use of drones/UAVs for the “Search” component of the challenge, I want to introduce the option of also engaging in the “Rescue” part. How? If UAVs identify a missing person, then why not provide that person with immediate assistance while waiting for the Search and Rescue team to arrive on site? The UAV could drop a small and light-weight first aid kit, or small water bottle, or even a small walkie talkie. Enter my new colleague Euan Ramsay who has been working on a UAV payloader solution for Search and Rescue; see the video demo below. Euan, who is based in Switzerland, has very kindly offered to share several payloader units for our UAV challenge. So I’ll be meeting up with him next month to take the units back to DC for the competition.

Another area I’d like to explore for this challenge is the use of crowdsourcing to analyze the aerial imagery & video footage. As noted here, the University of Central Lancashire used crowdsourcing in their UAV Search and Rescue pilot project last summer. This innovative “crowdsearching” approach is also being used to look for Malaysia Flight 370 that went missing several weeks ago. I’d really like to have this crowdsourcing element be an option for the DC Search & Rescue challenge.

UAV MicroMappers

My team and I at QCRI have developed a platform called MicroMappers, which can easily be used to crowdsource the analysis of UAV pictures and videos. The United Nations (OCHA) used MicroMappers in response to Typhoon Yolanda last year to crowdsource the tagging of pictures posted on Twitter. Since then we’ve added video tagging capability. So one scenario for the UAV challenge would be for competitors to upload their imagery/videos to MicroMappers and have digital volunteers look through this content for clues of our fake missing person.

In any event, I’m excited to be collaborating with Timothy on this challenge and will share updates on iRevolution on how all this pans out.


See also:

  • Using UAVs for Search & Rescue [link]
  • Crowdsourcing Analysis of UAV Imagery for Search and Rescue [link]
  • How UAVs are Making a Difference in Disaster Response [link]
  • Grassroots UAVs for Disaster Response [link]

Results of the Crowdsourced Search for Malaysia Flight 370 (Updated)

Update: More than 3 million volunteers thus far have joined the crowdsourcing efforts to locate the missing Malaysian Airlines plane. These digital volunteers have viewed over a quarter-of-a-billion micro-maps and have tagged almost 3 million features in these satellite maps. Source of update.

Malaysian authorities have now gone on record to confirm that Flight 370 was hijacked, which reportedly explains why contact with the passenger jet abruptly ceased a week ago. The Search & Rescue operations now involve 13 countries around the world and over 100 ships, helicopters and airplanes. The costs of this massive operation must easily be running into the millions of dollars.

FlightSaR

Meanwhile, a free crowdsourcing platform once used by digital volunteers to search for Genghis Khan’s Tomb and displaced populations in Somalia (video below) has been deployed to search high-resolution satellite imagery for signs of the missing airliner. This is not the first time that crowdsourced satellite imagery analysis has been used to find a missing plane but this is certainly the highest profile operation yet, which may explain why the crowdsourcing platform used for the search (Tomnod) reportedly crashed for over a dozen hours since the online search began. (Note that Zooniverse can easily handle this level of traffic). Click on the video below to learn more about the crowdsourced search for Genghis Khan and displaced peoples in Somalia.

NatGeoVideo

Having current, high-resolution satellite imagery is almost as good as having your own helicopter. So the digital version of these search operations includes tens of thousands of digital helicopters, whose virtual pilots are covering over 2,000 square miles of the Gulf of Thailand right from their own computers. They’re doing this entirely for free, around the clock and across multiple time zones. This is what Digital Humanitarians have been doing ever since the 2010 Haiti Earthquake, and most recently in response to Typhoon Yolanda.

Tomnod has just released the top results of the crowdsourced digital search efforts, which are displayed in the short video below. Like other microtasking platforms, Tomnod uses triangulation to calculate areas of greatest consensus by the crowd. This is explained further here. Note: The example shown in the video is NOT a picture of Flight 370 but perhaps of an airborne Search & Rescue plane.
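One simple way to implement this kind of consensus-by-triangulation is to bucket volunteer tags into grid cells and rank cells by how many distinct volunteers tagged them. The sketch below is an illustration of the principle, not Tomnod's actual algorithm:

```python
from collections import defaultdict

def consensus_areas(tags, cell_size=0.01, min_volunteers=3):
    """Rank map cells by how many distinct volunteers tagged them.

    `tags` is an iterable of (volunteer_id, lat, lon) tuples. Bucketing
    tags into fixed-size grid cells is a simplification; Tomnod's real
    pipeline is more sophisticated.
    """
    cells = defaultdict(set)
    for volunteer, lat, lon in tags:
        cell = (round(lat / cell_size), round(lon / cell_size))
        cells[cell].add(volunteer)  # a set, so repeat tags don't inflate counts
    ranked = [(len(v), cell) for cell, v in cells.items() if len(v) >= min_volunteers]
    return sorted(ranked, reverse=True)
```

Counting distinct volunteers rather than raw tags is the key design choice: it means one enthusiastic volunteer clicking the same spot repeatedly cannot manufacture consensus on their own.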

While looking for evidence of the missing airliner is like looking for the proverbial needle in a massive stack of satellite images, perhaps the biggest value-added of this digital search lies in identifying where the aircraft is most definitely not located—that is, approaching this crowdsourced operation as a process of elimination. Professional imagery analysts can very easily and quickly review images tagged by the crowd, even if they are mistakenly tagged as depicting wreckage. In other words, the crowd can provide the first level filter so that expert analysts don’t waste their time looking at thousands of images of bare oceans. Basically, if the mandate is to leave no stone unturned, then the crowd can do that very well.

In sum, crowdsourcing can improve the signal-to-noise ratio so that experts can focus more narrowly on analyzing the potential signals. This process may not be perfect just yet but it can be refined and improved. (Note that professionals also get it wrong, like Chinese analysts did with this satellite image of the supposed Malaysian airliner.)

If these digital efforts continue and Flight 370 has indeed been hijacked, then this will certainly be the first time that crowdsourced satellite imagery analysis is used to find a hijacked aircraft. The latest satellite imagery uploaded by Tomnod is no longer focused on bodies of water but rather land. The blue strips below (left) show the area that the new satellite imagery covers.

Tomnod New Imagery 2

Some important questions will need to be addressed if this operation is indeed extended. What if the hijackers make contact and order the cessation of all offline and online Search & Rescue operations? Would volunteers be considered “digital combatants,” potentially embroiled in political conflict in which the lives of 227 hostages are at stake?


Note: The Google Earth file containing the top results of the search is available here.

See also: Analyzing Tweets on Malaysia Flight #MH370 [link]

Using Social Media to Predict Economic Activity in Cities

Economic indicators in most developing countries are often outdated. A new study suggests that social media may provide useful economic signals when traditional economic data is unavailable. In “Taking Brazil’s Pulse: Tracking Growing Urban Economies from Online Attention” (PDF), the authors accurately predict the GDPs of 45 Brazilian cities by analyzing data from a popular micro-blogging platform (Yahoo Meme). To make these predictions, the authors used the concept of glocality, which notes that “economically successful cities tend to be involved in interactions that are both local and global at the same time.” The results of the study reveal that “a city’s glocality, measured with social media data, effectively signals the city’s economic well-being.”
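As a toy illustration only (this is not the paper's formula), one crude way to operationalize glocality is to reward cities whose interactions are a balanced mix of local and global ties:

```python
def glocality_score(interactions, home_city):
    """Crude, illustrative stand-in for the paper's glocality concept
    (NOT the authors' formula): a city scores highest when its
    interactions are a mix of local and global ties.

    `interactions` is a list of (city_a, city_b) pairs involving home_city.
    """
    if not interactions:
        return 0.0
    local = sum(1 for a, b in interactions if a == b == home_city)
    p_local = local / len(interactions)
    # The product of local and global shares peaks at a 50/50 mix,
    # so purely local or purely global cities both score zero.
    return p_local * (1 - p_local)
```

A city whose users interact only among themselves, or only with the outside world, scores zero here; the paper's insight is that economically successful cities sit somewhere in between.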

The authors are currently expanding their work by predicting social capital for these 45 cities based on social media data. As iRevolution readers will know, I’ve blogged extensively on using social media to measure social capital footprints at the city and sub-city level. So I’ve contacted the authors of the study and look forward to learning more about their research. As they rightly note:

“There is growing interest in using digital data for development opportunities, since the number of people using social media is growing rapidly in developing countries as well. Local impacts of recent global shocks – food, fuel and financial – have proven not to be immediately visible and trackable, often unfolding ‘beneath the radar of traditional monitoring systems’. To tackle that problem, policymakers are looking for new ways of monitoring local impacts [...].”



Using Crowd Computing to Analyze UAV Imagery for Search & Rescue Operations

My brother recently pointed me to this BBC News article on the use of drones for Search & Rescue missions in England’s Lake District, one of my favorite areas of the UK. The picture below is one I took during my most recent visit. In my earlier blog post on the use of UAVs for Search & Rescue operations, I noted that UAV imagery & video footage could be quickly analyzed using a microtasking platform (like MicroMappers, which we used following Typhoon Yolanda). As it turns out, an enterprising team at the University of Central Lancashire has been using microtasking as part of their UAV Search & Rescue exercises in the Lake District.

Lake District

Every year, the Patterdale Mountain Rescue Team assists hundreds of injured and missing persons in the North of the Lake District. “The average search takes several hours and can require a large team of volunteers to set out in often poor weather conditions.” So the University of Central Lancashire teamed up with the Mountain Rescue Team to demonstrate that UAV technology coupled with crowdsourcing can reduce the time it takes to locate and rescue individuals.

The project, called AeroSee Experiment, worked as follows. The Mountain Rescue service receives a simulated distress call. As they plan their Search & Rescue operation, the University team dispatches their UAV to begin the search. Using live video-streaming, the UAV automatically transmits pictures back to the team’s website where members of the public can tag pictures that members of the Mountain Rescue service should investigate further. These tagged pictures are then forwarded to “the Mountain Rescue Control Center for a final opinion and dispatch of search teams.” Click to enlarge the diagram below.

AeroSee

Members of the crowd would simply log on to the AeroSee website and begin tagging. Although the experiment is over, you can still do a Practice Run here. Below is a screenshot of the microtasking interface (click to enlarge). One picture at a time is displayed. If the picture displays potentially important clues, then the digital volunteer points to said area of the picture and types in why they believe the clue they’re pointing at might be important.

AeroSee MT2

The results were impressive. A total of 335 digital volunteers looked through 11,834 pictures and the “injured” walker (UAV image below) was found within 69 seconds of the picture being uploaded to the microtasking website. The project team subsequently posted this public leaderboard to acknowledge all volunteers who participated, listing their scores and levels of accuracy for feedback purposes.

Aero MT3

Upon further review of the data and results, the project team concluded that the experiment was a success and that digital Search & Rescue volunteers were able to “home in on the location of our missing person before the drones had even landed!” The texts added to the tagged images were also very descriptive, which helped the team “locate the casualty very quickly from the more tentative tags on other images.”

If the area being surveyed during a Search & Rescue operation is fairly limited, then using the crowd to process UAV images is quick and straightforward, especially if the crowd is relatively large. We have over 400 digital humanitarian volunteers signed up for MicroMappers (launched in November 2013) and hope to grow this to 1,000+ in 2014. But for much larger areas, like Kruger National Park, one would need far more volunteers. Kruger covers 7,523 square miles compared to the Lake District’s 885 square miles.

kruger-gate-sign

One answer to this need for more volunteers could be the good work that my colleagues over at Zooniverse are doing. Launched in February 2009, Zooniverse has a unique volunteer base of one million volunteers. Another solution is to use machine computing to prioritize the flight paths of UAVs in the first place, i.e., use advanced algorithms to considerably reduce the search area by ruling out areas where missing people or other objects of interest (like rhinos in Kruger) are highly unlikely to be, based on weather, terrain, season and other data.

This is the area that my colleague Tom Snitch works in. As he noted in this recent interview (PDF), “We want to plan a flight path for the drone so that the number of unprotected animals is as small as possible.” To do this, he and his team use “exquisite mathematics and complex algorithms” to learn how “animals, rangers and poachers move through space and time.” In the case of Search & Rescue, ruling out areas that are too steep and impossible for humans to climb or walk through could go a long way toward reducing the search area, not to mention the search time.
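The "ruling out" step can be pictured as filtering a search grid and then flying the most promising feasible cells first. The slope threshold and data fields below are illustrative assumptions, not Tom Snitch's actual model:

```python
def prioritize_cells(cells, max_slope_deg=35.0):
    """Shrink a search grid to feasible cells, most likely first.

    `cells` maps cell id -> {"slope": degrees, "prior": likelihood that
    the missing person is there}. The threshold and fields are
    illustrative assumptions, not an actual flight-planning model.
    """
    # Rule out terrain too steep for a person to have walked into.
    feasible = {cid: c for cid, c in cells.items() if c["slope"] <= max_slope_deg}
    # Fly the UAV over the most likely remaining cells first.
    return sorted(feasible, key=lambda cid: feasible[cid]["prior"], reverse=True)
```

Even this crude two-step filter captures the point of the paragraph: every infeasible cell removed is flight time and volunteer attention saved for the areas that actually matter.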


See also:

  • Using UAVs for Search & Rescue [link]
  • MicroMappers: Microtasking for Disaster Response [link]
  • Results of MicroMappers Response to Typhoon Yolanda [link]
  • How UAVs are Making a Difference in Disaster Response [link]
  • Crowdsourcing Evaluation of Sandy Building Damage [link]

Rapid Disaster Damage Assessments: Reality Check

The Multi-Cluster/Sector Initial Rapid Assessment (MIRA) is the methodology used by UN agencies to assess and analyze humanitarian needs within two weeks of a sudden onset disaster. A detailed overview of the process, methodologies and tools behind MIRA is available here (PDF). These reports are particularly insightful when comparing them with the processes and methodologies used by digital humanitarians to carry out their rapid damage assessments (typically done within 48-72 hours of a disaster).

MIRA PH

Take the November 2013 MIRA report for Typhoon Haiyan in the Philippines. I am really impressed by how transparent the report is vis-à-vis the very real limitations behind the assessment. For example:

  • “The barangays [districts] surveyed do not constitute a representative sample of affected areas. Results are skewed towards more heavily impacted municipalities [...].”
  • “Key informant interviews were predominantly held with barangay captains or secretaries and they may or may not have included other informants including health workers, teachers, civil and worker group representatives among others.”
  • “Barangay captains and local government staff often needed to make their best estimate on a number of questions and therefore there’s considerable risk of potential bias.”
  • “Given the number of organizations involved, assessment teams were not trained in how to administer the questionnaire and there may have been confusion on the use of terms or misrepresentation on the intent of the questions.”
  • “Only in a limited number of questions did the MIRA checklist contain before and after questions. Therefore to correctly interpret the information it would need to be cross-checked with available secondary data.”

In sum: the data collected was not representative; the process of selecting interviewees was biased, given that the selection was based on a convenience sample; interviewees had to estimate (guesstimate?) the answers to several questions, introducing additional bias in the data; since assessment teams were not trained to administer the questionnaire, inter-coder reliability is limited, which in turn limits the ability to compare survey results; and the data still needs to be validated against secondary data.

I do not share the above to criticize, only to relay what the real world of rapid assessments resembles when you look “under the hood”. What is striking is how similar the above challenges are to those that digital humanitarians have been facing when carrying out rapid damage assessments. And yet, I distinctly recall rather pointed criticisms leveled by professional humanitarians against groups using social media and crowdsourcing for humanitarian response back in 2010 & 2011. These criticisms dismissed social media reports as being unrepresentative, unreliable, fraught with selection bias, etc. (Some myopic criticisms continue to this day). I find it rather interesting that many of the shortcomings attributed to crowdsourced social media reports are also true of traditional information collection methodologies like MIRA.

The fact is this: no data or methodology is perfect. The real world is messy, both off- and online. Being transparent about these limitations is important, especially for those who seek to combine both off- and online methodologies to create more robust and timely damage assessments.


Yes, I’m Writing a Book (on Digital Humanitarians)

I recently signed a book deal with Taylor & Francis Press. The book, which is tentatively titled “Digital Humanitarians: How Big Data is Changing the Face of Disaster Response,” is slated to be published next year. The book will chart the rise of digital humanitarian response from the Haiti Earthquake to 2015, highlighting critical lessons learned and best practices. To this end, the book will draw on real-world examples of digital humanitarians in action to explain how they use new technologies and crowdsourcing to make sense of “Big (Crisis) Data”. In sum, the book will describe how digital humanitarians & humanitarian technologies are together reshaping the humanitarian space and what this means for the future of disaster response. The purpose of this book is to inspire and inform the next generation of (digital) humanitarians while serving as a guide for established humanitarian organizations & emergency management professionals who wish to take advantage of this transformation in humanitarian response.

2025

The book will thus consolidate critical lessons learned in digital humanitarian response (such as the verification of social media during crises) so that members of the public along with professionals in both international humanitarian response and domestic emergency management can improve their own relief efforts in the face of “Big Data” and rapidly evolving technologies. The book will also be of interest to academics and students who wish to better understand methodological issues around the use of social media and user-generated content for disaster response; or how technology is transforming collective action and how “Big Data” is disrupting humanitarian institutions, for example. Finally, this book will also speak to those who want to make a difference; to those of you who may have little to no experience in humanitarian response but who still wish to help others affected by disasters—even if you happen to be thousands of miles away. You are the next wave of digital humanitarians and this book will explain how you can indeed make a difference.

The book will not be written in a technical or academic writing style. Instead, I’ll be using a more “storytelling” form of writing combined with a conversational tone. This approach is perfectly compatible with the clear documentation of critical lessons emerging from the rapidly evolving digital humanitarian space. This conversational writing style is not at odds with the need to explain the more technical insights being applied to develop next generation humanitarian technologies. Quite the contrary, I’ll be using intuitive examples & metaphors to make the most technical details not only understandable but entertaining.

While this journey is just beginning, I’d like to express my sincere thanks to my mentors for their invaluable feedback on my book proposal. I’d also like to express my deep gratitude to my point of contact at Taylor & Francis Press for championing this book from the get-go. Last but certainly not least, I’d like to sincerely thank the Rockefeller Foundation for providing me with a residency fellowship this Spring in order to accelerate my writing.

I’ll be sure to provide an update when the publication date has been set. In the meantime, many thanks for being an iRevolution reader!


Video: Humanitarian Response in 2025

I gave a talk on “The future of Humanitarian Response” at UN OCHA’s Global Humanitarian Policy Forum (#aid2025) in New York yesterday. More here for context. A similar version of the talk is available in the video presentation below.

Some of the discussions that ensued during the Forum were frustrating albeit an important reality check. Some policy makers still think that disaster response is about them and their international humanitarian organizations. They are still under the impression that aid does not arrive until they arrive. And yet, empirical research in the disaster literature points to the fact that the vast majority of lives saved during disasters are the result of local agency, not external intervention.

In my talk (and video above), I note that local communities will increasingly become tech-enabled first responders, thus taking pressure off the international humanitarian system. These tech-savvy local communities already exist. And they already respond to both “natural” (and manmade) disasters, as noted in my talk vis-à-vis the information products produced by tech-savvy local Filipino groups. So my point about the rise of tech-enabled self-help was a more diplomatic way of conveying to traditional humanitarian groups that humanitarian response in 2025 will continue to happen with or without them; and perhaps increasingly without them.

This explains why I see OCHA’s Information Management (IM) Team increasingly taking on the role of “Information DJ”, mixing both formal and informal data sources for the purposes of both formal and informal humanitarian response. But OCHA will certainly not be the only DJ in town nor will they be invited to play at all “info events”. So the earlier they learn how to create relevant info mixes, the more likely they’ll still be DJ’ing in 2025.


Combining Radio, SMS and Advanced Computing for Disaster Response

I’m headed to the Philippines this week to collaborate with the UN Office for the Coordination of Humanitarian Affairs (OCHA) on humanitarian crowdsourcing and technology projects. I’ll be based in the OCHA Offices in Manila, working directly with colleagues Andrej Verity and Luis Hernando to support their efforts in response to Typhoon Yolanda. One project I’m exploring in this respect is a novel radio-SMS-computing initiative that my colleague Anahi Ayala (Internews) and I began drafting during ICCM 2013 in Nairobi last week. I’m sharing the approach here to solicit feedback before I land in Manila.

The “Radio + SMS + Computing” project is firmly grounded in GSMA’s official Code of Conduct for the use of SMS in Disaster Response. I have also drawn on the Bellagio Big Data Principles when writing up the ins and outs of this initiative with Anahi. The project is first and foremost a radio-based initiative that seeks to answer the information needs of disaster-affected communities.

The project: Local radio stations in the Philippines would create and broadcast radio programs inviting local communities to serve as “community journalists” to describe how the Typhoon has impacted their communities. The radio stations would provide a free SMS short-code and invite said communities to text in their observations. Each radio station would include in their broadcast a unique 2-letter identifier and would ask those texting in to start their SMS with that identifier. They would also emphasize that text messages should not include any Personally Identifiable Information (PII) or location information. Those messages that do include PII would be deleted.
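The PII-screening step described above could be sketched as follows. This is a minimal, hypothetical illustration using phone-number patterns only; an actual deployment would need to screen for names, addresses and other identifiers, and the regex here is only a rough approximation of Philippine mobile formats.

```python
import re

# Rough pattern for Philippine mobile numbers (hypothetical; real screening
# would cover names, addresses and other identifiers as well).
PHONE_PATTERN = re.compile(r"(?:\+?63|0)\d{9,10}")

def screen_pii(messages):
    """Return only the messages that contain no obvious phone number."""
    return [m for m in messages if not PHONE_PATTERN.search(m)]

clean = screen_pii([
    "MN bridge to Tacloban impassable",
    "CB call me at 09171234567 for details",
])
print(clean)  # the second message is dropped
```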

Text messages sent to the SMS short code would be automatically triaged by radio station (using the 2-letter identifier) and forwarded to the respective radio stations via SMS. (At this point, few local radio stations have web access in the disaster-affected areas). These radio stations would be funded to create radio programs based on the SMS’s received. These programs would conclude by asking local communities to text in their information needs—again using the unique radio identifier as a prefix in the text messages. Radio stations would create follow-up programs to address the information needs texted in by local communities (“news you can use”). This could be replicated on a weekly basis and extended to the post-disaster reconstruction phase.
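The triage step above amounts to routing each SMS by its 2-letter prefix. A minimal sketch, with hypothetical station identifiers (in a real system the grouped messages would be handed to an SMS gateway for forwarding rather than returned):

```python
from collections import defaultdict

# Hypothetical mapping of 2-letter broadcast identifiers to radio stations.
STATIONS = {"MN": "Radio Manila", "TC": "Radio Tacloban"}

def triage(raw_messages):
    """Group incoming SMSs by their 2-letter station prefix."""
    by_station = defaultdict(list)
    for msg in raw_messages:
        prefix, _, body = msg.partition(" ")
        station = STATIONS.get(prefix.upper())
        if station:  # unknown prefixes would be set aside for manual review
            by_station[station].append(body)
    return dict(by_station)

routed = triage([
    "TC roads flooded near the port",
    "MN shelter needed in barangay 5",
])
```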

Yolanda destruction

In parallel, the text messages documenting the impact of the Typhoon at the community level would be categorized by Cluster—such as shelter, health, education, etc. Each classified SMS would then be forwarded to the appropriate Cluster Leads. This is where advanced computing comes in: the application of microtasking and machine learning. Trusted Filipino volunteers would be invited to tag each SMS by Cluster-category (and also translate relevant text messages into English). Once enough text messages have been tagged per category, the use of machine learning classifiers would enable the automatic classification of incoming SMS’s. As explained above, these classified SMS’s would then be automatically forwarded to a designated point of contact at each Cluster Agency.
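The machine-learning step above, where volunteer-tagged SMSs train a classifier that then labels incoming messages by Cluster, could be sketched with a bare-bones multinomial Naive Bayes. The categories and training messages below are hypothetical, and AIDR's actual classifiers are considerably more sophisticated; this only illustrates the train-then-classify workflow.

```python
import math
from collections import Counter, defaultdict

def train(tagged_sms):
    """tagged_sms: list of (text, cluster) pairs tagged by volunteers."""
    word_counts = defaultdict(Counter)  # cluster -> word frequencies
    cluster_counts = Counter()
    for text, cluster in tagged_sms:
        cluster_counts[cluster] += 1
        word_counts[cluster].update(text.lower().split())
    return word_counts, cluster_counts

def classify(text, word_counts, cluster_counts):
    """Return the most probable cluster for an incoming SMS."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total_msgs = sum(cluster_counts.values())
    scores = {}
    for cluster, count in cluster_counts.items():
        score = math.log(count / total_msgs)
        total_words = sum(word_counts[cluster].values())
        for word in text.lower().split():
            # Laplace smoothing so unseen words don't zero out a cluster
            score += math.log(
                (word_counts[cluster][word] + 1) / (total_words + len(vocab))
            )
        scores[cluster] = score
    return max(scores, key=scores.get)

# Hypothetical volunteer-tagged training messages.
training = [
    ("roof destroyed need tarpaulin", "Shelter"),
    ("house collapsed family sleeping outside", "Shelter"),
    ("clinic out of medicine many injured", "Health"),
    ("children sick no doctor in village", "Health"),
]
wc, cc = train(training)
print(classify("injured people need medicine", wc, cc))  # → Health
```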

This process would be repeated for SMS’s documenting the information needs of local communities. In other words, information needs would be classified by Cluster category and forwarded to Cluster Leads. The latter would share their responses to stated information needs with the radio stations, which in turn would complement their broadcasts with the information provided by the humanitarian community, thus closing the feedback loop.

The radio-SMS project would be strictly opt-in. Radio programs would clearly state that the data sent in via SMS would be fully owned by local communities who could call in or text in at any time to have their SMS deleted. Phone numbers would only be shared with humanitarian organizations if the individuals texting to radio stations consented (via SMS) to their numbers being shared. Inviting communities to act as “citizen journalists” rather than asking them to report their needs may help manage expectations. Radio stations can further manage these expectations during their programs by taking questions from listeners calling in. In addition, the project seeks to limit the number of SMS’s that communities have to send. The greater the amount of information solicited from disaster-affected communities, the more challenging managing expectations may be. The project also makes a point of focusing on local information needs as the primary entry point. Finally, the data collection limits the geographical resolution to the village level for the purposes of data privacy and protection.

AIDR logo

It remains to be seen whether this project gets funded, but I’d welcome any feedback iRevolution readers may have in any event since this approach could also be used in future disasters. In the meantime, my QCRI colleagues and I are looking to modify AIDR to automatically classify SMS’s (in addition to tweets). My UNICEF colleagues already expressed to me their need to automatically classify millions of text messages for their U-Report project, so I believe that many other humanitarian and development organizations will benefit from a free and open source platform for automatic SMS classification. At the technical level, this means adding “batch-processing” to AIDR’s current “streaming” feature. We hope to have an update on this in coming weeks. Note that a batch-processing feature will also allow users to upload their own datasets of tweets for automatic classification. 

Opening Keynote Address at CrisisMappers 2013

Welcome to Kenya, or as we say here, Karibu! This is a special ICCM for me. I grew up in Nairobi; in fact our school bus would pass right by the UN every day. So karibu, welcome to this beautiful country (and continent) that has taught me so much about life. Take “Crowdsourcing,” for example. Crowdsourcing is just a new term for the old African saying “It takes a village.” And it took some hard-working villagers to bring us all here. First, my outstanding organizing committee went way, way above and beyond to organize this village gathering. Second, our village of sponsors made it possible for us to invite you all to Nairobi for this Fifth Annual, International Conference of CrisisMappers (ICCM).

I see many new faces, which is really super, so by way of introduction, my name is Patrick and I develop free and open source next generation humanitarian technologies with an outstanding team of scientists at the Qatar Computing Research Institute (QCRI), one of this year’s co-sponsors.

We’ve already had an exciting two days of pre-conference site visits with our friends from Sisi ni Amani and our co-host Spatial Collective. ICCM participants observed first-hand how GIS, mobile technology and communication projects operate in informal settlements, covering a wide range of topics that include governance, civic education and peacebuilding. In addition, our friend Heather Leson from the Open Knowledge Foundation (OKF) coordinated an excellent set of trainings at the iHub yesterday. So a big thank you to Heather, Sisi ni Amani and Spatial Collective for these outstanding pre-conference events.

This is my 5th year giving opening remarks at ICCM, so some of you will know from previous years that I often take this moment to reflect on the past 12 months. But just reflecting on the past 12 days alone would require its own separate ICCM. I’m referring, of course, to the humanitarian and digital humanitarian response to the devastating Typhoon in the Philippines. This response, which is still ongoing, is unparalleled in terms of the level of collaboration between members of the Digital Humanitarian Network (DHN) and formal humanitarian organizations like UN OCHA and WFP. All of these organizations, both formal and digital, are also members of the CrisisMappers Network.

The Digital Humanitarian Network, or DHN, serves as the official interface between formal humanitarian organizations and global networks of tech-savvy digital volunteers. These digital volunteers provide humanitarian organizations with the skills and surge capacity they often need to make timely sense of “Big (Crisis) Data” during major disasters. By Big Crisis Data, I mean social media content and satellite imagery, for example. The overflow of information generated during disasters can be as paralyzing to humanitarian response as the absence of information. And making sense of this overflow in response to Yolanda has required all hands on deck—i.e., an unprecedented level of collaboration between many members of the DHN.

So I’d like to share with you 2 initial observations from this digital humanitarian response to Yolanda; just 2 points that may be signs of things to come. Local Digital Villages and World Wide (good) Will.

First, there were numerous local digital humanitarians on the ground in the Philippines. These digitally-savvy Filipinos were rapidly self-organizing and launching crisis maps well before any of us outside the Philippines had time to blink. One such group is Rappler, for example.

We (the DHN) reached out to them early on, sharing both our data and volunteers. Remember that “Crowdsourcing” is just a new word for the old African saying that “it takes a village…” and sometimes, it takes a digital village to support humanitarian efforts on the ground. And Rappler is hardly the only local digital community that mobilized in response to Yolanda; there are dozens of digital villages spearheading similar initiatives across the country.

The rise of local digital villages means that the distant future (or maybe not too distant future) of humanitarian operations may become less about the formal “brick-and-mortar” humanitarian organizations and, yes, also less about the Digital Humanitarian Network. Disaster response has always been about local communities self-organizing, and now about local digital communities self-organizing. The majority of lives saved during disasters are attributed to this local agency, not international, external relief. Furthermore, these local digital villages are increasingly the source of humanitarian innovation, so we should pay close attention; we have a lot to learn from these digital villages. Naturally, they too are learning a lot from us.

The second point that struck me occurred when the Standby Volunteer Task Force (SBTF) completed its deployment of MicroMappers on behalf of OCHA. The response from several SBTF volunteers was rather pointed—some were disappointed that the deployment had closed; others were downright upset. What happened next was very interesting; you see, these volunteers simply kept going. They used (hacked) the SBTF Skype Chat for Yolanda (which already had over 160 members) to self-organize and support other digital humanitarian efforts that were still ongoing. So the SBTF Team sent an email to its 1,000+ volunteers with the following subject header: “Closing Yolanda Deployment, Opening Other Opportunities!”

The email provided a list of the most promising ongoing digital volunteer opportunities for the Typhoon response and encouraged volunteers to support whatever efforts they were most drawn to. This second point reveals that a “World Wide (good) Will” exists. People care. This is good! Until recently, when disasters struck in faraway lands, we would watch the news on television wishing we could somehow help. That private wish—that innate human emotion—would perhaps translate into a donation. Today, not only can you donate cash to support those affected by disasters, you can also donate a few minutes of your time to support the relief efforts on the ground thanks to new humanitarian technologies and platforms. In other words, you, me, all of us can now translate our private wishes into direct, online public action, which can support those working in disaster-affected areas including local digital villages.

This surge of World Wide (good) Will explains why SBTF volunteers wanted to continue volunteering for as long as they wished even if our formal digital humanitarian network had phased out operations. And this is beautiful. We should not seek to limit or control this global goodwill or play the professional versus amateur card too quickly. Besides, who are we kidding? We couldn’t control this flood of goodwill even if we wanted to. But, we can embrace this goodwill and channel it. People care, they want to offer their time to help others thousands of miles away. This is beautiful and the kind of world I want to live in. To paraphrase the philosopher Hannah Arendt, the greatest harm in the world is caused not by evil but apathy. So we should cherish the digital goodwill that springs during disasters. This spring is the digital equivalent of mutual aid, of self-help. The global village of digital Good Samaritans is growing.

At the same time, this goodwill, this precious human emotion and the precious time it freely offers can cause more harm than good if it is not channeled responsibly. When international volunteers pour into disaster areas wanting to help, their goodwill can have the opposite effect, especially when they are inexperienced. This is also true of digital volunteers flooding in to help online.

We in the CrisisMappers community have the luxury of having learned a lot about digital humanitarian response since the Haiti Earthquake; we have learned important lessons about data privacy and protection, codes of conduct, the critical information needs of humanitarian organizations and disaster-affected populations, standard operating procedures, and so on. Indeed we now (for the first time) have data protection protocols that address crowdsourcing, social media and digital volunteers thanks to our colleagues at the ICRC. We also have an official code of conduct on the use of SMS for disaster response thanks to our colleagues at GSMA. This year’s World Disaster Report (WDR 2013) also emphasizes the responsible use of next generation humanitarian technologies and the crisis data they manage.

Now, this doesn’t mean that we in the formal (digital) humanitarian sector have figured it all out—far from it. It simply means that we’ve learned a few important and difficult lessons along the way. Unlike newcomers to the digital humanitarian space, we have the benefit of several years of hard experience to draw on when deploying for disasters like Typhoon Yolanda. While sharing these lessons and disseminating them as widely as possible is obviously a must, it is simply not good enough. Guidebooks and guidelines just won’t cut it. We also need to channel the global spring of digital goodwill and distribute it to avoid “flash floods” of goodwill. So what might these goodwill channels look like? Well they already exist in the form of the Digital Humanitarian Network—more specifically the members of the DHN.

These are the channels that focus digital goodwill in support of the humanitarian organizations that physically deploy to disasters. These channels operate using best practices, codes of conduct, protocols, etc., and can be held accountable. At the same time, however, these channels also block the upsurge of goodwill from new digital volunteers—those outside our digital villages. How? Our channels block this World Wide (good) Will by requiring technical expertise to engage with us and/or by requiring an inordinate time commitment. So we should not be surprised if the “World Wide (Good) Will” circumvents our channels altogether, and in so doing causes more harm than good during disasters. Our channels are blocking their engagement and preventing them from joining our digital villages. Clearly we need different channels to focus the World Wide (Good) Will.

Our friends at Humanitarian OpenStreetMap already figured this out two years ago when they set up their microtasking server, making it easier for less tech-savvy volunteers to engage. We need to democratize our humanitarian technologies to responsibly channel the huge surplus of global goodwill that exists online. This explains why my team and I at QCRI are developing MicroMappers and why we deployed the platform in response to OCHA’s request within hours of Typhoon Yolanda making landfall in the Philippines.

This digital humanitarian operation was definitely far from perfect, but it was super simple to use and channeled 208 hours of global goodwill in just a matter of days. Those are 208 hours that did not cause harm. We had volunteers from dozens of countries around the world and from all ages and walks of life offering their time on MicroMappers. OCHA, which had requested this support, channeled the resulting data to their teams on the ground in the Philippines.

These digital volunteers all cared and took the time to try and help others thousands of miles away. The same is true of the remarkable digital volunteers supporting the Humanitarian OpenStreetMap efforts. This is the kind of world I want to live in; a world in which humanitarian technologies harvest global goodwill and channel it to make a difference to those affected by disasters.

So these are two important trends I see moving forward, the rise of well-organized, local digital humanitarian groups, like Rappler, and the rise of World Wide (Good) Will. We must learn from the former, from the local digital villages, and when asked, we should support them as best we can. We should also channel, even amplify the World Wide (Good) Will by democratizing humanitarian technologies and embracing new ways to engage those who want to make a difference. Again, Crowdsourcing is simply a new term for the old African proverb, that it takes a village. Let us not close the doors to that village.

So on this note, I thank *you* for participating in ICCM and for being a global village that cares, both on and offline. Big thanks as well to our current team of sponsors for caring about this community and making sure that our village does continue to meet in person every year. And now for the next 3 days, we have an amazing line-up of speakers, panelists & technologies for you. So please use these days to plot, partner and disrupt. And always remember: be tough on ideas, but gentle on people.

Thanks again, and keep caring.