
Welcome to the Humanitarian UAV Network

[Image: UAViators logo]

The Humanitarian UAV Network (UAViators) is now live. Click here to access and join the network. Advisors include representatives from 3D Robotics, AirDroids, senseFly & DroneAdventures, OpenRelief, ShadowView Foundation, ICT4Peace Foundation, the United Nations and more. The website provides a unique set of resources, including the most comprehensive case study of humanitarian UAV deployments, a directory of organizations engaged in the humanitarian UAV space and a detailed list of references to keep track of ongoing research in this rapidly evolving area. All of these documents, along with the network’s Code of Conduct—the only one of its kind—are easily accessible here.

[Image: four of the eight UAViators Teams]

The UAViators website also includes 8 action-oriented Teams, four of which are displayed above. The Flight Team, for example, includes both new and highly experienced UAV pilots, while the Imagery Team comprises members interested in imagery analysis. Other teams include the Camera, Legal and Policy Teams. In addition to the Teams page, the site also has a dedicated Operations page to facilitate and coordinate safe and responsible UAV deployments in support of humanitarian efforts. In between deployments, the website’s Global Forum is a place where members share relevant news, events and more. One such event, for example, is the upcoming Drone/UAV Search & Rescue Challenge that UAViators is sponsoring.

When I first announced this initiative, I noted that launching such a network would at first raise more questions than answers. I welcome the challenge, though, and believe that members of UAViators are well placed to facilitate the safe and responsible use of UAVs in a variety of humanitarian contexts.

Acknowledgements: Many thanks to colleagues and members of the Advisory Board who provided invaluable feedback and guidance in the lead-up to this launch. The Humanitarian UAV Network is the result of a collective vision and effort.


See also:

  • How UAVs are Making a Difference in Disaster Response [link]
  • Humanitarians Using UAVs for Post Disaster Recovery [link]
  • Grassroots UAVs for Disaster Response [link]
  • Using UAVs for Search & Rescue [link]
  • Crowdsourcing Analysis of UAV Imagery for Search and Rescue [link]

Grassroots UAVs for Disaster Response

I was recently introduced to a new initiative that seeks to empower grassroots communities to deploy their own low-cost xUAVs; the “x” stands for expendable. The purpose of this initiative? To support locally led disaster response efforts and, in so doing, transfer math, science and engineering skills to local communities. The initiative is a partnership between California State University (Long Beach), the University of Hawaii, Embry-Riddle, the Philippine Council for Industry, Energy & Emerging Technology Research & Development, Skyeye, Aklan State University and Ateneo de Manila University in the Philippines. The team is heading back to the Philippines next week for their second field mission. This blog post provides a short overview of the project’s approach and the results of their first mission, which ran from December 2013 to February 2014.

[Photo: xUAV1]

The xUAV team is specifically interested in a new category of UAVs: those that are locally available, locally deployable, low-cost, expendable and extremely easy to use. Their first field mission to the Philippines focused on exploring the possibilities. The pictures above and below were kindly shared by the Filipinos engaged in the project—I am very grateful to them for allowing me to share these publicly. Please do not reproduce these pictures without their written permission, thank you.

[Photo: xUAV2]

I spoke at length with one of the xUAV team leads, Ted Ralston, who is heading back to the Philippines for the second field mission. The purpose of this follow-up visit is to shift the xUAV concept from experimental to deployable. One area his students will focus on with the University of Manila is a very user-friendly interface (running on a low-cost tablet) for piloting the xUAVs, so that local communities can simply tag waypoints on a map for the xUAV to fly to automatically. Indeed, this is where civilian UAVs are headed: full automation. A good example of this trend is the new DroidPlanner 2.0 app just released by 3D Robotics. This free app provides powerful features for easily planning autonomous flights. You can even create new flight plans on the fly and edit them onsite.

[Screenshot: DroidPlanner 2.0]

So the xUAV team will focus on developing software for automated take-off and landing, as well as automated in-flight adjustments for wind conditions. The software will also automatically adjust the xUAV’s flight parameters for any added payloads. Any captured imagery would then be easily viewable via touch-screen directly on the low-cost tablet.
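To make the waypoint idea concrete, here is a minimal Python sketch of the tap-a-waypoint flight logic. This is my own illustration, not the xUAV team’s software: the Waypoint type, the goto and position callbacks standing in for an autopilot interface, and the 5-meter arrival tolerance are all assumptions.

    from dataclasses import dataclass
    from math import radians, sin, cos, asin, sqrt

    @dataclass
    class Waypoint:
        lat: float   # degrees
        lon: float   # degrees
        alt: float   # meters above ground

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters between two lat/lon points."""
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 6_371_000 * 2 * asin(sqrt(a))

    def fly_mission(waypoints, goto, position, tolerance_m=5.0):
        """Visit each tapped waypoint in order. `goto` commands the autopilot
        toward a waypoint; `position` reports current (lat, lon). Both are
        hypothetical stand-ins for a real flight-controller interface."""
        for wp in waypoints:
            goto(wp)
            while haversine_m(*position(), wp.lat, wp.lon) >= tolerance_m:
                pass  # in practice: sleep, monitor battery, apply wind corrections

The tablet interface would simply build the waypoints list from map taps and hand it to something like fly_mission; everything else (take-off, landing, wind compensation) sits below this layer in the autopilot.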

[Photo: xUAV3]

One of the team’s top priorities throughout this project is to transfer their skills to young Filipinos, giving them hands-on training in science, math and engineering. An equally important, related priority is developing partnerships with multiple local organizations. We’re familiar with the ideas behind Public Participatory GIS (PPGIS), i.e., the participatory use of geospatial information systems and technologies. The xUAV team seeks to extend this grassroots approach to Public Participatory UAVs.

[Photo: xUAV4]

I’m supporting this xUAV initiative in a number of ways and will be uploading the team’s UAV imagery (videos & still photos) from their upcoming field mission to MicroMappers for some internal testing. I’m particularly interested in user-generated aerial content that is raw rather than pre-processed or stitched together. Why? Because I expect this type of imagery to grow in volume given the very rapid growth of the personal micro-UAV market. For more professionally produced and stitched-together aerial content, an ideal platform is Humanitarian OpenStreetMap’s Tasking Manager, which is tried and tested for satellite imagery and which was recently used to trace processed UAV imagery of Tacloban.


I look forward to following the xUAV team’s efforts and hope to report on the outcome of their second field mission. The xUAV initiative fits very nicely with the goals of the Humanitarian UAV Network (UAViators). We’ll be learning a lot in the coming weeks and months from our colleagues in the Philippines.


Crisis Mapping without GPS Coordinates

I recently spoke with a UK start-up that is doing away with GPS coordinates even though the company focuses on geographic information and maps. The start-up, What3Words, has divided the globe into 57 trillion squares and given each of these 3-by-3 meter areas a unique three-word code. Goodbye long postal addresses and cryptic GPS coordinates. Hello planet.inches.most. The start-up also offers a service called OneWord, which allows you to customize a one-word name for any square. In addition, the company has expanded into other languages such as Spanish, Swedish and Russian, and is now working on adding Arabic, Chinese, Japanese and others by mid-January 2014. Meanwhile, their API lets anyone build new applications that tap this global map of 57 trillion squares.

[Image credit: What3Words]

When I spoke with CEO Chris Sheldrick, he noted that their very first users were emergency response organizations. One group in Australia, for example, is using What3Words as part of their SMS emergency service. “This will let people identify their homes with just three words, ensuring that emergency vehicles can find them as quickly as possible.” Such an approach provides greater accuracy, which is vital in rural areas. “Our ambulances have a terrible time with street addresses, particularly in The Bush.” Moreover, many places in the world have no addresses at all, so What3Words may also be useful for certain ICT4D projects in addition to crisis mapping. The real key to this service is simplicity: communicating three words over the phone, via SMS/Twitter or email is far easier (and less error-prone) than dictating a postal address or a complicated set of GPS coordinates.
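To illustrate just how simple the three-word scheme is to build on, here is a short Python sketch that resolves a three-word address to coordinates. One caveat: it uses What3Words’ current public v3 REST endpoint, which postdates this post, and the API key is a placeholder.

    import requests  # third-party: pip install requests

    API_KEY = "YOUR_W3W_API_KEY"  # placeholder

    def words_to_coordinates(words):
        """Resolve a three-word address, e.g. 'planet.inches.most', to (lat, lng)."""
        resp = requests.get(
            "https://api.what3words.com/v3/convert-to-coordinates",
            params={"words": words, "key": API_KEY},
            timeout=10,
        )
        resp.raise_for_status()
        coords = resp.json()["coordinates"]
        return coords["lat"], coords["lng"]

    # A dispatcher could then route an ambulance with just:
    # lat, lng = words_to_coordinates("planet.inches.most")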

[Image credit: What3Words]

How else do you think this service could be used vis-à-vis disaster response?


Video: Humanitarian Response in 2025

I gave a talk on “The future of Humanitarian Response” at UN OCHA’s Global Humanitarian Policy Forum (#aid2025) in New York yesterday. More here for context. A similar version of the talk is available in the video presentation below.

Some of the discussions that ensued during the Forum were frustrating, albeit an important reality check. Some policymakers still think that disaster response is about them and their international humanitarian organizations. They are still under the impression that aid does not arrive until they arrive. And yet, empirical research in the disaster literature points to the fact that the vast majority of lives saved during disasters is the result of local agency, not external intervention.

In my talk (and the video above), I note that local communities will increasingly become tech-enabled first responders, thus taking pressure off the international humanitarian system. These tech-savvy local communities already exist. And they already respond to both “natural” and manmade disasters, as noted in my talk vis-à-vis the information products produced by tech-savvy local Filipino groups. So my point about the rise of tech-enabled self-help was a more diplomatic way of conveying to traditional humanitarian groups that humanitarian response in 2025 will continue to happen with or without them; and perhaps increasingly without them.

This explains why I see OCHA’s Information Management (IM) Team increasingly taking on the role of “Information DJ,” mixing both formal and informal data sources for both formal and informal humanitarian response. But OCHA will certainly not be the only DJ in town, nor will they be invited to play at all “info events.” So the sooner they learn how to create relevant info mixes, the more likely they’ll still be DJ’ing in 2025.


How UAVs Are Making a Difference in Disaster Response

I visited the University of Torino in 2007 to speak with the team developing UAVs for the World Food Program. Since then, I’ve bought and tested two small UAVs of my own so I can use this new technology to capture aerial imagery during disasters, like the footage below from the Philippines.

UAVs, or drones, have a very strong military connotation for many of us. But so did space satellites before Google Earth brought satellite imagery into our homes and changed our perception of that technology. It stands to reason that UAVs and aerial imagery will follow suit. This explains why I’m a proponent of the Drone Social Innovation Award, which seeks to promote the use of civilian drone technology for the benefit of humanity. I’m on the panel of judges for this award, which is why I reached out to DanOffice IT, a Swiss-based company that deployed two drones in response to Typhoon Yolanda in the Philippines. The drones in question are Huginn X1s, which have a flight time of 25 minutes, a range of 2 kilometers and a maximum altitude of 150 meters.

[Photo: Huginn X1]

I recently spoke with one of the Huginn pilots who was in Tacloban. He flew the drone to survey shelter damage, identify blocked roads and search for bodies in the debris (using thermal imaging cameras mounted on the drone for the latter). The imagery captured also helped to identify appropriate locations to set up camp. When I asked the pilot whether he was surprised by anything during the operation, he noted that road-clearance support was not a use-case he had expected. I’ll be meeting with him in Switzerland in the next few weeks to test-fly a Huginn and explore possible partnerships.

I’d like to see closer collaboration between the Digital Humanitarian Network (DHN) and groups like DanOffice, for example. Providing DHN-member Humanitarian OpenStreetMap (HOTosm) with up-to-date aerial imagery during disasters would be a major win. This was the concept behind OpenAerialMap, first discussed back in 2007. While that initiative has yet to formally launch, Pix4D offers a platform that “converts thousands of aerial images, taken by lightweight UAV or aircraft into geo-referenced 2D mosaics and 3D surface models and point clouds.”

[Photo: Drone Adventures drones]

Drone Adventures used this platform in Haiti with the drones pictured above. The International Organization for Migration (IOM) partnered with Drone Adventures to map over 40 square kilometers of dense urban territory, including several shantytowns in Port-au-Prince. The resulting imagery was “used to count the number of tents and organize a ‘door-to-door’ census of the population, the first step in identifying aid requirements and organizing more permanent infrastructure.” This approach could also be applied to IDP and refugee camps in the immediate aftermath of a sudden-onset disaster. All the data generated by Drone Adventures was made freely available through OpenStreetMap.

If you’re interested in giving “drones for social good” a try, I recommend looking at the DJI Phantom and the Parrot AR.Drone. These are priced between $300 and $600, which beats the $50,000 price tag of the Huginn X1.


Humanitarian Response in 2025

I’ve been invited to give a “very provocative talk” on what humanitarian response will look like in 2025 for the annual Global Policy Forum organized by the UN Office for the Coordination of Humanitarian Affairs (OCHA) in New York. I first explored this question in early 2012 and my colleague Andrej Verity recently wrote up this intriguing piece on the topic, which I highly recommend; intriguing because he focuses a lot on the future of the pre-deployment process, which is often overlooked.


I only have 7 minutes for my talk, so I’m thinking of leading with one or two of the following ideas. I’m very interested in getting feedback from iRevolution readers and welcome additional ideas about what 2025 might look like for OCHA.

•  Situational Awareness: Damage and needs assessments are instantaneous, and 3D crisis maps are updated in real time along with 3Ws information. Global communication networks are hyper-resilient, enabling uninterrupted communications after major disasters. More than 90% of the world’s population generates a readable, geo-referenced, multimedia digital footprint, which is used to augment 3D situational awareness. Fully 100% of news media and citizen journalism content is on the web and automatically translated and analyzed every second. High-resolution satellite and aerial imagery for 90% of the planet is updated and automatically analyzed every minute. Billions of physical sensors provide real-time feedback loops on transportation, infrastructure, public health, weather and environmental dynamics. Big Data analytics and advances in predictive modeling enable situational awareness itself to be forecast, allowing IDP/refugee flows and disease outbreaks to be anticipated well before they unfold.

•  Operational Response: Disaster response is predominantly driven by local communities. The real first responders, after all, have always been the disaster-affected communities. In 2025, this grassroots response is highly networked and hyper-tech-enabled, significantly accelerating and improving the efficiency of self-help and mutual aid. The Digital Humanitarian Network (DHN) is no longer a purely virtual network and has local chapters (with flocks of UAVs) in over 100 countries that each contribute to local response efforts. Meanwhile, close to 90% of the world’s population has an augmented-reality Personal Context Assistant (PCA), a wearable device that provides hyper-customized information (drawn in part from Quantified Self data) on urgent needs, available resources and logistics. National humanitarian response organizations have largely replaced the need for external assistance and coordination, save for extreme events. International humanitarian organizations increasingly play a financial, certification and accountability role.

•  Early Recovery: There are more 3D printers than 2D printers in 2025. The former are extensively used for rapid reconstruction and post-disaster resilient development using local resources and materials. Mobile-money is automatically disbursed to enable this recovery based on personal insurance & real-time needs assessments. In addition, the Share Economy is truly global, which means that communication, transportation, accommodation and related local services are all readily available in the vast majority of urban areas. During disasters, Share Economy companies play an active role by offering free use of their platforms.

•  Data Access & Privacy: Telecommunications companies, satellite imagery firms and large technology & social media companies have all signed up to the International Data Philanthropy Charter, enabling them to share anonymized emergency data (albeit temporarily) that is directly relevant for humanitarian response. User-generated content is owned by the user who can limit the use of this data along the lines of the Open Paths model.

If this future seems a little too rosy, that’s because I’m thinking of presenting two versions of the future: one optimistic, the other less so. The latter would be a world riddled with ad hoc decision-making based on very subjective damage and needs assessments, highly restrictive data-sharing licenses and even the continued use of PDFs for data dissemination. This less-than-pleasant world would also be plagued by data privacy, protection and security challenges. A new digital volunteer group called “Black Hat Humanitarians” rises to prominence and has little patience for humanitarian principles or codes of conduct. In this future world, digital data is collected and shared with no concern for informed consent. In addition, the vast majority of data relevant to saving lives in humanitarian crises remains highly proprietary. Meanwhile, open data that is publicly shared during disasters is used by tech-savvy criminals to further their own ends.

These two future worlds may be extremes but whether we lean towards one or the other will depend in part on enlightened leadership and policymaking. What do you think humanitarian response will look like in 2025? Where am I off and/or making unfounded assumptions? What aspects of the pictures I’m painting are more likely to become reality? What am I completely missing?

Update: Video of presentation available here.


Early Results of MicroMappers Response to Typhoon Yolanda (Updated)

We have completed our digital humanitarian operation in the Philippines after five continuous days with MicroMappers. Many, many thanks to all volunteers from all around the world who donated their time by clicking on tweets and images coming from the Philippines. Our UN OCHA colleagues have confirmed that the results are being shared widely with their teams in the field and with other humanitarian organizations on the ground. More here.

[Screenshot: ImageClicker]

In terms of preliminary figures (to be confirmed):

  • Tweets collected during first 48 hours of landfall = ~230,000
  • Tweets automatically filtered for relevancy/uniqueness = ~55,000
  • Tweets clicked using the TweetClicker = ~30,000
  • Relevant tweets triangulated using TweetClicker = ~3,800
  • Triangulated tweets published on live Crisis Map = ~600
  • Total clicks on TweetClicker = ~90,000
  • Images clicked using the ImageClicker = ~5,000
  • Relevant images triangulated using ImageClicker = ~1,200
  • Triangulated images published on live Crisis Map = ~180
  • Total clicks on ImageClicker = ~15,000
  • Total clicks on MicroMappers (Image + Tweet Clickers) = ~105,000

Since every tweet and image uploaded to the Clickers was clicked on by at least three individual volunteers for quality-control purposes, the number of clicks is roughly three times the total number of tweets and images uploaded to the respective Clickers. In sum, digital humanitarian volunteers clocked a grand total of ~105,000 clicks to support humanitarian operations in the Philippines.
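For illustration, this triple-click redundancy boils down to a per-item majority vote. Here is a simplified Python sketch; MicroMappers’ actual quality-control rules may differ, and the two-vote threshold is my assumption.

    from collections import Counter

    def consolidate(labels_per_item, min_votes=2):
        """Keep an item's label only when at least `min_votes` of its
        volunteer classifications agree (each item gets ~3 clicks)."""
        results = {}
        for item_id, labels in labels_per_item.items():
            label, count = Counter(labels).most_common(1)[0]
            if count >= min_votes:
                results[item_id] = label
        return results

    clicks = {"tweet-1": ["Relevant", "Relevant", "Not relevant"],
              "tweet-2": ["Relevant", "Not relevant", "Other"]}
    print(consolidate(clicks))  # {'tweet-1': 'Relevant'}; tweet-2 has no majority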

While the media has largely focused on the technology angle of our digital humanitarian operation, the human story is for me the more powerful message. This operation succeeded because people cared. Those ~105,000 clicks did not magically happen. Each and every single one of them was clocked by humans, not machines. At one point, we had over 300 digital volunteers from the world over clicking away at the same time on the TweetClicker and more than 200 on the ImageClicker. This kind of active engagement by total strangers—good “digital Samaritans”—explains why I find the human angle of this story to be the most inspiring outcome of MicroMappers. “Crowdsourcing” is just a new term for the old saying “it takes a village,” and sometimes it takes a digital village to support humanitarian efforts on the ground.

Until recently, when disasters struck in faraway lands, we would watch the news on television wishing we could somehow help. That private wish—that innate human emotion—would perhaps translate into a donation. Today, not only can you donate cash to support those affected by disasters, you can also donate a few minutes of your time to support the operational humanitarian response on the ground by simply clicking on MicroMappers. In other words, you can translate your private wish into direct, online public action, which in turn translates into supporting offline collective action in the disaster-affected areas.

Clicking is so simple that anyone with Internet access can help. We had high schoolers in Qatar, fire officers in Belgium, graduate students in Boston, a retired couple in Kenya and young Filipinos all clicking away. They all cared and took the time to try and help others, often from thousands of miles away. That is the kind of world I want to live in. So if you share this vision, then feel free to join the MicroMappers list-serve.

[Screenshot: TweetClicker during Typhoon Yolanda]

Considering that MicroMappers is still very much under development, we are all pleased with the results. There were of course many challenges; the most serious was the CrowdCrafting server that hosts our Clickers. Unfortunately, the server was not able to handle the load and traffic generated by digital volunteers: it crashed twice and slowed our Clickers to a complete stop at least a dozen times during the past five days. At times, it would take 10-15 seconds for a new tweet or image to load, which was frustrating. We were also limited by the number of tweets and images we could upload at any given time, usually ~1,500 at most; any larger load would seriously slow down the Clickers. So it is rather remarkable that digital volunteers managed to clock more than 100,000 clicks given the repeated interruptions.

Besides the server issue, the other main bottleneck was the geo-location of the ~30,000 tweets and ~5,000 images tagged using the Clickers. We do have a Tweet and Image GeoClicker but these were not slated to launch until next week at CrisisMappers 2013, which meant they weren’t ready for prime time. We’ll be sure to launch them soon. Once they are operational, we’ll be able to automatically push triangulated tweets and images from the Tweet and Image Clickers directly to the corresponding GeoClickers so volunteers can also aid humanitarian organizations by mapping important tweets and images directly.

There’s a lot more that we’ve learned throughout the past 5 days and much room for improvement. We have a long list of excellent suggestions and feedback from volunteers and partners that we’ll be going through starting tomorrow. The most important next step is to get a more powerful server that can handle a lot more load and traffic. We’re already taking action on that. I have no doubt that our clicks would have doubled without the server constraints.

For now, though, BIG thanks to the SBTF Team, in particular Jus McKinnon, and the QCRI team, in particular Ji Lucas, Hemant Purohit and Andrew Ilyas, for putting in very, very long hours, day in and day out, on top of their full-time jobs and studies. And finally, BIG thanks to the World Wide Crowd, to all of you who cared enough to click and support the relief operations in the Philippines. You are the heroes of this story.


Mining Mainstream Media for Emergency Management 2.0

There is so much attention (and hype) around the use of social media for emergency management (SMEM) that we often forget about mainstream media when it comes to next-generation humanitarian technologies. News media across the globe have become increasingly digital in recent years—and thus analyzable in real time. Twitter added little value during the recent Pakistan Earthquake, for example. Instead, it was the Pakistani mainstream media that provided the immediate situational awareness necessary for a preliminary damage and needs assessment. This means that our humanitarian technologies need to ingest both social media and mainstream media feeds.

[Image: newspaper covers]

Now, this is hardly revolutionary. Ten years ago I worked for a data-mining company that analyzed Reuters newswires in real time using natural language processing (NLP). This was for a conflict early warning system we were developing. The added value of monitoring mainstream media for crisis mapping purposes has also been demonstrated repeatedly in recent years. In this study from 2008, I showed that a crisis map of Kenya was more complete when sources included mainstream media as well as user-generated content.

So why revisit mainstream media now? Simple: GDELT, the Global Database of Events, Language and Tone that my colleague Kalev Leetaru launched earlier this year. GDELT is the single largest public and global event-data catalog ever developed. Digital humanitarians need no longer monitor mainstream media manually. We can simply develop a dedicated interface on top of GDELT to automatically extract situational-awareness information for disaster response purposes. We’re already doing this with Twitter, so why not extend the approach to global digital mainstream media as well?

GDELT data is drawn from a “cross-section of all major international, national, regional, local, and hyper-local news sources, both print and broadcast, from nearly every corner of the globe, in both English and vernacular.” All identified events are automatically coded using the CAMEO coding framework (although Kalev has since added several dozen additional event types). In short, GDELT codes every event by the actors involved, the type of event, location, time and other metadata attributes. For example, actors include “Refugees,” “United Nations,” and “NGO”. Event types include variables such as “Affect,” which captures everything from refugees to displaced persons and evacuations. Humanitarian crises, aid, disasters and disaster relief are also covered; the “Provision of Humanitarian Aid” is its own event type, for example. GDELT data is currently updated every 24 hours, and Kalev plans to provide hourly updates in the near future and ultimately 30-minute updates.
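To give a feel for how little code such a dedicated interface would need to get started, here is a Python sketch that scans a local copy of a GDELT daily event file for events whose CAMEO root code is 07 (“Provide aid”). The column indices follow my reading of the GDELT 1.0 event-file layout and should be verified against the official codebook; the file name is a hypothetical local download.

    import csv

    # Column indices per the GDELT 1.0 event file layout (verify against the codebook).
    EVENT_ROOT_CODE, ACTION_LAT, ACTION_LON, SOURCE_URL = 28, 53, 54, 57

    def aid_events(path):
        """Yield (lat, lon, url) for events coded as CAMEO root 07, 'Provide aid'."""
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.reader(f, delimiter="\t"):
                if row[EVENT_ROOT_CODE] == "07" and row[ACTION_LAT]:
                    yield float(row[ACTION_LAT]), float(row[ACTION_LON]), row[SOURCE_URL]

    for lat, lon, url in aid_events("20131115.export.CSV"):  # hypothetical local file
        print(lat, lon, url)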

[Figure: GDELT Global Knowledge Graph]

If this isn’t impressive enough, Kalev and colleagues have just launched the GDELT Global Knowledge Graph (GKG). “To sum up the GKG in a single sentence, it connects every person, organization, location, count, theme, news source, and event across the planet into a single massive network that captures what’s happening around the world, what its context is and who’s involved, and how the world is feeling about it, every single day.” The figure above is based on a subset of a single day of the GDELT Knowledge Graph, showing “how the cities of the world are connected to each other in a single day’s worth of news.” A customized version of the GKG could perhaps prove useful for UN OCHA’s “Who Does What, Where” (3Ws) directory in the future.

I’ve had long conversations with Kalev this month about leveraging GDELT for disaster response, and he is very supportive of the idea. My hope is that we’ll be able to add a GDELT feed to MicroMappers next year. I’m also wondering whether we could eventually create a version of the AIDR platform that ingests GDELT data instead of (or in addition to) Twitter. There is real potential here, which is why I’m excited that my colleagues at OCHA are exploring GDELT for humanitarian response. I’ll be meeting with them this week and next to explore ways to collaborate on making the most of GDELT for humanitarian response.


Note: Mainstream media obviously includes television and radio as well. Some colleagues of mine in Austria are looking at triangulating television broadcasts with text-based media and social media for a European project.

Automatically Identifying Eyewitness Reporters on Twitter During Disasters

My colleague Kate Starbird recently shared a very neat study entitled “Learning from the Crowd: Collaborative Filtering Techniques for Identifying On-the-Ground Twitterers during Mass Disruptions” (PDF). As she and her co-authors rightly argue, “most Twitter activity during mass disruption events is generated by the remote crowd.” So can we use advanced computing to rapidly identify Twitter users who are reporting from ground zero? The answer is yes.


An important indicator of whether or not a Twitter user is reporting from the scene of a crisis is the number of times they are retweeted. During the Egyptian revolution in early 2011, “nearly 30% of highly retweeted Twitter users were physically present at those protest events.” Kate et al. drew on this insight to study tweets posted during the Occupy Wall Street (OWS) protests in September 2011. The authors manually analyzed a sample of more than 2,300 Twitter users to determine which were tweeting from the protests. They found that 4.5% of Twitter users in their sample were actually onsite. Using this dataset as training data, Kate et al. developed a classifier that can automatically identify Twitter users reporting from the protests with an accuracy just shy of 70%. I expect that more training data could very well increase this accuracy score.
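The authors’ exact features and model are in the paper; purely to show the shape of such a classifier, here is a toy scikit-learn sketch with made-up features (retweet count, follower/followee ratio, count of locality-specific terms) and made-up labels:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Toy per-user features: [times_retweeted, follower_ratio, locality_terms]
    X = np.array([[120, 0.4, 5], [3, 2.1, 0], [85, 0.6, 7],
                  [1, 1.8, 0], [60, 0.5, 4], [2, 3.0, 1]])
    y = np.array([1, 0, 1, 0, 1, 0])  # 1 = on the ground, 0 = remote crowd

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
    clf = LogisticRegression().fit(X_tr, y_tr)
    print(clf.score(X_te, y_te))          # accuracy on held-out users
    print(clf.predict_proba(X_te)[:, 1])  # probability each user is on the ground

The probability output matters as much as the label: it is what lets digital volunteers rank users for human review rather than trusting the classifier outright.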

In any event, “the information resulting from this or any filtering technique must be further combined with human judgment to assess its accuracy.” As the authors rightly note, “this ‘limitation’ fits well within an information space that is witnessing the rise of digital volunteer communities who monitor multiple data sources, including social media, looking to identify and amplify new information coming from the ground.” To be sure, “For volunteers like these, the use of techniques that increase the signal to noise ratio in the data has the potential to drastically reduce the amount of work they must do. The model that we have outlined does not result in perfect classification, but it does increase this signal-to-noise ratio substantially—tripling it in fact.”

I really hope that someone will leverage Kate’s important work to develop a standalone platform that automatically generates a list of Twitter users who are reporting from disaster-affected areas. This would be a very worthwhile contribution to the ecosystem of next-generation humanitarian technologies. In the meantime, perhaps QCRI’s Artificial Intelligence for Disaster Response (AIDR) platform will help digital humanitarians automatically identify tweets posted by eyewitnesses. I’m optimistic, since we were able to create a machine learning classifier with an accuracy of 80%-90% for eyewitness tweets. More on this in our recent study.

[Tweet screenshot: user MOchin, “talked to family”]

One question that remains is how to automatically identify tweets like the one above? This person is not an eyewitness but was likely on the phone with her family who are closer to the action. How do we develop a classifier to catch these “second-hand” eyewitness reports?


AIDR: Artificial Intelligence for Disaster Response

Social media platforms are increasingly used to communicate crisis information when major disasters strike. Hence the rise of Big (Crisis) Data. Humanitarian organizations, digital humanitarians and disaster-affected communities know that some of this user-generated content can increase situational awareness. The challenge is to identify relevant and actionable content in near real-time to triangulate with other sources and make more informed decisions on the spot. Finding potentially life-saving information in this growing stack of Big Crisis Data, however, is like looking for the proverbial needle in a giant haystack. This is why my team and I at QCRI are developing AIDR.


The free and open-source Artificial Intelligence for Disaster Response platform leverages machine learning to automatically identify informative content on Twitter during disasters. Unlike the vast majority of related platforms out there, we go beyond simple keyword search to filter for informative content. Why? Because recent research shows that keyword searches can miss over 50% of relevant content posted on Twitter. This is very far from optimal for emergency response. Furthermore, tweets captured via keyword search may not be relevant, since words can have multiple meanings depending on context. Finally, keyword searches are restricted to a single language. Machine learning overcomes all these limitations, which is why we’re developing AIDR.

So how does AIDR work? There are three components: the Collector, the Trainer and the Tagger. The Collector lets you gather and save tweets posted during a disaster. You can download these tweets for analysis at any time and also use them to create an automated filter using machine learning, which is where the Trainer and Tagger come in. The Trainer allows one or more users to teach the AIDR platform to automatically tag tweets of interest in a given collection. Tweets of interest could include those that refer to “Needs”, “Infrastructure Damage” or “Rumors”, for example.

[Screenshot: AIDR Collector]

A user creates a Trainer for tweets of interest by: 1) naming the Trainer, e.g., “My Trainer”; 2) identifying topics of interest such as “Needs”, “Infrastructure Damage” and “Rumors” (as many topics as the user wants); and 3) classifying tweets by topic of interest. This last step simply involves reading collected tweets and classifying them as “Needs”, “Infrastructure Damage”, “Rumor” or “Other,” for example. Any number of users can participate in classifying these tweets. That is, once a user creates a Trainer, she can classify the tweets herself, invite her organization to help, ask the crowd to help, or all of the above; she simply shares a link to her training page with whomever she likes. If she chooses to crowdsource the classification of tweets, AIDR includes a built-in quality-control mechanism to ensure that the crowdsourced classification is accurate.

As noted here, we tested AIDR in response to the Pakistan Earthquake last week. We quickly hacked together the user interface displayed below, so functionality rather than design was our immediate priority. In any event, digital humanitarian volunteers from the Standby Volunteer Task Force (SBTF) tagged over 1,000 tweets based on the different topics (labels) listed below. As far as we know, this was the first time that a machine learning classifier was crowdsourced in the context of a humanitarian disaster. Click here for more on this early test.

[Screenshot: AIDR Trainer]

The Tagger component of AIDR analyzes the human-classified tweets from the Trainer to automatically tag new tweets coming in from the Collector. This is where the machine learning kicks in. The Tagger uses the classified tweets to learn what kinds of tweets the user is interested in. When enough tweets have been classified (20 at a minimum), the Tagger automatically begins to tag new tweets by topic of interest. How many classified tweets is “enough”? This will vary, but the more tweets a user classifies, the more accurate the Tagger will be. Note that each automatically tagged tweet includes an accuracy score—i.e., the probability that the tweet was correctly tagged by the automatic Tagger.
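AIDR’s internals aren’t spelled out here, but conceptually the Trainer/Tagger pair is a standard supervised text-classification loop. As a rough analogue (my own scikit-learn sketch, not AIDR’s code; the example tweets and labels are invented):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Trainer: human-classified tweets (invented examples)
    tweets = ["bridge collapsed and road blocked", "praying for everyone affected",
              "we need water and food in Tacloban", "so sad watching the news"]
    labels = ["Infrastructure Damage", "Other", "Needs", "Other"]

    tagger = make_pipeline(TfidfVectorizer(), LogisticRegression())
    tagger.fit(tweets, labels)

    # Tagger: each new tweet gets a topic plus a probability (the accuracy score)
    new = ["airport road blocked by debris"]
    print(tagger.predict(new)[0])           # predicted topic
    print(tagger.predict_proba(new).max())  # confidence that the tag is correct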

The Tagger thus displays a list of automatically tagged tweets, updated in real time. The user can filter this list by topic and/or accuracy score—displaying all tweets tagged as “Needs” with an accuracy of 90% or more, for example. She can also download the tagged tweets for further analysis. In addition, she can share the data link of her Tagger with developers so they can import the tagged tweets directly into their own platforms, e.g., MicroMappers, Ushahidi, CrisisTracker, etc. (AIDR already powers CrisisTracker by automating the classification of tweets.) Finally, the user can share a display link with individuals who wish to embed the live feed into their websites, blogs, etc.

In sum, AIDR is an artificial intelligence engine developed to power consumer applications like MicroMappers. Any number of other tools can also be added to the AIDR platform, like the Credibility Plugin for Twitter that we’re collaborating on with partners in India. Added to AIDR, this plugin will score individual tweets based on the probability that they convey credible information. To this end, we hope AIDR will become a key node in the nascent ecosystem of next-generation humanitarian technologies. We plan to launch a beta version of AIDR at the 2013 CrisisMappers Conference (ICCM 2013) in Nairobi, Kenya this November.

In the meantime, we welcome any feedback you may have on the above. And if you want to help as an alpha tester, please get in touch so I can point you to the Collector tool, which you can start using right away. The other AIDR tools will be opened to the same group of alpha testers in the coming weeks. For more on AIDR, see also this article in Wired.

[Image: AIDR logo]

The AIDR project is a joint collaboration with the United Nations Office for the Coordination of Humanitarian Affairs (OCHA). Other organizations that have expressed an interest in AIDR include the International Committee of the Red Cross (ICRC), American Red Cross (ARC), Federal Emergency Management Agency (FEMA), New York City’s Office for Emergency Management and their counterpart in the City of San Francisco. 


Note: In the future, AIDR could also be adapted to take in Facebook status updates and text messages (SMS).