Humanitarians in the Sky: Using UAVs for Disaster Response

The following is a presentation that I recently gave at the 2014 Remotely Piloted Aircraft Systems Conference (RPAS 2014) held in Brussels, Belgium. The case studies on the Philippines and Haiti are also featured in my upcoming book on “Digital Humanitarians: How Big Data is Changing the Face of Humanitarian Response.” The book is slated to be published in January/February 2015.

Good afternoon and many thanks to Peter van Blyenburgh for the kind invitation to speak on the role of UAVs in humanitarian contexts beyond the European region. I’m speaking today on behalf of the Humanitarian UAV Network, which brings together seasoned humanitarian professionals with UAV experts to facilitate the use of UAVs in humanitarian settings. I’ll be saying more about the Humanitarian UAV Network (UAViators, pronounced “way-viators”) at the end of my talk.

The view from above is key for humanitarian response. Indeed, satellite imagery has played an important role in relief operations since Hurricane Mitch in 1998. And the Indian Ocean Tsunami was the first disaster to be captured from space while the wave was still propagating. Some 650 images were produced using data from 15 different sensors. During the immediate aftermath of the Tsunami, satellite images were used at headquarters to assess the extent of the emergency. Later, satellite images were used in the field directly, distributed by the Humanitarian Information Center (HIC) and others to support and coordinate relief efforts.

Satellites do present certain limitations, of course. These include cost, the time needed to acquire images, cloud cover, licensing issues and so on. In any event, two years after the Tsunami, an earlier iteration of the UN’s DRC Mission (MONUC) was supported by a European force (EUFOR), which used 4 Belgian UAVs. But I won’t be speaking about this type of UAV. For a variety of reasons, particularly affordability, ease of transport, regulatory concerns, and community engagement, the UAVs used in humanitarian response are smaller systems or micro-UAVs that weigh just a few kilograms, such as the fixed-wing UAV displayed below.

The World Food Program’s UAVs were designed and built at the University of Torino “way back” in 2007. But they’ve been grounded until this year due to lack of legislation in Italy.

In June 2014, the UN’s Office for the Coordination of Humanitarian Affairs (OCHA) purchased a small quadcopter for use in humanitarian response and advocacy. Incidentally, OCHA is on the Advisory Board of the Humanitarian UAV Network, or UAViators. 

Now, there are many use cases for the operation of UAVs in humanitarian settings (those listed above are only a subset). All of you here at RPAS 2014 are already very familiar with these applications. So let me jump directly to real-world case studies from the Philippines and Haiti.

Typhoon Haiyan, or Yolanda as it was known locally, was the most powerful typhoon in recorded history to make landfall. The impact was absolutely devastating. I joined UN/OCHA in the Philippines following the Typhoon and was struck by how many UAV projects were being launched. What follows are just a few of these projects.

Danoffice IT, a company based in Lausanne, Switzerland, used the Sky-Watch Huginn X1 Quadcopter to support the humanitarian response in Tacloban. The rotary-wing UAV was used to identify where NGOs could set up camp. Later on, the UAV was used to support a range of additional tasks such as identifying which roads were passable for transportation/logistics. The quadcopter was also flown up the coast to assess the damage from the storm surge and flooding and to determine which villages had been most affected. This served to speed up the relief efforts and made the response more targeted vis-a-vis the provision of resources and assistance. Danoffice IT is also on the Board of the Humanitarian UAV Network (UAViators).

A second UAV project was carried out by a local UAV start-up called CorePhil DSI. The team used an eBee to capture aerial imagery of downtown Tacloban, one of the areas hardest-hit by Typhoon Yolanda. They captured 22 Gigabytes of imagery and shared this with the Humanitarian OpenStreetMap Team (HOT), which is also on the Board of UAViators. HOT subsequently crowdsourced the tracing of this imagery (and satellite imagery) to create the most detailed and up-to-date maps of the area. These maps were shared with and used by multiple humanitarian organizations as well as the Philippine Government.

In a third project, the Swiss humanitarian organization Medair partnered with Drone Adventures to create a detailed set of 2D maps and 3D terrain models of the disaster-affected areas in which Medair works. These images were used to inform the humanitarian organization’s recovery and reconstruction programs. To be sure, Medair used the maps and models of Tacloban and Leyte to assist in assessing where the greatest need was and what level of assistance should be given to affected families as they continued to recover. Having these accurate aerial images of the affected areas allowed the Swiss organization to address the needs of individual households and—equally importantly—to advocate on their behalf when necessary.

Drone Adventures also flew their fixed-wing UAVs (eBees) over Dulag, just north of Leyte, where more than 80% of homes and croplands were destroyed during the Typhoon. Medair is providing both materials and expertise to help build new shelters in Dulag. So the aerial imagery is proving invaluable for identifying just how much material is needed and where. The captured imagery is also enabling community members themselves to better understand both where the greatest needs are and also what the potential solutions might be.

The partners are also committed to Open Data. The imagery captured was made available online and for free, enabling community leaders and humanitarian organizations to use the information to coordinate other reconstruction efforts. In addition, Drone Adventures and Medair presented locally-printed maps to community leaders within 24 hours of flying the UAVs. Some of these maps were printed on rollable, waterproof banners, making them more durable when used in the field.

In yet another UAV project, the local Filipino start-up SkyEye Inc partnered with the University of the Philippines in Manila to develop expendable UAVs, or xUAVs. The purpose of this initiative is to empower grassroots communities to deploy their own low-cost xUAVs and thus support locally-led response efforts. The team has trained 4 out of 5 teams across the Philippines to locally deploy UAVs in preparation for the next Typhoon season. In so doing, they are also transferring math, science and engineering skills to local communities. It is worth noting that community perceptions of UAVs in the Philippines and elsewhere have always been very positive. Indeed, local communities perceive small UAVs as toys more than anything else.

SkyEye also worked with a group from the University of Hawaii to create disaster risk reduction models of flood-prone areas.

Moving to Haiti, the International Organization for Migration (IOM) has partnered with Drone Adventures and others to produce accurate topographical and 3D maps of disaster-prone areas across the country. These aerial images have been used to inform disaster risk reduction and community resilience programs. The UAVs have also enabled IOM to assess destroyed houses and other types of damage caused by floods and droughts. In addition, UAVs have been used to monitor IDP camps, helping aid workers identify when shelters are empty and thus ready to be closed. Furthermore, the high-resolution aerial imagery has been used to support a census survey of public buildings, shelters, hospitals as well as schools.

After Hurricane Sandy, for example, aerial imagery enabled IOM to very rapidly assess how many houses had collapsed near Rivière Grise and how many people were affected by the flooding. The aerial imagery was also used to identify areas of standing water where mosquitos and epidemics could easily thrive. Throughout their work with UAVs, IOM has stressed that regular community engagement has been critical for the successful use of UAVs. Indeed, informing local communities of the aerial mapping projects and explaining how the collected information is to be used is imperative. Local capacity building is also paramount, which is why Drone Adventures has trained a local team of Haitians to locally deploy and maintain their own eBee UAV.

The pictures above and below are some of the information products produced by IOM and Drone Adventures. The 3D model above was used to model flood risk in the area and to inform subsequent disaster risk reduction projects.

Several colleagues of mine have already noted that aerial imagery presents a Big Data challenge. This means that humanitarian organizations and others will need to use advanced computing (human computing and machine computing) to make sense of Big (Aerial) Data.

My colleagues at the European Commission’s Joint Research Center (JRC) are already beginning to apply advanced computing to automatically analyze aerial imagery. In the example from Haiti below, the JRC deployed a machine learning classifier to automatically identify rubble left over from the massive earthquake that struck Port-au-Prince in 2010. Their classifier had an impressive accuracy of 92%, “suggesting that the method in its simplest form is sufficiently reliable for rapid damage assessment.”
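
The JRC publication does not detail its implementation here, but the basic workflow is easy to illustrate. Below is a minimal, hypothetical Python sketch of a tile-level “rubble vs. intact” classifier trained on labeled aerial image patches; the folder names, the crude color features and the random forest are my own assumptions for illustration, not the JRC’s actual method.

```python
# Illustrative sketch only -- NOT the JRC pipeline. Assumes labeled aerial image
# tiles already exist in two hypothetical folders: rubble/ and intact/.
import numpy as np
from pathlib import Path
from PIL import Image
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def tile_features(path):
    """Crude color features for one tile: per-channel means and standard deviations."""
    img = np.asarray(Image.open(path).convert("RGB").resize((64, 64)), dtype=float)
    return np.concatenate([img.mean(axis=(0, 1)), img.std(axis=(0, 1))])

X, y = [], []
for label, folder in enumerate(["intact", "rubble"]):   # 0 = intact, 1 = rubble
    for tile in Path(folder).glob("*.png"):
        X.append(tile_features(tile))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), test_size=0.3, random_state=42)
clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```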

Human computing (or crowdsourcing) can also be used to make sense of Big Data. My team and I at QCRI have partnered with the UN (OCHA) to create the MicroMappers platform, which is a free and open-source tool to make sense of large datasets created during disasters, like aerial data. We have access to thousands of digital volunteers who can rapidly tag and trace aerial imagery; the resulting analysis of this tagging/tracing can be used to increase the situational awareness  of humanitarian organizations in the field.

Digital volunteers can trace features of interest such as shelters without roofs. Our plan is to subsequently use these traced features as training data to develop machine learning classifiers that can automatically identify these features in future aerial images. We’re also exploring a second use case, i.e., the rapid transcription of imagery, which can then be automatically geo-tagged and added to a crisis map.
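
To make that second use case concrete, here is a minimal sketch of how geo-tagged transcriptions could be turned into a map layer. The field names and coordinates are purely hypothetical and GeoJSON is just one possible output format; this is not MicroMappers code.

```python
# Minimal, hypothetical sketch: convert volunteer transcriptions that carry
# coordinates into a GeoJSON FeatureCollection that a web crisis map can display.
import json

transcriptions = [
    {"text": "roof destroyed, family sheltering in school", "lat": 11.2430, "lon": 125.0040},
    {"text": "road blocked by debris", "lat": 11.2511, "lon": 124.9987},
]

feature_collection = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [r["lon"], r["lat"]]},
            "properties": {"report": r["text"]},
        }
        for r in transcriptions
    ],
}

with open("crisis_map_layer.geojson", "w") as f:
    json.dump(feature_collection, f, indent=2)
```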

The increasing use of UAVs during humanitarian disasters is why UAViators, the Humanitarian UAV Network, was launched. Recall the relief operations in response to Typhoon Yolanda; an unprecedented number of UAV projects were in operation. But most operators didn’t know about each other, so they were not coordinating flights let alone sharing imagery with local communities. Since the launch of UAViators, we’ve developed the first-ever Code of Conduct for the use of UAVs in humanitarian settings, which includes guidelines on data protection and privacy. We have also drafted an Operational Check-List to educate those who are new to humanitarian UAVs. We are now in the process of carrying out a comprehensive evaluation of UAV models along with cameras, sensors, payload mechanisms and image processing software. The purpose of this evaluation is to identify which are the best fit for use by humanitarians in the field. Since the UN and others are looking for training and certification programs, we are actively seeking partners to provide these services.

The above goals are all for the medium to long term. More immediately, UAViators is working to educate humanitarian organizations on both the opportunities and challenges of using UAVs in humanitarian settings. UAViators is also working to facilitate the coordination of UAV flights during major disasters, enabling operators to share their flight plans and contact details with each other via the UAViators website. We are also planning to set up an SMS service to enable direct communication between operators and others in the field during UAV flights. Lastly, we are developing an online map for operators to easily share the imagery/videos they are collecting during relief efforts.

Data collection (imagery capture) is certainly not the only use case for UAVs in humanitarian contexts. The transportation of payloads may play an increasingly important role in the future. To be sure, my colleagues at UNICEF are actively exploring this with a number of partners in Africa.

Other sensors also present additional opportunities for the use of UAVs in relief efforts. Sensors can be used to assess the impact of disasters on communication infrastructure, such as cell phone towers, for example. Groups are also looking into the use of UAVs to provide temporary communication infrastructure (“aerial cell phone towers”) following major disasters.

The need for Sense and Avoid systems (a.k.a. Detection & Avoid solutions) has been highlighted in almost every other presentation given at RPAS 2014. We really need this new technology sooner rather than later (and that’s a major understatement). At the same time, it is important to emphasize that the main added value of UAVs in humanitarian settings is to capture imagery of areas that are overlooked or ignored by mainstream humanitarian relief operations; that is, of areas that are partially or completely disconnected logistically. By definition, disaster-affected communities in these areas are likely to be more vulnerable than others in urban areas. In addition, the airspace in these disconnected regions is not complex and thus presents fewer challenges around safety and coordination, for example.

UAVs were ready to go following the mudslides in Oso, Washington back in March of this year. The UAVs were going to be used to look for survivors but the birds were not allowed to fly. The decision to ground UAVs and bar them from supporting relief and rescue efforts will become increasingly untenable when lives are at stake. I genuinely applaud the principle of proportionality applied by the EU and respective RPAS Associations vis-a-vis risks and regulations, but there is one very important variable missing in the proportionality equation: social benefit. Indeed, the cost benefit calculus of UAV risk & regulation in the context of humanitarian use must include the expected benefit of lives saved and suffering alleviated. Let me repeat this to make sure I’m crystal clear: risks must be weighed against potential lives saved.

At the end of the day, the humanitarian context is different from precision agriculture or other commercial applications of UAVs such as film making. The latter have no relation to the Humanitarian Imperative. Having over-regulation stand in the way of humanitarian principles will simply become untenable. At the same time, the principle of Do No Harm must absolutely be upheld, which is why it features prominently in the Humanitarian UAV Network’s Code of Conduct. In sum, like the Do No Harm principle, the cost benefit analysis of proportionality must include potential or expected benefits as part of the calculus.

To conclude, a new (forthcoming) policy brief by the UN (OCHA) publicly calls on humanitarian organizations to support initiatives like the Humanitarian UAV Network. This is an important, public endorsement of our work thus far. But we also need support from non-humanitarian organizations like those you represent in this room. For example, we need clarity on existing legislation. Our partners like the UN need to have access to the latest laws by country to inform their use of UAVs following major disasters. We really need your help on this; and we also need your help in identifying which UAVs and related technologies are likely to be a good fit for humanitarians in the field. So if you have some ideas, then please find me during the break, I’d really like to speak with you, thank you!

See Also:

  • Crisis Map of UAV/Aerial Videos for Disaster Response [link]
  • How UAVs are Making a Difference in Disaster Response [link]
  • Humanitarians Using UAVs for Post Disaster Recovery [link]
  • Grassroots UAVs for Disaster Response [link]
  • Using UAVs for Search & Rescue [link]
  • Debrief: UAV/Drone Search & Rescue Challenge [link]
  • Crowdsourcing Analysis of UAV Imagery for Search/Rescue [link]
  • Check-List for Flying UAVs in Humanitarian Settings [link]

Picture Credits:

  • Danoffice IT, Drone Adventures, SkyEye, JRC

 

MicroMappers Launched for Pakistan Earthquake Response (Updated)

Update 1: MicroMappers is now public! Anyone can join to help the efforts!
Update 2: Results of MicroMappers Response to Pakistan Earthquake [Link]

MicroMappers was not due to launch until next month but my team and I at QCRI received a time-sensitive request by colleagues at the UN to carry out an early test of the platform given yesterday’s 7.7 magnitude earthquake, which killed well over 300 and injured hundreds more in south-western Pakistan.

Shortly after this request, the UN Office for the Coordination of Humanitarian Affairs (OCHA) in Pakistan officially activated the Digital Humanitarian Network (DHN) to rapidly assess the damage and needs resulting from the earthquake. The award-winning Standby Volunteer Task Force (SBTF), a founding member of the DHN, teamed up with QCRI to use MicroMappers in response to the request by OCHA-Pakistan. This exercise, however, is purely for testing purposes. We made this clear to our UN partners since the results may be far from optimal.

MicroMappers is simply a collection of microtasking apps (we call them Clickers) that we have customized for disaster response purposes. We just launched both the Tweet and Image Clickers to support the earthquake relief and may also launch the Tweet and Image GeoClickers in the next 24 hours. The TweetClicker is pictured below.

Thanks to our partnership with GNIP, QCRI automatically collected over 35,000 tweets related to Pakistan and the Earthquake (we’re continuing to collect more in real-time). We’ve uploaded these tweets to the TweetClicker and are also filtering links to images for upload to the ImageClicker. Depending on how the initial testing goes, we may be able to invite help from the global digital village. Indeed, “crowdsourcing” is simply another way of saying “It takes a village…” In fact, that’s precisely why MicroMappers was developed, to enable anyone with an Internet connection to become a digital humanitarian volunteer. The Clicker for images is displayed below.

Now, whether this very first test of the Clickers goes well remains to be seen. As mentioned, we weren’t planning to launch until next month. But we’ve already learned heaps from the past few hours alone. For example, while the Clickers are indeed ready and operational, our automatic pre-processing filters are not yet optimized for rapid response. The purpose of these filters is to automatically identify tweets that link to images and videos so that they can be uploaded to the Clickers directly. In addition, while our ImageClicker is operational, our VideoClicker is still under development—as is our TranslateClicker, both of which would have been useful in this response. I’m sure we’ll encounter other issues over the next 24-36 hours. We’re keeping track of these in a shared Google Spreadsheet so we can review them next week and make sure to integrate as much of the feedback as possible before the next disaster strikes.
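
For readers curious what such a pre-processing filter looks like, here is a minimal sketch. It is not QCRI’s actual code, and the URL “hints” are illustrative assumptions about where photos and videos tend to be hosted.

```python
# Minimal, hypothetical sketch of a media-link filter: route tweets that link to
# images or videos to the ImageClicker or VideoClicker, and drop the rest.
import re

IMAGE_HINTS = (".jpg", ".jpeg", ".png", ".gif", "instagram.com", "twitpic.com")
VIDEO_HINTS = (".mp4", "youtube.com", "youtu.be", "vimeo.com")
URL_PATTERN = re.compile(r"https?://\S+")

def route_tweet(tweet_text):
    """Return 'image', 'video' or None depending on the links found in the tweet."""
    for url in URL_PATTERN.findall(tweet_text.lower()):
        if any(hint in url for hint in IMAGE_HINTS):
            return "image"
        if any(hint in url for hint in VIDEO_HINTS):
            return "video"
    return None

print(route_tweet("Flooding near the river http://twitpic.com/abc123"))  # -> 'image'
```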

Incidentally, we (QCRI) also teamed up with the SBTF to test the very first version of the Artificial Intelligence for Disaster Response (AIDR) platform for about six hours. As far as we know, this test represents the first time that machine learning classifiers for disaster response were created on the fly using crowdsourcing. We expect to launch AIDR publicly at the 2013 CrisisMappers conference this November (ICCM 2013). We’ll be sure to share what worked and didn’t work during this first AIDR pilot test. So stay tuned for future updates via iRevolution. In the meantime, a big, big thanks to the SBTF Team for rallying so quickly and for agreeing to test the platforms! If you’re interested in becoming a digital humanitarian volunteer, simply join us here.

Using Big Data to Inform Poverty Reduction Strategies

My colleagues and I at QCRI are spearheading a new experimental Research and Development (R&D) project with the United Nations Development Program (UNDP) team in Cairo, Egypt. Colleagues at Harvard University, MIT and UC Berkeley have also joined the R&D efforts as full-fledged partners. The research question: can an analysis of Twitter traffic in Egypt tell us anything about changes in unemployment and poverty levels? This question was formulated with UNDP’s Cairo-based Team during several conversations I had with them in early 2013.

As is well known, a major challenge in the development space is the lack of access to timely socio-economic data. So the question here is whether alternative, non-traditional sources of information (such as social media) can provide a timely and “good enough” indication of changing trends. Thanks to our academic partners, we have access to hundreds of millions of Egyptian tweets (both historical and current) along with census and demographic data for ground-truth purposes. If the research yields robust results, then our UNDP colleagues could draw on more real-time data to complement their existing datasets, which may better inform some of their local poverty reduction and development strategies. This more rapid feedback loop could lead to faster economic empowerment for local communities in Egypt. Of course, there are many challenges to working with social data vis-a-vis representation and sample bias. But that is precisely why this kind of experimental research is important—to determine whether any of our results are robust to biases in phone ownership, twitter-use, etc.
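
To show the mechanics of the kind of analysis we have in mind, here is a deliberately simple sketch: correlate a monthly count of job-related tweets with official unemployment figures. The numbers are invented and the method is far cruder than what the project will actually use; it only illustrates the ground-truthing step.

```python
# Illustrative sketch with made-up numbers -- not project results or methodology.
from scipy.stats import pearsonr

monthly_job_tweets = [1200, 1350, 1600, 1580, 1900, 2100]  # hypothetical counts
unemployment_rate = [12.1, 12.3, 12.8, 12.7, 13.2, 13.4]   # hypothetical %

r, p_value = pearsonr(monthly_job_tweets, unemployment_rate)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```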

Zooniverse: The Answer to Big (Crisis) Data?

Both humanitarian and development organizations are completely unprepared to deal with the rise of “Big Crisis Data” & “Big Development Data.” But many still hope that Big Data is but an illusion. Not so, as I’ve already blogged here, here and here. This explains why I’m on a quest to tame the Big Data Beast. Enter Zooniverse. I’ve been a huge fan of Zooniverse for as long as I can remember, and certainly long before I first mentioned them in this post from two years ago. Zooniverse is a citizen science platform that evolved from GalaxyZoo in 2007. Today, Zooniverse “hosts more than a dozen projects which allow volunteers to participate in scientific research” (1). So, why do I have a major “techie crush” on Zooniverse?

Oh let me count the ways. Zooniverse interfaces are absolutely gorgeous, making them a real pleasure to spend time with; they really understand user-centered design and motivations. The fact that Zooniverse is conversant in multiple disciplines is incredibly attractive. Indeed, the platform has been used to produce rich scientific data across multiple fields such as astronomy, ecology and climate science. Furthermore, this citizen science beauty has a user-base of some 800,000 registered volunteers—with an average of 500 to 1,000 new volunteers joining every day! To place this into context, the Standby Volunteer Task Force (SBTF), a digital humanitarian group, has about 1,000 volunteers in total. The open source Zooniverse platform also scales like there’s no tomorrow, enabling hundreds of thousands to participate in a single deployment at any given time. In short, the software supporting these pioneering citizen science projects is well tested and rapidly customizable.

At the heart of the Zooniverse magic is microtasking. If you’re new to microtasking, which I often refer to as “smart crowdsourcing,” this blog post provides a quick introduction. In brief, microtasking takes a large task and breaks it down into smaller microtasks. Say you were a major (like really major) astronomy buff and wanted to tag a million galaxies based on whether they are spiral or elliptical galaxies. The good news? The kind folks at the Sloan Digital Sky Survey have already sent you a hard disk packed full of telescope images. The not-so-good news? A quick back-of-the-envelope calculation reveals it would take 3-5 years, working 24 hours/day and 7 days/week, to tag a million galaxies. Ugh!

But you’re a smart cookie and decide to give this microtasking thing a go. So you upload the pictures to a microtasking website. You then get on Facebook, Twitter, etc., and invite (nay beg) your friends (and as many strangers as you can find on the suddenly-deserted digital streets) to help you tag a million galaxies. Naturally, you provide your friends, and the surprisingly large number of good digital Samaritans who’ve just shown up, with a quick 2-minute video intro on what spiral and elliptical galaxies look like. You explain that each participant will be asked to tag one galaxy image at a time, simply by clicking the “Spiral” or “Elliptical” button as needed. Inevitably, someone raises their hand to ask the obvious: “Why?! Why in the world would anyone want to tag a zillion galaxies?!”

Well, only because analyzing the resulting data could yield significant insights that may force a major rethink of cosmology and our place in the Universe. “Good enough for us,” they say. You breathe a sigh of relief and see them off, cruising towards deep space to boldly go where no one has gone before. But before you know it, they’re back on planet Earth. To your utter astonishment, you learn that they’re done with all the tagging! So you run over and check the data to see if they’re pulling your leg; but no, not only are 1 million galaxies tagged, but the tags are highly accurate as well. If you liked this little story, you’ll be glad to know that it happened in real life. GalaxyZoo, as the project was called, was the flash of brilliance that ultimately launched the entire Zooniverse series.

No, the second Zooniverse project was not an attempt to pull an Oceans 11 in Las Vegas. One of the most attractive features of many microtasking platforms such as Zooniverse is quality control. Think of slot machines. The only way to win big is by having three matching figures such as the three yellow bells in the picture above (righthand side). Hit the jackpot and the coins will flow. Get two out of three matching figures (lefthand side), and some slot machines may toss you a few coins for your efforts. Microtasking uses the same approach. Only if three participants tag the same picture of a galaxy as being a spiral galaxy does that data point count. (Of course, you could decide to change the requirement from 3 volunteers to 5 or even 20 volunteers.) This important feature allows microtasking initiatives to ensure a high standard of data quality, which may explain why many Zooniverse projects have resulted in major scientific breakthroughs over the years.
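
The agreement rule is simple enough to sketch in a few lines of Python. This is my own minimal illustration of the idea, not Zooniverse’s implementation, and the threshold is just a parameter.

```python
# Minimal sketch of agreement-based quality control: a classification only counts
# once enough independent volunteers have given the same label.
from collections import Counter

def accept_classification(votes, required_agreement=3):
    """votes: labels from independent volunteers for one image."""
    label, count = Counter(votes).most_common(1)[0]
    return label if count >= required_agreement else None

print(accept_classification(["spiral", "spiral", "spiral", "elliptical"]))  # -> 'spiral'
print(accept_classification(["spiral", "elliptical"]))                      # -> None
```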

The Zooniverse team is currently running 15 projects, with several more in the works. One of the most recent Zooniverse deployments, Planet Four, received some 15,000 visitors within the first 60 seconds of being announced on BBC TV. Guess how many weeks it took for volunteers to tag over 2,000,000 satellite images of Mars? A total of 0.286 weeks, i.e., forty-eight hours! Since then, close to 70,000 volunteers have tagged and traced well over 6 million Martian “dunes.” For their Andromeda Project, digital volunteers classified over 7,500 star clusters per hour, even though there was no media or press announcement—just one newsletter sent to volunteers. Zooniverse deployments also involve tagging earth-based pictures (in contrast to telescope imagery). Take the Snapshot Serengeti deployment, which invited volunteers to classify animals using photographs taken by 225 motion-sensor cameras in Tanzania’s Serengeti National Park. Volunteers swarmed this project to the point that there are no longer any pictures left to tag! So Zooniverse is eagerly waiting for new images to be taken in Serengeti and sent over.

One of my favorite Zooniverse features is Talk, an online discussion tool used for all projects to provide a real-time interface for volunteers and coordinators, which also facilitates the rapid discovery of important features. This also allows for socializing, which I’ve found to be particularly important with digital humanitarian deployments (such as these). One other major advantage of citizen science platforms like Zooniverse is that they are very easy to use and therefore do not require extensive prior training (think slot machines). Plus, participants get to learn about new fields of science in the process. So all in all, Zooniverse makes for a great date, which is why I recently reached out to the team behind this citizen science wizardry. Would they be interested in going out (on a limb) to explore some humanitarian (and development) use cases? “Why yes!” they said.

Microtasking platforms have already been used in disaster response, such as MapMill during Hurricane Sandy, Tomnod during the Somali Crisis and CrowdCrafting during Typhoon Pablo. So teaming up with Zooniverse makes a whole lot of sense. Their microtasking software is the most scalable one I’ve come across yet; it is open source and their 800,000 volunteer user-base is simply unparalleled. If Zooniverse volunteers can classify 2 million satellite images of Mars in 48 hours, then surely they can do the same for satellite images of disaster-affected areas on Earth. Volunteers responding to Sandy created some 80,000 assessments of infrastructure damage during the first 48 hours alone. It would have taken Zooniverse just over an hour. Of course, the fact that the hurricane affected New York City and the East Coast meant that many US-based volunteers rallied to the cause, which may explain why it only took 20 minutes to tag the first batch of 400 pictures. What if the hurricane had hit the Caribbean instead? Would the surge of volunteers have been as high? Might Zooniverse’s 800,000+ standby volunteers also be an asset in this respect?

Clearly, there is huge potential here, and not only vis-a-vis humanitarian use-cases but development ones as well. This is precisely why I’ve already organized and coordinated a number of calls with Zooniverse and various humanitarian and development organizations. As I’ve been telling my colleagues at the United Nations, World Bank and Humanitarian OpenStreetMap, Zooniverse is the Ferrari of Microtasking, so it would be such a big shame if we didn’t take it out for a spin… you know, just a quick test-drive through the rugged terrains of humanitarian response, disaster preparedness and international development.

Postscript: As some iRevolution readers may know, I am also collaborating with the outstanding team at CrowdCrafting, who have also developed a free & open-source microtasking platform for citizen science projects (also for disaster response here). I see Zooniverse and CrowdCrafting as highly synergistic and complementary. Because CrowdCrafting is still in its early stages, they fill a very important gap found at the long tail. In contrast, Zooniverse has already been around for half a decade and caters to very high volume and high profile citizen science projects. This explains why we’ll all be getting on a call in the very near future.

A Research Framework for Next Generation Humanitarian Technology and Innovation

Humanitarian donors and organizations are increasingly championing innovation and the use of new technologies for humanitarian response. DfID, for example, is committed to using “innovative techniques and technologies more routinely in humanitarian response” (2011). In a more recent strategy paper, DfID confirmed that it would “continue to invest in new technologies” (2012). ALNAP’s important report on “The State of the Humanitarian System” documents the shift towards greater innovation, “with new funds and mechanisms designed to study and support innovation in humanitarian programming” (2012). A forthcoming landmark study by OCHA makes the strongest case yet for the use and early adoption of new technologies for humanitarian response (2013).

These strategic policy documents are game-changers and pivotal to ushering in the next wave of humanitarian technology and innovation. That said, the reports are limited by the very fact that the authors are humanitarian professionals and thus not necessarily familiar with the field of advanced computing. The purpose of this post is therefore to set out a more detailed research framework for next generation humanitarian technology and innovation—one with a strong focus on information systems for crisis response and management.

In 2010, I wrote this piece on “The Humanitarian-Technology Divide and What To Do About It.” This divide became increasingly clear to me when I co-founded and co-directed the Harvard Humanitarian Initiative’s (HHI) Program on Crisis Mapping & Early Warning (2007-2009). So I co-founded the annual International CrisisMappers Conference series in 2009 and have continued to co-organize this unique, cross-disciplinary forum on humanitarian technology. The CrisisMappers Network also plays an important role in bridging the humanitarian and technology divide. My decision to join Ushahidi as Director of Crisis Mapping (2009-2012) was a strategic move to continue bridging the divide—and to do so from the technology side this time.

The same is true of my move to the Qatar Computing Research Institute (QCRI) at the Qatar Foundation. My experience at Ushahidi made me realize that serious expertise in Data Science is required to tackle the major challenges appearing on the horizon of humanitarian technology. Indeed, the key words missing from the DfID, ALNAP and OCHA innovation reports include: Data Science, Big Data Analytics, Artificial Intelligence, Machine Learning, Machine Translation and Human Computing. This current divide between the humanitarian and data science space needs to be bridged, which is precisely why I joined the Qatar Computing Research Institute as Director of Innovation: to develop and prototype the next generation of humanitarian technologies by working directly with experts in Data Science and Advanced Computing.

My efforts to bridge these communities also explain why I am co-organizing this year’s Workshop on “Social Web for Disaster Management” at the 2013 World Wide Web conference (WWW13). The WWW event series is one of the most prestigious conferences in the field of Advanced Computing. I have found that experts in this field are very interested and highly motivated to work on humanitarian technology challenges and crisis computing problems. As one of them recently told me: “We simply don’t know what projects or questions to prioritize or work on. We want questions, preferably hard questions, please!”

Yet the humanitarian innovation and technology reports cited above overlook the field of advanced computing. Their policy recommendations vis-a-vis future information systems for crisis response and management are vague at best. Yet one of the major challenges that the humanitarian sector faces is the rise of Big (Crisis) Data. I have already discussed this here, here and here, for example. The humanitarian community is woefully unprepared to deal with this tidal wave of user-generated crisis information. There are already more mobile phone subscriptions than people in 100+ countries. And fully 50% of the world’s population in developing countries will be using the Internet within the next 20 months—the current figure is 24%. Meanwhile, close to 250 million people were affected by disasters in 2010 alone. Since then, the number of new mobile phone subscriptions has increased by well over one billion, which means that disaster-affected communities today are increasingly likely to be digital communities as well.

In the Philippines, a country highly prone to “natural” disasters, 92% of Filipinos who access the web use Facebook. In early 2012, Filipinos sent an average of 2 billion text messages every day. When disaster strikes, some of these messages will contain information critical for situational awareness & rapid needs assessment. The innovation reports by DfID, ALNAP and OCHA emphasize time and time again that listening to local communities is a humanitarian imperative. As DfID notes, “there is a strong need to systematically involve beneficiaries in the collection and use of data to inform decision making. Currently the people directly affected by crises do not routinely have a voice, which makes it difficult for their needs be effectively addressed” (2012). But how exactly should we listen to millions of voices at once, let alone manage, verify and respond to these voices with potentially life-saving information? Over 20 million tweets were posted during Hurricane Sandy. In Japan, over half-a-million new users joined Twitter the day after the 2011 Earthquake. More than 177 million tweets about the disaster were posted that same day, i.e., 2,000 tweets per second on average.

Of course, the volume and velocity of crisis information will vary from country to country and disaster to disaster. But the majority of humanitarian organizations do not have the technologies in place to handle smaller tidal waves either. Take the case of the recent Typhoon in the Philippines, for example. OCHA activated the Digital Humanitarian Network (DHN) to ask them to carry out a rapid damage assessment by analyzing the 20,000 tweets posted during the first 48 hours of Typhoon Pablo. In fact, one of the main reasons digital volunteer networks like the DHN and the Standby Volunteer Task Force (SBTF) exist is to provide humanitarian organizations with this kind of skilled surge capacity. But analyzing 20,000 tweets in 12 hours (mostly manually) is one thing, analyzing 20 million requires more than a few hundred dedicated volunteers. What’s more, we do not have the luxury of having months to carry out this analysis. Access to information is as important as access to food; and like food, information has a sell-by date.

We clearly need a research agenda to guide the development of next generation humanitarian technology. One such framework is proposed here. The Big (Crisis) Data challenge is composed of (at least) two major problems: (1) finding the needle in the haystack; (2) assessing the accuracy of that needle. In other words, identifying the signal in the noise and determining whether that signal is accurate. Both of these challenges are exacerbated by serious time constraints. There are (at least) two ways to manage the Big Data challenge in real or near real-time: Human Computing and Artificial Intelligence. We know about these solutions because they have already been developed and used by other sectors and disciplines for several years now. In other words, our information problems are hardly as unique as we might think. Hence the importance of bridging the humanitarian and data science communities.

In sum, the Big Crisis Data challenge can be addressed using Human Computing (HC) and/or Artificial Intelligence (AI). Human Computing includes crowdsourcing and microtasking. AI includes natural language processing and machine learning. A framework for next generation humanitarian technology and innovation must thus promote Research and Development (R&D) that applies these methodologies to humanitarian response. For example, Verily is a project that leverages HC for the verification of crowdsourced social media content generated during crises. In contrast, this here is an example of an AI approach to verification. The Standby Volunteer Task Force (SBTF) has used HC (microtasking) to analyze satellite imagery (Big Data) for humanitarian response. Another novel HC approach to managing Big Data is the use of gaming, something called Playsourcing. AI for Disaster Response (AIDR) is an example of AI applied to humanitarian response. In many ways, though, AIDR combines AI with Human Computing, as does MatchApp. Such hybrid solutions should also be promoted as part of the R&D framework on next generation humanitarian technology.
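
To make the hybrid HC + AI idea concrete, here is a minimal sketch: volunteers label a handful of tweets (Human Computing), and a classifier trained on those labels then filters the rest of the stream (AI). The example tweets and labels are invented, and this is not the AIDR codebase, just an illustration of the pattern.

```python
# Minimal, hypothetical sketch of "needle in the haystack" filtering: train on a few
# volunteer-labeled tweets, then classify new ones.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

labeled_tweets = [
    ("bridge collapsed on the main road, cars trapped", "relevant"),
    ("we urgently need drinking water in barangay 5", "relevant"),
    ("thoughts and prayers to everyone affected", "not_relevant"),
    ("great concert last night!", "not_relevant"),
]
texts, labels = zip(*labeled_tweets)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["road to the airport is blocked by debris"]))
```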

There is of course more to humanitarian technology than information management alone. Related is the topic of Data Visualization, for example. There are also exciting innovations and developments in the use of drones or Unmanned Aerial Vehicles (UAVs), meshed mobile communication networks, hyper low-cost satellites, etc. I am particularly interested in each of these areas and will continue to blog about them. In the meantime, I very much welcome feedback on this post’s proposed research framework for humanitarian technology and innovation.

How the UN Used Social Media in Response to Typhoon Pablo (Updated)

Our mission as digital humanitarians was to deliver a detailed dataset of pictures and videos (posted on Twitter) which depict damage and flooding following the Typhoon. An overview of this digital response is available here. The task of our United Nations colleagues at the Office for the Coordination of Humanitarian Affairs (OCHA) was to rapidly consolidate and analyze our data to compile a customized Situation Report for OCHA’s team in the Philippines. The maps, charts and figures below are taken from this official report.

[Map: Typhoon Pablo social media mapping, OCHA, 6 December 2012]

This map is the first ever official UN crisis map entirely based on data collected from social media. Note the “Map data sources” at the bottom left of the map: “The Digital Humanitarian Network’s Solution Team: Standby Volunteer Task Force (SBTF) and Humanity Road (HR).” In addition to several UN agencies, the government of the Philippines has also made use of this information.

The cleaned data was subsequently added to this Google Map and also made public on the official Google Crisis Map of the Philippines.

One of my main priorities now is to make sure we do a far better job at leveraging advanced computing and microtasking platforms so that we are better prepared the next time we’re asked to repeat this kind of deployment. On the advanced computing side, it should be perfectly feasible to develop an automated way to crawl Twitter and identify links to images and videos. My colleagues at QCRI are already looking into this. As for microtasking, I am collaborating with PyBossa and Crowdflower to ensure that we have highly customizable platforms on stand-by so we can immediately upload the results of QCRI’s algorithms. In sum, we have got to move beyond simple crowdsourcing and adopt more agile microtasking and social computing platforms as both are far more scalable.

In the meantime, a big big thanks once again to all our digital volunteers who made this entire effort possible and highly insightful.

Big Data for Development: Challenges and Opportunities

The UN Global Pulse report on Big Data for Development ought to be required reading for anyone interested in humanitarian applications of Big Data. The purpose of this post is not to summarize this excellent 50-page document but to relay the most important insights contained therein. In addition, I question the motivation behind the unbalanced commentary on Haiti, which is my only major criticism of this otherwise authoritative report.

Real-time “does not always mean occurring immediately. Rather, “real-time” can be understood as information which is produced and made available in a relatively short and relevant period of time, and information which is made available within a timeframe that allows action to be taken in response i.e. creating a feedback loop. Importantly, it is the intrinsic time dimensionality of the data, and that of the feedback loop that jointly define its characteristic as real-time. (One could also add that the real-time nature of the data is ultimately contingent on the analysis being conducted in real-time, and by extension, where action is required, used in real-time).”

Data privacy “is the most sensitive issue, with conceptual, legal, and technological implications.” To be sure, “because privacy is a pillar of democracy, we must remain alert to the possibility that it might be compromised by the rise of new technologies, and put in place all necessary safeguards.” Privacy is defined by the International Telecommunications Union as the “right of individuals to control or influence what information related to them may be disclosed.” Moving forward, “these concerns must nurture and shape on-going debates around data privacy in the digital age in a constructive manner in order to devise strong principles and strict rules—backed by adequate tools and systems—to ensure “privacy-preserving analysis.”

Non-representative data is often dismissed outright since findings based on such data cannot be generalized beyond that sample. “But while findings based on non-representative datasets need to be treated with caution, they are not valueless […].” Indeed, while the “sampling selection bias can clearly be a challenge, especially in regions or communities where technological penetration is low […],  this does not mean that the data has no value. For one, data from “non-representative” samples (such as mobile phone users) provide representative information about the sample itself—and do so in close to real time and on a potentially large and growing scale, such that the challenge will become less and less salient as technology spreads across and within developing countries.”

Perceptions rather than reality is what social media captures. Moreover, these perceptions can also be wrong. But only those individuals “who wrongfully assume that the data is an accurate picture of reality can be deceived. Furthermore, there are instances where wrong perceptions are precisely what is desirable to monitor because they might determine collective behaviors in ways that can have catastrophic effects.” In other words, “perceptions can also shape reality. Detecting and understanding perceptions quickly can help change outcomes.”

False data and hoaxes are part and parcel of user-generated content. While the challenges around reliability and verifiability are real, some media organizations, such as the BBC, stand by the utility of citizen reporting of current events: “there are many brave people out there, and some of them are prolific bloggers and Tweeters. We should not ignore the real ones because we were fooled by a fake one.” They have thus devised internal strategies to confirm the veracity of the information they receive and choose to report, offering an example of what can be done to mitigate the challenge of false information. See for example my 20-page study on how to verify crowdsourced social media data, a field I refer to as information forensics. In any event, “whether false negatives are more or less problematic than false positives depends on what is being monitored, and why it is being monitored.”

“The United States Geological Survey (USGS) has developed a system that monitors Twitter for significant spikes in the volume of messages about earthquakes,” and as it turns out, 90% of user-generated reports that trigger an alert have turned out to be valid. “Similarly, a recent retrospective analysis of the 2010 cholera outbreak in Haiti conducted by researchers at Harvard Medical School and Children’s Hospital Boston demonstrated that mining Twitter and online news reports could have provided health officials a highly accurate indication of the actual spread of the disease with two weeks lead time.”
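
The USGS system itself is more sophisticated, but the underlying idea of flagging a spike in keyword volume is easy to sketch. The following is a minimal illustration with made-up counts, not the USGS algorithm.

```python
# Minimal, hypothetical sketch of volume-spike detection: flag a minute whose
# keyword count is far above the recent baseline.
import statistics

def is_spike(history, current_count, sigmas=4):
    """history: keyword counts for recent minutes; flag if current is an outlier."""
    baseline = statistics.mean(history)
    spread = statistics.stdev(history) or 1.0   # guard against a zero-variance baseline
    return current_count > baseline + sigmas * spread

recent = [3, 5, 2, 4, 6, 3, 5, 4, 2, 3]   # made-up counts of 'earthquake' tweets per minute
print(is_spike(recent, 4))     # -> False
print(is_spike(recent, 120))   # -> True
```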

This leads to the other Haiti example raised in the report, namely the finding that SMS data was correlated with building damage. Please see my previous blog posts here and here for context. What the authors seem to overlook is that Benetech apparently did not submit their counter-findings for independent peer-review whereas the team at the European Commission’s Joint Research Center did—and the latter passed the peer-review process. Peer-review is how rigorous scientific work is validated. The fact that Benetech never submitted their blog post for peer-review is actually quite telling.

In sum, while this Big Data report is otherwise strong and balanced, I am really surprised that they cite a blog post as “evidence” while completely ignoring the JRC’s peer-reviewed scientific paper published in the Journal of the European Geosciences Union. Until counter-findings are submitted for peer review, the JRC’s results stand: unverified, non-representative crowd-sourced text messages from the disaster affected population in Port-au-Prince that were in turn translated from Haitian Creole to English via a novel crowdsourced volunteer effort and subsequently geo-referenced by hundreds of volunteers  which did not undergo any quality control, produced a statistically significant, positive correlation with building damage.

In conclusion, “any challenge with utilizing Big Data sources of information cannot be assessed divorced from the intended use of the information. These new, digital data sources may not be the best suited to conduct airtight scientific analysis, but they have a huge potential for a whole range of other applications that can greatly affect development outcomes.”

One such application is disaster response. Earlier this year, FEMA Administrator Craig Fugate gave a superb presentation on “Real Time Awareness” in which he relayed an example of how he and his team used Big Data (Twitter) during a series of devastating tornadoes in 2011:

“Mr. Fugate proposed dispatching relief supplies to the long list of locations immediately and received pushback from his team who were concerned that they did not yet have an accurate estimate of the level of damage. His challenge was to get the staff to understand that the priority should be one of changing outcomes, and thus even if half of the supplies dispatched were never used and sent back later, there would be no chance of reaching communities in need if they were in fact suffering tornado damage already, without getting trucks out immediately. He explained, “if you’re waiting to react to the aftermath of an event until you have a formal assessment, you’re going to lose 12-to-24 hours…Perhaps we shouldn’t be waiting for that. Perhaps we should make the assumption that if something bad happens, it’s bad. Speed in response is the most perishable commodity you have…We looked at social media as the public telling us enough information to suggest this was worse than we thought and to make decisions to spend [taxpayer] money to get moving without waiting for formal request, without waiting for assessments, without waiting to know how bad because we needed to change that outcome.”

“Fugate also emphasized that using social media as an information source isn’t a precise science and the response isn’t going to be precise either. “Disasters are like horseshoes, hand grenades and thermal nuclear devices, you just need to be close— preferably more than less.”