Tag Archives: Imagery

Humanitarians in the Sky: Using UAVs for Disaster Response

The following is a presentation that I recently gave at the 2014 Remotely Piloted Aircraft Systems Conference (RPAS 2014) held in Brussels, Belgium. The case studies on the Philippines and Haiti are also featured in my upcoming book on “Digital Humanitarians: How Big Data is Changing the Face of Humanitarian Response.” The book is slated to be published in January/February 2015.

Screen Shot 2014-06-24 at 2.20.54 PM

Good afternoon and many thanks to Peter van Blyenburgh for the kind invitation to speak on the role of UAVs in humanitarian contexts beyond the European region. I’m speaking today on behalf of the Humanitarian UAV Network, which brings together seasoned humanitarian professionals with UAV experts to facilitate the use of UAVs in humanitarian settings. I’ll be saying more about the Humanitarian UAV Network (UAViators, pronounced “way-viators”) at the end of my talk.

Screen Shot 2014-06-24 at 2.21.19 PM

The view from above is key for humanitarian response. Indeed, satellite imagery has played an important role in relief operations since Hurricane Mitch in 1998. And the Indian Ocean Tsunami was the first disaster to be captured from space while the wave was still propagating. Some 650 images were produced using data from 15 different sensors. During the immediate aftermath of the Tsunami, satellite images were used at headquarters to assess the extent of the emergency. Later, satellite images were used directly in the field, distributed by the Humanitarian Information Center (HIC) and others to support and coordinate relief efforts.

Screen Shot 2014-06-24 at 2.21.30 PM

Satellites do present certain limitations, of course. These include cost, the time needed to acquire images, cloud cover, licensing issues and so on. In any event, two years after the Tsunami, an earlier iteration of the UN’s DRC Mission (MONUC) was supported by a European force (EUFOR), which used 4 Belgian UAVs. But I won’t be speaking about this type of UAV. For a variety of reasons, particularly affordability, ease of transport, regulatory concerns, and community engagement, the UAVs used in humanitarian response are smaller systems or micro-UAVs that weigh just a few kilograms, such as the fixed-wing model displayed below.

Screen Shot 2014-06-24 at 2.21.47 PM

The World Food Program’s UAVs were designed and built at the University of Torino “way back” in 2007. But they’ve been grounded until this year due to the lack of relevant legislation in Italy.

Screen Shot 2014-06-24 at 2.22.05 PM

In June 2014, the UN’s Office for the Coordination of Humanitarian Affairs (OCHA) purchased a small quadcopter for use in humanitarian response and advocacy. Incidentally, OCHA is on the Advisory Board of the Humanitarian UAV Network, or UAViators. 

Screen Shot 2014-06-24 at 2.22.41 PM

Now, there are many use cases for the operation of UAVs in humanitarian settings (those listed above are only a subset). All of you here at RPAS 2014 are already very familiar with these applications. So let me jump directly to real-world case studies from the Philippines and Haiti.

Screen Shot 2014-06-24 at 2.23.08 PM

Typhoon Haiyan, or Yolanda as it was known locally, was the most powerful Typhoon in recorded history to make landfall. The impact was absolutely devastating. I joined UN/OCHA in the Philippines following the Typhoon and was struck by how many UAV projects were being launched. What follows are just a few of those projects.

Screen Shot 2014-06-24 at 2.26.45 PM

Danoffice IT, a company based in Lausanne, Switzerland, used the Sky-Watch Huginn X1 Quadcopter to support the humanitarian response in Tacloban. The rotary-wing UAV was used to identify where NGOs could set up camp. Later on, the UAV was used to support a range of additional tasks such as identifying which roads were passable for transportation/logistics. The quadcopter was also flown up the coast to assess the damage from the storm surge and flooding and to determine which villages had been most affected. This served to speed up the relief efforts and made the response more targeted vis-a-vis the provision of resources and assistance. Danoffice IT is also on the Board of the Humanitarian UAV Network (UAViators).

Screen Shot 2014-06-24 at 2.27.06 PM

A second UAV project was carried out by a local UAV start-up called CorePhil DSI. The team used an eBee to capture aerial imagery of downtown Tacloban, one of the areas hardest hit by Typhoon Yolanda. They captured 22 Gigabytes of imagery and shared this with the Humanitarian OpenStreetMap Team (HOT), which is also on the Board of UAViators. HOT subsequently crowdsourced the tracing of this imagery (and satellite imagery) to create the most detailed and up-to-date maps of the area. These maps were shared with and used by multiple humanitarian organizations as well as the Filipino Government.

Screen Shot 2014-06-24 at 2.27.28 PM

In a third project, the Swiss humanitarian organization Medair partnered with Drone Adventures to create a detailed set of 2D maps and 3D terrain models of the disaster-affected areas in which Medair works. These images were used to inform the humanitarian organization’s recovery and reconstruction programs. To be sure, Medair used the maps and models of Tacloban and Leyte to assist in assessing where the greatest need was and what level of assistance should be given to affected families as they continued to recover. Having these accurate aerial images of the affected areas allowed the Swiss organization to address the needs of individual households and—equally importantly—to advocate on their behalf when necessary.

Screen Shot 2014-06-24 at 3.20.08 PM

Drone Adventures also flew their fixed-wing UAVs (eBees) over Dulag, just north of Leyte, where more than 80% of homes and croplands were destroyed during the Typhoon. Medair is providing both materials and expertise to help build new shelters in Dulag. So the aerial imagery is proving invaluable for identifying just how much material is needed and where. The captured imagery is also enabling community members themselves to better understand both where the greatest needs are and what the potential solutions might be.

Screen Shot 2014-06-24 at 2.27.55 PM

The partners are also committed to Open Data. The imagery captured was made available online and for free, enabling community leaders and humanitarian organizations to use the information to coordinate other reconstruction efforts. In addition, Drone Adventures and Medair presented locally-printed maps to community leaders within 24 hours of flying the UAVs. Some of these maps were printed on rollable, waterproof banners, making them more durable in the field.

Screen Shot 2014-06-24 at 2.28.11 PM

In yet another UAV project, the local Filipino start-up SkyEye Inc partnered with the University of the Philippines in Manila to develop expendable UAVs or xUAVs. The purpose of this initiative is to empower grassroots communities to deploy their own low-cost xUAVs and thus support locally-deployed response efforts. The team has trained 4 out of 5 teams across the Philippines to locally deploy UAVs in preparation for the next Typhoon season. In so doing, they are also transferring math, science and engineering skills to local communities. It is worth noting that community perceptions of UAVs in the Philippines and elsewhere have always been very positive. Indeed, local communities perceive small UAVs as toys more than anything else.

Screen Shot 2014-06-24 at 2.28.37 PM

SkyEye worked with this group from the University of Hawaii to create disaster risk reduction models of flood-prone areas.

Screen Shot 2014-06-24 at 2.29.22 PM

Moving to Haiti, the International Organization for Migration (IOM) has partnered with Drone Adventures and others to produce accurate topographical and 3D maps of disaster-prone areas. These aerial images have been used to inform disaster risk reduction and community resilience programs. The UAVs have also enabled IOM to assess destroyed houses and other types of damage caused by floods and droughts. In addition, UAVs have been used to monitor IDP camps, helping aid workers identify when shelters are empty and thus ready to be closed. Furthermore, the high-resolution aerial imagery has been used to support a census survey of public buildings, shelters, hospitals as well as schools.

Screen Shot 2014-06-24 at 2.29.46 PM

After Hurricane Sandy, for example, aerial imagery enabled IOM to very rapidly assess how many houses had collapsed near Rivière Grise and how many people were affected by the flooding. The aerial imagery was also used to identify areas of standing water where mosquitos and epidemics could easily thrive. Throughout their work with UAVs, IOM has stressed that regular community engagement has been critical for the successful use of UAVs. Indeed, informing local communities of the aerial mapping projects and explaining how the collected information is to be used is imperative. Local capacity building is also paramount, which is why Drone Adventures has trained a local team of Haitians to locally deploy and maintain their own eBee UAV.

Screen Shot 2014-06-24 at 2.30.27 PM

The pictures above and below are some of the information products produced by IOM and Drone Adventures. The 3D model above was used to model flood risk in the area and to inform subsequent disaster risk reduction projects.

Screen Shot 2014-06-24 at 2.30.47 PM

Several colleagues of mine have already noted that aerial imagery presents a Big Data challenge. This means that humanitarian organizations and others will need to use advanced computing (human computing and machine computing) to make sense of Big (Aerial) Data.

Screen Shot 2014-06-24 at 2.31.54 PM

My colleagues at the European Commission’s Joint Research Center (JRC) are already beginning to apply advanced computing to automatically analyze aerial imagery. In the example from Haiti below, the JRC deployed a machine learning classifier to automatically identify rubble left over from the massive earthquake that struck Port-au-Prince in 2010. Their classifier had an impressive accuracy of 92%, “suggesting that the method in its simplest form is sufficiently reliable for rapid damage assessment.”

Screen Shot 2014-06-24 at 2.32.06 PM

Human computing (or crowdsourcing) can also be used to make sense of Big Data. My team and I at QCRI have partnered with the UN (OCHA) to create the MicroMappers platform, which is a free and open-source tool to make sense of large datasets created during disasters, like aerial data. We have access to thousands of digital volunteers who can rapidly tag and trace aerial imagery; the resulting analysis of this tagging/tracing can be used to increase the situational awareness of humanitarian organizations in the field.

Screen Shot 2014-06-24 at 2.32.43 PM

 

Digital volunteers can trace features of interest such as shelters without roofs. Our plan is to subsequently use these traced features as training data to develop machine learning classifiers that can automatically identify these features in future aerial images. We’re also exploring the second use-case depicted below, i.e., the rapid transcription of imagery, which can then be automatically geo-tagged and added to a crisis map.
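A sketch of the kind of redundancy-based aggregation this volunteer tagging relies on (the thresholds and tag names are purely illustrative, not MicroMappers’ actual logic):

```python
from collections import Counter

def aggregate_tags(votes, min_votes=3, min_agreement=0.66):
    """Majority-vote aggregation of volunteer tags for one image.

    Returns the winning tag if enough volunteers agree, else None
    (meaning the image needs more votes or expert review).
    """
    if len(votes) < min_votes:
        return None
    tag, count = Counter(votes).most_common(1)[0]
    return tag if count / len(votes) >= min_agreement else None

# Three volunteers tag the same aerial photo:
print(aggregate_tags(["damaged", "damaged", "not_damaged"]))
```

Requiring a minimum number of votes before accepting any answer is what makes crowdsourced tagging robust to individual mistakes.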

Screen Shot 2014-06-24 at 2.32.55 PM

 

The increasing use of UAVs during humanitarian disasters is why UAViators, the Humanitarian UAV Network, was launched. Recall the relief operations in response to Typhoon Yolanda; an unprecedented number of UAV projects were in operation. But most operators didn’t know about each other, so they were not coordinating flights let alone sharing imagery with local communities. Since the launch of UAViators, we’ve developed the first-ever Code of Conduct for the use of UAVs in humanitarian settings, which includes guidelines on data protection and privacy. We have also drafted an Operational Check-List to educate those who are new to humanitarian UAVs. We are now in the process of carrying out a comprehensive evaluation of UAV models along with cameras, sensors, payload mechanisms and image processing software. The purpose of this evaluation is to identify which are the best fit for use by humanitarians in the field. Since the UN and others are looking for training and certification programs, we are actively seeking partners to provide these services.

Screen Shot 2014-06-24 at 2.34.04 PM

The above goals are all for the medium to long term. More immediately, UAViators is working to educate humanitarian organizations on both the opportunities and challenges of using UAVs in humanitarian settings. UAViators is also working to facilitate the coordination of UAV flights during major disasters, enabling operators to share their flight plans and contact details with each other via the UAViators website. We are also planning to set up an SMS service to enable direct communication between operators and others in the field during UAV flights. Lastly, we are developing an online map for operators to easily share the imagery/videos they are collecting during relief efforts.

Screen Shot 2014-06-24 at 2.34.36 PM

Data collection (imagery capture) is certainly not the only use case for UAVs in humanitarian contexts. The transportation of payloads may play an increasingly important role in the future. To be sure, my colleagues at UNICEF are actively exploring this with a number of partners in Africa.

Screen Shot 2014-06-24 at 2.34.47 PM

Other sensors also present additional opportunities for the use of UAVs in relief efforts. Sensors can be used to assess the impact of disasters on communication infrastructure, such as cell phone towers, for example. Groups are also looking into the use of UAVs to provide temporary communication infrastructure (“aerial cell phone towers”) following major disasters.

Screen Shot 2014-06-24 at 2.34.59 PM

The need for Sense and Avoid systems (a.k.a. Detection & Avoid solutions) has been highlighted in almost every other presentation given at RPAS 2014. We really need this new technology sooner rather than later (and that’s a major understatement). At the same time, it is important to emphasize that the main added value of UAVs in humanitarian settings is to capture imagery of areas that are overlooked or ignored by mainstream humanitarian relief operations; that is, of areas that are partially or completely disconnected logistically. By definition, disaster-affected communities in these areas are likely to be more vulnerable than others in urban areas. In addition, the airspaces in these disconnected regions are not complex airspaces and thus present fewer challenges around safety and coordination, for example.

Screen Shot 2014-06-24 at 2.35.19 PM

UAVs were ready to go following the mudslides in Oso, Washington back in March of this year. The UAVs were going to be used to look for survivors but the birds were not allowed to fly. The decision to ground UAVs and bar them from supporting relief and rescue efforts will become increasingly untenable when lives are at stake. I genuinely applaud the principle of proportionality applied by the EU and respective RPAS Associations vis-a-vis risks and regulations, but there is one very important variable missing in the proportionality equation: social benefit. Indeed, the cost benefit calculus of UAV risk & regulation in the context of humanitarian use must include the expected benefit of lives saved and suffering alleviated. Let me repeat this to make sure I’m crystal clear: risks must be weighed against potential lives saved.

Screen Shot 2014-06-24 at 2.35.39 PM

At the end of the day, the humanitarian context is different from precision agriculture or other commercial applications of UAVs such as film making. The latter have no relation to the Humanitarian Imperative. Having over-regulation stand in the way of humanitarian principles will simply become untenable. At the same time, the principle of Do No Harm must absolutely be upheld, which is why it features prominently in the Humanitarian UAV Network’s Code of Conduct. In sum, like the Do No Harm principle, the cost benefit analysis of proportionality must include potential or expected benefits as part of the calculus.

Screen Shot 2014-06-24 at 2.35.56 PM

To conclude, a new (forthcoming) policy brief by the UN (OCHA) publicly calls on humanitarian organizations to support initiatives like the Humanitarian UAV Network. This is an important, public endorsement of our work thus far. But we also need support from non-humanitarian organizations like those you represent in this room. For example, we need clarity on existing legislation. Our partners like the UN need to have access to the latest laws by country to inform their use of UAVs following major disasters. We really need your help on this; and we also need your help in identifying which UAVs and related technologies are likely to be a good fit for humanitarians in the field. So if you have some ideas, then please find me during the break; I’d really like to speak with you. Thank you!


See Also:

  • Crisis Map of UAV/Aerial Videos for Disaster Response [link]
  • How UAVs are Making a Difference in Disaster Response [link]
  • Humanitarians Using UAVs for Post Disaster Recovery [link]
  • Grassroots UAVs for Disaster Response [link]
  • Using UAVs for Search & Rescue [link]
  • Debrief: UAV/Drone Search & Rescue Challenge [link]
  • Crowdsourcing Analysis of UAV Imagery for Search/Rescue [link]
  • Check-List for Flying UAVs in Humanitarian Settings [link]

Picture Credits:

  • Danoffice IT; Drone Adventures, SkyEye, JRC

 

Automatically Analyzing UAV/Aerial Imagery from Haiti

My colleague Martino Pesaresi from the European Commission’s Joint Research Center (JRC) recently shared one of his co-authored studies with me on the use of advanced computing to analyze UAV (aerial) imagery. Given the rather technical nature of the title, “Rubble Detection from VHR Aerial Imagery Data Using Differential Morphological Profiles,” it is unlikely that many of my humanitarian colleagues have read the study. But the results have important implications for the development of next-generation humanitarian technologies that focus on very high resolution (VHR) aerial imagery captured by UAVs.

Credit: BBC News

As Martino and his co-authors note, “The presence of rubble in urban areas can be used as an indicator of building quality, poverty level, commercial activity, and others. In the case of armed conflict or natural disasters, rubble is seen as the trace of the event on the affected area. The amount of rubble and its density are two important attributes for measuring the severity of the event, in contribution to the overall crisis assessment. In the post-disaster time scale, accurate mapping of rubble in relation to the building type and location is of critical importance in allocating response teams and relief resources immediately after event. In the longer run, this information is used for post-disaster needs assessment, recovery planning and other relief activities on the affected region.”

Martino and team therefore developed an “automated method for the rapid detection and quantification of rubble from very high resolution aerial imagery of urban regions.” The first step in this model is to transfer the information depicted in images to “some hierarchical representation structure for indexing and fast component retrieval.” This simply means that aerial images need to be converted into a format that will make them “readable” by a computer. One way to do this is by converting said images into Max-Trees like the one below (which I find rather poetic).

max tree

The conversion of aerial images into Max Trees enables Martino and company to analyze and compare as many images as they’d like to identify which combination of nodes and branches represent rubble. This pattern enables the team to subsequently use advanced statistical techniques to identify the rest of the rubble in the remaining aerial images, as shown below. The heat maps on the right depict the result of the analysis, with the red shapes denoting areas that have a high probability of being rubble.
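The full Max-Tree construction is beyond the scope of a blog post, but the core idea, connected components at increasing gray levels nesting into a hierarchy, can be sketched in a few lines. This is an illustrative toy, not the JRC’s implementation; real max-tree algorithms use efficient union-find structures:

```python
import numpy as np

def threshold_components(img, level):
    """Count 4-connected components of pixels with value >= level."""
    mask = img >= level
    labels = -np.ones(img.shape, dtype=int)  # -1 means unlabeled
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start] != -1:
            continue
        # Flood fill one component.
        stack = [start]
        labels[start] = current
        while stack:
            r, c = stack.pop()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]
                        and mask[rr, cc] and labels[rr, cc] == -1):
                    labels[rr, cc] = current
                    stack.append((rr, cc))
        current += 1
    return labels, current

# A tiny "aerial image" with two bright blobs on a dark background:
img = np.array([[0, 0, 0, 0],
                [0, 3, 0, 2],
                [0, 3, 0, 2],
                [0, 0, 0, 0]])

# At level 0 everything is one component (the tree's root); at level 1
# the two blobs split into separate child nodes; at level 3 only the
# brighter blob survives.
for level in (0, 1, 3):
    _, n = threshold_components(img, level)
    print(level, n)
```

Linking each component to the component that contains it at the next-lower level is what turns these nested thresholds into the tree structure the paper exploits.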

rubble detector

The detection success rate of Martino et al.’s automated rubble detector was about 92%, “suggesting that the method in its simplest form is sufficiently reliable for rapid damage assessment.” The full study is available here and also appears in my forthcoming book “Digital Humanitarians: How Big Data Changes the Face of Disaster Response.”


 

See Also:

  • Welcome to the Humanitarian UAV Network [link]
  • How UAVs are Making a Difference in Disaster Response [link]
  • Humanitarians Using UAVs for Post Disaster Recovery [link]
  • Grassroots UAVs for Disaster Response [link]
  • Using UAVs for Search & Rescue [link]
  • Debrief: UAV/Drone Search & Rescue Challenge [link]
  • Crowdsourcing Analysis of UAV Imagery for Search/Rescue [link]
  • Check-List for Flying UAVs in Humanitarian Settings [link]

Using MicroMappers to Make Sense of UAV Imagery During Disasters

Aerial imagery will soon become a Big Data problem for humanitarian response—particularly oblique imagery. This was confirmed to me by a number of imagery experts in both the US (FEMA) and Europe (JRC). Aerial imagery taken at an angle is referred to as oblique imagery, as opposed to vertical imagery, which is taken by cameras pointing straight down (like satellite imagery). The team from Humanitarian OpenStreetMap (HOT) is already well equipped to make sense of vertical aerial imagery. They do this by microtasking the tracing of said imagery, as depicted below. So how do we rapidly analyze oblique images, which often provide more detail vis-a-vis infrastructure damage than vertical pictures?

HOTosm PH

One approach is to microtask the tagging of oblique images. This was carried out very successfully after Hurricane Sandy (screenshot below).

This solution did not include any tracing and was not designed to inform the development of machine learning classifiers that automatically identify features of interest, such as damaged buildings. Making sense of Big (Aerial) Data will ultimately require the combined use of human computing (microtasking) and machine learning. As volunteers use microtasking to trace features of interest such as damaged buildings pictured in oblique aerial imagery, perhaps machine learning algorithms can learn to detect these features automatically if enough examples of damaged buildings are provided. There is obviously value in doing automated feature detection with vertical imagery as well. So my team and I at QCRI have been collaborating with a local Filipino UAV start-up (SkyEye) to develop a new “Clicker” for our MicroMappers collection. We’ll be testing the “Aerial Clicker” below with our Filipino partners this summer. Incidentally, SkyEye is on the Advisory Board of the Humanitarian UAV Network (UAViators).
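One way volunteer traces could become training data is to rasterize each traced polygon into a per-pixel label mask that a learning algorithm can consume. A minimal ray-casting sketch (the rectangular trace and tile size are made up; a production pipeline would use a proper geometry library):

```python
import numpy as np

def polygon_mask(polygon, shape):
    """Rasterize a traced polygon into a binary label mask.

    polygon: list of (x, y) vertices in pixel coordinates.
    shape:   (rows, cols) of the image tile.
    Uses even-odd ray casting against pixel centers.
    """
    mask = np.zeros(shape, dtype=bool)
    n = len(polygon)
    for y in range(shape[0]):
        for x in range(shape[1]):
            px, py = x + 0.5, y + 0.5  # test the pixel center
            inside = False
            for i in range(n):
                x1, y1 = polygon[i]
                x2, y2 = polygon[(i + 1) % n]
                if (y1 > py) != (y2 > py):  # edge crosses the scanline
                    xint = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                    if px < xint:
                        inside = not inside
            mask[y, x] = inside
    return mask

# A volunteer traces a square "damaged roof" footprint on a 6x6 tile:
mask = polygon_mask([(1, 1), (5, 1), (5, 5), (1, 5)], (6, 6))
print(int(mask.sum()))  # 16 pixels labeled as damaged
```

Pairing such masks with the underlying image pixels yields exactly the kind of labeled examples a classifier needs.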

Aerial Clicker

Aerial Clicker 2

SkyEye is interested in developing a machine learning classifier to automatically identify coconut trees, for example. Why? Because coconut trees are an important source of livelihood for many Filipinos. Being able to rapidly identify trees that are still standing versus uprooted would enable SkyEye to quickly assess the impact of a Typhoon on local agriculture, which is important for food security & long-term recovery. So we hope to use the Aerial Clicker to microtask the tracing of coconut trees in order to significantly improve the accuracy of the machine learning classifier that SkyEye has already developed.

Will this be successful? One way to find out is by experimenting. I realize that developing a “visual version” of AIDR is anything but trivial. While AIDR was developed to automatically identify tweets (i.e., text) of interest during disasters by using microtasking and machine learning, what if we also had a free and open source platform to microtask and then automatically identify visual features of interest in both vertical and oblique imagery captured by UAVs? To be honest, I’m not sure how feasible this is vis-a-vis oblique imagery. As an imagery analyst at FEMA recently told me, this is still a research question for now. So I’m hoping to take this research on at QCRI but I do not want to duplicate any existing efforts in this space. So I would be grateful for feedback on this idea and any related research that iRevolution readers may recommend.

In the meantime, here’s another idea I’m toying with for the Aerial Clicker:

Aerial Clicker 3

I often see this in the aftermath of major disasters: affected communities turning to “analog social media” to communicate when cell phone towers are down. The aerial imagery above was taken following Typhoon Yolanda in the Philippines. And this is just one of several dozen images with analog media messages that I came across. So what if our Aerial Clicker were to ask digital volunteers to transcribe or categorize these messages? This would enable us to quickly create a crisis map of needs based on said content since every image is already geo-referenced.
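Since every image is already geo-referenced, turning transcriptions into a crisis map layer could be as simple as emitting GeoJSON. A sketch with a hypothetical record schema and illustrative coordinates:

```python
import json

def to_crisis_map(transcriptions):
    """Turn transcribed, geo-referenced images into a GeoJSON
    FeatureCollection that any web map can display.

    transcriptions: list of dicts with 'lat', 'lon', 'text' keys
    (an assumed schema for the transcribed messages).
    """
    return {
        "type": "FeatureCollection",
        "features": [
            {
                "type": "Feature",
                # GeoJSON coordinate order is [longitude, latitude].
                "geometry": {"type": "Point",
                             "coordinates": [t["lon"], t["lat"]]},
                "properties": {"message": t["text"]},
            }
            for t in transcriptions
        ],
    }

# e.g. a roadside message photographed near Tacloban (made-up coords):
fc = to_crisis_map([{"lat": 11.24, "lon": 125.0,
                     "text": "HELP. WATER. FOOD."}])
print(json.dumps(fc, indent=2)[:60])
```

Because GeoJSON is an open standard, the resulting file could be dropped straight onto most existing crisis mapping platforms.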


See Also:

  • Welcome to the Humanitarian UAV Network [link]
  • How UAVs are Making a Difference in Disaster Response [link]
  • Humanitarians Using UAVs for Post Disaster Recovery [link]
  • Grassroots UAVs for Disaster Response [link]
  • Using UAVs for Search & Rescue [link]
  • Debrief: UAV/Drone Search & Rescue Challenge [link]
  • Crowdsourcing Analysis of UAV Imagery for Search/Rescue [link]
  • Check-List for Flying UAVs in Humanitarian Settings [link]

Calling all UAV Pilots: Want to Support Humanitarian Efforts?

I’m launching a volunteer network to connect responsible civilian UAV pilots who are interested in safely and legally supporting humanitarian efforts when the need arises. I’ve been thinking through the concept for months now and have benefited from great feedback. The result is this draft strategy document; the keyword being draft. The concept is still being developed and there’s still room for improvement. So I very much welcome more constructive feedback.

Click here to join the list-serve for this initiative, which I’m referring to as the Humanitarian UAViators Network. Thank you for sharing this project far and wide—it will only work if we get a critical mass of UAV pilots from all around the world. Of course, launching such a network raises more questions than answers, but I welcome the challenge and believe members of UAViators will be well placed to address and manage these challenges.


How UAVs Are Making a Difference in Disaster Response

I visited the University of Torino in 2007 to speak with the team developing UAVs for the World Food Program. Since then, I’ve bought and tested two small UAVs of my own so I can use this new technology to capture aerial imagery during disasters; like the footage below from the Philippines.

UAVs, or drones, have a very strong military connotation for many of us. But so did space satellites before Google Earth brought satellite imagery into our homes and changed our perceptions of said technology. So it stands to reason that UAVs and aerial imagery will follow suit. This explains why I’m a proponent of the Drone Social Innovation Award, which seeks to promote the use of civilian drone technology for the benefit of humanity. I’m on the panel of judges for this award, which is why I reached out to DanOffice IT, a Swiss-based company that deployed two drones in response to Typhoon Yolanda in the Philippines. The drones in question are Huginn X1s, which have a flight time of 25 minutes, a range of 2 kilometers and a maximum altitude of 150 meters.

HUGINN X1

I recently spoke with one of the Huginn pilots who was in Tacloban. He flew the drone to survey shelter damage, identify blocked roads and search for bodies in the debris (using thermal imaging cameras mounted on the drone for the latter). The imagery captured also helped to identify appropriate locations to set up camp. When I asked the pilot whether he was surprised by anything during the operation, he noted that road-clearance support was not a use-case he had expected. I’ll be meeting with him in Switzerland in the next few weeks to test-fly a Huginn and explore possible partnerships.

I’d like to see closer collaboration between the Digital Humanitarian Network (DHN) and groups like DanOffice, for example. Providing DHN-member Humanitarian OpenStreetMap (HOTosm) with up-to-date aerial imagery during disasters would be a major win. This was the concept behind OpenAerialMap, which was first discussed back in 2007. While the initiative has yet to formally launch, PIX4D is a platform that “converts thousands of aerial images, taken by lightweight UAV or aircraft into geo-referenced 2D mosaics and 3D surface models and point clouds.”

Drone Adventures

This platform was used in Haiti with the above drones. The International Organization for Migration (IOM) partnered with Drone Adventures to map over 40 square kilometers of dense urban territory including several shantytowns in Port-au-Prince, which was “used to count the number of tents and organize a ‘door-to-door’ census of the population, the first step in identifying aid requirements and organizing more permanent infrastructure.” This approach could also be applied to IDP and refugee camps in the immediate aftermath of a sudden-onset disaster. All the data generated by Drone Adventures was made freely available through OpenStreetMap.

If you’re interested in giving “drones for social good” a try, I recommend looking at the DJI Phantom and the AR.Drone Parrot. These are priced between $300 and $600, which beats the $50,000 price tag of the Huginn X1.


Crowdsourcing the Evaluation of Post-Sandy Building Damage Using Aerial Imagery

Update (Nov 2): 5,739 aerial images tagged by over 3,000 volunteers. Please keep up the outstanding work!

My colleague Schuyler Erle from Humanitarian OpenStreetMap just launched a very interesting effort in response to Hurricane Sandy. He shared the info below via CrisisMappers earlier this morning, which I’m turning into this blog post to help him recruit more volunteers.

Schuyler and team just got their hands on the Civil Air Patrol’s (CAP) super high resolution aerial imagery of the disaster affected areas. They’ve imported this imagery into their Micro-Tasking Server MapMill created by Jeff Warren and are now asking volunteers to help tag the images in terms of the damage depicted in each photo. “The 531 images on the site were taken from the air by CAP over New York, New Jersey, Rhode Island, and Massachusetts on 31 Oct 2012.”

To access this platform, simply click here: http://sandy.hotosm.org. If that link doesn’t work, please try sandy.locative.us.

“For each photo shown, please select ‘ok’ if no building or infrastructure damage is evident; please select ‘not ok’ if some damage or flooding is evident; and please select ‘bad’ if buildings etc. seem to be significantly damaged or underwater. Our *hope* is that the aggregation of the ok/not ok/bad ratings can be used to help guide FEMA resource deployment, or so was indicated might be the case during RELIEF at Camp Roberts this summer.”

A disaster response professional working in the affected areas for FEMA replied (via CrisisMappers) to Schuyler’s efforts to confirm that:

“[G]overnment agencies are working on exploiting satellite imagery for damage assessments and flood extents. The best way that you can help is to help categorize photos using the tool Schuyler provides [...].  CAP imagery is critical to our decision making as they are able to work around some of the limitations with satellite imagery so that we can get an area of where the worst damage is. Due to the size of this event there is an overwhelming amount of imagery coming in, your assistance will be greatly appreciated and truly aid in response efforts.  Thank you all for your willingness to help.”

Schuyler notes that volunteers can click on the Grid link from the home page of the Micro-Tasking platform to “zoom in to the coastlines of Massachusetts or New Jersey” and see “judgements about building damages beginning to aggregate in US National Grid cells, which is what FEMA use operationally. Again, the idea and intention is that, as volunteers judge the level of damage evident in each photo, the heat map will change color and indicate at a glance where the worst damage has occurred.” See above screenshot.
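The per-cell aggregation Schuyler describes could be sketched as follows, with assumed numeric weights for the three ratings and illustrative grid cell identifiers (real cells would be US National Grid references):

```python
from collections import defaultdict

# Assumed damage weights for the three volunteer ratings.
WEIGHT = {"ok": 0.0, "not ok": 0.5, "bad": 1.0}

def grid_heat(ratings):
    """Average volunteer ratings per grid cell.

    ratings: iterable of (grid_cell_id, rating) pairs.
    Returns {cell_id: mean damage score in [0, 1]}, which a heat map
    can render directly (higher score = darker cell).
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for cell, rating in ratings:
        sums[cell] += WEIGHT[rating]
        counts[cell] += 1
    return {cell: sums[cell] / counts[cell] for cell in sums}

votes = [("18TWL8040", "bad"), ("18TWL8040", "bad"),
         ("18TWL8040", "not ok"),
         ("18TWL8550", "ok"), ("18TWL8550", "ok")]
print(grid_heat(votes))
```

Averaging over all photos in a cell smooths out individual mis-ratings, which is why the heat map only becomes reliable as more volunteers contribute.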

Even if you just spend 5 or 10 minutes tagging the imagery, this will still go a long way to supporting FEMA’s response efforts. You can also help by spreading the word and recruiting others to your cause. Thank you!

The Best Way to Crowdsource Satellite Imagery Analysis for Disaster Response

My colleague Kirk Morris recently pointed me to this very neat study on iterative versus parallel models of crowdsourcing for the analysis of satellite imagery. The study was carried out by French researcher & engineer Nicolas Maisonneuve for the GIScience 2012 conference.

Nicolas finds that after reaching a certain threshold, adding more volunteers to the parallel model does “not change the representativeness of opinion and thus will not change the consensual output.” His analysis also shows that the value of this threshold has a significant impact on the resulting quality of the parallel work and thus should be chosen carefully. In terms of the iterative approach, Nicolas finds that “the first iterations have a high impact on the final results due to a path dependency effect.” To this end, “stronger commitment during the first steps are thus a primary concern for using such model,” which means that “asking expert/committed users to start” is important.

Nicolas’s study also reveals that the parallel approach is better able to correct wrong annotations (wrong analysis of the satellite imagery) than the iterative model for images that are fairly straightforward to interpret. In contrast, the iterative model is better suited for handling more ambiguous imagery. But there is a catch: the potential path dependency effect in the iterative model means that “mistakes could be propagated, generating more easily type I errors as the iterations proceed.” In terms of spatial coverage, the iterative model is more efficient since the parallel model leverages redundancy to ensure data quality. Still, Nicolas concludes that the “parallel model provides an output which is more reliable than that of a basic iterative [because] the latter is sensitive to vandalism or knowledge destruction.”

So the question that naturally follows is this: how can parallel and iterative methodologies be combined to produce a better overall result? Perhaps the parallel approach could be used as the default to begin with. However, images that are considered difficult to interpret would get pushed from the parallel workflow to the iterative workflow. The latter would first be processed by experts in order to create favorable path dependency. Could this hybrid approach be the winning strategy?
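That hybrid routing can be sketched as a simple rule: accept the parallel majority vote when volunteer agreement is high, and escalate ambiguous images to an expert-seeded iterative pass. The agreement threshold and tag names below are purely illustrative:

```python
from collections import Counter

def route(votes, agreement_threshold=0.8):
    """Hybrid crowdsourcing sketch.

    Returns ("parallel", tag) when volunteers largely agree on a tag,
    or ("iterative", None) to flag the image for an expert-seeded
    iterative workflow.
    """
    tag, count = Counter(votes).most_common(1)[0]
    if count / len(votes) >= agreement_threshold:
        return ("parallel", tag)
    return ("iterative", None)

print(route(["rubble"] * 5))                # clear image stays parallel
print(route(["rubble", "shadow", "rubble",  # ambiguous image escalates
             "roof", "shadow"]))
```

Routing only the contested images to the iterative workflow keeps the expensive expert-first pass small while preserving the parallel model’s resistance to individual errors.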