Tag Archives: Imagery

Automatically Analyzing UAV/Aerial Imagery from Haiti

My colleague Martino Pesaresi from the European Commission’s Joint Research Centre (JRC) recently shared one of his co-authored studies with me on the use of advanced computing to analyze UAV (aerial) imagery. Given the rather technical nature of the title, “Rubble Detection from VHR Aerial Imagery Data Using Differential Morphological Profiles,” it is unlikely that many of my humanitarian colleagues have read the study. But the results have important implications for the development of next generation humanitarian technologies that focus on very high resolution (VHR) aerial imagery captured by UAVs.

[Image credit: BBC News]

As Martino and his co-authors note, “The presence of rubble in urban areas can be used as an indicator of building quality, poverty level, commercial activity, and others. In the case of armed conflict or natural disasters, rubble is seen as the trace of the event on the affected area. The amount of rubble and its density are two important attributes for measuring the severity of the event, in contribution to the overall crisis assessment. In the post-disaster time scale, accurate mapping of rubble in relation to the building type and location is of critical importance in allocating response teams and relief resources immediately after event. In the longer run, this information is used for post-disaster needs assessment, recovery planning and other relief activities on the affected region.”

Martino and team therefore developed an “automated method for the rapid detection and quantification of rubble from very high resolution aerial imagery of urban regions.” The first step in this model is to transfer the information depicted in images to “some hierarchical representation structure for indexing and fast component retrieval.” This simply means that aerial images need to be converted into a format that will make them “readable” by a computer. One way to do this is by converting said images into Max-Trees like the one below (which I find rather poetic).

[Image: example of a Max-Tree representation]
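For the technically inclined, here is a minimal sketch of what this conversion looks like in practice, using the component-tree (Max-Tree) implementation in scikit-image. To be clear, this illustrates the data structure only and is not the team’s actual code; the input file name is hypothetical.

```python
import numpy as np
from skimage import io
from skimage.morphology import max_tree

# Load a single tile as 8-bit grayscale (the file name is hypothetical).
image = (io.imread("aerial_tile.png", as_gray=True) * 255).astype(np.uint8)

# parent[i] is the ravelled index of pixel i's parent in the tree;
# traverser lists pixels so that every parent precedes its children.
parent, traverser = max_tree(image, connectivity=2)

# Tree nodes are the "canonical" pixels whose gray level differs from their
# parent's; per-node attributes (area, shape) are computed on these pixels
# in attribute-filtering pipelines.
flat, par = image.ravel(), parent.ravel()
nodes = [i for i in traverser if flat[i] != flat[par[i]]]
print(f"{len(nodes)} Max-Tree nodes for {image.size} pixels")
```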

Converting aerial images into Max-Trees enables Martino and team to analyze and compare large numbers of images in order to identify which combinations of nodes and branches represent rubble. These patterns then allow the team to use advanced statistical techniques to identify the rest of the rubble in the remaining aerial images, as shown below. The heat maps on the right depict the results of the analysis, with the red shapes denoting areas that have a high probability of being rubble.

[Image: rubble detection results]
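And here is a hedged sketch of the general recipe behind differential morphological profiles: per-pixel features computed from openings and closings at increasing scales, fed to an off-the-shelf classifier. This is the standard DMP idea rather than the paper’s exact method, and all inputs (tiles, hand-made label masks) are hypothetical.

```python
import numpy as np
from skimage.morphology import disk, opening, closing
from sklearn.ensemble import RandomForestClassifier

def differential_morphological_profile(img, radii=(2, 4, 8, 16)):
    """Per-pixel |difference| between successive openings and closings."""
    opens = [img.astype(float)] + [opening(img, disk(r)).astype(float) for r in radii]
    closes = [img.astype(float)] + [closing(img, disk(r)).astype(float) for r in radii]
    feats = [np.abs(b - a) for a, b in zip(opens[:-1], opens[1:])]
    feats += [np.abs(b - a) for a, b in zip(closes[:-1], closes[1:])]
    return np.stack(feats, axis=-1)          # shape: H x W x (2 * len(radii))

def train_rubble_classifier(img, labels):
    """img: grayscale tile; labels: same-shape mask, 1 = rubble (hand-annotated)."""
    dmp = differential_morphological_profile(img)
    X = dmp.reshape(-1, dmp.shape[-1])
    clf = RandomForestClassifier(n_estimators=100)
    return clf.fit(X, labels.ravel())

# On a new tile, clf.predict_proba(...)[:, 1] reshaped back to H x W gives
# the kind of per-pixel rubble-probability heat map shown above.
```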

The detection success rate of Martino et al.’s automated rubble detector was about 92%, “suggesting that the method in its simplest form is sufficiently reliable for rapid damage assessment.” The full study is available here and also appears in my forthcoming book “Digital Humanitarians: How Big Data Changes the Face of Disaster Response.”


See Also:

  • Welcome to the Humanitarian UAV Network [link]
  • How UAVs are Making a Difference in Disaster Response [link]
  • Humanitarians Using UAVs for Post Disaster Recovery [link]
  • Grassroots UAVs for Disaster Response [link]
  • Using UAVs for Search & Rescue [link]
  • Debrief: UAV/Drone Search & Rescue Challenge [link]
  • Crowdsourcing Analysis of UAV Imagery for Search/Rescue [link]
  • Check-List for Flying UAVs in Humanitarian Settings [link]

Using MicroMappers to Make Sense of UAV Imagery During Disasters

Aerial imagery will soon become a Big Data problem for humanitarian response, particularly oblique imagery. This was confirmed to me by a number of imagery experts in both the US (FEMA) and Europe (JRC). Aerial imagery taken at an angle is referred to as oblique imagery, in contrast to vertical imagery, which is captured by cameras pointing straight down (as with satellite imagery). The team at Humanitarian OpenStreetMap (HOT) is already well equipped to make sense of vertical aerial imagery. They do this by microtasking the tracing of said imagery, as depicted below. So how do we rapidly analyze oblique images, which often provide more detail on infrastructure damage than vertical pictures?

[Image: Humanitarian OpenStreetMap tracing, Philippines]

One approach is to microtask the tagging of oblique images. This was carried out very successfully after Hurricane Sandy.

This solution did not include any tracing, however, nor was it designed to inform the development of machine learning classifiers that could automatically identify features of interest, such as damaged buildings. Making sense of Big (Aerial) Data will ultimately require the combined use of human computing (microtasking) and machine learning: as volunteers trace features of interest like damaged buildings in oblique aerial imagery, machine learning algorithms could learn to detect those same features automatically, provided enough training examples are generated. There is obviously value in doing automated feature detection with vertical imagery as well. So my team and I at QCRI have been collaborating with a local Filipino UAV start-up (SkyEye) to develop a new “Clicker” for our MicroMappers collection. We’ll be testing the “Aerial Clicker” below with our Filipino partners this summer. Incidentally, SkyEye is on the Advisory Board of the Humanitarian UAV Network (UAViators).

[Images: Aerial Clicker screenshots]

SkyEye is interested in developing a machine learning classifier to automatically identify coconut trees, for example. Why? Because coconut trees are an important source of livelihood for many Filipinos. Being able to rapidly distinguish trees that are still standing from those that have been uprooted would enable SkyEye to quickly assess the impact of a typhoon on local agriculture, which is important for food security and long-term recovery. So we hope to use the Aerial Clicker to microtask the tracing of coconut trees in order to significantly improve the accuracy of the machine learning classifier that SkyEye has already developed.
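To make this concrete, here is a hypothetical sketch of how traced outlines could become training data: rasterize the volunteer traces into a mask, cut each image into patches, and label patches by their overlap with the mask. None of this is SkyEye’s actual pipeline; every name and threshold below is an assumption.

```python
import numpy as np
from skimage.draw import polygon2mask
from sklearn.linear_model import LogisticRegression

def patches_and_labels(image, traces, size=32):
    """traces: list of (N, 2) arrays of (row, col) vertices per traced tree."""
    mask = np.zeros(image.shape[:2], dtype=bool)
    for verts in traces:
        mask |= polygon2mask(image.shape[:2], verts)
    X, y = [], []
    for r in range(0, image.shape[0] - size + 1, size):
        for c in range(0, image.shape[1] - size + 1, size):
            X.append(image[r:r + size, c:c + size].ravel())
            # A patch counts as "coconut tree" if a quarter of it is traced.
            y.append(int(mask[r:r + size, c:c + size].mean() > 0.25))
    return np.array(X), np.array(y)

# With enough traced examples, even a simple baseline becomes possible:
# X, y = patches_and_labels(tile, volunteer_traces)
# clf = LogisticRegression(max_iter=1000).fit(X, y)
```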

Will this be successful? One way to find out is by experimenting. I realize that developing a “visual version” of AIDR is anything but trivial. AIDR was developed to automatically identify tweets (i.e., text) of interest during disasters by using microtasking and machine learning; what if we also had a free and open source platform to microtask and then automatically identify visual features of interest in both vertical and oblique imagery captured by UAVs? To be honest, I’m not sure how feasible this is for oblique imagery; as an imagery analyst at FEMA recently told me, this is still a research question for now. I’m hoping to take this research on at QCRI, but I do not want to duplicate any existing efforts in this space, so I would be grateful for feedback on this idea and any related research that iRevolution readers may recommend.

In the meantime, here’s another idea I’m toying with for the Aerial Clicker:

[Image: Aerial Clicker mockup]

I often see this in the aftermath of major disasters: affected communities turning to “analog social media” to communicate when cell phone towers are down. The aerial imagery above was taken following Typhoon Yolanda in the Philippines, and it is just one of several dozen images with analog media messages that I came across. So what if our Aerial Clicker were to ask digital volunteers to transcribe or categorize these messages? This would enable us to quickly create a crisis map of needs based on said content, since every image is already geo-referenced. Thoughts?
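Since every image is geo-referenced, turning volunteer transcriptions into a crisis map is largely a matter of serializing them. Here is a minimal sketch using GeoJSON; the records are hypothetical.

```python
import json

# Hypothetical transcription records produced by Aerial Clicker volunteers.
transcriptions = [
    {"lat": 11.2447, "lon": 125.0036, "text": "HELP. WE NEED FOOD & WATER",
     "category": "food/water"},
]

# Each record becomes a GeoJSON point feature carrying the message.
features = [
    {"type": "Feature",
     "geometry": {"type": "Point", "coordinates": [t["lon"], t["lat"]]},
     "properties": {"message": t["text"], "category": t["category"]}}
    for t in transcriptions
]

with open("needs_map.geojson", "w") as f:
    json.dump({"type": "FeatureCollection", "features": features}, f)
```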


See Also:

  • Welcome to the Humanitarian UAV Network [link]
  • How UAVs are Making a Difference in Disaster Response [link]
  • Humanitarians Using UAVs for Post Disaster Recovery [link]
  • Grassroots UAVs for Disaster Response [link]
  • Using UAVs for Search & Rescue [link]
  • Debrief: UAV/Drone Search & Rescue Challenge [link]
  • Crowdsourcing Analysis of UAV Imagery for Search/Rescue [link]
  • Check-List for Flying UAVs in Humanitarian Settings [link]

Calling all UAV Pilots: Want to Support Humanitarian Efforts?

I’m launching a volunteer network to connect responsible civilian UAV pilots who are interested in safely and legally supporting humanitarian efforts when the need arises. I’ve been thinking through the concept for months now and have benefited from great feedback. The result is this draft strategy document; the keyword being draft. The concept is still being developed and there is plenty of room for improvement, so I very much welcome more constructive feedback.

Click here to join the listserv for this initiative, which I’m referring to as the Humanitarian UAViators Network. Thank you for sharing this project far and wide; it will only work if we reach a critical mass of UAV pilots from all around the world. Of course, launching such a network raises more questions than answers, but I welcome the challenge and believe members of UAViators will be well placed to address and manage these challenges.


How UAVs Are Making a Difference in Disaster Response

I visited the University of Torino in 2007 to speak with the team developing UAVs for the World Food Program. Since then, I’ve bought and tested two small UAVs of my own so I can use this new technology to capture aerial imagery during disasters, like the footage below from the Philippines.

UAVs, or drones, have a very strong military connotation for many of us. But so did space satellites before Google Earth brought satellite imagery into our homes and changed our perceptions of said technology. So it stands to reason that UAVs and aerial imagery will follow suit. This explains why I’m a proponent of the Drone Social Innovation Award, which seeks to promote the use of civilian drone technology for the benefit of humanity. I’m on the panel of judges for this award, which is why I reached out to DanOffice IT, a Swiss-based company that deployed two drones in response to Typhoon Yolanda in the Philippines. The drones in question are Huginn X1s, which have a flight time of 25 minutes, a range of 2 kilometers, and a maximum altitude of 150 meters.

[Image: Huginn X1]

I recently spoke with one of the Huginn pilots who was in Tacloban. He flew the drone to survey shelter damage, identify blocked roads and search for bodies in the debris (using thermal imaging cameras mounted on the drone for the latter). The imagery captured also helped to identify appropriate locations to set up camp. When I asked the pilot whether he was surprised by anything during the operation, he noted that road-clearance support was not a use-case he had expected. I’ll be meeting with him in Switzerland in the next few weeks to test-fly a Huginn and explore possible partnerships.

I’d like to see closer collaboration between the Digital Humanitarian Network (DHN) and groups like DanOffice, for example. Providing DHN-member Humanitarian OpenStreetMap (HOTosm) with up-to-date aerial imagery during disasters would be a major win. This was the concept behind OpenAerialMap, which was first discussed back in 2007. While that initiative has yet to formally launch, PIX4D offers a platform that “converts thousands of aerial images, taken by lightweight UAV or aircraft into geo-referenced 2D mosaics and 3D surface models and point clouds.”

[Image: Drone Adventures UAVs]

This platform was used in Haiti with the above drones. The International Organization for Migration (IOM) partnered with Drone Adventures to map over 40 square kilometers of dense urban territory including several shantytowns in Port-au-Prince, which was “used to count the number of tents and organize a ‘door-to-door’ census of the population, the first step in identifying aid requirements and organizing more permanent infrastructure.” This approach could also be applied to IDP and refugee camps in the immediate aftermath of a sudden-onset disaster. All the data generated by Drone Adventures was made freely available through OpenStreetMap.

If you’re interested in giving “drones for social good” a try, I recommend looking at the DJI Phantom and the Parrot AR.Drone. These are priced between $300 and $600, which beats the $50,000 price tag of the Huginn X1.


Crowdsourcing the Evaluation of Post-Sandy Building Damage Using Aerial Imagery

Update (Nov 2): 5,739 aerial images tagged by over 3,000 volunteers. Please keep up the outstanding work!

My colleague Schuyler Erle from Humanitarian OpenStreetMap just launched a very interesting effort in response to Hurricane Sandy. He shared the info below via CrisisMappers earlier this morning, which I’m turning into this blog post to help him recruit more volunteers.

Schuyler and team just got their hands on the Civil Air Patrol’s (CAP) super high resolution aerial imagery of the disaster-affected areas. They’ve imported this imagery into MapMill, their microtasking server created by Jeff Warren, and are now asking volunteers to help tag each image according to the damage it depicts. “The 531 images on the site were taken from the air by CAP over New York, New Jersey, Rhode Island, and Massachusetts on 31 Oct 2012.”

To access this platform, simply click here: http://sandy.hotosm.org. If that link doesn’t work, please try sandy.locative.us.

“For each photo shown, please select ‘ok’ if no building or infrastructure damage is evident; please select ‘not ok’ if some damage or flooding is evident; and please select ‘bad’ if buildings etc. seem to be significantly damaged or underwater. Our *hope* is that the aggregation of the ok/not ok/bad ratings can be used to help guide FEMA resource deployment, or so was indicated might be the case during RELIEF at Camp Roberts this summer.”

A disaster response professional working in the affected areas for FEMA replied to Schuyler (via CrisisMappers), confirming that:

“[G]overnment agencies are working on exploiting satellite imagery for damage assessments and flood extents. The best way that you can help is to help categorize photos using the tool Schuyler provides […].  CAP imagery is critical to our decision making as they are able to work around some of the limitations with satellite imagery so that we can get an area of where the worst damage is. Due to the size of this event there is an overwhelming amount of imagery coming in, your assistance will be greatly appreciated and truly aid in response efforts.  Thank you all for your willingness to help.”

Schuyler notes that volunteers can click on the Grid link from the home page of the Micro-Tasking platform to “zoom in to the coastlines of Massachusetts or New Jersey” and see “judgements about building damages beginning to aggregate in US National Grid cells, which is what FEMA use operationally. Again, the idea and intention is that, as volunteers judge the level of damage evident in each photo, the heat map will change color and indicate at a glance where the worst damage has occurred.”
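For the technically curious, here is a rough sketch of that aggregation logic: map each photo’s ok/not ok/bad votes onto a damage score, average per photo, and then roll the per-photo scores up into grid cells. The vote records, score weights, and cell IDs below are my own assumptions for illustration, not MapMill’s actual code.

```python
from collections import defaultdict
from statistics import mean

SCORE = {"ok": 0.0, "not ok": 0.5, "bad": 1.0}

# Hypothetical (photo_id, grid_cell, vote) records from volunteers.
votes = [
    ("IMG_0001", "18TWL8040", "bad"),
    ("IMG_0001", "18TWL8040", "not ok"),
    ("IMG_0002", "18TWL8041", "ok"),
]

# Step 1: one consensus score per photo (mean of its votes).
by_photo = defaultdict(list)
for photo, cell, vote in votes:
    by_photo[(photo, cell)].append(SCORE[vote])

# Step 2: roll photo scores up into grid cells for the heat map.
by_cell = defaultdict(list)
for (photo, cell), scores in by_photo.items():
    by_cell[cell].append(mean(scores))

for cell, scores in sorted(by_cell.items()):
    print(cell, round(mean(scores), 2))   # cell-level value drives the heat map
```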

Even if you just spend 5 or 10 minutes tagging the imagery, this will still go a long way toward supporting FEMA’s response efforts. You can also help by spreading the word and recruiting others to the cause. Thank you!

The Best Way to Crowdsource Satellite Imagery Analysis for Disaster Response

My colleague Kirk Morris recently pointed me to this very neat study on iterative versus parallel models of crowdsourcing for the analysis of satellite imagery. The study was carried out by French researcher and engineer Nicolas Maisonneuve for the GIScience 2012 conference.

Nicolas finds that after reaching a certain threshold, adding more volunteers to the parallel model does “not change the representativeness of opinion and thus will not change the consensual output.” His analysis also shows that the value of this threshold has a significant impact on the resulting quality of the parallel work and should thus be chosen carefully. In terms of the iterative approach, Nicolas finds that “the first iterations have a high impact on the final results due to a path dependency effect.” To this end, “stronger commitment during the first steps are thus a primary concern for using such model,” which means that “asking expert/committed users to start” is important.

Nicolas’s study also reveals that the parallel approach is better able to correct wrong annotations (wrong analysis of the satellite imagery) than the iterative model for images that are fairly straightforward to interpret. In contrast, the iterative model is better suited for handling more ambiguous imagery. But there is a catch: the potential path dependency effect in the iterative model means that “mistakes could be propagated, generating more easily type I errors as the iterations proceed.” In terms of spatial coverage, the iterative model is more efficient, since the parallel model leverages redundancy to ensure data quality. Still, Nicolas concludes that the “parallel model provides an output which is more reliable than that of a basic iterative [because] the latter is sensitive to vandalism or knowledge destruction.”

So the question that naturally follows is this: how can parallel and iterative methodologies be combined to produce a better overall result? Perhaps the parallel approach could be used as the default to begin with, while images that are considered difficult to interpret get pushed from the parallel workflow to the iterative workflow. The latter would first be processed by experts in order to create favorable path dependency. Could this hybrid approach be the winning strategy?
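Here is a quick sketch of what that hybrid routing might look like in code; the labels, vote records, and agreement threshold are all assumptions made for illustration.

```python
from collections import Counter

def route(image_id, votes, agreement_threshold=0.7):
    """votes: list of labels from independent (parallel) annotators."""
    top_label, top_count = Counter(votes).most_common(1)[0]
    agreement = top_count / len(votes)
    if agreement >= agreement_threshold:
        return ("accept", top_label)          # parallel consensus is enough
    # Ambiguous image: hand off to the iterative workflow, seeded by an
    # expert so that path dependency works for us rather than against us.
    return ("iterate", {"image": image_id, "first_annotator": "expert"})

print(route("tile_42", ["damaged", "damaged", "intact", "damaged"]))
print(route("tile_43", ["damaged", "intact", "flooded", "intact"]))
```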

Imagery and Humanitarian Assistance: Gems, Errors and Omissions

The Center for Technology and National Security Policy based at National Defense University’s Institute for National Strategic Studies just published an 88-page report entitled “Constructive Convergence: Imagery and Humanitarian Assistance.” As noted by the author, “the goal of this paper is to illustrate to the technical community and interested humanitarian users the breadth of the tools and techniques now available for imagery collection, analysis, and distribution, and to provide brief recommendations with suggestions for next steps.” In addition, the report “presents a brief overview of the growing power of imagery, especially from volunteers and victims in disasters, and its place in emergency response. It also highlights an increasing technical convergence between professional and volunteer responders—and its limits.”

The study contains a number of really interesting gems, just a few errors, and some surprising omissions. The point of this blog post is not to criticize but rather to provide constructive (and hopefully useful) feedback should the report be updated in the future.

Let’s begin with the important gems, excerpted below.

“The most serious issues overlooked involve liability protections by both the publishers and sources of imagery and its data. As far as our research shows there is no universally adopted Good Samaritan law that can protect volunteers who translate emergency help messages, map them, and distribute that map to response teams in the field.”

Whether a Good Samaritan law could ever realistically be universally adopted remains to be seen, but the point is that all of the official humanitarian data protection standards that I’ve reviewed thus far simply don’t take into account the rise of new digitally-empowered global volunteer networks (let alone the existence of social media). The good news is that some colleagues and I are working with the International Committee of the Red Cross (ICRC) and a consortium of major humanitarian organizations to update existing data protection protocols to take some of these new factors into account. This new document will hopefully be made publicly available in October 2012.

“Mobile devices such as tablets and mobile phones are now the primary mode for both collecting and sharing information in a response effort. A January 2011 report published by the Mobile Computing Promotion Consortium of Japan surveyed users of smart phones. Of those who had smart phones, 55 percent used a map application, the third most common application after Web browsing and email.”

I find this absolutely fascinating and thus read the January 2011 report.

“The rapid deployment of Cellular on Wheels [COW] is dramatically improving. The Alcatel-Lucent Light Radio is 300 grams (about 10 ounces) and stackable. It also consumes very little power, eliminating large generation and storage requirements. It is capable of operating by solar, wind and/or battery power. Each cube fits into the size of a human hand and is fully integrated with radio processing, antenna, transmission, and software management of frequency. The device can operate on multiple frequencies simultaneously and work with existing infrastructure.”

“In Haiti, USSOUTHCOM found imagery, digital open source maps, and websites that hosted them (such as Ushahidi and OpenStreetMap) to occasionally be of greater value than their own assets.”

“It is recommended that clearly defined and restricted use of specialized #hashtags be implemented using a common crisis taxonomy. For example:

#country + location + emergency code + supplemental data

The above example, if located in Washington, DC, U.S.A., would be published as:

#USAWashingtonDC911Trapped

The specialized use of #hashtags could be implemented in the same cultural manner as 911, 999, and other emergency phone number systems. Metadata using these tags would also be given priority when sent over the Internet through communication networks (landline, broadband Internet, or mobile text or data). Abuse of ratified emergency #hashtag’s would be a prosecutable offense. Implementing such as system could reduce the amount of data that crisis mappers and other response organizations need to monitor and improve the quality of data to be filtered. Other forms of #Hashtags syllabus can also be implemented such as:

#country + location + information code (411) + supplemental data
#country + location + water (H20) + supplemental data
#country + location + Fire (FD) + supplemental data”

I found this very interesting and relevant to this earlier blog post: “Calling 911: What Humanitarians Can Learn from 50 Years of Crowdsourcing.” Perhaps a reference to Tweak the Tweet would have been worthwhile.
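As an aside, the proposed convention is only machine-parseable if the pieces can be told apart. Here is a sketch that assumes fixed three-letter country codes and a small set of known emergency codes; both are my assumptions, since the report specifies no delimiters.

```python
import re

# Emergency codes taken from the report's examples (911, 411, H20, FD).
CODES = "911|411|H20|FD"
PATTERN = re.compile(rf"#(?P<country>[A-Z]{{3}})(?P<location>.+?)"
                     rf"(?P<code>{CODES})(?P<data>\w*)")

m = PATTERN.match("#USAWashingtonDC911Trapped")
if m:
    print(m.groupdict())
# {'country': 'USA', 'location': 'WashingtonDC', 'code': '911', 'data': 'Trapped'}
```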

I also had not come across some of the platforms used in response to the 2011 earthquake in New Zealand. But the report did an excellent job sharing these.

[Image: EQviewer.co.nz]

Some errors that need correcting:

“Open source mapping tools such as Google Earth use imagery as a foundation for layering field data.”

Google Earth is not an open source tool.

“CrisisMappers.net, mentioned earlier, is a group of more than 1,600 volunteers that have been brought together by Patrick Meier and Jen Ziemke. It is the core of collaboration efforts that can be deployed anywhere in the world. CrisisMappers has established workshops and steering committees to set guidelines and standardize functions and capabilities for sites that deliver imagery and layered datasets. This group, which today consists of diverse and talented volunteers from all walks of life, might soon evolve into a professional volunteer organization of trusted capabilities and skill sets and they are worth watching.”

CrisisMappers is not a volunteer network or an organization that deploys in any formal sense of the word. The CrisisMappers website explains what the mission and purpose of this informal network is. The initiative has some 3,500 members.

“Figure 16. How Ushahidi’s Volunteer Standby Task Force was Structured for Libya. Ushahidi’s platform success stems from its use by organized volunteers, each with skill sets that extract data from multiple sources for publication.”

The Standby Volunteer Task Force (SBTF) does not belong to Ushahidi, nor is the SBTF an Ushahidi project. A link to the SBTF website would have been appropriate. Also, the majority of applications of the Ushahidi platform have nothing to do with crises, the SBTF, or any other large volunteer networks. The SBTF’s original success stems from organized volunteers who were well versed in the Ushahidi platform.

“Ushahidi accepts KML and KMZ if there is an agreement and technical assistance resources are available. An end user cannot on their own manipulate a Ushahidi portal as an individual, nor can external third party groups unless that group has an arrangement with the principal operators of the site. This offers new collaboration going forward. The majority of Ushahidi disaster portals are operated by volunteer organizations and not government agencies.”

The first sentence is unclear. If someone sets up an Ushahidi platform and has KML/KMZ files they want to upload, they can go ahead and do so. An end user can do some manipulation of an Ushahidi portal and can also pull the Ushahidi data into their own platform (via the GeoRSS feed, for example). Thanks to the ESRI-Ushahidi plugin, they can then perform a range of more advanced GIS analysis. In terms of volunteers versus government agencies, indeed, it appears the former are leading the way on innovation.
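For readers who want to try pulling Ushahidi data into their own tools, here is a hedged sketch against the classic (2.x-era) deployment API as I understand it; the endpoint, parameters, and response fields below should be checked against your deployment’s documentation, and the deployment URL is hypothetical.

```python
import requests

DEPLOYMENT = "https://example-deployment.ushahidi.com"   # hypothetical URL

# Classic Ushahidi deployments expose reports via /api?task=incidents.
resp = requests.get(f"{DEPLOYMENT}/api",
                    params={"task": "incidents", "by": "all"}, timeout=30)
payload = resp.json()

# Each record carries a title plus latitude/longitude, ready for re-mapping.
for item in payload.get("payload", {}).get("incidents", []):
    inc = item.get("incident", {})
    print(inc.get("incidenttitle"),
          inc.get("locationlatitude"), inc.get("locationlongitude"))
```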

Finally, below are some omissions and areas that I would have been very interested to learn more about. For some reason, the section on the Ushahidi deployment in New Zealand makes no reference to Ushahidi.

Staying on the topic of the earthquake in Christchurch, I was surprised to see no reference to the Tomnod deployment.

I had also hoped to read more about the use of drones (UAVs) in disaster response since these were used both in Haiti and Japan. What about the rise of DIY drones and balloon mapping? Finally, the report’s reference to Broadband Global Area Network (BGAN) doesn’t provide information on the range of costs associated with using BGANs in disasters.

In conclusion, the report is definitely an important contribution to the field of crisis mapping and should be required reading.