Tag Archives: Disaster

Using Computer Vision to Analyze Aerial Big Data from UAVs During Disasters

Recent scientific research has shown that aerial imagery captured during a single 20-minute UAV flight can take more than half-a-day to analyze. We flew several dozen flights during the World Bank's humanitarian UAV mission in response to Cyclone Pam earlier this year. The imagery we captured would've taken a single expert analyst a minimum of 20 full-time workdays to make sense of. In other words, aerial imagery is already a Big Data problem. So my team and I are using human computing (crowdsourcing), machine computing (artificial intelligence) and computer vision to make sense of this new Big Data source.

For example, we recently teamed up with the University of Southampton and EPFL to analyze aerial imagery of the devastation caused by Cyclone Pam in Vanuatu. The purpose of this research is to generate timely answers. Aid groups want more than high-resolution aerial images of disaster-affected areas; they want answers: the number and location of damaged buildings, the number and location of displaced people, and which roads are still usable for the delivery of aid, for example. Simply handing over the imagery is not good enough. As demonstrated in my new book, Digital Humanitarians, both aid and development organizations are already overwhelmed by the vast volume and velocity of Big Data generated during and after disasters. Adding yet another source, Big Aerial Data, may be pointless since these organizations may simply not have the time or capacity to make sense of this new data, let alone integrate the results with their other datasets.

We therefore analyzed the crowdsourced results from the deployment of our MicroMappers platform following Cyclone Pam to determine whether those results could be used to train algorithms to automatically detect disaster damage in future disasters in Vanuatu. During this MicroMappers deployment, digital volunteers analyzed over 3,000 high-resolution oblique aerial images, tracing houses that were fully destroyed, partially damaged and largely intact. My colleague Ferda Ofli and I teamed up with Nicolas Rey (a graduate student from EPFL who interned with us over the summer) to explore whether these traces could be used to train our algorithms. The results below were written up with Ferda and Nicolas. Our research is not just an academic exercise. Vanuatu is the most disaster-prone country in the world. What's more, this year's El Niño is expected to be one of the strongest in half a century.

[Figure: histogram of the number of buildings per image]

According to the crowdsourced results, 1,145 of the high-resolution images did not contain any buildings. Above is a simple histogram depicting the number of buildings per image. The aerial images of Vanuatu are very heterogeneous, varying not only in the diversity of features they exhibit but also in the angle of view and the altitude at which the pictures were taken. While the vast majority of the images are oblique, some are almost nadir images, and some were taken very close to the ground or even before take-off.
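As a simple illustration of how such a count can be produced, here is a minimal sketch that tallies traced buildings per image from a hypothetical CSV export of the crowdsourced traces; the file name and column names are assumptions rather than the actual MicroMappers format.

```python
# Minimal sketch (hypothetical data format): count traced buildings per image and
# plot a histogram like the one described above. Assumes "traces.csv" contains one
# row per traced building with an "image_id" column; this is NOT the MicroMappers export.
from collections import Counter
import csv

import matplotlib.pyplot as plt

counts = Counter()
with open("traces.csv", newline="") as f:
    for row in csv.DictReader(f):
        counts[row["image_id"]] += 1

# Note: images with no traces at all (the 1,145 "no building" images) would need to
# be added separately as zero counts to reproduce the full histogram.
buildings_per_image = list(counts.values())
plt.hist(buildings_per_image, bins=range(0, max(buildings_per_image) + 2))
plt.xlabel("Buildings per image")
plt.ylabel("Number of images")
plt.title("Crowdsourced building counts per aerial image")
plt.show()
```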


The heterogeneity of our dataset of images makes the automated analysis of this imagery a lot more difficult. Furthermore, buildings that are under construction, of which there are many in our dataset, represent a major difficulty because they look very similar to damaged buildings. Our first task thus focused on training our algorithms to determine whether or not any given aerial image shows some kind of building. This is an important task given that more than 30% of the images in our dataset do not contain buildings. As such, if we can develop an accurate algorithm to automatically filter out these irrelevant images (like the "noise" below), this will allow us to focus the crowdsourced analysis on relevant images only.


While our results are purely preliminary, we are still pleased with our findings thus far. We've been able to train our algorithms to determine whether or not an aerial image includes a building with just over 90% accuracy at the tile level. More specifically, our algorithms were able to recognize and filter out 60% of the images that do not contain any buildings (recall rate), and only 10% of the images that contain buildings were mistakenly discarded (precision rate of 90%). The images below provide an example. There are still quite a number of major challenges, however, so we want to be sure not to over-promise anything at this stage. In terms of next steps, we would like to explore whether our computer vision algorithms can distinguish between destroyed and intact buildings.
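To make the tile-level filtering step more concrete, here is a minimal sketch of a "building vs. no building" classifier trained on a hypothetical folder of labeled tiles derived from crowdsourced traces. It uses off-the-shelf HOG features and a random forest purely for illustration; it is not the pipeline developed for the research described above.

```python
# Minimal sketch of a tile-level "building / no building" filter. Assumes a hypothetical
# directory layout (tiles/building/*.jpg and tiles/no_building/*.jpg) labeled from the
# crowdsourced traces; NOT the actual pipeline used in the research described above.
import glob

import numpy as np
from skimage.io import imread
from skimage.transform import resize
from skimage.feature import hog
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

def tile_features(path, size=(128, 128)):
    """Grayscale HOG features for one tile (a simple, fast baseline descriptor)."""
    img = resize(imread(path, as_gray=True), size, anti_aliasing=True)
    return hog(img, pixels_per_cell=(16, 16), cells_per_block=(2, 2))

paths = glob.glob("tiles/building/*.jpg") + glob.glob("tiles/no_building/*.jpg")
X = np.array([tile_features(p) for p in paths])
y = np.array([1 if "/building/" in p else 0 for p in paths])  # 1 = tile contains a building

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
clf = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)
pred = clf.predict(X_test)
print("precision:", precision_score(y_test, pred), "recall:", recall_score(y_test, pred))
```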

[Figure: example images from the building-detection filter]

The UAVs we were flying in Vanuatu required that we land them in order to access the collected imagery. Increasingly, newer UAVs offer the option of broadcasting the aerial images and videos back to base in real time. DJI's new Phantom 3 UAV (pictured below), for example, allows you to broadcast your live aerial video feed directly to YouTube (assuming you have connectivity). There's absolutely no doubt that this is where the UAV industry is headed: towards real-time data collection and analysis. For humanitarian applications such as search and rescue, having the data analysis carried out in real time is preferable.


This explains why my team and I recently teamed up with Elliot Salisbury & Sarvapali Ramchurn from the University of Southampton to crowdsource the analysis of live aerial video footage of disaster zones and to combine this crowdsourcing with (hopefully) near real-time machine learning and automated feature detection. In other words, as digital volunteers are busy tagging disaster damage in video footage, we want our algorithms to learn from these volunteers in real-time. That is, we’d like the algorithms to learn what disaster damage looks like so they can automatically identify any remaining disaster damage in a given aerial video.

So we recently carried out a MicroMappers test deployment using aerial videos from the humanitarian UAV mission to Vanuatu. Close to 100 digital volunteers participated in this deployment. Their task? To click on any parts of the videos that show disaster damage. And whenever 80% or more of these volunteers clicked on the same areas, we would automatically highlight these areas to provide near real-time feedback to the UAV pilot and humanitarian teams.
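Purely for illustration, here is a minimal sketch of that consensus rule, assuming clicks are grouped into a coarse grid over each frame and evaluated per time window; the grid size, data format and exact thresholding are simplifying assumptions, not the actual MicroMappers implementation.

```python
# Minimal sketch of the consensus rule described above: highlight a region of a video
# frame only when at least 80% of the currently active volunteers clicked on it.
# Grid size, data structures and thresholds are illustrative assumptions.
from collections import defaultdict

GRID = 10           # divide each frame into a 10 x 10 grid of cells
THRESHOLD = 0.8     # fraction of active volunteers required for consensus

def to_cell(x, y, width, height):
    """Map a pixel click to a (row, col) grid cell."""
    return (int(y / height * GRID), int(x / width * GRID))

def consensus_cells(clicks, width, height):
    """clicks: list of (volunteer_id, x, y) for one time window of the video."""
    volunteers = {v for v, _, _ in clicks}
    votes = defaultdict(set)                      # cell -> volunteers who clicked it
    for v, x, y in clicks:
        votes[to_cell(x, y, width, height)].add(v)
    return [cell for cell, voters in votes.items()
            if len(voters) / len(volunteers) >= THRESHOLD]

# Example: 5 volunteers, 4 of them clicking near the same damaged rooftop.
sample = [(1, 102, 311), (2, 98, 305), (3, 110, 320), (4, 95, 300), (5, 640, 80)]
print(consensus_cells(sample, width=1280, height=720))
```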

At one point during the simulations, we had some 30 digital volunteers clicking on aerial videos at the same time, resulting in an average of 12 clicks per second for more than 5 minutes. In fact, we collectively clicked on the videos a total of 49,706 times! This provided more than enough real-time data for MicroMappers to act as a human-intelligence sensor for disaster damage assessments. The collective clicks were about 87% accurate. Here's what the simulations looked like to the UAV pilots as we were all clicking away:

Thanks to all this clicking, we can export only the most important and relevant parts of the video footage while the UAV is still flying. These snippets, such as this one and this one, can then be pushed to MicroMappers for additional verification. These animations are small and quick, and reduce a long aerial video down to just the most important footage. We’re now analyzing the areas that were tagged in order to determine whether we can use this data to train our algorithms accordingly. Again, this is far more than just an academic curiosity. If we can develop robust algorithms during the next few months, we’ll be ready to use them effectively during the next Typhoon season in the Pacific.
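One straightforward way to select such snippets automatically is to look for time windows with a high density of volunteer clicks and cut them out of the source video. The sketch below illustrates this idea; the window length, click threshold and file names are invented for illustration, and this is not the actual MicroMappers export code.

```python
# Minimal sketch (assumed click-log format): find time windows with many volunteer
# clicks and print ffmpeg commands to cut those snippets out of the source video.
from collections import Counter

WINDOW = 5          # seconds per window
MIN_CLICKS = 30     # clicks required for a window to count as "important"

def important_windows(click_times):
    """click_times: timestamps (seconds into the video) of every volunteer click."""
    counts = Counter(int(t // WINDOW) for t in click_times)
    return sorted(w * WINDOW for w, n in counts.items() if n >= MIN_CLICKS)

# Example log: a burst of clicks around the 60-65 second mark, plus scattered clicks.
click_times = [61.2 + 0.1 * i for i in range(40)] + [12.0, 145.3, 300.9]

for start in important_windows(click_times):
    print(f"ffmpeg -ss {start} -t {WINDOW} -i aerial_flight.mp4 snippet_{start}s.mp4")
```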

In closing, big thanks to my team at QCRI for translating my vision of MicroMappers into reality and for trusting me well over a year ago when I said we needed to extend our work to aerial imagery. All of the above research would simply not have been possible without MicroMappers. Big thanks as well to our excellent partners at EPFL and Southampton for sharing our vision and for their hard work on our joint projects. Last but certainly not least, sincerest thanks to digital volunteers from SBTF and beyond for participating in these digital humanitarian deployments.

A Force for Good: How Digital Jedis are Responding to the Nepal Earthquake (Updated)

Digital Humanitarians are responding in full force to the devastating earthquake that struck Nepal. Information sharing and coordination is taking place online via CrisisMappers and on multiple dedicated Skype chats. The Standby Task Force (SBTF), the Humanitarian OpenStreetMap Team (HOT) and others from the Digital Humanitarian Network (DHN) have also deployed in response to the tragedy. This blog post provides a quick summary of some of these digital humanitarian efforts along with what's coming in terms of new deployments.

Update: A list of Crisis Maps for Nepal is available below.

Credit: http://www.thestar.com/content/dam/thestar/uploads/2015/4/26/nepal2.jpg

At the request of the UN Office for the Coordination of Humanitarian Affairs (OCHA), the SBTF is using QCRI's MicroMappers platform to crowdsource the analysis of tweets and mainstream media (the latter via GDELT) to rapidly 1) assess disaster damage & needs; and 2) identify where humanitarian groups are deploying (3W's). The MicroMappers CrisisMaps are already live and publicly available below (simply click on the maps to open the live versions). Both Crisis Maps are being updated hourly (at times every 15 minutes). Note that MicroMappers also uses both crowdsourcing and Artificial Intelligence (AIDR).

Update: More than 1,200 Digital Jedis have used MicroMappers to sift through a staggering 35,000 images and 7,000 tweets! This has so far resulted in 300+ relevant pictures of disaster damage displayed on the Image Crisis Map and over 100 relevant disaster tweets on the Tweet Crisis Map.

Live CrisisMap of pictures from both Twitter and Mainstream Media showing disaster damage:

MM Nepal Earthquake ImageMap

Live CrisisMap of Urgent Needs, Damage and Response Efforts posted on Twitter:

MM Nepal Earthquake TweetMap

Note: the outstanding Kathmandu Living Labs (KLL) team has also launched an Ushahidi Crisis Map in collaboration with the Nepal Red Cross. We've already invited KLL to take all of the MicroMappers data and add it to their crisis map. Supporting local efforts is absolutely key.


The Humanitarian UAV Network (UAViators) has also been activated to identify, mobilize and coordinate UAV assets & teams. Several professional UAV teams are already on their way to Kathmandu. The UAV pilots will be producing high resolution nadir imagery, oblique imagery and 3D point clouds. UAViators will be pushing this imagery to both HOT and MicroMappers for rapid crowdsourced analysis (just as was done with the aerial imagery from Vanuatu after Cyclone Pam, more on that here). A leading UAV manufacturer is also donating several UAVs to UAViators for use in Nepal. These UAVs will be sent to KLL to support their efforts. In the meantime, DigitalGlobe, Planet Labs and SkyBox are each sharing their satellite imagery with CrisisMappers, HOT and others in the Digital Humanitarian Network.

There are several other efforts going on, so the above is certainly not a complete list but simply reflects those digital humanitarian efforts that I am involved in or most familiar with. If you know of other major efforts, then please feel free to post them in the comments section. Thank you. More on the state of the art in digital humanitarian action in my new book, Digital Humanitarians.

List of Nepal Crisis Maps

Please add to the list below by posting new links in this Google Spreadsheet. Also, someone should really create one map that pulls from each of the listed maps.

Code for Nepal Casualty Crisis Map:

DigitalGlobe Crowdsourced Damage Assessment Map:

Disaster OpenRouteService Map for Nepal:

ESRI Damage Assessment Map:

Harvard WorldMap Tweets of Nepal:

Humanitarian OpenStreetMap Nepal:

Kathmandu Living Labs Crowdsourced Crisis Map: http://www.kathmandulivinglabs.org/earthquake

MicroMappers Disaster Image Map of Damage:

MicroMappers Disaster Damage Tweet Map of Needs:

NepalQuake Status Map:

UAViators Crisis Map of Damage from Aerial Pics/Vids:
http://uaviators.org/map (takes a while to load)

Visions SDSU Tweet Crisis Map of Nepal:

Low-Cost UAV Applications for Post-Disaster Assessments: A Streamlined Workflow

Colleagues Matthew Cua, Charles Devaney and others recently co-authored this excellent study on their latest use of low-cost UAVs/drones for post-disaster assessments, environmental development and infrastructure development. They describe the “streamlined workflow—flight planning and data acquisition, post-processing, data delivery and collaborative sharing,” that they created “to deliver acquired images and orthorectified maps to various stakeholders within [their] consortium” of partners in the Philippines. They conclude from direct hands-on experience that “the combination of aerial surveys, ground observations and collaborative sharing with domain experts results in richer information content and a more effective decision support system.”

[Figure: the team's aerial imaging workflow]

UAVs have become "an effective tool for targeted remote sensing operations in areas that are inaccessible to conventional manned aerial platforms due to logistic and human constraints." Indeed, "The rapid development of unmanned aerial vehicle (UAV) technology has enabled greater use of UAVs as remote sensing platforms to complement satellite and manned aerial remote sensing systems." The figure above (click to enlarge) depicts the aerial imaging workflow developed by the co-authors to generate and disseminate post-processed images. This workflow, the main components of which are "Flight Planning & Data Acquisition," "Data Post-Processing" and "Data Delivery," will "continuously be updated, with the goal of automating more activities in order to increase processing speed, reduce cost and minimize human error."

[Figure: UAV flight plan over the coastal section of Tacloban city, generated using APM Mission Planner]

Flight Planning simply means developing a flight plan based on clearly defined data needs. The screenshot above (click to enlarge) is a "UAV flight plan of the coastal section of Tacloban city, Leyte generated using APM Mission Planner. The [flight] plan involved flying a small UAV 200 meters above ground level. The raster scan pattern indicated by the yellow line was designed to take images with 80% overlap & 75% side overlap. The waypoints indicating a change in direction of the UAV are shown as green markers." The purpose of the overlap is to enable the images to be stitched and accurately geo-referenced during post-processing. A video on how to program UAV flights is available here. This video specifically focuses on post-disaster assessments in the Philippines.
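To give a feel for the geometry behind such a plan, here is a minimal sketch that converts a flying height and the desired 80% front / 75% side overlap into the spacing between photos and between flight lines. The camera field-of-view values are assumptions for illustration, not the specifications of the camera actually flown.

```python
# Minimal sketch: flight-plan geometry from flying height and overlap requirements.
# The field-of-view values below are assumed for illustration only.
import math

ALTITUDE = 200.0          # metres above ground level, as in the Tacloban plan
FRONT_OVERLAP = 0.80
SIDE_OVERLAP = 0.75
HFOV_DEG = 60.0           # assumed horizontal field of view of the camera
VFOV_DEG = 45.0           # assumed vertical field of view of the camera

footprint_across = 2 * ALTITUDE * math.tan(math.radians(HFOV_DEG) / 2)  # metres on the ground
footprint_along = 2 * ALTITUDE * math.tan(math.radians(VFOV_DEG) / 2)

photo_spacing = footprint_along * (1 - FRONT_OVERLAP)   # distance between shutter triggers
line_spacing = footprint_across * (1 - SIDE_OVERLAP)    # distance between raster-scan lines

print(f"Ground footprint: {footprint_across:.0f} m x {footprint_along:.0f} m")
print(f"Trigger every {photo_spacing:.0f} m along track, lines {line_spacing:.0f} m apart")
```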

“Once in the field, the team verifies the flight plans before the UAV is flown by performing a pre-flight survey [which] may be done through ground observations of the area, use of local knowledge or short range aerial observations with a rotary UAV to identify launch/recovery sites and terrain characteristics. This may lead to adjustment in the flight plans. After the flight plans have been verified, the UAV is deployed for data acquisition.”

[Figure: components of the team's custom-built UAV]

Matthew, Charles and team initially used a Micropilot MP-Vision UAV for data acquisition. "However, due to increased cost of maintenance and significant skill requirements of setting up the MP-Vision," they developed their own custom UAV instead, which "uses semi-professional and hobby-grade components combined with open-source software" as depicted in the above figure (click to enlarge). "The UAV's airframe is the Super SkySurfer fixed-wing EPO foam frame." The team used the "ArduPilot Mega (APM) autopilot system consisting of an Arduino-based microprocessor board, airspeed sensor, pressure and temperature sensor, GPS module, triple-axis gyro and other sensors. The firmware for navigation and control is open-source."

The custom UAV, which costs approximately $2,000, has "an endurance of about 30-50 minutes, depending on payload weight and wind conditions, and is able to survey an area of up to 4 square kilometers." The custom platform was "easier to assemble, repair, maintain, modify & use. This allowed faster deployability of the UAV. In addition, since the autopilot firmware is open-source, with a large community of developers supporting it, it became easier to identify and address issues and obtain software updates." That said, the custom UAV was "more prone to hardware and software errors, either due to assembly of parts, wiring of electronics or bugs in the software code." Despite these drawbacks, "use of the custom UAV turned out to be more feasible and cost effective than use of a commercial-grade UAV."

In terms of payloads (cameras), three different kinds were used: Panasonic Lumix LX3, Canon S100, and GoPro Hero 3. These cameras come with both advantages and disadvantages for aerial mapping. The LX3 has better image quality but the servo triggering the shutter would often fail. The S100 is GPS-enabled and does not require mechanical triggering. The Hero-3 was used for video reconnaissance specifically.

[Figure: sample orthomosaic stitched from 785 images taken during two UAV flights]

“The workflow at [the Data-Processing] stage focuses on the creation of an orthomosaic—an orthorectified, georeferenced and stitched map derived from aerial images and GPS and IMU (inertial measurement unit values, particularly yaw, pitch and roll) information.” In other words, “orthorectification is the process of stretching the image to match the spatial accuracy of a map by considering location, elevation, and sensor information.”

Transforming aerial images into orthomosaics involves: (1) manually removing take-off/landing, blurry & oblique images; (2) applying contrast enhancement to images that are either over- or under-exposed using commercial image-editing software; (3) geo-referencing the resulting images; (4) creating an orthomosaic from the geo-tagged images. The geo-referencing step is not needed if the images are already geo-referenced (i.e., have GPS coordinates), like those taken with the Canon S100. "For non-georeferenced images, georeferencing is done by a custom Python script that generates a CSV file containing the mapping between images and GPS/IMU information. In this case, the images are not embedded with GPS coordinates." The sample orthomosaic above uses 785 images taken during two UAV flights (click to enlarge).
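In the spirit of the custom script described above, here is a minimal sketch that pairs each image with the closest GPS/IMU log entry by timestamp and writes the mapping out as a CSV. The log format, column names and the use of file modification times in place of EXIF timestamps are illustrative assumptions, not the authors' actual code.

```python
# Minimal sketch: match each aerial image to the nearest GPS/IMU log entry by timestamp
# and write a CSV that a photogrammetry tool can ingest. Log format and column names
# are assumptions; a real script would read camera EXIF timestamps, not file mtimes.
import csv
import os

def load_log(path):
    """Flight log rows: timestamp (s), lat, lon, alt, yaw, pitch, roll."""
    with open(path, newline="") as f:
        return [(float(r["timestamp"]), r) for r in csv.DictReader(f)]

def closest(log, t):
    return min(log, key=lambda entry: abs(entry[0] - t))[1]

def image_timestamp(path):
    return os.path.getmtime(path)   # stand-in for the image's real capture time

log = load_log("flight_log.csv")
with open("geotags.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["image", "lat", "lon", "alt", "yaw", "pitch", "roll"])
    for name in sorted(os.listdir("images")):
        row = closest(log, image_timestamp(os.path.join("images", name)))
        writer.writerow([name, row["lat"], row["lon"], row["alt"],
                         row["yaw"], row["pitch"], row["roll"]])
```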

Matthew, Charles and team used the Pix4Dmapper photomapping software developed by Pix4D to render their orthomosaics. "The program can use either geotagged or non-geotagged images. For non-geotagged images, the software accepts other inputs such as the CSV file generated by the custom Python script to georeference each image and generate the photomosaic. Pix4D also outputs a report containing information about the output, such as total area covered and ground resolution. Quantum GIS, an open-source GIS software, was used for annotating and viewing the photomosaics, which can sometimes be too large to be viewed using common photo viewing software."

[Figure: a rendered orthomosaic displayed in the VEDA platform]

Data Delivery involves uploading the orthomosaics to a common, web-based platform that stakeholders can access. Orthomosaics "generally have large file sizes (e.g. around 300 MB for a 2 sq. km render)," so the team created a web-based geographic information system (GIS) to facilitate sharing of aerial maps. "The platform, named VEDA, allows viewing of rendered maps and adding metadata. The key advantage of using this platform is that the aerial imagery data is located in one place & can be accessed from any computer with a modern Internet browser. Before orthomosaics can be uploaded to the VEDA platform, they need to be converted into an appropriate format supported by the platform. The current format used is MBTiles developed by Mapbox. The MBTiles format specifies how to partition a map image into smaller image tiles for web access. Once uploaded, the orthomosaic map can then be annotated with additional information, such as markers for points of interest." The screenshot above (click to enlarge) shows the layout of a rendered orthomosaic in VEDA.
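For readers unfamiliar with MBTiles, it is essentially an SQLite database containing a metadata table and a tiles table keyed by zoom level, column and row. The minimal sketch below packages a hypothetical directory of pre-rendered z/x/y PNG tiles into an MBTiles file; real pipelines would typically use existing tools such as GDAL or Mapbox's mb-util rather than hand-rolled code like this.

```python
# Minimal sketch: package a hypothetical tiles/{z}/{x}/{y}.png directory into an
# MBTiles file (an SQLite database with "metadata" and "tiles" tables).
import glob
import os
import sqlite3

conn = sqlite3.connect("orthomosaic.mbtiles")
conn.executescript("""
    CREATE TABLE metadata (name TEXT, value TEXT);
    CREATE TABLE tiles (zoom_level INTEGER, tile_column INTEGER,
                        tile_row INTEGER, tile_data BLOB);
""")
conn.executemany("INSERT INTO metadata VALUES (?, ?)",
                 [("name", "Tacloban orthomosaic"), ("format", "png")])

for path in glob.glob("tiles/*/*/*.png"):             # tiles/{z}/{x}/{y}.png
    z, x, y = (int(p) for p in path[:-4].split(os.sep)[-3:])
    tms_y = (2 ** z - 1) - y                           # MBTiles uses TMS row numbering
    with open(path, "rb") as f:
        conn.execute("INSERT INTO tiles VALUES (?, ?, ?, ?)", (z, x, tms_y, f.read()))

conn.commit()
conn.close()
```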

Matthew, Charles and team have applied the above workflow in various mission-critical UAV projects in the Philippines including damage assessment work after Typhoon Haiyan in 2013. This also included assessing the impact of the Typhoon on agriculture, which was an ongoing concern for local government during the recovery efforts. “The coconut industry, in particular, which plays a vital role in the Philippine economy, was severely impacted due to millions of coconut trees being damaged or flattened after the storm hit. In order to get an accurate assessment of the damage wrought by the typhoon, and to make a decision on the scale of recovery assistance from national government, aerial imagery coupled with a ground survey is a potentially promising approach.”

So the team received permission from local government to fly "several missions over areas in Eastern Visayas that [were] devoted to coconut stands prior to Typhoon Haiyan." (As such, "The UAV field team operated mostly in rural areas and wilderness, which reduced the human risk factor in case of aircraft failure. Also, as a safety guideline, the UAV was not flown within 3 miles from an active airport"). The partners in the Philippines are developing image processing techniques to distinguish "coconut trees from wild forest and vegetation for land use assessment and carbon source and sink estimates. One technique involved use of superpixel classification, wherein the image pixels are divided into homogeneous regions (i.e. collection of similar pixels) called superpixels which serve as the basic unit for classification."
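To illustrate the superpixel idea, here is a minimal sketch that segments an aerial image with SLIC, computes a simple colour feature for each superpixel and classifies every region. The file name, placeholder labels and choice of classifier are assumptions for demonstration purposes, not the consortium's actual technique.

```python
# Minimal sketch of superpixel classification: SLIC segmentation, one simple colour
# feature per superpixel, and a per-region classifier. Illustrative assumptions only.
import numpy as np
from skimage.io import imread
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

def superpixel_features(image, segments):
    """Mean RGB per superpixel; a real system would use richer texture features."""
    return np.array([image[segments == s].mean(axis=0) for s in np.unique(segments)])

image = imread("aerial_scene.jpg")                       # hypothetical aerial image
segments = slic(image, n_segments=500, compactness=10)
features = superpixel_features(image, segments)

# In practice, labels would come from manually annotated training superpixels
# (e.g. 1 = coconut tree canopy, 0 = other vegetation / ground). Placeholder only:
labels = np.random.randint(0, 2, size=len(features))
clf = RandomForestClassifier(n_estimators=100).fit(features, labels)

predicted = clf.predict(features)
coconut_mask = np.isin(segments, np.unique(segments)[predicted == 1])
print("fraction of image classified as coconut canopy:", coconut_mask.mean())
```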

[Figure: aerial image of coconut stands used in the initial test run]

The image below shows the “results of the initial test run where areas containing coconut trees [above] have been segmented.”

[Figure: initial test run results, with areas containing coconut trees segmented]

"Similar techniques could also be used for crop damage assessment after a disaster such as Typhoon Haiyan, where for example standing coconut trees could be distinguished from fallen ones in order to determine capacity to produce coconut-based products." This is an area that my team and I at QCRI are exploring in partnership with Matthew, Charles and company. In particular, we're interested in assessing whether crowdsourcing can be used to facilitate the development of machine learning classifiers for image feature detection. More on this here and here, and on CNN here. In addition, since "aerial imagery augmented with ground observations would provide a richer source of information than either one could provide alone," we are also exploring the integration of social media data with aerial imagery (as described here).

In conclusion, Matthew, Charles and team are looking to further develop the above framework by automating more processes, “such as image filtering and image contrast enhancement. Autonomous take-off & landing will be configured for the custom UAV in order to reduce the need for a skilled pilot. A catapult system will be created for the UAV to launch in areas with a small clearing and a parachute system will be added in order to reduce the risk of damage due to belly landings.” I very much look forward to following the team’s progress and to collaborating with them on imagery analysis for disaster response.


See Also:

  • Official UN Policy Brief on Humanitarian UAVs [link]
  • Common Misconceptions About Humanitarian UAVs [link]
  • Humanitarians in the Sky: Using UAVs for Disaster Response [link]
  • Humanitarian UAVs Fly in China After Earthquake [link]
  • Humanitarian UAV Missions During Balkan Floods [link]
  • Humanitarian UAVs in the Solomon Islands [link]
  • UAVs, Community Mapping & Disaster Risk Reduction in Haiti [link]

Humanitarian UAVs in the Solomon Islands

The Solomon Islands experienced heavy rains and flash floods following Tropical Cyclone Ita earlier this year. Over 50,000 people were affected and dozens killed, according to ReliefWeb. Infrastructure damage was extensive; entire houses were washed away and thousands lost their food gardens.


Disaster responders used a rotary-wing UAV (an “Oktocopter”) to assist with the damage assessment efforts. More specifically, the UAV was used to assess the extent of the flood damage in the most affected area along Mataniko River.

Solomons UAV

The UAV was also used to map an area proposed for resettlement. In addition, the UAV was flown over a dam to assess potential damage. These flights were pre-programmed and thus autonomous. (Here’s a quick video demo on how to program UAV flights for disaster response). The UAV was flown at 110 meters altitude in order to capture very high-resolution imagery. “The altitude of 110m also allowed for an operation below the traditional air space and ensured a continuous visibility of the UAV from the starting / landing position.”


While responders faced several challenges with the UAV, they nevertheless stated that "The UAV was extremely useful for the required mapping" (PDF). Some of these challenges included the limited availability of batteries, which restricted the number of UAV flights possible. The wind also posed a challenge.

Solomons Analysis

Responders took more than 800 pictures (during one 17-minute flight) over the above area, which was proposed for resettlement. About 10% of these images were then stitched together to form the mosaic displayed above. The result below depicts flooded areas along the Mataniko River. According to responders, "This image data can be utilized to demonstrate the danger of destruction to people who start to resettle in the Mataniko River Valley. These very high resolution images (~ 3 to 5 cm) show details such as destroyed cars, parts of houses, etc. which demonstrate the force of the high water." In sum, "The maps together with the images of the river could be utilized to raise awareness not to settle again in these areas."
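For context on the "~3 to 5 cm" figure, here is a minimal sketch of the ground sampling distance (GSD) arithmetic at a 110-meter flying height. The camera parameters are assumptions typical of a small UAV camera, not the specifications of the Oktocopter's actual payload.

```python
# Minimal sketch of the ground sampling distance (GSD) arithmetic behind the
# "~3 to 5 cm" resolution quoted above. Camera parameters are assumed values.
ALTITUDE_M = 110          # flying height reported above
SENSOR_WIDTH_MM = 6.17    # assumed 1/2.3" sensor
FOCAL_LENGTH_MM = 4.5     # assumed lens focal length
IMAGE_WIDTH_PX = 4000     # assumed image width in pixels

gsd_m = (ALTITUDE_M * SENSOR_WIDTH_MM / 1000) / ((FOCAL_LENGTH_MM / 1000) * IMAGE_WIDTH_PX)
print(f"Ground sampling distance: {gsd_m * 100:.1f} cm per pixel")  # ~3.8 cm with these values
```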

[Figure: stitched mosaic showing flooded areas along the Mataniko River]

Images taken of the dam were used to create the Digital Terrain Model (DTM) below. This enables responders to determine areas where the dam is most likely to overflow due to damage or future floods.

[Figure: Digital Terrain Model (DTM) of the dam area]

The result of this DTM analysis enables responders to target the placement of rubber mats fixed with sand bags around the dam's most vulnerable points.
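As a rough illustration of how a DTM supports this kind of targeting, the sketch below compares crest elevations against a projected water level and reports the lowest sections. The file name, crest coordinates and flood level are invented for illustration; they are not values from the actual Solomon Islands analysis.

```python
# Minimal sketch: flag likely overflow points by comparing DTM crest elevations against
# a projected water level. All values below are hypothetical, for illustration only.
import numpy as np

dtm = np.load("dam_dtm.npy")                        # elevation grid in metres (hypothetical export)
crest_cells = [(120, c) for c in range(200, 260)]   # assumed row/col indices along the dam crest
flood_level = 23.5                                  # assumed projected water level in metres

low_points = [(r, c, dtm[r, c]) for r, c in crest_cells if dtm[r, c] < flood_level]
for r, c, z in sorted(low_points, key=lambda p: p[2])[:5]:
    print(f"cell ({r}, {c}) crest elevation {z:.2f} m -> priority for rubber mats / sand bags")
```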

Solomons Dam

In conclusion, disaster responders write that the use of "UAVs for data acquisition can be highly recommended. The flexibility of an UAV can be of high benefit for mapping purposes, especially in cases where fast data acquisition is desired, e.g. natural hazards. An important advantage of a UAV as platform is that image data recording is performed at low height and not disturbed by cloud cover. In theory a fixed-wing UAV might be more efficient for rapid mapping. However, the DTM applications would not be possible in this resolution with a fixed wing UAV. Notably due to the flexibility for potential starting and landing areas and the handling of the topography characterized by steep valleys and obstacles such as power lines between mountain tops within the study area. Especially within the flooded areas a spatially sufficient start and land area for fixed wing UAVs would have been hard to identify."


See Also:

  • Official UN Policy Brief on Humanitarian UAVs [link]
  • Common Misconceptions About Humanitarian UAVs [link]
  • Humanitarians in the Sky: Using UAVs for Disaster Response [link]
  • Crisis Map of UAV Videos for Disaster Response [link]
  • Humanitarian UAV Missions During Balkan Floods [link]
  • UAVs, Community Mapping & Disaster Risk Reduction in Haiti [link]

Humanitarian UAVs Fly in China After Earthquake (updated)

A 6.1 magnitude earthquake struck Ludian County in Yunnan, China earlier this month. Some 600 people lost their lives; over 2,400 were injured and another 200,000 were forced to relocate. In terms of infrastructure damage, about 30,000 buildings were damaged and more than 12,000 homes collapsed. To rapidly search for survivors and assess this damage, responders in China turned to DJI's office in Hong Kong. DJI is one of the leading manufacturers of commercial UAVs in the world.

Rescuers search for survivors as they walk among debris of collapsed buildings after an earthquake hit Longtoushan township of Ludian county

DJI’s team of pilots worked directly with the China Association for Disaster and Emergency Response Medicine (CADERM). According to DJI, “This was the first time [the country] used [UAVs] in its relief efforts and as a result many of the cooperating agencies and bodies working on site have approached us for training / using UAS technology in the future […].” DJI flew two types of quadcopters, the DJI S900 and DJI Phantom 2 Vision+ pictured below (respectively):

DJI S900

Phantom 2

As mentioned here, the DJI Phantom 2 is the same one that the UN Office for the Coordination of Humanitarian Affairs (OCHA) is experimenting with:


Given the dense rubble and vegetation in the disaster affected region of Ludian County in China, ground surveys were particularly challenging to carry out. So UAVs provided disaster responders with an unimpeded bird’s eye view of the damage, helping them prioritize their search and rescue efforts. DJI reports that the UAVs “were able to relay images back to rescue workers, who used them to determine which roads needed to be cleared first and which areas of the rubble to search for possible survivors. […].”

The video above shows some striking aerial footage of the disaster damage. This is not the first time that UAVs have been used for search and rescue or road clearance operations. Transporting urgent supplies to disaster areas requires that roads be cleared as quickly as possible, which is why UAVs were used for this and other purposes after Typhoon Haiyan in the Philippines. In Ludian, "Aerial images captured by the team were [also] used by workers in the epicenter area […] where most of the traditional buildings in the area collapsed."

DJI was not the only group to fly UAVs in response to the quake in Yunnan. The Chinese government itself deployed UAVs (days before DJI). As the Associated Press reported several weeks ago already, “A novel part of the Yunnan response was the use of drones to map and monitor a quake-formed lake that threatened to flood areas downstream. China has rapidly developed drone use in recent years, and they helped save time and money while providing highly reliable data, said Xu Xiaokun, an engineer with the army reserves.”

Working with UAV manufacturers directly may prove to be the preferred route for humanitarian organizations requiring access to aerial imagery following major disasters. At the same time, having the capacity and skills in-house to rapidly deploy these UAVs affords several advantages over the partnership model. So combining in-house capacity with a partnership model may ultimately be the way to go, but this will depend heavily on the individual mandates and needs of humanitarian organizations.


See Also:

  • Humanitarians in the Sky: Using UAVs for Disaster Response [link]
  • Live Crisis Map of UAV Videos for Disaster Response [link]
  • Humanitarian UAV Missions During Balkan Floods [link]
  • UAVs, Community Mapping & Disaster Risk Reduction in Haiti [link]
  • “TripAdvisor” for International UAV/Drone Travel [link]

Live: Crowdsourced Crisis Map of UAV/Aerial Photos & Videos for Disaster Response (Updated)

Update: Crisis Map now includes features to post photos in addition to videos!

The latest version of the Humanitarian UAV Network's Crisis Map of UAV/aerial photos & videos is now live on the Network's website. The crowdsourced map already features dozens of aerial videos of recent disasters. Now, users can also post aerial photographs of disaster-affected areas. Like the use of social media for emergency management, this new medium—user-generated (aerial) content—can be used by humanitarian organizations to complement their damage assessments and thus improve situational awareness.

UAViators Map

The purpose of this Humanitarian UAV Network (UAViators) map is not only to provide humanitarian organizations and disaster-affected communities with an online repository of aerial information on disaster damage to augment their situational awareness; this crisis map also serves to raise awareness on how to safely & responsibly use small UAVs for rapid damage assessments. This explains why users who upload new content to the map must confirm that they have read the UAViators Code of Conduct. They also have to confirm that the photos & videos conform to the Network's mission and that they do not violate privacy or copyrights. In sum, the map seeks to crowdsource both aerial footage and critical thinking for the responsible use of UAVs in humanitarian settings.

UAViators Map 4

As noted above, this is the first version of the map, which means several other features are currently in the works. These new features will be rolled out incrementally over the next weeks and months. In the meantime, feel free to suggest any features you’d like to see in the comments section below. Thank you.

See Also:

  • Humanitarian UAV Network: Strategy for 2014-2015 [link]
  • Humanitarians in the Sky: Using UAVs for Disaster Response [link]
  • Humanitarian UAV Missions During Balkan Floods [link]
  • Using UAVs for Disaster Risk Reduction in Haiti [link]
  • Using MicroMappers to Make Sense of UAV/Aerial Imagery During Disasters [link]

Humanitarians Using UAVs for Post Disaster Recovery

I recently connected with senseFly's Adam Klaptocz, who founded the non-profit group DroneAdventures to promote humanitarian uses of UAVs. I first came across Adam's efforts last year when reading about his good work in Haiti, which demonstrated the unique role that UAV technology & imagery can play in post-disaster contexts. DroneAdventures has also been active in Japan and Peru. In the coming months, the team will also be working on "aerial archeology" projects in Turkey and Egypt. When Adam emailed me last week, he and his team had just returned from yet another flying mission, this time in the Philippines. I'll be meeting up with Adam in a couple of weeks to learn more about their recent adventures. In the meantime, here's a quick recap of what they were up to in the Philippines this month.


Adam and team snapped hundreds of aerial images using their “eBee drones” to create a detailed set of 2D maps and 3D terrain models of the disaster-affected areas where partner Medair works. This is the first time that the Swiss humanitarian organization Medair is using UAVs to inform their recovery and rehabilitation programs. They plan to use the UAV maps & models of Tacloban and hard-hit areas in Leyte to assist in assessing “where the greatest need is” and what level of “assistance should be given to affected families as they continue to recover” (1). To this end, having accurate aerial images of these affected areas will allow the Swiss organization to “address the needs of individual households and advocate on their behalf when necessary” (2). 


An eBee Drone also flew over Dulag, north of Leyte, where more than 80% of the homes and croplands were destroyed following Typhoon Yolanda. Medair is providing both materials and expertise to build new shelters in Dulag. As one Medair representative noted during the UAV flights, “Recovery from a disaster of this magnitude can be complex. The maps produced from the images taken by the drones will give everyone, including community members themselves, an opportunity to better understand not only where the greatest needs are, but also their potential solutions” (3). The partners are also committed to Open Data: “The images will be made public for free online, enabling community leaders and humanitarian organizations to use the information to coordinate reconstruction efforts” (4). The pictures of the Philippines mission below were very kindly shared by Adam who asked that they be credited to DroneAdventures.

Credit: DroneAdventures

At the request of the local Mayor, DroneAdventures and Medair also took aerial images of a relatively undamaged area some 15 kilometers north of Tacloban, which is where the city government is looking to relocate families displaced by Typhoon Yolanda. During the deployment, Adam noted that "Lightweight drones such as the eBee are safe and easy to operate and can provide crucial imagery at a precision and speed unattainable by satellite imagery. Their relatively low cost of deployment make the technology attainable even by small communities throughout the developing world. Not only can drones be deployed immediately following a disaster in order to assess damage and provide detailed information to first-responders like Medair, but they can also assist community leaders in planning recovery efforts" (5). As the Medair rep added, "You can just push a button or launch them by hand to see them fly, and you don't need a remote anymore—they are guided by GPS and are inherently safe" (6).

Credit: DroneAdventures

I really look forward to meeting up with Adam and the DroneAdventures team at the senseFly office in Lausanne next month to learn more about their recent work and future plans. I will also be asking the team for their feedback and guidance on the Humanitarian UAV Network (UAViators) that I am launching. So stay tuned for updates!


See also:

  • Calling All UAV Pilots: Want to Support Humanitarian Efforts? [link]
  • How UAVs are Making a Difference in Disaster Response [link]
  • Grassroots UAVs for Disaster Response (in the Philippines) [link]