3D Digital Humanitarians: The Irony

In 2009 I wrote a blog post titled “The Biggest Problem with Crisis Maps.” The gist of the post: crises are dynamic over time and space, but our crisis maps are 2D and static. More than half a decade later, Digital Humanitarians have still not escaped from Plato’s Cave. Instead, they continue tracing the 2D shadows cast by crisis data projected onto their 2D crisis maps. Is there value in breaking free from our 2D data chains? Yes. And the time will soon come when Digital Humanitarians will have to make a 3D run for it.


Aerial imagery captured by UAVs (Unmanned Aerial Vehicles) can be used to create very high-resolution 3D point clouds like the one below. It only took a 4-minute UAV flight to capture the imagery for this point cloud. Of course, the processing time to convert the 2D imagery to 3D took longer. But solutions already exist to create 3D point clouds on the fly, and these solutions will only get more sophisticated over time.

Stitching 2D aerial imagery into larger “mosaics” is already standard practice in the UAV space. But that’s so 2014. What we need is the ability to stitch together 3D point clouds. In other words, I should be able to mesh my 3D point cloud of a given area with other point clouds that overlap spatially with mine. This would enable us to generate high-resolution 3D point clouds for much larger areas. Let’s call these accumulated point clouds Cumulus Clouds. We could then create baseline data in the form of Cumulus Clouds. And when a disaster happens, we could create updated Cumulus Clouds for the affected area and compare them with our baseline Cumulus Cloud to detect changes. In other words, instead of solely generating 2D mapping data for the Missing Maps Project, we could add Cumulus Clouds.
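To make this concrete, here is a minimal sketch of what stitching two overlapping point clouds could look like in practice. It uses the open-source Open3D library and ICP (Iterative Closest Point) registration; the file names and parameter values are illustrative assumptions, not a production pipeline, and ICP assumes the clouds are already roughly aligned (e.g., via the GPS geotags of the source imagery).

```python
# Minimal sketch: "stitching" two overlapping point clouds with ICP
# (Iterative Closest Point) registration via the open-source Open3D
# library. File names, voxel sizes and distance thresholds are
# illustrative assumptions.
import numpy as np
import open3d as o3d

reg = o3d.pipelines.registration

# Two overlapping clouds, e.g., PLY exports from Pix4D or PhotoScan
source = o3d.io.read_point_cloud("flight_a.ply")
target = o3d.io.read_point_cloud("flight_b.ply")

# Downsample to speed up registration
source_down = source.voxel_down_sample(voxel_size=0.05)
target_down = target.voxel_down_sample(voxel_size=0.05)

# Estimate surface normals, required for point-to-plane ICP
for cloud in (source_down, target_down):
    cloud.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

# Find the rigid transform that best aligns source onto target
result = reg.registration_icp(
    source_down, target_down,
    max_correspondence_distance=0.1,
    init=np.identity(4),
    estimation_method=reg.TransformationEstimationPointToPlane())

# Apply the transform at full resolution and merge the two clouds
source.transform(result.transformation)
cumulus = source + target  # the combined "Cumulus Cloud"
o3d.io.write_point_cloud("cumulus.ply", cumulus)
```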

Meanwhile, breakthroughs in Virtual Reality will enable Digital Humanitarians to swarm through these Cumulus Clouds. Innovations such as the Oculus Rift, one of the first consumer-targeted virtual reality headsets, may become the pièce de résistance of future Digital Humanitarians. This shift to 3D doesn’t mean that our methods for analyzing 2D crisis maps become obsolete when we leave Plato’s Cave. We simply need to extend our microtasking and crowdsourcing solutions to 3D space. As such, a 3D “tasking manager” would just assign specific areas of a Cumulus Cloud to individual Digital Jedis. This is no different from how field-based disaster assessment surveys get carried out in the “Solid World” (Real World). Our Oculus headsets would “simply” need to allow Digital Jedis to “annotate” or “trace” various sections of the Cumulus Clouds just like they already do with 2D maps; otherwise we’ll be nothing more than disaster tourists.


The shift to 3D is not without challenges. This shift necessarily increases visual complexity. Indeed, 2D images are a radical (and often welcome) simplification of the Solid World. This simplification comes with a number of advantages, like improving the signal-to-noise ratio. But 2D imagery, like satellite imagery, “hides” information, which is one reason why imagery interpretation and analysis is difficult, often requiring expert training. 3D, in contrast, is more intuitive; 3D is the world we live in. Interpreting signs of damage in 3D may thus be easier than doing so in 2D, where far less information is available. Of course, this also depends on the level of detail required for the 3D damage assessments. Regardless, appropriate tutorials will need to be developed to guide the analysis of 3D point clouds and Cumulus Clouds. Wait a minute—shouldn’t existing assessment methodologies used for field-based surveys in the Solid World do the trick? After all, the “Real World” is in 3D, last time I checked.

Ah, there’s the rub. Some of the existing methodologies developed by the UN and World Bank to assess disaster damage are largely dysfunctional. Take, for example, the formal definition of “partial damage” used by the Bank to carry out its post-disaster damage and needs assessments: “the classification used is to say that if a building is 40% damaged, it needs to be repaired. In my view this is too vague a description and not much help. When we say 40%, is it the volume of the building we are talking about or the structural components?” The question is posed by a World Bank colleague with 15+ years of experience. Since high-resolution 3D data enables more of us to more easily see more details, our assessment methodologies will necessarily need to become more detailed, both for manual and automated analysis solutions. This does add complexity, but such is the price if we actually want reliable damage assessments.

Isn’t it ironic that our shift to Virtual Reality may ultimately improve the methodologies (and thus data quality) of field-based surveys carried out in the Solid World? In any event, I can already “hear” the usual critics complaining; the usual theatrics of cave-bound humanitarians who eagerly dismiss any technology that appears after the radio (and maybe SMS). Such is life. Moving along. I’m exploring practical ways to annotate 3D point clouds here, but if anyone has additional ideas, do please get in touch. I’m also looking for any solutions out there (imperfect ones are fine too) that can help us build Cumulus Clouds—i.e., stitch together overlapping 3D point clouds. Lastly, I’d love to know what it would take to annotate Cumulus Clouds via Virtual Reality. Thanks!

Acknowledgements: Thanks to colleagues from OpenAerialMap, Cadasta and MapBox for helping me think through some of the ideas above.

Developing Guidelines for Humanitarian UAV Missions

New: The revised Code of Conduct and Guidelines are now publicly available as part of an open consultative process that will conclude on October 10th. We thus invite comments on the draft guidelines here (Google Doc). An MS Word Doc version is available for download here. Please note that only feedback provided via this Google Form will be reviewed. We’ll be running an open Webinar on September 16th to discuss the guidelines in more detail.

The Humanitarian UAV Network (UAViators) recently organized a 3-day Policy Forum on Humanitarian UAVs. The mission of UAViators is to promote the safe, coordinated and effective use of UAVs in a wide range of humanitarian settings. The Forum, the first of its kind, was generously sponsored and hosted by the Rockefeller Foundation at their conference center in Bellagio, Italy. The aerial panoramic photograph below was captured by UAV during the Forum.


UAViators brought together a cross-section of experts from the UN Office for the Coordination of Humanitarian Affairs (OCHA), UN Refugee Agency (UNHCR), UN Department for Peacekeeping Operations (DPKO), World Food Program (WFP), International Committee of the Red Cross (ICRC), American Red Cross, European Commission’s Humanitarian Aid Organization (ECHO), Medair, Humanitarian OpenStreetMap, ICT for Peace Foundation (ICT4Peace), DJI, BuildPeace, Peace Research Institute, Oslo (PRIO), Trilateral Research, Harvard University, Texas A&M, University of Central Lancashire, École Polytechnique Fédérale de Lausanne (EPFL), Pepperdine University School of Law and other independent experts. The purpose of the Forum, which I had the distinct pleasure of running: to draft guidelines for the safe, coordinated and effective use of UAVs in humanitarian settings.

Five key sets of guidelines were drafted, each focusing on priority areas where policy has been notably absent: 1) Code of Conduct; 2) Data Ethics; 3) Community Engagement; 4) Principled Partnerships; and 5) Conflict Sensitivity. These five policy areas were identified as priorities during the full-day Humanitarian UAV Experts Meeting co-organized at the UN Secretariat in New York by UAViators and OCHA (see summary here). After 3 very long days of deliberation in Bellagio, we converged towards an initial draft set of guidelines for each of the key areas. There was certainly no guarantee that this convergence would happen, so I’m particularly pleased and very grateful to all participants for their hard work. Indeed, I’m reminded of Alexander Aleinikoff (Deputy High Commissioner in the Office of UNHCR) who defines innovation as “dynamic problem solving among friends.” The camaraderie throughout the long hours had a lot to do with the positive outcome. Conferences typically take a group photo of participants; we chose to take an aerial video instead:

Of course, this doesn’t mean we’re done. The most immediate next step is to harmonize each of the guideline documents so that they “speak” to each other. We’ll then solicit internal institutional feedback from the organizations that were represented in Bellagio. Once this feedback has been considered and integrated where appropriate, we will organize a soft public launch of the guidelines in August 2015. The purpose of this soft launch is to actively solicit feedback from the broader humanitarian community. We plan to hold Webinars in August and September to invite this additional feedback. The draft guidelines will be further reviewed in October at the 2015 Humanitarian UAV Experts Meeting, which is being hosted at MIT and co-organized by UAViators, OCHA and the World Humanitarian Summit (WHS).

We’ll then review all the feedback received since Bellagio to produce the “final” version of the guidelines, which will be presented to donors and humanitarian organizations for public endorsement. The guidelines will be officially launched at the World Humanitarian Summit in 2016. In the meantime, these documents will serve as best practices to inform both humanitarian UAV trainings and missions. In other words, they will already serve to guide the safe, coordinated and effective use of UAVs in humanitarian settings. We will also use these draft guidelines to hold ourselves accountable. To be sure, humanitarian innovation is not simply about the technology; humanitarian innovation is also about the processes that enable the innovative use of emerging technologies.

While the first text message (SMS) was sent in 1992, it took 20 years (!) until a set of guidelines was developed to inform the use of SMS in disaster response. I’m relieved that we won’t have to wait until 2035 to produce UAV guidelines. Yes, the evidence base for the added value of UAVs in humanitarian missions is still thin, which is why it is all the more remarkable that forward-thinking guidelines are already being drafted. As several participants noted during the Forum, “The humanitarian community completely missed the boat on the mobile phone revolution. It is vital that we not make this same mistake again with newer, emerging technologies.” As such, the question for everyone at the Forum was not whether UAVs will have a significant impact, but rather what guidelines are needed now to guide the impact that this new technology will inevitably have on future humanitarian efforts.

The evidence base is necessarily thin since UAVs are only now emerging as a potential humanitarian technology. There is still a lot of learning and documenting to be done. The Humanitarian UAV Network has already taken on this task and will continue to enable learning and catalyze information sharing by convening expert meetings and documenting lessons learned in collaboration with key partners. The Network will also seek to partner with select groups on strategic projects with the aim of expanding the evidence base. In sum, I think we’re on the right track, and staying on the right track will require a joint and sustained effort with a cross-section of partners and stakeholders. To be sure, UAViators cannot accomplish the above alone. It took 22 dedicated experts and 3 long days to produce the draft guidelines. So consider this post an open invitation to join these efforts as we press on to make the use of UAVs in humanitarian crises safer, more coordinated and more effective.

In the meantime, a big thanks once again to all the experts who joined us for the Forum, and equally big thanks to the team at the Rockefeller Foundation for graciously hosting us in Bellagio.

Social Media for Disaster Response – Done Right!

To say that Indonesia’s capital is prone to flooding would be an understatement. Well over 40% of Jakarta is at or below sea level. Add to this a rapidly growing population of over 10 million and you have a recipe for recurring disasters. Increasing the resilience of the city’s residents to flooding is thus imperative. Resilience is the capacity of affected individuals to self-organize effectively, which requires timely decision-making based on accurate, actionable and real-time information. But Jakarta is also flooded with information during disasters. Indeed, the Indonesian capital is the world’s most active Twitter city.


So even if relevant, actionable information on rising flood levels could somehow be gleaned from millions of tweets in real time, these reports could be inaccurate or completely false. Besides, only 3% of tweets on average are geo-located, which means any reliable evidence of flooding reported via Twitter is typically not actionable—that is, unless local residents and responders know where waters are rising, they can’t take tactical action in a timely manner. These major challenges explain why many observers discount the value of social media for disaster response.

But Digital Humanitarians in Jakarta aren’t your average Digital Humanitarians. These Digital Jedis recently launched one of the most promising humanitarian technology initiatives I’ve seen in years. Code-named Peta Jakarta, the project takes social media and digital humanitarian action to the next level. Whenever someone posts a tweet with the word banjir (flood), they receive an automated tweet reply from @PetaJkt inviting them to confirm whether they see signs of flooding in their area: “Flooding? Enable geo-location, tweet @petajkt #banjir and check petajakarta.org.” The user can confirm their report by turning geo-location on and simply replying with the keyword banjir or flood. The result gets added to a live, public crisis map, like the one below.

Credit: Peta Jakarta

Over the course of the 2014/2015 monsoon season, Peta Jakarta automatically sent 89,000 tweets to citizens in Jakarta as a call to action to confirm flood conditions. These automated invitation tweets served to inform the user about the project and linked to the video below (via Twitter Cards) to provide simple instructions on how to submit a confirmed report with approximate flood levels. If a Twitter user forgets to turn on the geo-location feature of their smartphone, they receive an automated tweet reminding them to enable geo-location and resubmit their tweet. Finally, the platform “generates a thank you message confirming the receipt of the user’s report and directing them to PetaJakarta.org to see their contribution to the map.” Note that the “overall aim of sending programmatic messages is not to simply solicit a high volume of replies, but to reach active, committed citizen-users willing to participate in civic co-management by sharing nontrivial data that can benefit other users and government agencies in decision-making during disaster scenarios.”
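Peta Jakarta’s actual platform is far more sophisticated, but the core auto-reply pattern described above can be sketched in a few dozen lines. The sketch below is purely illustrative and is emphatically not the team’s code: it assumes Python with the tweepy library (3.x), placeholder API credentials, and a hypothetical save_confirmed_report function standing in for the mapping backend.

```python
# Toy sketch of the keyword-triggered auto-reply pattern (NOT Peta
# Jakarta's actual code). Assumes tweepy 3.x; credentials are placeholders.
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

INVITE = ("Flooding? Enable geo-location, tweet @petajkt #banjir "
          "and check petajakarta.org")

def save_confirmed_report(status):
    """Hypothetical stub: a real system would store the report for mapping."""
    print(status.user.screen_name, status.coordinates)

class BanjirListener(tweepy.StreamListener):
    def on_status(self, status):
        if status.coordinates:
            # Geo-tagged tweet with the keyword: treat as a confirmed report
            save_confirmed_report(status)
        else:
            # No geo-location: invite the user to confirm their report
            api.update_status(
                status="@{} {}".format(status.user.screen_name, INVITE),
                in_reply_to_status_id=status.id)

stream = tweepy.Stream(auth=api.auth, listener=BanjirListener())
stream.filter(track=["banjir"])  # watch the public stream for "banjir"
```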

A report is considered verified when a confirmed geo-tagged tweet includes a picture of the flooding, like in the tweet below. These confirmed and verified tweets get automatically mapped and also shared with Jakarta’s Emergency Management Agency (BPBD DKI Jakarta). The latter are directly involved in this initiative since they’re “regularly faced with the difficult challenge of anticipating & responding to floods hazards and related extreme weather events in Jakarta.” This direct partnership also serves to limit the “Data Rot Syndrome” where data is gathered but not utilized. Note that Peta Jakarta is able to carry out additional verification measures by manually assessing the validity of tweets and pictures by cross-checking other Twitter reports from the same district and also by monitoring “television and internet news sites, to follow coverage of flooded areas and cross-check reports.”


During the latest monsoon season, Peta Jakarta “received and mapped 1,119 confirmed reports of flooding. These reports were formed by 877 users, indicating an average tweet to user ratio of 1.27 tweets per user. A further 2,091 confirmed reports were received without the required geolocation metadata to be mapped, highlighting the value of the programmatic geo-location ‘reminders’ […]. With regard to unconfirmed reports, Peta Jakarta recorded and mapped a total of 25,584 over the course of the monsoon.”

The Live Crisis Maps could be viewed via two different interfaces depending on the end user. For local residents, the maps could be accessed via smartphone with the visual display designed specifically for more tactical decision-making, showing flood reports at the neighborhood level and only for the past hour.


For institutional partners, the data is visualized in more aggregate terms for strategic decision-making based on trend analysis and data integration. “When viewed on a desktop computer, the web-application scaled the map to show a situational overview of the city.”

Credit: Peta Jakarta
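To make the two decision-making scales concrete, here is a toy sketch of the underlying filtering logic: the tactical view narrows reports by neighborhood and recency, while the strategic view aggregates everything. The report data structure is my own assumption, not Peta Jakarta’s actual schema.

```python
# Toy sketch of the two presentation scales (not Peta Jakarta's schema):
# tactical = one neighborhood, last hour; strategic = city-wide aggregate.
from collections import Counter
from datetime import datetime, timedelta

def tactical_view(reports, neighborhood, now=None):
    """What a resident sees on a smartphone: local and recent reports."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(hours=1)
    return [r for r in reports
            if r["neighborhood"] == neighborhood and r["time"] >= cutoff]

def strategic_view(reports):
    """What an agency sees on a desktop: report counts per neighborhood."""
    return Counter(r["neighborhood"] for r in reports)

# Example with two illustrative reports
reports = [
    {"neighborhood": "Kampung Melayu", "time": datetime.utcnow()},
    {"neighborhood": "Grogol", "time": datetime.utcnow() - timedelta(hours=3)},
]
print(tactical_view(reports, "Kampung Melayu"))  # -> the recent local report
print(strategic_view(reports))                   # -> counts for both areas
```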

Peta Jakarta has “proven the value and utility of social media as a mega-city methodology for crowdsourcing relevant situational information to aid in decision-making and response coordination during extreme weather events.” The initiative enables “autonomous users to make independent decisions on safety and navigation in response to the flood in real-time, thereby helping increase the resilience of the city’s residents to flooding and its attendant difficulties.” In addition, by “providing decision support at the various spatial and temporal scales required by the different actors within city, Peta Jakarta offers an innovative and inexpensive method for the crowdsourcing of time-critical situational information in disaster scenarios.” The resulting confirmed and verified tweets were used by BPBD DKI Jakarta to “cross-validate formal reports of flooding from traditional data sources, supporting the creation of information for flood assessment, response, and management in real-time.”

My blog post is based on several conversations I had with the Peta Jakarta team and on this white paper, which was published just a week ago. The report runs close to 100 pages and should absolutely be considered required reading for all Digital Humanitarians and CrisisMappers. The paper includes several dozen insights that a short blog post simply cannot do justice to. If you can’t find the time to read the report, then please see the key excerpts below. In a future blog post, I’ll describe how the Peta Jakarta team plans to leverage UAVs to complement social media reporting.

  • Extracting knowledge from the “noise” of social media requires designed engagement and filtering processes to eliminate unwanted information, reward valuable reports, and display useful data in a manner that further enables users, governments, or other agencies to make non-trivial, actionable decisions in a time-critical manner.
  • While the utility of passively-mined social media data can offer insights for offline analytics and derivative studies for future planning scenarios, the critical issue for frontline emergency responders is the organization and coordination of actionable, real-time data related to disaster situations.
  • User anonymity in the reporting process was embedded within the Peta Jakarta project. Whilst the data produced by Twitter reports of flooding is in the public domain, the objective was not to create an archive of users who submitted potentially sensitive reports about flooding events, outside of the Twitter platform. Peta Jakarta was thus designed to anonymize reports collected by separating reports from their respective users. Furthermore, the text content of tweets is only stored when the report is confirmed, that is, when the user has opted to send a message to the @petajkt account to describe their situation. Similarly, when usernames are stored, they are encrypted using a one-way hash function.
  • In developing the Peta Jakarta brand as the public face of the project, it was important to ensure that the interface and map were presented as community-owned, rather than as a government product or academic research tool. Aiming to appeal to first adopters—the young, tech-savvy Twitter-public of Jakarta—the language used in all the outreach materials (Twitter replies, the outreach video, graphics, and print advertisements) was intentionally casual and concise. Because of the repeated recurrence of flood events during the monsoon, and the continuation of daily activities around and through these flood events, the messages were intentionally designed to be more like normal twitter chatter and less like public service announcements.
  • It was important to design the user interaction with PetaJakarta.org to create a user experience that highlighted the community resource element of the project (similar to the Waze traffic app), rather than an emergency or information service. With this aim in mind, the graphics and language are casual and light in tone. In the video, auto-replies, and print advertisements, PetaJakarta.org never used alarmist or moralizing language; instead, the graphic identity is one of casual, opt-in, community participation.
  • The most frequent question directed to @petajkt on Twitter was about how to activate the geo-location function for tweets. So far, this question has been addressed manually by sending a reply tweet with a graphic instruction describing how to activate geo-location functionality.
  • Critical to the success of the project was its official public launch with, and promotion by, the Governor. This endorsement gave the platform very high visibility and increased legitimacy among other government agencies and public users; it also produced a very successful media event, which led to substantial media coverage and subsequent public attention.

  • The aggregation of the tweets (designed to match the spatio-temporal structure of flood reporting in the system of the Jakarta Disaster Management Agency) was still inadequate when looking at social media because it could result in their overlooking reports that occurred in areas of especially low Twitter activity. Instead, the Agency used the @petajkt Twitter stream to direct their use of the map and to verify and cross-check information about flood-affected areas in real-time. While this use of social media was productive overall, the findings from the Joint Pilot Study have led to the proposal for the development of a more robust Risk Evaluation Matrix (REM) that would enable Peta Jakarta to serve a wider community of users & optimize the data collection process through an open API.
  • Developing a more robust integration of social media data also means leveraging other potential data sets to increase the intelligence produced by the system through hybridity; these other sources could include, but are not limited to, government, private sector, and NGO applications (‘apps’) for on-the-ground data collection, LIDAR or UAV-sourced elevation data, and fixed ground control points with various types of sensor data. The “citizen-as-sensor” paradigm for urban data collection will advance most effectively if other types of sensors and their attendant data sources are developed in concert with social media sourced information.

Review: The First Ever Course on Humanitarian UAVs


The Humanitarian UAV Network (UAViators) promotes the safe, coordinated and effective use of UAVs in a wide range of humanitarian settings. To this end, the Network’s mission includes training the first generation of Humanitarian UAV experts. This explains why I teamed up with VIVES Aeronautics College last year to create and launch the first ever UAV training specifically geared towards established humanitarian organizations. The 3-day, intensive and hands-on training took place this month in Belgium and went superbly well, which is why we’ll be offering it again next year and possibly a second time this year as well.

Participants included representatives from the United Nations Office for the Coordination of Humanitarian Affairs (OCHA), World Food Program (WFP), International Organization for Migration (IOM), European Union Humanitarian Aid Organization (ECHO), Medair, Direct Relief, and Germany’s Development Organization GIZ. We powered through the most important subjects, ranging from airspace regulations to the physics of flight, the ins and outs of civil aviation, aeronautics, weather forecasts, programming flight routes, operational safety, standard operating procedures, best practices, etc. I gave trainings on both Humanitarian UAV Applications and Humanitarian UAV Operations, which totaled well over 4 hours of instruction and discussion. The Ops training included a detailed review of best practices—summary available here. Knowing how to operate this new technology is definitely the easy part; knowing how to use UAVs effectively and responsibly in real-world humanitarian contexts is a far bigger challenge; hence the need for this training.

The purpose of the course was to provide the kind of training that humanitarian professionals need in order to 1) Fully understand the opportunities and limitations of this new technology; 2) Partner effectively with professional UAV teams during disasters; and 3) Operate safely, legally, responsibly and ethically. The accelerated training ended with a 2-hour simulation exercise in which participants were tasked with carrying out a UAV mission in Nepal for which they had to use all the standard operating procedures and best practices they had learned over the course of the training. Each team then had to present in detail how they would carry out the required mission.

Below are pictures from the 3-day event. Each day included some 12 hours of instruction and discussion—so these pictures certainly don’t cover all of the flights, materials and seminars. For more on this unique UAV training course, check out this excellent blog post by Andrew Schroeder from Direct Relief. In the meantime, please feel free to get in touch with me if you’re interested in taking this course in the future or would like a customized one for your organization.


The VIVES Aeronautics campus in Belgium has a dedicated group focused on UAV courses, training and research in addition to traditional aviation training.


Facilities at VIVES include flight simulators, computer labs and labs dedicated to building UAVs.


We went through hundreds of pages and at least 400 slides of dedicated training material over the course of the 3 days.


We took a direct, hands-on approach from day one of the training. Naturally, we got all the necessary legal permissions to operate UAVs in this area.


Introducing participants to fixed-wing and multi-rotor UAVs.


Participants learned the basics of how to operate this fixed-wing UAV.


Some fixed-wing UAVs are hand-launched; this one uses a dedicated launcher. We flew this one for about 20 minutes.


Multi-rotor UAVs were then introduced and participants were instructed on how to operate this model themselves during the training.


So each participant took to the controls under the supervision of certified UAV pilots and got a feel for manual flights.


Other multi-rotor UAVs were also introduced along with standard operating procedures related to safety, take-off, flight and landing.


Multi-rotors were compared with fixed-wing UAVs, and participants discussed which type of asset to use for different humanitarian UAV missions.


We then visited a factory dedicated to the manufacture of long-range fixed-wing UAVs so participants could learn about airframes and hardware.


After half a day of outdoor, hands-on training on UAVs, the real work began.


Intensive, back-to-back seminars to provide humanitarian participants with everything they need to know about UAVs, humanitarian applications and also humanitarian UAV missions.


From the laws of the skies and reading official air-route maps to understanding airspace classes and regulations.


Safety was an overriding theme throughout the 3-day training.


Safety training included standard operating procedures for hardware & batteries


Participants were also introduced to the principles of flight and aviation.


All seminars were highly interactive and also allowed for long question & answer sessions; some of these lasted up to an hour and continued during the breaks.


All aspects of UAV technology were introduced and discussed at length, such as First Person View (FPV).


Regulations were an important component of the 3-day training.


Participants learned how to program UAV flights; they were introduced to the software and provided with an overview of best practices on flight planning.


A hands-on introduction to imagery processing and analysis was also provided. Participants were taught how to use dedicated software to process and analyze the aerial imagery captured during the morning outdoor sessions.


Participants thus spent time in the dedicated computer lab working with this imagery and creating 3D point clouds, for example.


Humanitarian OpenStreetMap, MapBox and MicroMappers were also introduced.


The first UAViators & VIVES Class of 2015!


Handbook: How to Catalyze Humanitarian Innovation in Computing Research Institutes

This research was commissioned by the World Humanitarian Summit (WHS) Innovation Team, which I joined last year. An important goal of the Summit’s Innovation Team is to identify concrete innovation pathways that can transform the humanitarian industry into a more effective, scalable and agile sector. I have found that discussions on humanitarian innovation can sometimes tend towards conceptual, abstract and academic questions. This explains why I took a different approach vis-a-vis my contribution to the WHS Innovation Track.


The handbook below provides practical collaboration guidelines for both humanitarian organizations & computing research institutes on how to catalyze humanitarian innovation through successful partnerships. These actionable guidelines are directly applicable now and draw on extensive interviews with leading humanitarian groups and CRIs including the International Committee of the Red Cross (ICRC), United Nations Office for the Coordination of Humanitarian Affairs (OCHA), United Nations Children’s Fund (UNICEF), United Nations High Commissioner for Refugees (UNHCR), UN Global Pulse, Carnegie Mellon University (CMU), International Business Machines (IBM), Microsoft Research, the Data Science for Social Good Program at the University of Chicago and others.

This handbook, which is the first of its kind, also draws directly on years of experience and lessons learned from the Qatar Computing Research Institute’s (QCRI) active collaboration and unique partnerships with multiple international humanitarian organizations. The aim of this blog post is to actively solicit feedback on this first, complete working draft, which is available here as an open and editable Google Doc. So if you’re interested in sharing your insights, kindly insert your suggestions and questions by using the Insert/Comments feature. Please do not edit the text directly.

I need to submit the final version of this report on July 1, so very much welcome constructive feedback via the Google Doc before this deadline. Thank you!

Humanitarian UAV Missions: Towards Best Practices


The purpose of the handbook below is to promote the safe, coordinated and effective use of UAVs in a wide range of humanitarian settings. The handbook draws on lessons learned during recent humanitarian UAV missions in Vanuatu (March-April 2015) and Nepal (April-May 2015) as well as earlier UAV missions in both Haiti and the Philippines. The handbook takes the form of an operational checklist divided into Pre-flight, In-flight and Post-flight sections. The best practices documented in each section are meant to serve as a minimum set of guidelines only. As such, this document is not the final word on best practices, which explains why the handbook is available here as an open, editable Google Doc. We invite humanitarian, UAV and research communities to improve this handbook and to keep our collective best practices current by inserting key comments and suggestions directly to the Google Doc. Both hardcopies and digital copies of this handbook are available for free and may not in part or in whole be used for commercial purposes. Click here for more information on the Humanitarian UAV Network.

Assessing Disaster Damage from 3D Point Clouds

Humanitarian and development organizations like the United Nations and the World Bank typically carry out disaster damage and needs assessments following major disasters. The ultimate goal of these assessments is to measure the impact of disasters on the society, economy and environment of the affected country or region. This includes assessing the damage caused to building infrastructure, for example. These assessment surveys are generally carried out in person—that is, on foot and/or by driving around an affected area. This is a very time-consuming process with highly variable results in terms of data quality. Can 3D point clouds derived from very high-resolution aerial imagery captured by UAVs accelerate and improve the post-disaster damage assessment process? Yes, but a number of challenges related to methods, data & software need to be overcome first. Solving these challenges will require proactive cross-disciplinary collaboration.

The following three-tiered scale is often used to classify infrastructure damage: “1) Completely destroyed buildings or those beyond repair; 2) Partially destroyed buildings with a possibility of repair; and 3) Unaffected buildings or those with only minor damage. By locating on a map all dwellings and buildings affected in accordance with the categories noted above, it is easy to visualize the areas hardest hit and thus requiring priority attention from authorities in producing more detailed studies and defining demolition and debris removal requirements” (UN Handbook). As one World Bank colleague confirmed in a recent email, “From the engineering standpoint, there are many definitions of the damage scales, but from years of working with structural engineers, I think the consensus is now to use a three-tier scale – destroyed, heavily damaged, and others (non-visible damage).”

That said, field-based surveys of disaster damage typically overlook damage caused to roofs since on-the-ground surveyors are bound by the laws of gravity. Hence the importance of satellite imagery. At the same time, however, “The primary problem is the vertical perspective of [satellite imagery, which] largely limits the building information to the roofs. This roof information is well suited for the identification of extreme damage states, that is completely destroyed structures or, to a lesser extent, undamaged buildings. However, damage is a complex 3-dimensional phenomenon,” which means that “important damage indicators expressed on building façades, such as cracks or inclined walls, are largely missed, preventing an effective assessment of intermediate damage states” (Fernandez Galaretta et al. 2014).


This explains why “Oblique imagery [captured from UAVs] has been identified as more useful, though the multi-angle imagery also adds a new dimension of complexity,” as we experienced first-hand during the World Bank’s UAV response to Cyclone Pam in Vanuatu (Ibid, 2014). Obtaining photogrammetric data for oblique images is particularly challenging. That is, identifying GPS coordinates for a given house pictured in an oblique photograph is virtually impossible to do automatically with the vast majority of UAV cameras. (Only specialist cameras using gimbal-mounted systems can reportedly infer photogrammetric data from oblique aerial imagery, but even then it is unclear how accurate this inferred GPS data is.) In any event, oblique data also “lead to challenges resulting from the multi-perspective nature of the data, such as how to create single damage scores when multiple façades are imaged” (Ibid, 2014).

To this end, my colleague Jorge Fernandez Galarreta and I are exploring the use of 3D (point clouds) to assess disaster damage. Multiple software solutions like Pix4D and PhotoScan can already be used to construct detailed point clouds from high-resolution 2D aerial imagery (nadir and oblique). “These exceed standard LiDAR point clouds in terms of detail, especially at façades, and provide a rich geometric environment that favors the identification of more subtle damage features, such as inclined walls, that otherwise would not be visible, and that in combination with detailed façade and roof imagery have not been studied yet” (Ibid, 2014).
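For readers who want to experiment, opening and exploring one of these photogrammetry-derived point clouds takes just a few lines of code, for instance with the open-source Open3D library (my choice for illustration; the file name is a placeholder for a PLY export from Pix4D or PhotoScan):

```python
# Minimal sketch: interactively inspect a photogrammetry-derived point
# cloud with the open-source Open3D library. The file name is a
# placeholder for a PLY export from Pix4D or PhotoScan.
import open3d as o3d

pcd = o3d.io.read_point_cloud("damaged_building_block.ply")
print(pcd)  # prints the number of points loaded

# Opens an interactive window: rotate, zoom and "fly" around façades
# and roofs to inspect cracks, holes and inclined walls up close.
o3d.visualization.draw_geometries([pcd])
```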

Unlike oblique images, point clouds give surveyors a full 3D view of an urban area, allowing them to “fly through” and inspect each building up close and from all angles. One need no longer be physically onsite, nor limited to simply one façade or a strictly field-based view to determine whether a given building is partially damaged. But what does partially damaged even mean when this kind of high resolution 3D data becomes available? Take this recent note from a Bank colleague with 15+ years of experience in disaster damage assessments: “In the [Bank’s] official Post-Disaster Needs Assessment, the classification used is to say that if a building is 40% damaged, it needs to be repaired. In my view this is too vague a description and not much help. When we say 40%, is it the volume of the building we are talking about or the structural components?”


In their recent study, Fernandez Galaretta et al. used point clouds to generate per-building damage scores based on a 5-tiered classification scale (D1-D5). They chose to compute these damage scores based on the following features: “cracks, holes, intersection of cracks with load-carrying elements and dislocated tiles.” They also selected non-damage related features: “façade, window, column and intact roof.” Their results suggest that the visual assessment of point clouds is very useful to identify the following disaster damage features: total collapse, collapsed roof, rubble piles, inclined façades and more subtle damage signatures that are difficult to recognize in more traditional BDA [Building Damage Assessment] approaches. The authors were thus able to compute a per building damage score, taking into account both “the overall structure of the building,” and the “aggregated information collected from each of the façades and roofs of the building to provide an individual per-building damage score.”

Fernandez Galaretta et al. also explore the possibility of automating this damage assessment process based on point clouds. Their conclusion: “More research is needed to extract automatically damage features from point clouds, combine those with spectral and pattern indicators of damage, and to couple this with engineering understanding of the significance of connected or occluded damage indicators for the overall structural integrity of a building.” That said, the authors note that this approach would “still suffer from the subjectivity that characterizes expert-based image analysis.”

Hence my interest in using crowdsourcing to analyze point clouds for disaster damage. Naturally, crowdsourcing alone will not eliminate subjectivity. In fact, having more people analyze point clouds may yield all kinds of disparate results. This explains why a detailed and customized imagery interpretation guide is necessary; like this one, which was just released by my colleagues at the Harvard Humanitarian Initiative (HHI). This also explains why crowdsourcing platforms require quality-control mechanisms. One easy technique is triangulation: have ten different volunteers look at each point cloud and tag features in said cloud that show cracks, holes, intersections of cracks with load-carrying elements and dislocated tiles. Surely more eyes are better than two for tasks that require a good eye for detail.


Next, identify which features have the most tags—this is the triangulation process. For example, if one area of a point cloud is tagged as a “crack” by 8 or more volunteers, chances are there really is a crack there. One can then count the total number of distinct areas tagged as cracks by 8 or more volunteers across the point cloud to calculate the total number of cracks per façade. Do the same with the other metrics (holes, dislocated tiles, etc.), and you can compute a per-building damage score based on overall consensus derived from hundreds of crowdsourced tags. Note that “tags” can also be lines or polygons, meaning that individual cracks could be traced by volunteers, thus providing information on the approximate length and size of a crack. This variable could also be factored into the overall per-building damage score.
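To illustrate, here is a toy sketch of this consensus logic in Python. The feature types come from the paragraph above, while the severity weights and data structures are made-up placeholders for the sake of the example:

```python
# Toy sketch of the triangulation logic described above: a tagged feature
# only counts if enough volunteers marked it. The severity weights are
# made-up placeholders; the feature types come from the text above.
from collections import Counter

CONSENSUS_THRESHOLD = 8  # of 10 volunteers, per the example above
WEIGHTS = {"crack": 1, "hole": 2, "dislocated_tile": 1,
           "crack_on_load_element": 3}

def damage_score(tags):
    """tags: one (feature_id, feature_type) pair per volunteer tag.
    Returns a per-building score based on volunteer consensus."""
    votes = Counter(tags)  # how many volunteers tagged each feature
    score = 0
    for (feature_id, feature_type), n in votes.items():
        if n >= CONSENSUS_THRESHOLD:  # consensus reached: feature is real
            score += WEIGHTS.get(feature_type, 1)
    return score

# Feature "f1" tagged as a crack by 9 volunteers, "f2" as a hole by
# only 3 -> only f1 passes the consensus threshold.
tags = [("f1", "crack")] * 9 + [("f2", "hole")] * 3
print(damage_score(tags))  # -> 1
```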

In sum, crowdsourcing could potentially overcome some of the data quality issues that have already marked field-based damage assessment surveys. In addition, crowdsourcing could potentially speed up the data analysis since professional imagery and GIS analysts tend to already be hugely busy in the aftermath of major disasters. Adding more data to their plate won’t help anyone. Crowdsourcing the analysis of 3D point clouds may thus be our best bet.

So why hasn’t this all been done yet? For several reasons. For one, creating very high-resolution point clouds requires more pictures and thus more UAV flights, which can be time-consuming. Second, processing aerial imagery to construct point clouds can also take some time. Third, handling, sharing and hosting point clouds can be challenging given how large those files quickly get. Fourth, no software platform currently exists to crowdsource the annotation of point clouds as described above (particularly when it comes to the automated quality-control mechanisms that are necessary to ensure data quality). Fifth, we need more robust imagery interpretation guides. Sixth, groups like the UN and the World Bank are still largely thinking in 2D rather than 3D. And those few who are considering 3D tend to approach this from a data visualization angle rather than using human and machine computing to analyze 3D data. Seventh, point cloud analysis for 3D feature detection is still a very new area of research. Many of the methodological questions it raises have yet to be answered, which is why my team and I at QCRI are starting to explore this area from the perspective of computer vision and machine learning.

The holy grail? Combining crowdsourcing with machine learning for real-time feature detection of disaster damage in 3D point clouds rendered in real-time via airborne UAVs surveying a disaster site. So what is it going to take to get there? Well, first of all, UAVs are becoming more sophisticated; they’re flying faster and for longer and will increasingly be working in swarms. (In addition, many of the new micro-UAVs come with a “follow me” function, which could enable the easy and rapid collection of aerial imagery during field assessments). So the first challenge described above is temporary as are the second and third challenges since computer processing power is increasing, not decreasing, over time.

This leaves us with the software challenge and the imagery guides. I’m already collaborating with HHI on the latter. As for the former, I’ve spoken with a number of colleagues to explore possible software solutions for crowdsourcing the tagging of point clouds. One idea is simply to extend MicroMappers. Another is to add simple annotation features to PLAS.io and PointCloudViz, since these platforms are already designed to visualize and interact with point clouds. A third option is to use a 3D model platform like SketchFab, which already enables annotations. (Many thanks to colleague Matthew Schroyer for pointing me to SketchFab last week.) I’ve since had a long call with SketchFab and am excited by the prospects of using this platform for simple point cloud annotation.

In fact, Matthew already used SketchFab to annotate a 3D model of the Durbar Square neighborhood in downtown Kathmandu post-earthquake. He found an aerial video of the area, took multiple screenshots of this video, created a point cloud from these and then generated a 3D model, which he annotated within SketchFab. This model, pictured below, would have been much higher resolution if he had had the original footage or 2D images. Click pictures to enlarge.

3D model screenshots 1–4: Durbar Square, Kathmandu, Nepal

Here’s a short video with all the annotations in the 3D model:

And here’s the link to the “live” 3D model. And to drive home the point that this 3D model could be far higher resolution if the underlying imagery had been directly accessible to Matthew, check out this other SketchFab model below, which you can also access in full here.

Screenshots of the higher-resolution SketchFab model

The SketchFab team has kindly given me a SketchFab account that allows up to 50 annotations per 3D model. So I’ll be uploading a number of point clouds from Vanuatu (post Cyclone Pam) and Nepal (post earthquakes) to explore the usability of SketchFab for crowdsourced disaster damage assessments. In the meantime, one could simply tag-and-number all major features in a point cloud, create a Google Form, and ask digital volunteers to rate the level of damage near each numbered tag. Not a perfect solution, but one that works. Ultimately, we’d need users to annotate point clouds by tracing 3D polygons if we wanted an easier way to use the resulting data for automated machine learning purposes.

In any event, if readers do have any suggestions on other software platforms, methodologies, studies worth reading, etc., feel free to get in touch via the comments section below or by email, thank you. In the meantime, many thanks to colleagues Jorge, Matthew, Ferda & Ji (QCRI), Salvador (PointCloudViz), Howard (PLAS.io) and Corentin (SketchFab) for the time they’ve kindly spent brainstorming the above issues with me.