Assessing Disaster Damage from 3D Point Clouds

Humanitarian and development organizations like the United Nations and the World Bank typically carry out disaster damage and needs assessments following major disasters. The ultimate goal of these assessments is to measure the impact of disasters on the society, economy and environment of the affected country or region. This includes assessing the damage caused to building infrastructure, for example. These assessment surveys are generally carried out in person—that is, on foot and/or by driving around an affected area. This is a very time-consuming process with highly variable results in terms of data quality. Can 3D point clouds derived from very high-resolution aerial imagery captured by UAVs accelerate and improve the post-disaster damage assessment process? Yes, but a number of challenges related to methods, data & software need to be overcome first. Solving these challenges will require pro-active cross-disciplinary collaboration.

The following three-tiered scale is often used to classify infrastructure damage: “1) Completely destroyed buildings or those beyond repair; 2) Partially destroyed buildings with a possibility of repair; and 3) Unaffected buildings or those with only minor damage. By locating on a map all dwellings and buildings affected in accordance with the categories noted above, it is easy to visualize the areas hardest hit and thus requiring priority attention from authorities in producing more detailed studies and defining demolition and debris removal requirements” (UN Handbook). As one World Bank colleague confirmed in a recent email, “From the engineering standpoint, there are many definitions of the damage scales, but from years of working with structural engineers, I think the consensus is now to use a three-tier scale – destroyed, heavily damaged, and others (non-visible damage).”

That said, field-based surveys of disaster damage typically overlook damage caused to roofs since on-the-ground surveyors are bound by the laws of gravity. Hence the importance of satellite imagery. At the same time, however, “The primary problem is the vertical perspective of [satellite imagery, which] largely limits the building information to the roofs. This roof information is well suited for the identification of extreme damage states, that is completely destroyed structures or, to a lesser extent, undamaged buildings. However, damage is a complex 3-dimensional phenomenon,” which means that “important damage indicators expressed on building façades, such as cracks or inclined walls, are largely missed, preventing an effective assessment of intermediate damage states” (Fernandez Galarreta et al. 2014).


This explains why “Oblique imagery [captured from UAVs] has been identified as more useful, though the multi-angle imagery also adds a new dimension of complexity” as we experienced first-hand during the World Bank’s UAV response to Cyclone Pam in Vanuatu (Ibid, 2014). Obtaining photogrammetric data for oblique images is particularly challenging. That is, identifying GPS coordinates for a given house pictured in an oblique photograph is virtually impossible to do automatically with the vast majority of UAV cameras. (Only specialist cameras using gimbal-mounted systems can reportedly infer photogrammetric data in oblique aerial imagery, but even then it is unclear how accurate this inferred GPS data is). In any event, oblique data also “lead to challenges resulting from the multi-perspective nature of the data, such as how to create single damage scores when multiple façades are imaged” (Ibid, 2014).
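To see why nadir imagery is so much easier to georeference, consider the basic geometry: with a vertical camera, every pixel's ground position follows directly from the camera's GPS fix, its altitude and the lens parameters. Here is a back-of-the-envelope sketch in Python, assuming a perfectly level camera, flat terrain and no lens distortion (all numbers in the example call are illustrative only):

```python
import math

def nadir_pixel_to_latlon(cam_lat, cam_lon, alt_m, focal_mm, pixel_pitch_um,
                          img_w, img_h, px, py):
    """Approximate ground coordinates of pixel (px, py) in a nadir image.

    Assumes a perfectly vertical camera, flat terrain and no lens
    distortion; good enough to show why nadir georeferencing is easy.
    """
    # Ground sample distance: meters on the ground per image pixel
    gsd = alt_m * (pixel_pitch_um / 1000.0) / focal_mm
    # Offset from the image center, in meters (image y axis points down)
    dx = (px - img_w / 2.0) * gsd
    dy = (img_h / 2.0 - py) * gsd
    # Convert the metric offset to degrees (small-offset approximation)
    dlat = dy / 111_320.0
    dlon = dx / (111_320.0 * math.cos(math.radians(cam_lat)))
    return cam_lat + dlat, cam_lon + dlon

# Example: camera near Port Vila, 100 m up, 4.5 mm lens, 1.55 µm pixels
lat, lon = nadir_pixel_to_latlon(-17.74, 168.31, 100, 4.5, 1.55,
                                 4000, 3000, 1200, 800)
print(lat, lon)
```

For an oblique photo, by contrast, the ray behind each pixel depends on the camera's pitch, roll and yaw, and on where that ray intersects the terrain; hence the need for the specialist gimbal-mounted systems mentioned above.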

To this end, my colleague Jorge Fernandez Galarreta and I are exploring the use of 3D point clouds to assess disaster damage. Multiple software solutions like Pix4D and PhotoScan can already be used to construct detailed point clouds from high-resolution 2D aerial imagery (nadir and oblique). “These exceed standard LiDAR point clouds in terms of detail, especially at façades, and provide a rich geometric environment that favors the identification of more subtle damage features, such as inclined walls, that otherwise would not be visible, and that in combination with detailed façade and roof imagery have not been studied yet” (Ibid, 2014).
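To give a concrete feel for what working with one of these point clouds involves, here is a minimal sketch using the open-source Open3D library (chosen purely for illustration; the file name and crop coordinates are hypothetical):

```python
import open3d as o3d

# Load a photogrammetry-derived point cloud (e.g., exported as .ply)
pcd = o3d.io.read_point_cloud("village.ply")
print(pcd)  # reports the number of points

# Crop to a single building's footprint so it can be inspected up close
bbox = o3d.geometry.AxisAlignedBoundingBox(min_bound=(-5.0, -5.0, 0.0),
                                           max_bound=(5.0, 5.0, 15.0))
building = pcd.crop(bbox)

# Opens an interactive viewer: rotate, zoom and "fly through" with the mouse
o3d.visualization.draw_geometries([building])
```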

Unlike oblique images, point clouds give surveyors a full 3D view of an urban area, allowing them to “fly through” and inspect each building up close and from all angles. One need no longer be physically onsite, nor limited to simply one façade or a strictly field-based view to determine whether a given building is partially damaged. But what does partially damaged even mean when this kind of high resolution 3D data becomes available? Take this recent note from a Bank colleague with 15+ years of experience in disaster damage assessments: “In the [Bank’s] official Post-Disaster Needs Assessment, the classification used is to say that if a building is 40% damaged, it needs to be repaired. In my view this is too vague a description and not much help. When we say 40%, is it the volume of the building we are talking about or the structural components?”


In their recent study, Fernandez Galarreta et al. used point clouds to generate per-building damage scores based on a 5-tiered classification scale (D1-D5). They chose to compute these damage scores based on the following features: “cracks, holes, intersection of cracks with load-carrying elements and dislocated tiles.” They also selected non-damage related features: “façade, window, column and intact roof.” Their results suggest that the visual assessment of point clouds is very useful for identifying the following disaster damage features: total collapse, collapsed roof, rubble piles, inclined façades and more subtle damage signatures that are difficult to recognize in more traditional BDA [Building Damage Assessment] approaches. The authors were thus able to compute a per-building damage score, taking into account both “the overall structure of the building,” and the “aggregated information collected from each of the façades and roofs of the building to provide an individual per-building damage score.”

Fernandez Galarreta et al. also explore the possibility of automating this damage assessment process based on point clouds. Their conclusion: “More research is needed to extract automatically damage features from point clouds, combine those with spectral and pattern indicators of damage, and to couple this with engineering understanding of the significance of connected or occluded damage indicators for the overall structural integrity of a building.” That said, the authors note that this approach would “still suffer from the subjectivity that characterizes expert-based image analysis.”

Hence my interest in using crowdsourcing to analyze point clouds for disaster damage. Naturally, crowdsourcing alone will not eliminate subjectivity. In fact, having more people analyze point clouds may yield all kinds of disparate results. This explains why a detailed and customized imagery interpretation guide is necessary; like this one, which was just released by my colleagues at the Harvard Humanitarian Initiative (HHI). This also explains why crowdsourcing platforms require quality-control mechanisms. One easy technique is triangulation: have ten different volunteers look at each point cloud and tag features in said cloud that show cracks, holes, intersection of cracks with load-carrying elements and dislocated tiles. Surely more eyes are better than two for tasks that require a good eye for detail.


Next, identify which features have the most tags—this is the triangulation process. For example, if one area of a point cloud is tagged as a “crack” by 8 or more volunteers, chances are there really is a crack there. One can then count the total number of distinct areas tagged as cracks by 8 or more volunteers across the point cloud to calculate the total number of cracks per façade. Do the same with the other metrics (holes, dislocated tiles, etc.), and you can compute a per-building damage score based on overall consensus derived from hundreds of crowdsourced tags. Note that “tags” can also be lines or polygons, meaning that individual cracks could be traced by volunteers, thus providing information on the approximate length/size of a crack. This variable could also be factored into the overall per-building damage score.
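In code, this consensus logic is just a matter of counting. Here is a minimal sketch in Python; the tag schema, the weights and the data are all hypothetical placeholders rather than an actual implementation:

```python
from collections import Counter

# Each volunteer tag: (feature_type, region_id), where region_id identifies
# the patch of the point cloud the volunteer clicked on (hypothetical schema)
tags = [
    ("crack", "facade-3:patch-07"), ("crack", "facade-3:patch-07"),
    ("hole",  "roof:patch-02"),     ("crack", "facade-3:patch-07"),
    # ... hundreds more crowdsourced tags ...
]

CONSENSUS = 8  # a region counts only if 8+ of the 10 volunteers agree

counts = Counter(tags)
confirmed = [(feature, region) for (feature, region), n in counts.items()
             if n >= CONSENSUS]

# Per-building damage score: weight each confirmed feature type (weights
# here are placeholders; a structural engineer would need to set them)
weights = {"crack": 1.0, "hole": 2.0, "dislocated_tiles": 0.5}
score = sum(weights[f] for f, _ in confirmed)
print(f"{len(confirmed)} confirmed features, damage score = {score}")
```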

In sum, crowdsourcing could potentially overcome some of the data quality issues that have long marked field-based damage assessment surveys. In addition, crowdsourcing could speed up the data analysis since professional imagery and GIS analysts are already hugely busy in the aftermath of major disasters. Adding more data to their plate won’t help anyone. Crowdsourcing the analysis of 3D point clouds may thus be our best bet.

So why hasn’t this all been done yet? For several reasons. For one, creating very high-resolution point clouds requires more pictures and thus more UAV flights, which can be time-consuming. Second, processing aerial imagery to construct point clouds can also take some time. Third, handling, sharing and hosting point clouds can be challenging given how large those files quickly get. Fourth, no software platform currently exists to crowdsource the annotation of point clouds as described above (particularly when it comes to the automated quality-control mechanisms necessary to ensure data quality). Fifth, we need more robust imagery interpretation guides. Sixth, groups like the UN and the World Bank are still largely thinking in 2D rather than 3D. And those few who are considering 3D tend to approach it from a data visualization angle rather than using human and machine computing to analyze 3D data. Seventh, point cloud analysis for 3D feature detection is still a very new area of research. Many of the key methodological questions have yet to be answered, which is why my team and I at QCRI are starting to explore this area from the perspective of computer vision and machine learning.

The holy grail? Combining crowdsourcing with machine learning for real-time feature detection of disaster damage in 3D point clouds rendered in real-time via airborne UAVs surveying a disaster site. So what is it going to take to get there? Well, first of all, UAVs are becoming more sophisticated; they’re flying faster and for longer and will increasingly be working in swarms. (In addition, many of the new micro-UAVs come with a “follow me” function, which could enable the easy and rapid collection of aerial imagery during field assessments). So the first challenge described above is temporary, as are the second and third, since computing power continues to increase over time.

This leaves us with the software challenge and imagery guides. I’m already collaborating with HHI on the latter. As for the former, I’ve spoken with a number of colleagues to explore possible software solutions to crowdsource the tagging of point clouds. One idea is simply to extend MicroMappers. Another is to add simple annotation features to PLAS.io and PointCloudViz since these platforms are already designed to visualize and interact with point clouds. A third option is to use a 3D model platform like SketchFab, which already enables annotations. (Many thanks to colleague Matthew Schroyer for pointing me to SketchFab last week). I’ve since had a long call with SketchFab and am excited by the prospects of using this platform for simple point cloud annotation.

In fact, Matthew already used SketchFab to annotate a 3D model of the Durbar Square neighborhood in downtown Kathmandu post-earthquake. He found an aerial video of the area, took multiple screenshots of this video, created a point cloud from these and then generated a 3D model, which he annotated within SketchFab. This model, pictured below, would have been much higher resolution had he had access to the original footage or 2D images.

[Screenshots: annotated 3D model of Durbar Square, Kathmandu]

Here’s a short video with all the annotations in the 3D model:

And here’s the link to the “live” 3D model. And to drive home the point that this 3D model could be far higher resolution if the underlying imagery had been directly accessible to Matthew, check out this other SketchFab model below, which you can also access in full here.

[Screenshots: higher-resolution SketchFab point cloud model]

The SketchFab team has kindly given me a SketchFab account that allows up to 50 annotations per 3D model. So I’ll be uploading a number of point clouds from Vanuatu (post Cyclone Pam) and Nepal (post earthquakes) to explore the usability of SketchFab for crowdsourced disaster damage assessments. In the meantime, one could simply tag-and-number all major features in a point cloud, create a Google Form, and ask digital volunteers to rate the level of damage near each numbered tag. Not a perfect solution, but one that works. Ultimately, we’d need users to annotate point clouds by tracing 3D polygons to make the resulting data easier to use for automated machine learning purposes.
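Aggregating those Google Form responses into consensus ratings would be straightforward. A quick sketch with pandas; the CSV layout and column names here are hypothetical, purely to illustrate the workflow:

```python
import pandas as pd

# Export the Google Form responses as CSV; column names are hypothetical:
# volunteer_id, tag_number, damage_rating (e.g., 0-5)
df = pd.read_csv("responses.csv")

# Median rating per numbered tag, plus how many volunteers rated it
summary = df.groupby("tag_number")["damage_rating"].agg(["median", "count"])

# Keep only tags rated by enough volunteers to trust the consensus
trusted = summary[summary["count"] >= 5]
print(trusted.sort_values("median", ascending=False).head())
```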

In any event, if readers do have any suggestions on other software platforms, methodologies, studies worth reading, etc., feel free to get in touch via the comments section below or by email, thank you. In the meantime, many thanks to colleagues Jorge, Matthew, Ferda & Ji (QCRI), Salvador (PointCloudViz), Howard (PLAS.io) and Corentin (SketchFab) for the time they’ve kindly spent brainstorming the above issues with me.

Humanitarian UAV Missions in Nepal: Early Observations (Updated)

There are at the very least 9 humanitarian UAV teams operating in Nepal. We know this because these teams voluntarily chose to liaise with the Humanitarian UAV Network (UAViators). In this respect, the current humanitarian UAV response is far better coordinated than the one I witnessed in the Philippines right after Typhoon Haiyan in 2013. In fact, there was little to no coordination at the time amongst the multiple civilian UAV teams, let alone between these teams and humanitarian organizations, or the Filipino government for that matter. This lack of coordination, coupled with the fact that I could not find any existing “Code of Conduct” for the use of UAVs in humanitarian settings, is what prompted me to launch UAViators just months after leaving the Philippines.


The past few days have made it clear that we still have a long way to go in the humanitarian UAV space. Below are some early observations (not to be taken as criticisms but as early reflections only). UAV technology is highly disruptive and is only now starting to have visible impact (both good and bad) in humanitarian contexts. We don’t have all the answers; neither the institutions nor the regulators are keeping up with the rapid pace of innovation. The issues below cut across technical, organizational and regulatory lines and are only growing more complex. So I welcome your constructive input on how to improve these efforts moving forward.

  • Yes, we now have a Code of Conduct, which was drafted by humanitarian professionals, UAV pilots, experts and academics. However, this doesn’t mean that every civilian UAV pilot in Nepal has taken the time to read this document, let alone knows that it exists. As such, most UAV pilots may not even realize that they require legal permission from the government in order to operate or that they should carry some form of insurance. Even professional pilots may not think to inform the local police that they have formal authorization to operate; or know how to communicate with Air Traffic Control or with the military for flight permissions. UAViators can’t force anyone in Nepal to comply with national regulations or the Code. The Network can only encourage UAV pilots to follow best practices. The majority of the problems vis-a-vis the use of UAVs in Nepal would have been avoided had the majority of UAV users followed the Humanitarian UAV Code of Conduct.
  • Yes, more countries have instituted UAV regulations. Some of these tend to be highly restrictive, equating 700-gram micro-UAVs with 50-kilo UAVs. Some apply the same laws to the use of UAVs for amateur movie productions as to the professional use of UAVs for Search & Rescue. In any event, there are no (clear) regulations in Nepal as per research and phone calls made by the Humanitarian UAV Network (see also the UAViators Laws/Travel Wiki). To this end, UAViators has provided contact info to Nepal’s Civil Aviation Authority and Chief of Police. Update: All humanitarian UAV teams are now required to obtain permission from the Ministry of Home Affairs to operate UAVs in Nepal. Once permission is granted, individual flight plans must be approved by the Nepal Army (via UNDAC). More info here (see May 8 Update). It has taken almost two weeks to get the above process in place. Clearly, without strong backing or leadership from an established humanitarian group that is able and willing to mediate with appropriate Ministries and Civil Aviation authorities, there is only so much that UAViators can do to support the above process.
  • Yes, we now have all 9 UAV teams (soon 10) on a single dedicated email thread. And yes, UAViators has been able to vet many teams while keeping amateur UAV pilots on standby if the latter have less than 50 hours of flight experience. Incidentally, requests for imagery can be made here. That said, what about all the other civilian UAV pilots operating independently? These other pilots, some of them reporters and disaster junkies, have already undermined the use of UAVs for humanitarian efforts. Indeed, it was reported that “The Nepali Government became very irritated with reporters collecting disaster adventure footage using drones.” This has prompted the government to ban UAV flights with the exception of flights carried out for humanitarian purposes. The latter still require permission from the Ministry of Home Affairs. The problem with so-called “drone journalists” is not simply a safety issue, which is obviously the number one priority of a humanitarian UAV mission. The fact is, there are far more requests for aerial imagery than can be met with just 10 UAV teams on site. So coordination and data sharing are key—even with drone journalists if the latter are prepared to be a part of the solution by liaising with UAViators and following the Code of Conduct. Furthermore, local communities have already expressed anger at the fact that drone and humanitarian journalists have “visited the same sites with no plans to share data, make the imagery publicly available, or to make an effort to communicate to villages why the flights are important and how the information will be used to assist in relief efforts.”
  • Yes, we have workflows in place for the UAV teams to share their imagery, and some already have. Alas, limited Internet bandwidth is significantly slowing down the pace of data sharing. Some UAV teams have not (yet) expressed an interest in sharing their imagery. Some have not provided information about where they’re flying. Of course, they are incredibly busy. And besides, they are not required to share any data or information. The best UAViators can do is simply to outline the added value of sharing this imagery & their flight plans. And without strong public backing from established humanitarian groups, there is little else the Network can do. Update: several UAV teams are now only sharing imagery with local and national authorities. If the UN and others want this imagery, they need to go through Nepali authorities.
  • Yes, UAViators is indeed in touch with a number of humanitarian organizations who would like aerial imagery for specific areas; however, these groups are unable (or not yet willing) to make these requests public or formal until they better understand the risks (legal, political and operational), the extent of the value-added (they want to see the imagery first), the experience and reliability of the UAV teams, etc. They are also wary of having UAV teams take requests for imagery as carte blanche to say they are operating on their behalf. At the same time, these humanitarian organizations do not have the resources (or time) to provide any coordination support between the Humanitarian UAV Network, appropriate government ministries and Nepal’s Civil Aviation Authority.
  • Yes, we have a dedicated UAViators site for Nepal updated multiple times a day. Unfortunately, most UAV teams are having difficulty accessing this site from Nepal due to continuing Internet connectivity issues. This is also true of the dedicated UAViators Google Spreadsheet being used to facilitate the coordination of UAV operations. This online resource includes each team’s contact info, UAV assets, requests for aerial imagery, data needs, etc. We’re now sharing this information via basic text within the body of emails; but this also contributes to email overload. Incidentally, the UAVs being used by the 7 teams in Nepal are small UAVs such as DJI’s Phantom and Inspire, Aeryon’s SkyRanger and senseFly’s eBee, for example.
  • Yes, we have set up a UAV-Flights-Twitter map for Nepal (big thanks to colleagues at LinkedIn) to increase the transparency of where and when UAVs are being flown across the country. Alas, none of the UAV teams have made use of this solution yet, even though most are tweeting from the field. This service allows UAV teams to send a simple tweet about their next UAV flight, which then gets mapped automatically. Even if it goes unused in Nepal, perhaps this service will be used in future deployments, possibly combined with SMS/WhatsApp.
  • Yes, UAViators is connected with the Digital Humanitarian Network (DHN); specifically Humanitarian OpenStreetMap (HOT) and the Standby Task Force (SBTF), with the latter ready to deploy QCRI’s MicroMappers platform for the analysis of oblique imagery. Yet we’re still not sure how best to combine the results of nadir imagery and oblique imagery analysis to add value. Every point on a nadir (vertical) image has a GPS coordinate; but this is not true of obliques (photos taken at an angle). The GPS data for oblique photographs is simply the GPS coordinates for the position of the camera at the time the oblique image was taken. (Specialist gimbal mounted cameras can provide GPS info for objects in oblique photographs, but these are not in use in Nepal).
  • Yes, UAViators has access to a local physical office in Kathmandu. Thanks to the kind offer from Kathmandu Living Labs (KLL), UAV pilots can meet and co-work at KLL. However, even finding a time for all the UAV teams to meet at this office has proven impossible. And yet this is so crucial; there are good reasons why humanitarians have Cluster meetings.
  • Yes, 3D models (Point Clouds) of disaster areas can add insights to disaster damage assessments. That said, these are often huge files and thus particularly challenging to upload. And when these do get posted on-line, what is the best way to have them analyzed? GIS experts and other professionals tend to be completely swamped during disasters. But even if a team were available, what methods & software should they be using to assess and indeed quantify the level of damage in each 3D model? Can this assessment be crowdsourced? And how can the results of 3D analysis be added to other datasets and official humanitarian information products?
  • Yes, the majority of UAV teams that have chosen to liaise with the Humanitarian UAV Network are now in Nepal, yet it took a while for some teams to get on site and there were delays with their UAV assets getting into the country. This points to the need for building local capacity within Nepal and other disaster-prone countries so that local organizations can rapidly deploy UAVs and analyze the resulting imagery themselves after major disasters. This explains why my colleague Nama Budhathoki (at KLL) and I have been looking to set up Kathmandu Flying Labs (basically a Humanitarian UAV Innovation Lab) for literally a year now. In any event, thanks to LinkedIn for Good, we were able to identify some local UAV pilots and students right after the earthquake; some of whom have since been paired with the international UAV teams. Building the capacity of local teams is also important because of the local knowledge and local contacts (and potentially the legal permissions) that these teams will already have.

So where do we go from here? Despite the above challenges, there is a lot more coordination and structure to the UAV response in Nepal than there was following Typhoon Haiyan in 2013. Then again, the challenges that come with UAV operations in disaster situations are only going to increase as more UAV teams deploy in future crises alongside members of the public, drone journalists, military UAVs, etc. At some point, hopefully sooner (before accidents and major mistakes happen) rather than later, an established humanitarian organization will take on the responsibility of mediating between UAV teams, UAViators, the government, civil aviation officials, military and other aid groups.

What we may need is something along the lines of what GSMA’s Disaster Response Program has done for Mobile Network Operators (MNOs) and the humanitarian community. GSMA has done a lot since the 2010 Haiti Earthquake to bridge MNOs and humanitarians, acting as convener, developing standard operating procedures, ethical guidelines, a global model agreement, etc. Another suggestion floated by a humanitarian colleague is the INSARAG Secretariat, which classifies and categorizes Search and Rescue teams. Each team has to “sign onto agreed guidelines (behavior, coordination, markings, etc). So, when the first one arrives, they know to set up a reception space; they all know that there will be coordination meetings, etc.” Perhaps INSARAG could serve as a model for UAViators 2.0. Update: UNDAC is now serving as liaison for UAV flights, which will likely set a precedent for future humanitarian UAV missions.

Coordination is never easy. And leveraging a new, disruptive technology for disaster response is also a major challenge. I, for one, am ready and want to take on these new challenges, but I do need a willing and able partner in the humanitarian community to take them on with me and others. The added value of timely, very high-resolution aerial data for disaster response is significant, not to mention the use of UAVs for payload transportation and the provision of communication services. The World Humanitarian Summit (WHS) is coming up next year. Will we unveil a solution to the above challenges at this pivotal Summit, or will we continue dragging our feet and forgo the humanitarian innovation opportunities that are right in front of our eyes in Nepal?

In the meantime, I want to thank and acknowledge the following UAV teams for liaising with the Humanitarian UAV Network: Team Rubicon, SkyCatch, Halo Drop, GlobalMedic, Medair, Deploy Media and Paul Borrud. Almost all teams have already been able to share aerial imagery. If other responders on the ground are able to support these efforts in any way (e.g., Cisco providing better Internet connectivity), or if you know of other UAV groups able to move faster and provide guidance, then please do get in touch.

A Force for Good: How Digital Jedis are Responding to the Nepal Earthquake (Updated)

Digital Humanitarians are responding in full force to the devastating earthquake that struck Nepal. Information sharing and coordination are taking place online via CrisisMappers and on multiple dedicated Skype chats. The Standby Task Force (SBTF), Humanitarian OpenStreetMap (HOT) and others from the Digital Humanitarian Network (DHN) have also deployed in response to the tragedy. This blog post provides a quick summary of some of these digital humanitarian efforts along with what’s coming in terms of new deployments.

Update: A list of Crisis Maps for Nepal is available below.


At the request of the UN Office for the Coordination of Humanitarian Affairs (OCHA), the SBTF is using QCRI’s MicroMappers platform to crowdsource the analysis of tweets and mainstream media (the latter via GDELT) to rapidly 1) assess disaster damage & needs; and 2) identify where humanitarian groups are deploying (3W’s). The MicroMappers CrisisMaps are already live and publicly available below (simply click on the maps to open the live versions). Both Crisis Maps are being updated hourly (at times every 15 minutes). Note that MicroMappers uses both crowdsourcing and Artificial Intelligence (AIDR).

Update: More than 1,200 Digital Jedis have used MicroMappers to sift through a staggering 35,000 images and 7,000 tweets! This has so far resulted in 300+ relevant pictures of disaster damage displayed on the Image Crisis Map and over 100 relevant disaster tweets on the Tweet Crisis Map.

Live CrisisMap of pictures from both Twitter and Mainstream Media showing disaster damage:

MM Nepal Earthquake ImageMap

Live CrisisMap of Urgent Needs, Damage and Response Efforts posted on Twitter:

MM Nepal Earthquake TweetMap

Note: the outstanding Kathmandu Living Labs (KLL) team has also launched an Ushahidi Crisis Map in collaboration with the Nepal Red Cross. We’ve already invited KLL to take all of the MicroMappers data and add it to their crisis map. Supporting local efforts is absolutely key.


The Humanitarian UAV Network (UAViators) has also been activated to identify, mobilize and coordinate UAV assets & teams. Several professional UAV teams are already on their way to Kathmandu. The UAV pilots will be producing high-resolution nadir imagery, oblique imagery and 3D point clouds. UAViators will be pushing this imagery to both HOT and MicroMappers for rapid crowdsourced analysis (just as was done with the aerial imagery from Vanuatu post Cyclone Pam, more on that here). A leading UAV manufacturer is also donating several UAVs to UAViators for use in Nepal. These UAVs will be sent to KLL to support their efforts. In the meantime, DigitalGlobe, Planet Labs and SkyBox are each sharing their satellite imagery with CrisisMappers, HOT and others in the Digital Humanitarian Network.

There are several other efforts going on, so the above is certainly not a complete list but simply reflects those digital humanitarian efforts that I am involved in or most familiar with. If you know of other major efforts, then please feel free to post them in the comments section. Thank you. More on the state of the art in digital humanitarian action in my new book, Digital Humanitarians.


List of Nepal Crisis Maps

Please add to the list below by posting new links in this Google Spreadsheet. Also, someone should really create one map that pulls from each of the listed maps.

Code for Nepal Casualty Crisis Map:
http://bit.ly/1IpUi1f 

DigitalGlobe Crowdsourced Damage Assessment Map:
http://goo.gl/bGyHTC

Disaster OpenRouteService Map for Nepal:
http://www.openrouteservice.org/disaster-nepal

ESRI Damage Assessment Map:
http://arcg.is/1HVNNEm

Harvard WorldMap Tweets of Nepal:
http://worldmap.harvard.edu/maps/nepalquake 

Humanitarian OpenStreetMap Nepal:
http://www.openstreetmap.org/relation/184633

Kathmandu Living Labs Crowdsourced Crisis Map:
http://www.kathmandulivinglabs.org/earthquake

MicroMappers Disaster Image Map of Damage:
http://maps.micromappers.org/2015/nepal/images/#close

MicroMappers Disaster Damage Tweet Map of Needs:
http://maps.micromappers.org/2015/nepal/tweets

NepalQuake Status Map:
http://www.nepalquake.org/status-map

UAViators Crisis Map of Damage from Aerial Pics/Vids:
http://uaviators.org/map (takes a while to load)

Visions SDSU Tweet Crisis Map of Nepal:
http://vision.sdsu.edu/ec2/geoviewer/nepal-kathmandu#

Can Massively Multiplayer Online Games also be Next Generation Humanitarian Technologies?


My colleague Peter Mosur and I launched the Internet Response League (IRL) at QCRI a while back to actively explore the intersection of massively multiplayer online games & humanitarian response. IRL is also featured in my new book, Digital Humanitarians, along with many other innovative ideas & technologies. Shortly after the book came out, Peter and I had the pleasure of exploring a collaboration with the team at Massive Multiplayer Online Science (MMOS) and CCP Games—makers of the popular game EVE Online.

MMOS is an awesome group that aims to enable online gamers to contribute to scientific research while playing video games. Our colleagues at MMOS kindly reached out to us earlier this year as they’re really interested in supporting humanitarian efforts as well. They are thus bringing IRL on board to help them explore the use of online games for humanitarian projects.

CCP Games has already been mentioned on the IRL blog here. Their gamers managed to raise an impressive $190,890 for the Icelandic Red Cross in response to Typhoon Haiyan/Yolanda with their PLEX for Good initiative. This is on top of the $100,000 that the company has raised with the program for various disasters in Japan, Haiti, Pakistan, and the United States.

CCP Games’ flagship title EVE Online passed 500,000 subscribers in 2013. The game is unusual among MMORPGs: rather than spreading its player base across many different servers, EVE Online keeps all players on one large server. Named “Tranquility”, this single server currently averages 25,000 players at any given time, with peaks of over 38,000 [1]. That average equates to some 600,000 hours of human time spent playing EVE Online every day! The potential good to come out of a humanitarian partnership here is immense.

So we’re currently exploring with the team at MMOS possible ways to process humanitarian data within EVE’s gaming environment. We’ll write another post soon detailing the unique challenges we’re facing in terms of seamlessly processing digital humanitarian tasks within EVE Online. This will require a lot of creativity to pull off and success is by no means guaranteed (just like life and online games). In sum, our humanitarian tasks must in no way disrupt the EVE Online experience; they basically need to be “invisible” to the gamer (besides an initial opt-in).

See the video below for an in-depth overview of the type of work that MMOS and CCP Games envision incorporating into EVE Online. The video was screened at the EVE Online Fanfest last month and also features a message from the Internet Response League at the 40:36 mark!

This blog post was co-authored with Peter Mosur.

Artificial Intelligence for Monitoring Elections (AIME)


I published a blog post with the same title a good while back. Here’s what I wrote at the time:

Citizen-based, crowdsourced election observation initiatives are on the rise. Leading election monitoring organizations are also looking to leverage citizen-based reporting to complement their own professional election monitoring efforts. Meanwhile, the information revolution continues apace, with the number of new mobile phone subscriptions up by over 1 billion in just the past 36 months alone. The volume of election-related reports generated by “the crowd” is thus expected to grow significantly in the coming years. But international, national and local election monitoring organizations are completely unprepared to deal with the rise of Big (Election) Data.

I thus introduced a new project to “develop a free and open source platform to automatically filter relevant election reports from the crowd.” I’m pleased to report that my team and I at QCRI have just tested AIME during an actual election for the very first time—the 2015 Nigerian Elections. My QCRI Research Assistant Peter Mosur (co-author of this blog post) collaborated directly with Oludotun Babayemi from Clonehouse Nigeria and Chuks Ojidoh from the Community Life Project & Reclaim Naija to deploy and test the AIME platform.

AIME is a free and open source (experimental) solution that combines crowdsourcing with Artificial Intelligence to automatically identify tweets of interest during major elections. As organizations engaged in election monitoring well know, there can be a lot of chatter on social media as people rally behind their chosen candidates, announce this to the world, ask friends and family who they will vote for, and update others once they have voted, all while posting about election-related incidents they may have witnessed. This can make it rather challenging to find reports relevant to election monitoring groups.


Election monitors typically track instances of violence, election rigging, and voter issues, since these incidents reveal problems with the election process. Election monitoring initiatives such as Reclaim Naija & Uzabe also monitor several other types of incidents, but for the purposes of testing the AIME platform, we selected the three event types mentioned above. In order to automatically identify tweets related to these events, one must first provide AIME with example tweets. (Of course, if there is no Twitter traffic to begin with, then there won’t be much need for AIME, which is precisely why we developed an SMS extension that can be used with AIME).

So where does the crowdsourcing come in? Users of AIME can ask the crowd to identify tweets related to election violence, rigging and voter issues by simply tagging tweets posted to the AIME platform with the appropriate event type. (Several quality control mechanisms are built in to ensure data quality. Also, one does not need to use crowdsourcing to tag the tweets; this can be done internally as well or instead). What AIME does next is use a technique from Artificial Intelligence (AI) called statistical machine learning to understand patterns in the human-tagged tweets. In other words, it begins to recognize which tweets belong to which category type—violence, rigging and voter issues. AIME will then auto-classify new tweets that are related to these categories (and can auto-classify around 2 million tweets or text messages per minute).
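For readers curious what that training step looks like in practice, here is a minimal sketch using scikit-learn. To be clear, this is not AIME’s actual code; it is just a toy illustration of text classification with TF-IDF features and logistic regression, with made-up example tweets:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy human-tagged training tweets (AIME's real training data came from
# the crowd; 571 tagged tweets in the Nigeria deployment described below)
train_tweets = [
    "Ballot boxes destroyed after counting at polling unit in Kano",
    "Card readers failing, long queues at our polling station",
    "Armed men threatening voters near the station",
    "Just voted! Proud moment #NigeriaDecides",
]
train_labels = ["rigging", "voting_issues", "violence", "not_related"]

# TF-IDF turns each tweet into a weighted word-frequency vector;
# logistic regression then learns to separate the categories
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_tweets, train_labels)

# Auto-classify a new, unseen tweet
print(clf.predict(["Thugs attacking voters outside the polling unit"]))
```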


Before creating our automatic classifier for the Nigerian Elections, we first needed to collect examples of tweets related to election violence, rigging and voter issues in order to teach AIME. Oludotun Babayemi and Chuks Ojidoh kindly provided the expert local knowledge needed to identify the keywords we should be following on Twitter (using AIME). They graciously gave us many different keywords to use as well as a list of trusted Twitter accounts to follow for election-related messages. (Due to difficulties with AIME, we were not able to use the trusted accounts. In addition, many of the suggested keywords were unusable since words like “aggressive”, “detonate”, and “security” would have resulted in a large number of false positives).

Here is the full list of keywords used by AIME:

Nigeria elections, nigeriadecides, Nigeria decides, INEC, GEJ, Change Nigeria, Nigeria Transformation, President Jonathan, Goodluck Jonathan, Sai Buhari, saibuhari, All progressives congress, Osibanjo, Sambo, Peoples Democratic Party, boko haram, boko, area boys, nigeria2015, votenotfight, GEJwinsit, iwillvoteapc, gmb2015, revoda, thingsmustchange, and march4buhari

Out of this list, “NigeriaDecides” was by far the most popular keyword used in the elections, accounting for over 28,000 tweets out of a batch of 100,000. During the week leading up to the elections, AIME collected roughly 800,000 tweets. Over the course of the elections and the few days following, the total number of collected tweets jumped to well over 4 million.

We sampled just a handful of these tweets and manually tagged those related to violence, rigging and other voting issues using AIME. “Violence” was described as “threats, riots, arming, attacks, rumors, lack of security, vandalism, etc.” while “Election Rigging” was described as “Ballot stuffing, issuing invalid ballot papers, voter impersonation, multiple voting, ballot boxes destroyed after counting, bribery, lack of transparency, tampered ballots etc.” Lastly, “Voting Issues” was defined as “Polling station logistics issues, technical issues, people unable to vote, media unable to enter, insufficient staff, lack of voter assistance, inadequate voting materials, underage voters, etc.”

Any tweet that did not fall into these three categories was tagged as “Other” or “Not Related”. Our election classifiers were trained with a total of 571 human-tagged tweets, which enabled AIME to automatically classify well over 1 million tweets (1,263,654 to be precise). The results in the screenshot below show how accurate AIME was at auto-classifying tweets across the different event types defined earlier. AUC is what captures the “overall accuracy” of AIME’s classifiers.

[Screenshot: AIME classifier accuracy results for the Nigeria deployment]

AIME was rather good at correctly tagging tweets related to “Voting Issues” (98% accuracy) but drastically poor at tagging tweets related to “Election Rigging” (0%). This is not AIME’s fault : ) since it only had 8 examples to learn from. As for “Violence”, the accuracy score was 47%, which is actually surprising given that AIME only had 14 human-tagged examples to learn from. Lastly, AIME did fairly well at auto-classifying unrelated tweets (accuracy of 86%).
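A quick aside on AUC for readers unfamiliar with the metric: it is the probability that a classifier ranks a randomly chosen positive example above a randomly chosen negative one, where 1.0 is perfect and 0.5 is chance. Here is a tiny illustration with made-up held-out labels and scores (again, not AIME’s internal code):

```python
from sklearn.metrics import roc_auc_score

# For one class (say "voting_issues"): true binary labels on a held-out
# test set, and the classifier's predicted probability for that class
y_true  = [1, 0, 1, 1, 0, 0, 1, 0]                    # toy labels
y_score = [0.9, 0.2, 0.8, 0.7, 0.4, 0.1, 0.95, 0.3]   # toy probabilities

# 1.0 here, since every positive outranks every negative
print(roc_auc_score(y_true, y_score))
```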

Conclusion: this was the first time we tested AIME during an actual election and we’ve learned a lot in the process. The results are not perfect but enough to press on and experiment further with the AIME platform. If you’d like to test AIME yourself (and if you fully recognize that the tool is experimental and still under development, hence not perfect), then feel free to get in touch with me here. We have 2 slots open for testing. In the meantime, big thanks to my RA Peter for spearheading both this deployment and the subsequent research.

Crowdsourcing Point Clouds for Disaster Response

Point Clouds, or 3D models derived from high-resolution aerial imagery, are in fact nothing new. Several software platforms already exist to reconstruct a series of 2D aerial images into fully-fledged 3D fly-through models. Check out these very neat examples from my colleagues at Pix4D and SenseFly:

What do a castle, Jesus and a mountain have to do with humanitarian action? As noted in my previous blog post, there’s only so much disaster damage one can glean from nadir (that is, vertical) imagery and oblique imagery. Let’s suppose that the nadir image below was taken by an orbiting satellite or flying UAV right after an earthquake, for example. How can you possibly assess disaster damage from this one picture alone? Even if you had nadir imagery for these houses before the earthquake, your ability to assess structural damage would be limited.

[Nadir aerial image of houses]

This explains why we also captured oblique imagery for the World Bank’s UAV response to Cyclone Pam in Vanuatu (more here on that humanitarian mission). But even with oblique photographs, you’re stuck with one fixed perspective. Who knows what the houses below look like from the other side; your UAV may have simply captured this side only. And even if you had pictures for all possible angles, you’d literally have hundreds of pictures to leaf through and make sense of.

[Oblique aerial image of houses]

What’s that famous quote by Henry Ford again? “If I had asked people what they wanted, they would have said faster horses.” We don’t need faster UAVs, we simply need to turn what we already have into Point Clouds, which I’m indeed hoping to do with the aerial imagery from Vanuatu, by the way. The Point Cloud below was constructed purely from 2D aerial images.

It isn’t perfect, but we don’t need perfection in disaster response, we need good enough. So when we as humanitarian UAV teams go into the next post-disaster deployment and ask humanitarians what they need, they may say “faster horses” because they’re not (yet) familiar with what’s really possible with the imagery processing solutions available today. That obviously doesn’t mean that we should ignore their information needs. It simply means we should seek to expand their imaginations vis-a-vis the art of the possible with UAVs and aerial imagery. Here is a 3D model of a village in Vanuatu constructed using 2D aerial imagery:

Now, the title of my blog post does lead with the word crowdsourcing. Why? For several reasons. First, it takes some decent computing power (and time) to create these Point Clouds. But if the underlying 2D imagery is made available to hundreds of Digital Humanitarians, we could use this distributed computing power to rapidly crowdsource the creation of 3D models. Second, each model can then be pushed to MicroMappers for crowdsourced analysis. Why? Because having a dozen eyes scrutinizing one Point Cloud is better than two. Note that for quality control purposes, each Point Cloud would be shown to 5 different Digital Humanitarian volunteers; we already do this with MicroMappers for tweets, pictures, videos, satellite images and of course aerial images as well. Each digital volunteer would then trace areas in the Point Cloud where they spot damage. If the traces from the different volunteers match, then bingo, there’s likely damage at those x, y and z coordinates. Here’s the idea:
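One way to make that trace-matching concrete is to cluster the volunteers’ trace locations in 3D and keep only the clusters confirmed by several independent volunteers. A minimal sketch with scikit-learn’s DBSCAN follows; the coordinates, distance threshold and consensus rule are all hypothetical:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Centroids (x, y, z) of damage traces drawn by different volunteers
# on the same point cloud (toy data; units are meters)
traces = np.array([
    [12.1, 4.3, 2.0], [12.3, 4.1, 2.2], [12.2, 4.2, 2.1],  # same spot, 3 people
    [30.0, 9.5, 1.0],                                       # a lone, unconfirmed trace
])
volunteer = np.array([1, 2, 3, 4])  # who drew each trace

# Traces within 0.5 m of each other are treated as the same candidate damage
labels = DBSCAN(eps=0.5, min_samples=1).fit_predict(traces)

for cluster in set(labels):
    members = volunteer[labels == cluster]
    if len(set(members)) >= 3:  # confirmed by 3+ independent volunteers
        print("likely damage near", traces[labels == cluster].mean(axis=0))
```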

We could easily use iPads to turn the process into a Virtual Reality experience for digital volunteers. In other words, you’d be able to move around and above the actual Point Cloud by simply changing the position of your iPad accordingly. This technology already exists and has for several years now. Tracing features in the 3D models that appear to be damaged would be as simple as using your finger to outline the damage on your iPad.

What about the inevitable challenge of Big Data? What if thousands of Point Clouds are generated during a disaster? Sure, we could try to scale our crowdsourcing efforts by recruiting more Digital Humanitarian volunteers, but wouldn’t that just be asking for a “faster horse”? Just like we’ve already done with MicroMappers for tweets and text messages, we would seek to combine crowdsourcing and Artificial Intelligence to automatically detect features of interest in 3D models. This sounds to me like an excellent research project for a research institute engaged in advanced computing R&D.
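What might automated feature detection in a Point Cloud even start from? One plausible ingredient is simple geometry: estimate a surface normal at every point and flag surfaces that are neither cleanly vertical (walls) nor cleanly horizontal (roofs, ground). The sketch below uses Open3D; the file name and thresholds are hypothetical, and a real system would learn such features rather than hand-code them:

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("building.ply")  # hypothetical input file

# Estimate a surface normal at every point from its local neighborhood
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=30))

normals = np.asarray(pcd.normals)

# Angle of each normal above the horizontal plane: ~0 deg for an intact
# wall, ~90 deg for a flat roof. Values in between hint at tilted or
# collapsed surfaces (a crude heuristic, for illustration only)
tilt = np.degrees(np.arcsin(np.clip(np.abs(normals[:, 2]), 0.0, 1.0)))
suspicious = np.count_nonzero((tilt > 10) & (tilt < 80))
print(f"{suspicious} points on neither-vertical-nor-horizontal surfaces")
```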

I would love to see the results of this applied research integrated directly within MicroMappers. This would allow us to combine the results of social media analysis via MicroMappers (e.g., tweets, Instagram pictures, YouTube videos) with the results of satellite imagery analysis as well as 2D and 3D aerial imagery analysis generated via MicroMappers.

Anyone interested in working on this?

How Digital Jedis Are Springing to Action In Response To Cyclone Pam

Digital Humanitarians sprang into action just hours after the Category 5 Cyclone slammed into Vanuatu’s many islands. This first deployment focused on rapidly assessing the damage by analyzing multimedia content posted on social media and in the mainstream news. This request came directly from the United Nations (OCHA), which activated the Digital Humanitarian Network (DHN) to carry out the rapid damage assessment. So the Standby Task Force (SBTF), a founding member of the DHN, used QCRI’s MicroMappers platform to produce a digital, interactive Crisis Map of some 1,000+ geo-tagged pictures of disaster damage (screenshot below).

[Screenshot: MicroMappers Image Crisis Map for Vanuatu]

Within days of Cyclone Pam making landfall, the World Bank (WB) activated the Humanitarian UAV Network (UAViators) to quickly deploy UAV pilots to the affected islands. UAViators has access to a global network of 700+ professional UAV pilots in some 70+ countries worldwide. The WB identified two UAV teams from the Humanitarian UAV Network and deployed them to capture very high-resolution aerial photographs of the damage to support the Government’s post-disaster damage assessment efforts. Pictures from these early UAV missions are available here. Aerial images & videos of the disaster damage were also posted to the UAViators Crowdsourced Crisis Map.

Last week, the World Bank activated the DHN (for the first time ever) to help analyze the many, many gigabytes of aerial imagery from Vanuatu. So Digital Jedis from the DHN are now using Humanitarian OpenStreetMap (HOT) and MicroMappers (MM) to crowdsource the search for partially damaged and fully destroyed houses in the aerial imagery. The OSM team is specifically looking at the “nadir imagery” captured by the UAVs while MM is exclusively reviewing the “oblique imagery”. More specifically, digital volunteers are using MM to trace destroyed houses in red, partially damaged houses in orange, and houses with little to no damage in blue. Below is an early screenshot of the Aerial Crisis Map for the island of Efate. The live Crisis Map is available here.

[Screenshot: early Aerial Crisis Map of Efate]

Clicking on one of these markers will open up the high resolution aerial pictures taken at that location. Here, two houses are traced in blue (little to no damage) and two on the upper left are traced in orange (partial damage expected).

[Screenshot: houses traced in blue and orange on the Aerial Crisis Map]

The cameras on the UAVs captured the aerial imagery in very high resolution, as you can see from the close-up below. You’ll note two traces for the house. These two traces were done by two independent volunteers (for the purposes of quality control). In fact, each aerial image is shown to at least 3 different Digital Jedis.

[Close-up: the same house traced by two independent volunteers]

Once this MicroMappers deployment is over, we’ll be using the resulting traces to create automated feature detection algorithms, just like we did here for the MicroMappers Namibia deployment. This approach, combining crowdsourcing with Artificial Intelligence (AI), is explored in more detail here vis-a-vis disaster response. The purpose of this hybrid human-machine computing approach is to accelerate (semi-automate) future damage assessment efforts.
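The first step in that direction is mechanical: turn each volunteer trace into a labeled training example. Here is a minimal sketch that cuts labeled image chips out of the aerial photos using the traces’ bounding boxes (the export schema, file names and coordinates are hypothetical, purely for illustration):

```python
from PIL import Image

# Volunteer traces exported as (image file, damage class, bounding box);
# this export format is hypothetical
traces = [
    ("efate_042.jpg", "destroyed", (220, 310, 480, 560)),
    ("efate_042.jpg", "partial",   (700, 120, 910, 300)),
]

# Cut each traced region out as a labeled chip for training a classifier
for i, (fname, label, box) in enumerate(traces):
    chip = Image.open(fname).crop(box)   # box = (left, upper, right, lower)
    chip.save(f"{label}_{i:04d}.png")
```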

Meanwhile, back in Vanuatu, the HOT team has already carried out some preliminary analysis of the damage based on the aerial imagery provided. They are also updating their OSM maps of the affected islands thanks to this imagery. Below is an initial damage assessment carried out by HOT for demonstration purposes only. Please visit their deployment page on the Vanuatu response for more information.

[Screenshot: preliminary HOT damage assessment]

So what’s next? Combining both the nadir and oblique imagery to interpret disaster damage is ultimately what is needed, so we’re actually hoping to make this happen (today) by displaying the nadir imagery directly within the Aerial Crisis Map produced by MicroMappers. (Many thanks to the MapBox team for their assistance on this). We hope this integration will help HOT and our World Bank partners better assess the disaster damage. This is the first time that we as a group have done anything like this, so there is obviously lots of learning going on, which should improve future deployments. Ultimately, we’ll need to create 3D models (point clouds) of disaster-affected areas (already easy to do with high-resolution aerial imagery) and then simply use MicroMappers to crowdsource the analysis of these 3D models.

And here’s a 3D model of a village in Vanuatu constructed using 2D aerial photos taken by UAV:

For now, though, Digital Jedis will continue working very closely with the World Bank to ensure that the latter have the results they need in the right format to deliver a comprehensive damage assessment to the Government of Vanuatu by the end of the week. In the meantime, if you’re interested in learning more about digital humanitarian action, then please check out my new book, which features UAViators, HOT, MM and lots more.