Using Flash Crowds to Automatically Detect Earthquakes & Impact Before Anyone Else

It is said that our planet has a new nervous system: a digital nervous system composed of digital veins and intertwined sensors that capture the pulse of our planet in near real-time. Next generation humanitarian technologies seek to leverage this new nervous system to detect and diagnose the impact of disasters within minutes rather than hours. To this end, LastQuake may be one of the most impressive humanitarian technologies that I have recently come across. Spearheaded by the European-Mediterranean Seismological Center (EMSC), the technology combines “Flashsourcing” with social media monitoring to auto-detect earthquakes before they’re picked up by seismometers or anyone else.

Scientists typically draw on ground-motion prediction algorithms and data on building infrastructure to rapidly assess an earthquake’s potential impact. Alas, ground-motion predictions vary significantly and infrastructure data are rarely available at sufficient resolutions to accurately assess the impact of earthquakes. Moreover, a minimum of three seismometers are needed to calibrate a quake, and the resulting seismic data take several minutes to generate. This explains why the EMSC uses human sensors to rapidly collect relevant data on earthquakes, as these reduce the uncertainties that come with traditional rapid impact assessment methodologies. Indeed, the Center’s important work clearly demonstrates how the Internet coupled with social media are “creating new potential for rapid and massive public involvement by both active and passive means” vis-a-vis earthquake detection and impact assessments. The EMSC can automatically detect new quakes within 80-90 seconds of their occurrence while simultaneously publishing tweets with preliminary information on said quakes, like this one:

In reality, the first human sensors (increases in web traffic) can be detected within 15 seconds (!) of a quake. The EMSC’s system continues to automatically tweet relevant information (including documents, photos, videos, etc.) for the first 90 minutes after it first detects an earthquake, and it is also able to automatically create a customized and relevant hashtag for individual quakes.

How do they do this? Well, the team draws on two real-time crowdsourcing methods that “indirectly collect information from eyewitnesses on earthquakes’ effects.” The first is TED, which stands for Twitter Earthquake Detection–a system developed by the US Geological Survey (USGS). TED filters tweets by keyword, location and time to “rapidly detect sharing events through increases in the number of tweets” related to an earthquake. The second method, called “flashsourcing,” was developed by the EMSC to analyze traffic patterns on its own website, “a popular rapid earthquake information website” that gets an average of 1.5 to 2 million visits a month. Flashsourcing allows the Center to detect surges in web traffic that often occur after earthquakes—a detection method named Internet Earthquake Detection (IED). These traffic surges (“flash crowds”) are caused by “eyewitnesses converging on its website to find out the cause of their shaking experience” and can be detected by analyzing the IP locations of website visitors.
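
The core logic of this kind of detection is easy to sketch. Below is a toy TED-style surge detector in Python (my own simplification with placeholder thresholds, not the USGS’s actual system): count keyword-matched tweets in a short sliding window and flag a surge when the rate jumps well above the recent baseline.

```python
from collections import deque

class TweetRateDetector:
    """Toy TED-style detector: flag a surge when the tweet rate in a short
    window far exceeds the rate over a longer baseline period."""

    def __init__(self, window_secs=60, baseline_secs=3600, ratio=5.0, min_count=20):
        self.window_secs = window_secs      # short window tested for a surge
        self.baseline_secs = baseline_secs  # longer window for the baseline rate
        self.ratio = ratio                  # surge = rate >= ratio x baseline
        self.min_count = min_count          # ignore surges built on a handful of tweets
        self.timestamps = deque()           # arrival times of keyword-matched tweets

    def add_tweet(self, ts):
        """Record a keyword/location-filtered tweet arriving at Unix time ts;
        return True if the short window now constitutes a surge."""
        self.timestamps.append(ts)
        while self.timestamps and self.timestamps[0] < ts - self.baseline_secs:
            self.timestamps.popleft()       # drop tweets older than the baseline
        recent = sum(1 for t in self.timestamps if t >= ts - self.window_secs)
        older = len(self.timestamps) - recent
        # Expected tweets per short window, estimated from the longer history.
        expected = max(older * self.window_secs / self.baseline_secs, 1.0)
        return recent >= self.min_count and recent >= self.ratio * expected
```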

It is worth emphasizing that both TED and IED work independently from traditional seismic monitoring systems. Instead, they are “based on real-time statistical analysis of Internet-based information generated by the reaction of the public to the shaking.” As EMSC rightly notes in a forthcoming peer-reviewed scientific study, “Detections of felt earthquakes are typically within 2 minutes for both methods, i.e., considerably faster than seismographic detections in poorly instrumented regions of the world.” TED and IED are highly complementary methods since they are based on two entirely “different types of Internet use that might occur after an earthquake.” TED depends on the popularity of Twitter, while IED’s effectiveness depends on how well known the EMSC website is in the area affected by an earthquake. LastQuake publishes real-time information on earthquakes by automatically merging real-time data feeds from both TED and IED as well as non-crowdsourced feeds.

[Image: EMSC LastQuake infographic]

Let’s look into the methodology that powers IED. Flashsourcing can be used to detect felt earthquakes and provide “rapid information (within 5 minutes) on the local effects of earthquakes. More precisely, it can automatically map the area where shaking was felt by plotting the geographical locations of statistically significant increases in traffic […].” In addition, flashsourcing can also “discriminate localities affected by alarming shaking levels […], and in some cases it can detect and map areas affected by severe damage or network disruption through the concomitant loss of Internet sessions originating from the impacted region.” As such, this “negative space” (where there are no signals) is itself an important signal for damage assessment, as I’ve argued before.
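
To make the flashsourcing idea more concrete, here is a rough sketch of the kind of analysis involved (again my own simplification, not EMSC’s production code): geolocate each visitor’s IP address, bucket hits into geographic grid cells, and flag cells whose current traffic is statistically far above their historical mean. A mirror-image check on vanishing sessions captures the “negative space” signal mentioned above.

```python
def flash_crowd_cells(current_hits, mean_hits, std_hits, z_threshold=4.0):
    """current_hits: dict mapping a grid cell (lat_idx, lon_idx) to visits in
    the last minute, derived from IP-geolocated hits in the web logs.
    mean_hits / std_hits: per-cell historical traffic statistics.
    Returns cells with statistically significant surges, i.e. likely felt shaking."""
    surges = []
    for cell, hits in current_hits.items():
        sigma = max(std_hits.get(cell, 1.0), 1.0)   # avoid division by zero
        z = (hits - mean_hits.get(cell, 0.0)) / sigma
        if z >= z_threshold:
            surges.append((cell, z))
    return sorted(surges, key=lambda pair: -pair[1])

def dropped_session_cells(sessions_before, sessions_after, drop_ratio=0.8):
    """Flag cells where most open Internet sessions vanished at once: the
    'negative space' signal of possible damage or network disruption."""
    return [cell for cell, before in sessions_before.items()
            if before >= 10 and sessions_after.get(cell, 0) <= (1 - drop_ratio) * before]
```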

In the future, EMSC’s flashsourcing system may also be able to discriminate power cuts between indoor and outdoor Internet connections at the city level, since the system’s analysis of web traffic sessions will soon be based on web sockets rather than webserver log files. This automatic detection of power failures “is the first step towards a new system capable of detecting Internet interruptions or localized infrastructure damage.” Of course, flashsourcing alone does not “provide a full description of earthquake impact, but within a few minutes, independently of any seismic data, and, at little cost, it can exclude a number of possible damage scenarios, identify localities where no significant damage has occurred and others where damage cannot be excluded.”

EMSC is complementing their flashsourcing methodology with a novel mobile app that enables smartphone users to quickly report felt earthquakes. Instead of requiring data entry or written surveys, users simply click on cartoonish pictures that best describe the level of intensity they felt when the earthquake (or aftershocks) struck. In addition, EMSC analyzes and manually validates geo-located photos and videos of earthquake effects uploaded to their website (not from social media). The Center’s new app will also make it easier for users to post more pictures more quickly.

What about typical criticisms (by now broken records) that social media is biased and unreliable (and thus useless)? What about the usual theatrics about the digital divide invalidating any kind of crowdsourcing effort given that these will be heavily biased and hardly representative of the overall population? Despite these already well-known shortcomings, and despite the fact that our inchoate digital networks are still evolving into a new nervous system for our planet, the existing nervous system—however imperfect and immature—still adds value. TED and LastQuake demonstrate this empirically beyond any shadow of a doubt. What’s more, the EMSC have found that crowdsourced, user-generated information is highly reliable: “there are very few examples of intentional misuses, errors […].”

My team and I at QCRI are honored to be collaborating with EMSC on integrating our AIDR platform to support their good work. AIDR enables users to automatically detect tweets of interest by using machine learning (artificial intelligence), which is far more effective than searching for keywords. I recently spoke with Rémy Bossu, one of the masterminds behind the EMSC’s LastQuake project, about his team’s plans for AIDR:

“For us AIDR could be a way to detect indirect effects of earthquakes, and notably triggered landslides and fires. Landslides can be the main cause of earthquake losses, like during the 2001 Salvador earthquake. But they are very difficult to anticipate, depending among other parameters on the recent rainfalls. One can prepare a susceptibility map, but whether or not there are landslides, where they have struck and their extent is something we cannot detect using geophysical methods. For us AIDR is a tool which could potentially make a difference on this issue of rapid detection of indirect earthquake effects for better situation awareness.”

In other words, as soon as the EMSC system detects an earthquake, the plan is for that detection to automatically launch an AIDR deployment that identifies tweets related to landslides. This integration is already completed and being piloted. In sum, EMSC is connecting an impressive ecosystem of smart, digital technologies powered by a variety of methodologies. This explains why their system is one of the most impressive & proven examples of next generation humanitarian technologies that I’ve come across in recent months.

Acknowledgements: Many thanks to Rémy Bossu for providing me with all the material and graphics I needed to write up this blog post.

See also:

  • Social Media: Pulse of the Planet? [link]
  • Taking Pulse of Boston Bombings [link]
  • The World at Night Through the Eyes of the Crowd [link]
  • The Geography of Twitter: Mapping the Global Heartbeat [link]

The Filipino Government’s Official Strategy on Crisis Hashtags

As noted here, the Filipino Government has had an official strategy on promoting the use of crisis hashtags since 2012. Recently, the Presidential Communications Development and Strategic Planning Office (PCDSPO) and the Office of the Presidential Spokesperson (PCDSPO-OPS) kindly shared their 7-page strategy (PDF), which I’ve summarized below.

The Filipino government first endorsed the use of #rescuePH and #reliefPH in August 2012, when the country was experiencing storm-enhanced monsoon rains. “These were initiatives from the private sector. Enough people were using the hashtags to make them trend for days. Eventually, we adopted the hashtags in our tweets for disseminating government advisories, and for collecting reports from the ground. We also ventured into creating new hashtags, and into convincing media outlets to use unified hashtags.” For new hashtags, “The convention is the local name of the storm + PH (e.g., #PabloPH, #YolandaPH). In the case of the heavy monsoon, the local name of the monsoon was used, plus the year (i.e., #Habagat2013).” After agreeing on the hashtags, “the OPS issued an official statement to the media and the public to carry these hashtags when tweeting about weather-related reports.”

The Office of the Presidential Spokesperson (OPS) would then monitor the hashtags and “made databases and lists which would be used in aid of deployed government frontline personnel, or published as public information.” For example, the OPS “created databases from reports from #rescuePH, containing the details of those in need of rescue, which we endorsed to the National Disaster Risk Reduction & Management Council, the Coast Guard, and the Department of Transportation and Communications. Needless to say, we assumed that the databases we created using these hashtags would be contaminated by invalid reports, such as spam & other inappropriate messages. We try to filter out these erroneous or malicious reports, before we make our official endorsements to the concerned agencies. In coordination with officers from the Department of Social Welfare and Development, we also monitored the hashtag #reliefPH in order to identify disaster survivors who need food and non-food supplies.”
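
A minimal sketch of such a pipeline might look like the following (purely illustrative; the strategy document does not describe the government’s actual tooling): build the hashtag per the naming convention, then keep only tweets that plausibly contain an actionable report and show no obvious spam markers.

```python
import re

def storm_hashtag(local_name, monsoon_year=None):
    """Apply the convention: storm name + 'PH' (e.g. #PabloPH), or the
    monsoon name plus the year (e.g. #Habagat2013)."""
    return "#{}{}".format(local_name, monsoon_year if monsoon_year else "PH")

SPAM_MARKERS = ("follow me", "bit.ly", "RT @")   # crude, purely illustrative filters

def looks_like_rescue_report(text):
    """Keep tweets that plausibly contain an actionable rescue report:
    some indication of people affected and a location, and no spam markers."""
    lowered = text.lower()
    if any(marker.lower() in lowered for marker in SPAM_MARKERS):
        return False
    has_people = re.search(r"\b\d+\s*(people|persons|families|pax)\b", text, re.I)
    has_location = re.search(r"\b(at|near|in|brgy\.?|barangay)\b", text, re.I)
    return bool(has_people and has_location)

# e.g. reports = [t for t in tweets if "#rescuePH" in t and looks_like_rescue_report(t)]
```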

During Typhoon Haiyan (Yolanda), “the unified hashtag #RescuePH was used to convey lists of people needing help.” This information was then sent to the National Disaster Risk Reduction & Management Council so that these names could be “included in their lists of people/communities to attend to.” This rescue hashtag was also “useful in solving surplus and deficits of goods between relief operations centers.” So the government encouraged social media users to coordinate their #ReliefPH efforts with the Department of Social Welfare and Development’s on-the-ground relief-coordination efforts. The Government also “created an infographic explaining how to use the hashtag #RescuePH.”

Earlier, during the 2012 monsoon rains, the government “retweeted various updates on the rescue and relief operations using the hashtag #SafeNow. The hashtag is used when the user has been rescued or knows someone who has been rescued. This helps those working on rescue to check the list of pending affected persons or families, and update it.”

The government’s strategy document also includes an assessment on their use of unified hashtags during disasters. On the positive side, “These hashtags were successful at the user level in Metro Manila, where Internet use penetration is high. For disasters in the regions, where internet penetration is lower, Twitter was nevertheless useful for inter-sector (media – government – NGOs) coordination and information dissemination.” Another positive was the use of a unified hashtag following the heavy monsoon rains of 2012, “which had damaged national roads, inconvenienced motorists, and posed difficulty for rescue operations. After the floods subsided, the government called on the public to identify and report potholes and cracks on the national highways of Metro Manila by tweeting pictures and details of these to the official Twitter account [...], and by using the hashtag #lubak2normal. The information submitted was entered into a database maintained by the Department of Public Works and Highways for immediate action.”

The hashtag was used “1,007 times within 2 hours after it was launched. The reports were published and locations mapped out, viewable through a page hosted on the PCDSPO website. Considering the feedback, we considered the hashtag a success. We attribute this to two things: one, we used a platform that was convenient for the public to report directly to the government; and two, the hashtag appealed to humor (lubak means potholes or rubble in the vernacular). Furthermore, due to the novelty of it, the media had no qualms helping us spread the word. All the reports we gathered were immediately endorsed [...] for roadwork and repair.” This example points to the potential expanded use of social media and crowdsourcing for rapid damage assessments.

On the negative side, the use of #SafeNow resulted mostly in “tweets promoting #safenow, and very few actually indicating that they have been successfully rescued and/or are safe.” The most pressing challenge, however, was filtering. “In succeeding typhoons/instances of flooding, we began to have a filtering problem, especially when high-profile Twitter users (i.e., pop-culture celebrities) began to promote the hashtags through Twitter. The actual tweets that were calls for rescue were being drowned by retweets from fans, resulting in many nonrescue-related tweets [...].” This explains the need for Twitter monitoring platforms like AIDR, which is free and open source.

Humanitarians in the Sky: Using UAVs for Disaster Response

The following is a presentation that I recently gave at the 2014 Remotely Piloted Aircraft Systems Conference (RPAS 2014) held in Brussels, Belgium. The case studies on the Philippines and Haiti are also featured in my upcoming book on “Digital Humanitarians: How Big Data is Changing the Face of Humanitarian Response.” The book is slated to be published in January/February 2015.

Good afternoon and many thanks to Peter van Blyenburgh for the kind invitation to speak on the role of UAVs in humanitarian contexts beyond the European region. I’m speaking today on behalf of the Humanitarian UAV Network, which brings together seasoned humanitarian professionals with UAV experts to facilitate the use of UAVs in humanitarian settings. I’ll be saying more about the Humanitarian UAV Network (UAViators, pronounced “way-viators”) at the end of my talk.

The view from above is key for humanitarian response. Indeed, satellite imagery has played an important role in relief operations since Hurricane Mitch in 1998. And the Indian Ocean Tsunami was the first disaster to be captured from space while the wave was still propagating. Some 650 images were produced using data from 15 different sensors. During the immediate aftermath of the Tsunami, satellite images were used at headquarters to assess the extent of the emergency. Later, satellite images were used directly in the field, distributed by the Humanitarian Information Center (HIC) and others to support and coordinate relief efforts.

Satellites do present certain limitations, of course. These include cost, the time needed to acquire images, cloud cover, licensing issues and so on. In any event, two years after the Tsunami, an earlier iteration of the UN’s DRC Mission (MONUC) was supported by a European force (EUFOR), which used 4 Belgian UAVs. But I won’t be speaking about this type of UAV. For a variety of reasons, particularly affordability, ease of transport, regulatory concerns, and community engagement, the UAVs used in humanitarian response are smaller systems or micro-UAVs that weigh just a few kilograms, such as small fixed-wing models.

The World Food Program’s UAVs were designed and built at the University of Torino “way back” in 2007. But they were grounded until this year due to a lack of legislation in Italy.

In June 2014, the UN’s Office for the Coordination of Humanitarian Affairs (OCHA) purchased a small quadcopter for use in humanitarian response and advocacy. Incidentally, OCHA is on the Advisory Board of the Humanitarian UAV Network, or UAViators. 

Now, there are many use cases for the operation of UAVs in humanitarian settings. All of you here at RPAS 2014 are already very familiar with these applications. So let me jump directly to real-world case studies from the Philippines and Haiti.

Typhoon Haiyan, or Yolanda as it was known locally, was the most powerful typhoon in recorded history to make landfall. The impact was absolutely devastating. I joined UN/OCHA in the Philippines following the Typhoon and was struck by how many UAV projects were being launched. What follows are just a few of those projects.

Danoffice IT, a company based in Lausanne, Switzerland, used the Sky-Watch Huginn X1 Quadcopter to support the humanitarian response in Tacloban. The rotary-wing UAV was used to identify where NGOs could set up camp. Later on, the UAV was used to support a range of additional tasks such as identifying which roads were passable for transportation/logistics. The quadcopter was also flown up the coast to assess the damage from the storm surge and flooding and to determine which villages had been most affected. This served to speed up the relief efforts and made the response more targeted vis-a-vis the provision of resources and assistance. Danoffice IT is also on the Board of the Humanitarian UAV Network (UAViators).

A second UAV project was carried out by a local UAV start-up called CorePhil DSI. The team used an eBee to capture aerial imagery of downtown Tacloban, one of the areas hardest-hit by Typhoon Yolanda. They captured 22 Gigabytes of imagery and shared this with the Humanitarian OpenStreetMap Team (HOT), who are also on the Board of UAViators. HOT subsequently crowdsourced the tracing of this imagery (and satellite imagery) to create the most detailed and up-to-date maps of the area. These maps were shared with and used by multiple humanitarian organizations as well as the Filipino Government.

In a third project, the Swiss humanitarian organization Medair partnered with Drone Adventures to create a detailed set of 2D maps and 3D terrain models of the disaster-affected areas in which Medair works. These images were used to inform the humanitarian organization’s recovery and reconstruction programs. Specifically, Medair used the maps and models of Tacloban and Leyte to assess where the greatest need was and what level of assistance should be given to affected families as they continued to recover. Having these accurate aerial images of the affected areas allowed the Swiss organization to address the needs of individual households and—equally importantly—to advocate on their behalf when necessary.

Drone Adventures also flew their fixed-wing UAVs (eBees) over Dulag, just north of Leyte, where more than 80% of homes and croplands were destroyed during the Typhoon. Medair is providing both materials and expertise to help build new shelters in Dulag, so the aerial imagery is proving invaluable for identifying just how much material is needed and where. The captured imagery is also enabling community members themselves to better understand both where the greatest needs are and also what the potential solutions might be.

The partners are also committed to Open Data. The imagery captured was made available online and for free, enabling community leaders and humanitarian organizations to use the information to coordinate other reconstruction efforts. In addition, Drone Adventures and Medair presented locally-printed maps to community leaders within 24 hours of flying the UAVs. Some of these maps were printed on rollable, waterproof banners, which makes them more durable when used in the field.

In yet another UAV project, the local Filipino start-up SkyEye Inc partnered with the University of the Philippines in Manila to develop expendable UAVs or xUAVs. The purpose of this initiative is to empower grassroots communities to deploy their own low-cost xUAVs and thus support locally-deployed response efforts. The team has trained 4 out of 5 teams across the Philippines to locally deploy UAVs in preparation for the next Typhoon season. In so doing, they are also transferring math, science and engineering skills to local communities. It is worth noting that community perceptions of UAVs in the Philippines and elsewhere have always been very positive. Indeed, local communities perceive small UAVs as toys more than anything else.

SkyEye also worked with a group from the University of Hawaii to create disaster risk reduction models of flood-prone areas.

Moving to Haiti, the International Organization for Migration (IOM) has partnered with Drone Adventures and others to produce accurate topographical and 3D maps of disaster-prone areas in Haiti. These aerial images have been used to inform disaster risk reduction and community resilience programs. The UAVs have also enabled IOM to assess destroyed houses and other types of damage caused by floods and droughts. In addition, UAVs have been used to monitor IDP camps, helping aid workers identify when shelters are empty and thus ready to be closed. Furthermore, the high-resolution aerial imagery has been used to support a census survey of public buildings, shelters, hospitals as well as schools.

After Hurricane Sandy, for example, aerial imagery enabled IOM to very rapidly assess how many houses had collapsed near Rivière Grise and how many people were affected by the flooding. The aerial imagery was also used to identify areas of standing water where mosquitos and epidemics could easily thrive. Throughout their work with UAVs, IOM has stressed that regular community engagement has been critical for the successful use of UAVs. Indeed, informing local communities of the aerial mapping projects and explaining how the collected information is to be used is imperative. Local capacity building is also paramount, which is why Drone Adventures has trained a local team of Haitians to locally deploy and maintain their own eBee UAV.

Among the information products produced by IOM and Drone Adventures is a 3D model that was used to model flood risk in the area and to inform subsequent disaster risk reduction projects.

Several colleagues of mine have already noted that aerial imagery presents a Big Data challenge. This means that humanitarian organizations and others will need to use advanced computing (human computing and machine computing) to make sense of Big (Aerial) Data.

My colleagues at the European Commission’s Joint Research Center (JRC) are already beginning to apply advanced computing to automatically analyze aerial imagery. In one example from Haiti, the JRC deployed a machine learning classifier to automatically identify rubble left over from the massive earthquake that struck Port-au-Prince in 2010. Their classifier had an impressive accuracy of 92%, “suggesting that the method in its simplest form is sufficiently reliable for rapid damage assessment.”

Human computing (or crowdsourcing) can also be used to make sense of Big Data. My team and I at QCRI have partnered with the UN (OCHA) to create the MicroMappers platform, which is a free and open-source tool to make sense of large datasets created during disasters, like aerial data. We have access to thousands of digital volunteers who can rapidly tag and trace aerial imagery; the resulting analysis of this tagging/tracing can be used to increase the situational awareness of humanitarian organizations in the field.

Digital volunteers can trace features of interest such as shelters without roofs. Our plan is to subsequently use these traced features as training data to develop machine learning classifiers that can automatically identify these features in future aerial images. We’re also exploring a second use case, i.e., the rapid transcription of imagery, which can then be automatically geo-tagged and added to a crisis map.

The increasing use of UAVs during humanitarian disasters is why UAViators, the Humanitarian UAV Network, was launched. Recall the relief operations in response to Typhoon Yolanda; an unprecedented number of UAV projects were in operation. But most operators didn’t know about each other, so they were not coordinating flights let alone sharing imagery with local communities. Since the launch of UAViators, we’ve developed the first ever Code of Conduct for the use of UAVs in humanitarian settings, which includes guidelines on data protection and privacy. We have also drafted an Operational Check-List to educate those who are new to humanitarian UAVs. We are now in the process of carrying out a comprehensive evaluation of UAV models along with cameras, sensors, payload mechanisms and image processing software. The purpose of this evaluation is to identify which are the best fit for use by humanitarians in the field. Since the UN and others are looking for training and certification programs, we are actively seeking partners to provide these services.

The above goals are all for the medium to long term. More immediately, UAViators is working to educate humanitarian organizations on both the opportunities and challenges of using UAVs in humanitarian settings. UAViators is also working to facilitate the coordination of UAV flights during major disasters, enabling operators to share their flight plans and contact details with each other via the UAViators website. We are also planning to set up an SMS service to enable direct communication between operators and others in the field during UAV flights. Lastly, we are developing an online map for operators to easily share the imagery/videos they are collecting during relief efforts.

Data collection (imagery capture) is certainly not the only use case for UAVs in humanitarian contexts. The transportation of payloads may play an increasingly important role in the future. Indeed, my colleagues at UNICEF are actively exploring this with a number of partners in Africa.

Other sensors present additional opportunities for the use of UAVs in relief efforts. Sensors can be used to assess the impact of disasters on communication infrastructure, such as cell phone towers, for example. Groups are also looking into the use of UAVs to provide temporary communication infrastructure (“aerial cell phone towers”) following major disasters.

The need for Sense and Avoid systems (a.k.a. Detection & Avoid solutions) has been highlighted in almost every other presentation given at RPAS 2014. We really need this new technology sooner rather than later (and that’s a major understatement). At the same time, it is important to emphasize that the main added value of UAVs in humanitarian settings is to capture imagery of areas that are overlooked or ignored by mainstream humanitarian relief operations; that is, of areas that are partially or completely disconnected logistically. By definition, disaster-affected communities in these areas are likely to be more vulnerable than others in urban areas. In addition, the airspaces in these disconnected regions are not complex airspaces and thus present fewer challenges around safety and coordination, for example.

UAVs were ready to go following the mudslides in Oso, Washington back in March of this year. The UAVs were going to be used to look for survivors but the birds were not allowed to fly. The decision to ground UAVs and bar them from supporting relief and rescue efforts will become increasingly untenable when lives are at stake. I genuinely applaud the principle of proportionality applied by the EU and respective RPAS Associations vis-a-vis risks and regulations, but there is one very important variable missing in the proportionality equation: social benefit. Indeed, the cost benefit calculus of UAV risk & regulation in the context of humanitarian use must include the expected benefit of lives saved and suffering alleviated. Let me repeat this to make sure I’m crystal clear: risks must be weighed against potential lives saved.

At the end of the day, the humanitarian context is different from precision agriculture or other commercial applications of UAVs such as film making. The latter have no relation to the Humanitarian Imperative. Having over-regulation stand in the way of humanitarian principles will simply become untenable. At the same time, the principle of Do No Harm must absolutely be upheld, which is why it features prominently in the Humanitarian UAV Network’s Code of Conduct. In sum, like the Do No Harm principle, the cost benefit analysis of proportionality must include potential or expected benefits as part of the calculus.

To conclude, a new (forthcoming) policy brief by the UN (OCHA) publicly calls on humanitarian organizations to support initiatives like the Humanitarian UAV Network. This is an important, public endorsement of our work thus far. But we also need support from non-humanitarian organizations like those you represent in this room. For example, we need clarity on existing legislation. Our partners like the UN need to have access to the latest laws by country to inform their use of UAVs following major disasters. We really need your help on this; and we also need your help in identifying which UAVs and related technologies are likely to be a good fit for humanitarians in the field. So if you have some ideas, then please find me during the break, I’d really like to speak with you, thank you!

See Also:

  • Crisis Map of UAV/Aerial Videos for Disaster Response [link]
  • How UAVs are Making a Difference in Disaster Response [link]
  • Humanitarians Using UAVs for Post Disaster Recovery [link]
  • Grassroots UAVs for Disaster Response [link]
  • Using UAVs for Search & Rescue [link]
  • Debrief: UAV/Drone Search & Rescue Challenge [link]
  • Crowdsourcing Analysis of UAV Imagery for Search/Rescue [link]
  • Check-List for Flying UAVs in Humanitarian Settings [link]

Picture Credits:

  • Danoffice IT, Drone Adventures, SkyEye, JRC

Automatically Analyzing UAV/Aerial Imagery from Haiti

My colleague Martino Pesaresi from the European Commission’s Joint Research Center (JRC) recently shared one of his co-authored studies with me on the use of advanced computing to analyze UAV (aerial) imagery. Given the rather technical nature of the title, “Rubble Detection from VHR Aerial Imagery Data Using Differential Morphological Profiles,” it is unlikely that many of my humanitarian colleagues have read the study. But the results have important implications for the development of next generation humanitarian technologies that focus on very high resolution (VHR) aerial imagery captured by UAVs.

As Martino and his co-authors note, “The presence of rubble in urban areas can be used as an indicator of building quality, poverty level, commercial activity, and others. In the case of armed conflict or natural disasters, rubble is seen as the trace of the event on the affected area. The amount of rubble and its density are two important attributes for measuring the severity of the event, in contribution to the overall crisis assessment. In the post-disaster time scale, accurate mapping of rubble in relation to the building type and location is of critical importance in allocating response teams and relief resources immediately after event. In the longer run, this information is used for post-disaster needs assessment, recovery planning and other relief activities on the affected region.”

Martino and team therefore developed an “automated method for the rapid detection and quantification of rubble from very high resolution aerial imagery of urban regions.” The first step in this model is to transfer the information depicted in images to “some hierarchical representation structure for indexing and fast component retrieval.” This simply means that aerial images need to be converted into a format that will make them “readable” by a computer. One way to do this is by converting said images into Max-Trees like the one below (which I find rather poetic).

[Image: a Max-Tree representation of an aerial image]

The conversion of aerial images into Max-Trees enables Martino and company to analyze and compare as many images as they’d like, to identify which combinations of nodes and branches represent rubble. These patterns enable the team to subsequently use advanced statistical techniques to identify the rest of the rubble in the remaining aerial images, as shown below. The heat maps on the right depict the result of the analysis, with the red shapes denoting areas that have a high probability of being rubble.

[Image: rubble detection heat maps]
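
For readers who want to experiment with this kind of analysis, the general recipe can be approximated with off-the-shelf tools. The sketch below uses morphological profiles from scikit-image and a random forest in place of the paper’s exact features and classifier, so treat it as a rough analogy to the JRC method rather than a reproduction of it.

```python
import numpy as np
from skimage.morphology import opening, closing, disk
from sklearn.ensemble import RandomForestClassifier

def morphological_profile_features(gray_image, radii=(1, 2, 4, 8, 16)):
    """Per-pixel features from a differential morphological profile:
    differences between openings/closings at successive scales. Rubble
    tends to respond at small scales; intact roofs at larger ones."""
    img = gray_image.astype(np.float32)
    feats = [img]
    prev_open, prev_close = img, img
    for r in radii:
        op = opening(gray_image, disk(r)).astype(np.float32)
        cl = closing(gray_image, disk(r)).astype(np.float32)
        feats.append(prev_open - op)   # differential opening profile
        feats.append(cl - prev_close)  # differential closing profile
        prev_open, prev_close = op, cl
    return np.dstack(feats).reshape(-1, len(feats))

def rubble_probability_map(train_image, rubble_mask, new_image):
    """Train on pixels hand-labeled as rubble (1) / not rubble (0),
    then return a probability-of-rubble heat map for a new image."""
    clf = RandomForestClassifier(n_estimators=100)
    clf.fit(morphological_profile_features(train_image), rubble_mask.reshape(-1))
    proba = clf.predict_proba(morphological_profile_features(new_image))[:, 1]
    return proba.reshape(new_image.shape)
```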

The detection success rate of Martino et al.’s automated rubble detector was about 92%, “suggesting that the method in its simplest form is sufficiently reliable for rapid damage assessment.” The full study is available here and also appears in my forthcoming book “Digital Humanitarians: How Big Data is Changing the Face of Humanitarian Response.”

See Also:

  • Welcome to the Humanitarian UAV Network [link]
  • How UAVs are Making a Difference in Disaster Response [link]
  • Humanitarians Using UAVs for Post Disaster Recovery [link]
  • Grassroots UAVs for Disaster Response [link]
  • Using UAVs for Search & Rescue [link]
  • Debrief: UAV/Drone Search & Rescue Challenge [link]
  • Crowdsourcing Analysis of UAV Imagery for Search/Rescue [link]
  • Check-List for Flying UAVs in Humanitarian Settings [link]

Zoomanitarians: Using Citizen Science and Next Generation Satellites to Accelerate Disaster Damage Assessments

Zoomanitarians has been in the works for well over a year, so we’re excited to be going fully public for the first time. Zoomanitarians is a joint initiative between Zooniverse (Brooke Simmons), Planet Labs (Alex Bakir) and myself at QCRI. The purpose of Zoomanitarians is to accelerate disaster damage assessments by leveraging Planet Labs’ unique constellation of 28 satellites and Zooniverse’s highly scalable microtasking platform. As I noted in this earlier post, digital volunteers from Zooniverse tagged well over 2 million satellite images (of Mars, below) in just 48 hours. So why not invite Zooniverse volunteers to tag millions of images taken by Planet Labs following major disasters (on Earth) to help humanitarians accelerate their damage assessments?

[Image: Zooniverse’s Planet Four tagging interface]

That was the question I posed to Brooke and Alex in early 2013. “Why not indeed?” was our collective answer. So we reached out to several knowledgeable colleagues of mine including Kate Chapman from Humanitarian OpenStreetMap and Lars Bromley from UNOSAT for their feedback and guidance on the idea.

We’ll be able to launch our first pilot project later this year thanks to Kate, who kindly provided us with very high-resolution UAV/aerial imagery of downtown Tacloban in the Philippines. Why do we want this imagery when the plan is to use Planet Labs imagery? Because Planet Labs imagery is currently available at 3-5 meter resolution, we’ll be “degrading” the resolution of the aerial imagery to determine just what level and type of damage can be captured at various resolutions, as compared to the imagery from Planet Labs. The pilot project will therefore serve to (1) customize & test the Zoomanitarians microtasking platform and (2) determine what level of detail can be captured at various resolutions.
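
Degrading imagery in this way is straightforward to script. Here’s a sketch using the Pillow imaging library (my own illustration of the approach; the file names and ground-sample distances are placeholder values):

```python
from PIL import Image

def degrade_resolution(path, native_gsd_m, target_gsd_m, out_path):
    """Resample aerial imagery from its native ground-sample distance (GSD,
    meters per pixel) to a coarser target GSD, e.g. from ~0.05 m UAV imagery
    down to the 3-5 m range of current Planet Labs imagery."""
    img = Image.open(path)
    scale = native_gsd_m / target_gsd_m          # e.g. 0.05 / 3.0
    small = img.resize((max(1, int(img.width * scale)),
                        max(1, int(img.height * scale))), Image.LANCZOS)
    # Optionally resize back up so analysts compare like-for-like pixel grids.
    small.resize(img.size, Image.NEAREST).save(out_path)

# e.g. degrade_resolution("tacloban_uav.tif", 0.05, 3.0, "tacloban_3m.tif")
```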

We’ll then spend the remainder of the year improving the platform based on the results of the pilot project, during which time I will continue to seek input from humanitarian colleagues. Zooniverse’s microtasking platform has already been stress-tested extensively over the years, which is one reason why I approached Zooniverse last year. The other reason is that they have over 1 million digital volunteers on their listserv. Couple this with Planet Labs’ unique constellation of 28 satellites, and you’ve got the potential for near real-time satellite imagery analysis for disaster response. Our plan is to produce “heat maps” based on the results and to share shape files as well for overlay on other maps.

It took imagery analysts well over 48 hours to acquire and analyze satellite imagery following Typhoon Yolanda. While Planet Labs imagery is not (yet) available at high-resolutions, our hope is that Zoomanitarians will be able to acquire and analyze relevant imagery within 12-24 hours of a request. Several colleagues have confirmed to me that the results of this rapid analysis will also prove invaluable for subsequent, higher-resolution satellite imagery acquisition and analysis. On a related note, I hope that our rapid satellite-based damage assessments will also serve as a triangulation mechanism (ground-truthing) for the rapid social-media-driven damage assessments carried out using the Artificial Intelligence for Disaster Response (AIDR) platform and MicroMappers.

While much work certainly remains, and while Zoomanitarians is still in the early phases of research and development, I’m nevertheless excited and optimistic about the potential impact—as are my colleagues Brooke and Alex. We’ll be announcing the date of the pilot later this summer, so stay tuned for updates!

Got TweetCred? Use it To Automatically Identify Credible Tweets (Updated)

Update: Users have created an astounding one million+ tags over the past few weeks, which will help increase the accuracy of TweetCred in coming months as we use these tags to further train our machine learning classifiers. We will be releasing our Firefox plugin in the next few days. In the meantime, we have just released our paper on TweetCred which describes our methodology & classifiers in more detail.

What if there were a way to automatically identify credible tweets during major events like disasters? Sounds rather far-fetched, right? Think again.

The new field of Digital Information Forensics is increasingly making use of Big Data analytics and techniques from artificial intelligence like machine learning to automatically verify social media. This is how my QCRI colleague ChaTo et al. predicted both credible and non-credible tweets generated after the Chile Earthquake (with an accuracy of 86%). Meanwhile, my colleagues Aditi et al. from IIIT Delhi also used machine learning to automatically rank the credibility of some 35 million tweets generated during a dozen major international events such as the UK Riots and the Libya Crisis. So we teamed up with Aditi et al. to turn those academic findings into TweetCred, a free app that identifies credible tweets automatically.

[Image: TweetCred score displayed on a CNN tweet]
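
Under the hood, these systems are supervised classifiers trained on features of the tweet and its author. A toy version might look like this (my own miniature feature set and model choice; the published papers use dozens of content, user and propagation features):

```python
from sklearn.ensemble import GradientBoostingClassifier

def tweet_features(tweet):
    """Tiny illustrative feature vector for credibility classification."""
    text, user = tweet["text"], tweet["user"]
    return [
        len(text),
        text.count("!") + text.count("?"),
        1 if "http" in text else 0,             # links often accompany evidence
        1 if tweet.get("has_media") else 0,     # attached pictures or videos
        user["followers_count"],
        user["friends_count"],
        1 if user.get("verified") else 0,
        tweet.get("retweet_count", 0),
    ]

# X = [tweet_features(t) for t in labeled_tweets]
# y = [t["credible"] for t in labeled_tweets]    # human-assigned labels
# model = GradientBoostingClassifier().fit(X, y)
# credibility = model.predict_proba([tweet_features(new_tweet)])[0, 1]
```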

We’ve just launched the very first version of TweetCred—key word being first. This means that our new app is still experimental. On the plus side, since TweetCred is powered by machine learning, it will become increasingly accurate over time as more users make use of the app and “teach” it the difference between credible and non-credible tweets. Teaching TweetCred is as simple as a click of the mouse. Take the tweet below, for example.

[Image: TweetCred score on an American Red Cross tweet]

TweetCred scores each tweet based on a 7-point system: the higher the number of blue dots, the more credible the content of the tweet is likely to be. Note that a TweetCred score also takes into account any pictures or videos included in a tweet, along with the reputation and popularity of the Twitter user. Naturally, TweetCred won’t always get it right, which is where the teaching and machine learning come in. The above tweet from the American Red Cross is more credible than three dots would suggest. So you simply hover your mouse over the blue dots and click on the “thumbs down” icon to tell TweetCred it got that tweet wrong. The app will then ask you to tag the correct level of credibility for that tweet.

[Image: tagging the correct credibility level in TweetCred]

That’s all there is to it. As noted above, this is just the first version of TweetCred. The more all of us use (and teach) the app, the more accurate it will be. So please try it out and spread the word. You can download the Chrome Extension for TweetCred here. If you don’t use Chrome, you can still use the browser version here although the latter has less functionality. We very much welcome any feedback you may have, so simply post feedback in the comments section below. Keep in mind that TweetCred is specifically designed to rate the credibility of disaster/crisis related tweets rather than any random topic on Twitter.

As I note in my book Digital Humanitarians (forthcoming), empirical studies have shown that we’re less likely to spread rumors on Twitter if false tweets are publicly identified by Twitter users as being non-credible. In fact, these studies show that such public exposure increases the number of Twitter users who then seek to stop the spread of said rumor-related tweets by 150%. But it makes a big difference whether one sees the rumors first or the tweets dismissing said rumors first. So my hope is that TweetCred will help accelerate Twitter’s self-correcting behavior by automatically identifying credible tweets while countering rumor-related tweets in real-time.

This project is a joint collaboration between IIIT and QCRI. Big thanks to Aditi and team for their heavy lifting on the coding of TweetCred. If the experiments go well, my QCRI colleagues and I may integrate TweetCred within our AIDR (Artificial Intelligence for Disaster Response) and Verily platforms.

See also:

  • New Insights on How to Verify Social Media [link]
  • Predicting the Credibility of Disaster Tweets Automatically [link]
  • Auto-Ranking Credibility of Tweets During Major Events [link]
  • Auto-Identifying Fake Images on Twitter During Disasters [link]
  • Truth in the Age of Social Media: A Big Data Challenge [link]
  • Analyzing Fake Content on Twitter During Boston Bombings [link]
  • How to Verify Crowdsourced Information from Social Media [link]
  • Crowdsourcing Critical Thinking to Verify Social Media [link]
  • Tweets, Crises and Behavioral Psychology: On Credibility and Information Sharing [link]

Using AIDR to Collect and Analyze Tweets from Chile Earthquake

Wish you had a better way to make sense of Twitter during disasters than scrolling through an endless stream of search results?

Type a keyword like #ChileEarthquake into Twitter’s search box and you’ll see more tweets than you can possibly read in a day, let alone keep up with for more than a few minutes. Wish there were an easy, free and open-source solution? Well, you’ve come to the right place. My team and I at QCRI are developing the Artificial Intelligence for Disaster Response (AIDR) platform to do just this. Here’s how it works:

First you login to the AIDR platform using your own Twitter handle (click images below to enlarge):

[Image: AIDR login screen]

You’ll then see your collection of tweets (if you already have any). In my case, you’ll see I have three. The first is a collection of English language tweets related to the Chile Earthquake. The second is a collection of Spanish tweets. The third is a collection of more than 3,000,000 tweets related to the missing Malaysia Airlines plane. A preliminary analysis of these tweets is available here.

[Image: AIDR collections list]

Let’s look more closely at my Chile Earthquake 2014 collection (see below, click to enlarge). I’ve collected about a quarter of a million tweets in the past 30 hours or so. The label “Downloaded tweets (since last re-start)” simply refers to the number of tweets I’ve collected since adding a new keyword or hashtag to my collection. I started the collection yesterday at 5:39am my time (yes, I’m an early bird). Under “Keywords” you’ll see all the hashtags and keywords I’ve used to search for tweets related to the earthquake in Chile. I’ve also specified the geographic region I want to collect tweets from. Don’t worry, you don’t actually have to enter geographic coordinates when you set up your own collection; you simply highlight (on a map) the area you’re interested in and AIDR does the rest.

[Image: AIDR Chile Earthquake 2014 collection settings]
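
Behind the scenes, a collection like this maps onto Twitter’s streaming API. A bare-bones equivalent using the tweepy library (as of its 3.x releases; this is not AIDR’s actual code, and the credentials and keywords are placeholders) might look like the sketch below. Note that Twitter’s filter endpoint treats keywords and locations as an OR, so post-filtering is still needed:

```python
import json
from tweepy import OAuthHandler, Stream
from tweepy.streaming import StreamListener

class CollectionListener(StreamListener):
    """Append each matching tweet to a JSON-lines file."""
    def __init__(self, outfile):
        StreamListener.__init__(self)
        self.out = open(outfile, "a")

    def on_status(self, status):
        if getattr(status, "lang", None) == "en":   # mirror AIDR's language filter
            self.out.write(json.dumps(status._json) + "\n")

auth = OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")   # your Twitter app credentials
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")

stream = Stream(auth, CollectionListener("chile_earthquake_2014.jsonl"))
# Twitter ORs `track` and `locations`: tweets matching either are returned.
stream.filter(track=["#ChileEarthquake", "#TerremotoChile", "terremoto"],
              locations=[-76.0, -45.0, -66.0, -17.0])    # rough box around Chile
```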

You’ll also note in the above screenshot that I’ve selected to only collect tweets in English, but you can collect all language tweets if you’d like or just a select few. Finally, the Collaborators section simply lists the colleagues I’ve added to my collection. This gives them the ability to add new keywords/hashtags and to download the tweets collected as shown below (click to enlarge). More specifically, collaborators can download the most recent 100,000 tweets (and also share the link with others). The 100K tweet limit is based on Twitter’s Terms of Service (ToS). If collaborators want all the tweets, Twitter’s ToS allows for sharing the TweetIDs for an unlimited number of tweets.

[Image: AIDR CSV download]
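
Sharing datasets this way is simple to script: you export one tweet ID per row, and recipients “rehydrate” the full tweets through Twitter’s API. A minimal sketch:

```python
import csv, json

def export_tweet_ids(jsonl_path, csv_path):
    """Write one tweet ID per row. Unlike full tweet content (capped at
    100K per download), IDs can be shared without limit under Twitter's ToS."""
    with open(jsonl_path) as src, open(csv_path, "w") as dst:
        writer = csv.writer(dst)
        writer.writerow(["tweet_id"])
        for line in src:
            writer.writerow([json.loads(line)["id_str"]])
```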

So that’s the AIDR Collector. We also have the AIDR Classifier, which helps you make sense of the tweets you’re collecting (in real-time). That is, your collection of tweets doesn’t stop; it continues growing, and as it does, you can make sense of new tweets as they come in. With the Classifier, you simply teach AIDR to classify tweets into whatever topics you’re interested in, like “Infrastructure Damage”, for example. To get started with the AIDR Classifier, simply return to the “Details” tab of our Chile collection. You’ll note the “Go To Classifier” button on the far right:

[Image: AIDR “Go To Classifier” button]

Clicking on that button allows you to create a Classifier, say on the topic of disaster damage in general. So you simply create a name for your Classifier, in this case “Disaster Damage,” and then create Tags to capture more details with respect to damage-related tweets. For example, one Tag might be, say, “Damage to Transportation Infrastructure.” Another could be “Building Damage.” In any event, once you’ve created your Classifier and corresponding tags, you click Submit and find your way to this page (click to enlarge):

[Image: AIDR Classifier page with public link]

You’ll notice the public link for volunteers. That’s basically the interface you’ll use to teach AIDR. If you want to teach AIDR by yourself, you can certainly do so. You also have the option of “crowdsourcing the teaching” of AIDR. Clicking on the link will take you to the page below.

[Image: AIDR classifier list linking to MicroMappers]

So, I called my Classifier “Message Contents,” which is not particularly insightful; I should have labeled it something like “Humanitarian Information Needs,” but bear with me and let’s click on that Classifier. This will take you to the following Clicker on MicroMappers:

[Image: MicroMappers Clicker]

Now this is not the most awe-inspiring interface you’ve ever seen (at least I hope not), the reason being that this is simply our very first version. We’ll be providing different “skins,” like the official MicroMappers skin (below), as well as a skin that allows you to upload your own logo, for example. In the meantime, note that AIDR shows every tweet to at least three different volunteers. And only if all three volunteers agree on how to classify a given tweet does AIDR take that into consideration when learning. In other words, AIDR wants to ensure that humans are really sure about how to classify a tweet before it decides to learn from that lesson. Incidentally, the MicroMappers smartphone app for the iPhone and Android will be available in the next few weeks. But I digress.

[Image: MicroMappers Yolanda TweetClicker]
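
The agreement rule itself is easy to express in code. Here is a sketch of the idea (placeholder logic, not AIDR’s implementation):

```python
from collections import Counter

def consensus_label(volunteer_labels, required=3):
    """Return a training label only when at least `required` volunteers have
    seen the tweet and all of them chose the same tag; otherwise the tweet
    contributes nothing to the machine learning step."""
    if len(volunteer_labels) < required:
        return None                          # wait for more volunteers
    counts = Counter(volunteer_labels)
    label, n = counts.most_common(1)[0]
    return label if n == len(volunteer_labels) else None

# consensus_label(["Building Damage"] * 3)                      -> "Building Damage"
# consensus_label(["Building Damage", "Not Relevant",
#                  "Building Damage"])                          -> None
```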

As you and/or your volunteers classify tweets based on the Tags you created, AIDR starts to learn—hence the AI (Artificial Intelligence) in AIDR. AIDR begins to recognize that all the tweets you classified as “Infrastructure Damage” are indeed similar. Once you’ve tagged enough tweets, AIDR will decide that it’s time to leave the nest and fly on its own. In other words, it will start to auto-classify incoming tweets in real-time. (At present, AIDR can auto-classify some 30,000 tweets per minute; compare this to the peak rate of 16,000 tweets per minute observed during Hurricane Sandy.)

Of course, AIDR’s first solo “flights” won’t always go smoothly. But not to worry, AIDR will let you know when it needs a little help. Every tweet that AIDR auto-tags comes with a confidence level. That is, AIDR will let you know: “I am 80% sure that I correctly classified this tweet.” If AIDR has trouble with a tweet, i.e., if its confidence level is 65% or below, then it will send the tweet to you (and/or your volunteers) so it can learn from how you classify that particular tweet. In other words, the more tweets you classify, the more AIDR learns, and the higher AIDR’s confidence levels get. Fun, huh?
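
This human-in-the-loop pattern is known as active learning, and the routing rule is simple to sketch (using scikit-learn conventions and the illustrative 65% threshold mentioned above, not AIDR’s internals):

```python
def route_tweet(classifier, tweet_features, auto_threshold=0.65):
    """Auto-tag confident predictions; send low-confidence tweets back to
    human volunteers, whose tags become new training data."""
    probs = classifier.predict_proba([tweet_features])[0]
    confidence = probs.max()
    label = classifier.classes_[probs.argmax()]
    if confidence > auto_threshold:
        return ("auto", label, confidence)    # published with its confidence score
    return ("ask_humans", None, confidence)   # queued for volunteer tagging
```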

To view the results of the machine tagging, simply click on the View/Download tab, as shown below (click to enlarge). The page shows you the latest tweets that have been auto-tagged along with the Tag label and the confidence score. (Yes, this too is the first version of that interface, we’ll make it more user-friendly in the future, not to worry). In any event, you can download the auto-tagged tweets in a CSV file and also share the download link with your colleagues for analysis and so on. At some point in the future, we hope to provide a simple data visualization output page so that you can easily see interesting data trends.

[Image: AIDR View/Download results page]

So that’s basically all there is to it. If you want to learn more about how it all works, you might fancy reading this research paper (PDF). In the meantime, I’ll simply add that you can re-use your Classifiers. If (when?) another earthquake strikes Chile, you won’t have to start from scratch. You can auto-tag incoming tweets immediately with the Classifier you already have. Plus, you’ll be able to share your classifiers with your colleagues and partner organizations if you like. In other words, we’re envisaging an “App Store” of Classifiers based on different hazards and different countries. The more we re-use our Classifiers, the more accurate they will become. Everybody wins.

And voila, that is AIDR (at least our first version). If you’d like to test the platform and/or want the tweets from the Chile Earthquake, simply get in touch!

bio

Note:

  • We’re adapting AIDR so that it can also classify text messages (SMS).
  • AIDR Classifiers are language specific. So if you speak Spanish, you can create a classifier to tag all Spanish language tweets/SMS that refer to disaster damage, for example. In other words, AIDR does not only speak English : )