Category Archives: Crowdsourcing

From Russia with Love: A Match.com for Disaster Response

I’ve been advocating for the development of a “Match.com” for disaster response since early 2010. Such a platform would serve to quickly match hyperlocal needs with relevant resources available at the local and national level, thus facilitating and accelerating self-organization following major disasters. Why advocate for a platform modeled after an online dating website? Because self-organized mutual-aid is an important driver of community resilience.

Russian Bell

Obviously, self-organization is not dependent on digital technology. The word Rynda, for example, is an old Russian term for a “village bell,” which local communities used to self-organize during emergencies. Interestingly, Rynda became a popular meme on social media during the massive wildfires of 2010. As my colleague Gregory Asmolov notes in his brilliant new study, a Russian blogger at the time of the fires “posted an emotional open letter to Prime Minister Putin, describing the lack of action by local authorities and emergency services.” In effect, the blogger demanded a “return to an old tradition of self-organization in local communities,” subsequently exclaiming “bring back the Rynda!” This demand grew into a popular meme symbolizing the catastrophic failure of the formal system’s response to the fires.

At the time, my colleagues Gregory, Alexey Sidorenko & Glafira Parinos launched the Help Map above in an effort to facilitate self-organization and mutual aid. But as Gregory notes in his new study, “The more people were willing to help, the more difficult it was to coordinate the assistance and to match resources with needs.” Moreover, the Help Map continued to receive reports on needs and offers of help after the fires had subsided; reports of flooding, for example, soon found their way to the map. Gregory, Alexey, Glafira and team thus launched “Virtual Rynda: The Help Atlas” to facilitate self-help in response to a variety of situations beyond sudden-onset crises.

“We believed that in order to develop the capacity and resilience to respond to crisis situations we would have to develop the potential for mutual aid in everyday life. This would rely on an idea that emergency and everyday-life situations were interrelated. While people’s motivation to help one another is lower during non-emergency situations, if you facilitate mutual aid in everyday life and allow people to acquire skills in using Internet-based technologies to help one another or in asking for assistance, this will help to create an improved capacity to fulfill the potential of mutual aid the next time a disaster happens. [...] The idea was that ICTs could expand the range within which the tolling of the emergency bell could be heard. Everyone could ‘ring’ the ‘Virtual Rynda’ when they needed help, and communication networks would magnify the sound until it reached those who could come and help.”

In order to accelerate and scale the matching of needs & resources, Gregory and team (pictured below) sought to develop a matchmaking algorithm. Rynda would ask users to specify what the need was, where (geographically) the need was located and when (time-wise) the need was requested. “On the basis of this data, computer-based algorithms & human moderators could match offers with requests and optimize the process of resource allocation.” Rynda also included personal profiles, enabling volunteers “to develop an online reputation and increase trust between those needing help and those who could offer assistance. Every volunteer profile included not only personal information, but also a history of the individual’s previous activities within the platform.” To this end, in addition to “Help Requests” & “Help Offers,” Rynda also included an entry for “Help Provided” to close the feedback loop.
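To make the what/where/when matching concrete, here is a minimal Python sketch of how such a matchmaking step might work. The field names (category, location, needed_by, available_until) and the distance threshold are illustrative assumptions of mine, not Rynda’s actual schema or algorithm.

```python
from math import radians, sin, cos, asin, sqrt

def distance_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (a[0], a[1], b[0], b[1]))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def match_request(request, offers, max_km=50):
    """Return offers of the right category, close enough geographically,
    and still available when the help is needed, nearest first."""
    candidates = [o for o in offers
                  if o["category"] == request["category"]
                  and distance_km(o["location"], request["location"]) <= max_km
                  and o["available_until"] >= request["needed_by"]]
    return sorted(candidates,
                  key=lambda o: distance_km(o["location"], request["location"]))
```

Even with such a filter, a human moderator would still need to review the shortlist—which, as noted below, is exactly where Rynda’s bottleneck turned out to be.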

Asmolov1

As Gregory acknowledges, the results were mixed but certainly interesting and insightful. “Most of the messages [posted to the Rynda platform dealt] with requests for various types of social help, like clothing and medical equipment for children, homes for orphans, people with limited capabilities, or families in need. [...]. Some requests from environmental NGOs were related to the mobilization of volunteers to fight against deforestation or to fight wildfires. [...]. In another case, a volunteer who responded to a request on the platform helped to transport resources to a family with many children living far from a big city. [...]. Many requests concern[ed] children or disabled people. In one case, Rynda found a volunteer who helped a young woman leave her flat for walks, something she could not do alone. In some cases, the platform helped to provide medicine.” In any event, an analysis of the needs posted to Rynda suggests that “the most needed resource is not the thing itself, but the capacity to take it to the person who needs it. Transportation becomes a crucial resource, especially in a country as big as Russia.”

Alas, “Despite the efforts to create a tool that would automatically match a request with a potential help provider, the capacity of the algorithm to optimize the allocation of resources was very limited.” As with the Help Map initiative, digital volunteers serving as social moderators thus remained pivotal to the Virtual Rynda platform. As Alexey notes, “We’ve never even got to the point of the discussion of more complex models of matching.” Perhaps Rynda should have included more structured categories to enable more automated matching, since volunteer match-makers simply do not scale. “Despite the intention that the ‘matchmaking’ algorithm would support the efficient allocation of resources between those in need and those who could help, the success of the ‘matchmaking’ depended on the work of the moderators, whose resources were limited. As a result, a gap emerged between the broad issues that the project could address and the limited resources of volunteers.”

To this end, Gregory readily admits that “the initial definition of the project as a general mutual aid platform may have been too broad and unspecific.” I agree with this diagnosis. Take the online dating platform Match.com, for example: its sole focus is online dating. Airbnb’s sole purpose is to match those looking for a place to stay with those offering their places; Uber’s sole purpose is to match those who need to get somewhere with a local car service. A general matching platform for mutual aid may indeed have been too broad—at least to begin with. Amazon, after all, began with books and only later diversified.

In any case, as Gregory rightly notes, “The relatively limited success of Rynda didn’t mean the failure of the idea of mutual aid. What [...] Rynda demonstrates is the variety of challenges encountered along the way of the project’s implementation.” To be sure, “Every society or community has an inherent potential mutual aid structure that can be strengthened and empowered. This is more visible in emergency situations; however, major mutual aid capacity building is needed in everyday, non-emergency situations.” Thanks to Gregory and team, future digital matchmakers can draw on the above insights and Rynda’s open source code when designing their own mutual-aid and self-help platforms.

For me, one of the key take-aways is the need for a scalable matching platform. Match.com would not be possible if the matching were done primarily manually. Nor would Match.com work as well if the company sought to match interests beyond the romantic domain. So a future Match.com for mutual-aid would need to include automated matching and begin with a very specific matching domain. 


See also:

  • Using Waze, Uber, AirBnB, SeeClickFix for Disaster Response [link]
  • MatchApp: Next Generation Disaster Response App? [link]
  • A Marketplace for Crowdsourcing Crisis Response [link]

Live: Crowdsourced Verification Platform for Disaster Response

Earlier this year, Malaysia Airlines Flight 370 suddenly vanished, which set in motion the largest search and rescue operation in history—both on the ground and online. Colleagues at DigitalGlobe uploaded high-resolution satellite imagery to the web and crowdsourced the digital search for signs of Flight 370. An astounding 8 million volunteers rallied online, searching through 775 million images spanning 1,000,000 square kilometers; all this in just 4 days. What if, in addition to mass crowd-searching, we could also mass crowd-verify information during humanitarian disasters? Rumors and unconfirmed reports tend to spread rather quickly on social media during major crises. But what if the crowd were also part of the solution? This is where our new Verily platform comes in.

Verily Image 1

Verily was inspired by the Red Balloon Challenge, in which competing teams vied for a $40,000 prize by searching for ten weather balloons secretly placed across some 8,000,000 square kilometers (the continental United States). Talk about a needle-in-a-haystack problem. The winning team, from MIT, found all 10 balloons within 8 hours. How? They used social media to crowdsource the search. The team later noted that the balloons would’ve been found even more quickly had competing teams not posted pictures of fake balloons on social media. Point being, all ten balloons were found astonishingly quickly even with the disinformation campaign.

Verily takes the exact same approach and methodology used by MIT to rapidly crowd-verify information during humanitarian disasters. Why is verification important? Because humanitarians have repeatedly noted that their inability to verify social media content is one of the main reasons they aren’t making wider use of this medium. So, to test the viability of our proposed solution to this problem, we decided to pilot the Verily platform by running a Verification Challenge. The Verily Team includes researchers from the University of Southampton, the Masdar Institute and QCRI.

During the Challenge, verification questions of varying difficulty were posted on Verily. Users were invited to collect and post evidence justifying their answers to the “Yes or No” verification questions. The photograph below, for example, was posted with the following question:

Verily Image 3

Unbeknownst to participants, the photograph was actually of Caltagirone, a town in Sicily, Italy. The question was answered correctly within 4 hours by a user who submitted another picture of the same street. The results of the Verily experiment are promising: answers to our questions were coming in so rapidly that we could barely keep up with posting new questions. Users drew on a variety of techniques to collect their evidence and answer the questions we posted.

Verily was designed with the goal of tapping into collective critical thinking; that is, with the goal of encouraging people to think about the question rather than rely on their gut feeling alone. In other words, the purpose of Verily is not simply to crowdsource the collection of evidence but also to crowdsource critical thinking. This explains why a user can’t simply submit a “Yes” or “No” to answer a verification question. Instead, they have to justify their answer by providing evidence, either in the form of an image/video or as text. In addition, Verily does not make use of Like buttons or up/down votes to answer questions. While such tools are great for identifying and sharing content on sites like Reddit, they are not the right tools for verification, which requires searching for evidence rather than liking or retweeting.
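As an illustration of this design choice, here is a minimal Python sketch of an evidence-gated answer. The class and field names are my own assumptions and not Verily’s actual data model.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Answer:
    """An answer is only accepted together with supporting evidence."""
    answer: str                          # "yes" or "no"
    evidence_text: Optional[str] = None  # written justification
    evidence_url: Optional[str] = None   # link to an image or video

    def is_valid(self) -> bool:
        return self.answer in ("yes", "no") and bool(self.evidence_text or self.evidence_url)

@dataclass
class VerificationQuestion:
    question: str
    answers: List[Answer] = field(default_factory=list)

    def submit(self, answer: Answer) -> bool:
        # No bare votes, likes or up/down arrows: evidence is required.
        if answer.is_valid():
            self.answers.append(answer)
            return True
        return False
```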

Our Verification Challenge confirmed the feasibility of the Verily platform for time-critical, crowdsourced evidence collection and verification. The next step is to deploy Verily during an actual humanitarian disaster. To this end, we invite both news and humanitarian organizations to pilot the Verily platform with us during the next natural disaster. Simply contact me to submit a verification question. In the future, once Verily is fully developed, organizations will be able to post their questions directly.


See Also:

  • Verily: Crowdsourced Verification for Disaster Response [link]
  • Crowdsourcing Critical Thinking to Verify Social Media [link]
  • Six Degrees of Separation: Implications for Verifying Social Media [link]

Live: Crowdsourced Crisis Map of UAV/Aerial Videos for Disaster Response

The first version of the Humanitarian UAV Network’s Crisis Map of UAV/aerial videos is now live on the Network’s website. The crowdsourced map features dozens of aerial videos of recent disasters. Like social media, this new medium—user-generated (aerial) content—can be used by humanitarian organizations to complement their damage assessments and thus improve situational awareness.

UAViators Map

The purpose of this Humanitarian UAV Network (UAViators) map is not only to provide humanitarian organizations and disaster-affected communities with an online repository of aerial information on disaster damage to augment their situational awareness; the crisis map also serves to raise awareness of how to safely and responsibly use small UAVs for rapid damage assessments. This explains why users who upload new content to the map must confirm that they have read the UAViators Code of Conduct. They also have to confirm that their videos conform to the Network’s mission and that they do not violate privacy or copyrights. In sum, the map seeks to crowdsource both aerial footage and critical thinking for the responsible use of UAVs in humanitarian settings.

UAViators Map 4

As noted above, this is the first version of the map, which means several other features are currently in the works. These new features will be rolled out incrementally over the next weeks and months. In the meantime, feel free to suggest any features you’d like to see in the comments section below. Thank you.

See also:

  • Humanitarian UAV Network: Strategy for 2014-2015 [link]
  • Humanitarians in the Sky: Using UAVs for Disaster Response [link]
  • Humanitarian UAV Missions During Balkan Floods [link]
  • Using UAVs for Disaster Risk Reduction in Haiti [link]
  • Using MicroMappers to Make Sense of UAV/Aerial Imagery During Disasters [link]

The Filipino Government’s Official Strategy on Crisis Hashtags

As noted here, the Filipino Government has had an official strategy on promoting the use of crisis hashtags since 2012. Recently, the Presidential Communications Development and Strategic Planning Office (PCDSPO) and the Office of the Presidential Spokesperson (PCDSPO-OPS) kindly shared their 7-page strategy (PDF), which I’ve summarized below.

Gov Twitter

The Filipino government first endorsed the use of the #rescuePH and #reliefPH hashtags in August 2012, when the country was experiencing storm-enhanced monsoon rains. “These were initiatives from the private sector. Enough people were using the hashtags to make them trend for days. Eventually, we adopted the hashtags in our tweets for disseminating government advisories, and for collecting reports from the ground. We also ventured into creating new hashtags, and into convincing media outlets to use unified hashtags.” For new hashtags, “The convention is the local name of the storm + PH (e.g., #PabloPH, #YolandaPH). In the case of the heavy monsoon, the local name of the monsoon was used, plus the year (i.e., #Habagat2013).” After agreeing on the hashtags, “the OPS issued an official statement to the media and the public to carry these hashtags when tweeting about weather-related reports.”
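The naming convention itself is simple enough to express as a one-line rule. The sketch below is purely illustrative; the function name and arguments are mine, not the PCDSPO’s.

```python
def unified_hashtag(event_type, local_name, year=None):
    """Build a unified crisis hashtag following the convention described above:
    storms use the local storm name + 'PH'; monsoons use the local name + year."""
    if event_type == "storm":
        return f"#{local_name}PH"        # e.g. #PabloPH, #YolandaPH
    if event_type == "monsoon":
        return f"#{local_name}{year}"    # e.g. #Habagat2013
    raise ValueError(f"unknown event type: {event_type}")
```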

The Office of the Presidential Spokesperson (OPS) would then monitor the hashtags and “made databases and lists which would be used in aid of deployed government frontline personnel, or published as public information.” For example, the OPS “created databases from reports from #rescuePH, containing the details of those in need of rescue, which we endorsed to the National Disaster Risk Reduction & Management Council, the Coast Guard, and the Department of Transportation and Communications. Needless to say, we assumed that the databases we created using these hashtags would be contaminated by invalid reports, such as spam & other inappropriate messages. We try to filter out these erroneous or malicious reports, before we make our official endorsements to the concerned agencies. In coordination with officers from the Department of Social Welfare and Development, we also monitored the hashtag #reliefPH in order to identify disaster survivors who need food and non-food supplies.”

During Typhoon Haiyan (Yolanda), “the unified hashtag #RescuePH was used to convey lists of people needing help.” This information was then sent to the National Disaster Risk Reduction & Management Council so that these names could be “included in their lists of people/communities to attend to.” This rescue hashtag was also “useful in solving surplus and deficits of goods between relief operations centers.” So the government encouraged social media users to coordinate their #ReliefPH efforts with the Department of Social Welfare and Development’s on-the-ground relief-coordination efforts. The Government also “created an infographic explaining how to use the hashtag #RescuePH.”

Screen Shot 2014-06-30 at 10.10.51 AM

Earlier, during the 2012 monsoon rains, the government “retweeted various updates on the rescue and relief operations using the hashtag #SafeNow. The hashtag is used when the user has been rescued or knows someone who has been rescued. This helps those working on rescue to check the list of pending affected persons or families, and update it.”

The government’s strategy document also includes an assessment of their use of unified hashtags during disasters. On the positive side, “These hashtags were successful at the user level in Metro Manila, where Internet use penetration is high. For disasters in the regions, where internet penetration is lower, Twitter was nevertheless useful for inter-sector (media – government – NGOs) coordination and information dissemination.” Another positive was the use of a unified hashtag following the heavy monsoon rains of 2012, “which had damaged national roads, inconvenienced motorists, and pos[ed] difficulty for rescue operations. After the floods subsided, the government called on the public to identify and report potholes and cracks on the national highways of Metro Manila by tweeting pictures and details of these to the official Twitter account [...], and by using the hashtag #lubak2normal. The information submitted was entered into a database maintained by the Department of Public Works and Highways for immediate action.”

Screen Shot 2014-06-30 at 10.32.57 AM

The hashtag was used “1,007 times within 2 hours after it was launched. The reports were published and locations mapped out, viewable through a page hosted on the PCDSPO website. Considering the feedback, we considered the hashtag a success. We attribute this to two things: one, we used a platform that was convenient for the public to report directly to the government; and two, the hashtag appealed to humor (lubak means potholes or rubble in the vernacular). Furthermore, due to the novelty of it, the media had no qualms helping us spread the word. All the reports we gathered were immediately endorsed [...] for roadwork and repair.” This example points to the potential expanded use of social media and crowdsourcing for rapid damage assessments.

On the negative side, the use of #SafeNow resulted mostly in “tweets promoting #safenow, and very few actually indicating that they have been successfully rescued and/or are safe.” The most pressing challenge, however, was filtering. “In succeeding typhoons/instances of flooding, we began to have a filtering problem, especially when high-profile Twitter users (i.e., pop-culture celebrities) began to promote the hashtags through Twitter. The actual tweets that were calls for rescue were being drowned by retweets from fans, resulting in many nonrescue-related tweets [...].” This explains the need for Twitter monitoring platforms like AIDR, which is free and open source.
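The filtering problem described above is partly mechanical: much of the noise consists of retweets and bare hashtag promotion. A first-pass filter along the following lines could be applied before human review; the field names follow Twitter’s API conventions and the length threshold is an arbitrary assumption of mine, not part of the government’s strategy or of AIDR.

```python
import re

def looks_actionable(tweet):
    """Rough first-pass filter: drop retweets and tweets that are nothing but
    hashtags, mentions or links, keeping candidate calls for rescue for review."""
    text = tweet.get("text", "").strip()
    if tweet.get("retweeted_status") or text.lower().startswith("rt @"):
        return False
    # Strip hashtags, mentions and URLs; require some substantive text to remain.
    remainder = re.sub(r"#\w+|@\w+|https?://\S+", "", text).strip()
    return len(remainder) >= 20
```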


Crowdsourcing a Crisis Map of UAV/Aerial Videos for Disaster Response

Journalists and citizen journalists are already using small UAVs during disasters. And some are also posting their aerial videos online: Typhoon Haiyan (Yolanda), Moore Tornado, Arkansas Tornado and recent floods in Florida, for example. Like social media, this new medium—user-generated (aerial) content—can be used by humanitarian organizations to augment their damage assessments and situational awareness. I’m therefore spearheading the development of a crisis map to crowdsource the collection of aerial footage during disasters. This new “Humanitarian UAV Map” (HUM) project is linked to the Humanitarian UAV Network (UAViators).

Travel by Drone

The UAV Map, which will go live shortly, is inspired by the Travel by Drone map displayed above. In other words, we’re aiming for simplicity. Unlike the map above, however, we’ll be using OpenStreetMap (OSM) instead of Google Maps as our base map, since the former is open source. What’s more, as noted in my forthcoming book, the Humanitarian OSM Team (HOT) does outstanding work crowdsourcing up-to-date maps during disasters. So having OSM as a base map makes perfect sense.

Screen Shot 2014-06-17 at 2.39.17 PM

Given that we’ve already developed a VideoClicker as part of our MicroMappers platform, we’ll be using said Clicker to crowdsource the analysis and quality control of videos posted to our crisis map. Stay tuned for the launch; our Crisis Aerial Map will be live shortly.


See Also:

  • Welcome to the Humanitarian UAV Network [link]
  • How UAVs are Making a Difference in Disaster Response [link]
  • Humanitarians Using UAVs for Post Disaster Recovery [link]
  • Grassroots UAVs for Disaster Response [link]
  • Using UAVs for Search & Rescue [link]
  • Debrief: UAV/Drone Search & Rescue Challenge [link]
  • Crowdsourcing Analysis of UAV Imagery for Search/Rescue [link]
  • Check-List for Flying UAVs in Humanitarian Settings [link]

Zoomanitarians: Using Citizen Science and Next Generation Satellites to Accelerate Disaster Damage Assessments

Zoomanitarians has been in the works for well over a year, so we’re excited to be going fully public for the first time. Zoomanitarians is a joint initiative between Zooniverse (Brooke Simmons), Planet Labs (Alex Bakir) and myself at QCRI. The purpose of Zoomanitarians is to accelerate disaster damage assessments by leveraging Planet Labs’ unique constellation of 28 satellites and Zooniverse’s highly scalable microtasking platform. As I noted in this earlier post, digital volunteers from Zooniverse tagged well over 2 million satellite images (of Mars, below) in just 48 hours. So why not invite Zooniverse volunteers to tag millions of images taken by Planet Labs following major disasters (on Earth) to help humanitarians accelerate their damage assessments?

Zooniverse Planet 4

That was the question I posed to Brooke and Alex in early 2013. “Why not indeed?” was our collective answer. So we reached out to several knowledgeable colleagues, including Kate Chapman from the Humanitarian OpenStreetMap Team and Lars Bromley from UNOSAT, for their feedback and guidance on the idea.

We’ll be able to launch our first pilot project later this year thanks to Kate, who kindly provided us with very high-resolution UAV/aerial imagery of downtown Tacloban in the Philippines. Why do we want said imagery when the plan is to use Planet Labs imagery? Because Planet Labs imagery is currently available at 3-5 meter resolution. We’ll therefore be “degrading” the resolution of the aerial imagery to determine just what level and type of damage can be captured at various resolutions compared to the imagery from Planet Labs. The pilot project will therefore serve to (1) customize and test the Zoomanitarians microtasking platform and (2) determine what level of detail can be captured at various resolutions.
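For the curious, degrading aerial imagery to a coarser ground sample distance (GSD) can be approximated with a simple resampling step. The sketch below uses the Pillow library; the exact method we’ll use for the pilot hasn’t been settled, so treat this purely as an illustration with assumed resolution values.

```python
from PIL import Image

def degrade_to_gsd(in_path, out_path, native_gsd_m=0.05, target_gsd_m=4.0):
    """Downsample aerial imagery so its effective resolution roughly matches
    a coarser satellite GSD (e.g. 3-5 m), then save the result."""
    img = Image.open(in_path)
    scale = native_gsd_m / target_gsd_m          # e.g. 5 cm UAV imagery -> 4 m target
    small = img.resize((max(1, int(img.width * scale)),
                        max(1, int(img.height * scale))), Image.BILINEAR)
    # Scale back up to the original extent so analysts compare like with like.
    small.resize(img.size, Image.NEAREST).save(out_path)
```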

PlanetLabs

We’ll then spend the remainder of the year improving the platform based on the results of the pilot project, during which time I will continue to seek input from humanitarian colleagues. Zooniverse’s microtasking platform has already been stress-tested extensively over the years, which is one reason why I approached Zooniverse last year. The other reason is that they have over 1 million digital volunteers on their list-serve. Couple this with Planet Labs’ unique constellation of 28 satellites, and you’ve got the potential for near real-time satellite imagery analysis for disaster response. Our plan is to produce “heat maps” based on the results and to share shapefiles as well for overlay on other maps.
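One simple way to turn volunteer tags into the kind of heat map mentioned above is to bin tag coordinates into a coarse grid. The sketch below assumes a CSV of tag locations with ‘lat’ and ‘lon’ columns, which is my assumption rather than a defined Zoomanitarians output format.

```python
import csv
from collections import Counter

def damage_heat_grid(tag_csv, cell_deg=0.01):
    """Count crowd-tagged damage points per grid cell (~1 km at the equator)
    so the result can be rendered as a heat map or exported for GIS overlay."""
    counts = Counter()
    with open(tag_csv, newline="") as f:
        for row in csv.DictReader(f):
            cell = (round(float(row["lat"]) / cell_deg),
                    round(float(row["lon"]) / cell_deg))
            counts[cell] += 1
    return counts
```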

It took imagery analysts well over 48 hours to acquire and analyze satellite imagery following Typhoon Yolanda. While Planet Labs imagery is not (yet) available at high-resolutions, our hope is that Zoomanitarians will be able to acquire and analyze relevant imagery within 12-24 hours of a request. Several colleagues have confirmed to me that the results of this rapid analysis will also prove invaluable for subsequent, higher-resolution satellite imagery acquisition and analysis. On a related note, I hope that our rapid satellite-based damage assessments will also serve as a triangulation mechanism (ground-truthing) for the rapid social-media-driven damage assessments carried out using the Artificial Intelligence for Disaster Response (AIDR) platform and MicroMappers.

While much work certainly remains, and while Zoomanitarians is still in the early phases of research and development, I’m nevertheless excited and optimistic about the potential impact—as are my colleagues Brooke and Alex. We’ll be announcing the date of the pilot later this summer, so stay tuned for updates!

An Operational Check-List for Flying UAVs in Humanitarian Settings

The Humanitarian UAV Network (UAViators) has taken off much faster than I expected. More than 240 members in 32 countries have joined the network since its launch just a few weeks ago.

UAViators Logo

I was also pleasantly surprised by the number of humanitarian organizations that got in touch with me right after the launch. Many of them are just starting to explore this space. And I found it refreshing that every single one of them considers the case for humanitarian UAVs to be perfectly obvious. Almost all of the groups also mentioned how they would have made use of UAVs in recent disasters. Some are even taking steps now to set up rapid-response UAV teams.

Credit: MicroDrones

My number one priority after launching the network was to start working on a Code of Conduct to guide the use of UAVs in humanitarian settings—the only one of its kind as far as I know. While I had initially sought to turn this Code of Conduct into a check-list, it became clear from the excellent feedback provided by members and the Advisory Board that we needed two separate documents. So my RAs and I have created a more general Code of Conduct along with a more detailed operational check-list for flying UAVs in humanitarian settings. You’ll find the check-list here. Big thanks to Advisory Board member Gene Robinson for letting me draw on his excellent book for this check-list. Both the Code of Conduct and Check-List will continue to be updated on a monthly basis, so please do chime in and help us improve them.

One of my main goals for 2014 is to have the Code of Conduct officially and publicly endorsed by both UAV companies and humanitarian organizations alike. This may end up being a time-consuming process, but this endorsement is a must if the Code of Conduct is to become an established policy document. Another one of my goals is to organize a Humanitarian UAV Strategy Meeting in November that will bring together a select number of experts in Humanitarian UAVs with seasoned humanitarian professionals from the UN and Red Cross. The purpose of this meeting is to establish closer relationships between the humanitarian and UAV communities in order to facilitate coordination and information sharing during major disasters.

Also in the works for this year is a disaster response simulation with partners in the Philippines (including local government). The purpose of this simulation will be to rapidly deploy UAVs over a make-believe disaster area and then upload the thousands of aerial images to MicroMappers to rapidly crowdsource the tagging of features-of-interest. Our (ambitious) hope is that the entire process can be done within a matter of hours. The simulation will help us identify the weak links in our workflows. Our Filipino partners will also be using the resulting tagged images to create more accurate machine learning classifiers for automated feature detection. This will help them accelerate the analysis of aerial imagery during real disasters.
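How the tagged images would feed into machine learning isn’t spelled out here. As a purely illustrative sketch under my own assumptions (scikit-learn, pre-computed image features, consensus crowd tags as labels), the training step might look something like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_feature_detector(tile_features, crowd_labels):
    """Fit a classifier on crowd-tagged aerial image tiles.
    tile_features: (n_tiles, n_features) array of, e.g., color/texture statistics.
    crowd_labels:  consensus tags from volunteers (e.g. 'damage', 'no_damage')."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(np.asarray(tile_features), np.asarray(crowd_labels))
    return clf
```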

So stay tuned for updates, which will be posted on iRevolution and UAViators.


See also:

  • How UAVs are Making a Difference in Disaster Response [link]
  • Humanitarians Using UAVs for Post Disaster Recovery [link]
  • Grassroots UAVs for Disaster Response [link]
  • Using UAVs for Search & Rescue [link]
  • Crowdsourcing Analysis of UAV Imagery for Search/Rescue [link]

Using AIDR to Collect and Analyze Tweets from Chile Earthquake

Wish you had a better way to make sense of Twitter during disasters than this?

Type in a keyword like #ChileEarthquake in Twitter’s search box above and you’ll see more tweets than you can possibly read in a day, let alone keep up with for more than a few minutes. Wish there were an easy, free and open source solution? Well, you’ve come to the right place. My team and I at QCRI are developing the Artificial Intelligence for Disaster Response (AIDR) platform to do just this. Here’s how it works:

First you login to the AIDR platform using your own Twitter handle (click images below to enlarge):

AIDR login

You’ll then see your collection of tweets (if you already have any). In my case, you’ll see I have three. The first is a collection of English language tweets related to the Chile Earthquake. The second is a collection of Spanish tweets. The third is a collection of more than 3,000,000 tweets related to the missing Malaysia Airlines plane. A preliminary analysis of these tweets is available here.

AIDR collections

Let’s look more closely at my Chile Earthquake 2014 collection (see below, click to enlarge). I’ve collected about a quarter of a million tweets in the past 30 hours or so. The label “Downloaded tweets (since last re-start)” simply refers to the number of tweets I’ve collected since adding a new keyword or hashtag to my collection. I started the collection yesterday at 5:39am my time (yes, I’m an early bird). Under “Keywords” you’ll see all the hashtags and keywords I’ve used to search for tweets related to the earthquake in Chile. I’ve also specified the geographic region I want to collect tweets from. Don’t worry, you don’t actually have to enter geographic coordinates when you set up your own collection; you simply highlight (on the map) the area you’re interested in and AIDR does the rest.

AIDR - Chile Earthquake 2014

You’ll also note in the above screenshot that I’ve chosen to collect only English-language tweets, but you can collect tweets in all languages if you’d like, or just a select few. Finally, the Collaborators section simply lists the colleagues I’ve added to my collection. This gives them the ability to add new keywords/hashtags and to download the tweets collected, as shown below (click to enlarge). More specifically, collaborators can download the most recent 100,000 tweets (and also share the link with others). The 100K tweet limit is based on Twitter’s Terms of Service (ToS). If collaborators want all the tweets, Twitter’s ToS allows for sharing the TweetIDs for an unlimited number of tweets.

AIDR download CSV

So that’s the AIDR Collector. We also have the AIDR Classifier, which helps you make sense of the tweets you’re collecting (in real time). That is, your collection of tweets doesn’t stop; it continues growing, and as it does, you can make sense of new tweets as they come in. With the Classifier, you simply teach AIDR to classify tweets into whatever topics you’re interested in, like “Infrastructure Damage”, for example. To get started with the AIDR Classifier, simply return to the “Details” tab of our Chile collection. You’ll note the “Go To Classifier” button on the far right:

AIDR go to Classifier

Clicking on that button allows you to create a Classifier, say on the topic of disaster damage in general. So you simply create a name for your Classifier, in this case “Disaster Damage” and then create Tags to capture more details with respect to damage-related tweets. For example, one Tag might be, say, “Damage to Transportation Infrastructure.” Another could be “Building Damage.” In any event, once you’ve created your Classifier and corresponding tags, you click Submit and find your way to this page (click to enlarge):

AIDR Classifier Link

You’ll notice the public link for volunteers. That’s basically the interface you’ll use to teach AIDR. If you want to teach AIDR by yourself, you can certainly do so. You also have the option of “crowdsourcing the teaching” of AIDR. Clicking on the link will take you to the page below.

AIDR to MicroMappers

So, I called my Classifier “Message Contents”, which is not particularly insightful; I should have labeled it something like “Humanitarian Information Needs”, but bear with me and let’s click on that Classifier. This will take you to the following Clicker on MicroMappers:

MicroMappers Clicker

Now this is not the most awe-inspiring interface you’ve ever seen (at least I hope not); the reason being that this is simply our very first version. We’ll be providing different “skins”, like the official MicroMappers skin (below), as well as a skin that allows you to upload your own logo, for example. In the meantime, note that AIDR shows every tweet to at least three different volunteers. And only if all three volunteers agree on how to classify a given tweet does AIDR take that into consideration when learning. In other words, AIDR wants to ensure that humans are really sure about how to classify a tweet before it decides to learn from that lesson. Incidentally, the MicroMappers smartphone app for the iPhone and Android will be available in the next few weeks. But I digress.
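The “three volunteers must agree” rule can be written down in a few lines. This sketch is my paraphrase of the behavior described above, not AIDR’s actual implementation.

```python
from collections import Counter

def consensus_label(votes, min_volunteers=3):
    """Return a training label only when at least three volunteers have seen the
    tweet and all of them agree on the tag; otherwise return None and keep waiting."""
    if len(votes) < min_volunteers:
        return None
    label, count = Counter(votes).most_common(1)[0]
    return label if count == len(votes) else None
```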

Yolanda TweetClicker4

As you and/or your volunteers classify tweets based on the Tags you created, AIDR starts to learn—hence the AI (Artificial Intelligence) in AIDR. AIDR begins to recognize that all the tweets you classified as “Infrastructure Damage” are indeed similar. Once you’ve tagged enough tweets, AIDR will decide that it’s time to leave the nest and fly on its own. In other words, it will start to auto-classify incoming tweets in real time. (At present, AIDR can auto-classify some 30,000 tweets per minute; compare this to the peak rate of 16,000 tweets per minute observed during Hurricane Sandy.)

Of course, AIDR’s first solo “flights” won’t always go smoothly. But not to worry, AIDR will let you know when it needs a little help. Every tweet that AIDR auto-tags comes with a confidence level. That is, AIDR will let you know: “I am 80% sure that I correctly classified this tweet.” If AIDR has trouble with a tweet, i.e., if its confidence level is 65% or below, then it will send the tweet to you (and/or your volunteers) so it can learn from how you classify that particular tweet. In other words, the more tweets you classify, the more AIDR learns, and the higher AIDR’s confidence levels get. Fun, huh?
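This confidence-based hand-off is a classic human-in-the-loop (active learning) pattern. Here is a minimal sketch of the routing step, assuming a scikit-learn-style pipeline (text vectorizer plus classifier) with predict_proba; the 65% threshold comes from the description above, everything else is an assumption rather than AIDR’s actual code.

```python
def route_tweet(classifier, tweet_text, threshold=0.65):
    """Auto-tag a tweet when the model is confident enough; otherwise send it
    to volunteers so their labels can be fed back into training."""
    probs = classifier.predict_proba([tweet_text])[0]   # e.g. a sklearn Pipeline with a vectorizer
    label, confidence = max(zip(classifier.classes_, probs), key=lambda pair: pair[1])
    if confidence > threshold:
        return ("auto_tagged", label, confidence)
    return ("sent_to_volunteers", None, confidence)
```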

To view the results of the machine tagging, simply click on the View/Download tab, as shown below (click to enlarge). The page shows you the latest tweets that have been auto-tagged along with the Tag label and the confidence score. (Yes, this too is the first version of that interface; we’ll make it more user-friendly in the future, not to worry.) In any event, you can download the auto-tagged tweets in a CSV file and also share the download link with your colleagues for analysis and so on. At some point in the future, we hope to provide a simple data visualization output page so that you can easily see interesting data trends.

AIDR Results

So that’s basically all there is to it. If you want to learn more about how it all works, you might fancy reading this research paper (PDF). In the meantime, I’ll simply add that you can re-use your Classifiers. If (when?) another earthquake strikes Chile, you won’t have to start from scratch. You can auto-tag incoming tweets immediately with the Classifier you already have. Plus, you’ll be able to share your classifiers with your colleagues and partner organizations if you like. In other words, we’re envisaging an “App Store” of Classifiers based on different hazards and different countries. The more we re-use our Classifiers, the more accurate they will become. Everybody wins.

And voila, that is AIDR (at least our first version). If you’d like to test the platform and/or want the tweets from the Chile Earthquake, simply get in touch!


Note:

  • We’re adapting AIDR so that it can also classify text messages (SMS).
  • AIDR Classifiers are language specific. So if you speak Spanish, you can create a classifier to tag all Spanish language tweets/SMS that refer to disaster damage, for example. In other words, AIDR does not only speak English : )

Launching a Search and Rescue Challenge for Drone / UAV Pilots

My colleague Timothy Reuter (of AidDroids fame) kindly invited me to co-organize the Drone/UAV Search and Rescue Challenge for the DC Drone User Group. The challenge will take place on May 17th near Marshall in Virginia. The rules for the competition are based on the highly successful Search/Rescue challenge organized by my new colleague Chad with the North Texas Drone User Group. We’ll pretend that a person has gone missing by scattering (over a wide area) various clues such as pieces of clothing and personal effects. Competitors will use their UAVs to collect imagery of the area and will have 45 minutes after flying to analyze the imagery for clues. The full set of rules for our challenge is listed here but may change slightly as we get closer to the event.

searchrescuedrones

I want to try something new with this challenge. While previous competitions have focused exclusively on the use of drones/UAVs for the “Search” component of the challenge, I want to introduce the option of also engaging in the “Rescue” part. How? If UAVs identify a missing person, then why not provide that person with immediate assistance while waiting for the Search and Rescue team to arrive on site? The UAV could drop a small, lightweight first aid kit, a small water bottle, or even a small walkie-talkie. Enter my new colleague Euan Ramsay, who has been working on a UAV payloader solution for Search and Rescue; see the video demo below. Euan, who is based in Switzerland, has very kindly offered to share several payloader units for our UAV challenge. So I’ll be meeting up with him next month to take the units back to DC for the competition.

Another area I’d like to explore for this challenge is the use of crowdsourcing to analyze the aerial imagery & video footage. As noted here, the University of Central Lancashire used crowdsourcing in their UAV Search and Rescue pilot project last summer. This innovative “crowdsearching” approach is also being used to look for Malaysia Flight 370 that went missing several weeks ago. I’d really like to have this crowdsourcing element be an option for the DC Search & Rescue challenge.

UAV MicroMappers

My team and I at QCRI have developed a platform called MicroMappers, which can easily be used to crowdsource the analysis of UAV pictures and videos. The United Nations (OCHA) used MicroMappers in response to Typhoon Yolanda last year to crowdsource the tagging of pictures posted on Twitter. Since then we’ve added video-tagging capability. So one scenario for the UAV challenge would be for competitors to upload their imagery/videos to MicroMappers and have digital volunteers look through this content for clues about our fake missing person.

In any event, I’m excited to be collaborating with Timothy on this challenge and will share updates on iRevolution on how all this pans out.


See also:

  • Using UAVs for Search & Rescue [link]
  • Crowdsourcing Analysis of UAV Imagery for Search and Rescue [link]
  • How UAVs are Making a Difference in Disaster Response [link]
  • Grassroots UAVs for Disaster Response [link]

Results of the Crowdsourced Search for Malaysia Flight 370 (Updated)

Update: More than 3 million volunteers have thus far joined the crowdsourcing efforts to locate the missing Malaysia Airlines plane. These digital volunteers have viewed over a quarter of a billion micro-maps and have tagged almost 3 million features in these satellite maps. Source of update.

Malaysian authorities have now gone on record to confirm that Flight 370 was hijacked, which reportedly explains why contact with the passenger jet abruptly ceased a week ago. The Search & Rescue operations now involve 13 countries around the world and over 100 ships, helicopters and airplanes. The costs of this massive operation must easily be running into the millions of dollars.

FlightSaR

Meanwhile, a free crowdsourcing platform once used by digital volunteers to search for Genghis Khan’s Tomb and displaced populations in Somalia (video below) has been deployed to search high-resolution satellite imagery for signs of the missing airliner. This is not the first time that crowdsourced satellite imagery analysis has been used to find a missing plane, but this is certainly the highest-profile operation yet, which may explain why the crowdsourcing platform used for the search (Tomnod) has reportedly crashed for over a dozen hours since the online search began. (Note that Zooniverse can easily handle this level of traffic.) Click on the video below to learn more about the crowdsourced search for Genghis Khan and displaced peoples in Somalia.

NatGeoVideo

Having current, high-resolution satellite imagery is almost as good as having your own helicopter. So the digital version of these search operations includes tens of thousands of digital helicopters, whose virtual pilots are covering over 2,000 square miles of the Gulf of Thailand right from their own computers. They’re doing this entirely for free, around the clock and across multiple time zones. This is what Digital Humanitarians have been doing ever since the 2010 Haiti Earthquake, and most recently in response to Typhoon Yolanda.

Tomnod has just released the top results of the crowdsourced digital search efforts, which are displayed in the short video below. Like other microtasking platforms, Tomnod uses triangulation to calculate areas of greatest consensus by the crowd. This is explained further here. Note: The example shown in the video is NOT a picture of Flight 370 but perhaps of an airborne Search & Rescue plane.
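Tomnod hasn’t published the details of its consensus ranking, but a rough stand-in for “areas of greatest consensus” is to group tags into grid cells and keep only cells tagged independently by several volunteers, along these lines (field names and thresholds are my assumptions):

```python
from collections import defaultdict

def consensus_cells(tags, cell_deg=0.01, min_taggers=3):
    """Group crowd tags into grid cells and keep the cells marked independently
    by at least `min_taggers` distinct volunteers."""
    cells = defaultdict(set)
    for tag in tags:  # each tag: {'lat': ..., 'lon': ..., 'user_id': ...}
        cell = (round(tag["lat"] / cell_deg), round(tag["lon"] / cell_deg))
        cells[cell].add(tag["user_id"])
    return {cell: len(users) for cell, users in cells.items() if len(users) >= min_taggers}
```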

While looking for evidence of the missing airliner is like looking for the proverbial needle in a massive stack of satellite images, perhaps the biggest value-added of this digital search lies in identifying where the aircraft is most definitely not located—that is, approaching this crowdsourced operation as a process of elimination. Professional imagery analysts can very easily and quickly review images tagged by the crowd, even if they are mistakenly tagged as depicting wreckage. In other words, the crowd can provide the first-level filter so that expert analysts don’t waste their time looking at thousands of images of bare ocean. Basically, if the mandate is to leave no stone unturned, then the crowd can do that very well.

In sum, crowdsourcing can improve the signal-to-noise ratio so that experts can focus more narrowly on analyzing the potential signals. This process may not be perfect just yet, but it can be refined and improved. (Note that professionals also get it wrong, as Chinese analysts did with this satellite image of the supposed Malaysian airliner.)

If these digital efforts continue and Flight 370 has indeed been hijacked, then this will certainly be the first time that crowdsourced satellite imagery analysis is used to find a hijacked aircraft. The latest satellite imagery uploaded by Tomnod is no longer focused on bodies of water but rather on land. The blue strips below (left) show the area that the new satellite imagery covers.

Tomnod New Imagery 2

Some important questions will need to be addressed if this operation is indeed extended. What if the hijackers make contact and order the cessation of all offline and online Search & Rescue operations? Would volunteers be considered “digital combatants,” potentially embroiled in political conflict in which the lives of 227 hostages are at stake?


Note: The Google Earth file containing the top results of the search is available here.

See also: Analyzing Tweets on Malaysia Flight #MH370 [link]