Category Archives: Social Media

Disaster Tweets Coupled With UAV Imagery Give Responders Valuable Data on Infrastructure Damage

My colleague Leysia Palen recently co-authored an important study (PDF) on tweets posted during last year’s major floods in Colorado. As Leysia et al. write, “Because the flooding was widespread, it impacted many canyons and closed off access to communities for a long duration. The continued storms also prevented airborne reconnaissance. During this event, social media and other remote sources of information were sought to obtain reconnaissance information [...].”

[Image: flooding in Colorado]

The study analyzed 212,672 unique tweets generated by 57,049 unique Twitter users. Of these tweets, 2,658 were geo-tagged. The researchers combed through these geo-tagged tweets for any information on infrastructure damage; a sample is included below (click to enlarge). Leysia et al. were particularly interested in geo-tagged tweets with pictures of infrastructure damage.

[Screenshot: sample of geo-tagged tweets reporting infrastructure damage]
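
Much of this filtering boils down to two checks per tweet: does it carry coordinates, and does its text mention damage? Here is a minimal sketch of that idea (the field names and damage keywords are my own illustrative assumptions, not the study's actual schema or term list):

```python
# Minimal sketch: keep geo-tagged tweets whose text suggests infrastructure
# damage. Field names and keywords are illustrative assumptions, not the
# study's actual schema or term list.
DAMAGE_TERMS = {"bridge", "road", "collapsed", "washed out", "flooded", "damage"}

def geotagged_damage_tweets(tweets):
    """Yield tweets that carry GPS coordinates and mention damage-related terms."""
    for tweet in tweets:
        if tweet.get("coordinates") is None:
            continue  # the vast majority of tweets are not geo-tagged
        text = tweet.get("text", "").lower()
        if any(term in text for term in DAMAGE_TERMS):
            yield tweet
```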

They overlaid these geo-tagged pictures on satellite and UAV/aerial imagery of the disaster-affected areas. The latter was captured by Falcon UAV. The satellite and aerial imagery provided the researchers with an easy way to distinguish between vegetation and water. “Most tweets appeared to fall primarily within the high flood hazard zones. Most bridges and roads that were located in the flood plains were expected to experience a high risk of damage, and the tweets and remote data confirmed this pattern.” According to Shideh Dashti, an assistant professor of civil, environmental and architectural engineering, and one of the co-authors, “we compared those tweets to the damage reported by engineering reconnaissance teams and they were well correlated.”

falcon uav flooding
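
The overlay itself reduces to a point-in-polygon test: does a tweet's location fall inside a high flood hazard zone? A minimal sketch using the shapely library (the polygon below is a made-up placeholder, not an actual Colorado flood-plain boundary):

```python
# Minimal sketch: test whether geo-tagged tweets fall inside a flood hazard
# zone. Requires shapely (pip install shapely). The polygon is a made-up
# placeholder, not an actual Colorado flood-plain boundary.
from shapely.geometry import Point, Polygon

high_hazard_zone = Polygon([
    (-105.28, 40.01), (-105.25, 40.01), (-105.25, 40.04), (-105.28, 40.04),
])

def tweets_in_hazard_zone(geotagged_tweets, zone):
    """Return tweets whose (lon, lat) coordinates fall within the polygon."""
    return [
        tweet for tweet in geotagged_tweets
        if zone.contains(Point(tweet["lon"], tweet["lat"]))
    ]
```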

To this end, “by making use of real-time reporting by those affected in a region, including their posting of visual data,” Leysia and team “show that tweets may be used to directly support engineering reconnaissance by helping to digitally survey a region and navigate optimal paths for direct observation.” In sum, the study demonstrates “how tweets, particularly with postings of visual data and references to location, may be used to directly support geotechnical experts by helping to digitally survey the affected region and to navigate optimal paths through the physical space in preparation for direct observation.”

Since the vast majority of tweets are not geo-tagged, GPS coordinates for potentially important pictures in these tweets are not available. The authors thus recommend looking into using natural language processing (NLP) techniques to “expose hazard-specific and site-specific terms and phrases that the layperson uses to report damage in situ.” They also suggest that a “more elaborate campaign that instructs people how to report such damage via tweets [...] may help get better reporting of damage across a region.”
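
One crude way to start on that NLP suggestion is to surface terms that are unusually frequent in disaster-period tweets relative to everyday tweets. The sketch below is my own illustration of that idea, not the method the authors propose:

```python
# Surface candidate hazard/damage vocabulary by contrasting disaster-period
# tweets with a baseline corpus. Illustrative only; not the study's method.
from collections import Counter

def distinctive_terms(disaster_texts, baseline_texts, top_n=20):
    """Rank words by how much more frequent they are during the disaster."""
    disaster = Counter(w for t in disaster_texts for w in t.lower().split())
    baseline = Counter(w for t in baseline_texts for w in t.lower().split())
    scores = {
        word: count / (baseline.get(word, 0) + 1)  # +1 smoothing
        for word, count in disaster.items()
        if count >= 5  # ignore very rare words
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```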

These findings are an important contribution to the humanitarian computing space. For us at QCRI, this research suggests we may be on the right track with MicroMappers, a crowdsourcing (technically a microtasking) platform to filter and geo-tag social media content including pictures and videos. MicroMappers was piloted last year in response to Typhoon Haiyan. We’ve since been working on improving the platform and extending it to also analyze UAV/aerial imagery. We’ll be piloting this new feature in coming weeks. Ultimately, our aim is for MicroMappers to create near real-time Crisis Maps that provide an integrated display of relevant Tweets, pictures, videos and aerial imagery during disasters.


See also:

  • Using AIDR to Automatically Collect & Analyze Disaster Tweets [link]
  • Crisis Map of UAV Videos for Disaster Response [link]
  • Humanitarians in the Sky: Using UAVs for Disaster Response [link]
  • Digital Humanitarian Response: Why Moving from Crowdsourcing to Microtasking is Important [link]

Live: Crowdsourced Verification Platform for Disaster Response

Earlier this year, Malaysia Airlines Flight 370 suddenly vanished, setting in motion the largest search and rescue operation in history—both on the ground and online. Colleagues at DigitalGlobe uploaded high-resolution satellite imagery to the web and crowdsourced the digital search for signs of Flight 370. An astounding 8 million volunteers rallied online, searching through 775 million images spanning 1,000,000 square kilometers; all this in just 4 days. What if, in addition to mass crowd-searching, we could also mass crowd-verify information during humanitarian disasters? Rumors and unconfirmed reports tend to spread rather quickly on social media during major crises. But what if the crowd were also part of the solution? This is where our new Verily platform comes in.

Verily Image 1

Verily was inspired by the Red Balloon Challenge, in which competing teams vied for a $40,000 prize by searching for ten weather balloons secretly placed across some 8,000,000 square kilometers (the continental United States). Talk about a needle-in-the-haystack problem. The winning team from MIT found all ten balloons in under nine hours. How? They used social media to crowdsource the search. The team later noted that the balloons would’ve been found even more quickly had competing teams not posted pictures of fake balloons on social media. Point being: all ten balloons were found astonishingly quickly, even with the disinformation campaign.

Verily takes the same approach and methodology used by MIT to rapidly crowd-verify information during humanitarian disasters. Why is verification important? Because humanitarians have repeatedly noted that their inability to verify social media content is one of the main reasons why they aren’t making wider use of this medium. So, to test the viability of our proposed solution to this problem, we decided to pilot the Verily platform by running a Verification Challenge. The Verily Team includes researchers from the University of Southampton, the Masdar Institute and QCRI.

During the Challenge, verification questions of varying difficulty were posted on Verily. Users were invited to collect and post evidence justifying their answers to the “Yes or No” verification questions. The photograph below, for example, was posted with the following question:

Verily Image 3

Unbeknownst to participants, the photograph was actually of a Sicilian town called Caltagirone. The question was answered correctly within 4 hours by a user who submitted another picture of the same street. The results of this first Verily experiment are promising. Answers to our questions were coming in so rapidly that we could barely keep up with posting new questions. Users drew on a variety of techniques to collect their evidence and answer the questions we posted.

Verily was designed with the goal of tapping into collective critical thinking; that is, with the goal of encouraging people to think about the question rather than rely on their gut feeling alone. In other words, the purpose of Verily is not simply to crowdsource the collection of evidence but also to crowdsource critical thinking. This explains why a user can’t simply submit a “Yes” or “No” to answer a verification question. Instead, they have to justify their answer by providing evidence, either in the form of an image/video or as text. In addition, Verily does not make use of Like buttons or up/down votes to answer questions. While such tools are great for identifying and sharing content on sites like Reddit, they are not the right tools for verification, which requires searching for evidence rather than liking or retweeting.
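
To make that design decision concrete, here is a rough sketch of the “no evidence, no answer” rule (class and field names are mine, purely illustrative; this is not Verily’s actual code):

```python
# Rough sketch of the "no evidence, no answer" rule described above.
# Class and field names are illustrative; this is not Verily's actual code.
from dataclasses import dataclass

@dataclass
class Answer:
    verdict: bool            # True = "Yes", False = "No"
    evidence_text: str = ""  # written justification
    evidence_url: str = ""   # link to a supporting image/video

def submit_answer(answer: Answer) -> Answer:
    # A bare Yes/No is rejected: requiring evidence is what turns
    # voting into crowdsourced critical thinking.
    if not (answer.evidence_text or answer.evidence_url):
        raise ValueError("Answers must include evidence (text, image or video).")
    return answer
```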

Our Verification Challenge confirmed the feasibility of the Verily platform for time-critical, crowdsourced evidence collection and verification. The next step is to deploy Verily during an actual humanitarian disaster. To this end, we invite both news and humanitarian organizations to pilot the Verily platform with us during the next natural disaster. Simply contact me to submit a verification question. In the future, once Verily is fully developed, organizations will be able to post their questions directly.


See Also:

  • Verily: Crowdsourced Verification for Disaster Response [link]
  • Crowdsourcing Critical Thinking to Verify Social Media [link]
  • Six Degrees of Separation: Implications for Verifying Social Media [link]

The Filipino Government’s Official Strategy on Crisis Hashtags

As noted here, the Filipino Government has had an official strategy on promoting the use of crisis hashtags since 2012. Recently, the Presidential Communications Development and Strategic Planning Office (PCDSPO) and the Office of the Presidential Spokesperson (PCDSPO-OPS) kindly shared their 7-page strategy (PDF), which I’ve summarized below.

Gov Twitter

The Filipino government first endorsed the use of #rescuePH and #reliefPH in August 2012, when the country was experiencing storm-enhanced monsoon rains. According to the strategy document, “These were initiatives from the private sector. Enough people were using the hashtags to make them trend for days. Eventually, we adopted the hashtags in our tweets for disseminating government advisories, and for collecting reports from the ground. We also ventured into creating new hashtags, and into convincing media outlets to use unified hashtags.” For new hashtags, “The convention is the local name of the storm + PH (e.g., #PabloPH, #YolandaPH). In the case of the heavy monsoon, the local name of the monsoon was used, plus the year (i.e., #Habagat2013).” After agreeing on the hashtags, “the OPS issued an official statement to the media and the public to carry these hashtags when tweeting about weather-related reports.”
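
The naming convention is simple enough to capture in a few lines; the toy function below is mine, included purely to make the rule concrete:

```python
# Toy rendering of the PCDSPO hashtag convention described above.
def unified_hashtag(storm_name=None, monsoon_name=None, year=None):
    """Storms: local name + 'PH'. Monsoons: local name + year."""
    if storm_name:
        return f"#{storm_name}PH"      # e.g. #PabloPH, #YolandaPH
    return f"#{monsoon_name}{year}"    # e.g. #Habagat2013

assert unified_hashtag(storm_name="Yolanda") == "#YolandaPH"
assert unified_hashtag(monsoon_name="Habagat", year=2013) == "#Habagat2013"
```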

The Office of the Presidential Spokesperson (OPS) then monitored the hashtags and “made databases and lists which would be used in aid of deployed government frontline personnel, or published as public information.” For example, the OPS “created databases from reports from #rescuePH, containing the details of those in need of rescue, which we endorsed to the National Disaster Risk Reduction & Management Council, the Coast Guard, and the Department of Transportation and Communications. Needless to say, we assumed that the databases we created using these hashtags would be contaminated by invalid reports, such as spam & other inappropriate messages. We try to filter out these erroneous or malicious reports, before we make our official endorsements to the concerned agencies. In coordination with officers from the Department of Social Welfare and Development, we also monitored the hashtag #reliefPH in order to identify disaster survivors who need food and non-food supplies.”

During Typhoon Haiyan (Yolanda), “the unified hashtag #RescuePH was used to convey lists of people needing help.” This information was then sent to the National Disaster Risk Reduction & Management Council so that these names could be “included in their lists of people/communities to attend to.” This rescue hashtag was also “useful in solving surplus and deficits of goods between relief operations centers.” So the government encouraged social media users to coordinate their #ReliefPH efforts with the Department of Social Welfare and Development’s on-the-ground relief-coordination efforts. The Government also “created an infographic explaining how to use the hashtag #RescuePH.”

[Screenshot: infographic explaining how to use the hashtag #RescuePH]

Earlier, during the 2012 monsoon rains, the government “retweeted various updates on the rescue and relief operations using the hashtag #SafeNow. The hashtag is used when the user has been rescued or knows someone who has been rescued. This helps those working on rescue to check the list of pending affected persons or families, and update it.”

The government’s strategy document also includes an assessment of their use of unified hashtags during disasters. On the positive side, “These hashtags were successful at the user level in Metro Manila, where Internet penetration is high. For disasters in the regions, where internet penetration is lower, Twitter was nevertheless useful for inter-sector (media – government – NGOs) coordination and information dissemination.” Another positive was the use of a unified hashtag following the heavy monsoon rains of 2012, which had damaged national roads, inconvenienced motorists, and posed difficulty for rescue operations. “After the floods subsided, the government called on the public to identify and report potholes and cracks on the national highways of Metro Manila by tweeting pictures and details of these to the official Twitter account [...] , and by using the hashtag #lubak2normal. The information submitted was entered into a database maintained by the Department of Public Works and Highways for immediate action.”

[Screenshot: sample #lubak2normal pothole report]

The hashtag was used “1,007 times within 2 hours after it was launched. The reports were published and locations mapped out, viewable through a page hosted on the PCDSPO website. Considering the feedback, we considered the hashtag a success. We attribute this to two things: one, we used a platform that was convenient for the public to report directly to the government; and two, the hashtag appealed to humor (lubak means potholes or rubble in the vernacular). Furthermore, due to the novelty of it, the media had no qualms helping us spread the word. All the reports we gathered were immediately endorsed [...] for roadwork and repair.” This example points to the potential expanded use of social media and crowdsourcing for rapid damage assessments.

On the negative side, the use of #SafeNow resulted mostly in “tweets promoting #safenow, and very few actually indicating that they have been successfully rescued and/or are safe.” The most pressing challenge, however, was filtering. “In succeeding typhoons/instances of flooding, we began to have a filtering problem, especially when high-profile Twitter users (i.e., pop-culture celebrities) began to promote the hashtags through Twitter. The actual tweets that were calls for rescue were being drowned by retweets from fans, resulting in many nonrescue-related tweets [...].” This explains the need for Twitter monitoring platforms like AIDR, which is free and open source.
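
A first mechanical line of defense against that particular problem is to drop retweets before any human review. The sketch below is my own illustration of the idea, not the PCDSPO’s or AIDR’s actual pipeline:

```python
# Keep original calls for rescue; drop the retweet noise that drowns them.
# Illustrative only; not the PCDSPO's or AIDR's actual filtering pipeline.
def original_rescue_reports(tweets, hashtag="#rescueph"):
    reports = []
    for tweet in tweets:
        if tweet.get("retweeted_status"):
            continue  # drop native retweets
        text = tweet["text"]
        if text.lower().startswith("rt @"):
            continue  # drop manual "RT @user ..." retweets
        if hashtag in text.lower():
            reports.append(tweet)
    return reports
```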


Latest Findings on Disaster Resilience: From Burma to California via the Rockefeller Foundation

I’ve long been interested in disaster resilience, particularly when considered through the lens of self-organization. To be sure, the capacity to self-organize is an important feature of resilient societies. So what facilitates self-organization? There are several factors, of course, but the two I’m most interested in are social capital and communication technologies. My interest in disaster resilience also explains why one of our Social Innovation Tracks at QCRI is specifically focused on resilience. This is why I’m always on the lookout for new research on resilience. The purpose of this blog post is to summarize the latest insights.

[Image: report on social capital and disaster resilience in Myanmar]

This new report (PDF) on Burma assesses the influence of social capital on disaster resilience. More specifically, the report focuses on the influence of bonding, bridging and linking social capital on disaster resilience in remote rural communities in the Ayerwaddy Region of Myanmar. Bonding capital refers to ties shared between individuals with common characteristics such as religion or ethnicity. Bridging capital relates to ties that connect individuals with those outside their immediate communities; these ties could be the result of shared geographical space, for example. Linking capital refers to vertical links between a community and individuals or groups outside said community: the relationship between a village and the government, or between a donor and recipients, for example.

As the report notes, “a balance of bonding, bridging and linking capitals is important for social and economic stability as well as resilience. It will also play a large role in a community’s ability to reduce their risk of disaster and cope with external shocks as they play a role in resource management, sustainable livelihoods and coping strategies.” In fact, “social capital can be a substitute for a lack of government intervention in disaster planning, early warning and recovery.” The study also notes that “rural communities tend to have stronger social capital due to their geographical distance from government and decision-making structures necessitating them being more self-sufficient.”

Results of the study reveal that villages in the region are “mutually supportive, have strong bonding capital and reasonably strong bridging capital […].” This mutual support “plays a part in reducing vulnerability to disasters in these communities.” Indeed, “the strong bonding capital found in the villages not only mobilizes communities to assist each other in recovering from disasters and building community coping mechanisms, but is also vital for disaster risk reduction and knowledge and information sharing. However, the linking capital of villages is “limited and this is an issue when it comes to coping with larger scale problems such as disasters.”

[Image: neighborhood disaster preparedness in San Francisco]

Meanwhile, in San Francisco, a low-income neighborhood is building a culture of disaster preparedness founded on social capital. “No one had to die [during Hurricane Katrina]. No one had to even lose their home. It was all a cascading series of really bad decisions, bad planning, and corrupted social capital,” says Homsey, San Francisco’s director of neighborhood resiliency, who spearheads the city’s Neighborhood Empowerment Network (NEN). The Network takes a different approach to disaster preparedness—it is reflective, not prescriptive. The group also works to “strengthen the relationships between governments and the community, nonprofits and other agencies [linking capital]. They make sure those relationships are full of trust and reciprocity between those that want to help and those that need help.” In short, they act as a local Match.com for disaster preparedness and response.

Providence Baptist Church of San Francisco is unusual because, unlike most other American churches, this one has a line item for disaster preparedness. Hodge, who administers the church, takes issue with the government’s disaster plan for San Francisco. “That plan is to evacuate the city. Our plan is to stay in the city. We aren’t going anywhere. We know that if we work together before a major catastrophe, we will be able to work together during a major catastrophe.” This explains why he’s teaming up with the Neighborhood Empowerment Network (NEN), which will “activate immediately after an event. It will be entirely staffed and managed by the community, for the community. It will be a hyper-local, problem-solving platform where people can come with immediate issues they need collective support for,” such as “evacuations, medical care or water delivery.”

[Image: preparedness kits and door-hangers for vulnerable residents]

Their early work has focused on “making plans to protect the neighborhood’s most vulnerable residents: its seniors and the disabled.” Many of these residents have thus received “kits that include a sealable plastic bag to stock with prescription medication, cash, phone numbers for family and friends.” They also have door-hangers to help speed up search-and-rescue efforts (see pictures above).

Lastly, colleagues at the Rockefeller Foundation have just released their long-awaited City Resilience Framework after several months of extensive fieldwork, research and workshops in six cities: Cali, Colombia; Concepción, Chile; New Orleans, USA; Cape Town, South Africa; Surat, India; and Semarang, Indonesia. “The primary purpose of the fieldwork was to understand what contributes to resilience in cities, and how resilience is understood from the perspective of different city stakeholder groups in different contexts.” The results are depicted in the graphic below, which shows the 12 categories identified by Rockefeller and team (in yellow).

City Resilience Framework

These 12 categories are important because “one must be able to relate resilience to other properties that one has some means of ascertaining, through observation.” The four categories that I’m most interested in observing are:

Collective identity and mutual support: this is observed as active community engagement, strong social networks and social integration. Sub-indicators include community and civic participation, social relationships and networks, local identity and culture and integrated communities.

Empowered stakeholders: this is underpinned by education for all, and relies on access to up-to-date information and knowledge to enable people and organizations to take appropriate action. Sub-indicators include risk monitoring & alerts and communication between government & citizens.

Reliable communications and mobility: this is enabled by diverse and affordable multi-modal transport systems and information and communication technology (ICT) networks, and contingency planning. Sub-indicators include emergency communication services.

Effective leadership and management: this relates to government, business and civil society and is recognizable in trusted individuals, multi-stakeholder consultation, and evidence-based decision-making. Sub-indicators include emergency capacity and coordination.

How am I interested in observing these drivers of resilience? Via social media. Why? Because that source of information is 1) available in real-time; 2) enables two-way communication; and 3) remains largely unexplored vis-a-vis disaster resilience. Whether social media can serve as a reliable proxy for measuring city resilience is still very much an open research question.

As noted above, one of our Social Innovation research tracks at QCRI is on resilience. So we’re currently reviewing the list of 32 cities that the Rockefeller Foundation’s 100 Resilient Cities project is partnering with to identify which have a relatively large social media footprint. We’ll then select three cities and begin to explore whether collective identity and mutual support can be captured via the social media activity in each city. In other words, we’ll be applying data science & advanced computing—specifically computational social science—to explore whether digital data can shed light on city resilience. Ultimately, we hope our research will support the Rockefeller Foundation’s next phase in their 100 Resilient Cities project: the development of a Resilient City Index.
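
As a toy illustration of what such a proxy might look like, one could start with something as blunt as the share of a city’s tweets that are interactions rather than broadcasts. To be clear, this metric is my own assumption for illustration, not a validated measure of resilience:

```python
# Toy proxy for "collective identity and mutual support": the fraction of a
# city's tweets that are replies or mentions rather than broadcasts.
# An illustrative assumption, not a validated resilience metric.
def interaction_ratio(city_tweets):
    interactions = sum(
        1 for t in city_tweets
        if t.get("in_reply_to_status_id") or "@" in t.get("text", "")
    )
    return interactions / max(len(city_tweets), 1)
```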


See also:

  • How to Create Resilience Through Big Data [link]
  • Seven Principles for Big Data & Resilience Projects [link]
  • On Technology and Building Resilient Societies [link]
  • Using Social Media to Predict Disaster Resilience [link]
  • Social Media = Social Capital = Disaster Resilience? [link]
  • Does Social Capital Drive Disaster Resilience? [link]
  • Failing Gracefully in Complex Systems: A Note on Resilience [link]
  • Big Data, Lord of the Rings and Disaster Resilience [link]

Got TweetCred? Use it To Automatically Identify Credible Tweets (Updated)

Update: Users have created an astounding one million+ tags over the past few weeks, which will help increase the accuracy of TweetCred in coming months as we use these tags to further train our machine learning classifiers. We will be releasing our Firefox plugin in the next few days. In the meantime, we have just released our paper on TweetCred which describes our methodology & classifiers in more detail.

What if there were a way to automatically identify credible tweets during major events like disasters? Sounds rather far-fetched, right? Think again.

The new field of Digital Information Forensics increasingly uses Big Data analytics and techniques from artificial intelligence, such as machine learning, to automatically verify social media. My QCRI colleague ChaTo and his co-authors, for example, have used machine learning to distinguish credible from non-credible tweets generated after the Chile Earthquake (with an accuracy of 86%). Meanwhile, my colleagues Aditi et al. at IIIT Delhi used machine learning to automatically rank the credibility of some 35 million tweets generated during a dozen major international events, such as the UK Riots and the Libya Crisis. So we teamed up with Aditi et al. to turn those academic findings into TweetCred, a free app that identifies credible tweets automatically.
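
For a feel of what sits under the hood of such systems, here is a minimal supervised-learning sketch using scikit-learn. The two training tweets are toy data, and raw words are a far cruder feature set than the published classifiers use:

```python
# Minimal sketch of supervised tweet-credibility classification.
# Toy data; real systems like TweetCred use much richer features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "Official: magnitude 8.2 quake off Iquique, tsunami warning issued",
    "OMG aliens caused the earthquake!!! share before they delete this",
]
labels = [1, 0]  # 1 = credible, 0 = non-credible (hand-labeled examples)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tweets, labels)
print(model.predict(["tsunami warning issued for coastal areas"]))
```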

CNN TweetCred

We’ve just launched the very first version of TweetCred—key word being first. This means that our new app is still experimental. On the plus side, since TweetCred is powered by machine learning, it will become increasingly accurate over time as more users make use of the app and “teach” it the difference between credible and non-credible tweets. Teaching TweetCred is as simple as a click of the mouse. Take the tweet below, for example.

ARC TweetCred Teach

TweetCred scores each tweet on a 7-point scale: the higher the number of blue dots, the more credible the content of the tweet is likely to be. Note that a TweetCred score also takes into account any pictures or videos included in a tweet, along with the reputation and popularity of the Twitter user. Naturally, TweetCred won’t always get it right, which is where the teaching and machine learning come in. The above tweet from the American Red Cross is more credible than three dots would suggest. So you simply hover your mouse over the blue dots and click the “thumbs down” icon to tell TweetCred it got that tweet wrong. The app will then ask you to tag the correct level of credibility for that tweet.

ARC TweetCred Teach 3
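
As an aside, the TweetCred paper linked in the update above describes the actual feature set; the sketch below merely conveys the flavor of the per-tweet signals such a scorer can draw on, with feature names that are my own shorthand:

```python
# Flavor of per-tweet features a credibility scorer can use. Feature names
# are my shorthand, not TweetCred's exact feature definitions.
def credibility_features(tweet):
    user = tweet["user"]
    return {
        "has_media": bool(tweet.get("entities", {}).get("media")),  # pics/videos
        "has_url": "http" in tweet["text"],
        "followers": user["followers_count"],      # reputation/popularity proxy
        "verified": user.get("verified", False),
        "exclamations": tweet["text"].count("!"),  # sensationalism proxy
        "length": len(tweet["text"]),
    }
```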

That’s all there is to it. As noted above, this is just the first version of TweetCred. The more all of us use (and teach) the app, the more accurate it will be. So please try it out and spread the word. You can download the Chrome Extension for TweetCred here. If you don’t use Chrome, you can still use the browser version here although the latter has less functionality. We very much welcome any feedback you may have, so simply post feedback in the comments section below. Keep in mind that TweetCred is specifically designed to rate the credibility of disaster/crisis related tweets rather than any random topic on Twitter.

As I note in my book Digital Humanitarians (forthcoming), empirical studies have shown that we’re less likely to spread rumors on Twitter if false tweets are publicly identified by Twitter users as being non-credible. In fact, these studies show that such public exposure increases the number of Twitter users who then seek to stop the spread of said rumor-related tweets by 150%. But it makes a big difference whether one sees the rumors first or the tweets dismissing said rumors first. So my hope is that TweetCred will help accelerate Twitter’s self-correcting behavior by automatically identifying credible tweets while countering rumor-related tweets in real-time.

This project is a joint collaboration between IIIT and QCRI. Big thanks to Aditi and team for their heavy lifting on the coding of TweetCred. If the experiments go well, my QCRI colleagues and I may integrate TweetCred within our AIDR (Artificial Intelligence for Disaster Response) and Verily platforms.


See also:

  • New Insights on How to Verify Social Media [link]
  • Predicting the Credibility of Disaster Tweets Automatically [link]
  • Auto-Ranking Credibility of Tweets During Major Events [link]
  • Auto-Identifying Fake Images on Twitter During Disasters [link]
  • Truth in the Age of Social Media: A Big Data Challenge [link]
  • Analyzing Fake Content on Twitter During Boston Bombings [link]
  • How to Verify Crowdsourced Information from Social Media [link]
  • Crowdsourcing Critical Thinking to Verify Social Media [link]
  • Tweets, Crises and Behavioral Psychology: On Credibility and Information Sharing [link]

Using AIDR to Collect and Analyze Tweets from Chile Earthquake

Wish you had a better way to make sense of Twitter during disasters than this?

Type in a keyword like #ChileEarthquake in Twitter’s search box above and you’ll see more tweets than you can possibly read in a day, let alone keep up with for more than a few minutes. Wish there were an easy, free and open-source solution? Well, you’ve come to the right place. My team and I at QCRI are developing the Artificial Intelligence for Disaster Response (AIDR) platform to do just this. Here’s how it works:

First you login to the AIDR platform using your own Twitter handle (click images below to enlarge):

AIDR login

You’ll then see your collections of tweets (if you already have any). In my case, you’ll see I have three. The first is a collection of English-language tweets related to the Chile Earthquake. The second is a collection of Spanish tweets. The third is a collection of more than 3,000,000 tweets related to the missing Malaysia Airlines plane. A preliminary analysis of these tweets is available here.

AIDR collections

Let’s look more closely at my Chile Earthquake 2014 collection (see below, click to enlarge). I’ve collected about a quarter of a million tweets in the past 30 hours or so. The label “Downloaded tweets (since last re-start)” simply refers to the number of tweets I’ve collected since adding a new keyword or hashtag to my collection. I started the collection yesterday at 5:39am my time (yes, I’m an early bird). Under “Keywords” you’ll see all the hashtags and keywords I’ve used to search for tweets related to the earthquake in Chile. I’ve also specified the geographic region I want to collect tweets from. Don’t worry, you don’t actually have to enter geographic coordinates when you set up your own collection; you simply highlight (on a map) the area you’re interested in and AIDR does the rest.

AIDR - Chile Earthquake 2014

You’ll also note in the above screenshot that I’ve selected to only collect tweets in English, but you can collect all language tweets if you’d like or just a select few. Finally, the Collaborators section simply lists the colleagues I’ve added to my collection. This gives them the ability to add new keywords/hashtags and to download the tweets collected as shown below (click to enlarge). More specifically, collaborators can download the most recent 100,000 tweets (and also share the link with others). The 100K tweet limit is based on Twitter’s Terms of Service (ToS). If collaborators want all the tweets, Twitter’s ToS allows for sharing the TweetIDs for an unlimited number of tweets.

AIDR download CSV
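
For readers who want a feel for what the Collector does behind the scenes, here is a bare-bones approximation using the tweepy library against Twitter’s v1.1 streaming API (credentials are placeholders; note that the streaming API treats keyword and location filters as OR, not AND, so some post-filtering is still needed):

```python
# Bare-bones approximation of an AIDR-style tweet collector, using tweepy
# (pre-4.0 API) against Twitter's v1.1 streaming endpoint.
# Placeholder credentials; not AIDR's actual implementation.
import tweepy

class CollectorListener(tweepy.StreamListener):
    def on_status(self, status):
        print(status.id, status.text)  # a real collector would persist this

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")

stream = tweepy.Stream(auth, CollectorListener())
stream.filter(
    track=["#ChileEarthquake", "terremoto", "temblor"],
    locations=[-76.0, -56.0, -66.0, -17.0],  # rough bounding box around Chile
    languages=["en"],                        # mirror the English-only setting
)
```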

So that’s the AIDR Collector. We also have the AIDR Classifier, which helps you make sense of the tweets you’re collecting (in real-time). That is, your collection of tweets doesn’t stop; it continues growing, and as it does, you can make sense of new tweets as they come in. With the Classifier, you simply teach AIDR to classify tweets into whatever topics you’re interested in, like “Infrastructure Damage”, for example. To get started with the AIDR Classifier, simply return to the “Details” tab of our Chile collection. You’ll note the “Go To Classifier” button on the far right:

AIDR go to Classifier

Clicking on that button allows you to create a Classifier, say on the topic of disaster damage in general. So you simply create a name for your Classifier, in this case “Disaster Damage” and then create Tags to capture more details with respect to damage-related tweets. For example, one Tag might be, say, “Damage to Transportation Infrastructure.” Another could be “Building Damage.” In any event, once you’ve created your Classifier and corresponding tags, you click Submit and find your way to this page (click to enlarge):

AIDR Classifier Link

You’ll notice the public link for volunteers. That’s basically the interface you’ll use to teach AIDR. If you want to teach AIDR by yourself, you can certainly do so. You also have the option of “crowdsourcing the teaching” of AIDR. Clicking on the link will take you to the page below.

AIDR to MicroMappers

So, I called my Classifier “Message Contents”, which is not particularly insightful; I should have labeled it something like “Humanitarian Information Needs”, but bear with me and let’s click on that Classifier. This will take you to the following Clicker on MicroMappers:

MicroMappers Clicker

Now this is not the most awe-inspiring interface you’ve ever seen (at least I hope not), the reason being that this is simply our very first version. We’ll be providing different “skins”, like the official MicroMappers skin (below), as well as a skin that allows you to upload your own logo, for example. In the meantime, note that AIDR shows every tweet to at least three different volunteers, and only if all three agree on how to classify a given tweet does AIDR take it into consideration when learning. In other words, AIDR wants to ensure that humans are really sure about how to classify a tweet before it decides to learn from that lesson. Incidentally, the MicroMappers smartphone app for iPhone and Android will be available in the next few weeks. But I digress.

Yolanda TweetClicker4
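
The agreement rule just described is easy to state in code. This is my own minimal rendering of it, not AIDR’s actual implementation:

```python
# A tweet only becomes a training example once three volunteers agree.
# Minimal rendering of the rule; not AIDR's actual implementation.
def unanimous_label(volunteer_labels):
    """volunteer_labels: tags assigned by 3+ volunteers to one tweet."""
    if len(volunteer_labels) < 3:
        return None  # not enough judgments yet
    if len(set(volunteer_labels)) == 1:
        return volunteer_labels[0]  # unanimous: safe to learn from
    return None  # disagreement: don't train on noisy labels
```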

As you and/or your volunteers classify tweets based on the Tags you created, AIDR starts to learn—hence the AI (Artificial Intelligence) in AIDR. AIDR begins to recognize that all the tweets you classified as “Infrastructure Damage” are indeed similar. Once you’ve tagged enough tweets, AIDR will decide that it’s time to leave the nest and fly on its own. In other words, it will start to auto-classify incoming tweets in real-time. (At present, AIDR can auto-classify some 30,000 tweets per minute; compare this to the peak rate of 16,000 tweets per minute observed during Hurricane Sandy.)

Of course, AIDR’s first solo “flights” won’t always go smoothly. But not to worry, AIDR will let you know when it needs a little help. Every tweet that AIDR auto-tags comes with a confidence level. That is, AIDR will let you know: “I am 80% sure that I correctly classified this tweet.” If AIDR has trouble with a tweet, i.e., if its confidence level is 65% or below, then it will send the tweet to you (and/or your volunteers) so it can learn from how you classify that particular tweet. In other words, the more tweets you classify, the more AIDR learns, and the higher AIDR’s confidence levels get. Fun, huh?
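
That confidence-threshold loop looks roughly like this (the threshold matches the 65% mentioned above, but the function and queue names are mine, not AIDR’s internals):

```python
# Route low-confidence machine classifications back to human volunteers.
# Threshold matches the 65% above; names are illustrative, not AIDR's.
CONFIDENCE_THRESHOLD = 0.65

def route(tweet, classifier, human_queue, auto_tagged):
    # `predict_with_confidence` is a hypothetical classifier method that
    # returns a (label, confidence) pair for one tweet.
    label, confidence = classifier.predict_with_confidence(tweet)
    if confidence <= CONFIDENCE_THRESHOLD:
        human_queue.append(tweet)  # humans label it; AIDR retrains on the result
    else:
        auto_tagged.append((tweet, label, confidence))
```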

To view the results of the machine tagging, simply click on the View/Download tab, as shown below (click to enlarge). The page shows you the latest tweets that have been auto-tagged along with the Tag label and the confidence score. (Yes, this too is the first version of that interface; we’ll make it more user-friendly in the future, not to worry.) In any event, you can download the auto-tagged tweets in a CSV file and also share the download link with your colleagues for analysis and so on. At some point in the future, we hope to provide a simple data visualization output page so that you can easily see interesting data trends.

AIDR Results

So that’s basically all there is to it. If you want to learn more about how it all works, you might fancy reading this research paper (PDF). In the meantime, I’ll simply add that you can re-use your Classifiers. If (when?) another earthquake strikes Chile, you won’t have to start from scratch. You can auto-tag incoming tweets immediately with the Classifier you already have. Plus, you’ll be able to share your classifiers with your colleagues and partner organizations if you like. In other words, we’re envisaging an “App Store” of Classifiers based on different hazards and different countries. The more we re-use our Classifiers, the more accurate they will become. Everybody wins.

And voila, that is AIDR (at least our first version). If you’d like to test the platform and/or want the tweets from the Chile Earthquake, simply get in touch!


Note:

  • We’re adapting AIDR so that it can also classify text messages (SMS).
  • AIDR Classifiers are language specific. So if you speak Spanish, you can create a classifier to tag all Spanish language tweets/SMS that refer to disaster damage, for example. In other words, AIDR does not only speak English : )

Analyzing Tweets on Malaysia Flight #MH370

My QCRI colleague Dr. Imran is using our AIDR platform (Artificial Intelligence for Disaster Response) to collect & analyze tweets related to Malaysia Airlines Flight 370, which went missing several days ago. He has collected well over 850,000 English-language tweets since March 11th, using the following keywords/hashtags: Malaysia Airlines flight, #MH370, #PrayForMH370 and #MalaysiaAirlines.

MH370 Prayers

Imran then used AIDR to create a number of “machine learning classifiers” to automatically classify all incoming tweets into categories that he is interested in:

  • Informative: tweets that relay breaking news, useful info, etc

  • Praying: tweets that are related to prayers and faith

  • Personal: tweets that express personal opinions

The process is super simple. All he does is tag several dozen incoming tweets into their respective categories. This teaches AIDR what an “Informative” tweet should “look like”. Since our novel approach combines human intelligence with artificial intelligence, AIDR is typically far more accurate at capturing relevant tweets than Twitter’s keyword search.

And the more tweets that Imran tags, the more accurate AIDR gets. At present, AIDR can auto-classify ~500 tweets per second, or 30,000 tweets per minute. This is well above the highest velocity of crisis tweets recorded thus far—16,000 tweets/minute during Hurricane Sandy.

The graph below depicts the number of tweets generated per day since we began the AIDR collection on March 11th.

Volume of Tweets per Day

This series of pie charts simply reflects the relative share of tweets per category over the past four days.

Tweets Trends

Below are some of the tweets that AIDR has automatically classified as being Informative (click to enlarge). The “Confidence” score simply reflects how confident AIDR is that it has correctly auto-classified a tweet. Note that Imran could also have crowdsourced the manual tagging—that is, he could have crowdsourced the process of teaching AIDR. To learn more about how AIDR works, please see this short overview and this research paper (PDF).

AIDR output

If you’re interested in testing AIDR (still very much under development) and/or would like the Tweet IDs for the 850,000+ tweets we’ve collected using AIDR, then feel free to contact me. In the meantime, we’ll start a classifier that auto-collects tweets related to hijacking, criminal causes, and so on. If you’d like us to create a classifier for a different topic, let us know—but we can’t make any promises since we’re working on an important project deadline. When we’re further along with the development of AIDR, anyone will be able to easily collect & download tweets and create & share their own classifiers for events related to humanitarian issues.


Acknowledgements: Many thanks to Imran for collecting and classifying the tweets. Imran also shared the graphs and tabular output that appear above.