Category Archives: Social Media

Artificial Intelligence for Monitoring Elections (AIME)


I published a blog post with the same title a good while back. Here’s what I wrote at the time:

Citizen-based, crowdsourced election observation initiatives are on the rise. Leading election monitoring organizations are also looking to leverage citizen-based reporting to complement their own professional election monitoring efforts. Meanwhile, the information revolution continues apace, with the number of new mobile phone subscriptions up by over 1 billion in just the past 36 months alone. The volume of election-related reports generated by “the crowd” is thus expected to grow significantly in the coming years. But international, national and local election monitoring organizations are completely unprepared to deal with the rise of Big (Election) Data.

I thus introduced a new project to “develop a free and open source platform to automatically filter relevant election reports from the crowd.” I’m pleased to report that my team and I at QCRI have just tested AIME during an actual election for the very first time—the 2015 Nigerian Elections. My QCRI Research Assistant Peter Mosur (co-author of this blog post) collaborated directly with Oludotun Babayemi from Clonehouse Nigeria and Chuks Ojidoh from the Community Life Project & Reclaim Naija to deploy and test the AIME platform.

AIME is a free and open source (experimental) solution that combines crowdsourcing with Artificial Intelligence to automatically identify tweets of interest during major elections. As organizations engaged in election monitoring well know, there can be a lot of chatter on social media as people rally behind their chosen candidates, announce this to the world, ask their friends and family who they will vote for, update others once they have voted, and post about election-related incidents they may have witnessed. This can make it rather challenging to find reports relevant to election monitoring groups.


Election monitors typically track instances of violence, election rigging, and voter issues, since these incidents reveal problems with an election. Election monitoring initiatives such as Reclaim Naija & Uzabe also track several other types of incidents, but for the purposes of testing the AIME platform we selected the three event types mentioned above. In order to automatically identify tweets related to these events, one must first provide AIME with example tweets. (Of course, if there is no Twitter traffic to begin with, then there won’t be much need for AIME, which is precisely why we developed an SMS extension that can be used with AIME.)

So where does the crowdsourcing come in? Users of AIME can ask the crowd to tag tweets related to election violence, rigging and voter issues by simply tagging tweets posted to the AIME platform with the appropriate event type. (Several quality control mechanisms are built in to ensure data quality. Also, one does not need to use crowdsourcing to tag the tweets; this can be done internally as well or instead.) What AIME does next is use a technique from Artificial Intelligence (AI) called statistical machine learning to recognize patterns in the human-tagged tweets. In other words, it begins to learn which tweets belong in which category—violence, rigging or voter issues. AIME then auto-classifies new tweets related to these categories (and can auto-classify around 2 million tweets or text messages per minute).
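To make the machine-learning step concrete, here is a minimal sketch (in Python, using scikit-learn) of the general technique: human-tagged example tweets train a statistical classifier, which then auto-classifies new tweets. This is an illustration only, not AIME’s actual code, and the example tweets and labels below are invented.

```python
# Minimal sketch of crowd-trained tweet classification (not AIME's actual code).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical human-tagged tweets (the crowdsourced training data).
tweets = [
    "Ballot boxes snatched at the polling unit in Lagos",
    "Long queues here but voting is peaceful so far",
    "Gunshots reported near the collation centre",
    "Card reader not working, voters being turned away",
]
labels = ["rigging", "not_related", "violence", "voter_issues"]

# TF-IDF features + logistic regression: a standard statistical
# machine-learning baseline for text classification.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                           LogisticRegression(max_iter=1000))
classifier.fit(tweets, labels)

# Auto-classify a new, unseen tweet.
print(classifier.predict(["Thugs seen burning ballot papers at ward 5"]))
```

In practice, of course, the classifier is trained on hundreds of tagged examples per category rather than one, which is exactly what the crowdsourced tagging provides.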


Before creating our automatic classifier for the Nigerian Elections, we first needed to collect examples of tweets related to election violence, rigging and voter issues in order to teach AIME. Oludotun Babayemi and Chuks Ojidoh kindly provided the expert local knowledge needed to identify the keywords we should be following on Twitter (using AIME). They graciously gave us many different keywords to use as well as a list of trusted Twitter accounts to follow for election-related messages. (Due to difficulties with AIME, we were not able to use the trusted accounts. In addition, many of the suggested keywords were unusable since words like “aggressive”, “detonate”, and “security” would have resulted in a large number of false positives.)

Here is the full list of keywords used by AIME:

Nigeria elections, nigeriadecides, Nigeria decides, INEC, GEJ, Change Nigeria, Nigeria Transformation, President Jonathan, Goodluck Jonathan, Sai Buhari, saibuhari, All progressives congress, Osibanjo, Sambo, Peoples Democratic Party, boko haram, boko, area boys, nigeria2015, votenotfight, GEJwinsit, iwillvoteapc, gmb2015, revoda, thingsmustchange, and march4buhari

Out of this list, “NigeriaDecides” was by far the most popular keyword used during the elections, accounting for over 28,000 tweets in a batch of 100,000. During the week leading up to the elections, AIME collected roughly 800,000 tweets. Over the course of the elections and the few days following, the total number of collected tweets jumped to well over 4 million.

We sampled just a handful of these tweets and manually tagged those related to violence, rigging and other voting issues using AIME. “Violence” was described as “threats, riots, arming, attacks, rumors, lack of security, vandalism, etc.” while “Election Rigging” was described as “Ballot stuffing, issuing invalid ballot papers, voter impersonation, multiple voting, ballot boxes destroyed after counting, bribery, lack of transparency, tampered ballots etc.” Lastly, “Voting Issues” was defined as “Polling station logistics issues, technical issues, people unable to vote, media unable to enter, insufficient staff, lack of voter assistance, inadequate voting materials, underage voters, etc.”

Any tweet that did not fall into these three categories was tagged as “Other” or “Not Related”. Our election classifiers were trained with a total of 571 human-tagged tweets, which enabled AIME to automatically classify well over 1 million tweets (1,263,654 to be precise). The results in the screenshot below show how accurate AIME was at auto-classifying tweets based on the different event types defined earlier. AUC is the metric that captures the “overall accuracy” of AIME’s classifiers.

[Screenshot: AIME’s per-category classification results for the 2015 Nigerian Elections]

AIME was rather good at correctly tagging tweets related to “Voting Issues” (98% accuracy) but performed very poorly on “Election Rigging” (0%). This is not AIME’s fault : ) since it had only 8 examples to learn from. As for “Violence”, the accuracy score was 47%, which is actually surprisingly good given that AIME had only 14 human-tagged examples to learn from. Lastly, AIME did fairly well at auto-classifying unrelated tweets (86% accuracy).
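For readers curious about the AUC metric mentioned above, here is a minimal sketch of how a per-category AUC can be computed with scikit-learn. The labels and scores below are invented for illustration; they are not AIME’s actual outputs.

```python
# Minimal sketch of computing AUC for one category (e.g. "Violence").
import numpy as np
from sklearn.metrics import roc_auc_score

# y_true: 1 if a held-out tweet truly belongs to the category, else 0.
# y_score: the classifier's predicted probability for that category.
y_true = np.array([1, 0, 0, 1, 0, 1, 0, 0])
y_score = np.array([0.9, 0.2, 0.4, 0.8, 0.1, 0.7, 0.3, 0.2])

# AUC is the probability that a randomly chosen positive example is ranked
# above a randomly chosen negative one: 1.0 is perfect, 0.5 is chance.
print(roc_auc_score(y_true, y_score))  # 1.0 here: every positive outranks every negative
```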

Conclusion: this was the first time we tested AIME during an actual election, and we’ve learned a lot in the process. The results are not perfect, but they are promising enough to press on and experiment further with the AIME platform. If you’d like to test AIME yourself (and fully recognize that the tool is experimental and still under development, hence not perfect), then feel free to get in touch with me here. We have 2 slots open for testing. In the meantime, big thanks to my RA Peter for spearheading both this deployment and the subsequent research.

Artificial Intelligence Powered by Crowdsourcing: The Future of Big Data and Humanitarian Action

There’s no point spewing stunning statistics like this recent one from The Economist, which states that 80% of adults will have access to smartphones before 2020. The volume, velocity and variety of digital data will continue to skyrocket. To paraphrase Douglas Adams, “Big Data is big. You just won’t believe how vastly, hugely, mind-bogglingly big it is.”


And so, traditional humanitarian organizations have a choice when it comes to battling Big Data. They can either continue business as usual (and lose) or get with the program and adopt Big Data solutions like everyone else. The same goes for Digital Humanitarians. As noted in my new book of the same title, those Digital Humanitarians who cling to crowdsourcing alone as their pièce de résistance will inevitably become the ivy-laden battlefield monuments of 2020.


Big Data comprises a variety of data types such as text, imagery and video. Examples of text-based data include mainstream news articles, tweets and WhatsApp messages. Imagery includes Instagram photos, professional photographs that accompany news articles, satellite imagery and, increasingly, aerial imagery as well (captured by UAVs). Videos are broadcast on television channels, Meerkat and YouTube. Finding relevant, credible and actionable pieces of text, imagery and video in the Big Data generated during major disasters is like looking for a needle in a meadow (haystacks are ridiculously small datasets by comparison).

Humanitarian organizations, like many others in different sectors, often find comfort in the notion that their problems are unique. Thankfully, this is rarely true. Not only is the Big Data challenge not unique to the humanitarian space, but real solutions to the data deluge have already been developed by groups that humanitarian professionals at worst don’t know exist and at best rarely speak with. These groups are already using Artificial Intelligence (AI) and some form of human input to make sense of Big Data.


How does it work? And why is some human input still needed if AI is already in play? The human input, which can come via crowdsourcing or from a few individuals, is needed to train the AI engine, which uses a technique from AI called machine learning to learn from the human(s). Take AIDR, for example. This experimental solution, which stands for Artificial Intelligence for Disaster Response, uses AI powered by crowdsourcing to automatically identify relevant tweets and text messages in an exploding meadow of digital data. The crowd tags tweets and messages they find relevant, and the AI engine learns to recognize the relevance patterns in real time, allowing AIDR to automatically identify future tweets and messages.

As far as we know, AIDR is the only Big Data solution out there that combines crowdsourcing with real-time machine learning for disaster response. Why do we use crowdsourcing to train the AI engine? Because speed is of the essence in disasters. You need a crowd of Digital Humanitarians to quickly tag as many tweets/messages as possible so that AIDR can learn as fast as possible. Incidentally, once you’ve created an algorithm that accurately detects tweets relaying urgent needs after a Typhoon in the Philippines, you can use that same algorithm again when the next Typhoon hits (no crowd needed).
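To illustrate the real-time learning loop described above, here is a minimal sketch of online (incremental) classification in Python with scikit-learn: the model updates each time a volunteer tags a tweet. This only approximates the kind of pipeline AIDR runs; it is not AIDR’s actual implementation, and the example tweets are invented.

```python
# Minimal sketch of crowd-in-the-loop online learning (not AIDR's actual code).
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**18)  # stateless, so it works on a live stream
model = SGDClassifier()                           # supports incremental (partial) fitting
classes = ["relevant", "not_relevant"]

def on_crowd_tag(tweet_text, label):
    """Called each time a digital volunteer tags a tweet; updates the model."""
    model.partial_fit(vectorizer.transform([tweet_text]), [label], classes=classes)

def classify(tweet_text):
    """Auto-classify an incoming tweet using everything learned so far."""
    return model.predict(vectorizer.transform([tweet_text]))[0]

# Simulated crowd tags arriving over time:
on_crowd_tag("Urgent: family trapped on a roof in Tacloban, needs rescue", "relevant")
on_crowd_tag("Thoughts and prayers for everyone in the Philippines", "not_relevant")
print(classify("Need clean water and medicine in Ormoc"))
```

Since the fitted model is simply an object that can be saved and reloaded, the same classifier can be dusted off when the next typhoon hits, exactly as noted above.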

What about pictures? After all, pictures are worth a thousand words. Is it possible to combine artificial intelligence with human input to automatically identify pictures that show infrastructure damage? Thanks to recent breakthroughs in computer vision, this is indeed possible. Take Metamind, for example, a new startup I just met with in Silicon Valley. Metamind is barely 6 months old, but the team has already demonstrated that one can automatically identify a whole host of features in pictures using artificial intelligence and some initial human input. The key is the human input, since this is what trains the algorithms. The more human-generated training data you have, the better your algorithms.

My team and I at QCRI are collaborating with Metamind to create algorithms that can automatically detect infrastructure damage in pictures. The Silicon Valley start-up is convinced that we’ll be able to create highly accurate algorithms if we have enough training data. This is where MicroMappers comes in. We’re already using MicroMappers to create training data for tweets and text messages (which is what AIDR uses to create algorithms). In addition, we’re already using MicroMappers to tag and map pictures of disaster damage. The missing link—in order to turn this tagged data into algorithms—is Metamind. I’m excited about the prospects, so stay tuned for updates as we plan to start teaching Metamind’s AI engine this month.
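To make the training idea concrete, here is a minimal sketch of the general transfer-learning recipe in PyTorch: take a network pretrained on everyday images, freeze its feature extractor, and train a small new head on human-tagged damage photos. This illustrates the technique in general, not Metamind’s proprietary system; the three damage classes and all tensors below are invented stand-ins.

```python
# Minimal transfer-learning sketch for damage detection (illustrative only).
import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on everyday photographs.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor; only the new head gets trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a 3-way head: no / mild / severe damage (hypothetical classes).
model.fc = nn.Linear(model.fc.in_features, 3)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a batch of tagged photos (random stand-ins here).
images = torch.randn(8, 3, 224, 224)   # would be real photo tensors
labels = torch.randint(0, 3, (8,))     # would be crowd-sourced damage tags
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```

This is also why the human-generated training data matters so much: with the feature extractor fixed, the quality of the new head depends almost entirely on the tagged examples it sees.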


How about videos as a source of Big Data during disasters? I was just in Austin for SXSW 2015 and met up with the CEO of WireWax, a British company that uses—you guessed it—artificial intelligence and human input to automatically detect countless features in videos. Their platform has already been used to automatically find guns and Justin Bieber across millions of videos. Several other groups are also working on feature detection in videos. Colleagues at Carnegie Mellon University (CMU), for example, are working on developing algorithms that can detect evidence of gross human rights violations in YouTube videos coming from Syria. They’re currently applying their algorithms to videos of disaster footage, which we recently shared with them, to determine whether infrastructure damage can be automatically detected.

What about satellite & aerial imagery? Well, the team driving DigitalGlobe’s Tomnod platform has already been using AI powered by crowdsourcing to automatically identify features of interest in satellite (and now aerial) imagery. My team and I are working on similar solutions with MicroMappers, with the hope of creating real-time machine learning solutions for both satellite and aerial imagery. Unlike Tomnod, the MicroMappers platform is free and open source (and also filters social media, photographs, videos & mainstream news).



So there you have it. The future of humanitarian information systems will not be an App Store but an “Alg Store”, i.e., an Algorithm Store providing a growing menu of algorithms that have already been trained to automatically detect certain features in the texts, imagery and videos generated during disasters. These algorithms will also “talk to each other” and integrate other feeds (from real-time sensors and the Internet of Things) thanks to data-fusion solutions that already exist and others that are in the works.

Now, the astute reader may have noted that I omitted audio/speech in my post. I’ll be writing about this in a future post since this one is already long enough.

How to Become a Digital Sherlock Holmes and Support Relief Efforts

Humanitarian organizations need both timely and accurate information when responding to disasters. Where is the most damage located? Who needs the most help? What other threats exist? Respectable news organizations also need timely and accurate information during crisis events to responsibly inform the public. Alas, both humanitarian & mainstream news organizations are often confronted with countless rumors and unconfirmed reports. Investigative journalists and others have thus developed a number of clever strategies to rapidly verify such reports—as detailed in the excellent Verification Handbook. There’s just one glitch: Journalists and humanitarians alike are increasingly overwhelmed by the “Big Data” generated during crises, particularly information posted on social media. They rarely have enough time or enough staff to verify the majority of unconfirmed reports. This is where Verily comes in, a new type of Detective Agency for a new type of detective: The Virtual Digital Detective.


The purpose of Verily is to rapidly crowdsource the verification of unconfirmed reports during major disasters. The way it works is simple. If a humanitarian or news organization has a verification request, they simply submit this request online at Verily. This request must be phrased in the form of a Yes-or-No question, such as: “Has the Brooklyn Bridge been destroyed by the Hurricane?”; “Is this Instagram picture really showing current flooding in Indonesia?”; “Is this new YouTube video of the Chile earthquake fake?”; “Is it true that the bush fires in South Australia are getting worse?” and so on.

Verily helps humanitarian & news organizations find answers to these questions by rapidly crowdsourcing the collection of clues that can help answer said questions. Verification questions are communicated widely across the world via Verily’s own email-list of Digital Detectives and also via social media. This new breed of Digital Detectives then scours the web for clues that can help answer the verification questions. Anyone can become a Digital Detective at Verily. Indeed, Verily provides a menu of mini-verification guides for new detectives. These guides were written by some of the best Digital Detectives on the planet, the authors of the Verification Handbook. Verily Detectives post the clues they find directly to Verily and briefly explain why these clues help answer the verification question. That’s all there is to it.


If you’re familiar with Reddit, you may be thinking “Hold on, doesn’t Reddit do this already?” In part yes, but Reddit is not necessarily designed to crowdsource critical thinking or to create skilled Digital Detectives. Recall this fiasco during the Boston Marathon Bombings, which fueled disastrous “witch hunts”. Said disaster would not have happened on Verily because Verily is deliberately designed to focus on the process of careful detective work while providing new detectives with the skills they need to avoid precisely the kind of disaster that happened on Reddit. This is in no way a criticism of Reddit! No single platform can be designed to solve every problem under the sun. Deliberate, intentional design is absolutely key.

In sum, our goal at Verily is to crowdsource Sherlock Holmes. Why do we think this will work? For several reasons. First, authors of the Verification Handbook have already demonstrated that individuals working alone can, and do, verify unconfirmed reports during crises. We believe that creating a community that can work together to verify rumors will be even more powerful given the Big Data challenge. Second, each one of us with a mobile phone is a human sensor, a potential digital witness. We believe that Verily can help crowdsource the search for eyewitnesses, or rather the search for digital content that these eyewitnesses post on the Web. Third, the Red Balloon Challenge was completed in a matter of hours, and that Challenge focused on crowdsourcing the search for clues across an entire continent (3 million square miles). Disasters, in contrast, are far narrower in geographic coverage. In other words, the proverbial haystack is smaller and the needles thus easier to find. More on Verily here & here.

So there’s reason to be optimistic that Verily can succeed, given the above and recent real-world deployments. Of course, Verily is still very much in an early phase and still experimental. But both humanitarian organizations and high-profile news organizations have expressed a strong interest in field-testing this new Digital Detective Agency. To find out more about Verily and to engage with experts in verification, please join us on Tuesday, March 3rd at 10:00am (New York time) for this Google Hangout with the Verily Team and our colleague Craig Silverman, the Co-Editor of the Verification Handbook. Click here for the Event Page and here to follow on YouTube. You can also join the conversations on Twitter and pose questions or comments using the hashtag #VerilyLive.

This is How Social Media Can Inform UN Needs Assessments During Disasters

My team at QCRI just published their latest findings on our ongoing crisis computing and humanitarian technology research. They focused on UN/OCHA, the international aid agency responsible for coordinating humanitarian efforts across the UN system. “When disasters occur, OCHA must quickly make decisions based on the most complete picture of the situation they can obtain,” but “given that complete knowledge of any disaster event is not possible, they gather information from myriad available sources, including social media.” QCRI’s latest research, which also drew on multiple interviews, shows how “state-of-the-art social media processing methods can be used to produce information in a format that takes into account what large international humanitarian organizations require to meet their constantly evolving needs.”


QCRI’s new study (PDF) focuses specifically on the relief efforts in response to Typhoon Yolanda (known locally as Haiyan). “When Typhoon Yolanda struck the Philippines, the combination of widespread network access, high Twitter use, and English proficiency led to many located in the Philippines to tweet about the typhoon in English. In addition, outsiders located elsewhere tweeted about the situation, leading to millions of English-language tweets that were broadcast about the typhoon and its aftermath.”

When disasters like Yolanda occur, the UN uses the Multi Cluster/Sector Initial Rapid Assessment (MIRA) survey to assess the needs of affected populations. “The first step in the MIRA process is to produce a ‘Situation Analysis’ report,” which is produced within the first 48 hours of a disaster. Since the Situation Analysis needs to be carried out very quickly, “OCHA is open to using new sources—including social media communications—to augment the information that they and partner organizations so desperately need in the first days of the immediate post-impact period. As these organizations work to assess needs and distribute aid, social media data can potentially provide evidence in greater numbers than what individuals and small teams are able to collect on their own.”

My QCRI colleagues therefore analyzed the 2 million+ Yolanda-related tweets published between November 7 and 13, 2013, to assess whether any of these could have augmented OCHA’s situational awareness at the time. (OCHA interviewees stated that this “six-day period would be of most interest to them”.) QCRI subsequently divided the tweets into two periods:

[Screenshot: the two time periods used in the analysis]

Next, colleagues geo-located the tweets by administrative region and compared the frequency of tweets in each region with the number of people who were later found to have been affected in the respective region. The result of this analysis is displayed below.

[Chart: tweet frequency per region versus affected population per region]

While the “activity on Twitter was in general more significant in regions heavily affected by the typhoon, the correlation is not perfect.” This should not come as a surprise. This analysis is nevertheless a “worthwhile exercise, as it can prove useful in some circumstances.” In addition, knowing exactly what kinds of biases exist on Twitter, and which are “likely to continue is critical for OCHA to take into account as they work to incorporate social media data into future response efforts.”
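For illustration, here is a minimal sketch in Python of the kind of region-level comparison described above: geo-located tweet counts versus the number of people later found to be affected. All the regional figures below are invented; the study’s actual data is in the paper (PDF).

```python
# Minimal sketch of the tweets-vs-affected-population comparison (figures invented).
import pandas as pd
from scipy.stats import pearsonr

df = pd.DataFrame({
    "region":   ["Eastern Visayas", "Central Visayas", "Western Visayas", "Bicol"],
    "tweets":   [52_000, 31_000, 18_000, 7_000],             # geo-located tweet counts
    "affected": [4_100_000, 1_900_000, 1_500_000, 300_000],  # people later found affected
})

# A positive but imperfect correlation, in line with the study's finding.
r, p = pearsonr(df["tweets"], df["affected"])
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```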

QCRI researchers also analyzed the 2 million+ tweets to determine which contained useful information. An informative tweet is defined as one containing “information that helps you understand the situation.” They found that 42%–48% of the 2 million tweets fit this category, which is particularly high. Next, they classified these roughly one million informative tweets using the Humanitarian Cluster System. The Up/Down arrows in the table below indicate a 50%+ increase/decrease of tweets in that category during period 2.

[Table: informative tweets per humanitarian cluster, periods 1 and 2]

“In the first time period (roughly the first 48 hours), we observe concerns focused on early recovery and education and child welfare. In the second time period, these concerns extend to topics related to shelter, food, nutrition, and water, sanitation and hygiene (WASH). At the same time, there are proportionally fewer tweets regarding telecommunications, and safety and security issues.” The table above shows a “significant increase of useful messages for many clusters between period 1 and period 2. It is also clear that the number of potentially useful tweets in each cluster is likely on the order of a few thousand, which are swimming in the midst of millions of tweets. This point is illustrated by the majority of tweets falling into the ‘None of the above’ category, which is expected and has been shown in previous research.”

My colleagues also examined how “information relevant to each cluster can be further categorized into useful themes.” They used topic modeling to “quickly group thousands of tweets [and] understand the information they contain. In the future, this method can help OCHA staff gain a high-level picture of what type of information to expect from Twitter, and to decide which clusters or topics merit further examination and/or inclusion in the Situation Analysis.” The results of this topic modeling are displayed in the table below.

[Table: topic-modeling themes per humanitarian cluster]
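As an illustration of what topic modeling does here, below is a minimal sketch using Latent Dirichlet Allocation (LDA), one common topic-modeling method, in scikit-learn. The tweets are invented; the study’s own method and themes are in the paper.

```python
# Minimal topic-modeling sketch with LDA (illustrative tweets, not the study's data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "thousands need shelter after the typhoon destroyed homes in tacloban",
    "families in evacuation centers need tents and blankets",
    "clean drinking water urgently needed in ormoc",
    "water purification tablets being distributed by volunteers",
    "schools destroyed and children cannot return to classes",
    "temporary learning spaces set up for displaced children",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(counts)

# Print the top words per topic so an analyst can label the themes.
words = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [words[j] for j in topic.argsort()[-4:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```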

When UN/OCHA interviewees were presented with these results, their “feedback was positive and favorable.” One OCHA interviewee noted that this information “could potentially give us an indicator as to what people are talking most about— and, by proxy, apply that to the most urgent needs.” Another interviewee stated that “There are two places in the early hours that I would want this: 1) To add to our internal “one-pager” that will be released in 24-36 hours of an emergency, and 2) the Situation Analysis: [it] would be used as a proxy for need.” Another UN staffer remarked that “Generally yes this [information] is very useful, particularly for building situational awareness in the first 48 hours.” While some of the analysis may at times be too general, an OCHA interviewee “went on to say the table [above] gives a general picture of severity, which is an advantage during those first hours of response.”

As my QCRI team rightly notes, “This validation from UN staff supports our continued work on collecting, labeling, organizing, and presenting Twitter data to aid humanitarian agencies with a focus on their specific needs as they perform quick response procedures.” We are thus on the right track with both our AIDR and MicroMappers platforms. Our task moving forward is to use these platforms to produce the analysis discussed above, and to do so in near real-time. We also need to (radically) diversify our data sources and thus include information from text messages (SMS), mainstream media, Facebook, satellite imagery and aerial imagery (as noted here).

But as I’ve noted before, we also need enlightened policy making to make the most of these next generation humanitarian technologies. This OCHA proposal on establishing specific social media standards for disaster response, and the official social media strategy implemented by the government of the Philippines during disasters, serve as excellent examples in this respect.


Lots more on humanitarian technology, innovation, computing as well as policy making in my new book Digital Humanitarians: How Big Data is Changing the Face of Humanitarian Action.

Could This Be The Most Comprehensive Study of Crisis Tweets Yet?

I’ve been looking forward to blogging about my team’s latest research on crisis computing for months; the delay being due to the laborious process of academic publishing, but I digress. I’m now able to make their findings public. The goal of their latest research was to “understand what affected populations, response agencies and other stakeholders can expect—and not expect—from [crisis tweets] in various types of disaster situations.”


As my colleagues rightly note, “Anecdotal evidence suggests that different types of crises elicit different reactions from Twitter users, but we have yet to see whether this is in fact the case.” So they meticulously studied 26 crisis-related events between 2012 and 2013 that generated significant activity on Twitter. The lead researcher on this project, my colleague & friend Alexandra Olteanu from EPFL, also appears in my new book.

Alexandra and team first classified crisis-related tweets based on the following categories (each selected based on previous research & peer-reviewed studies):

[Table: the six information categories used to classify the tweets]

Written in long form: Caution & Advice; Affected Individuals; Infrastructure & Utilities; Donations & Volunteering; Sympathy & Emotional Support; and Other Useful Information. Below are the results of this analysis, sorted by descending proportion of Caution & Advice related tweets.

[Chart: distribution of information categories across the 26 crises]

The category with the largest number of tweets is “Other Useful Info.” On average, 32% of tweets fall into this category (minimum 7%, maximum 59%). Interestingly, most crisis events that are spread over a relatively large geographical area (i.e., diffuse events) tend to be associated with the lowest proportion of “Other” tweets. As my QCRI colleagues rightly note, “it is potentially useful to know that this type of tweet is not prevalent in the diffused events we studied.”

Tweets relating to Sympathy and Emotional Support are present in each of the 26 crises. On average, these account for 20% of all tweets. “The 4 crises in which the messages in this category were more prevalent (above 40%) were all instantaneous disasters.” This finding may imply that “people are more likely to offer sympathy when events […] take people by surprise.”

On average, 20% of tweets in the 26 crises relate to Affected Individuals. “The 5 crises with the largest proportion of this type of information (28%–57%) were human-induced, focalized, and instantaneous. These 5 events can also be viewed as particularly emotionally shocking.”

Tweets related to Donations & Volunteering accounted for 10% of tweets on average. “The number of tweets describing needs or offers of goods and services in each event varies greatly; some events have no mention of them, while for others, this is one of the largest information categories.”

Caution and Advice tweets constituted on average 10% of all tweets in a given crisis. The results show a “clear separation between human-induced hazards and natural: all human induced events have less caution and advice tweets (0%–3%) than all the events due to natural hazards (4%–31%).”

Finally, tweets related to Infrastructure and Utilities represented on average 7% of all tweets posted in a given crisis. The disasters with the highest number of such tweets tended to be flood situations.

In addition to the above analysis, Alexandra et al. also categorized tweets by their source:

[Table: the source categories used to classify the tweets]

The results depicted below are sorted by descending order of eyewitness tweets.

[Chart: distribution of tweet sources across the 26 crises]

On average, about 9% of tweets generated during a given crisis were written by Eyewitnesses; a figure that rose to 54% for the haze crisis in Singapore. “In general, we find a larger proportion of eyewitness accounts during diffused disasters caused by natural hazards.”

Traditional and/or Internet Media were responsible for 42% of tweets on average. “The 6 crises with the highest fraction of tweets coming from a media source (54%–76%) are instantaneous, which make ‘breaking news’ in the media.”

On average, Outsiders posted 38% of the tweets in a given crisis while NGOs were responsible for about 4% of tweets and Governments 5%. My colleagues surmise that these low figures are due to the fact that both NGOs and governments seek to verify information before they release it. The highest levels of NGO and government tweets occur in response to natural disasters.

Finally, Businesses account for 2% of tweets on average. The Alberta floods of 2013 saw the highest proportion (9%) of tweets posted by businesses.

All the above findings are combined and displayed below. The figure depicts the “average distribution of tweets across crises into combinations of information types (rows) and sources (columns). Rows and columns are sorted by total frequency, starting on the bottom-left corner. The cells in this figure add up to 100%.”

[Figure: average distribution of tweets by information type (rows) and source (columns)]

The above analysis suggests that “when the geographical spread [of a crisis] is diffused, the proportion of Caution and Advice tweets is above the median, and when it is focalized, the proportion of Caution and Advice tweets is below the median. For sources, […] human-induced accidental events tend to have a number of eyewitness tweets below the median, in comparison with intentional and natural hazards.” Additional analysis carried out by my colleagues indicates that “human-induced crises are more similar to each other in terms of the types of information disseminated through Twitter than to natural hazards.” In addition, crisis events that develop instantaneously also look similar to one another when studied through the lens of tweets.

In conclusion, the analysis above demonstrates that “in some cases the most common tweet in one crisis (e.g. eyewitness accounts in the Singapore haze crisis in 2013) was absent in another (e.g. eyewitness accounts in the Savar building collapse in 2013). Furthermore, even two events of the same type in the same country (e.g. Typhoon Yolanda in 2013 and Typhoon Pablo in 2012, both in the Philippines), may look quite different vis-à-vis the information on which people tend to focus.” This suggests the uniqueness of each event.

“Yet, when we look at the Twitter data at a meta-level, our analysis reveals commonalities among the types of information people tend to be concerned with, given the particular dimensions of the situations such as hazard category (e.g. natural, human-induced, geophysical, accidental), hazard type (e.g. earthquake, explosion), whether it is instantaneous or progressive, and whether it is focalized or diffused. For instance, caution and advice tweets from government sources are more common in progressive disasters than in instantaneous ones. The similarities do not end there. When grouping crises automatically based on similarities in the distributions of different classes of tweets, we also realize that despite the variability, human-induced crises tend to be more similar to each other than to natural hazards.”
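To illustrate the kind of automatic grouping the quote describes, here is a minimal sketch: each crisis is represented by its distribution of tweets over information categories, and crises are then clustered by how similar those distributions are. The crises and all proportions below are invented, and this is not the authors’ actual method.

```python
# Minimal sketch: cluster crises by the similarity of their tweet-category
# distributions (all numbers invented for illustration).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

crises = ["Typhoon A", "Typhoon B", "Building collapse", "Explosion"]

# Rows: crises. Columns: proportion of tweets per category
# (caution & advice, affected individuals, donations, sympathy, other).
distributions = np.array([
    [0.25, 0.15, 0.10, 0.20, 0.30],  # natural, progressive
    [0.22, 0.18, 0.12, 0.18, 0.30],  # natural, progressive
    [0.02, 0.45, 0.05, 0.40, 0.08],  # human-induced, instantaneous
    [0.03, 0.40, 0.07, 0.42, 0.08],  # human-induced, instantaneous
])

# Agglomerative (hierarchical) clustering on the distributions.
Z = linkage(distributions, method="average", metric="cosine")
groups = fcluster(Z, t=2, criterion="maxclust")
print(dict(zip(crises, groups)))  # the human-induced crises group together
```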

Needless to say, these are exactly the kind of findings that can improve the way we use MicroMappers & other humanitarian technologies for disaster response. So if you want to learn more, the full study is available here (PDF). In addition, all the Twitter datasets used for the analysis are available at CrisisLex. If you have questions about the research, simply post them in the comments section below and I’ll ask my colleagues to reply there.


In the meantime, there is a lot more on humanitarian technology and computing in my new book Digital Humanitarians. As I note in said book, we also need enlightened policy making to tap the full potential of social media for disaster response. Technology alone can only take us so far. If we don’t actually create demand for relevant tweets in the first place, then why should social media users supply a high volume of relevant and actionable tweets to support relief efforts? This OCHA proposal on establishing specific social media standards for disaster response, and this official social media strategy developed and implemented by the Filipino government are examples of what enlightened leadership looks like.

Video: Digital Humanitarians & Next Generation Humanitarian Technology

How do international humanitarian organizations make sense of the “Big Data” generated during major disasters? They turn to Digital Humanitarians who craft and leverage ingenious crowdsourcing solutions with trail-blazing insights from artificial intelligence to make sense of vast volumes of social media, satellite imagery and even UAV/aerial imagery. They also use these “Big Data” solutions to verify user-generated content and counter rumors during disasters. The talk below explains how Digital Humanitarians do this and how their next generation humanitarian technologies work.

Many thanks to TTI/Vanguard for having invited me to speak. Lots more on Digital Humanitarians in my new book of the same title.


Videos of my TEDx talks and the talks I’ve given at the White House, PopTech, Where 2.0, National Geographic, etc., are all available here.

Reflections on Digital Humanitarians – The Book

In January 2014, I wrote this blog post announcing my intention to write a book on Digital Humanitarians. Well, it’s done! And it launches this week. The book has already been endorsed by scholars at Harvard, MIT, Stanford, Oxford, etc.; by practitioners at the United Nations, World Bank, Red Cross, USAID, DfID, etc.; and by others including Twitter and National Geographic. These and many more endorsements are available here. Brief summaries of each book chapter are available here; and the short video below provides an excellent overview of the topics covered in the book. Together, these overviews make it clear that this book is directly relevant to many other fields including journalism, human rights, development, activism, business management, computing, ethics, social science, data science, etc. In short, the lessons that digital humanitarians have learned (often the hard way) over the years and the important insights they have gained are directly applicable to fields well beyond the humanitarian space. To this end, Digital Humanitarians is written in a “narrative and conversational style” rather than with dense, technical language.

The story of digital humanitarians is a multifaceted one. Theirs is not just a story about using new technologies to make sense of “Big Data”. For the most part, digital humanitarians are volunteers; volunteers from all walks of life who occupy every time zone. Many are very tech-savvy and pull all-nighters, but most simply want to make a difference using the few minutes they have with the digital technologies already at their fingertips. Digital humanitarians also include pro-democracy activists who live in countries ruled by tyrants. This story is thus also about hope and humanity; about how technology can extend our humanity during crises. To be sure, if no one cared, if no one felt compelled to help others in need, or to change the status quo, then no one would even bother to use these new, next generation humanitarian technologies in the first place.

I believe this explains why Professor Leysia Palen included the following in her very kind review of my book: “I dare you to read this book and not have both your heart and mind opened.” As I reflected to my editor while in the midst of book writing, an alternative tag line for the title could very well be “How Big Data and Big Hearts are Changing the Face of Humanitarian Response.” It is personally and deeply important to me that the media, would-be volunteers and others also understand that the digital humanitarians story is not a romanticized story about a few “lone heroes” who accomplish the impossible thanks to their super human technical powers. There are thousands upon thousands of largely anonymous digital volunteers from all around the world who make this story possible. And while we may not know all their names, we certainly do know about their tireless collective action efforts—they mobilize online from all corners of our Blue Planet to support humanitarian efforts. My book explains how these digital volunteers do this, and yes, how you can too.

Digital humanitarians also include a small (but growing) number of forward-thinking professionals from large and well-known humanitarian organizations. After the tragic, nightmarish earthquake that struck Haiti in January 2010, these seasoned and open-minded humanitarians quickly realized that making sense of “Big Data” during future disasters would require new thinking, new risk-taking, new partnerships, and next generation humanitarian technologies. This story thus includes the invaluable contributions of those change-agents and explains how these few individuals are enabling innovation within the large bureaucracies they work in. The story would thus be incomplete without these individuals; without their appetite for risk-taking, their strategic understanding of how to change (and at times circumvent) established systems from the inside to make their organizations still relevant in a hyper-connected world. This may explain why Tarun Sarwal of the International Committee of the Red Cross (ICRC) in Geneva included these words (of warning) in his kind review: “For anyone in the Humanitarian sector — ignore this book at your peril.”


Today, this growing, cross-disciplinary community of digital humanitarians are crafting and leveraging ingenious crowdsourcing solutions with trail-blazing insights from advanced computing and artificial intelligence in order to make sense of “Big Data” generated during disasters. In virtually real-time, these new solutions (many still in early prototype stages) enable digital volunteers to make sense of vast volumes of social media, SMS and imagery captured from satellites & UAVs to support relief efforts worldwide.

All of this obviously comes with a great many challenges. I certainly don’t shy away from these in the book (despite my being an eternal optimist : ). As Ethan Zuckerman from MIT very kindly wrote in his review of the book,

“[Patrick] is also a careful scholar who thinks deeply about the limits and potential dangers of data-centric approaches. His book offers both inspiration for those around the world who want to improve our disaster response and a set of fertile challenges to ensure we use data wisely and ethically.”

Digital humanitarians are not perfect, they’re human, they make mistakes, they fail; innovation, after all, takes experimenting, risk-taking and failing. But most importantly, these digital pioneers learn, innovate and over time make fewer mistakes. In sum, this book charts the sudden and spectacular rise of these digital humanitarians and their next generation technologies by sharing their remarkable, real-life stories and the many lessons they have learned and hurdles both cleared & still standing. In essence, this book highlights how their humanity coupled with innovative solutions to “Big Data” is changing humanitarian response forever. Digital Humanitarians will make you think differently about what it means to be humanitarian and will invite you to join the journey online. And that is what it’s ultimately all about—action, responsible & effective action.

Why did I write this book? The main reason may perhaps come as a surprise—one word: hope. In a world seemingly overrun by heart-wrenching headlines and daily reminders from the news and social media about all the ugly and cruel ways that technologies are being used to spy on entire populations, to harass, oppress, target and kill each other, I felt the pressing need to share a different narrative; a narrative about how selfless volunteers from all walks of life, of all ages, nationalities and creeds, use digital technologies to help complete strangers on the other side of the planet. I’ve had the privilege of witnessing this digital goodwill first hand and repeatedly over the years. This goodwill is what continues to restore my faith in humanity and what gives me hope, even when things are tough and not going well. And so, I wrote Digital Humanitarians first and foremost to share this hope more widely. We each have agency and we can change the world for the better. I’ve seen this and witnessed the impact first hand. So if readers come away with a renewed sense of hope and agency after reading the book, I will have achieved my main objective.

For updates on events, talks, trainings, webinars, etc, please click here. I’ll be organizing a Google Hangout on March 5th for readers who wish to discuss the book in more depth and/or follow up with any questions or ideas. If you’d like additional information on this and future Hangouts, please click on the previous link. If you wish to join ongoing conversations online, feel free to do so with the FB & Twitter hashtag #DigitalJedis. If you’d like to set up a book talk and/or co-organize a training at your organization, university, school, etc., then do get in touch. If you wish to give a talk on the book yourself, then let me know and I’d be happy to share my slides. And if you come across interesting examples of digital humanitarians in action, then please consider sharing these with other readers and myself by using the #DigitalJedis hashtag and/or by sending me an email so I can include your observation in my monthly newsletter and future blog posts. I also welcome guest blog posts on iRevolutions.

Naturally, this book would never have existed were it not for the digital humanitarians volunteering their time—day and night—during major disasters across the world. This book would also not have seen the light of day without the thoughtful guidance and support I received from these mentors, colleagues, friends and my family. I am thus deeply and profoundly grateful for their spirit, inspiration and friendship. Onwards!