
How to Become a Digital Sherlock Holmes and Support Relief Efforts

Humanitarian organizations need both timely and accurate information when responding to disasters. Where is the most damage located? Who needs the most help? What other threats exist? Respectable news organizations also need timely and accurate information during crisis events to responsibly inform the public. Alas, both humanitarian & mainstream news organizations are often confronted with countless rumors and unconfirmed reports. Investigative journalists and others have thus developed a number of clever strategies to rapidly verify such reports—as detailed in the excellent Verification Handbook. There’s just one glitch: Journalists and humanitarians alike are increasingly overwhelmed by the “Big Data” generated during crises, particularly information posted on social media. They rarely have enough time or enough staff to verify the majority of unconfirmed reports. This is where Verily comes in, a new type of Detective Agency for a new type of detective: The Virtual Digital Detective.


The purpose of Verily is to rapidly crowdsource the verification of unconfirmed reports during major disasters. The way it works is simple. If a humanitarian or news organization has a verification request, they simply submit this request online at Verily. This request must be phrased in the form of a Yes-or-No question, such as: “Has the Brooklyn Bridge been destroyed by the Hurricane?”; “Is this Instagram picture really showing current flooding in Indonesia?”; “Is this new YouTube video of the Chile earthquake fake?”; “Is it true that the bush fires in South Australia are getting worse?” and so on.

Verily helps humanitarian & news organizations find answers to these questions by rapidly crowdsourcing the collection of clues that can help answer said questions. Verification questions are communicated widely across the world via Verily’s own email-list of Digital Detectives and also via social media. This new breed of Digital Detectives then scours the web for clues that can help answer the verification questions. Anyone can become a Digital Detective at Verily. Indeed, Verily provides a menu of mini-verification guides for new detectives. These guides were written by some of the best Digital Detectives on the planet, the authors of the Verification Handbook. Verily Detectives post the clues they find directly to Verily and briefly explain why these clues help answer the verification question. That’s all there is to it.
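To make that workflow concrete, here’s a minimal sketch in Python of what a verification request and its crowdsourced clues might look like as a data model. All the names here (VerificationRequest, Clue, and so on) are illustrative assumptions for this post, not Verily’s actual code:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only -- not Verily's actual data model.

@dataclass
class Clue:
    detective: str        # who found the clue
    evidence_url: str     # link to the photo/video/page found on the web
    rationale: str        # brief explanation of why this clue helps
    supports_yes: bool    # does the clue point toward "Yes" or "No"?

@dataclass
class VerificationRequest:
    question: str               # must be phrased as a Yes-or-No question
    requester: str              # e.g., a humanitarian or news organization
    clues: List[Clue] = field(default_factory=list)

    def submit_clue(self, clue: Clue) -> None:
        """Digital Detectives post clues with a brief justification."""
        self.clues.append(clue)

request = VerificationRequest(
    question="Has the Brooklyn Bridge been destroyed by the Hurricane?",
    requester="Example News Desk",
)
request.submit_clue(Clue(
    detective="detective42",
    evidence_url="https://example.com/photo-of-bridge-today",
    rationale="Photo timestamped after landfall shows the bridge intact.",
    supports_yes=False,
))
```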


If you’re familiar with Reddit, you may be thinking “Hold on, doesn’t Reddit do this already?” In part, yes, but Reddit is not necessarily designed to crowdsource critical thinking or to create skilled Digital Detectives. Recall this fiasco during the Boston Marathon Bombings, which fueled disastrous “witch hunts”. Said disaster would not have happened on Verily because Verily is deliberately designed to focus on the process of careful detective work while giving new detectives the skills they need to avoid precisely the kind of disaster that happened on Reddit. This is in no way a criticism of Reddit! No single platform can be designed to solve every problem under the sun. Deliberate, intentional design is absolutely key.

In sum, our goal at Verily is to crowdsource Sherlock Holmes. Why do we think this will work? For several reasons. First, authors of the Verification Handbook have already demonstrated that individuals working alone can, and do, verify unconfirmed reports during crises. We believe that creating a community that can work together to verify rumors will be even more powerful given the Big Data challenge. Second, each one of us with a mobile phone is a human sensor, a potential digital witness. We believe that Verily can help crowdsource the search for eyewitnesses, or rather the search for digital content that these eyewitnesses post on the Web. Third, the Red Balloon Challenge was completed in a matter of hours. This Challenge focused on crowdsourcing the search for clues across an entire continent (3 million square miles). Disasters, in contrast, are far more narrow in terms of geographic coverage. In other words, the proverbial haystack is smaller and thus the needles easier to find. More on Verily here & here.

So there’s reason to be optimistic that Verily can succeed given the above and recent real-world deployments. Of course, Verily is still very much in its early phase and still experimental. But both humanitarian organizations and high-profile news organizations have expressed a strong interest in field-testing this new Digital Detective Agency. To find out more about Verily and to engage with experts in verification, please join us on Tuesday, March 3rd at 10:00am (New York time) for this Google Hangout with the Verily Team and our colleague Craig Silverman, the Co-Editor of the Verification Handbook. Click here for the Event Page and here to follow on YouTube. You can also join the conversations on Twitter and pose questions or comments using the hashtag #VerilyLive.

How to Counter Rumors and Prevent Violence Using UAVs

The Sentinel Project recently launched their Human Security UAV program in Kenya’s violence-prone Tana Delta to directly support Una Hakika (“Are You Sure”). Una Hakika is an information service that serves to “counteract malicious misinformation [disinformation] which has been the trigger for recent outbreaks of violence in the region.” While the Tana Delta is one of Kenya’s least developed areas, “mobile phone and internet usage is still surprisingly high.” At the same time, misinformation has “played a significant role in causing fear, distrust and hatred between communities” because the Tana Delta is, perhaps paradoxically, also an “information-starved environment in which most people still rely on word-of-mouth to get news about the world around them.”


In other words, there are no objective, authoritative sources of information per se, so Una Hakika (“Are You Sure”) seeks to be the first accurate, neutral and reliable source of information. Una Hakika is powered by a dedicated toll-free SMS short code and an engaged, trusted network of volunteer ambassadors. When the team receives a rumor verification request via SMS, they proceed to verify the rumor and report the findings back (via SMS) to the community. This process involves “gathering a lot of information from various different sources and trying to make sense of it […]. That’s where WikiRumours comes in as our purpose-built software for managing the Una Hakika workflow.”
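For illustration only, here’s a minimal Python sketch of this verify-and-report-back loop. The states and field names below are assumptions made for this post, not WikiRumours’ actual workflow schema:

```python
from enum import Enum, auto

# Illustrative workflow states -- not WikiRumours' actual schema.
class RumorStatus(Enum):
    REPORTED = auto()             # received via the toll-free SMS short code
    UNDER_INVESTIGATION = auto()  # ambassadors gather info from sources
    CONFIRMED_TRUE = auto()
    CONFIRMED_FALSE = auto()

def handle_sms(text: str, sender: str, rumors: dict) -> str:
    """Log an incoming rumor report and acknowledge the sender."""
    rumor_id = len(rumors) + 1
    rumors[rumor_id] = {"text": text, "sender": sender,
                        "status": RumorStatus.REPORTED}
    return f"Una Hakika: rumor #{rumor_id} received; we will verify and reply."

def report_finding(rumor_id: int, verdict: RumorStatus, rumors: dict) -> str:
    """After verification, report the finding back to the community via SMS."""
    rumors[rumor_id]["status"] = verdict
    label = "TRUE" if verdict is RumorStatus.CONFIRMED_TRUE else "FALSE"
    return f"Una Hakika: rumor #{rumor_id} has been verified as {label}."

rumors = {}
# The phone number below is invented for the example.
print(handle_sms("Are you sure village X was attacked last night?",
                 "+254700000000", rumors))
print(report_finding(1, RumorStatus.CONFIRMED_FALSE, rumors))
```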


A year after implementing the project, the Sentinel team carried out a series of focus groups to assess impact. The findings are particularly encouraging. In a way, the Sentinel team has formalized and streamlined the organic verification process I describe here: How To Use Technology To Counter Rumors During Crises: Anecdotes from Kyrgyzstan. So where do UAVs come in?

The Sentinel team recently introduced the use of UAVs to support Una Hakika’s verification efforts and will be expanding the program to include a small fleet of multi-rotor and fixed wing platforms. Before piloting this new technology, the team carried out research to better understand local perceptions around UAVs (also referred to as Unmanned Aerial Systems, UAS):

“Common public opinion concerns in places like Europe and North America relate to the invasion of privacy, misuse by government or law enforcement, a related concern about an overbearing security state, and fears of an aviation disaster. Concerns found among residents of the Tana Delta revolve around practical issues such as whether the UAS-mounted camera would be powerful enough to be useful, how far such systems can operate, whether they are hampered by weather, how quickly a drone can be deployed in an emergency, and who will be in physical possession of the system.”

“For the most part, they [local residents] are genuinely curious, have a plethora of questions about the implementation of UAS in their communities, and are enthusiastic about the many possibilities. This genuine technological optimism makes the Tana Delta a likely site for one of the first programs of its kind. The Sentinel Project is conducting its UAS operations with the policy of ‘progress through caution,’ which seeks to engage communities within the proposed deployment while offering complete transparency and involvement but always emphasizing exposure to (and demonstration of) systems in the field with the people who have the potential to benefit from these initiatives. This approach has been extremely well received & has already resulted in improvements to implementation.”

While Una Hakika’s verification network includes hundreds of volunteer ambassadors, they can’t be everywhere at the same time. As the Sentinel team mentioned during one of our recent conversations, there are some places that simply can’t be reached by foot reliably. In addition, the UAVs can operate both day and night; wandering around at night can be dangerous for Una Hakika’s verification ambassadors. The Sentinel team thus plans to add infrared thermal-imaging capabilities to the UAVs. The core of the program will be to use UAVs to set up perimeter security areas around threatened communities. In addition, the program can address other vectors which have led to recent violence: using the UAVs to help find lost (potentially stolen) cattle, track crop health, and monitor contested land use. The team mentioned that the UAVs could also be used to support search and rescue efforts during periods of drought and floods.


Lastly, they’ve started discussing the use of UAVs for payload transportation. For example, UAVs could deliver medical supplies to remote villages that have been attacked. After all, the World Health Organization (WHO) is already using UAVs for this purpose. With each of these applications, the Sentinel team clearly emphasizes that the primary users and operators of the UAVs must be the local staff in the region. “We believe that successful technology driven programs must not only act as tools to serve these communities but also allow community members to have direct involvement in their use”.

As the Sentinel team rightly notes, their approach helps to “counteract the paralysis which arises from the unknowns of a new endeavour when studied in a purely academic setting. The Sentinel Project team believes that a cautious but active strategy of real-world deployments will best demonstrate the value of such programs to governments and global citizens.” This very much resonates with me, which is why I am pleased to serve on the organization’s Advisory Board.

Live: Crowdsourced Verification Platform for Disaster Response

Earlier this year, Malaysia Airlines Flight 370 suddenly vanished, which set in motion the largest search and rescue operation in history—both on the ground and online. Colleagues at DigitalGlobe uploaded high-resolution satellite imagery to the web and crowdsourced the digital search for signs of Flight 370. An astounding 8 million volunteers rallied online, searching through 775 million images spanning 1,000,000 square kilometers; all this in just 4 days. What if, in addition to mass crowd-searching, we could also mass crowd-verify information during humanitarian disasters? Rumors and unconfirmed reports tend to spread rather quickly on social media during major crises. But what if the crowd were also part of the solution? This is where our new Verily platform comes in.


Verily was inspired by the Red Balloon Challenge, in which competing teams vied for a $40,000 prize by searching for ten weather balloons secretly placed across some 8,000,000 square kilometers (the continental United States). Talk about a needle-in-the-haystack problem. The winning team from MIT found all 10 balloons within 8 hours. How? They used social media to crowdsource the search. The team later noted that the balloons would’ve been found more quickly had competing teams not posted pictures of fake balloons on social media. Point being, all ten balloons were found astonishingly quickly even with the disinformation campaign.

Verily takes the exact same approach and methodology used by MIT to rapidly crowd-verify information during humanitarian disasters. Why is verification important? Because humanitarians have repeatedly noted that their inability to verify social media content is one of the main reasons why they aren’t making wider use of this medium. So, to test the viability of our proposed solution to this problem, we decided to pilot the Verily platform by running a Verification Challenge. The Verily Team includes researchers from the University of Southampton, the Masdar Institute and QCRI.

During the Challenge, verification questions of various difficulty were posted on Verily. Users were invited to collect and post evidence justifying their answers to the “Yes or No” verification questions. The photograph below, for example, was posted with the following question:

[Image: the photograph posted with the verification question]

Unbeknownst to participants, the photograph was actually of an Italian town in Sicily called Caltagirone. The question was answered correctly within 4 hours by a user who submitted another picture of the same street. The results of the new Verily experiment are promising. Answers to our questions were coming in so rapidly that we could barely keep up with posting new questions. Users drew on a variety of techniques to collect their evidence and answer the questions we posted.

Verily was designed with the goal of tapping into collective critical thinking; that is, with the goal of encouraging people to think about the question rather than use their gut feeling alone. In other words, the purpose of Verily is not simply to crowdsource the collection of evidence but also to crowdsource critical thinking. This explains why a user can’t simply submit a “Yes” or “No” to answer a verification question. Instead, they have to justify their answer by providing evidence, either in the form of an image/video or as text. In addition, Verily does not make use of Like buttons or up/down votes to answer questions. While such tools are great for identifying and sharing content on sites like Reddit, they are not the right tools for verification, which requires searching for evidence rather than liking or retweeting.
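A minimal sketch of what this design choice implies, assuming hypothetical field names: an answer without supporting evidence is simply rejected, so there is nothing equivalent to an up-vote to submit:

```python
# Illustrative check with assumed field names: Verily requires every answer
# to carry evidence (an image/video link or explanatory text), rather than
# a bare Yes/No or an up/down vote.

def validate_answer(answer: dict) -> bool:
    """Accept an answer only if it justifies its Yes/No with evidence."""
    has_verdict = answer.get("verdict") in ("yes", "no")
    evidence = answer.get("evidence_url") or answer.get("evidence_text")
    return has_verdict and bool(evidence)

assert not validate_answer({"verdict": "yes"})  # bare vote: rejected
assert validate_answer({
    "verdict": "no",
    "evidence_text": "Street details match this Sicilian town, not Indonesia.",
    "evidence_url": "https://example.com/matching-street-view",
})
```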

Our Verification Challenge confirmed the feasibility of the Verily platform for time-critical, crowdsourced evidence collection and verification. The next step is to deploy Verily during an actual humanitarian disaster. To this end, we invite both news and humanitarian organizations to pilot the Verily platform with us during the next natural disaster. Simply contact me to submit a verification question. In the future, once Verily is fully developed, organizations will be able to post their questions directly.


See Also:

  • Verily: Crowdsourced Verification for Disaster Response [link]
  • Crowdsourcing Critical Thinking to Verify Social Media [link]
  • Six Degrees of Separation: Implications for Verifying Social Media [link]

Got TweetCred? Use it To Automatically Identify Credible Tweets (Updated)

Update: Users have created an astounding one million+ tags over the past few weeks, which will help increase the accuracy of TweetCred in coming months as we use these tags to further train our machine learning classifiers. We will be releasing our Firefox plugin in the next few days. In the meantime, we have just released our paper on TweetCred which describes our methodology & classifiers in more detail.

What if there were a way to automatically identify credible tweets during major events like disasters? Sounds rather far-fetched, right? Think again.

The new field of Digital Information Forensics is increasingly making use of Big Data analytics and techniques from artificial intelligence like machine learning to automatically verify social media. This is how my QCRI colleague ChaTo et al. already predicted both credible and non-credible tweets generated after the Chile Earthquake (with an accuracy of 86%). Meanwhile, my colleagues Aditi et al. from IIIT Delhi also used machine learning to automatically rank the credibility of some 35 million tweets generated during a dozen major international events such as the UK Riots and the Libya Crisis. So we teamed up with Aditi et al. to turn those academic findings into TweetCred, a free app that identifies credible tweets automatically.
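To give a feel for the general approach (and only that; TweetCred’s actual features and classifiers are detailed in the paper linked above), here’s a toy sketch in Python: extract simple features from a tweet and its author, then train a classifier on labeled examples. The features and training data below are invented for illustration:

```python
# Toy illustration of credibility classification -- not TweetCred's actual
# feature set or model.
from sklearn.linear_model import LogisticRegression
import numpy as np

def features(tweet: dict) -> list:
    """Simple content + author features of the kind such systems use."""
    return [
        len(tweet["text"]),                   # message length
        tweet["text"].count("!"),             # exclamation marks
        int("http" in tweet["text"]),         # contains a link
        np.log1p(tweet["author_followers"]),  # author popularity
        int(tweet["author_verified"]),        # verified account
    ]

# Tiny hand-made training set purely for demonstration (1 = credible).
train = [
    ({"text": "Official update: shelters open http://redcross.org",
      "author_followers": 500000, "author_verified": True}, 1),
    ({"text": "OMG everything is destroyed!!! RT RT RT",
      "author_followers": 12, "author_verified": False}, 0),
    ({"text": "Road closures confirmed by city officials http://city.gov",
      "author_followers": 20000, "author_verified": True}, 1),
    ({"text": "heard the whole bridge collapsed!!!!!",
      "author_followers": 40, "author_verified": False}, 0),
]
X = np.array([features(t) for t, _ in train])
y = np.array([label for _, label in train])

clf = LogisticRegression().fit(X, y)
new_tweet = {"text": "Unconfirmed: dam breach upstream!!!",
             "author_followers": 30, "author_verified": False}
print(clf.predict_proba([features(new_tweet)])[0][1])  # credibility estimate
```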


We’ve just launched the very first version of TweetCred—key word being first. This means that our new app is still experimental. On the plus side, since TweetCred is powered by machine learning, it will become increasingly accurate over time as more users make use of the app and “teach” it the difference between credible and non-credible tweets. Teaching TweetCred is as simple as a click of the mouse. Take the tweet below, for example.

[Image: TweetCred scoring a tweet from the American Red Cross with three blue dots]

TweetCred scores each tweet based on a 7-point system: the higher the number of blue dots, the more credible the content of the tweet is likely to be. Note that a TweetCred score also takes into account any pictures or videos included in a tweet, along with the reputation and popularity of the Twitter user. Naturally, TweetCred won’t always get it right, which is where the teaching and machine learning come in. The above tweet from the American Red Cross is more credible than three dots would suggest. So you simply hover your mouse over the blue dots and click on the “thumbs down” icon to tell TweetCred it got that tweet wrong. The app will then ask you to tag the correct level of credibility for that tweet.
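Under the hood, that “teaching” gesture amounts to collecting corrected labels that can later be used to re-train the classifiers. A minimal sketch, with assumed names and storage:

```python
# Minimal sketch of the feedback ("teaching") loop, with assumed names:
# corrected scores become labeled examples for later re-training.

feedback_log = []  # in practice, sent back to the TweetCred servers

def record_feedback(tweet_id: str, predicted_score: int, user_score: int):
    """Store a user's corrected credibility tag (1-7 scale)."""
    assert 1 <= user_score <= 7, "scores are on a 7-point scale"
    feedback_log.append({
        "tweet_id": tweet_id,
        "predicted": predicted_score,
        "corrected": user_score,
    })

# Example: the American Red Cross tweet scored 3 dots but deserves more.
record_feedback(tweet_id="arc-123", predicted_score=3, user_score=6)
```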


That’s all there is to it. As noted above, this is just the first version of TweetCred. The more all of us use (and teach) the app, the more accurate it will be. So please try it out and spread the word. You can download the Chrome Extension for TweetCred here. If you don’t use Chrome, you can still use the browser version here although the latter has less functionality. We very much welcome any feedback you may have, so simply post feedback in the comments section below. Keep in mind that TweetCred is specifically designed to rate the credibility of disaster/crisis related tweets rather than any random topic on Twitter.

As I note in my book Digital Humanitarians (forthcoming), empirical studies have shown that we’re less likely to spread rumors on Twitter if false tweets are publicly identified by Twitter users as being non-credible. In fact, these studies show that such public exposure increases the number of Twitter users who then seek to stop the spread of said rumor-related tweets by 150%. But it makes a big difference whether one sees the rumors first or the tweets dismissing said rumors first. So my hope is that TweetCred will help accelerate Twitter’s self-correcting behavior by automatically identifying credible tweets while countering rumor-related tweets in real time.

This project is a joint collaboration between IIIT and QCRI. Big thanks to Aditi and team for their heavy lifting on the coding of TweetCred. If the experiments go well, my QCRI colleagues and I may integrate TweetCred within our AIDR (Artificial Intelligence for Disaster Response) and Verily platforms.


See also:

  • New Insights on How to Verify Social Media [link]
  • Predicting the Credibility of Disaster Tweets Automatically [link]
  • Auto-Ranking Credibility of Tweets During Major Events [link]
  • Auto-Identifying Fake Images on Twitter During Disasters [link]
  • Truth in the Age of Social Media: A Big Data Challenge [link]
  • Analyzing Fake Content on Twitter During Boston Bombings [link]
  • How to Verify Crowdsourced Information from Social Media [link]
  • Crowdsourcing Critical Thinking to Verify Social Media [link]
  • Tweets, Crises and Behavioral Psychology: On Credibility and Information Sharing [link]

Analyzing Fake Content on Twitter During Boston Marathon Bombings

As iRevolution readers already know, the application of Information Forensics to social media is one of my primary areas of interest. So I’m always on the lookout for new and related studies, such as this one (PDF), which was just published by colleagues of mine in India. The study by Aditi Gupta et al. analyzes fake content shared on Twitter during the Boston Marathon Bombings earlier this year.


Gupta et al. collected close to 8 million unique tweets posted by 3.7 million unique users between April 15-19th, 2013. The table below provides more details. The authors found that rumors and fake content comprised 29% of the content that went viral on Twitter, while 51% of the content constituted generic opinions and comments. The remaining 20% relayed true information. Interestingly, approximately 75% of fake tweets were posted via mobile devices, compared with 64% of true tweets.

[Table 1, Gupta et al.: details of the Twitter dataset]

The authors also found that many users with high social reputation and verified accounts were responsible for spreading the bulk of the fake content posted to Twitter. Indeed, the study shows that fake content did not travel rapidly during the first hour after the bombing. Rumors and fake information only went viral after Twitter users with large numbers of followers started propagating the fake content. To this end, “determining whether some information is true or fake, based on only factors based on high number of followers and verified accounts is not possible in the initial hours.”

Gupta et al. also identified close to 32,000 new Twitter accounts created between April 15-19 that also posted at least one tweet about the bombings. About 20% (6,073 accounts) of these new accounts were subsequently suspended by Twitter. The authors found that 98.7% of these suspended accounts did not include the word Boston in their names and usernames. They also note that some of these deleted accounts were “quite influential” during the Boston tragedy. The figure below depicts the number of suspended Twitter accounts created in the hours and days following the blast.

[Figure 2, Gupta et al.: number of suspended Twitter accounts created in the hours and days following the blast]

The authors also carried out some basic social network analysis of the suspended Twitter accounts. First, they removed from the analysis all suspended accounts that did not interact with each other, which left just 69 accounts. Next, they analyzed the network topology of these 69 accounts, which produced four distinct graph structures: Single Link, Closed Community, Star Topology and Self-Loops. These are displayed in the figure below.

[Figure 3, Gupta et al.: the four graph structures]

The two most interesting graphs are the Closed Community and Star Topology graphs—the second and third graphs in the figure above.

Closed Community: Users that retweet and mention each other, forming a closed community as indicated by the high closeness centrality values produced by the social network analysis. “All these nodes have similar usernames too, all usernames have the same prefix and only numbers in the suffixes are different. This indicates that either these profiles were created by same or similar minded people for posting common propaganda posts.” Gupta et al. analyzed the content posted by these users and found that all were “tweeting the same propaganda and hate filled tweet.”

Star Topology: Easily mistaken for the authentic “BostonMarathon” Twitter account, the fake account “BostonMarathons” created plenty of confusion. Many users propagated the fake content posted by the BostonMarathons account. As the authors note, “Impersonation or creating fake profiles is a crime that results in identity theft and is punishable by law in many countries.”
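For readers who want to experiment, here’s an illustrative reconstruction of these analysis steps using the networkx library in Python. The accounts and edges below are invented; the steps mirror the paper’s description: build an interaction graph, drop accounts that never interacted, and compute closeness centrality to spot closed communities:

```python
# Illustrative reconstruction of the analysis steps described above, using
# the networkx library. The accounts and edges are invented for this demo.
import networkx as nx

G = nx.DiGraph()
# An edge (A, B) means "suspended account A retweeted or mentioned B".
G.add_edges_from([
    ("spam_01", "spam_02"), ("spam_02", "spam_03"), ("spam_03", "spam_01"),
    ("lone_account", "lone_account"),                  # a self-loop
    ("fan_1", "BostonMarathons"), ("fan_2", "BostonMarathons"),
    ("fan_3", "BostonMarathons"),                      # star around a fake hub
])
G.add_node("no_interactions")  # an account that never interacted with anyone

# Step 1: remove suspended accounts that did not interact with each other.
G.remove_nodes_from(list(nx.isolates(G)))

# Step 2: compute closeness centrality; uniformly high values within a
# cluster of mutually retweeting accounts indicate a closed community.
for node, score in sorted(nx.closeness_centrality(G).items(),
                          key=lambda kv: -kv[1]):
    print(f"{node}: {score:.2f}")
```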

The automatic detection of these network structures on Twitter may enable us to detect and counter fake content in the future. In the meantime, my colleagues and I at QCRI are collaborating with Aditi Gupta et al. to develop a “Credibility Plugin” for Twitter based on this analysis and earlier peer-reviewed research carried out by my colleague ChaTo. Stay tuned for updates.


See also:

  • Boston Bombings: Analyzing First 1,000 Seconds on Twitter [link]
  • Taking the Pulse of the Boston Bombings on Twitter [link]
  • Predicting the Credibility of Disaster Tweets Automatically [link]
  • Auto-Ranking Credibility of Tweets During Major Events [link]
  • Auto-Identifying Fake Images on Twitter During Disasters [link]
  • How to Verify Crowdsourced Information from Social Media [link]
  • Crowdsourcing Critical Thinking to Verify Social Media [link]

Using Crowdsourcing to Counter the Spread of False Rumors on Social Media During Crises

My new colleague Professor Yasuaki Sakamoto at the Stevens Institute of Technology (SIT) has been carrying out intriguing research on the spread of rumors via social media, particularly on Twitter and during crises. In his latest research, “Toward a Social-Technological System that Inactivates False Rumors through the Critical Thinking of Crowds,” Yasu uses behavioral psychology to understand why exposure to public criticism changes rumor-spreading behavior on Twitter during disasters. This fascinating research builds very nicely on the excellent work carried out by my QCRI colleague ChaTo, who used this “criticism dynamic” to show that the credibility of tweets can be predicted (by topic) without analyzing their content. Yasu’s study also seeks to find the psychological basis for Twitter’s self-correcting behavior identified by ChaTo and also John Herrman, who described Twitter as a “Truth Machine” during Hurricane Sandy.


Twitter is still a relatively new platform, but the existence and spread of false rumors is certainly not. In fact, a very interesting study from 1950 found that “in the past 1,000 years the same types of rumors related to earthquakes appear again and again in different locations.” Early academic studies on the spread of rumors revealed that “psychological factors, such as accuracy, anxiety, and importance of rumors, affect rumor transmission.” One such study proposed that the spread of a rumor “will vary with the importance of the subject to the individuals concerned times the ambiguity of the evidence pertaining to the topic at issue.” Later studies added “anxiety as another key element in rumormongering,” since “the likelihood of sharing a rumor was related to how anxious the rumor made people feel.” At the same time, however, the literature also reveals that countermeasures do exist. Critical thinking, for example, decreases the spread of rumors. The literature defines critical thinking as “reasonable reflective thinking focused on deciding what to believe or do.”
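In symbols, that “importance times ambiguity” formulation is the classic basic law of rumor, usually attributed to Allport and Postman:

```latex
% Basic law of rumor: the strength of a rumor R varies with the
% importance i of the subject to the individuals concerned, times
% the ambiguity a of the evidence pertaining to it.
R \sim i \times a
```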

“Given the growing use and participatory nature of social media, critical thinking is considered an important element of media literacy that individuals in a society should possess.” Indeed, while social media can “help people make sense of their situation during a disaster, social media can also become a rumor mill and create social problems.” As discussed above, psychological factors can influence rumor spreading, particularly when experiencing stress and mental pressure following a disaster. Recent studies have also corroborated this finding, confirming that “differences in people’s critical thinking ability […] contributed to the rumor behavior.” So Yasu and his team ask the following interesting question: can critical thinking be crowdsourced?


“Not everyone needs to be a critical thinker all the time,” writes Yasu et al. As long as some individuals are good critical thinkers in a specific domain, their timely criticisms can result in an emergent critical thinking social system that can mitigate the spread of false information. This goes to the heart of the self-correcting behavior often observed on social media and Twitter in particular. Yasu’s insight also provides a basis for a bounded crowdsourcing approach to disaster response. More on this here, here and here.

“Related to critical thinking, a number of studies have paid attention to the role of denial or rebuttal messages in impeding the transmission of rumor.” This is the more “visible” dynamic behind the self-correcting behavior observed on Twitter during disasters. So while some may spread false rumors, others often try to counter this spread by posting tweets criticizing rumor-tweets directly. The following questions thus naturally arise: “Are criticisms on Twitter effective in mitigating the spread of false rumors? Can exposure to criticisms minimize the spread of rumors?”

Yasu and his colleagues set out to test the following hypotheses: exposure to criticisms reduces people’s intent to spread rumors, which means that exposure to criticisms lowers the perceived accuracy, anxiety, and importance of rumors. They tested these hypotheses on 87 Japanese undergraduate and graduate students by using 20 rumor-tweets related to the 2011 Japan Earthquake and 10 criticism-tweets that criticized the corresponding rumor-tweets. For example:

Rumor-tweet: “Air drop of supplies is not allowed in Japan! I though it has already been done by the Self- Defense Forces. Without it, the isolated people will die! I’m trembling with anger. Please retweet!”

Criticism-tweet: “Air drop of supplies is not prohibited by the law. Please don’t spread rumor. Please see 4-(1)-4-.”

The researchers found that “exposing people to criticisms can reduce their intent to spread rumors that are associated with the criticisms, providing support for the system.” In fact, “Exposure to criticisms increased the proportion of people who stop the spread of rumor-tweets approximately 1.5 times [150%]. This result indicates that whether a receiver is exposed to rumor or criticism first makes a difference in her decision to spread the rumor. Another interpretation of the result is that, even if a receiver is exposed to a number of criticisms, she will benefit less from this exposure when she sees rumors first than when she sees criticisms before rumors.”


Findings also revealed three psychological factors that were related to the differences in the spread of rumor-tweets: one’s own perception of the tweet’s accuracy, the anxiety caused by the tweet, and the tweet’s perceived importance. The results also indicate that “exposure to criticisms reduces the perceived accuracy of the succeeding rumor-tweets, paralleling the findings by previous research that refutations or denials decrease the degree of belief in rumor.” In addition, the perceived accuracy of criticism-tweets by those exposed to rumors first was significantly higher than in the criticism-first group. The results were similar vis-à-vis anxiety. “Seeing criticisms before rumors reduced anxiety associated with rumor-tweets relative to seeing rumors first. This result is also consistent with previous research findings that denial messages reduce anxiety about rumors. Participants in the criticism-first group also perceived rumor-tweets to be less important than those in the rumor-first group.” The same was true vis-à-vis the perceived importance of a tweet. That said, “When the rumor-tweets are perceived as more accurate, the intent to spread the rumor-tweets is stronger; when rumor-tweets cause more anxiety, the intent to spread the rumor-tweets is stronger; when the rumor-tweets are perceived as more important, the intent to spread the rumor-tweets is also stronger.”

So how do we use these findings to enhance the critical thinking of crowds and design crowdsourced verification platforms such as Verily? Ideally, such a platform would connect rumor-tweets with criticism-tweets directly. “By this design, information system itself can enhance the critical thinking of the crowds.” That said, the findings clearly show that sequencing matters—that is, being exposed to rumor-tweets first vs criticism-tweets first makes a big difference vis-à-vis rumor contagion. The purpose of a platform like Verily is to act as a repository for crowdsourced criticisms and rebuttals; that is, crowdsourced critical thinking. Thus, the majority of Verily users would first be exposed to questions about rumors, such as: “Has the Vincent Thomas Bridge in Los Angeles been destroyed by the Earthquake?” Users would then be exposed to the crowdsourced criticisms and rebuttals.
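A minimal sketch of the sequencing logic this implies, with purely hypothetical structures (this is not Verily’s actual code): the question and crowdsourced criticisms are rendered before any raw rumor content, since criticism-first exposure is what reduces rumor spreading:

```python
# Hypothetical sketch of criticism-first sequencing: show the verification
# question and crowdsourced criticisms before the raw rumor posts.

def render_thread(question: str, criticisms: list, rumor_posts: list) -> list:
    """Order a verification thread so criticisms precede the rumor posts."""
    thread = [f"QUESTION: {question}"]
    thread += [f"CRITICISM: {c}" for c in criticisms]
    thread += [f"RUMOR POST: {r}" for r in rumor_posts]
    return thread

for line in render_thread(
    "Has the Vincent Thomas Bridge in Los Angeles been destroyed "
    "by the Earthquake?",
    ["Official traffic status page shows the bridge open as of 10:00."],
    ["Heard the bridge collapsed!!"],
):
    print(line)
```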

In conclusion, the spread of false rumors during disasters will never go away. “It is human nature to transmit rumors under uncertainty.” But social-technological platforms like Verily can provide a repository of critical thinking and educate users on critical thinking processes themselves. In this way, we may be able to enhance the critical thinking of crowds.


See also:

  • Wiki on Truthiness resources (Link)
  • How to Verify and Counter Rumors in Social Media (Link)
  • Social Media and Life Cycle of Rumors during Crises (Link)
  • How to Verify Crowdsourced Information from Social Media (Link)
  • Analyzing the Veracity of Tweets During a Crisis (Link)
  • Crowdsourcing for Human Rights: Challenges and Opportunities for Information Collection & Verification (Link)
  • The Crowdsourcing Detective: Crisis, Deception and Intrigue in the Twittersphere (Link)

Haiti: Lies, Damned Lies and Crisis Mapping

You’d think there was some kind of misinformation campaign going on about the Ushahidi-Haiti Crisis Map given the number of new lies that are still being manufactured even though it has been over three years since the earthquake. Please, if you really want a professional, independent and rigorous account of the project, read this evaluation. The findings are mixed but the report remains the only comprehensive, professional and independent evaluation of the Ushahidi-Haiti and 4636 efforts. So if you have questions about the project, please read the report and/or contact the evaluators directly.


In the meantime, I’ve decided to collect the most ridiculous lies & rumors and post my all-time favorites below.

1. “Mission 4636 & Haitian volunteers very strongly opposed the publishing of 4636 SMS’s on the Ushahidi-Haiti Crisis Map given data privacy concerns.”

Robert, the person responsible for Mission 4636, agreed (in writing) to publish the SMS’s after two lawyers noted that there was implied consent to make these messages public. The screenshot of the email below clearly proves this. Furthermore, he and I co-authored this peer-reviewed study several months after the earthquake to document the lessons learned from the SMS response in Haiti. Surely if one of us had heard about these concerns from the Diaspora, we would have known this and reconsidered the publishing of the SMS’s. We would also have written this up as a major issue in our study. Moreover, the independent and professional evaluators referred to above would also have documented this major issue if it were true.

[Screenshot: email showing written agreement to publish the SMS’s]

I, for one, did not receive a single email from anyone involved in Mission 4636 demanding that the SMS’s not be made public. None of the Boston-based Haitian volunteers who I met in person ever asked for the messages to remain confidential; nor did Haitian Diaspora journalists who interviewed us or the many Haitians who called into the radio interviews we participated in ask for the messages to remain secret. Also, the joint decision to (only) map the most urgent and actionable life-and-death messages was supported by a number of humanitarian colleagues who agreed that the risks of making this information public were minimal vis-à-vis the Do No Harm principle.

On a practical note, time was a luxury we did not have; an entire week had already passed since the earthquake and we were already at the tail end of the search and rescue phase. This meant that literally every hour counted for potential survivors still trapped under the rubble. There was no time to second-guess the lawyers or to organize workshops on the question. Making the most urgent and actionable life-and-death text messages public meant that the Haitian Diaspora, which was incredibly active in the response, could use that information to help coordinate efforts. NGOs in Haiti could also make use of this information—not to mention the US Marine Corps, which claimed to have saved hundreds of lives thanks to the Ushahidi-Haiti Crisis Map.

Crisis Mapping can be risky business, there’s no doubt about that. Sometimes tough-but-calculated decisions are needed. If one of the two lawyers had opined that the messages should not be made public, then the SMS’s would not have been published, end of story. In any case, the difficulties we faced during this crisis mapping response to Haiti are precisely why I’ve been working hard with GSMA’s Disaster Response Program to create this SMS Code of Conduct. I have also been collaborating directly with the International Committee of the Red Cross (ICRC) to update Data Privacy and Protection Protocols so they include guidelines on social media use and crisis mapping. This new report will be officially launched in Geneva this April followed by a similar event in DC.

2. “Mission 4636 was a completely separate and independent initiative to the Ushahidi Haiti Crisis Map.”

Then why was Josh Nesbit looking for an SMS solution specifically for Ushahidi? The entire impetus for 4636 was the Haiti Crisis Map. Thanks to his tweet, Josh was put in touch with a contact at Digicel Haiti in Port-au-Prince. Several days later, the 4636 short code was set up and integrated with the Ushahidi platform.


3. “The microtasking platform developed by Ushahidi to translate the text messages during the first two weeks of operation was built by Tim Schwartz, i.e., not Ushahidi.”

Tim Schwartz is a good friend and wonderful colleague. So when I came across this exciting new rumor, I emailed him right away to thank him: “I’m super surprised since no one ever told me this before. If it is indeed true, then I owe you a huge huge thanks!!” His reply: “Well… not exactly:) Brian [from Ushahidi] took our code from the haitianquake.com and modified it to make the base of 4636. Then I came in and wrote the piece that let volunteers translate missing persons messages and put them into Google Person Finder. Brian definitely wrote the original volunteer part for 4636. He’s the rockstar:)”

4. “Digital Democracy (Dd) developed all the workflows for the Ushahidi-Haiti Crisis Map and also trained the majority of volunteers.”

Dd’s co-founder Emily Jacobi is a close friend and trusted colleague. So I emailed her about this fun new rumor back in October to see if I had somehow missed something. Emily replied: “It’s totally ludicrous to claim that Dd solely set up any of those processes. I do think we played an important role in helping to inform, document & systematize those workflows, which is a world away from claiming sole or even lead ownership of any of it.” Indeed, the workflows kept changing on a daily basis and hundreds of volunteers were trained in person or online, often several times a day. That said, Dd absolutely took the lead in crafting the workflows & training the bulk of volunteers who spearheaded the Chile Crisis Map. I recommend reading up on Dd’s awesome projects in Haiti and worldwide here.

5. “FEMA Administrator Craig Fugate’s comment below about the Ushahidi Haiti Crisis Map was actually not about the Ushahidi project. Craig was confused and was actually referring to the Humanitarian OpenStreetMap (OSM) of Haiti.”

Again, I was stunned, but in a good way. Kate Chapman, the director of Humanitarian OpenStreetMap, is a good friend and trusted colleague, so I emailed her the following: “I still hear all kinds of rumors about Haiti but this is the *first* time I’ve come across this one and if this is indeed true then goodness gracious I really need to know so I can give credit where credit is due!” Her reply? She too had never heard this claim before. I trust her 100% so if she ever does tell me that this new rumor is true, I’ll be the first to blog and tweet about it. I’m a huge fan of Humanitarian OpenStreetMap, they really do remarkable work, which is why I included 3 of their projects as case studies in a book chapter I just submitted for publication. In any event, I fully share Kate’s feelings on the rumors: “My feelings on anything that had to do with Haiti is it doesn’t really matter anymore. It has been 2 and a half years. Let’s look on to preparedness and how to improve.” Wise words from a wise woman.


6. “Sabina Carlson who acted as the main point of contact between the Ushahidi Haiti project and the Haitian Diaspora also spearheaded the translation efforts and is critical of her Ushahidi Haiti Team members and in particular Patrick Meier for emphasizing the role of international actors and ignoring the Haitian Diaspora.”

This is probably one of the strangest lies yet. Everyone in Boston knows full well that Sabina was not directly focused on translation but rather on outreach and partnership building with the Haitian Diaspora. Sabina, who is a treasured friend, emailed me (out of the blue) when she heard about some of the poisonous rumors circulating. “This was a shock to me,” she wrote, “I would never say anything to put you down, Patrick, and I’m upset that my words were misinterpreted and used to do just that.” She then detailed exactly how the lie was propagated and by whom (she has the entire transcript).

The fact is this: none of us in Boston ever sought to portray the Diaspora as insignificant or to downplay their invaluable support. Why in the world would we ever do that? Robert and I detailed the invaluable role played by the Diaspora in our peer-reviewed study, for example. Moreover, I invited Sabina to join our Ushahidi-Haiti team precisely because the Diaspora were already responding in amazing ways and I knew they’d stay the course after the end of the emergency phase—we wanted to transfer full ownership of the Haiti Crisis Map to Haitian hands. In sum, it was crystal clear to every single one of us that Sabina was the perfect person to take on this very important responsibility. She represented the voice and interests of Haitians with incredible agility, determination and intelligence throughout our many months of work together, both in Boston and Haiti.