Tag Archives: rumors

Analyzing Fake Content on Twitter During Boston Marathon Bombings

As iRevolution readers already know, the application of Information Forensics to social media is one of my primary areas of interest. So I’m always on the lookout for new and related studies, such as this one (PDF), which was just published by colleagues of mine in India. The study by Aditi Gupta et al. analyzes fake content shared on Twitter during the Boston Marathon Bombings earlier this year.


Gupta et al. collected close to 8 million unique tweets posted by 3.7 million unique users between April 15 and 19, 2013. The table below provides more details. The authors found that rumors and fake content comprised 29% of the content that went viral on Twitter, while 51% constituted generic opinions and comments; the remaining 20% relayed true information. Interestingly, approximately 75% of fake tweets were propagated via mobile devices, compared to 64% of true tweets.

[Table 1 from Gupta et al.]

The authors also found that many users with high social reputation and verified accounts were responsible for spreading the bulk of the fake content posted to Twitter. Indeed, the study shows that fake content did not travel rapidly during the first hour after the bombing; rumors and fake information only went viral after Twitter users with large numbers of followers started propagating the fake content. To this end, “determining whether some information is true or fake, based on only factors based on high number of followers and verified accounts is not possible in the initial hours.”

Gupta et al. also identified close to 32,000 new Twitter accounts created between April 15 and 19 that posted at least one tweet about the bombings. About 20% (6,073) of these new accounts were subsequently suspended by Twitter. The authors found that 98.7% of these suspended accounts did not include the word Boston in their names or usernames. They also note that some of these deleted accounts were “quite influential” during the Boston tragedy. The figure below depicts the number of suspended Twitter accounts created in the hours and days following the blasts.

[Figure 2 from Gupta et al.]

The authors also carried out some basic social network analysis of the suspended Twitter accounts. First, they removed from the analysis all suspended accounts that did not interact with each other, which left just 69 accounts. Next, they analyzed the network topology of these 69 accounts, which produced four distinct graph structures: Single Link, Closed Community, Star Topology and Self-Loops. These are displayed in the figure below.

[Figure 3 from Gupta et al.]

The two most interesting graphs are the Closed Community and Star Topology graphs—the second and third graphs in the figure above.

Closed Community: Users that retweet and mention each other, forming a closed community as indicated by the high closeness centrality values produced by the social network analysis. “All these nodes have similar usernames too, all usernames have the same prefix and only numbers in the suffixes are different. This indicates that either these profiles were created by same or similar minded people for posting common propaganda posts.” Gupta et al. analyzed the content posted by these users and found that all were “tweeting the same propaganda and hate filled tweet.”

Star Topology: Easily mistaken for the authentic “BostonMarathon” Twitter account, the fake account “BostonMarathons” created plenty of confusion. Many users propagated the fake content posted by the BostonMarathons account. As the authors note, “Impersonation or creating fake profiles is a crime that results in identity theft and is punishable by law in many countries.”

Automatically detecting these network structures on Twitter may enable us to identify and counter fake content in the future. In the meantime, my colleagues and I at QCRI are collaborating with Aditi Gupta et al. to develop a “Credibility Plugin” for Twitter based on this analysis and earlier peer-reviewed research carried out by my colleague ChaTo. Stay tuned for updates.
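To make the idea concrete, here is a minimal sketch of how the two structures described above might be flagged automatically, using Python and NetworkX. The account names, interaction edges and thresholds below are illustrative assumptions on my part, not data or code from the Gupta et al. study:

    # Sketch: flag suspicious graph structures in a retweet/mention network.
    # All accounts, edges and thresholds here are hypothetical examples.
    import networkx as nx
    from os.path import commonprefix

    G = nx.DiGraph()
    G.add_edges_from([
        # A closed community with near-identical usernames
        ("boston_help01", "boston_help02"),
        ("boston_help02", "boston_help03"),
        ("boston_help03", "boston_help01"),
        # A star: many otherwise unconnected accounts amplify one hub
        ("fan_a", "BostonMarathons"),
        ("fan_b", "BostonMarathons"),
        ("fan_c", "BostonMarathons"),
    ])

    # Closed Community: mutually interacting accounts (high closeness
    # centrality within the cluster) that also share a username prefix.
    for component in nx.strongly_connected_components(G):
        if len(component) >= 3:
            prefix = commonprefix(sorted(component))
            if len(prefix) >= 5:  # shared prefix suggests bulk-created accounts
                print("possible coordinated cluster:", sorted(component))

    # Star Topology: one account amplified by many accounts that never
    # interact with each other (e.g. an impersonation account).
    for node in G:
        fans = set(G.predecessors(node))
        if len(fans) >= 3 and not any(
            G.has_edge(u, v) for u in fans for v in fans if u != v
        ):
            print("possible star hub:", node)

In a real pipeline, the graph would be built from actual retweet and mention edges and the thresholds tuned empirically.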


See also:

  • Boston Bombings: Analyzing First 1,000 Seconds on Twitter [link]
  • Taking the Pulse of the Boston Bombings on Twitter [link]
  • Predicting the Credibility of Disaster Tweets Automatically [link]
  • Auto-Ranking Credibility of Tweets During Major Events [link]
  • Auto-Identifying Fake Images on Twitter During Disasters [link]
  • How to Verify Crowdsourced Information from Social Media [link]
  • Crowdsourcing Critical Thinking to Verify Social Media [link]

Using Crowdsourcing to Counter the Spread of False Rumors on Social Media During Crises

My new colleague Professor Yasuaki Sakamoto at the Stevens Institute of Technology (SIT) has been carrying out intriguing research on the spread of rumors via social media, particularly on Twitter and during crises. In his latest research, “Toward a Social-Technological System that Inactivates False Rumors through the Critical Thinking of Crowds,” Yasu uses behavioral psychology to understand why exposure to public criticism changes rumor-spreading behavior on Twitter during disasters. This fascinating research builds very nicely on the excellent work carried out by my QCRI colleague ChaTo, who used this “criticism dynamic” to show that the credibility of tweets can be predicted (by topic) without analyzing their content. Yasu’s study also seeks to find the psychological basis for Twitter’s self-correcting behavior identified by ChaTo and also by John Herrman, who described Twitter as a “Truth Machine” during Hurricane Sandy.


Twitter is still a relatively new platform, but the existence and spread of false rumors is certainly not. In fact, a very interesting study from 1950 found that “in the past 1,000 years the same types of rumors related to earthquakes appear again and again in different locations.” Early academic studies on the spread of rumors revealed that “psychological factors, such as accuracy, anxiety, and importance of rumors, affect rumor transmission.” One such study proposed that the spread of a rumor “will vary with the importance of the subject to the individuals concerned times the ambiguity of the evidence pertaining to the topic at issue.” Later studies added “anxiety as another key element in rumormongering,” since “the likelihood of sharing a rumor was related to how anxious the rumor made people feel.” At the same time, however, the literature also reveals that countermeasures do exist. Critical thinking, for example, decreases the spread of rumors. The literature defines critical thinking as “reasonable reflective thinking focused on deciding what to believe or do.”
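The multiplicative claim quoted above is the classic “basic law of rumor,” usually credited to Allport and Postman. As a sketch in conventional textbook notation (the symbols are mine, not drawn from Yasu’s paper):

    R \approx i \times a

where R is the intensity of rumor circulation, i is the importance of the subject to the individuals concerned, and a is the ambiguity of the available evidence. The multiplication matters: if either factor is zero, the rumor should not spread at all, which is why supplying timely, unambiguous evidence (driving a toward zero) can stop a rumor outright.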

“Given the growing use and participatory nature of social media, critical thinking is considered an important element of media literacy that individuals in a society should possess.” Indeed, while social media can “help people make sense of their situation during a disaster, social media can also become a rumor mill and create social problems.” As discussed above, psychological factors can influence rumor spreading, particularly when experiencing stress and mental pressure following a disaster. Recent studies have also corroborated this finding, confirming that “differences in people’s critical thinking ability […] contributed to the rumor behavior.” So Yasu and his team ask the following interesting question: can critical thinking be crowdsourced?


“Not everyone needs to be a critical thinker all the time,” writes Yasu et al. “As long as some individuals are good critical thinkers in a specific domain, their timely criticisms can result in an emergent critical thinking social system that can mitigate the spread of false information.” This goes to the heart of the self-correcting behavior often observed on social media and Twitter in particular. Yasu’s insight also provides a basis for a bounded crowdsourcing approach to disaster response. More on this here, here and here.

“Related to critical thinking, a number of studies have paid attention to the role of denial or rebuttal messages in impeding the transmission of rumor.” This is the more “visible” dynamic behind the self-correcting behavior observed on Twitter during disasters. So while some may spread false rumors, others often try to counter this spread by posting tweets criticizing rumor-tweets directly. The following questions thus naturally arise: “Are criticisms on Twitter effective in mitigating the spread of false rumors? Can exposure to criticisms minimize the spread of rumors?”

Yasu and his colleagues set out to test the following hypotheses: exposure to criticisms reduces people’s intent to spread rumors; that is, exposure to criticisms lowers the perceived accuracy, anxiety, and importance of rumors. They tested these hypotheses on 87 Japanese undergraduate and graduate students using 20 rumor-tweets related to the 2011 Japan Earthquake and 10 criticism-tweets that criticized the corresponding rumor-tweets. For example:

Rumor-tweet: “Air drop of supplies is not allowed in Japan! I thought it had already been done by the Self-Defense Forces. Without it, the isolated people will die! I’m trembling with anger. Please retweet!”

Criticism-tweet: “Air drop of supplies is not prohibited by the law. Please don’t spread rumor. Please see 4-(1)-4-.”

The researchers found that “exposing people to criticisms can reduce their intent to spread rumors that are associated with the criticisms, providing support for the system.” In fact, “Exposure to criticisms increased the proportion of people who stop the spread of rumor-tweets approximately 1.5 times [150%]. This result indicates that whether a receiver is exposed to rumor or criticism first makes a difference in her decision to spread the rumor. Another interpretation of the result is that, even if a receiver is exposed to a number of criticisms, she will benefit less from this exposure when she sees rumors first than when she sees criticisms before rumors.”


Findings also revealed three psychological factors that were related to the differences in the spread of rumor-tweets: one’s own perception of the tweet’s accuracy, the anxiety caused by the tweet, and the tweet’s perceived importance. The results also indicate that “exposure to criticisms reduces the perceived accuracy of the succeeding rumor-tweets, paralleling the findings by previous research that refutations or denials decrease the degree of belief in rumor.” In addition, the perceived accuracy of criticism-tweets by those exposed to rumors first was significantly higher than in the criticism-first group. The results were similar vis-à-vis anxiety: “Seeing criticisms before rumors reduced anxiety associated with rumor-tweets relative to seeing rumors first. This result is also consistent with previous research findings that denial messages reduce anxiety about rumors. Participants in the criticism-first group also perceived rumor-tweets to be less important than those in the rumor-first group.” That said, “when the rumor-tweets are perceived as more accurate, the intent to spread the rumor-tweets is stronger; when rumor-tweets cause more anxiety, the intent to spread the rumor-tweets is stronger; when the rumor-tweets are perceived as more important, the intent to spread the rumor-tweets is also stronger.”

So how do we use these findings to enhance the critical thinking of crowds and design crowdsourced verification platforms such as Verily? Ideally, such a platform would connect rumor-tweets with criticism-tweets directly. “By this design, information system itself can enhance the critical thinking of the crowds.” That said, the findings clearly show that sequencing matters—that is, being exposed to rumor-tweets first vs criticism-tweets first makes a big difference vis-à-vis rumor contagion. The purpose of a platform like Verily is to act as a repository for crowdsourced criticisms and rebuttals; that is, crowdsourced critical thinking. Thus, the majority of Verily users would first be exposed to questions about rumors, such as: “Has the Vincent Thomas Bridge in Los Angeles been destroyed by the Earthquake?” Users would then be exposed to the crowdsourced criticisms and rebuttals.
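As a rough illustration of this criticism-first sequencing, here is a minimal Python sketch. The data structure, field names and sample content are hypothetical illustrations; this is not Verily’s actual design or API:

    # Sketch: pair each rumor with its crowdsourced criticisms and always
    # surface the question and criticisms before the rumor posts.
    from dataclasses import dataclass, field

    @dataclass
    class RumorThread:
        question: str
        criticisms: list = field(default_factory=list)  # crowdsourced rebuttals
        rumor_posts: list = field(default_factory=list)

        def render(self):
            # Sequencing matters: criticism-first exposure lowered the
            # perceived accuracy, anxiety and importance of rumor-tweets.
            yield "QUESTION:  " + self.question
            for c in self.criticisms:
                yield "CRITICISM: " + c
            for r in self.rumor_posts:
                yield "RUMOR:     " + r

    thread = RumorThread(
        question="Has the Vincent Thomas Bridge been destroyed by the earthquake?",
        criticisms=["No damage reported; the bridge is open. Please don't spread rumors."],
        rumor_posts=["The Vincent Thomas Bridge has collapsed! Please retweet!"],
    )
    for line in thread.render():
        print(line)

The design choice is simply to encode the exposure-order finding in the interface itself, rather than relying on users to seek out rebuttals.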

In conclusion, the spread of false rumors during disasters will never go away. “It is human nature to transmit rumors under uncertainty.” But social-technological platforms like Verily can provide a repository of critical thinking and educate users on critical thinking processes themselves. In this way, we may be able to enhance the critical thinking of crowds.



See also:

  • Wiki on Truthiness resources (Link)
  • How to Verify and Counter Rumors in Social Media (Link)
  • Social Media and Life Cycle of Rumors during Crises (Link)
  • How to Verify Crowdsourced Information from Social Media (Link)
  • Analyzing the Veracity of Tweets During a Crisis (Link)
  • Crowdsourcing for Human Rights: Challenges and Opportunities for Information Collection & Verification (Link)
  • The Crowdsourcing Detective: Crisis, Deception and Intrigue in the Twittersphere (Link)

Haiti: Lies, Damned Lies and Crisis Mapping

You’d think there was some kind of misinformation campaign going on about the Ushahidi-Haiti Crisis Map given the number of new lies that are still being manufactured even though it has been over three years since the earthquake. Please, if you really want a professional, independent and rigorous account of the project, read this evaluation. The findings are mixed, but the report remains the only comprehensive, professional and independent evaluation of the Ushahidi-Haiti and 4636 efforts. So if you have questions about the project, please read the report and/or contact the evaluators directly.


In the meantime, I’ve decided to collect the most ridiculous lies & rumors and post my all-time favorites below.

1. “Mission 4636 & Haitian volunteers very strongly opposed the publishing of 4636 SMS’s on the Ushahidi-Haiti Crisis Map given data privacy concerns.”

Robert, the person responsible for Mission 4636, agreed (in writing) to publish the SMS’s after two lawyers noted that there was implied consent to make these messages public. The screenshot of the email below clearly proves this. Furthermore, he and I co-authored this peer-reviewed study several months after the earthquake to document the lessons learned from the SMS response in Haiti. Surely if one of us had heard about these concerns from the Diaspora, we would have known this and reconsidered the publishing of the SMS’s. We would also have written this up as a major issue in our study. Moreover, the independent and professional evaluators referred to above would also have documented this major issue if it were true.

[Screenshot of the email agreeing to publish the SMS’s]

I, for one, did not receive a single email from anyone involved in Mission 4636 demanding that the SMS’s not be made public. None of the Boston-based Haitian volunteers who I met in person ever asked for the messages to remain confidential; nor did Haitian Diaspora journalists who interviewed us or the many Haitians who called into the radio interviews we participated in ask for the messages to remain secret. Also, the joint decision to (only) map the most urgent and actionable life-and-death messages was supported by a number of humanitarian colleagues who agreed that the risks of making this information public were minimal vis-à-vis the Do No Harm principle.

On a practical note, time was a luxury we did not have; an entire week had already passed since the earthquake and we were already at the tail end of the search and rescue phase. This meant that literally every hour counted for potential survivors still trapped under the rubble. There was no time to second-guess the lawyers or to organize workshops on the question. Making the most urgent and actionable life-and-death text messages public meant that the Haitian Diaspora, which was incredibly active in the response, could use that information to help coordinate efforts. NGOs in Haiti could also make use of this information—not to mention the US Marine Corps, which claimed to have saved hundreds of lives thanks to the Ushahidi-Haiti Crisis Map.

Crisis Mapping can be risky business; there’s no doubt about that. Sometimes tough-but-calculated decisions are needed. If one of the two lawyers had opined that the messages should not be made public, then the SMS’s would not have been published, end of story. In any case, the difficulties we faced during this crisis mapping response to Haiti are precisely why I’ve been working hard with GSMA’s Disaster Response Program to create this SMS Code of Conduct. I have also been collaborating directly with the International Committee of the Red Cross (ICRC) to update Data Privacy and Protection Protocols so they include guidelines on social media use and crisis mapping. This new report will be officially launched in Geneva this April, followed by a similar event in DC.

2. “Mission 4636 was a completely separate and independent initiative to the Ushahidi Haiti Crisis Map.”

Then why was Josh Nesbit looking for an SMS solution specifically for Ushahidi? The entire impetus for 4636 was the Haiti Crisis Map. Thanks to his tweet, Josh was put in touch with a contact at Digicel Haiti in Port-au-Prince. Several days later, the 4636 short code was set up and integrated with the Ushahidi platform.

[Screenshot of Josh Nesbit’s tweet]

3. “The microtasking platform developed by Ushahidi to translate the text messages during the first two weeks of operation was built by Tim Schwartz, i.e., not Ushahidi.”

Tim Schwartz is a good friend and wonderful colleague. So when I came across this exciting new rumor, I emailed him right away to thank him: “I’m super surprised since no one ever told me this before. If it is indeed true, then I owe you a huge huge thanks!!” His reply: “Well… not exactly:) Brian [from Ushahidi] took our code from the haitianquake.com and modified it to make the base of 4636. Then I came in and wrote the piece that let volunteers translate missing persons messages and put them into Google Person Finder. Brian definitely wrote the original volunteer part for 4636. He’s the rockstar:)”

4. “Digital Democracy (Dd) developed all the workflows for the Ushahidi-Haiti Crisis Map and also trained the majority of volunteers.”

Dd’s co-founder Emily Jacobi is a close friend and trusted colleague. So I emailed her about this fun new rumor back in October to see if I had somehow missed something. Emily replied: “It’s totally ludicrous to claim that Dd solely set up any of those processes. I do think we played an important role in helping to inform, document & systematize those workflows, which is a world away from claiming sole or even lead ownership of any of it.” Indeed, the workflows kept changing on a daily basis, and hundreds of volunteers were trained in person or online, often several times a day. That said, Dd absolutely took the lead in crafting the workflows & training the bulk of volunteers who spearheaded the Chile Crisis Map. I recommend reading up on Dd’s awesome projects in Haiti and worldwide here.

5. “FEMA Administrator Craig Fugate’s comment below about the Ushahidi Haiti Crisis Map was actually not about the Ushahidi project. Craig was confused and was actually referring to the Humanitarian OpenStreet Map (OSM) of Haiti.”

Again, I was stunned, but in a good way. Kate Chapman, the director of Humanitarian OpenStreetMap, is a good friend and trusted colleague, so I emailed her the following: “I still hear all kinds of rumors about Haiti but this is the *first* time I’ve come across this one and if this is indeed true then goodness gracious I really need to know so I can give credit where credit is due!” Her reply? She too had never heard this claim before. I trust her 100%, so if she ever does tell me that this new rumor is true, I’ll be the first to blog and tweet about it. I’m a huge fan of Humanitarian OpenStreetMap; they really do remarkable work, which is why I included 3 of their projects as case studies in a book chapter I just submitted for publication. In any event, I fully share Kate’s feelings on the rumors: “My feelings on anything that had to do with Haiti is it doesn’t really matter anymore. It has been 2 and a half years. Let’s look on to preparedness and how to improve.” Wise words from a wise woman.

[Screenshot of Craig Fugate’s tweet]

6. “Sabina Carlson who acted as the main point of contact between the Ushahidi Haiti project and the Haitian Diaspora also spearheaded the translation efforts and is critical of her Ushahidi Haiti Team members and in particular Patrick Meier for emphasizing the role of international actors and ignoring the Haitian Diaspora.”

This is probably one of the strangest lies yet. Everyone in Boston knows full well that Sabina was not directly focused on translation but rather on outreach and partnership building with the Haitian Diaspora. Sabina, who is a treasured friend, emailed me (out of the blue) when she heard about some of the poisonous rumors circulating. “This was a shock to me,” she wrote, “I would never say anything to put you down, Patrick, and I’m upset that my words were misinterpreted and used to do just that.” She then detailed exactly how the lie was propagated and by whom (she has the entire transcript).

The fact is this: none of us in Boston ever sought to portray the Diaspora as insignificant or to downplay their invaluable support. Why in the world would we ever do that? Robert and I detailed the invaluable role played by the Diaspora in our peer-reviewed study, for example. Moreover, I invited Sabina to join our Ushahidi-Haiti team precisely because the Diaspora were already responding in amazing ways and I knew they’d stay the course after the end of the emergency phase—we wanted to transfer full ownership of the Haiti Crisis Map to Haitian hands. In sum, it was crystal clear to every single one of us that Sabina was the perfect person to take on this very important responsibility. She represented the voice and interests of Haitians with incredible agility, determination and intelligence throughout our many months of work together, both in Boston and Haiti.


How To Use Technology To Counter Rumors During Crises: Anecdotes from Kyrgyzstan

I just completed a short field mission to Kyrgyzstan with UN colleagues and I’m already looking forward to the next mission. Flipping through several dozen pages of my handwritten notes just now explains why: example after example of astute resourcefulness and the creative use of information and communication technologies in Kyrgyzstan. I learned heaps.

For example, one challenge that local groups faced during periods of ethnic tension and violent conflict last year was the spread of rumors, particularly via SMS. These deliberate rumors ranged from humanitarian aid being poisoned to cross-border attacks carried out by a particular ethnic group. But many civil society groups were able to verify these rumors in near real-time using Skype.

When word of the conflict spread, the director of one such group got online and invited her friends and colleagues to a dedicated Skype chat group. Within two hours, some 2,000 people across the country had joined the chat group, with more knocking at the door; the group had reached the maximum capacity allowed by Skype. (They subsequently migrated to a web-based platform to continue the real-time filtering of information from around the country.)

The Skype chat was abuzz with people sharing and validating information in near real-time. When someone got wind of a rumor, they’d simply jump on Skype and ask if anyone could verify it. This method proved incredibly effective. Why? Because members of this Skype group constituted a relevant, trusted and geographically distributed network. A person would only add a colleague or two to the chat if they knew who this individual was, could vouch for them and believed that they had—or could have—important information to contribute given their location and/or contacts. (This reminded me of Gmail back in the day when you only had a certain number of invites, so one tended to choose carefully how to “spend” those invites.)

The degree of separation needed to verify a rumor was close to one. In the case of the supposed border attack, one member of the chat group had a contact in the army unit guarding the border crossing in question. A quick cell phone call confirmed within minutes that no attack was taking place. As for the rumor about the poisoned humanitarian aid, another member of the chat found the original phone numbers from which the false SMS’s were being sent. They called a personal contact at one of the telecommunication companies and asked whether the owners of these phones were in fact texting from the place where the aid was reportedly poisoned; they weren’t. Meanwhile, another member of the chat group had investigated the rumor in person and confirmed that the text messages were false.

This Skype detective network proved an effective method for the early detection and response to rumors. Once a rumor was identified as such, 2,000 people could share that information with their own networks within minutes. In addition, members of this Skype group were able to ping their media contacts and have the word spread even further. In at least two cases and in two different cities, telecommunication companies also collaborated by sending out broadcast SMS to notify subscribers about the false rumors.

I wonder if this model can be further improved on and replicated. Any thoughts from iRevolution readers would be most welcome.