
Verily: Crowdsourced Verification for Disaster Response

Social media is increasingly used for communicating during crises. This rise in Big (Crisis) Data means that finding the proverbial needle in the growing haystack of information is becoming a major challenge. Social media use during Hurricane Sandy produced a “haystack” of half a million Instagram photos and 20 million tweets. But which of these were actually relevant for disaster response, and could they have been detected in near real-time? The purpose of QCRI’s experimental Twitter Dashboard for Disaster Response project is to answer this question. But what about the credibility of the needles in the info-stack?


To answer this question, our Crisis Computing Team at QCRI has partnered with the Social Computing & Artificial Intelligence Lab at the Masdar Institute of Science and Technology. This applied research project began with a series of conversations in mid-2012 about DARPA’s Red Balloon Challenge. The challenge, posed in 2009, offered $40K to the individual or team that could find the correct locations of 10 red weather balloons discreetly placed across the continental United States, an area covering well over 3 million square miles (8 million square kilometers). My friend Riley Crane at MIT spearheaded the team that won the challenge in 8 hours and 52 minutes by using social media.

Riley and I connected right after the Haiti Earthquake to start exploring how we might apply his team’s winning strategy to disaster response. But we were pulled in different directions by PhD and post-doc obligations and start-ups. Thankfully, however, Riley’s colleague Iyad Rahwan got in touch with me to continue these conversations when I joined QCRI. Iyad is now at the Masdar Institute. We’re collaborating with him and his students to apply collective intelligence insights from the balloon challenge to address the problem of false or misleading content shared on social media during disasters.


If 10 balloons planted across 3 million square miles can be found in under 9 hours, then surely the answer to the question “Did Hurricane Sandy really flood this McDonald’s in Virginia?” can be found in under 9 minutes, given that Virginia is 98% smaller than the “haystack” of the continental US. Moreover, the location of the restaurant would already be known or easily findable. The picture below, which made the rounds on social media during the hurricane, is in reality part of an art exhibition produced in 2009. One remarkable aspect of the social media response to Hurricane Sandy was how quickly false information got debunked and exposed as false—not only by one good (digital) Samaritan, but by several.


Having access to accurate information during a crisis leads to more targeted self-organized efforts at the grassroots level. Accurate information is also important for emergency response professionals. The verification efforts during Sandy were invaluable but disjointed and confined to the efforts of a select few individuals. What if thousands could be connected and mobilized to cross-reference and verify suspicious content shared on social media during a disaster?

Say an earthquake has just struck Santiago, Chile, and contradictory reports begin to circulate on social media that the bridge below may have been destroyed. Determining whether transportation infrastructure is still usable has important consequences for managing the logistics of a disaster response operation. So what if, instead of crowdsourcing the correct locations of balloons across an entire country, one could crowdsource the collection of evidence in just one city struck by a disaster to determine, within a matter of minutes, whether said bridge had actually been destroyed?


To answer these questions, QCRI and Masdar have launched an experimental platform called Verily. We are applying best practices in time-critical crowdsourcing, coupled with gamification and reputation mechanisms, to leverage the goodwill of (hopefully) thousands of digital Samaritans during disasters. This is experimental research, which means it may very well not succeed as envisioned. But that is a luxury we have at QCRI—to innovate next-generation humanitarian technologies via targeted iteration and experimentation. For more on this project, our concept paper is available as a Google Doc here. We invite feedback and welcome collaborators.

In the meantime, we are exploring the possibility of integrating the InformaCam mobile application as part of Verily. InformaCam adds important metadata to images and videos taken by eyewitnesses. “The metadata includes information like the user’s current GPS coordinates, altitude, compass bearing, light meter readings, the signatures of neighboring devices, cell towers, and wifi networks; and serves to shed light on the exact circumstances and contexts under which the digital image was taken.” We are also talking to our partners at MIT’s Computer Science & Artificial Intelligence Lab in Boston about other mobile solutions that may facilitate the use of Verily.
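Purely to make the quoted description concrete, a record of that kind might look something like the sketch below. The field names and values are illustrative assumptions on my part, not InformaCam’s actual schema or API:

```python
# Illustrative sketch only: a plausible shape for the kind of metadata quoted
# above. Field names and values are invented; this is NOT InformaCam's schema.
photo_metadata = {
    "gps": {"lat": -33.4489, "lon": -70.6693},       # user's current coordinates
    "altitude_m": 520.0,                              # altitude
    "compass_bearing_deg": 145,                       # compass bearing
    "light_meter_lux": 320,                           # light meter reading
    "neighboring_devices": ["bt-signature-a1b2c3"],   # signatures of nearby devices
    "cell_towers": ["mcc730-mnc01-lac2-cid3401"],     # visible cell towers
    "wifi_networks": ["downtown-cafe-guest"],         # visible wifi networks
    "captured_at": "2013-02-16T02:26:00Z",            # capture timestamp (UTC)
}
```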

Again, this is purely experimental and applied research at this point. We hope to have an update on our progress in the coming months.


See also:

  •  Crowdsourcing Critical Thinking to Verify Social Media During Crises [Link]
  •  Using Crowdsourcing to Counter Rumors on Social Media [Link]

Six Degrees of Separation: Implications for Verifying Social Media

The Economist recently published this insightful article entitled “Six Degrees of Mobilisation: To what extent can social networking make it easier to find people and solve real-world problems?” The notion of six degrees of separation comes from Stanley Milgram’s experiments in the 1960s, which found that there were, on average, six degrees of separation between any two people in the US. Last year, Facebook found that users on the social network were separated by an average of 4.7 hops. The Economist thus asks the following, fascinating question:

“Can this be used to solve real-world problems, by taking advantage of the talents and connections of one’s friends, and their friends? That is the aim of a new field known as social mobilisation, which treats the population as a distributed knowledge resource which can be tapped using modern technology.”

The article refers to DARPA’s Red Balloon Challenge, which I already blogged about here: “Time-Critical Crowdsourcing for Social Mobilization and Crowd-Solving.” The Economist also references DARPA’s Tag Challenge. In both cases, the winning teams leveraged social media using crowdsourcing and clever incentive mechanisms. Can this approach also be used to verify social media content during a crisis?

This new study on disasters suggests that the “degrees of separation” between any two organizations in the field is five. So if the locations of red balloons and of individuals can be crowdsourced surprisingly quickly, then can the evidence necessary to verify social media content during a disaster be collected as rapidly and reliably? If we are only separated by four to six degrees, then this would imply that it only takes that many hops to find someone connected to me (albeit indirectly) who could potentially confirm or disprove the authenticity of a particular piece of information. This approach was used very successfully in Kyrgyzstan a couple of years ago. Can we develop a platform to facilitate this process? And if so, what design features (e.g., gamification) are necessary to mobilize participants and make this tool a success?

Time-Critical Crowdsourcing for Social Mobilization and Crowd-Solving

My good friend Riley Crane just co-authored a very interesting study entitled “Time-Critical Social Mobilization” in the peer-reviewed journal Science. Riley spearheaded the team at MIT that won the DARPA Red Balloon competition last year. His team found the locations of all 10 weather balloons hidden around the continental US in under 9 hours. While we were already discussing alternative approaches to crowdsourcing for social impact before the competition, the approach he designed to win certainly gave us a whole lot more to talk about, given the work I’d been doing on crowdsourcing crisis information and near real-time crisis mapping.

Crowd-solving non-trivial problems in quasi real-time poses two important challenges. First, a very large number of participants is typically required, coupled with extremely fast execution. Second, some sort of search process is usually needed: “For example, search may be conducted by members of the mobilized community for survivors after a natural disaster.” Recruiting large numbers of participants, however, requires that individuals be motivated to actually conduct the search and participate in the information diffusion. Clearly, “providing appropriate incentives is a key challenge in social mobilization.”

This explains the rationale behind DARPA’s decision to launch the Red Balloon Challenge: “to explore the roles the Internet and social networking play in the timely communication, wide-area team-building, and urgent mobilization required to solve broad-scope, time-critical problems.” So 10 red weather balloons were discreetly placed at different locations in the continental US. A senior analyst at the National Geospatial-Intelligence Agency is said to have characterized the challenge as impossible for conventional intelligence-gathering methods. Riley’s team found all 10 balloons in 8 hours and 52 minutes. How did they do it?

Some 36 hours before the start of the challenge, the team at MIT had already recruited over 4,000 participants using a “recursive incentive mechanism.” They used the $40,000 in prize money that would be awarded to the winners of the challenge as a “financial incentive structure rewarding not only the people who correctly located the balloons but also those connecting the finder [back to the MIT team].” If Riley and colleagues won:

we would allocate $4000 in prize money to each of the 10 balloons. We promised $2000 per balloon to the first person to send in the correct balloon coordinates. We promised $1000 to the person who invited that balloon finder onto the team, $500 to whoever invited the inviter, $250 to whoever invited that person, and so on. The underlying structure of the “recursive incentive” was that whenever a person received prize money for any reason, the person who invited them would also receive money equal to half that awarded to their invitee.

In other words, the reward offered by Team MIT “scales with the size of the entire recruitment tree (because larger trees are more likely to succeed), rather than depending solely on the immediate recruited friends.” What is stunning about Riley et al.’s approach is that their “attrition rate” was almost half the rate of other comparable social network experiments. In other words, participants in the MIT recruitment tree were about twice as likely to “play the game,” so to speak, rather than give up. In addition, the number recruited by each individual followed a power law distribution, which suggests a possible tipping-point dynamic.
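The arithmetic of that scheme is simple enough to sketch in a few lines of Python. The names and the function below are mine, purely for illustration, not the MIT team’s actual code:

```python
# Minimal sketch of the recursive incentive described above: the finder of a
# balloon receives $2,000, and each person up the invitation chain receives
# half of what their invitee received.
def recursive_payouts(invite_chain, finder_reward=2000.0):
    """Return {person: payout} for one balloon; invite_chain[0] found it."""
    payouts = {}
    reward = finder_reward
    for person in invite_chain:
        payouts[person] = reward
        reward /= 2  # each inviter gets half of their invitee's award
    return payouts

# Hypothetical chain: Dana found the balloon, Carol invited Dana,
# Bob invited Carol, and Alice invited Bob.
print(recursive_payouts(["Dana", "Carol", "Bob", "Alice"]))
# -> {'Dana': 2000.0, 'Carol': 1000.0, 'Bob': 500.0, 'Alice': 250.0}
```

Because the rewards halve at each step, the payouts form a geometric series, so the total per balloon stays below the $4,000 allocated to it no matter how long the invitation chain grows.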

In conclusion, the mechanism devised by the winning team “simultaneously provides incentives for participation and for recruiting more individuals to the cause.” So what insights does this study provide vis-à-vis live crisis mapping initiatives that are volunteer-based, like those spearheaded by the Standby Volunteer Task Force (SBTF) and Humanitarian OpenStreetMap Team (HOT) communities? While these networks don’t have any funding to pay volunteers (this would go against the spirit of volunteerism in any case), I think a number of insights can nevertheless be drawn.

In the volunteer sector, the “currency of exchange” is credit: that is, the knowledge and acknowledgement that I participated in the Libya Crisis Map to support the UN’s humanitarian operations, for example. I recently introduced SBTF “deployment badges” to serve, in part, as a public acknowledgment incentive. SBTF volunteers can now add badges for deployments they were engaged in, e.g., “Sudan 2011”, “New Zealand 2011”, etc.

What about using a recursive credit mechanism? For example, it would be ideal if volunteers could find out how a given report they worked on was ultimately used by a humanitarian colleague monitoring a live map. Using the Red Balloon analogy, the person who finds the balloon should be able to reward all those in her “recruitment tree”, or in our case the “SBTF network”. Let’s say Helena works for the UN and used the Libya Crisis Map whilst in Tripoli. She finds an important report on the map and shares it with her colleagues on the Tunisian border, who decide to take some kind of action as a result. Now let’s say this report came from a tweet that Chrissy in the Media Monitoring Team found while volunteering on the deployment. She shared the tweet with Jess in the GPS Team, who found the coordinates for the location referred to in that tweet. Melissa then added this to the live map being monitored by the UN. Wouldn’t it be ideal if each of them could be sent an email letting them know about Helena’s response? I realize this isn’t trivial to implement, but what would have to be in place to make something like this actually happen? Any thoughts?
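As a thought experiment only, the notification step might look like the sketch below. The report structure, volunteer names, and the send_email stub are assumptions for illustration, not an existing SBTF or Ushahidi feature:

```python
# Rough sketch of the recursive credit idea: when a responder acts on a report,
# every volunteer in its handling chain gets notified. Everything here is a
# hypothetical placeholder.
def send_email(address, message):
    print(f"To {address}: {message}")  # stand-in for a real mail service

def credit_handling_chain(report, acted_on_by):
    """Notify each volunteer who handled the report, most recent first."""
    for volunteer in reversed(report["handling_chain"]):
        send_email(
            volunteer["email"],
            f"The report '{report['title']}' you helped process "
            f"was acted on by {acted_on_by}.",
        )

report = {
    "title": "Road blocked near the port",
    "handling_chain": [
        {"name": "Chrissy", "email": "chrissy@example.org"},  # found the tweet
        {"name": "Jess", "email": "jess@example.org"},        # geolocated it
        {"name": "Melissa", "email": "melissa@example.org"},  # added it to the map
    ],
}
credit_handling_chain(report, acted_on_by="Helena (UN, Tripoli)")
```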

On the recruitment side, we haven’t really done anything explicitly to incentivize current volunteers to recruit additional volunteers. Could we incentivize this beyond giving credit? Perhaps we could design a game-like point system? Or a fun ranking system with different titles assigned according to the number of volunteers recruited? Another thought would be to simply ask existing volunteers to recruit one or two additional volunteers every year. We currently have about 700 volunteers in the SBTF, so this might be one way to grow substantially in size.
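To make the ranking idea concrete, here is a toy sketch; the thresholds and titles are invented purely for illustration and are not an actual SBTF scheme:

```python
# Toy sketch: titles awarded by the number of volunteers someone has recruited.
RANKS = [
    (10, "Mobilizer"),
    (5, "Connector"),
    (1, "Recruiter"),
    (0, "Volunteer"),
]

def rank_for(recruits):
    """Return the first title whose threshold the recruit count meets."""
    for threshold, title in RANKS:
        if recruits >= threshold:
            return title

print(rank_for(0))   # Volunteer
print(rank_for(3))   # Recruiter
print(rank_for(12))  # Mobilizer
```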

I’m not sure what type of mechanism we could devise to simultaneously provide incentives for participation and recruitment. Perhaps those incentives already exist, in the sense that the SBTF’s response to international crises may itself serve as a sufficient draw. I’d love to hear what iRevolution readers think, especially if you have good ideas that we could realistically implement!