Category Archives: Crowdsourcing

May the Crowd Be With You

Three years ago, 167 digital volunteers and I combed through satellite imagery of Somalia to support the UN Refugee Agency (UNHCR) on a joint project. The purpose of this digital humanitarian effort was to identify how many Somalis had been displaced (easily 200,000) due to fighting and violence. Earlier this year, 239 passengers and crew went missing when Malaysia Airlines Flight 370 suddenly disappeared. In response, some 8 million digital volunteers mobilized as part of the digital search and rescue effort that followed.


So in the first case, 168 volunteers were looking for 200,000+ people displaced by violence and in the second case, some 8,000,000 volunteers were looking for 239 missing souls. Last year, in response to Typhoon Haiyan, digital volunteers spent 200 hours or so tagging social media content in support of the UN’s rapid disaster damage assessment efforts. According to responders at the time, some 11 million people in the Philippines were affected by the Typhoon. In contrast, well over 20,000 years of volunteer time went into the search for Flight 370’s missing passengers.

What to do about this heavily skewed distribution of volunteer time? Can (or should) we do anything? Are we simply left with “May the Crowd be with You”? The massive (and as yet unparalleled) online response to Flight 370 won’t be a one-off. We’re entering an era of mass-sourcing in which entire populations can be mobilized online. What happens when future mass-sourcing efforts ask digital volunteers to look for military vehicles and aircraft in satellite images taken of a mysterious, unnamed “enemy country” for unknown reasons? Think this is far-fetched? As noted in my forthcoming book, Digital Humanitarians, this kind of online, crowdsourced military surveillance operation has already taken place (at least once).

As we continue heading towards this new era of mass-sourcing, those with the ability to mobilize entire populations online will indeed wield an impressive new form of power. And as millions of volunteers continue tagging and tracing various features, this volunteer-generated data, combined with machine learning, will be used to automate the future tagging and tracing needs of militaries and multi-billion dollar companies, thus obviating the need for large volumes of volunteers (especially handy should volunteers seek to boycott these digital operations).
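
To make this concrete, here is a minimal sketch (with invented toy data, not any real military pipeline) of how crowd-generated tags could seed an automated classifier. A real system would extract features from actual image tiles; here a simple nearest-centroid rule stands in for the machine learning step:

```python
# Hypothetical sketch: volunteer tags become training labels for a
# simple nearest-centroid classifier, so future images can be tagged
# automatically. Feature vectors and labels are invented toy data.

def train_centroids(tagged_examples):
    """Average the feature vectors the crowd tagged for each label."""
    sums, counts = {}, {}
    for features, label in tagged_examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in vec]
            for label, vec in sums.items()}

def classify(features, centroids):
    """Tag a new image with the label of the closest centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(features, centroids[label]))

# Crowd-generated training tags (toy 2-D "features"):
crowd_tags = [
    ([0.9, 0.1], "vehicle"), ([0.8, 0.2], "vehicle"),
    ([0.1, 0.9], "building"), ([0.2, 0.8], "building"),
]
centroids = train_centroids(crowd_tags)
print(classify([0.85, 0.15], centroids))  # → vehicle
```

The point is simply that once enough volunteer tags exist, the labels themselves become the training data.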

At the same time, however, the rise of this artificial intelligence may level the playing field. But few players out there have ready access to high resolution satellite imagery and the actual technical expertise to turn volunteer-generated tags/traces into machine learning classifiers. To this end, perhaps one way forward is to try to “democratize” access to both satellite imagery and the technology needed to make sense of this “Big Data”. Easier said than done. But maybe less impossible than we think. Perhaps new, disruptive initiatives like Planet Labs will help pave the way forward.

bio

Proof: How Crowdsourced Election Monitoring Makes a Difference

My colleagues Catie Bailard & Steven Livingston have just published the results of their empirical study on the impact of citizen-based crowdsourced election monitoring. Readers of iRevolution may recall that my doctoral dissertation analyzed the use of crowdsourcing in repressive environments and specifically during contested elections. This explains my keen interest in the results of my colleagues’ new, data-driven study, which suggests that crowdsourcing does have a measurable and positive impact on voter turnout.

Reclaim Naija

Catie and Steven are “interested in digitally enabled collective action initiatives” spearheaded by “nonstate actors, especially in places where the state is incapable of meeting the expectations of democratic governance.” They are particularly interested in measuring the impact of said initiatives. “By leveraging the efficiencies found in small, incremental, digitally enabled contributions (an SMS text, phone call, email or tweet) to a public good (a more transparent election process), crowdsourced elections monitoring constitutes [an] important example of digitally-enabled collective action.” To be sure, “the successful deployment of a crowdsourced elections monitoring initiative can generate information about a specific political process—information that would otherwise be impossible to generate in nations and geographic spaces with limited organizational and administrative capacity.”

To this end, their new study tests for the effects of citizen-based crowdsourced election monitoring efforts on the 2011 Nigerian presidential elections. More specifically, they analyzed close to 30,000 citizen-generated reports of failures, abuses and successes which were publicly crowdsourced and mapped as part of the Reclaim Naija project. Controlling for a number of factors, Catie and Steven find that the number and nature of crowdsourced reports is “significantly correlated with increased voter turnout.”
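
To illustrate the kind of relationship being tested, here is a toy sketch with made-up numbers (not the study's data, and without the control variables their full regression includes): a simple Pearson correlation between report counts per area and turnout.

```python
# Illustrative only: toy report counts and turnout figures, not the
# study's actual dataset. A positive correlation here mirrors the
# direction of the published finding.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

reports_per_area = [3, 12, 7, 25, 18]    # hypothetical crowdsourced reports
turnout_percent  = [41, 52, 47, 63, 58]  # hypothetical voter turnout
r = pearson(reports_per_area, turnout_percent)
print(round(r, 2))
```

Correlation alone says nothing about mechanism, of course, which is exactly why the authors' explanation below matters.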

Reclaim Naija 2

What explains this correlation? The authors “do not argue that this increased turnout is a result of crowdsourced reports increasing citizens’ motivation or desire to vote.” They emphasize that their data does not speak to individual citizen motivations. Instead, Catie and Steven show that “crowdsourced reports provided operationally critical information about the functionality of the elections process to government officials. Specifically, crowdsourced information led to the reallocation of resources to specific polling stations (those found to be in some way defective by information provided by crowdsourced reports) in preparation for the presidential elections.”

(As an aside, this finding is also relevant for crowdsourced crisis mapping efforts in response to natural disasters. In these situations, citizen-generated disaster reports can—and in some cases do—provide humanitarian organizations with operationally critical information on disaster damage and resulting needs).

In sum, “the electoral deficiencies revealed by crowdsourced reports […] provided actionable information to officials that enabled them to reallocate election resources in preparation for the presidential election […]. This strengthened the functionality of those polling stations, thereby increasing the number of votes that could be successfully cast and counted–an argument that is supported by both quantitative and qualitative data brought to bear in this analysis.” Another important finding is that the resulting “higher turnout in the presidential election was of particular benefit to the incumbent candidate.” As Catie and Steven rightly note, “this has important implications for how various actors may choose to utilize the information generated by new [technologies].”

In conclusion, the authors argue that “digital technologies fundamentally change information environments and, by doing so, alter the opportunities and constraints that the political actors face.” This new study is an important contribution to the literature and should be required reading for anyone interested in digitally-enabled, crowdsourced collective action. Of course, the analysis focuses on “just” one case study, which means that the effects identified in Nigeria may not occur in other crowdsourced, election monitoring efforts. But that’s another reason why this study is important—it will no doubt catalyze future research to determine just how generalizable these initial findings are.


See also:

  • Traditional Election Monitoring Versus Crowdsourced Monitoring: Which Has More Impact? [link]
  • Artificial Intelligence for Monitoring Elections (AIME) [link]
  • Automatically Classifying Crowdsourced Election Reports [link]
  • Evolution in Live Mapping: The Egyptian Elections [link]

Piloting MicroMappers: How to Become a Digital Ranger in Namibia (Revised!)

Many thanks to all of you who have signed up to search and protect Namibia’s beautiful wildlife! (There’s still time to sign up here; you’ll receive an email on Friday, September 26th with the link to volunteer).

Our MicroMappers Wildlife Challenge will launch on Friday, September 26th and run through Sunday, September 28th. More specifically, we’ll begin the search for Namibia’s wildlife at 12noon Namibia time that Friday (which is 12noon Geneva, 11am London, 6am New York, 6pm Shanghai, 7pm Tokyo, 8pm Sydney). You can join the expedition at any time after this. Simply add yourself to this list-serve to participate. Anyone who can get online can be a digital ranger, no prior experience necessary. We’ll continue our digital search until sunset on Sunday evening.

Namibia Map 1

As noted here, rangers at Kuzikus Wildlife Reserve need our help to find wild animals in their reserve. This will help our ranger friends to better protect these beautiful animals from poachers and other threats. According to the rangers, “Rhino poaching continues to be a growing problem that threatens to extinguish some rhino species within a decade or two. Rhino monitoring is thus important for their protection. Using digital maps in combination with MicroMappers to trace aerial images of rhinos could greatly improve rhino monitoring efforts.”

NamibiaMap2

At 12noon Namibia time on Friday, September 26th, we’ll send an email to the above list-serve with the link to our MicroMappers Aerial Clicker, which we’ll use to crowdsource the search for Namibia’s wildlife. We’ll also publish a blog post on MicroMappers.org with the link. Here’s what the Clicker looks like (click to enlarge the Clicker):

MM Aerial Clicker Namibia

When we find animals, we’ll draw “digital shields” around them. Before we show you how to draw these shields and what types of animals we’ll be looking for, here are examples of helpful shields (versus unhelpful ones); note that we’ve had to change these instructions, so please review them carefully! 

MM Rihno Zoom

This looks like two animals! So let’s draw two shields.

MM Rhine New YES

The white outlines are the shields that we drew using the Aerial Clicker above. Notice that our shields include the shadows of the animals; this is important. If the animals are close to each other, the shields can overlap, but there can only be one shield per animal (one shield per rhino in this case : )

MM Rhino New NO

These shields are too close to the animals, please give them more room!

MM Rhino No

These shields are too big.

If you’ve found something that may be an animal but you’re not sure, then please draw a shield anyway just in case. Don’t worry if most pictures don’t have any animals. Knowing where the animals are not is just as important as knowing where they are!

MM Giraffe Zoom

This looks like a giraffe! So let’s draw a shield.

MM Giraffe No2

This shield does not include the giraffe’s shadow! So let’s try again.

MM Giraffe No

This shield is too large. Let’s try again!

MM Giraffe New YES

Now that’s perfect!

Here are some more pictures of animals that we’ll be looking for. As a digital ranger, you’ll simply need to draw shields around these animals, that’s all there is to it. The shields can overlap if need be, but remember: one shield per animal, include their shadows and give them some room to move around : )

MM Ostritch

Can you spot the ostriches? Click the picture above to enlarge. You’ll be able to zoom in with the Aerial Clicker during the Wildlife Challenge.

MM Oryx

Can you spot the five oryxes in the picture above? (Actually, there may be a 6th one; can you see it in the shadows?)

MM Impala

And the impalas in the left of the picture? Again, you’ll be able to zoom in with the Aerial Clicker.

So how exactly does this Aerial Clicker work? Here’s a short video that shows just how easy it is to draw a digital shield using the Clicker (note that we’ve had to change the instructions, so please review this video carefully!):

Thanks for reading and for watching! The results of this expedition will help rangers in Namibia make sure they have found all the animals, which is important for their wildlife protection efforts. We’ll have thousands of aerial photographs to search through next week, which means that our ranger friends in Namibia need as much help as possible! So this is where you come in: please spread the word and invite your friends, families and colleagues to search and protect Namibia’s beautiful wildlife.

MicroMappers is a joint project with the United Nations (OCHA), and the purpose of this pilot is also to test the Aerial Clicker for future humanitarian response efforts. More here. Any questions or suggestions? Feel free to email me at patrick@iRevolution.net or add them in the comments section below.

Thank you!

Piloting MicroMappers: Crowdsourcing the Analysis of UAV Imagery for Disaster Response

New update here!

UAVs are increasingly used in humanitarian response. We have thus added a new Clicker to our MicroMappers collection. The purpose of the “Aerial Clicker” is to crowdsource the tagging of aerial imagery captured by UAVs in humanitarian settings. Trying out new technologies during major disasters can pose several challenges, however. So we’re teaming up with Drone Adventures, Kuzikus Wildlife Reserve, Polytechnic of Namibia, and l’École Polytechnique Fédérale de Lausanne (EPFL) to try out our new Clicker using high-resolution aerial photographs of wild animals in Namibia.

Kuzikus1

As part of their wildlife protection efforts, rangers at Kuzikus want to know how many animals (and what kinds) are roaming about their wildlife reserve. So Kuzikus partnered with Drone Adventures and EPFL’s Cooperation and Development Center (CODEV) and the Laboratory of Geographic Information Systems (LASIG) to launch the SAVMAP project, which stands for “Near real-time ultrahigh-resolution imaging from unmanned aerial vehicles for sustainable land management and biodiversity conservation in semi-arid savanna under regional and global change.” SAVMAP was co-funded by CODEV through LASIG. You can learn more about their UAV flights here.

Our partners are interested in experimenting with crowdsourcing to make sense of this aerial imagery and raise awareness about wildlife in Namibia. As colleagues at Kuzikus recently told us, “Rhino poaching continues to be a growing problem that threatens to extinguish some rhino species within a decade or two. Rhino monitoring is thus important for their protection. One problematic is to detect rhinos in large areas and/or dense bush areas. Using digital maps in combination with MicroMappers to trace aerial images of rhinos could greatly improve rhino monitoring efforts.” 

So our pilot project serves two goals: 1) Trying out the new Aerial Clicker for future humanitarian deployments; 2) Assessing whether crowdsourcing can be used to correctly identify wild animals.

MM Aerial Clicker

Can you spot the zebras in the aerial imagery above? If so, you’re already a digital ranger! No worries, you won’t need to know that those are actually zebras, you’ll simply outline any animals you find (using your mouse) and click on “Add my drawings.” Yes, it’s that easy : )

We’ll be running our Wildlife Challenge from September 26th-28th. To sign up for this digital expedition to Namibia, simply join the MicroMappers list-serve here. We’ll be sure to share the results of the Challenge with all volunteers who participate and with our partners in Namibia. We’ll also be creating a wildlife map based on the results so our friends know where the animals have been spotted (by you!).

MM_Rhino

Given that rhino poaching continues to be a growing problem in Namibia (and elsewhere), we will obviously not include the location of rhinos in our wildlife map. You’ll still be able to look for and trace rhinos (like those above) as well as other animals like ostriches, oryxes & giraffes, for example. Hint: shadows often reveal the presence of wild animals!

MM_Giraffe

Drone Adventures hopes to carry out a second mission in Namibia early next year. So if we’re successful in finding all the animals this time around, then we’ll have the opportunity to support the Kuzikus Reserve again in their future protection efforts. Either way, we’ll be better prepared for the next humanitarian disaster thanks to this pilot. MicroMappers is developed by QCRI and is a joint project with the United Nations Office for the Coordination of Humanitarian Affairs (OCHA).

Any questions or suggestions? Feel free to email me at patrick@iRevolution.net or add them in the comments section below. Thank you!

Disaster Tweets Coupled With UAV Imagery Give Responders Valuable Data on Infrastructure Damage

My colleague Leysia Palen recently co-authored an important study (PDF) on tweets posted during last year’s major floods in Colorado. As Leysia et al. write, “Because the flooding was widespread, it impacted many canyons and closed off access to communities for a long duration. The continued storms also prevented airborne reconnaissance. During this event, social media and other remote sources of information were sought to obtain reconnaissance information [...].”

1coloflood

The study analyzed 212,672 unique tweets generated by 57,049 unique Twitter users. Of these tweets, 2,658 were geo-tagged. The researchers combed through these geo-tagged tweets for any information on infrastructure damage. A sample of these is included below (click to enlarge). Leysia et al. were particularly interested in geo-tagged tweets with pictures of infrastructure damage.
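
To give a flavor of this kind of filtering, here is a hedged sketch; the tweet structure and keyword list below are illustrative inventions, not the researchers' actual method:

```python
# Toy sketch: keep only geo-tagged tweets whose text mentions
# infrastructure damage. Keyword matching is a deliberately crude
# stand-in for the manual combing described in the study.

DAMAGE_TERMS = {"bridge", "road", "collapsed", "washed", "flooded", "damage"}

def damage_reports(tweets):
    reports = []
    for t in tweets:
        if t.get("geo") is None:
            continue  # only a small fraction of tweets carry coordinates
        words = set(t["text"].lower().replace(",", " ").split())
        if words & DAMAGE_TERMS:
            reports.append(t)
    return reports

tweets = [
    {"text": "Bridge on Hwy 34 collapsed!", "geo": (40.41, -105.10)},
    {"text": "Stay safe everyone",          "geo": (40.02, -105.27)},
    {"text": "Road washed out near Lyons",  "geo": None},
]
print(len(damage_reports(tweets)))  # only the first tweet qualifies
```

Note how the third tweet is lost despite being relevant, which is exactly the geo-tagging gap the authors discuss below.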

Screen Shot 2014-09-07 at 3.17.34 AM

They overlaid these geo-tagged pictures on satellite and UAV/aerial imagery of the disaster-affected areas. The latter was captured by Falcon UAV. The satellite and aerial imagery provided the researchers with an easy way to distinguish between vegetation and water. “Most tweets appeared to fall primarily within the high flood hazard zones. Most bridges and roads that were located in the flood plains were expected to experience a high risk of damage, and the tweets and remote data confirmed this pattern.” According to Shideh Dashti, an assistant professor of civil, environmental and architectural engineering, and one of the co-authors, “we compared those tweets to the damage reported by engineering reconnaissance teams and they were well correlated.”

falcon uav flooding

To this end, “by making use of real-time reporting by those affected in a region, including their posting of visual data,” Leysia and team “show that tweets may be used to directly support engineering reconnaissance by helping to digitally survey a region and navigate optimal paths for direct observation.” In sum, the results of this study demonstrate “how tweets, particularly with postings of visual data and references to location, may be used to directly support geotechnical experts by helping to digitally survey the affected region and to navigate optimal paths through the physical space in preparation for direct observation.”

Since the vast majority of tweets are not geo-tagged, GPS coordinates for potentially important pictures in these tweets are not available. The authors thus recommend looking into using natural language processing (NLP) techniques to “expose hazard-specific and site-specific terms and phrases that the layperson uses to report damage in situ.” They also suggest that a “more elaborate campaign that instructs people how to report such damage via tweets [...] may help get better reporting of damage across a region.”
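
One simple way to surface such hazard-specific terms, sketched here with tiny invented corpora (a real system would use large collections and proper tokenization), is to compare word frequencies in disaster tweets against everyday tweets:

```python
# Sketch of the NLP direction the authors suggest: score each word by
# how much more often it appears in disaster tweets than in a
# background sample. Add-one smoothing avoids division by zero.
from collections import Counter

def distinctive_terms(disaster_texts, background_texts, top_n=3):
    dis = Counter(w for t in disaster_texts for w in t.lower().split())
    bg  = Counter(w for t in background_texts for w in t.lower().split())
    score = {w: (dis[w] + 1) / (bg[w] + 1) for w in dis}
    return [w for w, _ in sorted(score.items(), key=lambda kv: -kv[1])[:top_n]]

disaster = ["creek flooding over the road", "road washed out by flooding"]
everyday = ["nice day for a walk", "coffee on the road this morning"]
print(distinctive_terms(disaster, everyday))
```

Even this crude ratio pushes "flooding" to the top while common words like "the" and "road" sink, which is the intuition behind exposing "hazard-specific and site-specific terms."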

These findings are an important contribution to the humanitarian computing space. For us at QCRI, this research suggests we may be on the right track with MicroMappers, a crowdsourcing (technically a microtasking) platform to filter and geo-tag social media content including pictures and videos. MicroMappers was piloted last year in response to Typhoon Haiyan. We’ve since been working on improving the platform and extending it to also analyze UAV/aerial imagery. We’ll be piloting this new feature in coming weeks. Ultimately, our aim is for MicroMappers to create near real-time Crisis Maps that provide an integrated display of relevant Tweets, pictures, videos and aerial imagery during disasters.


See also:

  • Using AIDR to Automatically Collect & Analyze Disaster Tweets [link]
  • Crisis Map of UAV Videos for Disaster Response [link]
  • Humanitarians in the Sky: Using UAVs for Disaster Response [link]
  • Digital Humanitarian Response: Why Moving from Crowdsourcing to Microtasking is Important [link]

From Russia with Love: A Match.com for Disaster Response

I’ve been advocating for the development of a “Match.com” for disaster response since early 2010. Such a platform would serve to quickly match hyperlocal needs with relevant resources available at the local and national level, thus facilitating and accelerating self-organization following major disasters. Why advocate for a platform modeled after an online dating website? Because self-organized mutual-aid is an important driver of community resilience.

Russian Bell

Obviously, self-organization is not dependent on digital technology. The word Rynda, for example, is an old Russian word for a “village bell” which was used by local communities to self-organize during emergencies. Interestingly, Rynda became a popular meme on social media during fires in 2010. As my colleague Gregory Asmolov notes in his brilliant new study, a Russian blogger at the time of the fires “posted an emotional open letter to Prime Minister Putin, describing the lack of action by local authorities and emergency services.” In effect, the blogger demanded a “return to an old tradition of self-organization in local communities,” subsequently exclaiming “bring back the Rynda!” This demand grew into a popular meme symbolizing the catastrophic failure of the formal system’s response to the massive fires.

At the time, my colleagues Gregory, Alexey Sidorenko & Glafira Parinos launched the Help Map above in an effort to facilitate self-organization and mutual aid. But as Gregory notes in his new study, “The more people were willing to help, the more difficult it was to coordinate the assistance and to match resources with needs.” Moreover, the Help Map continued to receive reports on needs and offers-of-help after the fires had subsided. To be sure, reports of flooding soon found their way to the map, for example. Gregory, Alexey, Glafira and team thus launched “Virtual Rynda: The Help Atlas” to facilitate self-help in response to a variety of situations beyond sudden-onset crises.

“We believed that in order to develop the capacity and resilience to respond to crisis situations we would have to develop the potential for mutual aid in everyday life. This would rely on an idea that emergency and everyday-life situations were interrelated. While people’s motivation to help one another is lower during non-emergency situations, if you facilitate mutual aid in everyday life and allow people to acquire skills in using Internet-based technologies to help one another or in asking for assistance, this will help to create an improved capacity to fulfill the potential of mutual aid the next time a disaster happens. [...] The idea was that ICTs could expand the range within which the tolling of the emergency bell could be heard. Everyone could ‘ring’ the ‘Virtual Rynda’ when they needed help, and communication networks would magnify the sound until it reached those who could come and help.”

In order to accelerate and scale the matching of needs & resources, Gregory and team (pictured below) sought to develop a matchmaking algorithm. Rynda would ask users to specify what the need was, where (geographically) the need was located and when (time-wise) the need was requested. “On the basis of this data, computer-based algorithms & human moderators could match offers with requests and optimize the process of resource allocation.” Rynda also included personal profiles, enabling volunteers “to develop an online reputation and increase trust between those needing help and those who could offer assistance. Every volunteer profile included not only personal information, but also a history of the individual’s previous activities within the platform.” To this end, in addition to “Help Requests” & “Help Offers,” Rynda also included an entry for “Help Provided” to close the feedback loop.
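
As a rough illustration of this what/where/when matching, here is a sketch in which field names and scoring weights are my own assumptions, not Rynda's actual algorithm: an offer must match the request's category, and is then ranked by distance and time-window overlap.

```python
# Hypothetical matchmaking sketch: score offers against a request by
# category (what), proximity (where), and availability (when).
from math import hypot

def match_score(request, offer):
    if request["what"] != offer["what"]:
        return 0.0  # category must match exactly
    dx = request["where"][0] - offer["where"][0]
    dy = request["where"][1] - offer["where"][1]
    near = 1.0 / (1.0 + hypot(dx, dy))          # closer is better
    available = offer["when"][0] <= request["when"] <= offer["when"][1]
    return near * (1.0 if available else 0.3)   # penalize bad timing

def best_offer(request, offers):
    ranked = sorted(offers, key=lambda o: match_score(request, o), reverse=True)
    return ranked[0] if ranked and match_score(request, ranked[0]) > 0 else None

request = {"what": "transport", "where": (55.7, 37.6), "when": 14}
offers = [
    {"id": 1, "what": "clothing",  "where": (55.7, 37.6), "when": (0, 24)},
    {"id": 2, "what": "transport", "where": (59.9, 30.3), "when": (9, 18)},
    {"id": 3, "what": "transport", "where": (55.8, 37.7), "when": (9, 18)},
]
print(best_offer(request, offers)["id"])  # → 3 (right category, nearby)
```

Even a toy scorer like this shows why structured categories matter: free-text requests give the algorithm nothing to match on, leaving the work to human moderators.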

Asmolov1

As Gregory acknowledges, the results were mixed but certainly interesting and insightful. “Most of the messages [posted to the Rynda platform dealt] with requests for various types of social help, like clothing and medical equipment for children, homes for orphans, people with limited capabilities, or families in need. [...]. Some requests from environmental NGOs were related to the mobilization of volunteers to fight against deforestation or to fight wildfires. [...]. In another case, a volunteer who responded to a request on the platform helped to transport resources to a family with many children living far from a big city. [...]. Many requests concern[ed] children or disabled people. In one case, Rynda found a volunteer who helped a young woman leave her flat for walks, something she could not do alone. In some cases, the platform helped to provide medicine.” In any event, an analysis of the needs posted to Rynda suggests that “the most needed resource is not the thing itself, but the capacity to take it to the person who needs it. Transportation becomes a crucial resource, especially in a country as big as Russia.”

Alas, “Despite the efforts to create a tool that would automatically match a request with a potential help provider, the capacity of the algorithm to optimize the allocation of resources was very limited.” To this end, like the Help Map initiative, digital volunteers who served as social moderators remained pivotal to the Virtual Rynda platform. As Alexey notes, “We’ve never even got to the point of the discussion of more complex models of matching.” Perhaps Rynda should have included more structured categories to enable more automated matching, since the volunteer match-makers are simply not scalable. “Despite the intention that the ‘matchmaking’ algorithm would support the efficient allocation of resources between those in need and those who could help, the success of the ‘matchmaking’ depended on the work of the moderators, whose resources were limited. As a result, a gap emerged between the broad issues that the project could address and the limited resources of volunteers.”

To this end, Gregory readily admits that “the initial definition of the project as a general mutual aid platform may have been too broad and unspecific.” I agree with this diagnosis. Take the online dating platform Match.com, for example: its sole focus is online dating; Airbnb’s sole purpose is to match those looking for a place to stay with those offering their places; Uber’s sole purpose is matching those who need to get somewhere with a local car service. A matching platform for mutual aid may indeed have been too broad—at least to begin with. Amazon began with books, but later diversified.

In any case, as Gregory rightly notes, “The relatively limited success of Rynda didn’t mean the failure of the idea of mutual aid. What [...] Rynda demonstrates is the variety of challenges encountered along the way of the project’s implementation.” To be sure, “Every society or community has an inherent potential mutual aid structure that can be strengthened and empowered. This is more visible in emergency situations; however, major mutual aid capacity building is needed in everyday, non-emergency situations.” Thanks to Gregory and team, future digital matchmakers can draw on the above insights and Rynda’s open source code when designing their own mutual-aid and self-help platforms.

For me, one of the key take-aways is the need for a scalable matching platform. Match.com would not be possible if the matching were done primarily manually. Nor would Match.com work as well if the company sought to match interests beyond the romantic domain. So a future Match.com for mutual-aid would need to include automated matching and begin with a very specific matching domain. 


See also:

  • Using Waze, Uber, AirBnB, SeeClickFix for Disaster Response [link]
  • MatchApp: Next Generation Disaster Response App? [link]
  • A Marketplace for Crowdsourcing Crisis Response [link]

Live: Crowdsourced Verification Platform for Disaster Response

Earlier this year, Malaysia Airlines Flight 370 suddenly vanished, which set in motion the largest search and rescue operation in history—both on the ground and online. Colleagues at DigitalGlobe uploaded high resolution satellite imagery to the web and crowdsourced the digital search for signs of Flight 370. An astounding 8 million volunteers rallied online, searching through 775 million images spanning 1,000,000 square kilometers; all this in just 4 days. What if, in addition to mass crowd-searching, we could also mass crowd-verify information during humanitarian disasters? Rumors and unconfirmed reports tend to spread rather quickly on social media during major crises. But what if the crowd were also part of the solution? This is where our new Verily platform comes in.

Verily Image 1

Verily was inspired by the Red Balloon Challenge in which competing teams vied for a $40,000 prize by searching for ten weather balloons secretly placed across some 8,000,000 square kilometers (the continental United States). Talk about a needle-in-the-haystack problem. The winning team from MIT found all 10 balloons within 8 hours. How? They used social media to crowdsource the search. The team later noted that the balloons would’ve been found more quickly had competing teams not posted pictures of fake balloons on social media. Point being, all ten balloons were found astonishingly quickly even with the disinformation campaign.
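
For the curious, the MIT team's widely reported trick was a recursive incentive scheme: $2,000 to a balloon's finder, $1,000 to whoever recruited the finder, $500 to that person's recruiter, and so on up the referral chain. A quick sketch of that payout rule (names are invented):

```python
# The MIT-style recursive incentive: the reward halves at each step up
# the referral chain, which makes recruiting others individually rational.

def payouts(referral_chain, base=2000.0):
    """referral_chain: [finder, finder's recruiter, ...] back to the root."""
    rewards = {}
    amount = base
    for person in referral_chain:
        rewards[person] = amount
        amount /= 2
    return rewards

chain = ["dana", "carol", "bob", "alice"]  # dana spotted the balloon
print(payouts(chain))  # dana: 2000.0, carol: 1000.0, bob: 500.0, alice: 250.0
```

Because the geometric series converges, the total paid out per balloon stays bounded below $4,000 no matter how long the chain grows.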

Verily takes the exact same approach and methodology used by MIT to rapidly crowd-verify information during humanitarian disasters. Why is verification important? Because humanitarians have repeatedly noted that their inability to verify social media content is one of the main reasons why they aren’t making wider use of this medium. So, to test the viability of our proposed solution to this problem, we decided to pilot the Verily platform by running a Verification Challenge. The Verily Team includes researchers from the University of Southampton, the Masdar Institute and QCRI.

During the Challenge, verification questions of various difficulty were posted on Verily. Users were invited to collect and post evidence justifying their answers to the “Yes or No” verification questions. The photograph below, for example, was posted with the following question:

Verily Image 3

Unbeknownst to participants, the photograph was actually of a town in Sicily called Caltagirone. The question was answered correctly within 4 hours by a user who submitted another picture of the same street. The results of the new Verily experiment are promising. Answers to our questions were coming in so rapidly that we could barely keep up with posting new questions. Users drew on a variety of techniques to collect their evidence and answer the questions we posted.

Verily was designed with the goal of tapping into collective critical thinking; that is, with the goal of encouraging people to think about the question rather than rely on their gut feeling alone. In other words, the purpose of Verily is not simply to crowdsource the collection of evidence but also to crowdsource critical thinking. This explains why a user can’t simply submit a “Yes” or “No” to answer a verification question. Instead, they have to justify their answer by providing evidence, either in the form of an image/video or as text. In addition, Verily does not make use of Like buttons or up/down votes to answer questions. While such tools are great for identifying and sharing content on sites like Reddit, they are not the right tools for verification, which requires searching for evidence rather than liking or retweeting.
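
A minimal sketch of this design principle (the classes and field names are illustrative, not Verily's actual schema): an answer is only accepted if it arrives with evidence attached.

```python
# Toy data model: a verdict without supporting evidence is rejected,
# so the platform collects justification, not just votes.
from dataclasses import dataclass

@dataclass
class Answer:
    question_id: str
    verdict: bool            # the yes/no judgment
    evidence_url: str = ""   # image/video link supporting the verdict
    evidence_text: str = ""  # or a written justification

def submit(answer, accepted):
    if not (answer.evidence_url or answer.evidence_text):
        return False  # a bare "yes"/"no" is a vote, not verification
    accepted.append(answer)
    return True

accepted = []
bare = Answer("q1", True)  # gut feeling only
backed = Answer("q1", True, evidence_text="Street sign matches Caltagirone")
print(submit(bare, accepted), submit(backed, accepted))  # False True
```

Enforcing the evidence requirement at submission time is what separates this design from Like-button voting.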

Our Verification Challenge confirmed the feasibility of the Verily platform for time-critical, crowdsourced evidence collection and verification. The next step is to deploy Verily during an actual humanitarian disaster. To this end, we invite both news and humanitarian organizations to pilot the Verily platform with us during the next natural disaster. Simply contact me to submit a verification question. In the future, once Verily is fully developed, organizations will be able to post their questions directly.


See Also:

  • Verily: Crowdsourced Verification for Disaster Response [link]
  • Crowdsourcing Critical Thinking to Verify Social Media [link]
  • Six Degrees of Separation: Implications for Verifying Social Media [link]