
Crowdsourcing for Human Rights Monitoring: Challenges and Opportunities for Information Collection & Verification

This new book, Human Rights and Information Communication Technologies: Trends and Consequences of Use, promises to be a valuable resource to both practitioners and academics interested in leveraging new information & communication technologies (ICTs) in the context of human rights work. I had the distinct pleasure of co-authoring a chapter for this book with my good colleague and friend Jessica Heinzelman. We focused specifically on the use of crowdsourcing and ICTs for information collection and verification. Below is the Abstract & Introduction for our chapter.

Abstract

Accurate information is a foundational element of human rights work. Collecting and presenting factual evidence of violations is critical to the success of advocacy activities and the reputation of organizations reporting on abuses. To ensure credibility, human rights monitoring has historically been conducted through highly controlled organizational structures that face mounting challenges in terms of capacity, cost and access. The proliferation of Information and Communication Technologies (ICTs) provides new opportunities to overcome some of these challenges through crowdsourcing. At the same time, however, crowdsourcing raises new challenges of verification and information overload that have made human rights professionals skeptical of its utility. This chapter explores whether the efficiencies gained through an open call for monitoring and reporting abuses provide a net gain for human rights monitoring and analyzes the opportunities and challenges that new and traditional methods pose for verifying crowdsourced human rights reporting.

Introduction

Accurate information is a foundational element of human rights work. Collecting and presenting factual evidence of violations is critical to the success of advocacy activities and the reputation of organizations reporting on abuses. To ensure credibility, human rights monitoring has historically been conducted through highly controlled organizational structures that face mounting challenges in terms of capacity, cost and access.

The proliferation of Information and Communication Technologies (ICTs) may provide new opportunities to overcome some of these challenges. For example, ICTs make it easier to engage large networks of unofficial volunteer monitors to crowdsource the monitoring of human rights abuses. Jeff Howe coined the term “crowdsourcing” in 2006, defining it as “the act of taking a job traditionally performed by a designated agent and outsourcing it to an undefined, generally large group of people in the form of an open call” (Howe, 2009). Applying this concept to human rights monitoring, Molly Land (2009) asserts that, “given the limited resources available to fund human rights advocacy…amateur involvement in human rights activities has the potential to have a significant impact on the field” (p. 2). That said, she warns that professionalization in human rights monitoring “has arisen not because of an inherent desire to control the process, but rather as a practical response to the demands of reporting – namely, the need to ensure the accuracy of the information contained in the report” (Land, 2009, p. 3).

Because “accuracy is the human rights monitor’s ultimate weapon” and the advocate’s “ability to influence governments and public opinion is based on the accuracy of their information,” the risk of inaccurate information may trump any advantages gained through crowdsourcing (Codesria & Amnesty International, 2000, p. 32). To this end, the question facing human rights organizations that wish to leverage the power of the crowd is “whether [crowdsourced reports] can accomplish the same [accurate] result without a centralized hierarchy” (Land, 2009). The answer to this question depends on whether reliable verification techniques exist so organizations can use crowdsourced information in a way that does not jeopardize their credibility or compromise established standards. While many human rights practitioners (and indeed humanitarians) still seem to be allergic to the term crowdsourcing, further investigation reveals that established human rights organizations already use crowdsourcing and verification techniques to validate crowdsourced information and that there is great potential in the field for new methods of information collection and verification.

This chapter analyzes the opportunities and challenges that new and traditional methods pose for verifying crowdsourced human rights reporting. The first section reviews current methods for verification in human rights monitoring. The second section outlines existing methods used to collect and validate crowdsourced human rights information. Section three explores the practical opportunities that crowdsourcing offers relative to traditional methods. The fourth section outlines critiques and solutions for crowdsourcing reliable information. The final section proposes areas for future research.

The book is available for purchase here. Warning: you won’t like the price but at least they’re taking an iTunes approach, allowing readers to purchase single chapters if they prefer. Either way, Jess and I were not paid for our contribution.

For more information on how to verify crowdsourced information, please visit the following links:

  • Information Forensics: Five Case Studies on How to Verify Crowdsourced Information from Social Media (Link)
  • How to Verify and Counter Rumors in Social Media (Link)
  • Social Media and Life Cycle of Rumors during Crises (Link)
  • Truthiness as Probability: Moving Beyond the True or False Dichotomy when Verifying Social Media (Link)
  • Crowdsourcing Versus Putin (Link)

Crisis Mapping Syria: Automated Data Mining and Crowdsourced Human Intelligence

The Syria Tracker Crisis Map is without doubt one of the most impressive crisis mapping projects yet. Launched just a few weeks after the protests began one year ago, the crisis map is spearheaded by just a handful of US-based Syrian activists who have meticulously and systematically documented 1,529 reports of human rights violations, including a total of 11,147 killings. As recently reported in this NewScientist article, “Mapping the Human Cost of Syria’s Uprising,” the crisis map “could be the most accurate estimate yet of the death toll in Syria’s uprising [...].” Their approach? “A combination of automated data mining and crowdsourced human intelligence,” which “could provide a powerful means to assess the human cost of wars and disasters.”

On the data-mining side, Syria Tracker has repurposed the HealthMap platform, which mines thousands of online sources for the purposes of disease detection and then maps the results, “giving public-health officials an easy way to monitor local disease conditions.” The customized version of this platform for Syria Tracker (ST), known as HealthMap Crisis, mines English information sources for evidence of human rights violations, such as killings, torture and detainment. As the ST Team notes, their data mining platform “draws from a broad range of sources to reduce reporting biases.” Between June 2011 and January 2012, for example, the platform collected over 43,000 news articles and blog posts from almost 2,000 English-based sources from around the world (including some pro-regime sources).

Syria Tracker combines the results of this sophisticated data mining approach with crowdsourced human intelligence, i.e., field-based eye-witness reports shared via webform, email, Twitter, Facebook, YouTube and voicemail. This naturally presents several important security issues, which explains why the main ST website includes an instructions page detailing security precautions that need to be taken while submitting reports from within Syria. They also link to this practical guide on how to protect your identity and security online and when using mobile phones. The guide is available in both English and Arabic.

Eye-witness reports are subsequently translated, geo-referenced, coded and verified by a group of volunteers who triangulate the information with other sources such as those provided by the HealthMap Crisis platform. They also filter the reports, remove duplicates and discard reports with a low confidence level vis-a-vis veracity. Volunteers use a vote-up/vote-down feature to “score” the veracity of eye-witness reports. Using this approach, the ST Team and their volunteers have been able to verify almost 90% of the documented killings mapped on their platform thanks to video and/or photographic evidence. They have also been able to associate specific names with about 88% of those reported killed by Syrian forces since the uprising began.
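To make the workflow concrete, here is a minimal sketch of the kind of filtering described above. All names, fields and thresholds are my own assumptions for illustration; they are not taken from Syria Tracker's actual system.

```python
# Hypothetical sketch of de-duplication, triangulation and veracity scoring.
# Field names and the 0.6 threshold are assumptions, not Syria Tracker's.

def veracity_score(up_votes, down_votes):
    """Net-vote confidence in [0, 1]; 0.5 when there are no votes yet."""
    total = up_votes + down_votes
    if total == 0:
        return 0.5  # unknown: no volunteer has scored this report
    return up_votes / total

def filter_reports(reports, min_score=0.6):
    """Drop duplicates, uncorroborated reports and low-confidence reports.

    Each report is a dict like:
    {"id": "r1", "event_key": ("2012-02-04", "Homs", "killing"),
     "sources": {"eyewitness", "healthmap"}, "up": 4, "down": 1}
    """
    seen = set()
    kept = []
    for r in reports:
        if r["event_key"] in seen:  # duplicate of an already-kept report
            continue
        corroborated = len(r["sources"]) >= 2  # triangulated across streams
        confident = veracity_score(r["up"], r["down"]) >= min_score
        if corroborated and confident:
            seen.add(r["event_key"])
            kept.append(r)
    return kept
```

The key design choice is that a report must clear both bars: corroboration by at least one independent source stream (e.g., an eyewitness report matched against a HealthMap Crisis item) and a crowd vote score above the confidence threshold.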

Depending on the levels of violence in Syria, the turn-around time for a report to be mapped on Syria Tracker is between one and three days. The team also produces weekly situation reports based on the data they’ve collected along with detailed graphical analysis. KML files that can be uploaded and viewed using Google Earth are also made available on a regular basis. These provide “a more precisely geo-located tally of deaths per location.”

In sum, Syria Tracker is very much breaking new ground vis-a-vis crisis mapping. They’re combining automated data mining technology with crowdsourced eye-witness reports from Syria. In addition, they’ve been doing this for a year, which makes the project the longest-running crisis map I’ve seen in a hostile environment. Moreover, they’ve been able to sustain these important efforts with just a small team of volunteers. As for the veracity of the collected information, I know of no other public effort that has taken such a meticulous and rigorous approach to documenting the killings in Syria in near real-time. On February 24th, Al-Jazeera posted the following estimates:

Syrian Revolution Coordination Union: 9,073 deaths
Local Coordination Committees: 8,551 deaths
Syrian Observatory for Human Rights: 5,581 deaths

At the time, Syria Tracker had a total of 7,901 documented killings associated with specific names, dates and locations. While some duplicate reports may remain, the team argues that “missing records are a much bigger source of error.” Indeed, they believe that “the higher estimates are more likely, even if one chooses to disregard those reports that came in on some of the most violent days where names were not always recorded.”

The Syria Crisis Map itself has been viewed by visitors from 136 countries around the world and 2,018 cities—with the top 3 cities being Damascus, Washington DC and, interestingly, Riyadh, Saudi Arabia. The witnessing has thus been truly global and collective. When the Syrian regime falls, “the data may help subsequent governments hold him and other senior leaders to account,” writes the New Scientist. This was one of the principal motivations behind the launch of the Ushahidi platform in Kenya over four years ago. Syria Tracker is powered by Ushahidi’s cloud-based platform, Crowdmap. Finally, we know for a fact that the International Criminal Court (ICC) and Amnesty International (AI) closely followed the Libya Crisis Map last year.

Drones for Human Rights: Brilliant or Foolish? (Updated)

My colleague Mark Hanis recently co-authored this Op-Ed in the New York Times advocating for the use of drones in human rights monitoring, particularly in Syria. The Op-Ed has provoked quite the debate on a number of list-serves like CrisisMappers, and several blog posts have been published on the question. I’ve long been interested in this topic, which is why I included a section on drones in this official UN Foundation Report on “New Technologies in Emergencies and Conflicts: The Role of Information and Social Networks.” I also blogged about the World Food Program’s (WFP) use of drones some four years ago.

Some critics have made good points vis-a-vis the limitations of drones for human rights surveillance. But some have also twisted the Op-Ed’s language and arguments. The types of drones or UAVs that an NGO might be able to purchase would not have the advanced technology required to capture the identity of perpetrators, according to this critic. But at no point do Mark and his co-author, Andrew Sniderman, actually argue that drones should be used to document the identity of those committing human rights violations. Rather, “A drone would let us count demonstrators, gun barrels and pools of blood.” And what if a consortium of NGOs does receive substantial funding to acquire a high-end drone for human rights surveillance purposes? Moreover, as drones become cheaper and smaller, using them to capture the identity of perpetrators will become increasingly possible.

This same critic notes quite rightly that humanitarian drones would “not have been able to monitor any mistreatment of Mandela in his cell on Robben Island. Nor will they be able to monitor torture in Syrian detention facilities.” Indeed, but again, nowhere in the Op-Ed do the authors claim that drones could serve this purpose. So this is again a counter-argument to an argument that was never made in the first place. (This critic seems to enjoy this kind of debating tactic).

As the authors fully acknowledge, the use of humanitarian drones would “violate Syrian airspace, and perhaps a number of Syrian and international laws.” Some are concerned that this would “cause the Syrian government to even further escalate its military response.” If this is really the argument made against the use of drones, then it raises the following question: should existing interventions in Syria also be vetoed since they too risk provoking the regime? This argument almost seeks to make a case for non-interference and non-intervention. The argument also supposes that the Syrian regime actually needs an excuse to escalate the slaughter of civilians.

This is a case where the regime has clearly and repeatedly violated the Responsibility to Protect (R2P) principle and has thus given up any legitimate claim to territorial sovereignty. “In any event, violations of Syrian sovereignty would be the direct consequence of the Syrian state’s brutality, not the imperialism of outsiders” (NYT Op-Ed). And yet, one critic still argues that using drones in Syria would “set an unfortunate precedent [...] that human rights organizations are willing to violate international law [...].” According to R2P, Syria’s claim to sovereignty expired almost a year ago.

Granted, R2P is an international norm, not (yet) international law, but as the authors of the Op-Ed acknowledge, this type of intervention “isn’t the kind of thing nongovernmental organizations usually do. But it is very different from what governments and armies do. Yes, we (like them) have an agenda, but ours is transparent: human rights. We have a duty, recognized internationally, to monitor governments that massacre their own people in large numbers. Human rights organizations have always done this. Why not get drones to assist the good work?” Besides, to assume that human rights organizations have never violated laws in the past would be naive at best. Human rights organizations often smuggle information and/or people across borders; I know this for a fact.

As for the argument that using drones “could make even traditional human rights monitoring in repressive countries more difficult,” this is certainly true, as it is for any other type of intervention and use of technology, like digital cameras, Twitter, blogging, satellite imagery, etc. This same critic quotes another who points to surface-to-air missiles as being a regime’s obvious antidote to human rights drones. Indeed, such cases have been reported in Sri Lanka, as I learned back in 2005 from a colleague based in Colombo. Providing a regime with non-human targets is preferable to them using live ammunition on children. Regimes can also destroy mobile phones, digital cameras, etc. So does that mean human rights activists should refrain from using these technologies as well?

More from the critic: “cell phones can go more places than drones. Most people own one, and two year olds can use iPads. Cell phones can take photos that identify who is wearing what uniform and beating which protesters.” Indeed, the Op-Ed does not make any claims to the contrary. Cell phones may be able to go to more places than drones, but can they do so “unmanned”? Can cell phones take pictures of uniforms up close and personal with zero risk to the cell phone owner? The observers of the recent Arab League Mission were not free to move around as they pleased, which is one reason why the Op-Ed makes the case for humanitarian drones. Still, the critic points out that she could attach a cell phone to a weather balloon and thus create a mini-drone. For sure, DIY drones are becoming more and more popular given the new technologies available and the lower costs; as is balloon mapping. Nothing in the Op-Ed suggests that the authors would rule out these solutions.

So what impact might the use of drones for human rights have? This is another entirely separate but equally important question. What kinds of documented human rights violations (and from what types of media) might have the greatest chance of prompting individuals and policy makers to act? As this critic asks, “What is the point of diminishing marginal returns on ‘bearing witness’”? And as the previous critic argues, “plenty of graphic images and videos from Syria have been captured and made public. Most are taken by digital cameras and cell phones in close quarters or indoors. None have caused the outrage and response Hanis and Sniderman seek.”

I beg to differ on this last point. Many of us have been outraged by the images captured and shared by activists on Twitter, Facebook, etc.; so have human rights organizations and policy makers, including members of the UN Security Council and the Arab League. How to translate this outrage into actual response, however, is an entirely different and separate challenge; one that is no less important. Mark and Andrew do not argue or pretend that surveillance imagery captured by drones would be a silver bullet for resolving the political inertia on Syria. Indeed: “as with any intelligence-gathering process, surveillance missions necessarily operate in a political, rather than neutral space.”

In my mind, a combination of efforts is required—call it a networked, ecosystem approach. Naturally, whether such a combination (with drones in the mix) makes sense will depend on the context and the situation. Using drones will not always make sense; the cost-benefit analysis may differ considerably depending on the use-case and also over time. From the perspective of civil resistance and non-violent action, the use of drones makes sense. It gives the regime another issue to deal with and requires them to allocate time and resources accordingly. In fact, even if human rights activists had access to the cheapest drones that cannot take pictures, flying these over Syrian airspace would likely get the attention of the regime.

The result? This would “force” the regime to deal with something new and hopefully draw their fire away from civilians, even if momentarily. At the very least, it would use up some of their military ammunition. More importantly, there’s also a plausible psychological effect here: no one likes mosquitoes buzzing around their heads. It’s annoying and frustrating. Harassing repressive regimes can certainly have negative consequences. But these are part and parcel of civil resistance tactics. In certain circumstances, the risks may be worth taking, especially if those who decide to use drones for these purposes are Syrian activists themselves or operating under the direction of these activists. Either way, the duty to bear witness remains and is recognized internationally.

From a financial cost-benefit perspective, there’s no doubt that “the comparative advantage on technological platforms lies with foreign governments, rather than the NGO community,” as this critic points out. But foreign governments do not readily make their imagery public for the purposes of advocacy. This would likely place unwanted pressure on them to react if they publicly shared the extent of the evidence they had on the atrocities being committed in Syria and elsewhere.

Update 1: An iRevolution reader commenting on another blog post just shared this news that the US Ambassador to Syria, Robert Ford, used his Facebook page to post “declassified US imagery of Syrian military attacks against civilians in the besieged city of Homs.” The US State Department explained that “Our intent here is to obviously expose the ruthlessness of the brutality of this regime and its overwhelming predominant advantage and the horrible kind of weaponry that it is deploying against its people.”

The news article adds that “Moscow and Beijing are also part of the intended audience for these images following their veto of a U.N. Security Council resolution backing Arab League action against President Assad.” In the context of my blog post above, one could argue that the USG could have made this type of information public six months ago in order to expose the brutality of the regime, and that a humanitarian drone might have exposed this even earlier. In any case, this is a very interesting development. And as one colleague noted, “this proves the point that images of atrocities are leveraged to build political pressure.”

Update 2: I wrote this follow-up post on the use of drones for civil resistance.

Combining Crowdsourced Satellite Imagery Analysis with Crisis Reporting: An Update on Syria

Members of the Standby Volunteer Task Force (SBTF) Satellite Team are currently tagging the location of hundreds of Syrian tanks and other heavy military equipment on the Tomnod micro-tasking platform using very recent high-resolution satellite imagery provided by Digital Globe.

We’re focusing our efforts on the following three key cities in Syria as per the request of Amnesty International USA’s (AI-USA) Science for Human Rights Program.

For more background information on the project, please see the following links:

To recap, the purpose of this experimental pilot project is to determine whether satellite imagery analysis can be crowdsourced and triangulated to provide data that might help AI-USA corroborate numerous reports of human rights abuses they have been collecting from a multitude of other sources over the past few months. The point is to use the satellite tagging in combination with other data, not in isolation.
 
To this end, I’ve recommended that we take it one step further. The Syria Tracker Crowdmap has been operational for months. Why not launch an Ushahidi platform that combines the triangulated features from the crowdsourced satellite imagery analysis with crowdsourced crisis reports from multiple sources?

The satellite imagery analyzed by the SBTF was taken in early September. We could grab the August and September crisis data from Syria Tracker and turn the satellite imagery analysis data into layers. For example, the “Military tag,” which includes large military equipment like tanks and artillery, could be uploaded to Ushahidi as a KML file. This would allow AI-USA and others to cross-reference their own reports with those on Syria Tracker and then also place that analysis into context vis-a-vis the location of military equipment, large crowds and checkpoints over the same time period.
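As an illustration of what such a layer would involve, here is a short sketch that writes tagged features out as KML placemarks. The feature labels and coordinates are invented for the example; only the KML 2.2 namespace and the lon,lat coordinate ordering are part of the actual format.

```python
# Illustrative sketch: converts crowdsourced feature tags into a KML layer
# that could be loaded into Google Earth or an Ushahidi deployment.
# Labels and coordinates below are hypothetical, not real SBTF data.
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def tags_to_kml(tags, layer_name="Military equipment"):
    ET.register_namespace("", KML_NS)
    kml = ET.Element(f"{{{KML_NS}}}kml")
    doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
    ET.SubElement(doc, f"{{{KML_NS}}}name").text = layer_name
    for tag in tags:
        pm = ET.SubElement(doc, f"{{{KML_NS}}}Placemark")
        ET.SubElement(pm, f"{{{KML_NS}}}name").text = tag["label"]
        point = ET.SubElement(pm, f"{{{KML_NS}}}Point")
        # KML coordinates use "longitude,latitude" ordering
        ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = (
            f"{tag['lon']},{tag['lat']}"
        )
    return ET.tostring(kml, encoding="unicode")

# Example: two hypothetical equipment tags near Homs
print(tags_to_kml([
    {"label": "tank", "lon": 36.72, "lat": 34.73},
    {"label": "artillery", "lon": 36.70, "lat": 34.71},
]))
```

Because each month's tags would simply be a new KML file, layers like this are easy to swap in and out for the kind of month-over-month comparison described below.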

The advantage of adding these layers to an Ushahidi platform is that they could be updated and compared over time. For example, we could compare the location of Syrian tanks versus on-the-ground reports of shelling for the month of August, September, October, etc. Perhaps we could even track the repositioning of some military equipment if we repeated this crowdsourcing initiative more frequently. Incidentally, President Eisenhower proposed this idea to the UN during the Cold War, see here.

In any case, this initiative is still very much experimental and there’s lots to learn. The SBTF Tech Team headed by Nigel McNie is looking to make the above integration happen, which I’m super excited about. I’d love to see closer integration with satellite imagery analysis data in future Ushahidi deployments that crowdsource crisis reporting from the field. Incidentally, we could scale this feature tagging approach to include hundreds if not thousands of volunteers.

In other news, my SBTF colleague Shadrock Roberts and I had a very positive conference call with UNHCR this week. The SBTF will be partnering with HCR on an official project to tag the location of informal shelters in the Afgooye corridor in the near future. Unlike our trial run from several weeks ago, we will have a far more developed and detailed rule-set & feature-key thanks to some very useful information that our colleagues at HCR have just shared with us. We’ll be adding the triangulated features from the imagery analysis to a dedicated UNHCR Ushahidi platform. We hope to run this project in October and possibly again in January so HCR can do some simple change detection using Ushahidi.

In parallel, we’re hoping to partner with the Joint Research Center (JRC), which has developed automated methods for shelter detection. Comparing crowdsourced feature tagging with an automated approach would provide yet more information to UNHCR to corroborate their assessments.

Help Crowdsource Satellite Imagery Analysis for Syria: Building a Library of Evidence

Update: Project featured on UK Guardian Blog! Also, for the latest on the project, please see this blog post.

This blog post follows from this previous one: “Syria – Crowdsourcing Satellite Imagery Analysis to Identify Mass Human Rights Violations.” As part of the first phase of this project, we are building a library of satellite images for features we want to tag using crowdsourcing.

In particular, we are looking to identify the following evidence using high-resolution satellite imagery:

  • Large military equipment
  • Large crowds
  • Checkpoints
The idea is to provide volunteers from the Standby Volunteer Task Force (SBTF) Satellite Team with as much of a road map as possible so they know exactly what they’re looking for in the satellite imagery they’ll be tagging using the Tomnod system.

Here are some of the pictures we’ve been able to identify thanks to the help of my good colleague Christopher Albon.

I’ve placed these and other examples in this Google Doc, which is open for comment. We need your help to provide us with other imagery depicting heavy Syrian military equipment, large crowds and checkpoints. Please provide links and screenshots of such imagery in this open and editable Google Doc. Chris has already sent us several links for the above imagery.


Syria: Crowdsourcing Satellite Imagery Analysis to Identify Mass Human Rights Violations

Update: See this blog post for the latest. Also, our project was just featured on the UK Guardian Blog!

What if we crowdsourced satellite imagery analysis of key cities in Syria to identify evidence of mass human rights violations? This is precisely the question that my colleagues at Amnesty International USA’s Science for Human Rights Program asked me following this pilot project I coordinated for Somalia. AI-USA has done similar work in the past with their Eyes on Darfur project, which I blogged about here in 2008. But using micro-tasking with backend triangulation to crowdsource the analysis of high resolution satellite imagery for human rights purposes is definitely breaking new ground.

A staggering amount of new satellite imagery is produced every day; millions of square kilometers’ worth according to one knowledgeable colleague. This is a big data problem that needs mass human intervention until the software can catch up. I recently spoke with Professor Ryan Engstrom, the Director of the Spatial Analysis Lab at George Washington University, and he confirmed that automated algorithms for satellite imagery analysis still have a long, long way to go. So the answer for now has to be human-driven analysis.

But professional satellite imagery experts who have plenty of time to volunteer their skills are few and far between. The Satellite Sentinel Project (SSP), which I blogged about here, is composed of a very small team and a few interns. Their focus is limited to the Sudan and they are understandably very busy. My colleagues at AI-USA analyze satellite imagery for several conflicts, but this takes them far longer than they’d like and their small team is still constrained given the number of conflicts and vast amounts of imagery that could be analyzed. This explains why they’re interested in crowdsourcing.

Indeed, crowdsourcing imagery analysis has proven to be a workable solution in several other projects & sectors. The “crowd” can indeed scan and tag vast volumes of satellite imagery data when that imagery is “sliced and diced” for micro-tasking. This is what we did for the Somalia pilot project thanks to the Tomnod platform and the imagery provided by Digital Globe. The yellow triangles below denote the “sliced images” that individual volunteers from the Standby Task Force (SBTF) analyzed and tagged one at a time.
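The "slicing and dicing" step can be sketched very simply: a large imagery extent is cut into a grid of small tiles, each of which becomes one micro-task. The extent and tile size below are illustrative, not Tomnod's actual parameters.

```python
# Minimal sketch of slicing an imagery extent into micro-task tiles.
# Coordinates are in abstract pixel units; real systems work with
# georeferenced bounds, but the gridding logic is the same.

def slice_extent(min_x, min_y, max_x, max_y, tile_size):
    """Return (x0, y0, x1, y1) tile bounds covering the extent.

    Edge tiles are clipped to the extent, so the grid covers it exactly.
    """
    tiles = []
    y = min_y
    while y < max_y:
        x = min_x
        while x < max_x:
            tiles.append((x, y,
                          min(x + tile_size, max_x),
                          min(y + tile_size, max_y)))
            x += tile_size
        y += tile_size
    return tiles

# A 1000 x 600 extent with 256-pixel tiles yields a 4 x 3 grid of 12 tasks,
# each small enough for a volunteer to scan and tag in one sitting.
tiles = slice_extent(0, 0, 1000, 600, 256)
```

Each tile is then assigned to several volunteers independently, which is what makes the triangulation step described below possible.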

We plan to do the same with high resolution satellite imagery of three key cities in Syria selected by the AI-USA team. The specific features we will look for and tag include: “Burnt and/or darkened building features,” “Roofs absent,” “Blocks on access roads,” “Military equipment in residential areas,” “Equipment/persons on top of buildings indicating potential sniper positions,” “Shelters composed of different materials than surrounding structures,” etc. SBTF volunteers will be provided with examples of what these features look like from a bird’s eye view and from ground level.

Like the Somalia project, only when a feature—say a missing roof—is tagged identically by at least 3 volunteers will that location be sent to the AI-USA team for review. In addition, if volunteers are unsure about a particular feature they’re looking at, they’ll take a screenshot of said feature and share it on a dedicated Google Doc for the AI-USA team and other satellite imagery experts from the SBTF team to review. This feedback mechanism is key to ensure accurate tagging and inter-coder reliability. In addition, the screenshots shared will be used to build a larger library of features, i.e., what a missing roof looks like as well as military equipment in residential areas, road blocks, etc. Volunteers will also be in touch with the AI-USA team via a dedicated Skype chat.
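The "at least 3 identical tags" rule can be sketched as a simple consensus filter. The rounding precision used to decide when two tags count as "identical" is an assumption on my part; Tomnod's actual matching logic is not documented here.

```python
from collections import defaultdict

# Hedged sketch of backend triangulation: a tagged feature is forwarded for
# expert review only if >= min_agreement volunteers tagged the same feature
# type at (approximately) the same location. The coordinate rounding is an
# assumed stand-in for whatever spatial matching the real platform uses.

def consensus_features(tags, min_agreement=3, precision=4):
    """tags: iterable of (volunteer_id, feature_type, lat, lon) tuples.

    Returns the (feature_type, lat, lon) keys that reach consensus.
    """
    voters = defaultdict(set)
    for volunteer, feature, lat, lon in tags:
        key = (feature, round(lat, precision), round(lon, precision))
        voters[key].add(volunteer)  # a set, so each volunteer votes once
    return [key for key, v in voters.items() if len(v) >= min_agreement]
```

Using a set of volunteer IDs per location means one volunteer tagging the same spot twice still counts as a single vote, which keeps an over-eager tagger from manufacturing consensus alone.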

There will no doubt be a learning curve, but the sooner we climb that learning curve the better. Democratizing satellite imagery analysis is no easy task and one or two individuals have opined that what we’re trying to do can’t be done. That may be, but we won’t know unless we try. This is how innovation happens. We can hypothesize and talk all we want, but concrete results are what ultimately matters. And results are what can help us climb that learning curve. My hope, of course, is that democratizing satellite imagery analysis enables AI-USA to strengthen their advocacy campaigns and makes it harder for perpetrators to commit mass human rights violations.

SBTF volunteers will be carrying out the pilot project this month in collaboration with AI-USA, Tomnod and Digital Globe. How and when the results are shared publicly will be up to the AI-USA team as this will depend on what exactly is found. In the meantime, a big thanks to Digital Globe, Tomnod and SBTF volunteers for supporting the AI-USA team on this initiative.

If you’re interested in reading more about satellite imagery analysis, the following blog posts may also be of interest:

• Geo-Spatial Technologies for Human Rights
• Tracking Genocide by Remote Sensing
• Human Rights 2.0: Eyes on Darfur
• GIS Technology for Genocide Prevention
• Geo-Spatial Analysis for Global Security
• US Calls for UN Aerial Surveillance to Detect Preparations for Attacks
• Will Using ‘Live’ Satellite Imagery to Prevent War in the Sudan Actually Work?
• Satellite Imagery Analysis of Kenya’s Election Violence: Crisis Mapping by Fire
• Crisis Mapping Uganda: Combining Narratives and GIS to Study Genocide
• Crowdsourcing Satellite Imagery Analysis for Somalia: Results of Trial Run
• Genghis Khan, Borneo & Galaxies: Crowdsourcing Satellite Imagery Analysis
• OpenStreetMap’s New Micro-Tasking Platform for Satellite Imagery Tracing