Monthly Archives: April 2009

Ushahidi Comes to India for the Elections (Updated)

I’m very pleased to announce that the Ushahidi platform has been deployed at VoteReport.in to crowdsource the monitoring of India’s upcoming elections. The rollout followed our preferred model: an amazing group of Indian partners took the initiative to drive the project forward and are doing a superb job. I’m learning a lot from their strategic thinking.


We’re also excited about developing Swift River as part of VoteReport India to apply a crowdsourcing approach to filtering incoming information for accuracy. This is of course all experimental and we’ll be learning a lot in the process. For a visual introduction to Swift River, please see Erik Hersman’s recent video documentary of the Swift River conversations we had a few weeks ago in Orlando.


As with our latest Ushahidi deployments, VoteReport users can report on the Indian elections by email, SMS, tweet, or by submitting an incident directly online at VoteReport. Users can also subscribe to email alerts, a functionality I’m particularly excited about since it closes the crowdsourcing-to-crowdfeeding feedback loop. I’m hoping we can also add SMS alerts, funding permitted. For more on crowdfeeding, please see my previous post on “Ushahidi: From Crowdsourcing to Crowdfeeding.”
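To make the multi-channel intake concrete, here is a minimal sketch of how reports arriving by SMS or Twitter might be normalized into a single incident record before being mapped. The field names and helper functions are illustrative assumptions on my part, not the actual Ushahidi/VoteReport implementation.

```python
# Illustrative sketch only: normalizing multi-channel reports (SMS, tweet)
# into one incident record. Field names are assumptions, not Ushahidi's schema.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class IncidentReport:
    channel: str              # "web", "email", "sms" or "twitter"
    text: str                 # free-text description of the incident
    location: Optional[str]   # place name as reported; geocoded later
    received_at: datetime

def from_sms(body: str) -> IncidentReport:
    # Treat the whole SMS body as the description; location is extracted later.
    return IncidentReport("sms", body.strip(), None, datetime.utcnow())

def from_tweet(status_text: str, hashtag: str = "#votereport") -> IncidentReport:
    # Strip the campaign hashtag and keep the rest as the report text.
    return IncidentReport("twitter", status_text.replace(hashtag, "").strip(),
                          None, datetime.utcnow())
```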


You can read more about the project here and about the core team here. It really is an honor to be a part of this amazing group. We also have an official VoteReport blog here. I also highly recommend reading Gaurav Mishra’s blog post on VoteReport here and Ushahidi’s here.

Next Steps

  • We’re thinking of using a different color to depict “All Categories” since red has cognitive connotations of violence and we don’t want this to be the first impression given by the map.
  • I’m hoping we can add a “download feature” that will allow users to directly download the VoteReport data as a CSV file and as a KML layer for Google Earth. The latter will allow users to dynamically visualize VoteReports over space and time, just as I did here with the Ushahidi data during the Kenyan elections (see the sketch after this list).
  • We’re also hoping to add a feature that asks those submitting incidents to check off that the information they submit is true. The motivation behind this is inspired by recent lessons from behavioral economics, as explained in my blog post on “Crowdsourcing Honesty.”
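On the proposed download feature, here is a rough sketch of what the CSV and KML exports could look like. The report fields are hypothetical, and the KML simply attaches a timestamp to each placemark so Google Earth’s time slider can play reports back over space and time.

```python
# Rough sketch of a "download feature": export reports as CSV and as a simple
# KML layer with timestamps for Google Earth's time slider. Field names are
# illustrative assumptions, not the actual VoteReport data model.
import csv
from xml.sax.saxutils import escape

def export_csv(reports, path):
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["title", "date", "latitude", "longitude", "category"])
        for r in reports:
            writer.writerow([r["title"], r["date"], r["lat"], r["lon"], r["category"]])

def export_kml(reports, path):
    placemarks = []
    for r in reports:
        placemarks.append(
            "<Placemark>"
            f"<name>{escape(r['title'])}</name>"
            f"<TimeStamp><when>{r['date']}</when></TimeStamp>"  # ISO 8601 date
            f"<Point><coordinates>{r['lon']},{r['lat']},0</coordinates></Point>"
            "</Placemark>"
        )
    kml = ("<?xml version='1.0' encoding='UTF-8'?>"
           "<kml xmlns='http://www.opengis.net/kml/2.2'><Document>"
           + "".join(placemarks) + "</Document></kml>")
    with open(path, "w", encoding="utf-8") as f:
        f.write(kml)
```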

Patrick Philippe Meier

iRevolution One Year On…

I started iRevolution exactly one year ago and it’s been great fun! I owe the Fletcher A/V Club sincere thanks for encouraging me to blog. Little did I know that blogging would be so stimulating, or that I’d end up blogging from the Sudan.

Here are some stats from iRevolution Year One:

  • Total number of blog posts = 212
  • Total number of comments = 453
  • Busiest day ever = December 15, 2008

And the Top 10 posts:

  1. Crisis Mapping Kenya’s Election Violence
  2. The Past and Future of Crisis Mapping
  3. Mobile Banking for the Bottom Billion
  4. Impact of ICTs on Repressive Regimes
  5. Towards an Emergency News Agency
  6. Intellipedia for Humanitarian Warning/Response
  7. Crisis Mapping Africa’s Cross-border Conflicts
  8. 3D Crisis Mapping for Disaster Simulation
  9. Digital Resistance: Digital Activism and Civil Resistance
  10. Neogeography and Crisis Mapping Analytics

I also have a second blog, which I started at the same time and which focuses specifically on Conflict Early Warning. I have authored a total of 48 posts there.

That makes 260 posts in 12 months. Now I know where all the time went!

The Top 10 posts:

  1. Crimson Hexagon: Early Warning 2.0
  2. CSIS PCR: Review of Early Warning Systems
  3. Conflict Prevention: Theory, Policy and Practice
  4. New OECD Report on Early Warning
  5. Crowdsourcing and Data Validation
  6. Sri Lanka: Citizen-based Early Warning/Response
  7. Online Searches as Early Warning Indicators
  8. Conflict Early Warning: Any Successes?
  9. Ushahidi and Conflict Early Response
  10. Detecting Rumors with Web-based Text Mining System

I look forward to a second year of blogging! Thanks to everyone for reading and commenting, I really appreciate it!

Patrick Philippe Meier

Peer Producing Human Rights

Molly Land at New York Law School has written an excellent paper on peer producing human rights, which will appear in the Alberta Law Review, 2009. This is one of the best pieces of research that I have come across on the topic. I highly recommend reading her article when published.

Molly considers Wikipedia, YouTube and Witness.org in her research but, somewhat surprisingly, does not reference Ushahidi. I therefore summarize her main points below and draw on the case study of Ushahidi, particularly Swift River, to compare and contrast her analysis with my own research and experience.

Introduction

Funding for human rights monitoring and advocacy is particularly limited, which is why “amateur involvement in human rights activities has the potential to have a significant impact on the field.” At the same time, Molly recognizes that peer producing human rights may “present as many problems as it solves.”

Human rights reporting is the most professionalized activity of human rights organizations. This professionalization exists “not because of an inherent desire to control the process, but rather as a practical response to the demands of reporting, namely the need to ensure accuracy of the information contained in the report.” The question is whether peer-produced human rights reporting can achieve the same degree of accuracy without a comparable centralized hierarchy.

Accurate documentation of human rights abuses is very important for building up a reputation as a credible human rights organization. Accuracy is also important to counter challenges by repressive regimes that question the validity of certain human rights reports. Moreover, “inaccurate reporting risks injury not only to the organization’s credibility and influence but also to those on whose behalf the organization advocates.”

Control vs Participation

A successful model for peer producing human rights monitoring would represent an important leap forward in the human rights community. Such a model would enable us to process a lot more information in a timelier manner and would also “increase the extent to which ordinary individuals connect to human rights issues, thus fostering the ability of the movement to mobilize broad constituencies and influence public opinion in support of human rights.”

Increased participation is often associated with an increased risk of inaccuracy. In fact, “even the perception of unreliability can be enough to provide [...] a basis for critiquing the information as invalid.” Clearly, ensuring the trustworthiness of information in any peer-reviewed project is a continuing challenge.

Wikipedia uses corrective editing as the primary mechanism to evaluate the accuracy of crowdsourced information. Molly argues that this may not work well in the human rights context because direct observation, interviews and interpretation are central to human rights research.

To this end, “if the researcher contributes this information to a collaboratively-edited report, other contributors will be unable to verify the statements because they do not have access to either the witness’s statement or the information that led the researcher to conclude it was reliable.” Even if they were able to verify statements, much of human rights reporting is interpretive, which means that even experienced human rights professionals disagree about interpretive conclusions.

Models for Peer Production

Molly presents three potential models to outline how human rights reporting and advocacy might be democratized. The first two models focus on secondary and primary information respectively, while the third proposes certification by local NGOs. Molly outlines the advantages and challenges that each model presents. Below is a summary with my critiques. I do not address the third model because as noted by Molly it is not entirely participatory.

Model 1. This approach would limit peer-production to collecting, synthesizing and verifying secondary information. Examples include “portals or spin-offs of existing portals, such as Wikipedia,” which could “allow participants to write about human rights issues but require them to rely only on sources that are verifiable [...].” Accuracy challenges could be handled in the same way that Wikipedia does; namely through a “combination of collaborative editing and policies; all versions of the page are saved and it is easy for editors who notice gaming or vandalism to revert to the earlier version.”

The two central limitations of this approach are that (1) the model would be limited to a subset of available information restricted to online or print media; and (2) even limiting the subset of information might be insufficient to ensure reliability. To this end, this model might be best used to complement, not substitute, existing fact-finding efforts.

Model 2. This approach would limit the peer-production of human rights reports to those with first-hand knowledge. While Molly doesn’t reference Ushahidi in her research, she does mention the possibility of using a website that would allow witnesses to report human rights abuses they saw or experienced. Molly argues that this first-hand information on human rights violations could be particularly useful for human rights organizations that seek to “augment their capacity to collect primary information.”

This model still presents accuracy problems, however: “There would be no way to verify the information contributed and it would be easy for individuals to manipulate the system.” I don’t agree. The claim that “there would be no way to verify the information” is an exaggeration. There are multiple methods that could be employed to estimate the probability that contributed information is reliable; this is precisely the motivation behind our Swift River project at Ushahidi, which seeks to use crowdsourcing to filter human rights information.

Since Swift River deserves an entire blog post of its own, I won’t describe the project here. I’d just like to mention that the Ushahidi team recently spent two days brainstorming creative ways that crowdsourced information could be verified. Stay tuned for more on Swift River.

We can still address Molly’s concerns without reference to Ushahidi’s Swift River.

Individuals who wanted to spread false allegations about a particular government or group, or to falsely refute such allegations, might make multiple entries (which would therefore corroborate each other) regarding a specific incident. Once picked up by other sources, such allegations ‘may take on a life of their own.’ NGOs using such information may feel compelled to verify this information, thus undermining some of the advantages that might otherwise be provided by peer production.

Unlike Molly, I don’t see the challenge of crowdsourced human rights data as first and foremost a problem of accuracy, but rather one of volume. Accuracy, in many instances, is a function of how many data points exist in our dataset.

To be sure, more crowdsourced information can provide an ideal basis for triangulating and validating peer-produced human rights reporting, particularly if we embrace multimedia in addition to text alone. In addition, more information allows us to use probability analysis to estimate the reliability of incoming reports. This would not undermine the advantages of peer production.

Of course, this method also faces some challenges since the success of triangulating crowdsourced human rights reports is dependent on volume. I’m not suggesting this is a perfect fix, but I do argue that this method will become increasingly tenable since we are only going to see more user-generated content, not less. For more on crowdsourcing and data validation, please see my previous posts here.
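Swift River itself is still on the drawing board, so the following is only a toy illustration of what triangulation by volume might look like: group incoming reports by place and coarse time window, and use the number of distinct sources as a crude corroboration score. Everything here, names and data layout included, is a hypothetical sketch rather than anything we have built.

```python
# Toy illustration of triangulation by volume (not Swift River itself):
# reports about the same place within a short time window corroborate one
# another, and only distinct sources count, which blunts self-corroboration.
from collections import defaultdict
from datetime import timedelta

def corroboration_scores(reports, window_hours=6):
    """reports: dicts with 'location', 'time' (datetime) and 'source' keys."""
    buckets = defaultdict(set)
    for r in reports:
        # Round the timestamp down to the start of its window_hours slot.
        slot = r["time"].replace(minute=0, second=0, microsecond=0)
        slot -= timedelta(hours=slot.hour % window_hours)
        buckets[(r["location"], slot)].add(r["source"])
    # Higher counts of independent sources suggest more reliable reports.
    return {key: len(sources) for key, sources in buckets.items()}
```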

Molly is concerned that a website allowing peer-production based on primary information may “become nothing more than an opinion site.” However, a crowdsourcing platform like Ushahidi is not an efficient platform for interactive opinion sharing. Witnesses simply report events, when they took place and where. Unlike blogs, the platform does not provide a way for users to comment on individual reports.

Capacity Building

Molly does raise an excellent point vis-à-vis the second model, however. The challenges of accuracy and opinion competition might be resolved by “shifting the purpose for which the information is used from identifying violations to capacity building.” As we all know, “most policy makers and members of the political elite know the facts already; what they want to know is what they should do about them.”

To this end, “the purpose of reporting in the context of capacity building is not to establish what happened, but rather to collect information about particular problems and generate solutions. As a result, the information collected is more often in the form of opinion testimony from key informants rather than the kind of primary material that needs to be verified for accuracy.”

This means that the peer produced reporting does not “purport to represent a kind of verifiable ‘truth’ about the existence or non-existence of a particular set of facts,” so the issue of “accuracy is somewhat less acute.” Molly suggests that accuracy might be further improved by “requiring participants to register and identify themselves when they post information,” which would “help minimize the risk of manipulation of the system.” Moreover, this would allow participants to view each other’s contributions and enable a contributor to build a reputation for credible contributions.

However, Molly points out that these potential solutions don’t change the fact that only those with Internet access would be able to contribute human rights reports, which could “introduce significant bias considering that most victims and eyewitnesses of human rights violations are members of vulnerable populations with limited, if any, such access.” I agree with this general observation, but I’m surprised that Molly doesn’t reference the use of mobile phones (and other mobile technologies) as a way to collect testimony from individuals without access to the Internet or in inaccessible areas.

Finally, Molly is concerned that Model 2 by itself “lacks the deep participation that can help mobilize ordinary individuals to become involved in human rights advocacy.” This is increasingly problematic since “traditional ‘naming and shaming’ may, by itself, be increasingly less effective in its ability to achieve changes in state conduct regarding human rights.” So Molly rightly encourages the human rights community to “investigate ways to mobilize the public to become involved in human rights advocacy.”

In my opinion, peer produced advocacy faces the same challenges as traditional human rights advocacy. It is therefore important that the human rights community adopt a more tactical approach to human rights monitoring. At Ushahidi, for example, we’re working to add a “subscribe-to-alerts” feature, which will allow anyone to receive SMS alerts for specific locations.
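To make that concrete, here is a rough sketch of how a location-based alert matcher might work: when a new report comes in, notify any subscriber whose chosen location falls within their alert radius. The data layout and the send_sms placeholder are my own assumptions, not the actual Ushahidi feature.

```python
# Sketch of a "subscribe-to-alerts" matcher. send_sms() stands in for whatever
# SMS gateway is used; report and subscriber fields are hypothetical.
import math

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points using the haversine formula."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def notify_subscribers(report, subscribers, send_sms):
    """subscribers: dicts with 'phone', 'lat', 'lon' and 'radius_km' keys."""
    for s in subscribers:
        if distance_km(report["lat"], report["lon"], s["lat"], s["lon"]) <= s["radius_km"]:
            send_sms(s["phone"], f"VoteReport alert near you: {report['title']}")
```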

P2P Human Rights

The point is to improve the situational awareness of those who find themselves at risk so they can get out of harm’s way and not become another human rights statistic. For more on tactical human rights, please see my previous blog post.

Human rights organizations that are engaged in intervening to prevent human rights violations would also benefit from subscribing to Ushahidi. More importantly, the average person on the street would have the option of intervening as well. I, for one, am optimistic about the possibility of P2P human rights protection.

Patrick Philippe Meier