Crowdsourcing and the Veil of Ignorance: A Question of Morality?

Patrick Ball and I had a series of long email exchanges this past week on the much-talked-about issue of crowdsourcing versus representative sampling. It’s an old issue that keeps coming up. But there’s really no debate, in my opinion. Crowdsourced data is not necessarily representative. That really should not be breaking news.

Also, it is worth repeating that Ushahidi is a platform, not a methodology. So an election-monitoring organization like the National Democratic Institute (NDI) could certainly generate representative polling data using Ushahidi by applying random sampling methods, for example. I already blogged about this several months ago in a post titled “Three Common Misconceptions About Ushahidi.” So I’m not going to rehash this here. Instead, I’d like to take a more “philosophical” approach.

In “A Theory of Justice,” the philosopher John Rawls introduces the “veil of ignorance,” a thought experiment designed to test the morality of a given issue. The idea goes something like this: imagine that you have to decide on the morality of an issue before you are born, i.e., you stand behind a veil of ignorance because you don’t know where you will be born, into what race, into what kind of family, etc.

As Rawls himself puts it: “no one knows his place in society, his class position or social status; nor does he know his fortune in the distribution of natural assets and abilities, his intelligence and strength, and the like.”

For example, in the imaginary society, you might or might not be intelligent, rich, or born into a preferred class. Since you may occupy any position in the society once the veil is lifted, this theory encourages thinking about society from the perspective of all members. The veil of ignorance is part of the long tradition of thinking in terms of a social contract.

What does this have to do with crowdsourcing? If you were standing behind this metaphorical veil of ignorance, would you outlaw the crowdsourcing of crisis information on the basis that the data may not be representative? Or would you still like to receive SMS alerts from crowdsourced information? The text messages sent to Ushahidi-Haiti by Haitians in life-and-death situations were not necessarily statistically representative, but they saved lives.

What would you choose?

Patrick Philippe Meier

21 responses to “Crowdsourcing and the Veil of Ignorance: A Question of Morality?”

  1. My answer: absolutely not. Behind the veil of ignorance, I would not outlaw the crowdsourcing of crisis data. And I would suggest that those affected by the crisis, if behind the veil of ignorance, would not abuse the system by reporting falsified information either.
    I do think, however, that the two main challenges at this point are: education about how to use a crowdsourcing platform like Ushahidi, and verification of information.
    With time and improvement in these areas, however, it seems likely that crowdsourcing data will become more representative and therefore more statistically significant!

    • Thanks Althea! And very good point re: “it seems likely that crowdsourcing data will become more representative and therefore more statistically significant!” I’ve thought that too but never articulated so directly. So thanks for weighing in!

  2. Lol… you mean outlaw crowdsourced crisis information on the off chance that I was born an oppressive dictator, or on the off chance that I was born myopic about statistical sampling?

    Well, for my money, I would bet on being born one of the other 6 billion people in the world who would benefit from an aid worker getting an SMS like “Oh crap! I’m buried under a building! =( Send Help Pls!” than not.

    Don’t get me wrong though, as a Physicist I love thought experiments as much as the next guy, I mean seriously I use them all the time… heck we wouldn’t have Relativity or the field of Quantum Mechanics (no more iPhone!) if it weren’t for an Aspie German-born Jew daydreaming up thought experiments at work.

    But thought experiments do have some serious limitations. When it comes to questions of morality, for example, you have to really look at where the rubber meets the road on issues like these. A simple truism that I like to use is that ‘morality needs context to be relevant,’ and one thing that thought experiments are notoriously poor at doing is providing any context other than your own. So the conclusions one might reach in thought experiments may have little or nothing to do with the reality 8 hours after an earthquake somewhere in Port Au Prince.

    Also conflating issues of morality and science can be a sticky if not detrimental endeavor – something Einstein struggled with his whole life.

    So without being flip I’m trying to understand the context of the offline discussion you’re clearly referencing here. Obviously crowdsourced information is self-selecting to some degree – people without cell phones, for example, will be less likely to SMS crisis information – but where the rubber meets the road, at least from a scientific standpoint, it’s almost universally better to have ‘some data’ – even skewed data – than ‘no data.’ The trick of making skewed data into useful data is first having data, and second having some idea of how it’s skewed, which doesn’t have to happen in synchrony with data collection. In fact, very often in science that happens well after the fact.

    But I don’t want to stick my foot in my mouth – perhaps a little more in the way of context is in order here?

    • Awesome comment as always, Sean, thanks! You do crack me up, so keep the humor coming :) <– "heck we wouldn’t have Relativity or the field of Quantum Mechanics (no more iPhone!) if it weren’t for an Aspie German-born Jew daydreaming up thought experiments at work."

      Very much agreed re: "it’s almost universally better to have ‘some data’ – even skewed data – than ‘no data.’ The trick of making skewed data into useful data is first having data, and second having some idea of how it’s skewed, which doesn’t have to happen in synchrony with data collection. In fact, very often in science that happens well after the fact."
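      To make the ‘skewed data’ point concrete, here is a toy sketch in Python (with entirely hypothetical district names and numbers, not anything from the Ushahidi codebase) of one standard way to turn skewed data into useful data after the fact: post-stratification weights computed against a census baseline.

      ```python
      # Toy sketch of post-stratification reweighting: once you have data AND some
      # idea of how it is skewed, you can correct the skew after collection.
      # All figures below are hypothetical, purely for illustration.

      # Crowdsourced damage reports per district (raw counts, skewed toward
      # areas with better cell-phone coverage).
      reports = {"urban": 800, "periurban": 150, "rural": 50}

      # Known population shares, e.g. from a census (the "idea of how it's skewed").
      population_share = {"urban": 0.40, "periurban": 0.25, "rural": 0.35}

      total_reports = sum(reports.values())

      # Weight each stratum so its contribution matches its population share.
      weights = {
          district: population_share[district] / (reports[district] / total_reports)
          for district in reports
      }

      # A rural report now carries 14x the weight of an urban one (7.0 vs 0.5)
      # in any population-level estimate built from the reports.
      for district, w in sorted(weights.items()):
          print(f"{district}: weight {w:.2f}")
      ```

      The key point is exactly the one made above: the weighting step can happen well after data collection, as long as the raw reports exist in the first place.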

  3. Great post, Patrick.

    Unfortunately, the argument that ‘some data’ is better than ‘no data’ still fails to convince people who don’t get Ushahidi or the utility of crowdsourcing.

    In my opinion, it’s a flaw in logic. There are two main issues people seem to have: 1) the platform can’t reach everyone and is thus non-representative; 2) the dataset is imperfect and open to bias.

    The first scenario implies that Ushahidi, or the people using it, are indeed selling the system and the data collected as a representation of the population. This is rarely, if ever, the case. The data collected may be representative or it may not; that depends on the circumstances and the people contributing to the system.

    The second implies that we’re saying the collected data is somehow infallible. Again, it’s just layers of information to add to our understanding of an event. If we’re monitoring elections in a city and we get 100,000 bogus reports of violence, the information gathered is still telling us something. In this case it’s telling us that someone is working very hard to skew the system. It would be a mistake to take any data on face value without looking at the greater context so why would using Ushahidi be any different?

    It’s also rare that people who are skeptical of crowdsourced platforms like Ushahidi ever put forth any alternatives. This is because there’s never been an ‘ideal’ way to poll the public. As we learned from Gallup polls in the last U.S. Presidential elections, ‘sampling’ based on demographic or class is also often wrong.

    However, it’s also a mistake to argue one method versus the other, the two methods can co-exist. Data collected from Ushahidi simply supplements data collected elsewhere.

    • Thanks for commenting, Jon! Very well put. I particularly like these really good points you make:

      * The second implies that we’re saying the collected data is somehow infallible. Again, it’s just layers of information to add to our understanding of an event. If we’re monitoring elections in a city and we get 100,000 bogus reports of violence, the information gathered is still telling us something. In this case it’s telling us that someone is working very hard to skew the system.

      * It’s also rare that people who are skeptical of crowdsourced platforms like Ushahidi ever put forth any alternatives. This is because there’s never been an ‘ideal’ way to poll the public.

  4. Patrick,

    The veil of ignorance is just fine, but can easily conflict with a humanitarian principle of “do no harm”. Reconciling those two is hard.

    I embrace crowdsourced data, but as we well know there’s a real difference between having an information source as just another data point, including it in a considered decision making process, and having it potentially distort the judgement of resource prioritisation etc. Call it the CNN effect if you like.

    So in the end it comes down to having good decision makers receiving all relevant data in a comprehensible way, and good ways to explain and communicate those decisions.

    Thus the discussion I really want to have around this is what does crowdsourced information mean for accountability in the age of intense media coverage. How do you explain “do no harm” in this context? Some good things, some not so good things. There’s no doubt that agencies and “beneficiaries” need to be having a very different discussion from the traditional “we monitor and evaluate” conversation.

    • Paul,

      I think we’re all in agreement that Ushahidi is a move in the right direction (a new direction to be sure) and that crowdsourced crisis mapping has many proven tangible benefits. But I personally take exception with the starting point of your argument, namely 1) that “do no harm” is a relevant reference point in evaluating crowdsourced crisis information, and 2) that it is still a relevant doctrine in humanitarian work in general.

      I feel, as do many others, that the doctrine of ‘do no harm’ is largely responsible for the outcome of genocide in Rwanda, for example. (Ref: “We Wish to Inform You That Tomorrow We Will be Killed With Our Families” by Philip Gourevitch) If after reading Gourevitch you’re still not convinced that ‘do no harm’ is largely an irrelevant concept read, “Worse Than War: Genocide, Eliminationism, and the Ongoing Assault on Humanity” by Daniel Goldhagen.

      Goldhagen makes the case (a very convincing case I might add) that in essence ‘do no harm’ in the hands of decision makers and politicians (for different reasons) too often becomes the doctrine of ‘do nothing’ or ‘wait and see’, which begets genocidal events like Rwanda and Bosnia.

      Ushahidi and tools that have yet to emerge around it are not the end-all but the beginning of a new era of accountability and violence-cycle interruption. I’m convinced that Ushahidi (or a successor) will prevent an “eliminationism event” (i.e. genocide) in the future.

      As we move forward we will likely see the current gap between the output of tools like Ushahidi and reporting agencies like CNN, BBC, and Al Jazeera close – perhaps becoming one and the same in many instances. Tools like these will help pull back the “fog of war” (blamed in virtually every instance as the reason for inaction) and force action by good intentioned, but process driven, decision makers.

      I hope it doesn’t seem like I’m attacking you personally for taking this point of view (it may not be your own, in fact); I’m only taking exception with the ‘frame’ of the ‘status quo’ in this case, something that I feel very passionate about.

      • Super, nicely put, Sean and very well argued. Fully agree with you, thanks very much for replying to this.

      • Sean,

        sorry it’s taken so long for me to reply, but, like any overly opinionated person, I just can’t let your (“non-personal”) comments stand unanswered :-).

        While I very much like Gourevitch’s book, I sometimes think that claiming “do no harm” is invalidated as a humanitarian principle because of the Rwandan genocide is like claiming that we shouldn’t care about weapons of mass destruction because they were used as an excuse for the 2003 invasion of Iraq. There’s a gulf of difference between political decision-making at the international level (which will find its way almost no matter what, as we clearly see in the debate on R2P) and how we choose to design, run and evaluate aid programs and interventions. Including software. It’s clearly a utilitarian view, with all that this means, and it needs to have other considerations balanced alongside it.

        WFP sending millions of metric tons of wheat into Afghanistan in early 2002 destroyed the market for what was the best local harvest in decades, and arguably led to the poppyfication of much of the economy with all that this has entailed. The point is that decisions made by rote adherence to a useful policy touchstone can be calamitous.

        But back to the core point: I love much of what Ushahidi has been able to enable and encourage, and I fervently believe in the power of and need for crowdsourcing, but we can’t claim that tools are completely neutral. Our design choices make a difference in how they’re used, and in what outcomes they enable and possibly favour. I’m simply arguing that we need to consider how we make these judgements. “Do no harm” is one component, the IFRC’s humanitarian code of conduct is another, etc.

        One of the reasons that Imogen Wall and I prodded Patrick and others about this is that I desperately want an open discussion on this (Rawls notwithstanding). The “amateurisation of aid” (not necessarily a pejorative IMHO, though there are other opinions: http://talesfromethehood.wordpress.com/2010/05/17/time/) is radically affecting (“complexifying” :-) the humanitarian system, and giving a voice to those labelled as “beneficiaries”.

        So how can we make it more likely that those in need actually get the help they require (information as well as food and shelter, empowerment and commerce as well as assistance), rather than leaving them screaming in frustration over (yet more) unfulfilled promises?

        As per usual I’ve run on, but thanks for engaging (and Patrick for hosting.)

        With respect,

        Nigel

        ps. and if you want to have a talk about genocide, evil and political/personal responsibility, I’m happy to (and will give you some of my context – my wife was a colleague and friend of Alison Des Forges as a starter) but let’s take it offline over a beer or three.

        pps. And Patrick – calling it a platform and not a methodology is skating the issue that I know you want to address via various means: what is good practice, and how can you encourage it? The Ushahidi brand was used on the Haiti instance and many other deployments, and is reported widely in the public media and in various fora, so it’s de facto a methodology and a set of programs.

      • Thanks very much, Nigel

        I still stand by my comment that Ushahidi is a platform, not a methodology. The fact that most have used crowdsourcing when deploying the platform does not make Ushahidi a methodology. Ushahidi is the technology that facilitates various methodologies. Microsoft Word is a word-processing program, not a methodology. So I’m not skating. Chris Blow’s recent blog post makes this point in a different but elegant way:

        http://blog.ushahidi.com/index.php/2010/05/19/allocation-of-time-deploying-ushahidi

        On good practice and how to encourage it, yes, I believe that’s what many of us are interested in doing.

      • Nigel- thank you so much for your thoughtful follow up.

        Well there’s a lot there and I’m not sure that I completely understand everything that you’re saying- so forgive me if we end up talking past each other a bit. Now let me see if I can give an equally thoughtful response…

        In my mind, genocide holds a very unique place. I treat genocide as such a unique event that analogies just fall short; in fact, there is nothing I can think of that is enough like a genocide to make an apt analogy. So, long story short, I must disagree with your analogy above… and I think that tools like Ushahidi will have a place in that interruption cycle, which it sounds like you don’t necessarily disagree with.

        And that exception aside, I totally agree with what you’re saying. I get it, I agree- the road to hell is paved with good intentions. The WFP Afghan wheat example (which I was previously unaware of) seems like a perfect example of where a “do no harm” mindset/doctrine/policy would have been incredibly helpful.

        To the point of the ‘amateurization of aid’, however, I think there might actually be a little bit of a disconnect happening here. Having observed both sides of what Nigel is talking about and what Patrick is talking about, I think a clarification is needed: namely, that Ushahidi and innovations like it live in a significantly different space than ill-conceived on-the-ground amateur aid programs.

        That said, without a doubt there is good aid and bad aid, and the ‘1 million t-shirts’ campaign is a great example of the bad. But I would disagree with those who say that ‘amateur aid’ has no place; in fact, I think that’s the wrong answer to the wrong question. The right question (and Nigel, I think you might agree with this) is: ‘how can we harness the power of inspired and motivated but clueless crowd members in such a way that it has high relevance, high efficacy and maximum sustained impact?’

        (…On the flipside of that, and this is more a direct response to the article you referenced, the main problem with the idea of ‘professional aid’ is the same problem that’s facing ‘professional medicine’ – namely that it silos, stagnates, and rejects innovation by nature, which – perhaps needless to say – leads to bad things for those on the receiving end.)

        I think the new winning approach to global issues (starting with crisis mapping and hopefully before long- Global Health) is going to be a thoughtful and agile balance between the “professional” and the “crowd” (or ‘amateur.’) In fact my partners and I are having these very same conversations around Global Health… We’re trying to take a first go at formulating the “special sauce” for crowdsourcing solutions to big intractable Global Health problems… bringing innovation from the edges into the center, as it were.

        I’m getting off track though…

        I think that Ushahidi and similar crowdsourced “pull platforms” need their own classification and an exemption from the ‘pro’ vs. ‘amateur aid’ argument, because they don’t have the same problems that on-the-ground amateur aid programs tend to have, namely burning bridges and disappointing people. Fundamentally, if these pull platforms don’t work for people, then people don’t use them and they never gain any traction: no great financial or political resources were appropriated for their creation or use, no village elders got burned and so came to resist future aid efforts, and no one pushed anything on anyone. Chris Blow’s article is a perfect illustration of one aspect of this effect.

        So I suggest that we need a new distinction in types or approaches to aid, what I’ll term “push aid” (i.e. ‘WFP Afghan wheat’ and ‘1 million t-shirts’) and “pull aid” (i.e. Ushahidi, CrowdPATH – a project my team is working on – and?)

        … to the point of ‘platform’ vs. ‘methodology’ I have to say that from where I’m standing it falls squarely in the definition of platform. That said, the terms methodology and platform are nebulous at best, and in general I agree that a dialog about the unintended effects of these platform approaches would probably be a beneficial thing to everyone involved.

        -Sean

    • Thanks for commenting, Nigel.

      I don’t think that the Veil of Ignorance is incompatible with the humanitarian principle of “Do No Harm”. In fact, the former can be used to maximize the chances of honoring the latter.

      What people fail to understand is that sometimes there is no alternative to crowdsourced information right after a major disaster! FEMA (publicly) called the Ushahidi-Haiti deployment the most up-to-date and comprehensive information available on Haiti. The map was virtually a “live” map of the events unfolding in the ravaged country. It’s all well and good to criticize crowdsourced data, but I’d like to see the UN scramble within 2 hours of a major disaster and start collecting information using random/representative sampling.

      Thoughts?

  5. Very interesting post and comments.
    I would like to add another point of view, in the same path.
    Sometimes people think that “new” is worse than “what we are accustomed to”, and they wear the “veil of ignorance” to protect themselves. Do past dependencies seem stronger than the goal of “saving lives”?
    On the other hand, emergency managers sometimes do not trust the people they are supposed to save. Talking about crowdsourcing in my country, Italy, the usual objection is: information from the “crowd” is not reliable; people know less than the emergency professionals do.
    Here I remember a true story: the dam disaster that occurred on 9 October 1963 in Vajont, Italy.
    A dam was built in a place where there was a mountain called “Toc”, which in the regional dialect means “that slides”.
    The engineers and the institutions didn’t talk to the people; they just pursued the big business.
    The dam was built and finished, and a lot of people found jobs. But one night a piece of the mountain broke away and fell into the artificial lake, provoking a wave that jumped over the dam and crashed down into the valley: 1,910 people died.
    What would have happened in the Web 2.0 era?
    Would the “crowd” have had a different and weightier role in the story?

    I believe that we should all switch to a new approach: the resilience approach.
    In this perspective, all the players should have the responsibility and the commitment to face emergencies.
    It is a matter of culture. Culture changes at a very slow pace, and it needs a strong effort.
    We are facing a new era, we are like pioneers.

  6. I completely agree with your resilience approach.

  7. Pingback: Seeking the Trustworthy Tweet: Can “Tweetsourcing” Ever Fit the Needs of Humanitarian Organizations? | iRevolution

  8. Pingback: The Best of iRevolution: Four Years of Blogging | iRevolution

  9. Crowdsourcing is the difference between having or not having information to begin with in a crisis or conflict situation… it’s not representative, and that’s OK because, as a method of non-probability sampling, its purpose is to be exploratory: to generate hypotheses and to identify where the hot spots are…

  10. Pingback: Big Data for Disaster Response: A List of Wrong Assumptions | iRevolution
