Tag Archives: Ethics

Code of Conduct: Cyber Crowdsourcing for Good

There is currently no unified code of conduct for digital crowdsourcing efforts in the development, humanitarian or human rights space. As such, we propose the following principles (displayed below) as a way to catalyze a conversation on these issues and to improve and/or expand this Code of Conduct as appropriate.


This initial draft was put together by Kate Chapman, Brooke Simons and myself. The link above points to this open, editable Google Doc. So please feel free to contribute your thoughts by inserting comments where appropriate. Thank you.

An organization that launches a digital crowdsourcing project must:

  • Provide clear volunteer guidelines on how to participate in the project so that volunteers are able to contribute meaningfully.
  • Test their crowdsourcing platform prior to any project or pilot to ensure that the system will not crash due to obvious bugs.
  • Disclose the purpose of the project, exactly which entities will be using and/or have access to the resulting data, to what end exactly, over what period of time and what the expected impact of the project is likely to be.
  • Disclose whether volunteer contributions to the project will or may be used as training data in subsequent machine learning research.
  • Not ask volunteers to carry out any illegal tasks.
  • Explain any risks (direct and indirect) that may come with volunteer participation in a given project. To this end, carry out a risk assessment and produce corresponding risk mitigation strategies.
  • Clearly communicate if the results of volunteer tasks will or are likely to be sold to partners/clients.
  • Limit the level of duplication required (for data quality assurance) to a reasonable number based on previous research and experience. In sum, do not waste volunteers’ time and do not offer tasks that are not meaningful. When all tasks have been carried out, inform volunteers accordingly.
  • Be fully transparent on the results of the project even if the results are poor or unusable.
  • Only launch a full-scale crowdsourcing project if they are able to analyze the results and deliver the findings within a timeframe that provides added value to end-users of the data.

An organization that launches a digital crowdsourcing project should:

  • Share as much of the resulting data with volunteers as possible without violating data privacy or the principle of Do No Harm.
  • Enable volunteers to opt out of having their contributions used as training data in subsequent machine learning research.
  • Assess how many digital volunteers are likely to be needed for a project and recruit appropriately. Using additional volunteers just because they are available is not appropriate. Should recruitment nevertheless exceed need, adjust the project to inform volunteers as soon as their inputs are no longer needed, and where possible give them options for redirecting their efforts.
  • Explain that the same crowdsourcing task (microtask) may/will be given to multiple digital volunteers for data control purposes. This often reassures volunteers who initially lack confidence when contributing to a project.
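The duplication guideline above (assigning the same microtask to multiple volunteers for data control) is typically resolved by a majority vote over the redundant answers. Below is a minimal sketch of that step; the task IDs, labels and agreement threshold are all hypothetical, not part of any specific platform:

```python
from collections import Counter

def aggregate_labels(task_labels, min_agreement=0.66):
    """Resolve redundant volunteer answers per microtask by majority vote.

    task_labels: dict mapping task_id -> list of volunteer answers.
    Returns task_id -> (winning_label, agreement_ratio); tasks whose
    agreement falls below min_agreement map to None (flagged for review).
    """
    results = {}
    for task_id, labels in task_labels.items():
        label, votes = Counter(labels).most_common(1)[0]
        ratio = votes / len(labels)
        results[task_id] = (label, ratio) if ratio >= min_agreement else None
    return results

# Three volunteers per task: img_001 reaches agreement, img_002 does not.
votes = {
    "img_001": ["shelter", "shelter", "nothing"],
    "img_002": ["road", "shelter", "nothing"],
}
print(aggregate_labels(votes))
```

The agreement threshold is exactly the kind of parameter the guideline asks projects to ground in previous research and experience, so that the number of duplicate assignments stays reasonable and volunteers’ time is not wasted.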

May the Crowd Be With You

Three years ago, 167 digital volunteers and I combed through satellite imagery of Somalia to support the UN Refugee Agency (UNHCR) on this joint project. The purpose of this digital humanitarian effort was to identify how many Somalis had been displaced (easily 200,000) due to fighting and violence. Earlier this year, 239 passengers and crew went missing when Malaysia Flight 370 suddenly disappeared. In response, some 8 million digital volunteers mobilized as part of the digital search & rescue effort that followed.


So in the first case, 168 volunteers were looking for 200,000+ people displaced by violence and in the second case, some 8,000,000 volunteers were looking for 239 missing souls. Last year, in response to Typhoon Haiyan, digital volunteers spent 200 hours or so tagging social media content in support of the UN’s rapid disaster damage assessment efforts. According to responders at the time, some 11 million people in the Philippines were affected by the Typhoon. In contrast, well over 20,000 years of volunteer time went into the search for Flight 370’s missing passengers.

What to do about this heavily skewed distribution of volunteer time? Can (or should) we do anything? Are we simply left with “May the Crowd be with You”? The massive (and as yet unparalleled) online response to Flight 370 won’t be a one-off. We’re entering an era of mass-sourcing where entire populations can be mobilized online. What happens when future mass-sourcing efforts ask digital volunteers to look for military vehicles and aircraft in satellite images taken of a mysterious, unnamed “enemy country” for unknown reasons? Think this is far-fetched? As noted in my forthcoming book, Digital Humanitarians, this online, crowdsourced military surveillance operation already took place (at least once).

As we continue heading towards this new era of mass-sourcing, those with the ability to mobilize entire populations online will indeed wield an impressive new form of power. And as millions of volunteers continue tagging and tracing various features, this volunteer-generated data combined with machine learning will be used to automate the future tagging and tracing needs of militaries and multi-billion dollar companies, thus obviating the need for large volumes of volunteers (especially handy should volunteers seek to boycott these digital operations).

At the same time, however, the rise of this artificial intelligence may level the playing field. But few players out there have ready access to high resolution satellite imagery and the actual technical expertise to turn volunteer-generated tags/traces into machine learning classifiers. To this end, perhaps one way forward is to try and “democratize” access to both satellite imagery and the technology needed to make sense of this “Big Data”. Easier said than done. But maybe less impossible than we may think. Perhaps new, disruptive initiatives like Planet Labs will help pave the way forward.
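As a toy illustration of the pipeline described above (volunteer-generated tags repurposed as training data for automated classifiers), the sketch below fits a simple nearest-centroid classifier. The feature vectors and labels are entirely hypothetical; real systems would extract far richer features from imagery and use far more capable models:

```python
def train_centroids(tagged_examples):
    """Average the feature vectors of volunteer-tagged examples,
    producing one centroid per label. Input: list of (features, label)."""
    sums, counts = {}, {}
    for features, label in tagged_examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in vec]
            for label, vec in sums.items()}

def classify(centroids, features):
    """Assign the label whose centroid is nearest (squared Euclidean)."""
    def sq_dist(center):
        return sum((a - b) ** 2 for a, b in zip(features, center))
    return min(centroids, key=lambda label: sq_dist(centroids[label]))

# Hypothetical volunteer tags on two-dimensional image features.
tagged = [
    ([0.9, 0.1], "vehicle"),
    ([0.8, 0.2], "vehicle"),
    ([0.1, 0.9], "building"),
    ([0.2, 0.8], "building"),
]
model = train_centroids(tagged)
print(classify(model, [0.85, 0.15]))  # a new, untagged example
```

The point of the sketch is simply that once enough tags accumulate, the crowd becomes optional: the model classifies new imagery on its own, which is precisely the dynamic described above.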


On UAVs for Peacebuilding and Conflict Prevention

My colleague Helena Puig Larrauri recently published this excellent piece on the ethical problems & possible solutions to using UAVs for good in conflict settings. I highly recommend reading her article. The purpose of my blog post is simply to reflect on the important issues that Helena raises.


One of Helena’s driving questions is this: “Does the local population get a say in what data is collected, and to what purpose?” She asks this in the context of the surveillance drones (pictured above) used by the United Nation’s Department for Peacekeeping Operations (DPKO) in the Democratic Republic of the Congo (DRC). While the use of non-lethal UAVs in conflict zones raises a number of complicated issues, Helena is right to insist that we begin discussing these hard issues earlier rather than later. To this end, she presents “three problems and two possible solutions to start a conversation on drones, ethics and conflict.” I italicized solutions because much of the nascent discourse on this topic seems preoccupied with repeating all the problems that have already been identified, leaving little time and consideration to discussions on possible solutions. So kudos to Helena.

Problem 1: Privacy and Consent. How viable is it to obtain consent from those being imaged for UAV-collected data? As noted in this blog post on data protection protocols for crisis mapping, the International Committee of the Red Cross recognizes that, “When such consent cannot be realistically obtained, information allowing the identification of victims or witnesses, should only be relayed in the public domain if the expected protection outcome clearly outweighs the risks. In case of doubt, displaying only aggregated data, with no individual markers, is strongly recommended.” But Helena argues that drawing the line on what is actually life-threatening in a conflict context is particularly hard. “UAVs cannot detect intent, so how are imagery analysts to determine if a situation is likely to result in loss of life?” These are really important questions, and I certainly do not have all, most or any of the answers.

In terms of UAVs not being able to detect intent, could other data sources be used to monitor tactics and strategies that may indicate intent to harm? On a different note, DigitalGlobe’s latest & most sophisticated satellite, WorldView-3, captures images at an astounding 31-centimeter resolution and can even see wildfires beneath the smoke. What happens when commercial satellites are able to capture imagery at 20 or 10 centimeter resolutions? Will DigitalGlobe ask the planet’s population for their consent? Does anyone know of any studies out there that have analyzed just how much—and also what kind—of personal identifying information can be captured via satellite and UAV imagery across various resolutions, especially when linked to other datasets?


Problem 2: Fear and Confusion. Helena kindly refers to this blog post of mine on common misconceptions about UAVs. I gasped when she quite rightly noted that my post didn’t explicitly distinguish between the use of UAVs in response to natural hazards versus violent, armed conflict. To be clear, I was speaking strictly and only about the former. The very real possibility for fear and confusion that Helena and others describe is precisely why I’ve remained steadfast about including the following guideline in the Humanitarian UAV Network’s Code of Conduct:

“Do not operate humanitarian UAVs in conflict zones or in countries under repressive, authoritarian rule; particularly if military drones have recently been used in these countries.”

As Helena notes, a consortium of NGOs working in the DRC have warned that DPKO’s use of surveillance drones in the country could “blur the lines between military and humanitarian actors.” According to Daniel Gilman from the UN Office for the Coordination of Humanitarian Affairs (OCHA), who also authored OCHA’s Policy Brief on Humanitarian UAVs,

“The DRC NGO position piece has to be understood in the context of the Oslo Guidelines on the use of Military and Civil Defense Assets in Disaster Relief – from conversations with some people engaged on the ground, the issue was less the tech itself [i.e., the drones] than the fact that the mission was talking about using this [tech] both for military interventions and ‘humanitarian’ needs, particularly since [DPKO's] Mission doesn’t have a humanitarian mandate. We should be careful of eliding issues around dual-use by military actors with use by humanitarians in conflicts or with general concerns about privacy” (Email exchange on Sept. 8, 2014, permission to publish this excerpt granted in writing).

This is a very important point. Still, distinguishing between UAVs operated by the military versus those used by humanitarian organizations for non-military purposes is no easy task—assuming it is even possible. Does this mean that UAVs should simply not be used for good in conflict zones? I’m conflicted. (As an aside, this dilemma reminds me of the “Security Dilemma” in International Relations Theory and in particular the related “Offense-Defense Theory“).

Perhaps an alternative is for DPKO to use their helicopters instead (like the one below), which, for some (most?) civilians, may look somewhat more scary than DPKO’s drone above. Keep in mind that such helicopters & military cargo planes are also significantly louder, which may add to the fear. Also, using helicopters to capture aerial imagery doesn’t really solve the privacy and consent problem.


On the plus side, we can at least distinguish these UN-marked helicopters from other military attack helicopters used by repressive regimes. Then again, what prevents a ruthless regime from painting their helicopters white and adding big UN letters to maintain an element of surprise when bombing their own civilians?


Going back to DPKO’s drone, it is perhaps worth emphasizing that these models are definitely on the larger and heavier end of the spectrum. Compare the above with the small, ultralight UAV below, which was used following Typhoon Haiyan in the Philippines. This UAV is almost entirely made of foam and thus weighs only ~600 grams. When airborne, it looks like a bird. So it may elicit less fear even if DPKO ends up using this model in the future.

Problem 3: Response and Deterrence. Helena asks whether it is ethical for DPKO or other UN/NGO actors to deploy UAVs “if they do not have the capacity to respond to increased information on threats?” Could the use of UAV raise expectations of a response? “One possible counter-argument is to say that the presence of UAVs is in itself a deterrent” to would-be perpetrators of violence, “just as the presence of UN peacekeepers is meant to be a deterrent.” As Helena writes, the head of DPKO has suggested that deterrence is actually a direct aim of the UN’s drone program. “But the notion that a digital Panopticon can deter violent acts is disputable (see for example here), since most conflict actors on the ground are unlikely to be aware that they are being watched and / or are immune to the consequences of surveillance.”

I suppose this leads to the following question: are there ways to make conflict actors on the ground aware that they are perhaps being watched? Then again, if they do realize that they’re being watched, won’t they simply adapt and evolve strategies to evade or shoot down DPKO’s UAVs? This would then force DPKO to change its own strategy, perhaps adopting more stealthy UAVs. What broader consequences and possible unintended impact could this have on civilian, crisis-affected communities?

Solution 1: Education and Civic Engagement. I completely agree with Helena’s emphasis on both education and civic engagement, two key points I’ve made in a number of posts (here, here & here). I also agree that “This can make way for informed consent about the operation of drones, allowing communities to engage critically, offer grounded advice and hold drone operators to account.” But this brings us back to Helena’s first question: “what happens if a community, after being educated and openly consulted about a UAV program, decides that drones pose too much of a risk or are otherwise not beneficial? In other words, can communities stop UN- or NGO-operated drones from collecting information they have not consented to sharing? Education will be insufficient if there are no mechanisms in place for participatory decision-making on drone use in conflict settings.” So what to do? Perhaps Helena’s second solution may shed some light.

Solution 2: From Civic Engagement to Empowerment. In Helena’s view, “the critical ethical question about drones and conflict is how they shift the balance of power. As with other data-driven, tech-enabled tools, ultimately the only ethical solution (and probably also the most effective at achieving impact) is community-driven implementation of UAV programs.” I completely agree with this as well, which is why I’m very interested in this community-based project in Haiti and this grassroots UAV initiative; in fact, I invited the latter’s team leads to join the Advisory Board of the Humanitarian UAV Network (UAViators) given their expertise in UAVs and their explicit focus on community engagement.


In terms of peacebuilding applications, Helena writes that “there is plenty that local peacebuilders could use drones for in conflict settings: from peace activism using tactics for civil resistance, to citizen journalism that communicates the effects of conflict, to community monitoring and reporting of displacement due to violence.” But as she rightly notes, these novel applications exacerbate the three ethical problems outlined above. So what now?

I have some (unformed) ideas but this blog post is long enough already. I’ll leave this for a future post and simply add the following for now. First, in terms of civil resistance and the need to distinguish between a regime’s UAVs versus activist UAVs, perhaps secret codes could be used to signal that a UAV is flying for a civil resistance mission. This could mean painting certain patterns on the UAV or flying in a particular pattern. Of course, this leads back to the age-old challenge of disseminating the codes widely enough while keeping them from falling into the wrong hands.

Second, I used to work extensively in the conflict prevention and conflict early warning space (see my original blog on this). During this time, I was a strong advocate for a people-centered approach to early warning and rapid response systems. The UN’s Global Survey of Early Warning Systems (PDF) defines the purpose of people-centered early warning systems as follows:

“… to empower individuals and communities threatened by hazards to act in sufficient time & in an appropriate manner so as to reduce the possibility of personal injury, loss of life, damage to property and the environment, and loss of livelihoods.”

This shift is ultimately a shift in the balance of power, away from state-centric power to people-power, which is why I wholeheartedly agree with Helena’s closing thoughts: “The more I consider how drones could be used for good in conflict settings, the more I think that local peacebuilders need to turn the ethics discourse on its head: as well as defending privacy and holding drone operators to account, start using the same tools and engage from a place of power.” This is not about us.


See Also:

  • Crisis Map of UAV Videos for Disaster Response [link]
  • Official UN Policy Brief on Humanitarian UAVs [link]
  • Reflections on Use of UAVs in Humanitarian Interventions [link]
  • The Use of Drones for Nonviolent Civil Resistance [link]
  • Drones for Human Rights: Brilliant or Foolish? [link]

Perils of Crisis Mapping: Lessons from Gun Map

Any CrisisMapper who followed the social firestorm surrounding the gun map published by the Journal News will have noted direct parallels with the perils of Crisis Mapping. The digital and interactive gun map displayed the (legally acquired) names and addresses of 33,614 handgun permit holders in two counties of New York. Entitled “The Gun Owner Next Door,” the project was launched on December 23, 2012 to highlight the extent of gun proliferation in the wake of the school shooting in Newtown. The map has been viewed over 1 million times since. This blog post documents the consequences of the gun map and explains how to avoid making the same mistakes in the field of Crisis Mapping.


The backlash against Journal News was swift, loud and intense. The interactive map included the names and addresses of police officers and other law enforcement officials such as prison guards. The latter were subsequently threatened by inmates who used the map to find out exactly where they lived. Former crooks and thieves confirmed the map would be highly valuable for planning crimes (“news you can use”). They warned that criminals could easily use the map either to target houses with no guns (to avoid getting shot) or take the risk and steal the weapons themselves. Shotguns and handguns have a street value of $300-$400 per gun. This could lead to a proliferation of legally owned guns on the street.

The consequences of publishing the gun map didn’t end there. Law-abiding citizens who do not own guns began to fear for their safety. A Democratic legislator told the media: “I never owned a gun but now I have no choice [...]. I have been exposed as someone that has no gun. And I’ll do anything, anything to protect my family.” One resident feared that her ex-husband, who had attempted to kill her in the past, might now be able to find her thanks to the map. There were also consequences for the journalists who published the map. They began to receive death threats and had to station an armed guard outside one of their offices. One disenchanted blogger decided to turn the tables (reverse panopticon) by publishing a map with the names and addresses of key editorial staffers who work at Journal News. The New York Times reported that the location of the editors’ children’s schools had also been posted online. Suspicious packages containing white powder were also mailed to the newsroom (later found to be harmless).

News about a burglary possibly tied to the gun map began to circulate (although I’m not sure whether the link was ever confirmed). According to one report, burglars “broke in Saturday evening, and went straight for the gun safe. But they could not get it open.” Even if there was no link between this specific burglary and the gun map, many county residents fear that their homes have become a target. The map also “demonized” gun owners.


After weeks of fierce and heated “debate” the Journal News took the map down. But were the journalists right in publishing their interactive gun map in the first place? There was nothing illegal about it. But should the map have been published? In my opinion: No. At least not in that format. The rationale behind this public map makes sense. After all, “In the highly charged debate over guns that followed the shooting, the extent of ownership was highly relevant. [...] By publishing the ‘gun map,’ the Journal News gave readers a visceral understanding of the presence of guns in their own community.” (Politico). It was the implementation of the idea that was flawed.

I don’t agree with the criticism that suggests the map was pointless because criminals obviously don’t register their guns. Mapping criminal activity was simply not the rationale behind the map. Also, while Journal News could simply have published statistics on the proliferation of gun ownership, the impact would not have been as … dramatic. Indeed, “ask any editor, advertiser, artist or curator—hell, ask anyone who’s ever made a PowerPoint presentation—which editorial approach would be a more effective means of getting the point across” (Politico). No, this is not an endorsement of the resulting map, simply an acknowledgement that the decision to use mapping as a medium for data visualization made sense.

The gun map could have been published without the interactive feature and without corresponding names and addresses. This is eventually what the journalists decided to do, about four weeks later. Aggregating the statistics would have also been an option in order to get away from individual dots representing specific houses and locations. Perhaps a heat map that leaves enough room for geographic ambiguity would have been less provocative but still effective in depicting the extent of gun proliferation. Finally, an “opt out” feature should have been offered, allowing those owning guns to remove themselves from the map (still in the context of a heat map). Now, these are certainly not perfect solutions—simply considerations that could mitigate some of the negative consequences that come with publishing a hyper-local map of gun ownership.
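The aggregation option just described can be sketched as a simple grid-binning step: snap each exact address to a coarse grid cell before mapping, so the published layer shows only counts per cell rather than individual dots. The coordinates and cell size below are hypothetical:

```python
from collections import Counter

def bin_points(points, cell_size=0.05):
    """Aggregate exact (lat, lon) points into coarse grid cells,
    keyed by integer cell indices, so that no individual address
    is recoverable from the published heat map layer."""
    cells = Counter()
    for lat, lon in points:
        cells[(int(lat // cell_size), int(lon // cell_size))] += 1
    return dict(cells)

# Two nearby (hypothetical) permit-holder addresses fall into one cell.
addresses = [(41.03, -73.76), (41.04, -73.77)]
print(bin_points(addresses))
```

A coarser cell_size trades geographic precision for privacy, and an “opt out” list could simply be filtered out of the points before binning, which is exactly the kind of mitigation discussed above.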

The point, quite simply, is that there are various ways to map sensitive data such that the overall data visualization is rendered relatively less dangerous. But there is another perhaps more critical observation that needs to be made here. The New York Times’ Bill Keller gets to the heart of the matter in this piece on the gun map:

“When it comes to privacy, we are all hypocrites. We howl when a newspaper publishes public records about personal behavior. At the same time, we are acquiescing in a much more sweeping erosion of our privacy —government surveillance, corporate data-mining, political micro-targeting, hacker invasions—with no comparable outpouring of protest. As a society we have no coherent view of what information is worth defending and how to defend it. When our personal information is exploited this way, we may grumble, or we may seek the largely false comfort of tweaking our privacy settings [...].”

In conclusion, the “smoking guns” (no pun intended) were never found. Law enforcement officials and former criminals seemed to imply that thieves would go on a rampage with map in hand. So why did we not see a clear and measurable increase in burglaries? The gun map should obviously have given thieves the edge. But no, all we have is just one unconfirmed report of an unsuccessful crime that may potentially be linked to the map. Surely, there should be an arsenal of smoking guns given all the brouhaha.

In any event, the controversial gun map provides at least six lessons for those of us engaged in crisis mapping complex humanitarian emergencies:

First, just because data is publicly accessible does not mean that a map of said data is ethical or harmless. Second, there are dozens of ways to visualize and “blur” sensitive data on a map. Third, a threat and risk mitigation strategy should be standard operating procedure for crisis maps. Fourth, since crisis mapping almost always entails risk-taking when tracking conflicts, the benefits that at-risk communities gain from the resulting map must always and clearly outweigh the expected costs. This means carrying out a Cost Benefit Analysis, which goes to the heart of the “Do No Harm” principle. Fifth, a code of conduct on data protection and data security for digital humanitarian response needs to be drafted, adopted and self-enforced; something I’m actively working on with both the International Committee of the Red Cross (ICRC) and GSMA’s Disaster Response Program. Sixth, the importance of privacy can be, and already has been, hijacked by attention-seeking hypocrites who sensationalize the issue to gain notoriety and paralyze action. Non-action in no way implies no harm.

Update: Turns out the gun ownership data was highly inaccurate!

See also:

  • Does Digital Crime Mapping Work? Insights on Engagement, Empowerment & Transparency [Link]
  • On Crowdsourcing, Crisis Mapping & Data Protection [Link]
  • What do Travel Guides and Nazi Germany have to do with Crisis Mapping and Security? [Link]

Stranger than Fiction: A Few Words About An Ethical Compass for Crisis Mapping

The good people at the Sudan Sentinel Project (SSP), housed at my former “alma mater,” the Harvard Humanitarian Initiative (HHI), have recently written this curious piece on crisis mapping and the need for an “ethical compass” in this new field. They made absolutely sure that I’d read the piece by directly messaging me via the @CrisisMappers Twitter feed. Not to worry, good people, I read your masterpiece. Interestingly enough, it was published the day after my blog post reviewing IOM’s data protection standards.

To be honest, I was actually not going to spend any time writing up a response because the piece says absolutely nothing new and is hardly pro-active. Now, before anyone spins and twists my words: the issues they raise are of paramount importance. But if the authors had actually taken the time to speak with their fellow colleagues at HHI, they would know that several of us participated in a brilliant workshop last year which addressed these very issues. Organized by World Vision, the workshop included representatives from the International Committee of the Red Cross (ICRC), Care International, Oxfam GB, UN OCHA, UN Foundation, Standby Volunteer Task Force (SBTF), Ushahidi, the Harvard Humanitarian Initiative (HHI) and obviously World Vision. There were several data protection experts at this workshop, which made the event one of the most important workshops I attended in all of 2011. So a big thanks again to Phoebe Wynn-Pope at World Vision for organizing.

We discussed in-depth issues surrounding Do No Harm, Informed Consent, Verification, Risk Mitigation, Ownership, Ethics and Communication, Impartiality, etc. As expected, the outcome of the workshop was the clear need for data protection standards that are applicable for the new digital context we operate in, i.e., a world of social media, crowdsourcing and volunteer geographical information. Our colleagues at the ICRC have since taken the lead on drafting protocols relevant to a data 2.0 world in which volunteer networks and disaster-affected communities are increasingly digital. We expect to review this latest draft in the coming weeks (after Oxfam GB has added their comments to the document). Incidentally, the summary report of the workshop organized by World Vision is available here (PDF) and highly recommended. It was also shared on the Crisis Mappers Google Group. By the way, my conversations with Phoebe about these and related issues began at this conference in November 2010, just a month after the SBTF launched.

I should confess the following: one of my personal pet peeves has to do with people stating the totally obvious and calling for action but actually doing absolutely nothing else. Talk for talk’s sake just makes it seem like the authors of the article are simply looking for attention. Meanwhile, many of us are working on these new data protection challenges in our own time, as volunteers. And by the way, the SSP project is first and foremost focused on satellite imagery analysis and the Sudan, not on crowdsourcing or on social media. So they’re writing their piece as outsiders and, well, are hence less informed as a result—particularly since they didn’t do their homework.

Their limited knowledge of crisis mapping is blatantly obvious throughout the article. Not only do the authors not reference the World Vision workshop, which HHI itself attended, they also seem rather confused about the term “crisis mappers” which they keep using. This is somewhat unfortunate since the Crisis Mappers Network is an offshoot of HHI. Moreover, SSP participated and spoke at last year’s Crisis Mappers Conference—just a few months ago, in fact. One outcome of this conference was the launch of a dedicated Working Group on Security and Privacy, which will now become two groups, one addressing security issues and the other data protection. This information was shared on the Crisis Mappers Google Group and one of the authors is actually part of the Security Working Group.

To this end, one would have hoped, and indeed expected, that the authors would write a somewhat more informed piece about these issues. At the very least, they really ought to have documented some of the efforts to date in this innovative space. But they didn’t and unfortunately several statements they make in their article are, well… completely false and rather revealing at the same time. (Incidentally, the good people at SSP did their best to dissuade the SBTF from launching a Satellite Team on the premise that only experts are qualified to tag satellite imagery; seems like they’re not interested in citizen science even though some experts I’ve spoken to have referred to SSP as citizen science).

In any case, the authors keep on referring to “crisis mappers this” and “crisis mappers that” throughout their article. But who exactly are they referring to? Who knows. On the one hand, there is the International Network of Crisis Mappers, which is a loose, decentralized, and informal network of some 3,500 members and 1,500 organizations spanning 150+ countries. Then there’s the Standby Volunteer Task Force (SBTF), a distributed, global network of 750+ volunteers who partner with established organizations to support live mapping efforts. And then, easily the largest and most decentralized “group” of all, are all those “anonymous” individuals around the world who launch their own maps using whatever technologies they wish and for whatever purposes they want. By the way, to define crisis mapping as mapping highly volatile and dangerous conflict situations is really far from being accurate either. Also, “equating” crisis mapping with crowdsourcing, which the authors seem to do, is further evidence that they are writing about a subject that they have very little understanding of. Crisis mapping is possible without crowdsourcing or social media. Who knew?

Clearly, the authors are confused. They appear to refer to “crisis mappers” as if the group were a legal entity, with funding, staff, administrative support and brick-and-mortar offices. Furthermore, and what the authors don’t seem to realize, is that much of what they write is actually true of the formal professional humanitarian sector vis-a-vis the need for new data protection standards. But the authors have obviously not done their homework, and again, this shows. They are also confused about the term “crisis mapping” when they refer to “crisis mapping data” which is actually nothing other than geo-referenced data. Finally, a number of paragraphs in the article have absolutely nothing to do with crisis mapping even though the authors seem to insinuate otherwise. Also, some of the sensationalism that permeates the article is simply unnecessary and in poor taste.

The fact of the matter is that the field of crisis mapping is maturing. When Dr. Jennifer Leaning and I co-founded and co-directed HHI’s Program on Crisis Mapping and Early Warning from 2007-2009, the project was very much an exploratory, applied-research program. When Dr. Jen Ziemke and I launched the Crisis Mappers Network in 2009, we were just at the beginning of a new experiment. The field has come a long way since and one of the consequences of rapid innovation is obviously the lack of any how-to guide or manual. These certainly need to be written and are being written.

So, instead of stating the obvious, repeating the obvious, calling for the obvious and making embarrassing factual errors in a public article (which, by the way, is also quite revealing of the underlying motives), perhaps the authors could actually have done some research and emailed the Crisis Mappers Google Group. Two of the authors also have my email address; one even has my private phone number; oh, and they could also have DM’d me on Twitter like they just did.