Tag Archives: Protection

#NoShare: A Personal Twist on Data Privacy

Countless computers worldwide automatically fingerprint our use of social media around the clock without our knowledge or consent. So we’re left with the following choice: stay digital and face the Eye of Sauron, or excommunicate ourselves from social media and face digital isolation from society. I’d choose the latter were it not for the life-saving role that social media can play during disasters. So what if there were a third way? An alternative that enabled us to use social media without being fed to the machines. Imagine if the choice were ours. My PopRock Fellows (PopTech & Rockefeller Foundation) and I are pondering this question within the context of ethical community-driven resilience in the era of Big Data.


One result of this pondering is the notion of a #noshare or #ns hashtag. We propose using this hashtag on anything that we don’t want sensed and turned into fodder for the machines. This could include Facebook updates, tweets, emails, SMS, postcards, cars, buildings and even our physical selves. Buildings, for example, are increasingly captured by cameras on orbiting satellites and by the high-resolution cameras fixed to cars used for Google Street View.

The #noshare hashtag is a humble attempt at regaining some agency over the machines—and yes the corporations and governments using said machines. To this end, #noshare is a social hack that seeks to make a public statement and establish a new norm: the right to be social without being sensed or exploited without our knowledge or consent. While traditional privacy may be dead, most of us know the difference between right and wrong. This may foster positive social pressure to respect the use of #noshare.

Think of the #ns hashtag as drawing a line in the sand. When you post a public tweet and want that tweet to be read by humans only, add #noshare. This tag simply signals to the public sphere that your tweet is for human consumption and not for use by machines; not for download, retweeting, copying, analysis, sensing, modeling or prediction. Your use of #noshare, regardless of the medium, represents your public vote for trust and privacy; a vote for turning this hashtag into a widespread social norm.


Of course, this #noshare norm is not enforceable in a traditional sense. This means that one could search for, collect and analyze all tweets with the #noshare or #ns hashtag. We’re well aware of this “Streisand effect” and there’s nothing we can do about it just yet. But the point is to draw a normative line in the sand, to create a public and social norm that provokes strong public disapproval when people violate the #ns principle. What if this could become a social norm? What if positive social pressure could make it unacceptable to violate this norm? Could this create a deterrence effect?

Either way, the line between right and wrong would be rendered publicly explicit. There would thus be no excuse: any analysis, sensing, copying, etc., of #ns tweets would be the result of a human decision to willingly violate the public norm. This social hack would make it very easy for corporations and governments to command their data mining algorithms to ignore all our digital fingerprints that use the #ns hashtag. Crossing the #noshare line would thus provide a basis for social action against the owners of the machines in question. Social pressure is favorable to norm creation. Could #ns eventually become part of a Creative Commons type license?
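To make this concrete, honoring the norm would be a trivial filter in any collection pipeline. The sketch below is purely illustrative (the function names and message format are my own assumptions, not part of any real platform’s API): a collector would simply drop any message tagged #noshare or #ns before it ever reaches storage or analysis.

```python
import re

# Hypothetical opt-out filter: drop any message tagged #noshare or #ns
# before it enters a storage or analysis pipeline. Matching is
# case-insensitive; the lookbehind and \b avoid false hits on tags
# like #nsfw that merely start with the same letters.
NOSHARE_PATTERN = re.compile(r"(?<!\w)#(noshare|ns)\b", re.IGNORECASE)

def respects_noshare(text: str) -> bool:
    """Return True if the message may be collected (no #noshare/#ns tag)."""
    return NOSHARE_PATTERN.search(text) is None

def filter_collectable(messages):
    """Keep only messages whose authors have not opted out via #noshare."""
    return [m for m in messages if respects_noshare(m)]
```

The simplicity is the point: compliance would cost data miners one line of code, so any crossing of the line is unambiguously a deliberate choice.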

To be clear, tagging content with #ns does not mean that the content should not be made public. Content tagged with #ns is meant to be public, but only for the human public, not for computers to store and analyze. The point is simple: we want the option of being our public digital selves without being mined, sensed and analyzed by machines without our knowledge and consent. In sum, #noshare is an awareness-raising initiative that seeks to educate the public about our increasingly sensed environment. Indeed, Big Data = Big Sensing.

We suggest that #ns may return a sense of moral control to individuals, a sense of trust and local agency. These are important elements for social capital and resilience, for ethical, community-driven resilience. If this norm gains traction, we may be able to code this norm into social media platforms. In sum, sensing is not bad; sensing of social media during disasters can save lives. But the decision of whether or not to be sensed should be the decision of the individual.

My PopRock Fellows and I are looking for feedback on this proposal. We’re aware of some of the pitfalls, but are we missing anything? Are there ways to strengthen this campaign? Please let us know in the comments section below. Thank you!


Acknowledgements: Many thanks to PopRock Fellows Gustavo, Amy, Kate, Claudia and Jer for their valuable feedback on earlier versions of this post. 

Data Protection: This Tweet Will Self-Destruct In…

The permanence of social media content such as tweets presents an important challenge for data protection and privacy. This is particularly true when social media is used to communicate during crises. Indeed, social media users tend to volunteer personal identifying information during disasters that they otherwise would not share, such as phone numbers and home addresses. They typically share this sensitive information to offer help or seek assistance. What if we could limit the visibility of these messages after their initial use?


Enter TwitterSpirit and Efemr, two services that enable users to schedule their tweets for automatic deletion after a specified period of time using hashtags like #1m, #2h or #3d. According to Wired, using these services will (in some cases) also delete retweets. That said, tweets with these time-based hashtags can always be copied manually in any number of ways, so the self-destruction is not total. Nevertheless, their visibility can still be reduced by using TwitterSpirit and Efemr. Lastly, the use of these hashtags also sends a social signal that the tweets are intended to have limited temporal use.
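The mechanics behind such services are straightforward. Here is a minimal sketch, under my own assumptions about naming, of how a time-to-live hashtag like #1m, #2h or #3d could be parsed into a deletion deadline; the actual implementations of TwitterSpirit and Efemr are not public, so treat this as illustrative only.

```python
import re
from datetime import datetime, timedelta

# Illustrative parser for time-to-live hashtags in the style of Efemr:
# #<number><unit> where the unit is m(inutes), h(ours) or d(ays).
TTL_PATTERN = re.compile(r"#(\d+)([mhd])\b")

UNITS = {"m": "minutes", "h": "hours", "d": "days"}

def parse_ttl(text):
    """Return the tweet's time-to-live as a timedelta, or None if untagged."""
    match = TTL_PATTERN.search(text)
    if match is None:
        return None
    value, unit = int(match.group(1)), match.group(2)
    return timedelta(**{UNITS[unit]: value})

def deletion_time(text, posted_at):
    """Compute when a tweet should self-destruct, if it carries a TTL tag."""
    ttl = parse_ttl(text)
    return posted_at + ttl if ttl else None
```

A scheduler would then poll for tweets whose `deletion_time` has passed and issue delete calls via the platform API.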


Note: My fellow PopTech and Rockefeller Foundation Fellows and I have been thinking of related solutions, which we plan to blog about shortly. Hence my interest in Spirit & Efemr, which I stumbled upon by chance just now.

Data Protection Protocols for Crisis Mapping

The day after the CrisisMappers 2011 Conference in Geneva, my colleague Phoebe Wynn-Pope organized and facilitated the most important workshop I attended that year. She brought together a small group of seasoned crisis mappers and experts in protection standards. The workshop concluded with a pressing action item: update the International Committee of the Red Cross’s (ICRC) Professional Standards for Protection Work in order to provide digital humanitarians with expert guidance on protection standards for humanitarianism in the network age.

My colleague Anahi Ayala and I were invited to provide feedback on the new 20+ page chapter specifically dedicated to data management and new technologies. We added many comments and suggestions on the draft. The full report is available here (PDF). Today, thanks to the ICRC, I am in Switzerland to give a keynote on Next Generation Humanitarian Technology for the official launch of the report. The purpose of this blog post is to list the protection protocols that relate most directly to Crisis Mapping & Digital Humanitarian Response, and to problematize some of these protocols.

The Protocols

In the preface of the ICRC’s 2013 Edition of the Professional Standards for Protection Work, the report lists three reasons for the updated edition. The first has to do with new technologies:

In light of the rapidly proliferating initiatives to make new uses of information technology for protection purposes, such as satellite imagery, crisis mapping and publicizing abuses and violations through social media, the advisory group agreed to review the scope and language of the standards on managing sensitive information. The revised standards reflect the experiences and good practices of humanitarian and human rights organizations as well as of information & communication technology actors.

The new and most relevant protection standards relating to—or applicable to—digital humanitarians are listed below (indented text), together with commentary.

Protection actors must only collect information on abuses and violations when necessary 
for the design or implementation of protection activities. It may not be used for other purposes without additional consent.

A number of Digital Humanitarian Networks such as the Standby Volunteer Task Force (SBTF) only collect crisis information specifically requested by the “Activating Organization,” such as the UN Office for the Coordination of Humanitarian Affairs (OCHA). Volunteer networks like the SBTF are not “protection actors” but rather provide direct support to humanitarian organizations when the latter meet the SBTF’s activation criteria. In terms of what type of information the SBTF collects, again it is the Activating Organization that decides this, not the SBTF. For example, the Libya Crisis Map launched by the SBTF at the request of OCHA displayed categories of information that were decided by the UN team in Geneva.

Protection actors must collect and handle information containing personal details in accordance with the rules and principles of international law and other relevant regional or national laws on individual data protection.

These international, regional and national rules, principles and laws need to be made available to Digital Humanitarians in a concise, accessible and clear format. Such a resource is still missing.

Protection actors seeking information bear the responsibility to assess threats to the persons providing information, and to take necessary measures to avoid negative consequences for those from whom they are seeking information.

Protection actors setting up systematic information collection through the Internet or other media must analyse the different potential risks linked to the collection, sharing or public display of the information and adapt the way they collect, manage and publicly release the information accordingly.

Interestingly, when OCHA activated the SBTF in response to the Libya Crisis, it was the SBTF, not the UN, that took the initiative to formulate a Threat and Risks Mitigation Strategy that was subsequently approved by the UN. Furthermore, unlike other digital humanitarian networks, the Standby Task Force’s “Prime Directive” is to not interact with the crisis-affected population. Why? Precisely to minimize the risk to those voluntarily sharing information on social media.

Protection actors must determine the scope, level of precision and depth of detail
of the information collection process, in relation to the intended use of the information collected.

Again, this is determined by the protection actor activating a digital humanitarian network like the SBTF.

Protection actors should systematically review the information collected in order to confirm that it is reliable, accurate, and updated.

The SBTF has a dedicated Verification Team that strives to do this. The verification of crowdsourced, user-generated content posted on social media during crises is no small task. But the BBC’s User Generated Content (UGC) Hub has been doing just this for eight years. Meanwhile, new strategies and technologies are under development to facilitate the rapid verification of such content. Also, the ICRC report notes that “Combining and cross-checking such [crowdsourced] information with other sources, including information collected directly from communities and individuals affected, is becoming standard good practice.”

Protection actors should be explicit as to the level of reliability and accuracy of information they use or share.

Networks like the SBTF make explicit whether a report published on a crisis map has been verified or not. If the latter, the report is clearly marked as “Unverified”. There are more nuanced ways to do this, however. I have recently given feedback on some exciting new research that is looking to quantify the probable veracity of user-generated content.

Protection actors must gather and subsequently process protection information in an objective and impartial manner, to avoid discrimination. They must identify and minimize bias that may affect information collection.

Objective, impartial, non-discriminatory and unbiased information is often more a fantasy than reality even with traditional data. Meeting these requirements in a conflict zone can be prohibitively expensive, overly time consuming and/or downright dangerous. This explains why advanced statistical methods dedicated to correcting biases exist. These can and have been applied to conflict and human rights data. They can also be applied to user-generated content on social media to the extent that the underlying demographic and census-based information is available.

To place this into context, Harvard University Professor Gary King reminded me that the vast majority of medical data is not representative either. Nor is the vast majority of crime data. Does that render these datasets void? Of course not. Please see this post on Demystifying Crowdsourcing: An Introduction to Non-Probability Sampling.

Security safeguards appropriate to the sensitivity of the information must be in place prior
to any collection of information, to ensure protection from loss or theft, unauthorized access, disclosure, copying, use or modification, in any format in which it is kept.

One of the popular mapping technologies used by digital humanitarian networks is the Ushahidi platform. When the SBTF learned in 2012 that security holes had still not been patched almost a year after reporting them to Ushahidi Inc., the SBTF Core Team made an executive decision to avoid using Ushahidi technology whenever possible given that the platform could be easily hacked. (Just last month, a colleague of mine who is not a techie but a UN practitioner was able to scrape Ushahidi’s entire Kenya election monitoring data from March 2013, which included some personal identifying information). The SBTF has thus been exploring work-arounds and is looking to make greater use of GeoFeedia and Google’s new mapping technology, Stratomap, in future crisis mapping operations.

Protection actors must integrate the notion of informed consent when calling upon the general public, or members of a community, to spontaneously send them information through SMS, an open Internet platform, or any other means of communication, or when using information already available on the Internet.

This is perhaps the most problematic but important protection protocol as far as digital humanitarian work is concerned. While informed consent is absolutely of critical importance, the vast majority of crowdsourced content displayed on crisis maps is user-generated and voluntarily shared on social media. The very act of communicating with these individuals to request their consent not only runs the risk of endangering them but also violates the SBTF’s Prime Directive for the exact same reason. Moreover, interacting with crisis-affected communities may raise expectations of response that digital humanitarians are simply not in a position to guarantee. In situations of armed conflict and other situations of violence, conducting individual interviews can put people at risk not only because of the sensitive nature of the information collected, but because mere participation in the process can cause these people to be stigmatized or targeted.

That said, the ICRC does recognize that, “When such consent cannot be realistically obtained, information allowing the identification of victims or witnesses, should only be relayed in the public domain if the expected protection outcome clearly outweighs the risks. In case of doubt, displaying only aggregated data, with no individual markers, is strongly recommended.”
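The ICRC’s fallback recommendation of “aggregated data, with no individual markers” is easy to operationalize. The sketch below is my own illustration (the report field names and the threshold value are assumptions, not from the ICRC report): individual reports are reduced to per-area, per-category counts, and sparse cells are suppressed because a count of one or two in a small area can itself re-identify a person.

```python
from collections import Counter

def aggregate_reports(reports, min_count=5):
    """Reduce individual reports to (area, category) counts so that no
    personal identifiers reach the public map. Cells with fewer than
    min_count reports are suppressed to reduce re-identification risk
    in sparsely reported areas."""
    counts = Counter((r["area"], r["category"]) for r in reports)
    return {cell: n for cell, n in counts.items() if n >= min_count}
```

Only the aggregate dictionary would be published; the raw reports, with their phone numbers and addresses, stay with the Activating Organization.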

Protection actors should, to the degree possible, keep victims or communities having transmitted information on abuses and violations informed of the action they have taken
on their behalf – and of the ensuing results. Protection actors using information provided
by individuals should remain alert to any negative repercussions on the individuals or communities concerned, owing to the actions they have taken, and take measures to mitigate these repercussions.

Part of this protocol is problematic for the same reason as the above protocol. The very act of communicating with victims could place them in harm’s way. As far as staying alert to any negative repercussions, I believe the more seasoned digital humanitarian networks make this one of their top priorities.

When handling confidential and sensitive information on abuses and violations, protection actors should endeavor when appropriate and feasible, to share aggregated data on the trends they observed.

The purpose of the SBTF’s Analysis Team is precisely to serve this function.

Protection actors should establish formal procedures on the information handling process, from collection to exchange, archiving or destruction.

Formal procedures to archive & destroy crowdsourced crisis information are largely lacking. Moving forward, the SBTF will defer this responsibility to the Activating Organization.

Conclusion

In conclusion, the ICRC notes that, “When it comes to protection, crowdsourcing can be an extremely efficient way to collect data on ongoing violence and abuses and/or their effects on individuals and communities. Made possible by the wide availability of Internet or SMS in countries affected by violence, crowdsourcing has rapidly gained traction.” To this end,

Although the need for caution is a central message [in the ICRC report], it should in no way be interpreted as a call to avoid sharing information. On the contrary, when the disclosing of protection information is thought to be of benefit to the individuals and communities concerned, it should be shared, as appropriate, with local, regional or national authorities, UN peacekeeping operations, other protection actors, and last but not least with service providers.

This is in line with the conclusions reached by OCHA’s landmark report, which notes that “Concern over the protection of information and data is not a sufficient reason to avoid using new communications technologies in emergencies, but it must be taken into account.” And so, “Whereas the first exercises were conducted without clear procedures to assess and to subsequently limit the risks faced by individuals who participated or who were named, the groups engaged in crisis mapping efforts over the years have become increasingly sensitive to the need to identify & manage these risks” (ICRC 2013).

It is worth recalling that the vast majority of the groups engaged in crisis mapping efforts, such as the SBTF, are first and foremost volunteers who are not only continuing to offer their time, skills and services for free, but are also taking it upon themselves to actively manage the risks involved in crisis mapping—risks that they, perhaps better than anyone else, understand and worry about the most because they are, after all, at the frontlines of these digital humanitarian efforts. And they do this all on a grand operational budget of $0 (as far as the SBTF goes). And yet, these volunteers continue to mobilize at the request of international humanitarian organizations and are always looking to learn, improve and do better. They continue to change the world, one map at a time.

I have organized a CrisisMappers Webinar on April 17, 2013, featuring presentations and remarks by the lead authors of the new ICRC report. Please join the CrisisMappers list-serve for more information.


See also:

  • SMS Code of Conduct for Disaster Response (Link)
  • Humanitarian Accountability Handbook (PDF)

Launching: SMS Code of Conduct for Disaster Response

Shortly after the devastating Haiti Earthquake of January 12, 2010, I published this blog post on the urgent need for an SMS code of conduct for disaster response. Several months later, I co-authored this peer-reviewed study on the lessons learned from the unprecedented use of SMS following the Haiti Earthquake. This week, at the Mobile World Congress (MWC 2013) in Barcelona, GSMA’s Disaster Response Program organized two panels on mobile technology for disaster response and used the event to launch an official SMS Code of Conduct for Disaster Response (PDF). GSMA members comprise nearly 800 mobile operators based in more than 220 countries.


Thanks to Kyla Reid, Director for Disaster Response at GSMA, and to Souktel’s Jakob Korenblum, my calls for an SMS code of conduct were not ignored. The three of us spent a considerable amount of time in 2012 drafting and re-drafting a detailed set of principles to guide SMS use in disaster response. During this process, we benefited enormously from many experts on the mobile operator side and in the humanitarian community, many of whom are at MWC 2013 for the launch of the guidelines. It is important to note that there have been a number of parallel efforts that our combined work has greatly benefited from. The Code of Conduct we launched this week does not seek to duplicate these important efforts but rather serves to inform GSMA members about the growing importance of SMS use for disaster response. We hope this will help catalyze a closer relationship between the world’s leading mobile operators and the international humanitarian community.

Since the impetus for this week’s launch began in response to the Haiti Earthquake, I was invited to reflect on the crisis mapping efforts I spearheaded at the time. (My slides for the second panel organized by GSMA are available here. My more personal reflections on the third anniversary of the earthquake are posted here). For several weeks, digital volunteers updated the Ushahidi-Haiti Crisis Map (pictured above) with new information gathered from hundreds of different sources. One of these information channels was SMS. My colleague Josh Nesbit secured an SMS short code for Haiti thanks to a tweet he posted at 1:38pm on Jan 13th (top left in image below). Several days later, the short code (4636) was integrated with the Ushahidi-Haiti Map.


We received about 10,000 text messages from the disaster-affected population during the Search and Rescue phase. But we only mapped about 10% of these because we prioritized the most urgent and actionable messages. While mapping these messages, however, we had to address a critical issue: data privacy and protection. There’s an important trade-off here: the more open the data, the more widely usable that information is likely to be for professional disaster responders, local communities and the Diaspora—but goodbye privacy.

Time was not a luxury we had; an entire week had already passed since the earthquake. We were at the tail end of the search and rescue phase, which meant that literally every hour counted for potential survivors still trapped under the rubble. So we immediately reached out to two trusted lawyers in Boston, one of them a highly reputable Law Professor at The Fletcher School of Law and Diplomacy who is also a specialist on Haiti. You can read the lawyers’ written email replies, along with the day/time they were received, on the right-hand side of the slide. Both lawyers opined that consent was implied vis-à-vis the publishing of personal identifying information. We shared this opinion with all team members and partners working with us. We then made a joint decision 24 hours later to move ahead and publish the full content of incoming messages. This decision was supported by an Advisory Board I put together, comprised of humanitarian colleagues from the Harvard Humanitarian Initiative, who agreed that the risks of making this information public were minimal vis-à-vis the principle of Do No Harm. Ushahidi then launched a micro-tasking platform to crowdsource the translation efforts and hosted this on 4636.Ushahidi.com [link no longer live], which volunteers from the Diaspora used to translate the text messages.

I was able to secure a small amount of funding in March 2010 to commission a fully independent evaluation of our combined efforts. The project was evaluated a year later by seasoned experts from Tulane University. The results were mixed. While the US Marine Corps publicly claimed to have saved hundreds of lives thanks to the map, it was very hard for the evaluators to corroborate this information during their short field visit to Port-au-Prince more than 12 months after the earthquake. Still, this evaluation remains the only professional, independent and rigorous assessment of Ushahidi and 4636 to date.


The use of mobile technology for disaster response will continue to increase for years to come. Mobile operators and humanitarian organizations must therefore be pro-active in managing this increased demand by ensuring that the technology is used wisely. I, for one, never again want to spend 24+ precious hours debating whether or not urgent life-and-death text messages can or cannot be mapped because of uncertainties over data privacy and protection—24 hours during a Search and Rescue phase is almost certain to make the difference between life and death. More importantly, however, I am stunned that a bunch of volunteers with little experience in crisis response and no affiliation whatsoever to any established humanitarian organization were able to secure and use an official SMS short code within days of a major disaster. It is little surprise that we made mistakes. So a big thank you to Kyla and Jakob for their leadership and perseverance in drafting and launching GSMA’s official SMS Code of Conduct to make sure the same mistakes are not made again.

While the document we’ve compiled does not solve every possible challenge conceivable, we hope it is seen as a first step towards a more informed and responsible use of SMS for disaster response. Rest assured that these guidelines are by no means written in stone. Please, if you have any feedback, kindly share it in the comments section below or privately via email. We are absolutely committed to making this a living document that can be updated.

To connect this effort with the work that my CrisisComputing Team and I are doing at QCRI, our contact at Digicel during the Haiti response had given us the option of sending out a mass SMS broadcast to their 2 million subscribers to get the word out about 4636. (We had thus far used local community radio stations). But given that we were processing incoming SMSs manually, there was no way we’d be able to handle the increased volume and velocity of incoming text messages following the SMS blast. So my team and I are exploring the use of advanced computing solutions to automatically parse and triage large volumes of text messages posted during disasters. The project, which currently uses Twitter, is described here in more detail.
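To give a flavor of what automated triage means in practice, here is a deliberately simplified sketch. The real QCRI work uses machine-learned classifiers rather than keyword lists, and the term sets below are my own illustrative assumptions, but even a crude keyword pass shows how incoming messages can be ranked so that responders see the most urgent and actionable ones first.

```python
# Toy triage: score each message by urgency cues (weighted higher) and
# location cues (which make a report actionable), then sort descending.
# The keyword sets are illustrative assumptions, not a real lexicon.
URGENT_TERMS = {"trapped", "injured", "bleeding", "rubble", "collapsed"}
ACTIONABLE_TERMS = {"street", "rue", "address", "near", "beside"}

def triage_score(message: str) -> int:
    """Score a message: urgency terms weigh twice as much as location cues."""
    words = set(message.lower().split())
    return 2 * len(words & URGENT_TERMS) + len(words & ACTIONABLE_TERMS)

def prioritize(messages):
    """Return messages sorted from most to least urgent."""
    return sorted(messages, key=triage_score, reverse=True)
```

At 10,000 messages during a Search and Rescue phase, even this kind of first-pass ranking changes what volunteers can realistically map in time.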


Stranger than Fiction: A Few Words About An Ethical Compass for Crisis Mapping

The good people at the Sudan Sentinel Project (SSP), housed at my former “alma mater,” the Harvard Humanitarian Initiative (HHI), have recently written this curious piece on crisis mapping and the need for an “ethical compass” in this new field. They made absolutely sure that I’d read the piece by directly messaging me via the @CrisisMappers Twitter feed. Not to worry, good people, I read your masterpiece. Interestingly enough, it was published the day after my blog post reviewing IOM’s data protection standards.

To be honest, I was actually not going to spend any time writing up a response because the piece says absolutely nothing new and is hardly pro-active. Now, before anyone spins and twists my words: the issues they raise are of paramount importance. But if the authors had actually taken the time to speak with their fellow colleagues at HHI, they would know that several of us participated in a brilliant workshop last year which addressed these very issues. Organized by World Vision, the workshop included representatives from the International Committee of the Red Cross (ICRC), Care International, Oxfam GB, UN OCHA, the UN Foundation, the Standby Volunteer Task Force (SBTF), Ushahidi, the Harvard Humanitarian Initiative (HHI) and, obviously, World Vision itself. There were several data protection experts at this workshop, which made the event one of the most important workshops I attended in all of 2011. So a big thanks again to Phoebe Wynn-Pope at World Vision for organizing.

We discussed in-depth issues surrounding Do No Harm, Informed Consent, Verification, Risk Mitigation, Ownership, Ethics and Communication, Impartiality, etc. As expected, the outcome of the workshop was the clear need for data protection standards that are applicable to the new digital context we operate in, i.e., a world of social media, crowdsourcing and volunteer geographical information. Our colleagues at the ICRC have since taken the lead on drafting protocols relevant to a data 2.0 world in which volunteer networks and disaster-affected communities are increasingly digital. We expect to review this latest draft in the coming weeks (after Oxfam GB has added their comments to the document). Incidentally, the summary report of the workshop organized by World Vision is available here (PDF) and highly recommended. It was also shared on the Crisis Mappers Google Group. By the way, my conversations with Phoebe about these and related issues began at this conference in November 2010, just a month after the SBTF launched.

I should confess the following: one of my personal pet peeves has to do with people stating the total obvious and calling for action but actually doing absolutely nothing else. Talk for talk’s sake just makes it seem like the authors of the article are simply looking for attention. Meanwhile, many of us are working on these new data protection challenges in our own time, as volunteers. And by the way, the SSP project is first and foremost focused on satellite imagery analysis and the Sudan, not on crowdsourcing or on social media. So they’re writing their piece as outsiders and, well, are hence less informed as a result—particularly since they didn’t do their homework.

Their limited knowledge of crisis mapping is blatantly obvious throughout the article. Not only do the authors not reference the World Vision workshop, which HHI itself attended, they also seem rather confused about the term “crisis mappers” which they keep using. This is somewhat unfortunate since the Crisis Mappers Network is an offshoot of HHI. Moreover, SSP participated and spoke at last year’s Crisis Mappers Conference—just a few months ago, in fact. One outcome of this conference was the launch of a dedicated Working Group on Security and Privacy, which will now become two groups, one addressing security issues and the other data protection. This information was shared on the Crisis Mappers Google Group and one of the authors is actually part of the Security Working Group.

To this end, one would have hoped, and indeed expected, that the authors would write a somewhat more informed piece about these issues. At the very least, they really ought to have documented some of the efforts to date in this innovative space. But they didn’t and unfortunately several statements they make in their article are, well… completely false and rather revealing at the same time. (Incidentally, the good people at SSP did their best to disuade the SBTF from launching a Satellite Team on the premise that only experts are qualified to tag satellite imagery; seems like they’re not interested in citizen science even though some experts I’ve spoken to have referred to SSP as citizen science).

In any case, the authors keep on referring to “crisis mappers this” and “crisis mappers that” throughout their article. But who exactly are they referring to? Who knows. On the one hand, there is the International Network of Crisis Mappers, which is a loose, decentralized, and informal network of some 3,500 members and 1,500 organizations spanning 150+ countries. Then there’s the Standby Volunteer Task Force (SBTF), a distributed, global network of 750+ volunteers who partner with established organizations to support live mapping efforts. And then, easily the largest and most decentralized “group” of all, are all those “anonymous” individuals around the world who launch their own maps using whatever technologies they wish and for whatever purposes they want. By the way, defining crisis mapping as the mapping of highly volatile and dangerous conflict situations is far from accurate either. And equating crisis mapping with crowdsourcing, as the authors seem to do, is further evidence that they are writing about a subject of which they have very little understanding. Crisis mapping is possible without crowdsourcing or social media. Who knew?

Clearly, the authors are confused. They appear to refer to “crisis mappers” as if the group were a legal entity, with funding, staff, administrative support and brick-and-mortar offices. Furthermore, what the authors don’t seem to realize is that much of what they write is actually true of the formal professional humanitarian sector vis-a-vis the need for new data protection standards. But the authors have obviously not done their homework, and again, this shows. They are also confused about the term “crisis mapping” when they refer to “crisis mapping data,” which is actually nothing other than geo-referenced data. Finally, a number of paragraphs in the article have absolutely nothing to do with crisis mapping even though the authors seem to insinuate otherwise. And some of the sensationalism that permeates the article is simply unnecessary and in poor taste.

The fact of the matter is that the field of crisis mapping is maturing. When Dr. Jennifer Leaning and I co-founded and co-directed HHI’s Program on Crisis Mapping and Early Warning from 2007-2009, the project was very much an exploratory, applied-research program. When Dr. Jen Ziemke and I launched the Crisis Mappers Network in 2009, we were just at the beginning of a new experiment. The field has come a long way since, and one consequence of rapid innovation is obviously the lack of any how-to guide or manual. These certainly need to be written and are being written.

So, instead of stating the obvious, repeating the obvious, calling for the obvious and making embarrassing factual errors in a public article (which, by the way, is also quite revealing of the underlying motives), perhaps the authors could actually have done some research and emailed the Crisis Mappers Google Group. Two of the authors also have my email address; one even has my private phone number; oh, and they could also have DM’d me on Twitter like they just did.

On Crowdsourcing, Crisis Mapping and Data Protection Standards

The International Organization for Migration (IOM) just published their official Data Protection Manual. This report is hugely informative and should be required reading. At the same time, the 150-page report does not mention social media even once. This is perfectly understandable given IOM’s work, but there is no denying that disaster-affected communities are becoming more digitally-enabled—and thus increasingly the source of important, user-generated information. Moreover, it is difficult to ascertain exactly how to apply all of IOM’s Data Protection Principles to this new digital context and the work of the Standby Volunteer Task Force (SBTF).

The IOM Manual recommends that a risk-benefit assessment be conducted prior to data collection. This means weighing the probability of harm against the anticipated benefits and ensuring that the latter significantly outweigh the potential risks. But IOM explains that “the risk–benefit assessment is not a technical evaluation that is valid under all circumstances. Rather, it is a value judgement that often depends on various factors, including, inter alia, the prevailing social, cultural and religious attitudes of the target population group or individual data subject.”

The Manual also states that data collectors should always put themselves in the shoes of the data subject and consider: “How would a reasonable person, in the position of data subject, react to the data collection and data processing practices?” Again, this is a value judgment rather than a technical evaluation. Applying this consistently across IOM will no doubt be a challenge.

The IOM Principles, which form the core of the manual, are as follows (keep in mind that they are written with IOM’s mandate explicitly in view):

1. Lawful & Fair Collection
2. Specified and Legitimate Purpose
3. Data Quality
4. Consent
5. Transfer to Third Parties
6. Confidentiality
7. Access and Transparency
8. Data Security
9. Retention of Personal Data
10. Application of the Principles
11. Ownership of Personal Data
12. Oversight, Compliance & Internal Remedies
13. Exceptions

Take the first principle, which states that “Personal data must be obtained by lawful and fair means with the knowledge or consent of the data subject.” What does this mean when the data is self-generated and voluntarily placed in the public domain? This question also applies to a number of other principles including “Consent” and “Confidentiality”. In the section on “Consent”, the manual lists various ways that consent can be acquired. Perhaps the most apropos to our discussion is “Implicit Consent: no oral declaration or written statement is obtained, but the action or inaction of the data subjects unequivocally indicates voluntary participation in the IOM project.”

Indeed, during the Ushahidi-Haiti Crisis Mapping Project (UHP), a renowned professor and lawyer at The Fletcher School of Law and Diplomacy was consulted to determine whether or not text messages from the disaster-affected community could be added to a public map. This professor stated there was “Implicit Consent” to map these text messages. (Incidentally, experts at Harvard’s Berkman Center were also consulted on this question at the time).

The first IOM principle further stipulates that “communication with data subjects should be encouraged at all stages of the data collection process.” But what if this communication poses a danger to the data subject? The manual further states that “Personal data should be collected in a safe and secure environment and data controllers should take all necessary steps to ensure that individual vulnerabilities and potential risks are not enhanced.” What if data subjects are not in a safe and secure environment but nevertheless voluntarily share potentially important information on social media channels?

Perhaps the only guidance provided by IOM on this question is as follows: “Data controllers should choose the most appropriate method of data collection that will enhance efficiency and protect the confidentiality of the personal data collected.” But again, what if the data subject has already volunteered information containing their personal data and placed this information in the public domain?

The third principle, “Data Quality,” is obviously key, but the steps provided to ensure accuracy are difficult to translate to the context of crowdsourced information from the social media space. The same is true of several other IOM Data Protection Principles. But some are certainly applicable with modification. Take the seventh principle, “Access and Transparency,” which recommends that complaint procedures be relatively straightforward so that data subjects can easily request to rectify or delete content previously collected from them.

“Data Security”, the eighth principle, is also directly applicable. For example, data from social media could be classified according to the appropriate level of sensitivity and treated accordingly. During the response to the Haiti earthquake, for example, we kept new information on the location of orphans confidential, sharing this only with trusted colleagues in the humanitarian community. “Separating personal data from non-personal data” is another procedure that can be (and has been) used in crisis mapping projects. This is for me an absolutely crucial point. Depending on the situation, we need to separate information management systems that contain data with personal identifiers from crisis mapping platforms. Obviously, the former need to be more secure. Encryption is also proposed for data security and is applicable to crisis mapping.
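To make the idea of separating personal data from mappable data concrete, here is a minimal sketch in Python. The field names and the random reference ID are entirely hypothetical illustrations, not the actual schema of the SBTF or any crisis mapping platform; the point is simply that identifiers never touch the public record:

```python
import secrets

# Hypothetical field names for illustration only -- not an actual SBTF schema.
SENSITIVE_FIELDS = {"name", "phone", "email"}

def split_report(report):
    """Split a raw report into (public_record, private_record).

    The public record is safe to place on a crisis map; the private
    record holds personal identifiers and belongs in a separate, more
    secure system. A random reference ID links the two when needed.
    """
    ref_id = secrets.token_hex(8)
    private = {k: v for k, v in report.items() if k in SENSITIVE_FIELDS}
    public = {k: v for k, v in report.items() if k not in SENSITIVE_FIELDS}
    public["ref_id"] = ref_id
    private["ref_id"] = ref_id
    return public, private

# Example: only the location and message text end up in the public record.
public, private = split_report({
    "name": "A. Person",
    "phone": "+509 5555 0100",
    "location": "Port-au-Prince",
    "text": "Water urgently needed",
})
assert "name" not in public and "phone" not in public
```

The design choice here mirrors the point above: the map-facing store and the identifier store can then be secured (and encrypted) independently, with the more sensitive store held to a higher standard.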

The tenth IOM principle, i.e., “The Application of the Principles”, provides additional guidance on how to implement data protection and security. For example, the manual describes three appropriate methods for depersonalizing data: data-coding; pseudonymization; and anonymization. Each of these could be applied to crisis mapping projects.
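As a rough illustration of what pseudonymization might look like in practice, here is a short Python sketch using a keyed hash (HMAC-SHA256). The key, the sample phone number, and the 16-character truncation are my own illustrative choices, not anything prescribed by the IOM manual; the point is simply that the same identifier always maps to the same token, so records remain linkable, while the original value cannot be read back without the key:

```python
import hashlib
import hmac

# Illustrative key only: in practice the key would be generated randomly,
# stored securely, and never published alongside the pseudonymized data.
SECRET_KEY = b"replace-with-a-securely-stored-secret"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (e.g. a phone number) with a stable
    pseudonym derived via HMAC-SHA256."""
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncated for readability

# The same input always yields the same pseudonym; different inputs differ.
assert pseudonymize("+509 5555 0100") == pseudonymize("+509 5555 0100")
assert pseudonymize("+509 5555 0100") != pseudonymize("+509 5555 0101")
```

Full anonymization would go a step further and discard the key entirely, severing the link back to the individual for good.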

To conclude, the IOM Data Protection Manual is an important contribution and some of the principles described therein can be applied to crowdsourcing and crisis mapping. I look forward to folding these into the workflows and standard operating procedures of the SBTF (with guidance from the SBTF’s Advisory Board and other experts). There still remains a gap, however, vis-a-vis those IOM principles that are not easily customizable for the context in which the SBTF operates. There is also an issue vis-a-vis the Terms of Service of many social media platforms with respect to privacy and data protection standards.

This explains why I am actively collaborating with a major humanitarian organization to explore the development of appropriate data protection standards for crowdsourcing crisis information in the context of social media. Many humanitarian organizations are struggling with these exact same issues. Yes, these organizations have long had data privacy and protection protocols in place but these were designed for a world devoid of social media. One major social media company is also looking to revisit its terms of service agreements given the increasing relevance of their platform in humanitarian response. The challenge, for all, will be to strike the right balance between innovation and regulation.