Category Archives: Satellite Imagery

A Research Framework for Next Generation Humanitarian Technology and Innovation

Humanitarian donors and organizations are increasingly championing innovation and the use of new technologies for humanitarian response. DfID, for example, is committed to using “innovative techniques and technologies more routinely in humanitarian response” (2011). In a more recent strategy paper, DfID confirmed that it would “continue to invest in new technologies” (2012). ALNAP’s important report on “The State of the Humanitarian System” documents the shift towards greater innovation, “with new funds and mechanisms designed to study and support innovation in humanitarian programming” (2012). A forthcoming landmark study by OCHA makes the strongest case yet for the use and early adoption of new technologies for humanitarian response (2013).


These strategic policy documents are game-changers and pivotal to ushering in the next wave of humanitarian technology and innovation. That said, the reports are limited by the very fact that the authors are humanitarian professionals and thus not necessarily familiar with the field of advanced computing. The purpose of this post is therefore to set out a more detailed research framework for next generation humanitarian technology and innovation—one with a strong focus on information systems for crisis response and management.

In 2010, I wrote this piece on “The Humanitarian-Technology Divide and What To Do About It.” This divide became increasingly clear to me when I co-founded and co-directed the Harvard Humanitarian Initiative’s (HHI) Program on Crisis Mapping & Early Warning (2007-2009). So I co-founded the annual International CrisisMappers Conference series in 2009 and have continued to co-organize this unique, cross-disciplinary forum on humanitarian technology. The CrisisMappers Network also plays an important role in bridging the humanitarian and technology divide. My decision to join Ushahidi as Director of Crisis Mapping (2009-2012) was a strategic move to continue bridging the divide—and to do so from the technology side this time.

The same is true of my move to the Qatar Computing Research Institute (QCRI) at the Qatar Foundation. My experience at Ushahidi made me realize that serious expertise in Data Science is required to tackle the major challenges appearing on the horizon of humanitarian technology. Indeed, the key words missing from the DfID, ALNAP and OCHA innovation reports include: Data Science, Big Data Analytics, Artificial Intelligence, Machine Learning, Machine Translation and Human Computing. This current divide between the humanitarian and data science space needs to be bridged, which is precisely why I joined the Qatar Computing Research Institute as Director of Innovation: to develop and prototype the next generation of humanitarian technologies by working directly with experts in Data Science and Advanced Computing.


My efforts to bridge these communities also explain why I am co-organizing this year’s Workshop on “Social Web for Disaster Management” at the 2013 World Wide Web conference (WWW13). The WWW event series is one of the most prestigious conferences in the field of Advanced Computing. I have found that experts in this field are very interested and highly motivated to work on humanitarian technology challenges and crisis computing problems. As one of them recently told me: “We simply don’t know what projects or questions to prioritize or work on. We want questions, preferably hard questions, please!”

Yet the humanitarian innovation and technology reports cited above overlook the field of advanced computing. Their policy recommendations vis-a-vis future information systems for crisis response and management are vague at best. One of the major challenges that the humanitarian sector faces is the rise of Big (Crisis) Data. I have already discussed this here, here and here, for example. The humanitarian community is woefully unprepared to deal with this tidal wave of user-generated crisis information. There are already more mobile phone subscriptions than people in 100+ countries. And fully 50% of the world’s population in developing countries will be using the Internet within the next 20 months—the current figure is 24%. Meanwhile, close to 250 million people were affected by disasters in 2010 alone. Since then, the number of new mobile phone subscriptions has increased by well over one billion, which means that disaster-affected communities today are increasingly likely to be digital communities as well.

In the Philippines, a country highly prone to “natural” disasters, 92% of Filipinos who access the web use Facebook. In early 2012, Filipinos sent an average of 2 billion text messages every day. When disaster strikes, some of these messages will contain information critical for situational awareness & rapid needs assessment. The innovation reports by DfID, ALNAP and OCHA emphasize time and time again that listening to local communities is a humanitarian imperative. As DfID notes, “there is a strong need to systematically involve beneficiaries in the collection and use of data to inform decision making. Currently the people directly affected by crises do not routinely have a voice, which makes it difficult for their needs to be effectively addressed” (2012). But how exactly should we listen to millions of voices at once, let alone manage, verify and respond to these voices with potentially life-saving information? Over 20 million tweets were posted during Hurricane Sandy. In Japan, over half-a-million new users joined Twitter the day after the 2011 Earthquake. More than 177 million tweets about the disaster were posted that same day, i.e., 2,000 tweets per second on average.


Of course, the volume and velocity of crisis information will vary from country to country and disaster to disaster. But the majority of humanitarian organizations do not have the technologies in place to handle smaller tidal waves either. Take the case of the recent Typhoon in the Philippines, for example. OCHA activated the Digital Humanitarian Network (DHN) and asked it to carry out a rapid damage assessment by analyzing the 20,000 tweets posted during the first 48 hours of Typhoon Pablo. In fact, one of the main reasons digital volunteer networks like the DHN and the Standby Volunteer Task Force (SBTF) exist is to provide humanitarian organizations with this kind of skilled surge capacity. But analyzing 20,000 tweets in 12 hours (mostly manually) is one thing; analyzing 20 million requires more than a few hundred dedicated volunteers. What’s more, we do not have the luxury of having months to carry out this analysis. Access to information is as important as access to food; and like food, information has a sell-by date.

We clearly need a research agenda to guide the development of next generation humanitarian technology. One such framework is proposed here. The Big (Crisis) Data challenge is composed of (at least) two major problems: (1) finding the needle in the haystack; (2) assessing the accuracy of that needle. In other words, identifying the signal in the noise and determining whether that signal is accurate. Both of these challenges are exacerbated by serious time constraints. There are (at least) two ways to manage the Big Data challenge in real or near real-time: Human Computing and Artificial Intelligence. We know about these solutions because they have already been developed and used by other sectors and disciplines for several years now. In other words, our information problems are hardly as unique as we might think. Hence the importance of bridging the humanitarian and data science communities.

In sum, the Big Crisis Data challenge can be addressed using Human Computing (HC) and/or Artificial Intelligence (AI). Human Computing includes crowdsourcing and microtasking. AI includes natural language processing and machine learning. A framework for next generation humanitarian technology and innovation must thus promote Research and Development (R&D) that applies these methodologies to humanitarian response. For example, Verily is a project that leverages HC for the verification of crowdsourced social media content generated during crises. In contrast, this here is an example of an AI approach to verification. The Standby Volunteer Task Force (SBTF) has used HC (microtasking) to analyze satellite imagery (Big Data) for humanitarian response. Another novel HC approach to managing Big Data is the use of gaming, something called Playsourcing. AI for Disaster Response (AIDR) is an example of AI applied to humanitarian response. In many ways, though, AIDR combines AI with Human Computing, as does MatchApp. Such hybrid solutions should also be promoted as part of the R&D framework on next generation humanitarian technology.
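To make the AI side of this concrete, here is a minimal, illustrative sketch of the kind of supervised classifier that might sit at the core of a system like AIDR: a handful of labeled tweets train a model that then scores the incoming stream for relevance. This is not AIDR’s actual code; the library (scikit-learn), the toy training examples and the probability threshold are all my own assumptions, offered purely for illustration.

```python
# Illustrative sketch only: a toy supervised classifier for filtering
# crisis-relevant tweets (the "needle in the haystack" problem).
# Assumptions: scikit-learn is available; the labeled examples below
# are invented placeholders, not real training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: 1 = relevant to disaster response, 0 = not.
tweets = [
    "Bridge collapsed on the main road, families trapped",
    "We urgently need clean water and shelter in the north district",
    "Power lines down across the river, no electricity since last night",
    "Great concert tonight, what an amazing crowd!",
    "Just had the best coffee of my life",
    "Watching the game with friends this evening",
]
labels = [1, 1, 1, 0, 0, 0]

# Bag-of-words features plus a linear classifier, trained on the labeled tweets.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tweets, labels)

# Score new, unlabeled messages and surface the likely-relevant ones for
# human review (the Human Computing step in a hybrid HC + AI workflow).
incoming = [
    "Roof torn off the school, children evacuated to the church",
    "Loving this new playlist",
]
for text, prob in zip(incoming, model.predict_proba(incoming)[:, 1]):
    print(f"{prob:.2f}  {text}")
```

In a hybrid workflow, only the messages the model is unsure about (or confident are relevant) would be routed to volunteers, which is what keeps the human workload manageable at Big Data volumes.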

There is of course more to humanitarian technology than information management alone. Related is the topic of Data Visualization, for example. There are also exciting innovations and developments in the use of drones or Unmanned Aerial Vehicles (UAVs), meshed mobile communication networks, hyper low-cost satellites, etc. I am particularly interested in each of these areas and will continue to blog about them. In the meantime, I very much welcome feedback on this post’s proposed research framework for humanitarian technology and innovation.


Crowdsourcing the Evaluation of Post-Sandy Building Damage Using Aerial Imagery

Update (Nov 2): 5,739 aerial images tagged by over 3,000 volunteers. Please keep up the outstanding work!

My colleague Schuyler Erle from Humanitarian OpenStreetMap just launched a very interesting effort in response to Hurricane Sandy. He shared the info below via CrisisMappers earlier this morning, which I’m turning into this blog post to help him recruit more volunteers.

Schuyler and team just got their hands on the Civil Air Patrol’s (CAP) super high resolution aerial imagery of the disaster affected areas. They’ve imported this imagery into their Micro-Tasking Server MapMill created by Jeff Warren and are now asking volunteers to help tag the images in terms of the damage depicted in each photo. “The 531 images on the site were taken from the air by CAP over New York, New Jersey, Rhode Island, and Massachusetts on 31 Oct 2012.”

To access this platform, simply click here: http://sandy.hotosm.org. If that link doesn’t work, please try sandy.locative.us.

“For each photo shown, please select ‘ok’ if no building or infrastructure damage is evident; please select ‘not ok’ if some damage or flooding is evident; and please select ‘bad’ if buildings etc. seem to be significantly damaged or underwater. Our *hope* is that the aggregation of the ok/not ok/bad ratings can be used to help guide FEMA resource deployment, or so was indicated might be the case during RELIEF at Camp Roberts this summer.”

A disaster response professional working in the affected areas for FEMA replied (via CrisisMappers) to Schuyler’s efforts to confirm that:

“[G]overnment agencies are working on exploiting satellite imagery for damage assessments and flood extents. The best way that you can help is to help categorize photos using the tool Schuyler provides [...].  CAP imagery is critical to our decision making as they are able to work around some of the limitations with satellite imagery so that we can get an area of where the worst damage is. Due to the size of this event there is an overwhelming amount of imagery coming in, your assistance will be greatly appreciated and truly aid in response efforts.  Thank you all for your willingness to help.”

Schuyler notes that volunteers can click on the Grid link from the home page of the Micro-Tasking platform to “zoom in to the coastlines of Massachusetts or New Jersey” and see “judgements about building damages beginning to aggregate in US National Grid cells, which is what FEMA use operationally. Again, the idea and intention is that, as volunteers judge the level of damage evident in each photo, the heat map will change color and indicate at a glance where the worst damage has occurred.” See above screenshot.
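For readers curious how individual photo judgements become a grid heat map, here is a minimal sketch of one plausible aggregation scheme: each image’s “ok” / “not ok” / “bad” votes are converted to a score and averaged per US National Grid cell. The 0/1/2 weights and the example cell IDs are my own assumptions for illustration, not MapMill’s actual logic.

```python
# Sketch of aggregating crowd ratings into per-grid-cell damage scores.
# Assumptions: the 0/1/2 weights and the example USNG cell IDs are
# invented for illustration; MapMill's real aggregation may differ.
from collections import defaultdict
from statistics import mean

# Each record: (US National Grid cell of the photo, volunteer rating).
ratings = [
    ("18TWL8040", "ok"),      # no visible damage
    ("18TWL8040", "not ok"),  # some damage or flooding
    ("18TWL8041", "bad"),     # severe damage or underwater
    ("18TWL8041", "bad"),
    ("18TWL8041", "not ok"),
]

weights = {"ok": 0, "not ok": 1, "bad": 2}

by_cell = defaultdict(list)
for cell, rating in ratings:
    by_cell[cell].append(weights[rating])

# The average score per cell drives the heat-map colour (higher = worse).
for cell, scores in sorted(by_cell.items()):
    print(cell, round(mean(scores), 2), f"({len(scores)} votes)")
```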

Even if you just spend 5 or 10 minutes tagging the imagery, this will still go a long way to supporting FEMA’s response efforts. You can also help by spreading the word and recruiting others to the cause. Thank you!

The Best Way to Crowdsource Satellite Imagery Analysis for Disaster Response

My colleague Kirk Morris recently pointed me to this very neat study on iterative versus parallel models of crowdsourcing for the analysis of satellite imagery. The study was carried out by French researcher & engineer Nicolas Maisonneuve for the upcoming GIScience 2012 conference.

Nicolas finds that after reaching a certain threshold, adding more volunteers to the parallel model does “not change the representativeness of opinion and thus will not change the consensual output.” His analysis also shows that the value of this threshold has a significant impact on the resulting quality of the parallel work and thus should be chosen carefully. In terms of the iterative approach, Nicolas finds that “the first iterations have a high impact on the final results due to a path dependency effect.” To this end, “stronger commitment during the first steps are thus a primary concern for using such model,” which means that “asking expert/committed users to start” is important.

Nicolas’s study also reveals that the parallel approach is better able to correct wrong annotations (wrong analysis of the satellite imagery) than the iterative model for images that are fairly straightforward to interpret. In contrast, the iterative model is better suited for handling more ambiguous imagery. But there is a catch: the potential path dependency effect in the iterative model means that “mistakes could be propagated, generating more easily type I errors as the iterations proceed.” In terms of spatial coverage, the iterative model is more efficient since the parallel model leverages redundancy to ensure data quality. Still, Nicolas concludes that the “parallel model provides an output which is more reliable than that of a basic iterative [because] the latter is sensitive to vandalism or knowledge destruction.”

So the question that naturally follows is this: how can parallel and iterative methodologies be combined to produce a better overall result? Perhaps the parallel approach could be used as the default to begin with. However, images that are considered difficult to interpret would get pushed from the parallel workflow to the iterative workflow. The latter would first be processed by experts in order to create favorable path dependency. Could this hybrid approach be the winning strategy?
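One way to express that hybrid idea in rough operational terms: run parallel voting first, accept labels where volunteer agreement is high, and escalate ambiguous images to an expert-seeded iterative queue. The agreement threshold and the example labels below are my own assumptions, offered purely as a sketch of the workflow.

```python
# Sketch of the proposed hybrid workflow: parallel voting by default,
# escalation to an expert-seeded iterative queue for ambiguous images.
# The 0.8 agreement threshold is an arbitrary assumption for illustration.
from collections import Counter

def triage(votes, agreement_threshold=0.8):
    """Return (decision, label_or_none) for one image's parallel votes."""
    counts = Counter(votes)
    label, top = counts.most_common(1)[0]
    agreement = top / len(votes)
    if agreement >= agreement_threshold:
        return "accept", label           # consensus is strong enough
    return "escalate", None              # ambiguous: send to iterative queue

# Example: one clear consensus and one ambiguous image.
parallel_votes = {
    "img_001": ["damaged", "damaged", "damaged", "damaged", "intact"],
    "img_002": ["damaged", "intact", "intact", "damaged", "flooded"],
}

iterative_queue = []  # experts annotate these first, seeding a good path dependency
for image_id, votes in parallel_votes.items():
    decision, label = triage(votes)
    if decision == "accept":
        print(image_id, "->", label)
    else:
        iterative_queue.append(image_id)

print("for expert-first iterative review:", iterative_queue)
```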

How Can Innovative Technology Make Conflict Prevention More Effective?

I’ve been asked to participate in an expert working group in support of a research project launched by the International Peace Institute (IPI) on new technologies for conflict prevention. Both UNDP and USAID are also partners in this effort. To this end, I’ve been invited to make some introductory remarks during our upcoming working group meeting. The purpose of this blog post is to share my preliminary thoughts on this research and provide some initial suggestions.

Before I launch into said thoughts, some context may be in order. I spent several years studying, launching and improving conflict early warning systems for violence prevention. While I haven’t recently blogged about conflict prevention on iRevolution, you’ll find my writings on this topic posted on my other blog, Conflict Early Warning. I have also published and presented several papers on conflict prevention, most of which are available here. The most relevant ones include the following:

  • Meier, Patrick. 2011. Early Warning Systems and the Prevention of Violent Conflict. In Peacebuilding in the Information Age: Sifting Hype from Reality, ed. Daniel Stauffacher et al. Geneva: ICT4Peace. Available online.
  • Leaning, Jennifer and Patrick Meier. 2009. “The Untapped Potential of Information Communication Technology for Conflict Early Warning and Crisis Mapping,” Working Paper Series, Harvard Humanitarian Initiative (HHI), Harvard University. Available online.
  • Leaning, Jennifer and Patrick Meier. 2008. “Community Based Conflict Early Warning and Response Systems: Opportunities and Challenges.” Working Paper Series, Harvard Humanitarian Initiative (HHI), Harvard University. Available online.
  • Leaning, Jennifer and Patrick Meier. 2008. “Conflict Early Warning and Response: A Critical Reassessment.” Working Paper Series, Harvard Humanitarian Initiative (HHI), Harvard University. Available online.
  • Meier, Patrick. 2008. “Upgrading the Role of Information Communication Technology (ICT) for Tactical Early Warning/Response.” Paper prepared for the 49th Annual Convention of the International Studies Association (ISA) in San Francisco. Available online.
  • Meier, Patrick. 2007. “New Strategies for Effective Early Response: Insights from Complexity Science.” Paper prepared for the 48th Annual Convention of the International Studies Association (ISA) in Chicago. Available online.
  • Campbell, Susanna and Patrick Meier. 2007. “Deciding to Prevent Violent Conflict: Early Warning and Decision-Making at the United Nations.” Paper prepared for the 48th Annual Convention of the International Studies Association (ISA) in Chicago. Available online.
  • Meier, Patrick. 2007. From Disaster to Conflict Early Warning: A People-Centred Approach. Monday Developments 25, no. 4, 12-14. Available online.
  • Meier, Patrick. 2006. “Early Warning for Cowardly Lions: Response in Disaster & Conflict Early Warning Systems.” Unpublished academic paper, The Fletcher School. Available online.
  • I was also invited to be an official reviewer of this 100+ page workshop summary on “Communication and Technology for Violence Prevention” (PDF), which was just published by the National Academy of Sciences. In addition, I was an official referee for this important OECD report on “Preventing Violence, War and State Collapse: The Future of Conflict Early Warning and Response.”

An obvious first step for IPI’s research would be to identify the conceptual touchpoints between the individual functions or components of conflict early warning systems and information & communication technology (ICT). Using this conceptual framework put forward by ISDR would be a good place to start.

That said, colleagues at IPI should take care not to fall prey to technological determinism. The first order of business should be to understand exactly why previous (and existing) conflict early warning systems are complete failures—a topic I have written extensively about and been particularly vocal on since 2004. Throwing innovative technology at failed systems will not turn them into successful operations. Furthermore, IPI should also take note of the relatively new discourse on people-centered approaches to early warning and distinguish between first, second, third and fourth generation conflict early warning systems.

On this note, IPI ought to focus in particular on third and fourth generation systems vis-a-vis the role of innovative technology. Why? Because first and second generation systems are structured for failure due to constraints explained by organizational theory. They should thus explore the critical importance of conflict preparedness and the role that technology can play in this respect since preparedness is key to the success of third and fourth generation systems. In addition, IPI should consider the implications of crowdsourcing, crisis mapping, Big Data and satellite imagery, as well as the role that social media analytics might play in the early detection of and response to violent conflict. They should also take care not to ignore critical insights from the field of nonviolent civil resistance vis-a-vis preparedness and tactical approaches to community-based early response. Finally, they should take note of new and experimental initiatives in this space, such as PeaceTXT.

IPI plans to write up several case studies on conflict early warning systems to understand how innovative technology might make (or may already be making) these more effective. I would recommend focusing on specific systems in Kenya, Kyrgyzstan, Sri Lanka and Timor-Leste. Note that some community-based systems are too sensitive to make public, such as one in Burma for example. In terms of additional experts worth consulting, I would recommend David Nyheim, Joe Bock, Maria Stephan, Sanjana Hattotuwa, Scott Edwards and Casey Barrs. I would also shy away from inviting too many academics or technology companies. The former tend to focus too much on theory while the latter often have a singular focus on technology.

Many thanks to UNDP for including me in the team of experts. I look forward to the first working group meeting and reviewing IPI’s early drafts. In the meantime, if iRevolution readers have certain examples or questions they’d like me to relay to the working group, please do let me know via the comments section below and I’ll be sure to share.

Predicting the Future of Global Geospatial Information Management

The United Nations Committee of Experts on Global Geospatial Information Management (GGIM) recently organized a meeting of thought-leaders and visionaries in the geospatial world to identify the future of this space over the next 5-10 years. These experts came up with some 80+ individual predictions. I’ve included some of the more interesting ones below.

  • The use of Unmanned Aerial Vehicles (UAVs) as a tool for rapid geospatial data collection will increase.
  • 3D and even 4D geospatial information, incorporating time as the fourth dimension, will increase.
  • Technology will move faster than legal and governance structures.
  • The link between geospatial information and social media, plus other actor networks, will become more and more important.
  • Real-time info will enable more dynamic modeling & response to disasters.
  • Free and open source software will continue to grow as viable alternatives both in terms of software, and potentially in analysis and processing.
  • Geospatial computation will increasingly be non-human consumable in nature, with an increase in fully-automated decision systems.
  • Businesses and Governments will increasingly invest in tools and resources to manage Big Data. The technologies required for this will enable greater use of raw data feeds from sensors and other sources of data.
  • In ten years time it is likely that all smart phones will be able to film 360 degree 3D video at incredibly high resolution by today’s standards & wirelessly stream it in real time.
  • There will be a need for geospatial use governance in order to discern the real world from the virtual/modelled world in a 3D geospatial environment.
  • Free and open access to data will become the norm and geospatial information will increasingly be seen as an essential public good.
  • Funding models to ensure full data coverage even in non-profitable areas will continue to be a challenge.
  • Rapid growth will lead to confusion and lack of clarity over data ownership, distribution rights, liabilities and other aspects.
  • In ten years, there will be a clear dividing line between winning and losing nations, dependent upon whether the appropriate legal and policy frameworks have been developed that enable a location-enabled society to flourish.
  • Some governments will use geospatial technology as a means to monitor or restrict the movements and personal interactions of their citizens. Individuals in these countries may be unwilling to use LBS or applications that require location for fear of this information being shared with authorities.
  • The deployment of sensors and the broader use of geospatial data within society will force public policy and law to move into a direction to protect the interests and rights of the people.
  • Spatial literacy will not be about learning GIS in schools but will be more centered on increasing spatial awareness and an understanding of the value of understanding place as context.
  • The role of National Mapping Agencies as an authoritative supplier of high quality data and of arbitrator of other geospatial data sources will continue to be crucial.
  • Monopolies held by National Mapping Agencies in some areas of specialized spatial data will be eroded completely.
  • More activities carried out by National Mapping Agencies will be outsourced and crowdsourced.
  • Crowdsourced data will push National Mapping Agencies towards niche markets.
  • National Mapping Agencies will be required to find new business models to provide simplified licenses and meet the demands for more free data from mapping agencies.
  • The integration of crowdsourced data with government data will increase over the next 5 to 10 years.
  • Crowdsourced content will decrease cost, improve accuracy and increase availability of rich geospatial information.
  • Imagery will increasingly be combined with crowdsourced data to create datasets that could not have been created affordably on their own.
  • Progress will be made on bridging the gap between authoritative data and crowdsourced data, moving towards true collaboration.
  • There will be an accelerated take-up of Volunteer Geographic Information over the next five years.
  • Within five years the level of detail on transport systems within OpenStreetMap will exceed virtually all other data sources & will be respected/used by major organisations & governments across the globe.
  • Community-based mapping will continue to grow.
  • There is unlikely to be a market for datasets like those currently sold to power navigation and location-based services solutions in 5 years, as they will have been superseded by crowdsourced datasets from OpenStreetMap or other comparable initiatives.

Which trends have the experts missed? Do you think they’re completely off on any of the above? The full set of predictions on the future of global geospatial information management is available here as a PDF.

Imagery and Humanitarian Assistance: Gems, Errors and Omissions

The Center for Technology and National Security Policy based at National Defense University’s Institute for National Strategic Studies just published an 88-page report entitled “Constructive Convergence: Imagery and Humanitarian Assistance.” As noted by the author, “the goal of this paper is to illustrate to the technical community and interested humanitarian users the breadth of the tools and techniques now available for imagery collection, analysis, and distribution, and to provide brief recommendations with suggestions for next steps.” In addition, the report “presents a brief overview of the growing power of imagery, especially from volunteers and victims in disasters, and its place in emergency response. It also highlights an increasing technical convergence between professional and volunteer responders—and its limits.”

The study contains a number of really interesting gems, just a few errors and some surprising omissions. The point of this blog post is not to criticize but rather to provide constructive (and hopefully useful) feedback should the report be updated in the future.

Let’s begin with the important gems, excerpted below.

“The most serious issues overlooked involve liability protections by both the publishers and sources of imagery and its data. As far as our research shows there is no universally adopted Good Samaritan law that can protect volunteers who translate emergency help messages, map them, and distribute that map to response teams in the field.”

Whether a Good Samaritan law could ever realistically be universally adopted remains to be seen, but the point is that all of the official humanitarian data protection standards that I’ve reviewed thus far simply don’t take into account the rise of new digitally-empowered global volunteer networks (let alone the existence of social media). The good news is that some colleagues and I are working with the International Committee of the Red Cross (ICRC) and a consortium of major humanitarian organizations to update existing data protection protocols to take some of these new factors into account. This new document will hopefully be made publicly available in October 2012.

“Mobile devices such as tablets and mobile phones are now the primary mode for both collecting and sharing information in a response effort. A January 2011 report published by the Mobile Computing Promotion Consortium of Japan surveyed users of smart phones. Of those who had smart phones, 55 percent used a map application, the third most common application after Web browsing and email.”

I find this absolutely fascinating and thus read the January 2011 report, which is where I found the graphic below.

“The rapid deployment of Cellular on Wheels [COW] is dramatically improving. The Alcatel-Lucent Light Radio is 300 grams (about 10 ounces) and stackable. It also consumes very little power, eliminating large generation and storage requirements. It is capable of operating by solar, wind and/or battery power. Each cube fits into the size of a human hand and is fully integrated with radio processing, antenna, transmission, and software management of frequency. The device can operate on multiple frequencies simultaneously and work with existing infrastructure.”

-

“In Haiti, USSOUTHCOM found imagery, digital open source maps, and websites that hosted them (such as Ushahidi and OpenStreetMap) to occasionally be of greater value than their own assets.”

-

“It is recommended that clearly defined and restricted use of specialized #hashtags be implemented using a common crisis taxonomy. For example:

#country + location + emergency code + supplemental data

The above example, if located in Washington, DC, U.S.A., would be published as:

#USAWashingtonDC911Trapped

The specialized use of #hashtags could be implemented in the same cultural manner as 911, 999, and other emergency phone number systems. Metadata using these tags would also be given priority when sent over the Internet through communication networks (landline, broadband Internet, or mobile text or data). Abuse of ratified emergency #hashtag’s would be a prosecutable offense. Implementing such as system could reduce the amount of data that crisis mappers and other response organizations need to monitor and improve the quality of data to be filtered. Other forms of #Hashtags syllabus can also be implemented such as:

#country + location + information code (411) + supplemental data
#country + location + water (H20) + supplemental data
#country + location + Fire (FD) + supplemental data”

I found this very interesting and relevant to this earlier blog post: “Calling 911: What Humanitarians Can Learn from 50 Years of Crowdsourcing.” Perhaps a reference to Tweak the Tweet would have been worthwhile.
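As a thought experiment, the taxonomy the report proposes is simple enough to parse mechanically. The sketch below assumes a three-letter country code and the fixed set of emergency codes mentioned in the excerpt (911, 411, H20, FD); both assumptions are mine, since the report does not pin down an exact format.

```python
# Sketch of parsing the report's proposed crisis hashtag taxonomy:
#   #country + location + emergency code + supplemental data
# Assumptions (mine, not the report's): a 3-letter country code and the
# fixed code set {911, 411, H20, FD} from the excerpt above.
import re

PATTERN = re.compile(
    r"^#(?P<country>[A-Z]{3})"
    r"(?P<location>.+?)"
    r"(?P<code>911|411|H20|FD)"
    r"(?P<supplement>.*)$"
)

def parse_crisis_tag(tag):
    """Return the tag's components as a dict, or None if it doesn't match."""
    match = PATTERN.match(tag)
    return match.groupdict() if match else None

print(parse_crisis_tag("#USAWashingtonDC911Trapped"))
# {'country': 'USA', 'location': 'WashingtonDC', 'code': '911',
#  'supplement': 'Trapped'}
```

A machine-readable convention like this is precisely what would let platforms prioritize or pre-filter emergency messages automatically, which is the point the report is driving at.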

I also had not come across some of the platforms used in response to the 2011 earthquake in New Zealand. But the report did an excellent job sharing these.

EQviewer.co.nz

Some errors that need correcting:

“Open source mapping tools such as Google Earth use imagery as a foundation for layering field data.”

Google Earth is not an open source tool.

“CrisisMappers.net, mentioned earlier, is a group of more than 1,600 volunteers that have been brought together by Patrick Meier and Jen Ziemke. It is the core of collaboration efforts that can be deployed anywhere in the world. CrisisMappers has established workshops and steering committees to set guidelines and standardize functions and capabilities for sites that deliver imagery and layered datasets. This group, which today consists of diverse and talented volunteers from all walks of life, might soon evolve into a professional volunteer organization of trusted capabilities and skill sets and they are worth watching.”

CrisisMappers is not a volunteer network or an organization that deploys in any formal sense of the word. The CrisisMappers website explains what the mission and purpose of this informal network is. The initiative has some 3,500 members.

-

“Figure 16. How Ushahidi’s Volunteer Standby Task Force was Structured for Libya. Ushahidi’s platform success stems from its use by organized volunteers, each with skill sets that extract data from multiple sources for publication.”

The Standby Volunteer Task Force (SBTF) does not belong to Ushahidi, nor is the SBTF an Ushahidi project. A link to the SBTF website would have been appropriate. Also, the majority of applications of the Ushahidi platform have nothing to do with crises, or the SBTF, or any other large volunteer networks. The SBTF’s original success stems from organized volunteers who were well versed in the Ushahidi platform.

“Ushahidi accepts KML and KMZ if there is an agreement and technical assistance resources are available. An end user cannot on their own manipulate a Ushahidi portal as an individual, nor can external third party groups unless that group has an arrangement with the principal operators of the site. This offers new collaboration going forward. The majority of Ushahidi disaster portals are operated by volunteer organizations and not government agencies.”

The first sentence is unclear. If someone sets up an Ushahidi platform and they have KML/KMZ files that they want to upload, they can go ahead and do so. An end-user can do some manipulation of an Ushahidi portal and can also pull the Ushahidi data into their own platform (via the GeoRSS feed, for example). Thanks to the ESRI-Ushahidi plugin, they can then perform a range of more advanced GIS analysis. In terms of volunteers vs. government agencies, indeed, it appears the former are leading the way vis-a-vis innovation.
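To illustrate the GeoRSS route mentioned above: a third party can consume a deployment’s public feed and extract report coordinates without any access to the admin panel. The feed URL below is a placeholder and the exact path varies by deployment and version, so treat this as a hedged sketch rather than a documented API call.

```python
# Sketch of pulling reports from an Ushahidi deployment's GeoRSS feed
# and extracting coordinates. The feed URL is a placeholder; the exact
# path depends on the deployment and Ushahidi version.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example-deployment.org/feed"  # placeholder, not a real deployment
GEORSS_NS = "{http://www.georss.org/georss}"

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

for item in tree.iter("item"):
    title = item.findtext("title", default="(untitled)")
    point = item.findtext(f"{GEORSS_NS}point")  # "lat lon" string, if present
    if point:
        lat, lon = (float(v) for v in point.split())
        print(f"{lat:.5f}, {lon:.5f}  {title}")
```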

Finally, below are some omissions and areas that I would have been very interested to learn more about. For some reason, the section on the Ushahidi deployment in New Zealand makes no reference to Ushahidi.

Staying on the topic of the earthquake in Christchurch, I was surprised to see no reference to the Tomnod deployment.

I had also hoped to read more about the use of drones (UAVs) in disaster response since these were used both in Haiti and Japan. What about the rise of DIY drones and balloon mapping? Finally, the report’s reference to Broadband Global Area Network (BGAN) doesn’t provide information on the range of costs associated with using BGANs in disasters.

In conclusion, the report is definitely an important contribution to the field of crisis mapping and should be required reading.

Stranger than Fiction: A Few Words About An Ethical Compass for Crisis Mapping

The good people at the Sudan Sentinel Project (SSP), housed at my former “alma mater,” the Harvard Humanitarian Initiative (HHI), have recently written this curious piece on crisis mapping and the need for an “ethical compass” in this new field. They made absolutely sure that I’d read the piece by directly messaging me via the @CrisisMappers Twitter feed. Not to worry, good people, I read your masterpiece. Interestingly enough, it was published the day after my blog post reviewing IOM’s data protection standards.

To be honest, I was actually not going to spend any time writing up a response because the piece says absolutely nothing new and is hardly pro-active. Now, before any one spins and twists my words: the issues they raise are of paramount importance. But if the authors had actually taken the time to speak with their fellow colleagues at HHI, they would know that several of us participated in a brilliant workshop last year which addressed these very issues. Organized by World Vision, the workshop included representatives from the International Committee of the Red Cross (ICRC), Care International, Oxfam GB, UN OCHA, UN Foundation, Standby Volunteer Task Force (SBTF), Ushahidi, the Harvard Humanitarian Initiative (HHI) and obviously World Vision. There were several data protection experts at this workshop, which made the event one of the most important workshops I attended in all of 2011. So a big thanks again to Phoebe Wynn-Pope at World Vision for organizing.

We discussed in-depth issues surrounding Do No Harm, Informed Consent, Verification, Risk Mitigation, Ownership, Ethics and Communication, Impartiality, etc. As expected, the outcome of the workshop was the clear need for data protection standards that are applicable for the new digital context we operate in, i.e., a world of social media, crowdsourcing and volunteer geographical information. Our colleagues at the ICRC have since taken the lead on drafting protocols relevant to a data 2.0 world in which volunteer networks and disaster-affected communities are increasingly digital. We expect to review this latest draft in the coming weeks (after Oxfam GB has added their comments to the document). Incidentally, the summary report of the workshop organized by World Vision is available here (PDF) and highly recommended. It was also shared on the Crisis Mappers Google Group. By the way, my conversations with Phoebe about these and related issues began at this conference in November 2010, just a month after the SBTF launched.

I should confess the following: one of my personal pet peeves has to do with people stating the total obvious and calling for action but actually doing absolutely nothing else. Talk for talk’s sake just makes it seem like the authors of the article are simply looking for attention. Meanwhile, many of us are working on these new data protection challenges in our own time, as volunteers. And by the way, the SSP project is first and foremost focused on satellite imagery analysis and the Sudan, not on crowdsourcing or on social media. So they’re writing their piece as outsiders and, well, are hence less informed as a result—particularly since they didn’t do their homework.

Their limited knowledge of crisis mapping is blatantly obvious throughout the article. Not only do the authors not reference the World Vision workshop, which HHI itself attended, they also seem rather confused about the term “crisis mappers” which they keep using. This is somewhat unfortunate since the Crisis Mappers Network is an offshoot of HHI. Moreover, SSP participated and spoke at last year’s Crisis Mappers Conference—just a few months ago, in fact. One outcome of this conference was the launch of a dedicated Working Group on Security and Privacy, which will now become two groups, one addressing security issues and the other data protection. This information was shared on the Crisis Mappers Google Group and one of the authors is actually part of the Security Working Group.

To this end, one would have hoped, and indeed expected, that the authors would write a somewhat more informed piece about these issues. At the very least, they really ought to have documented some of the efforts to date in this innovative space. But they didn’t and unfortunately several statements they make in their article are, well… completely false and rather revealing at the same time. (Incidentally, the good people at SSP did their best to dissuade the SBTF from launching a Satellite Team on the premise that only experts are qualified to tag satellite imagery; seems like they’re not interested in citizen science even though some experts I’ve spoken to have referred to SSP as citizen science).

In any case, the authors keep on referring to “crisis mappers this” and “crisis mappers that” throughout their article. But who exactly are they referring to? Who knows. On the one hand, there is the International Network of Crisis Mappers, which is a loose, decentralized, and informal network of some 3,500 members and 1,500 organizations spanning 150+ countries. Then there’s the Standby Volunteer Task Force (SBTF), a distributed, global network of 750+ volunteers who partner with established organizations to support live mapping efforts. And then, easily the largest and most decentralized “group” of all, are all those “anonymous” individuals around the world who launch their own maps using whatever technologies they wish and for whatever purposes they want. By the way, to define crisis mapping as mapping highly volatile and dangerous conflict situations is really far from being accurate either. Also, “equating” crisis mapping with crowdsourcing, which the authors seem to do, is further evidence that they are writing about a subject that they have very little understanding of. Crisis mapping is possible without crowdsourcing or social media. Who knew?

Clearly, the authors are confused. They appear to refer to “crisis mappers” as if the group were a legal entity, with funding, staff, administrative support and brick-and-mortar offices. Furthermore, and what the authors don’t seem to realize, is that much of what they write is actually true of the formal professional humanitarian sector vis-a-vis the need for new data protection standards. But the authors have obviously not done their homework, and again, this shows. They are also confused about the term “crisis mapping” when they refer to “crisis mapping data,” which is actually nothing other than geo-referenced data. Finally, a number of paragraphs in the article have absolutely nothing to do with crisis mapping even though the authors seem to insinuate otherwise. Also, some of the sensationalism that permeates the article is simply unnecessary and in poor taste.

The fact of the matter is that the field of crisis mapping is maturing. When Dr. Jennifer Leaning and I co-founded and co-directed HHI’s Program on Crisis Mapping and Early Warning from 2007-2009, the project was very much an exploratory, applied-research program. When Dr. Jen Ziemke and I launched the Crisis Mappers Network in 2009, we were just at the beginning of a new experiment. The field has come a long way since and one of the consequences of rapid innovation is obviously the lack of any how-to-guide or manual. These certainly need to be written and are being written.

So, instead of stating the obvious, repeating the obvious, calling for the obvious and making embarrassing factual errors in a public article (which, by the way, is also quite revealing of the underlying motives), perhaps the authors could actually have done some research and emailed the Crisis Mappers Google Group. Two of the authors also have my email address; one even has my private phone number; oh, and they could also have DM’d me on Twitter like they just did.