Tag Archives: ICT

Crisis Mapping in Areas of Limited Statehood

I had the great pleasure of contributing a chapter to this new book recently published by Oxford University Press: Bits and Atoms: Information and Communication Technology in Areas of Limited Statehood. My chapter addresses the application of crisis mapping to areas of limited statehood, drawing both on theory and hands-on experience. The short introduction to my chapter is provided below to help promote and disseminate the book.


Introduction

Crises often challenge or limit statehood and the delivery of government services. The concept of “limited statehood” thus allows for a more realistic description of the territorial and temporal variations of governance and service delivery. Total statehood, in any case, is mostly imagined—a cognitive frame or pre-structured worldview. In a sense, all states are “spatially challenged” in that the projection of their governance is hardly enforceable beyond a certain geographic area and period of time. But “limited statehood” does not imply the absence of governance or services. Rather, these may simply take on alternate forms, involving procedures that are non-institutional (see Chapter 1). Therein lies the tension vis-à-vis crises, since “the utopian, immanent, and continually frustrated goal of the modern state is to reduce the chaotic, disorderly, constantly changing social reality beneath it to something more closely resembling the administrative grid of its observations” (Scott 1998). Crises, by definition, publicly disrupt these orderly administrative constructs. They are brutal audits of governance structures, and the consequences can be lethal for state continuity. Recall the serious disaster response failures that occurred following the devastating cyclone of 1970 in East Pakistan.

To this day, Cyclone Bhola remains the deadliest cyclone on record, killing some 500,000 people. The lack of a timely and coordinated government response was one of the triggers for the war of independence that resulted in the creation of Bangladesh (Kelman 2007). While crises can challenge statehood, they also lead to collective, self-help behavior among disaster-affected communities—particularly in areas of limited statehood. Recently, this collective action—facilitated by new information and communication technologies—has swelled and resulted in the production of live crisis maps that identify the disaggregated, raw impact of a given crisis along with resulting needs for services typically provided by the government (see Chapter 7). These crisis maps are sub-national and are often crowdsourced in near real-time. They empirically reveal the limited contours of governance and reframe how power is both perceived and projected (see Chapter 8).

Indeed, while these live maps outline the hollows of governance during times of upheaval, they also depict the full agency and public expression of citizens who self-organize online and offline to fill these troughs with alternative, parallel forms of services and thus governance. This self-organization and public expression also generate social capital among citizen volunteers—weak and strong ties that facilitate future collective action both on- and offline.

The purpose of this chapter is to analyze how the rise of citizen-generated crisis maps replaces governance in areas of limited statehood and to distill the conditions for their success. Unlike other chapters in this book, the analysis below focuses on a variable that has been completely ignored in the literature: digital social capital. The chapter is thus structured as follows. The first section provides a brief introduction to crisis mapping and frames this overview using James Scott’s discourse from Seeing Like a State (1998). The next section briefly highlights examples of crisis maps in action—specifically those responding to natural disasters, political crises, and contested elections. The third section provides a broad comparative analysis of these case studies, while the fourth section draws on the findings of this analysis to produce a list of ingredients that are likely to render crowdsourced crisis mapping more successful in areas of limited statehood. These ingredients turn out to be factors that nurture and thrive on digital social capital such as trust, social inclusion, and collective action. These drivers need to be studied and monitored as conditions for successful crisis maps and as measures of successful outcomes of online digital collaboration. In sum, digital crisis maps both reflect and change social capital.

Bio

Crowdsourcing for Human Rights Monitoring: Challenges and Opportunities for Information Collection & Verification

This new book, Human Rights and Information Communication Technologies: Trends and Consequences of Use, promises to be a valuable resource to both practitioners and academics interested in leveraging new information & communication technologies (ICTs) in the context of human rights work. I had the distinct pleasure of co-authoring a chapter for this book with my good colleague and friend Jessica Heinzelman. We focused specifically on the use of crowdsourcing and ICTs for information collection and verification. Below is the Abstract & Introduction for our chapter.

Abstract

Accurate information is a foundational element of human rights work. Collecting and presenting factual evidence of violations is critical to the success of advocacy activities and the reputation of organizations reporting on abuses. To ensure credibility, human rights monitoring has historically been conducted through highly controlled organizational structures that face mounting challenges in terms of capacity, cost and access. The proliferation of Information and Communication Technologies (ICTs) provide new opportunities to overcome some of these challenges through crowdsourcing. At the same time, however, crowdsourcing raises new challenges of verification and information overload that have made human rights professionals skeptical of their utility. This chapter explores whether the efficiencies gained through an open call for monitoring and reporting abuses provides a net gain for human rights monitoring and analyzes the opportunities and challenges that new and traditional methods pose for verifying crowdsourced human rights reporting.

Introduction

Accurate information is a foundational element of human rights work. Collecting and presenting factual evidence of violations is critical to the success of advocacy activities and the reputation of organizations reporting on abuses. To ensure credibility, human rights monitoring has historically been conducted through highly controlled organizational structures that face mounting challenges in terms of capacity, cost and access.

The proliferation of Information and Communication Technologies (ICTs) may provide new opportunities to overcome some of these challenges. For example, ICTs make it easier to engage large networks of unofficial volunteer monitors to crowdsource the monitoring of human rights abuses. Jeff Howe coined the term “crowdsourcing” in 2006, defining it as “the act of taking a job traditionally performed by a designated agent and outsourcing it to an undefined, generally large group of people in the form of an open call” (Howe, 2009). Applying this concept to human rights monitoring, Molly Land (2009) asserts that, “given the limited resources available to fund human rights advocacy…amateur involvement in human rights activities has the potential to have a significant impact on the field” (p. 2). That said, she warns that professionalization in human rights monitoring “has arisen not because of an inherent desire to control the process, but rather as a practical response to the demands of reporting – namely, the need to ensure the accuracy of the information contained in the report” (Land, 2009, p. 3).

Because “accuracy is the human rights monitor’s ultimate weapon” and the advocate’s “ability to influence governments and public opinion is based on the accuracy of their information,” the risk of inaccurate information may trump any advantages gained through crowdsourcing (Codesria & Amnesty International, 2000, p. 32). To this end, the question facing human rights organizations that wish to leverage the power of the crowd is “whether [crowdsourced reports] can accomplish the same [accurate] result without a centralized hierarchy” (Land, 2009). The answer to this question depends on whether reliable verification techniques exist so organizations can use crowdsourced information in a way that does not jeopardize their credibility or compromise established standards. While many human rights practitioners (and indeed humanitarians) still seem to be allergic to the term crowdsourcing, further investigation reveals that established human rights organizations already use crowdsourcing and verification techniques to validate crowdsourced information and that there is great potential in the field for new methods of information collection and verification.

This chapter analyzes the opportunities and challenges that new and traditional methods pose for verifying crowdsourced human rights reporting. The first section reviews current methods for verification in human rights monitoring. The second section outlines existing methods used to collect and validate crowdsourced human rights information. Section three explores the practical opportunities that crowdsourcing offers relative to traditional methods. The fourth section outlines critiques and solutions for crowdsourcing reliable information. The final section proposes areas for future research.

The book is available for purchase here. Warning: you won’t like the price but at least they’re taking an iTunes approach, allowing readers to purchase single chapters if they prefer. Either way, Jess and I were not paid for our contribution.

For more information on how to verify crowdsourced information, please visit the following links:

  • Information Forensics: Five Case Studies on How to Verify Crowdsourced Information from Social Media (Link)
  • How to Verify and Counter Rumors in Social Media (Link)
  • Social Media and Life Cycle of Rumors during Crises (Link)
  • Truthiness as Probability: Moving Beyond the True or False Dichotomy when Verifying Social Media (Link)
  • Crowdsourcing Versus Putin (Link)

Video Introduction to Crisis Mapping

I’ve given many presentations on crisis mapping over the past two years but these were never filmed. So I decided to create this video presentation with narration in order to share my findings more widely and hopefully get a lot of feedback in the process. The presentation is not meant to be exhaustive although the video does run to about 30 minutes.

The topics covered in this presentation include:

  • Crisis Map Sourcing – information collection;
  • Mobile Crisis Mapping – mobile technology;
  • Crisis Mapping Visualization – data visualization;
  • Crisis Mapping Analysis – spatial analysis.

The presentation references several blog posts of mine in addition to several operational projects to illustrate the main concepts behind crisis mapping. The individual blog posts featured in the presentation are listed below:

This research is the product of a 2-year grant provided by Humanity United (HU) to the Harvard Humanitarian Initiative’s (HHI) Program on Crisis Mapping and Early Warning, where I am a doctoral fellow.

I look forward to any questions/suggestions you may have on the video primer!

Patrick Philippe Meier

ICT for Development Highlights


For a moment there, during the 8-hour drive from Kassala back to Khartoum, I thought Doha was going to be a miss. My passport was still being processed by the Sudanese Ministry of Foreign Affairs and my flight to Doha was leaving in a matter of hours. I began resigning myself to the likelihood that I would miss ICT4D 2009. But thanks to the incredible team at IOM, not only did I get my passport back, but I got a one-year, multiple re-entry visa as well.

I had almost convinced myself that missing ICT4D would be okay. How wrong I would have been. When the quality of poster presentations and demos at a conference rivals the panels and presentations, you know you’re in for a treat. As the title of this post suggests, I’m just going to point out a few highlights here and there.

Panels

  • Onno Purbo gave a great presentation on the wokbolic, a cost-saving wi-fi receiver antenna made in Indonesia using a wok. The wokbolic has a 4km range and costs $5–$10/month. Great hack.


  • Kentaro Toyama with Microsoft Research India (MSR India) made the point that all development is paternalistic and that we should stop fretting about this since development will by definition be paternalistic. I’m not convinced. Partnership is possible without paternalism.
  • Ken Banks noted the work of QuestionBox, which I found very interesting. I’d be interested to know how they remain sustainable, a point made by another colleague of mine at DigiActive.
  • Other interesting comments by various panelists included (and I paraphrase): “Contact books and status are more important than having an email address”; “Many people still think of mobile phones as devices one holds to the ear… How do we show that phones can also be used to view and edit content?”

Demos & Posters

I wish I could write more about the demos and posters below but these short notes will have to do for now.

  • Analyzing Statistical Relationships between Global Indicators through Visualization
  • Numeric Paper Forms for NGOs
  • Uses of Mobile Phones in Post-Conflict Liberia
  • Improving Data Quality with Dynamic Forms
  • Open Source Data Collection Tools

Patrick Philippe Meier

iRevolution One Year On…

I started iRevolution exactly one year ago and it’s been great fun! I owe the Fletcher A/V Club sincere thanks for encouraging me to blog. Little did I know that blogging would be so stimulating or that I’d be blogging from the Sudan.

Here are some stats from iRevolution Year One:

  • Total number of blog posts = 212
  • Total number of comments = 453
  • Busiest day ever = December 15, 2008

And the Top 10 posts:

  1. Crisis Mapping Kenya’s Election Violence
  2. The Past and Future of Crisis Mapping
  3. Mobile Banking for the Bottom Billion
  4. Impact of ICTs on Repressive Regimes
  5. Towards an Emergency News Agency
  6. Intellipedia for Humanitarian Warning/Response
  7. Crisis Mapping Africa’s Cross-border Conflicts
  8. 3D Crisis Mapping for Disaster Simulation
  9. Digital Resistance: Digital Activism and Civil Resistance
  10. Neogeography and Crisis Mapping Analytics

I also have a second blog, started at the same time, that focuses specifically on Conflict Early Warning. There, I have authored a total of 48 blog posts.

That makes 260 posts in 12 months. Now I know where all the time went!

The Top 10 posts:

  1. Crimson Hexagon: Early Warning 2.0
  2. CSIS PCR: Review of Early Warning Systems
  3. Conflict Prevention: Theory, Policy and Practice
  4. New OECD Report on Early Warning
  5. Crowdsourcing and Data Validation
  6. Sri Lanka: Citizen-based Early Warning/Response
  7. Online Searches as Early Warning Indicators
  8. Conflict Early Warning: Any Successes?
  9. Ushahidi and Conflict Early Response
  10. Detecting Rumors with Web-based Text Mining System

I look forward to a second year of blogging! Thanks to everyone for reading and commenting, I really appreciate it!

Patrick Philippe Meier

Project Cybersyn: Chile 2.0 in 1973

My colleague Lokman Tsui at the Berkman Center kindly added me to the Harvard-MIT-Yale Cyberscholars working group and I attended the second roundtable of the year yesterday. These roundtables typically comprise three sets of presentations followed by discussions.

Introducing Cybersyn

We were both stunned by what was possibly one of the coolest tech presentations we’ve been to at Berkman. Assistant Professor Eden Medina from Indiana University’s School of Informatics presented her absolutely fascinating research on Project Cybersyn. This project ties together cybernetics, political transitions, organizational theory, complex systems and the history of technology.


I had never heard of this project but Eden’s talk made me want to cancel all my weekend plans and read her dissertation from MIT, which I’m literally downloading as I type this. If you’d like an abridged version, I’d recommend reading her peer-reviewed article, which won the 2007 IEEE Life Member’s Prize in Electrical History: “Designing Freedom, Regulating a Nation: Socialist Cybernetics in Allende’s Chile” (PDF).

Project Cybersyn is an early computer network developed in Chile during the socialist presidency of Salvador Allende (1970–1973) to regulate the growing social property area and manage the transition of Chile’s economy from capitalism to socialism.

Under the guidance of British cybernetician Stafford Beer, often lauded as the ‘father of management cybernetics’, an interdisciplinary Chilean team designed cybernetic models of factories within the nationalized sector and created a network for the rapid transmission of economic data between the government and the factory floor. The article describes the construction of this unorthodox system, examines how its structure reflected the socialist ideology of the Allende government, and documents the contributions of this technology to the Allende administration.

The purpose of Cybersyn was to “network every firm in the expanding nationalized sector of the economy to a central computer in Santiago, enabling the government to grasp the status of production quickly and respond to economic crises in real time.”

Heartbeat of Cybersyn

Stafford is considered the ‘Father of Management Cybernetics’ and at the heart of his genius is the “Viable System Model” (VSM). Eden explains that “Cybersyn’s design cannot be understood without a basic grasp of this model, which played a pivotal role in merging the politics of the Allende government with the design of this technological system.”

VSM is a model of the organizational structure of any viable or autonomous system. A viable system is any system organised in such a way as to meet the demands of surviving in the changing environment. One of the prime features of systems that survive is that they are adaptable.


Beer believed that this five-tier, recursive model existed in all stable organizations—biological, mechanical and social.
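The recursion Beer describes—each operational unit within a viable system being a viable system in its own right—can be sketched as a simple data structure. This is a toy illustration only; the class and example names below are my own, not Beer’s.

```python
from dataclasses import dataclass, field

# Toy sketch of the recursion in Beer's Viable System Model: each
# operational unit (System 1) of a viable system is itself a viable
# system, nested all the way down.

@dataclass
class ViableSystem:
    name: str
    # System 1: the operational units, each a viable system in its own right
    operations: list["ViableSystem"] = field(default_factory=list)

    def depth(self) -> int:
        """Number of recursion levels below this system."""
        if not self.operations:
            return 0
        return 1 + max(unit.depth() for unit in self.operations)

factory = ViableSystem("Factory", [ViableSystem("Production line")])
sector = ViableSystem("Textile sector", [factory])
nation = ViableSystem("Nationalized economy", [sector])
print(nation.depth())  # 3
```

The same `ViableSystem` shape applies at every level, which is exactly the point of the model: the nation, the sector, and the factory are all described by one structure.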


Synergistic Cybersyn

Based on this model, Stafford’s team sought ways to enable communications among factories, state enterprises, sector committees, the management of the country’s development agency and the central mainframe housed at the agency’s headquarters.

Eventually, they settled on an existing telex network previously used to track satellites. Unlike the heterogeneous networked computer systems in use today, telex networks mandate the use of specific terminals and can only transmit ASCII characters. However, like the Internet of today, this early network of telex machines was driven by the idea of creating a high-speed web of information exchange.

Eden writes that Project Cybersyn eventually consisted of four sub-projects: Cybernet, Cyberstride, Checo and Opsroom.

  • Cybernet: This component “expanded the existing telex network to include every firm in nationalized sector, thereby helping to create a national network of communication throughout Chile’s three-thousand-mile-long territory. Cybersyn team members occasionally used the promise of free telex installation to cajole factory managers into lending their support to the project. Stafford Beer’s early reports describe the system as a tool for real-time economic control, but in actuality each firm could only transmit data once per day.”
  • Cyberstride: This component “encompassed the suite of computer programmes written to collect, process, and distribute data to and from each of the state enterprises. Members of the Cyberstride team created ‘quantitative flow charts of activities within each enterprise that would highlight all important activities’, including a parameter for ‘social unease’ [...]. The software used statistical methods to detect production trends based on historical data, theoretically allowing [headquarters] to prevent problems before they began. If a particular variable fell outside of the range specified by Cyberstride, the system emitted a warning [...]. Only the interventor from the affected enterprise would receive the algedonic warning initially and would have the freedom, within a given time frame, to deal with the problem as he saw fit. However, if the enterprise failed to correct the irregularity within this timeframe, members of the Cyberstride team alerted the next level management [...].”
  • CHECO: This stood for CHilean ECOnomy, a component of Cybersyn which “constituted an ambitious effort to model the Chilean economy and provide simulations of future economic behaviour. Appropriately, it was sometimes referred to as ‘Futuro’. The simulator would serve as the ‘government’s experimental laboratory’ – an instrumental equivalent to Allende’s frequent likening of Chile to a ‘social laboratory’. [...] The simulation programme used the DYNAMO compiler developed by MIT Professor Jay Forrester [...]. The CHECO team initially used national statistics to test the accuracy of the simulation program. When these results failed, Beer and his fellow team members faulted the time differential in the generation of statistical inputs, an observation that re-emphasized the perceived necessity for real-time data.”
  • Opsroom: The fourth component “created a new environment for decision making, one modeled after a British WWII war room. It consisted of seven chairs arranged in an inward facing circle flanked by a series of projection screens, each displaying the data collected from the nationalized enterprises. In the Opsroom, all industries were homogenized by a uniform system of iconic representation, meant to facilitate the maximum extraction of information by an individual with a minimal amount of scientific training. [...] Although [the Opsroom] never became operational, it quickly captured the imagination of all who viewed it, including members of the military, and became the symbolic heart of the project.”
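The algedonic warning logic Eden describes for Cyberstride—flag an out-of-range variable to the local interventor first, then escalate only if the irregularity persists past a deadline—can be sketched roughly as follows. All names, thresholds, and time units here are my own illustrative assumptions, not details of the actual Cyberstride software.

```python
# Minimal sketch of Cyberstride-style "algedonic" alerting: a production
# variable outside its specified range first warns only the affected
# enterprise; if it stays out of range past a grace period, the alert
# escalates to the next level of management.

def check_variable(name, value, low, high, first_seen, now, grace_period, log):
    """Return the timestamp when the variable first went out of range
    (or None if it is back in range); append any alerts to log."""
    if low <= value <= high:
        return None  # back in range: clear any pending alert
    if first_seen is None:
        log.append((name, "warn_interventor"))  # local warning first
        return now
    if now - first_seen > grace_period:
        log.append((name, "escalate_management"))  # persisted too long
    return first_seen

log = []
# Day 1: output falls below its specified range -> local warning only.
t = check_variable("yarn_output", 40, 50, 100, None, now=1, grace_period=2, log=log)
# Day 4: still out of range, past the grace period -> escalate.
t = check_variable("yarn_output", 42, 50, 100, t, now=4, grace_period=2, log=log)
print(log)  # [('yarn_output', 'warn_interventor'), ('yarn_output', 'escalate_management')]
```

The two-stage design mirrors the autonomy Beer built into the system: the enterprise gets a window to fix its own problem before headquarters hears about it.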

Outcome

Cybersyn never really took off. Stafford had hoped to install “algedonic meters” or early warning public opinion meters in “a representative sample of Chilean homes that would allow Chilean citizens to transmit their pleasure or displeasure with televised political speeches to the government or television studio in real time.”

[Stafford] dubbed this undertaking ‘The People’s Project’ and ‘Project Cyberfolk’ because he believed the meters would enable the government to respond rapidly to public demands, rather than repress opposing views.

As Cybersyn expanded beyond the initial goals of economic regulation to political-structural transformation, Stafford grew concerned that Cybersyn could prove dangerous if the system wasn’t fully completed and only individual components of the project adopted. He feared this could result in “an old system of government with some new tools … For if the invention is dismantled, and the tools used are not the tools we made, they could become instruments of oppression.” In fact, Stafford soon “received invitations from the repressive governments in Brazil and South Africa to build comparable systems.”

Back in Chile, the Cybernet component of Cybersyn “proved vital to the government during the opposition-led strike of October 1972 (Paro de Octubre).” The strike threatened the government’s survival so high-ranking government officials used Cybernet to monitor “the two thousand telexes sent per day that covered activities from the northern to the southern ends of the country.” In fact, “the rapid flow of messages over the telex lines enabled the government to react quickly to the strike activity [...].”

The project’s telex network was subsequently—albeit briefly—used for economic mapping:

[The] telex network permitted a new form of economic mapping that enabled the government to collapse the data sent from all over the country into a single report, written daily at [headquarters], and hand delivered to [the presidential palace]. The detailed charts and graphs filling its pages provided the government with an overview of national production, transportation, and points of crisis in an easily understood format, using data generated several days earlier. The introduction of this form of reporting represented a considerable advance over the previous six-month lag required to collect statistics on the Chilean economy [...].

Ultimately, according to Stafford, Cybersyn did not succeed because it wasn’t accepted as a network of people as well as machines, a revolution in behavior as well as in instrumental capability. In 1973, Allende was overthrown by the military and the Cybersyn project all but vanished from Chilean memory.

Patrick Philippe Meier

InSTEDD’s Mesh4X Explained

I’ve had the pleasure of crossing paths with InSTEDD’s Robert Kirkpatrick on several occasions this year and always come away from our conversations having learned something new. Robert has recently been presenting InSTEDD’s new Mesh4X project. I confessed to him that I wasn’t entirely sure I fully grasped all the technical language he used to describe Mesh4X (which may serve as one answer to Paul Currion’s recent questions on The Innovation Fallacy).

Shortly after our recent CrisisMappers Meeting in Orlando, Robert kindly took the time to rework his description of Mesh4X for non techies. What follows is this description in Robert’s own words: “Having now heard the message a second time, I’m trying to clarify my description of Mesh4x for a lay audience. This version is more of a ‘product brochure’ in style, but I hope you find it useful in filling in any gaps.”

_____________________________________________

InSTEDD Mesh4X

Problem: cross-organizational data sharing shouldn’t be this hard.

A major obstacle to effective humanitarian action today is that while advances in information technology have made it possible for individual organizations to collect, organize, and analyze data as never before, sharing of data between organizations remains problematic. Organizations choose to adopt different information systems and software applications for many good reasons, yet a consequence of this is that data ends up fragmented across multiple organizations’ servers, PCs, and networks and remains “trapped” in different databases and formats.

This fragmentation incurs a high opportunity cost, as each organization working on a problem ends up having to act based on a fraction of what is actually known collectively. When data is shared today, it typically involves staff manually exporting from a database, emailing spreadsheet files, and then importing them manually on the receiving end – a cumbersome and error-prone process further complicated by situations where Internet access is slow, unreliable, or completely unavailable.

Solution: Mesh4X – critical data when you need it, where you need it.

  • Imagine if that spreadsheet on your desktop, filled with health surveys, supply requests, or project status reports, were seamlessly linked to the databases, programs, map software, websites and PDAs of others you want to share with, so that whenever you add or update data, the changes are reflected for everyone else as well, and all of their changes also show up in your spreadsheet automatically.
  • Imagine being able to see all of this collective information on a map – a map that updates itself whenever anyone makes a change  to shared data.
  • Now imagine being able to exchange data with others even when no Internet access is available.

InSTEDD Mesh4X is a technology designed to create seamless cross-organizational information sharing between different databases, desktop applications, websites, and devices. It allows you to create or join a shared “data mesh” that links together disparate software and servers and synchronizes data between them automatically. You choose the data you wish to share, others do the same, and now everyone’s data ends up everywhere it needs to be.

  • Using Mesh4X, changes to data in any one location in the mesh are automatically synchronized to every other location.
  • If you’re offline at the time, all of your data will synchronize the next time you connect to the network.
  • For cases where no Internet access is available at all, there is no longer any need for the slow transport of files physically between locations. Mesh4X gives you the option to synchronize all data via a series of SMS text messages – just plug a compatible phone into your laptop, and Mesh4X does the rest.

Using Mesh4X, you’ll have access to more information, and sooner, when making critical decisions. When you need to collaborate with multiple organizations toward a shared goal, everyone will have a more complete and up-to-date understanding of needs, resources, and who is doing what where.
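The synchronization behavior described above can be sketched, in heavily simplified form, as last-writer-wins merging between two replicas keyed by item id. The actual Mesh4X protocol handles conflicts, deletions, and multiple transports far more carefully; the record format and timestamps below are my own illustrative assumptions.

```python
# Rough sketch of two-way replica synchronization: after sync(), both
# replicas hold the newest known version of every item, using a simple
# last-writer-wins rule on an "updated" timestamp.

def sync(replica_a, replica_b):
    """Merge two replicas in place so both hold the newest copy of each item."""
    for key in set(replica_a) | set(replica_b):
        a, b = replica_a.get(key), replica_b.get(key)
        if a is None or (b is not None and b["updated"] > a["updated"]):
            replica_a[key] = b  # B has the only, or newer, copy
        elif b is None or a["updated"] > b["updated"]:
            replica_b[key] = a  # A has the only, or newer, copy

field_office = {"survey-1": {"updated": 10, "status": "draft"}}
headquarters = {"survey-1": {"updated": 12, "status": "reviewed"},
                "survey-2": {"updated": 5, "status": "new"}}
sync(field_office, headquarters)
print(field_office["survey-1"]["status"])  # reviewed
print(sorted(field_office))  # ['survey-1', 'survey-2']
```

Because the merge only compares per-item metadata, it works the same whether the changes arrive over the Internet or, as in Mesh4X’s offline mode, batched over another transport such as SMS.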

_____________________________________________

Thanks again to Robert for pulling this version together. I’m now more assured that I did grasp the ins and outs of Mesh4X. My next question to Robert and the InSTEDD team is whether Mesh4X is at a point where it’s “plug and play” – that is, as easy to download and set up as, say, a blog on WordPress. Will the setup process be facilitated by a Microsoft-like wizard for easy guidance and implementation?

InSTEDD’s Mesh4X Explained

I’ve had the pleasure of crossing paths with InSTEDD’s Robert Kirkpatrick on several occasions this year and always come away from our conversations having learned something new. Robert has recently been presenting InSTEDD’s new Mesh4X project. I confessed to him that I wasn’t entirely sure I fully grasped all the technical language he used to describe Mesh4X (which may serve as one answer to Paul Curion’s recent questions on The Innovation Fallacy).

Shortly after our recent CrisisMappers Meeting in Orlando, Robert kindly took the time to rework his description of Mesh4X for non techies. What follows is this description in Robert’s own words: “Having now heard the message a second time, I’m trying to clarify my description of Mesh4x for a lay audience. This version is more of a ‘product brochure’ in style, but I hope you find it useful in filling in any gaps.”

_____________________________________________

InSTEDD Mesh4X

Problem:  cross-organizational data sharing shouldn’t be this hard.

A major obstacle to effective humanitarian action today is that while advances in information technology have made it possible for individual organizations to collect, organize, and analyze data as never before, sharing of data between organizations remains problematic.  Organizations choose to adopt different information systems and software applications for many good reasons, yet a consequence of this is that data ends up fragmented across multiple organizations’ servers, PCs, and networks and remains “trapped” in different databases and formats.

This fragmentation incurs a high opportunity cost, as each organization working on a problem ends up having to act based on a fraction of what is actually known collectively. When data is shared today, it typically involves staff manually exporting from a database,  emailing spreadsheets files, and them importing them manually on the receiving end – a cumbersome and error-prone process further complicated by situations where Internet access is slow, unreliable, or completely unavailable.

Solution: Mesh4X – critical data when you need it, where you need it.

  • Imagine if that spreadsheet on your desktop, filled with health surveys, supply requests, or project status reports, were seamlessly linked to the databases, programs, map software, websites and PDAs of others you want to share with, so that whenever you add or update data, the changes are reflected for everyone else as well, and all of their changes show up in your spreadsheet automatically.
  • Imagine being able to see all of this collective information on a map – a map that updates itself whenever anyone makes a change to shared data.
  • Now imagine being able to exchange data with others even when no Internet access is available.

InSTEDD Mesh4X is a technology designed to create seamless cross-organizational information sharing between different databases, desktop applications, websites, and devices. It allows you to create or join a shared “data mesh” that links together disparate software and servers and synchronizes data between them automatically. You choose the data you wish to share, others do the same, and now everyone’s data ends up everywhere it needs to be.

  • Using Mesh4X, changes to data in any one location in the mesh are automatically synchronized to every other location.
  • If you’re offline at the time, all of your data will synchronize the next time you connect to the network.
  • For cases where no Internet access is available at all, there is no longer any need to physically transport files between locations. Mesh4X gives you the option to synchronize all data via a series of SMS text messages – just plug a compatible phone into your laptop, and Mesh4X does the rest.
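To make the SMS option concrete, here is a minimal sketch of how a data update might be split into numbered, SMS-sized parts and reassembled on the receiving end. This is my own illustration, not InSTEDD’s actual protocol; the header format and payload size are assumptions:

```python
import json

SMS_PAYLOAD = 140  # usable characters per message, leaving room for the header


def to_sms_chunks(records):
    """Serialize records and split them into numbered SMS-sized parts."""
    payload = json.dumps(records)
    parts = [payload[i:i + SMS_PAYLOAD] for i in range(0, len(payload), SMS_PAYLOAD)]
    total = len(parts)
    # Each message carries a "seq/total|" header so the receiver can
    # reorder parts and detect missing ones, even if messages arrive
    # out of order.
    return ["%d/%d|%s" % (i + 1, total, part) for i, part in enumerate(parts)]


def from_sms_chunks(messages):
    """Reassemble records from chunks, which may arrive in any order."""
    parts = {}
    total = None
    for msg in messages:
        header, body = msg.split("|", 1)
        seq, total_str = header.split("/")
        parts[int(seq)] = body
        total = int(total_str)
    if total is None or len(parts) != total:
        raise ValueError("missing SMS parts")
    return json.loads("".join(parts[i] for i in range(1, total + 1)))
```

A small survey table round-trips through `to_sms_chunks` and `from_sms_chunks` even when the messages are delivered in reverse order.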
Using Mesh4X, you’ll have access to more information, and sooner, when making critical decisions. When you need to collaborate with multiple organizations toward a shared goal, everyone will have a more complete and up-to-date understanding of needs, resources, and who is doing what where.
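The “everyone’s data ends up everywhere” behavior can be pictured as a repeated two-way merge. The sketch below is hypothetical (a simple last-write-wins merge keyed by record id; Mesh4X’s real synchronization is richer), but it shows why both sides converge on the same data:

```python
def merge(local, remote):
    """Last-write-wins merge of two record sets keyed by id.

    Each record carries an 'updated_at' timestamp; for every id, the more
    recently updated version survives. When both sides run this against
    each other's data, they converge on the same table.
    """
    merged = dict(local)
    for key, record in remote.items():
        if key not in merged or record["updated_at"] > merged[key]["updated_at"]:
            merged[key] = record
    return merged


# Hypothetical data: a field office has fresher figures for clinic-7,
# while HQ holds a record the field office has never seen.
field_office = {
    "clinic-7": {"beds": 12, "updated_at": 5},
}
hq = {
    "clinic-7": {"beds": 9, "updated_at": 3},   # stale copy at HQ
    "clinic-8": {"beds": 20, "updated_at": 4},  # record only HQ has
}
```

After `merge(field_office, hq)` and `merge(hq, field_office)`, both sides hold the newer clinic-7 figures plus the clinic-8 record, which is the convergence the bullets above describe.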

_____________________________________________

Thanks again to Robert for pulling this version together. I’m now more assured that I did grasp the ins and outs of Mesh4X. My next question to Robert and the InSTEDD team is whether Mesh4X is at a point where it’s “plug and play” – that is, as easy to download and set up as, say, a blog on WordPress. Will the setup process be facilitated by a Microsoft-style wizard for easy guidance and implementation?

Policy Briefing: Information in Humanitarian Responses

The BBC World Service Trust just released an excellent Policy Brief (PDF) on “The Unmet Need for Information in Humanitarian Responses.” The majority of the report’s observations and conclusions are in line with the findings identified during Harvard Humanitarian Initiative’s (HHI) 18-month applied research project on Conflict Early Warning and Crisis Mapping.

I include below excerpts that resonated particularly strongly.

  • People need information as much as water, food, medicine or shelter. Information can save lives, livelihoods and resources. Information bestows power.
  • Effective information and communication exchange with affected populations is among the least understood and most complex challenges facing the humanitarian sector in the 21st century.
  • Disaster victims need information about their options in order to make any meaningful choices about their future. Poor information flow is undoubtedly the biggest source of dissatisfaction, anger and frustration among affected people.
  • Information—and just as important communication—is essential for people to start claiming a sense of power and purpose over their own destiny.

In this context, recall the purpose of people-centered early warning as defined by the UN International Strategy for Disaster Reduction (UNISDR) during the Third International Conference on Early Warning (EWC III) in 2006:

To empower individuals and communities threatened by hazards to act in sufficient time and in an appropriate manner so as to reduce the possibility of personal injury, loss of life, damage to property and the environment, and loss of livelihoods.


Other important observations worth noting from the Policy Brief:

  • Sometimes information is the only help that can be made available, especially when isolated populations are cut off and beyond the reach of aid.
  • There are still misplaced assumptions and confusion about how and what to think about information and communication—and where organizationally to locate it. Humanitarian actors systematically fail to see the difference between public relations and communications with affected populations, and thus fund neither the expertise nor the infrastructure necessary.
  • The information needs of people affected by disasters remain largely unmet because the people, systems and resources that are required to meet them simply don’t exist in a meaningful way.
  • The humanitarian system is not equipped with either the capacity or the resources to begin tackling the challenge of providing information to those affected by crises.
  • A prior understanding of how populations in disaster prone areas source information is vital in determining the best channels for information flow: for example, local media, local religious networks and local civil society groups.
  • Studies have shown that affected populations go to great lengths to reinstate their media infrastructure and access to information at the earliest opportunity following a disaster. Relief efforts should recognize these community-driven priorities and respond accordingly.

My one criticism of the report has to do with the comments in parentheses in this paragraph:

Rebuilding the local media infrastructure for sustained operations must be prioritized as aid efforts continue. This may be as simple as providing a generator to a radio station that has lost its electricity supply, using UN communications structures such as the World Food Program towers to relay local radio stations (though in politically complex environments this needs careful thought)…

The BBC’s Policy Brief focuses on the unmet need for information in humanitarian responses but leaves “politically complex environments” out of the equation. This is problematic. As the UNISDR remarked in its 2006 Global Survey of Early Warning Systems (PDF), “the occurrence of ‘natural’ disasters amid complex political crises is increasingly widespread: over 140 natural disasters have occurred alongside complex political crises in the past five years alone.”

Operating in politically volatile, repressive environments and conflict zones presents a host of additional issues that the majority of policy briefs and reports tend to ignore. HHI’s research has sought to outline these important challenges and to highlight potential solutions both in terms of technology and tactics.


The importance of technology design has been all but ignored in our field. We may continue to use everyday communication tools and adapt them for our purposes, but these will inevitably come with constraints. Mobile phones were not designed for operation in hostile environments. This means, for example, that mobile phones don’t come preinstalled with encrypted SMS options. Nor are mobile phones designed to operate in a peer-to-peer (mesh) configuration, which would render them less susceptible to repressive regimes switching off entire mobile phone networks.
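As a toy illustration of the mesh point, the sketch below shows why peer-to-peer relaying is harder to shut down: each phone forwards a message to its neighbors once, so traffic can route around any single disabled node or tower. Everything here (the node names, the `flood` function) is hypothetical; real mesh protocols add routing, acknowledgments, and radio-level details:

```python
def flood(network, start):
    """Reach every phone connected to `start` by peer-to-peer relaying.

    Each node forwards to its neighbors exactly once; the `seen` set
    suppresses duplicate relays. Returns the set of phones reached.
    (Payload handling is elided; only reachability is modeled.)
    """
    seen = set()
    queue = [start]
    while queue:
        node = queue.pop()
        if node in seen:
            continue
        seen.add(node)
        queue.extend(network.get(node, []))
    return seen


# Four phones linked directly to one another; no central tower in the path.
mesh = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}
```

With the full mesh, a message from A reaches all four phones; if B is switched off, A still reaches D via C. A centralized network, by contrast, fails entirely when its single hub goes down.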

What are today’s most vexing problems in the field of humanitarian early warning and response? This is a question often posed by my colleague Ted Okada to remind us that we often avoid the most important challenges in the humanitarian field. It’s one thing to respond in a post-disaster environment with easy access to refugee populations and donor funding. It’s quite another to be operating in a conflict zone, with restrictions on mobility, with no clear picture of the affected population and with donors reluctant to fund experimental communication projects.

It is high time we focus our attention and resources on tackling the most vexing issues in our field, since the solutions we develop there will have universal application elsewhere. In contrast, identifying solutions to the less vexing problems will be of little benefit to the large humanitarian community operating in politically complex environments. As my colleague Erik Hersman is fond of saying, “If it works in Africa, it’ll work anywhere.” I’ve been making a similar argument over the past year: “If it works in conflict zones, it’ll work anywhere.”

Patrick Philippe Meier

Developing ICT to Meet Social Needs

I just came across Jim Fruchterman‘s excellent piece on “Developing Information Technology to Meet Social Needs,” which was recently published in Innovations. If Jim’s name sounds familiar, that’s because he’s Benetech‘s CEO.

Jim recognizes that when technology innovation doesn’t generate major financial returns, it is rarely pursued. This is where Benetech comes in. Jim’s objective is to “overcome market failure in socially beneficial applications of information technology.” The Benetech story makes for an interesting and important historical case study on how Jim and colleagues adapted the high-tech company to develop technology for social causes.

What follows are some interesting excerpts from Jim’s piece along with some of my comments.

Our initial idea was spying for human rights, using the same kind of technology as the government intelligence agencies. [In June 2000, however], it was clear that “Spying for Humanity” wasn’t the first place that technology should be used. There were much more basic needs for IT than sophisticated surveillance tools. We needed to build tools that could be used by unsophisticated human rights activists in the field.

In general, I think mainstream tools are still too complicated and cumbersome. The emergence of citizen journalism means that anyone can become a human rights activist. These individuals will use their own everyday tools to document such abuses, e.g., camera phones, YouTube, blogs, etc.

The tools are already out there, whether we like it or not, and crowdsourcing human rights information may be the way to go. Of course, I realize that the quality of the data may not be up to par with Patrick Ball‘s methods at Benetech, but this could perhaps change with time.

On a related note, I would recommend reading Clay Shirky’s new book “Here Comes Everybody” and Leysia Palen’s piece on “Citizen Communications in Crisis: Anticipating a Future of ICT-Supported Public Participation.”

To this end, “Spying for Humanity” is already happening. The question I ask in my dissertation is whether “humanity” will be able to “out-spy” repressive regimes, or vice-versa.

Think of the human rights sector as a processing industry with a typical pyramidic structure. At the base of the pyramid are the grassroots human rights organizations numbering in the tens of thousands. These groups are on the front lines of human rights violations. [...]. [The] narratives [they provide] are the raw material of human rights work; everything else in human rights work is built with these raw materials.

Above the grassroots groups in the pyramid are the provincial or national groups. These larger groups are politically better connected, [...]. They also play a role in quality control: membership in a bona fide network confers more credibility to the reports of a grassroots group.

Regional and international groups concentrate the human rights information even more. This information is aggregated and processed into higher value forms. The single incident of human rights abuse is combined with other incidents into a pattern of abuse. These patterns are the basis for international human rights campaigns [...].

I find this a really neat way to describe the human rights sector. My concern, coming from the field of conflict early warning/response, is that we always think of the base of the pyramid, i.e., the grassroots, as a source of raw material that feeds into our work, but we rarely view the base of the pyramid as first responders. We tend to leave that role for “ourselves” at the national, regional and international level. What is most lacking at the grassroots level is tactical training in field craft.

On patterns, see my previous blog post on Crisis Mapping Analytics. Satellite imagery provides an important, underutilized resource for pattern analysis of mass atrocities. This is a gap that the Harvard Humanitarian Initiative (HHI) seeks to address in the near future.

The common product of the human rights community at all levels in the pyramid is information. The human rights sector is an information processing industry. Because of the limited resources available, computers and information technology are not used to anywhere near full potential. The paradox of the human rights community is that it is an information-processing industry that has limited access to information technology.

A very interesting point.

Later on in his piece, Jim describes the criteria that Benetech considers when deciding to pick a project. I include these below as they may be of interest to colleagues also working in this space.

How Benetech picks projects:

  • Return on investment: In our case, the return is to society, not to us. We frequently use benchmarking as a method of assessing returns.
  • Uniqueness: We want to be dramatically different: no interest in being 10% better than some other solution. If it already exists, we should be doing it for a fraction of the existing cost or bringing it to a completely different community.
  • A sustainability case: How can we keep this going without draining resources from Benetech forever?
  • Low technical risk: We assume the technology is out there, but nobody is motivated to bring it to the social application.
  • Deal size: Ideally in the $1 to $4 million range to encourage sustainability.
  • Fit of the technology with our capabilities: Is it in a field that Benetech knows something about?
  • Exit options: We try to devise three exit options before we start a project.
  • Access to resources: Can we access the resources we need to succeed?
  • Potential partnerships: What partners can we leverage? How can we encourage community involvement in this project?

Patrick Philippe Meier