Monthly Archives: December 2009

Top 10 Posts on iRevolution in 2009

Here are the top 10 most popular posts on iRevolution in 2009:

  1. How To Communicate Securely in Repressive Environments
  2. A Brief History of Crisis Mapping
  3. Crisis Mapping Kenya’s Election Violence
  4. Video Introduction to Crisis Mapping
  5. Impact of ICTs on Repressive Regimes
  6. Proposing the Field of Crisis Mapping
  7. Mobile Banking for the Bottom of the Pyramid
  8. Digital Resistance: Between Digital Activism and Civil Resistance
  9. Moving Forward with Swift River
  10. Why Dictators Love the Web or: How I Learned to Stop Worrying and Say So What?!

Note the contrasting titles of posts #1 and #10. I actually wrote the former back in June during Iran’s post-election crackdown and the latter just a few weeks ago in response to reading a laundry list of “techtics” (technologies + tactics) that repressive regimes like Iran’s employ.

As David Sasaki recently noted, making lists of what is wrong is all well and good, but we also need action items for “what needs to be done to make it right” so that “month by month, year by year, we’re slowly [able to be] checking those items off.” So consider the most popular post (communicating securely in repressive environments) as a collection of action items, gathered from multiple sources, to deal with some of those laundry lists of what is wrong.

I was glad to see one of my posts on Ushahidi’s Swift River appear in the top 10, and surprised to see a post on mobile banking in position 7. I’ll be looking to blog more about mobile banking in 2010 especially as my Fletcher colleagues (and alumni) are becoming leaders in their own right in this exciting space ripe for iRevolutions. See this link for the international conference on Mobile Banking that we co-organized in Kenya earlier this year.

As for crisis mapping, I think 2009 marked the launch of crisis mapping as a field in its own right. Thank you to all the donors who have helped to make this happen. There’s no doubt that 2010 will be a year of many more iRevolutions. I look forward to it!

Patrick Philippe Meier

Google’s New Earth Engine for Satellite Imagery Analysis: Applications to Humanitarian Crises

So that’s what they’ve been up to. Google is developing a new computational platform for global-scale analysis of satellite imagery to monitor deforestation. But this is just “the first of many Earth Engine applications that will help scientists, policymakers, and the general public to better monitor and understand the Earth’s ecosystems.”

How about the Earth’s social systems? Humanitarian crises? Armed conflicts? This has been one of the main drivers of the Program on Crisis Mapping and Early Warning (CM&EW) which I co-direct at the Harvard Humanitarian Initiative (HHI) with Dr. Jennifer Leaning. Indeed, we had a meeting with the Google Earth team earlier this year to discuss the development of a computational platform to analyze satellite imagery of humanitarian crises for the purposes of early detection and early response.

In particular, we were interested in determining whether certain spatial patterns could be identified and, if so, whether we could develop a taxonomy of different spatial patterns of humanitarian crises; something like a library of “crisis fingerprints.” As we noted to Google in writing following the conversations,

It is our view that the work of interpretation will be powerfully enhanced by the development of valid patterns relating to issues of importance in specific sets of circumstances that can be reproducibly recognized in satellite imagery. To be sure, the geo-spatial analysis of humanitarian crises can serve as an important control mechanism for Google’s efforts in extending the functionality of Google Earth and Google’s analytical expertise.

This is something that a consortium of organizations including HHI can get engaged in. Population movement and settlement, shelter options and conditions, environmental threats, and access to food and water are all discernible from various elements and resolution levels of satellite imagery. But much more could be apprehended from these images were patterns assembled and then tested against other information sources and empirical field assessments. For more on this, see my colleague Jennifer Leaning’s excellent keynote address at ICCM 2009:

The military uses of satellite imagery are far more developed than the humanitarian capacities because the interpretive link between what can be seen in the image and what is actually happening on the ground has been made, in great iterative detail, over a period of many years, encompassing a wide span of geographies and technological deployments. We need to develop a process to explore and validate what can be understood from satellite imagery about key humanitarian concerns by augmenting standard satellite analytics with time-specific and informed assessments of what was concurrently taking place in the location being photographed.

The potential for such applications has just begun to surface in humanitarian circles.  The Darfur Google initiative has demonstrated the force of vivid images of destruction tethered to actual locations of villages across the span of Darfur.  Little further detail is available from the actual images, however, and much of the associated information depicted by clicking on the image is static derived from other sources, somewhat laboriously acquired.  The full power of what might be gleaned simply from the satellite image remains to be explored.

Because systematic and empirical analysis of what a series of satellite images might reveal about humanitarian issues has not yet been undertaken, any effort to draw inferences from current images does not lead far.  The recent coverage of the war in Sri Lanka included satellite photos of the same contested terrain in the northeast, for two time frames, a month apart.  The attempt to determine what had transpired in that interim, relating to population movement, shelter de-construction and reconstruction, and land bombardment, was a matter of conjecture.

Bridging this gap from image to insight will not only be a matter of technological enhancement of satellite imaging. It will require interrogating the satellite images through the filter of questions and concerns that are relevant to humanitarian action and then infusing other kinds of information, gathered through a range of methods, to create visual metrics for understanding what the images project.
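To make the two-frame comparison described above (as in the Sri Lanka example) concrete, here is a minimal pixel-differencing sketch in Python. Everything in it is hypothetical — the toy “images”, the brightness threshold, the function name — and, as the passage argues, real humanitarian analysis would also require co-registration, radiometric correction and ground-truth validation.

```python
def change_map(before, after, threshold=30):
    """Flag pixels whose brightness changed by more than `threshold`
    between two co-registered grayscale images (lists of pixel rows)."""
    return [[abs(b - a) > threshold for b, a in zip(row_b, row_a)]
            for row_b, row_a in zip(before, after)]

# Toy 4x4 "images": a bright 2x2 block (say, rooftops) present in the
# first frame has disappeared a month later.
before = [[0,   0,   0, 0],
          [0, 200, 200, 0],
          [0, 200, 200, 0],
          [0,   0,   0, 0]]
after = [[0] * 4 for _ in range(4)]

changed = sum(cell for row in change_map(before, after) for cell in row)
print(changed)  # → 4 pixels flagged as changed
```

Deciding whether such a mask shows shelter destruction, seasonal vegetation change or sensor noise is precisely the interpretive gap between image and ground truth that the quote describes.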

There is a lot of exciting work to be done in this space and I do hope that Google will seek to partner with humanitarian organizations and applied research institutes to develop an Earth Engine for Humanitarian Crises. While the technological and analytical breakthroughs are path breaking, let us remember that they can be even more breathtaking by applying them to save lives in humanitarian crises.

Patrick Philippe Meier

Is Journalism Just Failed Crowdsourcing?

This provocative question materialized during a recent conversation I had with a Professor of Political Science while in New York this week. Major news companies like CNN have started to crowdsource citizen-generated news on the basis that “looking at the news from different angles gives us a deeper understanding of what’s going on.” CNN’s iReporter thus invites citizens to help shape the news “in order to paint a more complete picture of the news.”

This would imply that traditional journalism has provided a relatively incomplete picture of global events. So the question is: if crowdsourcing platforms had been available to journalists one hundred years ago, would they have viewed these platforms as an exciting opportunity to get early leads on breaking stories? The common counter-argument is: but crowdsourcing “opens the floodgates” of information and we simply can’t follow up on everything. Yes, but whoever said that every lead requires follow-up?

Journalists are not always interested in following up on every lead that comes their way. They’ll select a few sources, interview them and then write up the story. What crowdsourcing citizen-generated news does, however, is provide them with many more leads to choose from. Isn’t this an ideal setup for a journalist? Instead of having to chase down leads across town, the leads come directly to them with names, phone numbers and email addresses.

Imagine that the field of journalism had started out using crowdsourcing platforms combined with investigative journalism. If these platforms were then outlawed for whatever reason, would investigative journalists be hindered in their ability to cover the news from different angles? Or would they still be able to paint an equally complete picture of the news?

Granted, one common criticism of citizen journalism is the lack of context it provides, especially on Twitter given the 140-character restriction. But surely 140 characters are plenty for the purposes of a potential lead. And if a mountain of tweets started to point to the same lead story, then a professional journalist could take advantage of this information when deciding whether or not to follow up.
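The “mountain of tweets” idea can be sketched very simply: count how often candidate stories are mentioned and surface those that cross a threshold as leads. The hashtag convention, the threshold and the sample tweets below are all hypothetical illustrations, not a real newsroom tool.

```python
from collections import Counter

def top_leads(tweets, min_mentions=3):
    """Rank hashtags by frequency and keep those mentioned often
    enough to be worth a journalist's follow-up."""
    tags = Counter(word.lower() for t in tweets
                   for word in t.split() if word.startswith("#"))
    return [(tag, n) for tag, n in tags.most_common() if n >= min_mentions]

tweets = [
    "#bridgecollapse on 5th avenue",
    "traffic chaos #bridgecollapse",
    "is the #BridgeCollapse real?",
    "lunch was great #foodie",
]
print(top_leads(tweets))  # → [('#bridgecollapse', 3)]
```

The journalist still decides whether to follow up; the crowd only supplies the leads.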


I also find the criticism against Twitter interesting coming from traditional journalists. In the early 1900s, large newspapers started hiring war correspondents “who used the new telegraph and expanding railways to move news faster to their newspapers.” However, the cost of sending telegrams forced reporters to develop a “new concise or ‘tight’ style of writing which became the standard for journalism through the next century.”

Today, the cost of hiring professional journalists means that a newspaper like the Herald (at the time) is not going to send a modern Henry Stanley to find a certain Dr. Livingstone in Africa. And besides, had the Herald had global crowdsourcing platforms back in the 1870s, it might instead have used Twitter to crowdsource the coordinates of Dr. Livingstone.

This may imply that traditional journalism was primarily shaped by the constraints of technology at the time. In a teleological sense, then, crowdsourcing may simply be the next phase in the future of journalism.

Patrick Philippe Meier

Crisis Information and The End of Crowdsourcing

When Wired journalist Jeff Howe coined the term crowdsourcing back in 2006, he did so in contradistinction to the term outsourcing and defined crowdsourcing as tapping the talent of the crowd. The tag line of his article was: “Remember outsourcing? Sending jobs to India and China is so 2003. The new pool of cheap labor: everyday people using their spare cycles to create content, solve problems, even do corporate R & D.”

If I had a tag line for this blog post it would be: “Remember crowdsourcing? Cheap labor to create content and solve problems using the Internet is so 2006. What’s new and cool today is the tapping of official and unofficial sources using new technologies to create and validate quality content.” I would call this allsourcing.

The word “crowdsourcing” is obviously a compound word that combines “crowd” and “sourcing”. But what exactly does “crowd” mean in this respect? And how has “sourcing” changed since Jeff introduced the term crowdsourcing over three-and-a-half years ago?

Let’s tackle the question of “sourcing” first. In his June 2006 article on crowdsourcing, Jeff provides case studies that all relate to a novel application of a website, and perhaps the most famous example of crowdsourcing is Wikipedia, another website. But we’ve just recently seen some interesting uses of mobile phones to crowdsource information. See Ushahidi or Nathan Eagle’s talk at ETech09, for example:

So the word “sourcing” here goes beyond the website-based e-business approach that Jeff originally wrote about in 2006. The mobile technology component here is key. A “crowd” is not still. A crowd moves, especially in crisis, which is my area of interest. So the term “allsourcing” not only implies collecting information from all sources but also the use of “all” technologies to collect said information in different media.

As for the word “crowd”, I recently noted in this Ushahidi blog post that we may need some qualifiers—namely bounded and unbounded crowdsourcing. In other words, the term “crowd” can mean a large group of people (unbounded crowdsourcing) or perhaps a specific group (bounded crowdsourcing). Unbounded crowdsourcing implies that the identity of individuals reporting the information is unknown whereas bounded crowdsourcing would describe a known group of individuals supplying information.

The term “allsourcing” represents a combination of bounded and unbounded crowdsourcing coupled with new “sourcing” technologies. An allsourcing approach would combine information supplied by known/official sources and unknown/unofficial sources using the Web, e-mail, SMS, Twitter, Flickr, YouTube, etc. I think the future of crowdsourcing is allsourcing because allsourcing combines the strengths of both bounded and unbounded approaches while reducing the constraints inherent to each individual approach.

Let me explain. One important advantage of unbounded crowdsourcing is the ability to collect information from unofficial sources. I consider this an advantage over bounded crowdsourcing since more information can be collected this way. The challenge, of course, is how to verify the validity of said information. Verifying information is by no means a new process, but unbounded crowdsourcing has the potential to generate a lot more information than bounded crowdsourcing since the former does not censor unofficial content. This presents a challenge.

At the same time, bounded crowdsourcing has the advantage of yielding reliable information since the reports are produced by known/official sources. However, bounded crowdsourcing is constrained to a relatively small number of individuals doing the reporting. Obviously, these individuals cannot be everywhere at the same time. But if we combined bounded and unbounded crowdsourcing, we would see an increase in (1) overall reporting, and (2) in the ability to validate reports from unknown sources.

The increased ability to validate information is due to the fact that official and unofficial sources can be triangulated when using an allsourcing approach. Given that official sources are considered trusted sources, any reports from unofficial sources that match official reports can be considered more reliable, along with their associated sources. And so the combined allsourcing approach in effect enables the identification of new reliable sources even if the identity of these sources remains unknown.
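As a toy illustration of this triangulation logic — the scores, field names and sample reports are all hypothetical, not how Ushahidi or Swift River actually works — one could score unofficial reports by whether an official source corroborates the same event at the same place:

```python
def score_reports(official, unofficial):
    """Boost unofficial reports corroborated by an official report of
    the same event at the same place, and remember their sources as
    newly reliable (even though their identity stays unknown)."""
    corroborated = {(r["event"], r["place"]) for r in official}
    scored, trusted_sources = [], set()
    for r in unofficial:
        match = (r["event"], r["place"]) in corroborated
        scored.append({**r, "score": 0.9 if match else 0.3})
        if match:
            trusted_sources.add(r["source"])
    return scored, trusted_sources

official = [{"event": "flood", "place": "Kisumu", "source": "UN sitrep"}]
unofficial = [
    {"event": "flood", "place": "Kisumu", "source": "sms-0712"},
    {"event": "riot", "place": "Eldoret", "source": "sms-0733"},
]
scored, trusted = score_reports(official, unofficial)
print(trusted)  # → {'sms-0712'}
```

The uncorroborated report is not discarded, only scored lower: it may simply be the first word on an event no official source has reached yet.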

Ushahidi is a good example of an allsourcing platform. Organizations can use Ushahidi to capture both official and unofficial sources using all kinds of new sourcing technologies. Allsourcing is definitely something new so there’s still much to learn. I have a hunch that there is huge potential. Jeff Howe titled his famous article in Wired “The Rise of Crowdsourcing.” Will a future edition of Wired include an article on “The Rise of Allsourcing”?

Patrick Philippe Meier

Three Common Misconceptions About Ushahidi

Cross-posted on Ushahidi

Here are three interesting misconceptions about Ushahidi and crowdsourcing in general:

  1. Ushahidi takes the lead in deploying the Ushahidi platform
  2. Crowdsourced information is statistically representative
  3. Crowdsourced information cannot be validated

Let’s start with the first. We do not take the lead in deploying Ushahidi platforms. In fact, we often learn about new deployments second-hand via Twitter. We are a non-profit tech company and our goal is to continue developing innovative crowdsourcing platforms that cater to the growing needs of our current and prospective partners. We provide technical and strategic support when asked but otherwise you’ll find us in the backseat, which is honestly where we prefer to be. Our comparative advantage is not in deployment. So the credit for Ushahidi deployments really goes to the numerous organizations that continue to implement the platform in new and innovative ways.

On this note, keep in mind that the first downloadable Ushahidi platform was made available just this May, and the second version just last week. So implementing organizations have been remarkable test pilots, experimenting and learning on the fly without recourse to any particular manual or documented best practices. Most election-related deployments, for example, were even launched before May, when platform stability was still an issue and the code was still being written. So our hats go off to all the organizations that have piloted Ushahidi and continue to do so. They are the true pioneers in this space.

Also keep in mind that these organizations rarely had more than a month or two of lead-time before scheduled elections, like in India. If all of us have learned anything from watching these deployments in 2009, it is this: the challenge is not one of technology but election awareness and voter education. So we’re impressed that several organizations are already customizing the Ushahidi platform for elections that are more than 6-12 months away. These deployments will definitely be a first for Ushahidi and we look forward to learning all we can from implementing organizations.

The second misconception, “crowdsourced information is statistically representative,” often crops up in conversations around election monitoring. The problem is largely one of language. The field of election monitoring is hardly new. Established organizations have been involved in election monitoring for decades and have gained a wealth of knowledge and experience in this area. For these organizations, the term “election monitoring” has specific connotations, such as random sampling and statistical analysis, verification, validation and accredited election monitors.

When partners use Ushahidi for election monitoring, I think they mean something different. What they generally mean is citizen-powered election monitoring aided by crowdsourcing. Does this imply that crowdsourced information is statistically representative of all the events taking place across a given country? Of course not: I’ve never heard anyone suggest that crowdsourcing is equivalent to random sampling.

Citizen-powered election monitoring is about empowering citizens to take ownership over their elections and to have a voice. Indeed, elections do not start and stop at the polling booth. Should we prevent civil society groups from crowdsourcing crisis information on the basis that their reports may not be statistically representative? No. This is not our decision to make and the data is not even meant for us.

Another language-related problem has to do with the term “crowdsourcing”. The word “crowd” here can literally mean anyone (unbounded crowdsourcing) or a specific group (bounded crowdsourcing) such as designated election monitors. If these official monitors use Ushahidi and are deliberately positioned across a country for random sampling purposes, then this becomes no different from standard and established approaches to election monitoring. Bounded crowdsourcing can be statistically representative.
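To see why deliberate random placement makes bounded crowdsourcing statistically representative, consider a toy sketch (the country, station count and irregularity rate below are invented for illustration): sampling a few hundred stations at random estimates the national irregularity rate within a standard margin of error.

```python
import math
import random

def estimate_rate(stations, n, seed=0):
    """Randomly sample n polling stations and estimate the national
    irregularity rate with a 95% margin of error."""
    random.seed(seed)
    sample = random.sample(stations, n)
    p = sum(s["irregularity"] for s in sample) / n
    moe = 1.96 * math.sqrt(p * (1 - p) / n)
    return p, moe

# Hypothetical country: 1,000 stations, 10% with irregularities.
stations = [{"id": i, "irregularity": i % 10 == 0} for i in range(1000)]
rate, moe = estimate_rate(stations, n=200)
print(f"estimated rate: {rate:.2f} ± {moe:.2f}")
```

Unbounded reports from whoever happens to text in carry no such guarantee, which is exactly the distinction the paragraph draws.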

The third misconception about Ushahidi has to do with the tradeoff between unbounded crowdsourcing and the validation of said crowdsourced information. One of the main advantages of unbounded crowdsourcing is the ability to collect a lot of information from a variety of sources and media—official and nonofficial sources—in near real time. Of course, this means that a lot more information can be reported at once, which can make the validation of said information a challenging process.

A common reaction to this challenge is to dismiss crowdsourcing altogether because unofficial sources may be unreliable or, at worst, deliberately misleading. Some organizations thus find it easier to write off all unofficial content because of these concerns. Ushahidi takes a different stance. We recognize that user-generated content is not about to disappear any time soon and that a lot of good can come out of such content, not least because official information can too easily become proprietary and guarded instead of shared.

So we’re not prepared to write off user-generated content because validating information happens to be challenging. Crowdsourcing crisis information is our business and so is (obviously) the validation of crowdsourced information. This is why Ushahidi is fully committed to developing Swift River. Swift is a free and open source platform that validates crowdsourced information in near real-time. Follow the Ushahidi blog for exciting updates!

Where I Stand on Digital Activism

Journalists, activists, students, donors and, most recently, a millionaire investment banker have all asked me where I stand on Digital Activism. More precisely, the popular question is: Who is going to win? And by that, they refer to the cat-and-mouse dynamics that characterize the digital battle between repressive regimes and civil resistance movements.

My personal opinion (a.k.a. untested hunch) is that this cat-and-mouse game is bound to continue for some time. That said, I ultimately think that repressive regimes will eventually lag behind the adoption and application of innovative methods and technologies. I also think that resistance movements that employ digital technologies will continue to have a first-mover advantage, even if that advantage is short-lived.

Why? Because of Organizational Theory 101. It is well known in the study of complex systems and network dynamics that command-and-control organizational structures do not adapt very well to rapidly changing environments. On the other hand, relatively decentralized forms of organization are typically more nimble and adaptable. Decentralized networks are often first movers, which gives them a temporary albeit important advantage. They have more feedback loops.

As I wrote in a 2006 conference paper (citing Bazerman and Watkins 2004),

Feedback mechanisms enable an organization to manage the complexity of their internal and external environments in four important ways. They allow an organization to: (1) scan the environment and collect sufficient information; (2) integrate and analyze information from multiple sources; (3) respond in a timely manner and observe the results; and (4) reflect on what happened and incorporate lessons-learned into the “institutional memory” of the organization, in order to avoid repetition of past mistakes.

In contrast, hierarchical structures require the executive to rely on others to scan information. Excellent communication “between floors” is therefore critical. In the process of communication, however, “organizational members filter information as it rises through hierarchies” and “those at the top inevitably receive incomplete and distorted data [and] overload may prevent them from keeping up-to-date with incoming information.” This limits the organization’s ability to adapt and change, and “any organization that is not changing is a battlefield monument.”

Furthermore, as Brafman and Beckstrom have shown in The Starfish and the Spider, “when attacked, a decentralized organization tends to become even more open and decentralized.” This means that government crackdowns against resistance movements tend to make the latter more decentralized and harder to track down.

I often use the cat-and-mouse game analogy but perhaps a better analogy is the spider and the starfish. Even if an arm of the starfish is cut off, it will regenerate. Not so with the spider, which has a centralized nervous system. As Brafman and Beckstrom write, “A starfish is a neural network–basically a network of cells. Instead of having a head, like a spider, the starfish functions as a decentralized network.” Of course, resistance movements are not completely decentralized; they need only be more decentralized relative to repressive regimes.

Notice that I have not referred to technology a single time in this blog post about Digital Activism. That’s because my take on the competition between the spider and starfish ultimately rests on organizational dynamics, not technology.

Organization is a formidable force in social systems and natural systems. The only difference between a water droplet and solid ice is organization—the way the molecules are organized. Asymmetric warfare is possible because of organizational differences. To understand the power of organization, I highly recommend reading the book by my colleagues Shultz and Dew (2006), Insurgents, Terrorists, and Militias: The Warriors of Contemporary Combat.

So this is ultimately where I stand on Digital Activism and what I wrote over a year ago in my dissertation proposal. We can go on all we want with anecdotal acrobatics but I personally think that doing so is simply barking up the wrong tree and missing the forest for the trees.

Patrick Philippe Meier

New Tech in Emergencies and Conflicts: Role of Information and Social Networks

I had the distinct pleasure of co-authoring this major new United Nations Foundation & Vodafone Foundation Technology Report with my distinguished colleague Diane Coyle. The report looks at innovation in the use of technology along the time line of crisis response, from emergency preparedness and alerts to recovery and rebuilding.

“It profiles organizations whose work is advancing the frontlines of innovation, offers an overview of international efforts to increase sophistication in the use of IT and social networks during emergencies, and provides recommendations for how governments, aid groups, and international organizations can leverage this innovation to improve community resilience.”

Case studies include:

  • Global Impact and Vulnerability Alert System (GIVAS)
  • European Media Monitor (EMM, aka OPTIMA)
  • Emergency Preparedness Information Center (EPIC)
  • Ushahidi Crowdsourcing Crisis Information
  • Télécoms sans Frontières (TSF)
  • Impact of Social Networks in Iran
  • Social Media, Citizen Journalism and Mumbai Terrorist Attacks
  • Global Disaster Alert and Coordination System (GDACS)
  • AAAS Geospatial Technologies for Human Rights
  • Info Technology for Humanitarian Assistance, Cooperation and Action (ITHACA)
  • Camp Roberts
  • OpenStreetMap and Walking Papers
  • UNDP Threat and Risk Mapping Analysis project (TRMA)
  • Geo-Spatial Info Analysis for Global Security, Stability Program (ISFEREA)
  • FrontlineSMS
  • M-PESA and M-PAISA
  • Souktel

I think this long and diverse list of case studies clearly shows that the field of humanitarian technology is coming into its own. Have a look at the report to learn how all these fit in the ecosystem of humanitarian technologies. And check out the tag #Tech4Dev on Twitter or the UN Foundation’s Facebook page to discuss the report, and feel free to add any comments to this blog post below. I’m happy to answer all questions. In the meantime, I salute the UN Foundation for producing a forward-looking report on projects that are barely two years old, and some just two months old.

Patrick Philippe Meier

From Baselines to Basemaps: Crisis Mapping for Monitoring & Evaluation (M&E)

I was just in Berlin for meetings with Transparency International and the topic of mapping for Monitoring & Evaluation came up yet again. Earlier this year, Mercy Corps and UNDP Sudan both expressed an interest in exploring the application of crisis mapping platforms for M&E. Problem is, the field of M&E—particularly with regard to peacebuilding and post-conflict reconstruction—is devoid of any references to mapping.

As part of my consulting work with UNDP Sudan, I therefore produced a short concept paper on this topic back in June. Here’s a summary of what I wrote.


Peacebuilding and post-conflict reconstruction programs necessarily operate within a dynamic environment. This means that they must adapt to changing circumstances or else run the risk of misallocating resources or, worse, exacerbating tensions and creating new sources of violent conflict. Hence the need for conflict-sensitive programming, which UNDP’s TRMA initiative is designed to support in the Sudan.

The threat and risk maps produced by UNDP provide spatial risk assessments that can inform programmatic response in Sudan’s post-conflict states. This need not be a one-off decision-support exercise, however. Indeed, the use of spatial risk assessments updated over time is an even more compelling use of crisis maps for decision-support.

A changing post-conflict environment means that projects designed half-a-year ago may no longer be having the intended impact they were funded to have. To this end, it is important that UN and local-government partners have regular updates on the changing context in order to adapt programming respectively. Crisis mapping can play a pivotal role in this decision support process.


I therefore propose a new approach to crisis mapping called “basemapping”. The purpose of basemapping is to combine M&E and crisis mapping to produce basemaps against which projects can be monitored and evaluated. The basemapping process consists of three distinct mapping steps:

1. Ideal World Basemapping: mapping the ideal world that a given project seeks to achieve over a given period of time.

2. Real World Basemapping: mapping the current state of affairs in the specific world that the project seeks to change.

3. Changed World Basemapping: ongoing mapping to compare the change between the ideal world and real world basemaps.
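The three steps can be sketched with a toy indicator (the locations, targets and values below are invented for illustration): an ideal-world basemap sets a target per location, a real-world basemap records current values, and the changed-world step maps the remaining gap.

```python
def changed_world(ideal, real):
    """Per location, the gap between the ideal-world target and the
    latest real-world value of an indicator (e.g. share of households
    with safe water access)."""
    return {loc: round(ideal[loc] - real.get(loc, 0.0), 2) for loc in ideal}

ideal = {"Juba": 0.9, "Malakal": 0.8}   # targets the project aims for
real  = {"Juba": 0.6, "Malakal": 0.75}  # latest field assessment
print(changed_world(ideal, real))  # → {'Juba': 0.3, 'Malakal': 0.05}
```

Re-running the comparison as new assessments arrive is what turns a static baseline into a dynamic basemap.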

The following section provides a brief background to M&E and formulates a proposed methodology for basemapping. As basemapping is a new concept that has little to no precedent, the purpose of the proposed methodology is to catalyze discussion on the subject and to make the methodology more sophisticated over time.

Monitoring and Evaluation (M&E)

A strong M&E framework will include theories and types of change, an achievable goal with clear objectives, outputs and activities as well as reliable indicators and baselines. However, M&E frameworks should be case-specific and must be tailored to the purpose of individual projects. This deliverable assumes that UNDP is already well versed in M&E frameworks and methodologies. The analysis that follows thus focuses specifically on the contributing role of crisis mapping to the M&E process.

Baselines are the most often forgotten component within design, monitoring and evaluation, yet they are key to proving that change has truly taken place. They also provide the most intuitive link to crisis mapping. While baselines typically represent a snapshot in time and space against which deviations represent progress or failure, the concept of “basemaps” can play the same role albeit dynamically.

Perhaps the closest analogy to baselines in the field of conflict analysis is the conflict assessment. A conflict assessment is an exploration of the realities of the conflict and an analysis of its underlying causes. An assessment can be done at any time, independently of a program or as a part of an existing program. Assessments are often conducted to determine whether an intervention is needed and, if so, what type of intervention.

Assessments (or risk assessments) are also typically carried out as a first step in the development of a conflict early warning system. They serve to identify appropriate conflict early warning indicators. In a sense, an assessment is the basis from which the programming will be designed. Conversely, a baseline identifies the status of the targeted change before the project starts but after it has been designed.

M&E experts caution that assessments and baselines should not be blended together. Nor do they suggest using one as a substitute for the other since their raison d’être, focus, and implementation are very different. We beg to differ and would even go so far as proposing that dynamic crisis mapping platforms can bring both conflict assessments and M&E baselines together with considerable added value.

To explain the potential of basemaps in more detail, a sound understanding of baselines is important. A baseline provides a starting point or reference from which a comparison can be made.  Baselines are conducted prior to the beginning of a program intervention and are the point of comparison for monitoring and evaluation data. The bulk of baseline studies focus on the intended outcomes of a project. They can also take into account secondary outcomes and assumptions, though these are not the primary emphasis.

Baseline information can be used in a number of ways. Perhaps the most intuitive application is the comparison of baseline information with subsequent information to show the change (or lack thereof) that has taken place over time (again, change over space is all too often ignored). Baseline information can also be used to refine programming decisions or set achievable and realistic targets. Finally, baseline information enables monitoring data to have greater utility earlier in the project cycle.
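
As a minimal illustration, comparing a baseline reading against a follow-up reading for the same geo-referenced indicator can be sketched as below. The field names and the sample values are hypothetical, not drawn from any actual TRMA dataset:

```python
# Hypothetical sketch: comparing a baseline measurement against a
# follow-up measurement for the same geo-referenced indicator.
from dataclasses import dataclass

@dataclass
class Observation:
    indicator: str   # e.g. "violent incidents per month" (illustrative)
    lat: float       # latitude of the measurement location
    lon: float       # longitude of the measurement location
    period: str      # reporting period, e.g. "2009-01"
    value: float

def change_since_baseline(baseline: Observation, follow_up: Observation) -> float:
    """Return the change in an indicator relative to its baseline value."""
    assert baseline.indicator == follow_up.indicator
    return follow_up.value - baseline.value

baseline = Observation("violent incidents", -1.28, 36.82, "2009-01", 40.0)
follow_up = Observation("violent incidents", -1.28, 36.82, "2009-06", 25.0)
print(change_since_baseline(baseline, follow_up))  # -15.0, i.e., a reduction
```

Because each observation carries coordinates, the same comparison can be repeated per location, which is what turns a baseline into a basemap.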

Baselines have three possible focus areas: change, secondary outcomes and assumptions. It is important that DAI and partners agree on the formulation of an M&E framework that clearly focuses on one of these areas. Note that the first area, change, is required of all baselines, while the other two are optional depending on the project.

Towards Dynamic Basemaps

Conflict, or threat and risk, data typically has a geographic dimension. Consequently, baseline data on conflict dynamics also have a geographic element. To this end, it makes far more sense to use basemaps than baselines since the former include spatially relevant information that changes over time, which is especially important for decision-making vis-à-vis programmatic response in post-conflict environments.

UNDP’s TRMA enjoys a distinctive advantage in this respect since the project already maps threat and risk data (via hard-copy print-outs and a dynamic mapping tool called the 4Ws). The main handicap at the moment is the lack of regularly updated threat and risk data with which to visualize change over time. That said, the TRMA team is planning to shift towards more regular data collection, which will make the use of the 4Ws even more compelling for M&E purposes.

In the meantime, drawing on the use of baselines for M&E can guide the development of a general methodology for basemaps. This deliverable assumes that a standard M&E framework has already been developed for a given project. The challenge is to now translate this framework into a basemap. Recall that a strong M&E framework will include theories and types of change, an achievable goal with clear objectives, outputs and activities as well as reliable indicators and baselines. The first step, then, is to consider the theory (or theories) of change formulated for a given project.

Theories of change help planners and evaluators stay aware of the assumptions behind their choices, verify that the activities and objectives are logically aligned, and identify opportunities for integrated programming to spark synergies and leverage greater results. Types of change refer to specific changes expressed in the actual program design and/or evaluation, either as goals, objectives, or indicators.  Common examples include changes in behavior, practice, process, status, etc.  Both the theory of change and the types of changes sought should be evident in a well-designed program.

Although theories and types of change may not have obvious or intuitive geographical dimensions, it is important for “basemapping” to start thinking in geographical terms from the first step in the M&E process. Take “the reduction of violence theory of change,” which suggests that peace will result as we reduce the levels of violence perpetrated by combatants or their representatives. Methods include cease-fires, introduction of peacekeeping forces and conflict sensitive programming, for example. Cease-fires, peacekeeping operations and conflict sensitive development all have a geographical component. So the point is simply to ask the question “Where?” and to ensure the answer is woven into the theory of change.

Next, the basemapping process should consider the program design, i.e., the design hierarchy:

  • Goal: broadest change in the conflict.
  • Objectives: types of changes that are prerequisites to achieve stated goal.
  • Outputs: deliverables or products, often tangible, resulting from the activities.
  • Activities: concrete events or services performed.

Each element in the design hierarchy has a spatial component. The goal is to be achieved in a specific location or locations. So are the objectives, outputs and activities. Again, this geographical dimension needs to be explicitly articulated at each step of the hierarchy. The first phase of the basemapping process, then, consists of fully mapping a project’s design hierarchy geographically (social network mapping is also possible but not included in this deliverable).
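
One way to make this explicit is to attach a location to every element of the hierarchy. The structure below is a hypothetical sketch of such a georeferenced design hierarchy; the descriptions and coordinates are invented for illustration:

```python
# Hypothetical sketch: a design hierarchy in which every element
# carries an explicit geographic reference, so the "Where?" question
# is answered at each level (goal, objective, output, activity).
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Activity:
    description: str
    location: Tuple[float, float]  # (lat, lon)

@dataclass
class Output:
    description: str
    location: Tuple[float, float]
    activities: List[Activity] = field(default_factory=list)

@dataclass
class Objective:
    description: str
    location: Tuple[float, float]
    outputs: List[Output] = field(default_factory=list)

@dataclass
class Goal:
    description: str
    location: Tuple[float, float]
    objectives: List[Objective] = field(default_factory=list)

goal = Goal(
    "Reduced levels of violence in the district",
    (9.03, 38.74),
    objectives=[Objective(
        "Cease-fire holds in the northern sub-district",
        (9.10, 38.70),
        outputs=[Output(
            "Monitoring posts established",
            (9.11, 38.69),
            activities=[Activity("Train local monitors", (9.11, 38.69))],
        )],
    )],
)
```

Plotting every element of such a hierarchy on a map is, in effect, the “Ideal World” basemap described below.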

In essence, the first phase of basemapping is to map the “ideal world” that a given project is meant to achieve. The second phase of basemapping comprises the mapping of a standard albeit georeferenced baseline of indicators. This phase seeks to capture an accurate picture of the current state of affairs in the “real world” and is in effect a conflict or risk assessment. Many conflict/risk assessment frameworks have already been developed and applied so this will not be duplicated here.

The third phase of basemapping comes after “Ideal World” and “Real World” mapping. The purpose of “Changed World” basemapping is to compare ideal and real world basemaps in order to isolate any positive changes that can be attributed to the project being implemented. The purpose of basemaps is not to prove causation but rather to suggest correlation.

Perhaps the most critical component of “Changed World” basemaps is the selection of the “change indicators”. That is, identifying those indicators that can geographically denote whether program activities are in fact changing the real world as intended. Often, these indicators will already have been identified during standard baseline studies and simply need to be tied to specific geographic coordinates. In other situations, proxy indicators with a deliberate geographic dimension will need to be identified.

The temporal resolution of “change indicators” also needs to be a deliberate decision. For example, these geo-referenced indicators can be monitored on a daily, weekly or monthly basis. Mobile technology such as mobile phones and PDAs can be used to document change indicators and map them in quasi-real time. The Ushahidi mapping platform could be ideal for basemapping in this regard.
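
A minimal sketch of how geo-referenced change indicators might be logged at a chosen temporal resolution (weekly here) and then read back as a time series per location follows. All names, periods and values are illustrative assumptions, not part of any existing platform:

```python
# Hypothetical sketch: logging geo-referenced "change indicator"
# reports at a fixed temporal resolution and grouping them by location.
from collections import defaultdict

def log_report(store, period, lat, lon, value):
    """Append one indicator reading for a given period and location.
    Coordinates are rounded to group nearby reports into one cell."""
    store[(round(lat, 2), round(lon, 2))].append((period, value))

reports = defaultdict(list)
log_report(reports, "2009-W40", -1.28, 36.82, 12)
log_report(reports, "2009-W41", -1.28, 36.82, 9)
log_report(reports, "2009-W42", -1.28, 36.82, 7)

# Change over time at one location: last reading minus first.
series = reports[(-1.28, 36.82)]
print(series[-1][1] - series[0][1])  # -5: the indicator has fallen
```

Reports submitted by mobile phone would simply populate such a store in quasi-real time, one record per incoming message.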

Basemapping and Decision-Support

Basemapping should not be considered distinct from decision-support processes and tools. This is particularly true of “Changed World Basemapping” which is meant to highlight in space and time the difference between the “Ideal World” and “Real World” basemaps. In other words, “Changed World Basemapping” should draw on geospatial analysis to create “heat maps” (and other relevant visualization techniques) to depict progress towards the ideal world in both time and space.

It is precisely because heat maps depict change that they should be considered as decision-support tools, particularly if these are viewed on a dynamic platform that permits the user to query and analyze the heat map. To this end, the purpose of basemapping is to combine M&E, conflict assessment and decision support by using a single integrated dynamic mapping tool.
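
The comparison underlying such a “Changed World” heat map can be sketched as a cell-by-cell difference between two gridded basemaps. The grids and scores below are invented for illustration:

```python
# Hypothetical sketch: a "Changed World" layer computed as the
# cell-by-cell gap between an "Ideal World" grid and a "Real World"
# grid. Cells where the gap shrinks over time indicate progress.
def changed_world(ideal, real):
    """Return a grid of gaps: 0 means the ideal state has been reached."""
    return [[i - r for i, r in zip(ideal_row, real_row)]
            for ideal_row, real_row in zip(ideal, real)]

# Indicator scores per grid cell (higher = closer to the ideal state).
ideal = [[10, 10], [10, 10]]
real  = [[ 4,  9], [10,  2]]
print(changed_world(ideal, real))  # [[6, 1], [0, 8]]
```

Rendering each gap value as a color intensity yields the heat map; re-running the comparison as new “real world” data arrive makes it dynamic.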

Basemap Challenges

As with any new idea and methodology, there are important challenges that need to be addressed. For example, projects are likely to have impact at different spatial levels. How do we capture cross-scale effects? In addition, how do we introduce (spatial) control variables in order to isolate intervening (spatial) variables? Finally, can control groups be introduced in order to provide compelling evidence of impact, or does this raise the same ethical issues it has in other fields?

Patrick Philippe Meier

Applying Technology to Crisis Mapping and Early Warning in Humanitarian Settings

The Harvard Humanitarian Initiative (HHI) just published a working paper I co-authored with my colleague Dr. Jennifer Leaning. Jennifer and I co-founded the Program on Crisis Mapping and Early Warning (CM&EW) back in 2007 with the generous support of Humanity United (HU).

During this two-year period, HU commissioned a series of internal working papers to inform their thinking in the field of crisis mapping. The report just published by HHI is one of the first internal papers we produced for HU. I am particularly indebted to my HHI colleague Enzo Bollettino for pushing this working paper series forward at HHI.

This inaugural working paper presents a conceptual framework that distinguishes between the “big world” and “small world” to assess the use of ICTs for communication in conflict zones. The study does so by delineating the multiple information pathways relevant for conflict early warning, crisis mapping and humanitarian response.

The second and third working papers in the series will address information collection and visual analysis respectively. Each working paper will highlight existing projects or case studies; draw on informative anecdotes; and/or relay the most recent thinking on future applications of ICTs.

This working paper series is not meant to be exhaustive since humanitarian tech as a field of study and practice is still in formative phases. The analysis that follows is simply one step forward in trying to understand where the field is headed. We very much welcome feedback and input from fellow colleagues in the community. Feel free to use the comments section below to share your thoughts.

The working paper is available on the website of HHI’s Crisis Mapping Program.

Patrick Philippe Meier