Tag Archives: Ushahidi

Location Based Mobile Alerts for Disaster Response in Haiti

Using demand-side and supply-side economics as an analogy for the use of information and communication technology (ICT) in disaster response may yield some interesting insights. Demand-side economics (a.k.a. Keynesian economics) argues that government policies should seek to “increase aggregate demand, thus increasing economic activity and reducing unemployment.” Supply-side economics, in contrast, argues that “overall economic well-being is maximized by lowering the barriers to producing goods and services.”

I’d like to take this analogy and apply it to the subject of text messaging in Haiti. The 4636 SMS system was set up in Haiti by the Emergency Information Service or EIS (video) with InSTEDD (video), Ushahidi (video) and the US State Department. The system allows for both demand-side and supply-side disaster response. Anyone in the country can text 4636 with their location and needs, i.e., demand-side. The system is also being used to supply some mobile phone users with important information updates, i.e., supply-side.

Both communication features are revolutionizing disaster response. Let’s take the supply-side approach first. EIS together with WFP, UNICEF, IOM, the Red Cross and others are using the system to send out SMS to all ~7,500 mobile phones (the number is increasing daily) with important information updates. Here are screen shots of the latest messages sent out from the EIS system:

The supply-side approach is possible thanks to the much lower (technical and financial) barriers to disseminating this information in near real-time. Providing some beneficiaries with this information can serve to reassure them that aid is on the way and to inform them where they can access various services thus maximizing overall economic well-being.

Ushahidi takes both a demand-side and supply-side approach by using the 4636 SMS system. 4636 is used to solicit text messages from individuals in urgent need. These SMS’s are then geo-tagged in near real-time on Ushahidi’s interactive map of Haiti. In addition, Ushahidi provides a feature for users to receive alerts about specific geographic locations. As the screen shot below depicts, users can specify the location and geographical radius they want to receive information on via automated email and/or SMS alerts; i.e., supply-side.

The Ushahidi Tech Team is currently working to allow users to subscribe to specific alert categories/indicators based on the categories/indicators already being used to map the disaster and humanitarian response in Haiti. See the Ushahidi Haiti Map for the list. This will enable subscribers to receive even more targeted location-based mobile alerts, thus further improving their situational awareness and enabling them to make more informed decisions about their disaster response activities.
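To make the mechanics concrete, here is a minimal sketch (in Python) of how such location- and category-based alert matching might work. The field names, coordinates and radius logic below are illustrative assumptions, not Ushahidi’s actual data model or API.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def matching_subscribers(report, subscribers):
    """Return the subscribers whose location radius and category filters
    match a newly mapped report. All field names are hypothetical."""
    matches = []
    for sub in subscribers:
        within_radius = haversine_km(report["lat"], report["lon"],
                                     sub["lat"], sub["lon"]) <= sub["radius_km"]
        wants_category = not sub["categories"] or report["category"] in sub["categories"]
        if within_radius and wants_category:
            matches.append(sub)
    return matches

# Example: a medical emergency reported near downtown Port-au-Prince.
report = {"lat": 18.54, "lon": -72.34, "category": "medical_emergency"}
subscribers = [
    {"contact": "+509-xxx-0001", "lat": 18.55, "lon": -72.34, "radius_km": 5, "categories": set()},
    {"contact": "ngo@example.org", "lat": 18.23, "lon": -72.53, "radius_km": 10, "categories": {"shelter"}},
]
for sub in matching_subscribers(report, subscribers):
    print("alert", sub["contact"])  # hand off to the SMS or email gateway
```

In practice, a matching step like this would run each time a new report is approved on the map, with the actual dispatch handled by the SMS or email gateway behind the system.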

Both the demand- and supply-side approaches are important. Together they provide an unprecedented ability to deliver location-based mobile alerts for disaster response; something not dissimilar to location-based mobile advertising, i.e., targeted communication based on personal preferences and location. The next step, therefore, is to make all supply-side text messages location based when necessary. For example, the following SMS broadcast would only go to mobile phone subscribers in Port-au-Prince:

It is important that both demand- and supply-side mobile alerts be location based when needed. Otherwise, we fall prey to Seeing Like a State.

“If we imagine a state that has no reliable means of enumerating and locating its population, gauging its wealth, and mapping its land, resources, and settlements, we are imagining a state whose interventions in that society are necessarily crude.”

In “Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed,” James Scott uses the following elegant analogy to emphasize the importance of locality.

“When a large freighter or passenger liner approaches a major port, the captain typically turns the control of his vessel over to a local pilot, who brings it into the harbor and to its berth. The same procedure is followed when the ship leaves its berth until it is safely out into the sea-lanes. This sensible procedure, designed to avoid accidents, reflects the fact that navigation on the open sea (a more “abstract” space) is the more general skill, while piloting a ship through traffic in a particular port is a highly contextual skill. We might call the art of piloting a “local and situated knowledge.”

An early lesson learned from the SMS deployment in Haiti is that more communication between the demand- and supply-side organizations needs to happen. We are sharing the 4636 number, so we are dependent on each other and need to ensure that changes to the system are open for discussion. This lack of coordination and joint outreach has been the single most important challenge, in my opinion. The captains are just not talking to the local pilots.

Patrick Philippe Meier

Crisis Information and The End of Crowdsourcing

When Wired journalist Jeff Howe coined the term crowdsourcing back in 2006, he did so in contradistinction to the term outsourcing and defined crowdsourcing as tapping the talent of the crowd. The tag line of his article was: “Remember outsourcing? Sending jobs to India and China is so 2003. The new pool of cheap labor: everyday people using their spare cycles to create content, solve problems, even do corporate R & D.”

If I had a tag line for this blog post it would be: “Remember crowdsourcing? Cheap labor to create content and solve problems using the Internet is so 2006. What’s new and cool today is the tapping of official and unofficial sources using new technologies to create and validate quality content.” I would call this allsourcing.

The word “crowdsourcing” is obviously a compound word that combines “crowd” and “sourcing”. But what exactly does “crowd” mean in this respect? And how has “sourcing” changed since Jeff introduced the term crowdsourcing over three-and-a-half years ago?

Let’s tackle the question of “sourcing” first. In his June 2006 article on crowdsourcing, Jeff provides case studies that all relate to novel applications of websites; perhaps the most famous example of crowdsourcing is Wikipedia, another website. But we’ve just recently seen some interesting uses of mobile phones to crowdsource information. See Ushahidi or Nathan Eagle’s talk at ETech09, for example:

So the word “sourcing” here goes beyond the website-based e-business approach that Jeff originally wrote about in 2006. The mobile technology component here is key. A “crowd” is not still. A crowd moves, especially in crisis, which is my area of interest. So the term “allsourcing” not only implies collecting information from all sources but also the use of “all” technologies to collect said information in different media.

As for the word “crowd”, I recently noted in this Ushahidi blog post that we may need some qualifiers—namely bounded and unbounded crowdsourcing. In other words, the term “crowd” can mean a large group of people (unbounded crowdsourcing) or perhaps a specific group (bounded crowdsourcing). Unbounded crowdsourcing implies that the identity of individuals reporting the information is unknown whereas bounded crowdsourcing would describe a known group of individuals supplying information.

The term “allsourcing” represents a combination of bounded and unbounded crowdsourcing coupled with new “sourcing” technologies. An allsourcing approach would combine information supplied by known/official sources and unknown/unofficial sources using the Web, e-mail, SMS, Twitter, Flickr, YouTube, etc. I think the future of crowdsourcing is allsourcing because allsourcing combines the strengths of both bounded and unbounded approaches while reducing the constraints inherent to each individual approach.

Let me explain. One important advantage of unbounded crowdsourcing is the ability to collect information from unofficial sources. I consider this an advantage over bounded crowdsourcing since more information can be collected this way. The challenge, of course, is how to verify the validity of said information. Verifying information is by no means a new process, but unbounded crowdsourcing has the potential to generate a lot more information than bounded crowdsourcing since the former does not censor unofficial content. This presents a challenge.

At the same time, bounded crowdsourcing has the advantage of yielding reliable information since the reports are produced by known/official sources. However, bounded crowdsourcing is constrained to a relatively small number of individuals doing the reporting. Obviously, these individuals cannot be everywhere at the same time. But if we combined bounded and unbounded crowdsourcing, we would see an increase in (1) overall reporting, and (2) in the ability to validate reports from unknown sources.

The increased ability to validate information is due to the fact that official and unofficial sources can be triangulated when using an allsourcing approach. Given that official sources are considered trusted sources, any reports from unofficial sources that match official reports can be considered more reliable, along with their associated sources. And so the combined allsourcing approach in effect enables the identification of new reliable sources even if the identity of these sources remains unknown.
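As a rough illustration of this triangulation logic, here is a minimal sketch in Python that scores unknown sources by how often their reports are corroborated by reports from official sources. The matching rule (same area, same event type, within a 12-hour window) and all field names are assumptions made for the example, not an existing Ushahidi or Swift River mechanism.

```python
from collections import defaultdict

def corroborated(report, official_reports, max_hours=12):
    """Does an unofficial report match any official report? The matching
    rule here (same area label, same event type, within max_hours) is
    deliberately simplistic and purely illustrative."""
    return any(
        report["area"] == o["area"]
        and report["type"] == o["type"]
        and abs(report["hour"] - o["hour"]) <= max_hours
        for o in official_reports
    )

def source_reliability(unofficial_reports, official_reports):
    """Score each unknown source by the share of its reports that are
    corroborated by official (trusted) reports."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in unofficial_reports:
        totals[r["source"]] += 1
        if corroborated(r, official_reports):
            hits[r["source"]] += 1
    return {source: hits[source] / totals[source] for source in totals}

official = [{"area": "Kibera", "type": "roadblock", "hour": 10}]
unofficial = [
    {"source": "+254-xxx-01", "area": "Kibera", "type": "roadblock", "hour": 11},
    {"source": "+254-xxx-01", "area": "Kibera", "type": "looting", "hour": 14},
    {"source": "+254-xxx-02", "area": "Eldoret", "type": "fire", "hour": 9},
]
print(source_reliability(unofficial, official))
# {'+254-xxx-01': 0.5, '+254-xxx-02': 0.0}
```

A source that repeatedly reports events later confirmed by trusted monitors would accumulate a high score and could then itself be treated as more reliable, which is the “new reliable sources” effect described above.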

Ushahidi is a good example of an allsourcing platform. Organizations can use Ushahidi to capture both official and unofficial sources using all kinds of new sourcing technologies. Allsourcing is definitely something new, so there’s still much to learn. I have a hunch that there is huge potential. Jeff Howe titled his famous article in Wired “The Rise of Crowdsourcing.” Will a future edition of Wired include an article on “The Rise of Allsourcing”?

Patrick Philippe Meier

Three Common Misconceptions About Ushahidi

Cross posted on Ushahidi

Here are three interesting misconceptions about Ushahidi and crowdsourcing in general:

  1. Ushahidi takes the lead in deploying the Ushahidi platform
  2. Crowdsourced information is statistically representative
  3. Crowdsourced information cannot be validated

Let’s start with the first. We do not take the lead in deploying Ushahidi platforms. In fact, we often learn about new deployments second-hand via Twitter. We are a non-profit tech company and our goal is to continue developing innovative crowdsourcing platforms that cater to the growing needs of our current and prospective partners. We provide technical and strategic support when asked but otherwise you’ll find us in the backseat, which is honestly where we prefer to be. Our comparative advantage is not in deployment. So the credit for Ushahidi deployments really goes to the numerous organizations that continue to implement the platform in new and innovative ways.

On this note, keep in mind that the first downloadable Ushahidi platform was made available just this May, and the second version just last week. So implementing organizations have been remarkable test pilots, experimenting and learning on the fly without recourse to any particular manual or documented best practices. Most election-related deployments, for example, were even launched before May, when platform stability was still an issue and the code was still being written. So our hats go off to all the organizations that have piloted Ushahidi and continue to do so. They are the true pioneers in this space.

Also keep in mind that these organizations rarely had more than a month or two of lead-time before scheduled elections, like in India. If all of us have learned anything from watching these deployments in 2009, it is this: the challenge is not one of technology but one of election awareness and voter education. So we’re impressed that several organizations are already customizing the Ushahidi platform for elections that are more than 6-12 months away. These deployments will definitely be a first for Ushahidi and we look forward to learning all we can from implementing organizations.

The second misconception, “crowdsourced information is statistically representative,” often crops up in conversations around election monitoring. The problem is largely one of language. The field of election monitoring is hardly new. Established organizations have been involved in election monitoring for decades and have gained a wealth of knowledge and experience in this area. For these organizations, the term “election monitoring” has specific connotations, such as random sampling and statistical analysis, verification, validation and accredited election monitors.

When partners use Ushahidi for election monitoring, I think they mean something different. What they generally mean is citizen-powered election monitoring aided by crowdsourcing. Does this imply that crowdsourced information is statistically representative of all the events taking place across a given country? Of course not: I’ve never heard anyone suggest that crowdsourcing is equivalent to random sampling.

Citizen-powered election monitoring is about empowering citizens to take ownership over their elections and to have a voice. Indeed, elections do not start and stop at the polling booth. Should we prevent civil society groups from crowdsourcing crisis information on the basis that their reports may not be statistically representative? No. This is not our decision to make and the data is not even meant for us.

Another language-related problem has to do with the term “crowdsourcing”. The word “crowd” here can literally mean anyone (unbounded crowdsourcing) or a specific group (bounded crowdsourcing) such as designated election monitors. If these official monitors use Ushahidi and they are deliberately positioned across a country for random sampling purposes, then this is no different from standard and established approaches to election monitoring. Bounded crowdsourcing can be statistically representative.

The third misconception about Ushahidi has to do with the tradeoff between unbounded crowdsourcing and the validation of said crowdsourced information. One of the main advantages of unbounded crowdsourcing is the ability to collect a lot of information from a variety of sources and media—official and nonofficial sources—in near real time. Of course, this means that a lot more information can be reported at once, which can make the validation of said information a challenging process.

A common reaction to this challenge is to dismiss crowdsourcing altogether because unofficial sources may be unreliable or, at worst, deliberately misleading. Some organizations thus find it easier to write off all unofficial content because of these concerns. Ushahidi takes a different stance. We recognize that user-generated content is not about to disappear any time soon and that a lot of good can come out of such content, not least because official information can too easily become proprietary and guarded instead of shared.

So we’re not prepared to write off user-generated content because validating information happens to be challenging. Crowdsourcing crisis information is our business and so is (obviously) the validation of crowdsourced information. This is why Ushahidi is fully committed to developing Swift River. Swift is a free and open source platform that validates crowdsourced information in near real-time. Follow the Ushahidi blog for exciting updates!

Crowdsourcing for Peace Mapping

Lynda Gratton at the London Business School gave one of the best Keynote speeches that I’ve heard all year. Her talk was a tour de force on how to catalyze innovation and one of her core recommendations really hit home for me: “If you really want to be at the cutting edge of innovation, then you better make sure that 20% of your team is under the age of 27.” Lynda upholds this principle in all her business ventures.

I find this absolutely brilliant, which explains why I prefer teaching undergraduate seminars and why I always try to keep in touch with former students. Without fail, they continue to be an invaluable source of inspiration and innovative thinking.

A former student of mine, Adam White, recently introduced me to another undergraduate student at Tufts University, Rachel Brown. Rachel is a perfect example of why I value interacting with bright young minds. She wants to return to Kenya next year to identify and connect local peace initiatives in Nairobi in preparation for the 2012 elections.

Rachel was inspired by the story of Solo 7, a Kenyan graffiti artist in Kibera who drew messages of peace throughout the slum as a way to prevent violence from escalating shortly after the elections. “Imagine,” she said, “if we could identify all the Solo 7’s of Nairobi, all the individuals and local communities engaged in promoting peace.”

I understood at once why Adam recommended I meet with Rachel: Ushahidi.

I immediately told Rachel about Ushahidi, a free and open source platform that uses crowdsourcing to map crisis information. I suggested she consider using the platform to crowdsource and map local peace initiatives across Kenya, not just Nairobi. I’ve been so focused on crisis mapping that I’ve completely ignored my previous work in the field of conflict early warning. An integral part of this field is to monitor indicators of conflict and cooperation.

There are always pockets of cooperation no matter how dire a conflict is. Even in Nazi Germany and the Rwandan genocide we find numerous stories of people risking their lives to save others. The fact is that most people, most of the time, in most places choose cooperation over conflict. If that weren’t the case, we’d be living in a state of total war as described by Clausewitz.

If we only monitor indicators of war and violence, then that’s all we’ll see. Our crisis maps only depict a small part of reality. It is incredibly important that we also map indicators of peace and cooperation. By identifying the positive initiatives that exist before and during a crisis, we automatically identify multiple entry points for intervention and a host of options for conflict prevention. If we only map conflict, then we may well identify where most of the conflict is taking place, but we won’t necessarily know who in the area might be best placed to intervene.

Documenting peace and cooperation also has positive psychological effects. How often do we lament the fact that the only kind of news available in the media is bad news? We turn on CNN or BBC and there’s bad news—sometimes breaking news of bad news. It’s easy to get depressed and to assume that only bad things happen. But violence is actually very rare statistically speaking. The problem is that we don’t systematically document peace, which means that our perceptions are completely skewed.

Take the following anecdote, which dates back several years to when I taught my first undergraduate course on conflict early warning systems. I was trying to describe the important psychological effects of documenting peace and cooperation by using the example of the London underground (subway).

If you’ve been to London, you’ve probably experienced the frequent problems and delays with the underground system. And like most other subway systems, announcements are made to inform passengers of annoying delays and whatnot. But unlike other subway systems I’ve used, the London underground also makes announcements to let passengers know that all lines are currently running on time.

Now let’s take this principle and apply it to Rachel’s project proposal combined with Ushahidi. Imagine if she were to promote the crowdsourcing of local peace initiatives all across Kenya. She could work with national and local media to get the word out. Individuals could send text messages to report what kinds of peace activities they are involved in.

This would allow Rachel and others to follow up on select text messages to learn more about each activity. In fact, she could use Ushahidi’s customizable reporting forms to ask individuals texting in information to elaborate on their initiatives. Rachel wants to commit no less than a year to this project, which should give her and colleagues plenty of time to map hundreds of local peace initiatives across Kenya.

Just imagine a map covered with hundreds of doves or peace dots representing local peace initiatives. What a powerful image. The Peace Map would be public, so that anyone with Internet access could learn about the hundreds of different peace initiatives in Kenya. Kenyan peace activists themselves could make use of this map to learn about creative approaches to conflict prevention and conflict management. They could use Ushahidi’s subscription feature to receive automatic updates when a new peace project is reported in their neighborhood, town or province.

When peace activists (and anyone else, for that matter) find peace projects they like on Ushahidi’s Peace Map, they can “befriend” that project, much like the friend feature in Facebook. That way they can receive updates from a particular project via email, SMS or even Twitter. These updates could include information on how to get involved. When two projects (or two individuals) are connected this way, Ushahidi could depict the link on the map with a line connecting the two nodes.

Imagine if this Peace Map were then shown on national television in the lead up to the elections. Not only would there be hundreds of peace dots representing individual peace efforts, but many of these would be linked, depicting a densely connected peace network.

The map could also be printed in Kenya’s national and local newspapers. I think a Peace Map of Kenya would send a powerful message that Kenyans want peace and won’t stand for a repeat of the 2007 post-election violence. When the elections do happen, this Peace Map could be used operationally to quickly respond to any signs of escalating tensions.

Rachel could use the Peace Map to crowdsource reports of any election violence that might take place. Local peace activists could use Ushahidi’s subscription feature to receive alerts of violent events taking place in their immediate vicinity. They would receive these via email and/or SMS in near real-time.

This could allow peace activists to mobilize and quickly respond to escalating signs of violence, especially if preparedness measures and contingency plans are already in place. This is what I call fourth generation conflict early warning and early response (4G). See this blog post for more on 4G systems. This is where The Third Side framework for conflict resolution meets the power of new technology platforms like Ushahidi.

It is when I meet inspiring students like Rachel that I wish I were rich so I could just write checks to turn innovative ideas into reality. The next best thing I can do is to work with Rachel and her undergraduate friends to write up a strong proposal. So if you want to get involved or you know a donor, foundation or a philanthropist who might be interested in funding Rachel’s project, please do email me so I can put you directly in touch with her: Patrick@iRevolution.net.

In the meantime, if you’re about to start a project, remember Lynda’s rule of thumb: make sure 20% of your team is under 27. You won’t regret it.

Patrick Philippe Meier

Twitter vs. Tyrants: Ushahidi and Data Verification

My colleague Chris Doten asked me to suggest panelists for this congressional briefing on the role of new media in authoritarian states. I blogged about the opening remarks of each panelist here. But the key issues really came to the fore during the Q/A session.

These issues addressed Ushahidi, data validation, security and education. This blog post addresses the issues raised around Ushahidi and data validation. The text below includes my concerns with respect to a number of comments and assumptions made by some of the panelists.

Nathan Freitas (NYU):

  • It’s [Ushahidi] a crisis-mapping platform that has grown out of the movement in Africa after the Kenyan elections. It’s akin to a blog system, but for mapping crisis, and what’s unique about it is it allows you to capture unverified and verified information.

Me: Many thanks to Nathan for referencing Ushahidi in the Congressional Briefing. Nathan’s comments are spot on. One of the unique features of Ushahidi is that the platform allows for the collection of both unverified and verified information.

But what’s the difference between these two types of information in the first place? In general, unverified information simply means information reported by “unknown sources” whereas verified tends to be associated with known sources of reporting, such as official election monitors.

The first and most important point to understand is that both approaches to information collection are compatible and complementary. Official election monitors, like professional journalists, cannot be everywhere at the same time. The “crowd” in crowdsourcing, on the other hand, has a comparative advantage in this respect (see supporting empirical evidence here).

Clearly, the crowd has many more eyes and ears than any official monitoring network ever will. So discounting any and all information originating from the crowd is hard to justify. One would have to entirely dismiss the added value of all the Tweets, photos and YouTube footage generated by the “crowd” during the post-election violence in Iran.

  • And what’s interesting, I think we’ve seen the first round, the 1.0 of a lot of this election monitoring. As these systems come in place, they’ll be running all the time, and they’ll be used in local elections and in state-level elections, and the movement for – these tools will be easier, just like blogs. Everyone blogs; in a few years, everyone’s got their own crisis-mapping platform.

Me: What a great comment and indeed one of Ushahidi’s goals: for everyone to have their own crisis mapping platform in the near future. That’s what I call an iRevolution. Nathan’s point about the first round of these systems is also really important. The first version of the Ushahidi platform only became downloadable in May of this year; that’s just 5 months ago. We’re just getting started.

Daniel Calingaert (Freedom House):

  • [T]here’s a very critical component [...] often overlooked in these kinds of programs: The information needs to be verified. It is useless or even counterproductive to simply be passing around rumors, and rumor-mongering is very big in elections, and especially Election Day.

Me: Daniel certainly makes an important point, although I personally don’t think that the need for verification is often overlooked in election monitoring. In any case, one should note that rumors themselves need to be monitored and documented before, during and after elections. To be sure, if the information collection protocol is too narrow (say, only official monitors are allowed to submit evidence), then rumors (and other important information) may simply be dismissed and go unreported even though they could fuel conflict.

  • So it’s important as part of the structure that you have qualified people to sort through the information and cull what is credible reporting from citizens from very unsubstantiated information.

Me: Honestly, I’m always a little wary when I read comments along the lines of “you need to have qualified people” or “only experts should carry out the task.” Why? Because they tend to dismiss the added value that hundreds of bystanders can bring to the table. As Chris Spence noted about their operations in Moldova, NDI’s main partner organization “was harassed and kicked out of the country” while “the NDI program [was] largely shut down.” So who’s left to monitor? Exactly.

As my colleague Ory Okolloh recently noted, “Kenya had thousands election observers including many NDI monitors.” So what happened? “When it came to sharing their data as far as their observations at the polling everyone balked especially the EU and IRI because it was too “political”. IRI actually released their data almost 8 months later and yet they were supposed to be the filter.”

And so, Okolloh adds, “At a time when some corroboration could have prevented bloodshed, the ‘professionals’ were nowhere to be seen, so if we are talking about verification, legitimacy, and so on … lets start there.”

Chris Spence (NDI):

  • Monitoring groups – and this kind of gets to the threshold questions about Ushahidi and some of the platforms where you’re getting a lot of interesting information from citizens, but at the end of the day, you’ve really got to decide, have thresholds been reached which call into question the legitimacy of the process? And that’s really the political question that election observers and the groups that we work with have to grapple with.

Me: An interesting comment from NDI but one that perplexes me. I don’t recall users of Ushahidi suggesting that they should be the sole source of information to qualify for threshold points. Again, the most important point to understand is that different approaches to information collection can complement each other in important ways. We need to think less in linear terms and more in terms of information ecosystems with various ecologies of information sources.

  • And there’s so much involved in that methodology that one of the concerns about the crisis mapping or the crowdsourcing [sic] is that the public can then draw interpretations about the outcome of elections without necessarily having the filter required. You know, you can look at a map of some city and see four or five or 10 or several violations of election law reported by citizens who – you know, you have to deal with the verification problem – but is that significant in the big picture?

Me: OK, first of all, let’s not confuse “crisis mapping” and “crowdsourcing” or use the terms interchangeably. Second, individuals for the most part are not thick. The maps can clearly state that the information represented is unfiltered and unverified and hence may be misleading. Third (apologies for repeating myself), none of the groups using Ushahidi claim that the data collected is representative of the bigger picture. This gets to the issue of significance.

And fourth (more repeating), no one I know has suggested we go with one information feed, i.e., one source of information. I’m rather surprised that Chris Spence never brings up the importance of triangulation even though he acknowledges in his opening remarks that there are projects (like Swift River) that are specifically based on triangulation mechanisms to validate crowdsourced information.

Crowdsourced information can be an important repository for triangulation. The more crowdsourced information we have, the more self-triangulation is possible and the more this data can be used as a control mechanism for officially collected information.

Yes, there are issues around verification of data and an Ushahidi powered map may not be random enough for statistical accuracy but, as my colleague Ory Okolloh notes, “the data can point to areas/issues that need further investigation, especially in real-time.”

  • [I]t’s really important that, as these tools get better – and we like the tools; Ushahidi and the other platforms are great – but we need to make a distinction between what can be expected out of a professional monitoring exercise and what can be drawn from unsolicited inputs from citizens. And I think there are good things that can be taken from both.

Me: Excellent, I couldn’t agree more. How about organizing a full-day workshop or barcamp on the role of new technologies in contemporary election monitoring? I think this would provide an ideal opportunity to hash out the important points raised by Nathan, Daniel and Chris.

Patrick Philippe Meier

Evolving a Global System of Info Webs

I’ve already blogged about what an ecosystem approach to conflict early warning and response entails. But I have done so with a country focus rather than thinking globally. This blog post applies a global perspective to the ecosystem approach given the proliferation of new platforms with global scalability.

Perhaps the most apt analogy here is one of food webs where the food happens to be information. Organisms in a food web are grouped into primary producers, primary consumers and secondary consumers. Primary producers such as grass harvest an energy source such as sunlight that they turn into biomass. Herbivores are primary consumers of this biomass while carnivores are secondary consumers of herbivores. There is thus a clear relationship known as a food chain.

This is an excellent video visualizing food web dynamics produced by researchers affiliated with the Santa Fe Institute (SFI):

Our information web (or Info Web) is also composed of multiple producers and consumers of information each interlinked by communication technology in increasingly connected ways. Indeed, primary producers, primary consumers and secondary consumers also crawl and dynamically populate the Info Web. But the shock of the information revolution is altering the food chains in our ecosystem. Primary consumers of information can now be primary producers, for example.

At the smallest unit of analysis, individuals are the primary producers of information. The mainstream media, social media, natural language parsing tools, crowdsourcing platforms, etc., arguably comprise the primary consumers of that information. Secondary consumers are larger organisms such as the global Emergency Information Service (EIS) and the Global Impact and Vulnerability Alert System (GIVAS).

These newly forming platforms are at different stages of evolution. EIS and GIVAS are relatively embryonic while the Global Disaster Alert and Coordination Systems (GDACS) and Google Earth are far more evolved. A relatively new organism in the Info Web is the UAV as exemplified by ITHACA. The BrightEarth Humanitarian Sensor Web (SensorWeb) is further along the information chain while Ushahidi’s Crisis Mapping platform and the Swift River driver are more mature but have not yet deployed as a global instance.

InSTEDD’s GeoChat, Riff and Mesh4X solutions have already iterated through a number of generations. So have ReliefWeb and the Humanitarian Information Unit (HIU). There are of course additional organisms in this ecosystem, but the above list should suffice to demonstrate my point.

What if we connected these various organisms to catalyze a super organism? A Global System of Systems (GSS)? Would the whole—a global system of systems for crisis mapping and early warning—be greater than the sum of its parts? Before we can answer this question in any reasonable way, we need to know the characteristics of each organism in the ecosystem. These organisms represent the threads that may be woven into the GSS, a global web of crisis mapping and early warning systems.

Global System of Systems

Emergency Information Service (EIS) is slated to be a unified communications solution linking citizens, journalists, governments and non-governmental organizations in a seamless flow of timely, accurate and credible information—even when local communication infrastructures are rendered inoperable. This feature will be made possible by utilizing SMS as the communications backbone of the system.

In the event of a crisis, the EIS team would sift, collate, make sense of and verify the myriad of streams of information generated by a large humanitarian intervention. The team would gather information from governments, local media, the military, UN agencies and local NGOs to develop reporting that will be tailored to the specific needs of the affected population and translated into local languages. EIS would work closely with local media to disseminate messages of critical, life saving information.

Global Impact and Vulnerability Alert System (GIVAS) is being designed to closely monitor vulnerabilities and accelerate communication between the time a global crisis hits and when information reaches decision makers through official channels. The system is mandated to provide the international community with early, real-time evidence of how a global crisis is affecting the lives of the poorest and to provide decision-makers with real time information to ensure that decisions take the needs of the most vulnerable into account.

BrightEarth Humanitarian Sensor Web (SensorWeb) is specifically designed for UN field-based agencies to improve real time situational awareness. The dynamic mapping platform enables humanitarians to easily and quickly map infrastructure relevant for humanitarian response such as airstrips, bridges, refugee camps, IDP camps, etc. The SensorWeb is also used to map events of interest such as cholera outbreaks. The platform leverages mobile technology as well as social networking features to encourage collaborative analytics.

Ushahidi integrates web, mobile and dynamic mapping technology to crowdsource crisis information. The platform uses FrontlineSMS and can be deployed quickly as a crisis unfolds. Users can visualize events of interest on a dynamic map that also includes an animation feature to visualize the reported data over time and space.

Swift River is under development but designed to validate crowdsourced information in real time by combining machine learning for predictive tagging with human crowdsourcing for filtering purposes. The purpose of the platform is to create veracity scores that denote the probability of an event being true when reported across several media such as Twitter, online news, SMS, Flickr, etc.
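Since Swift River’s scoring logic is still being developed and is not spelled out here, the following is only a minimal sketch, in Python, of how a veracity score could in principle be assembled: each independent confirmation from a given channel reduces the probability that the event is spurious, using per-channel weights. The channel weights, the prior and the independence assumption are all illustrative choices, not Swift River’s actual method.

```python
# Hypothetical per-channel weights: how strongly one independent confirmation
# from that channel counts as evidence (illustrative numbers only).
CHANNEL_WEIGHT = {"official": 0.9, "online_news": 0.6, "flickr": 0.4, "twitter": 0.3, "sms": 0.3}

def veracity_score(confirmations, prior=0.1):
    """Noisy-OR style combination: start from a low prior that the event is
    real, then let each (assumed independent) confirmation shrink the
    probability that the event is NOT real."""
    p_not_real = 1.0 - prior
    for channel in confirmations:
        p_not_real *= 1.0 - CHANNEL_WEIGHT.get(channel, 0.2)
    return 1.0 - p_not_real

# An event reported by two tweets, one SMS and one online news article:
print(round(veracity_score(["twitter", "twitter", "sms", "online_news"]), 2))  # ~0.88
```

A real system would also have to discount near-duplicate reports (retweets, forwarded SMS) before treating them as independent confirmations, which is precisely where the machine-learning and human-filtering layers would come in.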

GeoChat and Mesh4X could serve as the nodes connecting the above platforms in dynamic ways. Riff could be made interoperable with Swift River.

Can such a global Info Web be catalyzed? The question hinges on several factors, the most important of which are probably awareness and impact. The more these individual organisms know about each other, the better picture they will have of the potential synergies between their efforts, and the more likely they are to find incentives to collaborate. This is one of the main reasons I am co-organizing the first International Conference on Crisis Mapping (ICCM 2009) next week.

Patrick Philippe Meier

Accurate Crowdsourcing for Human Rights

This is a short video of the presentation I will be giving at the Leir Conference on The Next Generation of Human Rights. My talk focuses on the use of digital technologies to leverage the crowdsourcing and crowdfeeding of human rights information. I draw on Ushahidi’s Swift River initiative to describe how crowdsourced information can be auto-validated.

Here’s a copy of the agenda (PDF) along with more details. This Leir Conference aims to bring together world-leading human rights practitioners, advocates, and funders for discussions in an intimate setting. Three panels will be convened, with a focus on audience discussion with the panelists. The topics will include:

  1. Trends in Combating Human Rights Abuses;
  2. Human Rights 2.0: The Next Generation of Human Rights Organizations;
  3. Challenges and Opportunities of Technology for Human Rights.

I will be presenting on the third panel together with colleagues from Witness.org and The Diarna Project. For more details on the larger subject of my presentation, please see this blog post on peer-producing human rights.

The desired results of this conference are to allow participants to improve advocacy, funding, or operations through collaborative efforts and shared ideas in a natural setting.

Patrick Philippe Meier

Crisis Mapping for Monitoring & Evaluation

I was pleasantly surprised when local government ministry representatives in the Sudan (specifically Kassala) directly requested training on how to use the UNDP’s Threat and Risk Mapping Analysis (TRMA) platforms to monitor and evaluate their own programs.

Introduction

The use of crisis mapping for monitoring and evaluation (M&E) had cropped up earlier this year in separate conversations with the Open Society Institute (OSI) and MercyCorps. The specific platform in mind was Ushahidi, and the two organizations were interested in exploring the possibility of using the platform to monitor the impact of their funding and/or projects.

As far as I know, however, little to no rigorous research has been done on the use of crisis mapping for M&E. The field of M&E is far more focused on change over time than over space. Clearly, however, post-conflict recovery programs are implemented in both time and space. Furthermore, any conflict-sensitive programming must necessarily take spatial factors into account.

Cartametrix

The only reference to mapping for M&E that I was able to find online was one paragraph in relation to the Cartametrix 4D map player. Here’s the paragraph (which I have split in two to ease legibility) and below a short video demo I created:

“The Cartametrix 4D map player is visually compelling and fun to use, but in terms of tracking results of development and relief programs, it can be much more than a communications/PR tool. Through analyzing impact and results across time and space, the 4D map player also serves as a good program management tool. The map administrator has the opportunity to set quarterly, annual, and life of project indicator targets based on program components, regions, etc.

Tracking increases in results via the 4D map players, gives a program manager a sense of the pace at which targets are being reached (or not). Filtering by types of activities also provides for a quick and easy way to visualize which types of activities are most effectively resulting in achievements toward indicator targets. Of course, depending on the success of the program, an organization may or may not want to make the map (or at least all facets of the map) public. Cartametrix understands this and is able to create internal program management map applications alongside the publicly available map that doesn’t necessarily present all of the available data and analysis tools.”

Mapping Baselines

I expect that it will only be a matter of time until the M&E field recognizes the added value of mapping. Indeed, why not use mapping as a contributing tool in the M&E process, particularly within the context of formative evaluation?

To be sure, baseline data can be collected, time-stamped and mapped. Mobile phones further facilitate this spatially decentralized process of information collection. Once baseline data is collected, the organization would map the expected outcomes of the projects they’re rolling out, and their estimated impact dates, against this baseline data.

The organization would then implement local development and/or conflict management programs in certain geographical areas and continue to monitor local tensions by regularly collecting geo-referenced data on the indicators that said projects are set to influence. Again, these trends would be compared to the initial baseline.

These programs could then be mapped and data on local tensions animated over time and space. The dynamic mapping would provide an intuitive and compelling way to demonstrate impact (or the lack thereof) in the geographical areas where the projects were rolled out as compared to other similar areas with no parallel projects. Furthermore, using spatial analysis for M&E could also be a way to carry out a gap analysis and to assess whether resources are being allocated efficiently in more complex environments.
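To make this concrete, here is a minimal sketch in Python of the comparison just described: the change in a geo-referenced tension indicator between the baseline and a follow-up round, contrasted between project areas and comparable areas without projects (a simple difference-in-differences). The area names, scores and coordinates are invented for illustration.

```python
def mean(values):
    return sum(values) / len(values) if values else float("nan")

def average_change(records, areas):
    """Average change in the tension indicator between the baseline and
    follow-up rounds, restricted to the given set of areas."""
    deltas = [r["followup"] - r["baseline"] for r in records if r["area"] in areas]
    return mean(deltas)

# Hypothetical geo-referenced records: one tension score per area per round.
# The lat/lon fields are what would be animated on the map over time.
records = [
    {"area": "A", "lat": 15.45, "lon": 36.40, "baseline": 0.62, "followup": 0.41},
    {"area": "B", "lat": 15.60, "lon": 36.55, "baseline": 0.58, "followup": 0.39},
    {"area": "C", "lat": 15.10, "lon": 36.20, "baseline": 0.60, "followup": 0.57},
    {"area": "D", "lat": 15.30, "lon": 36.70, "baseline": 0.55, "followup": 0.54},
]
project_areas, comparison_areas = {"A", "B"}, {"C", "D"}

effect = average_change(records, project_areas) - average_change(records, comparison_areas)
print(f"difference-in-differences estimate of program impact: {effect:.2f}")  # -0.18
```

A negative value here would suggest that tensions fell faster in the project areas than in the comparison areas, which is exactly the kind of contrast the animated map would make visible.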

Next Steps

One of my tasks at TRMA is to develop a short document on using crisis mapping for M&E so if anyone has any leads on applied research in this area, I would be much obliged.

Patrick Philippe Meier

Updated: Humanitarian Situation Risk Index (HSRI)

The Humanitarian Situation Risk Index (HSRI) is a tool created by UN OCHA in Colombia. The objective of HSRI is to determine the probability that a humanitarian situation occurs in each of the country’s municipalities in relation to the ongoing complex emergency. HSRI’s overall purpose is to serve as a “complementary analytical tool in decision-making allowing for humanitarian assistance prioritization in different regions as needed.”

UPDATE: I actually got in touch with the HSRI group back in February 2009 to let them know about Ushahidi and they have since “been running some beta-testing on Ushahidi, and may as of next week start up a pilot effort to organize a large number of actors in northeastern Colombia to feed data into [their] on-line information system.” In addition, they “plan to move from a logit model calculating probability of a displacement situation for each of the 1,120 Colombian municipalities, to cluster analysis, and have been running the identical model on data [they] have for confined communities.”


HSRI uses statistical tools (principal component analysis and the Logit model) to estimate the risk indexes. The indexes range from 0 to 1, where 0 is no risk and 1 is maximum risk. The team behind the project clearly state that the tool does not indicate the current situation in each municipality given that the data is not collected in real-time. Nor does the tool quantify the precise number of persons at risk.
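For readers unfamiliar with the method, here is a minimal sketch in Python of how a logit model converts municipality-level indicators into a risk index between 0 and 1. The variable names and coefficients are invented for illustration; OCHA’s actual model specification and input data are not reproduced here.

```python
from math import exp

# Hypothetical coefficients for illustration only.
INTERCEPT = -2.0
COEFFICIENTS = {
    "armed_actor_presence": 1.4,
    "prior_displacement_rate": 2.1,
    "coca_cultivation": 0.8,
}

def risk_index(municipality):
    """Logit (logistic regression) model: a linear score pushed through the
    logistic function, yielding a risk index between 0 (no risk) and 1
    (maximum risk)."""
    score = INTERCEPT + sum(
        COEFFICIENTS[name] * municipality.get(name, 0.0) for name in COEFFICIENTS
    )
    return 1.0 / (1.0 + exp(-score))

example = {"armed_actor_presence": 1.0, "prior_displacement_rate": 0.6, "coca_cultivation": 0.0}
print(round(risk_index(example), 2))  # ~0.66
```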

The data used to estimate the Humanitarian Situation Risk Index “mostly comes from official sources, due to the fact that the vast majority of data collected and processed are from State entities, and in the remaining cases the data is from non-governmental or multilateral institutions.” The following table depicts the data collected.


I’d be interested to know whether the project will move towards doing any temporal analysis of the data. This would enable trend analysis, which could more directly inform decision-making than a static map representing static data. One other thought might be to complement this “baseline” type data with event-data by using mobile phones and a “bounded crowdsourcing” approach a la Ushahidi.

Patrick Philippe Meier

Armed Conflict and Location Event Dataset (ACLED)

I joined the Peace Research Institute, Oslo (PRIO) as a researcher in 2006 to do some data development work on a conflict dataset and to work with Norway’s former Secretary of State on assessing the impact of armed conflict on women’s health for the Ministry of Foreign Affairs (MFA).

I quickly became interested in a related PRIO project that had recently begun, called the “Armed Conflict and Location Event Dataset,” or ACLED. Having worked with conflict event-datasets as part of operational conflict early warning systems in the Horn, I immediately took interest in the project.

While I have referred to ACLED in a number of previous blog posts, two of my main criticisms (until recently) were (1) the lack of data on recent conflicts; and (2) the lack of an interactive interface for geospatial analysis, or at least more compelling visualization platform.

Introducing SpatialKey

Independently, I came across Universal Mind back in November of last year when Andrew Turner at GeoCommons made a reference to the group’s work in his presentation at an Ushahidi meeting. I featured one of the group’s products, SpatialKey, in my recent video primer on crisis mapping.

As it turns out, ACLED is now using SpatialKey to visualize and analyze some of its data. So the team has definitely come a long way from using ArcGIS and Google Earth, which is great. The screenshot below, for example, depicts the ACLED data on Kenya’s post-election violence using SpatialKey.


If the Kenya data is not drawn from the Ushahidi deployment, then this could be an exciting research opportunity to compare both datasets using visual analysis and applied geo-statistics. I write “if” because PRIO, somewhat surprisingly, has not made the Kenya data available. They are usually very transparent, so I will follow up with them and hope to get the data. Anyone interested in co-authoring this study?
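If both datasets do become available, a first comparison could be as simple as the following Python sketch: snap each event to a coarse grid cell and correlate per-cell event counts across the two datasets. The grid size, the correlation measure and the sample coordinates are arbitrary illustrative choices, not an agreed methodology.

```python
from collections import Counter

def grid_cell(lat, lon, size=0.5):
    """Snap a coordinate to a roughly half-degree grid cell."""
    return (round(lat / size) * size, round(lon / size) * size)

def cell_counts(events, size=0.5):
    return Counter(grid_cell(e["lat"], e["lon"], size) for e in events)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else float("nan")

def dataset_agreement(events_a, events_b, size=0.5):
    """Correlation between the per-cell event counts of two event datasets."""
    a, b = cell_counts(events_a, size), cell_counts(events_b, size)
    cells = sorted(set(a) | set(b))
    return pearson([a[c] for c in cells], [b[c] for c in cells])

# Toy example with two Nairobi-area events and one Kisumu-area event in each dataset:
acled = [{"lat": -1.28, "lon": 36.82}, {"lat": -1.29, "lon": 36.83}, {"lat": -0.10, "lon": 34.75}]
ushahidi = [{"lat": -1.30, "lon": 36.80}, {"lat": -1.27, "lon": 36.81}, {"lat": -0.09, "lon": 34.77}]
print(dataset_agreement(acled, ushahidi))  # ≈ 1.0 for this toy example
```

A more serious comparison would obviously control for reporting periods, event definitions and de-duplication, but even a crude overlap measure like this would show whether crowdsourced and researcher-coded events cluster in the same places.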

Academics Get up To Speed

It’s great to see ACLED developing conflict data for more recent conflicts. Data on Chad, Sudan and the Central African Republic (CAR) is also depicted using SpatialKey, but again the underlying spreadsheet data regrettably does not appear to be available. If the data were public, then the UN’s Threat and Risk Mapping Analysis (TRMA) project might well have much to gain from using the data operationally.


Data Hugging Disorder

I’ll close with just one—perhaps unwarranted—concern, since I still haven’t heard back from ACLED about accessing their data. As academics become increasingly interested in applying geospatial analysis to recent or even current conflicts by developing their own datasets (a very positive move for sure), will they keep their data to themselves until they’ve published an article in a peer-reviewed journal, which can often take a year or more?

To this end I share the concern that my colleague Ed Jezierski from InSTEDD articulated in his excellent blog post yesterday: “Academic projects that collect data with preference towards information that will help to publish a paper rather than the information that will be the most actionable or help community health the most.” Worse still, however, would be academics collecting data very relevant to the humanitarian or human rights community and not sharing that data until their academic papers are officially published.

I don’t think there needs to be competition between scholars and like-minded practitioners. There are increasingly more scholar-practitioners who recognize that they can contribute their research and skills to the benefit of the humanitarian and human rights communities. At the same time, the currency of academia remains the number of peer-reviewed publications. But humanitarian practitioners could simply sign an agreement such that anyone using the data for humanitarian purposes cannot publish any analysis of said data in a peer-reviewed forum.

Thoughts?

Patrick Philippe Meier