Monthly Archives: December 2009

Top 10 Posts on iRevolution in 2009

Here are the top 10 most popular posts on iRevolution in 2009:

  1. How To Communicate Securely in Repressive Environments
  2. A Brief History of Crisis Mapping
  3. Crisis Mapping Kenya’s Election Violence
  4. Video Introduction to Crisis Mapping
  5. Impact of ICTs on Repressive Regimes
  6. Proposing the Field of Crisis Mapping
  7. Mobile Banking for the Bottom of the Pyramid
  8. Digital Resistance: Between Digital Activism and Civil Resistance
  9. Moving Forward with Swift River
  10. Why Dictators Love the Web or: How I Learned to Stop Worrying and Say So What?!

Note the contrasting titles of posts #1 and #10. I actually wrote the former back in June during Iran’s post-election crackdown and the latter just a few weeks ago in response to reading a laundry list of “techtics” (technologies + tactics) that repressive regimes like Iran’s employ.

As David Sasaki recently noted, making lists of what is wrong is all well and good, but we also need action items for "what needs to be done to make it right" so that "month by month, year by year, we're slowly [able to be] checking those items off." So consider the most popular post, on communicating securely in repressive environments, a collection of action items gathered from multiple sources that responds to some of those laundry lists of what is wrong.

I was glad to see one of my posts on Ushahidi’s Swift River appear in the top 10, and surprised to see a post on mobile banking in position 7. I’ll be looking to blog more about mobile banking in 2010 especially as my Fletcher colleagues (and alumni) are becoming leaders in their own right in this exciting space ripe for iRevolutions. See this link for the international conference on Mobile Banking that we co-organized in Kenya earlier this year.

As for crisis mapping, I think 2009 will mark the launch of crisis mapping as a field in its own right. Thank you to all the donors who have helped to make this happen. There's no doubt that 2010 will be a year of many more iRevolutions. I look forward to it!

Patrick Philippe Meier

Google’s New Earth Engine for Satellite Imagery Analysis: Applications to Humanitarian Crises

So that’s what they’ve been up to. Google is developing a new computational platform for global-scale analysis of satellite imagery to monitor deforestation. But this is just “the first of many Earth Engine applications that will help scientists, policymakers, and the general public to better monitor and understand the Earth’s ecosystems.”

How about the Earth's social systems? Humanitarian crises? Armed conflicts? This has been one of the main drivers of the Program on Crisis Mapping and Early Warning (CM&EW), which I co-direct at the Harvard Humanitarian Initiative (HHI) with Dr. Jennifer Leaning. Indeed, we met with the Google Earth team earlier this year to discuss the development of a computational platform to analyze satellite imagery of humanitarian crises for the purposes of early detection and early response.

In particular, we were interested in determining whether certain spatial patterns could be identified and, if so, whether we could develop a taxonomy of different spatial patterns of humanitarian crises; something like a library of "crisis fingerprints." As we noted to Google in writing following the conversations,

It is our view that the work of interpretation will be powerfully enhanced by the development of valid patterns relating to issues of importance in specific sets of circumstances that can be reproducibly recognized in satellite imagery. To be sure, the geo-spatial analysis of humanitarian crises can serve as an important control mechanism for Google's efforts in extending the functionality of Google Earth and Google's analytical expertise.

This is something that a consortium of organizations including HHI can get engaged in. Population movement and settlement, shelter options and conditions, environmental threats, and access to food and water are discernible from various elements and resolution levels of satellite imagery. But much more could be apprehended from these images were patterns assembled and then tested against other information sources and empirical field assessments. For more on this, see my colleague Jennifer Leaning's excellent Keynote address at ICCM 2009:

The military uses of satellite imagery are far more developed than the humanitarian capacities because the interpretive link between what can be seen in the image and what is actually happening on the ground has been made, in great iterative detail, over a period of many years, encompassing a wide span of geographies and technological deployments. We need to develop a process to explore and validate what can be understood from satellite imagery about key humanitarian concerns by augmenting standard satellite analytics with time-specific and informed assessments of what was concurrently taking place in the location being photographed.

The potential for such applications has just begun to surface in humanitarian circles. The Darfur Google initiative has demonstrated the force of vivid images of destruction tethered to actual locations of villages across the span of Darfur. Little further detail is available from the actual images, however, and much of the associated information displayed by clicking on an image is static, derived from other sources and somewhat laboriously acquired. The full power of what might be gleaned simply from the satellite image remains to be explored.

Because systematic and empirical analysis of what a series of satellite images might reveal about humanitarian issues has not yet been undertaken, any effort to draw inferences from current images does not lead far.  The recent coverage of the war in Sri Lanka included satellite photos of the same contested terrain in the northeast, for two time frames, a month apart.  The attempt to determine what had transpired in that interim, relating to population movement, shelter de-construction and reconstruction, and land bombardment, was a matter of conjecture.

Bridging this gap from image to insight will not only be a matter of technological enhancement of satellite imaging. It will require interrogating the satellite images through the filter of questions and concerns that are relevant to humanitarian action and then infusing other kinds of information, gathered through a range of methods, to create visual metrics for understanding what the images project.
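To make this gap from image to insight a bit more concrete, here is a minimal sketch in Python of the kind of naive pixel-level change detection the Sri Lanka example implies: comparing two co-registered images of the same terrain, taken a month apart, and flagging large intensity changes. The file names, threshold and grayscale simplification are all illustrative assumptions on my part; real analysis would also require radiometric normalization, georegistration and exactly the kind of ground-truth validation argued for above.

```python
# Toy change detection between two satellite images of the same area.
# Purely illustrative: file names and threshold are hypothetical, and the
# images are assumed to be already co-registered (aligned pixel for pixel).
import numpy as np
from PIL import Image

def load_grayscale(path):
    """Load an image and collapse it to a single grayscale band."""
    return np.asarray(Image.open(path).convert("L"), dtype=np.float64)

def change_mask(before, after, threshold=30.0):
    """Flag pixels whose intensity changed by more than `threshold`."""
    return np.abs(after - before) > threshold

before = load_grayscale("area_2009_04.png")  # hypothetical image pair
after = load_grayscale("area_2009_05.png")
mask = change_mask(before, after)
print(f"{mask.mean():.1%} of pixels changed beyond the threshold")
```

Even a crude mask like this only says that something changed; attaching humanitarian meaning to the change is precisely the interpretive work described above.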

There is a lot of exciting work to be done in this space and I do hope that Google will seek to partner with humanitarian organizations and applied research institutes to develop an Earth Engine for Humanitarian Crises. The technological and analytical breakthroughs are path-breaking, but let us remember that they can be even more breathtaking when applied to saving lives in humanitarian crises.

Patrick Philippe Meier

Is Journalism Just Failed Crowdsourcing?

This provocative question materialized during a recent conversation I had with a Professor of Political Science while in New York this week. Major news companies like CNN have started to crowdsource citizen-generated news on the basis that "looking at the news from different angles gives us a deeper understanding of what's going on." CNN's iReport thus invites citizens to help shape the news "in order to paint a more complete picture of the news."

This would imply that traditional journalism has provided a relatively incomplete picture of global events. So the question is: if crowdsourcing platforms had been available to journalists one hundred years ago, would they have viewed these platforms as an exciting opportunity to get early leads on breaking stories? The common counter-argument is that crowdsourcing "opens the floodgates" of information and we simply can't follow up on everything. Yes, but whoever said that every lead requires follow-up?

Journalists are not always interested in following up on every lead that comes their way. They'll select a few sources, interview them, and then write up the story. What crowdsourcing citizen-generated news does, however, is provide them with many more leads to choose from. Isn't this an ideal setup for a journalist? Instead of having to chase down leads across town, the leads come directly to them with names, phone numbers and email addresses.

Imagine that the field of journalism had started out using crowdsourcing platforms combined with investigative journalism. If these platforms were then outlawed for whatever reason, would investigative journalists be hindered in their ability to cover the news from different angles? Or would they still be able to paint an equally complete picture of the news?

Granted, one common criticism of citizen journalism is the lack of context it provides, especially on Twitter given the 140-character restriction. But surely 140 characters are plenty for the purposes of a potential lead. And if a mountain of Tweets started to point to the same lead story, then a professional journalist could take advantage of this information when deciding whether or not to follow up.
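As a toy illustration of that last point, here is a minimal sketch, assuming nothing more than a plain list of tweet strings, of how recurring keywords could be counted to surface candidate leads once enough messages start pointing the same way. The sample tweets, stopword list and threshold are all invented for the example.

```python
# Toy lead detection: count recurring keywords across short messages and
# flag those mentioned in at least `min_mentions` distinct tweets.
from collections import Counter

STOPWORDS = {"the", "a", "in", "of", "at", "is", "on", "and", "to"}

def candidate_leads(tweets, min_mentions=3):
    counts = Counter()
    for tweet in tweets:
        # Use a set so a word repeated within one tweet counts only once.
        words = {w.strip("#.,!?").lower() for w in tweet.split()}
        counts.update(w for w in words if w and w not in STOPWORDS)
    return [(word, n) for word, n in counts.most_common() if n >= min_mentions]

tweets = [
    "Huge fire near the central market #breaking",
    "Smoke everywhere downtown, market on fire",
    "Fire trucks heading to the market district",
]
print(candidate_leads(tweets))  # e.g. [('fire', 3), ('market', 3)]
```

A journalist would still verify before publishing; the point is simply that volume across independent messages is itself a useful signal for deciding which leads merit follow-up.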


I also find the criticism against Twitter interesting coming from traditional journalists. In the early 1900s, large newspapers started hiring war correspondents “who used the new telegraph and expanding railways to move news faster to their newspapers.” However, the cost of sending telegrams forced reporters to develop a “new concise or ‘tight’ style of writing which became the standard for journalism through the next century.”

Today, the cost of hiring professional journalists means that a newspaper like the Herald (at the time) is not going to send a modern Henry Stanley to find a certain Dr. Livingstone in Africa. And besides, if the Herald had had global crowdsourcing platforms back in the 1870s, it might have used Twitter instead to crowdsource the coordinates of Dr. Livingstone.

This may imply that traditional journalism was primarily shaped by the constraints of the technology of its time. In a teleological sense, then, crowdsourcing may simply be the next phase in the future of journalism.

Patrick Philippe Meier

Crisis Information and The End of Crowdsourcing

When Wired journalist Jeff Howe coined the term crowdsourcing back in 2006, he did so in contradistinction to the term outsourcing and defined crowdsourcing as tapping the talent of the crowd. The tag line of his article was: “Remember outsourcing? Sending jobs to India and China is so 2003. The new pool of cheap labor: everyday people using their spare cycles to create content, solve problems, even do corporate R & D.”

If I had a tag line for this blog post it would be: “Remember crowdsourcing? Cheap labor to create content and solve problems using the Internet is so 2006. What’s new and cool today is the tapping of official and unofficial sources using new technologies to create and validate quality content.” I would call this allsourcing.

The word “crowdsourcing” is obviously a compound word that combines “crowd” and “sourcing”. But what exactly does “crowd” mean in this respect? And how has “sourcing” changed since Jeff introduced the term crowdsourcing over three-and-a-half years ago?

Let's tackle the question of "sourcing" first. In his June 2006 article on crowdsourcing, Jeff provides case studies that all relate to novel applications of websites; perhaps the most famous example of crowdsourcing, Wikipedia, is also a website. But we've recently seen some interesting uses of mobile phones to crowdsource information. See Ushahidi or Nathan Eagle's talk at ETech09, for example:

So the word “sourcing” here goes beyond the website-based e-business approach that Jeff originally wrote about in 2006. The mobile technology component here is key. A “crowd” is not still. A crowd moves, especially in crisis, which is my area of interest. So the term “allsourcing” not only implies collecting information from all sources but also the use of “all” technologies to collect said information in different media.

As for the word “crowd”, I recently noted in this Ushahidi blog post that we may need some qualifiers—namely bounded and unbounded crowdsourcing. In other words, the term “crowd” can mean a large group of people (unbounded crowdsourcing) or perhaps a specific group (bounded crowdsourcing). Unbounded crowdsourcing implies that the identity of individuals reporting the information is unknown whereas bounded crowdsourcing would describe a known group of individuals supplying information.

The term "allsourcing" represents a combination of bounded and unbounded crowdsourcing coupled with new "sourcing" technologies. An allsourcing approach would combine information supplied by known/official sources and unknown/unofficial sources using the Web, e-mail, SMS, Twitter, Flickr, YouTube, etc. I think the future of crowdsourcing is allsourcing because allsourcing combines the strengths of both bounded and unbounded approaches while reducing the constraints inherent to each individual approach.

Let me explain. One important advantage of unbounded crowdsourcing is the ability to collect information from unofficial sources. I consider this an advantage over bounded crowdsourcing since more information can be collected this way. The challenge, of course, is how to verify the validity of said information. Verifying information is by no means a new process, but unbounded crowdsourcing has the potential to generate far more information than bounded crowdsourcing since the former does not censor unofficial content. This presents a challenge.

At the same time, bounded crowdsourcing has the advantage of yielding reliable information since the reports are produced by known/official sources. However, bounded crowdsourcing is constrained to a relatively small number of individuals doing the reporting. Obviously, these individuals cannot be everywhere at the same time. But if we combined bounded and unbounded crowdsourcing, we would see an increase in both (1) overall reporting and (2) the ability to validate reports from unknown sources.

The increased ability to validate information is due to the fact that official and unofficial sources can be triangulated when using an allsourcing approach. Given that official sources are considered trusted sources, any reports from unofficial sources that match official reports can be considered more reliable, along with their associated sources. And so the combined allsourcing approach in effect enables the identification of new reliable sources even if the identity of these sources remains unknown.
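A minimal sketch of that triangulation logic might look as follows. The data structures and the matching rule (same location, same reported event) are illustrative assumptions on my part, not how Ushahidi or Swift River actually implement validation.

```python
# Toy allsourcing triangulation: unofficial reports that match a trusted
# official report lend credibility to their (unknown) sources.
from dataclasses import dataclass

@dataclass
class Report:
    source: str
    official: bool   # bounded (known/official) vs. unbounded (unknown)
    location: str
    event: str

def corroborated_sources(reports):
    """Return unofficial sources with at least one officially matched report."""
    official_facts = {(r.location, r.event) for r in reports if r.official}
    return {
        r.source
        for r in reports
        if not r.official and (r.location, r.event) in official_facts
    }

reports = [
    Report("election_observer_12", True, "Nairobi", "polling station closed"),
    Report("anonymous_sms_847", False, "Nairobi", "polling station closed"),
    Report("anonymous_sms_901", False, "Kisumu", "ballot shortage"),
]
print(corroborated_sources(reports))  # -> {'anonymous_sms_847'}
```

Once a previously unknown source has been corroborated this way, its future reports can provisionally be weighted more heavily, which is how the approach surfaces new reliable sources without ever learning their identities.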

Ushahidi is a good example of an allsourcing platform. Organizations can use Ushahidi to capture both official and unofficial sources using all kinds of new sourcing technologies. Allsourcing is definitely something new, so there's still much to learn. I have a hunch that there is huge potential. Jeff Howe titled his famous article in Wired "The Rise of Crowdsourcing." Will a future edition of Wired include an article on "The Rise of Allsourcing"?

Patrick Philippe Meier

Three Common Misconceptions About Ushahidi

Cross-posted on Ushahidi

Here are three interesting misconceptions about Ushahidi and crowdsourcing in general:

  1. Ushahidi takes the lead in deploying the Ushahidi platform
  2. Crowdsourced information is statistically representative
  3. Crowdsourced information cannot be validated

Let's start with the first. We do not take the lead in deploying Ushahidi platforms. In fact, we often learn about new deployments second-hand via Twitter. We are a non-profit tech company and our goal is to continue developing innovative crowdsourcing platforms that cater to the growing needs of our current and prospective partners. We provide technical and strategic support when asked but otherwise you'll find us in the backseat, which is honestly where we prefer to be. Our comparative advantage is not in deployment. So the credit for Ushahidi deployments really goes to the numerous organizations that continue to implement the platform in new and innovative ways.

On this note, keep in mind that the first downloadable Ushahidi platform was made available just this May, and the second version just last week. So implementing organizations have been remarkable test pilots, experimenting and learning on the fly without recourse to any particular manual or documented best practices. Most election-related deployments, for example, were even launched before May, when platform stability was still an issue and the code was still being written. So our hats go off to all the organizations that have piloted Ushahidi and continue to do so. They are the true pioneers in this space.

Also keep in mind that these organizations rarely had more than a month or two of lead-time before scheduled elections, like in India. If all of us have learned anything from watching these deployments in 2009, it is this: the challenge is not one of technology but of election awareness and voter education. So we're impressed that several organizations are already customizing the Ushahidi platform for elections that are more than 6-12 months away. These deployments will definitely be a first for Ushahidi and we look forward to learning all we can from implementing organizations.

The second misconception, “crowdsourced information is statistically representative,” often crops up in conversations around election monitoring. The problem is largely one of language. The field of election monitoring is hardly new. Established organizations have been involved in election monitoring for decades and have gained a wealth of knowledge and experience in this area. For these organizations, the term “election monitoring” has specific connotations, such as random sampling and statistical analysis, verification, validation and accredited election monitors.

When partners use Ushahidi for election monitoring, I think they mean something different. What they generally mean is citizen-powered election monitoring aided by crowdsourcing. Does this imply that crowdsourced information is statistically representative of all the events taking place across a given country? Of course not: I’ve never heard anyone suggest that crowdsourcing is equivalent to random sampling.

Citizen-powered election monitoring is about empowering citizens to take ownership over their elections and to have a voice. Indeed, elections do not start and stop at the polling booth. Should we prevent civil society groups from crowdsourcing crisis information on the basis that their reports may not be statistically representative? No. This is not our decision to make and the data is not even meant for us.

Another language-related problem has to do with the term "crowdsourcing". The word "crowd" here can literally mean anyone (unbounded crowdsourcing) or a specific group (bounded crowdsourcing), such as designated election monitors. If these official monitors use Ushahidi and are deliberately positioned across a country for random sampling purposes, then this is no different from standard, established approaches to election monitoring. Bounded crowdsourcing can be statistically representative.
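To illustrate the statistical point, here is a minimal sketch of bounded crowdsourcing with deliberately positioned monitors: draw a simple random sample of polling stations and assign one monitor to each, so that the resulting reports support the usual sampling-based inferences. The station and monitor names are hypothetical.

```python
# Toy random assignment of a bounded crowd of election monitors to a
# simple random sample of polling stations.
import random

def assign_monitors(stations, monitors, seed=42):
    """Randomly assign each monitor to a distinct polling station."""
    rng = random.Random(seed)  # seeded only to make the example reproducible
    sample = rng.sample(stations, k=len(monitors))
    return dict(zip(monitors, sample))

stations = [f"station_{i:03d}" for i in range(400)]  # hypothetical sampling frame
monitors = [f"monitor_{i}" for i in range(25)]
print(assign_monitors(stations, monitors)["monitor_0"])
```

Unbounded reports can of course still be collected alongside such a sample; they simply should not be confused with it when making statistical claims.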

The third misconception about Ushahidi has to do with the tradeoff between unbounded crowdsourcing and the validation of said crowdsourced information. One of the main advantages of unbounded crowdsourcing is the ability to collect a lot of information from a variety of sources and media, official and nonofficial, in near real time. Of course, this means that a lot more information can be reported at once, which can make the validation of said information a challenging process.

A common reaction to this challenge is to dismiss crowdsourcing altogether because unofficial sources may be unreliable or, at worst, deliberately misleading. Some organizations thus find it easier to write off all unofficial content because of these concerns. Ushahidi takes a different stance. We recognize that user-generated content is not about to disappear any time soon and that a lot of good can come out of such content, not least because official information can too easily become proprietary and guarded instead of shared.

So we're not prepared to write off user-generated content because validating information happens to be challenging. Crowdsourcing crisis information is our business and so, obviously, is the validation of crowdsourced information. This is why Ushahidi is fully committed to developing Swift River, a free and open-source platform that validates crowdsourced information in near real-time. Follow the Ushahidi blog for exciting updates!

Where I Stand on Digital Activism

Journalists, activists, students, donors and, most recently, a millionaire investment banker have all asked me where I stand on Digital Activism. More precisely, the popular question is: who is going to win? And by that, they refer to the cat-and-mouse dynamics that characterize the digital battle between repressive regimes and civil resistance movements.

My personal opinion (a.k.a. untested hunch) is that this cat-and-mouse game is bound to continue for some time. That said, I think repressive regimes will ultimately lag behind in the adoption and application of innovative methods and technologies. I also think that resistance movements that employ digital technologies will continue to have a first-mover advantage, even if that advantage is short-lived.

Why? Because of Organizational Theory 101. It is well known in the study of complex systems and network dynamics that command-and-control organizational structures do not adapt well to rapidly changing environments. Relatively decentralized forms of organization, on the other hand, are typically more nimble and adaptable. Decentralized networks are often first movers, which gives them a temporary albeit important advantage. They have more feedback loops.

As I wrote in a 2006 conference paper (citing Bazerman and Watkins 2004),

Feedback mechanisms enable an organization to manage the complexity of their internal and external environments in four important ways. They allow an organization to: (1) scan the environment and collect sufficient information; (2) integrate and analyze information from multiple sources; (3) respond in a timely manner and observe the results; and (4) reflect on what happened and incorporate lessons-learned into the “institutional memory” of the organization, in order to avoid repetition of past mistakes.

In contrast, hierarchical structures require the executive to rely on others to scan information. Excellent communication “between floors” is therefore critical. In the process of communication, however, “organizational members filter information as it rises through hierarchies” and “those at the top inevitably receive incomplete and distorted data [and] overload may prevent them from keeping up-to-date with incoming information.” This limits the organization’s ability to adapt and change, and “any organization that is not changing is a battlefield monument.”

Furthermore, as Brafman and Beckstrom have shown in The Starfish and the Spider, “when attacked, a decentralized organization tends to become even more open and decentralized.” This means that government crackdowns against resistance movements tend to make the latter more decentralized and harder to track down.

I often use the cat-and-mouse analogy, but perhaps a better one is the spider and the starfish. Even if an arm of the starfish is cut off, it will regenerate. Not so with the spider, which has a centralized nervous system. As Brafman and Beckstrom write, "A starfish is a neural network–basically a network of cells. Instead of having a head, like a spider, the starfish functions as a decentralized network." Of course, resistance movements are not completely decentralized; they need only be more decentralized relative to repressive regimes.

Notice that I have not referred to technology a single time in this blog post about Digital Activism. That’s because my take on the competition between the spider and starfish ultimately rests on organizational dynamics, not technology.

Organization is a formidable force in social systems and natural systems. The only difference between a water droplet and solid ice is organization: the way the molecules are organized. Asymmetric warfare is possible because of organizational differences. To understand the power of organization, I highly recommend reading this book by my colleagues Shultz and Dew (2006): Insurgents, Terrorists, and Militias: The Warriors of Contemporary Combat.

So this is ultimately where I stand on Digital Activism and what I wrote over a year ago in my dissertation proposal. We can go on all we want with anecdotal acrobatics but I personally think that doing so is simply barking up the wrong tree and missing the forest for the trees.

Patrick Philippe Meier

New Tech in Emergencies and Conflicts: Role of Information and Social Networks

I had the distinct pleasure of co-authoring this major new United Nations Foundation & Vodafone Foundation Technology Report with my distinguished colleague Diane Coyle. The report looks at innovation in the use of technology along the timeline of crisis response, from emergency preparedness and alerts to recovery and rebuilding.

“It profiles organizations whose work is advancing the frontlines of innovation, offers an overview of international efforts to increase sophistication in the use of IT and social networks during emergencies, and provides recommendations for how governments, aid groups, and international organizations can leverage this innovation to improve community resilience.”

Case studies include:

  • Global Impact and Vulnerability Alert System (GIVAS)
  • European Media Monitor (EMM, aka OPTIMA)
  • Emergency Preparedness Information Center (EPIC)
  • Ushahidi Crowdsourcing Crisis Information
  • Télécoms sans Frontières (TSF)
  • Impact of Social Networks in Iran
  • Social Media, Citizen Journalism and Mumbai Terrorist Attacks
  • Global Disaster Alert and Coordination System (GDACS)
  • InSTEDD RIFF
  • UNOSAT
  • AAAS Geospatial Technologies for Human Rights
  • Info Technology for Humanitarian Assistance, Cooperation and Action (ITHACA)
  • Camp Roberts
  • OpenStreetMap and Walking Papers
  • UNDP Threat and Risk Mapping Analysis project (TRMA)
  • Geo-Spatial Info Analysis for Global Security, Stability Program (ISFEREA)
  • FrontlineSMS
  • M-PESA and M-PAISA
  • Souktel

I think this long and diverse list of case studies clearly shows that the field of humanitarian technology is coming into its own. Have a look at the report to learn how all these projects fit into the ecosystem of humanitarian technologies. And check out the tag #Tech4Dev on Twitter or the UN Foundation's Facebook page to discuss the report, and feel free to add any comments to this blog post below. I'm happy to answer all questions. In the meantime, I salute the UN Foundation for producing a forward-looking report on projects that are barely two years old, some just two months old.

Patrick Philippe Meier