Monthly Archives: June 2012

Wow: How Road Maps Were Made in the 1940s!

This short video is absolutely a must-watch for today’s digital and crowdsourced-mapping enthusiasts. Produced by Chevrolet in the 1940s, Caught Mapping is an educational film that provides a truly intriguing and at times amusingly entertaining view into how road maps were made at the time. The contrasts with today’s live, crowdsourced, social-media maps rich with high-resolution satellite imagery are simply staggering. This is definitely worth the watch!

Compare the roadmap-making of yesteryear with OpenStreetMap’s impressive map-making efforts in Haiti 2010 (video below) and Japan 2011, for example.

What do you think map-making will look like in 2040? Will we still be making maps? Or will automated sensors be live mapping 24/7? Will 2D interfaces disappear entirely and be replaced by 3D maps? Will all geo-tagged data simply be embedded within augmented reality platforms and updated live? Will we even be using the word “map” anymore?

Crisis Mapping the End of Sudan’s Dictatorship?

Anyone following the Twitter hashtag #SudanRevolts in recent days must be stunned by the shocking lack of coverage in the mainstream media. The protests have been escalating since June 17 when female students at the University of Khartoum began demonstrating against the regime’s austerity measures, which are increasing the prices of basic commodities and removing fuel subsidies. The dissent has quickly spread to other universities and communities.

There’s no doubt that Sudan’s dictator is in trouble. He faces international economic sanctions and a mounting US$2.5 billion budget deficit following the secession of South Sudan last year. What’s more, he is also “fighting expensive, devastating, and unpopular wars in Darfur (in the west), Blue Nile, Southern Kordofan, and the Nuba Mountains (on the border with South Sudan)” (UN Dispatch). So what next?

Enter Sudan Change Now, a Sudanese political movement with a clear mandate: peaceful but total democratic change. They seek to “defeat the present power of darkness using all necessary tools of peace resistance to achieve political stability and social peace.” The movement is thus “working on creating a common front that incorporates all victims of the current regime to ensure a unified and effective course of action to overthrow it.” Here are some important videos they have captured of the protests.

According to GlobalVoices, “The Sudanese online community believe that media coverage was an integral part of the revolutions in Egypt and Tunisia, and are therefore demanding the same for Sudan.” The political movement Sudan Change Now is thus turning to crisis mapping to cast more light on the civil resistance efforts in the Sudan:

https://sudanchangenow2012.crowdmap.com

The crisis map includes over 50 individual reports (all added in the past 24 hours) ranging from female protestors confronting armed guards to Sudanese security forces using tear gas to break up demonstrations. There are also reports of detained activists and journalists. These reports come from Twitter, while more recent incidents are sourced from the little mainstream media coverage that currently exists. The live map is being updated several times a day.

As my colleague Carol Gallo reminds us, “The University of Khartoum was also the birthplace of the movement that led to the overthrow of the military government in 1964.” Symbols and anniversaries are important features of civil resistance. For example, Sudan’s current ruling party came to power on June 30th, 1989. So protestors including those with Sudan Change Now are gearing up for some major demonstrations this Wednesday.

This is not the first crisis map of protests in Khartoum. In January 2011, activists launched this crisis map. I hope that protestors engaged in current civil resistance efforts take note of the lessons learned from last year’s #Jan30 demonstrations. For my doctoral dissertation, I compared the use of crisis maps by Egyptian and Sudanese activists in 2010. If I had to boil down the findings into three key words, these would be: unity, preparedness, creativity.

Unity is absolutely instrumental in civil resistance. As for preparedness, nothing should be left to chance. Prepare and plan the sequence of civil resistance efforts (along with likely reactions) and remember that protests come at the end. The groundwork must first be laid with other civil resistance tactics and thence escalated. Finally, creativity is essential, so here are some tactics that may provide some ideas. They include both traditional tactics and technology-enabled ones like digital crisis maps.

NB: I understand that the security risks of using the Ushahidi mapping platform have been indirectly communicated to the activists.

Back to the Future: On National Geographic and Crisis Mapping

[Cross-posted from National Geographic Newswatch]

Published in October 1888, the first issue of National Geographic “was a modest looking scientific brochure with an austere terra-cotta cover” (NG 2003). The inaugural publication comprised a dense academic treatise on the classification of geographic forms by genesis. But that wasn’t all. The first issue also included a riveting account of “The Great White Hurricane” of March 1888, which still ranks as one of the worst winter storms ever in US history.

Wreck at Coleman’s Station, New York & Harlem R. R., March 13, 1888. Photo courtesy NOAA Photo Library.

I’ve just spent a riveting week myself at the 2012 National Geographic Explorers Symposium in Washington DC, the birthplace of the National Geographic Society. I was truly honored to be recognized as a 2012 Emerging Explorer along with such an amazing and accomplished cadre of explorers. So it was with excitement that I began reading up on the history of this unique institution whilst on my flight to Doha following the Symposium.

I’ve been tagged as the “Crisis Mapper” of the Emerging Explorers Class of 2012. So imagine my astonishment when I began discovering that National Geographic had a long history of covering and mapping natural disasters, humanitarian crises and wars starting from the very first issue of the magazine in 1888. And when World War I broke out:

“Readers opened their August 1914 edition of the magazine to find an up-to-date map of ‘The New Balkan States and Central Europe’ that allowed them to follow the developments of the war. Large maps of the fighting fronts continued to be published throughout the conflict […]” (NG 2003).

Map of ‘The New Balkan States and Central Europe’ from the August 1914 “National Geographic Magazine.” Image courtesy NGS.

National Geographic even established a News Service Bureau to provide bulletins on the geographic aspects of the war for the nation’s newspapers. As the respected war strategist Carl von Clausewitz noted half-a-century before the launch of Geographic, “geography and the character of the ground bear a close and ever present relation to warfare [...] both as to its course and to its planning and exploitation.”

“When World War II came, the Geographic opened its vast files of photographs, more than 300,000 at that time, to the armed forces. By matching prewar aerial photographs against wartime ones, analysts detected camouflage and gathered intelligence” (NG 2003).

During the 1960s, National Geographic “did not shrink from covering the war in Vietnam.” Staff writers and photographers captured all aspects of the war from “Saigon to the Mekong Delta to villages and rice fields.” In the years and decades that followed, Geographic continued to capture unfolding crises, from occupied Palestine and Apartheid South Africa to war-torn Afghanistan and the drought-stricken Sahel of Africa.

Geographic also covered the tragedy of the Chernobyl nuclear disaster and the dramatic eruption of Mount Saint Helens. The gripping account of the latter would in fact become the most popular article in all of National Geographic history. Today,

“New technologies–remote sensing, lasers, computer graphics, x-rays and CT scans–allow National Geographic to picture the world in new ways.” This is equally true of maps. “Since the first map was published in the magazine in 1888, maps have been an integral component of many magazine articles, books and television programs […]. Originally drafted by hand on large projections, today’s maps are created by state-of-the-art computers to map everything from the Grand Canyon to the outer reaches of the universe” (NG 2003). And crises.

“Pick up a newspaper and every single day you’ll see how geography plays a dominant role in giving a third dimension to life,” wrote Gil Grosvenor, the former Editor in Chief of National Geographic (NG 2003). And as we know only too well, many of the headlines in today’s newspapers relay stories of crises the world over. National Geographic has a tremendous opportunity to shed a third dimension on emerging crises around the globe using new live mapping technologies. Indeed, to map the world is to know it, and to map the world live is to change it live before it’s too late. The next post in this series will illustrate why with an example from the 2010 Haiti Earthquake.

Patrick Meier is a 2012 National Geographic Emerging Explorer. He is an internationally recognized thought leader on the application of new technologies for positive social change. He currently serves as Director of Social Innovation at the Qatar Foundation’s Computing Research Institute (QCRI). Patrick also authors the respected iRevolution blog & tweets at @patrickmeier. This piece was originally published here on National Geographic.

Does Digital Crime Mapping Work? Insights on Engagement, Empowerment & Transparency

In 2008, police forces across the United Kingdom (UK) launched an online crime mapping tool “to help improve the credibility and confidence that the public had in police-recorded crime levels, address perceptions of crime, promote community engagement and empowerment, and support greater public service transparency and accountability.” How effective has this large scale digital mapping effort been? “There continues to be a lack of evidence that publishing crime statistics using crime mapping actually supports improvements in community engagement and empowerment.” This blog post evaluates the project’s impact by summarizing the findings from a recent peer-reviewed study entitled: “Engagement, Empowerment and Transparency: Publishing Crime Statistics using Online Crime Mapping.” Insights from this study have important implications for crisis mapping projects.

The rationale for publishing up-to-date crime statistics online was to address the “reassurance gap” which “relates to the counterintuitive relationship between fear of crime and the reality of crime.” While crime in the UK has decreased steadily over the past 15 years, there was no corresponding decrease in the public’s fear of crime during this period. Studies subsequently found a relationship between a person’s confidence in the criminal justice system and the level to which a person felt informed about crime and justice issues. “Hence, barriers to accurate information were one of the main reasons why the reassurance gap, and lack of confidence in the police, was believed to exist.”

A related study found that people’s opinions on crime levels were “heavily influenced by media depictions, demographic qualities, and personal experience.” Meanwhile, “the countervailing source of information—nationally reported crime statistics—was not being heard. Simply put, the message that crime levels were falling was not getting through to the populace over the cacophony of competing information.” Hence the move to publish crime statistics online using a crime map.

Studies have long inferred that “publically disseminating crime information engages the public and empowers them to get involved in their communities. This has been a key principle in the adoption of community policing, where the public are considered just as much a part of community safety as the police themselves. Increasing public access to crime information is seen as integral to this whole agenda.” In addition, digital crime mapping was “seen as a ‘key mechanism for encouraging the public to take greater responsibility for holding the local police to account for their performance.’ In other words, it is believed that publishing information on crime at a local level facilitates greater public scrutiny of how well the police are doing at suppressing local crime and serves as a basis for dialogue between the public and their local police.”

While these are all great reasons to launch a nationwide crime mapping initiative online, “the evidence base that should form the foundations of the policy to publish crime statistics using crime maps is distinctly absent.” When the project launched, “no police force had conducted a survey (robust or anecdotal) that had measured the impact that publishing crime statistics had on improving the credibility of these data or the way in which the information was being used to inform, reassure, and engage with the public.” In fact, “the only research-derived knowledge available on the impact of crime maps on public perceptions of crime was generated in the USA, on a small and conveniently selected sample.”

Moreover, many practitioners had “concerns with the geocoding accuracy of some crime data and how this would be represented on the national crime mapping site (i.e. many records cannot be geographically referenced to the exact location where the crime took place) [...] which could make the interpretation of street-level data misleading and confusing to the public.” The authors thus argue that “an areal visualization method such as kernel density estimation that is commonly used in police forces for visualizing the geographic distribution of crime would have been more appropriate.”
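To illustrate what the authors mean by an areal method, here is a minimal kernel density estimation sketch. The incident coordinates below are invented for the example (this is not real crime data, nor the police forces’ actual pipeline): point records are smoothed into a continuous surface, so the map communicates a neighborhood hotspot rather than possibly mis-geocoded street-level points.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical crime incident coordinates (e.g. projected easting/northing).
rng = np.random.default_rng(42)
incidents = rng.normal(loc=[530000, 180000], scale=[400, 300], size=(200, 2))

# Fit a Gaussian kernel density estimate over the incident points.
kde = gaussian_kde(incidents.T)

# Evaluate the smoothed density on a coarse grid covering the study area.
xs = np.linspace(incidents[:, 0].min(), incidents[:, 0].max(), 50)
ys = np.linspace(incidents[:, 1].min(), incidents[:, 1].max(), 50)
gx, gy = np.meshgrid(xs, ys)
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

# The hotspot is wherever the surface peaks, not any single (possibly
# wrongly geocoded) address.
peak_y, peak_x = np.unravel_index(density.argmax(), density.shape)
print(f"Hotspot centre near ({xs[peak_x]:.0f}, {ys[peak_y]:.0f})")
```

The smoothed surface is what would be shaded on the public map, which sidesteps the street-level geocoding accuracy problem the practitioners raise.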

The media was particularly critical of the confusion provoked by the map and the negative media attention “may have done long-term damage to the reputation of crime statistics. Whilst these inaccuracies are few in number, they have been high profile and risk undermining the legitimacy and confidence in the accuracy of all the crime statistics published on the site. Indeed, these concerns were highlighted further in October 2011 when the crime statistics for the month of August appeared to show no resemblance to the riots [...].”

Furthermore, “contemporary research has stressed that information provision needs to be relevant to the recipients, and should emphasize police responsiveness to local issues to chime with the public’s priorities.” Yet the UK’s online crime map does not “tailor sub-neighborhood reassurance messages alongside the publishing of crime statistics (e.g., saying how little crime there is in your neighborhood). General messages of reassurance and crime prevention often fail to resonate. This also underscores the need for tailored information that is actively passed on to local communities at times of heightened crime risk, which local residents can then use to minimize their own immediate risk of victimization and improve local public safety.”

In addition, the presentation of “crime statistics on the national website is very passive, offering little that will draw people back and keep them interested on crime trends and policing in their area.” For example, the project did not require users to register their email addresses and home post (zip) codes when using the map. This meant the police had no way to inform interested audiences with locally relevant crime information such as “specific and tailored crime prevention advice regarding a known local crime issue (e.g. a spate of burglaries), directly promoting messages of reassurance and used as a means to publicize police activity.” I would personally argue for the use of automated alerts and messages of reassurance via geo-fencing. (The LA Crime Map provides automated alerts, for example). I would also recommend adding social networking tools such as Facebook and Twitter to the map.
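Geo-fenced alerting of this kind is straightforward to sketch. The snippet below is a minimal, hypothetical illustration (the subscriber records, coordinates and field names are all invented): it uses the haversine formula to find registered users whose home location falls within a given radius of a reported incident, i.e. the audience to whom a tailored alert would be sent.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points in kilometres.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def subscribers_in_fence(subscribers, incident, radius_km):
    # Return subscribers whose registered home location lies inside the fence.
    return [s for s in subscribers
            if haversine_km(s["lat"], s["lon"],
                            incident["lat"], incident["lon"]) <= radius_km]

# Hypothetical registered users and a reported incident (coordinates invented).
subscribers = [
    {"email": "a@example.org", "lat": 51.507, "lon": -0.128},  # central London
    {"email": "b@example.org", "lat": 53.480, "lon": -2.243},  # Manchester
]
incident = {"lat": 51.515, "lon": -0.141, "type": "burglary spate"}

alert_list = subscribers_in_fence(subscribers, incident, radius_km=5)
print([s["email"] for s in alert_list])  # only the nearby subscriber qualifies
```

With registered post codes in place of raw coordinates, the same lookup would let the police push “a known local crime issue” only to the residents it actually concerns.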

In conclusion, the authors question the “assumption that all police-recorded crime data are fit for purpose for mapping at street level.” They recommend using the Management of Police Information (MOPI) protocol, which states that “information must fulfill a necessary purpose for it to be recorded and retained by the police.” MOPI would “help to qualify what should and what should not be published, and the mechanism by which it is published.” Instead of mapping everything and anything, the authors advocate for the provision of “better quality information that the public can actually do something with to minimize their risk of victimization, or use as a basis for dialogue with their local policing teams.” In sum, “the purpose of publishing the crime statistics must not lose sight of the important potential it can contribute to improving the dialogue and involvement of local communities in improving community safety, and must avoid becoming an exercise in promoting political transparency when the data it offers provides little that encourages the public to react.”

For more on crime mapping, see:

How Can Innovative Technology Make Conflict Prevention More Effective?

I’ve been asked to participate in an expert working group in support of a research project launched by the International Peace Institute (IPI) on new technologies for conflict prevention. Both UNDP and USAID are also partners in this effort. To this end, I’ve been invited to make some introductory remarks during our upcoming working group meeting. The purpose of this blog post is to share my preliminary thoughts on this research and provide some initial suggestions.

Before I launch into said thoughts, some context may be in order. I spent several years studying, launching and improving conflict early warning systems for violence prevention. While I haven’t recently blogged about conflict prevention on iRevolution, you’ll find my writings on this topic posted on my other blog, Conflict Early Warning. I have also published and presented several papers on conflict prevention, most of which are available here. The most relevant ones include the following:

  • Meier, Patrick. 2011. Early Warning Systems and the Prevention of Violent Conflict. In Peacebuilding in the Information Age: Sifting Hype from Reality, ed. Daniel Stauffacher et al. Geneva: ICT4Peace. Available online.
  • Leaning, Jennifer and Patrick Meier. 2009. “The Untapped Potential of Information Communication Technology for Conflict Early Warning and Crisis Mapping,” Working Paper Series, Harvard Humanitarian Initiative (HHI), Harvard University. Available online.
  • Leaning, Jennifer and Patrick Meier. 2008. “Community Based Conflict Early Warning and Response Systems: Opportunities and Challenges.” Working Paper Series, Harvard Humanitarian Initiative (HHI), Harvard University. Available online.
  • Leaning, Jennifer and Patrick Meier. 2008. “Conflict Early Warning and Response: A Critical Reassessment.” Working Paper Series, Harvard Humanitarian Initiative (HHI), Harvard University. Available online.
  • Meier, Patrick. 2008. “Upgrading the Role of Information Communication Technology (ICT) for Tactical Early Warning/Response.” Paper prepared for the 49th Annual Convention of the International Studies Association (ISA) in San Francisco. Available online.
  • Meier, Patrick. 2007. “New Strategies for Effective Early Response: Insights from Complexity Science.” Paper prepared for the 48th Annual Convention of the International Studies Association (ISA) in Chicago. Available online.
  • Campbell, Susanna and Patrick Meier. 2007. “Deciding to Prevent Violent Conflict: Early Warning and Decision-Making at the United Nations.” Paper prepared for the 48th Annual Convention of the International Studies Association (ISA) in Chicago. Available online.
  • Meier, Patrick. 2007. From Disaster to Conflict Early Warning: A People-Centred Approach. Monday Developments 25, no. 4, 12-14. Available online.
  • Meier, Patrick. 2006. “Early Warning for Cowardly Lions: Response in Disaster & Conflict Early Warning Systems.” Unpublished academic paper, The Fletcher School. Available online.
  • I was also invited to be an official reviewer of this 100+ page workshop summary on “Communication and Technology for Violence Prevention” (PDF), which was just published by the National Academy of Sciences. In addition, I was an official referee for this important OECD report on “Preventing Violence, War and State Collapse: The Future of Conflict Early Warning and Response.”

An obvious first step for IPI’s research would be to identify the conceptual touch-points between the individual functions or components of conflict early warning systems and information & communication technology (ICT). Using this conceptual framework put forward by ISDR would be a good place to start:

That said, colleagues at IPI should take care not to fall prey to technological determinism. The first order of business should be to understand exactly why previous (and existing) conflict early warning systems are complete failures—a topic I have written extensively about and been particularly vocal on since 2004. Throwing innovative technology at failed systems will not turn them into successful operations. Furthermore, IPI should also take note of the relatively new discourse on people-centered approaches to early warning and distinguish between first, second, third and fourth generation conflict early warning systems.

On this note, IPI ought to focus in particular on third and fourth generation systems vis-a-vis the role of innovative technology. Why? Because first and second generation systems are structured for failure due to constraints explained by organizational theory. They should thus explore the critical importance of conflict preparedness and the role that technology can play in this respect since preparedness is key to the success of third and fourth generation systems. In addition, IPI should consider the implications of crowdsourcing, crisis mapping, Big Data, satellite imagery and the impact that social media analytics might have on the early detection of and response to violent conflict. They should also take care not to ignore critical insights from the field of nonviolent civil resistance vis-a-vis preparedness and tactical approaches to community-based early response. Finally, they should take note of new and experimental initiatives in this space, such as PeaceTXT.

IPI plans to write up several case studies on conflict early warning systems to understand how innovative technology might make (or may already be making) these more effective. I would recommend focusing on specific systems in Kenya, Kyrgyzstan, Sri Lanka and Timor-Leste. Note that some community-based systems are too sensitive to make public, such as one in Burma for example. In terms of additional experts worth consulting, I would recommend David Nyheim, Joe Bock, Maria Stephan, Sanjana Hattotuwa, Scott Edwards and Casey Barrs. I would also shy away from inviting too many academics or technology companies. The former tend to focus too much on theory while the latter often have a singular focus on technology.

Many thanks to UNDP for including me in the team of experts. I look forward to the first working group meeting and reviewing IPI’s early drafts. In the meantime, if iRevolution readers have certain examples or questions they’d like me to relay to the working group, please do let me know via the comments section below and I’ll be sure to share.

Marketing Peace using SMS Mobile Advertising: A New Approach to Conflict Prevention

I was just in Kenya working on the next phase of the PeaceTXT project with my colleague Rachel Brown from Sisi ni Amani. I’m finally getting to implement an approach to conflict early warning and early response that I have been advocating for since 2006. I came close in 2008 whilst working on a conflict early warning and response project in Timor-Leste. But I wasn’t in Dili long enough to see the project through and the country’s limited mobile phone coverage presented an important obstacle. Long story short, I’ve been advocating for a people-centered and preparedness-based approach to conflict early warning systems for half a decade and am finally implementing one with PeaceTXT.

Conflicts are often grounded in the stories and narratives that people tell themselves and the emotions that these stories generate. Narratives shape identity and the social construct of reality—we interpret our lives through stories. These have the power to transform relationships and communities. The purpose of PeaceTXT is to leverage mobile messaging (SMS) to market peace in strategic ways and thereby generate alternative narratives. SMS reminders have been particularly effective in catalyzing behavior change in several important public health projects. In addition, marketing to the “Bottom of the Pyramid” is increasingly big business and getting more sophisticated. We believe that lessons learned from these sectors can be combined and applied to catalyze behavior change vis-a-vis peace and conflict issues by amplifying new narratives using timely and strategically targeted SMS campaigns.

Last year, Sisi ni Amani sent the following SMS to 10,000 subscribers across Kenya: A good leader initiates and encourages peace and development among all people and is not tribal. “In a nation divided along ethnic lines, where a winner-takes-all mindset fuels rampant corruption and political violence, changing perceptions of good leadership is a daunting endeavor. And yet, according to post-campaign data, 90 percent of respondents said they changed their understanding of ‘what makes a good leader’ in response to the organization’s messaging. As one respondent commented: ‘I used to think a good leader is one who has the most votes, but now I know a good leader is one who thinks of the people who voted for him, not himself'” (NextBillion Blog Post).

PeaceTXT is about marketing peace using mobile advertising by leveraging user-generated content for said text messages. We’re in the business of selling peace for free by countering other narratives that tend to incite violent behavior. Preparedness is core to the PeaceTXT model. To be sure, local mobile-based advertising is hardly reactive or random. Indeed, billions of dollars go into marketing campaigns for a reason. To this end, we’re busy developing an agile SMS protocol that will allow us to send pre-determined customized text messages to specific groups (demographics) in targeted locations within minutes of an incident occurring. The content for said text messages will come from local communities themselves.
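To make this concrete, here is a minimal sketch of what the message-selection step of such a protocol might look like. Everything in it is hypothetical: the template texts, audience labels and function names are placeholders I invented for illustration, since the real content will come from the community focus groups, not from us.

```python
# Hypothetical pre-approved message templates, keyed by (incident_type, audience).
# In practice each entry would be community-generated and vetted in advance,
# so a matching message can go out within minutes of an incident report.
TEMPLATES = {
    ("rumor", "youth"): "Before you forward it, check it: rumors divide, facts unite.",
    ("rumor", "elders"): "A rumor is reaching your area. Please counsel calm until facts are known.",
    ("clash", "youth"): "Violence costs us all. Choose peace for {location}.",
}

def select_message(incident_type, audience, location):
    """Pick the pre-determined template for this incident and audience, if any."""
    template = TEMPLATES.get((incident_type, audience))
    return template.format(location=location) if template else None

msg = select_message("clash", "youth", "Nakuru")
print(msg)
```

The point of the lookup table is preparedness: the slow, deliberative work (drafting, vetting, localizing) happens long before any incident, leaving only a fast dispatch decision at crisis time.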

The next step is for Rachel and her team to organize and hold several local focus groups in July to begin generating appropriate content for text messages to de-escalate and/or counter police-community tensions, rumors and insecurity. I’ll be back in Kenya in August to review this user-generated content so we can add the text messages to our SMS protocol and customized SMS platform. I’m thrilled and can’t wait to work on this next phase.

What United Airlines can Teach the World Bank about Mobile Accountability

Flight delays can sometimes lead to interesting discoveries. As my flight to DC was delayed for a third frustrating hour, I picked up the United Airlines in-flight magazine and saw this:

United just launched a novel feedback program that the World Bank and other development organizations may want to emulate given their interest in promoting upward accountability. From the United Press Release:

“Behind every great trip is an airline of great people. Now, when you receive excellent customer service from an eligible United [...] employee, you can enter him or her in United’s Outperform Recognition Program. If the employee you enter is a winner in our random drawing for cash prizes, you win, too. With just a few clicks on the United mobile app, you could have the chance to win MileagePlus award miles or even roundtrip tickets.”

“Eligible MileagePlus members can participate in the recognition program using the United mobile app, available for Apple and Android devices, to nominate eligible employees. MileagePlus members simply nominate the employee of their choice through the United mobile app.”

This participatory and crowdsourced recognition program is brilliant for several reasons. First, the focus is on identifying positive deviance rather than generating negative feedback. In other words, it is not a complaints but a rewards system. Second, the program is incentive-based with shared proceeds. Not only do United employees have the chance to make some extra cash (average salary of flight attendants is $36,128), those who nominate employees for outstanding service also share in the proceeds in the form of free tickets and airline miles.

Third, United didn’t develop a new, separate smartphone app or technology for this recognition program; they added the feature directly into the existing United app instead. (That said, they ought to give passengers the option of submitting an entry via United’s website as well since not everyone will be comfortable using a smartphone app). I’d also recommend they make some of the submissions available on a dedicated section of the United website to give users the option to browse through some of the feedback (and even digg up those they like the most).

I wonder whether other airlines in the StarAlliance network will adopt the same (or similar) recognition program. I also wonder whether donors like the World Bank ought to develop a similar solution (perhaps SMS-based) and require the use of this service for all projects funded by the Bank.

Big Data for Development: Challenges and Opportunities

The UN Global Pulse report on Big Data for Development ought to be required reading for anyone interested in humanitarian applications of Big Data. The purpose of this post is not to summarize this excellent 50-page document but to relay the most important insights contained therein. In addition, I question the motivation behind the unbalanced commentary on Haiti, which is my only major criticism of this otherwise authoritative report.

Real-time “does not always mean occurring immediately. Rather, ‘real-time’ can be understood as information which is produced and made available in a relatively short and relevant period of time, and information which is made available within a timeframe that allows action to be taken in response, i.e. creating a feedback loop. Importantly, it is the intrinsic time dimensionality of the data, and that of the feedback loop, that jointly define its characteristic as real-time. (One could also add that the real-time nature of the data is ultimately contingent on the analysis being conducted in real-time, and by extension, where action is required, used in real-time.)”

Data privacy “is the most sensitive issue, with conceptual, legal, and technological implications.” To be sure, “because privacy is a pillar of democracy, we must remain alert to the possibility that it might be compromised by the rise of new technologies, and put in place all necessary safeguards.” Privacy is defined by the International Telecommunication Union as the “right of individuals to control or influence what information related to them may be disclosed.” Moving forward, “these concerns must nurture and shape on-going debates around data privacy in the digital age in a constructive manner in order to devise strong principles and strict rules—backed by adequate tools and systems—to ensure ‘privacy-preserving analysis.’”

Non-representative data is often dismissed outright since findings based on such data cannot be generalized beyond that sample. “But while findings based on non-representative datasets need to be treated with caution, they are not valueless […].” Indeed, while the “sampling selection bias can clearly be a challenge, especially in regions or communities where technological penetration is low […], this does not mean that the data has no value. For one, data from ‘non-representative’ samples (such as mobile phone users) provide representative information about the sample itself—and do so in close to real time and on a potentially large and growing scale, such that the challenge will become less and less salient as technology spreads across and within developing countries.”

Perceptions rather than reality is what social media captures. Moreover, these perceptions can also be wrong. But only those individuals “who wrongfully assume that the data is an accurate picture of reality can be deceived. Furthermore, there are instances where wrong perceptions are precisely what is desirable to monitor because they might determine collective behaviors in ways that can have catastrophic effects.” In other words, “perceptions can also shape reality. Detecting and understanding perceptions quickly can help change outcomes.”

False data and hoaxes are part and parcel of user-generated content. While the challenges around reliability and verifiability are real, some media organizations, such as the BBC, stand by the utility of citizen reporting of current events: “there are many brave people out there, and some of them are prolific bloggers and Tweeters. We should not ignore the real ones because we were fooled by a fake one.” They have thus “devised internal strategies to confirm the veracity of the information they receive and choose to report, offering an example of what can be done to mitigate the challenge of false information.” See for example my 20-page study on how to verify crowdsourced social media data, a field I refer to as information forensics. In any event, “whether false negatives are more or less problematic than false positives depends on what is being monitored, and why it is being monitored.”

“The United States Geological Survey (USGS) has developed a system that monitors Twitter for significant spikes in the volume of messages about earthquakes,” and as it turns out, 90% of user-generated reports that trigger an alert have turned out to be valid. “Similarly, a recent retrospective analysis of the 2010 cholera outbreak in Haiti conducted by researchers at Harvard Medical School and Children’s Hospital Boston demonstrated that mining Twitter and online news reports could have provided health officials a highly accurate indication of the actual spread of the disease with two weeks lead time.”
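The spike-monitoring idea behind the USGS system can be illustrated with a simple sketch: flag a time bucket whenever the message count far exceeds a rolling average of recent buckets. This is a minimal illustration, not the USGS's actual algorithm; the window size and thresholds are made-up values for the example.

```python
from collections import deque

def detect_spike(counts, window=10, factor=5.0, min_count=20):
    """Flag indices where the message count in one time bucket far
    exceeds the rolling average of the preceding `window` buckets.

    `factor` and `min_count` are illustrative thresholds, not values
    used by any real monitoring system.
    """
    history = deque(maxlen=window)
    spikes = []
    for i, count in enumerate(counts):
        if len(history) == window:
            baseline = sum(history) / window
            # A spike must be well above the baseline AND above an
            # absolute floor, so quiet periods don't trigger alerts
            # on tiny fluctuations.
            if count >= min_count and count > factor * max(baseline, 1.0):
                spikes.append(i)
        history.append(count)
    return spikes

# Per-minute counts of tweets mentioning "earthquake": flat, then a burst.
counts = [2, 3, 1, 2, 4, 3, 2, 1, 3, 2, 150, 300, 280]
print(detect_spike(counts))  # → [10, 11, 12]
```

The real system would of course also filter by keyword, language and geography before counting, but the core signal is this kind of sudden deviation from a recent baseline.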

This leads to the other Haiti example raised in the report, namely the finding that SMS data was correlated with building damage. Please see my previous blog posts here and here for context. What the authors seem to overlook is that Benetech apparently did not submit their counter-findings for independent peer-review whereas the team at the European Commission’s Joint Research Center did—and the latter passed the peer-review process. Peer-review is how rigorous scientific work is validated. The fact that Benetech never submitted their blog post for peer-review is actually quite telling.

In sum, while this Big Data report is otherwise strong and balanced, I am really surprised that they cite a blog post as “evidence” while completely ignoring the JRC’s peer-reviewed scientific paper published in the Journal of the European Geosciences Union. Until counter-findings are submitted for peer review, the JRC’s results stand: unverified, non-representative crowdsourced text messages from the disaster-affected population in Port-au-Prince, translated from Haitian Creole to English via a novel crowdsourced volunteer effort and geo-referenced by hundreds of volunteers without any quality control, produced a statistically significant, positive correlation with building damage.

In conclusion, “any challenge with utilizing Big Data sources of information cannot be assessed divorced from the intended use of the information. These new, digital data sources may not be the best suited to conduct airtight scientific analysis, but they have a huge potential for a whole range of other applications that can greatly affect development outcomes.”

One such application is disaster response. Earlier this year, FEMA Administrator Craig Fugate gave a superb presentation on “Real Time Awareness” in which he relayed an example of how he and his team used Big Data (Twitter) during a series of devastating tornadoes in 2011:

“Mr. Fugate proposed dispatching relief supplies to the long list of locations immediately and received pushback from his team who were concerned that they did not yet have an accurate estimate of the level of damage. His challenge was to get the staff to understand that the priority should be one of changing outcomes, and thus even if half of the supplies dispatched were never used and sent back later, there would be no chance of reaching communities in need if they were in fact suffering tornado damage already, without getting trucks out immediately. He explained, “if you’re waiting to react to the aftermath of an event until you have a formal assessment, you’re going to lose 12-to-24 hours…Perhaps we shouldn’t be waiting for that. Perhaps we should make the assumption that if something bad happens, it’s bad. Speed in response is the most perishable commodity you have…We looked at social media as the public telling us enough information to suggest this was worse than we thought and to make decisions to spend [taxpayer] money to get moving without waiting for formal request, without waiting for assessments, without waiting to know how bad because we needed to change that outcome.”

Fugate also emphasized that using social media as an information source isn’t a precise science and the response isn’t going to be precise either. “Disasters are like horseshoes, hand grenades and thermonuclear devices, you just need to be close—preferably more than less.”

Big Data Philanthropy for Humanitarian Response

My colleague Robert Kirkpatrick from Global Pulse has been actively promoting the concept of “data philanthropy” within the context of development. Data philanthropy involves companies sharing proprietary datasets for social good. I believe we urgently need big (social) data philanthropy for humanitarian response as well. Disaster-affected communities are increasingly the source of big data, which they generate and share via social media platforms like Twitter. Processing this data manually, however, is very time consuming and resource intensive. Indeed, large numbers of digital humanitarian volunteers are often needed to monitor and process user-generated content from disaster-affected communities in near real-time.

Meanwhile, companies like Crimson Hexagon, Geofeedia, NetBase, Netvibes, RecordedFuture and Social Flow are defining the cutting edge of automated methods for media monitoring and analysis. So why not set up a Big Data Philanthropy group for humanitarian response in partnership with the Digital Humanitarian Network? Call it Corporate Social Responsibility (CSR) for digital humanitarian response. These companies would benefit from the publicity of supporting such positive and highly visible efforts. They would also receive expert feedback on their tools.

This “Emergency Access Initiative” could be modeled along the lines of the International Charter whereby certain criteria vis-a-vis the disaster would need to be met before an activation request could be made to the Big Data Philanthropy group for humanitarian response. These companies would then provide a dedicated account to the Digital Humanitarian Network (DHNet). These accounts would be available for 72 hours only and also be monitored by said companies to ensure they aren’t being abused. We would simply need to have relevant members of the DHNet trained on these platforms and draft the appropriate protocols, data privacy measures and MoUs.

I’ve had preliminary conversations with humanitarian colleagues from the United Nations and DHNet who confirm that “this type of collaboration would be seen very positively from the coordination area within the traditional humanitarian sector.” On the business development end, this setup would enable companies to get their foot in the door of the humanitarian sector—a multi-billion dollar industry. Members of the DHNet are early adopters of humanitarian technology and are ideally placed to demonstrate the added value of these platforms since they regularly partner with large humanitarian organizations. Indeed, DHNet operates as a partnership model. This would enable humanitarian professionals to learn about new Big Data tools, see them in action and, possibly, purchase full licenses for their organizations. In sum, data philanthropy is good for business.

I have colleagues at most of the companies listed above and thus plan to actively pursue this idea further. In the meantime, I’d be very grateful for any feedback and suggestions, particularly on the suggested protocols and MoUs. So I’ve set up this open and editable Google Doc for feedback.

Big thanks to the team at the Disaster Information Management Research Center (DIMRC) for planting the seeds of this idea during our recent meeting. Check out their very neat Emergency Access Initiative.

Geofeedia: Next Generation Crisis Mapping Technology?

My colleague Jeannine Lemaire from the Core Team of the Standby Volunteer Task Force (SBTF) recently pointed me to Geofeedia, which may very well be the next generation in crisis mapping technology. So I spent over an hour talking with Geofeedia’s CEO, Phil Harris, to learn more about the platform and discuss potential applications for humanitarian response. The short version: I’m impressed, not just with the technology and its potential, but also by Phil’s deep intuition and genuine interest in building a platform that enables others to scale positive social impact.

Situational awareness is absolutely key to emergency response, hence the rise of crisis mapping. The challenge? Processing and geo-referencing Big Data from social media sources to produce live maps has largely been a manual (and arduous) task for many in the humanitarian space. In fact, a number of humanitarian colleagues I’ve spoken to recently have complained that the manual labor required to create (and maintain) live maps is precisely why they aren’t able to launch their own crisis maps. I know this is also true of several international media organizations.

There have been several attempts at creating automated live maps. Take Havaria and Global Incidents Map, for example. But neither of these provides the customizability necessary for users to apply the platforms in meaningful ways. Enter Geofeedia. Let’s take the recent earthquake and 800 aftershocks in Emilia, Italy. Simply type in the place name (or an exact address) and hit enter. Geofeedia automatically parses Twitter, YouTube, Flickr, Picasa and Instagram for the latest updates in that area and populates the map with this content. The algorithm pulls in data that is already geo-tagged and designated as public.

The geo-tagging happens on the smartphone, laptop or desktop when an image or Tweet is generated. The platform then allows you to pivot between the map view and a collage of the automatically harvested content. Note that each entry includes a time stamp. Of course, since the search function is purely geo-based, the results will not be restricted to earthquake-related updates, hence the picture of friends at a picnic.

But let’s click on the picture of the collapsed roof directly to the left. This opens up a new page with the following: the original picture and a map displaying where this picture was taken.

In between these, you’ll note the source of the picture, the time it was uploaded and the author. Directly below this you’ll find the option to query the map further by geographic distance. Let’s click on the 300-meter option. The result is the updated collage below.

We now see a lot more content relevant to the earthquake than we did after the initial search. Geofeedia only parses recently published information, which adds temporal relevance to the geographic search. Combining these two dimensions yields a more filtered result. Incidentally, Geofeedia allows you to save and very easily share these searches and results. Now let’s click on the first picture on the top left.

Geofeedia allows you to create collections (top right-hand corner). I’ve called mine “Earthquake Damage” so I can collect all the relevant Tweets, pictures and video footage of the disaster. The platform gives me the option of inviting specific colleagues to view and help curate this new collection by adding other relevant content such as tweets and video footage. Together with Geofeedia’s multi-media approach, these features facilitate the clustering and triangulation of multi-media data in a very easy way.

Now let’s pivot from these search results in collage form to the search results in map view. This display can also be saved and shared with others.

One of the clear strengths of Geofeedia is the simplicity of the user interface. Key features and functions are aesthetically designed. For example, if we wish to view the YouTube footage that is closest to the circle’s center, simply click on the icon and the video can be watched in the pop-up on the same page.

Now notice the menu just to the right of the YouTube video. Geofeedia allows you to create geo-fences on the fly. For example, we can click on “Search by Polygon” and draw a “digital fence” of that shape directly onto the map with just a few clicks of the mouse. Say we’re interested in the residential area just north of Via Statale. Simply trace the area, double-click to finish and then press on the magnifying glass icon to search for the latest social media updates and Geofeedia will return all content with relevant geo-tags.
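Under the hood, a polygon geo-fence like this boils down to a point-in-polygon test against each item's geo-tag. Here is a minimal sketch of the standard ray-casting algorithm; the coordinates and function are illustrative only, not Geofeedia's implementation.

```python
def point_in_polygon(lon, lat, polygon):
    """Ray-casting test: cast a ray east from the point and count how
    many polygon edges it crosses. An odd count means the point is
    inside. `polygon` is a list of (lon, lat) vertices in order.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the point's latitude, and does the
        # crossing point lie to the east of the point?
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > lon:
                inside = not inside
    return inside

# Illustrative geo-fence (a rough rectangle near Emilia) and two updates.
fence = [(11.0, 44.8), (11.3, 44.8), (11.3, 45.0), (11.0, 45.0)]
print(point_in_polygon(11.1, 44.9, fence))  # True  -- inside the fence
print(point_in_polygon(11.5, 44.9, fence))  # False -- outside
```

A production system would pair this with a spatial index so that only nearby items are tested, but the per-item check is this simple.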

The platform allows us to filter these results further via the “Settings” menu as displayed below. On the technical side, the tool’s API supports ATOM/RSS, JSON and GeoRSS formats.
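To give a rough idea of what consuming such a feed involves, the sketch below parses point coordinates out of a generic GeoRSS-Simple document using Python's standard library. The sample XML is fabricated for illustration and is not actual Geofeedia output.

```python
import xml.etree.ElementTree as ET

# Fabricated GeoRSS-Simple sample; a real feed's contents will differ.
GEORSS = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom"
      xmlns:georss="http://www.georss.org/georss">
  <entry>
    <title>Collapsed roof near Via Statale</title>
    <georss:point>44.90 11.10</georss:point>
  </entry>
  <entry>
    <title>Aftershock photo</title>
    <georss:point>44.85 11.25</georss:point>
  </entry>
</feed>"""

NS = {"atom": "http://www.w3.org/2005/Atom",
      "georss": "http://www.georss.org/georss"}

def parse_points(xml_text):
    """Return (title, lat, lon) for each entry carrying a georss:point.
    GeoRSS-Simple encodes a point as 'lat lon' separated by whitespace."""
    root = ET.fromstring(xml_text)
    items = []
    for entry in root.findall("atom:entry", NS):
        point = entry.find("georss:point", NS)
        if point is not None and point.text:
            lat, lon = map(float, point.text.split())
            title = entry.findtext("atom:title", default="", namespaces=NS)
            items.append((title, lat, lon))
    return items

for title, lat, lon in parse_points(GEORSS):
    print(f"{title}: ({lat}, {lon})")
```

Each parsed point could then be dropped straight onto a map layer or pushed into another platform that accepts geo-referenced items.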

Geofeedia has a lot of potential vis-a-vis humanitarian applications, which is why the Standby Volunteer Task Force (SBTF) is partnering with the group to explore this potential further. A forthcoming blog post on the SBTF blog will outline this partnership in more detail.

In the meantime, below are a few thoughts and suggestions for Phil and team on how they can make Geofeedia even more relevant and compelling for humanitarian applications. A quick qualifier is in order beforehand, however. I often have a tendency to ask for the moon when discovering a new platform I’m excited about. The suggestions that follow are thus not criticism at all but rather the result of my imagination gone wild. So big congrats to Phil and team for having built what is already a very, very neat platform!

  • Topical search feature that enables users to search by location and a specific theme or topic.
  • Delete function that allows users to delete content that is not relevant to them either from the Map or Collage interface. In the future, perhaps some “basic” machine learning algorithms could be added to learn what types of content the user does not want displayed or prioritized.
  • Add function that gives users the option of adding relevant multi-media content, say perhaps from a blog post, a Wikipedia entry, news article or (Geo)RSS feed. I would be particularly interested in seeing a Storyful feed integrated into Geofeedia, for example. The ability to add KML files could also be interesting, e.g., a KML of an earthquake’s epicenter and estimated impact.
  • Commenting function that enables users to comment on individual data points (Tweets, pictures, etc) and a “discussion forum” feature that enables users to engage in text-based conversation vis-a-vis a specific data point.
  • Storify feature that gives users the ability to turn their curated content into a storify-like story board with narrative. A Storify plugin perhaps.
  • Ushahidi feature that enables users to export an item (Tweet, picture, etc) directly to an Ushahidi platform with just one click. This feature should also allow for the automatic publishing of said item on an Ushahidi map.
  • Alerts function that allows one to turn a geo-fence into an automated alert feature. For example, once I’ve created my geo-fence, having an option that allows me (and others) to subscribe to this geo-fence for future updates could be particularly interesting. These alerts would be sent out as emails (and maybe SMS) with a link to the new picture or Tweet that has been geo-tagged within the geographical area of the geo-fence. Perhaps each geo-fence could tweet updates directly to anyone subscribed to that Geofeedia deployment.
  • Trends alert feature that gives users the option of subscribing to specific trends of interest. For example, I’d like to be notified if the number of data points in my geo-fence increases by more than 25% within a 24-hour time period. Or more specifically whether the number of pictures has suddenly increased. These meta-level trends can provide important insights vis-a-vis early detection & response.
  • Analytics function that produces summary statistics and trends analysis for a geo-fence of interest. This is where Geofeedia could better capture temporal dynamics by including charts, graphs and simple time-series analysis to depict how events have been unfolding over the past hour vs 12 hours, 24 hours, etc.
  • Sentiment analysis feature that enables users to have an at-a-glance understanding of the sentiments and moods being expressed in the harvested social media content.
  • Augmented Reality feature … just kidding (sort-of).
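The trends alert suggested above is straightforward to express in code. A minimal sketch follows; the 25% threshold and 24-hour window are simply the values from the bullet point, and nothing here reflects an actual Geofeedia feature.

```python
def trend_alert(previous_count, current_count, threshold=0.25):
    """Return True if the number of data points in a geo-fence grew by
    more than `threshold` (default 25%) between two consecutive
    24-hour windows.
    """
    if previous_count == 0:
        # Any activity in a previously silent geo-fence counts as a trend.
        return current_count > 0
    growth = (current_count - previous_count) / previous_count
    return growth > threshold

# Data points geo-tagged inside the fence: yesterday vs. today.
print(trend_alert(previous_count=80, current_count=110))  # True: +37.5%
print(trend_alert(previous_count=80, current_count=90))   # False: +12.5%
```

The same comparison could be run per content type (pictures only, say) to catch the more specific signals mentioned above, with email or SMS dispatch hanging off the boolean result.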

Naturally, most or all of the above may not be in line with Geofeedia’s vision, purpose or business model. But I very much look forward to collaborating with Phil & team vis-a-vis our SBTF partnership. A big thanks to Jeannine once again for pointing me to Geofeedia, and equally big thanks to my SBTF colleague Timo Luege for his blog post on the platform. I’m thrilled to see more colleagues actively blog about the application of new technologies for disaster response.

On this note, anyone familiar with this new Iremos platform (above picture) from France? They recently contacted me to offer a demo.