Category Archives: Early Warning

Behind the Scenes: The Digital Operations Center of the American Red Cross

The Digital Operations Center at the American Red Cross is an important and exciting development. I recently sat down with Wendy Harman to learn more about the initiative and to exchange some lessons learned in this new world of digital humanitarians. One common challenge in emergency response is scaling. The American Red Cross cannot be everywhere at the same time—and that includes being on social media. More than 4,000 tweets reference the Red Cross on an average day, a figure that skyrockets during disasters. And when crises strike, so does Big Data. The Digital Operations Center is one response to this scaling challenge.

Sponsored by Dell, the Center uses customized software produced by Radian 6 to monitor and analyze social media in real time. The Center itself seats three people who have access to six customized screens that relay relevant information drawn from various social media channels. The first screen below depicts some of the key topical areas that the Red Cross monitors, e.g., references to the American Red Cross, Storms in 2012, and Delivery Services.

Circle sizes in the first screen depict the volume of references related to that topic area. The color coding (red, green and beige) relates to sentiment analysis (beige being neutral). The dashboard with the “speed dials” right underneath the first screen provides more details on the sentiment analysis.
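To make the volume-and-sentiment encoding concrete, here is a minimal sketch (my own illustration, not Radian 6's actual logic) of how mentions might be aggregated per topic and mapped to circle size and color; the topics and scores below are made up:

```python
# Aggregate mentions per topic: count = circle size, average sentiment = color.
# Topic names and sentiment scores are hypothetical placeholders.
from collections import defaultdict

mentions = [
    {"topic": "American Red Cross", "sentiment": 0.4},
    {"topic": "Storms 2012", "sentiment": -0.6},
    {"topic": "Storms 2012", "sentiment": -0.2},
    {"topic": "Delivery Services", "sentiment": 0.0},
]

volume = defaultdict(int)          # circle size ~ number of mentions
sentiment_sum = defaultdict(float)

for m in mentions:
    volume[m["topic"]] += 1
    sentiment_sum[m["topic"]] += m["sentiment"]

def color(avg):
    # red = negative, green = positive, beige = neutral
    if avg > 0.1:
        return "green"
    if avg < -0.1:
        return "red"
    return "beige"

for topic, count in volume.items():
    avg = sentiment_sum[topic] / count
    print(topic, "size:", count, "color:", color(avg))
```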

Let’s take a closer look at the circles from the first screen. The dots “orbiting” the central icon relate to the categories of key words that the Radian 6 platform parses. You can click on these orbiting dots to “drill down” and view the individual key words that make up that specific category. This circles screen gets updated in near real-time and draws on data from Twitter, Facebook, YouTube, Flickr and blogs. (Note that the distance between the orbiting dots and the center does not represent anything.)

An operations center would of course not be complete without a map, so the Red Cross uses two screens to visualize different data on two heat maps. The one below depicts references made on social media platforms vis-a-vis storms that have occurred during the past 3 days.

The screen below the map highlights the bios of 50 individual Twitter users who have made references to the storms. All this data gets generated from the “Engagement Console” pictured below. The purpose of this web-based tool, which looks a lot like TweetDeck, is to enable the Red Cross to customize the specific types of information they’re looking for, and to respond accordingly.

Let’s look at the Console more closely. In the Workflow section on the left, users decide what types of tags they’re looking for and can also filter by priority level. They can also specify the type of sentiment they’re looking for, e.g., negative feelings vis-a-vis a particular issue. In addition, they can take certain actions in response to each information item. For example, they can reply to a tweet, a Facebook status update, or a blog post; and they can do this directly from the Engagement Console. Based on the license that the Red Cross uses, up to 25 of their team members can access the Console and collaborate in real-time when processing the various tweets and Facebook updates.
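To illustrate the kind of workflow filtering described above, here is a rough sketch (not the Radian 6 API; all field names and values are hypothetical) of selecting high-priority, negative items into a reply queue:

```python
# Minimal sketch of Console-style filtering: pick items by tag, priority and
# sentiment before responding. Data and field names are illustrative only.
items = [
    {"text": "No power since the storm, need shelter info",
     "tags": ["storms-2012"], "priority": "high", "sentiment": "negative"},
    {"text": "Thanks @RedCross for the blankets!",
     "tags": ["american-red-cross"], "priority": "low", "sentiment": "positive"},
]

def matches(item, tag=None, priority=None, sentiment=None):
    # Each filter is optional; unset filters are ignored.
    if tag and tag not in item["tags"]:
        return False
    if priority and item["priority"] != priority:
        return False
    if sentiment and item["sentiment"] != sentiment:
        return False
    return True

queue = [i for i in items if matches(i, priority="high", sentiment="negative")]
for item in queue:
    print("Needs a reply:", item["text"])
```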

The Console also allows users to create customized timelines, charts and word cloud graphics to better understand trends changing over time in the social media space. To fully leverage this social media monitoring platform, Wendy and team are also launching a digital volunteers program. The goal is for these volunteers to eventually become the prime users of the Radian platform and to filter the bulk of relevant information in the social media space. This would considerably lighten the load for existing staff. In other words, the volunteer program would help the American Red Cross scale in the social media world we live in.

Wendy plans to set up a dedicated 2-hour training for individuals who want to volunteer online in support of the Digital Operations Center. These trainings will be carried out via Webex and will also be available to existing Red Cross staff.


As argued in this previous blog post, the launch of this Digital Operations Center is further evidence that the humanitarian space is ready for innovation and that some technology companies are starting to think about how their solutions might be applied for humanitarian purposes. Indeed, it was Dell that first approached the Red Cross with an expressed interest in contributing to the organization’s efforts in disaster response. The initiative also demonstrates that combining automated natural language processing solutions with a digital volunteer network seems to be a winning strategy, at least for now.

After listening to Wendy describe the various tools she and her colleagues use as part of the Operations Center, I began to wonder whether these types of tools will eventually become free and easy enough for one person to be her very own operations center. I suppose only time will tell. Until then, I look forward to following the Center’s progress and hope it inspires other emergency response organizations to adopt similar solutions.

Twitcident: Filtering Tweets in Real-Time for Crisis Response

The most recent newcomer to the “tweetsourcing” space comes to us from Delft University of Technology in the Netherlands. Twitcident is a web-based filtering system that extracts crisis information from Twitter in real-time to support emergency response efforts. Dutch emergency services have been testing the platform over the past 10 months and results “show the system to be far more useful than simple keyword searching of a twitter feed” (NewScientist).

Here’s how it works. First the dashboard, which shows current events-of-interest being monitored.

Let’s click on “Texas”, which produces the following page. More than 22,000 tweets potentially relate to the actual fire of interest.

This is where the filtering comes in:

The number of relevant tweets is reduced with every applied filter.

Naturally, geo-location is also an optional filter.

Twitcident also allows for various visualization options, including timelines, word clouds and charts.

The system also allows the user to view the filtered tweets on a map. The pictures and videos shared via twitter are also aggregated and viewable on dedicated tabs.
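The following is only a rough illustration of the successive-filtering idea, with made-up tweets and filters (Twitcident's actual pipeline is more sophisticated):

```python
# Minimal sketch of successive filtering: each applied filter shrinks the
# candidate set. Tweets, keywords and bounding box are hypothetical.
tweets = [
    {"text": "Huge wildfire near Austin, smoke everywhere", "lat": 30.3, "lon": -97.7},
    {"text": "Texas BBQ is on fire tonight!", "lat": None, "lon": None},
    {"text": "Evacuations ordered as fire spreads in Bastrop", "lat": 30.1, "lon": -97.3},
]

def keyword_filter(ts, words):
    return [t for t in ts if any(w in t["text"].lower() for w in words)]

def geo_filter(ts, lat_range, lon_range):
    return [t for t in ts
            if t["lat"] is not None
            and lat_range[0] <= t["lat"] <= lat_range[1]
            and lon_range[0] <= t["lon"] <= lon_range[1]]

candidates = keyword_filter(tweets, ["fire", "wildfire", "evacuation"])
print(len(candidates), "tweets after keyword filter")   # 3
candidates = geo_filter(candidates, (29.5, 31.0), (-98.5, -96.5))
print(len(candidates), "tweets after geo filter")       # 2
```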

The developers of the platform have not revealed how their algorithms work but will demo the tool at the World Wide Web 2012 conference in France next week. In the meantime, here’s a graphic that summarizes the platform workflow.

I look forward to following Twitcident’s developments. I’d be particularly interested in learning more about how Dutch emergency services have been using the tool and what features they think would improve the platform’s added value.

SMS for Violence Prevention: PeaceTXT International Launches in Kenya

[Cross-posted from my post on the Ushahidi blog]

One of the main reasons I’m in Nairobi this month is to launch PeaceTXT International with PopTech, Praekelt Foundation, Sisi ni Amani and several other key partners. PeaceTXT International is a spin-off from the original PeaceTXT project that several of us began working on with CeaseFire Chicago last year. I began thinking about the many possible international applications of the PeaceTXT project during our very first meeting, which is why I am thrilled and honored to be spearheading the first PeaceTXT International pilot project.

The purpose of PeaceTXT is to leverage mobile messaging to catalyze behavior change around peace and conflict issues. In the context of Chicago, the joint project with CeaseFire aims to leverage SMS reminders to interrupt gun violence in marginalized neighborhoods. Several studies in other fields of public health have already shown the massive impact that SMS reminders can have on behavior change, e.g., improving drug adherence behavior among AIDS and TB patients in Africa, Asia and South America.

Our mobile messaging campaign in Chicago builds on another very successful one in the US: “Friends Don’t Let Friends Drink and Drive.” Inspired by this approach, the PeaceTXT Team is looking to launch a friends-don’t-let-friends-get-killed campaign. Focus groups recently conducted with high-risk individuals have resulted in rich content for several dozen reminder messages (see below) that could be disseminated via SMS. Note that CeaseFire has been directly credited for significantly reducing the number of gun-related killings in Chicago over the past 10 years. In other words, they have a successful and proven methodology; one being applied to several other cities and countries worldwide. PeaceTXT simply seeks to scale this success by introducing SMS.

These messages are user-generated in that the content was developed by high-risk individuals themselves—i.e., those most likely to get involved in gun violence. The messages are not limited to reminders. Some also prompt the community to get engaged by responding to various questions. Indeed, the project seeks to crowdsource community solutions to gun violence and thus greater participation. When high-risk individuals were asked how they’d feel if they were to receive these messages on their phones, they had the following to share: “makes me feel like no one is forgetting about me”; “message me once a day to make a difference.”

Given that both forwarding and saving text messages is very common among the population that CeaseFire works with, the team hopes that the text messages will circulate and recycle widely. Note that the project is still in prototype phase but going into implementation mode as of 2012. So we’ll have to wait and see how the project fares and what the initial impact looks like.

In the meantime, PeaceTXT is partnering with Sisi ni Amani (We are Peace) to launch its first international pilot project. Rachel Brown, who spearheads the initiative, first got in touch with me back in the Fall of 2009 whilst finishing her undergraduate studies at Tufts. Rachel was interested in crowdsourcing a peace map of Kenya, which I blogged about here shortly after our first conversation. Since then, Rachel and her team have set up the Kenyan NGO Sisi ni Amani Kenya (SnA-K) to leverage mobile technology for awareness raising and civic engagement with the aim of preventing possible violence during next year’s Presidential Elections.

SnA-K currently manages a ~10,000 member SMS subscriber list in Baba Dogo and Korogocho, Kamukunji and Narok. SnA-K’s SMS campaigns focus on voter education, community cohesion and rumor prevention. What SnA-K needs, however, is scalable SMS broadcasting technology, the kind of focus that PeaceTXT brought to CeaseFire Chicago, and the unique response methodology developed by the CeaseFire team. So I reached out to Rachel early on during the work in Chicago to let her know about PeaceTXT and to gain insights from her projects in Kenya. We set up regular conference calls throughout the year to keep each other informed of our respective progress and findings.

Soon enough, PopTech’s delightful Leetha Filderman asked me to put together a pitch for international applications of PeaceTXT’s work, an initiative I have “code-named” PeaceTXT International. I was absolutely thrilled when she shared the good news at PopTech 2011 that our donor, the Rita Allen Foundation, had provided us with additional funding, some of which could go towards an international pilot project. Naturally, Sisi ni Amani was a perfect fit.

So we organized a half-day brainstorming session at the iHub last week to chart the way forward on PeaceTXT Kenya. For example, what is the key behavioral change variable (like friendship in Chicago) that is most likely to succeed in Kenya? As for interrupting violence, how can the CeaseFire methodology be customized for the SnA-K context? Finally, what kind of SMS broadcasting technology do we need to have in place to provide maximum flexibility and scalability earlier rather than later? Answering these questions and implementing scalable solutions essentially forms the basis of the partnership between SnA-K and PeaceTXT (which also includes Mobile:Medic & Revolution Messaging). We have some exciting leads on next steps and will be sure to blog about them as we move forward to get feedback from the wider community.

Conflicts are often grounded in the stories and narratives that people tell themselves and the emotions that these stories generate. Narratives shape identity and the social construct of reality—we interpret our lives through stories. These have the power to transform relationships and communities. We believe the PeaceTXT model can be applied to catalyze behavior change vis-a-vis peace and conflict issues at the community level by amplifying new narratives via SMS. There is considerable potential here and still much to learn, which is why I’m thrilled to be working with SnA, PopTech & partners on launching our first international pilot project: PeaceTXT Kenya.

Using Ushahidi Data to Study the Micro-Dynamics of Violent Conflict

The field of conflict analysis has long been handicapped by the country-year straightjacket. This is beginning to change thanks to the increasing availability of subnational and sub-annual conflict data. In the past, one was limited to macro-level data, such as the number of casualties resulting from violent conflict in a given country and year. Today, datasets such as the Armed Conflict Location and Event Data (ACLED) provide considerably more temporal and spatial resolution. Another example is this quantitative study: “The Micro-dynamics of Reciprocity in an Asymmetric Conflict: Hamas, Israel, and the 2008-2009 Gaza Conflict,” authored by NYU PhD Candidate Thomas Zeitzoff.


I’ve done some work on conflict event-data and reciprocity analysis in the past (such as this study of Afghanistan), but Thomas is really breaking new ground here with the hourly temporal resolution of the conflict analysis, which was made possible by Al-Jazeera’s War on Gaza project powered by the Ushahidi platform.

ABSTRACT

The Gaza Conflict (2008-2009) between Hamas and Israel was defined by the participants’ strategic use of force. Critics of Israel point to the large number of Palestinian casualties compared to Israelis killed as evidence of a disproportionate Israeli response. I investigate Israeli and Hamas response patterns by constructing a unique data set of hourly conflict intensity scores from new social media and news sources over the nearly 600 hours of the conflict. Using vector autoregression techniques (VAR), I find that Israel responds about twice as intensely to a Hamas escalation as Hamas responds to an Israeli escalation. Furthermore, I find that both Hamas’ and Israel’s response patterns change once the ground invasion begins and after the UN Security Council votes. (Study available as PDF here).
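For readers unfamiliar with vector autoregression, the sketch below (my own illustration with random placeholder data, not Zeitzoff's code) shows how a VAR can be fit to two hourly intensity series and the impulse responses examined:

```python
# Minimal VAR sketch: two hourly "intensity" series, lag order chosen by AIC,
# and cumulative impulse responses, which is the kind of quantity the study
# compares across conflict phases. The data here are random placeholders.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
hours = 600
data = pd.DataFrame({
    "israel_intensity": rng.poisson(2, hours).astype(float),
    "hamas_intensity": rng.poisson(1, hours).astype(float),
})

model = VAR(data)
results = model.fit(maxlags=6, ic="aic")   # lag order selected by AIC
irf = results.irf(12)                      # impulse responses over 12 hours
print(irf.cum_effects[-1])                 # cumulative responses after 12 hours
```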

As Thomas notes, “Ushahidi worked with Al-Jazeera to track events on the ground in Gaza via SMS messages, email, or the web. Events were then sent in by reporters and civilians through the platform and put into a Twitter feed entitled AJGaza, which gave the event a time stamp. By cross-checking with other sources such as Reuters, the UN, and the Israeli newspaper Haaretz, I was able to see that the time stamp was usually within a few minutes of event occurrence.”

Key Highlights from the study:

  • Hamas’ cumulative response intensity to an Israeli escalation decreases (by about 17 percent) after the ground invasion begins. Conversely, Israel’s cumulative response intensity after the invasion increases by about three fold.
  • Both Hamas and Israel’s cumulative response drop after the UN Security Council vote on January 8th, 2009 for an immediate cease-fire, but Israel’s drops more than Hamas’ (about 30 percent to 20 percent decrease).
  • For the period covering the whole conflict, Hamas would react (on average) to a “surprise” 1 event (15 minute interval) of Israeli misinformation/psy-ops with the equivalent of 1 extra incident of mortar fire/endangering civilians.
  • Before the invasion, Hamas would respond to a 1 hour shock of targeted air strikes with 3 incidents of endangering civilians. Comparatively, after the invasion, Hamas would only respond to that same Israeli shock with 3 incidents of psychological warfare.
  • The results confirm my hypotheses that Israel’s reactions were more dependent upon Hamas and that these responses were contextually dependent.
  • Wikipedia’s Timeline of the 2008-2009 Gaza Conflict was particularly helpful in sourcing and targeting events that might have diverging reports (i.e. controversial).

[An earlier version of this blog post appeared on my Early Warning blog]

The Mathematics of War: On Earthquakes and Conflicts

A conversation with my colleague Sinan Aral at PopTech 2011 reminded me of some earlier research I had carried out on the mathematics of war. So this is a good time to share some of the findings from this research. The story begins some 60 years ago, when British physicist Lewis Fry Richardson found that international wars follow what is called a power law distribution. A power law distribution relates the frequency and “magnitude” of events. For example, the Richter scale relates the size of earthquakes to their frequency. Richardson found that the frequency of international wars and the number of casualties each produced followed a power law.
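For reference, the standard frequency-magnitude form of a power law (a textbook formulation, not taken from Richardson's original work) is:

```latex
% Frequency-magnitude form of a power law: the probability of an event of
% size x decays as a power of x, so log-frequency is linear in log-magnitude.
p(x) \propto x^{-\alpha},
\qquad
\log p(x) = -\alpha \log x + \mathrm{const}.
```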

More recently, my colleague Lars-Erik Cederman sought to explain Richardson’s findings in his 2003 peer-reviewed publication “Modeling the Size of Wars: From Billiard Balls to Sandpiles.” However, Lars used an invalid statistical technique to test for power law distributions. In 2005, I began collaborating with Professors Neil Johnson and Michael Spagat on related research after I came across their fascinating co-authored study that tested casualty distributions in new wars (internal conflicts) for power laws. Though he was not a co-author on the 2005 study, my colleague Sean Gourley presented this research at TED in 2009.

In any case, I invited Michael to present his research at The Fletcher School in the Fall of 2005 to generate interest here. Shortly after, I suggested to Michael that we test whether conflict events, in addition to casualties, followed a power law distribution. I had access to an otherwise proprietary dataset on conflict events that spanned a longer time period than the casualty datasets that he and Neil were working with. I also suggested we try to test whether casualties from natural disasters follow a power law distribution.

We chose to pursue the latter first and I submitted an abstract to the 2006 American Political Science Association (APSA) conference to present our findings. Soon after, I was accepted to the Santa Fe Institute’s Complex Systems Summer Institute for PhD students and took the opportunity to pursue my original research in testing conflict events for power law distributions with my colleague Dr. Ryan Woodard.

The APSA paper, presented in August 2006, was entitled “Natural Disasters, Casualties and Power Laws:  A Comparative Analysis with Armed Conflict” (PDF). Here is the paper’s abstract and findings:

Power-law relationships, relating events with magnitudes to their frequency, are common in natural disasters and violent conflict. Compared to many statistical distributions, power laws drop off more gradually, i.e. they have “fat tails”. Existing studies on natural disaster power laws are mostly confined to physical measurements, e.g., the Richter scale, and seldom cover casualty distributions. Drawing on the Center for Research on the Epidemiology of Disasters (CRED) International Disaster Database, 1980 to 2005, we find strong evidence for power laws in casualty distributions for all disasters combined, both globally and by continent except for North America and non-EU Europe. This finding is timely and gives useful guidance for disaster preparedness and response since natural catastrophes are increasing in frequency and affecting larger numbers of people.  We also find that the slopes of the disaster casualty power laws are much smaller than those for modern wars and terrorism, raising an open question of how to explain the differences. We show that many standard risk quantification methods fail in the case of natural disasters.
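To make the estimation concrete, here is a minimal sketch (synthetic data, not the paper's code) of the standard maximum-likelihood estimator for a power-law exponent, alpha = 1 + n / sum(ln(x_i / x_min)):

```python
# Draw a synthetic power-law sample with exponent ~2.5 via inverse transform
# sampling, then recover the exponent with the maximum-likelihood estimator.
import numpy as np

rng = np.random.default_rng(1)
x_min = 10.0
u = rng.uniform(size=5000)
sample = x_min * (1 - u) ** (-1.0 / (2.5 - 1.0))   # Pareto tail, alpha = 2.5

tail = sample[sample >= x_min]
alpha_hat = 1.0 + len(tail) / np.sum(np.log(tail / x_min))
print(f"estimated exponent: {alpha_hat:.2f}")       # should be close to 2.5
```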


Dr. Woodard and I presented our research on power laws and conflict events at SFI in June 2006. We produced a paper in August of that year entitled “Concerning Critical Correlations in Conflict, Cooperation and Casualties” (PDF). As the title implies, we also tested whether cooperative events followed a power law. As far as I know, we were the first to test conflict events, not to mention cooperative events, for power laws. In addition, we looked at conflict/cooperation (C/C) events in Western countries.

The abstract and some findings are included below:

Knowing that the number of casualties of war are distributed as a power law and given a rich data set of conflict and cooperation (C/C) events, we ask: Are there correlations among C/C events? Is there a correlation between C/C events and war casualties? Can C/C data be used as proxy for (potentially) less reliable casualty data? Can C/C data be used in conflict early warning systems? To begin to answer these questions we analyze the distribution of C/C event data for the period 1990–2004 in Afghanistan, Colombia, Iran, Iraq, North Korea, Switzerland, UK and USA. We find that the distributions of individual C/C event types scale as power laws, but only over approximately a single decade, leaving open the possibility of a more appropriate fit (for which we have not yet tested). However, the average exponent of the power law (2.5) is the same as that found in recent studies of casualties of war. We find low levels of correlations between C/C events in Iraq and Afghanistan but not in the other countries studied. We find that the distribution of the sum of all conflict or cooperation events scales exponentially. Finally, we find low levels of correlations between a two year time series of casualties in Afghanistan and the corresponding conflict events.
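As a toy illustration of the correlation tests mentioned in the abstract, the sketch below (synthetic weekly counts, not our original data or code) computes a simple Pearson correlation between two event series:

```python
# Correlate a two-year weekly series of conflict events with casualties.
# Both series are synthetic and only weakly coupled by construction.
import numpy as np

rng = np.random.default_rng(2)
weeks = 104
conflict_events = rng.poisson(5, weeks).astype(float)
casualties = 2 * conflict_events + rng.normal(0, 5, weeks)

r = np.corrcoef(conflict_events, casualties)[0, 1]
print(f"Pearson correlation: {r:.2f}")
```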


I’m looking forward to discussing all this further with Sinan and to learning more about his fascinating area of research.

Detecting Emerging Conflicts with Web Mining and Crisis Mapping

My colleague Christopher Ahlberg, CEO of Recorded Future, recently got in touch to share some exciting news. We had discussed our shared interests a while back at Harvard University. It was clear then that his ideas and existing technologies were very closely aligned to those we were pursuing with Ushahidi’s Swift River platform. I’m thrilled that he has been able to accomplish a lot since we last spoke. His exciting update is captured in this excellent co-authored study entitled “Detecting Emergent Conflicts Through Web Mining and Visualization” which is available here as a PDF.

The study combines almost all of my core interests: crisis mapping, conflict early warning, conflict analysis, digital activism, pattern recognition, natural language processing, machine learning, data visualization, etc. The study describes a semi-automatic system that automatically collects information from pre-specified sources and then applies linguistic analysis to extract user-specified events and entities, i.e., structured data for quantitative analysis.

Natural Language Processing (NLP) and event-data extraction applied to crisis monitoring and analysis is of course nothing new. Back in 2004-2005, I worked for a company that was at the cutting edge of this field vis-a-vis conflict early warning. (The company subsequently joined the Integrated Crisis Early Warning System (ICEWS) consortium supported by DARPA). Just a year later, Larry Brilliant told TED 2006 how the Global Public Health Intelligence Network (GPHIN) had leveraged NLP and machine learning to detect an outbreak of SARS 3 months before the WHO. I blogged about this, Global Incident Map, European Media Monitor (EMM), Havaria, HealthMap and Crimson Hexagon back in 2008. Most recently, my colleague Kalev Leetaru showed how applying NLP to historical data could have predicted the Arab Spring. Each of these initiatives represents an important effort in leveraging NLP and machine learning for early detection of events of interest.

The RecordedFuture system works as follows. A user first selects a set of data sources (websites, RSS feeds, etc) and determines the rate at which to update the data. Next, the user chooses one or several existing “extractors” to find specific entities and events (or constructs a new type). Finally, a taxonomy is selected to specify exactly how the data is to be grouped. The data is then automatically harvested and passed through a linguistics analyzer which extracts useful information such as event types, names, dates, and places. Finally, the reports are clustered and visualized on a crisis map, in this case using an Ushahidi platform. This allows for all kinds of other datasets to be imported, compared and analyzed, such as high resolution satellite imagery and crowdsourced data.
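As a rough illustration of this harvest-extract-map flow (not RecordedFuture's implementation; the source list, the stand-in regex extractor and the crisis-map endpoint below are all hypothetical placeholders), consider:

```python
# Sketch of a harvest -> extract -> publish pipeline. The extractor is a
# toy regex standing in for real linguistic analysis, and the endpoint is a
# placeholder, not the actual Ushahidi/Crowdmap API.
import re
import requests

SOURCES = ["https://example.com/feed1.rss"]   # hypothetical source list

def harvest(urls):
    # Retrieve raw documents from the configured sources.
    return [requests.get(u, timeout=10).text for u in urls]

def extract_events(text):
    # Stand-in "linguistic analysis": keep sentences mentioning protests.
    return [s for s in re.split(r"(?<=[.!?])\s+", text) if "protest" in s.lower()]

def push_to_map(event_text):
    # Placeholder for creating a report on a crisis-map platform.
    payload = {"title": event_text[:80], "description": event_text}
    requests.post("https://example.crowdmap.com/api/reports", data=payload, timeout=10)

for doc in harvest(SOURCES):
    for event in extract_events(doc):
        push_to_map(event)
```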

A key feature of the RecordedFuture system is that it extracts and estimates the time of the event described, rather than, say, the publication time of the newspaper article being parsed. As such, the harvested data can include both historic and future events.
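A small sketch of why that distinction matters (field names are illustrative only): storing the estimated event time separately from the publication time lets the timeline include events that have not yet happened.

```python
# Keep both timestamps on each extracted event so future events can be
# placed on the timeline. Names and dates are illustrative only.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ExtractedEvent:
    description: str
    publication_time: datetime       # when the source article appeared
    estimated_event_time: datetime   # when the described event occurs (possibly in the future)

events = [
    ExtractedEvent("Opposition calls for protest on Friday",
                   publication_time=datetime(2011, 5, 2),
                   estimated_event_time=datetime(2011, 5, 6)),
]

now = datetime(2011, 5, 3)
future_events = [e for e in events if e.estimated_event_time > now]
print(len(future_events), "event(s) on the timeline lie in the future")
```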

In sum, the RecordedFuture system is composed of the following five features as described in the study:

1. Harvesting: a process in which text documents are retrieved from various sources and stored in the database. The documents are stored for long-term if permitted by terms of use and IPR legislation, otherwise they are only stored temporarily for the needed analysis.

2. Linguistic analysis: the process in which the retrieved texts are analyzed in order to extract entities, events, time and location, etc. In contrast to other components, the linguistic analysis is language dependent.

3. Refinement: additional information can be obtained in this process by synonym detection, ontology analysis, and sentiment analysis.

4. Data analysis: application of statistical and AI-based models such as Hidden Markov Models (HMMs) and Artificial Neural Networks (ANNs) to generate predictions about the future and detect anomalies in the data.

5. User experience: a web interface for ordinary users to interact with, and an API for interfacing to other systems.

The authors ran a pilot that “manually” integrated the RecordedFuture system with the Ushahidi platform. The result is depicted in the figure below. In the future, the authors plan to automate the creation of reports on the Ushahidi platform via the RecordedFuture system. Intriguingly, the authors chose to focus on protest events to demo their Ushahidi-coupled system. Why is this intriguing? Because my dissertation analyzed whether access to new information and communication technologies (ICTs) is a statistically significant predictor of protest events in repressive states. Moreover, the protest data I used in my econometric analysis came from an automated NLP algorithm that parsed Reuters Newswires.

Using RecordedFuture, the authors extracted some 6,000 protest events for the first quarter of 2011. These events were identified and harvested using a “trained protest extractor” constructed with the system’s event extractor framework. Note that many of the 6,000 events are duplicates because they are the same events reported by different sources. Not surprisingly, Christopher and team plan to develop a duplicate detection algorithm that will also double as a triangulation & veracity scoring feature. I would be particularly interested to see them do this kind of triangulation and validation of crowdsourced data on the fly.
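Duplicate detection of this kind can be illustrated with something as simple as word-overlap similarity; the sketch below (my own toy example, not the planned algorithm) flags report pairs whose Jaccard similarity crosses a threshold:

```python
# Flag two reports as likely duplicates when their word overlap (Jaccard
# similarity) exceeds a threshold. Reports and threshold are illustrative.
def jaccard(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

reports = [
    "Thousands protest in Tahrir Square demanding reforms",
    "Protest in Tahrir Square draws thousands demanding reforms",
    "Strike closes port in Alexandria",
]

THRESHOLD = 0.5
for i in range(len(reports)):
    for j in range(i + 1, len(reports)):
        if jaccard(reports[i], reports[j]) >= THRESHOLD:
            print("Likely duplicates:", i, j)
```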

Below are the protest events picked up by RecordedFuture for both Tunisia and Egypt. From these two figures, it is possible to see how the Tunisian protests preceded those in Egypt.

The authors argue that if the platform had been set up earlier this year, a user would have seen the sudden rise in the number of protests in Egypt. However, the authors acknowledge that their data is a function of media interest and attention—the same issue I had with my dissertation. One way to overcome this challenge might be by complementing the harvested reports with crowdsourced data from social media and Crowdmap.
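Spotting the kind of sudden rise mentioned above can be illustrated with a simple rolling baseline; the sketch below uses synthetic daily counts and an arbitrary mean-plus-three-standard-deviations rule, not RecordedFuture's method:

```python
# Flag days whose protest count exceeds a trailing baseline. Counts are
# synthetic; the 7-day window and 3-sigma rule are just one reasonable choice.
import numpy as np

counts = np.array([3, 4, 2, 5, 3, 4, 3, 6, 4, 5, 18, 25, 30])  # spike at the end
window = 7

for day in range(window, len(counts)):
    baseline = counts[day - window:day]
    threshold = baseline.mean() + 3 * baseline.std()
    if counts[day] > threshold:
        print(f"Day {day}: {counts[day]} protests exceeds threshold {threshold:.1f}")
```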

In the future, the authors plan to have the system auto-detect major changes in trends and to add support for the analysis of media in languages beyond English. They also plan to test the reliability and accuracy of their conflict early warning algorithm by comparing their forecasts of historical data with existing conflict data sets. I have several ideas of my own about next steps and look forward to speaking with Christopher’s team about ways to collaborate.

Real Time LRA Crisis Map Tracks Mass Atrocities in Central Africa

My colleagues at Resolve and Invisible Children have just launched their very impressive Crisis Map of LRA Attacks in Central Africa. The LRA, or Lord’s Resistance Army, is a brutal rebel group responsible for widespread mass atrocities, most of which go completely unreported because the killings and kidnappings happen in remote areas. This crisis map has been a long time in the making, so I want to sincerely congratulate Michael Poffenberger, Sean Poole, Adam Finck, Kenneth Transier and the entire team for the stellar job they’ve done with this project. The LRA Crisis Tracker is an important milestone for the fields of crisis mapping and early warning.

The Crisis Tracker team did an excellent job putting together a detailed code book (PDF) for this crisis map, a critical piece of any crisis mapping and conflict early warning project that is all too often ignored or rushed by most. The reports mapped on Crisis Tracker come from Invisible Children’s local Early Warning Radio Network, UN agencies and local NGOs. Invisible Children’s radio network also provides local communities with the ability to receive warnings of LRA activity and alert local security forces to LRA violence.

When I sat down with Resolve’s Kenneth Transier earlier this month, he noted that the majority of the reports depicted on their LRA crisis map represent new and original information. He also noted that they currently have 22 months of solid data, with historical and real-time data entry on-going. You can download the data here. Note that the public version of this data does not include the most sensitive information for security reasons.

The Crisis Tracker team also provides monthly and quarterly security briefs, analyzing the latest data they’ve collected for trends and patterns. This project is by far the most accurate, up-to-date and comprehensive source of information on LRA atrocities, which the partners hope will improve efforts to protect vulnerable communities in the region. Indeed, the team has joined forces with a number of community-run protection organizations in Central Africa that hope to benefit from the team’s regular crisis reports.

The project is also innovative because of the technology being used. Michael got in touch about a year ago to learn more about the Ushahidi platform and after a series of conversations decided that they needed more features than were currently available from Ushahidi, especially on the data visualization side. So I put them in touch with my colleagues at Development Seed. Ultimately, the team partnered with a company called Digitaria, which used the backend of a Salesforce platform and a customized content management system to publish the information to the crisis map. This is an important contribution to the field of crisis mapping and I do hope that Digitaria shares their technology with other groups. Indeed, the fact that new crisis mapping technologies are surfacing is a healthy sign that the field is maturing and evolving.

In the meantime, I’m speaking with Michael about next steps on the conflict early warning and especially response side. This project has the potential to become a successful people-centered conflict early response initiative as long as the team focuses seriously on conflict preparedness and implements a number of other best practices from fourth generation conflict early warning systems.

This project is definitely worth keeping an eye on. I’ve invited Crisis Tracker to present at the 2011 International Conference of Crisis Mappers in Geneva in November (ICCM 2011). I do hope they’ll be able to participate. In the meantime, you can follow the team and their updates via Twitter at @crisistracker. The Crisis Tracker iPhone and iPad apps should be out soon.