Monthly Archives: June 2011

A List of Completely Wrong Assumptions About Technology Use in Emerging Economies

I’ve spent the past week at the iLab in Liberia and got what I came for: an updated reality check on the limitations of technology adoption in developing countries. Below are some of the assumptions that I took for granted. They’re perfectly obvious in hindsight and I’m annoyed at myself for not having realized their obviousness sooner. I’d be very interested in hearing from others about these and reading their lists. This need not be limited to one particular sector like ICT for Development (ICT4D) or Mobile Health (mHealth). Many of these assumptions have repercussions across multiple disciplines.

The following examples come from conversations with my colleague Kate Cummings who directs Ushahidi Liberia and the iLab here in Monrovia. She and her truly outstanding team—Kpetermeni Siakor, Carter Draper, Luther Jeke and Anthony Kamah—spearheaded a number of excellent training workshops over the past few days. At one point we began discussing the reasons for the limited use of SMS in Liberia. There are the usual and obvious reasons. But the one hurdle I had not expected to hear was Nokia’s predictive text functionality. This feature is incredibly helpful since the mobile phone basically guesses which words you’re trying to write so you don’t have to type every single letter.

But as soon as she pointed out how confusing this can be, I immediately understood what she meant. If I had never seen or been warned about this feature before, I’d honestly think the phone was broken. Typing with it would feel nearly impossible. I’d get frustrated and give up (the tiny screen only adds to the frustration). And if I were new to mobile phones, it wouldn’t be obvious how to switch the feature off either. (There are several online tutorials on how to use the predictive text feature and how to turn it off, which rather proves the feature is not intuitive.)
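
As an aside, the reason predictive text is so baffling becomes clearer once you look at how it works under the hood: several words share the exact same key presses, so the phone has to guess. Here is a minimal sketch of a T9-style lookup in Python; the keypad mapping is standard, but the tiny dictionary is purely illustrative and has nothing to do with Nokia’s actual implementation:

```python
# Minimal sketch of T9-style predictive text: several words map to the
# same key sequence, so the phone has to guess which one you meant.
# The keypad mapping is standard; the tiny dictionary is illustrative only.

KEYPAD = {
    'a': '2', 'b': '2', 'c': '2', 'd': '3', 'e': '3', 'f': '3',
    'g': '4', 'h': '4', 'i': '4', 'j': '5', 'k': '5', 'l': '5',
    'm': '6', 'n': '6', 'o': '6', 'p': '7', 'q': '7', 'r': '7', 's': '7',
    't': '8', 'u': '8', 'v': '8', 'w': '9', 'x': '9', 'y': '9', 'z': '9',
}

DICTIONARY = ["home", "good", "gone", "hood", "hoof", "hello", "world"]

def key_sequence(word):
    """Translate a word into the digits a user would press."""
    return "".join(KEYPAD[ch] for ch in word.lower())

def candidates(digits, dictionary=DICTIONARY):
    """All dictionary words that map to the same digit sequence."""
    return [w for w in dictionary if key_sequence(w) == digits]

# Pressing 4-6-6-3 could mean any of these words; the phone simply guesses,
# which is exactly why the feature baffles first-time users.
print(candidates("4663"))   # ['home', 'good', 'gone', 'hood', 'hoof']
```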

In one of the training workshops we just had, I was explaining what Walking Papers was about and how it might be useful in Liberia. So I showed the example below and continued talking. But Kate jumped in and asked participants: “What do you see in this picture? Do you see the trees, the little roads?” She pointed at the features as she described the individual shapes. This is when it dawned on me that there is absolutely nothing inherently intuitive about satellite images. Most people on this planet have never been on an airplane or up a tall building. So why would a bird’s eye view of their village be anything remotely recognizable? I really kicked myself on that one. So I’ll write it again: there is nothing intuitive about satellite imagery. Nor is there anything intuitive about GPS and the existence of a latitude and longitude coordinate system.

Kate went on to explain that this kind of picture is what you would see if you were flying high like a bird. That is the way I should have introduced the image, but I had taken it completely for granted that satellite imagery was self-explanatory when it simply isn’t. In further conversations with Kate, she explained that they too had made that assumption early on when trying to introduce the ins and outs of the Ushahidi platform. They quickly realized that they had to rethink their approach and decided to provide introductory courses on Google Maps instead.

More wrong assumptions revealed themselves during the workshops. For example, the “+” and “-” markers on Google Maps are not intuitive either, nor is the concept of zooming in and out. How are you supposed to understand that pressing these buttons still shows the same map, just at a different scale, rather than an entirely different picture? Again, when I took a moment to think about this, I realized how completely confusing that could be. And again I kicked myself. Contrast this with an entirely different setting, San Francisco, where some friends recently told me how their five-year-old went up to a framed picture in their living room and started pinching at it with his fingers, the exact same gesture one would use on an iPhone to zoom in and out of a picture. “Broken, broken” is all the five-year-old said after that disappointing experience.

The final example actually comes from Haiti, where my colleague Chrissy Martin is one of the main drivers behind the Digicel Group’s mobile banking efforts in the country. There were of course a number of expected challenges on the road to launching Haiti’s first successful mobile banking service, TchoTcho Mobile. The hurdle that I had not expected, however, had to do with the PIN code. To use the service, you enter your personal PIN on your mobile phone in order to access your account. Seems perfectly straightforward. But it really isn’t.

The concept of a PIN is one that many of us take completely for granted. But the idea is foreign to many would-be users of mobile banking services, and not just in Haiti. Think about it: all one has to do to access all my money is to enter four numbers on my phone. That does genuinely sound crazy to me at a certain level. Granted, if you enter the PIN incorrectly three times, the phone gets blocked and you have to call TchoTcho’s customer service. But still, I can understand the initial hesitation that many users had. When I asked Chrissy how they overcame the hurdle, her answer was simply this: training. It takes time for users to begin trusting a completely new technology.
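
For what it’s worth, the lockout behavior Chrissy described (three wrong attempts and the account is blocked) can be expressed in a few lines. The sketch below is only an illustration of that general logic, not TchoTcho Mobile’s actual implementation:

```python
# Illustrative sketch of a PIN check with a three-strike lockout, loosely
# modeled on the behavior described above (not TchoTcho Mobile's actual code).

MAX_ATTEMPTS = 3

class MobileWallet:
    def __init__(self, pin):
        self._pin = pin
        self._failed_attempts = 0
        self.blocked = False

    def authenticate(self, entered_pin):
        """Return True on a correct PIN; block the account after 3 failures."""
        if self.blocked:
            raise RuntimeError("Account blocked: call customer service.")
        if entered_pin == self._pin:
            self._failed_attempts = 0   # reset the counter on success
            return True
        self._failed_attempts += 1
        if self._failed_attempts >= MAX_ATTEMPTS:
            self.blocked = True
        return False

wallet = MobileWallet(pin="4321")      # illustrative PIN, not a real account
print(wallet.authenticate("1111"))     # False
print(wallet.authenticate("4321"))     # True
```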

So those are some of the assumptions I’ve gotten wrong. I’d be grateful if readers could share theirs as there must be plenty of other assumptions I’m making which don’t fit reality. Incidentally, I realize that emerging economies vary widely in technology diffusion and adoption—not to mention sub-nationally as well. This is why having the iLab in Liberia is so important. Identifying which assumptions are wrong in more challenging environments is really important if our goal is to use technology to help contribute meaningfully to a community’s empowerment, development and independence.

“No Data is Better Than Bad Data…” Really?

I recently tweeted the following:

“No data is better than bad data…” really? if you have no data, how do you know it’s bad data? doh.

This prompted a surprising number of DMs, follow-up emails and even two in-person conversations. Everyone wholeheartedly agreed with my tweet, which was a delayed reaction to a journalist at The Economist who, in a rather derisive tone, had tweeted that “no data is better than bad data.” This is of course not the first time I’ve heard this statement, so let’s explore the issue further.

The first point to note is the rather contradictory nature of the statement “no data is better than bad data.” You have to have data in order to deem it bad in the first place. But Mr. Economist and company clearly overlook this little detail. Calling data “bad” requires that it be bad relative to other data, which means having that other data in the first place. So if data point A is bad compared to data point B, then by definition data point B is available and is good data relative to A. I’m not convinced that a data point is either “good” or “bad” a priori unless the methods that produced it are well understood and can themselves be judged. Of course, validating methods requires comparing data as well.

In any case, the problem is not bad versus good data, in my opinion. The question has to do with error margins. Most data that gets shared comes with no associated error margins or any indication of its reliability. This rightly leads to questions over data quality. I believe that introducing a simple Likert scale to tag the perceived quality of the data can go a long way. This is what we did back in 2003/2004 when I was on the team that launched the Conflict Early Warning and Response Network (CEWARN) in the Horn of Africa. While I still wonder whether the project has had any real impact on conflict prevention since it launched in 2004, I believe that the initiative’s approach to information collection was pioneering at the time.

The screenshot below is of CEWARN’s online Incident Report Form. Note the “Information Source” and “Information Credibility” fields. These were really informative for us when aggregating the data and studying the corresponding time series. They allowed us to at least gain a certain level of understanding regarding the possible reliability of depicted trends over time. Indeed, we could start quantifying the level of uncertainty or margin of error. Interestingly, this also allowed us to look for patterns in varying credibility scores. Of course, these were perhaps largely based on perceptions but I believe this extra bit of information is worth having if the alternative is no qualifications on the possible credibility of individual reports.

Fast forward to 2011 and you see the same approach taken with the Ushahidi platform. The screenshot below is of the Matrix plugin for Ushahidi developed in partnership with ICT4Peace. The plugin allows reporters to tag reports with the reliability of the source and the probability that the information is correct. The result is the following graphic representing the trustworthiness of the report.
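
As a rough illustration of how two such tags might be folded into a single trustworthiness score, here is a back-of-the-envelope sketch. The tag labels, 1-5 scales and the simple averaging are my own assumptions for illustration; they are not the actual formula used by CEWARN or the Matrix plugin:

```python
# Back-of-the-envelope sketch: fold a source-reliability tag and an
# information-probability tag into one trustworthiness score. The labels,
# 1-5 scales and simple averaging are illustrative assumptions only.

SOURCE_RELIABILITY = {"unreliable": 1, "rarely reliable": 2, "fairly reliable": 3,
                      "usually reliable": 4, "completely reliable": 5}
INFO_PROBABILITY = {"improbable": 1, "doubtful": 2, "possibly true": 3,
                    "probably true": 4, "confirmed": 5}

def trust_score(source_tag, info_tag):
    """Normalize both tags to [0, 1] and average them."""
    s = SOURCE_RELIABILITY[source_tag] / 5
    p = INFO_PROBABILITY[info_tag] / 5
    return round((s + p) / 2, 2)

# A report from a fairly reliable source judged "probably true"
print(trust_score("fairly reliable", "probably true"))   # 0.7
```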

Some closing thoughts: many public health experts that I have spoken to in the field of emergency medicine repeatedly state they would rather have some data that is not immediately verifiable than no data at all. Indeed, in some ways all data begins life this way. They would rather have a potential rumor about a disease outbreak on their radar which they can follow up on and verify than have nothing appear on their radar until it’s too late if said rumor turns out to be true.

Finally, as noted in my previous post on “Tweetsourcing”, while some fear that bad data can cost lives, this doesn’t mean that no data doesn’t cost lives, especially in a crisis zone. Indeed, time is the most perishable commodity during a disaster—the “sell by” date of information is calculated in hours rather than days. This in no way implies that I’m an advocate for bad data! The risks of basing decisions on bad data are obvious. At the end of the day, the question is about tolerance for uncertainty—different disciplines will have varying levels of tolerance depending on the situation, time and place. In sum, making the sweeping statement “no data is better than bad data” can come across as rather myopic.

How to Verify Social Media Content: Some Tips and Tricks on Information Forensics

Update: I have authored a 20+ page paper on verifying social media content based on 5 case studies. Please see this blog post for a copy.

I get this question all the time: “How do you verify social media data?” This question drives many of the conversations on crowdsourcing and crisis mapping these days. It’s high time that we start compiling our tips and tricks into an online how-to guide so that we don’t have to start from square one every time the question comes up. We need to build and accumulate our shared knowledge in information forensics. So here is the Google Doc version of this blog post; please feel free to add your best practices and ask others to contribute. Feel free to also add links to other studies on verifying social media content.

If every source we monitored in the social media space were known and trusted, then the need for verification would not be as pronounced. In other words, it is the plethora and virtual anonymity of sources that makes us skeptical of the content they deliver. Verifying social media data is thus a two-step process: authenticating the source as reliable and triangulating the content as valid. If we can authenticate the source and find it trustworthy, this may be sufficient to trust the content and mark it as verified, depending on the context. If the source’s authenticity is difficult to ascertain, then we need to triangulate the content itself.

Let’s unpack these two processes—authentication and triangulation—and apply them to Twitter, since the most pressing challenges regarding social media verification have to do with eyewitness, user-generated content. The first step is to try and determine whether the source is trustworthy. Here are some tips on how to do this:

  • Bio on Twitter: Does the source provide a name, picture, bio and any  links to their own blog, identity, professional occupation, etc., on their page? If there’s a name, does searching for this name on Google provide any further clues to the person’s identity? Perhaps a Facebook page, a professional email address, a LinkedIn profile?
  • Number of Tweets: Is this a new Twitter handle with only a few tweets? If so, this makes authentication more difficult. Arasmus notes that “the more recent, the less reliable and the more likely it is to be an account intended to spread disinformation.” In general, the longer the Twitter handle has been around and the more Tweets linked to this handle, the better. This gives a digital trace, a history of prior evidence that can be scrutinized for evidence of political bias, misinformation, etc. Arasmus specifies: “What are the tweets like? Does the person qualify his/her reports? Are they intelligible? Is the person given to exaggeration and inconsistencies?”
  • Number of followers: Does the source have a large following? If there are only a few followers, are any of them known and credible sources? Also, how many lists has this Twitter handle been added to?
  • Number following: How many Twitter users does the Twitter handle follow? Are these known and credible sources?
  • Retweets: What type of content does the Twitter handle retweet? Does the Twitter handle in question get retweeted by known and credible sources?
  • Location: Can the source’s geographic location be ascertained? If so, are they near the unfolding events? One way to try and find out by proxy is to examine during which periods of the day/night the source tweets the most. This may provide an indication as to the person’s time zone.
  • Timing: Does the source appear to be tweeting in near real-time? Or are there considerable delays? Does anything appear unusual about the timing of the person’s tweets?
  • Social authentication: If you’re still unsure about the source’s reliability, use your own social network (Twitter, Facebook, LinkedIn) to find out whether anyone in your network knows about the source’s reliability.
  • Media authentication: Is the source quoted by trusted media outlets, whether in the mainstream or social media space?
  • Engage the source: Tweet them back and ask them for further information. NPR’s Andy Carvin has employed this technique particularly well. For example, you can tweet back and ask for the source of the report and for any available pictures, videos, etc. Place the burden of proof on the source.

These are some of the tips that come to mind for source authentication. For more thoughts on this process, see my previous blog post “Passing the I’m-Not-Gaddafi-Test: Authenticating Identity During Crisis Mapping Operations.” If you have some tips of your own not listed here, please do add them to the Google Doc—they don’t need to be limited to Twitter either.
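
None of the items in the list above is decisive on its own, but they can be combined into a rough screening score to decide which sources deserve closer scrutiny. The sketch below is purely illustrative: the field names, weights and thresholds are arbitrary assumptions of mine, not a validated model:

```python
# Purely illustrative screening score for a Twitter source, based on the
# checklist above. Field names, weights and thresholds are arbitrary.

from dataclasses import dataclass

@dataclass
class TwitterSource:
    has_real_name_and_bio: bool
    account_age_days: int
    tweet_count: int
    follower_count: int
    listed_count: int            # how many lists the handle appears on
    followed_by_trusted: bool    # followed/retweeted by known credible sources
    quoted_by_media: bool

def authenticity_score(src):
    """Crude 0-10 score: higher means fewer red flags, not proof of identity."""
    score = 0
    score += 2 if src.has_real_name_and_bio else 0
    score += 2 if src.account_age_days > 180 else 0   # not a brand-new handle
    score += 1 if src.tweet_count > 200 else 0        # enough of a digital trace
    score += 1 if src.follower_count > 100 else 0
    score += 1 if src.listed_count > 5 else 0
    score += 2 if src.followed_by_trusted else 0
    score += 1 if src.quoted_by_media else 0
    return score

source = TwitterSource(True, 400, 1500, 320, 12, True, False)
print(authenticity_score(source))   # 9 -- worth engaging, but still triangulate
```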

Now, let’s say that we’ve gone through the list above and found the evidence inconclusive. We then move on to triangulating the content itself. Here are some tips on how to do this:

  • Triangulation: Are other sources on Twitter or elsewhere reporting on the event you are investigating? As Arasmus notes, “remain skeptical about the reports that you receive. Look for multiple reports from different unconnected sources.” The more independent witnesses you can get information from the better and the less critical the need for identity authentication.
  • Origins: If the user reporting an event is not necessarily the original source, can the original source be identified and authenticated? In particular, if the original source is found, does the time/date of the original report make sense given the situation?
  • Social authentication: Ask members of your own social network whether the tweet you are investigating is being reported by other sources. Ask them how unusual the event reporting is to get a sense of how likely it is to have happened in the first place. Andy Carvin’s followers, for example, “help him translate, triangulate, and track down key information. They enable remarkable acts of crowdsourced verification [...] but he must always tell himself to check and challenge what he is told.”
  • Language: Andy Carvin notes that tweets that sound too official, using official language like “breaking news”, “urgent”, “confirmed” etc. need to be scrutinized. “When he sees these terms used, Carvin often replies and asks for additional details, for pictures and video. Or he will quote the tweet and add a simple one word question to the front of the message: Source?” The BBC’s UGC (user-generated content) Hub in London also verifies whether the vocabulary, slang, accents are correct for the location that a source might claim to be reporting from.
  • Pictures: If the Twitter handle shares photographic “evidence”, does the photo provide any clues about the location where it was taken based on buildings, signs, cars, etc., in the background? The BBC’s UGC Hub checks weaponry against that known to be in use in the given country and also looks for shadows to determine the possible time of day that a picture was taken. In addition, they examine weather reports to “confirm that the conditions shown fit with the claimed date and time.” These same tips can be applied to Tweets that share video footage.
  • Follow up: If you have contacts in the geographic area of interest, then you could ask them to follow up directly/in person to confirm the validity of the report. Obviously this is not always possible, particularly in conflict zones. Still, there is increasing anecdotal evidence that this strategy is being used by various media organizations and human rights groups. One particularly striking example comes from Kyrgyzstan, where a Skype group with hundreds of users across the country was able to disprove and counter rumors at a breathtaking pace. See my blog post on “How to Use Technology to Counter Rumors During Crises: Anecdotes from Kyrgyzstan” for more details.
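
One way to think about the triangulation step in code is to group incoming reports by the event they describe and count how many distinct sources corroborate each one. The toy sketch below reduces “independent, unconnected sources” to simply deduplicating handles, which is a big simplification of what real verification requires; the events and handles are invented:

```python
# Toy sketch of triangulation: count distinct sources per reported event.
# "Independence" is reduced here to deduplicating handles, which is a big
# simplification; the handles and events below are invented.

from collections import defaultdict

reports = [
    {"event": "shelling near the port", "source": "@alpha"},
    {"event": "shelling near the port", "source": "@bravo"},
    {"event": "shelling near the port", "source": "@alpha"},   # duplicate source
    {"event": "road to the airport blocked", "source": "@charlie"},
]

def corroboration(reports):
    """Map each reported event to the number of distinct sources behind it."""
    sources_per_event = defaultdict(set)
    for r in reports:
        sources_per_event[r["event"]].add(r["source"])
    return {event: len(sources) for event, sources in sources_per_event.items()}

print(corroboration(reports))
# {'shelling near the port': 2, 'road to the airport blocked': 1}
```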

These are just a handful of tips and tricks that come to mind. The number of bullet points above clearly shows we are not completely powerless when verifying social media data. There are several strategies available. The main challenge, as the BBC points out, is that this type of information forensics “can take anything from seconds [...] to hours, as we hunt for clues and confirmation.” See for example my earlier post on “The Crowdsourcing Detective: Crisis, Deception and Intrigue in the Twittersphere,” which highlights some challenges but also new opportunities.

One of Storyful’s comparative strengths when it comes to real-time news curation is the growing list of authenticated users it follows. This represents more of a bounded (but certainly not static) approach. As noted in my previous blog post on “Seeking the Trustworthy Tweet,” following a bounded model presents some obvious advantages. This explains why the BBC recommends “maintaining lists of previously verified material [and sources] to act as a reference for colleagues covering the stories.” This strategy is also employed by the Verification Team of the Standby Volunteer Task Force (SBTF).

In sum, I still stand by my earlier blog post entitled “Wag the Dog: How Falsifying Crowdsourced Data can be a Pain.” I also continue to stand by my opinion that some data—even if not immediately verifiable—is better than no data. It’s also important to recognize that we have on some occasions seen social media prove to be self-correcting, as I blogged about here. Finally, we know that information is often perishable in times of crisis. By this I mean that crisis data often has a “use-by date” after which it no longer matters whether said information is true or not. So speed is often vital. This is why semi-automated platforms like SwiftRiver, which aim to filter and triangulate social media content, can be helpful.

Passing the I’m-Not-Gaddafi Test: Authenticating Identity During Crisis Mapping Operations

I’ve found myself telling this story so often in response to various questions that it really should be a blog post. The story begins with the launch of the Libya Crisis Map a few months ago at the request of the UN. After the first 10 days of deploying the live map, the UN asked us to continue for another two weeks. When I write “us” here, I mean the Standby Volunteer Task Force (SBTF), which is designed for short-term rapid crisis mapping support, not long-term deployments. So we needed to recruit additional volunteers to continue mapping the Libya crisis. And this is where the I’m-not-Gaddafi test comes in.

To do our live crisis mapping work, SBTF volunteers generally need password access to whatever mapping platform we happen to be using. This has typically been the Ushahidi platform. Giving out passwords to several dozen volunteers in almost as many countries requires trust. Password access means one could start sabotaging the platform, e.g., deleting reports, creating fake ones, etc. So when we began recruiting 200+ new volunteers to sustain our crisis mapping efforts in Libya, we needed a way to vet these new recruits, particularly since we were dealing with a political conflict. So we set up an I’m-not-Gaddafi test by using this Google Form:

So we placed the burden of proof on our (very patient) volunteers. Here’s a quick summary of the key items we used in our “grading” to authenticate volunteers’ identity:

Email address: Professional or academic email addresses were preferred and received a more favorable “score”.

Twitter handle: The great thing about Twitter is you can read through weeks’ worth of someone’s Twitter stream. I personally used this feature several times to determine whether any political tweets revealed a pro-Gaddafi attitude.

Facebook page: Given that posing as someone else or a fictitious person on Facebook violates their terms of service, having the link to an applicant’s Facebook page was considered a plus.

LinkedIn profile: This was a particularly useful piece of evidence given that the majority of people on LinkedIn are professionals.

Personal/Professional blog or website: This was also a great way to authenticate an individual’s identity. We also encouraged applicants to share links to anything they had published which was available online.

For every application, we had two or more of us from the core team go through the responses. In order to sign off on a new volunteer as vetted, two people had to write down “Yes” with their name. We would give priority to the most complete applications. I would say that 80% of the 200+ applications we received could be signed off on without requiring additional information. We did follow-ups via email for the remaining 20%, the majority of whom provided us with extra information that enabled us to validate their identity. One individual even sent us a copy of his official ID. There may have been a handful who didn’t reply to our requests for additional information.
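
For what it’s worth, the sign-off rule itself is trivial to express in code. The sketch below is a simplified reconstruction of the two-person rule we applied; in practice we worked out of shared documents, not software, and the applicant details are hypothetical:

```python
# Simplified reconstruction of the two-person sign-off rule: a volunteer is
# only considered vetted once two core team members have both approved.

class Application:
    def __init__(self, applicant):
        self.applicant = applicant
        self.approvals = set()   # names of core team members who wrote "Yes"

    def approve(self, reviewer):
        self.approvals.add(reviewer)

    @property
    def vetted(self):
        return len(self.approvals) >= 2

app = Application("volunteer@example.org")   # hypothetical applicant
app.approve("Reviewer A")
print(app.vetted)    # False -- one approval is not enough
app.approve("Reviewer B")
print(app.vetted)    # True
```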

This entire vetting process appears to have worked, but it was extremely laborious and time-consuming. I personally spent hours and hours going through more than 100 applications. We definitely need to come up with a different system in the future. So I’ve been exploring some possible solutions—such as social authentication—with a number of groups and I hope to provide an update next month which will make all our lives a lot easier, not to mention give us more dedicated mapping time. There’s also the need to improve the Ushahidi platform to make it more like Wikipedia, i.e., where contributions can be tracked and logged. I think combining both approaches—identity authentication and tracking—may be the way to go.

Digital Activism, Epidemiology and Old Spice: Why Faster is Indeed Different

The following thoughts were inspired by one of Zeynep Tufekci’s recent posts entitled “Faster is Different” on her Technosociology blog. Zeynep argues “against the misconception that acceleration in the information cycle would simply mean that the same things will happen as would have before, but merely at a more rapid pace. So, you can’t just say, hey, people communicated before, it was just slower. That is wrong. Faster is different.”

I think she’s spot on and the reason why goes to the heart of complex systems behavior and network science. “Combined with the reshaping of networks of connectivity from one/few-to-one/few (interpersonal) and one-to-many (broadcast) into many-to-many, we encounter qualitatively different dynamics,” writes Zeynep. In a very neat move, she draws upon “epidemiology and quarantine models to explain why resource-constrained actors, states, can deal with slower diffusion of protests using ‘whack-a-protest’ method whereas they can be overwhelmed by simultaneous and multi-channel uprisings which spread rapidly and ‘virally.’ (Think of it as a modified disease/contagion model).” She then uses the “unsuccessful Gafsa protests in 2008 in Tunisia and the successful Sidi Bouzid uprising in Tunisia in 2010 to illustrate the point.”

I love the use of epidemiology and quarantine models to demonstrate why faster is indeed different. One of the complex systems lectures we had when I was at the Santa Fe Institute (SFI) focused on explaining why epidemics are so unpredictable. It was a real treat to have Duncan Watts himself present his latest research on this question. Back in 1998, he and Steven Strogatz wrote a seminal paper presenting the mathematical theory of the small world phenomenon. One of Duncan’s principal areas of research has been information contagion, and for his presentation at SFI he explained that, amazingly, mathematical epidemiology currently has no way to answer how big a novel outbreak of an infectious disease will get.

I won’t go into the details of traditional mathematical epidemiology and the Standard (SIR) Model, but suffice it to say that the main factor thought to determine the spread of an epidemic was the “Basic Reproduction Number”, i.e., the average number of individuals newly infected by a single infected individual in a susceptible population. However, epidemics that differ dramatically in size can have more or less the same Basic Reproduction Number.
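
For readers who want the nuts and bolts, here is a minimal sketch of the Standard (SIR) Model and its Basic Reproduction Number. The parameter values are arbitrary illustrations, not estimates for any real disease:

```python
# Minimal sketch of the Standard (SIR) Model: R0 = beta / gamma is the
# Basic Reproduction Number. Parameter values are arbitrary illustrations.

def simulate_sir(beta, gamma, s0=0.999, i0=0.001, days=300, dt=0.1):
    """Euler-integrate dS/dt = -beta*S*I and dI/dt = beta*S*I - gamma*I."""
    s, i, r = s0, i0, 0.0
    peak = i
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt
        recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
        peak = max(peak, i)
    return r, peak   # final epidemic size, peak prevalence

for beta in (0.15, 0.3, 0.6):
    gamma = 0.2
    size, peak = simulate_sir(beta, gamma)
    print(f"R0 = {beta / gamma:.2f}: final size = {size:.0%}, peak = {peak:.0%}")

# An R0 below 1 fizzles out; above 1 the outbreak takes off. Note that in this
# deterministic model R0 alone pins down the outcome -- which is exactly why
# it struggles to explain real epidemics that share similar R0 values yet end
# up with wildly different sizes.
```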

Standard models also imply that outbreaks are “bi-modal”, but empirical research clearly shows that epidemics tend to be “multi-modal.” Real epidemics are also resurgent, with several peaks interspersed with lulls. The result is unpredictability: multi-modal size distributions imply that any given outbreak of the same disease can have dramatically different outcomes, while resurgence implies that even epidemics which seem to be burning out can regenerate themselves by invading new populations.

Against this backdrop, there has been rapid growth in “network epidemiology” over the past 20 years. Studies in network epidemiology suggest that the size of an epidemic depends on Mobility: the expected number of infected individuals “escaping” a local context; and Range: the typical distance traveled. Of course, the Basic Reproduction Number still matters, and has to be greater than 1 as a necessary condition for an epidemic in the first place. However, when this figure is greater than 1, the value itself tells us very little about size or duration. Epidemic size tends to depend instead on mobility and range, although the latter appears to be more influential. To this end, simply restricting the range of travel of infected individuals may be an effective strategy.
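
A crude way to see the effect of range is to simulate a toy outbreak on a spatial grid and vary how far infected individuals can travel per contact. The simulation below is my own illustration of that intuition, with made-up parameters; it is not one of the models from the network epidemiology literature:

```python
# Toy spatial outbreak: each infected cell contacts a few random locations
# within a travel "range" per step. My own illustration of the mobility/range
# intuition, not a model from the network epidemiology literature.

import random

def outbreak_size(grid=80, contacts=3, p_infect=0.4, infectious_steps=2,
                  max_range=1, steps=30, seed=7):
    random.seed(seed)
    susceptible = {(x, y) for x in range(grid) for y in range(grid)}
    centre = grid // 2
    seed_cells = {(centre + dx, centre + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
    infected = {cell: infectious_steps for cell in seed_cells}  # cell -> steps left
    susceptible -= seed_cells
    total_cases = len(seed_cells)
    for _ in range(steps):
        new_cases = set()
        for (x, y) in infected:
            for _ in range(contacts):
                target = (x + random.randint(-max_range, max_range),
                          y + random.randint(-max_range, max_range))
                if target in susceptible and random.random() < p_infect:
                    new_cases.add(target)
        # age existing infections and drop the recovered
        infected = {c: t - 1 for c, t in infected.items() if t > 1}
        for cell in new_cases:
            susceptible.discard(cell)
            infected[cell] = infectious_steps
        total_cases += len(new_cases)
        if not infected:
            break
    return total_cases

for travel_range in (1, 5, 20):
    print(f"range {travel_range:>2}: cumulative cases =",
          outbreak_size(max_range=travel_range))
# With identical contact and infection rates, a longer travel range lets the
# outbreak escape local saturation and grow far larger in the same time window.
```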

There are, however, some important differences between the network models being compared here. The critical feature of biological disease, in contrast with information spread, is that individuals need to be co-located. But recall that during the recent Egyptian revolution the regime cut off access to the Internet and blocked cell phone use. How did people get their news? The good old-fashioned way: by getting out in the streets and speaking in person, i.e., by co-locating. Still, information can be contagious regardless of co-location. This is where Old Spice comes in, with its hugely effective marketing campaign in 2010, when the company’s popular ads on YouTube went viral and had a significant impact on sales of the deodorant, i.e., massive offline action. Clearly, information can lead to a contagion effect. This is the “information cascade” that Dan Drezner and others refer to in the context of digital activism in repressive environments.

“Under normal circumstances,” Zeynep writes, “autocratic regimes need to lock up only a few people at a time, as people cannot easily rise up all at once. Thus, governments can readily fight slow epidemics, which spread through word-of-mouth (one-to-one), by the selective use of force (a quarantine). No country, however, can jail a significant fraction of their population rising up; the only alternative is excessive violence. Thus, social media can destabilize the situation in unpopular autocracies: rather than relatively low-level and constant repression, regimes face the choice between crumbling in the face of simultaneous protests from many quarters and massive use of force.”
 
For me, the key lesson from mathematical epidemiology is that predicting when an epidemic will go “viral”, and thus how large it will get, is particularly challenging. In the case of digital activism, the figures for Mobility and Range are even more accentuated than their analogues in biological systems. Given the ubiquity of information communication networks thanks to the proliferation of social media, Mobility has virtually no limit, and neither does Range. That accounts for the speed of “infection” that may ultimately mean the reversal of an information cascade. This unpredictability is why, as Zeynep puts it, “faster is different.” This is also why regimes like Mubarak’s and Al-Assad’s try to quarantine information communication, and why doing so completely is very difficult, perhaps impossible.
 
Obviously, offline action that leads to more purchases of Old Spice and offline action that spurs mass protests in Tahrir Square are two very different scenarios. The former may only require weak ties, while the latter, due to high-risk actions, may require strong ties. But there are many civil resistance tactics that can be considered micro-contributions and hence involve relatively little risk to carry out. So communication can still change behavior, which may then catalyze high-risk action, especially if said communication comes from someone you know within your own social network. This is one of the keys to effective marketing and advertising strategies. You’re more likely to consider taking offline action if one of your friends or family members does, even if there are some risks involved. This is where the “infection” is most likely to take place. These infections can spur low-risk actions at first, which can synchronize “micro-motives” that lead to more risky “macro-behavior” and thus reversals in information cascades.

Identifying Strategic Protest Routes for Civil Resistance: An Analysis of Optimal Approaches to Tahrir Square

My colleague Jessica recently won the Tufts GIS Poster Expo with her excellent poster on civil resistance. She used GIS data to analyze optimal approaches to Tahrir Square in Cairo. According to Jessica, many previous efforts to occupy the square had failed. So Egyptian activists spent two weeks brainstorming the best strategies to approach Tahrir Square.

Out of curiosity, Jessica began to wonder whether the use of GIS data and spatial analysis might shed some light on possible protest routes. She began her analysis by  identifying three critical strategic elements for a successful protest route:

“1) Gathering points where demonstrators initiate protests; 2) two types of routes—protest collection areas of high population density through which protesters walk to collect additional supporters and protest approach routes on major streets that accommodate large groups that are more difficult to disperse; and 3) convergence points where smaller groups of protesters merge to increase strength in order to approach the destination.”

For her analysis, Jessica took gathering points and convergence points into consideration. For example, many Egyptian activists met at mosques. So she selected optimal mosques based on their distance to police stations (the farther the better) and on high road density “as a proxy for population density.” In terms of convergence points, smaller groups of protestors converged on major roads and intersections. The criteria that Jessica used to select these points were: distance to Tahrir Square, high density of road junctions, and open space to allow for large group movement. She also took into account protest route collection areas. These tend to be “densely populated and encourage residents to join, increasing participation.” So Jessica selected these based on high road density and the most direct route to Tahrir Square using major roads.
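
Jessica’s actual analysis was done with GIS overlay tools, but the underlying selection logic can be expressed schematically. The sketch below is my own simplification, with made-up coordinates, weights and attribute values, just to show how criteria like distance to police stations and road density might be combined into a ranking; in a real GIS analysis both terms would be normalized to comparable scales:

```python
# Schematic sketch of the selection logic: score candidate gathering points
# by distance to the nearest police station (farther is better) and local
# road density (higher is better). Coordinates, weights and values are made up.

from math import hypot

candidate_mosques = {
    "mosque_A": {"xy": (2.0, 3.5), "road_density": 0.8},
    "mosque_B": {"xy": (5.0, 1.0), "road_density": 0.4},
    "mosque_C": {"xy": (1.0, 6.0), "road_density": 0.9},
}
police_stations = [(2.5, 3.0), (4.5, 1.5)]

def distance_to_nearest_station(xy):
    """Straight-line distance from a candidate point to the closest station."""
    return min(hypot(xy[0] - px, xy[1] - py) for px, py in police_stations)

def gathering_score(attrs, w_distance=0.5, w_density=0.5):
    """Weighted sum rewarding distance from police and high road density."""
    return (w_distance * distance_to_nearest_station(attrs["xy"])
            + w_density * attrs["road_density"])

ranked = sorted(candidate_mosques.items(),
                key=lambda item: gathering_score(item[1]), reverse=True)
for name, attrs in ranked:
    print(name, round(gathering_score(attrs), 2))
# mosque_C ranks first: it is farthest from any police station and sits in a
# high road density (i.e., densely populated) area.
```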

Overlaying the data and using GIS analysis on each strategic element yields the following optimal routes to Tahrir:

Jessica writes that “the results of this project demonstrate that GIS tools can be used for plotting strategic routes for protest using criteria that can change based on the unique geospatial environment. In Cairo, the optimal gathering points, strategic routes and convergence points are not always located in an obvious path (i.e. optimal mosques located in areas with low road density or convergence points without gathering points in the close proximity). The map does, however, provide protest organizers with some basic instruction on where to start, what direction to head and where to converge for the final approach.”

She also acknowledges some of the limitations of the study owing to the lack of high-resolution spatial data. I would add temporal data, since civil resistance is fluid and changing, which requires rapid adaptation and re-strategizing. If her analysis could be combined with real-time information coming from crowdsourced data such as U-Shahid, then I think this could be quite powerful.

For more on the civil resistance tactics used in Egypt during the revolution, please see this blog post.

Seeking the Trustworthy Tweet: Can “Tweetsourcing” Ever Fit the Needs of Humanitarian Organizations?

Can microblogged data fit the information needs of humanitarian organizations? This is the question asked by a group of academics at Pennsylvania State University’s College of Information Sciences and Technology. Their study (PDF) is an important contribution to the discourse on humanitarian technology and crisis information. The applied research provides key insights based on a series of interviews with humanitarian professionals. While I largely agree with the majority of the arguments presented in this study, I do have questions regarding the framing of the problem and some of the assertions made.

The authors note that “despite the evidence of strong value to those experiencing the disaster and those seeking information concerning the disaster, there has been very little uptake of message data by large-scale, international humanitarian relief organizations.” This is because real-time message data is “deemed as unverifiable and untrustworthy, and it has not been incorporated into established mechanisms for organizational decision-making.” To this end, “committing to the mobilization of valuable and time sensitive relief supplies and personnel, based on what may turn out be illegitimate claims, has been perceived to be too great a risk.” Thus far, the authors argue, “no mechanisms have been fashioned for harvesting microblogged data from the public in a manner, which facilitates organizational decisions.”

I don’t think this latter assertion is entirely true if one looks at the use of Twitter by the private sector. Take for example the services offered by Crimson Hexagon, which I blogged about 3 years ago. This successful start-up launched by Gary King out of Harvard University provides companies with real-time sentiment analysis of brand perceptions in the Twittersphere precisely to help inform their decision making. Another example is Storyful, which harvests data from authenticated Twitter users to provide highly curated, real-time information via microblogging. Given that the humanitarian community lags behind in the use and adoption of new technologies, it behooves us to look at those sectors that are ahead of the curve to better understand the opportunities that do exist.

Since the study principally focused on Twitter, I’m surprised that the authors did not reference the empirical study that came out last year on the behavior of Twitter users after the 8.8 magnitude earthquake in Chile. The study shows that about 95% of tweets related to confirmed reports validated that information. In contrast, only 0.03% of tweets denied the validity of these true cases. Interestingly, the results also show that “the number of tweets that deny information becomes much larger when the information corresponds to a false rumor.” In fact, about 50% of tweets will deny the validity of false reports. This means it may very well be possible to detect rumors by using aggregate analysis of tweets.
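
Expressed crudely in code, the Chile finding suggests a simple aggregate signal: the share of tweets about a claim that deny it. The keyword matching, sample tweets and threshold below are only illustrative of the idea; they are not figures or methods from the study itself:

```python
# Crude aggregate signal suggested by the Chile study: what share of tweets
# about a claim deny it? The keyword list and threshold are illustrative only.

def denial_ratio(tweets, denial_markers=("false", "fake", "not true", "rumor")):
    """Fraction of tweets that appear to deny the claim (toy keyword match)."""
    if not tweets:
        return 0.0
    denials = sum(1 for t in tweets if any(m in t.lower() for m in denial_markers))
    return denials / len(tweets)

def flag_as_possible_rumor(tweets, threshold=0.3):
    # Confirmed reports drew almost no denials in the Chile data, while false
    # rumors drew roughly half; any threshold in between separates the two.
    return denial_ratio(tweets) >= threshold

sample = [
    "Bridge to the airport has collapsed",                       # invented tweets
    "This is false, I just drove over the bridge",
    "Hearing the airport bridge is down, can anyone confirm?",
    "Fake news, the bridge is fine",
]
print(denial_ratio(sample), flag_as_possible_rumor(sample))   # 0.5 True
```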

On framing, I believe the focus on microblogging and Twitter in particular misses the bigger picture which ultimately is about the methodology of crowdsourcing rather than the technology. To be sure, the study by Penn State could just as well have been titled “Seeking the Trustworthy SMS.” I think this important research on microblogging would be stronger if this distinction were made and the resulting analysis tied more closely to the ongoing debate on crowdsourcing crisis information that began during the response to Haiti’s earthquake in 2010.

Also, as was noted during the Red Cross Summit in 2010, more than two-thirds of respondents to a survey noted that they would expect a response within an hour if they posted a need for help on a social media platform (and not just Twitter) during a crisis. So whether humanitarian organizations like it or not, crowdsourced social media information cannot be ignored.

The authors carried out a series of insightful interviews with about a dozen international humanitarian organizations to try and better understand the hesitation around the use of Twitter for humanitarian response. As noted earlier, however, it is not Twitter per se that is a concern but the underlying methodology of crowdsourcing.

As expected, interviewees noted that they prioritize the veracity of information over the speed of communication. “I don’t think speed is necessarily the number one tool that an emergency operator needs to use.” Another interviewee opined that “It might be hard to trust the data. I mean, I don’t think you can make major decisions based on a couple of tweets, on one or two tweets.” What’s interesting about this latter comment is that it implies that only one channel of information, Twitter, is to be used in decision-making, which is a false argument and one that nobody I know has ever made.

Either way, the trade-off between speed and accuracy is a well-known one. As mentioned in this blog post from 2009, information is perishable and accuracy is often a luxury in the first few hours and days following a major disaster. As the authors of the study rightly note, “uncertainty is ‘always expected, if sometimes crippling’ (Benini, 1997) for NGOs involved in humanitarian relief.” Ultimately, the question posed by the authors of the Penn State study can be boiled down to this: is some information better than no information if it cannot be immediately verified? In my opinion, yes. If you have some information, then at least you can investigate its veracity, which may lead to action. Even from a purely philosophical point of view, I believe the answer would still be yes.

Based on the interviews, the authors found that organizations engaged in immediate emergency response were less likely to make use of Twitter (or crowdsourced information) as a channel for information. As one interviewee put it, “Lives are on the line. Every moment counts. We have it down to a science. We know what information we need and we get in and get it…” In contrast, those organizations engaged in subsequent phases of disaster response were thought more likely to make use of crowdsourced data.

I’m not entirely convinced by this: “We know what information we need and we get in and get it…”. Yes, humanitarian organizations typically know what information they need, but whether they get it, and in time, is certainly not a given. Just look at the humanitarian responses to Haiti and Libya, for example. Organizations may very well be “unwilling to trade data assurance, veracity and authenticity for speed,” but sometimes this mindset will mean having no information at all. This is why OCHA asked the Standby Volunteer Task Force to provide them with a live crowdsourced social media map of Libya. In Haiti, while the UN is not thought to have used the crowdsourced SMS data from Mission 4636, other responders like the Marine Corps did.

Still, according to one interviewee, “fast is good, but bad information fast can kill people. It’s got to be good, and maybe fast too.” This assumes that no information doesn’t kill people. And good information that arrives late can also kill people. As one of the interviewees admitted, when using traditional methods “it can be quite slow before all that [information] trickles through all the layers to get to us.” The authors of the study also noted that “Many [interviewees] were frustrated with how slow the traditional methods of gathering post-disaster data had remained despite the growing ubiquity of smart phones and high quality connectivity and power worldwide.”

On a side note, I found the following comment during the interviews especially revealing: “When we do needs assessments, we drive around and we look with our eyes and we talk to people and we assess what’s on the ground and that’s how we make our evaluations.” One of the common criticisms leveled against the use of crowdsourced information is that it isn’t representative. But then again, driving around, checking things out and chatting with people is hardly going to yield a representative sample either.

One of the main findings from this research has to do with a problem of attitude on the part of humanitarian organizations. “Each of the interviewees stated that their organization did not have the organizational will to try out new technologies. Most expressed this as a lack of resources, support, leadership and interest to adopt new technologies.” As one interviewee noted, “We tried to get the president and CEO both to use Twitter. We failed abysmally, so they’re not– they almost never use it.” Interestingly, “most of the respondents admitted that many of their technological changes were motivated by the demands of their donors. At this point in time their donors have not demanded that these organizations make use of microblogged data. The subjects believed they would need to wait until this occurred for real change to begin.”

For me the lack of will has less to do with available resources and limited capacity and far more to do with a generational gap. When today’s young professionals in the humanitarian space work their way up to more executive positions, we’ll  see a significant change in attitude within these organizations. I’m thinking in particular of the many dozens of core volunteers who played a pivotal role in the crisis mapping operations in Haiti, Chile, Pakistan, Russia and most recently Libya. And when attitude changes, resources can be reallocated and new priorities can be rationalized.

What’s interesting about these interviews is that despite all the concerns and criticisms of crowdsourced Twitter data, all interviewees still see microblogged data as a “vast trove of potentially useful information concerning a disaster zone.” One of the professionals interviewed said, “Yes! Yes! Because that would – again, it would tell us what resources are already in the ground, what resources are still needed, who has the right staff, what we could provide. I mean, it would just – it would give you so much more real-time data, so that as we’re putting our plans together we can react based on what is already known as opposed to getting there and discovering, oh, they don’t really need medical supplies. What they really need is construction supplies or whatever.”

Another professional stated that, “Twitter data could potentially be used the same way… for crisis mapping. When an emergency happens there are so many things going on in the ground, and an emergency response is simply prioritization, taking care of the most important things first and knowing what those are. The difficult thing is that things change so quickly. So being able to gather information quickly…. <with Twitter> There’s enormous power.”

The authors propose three possible future directions. The first is bounded microblogging, which I have long referred to as “bounded crowdsourcing.” It doesn’t make sense to focus on the technology instead of the methodology, because at the heart of the issue are the methods for information collection. In “bounded crowdsourcing,” membership is “controlled to only those vetted by a particular organization or community.” This is the approach taken by Storyful, for example. One interviewee acknowledged that “Twitter might be useful right after a disaster, but only if the person doing the Tweeting was from <NGO name removed>, you know, our own people. I guess if our own people were sending us back Tweets about the situation it could help.”

Bounded crowdsourcing overcomes the challenge of authentication and verification but obviously with a tradeoff in the volume of data collected “if an additional means were not created to enable new members through an automatic authentication system, to the bounded microblogging community.” However, the authors feel that bounded crowdsourcing environments “undermine the value of the system” since “the power of the medium lies in the fact that people, out of their own volition, make localized observations and that organizations could harness that multitude of data. The bounded environment argument neutralizes that, so in effect, at that point, when you have a group of people vetted to join a trusted circle, the data does not scale, because that pool by necessity would be small.”

That said, I believe the authors are spot on when they write that “Bounded environments might be a way of introducing Twitter into the humanitarian centric organizational discourse, as a starting point, because these organizations, as seen from the evidence presented above, are not likely to initially embrace the medium. Bounded environments could hence demonstrate the potential for Twitter to move beyond the PR and Communications departments.”

The second possible future direction is to treat crowdsourced data as ambient, “contextual information rather than instrumental information (i.e., factual in nature).” This grassroots information could be considered as an “add-on to traditional, trusted institutional lines of information gathering.” As one interviewee noted, “Usually information exists. The question is the context doesn’t exist…. that’s really what I see as the biggest value [of crowdsourced information] and why would you use that in the future is creating the context…”.

The authors rightly suggest that “adding contextual information through microblogged data may alleviate some of the uncertainty during the time of disaster. Since the microblogged data would not be the single data source upon which decisions would be made, the standards for authentication and security could be less stringent. This solution would offer the organization rich contextual data, while reducing the need for absolute data authentication, reducing the need for the organization to structurally change, and reducing the need for significant resources.” This is exactly how I consider and treat crowdsourced data.

The third and final forward-looking solution is computational. The authors “believe better computational models will eventually deduce informational snippets with acceptable levels of trust.” They refer to Ushahidi’s SwiftRiver project as an example.

In sum, this study is an important contribution to the discourse. The challenges around using crowdsourced crisis information are well known. If I come across as optimistic, it is for two reasons. First, I do think a lot can be done to address the challenges. Second, I do believe that attitudes in the humanitarian sector will continue to change.

Analyzing the Libya Crisis Map Data in 3D (Video)

I first blogged about GeoTime exactly two years ago in a blog post entitled “GeoTime: Crisis Mapping in 3D.” The rationale for visualizing geospatial data in 3D very much resonates with me and in my opinion becomes particularly compelling when analyzing crisis mapping data.

This is why I invited my GeoTime colleague Adeel Khamisa to present their platform at the first International Conference on Crisis Mapping (ICCM 2009). Adeel used the Ushahidi-Haiti data to demonstrate the added value of using a 3D approach, which you can watch in the short video below.

Earlier this year, I asked Adeel whether he might be interested in analyzing the Libya Crisis Map data using GeoTime. He was indeed curious and kindly produced the short video below on his preliminary findings.

The above visual overview of the Libya data is really worth watching. I hope that fellow Crisis Mappers will consider making more use of GeoTime in their projects. The platform really is ideal for Crisis Mapping Analysis.