Monthly Archives: March 2012

Crisis Mapping Syria: Automated Data Mining and Crowdsourced Human Intelligence

The Syria Tracker Crisis Map is without doubt one of the most impressive crisis mapping projects yet. Launched just a few weeks after the protests began one year ago, the crisis map is spearheaded by just a handful of US-based Syrian activists who have meticulously and systematically documented 1,529 reports of human rights violations including a total of 11,147 killings. As recently reported in this New Scientist article, “Mapping the Human Cost of Syria’s Uprising,” the crisis map “could be the most accurate estimate yet of the death toll in Syria’s uprising […].” Their approach? “A combination of automated data mining and crowdsourced human intelligence,” which “could provide a powerful means to assess the human cost of wars and disasters.”

On the data-mining side, Syria Tracker has repurposed the HealthMap platform, which mines thousands of online sources for the purposes of disease detection and then maps the results, “giving public-health officials an easy way to monitor local disease conditions.” The customized version of this platform for Syria Tracker (ST), known as HealthMap Crisis, mines English-language information sources for evidence of human rights violations, such as killings, torture and detainment. As the ST Team notes, their data mining platform “draws from a broad range of sources to reduce reporting biases.” Between June 2011 and January 2012, for example, the platform collected over 43,000 news articles and blog posts from almost 2,000 English-language sources from around the world (including some pro-regime sources).
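
To make the data-mining step more concrete, here is a minimal sketch of the kind of keyword filtering such a pipeline could perform over scraped English-language articles. This is purely illustrative and not the actual HealthMap Crisis code; the keyword list, field names and sample articles are my own assumptions.

```python
import re

# Illustrative keywords only; a real system would use richer taxonomies
# and machine classification rather than a flat keyword list.
VIOLATION_KEYWORDS = [
    "killed", "shelling", "torture", "detained", "arrested",
    "massacre", "sniper", "shot dead",
]
PATTERN = re.compile("|".join(re.escape(k) for k in VIOLATION_KEYWORDS), re.IGNORECASE)

def flag_reports(articles):
    """Return articles mentioning at least one violation keyword.

    `articles` is a list of dicts with 'source', 'title' and 'text' keys,
    standing in for the thousands of scraped news items and blog posts.
    """
    flagged = []
    for article in articles:
        hits = {h.lower() for h in PATTERN.findall(article["title"] + " " + article["text"])}
        if hits:
            flagged.append({**article, "keywords": sorted(hits)})
    return flagged

if __name__ == "__main__":
    sample = [
        {"source": "example-news.com", "title": "Shelling reported in Homs",
         "text": "Activists say several people were killed overnight."},
        {"source": "example-blog.net", "title": "Football results",
         "text": "The league resumes next week."},
    ]
    for item in flag_reports(sample):
        print(item["source"], item["keywords"])
```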

Syria Tracker combines the results of this sophisticated data mining approach with crowdsourced human intelligence, i.e., field-based eye-witness reports shared via webform, email, Twitter, Facebook, YouTube and voicemail. This naturally presents several important security issues, which explains why the main ST website includes an instructions page detailing security precautions that need to be taken while submitting reports from within Syria. They also link to this practical guide on how to protect your identity and security online and when using mobile phones. The guide is available in both English and Arabic.

Eye-witness reports are subsequently translated, geo-referenced, coded and verified by a group of volunteers who triangulate the information with other sources such as those provided by the HealthMap Crisis platform. They also filter the reports and remove duplicates. Reports with a low confidence level vis-a-vis veracity are also removed. Volunteers use a Digg-style vote-up/vote-down feature to “score” the veracity of eye-witness reports. Using this approach, the ST Team and their volunteers have been able to verify almost 90% of the documented killings mapped on their platform thanks to video and/or photographic evidence. They have also been able to associate specific names with about 88% of those reported killed by Syrian forces since the uprising began.
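
As an illustration of what this kind of triage might look like in code (a sketch under my own assumptions about weights and thresholds, not Syria Tracker's actual workflow), the snippet below removes duplicate reports and combines volunteer vote-up/vote-down scores with corroborating evidence into a simple confidence score; reports that fall below a threshold are dropped.

```python
from dataclasses import dataclass

@dataclass
class Report:
    text: str
    location: str
    date: str
    up_votes: int = 0
    down_votes: int = 0
    corroborations: int = 0  # e.g. matching HealthMap items, videos, photos

def confidence(report: Report) -> float:
    """Toy confidence score in [0, 1] combining crowd votes and evidence."""
    votes = report.up_votes + report.down_votes
    vote_score = report.up_votes / votes if votes else 0.5  # no votes -> neutral
    evidence_score = min(report.corroborations / 3.0, 1.0)  # saturates at 3 sources
    return 0.6 * vote_score + 0.4 * evidence_score          # illustrative weights

def triage(reports, threshold=0.5):
    """Drop exact duplicates (same location/date/text) and low-confidence reports."""
    seen, kept = set(), []
    for r in reports:
        key = (r.location, r.date, r.text.strip().lower())
        if key in seen:
            continue
        seen.add(key)
        if confidence(r) >= threshold:
            kept.append(r)
    return kept
```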

Depending on the levels of violence in Syria, the turn-around time for a report to be mapped on Syria Tracker is between one and three days. The team also produces weekly situation reports based on the data they’ve collected, along with detailed graphical analysis. KML files that can be opened and viewed in Google Earth are also made available on a regular basis. These provide “a more precisely geo-located tally of deaths per location.”
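
For readers unfamiliar with the format, the sketch below shows roughly what producing such a KML file could look like for a list of geo-located records; the field names and sample record are hypothetical, and the output opens directly in Google Earth.

```python
from xml.sax.saxutils import escape

def reports_to_kml(reports):
    """Build a minimal KML document with one Placemark per geo-located report.

    `reports` is a list of dicts with 'name', 'lat' and 'lon' keys
    (hypothetical field names, for illustration only).
    """
    placemarks = []
    for r in reports:
        placemarks.append(
            "  <Placemark>\n"
            f"    <name>{escape(r['name'])}</name>\n"
            # KML coordinate order is longitude,latitude[,altitude]
            f"    <Point><coordinates>{r['lon']},{r['lat']},0</coordinates></Point>\n"
            "  </Placemark>"
        )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n'
        + "\n".join(placemarks)
        + "\n</Document>\n</kml>\n"
    )

if __name__ == "__main__":
    sample = [{"name": "3 deaths reported", "lat": 34.73, "lon": 36.72}]  # near Homs
    with open("reports.kml", "w", encoding="utf-8") as f:
        f.write(reports_to_kml(sample))
```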

In sum, Syria Tracker is very much breaking new ground vis-a-vis crisis mapping. They’re combining automated data mining technology with crowdsourced eye-witness reports from Syria. In addition, they’ve been doing this for a year, which makes the project the longest-running crisis map I’ve seen in a hostile environment. Moreover, they’ve been able to sustain these important efforts with just a small team of volunteers. As for the veracity of the collected information, I know of no other public effort that has taken such a meticulous and rigorous approach to documenting the killings in Syria in near real-time. On February 24th, Al-Jazeera posted the following estimates:

Syrian Revolution Coordination Union: 9,073 deaths
Local Coordination Committees: 8,551 deaths
Syrian Observatory for Human Rights: 5,581 deaths

At the time, Syria Tracker had a total of 7,901 documented killings associated with specific names, dates and locations. While some duplicate reports may remain, the team argues that “missing records are a much bigger source of error.” Indeed, they believe that “the higher estimates are more likely, even if one chooses to disregard those reports that came in on some of the most violent days where names were not always recorded.”

The Syria Crisis Map itself has been viewed by visitors from 136 countries and 2,018 cities around the world, with the top three cities being Damascus, Washington DC and, interestingly, Riyadh, Saudi Arabia. The witnessing has thus been truly global and collective. When the Syrian regime falls, “the data may help subsequent governments hold him and other senior leaders to account,” writes the New Scientist. This was one of the principal motivations behind the launch of the Ushahidi platform in Kenya over four years ago. Syria Tracker is powered by Ushahidi’s cloud-based platform, Crowdmap. Finally, we know for a fact that the International Criminal Court (ICC) and Amnesty International (AI) closely followed the Libya Crisis Map last year.

Communicating with Disaster Affected Communities (CDAC)

Communication is Aid: Curated tweets and commentary from the CDAC Network’s Media and Technology Fair, London 2012. My commentary appears in blue. This is the first time I’ve used Storify to curate content. (I bumped into the co-founder of the platform at SXSW, which reminded me that I really needed to get in on the action.)

  1. “After the Japan earthquake, >20% of ALL web queries issued were on tsunamis.” @spangledrongo #commisaid (Thu, Mar 22 2012 08:53:43)
  2. Would be great to see how this type of search data compares to data from Tweets. Take this analysis of tweets following the earthquake in Chile, for example.
  3. RT @UNFPA: Professional systems are being replaced by consumer tools says @Google Crisis Response #commisaid (Thu, Mar 22 2012 08:48:58)
  4. And as a result, crisis-affected communities are increasingly becoming digital, as I note in this blog post.
  5. Closed systems closed data will be left behind and unused: crisis response is social and collaboration is empowering @CDACNetwork #commisaid (Thu, Mar 22 2012 08:49:58)
  6. RT @catherinedem: Crisis response is #social – online social collaboration spikes during and after disaster @spangledrongo #commisaid (Thu, Mar 22 2012 08:54:03)
  7. “It is essential to have authoritative content” Nigel Snoad at @CDACNetwork’s Media & Tech Fair #commisaid (Thu, Mar 22 2012 08:58:28)
  8. Does this mean that all user-generated content should be ignored because said content does not necessarily come from a known and authoritative source? Who decides what is authoritative?
  9. we don’t empower communities by giving them info, they empower themselves by giving us info that we can act on – @komunikasikan #commisaid (Thu, Mar 22 2012 11:31:02)
  10. What if this information is not authoritative because it does not come from official sources?
  11. RT @ushahidi: “In a crisis, the mobile internet stays most resilient, even more than SMS.” #commisaid Nigel Snoad (Thu, Mar 22 2012 09:02:53)
  12. RT @jqg: Empower local communities to generate their own tools and figure out their own solutions #commisaid (Thu, Mar 22 2012 09:10:43)
  13. See this blog post on Democratizing ICT for Development Using DIY Innovation and Open Data.
  14. Through #Mission4636 SMS system, radio presenter @carelpedre was able to communicate directly with affected people in Haiti #commisaid (Thu, Mar 22 2012 12:20:14)
  15. This is rather interesting; I hadn’t realized that radio stations in Haiti actively used the information from the Ushahidi Haiti 4636 project.
  16. Fascinating talk with @carelpedre – many of #Haiti ‘s pre-earthquake twitter users came from @juno7’s lottery push notifications #Commisaid (Thu, Mar 22 2012 11:11:43)
  17. More about @Carelpedre using 4636 project after #Haiti earthquake bit.ly/plgzXJ #commisaid (Thu, Mar 22 2012 12:17:52)
  18. Internews: Innovation comes within, info comes from within, people will find ways to communicate no matter what #commisaid @CDACNetwork (Thu, Mar 22 2012 09:26:25)
  19. So best of luck to those who wish to regulate this space! As my colleague Tim McNamara has noted, “Crisis mapping is not simply a technological shift, it is also a process of rapid decentralisation of power. With extremely low barriers to entry, many new entrants are appearing in the fields of emergency and disaster response. They are ignoring the traditional hierarchies, because the new entrants perceive that there is something that they can do which benefits others.”
  20. @souktel: “be simple, be creative and learn from the community around you” #commisaid (Thu, Mar 22 2012 10:57:09)
  21. Tools shouldn’t own data. RT @whiteafrican: “It’s not about the platform being open, it’s about the data being open” – @jcrowley #commisaid (Thu, Mar 22 2012 11:56:08)
  22. Humanitarians have to get their heads around media and tech or risk being left behind #m4d #media #tech #commisaid (Thu, Mar 22 2012 11:56:58)
  23. Cutting edge is to get the #crowd & the #algorithm to filter each other in filtering massive overload of information in a crisis #commisaid (Thu, Mar 22 2012 12:01:17)
  24. As Robert Kirkpatrick likes to say, “Use the hunch of the expert, machine algorithms and the wisdom of the crowd.” (See the sketch after this list.)
  25. “How long until the disaster affected communities start analyzing the aid agencies?” #commisaid (Thu, Mar 22 2012 12:14:50)
  26. Yes! Sousveillance meets analysis of big data on the humanitarian sector.
  27. Why is it such a big deal for the humanitarian industry to get feedback from a community? Companies have done it for decades. #commisaid (Thu, Mar 22 2012 12:17:31)
  28. Some of my thoughts on what the humanitarian community can learn from the private sector vis-a-vis customer support.
  29. People who are not traditional humanitarian actors are taking on humanitarian roles, driven by the democratisation of technology #commisaid (Thu, Mar 22 2012 13:00:12)
  30. Indeed, not only are disaster-affected communities increasingly digital, so are global volunteer networks like the Standby Volunteer Task Force (SBTF).
  31. Technology is shifting the power balance – it’s helping local communities to organise their own responses to disasters #commisaid (Thu, Mar 22 2012 13:11:18)
  32. Indeed, as a result of these mobile technologies, affected populations are increasingly able to source, share and generate a vast amount of information, which is completely transforming disaster response. More on this here.
  33. Like Paul Currion’s analogy – humanitarian sector risks obsolescence in the same way the record industry did. Watch out? #commisaid (Thu, Mar 22 2012 13:17:36)
  34. [Image: chart tracing the music industry’s history in terms of hierarchies vs. networks, from The Starfish and the Spider] (Thu, Mar 22 2012 14:57:13)
  35. One of my favorite books, The Starfish and the Spider: The Unstoppable Power of Leaderless Organizations, has an excellent case study on the music industry. The above picture is taken from that chapter and charts the history of the industry from the perspective of hierarchies vs. networks. I argued a couple of years ago that the same dynamic is taking place within humanitarian response. See this blog post on Disaster Relief 2.0: Toward a Multipolar System.
  36. A thousand flowers can bloom beautifully IF common data standards allow sharing. Right now no natural selection improving quality #commisaid (Thu, Mar 22 2012 13:43:19)
  37. @GSMADisasterRes: free phone numbers & short codes r not silver bullets 4 meaningful communication w/ disaster affected communities #commisaid (Thu, Mar 22 2012 13:58:44)
  38. Scale horizontally, not vertically. The end of command and control?! #commisaid (Thu, Mar 22 2012 14:07:54)
  39. RT @reeniac: what now? communities have the solution, we need to listen. start by including them in the discussions – Dr Jamilah #commisaid (Thu, Mar 22 2012 14:14:55)
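
On the crowd-and-algorithm point in item 23 above, here is a minimal sketch of the kind of feedback loop I have in mind; it is my own toy illustration rather than any particular platform's implementation. An algorithm ranks incoming messages by a crude keyword relevance score, the crowd votes on what surfaces, and those votes feed back into the ranking.

```python
def algorithm_score(message, keywords=("help", "trapped", "collapsed", "injured")):
    """Crude machine relevance score: fraction of crisis keywords present."""
    text = message["text"].lower()
    return sum(k in text for k in keywords) / len(keywords)

def rank(messages):
    """Blend the machine score with crowd votes (capped so votes can't dominate)."""
    def combined(m):
        return 0.5 * algorithm_score(m) + 0.5 * min(m.get("votes", 0) / 5.0, 1.0)
    return sorted(messages, key=combined, reverse=True)

# One round of the loop: the algorithm surfaces candidates, the crowd votes
# on them, and re-ranking incorporates that feedback.
messages = [
    {"text": "Family trapped under collapsed building near the market", "votes": 0},
    {"text": "Great weather today", "votes": 0},
    {"text": "Clinic needs help, many injured arriving", "votes": 0},
]
top = rank(messages)
top[0]["votes"] += 3  # simulated crowd confirmation of the top-ranked message
print([m["text"] for m in rank(messages)])
```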

Twitter, Crises and Early Detection: Why “Small Data” Still Matters

My colleagues John Brownstein and Rumi Chunara at Harvard University’s HealthMap project are continuing to break new ground in the field of Digital Disease Detection. Using data obtained from tweets and online news, the team was able to identify a cholera outbreak in Haiti weeks before health officials acknowledged the problem publicly. Meanwhile, my colleagues from UN Global Pulse partnered with Crimson Hexagon to forecast food prices in Indonesia by carrying out sentiment analysis of tweets. I had actually written this blog post on Crimson Hexagon four years ago to explore how the platform could be used for early warning purposes, so I’m thrilled to see this potential realized.

There is a lot that intrigues me about the work that HealthMap and Global Pulse are doing. But one point that really struck me vis-a-vis the former is just how little data was necessary to identify the outbreak. To be sure, not many Haitians are on Twitter, and my impression is that most humanitarians have not really taken to Twitter either (I’m not sure about the Haitian Diaspora). This suggests that accurate, early detection is possible even without Big Data; even with “Small Data” that is neither representative nor verified. (Interestingly, Rumi notes that the Haiti dataset is actually larger than datasets typically used for this kind of study.)

In related news, a recent peer-reviewed study by the European Commission found that the spatial distribution of crowdsourced text messages (SMS) following the earthquake in Haiti was strongly correlated with building damage. Again, the dataset of text messages was relatively small. And again, this data was neither collected using random sampling (i.e., it was crowdsourced) nor verified for accuracy. Yet the analysis of this small dataset still yielded some particularly interesting findings that have important implications for rapid damage detection in post-emergency contexts.

While I’m no expert in econometrics, what these studies suggest to me is that detecting change over time is ultimately more critical than having a large-N dataset, let alone one that is obtained via random sampling or even vetted for quality control purposes. That doesn’t mean that the latter factors are not important; it simply means that the outcome of the analysis is relatively less sensitive to these specific variables. Changes in the baseline volume/location of tweets on a given topic appear to be strongly correlated with offline dynamics.
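
To illustrate why change over time can matter more than sample size, here is a toy sketch (my own illustration, not the HealthMap or Global Pulse methodology) that flags days when the volume of topic-relevant tweets jumps well above a rolling baseline, even when the absolute counts are tiny.

```python
from statistics import mean, stdev

def flag_spikes(daily_counts, window=7, threshold=3.0):
    """Flag days whose count exceeds the rolling mean by `threshold` standard deviations.

    `daily_counts` holds tweets per day mentioning the topic; even single-digit
    counts can yield a clear signal when the baseline is stable.
    """
    alerts = []
    for day in range(window, len(daily_counts)):
        baseline = daily_counts[day - window:day]
        mu, sigma = mean(baseline), stdev(baseline)
        sigma = sigma or 1.0  # avoid division by zero on perfectly flat baselines
        z = (daily_counts[day] - mu) / sigma
        if z >= threshold:
            alerts.append((day, daily_counts[day], round(z, 1)))
    return alerts

# "Small data": single-digit daily counts, yet days 10 and 11 clearly stand out.
counts = [2, 3, 1, 2, 4, 2, 3, 2, 3, 2, 14, 18, 20]
print(flag_spikes(counts))  # [(10, 14, 14.5), (11, 18, 3.2)]
```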

What are the implications for crowdsourced crisis maps and disaster response? Could similar statistical analyses be carried out on Crowdmap data, for example? How small can a dataset be and still yield actionable findings like those mentioned in this blog post?

Crisis Mapping Climate Change, Conflict and Aid in Africa

I recently gave a guest lecture at the University of Texas, Austin, and finally had the opportunity to catch up with my colleague Josh Busby who has been working on a promising crisis mapping project as part of the university’s Climate Change and African Political Stability Program (CCAPS).

Josh and team just released the pilot version of their dynamic mapping tool, which aims to provide the most comprehensive view yet of climate change and security in Africa. The platform, developed in partnership with AidData, enables users to “visualize data on climate change vulnerability, conflict, and aid, and to analyze how these issues intersect in Africa.” The tool is powered by ESRI technology and allows researchers as well as policymakers to “select and layer any combination of CCAPS data onto one map to assess how myriad climate change impacts and responses intersect. For example, mapping conflict data over climate vulnerability data can assess how local conflict patterns could exacerbate climate-induced insecurity in a region. It also shows how conflict dynamics are changing over time and space.”

The platform provides hyper-local data on climate change and aid-funded interventions, which can yield important insights into how development assistance might (or might not) be reducing vulnerability. For example, aid projects funded by 27 donors in Malawi (i.e., aid flows) can be layered on top of the climate change vulnerability data to “discern whether adaptation aid is effectively targeting the regions where climate change poses the most significant risk to the sustainable development and political stability of a country.”
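
A rough sketch of what this kind of layering amounts to computationally (my own simplified illustration, not the CCAPS or ESRI implementation): bin conflict events into the same grid cells used for the vulnerability scores, then flag the cells that score high on both dimensions. The cell size and cutoffs below are arbitrary assumptions.

```python
from collections import Counter

def cell_of(lat, lon, cell_size=1.0):
    """Map a point to a coarse grid cell (1-degree cells, for illustration)."""
    return (int(lat // cell_size), int(lon // cell_size))

def hotspots(vulnerability, conflict_events, vuln_cutoff=0.7, event_cutoff=5):
    """Return grid cells that are both highly vulnerable and conflict-prone.

    `vulnerability` maps grid cells to a 0-1 climate vulnerability score;
    `conflict_events` is a list of (lat, lon) event locations.
    """
    counts = Counter(cell_of(lat, lon) for lat, lon in conflict_events)
    return [
        {"cell": cell, "vulnerability": score, "events": counts[cell]}
        for cell, score in vulnerability.items()
        if score >= vuln_cutoff and counts[cell] >= event_cutoff
    ]
```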

If this weren’t impressive enough, I was positively amazed to learn from Josh and team that the conflict data they’re using, the Armed Conflict Location and Event Data (ACLED), will be updated on a weekly basis as part of this project. Back in the day, ACLED was specifically coding historical data. A few years ago they closed the gap by updating some conflict data on a yearly basis. Now the temporal lag will be just one week. Note that the mapping tool already draws on the Social Conflict in Africa Database (SCAD).

This project is an important contribution to the field of crisis mapping and I look forward to following CCAPS’s progress closely over the next few months. I’m hoping that Josh will present this project at the 2012 International Crisis Mappers Conference (ICCM 2012) later this year.

#UgandaSpeaks: Al-Jazeera uses Ushahidi to Amplify Local Voices in Response to #Kony2012

[Cross-posted from the Ushahidi blog]

Invisible Children’s #Kony2012 campaign has set off a massive firestorm of criticism with the debate likely to continue raging for many more weeks and months. In the meantime, our colleagues at Al-Jazeera have repurposed our previous #SomaliaSpeaks project to amplify Ugandan voices responding to the Kony campaign: #UgandaSpeaks.

Other than GlobalVoices, this Al-Jazeera initiative is one of the very few seeking to amplify local reactions to the Kony campaign. Over 70 local voices have been shared and mapped on Al-Jazeera’s Ushahidi platform in the first few hours since the launch. The majority of reactions submitted thus far are critical of the campaign but a few are positive.

One person from Kampala asks, “How come the world now knows more about #Kony2012 than about the Nodding Syndrome in Northern Uganda?” Another person in Gulu complains that “there is nothing new they are showing us. Its like a campaign against our country. […] Did they put on consideration how much its costing our country’s image? It shows as if Uganda is finished.” In nearby Lira, one person shares their story about growing up in Northern Uganda and attending “St. Mary’s College Aboke, a school from which Joseph Kony’s rebels abducted 139 girls in ordinary level […]. For the 4 years that I spent in that school (1999-2002), together with other students, I remember praying the Rosary at the School Grotto on daily basis and in the process, reading out the names of the 30 girls who had remained in captivity after Sr. Rachelle an Italian Nun together with a Ugandan teacher John Bosco rescued only 109 of them.”

The Ushahidi platform was first launched in neighboring Kenya to give ordinary Kenyans a voice during the post-election violence in 2007/2008. Indeed, “ushahidi” means witness or testimony in Swahili. So I am pleased to see this free and open source platform from Africa being used to amplify voices next door in Uganda, voices that are not represented in the #Kony2012 campaign.

Some Ugandan activists are asking why they should respond to “some American video release about something that happened 20 years ago by someone who is not in my country?” Indeed, why should anyone? If the #Kony2012 campaign and underlying message doesn’t bother Ugandans and doesn’t paint the country in a bad light, then there’s no need to respond. If the campaign doesn’t divert attention from current issues that are more pressing to Ugandans and does not adversely affect tourism, then again, why should anyone respond? This is, after all, a personal choice; no one is forced to have their voices heard.

At SXSW yesterday, Ugandan activist Teddy Ruge weighed in on the #Kony2012 campaign with the following:

“We [Ugandans] have such a hard time being given the microphone to talk about our issues that sometimes we have to follow on the coat-tails of Western projects like this one and say that we also have a voice in this matter.”

I believe one way to have those local voices heard is to have them echoed using innovative software “Made in Africa” like Ushahidi and then amplified by a non-Western but international news company like Al-Jazeera. Looking at my Twitter stream this morning, it appears that I’m not the only one. The microphone is yours. Over to you.

Truthiness as Probability: Moving Beyond the True or False Dichotomy when Verifying Social Media

I asked the following question at the Berkman Center’s recent Symposium on Truthiness in Digital Media: “Should we think of truthiness in terms of probabilities rather than use a True or False dichotomy?” The wording here is important. The word “truthiness” already suggests a subjective fuzziness around the term. Expressing truthiness as probabilities provides more contextual information than does a binary true or false answer.

When we set out to design the SwiftRiver platform some three years ago, it was already clear to me then that the veracity of crowdsourced information ought to be scored in terms of probabilities. For example, what is the probability that the content of a Tweet referring to the Russian elections is actually true? Why use probabilities? Because it is particularly challenging to instantaneously verify crowdsourced information in the real-time social media world we live in.

There is a common tendency to assume that all unverified information is false until proven otherwise. This is too simplistic, however. We need a fuzzy logic approach to truthiness:

“In contrast with traditional logic theory, where binary sets have two-valued logic: true or false, fuzzy logic variables may have a truth value that ranges in degree between 0 and 1. Fuzzy logic has been extended to handle the concept of partial truth, where the truth value may range between completely true and completely false.”

The majority of user-generated content is unverified at time of birth. (Does said data deserve the “original sin” of being labeled as false, unworthy, until proven otherwise? To digress further, unverified content could be said to have a distinct wave function that enables said data to be both true and false until observed. The act of observation starts the collapse of said wave function. To the astute observer, yes, I’m riffing off Schrödinger’s cat, and was also pondering how to weave in Heisenberg’s uncertainty principle as an analogy; think of a piece of information characterized by a “probability cloud” of truthiness.)

I believe the hard sciences have much to offer in this respect. Why don’t we have error margins for truthiness? Why not take a weather-forecast approach to information truthiness in social media? What if we had a truthiness forecast, understanding full well that weather forecasts are not always correct? The fact that a 70% chance of rain is forecast doesn’t prevent us from acting on that forecast and using it to inform our decision-making. If we applied binary logic to weather forecasts, we’d be left with either a 100% chance of rain or a 100% chance of sun. Such forecasts would be suspect at best and frequently wrong.

In any case, instead of dismissing content generated in real-time because it is not immediately verifiable, we can draw on Information Forensics to begin assessing the potential validity of said content. Tactics from information forensics can help us create a score card of heuristics to express truthiness in terms of probabilities. (I call this advanced media literacy). There are indeed several factors that one can weigh, e.g., the identity of the messenger relaying the content, the source of the content, the wording of said content, the time of day the information was shared, the geographical proximity of the source to the event being reported, etc.
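
A minimal sketch of what such a score card could look like in code; the heuristics and weights here are entirely my own illustrative assumptions, intended only to show the shape of the idea. Each factor contributes a value between 0 and 1, and the weighted sum is read as a truthiness probability rather than a true/false verdict.

```python
# Illustrative heuristics and weights; in practice these would be tuned by
# experts and/or learned from past verification outcomes.
WEIGHTS = {
    "known_source": 0.3,          # is the messenger's identity/track record known?
    "corroboration": 0.3,         # do independent reports say the same thing?
    "geo_proximity": 0.2,         # is the source near the reported event?
    "content_specificity": 0.1,   # names, places and times vs. vague claims
    "timing_plausibility": 0.1,   # consistent with when such events unfold?
}

def truthiness(scores: dict) -> float:
    """Weighted sum of per-heuristic scores (each in [0, 1]) read as a probability."""
    return sum(WEIGHTS[k] * scores.get(k, 0.0) for k in WEIGHTS)

# Example: a geo-tagged report from a known local source, partly corroborated.
report_scores = {
    "known_source": 1.0,
    "corroboration": 0.5,
    "geo_proximity": 1.0,
    "content_specificity": 0.8,
    "timing_plausibility": 1.0,
}
print(f"truthiness = {truthiness(report_scores):.2f}")  # 0.83, neither 'true' nor 'false'
```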

These weights need not be static, as they are largely subjective and temporal; after all, truth is socially constructed and dynamic. So while a “wisdom of the crowds” approach alone may not always be well-suited to generating these weights, perhaps integrating the hunch of the expert with machine learning algorithms (based on lessons learned in information forensics) could result in more useful decision-support tools for truthiness forecasting (or rather “backcasting”).

In sum, thinking of truthiness strictly in terms of true and false prevents us from “complexifying” a scalar variable into a vector (a wave function), which in turn limits our ability to develop new intervention strategies. We need new conceptual frameworks to reflect the complexity and ambiguity of user-generated content.


Innovation and Counter-Innovation: Digital Resistance in Russia

Want to know what the future of digital activism looks like? Then follow the developments in Russia. I argued a few years back that the fields of digital activism and civil resistance were converging to a point I referred to as “digital resistance.” The pace of tactical innovation and counter-innovation on Russia’s digital battlefield is stunning, and the two fields are rapidly converging on this notion of digital resistance.

“Crisis can be a fruitful time for innovation,” writes Gregory Asmolov. Contested elections are also ripe for innovation, which is why my dissertation case studies focused on elections. “In most cases,” says Asmolov, “innovations are created by the oppressed (the opposition, in Russia’s case), who try to challenge the existing balance of power by using new tools and technologies. But the state can also adapt and adopt some of these technologies to protect the status quo.” These innovations stem not only from the new technologies themselves but are embodied in the creative ways they are used. In other words, tactical innovation (and counter-innovation) is taking place alongside technological innovation. Indeed, “innovation can be seen not only in the new tools, but also in the new forms of protest enabled by the technology.”

Some of my favorite tactics from Russia include the YouTube video depicting Vladimir Putin being arrested for fraud and corruption. The video was made to look like a real “breaking news” announcement on Russian television and got millions of viewers in just a few days. Another tactic is the use of DIY drones, mobile phone live-streaming and/or 360-degree 3D photo installations to more accurately relay the size of protests. A third tactic entails the use of a Twitter username that resembles that of a well-known individual. Michael McFaul, the US Ambassador to Russia, has the Twitter handle @McFaul. Activists set up the handle @McFauI, which appears identical but actually ends with a capital “I” instead of a lowercase “l.”
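
The lookalike-handle trick works because a capital “I” and a lowercase “l” render almost identically in many fonts. As a small illustration (my own sketch, using a deliberately tiny confusable-character table), here is how one might detect such near-identical handles:

```python
# Tiny illustrative mapping of visually confusable characters; a real check
# would rely on a much larger table such as the Unicode confusables data.
CONFUSABLES = str.maketrans({"I": "l", "1": "l", "0": "o", "O": "o"})

def normalize(handle: str) -> str:
    # Fold confusables before lowercasing so that capital "I" maps to "l".
    return handle.translate(CONFUSABLES).casefold()

def looks_like(handle_a: str, handle_b: str) -> bool:
    """True if two handles are distinct strings but visually near-identical."""
    return handle_a != handle_b and normalize(handle_a) == normalize(handle_b)

print(looks_like("@McFaul", "@McFauI"))  # True: capital "I" stands in for lowercase "l"
```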

Asmolov lists a number of additional innovations in the Russian context in this excellent write-up: coordination tools such as the “League of Voters” website, the “Street Art” group on Facebook and the car-based flashmob protests (which attracted more than one thousand cars in one case); the crowdsourced violations map “Karta Narusheniy”; and the “SMS Golos” and “Svodny Protocol” platforms used to collect, analyze and/or map reports from trusted election observers (using bounded crowdsourcing).

One of my favorite tactics is the “solo protest.” According to Russian law, “a protest by one person does not require special permission.” So activist Olesya Shmagun stood in front of Putin’s office with a poster that read “Putin, go and take part in public debates!” While she was questioned by the police and security service, she was not detained, since one-person protests are not illegal. Even though she only caught the attention of several dozen people walking by at the time, she published the story of her protest and a few photos on her LiveJournal blog, which drew considerable attention after being shared on many blogs and media outlets. As Asmolov writes, “this story shows the power of what is known as Manuel Castells’ ‘mass self-communication’. Thanks to the presence of one camera, an offline one-person protest found a way to a [much wider] audience online.”

This innovative tactic led to another challenge: how to turn a one-person protest into a massive number of one-person protests? So on top of this original innovation came yet another, the Big White Circle action. The dedicated online tool Feb26.ru was developed specifically to coordinate many simultaneous one-person protests. The platform,

“[…] allowed people to check in at locations of their choice on the map of the Garden Ring circle, and showed what locations were already occupied. Unlike other protests, the Big White Circle did not have any organizational committee or a particular leader. The role of the leader was played by a website. The website suffered from DDoS attacks; as a result, it was closed and deleted by the provider; a day later, it was restored.  The practice of creating special dedicated websites for specific protest events is one of the most interesting innovations of the Russian protests. The initial idea belongs to Ilya Klishin, who launched the dec24.ru website (which doesn’t exist anymore) for the big opposition rally that took place in Moscow on December 24, 2011.”

I like this tactic because it takes a perfectly legal action and simply multiplies it, thus forcing the regime to potentially come up with a new set of laws that would clearly appear absurd and be ridiculed by a larger segment of the population.
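
Out of curiosity about the mechanics, here is a toy sketch of the coordination logic a site like this could implement; this is my own guess at the idea, not Feb26.ru’s actual code. Protesters claim a spot along a ring of locations, and everyone can see which spots are still free.

```python
class RingProtestMap:
    """Toy coordinator: each spot on the ring can be claimed by one person."""

    def __init__(self, num_spots: int):
        self.spots = {i: None for i in range(num_spots)}  # spot index -> claimant

    def claim(self, spot: int, name: str) -> bool:
        """Claim a free spot; returns False if it is unknown or already taken."""
        if spot in self.spots and self.spots[spot] is None:
            self.spots[spot] = name
            return True
        return False

    def free_spots(self):
        return [i for i, who in self.spots.items() if who is None]

ring = RingProtestMap(num_spots=12)
ring.claim(3, "first protester")
ring.claim(3, "second protester")  # rejected: spot 3 is already occupied
print(ring.free_spots())           # every spot except 3
```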

Citizen-based journalism played a pivotal role by “increasing transparency of the coverage of pro-government rallies.” As Asmolov notes, “Internet users were able to provide much content, including high quality YouTube reports that showed that many of those who took a part in these rallies had been forced or paid to participate, without really having any political stance.” This relates to my earlier blog post, “Wag the Dog, or Why Falsifying Crowdsourced Information Can be a Pain.”

Of course, there is plenty of “counter-innovation” coming from the Kremlin and friends. Take this case of pro-Kremlin activists producing an instructional YouTube video on how to manipulate a crowdsourced election-monitoring platform. In addition, Putin loyalists have adapted some of the same tactics as opposition activists, such as the car-based flash-mob protest. The Russian government also decided to create an online system of their own for election monitoring:

“Following an order from Putin, the state communication company Rostelecom developed a website webvybory2012.ru, which allowed people to follow the majority of the Russian polling stations (some 95,000) online on the day of the March 4 presidential election.  Every polling station was equipped with two cameras: one has to be focused on the ballot box and the other has to give the general picture of the polling station. Once the voting was over, one of the cameras broadcasted the counting of the votes. The cost of this project is at least 13 billion rubles (around $500 million). Many bloggers have criticized this system, claiming that it creates an imitation of transparency, when actually the most common election violations cannot be monitored through webcameras (more detailed analysis can be found here). Despite this, the cameras allowed to spot numerous violations (1, 2).”

From the perspective of digital resistance strategies, this is exactly the kind of reaction you want to provoke from a repressive regime. Force them to decentralize, spend hundreds of millions of dollars and hundreds of labor-hours to adopt similar “technologies of liberation” and in the process document voting irregularities on their own websites. In other words, leverage and integrate the regime’s technologies within the election-monitoring ecosystem being created, as this will spawn additional innovation. For example, one Russian activist proposed that this webcam network be complemented by a network of citizen mobile phones. In fact, a group of activists developed a smartphone app that could do just this. “The application Webnablyudatel has a classification of all the violations and makes it possible to instantly share video, photos and reports of violations.”

Putin supporters also made an innovative use of crowdsourcing during the recent elections. “What Putin has done is based on a map of Russia where anyone can submit information about Putin’s good deeds.” Just like pro-Kremlin activists can game pro-democracy crowdsourcing platforms, so can supporters of the opposition game a platform like this Putin map. In addition, activists could have easily created a Crowdmap and called it “What Putin Has Not Done” and crowdsource that map, which no doubt would be far more populated than the original good deed map.

One question that comes to mind is how the regime will deal with disinformation on the crowdsourcing platforms they set up. Will they need to hire more supporters to vet the information submitted to said platforms? Or will they close up the reporting and use “bounded crowdsourcing” instead? If so, will they have a communications challenge on their hands in trying to convince the public that their trusted reporters are indeed legitimate? Another question has to do with collective action. Pro-Kremlin activists are already innovating on their own, but will this create a collective-action challenge for the Russian government? Take the example of the pro-regime “Putin Alarm Clock” (Budilnikputina.ru) tactic, which backfired and even prompted Putin’s chief of elections staff to dismiss the initiative as “a provocation organized by the protestors.”

There has always been an interesting asymmetric dynamic in digital activism, with activists as first-movers innovating under oppression and regimes counter-innovating. How will this asymmetry change as digital activism and civil resistance tactics and strategies increasingly converge? Will repressive regimes be pushed to decentralize their digital resistance innovations in order to keep pace with the distributed pro-democracy innovations springing up? Does innovation require less coordination than counter-innovation? And as Gregory Asmolov concludes in his post-script, how will the future ubiquity of crowd-funding platforms and tools for micro-donations/payments online change digital resistance?