Category Archives: Satellite Imagery

QED – Goodbye Doha, Hello Adventure!

Quod Erat Demonstrandum (QED) is Latin for “that which was to be demonstrated.” The abbreviation is traditionally placed at the end of a mathematical proof to signal its completion. I joined the Qatar Computing Research Institute (QCRI) well over 3 years ago with a very specific mission and mandate: to develop and deploy next generation humanitarian technologies. So I built the Institute’s Social Innovation Program from the ground up and recruited the majority of the full-time experts (scientists, engineers, research assistants, interns & project manager) who have become integral to the Program’s success. During these 3+ years, my team and I partnered directly with humanitarian and development organizations to empirically prove that methods from advanced computing can be used to make sense of Big (Crisis) Data. The time has thus come to add “QED” to the end of that proof and move on to new adventures. But first, a reflection.

Over the past 3.5 years, my team and I at QCRI developed free and open source solutions powered by crowdsourcing and artificial intelligence to make sense of tweets, text messages, pictures, videos, and satellite and aerial imagery for a wide range of humanitarian and development projects. We co-developed and co-deployed these platforms (AIDR and MicroMappers) with the United Nations and the World Bank in response to major disasters such as Typhoons Haiyan and Ruby, Cyclone Pam and the Nepal & Chile Earthquakes. In addition, we carried out peer-reviewed scientific research on these deployments to better understand how to meet the information needs of our humanitarian partners. We also tackled the information reliability question, experimenting with crowdsourcing (Verily) and machine learning (TweetCred) to assess the credibility of information generated during disasters. All of these initiatives were firsts in the humanitarian technology space.

We later developed AIDR-SMS to auto-classify text messages, a platform that UNICEF successfully tested in Zambia and that the World Food Program (WFP) and the International Federation of the Red Cross (IFRC) now plan to pilot. AIDR was also used to monitor a recent election, and our partners are now looking to use AIDR again for upcoming election monitoring efforts. As for MicroMappers, we extended the platform considerably in order to crowdsource the analysis of oblique aerial imagery captured via small UAVs, another first in the humanitarian space. We also teamed up with excellent research partners to crowdsource the analysis of aerial video footage and to develop automated feature-detection algorithms for oblique imagery analysis based on crowdsourced results derived from MicroMappers. We developed these Big Data solutions to support damage assessment efforts, food security projects and even this wildlife protection initiative.

In addition to the above accomplishments, we launched the Internet Response League (IRL) to explore the possibility of leveraging massively multiplayer online games to process Big Crisis Data. Along similar lines, we developed the first ever spam filter to make sense of Big Crisis Data. Furthermore, we got directly engaged in the field of robotics by launching the Humanitarian UAV Network (UAViators), yet another first in the humanitarian space. In the process, we created the largest repository of aerial imagery and videos of disaster damage, which is ripe for cutting-edge computer vision research. We also spearheaded the World Bank’s UAV response to Category 5 Cyclone Pam in Vanuatu and directed a unique disaster recovery UAV mission in Nepal after the devastating earthquakes. (I took time off from QCRI to carry out both of these missions and also took holiday time to support UN relief efforts in the Philippines following Typhoon Haiyan in 2013). Lastly, on the robotics front, we championed the development of international guidelines to inform the safe, ethical & responsible use of this new technology in both humanitarian and development settings. To be sure, innovation is not just about the technology but also about crafting appropriate processes to leverage this technology. Hence also the rationale behind the Humanitarian UAV Experts Meetings that we’ve held at the United Nations Secretariat, the Rockefeller Foundation and MIT.

All of the above pioneering and experimental projects have resulted in extensive media coverage, which has placed QCRI squarely on the radar of international humanitarian and development groups. This media coverage has included the New York Times, Washington Post, Wall Street Journal, CNN, BBC News, UK Guardian, The Economist, Forbes, Time Magazine, New Yorker, NPR, Wired, Mashable, TechCrunch, Fast Company, Nature, New Scientist, Scientific American and more. In addition, our good work and applied research have been featured in numerous international conference presentations and keynotes. In sum, I know of no other institute for advanced computing research that has contributed this much to the international humanitarian space in terms of thought leadership, strategic partnerships, applied research and operational expertise through real-world co-deployments during and after major disasters.

There is, of course, a lot more to be done in the humanitarian technology space. But what we have accomplished over the past 3 years clearly demonstrates that techniques from advanced computing can indeed provide part of the solution to the pressing Big Data challenge that humanitarian & development organizations face. At the same time, as I wrote in the concluding chapter of my new book, Digital Humanitarians, solving the Big Data challenge does not, alas, imply that international aid organizations will actually make use of the resulting filtered data, or any other data for that matter, even if they ask for this data in the first place. So until humanitarian organizations truly shift towards strategic and tactical evidence-based analysis & data-driven decision-making, this disconnect will surely continue for many years to come.

Reflecting on the past 3.5 years at QCRI, it is crystal clear to me that the single most important lesson I (re)learned is that you can do anything if you have an outstanding, super-smart and highly dedicated team that continually goes way above and beyond the call of duty. It is one thing for me to have had the vision for AIDR, MicroMappers, IRL, UAViators, etc., but vision alone does not amount to much. Implementing said vision is what delivers results and learning. And I simply couldn’t have asked for a more talented & stellar team to translate these visions into reality over the past 3+ years. You each know who you are, partners included; it has truly been a privilege and an honor working with you. I can’t wait to see what you do next at/with QCRI. Thank you for trusting me; thank you for sharing my vision; thank you for your sense of humor; and thank you for your dedication and loyalty to science and social innovation.

So what’s next for me? I’ll be lining up independent consulting work with several organizations (likely including QCRI). In short, I’ll be open for business. I’m also planning to work on a new project that I’m very excited about, so stay tuned for updates; I’ll be sure to blog about this new adventure when the time is right. For now, I’m busy wrapping up my work as Director of Social Innovation at QCRI and working with the best team there is. QED.

Aerial Robotics in the Land of Buddha

Buddhist temples adorn Nepal’s blessed land. Their stupas, like Everest, stretch to the heavens, yearning to democratize the sky. We felt the same yearning after landing in Kathmandu with our UAVs. While some prefer the word “drone” over “UAV”, the reason our Nepali partners use the latter dates back some 3,000 years to the spiritual epic Mahabharata (Great Story of the Bharatas). The ancient story features Drona, a master of advanced military arts who slayed hundreds of thousands with his bow & arrows. This strong military connotation explains why our Nepali partners use “UAV” instead, which is the term we also used for our Humanitarian UAV Mission in the land of Buddha. Our purpose: to democratize the sky.


Unmanned Aerial Vehicles (UAVs) are aerial robots. They are the first wave of robotics to impact the humanitarian space. The mission of the Humanitarian UAV Network (UAViators) is to enable the safe, responsible and effective use of UAVs in a wide range of humanitarian and development settings. We thus spearheaded a unique and weeklong UAV Mission in Nepal in close collaboration with Kathmandu University (KU), Kathmandu Living Labs (KLL), DJI and Pix4D. This mission represents the first major milestone for Kathmandu Flying Labs (please see end of this post for background on KFL).


Our joint UAV mission combined both hands-on training and operational deployments. The full program is available here. The first day comprised a series of presentations on Humanitarian UAV Applications, Missions, Best Practices, Guidelines, Technologies, Software and Regulations. These talks were given by myself, KU, DJI, KLL and the Civil Aviation Authority (CAA) of Nepal. The second day focused on direct hands-on training. DJI took the lead by training 30+ participants on how to use the Phantom 3 UAVs safely and responsibly. Pix4D, also on site, followed up by introducing their imagery-analysis software.




The second half of the day was dedicated to operations. We had already received written permission from the CAA to carry out all UAV flights thanks to KU’s outstanding leadership. KU also selected the deployment sites and enabled us to team up with the very proactive Community Disaster Management Committee (CDMC-9) of Kirtipur to survey the town of Panga, which had been severely affected by the earthquake just months earlier. The CDMC was particularly keen to gain access to very high-resolution aerial imagery of the area to build back faster and better, so we spent half a day flying half a dozen Phantom 3s over parts of Panga as requested by our local partners.





The best part of this operation came at the end of the day when we had finished the mission and were packing up: Our Nepali partners politely noted that we had not in fact finished the job; we still had a lot more area to cover. They wanted us back in Panga the following day to complete our mapping mission. We thus changed our plans and returned the next day during which—thanks to DJI & Pix4D—we flew several dozen additional UAV flights from four different locations across Panga (without taking a single break; no lunch was had). Our local partners were of course absolutely invaluable throughout since they were the ones informing the flight plans. They also made it possible for us to launch and land all our flights from the highest rooftops across town. (Click images to enlarge).






Meanwhile, back at KU, our Pix4D partners provided hands-on training on how to use their software to analyze the aerial imagery we had collected the day before. KLL also provided training on how to use the Humanitarian OpenStreetMap Tasking Manager to trace this aerial imagery. Incidentally, we flew well over 60 UAV flights in all over the course of our mission, including all our training flights on campus as well as our aerial survey of a teaching hospital. Not a single incident or accident occurred; everyone followed safety guidelines and the technology worked flawlessly.




With more than 800 aerial photographs in hand, the Pix4D team worked through the night to produce a very high-resolution orthorectified mosaic of Panga. Here are some of the results.





Compare these results with the resolution and colors of the satellite imagery for the same area (maximum zoom).



We can now use MicroMappers to crowdsource the analysis & digital annotation of oblique aerial pictures and videos collected throughout the mission. This is an important step in the development of automated feature-detection algorithms using techniques from computer vision and machine learning. The reason we want automated solutions is that aerial imagery already presents a Big Data challenge for humanitarian and development organizations. Indeed, a single 20-minute UAV flight can generate some 800 images. A trained analyst needs at least one minute to analyze a single image, which means that more than 13 hours of human time are needed to analyze the imagery captured from just one 20-minute UAV flight. More on this Big Data challenge here.
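The arithmetic behind that 13-hour figure is worth making explicit; here is a minimal sketch using the numbers cited above (800 images per 20-minute flight, one minute of analyst time per image):

```python
# Back-of-the-envelope estimate of human analysis time for UAV imagery,
# using the figures cited in the text.
images_per_flight = 800   # images from a single 20-minute flight
minutes_per_image = 1     # analyst time per image (a lower bound)

total_minutes = images_per_flight * minutes_per_image
total_hours = total_minutes / 60

print(f"~{total_hours:.1f} hours of analyst time per flight")  # ~13.3 hours
```

Even a modest mission of a dozen flights thus implies weeks of full-time human analysis, which is exactly why automated feature detection matters.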

Incidentally, since Pix4D also used their software to produce a number of stunning 3D models, I’m keen to explore ways to crowdsource 3D models via MicroMappers and to explore possible Virtual Reality solutions to the Big Data challenge. In any event, we generated all the aerial data requested by our local partners by the end of the day.

While this technically meant that we had successfully completed our mission, it didn’t feel finished to me. I really wanted to “liberate” the data completely and place it directly into the hands of the CDMC and local community in Panga. What’s the point of “open data” if most of Panga’s residents are not able to view or interact with the resulting maps? So I canceled my return flight and stayed an extra day to print out our aerial maps on very large rollable and waterproof banners (which are more durable than paper-based maps).




We thus used these banner-maps and participatory mapping methods to engage the local community directly. We invited community members to annotate the very high-resolution aerial maps themselves using the tape and color-coded paper we had brought along. In other words, we used the aerial imagery as a base map to catalyze a community-wide discussion; to crowdsource and to visualize the community’s local knowledge. Participatory mapping and GIS (PPGIS) can play an impactful role in humanitarian and development projects, hence this initiative with our local partners (more here on community mapping).

In short, our humanitarian mission combined aerial robotics, computer vision, waterproof banners, tape, paper and crowdsourcing to inform the rebuilding process at the community level.












The engagement from the community was absolutely phenomenal and definitely for me the highlight of the mission. Our CDMC partners were equally thrilled and excited with the community engagement that the maps elicited. There were smiles all around. When we left Panga some four hours later, dozens of community members were still discussing the map, which our partners had hung up near a popular local teashop.

There’s so much more to share from this UAV mission; so many angles, side-stories and insights. The above is really just a brief and incomplete teaser. So stay tuned, there’s a lot more coming up from DJI and Pix4D. Also, the outstanding film crew that DJI invited along is already reviewing the vast volume of footage captured during the week. We’re excited to see the professionally edited video in coming weeks, not to mention the professional photographs that both DJI and Pix4D took throughout the mission. We’re especially keen to see what our trainees at KU and KLL do next with the technology and software that are now in their hands. Indeed, the entire point of our mission was to help build local capacity for UAV missions in Nepal by transferring knowledge, skills and technology. It is now their turn to democratize the skies of Nepal.



Acknowledgements: Some serious acknowledgements are in order. First, huge thanks to Lecturer Uma Shankar Panday from KU for co-sponsoring this mission, for hosting us and for making our joint efforts a resounding success. The warm welcome and kind hospitality we received from him, KU’s faculty and executive leadership was truly very touching. Second, special thanks to the CAA of Nepal for participating in our training and for giving us permission to fly. Third, big, big thanks to the entire DJI and Pix4D Teams for joining this UAViators mission and for all their very, very hard work throughout the week. Many thanks also to DJI for kindly donating 10 Smartisan phones and 10 Phantom 3’s to KU and KLL; and kind thanks to Pix4D for generously donating licenses of their software to both KU and KLL. Fourth, many thanks to KLL for contributing to the training and for sharing our vision behind Kathmandu Flying Labs. Fifth, I’d like to express my sincere gratitude to Smartisan for co-sponsoring this mission. Sixth, deepest thanks to CDMC and Dhulikhel Hospital for partnering with us on the ops side of the mission. Their commitment and life-saving work are truly inspiring. Seventh, special thanks to the film and photography crew for being so engaged throughout the mission; they were absolutely part of the team. In closing, I want to specifically thank my colleagues Andrew Schroeder from UAViators and Paul & William from DJI for all the heavy lifting they did to make this entire mission possible. On a final and personal note, I’ve made new friends for life as a result of this UAV mission, and for that I am infinitely grateful.

Kathmandu Flying Labs: My colleague Dr. Nama Budhathoki and I began discussing the potential role that small UAVs could play in his country in early 2014, well over a year and a half before Nepal’s tragic earthquakes. Nama is the Director of Kathmandu Living Labs, a crack team of Digital Humanitarians whose hard work has been featured in The New York Times and the BBC. Nama and team create open-data maps for disaster risk reduction and response. They use the Humanitarian OpenStreetMap Tasking Manager to trace buildings and roads visible from orbiting satellites in order to produce these invaluable maps. Their primary source of satellite imagery for this is Bing. Alas, said imagery is both low-resolution and out of date. And they’re not sure they’ll have free access to said imagery indefinitely either.


So Nama and I decided to launch a UAV Innovation Lab in Nepal, which I’ve been referring to as Kathmandu Flying Labs. A year-and-a-half later, the tragic earthquake struck. So I reached out to DJI in my capacity as founder of the Humanitarian UAV Network (UAViators). The mission of UAViators is to enable the safe, responsible and effective use of UAVs in a wide range of humanitarian and development settings. DJI, who are on the Advisory Board of UAViators, had deployed a UAV team in response to the 6.1 earthquake in China the year before. Alas, they weren’t able to deploy to Nepal. But they very kindly donated two Phantom 2’s to KLL.

A few months later, my colleague Andrew Schroeder from UAViators and Direct Relief reconnected with DJI to explore the possibility of a post-disaster UAV Mission focused on recovery and rebuilding. Both DJI and Pix4D were game to make this mission happen, so I reached out to KLL and KU to discuss logistics. Professor Uma at KU worked tirelessly to set everything up. The rest, as they say, is history. There is of course a lot more to be done, which is why Nama, Uma and I are already planning the next important milestones for Kathmandu Flying Labs. Do please get in touch if you’d like to be involved and contribute to this truly unique initiative. We’re also exploring payload delivery options via UAVs and gearing up for new humanitarian UAV missions in other parts of the planet.

A Force for Good: How Digital Jedis are Responding to the Nepal Earthquake (Updated)

Digital Humanitarians are responding in full force to the devastating earthquake that struck Nepal. Information sharing and coordination is taking place online via CrisisMappers and on multiple dedicated Skype chats. The Standby Task Force (SBTF), Humanitarian OpenStreetMap (HOT) and others from the Digital Humanitarian Network (DHN) have also deployed in response to the tragedy. This blog post provides a quick summary of some of these digital humanitarian efforts along with what’s coming in terms of new deployments.

Update: A list of Crisis Maps for Nepal is available below.


At the request of the UN Office for the Coordination of Humanitarian Affairs (OCHA), the SBTF is using QCRI’s MicroMappers platform to crowdsource the analysis of tweets and mainstream media (the latter via GDELT) to rapidly 1) assess disaster damage & needs; and 2) identify where humanitarian groups are deploying (3W’s). The MicroMappers CrisisMaps are already live and publicly available below (simply click on the maps to open the live versions). Both Crisis Maps are being updated hourly (at times every 15 minutes). Note that MicroMappers also uses both crowdsourcing and Artificial Intelligence (AIDR).

Update: More than 1,200 Digital Jedis have used MicroMappers to sift through a staggering 35,000 images and 7,000 tweets! This has so far resulted in 300+ relevant pictures of disaster damage displayed on the Image Crisis Map and over 100 relevant disaster tweets on the Tweet Crisis Map.

Live CrisisMap of pictures from both Twitter and Mainstream Media showing disaster damage:


Live CrisisMap of Urgent Needs, Damage and Response Efforts posted on Twitter:


Note: the outstanding Kathmandu Living Labs (KLL) team has also launched an Ushahidi Crisis Map in collaboration with the Nepal Red Cross. We’ve already invited KLL to take all of the MicroMappers data and add it to their crisis map. Supporting local efforts is absolutely key.


The Humanitarian UAV Network (UAViators) has also been activated to identify, mobilize and coordinate UAV assets & teams. Several professional UAV teams are already on their way to Kathmandu. The UAV pilots will be producing high resolution nadir imagery, oblique imagery and 3D point clouds. UAViators will be pushing this imagery to both HOT and MicroMappers for rapid crowdsourced analysis (just as was done with the aerial imagery from Vanuatu after Cyclone Pam; more on that here). A leading UAV manufacturer is also donating several UAVs to UAViators for use in Nepal. These UAVs will be sent to KLL to support their efforts. In the meantime, DigitalGlobe, Planet Labs and SkyBox are each sharing their satellite imagery with CrisisMappers, HOT and others in the Digital Humanitarian Network.

There are several other efforts going on, so the above is certainly not a complete list but simply reflects those digital humanitarian efforts that I am involved in or most familiar with. If you know of other major efforts, then please feel free to post them in the comments section. Thank you. More on the state of the art in digital humanitarian action in my new book, Digital Humanitarians.

List of Nepal Crisis Maps

Please add to the list below by posting new links in this Google Spreadsheet. Also, someone should really create one map that pulls from each of the listed maps.

Code for Nepal Casualty Crisis Map: 

DigitalGlobe Crowdsourced Damage Assessment Map:

Disaster OpenRouteService Map for Nepal:

ESRI Damage Assessment Map:

Harvard WorldMap Tweets of Nepal: 

Humanitarian OpenStreetMap Nepal:

Kathmandu Living Labs Crowdsourced Crisis Map:

MicroMappers Disaster Image Map of Damage:

MicroMappers Disaster Damage Tweet Map of Needs:

NepalQuake Status Map:

UAViators Crisis Map of Damage from Aerial Pics/Vids: (takes a while to load)

Visions SDSU Tweet Crisis Map of Nepal:

Artificial Intelligence Powered by Crowdsourcing: The Future of Big Data and Humanitarian Action

There’s no point spewing stunning statistics like this recent one from The Economist, which states that 80% of adults will have access to smartphones before 2020. The volume, velocity and variety of digital data will continue to skyrocket. To paraphrase Douglas Adams, “Big Data is big. You just won’t believe how vastly, hugely, mind-bogglingly big it is.”


And so, traditional humanitarian organizations have a choice when it comes to battling Big Data. They can either continue business as usual (and lose) or get with the program and adopt Big Data solutions like everyone else. The same goes for Digital Humanitarians. As noted in my new book of the same title, those Digital Humanitarians who cling to crowdsourcing alone as their pièce de résistance will inevitably become the ivy-laden battlefield monuments of 2020.


Big Data comprises a variety of data types such as text, imagery and video. Examples of text-based data include mainstream news articles, tweets and WhatsApp messages. Imagery includes Instagram, professional photographs that accompany news articles, satellite imagery and, increasingly, aerial imagery as well (captured by UAVs). Television channels, Meerkat and YouTube broadcast videos. Finding relevant, credible and actionable pieces of text, imagery and video in the Big Data generated during major disasters is like looking for a needle in a meadow (haystacks are ridiculously small datasets by comparison).

Humanitarian organizations, like many others in different sectors, often find comfort in the notion that their problems are unique. Thankfully, this is rarely true. Not only is the Big Data challenge not unique to the humanitarian space, real solutions to the data deluge have already been developed by groups that humanitarian professionals at worst don’t know exist and at best rarely speak with. These groups are already using Artificial Intelligence (AI) and some form of human input to make sense of Big Data.


How does it work? And why do you still need human input if AI is already in play? The human input, which can come via crowdsourcing or from a few individuals, is needed to train the AI engine, which uses a technique from AI called machine learning to learn from the human(s). Take AIDR, for example. This experimental solution, which stands for Artificial Intelligence for Disaster Response, uses AI powered by crowdsourcing to automatically identify relevant tweets and text messages in an exploding meadow of digital data. The crowd tags tweets and messages they find relevant, and the AI engine learns to recognize the relevance patterns in real time, allowing AIDR to automatically identify future tweets and messages.

As far as we know, AIDR is the only Big Data solution out there that combines crowdsourcing with real-time machine learning for disaster response. Why do we use crowdsourcing to train the AI engine? Because speed is of the essence in disasters. You need a crowd of Digital Humanitarians to quickly tag as many tweets/messages as possible so that AIDR can learn as fast as possible. Incidentally, once you’ve created an algorithm that accurately detects tweets relaying urgent needs after a Typhoon in the Philippines, you can use that same algorithm again when the next Typhoon hits (no crowd needed).
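To make the crowd-trains-the-machine loop concrete, here is a toy sketch of the idea. This is illustrative only, not AIDR’s actual codebase; the classifier is a bare-bones perceptron over bag-of-words, and the tweets and labels are made up:

```python
from collections import defaultdict

class OnlineTweetClassifier:
    """Tiny perceptron over bag-of-words; learns incrementally from crowd labels."""

    def __init__(self):
        self.weights = defaultdict(float)  # one weight per word

    def _features(self, text):
        return text.lower().split()

    def score(self, text):
        # Positive score -> "relevant", non-positive -> "not relevant"
        return sum(self.weights[w] for w in self._features(text))

    def learn(self, text, relevant):
        # Perceptron rule: update weights only when the prediction is wrong
        target = 1 if relevant else -1
        predicted = 1 if self.score(text) > 0 else -1
        if predicted != target:
            for w in self._features(text):
                self.weights[w] += target

clf = OnlineTweetClassifier()

# A stream of crowd-labeled examples (hypothetical data)
crowd_labels = [
    ("bridge collapsed people trapped urgent help", True),
    ("thoughts and prayers everyone", False),
    ("urgent water and food needed now", True),
    ("what a sad day", False),
]
for text, relevant in crowd_labels:
    clf.learn(text, relevant)

# The trained model can now auto-classify a new, unseen message
print(clf.score("urgent help needed") > 0)  # True
```

AIDR itself uses far richer features and streaming infrastructure, but the core loop is the same: crowd labels arrive, the model updates immediately, and subsequent messages are classified automatically.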

What about pictures? After all, a picture is worth a thousand words. Is it possible to combine artificial intelligence with human input to automatically identify pictures that show infrastructure damage? Thanks to recent breakthroughs in computer vision, this is indeed possible. Take Metamind, for example, a new startup I just met with in Silicon Valley. Metamind is barely 6 months old, but the team has already demonstrated that one can indeed automatically identify a whole host of features in pictures by using artificial intelligence and some initial human input. The key is the human input, since this is what trains the algorithms. The more human-generated training data you have, the better your algorithms.

My team and I at QCRI are collaborating with Metamind to create algorithms that can automatically detect infrastructure damage in pictures. The Silicon Valley start-up is convinced that we’ll be able to create highly accurate algorithms if we have enough training data. This is where MicroMappers comes in. We’re already using MicroMappers to create training data for tweets and text messages (which is what AIDR uses to create algorithms). In addition, we’re already using MicroMappers to tag and map pictures of disaster damage. The missing link—in order to turn this tagged data into algorithms—is Metamind. I’m excited about the prospects, so stay tuned for updates as we plan to start teaching Metamind’s AI engine this month.
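One concrete step in turning crowd tags into training data is aggregating the redundant tags (each image is typically shown to several volunteers) into a single label per image. A common approach is simple majority voting; the sketch below uses made-up data and is not a description of MicroMappers’ actual internals:

```python
from collections import Counter

# Each image ID maps to the tags that volunteers assigned it (hypothetical data)
crowd_tags = {
    "img_001": ["severe_damage", "severe_damage", "mild_damage"],
    "img_002": ["no_damage", "no_damage", "no_damage"],
    "img_003": ["mild_damage", "severe_damage", "mild_damage"],
}

# Majority vote per image yields one clean training label
training_labels = {
    image_id: Counter(tags).most_common(1)[0][0]
    for image_id, tags in crowd_tags.items()
}

print(training_labels)
# {'img_001': 'severe_damage', 'img_002': 'no_damage', 'img_003': 'mild_damage'}
```

The resulting (image, label) pairs are exactly the kind of human-generated training data a computer vision engine needs to learn a damage classifier.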


How about videos as a source of Big Data during disasters? I was just in Austin for SXSW 2015 and met up with the CEO of WireWax, a British company that uses—you guessed it—artificial intelligence and human input to automatically detect countless features in videos. Their platform has already been used to automatically find guns and Justin Bieber across millions of videos. Several other groups are also working on feature detection in videos. Colleagues at Carnegie Mellon University (CMU), for example, are working on developing algorithms that can detect evidence of gross human rights violations in YouTube videos coming from Syria. They’re currently applying their algorithms to videos of disaster footage, which we recently shared with them, to determine whether infrastructure damage can be automatically detected.

What about satellite & aerial imagery? Well, the team driving DigitalGlobe’s Tomnod platform has already been using AI powered by crowdsourcing to automatically identify features of interest in satellite (and now aerial) imagery. My team and I are working on similar solutions with MicroMappers, with the hope of creating real-time machine learning solutions for both satellite and aerial imagery. Unlike Tomnod, the MicroMappers platform is free and open source (and also filters social media, photographs, videos & mainstream news).



So there you have it. The future of humanitarian information systems will not be an App Store but an “Alg Store”, i.e., an Algorithm Store providing a growing menu of algorithms that have already been trained to automatically detect certain features in the texts, imagery and videos generated during disasters. These algorithms will also “talk to each other” and integrate other feeds (from real-time sensors and the Internet of Things) thanks to data-fusion solutions that already exist and others that are in the works.

Now, the astute reader may have noted that I omitted audio/speech in my post. I’ll be writing about this in a future post since this one is already long enough.

Remote Sensing Satellites and the Regulation of Violence in Areas of Limited Statehood

In 1985, American intelligence analyst Samuel Loring Morison was charged with espionage after leaking this satellite image of a Soviet shipyard:


And here’s a satellite image of the same shipyard today, free & publicly available via Google Earth:


Thus begins colleague Steven Livingston’s intriguing new study entitled Remote Sensing Satellites and the Regulation of Violence in Areas of Limited Statehood. “These two images illustrate the extraordinary changes in remote sensing that have occurred since 2000, the year the first high-resolution, commercially owned and operated satellite images became available. Images that were once shrouded in state secrecy are now available to anyone possessing a computer and internet connection, sometimes even at no cost.”

Steven “considers the implications of this development for governance in areas of limited statehood.” In other words, he “explores digitally enabled collective action in areas of limited statehood” in order to answer the following question: how might remote sensing “strengthen the efforts to hold those responsible for egregious acts of violence against civil populations to greater account”?

Areas of Limited Statehood

An area of limited statehood is a “place, policy arena, or period of time when the governance capacity of the state is unrealized or faltering.” To this end, “Governance can be defined as initiatives intended to provide public goods and to create and enforce binding rules.” I find it fascinating that Steven treats “governance as an analog to collective action, a term more common to political economics.” Using the lens of limited statehood also “disentangles governance from government (or the state). This is especially important to the discussion of remote sensing satellites and their role in mitigating some of the harsher effects of limited statehood.”

In sum, “rather than a dichotomous variable, as references to failed states imply, state governance capacity is more accurately conceptualized as running along a continuum: from failed states at one end to fully consolidated states at the other.” To this end, “What might appear to be a fully consolidated state according to gross indicators might in fact be a quite limited state according to sectorial, social or even spatial grounds.” This is also true of the Global North. Take natural disasters like Hurricane Katrina, for example. Disasters can, and do, “degrade the governance capacity of a state in the affected region.”

Now, the term “limited governance” does not imply the total lack of governance. “Governance might instead come from alternative sources,” writes Steven, such as NGOs, clans and even gangs. “Most often, governance is provided by a mix of modalities […],” which is “particularly important when considering the role of technology as a sort of governance force multiplier.” Evidently, “Leveraging technology lowers the organizational burden historically associated with the provisioning of public goods. By lowering communication and collaboration costs, information and communication technology facilitates organizing without formal organizations, such as states.” To this end, “Rather than building organizations to achieve a public good, digital technologies are used to organize collective actions intended to provide a public good, even in the absence of the state. It involves a shift from a noun (organizations) to a verb (organizing).”

Remote Sensing Satellites

Some covert satellites are hard to keep out of the public eye. “The low-earth orbit and size of government satellites make them fairly easy to spot, a fact that has created a hobby: satellite tracking.” These hobbyists are able to track government satellites and to calculate their orbits, thus deducing certain features and even the purpose of said satellites. What is less well known, however, are the “capabilities of the sensors or camera carried onboard.”

The three important metrics associated with remote sensing satellites are spatial resolution, spectral resolution and temporal resolution. Please see Steven’s study (pages 12-14) for a detailed description of each. “In short, ‘seeing’ involves much more data than is typically associated in popular imagination with satellite images.” Furthermore, “Spatial resolution alone may not matter as much as other technical characteristics. What is analytically possible with 30-centimeter resolution imagery may not outweigh what can be accomplished with a one-meter spatial resolution satellite with a high temporal resolution.” (Steven also provides an informative summary on the emergence of the commercial remote sensing sector including micro-satellites in pages 14-18).
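The tradeoff Steven describes—that a sharper image taken rarely may yield less monitoring value than a coarser image taken often—can be made concrete with some back-of-the-envelope arithmetic. The sketch below uses hypothetical satellites and made-up numbers (they are not drawn from the study) purely to illustrate how pixel coverage of a target and imaging frequency pull in different directions:

```python
# Illustrative sketch with notional numbers (not from Steven's study):
# why spatial resolution alone may not outweigh temporal resolution.

def pixels_on_target(target_size_m: float, resolution_m: float) -> float:
    """Approximate pixel count covering a square target of the given size."""
    side = target_size_m / resolution_m
    return side * side

def images_per_month(revisit_days: int) -> int:
    """Approximate number of imaging opportunities in a 30-day month."""
    return 30 // revisit_days

# Notional satellite A: very sharp (30 cm), but revisits a site every 5 days.
a_pixels = pixels_on_target(10, 0.3)   # a 10 m structure at 30 cm resolution
a_looks = images_per_month(5)

# Notional satellite B: coarser (1 m), but passes over daily.
b_pixels = pixels_on_target(10, 1.0)   # the same structure at 1 m resolution
b_looks = images_per_month(1)

print(f"A: ~{a_pixels:.0f} pixels per look, {a_looks} looks/month")
print(f"B: ~{b_pixels:.0f} pixels per look, {b_looks} looks/month")
```

Satellite A sees the structure in far more detail per pass, but satellite B observes it five times as often—which, for detecting *changes* such as burned villages or massed vehicles, may matter far more.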

The Regulation of Violence

Can non-state actors use ICTs to “alter the behavior of state actors who have or are using force […] to violate broadly recognized norms”? Clearly one element of this question relates to the possibility of verifying such abuse (although this in no way implies that state behaviors will change as a consequence). “Where the state is too weak [or unwilling] to hold its own security forces to account and to monitor, investigate, and verify the nature of their conduct, nonstate actors fill at least some of the void. Nonstate actors offer a functional equivalency to a consolidated state’s oversight functions.”

Steven highlights a number of projects that seek to use satellite imagery for the above-stated purposes. These include projects by Amnesty International, Human Rights Watch, AAAS and the Harvard Humanitarian Initiative’s (HHI) Satellite Sentinel Project. These projects demonstrate that monitoring & verifying state-sanctioned violence is certainly feasible via satellite imagery. I noted as much here and here back in 2008. And I’ve had several conversations over the years with colleagues at Amnesty, AAAS and the Sentinel Project on the impact of their work on state behavior. There are reasons to be optimistic even if many (most?) of these reasons cannot be made public.

There are also reasons to be concerned as per recent conversations I’ve had with Harvard’s Sentinel Project. The latter readily admits that behavior change in no way implies that said change is a positive one, i.e., the cessation of violence. States that learn of projects using remote sensing satellites to document the mass atrocities they are committing (or are complicit in) may accelerate their slaughter and/or change strategies by taking more covert measures.

There is of course the possibility of positive behavior change; one in which “Transnational Advocacy Networks” are able to “mobilize information strategically to help create new issues and categories and to persuade, pressure, and gain leverage over much more powerful organizations and governments […],” who subsequently change their behaviors to align with international norms and practices. While fraught with the conundrums of “proving” direct causality, the conversations I’ve had with some of the leading advocacy networks engaged in these efforts leave me hopeful.

In conclusion

Satellite imagery—once the sole purview of intelligence agencies—is increasingly accessible to these advocacy networks who can use said imagery to map unregulated state violence. To this end, “States no longer enjoy a monopoly on the synoptic view of earth from space. […] Nonstate actors, from corporations to nongovernmental organizations and community groups now have access to the means of ordering a disorderly world on their own terms.”

The extent to which this loss of monopoly is positively affecting state behavior is unclear (or not fully public). In any event, and while it may seem obvious, transparency in no way implies accountability. Documenting state atrocities does not automatically end or prevent them—a point clearly lost on a number of conflict early warning “experts” who overlooked this issue in the 1990s and 2000s. Prevention is political; and political will is not an icon on the computer screen that one can turn on with a double-click of the mouse.

In addition to the above, Steven and I have also been exploring the question of UAVs within the context of limited statehood and the regulation of violence for a future book we’re hoping to co-author. While NGOs and community groups are in no position to operate or own a satellite (typical price tag is $300 million), they can absolutely own and operate a $500 UAV. Just in the past few months, I’ve had 3 major human rights organizations contact me for guidance on the use of UAVs for human rights monitoring. How all this eventually plays out will hopefully feature in our future book.

Video: Digital Humanitarians & Next Generation Humanitarian Technology

How do international humanitarian organizations make sense of the “Big Data” generated during major disasters? They turn to Digital Humanitarians who craft and leverage ingenious crowdsourcing solutions with trail-blazing insights from artificial intelligence to make sense of vast volumes of social media, satellite imagery and even UAV/aerial imagery. They also use these “Big Data” solutions to verify user-generated content and counter rumors during disasters. The talk below explains how Digital Humanitarians do this and how their next generation humanitarian technologies work.

Many thanks to TTI/Vanguard for having invited me to speak. Lots more on Digital Humanitarians in my new book of the same title.


Videos of my TEDx talks and the talks I’ve given at the White House, PopTech, Where 2.0, National Geographic, etc., are all available here.

Reflections on Digital Humanitarians – The Book

In January 2014, I wrote this blog post announcing my intention to write a book on Digital Humanitarians. Well, it’s done! And launches this week. The book has already been endorsed by scholars at Harvard, MIT, Stanford, Oxford, etc; by practitioners at the United Nations, World Bank, Red Cross, USAID, DfID, etc; and by others including Twitter and National Geographic. These and many more endorsements are available here. Brief summaries of each book chapter are available here; and the short video below provides an excellent overview of the topics covered in the book. Together, these overviews make it clear that this book is directly relevant to many other fields including journalism, human rights, development, activism, business management, computing, ethics, social science, data science, etc. In short, the lessons that digital humanitarians have learned (often the hard way) over the years and the important insights they have gained are directly applicable to fields well beyond the humanitarian space. To this end, Digital Humanitarians is written in a “narrative and conversational style” rather than with dense, technical language.

The story of digital humanitarians is a multifaceted one. Theirs is not just a story about using new technologies to make sense of “Big Data”. For the most part, digital humanitarians are volunteers; volunteers from all walks of life and who occupy every time zone. Many are very tech-savvy and pull all-nighters, but most simply want to make a difference using the few minutes they have with the digital technologies already at their fingertips. Digital humanitarians also include pro-democracy activists who live in countries ruled by tyrants. This story is thus also about hope and humanity; about how technology can extend our humanity during crises. To be sure, if no one cared, if no one felt compelled to help others in need, or to change the status quo, then no one would even bother to use these new, next generation humanitarian technologies in the first place.

I believe this explains why Professor Leysia Palen included the following in her very kind review of my book: “I dare you to read this book and not have both your heart and mind opened.” As I reflected to my editor while in the midst of book writing, an alternative tag line for the title could very well be “How Big Data and Big Hearts are Changing the Face of Humanitarian Response.” It is personally and deeply important to me that the media, would-be volunteers and others also understand that the digital humanitarians story is not a romanticized story about a few “lone heroes” who accomplish the impossible thanks to their superhuman technical powers. There are thousands upon thousands of largely anonymous digital volunteers from all around the world who make this story possible. And while we may not know all their names, we certainly do know about their tireless collective action efforts—they mobilize online from all corners of our Blue Planet to support humanitarian efforts. My book explains how these digital volunteers do this, and yes, how you can too.

Digital humanitarians also include a small (but growing) number of forward-thinking professionals from large and well-known humanitarian organizations. After the tragic, nightmarish earthquake that struck Haiti in January 2010, these seasoned and open-minded humanitarians quickly realized that making sense of “Big Data” during future disasters would require new thinking, new risk-taking, new partnerships, and next generation humanitarian technologies. This story thus includes the invaluable contributions of those change-agents and explains how these few individuals are enabling innovation within the large bureaucracies they work in. The story would be incomplete without these individuals; without their appetite for risk-taking, their strategic understanding of how to change (and at times circumvent) established systems from the inside to keep their organizations relevant in a hyper-connected world. This may explain why Tarun Sarwal of the International Committee of the Red Cross (ICRC) in Geneva included these words (of warning) in his kind review: “For anyone in the Humanitarian sector — ignore this book at your peril.”


Today, this growing, cross-disciplinary community of digital humanitarians is crafting and leveraging ingenious crowdsourcing solutions with trail-blazing insights from advanced computing and artificial intelligence in order to make sense of “Big Data” generated during disasters. In virtually real-time, these new solutions (many still in early prototype stages) enable digital volunteers to make sense of vast volumes of social media, SMS and imagery captured from satellites & UAVs to support relief efforts worldwide.

All of this obviously comes with a great many challenges. I certainly don’t shy away from these in the book (despite my being an eternal optimist : ). As Ethan Zuckerman from MIT very kindly wrote in his review of the book,

“[Patrick] is also a careful scholar who thinks deeply about the limits and potential dangers of data-centric approaches. His book offers both inspiration for those around the world who want to improve our disaster response and a set of fertile challenges to ensure we use data wisely and ethically.”

Digital humanitarians are not perfect, they’re human, they make mistakes, they fail; innovation, after all, takes experimenting, risk-taking and failing. But most importantly, these digital pioneers learn, innovate and over time make fewer mistakes. In sum, this book charts the sudden and spectacular rise of these digital humanitarians and their next generation technologies by sharing their remarkable, real-life stories and the many lessons they have learned and hurdles both cleared & still standing. In essence, this book highlights how their humanity coupled with innovative solutions to “Big Data” is changing humanitarian response forever. Digital Humanitarians will make you think differently about what it means to be humanitarian and will invite you to join the journey online. And that is what it’s ultimately all about—action, responsible & effective action.

Why did I write this book? The main reason may perhaps come as a surprise—one word: hope. In a world seemingly overrun by heart-wrenching headlines and daily reminders from the news and social media about all the ugly and cruel ways that technologies are being used to spy on entire populations, to harass, oppress, target and kill each other, I felt the pressing need to share a different narrative; a narrative about how selfless volunteers from all walks of life, of all ages, nationalities and creeds use digital technologies to help complete strangers on the other side of the planet. I’ve had the privilege of witnessing this digital good-will first hand and repeatedly over the years. This goodwill is what continues to restore my faith in humanity and what gives me hope, even when things are tough and not going well. And so, I wrote Digital Humanitarians first and foremost to share this hope more widely. We each have agency and we can change the world for the better. I’ve seen this and witnessed the impact first hand. So if readers come away with a renewed sense of hope and agency after reading the book, I will have achieved my main objective.

For updates on events, talks, trainings, webinars, etc, please click here. I’ll be organizing a Google Hangout on March 5th for readers who wish to discuss the book in more depth and/or follow up with any questions or ideas. If you’d like additional information on this and future Hangouts, please click on the previous link. If you wish to join ongoing conversations online, feel free to do so with the FB & Twitter hashtag #DigitalJedis. If you’d like to set up a book talk and/or co-organize a training at your organization, university, school, etc., then do get in touch. If you wish to give a talk on the book yourself, then let me know and I’d be happy to share my slides. And if you come across interesting examples of digital humanitarians in action, then please consider sharing these with other readers and myself by using the #DigitalJedis hashtag and/or by sending me an email so I can include your observation in my monthly newsletter and future blog posts. I also welcome guest blog posts on iRevolutions.

Naturally, this book would never have existed were it not for digital humanitarians volunteering their time—day and night—during major disasters across the world. This book would also not have seen the light of day without the thoughtful guidance and support I received from these mentors, colleagues, friends and my family. I am thus deeply and profoundly grateful for their spirit, inspiration and friendship. Onwards!