Can Massively Multiplayer Online Games also be Next Generation Humanitarian Technologies?


My colleague Peter Mosur and I launched the Internet Response League (IRL) at QCRI a while back to actively explore the intersection of massively multiplayer online games & humanitarian response. IRL is also featured in my new book, Digital Humanitarians, along with many other innovative ideas & technologies. Shortly after the book came out, Peter and I had the pleasure of exploring a collaboration with the team at Massive Multiplayer Online Science (MMOS) and CCP Games—makers of the popular game EVE Online.

MMOS is an awesome group that aims to enable online gamers to contribute to scientific research while playing video games. Our colleagues at MMOS kindly reached out to us earlier this year because they're also keen to support humanitarian efforts. They are thus bringing IRL on board to help them explore the use of online games for humanitarian projects.

CCP Games has already been mentioned on the IRL blog here. Their gamers managed to raise an impressive $190,890 for the Icelandic Red Cross in response to Typhoon Haiyan/Yolanda with their PLEX for Good initiative. This is on top of the $100,000 that the company has raised with the program for various disasters in Japan, Haiti, Pakistan, and the United States.

CCP Games’ flagship title EVE Online passed 500,000 subscribers in 2013. The game is unique among MMORPGs: rather than spreading its player base across many different servers, EVE Online keeps all players on one large server. Named “Tranquility”, this single server currently averages 25,000 players at any given time, with peaks of over 38,000 [1]. This equates to an average of 600,000 hours of human time spent playing EVE Online every day! The potential good to come out of a humanitarian partnership would be immensely valuable to the world!
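The 600,000-hour figure follows directly from the average concurrent player count; here is a quick sketch of the arithmetic, using only the figures from the paragraph above:

```python
# Back-of-the-envelope check of EVE Online's daily human play time.
avg_concurrent_players = 25_000  # average players online at any given time
hours_per_day = 24

human_hours_per_day = avg_concurrent_players * hours_per_day
print(human_hours_per_day)  # 600000 hours of play time per day
```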

So we’re currently exploring with the team at MMOS possible ways to process humanitarian data within EVE’s gaming environment. We’ll write another post soon detailing the unique challenges we’re facing in terms of seamlessly processing digital humanitarian tasks within EVE Online. This will require a lot of creativity to pull off and success is by no means guaranteed (just like life and online games). In sum, our humanitarian tasks must in no way disrupt the EVE Online experience; they basically need to be “invisible” to the gamer (besides an initial opt-in).

See the video below for an in-depth overview of the type of work that MMOS and CCP Games envision incorporating into EVE Online. The video was screened at the EVE Online Fanfest last month and also features a message from the Internet Response League at the 40:36 mark!

This blog post was co-authored with Peter Mosur.

Artificial Intelligence for Monitoring Elections (AIME)

AIME logo

I published a blog post with the same title a good while back. Here’s what I wrote at the time:

Citizen-based, crowdsourced election observation initiatives are on the rise. Leading election monitoring organizations are also looking to leverage citizen-based reporting to complement their own professional election monitoring efforts. Meanwhile, the information revolution continues apace, with the number of new mobile phone subscriptions up by over 1 billion in the past 36 months alone. The volume of election-related reports generated by “the crowd” is thus expected to grow significantly in the coming years. But international, national and local election monitoring organizations are completely unprepared to deal with the rise of Big (Election) Data.

I thus introduced a new project to “develop a free and open source platform to automatically filter relevant election reports from the crowd.” I’m pleased to report that my team and I at QCRI have just tested AIME during an actual election for the very first time—the 2015 Nigerian Elections. My QCRI Research Assistant Peter Mosur (co-author of this blog post) collaborated directly with Oludotun Babayemi from Clonehouse Nigeria and Chuks Ojidoh from the Community Life Project & Reclaim Naija to deploy and test the AIME platform.

AIME is a free and open source (experimental) solution that combines crowdsourcing with Artificial Intelligence to automatically identify tweets of interest during major elections. As organizations engaged in election monitoring well know, there can be a lot of chatter on social media as people rally behind their chosen candidates, announce this to the world, ask friends and family who they will be voting for, and update others once they have voted, all while posting about election-related incidents they may have witnessed. This can make it rather challenging to find reports relevant to election monitoring groups.


Election monitors typically track instances of violence, election rigging, and voter issues. These incidents are monitored because they reveal problems that arise with the elections. Election monitoring initiatives such as Reclaim Naija & Uzabe also monitor several other types of incidents, but for the purposes of testing the AIME platform, we selected the three types of events mentioned above. In order to automatically identify tweets related to these events, one must first provide AIME with example tweets. (Of course, if there is no Twitter traffic to begin with, then there won’t be much need for AIME, which is precisely why we developed an SMS extension that can be used with AIME.)

So where does the crowdsourcing come in? Users of AIME can ask the crowd to tag tweets related to election violence, rigging and voter issues by simply tagging tweets posted to the AIME platform with the appropriate event type. (Several quality control mechanisms are built in to ensure data quality. Also, one does not need to use crowdsourcing to tag the tweets; this can be done internally as well or instead.) What AIME does next is use a technique from Artificial Intelligence (AI) called statistical machine learning to understand patterns in the human-tagged tweets. In other words, it begins to recognize which tweets belong in which category type—violence, rigging and voter issues. AIME will then auto-classify new tweets that are related to these categories (and can auto-classify around 2 million tweets or text messages per minute).
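As a rough illustration of what statistical machine learning on human-tagged tweets looks like, here is a toy Naive Bayes classifier in plain Python. This is a minimal sketch for intuition only, not AIME’s actual code or algorithm, and the example tweets are hypothetical:

```python
from collections import Counter, defaultdict
import math

def tokenize(text):
    return text.lower().split()

class TinyNaiveBayes:
    """Toy multinomial Naive Bayes: learns word patterns from tagged examples."""
    def fit(self, tagged_tweets):
        self.word_counts = defaultdict(Counter)  # category -> word frequencies
        self.cat_counts = Counter()              # category -> number of tweets
        self.vocab = set()
        for text, category in tagged_tweets:
            words = tokenize(text)
            self.cat_counts[category] += 1
            self.word_counts[category].update(words)
            self.vocab.update(words)
        return self

    def predict(self, text):
        total = sum(self.cat_counts.values())
        best, best_score = None, float("-inf")
        for cat in self.cat_counts:
            # log prior + log likelihood with add-one smoothing
            score = math.log(self.cat_counts[cat] / total)
            n_words = sum(self.word_counts[cat].values())
            for w in tokenize(text):
                score += math.log((self.word_counts[cat][w] + 1) /
                                  (n_words + len(self.vocab)))
            if score > best_score:
                best, best_score = cat, score
        return best

# Hypothetical human-tagged examples (the step AIME crowdsources):
training = [
    ("riots and attacks near the polling station", "violence"),
    ("armed men threatening voters", "violence"),
    ("ballot stuffing reported at the station", "rigging"),
    ("tampered ballots and multiple voting seen", "rigging"),
    ("long queues, polling station opened late", "voting issues"),
    ("card readers not working, people unable to vote", "voting issues"),
]
clf = TinyNaiveBayes().fit(training)
print(clf.predict("attacks reported near voters"))  # expected: violence
```

Real systems use far more training data and more sophisticated features, but the core loop is the same: humans tag, the machine learns the word patterns, and new tweets are auto-classified.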


Before creating our automatic classifier for the Nigerian Elections, we first needed to collect examples of tweets related to election violence, rigging and voter issues in order to teach AIME. Oludotun Babayemi and Chuks Ojidoh kindly provided the expert local knowledge needed to identify the keywords we should be following on Twitter (using AIME). They graciously gave us many different keywords to use as well as a list of trusted Twitter accounts to follow for election-related messages. (Due to difficulties with AIME, we were not able to use the trusted accounts. In addition, many of the suggested keywords were unusable, since words like “aggressive”, “detonate”, and “security” would have resulted in a large number of false positives.)

Here is the full list of keywords used by AIME:

Nigeria elections, nigeriadecides, Nigeria decides, INEC, GEJ, Change Nigeria, Nigeria Transformation, President Jonathan, Goodluck Jonathan, Sai Buhari, saibuhari, All progressives congress, Osibanjo, Sambo, Peoples Democratic Party, boko haram, boko, area boys, nigeria2015, votenotfight, GEJwinsit, iwillvoteapc, gmb2015, revoda, thingsmustchange, and march4buhari

Out of this list, “NigeriaDecides” was by far the most popular keyword used during the elections. It accounted for over 28,000 tweets in a batch of 100,000. During the week leading up to the elections, AIME collected roughly 800,000 tweets. Over the course of the elections and the few days following, the total number of collected tweets jumped to well over 4 million.

We sampled just a handful of these tweets and manually tagged those related to violence, rigging and other voting issues using AIME. “Violence” was described as “threats, riots, arming, attacks, rumors, lack of security, vandalism, etc.” while “Election Rigging” was described as “Ballot stuffing, issuing invalid ballot papers, voter impersonation, multiple voting, ballot boxes destroyed after counting, bribery, lack of transparency, tampered ballots etc.” Lastly, “Voting Issues” was defined as “Polling station logistics issues, technical issues, people unable to vote, media unable to enter, insufficient staff, lack of voter assistance, inadequate voting materials, underage voters, etc.”

Any tweet that did not fall into these three categories was tagged as “Other” or “Not Related”. Our Election Classifiers were trained with a total of 571 human-tagged tweets, which enabled AIME to automatically classify well over 1 million tweets (1,263,654 to be precise). The results in the screenshot below show how accurate AIME was at auto-classifying tweets based on the different event types defined earlier. AUC is what captures the “overall accuracy” of AIME’s classifiers.


AIME was rather good at correctly tagging tweets related to “Voting Issues” (98% accuracy) but performed very poorly at tagging tweets related to “Election Rigging” (0%). This is not AIME’s fault : ) since it only had 8 examples to learn from. As for “Violence”, the accuracy score was 47%, which is actually surprising given that AIME only had 14 human-tagged examples to learn from. Lastly, AIME did fairly well at auto-classifying unrelated tweets (accuracy of 86%).
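For readers curious about the AUC metric mentioned above: it can be computed directly from a classifier’s confidence scores as the probability that a randomly chosen positive example is ranked above a randomly chosen negative one. A minimal sketch with illustrative data (not AIME’s actual scores):

```python
def auc(labels, scores):
    """Rank-based AUC: fraction of (positive, negative) pairs ranked correctly.
    Ties count as half-correct."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    correct = sum(1.0 if p > n else 0.5 if p == n else 0.0
                  for p in pos for n in neg)
    return correct / (len(pos) * len(neg))

# Illustrative: 1 = tweet truly about voting issues, 0 = not.
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2, 0.1]  # classifier confidence
print(round(auc(labels, scores), 2))  # 0.92
```

An AUC of 0.5 means the classifier is no better than chance; 1.0 means it ranks every relevant tweet above every irrelevant one.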

Conclusion: this was the first time we tested AIME during an actual election and we’ve learned a lot in the process. The results are not perfect but enough to press on and experiment further with the AIME platform. If you’d like to test AIME yourself (and if you fully recognize that the tool is experimental and still under development, hence not perfect), then feel free to get in touch with me here. We have 2 slots open for testing. In the meantime, big thanks to my RA Peter for spearheading both this deployment and the subsequent research.

Crowdsourcing Point Clouds for Disaster Response

Point Clouds, or 3D models derived from high resolution aerial imagery, are in fact nothing new. Several software platforms already exist to reconstruct a series of 2D aerial images into fully fledged 3D-fly-through models. Check out these very neat examples from my colleagues at Pix4D and SenseFly:

What do a castle, Jesus and a mountain have to do with humanitarian action? As noted in my previous blog post, there’s only so much disaster damage one can glean from nadir (that is, vertical) imagery and oblique imagery. Let’s suppose that the nadir image below was taken by an orbiting satellite or flying UAV right after an earthquake, for example. How can you possibly assess disaster damage from this one picture alone? Even if you had nadir imagery of these houses from before the earthquake, your ability to assess structural damage would be limited.


This explains why we also captured oblique imagery for the World Bank’s UAV response to Cyclone Pam in Vanuatu (more here on that humanitarian mission). But even with oblique photographs, you’re stuck with one fixed perspective. Who knows what the houses below look like from the other side; your UAV may have simply captured this side only. And even if you had pictures from all possible angles, you’d literally have hundreds of pictures to leaf through and make sense of.


What’s that famous quote by Henry Ford again? “If I had asked people what they wanted, they would have said faster horses.” We don’t need faster UAVs, we simply need to turn what we already have into Point Clouds, which I’m indeed hoping to do with the aerial imagery from Vanuatu, by the way. The Point Cloud below was made solely from individual 2D aerial images.

It isn’t perfect, but we don’t need perfection in disaster response, we need good enough. So when we as humanitarian UAV teams go into the next post-disaster deployment and ask humanitarians what they need, they may say “faster horses” because they’re not (yet) familiar with what’s really possible with the imagery processing solutions available today. That obviously doesn’t mean that we should ignore their information needs. It simply means we should seek to expand their imaginations vis-à-vis the art of the possible with UAVs and aerial imagery. Here is a 3D model of a village in Vanuatu constructed using 2D aerial imagery:

Now, the title of my blog post does lead with the word crowdsourcing. Why? For several reasons. First, it takes some decent computing power (and time) to create these Point Clouds. But if the underlying 2D imagery is made available to hundreds of Digital Humanitarians, we could use this distributed computing power to rapidly crowdsource the creation of 3D models. Second, each model can then be pushed to MicroMappers for crowdsourced analysis. Why? Because having a dozen eyes scrutinizing one Point Cloud is better than two. Note that for quality control purposes, each Point Cloud would be shown to 5 different Digital Humanitarian volunteers; we already do this with MicroMappers for tweets, pictures, videos, satellite images and of course aerial images as well. Each digital volunteer would then trace areas in the Point Cloud where they spot damage. If the traces from the different volunteers match, then bingo, there’s likely damage at those x, y and z coordinates. Here’s the idea:
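The “traces match” step can be made precise with a standard overlap measure such as Intersection-over-Union (IoU). Below is a toy 2D sketch of the idea, using axis-aligned boxes to stand in for volunteer traces; the threshold and agreement rule are illustrative assumptions, not MicroMappers’ actual logic:

```python
def iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def traces_agree(traces, threshold=0.5, min_volunteers=3):
    """Damage is 'confirmed' if enough volunteer traces overlap each other."""
    matches = sum(1 for i, a in enumerate(traces)
                  for b in traces[i + 1:] if iou(a, b) >= threshold)
    # Require every pair among min_volunteers to overlap (simplistic rule).
    needed = min_volunteers * (min_volunteers - 1) // 2
    return matches >= needed

volunteer_traces = [(10, 10, 50, 50), (12, 11, 52, 49), (9, 12, 48, 51)]
print(traces_agree(volunteer_traces))  # three overlapping traces -> True
```

In a real Point Cloud the traces would be 3D regions rather than 2D boxes, but the quality-control principle is identical: independent agreement between volunteers is what turns individual traces into a credible damage report.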

We could easily use iPads to turn the process into a Virtual Reality experience for digital volunteers. In other words, you’d be able to move around and above the actual Point Cloud by simply changing the position of your iPad accordingly. This technology already exists and has for several years now. Tracing features in the 3D models that appear to be damaged would be as simple as using your finger to outline the damage on your iPad.

What about the inevitable challenge of Big Data? What if thousands of Point Clouds are generated during a disaster? Sure, we could try to scale our crowdsourcing efforts by recruiting more Digital Humanitarian volunteers, but wouldn’t that just be asking for a “faster horse”? Just like we’ve already done with MicroMappers for tweets and text messages, we would seek to combine crowdsourcing and Artificial Intelligence to automatically detect features of interest in 3D models. This sounds to me like an excellent research project for a research institute engaged in advanced computing R&D.

I would love to see the results of this applied research integrated directly within MicroMappers. This would allow us to combine the results of social media analysis via MicroMappers (e.g., tweets, Instagram pictures, YouTube videos) directly with the results of satellite imagery analysis as well as 2D and 3D aerial imagery analysis.

Anyone interested in working on this?

How Digital Jedis Are Springing to Action In Response To Cyclone Pam

Digital Humanitarians sprang to action just hours after the Category 5 Cyclone collided with Vanuatu’s many islands. This first deployment focused on rapidly assessing the damage by analyzing multimedia content posted on social media and in the mainstream news. This request came directly from the United Nations (OCHA), which activated the Digital Humanitarian Network (DHN) to carry out the rapid damage assessment. So the Standby Task Force (SBTF), a founding member of the DHN, used QCRI′s MicroMappers platform to produce a digital, interactive Crisis Map of some 1,000+ geo-tagged pictures of disaster damage (screenshot below).


Within days of Cyclone Pam making landfall, the World Bank (WB) activated the Humanitarian UAV Network (UAViators) to quickly deploy UAV pilots to the affected islands. UAViators has access to a global network of 700+ professional UAV pilots in some 70+ countries worldwide. The WB identified two UAV teams from the Humanitarian UAV Network and deployed them to capture very high-resolution aerial photographs of the damage to support the Government’s post-disaster damage assessment efforts. Pictures from these early UAV missions are available here. Aerial images & videos of the disaster damage were also posted to the UAViators Crowdsourced Crisis Map.

Last week, the World Bank activated the DHN (for the first time ever) to help analyze the many, many gigabytes of aerial imagery from Vanuatu. So Digital Jedis from the DHN are now using Humanitarian OpenStreetMap (HOT) and MicroMappers (MM) to crowdsource the search for partially damaged and fully destroyed houses in the aerial imagery. The HOT team is specifically looking at the “nadir imagery” captured by the UAVs while MM is exclusively reviewing the “oblique imagery”. More specifically, digital volunteers are using MM to trace destroyed houses in red and partially damaged houses in orange, and are using blue to denote houses that appear to have little to no damage. Below is an early screenshot of the Aerial Crisis Map for the island of Efate. The live Crisis Map is available here.


Clicking on one of these markers will open up the high resolution aerial pictures taken at that location. Here, two houses are traced in blue (little to no damage) and two on the upper left are traced in orange (partial damage expected).


The cameras on the UAVs captured the aerial imagery in very high resolution, as you can see from the close up below. You’ll note two traces for the house. These two traces were done by two independent volunteers (for the purposes of quality control). In fact, each aerial image is shown to at least 3 different Digital Jedis.


Once this MicroMappers deployment is over, we’ll be using the resulting traces to create automated feature detection algorithms, just like we did here for the MicroMappers Namibia deployment. This approach, combining crowdsourcing with Artificial Intelligence (AI), is explored in more detail here vis-à-vis disaster response. The purpose of this hybrid human-machine computing solution is to accelerate (semi-automate) future damage assessment efforts.

Meanwhile, back in Vanuatu, the HOT team has already carried out a tentative, preliminary analysis of the damage based on the aerial imagery provided. They are also updating their OSM maps of the affected islands thanks to this imagery. Below is an initial damage assessment carried out by HOT for demonstration purposes only. Please visit their deployment page on the Vanuatu response for more information.


So what’s next? Combining both the nadir and oblique imagery to interpret disaster damage is ultimately what is needed, so we’re actually hoping to make this happen (today) by displaying the nadir imagery directly within the Aerial Crisis Map produced by MicroMappers. (Many thanks to the MapBox team for their assistance on this.) We hope this integration will help HOT and our World Bank partners better assess the disaster damage. This is the first time that we as a group are doing anything like this, so there’s obviously lots of learning going on, which should improve future deployments. Ultimately, we’ll need to create 3D models (point clouds) of disaster-affected areas (already easy to do with high-resolution aerial imagery) and then simply use MicroMappers to crowdsource the analysis of these 3D models.

And here’s a 3D model of a village in Vanuatu constructed using 2D aerial photos taken by UAV:

For now, though, Digital Jedis will continue working very closely with the World Bank to ensure that the latter have the results they need in the right format to deliver a comprehensive damage assessment to the Government of Vanuatu by the end of the week. In the meantime, if you’re interested in learning more about digital humanitarian action, then please check out my new book, which features UAViators, HOT, MM and lots more.

Pictures: Humanitarian UAV Mission to Vanuatu in Response to Cyclone Pam

Aéroport de Port Vila – Bauerfield International Airport. As we landed, thousands of uprooted trees could be seen in almost every direction.


Massive roots were not enough to save these trees from Cyclone Pam. The devastation reminds us how powerful nature is.




After getting clearance from the Australian Defense Force (ADF), we pack up our UAVs and head over to La Lagune for initial tests. Close collaboration with the military is an absolute must for humanitarian UAV missions. UAVs cannot operate in Restricted Operations Zones without appropriate clearance.


We’re in Vanuatu by invitation of the Government’s National Disaster Risk Management Office (NDMO). So we’re working very closely with our hosts to assess disaster damage and resulting needs. The government and donors need the damage quantified to assess how much funding is necessary for the recovery efforts; and where geographically that funding should be targeted.


Ceci n’est pas un drone; what we found at La Lagune, where the ADF has set up camp. At 2200 every night we send the ADF our flight plan clearance requests for the following day. For obvious safety reasons, we never deviate from these plans after they’ve been approved.


Unpacking and putting together the hexacopters can take a long time. The professional and certified UAV team from New Zealand (X-Craft) follows strict operational checklists to ensure safety and security. We also have a professional and certified team from Australia, Heliwest, which will be flying quadcopters. The UAV team from SPC is also joining our efforts. I’m proud to report that both the Australian & New Zealand teams were recruited directly from the pilot roster of the Humanitarian UAV Network.






The payload (camera) attached to our hexacopters; not exactly a GoPro. We also have other sensors for thermal imaging, etc.


Programming the test flights. Here’s a quick video intro on how to program UAVs for autonomous flights.


Night falls fast in Vanuatu…



… So our helpful drivers kindly light up our work area.


After flawless test flights, we’re back at “HQ” to program the flight paths for tomorrow morning’s humanitarian UAV missions. The priority survey areas tend to change on a daily basis as the government gets more information on which outlying islands have been hardest hit. Our first mission will focus on an area comprised of informal settlements.



Dawn starts to break at 0500. We haven’t gotten much sleep.


At 0600, we arrive at the designated meeting point, the Beach Bar. This will be our base of operations for this morning’s mission.



The flight plans for the hexacopters are ready to go. We have clearance from Air Traffic Control (ATC) to fly until 0830 as manned aircraft start operating extensively after 0900. So in complex airspaces like this one in Vanuatu’s Port Vila, we only fly very early in the morning and after 1700 in the evening. We have ATC’s direct phone number and are in touch with the tower at all times.


Could this be the one and only SXSW 2015 bag in Vanuatu?


All our multirotor UAVs have been tested once again and are now ready to go. The government has already communicated to nearby villages that UAVs will be operating between 0630-0830. We aim to collect aerial imagery at a resolution of 4cm-6cm throughout our missions.
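The 4cm-6cm figure is the ground sampling distance (GSD): the real-world distance covered by each image pixel, which for a fixed camera scales linearly with flight altitude. Here is a rough calculator using the standard GSD approximation; the camera parameters below are illustrative assumptions, not the mission’s actual sensor specs:

```python
def ground_sampling_distance_cm(altitude_m, sensor_width_mm,
                                focal_length_mm, image_width_px):
    """GSD (cm/pixel) = (altitude * sensor width) / (focal length * image width)."""
    return (altitude_m * 100 * sensor_width_mm) / (focal_length_mm * image_width_px)

# Hypothetical camera: 13.2 mm wide sensor, 8.8 mm lens, 5472 px wide image.
for altitude in (150, 200):
    gsd = ground_sampling_distance_cm(altitude, 13.2, 8.8, 5472)
    print(f"{altitude} m -> {gsd:.1f} cm/pixel")
```

With this hypothetical camera, flying at roughly 150-200 m yields imagery in the 4-6 cm/pixel range, which is why survey altitude is such a key mission-planning parameter.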



An old basketball court; perfect for take-off & landing.


And of course, when we’re finally ready to fly, it starts to pour. Other challenges include an ash cloud from a nearby volcano. We’ve also been told that kids here are pros with slingshots (which is one reason why the government informed local villagers of the mission; i.e., to request that kids not use the UAVs for target practice).


After some delays, we are airborne at last.


Operating the UAViators DJI Phantom…


… Which I’m using purely for documentary purposes. In coming days, we’ll be providing our government partners with a hands-on introduction on how to operate Phantom IIs. Building local capacity is key, which is why this action item is core to the Humanitarian UAV Network’s Code of Conduct.




Can you spot the hexacopter? While there’s only one in the picture below, we actually have two in the air at different altitudes, which we are operating by Extended Line of Sight and First Person View as a backup.


More aerial shots I took using the Phantom (not for damage assessment; simply for documentary purposes).


Can you spot the basketball court?


Large clouds bring back the rain; visibility is reduced. We have to suspend our flights; will try again after 1700.




Meanwhile, my Phantom’s GoPro snaps this close up picture on landing.


Stay tuned for updates and in particular the very high resolution aerial imagery that we’ll be posting to MapBox in coming days; along with initial analysis carried out by multiple partners including Humanitarian OpenStreetMap (HOT) and QCRI‘s MicroMappers. Many thanks to MapBox for supporting our efforts. We will also be overlaying the aerial imagery analysis over this MicroMappers crisis map of ground-based pictures of disaster damage in order to triangulate the damage assessment results. Check out the latest update here.

In the meantime, more information on this Humanitarian UAV Mission to Vanuatu–spearheaded by the World Bank in very close collaboration with the Government and SPC–can be found on the Humanitarian UAV Network (UAViators) Ops page here. UAViators is an initiative I launched at QCRI following Typhoon Haiyan in the Philippines in 2013. More on UAViators and the use of humanitarian UAVs in my new book Digital Humanitarians.

Important: this blog post is a personal update written in my personal capacity; none of the above is in any way shape or form a formal communique or press release by any of the partners. Official updates will be provided by the Government of Vanuatu and World Bank directly. Please contact me here for official media requests; kindly note that my responses will need to be cleared by the Government & Bank first.

Artificial Intelligence Powered by Crowdsourcing: The Future of Big Data and Humanitarian Action

There’s no point spewing stunning statistics like this recent one from The Economist, which states that 80% of adults will have access to smartphones before 2020. The volume, velocity and variety of digital data will continue to skyrocket. To paraphrase Douglas Adams, “Big Data is big. You just won’t believe how vastly, hugely, mind-bogglingly big it is.”


And so, traditional humanitarian organizations have a choice when it comes to battling Big Data. They can either continue business as usual (and lose) or get with the program and adopt Big Data solutions like everyone else. The same goes for Digital Humanitarians. As noted in my new book of the same title, those Digital Humanitarians who cling to crowdsourcing alone as their pièce de résistance will inevitably become the ivy-laden battlefield monuments of 2020.


Big Data comprises a variety of data types such as text, imagery and video. Examples of text-based data include mainstream news articles, tweets and WhatsApp messages. Imagery includes Instagram, professional photographs that accompany news articles, satellite imagery and, increasingly, aerial imagery as well (captured by UAVs). Television channels, Meerkat and YouTube broadcast videos. Finding relevant, credible and actionable pieces of text, imagery and video in the Big Data generated during major disasters is like looking for a needle in a meadow (haystacks are ridiculously small datasets by comparison).

Humanitarian organizations, like many others in different sectors, often find comfort in the notion that their problems are unique. Thankfully, this is rarely true. Not only is the Big Data challenge not unique to the humanitarian space, real solutions to the data deluge have already been developed by groups that humanitarian professionals at worst don’t know exist and at best rarely speak with. These groups are already using Artificial Intelligence (AI) and some form of human input to make sense of Big Data.

Data digital flow

How does it work? And why do you still need some human input if AI is already in play? The human input, which can come via crowdsourcing or from a few individuals, is needed to train the AI engine, which uses a technique from AI called machine learning to learn from the human(s). Take AIDR, for example. This experimental solution, which stands for Artificial Intelligence for Disaster Response, uses AI powered by crowdsourcing to automatically identify relevant tweets and text messages in an exploding meadow of digital data. The crowd tags tweets and messages they find relevant, and the AI engine learns to recognize the relevance patterns in real time, allowing AIDR to automatically identify future tweets and messages.

As far as we know, AIDR is the only Big Data solution out there that combines crowdsourcing with real-time machine learning for disaster response. Why do we use crowdsourcing to train the AI engine? Because speed is of the essence in disasters. You need a crowd of Digital Humanitarians to quickly tag as many tweets/messages as possible so that AIDR can learn as fast as possible. Incidentally, once you’ve created an algorithm that accurately detects tweets relaying urgent needs after a Typhoon in the Philippines, you can use that same algorithm again when the next Typhoon hits (no crowd needed).
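Conceptually, this real-time learning loop means the classifier updates incrementally with every crowd-tagged tweet rather than retraining from scratch. Here is a toy Python sketch of that streaming update; it is not AIDR’s actual implementation, and the example tweets are hypothetical:

```python
from collections import Counter, defaultdict
import math

class StreamingClassifier:
    """Toy online Naive Bayes: one cheap update per crowd-tagged tweet."""
    def __init__(self):
        self.word_counts = defaultdict(Counter)
        self.cat_counts = Counter()
        self.vocab = set()

    def update(self, text, category):
        # Incremental: only counters change, so each update is O(tweet length).
        words = text.lower().split()
        self.cat_counts[category] += 1
        self.word_counts[category].update(words)
        self.vocab.update(words)

    def predict(self, text):
        total = sum(self.cat_counts.values())
        def score(cat):
            s = math.log(self.cat_counts[cat] / total)
            n = sum(self.word_counts[cat].values())
            for w in text.lower().split():
                s += math.log((self.word_counts[cat][w] + 1) / (n + len(self.vocab)))
            return s
        return max(self.cat_counts, key=score)

clf = StreamingClassifier()
# Tweets trickle in from the crowd; the model improves with every tag.
stream = [
    ("urgent need for clean water and food", "relevant"),
    ("roads blocked, people trapped under rubble", "relevant"),
    ("thoughts and prayers to everyone", "not relevant"),
    ("what a beautiful sunset today", "not relevant"),
]
for text, tag in stream:
    clf.update(text, tag)
print(clf.predict("people need food and water urgently"))  # expected: relevant
```

Once trained, the model’s counts can simply be serialized and reloaded when the next disaster of the same type strikes, which is the “no crowd needed” reuse described above.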

What about pictures? After all, pictures are worth a thousand words. Is it possible to combine artificial intelligence with human input to automatically identify pictures that show infrastructure damage? Thanks to recent breakthroughs in computer vision, this is indeed possible. Take Metamind, for example, a new startup I just met with in Silicon Valley. Metamind is barely 6 months old, but the team has already demonstrated that one can indeed automatically identify a whole host of features in pictures by using artificial intelligence and some initial human input. The key is the human input, since this is what trains the algorithms. The more human-generated training data you have, the better your algorithms.

My team and I at QCRI are collaborating with Metamind to create algorithms that can automatically detect infrastructure damage in pictures. The Silicon Valley start-up is convinced that we’ll be able to create highly accurate algorithms if we have enough training data. This is where MicroMappers comes in. We’re already using MicroMappers to create training data for tweets and text messages (which is what AIDR uses to create algorithms). In addition, we’re already using MicroMappers to tag and map pictures of disaster damage. The missing link—in order to turn this tagged data into algorithms—is Metamind. I’m excited about the prospects, so stay tuned for updates as we plan to start teaching Metamind’s AI engine this month.


How about videos as a source of Big Data during disasters? I was just in Austin for SXSW 2015 and met up with the CEO of WireWax, a British company that uses—you guessed it—artificial intelligence and human input to automatically detect countless features in videos. Their platform has already been used to automatically find guns and Justin Bieber across millions of videos. Several other groups are also working on feature detection in videos. Colleagues at Carnegie Mellon University (CMU), for example, are working on developing algorithms that can detect evidence of gross human rights violations in YouTube videos coming from Syria. They’re currently applying their algorithms to videos of disaster footage, which we recently shared with them, to determine whether infrastructure damage can be automatically detected.

What about satellite & aerial imagery? Well the team driving DigitalGlobe’s Tomnod platform have already been using AI powered by crowdsourcing to automatically identify features of interest in satellite (and now aerial) imagery. My team and I are working on similar solutions with MicroMappers, with the hope of creating real-time machine learning solutions for both satellite and aerial imagery. Unlike Tomnod, the MicroMappers platform is free and open source (and also filters social media, photographs, videos & mainstream news).

So there you have it. The future of humanitarian information systems will not be an App Store but an “Alg Store”, i.e., an Algorithm Store providing a growing menu of algorithms that have already been trained to automatically detect certain features in the texts, imagery and videos that get generated during disasters. These algorithms will also “talk to each other” and integrate other feeds (from real-time sensors, the Internet of Things) thanks to data-fusion solutions that already exist and others that are in the works.

Now, the astute reader may have noted that I omitted audio/speech in my post. I’ll be writing about this in a future post since this one is already long enough.

What to Know When Using Humanitarian UAVs for Transportation

UAVs can support humanitarian action in a variety of ways. Perhaps the most common and well-documented use-case is data collection. There are several other use-cases, however, such as payload transportation, which I have blogged about here, here and here. I had the opportunity to learn more about the logistics and operations of payload UAVs while advising a well-known public health NGO in Liberia as well as an international organization in Tanzania. This advising led to conversations with some of the leading experts in the UAV-for-transportation space, such as Google Project Wing, Matternet and Vayu.

UAV payload unit

Below are just some of the questions you’ll want to ask when you’re considering the use of UAVs for the transportation of small payloads. Of course, the UAV may not be the most appropriate technology for the problem you’re looking to solve. So naturally, the very first step is to carry out a comparative cost-benefit analysis with multiple technologies. The map below, kindly shared by Matternet, is from a project they’re working on with Médecins Sans Frontières (MSF) in Papua New Guinea.

Credit: Matternet

Why does it take some 4 hours to drive 60km (40 miles) compared to 55 minutes by UAV? The pictures below (also shared by Matternet) speak for themselves.

Credit: Matternet

Credit: Matternet

Credit: Matternet

Any use of UAVs in humanitarian contexts should follow the Code of Conduct proposed by the Humanitarian UAV Network (UAViators), which was recently endorsed by the UN. Some of the (somewhat obvious) questions you’ll want to bear in mind as you carry out your cost-benefit analysis thus include:

  • What is the maximum, minimum and average distance that the UAV needs to fly?
  • How frequently do the UAVs need to make the deliveries?
  • How much mass needs to be moved per given amount of time?
  • What is the mass of individual packages (and can these be split into smaller parcels if need be)?
  • Do the packages contain a mechanism for cold transport or would the UAV need to provide refrigeration (assuming this is needed)?
  • What do the take-off and landing spaces look like? How much area, type of ground, size of trees or other obstacles nearby?
  • What does the topography between the take-off and landing sites look like? Tall trees, mountains, or other obstructions?
  • Regarding batteries, is there easy access to electricity in the areas where the UAVs will be landing?
  • Is there any form of cell phone coverage in the landing areas?
  • What is the overall fixed and variable cost of operating the payload UAVs compared to other solutions?
  • What impact (both positive and negative) will the introduction of the payload UAV have on the local economy?
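The answers to these questions feed directly into the cost-benefit analysis mentioned above. A minimal sketch of the per-delivery cost comparison follows; all the numbers are placeholders I made up for illustration, not figures from any actual project:

```python
def cost_per_delivery(fixed_cost, lifetime_deliveries,
                      variable_cost_per_km, distance_km):
    """Rough per-delivery cost: amortized fixed cost plus distance cost."""
    return fixed_cost / lifetime_deliveries + variable_cost_per_km * distance_km

# Illustrative numbers only -- real figures come from the questions above.
uav  = cost_per_delivery(fixed_cost=5000, lifetime_deliveries=1000,
                         variable_cost_per_km=0.10, distance_km=60)
road = cost_per_delivery(fixed_cost=20000, lifetime_deliveries=2000,
                         variable_cost_per_km=0.50, distance_km=60)
```

Even a back-of-the-envelope model like this forces you to put numbers on the fixed and variable costs of each option before committing to the UAV.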

While the payload weight is relatively small (1kg-2kg) for low-cost UAVs, keep in mind that UAV flights can continue around the clock. As one of my colleagues at the Syria Airlift Project recently noted, “If one crew could launch a plane every 5 minutes, that would add up to almost 200kg in an eight-hour time period.”
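My colleague’s back-of-the-envelope figure is easy to verify, assuming roughly 2kg per plane:

```python
# One launch every 5 minutes over an 8-hour shift, ~2 kg per plane
flights_per_shift = (8 * 60) // 5          # 96 launches
payload_per_shift = flights_per_shift * 2  # kg
print(payload_per_shift)                   # 192 kg -- "almost 200kg"
```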


Naturally, Google and Matternet are not the only groups out there developing UAVs for payload transportation. Amazon, DHL and others are prototyping the same technology. In addition, many of the teams I met at the recent Drones for Good Challenge in Dubai demo’ed payload solutions. One of the competition’s top 5 finalists was Drone Life from Spain. They flew their quadcopter (pictured above) fully autonomously. What’s special about this particular prototype is not just its range (40-50km with a 2-3kg payload) but the fact that it also includes a fridge (for vaccines, organs, etc.) that can be remotely monitored in real-time to ensure the temperature remains within required parameters.
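The remote-monitoring logic for such a fridge can be surprisingly simple. Here is a hedged sketch of what the alerting check might look like, using the standard 2-8°C vaccine cold-chain range; the function and data format are my own illustration, not Drone Life’s actual telemetry system:

```python
def cold_chain_alerts(readings, low=2.0, high=8.0):
    """Flag telemetry readings outside the 2-8 C vaccine cold-chain range.

    readings: list of (seconds_elapsed, temp_celsius) tuples from the fridge.
    Returns the readings that should trigger an operator alert.
    """
    return [(t, temp) for t, temp in readings if not (low <= temp <= high)]

# Example telemetry: the fridge drifts out of range after two minutes
alerts = cold_chain_alerts([(0, 4.5), (60, 5.1), (120, 9.3)])
```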

At some point in your planning process, you’ll want to map the landing and take-off sites. The map below is the one we recently produced for the Tanzania UAV project (which is still being explored). Naturally, all these payload UAV flights would be pre-programmed and autonomous. If you’d like to learn more about how one programs such flights, check out my short video here.
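Under the hood, pre-programmed missions are just ordered lists of waypoints, and the flight-planning software computes the route length between them with a great-circle formula. A minimal sketch (the coordinates below are illustrative, not the actual Tanzania sites):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in km."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def route_length_km(waypoints):
    """Total length of a pre-programmed mission over consecutive waypoints."""
    return sum(haversine_km(*a, *b) for a, b in zip(waypoints, waypoints[1:]))

# Illustrative waypoints (lat, lon) -- not the actual project sites
mission = [(-6.80, 39.28), (-6.85, 39.20), (-6.90, 39.15)]
total = route_length_km(mission)
```

Comparing the route length against the UAV’s range (with a safety margin for wind and battery degradation) is one of the first checks in any mission plan.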

[Map: proposed landing and take-off sites for the Tanzania UAV project]

One other point worth keeping in mind is that UAVs need not be independent from existing transportation infrastructure. One team at the recent Drones for Good Challenge in Dubai suggested using public buses as take-off and landing points for UAVs. A university in the US is actually exploring this same use case, extending the reach of delivery trucks by using UAVs.

Of course, there are a host of issues that one needs to consider when operating any kind of UAV for humanitarian purposes. These include regulations, permits, risk assessments and mitigation strategies, fail safe mechanisms, community engagement, data privacy/security, etc. The above is simply meant to highlight some of the basic questions that need to be posed at the outset of the project. Needless to say, the very first question should always be whether the UAV is indeed the most appropriate tool (cost/benefit analysis) for the task at hand. In any case, the above is obviously not an exhaustive list. So I’d very much welcome feedback on what’s missing. Thank you!