Tag Archives: AI

Analyzing Tweets on Malaysia Flight #MH370

My QCRI colleague Dr. Imran is using our AIDR platform (Artificial Intelligence for Disaster Response) to collect & analyze tweets related to Malaysia Flight 370, which went missing several days ago. He has collected well over 850,000 English-language tweets since March 11th using the following keywords/hashtags: Malaysia Airlines flight, #MH370, #PrayForMH370 and #MalaysiaAirlines.

MH370 Prayers

Imran then used AIDR to create a number of “machine learning classifiers” to automatically classify all incoming tweets into categories that he is interested in:

  • Informative: tweets that relay breaking news, useful info, etc.

  • Praying: tweets that are related to prayers and faith

  • Personal: tweets that express personal opinions

The process is super simple. All he does is tag several dozen incoming tweets into their respective categories. This teaches AIDR what an “Informative” tweet should “look like”. Since our novel approach combines human intelligence with artificial intelligence, AIDR is typically far more accurate at capturing relevant tweets than Twitter’s keyword search.

And the more tweets that Imran tags, the more accurate AIDR gets. At present, AIDR can auto-classify ~500 tweets per second, or 30,000 tweets per minute. This is well above the highest velocity of crisis tweets recorded thus far—16,000 tweets/minute during Hurricane Sandy.
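The tag-and-learn loop described above can be sketched in a few lines of Python. This is a minimal naive Bayes illustration of how a handful of human-tagged tweets can teach a classifier to label new ones with a confidence score; AIDR's actual classifiers are more sophisticated, and the category names and example tweets below are invented for illustration.

```python
from collections import Counter, defaultdict
import math

class TinyTweetClassifier:
    """A minimal naive Bayes tweet classifier: humans tag examples,
    the classifier learns word statistics per category. Illustrative
    sketch only -- not AIDR's actual implementation."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # category -> word frequencies
        self.category_counts = Counter()         # category -> tagged tweet count
        self.vocab = set()

    def tag(self, tweet, category):
        """A volunteer tags an incoming tweet, teaching the classifier."""
        words = tweet.lower().split()
        self.word_counts[category].update(words)
        self.category_counts[category] += 1
        self.vocab.update(words)

    def classify(self, tweet):
        """Return (best_category, confidence) for a new tweet."""
        words = tweet.lower().split()
        total = sum(self.category_counts.values())
        log_scores = {}
        for cat in self.category_counts:
            # log prior + log likelihood with add-one smoothing
            score = math.log(self.category_counts[cat] / total)
            cat_total = sum(self.word_counts[cat].values())
            for w in words:
                score += math.log(
                    (self.word_counts[cat][w] + 1) / (cat_total + len(self.vocab)))
            log_scores[cat] = score
        best = max(log_scores, key=log_scores.get)
        # normalize the scores into a rough "confidence" value in (0, 1]
        z = sum(math.exp(s - log_scores[best]) for s in log_scores.values())
        return best, 1.0 / z

clf = TinyTweetClassifier()
clf.tag("breaking news search area expanded for flight", "Informative")
clf.tag("officials confirm new radar data on flight path", "Informative")
clf.tag("praying for everyone on board", "Praying")
clf.tag("my thoughts and prayers are with the families", "Praying")
label, confidence = clf.classify("new search data confirmed by officials")
```

The more tweets a volunteer feeds to `tag()`, the better the word statistics and the sharper the confidence scores, which is exactly why accuracy improves as Imran tags more examples.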

The graph below depicts the number of tweets generated per day since we started the AIDR collection, i.e., March 11th.

Volume of Tweets per Day

This series of pie charts simply reflects the relative share of tweets per category over the past four days.

Tweets Trends

Below are some of the tweets that AIDR has automatically classified as being Informative (click to enlarge). The “Confidence” score simply reflects how confident AIDR is that it has correctly auto-classified a tweet. Note that Imran could also have crowdsourced the manual tagging—that is, he could have crowdsourced the process of teaching AIDR. To learn more about how AIDR works, please see this short overview and this research paper (PDF).

AIDR output

If you’re interested in testing AIDR (still very much under development) and/or would like the Tweet IDs for the 850,000+ tweets we’ve collected using AIDR, then feel free to contact me. In the meantime, we’ll start a classifier that auto-collects tweets related to hijacking, criminal causes, and so on. If you’d like us to create a classifier for a different topic, let us know—but we can’t make any promises since we’re working on an important project deadline. When we’re further along with the development of AIDR, anyone will be able to easily collect & download tweets and create & share their own classifiers for events related to humanitarian issues.


Acknowledgements: Many thanks to Imran for collecting and classifying the tweets. Imran also shared the graphs and tabular output that appear above.

Syria: Crowdsourcing Satellite Imagery Analysis to Identify Mass Human Rights Violations

Update: See this blog post for the latest. Also, our project was just featured on the UK Guardian Blog!

What if we crowdsourced satellite imagery analysis of key cities in Syria to identify evidence of mass human rights violations? This is precisely the question that my colleagues at Amnesty International USA’s Science for Human Rights Program asked me following this pilot project I coordinated for Somalia. AI-USA has done similar work in the past with their Eyes on Darfur project, which I blogged about here in 2008. But using micro-tasking with backend triangulation to crowdsource the analysis of high resolution satellite imagery for human rights purposes is definitely breaking new ground.

A staggering amount of new satellite imagery is produced every day; millions of square kilometers’ worth according to one knowledgeable colleague. This is a big data problem that needs mass human intervention until the software can catch up. I recently spoke with Professor Ryan Engstrom, the Director of the Spatial Analysis Lab at George Washington University, and he confirmed that automated algorithms for satellite imagery analysis still have a long, long way to go. So the answer for now has to be human-driven analysis.

But professional satellite imagery experts who have plenty of time to volunteer their skills are few and far between. The Satellite Sentinel Project (SSP), which I blogged about here, is composed of a very small team and a few interns. Their focus is limited to the Sudan and they are understandably very busy. My colleagues at AI-USA analyze satellite imagery for several conflicts, but this takes them far longer than they’d like, and their small team is still constrained given the number of conflicts and vast amounts of imagery that could be analyzed. This explains why they’re interested in crowdsourcing.

Indeed, crowdsourcing imagery analysis has proven to be a workable solution in several other projects & sectors. The “crowd” can indeed scan and tag vast volumes of satellite imagery data when that imagery is “sliced and diced” for micro-tasking. This is what we did for the Somalia pilot project thanks to the Tomnod platform and the imagery provided by Digital Globe. The yellow triangles below denote the “sliced images” that individual volunteers from the Standby Task Force (SBTF) analyzed and tagged one at a time.

We plan to do the same with high resolution satellite imagery of three key cities in Syria selected by the AI-USA team. The specific features we will look for and tag include: “Burnt and/or darkened building features,” “Roofs absent,” “Blocks on access roads,” “Military equipment in residential areas,” “Equipment/persons on top of buildings indicating potential sniper positions,” “Shelters composed of different materials than surrounding structures,” etc. SBTF volunteers will be provided with examples of what these features look like from a bird’s eye view and from ground level.

Like the Somalia project, only when a feature—say a missing roof—is tagged identically by at least 3 volunteers will that location be sent to the AI-USA team for review. In addition, if volunteers are unsure about a particular feature they’re looking at, they’ll take a screenshot of said feature and share it on a dedicated Google Doc for the AI-USA team and other satellite imagery experts from the SBTF team to review. This feedback mechanism is key to ensuring accurate tagging and inter-coder reliability. The screenshots shared will also be used to build a larger library of features, i.e., what a missing roof looks like, as well as military equipment in residential areas, road blocks, etc. Volunteers will also be in touch with the AI-USA team via a dedicated Skype chat.
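The triangulation rule is simple enough to sketch: collect every (location, feature) tag submitted by individual volunteers, and forward only those pairs confirmed by at least three people. This is a toy Python illustration of that consensus step, not the actual Tomnod implementation, and the coordinates and threshold parameter are invented for the example.

```python
from collections import Counter

def consensus_tags(volunteer_tags, threshold=3):
    """Given (location, feature) tags from individual volunteers, return
    the pairs confirmed by at least `threshold` volunteers -- the only
    ones forwarded to the expert team for review. Toy sketch of the
    triangulation rule, not the Tomnod platform's implementation."""
    counts = Counter(volunteer_tags)
    return {tag for tag, n in counts.items() if n >= threshold}

tags = [
    (("37.1N", "36.9E"), "roof absent"),
    (("37.1N", "36.9E"), "roof absent"),
    (("37.1N", "36.9E"), "roof absent"),  # three volunteers agree
    (("37.2N", "36.8E"), "road block"),   # only one volunteer: not forwarded
]
confirmed = consensus_tags(tags)
```

Requiring independent agreement before an expert ever sees a tag is what lets a large, untrained crowd produce reliable output from a small amount of expert time.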

There will no doubt be a learning curve, but the sooner we climb that learning curve the better. Democratizing satellite imagery analysis is no easy task and one or two individuals have opined that what we’re trying to do can’t be done. That may be, but we won’t know unless we try. This is how innovation happens. We can hypothesize and talk all we want, but concrete results are what ultimately matters. And results are what can help us climb that learning curve. My hope, of course, is that democratizing satellite imagery analysis enables AI-USA to strengthen their advocacy campaigns and makes it harder for perpetrators to commit mass human rights violations.

SBTF volunteers will be carrying out the pilot project this month in collaboration with AI-USA, Tomnod and Digital Globe. How and when the results are shared publicly will be up to the AI-USA team as this will depend on what exactly is found. In the meantime, a big thanks to Digital Globe, Tomnod and SBTF volunteers for supporting the AI-USA team on this initiative.

If you’re interested in reading more about satellite imagery analysis, the following blog posts may also be of interest:

• Geo-Spatial Technologies for Human Rights
• Tracking Genocide by Remote Sensing
• Human Rights 2.0: Eyes on Darfur
• GIS Technology for Genocide Prevention
• Geo-Spatial Analysis for Global Security
• US Calls for UN Aerial Surveillance to Detect Preparations for Attacks
• Will Using ‘Live’ Satellite Imagery to Prevent War in the Sudan Actually Work?
• Satellite Imagery Analysis of Kenya’s Election Violence: Crisis Mapping by Fire
• Crisis Mapping Uganda: Combining Narratives and GIS to Study Genocide
• Crowdsourcing Satellite Imagery Analysis for Somalia: Results of Trial Run
• Genghis Khan, Borneo & Galaxies: Crowdsourcing Satellite Imagery Analysis
• OpenStreetMap’s New Micro-Tasking Platform for Satellite Imagery Tracing




Eyes on Darfur: 2 Villages Missing from Site

An update on Amnesty International’s (AI) “Eyes on Darfur” project, following up on my previous blog post.

At least two of the protected villages monitored by AI using very high-resolution imagery provided by AAAS have been removed from the site after reported attacks in the area, with updated imagery still being processed. The attacks in question were summarized in this UNHCR Report.

This raises some important questions, as a colleague noted in a recent discussion: the bigger issue here is vital. All this geo-mapping is virtual, and while it may impact the real world, that is not a foregone conclusion. Would other NGOs, or perhaps a consortium, do better at the protective concept? And how? Namely, who can protect these villages and others like them?

I will write another blog post this week on precisely these questions, i.e., civilian protection.

Patrick Philippe Meier

Human Rights 2.0: Eyes on Darfur

Amnesty International (AI) is taking human rights monitoring to a whole new level, metaphorically and literally speaking. The organization’s “Eyes on Darfur” project leverages the power of high-resolution satellite imagery to provide unimpeachable evidence of the atrocities being committed in Darfur – enabling action by private citizens, policy makers and international courts. Eyes On Darfur also breaks new ground in protecting human rights by allowing people around the world to literally “watch over” and protect twelve intact, but highly vulnerable, villages using commercially available satellite imagery.

I met with AI today to learn more. The human rights organization sends government officials these images on a regular basis to remind them that the world is watching. The impact? The villages monitored by AI have not been attacked while neighboring ones have. According to AI, there have also been notable changes in decisions made by the Bashir government since “Eyes on Darfur” went live a year ago. Equally interesting is that AI has been able to track the movement of the Janjaweed thanks to commercially available satellite imagery. In addition, the government of Chad cited the AI project as one of the reasons they accepted UN peacekeepers.

The American Association for the Advancement of Science (AAAS) is also leading a Human Rights and Geospatial Technologies project. So I also sat with them to learn more (September 2007). NGOs in Burma provided AAAS with information concerning attacks on civilians carried out by government forces in late 2006 and early 2007. AAAS staff reviewed these reports and compared them with high-resolution satellite images to identify destruction of housing and infrastructure and construction of new military occupation camps. The result is available in these Google Earth Layers. AAAS has provided comparable layers for Sudan, Chad, Lebanon and Zimbabwe. And this is just the tip of the iceberg.

AI is venturing on a 3-year project to provide satellite imagery to monitor forced displacement for early detection and advocacy. AAAS is developing a user-friendly web-based interface to let the NGO community know in real time where commercial satellites are positioned and what geographical areas they are taking pictures of. The interface includes direct links to the private companies operating these satellites along with contact and pricing information. AAAS believes this tool will enable the NGO community to make far more effective use of satellite imagery and to serve as a deterrent against repressive regimes choosing to commit mass atrocities.

The European Commission’s Joint Research Center (JRC) in Ispra, Italy is also engaged in phenomenal work using satellite imagery. I first met with the JRC in 2004 and more recently in October 2007. The Center has developed automated models for change detection that are far more reliable than previously thought possible. Using pattern detection algorithms, the JRC can detect whether infrastructure has been destroyed, damaged, built or remained unchanged. They are now applying these models to monitor changes in refugee camps worldwide. The advantage of the JRC’s models is that they don’t necessarily require high resolution satellite imagery.
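At its simplest, change detection compares two co-registered images of the same area taken at different times and flags locations where they differ significantly. The Python sketch below illustrates only that basic idea on tiny grayscale rasters represented as nested lists; the JRC's actual models use far more sophisticated pattern detection, and the pixel values and threshold here are invented for illustration.

```python
def detect_change(before, after, threshold=50):
    """Flag pixel positions whose brightness changed by more than
    `threshold` between two co-registered grayscale rasters (nested
    lists of equal shape). A toy illustration of change detection,
    not the JRC's pattern-detection models."""
    changed = []
    for i, (row_before, row_after) in enumerate(zip(before, after)):
        for j, (b, a) in enumerate(zip(row_before, row_after)):
            if abs(a - b) > threshold:
                changed.append((i, j))
    return changed

before = [[200, 200], [200, 200]]  # intact structures (bright roofs)
after  = [[200,  40], [200, 200]]  # one structure now dark (possibly destroyed)
changed = detect_change(before, after)  # → [(0, 1)]
```

Real systems classify the *kind* of change (destroyed, damaged, newly built) rather than just its presence, but the compare-and-threshold step is the common starting point.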

The same team at the JRC has also developed models to approximate population density in urban areas such as the Kibera slums in Nairobi. Using satellite pictures taken at different angles, the team is able to construct 3D models of infrastructure such as individual buildings and houses. Thanks to these models they are able to approximate the size of these structures and thus estimate the number of inhabitants.
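The final step of that estimate is essentially arithmetic: once a 3D model yields a building's footprint and height, usable floor area divided by an assumed area per inhabitant gives a rough headcount. The sketch below illustrates that logic only; the 10 m² per person figure is an assumption made for this example, not the JRC's actual parameter.

```python
def estimate_population(footprint_m2, floors, m2_per_person=10):
    """Rough population estimate for one building derived from a 3D
    model: usable floor area divided by an assumed area per inhabitant.
    The default of 10 m^2/person is an illustrative assumption, not a
    parameter from the JRC's models."""
    return round(footprint_m2 * floors / m2_per_person)

# e.g., a 50 m^2 footprint with 2 floors -> roughly 10 inhabitants
people = estimate_population(50, 2)
```

Summing such per-building estimates over an entire neighborhood is what turns multi-angle imagery into a population density map.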

While AI and AAAS have been collaborating on some of these projects, the JRC has not been connected to this work. I therefore organized a working lunch during the OCHA +5 Symposium in Geneva last Fall to connect AAAS, the JRC, the Feinstein Center and the USHMM. My intention is to catalyze greater collaboration between these organizations and projects so we can upgrade to Human Rights 2.0.

Patrick Philippe Meier