Computing Research Institutes as an Innovation Pathway for Humanitarian Technology

The World Humanitarian Summit (WHS) is an initiative by United Nations Secretary-General Ban Ki-moon to improve humanitarian action. The Summit, which is to be held in 2016, stands to be one of the most important humanitarian conferences in a decade. One key pillar of WHS is humanitarian innovation. “Transformation through Innovation” is the WHS Working Group dedicated to transforming humanitarian action by focusing explicitly on innovation. I have the pleasure of being a member of this working group where my contribution focuses on the role of new technologies, data science and advanced computing. As such, I’m working on an applied study to explore the role of computing research institutes as an innovation pathway for humanitarian technology. The purpose of this blog post is to invite feedback on the ideas presented below.


I first realized that the humanitarian community faced a “Big Data” challenge in 2010, just months after I had joined Ushahidi as Director of Crisis Mapping, and just months after co-founding CrisisMappers: The Humanitarian Technology Network. The devastating Haiti Earthquake resulted in a massive overflow of information generated via mainstream news, social media, text messages and satellite imagery. I launched and spearheaded the Haiti Crisis Map at the time and, together with hundreds of digital volunteers from all around the world, went head-to-head with Big Data. As noted in my forthcoming book, we realized there and then that crowdsourcing and mapping software alone were no match for Big (Crisis) Data.

Digital Humanitarians: The Book

This explains why I decided to join an advanced computing research institute, namely QCRI. It was clear to me after Haiti that humanitarian organizations had to partner directly with advanced computing experts to manage the new Big Data challenge in disaster response. So I “embedded” myself in an institute with leading experts in Big Data Analytics, Data Science and Social Computing. I believe that computing research institutes (CRIs) can & must play an important role in fostering innovation in next generation humanitarian technology by partnering with humanitarian organizations on research & development (R&D).

There is already some evidence to support this proposition. We (QCRI) teamed up with the UN Office for the Coordination of Humanitarian Affairs (OCHA) to create the Artificial Intelligence for Disaster Response platform, AIDR as well as MicroMappers. We are now extending AIDR to analyze text messages (SMS) in partnership with UNICEF. We are also spearheading efforts around the use and analysis of aerial imagery (captured via UAVs) for disaster response (see the Humanitarian UAV Network: UAViators). On the subject of UAVs, I believe that this new technology presents us (in the WHS Innovation team) with an ideal opportunity to analyze in “real time” how a new, disruptive technology gets adopted within the humanitarian system. In addition to UAVs, we catalyzed a partnership with Planet Labs and teamed up with Zooniverse to take satellite imagery analysis to the next level with large scale crowd computing. To this end, we are working with humanitarian organizations to enable them to make sense of Big Data generated via social media, SMS, aerial imagery & satellite imagery.

The incentives for humanitarian organizations to collaborate with CRIs are obvious, especially if the latter (like QCRI) commit to making the resulting prototypes freely accessible and open source. But why should CRIs collaborate with humanitarian organizations in the first place? Because the latter come with real-world challenges and unique research questions that many computer scientists are very interested in, for several reasons. First, carrying out scientific research on real-world problems is of interest to the vast majority of computer scientists I collaborate with, both within QCRI and beyond. These scientists want to apply their skills to make the world a better place. Second, the research questions that humanitarian organizations bring enable computer scientists to differentiate themselves in the publishing world. Third, the resulting research can help advance the field of computer science and advanced computing.

So why are we not seeing more collaboration between CRIs & humanitarian organizations? Because of a cognitive surplus mismatch. It takes a Director of Social Innovation (or related full-time position) to serve as a translational leader between CRIs and humanitarian organizations. It takes someone (ideally a team) to match the problem owners with the problem solvers; to facilitate and manage the collaboration between these two very different types of expertise and organizations. In sum, CRIs can serve as an innovation pathway if the following three ingredients are in place: 1) Translation Leader; 2) Committed CRI; and 3) Committed Humanitarian Organization. These are necessary but not sufficient conditions for success.

While research institutes have a comparative advantage in R&D, they are not the best place to scale humanitarian technology prototypes. In order to take these prototypes to the next level, make them sustainable and have them develop into enterprise-level software, they need to be taken up by for-profit companies. The majority of CRIs (QCRI included) actually do have a mandate to incubate start-up companies. As such, we plan to spin off some of the above platforms as independent companies in order to scale the technologies in a robust manner. Note that the software will remain free to use for humanitarian applications; other uses of the platform will require a paid license. Therein lies the end-to-end innovation path that computing research institutes can offer humanitarian organizations vis-a-vis next generation humanitarian technologies.

As noted above, part of my involvement with the WHS Innovation Team entails working on an applied study to document and replicate this innovation pathway. As such, I am looking for feedback on the above as well as on the research methodology described below.

I plan to interview Microsoft Research, IBM Research, Yahoo Research, QCRI and other institutes as part of this research. More specifically, the interview questions will include:

  • Have you already partnered with humanitarian organizations? Why/why not?
  • If you have partnered with humanitarian organizations, what was the outcome? What were the biggest challenges? Was the partnership successful? If so, why? If not, why not?
  • If you have not yet partnered with humanitarian organizations, why not? What factors would be conducive to such partnerships and what factors serve as hurdles?
  • What are your biggest concerns vis-a-vis working with humanitarian groups?
  • What funding models did you explore if any?

I also plan to interview humanitarian organizations to better understand the prospects for this potential innovation pathway. More specifically, I plan to interview ICRC, UNHCR, UNICEF and OCHA using the following questions:

  • Have you already partnered with computing research groups? Why/why not?
  • If you have partnered with computing research groups, what was the outcome? What were the biggest challenges? Was the partnership successful? If so, why? If not, why not?
  • If you have not yet partnered with computing research groups, why not? What factors would be conducive to such partnerships and what factors serve as hurdles?
  • What are your biggest concerns vis-a-vis working with computing research groups?
  • What funding models did you explore if any?

My plan is to carry out the above semi-structured interviews in February-March 2015 along with secondary research. My ultimate aim with this deliverable is to develop a model to facilitate greater collaboration between computing research institutes and humanitarian organizations. To this end, I welcome feedback on all of the above (feel free to email me and/or add comments below). Thank you.


See also:

  • Research Framework for Next Generation Humanitarian Technology and Innovation [link]
  • From Gunfire at Sea to Maps of War: Implications for Humanitarian Innovation [link]

The Problem with Crisis Informatics Research

My colleague ChaTo at QCRI recently shared some interesting thoughts on the challenges of crisis informatics research vis-a-vis Twitter as a source of real-time data. The way he drew out the issue was clear, concise and informative. So I’ve replicated his diagram below.

ChaTo Diagram

  • What Emergency Managers Need: Those actionable tweets that provide situational awareness relevant to decision-making.
  • What People Tweet: Those tweets posted during a crisis which are freely available via Twitter’s API (which is a very small fraction of the Twitter Firehose).
  • What Computers Can Do: The computational ability of today’s algorithms to parse and analyze natural language at a large scale.

A: The small fraction of tweets containing valuable information for emergency responders that computer systems are able to extract automatically.
B: Tweets that are relevant to disaster response but cannot be analyzed in real-time by existing algorithms due to computational challenges (e.g. data processing is too intensive, or requires artificial intelligence systems that do not exist yet).
C: Tweets that can be analyzed by current computing systems, but do not meet the needs of emergency managers.
D: Tweets that, if they existed, could be analyzed by current computing systems, and would be very valuable for emergency responders—but people do not write such tweets.
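To make the four regions concrete, here is a minimal Python sketch that models ChaTo's three circles as sets and derives A, B, C and D from them. All tweet IDs and set contents are invented purely for illustration:

```python
# Model the three circles as sets of (hypothetical) tweet IDs.
needed = {"t1", "t2", "t3", "t6"}     # what emergency managers need
posted = {"t1", "t2", "t4", "t5"}     # what people actually tweet
parseable = {"t1", "t4", "t5", "t6"}  # what today's algorithms can analyze

A = needed & posted & parseable   # valuable, posted, and machine-readable
B = (needed & posted) - parseable # valuable and posted, beyond current algorithms
C = (posted & parseable) - needed # machine-readable but not operationally useful
D = (needed & parseable) - posted # valuable and analyzable, but never posted

print(sorted(A))  # ['t1']
print(sorted(B))  # ['t2']
print(sorted(C))  # ['t4', 't5']
print(sorted(D))  # ['t6']
```

The set algebra also makes the later point about dynamism explicit: growing any of the three input sets (better access, better algorithms, better tweeting behavior) changes the size of A.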

These limitations are not just academic. They make it more challenging to develop next-generation humanitarian technologies. So one question that naturally arises is this: How can we expand the size of A? One way is for governments to implement policies that expand access to mobile phones and the Internet, for example.

Area C is where the vast majority of social media companies operate today: collecting business intelligence and running sentiment analysis for private sector clients by combining natural language processing and machine learning methodologies. But this analysis rarely focuses on tweets posted during a major humanitarian crisis. Reaching out to these companies to let them know they could make a difference during disasters would help to expand the size of A + C.
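As a toy stand-in for the NLP-plus-machine-learning pipelines these companies run, here is a minimal bag-of-words Naive Bayes classifier that separates crisis-related tweets from ordinary brand chatter. The training examples and labels are invented for illustration; real systems use far larger corpora and much richer features:

```python
from collections import Counter
import math

# Hypothetical labeled examples (real pipelines train on millions of tweets).
TRAIN = [
    ("bridge collapsed need rescue boats", "crisis"),
    ("flooding on main street shelter open", "crisis"),
    ("love the new phone great battery", "chatter"),
    ("this coffee shop has amazing wifi", "chatter"),
]

def train(examples):
    # Per-class word counts form the whole "model".
    counts = {"crisis": Counter(), "chatter": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def classify(counts, text):
    # Log-likelihood with add-one smoothing; uniform prior over the two classes.
    vocab = set(w for c in counts.values() for w in c)
    scores = {}
    for label, c in counts.items():
        total = sum(c.values())
        scores[label] = sum(
            math.log((c[w] + 1) / (total + len(vocab))) for w in text.split()
        )
    return max(scores, key=scores.get)

model = train(TRAIN)
print(classify(model, "need rescue after flooding"))  # crisis
```

The point of the sketch is simply that the same machinery firms use to score brand sentiment could, with crisis-labeled training data, be repointed at disaster-relevant tweets.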

Finally, Area D is composed of information that would be very valuable for emergency responders and could be automatically extracted from tweets, but that Twitter users are simply not posting (for now). Here, government and humanitarian organizations can, for example, develop policies that incentivise disaster-affected communities to tweet about the impact of a hazard and their resulting needs in a way that is actionable. This is what the Philippine Government did during Typhoon Pablo.

Now recall that the circle “What People Tweet About” is actually a very small fraction of all posted tweets. The advantage of this small sample of tweets is that they are freely available via Twitter’s API. But said API limits the number of downloadable tweets to just a few thousand per day. (For comparative purposes, there were over 20 million tweets posted during Hurricane Sandy). Hence the need for data philanthropy for humanitarian response.

I would be grateful for your feedback on these ideas and the conceptual framework proposed by ChaTo. The point to remember, as noted in this earlier post, is that today’s challenges are not static; they can be addressed and overcome to various degrees. In other words, the sizes of the circles can and will change.



Muḥammad ibn Mūsā al-Khwārizmī: An Update from the Qatar Computing Research Institute

I first heard of al-Khwārizmī in my ninth-grade computer science class at the International School of Vienna (AIS) back in 1993. Dr. Herman Prossinger, who taught the course, is exactly the kind of person one describes when answering the question: which teacher had the most impact on you while growing up? I wonder how many other 9th graders in the world had the good fortune of being taught computer science by a full-fledged professor with a PhD dissertation entitled “Isothermal Gas Spheres in General Relativity Theory” (1976) and numerous peer-reviewed publications in top-tier scientific journals including Nature?

Muḥammad ibn Mūsā al-Khwārizmī was a brilliant mathematician & astronomer who spent his time as a scholar in the House of Wisdom in Baghdad (possibly the best name of any co-working space in history). “al-Khwārizmī” was initially transliterated into Latin as Algoritmi. The manuscript above, for example, begins with “DIXIT algorizmi,” meaning “Says al-Khwārizmī.” And thus was born the word algorithm. But al-Khwārizmī’s fundamental contributions were not limited to the fields of mathematics and astronomy; he is also praised for his important work on geography and cartography. Published in 833, his Kitāb ṣūrat al-Arḍ (Arabic: كتاب صورة الأرض), or “Book on the Appearance of the Earth,” was a revised and corrected version of Ptolemy’s Geography. al-Khwārizmī’s book comprised an impressive list of 2,402 coordinates of cities and other geographical features. The only surviving copy of the book can be found at Strasbourg University. I’m surprised the item has not yet been purchased by Qatar and relocated to Doha.

View of the bay from QCRI in Doha, Qatar.

This brings me to the Qatar (Foundation) Computing Research Institute (QCRI), which was almost called the al-Khwārizmī Computing Research Institute. I joined QCRI exactly two weeks ago as Director of Social Innovation. My first impression? QCRI is Doha’s “House of Whizzkids”. The team is young, dynamic, international and super smart. I’m already working on several exploratory research and development (R&D) projects that could potentially lead to initial prototypes by the end of the year. These have to do with the application of social computing and big data analysis for humanitarian response. So I’ve been in touch with several colleagues at the United Nations (UN) Office for the Coordination of Humanitarian Affairs (OCHA) to bounce these early ideas off and am thrilled that all responses thus far have been very positive.

My QCRI colleagues and I are also looking into collaborative platforms for “smart microtasking” which may be useful for the Digital Humanitarian Network. In addition, we’re just starting to explore potential solutions for quantifying veracity in social media, a rather non-trivial problem as Dr. Prossinger would often say with a sly smile in relation to NP-hard problems. In terms of partnership building, I will be in New York, DC and Boston next month for official meetings with the UN, World Bank and MIT to explore possible collaborations on specific projects. The team in Doha is particularly strong on big data analytics, social computing, data cleaning, machine learning and translation. In fact, most of the whizzkids here come from very impressive track records with Microsoft, Yahoo, Ivy Leagues, etc. So I’m excited by the potential.

View of Tornado Tower (purple lights) where QCRI is located.

The reason I’m not going into specifics vis-a-vis these early R&D efforts is not because I want to be secretive or elusive. Not at all. We’re still refining the ideas ourselves and simply want to manage expectations. There is a very strong and genuine interest within QCRI to contribute meaningfully to the humanitarian technology space. But we’re really just getting started, still hiring left, center and right, and we’ll be in R&D mode for a while. Plus, we don’t want to rush just for the sake of launching a new product. All too often, humanitarian technologies are developed without the benefit (and luxury) of advanced R&D. But if QCRI is going to help shape next-generation humanitarian technology solutions, we should do this in a way that is deliberate, cutting-edge and strategic. That is our comparative advantage.

In sum, the outcome of our R&D efforts may not always lead to a full-fledged prototype, but all the research and findings we carry out will definitely be shared publicly so we can move the field forward. We’re also committed to developing free and open source software as part of our prototyping efforts. Finally, we have no interest in re-inventing the wheel and far prefer working in partnerships than in isolation. So there we go, time to R&D like al-Khwārizmī.

The Future of Digital Activism and How to Stop It

I’ve been following a “debate” on a technology listserv which represents the absolute worst of the discourse on digital activism. Even writing the word debate in quotes is too generous. It was like watching Bill O’Reilly or Glenn Beck go all out on Fox News.

The arguments were mostly one-sided and mixed with insults to create public ridicule. It was blatantly obvious that those doing the verbal lynching were driven by other motives. They have a history of being aggressive and seeking provocation in public because it gets them attention, which further bloats their egos. They thrive on it. The irony? Neither of them has much of a track record to speak of in the field of digital activism. All they seem to do is talk about tech in the context of insulting others who get engaged operationally and try to make a difference. Constructive criticism is important, but this hardly qualifies. This is a shame as these individuals are otherwise quite sharp.

So how do we prevent a Fox-styled future of Digital Activism? First, ignore these poisonous debates. If people were serious about digital activism, the discourse would take on a very different tone, a professional one. Second, don’t be fooled: most of the conversations on digital activism are mixed with anecdotes, selection bias and hype, often to get media attention. You’ll find that most involved in the “study” of digital activism have no idea about methodology and research design. Third, help make data-driven, mixed-methods research on digital activism possible by adding data to the Global Digital Activism Data Set (GDADS). The Meta-Activism Project (MAP) recently launched this data project to catalyze more empirical research on digital activism.