
Project Loon: Google Blimps for Disaster Response (Updated)

A blimp is a floating airship that does not have any internal supporting framework or keel. The airship is typically filled with helium and is controlled remotely using steerable fans. Project Loon is a Google initiative to launch a fleet of blimps to extend Internet/WiFi access across Africa and Asia. Some believe that “these high-flying networks would spend their days floating over areas outside of major cities where Internet access is either scarce or simply nonexistent.” Small-scale prototypes are reportedly being piloted in South Africa “where a base station is broadcasting signals to wireless access boxes in high schools over several kilometres.” The US military has been using similar technology for years.


Google notes that the technology is “well-suited to provide low cost connectivity to rural communities with poor telecommunications infrastructure, and for expanding coverage of wireless broadband in densely populated urban areas.” Might Google Blimps also be used by Google’s Crisis Response Team in the future? They could certainly provide Internet access to disaster-affected communities. The blimps could also capture very high-resolution aerial imagery for damage assessment purposes; adding a digital camera would do the trick. In fact, Google could mount the fourth-generation cameras used for Google Street View on the blimps to create Google Sky View. As always, however, these innovations are fraught with privacy and data protection issues. Also, the use of UAVs and balloons for disaster response has been discussed for years already.


Zooniverse: The Answer to Big (Crisis) Data?

Both humanitarian and development organizations are completely unprepared to deal with the rise of “Big Crisis Data” & “Big Development Data.” But many still hope that Big Data is but an illusion. Not so, as I’ve already blogged here, here and here. This explains why I’m on a quest to tame the Big Data Beast. Enter Zooniverse. I’ve been a huge fan of Zooniverse for as long as I can remember, and certainly long before I first mentioned them in this post from two years ago. Zooniverse is a citizen science platform that evolved from GalaxyZoo in 2007. Today, Zooniverse “hosts more than a dozen projects which allow volunteers to participate in scientific research” (1). So, why do I have a major “techie crush” on Zooniverse?

Oh let me count the ways. Zooniverse interfaces are absolutely gorgeous, making them a real pleasure to spend time with; the team really understands user-centered design and volunteer motivations. The fact that Zooniverse is conversant in multiple disciplines is incredibly attractive. Indeed, the platform has been used to produce rich scientific data across multiple fields such as astronomy, ecology and climate science. Furthermore, this citizen science beauty has a user base of some 800,000 registered volunteers—with an average of 500 to 1,000 new volunteers joining every day! To place this into context, the Standby Volunteer Task Force (SBTF), a digital humanitarian group, has about 1,000 volunteers in total. The open source Zooniverse platform also scales like there’s no tomorrow, enabling hundreds of thousands of volunteers to participate in a single deployment at any given time. In short, the software supporting these pioneering citizen science projects is well tested and rapidly customizable.

At the heart of the Zooniverse magic is microtasking. If you’re new to microtasking, which I often refer to as “smart crowdsourcing,” this blog post provides a quick introduction. In brief, microtasking takes a large task and breaks it down into smaller microtasks. Say you were a major (like really major) astronomy buff and wanted to tag a million galaxies based on whether they are spiral or elliptical. The good news? The kind folks at the Sloan Digital Sky Survey have already sent you a hard disk packed full of telescope images. The not-so-good news? A quick back-of-the-envelope calculation reveals it would take 3-5 years, working 24 hours a day, 7 days a week, to tag a million galaxies. Ugh!
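For the curious, here is a rough sketch of that back-of-the-envelope math. The time-per-classification figure is my own assumption, not one from the GalaxyZoo team; bump it up to two or three minutes per image and the total drifts toward the 3-5 year mark.

```python
# Back-of-the-envelope estimate for tagging a million galaxies solo.
# Assumption (mine): a careful classification takes about 1.5 minutes.

GALAXIES = 1_000_000
MINUTES_PER_TAG = 1.5  # assumed average time per image

total_hours = GALAXIES * MINUTES_PER_TAG / 60
years_nonstop = total_hours / (24 * 365)  # tagging 24/7, no breaks

print(f"{total_hours:,.0f} hours, or roughly {years_nonstop:.1f} years non-stop")
# -> 25,000 hours, or roughly 2.9 years non-stop
```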


But you’re a smart cookie and decide to give this microtasking thing a go. So you upload the pictures to a microtasking website. You then get on Facebook, Twitter, etc., and invite (nay, beg) your friends (and as many strangers as you can find on the suddenly-deserted digital streets) to help you tag a million galaxies. Naturally, you provide your friends, and the surprisingly large number of good digital Samaritans who’ve just shown up, with a quick 2-minute video intro on what spiral and elliptical galaxies look like. You explain that each participant will be asked to tag one galaxy image at a time, simply by clicking the “Spiral” or “Elliptical” button as needed. Inevitably, someone raises their hand to ask the obvious: “Why?! Why in the world would anyone want to tag a zillion galaxies?!”

Well, only because analyzing the resulting data could yield significant insights that may force a major rethink of cosmology and our place in the Universe. “Good enough for us,” they say. You breathe a sigh of relief and see them off, cruising towards deep space to boldly go where no one has gone before. But before you know it, they’re back on planet Earth. To your utter astonishment, you learn that they’re done with all the tagging! So you run over and check the data to see if they’re pulling your leg; but no, not only are 1 million galaxies tagged, but the tags are highly accurate as well. If you liked this little story, you’ll be glad to know that it happened in real life. GalaxyZoo, as the project was called, was the flash of brilliance that ultimately launched the entire Zooniverse series.


No, the second Zooniverse project was not an attempt to pull an Ocean’s Eleven in Las Vegas. One of the most attractive features of many microtasking platforms such as Zooniverse is quality control. Think of slot machines. The only way to win big is by having three matching figures, such as the three yellow bells in the picture above (righthand side). Hit the jackpot and the coins will flow. Get two out of three matching figures (lefthand side), and some slot machines may toss you a few coins for your efforts. Microtasking uses the same approach. Only if three participants tag the same picture of a galaxy as being a spiral galaxy does that data point count. (Of course, you could decide to change the requirement from 3 volunteers to 5 or even 20 volunteers.) This important feature allows microtasking initiatives to ensure a high standard of data quality, which may explain why many Zooniverse projects have resulted in major scientific breakthroughs over the years.
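In code, this agreement rule is little more than a vote count. Here is a minimal, hypothetical sketch (the function name and default threshold are mine, not Zooniverse’s): a label only counts once the required number of volunteers have submitted it.

```python
from collections import Counter

def consensus_label(tags, required_agreement=3):
    """Return a label once `required_agreement` volunteers have chosen the
    same one for an image; otherwise return None and keep collecting tags."""
    if not tags:
        return None
    label, votes = Counter(tags).most_common(1)[0]
    return label if votes >= required_agreement else None

# Three matching "spiral" tags hit the jackpot; a 2-1 split does not.
print(consensus_label(["spiral", "spiral", "spiral"]))      # -> spiral
print(consensus_label(["spiral", "elliptical", "spiral"]))  # -> None
```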

The Zooniverse team is currently running 15 projects, with several more in the works. One of the most recent Zooniverse deployments, Planet Four, received some 15,000 visitors within the first 60 seconds of being announced on BBC TV. Guess how many weeks it took for volunteers to tag over 2,000,000 satellite images of Mars? A total of 0.286 weeks, i.e., forty-eight hours! Since then, close to 70,000 volunteers have tagged and traced well over 6 million Martian “dunes.” For their Andromeda Project, digital volunteers classified over 7,500 star clusters per hour, even though there was no media or press announcement—just one newsletter sent to volunteers. Zooniverse deployments also involve tagging earth-based pictures (in contrast to telescope imagery). Take the Snapshot Serengeti deployment, which invited volunteers to classify animals using photographs taken by 225 motion-sensor cameras in Tanzania’s Serengeti National Park. Volunteers swarmed this project to the point that there are no longer any pictures left to tag! So Zooniverse is eagerly waiting for new images to be taken in the Serengeti and sent over.


One of my favorite Zooniverse features is Talk, an online discussion tool used across all projects to provide a real-time interface for volunteers and coordinators, which also facilitates the rapid discovery of important features in the data. This also allows for socializing, which I’ve found to be particularly important with digital humanitarian deployments (such as these). One other major advantage of citizen science platforms like Zooniverse is that they are very easy to use and therefore do not require extensive prior training (think slot machines). Plus, participants get to learn about new fields of science in the process. So all in all, Zooniverse makes for a great date, which is why I recently reached out to the team behind this citizen science wizardry. Would they be interested in going out (on a limb) to explore some humanitarian (and development) use cases? “Why yes!” they said.

Microtasking platforms have already been used in disaster response: MapMill during Hurricane Sandy, Tomnod during the Somali Crisis and CrowdCrafting during Typhoon Pablo. So teaming up with Zooniverse makes a whole lot of sense. Their microtasking software is the most scalable I’ve come across yet, it is open source, and their 800,000-strong volunteer base is simply unparalleled. If Zooniverse volunteers can classify 2 million satellite images of Mars in 48 hours, then surely they can do the same for satellite images of disaster-affected areas on Earth. Volunteers responding to Sandy created some 80,000 assessments of infrastructure damage during the first 48 hours alone; at the Planet Four rate, that workload would have taken Zooniverse volunteers less than two hours. Of course, the fact that the hurricane affected New York City and the East Coast meant that many US-based volunteers rallied to the cause, which may explain why it only took 20 minutes to tag the first batch of 400 pictures. What if the hurricane had hit a Caribbean island instead? Would the surge of volunteers have been as high? Might Zooniverse’s 800,000+ standby volunteers also be an asset in this respect?
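To make that comparison concrete, here is the quick arithmetic behind the claim, using only the figures cited above:

```python
# Throughput comparison using the numbers cited above.
planet_four_rate = 2_000_000 / 48   # images classified per hour
sandy_assessments = 80_000          # damage assessments in the first 48 hours

hours_needed = sandy_assessments / planet_four_rate
print(f"~{planet_four_rate:,.0f} images/hour -> {hours_needed:.1f} hours for the Sandy workload")
# -> ~41,667 images/hour -> 1.9 hours for the Sandy workload
```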


Clearly, there is huge potential here, and not only vis-a-vis humanitarian use cases but development ones as well. This is precisely why I’ve already organized and coordinated a number of calls with Zooniverse and various humanitarian and development organizations. As I’ve been telling my colleagues at the United Nations, World Bank and Humanitarian OpenStreetMap, Zooniverse is the Ferrari of microtasking, so it would be such a big shame if we didn’t take it out for a spin… you know, just a quick test drive through the rugged terrain of humanitarian response, disaster preparedness and international development.


Postscript: As some iRevolution readers may know, I am also collaborating with the outstanding team at CrowdCrafting, who have also developed a free & open-source microtasking platform for citizen science projects (also used for disaster response here). I see Zooniverse and CrowdCrafting as highly synergistic and complementary. Because CrowdCrafting is still in its early stages, they fill a very important gap found at the long tail. In contrast, Zooniverse has already been around for half a decade and caters to very high volume and high profile citizen science projects. This explains why we’ll all be getting on a call in the very near future.

A Research Framework for Next Generation Humanitarian Technology and Innovation

Humanitarian donors and organizations are increasingly championing innovation and the use of new technologies for humanitarian response. DfID, for example, is committed to using “innovative techniques and technologies more routinely in humanitarian response” (2011). In a more recent strategy paper, DfID confirmed that it would “continue to invest in new technologies” (2012). ALNAP’s important report on “The State of the Humanitarian System” documents the shift towards greater innovation, “with new funds and mechanisms designed to study and support innovation in humanitarian programming” (2012). A forthcoming landmark study by OCHA makes the strongest case yet for the use and early adoption of new technologies for humanitarian response (2013).


These strategic policy documents are game-changers and pivotal to ushering in the next wave of humanitarian technology and innovation. That said, the reports are limited by the very fact that the authors are humanitarian professionals and thus not necessarily familiar with the field of advanced computing. The purpose of this post is therefore to set out a more detailed research framework for next generation humanitarian technology and innovation—one with a strong focus on information systems for crisis response and management.

In 2010, I wrote this piece on “The Humanitarian-Technology Divide and What To Do About It.” This divide became increasingly clear to me when I co-founded and co-directed the Harvard Humanitarian Initiative’s (HHI) Program on Crisis Mapping & Early Warning (2007-2009). So I co-founded the annual International CrisisMappers Conference series in 2009 and have continued to co-organize this unique, cross-disciplinary forum on humanitarian technology. The CrisisMappers Network also plays an important role in bridging the humanitarian and technology divide. My decision to join Ushahidi as Director of Crisis Mapping (2009-2012) was a strategic move to continue bridging the divide—and to do so from the technology side this time.

The same is true of my move to the Qatar Computing Research Institute (QCRI) at the Qatar Foundation. My experience at Ushahidi made me realize that serious expertise in Data Science is required to tackle the major challenges appearing on the horizon of humanitarian technology. Indeed, the key words missing from the DfID, ALNAP and OCHA innovation reports include: Data Science, Big Data Analytics, Artificial Intelligence, Machine Learning, Machine Translation and Human Computing. This divide between the humanitarian and data science communities needs to be bridged, which is precisely why I joined the Qatar Computing Research Institute as Director of Innovation: to develop and prototype the next generation of humanitarian technologies by working directly with experts in Data Science and Advanced Computing.


My efforts to bridge these communities also explains why I am co-organizing this year’s Workshop on “Social Web for Disaster Management” at the 2013 World Wide Web conference (WWW13). The WWW event series is one of the most prestigious conferences in the field of Advanced Computing. I have found that experts in this field are very interested and highly motivated to work on humanitarian technology challenges and crisis computing problems. As one of them recently told me: “We simply don’t know what projects or questions to prioritize or work on. We want questions, preferably hard questions, please!”

Yet the humanitarian innovation and technology reports cited above overlook the field of advanced computing. Their policy recommendations vis-a-vis future information systems for crisis response and management are vague at best. At the same time, one of the major challenges that the humanitarian sector faces is the rise of Big (Crisis) Data. I have already discussed this here, here and here, for example. The humanitarian community is woefully unprepared to deal with this tidal wave of user-generated crisis information. There are already more mobile phone subscriptions than people in 100+ countries. And fully 50% of the world’s population in developing countries will be using the Internet within the next 20 months—the current figure is 24%. Meanwhile, close to 250 million people were affected by disasters in 2010 alone. Since then, the number of new mobile phone subscriptions has increased by well over one billion, which means that disaster-affected communities today are increasingly likely to be digital communities as well.

In the Philippines, a country highly prone to “natural” disasters, 92% of Filipinos who access the web use Facebook. In early 2012, Filipinos sent an average of 2 billion text messages every day. When disaster strikes, some of these messages will contain information critical for situational awareness & rapid needs assessment. The innovation reports by DfID, ALNAP and OCHA emphasize time and time again that listening to local communities is a humanitarian imperative. As DfID notes, “there is a strong need to systematically involve beneficiaries in the collection and use of data to inform decision making. Currently the people directly affected by crises do not routinely have a voice, which makes it difficult for their needs to be effectively addressed” (2012). But how exactly should we listen to millions of voices at once, let alone manage, verify and respond to these voices with potentially life-saving information? Over 20 million tweets were posted during Hurricane Sandy. In Japan, over half-a-million new users joined Twitter the day after the 2011 Earthquake. More than 177 million tweets about the disaster were posted that same day, i.e., 2,000 tweets per second on average.


Of course, the volume and velocity of crisis information will vary from country to country and disaster to disaster. But the majority of humanitarian organizations do not have the technologies in place to handle smaller tidal waves either. Take the case of the recent Typhoon in the Philippines, for example. OCHA activated the Digital Humanitarian Network (DHN), asking them to carry out a rapid damage assessment by analyzing the 20,000 tweets posted during the first 48 hours of Typhoon Pablo. In fact, one of the main reasons digital volunteer networks like the DHN and the Standby Volunteer Task Force (SBTF) exist is to provide humanitarian organizations with this kind of skilled surge capacity. But analyzing 20,000 tweets in 12 hours (mostly manually) is one thing; analyzing 20 million requires more than a few hundred dedicated volunteers. What’s more, we do not have the luxury of months to carry out this analysis. Access to information is as important as access to food; and like food, information has a sell-by date.

We clearly need a research agenda to guide the development of next generation humanitarian technology. One such framework is proposed here. The Big (Crisis) Data challenge is composed of (at least) two major problems: (1) finding the needle in the haystack; (2) assessing the accuracy of that needle. In other words, identifying the signal in the noise and determining whether that signal is accurate. Both of these challenges are exacerbated by serious time constraints. There are (at least) two ways to manage the Big Data challenge in real or near real-time: Human Computing and Artificial Intelligence. We know about these solutions because they have already been developed and used by other sectors and disciplines for several years now. In other words, our information problems are hardly as unique as we might think. Hence the importance of bridging the humanitarian and data science communities.

In sum, the Big Crisis Data challenge can be addressed using Human Computing (HC) and/or Artificial Intelligence (AI). Human Computing includes crowdsourcing and microtasking. AI includes natural language processing and machine learning. A framework for next generation humanitarian technology and innovation must thus promote Research and Development (R&D) that applies these methodologies to humanitarian response. For example, Verily is a project that leverages HC for the verification of crowdsourced social media content generated during crises. In contrast, this here is an example of an AI approach to verification. The Standby Volunteer Task Force (SBTF) has used HC (microtasking) to analyze satellite imagery (Big Data) for humanitarian response. Another novel HC approach to managing Big Data is the use of gaming, something called Playsourcing. AI for Disaster Response (AIDR) is an example of AI applied to humanitarian response. In many ways, though, AIDR combines AI with Human Computing, as does MatchApp. Such hybrid solutions should also be promoted as part of the R&D framework on next generation humanitarian technology.
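To give a flavor of what such a hybrid HC+AI pipeline can look like, here is a minimal, hypothetical sketch: a classifier labels tweets automatically when it is confident enough and routes everything else to a human microtasking queue. The stand-in classifier, the threshold and the function names are my own illustration, not AIDR’s actual architecture.

```python
def classify_with_confidence(tweet):
    """Hypothetical stand-in for a trained ML classifier: returns a
    (label, confidence) pair. A real system would use a model trained
    on labeled crisis tweets."""
    if "flooded" in tweet.lower() or "bridge down" in tweet.lower():
        return "infrastructure_damage", 0.95
    return "not_relevant", 0.55

def triage(tweets, threshold=0.9):
    """Keep the machine label when the classifier is confident enough;
    otherwise queue the tweet for human microtasking."""
    auto_labeled, human_queue = [], []
    for tweet in tweets:
        label, confidence = classify_with_confidence(tweet)
        if confidence >= threshold:
            auto_labeled.append((tweet, label))
        else:
            human_queue.append(tweet)  # e.g. push to a microtasking platform
    return auto_labeled, human_queue

auto, humans = triage(["Main street is flooded near the school",
                       "Thinking of everyone affected tonight"])
print(len(auto), "labeled automatically;", len(humans), "sent to volunteers")
```

The general logic of combining AI with Human Computing is that the human-verified labels can then feed back into the classifier, so the machine gradually takes over more of the load.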

There is of course more to humanitarian technology than information management alone. Related is the topic of Data Visualization, for example. There are also exciting innovations and developments in the use of drones or Unmanned Aerial Vehicles (UAVs), meshed mobile communication networks, hyper low-cost satellites, etc. I am particularly interested in each of these areas and will continue to blog about them. In the meantime, I very much welcome feedback on this post’s proposed research framework for humanitarian technology and innovation.


Crowdsourcing the Evaluation of Post-Sandy Building Damage Using Aerial Imagery

Update (Nov 2): 5,739 aerial images tagged by over 3,000 volunteers. Please keep up the outstanding work!

My colleague Schuyler Erle from Humanitarian OpenStreetMap just launched a very interesting effort in response to Hurricane Sandy. He shared the info below via CrisisMappers earlier this morning, which I’m turning into this blog post to help him recruit more volunteers.

Schuyler and team just got their hands on the Civil Air Patrol’s (CAP) super high resolution aerial imagery of the disaster-affected areas. They’ve imported this imagery into MapMill, the microtasking server created by Jeff Warren, and are now asking volunteers to help tag the images in terms of the damage depicted in each photo. “The 531 images on the site were taken from the air by CAP over New York, New Jersey, Rhode Island, and Massachusetts on 31 Oct 2012.”

To access this platform, simply click here: http://sandy.hotosm.org. If that link doesn’t work, please try sandy.locative.us.

“For each photo shown, please select ‘ok’ if no building or infrastructure damage is evident; please select ‘not ok’ if some damage or flooding is evident; and please select ‘bad’ if buildings etc. seem to be significantly damaged or underwater. Our *hope* is that the aggregation of the ok/not ok/bad ratings can be used to help guide FEMA resource deployment, or so was indicated might be the case during RELIEF at Camp Roberts this summer.”

A disaster response professional working in the affected areas for FEMA replied (via CrisisMappers) to Schuyler’s efforts to confirm that:

“[G]overnment agencies are working on exploiting satellite imagery for damage assessments and flood extents. The best way that you can help is to help categorize photos using the tool Schuyler provides […].  CAP imagery is critical to our decision making as they are able to work around some of the limitations with satellite imagery so that we can get an area of where the worst damage is. Due to the size of this event there is an overwhelming amount of imagery coming in, your assistance will be greatly appreciated and truly aid in response efforts.  Thank you all for your willingness to help.”

Schuyler notes that volunteers can click on the Grid link from the home page of the Micro-Tasking platform to “zoom in to the coastlines of Massachusetts or New Jersey” and see “judgements about building damages beginning to aggregate in US National Grid cells, which is what FEMA use operationally. Again, the idea and intention is that, as volunteers judge the level of damage evident in each photo, the heat map will change color and indicate at a glance where the worst damage has occurred.” See above screenshot.
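For readers curious about the mechanics, here is a hypothetical sketch of how such ok / not ok / bad ratings might be rolled up into grid cells to drive a heat map. The severity weights, the example cell IDs and the function name are my own illustration, not MapMill’s actual implementation.

```python
from collections import defaultdict

# Assumed numeric weights for the three rating buttons (my choice, not MapMill's).
SEVERITY = {"ok": 0, "not ok": 1, "bad": 2}

def grid_heatmap(ratings):
    """Average volunteer ratings per US National Grid cell.
    `ratings` is an iterable of (grid_cell_id, rating) pairs."""
    cells = defaultdict(list)
    for cell, rating in ratings:
        cells[cell].append(SEVERITY[rating])
    return {cell: sum(scores) / len(scores) for cell, scores in cells.items()}

# Hypothetical cell IDs; higher averages would render as "hotter" cells on the map.
print(grid_heatmap([("18TWL85", "bad"), ("18TWL85", "not ok"), ("18TWL86", "ok")]))
# -> {'18TWL85': 1.5, '18TWL86': 0.0}
```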

Even if you just spend 5 or 10 minutes tagging the imagery, this will still go a long way to supporting FEMA’s response efforts. You can also help by spreading the word and recruiting others to your cause. Thank you!

The Best Way to Crowdsource Satellite Imagery Analysis for Disaster Response

My colleague Kirk Morris recently pointed me to this very neat study on iterative versus parallel models of crowdsourcing for the analysis of satellite imagery. The study was carried out by French researcher & engineer Nicolas Maisonneuve for the upcoming GIScience 2012 conference.

Nicolas finds that after reaching a certain threshold, adding more volunteers to the parallel model does “not change the representativeness of opinion and thus will not change the consensual output.” His analysis also shows that the value of this threshold has a significant impact on the resulting quality of the parallel work and should thus be chosen carefully. In terms of the iterative approach, Nicolas finds that “the first iterations have a high impact on the final results due to a path dependency effect.” To this end, “stronger commitment during the first steps are thus a primary concern for using such model,” which means that “asking expert/committed users to start” is important.

Nicolas’s study also reveals that the parallel approach is better able to correct wrong annotations (wrong analyses of the satellite imagery) than the iterative model for images that are fairly straightforward to interpret. In contrast, the iterative model is better suited for handling more ambiguous imagery. But there is a catch: the potential path dependency effect in the iterative model means that “mistakes could be propagated, generating more easily type I errors as the iterations proceed.” In terms of spatial coverage, the iterative model is more efficient since the parallel model leverages redundancy to ensure data quality. Still, Nicolas concludes that the “parallel model provides an output which is more reliable than that of a basic iterative [because] the latter is sensitive to vandalism or knowledge destruction.”

So the question that naturally follows is this: how can the parallel and iterative methodologies be combined to produce a better overall result? Perhaps the parallel approach could be used as the default to begin with. Images that are considered difficult to interpret would then get pushed from the parallel workflow to the iterative workflow, where they would first be processed by experts in order to create favorable path dependency. Could this hybrid approach be the winning strategy?
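As a thought experiment, here is a minimal, hypothetical sketch of that hybrid routing rule; the vote counts, the agreement threshold and the function name are my assumptions, not anything proposed in Nicolas’s paper.

```python
from collections import Counter

def route_image(votes, min_votes=3, min_agreement=0.75):
    """Decide whether an image stays in the parallel (redundant voting)
    workflow or gets escalated to an expert-seeded iterative workflow.
    `votes` is the list of annotations collected so far for one image."""
    if len(votes) < min_votes:
        return "parallel"    # keep collecting redundant votes
    top_label, top_count = Counter(votes).most_common(1)[0]
    if top_count / len(votes) >= min_agreement:
        return "accept"      # clear consensus: keep the majority label
    return "iterative"       # ambiguous image: hand off to experts first

print(route_image(["damaged", "damaged", "damaged"]))            # -> accept
print(route_image(["damaged", "intact", "flooded", "damaged"]))  # -> iterative
```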

How Can Innovative Technology Make Conflict Prevention More Effective?

I’ve been asked to participate in an expert working group in support of a research project launched by the International Peace Institute (IPI) on new technologies for conflict prevention. Both UNDP and USAID are also partners in this effort. To this end, I’ve been invited to make some introductory remarks during our upcoming working group meeting. The purpose of this blog post is to share my preliminary thoughts on this research and provide some initial suggestions.

Before I launch into said thoughts, some context may be in order. I spent several years studying, launching and improving conflict early warning systems for violence prevention. While I haven’t recently blogged about conflict prevention on iRevolution, you’ll find my writings on this topic posted on my other blog, Conflict Early Warning. I have also published and presented several papers on conflict prevention, most of which are available here. The most relevant ones include the following:

  • Meier, Patrick. 2011. Early Warning Systems and the Prevention of Violent Conflict. In Peacebuilding in the Information Age: Sifting Hype from Reality, ed. Daniel Stauffacher et al. Geneva: ICT4Peace. Available online.
  • Leaning, Jennifer and Patrick Meier. 2009. “The Untapped Potential of Information Communication Technology for Conflict Early Warning and Crisis Mapping,” Working Paper Series, Harvard Humanitarian Initiative (HHI), Harvard University. Available online.
  • Leaning, Jennifer and Patrick Meier. 2008. “Community Based Conflict Early Warning and Response Systems: Opportunities and Challenges.” Working Paper Series, Harvard Humanitarian Initiative (HHI), Harvard University. Available online.
  • Leaning, Jennifer and Patrick Meier. 2008. “Conflict Early Warning and Response: A Critical Reassessment.” Working Paper Series, Harvard Humanitarian Initiative (HHI), Harvard University. Available online.
  • Meier, Patrick. 2008. “Upgrading the Role of Information Communication Technology (ICT) for Tactical Early Warning/Response.” Paper prepared for the 49th Annual Convention of the International Studies Association (ISA) in San Francisco. Available online.
  • Meier, Patrick. 2007. “New Strategies for Effective Early Response: Insights from Complexity Science.” Paper prepared for the 48th Annual Convention of the International Studies Association (ISA) in Chicago. Available online.
  • Campbell, Susanna and Patrick Meier. 2007. “Deciding to Prevent Violent Conflict: Early Warning and Decision-Making at the United Nations.” Paper prepared for the 48th Annual Convention of the International Studies Association (ISA) in Chicago. Available online.
  • Meier, Patrick. 2007. From Disaster to Conflict Early Warning: A People-Centred Approach. Monday Developments 25, no. 4, 12-14. Available online.
  • Meier, Patrick. 2006. “Early Warning for Cowardly Lions: Response in Disaster & Conflict Early Warning Systems.” Unpublished academic paper, The Fletcher School. Available online.
  • I was also invited to be an official reviewer of this 100+ page workshop summary on “Communication and Technology for Violence Prevention” (PDF), which was just published by the National Academy of Sciences. In addition, I was an official referee for this important OECD report on “Preventing Violence, War and State Collapse: The Future of Conflict Early Warning and Response.”

An obvious first step for IPI’s research would be to identify the conceptual touchpoints between the individual functions or components of conflict early warning systems and information & communication technology (ICT). Using this conceptual framework put forward by ISDR would be a good place to start.

That said, colleagues at IPI should take care not to fall prey to technological determinism. The first order of business should be to understand exactly why previous (and existing) conflict early warning systems are complete failures—a topic I have written extensively about and been particularly vocal on since 2004. Throwing innovative technology at failed systems will not turn them into successful operations. Furthermore, IPI should also take note of the relatively new discourse on people-centered approaches to early warning and distinguish between first, second, third and fourth generation conflict early warning systems.

On this note, IPI ought to focus in particular on third and fourth generation systems vis-a-vis the role of innovative technology. Why? Because first and second generation systems are structured for failure due to constraints explained by organizational theory. They should thus explore the critical importance of conflict preparedness and the role that technology can play in this respect, since preparedness is key to the success of third and fourth generation systems. In addition, IPI should consider the implications of crowdsourcing, crisis mapping, Big Data and satellite imagery, as well as the role that social media analytics might play in the early detection of and response to violent conflict. They should also take care not to ignore critical insights from the field of nonviolent civil resistance vis-a-vis preparedness and tactical approaches to community-based early response. Finally, they should take note of new and experimental initiatives in this space, such as PeaceTXT.

IPI plans to write up several case studies on conflict early warning systems to understand how innovative technologies might make (or are already making) these systems more effective. I would recommend focusing on specific systems in Kenya, Kyrgyzstan, Sri Lanka and Timor-Leste. Note that some community-based systems are too sensitive to make public, such as one in Burma, for example. In terms of additional experts worth consulting, I would recommend David Nyheim, Joe Bock, Maria Stephan, Sanjana Hattotuwa, Scott Edwards and Casey Barrs. I would also shy away from inviting too many academics or technology companies. The former tend to focus too much on theory while the latter often have a singular focus on technology.

Many thanks to UNDP for including me in the team of experts. I look forward to the first working group meeting and reviewing IPI’s early drafts. In the meantime, if iRevolution readers have certain examples or questions they’d like me to relay to the working group, please do let me know via the comments section below and I’ll be sure to share.

Predicting the Future of Global Geospatial Information Management

The United Nations Committee of Experts on Global Geospatial Information Management (GGIM) recently organized a meeting of thought-leaders and visionaries in the geospatial world to identify the future of this space over the next 5-10 years. These experts came up with some 80+ individual predictions. I’ve included some of the more interesting ones below.

  • The use of Unmanned Aerial Vehicles (UAVs) as a tool for rapid geospatial data collection will increase.
  • 3D and even 4D geospatial information, incorporating time as the fourth dimension, will increase.
  • Technology will move faster than legal and governance structures.
  • The link between geospatial information and social media, plus other actor networks, will become more and more important.
  • Real-time info will enable more dynamic modeling & response to disasters.
  • Free and open source software will continue to grow as viable alternatives both in terms of software, and potentially in analysis and processing.
  • Geospatial computation will increasingly be non-human consumable in nature, with an increase in fully-automated decision systems.
  • Businesses and Governments will increasingly invest in tools and resources to manage Big Data. The technologies required for this will enable greater use of raw data feeds from sensors and other sources of data.
  • In ten years time it is likely that all smart phones will be able to film 360 degree 3D video at incredibly high resolution by today’s standards & wirelessly stream it in real time.
  • There will be a need for geospatial use governance in order to discern the real world from the virtual/modelled world in a 3D geospatial environment.
  • Free and open access to data will become the norm and geospatial information will increasingly be seen as an essential public good.
  • Funding models to ensure full data coverage even in non-profitable areas will continue to be a challenge.
  • Rapid growth will lead to confusion and lack of clarity over data ownership, distribution rights, liabilities and other aspects.
  • In ten years, there will be a clear dividing line between winning and losing nations, dependent upon whether the appropriate legal and policy frameworks have been developed that enable a location-enabled society to flourish.
  • Some governments will use geospatial technology as a means to monitor or restrict the movements and personal interactions of their citizens. Individuals in these countries may be unwilling to use LBS or applications that require location for fear of this information being shared with authorities.
  • The deployment of sensors and the broader use of geospatial data within society will force public policy and law to move into a direction to protect the interests and rights of the people.
  • Spatial literacy will not be about learning GIS in schools but will be more centered on increasing spatial awareness and an understanding of the value of understanding place as context.
  • The role of National Mapping Agencies as an authoritative supplier of high quality data and of arbitrator of other geospatial data sources will continue to be crucial.
  • Monopolies held by National Mapping Agencies in some areas of specialized spatial data will be eroded completely.
  • More activities carried out by National Mapping Agencies will be outsourced and crowdsourced.
  • Crowdsourced data will push National Mapping Agencies towards niche markets.
  • National Mapping Agencies will be required to find new business models to provide simplified licenses and meet the demands for more free data from mapping agencies.
  • The integration of crowdsourced data with government data will increase over the next 5 to 10 years.
  • Crowdsourced content will decrease cost, improve accuracy and increase availability of rich geospatial information.
  • There will be increased combining of imagery with crowdsourced data to create datasets that could not have been created affordably on their own.
  • Progress will be made on bridging the gap between authoritative data and crowdsourced data, moving towards true collaboration.
  • There will be an accelerated take-up of Volunteer Geographic Information over the next five years.
  • Within five years the level of detail on transport systems within OpenStreetMap will exceed virtually all other data sources & will be respected/used by major organisations & governments across the globe.
  • Community-based mapping will continue to grow.
  • There is unlikely to be a market for datasets like those currently sold to power navigation and location-based services solutions in 5 years, as they will have been superseded by crowdsourced datasets from OpenStreetMap or other comparable initiatives.

Which trends have the experts missed? Do you think they’re completely off on any of the above? The full set of predictions on the future of global geospatial information management is available here as a PDF.