Aerial Imagery Analysis: Combining Crowdsourcing and Artificial Intelligence

MicroMappers combines crowdsourcing and artificial intelligence to make sense of “Big Data” for Social Good. Why artificial intelligence (AI)? Because regular crowdsourcing alone is no match for Big Data. The MicroMappers platform can already be used to crowdsource the search for relevant tweets as well as pictures, videos, text messages, aerial imagery and soon satellite imagery. The next step is therefore to add artificial intelligence to this crowdsourced filtering platform. We have already done this with tweets and SMS. So we’re now turning our attention to aerial and satellite imagery.

Our very first deployment of MicroMappers for aerial imagery analysis was in Africa for this wildlife protection project. We crowdsourced the search for wild animals in partnership with rangers from the Kuzikus Wildlife Reserve based in Namibia. We were very pleased with the results, and so were the rangers. As one of them noted: “I am impressed with the results. There are at times when the crowd found animals that I had missed!” We were also pleased that our efforts caught the attention of CNN. As noted in that CNN report, our plan for this pilot was to use crowdsourcing to find the wildlife and to then combine the results with artificial intelligence to develop a set of algorithms that can automatically find wild animals in the future.

To do this, we partnered with a wonderful team of graduate students at EPFL, the well-known polytechnic in Lausanne, Switzerland. While these students were pressed for time due to a number of deadlines, they were nevertheless able to deliver some interesting results. Their applied computer vision research is particularly useful given our ultimate aim: to create an algorithm that can learn to detect features of interest in aerial and satellite imagery in near real-time (as we’re interested in applying this to disaster response and other time-sensitive events). For now, however, we need to walk before we can run. This means carrying out the tasks of crowdsourcing and artificial intelligence in two (not-yet-integrated) steps.

As the EPFL students rightly note in their preliminary study, the use of thermal imaging (heat detection) to automatically identify wildlife in the bush is somewhat problematic since “the temperature difference between animals and ground is much lower in savannah […].” This explains why the research team used the results of our crowdsourcing efforts instead. More specifically, they focused on automatically detecting the shadows of gazelles and ostriches by using an object-based support vector machine (SVM). The whole process is summarized below.

[Image: overview of the object-based SVM classification process]
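To make the object-based SVM step more concrete, here is a minimal sketch of what such a classifier can look like in Python with scikit-learn. This is not the EPFL team’s actual pipeline: the feature choices (object area, elongation, mean brightness) and the tiny set of crowd-labeled examples are purely illustrative assumptions.

```python
# Minimal sketch of an object-based SVM classifier for aerial imagery.
# NOT the EPFL pipeline: features and labels below are illustrative only.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Each candidate object (e.g. a segmented shadow) is described by a few
# simple shape/intensity features computed from the image segment.
# Hypothetical columns: area_px, elongation, mean_brightness
X_train = np.array([
    [120, 3.1, 0.22],   # crowd-labeled gazelle shadow
    [ 95, 2.8, 0.25],   # crowd-labeled gazelle shadow
    [400, 1.2, 0.60],   # bush / background object
    [ 30, 1.0, 0.55],   # small rock
])
y_train = np.array([1, 1, 0, 0])  # 1 = gazelle, 0 = not a gazelle

# Standardize the features, then fit the support vector machine.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)

# Score new candidate objects extracted from fresh aerial imagery.
X_new = np.array([[110, 3.0, 0.24], [350, 1.1, 0.58]])
print(clf.predict(X_new))  # objects predicted as gazelles get flagged for review
```

The practical point is the one the students make: a classifier like this shrinks the pile of candidate objects that volunteers need to assess manually.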

The above method produces results like the one below. The circles represent the objects used to train the machine learning classifier. The discerning reader will note that the algorithm correctly identified all the gazelles save for one instance in which two gazelles standing close together were identified as a single gazelle. No other objects were mislabeled as gazelles; in other words, EPFL’s gazelle algorithm is very accurate. “Hence the classifier could be used to reduce the number of objects to assess manually and make the search for gazelles faster.” Ostriches, on the other hand, proved more difficult to detect automatically, but the students are convinced that this could be improved with more time.

[Image: classification results, with circles marking the objects used to train the classifier]

In conclusion, more work certainly needs to be done, but I am pleased by these preliminary and encouraging results. In addition, the students at EPFL kindly shared some concrete features that we can implement on the MicroMappers side to improve the crowdsourced results for the purposes of developing automated algorithms in the future. So a big thank you to Briant, Millet and Rey for taking the time to carry out the above research. My team and I at QCRI very much look forward to continuing our collaboration with them and colleagues at EPFL.

In the meantime, more on all this in my new book, Digital Humanitarians: How Big Data is Changing the Face of Humanitarian Response, which has already been endorsed by faculty at Harvard, MIT, Stanford and Oxford, and by experts at the UN, World Bank, Red Cross and Twitter, among others.

Drones for Good Make History in Dubai

We’ve just wrapped up an incredible week at the first ever Drones for Good Challenge. Not only was this the first event of its kind in Dubai, it was the first of its kind anywhere in the world. I was thus hugely honored both to keynote this outstanding celebration of technologies for good and to serve on the judging panel for the finalists. Some 800 teams from nearly 60 countries around the world submitted their “Drones for Good” ideas. Only 5 made it to the very final round today. I live-tweeted the event and curated the list of tweets below as a summary (all original tweets available here). My head is still spinning from all the possibilities, ideas and incredible innovators I had the good fortune to meet in person. I’ll absolutely be following up with a number of them for several Humanitarian UAV projects I am working on. In the meantime, huge thanks to the organizing team for their very kind invitation and friendship!

[Curated tweets from the Drones for Good event]

I’m excited to explore these possibilities with a number of key individuals whom I met and spoke with while in Dubai.

Indigenous Community in Guyana Builds Drones for Good

If you find yourself in the middle of the jungle somewhere in South America and come across this indigenous community, then you’re probably in Guyana:

[Photos: the Wapichana team soldering components, practicing on a flight simulator and building their UAV in Guyana]

I’ve been an avid fan of Digital Democracy since 2008 and even had the honor of serving on their Advisory Board during the early days. So I was thrilled when friends Emily Jacobi and Gregor MacLennan told me they were interested in using drones/UAVs for their projects. Six months later, the pictures above explain my excitement.

When Gregor traveled down to Guyana a few months ago, he didn’t bring a drone; he simply brought a bunch of parts and glue, lots of glue. “We didn’t want to just fly into Guyana and fly a drone over the local villages,” writes Gregor. “Our interest was whether this technology could be something that can be used and controlled by the communities themselves, and become a tool of empowerment for helping them have more of a say in their own future. We wanted the Wapichana to be able to repair it themselves, fly it themselves, and process the images to use for their own means.” Oh, and by the way, Gregor had never built a drone before.

And that’s the beauty of Digital Democracy’s approach: co-learning, co-creation and co-experimentation. Moreover, Emily & Gregor didn’t turn to drones simply because they’re the latest fad. They tried using satellite imagery to document illegal logging and deforestation in Guyana, but the resolution of that imagery was too limited. So they figured drones might do the trick instead. Could this technology be a “tool for positive change in the hands of indigenous communities?” Could local communities in Guyana use flying robots to create maps and thus monitor illegal logging and deforestation?

Building the drone was truly a community effort. “When the motor mount broke, the team scoured the village for different types of plastic, and fashioned a new mount from an old beer crate. The drone was no longer a foreign, mysterious piece of technology, but something they owned, built, & therefore understood.” And that is what it’s all about. Check out the neat video above to see the team in action and the 3D results below based on the data collected.

So what’s next? The Wapichana UAV Team have demonstrated “that a remote indigenous community with no prior engineering experience can build and fly a complex drone and make a detailed map.” The team has already been discussing the multiple ways they want to use their UAVs: “to monitor deforestation of bush islands over time; creating high-resolution maps of villages to use as a basis for resource-management discussions; and flying over logging camps in the forest to document illegal deforestation.” You can make sure this happens by donating to the cause (like I just did). That way, Gregor can continue the training and get “the whole team comfortable with flying and to streamline the process from mission planning to processing imagery.”


Meanwhile, back in Congo-Brazzaville, another team was learning about Drones for Good.

Drones for Good: Technology, Social Movements and The State

Discussions surrounding the use of drones, or UAVs, have typically “centered on their use by governments, often for the purpose of surveillance and warfare.” But as colleague Austin Choi-Fitzpatrick rightly notes in his new study, “[t]his focus on the state’s use obscures the opportunity for civil society actors, including social movements, to make use of these technologies.” Austin thus seeks to highlight civil society uses, “ranging from art to digital disruption.” The latter is what I am particularly interested in, given my previous writings on the use of non-lethal UAVs for civil resistance and for peacebuilding.

When I began writing my doctoral dissertation some 7 years ago, scholars and activists were particularly interested in measuring the impact of mobile phones on social movements and civil resistance. Today, civil society is also turning to UAVs as evidenced during the recent protests in Hong Kong, Turkey, Poland, Ukraine and Ferguson. “This innovation represents a technological shift in scale for citizen journalists, human rights advocates, and social movement actors,” writes Austin. “As such, it requires a sophisticated assessment of the ethical issues and policy terrain surrounding its use.”

The most disruptive aspect of today’s small, personal UAVs “is the fundamental break between the camera and the street level. […] The most memorable photographs of violent conflict, social protest and natural disasters have almost all been taken by a person present on the ground. […] UAVs relocate the boundary between what is public and what is private, because camera-equipped UAVs move the line of sight from the street to the air. This simple shift effectively pushes public space from the sidewalk to the stairwell, courtyard, rooftop, and so forth.” As Austin rightly concludes, “‘Open air’ and ‘free space’ are no longer as ‘open’ or ‘free’ as they once were. They are instead now occupied or vulnerable to occupation.” The use of the words “occupied” and “occupation” here is indeed intentional. Austin also makes another crucial point: UAVs represent a type of innovation that is a “hallmark of asymmetrical warfare.”

One of my favorite books, Wasp, illustrates this asymmetry perfectly, as does the Syria Airlift Project. The latter seeks to fly swarms of UAVs to deliver aid to civilians caught in conflict zones. Little surprise, then, that the State is clamping down on civil society uses of UAVs. At times, authorities even shoot the UAVs down, as evidenced when “police in Istanbul shot down a camera-equipped UAV while it was monitoring large anti-government protests […].” Authorities would not be shooting down UAVs if they did not pose some form of (real or imagined) threat. And even when they pose no direct threat, UAVs are clearly annoying enough to react to (like a wasp or a persistent mosquito). Annoyance is a key tactic in both guerrilla warfare and civil resistance.

Austin goes on to propose a “broad framework to guide a range of non-state and non-commercial actor uses of drones.” This framework comprises the following six principles:

1. Subsidiarity: decision-making and problem solving should occur at the lowest and least sophisticated level possible. I take this to mean that decisions surrounding the use of drones should be taken at the local level (implying local ownership) and that drones should “only be used to address situations for which there is not a less sophisticated, invasive, or novel use.”

2. Physical and material security: self-explanatory – “care must be taken so that these devices do not collide with people or with one another.”

3. Do no harm: emphasizes a “rights-based approach as found in the development and humanitarian aid communities.” “The principle is one of proportionality, in which the question to be answered is, ‘Are the risks of using UAVs in a given humanitarian setting outweighed by the expected benefits?'”

4. Public interest: also self-explanatory but “especially sensitive to the importance of investigative journalism that holds to account the powerful and well-resourced, despite attempts by established interests to discredit these efforts.” Public interest should also include the interests of the local community.

5. Privacy: a straightforward issue but not easily resolved: “creating a [privacy] framework that applies in all circumstances is nearly impossible in an era in which digital privacy appears to be a mirage […].”

6. Data protection: of paramount importance. Aerial footage of protests can be used by governments to “create a database of known activists.” As such, “[c]ontext specific protocols must ensure the security of data, thereby protecting against physical or digital theft or corruption.”

Are there other principles that should factor into the “Drones for Good” framework? If so, what are they? I’ll also be asking these questions in Dubai this week, where I’m speaking at the Drones for Good Festival.

Video: Digital Humanitarians & Next Generation Humanitarian Technology

How do international humanitarian organizations make sense of the “Big Data” generated during major disasters? They turn to Digital Humanitarians who craft and leverage ingenious crowdsourcing solutions with trail-blazing insights from artificial intelligence to make sense of vast volumes of social media, satellite imagery and even UAV/aerial imagery. They also use these “Big Data” solutions to verify user-generated content and counter rumors during disasters. The talk below explains how Digital Humanitarians do this and how their next generation humanitarian technologies work.

Many thanks to TTI/Vanguard for having invited me to speak. Lots more on Digital Humanitarians in my new book of the same title.

Videos of my TEDx talks and the talks I’ve given at the White House, PopTech, Where 2.0, National Geographic, etc., are all available here.

Reflections on Digital Humanitarians – The Book

In January 2014, I wrote this blog post announcing my intention to write a book on Digital Humanitarians. Well, it’s done! And it launches this week. The book has already been endorsed by scholars at Harvard, MIT, Stanford and Oxford; by practitioners at the United Nations, World Bank, Red Cross, USAID and DfID; and by others including Twitter and National Geographic. These and many more endorsements are available here. Brief summaries of each book chapter are available here, and the short video below provides an excellent overview of the topics covered in the book. Together, these overviews make it clear that this book is directly relevant to many other fields, including journalism, human rights, development, activism, business management, computing, ethics, social science and data science. In short, the lessons that digital humanitarians have learned (often the hard way) over the years and the important insights they have gained are directly applicable to fields well beyond the humanitarian space. To this end, Digital Humanitarians is written in a “narrative and conversational style” rather than with dense, technical language.

The story of digital humanitarians is a multifaceted one. Theirs is not just a story about using new technologies to make sense of “Big Data”. For the most part, digital humanitarians are volunteers; volunteers from all walks of life who occupy every time zone. Many are very tech-savvy and pull all-nighters, but most simply want to make a difference using the few minutes they have with the digital technologies already at their fingertips. Digital humanitarians also include pro-democracy activists who live in countries ruled by tyrants. This story is thus also about hope and humanity; about how technology can extend our humanity during crises. To be sure, if no one cared, if no one felt compelled to help others in need or to change the status quo, then no one would even bother to use these new, next generation humanitarian technologies in the first place.

I believe this explains why Professor Leysia Palen included the following in her very kind review of my book: “I dare you to read this book and not have both your heart and mind opened.” As I reflected to my editor while in the midst of writing, an alternative tag line for the title could very well be “How Big Data and Big Hearts are Changing the Face of Humanitarian Response.” It is personally and deeply important to me that the media, would-be volunteers and others also understand that the digital humanitarians story is not a romanticized story about a few “lone heroes” who accomplish the impossible thanks to their superhuman technical powers. There are thousands upon thousands of largely anonymous digital volunteers from all around the world who make this story possible. And while we may not know all their names, we certainly do know about their tireless collective action efforts; they mobilize online from all corners of our Blue Planet to support humanitarian efforts. My book explains how these digital volunteers do this, and yes, how you can too.

Digital humanitarians also include a small (but growing) number of forward-thinking professionals from large and well-known humanitarian organizations. After the tragic, nightmarish earthquake that struck Haiti in January 2010, these seasoned and open-minded humanitarians quickly realized that making sense of “Big Data” during future disasters would require new thinking, new risk-taking, new partnerships and next generation humanitarian technologies. This story thus includes the invaluable contributions of these change-agents and explains how a few individuals are enabling innovation within the large bureaucracies they work in. The story would be incomplete without them; without their appetite for risk-taking and their strategic understanding of how to change (and at times circumvent) established systems from the inside to keep their organizations relevant in a hyper-connected world. This may explain why Tarun Sarwal of the International Committee of the Red Cross (ICRC) in Geneva included these words (of warning) in his kind review: “For anyone in the Humanitarian sector — ignore this book at your peril.”

Today, this growing, cross-disciplinary community of digital humanitarians is crafting and leveraging ingenious crowdsourcing solutions with trail-blazing insights from advanced computing and artificial intelligence in order to make sense of the “Big Data” generated during disasters. In virtually real-time, these new solutions (many still in early prototype stages) enable digital volunteers to make sense of vast volumes of social media, SMS and imagery captured from satellites and UAVs to support relief efforts worldwide.

All of this obviously comes with a great many challenges. I certainly don’t shy away from these in the book (despite my being an eternal optimist : ). As Ethan Zuckerman from MIT very kindly wrote in his review of the book,

“[Patrick] is also a careful scholar who thinks deeply about the limits and potential dangers of data-centric approaches. His book offers both inspiration for those around the world who want to improve our disaster response and a set of fertile challenges to ensure we use data wisely and ethically.”

Digital humanitarians are not perfect; they’re human, they make mistakes, they fail. Innovation, after all, takes experimenting, risk-taking and failing. But most importantly, these digital pioneers learn, innovate and, over time, make fewer mistakes. In sum, this book charts the sudden and spectacular rise of these digital humanitarians and their next generation technologies by sharing their remarkable, real-life stories, the many lessons they have learned and the hurdles they have both cleared and still face. In essence, this book highlights how their humanity, coupled with innovative solutions to “Big Data”, is changing humanitarian response forever. Digital Humanitarians will make you think differently about what it means to be humanitarian and will invite you to join the journey online. And that is what it’s ultimately all about: action, responsible and effective action.

Why did I write this book? The main reason may come as a surprise; it can be summed up in one word: hope. In a world seemingly overrun by heart-wrenching headlines and daily reminders from the news and social media about all the ugly and cruel ways that technologies are being used to spy on entire populations, to harass, oppress, target and kill each other, I felt the pressing need to share a different narrative; a narrative about how selfless volunteers from all walks of life, of all ages, nationalities and creeds, use digital technologies to help complete strangers on the other side of the planet. I’ve had the privilege of witnessing this digital goodwill first hand, repeatedly, over the years. This goodwill is what continues to restore my faith in humanity and what gives me hope, even when things are tough and not going well. And so, I wrote Digital Humanitarians first and foremost to share this hope more widely. We each have agency, and we can change the world for the better. I’ve seen this and witnessed the impact first hand. So if readers come away with a renewed sense of hope and agency after reading the book, I will have achieved my main objective.

-

For updates on events, talks, trainings, webinars, etc., please click here. I’ll be organizing a Google Hangout on March 5th for readers who wish to discuss the book in more depth and/or follow up with any questions or ideas. If you’d like additional information on this and future Hangouts, please click on the previous link. If you wish to join ongoing conversations online, feel free to do so with the FB & Twitter hashtag #DigitalJedis. If you’d like to set up a book talk and/or co-organize a training at your organization, university, school, etc., then do get in touch. If you wish to give a talk on the book yourself, then let me know and I’d be happy to share my slides. And if you come across interesting examples of digital humanitarians in action, then please consider sharing these with other readers and me by using the #DigitalJedis hashtag and/or by sending me an email so I can include your observation in my monthly newsletter and future blog posts. I also welcome guest blog posts on iRevolutions.

Naturally, this book would never have existed were it not for digital humanitarians volunteering their time, day and night, during major disasters across the world. This book would also not have seen the light of day without the thoughtful guidance and support I received from these mentors, colleagues, friends and my family. I am thus deeply and profoundly grateful for their spirit, inspiration and friendship. Onwards!

MicroMappers: Towards Next Generation Humanitarian Technology

The MicroMappers platform has come a long way and still has a ways to go. Our vision for MicroMappers is simple: combine human computing (smart crowdsourcing) with machine computing (artificial intelligence) to filter, fuse and map a variety of different data types such as text, photo, video and satellite/aerial imagery. To do this, we have created a collection of “Clickers” for MicroMappers. Clickers are simply web-based crowdsourcing apps used to make sense of “Big Data”. The “Text Clicker” is used to filter tweets and SMS; the “Photo Clicker” to filter photos; the “Video Clicker” to filter videos; and, yes, the Satellite and Aerial Clickers to filter satellite and aerial imagery. These are the Data Clickers. We also have a collection of Geo Clickers that digital volunteers use to geo-tag the tweets, photos and videos filtered by the Data Clickers. Note that these Geo Clickers automatically display the results of the crowdsourced geo-tagging on our MicroMaps like the one below.

[Image: MicroMappers map of geo-tagged tweets from Typhoon Ruby]
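As a rough illustration of how the Clickers described above fit together, the sketch below simply routes incoming items to a crowdsourcing queue based on media type. The class, field and queue names are hypothetical and not part of the actual MicroMappers codebase.

```python
# Conceptual sketch of routing items to MicroMappers-style Clickers.
# Names below are hypothetical, not the platform's real API.
from collections import defaultdict
from dataclasses import dataclass
from enum import Enum

class Clicker(Enum):
    TEXT = "Text Clicker"         # tweets and SMS
    PHOTO = "Photo Clicker"
    VIDEO = "Video Clicker"
    AERIAL = "Aerial Clicker"
    SATELLITE = "Satellite Clicker"

@dataclass
class Item:
    media_type: str   # "tweet", "sms", "photo", "video", "aerial" or "satellite"
    payload: str      # the text itself, or a URL to the media

ROUTES = {
    "tweet": Clicker.TEXT, "sms": Clicker.TEXT,
    "photo": Clicker.PHOTO, "video": Clicker.VIDEO,
    "aerial": Clicker.AERIAL, "satellite": Clicker.SATELLITE,
}

queues = defaultdict(list)  # one crowdsourcing queue per Clicker

def route(item: Item) -> None:
    """Send an incoming item to the queue of the Clicker that matches its type."""
    queues[ROUTES[item.media_type]].append(item)

route(Item("tweet", "Bridge down near the river #flood"))
route(Item("aerial", "https://example.org/uav/frame_0042.jpg"))
print({clicker.value: len(q) for clicker, q in queues.items()})
```

Items the crowd deems relevant would then move on to the corresponding Geo Clicker for geo-tagging.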

Thanks to our Artificial Intelligence (AI) engine AIDR, the MicroMappers “Text Clicker” already combines human and machine computing. This means that tweets and text messages can be automatically filtered (classified) after some initial crowdsourced filtering. The filtered tweets are then pushed to the Geo Clickers for geo-tagging purposes. We want to do the same (semi-automation) for photos posted to social media as well as for videos, although this is still a very active area of research and development in the field of computer vision.
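To give a flavor of this semi-automation (and only as a sketch; AIDR’s actual implementation is far more sophisticated), the snippet below trains a text classifier on an initial batch of crowd-labeled tweets and then auto-classifies new tweets only when the model is confident, deferring uncertain ones back to digital volunteers. The example tweets, labels and confidence threshold are all assumptions.

```python
# Sketch of hybrid human-machine tweet filtering (not AIDR's actual code).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Initial crowdsourced labels from the Text Clicker (hypothetical examples).
tweets = [
    "Bridge collapsed near Tacloban, road impassable",
    "Families need drinking water in Ormoc",
    "Sending prayers to everyone affected",
    "Great concert last night!",
]
labels = ["relevant", "relevant", "not_relevant", "not_relevant"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tweets, labels)

CONFIDENCE = 0.8  # below this, the tweet goes back to digital volunteers

def triage(tweet: str) -> str:
    """Auto-classify confident cases; defer uncertain ones to the crowd."""
    confidence = model.predict_proba([tweet])[0].max()
    if confidence >= CONFIDENCE:
        return model.predict([tweet])[0]
    return "needs_human_review"

print(triage("Urgent: shelter needed for 200 people in Cebu"))
```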

So we are prioritizing our next hybrid human-machine computing efforts on aerial imagery instead. Just like the “Text Clicker” above, we want to semi-automate feature detection in aerial imagery by adding an AI engine to the “Aerial Clicker”. We’ve just started to explore this with computer vision experts in Switzerland and Canada. Another development we’re eyeing vis-à-vis UAVs is live video streaming. To be sure, UAVs will increasingly be transmitting live video feeds directly to the web. This means we may eventually need to develop a “Streaming Clicker”, which would in some respects resemble our existing “Video Clicker” except that the video would be broadcast live rather than played back from YouTube, for example. The “Streaming Clicker” is for later, however, or at least until a prospective partner organization approaches us with an immediate and compelling social innovation use case.

In the meantime, my team & I at QCRI will continue to improve our maps (data visualizations) along with the human computing component of the Clickers. The MicroMappers smartphone apps, for example, need more work. We also need to find partners to help us develop apps for tablets like the iPad. In addition, we’re hoping to create a “Translate Clicker” with Translators Without Borders (TWB). The purpose of this Clicker would be to rapidly crowdsource the translation of tweets, text messages, etc. This could open up rather interesting possibilities for machine translation, which is certainly an exciting prospect.

[Image: mock-up of the fused MicroMappers map with multiple data layers, including CNN and BBC news feeds]

Ultimately, we want one map, and only one map, to display the data filtered via the Data and Geo Clickers. This map, using (Humanitarian) OpenStreetMap as a base layer, would display filtered tweets, SMS, photos, videos and relevant features from satellite and UAV imagery. Each data type would simply be a different layer on this fused “Meta-Data Crisis Map”, and end-users would simply turn individual layers on and off as needed. Note also the mainstream news feeds (CNN and BBC) depicted in the above image. We’re working with our partners at UN/OCHA, GDELT and SBTF to create a “3W Clicker” to complement our MicroMap. As noted in my forthcoming book, GDELT is the ultimate source of data for the world’s digitized news media. The 3Ws refer to Who, What and Where: an important spreadsheet that OCHA puts together and maintains in the aftermath of major disasters to support coordination efforts.
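To illustrate what such a layered map could look like, here is a small sketch using the folium Python library with OpenStreetMap as the base layer and one toggleable layer per data type. The coordinates and layer contents are placeholders rather than real MicroMappers data.

```python
# Sketch of a layered "meta-data crisis map" using folium (illustrative only).
import folium

# OpenStreetMap as the base layer, centered on the Philippines (placeholder).
m = folium.Map(location=[11.0, 124.6], zoom_start=7, tiles="OpenStreetMap")

# One FeatureGroup per data type, so end-users can toggle layers on and off.
tweets = folium.FeatureGroup(name="Geo-tagged tweets")
photos = folium.FeatureGroup(name="Photos")
aerial = folium.FeatureGroup(name="Aerial imagery features")

folium.Marker([11.24, 125.00], popup="Bridge damaged (tweet)").add_to(tweets)
folium.Marker([11.05, 124.61], popup="Flooded street (photo)").add_to(photos)
folium.CircleMarker([10.90, 124.80], radius=8,
                    popup="Damaged roofs (UAV)").add_to(aerial)

for layer in (tweets, photos, aerial):
    layer.add_to(m)

folium.LayerControl().add_to(m)   # the on/off toggle for each layer
m.save("micromap_sketch.html")
```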

In response to Typhoon Ruby in the Philippines, Andrej Verity (OCHA) and I collaborated with Kalev Leetaru from GDELT to explore how the MicroMappers “3W Clicker” might work. The result is the Google Spreadsheet below, which is automatically updated every 15 minutes with the latest news reports that refer to one or more humanitarian organizations in the Philippines. GDELT includes the original URL of each news article as well as the list of humanitarian organizations referenced in the article. In addition, GDELT automatically identifies the locations referred to in the articles, key words (tags) and the date of the news article. The spreadsheet below is already live and working. So all we need now is the “3W Clicker” to crowdsource the “What”.

[Image: the live Google Spreadsheet of GDELT-identified news reports referencing humanitarian organizations in the Philippines]
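The snippet below sketches the kind of periodic filtering step this involves, assuming the GDELT output has been exported as a CSV with columns named url, organizations, locations and date; those column names, the list of organizations and the file name are my assumptions, not GDELT’s actual schema.

```python
# Sketch of filtering a GDELT-style CSV export for 3W-relevant articles.
# Column names and file name below are assumptions, not GDELT's schema.
import csv

# Organizations we care about (a hypothetical, non-exhaustive list).
HUMANITARIAN_ORGS = {"UNICEF", "WFP", "ICRC", "OCHA", "World Vision"}

def relevant_rows(path, country="Philippines"):
    """Yield articles that mention a humanitarian org and the target country."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            orgs = {o.strip() for o in row["organizations"].split(";")}
            if orgs & HUMANITARIAN_ORGS and country in row["locations"]:
                yield {
                    "who": sorted(orgs & HUMANITARIAN_ORGS),
                    "where": row["locations"],
                    "url": row["url"],
                    "date": row["date"],
                    "what": "",  # left blank for the 3W Clicker crowd to fill in
                }

# In production this would run on a schedule (e.g. every 15 minutes) and
# append new rows to the shared spreadsheet; here we simply print them.
for record in relevant_rows("gdelt_export.csv"):
    print(record)
```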

The first version of the mock-up we’ve created for the “3W Clicker” is displayed below. Digital volunteers are presented with an interface that includes a news article with the names of humanitarian organizations highlighted in red for easy reference. GDELT auto-populates the URL, the organization name (or names if there is more than one) and the location. Note that both the “Who” and “Where” information can be edited directly by the volunteer in case GDELT’s automated algorithm gets those wrong. The main role of digital volunteers, however, would simply be to identify the “What” by quickly skimming the article.

[Image: first mock-up of the 3W Clicker interface]

The output of the “3W Clicker” would simply be another MicroMap layer. As per Andrej’s suggestion, the resulting data could also be automatically pushed to another Google Spreadsheet in HXL format. We’re excited about the possibilities and plan to move forward on this sooner rather than later. In addition to GDELT, pulling in feeds from CrisisNET may be worth exploring. I’m also really keen on exploring ways to link up with the Global Disaster Alert & Coordination System (GDACS) as well as GeoFeedia.
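On the HXL point, here is a minimal sketch of what pushing 3W records out as an HXL-tagged CSV could look like: a header row followed by a row of HXL hashtags, then the data. The particular hashtags chosen (#org, #activity, #loc, #date, #meta+url) are my best guess at a sensible mapping rather than a vetted 3W template, and the sample record is illustrative.

```python
# Sketch of exporting 3W Clicker results as an HXL-tagged CSV.
# The hashtag row below is an assumed mapping, not an official 3W template.
import csv

records = [
    # Illustrative record, not real data.
    {"who": "WFP", "what": "Food distribution", "where": "Tacloban",
     "date": "2014-12-08", "url": "http://example.org/article-1"},
]

with open("3w_hxl.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["Who", "What", "Where", "Date", "Source URL"])        # headers
    writer.writerow(["#org", "#activity", "#loc", "#date", "#meta+url"])   # HXL tags
    for r in records:
        writer.writerow([r["who"], r["what"], r["where"], r["date"], r["url"]])
```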

In the meantime, we’re hoping to pilot our “Satellite Clicker” thanks to recent conversations with Planet Labs and SkyBox Imaging. Overlaying user-generated content such as tweets and images on top of both satellite and aerial imagery can go a long way to helping verify (“ground truth”) social media during disasters and other events. This is evidenced by recent empirical studies such as this one in Germany and this one in the US. On this note, as my QCRI colleague Heather Leson recently pointed out, the above vision for MicroMappers is still missing one important data feed; namely sensors—the Internet of Things. She is absolutely spot on, so we’ll be sure to look for potential pilot projects that would allow us to explore this new data source within MicroMappers.

The above vision is a tad ambitious (understatement). We really can’t do this alone. To this end, please do get in touch if you’re interested in joining the team and getting MicroMappers to the next level. Note that MicroMappers is free and open source and in no way limited to disaster response applications. Indeed, we recently used the Aerial Clicker for this wildlife protection project in Namibia. This explains why our friends over at National Geographic have also expressed an interest in potentially piloting the MicroMappers platform for some of their projects. And of course, one need not use all the Clickers for a project, simply the one(s) that make sense. Another advantage of MicroMappers is that the Clickers (and maps) can be deployed very rapidly (since the platform was initially developed for rapid disaster response purposes). In any event, if you’d like to pilot the platform, then do get in touch.

See also: Digital Humanitarians – The Book