Tag Archives: Innovation

A Research Framework for Next Generation Humanitarian Technology and Innovation

Humanitarian donors and organizations are increasingly championing innovation and the use of new technologies for humanitarian response. DfID, for example, is committed to using “innovative techniques and technologies more routinely in humanitarian response” (2011). In a more recent strategy paper, DfID confirmed that it would “continue to invest in new technologies” (2012). ALNAP’s important report on “The State of the Humanitarian System” documents the shift towards greater innovation, “with new funds and mechanisms designed to study and support innovation in humanitarian programming” (2012). A forthcoming landmark study by OCHA makes the strongest case yet for the use and early adoption of new technologies for humanitarian response (2013).

These strategic policy documents are game-changers and pivotal to ushering in the next wave of humanitarian technology and innovation. That said, the reports are limited by the very fact that the authors are humanitarian professionals and thus not necessarily familiar with the field of advanced computing. The purpose of this post is therefore to set out a more detailed research framework for next generation humanitarian technology and innovation—one with a strong focus on information systems for crisis response and management.

In 2010, I wrote this piece on “The Humanitarian-Technology Divide and What To Do About It.” This divide became increasingly clear to me when I co-founded and co-directed the Harvard Humanitarian Initiative’s (HHI) Program on Crisis Mapping & Early Warning (2007-2009). So I co-founded the annual International CrisisMappers Conference series in 2009 and have continued to co-organize this unique, cross-disciplinary forum on humanitarian technology. The CrisisMappers Network also plays an important role in bridging the humanitarian and technology divide. My decision to join Ushahidi as Director of Crisis Mapping (2009-2012) was a strategic move to continue bridging the divide—and to do so from the technology side this time.

The same is true of my move to the Qatar Computing Research Institute (QCRI) at the Qatar Foundation. My experience at Ushahidi made me realize that serious expertise in Data Science is required to tackle the major challenges appearing on the horizon of humanitarian technology. Indeed, the key words missing from the DfID, ALNAP and OCHA innovation reports include: Data Science, Big Data Analytics, Artificial Intelligence, Machine Learning, Machine Translation and Human Computing. This current divide between the humanitarian and data science space needs to be bridged, which is precisely why I joined the Qatar Computing Research Institute as Director of Innovation: to develop and prototype the next generation of humanitarian technologies by working directly with experts in Data Science and Advanced Computing.

My efforts to bridge these communities also explains why I am co-organizing this year’s Workshop on “Social Web for Disaster Management” at the 2013 World Wide Web conference (WWW13). The WWW event series is one of the most prestigious conferences in the field of Advanced Computing. I have found that experts in this field are very interested and highly motivated to work on humanitarian technology challenges and crisis computing problems. As one of them recently told me: “We simply don’t know what projects or questions to prioritize or work on. We want questions, preferably hard questions, please!”

Yet the humanitarian innovation and technology reports cited above overlook the field of advanced computing. Their policy recommendations vis-a-vis future information systems for crisis response and management are vague at best. Yet one of the major challenges that the humanitarian sector faces is the rise of Big (Crisis) Data. I have already discussed this here, here and here, for example. The humanitarian community is woefully unprepared to deal with this tidal wave of user-generated crisis information. There are already more mobile phone subscriptions than people in 100+ countries. And fully 50% of the world’s population in developing countries will be using the Internet within the next 20 months—the current figure is 24%. Meanwhile, close to 250 million people were affected by disasters in 2010 alone. Since then, the number of new mobile phone subscriptions has increased by well over one billion, which means that disaster-affected communities today are increasingly likely to be digital communities as well.

In the Philippines, a country highly prone to “natural” disasters, 92% of Filipinos who access the web use Facebook. In early 2012, Filipinos sent an average of 2 billion text messages every day. When disaster strikes, some of these messages will contain information critical for situational awareness & rapid needs assessment. The innovation reports by DfID, ALNAP and OCHA emphasize time and time again that listening to local communities is a humanitarian imperative. As DfID notes, “there is a strong need to systematically involve beneficiaries in the collection and use of data to inform decision making. Currently the people directly affected by crises do not routinely have a voice, which makes it difficult for their needs be effectively addressed” (2012). But how exactly should we listen to millions of voices at once, let alone manage, verify and respond to these voices with potentially life-saving information? Over 20 million tweets were posted during Hurricane Sandy. In Japan, over half-a-million new users joined Twitter the day after the 2011 Earthquake. More than 177 million tweets about the disaster were posted that same day, i.e., 2,000 tweets per second on average.
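
The per-second figure above follows directly from the daily total; a quick back-of-the-envelope calculation, using only the numbers already cited, confirms it:

```python
# Sanity check of the figures above: 177 million disaster-related
# tweets posted in a single day averages out to roughly 2,000
# tweets per second.

tweets_per_day = 177_000_000
seconds_per_day = 24 * 60 * 60  # 86,400 seconds

average_rate = tweets_per_day / seconds_per_day
print(round(average_rate))  # on the order of 2,000 tweets per second
```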

Of course, the volume and velocity of crisis information will vary from country to country and disaster to disaster. But the majority of humanitarian organizations do not have the technologies in place to handle even smaller tidal waves. Take the recent Typhoon in the Philippines, for example. OCHA activated the Digital Humanitarian Network (DHN), asking it to carry out a rapid damage assessment by analyzing the 20,000 tweets posted during the first 48 hours of Typhoon Pablo. In fact, one of the main reasons digital volunteer networks like the DHN and the Standby Volunteer Task Force (SBTF) exist is to provide humanitarian organizations with this kind of skilled surge capacity. But analyzing 20,000 tweets in 12 hours (mostly manually) is one thing; analyzing 20 million requires far more than a few hundred dedicated volunteers. What’s more, we do not have the luxury of months to carry out this analysis. Access to information is as important as access to food; and like food, information has a sell-by date.

We clearly need a research agenda to guide the development of next generation humanitarian technology. One such framework is proposed here. The Big (Crisis) Data challenge is composed of (at least) two major problems: (1) finding the needle in the haystack; (2) assessing the accuracy of that needle. In other words, identifying the signal in the noise and determining whether that signal is accurate. Both of these challenges are exacerbated by serious time constraints. There are (at least) two ways to manage the Big Data challenge in real or near real-time: Human Computing and Artificial Intelligence. We know about these solutions because they have already been developed and used by other sectors and disciplines for several years now. In other words, our information problems are hardly as unique as we might think. Hence the importance of bridging the humanitarian and data science communities.
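
To make the “needle in the haystack” problem concrete, here is a deliberately minimal sketch of the first step: filtering a message stream for likely-relevant items. The keywords and messages below are invented for illustration; real systems use far more sophisticated classifiers, but the shape of the problem is the same.

```python
# Toy illustration of needle-in-the-haystack triage: scan a stream of
# crisis-period messages and keep only those that look relevant.
# All messages and keywords here are invented for illustration.

RELEVANT_KEYWORDS = {"trapped", "flood", "collapsed", "injured", "help"}

def is_candidate_needle(message: str) -> bool:
    """Crude relevance filter: does the message mention a crisis keyword?"""
    words = set(message.lower().split())
    return bool(words & RELEVANT_KEYWORDS)

stream = [
    "Beautiful sunset over the bay tonight",
    "Bridge collapsed on Main St, two people trapped",
    "Anyone else watching the game?",
    "Flood water rising fast near the market, need help",
]

needles = [m for m in stream if is_candidate_needle(m)]
print(needles)
```

Even this crude filter shrinks the haystack; the much harder problems, assessing accuracy and coping with scale, are what the Human Computing and Artificial Intelligence approaches discussed here address.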

In sum, the Big Crisis Data challenge can be addressed using Human Computing (HC) and/or Artificial Intelligence (AI). Human Computing includes crowdsourcing and microtasking. AI includes natural language processing and machine learning. A framework for next generation humanitarian technology and innovation must thus promote Research and Development (R&D) that applies these methodologies to humanitarian response. For example, Verily is a project that leverages HC for the verification of crowdsourced social media content generated during crises. In contrast, this is an example of an AI approach to verification. The Standby Volunteer Task Force (SBTF) has used HC (microtasking) to analyze satellite imagery (Big Data) for humanitarian response. Another novel HC approach to managing Big Data is the use of gaming, something called Playsourcing. AI for Disaster Response (AIDR) is an example of AI applied to humanitarian response. In many ways, though, AIDR combines AI with Human Computing, as does MatchApp. Such hybrid solutions should also be promoted as part of the R&D framework on next generation humanitarian technology.
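
As a rough illustration of how such a hybrid might fit together, the sketch below has an automatic classifier handle the messages it is confident about and route uncertain ones to a human microtasking queue. The scoring rule, threshold and messages are invented placeholders for illustration, not AIDR’s actual algorithm.

```python
# Sketch of a hybrid AI + Human Computing pipeline: the machine
# auto-labels high-confidence messages; low-confidence messages go
# to volunteers for verification via microtasking.

def classify(message: str) -> tuple[str, float]:
    """Placeholder classifier returning (label, confidence)."""
    urgent_terms = {"trapped", "injured", "collapsed"}
    hits = sum(term in message.lower() for term in urgent_terms)
    if hits >= 2:
        return ("urgent", 0.9)
    if hits == 1:
        return ("urgent", 0.6)  # weak evidence: uncertain
    return ("not_urgent", 0.8)

CONFIDENCE_THRESHOLD = 0.8

auto_labeled, human_queue = [], []
for msg in [
    "Two injured, wall collapsed near school",
    "Road trapped in traffic all morning",
    "Lovely weather today",
]:
    label, confidence = classify(msg)
    if confidence >= CONFIDENCE_THRESHOLD:
        auto_labeled.append((msg, label))
    else:
        human_queue.append(msg)  # volunteers verify via microtasking

print(len(auto_labeled), len(human_queue))
```

The design point is the division of labor: the machine absorbs the volume, while scarce human attention is spent only where the machine is unsure.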

There is of course more to humanitarian technology than information management alone. Related is the topic of Data Visualization, for example. There are also exciting innovations and developments in the use of drones or Unmanned Aerial Vehicles (UAVs), meshed mobile communication networks, hyper low-cost satellites, etc. I am particularly interested in each of these areas and will continue to blog about them. In the meantime, I very much welcome feedback on this post’s proposed research framework for humanitarian technology and innovation.

Innovation and the State of the Humanitarian System

Published by ALNAP, the 2012 State of the Humanitarian System report is an important evaluation of the humanitarian community’s efforts over the past two years. “I commend this report to all those responsible for planning and delivering life saving aid around the world,” writes UN Under-Secretary General Valerie Amos in the Preface. “If we are going to improve international humanitarian response we all need to pay attention to the areas of action highlighted in the report.” Below are some of the highlighted areas from the 100+ page evaluation that are ripe for innovative interventions.

Accessing Those in Need

Operational access to populations in need has not improved. Access problems continue and are primarily political or security-related rather than logistical. Indeed, “UN security restrictions often place severe limits on the range of UN-led assessments,” which means that “coverage often can be compromised.” As the report notes, “access constraints in some contexts continue to inhibit an accurate assessment of need. Up to 60% of South Sudan is inaccessible for parts of the year. As a result, critical data, including mortality and morbidity, remain unavailable. Data on nutrition, for example, exist in only 25 of 79 countries where humanitarian partners have conducted surveys.”

Could satellite and/or aerial imagery be used to measure indirect proxies? This would certainly be imperfect, but perhaps better than nothing. Could crowdseeding be used?

Information and Communication Technologies

“The use of mobile devices and networks is becoming increasingly important, both to deliver cash and for communication with aid recipients.” Some humanitarian organizations are also “experimenting with different types of communication tools, for different uses and in different contexts. Examples include: offering emergency information, collecting information for needs assessments or for monitoring and evaluation, surveying individuals, or obtaining information on remote populations from an appointed individual at the community level.”

“Across a variety of interventions, mobile phone technology is seen as having great potential to increase efficiency. For example, […] the governments of Japan and Thailand used SMS and Twitter to spread messages about the disaster response.” Naturally, in some contexts, “traditional means like radios and call centers are most appropriate.”

In any case, “thanks to new technologies and initiatives to advance communications with affected populations, the voices of aid recipients began, in a small way, to be heard.” Obviously, heard and understood are not the same thing–not to mention heard, understood and responded to. Moreover, as disaster affected communities become increasingly “digital” thanks to the spread of mobile phones, the number of voices will increase significantly. The humanitarian system is largely (if not completely) unprepared to handle this increase in volume (Big Data).

Consulting Local Recipients

Humanitarian organizations have “failed to consult with recipients […] or to use their input in programming.” Indeed, disaster-affected communities are “rarely given opportunities to assess the impact of interventions and to comment on performance.” In fact, “they are rarely treated as end-users of the service.” Aid recipients also report that “the aid they received did not address their ‘most important needs at the time.'” While some field-level accountability mechanisms do exist, they are typically duplicative and narrowly project-oriented. To this end, “it might be more efficient and effective to have more coordination between agencies regarding accountability approaches.”

While the ALNAP report suggests that these shortcomings could “be addressed in the near future by technical advances in methods of needs assessment,” the challenge here is not simply a technical one. Still, there are important efforts underway to address these issues.

Improving Needs Assessments

The Inter-Agency Standing Committee’s (IASC) Needs Assessment Task Force (NATF) and the International NGO-led Assessment Capacities Project (ACAPS) are two such examples of progress. OCHA serves as the secretariat for the NATF through its Assessment and Classification of Emergencies (ACE) Team. ACAPS, which is a consortium of three international NGOs (X, Y and Z) and a member of the NATF, aims to “strengthen the capacity of the humanitarian sector in multi-sectoral needs assessment.” ACAPS is considered to have “brought sound technical processes and practical guidelines to common needs assessment.” Note that both ACAPS and ACE have recently reached out to the Digital Humanitarian Network (DHNetwork) to partner on needs-assessment projects in South Sudan and the DRC.

Another promising project is the Humanitarian Emergency Settings Perceived Needs Scale (HESPER). This joint initiative between WHO and King’s College London is designed to rapidly assess the “perceived needs of affected populations and allow their views to be taken into consideration. The project specifically aims to fill the gap between population-based ‘objective’ indicators […] and/or qualitative data based on convenience samples such as focus groups or key informant interviews.” On this note, some NGOs argue that “overall assessment methodologies should focus far more at the community (not individual) level, including an assessment of local capacities […],” since “far too often international aid actors assume there is no local capacity.”

Early Warning and Response

An evaluation of the response in the Horn of Africa found “significant disconnects between early warning systems and response, and between technical assessments and decision-makers.” According to ALNAP, “most commentators agree that the early warning worked, but there was a failure to act on it.” This disconnect is a concern I voiced back in 2009 when UN Global Pulse was first launched. To be sure, real-time information does not turn an organization into a real-time organization. Not surprisingly, most of the aid recipients surveyed for the ALNAP report felt that “the foremost way in which humanitarian organizations could improve would be to: ‘be faster to start delivering aid.'” Interestingly, “this stands in contrast to the survey responses of international aid practitioners who gave fairly high marks to themselves for timeliness […].”

Rapid and Skilled Humanitarians

While the humanitarian system’s surge capacity for the deployment of humanitarian personnel has improved, “findings also suggest that the adequate scale-up of appropriately skilled […] staff is still perceived as problematic for both operations and coordination.” Other evaluations “consistently show that staff in NGOs, UN agencies and clusters were perceived to be ill prepared in terms of basic language and context training in a significant number of contexts.” In addition, failures in knowledge and understanding of humanitarian principles were also raised. Furthermore, evaluations of mega-disasters “predictably note influxes of relatively new staff with limited experience.” Several evaluations noted that the lack of “contextual knowledge caused a net decrease in impact.” This led one senior manager to note:

“If you don’t understand the political, ethnic, tribal contexts it is difficult to be effective… If I had my way I’d first recruit 20 anthropologists and political scientists to help us work out what’s going on in these settings.”

Monitoring and Evaluation

ALNAP found that monitoring and evaluation continues to be a significant shortcoming in the humanitarian system. “Evaluations have made mixed progress, but affected states are still notably absent from evaluating their own response or participating in joint evaluations with counterparts.” Moreover, while there have been important efforts by CDAC and others to “improve accountability to, and communication with, aid recipients,” there is “less evidence to suggest that this new resource of ground-level information is being used strategically to improve humanitarian interventions.” To this end, “relatively few evaluations focus on the views of aid recipients […].” In one case, “although a system was in place with results-based indicators, there was neither the time nor resources to analyze or use the data.”

The most common reasons cited for failing to meet community expectations include the “inability to meet the full spectrum of need, weak understanding of local context, inability to understand the changing nature of need, inadequate information-gathering techniques or an inflexible response approach.” In addition, preconceived notions of vulnerability have “led to inappropriate interventions.” A major study carried out by Tufts University and cited in the ALNAP report concludes that “humanitarian assistance remains driven by ‘anecdote rather than evidence’ […].” One important exception to this is the Danish Refugee Council’s work in Somalia.

Leadership, Risk and Principles

ALNAP identifies “alarming evidence of a growing tendency towards risk aversion” and a “stifling culture of compliance.” In addition, adherence to humanitarian principles was found to have weakened as “many humanitarian organizations have willingly compromised a principled approach in their own conduct through close alignment with political and military activities and actors.” Moreover, “responses in highly politicized contexts are viewed as particularly problematic for the retention of humanitarian principles.” Humanitarian professionals who were interviewed by ALNAP for this report “highlighted multiple occasions when agencies failed to maintain an impartial response when under pressure from strong states, such as Pakistan and Sri Lanka.”

From Gunfire at Sea to Maps of War: Implications for Humanitarian Innovation

MIT Professor Eric von Hippel is the author of Democratizing Innovation, a book I should have read when it was first published seven years ago. The purpose of this blog post, however, is to share some thoughts on “Gunfire at Sea: A Case Study in Innovation” (PDF), which Eric recently instructed me to read. Authored by Elting Morison in 1968, this piece is definitely required reading for anyone engaged in disruptive innovation, particularly in the humanitarian space. Morison was one of the most distinguished historians of the last century and the founder of MIT’s Program in Science, Technology and Society (STS). The Boston Globe called him “an educator and industrial historian who believed that technology could only be harnessed to serve human beings when scientists and poets could meet with mutual understanding.”

Morison details in intriguing fashion the challenges of using light artillery at sea in the late 1800s to illustrate how new technologies and new forms of power collide and, indeed, “bombard the fixed structure of our habits of mind and behavior.” The first major innovative disruption in naval gunfire technology was the result of one person’s acute observation. Admiral Sir Percy Scott happened to watch his men during target practice one day while the ship they were on was pitching and rolling acutely due to heavy weather. The resulting accuracy of the shots was dismal, save for one man who was doing something slightly different to account for the swaying. Scott observed this positive deviance carefully and cobbled together existing technology to render the strategy easier to repeat and replicate. Within a year, his gun crews were remarkably accurate.

Note that Scott was not responsible for the invention of the basic instruments he cobbled together to scale the positive deviance he observed. Scott’s contribution, rather, was a mashup of existing technology made possible thanks to mechanical ingenuity and a keen eye for behavioral processes. As for the personality of the innovator, Scott possessed “a savage indignation directed ordinarily at the inelastic intelligence of all constituted authority, especially the British Admiralty.” Chance also plays a role in this story. “Fortune (in this case, the unaware gun pointer) indeed favors the prepared mind, but even fortune and the prepared mind need a favorable environment before they can conspire to produce sudden change. No intelligence can proceed very far above the threshold of existing data or the binding combinations of existing data.”

Whilst stationed in China several years later, Admiral Scott crosses paths with William Sims, an American Junior Officer of similar temperament. Sims’s efforts to reform the naval service are perhaps best told in his own words: “I am perfectly willing that those holding views differing from mine should continue to live, but with every fibre of my being I loathe indirection and shiftiness, and where it occurs in high place, and is used to save face at the expense of the vital interests of our great service (in which silly people place such a child-like trust), I want that man’s blood and I will have it no matter what it costs me personally.” Sims built on Scott’s inventions and made further modifications, resulting in new records in accuracy. “These elements were brought into successful combination by minds not interested in the instruments for themselves but in what they could do with them.”

“Sure of the usefulness of his gunnery methods, Sims then turned to the task of educating the Navy at large.” And this is where the fun really begins. His first strategy was to relay in writing the results of his methods “with a mass of factual data.” Sims authored over a dozen detailed data-driven reports on innovations in naval gunfire strategy, which he sent from his China Station to the powers that be in Washington DC. At first, there was no response from DC. Sims thus decided to change his tone by using deliberately shocking language in subsequent reports. Writes Sims: “I therefore made up my mind I would give these later papers such a form that they would be dangerous documents to leave neglected in the files.” Sims also decided to share his reports with other officers in the fleet to force a response from the men in Washington.

The response, however, was not exactly what Sims had hoped. Washington’s opinion was that American technology was generally as good as the British, which implied that the trouble was with the men operating the technology, which thus meant that ship officers ought to conduct more training. What probably annoyed Sims most, however, was Washington’s comments vis-a-vis the new records in accuracy that Sims claimed to have achieved. Headquarters simply waved these off as impossible. So while the first reaction was dead silence, DC’s second strategy was to try and “meet Sims’s claims by logical, rational rebuttal.”

I agree with the author, Elting Morison, that this second stage reaction, “the apparent resort to reason,” is the “most entertaining and instructive in our investigation of the responses to innovation.” That said, the third stage, name-calling, can be just as entertaining for some, and Sims took the argumentum ad hominem as evidence that “he was being attacked by shifty, dishonest men who were the victims, as he said, of insufferable conceit and ignorance.” He thus took the extraordinary step of writing directly to the President of the United States, Theodore Roosevelt, to inform him of the remarkable achievements in accuracy that he and Admiral Scott had achieved. “Roosevelt, who always liked to respond to such appeals when he conveniently could, brought Sims back from China late in 1902 and installed him as Inspector of Target Practice [...]. And when he left, after many spirited encounters [...], he was universally acclaimed as ‘the man who taught us how to shoot.'”

What fascinates Morison in this story is the concerted resistance triggered by Sims’s innovation. Why so much resistance? Morison identifies three main sources: “honest disbelief in the dramatic but substantiated claims of the new process; protection of the existing devices and instruments with which they identified themselves; and maintenance of the existing society with which they were identified.” He argues that the latter explanation is the most important, i.e., resistance due to the “fixed structure of our habits of mind and behavior” and the fact that relatively small innovations in gunfire accuracy could quite conceivably unravel the entire fabric of naval doctrine. Indeed,

“From changes in gunnery flowed an extraordinary complex of changes: in shipboard routines, ship design, and fleet tactics. There was, too, a social change. In the days when gunnery was taken lightly, the gunnery officer was taken lightly. After 1903, he became one of the most significant and powerful members of a ship’s company, and this shift of emphasis naturally was shortly reflected in promotion lists. Each one of these changes provoked a dislocation in the naval society, and with man’s troubled foresight and natural indisposition to break up classic forms, the men in Washington withstood the Sims onslaught as long as they could. It is very significant that they withstood it until an agent from outside, outside and above, who was not clearly identified with the naval society, entered to force change.”

The resistance to change thus “springs from the normal human instinct to protect oneself, and more especially, one’s way of life.” Interestingly, the deadlock between those who sought change and those who sought to retain things as they were “was broken only by an appeal to superior force, a force removed from and unidentified with the mores, conventions, devices of the society.” This seems to me a very important point. The appeal to Roosevelt suggests perhaps that no organization “should or can undertake to reform itself. It must seek assistance from outside.”

I am absolutely intrigued by what these insights might imply vis-a-vis innovation (and resistance to innovation) in the humanitarian sector. Whether it be the result of combining existing technologies to produce open-source crisis mapping platforms or the use of new information management processes such as crowdsourcing, is concerted resistance to such innovation in the humanitarian space inevitable as well? Do we have a Roosevelt equivalent, i.e., an external and somewhat independent actor who might disrupt the resistance? I can definitely trace the same stages of resistance to innovations in humanitarian technology as those identified by Morison: (1) dead silence; (2) reasoned dismissal; and (3) name-calling. But as Morison himself is compelled to ask: “How then can we find the means to accept with less pain to ourselves and less damage to our social organization the dislocations in our society that are produced by innovation?”

This question, or rather Morison’s insights in tackling it, are profound and have important implications vis-a-vis innovation in the humanitarian space. Morison hones in on the imperative of “identification” in innovation:

“It cannot have escaped notice that some men identified themselves with their creations (sights, gun, gear, and so forth) and thus obtained a presumed satisfaction from the thing itself, a satisfaction that prevented them from thinking too closely on either the use or the defects of the thing; that others identified themselves with a settled way of life they had inherited or accepted with minor modification and thus found their satisfaction in attempting to maintain that way of life unchanged; and that still others identified themselves as rebellious spirits, men of the insurgent cast of mind, and thus obtained a satisfaction from the act of revolt itself.”

This purely personal identification is a powerful barrier to innovation. So can this identifying process be tempered in order to facilitate change that is ultimately in everyone’s interest? Morison recommends that we “spend some time and thought on the possibility of enlarging the sphere of our identifications from the part to the whole.” In addition, he suggests an emphasis on process rather than product. If we take this advice to heart, what specific changes should we seek to make in the humanitarian technology space? How do we enlarge the sphere of our identifications and in doing so focus on processes rather than products? There’s no doubt that these are major challenges in and of themselves, but ignoring them may very well mean that important innovations in life-saving technologies and processes will go unadopted by large humanitarian organizations for many years to come.

Democratizing ICT for Development with DIY Innovation and Open Data

The recent Net Impact conference in Portland proved to be an ideal space to take a few steps back and reflect on the bigger picture. There was much talk of new and alternative approaches to traditional development. The word “participatory” in particular was a trending topic among both presenters and participants. But exactly how “participatory” are these “participatory” approaches to development? Do they fundamentally democratize the development process? And do these “novel” participatory approaches really let go of control? Should they? The following thoughts and ideas were co-developed in follow-up conversations with my colleague Chrissy Martin who also attended Net Impact. She blogs at Innovate.Inclusively.

I haven’t had the space recently to think through some of these questions or reflect about how the work I’ve been doing with Ushahidi fits (or doesn’t) within the traditional development paradigm—a paradigm which many at the conference characterized as #fail. Some think that perhaps technology can help change this paradigm, hence the burst of energy around the ICT for Development (ICT4D) field. That said, it is worth remembering that the motivations driving this shift are more important than any one technology. For example, recall the principles behind the genesis of the Ushahidi platform: Democratizing information flows and access; promoting Open Data and Do it Yourself (DIY) Innovation with free, highly hackable (i.e., open source) technology; letting go of control.

The Ushahidi platform is not finished. It will never be finished. This is deliberate, not an error in the code. Free and open source software (FOSS) is by definition in a continual phase of co-Research and Development (co-R&D). The Ushahidi platform is not a solution, it is a platform on top of which others build their own solutions. These solutions remain open source and some are folded back into the core Ushahidi code. This type of “open protocol” can give rise to “innovation cascades,” leading to “reverse innovation” from developing to industrialized countries (c.f. information cascades). FOSS acts like a virus: it self-propagates. The Ushahidi platform, for example, has propagated to over 130 countries since it was first launched during Kenya’s post-election violence almost four years ago.

In some ways, the Ushahidi platform can be likened to a “choose your own adventure” game. The readers, not the authors, finish the story. They are the main characters who bring the role-playing games and stories to life. But FOSS goes beyond this analogy. The readers can become the authors and vice versa. Welcome to co-creation. Perhaps one insightful analogy is the comparison between Zipcar and RelayRides.

I’ve used Zipcar for over five years now and love it. But what would a “democratized” Zipcar look like? You guessed it: RelayRides turns every car owner into their own mini-DIY-Zipcar company. You basically get your own “Zipcar-in-a-box” kit and rent out your own car in the same way that Zipcar does with their cars. RelayRides is basically an open source version of Zipcar, a do-it-yourself innovation. A good friend of mine, Becca, is an avid RelayRides user. The income from lending her car out lets her cover part of her rent, and if she needs a car while hers is rented out, she’ll get online and look for available RelayRides in her neighborhood. She likes the “communal ownership” spirit that the technology facilitates. Indeed, she is getting to know her neighbors better as a result. In this case, DIY Innovation is turning strangers, a crowd, into a community. Perhaps DIY Innovation can facilitate community building in the long run.

The Ushahidi platform shares this same spirit. The motivation behind Ushahidi’s new “Check-Ins” feature, for example, is to democratize platforms like Foursquare. There’s no reason why others can’t have their own Foursquares and customize them for their own projects along with the badges, etc. That’s not to imply that the Ushahidi platform is perfect. There’s a long way to go, but again, it will never be perfect, nor is that the intention. Sure, the technology will become more robust, stable and extensible, but not perfect. Perfection denotes an end state. There is no end state in co-R&D. The choose-your-own-adventure story continues for as long as the reader, the main character, decides to read on.

I’m all for “participatory development,” but I’m also interested in allowing individuals to innovate for themselves first and then decide how and with whom to participate. I’d call that self-determination. This explains why the Ushahidi team is no longer the only “game in town,” so to speak. Our colleagues at DISC have customized the Ushahidi platform in more innovative and relevant ways than we could have for the Egyptian context. Not only that, they’re making a business out of customizing the platform and training others in the Arab World. The Ushahidi code has been out of our hands since 2008. We’re actively promoting and supporting partners like DISC. Some may say we’re nurturing our own competition. Well then, even better.

Freely providing the hackable building blocks for DIY Innovation is one way to let go of control and democratize ICT4D. Another complementary way is to democratize information access by promoting automated Open Data generation, i.e., embedded real-time sensors for monitoring purposes. Equal and public access to Open Data levels the playing field, prevents information arbitrage and disrupts otherwise entrenched flows of information. Participatory development without Open Data is unlikely to hold institutions accountable or render the quality of their services (or lack thereof) more transparent. But by Open Data here I don’t only mean data generated via participatory surveys or crowdsourcing.

The type of public-access Open Data generation I’m interested in could be called “Does-It-Itself” Open Data, or DII Data. Take the “Internet of Things” idea and apply it to traditional development. Let non-intrusive, embedded and real-time sensors provide direct, empirical and open data on the status of development projects without any “middle man” who may have an interest in skewing the data. In other words, hack the Monitoring and Evaluation (M&E) process by letting the sensors vote for themselves and display the “election results” publicly and in real time. Give the sensors a voice. Meet Evan Thomas, a young professor at Portland State, who spends his time doing just this at SweetLab, and my colleague Rose Goslinga, who is taking the idea of DII Data to farmers in Kenya.
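The “sensors vote” idea can be sketched in a few lines of Python. This is purely illustrative: the sensor names, the flow threshold and the reading format are my own assumptions, not SweetLab’s actual design.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical sketch of "DII Data": each embedded sensor reports its own
# usage reading, and the project's status is published directly from those
# readings -- no intermediary edits the numbers.

@dataclass
class SensorReading:
    sensor_id: str
    liters_per_day: float  # e.g. measured water flow at a hand pump

def project_status(readings, working_threshold=50.0):
    """Let the sensors 'vote': a pump counts as working if its
    measured daily flow meets the (assumed) threshold."""
    working = [r for r in readings if r.liters_per_day >= working_threshold]
    return {
        "sensors_reporting": len(readings),
        "pumps_working": len(working),
        "mean_flow": round(mean(r.liters_per_day for r in readings), 1),
    }

readings = [
    SensorReading("pump-01", 120.0),
    SensorReading("pump-02", 8.5),   # likely broken or unused
    SensorReading("pump-03", 95.0),
]
print(project_status(readings))
# → {'sensors_reporting': 3, 'pumps_working': 2, 'mean_flow': 74.5}
```

The point of the sketch is that the published status is computed mechanically from the raw readings, so anyone with access to the open feed can recompute and verify it.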

Evan embeds customized sensors to monitor dozens of development projects in several countries. These sensors generate real-time, high-resolution data that is otherwise challenging, expensive and time-consuming to collect via the traditional survey-based approach. Evan’s embedded sensors generate behavior and usage data for projects like the Mercy Corps Water and Sanitation Program and the Bridges to Prosperity Program. Another example of DII Data is Rose’s weather index insurance (WII) project in Kenya, Kilimo Salama. This initiative uses atmospheric data automatically transmitted from local weather towers to determine insurance payouts for participating farmers during periods of drought or flooding. Instead of expensive farm visits and subjective assessments, this data-driven approach to feedback loops lowers program costs and renders the process more objective and transparent.
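To make the weather-index mechanism concrete, here is a minimal sketch of a drought payout rule. The trigger and exit thresholds are hypothetical; Kilimo Salama’s actual contract terms are not described here.

```python
def payout_fraction(rainfall_mm, drought_trigger=100.0, drought_exit=50.0):
    """Illustrative weather-index payout rule (assumed thresholds, not
    Kilimo Salama's actual formula): no payout at or above the trigger,
    full payout at or below the exit, linear in between."""
    if rainfall_mm >= drought_trigger:
        return 0.0
    if rainfall_mm <= drought_exit:
        return 1.0
    return (drought_trigger - rainfall_mm) / (drought_trigger - drought_exit)

# Seasonal rainfall measured automatically at a local weather tower:
for mm in (120.0, 75.0, 40.0):
    print(mm, payout_fraction(mm))
# → 120.0 0.0
# → 75.0 0.5
# → 40.0 1.0
```

Because the payout is a deterministic function of publicly measured rainfall, no loss adjuster needs to visit the farm, which is exactly the cost and transparency advantage the post describes.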

There is of course more to the development field than the innovative processes described above. Development means a great many things to different people. The same is true of the words “Democracy,” “Participatory” and “Crowdsourcing.” For me, crowdsourcing, like democracy, is a methodology that can catalyze greater participation and civic engagement. Some liken this to democratizing the political process. Elections, in a way, are crowdsourced. Obviously, however, the fact that an election is crowdsourced in no way implies that it is free, open or fair. Moreover, elections are but one ingredient in the recipe for a democratic political process.

In the same way, democratizing ICT4D is not a sufficient condition to ensure that the traditional development space earns a new hashtag: #success. Letting go of control and allowing for self-determination can of course lead to unexpected outcomes. At this point, however, given the #fail hashtag associated with traditional development, perhaps unexpected outcomes driven by democratic, bottom-up innovation processes that facilitate self-organization, self-determination and participation are more respectful of human dignity and ingenuity.