The Use of Expendable UAVs After Typhoon Haiyan

My colleague Dr. Imes Chiu recently co-authored this report (PDF) on his team’s use of expendable UAVs following Typhoon Haiyan (known as Typhoon Yolanda in the Philippines). Imes is Chief of Applied Research at the Center for Excellence in Disaster Management and Humanitarian Assistance (COE-DMHA) based in Honolulu, Hawaii.


Highlights of the report:

  • “The interdisciplinary [...] team concluded that during the rapid response phase of disaster management, aerial imagery of damaged areas proved more useful than a detailed needs-assessment.”
  • “Imagery provided by civil drones enabled local government units to immediately and accurately assess the extent of the damage in their jurisdictions, even when operating with a significantly reduced staff.”
  • “What they [relief workers] actually need at this point is to get an accurate understanding and a very detailed picture at the village level, at the camp level, as to what exactly is going on.”
  • “During Haiyan recovery operations, civil drones were quickly adopted as routine operating procedures for many humanitarian groups. Overcoming the logistical challenges posed by massive debris in Tacloban, civil drones provided many NGOs much needed situational awareness at a time when needs-assessment teams did not have access to the disaster area.”
  • “Initially used to pinpoint potential base camp locations for aid workers, many NGOs began adapting the use of civil drones to inform their relief, rescue and recovery operations from aerial views of infrastructure devastation, road and power line damages, emergency areas and relief distribution networks. Civil drones also helped ensure the safety of aid workers through regular information feeds of their movements in the affected areas.”
  • “The biggest challenge [...] was determining a launch & recovery site sufficient for a fixed-wing xUAV, so the team used a multi-rotor helicopter drone that is vertically launched and recovered. Imagery from both video and still photography informed the acquisition team where to launch and recover the larger fixed-wing unit.”
  • “Even though this UAV subclass is termed ‘expendable,’ it does not mean the team intentionally or willingly ‘expends’ them, rather it means that the cost is so low and accessibility so high that the drones can be readily replaced in case of loss—therefore users are not inhibited by the cost & loss factors.”
  • “A significant benefit of the xUAV is as an asset that could be locally employed and managed. They do not require a centralized command system; they are ‘locally modifiable’ so changes to the system can easily be done to meet community needs. These expendable systems by nature are small, inexpensive and not transportation limited. Unlike larger systems, xUAV could easily be hand carried to remote locations. The components are derived from everyday consumer technology backed by a large network of web-based support systems, often set-up by the academic community.”
  • “The team’s first effort started from a fixed-wing xUAV that covered an area of approximately 1.5 square kilometers at an altitude of 150 meters. The total flight time was approximately 30 minutes. The imagery acquired rendered a final mosaic at eight centimeters per pixel. The current xUAV configuration can fly and capture imagery for approximately an hour.”
  • “The xUAV platform used to generate the Tacloban mosaic imagery consisted of widely available parts that can be purchased for approximately $1,000. This is significantly cheaper than the more expensive commercial ‘turnkey’ systems.”
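As a rough sanity check on the figures quoted above, the ground sampling distance (GSD) of a downward-pointing camera follows directly from altitude, focal length, sensor width and image width. The sketch below uses assumed compact-camera numbers (the report does not specify the camera used), and lands in the same ballpark as the reported eight centimeters per pixel:

```python
def ground_sampling_distance(altitude_m, focal_mm, sensor_w_mm, image_w_px):
    """Ground footprint of a single pixel (in metres) for a camera pointing
    straight down: (sensor width / focal length) scales the altitude into a
    swath width, which is then divided across the image's pixels."""
    return (sensor_w_mm / focal_mm) * altitude_m / image_w_px

# Assumed numbers (illustrative only): a compact camera with a 7.6 mm wide
# sensor, a 5.2 mm lens, and images 4000 pixels wide.
gsd = ground_sampling_distance(150, 5.2, 7.6, 4000)
print(f"GSD at 150 m: {gsd * 100:.1f} cm/pixel")  # → GSD at 150 m: 5.5 cm/pixel
```

With these assumed optics the raw figure comes out a little finer than the reported eight centimeters per pixel, which is consistent with a longer lens or resampling during mosaic generation.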

 


See Also:

  • Welcome to the Humanitarian UAV Network [link]
  • How UAVs are Making a Difference in Disaster Response [link]
  • Humanitarians Using UAVs for Post Disaster Recovery [link]
  • Grassroots UAVs for Disaster Response [link]
  • Using UAVs for Search & Rescue [link]
  • Debrief: UAV/Drone Search & Rescue Challenge [link]
  • Crowdsourcing Analysis of UAV Imagery for Search/Rescue [link]
  • Check-List for Flying UAVs in Humanitarian Settings [link]

Automatically Analyzing UAV/Aerial Imagery from Haiti

My colleague Martino Pesaresi from the European Commission’s Joint Research Centre (JRC) recently shared one of his co-authored studies with me on the use of advanced computing to analyze UAV (aerial) imagery. Given the rather technical nature of the title, “Rubble Detection from VHR Aerial Imagery Data Using Differential Morphological Profiles,” it is unlikely that many of my humanitarian colleagues have read the study. But the results have important implications for the development of next-generation humanitarian technologies that focus on very high resolution (VHR) aerial imagery captured by UAVs.

Credit: BBC News

As Martino and his co-authors note, “The presence of rubble in urban areas can be used as an indicator of building quality, poverty level, commercial activity, and others. In the case of armed conflict or natural disasters, rubble is seen as the trace of the event on the affected area. The amount of rubble and its density are two important attributes for measuring the severity of the event, in contribution to the overall crisis assessment. In the post-disaster time scale, accurate mapping of rubble in relation to the building type and location is of critical importance in allocating response teams and relief resources immediately after event. In the longer run, this information is used for post-disaster needs assessment, recovery planning and other relief activities on the affected region.”

Martino and team therefore developed an “automated method for the rapid detection and quantification of rubble from very high resolution aerial imagery of urban regions.” The first step in this model is to transfer the information depicted in images to “some hierarchical representation structure for indexing and fast component retrieval.” This simply means that aerial images need to be converted into a format that will make them “readable” by a computer. One way to do this is by converting said images into Max-Trees like the one below (which I find rather poetic).

max tree
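For readers wondering what a Max-Tree actually encodes: it is a hierarchy of nested connected components obtained by thresholding the image at every gray level, with brighter components nested inside darker ones. The toy sketch below, in plain Python and purely illustrative (the JRC implementation is far more efficient), builds that hierarchy for a tiny synthetic image:

```python
from collections import deque

# Toy grayscale image: brighter pixels (higher values) form nested "peaks".
IMG = [
    [0, 0, 0, 0, 0],
    [0, 2, 2, 0, 1],
    [0, 2, 3, 0, 1],
    [0, 0, 0, 0, 0],
]

def components_at(level, img):
    """Connected components (4-connectivity) of pixels with value >= level."""
    h, w = len(img), len(img[0])
    seen, comps = set(), []
    for y in range(h):
        for x in range(w):
            if img[y][x] >= level and (y, x) not in seen:
                comp, queue = set(), deque([(y, x)])
                seen.add((y, x))
                while queue:
                    cy, cx = queue.popleft()
                    comp.add((cy, cx))
                    for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                        if 0 <= ny < h and 0 <= nx < w and \
                           img[ny][nx] >= level and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            queue.append((ny, nx))
                comps.append(frozenset(comp))
    return comps

# The Max-Tree: one node per component per threshold level; a node's parent
# is the (larger) component one level down that contains it.
tree = {}
for level in range(1, 4):
    for comp in components_at(level, IMG):
        parent = next(p for p in components_at(level - 1, IMG) if comp <= p)
        tree[(level, comp)] = parent

print(len(tree))  # → 4 nodes: two at level 1, one at level 2, one at level 3
```

The analysis in the paper then works on attributes of these nodes (size, shape, contrast between parent and child) rather than on raw pixels, which is what makes fast component retrieval possible.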

The conversion of aerial images into Max-Trees enables Martino and colleagues to analyze and compare as many images as they’d like in order to identify which combinations of nodes and branches represent rubble. This pattern then enables the team to use statistical techniques to identify the rest of the rubble in the remaining aerial images, as shown below. The heat maps on the right depict the result of the analysis, with the red shapes denoting areas that have a high probability of being rubble.

rubble detector

The detection success rate of Martino et al.’s automated rubble detector was about 92%, “suggesting that the method in its simplest form is sufficiently reliable for rapid damage assessment.” The full study is available here and also appears in my forthcoming book “Digital Humanitarians: How Big Data Changes the Face of Disaster Response.”


 


What Humanitarians Can Learn from Conservation UAVs

I recently joined my fellow National Geographic Emerging Explorer Shah Selbe on the first expedition of SoarOcean, which seeks to leverage low-cost UAVs for ocean protection. Why did I participate in an expedition that seemingly had nothing to do with humanitarian response? Because the conservation space is well ahead of the humanitarian sector when it comes to using UAVs. To this end, we have a lot to learn from colleagues like Shah and others outside our field. The video below explains this further & provides a great overview of SoarOcean.

And here’s my short amateur aerial video from the expedition:

My goal, by the end of the year, is to join two more expeditions led by members of the Humanitarian UAV Network Advisory Board. Hopefully one of these will be with Drone Adventures (especially now that I’ve been invited to volunteer as “Drone Adventures Ambassador”, possibly the coolest title I will ever have). I’m also hoping to join my colleague Steve from the ShadowView Foundation in one of his team’s future expeditions. His Foundation has extensive experience in the use of UAVs for anti-poaching and wildlife conservation.

In sum, I learned heaps during Shah’s SoarOcean expedition; there’s just no substitute for hands-on learning and onsite tinkering. So I really hope I can join Drone Adventures and ShadowView later this year. In the meantime, big thanks to Shah and his awesome team for a great weekend of flying and learning.



Zoomanitarians: Using Citizen Science and Next Generation Satellites to Accelerate Disaster Damage Assessments

Zoomanitarians has been in the works for well over a year, so we’re excited to be going fully public for the first time. Zoomanitarians is a joint initiative between Zooniverse (Brooke Simmons), Planet Labs (Alex Bakir) and myself at QCRI. The purpose of Zoomanitarians is to accelerate disaster damage assessments by leveraging Planet Labs’ unique constellation of 28 satellites and Zooniverse’s highly scalable microtasking platform. As I noted in this earlier post, digital volunteers from Zooniverse tagged well over 2 million satellite images (of Mars, below) in just 48 hours. So why not invite Zooniverse volunteers to tag millions of images taken by Planet Labs following major disasters (on Earth) to help humanitarians accelerate their damage assessments?

Zooniverse Planet 4

That was the question I posed to Brooke and Alex in early 2013. “Why not indeed?” was our collective answer. So we reached out to several knowledgeable colleagues of mine including Kate Chapman from Humanitarian OpenStreetMap and Lars Bromley from UNOSAT for their feedback and guidance on the idea.

We’ll be able to launch our first pilot project later this year thanks to Kate, who kindly provided us with very high-resolution UAV/aerial imagery of downtown Tacloban in the Philippines. Why do we want this imagery when the plan is to use Planet Labs imagery? Because Planet Labs imagery is currently available at 3-5 meter resolution. We will therefore “degrade” the resolution of the aerial imagery to determine just what level and type of damage can be captured at various resolutions, as compared with the imagery from Planet Labs. The pilot project will thus serve to (1) customize & test the Zoomanitarians microtasking platform and (2) determine what level of detail can be captured at various resolutions.
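The “degrading” step is conceptually simple block averaging: each block of fine-resolution pixels is replaced by its mean, so that 8 cm pixels become roughly 3 m pixels at a factor of about 37. A minimal sketch, assuming the image is just a 2-D grid of pixel values:

```python
def degrade(pixels, factor):
    """Block-average a 2-D grid of pixel values by an integer factor,
    simulating a coarser ground resolution. Any partial blocks at the
    right/bottom edges are simply dropped."""
    h, w = len(pixels), len(pixels[0])
    out = []
    for by in range(0, h - h % factor, factor):
        row = []
        for bx in range(0, w - w % factor, factor):
            block = [pixels[y][x]
                     for y in range(by, by + factor)
                     for x in range(bx, bx + factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

fine = [[1, 1, 2, 2],
        [1, 1, 2, 2],
        [3, 3, 4, 4],
        [3, 3, 4, 4]]
print(degrade(fine, 2))  # → [[1.0, 2.0], [3.0, 4.0]]
```

A real pipeline would resample with a proper imaging library and account for the sensor’s point spread function, but the principle of comparing damage visibility across simulated resolutions is the same.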

PlanetLabs

We’ll then spend the remainder of the year improving the platform based on the results of the pilot project during which time I will continue to seek input from humanitarian colleagues. Zooniverse’s microtasking platform has already been stress-tested extensively over the years, which is one reason why I approached Zooniverse last year. The other reason is that they have over 1 million digital volunteers on their list-serve. Couple this with Planet Labs’ unique constellation of 28 satellites, and you’ve got the potential for near real-time satellite imagery analysis for disaster response. Our plan is to produce “heat maps” based on the results and to share shape files as well for overlay on other maps.
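One simple way to turn volunteer tags into such heat maps is to bin the geo-referenced clicks into grid cells and count agreement across volunteers; cells tagged independently by several people are more likely to reflect real damage. A sketch, with hypothetical coordinates and an assumed cell size:

```python
from collections import Counter

def heat_map(tags, cell_deg=0.01):
    """Bin (lat, lon) damage tags into ~1 km grid cells; cells tagged by
    several independent volunteers accumulate higher counts."""
    counts = Counter()
    for lat, lon in tags:
        counts[(round(lat / cell_deg), round(lon / cell_deg))] += 1
    return counts

# Hypothetical volunteer tags, loosely around Tacloban's coordinates:
tags = [(11.241, 125.001), (11.242, 125.002), (11.240, 125.003),
        (9.300, 123.100)]
hot = heat_map(tags)
print(hot.most_common(1))  # → [((1124, 12500), 3)]
```

Exporting the binned counts as shape files for overlay would then be a straightforward conversion of each cell into a polygon with its count as an attribute.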

It took imagery analysts well over 48 hours to acquire and analyze satellite imagery following Typhoon Yolanda. While Planet Labs imagery is not (yet) available at high-resolutions, our hope is that Zoomanitarians will be able to acquire and analyze relevant imagery within 12-24 hours of a request. Several colleagues have confirmed to me that the results of this rapid analysis will also prove invaluable for subsequent, higher-resolution satellite imagery acquisition and analysis. On a related note, I hope that our rapid satellite-based damage assessments will also serve as a triangulation mechanism (ground-truthing) for the rapid social-media-driven damage assessments carried out using the Artificial Intelligence for Disaster Response (AIDR) platform and MicroMappers.

While much work certainly remains, and while Zoomanitarians is still in the early phases of research and development, I’m nevertheless excited and optimistic about the potential impact—as are my colleagues Brooke and Alex. We’ll be announcing the date of the pilot later this summer, so stay tuned for updates!

Humanitarian UAV Network: Strategy for 2014-2015

The purpose of the Humanitarian UAV Network (UAViators) is to guide the responsible and safe use of small UAVs in humanitarian settings while promoting information sharing and enlightened policymaking. As I’ve noted in the past, UAVs are already being used to support a range of humanitarian efforts. So the question is not if, but rather how to facilitate the inevitable expanded use of UAVs in a responsible and safe manner. This is just one of many challenging questions that UAViators was created to manage.

UAViators Logo

UAViators has already drafted a number of documents, including a Code of Conduct and an Operational Check-List for the use of UAVs in humanitarian settings. These documents will continue to be improved throughout the year, so don’t expect a final and perfect version tomorrow. This space is still too new to have all the answers in a first draft. So our community will aim to improve these documents over time. By the end of 2014, we hope to have a solid version of the Code of Conduct for organizations and companies to publicly endorse.

OCHA UAV

In the meantime, my three Research Assistants (RA’s) and I are working on (the first ever?) comprehensive evaluation of 1) Small UAVs; 2) Cameras; 3) Payload Units; and 4) Imagery Software specifically for humanitarian field-workers. The purpose of this evaluation is to rate which technologies are best suited to the needs of humanitarians in the field. We will carry out this research through interviews with seasoned UAV experts coupled with secondary, online research. Our aim is to recommend 2-3 small UAVs, cameras, payload units and software solutions for imagery processing and analysis that make the most sense for humanitarians as end users. These suggestions will then be “peer reviewed” by members of the Humanitarian UAV Network.

Following this evaluation, my three RA’s and I will create a detailed end-to-end operational model for the use of UAVs in humanitarian settings. The model will include pre-flight guidance on several key issues including legislation, insurance, safety and coordination. The pre-flight section will also include guidance on how to program the flight-path of the UAVs recommended in the evaluation. But the model does not end with the safe landing of a UAV. The operational model will include post-flight guidance on imagery processing and analysis for decision support as well as guidelines on information sharing with local communities. Once completed, this operational model will also be “peer reviewed” by members of the UAViators.

Credit Drone Adventures

Both deliverables—the evaluation and model—will be further reviewed by the Advisory Board of UAViators and by field-based humanitarians. We hope to have this review completed during the Humanitarian UAV Experts Meeting, which I am co-organizing with OCHA in New York this November. Feedback from this session will be integrated into both deliverables.

Our plan is to subsequently convert these documents into training materials for both online and onsite training. We have thus far identified two sites for this training, one in Southeast Asia and the other in southern Africa. We’re considering a potential third site in South America depending on the availability of funding. These trainings will enable us to further improve our materials and to provide minimum level certification to humanitarians participating in said trainings. To this end, our long-term strategy for the Humanitarian UAV Network is not only to facilitate the coordination of small UAVs in humanitarian settings but also to provide both training and certification in collaboration with multiple humanitarian organizations.

I recognize that the above is highly ambitious. But all the signals I’m getting from humanitarian organizations clearly demonstrate that the above is needed. So if you have some expertise in this space and wish to join my Research Assistants and me in this applied and policy-focused research, then please do get in touch. In addition, if your organization or company is interested in funding any of the above, then do get in touch as well. We have the initial funding for the first phase of the 2014-2015 strategy and are in the process of applying for funding to complete the second phase.

One final but important point: while the use of many small and large UAVs in complex airspaces in which piloted (manned) aircraft are also flying poses a major challenge in terms of safety, collision avoidance and coordination, this obviously doesn’t mean that small UAVs should be grounded in humanitarian contexts with far simpler airspaces. Indeed, to argue that small UAVs cannot be responsibly and safely operated in simpler airspaces ignores the obvious fact that they already have—and continue to be responsibly & safely used. Moreover, I for one don’t see the point of flying small UAVs in areas already covered by larger UAVs and piloted aircraft. I’m far more interested in the rapid and local deployment of small UAVs to cover areas that are overlooked or have not yet been reached by mainstream response efforts. In sum, while it will take years to develop effective solutions for large UAV-use in dense and complex airspaces, small UAVs are already being used responsibly and safely by a number of humanitarian organizations and their partners.



Using MicroMappers to Make Sense of UAV Imagery During Disasters

Aerial imagery will soon become a Big Data problem for humanitarian response—particularly oblique imagery. This was confirmed to me by a number of imagery experts in both the US (FEMA) and Europe (JRC). Aerial imagery taken at an angle is referred to as oblique imagery, as opposed to vertical imagery, which is taken by cameras pointing straight down (like satellite imagery). The team from Humanitarian OpenStreetMap (HOT) is already well equipped to make sense of vertical aerial imagery. They do this by microtasking the tracing of such imagery, as depicted below. So how do we rapidly analyze oblique images, which often provide more detail vis-a-vis infrastructure damage than vertical pictures?

HOTosm PH

One approach is to microtask the tagging of oblique images. This was carried out very successfully after Hurricane Sandy (screenshot below).

This solution did not include any tracing, however, and was not designed to inform the development of machine learning classifiers that could automatically identify features of interest such as damaged buildings. Making sense of Big (Aerial) Data will ultimately require the combined use of human computing (microtasking) and machine learning. As volunteers trace features of interest, such as damaged buildings, in oblique aerial imagery, machine learning algorithms could learn to detect those same features automatically, provided enough examples are supplied. There is obviously value in doing automated feature detection with vertical imagery as well. So my team and I at QCRI have been collaborating with a local Filipino UAV start-up (SkyEye) to develop a new “Clicker” for our MicroMappers collection. We’ll be testing the “Aerial Clicker” below with our Filipino partners this summer. Incidentally, SkyEye is on the Advisory Board of the Humanitarian UAV Network (UAViators).

Aerial Clicker

Aerial Clicker 2

SkyEye is interested in developing a machine learning classifier to automatically identify coconut trees, for example. Why? Because coconut trees are an important source of livelihood for many Filipinos. Being able to rapidly identify trees that are still standing versus uprooted would enable SkyEye to quickly assess the impact of a Typhoon on local agriculture, which is important for food security & long-term recovery. So we hope to use the Aerial Clicker to microtask the tracing of coconut trees in order to significantly improve the accuracy of the machine learning classifier that SkyEye has already developed.
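To illustrate the general idea (this is not SkyEye’s actual classifier), here is a minimal nearest-centroid sketch: features extracted from volunteer-traced patches train one centroid per class, and new patches are assigned to the nearest class. The feature names and all numbers below are invented for illustration:

```python
def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def train(examples):
    """examples: label -> list of feature vectors from traced patches."""
    return {label: centroid(vecs) for label, vecs in examples.items()}

def classify(model, vec):
    """Assign a patch to the class with the nearest centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], vec))

# Hypothetical 2-D features per patch: (greenness, crown circularity).
model = train({
    "standing": [(0.8, 0.9), (0.7, 0.8)],
    "uprooted": [(0.4, 0.2), (0.5, 0.3)],
})
print(classify(model, (0.75, 0.85)))  # → standing
```

The point of the microtasked tracing is exactly to grow the labeled examples that feed `train`, which is why more volunteer traces should translate into a more accurate classifier.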

Will this be successful? One way to find out is by experimenting. I realize that developing a “visual version” of AIDR is anything but trivial. While AIDR was developed to automatically identify tweets (i.e., text) of interest during disasters by using microtasking and machine learning, what if we also had a free and open source platform to microtask and then automatically identify visual features of interest in both vertical and oblique imagery captured by UAVs? To be honest, I’m not sure how feasible this is vis-a-vis oblique imagery. As an imagery analyst at FEMA recently told me, this is still a research question for now. So I’m hoping to take this research on at QCRI but I do not want to duplicate any existing efforts in this space. So I would be grateful for feedback on this idea and any related research that iRevolution readers may recommend.

In the meantime, here’s another idea I’m toying with for the Aerial Clicker:

Aerial Clicker 3

I often see this in the aftermath of major disasters: affected communities turning to “analog social media” to communicate when cell phone towers are down. The aerial imagery above was taken following Typhoon Yolanda in the Philippines, and it is just one of several dozen images with analog media messages that I came across. So what if our Aerial Clicker were to ask digital volunteers to transcribe or categorize these messages? Since every image is already geo-referenced, this would enable us to quickly create a crisis map of needs based on the transcribed content. Thoughts?
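A first cut at that crisis map could be as simple as grouping the transcribed, categorized messages by need, keyed to each image’s coordinates. A sketch with hypothetical records (the coordinates and texts are illustrative, not real transcriptions):

```python
def crisis_map(messages):
    """Group transcribed 'analog' messages by need category; each record is
    (lat, lon, category, transcription) from one geo-referenced image."""
    by_need = {}
    for lat, lon, category, text in messages:
        by_need.setdefault(category, []).append(
            {"lat": lat, "lon": lon, "text": text})
    return by_need

# Hypothetical volunteer transcriptions:
msgs = [
    (11.24, 125.00, "water", "NEED WATER"),
    (11.25, 125.01, "food", "HELP FOOD"),
    (11.24, 125.02, "water", "WATER PLS"),
]
needs = crisis_map(msgs)
```

Each category’s point list could then be rendered as a map layer, giving responders a per-need view derived directly from what affected communities wrote on the ground.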



Debrief: UAV/Drone Search and Rescue Challenge

I had the pleasure of helping co-organize the first UAV/Drone Search and Rescue Challenge in the DC Area last Saturday. This was the first time that members of the DC Area Drone User Group participated in an event like this, so it was an ideal opportunity for everyone involved to better understand how UAVs might be used in a real-world emergency in support of professional first responders. The challenge was held at the 65-acre MadCap Farm in The Plains, Virginia. For perspective, 65 acres is equal to about 30 full-size football (soccer) fields.

Madcap Farm

Satellite view of MadCap Farm above versus aerial view below during the UAV Search and Rescue Challenge.

Madcap Farm 2

Big thanks to our host and to Timothy Reuter who organized and ran the challenge; and of course many thanks indeed to all five teams who participated in the challenge. One colleague even flew in from Texas to compete in the event, which was sponsored by UAViators, the Humanitarian UAV Network. I described the rules of the challenge in this post but let me briefly summarize these here. Teams were notified of the following “alert” the night before the challenge:

“We have received reports of three lost campers in the vicinity of MadCap Farms. Local Search & Rescue professionals have requested our help to find them. Please report to the front field of MadCap no later than 9:15am for additional details on the campers and efforts to locate them. You will receive a laminated map of the area upon your arrival as well as a wax pen. We ask that you use your drones to identify objects that may help local responders determine where the campers are and ideally find the campers themselves. You will mark on the maps you receive what items you find, their color, and any people you identify. If any of the campers are trapped, you may need to deliver some form of medicine or other relief to them in advance of first responders being able to aid them in person.”

UAVteams

Upon reporting to the farm the following morning, the teams (pictured above) were notified that the campers were teenagers who were carrying sleeping bags and tents. In reality, our three lost campers were the cardboard stand-ups below, but Timothy had already hidden these and scattered their belongings by the time participants arrived at the farm. By the way, notice all the clouds in the picture above? Cloud cover like this would have hampered efforts to use satellite imagery in the search and rescue efforts. UAVs, in contrast, fly below the cloud canopy and can provide far cheaper and more up-to-date imagery at far higher spatial resolutions than is available even from the best commercial satellites.

LostCampers2

As a side note, I was really pleased to see the Civil Air Patrol (CAP) at the Search and Rescue Challenge. The Air Patrol is a federally supported non-profit volunteer-based organization that serves as the official civilian auxiliary of the US Air Force. CAP members share a passion for aviation and come from all backgrounds and walks of life.

UAV_CAP

Back to the Challenge. Each team had an hour to fly their UAVs and another hour to search through their aerial videos and/or images post-flight. Two of the five teams used fixed-wing UAVs, like the group below, which kicked off our Search & Rescue Challenge.

UAVgroup1

They decided to program their UAV for autonomous flight. You can see the flight path below with specified altitude and the different way points (numbers) in the top-right screen (click to enlarge).

UAVgroup1 autonomous
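Survey flight paths like the one pictured are typically “lawnmower” (boustrophedon) patterns: parallel legs spaced so that successive image strips overlap. A minimal way-point generator, purely illustrative of the idea rather than any team’s actual ground-station software:

```python
def survey_waypoints(width_m, height_m, altitude_m, spacing_m):
    """Boustrophedon ('lawnmower') way-points covering a width x height
    rectangle at a fixed altitude. Leg spacing should not exceed the
    camera's ground footprint minus the desired side overlap."""
    waypoints, x, leg = [], 0.0, 0
    while x <= width_m:
        # Alternate the leg direction so the aircraft sweeps back and forth.
        ys = (0.0, height_m) if leg % 2 == 0 else (height_m, 0.0)
        for y in ys:
            waypoints.append((x, y, altitude_m))
        x += spacing_m
        leg += 1
    return waypoints

# A 300 m x 200 m field at 100 m altitude with 50 m between legs:
wps = survey_waypoints(300, 200, 100, 50)
print(len(wps))  # → 14 way-points across 7 legs
```

Real autopilots take such way-points in geographic coordinates (e.g. via a mission upload), but the coverage geometry is the same.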

Here’s a short 20-second video of the hand-held launch of the fixed-wing UAV. Once airborne, the team simply switches to auto-pilot and the UAV does the rest, accurately following the pre-programmed flight path.

The team decided to analyze their streaming aerial video in real-time, as you can observe in the second video below. While this certainly expedites the analysis and the search for the missing campers, it is also challenging since the team has to pivot back and forth between the live video and the flight path of the UAV in order to pin-point the location of a potential camper or their tent. Unlike rotary-wing UAVs, fixed-wing UAVs obviously cannot hover over one area but need to circle back to fly over the same area.

My colleague Michael and his co-pilot programmed a quadcopter to fly to designated waypoints at a specified altitude. They too used live-streaming to look for clues that could reveal the location of the three missing campers. But they also recorded the video-feed for later analysis, which proved far more effective at identifying said clues. In any event, they used First Person View (FPV) goggles to see exactly what the quadcopter’s camera was seeing, as depicted below.

FPV Goggles Quadcopter

In addition to searching for the whereabouts of the missing campers, Timothy and I decided to add a bit more focus on the “rescue” part of Search & Rescue. My colleague Euan kindly gave us a number of his new payload units, which are basically a pair of magnets that can be demagnetized by passing a small electric current through said magnets, thus acting as a release mechanism. Euan posted this short video of his prototype payload units in action during a first test earlier this year. Competing teams could earn additional points if they were able to carry a small payload (of their choice) and release this near any of the cardboard stand-ups they could find.

UAV payload unit

Some teams used Euan’s units while others used their own, like the device pictured above. Here’s a short video of a payload release (with parachute) during the competition.

At the end of the competition, we all came together for a full debrief and of course to count up points. Timothy asked each team to share what they felt went well and what some of the major challenges were. The picture below shows some of the items (sleeping bags, clothing, etc.) that were scattered around the farm.

UAV debrief

Perhaps the single biggest challenge was altitude. Given that we were in a valley surrounded by rolling hills, it was difficult for competing teams to judge at what altitude their UAVs should be programmed to fly since we couldn’t see over the next hill to determine whether there were taller trees in the distance.

UAVtrees

Flying too high would make it more difficult to identify the potential campers on the ground, while flying too low would mean running into trees. Unfortunately, two teams encountered the latter problem, but both UAVs were eventually recovered. This highlights the importance of developing automatic collision avoidance systems (ACAS) specifically for UAVs. In addition, if UAVs are to be used for Search and Rescue efforts in forested areas, it would pay to have a back-up quadcopter to rescue any UAVs caught in taller trees. One could attach a hanger to said quadcopter to unhook UAVs out of trees. The picture below was taken by a camera fixed to a quadcopter that hit the top of a tree. Yes, we all had a good laugh about the irony of sending UAVs to rescue other UAVs.

StuckQuadUAV
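A simple planning aid for exactly this altitude problem is to compute the minimum safe flight altitude from whatever terrain and tree-top heights are known along the route. A sketch, with invented heights relative to the launch point:

```python
def safe_altitude(terrain_profile_m, clearance_m=30):
    """Minimum launch-relative altitude that keeps the given clearance above
    every terrain/tree-top sample along the planned path. Heights are
    measured relative to the launch point."""
    return max(terrain_profile_m) + clearance_m

# Hypothetical profile: rolling hills with tree tops up to 25 m above launch.
profile = [0, 5, 12, 25, 18, 7]
print(safe_altitude(profile))  # → 55
```

In a valley where the next hill cannot be seen, the profile would come from elevation data (plus an assumed tree-height margin) rather than visual inspection, which is precisely what the teams lacked on the day.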

The debrief also revealed that most teams were able to find more items post-flight after going through their recorded video footage. My colleague Michael noted that finding signs of the campers was “like looking for a needle in a haystack.” One team noted that live video feeds have a limited range, which hampered their efforts. Another team remarked that one can never have enough batteries on hand; wind conditions, for example, can easily shorten a battery’s endurance. The importance of pre-flight check-lists was reiterated, as was the need to clearly spell out safety protocols before a challenge.

UAViators Logo

I’ll be sharing this debrief and lessons learned with my humanitarian colleagues at the United Nations and the Red Cross, as well as with members of the Advisory Board of the Humanitarian UAV Network (UAViators). Keep in mind that groups like UNICEF, UNHCR and the UN Office for the Coordination of Humanitarian Affairs (OCHA) have not yet begun to experiment hands-on with UAVs to support their relief efforts, so all of the above will be very new to them, just as it was to most teams who participated in the challenge. This kind of hands-on learning will therefore be of interest to humanitarian groups looking to explore this space.

Counting Points UAVs

We counted up the points from each team’s map (like the one above) after the debrief and congratulated the winning team pictured below. They were the only team that found all three missing campers along with some of their belongings.

Winning UAV

Big thanks again to our hosts at MadCap Farm, to Timothy Reuter and all participants for spending a fun Saturday outdoors trying something new. We certainly learned some valuable lessons and in the process made new friends.

The short video above was produced by CCTV America, a news television channel that reported on the Search & Rescue Challenge.


Acknowledgements: Many thanks to Timothy Reuter and Michael Ender for their feedback on an earlier draft of this blog post.

See also:

  • How UAVs are Making a Difference in Disaster Response [link]
  • Humanitarians Using UAVs for Post Disaster Recovery [link]
  • Grassroots UAVs for Disaster Response [link]
  • Using UAVs for Search & Rescue [link]
  • Crowdsourcing Analysis of UAV Imagery for Search/Rescue [link]