
There are much more useful real-world examples of applied probabilistic thinking in search and rescue.

In conditions where you don't have direct contact with the search subject (and, sometimes, even if you do), there are a ton of factors to account for when deciding where to deploy resources and which resources to deploy.

Most search managers rely on gut feelings and experience, and in many cases, that works okay. They're familiar with the "hot spots" in their area where subjects tend to get turned around or hurt (or turn up if they've got a mental disability).

But there are plenty of searches where those tools fail them completely, and in those cases I've seen searches stumble badly and a ton of resources get wasted as the manager re-deploys teams to the same places over and over, certain each time that the subject is there and was just somehow missed by previous teams.

My favorite search-related text so far is "Lost Person Behavior", an analysis of the behaviors of past search subjects. It's far from perfect -- in some cases, it's relying on very small amounts of data or on data that's relevant to a specific area only -- but it's all we've got at the moment, it's a step in the right direction, and it's been right for the most part, even when the search manager was wrong.

I've also developed a personal rule of thumb: going into a search, assume 40% of the information we've got is wrong. Bad information is the result of a lot of hands touching the info before we ever get it. Everyone tries to be helpful, and they suggest things that then become facts. These bits of misinformation can misdirect searches really badly, so it's good practice to review all of the information you've got right away and try to identify the bits that are most likely to be incorrect.
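To make that concrete: even a modest per-clue error rate means the odds that the whole picture is right collapse quickly. Here's a toy calculation (it assumes the clues are independent, which real clues rarely are; the numbers are just for illustration):

    # Toy arithmetic behind the "40% is wrong" rule: the chance that every
    # clue is correct if each one is independently right 60% of the time.
    # (Real clues aren't independent; this only shows how fast it decays.)
    for n_clues in range(1, 6):
        p_all_correct = 0.6 ** n_clues
        print(n_clues, round(p_all_correct, 3))
    # With 5 clues, the odds that the whole picture is right are under 8%.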

In the first quarter of 2017, we had a plane go down in a neighboring county that didn't have the resources to manage the search, so we handled it. It turned into a major search spanning almost a week, with CalOES on-site along with the air national guard, civil air patrol, a half-dozen other agencies, and hundreds of feet on the ground. Around day 3, I decided to re-review the data: there was a flight track from radar just before the aircraft disappeared, there was an eyewitness who heard a loud aircraft engine but couldn't see it because of heavy cloud cover, and there was an intermittent ELT signal. We had some information on the conditions at the time. The area around the ELT had been searched, re-searched, and was about to be searched again. Given that info, what would you do as a search manager?

The eyewitness account was from about the right time of day, and the guy didn't seem over-helpful, so it was probably reliable; but all it told us was that there might have been an aircraft in the area at the time.

I decided to dismiss the ELT entirely, and instead reviewed the terrain, compared it to the flight track, and talked to a pilot. We assumed the pilot was capable and that they had a technical issue with the aircraft; given that, the flight track suggested they attempted to turn back, and then changed their mind. There was a narrow mountain pass just ahead of the end of the flight track, and then a long valley that suddenly ended in mountains. And that's approximately where the plane was eventually found, many miles from the ELT reports.

I reasoned that once the ELT area had been searched, the probability of the aircraft being there dropped considerably, and therefore the ELT was misleading the search. So: where would we search if there were no ELT?
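For anyone curious, the math behind that reasoning is the standard Bayesian search-theory update: an unsuccessful sweep of a region moves probability mass away from it in proportion to the probability of detection (POD) of that sweep, and onto everywhere else. Here's a minimal sketch; the region names and numbers are made up for illustration and have nothing to do with the actual search, and real planning tools work on much finer grids:

    # A minimal sketch of the Bayesian update behind "the ELT area has been
    # searched, so the probability the aircraft is there has dropped."
    # Regions and numbers below are hypothetical, purely for illustration.

    def update_after_unsuccessful_search(priors, searched, pod):
        """Renormalize containment probabilities after a region is searched
        without finding the subject.

        priors:   dict of region -> prior probability of containment
        searched: the region that was just searched
        pod:      probability of detection for that sweep (0..1)
        """
        # Probability the sweep comes up empty: either the subject isn't
        # there, or they are there and the team missed them.
        p_empty = 1 - priors[searched] * pod
        posteriors = {}
        for region, p in priors.items():
            if region == searched:
                posteriors[region] = p * (1 - pod) / p_empty
            else:
                posteriors[region] = p / p_empty
        return posteriors

    # Hypothetical numbers: the ELT area starts out as the clear favorite.
    priors = {"ELT area": 0.6, "mountain pass": 0.25, "valley head": 0.15}
    after_one = update_after_unsuccessful_search(priors, "ELT area", pod=0.7)
    after_two = update_after_unsuccessful_search(after_one, "ELT area", pod=0.7)
    print(after_two)
    # After two clean sweeps the ELT area is no longer the most likely region,
    # which is the formal version of "dismiss the ELT and re-ask the question."

The point isn't the specific numbers; it's that repeated unsuccessful searches of the same spot should change where you look next, instead of reinforcing the decision to look there again.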

This has been a recurring pattern in a lot of searches over the last couple of years, and I hope someday a sea change happens in search management that incorporates more of this approach.


