
IMHO the important question is, as consumers of expertise, how do we manage its risk? Can we predict the risk, the likelihood and/or magnitude of failures?

One issue might be our standards of 'success' and 'failure' for economists and other experts. The standard probably shouldn't be 'predict 100% of economic events accurately', though that's an ideal to strive for. IIRC, some geologists were put on trial in Italy for failing to predict an earthquake; AFAIK there's no known way to do that reliably. Is 'predict 75% accurately' realistic for economics? Too high? Too easy? Is the thing even measurable? I don't know. And when the housing market crashes, saying 'I got the other 3 predictions right' doesn't matter much to people, nor is it evidence that you weren't incompetent in this case. Maybe a standard should be 'no incorrect predictions on critical issues' (i.e., it's better to say nothing than risk being wrong), but maybe that would result in no predictions on critical issues at all; also, in many situations such as earthquakes, making no prediction implies the null hypothesis: no earthquakes today. Regardless, any standard should probably account for the degree of error: the housing market predictions weren't off by a few percent but by orders of magnitude.
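To illustrate that last point, here's a toy sketch (my own illustration; the forecasts, outcomes, and scoring rules are all invented) of how a hit-rate standard like 'predict 75% accurately' can look respectable while hiding one enormous miss, whereas a standard based on degree of error exposes it:

```python
# Hypothetical forecasts vs. outcomes (all numbers invented for illustration).
# Three small misses and one huge miss (a housing-crash-sized error).
predictions = [2.0, 3.1, 1.8, 5.0]    # e.g., predicted % change
outcomes    = [2.1, 3.0, 1.9, -30.0]  # actual % change

# Hit-rate standard: a forecast "counts" if it lands within some tolerance.
tolerance = 0.5
hits = sum(abs(p - o) <= tolerance for p, o in zip(predictions, outcomes))
hit_rate = hits / len(predictions)

# Degree-of-error standard: mean absolute error, dominated by the big miss.
mean_abs_error = sum(abs(p - o) for p, o in zip(predictions, outcomes)) / len(predictions)

print(f"hit rate: {hit_rate:.0%}")                    # 75% - looks respectable
print(f"mean absolute error: {mean_abs_error:.1f}")   # ~8.8 - reveals the blow-up
```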

Can we create a model that accurately predicts where expertise fails and by how much? Perhaps large errors simply occur in unstable systems with high-magnitude variation in outcomes (e.g., earthquake or no earthquake), and are therefore unavoidable there. Also, I'd expect accuracy to correlate with (high volume of quality research + consensus). If there's consensus based on a low volume of quality research, that seems ripe for failure. Based on only a little research, it was popularly said in the 1970s (I don't know whether it was a consensus of scientists) that there would be an ice age soon; that was wrong. Based on a mountain of research, scientific consensus has accurately predicted global warming. But I'm guessing at the factors involved in predicting the risk of expertise. Has anyone researched this?



If you could create such a model, you'd of course become very rich if you put your money where your mouth is and turned out to be correct. This type of contrarian investor exists (i.e., black swan investors). The problem is that their funds lose money year after year, and people think they're crazy until some unforeseen crisis hits and they suddenly become "prescient".
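To make that payoff profile concrete, here's a toy sketch (all numbers invented, not tied to any real fund): the strategy has positive expected value, yet almost every individual year is a small loss.

```python
import random

# Invented numbers for illustration: a "black swan" strategy bleeds a small
# premium in normal years and pays off hugely in a rare crisis year.
normal_year_pnl = -0.02      # lose 2% of capital in a typical year
crash_year_pnl = 0.60        # gain 60% in a crisis year
crash_probability = 0.05     # roughly one crisis every 20 years

# Positive on average...
expected_annual_pnl = ((1 - crash_probability) * normal_year_pnl
                       + crash_probability * crash_year_pnl)
print(f"expected annual P&L: {expected_annual_pnl:+.3f}")   # +0.011

# ...yet most sampled decades show little but a string of small losses,
# which is exactly when such funds look "crazy".
decade = [crash_year_pnl if random.random() < crash_probability else normal_year_pnl
          for _ in range(10)]
print(decade)
```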

In game theory there is a growing body of research on ambiguity and how people deal with it. Ambiguity refers to sources of uncertainty whose probabilities are unknown, yet people still prefer one over another. A well-known example is the Ellsberg paradox [0]. In this paradox there are two urns: one contains 50 red and 50 black balls, and the other contains 100 balls in an unknown proportion of red to black. People often prefer to bet on the known urn (even on complementary bets), leading to probabilistic contradictions (i.e., the implied subjective probabilities stop summing to one).
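A minimal sketch of that contradiction (my own illustration; the specific numbers are assumed):

```python
# Known urn: 50 red / 50 black, so P(red) = P(black) = 0.5.
# If a bettor strictly prefers the known urn both for a bet on red AND for a
# bet on black, their implied beliefs about the unknown urn must satisfy
# P(red) < 0.5 and P(black) < 0.5 -- but red and black are the only colors,
# so those beliefs cannot sum to one.

p_red_known = 0.5
p_black_known = 0.5

# Implied subjective probabilities for the unknown urn (assumed values;
# any pair strictly below 0.5 exhibits the same problem).
p_red_unknown = 0.45
p_black_unknown = 0.45

total = p_red_unknown + p_black_unknown
print(f"implied probabilities for the unknown urn sum to {total}")  # 0.9 < 1
```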

There's also a lot of research on behavioral finance, and in particular behavioral macro; these models show that even with only a small number of heterogeneous agent types you can get chaotic dynamics.
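For what it's worth, here's a toy sketch of the kind of mechanism these models use (my own construction, not any specific published model): two forecasting rules, fundamentalists and trend-followers, whose market shares shift with recent performance. The heterogeneous-agent switching models in the literature are where chaotic dynamics are actually established; this toy only illustrates the ingredients, and whether any given parameterization is truly chaotic would have to be checked.

```python
import math

# Toy two-type market (invented parameters): fundamentalists buy when the
# price is below its fundamental value, chartists chase the latest trend,
# and agents shift toward whichever rule just made money.
def simulate(steps=200, fundamental=1.0, p0=1.02, p1=1.05,
             k_fund=0.6, k_chart=2.5, speed=1.0, intensity=3.0):
    prices = [p0, p1]
    prev_demand_fund = 0.0
    prev_demand_chart = 0.0
    for _ in range(steps):
        p_prev, p = prices[-2], prices[-1]
        # realized one-step gain of each rule's previous position
        ret = p - p_prev
        profit_fund = prev_demand_fund * ret
        profit_chart = prev_demand_chart * ret
        # agents switch toward the more profitable rule (smooth, bounded rule)
        share_chart = 0.5 * (1.0 + math.tanh(intensity * (profit_chart - profit_fund)))
        # current demands: mean reversion vs. (bounded) trend chasing
        demand_fund = k_fund * (fundamental - p)
        demand_chart = k_chart * math.tanh(p - p_prev)
        # price moves with aggregate excess demand
        excess = (1.0 - share_chart) * demand_fund + share_chart * demand_chart
        prices.append(p + speed * excess)
        prev_demand_fund, prev_demand_chart = demand_fund, demand_chart
    return prices

print([round(x, 3) for x in simulate()[-8:]])  # inspect the tail of the trajectory
```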

The point I'm trying to make is that we, as economists, create models based on certain assumptions that seem to work well, until they don't.

[0] https://en.wikipedia.org/wiki/Ellsberg_paradox


Why trust expertise as such? If an expert can't give a convincing argument for their claims, why believe them?


> If an expert can't give a convincing argument for their claims, why believe them?

IME, there's only a weak correlation between an expert's persuasiveness to a non-expert on the one hand and the truth on the other:

* As an expert in my field, I could convince non-experts of almost anything; they have no idea what I'm talking about: Is it true? Have I omitted key things? Twisted other things? I wouldn't do that, but I've seen others do it. As an example we're all familiar with, U.S. intelligence officials have said, 'we're only collecting metadata about U.S. citizens, not content, so don't worry'. Obviously these experts knew that bulk collection of metadata is just as invasive as collecting content, but the claim convinced the non-experts who don't understand that.

* In fields in which I'm a non-expert, I used to think I could evaluate expert claims based on their persuasiveness. I was wrong - I was a mark, a sucker; I was the kind of person that propagandists, experts in persuasion, count on; my overestimation of my own powers, my ego, was my weakness. Eventually I observed a pattern: later, when more facts came out or I knew more, what had been persuasive turned out to be BS. And a key point: it hadn't become wrong; it was always wrong, and I had been conned by it. And what about the situations where I never learned I was wrong, and the con continued indefinitely? In fields where I read a variety of experts and have some minor sophistication, I've learned that newspaper op-eds, which many find persuasive (people love to send them to me to read), are not infrequently dogs-t piled on a foundation of horses-t, with a few grains of truth sprinkled on top.

IME, the general wisdom that most people gain through years of painful experience is that persuasiveness has a small place; what matters far more is learning who to trust with what, who not to, and how to tell the difference. That's the only solution.


Obviously that's a risk, but so is blind faith in trusted experts. I don't think your examples really relate to this situation, because there wasn't a convincing expert argument that the housing bubble was economically sustainable; the arguments that it was unsustainable were simply ignored by trusted experts, for the most part. I mean, I think I found Nouriel Roubini through a post by Brad DeLong, but I don't think DeLong ever actually addressed his concerns.


> blind faith in trusted experts

C'mon. There is no way anyone could actually read my original comment and think I advocated blind faith in experts.


Fair enough. I didn't intend to attack that straw man.



