Always worth mentioning: the fact that most people think of themselves as above-average drivers is not a delusion; given quite reasonable assumptions as to what constitutes "being a good driver", it happens to be true that most people are above-average drivers.
The average number of legs per person is slightly less than two, so if you have two legs, you have an above-average number of legs. Thus, nearly everybody in the population has an above-average number of legs.
Similarly: The number of car accidents the average driver causes per year is fractionally more than zero, so if you caused no accidents last year you were "an above average driver". Some measurable attributes just happen to be "chunky" in that way - they don't follow a normal distribution.
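To make the "chunky" arithmetic concrete, here is a minimal sketch in Python; the population counts are invented purely for illustration:

    # Hypothetical population: almost everyone has two legs,
    # a small (made-up) number have one.
    population = 1_000_000
    one_legged = 1_000  # assumption, for illustration only

    total_legs = (population - one_legged) * 2 + one_legged * 1
    mean_legs = total_legs / population
    print(mean_legs)      # 1.999
    print(2 > mean_legs)  # True: two legs is above the mean

Almost the entire population sits above the mean, because the handful of low values drags the mean below the typical value.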
But see, these aren't even reasonable measures. In the case of legs, there is a hard upper bound, which shifts the useful definition of "average" away from the mean. This is a fairly intuitive thing.
Similarly with driving: accidents caused per year divided by the number of drivers that year is a terrible measure. It makes no sense because it doesn't take the person's entire driving record into account. A more intuitive measure might be the person's total number of accidents compared to the average driver's total. (Of course, this makes people who have been hit by someone blowing a stop sign look like bad drivers, even though they had no major input into the accident. Again intuitive: have this conversation with someone not well versed in stats and this exception will be outed pretty quickly.)
Basically, the fact that both of these measures are considered reasonable suggests that the people who think this don't really have the wherewithal (the knowledge or expertise) regarding legs or accidents to properly assess what makes sense.
Whether or not you take into account the entire driving record (which you arguably should not since people's driving ability changes over time), another relevant fact is that some people are unusually bad drivers - causing lots and lots of accidents or near-accidents all the time. The few drivers who are actively a menace to everyone else make the rest of us look good by comparison. There's an upper limit on how good a driver you can be and most drivers aren't terribly far from that limit - they know their abilities and drive within them most of the time. But there's no strong lower limit on how bad a driver you can be. So we all grok intuitively that a few people who are drunk, high, angry, near-blind, aggressive or just terribly lacking in judgement are way over there on the other side of the driving-well scale from us; that makes us "above average drivers".
Actually, by your argument, there is a strong upper bound. Therefore, by basic statistical reasoning, the mean is no longer a remotely valid measure of "average"; the mode is much better in this case, and so the "average" still covers most people. If you are strictly defining "average" as the mean, then sure, nearly everyone is an above-mean driver. But if you take "average" in a statistically relevant sense, it means the value that typifies most of the population: the typical value. (Hint: if it is the typical value, most people can't be better than typical; that is definitionally impossible.) Basically, I'm taking the long way around to tell you to stop being disingenuous if you want to make an argument.
The article talks about the measurement of skill. The number of accidents that a driver has caused in the past year is not a good measurement of skill. Essentially the difference you bring up is whether something can be expected to fit a normal distribution (where the average is by definition in the middle). It's a reasonable assumption that skill will fit a normal distribution, but the number of accidents caused in the past year is much more likely to fit a Poisson distribution.
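A quick simulation shows the contrast; the accident rate of 0.1 per driver per year is an assumption chosen only to expose the shape of the distribution:

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed rate: 0.1 accidents per driver per year (illustrative only).
    accidents = rng.poisson(lam=0.1, size=100_000)

    mean_acc = accidents.mean()
    # "Better than average" here means fewer accidents than the mean.
    print((accidents < mean_acc).mean())  # roughly 0.9

With a Poisson distribution at that rate, about 90% of drivers cause zero accidents in a year, so about 90% come out "above average" by this measure.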
Most people might be better than the average driver, but not better than the typical (median) driver (same goes for legs).
I'd say that the typical is a better reference point than the average, since the counter-intuitive effects you describe disappear, and the reference point "1.9999995 legs/person" is pretty useless.
I'd say it depends. Even "average" can be several different things depending on your definitions. More precise than "average" is "mean", and even better than that is the "arithmetic mean". Typical is typical in its ability to be misunderstood.
"Average" is a term which can be claimed by the arithmetic mean (the word you're confusing it with), the median (midpoint value), or mode (most frequently occurring value).
It's also "quite reasonable" to correct for outliers like those who cause accidents. If you drop those people off your dataset, then your distribution is a flat line because it's too lacking in nuance to describe anything useful.
The meaning of 'average' does not depend on context but on:
- the distribution
- whether by 'average' you mean the 'mean' (in statistics, the arithmetic mean) or the 'median' (the value that cuts the population in half)
The GP gives an example of people being above average by having two legs. I will take a similar example but in a classroom.
Say students in a classroom take a test scored from 0 (worst) to 20 (best). The results are as follows: forty students get an 11 and ten get a 1. That makes a mean of (11×40 + 1×10)/50 = 9. Hence all forty students are above average. If we plot the number of people per mark, this gives a very special (and artificial) distribution, since we made it up to show that it is possible for most people to be above average. In more realistic scenarios, and unless skewed by external factors (bad test, wrong population, cheating), the distribution is often close to normal, i.e. the mean and median are approximately the same (hence it is impossible to have 80% of people above the mean).
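The arithmetic checks out, as a few lines of Python confirm:

    from statistics import mean, median

    # Forty students score 11, ten score 1.
    scores = [11] * 40 + [1] * 10
    m = mean(scores)                    # (11*40 + 1*10) / 50 = 9
    print(m, median(scores))            # 9, 11
    print(sum(s > m for s in scores))   # 40 of 50 students are above the mean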
Yet what's really interesting in Dunning-Kruger tests is not the actual test results; it is the comparison between one's own assessment and one's real results, which actually abstracts away the distribution problems and 'average' discussions, since each person is compared to themselves. In the wacky scenario above, we could have expected the people getting a 1 to evaluate themselves as scoring an 8.
If you don't want to read the whole paper, just look at the quartile/percentile graphs in the paper. They speak for themselves.
When the vast majority of people fall into the 60-100 range, but it's still possible to get zeros (e.g. from cheating), you don't end up with a normal distribution. This is one of the reasons most standardized tests report percentile results.
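Percentile ranks are easy to compute and sidestep the shape of the distribution entirely; the raw scores below are invented for illustration:

    import numpy as np

    # Hypothetical scores: clustered in 60-100, plus a few zeros (cheating).
    scores = np.array([0, 0, 62, 68, 71, 75, 78, 80, 84, 88, 92, 97])

    def percentile_rank(x, data):
        # Share of scores strictly below x, as a percentage.
        return 100.0 * (data < x).mean()

    for s in (62, 80, 97):
        print(s, percentile_rank(s, scores))  # ~17%, ~58%, ~92%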
IMO, the Dunning-Kruger effect relates to this. Compared to people who regularly ride horses, I am probably in the bottom 5th percentile. However, I am better than most people, because most people have never ridden a horse. As people improve, the pool of people they compare themselves to shrinks faster than their skills improve. Being the worst player in the NBA makes you better than 99.99999% of people, but that does not let you keep your job.
>How are we to parse all this information? Do any of these people know what they are talking about? And if anyone does, how can we know which ones to listen to? The research of Dunning and Kruger may well tell us there is no way to figure out the answers to any of these questions.
Yikes, that's a stretch. Dunning and Kruger's work tells us nothing about our ability to assess others' competence; just our ability to assess our own. This idea of trying to apply that to arrive at "how do we know which experts to listen to?" is really reaching.
It's turtles all the way down. Incompetence includes over-estimating our ability to evaluate experts (e.g. acting upon or ignoring legal advice from web forums).
> Do any of these people know what they are talking about? And if anyone does, how can we know which ones to listen to?
I must confess to bailing out here.
The purpose of democratic government is to secure the consent of the governed, not to find the best experts to make the best decisions. People have to agree on stuff, and that means people who don't understand issues have to agree on them anyway. That's much more important than intelligence and competence. In fact, I'd argue that for many of the complexities of the modern nation-state, nobody knows what they are doing, not even people who have paid lots of money to be trained on these issues. Simply because we can imagine that there is some sort of hierarchy of intelligence and competence in a certain field does not mean that there actually is one. Every generation, no matter how ignorant, has always had a somewhat ordered list of intelligentsia. Many times that list has had little to do with how much is known and much more to do with how well somebody is respected by their peers.
Another way to think of it: democracy is a way to convince the losing side in a political dispute to back down (at least temporarily). This has benefits whether or not the decision was right when the drawbacks of escalating a dispute (rebellion, civil war, etc) outweigh the original issue.
How is having people agree on bad decisions a good thing? Would it not be better to have governments make the best decisions they can for their people, and also teach the people why these decisions are best?
Yes, many of the issues facing governments are complex and difficult to solve. You seem to be arguing that because it's difficult to learn what course of action is best, we should not even try.
Humility is a good thing, but it is completely absent in popular opinion.
> Would it not be better to have governments make the best decisions they can for their people, and also teach the people why these decisions are best?
Why are you assuming that governments are capable of making good decisions?
I'm serious: name three politicians whom you would trust to make important decisions about your life. On the off chance that you're the first person with a list that long, what makes you think those politicians will be the ones in charge?
> How is having people agree on bad decisions a good thing?
Because the decision affects everybody. Even more so, everybody has a greater stake in the system of decision-making than they do in the decision itself.
Look at it this way: suppose ten people are in a lifeboat stranded on the ocean. There are no supplies, and everybody is starving to death. Logically, killing and eating one of the people would allow the rest to survive. However the group votes not to do that, and they all starve. Later it's discovered that had they lived a few more days they would have been rescued.
Or you could play it the other way: they all unanimously decide to draw straws (the "law of the sea") and yet it does no good.
In either case, having consent of those involved is more important than optimizing around one individual's opinion, even if that opinion represents an outcome that's in the best interest of the most people involved.
You don't have to believe me. Play various scenarios out a few times yourself and work through it. And it's not unanimous agreement, not at all. The trick, as the other commenter pointed out, is to create a simple and understandable system where the minority still participates when they lose arguments. (Obviously eating the minority in our example would prevent such participation, which is why only unanimous consent would work, and it would only work in coming up with some selection criteria, not actually applying it.) I think you'll find that people deserve the dignity of being wrong, even when faced with their own death.
I'm also not saying that "we should not even try." Far from it. True learning takes place in a group setting from the bottom-up, in a peer-to-peer fashion. Once critical mass is reached, persuasion is used to convince the majority. That's the way systems of people operate, which is much, much different from the way we might like them to work, or the way a mathematical proof might work.
An excellent book about this topic, which also gives some suggestions on how the majority and minority should actually conduct themselves to make sure this happens, is Danielle Allen's Talking with Strangers.
Ann Coulter is famous for saying that fascism is optimal locally. Something tells me you aren't happy you agree with Ann Coulter about the benefits of fascism.
Very important document. At year end, how many employees really believe they're in the bottom 50% of a population? And is it really worth fighting this when measurement of ability is so hard?
Most "Up or Out" firms need to cull people so they are firm, and try to create the appearance of rigor.
Most software firms find this much more difficult. Very few start-ups have formal evaluation processes, which sounds terrible to the MBA crowd, but could just be admitting that it's impossible to really fairly judge people. (Deming would say judge the system anyway)
How about the bravado of startup "code ninja rockstars" fresh out of school: is it simply a brave face in a macho subculture, or the Dunning-Kruger effect at play because they haven't had the chance to work alongside experienced colleagues who would make them realize they have more to learn?
The D-K results don't actually say that incompetent people think they're awesome. They say that, as a group, the least skilled tend to estimate themselves as slightly above average, and so do the most competent. What this means is that percentile-based self-assessment carries no signal, not that unskilled people are preternaturally inclined to think of themselves as excellent.
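To see why that pattern carries no signal, consider a deliberately caricatured simulation (the numbers below are assumptions, not the actual D-K data): if everyone, regardless of skill, reports themselves at around the 60th percentile plus noise, the self-reports tell you nothing about actual skill:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # True skill percentiles, uniform by construction.
    actual = rng.uniform(0, 100, n)
    # Assumed behavior: everyone guesses "slightly above average" plus noise.
    self_est = np.clip(60 + rng.normal(0, 15, n), 0, 100)

    print(np.corrcoef(actual, self_est)[0, 1])  # near zero: no signal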