There's one distinction that I think is worth making - but not made clearly in the essay - and that's between beliefs and decisions/actions. Whether a belief is reasonable often just depends on the available evidence, but whether an action is reasonable depends on the value context, i.e. what your goal is.
This adds a different perspective to the example of the American football game. If the value context is purely winning the game, then it is rational not to make the kick. However, the value context is actually a lot larger and more nebulous than that; it's some combination of the students' well-being, the college's reputation, the crowd's entertainment and a bunch of other stuff.
Before asserting that someone is being irrational, it's always worth thinking about their value context.
Edit: Just remembered that of course Hume already understood this very well. "Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them."
I understood rationality here to describe the kind of participation in a discussion that is dedicated to establishing objective truth.
Rationalizing, on the other hand, is something completely different, even if it makes sense from the point of view of your own interests.
An example: a businessman may have a stake in a project and so publicly state that he believes the project will succeed. The project is actually failing, but he stands to gain from extending the perception of success. He may be rationally looking after his finances, but to the public he is dishonest, rationalizing, and objectively irrational given the clear facts that contradict his public statements.
>Edit: Just remembered that of course Hume already understood this very well. "Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them."
Blegh. Neuroscientifically, "the passions" are just reasoning about your bodily and mental well-being. It can be reasoning better or worse, since "perception is unconscious inference" as Helmholtz put it, but these things largely amount to the same problem being posed with respect to different datasets.
> For example, it might be okay to risk a $1 bet to win $1,000,000 if the probability is one in 500, but it is quite another thing for the average person to risk $50,000 to win $1,000,000 if the probability is one in 400. Even with better odds in the second case,
The author either made a typo (intending billion rather than million) or doesn't understand probability. The second case is on average going to lose you money: the expected return from winning $1,000,000 one time in 400 is only $2,500, so betting $50,000 per attempt will on average lose you a lot of money.
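The arithmetic is easy to check; here is a quick sketch (treating the $1,000,000 prize as a gross payout, which is an assumption about the example):

```python
def expected_value(p_win: float, payout: float, cost: float) -> float:
    """Average net result per play: P(win) * payout, minus the cost of playing."""
    return p_win * payout - cost

# $1 to win $1,000,000 at 1 in 500: 2000 - 1, a positive expectation
bet_a = expected_value(1 / 500, 1_000_000, 1)

# $50,000 to win $1,000,000 at 1 in 400: 2500 - 50000, heavily negative
bet_b = expected_value(1 / 400, 1_000_000, 50_000)
```

So despite the "better" odds, the second bet loses $47,500 per play on average.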
A large number of people in this community regretted failing to invest in cryptocurrency, despite being the perfect group of people to recognize the spectacular potential payoff of the decision.
Some folks who try to be rational pulled out of Bitcoin and similar when their stakes were lost in one or more of the exchange hacks / inside jobs. I know I bailed when my ~0.25 bitcoin disappeared in an exchange "hack" 5-6 years ago (back when that amount was worth something like $100).
The author was using the "pay 50k to win 1M at 1:400 odds" example to illustrate how calculating the expected value (in a mathematical sense [1]) does not always yield the "most reasonable" decision. If I understood correctly, his point was that _even if_ the expected value (reward*probability - cost) is positive, the bet still might not make sense for the average person, because of the non-linear relationship between the money they have and the utility they derive from it (i.e. the first dollar is more valuable to them than the millionth dollar).
The example doesn't support that point very well because the expected value of making that bet is negative, so the bet would not make sense even in the naive case, where all dollars have equal utility.
Indeed, the odds of winning are technically "better". But if I gave you the opportunity to win $1,000,000 with 99/100 odds at a cost of $1,000,000, I don't think anyone would say that's a better bet than 1:400.
> Anyway, can you even talk about "expected return" on independent events like these?
Technically, I used the wrong term. I meant "expected value" which is `probability of winning * value of winning`. However, you definitely can use "expected return" to talk about singular events if there are known probabilities involved.
> There is no expectation of you winning, ever, in either scenarios.
Surely you expect to win occasionally (namely 1 out of 500 times or 1 out of 400 times). If you had no expectation of winning, there would be no point; it would just be lost money.
> However, you definitely can use "expected return" to talk about singular events if there are known probabilities involved.
Just that "expected" sounds like there's a certainty about it. If I lose 100 times in a lottery with 1:500 odds, it does not mean my odds have now improved to 1:400. Playing the lottery 500 times does not give you an expected return of 1; the odds are the same 1:500 in every round.
> Just that "expected" sounds like there's a certainty about it
It does seem to give a sense of certainty, but that is not how "expect" is used in probability. The expected value is just the average value per attempt, which the observed average converges to as the number of attempts approaches infinity. So it's a useful measure of a distribution (which can be used to make informed gambling decisions), but it shouldn't be used to make a prediction about any single attempt.
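The convergence claim can be simulated directly. A minimal sketch using the thread's 1-in-400 lottery with a $1,000,000 prize (cost ignored here, so the expected winnings are $2,500 per attempt; the function name and parameters are illustrative):

```python
import random

def average_winnings(attempts: int, p_win: float = 1 / 400,
                     prize: float = 1_000_000, seed: int = 0) -> float:
    """Per-attempt average payout over a run of independent lottery plays."""
    rng = random.Random(seed)
    wins = sum(1 for _ in range(attempts) if rng.random() < p_win)
    return wins * prize / attempts

for n in (1_000, 50_000, 1_000_000):
    # The average drifts toward 2500 as n grows, yet every single
    # attempt is still all-or-nothing: $1,000,000 or $0.
    print(n, average_winnings(n))
```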
Yeah, expected return only makes sense in e.g. a diversified portfolio.
edit: You can downvote all you want, but it still doesn't make sense to consider the expected return on one item in isolation. If you disagree, provide a reason.
Being "rational" (doing "reasoning") is not the same as being "reasonable".
> Being reasonable means holding beliefs and views for which (1) one can give true or probable evidence that (2) actually (or sufficiently and relevantly) supports them.
That's a definition of "rational", not "reasonable". Even the <title> tag of the page says "What it is to be rational", not "what it is to be reasonable".
The author seems to be opposed to any commonly accepted definition of "reasonable" that isn't reducible to "rational". But if everyone else looks like they're mistaken about what a word means, maybe you're the one who's using the word incorrectly.
Being "reasonable" is a fallback that we as a society endorse as a practical ideal, because we know that not everyone can be rational all the time. It's a vague and ambiguous concept because the goalposts move depending on how rational the target group is typically expected to be in a given situation. This doesn't mean that we should throw away its nuances and replace it with the common denominator that is "rational". Many people believe that one can be reasonable without being fully rational, and vice versa. You can of course disagree with this proposition, but you can't just redefine a keyword to win the argument.
This is an interesting read. However, the final example is a bit puzzling: point 6 does not follow from 5, 7 does not follow from 6, and 10 does not follow from the previous points because the addition "if that preventive..." comes out of nowhere. It would be better to make the missing premises explicit.
On a side note, I'd like to question the usefulness of the term "reasonable" in contemporary philosophy. The problem is that "reasonable" is an evaluative predicate. In its everyday, non-technical use, "being reasonable" certainly also involves conforming to social norms and expectations, which has barely anything to do with the quality of reasoning. Philosophers would be better served by another term instead of playing word games that invent connections between "reason" and "being reasonable" when those words have drifted so far apart over the past centuries. In contemporary English, "reasonable" does not primarily, or by default, mean "based on (good) reasons."
Also worth mentioning is that moral philosophers rarely give satisfying definitions of having a reason vs. there being a reason, there being reason (mass noun reading), reasonable vs. rational, justification, reasoning, and good vs. bad reason. Especially the last point is strange, since we're ultimately only interested in good reasons.
That being said, again, it's a nice and interesting article.
> However, the final example is a bit puzzling, because point 6 does not follow from 5, 7 does not follow from 6, and 10 does not follow from the previous points because of the addition "if that preventive..." comes out of nowhere. It would be better to make the missing premises explicit.
6 follows from 4 and 5 by modus tollens.
7 is just a rewording of 6: "any ... which causes harm" -> "harmful", and "more highly than, nor equal to" -> "less highly than".
10 indeed does not follow because no premise provides a way of assessing effects of one behavior on another. However the phrasing "if that preventive" simply is a conditional referencing the object described in the consequent, "whatever prevents the harm"
I think following social norms is often (but not always) a "good reason" for doing something. It's a tricky line of analysis.
I've been leaning heavily towards the idea that "usefulness", broadly interpreted, is utterly paramount -- even in the world of abstract philosophy. It is perhaps second only to truth, but it is much easier to evaluate, and often an incredibly good proxy for truth anyway.
We allow most other disciplines to use words in special technical senses. For example, no-one thinks that physicists ought to stop using the terms 'force' or 'energy' just because their technical usage of the terms doesn't comport with regular usage.
I take your point, but even the current "rationalist" community (LessWrong, Eliezer Yudkowsky, etc.) doesn't seem to use the word "reasonable" in that sense.
Of course I'm not saying anything like that. Just pointing out that the people who seem to be publicly talking about this the most also reject the use of "reasonable" as a synonym of "rational" [0] (LessWrong might be "fringe" to the general public, but is quite mainstream to people who are actually concerned about the topic of rationality).
It could be that some current academics use the word that way. I would be surprised though, I don't know why they would prefer the ambiguous "reasonable" to the established and well-defined term "rational" (I don't think this is the same as asking physicists to stop using "energy"). It's even in the Wikipedia entry for "Reason" [1]:
> The meaning of the word "reason" in senses such as "human reason" also overlaps to a large extent with "rationality" and the adjective of "reason" in philosophical contexts is normally "rational", rather than "reasoned" or "reasonable".
>LessWrong might be "fringe" to the general public, but is quite mainstream to people who are actually concerned about the topic of rationality
No it isn't. It's not cited within any of the philosophical literature on rationality.
I missed that you were focusing on 'reasonable' in contrast to 'rational', though. Indeed, 'rational' is generally preferred in a philosophical context.
A few years ago I attempted to make the writings of Richard Garlikov a little more approachable, as his website is very basic.
I never finished it as the process of converting each article into HTML was very tedious.
I've uploaded an example to Imgur [0] and I’d be happy to share all resources if anyone is interested.
I would be interested in using Pollen[0], or Racket, to construct an online eBook in the vein of Matthew Butterick's work[1]. Does that sound like something you would like to see?
I'm not sure I am seeing the full picture here - did you contact him to discuss improving his design, and this is your WIP from that effort? Or did you take his content on your own and try to create a 'better' copy?
It would be interesting to read a similar essay about how some people mistakenly think reasoning is about truth finding. That would be closer to the realizations I've had.
Reasoning from premises that you have reason to suspect are false, in order to see what the conclusions of those premises are? Though I suppose you could claim that doing so is trying to find the truth of the conclusions of those premises...
You can also take the position that there is no such thing as "truth", it's just a construct created by the environment at the time. That's post-modernism for you.
I think in the real world everyone believes they're playing chess. They can, more or less, tell you what the rules of chess are, and when asked they'll say they are playing their moves according to those rules. But in reality they are not playing chess; they're all playing some other, different game, and often the players who are rewarded are the ones who won according to the rules of that game.
Which is to say that no one thinks they're irrational or unreasonable. But that's also part of the game. For many people the real game is social, not truth-finding, because that's where the real stakes are. I don't think people are explicitly aware of it, but they learned that this is the way the game is played because that's what everyone else is doing, frequently even the truth-finders. And again the real stakes are here, and anyway it comes more naturally to most people. The truth-finding game is hard and often has worse prizes.
Is this a matter of semantics? It's strange because, if a word's meaning comes from how most people use it, then what most people say reasoning means and what they are actually doing when they say they are reasoning are two different things.
Being rational within an established societal/economic/scientific system and being rational per se are quite different things. A truly rational argument may lead to the questioning of the existence of these systems, but reforming or removing the system just to accommodate this particular case may be counterproductive and a net loss.
The thing about being 'rational' is that you can rationalize just about anything. It's like the balance of exploration and exploitation in CS: exploitation is the 'rational' move, and exploration is taking a chance at something more by being 'irrational.'
Sometimes the best way to go is to pick a path and stick with it. That's why I had tacos and not sushi for lunch today.
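The exploration/exploitation trade-off mentioned above is usually illustrated with an epsilon-greedy bandit. A generic sketch (the arm means and all parameters here are made up for illustration, not taken from the thread):

```python
import random

def epsilon_greedy(true_means, steps=10_000, epsilon=0.1, seed=42):
    """Mostly pick the arm with the best average reward so far (exploitation,
    the 'rational' move), but with probability epsilon try a random arm
    (exploration, the 'irrational' gamble that can find something better)."""
    rng = random.Random(seed)
    counts = [0] * len(true_means)
    totals = [0.0] * len(true_means)
    for _ in range(steps):
        if rng.random() < epsilon or min(counts) == 0:
            arm = rng.randrange(len(true_means))              # explore
        else:
            arm = max(range(len(true_means)),
                      key=lambda i: totals[i] / counts[i])    # exploit
        counts[arm] += 1
        totals[arm] += rng.gauss(true_means[arm], 1.0)        # noisy reward
    return counts

# With enough steps, most pulls concentrate on the best arm:
counts = epsilon_greedy([0.3, 0.8, 0.5])
```

Pure exploitation can lock onto a mediocre choice forever; the occasional "irrational" pull is what keeps the better options discoverable.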
The overgeneralization that the author talks about is, I think, really at the crux of where the rational thinker may go astray (lawmaker, bureaucrat, waterfall process software developer, architect, ...). You can build up an argument which seems to be true, but is in fact irrelevant and wrong/harmful in the real world.
By the time you get to 2 or 3 points deep in any rationalization, you've made such ambitious leaps of (subjective) assumption, that your view isn't more logical than any one else's. This is fundamentally different than the postmodern assumption that any given categorization (and following assumption) is as good as any other.
e.g. If you think that tigers are woozles, fine. But if you start attributing subjective properties to woozles beyond a label, you're very likely going to die when you try to interact with them.
Having been raised in a cult, I can say this describes the members of such organizations perfectly. A cult is rational in undermining the reasoning abilities of its members, indoctrinating them to use fake reasoning, and redefining words such as logic, reasoning, and truth.
>> As I understand it now, smallpox vaccine is not routinely given any more in the United States since the risk of dying from the vaccination is greater than the risk of getting smallpox.
If the article is from 2000, the real reason was that smallpox had, by that point, been eradicated. According to Wikipedia:
The last naturally occurring case was diagnosed in October 1977 and the World Health Organization certified the global eradication of the disease in 1980.
The eradication of smallpox does not necessitate the end of vaccination. Instead, it changes the cost benefit analysis of vaccination as the article described.
Most people are not rational. A lot of people are very good at recalling facts and pattern-matching, but they don't even come close to understanding the full complexity and the effects of what they're doing or saying.
Nobody comes even close to understanding the full complexity and effects of what they're doing or saying. I guess we could say that each one of us is more or less rational, but treating it as a binary feature would be hard to defend in practice.
Making bad decisions based on faulty understanding does not make you irrational, it just means that you don't know enough to make the optimal decision.
The really "irrational" decisions, such as decisions based on altruism, can still be made even with complete knowledge and understanding of the full complexity of the issues that led to the decision.
At the risk of just mirroring krotton's comment: Given the results of 'behavioural economics' research (I still don't understand why they don't just brand their work as psychology), surely the better question is whether anyone can be anywhere near rational?