Hacker News

This misses the dangerous part, which is that mathematicians in groups can confuse each other into accepting ideas which are basically nonsensical, especially if the counterargument relies on some obvious but intuitive observation of reality that cannot be easily formalised within their chosen framework of the moment.

As a consequence of this, it wouldn't surprise me if the overwhelming majority of maths were actually incoherent nonsense, and if the people who understood this thought they were just very confused, due to being shouted down all the time, when the really confused people are the ones oblivious to their own situation.



I'm going to be rather dismissive in my reply, and for that, I apologize, because I'm not quite sure how else to respond.

This is more or less a non-issue. Thanks to mathematicians building on Euclid for the last 2300 years, we have a system of mathematics built on a few basic principles (that you would not disagree with) and deductive reasoning. If you take a theorem that is accepted as proven, you can almost definitely follow an immense chain of logic back to the fundamentals. It will take you a ridiculous amount of time to do so, but it is possible.

If you're referring to specific debates in the math community (e.g. "I feel that the general math community accepting the axiom of choice was a bad idea") then that's worth being specific about in your post.


>we have a system of mathematics built on a few basic principles (that you would not disagree with) and deductive reasoning

I think it's even better than that; mathematicians don't necessarily care whether the reader 'agrees' with the axioms, or whether they're in any sense 'true' or 'false'. Mathematics is always of the form 'if these axioms hold, this theorem follows from them'.

The real world and the notions which people consider to be self-evidently true is just some messy slimy gooey gunk best left to psychoanalysts and theoretical physicists and sewer workers and the like.


Oh, I agree, that's the best part. Pick your rules: oh, you picked those seven? You've got a ring; here are your math rules!

In that specific case, though, I figured I'd point out that the basic rules of math aren't usually things people squabble over. (Although I do enjoy a little mathematical philosophy from time to time.)
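"Pick your rules, get your math" can even be checked mechanically. A toy sketch (my own illustration, not a real workflow): brute-force verify that the integers mod 4 satisfy a few of the ring axioms.

```python
from itertools import product

# Brute-force check that Z/4Z satisfies several ring axioms.
n = 4
els = range(n)
add = lambda a, b: (a + b) % n
mul = lambda a, b: (a * b) % n

# additive associativity and commutativity
assert all(add(a, add(b, c)) == add(add(a, b), c) for a, b, c in product(els, repeat=3))
assert all(add(a, b) == add(b, a) for a, b in product(els, repeat=2))
# additive identity and inverses
assert all(add(a, 0) == a for a in els)
assert all(any(add(a, b) == 0 for b in els) for a in els)
# multiplicative associativity and distributivity
assert all(mul(a, mul(b, c)) == mul(mul(a, b), c) for a, b, c in product(els, repeat=3))
assert all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c)) for a, b, c in product(els, repeat=3))
```

Swap in a different `n` or different operations and the same checks tell you whether you still have a ring.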


Even better is when, to everyone's surprise, those seven turn out to be relevant for describing real world phenomena.


You can stop worrying now: what you're afraid of doesn't actually happen, though I sort of see how someone who hasn't spent their life studying mathematics might worry that it does. Since you have to back up your ideas with proofs, you can't in the long run hoodwink people into accepting false statements.

You also seem to worry about mathematicians accepting perfectly consistent sets of ideas even when those ideas contradict "intuitive observation of reality". To that I can only say that mathematics is not a subject where intuitive observation of reality plays any major role. What decides whether some piece of math is good or not is whether it is logically consistent, found interesting by people, and useful, either in other parts of mathematics or in applications to the real world. Notice in particular that if the applications work, no one cares if part of the math leading to them contradicts any given person's intuition. For example, physics uses real numbers a lot, and your intuition might tell you they don't make sense because there can't be uncountably many different things of any kind. But physics works extremely well and the math it uses is consistent, so we use it even if it doesn't sit well with a few people.


Since fidotron has been piled on, let me defend the point in his/her post. Good mathematics has come out of being worried that what other mathematicians have done isn't quite right, and I think the perspective of the article doesn't acknowledge that.

For example: Cantor's theorem is quite true, only cranks doubt it [1]. But many mathematicians take it to have the corollary that cardinalities greater than that of the natural numbers exist, which does not follow: it is perfectly coherent to say that constructions such as the power set of the natural numbers do not exist as a definite whole, and so do not have a cardinality. These kinds of doubt have driven constructivism, which has led to interesting work in topology, measure theory, and type theory, and to such useful applications as calculators for exact real arithmetic.
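To make the exact-real-arithmetic point concrete: the usual constructive move is to represent a real not as a completed infinite object but as a procedure that, given any tolerance, produces a rational approximation within it. A minimal sketch (the function name is my own, not from any particular library):

```python
from fractions import Fraction

def sqrt2(eps: Fraction) -> Fraction:
    """Return a rational within eps of sqrt(2), via Newton iteration.

    A real number, constructively, is just this kind of procedure:
    ask for any precision, get a rational witness.
    """
    x = Fraction(2)
    # For x >= 1, |x - sqrt(2)| <= |x^2 - 2|, so this bound suffices.
    while abs(x * x - 2) > eps:
        x = (x + 2 / x) / 2
    return x

approx = sqrt2(Fraction(1, 10**20))
assert abs(approx * approx - 2) <= Fraction(1, 10**20)
```

Exact real calculators are built on exactly this idea, with cleverer representations (signed-digit streams, continued fractions) so that arithmetic composes.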

Cantor's paradise seems to be coherent (likewise I would be deeply surprised if large parts of mathematics turned out to be misconstrued) but the assumptions of large cardinal set theory are grandiose and poorly justified, and yet for a long time those people who wondered if it was wise to embrace the whole edifice were marginalised. It seems that now there are many more mathematicians who are interested in revisiting this perspective [2].

To put Wiles' metaphor in perspective, it is good if some mathematicians step outside the mansion from time to time, to see if the superstructure is up to all the crashing about that happens in the dark rooms.

[1]: https://www.math.ucla.edu/~asl/bsl/0401/0401-001.ps (Postscript file)

[2]: http://homotopytypetheory.org/book/ has been very successful


Thank you, I wanted to say this but you put it much better.


I'd be a lot more worried about the danger you mention if you could give even one example of that happening, ever. What ideas are mathematicians confusing each other into accepting that are basically nonsensical?


Cantor's conception of transfinite numbers is the one that I think has done most damage.



How are transfinite numbers "nonsensical"?

When you get into infinity, you have two notions of "number" that diverge, and mathematical operations on them do different things. (For example, cardinal "exponentiation" 2^κ gives the size of the power set; ordinal "exponentiation" is something different, and smaller.) One notion is size, but at infinity a proper subset can have the same size as the whole (integers, even numbers, rationals). That's where Aleph-0 (the cardinality of the integers) and "c" (the cardinality of the reals) come from. With cardinal infinities you can't really do meaningful arithmetic, because the field properties fail: Aleph-0 + 1 = Aleph-0, for example, violating the familiar fact that x + 1 != x.
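The "proper subset, same size" claim is witnessed by explicit bijections, which are short enough to write down (my own illustration):

```python
# Bijections witnessing that proper subsets/supersets of the naturals
# have the same cardinality as the naturals themselves.

# naturals -> even naturals: n maps to 2n, clearly one-to-one and onto.
to_even = lambda n: 2 * n
assert [to_even(n) for n in range(5)] == [0, 2, 4, 6, 8]

# naturals -> all integers, interleaving: 0, 1, -1, 2, -2, 3, -3, ...
def nat_to_int(n):
    return (n + 1) // 2 if n % 2 else -(n // 2)

# Every integer in [-3, 3] is hit exactly once by the first seven naturals.
assert sorted(nat_to_int(n) for n in range(7)) == [-3, -2, -1, 0, 1, 2, 3]
```

A similar (slightly longer) zig-zag enumeration handles the rationals.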

The other notion comes from the concept of a well-ordered set, which also maps nicely to "indexes" into possibly infinite lists. With this foundation you have more options for mathematical manipulation: you can add ordinals (but not always subtract them) and, because they pertain to list operations, addition and multiplication are no longer commutative (1 + ω = ω, but ω + 1 > ω). That's where we get ω, ω+1, ω^2, ω^ω, ε_0 and so on. Those all have rigorous definitions. For example, ω^2 is the order type of ordered pairs of natural numbers with lexicographic comparison:

    (0, 0) < (0, 1) < ... < (0, 10^100000) < ...  < (1, 0) < ... < (2, 0) < ... . 
... and ω^ω is the order type of formal natural-number polynomials in one variable with lexicographic comparison:

    0 < 1 < 10^100 < X < X+1 < X + 10^100 < 2*X < 10^100*X < X^2 < X^3 < X^3 + 1...
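Both order types above can be coded up directly; a sketch (the encoding is my own choice) of ordinals below ω^2 as pairs and ordinals below ω^ω as coefficient lists, compared lexicographically:

```python
# Ordinals below omega^2 as pairs (a, b) meaning omega*a + b.
# Python's built-in tuple comparison IS the lexicographic order.
assert (0, 10**100) < (1, 0) < (1, 1) < (2, 0)

# Ordinals below omega^omega as polynomials in omega, with coefficients
# listed from the highest power down (leading coefficient nonzero).
# Compare degree first, then coefficients lexicographically.
def poly_key(coeffs):
    return (len(coeffs), coeffs)

# 10^100 < omega < omega + 1 < 2*omega < omega^2
assert poly_key((10**100,)) < poly_key((1, 0)) < poly_key((1, 1)) \
       < poly_key((2, 0)) < poly_key((1, 0, 0))
```

This is essentially Cantor normal form with natural-number coefficients, and it's how proof assistants represent ordinals below ε_0.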
Where things get messy is that the relationship between cardinal and ordinal numbers (more formally: which ordinal has the same cardinality as the reals, the continuum?) is, in fact, formally undecidable (the Continuum Hypothesis). That doesn't just mean no one has solved it yet. It means there is no mathematical way to prove or refute it from ZFC, the Zermelo-Fraenkel set axioms plus the Axiom of Choice. The CH is neither true nor false, insofar as one can have valid mathematics with or without it.

To put the above more succinctly: we know the countable ordinals form a well-ordered set (totally ordered, with every nonempty subset having a minimum), and since no set contains itself, that set cannot itself be a countable ordinal, so it is uncountable. It is, in fact, the smallest uncountable ordinal (the ordinals are totally ordered by the subset relation). That's called ω_1. Intuitively, we might hope that that's also the same "size" as the real numbers (we don't know of any smaller uncountable infinities, and we can't construct any). But there is no way to prove or refute whether that is true. Mathematics is valid either way; it has to "fork".

It's not "nonsensical". What it is is formal. It may or may not map to the real world. You can't actually perform Banach-Tarski (Axiom of Choice hack) on an orange, nor can you store a complete Hamel basis on your hard drive. But these concepts are still useful in defining our notion of what a "set", precisely, is.


Well put; a little quibble: these are the two notions of infinity that most interest set theorists, but there are many other notions of infinity in mathematics, e.g.,

1. Representation of geometric entities "at infinity": e.g., the point at infinity on the projective sphere, which allows straight lines to be treated as circles;

2. Infinitesimals;

3. Game-theoretic constructions of infinite numbers, e.g., in Conway numbers. Incidentally, the set-theoretic cardinals are equivalent to a special case of these;

4. Definition of numbers as equivalence classes of functions under their speed of growth as they tend to infinity, e.g., Hardy's logarithmico-exponential functions. Incidentally, the computable set-theoretic ordinals are equivalent to a special case of these.


Sorry to give a minor correction to a little quibble, but it is the ordinals, not the cardinals, that are a special case of Conway numbers. The cardinals are equivalence classes of these of the form [א_a,א_(a+1))

(Also you can get infinitesimals from Conway's construction as well)


A quibble of my own: a lot of the "infinity" constructions in mathematics only use infinity as a name. Projective geometry (1) is a good example of that. The formalization doesn't actually appeal to any sort of infinite quantities.
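For instance, the real projective plane uses homogeneous coordinates, where "points at infinity" are just triples whose last coordinate is zero, and two lines always meet at their cross product. A short sketch of this standard construction, with no infinite quantity in sight:

```python
# A line a*x + b*y + c = 0 is the triple (a, b, c); a point is (x, y, w),
# with w = 0 meaning a "point at infinity". Two lines meet at their
# cross product, read as homogeneous point coordinates.
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

# The parallel horizontal lines y = 0 and y = 1:
p = cross((0, 1, 0), (0, 1, -1))
assert p == (-1, 0, 0)  # w = 0: they meet "at infinity", in the horizontal direction
```

Everything here is finite arithmetic on triples; "infinity" is purely a name for the w = 0 cases.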


Projective geometry: even in the simple case I outlined, you have lines being circles of infinite diameter. That this infinity is just an additional closure point on the plane (and you can think of the similar projective line as providing the complementary notion of displacement, which we can use to measure the diameter of those infinite circles) doesn't stop the geometry from representing shapes with infinite attributes.

It is the case that all of this can be finitely represented. But this is true of a quite large part of large cardinal set theory as well, which can be represented in constructive type theory; mathematicians make it their business to transform the infinitary into the finitary.


Excellent point.

Conway's surreal numbers are awesome. Combinatorial game theory seems silly at first (why are we analyzing Hackenbush?) if you expect it to be like "regular" game theory but is mind-blowing when you actually get it in all its glory.
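Conway's definition is recursive enough to execute directly: x <= y iff no left option of x is >= y, and no right option of y is <= x. A toy sketch of the first few "days" of the construction (the class is my own, for illustration):

```python
class Surreal:
    """A surreal number {L | R}: sets of earlier-born left and right options."""
    def __init__(self, left=(), right=()):
        self.L, self.R = tuple(left), tuple(right)

    def __le__(self, other):
        # Conway: x <= y iff no xL in x.L has y <= xL,
        #         and no yR in y.R has yR <= x.
        return (not any(other <= xl for xl in self.L)
                and not any(yr <= self for yr in other.R))

zero    = Surreal()                            # { | }   born on day 0
one     = Surreal(left=[zero])                 # {0 | }  born on day 1
neg_one = Surreal(right=[zero])                # { | 0}  born on day 1
half    = Surreal(left=[zero], right=[one])    # {0 | 1} born on day 2

assert zero <= one and not (one <= zero)       # 0 < 1
assert neg_one <= zero                         # -1 <= 0
assert zero <= half and half <= one            # 0 <= 1/2 <= 1
assert not (half <= zero)                      # and 1/2 is strictly above 0
```

Keep playing this game transfinitely and you get ω, ω - 1, 1/ω and friends, which is exactly where the combinatorial game theory pays off.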


Is that you, Henri?


You're right that there is a big social aspect to mathematics that has bitten people in the butt many times throughout history.

But to say that an "observation of reality" should have any effect on existing mathematics is silly. Though mathematics might take inspiration from the physical world, it is removed from it by design. When some observation of reality disagrees with mathematics, that usually means the mathematical framework in question needs to be generalized or specialized, not altered.

This has happened over the years, for example, with measure theory, Fourier analysis, Lie theory, computational complexity theory, and many others.



