Hacker News

You seem to imply that mathematical notation should be standardized, but are omitting that even languages explicitly meant for computation are not.

Why do you think mathematics should be more standardized than programming? (Actually, I'd argue it's already more standardized than programming, and you're arguing for some kind of extreme position.)

Sorry, forgot to reply to part I had meant to:

> Is it because you have to start from scratch? Has no one created a "standard library of existing theorems / proofs" that one can depend on?

It's because we have essentially picked the parts of mathematics we're going to force to be true about half way up the stack. If the axioms don't permit those theories, then we'll do away with the axioms and pick a different set. (And perhaps explore why they failed to, and what is required in axioms to enable those theorems.)

As I mentioned before, mathematics already has a good way to relate high level theorems and such to these mid-level structures, and you see it employed all the time. The problem is that many of these structures aren't easy to compute with, so we're essentially having to work backwards to find a set of formalities that we can both do mechanistically and support the theorems/propositions we'd like to be true.

Mathematicians generally don't evaluate the truth of a new paper relative to the axioms, but relative to the already established results in a field. So your question about why libraries don't exist is essentially "Why have mathematicians not replicated hundreds or thousands of years of effort into a format that's hard for them to personally use, but is good for these tools we've developed in the past couple decades?"

Well, people are working on it, but it's going to take some time. And the moment you pick a slightly different set of axioms, you need to rebuild large portions of the library you allude to, even if the results are still true.

(Derivations in terms of base steps are considerably longer than most mathematical proofs, which typically omit "standard" kinds of details. For an idea of what this is like, read portions of Principia Mathematica by Russell and Whitehead.)
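To give a concrete flavor of what such base-step derivations look like (a hedged sketch of my own in Lean 4, not from the thread; all names are made up), here is a toy Peano-style number type with 2 + 2 = 4 unfolded one definitional step at a time. A real proof assistant would close this in a single step; the explicit `calc` chain imitates the Principia style.

```lean
-- Toy Peano-style naturals, defined from scratch.
inductive N where
  | zero : N
  | succ : N → N

open N

-- Addition by recursion on the second argument.
def add : N → N → N
  | a, zero   => a
  | a, succ b => succ (add a b)

def two  : N := succ (succ zero)
def four : N := succ (succ two)

-- 2 + 2 = 4, spelled out one definitional unfolding at a time.
theorem two_plus_two : add two two = four := by
  calc add two two
      = succ (add two (succ zero)) := rfl
    _ = succ (succ (add two zero)) := rfl
    _ = succ (succ two)            := rfl
    _ = four                       := rfl
```

Even this trivial fact takes several explicit steps once nothing is left implicit, which is the point being made about Principia-length derivations.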



Hey - thanks for actually replying and taking me seriously. This is as far as this discussion has ever gotten between me and someone who seems to know what they are doing. I'm actually learning a lot.

> It's because we have essentially picked the parts of mathematics we're going to force to be true about half way up the stack. If the axioms don't permit those theories, then we'll do away with the axioms and pick a different set.

Is this related to the famous incompleteness theorem? If you hit a statement that you can't prove with this set of axioms, you just try another one? It blows my mind that you can just pick any set of axioms you like. It feels like there should be a set of core axioms that is the fundamental truth. Maybe it's time that I read G.E.B. and the Principia. Any recommendations for these kinds of questions?

> So your question about why libraries don't exist is essentially "Why have mathematicians not replicated hundreds or thousands of years of effort into a format that's hard for them to personally use, but is good for these tools we've developed in the past couple decades?"

Okay, so people are working on it. It seems like a miracle that the alternative of "we're doing it all in our heads" actually works:

> Mathematicians generally don't evaluate the truth of a new paper relative to the axioms, but relative to the already established results in a field.

This is what I'm talking about. What if - somewhere in the middle - there was a mistake?

...

> Why do you think mathematics should be more standardized than programming? (Actually, I'd argue it's already more standardized than programming, and you're arguing for some kind of extreme position.)

My main concern is: every year we get someone who thought they proved, for example, P != NP, only to find a few months and a billion man-hours later that there is a minute error on page thirty-five where the author misunderstood or overlooked some very subtle thing. Wouldn't it be much more pleasant if this task was automated?

Wouldn't it be amazing if we could use machine learning methods or tree search methods to formulate a proof mechanically? Or automatically eliminate proof steps to make them less complicated! Compiler-style optimization if you will. Wouldn't it be amazing if thousands of existing proofs could be analyzed statistically? Wouldn't it be great if we could machine-translate standardized proofs into regular ol' mathematical notation or whatever language we want? When we have the ability in the future, we can go back and verify them all en masse! Am I deluded in thinking any of this could be valuable?

> You seem to imply that mathematical notation should be standardized, but are omitting that even languages explicitly meant for computation are not.

Well, at least programming languages without standards have an implementation that is the official implementation! So arguably, they are more standardized.

Anyway, thanks for replying, I haven't had such an interesting exchange on HN in a while. I'm eagerly awaiting your response (if you have the time)!

edit: Also, it seems like learning a standardized mathematical notation would be only marginally harder than learning to typeset regular mathematical notation in LaTeX! Heck, you could have it compile down to LaTeX.

edit2: Another thought I had was, since mathematicians don't prove things all the way from the bottom axioms up, but from the closest accepted truth, couldn't we have verification systems do that instead? That seems a more easily reachable goal.
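For what it's worth, this is roughly how proof assistants already work in practice. A hedged sketch in Lean 4 (my own illustration, not from the thread): the proof appeals to `Nat.add_comm`, a commutativity lemma already verified in the standard library, rather than re-deriving it from the bottom axioms.

```lean
-- The proof leans on an established library result,
-- `Nat.add_comm : ∀ (n m : Nat), n + m = m + n`,
-- instead of starting from first principles.
example (n : Nat) : n + 1 = 1 + n :=
  Nat.add_comm n 1
```

The checker trusts the library lemma because it was itself machine-verified once, so nothing needs to be re-proved from the axioms each time.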


> It blows my mind that you can just pick any set of axioms you like. It feels like there should be a set of core axioms that is the fundamental truth.

Think of it like this: it's not that Euclid set out a set of axioms that just happened to produce a geometry we could use to talk about triangles, lines, and squares, and that matched up (to varying degrees) with behavior we see in the world; rather, it's that we saw those relationships and went looking for the minimum set of rules from which they could all be derived. There are (of course) other choices we could have made, which are useful in different cases, such as talking about geometry on a sphere instead of geometry on a flat piece of paper.

> This is what I'm talking about. What if - somewhere in the middle - there was a mistake?

It's happened before, it'll probably happen again.

But there's a use in doing this that you might not see. Take, for example, the history of limits. Originally, there was a sort of fuzzy conception of what a limit was - and everyone knew that it was problematic - but the results that followed from limits, provided they behaved in a particular way, were insanely useful, proving all kinds of things about the real numbers and how functions on them behave. And so, as people realized more formality was needed, they went back and redefined the limit over and over in more rigorous terms until we reached the modern definition.
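For reference (my own transcription, not part of the original comment), the modern epsilon-delta definition that this refinement eventually converged on, in LaTeX:

```latex
\lim_{x \to a} f(x) = L
\quad\iff\quad
\forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x:\;
0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon
```

Every quantifier here replaces a piece of the older, fuzzier intuition of "getting closer and closer."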

This process of continual refinement fleshed out the concept much better than had someone simply plopped down the modern definition and called it good. It gave a corpus of different approaches, linked up the intuitive notions with formalisms, and led to several generalizations applied to different contexts.

This organic process of exploring an idea iteratively is really what drives mathematics forwards.

> Wouldn't it be amazing if we could use machine learning methods or tree search methods to formulate a proof mechanically? Or automatically eliminate proof steps to make them less complicated! Compiler-style optimization if you will. Wouldn't it be amazing if thousands of existing proofs could be analyzed statistically? Wouldn't it be great if we could machine-translate standardized proofs into regular ol' mathematical notation or whatever language we want? When we have the ability in the future, we can go back and verify them all en masse! Am I deluded in thinking any of this could be valuable?

No, no, most people in the math community agree with you. They've been working on it since at least the late 1600s, with some of Leibniz's work, and likely much earlier. Euclid's work on geometry, in some ways, was the start of making mathematics so explicit as to be obviously true (or able to be reasoned about without anything being inferred).

However, it wasn't really until the 1950s that we settled on the way to formally describe a mechanistic calculation and really fleshed out the details there. Similarly, it wasn't until 1880-1920ish that we settled on most modern formalisms (ZFC, the set of axioms most often used for modern math, is from the 1920s), and really, some of that stretched all the way into the 1940s.

If you need an analogy to what this would be like in a different field, we discovered (modern) formalism around the same time that biology discovered evolution, and formal (mechanical) computation around the same time they discovered DNA. Much like biology is still a very active field trying to build on these results, mathematics is trying to build on the big shake-up it went through in the first half of the 20th century. (Principia Mathematica, the Hilbert program, modern axiom set ZFC, incompleteness, and uncomputable numbers all date to this period.)

> Well, at least programming languages without standards have an implementation that is the official implementation! So arguably, they are more standardized.

It may not look like it, but mathematics actually is highly standardized within its various subdomains. I recently picked up a book on number theory slightly outside of my background, and was familiar with all the symbols they used. (There's an argument to be made we'd be better off with English terms/abbreviations in the style of programming, but I actually don't agree with that. I think the symbols reduce the cognitive load to keep something in your head, because they're formatted in a 2D (instead of 1D) fashion, and use typeface to represent types of the objects.)

However, that standardization process takes time. If you read the first couple papers in a subdomain, they often will use different symbols and structures than the subdomain eventually settles on. After a while, though, it tends to settle on a particular set of symbols used a particular way.

You can define those standard symbols in terms of the basic operations in a system like Principia Mathematica uses, but the usual omitted formality actually greatly helps readability. If you don't believe me, try reading a derivation in Principia Mathematica.

tl;dr:

I agree that more mechanical computing would be good (and so do lots of other people), but I don't think the highly specialized symbols are as bad as people outside the field make them out to be. Math contains lots of big, very specific, and slightly unusual ideas, so it makes sense to give them a particular language to write them in. It's actually very consistent once you get a knack for it. Try to think of it like learning a foreign language!

Also, if you want to look up active areas of research on this topic, you might want to check out Category Theory or Homotopy Type Theory. Both are very applicable to programming or science, if either is your field!

> Hey - thanks for actually replying and taking me seriously. This is as far as this discussion has ever gotten between me and someone who seems to know what they are doing. I'm actually learning a lot.

> Anyway, thanks for replying, I haven't had such an interesting exchange on HN in a while. I'm eagerly awaiting your response (if you have the time)!

Anything to get a chance to rant about how awesome math is, hahahahaha.

More seriously, I'm glad you find the topic interesting, and I generally think that the math community could do a better job of explaining the context of their work and what the current developments in the field are. Most people aren't really exposed to math invented after the 1700s unless they go into a technical field, and even then, not much of it.

If you want any book recommendations, feel free to let me know (and tell me a little of your background, so I can pick the right level of book) and I can try to find something good.

Alright, going to stop ranting for one post, though there's much, much more to say!


> rather, it's that we saw those relationships and went looking for the minimum set of rules from which they could all be derived. There are (of course) other choices we could have made, which are useful in different cases, such as talking about geometry on a sphere instead of geometry on a flat piece of paper.

OK, this makes sense now!

> It gave a corpus of different approaches, linked up the intuitive notions with formalisms, and led to several generalizations applied to different contexts.

And I guess this gives you multiple points from which to verify new ideas.

> If you need an analogy to what this would be like in a different field, we discovered (modern) formalism around the same time that biology discovered evolution, and formal (mechanical) computation around the same time they discovered DNA

This kinda reminds me of the Bohr atomic model from 1913: it went through many iterations since, a lot of which were kind of true but not quite. They discovered that, oops, the electron orbits are elliptical, not circular. And the more weird and odd particles and effects we discover, the more accurate the model gets, but the original wasn't that bad at modelling reality. Or Newtonian physics vs. modern physics: Newtonian mechanics was 90% there, but not quite enough to explain things happening in extreme circumstances that are difficult to observe.

> you might want to check out Category Theory or Homotopy Type Theory. Both are very applicable to programming or science, if either is your field!

Category Theory is definitely on my list of things to learn. HoTT seems quite newfangled; I remember the book making a big splash on HN recently. I'll have to read more into it to see if it's worth delving deeper into.

I'd love to hear more about book recommendations! I'm a software engineer and I've had a CS education, so I've had a decent chunk of college-level math (including linalg, diffeqs...), automata theory, etc. So, some exposure to post-1700s math, some formal paper-reading, etc. You can find my e-mail in my HN profile if you like.

Thanks again. Good talk!



