Hacker News

Undergrad discrete mathematics and symbolic logic made proof writing click for me. You have a set of things known to be true, handed down from on high; you have a set of operations for transforming those true things into equivalent statements; and you go about the work of using those transformations to connect true statements into a massive graph, extending that graph through speculation where possible.

With that intuition it's simply a matter of slogging through a proof textbook like Velleman's "How to Prove It" until you have the confidence to work through the texts that truly interest you. If you don't feel like a clueless fool, you're not trying hard enough. Confusion and self-doubt are sure signs you're finally learning something.



I studied CS and it never clicked for me.

For me, a proof is essentially transforming one formula into another until someone who understands math says "yes, now you have proven it!"

For me, any step seems as good as any other.

I wish I could understand what's happening.


Proving things is basically a graph search problem. You start from some givens, and you try to transform those givens into the desired theorem. Each transformation is an edge in the graph, leading you to another vertex.

What makes one person better than another at proving things? The same things that make e.g. one chess engine better than another:

1. They have a better heuristic about which paths to explore.

2. They are simply faster at executing the search.

3. They know of more “edges” in the graph: intermediate lemmas that they can apply to arrive at the theorem faster. (This doesn’t apply to most chess engines.)

Getting better at proving things requires exercising these muscles.

This is also why (IMO) it’s important to have basic data structures and algorithms memorized. The faster your recall of these algorithms, and the more algorithms you know, the more likely it is you can solve hard algorithmic problems on the job should that be necessary.
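The graph-search picture above can be sketched directly in code. Here's a toy model where "statements" are just integers and each allowed transformation is a function from a statement to the statements reachable in one step; the names and setup are illustrative, not any real prover's API:

```python
from collections import deque

def prove(givens, goal, transforms):
    """Breadth-first search from the given statements toward a goal.

    `transforms` is a list of functions, each mapping a statement to the
    set of statements reachable from it in one step (the graph's edges).
    Returns the path of statements, or None if the goal is unreachable.
    """
    seen = set(givens)
    queue = deque((g, [g]) for g in givens)
    while queue:
        stmt, path = queue.popleft()
        if stmt == goal:
            return path
        for t in transforms:
            for nxt in t(stmt):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [nxt]))
    return None

# Toy "proof system": statements are integers, and the allowed
# transformations are "add 1" and "double".
path = prove({1}, 5, [lambda n: {n + 1}, lambda n: {2 * n}])
print(path)  # → [1, 2, 4, 5]
```

The three points above map cleanly onto this sketch: a heuristic means exploring promising edges first (swap the queue for a priority queue), speed means running the loop faster, and knowing more lemmas means having more `transforms` to draw on.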


A lot of things aren't that rigorous and people like to skip steps.

Here's what happens if you try to prove that 2+2=4 with a computer program that refuses to skip any steps (granted, it's treating them as complex numbers with zero imaginary parts):

http://us.metamath.org/mpeuni/mmset.html#trivia

Math is just a matter of transforming one statement into another, using the operations the system allows and starting from the ground truths it stipulates.


> … a total of 26,323 steps—this is how many steps you would have to examine if you wanted to verify the proof by hand in complete detail all the way back to the axioms of ZFC set theory

I can’t shake the feeling that if it takes this many steps to prove 2+2=4, then you’ve done something wrong in the design of your formalism.


> One of the reasons that the proof of 2 + 2 = 4 is so long is that 2 and 4 are complex numbers—i.e. we are really proving (2+0i) + (2+0i) = (4+0i)

Yeah, this is why


Right. That seems like an unnecessary detour to me.


It's necessary for the machine to go down to the level of the axioms and take one step at a time. It's obviously not needed for us humans to see that the proof is correct.

A human proof would probably not generally go beyond something like:

2+2=4

1+1+1+1 = 1+1+1+1 // Substitute the definitions of 2 (1+1) and 4 (1+1+1+1)


Except that 4 is not 1+1+1+1; it's the successor of 3, which is the successor of 2, which is the successor of 1, which is the successor of 0. So you'd have to show that:

S(S(0)) + S(S(0)) = S(S(S(S(0))))

and that is non-trivial. (But it's not 26,000 steps either.)
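For a concrete feel of what that successor-arithmetic proof involves, here's a sketch with naturals encoded as nested tuples (the encoding and helper names are my own, not Metamath's or Peano's notation):

```python
# Naturals as iterated successors: zero = (), S(n) = (n,)
ZERO = ()
def S(n):
    return (n,)

def add(a, b):
    """Peano addition: a + 0 = a, and a + S(b) = S(a + b)."""
    if b == ZERO:
        return a
    return S(add(a, b[0]))

two = S(S(ZERO))
four = S(S(S(S(ZERO))))
assert add(two, two) == four  # each recursive call is one unfolding of "+"
```

Each recursive unfolding corresponds to one application of the defining equations of +, which is why the hand proof is short: a handful of steps, not tens of thousands.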


It is trivial in most axiomatic systems. It’s tied directly to how + is defined and mostly boils down to applying that definition. The 26,000-step proof is extremely convoluted.


I was folding those steps into the definitions of 2 and 4, but you're right that this is part of why it gets so tedious.


That's what algebra was like for me. I never was able to develop any intuition for algebra. Calculus on the other hand was very intuitive for me.

Algebra was like solving a puzzle where you randomly move pieces around to try to get them in order. There didn't seem to be any real principles involved. Just a set of rules to memorize.


It seems like this is because abstract algebra is the study of invented number games. These new games can seem a little too, well, abstract at times. The power comes from being able to play the same game with different rules when necessary.


I could "use" linear algebra in programming, and I never understood analysis.

But I didn't understand the proofs in either.


Mathematicians can skip a lot of steps because they have a good intuition of what's possible and what's not, so it's enough when they know that in principle something could be proven. But this can make it hard for a beginner to understand what reasoning is allowed and what isn't allowed. However, when going back to foundations, I've found that how proofs work is surprisingly simple.

For example, in Metamath[0] (which was mentioned in another comment), there are just two inference rules. First is modus ponens[1], which says "if A is true, and if A being true implies that B is true, then B is true". Second is the rule of generalization[2], which says "if A is unconditionally true, then for all x A is true". If you start with the axiomatic statements of classical logic + set theory, pretty much all of mathematics can be inferred just by repeatedly applying these two rules to derive more true statements.

The hard part is developing a good feel for what's possible within this system and what's not, so that you can start skipping large numbers of steps too. As someone who's self-learning this stuff I've personally found exploring Metamath very helpful for this, because I find it helpful to break things down to the foundations when I'm not sure about a bit of reasoning, and Metamath is good at breaking things down to the foundations. But to each their own. Regardless, if you haven't already done so, I'd recommend learning classical logic if you want to understand proofs.

[0] http://us.metamath.org/mpeuni/mmset.html

[1] http://us.metamath.org/mpeuni/ax-mp.html

[2] http://us.metamath.org/mpeuni/ax-gen.html
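A minimal sketch of the first rule, modus ponens, as a repeated closure over known statements. This is a toy model with statements as opaque labels; real Metamath also handles substitution into statement schemes, which this omits entirely:

```python
def modus_ponens_closure(axioms, implications):
    """From A and "A implies B", conclude B; repeat until nothing new.

    `axioms` is a set of statement labels taken as true; `implications`
    maps a statement A to the statements B for which "A -> B" is given.
    """
    known = set(axioms)
    changed = True
    while changed:
        changed = False
        for a in list(known):
            for b in implications.get(a, ()):
                if b not in known:
                    known.add(b)
                    changed = True
    return known

# Toy derivation: from P, P -> Q, and Q -> R we can conclude Q and R.
facts = modus_ponens_closure({"P"}, {"P": {"Q"}, "Q": {"R"}})
```

The surprising part is how little machinery is needed: a loop like this, plus the generalization rule and the axiom schemes, is in principle enough to reach most of mathematics.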


Did you cover proof by induction? That technique is the one most likely to click with CS types.


We did, but it didn't help.


I found proof by induction useful to learn, because it translates to solving problems recursively, both in theory and in programming languages.
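As a throwaway illustration of that correspondence: the induction proof that 1 + 2 + ... + n = n(n+1)/2 has exactly the same shape as the recursive function computing the sum:

```python
def triangle(n):
    """1 + 2 + ... + n, structured exactly like the induction proof."""
    if n == 0:                    # base case: the claim holds for n = 0
        return 0
    return triangle(n - 1) + n    # inductive step: from n-1 up to n

# The induction proves the closed form n*(n+1)/2; spot-check it.
assert all(triangle(n) == n * (n + 1) // 2 for n in range(50))
```

The base case is the recursion's terminating branch, and the inductive hypothesis is the assumption that the recursive call on `n - 1` is correct.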


I love, love, love Velleman's book.

I worked through it years after uni, once I'd been working for a few years. It changed my perspective on proofs and math completely!

It showed how the language of logic combined with set theory forms a very small and comprehensible foundation for most of math.

After reading it and a few introductory-level books on number theory, calculus and combinatorics, most of CS proofs started feeling... cute.

Did I say I love "How to prove it"?


Thanks - I've started working through this book. I see you're in programming / software engineering - would you say working through it helped your engineering skills (not necessarily day-to-day, but perhaps your ability to reason about problems)?


Well... it's been a long journey. I'd say that working through math in general, going both wider and deeper, does indeed help with reasoning and being able to reach for theory whenever necessary. Combinatorics, elementary number theory and probability are especially useful. Graph theory as well.

CS is quite math-intensive on its own, but it kind of assumes a certain level of mathematical maturity.


> a few introductory-level books on number theory, calculus and combinatorics

Any particular recommendations?


Of recent gems: Elementary Number Theory by Underwood Dudley is easy to like. You can really see how it was written with an active reader in mind, with a good and doable set of inline exercises, key problems and additional problems. The material covers all the basics and doesn't try to be original, which is good for self-study.

Instead of listing other books, I would suggest the following learning scheme for a programmer trying to deepen their math:

1. No need to go beyond the basics. Any uni-level introductory math course is already more advanced than 99.99% of the uses you might have.

2. Know your basics well. Get 2-3 classic books. Work through the one you like most, but still read through the rest.

3. Practical book size limit - 200-250 pages. Math material is hard, 200 pages of good math is months and months and months and mo...

4. Set up a learning/reading routine.


One last important bit: don't forget to enjoy the adventure, take it slow, think and feel how your reasoning toolbox expands :-)


"How to Prove It" was my textbook in my Abstract Algebra class. Absolutely loved it.



