The Mechanics of Proof (hrmacbeth.github.io)
137 points by segfaultbuserr on March 19, 2024 | 25 comments


One thing that's always been a little unclear to me is this:

With proofs, I normally think of the "work" as being in the derivation of a statement that is shown to be true. That is, the value is in deriving a new true statement that was not previously known to be true.

In the examples I usually see for using something like Lean, there's a true statement given, and the question is how to prove it. Although there are reasons this could be interesting, it doesn't seem quite as valuable as deriving some true statement that wasn't previously known to be true.

Can something like Lean be used generatively, to generate multiple true statements given a set of assumptions?


I would think you’d generally need to have something interesting in mind that you think might be true before bothering to prove it. Certainly you’ll need it to be interesting if you want anyone else to notice. It’s trivial to generate an endless tree of true statements by just tree searching with whatever valid operators your proof system has.


How do you know the statement is true if you have not proven it to be true?

Proof of true statements and discovery of new true statements go hand in hand. This is also because the proof tells you something about why the theorem is interesting. Generating multiple true statements given a set of assumptions is not very interesting per se.

But yes, something like, "in your situation, this is an interesting conjecture" would be nice in a proof assistant. Maybe AI will get us there.


I guess in my research (which has involved proofs), I haven't always known where a proof would lead a priori? I don't usually start with a statement I want and then look to prove it, it's more like I explore various derivations to see where they lead (probably with some kind of hunches), and then some of them end up in an interesting place.

But yes, I agree some kind of AI-ish guided suggestion process would be interesting. You'd have to quantify "interesting" probably to build such a thing but I suspect it could be done.


It's like solving a maze or exploring a dungeon. Exploring from the start might find something interesting, but having an interesting goal in mind can suggest a possible path and is a guarantee that the proof is rewarding. Whereas untargeted exploration can end up somewhere more surprising and fun, but also less likely to generate anything interesting.


> I don't usually start with a statement I want and then look to prove it, it's more like I explore various derivations to see where they lead (probably with some kind of hunches), and then some of them end up in an interesting place.

You can work this way in Lean. You need to have a goal to make the engine happy, but there's no need for the goal to be true -- you can just do something like

    have h : ∀ (x y : Nat), x = y := by
      -- h is not true, but in the attempt to prove it,
      -- we can do whatever we want; the `sorry` keeps Lean happy while we explore.
      -- If we find something interesting, we can redefine h.
      sorry


Maybe what you are saying is that you want to use the proof assistant like a computer algebra system. You apply correct transformations, and so you are at the same time proving and exploring.

I would agree that this aspect is underdeveloped in proof assistants. At the root of it is that people believe that logic is different from algebra, while it really isn't.
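For a taste of what that transformation-by-transformation style looks like today, here's a minimal calc sketch of my own (assuming Mathlib for ℤ and the `ring` tactic; nothing here is from TFA). Each step is a correct rewrite that Lean checks as you go:

    import Mathlib.Tactic

    example (a b : ℤ) : (a + b) ^ 2 = a ^ 2 + 2 * a * b + b ^ 2 :=
      calc (a + b) ^ 2 = (a + b) * (a + b) := by ring
        _ = a * a + a * b + b * a + b * b := by ring
        _ = a ^ 2 + 2 * a * b + b ^ 2 := by ring

Each `:= by ring` justifies one algebraic transformation, which is about as close to "CAS mode" as current proof assistants get.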


This is an interesting perspective. I've recently been playing with sequent calculus, and I'm writing everything upside-down because I'm starting with the statement to prove and applying the derivations mechanically to produce the proof tree.

But hearing how you approach proofs makes the standard directionality make much more sense.


The question is: is something true or not? If you can prove it, then you know it is true. If you cannot prove it, all you know is that you cannot prove it. It could still be true, or false. So the best you can do is to TRY to prove it, and/or try to prove its negation.

As pointed out by others, generating random true statements is not very interesting, because you can generate them trivially by appending "&& true" to a statement as many times as you want.
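To make the triviality concrete, here's a throwaway Lean sketch of mine (plain Lean 4, not from the thread): each extra `∧ True` is a brand new true statement, and none of them tells you anything.

    -- every one of these is true and provable by `decide`,
    -- and every one of them is equally uninteresting
    example : 2 + 2 = 4 := by decide
    example : (2 + 2 = 4) ∧ True := by decide
    example : ((2 + 2 = 4) ∧ True) ∧ True := by decide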

So in general you always want to prove something, because only that can tell you something about what is true and what is not.


Yes, easily, but there are infinitely many of them and, unlike a given statement, they are mostly uninteresting.


It’s easy to generate true statements, but difficult to generate interesting true statements. It’s humans who intuit statements they think might be true, or are convinced must be true, and where we’d like to have a formal machine-checked proof. That latter use case is the primary one for theorem-provers like Lean.


Let's say you want to prove a compiler correct. How do you express that as a thing you want to prove mathematically? Here is a possible way to do it:

BehaviourMC(compile(src)) = BehaviourE(src)

You are basically saying that the machine code behaviour of the compiled source should be identical to emulating the source directly.

You then need to define what you mean mathematically by machine code etc. until the goal is fully specified.

And then you prove it correct.
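As a rough Lean sketch of how such a goal could be written down (the names Src, MachineCode, Behaviour, compile, behaviourMC and behaviourE are placeholders of mine, not from any real development):

    -- stand-ins for a real source language, target language and semantics;
    -- in an actual project these would be defined, not axiomatised
    axiom Src : Type
    axiom MachineCode : Type
    axiom Behaviour : Type
    axiom compile : Src → MachineCode
    axiom behaviourMC : MachineCode → Behaviour  -- run the compiled machine code
    axiom behaviourE : Src → Behaviour           -- emulate the source directly

    -- the correctness statement from above, as a Lean goal
    theorem compiler_correct (src : Src) :
        behaviourMC (compile src) = behaviourE src := by
      sorry  -- the actual proof is where all the work lives

Everything before the `sorry` is the "fully specify the goal" step; the proof itself is the hard part.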


I much prefer this resource over the official offerings.

https://lean-lang.org/theorem_proving_in_lean4/title_page.ht...

You may need to use them in tandem, but I have found it much more productive to jump to the “number theory” section of TFA than to step through the official resource.

This resource also has its code online: https://github.com/hrmacbeth/math2001/tree/main/Math2001

One major omission from TFA is Lean's built-in package manager and build tool, lake: https://lean-lang.org/lean4/doc/setup.html#lake


What is going on in the third example?

"Let a,b,m,n be integers, and suppose that b^2 = 2a^2 ..."

So (b^2)/(a^2) = 2, which gives b/a = sqrt(2),

or, in other words, sqrt(2) is rational. Then it goes and uses this in the rest of the example.


You are referring to https://hrmacbeth.github.io/math2001/01_Proofs_by_Calculatio...

That's hilarious. It's the brick joke, in math. https://plantsarethestrangestpeople.blogspot.com/2012/01/bri... It's a great example, made terrible (for teaching) because the punchline is never explained.

Your excerpt left out key information, which is the secondary hypothesis that am+bn=1.

Hint: what are a and b? I won't wait for you to find values that satisfy the hypotheses.

It's the same flavor as: assume a=b+1, and b=a+1. Then a-b=b-a. You can prove that p=>q, sometimes, even if p is false. (Especially if p is false!)
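The same flavor in Lean (my own toy example, using the linear-integer-arithmetic tactic `omega`): the hypotheses cannot both hold over the integers, so the conclusion goes through.

    -- a = b + 1 and b = a + 1 are jointly impossible, so `omega`
    -- happily closes the (otherwise unprovable) goal a - b = b - a
    example (a b : Int) (h1 : a = b + 1) (h2 : b = a + 1) : a - b = b - a := by
      omega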

A later chapter provides the punchline.

https://hrmacbeth.github.io/math2001/07_Number_Theory.html#t...


The point is that in order to prove something by contradiction, you must assume something that turns out to be false and see what you can derive from that assumption (ideally a contradiction). The last link in the sibling comments does indeed contain proofs that the given premise is impossible. Think about how you know that the square root of 2 is irrational.
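In Lean that shape looks roughly like this (a generic sketch of mine, assuming Mathlib's `by_contra` tactic):

    example (P : Prop) (h : ¬P → False) : P := by
      by_contra hP   -- assume the negation of what we want
      exact h hP     -- and derive the contradiction from that assumption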


It is correct, true, and funny. Funny in the way that it creates a seeming contradiction and your brain feels good when it solves it.

Nothing special is going on there, but it may be distracting if the joke doesn't land.


Some mathematicians are afraid of being censored or only becoming mechanics.

"What some people like to call the progress of mathematics has always consisted in the subversion of paradigms. Pierre Deligne made this point in a lecture on Voevodsky’s univalent foundations: he said it reminds him of Orwell’s Newspeak, where the ideal was to have a language where it is impossible to express heretical thoughts."

https://siliconreckoner.substack.com/p/the-central-dogma-of-...


Deligne probably invoked the language of set theory without thinking that it restrained him too much. Mathematicians seem to be terrible philosophers.


See also the Lean Game Server at https://adam.math.hhu.de/ , and in particular the "Natural Number Game" (for Lean 4!), which has done the rounds on HN a few times.

Does anyone know of a Lean book that builds up the basics from absolutely nothing, not even the Lean standard library? I'm curious about the foundations that lie just above the language.


For me, the best route is to start with Functional Programming in Lean [1]. Get familiar with dependent type theory and Lean's syntax. Then go back to Lean's source code [2].

Lean is self-hosted, so the language is written in Lean, and it's well documented, from the Prelude [3] all the way up to the language server (yes, Lean ships a parser combinator library and a JSON library with the language).

One caveat is that the toolchain version of the repo is the repo itself, so you'd have to follow the development guide [4] to set it up.

Also, this is different from the standard library std4 [5], which is actually not shipped with the language, but just a normal package developed and used as the standard library.

[1]: https://leanprover.github.io/functional_programming_in_lean/

[2]: https://github.com/leanprover/lean4

[3]: https://github.com/leanprover/lean4/blob/master/src/Init/Pre...

[4]: https://github.com/leanprover/lean4/blob/master/doc/dev/inde...

[5]: https://github.com/leanprover/std4


Your best bet lies with the introductory Functional Programming in Lean: https://lean-lang.org/functional_programming_in_lean/


Anyone who wants to learn more about the calculational proof style as it applies to computing science would do very well to read Dijkstra.


What by Dijkstra, specifically?


This[1] contains an explanation of his proof format. More generally many of the later monographs from that archive have interesting proofs. Also the book Predicate Calculus and Program Semantics is chock full of proofs.

[1] https://www.cs.utexas.edu/users/EWD/transcriptions/EWD13xx/E...



