One thing that's always been a little unclear to me is this:
With proofs, I normally think of the "work" as being in the derivation of a statement that is shown to be true. That is, the value is in deriving a new true statement that was not previously known to be true.
In the examples I usually see for using something like Lean, there's a true statement given, and the question is how to prove it. Although there are reasons this could be interesting, it doesn't seem quite as valuable as deriving some true statement that wasn't previously known to be true.
Can something like Lean be used generatively, to generate multiple true statements given a set of assumptions?
I would think you’d generally need to have something interesting in mind that you think might be true before bothering to prove it. Certainly you’ll need it to be interesting if you want anyone else to notice. It’s trivial to generate an endless tree of true statements by just tree searching with whatever valid operators your proof system has.
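For instance, in Lean (a toy sketch; the theorem names are made up), conjunction introduction alone is enough to spin out endless further truths from one proved fact:

```lean
-- One proved fact…
theorem base : 1 + 1 = 2 := rfl

-- …mechanically yields an endless tree of further true statements,
-- none of them any more interesting than the original.
theorem step1 : (1 + 1 = 2) ∧ (1 + 1 = 2) := ⟨base, base⟩
theorem step2 : ((1 + 1 = 2) ∧ (1 + 1 = 2)) ∧ (1 + 1 = 2) := ⟨step1, base⟩
```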
How do you know the statement is true if you have not proven it to be true?
Proof of true statements and discovery of new true statements go hand in hand. This is also because the proof tells you something about why the theorem is interesting. Generating multiple true statements given a set of assumptions is not very interesting per se.
But yes, something like, "in your situation, this is an interesting conjecture" would be nice in a proof assistant. Maybe AI will get us there.
I guess in my research (which has involved proofs), I haven't always known where a proof would lead a priori? I don't usually start with a statement I want and then look to prove it, it's more like I explore various derivations to see where they lead (probably with some kind of hunches), and then some of them end up in an interesting place.
But yes, I agree some kind of AI-ish guided suggestion process would be interesting. You'd have to quantify "interesting" probably to build such a thing but I suspect it could be done.
It's like solving a maze or exploring a dungeon. Exploring from the start might find something interesting, but having an interesting goal in mind can suggest a possible path and guarantees that the proof is rewarding. Untargeted exploration can end up somewhere more surprising and fun, but it is also less likely to produce anything interesting.
> I don't usually start with a statement I want and then look to prove it, it's more like I explore various derivations to see where they lead (probably with some kind of hunches), and then some of them end up in an interesting place.
You can work this way in Lean. You need to have a goal to make the engine happy, but there's no need for the goal to be true -- you can just do something like
have h : ∀ (x y : Nat), x = y := by
  -- h is not true, but in the attempt to prove it,
  -- we can do whatever we want.
  -- If we find something interesting, we can restate h.
  sorry
Maybe what you are saying is that you want to use the proof assistant like a computer algebra system. You apply correct transformations, and so you are at the same time proving and exploring.
I would agree that this aspect is underdeveloped in proof assistants. At the root of it is that people believe that logic is different from algebra, while it really isn't.
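As a small sketch of that CAS-like style in Lean (a toy identity, not from the thread): each `calc` step applies one correct rewrite, so exploring and proving happen at the same time.

```lean
-- Each step is a correct transformation; the chain is the proof.
example (n : Nat) : (n + 0) * 1 = n :=
  calc (n + 0) * 1 = n * 1 := by rw [Nat.add_zero]
    _ = n := by rw [Nat.mul_one]
```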
This is an interesting perspective. I've recently been playing with sequent calculus, and I'm writing everything upside-down because I'm starting with the statement to prove and applying the derivations mechanically to produce the proof tree.
But hearing how you approach proofs makes the standard directionality make much more sense.
The question is: is something true or not? If you can prove it, then you know it is true. If you cannot prove it, all you know is that you cannot prove it; it could still be true, or false. So the best you can do is to TRY to prove it, and/or try to prove its negation.
As pointed out by others, generating random true statements is not very interesting, because you can generate them trivially by appending && true to a statement as many times as you want.
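In Lean terms (a sketch), that padding looks like this: any provable P stays provable after appending ∧ True, so the supply of true statements is trivially infinite.

```lean
theorem pad (P : Prop) (h : P) : P ∧ True := ⟨h, trivial⟩
theorem pad2 (P : Prop) (h : P) : (P ∧ True) ∧ True := ⟨pad P h, trivial⟩
-- …and so on: each step is valid, and each result is worthless.
```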
So in general you always want to prove something, because only that can tell you something about what is true and what is not.
It’s easy to generate true statements, but difficult to generate interesting true statements. It’s humans who intuit statements they think might be true, or are convinced must be true, and where we’d like to have a formal machine-checked proof. That latter use case is the primary one for theorem-provers like Lean.
You may need to use them in tandem, but I have found it much more productive to jump to the “number theory” section of TFA than to step through the official resource.
The point is that in order to prove something by contradiction, you must assume something that turns out to be false and see what you can derive from that assumption (ideally a contradiction). The last link in the sibling comments does indeed contain proofs that the given premise is impossible. Think about how you know that the square root of 2 is irrational.
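The shape of such a proof in Lean (a minimal sketch, using core's `Classical.byContradiction`): assume the negation and derive `False`.

```lean
-- To prove P by contradiction: assume ¬P and derive a contradiction.
example (P : Prop) (h : ¬¬P) : P :=
  Classical.byContradiction (fun hn => h hn)
```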
Some mathematicians are afraid of being censored or only becoming mechanics.
"What some people like to call the progress of mathematics has always consisted in the subversion of paradigms. Pierre Deligne made this point in a lecture on Voevodsky’s univalent foundations: he said it reminds him of Orwell’s Newspeak, where the ideal was to have a language where it is impossible to express heretical thoughts."
See also the Lean Game Server at https://adam.math.hhu.de/ , and in particular the "Natural Number Game" (for Lean 4!), which has done the rounds on HN a few times.
Does anyone know of a Lean book that builds up the basics from absolutely nothing, not even the Lean standard library? I'm curious about the foundations that lie just above the language.
For me, the best route is to start with Functional Programming in Lean [1]. Get familiar with dependent type theory and Lean's syntax. Then go back to Lean's source code [2].
Lean is self-hosted, so the language is written in Lean. And it's well documented. Starting from the Prelude [3], all the way up to the language server (yes, Lean ships a parser combinator library and a JSON library with the language).
One caveat is that the repo's own toolchain version is the repo itself, so you'd have to follow the development guide [4] to set it up.
Also, this is different from the standard library std4 [5], which is actually not shipped with the language, but just a normal package developed and used as the standard library.
This[1] contains an explanation of his proof format. More generally many of the later monographs from that archive have interesting proofs. Also the book Predicate Calculus and Program Semantics is chock full of proofs.