The claim that Lean's core is not completely sound is FUD. Completely bogus.
You might be confused by the fact that Lean's definitional equality is not decidable, but that doesn't mean it isn't sound. Nobody has ever found a proof of `false` so far in Lean 3.
The choice for Lean is actually quite natural: (i) it has a large and coherent library of mathematics to build such a project upon. And (ii), it has a substantial user base of mathematicians somewhat familiar with the subject at hand (i.e., condensed mathematics).
The initial challenge by Peter Scholze was directed at all theorem proving communities. As far as I know, only the Lean community took up the challenge.
(Concerning Lean 4: yes, mathlib will have to be ported. And yes, people are working hard on this. I think that none of the theorem provers so far are ready for wide-scale use in mathematics. And that's why it is important to iterate fast. The Lean community is not afraid of redesigns and large refactors when discovering better approaches.)
> You might be confused by the fact that Lean's definitional equality is not decidable, but that doesn't mean it isn't sound. Nobody has ever found a proof of `false` so far in Lean 3.
You are right, my bad. Taking my words back on that.
A bit more detail from Pierre-Marie Pédrot:
> Lean does not escape this limitation, as it defines the equality type inductively as in Coq. Rather, in a mastermind PR campaign, they successfully convinced non-experts of type theory that they could give them quotient types without breaking everything around.
> The first thing they do is to axiomatize quotient types, which is somewhat fine and can be done the same in Coq. As any use of axioms, this breaks canonicity, meaning your computations may get stuck on some opaque term. (While slightly cumbersome, axiomatizing quotients already solves 99.9% of the problems of the working mathematician.)
> Now, here is where they do evil things that would make decent type theorists shake in terror: they add a definitional reduction rule over the quotient induction principle. We could do the same in the Coq kernel, but there is a very good reason not to do this. Namely, this definitional rule breaks subject reduction, which is a property deemed more important than canonicity.
> In Lean, you can write a term t : A where t reduces to u, but u doesn't have type A (or equivalently may not typecheck at all). This is BAD for a flurry of reasons, mark my words. Reinstating it while keeping quotients requires an undecidable type checking algorithm, which is another can of worms you don't want to wander into. The same kind of trouble arises from similar Lean extensions like choice.
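For concreteness, here is roughly what the quotient primitives being discussed look like in Lean 4. This is a sketch from memory of the core API, not an authoritative reference; the point is the last line, where the kernel's definitional computation rule for `Quot.lift` fires:

```lean
-- Lean's built-in quotient API (Lean 4 syntax), sketched.
-- `Quot r` is the quotient of a type by the relation `r`.

-- Soundness axiom: related elements give equal equivalence classes.
#check @Quot.sound
-- Lifting: a function that respects `r` descends to the quotient.
#check @Quot.lift

-- The contentious part is the *definitional* computation rule
--   Quot.lift f h (Quot.mk r a)  ≡  f a
-- which the kernel applies during reduction, so `rfl` closes this:
example (f : Nat → Nat) (h : ∀ a b : Nat, a = b → f a = f b) (a : Nat) :
    Quot.lift f h (Quot.mk Eq a) = f a := rfl
```

It is exactly this reduction rule (rather than the `Quot.sound` axiom itself) that the quoted comment blames for the loss of subject reduction.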
> Lean does not escape this limitation, as it defines the equality type inductively as in Coq. Rather, in a mastermind PR campaign, they successfully convinced non-experts of type theory that they could give them quotient types without breaking everything around.
This is not the first time that I hear someone from the Coq community talk about Lean and its "mastermind PR campaign". To me it comes across in a denigrating way, and frankly I'm a bit sick of it.
Working mathematicians are usually not experts in type theory. Yes. But they aren't stupid either. Why does the Lean community have such a large number of mathematician users? Why did it successfully formalize the main technical result of Peter Scholze's challenge in less than 6 months?
Is this all because of a "mastermind PR campaign" selling mathematicians snake oil?
There have been some strong disagreements and even mudslinging between the Coq and Lean communities in the past. But I thought that hatchet was buried. I would like to move on.
I'm fine with people finding the foundations of Lean ugly, distasteful, improper, or whatever. But please stop the insults.
The context of that "mastermind PR campaign" comment is a video, now watched more than 70,000 times, in which it's claimed that Coq is just not as good as Lean because it doesn't have quotient types, even though you can get them if you're willing to make the same trade-off that Lean does (broken SR). Calling Lean's framing a 'mastermind PR campaign' is of course still quite snippy, but I don't think it's totally unfair, seeing how it led to those comments.
> There have been some strong disagreements and even mudslinging between the Coq and Lean communities in the past. But I thought that hatchet was buried. I would like to move on.
I think it just takes time to fully move on. And while it's not the fault of the Lean community, when something like the recent Quanta Magazine article on formalizing mathematics essentially only mentions Coq to call it old and busted in comparison to Lean, that's only going to cause people to hold onto their resentments for longer.
The Coq folks tend to focus on constructivity, and Lean gets sold as a constructive system but then you get this weird axiomatised-quotients stuff that's the farthest thing from constructivity. I can see how they might find that kind of thing highly frustrating.
It might not show up as a problem if you only ever care about classical stuff, but there are arguably better ways of doing purely classical mathematics that don't arbitrarily break interoperability w/ the constructive world.
mathlib != Lean. I'm talking about the basic logic. People will try to sell you Lean by describing its system as constructive, but if so the quotients stuff is pure breakage as the Coq folks point out.
And if Lean could support classical logic without arbitrarily breaking interop for folks who want to also prove constructive statements, we might see some additions to mathlib with a closer focus on constructive math.
I agree very much that mathlib != Lean. Still, I think much of the talk about Lean will mention that mathlib is classical.
It was an honest question: I don't know where Lean is sold as a constructive system. (Note, I haven't read every part of Lean's documentation or website. I might be missing something obvious here.)
I don't think anyone is trying to sell Lean as a constructive system. The current developers certainly don't think of it that way, further evidenced by the fact that the typical way of doing computation in Lean does not involve definitional reduction, but using `#eval` with additional low level code for performance. Proof irrelevance (and quotients) were adopted with that in mind.
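To illustrate the point about computation: these are the two mechanisms in Lean 4, sketched below with a hypothetical `fib` definition. `#eval` runs compiled code, while `#reduce` asks the kernel for definitional reduction; day-to-day computation goes through the former.

```lean
-- A toy definition for illustration.
def fib : Nat → Nat
  | 0     => 0
  | 1     => 1
  | n + 2 => fib (n + 1) + fib n

-- `#eval` runs compiled code (the typical way; fast, backed by
-- low-level GMP-style Nat arithmetic rather than unary reduction).
#eval fib 20      -- 6765

-- `#reduce` uses the kernel's definitional reduction (slow, and
-- rarely what users reach for in practice).
#reduce fib 10    -- 55
```

Since `#eval` doesn't rely on definitional reduction at all, losing canonicity to axioms is much less painful in practice than it would be in a system where computation means kernel reduction.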
You can absolutely do constructive things in Lean, as long as you use neither the built-in "quot" type nor the three other axioms described in the docs; I think you then get a system with all the desired properties.
Some constructivists may also take offense at proof irrelevance (and the resulting loss of normalization [1] or its incompatibility with HoTT), which you can only really avoid by avoiding Prop.
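Lean does make it easy to audit which axioms a given proof actually relies on, via `#print axioms`. A small sketch (the theorem names are made up for illustration):

```lean
-- A proof by pure definitional reduction: `n + 0` reduces to `n`.
theorem plus_zero (n : Nat) : n + 0 = n := rfl

-- This should report no axiom dependencies.
#print axioms plus_zero

-- Excluded middle, by contrast, is derived from the classical axioms.
theorem lem (p : Prop) : p ∨ ¬p := Classical.em p

-- This should report dependencies on the likes of
-- `Classical.choice`, `propext`, and `Quot.sound`.
#print axioms lem
```

So "constructive as long as you avoid quot and the other axioms" is something you can actually check mechanically, per theorem.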
Ok, no worries. I understand that from a theoretical point of view it's not so nice that defeq is not decidable. But frankly I don't care. Because in practice, I don't know of any example where someone was bitten by this. It only shows up in contrived examples. And even if you have some practical example where Lean gets stuck on A = B, it will probably be very easy to find a C such that Lean gets unstuck on A = C = B.
[Edit, this comment only replies to the first paragraph of parent. I wrote it before parent was edited.]
By the way, just for the record, I like many aspects of the new Lean 4 design and wish the project luck. They really aim for better maintainability and user experience. It is nice to see them taking best solutions from other programming languages and frameworks.
I haven't used quotients (or Lean), but I've certainly encountered subject reduction problems in Coq when using coinduction, so this sounds a tad hypocritical.
It's certainly good to avoid breaking subject reduction even more though ;)
Pierre Pedrot has been very vocal in his belief that broken SR from coinductive types is a serious problem with Coq that needs to be fixed. This is quite different from introducing stuff that deliberately breaks it.
There's ongoing work[1] on fixing that. Also let's see how migrating setoids to cubical type theory might fix more issues for Coq. Will they follow the Arend[2] path? That remains to be seen.
Yes and no. If you want to port large low-level proof objects in all their gory detail, then this has been done already. But if you want to port the more concise higher-level proof scripts you run in all the same issues as with regular transpilers.
Compare it to porting machine code for one instruction set to another instruction set. This is usually doable in a straightforward manner (maybe with a mild performance penalty). But transpiling idiomatic Java to idiomatic Haskell is nigh impossible.
Luckily, the differences between Lean 3 and Lean 4 are not as large as between Java and Haskell. But still, they have different idioms, different syntax, different features. And this has to be taken care of.