> We want to be able to add two numbers and get another number, but we don't want to be able to intersect two numbers as if they were sets, even if they happen to have been built out of sets.
Can't we do this in current mathematics?! I mean, no physicist or engineer ever thinks of numbers as sets, even if you are the kind of physicist that reads and understands mathematical proofs.
Right, this is how mathematics really works. But formalizations of mathematics may suffer from leaky abstractions. If we prove facts about numbers by compiling them into sets, and then using set-theoretic axioms, we might accidentally make it possible to prove things about numbers that are incorrect or meaningless.
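To make the leak concrete, here is a sketch in Python (an illustration only; real formalizations live in a proof assistant, not a programming language) using the standard von Neumann encoding, where 0 = {} and n+1 = n ∪ {n}:

```python
# Von Neumann encoding of the natural numbers as nested frozensets:
# 0 = {}, 1 = {0}, 2 = {0, 1}, ...
def von_neumann(n):
    """Build the von Neumann set representing n."""
    s = frozenset()
    for _ in range(n):
        s = s | frozenset([s])  # n+1 = n ∪ {n}
    return s

two = von_neumann(2)
three = von_neumann(3)

# The abstraction leaks: set-level facts become "facts about numbers".
print(von_neumann(1) in two)   # True: "1 ∈ 2" is provable under this encoding
print(two & three == two)      # True: "2 ∩ 3 = 2", intersecting numbers as sets
```

Those last two lines are exactly the kind of meaningless-but-provable statements the quoted comment is worried about.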
Isn't this an abstraction problem that you solve by simply providing an "interface" (or some equivalent concept, like OOP access specifiers such as private/protected)? All the other modules that use the number module for applied math would just see an interface (let's call it GeneralNumber; as far as I know there are a few alternate ways of defining numbers besides sets, right?), and never the particular implementation of numbers as sets.
For more abstract algebras or who knows what, the "numbers module" might also implement another, more advanced interface that exposes more of the implementation, say a "SetsNumber" interface. If you then have a proof that uses this interpretation of number, tied to that one particular "implementation", there is nothing incorrect about it leading to weird or "meaningless" results: they would be correct for SetsNumber but not for GeneralNumber (or someone might need to take a good look and see whether they can be made to work for GeneralNumber too).
(I know, the words are all wrong; it probably sounds either "all wrong" or like gibberish to mathematicians who don't also happen to be programmers ...someone should figure out more appropriate terms :) )
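The interface idea above could be sketched in Python like this (names such as GeneralNumber and SetNumber are the hypothetical ones from this comment, not anything standard; the set encoding is hidden behind the interface, so consumers can add numbers but can never intersect them):

```python
from __future__ import annotations
from typing import Protocol

class GeneralNumber(Protocol):
    """The abstract interface applied-math modules see: no set operations."""
    def plus(self, other: GeneralNumber) -> GeneralNumber: ...
    def value(self) -> int: ...

class SetNumber:
    """One implementation of GeneralNumber, built on the von Neumann encoding."""
    def __init__(self, n: int):
        s = frozenset()
        for _ in range(n):
            s = s | frozenset([s])
        self._set = s  # private implementation detail, not exposed

    def plus(self, other: SetNumber) -> SetNumber:
        return SetNumber(self.value() + other.value())

    def value(self) -> int:
        # Under the von Neumann encoding, the set for n has exactly n elements.
        return len(self._set)

a, b = SetNumber(2), SetNumber(3)
print(a.plus(b).value())  # 5: addition works, but no intersection is exposed
```

The point is only the shape of the design: proofs (here, client code) written against GeneralNumber can't mention the underlying sets at all.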
And about leaky abstractions, I think they happen a lot in software because of the tradeoffs we make, like "but we also need access to that low-level stuff to tweak performance", "but we need it done yesterday, so there's no time to think it through and find the right model", or "our model has contradictions and inconsistencies, but it's good enough at delivering usable tools to the end user, so we'll leave it 'wrong' because we want to focus on something that brings more business value right now", etc. Also, there's a biggie: for some problems, using no abstraction is not good enough (initial development/prototyping speed is just too slow), but if you figure out the right abstraction it will end up being understandable only by people with "IQ over n" or "advanced knowledge of hairy theoretical topic x", and you can't hire only those kinds of people to maintain the product, so you knowingly choose something that's leaky but works and can be maintained by mediocre programmers, hopefully even outsourced :)
Yes, the interface idea is basically the right thing, but there are technical difficulties which aren't immediately obvious. For example, we'd like to be able to prove that two different implementations of the natural numbers are equivalent (one can do this mathematically, so if our system is going to handle general mathematics then it has to be capable of doing this). So we have to think quite hard about what this "equivalence" actually means. It's not enough to say that they satisfy the same axioms, because in general there can be all kinds of models for some set of axioms. It's closer to say that all number-theoretic statements about numbers-v1 are true of numbers-v2, and vice versa: but you can see that this is starting to get a bit hairy in terms of computable proofs.
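One candidate meaning of "equivalence" can at least be written down: a bijection between the two carriers that preserves zero and successor. A hedged sketch in Lean 4 (NatImpl and NatEquiv are hypothetical names for this illustration, not mathlib definitions):

```lean
-- An "implementation" of the naturals: a carrier with zero and successor.
structure NatImpl where
  carrier : Type
  zero    : carrier
  succ    : carrier → carrier

-- An equivalence of implementations: a bijection commuting with the structure.
structure NatEquiv (A B : NatImpl) where
  toFun     : A.carrier → B.carrier
  invFun    : B.carrier → A.carrier
  left_inv  : ∀ a, invFun (toFun a) = a
  right_inv : ∀ b, toFun (invFun b) = b
  map_zero  : toFun A.zero = B.zero
  map_succ  : ∀ a, toFun (A.succ a) = B.succ (toFun a)
```

The hairy part mentioned above is what comes next: showing that every number-theoretic statement transports across such an equivalence, so that anything proved about numbers-v1 automatically holds of numbers-v2.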
A related problem, which speaks to the leaky abstraction issue, is "proof irrelevance". Typically, if I've proved something, it shouldn't matter exactly how I did it. But it turns out to be tricky to make sure that the proof objects in the system don't accidentally carry too much information about where they came from. Sure, you can define a way to erase the details, but you still have to prove that erasing doesn't mess up the deductive system.
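For what it's worth, some proof assistants bake proof irrelevance into the kernel. In Lean 4, any two proofs of the same proposition are definitionally equal, so this one-liner type-checks by pure reflexivity:

```lean
-- Definitional proof irrelevance: h₁ and h₂ are equal no matter how
-- p was proved, so `rfl` closes the goal.
example (p : Prop) (h₁ h₂ : p) : h₁ = h₂ := rfl
```

That design choice is exactly the "erasing the details" move, pushed into the trusted core so it doesn't have to be re-proved case by case.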
None of this is insurmountable, but it's a glimpse into the reasons why encoding mathematics computationally is not trivial.