
Or the definable reals. Definable meaning you can “name” it uniquely, by writing down some proposition which is true of that real and false for every other. Since there are only countably many formulas, there are only countably many definable reals. The definable reals are a superset of the computable reals: Chaitin’s constant (for a given computational formalism) is definable but not computable.

There is not a single set of definable reals, since different formal languages produce different sets (e.g. there are reals definable in second order logic but not first order logic). But, given any formal language composed of finite sentences from a finite alphabet, the set of sentences in the language is countable so the corresponding set of definable reals is too.
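A minimal sketch of that counting argument, assuming (purely for illustration) that sentences are just finite strings over some fixed finite alphabet:

```python
from itertools import count, product

def enumerate_sentences(alphabet):
    """Yield every finite string over the alphabet, shortest first.

    Finitely many strings of each length, and the lengths are indexed
    by the naturals, so the whole collection is countable. Any set of
    definable reals injects into some such list of sentences.
    """
    for n in count(1):
        for chars in product(alphabet, repeat=n):
            yield "".join(chars)

gen = enumerate_sentences("ab")
first_six = [next(gen) for _ in range(6)]  # a, b, aa, ab, ba, bb
```

The `"ab"` alphabet is a toy stand-in; a real formal language has a bigger (but still finite) alphabet, and the same enumeration works.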

The definable reals are all the reals we can talk about individually. (Well, there are some definable reals we can’t in practice talk about individually, e.g. those for which the shortest sentence uniquely identifying them is too long to fit in this physical universe.) Undefinable reals are an uncountable amorphous mass we can only discuss collectively; we can’t refer to any of them as individuals.



For any concrete definition of definable reals and any well-order of the reals, I can refer to the first real that’s undefinable in that system.


If we don’t limit ourselves to formal systems, we can speak about definability in natural language. In which case you just uttered a paradox.

Now, you can object to “natural language definability” as lacking “concreteness” since maybe it can’t be formalised, but then the claim in your comment can’t be formalised either…

Actually, are we sure natural language can’t be formalised? Obviously any formalisation of natural language has to be paraconsistent (maybe even dialetheic) rather than consistent, but granted that…


I think you’re right, which I guess might prove that it’s impossible to refer to a well-ordering of the real numbers. I guess that’s what they mean when they say non-constructible, huh?

But still my intuition is that for any consistent formal system I should be able to find a more powerful system that can express strictly more things, even if my argument above was silly. But I don’t think we can talk about the set of things that can be defined in natural language in any meaningful way.


> But still my intuition is that for any consistent formal system I should be able to find a more powerful system that can express strictly more things

The problem with defining more and more powerful systems is that we have an arbitrary selection of axioms to choose from. For example, if we start with the Peano axioms and define a Goedel sentence (i.e. a specific instance of "this sentence cannot be proven"), then we can choose to extend that system either by adding that sentence as an axiom, or adding its negation as an axiom. Both would result in a consistent system. Then we can define a Goedel sentence in that system, and make a more powerful system by making that or its negation an axiom. And so on.

This process of choosing new axioms does make our formal system strictly more powerful (it can prove/refute sentences which were unprovable in the weaker systems, and their consequences); but it's not very useful or interesting, since we're just arbitrarily choosing whether those extra sentences should be provable or not.

This is related to the fact that any particular formal system can only compute a few digits of Chaitin's constant (since we can only ever solve the halting problem for a subset of all possible programs). We can consistently extend our chosen system by specifying subsequent digits as axioms, but that doesn't really tell us anything; we're just asserting it, for no particular reason one way or another.
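A toy sketch of that point (Python; the three-opcode machine below is a made-up stand-in for a real universal prefix-free machine, and the 2-bits-per-opcode weighting ignores prefix-freeness): running every short program for a bounded number of steps only ever yields a lower bound on Omega, and raising the step budget can only nudge that bound upward, never pin down the remaining digits.

```python
from itertools import product

# Toy machine: 0 = HALT, 1 = JUMP-TO-START, 2 = NOP.
# Falling off the end also counts as halting.
def halts_within(prog, max_steps):
    pc = 0
    for _ in range(max_steps):
        if pc >= len(prog):
            return True
        op = prog[pc]
        if op == 0:
            return True
        pc = 0 if op == 1 else pc + 1
    return False  # still running: undecided within this step budget

def omega_lower_bound(max_len, max_steps):
    """Sum the weights of every short program observed to halt.

    Each opcode is charged 2 bits (3 opcodes, rounded up). Programs
    that loop (anything reaching opcode 1) never contribute, so the
    sum is a lower bound that is monotone in the step budget.
    """
    total = 0.0
    for n in range(1, max_len + 1):
        for prog in product((0, 1, 2), repeat=n):
            if halts_within(prog, max_steps):
                total += 2.0 ** (-2 * n)
    return total
```

With a bigger `max_steps` the bound can only grow; no finite budget ever tells you the bound has converged, which is the formal-system situation in miniature.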


But doesn’t that still call into question the notion of “definability” as a property of a number? If I invent an axiomatic system called “ZFC + Chaitin's constant is c” does that make Chaitin’s constant any more definable than it was before?


Hmm, that's a good question. My first thought is that, if a number has some representation, then we can always define a Turing machine which outputs that number; by doing the equivalent of a hard-coded 'print "..."' statement. However, such machines may require an infinite transition table; so the computable numbers are those which are finitely representable; which limits us to those whose digits follow some sort of algorithm.
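For example, “digits follow some sort of algorithm” can be as simple as integer arithmetic; here is a sketch of sqrt(2) as a digit function (Python, using the standard library’s `math.isqrt`):

```python
from math import isqrt

def sqrt2_digit(n):
    """Return the nth decimal digit of sqrt(2); n = 0 gives the leading 1.

    Scaling by 10^(2n) before taking the integer square root pushes the
    digit we want into the ones place. A finite program that emits any
    digit on demand is exactly what makes sqrt(2) computable.
    """
    return isqrt(2 * 10 ** (2 * n)) % 10
```

This is the finite-representation case: one short rule, arbitrarily many digits, no infinite transition table needed.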

Relating that to "definable" numbers, it seems I've probably missed some subtlety with Chaitin's constant: since (a) defining extra digits via axioms seems to be adding (algorithmic) information to our system, so defining all of its digits seems to give infinite information; yet (b) it's trivial to do this in a finite way, like "all subsequent digits are zero". Maybe I'll think about this more, but it seems like my intuition is a bit off (this area of mathematics is notoriously nuanced).


Have you heard of https://en.wikipedia.org/wiki/Berry_paradox ?

It seems to run into the same sort of thing that led to the ZF axioms, namely that you can have a language that is a bit too "loose" and therefore leads to contradictions. I think Kurt Gödel showed that any non-trivial language is going to have something like that.


Thanks, I have now!


Can you coherently refer to the "first" real outside some set, just in general?

Like, what is the first positive real?

If you have to involve infinitesimals or something, I think it's perfectly fine for you to be able to take the definable reals and then extend them in that way. It doesn't invalidate the idea of a "definable" real.


See the “well ordering theorem”.


Which is equivalent to the axiom of choice. If one chooses not to accept that axiom, then the well-ordering theorem is not a theorem.

One can even (consistently) append the negation of the axiom of choice to ZF, in which case this theorem is false. Merely negating the axiom of choice is not a very interesting addition, but there are some interesting alternative axioms, such as the axiom of determinacy, which have the negation of the axiom of choice as a consequence.

People who are fixated on definable/computable reals generally have constructivist sympathies, and many (but not all) constructivist mathematicians reject the axiom of choice. So relying on it in arguments in this context is questionable.

https://mathoverflow.net/questions/119561/mathematics-with-t...


Okay, I hadn't really picked up on the implications of well-ordering there.

Though the (im)possibility of well-ordering all reals doesn't really matter here. Computable reals can be sorted by their programs, and definable reals can be sorted by their definitions. So if you have a "concrete definition of definable reals", then the system you use gives you ways to well-order them. And if you want to "refer to the first real that’s undefinable in that system", you can make a new system2 that encapsulates the old system1.
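A sketch of such a well-order on definitions (Python; the strings below are placeholders for sentences in whatever formal language system1 uses): sort definitions shortest-first with ties broken alphabetically, and each definable real inherits the position of its shortest definition.

```python
def shortlex_key(s):
    # Shorter definitions come first; ties are broken lexicographically.
    # Over a finite alphabet this is a genuine well-order on strings.
    return (len(s), s)

definitions = ["sqrt(2)", "e", "Chaitin's Omega", "pi", "0"]
well_ordered = sorted(definitions, key=shortlex_key)
```

Plain lexicographic order would not do (it has infinite descending chains like "b", "ab", "aab", ...); the length-first trick is what makes it a well-order.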

That might prove we can't have a universal system for defining things, which would be disappointing but workable.

Or maybe it's just that "the first real that's undefinable in system1" is not concrete, so we can reject it as a description of a number, and then the consequences are much smaller.


Can you? There’s no “first real larger than 2”, for example, so if I define the definable reals as {0, 1, 2}, what is that first real?

In general, you also can’t use Cantor’s diagonal argument on the list of definable reals because there are numbers in there that aren’t computable.


See the “well ordering theorem”.


I don't know a ton about this (just heard about it in something I read somewhere?), but I prefer the computables--I feel like the idea of a computable number is the formalized version of the idea that (non-integer) numbers in nature do not have definite values, but instead have processes by which you can overturn more digits with effort, and thus they can only be said to be "pretty equal" but never totally equal.

If a number can be defined in a language but you can't tell me if it's different from another number that I have in finite time, then it is not very interesting to me.


> I prefer the computables--I feel like the idea of a computable number is the formalized version of the idea that (non-integer) numbers in nature do not have definite values, but instead have processes by which you can overturn more digits with effort, and thus they can only be said to be "pretty equal" but never totally equal.

Consider an irrational number, where the fastest possible algorithm (given a standard Turing machine) to compute its nth digit requires (10^(10^100))*(n+1) operations. Now, by the standard definition, such a number is computable – but in practice, we will never be able to compute even its first digit. In practice, its first digit is just as unknowable to us as the first uncomputable digit of (any instantiation of) Chaitin's constant is.

"Computable" by the standard definition is unphysical, because it includes computations that require unphysically large amounts of time and space. Now, many uncomputable (by the standard definition) numbers can be computed by a hypercomputer (such as a Turing machine with an oracle, or a computer which can execute supertasks) – hypercomputers are unphysical, but "standard" computers (in the mathematical sense) are unphysical too. Why denounce one form of unphysicality but not another? It would seem to be more consistent to either embrace the unphysicality of mathematics and imbibe the psychedelic broth of the Cantorian paradise, or else reject unphysicality entirely and embrace ultrafinitism. The "infinity is verboten but unphysically large finitudes are A-okay" approach of (non-ultrafinitist) constructivism/intuitionism seems rather arbitrary.

> If a number can be defined in a language but you can't tell me if it's different from another number that I have in finite time, then it is not very interesting to me.

If a number can be defined in a language but I can't tell you whether it's different from another number that you have in a physical amount of time (let's say less than the age of the universe), is it interesting to you?


> Consider an irrational number, where the fastest possible algorithm (given a standard Turing machine) to compute its nth digit requires (10^(10^100))*(n+1) operations

There can be no such number. You can always make a Turing machine output initial digits faster by adding some more states to do so.


Yes, if you knew ahead of time that the first digit of some such number happened to be seven, you could just hardcode your program to start with "IF n = 0 THEN RETURN 7". But that's cheating...

Let's denote the nth prime by P(n). Now, consider the real number Q, where 0.0 < Q < 1.0, and the nth decimal digit (after the decimal point) of Q, Q(n), is given by Q(n) = P(10^(10^100) + n) mod 10. Q, as defined, is computable, but (unless we discover some mathematical "shortcut" for computing the last digit of a prime number), I doubt we are ever going to know what even its first digit is (well, we know it must be 1, 3, 7 or 9–but which of those four?). And even if some such "shortcut" were discovered, I'm sure someone could cook up another such number for which there is no known "shortcut".
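A scaled-down sketch of Q (Python; the offset in the actual definition is 10^(10^100), which is hopeless to run, so this substitutes a hypothetical tiny offset of 100 just to show the shape of the construction):

```python
def nth_prime(n):
    """1-indexed nth prime by trial division; fine at toy scale."""
    found, candidate = 0, 1
    while found < n:
        candidate += 1
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            found += 1
    return candidate

def q_digit(n, offset=100):
    # The real Q uses offset = 10**(10**100). With that offset this
    # function is still a perfectly good finite program -- Q is
    # computable by the standard definition -- it just never finishes
    # within any physical amount of time.
    return nth_prime(offset + n) % 10
```

The point stands either way: computability only demands that *some* finite program exists, not that it terminates before the universe does.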

How about this: let N(n) be any FNP-complete [FNP is to function problems what NP is to decision problems] computable endofunction of the natural numbers (there are many to choose from, pick any), then define:

Q(n) = P(N(10^(10^100) + n)) mod 10

You actually think we'll ever know the first digit of the real number so defined?


Well, I am glad to learn an even better notion for the concept I'm talking about! I'm just saying, I think "definable" is too loose, because it presupposes that I can even talk about real numbers in a definite way. "Computable" feels closer because it is like an idealized version of the notion, but I agree, it's too loose also.

The reason I've latched onto it is that I read about the idea that in constructive mathematics all functions are continuous, basically because you can't tell that they're not continuous (because to tell that would imply you could inspect infinitely many digits of a function's inputs and tell that they lead to discontinuities in the outputs). I quite like that idea because it rhymes with the intuition that nature doesn't care about later digits of numbers either, and that properties of the real numbers like being uncountable, dense, etc. cannot show up in any physical system and therefore aren't "real".

But I'm definitely looking for a better way of understanding this, so curious if you can suggest anything.


> I read about the idea that in constructive mathematics all functions are continuous

From what I understand, it is more that most systems of intuitionistic/constructive logic can't prove the existence of a discontinuous real function – but nor can they disprove it. They have models both in which such functions exist, and also models in which they don't.

https://math.stackexchange.com/questions/176279/all-real-fun...


> Since there are only a countable number of formulas, there are only countably many definable reals

It always seemed like it should matter that the constructible formula strings include strings that define multiple numbers (e.g. "three; also pi squared"), which at least seems like it should give aleph one rather than aleph nought, but smarter people than me assure me it doesn't work that way for (reasons)



that redundancy will only decrease the number of definables, no?

and countable infinity is already the smallest one, so you can't get smaller



