Morse didn't conceptually extend encoding to self-referential symbolic systems. Morse's insight was pure communication of symbols devoid of meaning.
Important but nowhere near the same.
Today, general symbolic encoding is viewed as trivial. Every symbol we have is pervasively encoded as bits, so of course entire expressions are. So Morse's code might seem comparable.
But what Gödel invented went well beyond Morse. We are just jaded with regard to his insight now.
Of course you can encode self-references in Morse code; how could Morse prevent that? Just use the same Lisp syntax as in the article and then encode it using Morse code instead of Gödel numbering.
The purpose of Gödel numbering is to represent an arbitrary-length string of symbols as a single integer, which allows you to manipulate it using Peano arithmetic.
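A minimal sketch of the idea (illustrative only, not Gödel's exact scheme): encode a string of symbol codes as a product of prime powers, 2^c1 * 3^c2 * 5^c3 * ..., so the whole string becomes one integer, and recover it by counting how often each prime divides the number.

```typescript
// Prime bases for each position in the symbol string.
const PRIMES = [2n, 3n, 5n, 7n, 11n, 13n];

// Encode a sequence of symbol codes as a single integer:
// 2^c1 * 3^c2 * 5^c3 * ...
function godelNumber(codes: number[]): bigint {
  return codes.reduce((acc, c, i) => acc * PRIMES[i] ** BigInt(c), 1n);
}

// Decode by counting how many times the i-th prime divides the number.
function decode(n: bigint, length: number): number[] {
  const out: number[] = [];
  for (let i = 0; i < length; i++) {
    let e = 0;
    while (n % PRIMES[i] === 0n) {
      n /= PRIMES[i];
      e++;
    }
    out.push(e);
  }
  return out;
}

// e.g. the symbol string [3, 1, 2] becomes 2^3 * 3^1 * 5^2 = 600
```

By unique prime factorization, every finite symbol string maps to a distinct integer, which is what lets arithmetic statements talk about formulas.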
But it is not like Gödel invented binary, as you seem to suggest. Baudot code (a 5-bit character encoding) was in use in the 1870s.
In any case, Gödel numbering is the least interesting part of the theorem. The groundbreaking idea is creating statements about theorems.
> could you encode in pure logic how a dog behaves
Assuming we knew enough about how a dog behaves (or less ambitiously, a more primitive organism) I would assume this could be described in a formal language. But why would Principia be needed for this? Math has been used to model natural phenomena for a long time before Principia.
>Assuming we knew enough about how a dog behaves (or less ambitiously, a more primitive organism) I would assume this could be described in a formal language. But why would Principia be needed for this?
It explains this directly before that phrase:
"Their language was dense and the work laborious, but they kept on proving a whole bunch of different truths in mathematics, and so far as anyone could tell at the time, there were no contradictions. It was imagined that at least in theory you could take this foundation and eventually expand it past mathematics: could you encode in pure logic how a dog behaves, or how humans think?"
>Math has been used to model natural phenomena for a long time before Principia.
Which means little in this context. The question posed wasn't if you can use some math to describe some natural phenomenon.
The question posed was whether one could model the whole thing (e.g. how a dog behaves) in a formal language - not just take some isolated equations and apply them to this or that aspect of a phenomenon (especially if it's a mere approximation). That much they already knew how to do, e.g. with the equations for planetary motion.
So would it be just as correct to use an electrical circuit as an example, i.e. before Principia it was not believed possible to model an electrical circuit in a formal language?
I still don't think it's possible to model an electrical circuit in a formal language, except if you mean the very crude high level behavior. Getting to electron flows and even quantum effects though?
> I would assume this could be described in a formal language
The assumption is the first problem, no? If the formal language is complete, it must be inconsistent. If it is consistent, it must be incomplete. If the language is incomplete or inconsistent, you may be unable to encode dog behavior in it.
The article uses “describing the behavior of a dog” as something people began to think was possible because of Principia. This is what I don’t get. Was this thought impossible before Principia? On what grounds?
What about describing an electrical circuit formally? Surely this was thought possible before Principia was published.
The assumption made by many in the early 20th century, spurred on by the recent successes of unification and formalization, was essentially that we could formally describe the entire universe. Gödel’s proof shows that if you attempt to formally describe something, the description is either inconsistent or incomplete. That doesn’t mean you cannot describe the behavior of a dog formally, but it does mean the same formal system which encodes the behavior will be either inconsistent or incomplete. It might only be inconsistent or incomplete when applied outside of defining the behavior of a dog, though. That’s why the little preamble about unification exists in this post, but it’s not very well tied into the rest of the post.
>This is what I don’t get. Was this thought impossible before Principia? On what grounds?
Even the full formalization of mathematics wasn't considered certain (*) before Hilbert/Principia and those 20th-century attempts. Much less formalization being applied to everything in the physical world, or even animal behavior.
* in retrospect rightly so, because it wasn't, at least not without inconsistency/incompleteness.
> Except the proportion of paedophile priests is about the same as the proportion of paedophiles in the general population.
I doubt you have any reliable statistics about this, given how many victims keep silent out of fear.
But in any case, the moral failure of the church was not the existence of individual abusers (who indeed can exist anywhere in society), but how, on an institutional level, known abusers were protected by the church. Everyone who was part of the cover-up (which went all the way to the top) is complicit.
Some of the classic 80s themes, like Space and Castle, primarily used regular bricks of reasonable sizes in a very limited palette of colours, with a few special parts unique to the theme. They were much more suited to taking apart and building your own creations.
These days, there are just too many specialised and small parts, and too many colours. Even if you buy a big grey Star Wars set, you'll find that the internal structure is often brightly coloured to make the instructions clearer - but this isn't ideal if you want to take it apart and build something else.
If you like City, Lego has you covered, even now (cue the old jokes about Lego City having 500 police stations, 400 fire stations, 20 gas stations, and no shops and no houses).
The other themes of old have been replaced by movie tie-ins, and it's hard to build a pirate world out of Pirates of the Caribbean sets.
> I don't trust humans in doing the right thing and being judicious with this.
Language-level safety only protects against trivial mistakes like dereferencing a null pointer. No language can protect against logical errors. If you have untrusted people committing unvetted code, you will have much worse problems.
You misunderstand the “billion dollar mistake”. The mistake is not the use of nulls per se; the mistake is type systems which do not make them explicit.
The mistake was not null references per se. The mistake was having all references be implicitly nullable.
He states around minute 25 that the solution to the problem is to explicitly represent null in the type system, so nullable pointers are explicitly declared as such. But it can be complex to ensure that non-nullable references are always initialized to a non-null value, which is why he chose the easy solution of just letting every reference be nullable.
I think even having references that cannot be null is only part of it. Imagine that your language supports two forms of references, one nullable, one not. Let's just borrow C++ syntax here:
&ref_that_cannot_be_null
*ref_that_can_be_null
The latter is still a bad idea, even if it isn't the only reference form, and even if it isn't the default, if it lets you do this:
ref_that_can_be_null->thing()
Where only things that are, e.g., type T have a `thing` attribute. Nulls are "obviously" not T, but a good number of languages' type systems which permit nullable references, or some form of them, permit treating what is in actuality T|null as if it were just T, usually leading to some form of runtime failure if null is actually used, ranging from UB (in C, C++) to panics/exceptions (Go, Java, C#, TS).
It's an error that can be caught by the type system (any number of other languages demonstrate that), and null pointer derefs are one of those bugs that just plague the languages that have it.
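The failure mode described above can be sketched in TypeScript (strict mode): the non-null assertion operator `!` is the escape hatch that treats `T | null` as plain `T`, the compiler accepts it, and you get the runtime failure anyway. The `T`/`thing` names follow the example above.

```typescript
type T = { thing: () => string };

function call(x: T | null): string {
  // The `!` tells the type checker "trust me, this is not null".
  // It compiles, but if x really is null, this throws a TypeError
  // at runtime - exactly the class of bug being discussed.
  return x!.thing();
}
```

The assertion erases the null from the type without checking anything at runtime, which is why it reintroduces the classic null-deref bug.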
TypeScript actually supports nulls through type unions, exactly as Hoare suggests. It will not let you dereference a possibly-null value without a check.
C# also supports null-safety, although less elegantly and as opt-in. If enabled, it won’t let you dereference a possibly-null reference.
> If enabled, it won’t let you dereference a possibly-null reference.
It will moan, but it doesn't stop you from doing it. Our C# software is littered with intentional "Eh, take my word for it, this isn't null" and, even more annoying, "Eh, it's null but I swear it doesn't matter" code that the compiler moans about but will compile.
The C# ecosystem pre-dates nullable reference types, and so does much of our codebase, and the result is that you can't reap all the benefits without disproportionate effort. Entity Framework, the .NET ORM, is an example.
Sounds rather far-fetched. “Viking” only started being used as an ethnonym in the 19th century. In the sagas it is only used in the meaning of raider, pirate or outlaw. The etymology from “vik” (bay) is much more plausible.
No, vikingr is an Old Norse word meaning pirate, raider, or outlaw. Not everyone in the Nordics was a viking, just like not every American was a cowboy.