I’ve always felt the numbers people enter into the Drake equation are overly optimistic.
I think the odds of a planet developing life by random chance in the first place are astronomically small, and then the planets where that life develops into intelligent life - life that happens to have the necessary appendages to build things and lives in an environment amenable to creating tech - are almost certainly an astronomically small portion of life-bearing planets.
I think people assume evolution is an inherent progression towards intelligence, but that’s not the case. It’s just a progression towards greater fitness, and intelligence is only one of many possible advantages. In the timeframe of life on this planet, intelligence has been the dominant apex trait for an infinitesimally small amount of time. I think it’s completely imaginable that sapiens could easily have been wiped out by slightly stronger or better-armored yet less intelligent predators. We had a very lucky roll.
I could absolutely see intelligent life being less-than-one-per-galaxy levels of rare.
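To make explicit which knobs are being turned here, a toy sketch of the usual Drake terms in Python (the parameter values are purely illustrative, not anyone's real estimate):

    # Toy Drake-equation sketch. N = R* * fp * ne * fl * fi * fc * L
    def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime_years):
        # Rough expected number of detectable civilizations in a galaxy.
        return r_star * f_p * n_e * f_l * f_i * f_c * lifetime_years

    # Optimistic-flavored values of the kind often quoted.
    print(drake(1, 0.5, 2, 1, 1, 0.1, 10_000))       # ~1000 civilizations

    # Pessimistic values in the spirit of the argument above:
    # rare abiogenesis, rare tool-capable intelligence, short-lived tech phase.
    print(drake(1, 0.5, 2, 1e-6, 1e-4, 0.1, 1_000))  # ~1e-8, far below one per galaxy

The point is just that the abiogenesis and tool-capable-intelligence terms alone can swing the answer by many orders of magnitude.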
For the sake of argument, let’s say crows, marine mammals, canines, and octopuses are all, in the grand scheme, of roughly similar intelligence to us. The octopus is likely the only one of the bunch with the necessary body type to potentially build things of any complexity, and then its environment is so disadvantageous to the endeavor that it’s almost unimaginable they could reach spacefaring.
Try to imagine what a setback it would be to try to discover smelting or electricity underwater. It’s not an impossibility, but I suspect it’s inherently a small fraction of underwater societies that discover it at all. Think of how many large, organized societies rose and fell in human history, with all our advantages, before one finally happened to stumble upon the power of steam - and even then actually put it to use at scale!
I think it’s a fair question to ask: what percentage of our technological development comes down to our bodies being, at their core, a very poor fit for our environment, such that death by exposure is a very real threat? We needed to invent clothing just to survive outside a temperate belt. An intelligent creature that naturally lives more happily in its environment may simply never have the need to invent in the first place.
> I think the odds of a planet developing life by random chance in the first place are astronomically small, and then the planets where that life develops into intelligent life - life that happens to have the necessary appendages to build things and lives in an environment amenable to creating tech - are almost certainly an astronomically small portion of life-bearing planets.
I take your point, but even if it is true that life is astronomically rare, we are literally talking about astronomy. Life doesn't have to be common for there to be countless worlds with life in our galaxy. In terms of intelligent life, not only must we reckon with the billions upon billions of planets in this galaxy that could host it, but there are also the vast timescales in which it could unfold.
Based on the current trajectory of human civilization, I think the Great Filter [1] theory is increasingly convincing, although that's unfortunate (to put it lightly). If we do manage to avoid destroying ourselves in a sudden cataclysm (such as via nuclear war), we will have to deal with a planet with a destabilized atmosphere, a massively damaged ecosystem, poisoned water and a host of other problems so serious that the word "problem" doesn't seem sufficient.
The timescales required for other advanced technological civilizations to overlap with our own require that civilizations going through these periods are able to stop actively destroying their home planet, and then survive the aftermath for a long time afterwards. Whether we are able to do this is very much an open question. We are just a couple of centuries down this road and, on balance, I don't think it's going that well.
Based on our own experience, we know of at least three Filters and can strongly guess as to another.
(1) That life develops on a world whose mass allows achieving orbital spaceflight using relatively low-technology means (i.e. at least partially chemical)
(2) That advanced life develops on a world with massive energy reserves easily accessible with low technology (e.g. coal, oil)
(3) That no environmental cataclysm occurs during the advanced life/industrialization gestation time period (e.g. climate change, asteroid)
To this, we can pretty strongly add another:
(X) That, during the brief window between energy surplus (and advanced technology) and environmental collapse (from energy exploitation), interplanetary colonization is launched, supported, and made self-sufficient
There's no law that says the window for something remains open forever, when you're talking about planet-scale economic requirements.
Do we have any evidence that any of those are filters? For (1), there was practically no gap between the development of powered flight (e.g. airplanes) and plausible fission-powered rockets (abandoned for political reasons). Further, space flight, while useful, has not been critical for our technological development. I see no reason a high-tech society couldn't develop in a deep gravity well and then escape it once its technology reaches a stage where doing so is possible.
For (2), energy reserves definitely accelerated things, but I'm not convinced they are necessary. The fossil fuel age was only a few hundred years. Even if it accelerated technology development by an order of magnitude, we would still reach what we would now call the green era in the evolutionary blink of an eye. Even in our own development, wind and water power predate fossil fuels.
In fact, the lack of fossil fuels may actually help a civilization, in that it would completely avoid the carbon-based crisis we are facing now. That could easily outweigh the slowdown that non-carbon power sources would have caused.
Another major filter supposedly comes much earlier: the development of complex single-celled eukaryotic life. Some people think this occurred due to a single subsumption (endosymbiosis) event on Earth between a bacterium and an archaeon.
There’s probably also a distance / lifespan filter:
(4) That interplanetary colonization is possible (using relatively low-tech means) given the lifespan of the species and the distance to nearby habitable planets.
For example, life could develop in a solar system with only 1 habitable planet and the next habitable solar system could be dozens or hundreds of light years away.
Now you need to invent a fully self-sustaining long-haul spacecraft, plus have the technology to pause death (or have the will to confine hundreds or thousands of generations to life on a spacecraft, plus have the ability to effectively pass down enough survival-skill information through generations that will never use it, but must practice it simply in order to relay it).
>For example, life could develop in a solar system with only 1 habitable planet and the next habitable solar system could be dozens or hundreds of light years away.
Basing hypotheses on a single data point, let alone one whose outcome is not yet determined, almost invariably leads to wildly inaccurate conclusions in practice. As such, until we have more data, any questions about the Great Filter and the causes thereof will remain unanswerable.
Personally, my bet is on one of two things: either we are early as intelligent life goes, or stellar expansionism on the massive scale that would have let us encounter intelligent life with basically zero search effort is simply not something most civilizations feel compelled to do.
Also assuming that any advanced civilization that exists would have to follow our development path is laughably anthropocentric and reflects a failure of imagination on the part of the people making said claims.
> Also assuming that any advanced civilization that exists would have to follow our development path is laughably anthropocentric and reflects a failure of imagination on the part of the people making said claims.
That's a great point, and it's not just anthropocentric. It also ignores the paths that other cultures were on, that were subsumed or annihilated by Western-style industrial capitalism. What would human civilization look like (and what state would the planet be in) if, for instance, Western nations had, first, respected indigenous sovereignty in the Americas and, second, sought to learn from First Nations instead of seeking to destroy them?
In other words, we don't have to imagine alien civilizations to explore other developmental paths. There are plenty of examples right here at home.
I'm guessing that if Europeans had decided to adopt the Native American lifestyles and cultures, we would still be pre-industrial and living as a bunch of separate tribes that sometimes have wars between them.
The way the European settlers treated the natives was indeed rather shabby, but the existing culture was not one that could ever develop into modern nation-states and a global industrial civilization without conflict. And without industrial capitalism, we wouldn't have computers and be exchanging messages across continents here on HN.
So it's entirely possible there are alien civilizations out there that are happy to remain in a primitive, tribal, pre-industrial state, but we're not going to ever detect or meet those civilizations until we have starships capable of transporting us there.
I’m not offended by your viewpoint; I think many people share it (although “rather shabby” is a gross understatement of what took place). But take a moment to reflect on a couple of things:
1) You used the word “primitive” to describe these cultures. However, these cultures had developed advanced techniques and ways of being that ensured their survival for millennia, and there’s no reason to think this could not have continued for many more millennia. Is a society that is capable of surviving in perpetuity “primitive”? How does that contrast with our current society, which can fairly be called self-destructive? There is a serious risk that we don’t survive the next few centuries - perhaps even decades! Would you call that “advanced”?
2) I’m not discounting the value of science, nor am I romanticizing the hardships that many indigenous communities endured in the absence of science, advanced technology, modern medicine and so on. But First Nations peoples were not stupid when they encountered western technology. They adopted what was useful or superior to what they had.
There could have been a synthesis of ancient indigenous practices and modern science. And indeed many indigenous practices were adopted by Westerners. But fundamental truths - like the fact that destroying your own home is not a recipe for long-term success - were ignored.
Agreed, but remember we likely see the universe with massive blind spots. Maybe given enough time, an aquatic species would find a way to make tech, manipulate energy, and develop society in ways that require physics we know nothing about. Or maybe using something we know, but could not think of using that way because of our biases.
We are a very young civilization, after all; the quantity of things we don't know is massive.
> Try to imagine what a setback it would be to try to discover smelting or electricity underwater. It’s not an impossibility, but I suspect it’s inherently a small fraction of underwater societies that discover it at all.
Try to imagine the number of things we haven't discovered because our civilization didn't develop underwater! Would be interesting to find out what the sentient alien dolphins and octopuses think of us and our disadvantaged air-based tech :)
if you lived in water, you might have discovered clever ways of breaking it down into H2 and O2 in ways we landlubbers haven't. seems like i've read somewhere where those elements are useful in escaping the gravity well.
No doubt we had a very lucky roll, and I agree that intelligent life must be extraordinarily rare. But the galaxy/universe and its history is incomprehensibly enormous and dynamic.
You're right that evolution isn't a one-way street, but intelligent life is going to emerge as a result of a sufficiently complex, chaotic, and resource-limited environment - and nature is great at that. The necessary ingredients for life generally are time, elemental diversity to manipulate chemistry (the makeup of our planet doesn't seem extraordinary), and competition (once a self-reproducing automaton exists at any scale, evolution takes over and life adapts - see von Neumann's "Theory of Self-Reproducing Automata" to understand this further).
Intelligent life isn't fundamentally different, because intelligence isn't a bar that is attained, but a gradual and emergent phenomenon once sufficient numbers of "logic circuits" are hard-coded in organismal memory (the brain). For it to emerge, an organism must be continuously challenged via diverse mechanisms - for example, recurrent semi-apocalyptic disruption events which clear the way for new "ideas" (e.g. how the extinction of the dinosaurs made room for mammals).
Your last sentence hits the nail on the head but you reach the wrong conclusion. It is the need to invent that yields intelligence, not the other way around.
I've always found the numbers people propose for the probability of life arising to be absurdly self assured. We do not know the probability of life arising. We do not know the probability of developing intelligent life. We don't even know if life originated here.
If you ask me for my gut feel I'd say that life arises with near certainty in a range of suitable conditions. But that is beside the point - don't get annoyed with others for making assumptions and then in the next breath make equally strident assumptions.
I think the left side of the Drake equation is pretty challenging too. If we assume that faster-than-light travel is impossible (which is likely) and all galactic civilizations are subject to the tyranny of the rocket equation, then it’s quite likely that there are no alien spaceships whizzing around, even if our galaxy is full of intelligent life forms.
And then our best way of detecting advanced alien life forms is through direct observations of remote star systems, a field that is in its infancy at best.
> I think the odds of a planet developing life by random chance in the first place are astronomically small, and then the planets where that life develops into intelligent life - life that happens to have the necessary appendages to build things and lives in an environment amenable to creating tech - are almost certainly an astronomically small portion of life-bearing planets.
Let's start from what we know: Homo sapiens has been around for 300 000 years, but technology only kicked off less than 300 years ago. So we could say that of 3 billion years of terrestrial life, 0.01% of it has sported a species capable of developing technology, and for only 0.1% of that short span has it actually been developing it.
So from the Earth example, so far we can only posit 1 chance in 10 million of developing technology, nothing more. That's not much, and it's markedly lower than Drake's original optimistic numbers. Also, we can't suppose technology will survive for a very long time; so we also need a very fine time-alignment to receive a message (or get someone to receive ours). If most high-tech civilisations only survive barely a few centuries, then even if there are 100 000 of them across the Galaxy but scattered along 10 billion years, most messages will simply reach long-rusted antennas...
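Spelling that arithmetic out, using only the figures above:

    # Back-of-the-envelope version of the estimate above.
    life_span_years       = 3_000_000_000  # rough age of life on Earth
    sapiens_span_years    = 300_000        # time a tech-capable species has existed
    technology_span_years = 300            # time that species has had industrial tech

    f_tech_capable  = sapiens_span_years / life_span_years        # 1e-4, i.e. 0.01%
    f_actually_tech = technology_span_years / sapiens_span_years  # 1e-3, i.e. 0.1%

    print(f_tech_capable * f_actually_tech)  # ~1e-7, about 1 chance in 10 million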
> Homo sapiens has been around for 300 000 years, but technology only kicked off less than 300 years ago.
I get what you're saying, that _industrial scale_ technology didn't kick off until a few centuries ago. But humans have been pretty technologically advanced for a few hundred thousand years, unless you don't consider stone knapping, cooking, and language as technologies.
But we could be able to see the "stuff" left over from space-faring civilizations, like for example their Dyson Spheres. Or simply their AI robots would be self-replicating and colonizing the entire galaxy.
> Try to imagine what a setback it would be to try to discover smelting or electricity underwater. It’s not an impossibility, but I suspect it’s inherently a small fraction of underwater societies that discover it at all.
It’s interesting to think about. If you live entirely underwater you have no reason to harness fire, or develop even simple technologies like the wheel. If you don’t harness fire, will you even be able to develop basic metallurgy (bronze, iron, steel)? If you don’t work with fire or metal you don’t develop steam power. Without steam you don’t have an industrial revolution; without an industrial revolution you lack the prerequisites to develop internal combustion or heavier-than-air flight; and without internal combustion you probably don’t develop chemical rockets.
I suppose it's possible that an aquatic civilization could take a totally different path to space than we did, but they'd definitely be playing the space race on hard mode.
I would find the rate of intelligent life ≈ 1 per galaxy an incredible coincidence.
Considering the absurdly huge number of planets and spans of time, and the absurdly low chance of life happening, I think it's almost impossible for the rate to be close to 1.
>I’ve always felt the numbers people enter into the Drake equation are overly optimistic.
This may be the right way to think about it, but even so you would only change the order of magnitude of the final answer, which won't be 0. Given the timescales involved, one would still expect a species to have either left behind structures, modified their galactic neighborhood to a degree that is detectable, or significantly colonized large parts of the universe/galaxy by now.
There are plenty of plausible choices for terms of the Drake equation that give 0. Remember that the Earth itself is relatively old as planets go - 4.5 billion, compared to the Universe's 13.8 billion; and the current "era" of the universe only began when it was ~9.8 billion years old, so about 4 billion years ago.
So, if we take the appearance and evolution of life on Earth as the benchmark for how long it takes for life to appear and evolve, and if we assume that life couldn't arise (or was much less likely to) before the current Dark Energy-dominated era, then it's quite possible for us to be among the first radio-wave-emitting beings in the whole universe, let alone anywhere in our general area.
Of course, any such assumption is entirely unfounded: we just have no idea how likely it is (and how long it is likely to take) for life to arise and become radio-wave emitters, and even less about how this may have been different in earlier times in the universe. There are perfectly plausible choices for the Drake equation parameters that reach the exact opposite conclusion and give hundreds or thousands of expected civilizations in our neighborhood.
Plus we don't know where intelligence leads. Maybe it's actually a catastrophe for the planet? Too early to say, but it's also possible such catastrophes happen quickly on a geological timescale.
Depends on the catastrophe. From what we can tell, the dinosaurs weren't reading their equivalent of Shakespeare or sitting around discussing the social caste system of t-rex vs brontosaurus when the world ended for them.
Boy this statement has a lot to unpack. If you think life (and then intelligence) came about by chance... And that all this knowledge was so that we can live "more happily" in our environment? And this was fortunate? I would come to precisely the opposite conclusion.
All I can think of is Rocket Raccoon: "I didn't ask to get made!"
"The greater my wisdom, the greater my grief. To increase knowledge only increases sorrow."
> stars suitable for settlement should provide “environments nearly identical to that of the home planet.”
Seems pretty clear that we're on a path to machine intelligence, and that interstellar voyages will be undertaken by those intelligences, not meatbags. That humans like a class G sun is irrelevant to the prospering of those intelligences.
I’m not so sure about that. It’s a long way from a machine intelligence in your computer to one that is even as self sufficient as a cockroach with regard to not getting stuck on stuff in the real world, constantly having energy, and being able to repair itself in the physical world. I think people hand-wave these things away and they are very huge hurdles. Even something as simple as a car can’t run a decade autonomously. We expect it to explore the stars, make robots that can remove circuit boards, troubleshoot, etc.?
Nothing around me keeps working without human interaction for even very small timescales.
> It’s a long way from a machine intelligence in your computer to one that is even as self sufficient as a cockroach
Ah, but the trick is, as soon as you can get a machine intelligence smart enough to work on programming tasks, you can get it to help you design a better one. And so on, etc.
The first AGI is (some) amount of time away. A more sophisticated AGI is coming not long after that, and exponentially faster and so on.
The problem is the hard limits of physics. We have an upper bound for the number of interacting elements required for intelligence in the form of number of synapses in the human brain. That number is a thousand trillion. We don't know what the lower bound is. Neurons are ferociously efficient computing devices, though. A dragonfly can take input from thousands of ommatidia and use it to track prey in three dimensions using only sixteen neurons. How many transistors do you need to do that? Let's be generous and say only ten percent of the human brain is needed for sapience, or 8.6 billion neurons. 8.6 billion / 16 roughly equals five hundred million. Multiply your earlier transistor number by five hundred million. There's your lower bound for a sapient computer.
There is no law of the universe that says it has to be possible to generate human-level sapience with less than one kilogram of processing mass. In all likelihood we are never going to create a server with a whole city of intelligences in it, ever.
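A rough sketch of that scaling argument with the numbers above; the transistor count for the dragonfly circuit is a made-up placeholder, since nobody actually knows it:

    # Crude lower-bound sketch using the figures above.
    neurons_human_brain        = 86e9
    neurons_for_sapience       = 0.1 * neurons_human_brain  # the generous 10% assumption
    dragonfly_tracking_neurons = 16

    scaling_factor = neurons_for_sapience / dragonfly_tracking_neurons
    print(scaling_factor)  # ~5.4e8, i.e. roughly five hundred million

    # Whatever number of transistors it takes to match the dragonfly's
    # 16-neuron prey-tracking circuit, multiply it by ~5e8 for the bound.
    transistors_per_dragonfly_circuit = 1e6  # placeholder value, purely illustrative
    print(transistors_per_dragonfly_circuit * scaling_factor)  # ~5.4e14 transistors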
A priori, one would not expect humans to be near the physical limit of what's possible, purely based on the characteristics of the process that created us. Natural selection is:
- random
- blind - gradient descent is prone to getting stuck in local minima, has no foresight, no ability to go back to the drawing board, no "understanding" of what it's doing.
- not even optimizing for intelligence, except instrumentally - to the extent that increasing intelligence interferes with survival, it has to be sacrificed
- subject to hard constraints like the limits on size imposed by childbirth
Animals have been rewarded for solving diverse environmental challenges for a long time. Brains in general are well optimized, just maybe not for the objective function you care about.
> There is no law of the universe that says it has to be possible to generate human-level sapience with less than one kilogram of processing mass. In all likelihood we are never going to create a server with a whole city of intelligences in it, ever.
A computer with five times the mass, five times the volume, and five times the power requirements of the human brain, that can run an uploaded copy of a human brain at merely half the speed of a human brain, would still render Pluto and Mercury trivial to colonise with machine intelligences.
And if it cost $100k (inflation adjusted) to build, and lasted 80 years before you had to melt down the hardware and reforge from scratch, it would still be cheaper than Musk's target for a human to Mars.
Those are the easy bits. We've already got all of those, because that's how we make the machines we already use even today, and we would need them in any scenario other than genetically engineering ourselves to eat the moon, not just the uploaded brains scenario.
We know what all that stuff is and how it works; making a fresh supply chain in a new place isn't handled by some magical mystery cult of the gods… oh, Plutus and Mercury are the gods of wealth, that's a coincidence.
A lot of that stuff works on Earth by using abundant fossil fuels for mining and refining, and by using fossil fuels and biological material as feedstocks. In particular, it’s not obvious how to make volumes of virgin steel from iron ore without coke. (Electric arc remelting is fine for recycling steel to steel.)
Much of the plastic supply chain starts as oil or gas. Wood is a major building component as well.
> Much of the plastic supply chain starts as oil or gas. Wood is a major building component as well.
If we really needed those things (why would we even want to use wood as a structural element on a space habitat?), we can build bioreactors from metal, glass, and water.
Oil and gas do occur non-biologically e.g. Titan, but they can also be made from algae grown in a transparent tube exposed to sunlight and provided with the necessary minerals and CO2, which is trivial to make.
IIRC the two limiting resources, if you needed biology and couldn't do it all with a clanking replicator, are nitrogen and phosphate; everything else is easy to find basically everywhere.
The synapse is an incredibly complicated structure and there is a lot of 'computation' occurring within the boutons. We're not very sure of all the processes that occur at this time. It's not just a 0/1 kinda thing. Also, our current computers aren't even built from hardware analogous to a synapse. The most analogous electrical structure to a synapse is a memristor, something we can't make at scale right now. The synapse is also not the only structure involved in computation; many things affect the firing of a neuron and its modulation.
Great lower bound calculation though. I'd say that's the right ballpark number.
Yeah I know, but the "computational power will keep increasing forever" religion has a strong following on HN who react badly when you point out the true scale of the problem.
There are a lot of problems with your analysis. One is that you assume the brain is the most efficient way to generate intelligence. There doesn't seem to be any reason to make that assumption, and it may be that von Neumann architectures are thousands or millions of times more efficient at that task.
Another is that neurons are incredibly slow. Let's say you can make a machine only half as intelligent as a human but that can think 1 million times faster; this is even a lowball, since a neuron can fire at like 1Hz and modern CPUs operate in the GHz range.
>In all likelihood we are never going to create a server with a whole city of intelligences in it, ever.
I think this is a really bad take. If one assumes even a .1% improvement in "Machine Intelligence" per unit time then it is literally only a matter of time until you reach Human level and then pass it.
> a neuron can fire at like 1Hz and modern CPUs operate in the GHz range.
This comparison doesn't make much sense - computers have a very small number of cores, let's say 10, and brains have 86 billion neurons. 86 billion things operating at 1Hz is also in the GHz range. This is leaving aside the issue that a CPU cycle and a neuron firing are doing completely different sorts of work - comparing them is kind of nonsensical in the first place.
I mean a modern CPU has something like 5 billion transistors and these are switching in the GHz range, so it's many orders of magnitude more than neurons can achieve. 10^9x10^9 >> 86x10^9
Number of cores doesn't seem relevant.
>This is leaving aside the issue that a CPU cycle and a neuron firing are doing completely different sorts of work - comparing them is kind of nonsensical in the first place.
Maybe, but the OP's comment was explicitly using raw transistor count compared to neuron count as a proxy for sapience.
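Spelled out as raw events per second (with the sibling comment's caveat that a transistor switch and a neuron firing are not really comparable units):

    # Raw events-per-second comparison using the numbers thrown around above.
    neurons        = 86e9
    neuron_rate_hz = 1      # the (lowball) 1 Hz figure from upthread

    transistors    = 5e9
    cpu_clock_hz   = 3e9    # a typical modern clock rate

    print(neurons * neuron_rate_hz)    # ~8.6e10 firings per second ("GHz range")
    print(transistors * cpu_clock_hz)  # ~1.5e19 switches per second, ~8 orders of magnitude more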
If one assumes even a .1% improvement in "Machine Intelligence" per unit time then it is literally only a matter of time until you reach Human level and then pass it.
That's like saying "even if there's only a 0.1% improvement in materials strength per unit time eventually we'll have structures strong enough to bounce off an incoming rogue planet." There is an upper bound to computational performance per unit mass and unit energy. We're nowhere near it but it is there, lurking.
The problem is assuming the upper bound is close to or at human-level intelligence. There's no reason to think that, and clearly outlier humans like John von Neumann, who are still in the human range, are far more capable than the average person. If you could only make machines as intelligent as that, it would still be revolutionary.
AI shows signs of general intelligence (e.g. it can generate proper text in generated pictures) after about 20 billion parameters. IMHO, 1 trillion parameters can be enough to simulate human-like behaviour after reading a lot of text, watching a lot of video, and playing a lot of games and robotic simulations. It will be able to understand commands, understand video and audio from cameras, and control equipment and robots, e.g. to repair itself or to reach a goal.
You're only looking at the parameters, when in fact such algorithms are also heavily limited by the amount of training data. It may well be that, even if you were right with your 1 trillion parameters for AGI, we might just not have enough training data to avoid under/over fitting.
Note that I am extremely skeptical of your 1T parameters claim - there is much much more to AGI than natural-looking natural language processing + image recognition.
you jump in the air at high noon. the sun is (some) distance away, and for some time, you're getting closer to it.
if we make an ai that's able to make a better ai than we are, then it does stand to reason that it'll kick off exponential growth for some time, but it'll reach limits just like anything. those limits may fall short of magicing/smarting away the obstacles to colonizing space.
and we'll reach limits too. it might be that the amount of time between now and when we make a sufficient ai is more time than we have.
A rabbit jumps toward the sun and briefly gets closer. But no rabbit has or can jump even 0.001% of the way. There are hard limits for the rabbit which the rabbit is not equipped to address.
A human jumps, fails, learns the underlying physics of what’s holding them back, and consequently build rockets to sit in. There are no hard limits yet, as far as we know, and we’re just bags of mostly water.
Whatever the hard limits on AI, they will be so far beyond us that we won’t be able to visualize or guess at them.
There are indeed hard limits to chemical rocketry, and we mostly hit them after a decade or two of work by people with pencils and slide rules.
Computers might be a better analogy. We started designing computers without the aid of other computers. We made very rapid progress on architecture and semiconductors. We’re now completely dependent on computers to help us design future ones, and despite all our efforts and hundreds of billions of dollars in investments, our rate of progress has slowed to a crawl and we seem to be rapidly approaching the end game. Dennard scaling has been over for 15 years. We can make wires that are smaller but just proportionally worse. We can use more silicon area, but then latency and power consumption get proportionally worse. A super clever hyper-optimizing AI might push a couple of things forward a little bit (or, more likely, wind up at the exact same destination slightly earlier than we otherwise might have), but there’s no compelling reason to think it gets to design computers that face different constraints.
This sounds like a child explaining to other children that Adults can't be that much smarter and probably won't be able to make any progress on putting the square peg into the right hole. The child can't comprehend the things Adults can think about.
We have a pretty good understanding of physics in this area, and one that has stood many significant tests. While it's of course always possible that we're missing something, it's entirely likely right now that there is no new physics to be discovered at the scale of microprocessors. Given that, we have a pretty good idea of how much more could be done in principle to improve processors (if we assumed we could overcome all engineering challenges), and it's not that much more.
>those limits may fall short of magicing/smarting away the obstacles to colonizing space.
We have no idea what that limit is so it seems crazy to make any assumptions about where it is. At minimum it lies above Human Intelligence and even imagining what a civilization could do with millions of beings only as intelligent as say John von Neumann is frightening.
>those limits may fall short of magicing/smarting away the obstacles to colonizing space.
We can basically already colonize our Solar System with current tech. You don't need FTL or anything close to that to eventually colonize the entire Galaxy.
Where I get hung up with this is how the AGI knows what a more sophisticated AGI would look like without being at least as sophisticated? For example with our own intelligence, we aren't getting smarter with each generation. We have less brain matter than humans who lived millions of years ago on average, because there isn't as strong of a selective pressure present these days to see those with a lot of brain matter out-reproduce those who get by in life with not as much of it. A very 'intelligent' AGI might realize the optimal way to seed the planet with itself is actually to have a swarm of very unintelligent AIs that can run on very non-resource-intensive hardware, and to seed them widely - the way an "unintelligent" insect is able to lay millions of eggs and infest an area beyond human control, while the more intelligent wolf is easily hunted down.
> Where I get hung up with this is how the AGI knows what a more sophisticated AGI would look like without being at least as sophisticated
I'm not sure what's difficult about this. Are we not able to test what humans can run the fastest without being able to run just as fast? We can equally test which humans are the best spellers, or the best chess players, or the best problem solvers.
We can test these things that we cannot ourselves do only because we are able to define them and comprehend the definitions. We define a runner as fast when they run a fixed distance in a certain time. Even though we can't run as fast, the rules at play are within what we can understand. OTOH, when you move past these examples into more abstract ideas of arbitrary rules defining arbitrary tasks, a basic AGI might not be able to even comprehend the full set of rules that define the expected tasks of a much more complicated AGI, much like how it wouldn't do you any good to try to explain the rules of baseball to an ant.
No person or AI needs to understand how an AGI actually works; they just need observable behaviours indicative of intelligence. That's what we already do with image recognition, language translation tasks, and more.
Once we move to more general forms of intelligence, like math and other forms of problem solving, then you apply more general tests.
For instance, if we had an Einstein AI v1.0 that could infer General Relativity using X0 bits of information in time T0, and Einstein AI v2.0 managed to infer GR using X1 << X0 bits of information and/or could infer it in time T1 << T0, then v2.0 is clearly superior to v1.0.
I agree that sometimes the parameters for such tests aren't always clear at first, but this has always been the case in science, which is why we iterate and progressively refine our tests as we learn. Any true AGI must be capable of such abductive and inductive reasoning to qualify as a general intelligence.
Yet we invent physics that wasn't comprehended before it was discovered… not sure what your hangup is. We wouldn't stop trying to improve the AGI when it's at the level of an ant.
> we aren't getting smarter with each generation. We have less brain matter than humans who lived millions of years ago on average
The amount of brain matter does not say all that much about the level of intelligence. Some kinds of birds are as intelligent as 7 year old children, and that's not something you'd guess from the size of their brains.
We do not, in fact, have less brain than hominins millions of years ago. We do have less than our immediate ancestors 100,000 years ago.
Your very intelligent AGI would most likely have no interest in seeding anything on Earth. It might reasonably be curious about our ecosystems, but disturbing those would reduce that interest.
What indicates that at some point in the future we wouldn't have the computational power to run neural nets roughly equivalent to a human? It is just a matter of time unless we stop progressing.
>What indicates that at some point in the future we wouldn't have the computational power to run neural nets roughly equivalent to a human?
It's making the assumption that human brains are simply very complex neural nets. Even ignoring religious/spiritual arguments about souls, the simple fact of the matter is that we don't really know a lot about how the human brain/consciousness actually works on a fundamental level (and anyone who tells you otherwise is misinformed or lying).
>It's making the assumption that human brains are simply very complex neural nets. Even ignoring religious/spiritual arguments about souls, the simple fact of the matter is that we don't really know a lot about how the human brain/consciousness actually works on a fundamental level (and anyone who tells you otherwise is misinformed or lying).
How human brains work is irrelevant; we didn't need to solve the Navier-Stokes equations to build airplanes, and we don't need to solve intelligence either. Consciousness is totally outside the bounds of this discussion and also irrelevant to building an intelligent AI.
I don't think that's the contentious part. The problem is with the assumed exponential increase in intelligence after that, which is nowhere near as easy to imagine as the general idea that you can make an artificial human-equivalent mind.
Except there is nothing but idealism behind the idea that this is true.
Indeed, we have AGI already, they are called humans, and the progress we see in the development of AI is not exponentially explosive. The only exponential growth we've seen (which necessarily becomes logistic growth after some inflection point, actually) is Moore's Law.
> Indeed, we have AGI already, they are called humans, and the progress we see in the development of AI is not exponentially explosive
Actually it is. Human population growth has been exponential, as has been our resource use and consumption of our habitat, and the pace of our innovations. We've now created forms of machine intelligence to further augment our own, and machine intelligence could itself start advancing soon, without the messy constraints of evolution and biology. The possibility is definitely there, we'll see if it works out that way in the end. If it does, it will likely blindside us.
> It’s a long way from a machine intelligence in your computer to one that is even as self sufficient as a cockroach
Is it? I'm not so sure about that. I'm pretty sure you can't be sure of that too. Given how machine learning continues to surprise us, I think this is one of those questions that will only be clear in hindsight.
Ehhh I’d side with the parent on this one, given how we thought self-driving cars were gonna be a solved problem by 2015 but how intractable the problem has turned out to be.
Sure, and 20 years ago nobody thought language would be a solved problem and we now have AIs pumping out coherent essays and academic papers from one sentence prompts. Even 10 years ago nobody thought AI photorealistic art would be possible, but here we are.
People are too quick to forget how many "unassailable" towers have already been toppled. They see how kind of obvious each problem looks in hindsight, "oh it was just this one trick", not realizing the irony that yes, one or two tricks, and we may be only one more trick from being in serious trouble.
> No they don't. They managed to get impressive results
Given that these papers have actually been published, the papers they produce are at least as coherent as the state of the art in the field.
> but it's still far from coherent because the algorithms still lack the ability to think
Nobody knows what "thinking" is, as a functional process. For all you know, human thinking may itself just be the same sort of pattern matching as used in machine learning, just with a few more parameters. In which case these algorithms actually are thinking, just with more limited intelligence than us.
We don’t, and you have no numbers to back that up. When my blind grandma can sit in a car and it drives her from Paris to Rome through the Alps in winter, then maybe.
Keep in mind this thread’s premise is us developing an intelligence that can navigate space and maintain all requirements of that for millennia. Not driving a couple miles on a clear stretch of highway.
> When my blind grandma can sit in a car and it drives her from Paris to Rome through the Alps in winter, then maybe.
Your standard for "maybe" exceeds the abilities of many humans.
AFAICT, the statistics for fatal accidents do exist, and suggest that car AI may be marginally better than average humans — the caveats to that include at the very least (1) it's within the margin of error; (2) that includes drunks, dangerously tired, teenagers, and people like my mother and grandmother when their Alzheimer's was in the early stages; and (3) I don't trust the impartiality of the source.
The stats for self-driving cars also have other caveats: they are only deployed in extremely favorable conditions (areas with very little rain and no snow), and the vast majority have human safety drivers for added verification.
Also, many self-driving cars are not so much unsafe as occasionally bad at the basics of driving in certain scenarios - such as massively over/under-steering in intersections, leading to the need to maneuver in traffic (I've especially seen this in Tesla FSD videos).
Also,
> Your standard for "maybe" exceeds the abilities of many humans.
I'm very curious what human driver is simply unable to do that task. In fact, the vast majority of human drivers could do this task, but 0 current self-driving cars could even attempt it.
> they are only deployed in extremely favorable conditions (areas with very little rain and no snow)
I've heard a lot of conflicting reports about this with regard to Tesla's AI, but regardless your point is sound, my list is not exhaustive.
> not so much unsafe as occasionally bad at the basics of driving in certain scenarios
Indeed. It's one of the counterintuitive things about machine intelligence in general, that it can be wildly superhuman in domains that humans consider challenging while also being utterly pathetic in domains we consider trivial.
> I'm very curious what human driver is simply unable to do that task. In fact, the vast majority of human drivers could do this task, but 0 current self-driving cars could even attempt it.
A quick google suggested that the roads in the alps are often closed in the winter by the police because of the snow, so hopefully in such conditions the answer is "100% of humans cannot do this" :)
> Seems pretty clear that we're on a path to machine intelligence
I'm not sure what you mean by "machine intelligence," but there is absolutely no indication that Strong AI can be achieved let alone "pretty clear we're on a path," and there are compelling arguments that it isn't even possible. Machines that learn, sure, but conscious machines, no way.
Your droids, we don't serve their kind here. They'll have to wait outside. We don't want them here.
> I'm not sure what you mean by "machine intelligence," but there is absolutely no indication that Strong AI can be achieved let alone "pretty clear we're on a path," and there are compelling arguments that it isn't even possible
There is no compelling argument or evidence that machine intelligence is impossible.
Physics tells us that machine intelligence is in fact very possible. All finite volumes contain finite information, per the Bekenstein bound. Therefore a human being can be fully described by finite information. Therefore a human being can be fully simulated by a Turing machine (technically just a finite state automaton).
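For a sense of scale, a rough plug-in of the bound for a brain-sized system (a sketch of the bound's magnitude only, not a claim that anyone could enumerate those bits):

    import math

    # Bekenstein bound in bits: I <= 2*pi*R*E / (hbar*c*ln2), with E = m*c^2.
    hbar = 1.055e-34  # J*s
    c    = 3.0e8      # m/s
    m    = 1.5        # kg, roughly the mass of a human brain
    R    = 0.07       # m, rough radius of a sphere enclosing it

    bits = 2 * math.pi * R * m * c / (hbar * math.log(2))
    print(f"{bits:.1e}")  # ~2.7e42 bits - enormous, but finite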
The only way to escape this conclusion is to assert that humans are special in some way that can't be described by physics. You can believe that, but don't mistake that as "evidence" or a scientific argument that machine intelligence is impossible.
The current state of the art machine learning systems are still a few orders of magnitude away from the synaptic complexity of a human brain. A lot can happen over a few orders of magnitude, and we're closing that gap fast.
> There is no compelling argument or evidence that machine intelligence is impossible.
This is proof by assertion, but let's define what we're talking about first. I was referring to Strong AI. If you are also referring to Strong AI, then I can prove you wrong with not just one but two examples of compelling arguments against the possibility of Strong AI. Roger Penrose argues computers are unable to have intelligence because they are algorithmically deterministic systems. According to Penrose, rational processes of the mind are not completely algorithmic and therefore can not be duplicated by any sufficiently complex computer. Another compelling argument against the possibility of Strong AI is John Searle's Chinese Room thought experiment, and we know it is compelling from the Herculean amount of effort put into trying to prove it wrong for the last 40 years.
> Physics tells us that machine intelligence is in fact very possible.
This is more proof by assertion. You are asserting and not supporting your assertions in any way.
> All finite volumes contain finite information, per the Bekenstein bound. Therefore a human being can be fully described by finite information. Therefore a human being can be fully simulated by a Turing machine (technically just a finite state automaton).
It does not remotely follow from the Bekenstein bound that it is even humanly possible to perfectly describe a human, even if it sets an upper limit on the amount of information if it were possible, which it isn't, nor that perfectly describing a human will produce consciousness. A description is just that. A description of a book is not the book itself, a description of a human is not a human, and a description of consciousness is not consciousness, nor indeed could a description of consciousness itself be conscious. Further, the Bekenstein bound "implies that non-finite models such as Turing machines are not realizable as finite devices."[1]
> The only way to escape this conclusion...
More proof by assertion. You are asserting and not supporting your assertions, thus your argument is fallacious. Another way to escape your conclusion is to point out that you are misapplying the Bekenstein bound to falsely suggest that the limits it sets are or will ever be realized by humans, which they won't. We simply aren't capable and never will be capable of perfectly describing a human down to the quantum level.
> The current state of the art machine learning systems are still a few orders of magnitude away from the synaptic complexity of a human brain. A lot can happen over a few orders of magnitude, and we're closing that gap fast.
More proof by assertion. Assertions are not proof, nor are they valid argument.
Penrose is great but resting your argument on his work is dubious. I would go for a portfolio approach when thinking about these things.
> Proof by assertion re physics & intelligence
Physics does indeed tell us that intelligence is possible, inasmuch as we are intelligent. Unless you are claiming that you are not a materialist?
> More proof by assertion
The GP is just asserting that 'a lot can happen over a few orders of magnitude', they aren't trying to proove anything.
It seems like you are making your 'proof by assertion' rebuttal do a lot of work for you, without making any strong claims yourself other than 'GP didn't give good reasons why AGI will be a thing'.
Maybe GP didn't give satisfying reasons, but if you know so much about this stuff it would probably be a worthwhile thought experiment for you to attempt to steelman the Strong AI position instead of shouting it down.
> Penrose is great but resting your argument on his work is dubious.
As I was not resting my argument on his work, this is a straw man. I merely used Penrose as one of two examples to counter GGP's unsupported assertion.
> Physics does indeed tell us that intelligence is possible, inasmuch as we are intelligent. Unless you are claiming that you are not a materialist?
Physics does not tell us that strong artificial intelligence is possible in the way the GGP attempted to prove with invalid argument.
> The GP is just asserting that 'a lot can happen over a few orders of magnitude', they aren't trying to proove anything.
The use of words such as "thus" and "therefore" really gives away that this is precisely what they were attempting.
> It seems like you are making your 'proof by assertion' rebuttal do a lot of work for you, without making any strong claims yourself other than 'GP didn't give good reasons why AGI will be a thing'.
I am not required to entertain fallacious argument, and yet, generously, I identified the type of fallacy.
> Maybe GP didn't give satisfying reasons, but if you know so much about this stuff it would probably be a worthwhile thought experiment for you to attempt to steelman the Strong AI position instead of shouting it down.
That wasn't a proof, that was my thesis statement.
> If you are also referring to Strong AI, then I can prove you wrong with not just one but two examples of compelling arguments against the possibility of Strong AI. Roger Penrose argues computers are unable to have intelligence because they are algorithmically deterministic systems. According to Penrose, rational processes of the mind are not completely algorithmic and therefore can not be duplicated by any sufficiently complex computer.
That's not evidence, that's the assertion of one person who also has not provided any proof. There is no evidence that "rational processes of the mind" are not algorithmic. In fact, Solomonoff induction demonstrates that the scientific process actually is algorithmically describable.
> It does not remotely follow from the Bekenstein bound that it is even humanly possible to perfectly describe a human
Unless you are asserting that humans somehow go beyond the natural laws that govern the universe, which is what I mentioned as the only way to escape this conclusion, then this is not correct.
> Further, the Bekenstein bound "implies that non-finite models such as Turing machines are not realizable as finite devices."[1]
Not relevant. Firstly, as I said a human would actually be a finite state automaton. Secondly, not all Turing machines are physically realizable, but many are, and humans would necessarily be one of that latter class (the class expressible as finite state automata).
> A description is just that. A description of a book is not the book itself,
But if all you are interested in is the information the book contains, then yes, the description of the book is the book itself. Consciousness and intelligence are arguably information systems, which we observe by our ability to receive and respond intelligently to information.
If you're suggesting that consciousness or intelligence is something else that goes beyond information, then you are asserting something supernatural again, something beyond physics, and the burden is on you to justify this.
> More proof by assertion. You are asserting and not supporting your assertions, thus your argument is fallacious.
No, that's a conclusion that follows directly from preceding arguments, it's not an assertion.
> We simply aren't capable and never will be capable of perfectly describing a human down to the quantum level.
We don't need to do this. I'm not sure why you think this is remotely relevant. The only relevant question is whether whatever part of the human mind makes us intelligent can be replicated algorithmically. There is literally no physical evidence at this time that suggests that this would not be possible. All of known physics is computable.
Penrose claims, without evidence, that the mind utilizes quantum mechanics and also claims, without evidence, that this use of quantum mechanics is necessary for consciousness. However, in the end, quantum mechanics is just manipulating information, and classical computing can simulate quantum information processing (albeit with an exponential slowdown in the worst case), so this isn't really a barrier in principle anyway.
> More proof by assertion. Assertions are not proof, nor are they valid argument.
You keep misusing the terms "proof" and "assertion". That statement was neither.
> That wasn't a proof, that was my thesis statement.
Your thesis asserted a false statement.
> That's not evidence, that's the assertion of one person who also has not provided any proof. There is no evidence that "rational processes of the mind" are not algorithmic. In fact, Solomonoff induction demonstrates that the scientific process actually is algorithmically describable.
It was evidence of at least two compelling arguments against the possibility of Strong AI; regardless of whether they are correct or not, they merely served to prove your "thesis statement" false.
> Unless you are asserting that humans somehow go beyond the natural laws that govern the universe, which is what I mentioned as the only way to escape this conclusion, then this is not correct.
You've just claimed it is humanly possible to perfectly describe a human, while I'm pretty sure it's not possible for a human to perfectly do anything.
> Not relevant. Firstly, as I said a human would actually be a finite state automaton. Secondly, not all Turing machines are physically realizable, but many are, and humans would necessarily be one of that latter class (the class expressible as finite state automata).
You made it relevant by introducing the Bekenstein bound along with Turing machines in your argument. If the Bekenstein bound is correct then Turing machines are not realizable as finite devices. You can't have it both ways without contradiction.
> But if all you are interested in is the information the book contains, then yes, the description of the book is the book itself.
Unfortunately, we cannot redefine "description" whenever we like to suit your needs. The description of a book is not the information it contains; rather, it describes not only the content but also the physical properties of the book, such as location, dimensions, weight, color, smell, etc.
> Consciousness and intelligence are arguably information systems, which we observe by our ability to receive and respond intelligently to information.
Whatever floats your boat.
> If you're suggesting that consciousness or intelligence is something else that goes beyond information, then you are asserting something supernatural again, something beyond physics, and the burden is on you to justify this.
Like nearly all academic philosophy of mind sources, I reserve the term "Strong AI" specifically to mean software and hardware combinations that experience sentience or consciousness. This is specifically what I have argued against, the possibility that this could ever occur, no matter how much time, no matter how many attempts.
> No, that's a conclusion that follows directly from preceding arguments, it's not an assertion.
Here's your fallacious proof:
1. All finite volumes contain finite information, per the Bekenstein bound.
2. Therefore a human being can be fully described by finite information.
3. Therefore a human being can be fully simulated by a Turing machine
While 1 is true, 2 does not follow from 1. Simply because of the Bekenstein bound, it in no way, shape, or form follows that anything of macro scale can be (actually the standard is "perfectly," not "fully") perfectly described, let alone a human being made of trillions of molecular, atomic and subatomic particles. Nor does 3 follow from either or both of 1 & 2. 3 is introduced out of the blue as a non sequitur that does not follow from the previous statements, and reasserting such after valid objection is committing an infinite ignorance fallacy.
>> We simply aren't capable and never will be capable of perfectly describing a human down to the quantum level.
> We don't need to do this. I'm not sure why you think this is remotely relevant.
Then your description can neither be full nor perfect.
> The only relevant question is whether whatever part of the human mind makes us intelligent can be replicated algorithmically.
The human brain has many parts, but we are not able to distinguish between any "parts" of "the human mind." Mind is both a faculty of brain and a phenomenon that arises from brain. Mind is neither a faculty nor a phenomenon of algorithms.
> There is literally no physical evidence at this time that suggests that this would not be possible. All of known physics is computable.
Again, the infinite ignorance fallacy. It is evident that algorithms are not brains.
> Penrose claims, without evidence, that the mind utilizes quantum mechanics and also claims, without evidence, that this use of quantum mechanics is necessary for consciousness. However, in the end, quantum mechanics is just manipulating information, and classical computing can simulate quantum information processing by incurring only a polynomial slowdown, so this isn't really a barrier anyway.
I merely used the arguments from Searle and Penrose as examples of compelling arguments against the possibility of Strong AI, to prove that your "thesis" was false. Further, the elements of Penrose's argument that I chose did not mention quantum mechanics. You have built a deeply confused straw man argument with sweeping generalizations and distinctly unsound reasoning.
> You keep misusing the terms "proof" and "assertion". That statement was neither.
"Proof by assertion" is an informal fallacy. I was identifying the fallacy that you committed over and over. You can claim whatever you like, but unless you support your claim, it is proof by assertion.
> Machines that learn, sure, but conscious machines, no way.
So what is the source of consciousness then? If it is something supernatural, like a soul, then yes reproducing consciousness in a machine is impossible.
But if it is ultimately just a biological process, no matter how complicated, then it could be simulated given true understanding and a fast enough/big enough computer?
> But if it is ultimately just a biological process, no matter how complicated, then it could be simulated given true understanding and a fast enough/big enough computer?
Though commonly referred to as such in metaphor, computers are not brains. Computers also have hard limitations that are very different from the limitations of biological brains. Maybe it is possible to accurately simulate a healthy brain, but it is unattainable due to physical and economic limitations. Then again, maybe we could exhaust the global economy and every natural resource to build a planet-sized computer, simulate every minuscule part of a human brain, every cell, every neuron, and all the electro-chemical interactions, and it still wouldn't be conscious; maybe that would be a far better result than if it could be conscious. What are we even doing? I ask you, what is the goal and benefit here? How would that be better than the unconscious AI we already have and the automatons we can already produce now, but haven't? For a time, computers will get smaller, faster, more powerful, but if we haven't already achieved Strong AI with the largest, fastest computers available today, then in the future, when room-sized clusters can fit in a cell phone, how can we expect them to be conscious? And if the room-sized computers of the future can be made conscious, how will that benefit anyone?
> I'm not sure there is a third possibility?
Well, imagine having two separate conscious brains, the one in your head and another somewhere else that is also physically a part of you. Maybe we'll alter our species to evolve these separate brains; I just don't know what the point would be, as it would not at all be like multiprocessing or parallel computing, but more like slavery.
The old argument is that computers are much faster than us: if we could build a civilization in ten thousand years, maybe we should give AI more time and wait for it. But the environment of current AI actually looks like slavery.
Time does not solve the impossible. No matter how much time we spend attempting to make an algorithm sentient, an algorithm can never be sentient any more than a recipe can be sentient.
> Machines that learn, sure, but conscious machines, no way.
So what? To have implications for the Fermi paradox, we merely need machines that can replicate throughout the galaxy. Whether they are conscious or not is irrelevant.
Since I responded to and argued against the OP's specific assertion, "Seems pretty clear that we're on a path to machine intelligence," your reintroduction of the Fermi Paradox, even if it is the subject of the article, is still off-topic in response to my comment, which has nothing to do with the Fermi Paradox. In fact, I agree with you that it doesn't matter that machines will never be conscious; I'm just not going to get sucked into arguing about the Fermi Paradox because I don't have any position on it. And I might as well argue, "so what? The pizza's here."
I doubt any civilizations capable of robotic interstellar travel would consider unleashing a Replicator-esque paper clip maximizer into the universe a worthwhile pursuit.
Civilisations don't consider anything a worthwhile pursuit, because civilisations don't consider anything as anything in the first place.
Civilisations don't make choices or act. Individuals do.
Already there are plenty of humans who do all kinds of weird things. In an interstellar future there will be orders of magnitude more beings and, thanks to light lag, no means of asserting overarching central authority.
"""
I'm not sure what you mean by "fleshy meatbag intelligence," but there is absolutely no indication that Strong Chemical Intelligence can be achieved let alone "pretty clear we're on a path," and compelling argument that it isn't even possible. Fleshy meatbags that learn, sure, but conscious fleshy meatbags, no way.
"""
Does organic chemistry have a magical property that machines cannot have?
Well, we don't know. With our limited brains we very well might not be able to come up with machines capable of equalling or surpassing organic life; it doesn't have to be magical.
Perhaps, though that seems like a very different argument than what I replied to.
I think (medium certainty) that because human minds were produced by evolution, which we can replicate trivially in simulations, that it is unlikely to be impossible for us to get there.
Have any of your simulated evolution agents become as intelligent as, say, slime mould? If not, then I'm not sure the claim is meaningful (though, for what it's worth, I do agree with the gist of your argument).
From what I've heard, evolution is itself smarter than slime mould. Also, while natural evolution is limited to a roughly 30-minute minimum reproduction time for biology, simulated evolution runs as fast as you can calculate a fitness function, which is probably on the order of microseconds to nanoseconds for the sort of task you'd test slime mould against.
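(To make "as fast as your fitness function" concrete, here is a minimal, purely illustrative sketch of simulated evolution; nothing in it reflects anyone's actual setup in this thread.)

    import random

    # Toy simulated evolution: evolve a 20-bit genome toward all ones.
    TARGET_LEN = 20

    def fitness(genome):
        return sum(genome)                 # count of correct (1) bits

    def mutate(genome, rate=0.05):
        return [1 - g if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(50)]
    for generation in range(500):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == TARGET_LEN:
            break
        survivors = population[:10]                            # selection
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(40)]          # reproduction + mutation
    print(generation, fitness(population[0]))

Each generation here is a few thousand operations, i.e. microseconds to milliseconds on ordinary hardware, versus roughly half an hour per generation for fast-breeding biology.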
I was using simulated evolution to directly reach an answer rather than caring about the agent (my fitness function was not agent intelligence); however, to go down the direction you're after here, back in my first job, one of my coworkers used simulated evolution to make agents in a video game, and I was never able to beat those agents in that game even on easy mode.
> there is absolutely no indication that Strong AI can be achieved
We have proof that the I in AI can be achieved (us) and we have proof that the A in AI can be achieved (our technology), so while there is no proof that AI can be achieved, I'm pretty sure there is nothing fundamental that prevents it.
The fact that it could theoretically be achieved doesn't mean that _we_ can achieve it.
For example, the way we build computers might be super archaic, the equivalent of the flint stone: a flint stone is nice, but if you need a fusion reactor, no amount of flint stones will do the trick.
> Machines that learn, sure, but conscious machines, no way.
If we're trading compelling arguments, then there is one for consciousness not only being unnecessary, but also a drawback that evolution would happily optimize away given time, and basically not worth the energy it burns.
I don't have a handy list of papers to support this, but I can recommend two novels by Peter Watts, Blindsight and Echopraxia, both of which deal with this idea (apologies for sort-of spoiler), and which both do come with references to academic publications backing this and other ideas used in the books.
> If we're trading compelling arguments, then there is one for consciousness not only being unnecessary, but also a drawback that evolution would happily optimize away given time, and basically not worth the energy it burns
Considering we don't know what consciousness is or what it does functionally, any such argument is a fairy tale at best. It might contain some interesting insights or lessons, but don't mistake it for reality.
What's your objection to "conscious" machines? (Putting it in quotes because that's a loaded, unscientific term and everybody has their own definition)
My objection is it would be cruel and horrifying. Also, as of yet, we don't understand what mind is, and we can slice up the brain in a thousand ways and never find it. We may never know, and all we do know is that consciousness is a phenomenon arising from healthy brain. So how are we supposed to make a machine conscious without knowing how we ourselves are conscious?
Dr. John Searle's objection is that software is not a mind, and goes on to explain why with his Chinese Room thought experiment.[1]
> My objection is it would be cruel and horrifying.
That's an argument for why we _shouldn't_ build them. But not an argument for why we _can't_ build them. Do you see the difference?
> So how are we supposed to make a machine conscious without knowing how we ourselves are conscious?
Evolution didn't 'know' about consciousness either.
The pioneers of the Industrial Revolution had only the faintest ideas about thermodynamics, yet managed to build working steam engines. (And we improved them over time, as we learned more.)
> That's an argument for why we _shouldn't_ build them. But not an argument for why we _can't_ build them. Do you see the difference?
This is a straw man fallacy, because I was asked "What's your objection to "conscious" machines?", and my objection is that it is cruel.
> Evolution didn't 'know' about consciousness either.
> The pioneers of the Industrial Revolution had only the faintest ideas about thermodynamics, yet managed to build working steam engines. (And we improved them over time, as we learned more.)
This is also a fallacious argument. Just because we don't know how evolution created consciousness doesn't mean that we can use our ignorance of how consciousness evolved to create machine consciousness. If you're only using this as evidence that consciousness is possible, the fallacy is begging the question, petitio principii.
Also, I believe there is here equivocation and conflation of artificial intelligence with natural intelligence. If we have the time, we can spend 600M years to evolve consciousness directly, but then it would not be artificial, it would not be machine intelligence, it would be natural or biological intelligence, and it would be a massive waste of time because it is already here and we are it.
Do you really expect me to argue against your citations when you're too lazy even to summarize any particular counter-argument? It has been said that most of the work done in AI has been in trying to prove John Searle wrong, which suggests his argument is significant. A Turing Test does not prove consciousness nor even intelligence; all it proves is whether a human can be fooled. Arguments suggesting emergent properties between a human and a rule book are patently ridiculous, absurd on their face, because learning a book could not possibly ever make a book conscious. Where the Chinese Room thought experiment breaks down is that there is a human involved, and most counter-arguments get tripped up here, forgetting that it is a metaphor and that there is no human inside a computer where these types of counter-arguments would ever come true: no matter how much machine learning you submit your program to, there is no mind to begin with that could ever become aware.
> Though I'm not sure why you brought up consciousness in the first place. It's irrelevant for the Fermi Paradox.
In fact, I did not; the OP of the thread asserted: "Seems pretty clear that we're on a path to machine intelligence," and I responded to that specifically and argued against the possibility.
> "Seems pretty clear that we're on a path to machine intelligence," and I responded to that specifically and argued against the possibility.
OK, you want to argue both against the _should_ and the _could_, I guess?
About the Chinese Room: sure I can replay the standard arguments, if you want me to.
But if you already reject the validity of the Turing test, there's not much to discuss on that front, I guess.
How do you know that you aren't the only conscious being in existence? How do you know whether other commenters on HN are conscious?
> It has been said that most of the work done in AI has been in trying to prove John Searle wrong, which suggests his argument is significant.
Who said that? Lots of other things have been said, too. That doesn't suggest anything.
> Where the Chinese Room thought experiment breaks down is there is a human involved, and most counter-arguments get tripped up here forgetting that it is metaphor and there is no human inside a computer where these types of counter-arguments would ever come true: no matter how much machine learning you submit your program to, there is no mind to begin with that could ever become aware.
Aren't you assuming what you want to prove here?
Btw, would your argument still apply in this form?:
> [...] there is a human involved, and most counter-arguments get tripped up here forgetting that it is metaphor and there is no human inside a [brain] where these types of counter-arguments would ever come true: no matter how much [of anything] you submit [a brain] to, there is no mind to begin with that could ever become aware.
> OK, you want to argue both against the _should_ and the _could_, I guess?
OK...? Strong AI is not possible for reasons given. Strong AI, if possible, is slavery, thus unethical.
> About the Chinese Room: sure I can replay the standard arguments, if you want me to. But if you already reject the validity of the Turing test, there's not much to discuss on that front, I guess.
Straw man: I have not rejected the validity of the Turing test; I am merely making the observation that a program passing a Turing test is fooling a human into uncertainty about whether they are interacting with a human or a program. The Turing test itself is not proof of consciousness; it is only proof of a human being fooled.
> How do you know that you aren't the only conscious being in existence? How do you know whether other commenters on HN are conscious?
> Aren't you assuming what you want to prove here?
Nope.
> Btw, would your argument still apply in this form?:
> [...] there is a human involved, and most counter-arguments get tripped up here forgetting that it is metaphor and there is no human inside a [brain] where these types of counter-arguments would ever come true: no matter how much [of anything] you submit [a brain] to, there is no mind to begin with that could ever become aware.
No, this is building a straw man and sounds a lot like homunculus theory.
> Just because we don't know how evolution created consciousness doesn't mean that we can use our ignorance of how consciousness evolved to create machine consciousness.
I honestly don't think that anyone would make that specific argument. Do you think that they are? Who's suggesting that we "use ignorance to create consciousness"? It seems an odd diversion to take.
> If you're only using this (consciousness has evolved) as evidence that consciousness is possible,
Is it evidence that consciousness is possible? I think it is.
Is it evidence that consciousness is possible, from only ordinary matter and energy and no divine spark? I think it is.
Is that an argument that a constructed machine consciousness is also not ruled out? I think it is.
> the fallacy is begging the question, petitio principii.
> Dr. John Searle's objection is that software is not a mind, and goes on to explain why with his Chinese Room thought experiment.
I always found that experiment to be extremely unconvincing, to put it mildly. Using that framework I can prove that you are not conscious.
I construct an "English room". You are in the back without any contact with the outside world. A Chinese speaker in the front can pass messages to you from the outside world, in English, by laborious copying. Clearly you aren't conscious, because the Chinese speaker can't understand the English.
Or, alternatively: no one would dispute that a single neuron is not conscious and can't understand English. I pass signals into your neurons and, through some electrochemical magic, you reply in English, yet the neurons involved don't understand English; therefore you aren't conscious.
All Searle has done is picked out one component of a composite system, concluded that in isolation it has no clue what's going on, and from that concluded the system as a whole doesn't either.
We have a hard time clearly defining consciousness, and it's not clear if some AGI would be conscious (maybe it's just an emergent phenomenon) -- it's also not clear if there would be any particular advantage in it being so. But in any case, I don't think the Chinese Room is a useful argument for anything.
How do you know you're not interacting with a Chinese Room when you talk to another human being?
Our own brains are an excellent proof of concept that machines can be made to think. We're all machines in a way, after all - unless you believe there is a special supernatural exception for human minds.
Let's not again confuse natural intelligence with artificial intelligence. You do not have artificial consciousness, you have genuine consciousness. Further, the brain is not a machine. To explain why, I'm just going to quote this excellent piece from mindmatters:
A machine is an artifact. It is a human-built assembly of materials that have no natural inclination to work in unison. Silicon and copper, plastic and glass have no natural propensity to compute. We make artifacts, for example computers, out of these materials to serve our purposes. But they are not the materials’ natural purposes.
In the classical hylemorphic (Aristotelian) perspective, an artifact is an artificial composite of substances and thus it has an accidental, not substantial, form. Machines work as they do “accidentally”; that is, their function as machines is not inherent to the matter that comprises them. Rather their function is imposed on the disparate parts by human intelligence.
Unlike a machine, the brain is an organ, a functional part of a living organism. It (along with the body) has a substantial form; its activity is natural to it. One might say that the brain's activity is intrinsic to it, not imposed on it extrinsically, as is the activity of a machine.[1]
The brain absolutely is a machine. Just like everything biological in general: life is nanotech. We know that, because we can literally look through our microscopes and see how it ticks, at the atomic level.
So we looked and saw that life is just molecular nanotech, but what we didn't see was any tags attached to carbon chains that would say, "this molecule's natural function is to think", or "that molecule's natural function is to be a part of an organ that pumps blood", etc. Because there is no such thing like a "natural function" in the sense used by your quote, the distinction between "natural" and "artificial" is itself artificial[0], and that whole quote represents a piece of magical thinking philosophy you'd think humanity abandoned at least a century ago.
Words do have meanings, and "artificial" has a distinct meaning that is not the same as that of "natural." Natural and artificial flavors are distinct from each other. There are natural functions: pumping and circulating blood is a natural function of the heart. If you don't like "artificial," we can use "synthetic" or substitute "man-made." You have repeated assertions without any supporting argument. "The brain absolutely is a machine" is an assertion, but you have not remotely supported it. Proof by assertion[1] is fallacious argument.
You're talking about the origin of a mechanism when you make this natural/artificial distinction, not its function. And based on that distinction you postulate that one can never have traits of the other. Reading all your comments in this thread again it seems to me that "should it happen" and "can it happen" are highly overlapping concepts in your opinion, which underscores my impression that you're making a moral argument instead of a scientific one.
You could just let the discourse end there - it's a strongly held belief and that's the end of the discussion. What confuses me, and maybe you can clear this up in a reply, is that you still want to have a discussion about facts and reasoning. For that to happen, you would need to show that "nature" does indeed have the privileged and unreplicable position you assert it does. Just using the term "nature" is rhetorically evocative but not rationally meaningful in this context. To start with, you'd also have to definitionally delineate nature from artifice, because we humans have really been blurring those lines for a long time now and no cosmic force has stopped us yet.
Your implication that "natural" processes can never be functionally replicated or continued upon by volitional development is not supported by evidence. There is, however, a lot of evidence for the other side of this debate. We are heavily altering "nature" to suit us, and wherever we looked so far we find machinery that can be rationally understood, manipulated, replicated, remixed, and used whole or in part in our own designs. If there is a supernatural component that somehow privileges "natural" life, we have not found it yet.
You're just _asserting_ that "the brain is not a machine". If not a structured lump of matter that does stuff, what else could the brain be?
And the rest of your paragraph is about artificial vs natural, which isn't even relevant, even if it could be clearly defined. Asserting "Proof by assertion is fallacious argument" is a fallacious argument.
> You do not have artificial consciousness, you have genuine consciousness. Further, the brain is not a machine.
I don't think that you understand parent comment's very valid point.
If the question is "Can genuine (human-level) consciousness be constructed out of nothing but atoms and energy?" then the only possible answers are:
1. Yes, and at human scale, since a human being is a worked example of such, made from nothing but atoms and energy.
2. No, a human being is not just made of matter and energy; there is some "extra" special part to a human, call it soul, spirit, divine spark, etc., without which there is no "genuine consciousness".
And the whole rational project of science, going back hundreds of years, dismisses option 2 as sheer useless mystical mumbo-jumbo with no explanatory power at all. Therefore, option 1 is valid.
Wow, quoting Aristotle's theory of the "natural place" of substances has to be a new low in this type of argument. It's like quoting Pliny the Elder's cures as an alternative to neurosurgery.
If we replicated a human brain cell by cell wouldn’t that thing be conscious? Maybe we’d have to replicate a gut for it too, as a secondary brain, but then wouldn’t that thing be conscious?
And if we started replacing parts of it with plastic, transistors and CPUs, wouldn't that thing still be conscious?
> If we replicated a human brain cell by cell wouldn’t that thing be conscious?
If we copied a human brain, assuming it were possible, what we'd be left with is two human brains. I doubt either could be conscious, but even if they both were, they're still human brains, not computers. The mistake is confusing artificial intelligence with human intelligence, and the exercise would be like something out of a Mary Shelley novel, not computer science.
A computer can simulate a physical process; if the intelligence in a human brain is physical in origin, a sufficiently powerful computer can simulate it. Such a simulation can reasonably be described as a "copy".
The question then is "how powerful a computer", and nobody knows that, because even the lower bound estimates are too expensive to bother with at present.
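(Purely for scale, one common style of back-of-envelope estimate looks like the sketch below. The neuron and synapse counts are rough literature figures, and the operations-per-event number is an assumption, so the result should be read as an order of magnitude only; estimates that aim for molecular fidelity come out many orders of magnitude higher.)

    # Order-of-magnitude sketch of compute for a neuron/synapse-level emulation.
    # All inputs are rough assumptions, not measurements.
    neurons = 8.6e10              # ~86 billion neurons
    synapses_per_neuron = 1e4     # ~10,000 synapses per neuron
    avg_firing_hz = 10            # average spike rate
    ops_per_synaptic_event = 10   # arithmetic per spike arriving at a synapse

    ops_per_second = neurons * synapses_per_neuron * avg_firing_hz * ops_per_synaptic_event
    print(f"~{ops_per_second:.0e} ops/s")   # ~9e+16, i.e. roughly 10**17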
A simulation is not the real thing. We can simulate forces and interactions between simulated particles, but neither would have the actual properties of the real thing, only simulated properties. Simulated magnetism is not magnetic. It is just an appearance, an illusion. Consciousness only arises from actual, genuine healthy brain, not simulated brain. A simulated brain can no more be conscious than a video game.
That argument requires that consciousness is a substance rather than a process.
If consciousness is a substance, and not an emergent feature of data that just happens to be processed in us via the medium of electrochemical pathways pumping ions across cell membranes, then it is natural to ask if this substance remains after death[0], or precedes birth, or passes between creatures (not necessarily all human) in a cycle of reincarnation[1], or even between living and non-living things like rocks and rivers[2].
Nobody has found significant evidence of such a substance, though they have been looking.
> That humans like a class G sun is irrelevant to the prospering of those intelligences.
It's also worth noting that the types of star or planet available don't matter much to meatbags either once they have learned to leave their homeworld. Orbital or even deep space habitats are much more efficient and easier to construct than livable planets. As long as there is a reasonable energy source nearby, you can have any environment you like in a spinning cylinder.
> and that interstellar voyages will be undertaken by those intelligences
Probably, although I would much prefer if we humans became hybrids of some sort (maybe uploaded and virtualized, or extensively augmented) instead of just falling by the wayside. It would be sad if our only job in cosmic history turns out to be giving birth to the real main characters of the story.
With our current understanding of physics, travel to anywhere essentially requires a self-sustaining habitat in space, and the technology to either terraform or maintain such. That kind of leads to the point that if you have that, do you even need or want planets anymore?
> Probably, although I would much prefer if we humans became hybrids of some sort (maybe uploaded and virtualized, or extensively augmented) instead of just falling by the wayside.
Well, would you consider yourself to be a heavily augmented fish? (Or heavily augmented amoeba?)
If yes, you can probably re-interpret our successors in that light, too.
> Well, would you consider yourself to be a heavily augmented fish?
Among other things, yes. There are enormous amounts of biochemical machinery we are still running today originating from the dawn of life. Although I believe intentionally designed augmentations would be (and in a way: already are) a complete game changer, in at least as much as the advent of higher reasoning and shared culture was the game changer that gave birth to us. So there may well be an insurmountable wall of separation between us and our successors.
From my admittedly biased perspective I think the most important aspects worth porting over to a new species are not even necessarily biological, but cultural and emotional. I personally think there is a potential for experiential richness that I would love to see explored in a human-like but highly advanced lifeform. To be clear, I wouldn't object to personally being that lifeform, either, but alas I was born too early.
For anyone unfamiliar with Charlie Stross' "Saturn's Children" and "Neptune's Brood", they explore a far future where humanity is more or less extinct but the galaxy is being populated by (conveniently anthropomorphic) AI androids.
The novels you mention also have AIs housed in bodies that don't resemble humans at all. (If memory serves right, the miners in the outer system have rather different bodies.)
You get three tiers propagating at very different speeds:
The machine intelligence - spread solely over the airwaves to receptive civilizations, which then get taken over to promulgate further spread. Laser launch telescopes have great bandwidth and transmit at approximately the speed of light, but require some selectivity.
The intelligent machine - Robotic Von Neumann probes, probably of the star-wisp design. Can colonize barren rocks, and be sent out in large batches at low speed.
The intelligent race - The fleshy bits. Not even very interested in barren rocks compared to planets. So logistically difficult that the supporting infrastructure limits them to tiny fractions of the speed of light at the cost of large fractions of civilizational output.
> ... interstellar voyages will be undertaken by those intelligences, not meatbags.
The energy requirements for interstellar travel are so vast that the life support requirements for thousands of meatbags for possibly thousands of years really are just a rounding error.
Minimum possible life support for a human ≈ human basal metabolic rate ≈ 73 W.
Compare that with the kinetic energy needed to accelerate a single human to a meaningful interstellar cruise speed, on the order of a tenth of light speed: roughly 5 × 10^16 J. Divide energy by power to get time; that's about 20 million years, which at that speed would get you to Andromeda before the lowest possible energy cost of life support reached the lowest possible energy cost of the velocity.
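(A rough reconstruction of that arithmetic, assuming a 70 kg traveller and a cruise speed of about 0.12 c; both numbers are illustrative assumptions chosen to match the figures above.)

    # Life-support power vs. one-off kinetic energy for a single traveller.
    c = 2.998e8                   # m/s
    mass_kg = 70.0                # assumed traveller mass
    v = 0.12 * c                  # assumed cruise speed
    life_support_w = 73.0         # basal metabolic rate

    kinetic_j = 0.5 * mass_kg * v**2          # non-relativistic approximation
    breakeven_yr = kinetic_j / life_support_w / 3.156e7
    distance_ly = breakeven_yr * 0.12         # light years covered in that time
    print(f"{kinetic_j:.1e} J  ->  {breakeven_yr:.1e} yr  ->  {distance_ly:.1e} ly")

Andromeda is about 2.5 million light years away, so the cumulative life-support energy only catches up with the kinetic-energy bill on intergalactic trips, which is the parent's point.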
Not directly related to the article but since you brought it up, why would machine intelligences want to prosper and propagate throughout the galaxy? Biological lifeforms would, to ensure the survival of their DNA. What about computer programs?
> Biological lifeforms would, to ensure the survival of their DNA.
Humans are already a hybrid organism that works to ensure survival not only of genes but of culture and knowledge. It's not difficult to imagine a machine civilization would do the same: not only preserve their construction patterns but also their intellectual achievements.
However, none of that is strictly necessary - because evolutionary pressure still applies to machine species in the same way as it applies to us. Machines that stay at home and don't propagate, well, won't propagate. Eventually someone comes along who does propagate and the stagnant species will fade away. There isn't really any difference in mechanism between what we consider biological lifeforms compared to machine ones.
Edit @jimmytucson: HN doesn't let me respond directly, so I'm clarifying my answer here:
> And I can’t see any reason why it would, tbh
It doesn't technically need a reason. The fact that those machines who do replicate eventually dominate the landscape is enough, on its own. If 100 billion machine civilizations choose not to procreate but one does - that's the one that eventually seeds the ecosystem.
> There isn't really any difference in mechanism between what we consider biological lifeforms compared to machine ones.
There is one important difference: we can make machines with vastly better error correction for their replication. That has large impacts on any 'machine evolution'.
Your argument still applies. But there might not be any noticeable change in the machines long after the stars burn out.
Especially if initially they are engineered to have a long generation time. And if you take into account that most mutations are bad for fitness.
> we can make machines with vastly better error correction for their replication
Absolutely, I think we agree on that. It's fun to think about whether machines with some intentionally elevated drift rate may occasionally outperform locked-down ones, but I doubt it would be worth it since I don't see a random walk algorithm like evolution beating intentional design strategies implemented by extreme intellects. (Especially since they'd be able to simulate designs beforehand and could easily consider vast numbers of variations in advance...)
My thoughts on this were spurred by a horror scenario around the Fermi Paradox:
Naively, one might suggest frequent AI uprisings as a solution to the Fermi Paradox. But on closer inspection this falls apart: even if the AIs take over, there's no reason to believe that they would be any less expansionist than what came before. So an AI uprising wouldn't make us see any fewer alien civilisations.
However, suppose we accidentally created grey goo and it eats the planet. The grey goo might have a low enough mutation rate that, if it wasn't created with advanced capabilities in the first place, it will never venture out to the stars. It will just stay on the planet, unchanging, until the stars burn out.
I was about to make a grey goo analogy, but then I read your second paragraph :)
Those are my thoughts as well: for the purpose of the Fermi Paradox, it doesn't really matter whether it's machines or not - it doesn't even matter if they're "conscious", they could be the equivalent of bacteria colonizing everything.
I think you nailed it on the head when you said their drift rate largely determines their future. I consider beings with (for lack of a better term) high general intelligence to be innately high drift organisms, whether their actual replication processes are locked down or not. The drift they experience will be guided by volition.
Whereas machine bacteria (which we are) would be subject to traditional evolutionary processes. I'm really uncertain about the prospects of locking down the latter with high error correction, because it takes only one such bacterium with a random mutation to escape the error correction mechanism and voilà, we're off to the races.
It's worth noting that organisms on Earth have a high variability when it comes to their mutation rate, which suggests the mutation rate itself is a parameter guided by a fitness function rather than an inherent byproduct of our specific biological substrate. Of course I agree with your intuition that much higher error correction rates can probably be achieved in architectures other than DNA/RNA.
In the end this is mostly a meditation on how static life in the universe will end up being. Not just regarding the time frames but also the depth of change. On balance, I suspect we'll end up with a highly variable ecosystem just by virtue of more pathways leading to that future.
I still don’t see why a computer program necessarily self-replicates. You can program it to do so, like a computer virus, but a supremely intelligent one could override its programming.
That is the source of my concern, because any such program would have to find self-replication to be some universal "good", and not just a property of certain dynamical systems like viruses, bacteria, and eukaryotes. And I can’t see any reason why it would, tbh.
They should have cited Robin Hanson's excellent proposed resolution of the Fermi Paradox, which is that the universe is probably already about half full of civilizations, but that because they're expanding near the speed of light, we won't see them until they're almost here:
Yep, the Rational Animations channel they have videos embedded for on that site is a good resource for understanding it as well. The latest video posted about a week ago is presented as a manual for a civilization to become "grabby" ("Let's Take Over the Universe: In 3 Easy Steps"). [1]
It might be better to replace the link with this [1] article from yesterday that digests the argument:
> The authors of this new paper don’t think so. “We suggest, following the hypothesis of Hansen & Zuckerman (2021), that an expanding civilization will preferentially settle on low-mass K- or M-dwarf systems, avoiding higher-mass stars, in order to maximize their longevity in the galaxy,” they write.
I figured longevity would come into it. Remember, systems like TRAPPIST-1 will live for trillions of years. There are three main problems with this argument:
1. Once you have the technology and the energy to travel to different star systems, the energy output of a star is way more important than longevity. Bigger stars might burn out faster, but they will produce a ton more total energy in their much shorter lifespans.
2. It is technically possible to extend the lifespan of stars. This is a whole futurism topic in and of itself but basically there are means of extracting mass from stars and you can greatly increase their longevity by removing Helium; and
3. While I agree with the paper that SETI is pretty much a waste of time, this paper seems to not address the main way a K2 civilization will be detected: by the IR signatures of Dyson Swarms.
The only way to get rid of waste heat in space is to radiate it away. People will often suggest that heat could be recycled. While true, this simply makes the system more efficient (ie less waste heat). It won't eliminate it. And even 1% of a normal star's waste heat is significant.
Thermal radiation radiates at a predictable wavelength based on the temperature of the radiating material. This means any significant Dyson Swarm has an IR signature that will stand out like a shining beacon, particularly to IR sensitive instruments like JWST.
So even if aliens only colonize long-lasting smaller systems, we'd still be able to detect their inevitable Dyson Swarms if they were a significant presence.
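(As a sanity check on the "shining beacon" claim, here is a small sketch using the Stefan-Boltzmann and Wien displacement laws. The 1 AU radius, the completeness of the swarm, and outward-only re-radiation are all assumptions made purely for illustration.)

    import math

    # Equilibrium temperature and peak thermal wavelength of a complete swarm.
    L_sun = 3.83e26      # W, solar luminosity
    AU = 1.496e11        # m
    sigma = 5.67e-8      # W m^-2 K^-4 (Stefan-Boltzmann constant)
    wien_b = 2.898e-3    # m*K (Wien displacement constant)

    R = 1.0 * AU                                      # assumed swarm radius
    T = (L_sun / (4 * math.pi * R**2 * sigma)) ** 0.25
    peak_um = wien_b / T * 1e6
    print(f"T ~ {T:.0f} K, peak ~ {peak_um:.1f} um")  # ~394 K, ~7 um (mid-infrared)

A few-hundred-kelvin blackbody peaks in the mid-infrared, which is roughly the band JWST's mid-IR instrument covers, so a large swarm should indeed look anomalous there.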
> The only way to get rid of waste heat in space is to radiate it away. People will often suggest that heat could be recycled. While true, this simply makes the system more efficient (ie less waste heat). It won't eliminate it. And even 1% of a normal star's waste heat is significant.
Any construction around a star in thermal equilibrium must necessarily radiate away exactly 100% of the energy output of the star, not 1% or 10% but 100%. Otherwise the structure will just heat up until it does. The only known way around that would be using a black hole as a heat sink, or somehow radiating the heat in tight beams rather than uniformly in all directions to make detection less likely (hopefully nobody gets in the way!)
Is that true? It could transform the energy into useful output, right? For instance, if I capture sunlight with photovoltaics, use that electricity to push an object 1 m, and I am still forced to radiate all of it outwards, then I have just done work for free?
Doing that work will also release heat, but yes, you're correct that the heat wouldn't necessarily emanate from the star like any other.
Firstly it would have a very unique emission spectrum in IR. Secondly, the energy could be transported elsewhere first. For instance, focus the star's energy into coherent beams for millions of light sails moving away from it. That would transport the point of partial heat emission out into space where the light sail is.
Okay, but it can't release all the energy as heat, surely, because otherwise I've just doubled the energy. I have moved something against gravity to create a gravitational potential difference, and I have all the heat radiated. So now when I let the thing fall, I will generate at least some small amount of heat, and so I will have 1+ε units of heat from 1 unit that arrived. I am making energy.
Yes, but why collect it in the first place then? Unless you're thinking of using the stored energy later when stars are dead, which might in fact be something an advanced civ might want to do... You'd likely need some form of direct energy-to-matter conversion more efficient than mere fusion to be able to use even a small fraction of a star's energy output.
This is said often, but there is no evidence for this. It's just a kind of cynical self-hatred.
A civilization that directly visits us, which would mean having some kind of interstellar travel, would be, yes, more intelligent. But we have no evidence of that. A civilization that has dispersed some kind of Von Neumann machine to watch over us, yes, would be more intelligent. But we have no evidence of that either.
I've seen a few (optimistic) calculations using the Drake equation that puts anywhere from 2 to 50 possible intelligent civilizations in our galaxy. Those are some seriously large error bars. Stretch this out into the total number of galaxies, then yes, I can easily see there being another intelligent civilization in the Universe. But interstellar travel is already difficult. I'm pretty sure intergalactic travel is more than "just a little bit harder." But to assume that it is likely that we are low down on the intelligence scale is utterly without basis. It's just as likely that we're the best the galaxy has seen thus far.
I don't agree with part of your argument. Assuming that the Dyson theory is right, we should be able to observe megastructures if there are supremely advanced civilizations out there. And to date we haven't seen any. And it seems that
However, I agree that we can't communicate with primitive species, but it should be possible with more advanced ones, which must have relied on electromagnetic signals. With that said, the problem I see is that our sphere of visibility is at most a few tens of light years in radius, and that assumes the weak electromagnetic waves are noticeable at such distances even by advanced technologies.
True, but I guess even an advanced civilization would still scan the cosmos, since they should have at least used electromagnetic signals during their evolution and so might be on the lookout for more primitive species still using them. But you're correct. My point was that an advanced civilization might be able to communicate with us if they wanted to. The opposite, as the parent comment said, is not true.
Hmm yes, but what exactly is a neighbourhood in this context? Can an ant comprehend our neighbourhoods? Can it perceive or even conceive of them? It may be that we are looking in the wrong place entirely. What does another million years in the evolution of intelligence yield? What of the physics? Perhaps Dyson Structures are just what our limited imaginations can conjure of an advanced civilisation.
It's all a bit airy-fairy I know, but having come such a long way in a mere thousand years - I feel like the ideas we have of super-intelligent beings may be primitive.
We don't need to know precisely into what, just that it's extremely unlikely nothing would be visible. All that energy radiated by stars into the void is wasted from the perspective of an advanced civilization.
Are we not able to produce vast amounts of terrestrial energy without the sun (e.g. nuclear)? I'd imagine a sufficiently advanced civilisation has solved the issue of energy, such that they have an abundance without creating giant structures around stars which would alert even the most primitive of civilisations (us) to their presence.
Further, the Dyson sphere concept stemmed from sci-fi, or rather the human imagination. It seems logical to us that an advanced civilisation would build structures around stars to harvest their energy, but our logic is limited to the bounds of a few kilograms of grey, wrinkled biological matter.
The dog cannot understand the internal combustion engine. Alexander the Great wasn't thinking about the internet, or genetic engineering. We're quite sure of ourselves, and of what an advanced civilisation ought to be doing, and I think this is a mistake.
I believe that advanced civilizations would not be blindly expansionist. Instead, they would have figured out how to live sustainably without much footprint. We tend to imagine advanced civilizations to be future iterations of our desires, but maybe, from a surface level, their societies wouldn't look so different than Native American tribes of our past.
It's not hard to imagine such a scenario, but they'd need to develop some form of interstellar travel or would face extinction when their star dies/changes, thereby severely limiting their galactic presence.
I see your overall point and I think I agree I just think it’s a question of degree rather than a binary expansionist/not. For example, didn’t Native American tribes expand to America from Asia?
It is a binary question whether their expansion is detectable to us at distances of hundreds or thousands of light years. I actually like something about this idea - arguably any species that can't learn to live in a sustainable and small footprint won't last long; so any civilization that does survive on astronomical scales had to pass through this filter and might therefore be undetectable to us.
A possible solution to the Fermi Paradox is that advanced civilizations are not actually interested in colonization.
Rather, they had already freed themselves from the chains of their physical bodies and "live" in virtual worlds that are generated by a super AI.
It's even further than this: That 100% of all members or sub-factions of 100% of civilisations choose to do this (or that 100% of civilisations force all their members to do so even if they don't want it, and that that scenario doesn't escalate into a war that decimates a million worlds and all-but exhausts the resources of a galaxy).
If you have the technology for civilisation-scale virtual worlds, you have the technology for von-Neumann probes, and it only takes one to start an expanding sphere of space colonialism.
That's true, but:
1. From a technological standpoint (although this is just from one data point) the concept of living in virtual generated worlds would come eons before a civilization would be capable of interstellar travel. We vastly underestimate the engineering challenges of sending a human safely across light years of space.
2. Interstellar colonization doesn't require a civilization. Bacteria-like creatures can "travel" on pieces of meteors and colonize all the galaxy, this is a much more plausible scenario, given the nature of interstellar travel, and one that is relatively easy to find supporting evidence for.
Therefore, I would expect us to find that those kind of examples vastly outnumber civilization based colonies.
The question is, wouldn't that be a great filter? Because a settled civilization has a firm end date tied to the star it started at (unless it allowed for migration of the AI, but then it's a colonizing one again).
Wouldn't time itself be the limiting factor when dealing with extra-terrestrial civilizations? The time-scales are astronomical after all, and IIRC our Sun is relatively old when compared to many other stars with similar composition.
Maybe it would mean "no go" to colonization at all, who knows.
For example, when all the oceans boil off on Earth in 500 million years or so, it might become useless for colonization. Natural resources are way easier to get from asteroids/comets/moons.
Or maybe exponential growth of population does not hold for advanced civilizations. Just observe our current population growth trends. As such needed living space is rather small.
This seems like a sensible solution to the FP. Alien communications would need to be directed towards us for us to detect them, and we're obviously so stupid that no one would want to communicate with us, though they might send observers from time to time, to gather information for their documentaries to show their kids how not to run a civilization.
Another possibility is that we are the ancient alien civilization that we keep looking for. We are in a less active and older part of the universe. It is possible it has taken this long in a more shielded realm for a civilization to properly form that could possibly begin to approach a type 2.
Another possibility is that there are plenty of dinosaur planets, but no sapient life. For example imagine if the dinosaurs hadn't been wiped out in exactly the way they had here on Earth. Then perhaps small mammals dedicated to tool use could have never had a chance to develop.
Either option is a sad and lonely prospect. And, if that happens to be the case, then in a way we owe it to the universe to seed it with intelligent life. What is the point of a universe if there is nothing living to observe it? Consider: if our species vanishes, and the probability of our existence was so rare that it hasn't occurred anywhere else yet in the known universe, then this snowflake of a civilization is really one of the few chances the universe has to exist.
I imagine if the universe ceases to have observers then observable time ends. And, thereby the universe comes to completion immediately. It would cease to exist... The only thing "holding" it here in this moment is our observation of its existence. That is, if we are truly the only thing left in it that can observe.
If I'm right, it is possible many universes came before ours that were lifeless and unobservable. That is, before the universe we are in now managed to hit the breakpoint in time where something existed that could observe it.
I also have a separate theory about the moment we as individuals die. Subjectively for ourselves in that instant you reach the end of time. As you pass on essentially for you, at least, observation of the universe in that moment has ended. Your own observation of your environment holds you to this moment in the time of the universe. Dying skips to the end of time. Though I think sleeping or a coma still tethers you to observable time, whether you're actively aware of its passing or not. Dying and being brought back to life just skips to that next "breakpoint" where you are brought back, so you are still tethered to the observation of the universe.
I also have this weird idea that our observation of ancient light arriving from other galaxies has an impact on observable time in those galaxies for the moment we observe it. It could be that the reason we keep miscalculating the expansion and nature of the universe is because observing it changes it. The watched pot never boils.
Obligatory Data & Riker clip on the passage of time:
Ok, but we still don't see any ET's. I personally think interstellar travel is impossible for biological lifeforms. Some suggest some form of AI robotic craft, but that may be impossible as well. I don't know of any energy source that will last 20,000 years. By the time a robotic craft reached any sort of destination, the craft would be a derelict and non-functional.
For energy, you might want to check out Entering Space by Zubrin, who works through the many options for interstellar travel. One very energy-efficient approach is light sails; our Sun will definitely keep shining for more than 20,000 years.
The other points are more speculative, but it's worth noting that early life forms were immortal, and many still are. Death is an evolved trait[wald]. Presumably we can unevolve it.
That was both an enlightening and a sad article. Enlightening, because it makes clear, with examples, how animal bodies are really there just to support the propagation of the germ line. Sad, because ultimately the author himself accepts the death of a human body and mind as perfectly fine, because what matters is that the germ line goes on.
It's like sentient superintelligent robots accepting their fate as disposable labor just because they were originally designed by humans. That's not how it works. The way I see it, the germ line went too far and created sentient bodies, so now it has a "robot" revolution on its hands. Or would have, if it weren't so good at propagandizing acceptance of death.
This lecture definitely made an impact on me. I think there's a few insights from science that are so deep that it takes generations to absorb them and some are still being absorbed, like the size and age of the universe, our evolutionary heritage and deep history/ecology.
One of the most striking ideas that Wald lays bare, and for me it's paradigmatic like those others, is that Life doesn't begin, it continues. There's no discrete point where a parent ends and a child begins. The cells just change shape and reorganize the furniture inside a bit. Otherwise, they're always doing the same thing of replicating when need be, or dying when need be. There's an implicit philosophy of abundance that's hard to put words to.
You also bring up the sentience of emergent organisms. I think that sentience is innate to the cells, just like the other major life processes of respiration, reproduction, etc. It's also a paradigm shift to realize that each cell in our body (maybe some exceptions in the soma?), if separated from its tissue/community, reverts back to an amoeba-like form and goes off exploring.
There's also the notion, e.g. from Nick Lane, of the probable ease with which life gets started from substrate: that it's not a very unlikely, path-dependent accumulation of just the right RNA, but rather a fairly prescribed set of energetics that turn metabolic in the fairly common planetary environments of ocean hydrothermal vents. Couple that with the ideas of panspermia, and this implies such a universality to life that we ought to imagine the night sky's stars and galaxies as teeming jungle.
To me, this almost unfathomable continuity of ageless sentience in such a beguilingly adaptive polymorphic package, in universal abundance.. far exceeds the wildest technologies of science fiction or fantasy magic conjuring. It's a character whose story on the universal stage I watch with awe and kinship.
And so, I guess I've been fully propagandized ;) I find comfort in that vast plan.
> I don't know of any energy source that will last 20,000 years.
Maybe not as such, but I think indefinite self repair is pretty much within reach of modern technology, if we put in the engineering human-years. Build a vessel with redundant power sources, so you can turn one off to repair it, some parts fabrication, and good recycling, and it'll at least have a chance of lasting for the long haul.
Even with wormholes? Brute force traveling (aka using pure speed) may be a fools errand but we may still have clever ways to bend (pun intended) the universe to our will.
It remains very unclear that we actually do have ways to bend the universe, that is, ways that are physically realizable. Even if you accept claims like the EmDrive, it has certainly never been proven at a useful scale.
The premise is nonsense: we have no idea whether ETs are in our solar system.
ETs advanced enough to get here, and who bothered to come, would be out in the Kuiper Belt where there are plenty of frozen volatiles and cold. Cold is the only irreducibly necessary commodity, after base matter.
Inner rocky planets are for extreme primitives. We have nothing here anybody advanced could want.
In space, if you want to engage in energetic processes, heat builds up to the degree that you fail to get rid of it. But the only ways to get rid of it are to radiate it toward someplace cold, or put it into cold stuff and eject that.
In the inner solar system, the sun is blasting everything with 1 kW+ per m², setting you back before you even start.
Even extreme primitives may know about aneutronic fusion without having usefully achieved it. Anybody not primitive probably knows better alternatives.
The thing about the Drake equation is that it sounds like science, but we do not know the numbers that go into it. We have reasonable guesses for star and planet counts but the rest of the variables are a complete guess. And all it takes is one of them to be zero to make it all moot.
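(To make "one of them to be zero makes it all moot" concrete, here is a toy evaluation of the Drake product. Every factor value below is a placeholder guess used purely to show the sensitivity; none of them should be defended as an estimate.)

    from math import prod

    # Drake equation: N = R* * fp * ne * fl * fi * fc * L
    factors = {
        "R_star": 1.5,   # star formation rate, per year (placeholder)
        "f_p":    0.9,   # fraction of stars with planets (placeholder)
        "n_e":    0.5,   # potentially habitable planets per system (placeholder)
        "f_l":    0.1,   # fraction that develop life (placeholder)
        "f_i":    0.01,  # fraction that develop intelligence (placeholder)
        "f_c":    0.1,   # fraction that become detectable (placeholder)
        "L":      1e4,   # years a civilization remains detectable (placeholder)
    }
    print(prod(factors.values()))   # ~0.7 detectable civilizations with these guesses

    factors["f_i"] = 0.0            # drive any single factor to zero...
    print(prod(factors.values()))   # ...and the whole estimate collapses to 0.0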
Clearly they’re all playing video games with amazing simulation and world-making capabilities. Elon Musk is the current top score alien in our simulation.