
Imho, not a matter of "if" but "when". I'm convinced that it will be a future civil rights battle, with young people largely on the "AI has rights" side and old people largely on the "AI has no rights" side.


Then you grossly misunderstand how far along AI is. AGI is not even a remote possibility with current techniques and implementations (and, I would contend, is entirely impossible with digital logic). It's just a massive amount of statistics that was computationally infeasible on available hardware until recently.

We don't have a baseline understanding of consciousness or intuition to a degree that we could even begin to replicate it.


While true, what you're saying is totally tangential to whether or not large numbers of people will treat AIs like they're conscious.

Expecting masses of people to defer to subject matter experts, contrary to what their feelings tell them, isn't a bet I would have much confidence in given the current climate.


The concept of rights also makes no sense for a machine. The main reason rights are a thing is to prevent pain and suffering, which no AGI will likely have unless it is specifically implemented.


If you want to get to AGI, AGI itself doesn't really have to be achievable with current techniques.

The only thing current techniques would have to do is build something that improves itself intelligently enough.

Then the next iteration, and the next, until you get to an iteration in which AGI is a remote possibility with the then-current techniques and implementations.

We're currently far from AGI, but I'm not sure we're far from a thing that can make a thing that makes a thing that makes something like AGI.


I still think we'd be better off making a thing that makes smarter humans.


That would be AGI by another name.


I agree with that sentiment. Also I would estimate that AI would eliminate humans long before it would or could reach the level of what humans are. So we wouldn't exist to see such a world.


> AI would eliminate humans long before it would or could reach the level of what humans are

You think that an artificial agent with less than human level intelligence could destroy humanity? Then why hasn't a deranged human (or animal) already done so?


By AGI, do you just mean artificial general intelligence, as in: capable of composing plans and deriving conclusions about the world in general, doing all the same types of reasoning tasks that humans are capable of and use to achieve outcomes?

Or do you mean being conscious?

That the latter is impossible with only "digital logic" is somewhat plausible (though I would still guess that it is possible, if far beyond our understanding).

But the former being impossible with only "digital logic" seems rather implausible to me!

Like, I endorse the claim that souls exist, but I see no reason that a soul would be required for an agent to have a model of the world we live in (not just a toy environment), and to act in the world in ways which both improve its model of the world and achieve "goals" within it (and which, when there is a trade-off between the two, balance them in some way).

Nor do I see a reason that any such agent would need to have any internal experience.

(I still think it is probably possible to make an artificial agent which does have an internal experience, but, I doubt this will ever actually happen.)

Ok, you might ask, "Why do you think those things?" First, I should ask you the same, but I will answer:

I see no fundamental obstacle to it.

The world behaves in ways which can be modeled well. The models we use are not some ineffable knowledge that can only ever be represented within a person's mind; they can be concretely expressed in artifacts like books and pdf files, and recovered from those artifacts.

If navigating the world required some kind of secret knowledge, knowledge that either couldn't be communicated at all, or could only be communicated through some special person-to-person medium that is never merely expressed in an object in the world, and if without this secret knowledge effective planning in the world were impossible, the world being too wild without it, then it would make sense that, unless we could make machines that could have this kind of secret knowledge, it would be impossible to make machines that could plan in the world and such.

But, no such secret knowledge appears to be needed when acting in the world. When one constructs a shed according to some plans, there is no ineffable secret knowledge needed for this. When one, given some desiderata, designs a plan for a shed, there is no secret knowledge needed for this either. Nor when designing a computer chip.

(by "secret knowledge" I don't mean that it would be a secret that a few people know and other people don't. I mean secret as in, cannot be shared with or expressed via anything we know of that isn't a person.)

It very much seems that the world works according to expressible rules, or at least, can be very well approximated as working according to such rules.

Expressible rules can be enumerated. They can also be interpreted mechanically, and therefore evaluated mechanically. Of course, a naive enumeration and testing would be completely impractical, but if we are talking about what is possible in principle, with no requirement that the computations be doable in practice, just that they be finite, then it seems clear that rules which describe the world well can, in principle, be discovered mechanically.
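To make the in-principle claim concrete, here's a toy sketch (Python; the miniature rule language and the example data are made up for illustration) of what "enumerate expressible rules and evaluate them mechanically" can mean. A real-world version would of course be astronomically impractical, which is exactly the caveat above:

    # Toy illustration: "rules" are small arithmetic expressions in one variable,
    # generated by increasing size and tested mechanically against observed
    # (input, output) pairs. Finite and mechanical, just hopelessly inefficient.
    from itertools import product

    def expressions(depth):
        # Enumerate expression strings built from x, small constants, + and *.
        if depth == 0:
            return ["x", "0", "1", "2"]
        smaller = expressions(depth - 1)
        exprs = list(smaller)
        for a, b in product(smaller, repeat=2):
            exprs.append(f"({a} + {b})")
            exprs.append(f"({a} * {b})")
        return exprs

    def fits(expr, observations):
        rule = eval(f"lambda x: {expr}")
        return all(rule(x) == y for x, y in observations)

    observations = [(0, 1), (1, 2), (2, 5), (3, 10)]  # pretend these came from the world
    for expr in expressions(2):
        if fits(expr, observations):
            print("found a rule that fits:", expr)  # -> (1 + (x * x))
            break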

There is no fundamental barrier.

Obviously I can't rule out that there is an undiscovered law of physics such that, if ever an AGI were about to be created, lightning would strike the area and destroy it before it was completed, and that therefore AGI is impossible, because if it ever were about to be created, this would be prevented by the lightning.

But, within our current understanding of the world, there is nothing that can be a reason it is impossible.

Any such reason would have to apply to machines but not to us.

Now, maybe if our brains work quantum mechanically in an important large-scale way, or, if our brains receive signals from beyond the physical universe (which I'm not ruling out; see: souls), these could be reasons it could be impossible to emulate a human mind using a binary classical computer, even allowing lots of slowdown. (Err, quantum mechanics can be simulated with costs exponential in the size of the system, but, if the human brain were entangled with other things in an important way, you couldn't really emulate the brain with just a classical computer, because it couldn't be entangled with the other thing.)

But, this still wouldn't be a barrier to something using just classical binary computation having models of the world and acting within it, unless those things were needed for modeling the world, which, seeing as we can communicate our models and such with words, they aren't.

(... uh... ok, so quantum teleportation does allow using entanglement, along with sending classical information, to communicate quantum information. So you might say, "Well, if two people's brains are entangled, then what if the measurements and such done in quantum teleportation are somehow encoded, in a way we don't notice, in the word choice and such that people use, and then this is subconsciously used in the other person's brain for the other half of the quantum teleportation protocol, and so quantum bits are communicated that way?" But I don't think this is plausible. There would have to be some way for the brains to renew the entanglement, which doesn't seem plausible even if brains do store quantum information, and I really don't think brains store quantum information. I only mention this to cover bases.)
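As a side note on the "costs exponential in the size of the system" claim above, the rough arithmetic looks like this (a sketch, assuming a plain state-vector simulation at 16 bytes per complex amplitude):

    # A state vector over n qubits has 2**n complex amplitudes, so classical
    # emulation is possible in principle but the memory cost grows exponentially.
    def state_vector_bytes(n_qubits, bytes_per_amplitude=16):  # complex128
        return (2 ** n_qubits) * bytes_per_amplitude

    for n in (10, 30, 50):
        print(n, "qubits ->", state_vector_bytes(n), "bytes")
    # 10 qubits is ~16 KB, 30 qubits is ~17 GB, 50 qubits is ~18 PB.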

And our reasoning about the models, the reasoning we use to make models and such, is also something we can explain.

There is no fundamental barrier. The only barriers are practical ones, things being hard, algorithms being too inefficient, not having worked out all the details of things, etc.

(That's not to say that I think AGI will ever be produced. I'm kind of trusting that God won't allow that to happen, because I think it would be likely to go very badly if it did happen. (But, I still think research into trying to figure out how to make sure that it goes well if it does happen, is good and important. "Do not rely on miracles" and all that. Perhaps His actual plan is that people solve AI safety, rather than AGI being prevented. Idk.))


But thou, O Daniel, shut up the words, and seal the book, even to the time of the end: many shall run to and fro, and knowledge shall be increased.



