
What do you mean, "exist outside of time"? They definitely don't exist outside of any causal chain: tokens follow other tokens in order.

Gaps in which no processing occurs seem irrelevant to me.

The main limitation I'd point to, if I wanted to reject the idea that LLMs are conscious, is that they're minimally recurrent, if recurrent at all.



Pseudocode for LLM inference:

    sampled_token = None
    while sampled_token != END_OF_TEXT:
        probability_set = LLM(context_list)       # forward pass: a pure function of the context
        sampled_token = sampler(probability_set)  # draw the next token from the distribution
        context_list.append(sampled_token)        # the only state carried between iterations
LLM() is a pure function. The only "memory" is context_list. You can change that list any way you like and LLM() will never know. Time is not one of its inputs.
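To make the purity concrete, here's a toy stand-in (the hash-based scoring and VOCAB_SIZE are invented for illustration, not a real model; a real forward pass is pure in the same sense, a deterministic function of the context and the frozen weights):

    import hashlib

    VOCAB_SIZE = 4  # toy vocabulary standing in for a real tokenizer's vocab

    def LLM(context_list):
        # Pure function: the distribution depends only on the context passed in.
        digest = hashlib.sha256(str(context_list).encode()).digest()
        scores = [digest[i] + 1 for i in range(VOCAB_SIZE)]
        return [s / sum(scores) for s in scores]

    a = LLM([1, 2, 3])
    LLM([9, 9, 9])      # an unrelated call in between...
    b = LLM([1, 2, 3])
    assert a == b       # ...leaves no trace: same context, same distribution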


As opposed to what? There are still causal connections between tokens, which seem sufficient. A presentist would reject the concept of multiple "times" to begin with.


An LLM is not intrinsically affected by time. The model rests completely inert until a query comes in, regardless of whether that happens once per second, once per minute, or once per day. The model is not even aware of these gaps unless that information is provided externally.

It is like a crystal that shows beautiful colours when you shine a light through it. You can play with different kinds of lights and patterns, or you can put it in a drawer and forget about it: the crystal doesn’t care either way.
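The same point in code, as a minimal sketch (this LLM() is a made-up stand-in for a forward pass, not a real model): wall-clock gaps between calls change nothing unless the clock is explicitly injected into the context.

    import time

    def LLM(context_list):
        # Stand-in for a forward pass: pure in the context, blind to the clock.
        return hash(tuple(context_list))

    a = LLM([1, 2, 3])
    time.sleep(5)                # five seconds or five days: the model can't tell
    assert LLM([1, 2, 3]) == a   # the gap is invisible...

    # ...unless someone provides it externally, e.g. a timestamp in the prompt:
    assert LLM([1, 2, 3, int(time.time())]) != a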


So what? If a human were unconscious for 100 ms every 5 seconds, would you say they were "less conscious"? Tokens are still causally connected, which feels sufficient.


If the human is killed every 5 seconds and replaced by a new human, they are indeed less conscious. The LLM doesn't even get 5 seconds; it's "killed" after its smallest unit of computation, which is also its largest. And that computation is equivalent to reading the compressed form of a giant look-up table, not something essential to its behavior in a mathematical sense.


I don't understand how this is analogous to being killed every 5 seconds as opposed to being paused. Let's call it N seconds, unless you think the length matters?

> And that computation is equivalent to reading the compressed form of a giant look-up table, not something essential to its behavior in a mathematical sense.

Sure, that's a totally separate issue though.


Because (during inference) the LLM is reset after every token. Every human thought changes the thinker, but inference leaves no mark on the model at all. From the LLM's "point of view", time doesn't exist. This is the same as being dead.
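That "leaves no mark" claim is easy to check directly. A minimal PyTorch sketch, with a single nn.Linear standing in for the transformer (an assumption for brevity; the point holds for a full model): run any number of forward passes and the weights come out bit-for-bit identical.

    import torch
    import torch.nn as nn

    model = nn.Linear(16, 16)  # stand-in for a full transformer
    snapshot = [p.detach().clone() for p in model.parameters()]

    with torch.no_grad():      # standard inference mode: no gradients, no updates
        for _ in range(100):   # 100 tokens' worth of forward passes
            model(torch.randn(1, 16))

    # Every parameter is bit-for-bit unchanged: inference left no trace on the model.
    assert all(torch.equal(p, s) for p, s in zip(model.parameters(), snapshot))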


The "time" part is what I don't get. If you want to say that "resetting and reingesting all context fresh" somehow causes a problem, that I can see. If you want to say that the immutability of the weights is a problem, okay great I'm probably with you there too. "Time" seems irrelevant.



