> I think the real gap in computer languages wrt LLMs is a replacement for python as a "notebook" language that the LLM uses to solve ad hoc problems during a chat.
hey, I found this project in December '23, and you just commented "amazing one shot that" on another thing I posted. I'll give you an invite if you want (because it also does that); check my bio, I'll add contact details now...
it was posted to this site earlier, about 20 days ago, and front-paged. Hilariously, about half the comments were shooting it down; the top-voted comment was even "this is the worst website ever" lol xD. They've since gone invite-only to manage abuse (it's a very capable service and currently free).
It's capable of what you just mentioned, and it made the other site you said was an amazing one-shot (I literally cut and pasted the comment into the prompt; the second prompt was "Good, now do it better").
My city (Sydney) is known for having a huge water supply, and the council is now talking about shortages and the need for planning, as they expect the data centre in one block to be using 25% of it because of AI tools https://www.abc.net.au/news/2025-08-27/ai-to-take-up-one-qua...
As I understand it, a person with psychosis is someone who has over-weighted perceptions that cannot be corrected by sensory input. Hence, to "bring someone to their senses".
I've seen, and thought there might be, a few programmers with a related condition (not psychosis), an "AI mania": one thinks one is changing the world and is uniquely positioned to do so. It's not that we're not capable in our small ways, with network effects, or that a hand's touch couldn't begin the wave breaking (hello surfers!). What distinguishes this capacity for small effects having big impacts from the mania version is that the mania version bears no subtle understanding of cause and effect. A person who is adept at understanding cause and effect usually knows some simple, beautiful, refined rule of thumb about it: "less is more". Mania, on the other hand, proliferates outward, concept upon concept upon concept, and these concepts are removed from that cause and effect, from playful interaction with nature. As a wise old sage hacker once said: "Proof of concept, or get the fuck out".
Where mania differs from the grounded capacity that does change the world is in the aspect of responsibility: being able to respond, or be responsive, to something. Someone who changes the world with their maths equation will be able to be responsive and responsible, and to claim the result; with a manic person there's a disconnect. Actually, from certain points of view, with certain apps or mediums that claim universality and claim to show "how the world is", it looks like some titans of industry and powerful people are most definitely already inside the feedback loop of mania.
They should just go for a walk. Call a childhood friend. Have a cup of tea. Come to their senses (literally).
it's good to remember we're just querying a data structure.
If you have racing thoughts and some magic system responds to you, and it's abstract enough (plenty of people, even on HN, do not know how LLMs work), then going for a walk is not enough...
I recently rewatched "The Lawnmower Man" [0] and was not disappointed. The vast majority of the comments I see promoting the notion of "AI" achieving AGI sound like the Jobe Smith character from the movie.
LLM/"reasoning" models will not manifest "AGI" anytime soon, as it would take 75% of our galaxy energy output to bring the error rate to average human levels.
However, several Neuromorphic computing projects look a lot more viable, and may bring the energy consumption needs down by several orders of magnitude. =3
Humans don’t exist as standalone individuals, and a lot of our experience is shaped by being in a larger graph of people. I suspect it’s under-appreciated how social our perception is: in order to tell each other about X, we need to classify reality, define X and separate it from non-X; and because we (directly or indirectly) talk to each other so much, because we generally don’t just shut off the part of ourselves that classifies reality, the shared map we have for the purposes of communication blends with (perhaps even becomes) our experience of reality.
So, to me, “to bring someone to their senses” is significantly about reinforcing a shared map through interpersonal connection—not unlike how before online forums it was much harder to maintain particularly unorthodox[0] worldviews: when exposure to a selection of people around you is non-optional, it tempers even the most extreme left-field takes, as humans (sans pathologies) are primed to mirror each other.
I’m not a psychologist, but likening chatbot psychosis to an ungrounded feedback loop rings true, except I would say human connection is the missing ground (although you could say it’s not grounded in senses or experience by proxy, per above). Arguably, one of the significant issues of chatbots is the illusion of human connection where there’s nothing but a data structure query; and I know that some people have no trouble treating the chat as just that, but somehow that doesn’t seem like a consolation: if treating that which quite successfully pretends to be a natural conversation with a human as nothing more than a data structure query comes so naturally to them, what does it say about how they see conversing with us, the actual humans around them?
[0] As in, starkly misaligned with the community—which admittedly could be for better or for worse (isolated cults come to mind).
> if treating that which quite successfully pretends to be a natural conversation with a human as nothing more than a data structure query comes so naturally to them, what does it say about how they see conversing with us, the actual humans around them?
You are suspicious of people who treat an AI chatbot as what it is, just a tool?
As the saying goes: if it fires together, it wires together. Is it outlandish to wonder whether, after you create a habit of using certain tricks (including lies, threats of physical violence[0], etc.) whenever your human-like counterpart doesn’t provide required output, you might use those with another human-like counterpart that just happens to also be an actual human? Whether one’s abuse of p-zombified perfect human copies might lead to a change in how one sees and treats actual humans, which are increasingly no different (if not worse) in their text output except they can also feel?
I’m not a psychologist so I can’t say. Maybe some people have no issues treating this tool as a tool while their “system 2” is tirelessly making sure they are at all times mindful whether their fully human-like counterpart is or is not actually human. Maybe they actually see others around them as bots, except they suppress treating them like that out of fear of retribution. Who knows, maybe it’s not a pathology and we are all like that deep inside. Maybe this provides a vent for aggression and people who abuse chatbots might actually be nicer to other humans as a result.
What we do know, though, is that the tool mimics human behaviour well enough that possibly even more other people (many presumably without diagnosed pathologies) treat it as very much human, some to the point of having [a]romantic relationships with it.
It's an interesting line of thought, but people are generally able to contextualize interactions. The classic example is that regularly being violent in video games does not translate to violence in other contexts.
1. When you speak of context, note that a game is play (“as-if”), while for many people interacting with chatbots is presumably life (“is”). Humans can occasionally seem to be pretty awful to each other in playful context, but be still friends.
2. If somebody came up with a game in which your experience of murdering a human mimics reality as successfully as a modern LLM chatbot mimics interacting with a human, I think that game might be somewhat more controversial than GTA V or Call of Duty.
That link is hidden. No, I am not signing up to whatever site that is, because it breaks the web and obviously wants to live rent-free on open standards.
I publish all my posts on Threads/X/Bluesky/Mastodon because I have to meet my customers where they are, but Mastodon is the preferred platform that I point everyone to for open standards reasons.
(if a moderator doesn't mind updating the link, that'd be great)
yeah, that's what I was thinking: "ah, how cute, it's the ops team from a state" lol. But probably not. I didn't look into it / not interested, but I'm guessing there's an existing infosec consultancy behind it that does sometimes work for those kinds of places, or banks etc.