Hacker News | oidar's comments

>> To finish, I do just sort of straight up hate the idea that we're comparing this moment to the invention of electricity. It's on the face of it absurd.

Do you feel that any technology is comparable in its impact?


Most of modern medicine, by which I mean each discovery and invention in its own right, stands alongside electricity. Particularly vaccines.

AI isn’t there yet. You could turn off AI tomorrow and there’d be a shock but people would quickly switch back. You could not do the same for electricity, medicine, combustion engines (or steam engines/turbines), computers, the internet, modern building materials, etc. You try to swap back off any of those and the modern world (literally and figuratively) collapses. Turn off AI, and there’d be a financial collapse but afterwards everything would return relatively easily to an earlier way of doing things (ye know, the way from just 4 years ago, and which is still 99% of how people do things :) )


Sure, but compare this to "turn[ing] off" combustion engines a mere four years after commercial adoption rather than 162 years later (now). Back then, going back to horses wouldn't have been as big of a deal as it would be now.

I think the Internet is the more apt analogy. But even with electricity, you could have taken it away within the first couple decades of its popularity and society would have shrugged it off. Once they got used to that telegraph thing, not so much.

Yeah, I agree, but AI isn’t there yet. It’s too early to call it one way or the other. There’s plenty else that’s as important as electricity in my view, and maybe AI will join those ranks in 15 years or so when it’s gone through the hype loop and when the economy has recovered from the now-basically-inevitable AI- and war-fueled turmoil of the next decade.

That's primarily a function of the time for adoption, though, not the utility of the technology. In 20 years, people would not be able to so easily say that they could turn off AI with no impact.

That..what..no. The question was whether there are any technologies comparable to electricity, and I have put forth a number of examples. I also offered my opinion that it is too early to judge whether AI will be as significant or not.

There are loads of technologies that, despite being decades old, do not qualify. So, no, it’s not “primarily a function of time”. It absolutely is about the utility. We can only be in a position to judge utility when sufficient time has passed, and AI ain’t had enough time yet to prove its utility. Given enough time, it might prove as useful as electricity, or it might just sit alongside computer operating systems - never quite making it onto anyone’s “this changed the world” list, even if it has as much utility as an OS.


Sure. I'm just more optimistic than you are about the enduring value of AI. Time will tell.

> The fun in the job is not knowing where to place a semicolon.

If a person needs an LLM to figure out where a semicolon goes, an LLM is not going to help them code.


I don't need one to know where it goes, but it certainly is better than I am at never missing one.

I disagree with this take. I get that LLM-produced text is filled with crappy, over-the-top writing in pretty much all cases, but if a prompter/writer/blogger is using it iteratively, the LLM output is going to be way better than their own writing. Also, if a person is using LLMs to write articles, do you really want to see their likely even worse writing?

Yes, I want to see the prompts. Yes.

But I won’t promise to read it, because it’s bad writing.

So maybe it would be better to not use the LLM to draft writing that pretends to be you. That would be easier on everyone who reads.

Instead we live in a world where all of us are reading through a cynical lens.

This comment was written without using any form of AI.


Was this written by an LLM?

> This comment was written without using any form of AI.

That's exactly what ChatGPT would write if it didn't want us to think it wrote that comment!


In this ever-changing world, it pays to delve beneath the surface of a casual claim— if you know what I mean.

It's absolutely nutso that we (the users) have to guess what the actual limits are. And now they throw this into the mix. I love using Claude Code, but if they don't offer some transparency soon re: token limits (other than a status bar), ... I don't know what I'll do, but I will continue to not be happy.

Not sure I understand how passkeys verify humanity.

Bluetooth headphones too?

This is actually a really good response though, because having a device blaring demonstrates contempt for everyone around them. It's hard to act in a hateful way toward someone who just offered you something for free.


Exactly. To refuse the “gift” is an explicit statement of “I know I could do this silently but I want to bother everyone around me.”

The main issue is that SOTA LLMs can only reason one way - forwards - and can't go back and revise a prior statement. That would remove a whole lot of "it's not this, it's that" and "the big takeaway here is" and so on. Those kinds of ideas are typically at the beginning of a human writer's output structure. An LLM can't go back and edit the first paragraph, because it has to reason (whatever that means for an LLM) its way through it to get to the big idea of the paragraph/structure. I haven't played with diffusion text models enough to know whether they're a remedy for that kind of output.

When LLMs are good enough to not be detectable, what happens then? They aren't that far away atm, so it's only a matter of time until _everyone_ is assumed to be an LLM.


Ultimate user here. I assume this doesn't kill "quick answers" for pro users - which I use frequently when I need a quick summarization. For assistant use directly, I've been thinking about stepping back from Ultimate, as I use Claude for AI rubber ducking, which works better than all of the LLMs available on Assistant.


That's right:

> Kagi Assistant's web tool uses Kagi Search, and that has nothing to do with this subscription plan discussion; we're not changing anything there. The same applies to LLM-powered features in Kagi Search, like Quick Answer.


I have maxed out my Ultimate usage before, and when that happened the quick summarization tools did not function, indicating I'd hit my limit. I assume it would affect those, but that might be part of how they break it up.


What's the latency on these like for music production?


Have you seen Decker: https://beyondloom.com/decker/


There's also a great blog post explaining the creator's inspirations in HyperCard: https://beyondloom.com/blog/sketchpad.html

