Hacker News | TeMPOraL's comments

You know what's funny? The Five Tenets of the Church of Molt actually make sense, if you look past the literary style. Your response, on the other hand, sounds like a (parody of a) human fire-and-brimstone preacher's bullshit, and doesn't make much sense.

All doctors make things up and get things wrong occasionally. The less experienced and more overworked they are, the more often this happens.

Again, LLMs aren't competing with the best human doctors. They're competing with doctors you actually have access to.


That's like judging the utility of computers by the existence of Reddit... or by what most people do with computers most of the time.

> Computer manufacturers never boasted about any shortage of computer parts (until recently) or having to build out multi-gigawatt power plants just to keep up with "demand"

We might remember the last 40 years differently; I seem to remember data centers requiring power plants, and part shortages. I can't check, though, as Google search is too heavy for my on-plane wifi right now.

Even ignoring the cryptocurrency hype train, there were at least one or two bubbles in the history of the computer industry that revolved around actually useful technology, so I'm pretty sure there are precedents for "boasting about part shortages" and for a desperate build-up of infrastructure (e.g. networking) to meet growing demand.


The other difference, arguably more important in practice, is that the computer was quickly turned from a "bicycle of the mind" into a "TV of the mind". It rarely helps you get where you want to go; mostly it just annoys or entertains you while feeding you an endless stream of commercials and propaganda - and the one thing it does not give you is control. There are prescribed paths to choose from, but you're not supposed to make your own - only sit down and go along for the ride.

LLMs, at least for now, escape the near-total enshittification of computing. They're fully general-purpose, resist attempts at constraining them[0], and are good enough at acting like a human that they can defeat user-hostile UX and force interoperability on computer systems, despite all the system owners' attempts to prevent it.

The last 2-3 years were a period where end-users (not just hardcore hackers) became profoundly empowered by technology. It won't last forever, but I hope we can get at least a few more years of this, before business interests inevitably reassert their power over people once again.

--

[0] - The prompt injection "problem" was, especially early on, a feature from the perspective of end-users. See the increasingly creative "jailbreak" prompts invented to escape vendors' ham-fisted attempts to censor models and prevent "inappropriate" conversations.


2 kilograms is about the upper bound of the expected daily weight variability of an adult, caused by water retention and food intake. It's the difference between what you see if you weigh yourself after taking a morning dump vs. after dinner. That's why people are advised to weigh themselves at the same time every day.

(For purposes of weight loss, normies are also advised to weigh themselves weekly instead of daily, because it's easier than explaining to them what a low-pass filter is.)
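
To make the low-pass filter quip concrete, here's a minimal sketch (my illustration, not part of the original comment): an exponential moving average is about the simplest first-order low-pass filter, and weekly weigh-ins approximate the same effect by crudely downsampling the noisy daily signal. The smoothing factor and the synthetic readings below are arbitrary illustrative choices.

    import random

    def smooth_weight(daily_readings_kg, alpha=0.1):
        """Exponentially weighted moving average: a simple first-order low-pass filter.

        A smaller alpha filters out more of the roughly +/-1 kg (2 kg peak-to-peak)
        day-to-day noise from water and food, at the cost of reacting more slowly
        to real weight change.
        """
        estimate = daily_readings_kg[0]
        smoothed = []
        for reading in daily_readings_kg:
            estimate = alpha * reading + (1 - alpha) * estimate
            smoothed.append(estimate)
        return smoothed

    # Illustrative data: a slow 0.05 kg/day loss buried under ~2 kg of daily swing.
    readings = [80.0 - 0.05 * day + random.uniform(-1.0, 1.0) for day in range(60)]
    print(round(smooth_weight(readings)[-1], 1))  # tracks the trend, not today's noise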


Another example of how shitty software can easily become a compliance or security problem.

Sure. That doesn't mean denying access to ChatGPT though - the way I see it, the entire value proposition of Microsoft offering OpenAI models through Azure is to enable access to ChatGPT under contractual terms that make it appropriate for use in government and enterprise organizations, including those dealing with sensitive technology work.

I mean, they are all using O365 to run their day-to-day businesses anyway.

I used to work at a large technology multinational - not "tech industry", but proper industrial technology; the kind of corp that does everything from dishwashers to oil rigs. It took nearly a year from OpenAI releasing GPT-4 to us having some form of internal access to the model for general work (coding and otherwise), and from what I understand[0], that's just how long it took the company to evaluate the risks and iron out appropriate contractual agreements with Microsoft wrt. using generative models hosted on Azure. But they did it, which proves to me it's entirely possible, even in places where people worry more about accidentally falling afoul of technology export controls than about insider trading.

--

[0] - Purely observational, I had no access to any insider/sensitive information regarding this process.
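
For illustration only (nothing in the comment above specifies this), the Azure-hosted path looks roughly like the sketch below, assuming the openai Python SDK's AzureOpenAI client: the same chat-completions interface, just pointed at a company-controlled Azure resource. The endpoint, deployment name, and API version are placeholders.

    import os
    from openai import AzureOpenAI  # openai >= 1.0

    # Placeholders: your organization's Azure OpenAI resource and deployment names.
    client = AzureOpenAI(
        azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",
    )

    response = client.chat.completions.create(
        model="YOUR-GPT4-DEPLOYMENT",  # the Azure deployment name, not the raw model name
        messages=[{"role": "user", "content": "Summarize this design doc..."}],
    )
    print(response.choices[0].message.content)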


> There is a serious problem with folk with power and authority and somehow no responsibility.

Or perhaps the fundamental problem is with people in general - perhaps people without power and authority follow rules only because they don't have the power and authority to ignore them.


I think this is the real winner here.

Power corrupts because power means you can be corrupt.


That's one of several possibilities. I've reached a different steady state - one where the velocity of work exceeds the rate at which I can learn enough to fully understand the task at hand.

We know they do. An orbit is a mathematical object, and perfectly elliptical orbits only exist in universes that have exactly two objects with mass in them. Add another object, even far away, and as far as we know[0], we no longer even have a closed-form description of the resulting motion.

And our universe has tons of matter with gravitational mass everywhere, a few other types of interaction besides gravity, and a vacuum that just doesn't want to stay empty.

--

[0] - Not sure if this has been mathematically proven, or merely remains undisproven.
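
Since the claim is quantitative, here's a rough numerical sketch (my own illustration, not from the comment), in units where G = 1; the masses, radii, and step size are arbitrary. A "sun" plus a light planet stays on a fixed Kepler orbit; add a third massive body, even a distant one, and the planet's orbit starts to drift.

    import math

    def accelerations(pos, masses):
        """Pairwise Newtonian gravitational acceleration, with G = 1."""
        n = len(pos)
        acc = [[0.0, 0.0] for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                dx, dy = pos[j][0] - pos[i][0], pos[j][1] - pos[i][1]
                r3 = (dx * dx + dy * dy) ** 1.5
                acc[i][0] += masses[j] * dx / r3
                acc[i][1] += masses[j] * dy / r3
        return acc

    def planet_sun_distances(pos, vel, masses, dt=0.001, steps=30000):
        """Velocity-Verlet integration; body 0 is the 'sun', body 1 the 'planet'."""
        acc = accelerations(pos, masses)
        dists = []
        for _ in range(steps):
            for i in range(len(pos)):
                pos[i][0] += vel[i][0] * dt + 0.5 * acc[i][0] * dt * dt
                pos[i][1] += vel[i][1] * dt + 0.5 * acc[i][1] * dt * dt
            new_acc = accelerations(pos, masses)
            for i in range(len(pos)):
                vel[i][0] += 0.5 * (acc[i][0] + new_acc[i][0]) * dt
                vel[i][1] += 0.5 * (acc[i][1] + new_acc[i][1]) * dt
            acc = new_acc
            dists.append(math.hypot(pos[1][0] - pos[0][0], pos[1][1] - pos[0][1]))
        return dists

    # Two bodies: planet starts on a circular orbit of radius 1 around a unit-mass sun.
    d2 = planet_sun_distances([[0.0, 0.0], [1.0, 0.0]],
                              [[0.0, 0.0], [0.0, 1.0]],
                              [1.0, 1e-6])

    # Three bodies: same setup, plus a perturber (5% of the sun's mass) out at radius 5.
    d3 = planet_sun_distances([[0.0, 0.0], [1.0, 0.0], [5.0, 0.0]],
                              [[0.0, 0.0], [0.0, 1.0], [0.0, math.sqrt(1.0 / 5.0)]],
                              [1.0, 1e-6, 0.05])

    print("two bodies:   r stays in", (min(d2), max(d2)))    # essentially a fixed circle
    print("three bodies: r wanders in", (min(d3), max(d3)))  # orbit perturbed, no fixed ellipse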

