
I am not in Seattle. I do work in AI but have shifted more towards infrastructure.

I feel fatigued by AI. To be more precise, this fatigue has several sources. The first is that a lot of people around me get excited by events in the AI world that I find distracting. These might be new FOSS library releases, news announcements from the big players, new models, new papers. As one person, I can only work on 2-3 things in any given interval of time. Ideally I would like to focus and go deep on those things. Often, I need to learn something new, and that takes time, energy and focus. This constant Brownian motion of ideas gives a sense of progress and of "keeping up" but, for me at least, acts as a constantly tapped brake.

Secondly, there is a sentiment that every problem has an AI solution. Why sit and think, run experiments, or try to build a theoretical framework when one can just present the problem to a model? I use LLMs too, but it is more satisfying, productive, and insightful to actually think hard about and understand a topic before reaching for them.

Thirdly, I keep hearing that the "space moves fast" and "one must keep up". The fundamentals actually haven't changed that much in the last 3 years, and new developments are easy to pick up. Even if they had, trying to keep up results in very broad, shallow knowledge that one can't actually use. There are a million things going on and I am completely at peace with not knowing most of them.

Lastly, there is pressure to be strategic. To guess where the tech world is going, to predict and plan, to somehow get ahead. I have no interest in that. I am confident many of us will adapt and if I can't, I'll find something else to do.

I am actually impressed with and heavily use the models. The tiresome part now is some of the humans around the technology who engage in the behaviors listed above.





> The fundamentals actually haven't changed that much in the last 3 years

Even said fundamentals don't have much in the way of foundations. It's just brute-forcing your way with an O(n^3) algorithm plus a lot of data and compute.
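
(To make the cubic claim concrete: the O(n^3) here presumably refers to schoolbook dense matrix multiplication, which is where most of the compute goes. A minimal sketch, with purely illustrative names:)

    import numpy as np

    def naive_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
        """Schoolbook matrix multiply: three nested loops, ~2*n^3 FLOPs for n x n inputs."""
        n, k = a.shape
        k2, m = b.shape
        assert k == k2, "inner dimensions must match"
        out = np.zeros((n, m))
        for i in range(n):          # n rows of the output
            for j in range(m):      # m columns of the output
                for p in range(k):  # k multiply-adds per output entry -> n*m*k total
                    out[i, j] += a[i, p] * b[p, j]
        return out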


Brute force!? Language modeling is a factorial time and memory problem. Someone comes up with a successful method that’s quadratic in the input sequence length and you’re complaining…?
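
(A rough sketch of where that quadratic term comes from, assuming plain scaled dot-product attention; the function and variable names are just for illustration:)

    import numpy as np

    def attention(q, k, v):
        """Scaled dot-product attention for one head.
        q, k, v: (n, d) arrays for a sequence of n tokens.
        The scores matrix is (n, n), so time and memory grow quadratically in n."""
        n, d = q.shape
        scores = q @ k.T / np.sqrt(d)                   # (n, n): every token attends to every token
        scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
        return weights @ v                              # (n, d)

    # Doubling the sequence length quadruples the scores matrix:
    # n=1024 -> ~1M entries, n=2048 -> ~4M entries.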

O(n^(~2.8)) because fast matrix mult?
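
(If that's a nod to Strassen-style fast matrix multiplication: splitting each n x n matrix into four blocks and getting away with 7 block multiplies instead of 8 gives the exponent, roughly:)

    import math

    # Strassen recurrence: T(n) = 7 * T(n/2) + O(n^2)
    # so the exponent is log2(7).
    print(math.log2(7))  # ~2.807, hence the "O(n^2.8)"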

I hate scammers like many of the Anthropic employees who post every other week, "brooo we have this model that can break out of the system bro!"

"broo it's so dangerous let me tell you how dangerous it is! you don't want to get this out! we have something really dangerous internally!"

Those are the worst, Dario included there btw; he's almost a worse grifter than Altman.

The models themselves are fine, except Claude, which calls the police if you say the word boob.


Dario wishes he were the grifter Altman is. He's like a Kirkland-brand grifter compared to Altman. Altman is a generational talent when it comes to grifting.

> I am actually impressed with and heavily use the models. The tiresome part now is some of the humans around the technology who engage in the behaviors listed above.

the AI is just an LLM, and it just does what it is told to.

no limit to human greed though


I get excited by a new model release, try it, switch it to default if I feel it's better, and then move on. I don't understand why any professional SWE should engage in weird cultish behavior about these models; it's a better mousetrap, as far as I'm concerned.

It's just the old PC vs. Mac cultism. Nobody who actually has work to do cares. Much like authors obsessed with typewriters, transport companies with auto brands, etc.


