I agree, and I would prefer to see concrete examples of LLMs being used productively and profitably in the industry in a "disruptive" manner--putting people out of work, etc--before we conclude they're somehow the next big thing. Basically, before claiming LLMs (or generative techniques, more generally) mean that we're on the doorstep of "general" intelligence, show me the door!
The outline of that door might look like industrial adoption of these things for solving some actual problem other than the entertainment value of typing things into the box and seeing what comes out the other side. But so far, as far as I can tell, nobody's actually doing this?
I am a programmer and I use GPT occasionally, and I even pay 20 bucks a month (for now), but even for my job it's not a world-shattering improvement.
> ... the entertainment value of typing things into the box and seeing what comes out ...
I would only add that in a consumer society like ours, entertainment is important. Changes to entertainment seem to have, like, weird ripple effects. Not the knock-down economic disruptions that AI is promising, but I kind of think LLMs are just going to make our culture weirder. I can't anticipate how, but having a bunch of little LLM-powered daemons buzzing around the internet is just gonna be freaky.
> I am a programmer and I use GPT occasionally, and I even pay 20 bucks a month (for now), but even for my job it's not a world-shattering improvement.
I am also a programmer, and when I think about the amount of time I actually spend typing out code, even on a great day where all the stars have aligned just right and I can really bang out some code that's like... idk, 30-50% of my time? Usually it's much less, and I'm doing things like reading documentation, reading code, talking to people, etc. So it's hard to imagine Copilot or whatever making me much more effective at my job, as it can really only help with a fraction of it.
I could see someone making the assumption that being able to delegate programming tasks to a robot assistant might make them more productive, but often I find that I don't really understand a problem fully until I'm in the weeds solving it--by which I mean I haven't specified it completely until I've finished the implementation and written the tests. So I don't know to what extent being able to specify and delegate would really help me be more productive.
> having a bunch of little LLM-powered daemons buzzing around the internet is just gonna be freaky.
Yeah, but they're not super cheap, so they need to get actual work done; otherwise there's no reason to run them. Unlike blockchains, they don't have a pyramid scheme holding them up.