I certainly still hold those opinions, because the models have yet to prove they're worth a person's time. I don't bother posting that because there's no way an AI hype person and I are ever going to convince each other, so what's the point?
The skeptics haven't evaporated, they just aren't bothering to try to talk to you any more because they don't think there's value in it.
And what about everything else in ML progress, like image generation, 3D world generation, etc.?
I vibe coded plenty of small things I never had the time for. Is there nothing you've wanted to build that fits in a single-page HTML application? It can even use local storage, etc.
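For instance, here is a minimal sketch of the kind of throwaway single-page tool I mean, assuming a browser with localStorage; the storage key and function names are made up for illustration:

    // Hypothetical example: a tiny localStorage-backed todo list,
    // the sort of thing that fits in one HTML page.
    const KEY = "todos"; // storage key chosen for this sketch

    function load(): string[] {
      // Read the saved list, or start empty if nothing is stored yet.
      return JSON.parse(localStorage.getItem(KEY) ?? "[]");
    }

    function save(items: string[]): void {
      // Persist the list so it survives page reloads.
      localStorage.setItem(KEY, JSON.stringify(items));
    }

    function add(item: string): void {
      const items = load();
      items.push(item);
      save(items);
    }

Wire `add` to a button and render `load()` into a list, and the whole app lives in one file with no backend.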
This is why they don't talk to you anymore. The only comparison you can make to a flat earther is that you think they're wrong, and flat earthers are also wrong. It's just dumb invective, and people don't like getting empty insults. I prefer my insults full.
The earth is flat until you have evidence to the contrary; it's on you to provide that evidence. We had physics, navigation, and then space shuttles that clearly showed the earth is not flat.
We have yet to see a fully vibe-coded piece of software that actually works. The blog post is actually great, because LLMs are very good at regurgitating pieces of code that already exist from a single prompt. Now ask them to make a few changes and suddenly the genie is back in the bottle.
Something doesn't math out. You can't be both a genius and extremely dumb at the same time. You can, however, be good at retrieving information and presenting it in a better way. That's what LLMs are, and I'm not discounting the usefulness of that.
It’s good at quickly producing a lot of code that will most likely give interesting results, and it’s completely unaware of anything, including why a human might want to produce code.
The marketing bullshit that it’s "thinking" and "hallucinating" just brings the intended confusion to the table.
They are great tools for many purposes. But a GPS is not a copilot, and an LLM is not going to replace coworkers where their humanity matters.
I mean, is it really that interesting if it completely falls flat and runs in endless, unfulfilling circles around even mild complexity as you get further along in solving the problem, making it hard not to feel like you need to just do it yourself?
For one thing, it’s far more interesting than a rubber duck in many cases. Of course, in the end it’s about framing the representation adequately and entering into a fictional dialog.
The original post alone mentions multiple projects and links https://pine.town as one with no code directly written by the author.
From the perspective of using it daily myself and seeing what my team uses it for, it's quite shocking to still see these kinds of comments. It's like we're living on different planets - which, again, gives off a flat-earther vibe.
We're living in such interesting times - you can talk to a computer and it works, in many cases at an extraordinary level - yet you still see intellectually constipated opinions arguing against basic facts established years ago - incredible.
It has been an interesting experience, like trolling except you actually believe what you're saying. I wonder how you arrived at it - is it fear, insecurity, ignorance, a feeling of injustice, or maybe something else? I wonder what bothers you about LLMs?