Last year I used LLMs to solve AoC, to see how well they could keep up, to learn how to steer them, and to see how open models would perform. When I talk about it, quite a few "programmers" get upset. Glad to see that Norvig is experimenting.
p/s, anyone who gets upset that folks are experimenting with LLMs to generate code or solve AoC should have their programmer's card revoked.
It's quite foolish of you to make that assumption. To begin with, given my timezone and when I could get to it, I was starting 12 hours after each release, so the leaderboard was useless. I was writing about it openly on the internet, pointing out the hurdles I faced and how much effort it took to get the LLM to generate correct solutions.
> it's quite foolish of you to make that assumption
To have assumed that you didn't submit to the leaderboard? I'm trying to give you the benefit of the doubt on not ruining the competition for everyone else. You can do whatever you want on your own time.
If you were submitting, despite AoC, as I recall, specifically asking people not to submit AI solutions, then you know exactly why people were upset. If you weren't, then you're being aggressive toward me for no reason.
I enjoy reading Peter's 'Python studies' and was surprised to see a comparison of different LLMs solving Advent of Code problems here, but the linked article is pretty cool.
Peter and a friend of his wrote an article over a year ago discussing whether or not LLMs are already AGI, and after re-reading it my opinion shifted a bit toward: LLMs are AGI in broad digital domains. I still need to see embodied AI in robots and physical devices before I think we are 100% of the way there. Still, I apply Gemini and a lot of open-weight models to coding problems, and after I read or watch material on philosophy I almost always ask Gemini for a summary, references, and a short discussion based on what Gemini knows about me.
I'm sorry, but what's the point here? It's not for a job, not to improve an LLM, not to do anything useful per se, just to "enjoy" how version X or Y of an LLM can solve problems.
I don't want to sound grumpy, but it doesn't achieve anything; this is just a showcase of how a "calculator with a small probability of failure can succeed".
Move on, do something useful. Don't stop being amazed by AI, but please stop throwing it in my face.
You are conflating "hype" with any positive outlook. It has some uses and some people are using it; that's not "hype". It is exhausting to see it everywhere, though, so I sympathize.
Readers may enjoy looking into Peter Norvig and his contributions to the field, which they might find in positive contrast to any of the stereotypical LLM hype.
I will stop when the AI bubble bursts and people stop throwing statistical models in my face every day. It is literally everywhere I look; I did not ask for it, even when I filter my inputs.
IMHO it would be nice to have an AI summarizer/filter (see the irony?) for tech news (HN maybe?) that filters out everything about AI, LLMs, and the like.