I have no idea what agents are for, could be my own ignorance.
That said, I have been using LLMs for a while now with great benefit. I did not notice anything missing, and I am not sure what agents bring to the table. Do you know?
You are a manual agent for LLMs when you use things like ChatGPT: you go through a workflow loop as you investigate and consult the model. Agents are just trying to automate that workflow against an LLM. It's basically scripting. Scripting these LLMs is where we all want to go, but context window length is a limiting factor, as is the cost of inference on any sizable window.
I'll manage my whiny emotions over the term Agents, but you'll have to hold a gun to my head before I embrace "Agentic", which is a thoroughly stupid word. "Scripted workflow" is what it is, but I know there are some true "visionaries" out there ready to call it "Sentient workflow".
Agents, besides tool use, also have memory, can plan work towards a goal, and can, through an iterative process (Reflect - Act), validate if they are on the right track.
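To make the reflect-act idea concrete, here's a minimal, runnable Python sketch of that loop. The `call_llm` function is a hypothetical stand-in for a real model API (it's stubbed here so the control flow is visible); a real agent would send its memory to the model and act on its response.

```python
# Minimal sketch of an agent's reflect-act loop.
# `call_llm` is a stub standing in for a real LLM API call.

def call_llm(prompt: str) -> str:
    # Stub: pretend the model declares the goal met once the
    # agent's memory contains two observations.
    return "DONE" if prompt.count("observation:") >= 2 else "CONTINUE"

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    memory: list[str] = []  # the agent's working memory of past steps
    for step in range(max_steps):
        # Act: take an action (here, just record an observation).
        memory.append(f"observation: step {step} toward {goal}")
        # Reflect: ask the model whether we're on track / finished.
        if call_llm("\n".join(memory)) == "DONE":
            break
    return memory

history = run_agent("summarize inbox")
print(len(history))  # the stub stops after 2 steps
```

The `max_steps` cap matters: without it, a loop that never reflects its way to "DONE" runs forever (or until the token budget is gone).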
If an agent takes Topic A and goes down a rabbit hole all the way to Topic Z, you'll see that it can't incorporate or backtrack to Topic A without losing a lot of detail from the trek down to Topic Z. It's a serious limitation right now from the application-development side of things, but I'm just reiterating what the article pointed out: you need to work with shorter workflows that aren't as ambitious as covering everything from A to Z.
Not a disagreement with you but wanted to further clarify.
I do think it's a step up when done correctly. Thinking of tools like Cursor. Most of my concern comes from the number of folks I have seen trying to create a system that solves everything. I know in my org people were working on agents without even having a problem they were solving for. They are effectively trying to recreate ChatGPT, which to me is a fool's errand.
What do agents provide? Asynchronous work output, decoupled from human time.
That’s super valuable in a lot of use cases! Especially because it’s a prerequisite for parallelizing “AI” use (1 human : many AI).
But the key insight from TFA (which I 100% agree with) is that the tyranny of sub-100% reliability compounded across multiple independent steps is brutal.
Practical agent folks should be engineering for risk/reliability instead of the happy path.
And there are patterns and approaches to do that (bounded inputs, pre-classification into workable/not-workable, human in the loop), but many teams aren't looking at the right problem (risk/reliability) and therefore aren't architecting for those methods.
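A sketch of one of those patterns, pre-classification with a human-in-the-loop fallback: score each incoming task, let the agent handle only the confidently workable ones, and route the rest to a person. The `classify` function here is a hypothetical stub; a real system would use a model or a trained classifier.

```python
# Risk-aware gating: pre-classify inputs as workable or not, and route
# low-confidence cases to a human instead of the agent's happy path.

def classify(task: str) -> float:
    # Hypothetical stub confidence score; a real system would call a model.
    return 0.95 if "invoice" in task else 0.40

def route(task: str, threshold: float = 0.8) -> str:
    confidence = classify(task)
    if confidence >= threshold:
        return "agent"         # workable: let the agent handle it
    return "human_review"      # not confidently workable: human in the loop

print(route("process invoice #123"))  # → agent
print(route("weird edge case"))       # → human_review
```

The point of the gate is that the agent's error rate only applies to the subset of traffic it actually sees, which is how you keep compounding failure off the hard cases.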
And there’s fundamentally no way to compose 2 sequential 99% reliable steps into a 99% reliable system with a risk-naive approach.
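The compounding is easy to see numerically: per-step reliability multiplies across independent sequential steps.

```python
# Reliability of n independent sequential steps, each 99% reliable.
p = 0.99
for n in (2, 5, 10, 20):
    print(n, round(p ** n, 4))
# Two 99% steps already give ~98.0%; twenty give ~81.8%.
```

So even a "pretty reliable" per-step agent degrades fast as workflows get longer, which is exactly why fewer-step workflows fare better.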
I updated a Svelte component at work, and while I could test it in the browser and see it worked fine, the existing unit test suddenly started failing. I spent about an hour trying to figure out why the results logged in the test didn't match the results in the browser.
I got frustrated, gave in, and asked Claude Code, an AI agent. The tool-call loop is something like: it reads my code, then looks up the documentation, then proposes a change to the test, which I approve; then it re-runs the test, feeds the output back into the AI, re-checks the documentation, and proposes another change.
It's all quite impressive, or it would be if at one point it hadn't randomly said "we fixed it! The first element is now active" -- except it wasn't. Claude thought the first element was element [1], when of course the first element in an array is [0]. The test hadn't even actually passed.
An hour and a few thousand Claude tokens my company paid for, and we got nothing back, lol.
A friend of mine set up a cron job coupled with the Claude API to process his email inbox every 30 minutes and unsubscribe/archive/delete as necessary. It could also be expanded to draft replies (I forget if his does this) and even send them, if you’re feeling lucky. I’m pretty sure the AI (I’m guessing Claude Code in this case) wrote most or all of the code for the script that does the interaction with the email API.
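The details of that script aren't given, but the shape of it is roughly this: a periodic job pulls messages, asks the model for a verdict per message, and applies the action. This Python sketch stubs both the mailbox and the model call (`classify_email` stands in for the Claude API) so the control flow is visible and runnable.

```python
# Hedged sketch of an inbox-triage loop. Both the inbox and the model
# call are stubbed; a real version would use IMAP plus an LLM API,
# driven by cron every 30 minutes.

def classify_email(subject: str, body: str) -> str:
    # Stand-in for an LLM call that returns one of:
    # "unsubscribe", "delete", or "keep".
    if "unsubscribe" in body.lower():
        return "unsubscribe"
    if "sale" in subject.lower():
        return "delete"
    return "keep"

def triage(inbox: list[dict]) -> dict[str, list[str]]:
    actions: dict[str, list[str]] = {}
    for msg in inbox:
        verdict = classify_email(msg["subject"], msg["body"])
        actions.setdefault(verdict, []).append(msg["subject"])
    return actions

inbox = [
    {"subject": "Big Sale!", "body": "50% off everything"},
    {"subject": "Newsletter", "body": "Click here to unsubscribe"},
    {"subject": "Meeting notes", "body": "See attached"},
]
print(triage(inbox))
```

Drafting or sending replies would just be another verdict branch, though as the commenter implies, auto-sending is the "feeling lucky" tier of this design.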
An example of my own, not agentic or running in a loop, but it might be an interesting use case for this stuff: I had a CSV file of old coupon codes I needed to process. Everything would start in limbo, uncategorized. Then I wanted to be able to search for some common substrings and delete those matches, and search for other common substrings and keep them. I described what I wanted to Claude 3.7 and it built out a Ruby script that gave me an interactive menu of commands like search-to-select, show all, delete selected, and keep selected. It was an awesome little throwaway script that would've taken me embarrassingly long to write. I could've done it all by hand in Excel, or at the command line with grep and such, but I think it would've taken longer.
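The commenter's script was Ruby and interactive; this is a rough Python sketch of the same idea (names are mine, not from the original): codes start in limbo, a substring search selects a batch, and the batch is then kept or deleted. It's non-interactive here so the logic is testable.

```python
# Sketch of the coupon-triage workflow: uncategorized codes in "limbo",
# substring search to select, then keep or delete the selection.

class CouponTriage:
    def __init__(self, codes: list[str]):
        self.limbo = set(codes)        # uncategorized
        self.kept: set[str] = set()
        self.selected: set[str] = set()

    def search(self, substring: str) -> int:
        # Select all limbo codes containing the substring.
        self.selected = {c for c in self.limbo if substring in c}
        return len(self.selected)

    def keep_selected(self) -> None:
        self.kept |= self.selected
        self.limbo -= self.selected

    def delete_selected(self) -> None:
        self.limbo -= self.selected

t = CouponTriage(["SUMMER10", "SUMMER20", "VIP50", "EXPIRED5"])
t.search("SUMMER")
t.delete_selected()
t.search("VIP")
t.keep_selected()
print(sorted(t.kept), sorted(t.limbo))  # ['VIP50'] ['EXPIRED5']
```

Wrapping this in an input-loop menu (or loading the sets from a CSV with the `csv` module) gets you the throwaway tool described above.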
Honestly one of the hard things about using AI for me is remembering to try to use it, or coming up with interesting things to try. Building up that new pattern recognition.
No, the fact Claude couldn't remember that JavaScript is zero-indexed for more than 20 minutes has not left me interested in letting it take on bigger tasks
The tools can be an editor/terminal/dev environment, automatically iterating: testing the changes and refining until there's a finished product, without a human developer. At least, that is what some wish of it.
Oh, okay, I understand it now, especially with the other comment that said Cursor is one. OK, makes sense. Seems like it "just" reduces friction (quite a lot).
Yeah, it's really just a user experience improvement. In particular, it makes AI look a lot better if it can internally retry a bunch of times until it comes up with valid code or whatever, instead of you having to see each error and prompt it to fix it. (Also, sometimes they can do fancy sampling tricks to force the AI to produce a syntactically valid result the first time. Mostly this is just used for simple JSON schemas though.)
Thank you, that is what my initial thought was. I am still doing things the old-fashioned way, thankfully it has worked out for me (and learned a lot in the process), but perhaps this AI agent thing might speed things up a bit. :D Although then I will learn much less.
Cursor is my classic example. I don't know exactly what tools are defined in their loop, but you give the agent some code to write. It may search your code base; it may then search online for third-party library docs. Then it comes back and writes some code, etc.