Hacker News | tjchear's comments

Here’s my experience: just yesterday I had to tackle a task that would have taken a backend engineer and a frontend engineer several days, so I set several Claude Code agents to work on it autonomously. With the time freed up, I didn’t just twiddle my thumbs. I used it to read up on a topic that was making the rounds yesterday and gained a better understanding of it - something hard to do when you’re juggling both a job and raising a family. I could then reinvest what I learned in other projects.

Just my two cents. Whether you use AI or not, I’m sure you’ll gain something.


Lots of fun questions! Can you make it so that I can open each one in a new tab? Also if I navigate back to the main view I lose my scroll position.


Yes! Amazing you spotted this, I'm about to push an update, will be live in 1h max.


Okay it's done, all fixed!


Yay thank you!


If you’re using it with a local model then you need a lot of GPU memory to load up the model. Unified memory is great here since you can basically use almost all the RAM to load the model.


I take a fairly optimistic view to the adoption of AI assistants in our line of work. We begin to work and reason at a higher level and let the agents worry about the lower level details. Know where else this happens? Any human organization that existed, exists, and will exist. Hierarchies form because no one person can do everything and hold all the details in their mind, especially as the complexity of what they intend to accomplish goes up.

One can continue to perfect and exercise their craft the old school way, and that’s totally fine, but don’t count on that to put food on the table. Some genius probably can, but I certainly am not one.


But what if the AI agent has a 5% chance of adding a bug to that feature? Then again, it’s not as if every feature was completely bug-free before.


Yeah it’s all trade offs. If it means I get to where I want to be faster, even if it’s imperfect, so be it.

Humans aren’t without flaws; prior to coding assistants, I’d lost count of the times my PM told me to rush things at the expense of engineering rigor. We validate or falsify the need for a feature sooner and move on to other things. Sometimes it works, sometimes a bug blows up in our faces, but things still chug along.

This point will become increasingly moot as AI gets better at generating good code, and faster, too.


What is the chance that you add a bug?


I’ve not used duckdb before nor do I do much data analysis so I am curious about this one aspect of processing medium sized json/csv with it: the data are not indexed, so any non-trivial query would require a full scan. Is duckdb so fast that this is never really a problem for most folks?


It is true that for json and csv you need a full scan, but there are several mitigations.

The first is simply that it's fast - for example, DuckDB has one of the best csv readers around, and it's parallelised.

Next, engines like DuckDB are optimised for aggregate analysis, where your single query processes a lot of rows (often a significant % of all rows). That means that a full scan is not necessarily as big a problem as it first appears. It's not like a transactional database where often you need to quickly locate and update a single row out of millions.

In addition, engines like DuckDB have predicate pushdown so if your data is stored in parquet format, then you do not need to scan every row because the parquet files themselves hold metadata about the values contained within the file.

Finally, formats like parquet are columnar, so a query only needs to scan the data in the columns it touches, rather than reading whole rows even though you may be interested in only one or two columns.


If you are going to query it frequently then json/csv might become an issue. I think the reason it doesn't become a problem for duckdb/polars users is that we generally convert them to parquet after first read.


Zonemaps are created for columns automatically. I process somewhat large tables w/ duckdb regularly (100M rows) and never have any problems.


that's true for duckdb native tables, but the question was about json.


They said json and csv - it handles both!


Whether it handles them depends on size. But my point was that there are no zonemaps for json.


Many analytical queries require full scans of fact tables anyway, so indexes are less useful. Joins are usually to dimensional tables, which are quite small. Snowflake doesn’t use indexes at all, and it’s built for handling the largest volumes of data.

However, you wouldn’t want to use either for transaction processing, the lack of indexes would really hurt.


But when your json or csv is converted to a columnar layout, if you have say 10 columns, each column is stored separately on your disk instead of all together. So a scan of one column only needs to read a tenth of the disk space used for the data. Obviously this depends on the columns' content.


But you can have a surprisingly large amount of data before the inefficiency you're talking about becomes untenable.


Not a duckdb user, but I use polars a lot (mentioned in the article).

Depends on your definition of medium sized, but for tables of hundreds of thousands of rows and ~30 columns, these tools are fast enough to run queries instantly or near instantly even on laptop CPUs.


Visidata is in Python and has offered “real time” analytics of fixed files for a long time. Computers are stupidly fast. You can do a lot of operations within a few seconds time window.


I guess the question is: how much is medium? DuckDB can handle quite a lot of data without breaking a sweat. Certainly if you prefer writing SQL for certain things, it's a no-brainer.


My understanding is food delivery companies take a huge cut (like 30%) so restaurants are forced to raise their prices significantly or risk losing customers. Even with that cut, food delivery customers still have to pay a significant delivery/service fee.
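The arithmetic behind that gross-up, with hypothetical numbers (the 30% commission and the $10 dish are assumptions, not quoted rates):

```python
# Hypothetical: the platform takes 30% of the listed menu price.
commission = 0.30
needed_net = 10.00  # what the restaurant wants to keep per dish

# To net the same amount after the cut, the listed price is grossed up:
listed_price = needed_net / (1 - commission)
print(round(listed_price, 2))  # -> 14.29, roughly a 43% markup
```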


iirc there was a startup doing the same thing called sourcetable? How does pane compare to it?


The author makes a good point about language capabilities enabling certain libraries to be written, just as DSLs make it easier to reason about problems and implement solutions with the right kind of abstractions and language ergonomics (usually at the expense of expressivity and flexibility).

There was a time in my life when I designed languages and wrote compilers. One type of language I’ve always thought could be made more approachable to non-technical users is an outline-like language with English-like syntax. Being a DSL, the shape of the outline would be fixed and on guardrails, unable to express arbitrary instructions like normal programming languages, but an escape hatch (to a more expressive language) could be provided for advanced users. An area where this DSL could be used is common admin portal generation and workflow automation.
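A toy sketch of what I mean, entirely hypothetical (the directives and the parser are invented for illustration):

```python
# A toy outline DSL: indentation defines nesting, and each line is an
# English-like directive a generator would act on.
SOURCE = """\
page Users
  table from users
    column name
    column email
  button "Export CSV"
"""

def parse(text):
    """Parse the outline into a nested (directive, children) tree."""
    root = ("root", [])
    stack = [(-1, root)]  # (indent level, node)
    for line in text.splitlines():
        if not line.strip():
            continue
        indent = len(line) - len(line.lstrip())
        node = (line.strip(), [])
        # Pop back to this line's parent, then attach the new node.
        while stack[-1][0] >= indent:
            stack.pop()
        stack[-1][1][1].append(node)
        stack.append((indent, node))
    return root

tree = parse(SOURCE)
print(tree[1][0][0])  # -> page Users
```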

That said, with the advent of AI assistants, I’m not sure if there is still room for my DSL idea.


I mean that's Zapier or n8n?


If we see ourselves less as programmers and more as software builders, then it doesn’t really matter if our programming skills atrophy in the process of adopting this tool, because it lets us build at a higher abstraction level, kind of like how a PM does it. This up-leveling in abstraction has happened over and over in software engineering as our tooling improves. I’m sure some excellent software engineers here couldn’t write assembly code to save their lives, but are wildly productive and respected for what they do - building excellent software.

That said, as long as there’s the potential for AI to hallucinate, we’ll always need to be vigilant - for that reason I would want to keep my programming skills sharp.

AI assisted software building by day, artisanal coder by night perhaps.


Isn't this the exact reason why modern software is so bloated?


I think this question can be answered in so many ways. First of all, piling on abstractions doesn’t automatically imply bloat - with proper compile-time optimizations you can achieve zero-cost abstractions, as C++ compilers do.

Secondly, bloat comes in so many forms, each with different causes. Did you mean bloat as in huge dependency installs like those node_modules folders? Or an Electron app where a whole browser is bundled? Or perhaps the insane number of FactoryFactoryFactoryBuilder classes that Java programmers have to bear because of misguided overarchitecting? Are the 7 layers of network protocols bloat?

These are human decisions - trade-offs between delivering value fast and performance. Foundational layers are usually built with care, and the right abstractions help with correctness and performance. At the app layer, requirements change more quickly and people are more accepting of performance hits, so they pick tech stacks you would describe as bloated in exchange for faster iteration and delivery of value.

So even if I used abstraction as an analogy, I don’t think that automatically implies AI assisted coding will lead to more bloat. If anything it can help guide people to proper engineering principles and fit the code to the task at hand instead of overarchitecting. It’s still early days and we need to learn to work well with it so it can give us what we want.


You'd have to define bloat first. Is internationalization bloat? How about screen reader support for the blind? I mean, okay, Excel didn't need a whole flight simulator in it, but just because you don't use a particular feature doesn't mean it's necessarily bloat. So first: define bloat.


Some termite mounds in Botswana already reach over two meters high, but these traditional engineering termites will be left behind in their careers if they don't start using AI and redefine themselves as mound builders.


That’s really awesome to have a viable self bootstrapped project! Did you have to spend a lot of time maintaining it or deal with customer support after the initial launch? A low maintenance yet viable business would truly be the dream!


It is pretty close to that dream scenario now, yes.

Because the tech stack is stable (and fully matured), I almost never have to deal with 'emergency' technical support or bug fixes. The servers just hum along.

I do handle customer support myself, but the volume is very low relative to the traffic. 90% of the tickets are just non-technical questions about billing or ad-free subscriptions.

This low-maintenance overhead is exactly what allows me to work on new features or experiment with new projects (like my upcoming AI drawing school) without burning out.

