Hacker News | everfrustrated's comments

I've found copilot chat is able to do everything I need. I tried the Claude plugin for vscode and it was a noticeably worse experience for me.

Mind you, Copilot only added agent mode relatively recently.

I really like the way Copilot does changes in such a way that you can accept or reject them, and even revert to a point in time in the chat history without using git. Something about this just fits right with how my brain works. Using the Claude plugin just felt like I had one hand tied behind my back.


I find Claude Code in VS Code is sometimes horribly inefficient. I tell it to replace some print-statements with proper logging in the one file I have open and it first starts burning tokens to understand the codebase for the 13th time today, despite not needing to and having it laid out in the CLAUDE.md already.

You're not wrong. Europe has no clouds, only hosting & VPS providers. Nothing has changed in 20 years. Really sad, actually.

That'd be the same OSI model which is only used by academics and nobody in the real world.

The US has state and county taxes, all with different thresholds for when you're required to collect and remit.

Cloudflare has a speed test. Try comparing it to other speed tests around the same time. That should reveal whether DTAG is congested/throttling your traffic to Cloudflare sites vs. others.

https://speed.cloudflare.com/


Could also check the official Breitbandmessung by the BNetzA. They also have some interesting statistics by provider, etc. on there. https://www.breitbandmessung.de

Worth pointing out that your IDE/plugin usually adds a whole bunch of prompts before yours - let alone the prompts that the model hosting provider prepends as well.

This might be what is encouraging the agent to make best-practice-like improvements. Looking at mine:

>You are a highly sophisticated automated coding agent with expert-level knowledge across many different programming languages and frameworks and software engineering tasks - this encompasses debugging issues, implementing new features, restructuring code, and providing code explanations, among other engineering activities.

I could imagine that an LLM could well interpret that to mean it should improve things as it goes. Models (like humans) don't respond well to things phrased in the negative (don't think about pink monkeys; now we're both thinking about them).


It's also common for your own CLAUDE.md to have some generic line like "Always use best practices and good software design" that gets in the way of other prompts.

The entire reason behind Turso is that the author had a beef with the sqlite people.

How is this any different to US users? Do you think stripe is correctly remitting US sales and county taxes?

The obligation has always been on the company making the sale not the processor.


> Do you think stripe is correctly remitting US sales and county taxes?

You tell me. Would the same people who help evade tax payments in the EU really do the same in the US? That's unbelievable! /s

> The obligation has always been on the company making the sale not the processor.

That's incorrect. At minimum, the processor needs to tell me exactly who the money goes to, so I can reach out to them.

And that's a "legal reach out" kind of information including company name, company type, company registration number, and company country of incorporation.

Stripe makes it easy for merchants to obscure that information and is actively hiding it from the customers who paid the merchant.


The software licensing cost of 50 read replicas alone would make SQL Server a non-starter.

This is why I love Postgres. It can get you to being one of the largest websites before you need to reconsider your architecture just by throwing CPU and disk at it. At that point you can well afford to hire people who are deep experts at sharding etc.

PostgreSQL actually supports sharding out of the box; it's just a matter of setting up the right table partitioning and using Foreign Data Wrappers (FDW) to forward queries to remote databases. I'm not sure what the post is referring to when it says that sharding requires leaving Postgres altogether.
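The routing idea is easy to sketch outside the database too. Here's a toy Python model of what hash partitioning does: each key deterministically maps to one of N shards (the shard names and count are made up for illustration; in Postgres this happens server-side via declarative partitioning plus postgres_fdw):

```python
# Toy model of hash-based shard routing; Postgres does the
# equivalent server-side with hash partitioning + postgres_fdw.
import hashlib

SHARDS = ["shard0", "shard1", "shard2", "shard3"]  # hypothetical FDW targets

def shard_for(user_id: int) -> str:
    # Stable hash: the same key always routes to the same shard.
    digest = hashlib.sha256(str(user_id).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Every query touching user 42 goes to the same place:
assert shard_for(42) == shard_for(42)
```

The key property is that routing is a pure function of the key, so neither the application nor the planner needs a lookup table to find a row's home.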

This is specifically what they said about sharding:

> The primary rationale is that sharding existing application workloads would be highly complex and time-consuming, requiring changes to hundreds of application endpoints and potentially taking months or even years


> potentially taking months or even years

On one hand OAI sells coding agents and constantly hypes how easily they will replace developers and how most code written is by agents; on the other hand they claim it will take years to refactor.

Both cannot be true at the same time.


Genuinely sounds like the kind of challenge that could be solved with a swarm of Codex coding agents. I'm surprised they aren't treating this as an ideal use-case to show off their stack!

Oh snap! Maybe it's all a great deception for making money?

I read your message, guessed the author, and I’m happy to announce I guessed correctly.

Getting the sharding in place, yes, but maintaining it operationally would still be a headache: things like schema migrations across shards, resharding, and even observability.

It wouldn’t work.

I know they said that, but in fact sharding is entirely a database-level concern. The application need not be aware of it at all.

Sharding can be made mostly transparent, but it's not purely a DB-level concern in practice. Once data is split across nodes, join patterns, cross-shard transactions, global uniqueness, certain keys hit with a lot of traffic, etc matter a lot. Even if partitioning handles routing, the application's query patterns and its consistency/latency requirements can still force application-level changes.

> Once data is split across nodes, join patterns, cross-shard transactions, global uniqueness, certain keys hit with a lot of traffic

If you're having trouble there, then a proxy "layer" between your application and the sharded database makes sense, meaning your application still keeps its naive understanding of the data (as it should) and the proxy/database access layer handles that messiness... shirley
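A minimal sketch of that proxy idea (all names hypothetical, dicts standing in for shard databases): the application calls one object, and cross-shard messiness like a scan over every shard lives inside the proxy rather than in application code:

```python
class ShardProxy:
    """Hides sharding from the application: callers just use get/put."""

    def __init__(self, num_shards: int):
        # Plain dicts stand in for per-shard database connections.
        self.shards = [dict() for _ in range(num_shards)]

    def _route(self, user_id: int) -> dict:
        # Single-key operations route to exactly one shard.
        return self.shards[user_id % len(self.shards)]

    def put(self, user_id: int, doc: str) -> None:
        self._route(user_id)[user_id] = doc

    def get(self, user_id: int):
        return self._route(user_id).get(user_id)

    def find_everywhere(self, doc: str):
        # Cross-shard scan: the messy part lives here, not in the app.
        return [uid for shard in self.shards
                    for uid, d in shard.items() if d == doc]

proxy = ShardProxy(num_shards=4)
proxy.put(7, "report.pdf")
proxy.put(11, "report.pdf")
assert proxy.get(7) == "report.pdf"
assert sorted(proxy.find_everywhere("report.pdf")) == [7, 11]
```

The trade-off is that scatter-gather operations like `find_everywhere` touch every shard, which is exactly the kind of query pattern the parent comment says can still leak back into application design.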


> mostly transparent, but it's not purely a DB-level concern in practice ...

But how would any of that change by going outside Postgres itself to begin with? That's the part that doesn't make much sense to me.


When sharded, anything crossing a shard boundary becomes non-transactional.

I.e. if you shard by userId, then a "share" feature which allows a user to share data with another user via a "SharedDocuments" table cannot be consistent.

That in turn means you're probably going to have to rewrite the application to handle cases like a shared document having one or the other attached user disappear or reappear. There are loads of bugs that can happen with weak consistency like this, and at scale every very rare bug is going to happen and need dealing with.


> When sharded, anything crossing a shard boundary becomes non-transactional.

Not necessarily? You can have two-phase commit for cross-shard writes, which ought to be rare anyway.
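A bare-bones sketch of the two-phase commit idea (purely illustrative; not the actual Postgres PREPARE TRANSACTION machinery, and the validity check is a made-up stand-in): the coordinator commits everywhere only if every shard votes yes in the prepare phase, otherwise it aborts everywhere:

```python
class Shard:
    def __init__(self):
        self.data, self.staged = {}, None

    def prepare(self, updates: dict) -> bool:
        # Phase 1 vote: yes only if the write can definitely apply later.
        # (Hypothetical rule: keys must be integer userIds.)
        self.staged = updates
        return all(isinstance(k, int) for k in updates)

    def commit(self):
        self.data.update(self.staged)
        self.staged = None

    def abort(self):
        self.staged = None

def two_phase_commit(shards_with_updates) -> bool:
    # Phase 1: collect votes from every participant.
    if all(shard.prepare(upd) for shard, upd in shards_with_updates):
        for shard, _ in shards_with_updates:   # Phase 2: commit everywhere
            shard.commit()
        return True
    for shard, _ in shards_with_updates:       # any "no" vote: abort everywhere
        shard.abort()
    return False

a, b = Shard(), Shard()
assert two_phase_commit([(a, {1: "doc"}), (b, {2: "doc"})])
assert not two_phase_commit([(a, {"bad": "x"}), (b, {3: "y"})])
```

Even this toy version hints at the downsides the replies mention: between the two phases other readers can observe shards in different states, and a crashed coordinator leaves participants stuck holding prepared state.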


Two-phase commit only provides an eventual consistency guarantee.

Other clients (readers) have to be able to deal with inconsistencies in the meantime.

Also, 2PC in Postgres is incompatible with temporary tables, which rules out use with long-running batch analysis jobs that might use temporary tables for intermediate work and then save results. E.g. "we want to send this marketing campaign to the top 10% of users" doesn't work with the naive approach.


Sorry, what am I missing here? This complaint is true for all architectures, because the readers are always going to be out of sync with the state of the database until they do another read.

The nanosecond that the system has the concept of readers and writers being different processes/people/whatever, it has multiple copies: the one held by the database, and the copies held by the readers from when they last read.

It does not matter if there is a single DB lock or a multi-shard distributed lock.


These are limitations in the current PostgreSQL implementation. It's quite possible to have consistent commits and snapshots across sharded databases. Hopefully some day in PostgreSQL too.

Shameless plug: https://github.com/mkleczek/pgwrh automates it quite a bit.

> At that point you can well afford to hire people who are deep experts at sharding etc.

Can you, though? OpenAI is haemorrhaging money like it is going out of style and, according to the news cycle over the last couple of days, will likely be bankrupt by 2027.


And typically the bigger the company gets, the harder it is to migrate to a new data model.

You suddenly have literally thousands of internal users of a datastore, and "we want to shard by userId; please, nobody do joins on userId anymore" becomes an impossible ask.

