I've started using Python for many more tasks after I discovered this feature. I'm primarily a JS/TS developer, but the ability to write a "standalone" script that can pull in third-party dependencies without affecting your current project is a massive productivity boost.
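For anyone who hasn't tried it, I assume this refers to PEP 723 inline script metadata, run with a tool like uv. A minimal sketch (requests and the HN endpoint here are just placeholders):

    # /// script
    # requires-python = ">=3.12"
    # dependencies = ["requests"]
    # ///
    import requests

    # Quick smoke test: fetch the current top story IDs from the HN API.
    ids = requests.get("https://hacker-news.firebaseio.com/v0/topstories.json", timeout=10).json()
    print(ids[:5])

Running it with "uv run script.py" resolves and caches the dependency in an isolated environment, so nothing touches whatever project you happen to be sitting in.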
I've worked with a number of people who made the IC -> manager conversion because it was represented as the best way forward in their career, only to find out it made them miserable, and convert back after a few years. I think you'll find that sort of conversion back to IC is not all that uncommon.
Essentially, the thinking went: if everyone is remote, why not hire remote workers from countries that are a lot cheaper? Suddenly you had a hard time finding contractors and FTEs from those countries because everyone was hiring them. At the same time, it got really hard for entry-level developers in the USA to find work.
The supply/demand curve shifted and now those workers are becoming more expensive while domestic workers are becoming cheaper.
India specifically is in the middle of a massive, years-long labor movement that is changing the terms of work there and, I believe, shifting the degree of alignment with western corporate outsourcing, though I'm not very informed about the details.
The scale is beyond comprehension, though: there were 250 million people on strike in a single day last summer. This is never really covered in western media or mentioned on HN, for reasons that are surely not interesting or worth pondering at all.
No one (at a national scale) can afford to strike like that, except people who understand why they can even less afford not to strike like that.
You're most likely correct; I originally started writing this comment to refute your statement, but found that my assumptions appear to be wrong.
Americans have nearly the highest nominal and PPP income among OECD countries as of 2024, behind only Luxembourg, Iceland, and Switzerland [1].
India experiences substantially higher shelter and food insecurity and poverty rates than the United States.
However, tech workers in Bangalore are paid an order of magnitude more than prevailing local wages in other sectors, at around ₹2M (₹20 lakh) [2]. Median annual rents for 2BHK (two-bedroom) apartments in desirable neighborhoods appear to be around ₹3 lakh, roughly a seventh of that figure [3].
It therefore seems reasonable that a technology worker could sustain a prolonged strike. I have never personally traveled to Bangalore, though I have lived in places where the cost of living is under a tenth of median American income.
I invite correction by people with first hand knowledge about cost of living in Bangalore.
And it was an absolutely made-up number. The real numbers would be so low as to be insignificant. That figure was only reported in global outlets, and the strike had zero practical impact in India. It was so uneventful that almost all of India, except select pockets with communist party influence, didn't even know the whole thing happened, let alone feel the impact of the strike.
> However, tech workers in Bangalore are paid an order of magnitude higher than prevailing local wages in other sectors
250 million people striking in India isn't mainly “tech workers in Bangalore”, or mainly tech and other elite workers at all. It’s about 40% of Indian workers, and most articles I've seen about it centered on widespread participation of workers in coal, construction, and agricultural sectors.
When India "shut down" for Covid, day labourers suddenly had no income, and no government support - they had to walk all the way to their home province (can't remember if the trains were even running).
But oh well, Uberizing employment means the run-of-the-mill American worker can also live like that in the future... progress!
I'm not an economist either, but I also assume that as the country attracts more local talent for local companies, the competition for outsourcing becomes harder (i.e., you now have to pay more than the local companies do).
All just speculation on my part though, I really have no clue either.
No shame in this! When you're using Claude code (or Cursor, or similar), you get these pop-ups rather frequently. "May I do XYZ web search?" "May I run this command?" "May I make this HTTP request?" This is for security, but it becomes the limiting step in your workflow if you're trying to use parallel agents.
These tools generally offer the ability to simply shut off these guardrails. When you do this, you're in what has come to be called "yolo mode."
I am arguing that, sandboxed correctly, this mode is actually safer than the standard one because it mitigates my own fatigue and frustration. Those prompts surface every hour of every day. Malicious actors are definitely a thing, but your own exhaustion is a far more present danger.
It really depends on the person. This may be an unpopular opinion today, but I strongly believe that someone doing what they love will statistically be both happier and better off financially than someone in it only for the money. So whatever makes their ears perk up: EE, chemistry, mechanical, math, physics, biology, etc.
And, as a complement, pick up a "computer-ish" minor to learn how to make a machine do your bidding. My 2c.
It sure is. If I said it wasn't, that was a typo.
What I was saying is that today I see a non-CS STEM major plus a CS-like minor as a better ticket for an undergrad (who will enter the job market in the next 2-4 years) than a CS major. Which was not the case for the last almost 30 years, when a pure CS major gave many folks an excellent start. My 2c.
I've tried most of the CLI coding tools with the Claude models and I keep coming back to Claude Code. It hits a sweet spot of simple and capable, and right now I'd say it's the best from an "it just works" perspective.
In my experience the CLI tool is part of the secret sauce. I haven't tried switching models per CLI tool, though. I use Claude exclusively at work; for personal projects I use Claude, Codex, and Gemini.
Claude Code seems to package a relatively smart prompt as well, as it seems to work better even with one-line prompts than alternatives that just invoke the API.
Key word: seems. It's impossible to do a proper qualitative analysis.
Aha! That makes so much sense. Thank you for this.
Edit: Ah, right, the downside is that this won't have good OLAP query performance when interacting directly with the SQLite tables, so it's still necessary to copy out to DuckDB tables (probably in batches) if that matters. Still seems very useful to me though.
Analytics is done in "batches" (daily, weekly) anyways, right?
We know you can't get both row and column ordering at the same time, and that continuously maintaining both means duplication and ensuring you get the worst case from both worlds.
Local, row-wise writing is the way to go for write performance. Column-oriented reads are the way to do analytics at scale. It seems alright to have a sync process that does the order re-arrangement (maybe with extra precomputed statistics, and sharding to allow many workers if necessary) to let queries of now historical data run fast.
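Roughly the shape I have in mind, as a minimal Python sketch (the events/sync_state tables and the id-based watermark are made up for illustration, not a recommendation of any particular tool):

    import sqlite3
    import duckdb

    # Row store: fast local inserts and point lookups.
    row_db = sqlite3.connect("events.sqlite")
    row_db.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, ts REAL, payload TEXT)")

    # Column store: fast scans and aggregations, allowed to lag behind.
    col_db = duckdb.connect("events.duckdb")
    col_db.execute("CREATE TABLE IF NOT EXISTS events (id BIGINT, ts DOUBLE, payload VARCHAR)")
    col_db.execute("CREATE TABLE IF NOT EXISTS sync_state (last_id BIGINT)")

    def sync_batch():
        # Copy rows newer than the watermark from SQLite into DuckDB.
        last_id = col_db.execute("SELECT coalesce(max(last_id), 0) FROM sync_state").fetchone()[0]
        rows = row_db.execute(
            "SELECT id, ts, payload FROM events WHERE id > ? ORDER BY id", (last_id,)
        ).fetchall()
        if rows:
            col_db.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)
            col_db.execute("INSERT INTO sync_state VALUES (?)", (rows[-1][0],))

Run sync_batch() on whatever cadence the workload tolerates; the row store stays the source of truth and the column store is a lagging, scan-optimised copy.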
It's not just about row versus column. OLAP stores are potentially denormalised as well, and sometimes pre-aggregated, such as rolling up by day or by customer.
If you really need to get performance you'll be building a star schema.
Not all OLAP-like queries are for daily reporting.
I agree that the basic architecture should be row order -> delay -> column order, but the question (in my mind) is balancing the length of that delay with the usefulness of column order queries for a given workload. I seem to keep running into workloads that do inserts very quickly and then batch reads on a slower cadence (either in lockstep with the writes, or concurrently) but not on the extremely slow cadence seen in the typical olap reporting type flow. Essentially, building up state and then querying the results.
I'm not so sure about "continuously maintaining both means duplication and ensuring you get the worst case from both worlds". Maybe you're right, I'm just not so sure. I agree that it's duplicating storage requirements, but is that such a big deal? And I think if fast writes and lookups and fast batch reads are both possible at the cost of storage duplication, that would actually be the best case from both worlds?
I mean, this isn't that different conceptually from the architecture of log-structured merge trees, which have this same kind of "duplication" but for good purpose. (Indeed, rocksdb has been the closest thing to what I want for this workload that I've found; I just think it would be neat if I could use sqlite+duckdb instead, accepting some tradeoffs.)
> the question (in my mind) is balancing the length of that delay with the usefulness of column order queries for a given workload. I seem to keep running into workloads that do inserts very quickly and then batch reads on a slower cadence (either in lockstep with the writes, or concurrently) but not on the extremely slow cadence seen in the typical olap reporting type flow. Essentially, building up state and then querying the results.
I see. Can you come up with row/table watermarks? Say your column store is up-to-date with certain watermark, so any query that requires freshness beyond that will need to snoop into the rows that haven't made it into the columnar store to check for data up to the required query timestamp.
In the past I've dealt with a system that had read-optimised columnar data overlaid with fresh write-optimised data, and used timestamps to agree on which data should be visible to queries. It continuously consolidated data into the read-optimised store instead of having the silly daily job you might run in the extremely slow-cadence reporting flow you mention.
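A hypothetical sketch of that overlay read in Python (events, sync_state, and the id watermark are invented names for illustration, not how that system actually worked):

    import sqlite3
    import duckdb

    row_db = sqlite3.connect("events.sqlite")   # write-optimised, always fresh
    col_db = duckdb.connect("events.duckdb")    # read-optimised, lags behind

    def count_since(ts_cutoff):
        # Watermark: highest row id already consolidated into the column store.
        wm = col_db.execute("SELECT coalesce(max(last_id), 0) FROM sync_state").fetchone()[0]
        # The bulk of the scan runs against the columnar copy...
        cold = col_db.execute("SELECT count(*) FROM events WHERE ts >= ?", (ts_cutoff,)).fetchone()[0]
        # ...then top it up with fresh rows that haven't been consolidated yet.
        hot = row_db.execute("SELECT count(*) FROM events WHERE id > ? AND ts >= ?", (wm, ts_cutoff)).fetchone()[0]
        return cold + hot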
You can write such a system, but in reality I've found it hard to justify building one for continuous updates when a 15-minute delay isn't the end of the world. It's doable if you want it, though.
> I'm not so sure about "continuously maintaining both means duplication and ensuring you get the worst case from both worlds". Maybe you're right, I'm just not so sure. I agree that it's duplicating storage requirements, but is that such a big deal? And I think if fast writes and lookups and fast batch reads are both possible at the cost of storage duplication, that would actually be the best case from both worlds?
I mean that if you want both views in a consistent world, then writes will slow to a crawl, as both row- and column-ordered data need to be updated before the write lock is released.
Yes! We're definitely talking about the same thing here! Definitely not thinking of consistent writes to both views.
Now that you said this about watermarks, I realize that this is definitely the same idea as streaming systems like flink (which is where I'm familiar with watermarks from), but my use cases are smaller data and I'm looking for lower latency than distributed systems like that. I'm interested in delays that are on the order of double to triple digit milliseconds, rather than 15 minutes. (But also not microseconds.)
I definitely agree that it's difficult to justify building this, which is why I keep looking for a system that already exists :)
Flutter is pretty bad these days, at least when compared to React Native with Expo. I was big into Flutter, but overall it just doesn't compare well to Expo.
Flutter developer experience is top notch. The best place you can see this is by comparing the experience of upgrading React Native vs upgrading Flutter. Flutter Web with WASM is also very cool. If you’re using a bespoke design for your app and don’t care about looking native, it’s overall a more predictable way to manage cross platform development.
That said, React Native has many great third party libraries whereas Flutter is dominated by a lot of low quality solutions.
Edit: zero-dependency Python.