theanonymousone's comments | Hacker News

I already do it, but not in TS. There is a scripting language that is as available in most/all (non-Windows) systems as Bash: Python.

Edit: zero-dependency Python.


All well and good until you need a dependency; then you need to do all the same project setup as usual.

I stopped using Python for scripting for this reason.


If you use `uv`, you can declare your dependencies at the top of a script: https://docs.astral.sh/uv/guides/scripts/#declaring-script-d...

I've started using Python for many more tasks after I discovered this feature. I'm primarily a JS/TS developer, but the ability to write a "standalone" script that can pull in third-party dependencies without affecting your current project is a massive productivity boost.
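For reference, a minimal sketch of what that looks like (the script name and the `requests` dependency are just illustrations): the comment block at the top is PEP 723 inline metadata, which `uv run` reads to build a throwaway environment before executing the script.

```python
# /// script
# requires-python = ">=3.9"
# dependencies = [
#     "requests",
# ]
# ///
import requests

# requests is available here even though it isn't installed
# in your current project's environment.
resp = requests.get("https://example.com")
print(resp.status_code)
```

Running `uv run script.py` resolves and installs the declared dependencies on the fly; no virtualenv or pyproject.toml needed.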


I don't think you need any dependencies to match Bash scripting in capability.

You can even wrap shell / system commands in python and capture the output, so it’s basically a superset!
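For example, using only the standard library to capture a command's output, much like `$(...)` in Bash:

```python
import subprocess

# Run a command and capture its output as text.
result = subprocess.run(
    ["echo", "hello from the shell"],
    capture_output=True,
    text=True,
    check=True,  # raise CalledProcessError on non-zero exit, like `set -e`
)
print(result.stdout.strip())  # → hello from the shell
```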

You can also inline python inside shell scripts, does that make them equal sets? :)

    life() {
      python3 << EOF
    print(42)
    EOF
    }

Does that mean half of those managers will become engineers?

If I were a betting person, I'd bet that's the number of engineers who were previously forced to become managers in order to get a promotion.

I've worked with a number of people who made the IC -> manager conversion because it was represented as the best way forward in their career, only to find out it made them miserable, and convert back after a few years. I think you'll find that sort of conversion back to IC is not all that uncommon.

Why did "outsourced workers get (relatively) much more expensive after"?

Essentially the thinking went: if everyone is remote, why not hire remote workers from countries that are a lot cheaper? Suddenly you had a hard time finding contractors and FTEs from those countries because everyone was hiring them. At the same time, it got really hard for entry-level developers in the USA to find work.

The supply/demand curve shifted and now those workers are becoming more expensive while domestic workers are becoming cheaper.


India specifically is in the middle of a massive, years-long labor movement that is changing the terms of work there and, I believe, shifting the degree of alignment with western corporate outsourcing, though I'm not very informed about the details.

Scale is beyond comprehension though, there were 250 million people on strike one day last summer. This is not ever really covered in western media or mentioned on HN for reasons that are surely not interesting or worth pondering at all.


Americans can’t afford to strike like that.

No one (at a national scale) can afford to strike like that, except people who have an understanding of why they even more can't afford not to strike like that.

You're most likely correct; I originally started writing this comment to refute your statement, but found that my assumptions appear to be wrong.

Americans have nearly the highest nominal and PPP income of OECD countries as of 2024, behind only Luxembourg, Iceland, and Switzerland [1].

India experiences substantially higher shelter and food insecurity and poverty rates than the United States.

However, tech workers in Bangalore are paid an order of magnitude more than prevailing local wages in other sectors, at around ₹2M (₹20 lakh) [2]. Median annual rents for 2BHK (2-bedroom) apartments appear to be around 1/10th of that figure, at ₹3 lakh in desirable neighborhoods [3].

It appears to be reasonable for a technology worker to be able to perform a sustained strike. I have never personally traveled to Bangalore, though I have lived in places where cost of living is under a tenth of median American income.

I invite correction by people with first hand knowledge about cost of living in Bangalore.

1. https://www.oecd.org/en/data/indicators/average-annual-wages...

2. https://timesofindia.indiatimes.com/city/bengaluru/median-te...

3. https://www.birlaevara.org.in/best-areas-in-bangalore-for-re...


> It appears to be reasonable for a technology worker to be able to perform a sustained strike.

I don't think the strikes are done by tech people at all. Just normal workers.


Then these striking workers are indeed acting bravely, especially in comparison to the wealth of American workers.

And it was an absolutely made-up number. The real numbers would be so low as to be insignificant. That number was only reported in global outlets, and the strike had zero practical impact in India. It was so uneventful that almost all of India, except select pockets with communist party influence, didn't even know about the whole thing, let alone feel the impact of the strike.

> However, tech workers in Bangalore are paid an order of magnitude higher than prevailing local wages in other sectors

250 million people striking in India isn't mainly “tech workers in Bangalore”, or mainly tech and other elite workers at all. It’s about 40% of Indian workers, and most articles I've seen about it centered on widespread participation of workers in coal, construction, and agricultural sectors.


Thank you for the correction. Indeed these workers' livelihoods are more perilous than their American contemporaries.

And Indians can?

When India "shut down" for Covid, day labourers suddenly had no income, and no government support - they had to walk all the way to their home province (can't remember if the trains were even running).

But oh well, Uberizing employment means the run-of-the-mill American worker can also live like that in the future... progress!


Americans have chosen to learn exactly how good they have had it. You get to watch!

Can't afford not to.

Great question. I'm not an economist, so I have no idea why. All the outsourcing rates I've seen have gotten way higher in the past ~10 years, though.

Beyond just the usual inflation?

I'm not an economist either, but I also assume that as the country attracts more local talent for local companies, the competition for outsourcing becomes harder (i.e., you now have to pay more than the local companies do).

All just speculation on my part though, I really have no clue either.


People from Bangalore were telling me it was getting crazy expensive to live there (by Indian standards) circa 2013.

May I humbly and shamefully ask what YOLO means in this context, particularly "Yolo-ing it"?

The only Yolo I know about is an object detection model :/


No shame in this! When you're using Claude code (or Cursor, or similar), you get these pop-ups rather frequently. "May I do XYZ web search?" "May I run this command?" "May I make this HTTP request?" This is for security, but it becomes the limiting step in your workflow if you're trying to use parallel agents.

These tools generally offer the ability to simply shut off these guardrails. When you do this, you're in what has come to be called "yolo mode."

I am arguing that, sandboxed correctly, this mode is actually safer than the standard one because it mitigates my own fatigue and frustration. These threats surface every hour of every day. Malicious actors are definitely a thing, but your own exhaustion is a far more present danger.


The object detection method name is a pun on YOLO as in "you only live once", which refers to taking risks (which might be exciting).

A yearly Github copilot subscription :)

> So do still learn CS or SE in college, but as a minor to another STEM field. My 2c.

What other STEM field, if I may ask?


It really depends on the person. This may be an unpopular opinion today, but I strongly believe that someone doing what he loves will statistically be both happier and better off financially than someone in it only for the money. So whatever makes their ears perk up: EE, chemistry, mechanical, math, physics, biology, etc.

And, as a complement, pick up a "computer-ish" minor to learn how to make a machine do your bidding. My 2c.


Thanks. And why is CS not in STEM?

> And why is CS not in STEM?

It sure is. If I said that it is not, that was a typo.

What I was saying is that today I see a non-CS STEM major plus a CS-like minor as a better ticket for an undergrad (who will enter the job market in the next 2-4 years) than a CS major. Which was not the case for the last almost 30 years, when a pure CS major gave many folks an excellent start. My 2c.


Same for me regarding subreddits.


Is Claude "Code" anything special, or is it mostly the LLM, and do other CLIs (e.g. Copilot) also work?


I've tried most of the CLI coding tools with the Claude models and I keep coming back to Claude Code. It hits a sweet spot of simple and capable, and right now I'd say it's the best from an "it just works" perspective.


In my experience the CLI tool is part of the secret sauce. I haven't tried switching models per each CLI tool though. I use claude exclusively at work and for personal projects I use claude, codex, gemini.


It’s mostly the model. Between Copilot, Claude Code, OpenCode, and snake oil like Oh My OpenCode, there are no huge differences.


Claude Code seems to package a relatively smart prompt as well, as it seems to work better even with one-line prompts than alternatives that just invoke the API.

Key word: seems. It's impossible to do a proper qualitative analysis.


Why do you call Oh My OpenCode snake oil?


The way I understood it, you can do your inserts with SQLite "proper", and simultaneously use DuckDB for analytics (aka read-only).


Aha! That makes so much sense. Thank you for this.

Edit: Ah, right, the downside is that this is not going to have good olap query performance when interacting directly with the sqlite tables. So still necessary to copy out to duckdb tables (probably in batches) if this matters. Still seems very useful to me though.


Analytics is done in "batches" (daily, weekly) anyways, right?

We know you can't get both, row and column orders at the same time, and that continuously maintaining both means duplication and ensuring you get the worst case from both worlds.

Local, row-wise writing is the way to go for write performance. Column-oriented reads are the way to do analytics at scale. It seems alright to have a sync process that does the order re-arrangement (maybe with extra precomputed statistics, and sharding to allow many workers if necessary) to let queries of now historical data run fast.


It's not just about row versus column. OLAP stores are potentially denormalised as well, and sometimes pre-aggregated, such as rolling up by day or by customer.

If you really need to get performance you'll be building a star schema.


Not all olap-like queries are for daily reporting.

I agree that the basic architecture should be row order -> delay -> column order, but the question (in my mind) is balancing the length of that delay with the usefulness of column order queries for a given workload. I seem to keep running into workloads that do inserts very quickly and then batch reads on a slower cadence (either in lockstep with the writes, or concurrently) but not on the extremely slow cadence seen in the typical olap reporting type flow. Essentially, building up state and then querying the results.

I'm not so sure about "continuously maintaining both means duplication and ensuring you get the worst case from both worlds". Maybe you're right, I'm just not so sure. I agree that it's duplicating storage requirements, but is that such a big deal? And I think if fast writes and lookups and fast batch reads are both possible at the cost of storage duplication, that would actually be the best case from both worlds?

I mean, this isn't that different conceptually from the architecture of log-structured merge trees, which have this same kind of "duplication" but for good purpose. (Indeed, rocksdb has been the closest thing to what I want for this workload that I've found; I just think it would be neat if I could use sqlite+duckdb instead, accepting some tradeoffs.)


> the question (in my mind) is balancing the length of that delay with the usefulness of column order queries for a given workload. I seem to keep running into workloads that do inserts very quickly and then batch reads on a slower cadence (either in lockstep with the writes, or concurrently) but not on the extremely slow cadence seen in the typical olap reporting type flow. Essentially, building up state and then querying the results.

I see. Can you come up with row/table watermarks? Say your column store is up-to-date with certain watermark, so any query that requires freshness beyond that will need to snoop into the rows that haven't made it into the columnar store to check for data up to the required query timestamp.

In the past I've dealt with a system that had read-optimised columnar data that was overlaid with fresh write-optimised data and used timestamps to agree on the data that should be visible to the queries. It continuously consolidated data into the read-optimised store instead of having the silly daily job that you might have in the extremely slow cadence reporting job you mention.

You can write such a system, but in reality I've found it hard to justify building a system for continuous updates when a 15min delay isn't the end of the world, but it's doable if you want it.
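A toy sketch of that watermark overlay, using plain `sqlite3` for both stores just to show the query shape (all table and column names here are invented; "snapshot" stands in for the read-optimised store, consolidated up to the watermark, and "fresh" for row-ordered writes that haven't been consolidated yet):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE snapshot (ts INTEGER, value INTEGER);  -- consolidated up to watermark
    CREATE TABLE fresh    (ts INTEGER, value INTEGER);  -- recent row-ordered writes
""")
watermark = 100
con.executemany("INSERT INTO snapshot VALUES (?, ?)", [(10, 1), (50, 2), (100, 3)])
con.executemany("INSERT INTO fresh VALUES (?, ?)", [(90, 99), (110, 4), (120, 5)])

# A read that needs freshness beyond the watermark overlays the two stores:
# everything from the snapshot, plus only the fresh rows past the watermark
# (fresh rows at or before it, like ts=90, are assumed already consolidated).
rows = con.execute("""
    SELECT ts, value FROM snapshot
    UNION ALL
    SELECT ts, value FROM fresh WHERE ts > ?
    ORDER BY ts
""", (watermark,)).fetchall()
print(rows)  # → [(10, 1), (50, 2), (100, 3), (110, 4), (120, 5)]
```

A consolidation job would then periodically move fresh rows into the read-optimised store and advance the watermark.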

> I'm not so sure about "continuously maintaining both means duplication and ensuring you get the worst case from both worlds". Maybe you're right, I'm just not so sure. I agree that it's duplicating storage requirements, but is that such a big deal? And I think if fast writes and lookups and fast batch reads are both possible at the cost of storage duplication, that would actually be the best case from both worlds?

I mean that if you want both views in a consistent world, then writes will bring things to a crawl, as both row- and column-ordered data need to be updated before the writing lock is released.


Yes! We're definitely talking about the same thing here! Definitely not thinking of consistent writes to both views.

Now that you said this about watermarks, I realize that this is definitely the same idea as streaming systems like flink (which is where I'm familiar with watermarks from), but my use cases are smaller data and I'm looking for lower latency than distributed systems like that. I'm interested in delays that are on the order of double to triple digit milliseconds, rather than 15 minutes. (But also not microseconds.)

I definitely agree that it's difficult to justify building this, which is why I keep looking for a system that already exists :)


So why did Google make such a choice for their front-end framework?


Flutter is pretty bad these days, at least when compared to React Native with Expo. I was big into Flutter, but overall it just doesn't compare to Expo.


Flutter developer experience is top notch. The best place you can see this is by comparing the experience of upgrading React Native vs upgrading Flutter. Flutter Web with WASM is also very cool. If you’re using a bespoke design for your app and don’t care about looking native, it’s overall a more predictable way to manage cross platform development.

That said, React Native has many great third party libraries whereas Flutter is dominated by a lot of low quality solutions.


Expo and its library quality are night and day compared to Flutter.

Flutter has potential, but just like any other Google tech, it gets shitty over time due to lack of TLC.


I usually consider it more of a spectrum, with the left being more suited to fast iteration/abstraction, and the right being more useful for control:

Expo / RN → Flutter → KMP → Native


Really? I always read people say Flutter was more performant.

