Hacker News | SlinkyOnStairs's comments


That isn't though.

Both Cerner (EHR) and NetSuite (ERP) were laggards in their market segments for years.

If I'm the Director of Enterprise Applications with a budget allocated to procurement, I have no reason to purchase a laggard product like Cerner or NetSuite, even with the Oracle bundle, when SAP is offering significant discounts and OpenAI, Anthropic, and GCP are partnering with systems integrators like Accenture and Deloitte to fully build out and manage your own hyperspecific ERP or EHR.

There's no reason to keep investing in products in a market that was already past its growth stage pre-AI, with a clear market winner, especially now that downstream pressure makes building much more attractive than buying an inferior product.

Based on your response, I doubt you even cared to read my entire post.

Edit: can't reply

> I didn't read it because it didn't exist yet, you added it in an edit

It did when I posted. The only edit I made after you posted was fixing HRM to EHR.

> You're not even disagreeing with my response, merely elaborating the mechanism behind it. This is bad faith posting.

I strongly disagree. My entire thesis is that Cerner and NetSuite were bad businesses. If a business is bad you kill the business.

No need to gaslight me and delete your response.


Anyone with even a passing familiarity with EHR systems will know that nobody wants to build their own. I once worked for a large hospital system that abandoned a decades old institutionally built and maintained system for Epic. The choice was celebrated by almost everyone who worked there.

The value is in the “system” itself: the tooling, the plugins, the familiarity and skills your staff already has (so no retraining is required), and the interoperability of data with other systems and vendors.

The idea that AI is going to enable a variety of bespoke competitors is truly laughable!


> Based on your response, I doubt you even cared to read my entire post.

I didn't read it because it didn't exist yet, you added it in an edit.

You're not even disagreeing with my response, merely elaborating the mechanism behind it. This is bad faith posting.


In India, most of the firings happened in OFSS, Database, and OCI.

That seems to be a bit of a contradiction of your thesis, no? OCI is their golden goose now, for example.


It is true enough that "the stock market" is about them.

To stick some concrete numbers on this: combined, the world's billionaires hold about $15 trillion in assets, while the world's retirement funds hold about $60-70 trillion.

What's driving the major disconnect is the generational wealth divide. Boomers have loads of wealth: housing, pension funds, non-pension investments. Millennials, not so much. (Obviously this is in part because wealth builds up over one's life, though the divide is stronger than that effect alone.)

If you're a boomer, all this politics that promotes the stock market over the material economy is fucking great. Tech lays off another 16 billion people? Stonks go to the moon, and maybe you'll collect a nice fat severance package on your way out of your last job. If you're young though, it's a nightmare.

It's quite recent that the political balance has changed; Biden fumbled the 2024 election in no small part because of his "But the stock market is good, why are you mad?" stance that had been ol' reliable for the decades prior.


"Obviously, this is in part because wealth builds up during one's life, though the divide is stronger than merely that effect"

Do you have any numbers on this? Age is a pretty important component of the Gini index, and people underestimate the age effect when discussing inequality.


> There's nothing magical about a "quarter".

There isn't. But everyone knows the US stock market is about to run off a cliff. Or rather, it already has.

Everyone is looking at the quarters because they're waiting for Wile E. Coyote to look down.


What's that old saying?

Be greedy when others are fearful, or something like that?


Do bear in mind the context of that Buffett quote is to not blindly chase market sentiment and the numbers, neither directly nor inversely; Berkshire Hathaway's got quite the pile of cash right now.

Never heard that one, but my off the cuff thought was "sounds like something a so-far-lucky gambling addict would say".

I have a feeling Warren Buffett would accept that label, with a chuckle and a smirk!

But I also think Buffett wouldn't characterize the current environment as particularly fearful. We haven't seen a whole lot of panic aside from a couple of 1-2% daily swings, which is nothing.


This will slightly overlap with the other replies, but to be concise:

> If you put stuff out in public for anyone to use, then find out it's used in a way you don't like, it's your right to stop sharing

Yes. The entire point of Copyright and the reason it was invented is to ensure people will keep sharing things. Because otherwise people will just stop publishing things, which is a detriment to all. (Including AI companies, who now don't get new training data)

We have collectively decided that we will give authors some power to say "I don't like how my work is being used" to ensure they don't just "stop sharing".

Fair Use is an exception to that, where the public good does outweigh an individual author's objections. But critically, not such that authors stop publishing. Hence the 4th "factor" in US copyright law (which is one of the most expansive on fair use), where the "effect of the use upon the potential market for or value of the copyrighted work" is evaluated. Fair use isn't supposed to obliterate the value of the original work, or people will stop publishing again.

This is what makes AI training's status so contentious. As a direct copyright claim, it is very weak: it's incredibly hard to prove a 1:1 copy from the training data, through the model, into the output; you have to argue about the architecture of LLMs and their inability to separate copyrightable expression from uncopyrightable facts.

Yet in spirit, AI training clearly violates copyright. The explicitly stated purpose is to copy works as training data, often without any compensation or even permission, in order to build a machine that will annihilate the market for all the works used.

People already are pulling back on the amount of works they share.


It's not market driven. AI is ludicrously unprofitable for nearly all involved.

The profit appears to lie in capturing the political class and its associated lobbies and monied interests.

> Yes, with hindsight, we can definitively know, and with sufficient time each target could probably have been positively ID'd, but there was precisely one mis-strike in 1000s of sorties, so this already is a low error rate.

This is giving them too much credit.

Hegseth has already shown himself to entirely disregard the notion of a war crime, even by the US military's own already controversial standards. The double strike on the boats in the Caribbean is literally the textbook example in US military manuals of what not to do, and of what constitutes a war crime.

This was no mistake. It was the obvious outcome of a pattern of reckless action.


> but that analogy carries a lot of real-world weight that doesn't map cleanly to software decisions

Twitter literally runs CSAM-as-a-service.

While Microsoft is not quite that evil, building a North Korea-style computer surveillance system with "Recall" comes pretty close. Other examples include Facebook's regular doxxing of its users via its real-name policy.

It's a crass comparison, but not unreasonable on both sides. Abuse goes beyond just physical violence, and the practices of these tech companies really do match those other kinds of abuse. The other half is that software has eaten the world, and these changes really do affect people's lives.


> The article mentions reduced job growth in SWE due to AI but the fed actually says the opposite

It's an open secret that a good majority of these "AI layoffs" are AI in name only: a little lie told to keep the shareholders happy while the real cause is the worsening economy.


In that same period the big players have only gotten bigger and the "Mittelstand" in tech has been practically dying. Replaced by the flood of VC startups that are far too obsessed with "growth" to care about reliability and stability.

(Note that "is this company financially viable in the long term future" is an important part of stability. Doesn't matter how rock solid the software is if the startup's bankrupt by the end of next year.)


> but isn't the EU simply horrible when it comes to privacy of your data from a nosy government?

It's a case of "better is not perfect".

Yes, the EU and its member states allow the police quite a bit of access to data and servers. However, there are still decently functional checks and balances, unlike China, Russia, or the US, where authoritarian governments already operate with carte blanche.

What the line really seems to refer to is general data protection. While "the state spies on you" is one attack vector, and one certainly becoming dangerous for oppressed minority groups in the US, it's not the only one.

For most people (really, for all people, since authoritarian systems rely heavily on breached data), the chief risk to one's wellbeing is data breaches: companies recklessly collecting all the data they can get their hands on and retaining it forever.

There, the EU does have notably better laws: data collection and retention are restricted, and user-requested deletion is a legal right. (Enforcement of this is still a mess.)

