The problem with Epic is that they seem to refuse to deal with their technical debt. As far as I know, it's still some janky Visual Basic UI with an M/UMPS backend (well, w/ a modernization wrapper). And why would they change, when hospital CIOs are willing to approve contracts worth hundreds of millions (and their competitors are even worse)?
Maybe it all works... however slowly... but pulling data out of what is effectively a bunch of antiquated and super-siloed bins seems overly labor-intensive and cost-ineffective.
The other announcement was that they were going to do a partnership with Apple, but it sounded like the task of actually making a modern native client on macOS was beyond their technical skill... (their iOS app efforts are... not pretty...)
I think it's partly because those who have a passion for the industry aren't good developers and good developers don't have a passion for the industry.
Not to mention a lot of decisions are driven by medical doctors and if you think software developers believe that being good at one thing makes them good at all things, you are going to love medical doctors.
That's certainly an issue, but one thing that MDs (I am one) are very, very good at is knowing whether a system makes their lives easier or harder. Paper charts had their problems - unreadable handwriting, limited accessibility - but they were a very heavily refined system that served the doctors and nurses who used them daily. EMRs are largely written to maximize billing, not end-user convenience.
I'm an anesthesiologist, so there's really nothing for me to up-code in order to get paid more for the same work (except that I can declare the patient to be higher-risk, or the surgery to be an emergency, each of which nets my group about $20); payment is baked into the system, based on what surgery is being performed. So EMR has been good in that I can find old information more easily, and Epic in particular allows me to get information from other health systems that use Epic, but as a practical matter it's not a big improvement over the old system, in which paper notes were scanned in after discharge and all dictations (admissions, discharge summaries, any procedure notes) were transcribed into it.
I feel like it's not even medical doctors; it's the CIOs/hospital administrators and the insurance companies that drive most of the software decisions, since that is where the money for these large Epic deployment contracts comes from, and they are the true "customers". Most medical doctors don't even know where to start when asked about software decisions.
> As far as I know, it's still some janky Visual Basic UI with an M/UMPS backend (well, w/ a modernization wrapper).
You are correct. They haven't changed in the last 20 years. Even their coding tests use a pseudocode (with basic grammar & layout provided) that is thinly veiled MUMPS.
And there's a proctor who watches you & your screen over a remote connection, lest you google anything related to the problems. It was very creepy.
That's really bizarre. Do they pay well, at least? I get that there are tons of jobs that aren't super exciting, but even a Java enterprise middleware app sounds better if only because you wouldn't be working on a completely niche platform.
Good question: The proctor made me swivel my laptop webcam around the room to make sure I was alone, even down & behind to check I wasn't holding anything in my lap. I was allowed one 10-minute break, during which the camera & microphone were to remain hot, plus a quick second check before the second part of the coding test.
They are finally reaching the point of rolling out their new clients that get off Visual Basic, so that’s something.
M, however, ends up being a combined database/business logic platform (and it's fairly speedy at that, if that's what you want). Extracting data into a nice tidy relational database does take time and development effort, but they have a reasonably robust process for this already.
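To make the extraction point concrete, here's a toy sketch (my own illustration, nothing to do with Epic's actual ETL process or schema): an M-style global is basically a sparse hierarchical key tree, so getting it into relational shape means walking subtrees and emitting rows.

```python
# Toy sketch of flattening an M-style hierarchical global into relational
# rows. The structure below is invented for illustration; a global like
# ^PATIENT(mrn,"labs",timestamp,code)=value maps naturally to nested dicts.
from typing import Iterator

patients = {
    "100042": {
        "name": "DOE,JANE",
        "labs": {
            "2023-11-02T08:15": {"GLU": 94, "NA": 139},
            "2023-11-03T08:10": {"GLU": 101},
        },
    },
}

def lab_rows(globals_: dict) -> Iterator[tuple]:
    """Flatten each patient's 'labs' subtree into (mrn, ts, code, value) rows."""
    for mrn, record in globals_.items():
        for ts, results in record.get("labs", {}).items():
            for code, value in results.items():
                yield (mrn, ts, code, value)

for row in lab_rows(patients):
    print(row)  # e.g. ('100042', '2023-11-02T08:15', 'GLU', 94)
```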
This article seems to be a relatively mundane announcement to me. You could already do this same stuff with Epic on Azure, but I guess more options are nice.
While it's been at least a decade since I've looked at M/UMPS ... I do remember the database having some really interesting stuff going on that suits the storage of healthcare data well. The language itself though ... a complete abomination.
The company I work for is in the healthcare tech space and we all generally despise Epic. But when I saw the MyChart app, I was really impressed.
Also, we were in the ER recently, and my spouse got their test/lab results on their phone a good 30 minutes before anyone came in. It was all good news.
But you probably don't want that to happen if someone turns out to have cancer...
What do you mean by "pulling data out"? The underlying database doesn't really matter for that. Epic supports an extensive set of APIs, including many that implement open industry standards.
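For example, Epic exposes HL7 FHIR endpoints. A minimal sketch of reading a Patient resource might look like the following; the base URL, token, and patient ID are placeholders, and a real integration requires registering an app and doing OAuth2 against the provider's authorization server.

```python
# Minimal sketch of reading a FHIR (R4) Patient resource. The endpoint,
# token, and patient ID below are placeholders, not real values.
import requests

FHIR_BASE = "https://example-hospital.org/fhir/r4"  # placeholder base URL
TOKEN = "..."  # in reality, obtained via an OAuth2 flow

resp = requests.get(
    f"{FHIR_BASE}/Patient/abc-123",  # hypothetical patient ID
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/fhir+json",
    },
    timeout=10,
)
resp.raise_for_status()
patient = resp.json()
# FHIR Patient resources carry 'name' as a list of HumanName structures.
print(patient["name"][0]["family"])
```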
Since they have lots of pull in the industry, it makes sense for Google to partner with them. Possibly Google can modernize them, but I don't think that is very likely; this looks more like building a larger moat for both companies.
Knowing how they think... this is scary to me. It is not inconceivable that they are going to jam their VB UI into some wrapper that somehow runs in an unholy fashion in a web browser...
I'm actually less concerned with the privacy implications (which are concerning) than with the idea of more "AI" in healthcare. AI is a shiny object that does not solve any of the most urgent problems in healthcare, and almost certainly distracts from them, while all the "quick wins" are going to be the usual automation of what should be personal interactions, making it worse for everyone who is not the "average" customer. AI investment in healthcare is for the benefit of the AI providers only. It is not going to benefit patients, certainly not vs other ways the money could be spent (more staff, more beds).
Right now, rich people in developed countries can afford the best doctors. Most people in the world get shitty healthcare. An AI-assisted medical staff can significantly raise the bar.
In many low-income settings, even access to a medical professional means a long wait. I see AI serving a valid purpose of guiding busy humans who have little free time. An AI can take current symptoms and past medical history, cluster the data based on travel history, real-time outbreaks, etc., and come up with candidate diagnoses.
If applied in the US, where a huge number of people don't have access to decent healthcare, who is responsible when a patient is misdiagnosed by an AI and treated for the wrong condition?
If a doctor is responsible for vetting all the answers, then it is questionable how much time will be saved.
All expected relevant information may be collected and presented in a handy report. That is good, but it doesn't require AI.
In order to pick a diagnosis in all but trivial cases, the MD will have to invest real time into it, and at that point a trivial diagnosis is also simpler to make manually.
I suppose the medical treatment can come with an EULA informing the patient that the diagnosis might not be accurate (though that can happen with flesh-and-blood MDs as well).
There isn't any other way, except governments implementing a legal framework of liabilities around AI tech.
> In order to pick a diagnosis in all but trivial cases, the MD will have to invest real time into it, and at that point a trivial diagnosis is also simpler to make manually.
People around the world still die from trivial, preventable illnesses simply for lack of access to a doctor/diagnosis. It's wrong to look at AI as magically solving everything; I see a use for it where it can improve the existing situation.
Our entire scientific journey is filled with failed experiments. It's not possible to stop AI from progressing, but we can and must use an ethical approach.
Please explain how an AI can take current symptoms and past medical history. No AI system has ever successfully demonstrated such a capability, especially not with patients who may have limited literacy.
There isn't much to explain. Existing medical data is processed by the AI. New data is collected by means of electronic questionnaires, with questions chosen in succession based on past responses.
People who have learning disabilities, or who are disabled in other ways (sight, hearing) or generally unable to interact with an electronic system, will require human assistance. The global literacy rate is 87%, and there is room for more than one system, so we can cater to everyone's needs.
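To make "questions chosen in succession based on past responses" concrete, here is a toy sketch. The questions and branching are invented purely for illustration; a real intake flow would be clinically designed and validated.

```python
# Toy adaptive questionnaire: each answer selects the next question node.
# The content below is made up and not clinically meaningful.
QUESTIONS = {
    "start": ("Do you have a fever?", {"yes": "cough", "no": "pain"}),
    "cough": ("Do you have a cough?", {"yes": "travel", "no": "pain"}),
    "travel": ("Have you traveled in the last 14 days?", {"yes": None, "no": None}),
    "pain": ("Are you in pain?", {"yes": None, "no": None}),
}

def run_questionnaire(answers: dict) -> list:
    """Walk the question graph, picking the next node from each response."""
    node, transcript = "start", []
    while node is not None:
        text, branches = QUESTIONS[node]
        answer = answers[node]  # in practice, prompt the patient here
        transcript.append((text, answer))
        node = branches[answer]
    return transcript

# Simulated patient responses:
print(run_questionnaire({"start": "yes", "cough": "yes", "travel": "no"}))
```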
Your explanation is inadequate. The necessary information can't be effectively gathered with electronic questionnaires. This has been tried, and it doesn't work. Your comment is typical of software developers who are ignorant of the basics of healthcare delivery.
> AI is a shiny object that does not solve any of the most urgent problems in healthcare, and almost certainly distracts from them [...] It is not going to benefit patients, certainly not vs other ways the money could be spent (more staff, more beds)
I am baffled. AI clearly accomplished amazing things very recently, at the very least in language and image generation, self-driving, and problem solving as it pertains to games, and it upset a lot of expectations in the process. AI also, clearly, is used as a buzzword to confuse a lot of people (oftentimes including the ones who are using it).
What grand insight could anyone have into what is going on right now everywhere (and, clearly, there is a lot of stuff going on, right now, everywhere) to justify such a sweeping statement?
Maybe I don't understand your point, but as I read it you're saying "AI clearly accomplished amazing things very recently" as evidence that it will be the best (or a comparatively effective) use of healthcare dollars?
IBM Watson used this argument in marketing: "it can play Jeopardy, now we'll focus its powers on healthcare", and all their projects ended in failure.
AI doesn't work on edge cases (and typically doesn't know when it has hit an edge case), distracts from resource scarcity by optimizing averages of shitty metrics (same as most "data-driven" consulting projects), and diverts money from more helpful things. In a world without resource constraints, it would be worth implementing, but for now it's a grift to make money for AI consultants, and it will be worse for patients.
My take on this, although it is not entirely clear from the article, is that Epic wants to move most of their computing to the cloud, and Google felt left out; now they have a hospital chain that wants to partner, which gives Epic a reason to add Google Cloud compatibility (i.e., a customer will pay for the translation). There are a variety of things that might happen to the data once it is in the cloud. AIs can be trained on all the patient data to look for correlations (such as "all doctor X's patients do poorly" or "Unit 12 is responsible for most sepsis problems"). AIs are NOT going to be making diagnoses any time soon; the doctors won't stand for it, and there is no AI capable of it yet. Even AIs that inspect radiology images can only report to a real radiologist, who then has to make their own decision (and is legally responsible for that decision).
Another use of the cloud data is to aggregate data across many institutions, mostly to enable medical studies. The NHS in Britain has all their patients in one database, which allowed them to do incredible studies related to the SARS-CoV-2 pandemic. Currently the US has no such capability; all that data sits in silos in each health institution, each with its own data formats and its own workflow. Get all the data into clouds, all in the same format, and it can be used for medical research across a much larger population. There are privacy concerns, but there are very desirable benefits to everyone. It would be preferable if government got involved rather than having industry do this for profit. The forced change to electronic health records would have been the perfect time to do this, but the lobbyists won instead, so the US health datasphere is still highly fragmented. Migration to the cloud might be another opportunity to fix the problem.
I can tell you that the current focus of AI implementations is on real, impactful issues: sepsis risk, readmission risk, deterioration index, etc.
The problems with AI in healthcare are:
1) People don’t want it to be a black box - that means quantifying the factors that go into a recommendation
2) Operationalizing AI recommendations is hard. AI tends to give graded information on binary decisions (e.g. there's a 68% chance this patient is septic. Should someone go check on them? What if it were 49%?). The challenge becomes deciding how that information should be shown to people and what the acceptable false positive and false negative rates are (see the sketch after this list).
3) The same problems of AI everywhere. Things like garbage in garbage out, unrealistic user expectations, feeling like it basically tells you what you already know, the challenge of getting insight from a pile of data.
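To illustrate #2 with a toy example (the scores and labels below are made up): sliding the alert threshold trades sensitivity against alert burden, and somebody has to pick the operating point.

```python
# Toy illustration of the threshold problem: the model emits a probability,
# but the floor needs a yes/no page. Scores and labels are fabricated.
scores = [0.12, 0.33, 0.49, 0.55, 0.68, 0.71, 0.84, 0.91]
septic = [False, False, True, False, True, False, True, True]

for threshold in (0.4, 0.5, 0.68):
    alerts = [s >= threshold for s in scores]
    tp = sum(a and y for a, y in zip(alerts, septic))
    fp = sum(a and not y for a, y in zip(alerts, septic))
    fn = sum((not a) and y for a, y in zip(alerts, septic))
    print(f"threshold={threshold}: {tp + fp} alerts, {fp} false alarms, "
          f"sensitivity={tp / (tp + fn):.0%}")
```

Lowering the threshold catches every septic patient here but fires more false alarms; raising it misses one. Neither choice is free, and the model can't make it for you.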
No, the problem with AI in healthcare, like much of healthtech, is that it further reduces the ability of providers (especially in hospital settings) to respond to fluid and evolving situations that may fall outside the dotted lines the AI understands or the scenarios the system allows you to work within. Specifically, it creates further red tape that providers need to worry about, more checkboxes on an iPad to be clicked, more time required per patient on administrivia.
It could be done well, but it will be done poorly; it will increase the burden on front-line workers while making administrators feel like they can say they accomplished a big project this year. At the end of the day, rather than making healthcare more auditable, practitioners will learn to just quickly fill in bogus data on the new system so they can go deal with the patient that's coding, and when the AI gives a recommendation a provider doesn't like, they'll just ignore it anyway.
In a good system that wasn't falling apart at the seams, AI in healthcare would be a boon, but in a broken system that's falling apart and failing its front-line workers, it will just serve as a distraction and another burden.
I think what you described falls under #2 in my reply. “Doing it well” is not a trivial option that people are ignoring; “doing it well” is the thing people are trying to solve. AI is not a magic bullet that always makes everything better.
Honestly that's worse than I thought. I work in the field, particularly in relation to accountable AI, and it's not OK to have models that tell you whether to check on people to make sure they're not dying unless there is also a human checking every case, which I hope is what's going on. How would you like to be different than the training data and deemed "no risk, 100% confidence" when you actually have a life threatening problem?
In a hospital setting, nurses and doctors round regularly. No one is talking about using AI as a replacement for that, because no one has anything approaching that much trust in predictive models.
Predictive models are most often used as either an alerting mechanism or an additional data point on a dashboard. You need to be careful of alert fatigue, where too many false positives cause humans to disregard all alerts from the model. And even if you don't get people ignoring alerts, you can waste a lot of people's time and energy by constantly having them run to check on someone who is actually fine.
On the other hand, basic statistical techniques with access to a large amount of patient outcomes may find drug side effects faster. It's kind of crazy that this stuff is not better tracked already. Of course AI is the shiny new thing in the press release.
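For what it's worth, the classic technique here is disproportionality analysis over spontaneous reports, nothing fancier than a 2x2 table. A minimal sketch of a proportional reporting ratio (PRR), with made-up counts:

```python
# Minimal sketch of a proportional reporting ratio (PRR), a standard
# pharmacovigilance screening statistic. All counts below are made up.
def prr(a: int, b: int, c: int, d: int) -> float:
    """
    a: reports with drug X mentioning event E
    b: reports with drug X, other events
    c: reports with other drugs mentioning event E
    d: reports with other drugs, other events
    PRR = rate of E among drug-X reports / rate of E among all other reports.
    """
    return (a / (a + b)) / (c / (c + d))

# Hypothetical: 40 of 2,000 drug-X reports mention the event,
# versus 200 of 100,000 reports for everything else.
print(f"PRR = {prr(a=40, b=1960, c=200, d=99800):.1f}")  # 10.0
```

A PRR well above 2 is a common (crude) screening flag; it's a signal to investigate, not proof of causation.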
Perhaps, but previous attempts by researchers to generate clinical insights by merging clinical records from multiple provider organizations and then doing retrospective outcomes analysis have produced disappointing results. Data quality is generally bad, leading to a fundamental garbage in / garbage out problem. There is little consistency between documentation practices across providers, and many findings aren't recorded at all. So in order to get a useful data set, researchers have to do a huge amount of manual data cleansing. Current AI/ML algorithms are incapable of automating much of this work.
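To give a feel for how mundane that cleansing is, here's a tiny sketch covering exactly one slice of the problem: the same lab result arriving in different units from different sites. (The conversion is real for glucose; everything else about the example is simplified.)

```python
# Toy sketch of one narrow slice of cross-site harmonization: lab units.
# Real cleansing also means reconciling local codes, free text, and
# findings that were simply never recorded.
GLUCOSE_MMOL_TO_MGDL = 18.016  # glucose molar mass (180.16 g/mol) / 10

def glucose_mgdl(value: float, unit: str) -> float:
    """Normalize a glucose result to mg/dL."""
    unit = unit.strip().lower()
    if unit == "mg/dl":
        return value
    if unit == "mmol/l":
        return value * GLUCOSE_MMOL_TO_MGDL
    raise ValueError(f"unrecognized glucose unit: {unit!r}")

# The same physiological value, reported two ways by two hospitals:
print(glucose_mgdl(99.0, "mg/dL"))  # 99.0
print(glucose_mgdl(5.5, "mmol/L"))  # ~99.1
```

And that's the easy part; the hard part is the data that's inconsistent in ways no lookup table can fix.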
It could be greatly beneficial to the patient. Time will tell, but I also expect the financial optimization issues will take precedence over the rest.
Additionally, this is a typical example of the medical industry tackling problems from the wrong angle. We should be improving data retrieval and measurement reliability in medicine (by a LOT) before we can hope to make anything approaching "stable diffusion for medicine"...
Meaning... those with critical conditions will be left in a room (acceptable losses), those with mild conditions will receive a high amount of treatment (ensuring no one moves from mild to critical, overall reducing fatalities).
Long term I expect that advanced persistent security threats will force almost all small and medium enterprises to migrate to cloud platforms whether they want to or not. It takes quite a high level of scale and technical competence to maintain a secure infrastructure against ransomware.
Epic primarily uses the InterSystems Caché database, which is essentially a modernized edition of MUMPS. It is about as "legacy" as any piece of software can be, with parts of the code dating back to 1966, but it still works well, and for certain applications it's still superior to more modern databases.
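A rough analogy for why that model fits clinical data (in Python, not actual Caché/M code): records are sparse trees, so two patients can carry completely different branches with no schema migration, and because globals are kept in subscript order, "all labs for this patient" is just a key-range scan.

```python
# Rough analogy for M-style globals: sorted hierarchical keys, sparse
# per-patient branches. Python stand-in, not actual Caché/M code.
import bisect

records = {
    ("100042", "allergies", "penicillin"): "anaphylaxis",
    ("100042", "labs", "2023-11-02T08:15", "GLU"): 94,
    ("100042", "labs", "2023-11-03T08:10", "GLU"): 101,
    ("100077", "notes", "2023-10-01"): "post-op check, healing well",
}

# M orders globals by subscript, so a range scan fetches one subtree:
keys = sorted(records)
lo = bisect.bisect_left(keys, ("100042", "labs"))
hi = bisect.bisect_left(keys, ("100042", "labs\xff"))  # past-the-end sentinel
for k in keys[lo:hi]:
    print(k, "->", records[k])
```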
BigQuery is not an appropriate database for an EHR.
Epic used to be notorious for lack of interoperability. They now have an extensive set of open APIs and are active participants in industry standards development organizations.
The use of “AI” here is style without substance. The thing that jumps out at me is “Epic workloads” on Google Cloud, which sounds a lot like generating reports in the cloud rather than on-prem(?).
Just in case there's some confusion here - Epic is Epic the healthcare company, not Epic Games. And Epic the healthcare company most likely already has all of your healthcare records.
You don't want Epic to be within miles of your health records? Well, friend, I've got bad news for you. They are the largest Electronic Medical Record (EMR) company in America. 4 out of 5 patients are in their system. Don't worry too much, though, the records are stored in the blessed hell that is their custom DBMS.
I generally agree. It sounds like Epic and Google won't have direct access, but are just giving Epic-using orgs the ability to host in the cloud instead of in their own data centers. Or are they centralizing it?
Hosting sensitive stuff in the cloud is absolutely centralising it, and it gives access to anyone who can issue a national security letter, anyone who joins Google in particular roles, or anyone who just hacks it because someone left the keys in a git repository.
Eh, not exactly a definition I would use. It depends on the location. It's completely different to have Epic manage all the data under their account vs having the org's IT set up their own account/servers/etc. There's really nothing stopping a national security letter from grabbing your health data under the current architecture.
There's a huge difference between the effort required to get one person's records from wherever they are now and the effort required to trawl through a centralized database.
Put all the data in one place and make it easy to access and LEOs are going to treat it the same way they do their other data sources - abuse it up to the point they can get away with. It happens every single time.
> And I guarantee anything that would happen in the cloud is already happening.
Then you should be much more careful about what you choose to guarantee, because you'd be wrong, and that heavily devalues your "guarantees".
Many of the abuses described in the article below are simply not possible when data is distributed amongst many actors. And many of the targeted abuses would never have happened without easy access to these databases.
It simply comes down to this: make data more available, and it will be used more. In theory, it is possible to build audited systems that, combined with oversight, could solve a lot of this. But in practice, audit trails are frequently never built, oversight ranges from powerless to nonexistent, and the extreme deference cops get in this country means this stuff is rarely investigated and results in wrist slaps when it is.
"Then you should be much more careful about what you choose to guarantee, because you'd be wrong,"
Please, do show me how I'm wrong.
HIPAA allows the disclosure of some health information without a warrant. There are plenty of abuses that happen today. Would centralizing increase those? Sure. But they are already happening. And again, I don't see whether these are truly centralized like a DB, or if they're independently on the cloud (or perhaps you didn't answer that because you're trolling).
> perhaps you didn't answer that because you're trolling
You don't get to accuse me of trolling when you're being intentionally unresponsive and obtuse. Just another username I'm not going to waste time on in the future.
Hopefully the law will intervene here. We should have a right to not have our medical data ingested by Google, and Google's already gotten in trouble for this in the UK.
We really need the government to outright forbid Google from healthcare at this point.
In the EU, Google, like any other American-owned firm, is already forbidden from handling personally sensitive information. IANAL, but I cannot see this as legal in the EU in any way. Even before the Google involvement, the mere fact that Epic handles such sensitive data across many European countries is bizarre.
It will be interesting to see how this pans out, and when/if EU watchdogs will discover it and intervene.
Google is not magically unbound by HIPAA, so Google can't actually do much of anything with the data beyond hosting it. They could do some machine learning, but you can't mix data, so it would have to be client-specific, which I see no issue with.