Something that surprises me is that nobody ever talks about efficiency as a trait of a particular piece of regulation.
The goal of a regulation, like a project constraint, is to enforce some outcome. This is often a good thing. However, not all implementations of the "code" of that regulation are going to be equal. There's a huge gap between the desired outcome and the way it's articulated as law; there's a ton of wiggle-room there. And that can make a huge difference in how burdensome a regulation is to actually adhere to. And that can have a huge economic impact.
The left talks about how important regulations are, and the right talks about how inefficient they are, and both of these things can be true at the same time.
What would it look like to ask ourselves how regulations could be "refactored" to achieve the same goal in a more efficient way? Maybe the assumption is that too much political capital would be required to actually put such changes into action. Though, we should at least be asking ourselves this question when drafting new law.
Why isn't there a whole field around "regulation engineering" (not actually suggesting this name, but it gets my point across), including best practices, case studies, etc.?
This is also my intuition, but political and market systems are extremely complex, outcomes are very hard to predict, and there is rarely a one-size-fits-all solution.
Regulation and public service work best in scenarios where the processes and data are well established and accessible.
For example, transport via train, cars, etc. is rather well understood and, because of that, most efficient when regulated well and collaboratively/publicly owned and run. Individual competing actors only create chaos and inefficiencies in comparison. Most clear-headed observers would agree to this, just by looking at current and historical examples.
But the logistics and production of relatively new goods and services still need to figure out their place. This is where a deregulated, free market makes sense as long as certain baseline requirements are met (protection of workers, consumers, the environment etc.)
A field that would be very controversial to socialize is food production. On one hand, privatized food production is wasteful and even literally toxic, a lot of countries see food production as a matter of national security (so they subsidize it), and the logistics and needs of the consumers are well understood. But on the other hand, almost nobody seems to think that it would be a good idea to put it into public hands.
As weird as it sounds, inefficiency can be a feature of regulation, especially for big companies because it creates a slow and expensive hurdle for smaller competitors.
Sometimes, even as a consumer, this is desired, for example when new drugs are coming to market. Sometimes, as you point out, it’s stifling.
Government is also inefficient by design. Imagine the gov’t pivoting to different laws every week - it would be completely unsustainable and would drive people crazy.
To answer your question, “regulation engineering” probably doesn’t exist because it’s designed to be slow.
I don't understand this argument. The goal is not to be intrinsically "slow", but to be suitably "careful". If you can be as careful as needed, but execute that careful process in such a way that no time is wasted, that's a win. The two are separate questions.
To put it differently: high-security systems programmers don't move their fingers more slowly on the keyboards because "going slow is better". They put checks in place, and they take as much time as is needed to do things right, but padding that time just for the sake of "slowness" helps nobody.
> Government is also inefficient by design. Imagine the gov’t pivoting to different laws every week - it would be completely unsustainable and would drive people crazy.
Speed of change has nothing to do with efficiency of implementation. I'm talking about writing laws that are efficient for companies to deal with, not streamlining the law-passing process so that legislators themselves are unburdened. It's a totally separate thing.
> inefficiency can be a feature...for big companies because it creates a slow and expensive hurdle for smaller competitors
This is true, and could be part of the reason this problem hasn't been addressed. But corporate lobbyists are just one of the many forces influencing legislation, and formalizing ideas around "efficient regulation" would only shed even more light on bad-faith attempts at making laws less efficient for the purpose of moat-building.
> To put it differently: high-security systems programmers don't move their fingers more slowly on the keyboards because "going slow is better". They put checks in place, and they take as much time as is needed to do things right, but padding that time just for the sake of "slowness" helps nobody.
"Being careful" doesn't necessarily lead to slowness directly, but making the carefulness verifiable (and making that verification, by an external stakeholder, mandatory) usually does.
This approach (regulation and inspection by a government agency) is generally how society makes actors internalize otherwise external costs, but there are other variations such as government codes and standards.
Fine-grained mandatory process specification is usually the least desirable route to safety, but that's often what companies end up asking for in return for giving them a pass when the process inevitably fails to prevent a bad outcome with a large blast-radius.
However, in some specific circumstances where you don't have another means, directly enforcing slowness in some way may be your best option for at least limiting the damage caused by a failure, even if it doesn't reduce the chances of an error (though sometimes it does that too). Vehicle speed limits are one example, rate-limits on transactions (or comments) are another. In other circumstances, requiring speed (eg. monitoring with a fast response time, quick deployment of a fix) may be the right choice for limiting the damage a failure can cause.
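The rate-limiting example above can be made concrete. Here's a minimal token-bucket limiter in Python, a common way to enforce "slowness" that allows short bursts but caps sustained throughput (the class name and parameters here are my own illustration, not from any particular system):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: allows bursts up to `capacity`
    requests, then refills at `rate` tokens per second. Illustrative sketch."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate              # tokens added per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity        # start with a full bucket
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True   # request permitted
        return False      # request throttled

# A bucket permitting a burst of 3 requests, refilling 1 token/second:
bucket = TokenBucket(rate=1.0, capacity=3.0)
results = [bucket.allow() for _ in range(5)]  # first 3 pass, rest throttled
```

Note that, as the comment above says, this limits the blast radius of a misbehaving client without judging whether any individual request is erroneous, which is exactly the trade-off being described.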
Spouse of a medical practice manager at a mid-sized practice here. Provider data is wildly variable in quality, to be sure. But if the insurance companies are trying to clean the data, they're not using the 'result' to make the system more efficient (except maybe for them, at the expense of everyone else), because "we get bad data" gives them another reason to deny the claim. My wife and all of her peers spend inordinate amounts of their time responding to insurance denials of valid claims, quoting chapter and verse of ICD-10. Some insurers are so notorious (as in 'deny every claim up front...make them work for it hoping they'll give up') my wife drafts the response to the denial along with the initial request, because she knows it's automatically coming. This makes the automation possibilities that electronic patient management systems offer less effective, because to get paid the practitioner still has to manually intercede in way too many claims.
Another bit of collateral damage is the increasing number of providers who no longer take insurance of any kind and put the onus on the patient to file (and fight) with the insurance companies.
Absolutely. But it happens on the opposite side as well. I can say that major insurance companies have to deal with major hospitals (in addition to individual practices) miscoding stuff.
Not even because they want more money! It's more a matter of wrong vs. correct coding, done just because "that's the way they've always done it" (and they're used to the insurance company fixing it on their side).
The biggest benefit of the move to automated processing and electronic records is it doesn't leave room for Dr. Sue and Mr. Green to have a non-policy understanding on how to handle claims.
It got things done, but it made it impossible to scale when you were trying to untangle 1,000,000 "special cases."
If you don't mind, why exactly? You're worried the care will get screwed up? Or you worry that someone will steal that data and charge more for insurance?
I'm just wondering about why exactly the paradigm doesn't totally work here. I get that we don't want medical devices failing, but that's different than charting. And I get it that we don't want everyone to have your data. But risking that someone does a data copy vs. reducing healthcare costs seems perhaps a risk worth taking (and it's not like the insurance companies who actually charge us money don't already have it).
Move fast and break things is a really unfortunate name. In my experience, the process of continuous deployment, and the automation and definition of processes to do it well, bring more stability than the "move slow and keep things stable" environments. When you deploy once a month (or less!), you view deployment as a one-off thing. When you deploy every day or every week, you view releases as a regular part of development. That change in mentality is critical to stable releases.
"move fast and break things" was Facebook's internal engineering motto when I started working there, but it was later (~2016) changed to "move fast and be bold"). The new motto is really just saying what the old motto meant but in a less hyperbolic way.
Hi. "Top developer" (if that can be really quantified in the way you mean, I almost certainly qualify) here.
What keeps me from returning to the medical space is not trusting other people to take privacy and data security (not HIPAA and definitely not HIPPA) seriously enough. I don't feel sufficiently aligned with the decisions of any medical company I've worked for to compromise my ethics for them.
The asks HIPAA makes are minimal, largely reasonable where they exist, and are more about responsibility and management than anything a "top developer" has to care about.
I think you're underestimating the cost of not having good regulations. Not all regulations are bad, and not all barriers (cost or otherwise) imposed by regulation are bad -- often it's been judged that the alternative has more, often externalized, costs.
What do healthcare costs and deaths look like if we removed all our environmental regulations? What's the cost in terms of people not getting treatment for things because they don't want it to be publicly known?
Regulations usually don't do what those who advocate for them say they do. They are generally created to benefit some corporations at the expense of consumers and/or other corporations.
I'm not convinced environmental regulations are a global net benefit to the environment, and I can't see how privacy regulations could possibly be beneficial, as customers who want privacy protections create a market demand for them. Even in the extremely unfree healthcare market in the US, providers would improve their data security if they thought people were not getting treatment because they worried about their privacy. Regulations create a false sense of security, incentivize companies to keep data breaches and past vulnerabilities secret, and make developers spend time complying with ineffective or harmful requirements instead of actually improving security.
I think one should be very sceptical of regulations and other coercive alleged solutions from governments, especially as politicians have a very strong incentive to serve the corporations who fund their campaigns rather than the voters, who rarely even get to hear the name of a candidate not supported by corporate special interests and wouldn't vote for them anyway so as to not "throw away their vote".
> HIPPA terrifies so many IT people. Driving away top developers.
I've worked in healthcare tech stacks. It's like saying the GDPR is driving away top developers. Not true in the slightest. The problems with making money in healthcare are business related, not developer related. HIPAA is just another set of rules to abide by when creating systems.
Even when good, the cost is big.
Health care is a good example. Keeping patient data private is a good thing.
However, it’s also absolutely crushing in its impact on the industry.
Tiny projects can take months or years.
HIPPA terrifies so many IT people. Driving away top developers.