Maersk, Me and NotPetya (gvnshtn.com)
195 points by omnibrain on June 21, 2020 | hide | past | favorite | 57 comments


I can fully recommend reading the book "Sandworm" by Andy Greenberg. It explains NotPetya and all of the surrounding investigation.


There's also a really good Darknet Diaries podcast episode on NotPetya: https://darknetdiaries.com/episode/54/

It's a brilliant podcast - there's some very interesting stories and interviews, my favourite recent one being The Courthouse: https://darknetdiaries.com/episode/59/


Darknet Diaries is moderately engaging - but overly dramatic. They also dumb things down a bit (I assume to attract a wider audience?).

I still find myself listening occasionally, so it's hard to criticize too much.


In the same genre, I recommend "Countdown to Zero Day" which looks at Stuxnet and Flame and the events surrounding their creation, deployment and aftermath.


Just an aside, I have met Kim a few times at various conferences. She is an interesting person, in a good way. She came to the book as an infosec neophyte. We talked about her process as something of a technologist outsider and how she decided what to focus on. Ultimately, she decided the storytelling was far more important than the technical details, though she felt those were incredibly important as well.


I'm a big fan of Countdown to Zero Day. I've recommended it to a lot of non-techie friends as it's a great blend of pacey thriller with just enough information to help a non-techie understand some basic elements of a sophisticated malware campaign.


Fantastic book! I only wish there were more in depth books like this. I've read Countdown to zero day as well almost immediately afterwards and was similarly engaged.


It's an interesting article, and I agree with a lot of it, but I never manage to run a Windows desktop without local admin.

Some examples: Thanks to corona, tons of people started using USB headphones. The bloody things need local admin for almost weekly firmware updates. No idea why, but if you don't do them, a Windows update for the driver will break them soon.

VPN! If you dare to log on with alternate credentials, it ends your connection. Hence any admin work on a remote machine can only be done by a local admin.

Banking software pushes an update and immediately refuses to do any payment until you upgrade. The (nice) people who should package that upgrade are swamped and need months, at least if you manage to get a budget to let them package it. After that, an infosec review might take more weeks.

Printer drivers. No admin? No printing! Bonus points if vendor decides to publish the driver in the app store, which is blocked by group policy for everyone including admins.

Ctrl-Alt-Del needs local admin to kill a task.

As a bonus, infosec is the biggest hurdle: if they take weeks to approve any kind of admin access, and keep asking bureaucratic questions, only big problems that burn for weeks are worth solving. We have expensive software sitting unused because no one wants to fight the battle to get access to fix it. I would love to drop some privileges, if only I trusted I could claim them back when shit hits the fan.


Related talk at Black Hat 2019: "Implementing the Lessons Learned From a Major Cyber Attack", by the Chief Information Security Officer of Maersk:

https://www.youtube.com/watch?v=wQ8HIjkEe9o


As someone with very limited Microsoft/Windows background, I would be curious to somehow better understand how these lessons would apply to the Linux world.

What are the Linux equivalents of pass-the-hash, TAM/PAM/PAWs etc?


They don't directly translate, due to the inherent differences between the two systems.

In short, pass-the-hash is a technique by which it is possible to authenticate to a windows system using the hash of a password, instead of the password itself. The NTLM hash is the secret, and does not need decrypting to authenticate.

NTLM authentication over the network can be redirected to other machines if they don't have traffic signing enabled (default only for domain controllers). So this gives rise to 'spreading' over the network in two ways:

* Steal the hash out of the memory of a system where you've got root access (called SYSTEM in Windows terminology).

* Trick an administrator's system into connecting to your system somehow, and redirect the authentication to another system to take control. There are various techniques to do this, which I won't explain in this answer.
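The core problem above, that the hash itself is a password-equivalent credential, can be sketched in a few lines. This is a toy model, not real NTLM: the real scheme is MD4 over the UTF-16LE password (and MD4 is often unavailable in modern hashlib builds), so sha256 stands in here, and the `Server` class is purely illustrative.

```python
# Toy illustration of why "the hash is the secret" enables pass-the-hash.
# sha256 stands in for NTLM's MD4; the Server class is hypothetical.
import hashlib

def ntlm_like_hash(password: str) -> str:
    # Unsalted and deterministic: every user with this password,
    # anywhere, gets exactly this hash.
    return hashlib.sha256(password.encode("utf-16-le")).hexdigest()

class Server:
    def __init__(self):
        self.stored = {}  # username -> stored hash

    def register(self, user: str, password: str):
        self.stored[user] = ntlm_like_hash(password)

    def authenticate(self, user: str, presented_hash: str) -> bool:
        # The server compares hashes directly, so the hash itself
        # works as a credential; the plaintext is never required.
        return self.stored.get(user) == presented_hash

server = Server()
server.register("JimSmith", "hunter2")

# An attacker who dumps the hash from a compromised box can
# authenticate elsewhere without ever learning the password.
stolen_hash = ntlm_like_hash("hunter2")
print(server.authenticate("JimSmith", stolen_hash))  # True
```

Real NTLM wraps this in a challenge/response exchange, but the essential property is the same: whoever holds the hash can complete the exchange.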

Given this known weakness, TAM/PAM/PAWs are procedures and a tiering architecture to prevent those secrets from being compromised.

A PAW, privileged access workstation, can be seen as an equivalent to a linux sysadmin's bastion host, roughly. It contains all private keys to all systems, but is well segmented, audited, and protected. This is the system that you use to perform administrative tasks that can't be done with any lower level of privilege. Say, the system that has the root account to all your production servers, for example.

PAM is the set of policies around logging when highly privileged accounts are used, which systems they can access with what privileges, who can use them, how to approve actions by them, etc.

In short, they are the frameworks and policies used to combat the security weakness of these legacy protocol designs, and the reality of running big networks with guaranteed attacker activity in it.


> It contains all private keys to all systems

Hopefully it doesn't? That would be poor design. It typically is just on a network segment that the firewall rules allow it to access the other servers.


Well, not literally. But it is meant to be the system that is used to gain root access to your domain controller to perform administrative tasks there. Install updates, fix issues, that type of thing.

So, although it does not literally house all passwords/keys/whatever to your network, it has access to a system that indirectly does.

Normal jump hosts should not have your private keys I guess, but I thought it was the closest analogy.

Just put it this way: if an attacker gets on that system, it's complete game over.


If it contains all private keys that would indeed be a bad design. Maybe what awd meant is that it contains a private key that all systems trust. That would make more sense.


I'm guessing you know what a password hash is and roughly how password hashing works?

Microsoft's systems don't like to send your plaintext password over the network. Rather than either get rid of passwords or at least secure that so it isn't a problem any more, they do the hashing on your machine and send the hash to wherever it needs to be authenticated. This behaviour enables Pass-the-hash.

Since we can authenticate with the hash, not the password, we don't even need to know the password. If we break into one system that knows the password hash for JimSmith in order to authenticate JimSmith then we can tell other systems we're JimSmith and present that password hash and they'll accept it.

So malware that gets enough rights on a machine with a bunch of people's password hashes effectively gets the ability to log in as them on other machines, on which maybe it can ascend to equivalent rights and get more hashes, which it can use again, recursively.

Pass-the-hash isn't a thing on Linux itself, it's a consequence of crypto-illiterate design in Windows. Arguably that design pre-dates modern Windows, ie it isn't the fault of the people who built the versions of Windows you use today. On the other hand, though, rather than just outlaw this behaviour entirely they've chosen to try to mitigate the worst effects, and whose fault is that?


Linux isn't vulnerable by default, because it's missing features by default. It has no equivalent of Active Directory, and doesn't use Kerberos or anything like it by default.

However, it can, at which point you're back to the same problem. The vulnerability is with the protocol, not the operating system.

Modern versions of Active Directory enable strong protections for Kerberos that almost entirely stop the majority of Pass-the-Hash and Golden Ticket attacks. However, this isn't on by default even in Windows Server 2019 running a domain in 2019 mode, for "compatibility" reasons.

I put that in air quotes because it's an excuse, and this is where Microsoft has consistently dropped the ball. They refuse to change security defaults, even when it starts getting absurd, and then lay the responsibility (and blame) at the feet of their customers.

For example, domain trusts between two Windows Server 2019 DCs will use NT4-era RC4 ciphers by default, downgrading all AES-capable devices across the trust.

Similarly, newly created accounts will always default to RC4, allowing downgrade attacks.

SMB is neither signed, nor encrypted by default.

Up until very recently, Windows Server had TLS 1.1 and 1.2, but they were disabled. Now they're enabled, but so is TLS 1.0!

So on, and so forth.

That's the real issue. It's not that Windows is "crypto illiterate". That's like a person who can't read. No, it's like a person that can read but refuses to.


> Linux isn't vulnerable by default, because it's missing features by default. It has no equivalent of Active Directory, and doesn't use Kerberos or anything like it by default.

Could you say more about why you think so?

From what I remember, Linux in this regard is exactly the same as Windows: you have to install and configure Samba 4 or FreeIPA in order to get a kerberized domain; you also have to join the clients into the domain exactly the same way you would join Windows clients.

In Windows, your servers out of the box won't run the necessary services, you have to install the AD DS server role. You have to join the clients into domain.

So there's nothing like AD or Kerberos on Windows by default either.


> They refuse to change security defaults, even when it starts getting absurd, and then lay the responsibility (and blame) at the feet of their customers.

They changed a lot of security defaults with Windows Vista and literally (figuratively) everybody dumped on them. It got called the worst Windows ever, unusable, and names I don't want to spell out, by the public and the press. That made them reluctant to attempt such drastic measures again. But at least they disable SMBv1 by default nowadays.


The crypto wasn't at all the criticism most (any?) people had with Vista.

My criticism is that they didn't implement the Vista-era crypto enough.

In 2020, most Microsoft software doesn't support ECC certificates because their server products are still written to use the 2000/XP/2003 era crypto APIs instead of the Vista and later crypto APIs.

I remind you that none of those operating systems are supported any longer, but apparently for "compatibility reasons" SQL Server 2019, AD FS 2019, and System Centre 2019 can't use elliptic certs. Or use TPM-hosted certs. Or anything at all really other than RSA 2048-bit certs stored in software.

IIS can, but that's the lone exception, not the rule.


First of all, we should distinguish: while Golden Ticket is potentially viable on other Kerberos and Kerberos-style systems, Pass-the-hash is a security vulnerability per se, and outside of Windows it would just be given a CVE number and you'd be expected to fix it, not write that "Mitigation is not practical" and act as though it isn't your fault.

In fact it has been given a CVE number and fixed in obscure systems where it was done; only Windows gets to shrug and say it's hard, so we won't fix it. E.g. CVE-2005-3435.

In Windows, every user - anywhere in the world, not just within one organisation - with the same password has the same hash. Worse than a PHP app from the turn of the century, their most sophisticated password hash scheme is MD4(password). This makes the mistake that results in Pass-the-hash almost irresistible.

Because a Linux system uses salted (and pessimised) hashes, it is not tempting to try to authenticate remote hashes because you're going to have to build a multi-step protocol, passing parameters to the client so it can perform the hash. You'd probably instead look at existing protocols and discover SRP (or in the modern era any sensible asymmetric PAKE). This immediately shows you a better path forwards with no Pass-the-hash.
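The salted-versus-constant contrast can be shown concretely. A minimal sketch, assuming sha256/PBKDF2 as representative primitives (Linux crypt actually uses schemes like sha512crypt or yescrypt, but the salting property is the same):

```python
# Sketch: why salted (and deliberately slow) hashes resist
# pass-the-hash style reuse, while a constant unsalted hash invites it.
import hashlib
import os

def salted_hash(password: str, salt: bytes) -> str:
    # PBKDF2 is the "pessimised" part: deliberately expensive to compute.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000).hex()

password = "hunter2"

# Two machines each pick their own random salt at enrolment time.
salt_a, salt_b = os.urandom(16), os.urandom(16)
hash_a = salted_hash(password, salt_a)
hash_b = salted_hash(password, salt_b)

# Same password, different stored values: a hash stolen from
# machine A matches nothing on machine B, so it can't be replayed.
print(hash_a != hash_b)  # True

# An unsalted NTLM-style scheme gives every machine (and every user
# worldwide with this password) the identical 16-byte value.
unsalted = hashlib.sha256(password.encode()).hexdigest()
print(unsalted == hashlib.sha256(password.encode()).hexdigest())  # True
```

This is exactly why a salted design pushes you toward a real multi-step protocol (SRP or a modern asymmetric PAKE) instead of shipping the stored value around as a credential.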

But because Windows has this constant 16-byte hash it is tempting to use that for authentication. You can do a bit of hand waving to avoid confronting how insecure the result is:

‣ Bad guys can't possibly know this 128-bit value because it's MD4(password) and there's no way to find that without knowing password. Therefore we only need to check that the 128-bit value is correct and we've authenticated the user.

‣ Storing the 128-bit value locally is fine because it's MD4(password) and you can't reverse MD4 to get the password back so it's not a secret.

Only when you see these arguments next to each other is it obvious this is absurd. So long as they're on different pages of a document, or preferably made in entirely separate discussions at Microsoft, the mistake goes unnoticed.

You've engaged in the same equivocation that Microsoft uses by adding "or Golden Ticket attacks". Microsoft documents about Pass-the-hash routinely argue that this is just the same as any other stolen-credentials attack, so it's not Windows at fault; nobody could be expected to do better. This is definitively wrong and we need to call them out on it.


> On the other hand, though, rather than just outlaw this behaviour entirely they've chosen to try to mitigate the worst effects, and whose fault is that?

That's the fault of reality. Disable this today and big chunks of the world will just stop working. That's why they try to push to disable their stupid design mistakes of the past instead of just disabling them cold turkey.

Testing whether disabling a weak auth option won't hurt anything significant within your typical medium to large organization with 20 to 30 years of IT history is a time-consuming operation and a major pain.

Any mistake can be extremely costly - think of a few hundred expensive employees, doctors, heavy equipment operators, whatever - sitting without their working tools for hours or days because of a stupid security checkbox.


Other answers are good, but no one mentioned the *nix equivalent of PTH. Admittedly it's not exactly the same, but from an attacker's perspective it can be used to similar effect. This equivalent (which works on Windows as well, btw!) is pass-the-password.

There are two requirements for this to work, both much more common in the wild than the hip HN DevOps crowd would like you to believe:

  1. root or other privileged accounts share the same password across the internal infrastructure and SSH is configured to allow password authentication for root (that's not the default, but it's usually enabled when admins believe that the network access is sufficiently restricted anyway)
  2. requirement for admins to normally log in as an unprivileged user and gain superuser rights when necessary through su or sudo (for auditing purposes)

The attacker gains superuser access on one system, then tricks an admin into logging in and gaining root privileges, disclosing their password in the process (by swapping binaries, process injection, reading memory, rootkits, etc).


> The attacker gains superuser access on one system, then tricks an admin into logging in and gaining root privileges, disclosing their password in the process (by swapping binaries, process injection, reading memory, rootkits, etc).

Just gaining access to the admin's login account (ie. one from which the admin runs SSH) is enough - alias sudo=/home/admin/.trustmeimadolphin.sh in .bashrc and wait.


As someone similarly unacquainted with this area, it was interesting to see the strong Microsoft influence here.


SSH Agent Hijacking comes to mind. Technically not the same, but the outcome of self-propagating malware is the same.


All in all very interesting; however, some of the strongly stated opinions in the article lack justification, and that's a pity.

I don't mean that the author is wrong, just that he states his opinions strongly as facts; for instance, regarding ADFS vs SSO with Hash Sync, the claim that the latter is much better is stated as an obvious fact, without much explanation or justification.

Since not everyone would agree, for instance some security teams I have the pleasure of working with/against, more facts and reasons and fewer assertions would have been better.


On a geographical note, if you can get a chance to work/contract in Denmark, absolutely do so! I worked in Copenhagen in 2014-2015 and I have only great memories of the country and the capital.


What about the language? I've heard that the best way to learn Danish is to be born to Danish parents.



Great article and good bottom line advice.


Not sure if the advice is actionable, especially in light of the fact that decisions are ultimately made by 'tops' and consulting companies.


By that logic, nothing is actionable because someone else might prevent it happening. It being derived from the Maersk incident is probably among the better arguments for pushing it against that sort of resistance, since that is something business higher-ups have heard of and are scared by.


Well, the PAW is nothing more than admins at least putting their browsing into a VM (which costs roughly 0€...). And seriously, I never thought that doing otherwise was a thing. But hey, apparently I am too far removed from sysadmins...


devops should be done from 2 systems.

Dev (local administrator access ok, production access not ok)

Ops (local admin access not ok, production access ok)


Nobody should directly have access to production, it should be controlled via CD flows which are gated on approvals from other team members or metrics.


I can see that being somewhat impractical in real life, but you’re not wrong.

In the ideal setup NotPetya would have been less of an issue, had Mærsk only allowed whitelisted software to run on computers controlling critical infrastructure. It's just a solution very few choose to deploy.


How would that have helped? The finance software that started the breach was legitimately needed and would have been whitelisted.


One of two things:

Either the malware modifies the finance software and is executed as part of it, but the checksum for the software is now different, so it can't run.

Or: The executable malware code is separate and only triggered by the finance software, which will fail to execute it, because the malware isn't a whitelisted application.

At any rate, the malware would never be able to escape beyond the finance software computers. This means that yes, you could have some issues with invoicing, new orders and so on, but you most likely wouldn't have to shut down ports, because the computers there aren't allowed to run the finance software.


NotPetya authors penetrated the accounting software vendor and planted their attack code in a regular update.


I am with you on this.


Very impressive write-up.


Why would any of the proposals provide any meaningful protection against this threat model?

Maersk claims NotPetya cost them $250M to $300M [1]. Assuming a criminal organization could demonstrate to Maersk that they could do an attack with similar effects they should be able to extort Maersk for a similar amount of money. If we discount due to unknown information, ROI, etc. I think it is reasonable to say that an extortion demand for $100M, assuming a credible demonstration that the criminal organization could pull off such an attack, would be an economically sound demand and likely to be paid. A criminal organization, considering their own ROI, would probably be willing to invest $30M for a $100M return. For $30M an organization could hire 30 full-time security specialists for 3 years at SV wages to identify an attack with similar effects.
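The arithmetic behind this argument is worth laying out explicitly. All figures below are the commenter's own assumptions (the $100M demand, $30M attack budget, and headcount math), not reported data; only the $250-300M damage figure comes from the cited Wired article.

```python
# Back-of-envelope check of the extortion economics sketched above.
# Every number here is the commenter's assumption, not reported data.
damage = 300e6       # upper end of Maersk's reported NotPetya cost
demand = 100e6       # hypothetical extortion demand (discounted from damage)
attack_cost = 30e6   # hypothetical attacker investment

# The attacker's return on investment under these assumptions:
attacker_roi = demand / attack_cost
print(round(attacker_roi, 1))  # ~3.3x

# Maersk's "return" on paying: avoid 300M of damage for a 100M payment.
victim_ratio = damage / demand
print(victim_ratio)  # 3.0

# $30M also buys roughly 30 specialists for 3 years:
per_head_per_year = attack_cost / (30 * 3)
print(int(per_head_per_year))  # ~$333,333/year, plausible SV total comp
```

Whether these ratios actually hold is exactly what the replies below dispute, e.g. that the ransom a victim would rationally pay is bounded by what paying *saves*, not by the total damage.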

Does anybody here think that their new system could resist such an attack even assuming they adopted all recommendations proposed?

Does anybody here think that there is any deployed system in the world that could resist such an attack?

Does anybody here think that even adopting and correctly practicing all practically deployed recommendations of the security industry that a system could resist such an attack?

All of my research points to no on all of those fronts. And, assuming the answer is no, then adopting all of the recommendations provides no meaningful protection to Maersk or any other company in a similar position since it would still be extremely profitable to attack them. Therefore, any company in a similar circumstance should probably not be deploying connected systems that allow this level of attack.

If the answer to any of those is yes, could you provide an example and evidence that supports that claim? I would sorely like to find a credible deployed case.

[1] https://www.wired.com/story/notpetya-cyberattack-ukraine-rus...


This is ridiculous. You're basically saying that having any security at all is pointless because someone will always still be able to break into your system in some way.

Nothing could be further from the truth. Only ignorant amateurs believe that security is an all-or-nothing game.

By making your system more difficult to break into, you:

* increase the effort and thus cost for the attacker, thereby reducing the number of opponents that can successfully attack you

* reduce the damage they can do before the attack is discovered and stopped

* make yourself a less attractive target compared to others

> Therefore, any company in a similar circumstance should probably not be deploying connected systems that allow this level of attack.

That is simply not an option. You'd increase operating costs far more than the damage caused by this attack, and at the same time lose capabilities that customers have come to expect. Most of the cost of this attack came from having to operate without connected systems.


No. I made a very specific statement about the cost of attack relative to the benefit of attack for this class of attack. The cost of attack is so far from the benefit of attack that there is no meaningful defense being offered. To use an analogy, making a tank from paper provides more defense than tissue paper. That does not, however, mean that there is any meaningful defense against credible threats. To provide a theoretical example, which should not be misconstrued as my specific belief on effectiveness, if the existing techniques could only stop an untrained child, but the best techniques in the world could only stop an untrained teenager, I do not think anybody would consider that to be a meaningful defense.

To go through your arguments in order:

Increasing the effort to attack is meaningful if it reduces aggregate harm in excess of the cost of implementation. In the specific case of the NotPetya attack on Maersk and generalized to similar attacks, there is no evidence that any of the proposed measures would meaningfully reduce the probability or raise the cost to be other than extremely profitable (this is my statement that $30M cost of attack is profitable). This is because the benefit of attack is so high relative to the cost of exploit development and deployment. So, in the specific case of techniques designed to prevent large, valuable attacks, it has no significant impact since you would need to raise the cost of such an attack to around the benefit of doing such an attack.

Reducing the damage they can do would be useful. The damage done in the Maersk attack occurred over the course of a few hours at most. Any defense against such a technique would need to already be prepared or completely automated. From my reading, damage mitigation usually occurs far after and mostly only prevents the marginal long tail, so I will contend, as in my previous post, that an organization with $30M in funding would be able to do the same amount of damage to any system given that they have the element of surprise and reconnaissance.

Making yourself a less attractive target is only meaningful if nobody wants to attack you in particular, if it is not easy to wantonly attack all vulnerable parties, and if the attacker does not have enough resources left after attacking all even more profitable targets, since, as we stated before, it is still very beneficial to attack you. For the first, that is a bad bet when running a large-scale multinational. For the second, that is literally what software is good at: mass synchronized automated attacks. NotPetya is literally an example of a wanton attack. It is mentioned in that article that Maersk was not even the target. They were accidentally attacked for $250M in damages. Being a less attractive target means nothing if somebody has a weapon that hits all attractive targets at the same time for no extra effort. And for the third, that is a terrible bet in the long run, because profitable attacks mean the attackers have money after each one, so they will have money to spare to go after you. The only comfort is that it may take them time to hockey-stick to the point where they can saturate the market, but saturate they will. We are already seeing this with the increase in attacks with meaningful economic upside for the attacking parties, instead of cheesy little $200/computer attacks.

Not deploying vulnerable systems is always an option, it just depends on the cost-benefit analysis as you state. My thesis is that the cost of vulnerable systems is, in the long run, significantly worse than almost all companies realize and there is no effective solution. My justification for this is that I am firmly of the belief, as my questions above indicate, that there is no company that can defend against a $30M attack and that a $30M attack can easily and credibly cause $300M in damages, and, even assuming good-faith extortion so they do not just extort for more money with the same attack, there are enough more $30M attacks that can cause $300M in damages that any such company will go bankrupt either from paying the extortion or from the extortion following through on their threats.

As a thought-experiment to go with this, if Maersk offered a $30M bounty for each unique vulnerability discovered that could cause them over $300M in damages, do you think they would run out of such bugs first or go bankrupt first? If the answer is "bugs first", why do they not offer such a bounty since each such vulnerability discovered is at least a 10:1 ROI for criminals and thus would be highly attractive to discover?

Just to get ahead of a common response to the above thought experiment corollary. Some people will respond that companies do not need to offer that much to get such vulnerabilities reported to them. This indicates that the problem is even worse than I stated since the cost of discovery is lower which means the criminal ROI is even higher. If they offered $30M for all such vulnerabilities they would be more likely to remove the highly attractive 10+:1 ROI attacks that can do tremendous amounts of damage to them which is a great ROI for the company.


You are wrong to focus on 300M. That's the cost of dealing with the consequences of an attack, not the cost of measures that would have prevented it. So, you're right to say that attacking some businesses leads to >10 ROI for an attacker (actually much higher ROIs are quite common, although with lower thresholds), but that's only assuming that these businesses do not invest in proper protections ahead of the attack.

The whole point of InfoSec is to find the right balance of investment into preventive measures and incident response teams so that the cost/risk/reward ratios of an attack make it non-viable for an economically motivated attacker.


I agree with your statement on the point of infosec. I disagree that any particular infosec organization is equipped to deal with problems of this class in any meaningful way. In fact, it is so far off as to be mind-boggling and probably criminally irresponsible.

To this end, I will clarify what I meant.

I believe that an attack funded on the order of $30M would be able to do $300M in damages to Maersk even if Maersk adopted best-in-class preventative measures and implemented them as a primary focus with support from management at all levels. An attack, able to do $300M in damages that Maersk can not prevent after we have assumed it already did the best it possibly can, should be able to support a $100M extortion payment. This is an ROI of 3 for an attacker with high threshold and an ROI of 3 for Maersk, so I think this is a valid assessment.

So, a counterargument/example is an organization where an attack funded on the order of $30M can not impact operations by more than 1%. I chose the number of 1% because Maersk has a revenue of $39B, so a $300M attack is only ~1% reduction in company output. I hope this clarifies my statement.

I also stated in a different response that I believe that the number of $30M attacks that could do $300M in damages probably exceeds Maersk's profit if they paid for all of them at $30M, let alone paying extortion at $100M or having the attack follow through at the cost of $300M. Therefore, in the long run the potential market size is enough to destroy Maersk.

As a mildly related note, if anybody here is a member of the infosec community I have a question:

How much do you think it would cost for a targeted attack to breach and cause significant damage to the best system you have ever personally observed? How did you verify that number? Three pentests by three different competent companies, paid that amount, that found no vulnerabilities of note would be convincing. I would likely find other things on that general level convincing, but I cannot declare them off-hand.


> I disagree that any particular infosec organization is equipped to deal with problems of this class in any meaningful way.

FAANGs are bigger targets and yet none of them suffered anything remotely similar.

> An attack, able to do $300M in damages that Maersk can not prevent after we have assumed it already did the best it possibly can, should be able to support a $100M extortion payment.

That's incorrect. I already mentioned it in another reply, but in short it does not matter that the attack caused 300M in damages. What matters is how much Maersk could possibly save by paying out a ransom. And that's a fraction of those 300M, not even including secondary effects such as potential problems with the tax office, reputational damage, or having to deal with becoming target #1 for every other ransomware crew out there - as they just proved that they are ok with paying out for such "unsolicited penetration tests". It also misses that even if Maersk paid out 100M, the criminals wouldn't have any way to actually benefit from all of it, as laundering such an amount of BTC isn't trivial, and historically that's exactly the step with the highest risk for the criminals.

> How much do you think it would cost for a targeted attack to breach and cause significant damage to the best system you have ever personally observed?

That's a non-answerable question as it does not mention other resources that are available to attacker besides pew-pew internet weapons, restrictions that they potentially face, the risk they are comfortable with, and basically all of these questions applied to the defensive side as well (i.e. on a furthest side of the spectrum there are certain systems, attacking which would put you above ISIS leaders on a to-be-droned-soon list).


That is not an unanswerable question at all. To clarify, I am literally asking for a simplified threat model. Take an existing threat model, reduce it to cost of doing those actions, done. Order of magnitude is fine. If there are parameters, pick a set of parameters within the non-totally-stupid range and state them. Estimate when reasonable. The question is just me looking for broad strokes anecdotes.


> To clarify, I am literally asking for a simplified threat model.

That's a simplification beyond any usefulness. You wouldn't be able to do as much damage with a $100k budget in a few months as a well-staffed national agency could in a week.

> The question is just me looking for broad strokes anecdotes.

Even Jeff Bezos wouldn't be able to orchestrate a cyberattack that crashes the International Space Station with astronauts aboard.


I do not care which parameter set you choose if you actually want to answer the question. Pick one, state the parameters, and then answer it.

To illustrate:

How much damage could somebody do with a $100K budget in a few months to the best system you have been personally involved in?

How much damage could a well-staffed national agency do in a week to that system?

For all credible adversaries that could cause $X in damage, choose the 20th-percentile-cost adversary: how much would that adversary cost?

The question is also specifically limited to systems the answerer has worked on, to avoid speculation about others' practices or a "grass is always greener" mentality. Did you work on ISS software or software security?

Jeff Bezos has over $100B. Therefore, I take your answer to mean:

With $100B, nobody could orchestrate a cyberattack that could crash the ISS?

The cost to develop the Stuxnet attack has been estimated at $1M by former NSA director General Hayden. This is likely an underestimate, accounting only for the cost of the exploit itself. Kaspersky Lab claims it cost on the order of $100M to develop and deploy. So, let's take the high number and multiply it by 10, leaving the cost of disabling the secret, air-gapped Iranian nuclear weapons program at $1B. Do you think the cost of a critical attack against the ISS is 100x higher than that of a critical attack on the Iranian nuclear weapons program?

Please avoid limiting your imagination to just a direct attack on the ISS itself. There are multiple entities which, when attacked, would likely be able to de-orbit the ISS and kill all the astronauts. Please verify that none of these could occur for less than $100B: taking over a rocket to the ISS, taking over a rocket to LEO, active satellites in the correct orbital plane, decommissioned satellites that are no longer tracked but have enough fuel to intercept, scientist laptops that connect to the ISS network, over-drawing the laptop batteries so they blow up while on the ISS, etc. This also ignores more clever tactics you could pursue with $100B, such as buying a company that directly supplies critical needs of the ISS and then inserting backdoors into its software.


You're still thinking all-or-nothing, talking about "exploits" or "vulnerabilities" where finding one lets you take over the whole system at once.

That is not how these attacks happen, that is not what happened at Maersk, and that is not what the article is about.

The breach happened via compromised third-party software installed on a small number of regular workplace PCs. The big damages happened largely because sloppy privilege management allowed the attackers to hop between systems and gradually acquire more credentials and more privileges via weak or reused passwords and accounts shared across systems and functions, until they controlled thousands of PCs and servers across the entire international company.

This kind of thing is a process, and the amount of damages depends mostly on how fast the attackers can gain more privileges vs. how soon they are noticed and countermeasures taken.

Better privilege management, as suggested by the article, can easily slow down that process to a point where it won't cause significant damage.

And your whole extortion argument hinges on the idea that an attacker can demonstrate the ability to cause X amount of damage without actually doing it or giving the victim information that can be used to prevent the attack. That is not how it works in reality, except in the case of DDOS attacks, which are an infrastructure rather than a security issue.

In reality, the amount of damage an extortionist hacker can put a price on is only the amount they can undo or provably refrain from doing in the future. Specific examples of this are mass-encrypting data and threatening to publish trade secrets. And those are things we see happening in reality, but the amounts are much smaller because they cannot include the large-scale disruption of operations that Maersk experienced.

By your logic, the form of extortion you describe should already be happening, constantly, to every large company on the planet. It should be the most profitable form of organized crime ever.

But that is not happening in reality. Because in reality, security and security breaches don't work like you seem to think they do.


No. I made no claim about attack procedure. I said an attack that would cause X dollars of damage, which can include instantaneous, short-duration, remote-control, automated, long-duration, social-engineering, or multiple combined attacks, etc. Constraining the type of attack is ludicrous, since attackers are not required to constrain themselves, so doing so does not properly reflect the threat model.

What is your evidence that better privilege management would prevent an attack funded on the order of $30M from being anything other than devastating? Given that nearly every company in the world is routinely successfully attacked, I see no justification for giving the benefit of the doubt to claims of security that are not rigorously analyzed and tested.

Have you ever seen a company run a $30M pentest that did not find ways in? If your answer is that no company does a $30M pentest since that is a crazy amount to pay, that supports my point, since they only bother to test things easier than the standard I put forward. There is no reason to trust an entity in an industry that claims to be better than can be verified. If your answer is that a $30M pentest is not indicative of anything and nobody tests that way, then please suggest a test that correlates with difficulty of attack at the $30M level. If no such test exists, then I fail to see why there should be any confidence at all in claims that cannot even be loosely quantified or correlated against the primary problem.

I will actually claim $30M is far too high. I bet attacks of this caliber would take only ~$1M to develop and deploy against best-in-class defenses. Therefore, my extortion argument becomes: engineer 30 independent attacks and burn one or two of them to demonstrate the ability to do at least $300M in damages. Even after developing all 30 independent attacks, the strategy is highly profitable under my assumptions.
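To make the profitability claim concrete, here is the back-of-envelope math under my stated assumptions (every number here is an assumption from the argument above, not measured data):

```python
# Hypothetical extortion economics: 30 independent ~$1M attacks,
# a couple burned as demonstrations, against a $100M ransom demand.
# All figures are illustrative assumptions, not real data.

attack_cost = 1_000_000      # assumed cost to develop/deploy one attack
num_attacks = 30             # independent attacks engineered
demonstrations = 2           # attacks "burned" to prove capability
ransom = 100_000_000         # demanded payment (1/3 of the $300M damages)

total_cost = attack_cost * num_attacks        # $30M total investment
leverage_left = num_attacks - demonstrations  # attacks held in reserve
profit = ransom - total_cost

print(f"total cost: ${total_cost:,}")   # $30,000,000
print(f"profit:     ${profit:,}")       # $70,000,000
print(f"ROI:        {profit / total_cost:.2f}x")
print(f"attacks left as leverage: {leverage_left}")
```

Even at the full $30M cost, a single payout more than doubles the investment, which is the sense in which I call the strategy highly profitable.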

No, the extortion market is new. There is no reason to expect a greenfield market with multi-billion-dollar potential to be instantaneously saturated; that is ridiculous. This should be especially true given that criminals, being criminals, do not have access to high-growth funding models and so are generally required to bootstrap. Just because some criminal act is not being done does not mean it cannot be done. Nobody hijacked a plane and flew it into a building before 9/11, but nobody claims it was infeasible beforehand or that it would not have been an efficient act of terror if done earlier.

If you actually want to make a meaningful counter-argument that might be convincing, please start from $300M in damages and back-calculate the cost of attack at which a criminal would find such an attack profitable. State your assumptions at each step for why it influences cost or benefit, and then we can discuss whether those steps seem reasonable. I already did this when back-calculating from $300M to $30M, so you could also discuss why you think my individual steps are invalid. Please include quantitative estimates or beliefs; ranges and probabilities are fine to hedge any statements.


> Does anybody here think that even adopting and correctly practicing all practically deployed recommendations of the security industry that a system could resist such an attack?

There are multiple examples of organizations successfully thwarting advanced attacks by following (and exceeding) industry best practices. Unfortunately, most of that isn't disclosed to the public. The Coinbase incident from last year is one such example: https://blog.coinbase.com/responding-to-firefox-0-days-in-th...


Thank you for your response.

Unfortunately, that example is about two orders of magnitude (100x) cheaper to pull off at market rates than the $30M standard I proposed earlier. I arrived at the cost of the thwarted attack by looking at Zerodium payouts (https://zerodium.com/program.html), where a Firefox RCE+LPE goes for up to $100K. Even if we say both exploits were full RCE+LPEs (they are not; the pair amounts to a single RCE+LPE), two such exploits come to only about $200K. Adding in the cost of social engineering and any custom work, it probably amounts to a ~$300K attack. I think it is fair to say that achieving 1% of a standard is inadequate as a valid example of achieving that standard.
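For concreteness, the market-rate estimate above works out as follows (Zerodium's $100K figure is as cited; the social-engineering/custom-work number is my rough guess):

```python
# Rough upper-bound cost of the Coinbase attack at exploit-market rates.
# The $100K social-engineering/custom-work figure is an assumption.

firefox_rce_lpe = 100_000   # Zerodium max payout for one Firefox RCE+LPE
num_exploits = 2            # generously count both exploits as full RCE+LPEs
custom_work = 100_000       # assumed spearphishing + infrastructure cost

attack_cost = firefox_rce_lpe * num_exploits + custom_work
proposed_standard = 30_000_000

print(f"estimated attack cost: ${attack_cost:,}")  # $300,000
print(f"fraction of the $30M standard: "
      f"{attack_cost / proposed_standard:.0%}")    # 1%
```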

If I want to be even more pedantic, the attack was not even targeted at Coinbase, since it appears to have been part of a larger-scale opportunistic campaign. I think you would agree that a more targeted attack, concentrating on Coinbase alone all the resources allocated across the various targets, would be harder to defeat, so we should actually distribute the cost of the attack across all the targets when evaluating the effort level against Coinbase itself. The article also indicates that the exploit was active for hours before it was stopped. That is more than enough time to cause the vast majority of intended damage. In this particular case all they did was exfiltrate credentials and documents, but in many cases that is already the majority of the intended damage. As another example, the Maersk attack in the OP took only a few hours to do the $250M of damages.


You're right, that wasn't a $30M attack. But that's because there aren't any $30M attacks that we know of (coming from economically motivated attackers). NotPetya as a whole would have cost <$100k, for example. Probably the most successful financial attacks have been carried out by APT38. It is assumed to be North Korea, so it might not even fit the parameters of a typical crew that multinationals care about, but even those attacks would have cost <$2M each, and $300M is close to what they got in total over all these years; no single attack led to such a payment.

So your math just does not represent real life. Criminals that have $30M lying around are looking for a way out and would rather invest their money into legitimate(ish) businesses with lower ROI; criminals that don't have $30M would rather invest in multiple smaller campaigns with higher ROI than put all their eggs into one basket; and developers that can command valley-level salaries on the black market can also find positions in the valley or at defense contractors.

On the other hand, just because a company faces X in damages does not mean they are willing to pay anywhere near X to avoid them. Ethical issues aside, Maersk would have faced impressive damages even if they had decided to pay the ransom. Their operations would have been disrupted regardless, and they would have needed to carry out the massive data restoration anyway. Their networks were compromised, and there were no guarantees about the consistency/integrity of restored data, or that paying the ransom wouldn't make them an even bigger target, or that hackers (possibly some other crew) wouldn't return the next week through the same or different vulnerabilities. So they would have needed to plan an extremely fast migration to Win10 in any case, as well as fixing their network architecture and permission systems, putting mitigations in place, buying new defensive solutions, and retaining various consulting services. It is hard to say how much money they could have saved by choosing to pay the ransom (and if NotPetya had actually been ransomware), but it wouldn't be anywhere close to $300M, maybe just <$10M.

> If I want to be even more pedantic

That's ignorant, not pedantic. The attackers spent months on spearphishing attacks targeting Coinbase engineers specifically, which included creating verifiable fake identities and compromising Cambridge web and mail systems. That is the exact opposite of a large-scale opportunistic attack.


Before I get to the main topic: the timeline seems to indicate that non-Coinbase engineers were also targeted, given that, to paraphrase, it says "Early June: people click link" and, in a separate block, "June 17: Coinbase employee clicks link". Using that reading, which may be mistaken, I concluded that the attack was non-targeted. If it was non-targeted, then the fake Cambridge identities were presumably used on all targets, which would make it purely opportunistic as well. Luckily, even if my reading is incorrect, it does not affect the conclusion I wrote, since that assumed the strongest possible interpretation of a targeted attack.

On the other topic: yes, I agree there are probably no $30M attacks in the wild. This means that every attack that hits and breaches a company is below the standard I proposed. Given that essentially every major company, using every combination of security solutions, has been successfully attacked, and that the situation of continuous hacks has only gotten worse over the years, this supports my position. The lack of $30M attacks is most likely an indication of the immaturity of the attacker market. The fact that an entity can accidentally do $250M in damages to a company, get away with it, and nobody is trying to do the same deliberately all the time is plain incompetence in the criminal industry. To further support my point that the attacker market is immature: the number and size of economically motivated attacks has been rapidly increasing over the years. 30 years ago, it was all pranks. 20 years ago, it was all data loss. 10 years ago, it was all cheesy $200/computer ransoms from consumers. Now we are seeing hospitals being ransomed for $1M. Given the cost of these attacks, it is massively profitable at this stage to continue upping the ante.

I computed $30M by back-calculating from a $100M extortion payment against $300M in extortion damages. I think this is a reasonable analysis, but you are free to substitute your own numbers. If I wanted to give a forward-calculated number based on my knowledge of attack difficulty, I would say that a targeted attack by a competent adversary whose primary goal was to extort Maersk, and who researched how to actually do damage, would be able to allocate ~$10M and cause ~$10B in damages. As for how they might do such damage: they could hack every ship in the fleet and crash them into each other or into land, or hack the ships while they are at sea and crash them into cruise ships. They could take over the shipping cranes and drop containers incorrectly onto ships, destroying them. They could make the shipping cranes operate outside safe parameters, destroying all of them. They could use the shipping cranes to drop containers on the employees. They could reorganize the shipping manifests subtly over a few weeks to violate shipping agreements that Maersk made. They could sit on every computer until backups are made, then take over the backup systems and destroy them, then destroy all the existing systems and servers and wipe the shipping manifests. The list goes on.

If you want cases for other companies that might be desirable to attack with high extortion value:

An attacker could take over every 2019 Camry, then wait until rush hour to engage the ABS so the brakes do not work, set the cruise control to 120 MPH, and engage autosteer to turn slightly left (or right, depending on your country). They would kill tens of thousands in 3 minutes, which would completely and irrevocably destroy Toyota.

An attacker could take over every internet-connected GE stove with remote turn-on capability (they make these, seriously), engage the gas at 3:00 AM, wait 30 minutes, then ignite it, blowing up every house with such a stove and killing everybody inside while they sleep, at least thousands worldwide, which would completely and irrevocably destroy GE.

An attacker could hit Merck (also hit by NotPetya, for apparently ~$1.3B or more) by targeting one of their pharmaceutical plants, either venting and over-pressurizing all of the chemicals so they explode into the nearby community, or re-tuning the chemistry to increase toxicity while preventing the automated QA systems from rejecting the batches, either of which would completely and irrevocably destroy Merck. The list goes on.

Given the continuous failure, for decades, of every company in every industry to protect against attacks costing less than $10M, I see no reason to give the benefit of the doubt to any of these companies. So I assert that not a single one of them can protect against an attack funded on the order of $10M, whereas the extortion value runs to tens or hundreds of billions of dollars and thousands of lives. I further assert that there is not a single well-known company in the world that can do so and is willing to state as much in a legally binding manner. And even if one did, that is still only minimally sufficient for unimportant industries. A thousand lives should not be subject to the whim of someone with $10M; that is criminally irresponsible in the actual sense, where you should go to jail if you do that. For the cases I gave above, you probably need a number on the order of ~$100B on the low end.


> Using that reading, which may be mistaken, I concluded that it is non-targeted.

That's a wrong reading. The attackers were specifically targeting Coinbase employees, although there were some instances where people with the same name, working in the same field (but not for Coinbase), got emails as well. The reason they say an engineer clicked on a phishing link only on June 17 is precisely because it was a quality attack. The emails were personalized, written by a literate English speaker, and initially didn't contain any attachments or links at all. They only sent a link to an exploit after exchanging a few emails, confirming the target's identity, and establishing a trusted relationship.

> Given the cost of these attacks it is massively profitable at this stage to continue upping the ante.

Sure, that's the general trend. But the cost of attacks also rises as software becomes more security-aware. Just the fact that all major browsers and Windows 10 have auto-updates enabled by default (and hard to disable) has basically killed exploit kits, though that was a booming market less than a decade ago. Legitimate security job openings have exploded as well (including remote positions), with HR focus shifting from costly certifications to practical skills, which gives would-be hackers more opportunity to choose a lighter hat to wear. Bug bounties are a thing now, meaning people have new ways to monetize the vulns they discover. The Twitter community has matured, and there are more than 365 security conferences held globally each year, meaning angsty teens have many more ways to establish their street cred besides the dark corners of IRC and anonymous imageboards.

Your simplistic analysis ignores all these factors. It's like completely ignoring logistics, supply chains and risk management in the real world, as one of those MBA types might.

> allocate ~$10M and cause ~$10B in damages

Sure. Just like you can cause massive damage by starting a forest fire in dry season with a box of matches. In the context of our discussion such estimates are non-productive, because 1) they ignore other costs and risks that attackers need to account for; and 2) the ability to cause massive damage does not necessarily translate into the ability to extract similarly massive profits from the situation.

> they could hack every ship (...)

The examples that follow are laughable. You clearly know nothing about these systems. That's like being afraid that someone will hack your laptop and give you cancer by manipulating the display refresh rate. There certainly are problems with industrial control systems and critical infrastructure, but they are much more nuanced than the Hollywood-style hacking you predict. And no exploit is worth more than ~$1M (apart from military thingies), because 1) there are always multiple ways in, and as a result attackers usually choose the path of least resistance; and 2) at that price you can buy an insider who will just run your stuff directly or plug an LTE-enabled rPi into the network. For the vast majority of modern real-world attacks, the exploit is the most trivial/easiest/cheapest part. So well-protected systems are designed such that even a malicious sysadmin would be able to do only so much damage before getting noticed and expelled from the network.



