Characterizing this software feature as an "attack" or "backdoor" is pretty hyperbolic. To abuse multiplexing, the adversary needs local code execution, at which point you've already lost.
I would classify it as "works as designed". That said, I have argued with the ssh developers at length about MaxSessions defaulting to 10. No syslog entries are created for subsequent sessions over a multiplexed connection, and phishing attacks become incredibly easy. A coworker and I were going to demo how getting a developer to run a python/ruby script would lead to root access in production, but they stopped the demo for fear they would have to mitigate the scenario.
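For reference, the server-side knob in question is a real sshd_config directive; the values below are just one possible hardening choice, not a universal recommendation:

```
# /etc/ssh/sshd_config
# MaxSessions caps how many sessions can be multiplexed over a single
# network connection; OpenSSH defaults to 10.
MaxSessions 1

# With MaxSessions 1, each new session needs its own TCP connection and
# therefore its own (logged) authentication. VERBOSE logging also
# records the key fingerprint used for each login.
LogLevel VERBOSE
```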
Some would argue that getting someone to run a script is difficult, but we found that about 10% of developers want to be helpful and are not cynical enough to presume malice. They will run the script, which will happily drop an SSH key, fire up sshd as the user, create an outbound connection to a passwordless, shell-less VPS node, and then we are that developer and can piggyback on all their connections. Some developers are devops, so they also have prod access. Some places have passwordless sudo, too. In some places, you don't even need sudo, as the POSIX permissions of applications are sub-optimal.
If you try this, the script should have an obvious problem that requires running it to see. The developer/engineer will feel good that they helped you solve a trivial problem, and you will have whatever access they have. Obviously, get written permission for this type of pen test, with all the steps clearly documented and approved. Most importantly, ensure management agrees NOT to shame the victims of the test. Get them to participate in the re-engineering of your network to harden it properly without adding excessive friction.
But then again, getting someone to run a script is "local code execution", so it really doesn't matter how SSH is configured; the compromise already happened once the user ran a malicious script. What comes after is not so interesting.
I mostly agree. This just makes the attack substantially easier and removes all remote logging of the access. As far as the investigators will see, the victim of the attack performed the malicious behavior. Hopefully the edge firewall in front of the developer logs all outbound connections and who owned each IP at the time, and hopefully the developer is not working from home, or at least has a corporate VPN that logs all outbound connections.
If I phish you and you run a script, but multiplexing is disabled, then I have to take a few extra steps on your machine to capture passwords, assuming you have passphrases set on your SSH keys. It also means I have to initiate a new connection rather than using your existing SSH channels. Depending on the environment and your laptop configuration, this may or may not increase my risk of being detected. This, of course, highly depends on what level of logging and remote monitoring of your laptop is in place.
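For anyone unfamiliar with the client side of this: multiplexing is driven by a few ssh_config directives (these are real OpenSSH options; the socket path is just an illustrative choice):

```
# ~/.ssh/config
Host *
    # Opportunistically create a master connection and reuse it.
    ControlMaster auto
    # Where the control socket lives; %r/%h/%p expand to the remote
    # user, host, and port.
    ControlPath ~/.ssh/cm-%r@%h:%p
    # Keep the master open for 10 minutes after the last session exits.
    ControlPersist 10m
```

While that socket exists, any process running as you can open new sessions through it with no password, passphrase, agent confirmation, or MFA prompt, and the server logs no new authentication.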
What comes after is the interesting part. That's where the attacker will try to gain access to production, and the clock starts ticking for the blue team to detect, respond, and evict them.
See, for instance, the "Assume Breach" mindset that Microsoft developed, in case you are interested in learning more. There is an entire domain of security engineering that starts once the initial compromise has happened. And it does not (and should not) mean the adversary won just because they have code execution on one host.
If a malicious user gained access to your machine, how SSH is configured isn't interesting. If you use that machine to connect to other machines, the attacker will be able to as well, regardless of how SSH was configured at the time.
Heck, if the attacker prefers a certain SSH config, they could just change it. Even if you disabled the feature at compile time, the attacker could just replace the ssh command in your shell with their preferred version.
This is just disabling useful features to maybe cause minor inconvenience. I find it about as interesting as telling someone to pull out the power cord of their monitor to increase the security of their login prompt.
How machines are configured is very interesting, as adversaries make mistakes, and those mistakes can trigger detections for suspicious behavior. There is an entire security field concerned with what happens after a breach.
Coinbase recently had a very interesting article/blog post about something similar: how adversaries gained access to engineering hosts and how they were detected.
Of course, how much you lock something down depends on the criticality of the asset and so forth. E.g. in certain high-security facilities, slight variations of your monitor example are applicable.
That's a good point. If the attacker changes configuration or drops binaries, they make noise instead of living off the land carefree, which makes them easier to detect. I see.
I think this helps the attacker piggyback on the connection of a user who has the MFA device, and thereby get deeper into the network than they could from the bastion without MFA?
First, the article doesn’t characterize the feature as an attack or a backdoor at all. It describes how a perfectly valid feature can be exploited to achieve deeper network penetration. I believe this technique was actually used to target Coinbase a few months back, as I recall from an HN post.
It’s useful for pivoting from a foothold at the boundary (e.g. a Chrome zero-day) into the crown-jewel backend, which could be totally isolated to reduce the attack surface; but if you connect into it from a compromised host, this provides a convenient and hard-to-disable vector to piggyback on the connection.
If there were no way to piggyback the session, even owning the developer’s terminal wouldn’t gain you access to a secure system that uses multi-factor authentication with a hardware token.
If the developer's terminal is owned, the attacker can always find a way to piggyback the session, such as by attaching a debugger to ssh and injecting malicious commands as if the user had typed them (and hiding the echo so the user doesn't even know it is happening).
Depending on how big you are and how much security is a core competency, even at this point it's important for your system to be architected in a way that slows the attacker down, giving your blue team time to respond.
Ideally you will have built your system with multiple layers of defense. Reality is somewhat less ideal, but it's still valuable to discuss how to harden against amplification/persistence techniques after the initial breach.
Additionally, you would need to be able to execute code as the targeted user account. I think it's acceptable to call this kind of use an attack or backdoor, but not the feature itself. I use this feature daily while considering the risks.
I would like to see Shaun Jones, the article's author, compare and contrast the dangers of "ControlMaster auto" with Russell Jones, one of the authors of Teleport.
> There is a procedure that may prevent malware from using the ssh-agent socket. If the ssh-add -c option is set when the keys are imported into the ssh-agent, then the agent requests a confirmation from the user, using the program specified by the SSH_ASKPASS environment variable, whenever ssh tries to connect.
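Concretely, that procedure looks something like this (`-d` and `-c` are real ssh-add flags; the key path is just an example, and a program such as ssh-askpass must be available via SSH_ASKPASS):

```
# Remove the key from the agent, then re-add it with confirmation required:
ssh-add -d ~/.ssh/id_ed25519
ssh-add -c ~/.ssh/id_ed25519
# From now on, every signing request through the agent triggers an
# SSH_ASKPASS confirmation dialog before the key is used.
```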
I don’t believe ssh multiplexing talks to the ssh-agent. That’s the whole point, really: you just connect through an already-authorized SSH connection. This eliminates the initial key handshake at the beginning and makes subsequent ssh and scp connections really fast.
If I’m correct, your mitigation doesn’t affect this particular kind of attack.
Don't people authenticate individually, using their own credentials, on a common bastion host? That seems quite odd to me, and like the actual issue here. If you already have root on the bastion host, there are quite a few ways to obtain credentials, I would guess...
Not all credentials are name+password (in fact, using a password over SSH is the least secure option available): the user authenticates using their name + SSH key. Nothing too useful can be captured from that set of credentials, except to confirm their identity. If they also use U2F, there's no way to capture that: "I am user Piskvorrr... but have no way to prove it."
OTOH, this method captures their session, in other words, "I am user Piskvorrr and I have already proven it by successfully authenticating".
Even if you disallow command execution on the bastion (a good idea), this will still allow an attacker to pivot to servers behind the bastion. The whole point is that the strong auth (e.g. MFA) enforced by the bastion is supposed to be a roadblock to pivoting.
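On the "disallow command execution" point, a sketch of the usual sshd_config for a jump-only bastion (real OpenSSH directives; the group name and exact policy are just one illustrative setup):

```
# /etc/ssh/sshd_config on the bastion
Match Group jumpusers
    # No interactive shells or remote commands on the bastion itself.
    ForceCommand /usr/bin/false
    PermitTTY no
    # Still allow ProxyJump/-J style forwarding to internal hosts.
    AllowTcpForwarding yes
    X11Forwarding no
    AllowAgentForwarding no
```

As noted above, though, none of this stops an attacker riding an already-authenticated multiplexed connection; it only limits what they can do on the bastion itself.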
Not sure, but it's basically about how you shouldn't SSH from compromised machines, because those can have "ControlMaster auto" in the OpenSSH config to bypass authentication to the hosts you SSH to. That means the next time you SSH from that compromised host, the attacker can open another connection without having to use a password.
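If you want a quick way to check whether a machine you're about to SSH from has multiplexing turned on, here's a rough sketch in Python (the function name and sample config are mine, purely for illustration; a thorough check would also parse /etc/ssh/ssh_config and follow Include directives):

```python
# Flag multiplexing-related directives in an ssh_config.
# Illustrative sketch only -- not part of OpenSSH.
import re

def find_multiplexing_directives(config_text):
    """Return (lineno, line) pairs for ControlMaster/ControlPath/ControlPersist."""
    pattern = re.compile(r'^\s*Control(Master|Path|Persist)\b', re.IGNORECASE)
    return [(lineno, line.strip())
            for lineno, line in enumerate(config_text.splitlines(), start=1)
            if pattern.match(line)]

# Hypothetical config contents, standing in for ~/.ssh/config:
sample = """Host *
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
"""

for lineno, line in find_multiplexing_directives(sample):
    print(lineno, line)
```

Running this on the sample prints the three Control* lines with their line numbers; an empty result means no multiplexing directives were found in that file.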
Of course, if you SSH from a compromised host, it could also just piggyback on your terminal session, or steal your credentials, or whatever else. This is really an instance of being on the other side of the airtight hatchway: https://devblogs.microsoft.com/oldnewthing/20181219-00/?p=10...
I can't understand how this page could work on any browser that doesn't enable Javascript.
The only possible explanation I can think of is that it must be sending different content based on user agent, or something, though messing around with sending different user agents via "wget -U" gets me more or less the same thing.
The source for that page only has Javascript in it.
Anyway, when I try to load that page in Firefox while blocking scripts with uMatrix, I get this error:
"The website you are visiting is protected and accelerated by Incapsula. Your computer may have been infected by malware and therefore flagged by the Incapsula network. Incapsula displays this page for you to verify that an actual human is the source of the traffic to this site, and not malicious software."
And then it tells me it wants me to click a "I am not a robot" checkbox.. but, of course, there isn't one, because I'm not allowing Javascript.
I also use uMatrix, probably with stricter settings than yours (`* * * block`), and also NoScript. I don't know what you're doing wrong; I load it just fine by following the exact instructions I gave: prepend the URL with `about:reader?url=` and go. The page loads fine.
Yes, prepending 'about:reader?url=' did work, but I still wonder why. I suspect that Firefox's reader mode must be using Javascript behind uMatrix's back.