It's really cool to see projects like this designed for dropping in assets from the proprietary version. The separation in the first place is unfortunate, but at least the capability exists.
Civ III in my opinion had some of the best art of the entire series. The 3D look of the successor games is kind of off-putting by comparison.
>...I quit a job over similar concerns, knowing it would lead to a >70% decrease in comp. Without a significant nest egg or wealth, whether personal or through family.
Best practices usually call for not exposing SSH endpoints to the public internet. The principal risk is vulnerabilities in the underlying SSH server implementation. Historically, critical flaws that can compromise you have been few and far between. However, AI is already becoming adept at reverse engineering.
If you must, you'd typically use a bastion host that's configured solely for handling inbound SSH connections, and is locked down to the maximum degree. It then routes SSH traffic to your other machines internally.
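One common way to wire that up is OpenSSH's `ProxyJump` directive. A minimal sketch, with hypothetical hostnames and addresses purely for illustration:

```
# ~/.ssh/config — hypothetical hosts for illustration
Host bastion
    HostName bastion.example.com
    User ops
    IdentityFile ~/.ssh/bastion_ed25519

# Internal machines are only reachable through the bastion
Host internal-db
    HostName 10.0.2.15
    User ops
    ProxyJump bastion
```

With that in place, `ssh internal-db` transparently tunnels through the bastion; only the bastion ever needs a public-facing port 22.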
I'd argue that model is outdated though, and the prevailing preference is putting SSH behind the firewall on internal networks. Think Wireguard, Tailscale, service meshes, and so on.
With AWS, restricting SSH ports via security groups to just your IP is simple and goes a long way.
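As a sketch, such a rule might look like this in Terraform (the group ID is hypothetical, and `203.0.113.10` is a documentation address standing in for your own IP):

```
# Hypothetical Terraform fragment: allow SSH only from one address
resource "aws_security_group_rule" "ssh_from_my_ip" {
  type              = "ingress"
  protocol          = "tcp"
  from_port         = 22
  to_port           = 22
  cidr_blocks       = ["203.0.113.10/32"]     # your current public IP
  security_group_id = "sg-0123456789abcdef0"  # hypothetical group ID
}
```

The obvious caveat is that it breaks when your IP changes, so it works best from a static address or a small, known CIDR range.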
But doesn’t your argument that the principal risk [with ssh] is vulnerabilities also apply to the alternatives you say are best practice? Firewalling off ssh (but not http(s)) has the risk of vulns in the FW software. Tailscale, WireGuard etc. also have the risk of vulns in that software?
So what’s the difference in risk of ssh software vulns and other software vulns?
Also, another point of view is that vulnerabilities are not very high on the risk ladder. Weak passwords, password reuse etc. are far greater risks. So, the alternatives to ssh you suggest are all reliant on passwords, but ssh, in this case, is based on secure keys and no passwords. Should “best practices” not include this perspective?
But saying ssh is a risk “on principle” due to possible vulnerabilities, and then implying that if WireGuard is used then that risk isn’t there, is wrong. WireGuard, and any other software, has the same vuln risk “on principle”.
> For vulnerabilities, complexity usually equals surface area. WireGuard was created with simplicity in mind.
That is such consultant distraction-speak. Simple software can have plenty of vulns, and complex software can be well tested. WireGuard being “created with simplicity in mind” does not make it a better alternative to ssh, since it doesn’t mean ssh wasn’t created with simplicity in mind.
I don’t disagree that adding a VPN layer is an extra layer of security, which can be good. But that does not make ssh bad and VPN good. Further, they serve two different purposes, so it’s comparing apples to oranges in the first place.
Or how large companies actually think about this risk in the real world. Expose SSH ports to the public internet willy-nilly and count the seconds until their ops and security teams come knocking wondering what the heck. YMMV of course, but that's generally how it goes.
Are critical SSH vulns few and far between, as far as anyone knows? Yes.
Do large companies want to protect against APT-style threats with nation-state level resources? Yep.
Does seeing hundreds if not thousands of failed login attempts a day directly on their infrastructure maybe worry some people, for that reason? Yup.
You call it consultant distraction speak, I call it educating you about what Wireguard actually is, because in your original reply you suggested it was password-based.
>Further, they serve two different purposes so its comparing Apples to oranges in the first place.
Not when both can be used to protect authentication flows.
One is chatty: it handshakes with unauthenticated requests and even yields a server version number. The other simply doesn't reply and stays silent.
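That difference is visible from the outside. A sketch using `nc` against hypothetical hosts (the banner shown is just an example of the format, not a real host's output):

```
# An SSH server answers any TCP connection with its identification
# string (required by RFC 4253) before any authentication happens:
nc -w 3 ssh-host.example.com 22
# e.g.  SSH-2.0-OpenSSH_9.6

# A WireGuard endpoint is UDP and drops packets that don't carry a
# valid handshake, so an unauthenticated probe gets nothing back:
echo probe | nc -u -w 3 wg-host.example.com 51820
# (no reply; the port looks the same as a closed one)
```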
>Simple software can have plenty vulns, and complex software can be well tested.
In this case, both are among some of the most highly audited pieces of software on the planet.
I’m calling it consultant speak because your response to an argument is to bring up something else, instead of actually responding.
The same with this last reply; you can keep throwing out new points all you want, but that’s not going to make you correct on the original question.
Saying or implying that one piece of software has a “principle” risk of vulnerabilities that another piece doesn’t is plainly and simply wrong.
And that has nothing to do with all the other stuff about layered defence, VPNs, enterprise security, chatty protocols or whatever you want to pile on the discussion.
>So what’s the difference in risk of ssh software vulns and other software vulns?
I proceeded to explain how large companies think about the issue and what their rationale is for not exposing SSH endpoints to the public internet. On the technical side, I compared SSH to WireGuard.
For that comparison, the chattiness of their respective protocols was directly relevant.
Likewise complexity: between two highly-audited pieces of software, the silent one that's vastly simpler tends to win from a security perspective.
All of those points seem highly relevant to your question.
>... but thats not going to make you correct in the original question.
If you can elucidate what I said that was incorrect, I'm all ears.
You are still implying that WireGuard is somehow different from ssh in its susceptibility to vulnerabilities existing or being introduced into its codebase. And it simply is not.
Edit: codebase of ssh/wireguard implementations, just to be clear
WireGuard is ~4k LoC and is very intentional about using a single, fixed set of cryptographic primitives (no cipher negotiation) to drastically reduce its complexity. Technically speaking, it has a lower attack surface for that reason.
That said, I've been on your side of the argument before, and practically speaking you can expose OpenSSH on the public internet with a proper key setup and almost certainly nothing will happen because it's a highly-audited, proven piece of software. Even though it's technically very complex.
But, that still doesn't mean it isn't best practice to avoid exposing it to the public internet. Especially when you can put things in front of it (such as WireGuard) that have a much lower technical complexity, and thus a reduced attack surface.
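A sketch of what "in front of it" can look like: a minimal server-side WireGuard config (addresses are hypothetical; keys are placeholders), with sshd then bound only to the tunnel interface:

```
# /etc/wireguard/wg0.conf — hypothetical addresses; keys elided
[Interface]
Address    = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server private key>

[Peer]
# One entry per admin machine allowed in
PublicKey  = <client public key>
AllowedIPs = 10.8.0.2/32
```

Setting `ListenAddress 10.8.0.1` in sshd_config then keeps SSH reachable only over the tunnel, so port 22 never appears on the public interface at all.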
No, they are not. It doesn’t matter how many LoC; it only takes 1 LoC to introduce a vulnerability.
Wireguard is a protocol. So what implementation is “very intentional about its choice of …”? Are you talking about my own WG client implementation? Or the one made by this other Chinese vendor?
I don’t care what software we are talking about, or who made it. All software has a risk of undiscovered/undisclosed vulnerabilities already existing, or of new ones being introduced with an update.
If you really want to make this argument, we can talk about the implementing organisation’s SDLC, including its software supply chain, and compare those.
But back to the OP/point above: it’s false to state that one piece of software has a “principle risk” of vulnerabilities that another piece does not. At least, not when both are internet-exposed and accepting incoming data.
Lastly, remember that I never disagreed with your point that a VPN is often a better solution; that was never what I was arguing about. Simply that all code always has a risk of vulnerabilities. No piece of software is exempt from that.
>No, they are not. Doesn’t matter how many LoC; it only take 1 LoC to introduce a vulnerability.
So according to you, the concept of attack surface doesn't exist. A 100MB binary is equivalent in risk to a 1KB binary. Got it.
If both are highly-audited, their risk is equal despite their size and protocol complexity. Got it.
>...its false to state that one piece of software has a “principle risk” of vulnerabilities that another piece does not.
That's like the third or fourth time you've scare-quoted the word principle. You're aware that principle and principal are two different words with different meanings?
The word I used, principal, in that context means the foremost or primary risk.
Anyways, I'm just telling you how major corporations think about it. Their underlying rationale is exactly what I've explained thus far, and hence why it's best practice.
Yeah, just include "All printers bought by the US government must have tracking dots" and executives will move that feature to the top of the backlog without any other concerns.
Big-company executives are the easiest to control; they want money, and all of it. The US government luckily has plenty of it to throw around.
Perhaps AI time is the inverse of Valve time.