Hacker News

If you set up a wildcard cert and configure your server to reject invalid subdomains, you've already done more work than it takes to actually secure your site. If you've learned to do all that, you'd just secure your site. There is no time benefit to the obscurity approach.
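For concreteness, the "reject invalid subdomains" part usually amounts to a catch-all server block. A minimal sketch in nginx syntax (certificate paths are examples):

```nginx
# Catch-all: any Host header that doesn't match another configured
# server_name lands here, and nginx closes the connection without a response.
server {
    listen 443 ssl default_server;
    ssl_certificate     /etc/nginx/wildcard.pem;  # example paths to the wildcard cert
    ssl_certificate_key /etc/nginx/wildcard.key;
    return 444;  # nginx-specific non-standard code: drop the connection
}
```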


Why do people insist that "security" is binary, i.e. that something is either secure or not? A residential building has far less security than a military base, yet the building can be considered secure while the base is not.

The article points to one of the core equations every business should embrace: `cost_of_risk = unit_probability * unit_impact`.
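As a worked example of that equation (the numbers here are made up purely for illustration):

```python
def cost_of_risk(unit_probability: float, unit_impact: float) -> float:
    """Expected cost of a risk: probability of one incident times its impact."""
    return unit_probability * unit_impact

# E.g. a 0.1% chance per year of a breach costing $50,000
# prices the risk at roughly $50/year.
print(cost_of_risk(0.001, 50_000))
```

Any measure that lowers the probability term (or the impact term) lowers the expected cost, even if it doesn't drive either to zero.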

If some measure improves security metrics (time to discover, time to break, skill/resources needed to break, etc.), then it can be considered a security measure. An SSH server running on a non-default port is not much more secure than an otherwise identical server, but the probability of a random attack is lower.
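That particular measure is a one-line change. A sketch (2222 is an arbitrary example port):

```
# /etc/ssh/sshd_config
Port 2222    # instead of the default 22; restart sshd afterwards
```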

> you've already done more work than it takes to actually secure your site

Finding and plugging RCE holes in every dependency you have deployed is far harder than making the host less discoverable. If running security-patched software and using non-default, hard-to-guess passwords are security measures because they lower the chance of exploitation, why can't non-default, hard-to-discover hostnames and ports also be security measures, if they likewise reduce the likelihood of exploitation? Relying solely on `apt upgrade` is about as insecure as merely exposing software on non-default ports.


Security is not binary. Setting up a firewall and using a 192-bit basic-auth password for all your web properties is, however, going to stop maybe 99.9% of the attacks that wildcard cert + subdomains will stop, and takes like, 2 hours. If you can do both, sure, go for it. But there is certain truth in applying the traditional approach first.


What binary conclusion do you see in the GP?

There is only a cost-benefit analysis there.


Weird, I do not see a cost-benefit analysis in the OP; that is exactly what I am missing. The OP discards the obscured-hostname measure, claiming it takes "more work than it takes to actually secure your site".

To me this reads as if there is some inherently secure approach on one side, and then this obscurity approach that only slightly increases security on the other. I cannot agree with such an assessment.


How do you secure your site against a vulnerability that an attacker knows about and you still don't, and for which no fix is available yet?

No layer of defense is useless. Hiding the host name is a pretty noticeable layer.

Running a private server only inside a VPN is, of course, even nicer, when you can afford it.


If you're going to talk about vulnerabilities that you don't know about yet, then you should be looking at how you're running old kernel versions and unpatched software. Hiding your hostname doesn't protect you against anything other than insecure application code. Old httpd or openssl will still be exposed.

There are plenty of ways that your secret hostname can leak. It's just a weak, low-entropy, guessable password that can't really be rotated. Moreover, if you're going to choose one thing to do (or time-box your security efforts), it's the highest-complexity option I can think of for almost no real return. I mean, if you're so concerned about leaking the hostname from your HTTPS cert because your site is that insecure, why are you even using HTTPS? What are you even protecting?

Is that the best way to spend an hour securing your site? Even just slapping the six lines of hard-coded HTTP auth credentials onto your nginx virtual host will do more than that.
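Those "six lines" look roughly like this. A sketch (the password-file path is an example; nginx's `auth_basic` directives are real):

```nginx
server {
    listen 443 ssl;
    server_name example.com;
    auth_basic           "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;  # create with: htpasswd -c /etc/nginx/.htpasswd someuser
    # ... existing location/proxy config unchanged ...
}
```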


Require TLS Client Certificates at the load balancer level (or even just HTTP basic auth).
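In nginx terms, requiring client certificates at the edge is a couple of directives. A sketch, assuming you already operate a CA that signs your client certs:

```nginx
server {
    listen 443 ssl;
    ssl_client_certificate /etc/nginx/client-ca.pem;  # CA that signed the client certs
    ssl_verify_client on;  # requests without a valid client cert are rejected
    # ...
}
```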


In this particular case, the community I'm talking about is primarily (exclusively) using their HTTP server as a reverse proxy, pointing at various web-based backends. They have to set up subdomains and certs anyway, so doing it with wildcards is actually less work.

Yes, they can and should also set up things like fail2ban or CrowdSec, add geo-based IP blocking, etc., but many don't, because it's not fundamentally required to make their services work. Harder still, or even impossible, is making sure all of the backend services themselves are secure.
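For reference, a minimal fail2ban jail is only a few lines of `jail.local`. A sketch (thresholds are example values):

```ini
[sshd]
enabled  = true
maxretry = 5     # ban a source IP after 5 failed logins
findtime = 10m   # ...within a 10-minute window
bantime  = 1h    # length of the ban
```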



