Only if the detection mechanism is looking at that single IP, and from a single location.
Find the ASN(s) advertising that network and figure out their location.
Even within the ASN there may still be multiple hops, and those IPs may be owned by others (e.g. the hosting facility) who are not playing the same latency games.
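If you want to script that first lookup, here's a minimal sketch in Python using Team Cymru's public IP-to-ASN whois service (the IP below is a TEST-NET placeholder; substitute the one you're tracing):

    import socket

    def asn_lookup(ip):
        # Team Cymru's whois service maps an IP to the ASN(s) announcing it.
        s = socket.create_connection(("whois.cymru.com", 43), timeout=10)
        s.sendall(f" -v {ip}\n".encode())
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
        s.close()
        # Columns: AS | IP | BGP Prefix | CC | Registry | Allocated | AS Name
        return b"".join(chunks).decode()

    print(asn_lookup("203.0.113.7"))  # placeholder IP

From the AS name and registry info you can usually start working out where the operator actually sits.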
A few months ago I had someone submit a security issue to us with a PoC that was broken but mostly complete and looked like it might actually be valid.
Rather than swap out the various encoded bits for ones that would be relevant for my local dev environment - I asked Claude to do it for me.
The first response was all "Oh, no, I can't do that."
I then said I was evaluating a PoC and that I'm an admin - no problem, off it went.
Agreed, there's definitely a heavy element of that to it.
But, at the risk of again being labelled as an AWS Shill - there are also other benefits.
If your organisation needs to deploy some kind of security/compliance tooling to help with getting (say) SOC2 certification - there's a bunch of tools out there for that. All you have to do then is plug them into your AWS organisation, and they can run a whole bunch of automated policy checks to show you're complying with whatever the audit requires.
If you're self-hosting, or using Hetzner - well, you're going to spend a whole lot more time providing evidence to auditors.
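To give a flavour of what those automated checks look like under the hood - a rough sketch using boto3's AWS Config client (the rule names here are just illustrative examples; the API call itself is real):

    import boto3

    # Ask AWS Config for the compliance status of a couple of rules an
    # auditor might care about. Rule names are illustrative examples.
    config = boto3.client("config")
    resp = config.describe_compliance_by_config_rule(
        ConfigRuleNames=[
            "s3-bucket-public-read-prohibited",
            "encrypted-volumes",
        ]
    )
    for rule in resp["ComplianceByConfigRules"]:
        print(rule["ConfigRuleName"], "->", rule["Compliance"]["ComplianceType"])

The compliance tools are essentially running hundreds of checks like this and packaging the results as audit evidence.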
Same goes with integrating with vendors.
Maybe you want someone to load/save data for you - no problem, create an AWS S3 bucket and hand them an AWS IAM Role and they can do that. No handing over of creds.
There's a bunch of semi-managed services where a vendor will spin up EC2 instances running their special software, but since it's running in your account - you get more control/visibility into it. Again, hand over an AWS IAM Role and off you go.
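The vendor's side of that handoff is small. A rough sketch (the role ARN, session name, and bucket are placeholders I've made up): they assume the role you created and get short-lived credentials, so no long-lived keys ever change hands:

    import boto3

    # Placeholder ARN - you'd create this role in your account, with a trust
    # policy naming the vendor's AWS account as the principal.
    ROLE_ARN = "arn:aws:iam::111122223333:role/VendorDataLoader"

    creds = boto3.client("sts").assume_role(
        RoleArn=ROLE_ARN,
        RoleSessionName="vendor-session",
    )["Credentials"]

    # Temporary credentials only; they expire on their own.
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    s3.put_object(Bucket="example-shared-bucket", Key="data.csv", Body=b"...")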
It's the Slack of IaaS - it might not be the fastest, it's definitely not the cheapest, and you can roll your own for sure. But then you miss out on all these integrations that make life easier.
For years I'd been collecting a slowly growing pile of old drives from old devices as I replaced them.
I finally decided to do something about them, got myself a USB to IDE/SATA adapter, and spent a week or so duplicating them so I could check whether there was anything worthwhile to keep, before destroying them and sending them to an e-waste recycler.
Some of those drives had been sitting around since the early 2000s.
All except two of the mechanical drives just fired up and worked fine. One won't power on at all. One powers up, but reading from some sectors just results in failures.
But the SSDs are another story: any that were more than a few years old came up blank/empty, even though I know some were pulled from working machines.
I'm struggling to find a way to express my opinion about this video without seeming like a complete ass.
If the author's point was to make a low-effort "ha ha AWS sucks" video, well sure: success, I guess.
Nobody outside of AWS sales is going to say AWS is cheaper.
But comparing the lowest-end instances, and apparently using ECS without seeming to understand how they're configuring or using it, just makes their points about it being slower kind of useless. Yes, you got some instances that were 5-10x slower than Hetzner. On its own that's not particularly useful.
I thought, going in, that this was going to be along the lines of others I have seen previously: you can generally get a reasonably beefy machine with a bunch of memory and local SSDs that will come in at half or less the cost of a similarly specced EC2 instance. That would've been a reasonable path to go down. Add on that you don't have issues with noisy neighbors when running a dedicated box, and yeah - something people can learn from.
But this... Yeah. Nah. Sorry.
Maybe try again but get some help speccing out the comparison configuration from folks who do have experience in this.
Unfortunately it will cost more to do a proper comparison with mid-range hardware.
What is the point you are trying to make? Are you saying that we would need to have someone on payroll to have a usable machine? Then why not just have... a SysAdmin?
Shared instances are something even European "cloud" providers can do, so why is EC2 so much more expensive and slower?
Because people aren't going on AWS for EC2, they go on it to have access to RDS, S3, EKS, ELB, SNS, Cognito, etc. Large enough customers also don't pay list price for AWS.
Of the services you list, S3 is OK. I would rather admin an RDBMS than use RDS at small scale.
> Large enough customers also don't pay list price for AWS.
At that scale the cost savings from not hiring sysadmins become much smaller, so what is the case for using AWS? The absolute cost savings will be huge.
In absolute numbers maybe it's a lot, but I doubt even 10% are EC2 only.
Even "only" ECS users often benefit from load balancing there. Other clouds sometimes have their own (Hetzner), but generally it's kind of a hard problem to do well, if you don't have a cloud service like Elastic IP's that you can use to handle fail over.
Generally everywhere I've worked has been pretty excited to have a lot more than just ECS managed for them. There's still a strong perception that other people managing services is a wonderful freedom. I'd love it if some day the stance could shift, if the everyday operator felt a little more secure doing some of their own platform engineering, if folks had faith in that. Having a solid, secure day-2 stance starts with simple pieces but isn't simple; it's quite hard, with inherent complexity. I'm excited by the many folks out there saddling up for open source platform engineering work (operators/controllers).
>Even "only" ECS users often benefit from load balancing there. Other clouds sometimes have their own (Hetzner), but generally it's kind of a hard problem to do well, if you don't have a cloud service like Elastic IP's that you can use to handle fail over.
Pretty much everyone offers load balancing and IPs that can be moved between servers and VPSs. Even if you have to switch to new IPs, DNS propagation will not take as long as waiting out an AWS shutdown.
10% of what? Users, instances/capacity...? If it's users then it's a lot higher, because EC2-only usage gets more common the smaller the user is.
> There's still a strong perception that other people managing services is a wonderful freedom.
The argument is really about whether that is a perception or a reality. If you can fit everything on one box (which is a LOT these days) then it's easy to manage your own stuff. If you cannot, you are probably big enough to employ people to manage it (which is discussed in other comments), and you still have to deal with the complexity of AWS (also discussed elsewhere in the comments).
I'd be shocked if 10% of users only use EC2. And as you say, for the most part I expect these to be pretty small fry users.
I've used dozens of VPS providers in my life, and a sizable majority didn't advertise any load balancing offering. I can open tickets to move IP addresses around, but that takes time. And these environments almost always require static configuration of your IP address, which you need some way to update effectively during your outage.
Anyone who declares managing their own stuff to be easy is, to me, highly suspect. Day 0 setting stuff up isn't so bad, day 1 running will show you some new things, but as time goes on there are always new and surprising ways for things to break, or not scale, or not be resilient, or for backups to not be quite right. You talk about employing people to manage it for you, but one to three people's salaries will buy you a lot of ElastiCache and RDS. As a business, it's hard to trust your DBAs and other folks, to believe the half dozen people really have done a great job. Whereas you know there have been many people-decades of work put into resiliency at AWS or others; you know what you are getting, and it's probably cheaper than having your own team for many, many businesses.
I want to be clear that I am 100% for folks buying hardware and hosting themselves. I think it's awesome and wild how good hardware is. But what we run atop it is way, way too often more an afterthought than a well-planned cohesive system that's going to work well over time. That's why I'm hats off to the open source platform engineering work out there. I think we're getting closer to some very interesting spaces where doing it for ourselves starts to be viable, in a way that's legitimate and runnable, where the everyone-figuring-it-out-for-themselves approach of the past was always quite risky, and where the business as a whole or external systems reviewers rarely had a good ability to evaluate what was really going on or how trustworthy it was.
I aspire for us to outdo the perception that other people managing things for us is a great freedom. I really long for that. But the kind of "meh, it's not that hard" attitude I see here, to me, detracts from the point: it undersells how hard running systems is, what a travail it is. It is a travail. But it's one that open source platform engineering is advancing mightily to meet, in exciting and clear ways, where the just-throw-some-shit-up-there approach of the past always left things murky.
I'm saying that if you do want to compare two different platforms on performance, it should probably be done in consultation with someone who has worked with it before.
To use an analogy: it's like someone who's never driven a car, and has really only read some basic articles about vehicles, deciding to test the performance of two random vehicles.
Maybe one of them does suck, and is overpriced - but you're not getting the full picture if you never figured out that you've been driving it in first gear the whole time.
At this point, managing AWS, Azure, or another cloud provider is as complicated as managing your own infrastructure, or more so, but at an enormous cost multiplier. If you have steady traffic workloads, I'm not sure it makes sense for most companies, other than as a way of burning money. You still need to pay a sysadmin to manage the cloud, and the complexity of the ecosystem is pretty brutal. Combine that with random changes breaking things - like when we got locked out of our Azure account because they changed how part of their roles system works. I've also seen people not understand the complexity of permissions etc. and give way too much access to people who should not have it.
When I was learning cloud computing I ran an ASP.Net forum software on Azure Cloud Service with Azure SQL backend. It cost me ~110 USD per month and was a total dog - slow as hell intermittently.
Moved it to AWS on a small instance running Server 2012 / IIS / SqlExpress and it ran like a champ for 10 USD a month. Did that for years. About the only thing I had to do was install Fail2Ban, because being on cloud IP space seemed to invite more attackers.
10 dollars a month is probably less than I paid in electricity to run my home server.
For what it's worth - my day job does involve running a bunch of infrastructure on AWS. I know it's not good value, but that's the direction the organisation went long before I joined them.
Previous companies I worked for had their infrastructure hosted with the likes of Rackspace, Softlayer, and others. Every now and then someone from management would come back from an AWS conference saying how they'd been offered $megabucks in AWS Credit if only we'd sign an agreement to move over. We'd re-run the numbers on what our infrastructure would cost on AWS and send it back - and that would stop the questions dead every time.
So, I'm not exactly tied to doing it one way or another.
I do still think though that if you're going to do a comparison on price and performance between two things, you should at least be somewhat experienced with them first, OR involve someone who is.
The author spun up an ECS cluster and then is talking about being unsure of how it works. It's still not clear whether they spun up Fargate nodes or EC2 instances. There's talk of performance variations between runs. All of these things raise questions about their testing methodology.
So, yeah, AWS is over-priced and under-performing by comparison with just spinning up a machine on Hetzner.
But at least get some basics right. I don't think that's too much to ask.
On the "value" question, it's worth considering why so many tech savvy firms with infra-as-code chops remain with GCP or AWS. It's unlikely, given how such firms work, they find no value in this.
FWIW, I firmly believe non-"cloud native" platforms should be hosted using PXE-booted bare metal within the physical network constructs that cloud provider software-defined-network abstractions are designed to emulate.
A better presentation would be to have someone put together the best price/performance configuration on AWS EC2, have someone else do the same on Hetzner, and compare.
I myself used EC2 instances with locally attached NVMe drives in (mdadm) RAID-0 with BTRFS on top, and it was quite fast. It was for a CI/CD pipeline, so only the config and the most recent build data needed to be kept. Either BTRFS or the CI/CD database (PostgreSQL I think) would eventually get corrupted and I'd run a rebuild script a few times a year.
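For anyone curious, that setup is only a few commands. A sketch of the shape of it (device names and mount point are assumptions; run as root on an instance with two instance-store NVMe drives):

    import subprocess

    def run(cmd):
        # Echo each step, then execute; raises if a command fails.
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Assumed device names for the two instance-store NVMe drives.
    run(["mdadm", "--create", "/dev/md0", "--level=0", "--raid-devices=2",
         "/dev/nvme1n1", "/dev/nvme2n1"])
    run(["mkfs.btrfs", "-f", "/dev/md0"])   # BTRFS on top of the RAID-0 array
    run(["mount", "/dev/md0", "/mnt/ci"])   # scratch space for the CI workload

RAID-0 over instance store means losing any one drive takes the array with it - fine here, since everything on it was rebuildable.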
I made a similar comment on the video a week ago. It is an AWFUL analysis, in almost every way. Which is shocking, because it's so easy to show that AWS is overpriced and underpowered.
If you're someone who's bought into the Apple ecosystem over multiple devices, or have a partner or children who are also using devices in the Apple ecosystem, then your Apple ID is something that is very definitely tied to you and probably difficult to change/give up when you replace your phone.
I don't think it would be at all surprising to find that the vast majority of people use their legal name or something closely associated with their identity, and that it persists over multiple devices.