I just recently started hosting my personal website in my closet on a laptop: https://notryan.com.
I also host a Discourse forum. It's pretty snappy for being in my closet (https://forum.webfpga.com). Beats paying DigitalOcean $15/month ($180/year!) for a machine with 2 vCPU + 2 GB RAM (the minimum for Discourse).
I think more people should consider self-hosting, especially if it's not critical. It makes the Internet more diverse for sure.
It's great to see someone self-hosting, and I noticed your note about seeing a blue header if running ipfs. I'm running ipfs with the companion extension. However, I can't resolve your dnslink.
You may not be forwarding your swarm address port, 4001 by default, to your laptop.
I'm pretty new to ipfs so take this with a grain of salt.
edit: It's worth noting that I can access other ipfs content using my gateway :)
edit2: I also noticed you're hardcoding the local gateway port to 8080 for your header background image. This port is configurable, mine is set to 5080.
Ah, yeah I don't have IPNS running right now. Also my website is not up to date on IPFS right now... maybe I'll resync it today. Anyways, this is a shortcut to it https://notryan.com/ipfs. IPNS was annoyingly slow (30+ second resolves) due to how it functioned in the past. So I stopped using it. I think they changed this in 0.5.0 so it might be better now.
Basically, if you've got IPFS you'll see a bar. If not, you'll see nothing. Did that at least work? If you had `ipfs daemon` running in the background on port 8080, you should've seen something.
Hmm yeah I see. I wonder if there would be a way to programmatically detect it. Even still, I'm not running any JS on my website because I like to keep it snappy.
Also how many people do you think are not running on port 8080? It's annoying for sure as a dev, when every program decides it wants its local web server to be 8080.
I agree with it from an architectural standpoint, but it violates the TOS of most residential ISPs in the US and endangers your internet connection if it starts seeing any significant traffic.
ISPs there are usually monopolies or duopolies so you run the risk of having to buy slower/pricier service, or being totally offline, if one shuts you off.
Thanks to restrictive firewalls, much incoming and outgoing traffic to services is tunneled over HTTP/HTTPS and uses the standard 80/443 ports. There are also legitimate reasons to have 80/443 open such as access to security systems, your home router, etc. They can't block incoming 80 or 443 without breaking a lot of things, so they won't. It's not the 90's anymore.
You really have to worry more about hitting your 1TB Comcast bandwidth cap.
I'd think that most ISPs' terms and conditions don't specify the amount of bandwidth used for hosting public websites. They just don't allow server use in general, so it's a grey area and it could be enforced if needed.
This got me interested. In the UK, I just checked the ToS with my ISP, and I have nothing of the sort.
It does, however, have a policy against 'excessive usage', which is left completely undefined, and a policy on 'Acceptable Content' which is equally vague and ill-defined.
So I guess they do have recourse if I started hosting a porn site and it became popular, but that's about it.
One of my friends got a ToS warning from Comcast for doing something similar. Usually, they seem to only enforce the rule against pirates and people using a ton of bandwidth, but it's not a good idea in general.
The ISPs happily allow brute-force bots to smash away at anything that provides remote access (SSH, RDP, VNC). Why would a web server be considered different?
The point this parent comment is trying to make is that they're both technically ToS violations (both are running servers), but with the SSH server, there's a whole lot of traffic that the ISP could do something about (the botnets), and the owner would love it (not having my network constantly attacked would be good).
If an ISP is concerned about traffic volume, take out the botnets hammering away at the doors — the occasional guy's private website getting slashdotted is small, temporary potatoes.
With an $80/month VM on Azure, you may get some CPU cycles, but don't expect a lot of storage I/O. Azure is particularly shitty and slow if you don't spend a lot. In my experience a €3 Scaleway VM is better than much more expensive Azure or AWS VMs. You don't get as many cloud features though.
Scaleway and OVH also have cheap bare metal offers if you don't like to share.
Haha, that's how I started my business 15 years ago. I was still in school and I got a few old ThinkPads with broken displays, so I installed Debian on them, put them on my shelf, and sold webhosting.
Should be up! There are moments of brief, brief downtime when I switch binary versions. Guess you refreshed in the 5-second window when I was recompiling. I run Ctrl+C then up-arrow in tmux. Next step is to write a general supervisor that listens for SIGHUP :)
(Of course, as many people will tell me, this already exists. But sometimes, it's okay to do things for the sake of learning... come on, HN)
It's working now! But check the content-type for https://blog.notryan.com/. It's returning text/plain, and I see the HTML source in Firefox instead of the rendered web page...
Oh shoot, agh! I was messing around with rendering different content-types based on mobile/desktop -- to prevent text-wrapping on mobile phones. The index.html case slipped my mind. I need to write some solid tests.
Just serve HTML and use the viewport meta tag, i.e. `<meta name="viewport" content="width=device-width">`. Don't serve mobile devices plain text, as they'll render it at whatever their default viewport width is set to (980px for Safari on iOS), which makes the text super annoying to read.
Desktop browsers ignore the viewport tag and mobile browsers respect it. The page is rendered how you want everywhere for little effort.
I've searched far and wide haha; the cheapest I can find them is $3.50/month from Vultr. I'm almost ready to switch to DDNS though. The threat of DDoS scares me a little bit. We have 300 Mbps down though, so I don't foresee it being a terrible problem. I guess I would just call my ISP if something sketchy was going on.
I'd like to imagine that my content is not DDoS-worthy though :)
Yup :) I'm running 5 KVM virtual machines, all behind a KVM NAT interface. They're not necessarily all on one laptop haha, but still on-premise. I can do live migrations between the hardware boxes with KVM without restarting the machines. Pretty cool stuff. Then they connect to a public-facing VPS that shuttles traffic to and from. I've tested out different rate-limiting strategies too. KVM's rate limiter is surprisingly accurate. Probably gonna stick with a 10 Mbps uplink and downlink pipe. Seems consistent and reliable, and still a fraction of my 300 Mbps up / 30 Mbps down home internet. Doubt my ISP cares about my uploading. Literally if a couple people hop on Zoom, they would be using more internet bandwidth than my servers do.
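For anyone curious, a live migration with stock libvirt tooling is roughly a one-liner (the guest and host names here are placeholders, and it assumes the disks are on shared storage, or that you add --copy-storage-all):

```sh
# Move the running guest "web1" to another KVM host over SSH without downtime
virsh migrate --live web1 qemu+ssh://other-box.local/system
```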
Just use Cloudflare for your name servers. Then you can have a script that runs every 5 minutes on a cron job to update your DNS A records via the Cloudflare API.
In my case I have a dynamic IP, but I use a Linux box as a router and set up Google Cloud DNS record sets for my domains. After every reconnect to the Internet (or PPP connection in this case), a shell script checks whether the IP address has changed by comparing it with a text file, and if it has, it updates the IP address using the gcloud CLI utility.
The DNS TTL is set relatively low, 5 minutes. I used Google Cloud DNS because it's scalable, cheap (the setup costs under $2/month for low traffic), and has a great API and command-line tool. It just took half a day of tinkering to get it all set up, and most days I forget it's there. (I should probably set up monitoring, but it's fine most days...)
If using CloudFlare, it’s possible you could push your dynamic IP to CF as a DNS host. Haven’t looked into it yet.
And of course there's the DynDNS approach, but most of those options seemed more complicated or costly these days. I also use gcloud to update the DNS for Let's Encrypt roughly once a month, so it does double duty. The language for the gcloud DNS CLI tool is a bit technical, but try it, test it out, and it'll start to make more sense by example: https://cloud.google.com/sdk/gcloud/reference/dns
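If it helps anyone, the core of that kind of script is only a handful of gcloud calls. A rough sketch, not my actual script (the zone, record name, and IP-check service are placeholders, and it assumes the A record already exists):

```sh
#!/bin/sh
# Sketch: update a Cloud DNS A record when the public IP changes.
ZONE="my-zone"                      # placeholder Cloud DNS managed zone
NAME="home.example.com."            # placeholder record name (note trailing dot)
NEW_IP="$(curl -s https://checkip.amazonaws.com)"
OLD_IP="$(cat /var/tmp/last_ip 2>/dev/null)"

if [ -n "$NEW_IP" ] && [ "$NEW_IP" != "$OLD_IP" ]; then
  gcloud dns record-sets transaction start   --zone="$ZONE"
  gcloud dns record-sets transaction remove  --zone="$ZONE" \
    --name="$NAME" --type=A --ttl=300 "$OLD_IP"
  gcloud dns record-sets transaction add     --zone="$ZONE" \
    --name="$NAME" --type=A --ttl=300 "$NEW_IP"
  gcloud dns record-sets transaction execute --zone="$ZONE"
  echo "$NEW_IP" > /var/tmp/last_ip
fi
```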
For a bit more security, the apps I’m running are in their own virtual machines and I’ve tried to enable default auto updates everywhere. I keep telling myself I’ll set up something more complicated to create a continuous deployment pipeline with test cases and automatic rollback but I haven’t yet gotten around to it. (Though it would be fun to set up my own virtual PPP server at some point for running test scenarios, etc.)
I am using DigitalOcean's DNS API for similar needs: setting ext.mac.mydomain.com and int.mac.mydomain.com to my internal and external addresses from crontab. (Since I'm using a Mac; on Linux you may need to change the ipconfig command to hostname -i/-I.)
Hey that's really cool. On a related note, Cloudflare's API is _really_ simple. Here's a script that I used to run on several machines that would update/insert a DNS record matching hostname->public IP.
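Something along these lines (the zone ID, API token, and domain are placeholders, and it assumes curl and jq are available):

```sh
#!/bin/sh
# Sketch: upsert an A record "<hostname>.example.com" -> current public IP
# using the Cloudflare v4 API.
ZONE_ID="your-zone-id"          # placeholder
CF_API_TOKEN="your-api-token"   # placeholder
NAME="$(hostname).example.com"  # placeholder domain
IP="$(curl -s https://checkip.amazonaws.com)"

# Find an existing record for this name, if any
RECORD_ID="$(curl -s -H "Authorization: Bearer $CF_API_TOKEN" \
  "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records?type=A&name=$NAME" \
  | jq -r '.result[0].id // empty')"

BODY="{\"type\":\"A\",\"name\":\"$NAME\",\"content\":\"$IP\",\"ttl\":300,\"proxied\":false}"

if [ -n "$RECORD_ID" ]; then
  # Update the existing record
  curl -s -X PUT -H "Authorization: Bearer $CF_API_TOKEN" \
    -H "Content-Type: application/json" --data "$BODY" \
    "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID"
else
  # Create a new record
  curl -s -X POST -H "Authorization: Bearer $CF_API_TOKEN" \
    -H "Content-Type: application/json" --data "$BODY" \
    "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records"
fi
```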
You are lucky. My present ISP, Verizon, does not give static IPv4s, and hasn't even heard of IPv6 yet.
My previous ISP, Comcast, required a business plan, statics were $5/month, and they even changed the static IP on me, once. (They issued the same IP to another customer. I complained, and they told me I had to rotate.) Did have IPv6, though.
I have spectrum, and though it is "dynamic", I have never had it change.
A couple of thoughts though for Dynamic IP:
- I have my DNS on Cloudflare, and I have a script that runs every minute to check if my IP changed; if it did, it updates the DNS (this was on a cloud server and I didn't buy a static IP).
- I also use pfSense, and it updates dynamic DNS as well.
I once had an ISP with a 3600-hour lease on a dynamic IP. For small things like game servers or short-term file hosting it may as well have been static, but the bi-annual IP change was usually pretty jarring.
"Own your data" maybe is not the right term. What is needed here is that your data does not vanish because the hosting service goes down or under or terminates your account.
Maybe things like Mediagoblin or IPFS could do that job.
Hi OP, you seem to be the author of the blog (according to your post history).
Your contact email doesn't seem to be working, so I will write it here:
Just a big thank you for your blog! I take so much pleasure in reading it at least once a week! THIS is the internet I like!
Reading through the webpage, I was thinking that points 1, 2 and 4 really weren't relevant. We're talking about bandwidth limitations being the key factor here - not the rest.
1. Page Loading Issues are irrelevant, unless it's because of large items served over limited bandwidth.
2. Static vs Dynamic webpages are irrelevant, if the pages themselves are small. Dynamic of course incurs some CPU on the server-side, but that is a machine not bandwidth issue.
3. Limiting the amount of data is obviously important.
4. Number of requests to the server is only relevant for the request-size, CPU and tcp-overhead (which can be alleviated via multiplexing).
5. Yes, do compress the pages.
6. Agree, website development kits often make the pages much larger.
7. Certainly, but this shouldn't be necessary if you have good cache-headers.
One thing that was not mentioned was ensuring that static items are cacheable by the browser. This has a huge impact.
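A quick way to check what a server currently sends for a static asset (the URL is just an example):

```sh
# Show only the caching-related response headers
curl -sI https://example.com/static/style.css \
  | grep -iE '^(cache-control|expires|etag|last-modified)'
```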
7 is CDN, and it is necessary in the case he described shortly before -- a "hot article". Presumably, a hot article will not get lots of repeat traffic from the same visitor (sure, someone might browse, but you're probably getting 90% one-time visitors). Cache headers do nothing here... a CDN is the only viable solution. And frankly, after using CloudFront for 3 years at work, I started putting my own websites behind a CDN because it costs pennies to do so.
This does not mean I disagree with your other statements; I too found it a mixed bag of the somewhat obvious and the slow-server specific (not just low-bandwidth).
It's pretty bold to say #2 is irrelevant. I don't have data to point to, but anecdotally I believe I have perceived the performance impact of Node pages over static pages. Do you have data to support your claim? He's clearly running on constrained hardware.
> 2. Static vs Dynamic webpages are irrelevant, if the pages themselves are small. Dynamic of course incurs some CPU on the server-side, but that is a machine not bandwidth issue.
Huh? Sure, it's not a bandwidth issue, but the article is about more than serving on low-bandwidth. It's about using limited hardware more generally.
From the first two sentences:
> The cheapest way to run a website is from the home Internet connection that you already have. It's essentially free if you host your website on a very low-power computer like a Raspberry Pi.
I guess the headline refers to bandwidth, and there's some bandwidth-specific framing, but the article is discussing more than just low-bandwidth situations.
This is a tangent, but I want to add to the less-is-more vibe!
I run some surprisingly-spiky-and-high-traffic blogs for independent journalists and authors, that kind of thing. Lots of media files.
There are two ways this typically goes: either you use some platform and try to get a custom domain name to badge it with, or else you imagine some complicated content-management system with app servers, databases, Elasticsearch clusters, etc.
At the time wordpress etc weren't attractive. I have no idea what that landscape is like now, or even what was so unattractive about wp then, but anyway...
So we're doing it with an old Python Tornado webserver on a one-core, 256MB RAM VM at a small hosting provider. (I think we started with 128MB RAM, but that offering got discontinued years ago. It might now be 512MB, I'd have to check. Whatever it is, it's the smallest VM we can buy.)
The webserver is restarted if it crashes using a once-a-minute cron job and flock -n.
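Concretely, the crontab entry is something like this (paths are illustrative, not the real ones):

```sh
# Runs every minute; flock -n exits immediately if a previous instance
# still holds the lock, so at most one copy of the server is ever running.
* * * * * flock -n /tmp/blog.lock python3 /srv/blog/server.py >> /var/log/blog.log 2>&1
```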
The key part of the equation is that the hosting provider I use did away with monthly quotas. Instead, they just throttle bandwidth. So when the HN or some other crowd descends, the pages just take longer to load. There is never the risk of a nastygram asking for more money or threatening to turn off stuff or error messages saying some backend db is unavailable etc.
Total cost? Under $20/month. I think domains cost more than hosting.
The last time I even checked up on this little vm? More than a year ago, I think. Perhaps two? Hmm, maybe I should search for the ssh details...
My personal blog is static and is on gh-pages. A fine enough choice for techies.
Static sites can be a fine choice for non-techies too. A non-profit I work with manages theirs using Publii, which pushes the static site to Netlify. The whole thing is free (except the domain), and quite easy to move.
Yes we probably aren't using the cheapest host in the world. It probably wasn't even the cheapest when we chose it. We chose it because I'd used them in my day job and liked their style and support. And the bandwidth throttle rather than quota is really attractive too.
And I think that more than half of the monthly cost is actually the amortized domain renewal fees etc, not the vms themselves.
So we could probably save a few dollars if we shopped around and moved? But I've just spent more time writing on HN today than I normally spend in a year on thinking about these old servers....
It's just a few dollars, but a big percentage. $20 is very expensive for what you have. You could get a $2.50 IPv6-only Vultr instance, or $3.50 with IPv4, with 500GB of traffic there. Scaleway also starts in that price range, and if nothing changed it includes unlimited traffic. See https://www.scaleway.com/en/virtual-instances/development/ and https://www.vultr.com/products/cloud-compute/. The domain will likely cost about $1 a month.
Seeing you mention Tornado, WordPress not being so attractive, and $20 for a 512MB VPS makes me wonder if this was sometime around 2010-2012? A lot of comments here seem to mention $5/mo VMs, but if this was around that time period, then I don't think $20 is unreasonable at all!
(I used RootBSD back then and they only gave you 256MB for $20 ;-)
The type of people interested in self-hosting to the point of being willing to do so over a shitty ADSL connection aren't likely to be interested in just hosting on AWS.
Exactly. For an entirely static website, I can't see any reason not to host content in cloud storage and serve through a CDN (e.g., CloudFront or Cloudflare). You get the advantage of low cost when running at low volume and the ability to scale up as needed.
Further, your users will get lower latency and faster downloads when accessing one of the globally-distributed edge caches. And lastly, you don't have to expose your own IP address, which would put you at risk of DDoS.
Does anyone have a good argument for self-hosting an entirely static website?
It's more fun. You have complete control and you learn more. It's like building a Morse code key from spare parts rather than buying one from the store.
Having said that, I think NearlyFreeSpeech provides a great web hosting service for pennies a day for those who don't want to host at home. I've used them for years. https://www.nearlyfreespeech.net/services/pricing
It's more fun for me trying to host a site on S3 and I get to learn more about AWS in general. My goal is to have a site that costs a dollar a month baseline and even if something goes viral the expected bill can never be more than $10-20
Yes and no. The classic adage of "what if" tends to fly into these conversations prematurely.
Let's face facts. Most personal sites won't see a giant boost in traffic unexpectedly. Which means hosting and paying a cloud provider is completely unnecessary.
If you happen to become popular enough (the 1%) THEN you start to consider some scaling patterns.
There have been enough articles about people getting surprise AWS bills going into the thousands of dollars. Even as someone doing business with some of my sites that's a risk I'm not willing to take. It's definitely not an option for someone wanting to minimize costs to an extreme by not spending money on a hoster at all. The simple step of having to provide a credit card for AWS without being able to set a limit is not an acceptable risk.
I don't want to make my visitors more trackable than absolutely necessary. Sure, my ISP and their ISP will know that they connected to me, but not to which site thanks to TLS. If I host on CloudFront, Amazon knows exactly which pages they viewed at which time because they can see inside the TLS connection.
Having just moved some of my static sites to S3+Cloudfront, I think the main reason to self-host would be self-education. AWS is great, but it's quite opaque compared to having your stuff right there on a box you can poke at.
There's also a distinct vendor portability advantage. If you want to move your self-hosted static site from your home connection somewhere else, it's pretty straightforward. But if I want to move my static sites from Amazon to some other cloud vendor, I basically have to start my Terraform from scratch.
Are you accusing Amazon of snooping on traffic going to their AWS customers? That seems like quite a charge, and I hope you have evidence. If not, it doesn't look good for someone purportedly good at web development to propose such a silly hypothesis.
I have plenty of evidence. Most companies keep logs and make them available to law enforcement, in countries different from the one the user is accessing the data from.
^ This exactly. A half-decent used Thinkpad can be picked up for slightly north of $120. And it will have 4-8x RAM and at least 4-5x CPU performance. And you own it! The only thing you potentially miss out on is: uptime SLAs ("reliability") and a higher-speed internet connection.
You also might miss out on a public IPv4 address, but that's whole 'nother issue... FRP (fast reverse proxy) is a decent workaround for this.
Now take into account electricity costs. Where I live, running a Lenovo constantly at 10W for a year would cost me $30. Then amortize the remaining lifetime of your laptop (5 years) and the time you spent setting it up, and you will find that a $5 droplet is a cheaper option ;-)
Fair point, it depends on your workload a lot. If you're hosting a few static sites, a droplet _might_ be a better option. Previously, I was paying an arm and a leg for $90 worth of droplets a month to cope with my remote-compile applications. Plus, I don't pay for electricity at my current apartment. It's included in my rent, so I'm really paying for the electricity whether or not I use it (within limits).
Depends what you are serving. If you are going to host a low-traffic blog or site that can go unpredictably offline and stay offline for days, this is OK.
I was doing this for some time, and all the nuances (assuming electricity and internet are cheap, which isn't the case) make it unprofitable in the long run. ISPs usually have really bad upload, port blacklists (e.g. IRC), connections that unexpectedly drop during the night or day, DNS resolvers or forwarders that are usually misconfigured, and so on. Support for requests beyond "my internet is not working" is often zero (e.g. "your DNS isn't configured properly").
I haven't even started on DDoS (SSH, HTTP) attacks, port scans and crawler hits (some crawlers, even from reputable search engines, can be nasty): all these things will eat your monthly quota without serving a single byte of your site to a human visitor. And don't forget visitor impatience these days: if a site isn't loaded in under a second or two, no more visits :)
Maybe you don't need a static IP, though? If it only changes occasionally, you can just run a small program that checks periodically for those changes and updates the DNS records when it does. I've done that (using Cloudflare as the DNS server) and it worked just fine, the downtime was minimal.
My personal web site spent years on slow cable/DSL, and I used Coral CDN at the time to deal with peak traffic. Today I have 1Gbps/200Mbps fiber and keep it running off a tiny free VPS with Cloudflare, and sometimes wonder if I need the VPS at all.
(One of the reasons I moved it off my LAN was security - not providing an ingress point, etc.)
I was just about to try to grab this and try to contact them to return it, but the WHOIS records say it actually is still registered to them, and it expires in 2023. Phew! Maybe they need to update some DNS records?
I'm pretty sure both AWS and Google Cloud have a free tier enough to run basic websites. I've been running mine on Google Cloud. AWS has all kinds of nice freebies too. Like 25GB of DynamoDB with 200M requests/month, and AWS Lambda with 1 million requests/month.
> your website may occasionally miss potential traffic during "high-traffic" periods.
The thing with personal home websites is that there's really no actual problem if the site gets overloaded, or if it goes down for a day or a week or is only up intermittently at all.
These requirements of constant availability and massive scaling aren't universal requirements. It's okay if a wave of massive attention is more than your upstream can support. If people are interested they'll come back. If they don't it's fine too.
Another way would be to upload your site to your local IPFS node. Your files will be automatically cached by other IPFS nodes as people discover your site, providing free load balancing and redundancy.
Your site will still be viewable even after you turn your local node off until all traffic goes to zero and the caches eventually expire.
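Publishing a static site this way is only a couple of commands (the directory name is an example, and the IPNS step is optional if you'd rather point a DNSLink TXT record at the hash instead):

```sh
# Add the built site recursively; the last line printed is the root CID
ipfs add -r ./public

# Optionally map that root CID to your peer ID via IPNS
ipfs name publish /ipfs/<rootCID>
```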
After making a rare popular post (32K hits in a few hours) I moved my static site (a gohugo.io effort) to AWS Lightsail. For less than a fistful of dollars per month it’s someone else’s problem. I keep page size down (it’s text heavy) and so far I haven’t needed any CDN.
If I’m having trouble viewing a page on someone’s hammered server I either look in Google’s cache or use links/elinks to grab the text (usually what I’m interested in).
As someone who just moved away from the exact Github->Netlify-Cloudflare stack... here's my two cents.
Yes, it does make deploys easier. You don't "have to think" anymore. Well, that's the argument.
I self-host by creating a bare git repo on a server, a la `git init --bare mysite.com.git`, and cloning that repo again for the web server (`git clone mysite.com.git mysite.com`). Now add this git hook in the bare repo:
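Something like this works, assuming the layout above (paths and branch name are examples, and the hook file needs to be executable, e.g. `chmod +x hooks/post-receive`):

```sh
#!/bin/sh
# hooks/post-receive in the bare repo: refresh the web-root clone on every push.
cd /var/www/mysite.com || exit 1
unset GIT_DIR                      # hooks run with GIT_DIR pointed at the bare repo
git pull --ff-only origin master   # the clone's origin is the bare repo
```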
Now my deploys take a second and become visible immediately. If you are intending on serving thousands of requests per second, sure, you might need CDNs and caching. But if you're hosting a portfolio/blog website, you might not need it. On top of that, demonstrating knowledge of backends is never a bad thing for a personal website.
1. Your deployment is not atomic. Probably should consider checking out to and building in a different directory, then atomically switching (a symlink is an easy way to achieve atomicity; see the sketch after this list).
2. Using a good CDN ensures your site is fairly responsive on (hopefully) all continents. Your server probably isn’t. Your closet server — even less.
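A sketch of the symlink approach (paths are placeholders; the web server's docroot would point at the `current` symlink):

```sh
#!/bin/sh
# Hypothetical atomic deploy: build into a fresh timestamped directory,
# then flip a "current" symlink in one rename.
set -e
SITE=/var/www/mysite.com
NEW="$SITE/releases/$(date +%Y%m%d%H%M%S)"

mkdir -p "$SITE/releases"
git clone --depth 1 /srv/git/mysite.com.git "$NEW"
# (run your static-site generator here, if any)

ln -s "$NEW" "$SITE/current.tmp"
mv -T "$SITE/current.tmp" "$SITE/current"   # rename(2) makes the switch atomic (GNU mv)
```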
Oh shoot yeah symlinking sounds like a good idea. I should try that out. The thing is, for the amount of visitors I have, both of those points are kinda moot.
Atomic update is just good engineering practice regardless, and it’s pretty easy to work into your existing flow.
As for responsiveness, it’s a good thing to proactively prepare for an HN hug of death ;) It also starts to matter when the traffic crosses into 100k a day, as is the case in TFA.
Dithering is done once (not on every request) so it's not a problem. I will add one-time gzip compression of images at some point, but only when the load on my blog increases to the point of breaking bandwidth. I suspect CPU will falter first, so no point worrying really!
Ok, I thought you meant HTTP compression, but yeah it makes it bigger! jpeg is not interesting to begin with because of artifacts. The dithering is a style, all-in-all my choice had very little to do with actual size savings... it's also simpler = can be displayed on monochrome displays, the code to make it and render it is simpler. etc. etc. Sure in pure bandwidth savings you'll win every day of the week, but in style? Life without art is meaningless for humans. Dithering is minimalism of expression!
The artifacts may be exaggerated there, as that image got JPEG compressed twice - once by me, and then Imgur did it again and made it smaller (edit: actually, can't see a difference). Here, I've uploaded it as a PNG, so I don't think Imgur messed with it https://i.imgur.com/uzlR3Fj.png
Mmm gotcha, I'll have to run some tests. I've never considered using dithering before, but the minimalist aesthetic looks kinda cool. But you're right that it wouldn't make much sense if JPG is better though.
To go along the self-hosting crowd, the whole decentralized web with alternatives to HTTP is solving issues like uptime and low upstream bandwidth by distributing distribution itself, the way bittorrent does. Dat (via beaker browser) and zeronet come to mind.
My ISP (Cox) doesn't allow inbound traffic on port 80. Anyone know any tricks of getting around that? I'm currently using a reverse proxy on a friends server to tunnel through an alternative port, but I'm looking for a better solution.
The reverse proxy is a workaround so I can use port 80: I have my web address point to my friend's server, which reverse-proxies to my home server, masking the alternative port from the user.
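For reference, the tunnel half of a setup like that can be as simple as a persistent reverse SSH forward (hostnames are placeholders; the friend's box still needs something on its end, e.g. nginx or an iptables redirect, mapping public port 80 to the forwarded port):

```sh
# Forward connections hitting port 8080 on the friend's VPS
# back to port 80 on the home server, and keep the tunnel alive.
autossh -M 0 -N \
  -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
  -R 8080:localhost:80 user@friends-vps.example.com
```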
doesn’t putting CF in front of your home web server mostly solve this problem? CF will cache static assets for you and take all the pain of a traffic surge away.
Netlify has strict limits in their TOS on what you are allowed to host there. There’s lots of legal content that you might want to publish that you aren’t allowed to put on Netlify.
Additionally, their uploader CLI is spyware, and their corporate stance on this is that you agreed to their uploader tool spying on you when you created your Netlify account.
They also can’t pull from private git hosting, only the big public ones (which are themselves ethically questionable), which makes using Netlify for builds a bit of an issue if you, for example, self host Gitea or GitLab.
I use a PaaS (sort of a self-hosted Heroku) called CapRover that pulls/builds from a self-hosted Gitea on branch change. It's a relatively small change to drop a two-stage Dockerfile into a jekyll/hugo repo that does the build in stage one, then copies only the resulting static files into a totally off-the-shelf/vanilla nginx container in stage two for runtime hosting.
For my main website, I still have cloudflare in front of that.
It silently uploads your usage data without consent, which includes your IP and thus coarse location data as well as any vendor relationships you use it within (eg hosting or CI).
It even sent a “telemetry disabled” telemetry event when/after you explicitly indicated you didn’t want telemetry sent, until I repeatedly complained (they brushed it off initially): https://github.com/netlify/cli/issues/739
I just don’t believe that the company has any meaningful training or priorities around privacy.
As for the types of content you can’t post, I encourage you to peruse their TOS:
> Content with the sole purpose of causing harm or inciting hate, or content that could be reasonably considered as slanderous or libelous.
I personally would like to be able to post political cartoons or other political content expressing and inciting hate toward, for example, violent or inhuman ideologies, and those would be posted for the express purpose of causing harm and damage to the political campaigns they target.
Netlify should not be policing legal, political speech on their platform.
You also aren’t allowed to host files over 10MB(!) so that rules out hosting your own music, podcasts, high res photography, most types of software downloads, or videos on your website, all pretty normal/standard things to host on a website in 2020.
I'm also using this, and now that I need some dynamic stuff to allow booking me in my calendar, I found that I also get simple access to AWS Lambda for free, way more than enough for little me.
All websites need SSL, even if the content is not sensitive. Someone could inject something like "I'm raising money for a charity, send money >>HERE<<, thanks!" and people who visit your non-sensitive website could get tricked.
That's just a simple example, but a more competent attacker would also use your unencrypted traffic to inject malicious code that targets some browser vulnerability, or makes the visitor download malware, ...
A single ping to the other side of the earth takes a theoretical minimum of 133ms, and more realistically about 300ms. An SSL handshake has to exchange data twice (thrice?), so even without processing delays at all we easily get a third of a second or longer if you're far enough away.
That said, SSL is useful on read-only documents: it provides integrity. Without SSL anyone between your server and the end user can just change the page content, whether that's for misinformation, false-flag-attacks, inserting malware, scanning for censored content (which is known to happen) or for inserting ads (which also happens).
The article features some very good advice, regardless of how you actually host the site, that should be followed by many other sites around the web...
It basically boils down to the old KISS principle "keep-it-stupid-simple".
For the kind of site described here, there are plenty of CDNs that will just treat it as a rounding error in terms of their running costs, if it costs anything at all, and give it to you for free.
Low-tech Magazine [1] is an amazing example of sustainably serving a website: they use solar-powered-only infra, and the website is theoretically only available if there's been enough sun to power the servers that day (in Barcelona, at least, we have sun much of the year).
I ran garyshood.com off of a Pentium 3 Dell PowerEdge and a Comcast connection at its peak (2009-2012). I couldn't offer an upload speed of more than 20-40 KB/s without slowing down the house, but with traffic-shaping rules and following most of what he's outlined here, it was easy. It wasn't even CPU-bound, it was just network-bound, but I could still handle 100-300k unique visits a month. I did have to host the downloadable exe for the autoclicker on a third party for $3 a month; even at 120 KB that thing would use too much house bandwidth.