Hacker News | CherryJimbo's comments

The Cloudflare Discord is interesting. It's mostly maintained by community volunteers (Champs and MVPs), with team participation varying drastically from team to team. The community does a great job when they can, but it doesn't feel like most teams are empowered to help people there. There are some fantastic employees who do help regularly, but as far as I know, most of them are doing this out of kindness, since it's not actually part of their role (whether Cloudflare should make this part of some folks' roles is definitely something they should think about heavily).

The best way to get help in Discord is to post in a forum channel like #pages-help or #general-help. The community has tooling to actively monitor and reply to those, vs. a fast-moving text channel where things can be buried quickly. This is one of the reasons Discord isn't a great platform for offering support.

Community Champs/MVPs have the ability to escalate some things, but posting on HN will always get more attention. We've been raising issues that impact lots of people every single day (wrangler.toml + Pages config breaking projects regularly, for example), but if you want something fixed quickly, making some noise on social media is always going to be the quickest way, sadly.


You would almost certainly have to be paying for an ENT agreement before you got anywhere close to 300PB of traffic, which would incur bandwidth fees.

It'd still be significantly less than $27 million though.
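For scale, the $27 million figure works out to pricing all 300 PB at a flat per-GB egress rate; the ~$0.09/GB used below is the commonly cited first-tier S3 rate and is an assumption here (real AWS bills tier down at volume):

```typescript
// What 300 PB/month of egress costs at a flat ~$0.09/GB
// (assumed first-tier S3-style rate; AWS tiers down at volume).
const RATE_USD_PER_GB = 0.09;
const GB_PER_PB = 1_000_000;

const monthlyEgressGB = 300 * GB_PER_PB; // 300 PB in GB
const monthlyCostUSD = monthlyEgressGB * RATE_USD_PER_GB;

console.log(monthlyCostUSD); // roughly $27 million
```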


300PB is literally on their example pricing page, so they believe this is allowed.


Where?



1 GiB * 1000000/day = 29.01 PiB/mo, not 300. But yeah, if they're giving it as an example...


It actually is 300PB. The example prices reading 10 million GB/day, over 30 days, at $104. 10 million GB is 10 PB, so that's 300 PB over the month. This is from the examples at the bottom.
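To reconcile the two calculations (my arithmetic, using the figures from the comments above): the round 300 PB comes from decimal units and a 30-day month, while binary units over an average-length month give the ~29 PiB figure:

```typescript
// Decimal units: 10 million GB/day for 30 days.
const gbPerMonth = 10_000_000 * 30;        // 300,000,000 GB
const pbPerMonth = gbPerMonth / 1_000_000; // 1 PB = 1,000,000 GB
console.log(pbPerMonth);                   // 300 PB/month

// Binary units: 1 GiB x 1,000,000/day over an average month (365/12 days).
const gibPerMonth = 1_000_000 * (365 / 12);
const pibPerMonth = gibPerMonth / 2 ** 20; // 1 PiB = 2^20 GiB
console.log(pibPerMonth.toFixed(2));       // ~29.01 PiB/month
```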


This is seriously cool. From Twitter (https://twitter.com/KentonVarda/status/1178723944007733251):

> Not only does this use (much) less overall CPU and RAM, but it avoids delaying TTFB. Also elements you want to modify are selected using CSS selector syntax which makes things nice and easy. It may seem like a small thing to some but I'm pretty excited about this.

Previously I was doing things like fetching content, running some regex replaces, and then returning the changed content. Being able to do this dynamically and in real-time without delaying TTFB is incredible.
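A rough sketch of what this looks like in a Worker (the HTMLRewriter `.on(selector, handler)` / `.transform(response)` shape is from Cloudflare's docs; the `img` selector and lazy-loading tweak are just an illustrative example, not what the parent comment is doing):

```typescript
// Illustrative element handler: add loading="lazy" to every <img>.
// Kept as a plain function so the logic works outside the Workers runtime.
interface ElementLike {
  getAttribute(name: string): string | null;
  setAttribute(name: string, value: string): unknown;
}

function lazyLoadImages(el: ElementLike): void {
  if (el.getAttribute("loading") === null) {
    el.setAttribute("loading", "lazy");
  }
}

// Inside a Worker, the handler plugs into HTMLRewriter, which rewrites
// the HTML as it streams through -- no buffering, so TTFB is unaffected.
declare const HTMLRewriter: any; // provided by the Workers runtime

async function handleRequest(request: any): Promise<any> {
  const upstream = await fetch(request);
  return new HTMLRewriter()
    .on("img", { element: lazyLoadImages })
    .transform(upstream);
}
```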


If you're not familiar with Backblaze, they offer personal backup services at very affordable rates, but their best offering (in my opinion) is their B2 Cloud Storage (https://www.backblaze.com/b2/cloud-storage.html). It's essentially like Amazon S3, but at a fraction of the cost. If paired with Cloudflare and the Bandwidth Alliance (https://www.cloudflare.com/bandwidth-alliance/), egress is entirely free too, meaning that you only pay for storage and API requests.
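To put "a fraction of the cost" in rough numbers: the list prices around this time were about $0.005/GB-month for B2 storage vs $0.023/GB-month for S3 standard, and ~$0.09/GB for S3 egress vs $0 through the Bandwidth Alliance. Treat all four rates as assumptions to check against current pricing:

```typescript
// Back-of-envelope monthly bill for 10 TB stored + 5 TB downloaded.
// All rates are assumed list prices, not guaranteed current ones.
const GB_PER_TB = 1000;
const storedGB = 10 * GB_PER_TB;
const egressGB = 5 * GB_PER_TB;

const b2Monthly = storedGB * 0.005 + egressGB * 0; // egress free via Cloudflare
const s3Monthly = storedGB * 0.023 + egressGB * 0.09;

console.log({ b2Monthly, s3Monthly }); // ~$50 vs ~$680
```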

Their biggest limitation to date has been that they're only in the US, but today they launched their first EU data center in Amsterdam!

I'm a heavy user of Backblaze both personally and in my company, and you can check out some of these blog posts for use cases and more info:

- https://blog.jross.me/free-personal-image-hosting-with-backb...

- https://nodecraft.com/blog/development/migrating-23tb-from-s...


Thanks <3


You get 100,000 requests per day, not per month. The burst limits are definitely a concern for heavy traffic, but for just $5 you can remove the burst limits entirely, as you mention.


That was entirely my bad. I was moving a few things around.


And compresses them heavily.


Workers are also used for basic CORS headers, and stripping some other unnecessary headers. They're definitely not required, but I don't believe you can do URL rewriting with page rules; redirects, sure, but not rewriting.


You're very welcome - I'm glad it was helpful. B2 is significantly cheaper than S3, especially when paired with Cloudflare for free bandwidth. If you're interested, my company Nodecraft talked about a 23TB migration we did from S3 to B2 a little while ago: https://nodecraft.com/blog/development/migrating-23tb-from-s...


How's the outgoing/egress bandwidth comparison? Outgoing bandwidth is expensive on AWS.


It's entirely free if you only use the Cloudflare-routed URLs thanks to the Bandwidth Alliance: https://www.cloudflare.com/bandwidth-alliance


How does Cloudflare itself afford to give bandwidth for free? I understand that I can pay $20/mo for a pro account, but they also have a $0/mo option with fewer bells and whistles. What allows them to charge nothing for bandwidth?


Because we’re peered with Backblaze (as well as AWS). There’s a fixed cost of setting up those peering arrangements, but, once in place, there’s no incremental cost. That’s why we have similar agreements to the Backblaze one in place with Google, Microsoft, IBM, Digital Ocean, etc. It’s pretty shameful, actually, that AWS has so far refused. When using Cloudflare, they don’t pay for the bandwidth, and we don’t pay for the bandwidth, so why are customers paying for the bandwidth? Amazon pretends to be customer-focused. This is a clear example where they’re not.


Maybe Amazon is afraid of sabotaging Cloudfront and losing revenue coming from outgoing data transfers?


Thank you for clarifying. If I were to use a Google cloud service from a Cloudflare Worker would there be no bandwidth charges? That would change everything for us.


Bandwidth between GCP and Cloudflare isn't free, unlike with Backblaze, but the cost is reduced.

https://cloud.google.com/interconnect/docs/how-to/cdn-interc...


As AWS is the primary cash cow for Amazon, I doubt they would ever change that; bandwidth fees are a key profit maker for them. On the other hand, AWS's crazy bandwidth pricing is probably pretty beneficial in driving customers towards you guys.


Their terms indicate it should primarily be used for caching HTML, so if they find costly abusers, they could use this clause to get them to upgrade to a paid tier.

> 2.8 Limitation on Non-HTML Caching The Service is offered primarily as a platform to cache and serve web pages and websites. Unless explicitly included as a part of a Paid Service purchased by you, you agree to use the Service solely for the purpose of serving web pages as viewed through a web browser or other application and the Hypertext Markup Language (HTML) protocol or other equivalent technology. Use of the Service for the storage or caching of video (unless purchased separately as a Paid Service) or a disproportionate percentage of pictures, audio files, or other non-HTML content, is prohibited.

https://www.cloudflare.com/terms/


Wasabi has free egress and (AFAIK) is cheaper than both S3 and B2.


According to their documentation, Wasabi will move you from their $0.0059/GB-storage, free-egress plan to their $0.005/GB-storage, $0.01/GB-download plan (the same price as B2, minus the API charges) if you download more than your total storage in a month (e.g. with 100TB stored, downloading more than 100TB/month).

https://wasabi.com/pricing/pricing-faqs/
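Using the rates quoted above (taken from Wasabi's FAQ at the time; treat them as assumptions), the policy works out like this for 100 TB stored:

```typescript
// Wasabi's two plans, per the FAQ rates quoted above (assumptions):
// the free-egress plan is $0.0059/GB storage with $0 egress, but
// monthly egress must stay below total storage; the pay-per-download
// plan is $0.005/GB storage + $0.01/GB download.
const storageGB = 100_000; // 100 TB

// Returns null when egress exceeds storage (you'd be moved off the plan).
function freeEgressPlanUSD(egressGB: number): number | null {
  return egressGB > storageGB ? null : storageGB * 0.0059;
}

function payPerDownloadPlanUSD(egressGB: number): number {
  return storageGB * 0.005 + egressGB * 0.01;
}

console.log(freeEgressPlanUSD(50_000));       // $590, within policy
console.log(freeEgressPlanUSD(150_000));      // null: over the cap
console.log(payPerDownloadPlanUSD(150_000));  // $500 + $1,500 = $2,000
```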


As others have said, bandwidth costs can be absolutely insane with AWS. This was actually the primary reason we moved from S3 to Backblaze B2, as documented at https://news.ycombinator.com/item?id=19648607, and saved ourselves thousands of dollars per month, especially in conjunction with Cloudflare's Bandwidth Alliance. https://www.cloudflare.com/webinars/cloud-jitsu-migrating-23...

We still use AWS for a few things and still have a small bill with them every month, but we're very careful about putting anything there that's going to push a lot of traffic.


People really should be aware of this, yes. Even if your client/management absolutely insists on S3 for its reputed durability etc., when bandwidth costs are high you can often get 'free' compute on the side: rent servers elsewhere to proxy and cache S3 access, which cuts the bandwidth bill, and run other things next to the largely network-bound load.

In general, I find the big problem with AWS is that cost is handled 'in reverse': developers often get near free rein, and cost only gets handled when someone balks at the size of the AWS bill. Often it turns out to be trivial to cut by changing instance types or renting a server somewhere to act as a cache, but by that point people have often spent tens of thousands in unnecessary fees.
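A minimal sketch of the proxy-and-cache idea (the in-memory read-through cache and the names here are illustrative; a real deployment would typically be nginx or Varnish with a disk cache in front of S3):

```typescript
// Read-through cache in front of an origin such as S3: each object is
// fetched (and paid for) once, then served from the local copy, so N
// downloads of a hot object cost one origin transfer instead of N.
type Fetcher = (key: string) => Promise<Uint8Array>;

class ReadThroughCache {
  private cache = new Map<string, Uint8Array>();
  constructor(private fetchFromOrigin: Fetcher) {}

  async get(key: string): Promise<Uint8Array> {
    const hit = this.cache.get(key);
    if (hit !== undefined) return hit;            // local: no origin egress fee
    const body = await this.fetchFromOrigin(key); // one paid origin fetch
    this.cache.set(key, body);
    return body;
  }
}
```

A production version would add eviction (LRU), size limits, and revalidation, but the shape is the point: the proxy absorbs the repeat traffic and the origin only sees cache misses.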

There's an underserved niche for people to do AWS cost optimization on a 'no-win no-fee' basis.

I used to help clients with cutting costs on AWS and if people were in doubt I'd often offer that.

And the savings were often staggering (e.g. halving hosting costs was common; once we cut costs 90 percent by moving a bandwidth-intensive app to Hetzner even though long-term storage remained on S3).

The biggest challenge in serving that niche is getting people to realise they may be throwing money out the window, as surprisingly many people still assume AWS is cheap. Offering to do an initial review for free, and not charging if I couldn't achieve the promised savings, made it a lot easier. Someone who likes the sales side more could make a killing doing that.


This is why the "Netflix uses AWS!" rhetoric is misleading. Yes, they use AWS extensively for front-end, analytics, transcoding, billing (now), etc. The one thing they don't use AWS for much at all is content delivery (AKA big bandwidth). That uses the Netflix Open Connect CDN, which is entirely developed and run in-house.


Also, I've worked with companies a tiny fraction of Netflix's size who got steep discounts. It's quite possible AWS becomes cost-effective when you have millions in yearly spend as leverage. The problem is surviving until you get there.

