Hacker News
Tell HN: Cloudflare R2 allows 300PB of egress for chump change
66 points by truetraveller on Oct 25, 2022 | 28 comments
In their official example, 300PB for $104. Cloudflare only charges for the cost of GET operations. Actual "bandwidth" is free. For comparison, Amazon S3 egress is $90/TB. So, the equivalent cost would be 300,000 TB × $90/TB = $27 million.

What is the catch? Am I missing something?

https://developers.cloudflare.com/r2/platform/pricing/
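A quick sanity check of the numbers above. The Class B (read) rate of $0.36 per million operations is my assumption based on R2's published pricing at the time; the thread's $104 figure is in the same ballpark. The S3 rate is the $90/TB cited above.

```python
# Back-of-the-envelope comparison using the thread's figures.
# Assumed rates: R2 Class B reads at $0.36/million ops (egress itself
# is $0), S3 egress at $90/TB.

GETS_PER_DAY = 10_000_000          # from Cloudflare's pricing example
OBJECT_SIZE_GB = 1                 # 1 GB per object
DAYS = 30

total_gets = GETS_PER_DAY * DAYS                 # 300 million reads
egress_tb = total_gets * OBJECT_SIZE_GB / 1000   # 300,000 TB = 300 PB

r2_cost = total_gets / 1_000_000 * 0.36          # reads only, egress free
s3_cost = egress_tb * 90                         # bandwidth only

print(f"egress: {egress_tb / 1000:.0f} PB")      # 300 PB
print(f"R2:  ${r2_cost:,.0f}")                   # ~$108
print(f"S3:  ${s3_cost:,.0f}")                   # $27,000,000
```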



See also: https://www.cloudflare.com/bandwidth-alliance/

Vendors listed there have pretty competitive pricing for bandwidth.

I find many AWS services to be priced decently (if you use them properly, scale elastically, etc.), but that's definitely not the case for bandwidth. Inter-AZ charges are among the worst: yes, that's cheaper than egress, but it's no consolation, because you're essentially required to use multiple AZs if your business has any real availability requirements.

And don't get me started on NAT Gateway pricing...


The Cloudflare Bandwidth Alliance is fundamentally a publicity tool for cloud vendors: most partners still haven't fulfilled their promises (including the founding members who joined in 2018). Their status is perpetually something like "all bandwidth between X and Cloudflare will be free of charge in the upcoming months", until one day some of them quit the alliance, as DigitalOcean (IPO) and Linode (acquired by Akamai) did. So, just don't trust any of those alliance members.


Unfortunately the "Bandwidth Alliance" is too ambiguous for my liking. It would require both parties to be transparent, have clear ToS, no limits, etc. In practice, it's simply not worth it. Source: myself, trying to "beat the system".


You are missing the Class B charges. The total would be $234.90, which is still slightly less than $27 million...


No, I included Class B charges. Writes are Class A, reads are Class B. That additional $130 is for 30 million writes, of any data size, even 1 GB. I know. So $104 + $130 = $234.
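For what it's worth, the operation charges can be sketched as a tiny estimator. The rates below ($4.50 per million Class A ops, $0.36 per million Class B ops) are my assumption of R2's launch-era pricing and should be checked against the pricing page; at those rates the example works out to roughly $243, close to the thread's $234.

```python
# Hypothetical R2 operations-cost estimator. Rates are assumptions
# (Class A ~$4.50/million, Class B ~$0.36/million); storage is ignored.

def r2_ops_cost(class_a_ops: int, class_b_ops: int,
                a_rate: float = 4.50, b_rate: float = 0.36) -> float:
    """Cost in USD for the given operation counts (rates per million ops)."""
    return class_a_ops / 1e6 * a_rate + class_b_ops / 1e6 * b_rate

# The example above: 30M writes + 300M reads
print(round(r2_ops_cost(30_000_000, 300_000_000), 2))  # 243.0
```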


On the other hand, if you enable Cache in front of your bucket, then your class B requests probably go back down to very little...


> What is the catch? Am I missing something?

Some say Cloudflare throws the ToS rule-book at you once you cross 5TB/mo (or whatever the threshold is; we're at multiple TBs and no one from Cloudflare has thrown so much as an email at us). That said, Cloudflare's absurdly high bandwidth rates for Spectrum (their L4 load balancer) [0] remain a mystery.

Pretty recently, Cloudflare blogged about AWS' potential 80x markup on egress [1]. That is, the $90/TB AWS charges its customers likely costs them a measly $1/TB or so.

Back in 2014, Cloudflare blogged about how they work relentlessly to bring down bandwidth costs by peering aggressively where possible [2] (which apparently means $0 for unlimited bandwidth [3]). And where they can't or don't [4], egress runs roughly 5x (est.) the ingress (one pays for the higher of the two), but this creates an arbitrage opportunity: they can give away DDoS protection for free.

This is pretty similar to Amazon's free-shipping offer for Prime customers, despite it being one of the biggest loss-makers for their retail business. Prime has since forced Amazon to bring down costs by building an expensive and vast distribution and logistics network that spans the globe. Doing so was a considerable drain on resources in the short run, but in the long run it has become an unbreachable moat around Amazon's largest business.

Analysts like Ben Thompson (stratechery.com) and Matthew Eash (hhhypergrowth.com) have written in detail about Cloudflare's modus operandi over the years, both agreeing that Cloudflare's model is so brilliantly disruptive that even Clayton Christensen would be proud of it.

[0] $1/GB! https://support.cloudflare.com/hc/en-us/articles/36004172187...

[1] https://blog.cloudflare.com/aws-egregious-egress/

[2] https://blog.cloudflare.com/the-relative-cost-of-bandwidth-a...

[3] https://www.cloudflare.com/bandwidth-alliance/

[4] https://bgpview.io/asn/13335#info


> Some say, Cloudflare throws the ToS rule-book at you once you cross 5TB/mo

5TB a month sounds low. Sure, it's plenty for a website and many apps, but not much if you require file transfers.


This is probably the catch. Cloudflare is, unfortunately, very ambiguous at times. This view is shared by many users on their forums as well.


AWS charges egregiously for bandwidth. But yes, I'd expect that much bandwidth to cost at least around $1 million on most providers. If you end up using THAT much bandwidth with so few GET requests, expect a call from Cloudflare.


Can they actually do that? It's not mentioned in their ToS.


>Cloudflare may, with or without notice to you and without liability of any kind, temporarily limit your storage and/or the number of requests you can make or receive using the Developer Platform for any reason (in its sole reasonable discretion), including without limitation, if processing such requests would put an undue burden on the Cloudflare network, adversely impact the Service, or otherwise threaten the integrity of Cloudflare’s networks.


Thanks for that. Was seriously considering R2. Not anymore.


You would almost certainly have to be on an Enterprise agreement before you got anywhere close to 300PB of traffic, and that would incur bandwidth fees.

It'd still be significantly less than $27 million though.


300PB is literally on their example pricing page, so they believe this is allowed.


Where?



1 GiB * 1000000/day = 29.01 PiB/mo, not 300. But yeah, if they're giving it as an example...


It actually is 300PB. Reading 10 million GB/day for 30 days is $104, and 10 million GB = 10 PB. This is from the examples at the bottom of the page.
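The two calculations above differ because they assume different request counts and mix binary (PiB) and decimal (PB) units. A quick check of both, using the figures quoted in the thread:

```python
# Unit check for the two calculations above (figures from the thread).

# 1 GiB objects at 1,000,000 reads/day for 30 days (binary units):
gib_total = 1 * 1_000_000 * 30        # 30,000,000 GiB
pib = gib_total / 1024**2             # ≈ 28.6 PiB
print(f"{pib:.2f} PiB")

# 10,000,000 GB/day for 30 days (decimal units, per the pricing example):
gb_total = 10_000_000 * 30            # 300,000,000 GB
pb = gb_total / 1e6
print(f"{pb:.0f} PB")                 # 300 PB
```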


It seems that they updated their docs just one hour ago to a 100 KB object instead of 1 GB.



I really like the idea of using R2, but it seems like there's no real protection against losing EVERYTHING should a write-enabled key get leaked? On S3, I would be able to set up a versioned bucket and prevent deletions, for example.

What are others using R2 for? A "cached" version of your S3 buckets? I'm assuming I could set up something like that with Workers?


You don't even need to set it up by hand. Cache Reserve is the way to put Cloudflare's CDN with a persistent cache in front of your origin (which could be an S3 bucket). It's a one-button config thing.

If you use Worker bindings, then there are no write-enabled keys. It's a fair critique, obviously, and the team is aware of some of the product gaps. We only just launched GA :).


Tbh I wouldn't trust Cloudflare to host anything, given their current history.


Could you please give some examples of that "current history"?


[flagged]


Do you think that's a little too dramatic? Cloudflare took down 8chan and Kiwi Farms, and while the centralization of the web does scare me, I think you're being silly here.


I don't think they are actually mad at Cloudflare. They just see the latest in a very long chain of "refusals to do business" and think it's all Cloudflare's fault. It's a lack-of-reasoning problem: they recommend using other hosts/CDNs/etc., but never question why their favorite hate site isn't up and running on those alternatives right now. It makes a certain amount of twisted sense, though: if they subscribe to a notion as silly as "melanin is the root of evil", they probably have other forms of magical thinking too, like "Cloudflare killed the site by not keeping their business even when alternatives I recommend exist".


Pretty sure AWS offers that feature too



