Hacker News

Whether or not this is possible, it seems like a serious DDoS risk. Someone with malicious intent could run up a huge hosting bill for you. It might be difficult to apply IP-based rate limiting without a server.


With S3, your server supplies the client with signed, time-limited upload URLs (a feature S3 supports). The server is effectively authorizing the client to upload directly to S3. You could do this only for logged-in users, apply your own IP-based rate limiting, or whatever else you want. Front-end direct-to-cloud doesn't necessarily mean "without a server": you can set it up so every upload has to be authorized by the server, and that's generally how you'd do it.
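To make the "server authorizes, client uploads" step concrete: in practice you'd call boto3's `generate_presigned_url` (or `generate_presigned_post`), but here's a stdlib-only sketch of AWS Signature Version 4 query signing so the mechanics are visible. The bucket, key, and credentials are made-up placeholders.

```python
# Sketch of server-side presigned-URL generation (SigV4 query signing).
# Real apps should use boto3; this shows what the signing actually does.
import datetime
import hashlib
import hmac
from urllib.parse import quote

def presign_put_url(bucket, key, access_key, secret_key,
                    region="us-east-1", expires=900, now=None):
    """Return a time-limited URL that authorizes a single PUT to S3."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    host = f"{bucket}.s3.{region}.amazonaws.com"
    scope = f"{datestamp}/{region}/s3/aws4_request"

    # Query parameters SigV4 requires, sorted by name in the canonical form.
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    canonical_query = "&".join(
        f"{quote(k, safe='')}={quote(v, safe='')}"
        for k, v in sorted(params.items())
    )
    canonical_request = "\n".join([
        "PUT",
        "/" + quote(key),
        canonical_query,
        f"host:{host}\n",   # canonical headers (just host), trailing newline
        "host",             # signed headers
        "UNSIGNED-PAYLOAD", # body isn't known at signing time
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256",
        amz_date,
        scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])
    # Derive the signing key by chaining HMACs, then sign.
    k = ("AWS4" + secret_key).encode()
    for part in (datestamp, region, "s3", "aws4_request"):
        k = hmac.new(k, part.encode(), hashlib.sha256).digest()
    signature = hmac.new(k, string_to_sign.encode(), hashlib.sha256).hexdigest()

    return f"https://{host}/{quote(key)}?{canonical_query}&X-Amz-Signature={signature}"
```

The server hands a URL like this only to clients it has decided to authorize; the client then PUTs the file bytes straight to that URL, and the upload never touches your app servers.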

I don't know whether GCS support is built into uppy at present (contrary to another comment, I don't believe GCS could be called "S3-compatible"), but I suspect there's a way to use uppy's hooks to add it. As long as GCS also supports signed, time-limited upload-only URLs, the same approach works.

Where you put the file in cloud storage and what you do with it afterwards is, I believe, not uppy's concern. But if you are, for instance, using the Ruby shrine file-attachment library (which is built out with examples supporting uppy and direct-to-S3 as a use case), shrine strongly encourages a two-stage/two-location flow: anything uploaded from the front end lands in a temporary 'cache' storage, which on S3 you might clean with lifecycle rules that automatically delete objects older than X. Files are only moved to more permanent storage on some other event.
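For the temporary 'cache' stage, an S3 lifecycle configuration along these lines (the `cache/` prefix is just an example name) auto-expires anything that was uploaded but never promoted to permanent storage:

```json
{
  "Rules": [
    {
      "ID": "expire-stale-direct-uploads",
      "Filter": { "Prefix": "cache/" },
      "Status": "Enabled",
      "Expiration": { "Days": 1 }
    }
  ]
}
```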

Once you get into it, all the concerns of file handling can get pretty complicated. But having the front end upload directly to cloud storage can be a pretty great thing, depending on your back-end architecture: it keeps your actual app 'worker' processes/threads from being tied up handling file uploads, dealing with slow clients, etc. That can make proper sizing and scaling of your back end a lot more straightforward, with more predictable resource limits.


There are two ways to upload directly to S3 buckets (or GCS):

1) You allow append-only access for the world, maybe in combination with an expiry policy. Only useful for a few use cases, I'd say.

2) You deploy request signing, and only sign for users who are logged in or otherwise match criteria important to your app. A bit more hassle, and it still requires server-side code (whether traditionally hosted or 'serverless'), but at least your servers aren't receiving the actual uploads, removing a potential single point of failure and bottleneck.
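Option 1 can be sketched as a bucket policy like the following (bucket and prefix names are hypothetical). It is "append-only" in the sense that anyone can PUT new objects but nobody anonymous can list, read, or delete them:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "WorldWriteNoReadNoDelete",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::example-drop-bucket/incoming/*"
    }
  ]
}
```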

That said, I'm not sure how serious you are about handling file uploads, but uploading directly to buckets often means uploading to a single region (on AWS, a bucket may be hosted in us-east-1, for instance, meaning high latency for folks in e.g. Australia). This may or may not be a problem for your use case, but it did bring us complaints when we had that setup.


>That said, I'm not sure how serious you are about handling file uploads, but uploading directly to buckets often means uploading to a single region (on aws, a bucket may be hosted in us-east-1 for instance, meaning high latency for folks in e.g. Australia). This may or may not be problematic for your use case, but it did bring us complaints when we had that.

You can use https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acc...

S3 Transfer Acceleration uses CloudFront's distributed edge locations. As data arrives at an edge location, it is routed to Amazon S3 over an optimized network path. It costs extra, though.
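From the client's perspective the only change is the endpoint hostname; signing and the rest of the request stay the same. A tiny sketch (the bucket name is hypothetical, and acceleration must first be enabled on the bucket):

```python
# With Transfer Acceleration enabled on the bucket, clients swap the
# regional endpoint for the global s3-accelerate one.
bucket = "example-uploads"  # hypothetical bucket name
regional_endpoint = f"https://{bucket}.s3.us-east-1.amazonaws.com"
accelerated_endpoint = f"https://{bucket}.s3-accelerate.amazonaws.com"
```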


These are valid considerations, but they aren't specific to S3 (or to uploading).

If S3 ends up being the most DDoS-able part of your system, that's probably a good problem to have.

Running up hosting bills is a scenario that can be addressed by various technical means (as a sibling comment explains). Many people seem to judge the risk times probability too small to put much preemptive effort into it; it's basically a question of how much damage would be done before your monitoring catches it. AWS has also been known to forgive bills caused by malicious attackers in some situations.



