
Great writeup. The "user_id" one really hit home for me. At Userify (ssh key management for on-prem and cloud) we currently have hotspots where some companies integrate the userify shim and put 'change_me' (or similar) in their API ID fields. Apparently they don't always update it before going to production... so we get lots and lots of attempted logins with 'change_me' as the key. It's not just one company, but dozens.

Fortunately, we cache everything (including failures) with Redis, so the actual cost to us is tiny, but if you are not caching failures as well as successes, this can result in unexpected and really hard-to-track-down cost spikes. (disclaimer: AWS cert SA, AWS partner)

Segment's trick of detecting and recording keys that get throttled, and using those as a template for "bad keys" (which presumably are manually validated as well), seems like a great idea too, but I'd suggest first caching even failed login calls if possible, as that probably would have avoided the need to ever hit Dynamo. See the sketch below.
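A minimal sketch of that negative-caching idea, assuming a redis-py client; the lookup_key_in_dynamo() helper, the key names, and the TTLs are hypothetical and just for illustration:

    import redis

    r = redis.Redis()

    HIT_TTL = 300   # cache valid keys for 5 minutes (illustrative)
    MISS_TTL = 60   # cache failed lookups too, so repeated bad keys never reach DynamoDB

    def lookup_key_in_dynamo(api_id: str):
        """Hypothetical backend lookup; returns the key record string or None."""
        ...  # e.g. a GetItem call against your keys table in a real deployment
        return None

    def get_api_key(api_id: str):
        cached = r.get(f"apikey:{api_id}")
        if cached is not None:
            # b"" marks a cached failure ('change_me' and friends land here)
            return None if cached == b"" else cached.decode()

        record = lookup_key_in_dynamo(api_id)
        if record is None:
            r.setex(f"apikey:{api_id}", MISS_TTL, b"")  # negative cache
            return None

        r.setex(f"apikey:{api_id}", HIT_TTL, record)
        return record

The only real design choice is the sentinel value and a short TTL on misses, so a key that later becomes valid isn't locked out for long.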

PS: the name 'Project Benjamin' for the cost-cutting effort... pure genius.
