
Love your article by the way. Not an expert but off the top of my head:

https://docs.datomic.com/operation/architecture.html

(However they cheat with dynamo lol)

There's also some listed here

https://davidgomes.com/separation-of-storage-and-compute-and...



OK, thanks for the reference. Yeah, so indeed separating storage and compute is nothing new. Definitely not claiming I invented that :)

And as you mention, Datomic uses DynamoDB as well (so, not a pure S3 solution). What I'm proposing is to use only object storage for everything: pay the price in latency, but don't give up on throughput, cost, or consistency. The differentiator is that this comes with strict serializability guarantees, so this is not an eventually consistent system (https://jepsen.io/consistency/models/strong-serializable).

No matter how sophisticated the caching is, if you want to retain strict serializability, writes must be confirmed by S3 and reads must be validated against S3 before returning, which puts a lower bound on latency.
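A toy sketch of that round-trip requirement (my own illustration, not from the post): `ObjectStore` is a hypothetical stand-in for S3, and `put_if_match` mimics a conditional PUT in the spirit of S3's conditional-write support. The point is that a strictly serializable commit can only be acknowledged once the store itself confirms no concurrent writer won, so every commit pays at least one store round-trip.

```python
import threading

class ObjectStore:
    """Toy stand-in for S3: versioned objects with conditional puts."""
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}  # key -> (etag, value)

    def get(self, key):
        # A read returns the latest committed version and its etag.
        with self._lock:
            return self._data.get(key, (None, None))

    def put_if_match(self, key, expected_etag, value):
        """Succeeds only if the caller saw the latest version (compare-and-swap)."""
        with self._lock:
            current_etag, _ = self._data.get(key, (None, None))
            if current_etag != expected_etag:
                return False  # a concurrent commit won; caller must retry
            self._data[key] = ((current_etag or 0) + 1, value)
            return True

def commit(store, key, update):
    """Read-modify-write loop: the read validates against the store and the
    write is only acknowledged after the store confirms it applied."""
    while True:
        etag, value = store.get(key)
        if store.put_if_match(key, etag, update(value)):
            return

store = ObjectStore()
commit(store, "counter", lambda v: (v or 0) + 1)
commit(store, "counter", lambda v: (v or 0) + 1)
print(store.get("counter")[1])  # 2
```

However clever the cache in front of this is, the `get` and `put_if_match` against the real store are what keep the history strictly serializable, so their latency is irreducible.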

I focused a lot on throughput, since that's the dimension we can actually optimize.

Hopefully that's clear from the blog, though.



I just saw it! I asked a question (https://news.ycombinator.com/item?id=42180611) and it seems that durability and consistency are implemented at the caching layer.

Basically it's an in-memory database that uses S3 as cold storage. Definitely an interesting approach, but no transactions AFAICT.



