This builds on the same intuition I had: data can be easily partitioned across objects. What seems to be missing is transactions across different objects, though?
The flipside is that Cloudflare DO will be a lot faster.
Interesting that all these similar solutions are popping up now.
I think it would be interesting to combine a SQLite per-object approach with transactions on top of different objects.
You're right indeed :) but it depends on what you're comparing it with. In this case the comparison is against other managed cloud storage and databases, and in that context I think the claim holds.
Is it the cheapest possible storage in existence? No: raw disks in a rack would be cheaper, but I also feel that wouldn't be an entirely fair comparison.
B2 is mostly S3-compatible, so if they add the same support for preconditions on writes as S3 and GCS, nothing prevents using it as a backend for GlassDB.
Yeah I think it's very interesting to compare the two. SlateDB expects a single writer and fences writes. This means you can make some serious savings on S3 costs because you're using S3 for consistency but you're batching writes.
GlassDB is much more accessible for smaller-volume workloads, but gets very costly at high volume because of the per-transaction requests to S3. In turn, the consistency model is easier to reason about because the system is entirely stateless.
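To put rough numbers on the cost side, here is a back-of-the-envelope sketch. The per-transaction request counts and the request prices below are illustrative assumptions (typical published S3 list prices), not GlassDB's actual figures:

    # Back-of-the-envelope S3 request cost for a transactional workload.
    # Request counts and prices are assumptions for illustration only.
    PUT_PRICE = 0.005 / 1000    # USD per PUT/COPY/POST request
    GET_PRICE = 0.0004 / 1000   # USD per GET request
    PUTS_PER_TX, GETS_PER_TX = 4, 4  # assumed, not GlassDB's real numbers

    def monthly_request_cost(tx_per_second: float) -> float:
        per_tx = PUTS_PER_TX * PUT_PRICE + GETS_PER_TX * GET_PRICE
        return per_tx * tx_per_second * 86_400 * 30

    print(round(monthly_request_cost(1)))      # ~56 USD/month at 1 tx/s
    print(round(monthly_request_cost(1_000)))  # ~56,000 USD/month at 1,000 tx/s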
I think it's now much easier to achieve than a year ago. The critical piece is conditional writes on new objects, because otherwise you can't safely create transaction logs in the presence of timeouts. That alone is not enough, though.
My approach on S3 would be to make sure the ETag of an object changes whenever other transactions looking at it must be blocked. This makes it easier to use conditional reads (https://docs.aws.amazon.com/AmazonS3/latest/userguide/condit...) on COPY or GET operations.
For writes, I would use a PUT to a temporary staging area and then a conditional COPY + DELETE afterward. This is certainly slower than GCS, but I think it should work.
Locking without modifying the object is the part that needs some optimization, though.
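A rough boto3 sketch of those building blocks on S3 (bucket and key names are placeholders, the conditional create assumes a recent boto3 with S3 conditional writes, and this is not GlassDB's actual code):

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    BUCKET = "example-bucket"  # placeholder

    # 1. Create a transaction log object only if nothing is there yet.
    def create_tx_log(key: str, body: bytes) -> bool:
        try:
            s3.put_object(Bucket=BUCKET, Key=key, Body=body, IfNoneMatch="*")
            return True
        except ClientError as e:
            if e.response["Error"]["Code"] == "PreconditionFailed":
                return False  # another client created the log first
            raise

    # 2. Conditional read: succeeds only if the ETag was not "bumped" by a
    #    conflicting transaction since we last looked at the object.
    def read_if_unchanged(key: str, etag: str) -> bytes | None:
        try:
            return s3.get_object(Bucket=BUCKET, Key=key, IfMatch=etag)["Body"].read()
        except ClientError as e:
            if e.response["Error"]["Code"] == "PreconditionFailed":
                return None  # ETag changed: abort or retry the transaction
            raise

    # 3. Staged write: PUT to a staging key, COPY it over the target, then
    #    DELETE the staging object. Note that CopyObject preconditions
    #    (CopySourceIfMatch etc.) apply to the source object, so conflict
    #    checks have to happen via the ETag bump + conditional reads above.
    def staged_write(key: str, body: bytes) -> None:
        staging = f"staging/{key}"
        s3.put_object(Bucket=BUCKET, Key=staging, Body=body)
        s3.copy_object(
            Bucket=BUCKET,
            Key=key,
            CopySource={"Bucket": BUCKET, "Key": staging},
        )
        s3.delete_object(Bucket=BUCKET, Key=staging)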
Not a full solution, but seeing that the OP aims to be a key-value store (versus a full RDBMS, despite the comparisons with Spanner and Postgres?), it's worth weighing how Rockset (also mainly a KV store) dealt with S3-backed caching at scale:
It's a good observation. I did consider it and decided to keep it out of scope for the base layer.
But this is entirely possible. You can wrap GlassDB transactions and encode multiple keys into the same object at a higher level. Transactions across different objects will still preserve the same isolation.
The current version is meant to be a base from which to build higher-level APIs, somewhat like FoundationDB.
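As an illustration of the encoding idea (deliberately not using GlassDB's actual API), a higher layer could serialize several logical keys into one object value, so a single-object write updates them atomically while cross-object transactions keep isolation between blobs:

    import json

    # Pack several logical keys into a single object value. A higher layer
    # would store this blob under one key, so every logical key inside it
    # is updated atomically by a single-object write.
    def pack(values: dict[str, bytes]) -> bytes:
        return json.dumps({k: v.hex() for k, v in values.items()}).encode()

    def unpack(blob: bytes) -> dict[str, bytes]:
        return {k: bytes.fromhex(v) for k, v in json.loads(blob).items()}

    blob = pack({"user:1:name": b"ada", "user:1:email": b"ada@example.org"})
    assert unpack(blob)["user:1:name"] == b"ada"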
Congrats on reinventing the data lake? This is actually how most of the newer generations of "cloud native" databases work, where they separate compute and storage. The key is that they have a more sophisticated caching layer so that the latency cost of a query can be amortized across requests.
It's my understanding that the newer generation of data lakes still makes use of a tiny, strongly consistent metadata database to keep track of what is where. This is orders of magnitude smaller than what you'd have by putting everything in the same database, but it's still there. This is also the case in newer data streaming platforms (e.g. https://www.warpstream.com/blog/kafka-is-dead-long-live-kafk...).
I'm curious to hear if you have examples of any database using only object storage as a backend, because back when I started, I couldn't find any.
Wow, not sure how I missed this, but I see many similarities. They were also bitten by lack of conditional writes in S3:
> In Databricks service deployments, we use a separate lightweight coordination service to ensure that only one client can add a record with each log ID.
The key difference is that Delta Lake implements MVCC and relies on a total ordering of transaction IDs, which is something I didn't want to do, to avoid forced synchronization points (multiple clients have to fight for IDs). This is certainly a trade-off: in my case you are forced to read the latest version or retry (but then you get strict serializability), while in Delta Lake you can rely on snapshot isolation, which might give you slightly stale but consistent data and minimizes retries on reads.
It also seems that you can't get transactions across different tables? Another interesting tradeoff.
OK, thanks for the reference. Yeah, so indeed separating storage and compute is nothing new. Definitely not claiming I invented that :)
And as you mention, Datomic uses DynamoDB as well (so, not a pure S3 solution). What I'm proposing is to use only object storage for everything, pay the price in latency, but not give up on throughput, cost and consistency. The differentiator is that this comes with strict serializability guarantees, so this is not an eventually consistent system (https://jepsen.io/consistency/models/strong-serializable).
No matter how sophisticated the caching is, if you want to retain strict serializability, writes must be confirmed by S3 and reads must validate against S3 before returning, which puts a lower bound on latency.
I focused a lot on throughput, which is the one we can really optimize.
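As a concrete example of that lower bound, here is a boto3 sketch of a validated cached read: even when the cache is warm and S3 answers 304 Not Modified, you still pay one round trip before returning. Names and error handling are illustrative, not GlassDB's actual code:

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    cache: dict[str, tuple[str, bytes]] = {}  # key -> (etag, body)

    def read_validated(bucket: str, key: str) -> bytes:
        cached = cache.get(key)
        kwargs = {"IfNoneMatch": cached[0]} if cached else {}
        try:
            resp = s3.get_object(Bucket=bucket, Key=key, **kwargs)
            body = resp["Body"].read()
            cache[key] = (resp["ETag"], body)
            return body
        except ClientError as e:
            status = e.response.get("ResponseMetadata", {}).get("HTTPStatusCode")
            if status == 304 and cached:
                # The cached copy is still the latest version, but we still
                # spent one S3 round trip to find that out.
                return cached[1]
            raise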
Nicely detailed here: https://simonwillison.net/2024/Oct/13/zero-latency-sqlite-st... and https://developers.cloudflare.com/durable-objects/best-pract...