I think it's now much easier to achieve than a year ago. The critical feature is conditional writes on new objects (If-None-Match: "*"), because otherwise you can't safely create transaction logs in the presence of timeouts. That alone is not enough though.
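Roughly, in boto3 terms (just a sketch; the bucket/key names are made up, and I'm treating both the 412 PreconditionFailed and the 409 ConditionalRequestConflict responses as "someone else got there first"):

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    def try_create_log_entry(bucket, key, body):
        # If-None-Match: "*" makes the PUT succeed only when no object
        # exists at `key` yet, so a retry after a timeout can't silently
        # overwrite a log entry another writer already committed.
        try:
            s3.put_object(Bucket=bucket, Key=key, Body=body, IfNoneMatch="*")
            return True
        except ClientError as e:
            if e.response["Error"]["Code"] in ("PreconditionFailed",
                                               "ConditionalRequestConflict"):
                return False  # lost the race; read the existing entry instead
            raise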
My approach on S3 would be to ensure that the ETag of an object changes whenever other transactions looking at it must be blocked. This makes it easier to use conditional reads (https://docs.aws.amazon.com/AmazonS3/latest/userguide/condit...) on COPY or GET operations.
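The read side could look something like this (again a sketch; it assumes the caller kept the ETag from its last read):

    def get_if_unchanged(bucket, key, expected_etag):
        # IfMatch turns the GET into a validation: S3 returns 412 instead
        # of the body if another transaction has bumped the ETag since.
        try:
            resp = s3.get_object(Bucket=bucket, Key=key, IfMatch=expected_etag)
            return resp["Body"].read()
        except ClientError as e:
            if e.response["Error"]["Code"] == "PreconditionFailed":
                return None  # object changed under us; re-read and retry
            raise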
For writes, I would PUT to a temporary staging area and then do a conditional COPY + DELETE afterward. This is certainly slower than on GCS, but I think it should work.
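The commit step might look like this (sketch only; note that CopyObject only takes preconditions on the source, via CopySourceIfMatch and friends, so the guard here is that the staged object is still the one we just wrote):

    def commit_write(bucket, staging_key, final_key, body):
        staged = s3.put_object(Bucket=bucket, Key=staging_key, Body=body)
        # Conditional read of the source during COPY: fails with 412 if
        # the staging object was replaced between our PUT and promotion.
        s3.copy_object(
            Bucket=bucket,
            Key=final_key,
            CopySource={"Bucket": bucket, "Key": staging_key},
            CopySourceIfMatch=staged["ETag"],
        )
        s3.delete_object(Bucket=bucket, Key=staging_key)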
Locking without modifying the object is the part that needs some optimization though.
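One workaround I can think of (purely a sketch, and it has the obvious flaw that a crashed holder leaves the lock behind, which is exactly the timeout problem again) is a sidecar lock object created with the same If-None-Match trick:

    import time

    def with_lock(bucket, key, critical_section, retries=10):
        lock_key = key + ".lock"  # hypothetical naming convention
        for _ in range(retries):
            try:
                # Creating the sidecar blocks other writers without ever
                # touching the data object's ETag.
                s3.put_object(Bucket=bucket, Key=lock_key, Body=b"",
                              IfNoneMatch="*")
                break
            except ClientError as e:
                if e.response["Error"]["Code"] not in ("PreconditionFailed",
                                                       "ConditionalRequestConflict"):
                    raise
                time.sleep(1)  # crude backoff; no fencing/TTL story here
        else:
            raise TimeoutError("could not acquire " + lock_key)
        try:
            return critical_section()
        finally:
            s3.delete_object(Bucket=bucket, Key=lock_key)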
Not a full solution, but since the OP aims to be a key-value store (versus a full RDBMS? despite the comparisons with Spanner and Postgres?), it's worth weighing how Rockset (also mainly a KV store) dealt with S3-backed caching at scale: