
Is this fully hosted, or does it run in my cloud? Is it running https://github.com/ClickHouse/ClickHouse or a closed-source ClickHouse Cloud variant with added features?


It is a fully hosted offering and is based on ClickHouse core v22.10


Curious how ClickHouse is ultimately hosting this: on a public cloud, or on infrastructure of their own?


The billing page says AWS for storage, so I'd assume the same for compute.

If they're really doing elastic reads as suggested in another comment, I don't think this can be the standard CH server (at least for reads) - or I've missed something very exciting in recent versions.


It feels like there's got to be something proprietary beyond just using an S3 disk setup, but maybe not. Is simply setting data_cache_enabled and pointing cache_path at a local NVMe SSD sufficient to achieve similar speeds?

* https://clickhouse.com/docs/en/guides/sre/configuring-s3-for...

* https://clickhouse.com/docs/en/integrations/s3/s3-merge-tree...
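For concreteness, here's a rough sketch of the kind of S3-backed disk config those docs describe, using the data_cache_enabled/cache_path settings mentioned above (bucket endpoint, credentials, and paths are placeholders; exact setting names vary between ClickHouse releases, so check the docs for your version):

```xml
<clickhouse>
  <storage_configuration>
    <disks>
      <s3_disk>
        <type>s3</type>
        <!-- placeholder bucket and credentials -->
        <endpoint>https://my-bucket.s3.amazonaws.com/clickhouse/</endpoint>
        <access_key_id>YOUR_KEY</access_key_id>
        <secret_access_key>YOUR_SECRET</secret_access_key>
        <!-- local NVMe cache in front of S3, as discussed above -->
        <data_cache_enabled>true</data_cache_enabled>
        <cache_path>/mnt/nvme/clickhouse_s3_cache/</cache_path>
      </s3_disk>
    </disks>
    <policies>
      <s3_main>
        <volumes>
          <main>
            <disk>s3_disk</disk>
          </main>
        </volumes>
      </s3_main>
    </policies>
  </storage_configuration>
</clickhouse>
```

A MergeTree table would then opt in with `SETTINGS storage_policy = 's3_main'`.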


For non-elastic writes this might be sufficient (I have no idea how AWS S3 performs for this kind of use case), but supposedly[0] there's some elastic read capacity. As far as I know that's not possible with standard CH at all, and that may be where some of the secret sauce is.

I can vaguely imagine how you'd build such a thing: you'd spin up new instances that know your write-primary's sharding/partitioning scheme. Heck, you could even approximate it with slightly augmented clickhouse-clients doing the S3 pulls directly, with no real additional "server" - but as far as I know there's no way to do this out of the box.
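The crudest version of that client-side approximation could lean on ClickHouse's existing s3 table function, which lets any instance (or clickhouse-local) scan shared object storage without owning the data - a sketch, with a placeholder bucket/path and format:

```sql
-- Hypothetical stateless "reader": query the shared S3 data directly
-- via the s3 table function instead of a dedicated replica.
-- Bucket URL, glob, and Parquet format here are illustrative only.
SELECT toDate(event_time) AS day, count() AS events
FROM s3('https://my-bucket.s3.amazonaws.com/events/*.parquet', 'Parquet')
GROUP BY day
ORDER BY day;
```

This gives you read scale-out of a sort, but without the primary's partitioning metadata, marks, or merges - which is presumably the gap any real elastic-read implementation would need to close.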

[0] https://news.ycombinator.com/item?id=33081658



