Because we are in 'unsafe' territory. And Rust still doesn't have a formally defined memory model. Rust is a little immature for this kind of work. We do have some other services written in Rust, though.
Thanks for the questions! We don't currently have plans to open-source it. For anything else, feel free to reach out at pa@caudena.com - happy to discuss further there. We'd like to keep this thread focused on the technical side rather than product discussions :)
We never targeted weakly-ordered architectures like ARM, only x86, and we never ran on a wide variety of processors. We are not developing the Linux kernel and are not into control dependencies; we just rely on fences and the memory model. There may be some CPU-dependent performance differences, such as NUMA effects or false sharing being noticeable on one processor but not on another. RCU and hazard pointers are nothing new. For the disjoint sets we don't need them; for the forest patches and the tries we do. We use TBB and OpenMP whenever possible and try to keep things simple.
Assuming by undoing you mean splitting the cluster:
A linked list can be split in two in O(1). When it comes to updating the roots for all the removed nodes, there is no easy way out, but luckily:
- This process can be parallelized.
- It could be done just once for multiple clustering changes.
- This is a multi-level disjoint set, and usually not all levels or sub-clusters are affected. Upper-level clustering, which is based on a lower confidence level, can be rebuilt more cheaply.
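To make the O(1) split concrete, here is a minimal sketch of the idea: each cluster keeps its members in an intrusive linked list, and every node holds a direct pointer to its representative. Cutting the list is constant-time; re-pointing the roots of the split-off tail is linear but embarrassingly parallel. All names here are ours, not the actual implementation.

```cpp
// Hypothetical sketch of a linked-list-backed disjoint set with O(1) split.
struct Node {
    Node* next = nullptr;   // intrusive singly linked list of cluster members
    Node* root = nullptr;   // direct pointer to the cluster representative
};

// Split the members starting at `head2` into a new cluster rooted at `head2`.
// `prev` is the node immediately preceding `head2` in the list.
void split(Node* prev, Node* head2) {
    prev->next = nullptr;   // severing the list is O(1)
    // Re-pointing roots is O(k) in the tail size, but each node is
    // independent, so this loop parallelizes trivially (e.g. with an
    // OpenMP/TBB parallel-for over a collected span of nodes).
    for (Node* n = head2; n != nullptr; n = n->next)
        n->root = head2;
}
```

The same direct-root layout is what makes lookups O(1) in the common case, at the cost of the linear (but deferrable and batchable) root update on split.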
If by undoing you mean reverting the changes: we don't use a persistent data structure. When we need historical clustering, we use a patched forest with concurrent hash maps to track the changes, and then either apply them or throw them away.
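A rough sketch of that patched-forest idea, under our own assumptions: tentative parent-link changes live in a side map that overrides the base union-find forest on lookup, so the base stays untouched until the patch is committed or discarded. In production the side map would be a concurrent hash map (e.g. TBB's); a plain `unordered_map` keeps the sketch self-contained.

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Hypothetical sketch: clustering changes are recorded as a patch over the
// base parent array instead of mutating it, then applied or thrown away.
struct PatchedForest {
    std::vector<uint32_t> base;                   // base union-find parent array
    std::unordered_map<uint32_t, uint32_t> patch; // overriding parent links

    uint32_t parent(uint32_t v) const {
        auto it = patch.find(v);
        return it != patch.end() ? it->second : base[v];
    }
    uint32_t find(uint32_t v) const {             // no path compression: reads stay const
        while (parent(v) != v) v = parent(v);
        return v;
    }
    void link(uint32_t a, uint32_t b) { patch[find(a)] = find(b); } // tentative union
    void apply()   { for (auto& [v, p] : patch) base[v] = p; patch.clear(); }
    void discard() { patch.clear(); }
};
```

Keeping reads free of compression is what lets many readers share the base forest while one writer builds up a patch.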
We use a single instance for all clients, but when one CFD server processes new block data, it becomes fully blocked for read access. To solve this, we built a smart load balancer that redirects user requests to a secondary CFD server. This ensures there are always at least two servers running, and more if we need additional throughput.
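The routing rule itself is simple; a toy sketch of the idea (our naming, not the actual balancer): each server advertises whether it is currently ingesting a block, and reads go to any server that isn't. With ingestion serialized across at least two servers, a free one always exists.

```cpp
#include <array>
#include <atomic>
#include <cstddef>

// Hypothetical sketch of the read-redirect rule described above.
struct Server {
    std::atomic<bool> ingesting{false}; // set while processing new block data
};

// Pick any server not currently blocked by block ingestion.
template <std::size_t N>
Server* pick_read_server(std::array<Server, N>& pool) {
    for (auto& s : pool)
        if (!s.ingesting.load(std::memory_order_acquire))
            return &s;
    return nullptr; // all busy: caller queues or retries
}
```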
First of all, at Caudena, we are not involved in crypto projects or investments ourselves. Our expertise lies in analyzing blockchains and providing deep technical insights into how various blockchains operate. We focus on tracking and understanding the flow of digital assets, often in support of government agencies investigating scams, fraud, and other illicit activities.
That said, we absolutely believe that blockchain and cryptocurrency will shape the future of the financial system. When you look beyond the noise of scam tokens, speculative NFTs, and high-profile scandals, there is significant and meaningful financial innovation happening. This extends beyond DeFi to include the tokenization of real-world assets (RWA), where major institutions like BlackRock and JPM Chase are actively exploring and implementing blockchain-based solutions. Numerous projects are driving real progress, and there’s a slow but steady movement toward a more decentralized and transparent financial ecosystem.
It’d be really great to be able to price predictions for various RWA at a fine grain: “Will IBM be >$240 next Friday at market close?” “Will George Alleline default on his mortgage for 46 Smiley Court?”
But manually creating and marketing these nanosecurities has too much overhead. Traditionally you’d bundle up a bunch of small stuff into a derivative, but that creates a complex instrument that’s hard to price—and then you get a Global Financial Crisis.
Blockchains offer the potential to make these kinds of nano-trades self-executing at scale.