In principle you could likely do it. Either call SetProcessAffinityMask on every process on the system to keep them off your dedicated cores, and hook process creation so it also gets applied to every newly started process. Or simply start your threads at the highest possible priority, with affinity for one core, and never use blocking APIs. In the Windows scheduler a higher-priority thread always preempts lower-priority threads, so if you set your priority sufficiently high and never tell the kernel that you are waiting on something, you shouldn't get interrupted.
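For concreteness, here's a minimal sketch of that second approach in C, assuming the Win32 calls behave as documented. The core index is a placeholder, and REALTIME_PRIORITY_CLASS generally requires elevated privileges, so treat this as a sketch rather than a recipe:

```c
// Sketch: pin the current thread to one core and raise it to the highest
// scheduling priority. PINNED_CORE is a placeholder; untested.
#include <windows.h>
#include <stdio.h>

#define PINNED_CORE 3  // assumption: core 3 is the one you reserved

int main(void)
{
    // Move the whole process into the realtime priority class first
    // (usually requires administrator rights / SeIncreaseBasePriorityPrivilege).
    if (!SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS))
        fprintf(stderr, "SetPriorityClass failed: %lu\n", GetLastError());

    // Pin the current thread to a single core...
    if (!SetThreadAffinityMask(GetCurrentThread(), 1ULL << PINNED_CORE))
        fprintf(stderr, "SetThreadAffinityMask failed: %lu\n", GetLastError());

    // ...and give it the highest thread priority within that class.
    if (!SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL))
        fprintf(stderr, "SetThreadPriority failed: %lu\n", GetLastError());

    // From here on, spin instead of calling blocking APIs, so the kernel
    // never sees this thread as waiting on anything.
    for (;;) { /* hot loop */ }
}
```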
In practice, we've experienced thread interruptions on Windows ranging from 20 microseconds to 100 milliseconds. The resources we've found have been inadequate to answer our questions. We've seen no such problems on Linux. Windows scheduler experts appear to be rare. We wonder whether Windows is simply not amenable to certain types of high-performance applications.
Probably, but you will have to rip out all the io_uring-specific bits and replace them with their Windows equivalents. Probably something with I/O completion ports.
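A minimal sketch of the IOCP shape that would replace an io_uring event loop, in C. Assumes the handle was opened with FILE_FLAG_OVERLAPPED; error handling and buffer management are omitted:

```c
// Create a completion port, associate a handle, submit one overlapped read,
// then pull completions off the port in a loop.
#include <windows.h>

void run_loop(HANDLE file)
{
    // Passing NULL as the existing port creates a new one and associates
    // `file` with it; the completion key identifies this handle later.
    HANDLE port = CreateIoCompletionPort(file, NULL, /*CompletionKey=*/1, 0);

    // Submit an overlapped read; it typically returns FALSE with
    // ERROR_IO_PENDING and the completion is posted to the port.
    static char buf[4096];
    OVERLAPPED ov = {0};
    ReadFile(file, buf, sizeof buf, NULL, &ov);

    for (;;) {
        DWORD bytes;
        ULONG_PTR key;
        OVERLAPPED *done;
        // Blocks until any operation on an associated handle completes.
        if (GetQueuedCompletionStatus(port, &bytes, &key, &done, INFINITE)) {
            // `done` points at the OVERLAPPED we submitted; dispatch on it.
        }
    }
}
```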
If you are interested, look into the new Windows I/O scheduler that was implemented for GHC 9.0 (in Haskell, obv). There is also some preliminary work to integrate io_uring into the Haskell runtime where it's available, but the Haskell runtime's philosophy is rather different from the Rust approach.
>The concurrency value of a completion port is specified when it is created with CreateIoCompletionPort via the NumberOfConcurrentThreads parameter. This value limits the number of runnable threads associated with the completion port.
>[...]
>The best overall maximum value to pick for the concurrency value is the number of CPUs on the computer.
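Following that guidance literally, a port sized to the machine might be created like this (a sketch; `make_port` is a hypothetical helper, and GetSystemInfo reports logical processors in the current processor group, which is usually what you want here):

```c
// Create a bare completion port whose concurrency value is capped at the
// number of processors reported by the system.
#include <windows.h>

HANDLE make_port(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    // No file handle yet: INVALID_HANDLE_VALUE creates a standalone port;
    // handles are associated later via further CreateIoCompletionPort calls.
    return CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0,
                                  si.dwNumberOfProcessors);
}
```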