Last week I needed to compress some large CSV files (70+ MB) and benchmarked a few compression algorithms for speed and ratio. I was surprised that bzip2 was, in this isolated case, far better than Zstandard: Zstandard needed much more time to reach the same compression ratio (15 s vs 3 s for bzip2) and, in its default configuration, produced noticeably larger files (6.2 MB vs 5 MB).
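This kind of comparison is easy to reproduce with Python's standard library. A minimal sketch below times bz2 and lzma on synthetic CSV-like data (Zstandard itself is not in the stdlib; it would need the third-party `zstandard` package, so it is omitted here, and the generated rows are a hypothetical stand-in for a real file):

```python
import bz2
import lzma
import time

# Synthetic CSV-like payload; swap in the bytes of a real file to
# benchmark your own data.
rows = "".join(f"user_{i:06d},2024-01-{i % 28 + 1:02d},{(i * 37) % 1000}\n"
               for i in range(50_000))
data = rows.encode()

for name, compress in [("bz2", bz2.compress), ("lzma", lzma.compress)]:
    start = time.perf_counter()
    out = compress(data)
    elapsed = time.perf_counter() - start
    print(f"{name}: {len(data)} -> {len(out)} bytes "
          f"(ratio {len(data) / len(out):.1f}x) in {elapsed:.2f}s")
```

The absolute numbers depend entirely on the machine and the data; the point is only to measure both time and ratio together, since either one alone is meaningless.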
So Zstandard is not the compression algorithm to end them all and definitely has its weak spots.
bzip2 is BWT-based, while Zstandard is based on Lempel-Ziv. They are fundamentally different algorithms, and each has its sweet spot in terms of input data: BWT sorts, so it benefits from sorting-friendly data, while LZ benefits from long repeats.
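That difference can be illustrated with a small stdlib sketch: bz2 (BWT-based) against zlib (LZ77-based, limited to a 32 KB window) on column-structured, sorting-friendly text. The data is synthetic and the exact sizes will vary, but on this kind of input the BWT side usually wins:

```python
import bz2
import zlib

# Column-structured text: rotations of this data sort into long
# uniform runs, which is exactly the structure BWT exploits. The
# repeats also lie far apart, beyond zlib's 32 KB match window.
data = "".join(f"sensor_{i % 50:02d},2024-01-01,{i * 3 % 97:02d}\n"
               for i in range(40_000)).encode()

bwt_size = len(bz2.compress(data, 9))
lz_size = len(zlib.compress(data, 9))
print(f"bz2 (BWT): {bwt_size} bytes, zlib (LZ77): {lz_size} bytes")
```

Note this compares bz2 to zlib rather than to Zstandard (which is not in the stdlib); Zstandard's much larger window would close some, but not necessarily all, of the gap on data like this.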
The problem with PPM is that it's slow. Zstd targets a particular general-purpose use case; there are other compressors targeting different time-space trade-offs if you need extreme speed or extreme compression efficiency.
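The time-space trade-off is visible even within a single compressor. A stdlib sketch with lzma's presets (the seeded word soup below is a hypothetical stand-in for real input):

```python
import lzma
import random
import time

# Seeded word soup: compressible, but with enough variety that the
# match finder's effort level actually matters.
rng = random.Random(42)
words = [f"w{n:03d}" for n in range(500)]
data = " ".join(rng.choice(words) for _ in range(200_000)).encode()

for preset in (0, 9):
    start = time.perf_counter()
    out = lzma.compress(data, preset=preset)
    elapsed = time.perf_counter() - start
    print(f"preset {preset}: {len(out)} bytes in {elapsed:.2f}s")
```

Preset 9 spends considerably more time searching for matches than preset 0 and, on data like this, produces a smaller output in exchange; that is the same dial every compressor exposes, just within one codec.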
The conclusion is not radical but quite banal: for some data, zstd (and lzma) lose to methods better suited to that kind of data. For text, for example, use PPMd.
Using bzip2 is not a great idea, because it's old and you can always beat it with newer methods. But bzip2 is widely installed, which makes it somewhat useful.
The OP also shows zstd coming very close to lzma on high-compression-ratio jobs. In my experience, at higher compression ratios lzma handily beats zstd (in the Pareto-optimal sense of the word), but lzma is slow to decompress. If decompression speed matters a lot, zstd is awesome.
If you need to compress something like game assets (compression speed doesn't matter, compression ratio matters, and decompression speed matters a lot) and can afford to use proprietary code, then check out Oodle [1]. It beats zstd.