Well, with modern NVMe and SSD, "where on the disk is my swap file" matters less and less. Even at my workplace, any VM needing swap has its OS disk put on NVMe/SSD, simply because having the user think about this for even a second isn't worth the time. On NVMe/SSD the placement simply doesn't matter; there's no seek penalty, so access is no longer effectively linear the way it is on spinning disks.
But then the question becomes "do I want to put swap on this drive at all?" I don't know the endurance of modern NVMe drives, but if a drive can take 1 PB of writes before wearing out, sustained swapping at 100 MB/s will wear it out in under four months.
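The arithmetic checks out as a back-of-envelope estimate (the 1 PB endurance and 100 MB/s figures are the hypothetical numbers from above, not a specific drive's spec):

```python
# Back-of-envelope drive-wear estimate using the hypothetical figures
# above: 1 PB write endurance, 100 MB/s sustained swap writes.
PB = 10**15            # bytes
MB = 10**6             # bytes

endurance_bytes = 1 * PB
write_rate = 100 * MB  # bytes per second

seconds_to_wear_out = endurance_bytes / write_rate
days = seconds_to_wear_out / 86_400
months = days / 30.44  # average month length

print(f"{days:.0f} days, roughly {months:.1f} months")  # 116 days, roughly 3.8 months
```

So "less than 4 months" holds, and that's assuming the drive even hits its rated endurance.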
Probably not an issue for a desktop since no one would want to use it under heavy swap all the time, but for a server no one pays much attention to... maybe.
I don't create swap space on servers anymore. If I run out of RAM, I'm likely dealing with something that's out of control and I'm going to run out of swap too; it just delays the inevitable.
A small (512 MB) swap partition gives you enough runway to warn on 25% use, alert on 50% use, and address some problems without the fun of abrupt shutdowns when allocations fail (or the OOM killer shows up). Monitoring for high swap I/O makes some sense, but 512 MB fills up fast, so chances are it'll fill up before anyone can respond to an alert in that case.
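A check against those 25%/50% thresholds is straightforward to script from /proc/meminfo (a sketch; SwapTotal and SwapFree are the real kernel field names, the thresholds are the ones suggested above):

```python
# Sketch: swap-usage check against the 25% warn / 50% alert thresholds.
# Parses /proc/meminfo-style text (values are in kB).
import os

def swap_used_percent(meminfo_text: str) -> float:
    """Return swap usage as a percentage, or 0.0 if there is no swap."""
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        parts = rest.split()
        if parts:
            fields[key] = int(parts[0])
    total = fields.get("SwapTotal", 0)
    free = fields.get("SwapFree", 0)
    if total == 0:
        return 0.0
    return 100.0 * (total - free) / total

if __name__ == "__main__" and os.path.exists("/proc/meminfo"):
    with open("/proc/meminfo") as f:
        pct = swap_used_percent(f.read())
    level = "ALERT" if pct >= 50 else "WARN" if pct >= 25 else "ok"
    print(f"swap use: {pct:.1f}% ({level})")
```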
At least in my experience, it's pretty hard to actually gauge memory use, but swap use makes a reasonable gauge most of the time. There are certainly many use cases where the swap use ends up not being a useful gauge though.
All of the servers I manage now are cloud servers, and swapping to attached storage is slow. I don't really want random processes killed by the OOM killer, leaving the server in an unknown state... so I set the servers to panic on OOM.
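For anyone wanting to do the same: the knob for this is the `vm.panic_on_oom` sysctl, usually paired with `kernel.panic` so the machine reboots into a known state instead of sitting at a panic screen. A minimal config sketch:

```shell
# /etc/sysctl.d/90-oom.conf -- panic instead of letting the OOM killer
# pick off random processes, then reboot into a known state.
vm.panic_on_oom = 1   # panic on system-wide OOM
kernel.panic = 10     # reboot 10 seconds after a panic

# Apply without rebooting:
#   sysctl --system
```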
As someone who has servers with swap on NVMe: it barely matters. Sustained swap thrashing is a bad scenario no matter where the swap lives, and it'll tank performance regardless. Get more RAM. Swap I/O should never have a sustained background level; ideally it only spikes every few minutes and stays low to zero otherwise.
Swap on SSD or NVMe is still miles better than on an HDD; you can notice the difference when swap is actually being used.
But that assumes someone notices the swapping -- when I was new at a former job, I asked why the drive activity light on the server marked "finance" was always on. The answer was "Who knows!? That's some special software finance uses; when it gets slow they tell us and we reboot it". It had been like that for more than a year.
Turns out that the app grew huge over time and the machine would swap like crazy and would eventually slow to a crawl. The machine was already maxed out on RAM, so we added a service to restart the app twice a week. Finance said it took hours off their month-end work, they thought the app was just slow.
You can monitor swap usage: in htop you can turn on the SWAP, PERCENT_SWAP_DELAY and M_SWAP columns, which tell you exactly how much of a process sits in swap, how large that is, and the delay the process experiences because of swapping.
You can also monitor swapping activity with iotop. If need be, the same data can be pulled into third-party tools; the interfaces are exposed by the kernel, after all.
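As an example of building on those kernel interfaces: per-process swap usage is exposed as the VmSwap field in /proc/&lt;pid&gt;/status, the same data htop's swap columns read. A rough sketch of a "top swappers" report:

```python
# Sketch: per-process swap usage straight from procfs, the same
# interface htop's swap columns read. VmSwap is a real kernel field.
import os

def vmswap_kb(status_text: str) -> int:
    """Extract the VmSwap field (in kB) from /proc/<pid>/status text."""
    for line in status_text.splitlines():
        if line.startswith("VmSwap:"):
            return int(line.split()[1])
    return 0  # kernel threads and some processes omit VmSwap

def top_swappers(n=5):
    """Return the n processes using the most swap, as (kB, pid) pairs."""
    usage = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/status") as f:
                usage.append((vmswap_kb(f.read()), int(pid)))
        except (FileNotFoundError, PermissionError, ProcessLookupError):
            pass  # process exited between listdir and open
    return sorted(usage, reverse=True)[:n]
```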
Oh, and you can use the kernel's modern PSI (pressure stall information) monitoring to measure how much pressure a subsystem is under, so you can restart services well before you'd even notice the swapping in other tools.
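Memory PSI lives in /proc/pressure/memory; the avg10/avg60/avg300 fields are the percentage of time tasks were stalled waiting on memory over those windows. A minimal parser (the 10% threshold is an arbitrary example, pick your own):

```python
# Sketch: parse the kernel's PSI memory-pressure file, which looks like:
#   some avg10=0.00 avg60=0.00 avg300=0.00 total=0
#   full avg10=0.00 avg60=0.00 avg300=0.00 total=0

def parse_psi(text: str) -> dict:
    """Parse PSI text into {'some': {'avg10': ...}, 'full': {...}}."""
    out = {}
    for line in text.splitlines():
        kind, *pairs = line.split()
        out[kind] = {k: float(v) for k, v in (p.split("=") for p in pairs)}
    return out

def memory_pressure_high(text: str, threshold=10.0) -> bool:
    """True if more than `threshold`% of the last 10s was stalled on memory."""
    return parse_psi(text)["some"]["avg10"] > threshold

# Usage on a real box:
#   with open("/proc/pressure/memory") as f:
#       if memory_pressure_high(f.read()):
#           ...restart the offending service...
```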
People should, if they're interested in performance. I log about 1 TB/day of metrics data for the applications I'm responsible for and have developed a mantra for it: it's better to have a shitton of metrics and not need them than to have no metrics and have to set them up when everything is already broken.