MediaWiki is actually pretty easy to set up on a web server, speaking as someone who's now done it twice. You plop the files into htdocs, make sure PHP is set up, set up vanity URLs if you want to, and then… well, that's it. The final step is to visit the site, fill in the setup form, download the settings file it gives you, and upload it. It doesn't even need an external database (it can use SQLite), and if email setup is annoying, it doesn't need that either. And it's the most powerful and flexible wiki software out there: if there's something you want a wiki to do, MediaWiki can do it, yet it isn't too bloated out of the box, so you can install extensions as and when you need them. Thoroughly recommend it.
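For what it's worth, the SQLite and no-email setup described above comes down to a few lines in the LocalSettings.php the installer generates. This is a sketch, not the full file, and the paths are illustrative:

```php
# LocalSettings.php (fragment) — illustrative values, not a complete config.

# Use an embedded SQLite database instead of an external database server.
$wgDBtype = "sqlite";
$wgSQLiteDataDir = "/var/www/htdocs/wiki/data";  # hypothetical path; must be writable by PHP

# Skip email entirely if setting it up is annoying.
$wgEnableEmail = false;
$wgEnableUserEmail = false;

# "Vanity" short URLs like /wiki/Page_name
# (also needs a matching rewrite rule in the web server config).
$wgArticlePath = "/wiki/$1";
```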
Making MediaWiki survive non-trivial amounts of traffic is much harder than simply setting it up. It's not an impossible task, for sure, but there's no one-click performance setting.
Specifically, you have to handle:

- managing edge and object caches (and caching for anonymous viewers vs. logged-in editors, with separate frontend and backend caches) while mitigating the effects of cache misses;
- minimizing the impact of the job queue when many pages are changed at once;
- optimizing image storage, thumbnailing, and caching;
- figuring out when to use a wikitext template vs. a Scribunto/Lua module vs. a MediaWiki extension in PHP (and, if Scribunto, which Lua runtime to use);
- figuring out which structured data backend to use and how to tune it;
- figuring out whether to rely on API bots (expensive on the backend) vs. cache scrapers (expensive on the frontend) vs. database dump bots (no cost to the live site, but already outdated before they're finished dumping) for automated content maintenance jobs;
- tuning rate limiting;
- and load balancing it all.
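To make the caching split concrete, here's roughly what the relevant LocalSettings.php knobs look like. This is a sketch under assumptions (APCu available, a reverse proxy in front), not a tuned production config:

```php
# LocalSettings.php caching fragment — a sketch, not a tuned production setup.

# Backend object cache (parser cache, sessions, etc.). CACHE_ACCEL uses a
# local PHP accelerator like APCu; a real multi-server site would point
# these at a shared memcached/Redis tier instead.
$wgMainCacheType    = CACHE_ACCEL;
$wgParserCacheType  = CACHE_ACCEL;
$wgMessageCacheType = CACHE_ACCEL;

# Serve fully rendered pages to anonymous viewers from a static file cache;
# logged-in editors bypass this and hit the parser/object caches instead.
$wgUseFileCache = true;
$wgFileCacheDirectory = "$IP/cache";  # $IP is MediaWiki's install path

# If a CDN or reverse proxy (e.g. Varnish) fronts the wiki, tell MediaWiki
# so it can send purges on edits instead of letting stale pages linger.
$wgUseCdn = true;
$wgCdnServers = [ '127.0.0.1' ];  # illustrative proxy address

# Don't run queued jobs inline on web requests; run
# maintenance/runJobs.php out of band (e.g. from cron) instead.
$wgJobRunRate = 0;
```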
At especially large scales, it means spinning the API and the job queue off into microservices altogether, and insulating the live site from the performance impact of logging this whole rat's nest.
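As a hedged sketch of what spinning the job queue off can look like: MediaWiki can back its job queue with Redis so dedicated worker hosts consume it instead of the web servers. Hostnames and tuning values below are hypothetical:

```php
# LocalSettings.php fragment — moving job execution off the web servers.
# Illustrative values; check the MediaWiki manual before copying.

# Never execute queued jobs during web requests.
$wgJobRunRate = 0;

# Back the default job queue with Redis so separate runner boxes can
# consume it (hypothetical host; requires the Redis PHP extension).
$wgJobTypeConf['default'] = [
    'class'       => 'JobQueueRedis',
    'redisServer' => 'jobqueue.internal:6379',
    'redisConfig' => [],
    'claimTTL'    => 3600,  # seconds before an abandoned job is retried
];

# Then, on a dedicated worker host, run the queue from cron, e.g.:
#   * * * * * php /srv/mediawiki/maintenance/runJobs.php --maxtime=55
```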
While that's not wrong, the wiki loop of frequent, unpredictably cascading content updates (with or without auth, and sometimes with a parallel structured data component), plus cache and job queue maintenance, image storage, and database updates and maintenance, becomes a significant burden much faster than it would with a typical CMS.