
Being able to serve 1000+ players in a spatial "MMO" game is like the holy grail of netcode programming...

> Here's a demonstration showcasing 1000 cross-server players, this simulation is functionally identical to real player load. The server TPS never dips below 20 (perfect) and I'm running the whole thing on my laptop.

If it can run on one laptop, why does it need horizontal server scaling? :P

You don't really know where the bottle-necks are until you put 1000 actual players on the same "server".



> Being able to serve 1000+ players in a spatial "MMO" game is like the holy grail of netcode programming...

What do you mean by holy grail? Is it not something that's already accomplished by several games/MMOs?

Unless "spatial MMO" means something specific here.


By "spatial" I mean that the players have a position in the game world, like x,y,z coordinates. Most games solve it by dividing the world into shards/instances/zones. The article mentions this and the issues you get. There is no game I'm aware of that can handle 1000+ connected players (1) nearby, like in the "wall of copycats" in the article. Most games have a limit of around 100 players on a beefy server.
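To make the zone-sharding approach concrete, here's a minimal sketch of how a server might route a player to a shard from their coordinates. The zone size, the grid layout, and the handoff behavior are all illustrative assumptions, not taken from any particular game:

```python
# Minimal sketch of zone-based sharding: map a player's world position
# to a zone (shard) cell by floor-dividing the coordinates by the zone size.
# ZONE_SIZE and the square-grid scheme are assumptions for illustration.

ZONE_SIZE = 256  # world units per square zone (assumed)

def zone_for(x: float, z: float) -> tuple[int, int]:
    """Return the (zone_x, zone_z) cell that owns this position."""
    return (int(x // ZONE_SIZE), int(z // ZONE_SIZE))

# Players in the same cell are handled by the same server process;
# crossing a zone border means a handoff to the neighboring server.
print(zone_for(100.0, 300.0))  # → (0, 1)
print(zone_for(-1.0, 0.0))     # → (-1, 0)
```

This also shows exactly why the scheme breaks down for the "wall of copycats" case: if 1000 players crowd into one cell, they all land on the same server process and the sharding buys you nothing.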

It's also about the level of trust you are willing to give the clients: you can, for example, offload all logic to the clients and just have the server broadcast all messages. But then you will have a problem with cheaters who use modified clients.

The same "holy grail" exist in database too, where you want low latency, high throughput/concurrency, and high availability. Where the solution is, just like in "MMO" games, to use "sharding".

1) The Battle of B-R5RB in EVE Online had 2,670 players on the same shard, according to Wikipedia. Their solution to the problem was/is to lower the game physics tick-rate ("time dilation").

https://en.wikipedia.org/wiki/Battle_of_B-R5RB
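The tick-rate-lowering trick can be sketched as a simple feedback rule: when a tick takes longer than its budget, slow game time down proportionally instead of dropping updates. The target TPS, the scaling rule, and the exact numbers here are illustrative assumptions (though EVE's time dilation does clamp at a 10% floor):

```python
# Rough sketch of EVE-style "time dilation": if a tick overruns its budget,
# stretch game time by the same ratio so the simulation stays deterministic,
# just slower. The formula is an assumption for illustration.

TARGET_TPS = 20
BUDGET = 1.0 / TARGET_TPS  # 50 ms per tick

def dilation_factor(measured_tick_seconds: float) -> float:
    """1.0 = real time; 0.1 = game runs at 10% speed under extreme load."""
    factor = BUDGET / measured_tick_seconds
    return max(0.1, min(1.0, factor))  # clamp, 10% floor like EVE's TiDi

print(dilation_factor(0.05))  # tick exactly on budget → 1.0
print(dilation_factor(0.5))   # 10x over budget → 0.1 (floor)
```

The appeal is graceful degradation: every client stays in sync and nothing desyncs or rubber-bands; the whole battle just plays out in slow motion.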


Thanks for your comment!

That demo is primarily meant to demonstrate the efficiency of the message broker and packet code as if there were 1000 players on different MC servers all forwarding their positions through WorldQL. I’ll make it more clear.


I would be a bit cautious about inferring server tick rates from the performance on one machine, unless you are mostly planning to scale to lots of cores rather than lots of servers. Dealing with access latency can otherwise kill many promising attempts at parallelization, even when the contention itself is no big deal. It's likely you are already aware of this and have designed the benchmark to compensate, of course :)


The laptop can run multiple copies of a single-threaded program (and spread those across different cores).



