USE CASE: MULTIPLAYER

In broad strokes, multiplayer is a simple concept. Users should be able to send data to and receive data from other users. Applying this to the client’s area of expertise was pretty simple.

People cycling indoors, doing a virtual cycle tour, should be able to see others on the same road as them. On a GPS map, they should see other people cycling around them too, not necessarily on the same road as them.

The difficult part is, of course, getting the data from one user to another. It also has to happen very fast, so that everyone sees everyone else at their actual position. And on top of that, everything should keep working for thousands of users doing the same thing at the same time.

1. Requirements

There are some limitations to take into account. For one, it’s impossible to send data instantly across physical distances. Also, making something that scales infinitely takes a huge amount of time. And finally, sending large amounts of data fast requires a beefy internet connection.

Since we wanted to keep the multiplayer as accessible as possible, we had to decide what margins were acceptable. We didn’t have the time or the resources to build the best multiplayer in the world. We could, however, build one that can meet the customer’s immediate and future needs.

To do this, we needed to put down the requirements that the multiplayer system had to meet:

  • The apps should send and receive 2 updates per second
  • The system should be able to support 10,000 users at the same time, with the ability to grow beyond that in the future
  • The cost per user should be below € x per month

2. The search for a good networking technology

A multiplayer system stands or falls with its networking technology. We aren’t too proud to admit that actual network engineers know a lot more about these things than we do, so instead of trying to create our own networking technology to get data from A to B, we decided to see whether there were any products available that did exactly that as their main focus.

We made an entire list of possible technologies and noted a few key features of each (if available). Then we suggested a few strong candidates to take a closer look at. In consultation with the client, we decided on two very strong contenders. On paper they were both equally good, but still, we had to choose one. We decided to give them both a trial by fire.

uLink/UnityPark
Photon
SmartFoxServer
AppWarp
Lidgren
KBEngine
Forge
RakNet
Nakama
Colyseus
Badumna
ElectroServer
RedDwarf
NetDog
PikkoServer
Player.IO
SlimNet
Union
Orbit
Gamooga
Azure Gaming – Playfab

3. Proof of concept

An app was made that would show dots moving on a GPS map, over a pre-set route. The data for these dots would be sent over the network and come back to actually move them. Multiple apps could be run on different PCs, and they should all see the same dots in the same locations. Furthermore, every app had to be able to add more dots, which would in turn be visible in the other apps. This app then got two versions, one for each networking technology, so we could see which one performed better in a real-life scenario.
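To make the comparison concrete, the core loop of such a test app looks something like the sketch below. This is a hypothetical reconstruction using a plain WebSocket; the actual contenders each came with their own SDK, and all names here (the endpoint URL, moveDotOnMap) are invented.

```typescript
// Hypothetical sketch of the proof-of-concept loop: send our own dot's
// position twice per second, and move every dot we hear about.
interface DotUpdate {
  dotId: string;
  lat: number;
  lon: number;
}

declare function moveDotOnMap(id: string, lat: number, lon: number): void; // invented map call

const socket = new WebSocket("wss://poc.example.com"); // placeholder endpoint

function startSending(dotId: string, nextPosition: () => { lat: number; lon: number }): void {
  setInterval(() => {
    const { lat, lon } = nextPosition();
    const update: DotUpdate = { dotId, lat, lon };
    socket.send(JSON.stringify(update)); // 2 updates per second, per the requirements
  }, 500);
}

// Every app renders all updates it receives, so all PCs converge
// on the same dots in the same locations.
socket.onmessage = (event) => {
  const update: DotUpdate = JSON.parse(event.data as string);
  moveDotOnMap(update.dotId, update.lat, update.lon);
};
```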

4. Server architecture

The client agreed when we suggested running this multiplayer solution in the cloud. This came with some juicy advantages, like better scalability and the ability for users to connect to a data center close to them.

A ‘room’ is a collection of users that can see each other. Users should only get updates of other users they can see. This keeps bandwidth to a minimum. A game server can host one or more of such rooms. When more capacity is needed, we can just add a new game server to handle the increased load.
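As an illustration, the bookkeeping on a game server could look roughly like this (a simplified sketch, not the actual server code; all names are ours):

```typescript
// Simplified sketch: a game server hosts rooms, and updates only
// fan out to users inside the same room.
interface User {
  id: string;
  send(message: string): void;
}

class Room {
  private users = new Map<string, User>();

  constructor(public readonly routeId: string) {}

  join(user: User): void {
    this.users.set(user.id, user);
  }

  // Only the users in this room receive the update,
  // which keeps bandwidth to a minimum.
  broadcast(message: string, senderId: string): void {
    for (const user of this.users.values()) {
      if (user.id !== senderId) user.send(message);
    }
  }
}

class GameServer {
  // When more capacity is needed, another GameServer instance is
  // added; each one hosts its own set of rooms.
  readonly rooms = new Map<string, Room>();
}
```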

However, two problems remained. Depending on the timing, users on the same road could be put in different rooms and thus not see each other. This stemmed from the fact that the app would first ask the server whether a room was available. If there was none, it had to be created and joined. If there was one, the app could just join it.

This meant two clients could ask for a certain room at the same time, before either of them had created one. The server would tell them both that no such room existed, and they would each create one. Both clients would thus create a room for the same route, and as a result they wouldn’t be able to see each other.

If we let a single point decide when to create and when to join rooms, the decision would always be consistent: that one point knows which rooms it has already created, so the concurrency issue disappears.

Enter the matchmaking server…
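In essence, the matchmaking server serializes the join-or-create decision. A minimal sketch of that idea (the names and the single in-memory map are our own simplification):

```typescript
// Hypothetical sketch: because one matchmaker makes all join-or-create
// decisions sequentially, two simultaneous requests for the same route
// can never both end up creating a room.
interface RoomAssignment {
  roomId: string;
  gameServerHost: string;
}

declare function pickGameServer(): string; // invented load-balancing step

const roomsByRoute = new Map<string, RoomAssignment>();
let nextRoomNumber = 0;

function assignRoom(routeId: string): RoomAssignment {
  const existing = roomsByRoute.get(routeId);
  if (existing) {
    return existing; // a room for this route already exists: join it
  }
  // No room yet: create one and remember it, so every later
  // request for this route joins the same room.
  const created: RoomAssignment = {
    roomId: `room-${nextRoomNumber++}`,
    gameServerHost: pickGameServer(),
  };
  roomsByRoute.set(routeId, created);
  return created;
}
```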

The other problem was that, by default, every message an app sent got duplicated and sent to every other client. While this seems okay, even logical at first, the issue becomes clear when we apply a bit of math.

n = number of users
Number of messages sent to the server = n
Number of messages sent by the server = n × (n − 1)

If we have 100 players on the same route sending 100 messages to the server, 9,900 messages go back out. And since we want two updates per second, that means 19,800 outgoing messages per second. That’s a bit much.
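To see how fast this blows up, here is the same calculation as a throwaway snippet:

```typescript
// Outgoing messages per second for naive fan-out: each of n users sends
// `rate` updates per second, and each one is duplicated to the other n - 1 users.
function outgoingPerSecond(n: number, rate: number): number {
  return n * (n - 1) * rate;
}

console.log(outgoingPerSecond(100, 2));    // 19,800
console.log(outgoingPerSecond(10_000, 2)); // 199,980,000: quadratic growth is brutal
```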

So we wrote some custom logic to run on the server that bundles updates together and only sends a message to each client on a set interval. We even took it a step further: we only send the data that is new to that particular app, so that no unnecessary data is sent. This dramatically reduces bandwidth usage.
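In spirit, the batching works like the sketch below: updates are collected as they arrive, and on a timer each client gets one bundled message containing only what is new to it (a simplified reconstruction, not the actual server code):

```typescript
// Simplified sketch of server-side update batching with per-client filtering.
interface PositionUpdate {
  userId: string;
  lat: number;
  lon: number;
}

interface Client {
  id: string;
  send(data: string): void;
}

class BatchingRoom {
  private pending: PositionUpdate[] = [];

  // Incoming updates are collected instead of being forwarded immediately.
  receive(update: PositionUpdate): void {
    this.pending.push(update);
  }

  // Called on a set interval (e.g. every 500 ms): everything that arrived
  // since the last flush goes out as a single bundled message per client.
  flush(clients: Client[]): void {
    if (this.pending.length === 0) return;
    for (const client of clients) {
      // A client's own updates are not new to it, so we filter them out.
      const fresh = this.pending.filter((u) => u.userId !== client.id);
      if (fresh.length > 0) {
        client.send(JSON.stringify(fresh));
      }
    }
    this.pending = [];
  }
}
```

With batching, outgoing traffic drops from n × (n − 1) individual messages per round to at most n bundled messages per interval.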

5. Implementation in all apps

With the ‘how’ figured out, we created the server setup and implementation, using the proof of concept app to test along the way. Once we were confident everything was up and running, it was time to implement and use the new multiplayer in the end-user apps. There were multiple apps being developed and maintained by different teams. To make sure they could all implement everything as fast as possible, we created some in-depth documentation, such as:

  • An overview of every multiplayer component and how it works at a high level
  • A data contract describing what messages should be sent to and received from the multiplayer, down to the bits and bytes (a flavour of which is sketched after this list)
  • API documentation for the matchmaking servers, describing which HTTP calls could be made and what the responses looked like
  • A data diagram of all fields used in the multiplayer system and what they were used for
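To give a flavour of what “down to the bits and bytes” means, a position-update message could be specified roughly like this. The layout below is entirely invented for illustration; the real contract belongs to the client.

```typescript
// Invented example of a compact binary position update.
//
//  offset  size  field
//  0       4     userId    (uint32)
//  4       4     latitude  (float32, degrees)
//  8       4     longitude (float32, degrees)
//  12      2     speed     (uint16, in 0.01 km/h units)
function encodePositionUpdate(
  userId: number,
  lat: number,
  lon: number,
  speedKmh: number
): ArrayBuffer {
  const buffer = new ArrayBuffer(14);
  const view = new DataView(buffer);
  view.setUint32(0, userId);
  view.setFloat32(4, lat);
  view.setFloat32(8, lon);
  view.setUint16(12, Math.round(speedKmh * 100));
  return buffer; // 14 bytes, versus a JSON string many times that size
}
```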

6. Result

This multiplayer was released to the public just before our client’s busy season in 2019. It has run uninterrupted and without any major issues ever since. The new multiplayer led to a record number of online users and unprecedented growth and retention of subscribers, because users can now enjoy cycling together from the comfort of their own living rooms.

We will be happy to answer any questions you may have!