BoredGamer

Star Citizen & Squadron 42 Guides, Videos, Gameplay & News from BoredGamer

Star Citizen – We Need Server Meshing Now

Welcome to some more Star Citizen. Server Meshing is arguably the most important part of Star Citizen’s Persistent Universe that Cloud Imperium are currently working on: it allows servers to split up, to dynamically spin up and down based on the population of an area, and to share data and move their boundaries. A huge amount of new info has been shared by Cloud Imperium devs: Derek Senior (Programming Director), who was at BritizenCon, and Clive Johnson (Lead Network Programmer) on Spectrum. This video is a summary and discussion of that info.

Server Tick Rates

The higher the server tick rate, the faster players receive input updates, making the game feel more responsive. It also means each player experiences actions on the server and from other clients more accurately, with less delay.

Some examples: Apex Legends runs at 20 ticks, Fortnite at 30 & Battlefield 5 at 60.

For Star Citizen’s PU they wanted to hit a constant tick rate of 30 on their servers.
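To make the tick rates above concrete, here is a small illustrative calculation (my own, not from CIG) of the time between successive server updates at each rate:

```python
# Illustrative only: time between server simulation updates at various tick rates.
def update_interval_ms(tick_rate_hz: float) -> float:
    """Milliseconds between successive server updates at a given tick rate."""
    return 1000.0 / tick_rate_hz

for game, rate in [("Apex Legends", 20), ("Fortnite", 30),
                   ("Battlefield 5", 60), ("SC 3.5 worst case", 10)]:
    print(f"{game}: {rate} Hz -> {update_interval_ms(rate):.1f} ms between updates")
```

At 10 ticks a full 100 ms can pass between updates, versus about 33 ms at the 30-tick target, which is why low tick rates feel so sluggish.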

Currently the tick rates in SC Alpha 3.5 can fall as low as 10 due to poor server performance and degradation, as a single server runs a massive area, the whole of the current Stanton system and all its entities, all at once. This isn’t ideal, and the current servers are now near the limits of what they can do in regards to adding more gameplay area, players, entities etc… It’s one of the major reasons AI suffers too.

Server Meshing and some other supporting Tech / Server Optimizations are being worked on. Without them we can’t have more players on a server, more planets, new star systems OR a better multiplayer experience.

Server Side Object Container Streaming is a pre-requisite of Server Meshing & Full Persistence. It allows parts of the gameplay area on a server to be dynamically turned off/slept, so that the server is only running areas of the game with players in them.
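As a toy model of that idea (my own sketch, not CIG’s implementation): containers holding players stay awake and simulated, everything else gets put to sleep.

```python
# Toy sketch of server-side object container streaming: the server only
# simulates containers that currently hold players. Names are invented.
from dataclasses import dataclass, field

@dataclass
class ObjectContainer:
    name: str
    players: set = field(default_factory=set)
    awake: bool = False

def update_streaming(containers):
    """Wake containers that hold players; sleep the rest. Returns awake names."""
    for c in containers:
        c.awake = bool(c.players)
    return [c.name for c in containers if c.awake]

stanton = [ObjectContainer("Hurston", {"player_1"}),
           ObjectContainer("ArcCorp", set()),
           ObjectContainer("Lorville", {"player_2", "player_3"})]
print(update_streaming(stanton))  # empty ArcCorp is slept, not simulated
```

The server saves the cost of simulating every sleeping container, which is where the hoped-for performance headroom comes from.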

Full Persistence would then allow these areas & the entities therein to be saved, so that if you had placed an object there, it would persist for an amount of time based on its importance.

Clive Johnson talked a lot about Server Meshing on Spectrum, let’s go through that as it was pretty important.

Clive started by answering a thread – How is server meshing possible? – How Many Servers will Star Citizen Need?!

“We use Amazon’s Elastic Cloud Computing (EC2) for our server hosting. I don’t know the exact numbers but they have tens of thousands of servers available in each region, and we can add as many of these as we want to our network within a matter of minutes. That’s a crazy amount of computing power, right at our fingertips. It’s definitely more than we would need for each planet and space station, in every star system, to have their own server.

However, the thing to remember is that having servers tied to specific in-game locations is just a temporary stepping stone on the way to the full server meshing implementation. My guess and my hope is that we’ll have left this temporary solution behind by the time new systems start coming online, but I’m not entirely sure how everything lines up on the roadmap, so I might be wrong about that.

The reason we’re considering having per location servers as a stepping stone at all, is that it would allow backers to begin testing parts of server meshing before all the other work on it has been completed. To start with, we’d put the boundaries between servers out in deep space so that they could only really be crossed during Quantum Travel. That would really limit how often players and other entities transition between servers, the kinds of entities that need to transition, as well as what can be happening during a transition. As bugs are fixed and we gain confidence with the technology, we may divide locations between more servers. Ultimately though, the idea is not to have any fixed server boundaries. Instead a server will manage the game for a cluster of players. As the cluster spreads out, the area the server manages will grow, and as the players in a cluster bunch up, the area managed by the server shrinks. When clusters of players belonging to different servers overlap, the servers will decide whether to transition players between them, or even to break out a new cluster of players and spin up another server to handle it. In this version of server meshing, servers will only be assigned to locations where there are players, greatly reducing the number of servers we would otherwise need, and allowing the game to scale to higher player counts much more cheaply.”
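The “area grows as the cluster spreads out, shrinks as players bunch up” idea can be sketched very simply: treat a server’s managed region as the padded bounding box of its player cluster. This is purely my own toy model of the quote, not CIG’s algorithm.

```python
# Toy model: a server's managed region is the bounding box of its player
# cluster plus some padding. The padding value is an invented assumption.
def managed_region(player_positions, padding=100.0):
    """Return (min_x, min_y, max_x, max_y) covering all players, padded."""
    xs = [p[0] for p in player_positions]
    ys = [p[1] for p in player_positions]
    return (min(xs) - padding, min(ys) - padding,
            max(xs) + padding, max(ys) + padding)

def region_area(r):
    return (r[2] - r[0]) * (r[3] - r[1])

spread = managed_region([(0, 0), (5000, 3000)])   # players far apart
bunched = managed_region([(0, 0), (50, 30)])      # same players, bunched up
print(region_area(spread) > region_area(bunched))  # True: region follows them
```

In the real system the servers would additionally negotiate hand-offs when two such regions overlap, as the quote describes.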

There was some additional info here that I’ll summarize.

The Server-Meshed servers do share info and communicate with each other, so you will be able to see players and planets in an adjacent area that is being handled by another server.

Players on different servers should still be able to see and interact with each other. To try and keep latency down they’d ideally migrate players that are interacting or close to each other to the same server. That’s a latency optimization problem for the future though.

Simple projectiles like bullets and lasers are not networked; as they fly in a straight line they only need to create the fire event. The new Projectile Manager really helps optimize this as well, so you don’t have to worry about a huge amount of fire slowing down the server.
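The trick here is determinism: if every client receives the same fire event, each one can extrapolate the straight-line path locally without any further network traffic. A minimal sketch of that idea, with the event fields being my own assumptions:

```python
# Sketch of why simple projectiles need no per-tick networking: one fire
# event is replicated, then every machine computes the same trajectory.
def fire_event(origin, direction, speed, fire_time):
    """The only thing that crosses the network. Field names are assumed."""
    return {"origin": origin, "dir": direction, "speed": speed, "t0": fire_time}

def projectile_position(event, now):
    """Each client derives the same position from the same event."""
    dt = now - event["t0"]
    return tuple(o + d * event["speed"] * dt
                 for o, d in zip(event["origin"], event["dir"]))

shot = fire_event(origin=(0.0, 0.0, 0.0), direction=(1.0, 0.0, 0.0),
                  speed=800.0, fire_time=10.0)
print(projectile_position(shot, now=10.5))  # (400.0, 0.0, 0.0)
```

A thousand bullets in flight therefore cost the server a thousand one-off events, not a thousand position updates per tick.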

Desyncs are a real possibility and they need to solve this. Server Meshing will be able to move the boundaries between servers too, so ideally groups of proximate ships will be migrated onto the same server. They only need to deal with ships that are engaging with each other at the time or are in range, so they foresee being able to handle small sections of larger battles on multiple servers.

“When they can’t co-locate interacting players on the same server, they’ll fudge it with typical networking smoke-and-mirrors. That shouldn’t really be any worse than players interacting in a peer-to-peer game. Once server meshing is implemented he thinks a lot of the network team’s time will be spent on making the networked experience feel as good as possible.”

Server boundaries will also move with orbiting celestial bodies, or even a ship moving through space. The boundaries will be defined relative to a physics grid so as the grid moves the server boundary will automatically move with it.
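Defining a boundary relative to a physics grid means storing it in the grid’s local coordinates; the world-space boundary then moves for free whenever the grid (a planet, a travelling ship) moves. A minimal 2D sketch of that, my own illustration:

```python
# Sketch: a server boundary stored in a physics grid's local frame follows
# the grid automatically when the grid translates through space.
def boundary_world_space(local_boundary, grid_origin):
    """Convert a grid-local rectangular boundary to world coordinates."""
    (lx1, ly1), (lx2, ly2) = local_boundary
    gx, gy = grid_origin
    return ((lx1 + gx, ly1 + gy), (lx2 + gx, ly2 + gy))

local = ((-1000, -1000), (1000, 1000))          # fixed in the grid's frame
print(boundary_world_space(local, (0, 0)))      # grid at origin
print(boundary_world_space(local, (500, 250)))  # grid moved; boundary followed
```

Nothing about the boundary itself has to be updated when the celestial body orbits, only the grid transform, which the physics system already tracks.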

SERVER SIDE OCS (hope a dev sees that) – In regard to whether we will see SSOCS improve client-side performance.

Clive Johnson (Lead Network Programmer) for Cloud Imperium

“We won’t know for sure until server-side OCS has been implemented and we can measure the impact. What we’re hoping for is improved server performance, which should mean more frequent network updates delivered to your clients and a less glitchy online experience. If server performance improves enough we may be able to increase the player count per server. That’ll be determined by the worst case server performance, which happens when all the players spread out to worst performing areas in the system, so general performance optimisations also help in this regard. The biggest benefit you will experience is that we can continue adding new content to Stanton without necessarily having a negative impact on server performance.”

They aren’t expecting server-side OCS to have an impact on client performance.

In Regard to Server meshing being dependent on full persistence

Server meshing is also dependent on full persistence, but only because that’s a necessary component of server-side object container streaming. Serialization between servers will be handled by the same network code that we currently use to serialize between clients and a server. A large part of the work for server meshing, in essence, comes down to making a server able to act as a “client” of another server. This means making code that until now has been able to assume that there is only one server, and that therefore that server has authority over everything, understand that authority over some entities may reside with different servers. Clients already do this to a certain extent since a client has authority over its player character and the vehicle the player is controlling. It’s a question of firming up that technology and extending it so it can be applied to everything.

Server-side OCS comes into the server meshing equation because each server needs to manage what entities it needs to stream in and out. Persistence is part of that because once an entity is streamed out, and no server has it loaded, the state of that entity needs to be persisted to the database, so it can be restored at a later time.
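The stream-out rule described above, persist an entity once no server has it loaded any more, can be sketched with a simple reference-tracking model. This is my own toy illustration of the described behaviour, not CIG code:

```python
# Toy sketch: track which servers have an entity streamed in; when the last
# one streams it out, write its state to the database for later restore.
class EntityTracker:
    def __init__(self, database):
        self.db = database
        self.loaded_on = {}  # entity_id -> set of server ids holding it

    def stream_in(self, server, entity_id):
        self.loaded_on.setdefault(entity_id, set()).add(server)

    def stream_out(self, server, entity_id, state):
        servers = self.loaded_on.get(entity_id, set())
        servers.discard(server)
        if not servers:                 # no server has it loaded any more
            self.db[entity_id] = state  # persist so it can be restored later

db = {}
tracker = EntityTracker(db)
tracker.stream_in("server_a", "crate_42")
tracker.stream_in("server_b", "crate_42")
tracker.stream_out("server_a", "crate_42", {"pos": (1, 2, 3)})
print("crate_42" in db)  # False: server_b still has it loaded
tracker.stream_out("server_b", "crate_42", {"pos": (1, 2, 3)})
print("crate_42" in db)  # True: last holder gone, state persisted
```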

All of this used to come under the “server meshing” banner, but this year we’ve been picking apart the details and separating them out into what’s only needed for server meshing and what’s also needed for streaming in Squadron 42. It makes sense to try and use the same streaming technology for both rather than reinventing the wheel for each. All of the streaming technology for both is now covered by server-side OCS. A benefit of splitting things like this is that both parts can be worked on simultaneously by different teams.

When will we have Server Meshing?

The various prerequisites and optimizations for servers – SSOCS, the Actor Networking Rework & Full Persistence – are all being worked on in parallel by different teams.

Server OCS and server meshing aren’t on the roadmap yet because they both touch a lot of systems and need time from other teams to make them work. The directors are reviewing the plans for both and trying to work them in with the plans for these other teams, deciding what might need to change on the roadmap to make room. The roadmap will be updated once they’ve figured out all the details. Work on both Server OCS and server meshing is underway while they do that.

In Regard to How Large Scale Battles will be Possible – 10k players on a planet, or Bengals and 600 fighters supporting them in a small area? Clive said:

A battle of that scale would definitely require some fudging on our part. If you have ever been part of a large crowd, say at a sporting event, a gig, or even a busy city square or train station, you’ll have noticed that you are very aware of the people immediately around you but beyond a certain depth into the crowd you aren’t really aware of the people, and a bit beyond that you aren’t aware of anyone at all. I think truly massive battles could work a bit like that. The battle would be divided among a lot of servers, each server handling a small area due to the density of players. You will be able to look around and see the players in the servers near you but your server won’t connect to others beyond a certain distance (based on the density of players around you) and you won’t be able to see the players on those servers. Hopefully the effect will be similar to that of being in a crowd, in that everywhere you look there are masses of players around you and you just assume that the crowd/battle continues further than your ability to see through it. If you were to fly around the battlefield (is it still called a battlefield in space?) you could still visit everyone in turn, transitioning from one server to another as you move around.
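The crowd analogy amounts to interest management: you only connect to servers within an awareness radius, and that radius shrinks as local density rises. The rule and numbers below are purely my own illustration of the quote:

```python
# Toy sketch of density-based awareness: a client sees only the servers
# within a radius that shrinks as the local crowd gets denser. The
# radius formula is an invented assumption for illustration.
import math

def visible_servers(my_pos, server_positions, base_radius, local_density):
    radius = base_radius / max(1.0, math.sqrt(local_density))
    return [sid for sid, pos in server_positions.items()
            if math.dist(my_pos, pos) <= radius]

servers = {"s1": (0, 0), "s2": (300, 0), "s3": (900, 0)}
print(visible_servers((0, 0), servers, base_radius=1000, local_density=1))
print(visible_servers((0, 0), servers, base_radius=1000, local_density=16))
```

In a sparse fight you’d see everyone; in a dense furball you’d only connect to the servers immediately around you, exactly like standing in a crowd.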

In Regard to Star Citizen having everyone in a single mega-meshed server, even across Countries

Clive Johnson – That’s still an open question and what follows is therefore speculation on my part. We may end up doing things completely differently. But let’s speculate anyway because it’s fun to do…

My guess is we’ll first use server meshing to build regional shards before trying to build a single global instance and address the latency problem. Once server meshing is working well, we have a few options to explore. Obviously traditional network techniques such as lag compensation and client-side prediction will play a role. One of the benefits of using AWS for server hosting is that network communication between servers in different data centres uses Amazon’s CDN backbone, which has roughly 10% lower latencies than the public internet. We can explore having clients always connected to a server in their local region and those servers making the connection to servers in other regions. Since we control the servers we can trust what they tell each other, so hit detection when you shoot at a player in another region could be performed by your local server. This is a “favor the shooter” approach and might result in the classic problem of players feeling like they’ve been shot quite a while after getting behind cover. Another option that may reduce that problem is to use server meshing to put clients on a server that is near their geographic middle. For example, if one player is in Los Angeles and another is in Wilmslow then they may both be connected to a server in New York. Of course we can also investigate combining the above ideas. But first we need to build server meshing before we can try out these ideas.
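The “geographic middle” idea in the quote is essentially picking the data centre that minimises the worst-case latency across the players involved. A toy sketch of that selection rule, with made-up latency numbers:

```python
# Toy sketch of the "geographic middle" server choice: pick the region
# that minimises the worst latency any participant sees. All latency
# figures here are invented for illustration.
def pick_server(latency_ms_by_region):
    """latency_ms_by_region: {region: {player: latency_ms}} -> best region."""
    return min(latency_ms_by_region,
               key=lambda region: max(latency_ms_by_region[region].values()))

latencies = {
    "us-west (LA)":     {"la_player": 10,  "uk_player": 140},
    "us-east (NY)":     {"la_player": 70,  "uk_player": 75},
    "eu-west (London)": {"la_player": 140, "uk_player": 10},
}
print(pick_server(latencies))  # the New York region minimises worst-case lag
```

This mirrors the Los Angeles/Wilmslow example: neither player gets their best-case ping, but neither suffers the full trans-Atlantic round trip either.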

They are working on getting these server optimizations & Server Meshing all working together. Realistically they cannot expand the gameplay area any further without it – microTech, Orison-Crusader – and it’s also needed to finish off core features.

They can release a lot of these optimizations piecemeal / tiered BUT it is all working towards server meshing.

It’s a major priority now for CI & Star Citizen’s PU. As it stands at time of writing, with microTech due in Alpha 3.8 at the end of 2019, that’s likely where they are targeting to have at least some of these optimizations completed, or risk having these gameplay areas & other features further delayed.

I am very much looking forward to larger server caps, more gameplay area, new star systems, full persistence, giant capital ship battles and more that now all rely on getting this tech out. I am sure they will do it; just how long it will take is another matter.

Other Info

What instance type of EC2 servers do you use for the game servers? A1, T3, M5 etc?

C5 for PU. I think we’re using C4 for AC and SM. We stack a couple of server instances on the same VM while we’re still changing our code to better utilize the available cores. This information might be a bit out of date as DevOps are constantly tweaking how the servers are deployed.

Offloading processing for certain systems onto different servers is something that server meshing could do. It is an option being considered but nothing has been decided yet.