Welcome to some more Star Citizen with a Dev Response video… Have you ever wondered how to tempt devs into answering questions, and why the good questions DON'T GET ANSWERED? And can we get better updates on server stability?
Let’s see what Cloud Imperium’s Devs have to say:
Bearded-CIG Responded with a Huge Reply here:
A good majority of the stability fixes that we do take place during the PTU. I think rather than this being something we report on every quarter, it’s something we’d want to get added ( and working ) on the telemetry page. Currently the telemetry page only lists game client stability ( which is broken and the bug for it is currently assigned out to be looked at ). So we’d want to add server stability to that page as well, but I’d also prefer to have a more useful visualization than just the current stability because in the future, the way that we’re going to measure stability will change.
Would be nice to hear … a team discovered x,y,z and we should see 10% improvement or something
I avoid making predictions of how much stability is going to improve when a fix is implemented because, based on historical data, any predictions we make tend to be wrong: there is always a chance that additional issues exist that were being hidden by the issue that was fixed.
Currently, server and client stability have a set threshold that we aim to beat for every release. Sometimes we're able to reach that goal, but sometimes it doesn't go that well. Part of the reason is that we use data from the PTU to represent what the public environment might be like, and that doesn't always end up being accurate. Sometimes the smaller sample of players on the PTU plays the game differently or doesn't generate enough traffic to run into certain issues. Sometimes the rapid iteration of the PTU environment prevents us from seeing issues that only occur on game servers that have been running for a while. Because of this, any time a build is released we keep an eye on it, check whether things are as stable as expected, and if not, release additional patches to help things along.
Some of the biggest improvements that we currently need for stability have to do with how stability issues are reported in our analytics. The way that we currently measure stability doesn't do a good job of capturing what the player experience is actually like because it uses statistical averages without any way to look at specific gameplay loops. For alpha this is fine because we can look at the numbers and say that the average player experience is stable, but the problem with averages is that while yes, some players are going to have a great time and go hours or days without encountering a server crash, that isn't true for everyone. This model fails to identify issues that occur in gameplay that a smaller number of players participate in. For server crashes, there are two pieces of data that can easily help to identify these kinds of problems:
- How long the server was running before it crashed
- What location the server was hosting
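As a rough illustration of why averages hide these problems (the event fields and values below are invented for the example, not CIG's actual telemetry schema), a crash event carrying an uptime-at-crash field lets you bucket crashes instead of averaging them:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical server-crash analytics events -- field names and values
# are illustrative assumptions, not CIG's actual telemetry schema.
crash_events = [
    {"uptime_minutes": 412, "location": "Stanton"},
    {"uptime_minutes": 389, "location": "Stanton"},
    {"uptime_minutes": 6, "location": "Stanton"},   # crashes right after a specific mission
    {"uptime_minutes": 7, "location": "Stanton"},
]

# A single global average makes stability look unremarkable...
overall = mean(e["uptime_minutes"] for e in crash_events)

# ...but bucketing by uptime-at-crash surfaces the short-lived cluster.
buckets = defaultdict(int)
for e in crash_events:
    key = "under_10_min" if e["uptime_minutes"] < 10 else "over_10_min"
    buckets[key] += 1

print(overall)        # 203.5 -- the average hides the problem
print(dict(buckets))  # {'over_10_min': 2, 'under_10_min': 2}
```

The average uptime looks healthy, but half the crashes happen within minutes of the server coming up — exactly the kind of signal the dev describes the current averaged metrics missing.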
Just to answer anyone that's wondering "well, why don't you have that information?": the people who would help me get that information into an analytics event that can be visualized into graphs are also the people working on critical game functionality. A lot of the current stability analytics that I have were implemented by our backend services, networking, and platform teams ( big thanks to Tom, Clive, and Benoit ), so it's a question of whether they should work on improving stability analytics or on developing the game.
Here’s how those two types of data help:
- Knowing how long the game server was running before it crashed makes it easier to spot rare but easy-to-reproduce crashes. Let's say there's a 100% server crash that occurs for a mission at the end of a chain of missions. That's a super specific situation that isn't going to be run into very often, but for the person on that mission, the game is going to feel pretty unstable, because any time they try to complete it, the server goes down. With the way the game currently works, this metric wouldn't work that well: since the game server hosts the entire solar system, a server may have been running for some time before running into an issue like this. That will get better with server meshing, but may still run into accuracy issues because our player in this situation isn't guaranteed to be the first person to join that server.
- Knowing the location that a server was hosting helps make up for those accuracy issues, but not until the servers are hosting more specific locations. Currently, our server crash analytics events do include location data; the problem is that the game server hosts the entire solar system, so its usefulness is limited to identifying crashes that occur in the Persistent Universe, Arena Commander, or Star Marine. All of that is useful because AC- and SM-related server crashes can help us identify and fix rare combat-related crashes in the PU, but it isn't going to help us identify our above example of someone crashing the server while doing a specific mission. As server meshing matures and game servers start to host locations dynamically based on where players are located, it becomes easier to identify those rare but bad stability issues, because we'll be able to see if a specific location has worse stability than other areas.
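Once meshed servers each host a specific location, the same crash events could be grouped by location to rank stability. A minimal sketch of that grouping (the location names are just examples, and this is not CIG's actual tooling):

```python
from collections import Counter

# Hypothetical crash events after server meshing, where each game
# server hosts a specific location instead of a whole solar system.
crashes = [
    {"location": "Orison"},
    {"location": "Orison"},
    {"location": "Orison"},
    {"location": "New Babbage"},
]

# Crash counts per hosted location make the unstable outlier obvious.
by_location = Counter(c["location"] for c in crashes)
worst_location, crash_count = by_location.most_common(1)[0]
print(worst_location, crash_count)  # Orison 3
```

With finer-grained locations in the event data, a rare mission-specific crash shows up as one location dominating the crash counts rather than disappearing into a system-wide average.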
The recent playtesting of the XenoThreat event that our players helped us out with illustrated how important this kind of information can be. Without those playtests, we wouldn't have been able to isolate the stability of the event, because the data for players taking part in the event would be mixed with the rest of the gameplay that players took part in while the event was running. However, in the future when we start to dynamically host different locations, we'll be able to see which locations are less stable than others and focus on getting the issues affecting them fixed.
Clive Johnson CIG Responded – When a question is outside a particular dev’s domain, or would require them to speculate, then that dev won’t respond. Another dev might know the answer but not all devs feel comfortable posting on the public forum. CIG encourages us to engage with backers and respond to your questions but it’s strictly voluntary. Of course, we’re all kept pretty busy and juggling work and life can sometimes make it difficult for devs to find time to check the forums and see if there’s a question they could answer. Unfortunately this means some questions may not get a dev response.
Thankfully we’re lucky to have a well-informed and helpful community, many of whom are able to respond when a dev might otherwise not. There have been lots of times I’ll see that a post I might have replied to has already been answered and there isn’t really anything else I could add.
In short: we are reading these posts and respond where and when we can.
A few devs are quite active on this forum, as you can see from them most often being the ones posting responses. If a post is relevant to one of their fields, I'd speculate the chances are pretty good they will at least have read it. Beyond that, I don't think there's a way for anyone to know.
Boom, that's it for Dev Responses today… we are on the home stretch towards Alpha 3.13 going into testing soon, and we will see loads on that in the coming days.
Do you wish that Star Citizen communicated more about its server improvements and focused more on the stability and playability of its patches?
Do you think that important questions are often ignored by devs in the appropriate threads?
Whatever your thoughts I’d love to hear from you in the comments below.