
Dev Response – Vulkan & Server Updates

Welcome to some more Star Citizen, with Dev Responses to some major questions on Server Meshing, Server Performance, Utilization of Tech, and Vulkan Gen12 Renderer Updates for the future of both Star Citizen’s PU and Squadron 42.

There were questions about server performance, SOCS and server meshing from shoeii on Spectrum, which CIG’s Chad McKinney answered:

Let me preface what I’m about to say with a giant disclaimer. Nothing I say here is definitive about how things will work or whether this is what we’re working on right now, I’m just demonstrating that there’s a range of possibilities. Also I’m not at all commenting on the timing of any release or dates.

Is it possible to implement Pyro without server meshing?

Depends on how we want it to work. We could implement it without server meshing by making it a different location you connect to in the front end that’s isolated from the Stanton instance. However, if we wanted it to work in-game, we could still be clever about what level of server meshing we need to support. It may be possible to consider a simpler initial version of meshing where different solar systems run on different servers and you use the jump point as an explicit transition point. This would mean we wouldn’t have to solve all the server meshing problems, but it would be an incremental step towards the final goal.
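
To visualise the kind of incremental step Chad is describing, here is a very rough sketch of one dedicated server per solar system, with the jump point acting as an explicit client handoff between them. This is purely illustrative; the names, structure and reconnection approach are my assumptions, not CIG’s code.

```cpp
#include <iostream>
#include <map>
#include <string>

// Hypothetical sketch: one dedicated game server (DGS) per solar system,
// with a jump point acting as an explicit client handoff between them.
struct ServerEndpoint {
    std::string host;
    int port;
};

class SystemRouter {
    std::map<std::string, ServerEndpoint> systemServers;
public:
    void registerSystem(const std::string& systemName, ServerEndpoint ep) {
        systemServers[systemName] = ep;
    }

    // Called when a player commits to a jump: look up the destination
    // system's server so the client can disconnect and reconnect there.
    ServerEndpoint resolveJump(const std::string& destinationSystem) const {
        return systemServers.at(destinationSystem);
    }
};

int main() {
    SystemRouter router;
    router.registerSystem("Stanton", {"stanton.example.net", 64090});
    router.registerSystem("Pyro",    {"pyro.example.net",    64090});

    // Player flies into the Stanton -> Pyro jump point:
    ServerEndpoint next = router.resolveJump("Pyro");
    std::cout << "Handing off client to " << next.host << ":" << next.port << "\n";
}
```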

Under these conditions, is it possible for the servers to support the introduction of an additional solar system, with players distributed between all the locations of Stanton + Pyro, without being at 1 FPS?

As I always say, you have to profile. My guess is that adding an entire system would cause some serious performance challenges that are probably prohibitive without investing time in optimization, and even then it’s unclear; we’d need to know what the bottlenecks were and how feasible the optimizations would be given the demands on the schedule, weighed against other priorities, etc.
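
That “profile first” point comes up again below, so as a tiny illustration of what he means, here is a minimal scoped timer of the kind engines use to find out where frame time actually goes before committing to an optimization. It’s a generic sketch, not CIG’s tooling.

```cpp
#include <chrono>
#include <cstdio>

// Minimal RAII scope timer, purely illustrative: prints how long a
// block took, so optimization effort can follow measured hotspots.
struct ScopeTimer {
    const char* label;
    std::chrono::steady_clock::time_point start;
    explicit ScopeTimer(const char* l)
        : label(l), start(std::chrono::steady_clock::now()) {}
    ~ScopeTimer() {
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(
            std::chrono::steady_clock::now() - start).count();
        std::printf("%s: %lld us\n", label, static_cast<long long>(us));
    }
};

void updateAI()      { ScopeTimer t("AI update");      /* ... */ }
void updatePhysics() { ScopeTimer t("Physics update"); /* ... */ }

int main() {
    ScopeTimer frame("Whole frame");
    updateAI();
    updatePhysics();
}
```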

Will we have to wait for server meshing to finally see the server tick rate reach 30 FPS?

I’m not gonna say what changes are gonna be the thing that makes the servers hit a certain FPS. It’s so much more complicated than that, and guessing the FPS of a future version of the game where the engine works very differently than what we have now and the content we have is different than what we have now is a fool’s errand. I think server meshing will address one of the biggest issues with performance which is density of content, but it’s not the only issue we have with performance.

Or are there any other improvements planned before server meshing to increase fluidity and decrease server load?

As with any game, we need to spend time on performance, and big tech initiatives can help us make big leaps, but they aren’t going to be enough on their own; we will always need to consider performance in all our systems in the game and work towards our performance goals. There are still lots of places for us to make improvements, for example in AI, in the entity/component system, in physics, etc.; there are wins to be had. When we get server meshing, the density of content on a DGS will come down and these other issues will get highlighted even more, and then addressing those will have a more pronounced impact on performance.

Performance is a tricky thing, and the solutions that get you from 10 FPS to 20 FPS are different than the ones that get you from 20 to 30, or 30 to 60, as every time you bring that frame time down, the time requirements for all your systems also decrease, which means your problems are trickier and you need to be concerned with more and more nuanced and specific issues. But again, it all starts with profiling and understanding the problem first; there is no optimization without context.
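
To put some numbers on that frame-time point: 10 FPS is a 100 ms frame budget, 20 FPS is 50 ms, 30 FPS is roughly 33.3 ms and 60 FPS is roughly 16.7 ms, so every jump in target FPS shrinks the time every system in the frame is allowed to take. A toy calculation:

```cpp
#include <cstdio>

// Frame-time budgets for the FPS targets McKinney mentions. Halving
// the frame time halves the budget of every system in the frame,
// which is why each successive optimization step gets harder.
int main() {
    const double targets[] = {10.0, 20.0, 30.0, 60.0};
    for (double fps : targets) {
        double budgetMs = 1000.0 / fps;
        std::printf("%5.1f FPS -> %6.2f ms per frame\n", fps, budgetMs);
    }
}
```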

Rooster7755 asked: Is there Server Streaming with SSOCS?

The very active Chad McKinney responded again – We have an initial implementation of SOCS in 3.8 using a prototype backend, but there is still more work to finalize the implementation to use our new persistence backend and also to support global persistence (currently underway). There is no server meshing right now, as the full-persistence SOCS implementation is a requirement for server meshing, as well as a host of gameplay changes, not to mention the fundamental network functionality, all of which is either underway or scheduled. We’re getting there, just not there yet!
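
For anyone unfamiliar with SOCS (Server-side Object Container Streaming), the rough idea is that the server only keeps content “awake” where players actually are. Here is a deliberately simplified, hypothetical sketch of that concept; the names and distance check are my own illustration, not CIG’s implementation.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Purely illustrative take on server-side object container streaming:
// the server only keeps an object container "streamed in" while a
// player is within range, freeing resources for distant content.
struct Vec3 { double x, y, z; };

static double distance(const Vec3& a, const Vec3& b) {
    const double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

struct ObjectContainer {
    const char* name;
    Vec3 position;
    bool streamedIn = false;
};

void updateStreaming(std::vector<ObjectContainer>& containers,
                     const std::vector<Vec3>& playerPositions,
                     double streamRadius) {
    for (auto& c : containers) {
        bool wanted = false;
        for (const auto& p : playerPositions)
            if (distance(c.position, p) < streamRadius) { wanted = true; break; }
        if (wanted != c.streamedIn) {
            c.streamedIn = wanted;
            std::printf("%s: %s\n", c.name, wanted ? "streamed in" : "streamed out");
        }
    }
}

int main() {
    std::vector<ObjectContainer> containers = {
        {"Port Olisar", {0, 0, 0}},
        {"Lorville",    {500000, 0, 0}},
    };
    std::vector<Vec3> players = {{100, 0, 0}};
    updateStreaming(containers, players, 10000.0);  // only Port Olisar streams in
}
```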

There were questions on Vulkan and DirectX

We know that CIG are going to move to Vulkan and DirectX in the short term, then move over fully to just Vulkan for a focused hardware-neutral approach for its renderer… otherwise known as the Gen12 Renderer that they are building.
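
A hardware-neutral renderer normally sits behind an abstraction layer with one backend per graphics API, which is what lets a D3D11 path and a Vulkan path coexist during a transition like this. Here is a rough hypothetical sketch of that pattern; CIG’s actual Gen12 interface isn’t public, so all the names here are assumptions:

```cpp
#include <iostream>
#include <memory>

// Hypothetical API-neutral rendering interface; CIG's real Gen12
// design is not public, this just illustrates the backend pattern.
class IRenderBackend {
public:
    virtual ~IRenderBackend() = default;
    virtual void beginFrame() = 0;
    virtual void submit() = 0;
};

class D3D11Backend : public IRenderBackend {
public:
    void beginFrame() override { std::cout << "D3D11 frame begin\n"; }
    void submit()     override { std::cout << "D3D11 submit\n"; }
};

class VulkanBackend : public IRenderBackend {
public:
    void beginFrame() override { std::cout << "Vulkan frame begin\n"; }
    void submit()     override { std::cout << "Vulkan submit\n"; }
};

// Game code talks only to the interface; the backend is chosen once,
// so the engine can ship both paths and later drop one.
int main() {
    std::unique_ptr<IRenderBackend> renderer = std::make_unique<VulkanBackend>();
    renderer->beginFrame();
    renderer->submit();
}
```

The point of the indirection is that game code never changes when a backend is added or removed, which is also why a temporarily “clumsy and weird” D3D11 implementation, as Ben Parry puts it below, is an acceptable stepping stone.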

Ben Parry talked more about this:

Calling it Gen12 internally was, I think, my suggestion. It was just to avoid the trap of naming something “New Renderer” or whatever, and being stuck with that name for years to come. Unfortunate, though, that Intel called something Gen12 at exactly the same time, but that’s a totally different thing.

Until we switch to Gen12, everything’s using something named XRenderD3D9 (which runs D3D 11) (names are hard) (that one’s not our fault).

We’re allowing the D3D11 implementation to be a bit clumsy and weird so that the final version can be efficient.

Our implementation doesn’t merge any code from CryEngine or Lumberyard’s Vulkan conversions.

Bear in mind that with cinematics, lighting detail can be a lot nicer, just because every shot can be precisely set up to get the nicest view, tricksy cinematic lighting can be applied for each shot, etc. Hair has been changing tech-wise, and there are definitely different assets at different qualities now, but there is definitely some nice hair in engine.

Star Citizen’s Engine is reportedly scalable to 30 CPU cores internally.

The idea is that a lot of the time-consuming work that’s currently on a single dedicated render thread can be handed off in pieces to the Job System that manages our other multithreading, letting us build the GPU command lists in parallel.
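
What Ben is describing maps onto a common pattern: split the frame’s draw work into chunks, record each chunk’s command list on a worker, then submit in order. Here is a simplified sketch using plain std::thread as a stand-in for a real job system; all names are hypothetical:

```cpp
#include <cstdio>
#include <string>
#include <thread>
#include <vector>

// Simplified stand-in for a recorded GPU command list.
struct CommandList {
    std::vector<std::string> commands;
};

// Each worker records one chunk of the scene's draw calls in parallel,
// the way a job system might fan render work out across cores.
void recordChunk(CommandList& out, int first, int count) {
    for (int i = first; i < first + count; ++i)
        out.commands.push_back("draw object " + std::to_string(i));
}

int main() {
    const int numObjects = 1000, numWorkers = 4;
    const int chunk = numObjects / numWorkers;

    std::vector<CommandList> lists(numWorkers);
    std::vector<std::thread> workers;
    for (int w = 0; w < numWorkers; ++w)
        workers.emplace_back(recordChunk, std::ref(lists[w]), w * chunk, chunk);
    for (auto& t : workers) t.join();

    // Submission stays ordered even though recording was parallel.
    for (const auto& list : lists)
        std::printf("submitting %zu commands\n", list.commands.size());
}
```

Ordered submission after parallel recording is what lets the work scale across cores without changing what the GPU ultimately sees.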

There were questions on Secondary Viewports and the Render to Texture Tech from Ol’Red’s Spectrum thread, which were answered by CIG’s Ben Parry:

  1. What is the status of the system? It’s up and running! It’s being used all over, in places you may not have realised, such as mini-previews and comms calls in mobiglass, holograms out in the world, like the big soft drink advert in Area 18 and (I think) the enemy ship views in the HUD.
  2. Is it still the plan for reflections? It’s important to stress that this isn’t a general-purpose solution to add high quality reflections everywhere. Whenever we set up one of these views, we have to make decisions about what style of rendering it will use, and what features will be enabled, based on the time it will take to render and what permanent GPU memory it will need. So this makes it a good match for a mirror in an enclosed bathroom, where we know there’s only a handful of lights and you can’t, for example, see out a window to a planet’s atmosphere. The same mirror on a player-controlled object would be like a performance landmine, where looking at it in the wrong circumstance would halve your framerate. Similarly, we can’t just spawn them on every shiny surface.
  3. Is there any update as to the timeline? There are a few small technical issues that currently stop it from being used as a mirror in the simplest way. For example, because of the way polygons are culled, mirroring an object also shows you the back faces of the mesh, effectively turning it inside out. None of these are likely to be huge problems, but there are likely a lot of those little snags hiding in different systems.
  4. Will secondary viewports be available on the HUD, to assist in hangar landings? See my answer to (2) for why we’re not keen to add general-purpose camera views onto things that can fly around arbitrary places. However, that’s not to say that no view like this could possibly work. The graphics team has generally argued for these kinds of features to be presented as a kind of “scanner view”, which would mean it could have a visually appealing non-photorealistic look, and give us the freedom to leave out major performance sinks that you don’t actually need, or that would actively interfere with landing. For instance, volumetric fog has major performance and memory costs, but landing in fog is probably harder, so why not have a scanner that doesn’t see fog at all? (There’s a rough sketch of this per-view feature idea just below.)
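
Ben’s answers to (2) and (4) both come down to a per-view feature budget: every secondary view opts into only the rendering features it can afford. Here is a rough hypothetical sketch of that idea (not CIG’s actual system):

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical per-view feature flags: each render-to-texture view
// opts into only the features its cost budget can afford.
enum ViewFeatures : std::uint32_t {
    Lighting      = 1u << 0,
    VolumetricFog = 1u << 1,
    Atmosphere    = 1u << 2,
};

struct SecondaryView {
    const char*   name;
    std::uint32_t features;
};

int main() {
    // A bathroom mirror: few lights, no sky visible, fog never needed.
    SecondaryView mirror{"bathroom mirror", Lighting};

    // A "scanner" landing view deliberately omits fog, which both saves
    // GPU time and makes landing in fog easier.
    SecondaryView scanner{"landing scanner", Lighting | Atmosphere};

    std::printf("%s renders fog: %s\n", scanner.name,
                (scanner.features & VolumetricFog) ? "yes" : "no");
    std::printf("%s renders fog: %s\n", mirror.name,
                (mirror.features & VolumetricFog) ? "yes" : "no");
}
```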