I’ve recently been trying to put together a remote gaming setup, and while I got it to work with Windows 10, the hardware H.264 encoding was a little flaky – it would drop out all the time at higher resolutions, and only one user could be logged in at a time. So I started looking at Windows Server 2016.
One interesting obstacle is that the latest Nvidia drivers (49x and up) don’t work on Windows Server 2016. I randomly tried the latest 3xx series drivers, which support the GeForce GTX 10xx cards I use, and those worked. Eventually, after a lot of bisecting of driver versions, I found that the very last series before 49x – the 47x “Game Ready” series – does in fact work. The Studio driver of the exact same version, unfortunately, refuses to install.
The experience so far is pretty good, and I haven’t noticed any hardware encoding dropouts. There are two easy ways to tell when hardware encoding drops out:
- Keep Task Manager open, monitoring the GPU’s video encode utilisation – when hardware encoding drops out, the encode load on the GPU will drop to 0.
- The image quality will improve dramatically (Nvidia hardware encoding is a bit blocky under a gaming workload), but the frame rate will tank because Windows’ built-in software H.264 encoder eats all of the CPU.
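If you’d rather watch for dropouts from a shell than from Task Manager, nvidia-smi can sample the encoder load directly – the enc column in its output is the hardware encoder utilisation. A small sketch (guarded so it is a no-op on machines without the Nvidia tools installed):

```shell
# Sample GPU utilisation 5 times, one second apart; the "enc" column is the
# hardware encoder load. A healthy streaming session shows a non-zero enc
# value; it dropping to 0 mid-session means encoding fell back to software.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi dmon -s u -c 5
else
    echo "nvidia-smi not available on this host"
fi
```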
I recently came across a rather interesting issue that seems to be relatively unrecognised – since the 18xx updates, idle Windows guest VMs seem to be consuming about 30% CPU on the Linux KVM host. This took me a little while to get to the bottom of, and after excluding the possibility of it being caused by any active processes inside the VM, I eventually pinned it down to the way system timers are used.
What seems to be happening is that the Windows kernel polls the system timer constantly, at a rather aggressive rate, which manifests as high CPU usage on the host even though the guest is not doing any productive work. On a large virtualization server, this will obviously burn through a huge amount of CPU for no benefit.
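One way to quantify this from the host, assuming a libvirt setup (win10 below is a hypothetical domain name), is to sample the guest’s cumulative CPU time twice and compare – an affected idle guest accumulates roughly 18 CPU-seconds per minute:

```shell
# Sample the domain's cumulative CPU time 60 seconds apart. For a truly idle
# guest the delta should be near zero; ~18s of CPU time per minute matches
# the ~30% host CPU usage described above. "win10" is a placeholder domain
# name; the command is guarded so it is a no-op on hosts without libvirt.
if command -v virsh >/dev/null 2>&1; then
    virsh cpu-stats win10 --total
    sleep 60
    virsh cpu-stats win10 --total
else
    echo "virsh not available on this host"
fi
```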
The solution is to expose an emulated Hyper-V clock. With all other clock sources, the Windows kernel incessantly polls the timer, but with Hyper-V’s it recognises that this is a bad idea in a virtual machine and starts to behave in a more sensible way.
To achieve this, add a hypervclock timer to the <clock> element of your libvirt XML guest definition:

<timer name='hypervclock' present='yes'/>

The spinlocks enlightenment in the <hyperv> features block is also worth having enabled:

<spinlocks state='on' retries='8191'/>
This gets the VM’s idle CPU usage from 30%+ down to a much more reasonable 1%.
Answer: 8337601 seconds. One second more than 96.5 days.
As previously mentioned, I have been working on SEO and marketing recently, and it is fascinating what insights one gains into the inner workings of the things that power today’s digital world. One such example is figuring out the shortest cache time that passes the Lighthouse / PageSpeed Insights audit.
To keep your Lighthouse / Google PageSpeed Insights happy, you should set your caching time to at least 8337601 seconds. In Apache config, that would mean adding a line like this:
Header always set Cache-Control "max-age=8337601, public"
Why would you want this to be as short as possible, but still long enough to keep Lighthouse happy? Well, you want your page updates to show up as quickly as possible, and you don’t want stale content sticking around in caches, blunting the effect of the optimisations and improvements you make to your site all the time. At the same time, Google has said that page speed is now a ranking factor, and the algorithm they use is related to the one Lighthouse uses.
So, if you want to set your caching time to the minimum that seems to keep Google happy – set it to 8337601 seconds.
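The threshold itself is exactly one second past 96.5 days, which a quick shell sanity check confirms (using 193 half-days to sidestep floating-point arithmetic):

```shell
# 96.5 days = 193/2 days; shell arithmetic is integer-only, so compute
# 193 * 86400 / 2 + 1 rather than 96.5 * 86400 + 1.
seconds=$(( 193 * 86400 / 2 + 1 ))
echo "$seconds"    # prints 8337601
```

Once the Apache directive is in place, you can confirm the header is actually being served with curl -sI https://yoursite.example/ | grep -i cache-control (substituting your own domain).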