The Sync Wars
It’s a good read, and it clarifies a lot about G-Sync and Nvidia’s intentions in all of this. I decided to write a quick summary of events anyway because, like most graphics programmers, I’ve had to dig into refresh-rate work-arounds and technologies personally throughout my journey within computer graphics. So we’ll cover everything from the ground up.
Starting from the beginning: in all display technologies (even CRT televisions), the vertical blanking interval (VBI), also known as VBLANK, is the gap between the last line of one frame and the first line of the next. During a VBI no picture information is transmitted to the display, which originally gave the electron beam time to return to the top of the screen. If you vary this interval, you also vary the refresh rate of the screen (the number of times per second the display updates its image). If you then synchronize it with the rate at which the GPU renders and outputs frames, you get an “adaptive” refresh rate. There are various names for this (refresh rate switching, variable frame rate, variable VBLANK, dynamic refresh rate, etc.), but they all essentially mean the same thing.
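To make the VBLANK/refresh-rate relationship concrete, here is a minimal sketch using hypothetical display timing numbers (the pixel clock and line counts below are illustrative, not taken from any real panel’s EDID):

```python
# Hypothetical timing for a 1080p-class video mode (illustrative numbers only).
PIXEL_CLOCK_HZ = 148_500_000  # pixels transmitted per second
H_TOTAL = 2200                # pixels per scanline, including horizontal blanking
V_ACTIVE = 1080               # visible scanlines per frame

def refresh_rate(vblank_lines):
    """Refresh rate (Hz) when the VBI spans `vblank_lines` scanlines."""
    v_total = V_ACTIVE + vblank_lines
    return PIXEL_CLOCK_HZ / (H_TOTAL * v_total)

# A 45-line VBI gives ~60 Hz with these numbers; stretching the VBI
# (more blank lines between frames) lowers the effective refresh rate.
print(round(refresh_rate(45), 1))   # ~60.0
print(round(refresh_rate(570), 1))  # ~40.9
```

Stretching or shrinking nothing but the blank-line count is exactly the knob that adaptive-sync schemes turn.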
There are, however, numerous ways to perform this “sync” stage. In Nvidia’s implementation, using the G-Sync module, the display holds the VBLANK open until the next frame is received. The GPU outputs whatever frame rate the hardware can manage, while the monitor handles the “sync” part. AMD’s implementation works the other way around: the VBLANK length is variable, and the driver has to decide what VBLANK length to set for the next frame. It does this via an additional hardware buffer on the GPU that stores frames for controlled release against the varied VBLANK lengths. In AMD’s implementation, the speculation involved in setting the right VBLANK length for the next frame can add some software overhead on the host system. In Nvidia’s implementation that overhead is transferred to the display instead: the G-Sync module must deal with the overhead associated with its additional frame buffer. The display controllers inside Nvidia GPUs do not support dynamic refresh rates the way AMD’s do, hence Nvidia’s deployment of external hardware.
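The contrast between the two strategies can be sketched as a toy simulation (this is not vendor code; the “same as last frame” predictor is a deliberately naive stand-in for whatever heuristics a real driver uses):

```python
# Toy model: the same stream of GPU frame times (ms) handled two ways.
frame_times_ms = [14.2, 18.7, 16.1, 25.3, 15.0]

# Monitor-side (G-Sync-style): the display simply holds VBLANK until each
# frame arrives, so the refresh interval equals the actual frame time.
gsync_intervals = list(frame_times_ms)

# Driver-side: the driver must commit to a VBLANK length *before* the next
# frame finishes. Here we naively guess "same as last frame". If the guess
# is too short, the frame misses the refresh window and waits for the next
# one (modeled as doubling the interval) -- a stutter the monitor-side
# approach avoids by construction.
def driver_side_intervals(times, first_guess=16.7):
    intervals, guess = [], first_guess
    for t in times:
        interval = guess if t <= guess else 2 * guess
        intervals.append(interval)
        guess = t  # predict the next frame takes as long as this one
    return intervals

print(driver_side_intervals(frame_times_ms))
```

The point of the sketch is only where the decision lives: the monitor-side scheme reacts after the fact, while the driver-side scheme must predict, and mispredictions cost it.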
With the differences between the implementations aside, it’s time to tackle some history, so let’s rewind slightly.
Some years ago VESA created a free, improved alternative to HDMI called DisplayPort. (HDMI, for the record, is not a free VESA-approved standard; it was created by a separate for-profit group of corporations that charge a licensing fee for its use.) Laptop manufacturers said ‘cool, a free display interface, let’s get that extended a bit for our use.’ Embedded DisplayPort (eDP) is born, and it includes a power-saving feature for “seamless refresh rate switching.” AMD and Nvidia both support this because it’s in the standard for mobile/laptop components, and because they use largely the same silicon across their mobile, laptop, and desktop chips. Normal desktop monitors do not support it, since eDP is an embedded interface, so nobody notices for years.
VESA keeps doing its thing with regular old DisplayPort: releasing revisions, making it better, and beginning to plan DisplayPort 1.3, which is due to release this year. Nvidia sees this coming, since it’s a VESA member (and, who knows, it quite probably helped develop 1.3 and the logic controllers to support it). Somebody at Nvidia decides: ‘Let’s pitch this old idea, which is about to come to desktops anyway, as our own before everybody else supports it because it’s the freaking standard. Oh, and let’s re-brand it so it looks like Nvidia came up with it too!’ Nvidia reveals G-Sync, and suddenly everyone is amazed at what a stroke of genius variable refresh rates are.
Fast forward roughly six months to CES ’14, where AMD clarifies: ‘But… we’ve had support for variable refresh rates in our GPU hardware since the Radeon 5000 series. In fact, so has Intel (who also makes embedded laptop GPUs). Watch our APU do the exact same thing as G-Sync on a regular laptop, because laptops use the VESA-developed eDP standard for their display hardware.’
Eleven years prior, AMD (known as ATi at the time) had already filed a patent for hardware-based dynamic framerate adjustment, which is also the most likely reason Nvidia does not have the same on-board GPU solution to frame syncing as AMD. AMD implemented it starting with the Radeon 5000 series three years ago, for use with monitors supporting variable VBLANK, something monitor manufacturers (VESA and co.) never actually built into desktop displays.
It was revealed that all Nvidia had done was reproduce an ASIC for the monitor that supports a feature of DisplayPort 1.3 before it became a ratified standard, and, worst of all, lock it down to function only with Nvidia hardware. It’s probable that G-Sync is just eDP or DP 1.3 in camouflage; who knows, since Nvidia avoids talking about its ‘secret sauce’, especially since it’s likely a rehashed open specification.
I can safely predict that most monitors will support DP 1.3 within the next 12-18 months, maybe even sooner given the G-Sync/FreeSync wars turning heads at CES ’14 (which, in some ways, is a good thing for the rest of the industry). In the meantime I shall patiently await the arrival of DP 1.3 support. As a final word for those wanting, or waiting, to use Nvidia GPUs solely for G-Sync: unless you’re an early adopter with little care for budgets and performance/cost ratios, consider that you’ll be paying a premium for what is being implemented as an open standard in the coming year.