It’s a good read, and it clarifies a lot about G-Sync and Nvidia’s intentions in all of this. I decided to write a quick summary of events anyway because, as I’m sure most graphics programmers have, I’ve had to look into refresh-rate work-arounds and technologies personally throughout my journey within computer graphics. So we’ll cover everything from the ground up.
On one cold night not too long ago, as I pondered my upcoming PC build (a build that I’ve been dreaming of and planning for the past two years, that has yet to happen, and inevitably won’t for the foreseeable future), I wondered whether I could create my own benchmark. And to keep it simple, one targeted primarily at CPU usage.
I thought of a few ways to test my aging CPU’s little remaining mettle (for reference, it’s a Core 2 Duo E6600. And yes, I know… it sucks). But without getting too much memory or HDD I/O activity involved, I eventually decided on the ever-reliable prime numbers. “Easy enough,” I thought to myself; it shouldn’t be more than a few lines of code.
And surprisingly, for once, I was right.
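It really is only a few lines. A minimal sketch of what such a CPU-bound benchmark might look like is below; the function names are my own, and trial division is chosen deliberately because it keeps the work entirely in the ALU with essentially no memory or I/O traffic.

```cpp
#include <chrono>
#include <cstdint>
#include <iostream>

// Trial division: the simplest primality check, deliberately CPU-bound.
bool isPrime(uint64_t n)
{
    if (n < 2) return false;
    for (uint64_t d = 2; d * d <= n; ++d)
        if (n % d == 0) return false;
    return true;
}

// Count primes below `limit`, printing the elapsed wall time so the
// same limit can be compared across machines.
uint64_t countPrimesTimed(uint64_t limit)
{
    auto start = std::chrono::steady_clock::now();
    uint64_t count = 0;
    for (uint64_t n = 2; n < limit; ++n)
        if (isPrime(n)) ++count;
    double seconds = std::chrono::duration<double>(
        std::chrono::steady_clock::now() - start).count();
    std::cout << count << " primes below " << limit
              << " in " << seconds << " s\n";
    return count;
}
```

Calling `countPrimesTimed` with a limit tuned to your CPU (a million or so on older hardware) gives a crude but repeatable single-core figure.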
I like Visual Studio 2012, I really do. But deep down inside, somewhere, I also really don’t like it.
When VS2012 was released I was eager to explore and experiment with its inner depths. Compelling additions to the .NET Framework 4.5, improvements to semantic colourization (slowly catching up to Xcode), and an actual editing environment for HLSL shaders – they were all so, so tempting. But even with all that, even with all the squeaky IDE goodness that would’ve made my programming life far easier and more awesome, I simply couldn’t get past that UI. I couldn’t. I tried, but I couldn’t. Why is everything monochrome? Why are the new icons small and difficult to spot (without colours they all look practically alike)? And, for some bizarre reason, why are all the menus UPPERCASE?
According to Andrew Binstock in his review of Visual Studio 2012:
Speaking off the record, a Microsoft employee with the Windows 8 team confessed that the Metro team was “very surprised” when they saw how the Visual Studio team had interpreted the Metro guidelines.
So letting that quote sink in, and without risking taking this post any further into rant-land, I shall explain what it is I did to heal myself of any further eye-related injury.
The use of const is something of a mythical being for some programmers. I’ve seen it used in different ways: some throw it around a lot because they know about the ‘good’ but don’t fully grasp the ‘when’. Then there are those who don’t use it at all because it feels unnecessary or hard to get used to. I was a mix of both during my cowboy coding days, but since working on more business-oriented projects, const-correctness has been beaten into my programming instinct many times over. You’ll quickly realise its value when working across teams and very large code bases.
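The ‘when’ is easier to see in a small sketch. The class and function names below are invented for illustration; the point is that const on member functions and reference parameters turns read-only intent into a compiler-enforced contract:

```cpp
#include <string>
#include <vector>

class Player
{
public:
    // A const member function promises not to modify the object,
    // so it can be called through a const reference.
    const std::string& name() const { return name_; }

    void rename(const std::string& newName) { name_ = newName; }

private:
    std::string name_ = "unnamed";
};

// Taking by const reference documents (and enforces) read-only access:
// only const member functions are callable on the elements.
std::size_t totalNameLength(const std::vector<Player>& players)
{
    std::size_t total = 0;
    for (const Player& p : players)
        total += p.name().size();   // p.rename(...) would not compile here
    return total;
}
```

Across a large team, that “would not compile” line is the whole value: nobody has to trust a comment saying the function doesn’t mutate its argument.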
Since finishing my IK research project a few months back I’ve been slowly developing upon the original code base for experimentation purposes, and needless to say it has grown to be very large. I began thinking about usability, or more specifically re-usability in different projects. I had written code that I could find many uses for, and considering the trouble of re-writing sections every time I wanted to experiment with new methods or try a new game idea, the thought of compiling this into a nice simple library crossed my mind.
So I figured I should write about some of the experiences I’ve had with software engineering/programming interviews in the past. I’ve been lucky enough to sample a number of different ‘flavours’ of interview. Through my travels I quickly learned that interviewing is itself just as much a skill as knowing C++ or riding a horse. I do scratch my head at some of the practices found in interviewing; I know it may seem fickle of me to question them, but I’ll ramble on about some of them later nonetheless. Keep in mind there’s probably more to interviewing than what I’ve written; I’m simply drawing on past experiences. Also, unlike senior or regular engineering positions, there’s not much text on junior-level interview topics, so I suppose this will add (hopefully in a good way) to that area. Let’s jump right in, shall we?
Software companies like assurances when they hire new developers; they want to make sure you know (and I mean know) the innards of your code. And as I began preparing for my first ever interview long ago, this hit me pretty hard. Thing is, this interview was no small matter. It was an SDET position at Microsoft, and I was being flown out from the UK to their HQ in Redmond, Washington, so it was a pretty big deal. Microsoft are known to be notoriously difficult during their interview process, and in the weeks leading up to the interview, ‘scared’ is a very big understatement of how I was feeling in my overly fancy Bellevue hotel room a handful of days prior.
This project was done as part of my honours-stage research paper. The overall idea was born from realizing how little flexibility real-time animation currently has; while animation blending, ragdoll physics, and simple IK re-targeting techniques are commonplace, little else in the way of full-character motion generation existed. The benefits of such a notion were enormous from both a game-design and a real-time perspective. So I decided to investigate current procedural animation techniques, and developed a procedural animation system of my own, which I later named IKAS (Inverse Kinematic Animation System).
Developed to handle the concepts of partial truth and human reasoning, fuzzy theory has found significant use in artificial intelligence. The application of fuzzy concepts has been widespread, ranging from machine learning to animation.
This demo shows the implementation of a fuzzy inference system and value defuzzification. It allows numerous options to be varied in order to combine different types of fuzzy theory methods. Implication and output methods can be specified, including either centroid or a modified high-value defuzzification method, as well as input/output variables and rule sets.
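To give a feel for the defuzzification step, here is a minimal sketch of the centroid method (the names and sampling scheme are my own, not taken from the demo’s code): the crisp output is the membership-weighted average of sample points over the aggregated output set.

```cpp
#include <vector>

// Centroid defuzzification over a sampled membership function.
// `xs[i]` is a sample point of the output variable's domain and
// `membership[i]` the aggregated (e.g. clipped) membership at that point.
double centroid(const std::vector<double>& xs,
                const std::vector<double>& membership)
{
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < xs.size(); ++i)
    {
        num += xs[i] * membership[i];   // weighted position
        den += membership[i];           // total membership mass
    }
    return den > 0.0 ? num / den : 0.0; // 0 if the output set is empty
}
```

For a symmetric triangular output set the centroid lands on the peak, which is a handy sanity check when wiring up an inference system.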
Bezier curves, or curves in general, are arguably among the most useful tools in graphics programming. They have so many practical applications, particularly in games, that they’re almost always used one way or another: camera movement, animation, motion paths – the possibilities are endless. The demo was built upon a framework that I’ve used for most of my DirectX projects; it has sustained heavy iteration through the years and has served its purpose very well, all things considered.
This demo was a quick experiment in Bezier curve manipulation: rendering composite Bezier curves and then animating an object along the curve at a speed dictated by some simple physical properties. The user can place control points for a Bezier curve around a small level, then tell the application to run, and a spaceship will fly along the user-defined Bezier curve at velocities dependent on the incline of the slope.
W, A, S, D, PageUp and PageDown move the current control point. Selecting “Add control point” creates a new control point for placement while fixing the previous one in place. Hitting “Run” switches to view mode, where a simple model follows the path created by the user. Reset the curve by clicking “Restart.”
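The core of evaluating a cubic Bezier segment is small enough to sketch here. This is De Casteljau’s algorithm (repeated linear interpolation between control points) rather than the demo’s actual code; the `Vec3` type and function names are assumptions for illustration. Composite curves are just segments like this chained end to end.

```cpp
#include <array>

struct Vec3 { double x, y, z; };

Vec3 lerp(const Vec3& a, const Vec3& b, double t)
{
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// De Casteljau's algorithm for one cubic Bezier segment: repeatedly
// interpolate between the four control points until one point remains.
// Numerically stable and easy to extend to composite curves.
Vec3 bezierPoint(const std::array<Vec3, 4>& p, double t)
{
    Vec3 a = lerp(p[0], p[1], t);
    Vec3 b = lerp(p[1], p[2], t);
    Vec3 c = lerp(p[2], p[3], t);
    Vec3 d = lerp(a, b, t);
    Vec3 e = lerp(b, c, t);
    return lerp(d, e, t);       // the point on the curve at parameter t
}
```

Note that `t` is the curve parameter, not arc length, which is exactly why speed along the curve has to be controlled separately (here by the slope-based physics).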
The aim of this demo was to create a 3D scene using the Direct3D API. The application design gives the user dynamic control over numerous variables, such as the wave’s height, speed, size and texture scale. The code uses concepts from Frank Luna’s book ‘Introduction to 3D Game Programming with DirectX 9.0c’.
The application starts with a flat vertex grid and uses displacement mapping to offset each vertex over time. Displacement mapping itself is accomplished by associating each vertex with a displacement vector; these vectors specify a direction in which the vertex shader offsets (i.e. displaces) the vertices by a length stored in the displacement map. The result is what you see below.
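The per-vertex step described above can be sketched on the CPU side like this. This is an illustration of the idea, not the demo’s shader: `sampleHeight` stands in for the displacement-map texture fetch (a scrolling sine product is used here purely as a stand-in), and all names are assumptions.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Stand-in for sampling the displacement map at texture coords (u, v);
// a moving sine product gives a plausible wave-like height field.
float sampleHeight(float u, float v, float time)
{
    const float twoPi = 6.2831853f;
    return 0.5f * std::sin(u * twoPi + time)
                * std::cos(v * twoPi + time);
}

// Offset one grid vertex along its displacement direction `dir`
// (typically the surface normal) by the sampled height times `scale` --
// the same operation the vertex shader performs per vertex per frame.
Vec3 displaceVertex(const Vec3& pos, const Vec3& dir,
                    float u, float v, float time, float scale)
{
    float h = sampleHeight(u, v, time) * scale;
    return { pos.x + dir.x * h, pos.y + dir.y * h, pos.z + dir.z * h };
}
```

Running this over every vertex of the flat grid each frame, with `time` advancing, is what produces the animated water surface.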
My original idea behind this project was to create numerous ‘types’ of water by using different rendering methods to show various styles and levels of realism. However, this was out of reach considering the time frame we had available, so I eventually settled on displacement mapping as my method for this project. Other techniques I investigated were noise-based water simulations, and I am working on implementing these, along with added reflection/distortion effects, within the demo.
All source code for the project can be found here. You will need Visual Studio and DirectX installed to run the project. Most modern gaming PCs should be capable of running the application at a smooth frame rate.