In my last blog post, I talked about the ball rendering in Hustle Kings, and how I used a per-pixel ray-cast and pixel coverage value to produce infinitely smooth balls. The pool balls are obviously going to be the primary focal point in any pool game, but it is equally important that the rest of the scene looks just as impressive. There is no point having some parts of your scene look ultra cool if the rest just doesn’t gel together.
I’ve always felt that the overall quality of a rendered scene is governed primarily by its lighting, and this is one thing I was keen to make sure we got right in Hustle Kings. Our pool hall backgrounds are all pretty much static, so pre-baked lightmaps were obviously the way to go. This way, we could make our lighting look as nice as we wanted, knowing that run-time rendering performance was never going to be an issue.
The one performance-related point worth considering here, though, is the format in which the lightmaps are stored. We found it essential to store high-range lighting information, so simple LDR 8-bit-per-channel textures were not an option. However, I did experiment with compressing the high-range lighting information down to an 8-bit LogLuv format. Quality turned out to be very good – we actually use this format to do our HDR scene rendering and MSAA at the same time, something the PlayStation 3 hardware would not be capable of if we were rendering to native floating point surfaces. However, as it turned out, rendering performance was better when sampling straight from RGBA16F lightmap textures – most of our shaders are ALU bound, so it is worth spending the extra texture bandwidth to save the ALU operations involved in decoding the texture. We use more texture memory this way, but that didn’t turn out to be an issue for us.
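To give a flavour of the trade-off, here is a simplified sketch of the log-luminance half of a LogLuv-style encoding – not the exact mapping we used, the chroma channels are omitted, and the `LOG2_RANGE` constant is an assumption for illustration – showing the kind of per-pixel decode work that sampling a packed format costs:

```python
import math

# Representable luminance range in log2 units (an assumption for this sketch).
LOG2_RANGE = 64.0

def encode_log_luminance(y):
    # Map log2(Y) from [-LOG2_RANGE, LOG2_RANGE] into a 16-bit integer,
    # split across two 8-bit texture channels.
    le = (math.log2(max(y, 2.0 ** -LOG2_RANGE)) + LOG2_RANGE) / (2.0 * LOG2_RANGE)
    q = min(int(le * 65535.0 + 0.5), 65535)
    return q >> 8, q & 0xFF                  # high byte, low byte

def decode_log_luminance(hi, lo):
    # The extra ALU work a shader pays on every sample of a packed lightmap.
    q = (hi << 8) | lo
    return 2.0 ** ((q / 65535.0) * (2.0 * LOG2_RANGE) - LOG2_RANGE)
```

Even split across two 8-bit channels like this, the log encoding keeps relative luminance error well under a percent across the whole range – quality was never the problem, the decode cost was.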
In terms of actually generating the lightmaps: using some internally developed tools, the artists first generate a unique UV mapping for each object in the scene, ensuring adequate space for padding (so that bilinear filtering doesn’t introduce unwanted edge artefacts). The artists have full control over prioritising lightmap texture space for certain parts of the mesh. Once we have this mapping, Mantis (our proprietary engine) is used to generate the lighting information using the in-game render code.
We don’t use point or directional lights for any of our lighting – the artists light the scene with modelled luminous surfaces and textures, and then we use a global illumination model to distribute this light around the scene. The light above the pool table is modelled, textured, and given a luminosity value, and the same goes for all of the neon signs, and spot lights etc. Lightmapping is then simply a process of evaluating how much of the lighting emitted from these luminous surfaces contributes to each point on each surface in the scene.
To generate a lightmap for an object, the first stage is to render the world space positions and normals out into separate textures, in lightmap space. That is, the vertex shader sets the output position to the lightmap UV; the interpolated world space position is written to one render target, and the normal map is evaluated, transformed into world space, and written to a second render target. I then scan through all pixels that were written to in these textures, and for each one, render a cube-map centred at the corresponding world space position on the surface being processed. This cube-map is then used to determine the scene lighting contribution at the point in question.
Collapsing the cube-map down to a single lighting value involves performing a weighted average over all pixels in the cube-map. The weighting values are driven by a cosine falloff from the world space normal, multiplied by a constant factor. Ideally, I would average samples uniformly distributed over a sphere, but since this is a cube, there are effectively more samples per unit solid angle towards the corners, so I have to reduce the weighting of those samples accordingly.
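A minimal sketch of that weighted average might look like the following – the face naming, axis conventions and the `faces` data structure are assumptions for illustration, but the two weighting terms (per-texel solid angle and cosine falloff) are the heart of it:

```python
import math

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def face_direction(face, u, v):
    # Map a texel at face coordinates (u, v) in [-1, 1] to a unit world direction.
    # The layout here is an arbitrary convention chosen for this sketch.
    d = {
        "+x": (1.0, -v, -u), "-x": (-1.0, -v, u),
        "+y": (u, 1.0, v),   "-y": (u, -1.0, -v),
        "+z": (u, -v, 1.0),  "-z": (-u, -v, -1.0),
    }[face]
    n = math.sqrt(d[0] ** 2 + d[1] ** 2 + d[2] ** 2)
    return (d[0] / n, d[1] / n, d[2] / n)

def cubemap_irradiance(faces, normal, size):
    # faces: dict face name -> size x size grid of RGB radiance samples, standing
    # in for the cube-map rendered at the surface point being lightmapped.
    total = [0.0, 0.0, 0.0]
    weight_sum = 0.0
    for face, texels in faces.items():
        for j in range(size):
            for i in range(size):
                u = (2.0 * (i + 0.5) / size) - 1.0   # texel centre in [-1, 1]
                v = (2.0 * (j + 0.5) / size) - 1.0
                d = face_direction(face, u, v)
                # Solid angle of this texel: dA * cos(theta) / r^2 with
                # r = |(u, v, 1)| -- corner texels cover less of the sphere,
                # so they receive a smaller weight.
                solid_angle = (2.0 / size) ** 2 / (u * u + v * v + 1.0) ** 1.5
                w = solid_angle * max(dot(normal, d), 0.0)   # cosine falloff
                for c in range(3):
                    total[c] += w * texels[j][i][c]
                weight_sum += w
    return [t / weight_sum for t in total]
```

A useful sanity check on the weighting is that a cube-map filled with a constant colour must average to exactly that colour, whatever the normal.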
Once a base lightmap has been generated for all objects, we have effectively simulated a single bounce of light around the scene. Running the same process again, using the lightmaps generated in the first pass, gives us a second bounce of light. So for the first lighting pass, all surfaces rendered into the cube-map are black, except for those that are luminous, but for subsequent passes, we use the lightmaps generated in the previous pass for rendering the scene into the cube-maps. The artists are able to run the process as many times as they like until they are happy with the final lighting solution.
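The bounce iteration itself reduces to a very small loop. The sketch below is hypothetical – a `gather` callback stands in for the cube-map render and weighted average – but it shows the structure: each pass re-lights every patch from the previous pass's lightmaps, with luminous surfaces re-injecting their emission every time:

```python
def bake_bounces(emission, gather, passes):
    # emission[i]: self-emitted light of patch i (zero except luminous surfaces).
    # gather(lightmap, i): light arriving at patch i from the rest of the scene,
    # evaluated against the previous pass's result.
    lightmap = list(emission)        # pass 0: only luminous surfaces contribute
    for _ in range(passes):
        lightmap = [emission[i] + gather(lightmap, i)
                    for i in range(len(emission))]
    return lightmap
```

Each extra pass adds one more bounce of indirect light, and the result converges because every bounce loses energy.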
One final point worth mentioning is regarding the cloth colours on the pool table. The player is able to change this to one of a number of pre-set colours, and the selected cloth colour needs to influence the light bouncing around the scene. One initial idea was to generate a separate set of lightmaps for each cloth colour, but this seemed quite restrictive. What I ended up doing was firstly determining which surfaces were affected by light bounced from the table, and then rendering two lightmaps for each of these surfaces. One lightmap would be with a completely black cloth, and the other with a pure white cloth. At load time, I then simply do an RGB interpolation between these two lightmaps based on the chosen cloth colour.
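That load-time blend is just a per-texel, per-channel lerp between the two baked extremes. A sketch, with a hypothetical lightmap layout (a flat list of RGB tuples):

```python
def blend_cloth_lightmaps(black_lm, white_lm, cloth_rgb):
    # result = black + cloth * (white - black), per texel, per channel:
    # a black cloth keeps only the light that never touched the table,
    # a white cloth adds the full bounced contribution.
    return [
        tuple(b[c] + cloth_rgb[c] * (w[c] - b[c]) for c in range(3))
        for b, w in zip(black_lm, white_lm)
    ]
```

Because the blend is linear per channel, any cloth colour the player picks falls out of just the two baked lightmap sets.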
Now, the balls and the pool cue are not static – the lighting on these objects needs to change as they move around the scene, so we cannot use lightmaps here. However, it is imperative that the balls gel with the rest of the scene, so we use pretty much the same approach to generate these lighting values as we do to generate the lightmap data.
Firstly, I take the extents of the volume within which the balls and cues can appear, and subdivide it into a grid of cells. At the corner of every cell, I render a cube-map of the scene with final lightmaps to determine the lighting at that point, and then encode this down to a spherical harmonic. In areas of the grid where the lighting derivative across a cell exceeds a threshold, I subdivide that cell further. Whenever a grid cell completely occupies dead space (i.e. is completely embedded inside the table geometry), I stop dividing that cell. This grid gets collapsed into a kd-tree, which is then optimised. The optimisation process is simply a matter of recursively pruning any leaf nodes whose added corner points have lighting values very close to what would be computed by interpolating the lighting in the parent cell. I do this until the kd-tree occupies no more than a pre-designated amount of memory.
At run-time, we can then compute the lighting at any point within the volume by traversing down the kd-tree to a leaf node, and interpolating the lighting between the 8 corner points, giving us a single spherical harmonic.
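The blend at the leaf is a standard trilinear interpolation of the eight corner samples, applied component-wise to the spherical harmonic coefficients. A sketch, with a hypothetical `corners` layout keyed by the cell-local (x, y, z) corner index:

```python
def lerp(a, b, t):
    # Component-wise lerp over a list of SH coefficients.
    return [ai + t * (bi - ai) for ai, bi in zip(a, b)]

def trilinear_sh(corners, fx, fy, fz):
    # corners[(i, j, k)] -> SH coefficient list at each of the 8 cell corners;
    # (fx, fy, fz) are the fractional coordinates of the query point in the cell.
    c00 = lerp(corners[(0, 0, 0)], corners[(1, 0, 0)], fx)
    c10 = lerp(corners[(0, 1, 0)], corners[(1, 1, 0)], fx)
    c01 = lerp(corners[(0, 0, 1)], corners[(1, 0, 1)], fx)
    c11 = lerp(corners[(0, 1, 1)], corners[(1, 1, 1)], fx)
    return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz)
```

Interpolating the coefficients directly is valid because spherical harmonic lighting is linear – the blended coefficients describe the blended lighting.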
That pretty much sums up the lighting system we use in Hustle Kings. All that really remains is the shadows cast from the balls, and I’ll cover this in some depth in my next article.