Retro Tech: Quake 3 Light Volumes

June 17, 2016

In preparing the prototype for my new game, Robyn HUD, I wanted to pull together something playable as quickly as possible. Unlike my previous Sleuthhounds games, Robyn HUD is rendered in real time 3D, which of course requires a 3D level and 3D objects to populate it. Having done some mod and map work many moons ago on Quake III: Arena, I was already familiar with the GTKRadiant mapping tool needed to create 3D levels. Looking for the quickest path to creating my prototype, it made sense to use GTKRadiant to build the levels and then load in the BSP level files that GTKRadiant creates.

Loading in the BSP files meant, of course, having to understand the file format in order to extract the data needed to render the level with my own game code. For this, I found two really useful articles on the web. The Unofficial Quake 3 Map Specs by Kekoa Proudfoot describes the different data structures stored in the BSP file and is a good general reference for being able to extract the needed data. Rendering Quake 3 Maps by Morgan McGuire was also useful for providing a good start on how to take the data extracted from the BSP file and actually render it so the geometry showed up in the places it was supposed to.

Even though the articles I found were great starting points, I did have to do some spelunking of my own into the data to be able to render levels properly. Specifically, I had to puzzle out how the lighting information stored in the BSP files is meant to be used. What follows is what I’ve learned from my own experiments. It works well for me but is in no way an official description of how id Software (the developers of Quake III) intended for the lighting data to be used.

[Test level (WIP) and entities (also WIP) without any lighting effects.]
Test level (WIP) and entities (also WIP) without any lighting effects.

Static Level Lighting – Lightmaps and Vertex Colors

With that disclaimer out of the way, the first area of lighting I turned my attention to was the lightmaps and vertex colours stored within the BSP file. To get the most performance possible when the game is running, Quake III does not dynamically light the entire level. Instead it uses a combination of vertex colours and what are known as lightmaps to create areas of light and shadow.

When creating any 3D object, you typically give that object a texture, a skin that wraps over that object to give it more detail than what you would actually want to model in 3D. For example, if you were creating a wooden chest, you wouldn’t want to model all of the wood grain. Instead, you would make a 2D picture of the wood grain and apply it to the different surfaces of the object to texture it.

Lightmaps are basically another texture that gets applied to the surfaces of a level. When GTKRadiant creates its BSP files it also generates these lightmaps. In effect, it creates extra textures that, when blended with the real textures of the level, produce a final look that makes the level appear as though it’s been lit (when in fact it’s just had two layers of texturing applied to it).

All good in theory, and Kekoa Proudfoot’s article briefly touches upon the lightmaps, indicating that each lightmap is stored in the BSP file as a texture 128x128 pixels in size, with each pixel comprising three bytes for the red, green, and blue colour values of the light or shadow at a given point in the lightmap.
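For the curious, here’s a minimal sketch of slicing that lump up. The function name and parameters are my own; in a real loader the offset and length would come from the BSP’s lump directory (the lightmaps live in lump 14 per Proudfoot’s spec):

```python
LIGHTMAP_SIZE = 128 * 128 * 3  # 49,152 bytes per 128x128 RGB lightmap

def split_lightmaps(data, offset, length):
    """Slice the raw lightmap lump into individual lightmap textures.
    'offset' and 'length' are assumed to come from the lump directory."""
    return [data[offset + i * LIGHTMAP_SIZE:
                 offset + (i + 1) * LIGHTMAP_SIZE]
            for i in range(length // LIGHTMAP_SIZE)]
```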

Ah, I thought, this will be easy. All I have to do is load that data into textures and apply them to the appropriate surfaces. Here was the first result:

[Test level with lightmaps applied directly from BSP file.  Too dark.]
Test level with lightmaps applied directly from BSP file. Too dark.

In general, this first result seemed really dark to me, and clearly there was more to do than just the lightmaps. As you can see, the desk in the foreground is one of the few areas that is quite brightly lit, which didn’t seem right to me.

I knew that in addition to the lightmaps, the BSP data also stored vertex colours for all the vertices in the level. I figured that must mean that a combination of the lightmaps and the vertex colours would give the correct lighting result. So I changed the vertices to render with the colours as indicated in the BSP file rather than just as white vertices as I had been previously doing. Here’s what happened:

[Test level with lightmaps and vertex colours applied directly from BSP file.  Way too dark!]
Test level with lightmaps and vertex colours applied directly from BSP file. Way too dark!

Really dark. Really, really dark. So dark, in fact, that it was almost impossible to make out any detail. Not great when you’re trying to experiment with the playability of a game. My first thought was that I simply hadn’t made the lights bright enough in GTKRadiant. So I went back and brightened up all the lights. And everything was still really dark. So I brightened the lights again. Eventually I figured I had brightened the lights to the point where each one was like a sun blazing away on the surface of the earth. And the level was still really dark. To illustrate the point, here’s what the lightmap from my test level looked like:

[Raw lightmap from BSP file.]
Raw lightmap from BSP file.

As you can see, even at its brightest, it’s still a rather mushy sort of grey. I had an idea forming at the back of my mind, and to help test it out, I took that coloured but dark lightmap and converted it to be just its luminance so I could more easily tell how bright the brightest areas were without having to worry about the color, like so:

[Lightmap converted to luminance (brightness) values.]
Lightmap converted to luminance (brightness) values.
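To reproduce that check, the luminance conversion is just a weighted sum of the colour channels (here using the common Rec. 601 weights; lightmap pixels assumed to be (r, g, b) tuples):

```python
def max_luminance(pixels):
    """Peak perceptual brightness of a set of RGB pixels, using the
    standard Rec. 601 luma weights."""
    return max(0.299 * r + 0.587 * g + 0.114 * b for r, g, b in pixels)
```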

I found that as far as the luminance went, the brightest area seemed to only get to around 61 or 62 or so. That was significant. Each pixel in a lightmap is made of three bytes of data representing the red, green, and blue colour components. A byte, for those who have forgotten, is a value that ranges from 0 to 255. I had assumed that since the BSP file was storing bytes that the lightmap colour data would be in that same range, 0 to 255. However, seeing the brightest values as being around the 62 mark, I came to wonder if maybe the light data had been squashed to be represented on a scale of 0 to 63 instead (0 to 63 is 64 different values, which require 6 bits to represent, which is why seeing values trending towards that point was significant).

As an experiment, I changed my lightmap building code and the vertex colouring code to rescale the colour values. In essence for each of the three colour components I did the following:

component = component × 255 / 63

This scales the colour values so that instead of being in the range 0 to 63 they’re now in the range 0 to 255. As the following screenshot shows, rescaling these values suddenly got things working properly.
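In code, that rescale is a one-liner (the clamp is my own precaution in case a stored value ever strays above 63):

```python
def rescale(component):
    """Expand a 0..63 colour component to the 0..255 range,
    clamping to guard against out-of-range input."""
    return min(component * 255 // 63, 255)
```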

[Test level with lightmaps and vertex colours rescaled to 0 to 255 range.  Just right.]
Test level with lightmaps and vertex colours rescaled to 0 to 255 range. Just right.

Dynamic Entity Lighting – Light Volumes (Ambient Light)

Getting the level itself to render well with its lightmaps is only half the battle when it comes to applying in-game lighting. The lightmaps work for the static geometry of the level, but something more is needed when you start putting entities such as characters and other objects into the level. If you don’t apply any sort of lighting effects to those entities they really stand out from the background, like so:

[Lit test level with unlit (WIP) entities.]
Lit test level with unlit (WIP) entities.

Within the BSP file, in addition to the lightmaps for the static geometry, GTKRadiant also stores light volumes to help light the dynamic entities you may have moving around the level. As per Kekoa Proudfoot’s documentation, these light volumes describe the ambient light and the directional lights in different parts of the level. The documentation does not go into the details of how to apply this information to dynamic entities, so there was a bit of head scratching for me here.

First of all, what exactly are light volumes? Light volumes are another little trick that Quake III uses to help speed up the lighting of entities within a level. At a high level, a given light volume describes how entities that are within that light volume should be lit. So if you have a light volume just to the left of a bright blue light, for example, then that light volume will indicate that entities placed within that volume should be lit blue with the blue light coming from the right.

So, second question: how are light volumes arranged within a level? In the BSP file, light volumes are strung out in a one-dimensional array. However, levels are obviously three-dimensional. These three dimensions are all represented in the one-dimensional array as described in Proudfoot’s documentation.

Each light volume is a block that is 64x64 units on its base and 128 units in height (these units match the sizes used in GTKRadiant so you don’t have to do any conversion as far as that’s concerned). To describe what this means, let’s look at the overall dimensions of my test level:

  • –384 to 384 going left to right (768 width)
  • –384 to 400 going from front to back (784 depth)
  • –128 to 208 going from bottom to top (336 height)

As per the calculations that Proudfoot provides, this works out to be a level that has a width of 13 light volumes, a depth of 13 light volumes, and a height of 3 light volumes. Here's a shot of the test level with the static lighting turned off and showing the different light volumes.
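That per-axis calculation, following the formula in Proudfoot’s spec, can be sketched like this (the function name is mine; the step is 64 units for width and depth, 128 for height):

```python
import math

def volume_count(lo, hi, step):
    """Number of light volumes spanning one axis of the level,
    per the calculation in Proudfoot's spec."""
    return math.floor(hi / step) - math.ceil(lo / step) + 1
```

Plugging in the test level’s bounds gives the 13 x 13 x 3 grid mentioned above.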

[Large boxes show the ambient colour and small boxes show the directional colour and direction of the light volumes.]
Large boxes show the ambient colour and small boxes show the directional colour and direction of the light volumes.

Coming back to the BSP format, the light volumes are arranged from left to right, back to front, and bottom to top.
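If I’ve read that ordering right, the 3D grid coordinates flatten into the 1D array with x varying fastest, then depth, then height. A sketch (names are mine):

```python
def lightvol_index(ix, iy, iz, nx, ny):
    """Flatten 3D light-volume grid coordinates into an index
    into the BSP's 1D light volume array: x fastest, then y, then z."""
    return ix + iy * nx + iz * nx * ny
```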

From my experiments, it seems that the light information stored within a light volume is calculated based on the light that should illuminate the center of the base of the volume. Light typically comes from overhead, so GTKRadiant calculating the light based on the bottom or floor of a light volume makes sense. This is especially important when it comes to directional lighting.

Important: GTKRadiant works out the vertical light volumes based on the lowest point in your level. So if the lowest point is at –32, for example, then your light volumes will start at –32, 96, 224, 352, etc. This is important to know when building your map, because if you build maps such that the tops of floors aren’t at some multiple of 128 away from the lowest point in your map, then entities may not be lit as you expect. A light that you place in your map near the ceiling, may actually appear near the floor in a light volume if you haven’t lined things up properly. This may have the effect of lighting your dynamic entities from below instead of from above, as an extreme example.

Once I had the light volume ordering sorted out, I started adding light effects to the entities in my level. First I added in the ambient lighting, the general level and color of light contained within a light volume.

[Entities lit with 0 to 63 range ambient light.  Too dark.]
Entities lit with 0 to 63 range ambient light. Too dark.

Whoops! It turns out that the light values stored in the light volumes are, like the lightmaps, also on a scale of 0 to 63 instead of 0 to 255. Applying the same light rescaling as for lightmaps corrected this to produce the proper ambient lighting for dynamic entities.

[Entities lit with ambient light rescaled to 0 to 255 range.  Better.]
Entities lit with ambient light rescaled to 0 to 255 range. Better.

Dynamic Entity Lighting – Light Volumes (Directional Light)

The last component of lighting to bring in from the BSP file was the directional lighting (which I checked and found to also be in the range of 0 to 63 and so again requiring rescaling to 0 to 255). Directional lighting is a little trickier than ambient lighting because, in addition to the light colour, directional lighting also has the direction to deal with.

Each light volume stores information for one directional light. The idea here is that GTKRadiant, when it creates the BSP file, looks at all the different lights in the environment. For each light volume it figures out what the overall directional light is going to be. For example, if you have a green light to the left of a volume and a red light to the right but twice as far away, then the overall light direction will be towards the right (because the green light on the left is closer so has more influence on the direction) and the colour will be a sort of greenish brown (again because the green light is the closer light but we still have some aspects of red coming in).

Light volumes store the direction of their directional lighting as two single-byte values, each representing an angle, in degrees, along which the light is directed. Degrees, of course, can range from 0 to 360, but byte values range from 0 to 255. So here we first have to rescale the stored degree values in a similar manner to rescaling the colour amounts. In this case:

degree = degree × 360 / 255

Once you have the degrees in the proper range, you then need to use them to determine what direction your directional light is pointing in 3D space. This took a bit of playing around to get something that worked properly.

First I defined two unit vectors:

  • A view vector pointing along the positive X axis (1, 0, 0).
  • An up vector pointing along the positive Y axis (0, 1, 0).

From my experiments, I found this to work as an initial orientation for light within a light volume before it had been rotated to the proper angle.

As far as rotating the light direction was concerned, I made use of quaternions (which are way beyond the scope of this article) to get the correct final direction.

First, I performed a yaw rotation (rotating around the vertical axis) by the second angle stored for the light volume. If you have a light to the right of a light volume, then this rotation turns the view vector to face the left, away from the light (the up vector is still pointing up at this point).

I then performed a pitch rotation (rotation around the left-right horizontal axis) to angle the light up or down. I found that I had to subtract the first angle from 270 to get the pitch correct for when a light was directly above a light volume. In such a situation, the first angle stored is actually set to 0. Recall that the initial light vector was pointed along the X axis, but in this case we ultimately want it pointing straight down. It’s therefore necessary to pitch down effectively by 90 degrees (I pitched in the positive direction which required 270 degrees, but I could as easily have just pitched –90 degrees instead).

After performing the necessary rotations, the view vector now points along the direction the light is travelling. Now, in my game code I’m using OpenGL, and directional lights in OpenGL are actually set up by providing a vector that points towards where the light is coming from instead of away. To handle this, the final step is to invert the view vector (basically switch the sign from positive to negative and vice versa on each component of the view vector).
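Putting those steps together, here’s a plain-trigonometry sketch equivalent to my quaternion version (axis conventions as described above: view along +X, up along +Y; the function name is mine, and this is my reconstruction rather than anything official):

```python
import math

def lightvol_direction(pitch_byte, yaw_byte):
    """Convert a light volume's two angle bytes into a unit vector
    pointing toward the light source (OpenGL convention), assuming
    a Y-up world with the initial view vector along +X."""
    # Rescale each byte from 0..255 to 0..360 degrees; the pitch is
    # taken as 270 minus the first stored angle, as described above.
    pitch = math.radians(270.0 - pitch_byte * 360.0 / 255.0)
    yaw = math.radians(yaw_byte * 360.0 / 255.0)
    # Pitch the view vector (1, 0, 0) up or down...
    x, y, z = math.cos(pitch), math.sin(pitch), 0.0
    # ...then yaw around the vertical (Y) axis.
    x, z = x * math.cos(yaw), -x * math.sin(yaw)
    # Invert so the vector points toward the light, not away from it.
    return (-x, -y, -z)
```

For a light directly overhead (first angle byte 0), this yields a vector pointing straight up toward the light, which is what OpenGL’s directional lights expect.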

Once you have the directional colour and direction extracted from the light volume data you can create a single light source with those values that will light all the entities that fall within that given light volume (obviously, you’d look up different colours and directions for entities in different light volumes). For me, the result was something like this:

[Entities with ambient and directional lighting.  Just right.]
Entities with ambient and directional lighting. Just right.

[Unlit level and entities for comparison.]
Unlit level and entities for comparison.

Better Dynamic Entity Lighting – Interpolation

Lighting entities based on the light volumes from the BSP file really helps them fit into the level. However, once I got everything running I did find one last lighting issue I wanted to deal with. The light volumes in the BSP are fairly large, being 64x64 units on their base. This means that the lighting values can change quite significantly from one light volume to an adjacent volume.

This exhibits itself when an entity moves between light volumes. All of a sudden the lighting on the entity will pop to a different colour and/or direction. What would be better would be for the lighting effects to smoothly change as an entity gets farther from the center of one light volume and closer to the center of the next.

[The entity (dot) should be mostly green but still increasingly red, yellow, or blue as it moves towards those light volumes.]
The entity (dot) should be mostly green but still increasingly red, yellow, or blue as it moves towards those light volumes.

If we look down upon our map, we can consider the light volumes as a two dimensional grid (they’re actually in three dimensions, of course, but it’s easier to consider and describe the two dimensional case). Each light volume is 64x64 units large. It’s likely that most of the time, an entity will be somewhere in between the centers of adjacent light volumes rather than being dead center on one light volume. In point of fact, when we’re considering the two dimensional case, the entity is going to be somewhere between the centers of four light volumes as shown.

Based on the four light volumes, what we want to do is combine the amount of light from the different volumes such that as we move closer to the center of one, it becomes the dominant light. In essence, we want to interpolate the light factors between the four volumes.

There are many articles out there on interpolation that cover the subject better than I can in this space (given that this blog post has already gotten quite long). So I won’t go into all the gory details. Basically, you need to figure out how much light each volume is contributing based on where the entity is.

The light volumes are each 64x64 units big. If you also think about the entity itself as having a 64x64 unit square around it, then you can determine how much area of that square overlaps a given light volume. For example, if the entity is standing right at the center of a volume, then its square will be 100% covered by that volume, and so 100% of that volume’s light should be applied.

If the entity is standing at the intersection of four light volumes, then a quarter of the entity’s square will overlap each of those volumes. So 25% of the light from each square should be applied.
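Those overlap areas work out to standard bilinear weights. Here’s a sketch for the two-dimensional case (names and the grid-origin parameters are illustrative, not from the BSP spec):

```python
import math

def lightvol_weights(px, pz, min_x, min_z, cell=64.0):
    """Bilinear weights over the four light volumes whose centers
    surround the point (px, pz) on the horizontal plane."""
    # Work in "center space": volume (i, j) has its center at
    # (min_x + (i + 0.5) * cell, min_z + (j + 0.5) * cell).
    u = (px - min_x) / cell - 0.5
    v = (pz - min_z) / cell - 0.5
    i, j = math.floor(u), math.floor(v)
    fu, fv = u - i, v - j
    return {
        (i,     j):     (1 - fu) * (1 - fv),
        (i + 1, j):     fu * (1 - fv),
        (i,     j + 1): (1 - fu) * fv,
        (i + 1, j + 1): fu * fv,
    }
```

Dead center on a volume, one weight is 1 and the rest are 0; at the intersection of four volumes, each weight is 0.25, matching the overlap-area intuition above.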

Other methods also exist for interpolating the light between adjacent light volumes. The important thing is that once that’s implemented, then the lighting applied to your dynamic entities will smoothly change as those entities move between light volumes.

Whew! That was a long one. And maybe nobody’s too interested anymore seeing as Quake III is almost twenty years old. However, I suspect the fundamentals of id Software’s engines (and their mapping tools) have remained the same, so anyone playing around with more modern tools in those chains may find this useful. And besides, I wanted to capture all my notes somewhere in case I ever needed to refer back to them. Huzzah!