Friday 25 June 2010

Cube Lighting

The light volume stuff I've been posting about uses an interesting lighting mechanism. For each point in the volume, a light colour and six intensity values are generated. The six intensity values represent the intensity of the incoming light from six separate directions: +X, +Y, +Z and -X, -Y, -Z.

To shade a point, you locate its position in the light volume, sample the colour and intensity values and, based on the point's normal, shade it according to the incoming light intensity. My first attempt was as follows.

float intensity = dot(saturate(normal), posLuminance) + dot(saturate(-normal), negLuminance);

When I implemented that, however, I got the result below for a scene with equal intensity from all directions. The thin darker bands show the correct intensity, but too much light accumulates where the intensity is interpolated.
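A quick sanity check shows why (sketched in Python rather than HLSL, assuming unit intensity on all six axes):

```python
# Naive ambient-cube evaluation: dot(saturate(n), posLum) + dot(saturate(-n), negLum).
# With unit intensity on all six axes this reduces to |nx| + |ny| + |nz|,
# which is 1.0 for axis-aligned normals but up to sqrt(3) for diagonal ones.
import math

def naive_intensity(n):
    # saturate() clamps each component to [0,1]; for a unit normal this
    # just keeps the positive part, so the two dots sum the absolute values.
    return sum(max(c, 0.0) for c in n) + sum(max(-c, 0.0) for c in n)

axis = (0.0, 1.0, 0.0)
s = 1.0 / math.sqrt(3.0)
diagonal = (s, s, s)

print(naive_intensity(axis))      # 1.0 - correct
print(naive_intensity(diagonal))  # ~1.732 - too bright between the axes
```

That over-accumulation between the axes is exactly the brightening visible where the bands blend.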


You'd expect an even intensity across all surfaces, so I'd definitely messed up somewhere. I remembered that Half-Life 2 used a similar ambient cube lighting system, so I looked into how they were doing things and found a few hints here, in section 8.4.1.

Based on that I ended up with the following solution.

// Square the normal's components while preserving their signs, so the
// six axis weights sum to 1 for any unit normal.
Input.Normal = normalize(Input.Normal);
float3 originalSign = sign(Input.Normal);
float3 nSquared = Input.Normal * Input.Normal;
float3 modifiedNormal = nSquared * originalSign;
float intensity = dot(saturate(modifiedNormal), posLum) + dot(saturate(-modifiedNormal), negLum);
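The same fix sketched in Python (again assuming uniform unit intensities; since nx² + ny² + nz² = 1 for a unit normal, the weights now sum to one regardless of direction):

```python
import math

def ambient_cube(n, pos_lum=(1.0, 1.0, 1.0), neg_lum=(1.0, 1.0, 1.0)):
    # Weight each axis by the squared normal component, keeping the sign
    # to select between the +axis and -axis intensities.
    weights = [c * c * (1.0 if c >= 0.0 else -1.0) for c in n]
    pos = sum(max(w, 0.0) * l for w, l in zip(weights, pos_lum))
    neg = sum(max(-w, 0.0) * l for w, l in zip(weights, neg_lum))
    return pos + neg

s = 1.0 / math.sqrt(3.0)
print(ambient_cube((0.0, 1.0, 0.0)))  # 1.0
print(ambient_cube((s, s, s)))        # ~1.0 - flat shading under uniform light
```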

And the result?





That seems to have done the trick. I now get much smoother results without artifacts. Also, an equal intensity from all directions now produces a completely flat shaded surface as you'd expect.

Note: The images and code in this post are all based on experiments in 3ds Max. I created a MaxScript which baked the lighting results to vertex colours. I normally resort to this method when I get completely stuck :)

Thursday 24 June 2010

Lighting Volumes II

Some more progress on implementing lighting volumes. I'm now able to light objects using a volume texture built from the volume lighting samples mentioned in the previous post. The effect looks much better in motion so I'll post some videos soon. Until then, some screenshots...


BugBackToad model by SonK, which can be found at http://ompf.org/forum/viewtopic.php?f=7&t=752


As before, the static building geometry is lit by a lightmap. The toad creature is being lit by the lighting volume, a representation of which you can see in the last three screenshots. The light volume is 64x32x64 and was generated with 664 rays per sample in about 10 seconds.

Generating each of the samples turned out to be quite easy, but it took me a while to convert those samples into a volume texture I could use. The sample colours are copied from a one-dimensional array straight into the volume texture. In the end, the trick was to create the initial sample positions, layer by layer, in the same order as the volume texture stores them. Obvious really :)
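The ordering trick can be sketched like this (a Python sketch; I'm assuming a slice-major layout where Z selects the slice, then rows by Y, then X — the exact convention depends on the API):

```python
def sample_positions(w, h, d, cell_size=1.0):
    # Generate sample positions layer by layer, in the same linear order
    # a slice-major volume texture stores its texels: index = (z*h + y)*w + x.
    positions = []
    for z in range(d):
        for y in range(h):
            for x in range(w):
                positions.append((x * cell_size, y * cell_size, z * cell_size))
    return positions

def flat_index(x, y, z, w, h):
    # Linear texel index for the layout assumed above.
    return (z * h + y) * w + x

# With matching orders, sample i in the 1D array is texel i in the volume.
pts = sample_positions(4, 2, 3)
assert pts[flat_index(3, 1, 2, 4, 2)] == (3.0, 1.0, 2.0)
```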

Converting the fragment's world space position into a UVW coordinate that could be used to look up the light volume took some head scratching, but I got there in the end. This code assumes that the volume's transform represents the lowest corner of the box, not its physical center.

// Matrix used to map lookup UVW from volume world space to volume texture
// space, where vW, vH and vL are the dimensions of the volume.
Matrix invVolumeDimensions = ((1/vW, 0,    0,    0),
                              (0,    1/vH, 0,    0),
                              (0,    0,    1/vL, 0),
                              (0,    0,    0,    1));

//Matrix used to map a world position to volume texture space.
Matrix invVolumeTransform = (inverse worldSpaceVolumeTransform) * invVolumeDimensions;

// Transform world position into light volume texture space
float3 volumeUVW = fragmentWorldPosition * invVolumeTransform;

// Sample light volume
float3 volColour = tex3D(volumeColourSampler,volumeUVW);
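Sketched in Python with plain arithmetic instead of matrices (assuming, as above, an axis-aligned volume whose transform is the box's lowest corner):

```python
def world_to_uvw(p, volume_origin, volume_dims):
    # Translate into the volume's local space, then divide by its
    # world-space dimensions so the box maps onto the [0,1] UVW range.
    return tuple((pc - oc) / dc for pc, oc, dc in zip(p, volume_origin, volume_dims))

origin = (10.0, 0.0, -5.0)   # hypothetical lowest corner of the box
dims = (64.0, 32.0, 64.0)    # hypothetical world-space extents

print(world_to_uvw(origin, origin, dims))              # (0.0, 0.0, 0.0)
print(world_to_uvw((74.0, 32.0, 59.0), origin, dims))  # (1.0, 1.0, 1.0)
```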


Tuesday 22 June 2010

Towards lighting volumes (and away from GPU ray-tracing)

So a few days back a bunch of Gamefest 2010 presentations went online, and amongst them was a talk by Monolith on lighting volumes. I had been using cube maps to light dynamic objects, but this approach seems much nicer. As a bonus, the lighting volume can also be used to light static geometry and provide volumetric lighting effects, enabling you to light the space between objects as well as the objects themselves (think the heavily light-polluted city of Blade Runner).

Blade Runner. Copyright © 1982, 1991 by the Blade Runner Partnership.

So first I had to generate the volume samples. The paper and talk mention rendering cube maps to generate the lighting volume data, but I've opted to use my lightmap tool to gather lighting information. Older versions of the lightmap tool used hemicube rendering (which was very slow), and volume sample counts quickly add up. For example, a 128*128*128 light volume requires 2,097,152 separate samples; multiply that by 6 individually rendered cube map faces and you've got a lot of draw calls. Add the cost of frequently dragging that data off the graphics card and back into regular memory and you're left with a lot of time to practice your knitting.
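To put a number on it:

```python
# Cube-map face renders needed for a full 128^3 lighting volume.
samples = 128 * 128 * 128
faces_per_sample = 6
print(samples)                     # 2097152 volume samples
print(samples * faces_per_sample)  # 12582912 individually rendered faces
```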

So here's the result so far. I've used the street scene again and sampled a 32*16*16 volume. In this case there are a lot of wasted samples, and I'm sure I'd be better off custom-fitting smaller sampling volumes. To clarify, the building surfaces are still being lit with lightmaps in these screenshots; I haven't gotten around to outputting the light volume textures and rendering with them yet.




In other news, I've decided to put my attempt at GPU ray-tracing aside for now. It's no quick task and it seemed unproductive to go marching down another avenue to replace a method that is actually performing quite well. On the plus side, I've since revisited the problems I was having with the depth peeling method and have managed to solve the one which was bothering me most. :)

Monday 14 June 2010

GPU raytrace test

Whilst I'm quite happy with the depth peeling approach my lightmapper currently uses, it does suffer from precision issues in certain circumstances. Firstly, light leaks are very common, and although I've found a solution, it wreaks havoc with thin double-sided objects. Secondly, very large scenes containing small objects pose a problem, as the resolution of the depth peel render target becomes too sparse to handle the smaller objects accurately.

I've recently been reading up on GPU raytracing, so I thought I'd give it a go. To start with, I've opted for the easiest thing I could think of: ambient occlusion on a small scene with no ray acceleration structure. Here are the results...





  
The resulting AO map.

Scene Stats:
  •  364 samples per pixel
  •  1024x1024 AO map
  •  A whopping 62 polygons!
  •  Render time of roughly 10 seconds (a quick test in mental ray with 64 samples per pixel took roughly 1 minute)
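The brute-force approach (no acceleration structure, every ray tested against every triangle) can be sketched like this — a CPU-side Python illustration of the idea, not the GPU implementation:

```python
# Brute-force ambient occlusion: fire random hemisphere rays from a
# point and count the fraction that escape without hitting anything.
import math, random

def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

def ray_triangle(orig, d, v0, v1, v2, eps=1e-7):
    # Moller-Trumbore intersection test; True if the ray hits the triangle.
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(d, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return False
    inv = 1.0 / det
    t_vec = sub(orig, v0)
    u = dot(t_vec, p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = cross(t_vec, e1)
    v = dot(d, q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return dot(e2, q) * inv > eps  # hit must be in front of the origin

def ambient_occlusion(point, normal, triangles, samples=256, rng=random.Random(0)):
    unoccluded = 0
    for _ in range(samples):
        # Uniform direction on the hemisphere around the normal
        # (rejection-sampled from the unit sphere for simplicity).
        while True:
            d = (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))
            l = math.sqrt(dot(d, d))
            if 1e-6 < l <= 1.0:
                d = (d[0]/l, d[1]/l, d[2]/l)
                break
        if dot(d, normal) < 0.0:
            d = (-d[0], -d[1], -d[2])
        # Every ray tests every triangle - this is the O(rays * tris)
        # cost an acceleration structure would later remove.
        if not any(ray_triangle(point, d, *tri) for tri in triangles):
            unoccluded += 1
    return unoccluded / samples

# A point with nothing above it is fully unoccluded.
print(ambient_occlusion((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), []))  # 1.0
```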


It's an interesting start and I'm keen to see how it pans out.

Thursday 10 June 2010

An old animation

Quite a few weeks back I posted these two stills (I also use the character to test my GPU lightmap tool)...


... they were taken from an old animation I did which I've recently uploaded to my website. You can check it out here.

The animation was a test sequence for a short film that didn't quite make it past the start line. The character was thrown together and animated over the course of a few evenings. About a year later (!) I built a nicer background, rendered the whole thing to separate layers and then comped it all together.