
Tuesday 25 April 2017

Real Time Global Illumination

 
In keeping with a lot of the older posts on this blog, I thought I'd write about the realtime GI system I'm using in a project I'm working on. It's a complete fresh start from the GI stuff I've written about previously. Previous efforts were based on volume textures, but dealing with the sampling issues is a pain in the ass, so I've switched to good old fashioned lightmaps. This is all a lot of effort to go to, so why bother? The short answer is I love the way it looks. As a bonus it simplifies the lighting process, and there's a subtlety to the end results that is very hard to achieve without some sort of physically based light transport. A single light can illuminate an entire scene, and the bounce light helps ground and bind all the elements together.
 
The process can be divided into five stages: lightmap UVs, surfel creation, surfel clustering, visibility sampling and realtime update. The clustering method was inspired by this JCGT article, however I'm not using spherical harmonics and I generate surfels and form factor weights differently. The JCGT article is fantastic and well worth a read.

Before you run off, here it is in action.



Lightmap UVs

The lighting result is stored in a lightmap, so the first step is a good set of UVs. These lightmaps are small and every pixel counts, so you have to be pretty fussy about how the UVs are laid out. UV verts are snapped to pixel centers, and there needs to be at least one pixel between all charts in order to prevent bilinear filtering from sampling across chart boundaries. The meshes are unwrapped in Blender then packed via a custom command line tool. This uses a brute force method that simply tests each potential chart position in turn; for simple scenes and pack regions up to 256x256 the performance is acceptable.
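
Here's a minimal sketch of that brute force placement in C++, treating each chart as a simple rectangle for clarity. The structures and names are illustrative, not the tool's actual code:

  #include <vector>

  // Hypothetical chart record; the real tool packs actual chart shapes.
  struct Chart { int width, height; int x = -1, y = -1; };

  // Brute force packer: for each chart, scan every position in turn and
  // take the first that fits. A one pixel gap is kept around each chart
  // so bilinear filtering never pulls in texels from a neighbouring chart.
  class Packer {
  public:
      explicit Packer(int size) : size_(size), used_(size * size, 0) {}

      bool place(Chart& c) {
          for (int py = 0; py + c.height <= size_; ++py)
              for (int px = 0; px + c.width <= size_; ++px)
                  if (fits(c, px, py)) { mark(c, px, py); return true; }
          return false; // pack region too small
      }

  private:
      bool fits(const Chart& c, int px, int py) const {
          // Test the chart's rectangle expanded by the one pixel border.
          for (int y = py - 1; y <= py + c.height; ++y)
              for (int x = px - 1; x <= px + c.width; ++x) {
                  if (x < 0 || y < 0 || x >= size_ || y >= size_) continue;
                  if (used_[y * size_ + x]) return false;
              }
          return true;
      }

      void mark(Chart& c, int px, int py) {
          c.x = px; c.y = py;
          for (int y = 0; y < c.height; ++y)
              for (int x = 0; x < c.width; ++x)
                  used_[(py + y) * size_ + (px + x)] = 1;
      }

      int size_;
      std::vector<char> used_;
  };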



  

Surfels and Clustering

Next up we have to divide the scene into surfels (surface elements) and then cluster those surfels into a hierarchy. At runtime these surfels are lit and the lighting results are propagated up the hierarchy. This lighting information is then used to update the lightmap.
 

Surfel placement plays a big part in the quality of the illumination and I've been through a few iterations. Initially I tried random placement with rejection if a surfel was too close to its neighbours, but this was hellishly slow. I also tried a 3D version of this which was much faster, but looking at the results I felt the coverage could be better. Particularly around edges and on thin objects, the neighbour rejection techniques would often leave gaps that I felt could be filled. This seemed like it could be addressed by relaxing the points, but I wanted to try something else.

I decided to try working in 2D using the UVs, which in this case are stretch free, uniformly scaled and much easier to work with. The technique I settled on first generates a high density, evenly distributed set of points on each UV chart. N points are selected from this set and used as initial surfel locations, and these locations are then refined via k-means clustering.

This results in a set of well spaced surfels that accurately approximate the scene geometry and makes it easy to specify the desired number of surfels. For each chart N is simply 

(chart_area / total_area) * total_surfel_count
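
The refinement step might look something like this in sketch form, assuming the dense point set and the N seed locations are already in hand (plain Lloyd's algorithm; all names are illustrative):

  #include <algorithm>
  #include <vector>

  struct Vec2 { float x, y; };

  static float dist2(Vec2 a, Vec2 b) {
      float dx = a.x - b.x, dy = a.y - b.y;
      return dx * dx + dy * dy;
  }

  // Refine N surfel seeds against a dense, evenly distributed point set
  // using k-means: assign each dense point to its nearest seed, then move
  // each seed to the centroid of its assigned points.
  void refineSurfels(const std::vector<Vec2>& dense,
                     std::vector<Vec2>& seeds, int iterations) {
      if (seeds.empty()) return;
      std::vector<Vec2> sum(seeds.size());
      std::vector<int> count(seeds.size());
      for (int it = 0; it < iterations; ++it) {
          std::fill(sum.begin(), sum.end(), Vec2{0.0f, 0.0f});
          std::fill(count.begin(), count.end(), 0);
          for (const Vec2& p : dense) {
              size_t best = 0;
              for (size_t s = 1; s < seeds.size(); ++s)
                  if (dist2(p, seeds[s]) < dist2(p, seeds[best])) best = s;
              sum[best].x += p.x;
              sum[best].y += p.y;
              ++count[best];
          }
          for (size_t s = 0; s < seeds.size(); ++s)
              if (count[s] > 0)
                  seeds[s] = { sum[s].x / count[s], sum[s].y / count[s] };
      }
  }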


The initial high density point distribution.
Surfel creation via k-means clustering of the high density point distribution.

These surfels are then clustered via hierarchical agglomerative clustering which repeatedly pairs nearby surfels until the entire surfel set is contained in a binary tree. Distance, normal, UV chart and tree balancing metrics help tune how the hierarchy is constructed. I'm still experimenting with these factors.

Hierarchical agglomerative clustering in action.
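
In sketch form the pairing loop might look like the following, with distance as the only merge cost; the normal, UV chart and balancing metrics mentioned above would fold into the cost function. This is illustrative rather than my actual implementation, and the brute force pair search is fine at these surfel counts:

  #include <vector>

  struct Vec3 { float x, y, z; };

  // Node in the binary cluster tree: leaves are surfels, internal nodes
  // are clusters built from their two children.
  struct Cluster {
      Vec3 position;             // representative position (centroid here)
      int left = -1, right = -1; // child indices, -1 for a leaf surfel
  };

  static float cost(const Cluster& a, const Cluster& b) {
      // Distance only; normal similarity, same chart preference and tree
      // balancing terms would be added here.
      float dx = a.position.x - b.position.x;
      float dy = a.position.y - b.position.y;
      float dz = a.position.z - b.position.z;
      return dx * dx + dy * dy + dz * dz;
  }

  // Repeatedly merge the cheapest pair of active clusters until a single
  // root remains. Returns the root index, -1 if there were no surfels.
  int buildHierarchy(std::vector<Cluster>& nodes) {
      std::vector<int> active(nodes.size());
      for (size_t i = 0; i < nodes.size(); ++i) active[i] = (int)i;

      while (active.size() > 1) {
          size_t bi = 0, bj = 1;
          float best = cost(nodes[active[0]], nodes[active[1]]);
          for (size_t i = 0; i < active.size(); ++i)
              for (size_t j = i + 1; j < active.size(); ++j) {
                  float c = cost(nodes[active[i]], nodes[active[j]]);
                  if (c < best) { best = c; bi = i; bj = j; }
              }
          Cluster parent;
          parent.left = active[bi];
          parent.right = active[bj];
          const Vec3& a = nodes[parent.left].position;
          const Vec3& b = nodes[parent.right].position;
          parent.position = { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f,
                              (a.z + b.z) * 0.5f };
          active.erase(active.begin() + bj); // erase larger index first
          active.erase(active.begin() + bi);
          nodes.push_back(parent);
          active.push_back((int)nodes.size() - 1);
      }
      return active.empty() ? -1 : active[0];
  }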

Lightmap visibility sampling

Influencing clusters for the highlighted lightmap texel.
Once the surfel hierarchy has been constructed each lightmap texel needs to locate the surfels that most contribute to its illumination. Initially I used an analytic form factor, but this would sometimes cause lighting flareouts if a texel and surfel were too close. Clamping the distance worked but felt like a bit of a hack, so I switched to simply casting a bunch of cosine weighted rays about the hemisphere. Each ray hit locates the nearest surfel, and the final form factor weight for each surfel is simply

 num_hits / total_rays
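
Per texel, a sketch of the process; sampleDir and castRay stand in for the renderer's own cosine weighted hemisphere sampling and ray tracing, which aren't shown here:

  #include <unordered_map>

  struct Vec3 { float x, y, z; };

  // For one lightmap texel: cast cosine weighted rays about the hemisphere
  // and weight each surfel by the fraction of rays that land on it.
  // castRay returns the surfel nearest the hit point, or -1 on a miss.
  template <typename SampleDir, typename CastRay>
  std::unordered_map<int, float> texelFormFactors(const Vec3& pos,
                                                  const Vec3& normal,
                                                  int totalRays,
                                                  SampleDir sampleDir,
                                                  CastRay castRay) {
      std::unordered_map<int, float> weights; // surfel index -> form factor
      for (int i = 0; i < totalRays; ++i) {
          Vec3 dir = sampleDir(normal);   // cosine weighted hemisphere sample
          int surfel = castRay(pos, dir); // nearest surfel to the ray hit
          if (surfel >= 0)
              weights[surfel] += 1.0f / totalRays; // num_hits / total_rays
      }
      return weights;
  }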

Once all rays have been cast the form factor weights are propagated up the hierarchy. The hierarchy is then refined by successively selecting the children of the highest weighted cluster. At each iteration the highest weighted cluster is removed and its two children are selected in its place. This process repeats until a maximum number of clusters is selected or no further subdivision can take place. The texel then has a set of clusters and weights that best approximate its lighting environment.
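
A sketch of that refinement pass, using a max-heap keyed on cluster weight (the node layout and names are illustrative):

  #include <queue>
  #include <vector>

  // Minimal cluster node for the selection pass; weight is the propagated
  // form factor for the texel in question.
  struct ClusterNode {
      float weight;
      int left = -1, right = -1; // -1 means no child (leaf surfel)
  };

  // Refine from the root: repeatedly replace the highest weighted cluster
  // with its two children until maxClusters are selected or nothing can
  // be subdivided any further.
  std::vector<int> selectClusters(const std::vector<ClusterNode>& nodes,
                                  int root, int maxClusters) {
      auto cmp = [&](int a, int b) { return nodes[a].weight < nodes[b].weight; };
      std::priority_queue<int, std::vector<int>, decltype(cmp)> open(cmp);
      std::vector<int> selected;
      open.push(root);

      while (!open.empty() &&
             (int)(open.size() + selected.size()) < maxClusters) {
          int top = open.top();
          open.pop();
          if (nodes[top].left < 0) { selected.push_back(top); continue; } // leaf
          open.push(nodes[top].left);
          open.push(nodes[top].right);
      }
      while (!open.empty()) { selected.push_back(open.top()); open.pop(); }
      return selected;
  }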

Lighting update

The realtime lighting phase consists of several stages. First each surfel's direct lighting is evaluated for each direct light source; visibility is accounted for by tracing a single ray from the surfel's position to the light source. The lighting result from the previous frame is also added to the current frame's direct lighting to simulate multiple bounces. There's a bit of a lag here but it's barely noticeable. Lighting values for each cluster are then updated by summing the lighting of its two children.

Each active texel in the lightmap is then updated by accumulating the lighting from its set of influencing clusters. The lightmap is then ready to be used.
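
Pulled together, a frame of the update might look something like this sketch. The leaves-first node layout, the evalDirect callback and the albedo scale on the fed-back bounce are my assumptions, not necessarily how the real thing is structured:

  #include <vector>

  struct Vec3 { float x = 0, y = 0, z = 0; };
  static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
  static Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

  // Leaves of the hierarchy are surfels; internal nodes sum their children.
  // Nodes are stored leaves first, parents after their children.
  struct Node {
      Vec3 lighting;
      Vec3 prevLighting;         // last frame's result, fed back as bounce
      int left = -1, right = -1; // -1 -> leaf surfel
  };

  struct Influence { int cluster; float weight; };

  // One frame of the lighting update. evalDirect is a stand-in returning a
  // surfel's shadowed direct lighting (one ray cast per light inside it).
  template <typename EvalDirect>
  void updateLighting(std::vector<Node>& nodes, int leafCount,
                      const std::vector<std::vector<Influence>>& texels,
                      std::vector<Vec3>& lightmap, EvalDirect evalDirect) {
      const float kAlbedo = 0.5f; // assumed bounce attenuation

      // 1. Direct lighting per surfel, plus last frame's result as bounce.
      for (int i = 0; i < leafCount; ++i)
          nodes[i].lighting = evalDirect(i) + nodes[i].prevLighting * kAlbedo;

      // 2. Propagate up: each cluster sums the lighting of its two children.
      for (size_t i = leafCount; i < nodes.size(); ++i)
          nodes[i].lighting = nodes[nodes[i].left].lighting +
                              nodes[nodes[i].right].lighting;

      // 3. Accumulate each texel's influencing clusters into the lightmap.
      for (size_t t = 0; t < texels.size(); ++t) {
          Vec3 sum;
          for (const Influence& inf : texels[t])
              sum = sum + nodes[inf.cluster].lighting * inf.weight;
          lightmap[t] = sum;
      }

      // Stash this frame's result for next frame's bounce term.
      for (int i = 0; i < leafCount; ++i)
          nodes[i].prevLighting = nodes[i].lighting;
  }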

Direct light only.
Direct light with one light bounce.

Direct light with multiple light bounces.
Timings for each stage (i7-6700K @ 4.0GHz)


  Surfel illumination (1008 surfels):               0.36ms
  Sum Clusters (2015 clusters):                     0.08ms
  Sum Lightmap texels (6453 texels * 90 clusters):  0.64ms

Environmental Lighting

Environment lighting is provided by surfels positioned in a sphere around the scene. These are treated identically to geometry surfels except for the lighting update where a separate illumination function is used. Currently it's a simple two colour blend but could just as easily be a fancy sky illumination technique or an environment map.
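
As a sketch, the current blend amounts to something like this; the colours and the up-axis blend are placeholders:

  #include <algorithm>

  struct Vec3 { float x, y, z; };

  // Environment surfel illumination: a simple two colour blend between a
  // ground and a sky colour based on the surfel's height on the sphere.
  Vec3 environmentLighting(const Vec3& dir) { // direction from scene centre
      const Vec3 ground = {0.25f, 0.22f, 0.18f};
      const Vec3 sky    = {0.45f, 0.60f, 0.85f};
      float t = std::clamp(dir.y * 0.5f + 0.5f, 0.0f, 1.0f);
      return { ground.x + (sky.x - ground.x) * t,
               ground.y + (sky.y - ground.y) * t,
               ground.z + (sky.z - ground.z) * t };
  }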




To finish up, here are some more examples without any debug overlay. These were taken with an accumulation technique that allows for soft shadows and nice anti-aliasing.





Thursday 1 March 2012

2012

Just a quick post to say I'm still alive really. Is it 2012 already? Amazing. 


Monday 29 August 2011

Rust animation test

Haven't had much free time the last few months but I did manage to let this rendered animation test grow slightly out of control. Big thanks to Stephan Schutze (www.stephanschutze.com, Twitter: @stephanschutze) for the awesome audio work.




Concept and design work


This little guy started out as a bunch of thumbnail sketches (below left) well over a year ago, but the design also shares some similarities with an even older concept (below right).


Eventually I got around to modelling, and although the concepts don't really show it, I drew a lot of inspiration from the Apple IIe and Amiga 500 computers of my misspent youth. The 3D paint-over below shows an early version with only one antenna. The final version has a second antenna, which was an accident; I kept it when I realised the pair could work almost like ears and add a bit more personality.


And finally, a snippet from an old mock comic book panel, just for the hell of it :)

Tuesday 31 May 2011

Light volume sampling again

Recently I've been taking another look into how I render light volumes. I don't get nearly as much time as I'd like to work on this stuff these days, but I've hit upon some improvements so I thought I'd write it up.

The two primary issues I've come across with light volumes are light leaks and ensuring that the light volumes are padded so as to avoid issues with linear sampling. An earlier post goes into a bit more detail about the issues I encountered and provides a bit more background to this one.

In a nutshell though, sampling a 3D scene at sparse regular intervals doesn't tend to yield very good results. Frequently the scene is sampled from behind geometry, causing light leaks and other issues.

My previous solution used a bunch of raycasts to determine the best new location for each sample point whenever there was scene geometry close by. Whilst this was an improvement, it wasn't always effective and often required extra geometry to prevent light leaks. Another common problem occurred around hard geometry corners, with the new sample point often ending up on one side of a corner when really it needed to sample both sides. This tended to look quite bad in situations where the lighting differs substantially on either side of the geometry edge. The image below hopefully makes the problem clearer.

Problem: In this case the issue shows up as a sawtooth pattern on the darker side of the object.

Cause: The sample point can only be in one place at a time.

A solution: Split the sample point

A solution to this problem is to just split the sample point in cases like this. Because the lighting is evaluated and stored as the average amount of incoming light from six basic directions (+X,+Y,+Z,-X,-Y,-Z), we can separate the sample locations for each of these base vectors. If the light volume texel contains geometry then each sample direction moves to the closest face with the most similar geometry normal. If no similar normal is found then the sample points simply move to the closest face, regardless of its geometry normal.
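
A sketch of picking the split locations, assuming the nearby faces for a texel have already been gathered; the face representation and the normal similarity threshold are placeholders:

  #include <array>
  #include <vector>

  struct Vec3 { float x, y, z; };
  struct Face { Vec3 point; Vec3 normal; }; // nearby geometry, simplified

  static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
  static float dist2(Vec3 a, Vec3 b) {
      Vec3 d{a.x - b.x, a.y - b.y, a.z - b.z};
      return dot(d, d);
  }

  // For a volume texel containing geometry, pick one sample position per
  // basis direction: the closest face whose normal is similar to that
  // direction, falling back to the closest face overall.
  std::array<Vec3, 6> splitSamplePoints(const Vec3& texelCenter,
                                        const std::vector<Face>& faces) {
      const Vec3 axes[6] = {{1,0,0},{0,1,0},{0,0,1},{-1,0,0},{0,-1,0},{0,0,-1}};
      std::array<Vec3, 6> samples;
      for (int i = 0; i < 6; ++i) {
          const Face* similar = nullptr;
          const Face* closest = nullptr;
          for (const Face& f : faces) {
              if (!closest ||
                  dist2(texelCenter, f.point) < dist2(texelCenter, closest->point))
                  closest = &f;
              if (dot(f.normal, axes[i]) > 0.5f && // "similar" threshold
                  (!similar ||
                   dist2(texelCenter, f.point) < dist2(texelCenter, similar->point)))
                  similar = &f;
          }
          const Face* pick = similar ? similar : closest;
          samples[i] = pick ? pick->point : texelCenter;
      }
      return samples;
  }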




So this works pretty well, but we're still left with the problem of linear sampling causing further light leaks. In other words, samples that don't contain geometry can still contribute their lighting information.

Previously I'd just use a very wide search radius for each sample, enough to take this effect into account, but this caused further problems as it wasn't always easy to predict where a sample would end up. To solve this I've implemented a post render padding stage, very similar to how light maps are padded, only in three dimensions. The padding process looks for texels that contain no geometry but that have neighbours which do. These "empty" texels are then set to contain the average lighting value of all their geometry-containing neighbours. This has the effect of padding the light volume and removing the remaining light leaks.
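
In sketch form the padding pass might look like this, with one lighting value per texel for brevity (the real volume stores six directional values):

  #include <vector>

  struct Vec3 { float x = 0, y = 0, z = 0; };

  // Padding pass: every empty texel that touches at least one geometry
  // containing neighbour takes the average of those neighbours' lighting.
  void padLightVolume(std::vector<Vec3>& light,
                      const std::vector<bool>& hasGeometry,
                      int w, int h, int d) {
      std::vector<Vec3> out = light;
      auto idx = [&](int x, int y, int z) { return (z * h + y) * w + x; };
      for (int z = 0; z < d; ++z)
      for (int y = 0; y < h; ++y)
      for (int x = 0; x < w; ++x) {
          if (hasGeometry[idx(x, y, z)]) continue;
          Vec3 sum;
          int count = 0;
          for (int dz = -1; dz <= 1; ++dz)
          for (int dy = -1; dy <= 1; ++dy)
          for (int dx = -1; dx <= 1; ++dx) {
              if (dx == 0 && dy == 0 && dz == 0) continue; // skip self
              int nx = x + dx, ny = y + dy, nz = z + dz;
              if (nx < 0 || ny < 0 || nz < 0 || nx >= w || ny >= h || nz >= d)
                  continue;
              if (!hasGeometry[idx(nx, ny, nz)]) continue;
              const Vec3& v = light[idx(nx, ny, nz)];
              sum.x += v.x; sum.y += v.y; sum.z += v.z;
              ++count;
          }
          if (count > 0)
              out[idx(x, y, z)] = { sum.x / count, sum.y / count, sum.z / count };
      }
      light = out;
  }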

Stepping through these issues and solutions using a simple scene we have:

A) No Fixes. 

No attempt to fix the sample locations or pad the light volume. Light leaks are a big problem here.

B) Sample points fixed. 

Fixing the sample locations certainly improves things; the point sampled version barely displays any light leaks, but we still hit problems when using linear filtering.

C) Sample points fixed and volume padded. 

Finally, padding the light volume solves the remaining light leak issues and smooths the overall result.

That example scene was pretty simple so there are some examples of more complex scenes below. Click for larger versions.







Sunday 15 May 2011

Sunday Scribble

I'll hopefully have another post on lighting volumes soon but till then, a scribble!