Volumetric Clouds and Offscreen Particles

At the end of 2012 and in January this year I played around with volumetric clouds. My first clouds, introduced in a rush, were just big billboards with a value noise texture. I was relatively satisfied until a friend of mine came up with an improved billboard-based approach in the game Skytides. He used Fourier Opacity Maps to compute volumetric shadowing. By the way, this technique is very nice for getting shadows on particles and other alpha-blended objects. Now I was interested in real volumetric clouds in a real-time context.

My first idea was to use ellipsoids and write a closed-form ray casting, which can be done in real time. Together with some noise they were supposed to form single clouds. In the end the number of ellipsoids was too limited and the quality poor.

So what comes next?

 

Ray tracing seemed to be the only promising next step. I found some interesting work on cellular automata for clouds by Dobashi, Nishita and colleagues. So I tried to implement ray marching on an integer texture where each bit is a voxel. In each voxel the boundary hit point to the next voxel can be computed in closed form as well. This guarantees an optimal step width but creates a very blocky look. I used dithering and interpolation, which produced good quality results but was either slow or too coarse in detail.
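As a rough sketch, the closed-form step to the next voxel boundary could look like this (HLSL; the function name and the voxel-space convention are illustrative, not the original code):

```hlsl
// Sketch: exact distance to the next voxel boundary along the ray.
// pos is the current position in voxel coordinates, dir the normalised ray direction.
float DistanceToNextVoxel(float3 pos, float3 dir)
{
    // Next integer boundary per axis, depending on the ray direction.
    float3 boundary = dir > 0.0 ? floor(pos) + 1.0 : ceil(pos) - 1.0;
    // Parametric distance per axis; axes with dir == 0 never hit a boundary.
    float3 t = dir != 0.0 ? (boundary - pos) / dir : 1e30;
    return min(t.x, min(t.y, t.z));    // the closest boundary ends the current voxel
}
```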

[Images: VoxelClouds 1.0, GhostBlocks, VoxelClouds 2.2]

The results of the last algorithm are what you see in the title bar of this page. It is so simple at its core that you will want to hang me. The trick to get rid of the block artefacts is to use fuzzy density values and a linear sampler!

I use a standard ray marching (a for loop in the pixel shader) on a volume texture filled with a value noise. To animate the clouds I just move the texture around and sample twice at different locations.
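A minimal sketch of such a density lookup, assuming a bound 3D value noise texture and a time constant (all names and scaling constants are illustrative):

```hlsl
// Sketch of the animated density lookup (illustrative names and constants).
Texture3D<float> g_noise : register(t0);
SamplerState g_linearSampler : register(s0);    // the linear filter smooths the voxel grid
cbuffer PerFrame : register(b0) { float g_time; };

float SampleCloudDensity(float3 worldPos)
{
    float3 uvw = worldPos * 0.001;              // scale world space into texture space
    // Two shifted samples of the same value noise fake a morphing animation.
    float n0 = g_noise.SampleLevel(g_linearSampler, uvw + float3(g_time * 0.01, 0.0, 0.0), 0);
    float n1 = g_noise.SampleLevel(g_linearSampler, uvw * 2.0 - float3(0.0, 0.0, g_time * 0.02), 0);
    return saturate(n0 * n1 * 2.0 - 0.3);       // arbitrary shaping into [0, 1]
}
```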

To get clouds you need several rays: the first ray accumulates the alpha values along the view direction. To get volumetric lighting, a second ray is cast into the light's direction at each point which is not totally transparent. It took me some time to find a proper accumulation of the alpha values. Assuming a per-step opacity $\alpha_i \in [0,1]$, where 1 means fully opaque, the transmittance update $T \leftarrow T \cdot (1 - \alpha_i)$ must be computed per step on the ray. $T$ is the amount of background shining through the clouds. Finally, the alpha value of the cloud is $1 - T$. There are different reasons to stop the ray casting (see the sketch after this list):

  • The scene depth is reached (this makes things volumetric - of course the scene depth buffer is required)
  • The view distance is exceeded
  • $T$ is close to 0 (the cloud is practically opaque)
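As a sketch, the accumulation loop with all three stop conditions might look like this (rayStart, rayDir, stepLength, sceneDepth and the constants are assumed inputs; ComputeLight is sketched under point 1 below):

```hlsl
// Sketch of the alpha accumulation along the view ray (illustrative names).
float T = 1.0;                   // transmittance: how much background still shines through
float light = 0.0;
float traveled = 0.0;
for (int i = 0; i < MAX_STEPS; ++i)
{
    float3 pos = rayStart + rayDir * traveled;
    float alphaI = saturate(SampleCloudDensity(pos) * stepLength);  // opacity of this step
    if (alphaI > 0.0)
        light += ComputeLight(pos) * alphaI * T;   // light ray at non-transparent points
    T *= 1.0 - alphaI;           // per-step update: T <- T * (1 - alpha_i)
    traveled += stepLength;

    if (traveled >= sceneDepth) break;       // scene geometry reached (needs the depth buffer)
    if (traveled >= VIEW_DISTANCE) break;    // view distance exceeded
    if (T < 0.01) break;                     // T close to 0: the cloud is opaque anyway
}
float cloudAlpha = 1.0 - T;      // final alpha of the cloud
```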

If you want a good visual appearance, this is enough! Stop reading here, implement it, and you will be done in a few hours.

Now comes the part of making things fast.

1. One ray into the light direction for every sample on the primary ray - REALLY?
Yes! But not a complete ray. I used a constant-length ray of 6 steps. One could use the same alpha accumulation technique, but just summing the density already produces good results. The reason this can work fast is that the clouds are relatively dense: the primary ray is terminated after a few steps once it enters a cloud, so no more light rays are cast than necessary.
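A minimal sketch of this constant-length light ray, just summing density (g_lightDir and both constants are illustrative assumptions):

```hlsl
// Sketch of the short secondary ray towards the light (illustrative names).
float ComputeLight(float3 pos)
{
    const int   LIGHT_STEPS = 6;          // constant ray length of 6 steps
    const float LIGHT_STEP_LENGTH = 50.0; // tune to the cloud scale
    float densitySum = 0.0;
    [unroll]
    for (int i = 1; i <= LIGHT_STEPS; ++i)
        densitySum += SampleCloudDensity(pos + g_lightDir * (i * LIGHT_STEP_LENGTH));
    // More accumulated density means less light arriving; one possible mapping:
    return exp(-densitySum * 0.5);
}
```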

2. What step size to choose?
The bigger the steps, the fewer samples are necessary and everything gets faster - and uglier. But in the distance a finely stepping ray is overkill. I found a linearly increasing step width to be a good compromise.
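As a sketch, assuming two illustrative constants BASE_STEP and STEP_GROWTH:

```hlsl
// Sketch: the step width grows linearly with each iteration.
float stepLength = BASE_STEP;
for (int i = 0; i < MAX_STEPS; ++i)
{
    // ... sample, accumulate and test the stop conditions as above ...
    traveled   += stepLength;
    stepLength += STEP_GROWTH;   // distant samples become cheaper and coarser
}
```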

3. Obviously clouds are smooth. They are so smooth that you can put a 5x5 Gaussian filter over them without seeing any difference. So why compute so many pixels with such an expensive ray marching?
This part is the hardest and the reason for the "Offscreen Particles" in the title. All the nice things implemented so far can be computed on a 4 or 8 times smaller texture and then upsampled to screen size. I do not have screenshots of the artefacts which appear when naively upsampling such alpha textures, but the GPU Gems 3 article on off-screen particles has.
My solution works with only two passes: the cloud pass and the upsampling. It is not even necessary to downsample the z-buffer.
The first pass renders the clouds to a small offscreen texture. It just fetches a single depth value at the current pixel position * 4 + 1.5. The 1.5 is more or less the centre of the represented area and must be changed if a scaling factor other than 4 is chosen. The output is (light intensity, cloud alpha, sampled Z). The last component is the key: it remembers the actually used depth for later.
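A sketch of this pass, assuming a full-resolution depth texture and a MarchClouds wrapper around the marching loop from above (all names are illustrative):

```hlsl
// Sketch of the quarter-resolution cloud pass (scale factor 4; illustrative names).
Texture2D<float> g_sceneDepth : register(t1);
SamplerState g_pointSampler : register(s1);
cbuffer PerView : register(b1) { float2 g_screenSize; };

float4 CloudPassPS(float4 svPos : SV_Position) : SV_Target
{
    // One depth fetch at pixel * 4 + 1.5, roughly the centre of the 4x4 footprint.
    float2 fullResPos = floor(svPos.xy) * 4.0 + 1.5;
    float sceneZ = g_sceneDepth.SampleLevel(g_pointSampler, fullResPos / g_screenSize, 0);

    float light, cloudAlpha;
    MarchClouds(svPos.xy, sceneZ, light, cloudAlpha);  // the ray marching sketched above

    // The third component remembers which depth this texel actually used.
    return float4(light, cloudAlpha, sceneZ, 0.0);
}
```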
During the upsampling, the linearly interpolated texel from the cloud texture is taken. If abs(scene Z - sampled Z) > threshold, the value taken from the clouds is not representative for the scene's pixel. This happens only at edges, which is exactly where we would get the artefacts. In that case I take the 8 neighbourhood samples and search for the one with the smallest deviation from the scene depth. The representative chosen this way is very likely a good approximation. There is no guarantee to find a better pixel, but subjectively all artefacts are gone; you might find some only if searching for them directly. If someone has an idea for a more elegant search, please let me know.
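A sketch of the upsampling with that neighbourhood search (DEPTH_THRESHOLD and g_cloudTexelSize, the reciprocal cloud texture resolution, are assumed constants):

```hlsl
// Sketch of the upsampling pass with the nearest-depth fallback (illustrative names).
Texture2D<float4> g_cloudTex : register(t2);   // (light, alpha, sampled Z) from pass 1

float4 UpsamplePS(float2 uv : TEXCOORD0) : SV_Target
{
    float sceneZ = g_sceneDepth.SampleLevel(g_pointSampler, uv, 0);
    float4 cloud = g_cloudTex.SampleLevel(g_linearSampler, uv, 0);   // bilinear fetch

    // A large depth deviation means this texel sits on an edge: search the
    // 8 neighbours for the sample whose remembered depth fits the scene best.
    if (abs(sceneZ - cloud.z) > DEPTH_THRESHOLD)
    {
        float best = abs(sceneZ - cloud.z);
        for (int y = -1; y <= 1; ++y)
            for (int x = -1; x <= 1; ++x)
            {
                float4 c = g_cloudTex.SampleLevel(g_pointSampler,
                                                  uv + float2(x, y) * g_cloudTexelSize, 0);
                float deviation = abs(sceneZ - c.z);
                if (deviation < best) { best = deviation; cloud = c; }
            }
    }
    // x = light intensity, y = cloud alpha for blending over the scene.
    return float4(cloud.xxx, cloud.y);
}
```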

In my shader I additionally used a doubled view distance and a distortion to get a better horizon. There might also be some unexplained noise from other experiments in it.
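One way such a horizon trick could look, sketched inside the marching loop (CURVATURE and the doubling are illustrative guesses at the idea, not the original code):

```hlsl
// Rough sketch of a horizon distortion inside the marching loop (illustrative).
pos.y -= traveled * traveled * CURVATURE;     // bend distant samples downwards
if (traveled >= 2.0 * VIEW_DISTANCE) break;   // march out to a doubled view distance
```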


Altogether this runs at >50 fps on my laptop's Intel chip - in full HD:

[Image: cropped-header.png]

2 thoughts on “Volumetric Clouds and Offscreen Particles”

    1. Johannes (post author)

      I retested the approach with a quadratic increase (just change line 71). While standing still the scene looks good, but when moving the camera it flickers at the horizon. This was still the case when I made the step length so small that the performance was worse than with the linear increase.
      I think this is mathematically reasonable too: in a perspective view the screen-space distance between two points A, B at depth z is |A/z - B/z| = |A - B|/z. Using a linearly increasing step width therefore means using a constant step width with respect to screen space.
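      In compact form, with $A$, $B$ two world-space points at depth $z$ and $A'$, $B'$ their projections:

      $$|A' - B'| = \left|\frac{A}{z} - \frac{B}{z}\right| = \frac{|A - B|}{z} \quad\Rightarrow\quad |A' - B'| = \text{const.} \iff |A - B| \propto z,$$

      so a step width growing linearly with the distance keeps the projected step size constant.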

