Photon mapping
In computer graphics, photon mapping is a two-pass global illumination algorithm developed by Henrik Wann Jensen that approximately solves the rendering equation. Rays from the light source and rays from the camera are traced independently until some termination criterion is met, then they are connected in a second step to produce a radiance value. It is used to realistically simulate the interaction of light with different objects. Specifically, it is capable of simulating the refraction of light through a transparent substance such as glass or water, diffuse interreflection between illuminated objects, and some of the effects caused by particulate matter such as smoke or water vapor.
The desired effects of the refraction of light through a transparent medium are called caustics. A caustic is a pattern of light focused on a surface after the original path of the light rays has been bent by an intermediate surface. For example, as light rays pass through a glass of wine sitting on a table and through the liquid it contains, they are refracted and focused onto the table the glass is standing on. The wine also changes the pattern and color of the light.
With photon mapping, light packets called photons are sent out into the scene from the light source. Whenever a photon intersects a surface, the intersection point, incoming direction, and energy of the photon are stored in a cache called the photon map. After intersecting the surface, a new direction for the photon is selected by sampling the surface's bidirectional reflectance distribution function (BRDF). There are two methods for determining when a photon should stop bouncing. The first is to decrease the energy of the photon at each intersection point; when a photon's energy falls below a predetermined threshold, the bouncing stops. The second is a Monte Carlo technique called Russian roulette: at each intersection the photon either continues to bounce or is absorbed, with the probability of continuing typically tied to the reflectance of the surface, so the chance that a photon survives many bounces falls off quickly.
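A minimal sketch of this tracing pass, in Python, might look as follows. The scene, light, and material interfaces used here (scene.intersect, light.emit_photon, material.albedo, material.sample_brdf) are hypothetical placeholders rather than any particular renderer's API, and the Russian roulette survival probability is taken to be the surface reflectance, one common choice.

import random

def trace_photons(scene, light, photon_count, max_bounces=8):
    photon_map = []  # each entry: (position, incoming_direction, power)
    for _ in range(photon_count):
        origin, direction, power = light.emit_photon()   # hypothetical emitter API
        for _ in range(max_bounces):
            hit = scene.intersect(origin, direction)     # hypothetical intersector
            if hit is None:
                break  # the photon left the scene
            # Store the photon at the surface it struck.
            photon_map.append((hit.position, direction, power))
            # Russian roulette: the photon survives with probability equal to
            # the surface reflectance; otherwise it is absorbed and stops here.
            if random.random() >= hit.material.albedo:
                break
            # Because the survival probability equals the reflectance, the
            # surviving photon keeps its full power; the energy lost to
            # absorption is accounted for statistically.
            direction = hit.material.sample_brdf(direction, hit.normal)
            origin = hit.position
    return photon_map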
The photon map is then used during rendering to estimate the density of accumulated photons, which serves as an estimate of radiance. Because the k nearest photons around a point are gathered for this estimate, the photon map data structure should be optimised for the k-nearest neighbor algorithm; Jensen's implementation therefore uses a kd-tree for the photon map.
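The density estimate itself can be sketched as follows, assuming a photon map stored as a list of (position, incoming direction, power) tuples as in the previous sketch. The brute-force nearest-neighbour search stands in for the kd-tree query, and the brdf argument is a placeholder for the surface's reflectance function.

import math

def distance_squared(a, b):
    return sum((a[i] - b[i]) ** 2 for i in range(3))

def radiance_estimate(photon_map, point, brdf, outgoing_dir, k=50):
    # Gather the k photons nearest to the shading point (brute force here;
    # a kd-tree query would replace this sort in a real implementation).
    nearest = sorted(photon_map, key=lambda p: distance_squared(p[0], point))[:k]
    if not nearest:
        return 0.0
    # Radius of the sphere that just encloses the gathered photons.
    radius_sq = distance_squared(nearest[-1][0], point)
    if radius_sq == 0.0:
        return 0.0
    # Weight each photon's power by the BRDF for its incoming direction,
    # then divide by the disc area (pi * r^2) to turn flux into radiance.
    total = sum(power * brdf(incoming_dir, outgoing_dir)
                for _position, incoming_dir, power in nearest)
    return total / (math.pi * radius_sq)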
To avoid emitting unneeded photons, the initial direction of the outgoing photons is often constrained. Instead of simply sending out photons in random directions, they are sent toward a known object that will manipulate the photons in a desired way, such as focusing or diffusing the light. Many other refinements can be made to the algorithm: for example, choosing the number of photons to send, and where and in what pattern to send them.
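One way to constrain the initial directions, sketched below, is to emit photons only within the cone of directions that subtends a bounding sphere around the target object. The bounding-sphere parameters and the vector helpers here are illustrative assumptions rather than part of the original algorithm description.

import math
import random

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return [c / length for c in v]

def sample_direction_towards(light_pos, target_centre, target_radius):
    # Axis of the cone: from the light towards the centre of the target's
    # bounding sphere.
    to_target = [target_centre[i] - light_pos[i] for i in range(3)]
    distance = math.sqrt(sum(c * c for c in to_target))
    axis = normalize(to_target)
    # Half-angle of the cone that just encloses the bounding sphere.
    cos_theta_max = math.sqrt(max(0.0, 1.0 - (target_radius / distance) ** 2))
    # Uniformly sample a direction inside that cone, in the cone's local frame.
    cos_theta = 1.0 - random.random() * (1.0 - cos_theta_max)
    sin_theta = math.sqrt(1.0 - cos_theta * cos_theta)
    phi = 2.0 * math.pi * random.random()
    # Build an orthonormal basis around the axis and rotate the sample into it.
    up = [0.0, 1.0, 0.0] if abs(axis[1]) < 0.9 else [1.0, 0.0, 0.0]
    tangent = normalize(cross(up, axis))
    bitangent = cross(axis, tangent)
    return [sin_theta * math.cos(phi) * tangent[i] +
            sin_theta * math.sin(phi) * bitangent[i] +
            cos_theta * axis[i] for i in range(3)]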
Photon mapping is generally a preprocess and is carried out before the main rendering of the image. Often the photon map is stored on disk for later use. Once the actual rendering has started, each intersection of an object and a ray is tested to see if it is within a certain range of one or more stored photons, and if so, the energy of the photons is added to the energy calculated using a standard illumination equation. The slowest part of the algorithm is searching the photon map for the nearest photons to the point being illuminated.
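A simplified view of how the photon map might be consulted during this rendering pass is sketched below. The direct_lighting callable and the fixed gather radius are assumptions made for illustration; in practice the k-nearest-neighbour estimate shown earlier, backed by the kd-tree, replaces this fixed-radius linear scan.

import math

def shade(hit_point, photon_map, direct_lighting, gather_radius=0.1):
    # Standard (e.g. ray-traced) direct illumination at the intersection point.
    radiance = direct_lighting(hit_point)
    # Add the power of every stored photon within the gather radius.
    gathered_power = 0.0
    for position, _incoming_dir, power in photon_map:
        dist_sq = sum((position[i] - hit_point[i]) ** 2 for i in range(3))
        if dist_sq <= gather_radius ** 2:
            gathered_power += power
    # Divide by the disc area to convert the gathered flux into radiance.
    return radiance + gathered_power / (math.pi * gather_radius ** 2)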
Although photon mapping was designed to work primarily with ray tracers, it can also be used with scanline renderers.