A Lens Model for Ray Tracing

by Tim Ledlie and Matt Hanlon
12 January 2001
Computer Graphics 275

Introduction
The standard ray tracing algorithm is based on a pinhole camera model.  In this model, all rays are cast through a single point onto an image screen (Fig. 1).  As a result, only light from one direction hits a given point on the screen. [N.B. In our original ray tracer the image screen was considered to be between the 'pinhole' and the objects in the world, but the result is equivalent.] 

Fig. 1. In a pinhole model, light can only get from a point (a) in the world onto the film plane (c) by following a single path.


Although it is possible to create an actual pinhole camera, such a device is not generally a practical optical instrument.  Real-world instruments, like a camera or an eyeball, admit light through an opening, or aperture, that is significantly larger than a point.  Such an aperture allows light from a much larger cone of space to reach a single point on the screen, so it becomes necessary to focus the incoming light into a coherent image using a lens.  For our project, we have extended the ray tracer produced in assignment 6 to include a more realistic camera (eyeball) model, using an aperture of finite size and a lens.

 

Algorithm

To create an image, we represent the screen as an image plane, just as in the original ray tracer.  Rather than cast a single ray from a given point P on the screen, however, we must now represent the fact that light from every point on the lens reaches P.  We model this by casting rays from P toward many points on the lens, chosen in some random fashion (see below), and averaging the results.  Note that in our model, we represent the aperture of the system by restricting the radius of the lens, i.e., the surface at which we cast rays.

Having selected a set S of points si on the surface of the lens, we must now compute a refraction through the lens and cast the rays into the 'world'.  Actual lenses usually have spherical surfaces.  It is possible, then, to cast rays through the lens and explicitly refract them through the lens's geometry.  Less costly, however, is the method of applying an idealized perspective transformation to the rays.  Convex lenses, the ones modeled here, have the property that rays of light parallel to the axis of the lens (in our model always taken to be the z axis of the eye coordinate system) will all be refracted to pass through a single point, called the focal point.  The distance of the focal point from the center of the lens is called the focal length, f.  It follows from this that an object at distance z on one side of the lens will have an image at (signed) distance z' on the other side, where 1/z - 1/z' = 1/f.  The total transformation from point P to image Q can be expressed as a matrix multiplication.  If P is at (x, y, z) and the lens has thickness t and focal length f, and

    [ X' ]   [ f    0     0      0  ] [ x ]
    [ Y' ] = [ 0    f     0      0  ] [ y ]
    [ Z' ]   [ 0    0   f + t   -tf ] [ z ]
    [ W' ]   [ 0    0    -1      f  ] [ 1 ]

then the image Q is at the point (X'/W', Y'/W', Z'/W').  (Dividing through, this is just the lens equation above in homogeneous form, x' = fx/(f - z) and y' = fy/(f - z), with the image displaced along the axis by the thickness t.)  Note that this is a 'thick lens' approximation, i.e., it can take into account lenses of non-negligible thickness.

In a camera (or eyeball), the image that is created on the film (or retina) is upside-down and backwards, and is only later made into a correctly-oriented image by the developer (brain).  So too does the above transformation produce a flipped image.  We have therefore also multiplied the resulting x and y coordinates by -1 to correct for this fact.
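As a minimal sketch of the whole computation, assuming a simple Point3 type (the names here are illustrative stand-ins, not the project's actual lens.hh interface):

    // Illustrative sketch only: Point3 and lensImage are hypothetical
    // stand-ins for the project's actual lens.hh interface.
    struct Point3 { double x, y, z; };

    // Map a screen-side point P (in lens coordinates) to its image Q
    // through a lens of focal length f and thickness t, using the matrix
    // above, then negate x and y to un-flip the image.
    Point3 lensImage(const Point3& P, double f, double t)
    {
        double X = f * P.x;
        double Y = f * P.y;
        double Z = (f + t) * P.z - t * f;
        double W = f - P.z;
        return Point3{ -X / W, -Y / W, Z / W };
    }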

This transformation returns a point Q in the world through which every ray that reaches P on the screen must pass.  Therefore we can simply cast a ray at this Q from each point in S and average the results to compute the radiance at P (Fig. 2).

Fig. 2. Rays are cast from our set of points S through Q, the image point of P.  The results are weighted and averaged to compute the color at P.  In this case, since some rays hit an object and others do not, the pixel at P will be part of a blurry image of the object.

It is also important to note that, due to properties of the lens, the amount of light reaching a point P from a given si also varies with the distance between the two points and with the angle that the ray between them makes with the z-axis.  To make the model more accurate, the averaging of the incoming rays must therefore be weighted by a factor of (cos²θ) / d², where d is the length of the segment between P and si and θ is the angle between that segment and the lens normal (the z-axis). [Kolb, Mitchell, Hanrahan]
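A sketch of how this weighted average might look in code (again with illustrative names; traceRay stands in for the existing ray tracer, and Vec3/Color for the project's actual types):

    #include <cmath>
    #include <vector>

    struct Vec3  { double x, y, z; };
    struct Color { double r, g, b; };

    // Stand-in for the existing ray tracer: radiance along the ray that
    // starts at 'origin' and passes through 'target'.
    Color traceRay(const Vec3& origin, const Vec3& target);

    // Weighted average of the rays cast from the lens points S through Q,
    // the image of the screen point P.
    Color radianceAt(const Vec3& P, const Vec3& Q, const std::vector<Vec3>& S)
    {
        Color sum = { 0, 0, 0 };
        double wsum = 0;
        for (const Vec3& si : S) {
            double dx = si.x - P.x, dy = si.y - P.y, dz = si.z - P.z;
            double d = std::sqrt(dx*dx + dy*dy + dz*dz);  // |P - si|
            double cosq = std::fabs(dz) / d;              // angle with z axis
            double w = (cosq * cosq) / (d * d);           // (cos^2 theta) / d^2
            Color L = traceRay(si, Q);                    // ray from si through Q
            sum.r += w * L.r;  sum.g += w * L.g;  sum.b += w * L.b;
            wsum += w;
        }
        return Color{ sum.r / wsum, sum.g / wsum, sum.b / wsum };
    }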

 

Randomization

A crucial question we dealt with in this project was how best to choose the points in S through which we cast rays.  Our first solution was to pick n points from the lens surface with uniform probability over the area of the lens.  This method suffices, but produces 'grainy' or speckled images, for the following reason.  Imagine a portion of the image screen that, in an ideal situation, would be a blurry image of points near the edge of a yellow object in black space.  In real life, rays from some points on the lens would intersect the object, while others would not.  Choosing random lens points in our model, it is possible that we happen to choose no rays that intersect the object, resulting in a black pixel, or only rays that hit the object, resulting in a bright yellow pixel.  Of course, the graininess decreases as the number of random lens points chosen per screen point increases, but this quickly grows costly.

Ideally, to reduce this effect, we would select rays from a probability distribution based on the above weighting formula for radiance (which would also eliminate the need to weight the rays afterward).  However, this proved impractical to implement.  Instead, we wanted a more straightforward method that forces an 'evenness' in the distribution of random points.  To this end we divide the lens into n concentric rings of equal width, and from each ring choose m points uniformly over the area of the ring, where m and n are arbitrary inputs.  Note that when n=1 this method is equivalent to our earlier, uniform distribution.  Because the inner rings are smaller in area but receive the same number of points, this method concentrates rays through the center of the lens, which is advantageous since such a distribution will be "best" for the center of the image, which is closest to the center of the lens.
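A sketch of this ring-based sampling (illustrative; a simple 2-D point type rather than the actual LENS class):

    #include <cmath>
    #include <cstdlib>
    #include <vector>

    struct LensPoint { double x, y; };

    // Divide a lens of the given radius into n concentric rings of equal
    // width and choose m points per ring, uniform over each ring's area.
    std::vector<LensPoint> sampleLens(double radius, int n, int m)
    {
        const double PI = 3.14159265358979;
        std::vector<LensPoint> pts;
        double w = radius / n;                       // width of each ring
        for (int i = 0; i < n; ++i) {
            double r0 = i * w, r1 = r0 + w;          // inner and outer radii
            for (int j = 0; j < m; ++j) {
                double u = std::rand() / (double)RAND_MAX;
                double v = std::rand() / (double)RAND_MAX;
                // Uniform over the annulus area: interpolate in r^2, not r.
                double r = std::sqrt(r0 * r0 + u * (r1 * r1 - r0 * r0));
                double phi = 2 * PI * v;
                pts.push_back(LensPoint{ r * std::cos(phi), r * std::sin(phi) });
            }
        }
        return pts;
    }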

We made one further modification with the intent of speeding up the algorithm.  We now include the option to generate the random lens points before any ray casting, and to use the same set of points for every point on the image screen.  This change results in slightly faster rendering times, especially for large S.  Perhaps more interestingly, since we use the same S for every screen point, "speckling" does not result from random "misses" of the cast rays.  Rather, each pixel tends to have random "hits" and "misses" in approximately the same pattern.  The result is ghostly silhouettes surrounding out-of-focus objects.  Curiously, these silhouettes often look more like the "real blur" one might see through an out-of-focus camera (or while not wearing one's glasses, as the myopic authors of this document know all too well).  We believe that this is a result of a bias in our perceptual system, rather than of the blur actually being "better".  However, this result might indicate that a fixed-S system is more desirable for modestly sized sets of lens points.
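In outline, the option works as follows (a sketch with hypothetical names, reusing the sampleLens sketch above; this is not the actual render.cc loop):

    // Sketch of the main loop's use of pre-randomization.
    void renderImage(int width, int height, bool preRandomize,
                     double aperture, int rings, int pointsPerRing)
    {
        std::vector<LensPoint> S;
        if (preRandomize)                  // one S shared by all pixels
            S = sampleLens(aperture, rings, pointsPerRing);
        for (int row = 0; row < height; ++row) {
            for (int col = 0; col < width; ++col) {
                if (!preRandomize)         // fresh S for every pixel
                    S = sampleLens(aperture, rings, pointsPerRing);
                // ... compute the screen point P and its image Q, then
                //     average the rays from S through Q as described above ...
            }
        }
    }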

It is worth noting that the undesirable "speckling" and "silhouetting" might be fixed rather easily with an overlaid jittering process, perhaps computing the radiance from four nearby regions of the image screen in any of the above ways and averaging them.  Because the ray casting was already, in our estimation, slow enough, we decided that this additional feature would not be worthwhile.

 

Code Design
This project was built in C++, extending the (solution) code for assignment 6.  A LENS class, defined in lens.hh, was created; it contains the properties of the lens set as inputs to the renderer, and the appropriate functions (to choose random lens points and to find the image of a given point in space through the lens) reside in lens.cc.  The parser.y file was also altered to accept the syntax described below.  The primary changes to render.cc were in the "big" for-loop, to generate random points and cast rays appropriately, as described above.

 

Method of Use
The extended ray tracer accepts as its command-line argument an input file identical to the original specification, with the addition of an (obligatory) Lens section between the World section and the Lights section.  The Lens section has seven parameters, in the following format:

Lens{ <f> <z> <aperture> <thickness> <rings> <pointsPerRing> <preRandomize> }

where:

  • f is the focal length (float)
  • z is the position of the lens along the z axis of the 'eye' coordinates (float). [Note that the eye always 'looks down' the negative-z direction, and our screen is always taken to be centered at z=0.  The value for this parameter is typically around -1.]
  • aperture is the radius of the aperture (float).
  • thickness is the thickness of the lens in the z-dimension (float).
  • rings is the number of concentric rings the lens is divided into for randomization (int).
  • pointsPerRing is the number of random points chosen per ring (int).
  • preRandomize is a Boolean value that, when true (!=0), indicates that the random points [the points in S] should be pre-calculated and used to cast rays from all points on the image screen.  When false (=0), new random points are calculated for each point on the screen (bool/int).

Note that one may approximate the pinhole camera model by using an arbitrarily small aperture and thickness=0.
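For example, a Lens section matching the thin-lens setup used in the Results below (focal length 0.95238, lens at z = -1, aperture 0.2, thickness 0, 3 rings of 4 points each, pre-randomization on) would read:

    Lens{ 0.95238 -1 0.2 0 3 4 1 }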

 

Results
The extended version of render was able to produce many interesting outputs.  The thumbnails below link to larger jpeg files, and there are also links to the complete .tga files.  All images were created with this input file.  Most significantly, we were able to use the lens system to focus on elements at varying distances from the film plane.
This image of three balls was produced using the solution render275 (pinhole model). (3ball16.tga)
By using the formula described above, we can focus the image at one plane at a time.  The next series of images explores this capability by using a constant screen-to-lens distance (z = -1) and varying the focal length of the lens.  Note that this is analogous to the focusing system of the eye, in which the lens changes, rather than that of a camera, in which the properties of the lens stay constant but the z-distance is movable.  These images were created using a thin lens with 12 rays (3 rings × 4 points per ring) cast per screen point, and a relatively small aperture of 0.2.
In this image, the green ball is in focus.  The focal length is 0.95238. (3ball27.tga) Hint: Close one eye and stare for a few moments at the green ball.  Your brain might be tricked into perceiving the green ball as closer in the z-dimension.
Here, the red ball is in focus with f = 0.967742. (3ball28.tga)
Finally, the blue ball is in focus, with f = 0.990099. (3ball29.tga)
All of the above images were created using the "pre-randomizing" technique, where the same random set of lens points was used for each screen point.  A different sort of blurriness was achieved when a new random set was calculated each time.
This image uses a new set of random points for each screen point.  Note the grainy, speckled quality. (3ball10.tga)
Conversely, this image uses the "pre-randomizing".  Note the halos. (3ball11.tga)
This image and the next one each have the same number of random rays per pixel as the two above, but more evenly distributed over the lens.

This image uses new random points for each pixel. (3ball12.tga)

This image uses "pre-randomizing".(3ball13.tga)
Another interesting feature is the ability to change the aperture, and along with it the depth of field.  Depth of field refers to the range along the z-axis of in-focus items.  The smaller the aperture, the larger this range. 
This image, again focused on the red ball, has a very small aperture (0.1) and therefore a large depth of field.  Note that the blue and green balls are only somewhat blurry. (3ball18.tga)
A much larger aperture (here 1.0) creates a small depth of field.  Here the red ball remains in focus, but even its immediate surroundings are very blurry. (3ball22.tga)
Another variable in the lens system is the thickness of the lens.
The thickness of the lens, here very large (10.0), affects both the focus and the perspective transform of the lens. (3ball7.tga)
Here are several other images we produced and found interesting, created with this input file.
Here is the image created with the solution (pinhole) ray tracer. (busy_pinhole.tga)
focus on the green ball (busy1.tga)
The following two images were made with preRandomization on.
focus on the pink ball (busy6.tga)
focus on the blue, refractive ball (busy7.tga)
The following two images were made with preRandomization off.
focus on the pink ball (busy8.tga)
focus on the blue, refractive ball (busy9.tga)
Information about the parameters used to create them is available in imageinfo.txt.
References
Craig Kolb, Don Mitchell, and Pat Hanrahan.  "A Realistic Camera Model for Computer Graphics."  Proceedings of SIGGRAPH 95.