Last updated at 1:08 pm UTC on 16 January 2006

CS4451 Final Project

Rather than look at the typical C source for a ray tracer, I decided to look at some object-oriented designs for a ray tracer. Ray tracers implemented in C are fairly clean, but not nearly as clean as the Java ray tracer I found. I will go through the code step by step, explaining or questioning each step as I go.

The code can be downloaded here: java_renderer.tar.gz

The gzipped tar file contains compiled Java classes along with an HTML file. Using a Java-enabled browser, you can open the HTML file to see the renderer in action.

Object Creation

The code begins in miniray.java in the init() function. The init() function creates a new offscreen image and a new Scene object.

This is the init() function:

public void init() {
	offImage = createImage(size().width, size().height);
	offGraphics = offImage.getGraphics();
	world = new Scene();
}

The Scene constructor creates four objects, each of which is a sphere. A sphere is created by specifying a radius, the center position, and the material of the object. The material is specified by a diffuse component, a specular component, and a specular constant.

This is the beginning of the constructor; more objects are created, but only one is shown here.

Scene() {
	numObj = 0;
	obj[numObj++] = new Sphere(1., new Vector(1., 1., -8.),
				   new Material(
				   new Vector(.6, .2, .2),
				   new Vector(.5, .5, .5),
				   65.));

The first sphere has a radius of 1.0 and is centered at (x=1, y=1, z=-8). Its material has (0.6, 0.2, 0.2) as the diffuse component, (0.5, 0.5, 0.5) as the specular component, and 65 as the specular constant.
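The applet's Sphere class is not listed on this page, but its intersection test is almost certainly the standard ray-sphere quadratic. Here is a standalone sketch of that test; the class and method names are my own, not the applet's:

```java
// Minimal sketch of a ray-sphere intersection test (not the applet's
// actual Sphere class; names and signatures here are assumptions).
public class SphereSketch {
    final double radius;
    final double cx, cy, cz;   // center of the sphere

    SphereSketch(double radius, double cx, double cy, double cz) {
        this.radius = radius;
        this.cx = cx; this.cy = cy; this.cz = cz;
    }

    // Returns the distance t >= 0 to the nearest hit along the ray
    // org + t*dir (dir assumed normalized), or -1 if the ray misses.
    double intersect(double[] org, double[] dir) {
        // Vector from ray origin to sphere center
        double ox = cx - org[0], oy = cy - org[1], oz = cz - org[2];
        // Projection of that vector onto the ray direction
        double b = ox*dir[0] + oy*dir[1] + oz*dir[2];
        double c = ox*ox + oy*oy + oz*oz - radius*radius;
        double disc = b*b - c;            // discriminant of the quadratic
        if (disc < 0.) return -1.;        // no real roots: miss
        double t = b - Math.sqrt(disc);   // try the nearer root first
        if (t < 0.)                       // origin inside the sphere
            t = b + Math.sqrt(disc);
        return (t < 0.) ? -1. : t;
    }

    public static void main(String[] args) {
        // Unit sphere at (0,0,-8), ray from the origin straight down -z:
        SphereSketch s = new SphereSketch(1., 0., 0., -8.);
        double t = s.intersect(new double[]{0., 0., 0.},
                               new double[]{0., 0., -1.});
        System.out.println(t);  // hits the front of the sphere at t = 7
    }
}
```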

The material's constructor looks like this:

Material(Vector inDiffuse, Vector inSpecular, double inKspec) {
	diffuse = new Vector(inDiffuse.getX(), inDiffuse.getY(),
			     inDiffuse.getZ());
	specular = new Vector(inSpecular.getX(), inSpecular.getY(),
			      inSpecular.getZ());
	kspec = inKspec;
}

Main algorithm

After creating all the objects and initializing them with the correct materials, the Java applet enters the run() function, which looks like this:

public void run() {
	while (numRows < height) {
		doRow();
		repaint();
		try {
			Thread.sleep(10);  // pause between rows (interval not shown in the printed listing)
		} catch (InterruptedException ie) {
		}
	}
}

This part of the code runs as a background thread. It calls doRow() until the number of rows equals the height, repainting the image after each doRow() completes.

It looks like doRow() is the scanline algorithm. Here is the doRow() function:

public void doRow() {
	// generate pixels
	int iy = numRows;
	int off = iy * width;
	double y = span/2. - numRows * xyinc;
	double x = -span/2.;
	for (int ix = 0; ix < width; ix++) {
		Vector dir = new Vector(x, y, -1.);
		dir = dir.normalize();
		Vector col = trace(origin, dir, 1, 1.);
		int a = 255;
		int r = (int)(col.getX() * 255.);
		if (r > 255)
			r = 255;
		int g = (int)(col.getY() * 255.);
		if (g > 255)
			g = 255;
		int b = (int)(col.getZ() * 255.);
		if (b > 255)
			b = 255;
		pgPixels[off++] = (a << 24) | (r << 16) |
				  (g << 8) | b;
		x += xyinc;
	}
	numRows++;
	// create the image
	img = createImage(new MemoryImageSource(width, height,
						pgPixels, 0, width));

doRow() is one pass of the scanline loop: it walks across a single row of the image, sending out a ray through each pixel with the trace() call at the top of the loop. The result of trace() is a vector whose x, y, and z components are the red, green, and blue values. The a value is the alpha channel and is always 255. Each packed pixel value is stored into the pgPixels[] array, which holds the whole image (off starts at iy * width), and the array is turned back into an image with MemoryImageSource and createImage.
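doRow() packs each pixel's alpha, red, green, and blue bytes into a single int before handing the array to MemoryImageSource, which uses this ARGB layout by default. A standalone sketch of that packing (my own demo class, not part of the applet):

```java
// How a ray tracer like this one packs one pixel: 8 bits each of
// alpha, red, green, and blue in a single int, the default ARGB
// layout expected by java.awt.image.MemoryImageSource.
public class ArgbDemo {
    static int pack(int a, int r, int g, int b) {
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        // Roughly the (.6, .2, .2) diffuse color scaled to 0..255:
        int px = pack(255, 153, 51, 51);
        System.out.printf("%08x%n", px);   // prints ff993333
        // Unpacking reverses the shifts; the mask strips the higher bytes:
        int r = (px >> 16) & 0xff;
        System.out.println(r);             // prints 153
    }
}
```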

Sending a ray out starting at trace()

The main algorithm calls trace() on each pixel as you can see from the doRow() function above.

This is a snippet of code from the doRow() function:
		Vector dir = new Vector(x, y, -1.);
		dir = dir.normalize();
		Vector col = trace(origin, dir, 1, 1.);

Here we can see it builds a direction vector "dir" through the current pixel's (x, y) position and normalizes it. From the trace() function below, we can see it trying to find the closest intersection of the ray from the origin through the current scanline pixel. If there is an intersection (i.e. closest != null), we get the color from the shade() function; if there isn't an intersection, the color is set to the background color.

public Vector trace(Vector org, Vector dir, int level,
		    double weight) {
	Vector col = new Vector();
	if (level >= maxLevel || weight < minWeight)
		col = origin;   // origin is presumably the zero vector, i.e. black
	else {
		Intersection closest = world.intersect(org, dir);
		if (closest != null)
			col = shade(closest, org, dir, level, weight);
		else
			col = backCol;
	}
	return col;
}

The variable "closest" is an Intersection object that knows which object in the scene the ray intersects, as well as the distance and position of the intersection.

This code snippet is from the shade() function; I will step through it:
public Vector shade(Intersection closest, Vector org, Vector dir,
		    int level,  double weight) {

As a reminder, "closest" is the Intersection object. From it we recover the object that was hit (one of the four objects created in the very first step), its material, the intersection point, and the surface normal at that point.

	SceneObject obj = closest.getObject();
	Material matl = obj.getMaterial();
	Vector pnt = closest.getPosition();
	Vector normal = obj.getNormal(pnt);

We find the direction of the light source:

	Vector lightDir = lightPos.subtract(pnt);
	lightDir = lightDir.normalize();

We find the diffuse component and send a ray out toward the light source to see if the point is in shadow.

	double diffFac = lightDir.dot(normal);
	if (diffFac < 0.) // self shadowing
		diffFac = 0.;
	Intersection shadower = world.intersect(pnt, lightDir);
	if (shadower != null) // shadowed
		diffFac = 0.;

We find the reflection direction:

	Vector shortNorm = normal.scale(- normal.dot(dir));
	Vector off = shortNorm.subtract(origin.subtract(dir));
	Vector reflDir = off.add(shortNorm);
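The three lines above are a compact (and somewhat obscure) way of computing the mirror reflection R = d - 2(d.n)n: since the eye sits at the origin (the zero vector), origin.subtract(dir) is just -dir. A small standalone check of that formula, using plain arrays rather than the applet's Vector class:

```java
// Numeric check of the mirror-reflection formula R = d - 2(d.n)n
// that the three lines above implement. Uses plain double arrays,
// not the applet's Vector class.
public class ReflectDemo {
    static double[] reflect(double[] d, double[] n) {
        // dn is the (signed) length of d's projection onto the normal n
        double dn = d[0]*n[0] + d[1]*n[1] + d[2]*n[2];
        return new double[] { d[0] - 2*dn*n[0],
                              d[1] - 2*dn*n[1],
                              d[2] - 2*dn*n[2] };
    }

    public static void main(String[] args) {
        // A ray going "down and forward" hitting a floor with normal +y
        // bounces back up at the same angle:
        double[] r = reflect(new double[]{1, -1, 0}, new double[]{0, 1, 0});
        System.out.println(r[0] + " " + r[1] + " " + r[2]);  // 1.0 1.0 0.0
    }
}
```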

We determine the specular component: if the reflected ray is near the specular highlight, we compute a specular value; otherwise the value is zero.

	double specFac = 0.;
	if (diffFac > 0.) {
		specFac = lightDir.dot(reflDir);
		if (specFac > 0.)
			specFac = Math.pow(specFac, matl.getKspec());
		else
			specFac = 0.;
	}

We calculate the reflection color by calling trace() again with the level incremented. If you look back at the trace() function, you will see that once the level reaches "maxLevel", we stop recursively tracing reflections. Note that trace() calls shade() again, and shade() will call trace() again as long as the level has not reached "maxLevel". This recursion determines the reflection color.

	Vector reflCol = trace(pnt, reflDir, level+1,
		      weight * matl.getSpecular().maximumElement());
	// diffuse and specular terms (these two lines are truncated in
	// the printed listing; the completions below are my best guess)
	Vector diffProd = matl.getDiffuse().scale(diffFac);
	Vector specProd = matl.getSpecular().scale(specFac);
	Vector reflProd = matl.getSpecular().product(reflCol);
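The recursion is bounded in two ways: the level reaching maxLevel, and the weight falling below minWeight after being scaled by the material's largest specular component at each bounce. A standalone sketch of how those two cutoffs interact; maxLevel and minWeight are not shown in the listing, so the values here are assumptions:

```java
// How the two cutoffs in trace() bound the recursion depth. The
// applet's actual maxLevel and minWeight are not shown in the
// listing, so the constants here are assumed values.
public class RecursionBound {
    static final int maxLevel = 5;
    static final double minWeight = 0.01;

    // Count how deep the bounces go when each bounce multiplies the
    // ray's weight by the material's largest specular component.
    static int bounces(double spec) {
        int level = 1;
        double weight = 1.;
        while (level < maxLevel && weight >= minWeight) {
            weight *= spec;  // mirrors trace(..., level+1, weight * maxSpec)
            level++;
        }
        return level;
    }

    public static void main(String[] args) {
        System.out.println(bounces(0.5));   // shiny surface: stops at maxLevel, 5
        System.out.println(bounces(0.05));  // dull surface: weight dies first, 3
    }
}
```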

The final color value is calculated here and returned to the calling function.

	// the sum is elided in the printed listing; combining the three
	// contributions is my best guess at what it computes
	Vector col = diffProd.add(specProd).add(reflProd);
	return col;
}

Overall impressions

I feel that writing a renderer in an object-oriented language works quite well. Although this renderer only renders spheres, the design easily accommodates other objects. A Plane object is included with the renderer should you want both planes and spheres in your image. Other objects are certainly possible and only require a new class defining the object and a method for computing a ray's intersection with it.
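To illustrate that pattern, here is a standalone sketch of the intersection method a new object class would need, using a plane as the example. The bundled Plane class is not shown on this page, so the names and representation here are my own, not the applet's code:

```java
// Sketch of the intersection test a new scene-object class needs,
// using an implicit plane n.p + d = 0 as the example. This is NOT
// the applet's bundled Plane class; names here are assumptions.
public class PlaneSketch {
    final double nx, ny, nz;  // unit normal of the plane
    final double d;           // plane offset: points p satisfy n.p + d = 0

    PlaneSketch(double nx, double ny, double nz, double d) {
        this.nx = nx; this.ny = ny; this.nz = nz; this.d = d;
    }

    // Distance t >= 0 to the hit along org + t*dir, or -1 on a miss.
    double intersect(double[] org, double[] dir) {
        double denom = nx*dir[0] + ny*dir[1] + nz*dir[2];
        if (Math.abs(denom) < 1e-9) return -1.;  // ray parallel to plane
        double t = -(nx*org[0] + ny*org[1] + nz*org[2] + d) / denom;
        return (t < 0.) ? -1. : t;               // hit must be in front of org
    }

    public static void main(String[] args) {
        // A floor at y = -2 (normal +y, offset d = 2), ray straight down:
        PlaneSketch floor = new PlaneSketch(0., 1., 0., 2.);
        System.out.println(floor.intersect(new double[]{0., 0., 0.},
                                           new double[]{0., -1., 0.}));  // 2.0
    }
}
```

With a class like this in place, the Scene would only need to store it alongside the spheres and call its intersect method from world.intersect().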

The design of this renderer was done very well, and it seems to create fairly good-looking images. However, the garbage collection of object-oriented systems tends to slow things down quite a bit, so this approach probably would not be suitable for a production-level renderer.

Dean Mao