Volume Rendering of the Photographic Visible Human Data Set


Steven Matuszek
matuszek@umbc.edu
University of Maryland Baltimore County


Abstract

Most volume rendering is done from true density data, such as the results of MRI and CT scans. This data is not as high-resolution as photographic data. An application is devised to create density data from photographs and combine it with the color information, resulting in high-resolution, full-color volume renderings. The data set used is the Visible Human data, and the Visualization Toolkit (VTK) is used for rendering.

Brightness-based and edge-extraction schemes are tried for the mapping of color to density. Both show promise, and this work should be extended.


Background and Motivation

Accurate three-dimensional modeling of human anatomy has long been one of the canonical aims of computer graphics. A computer model can be used for training, teaching, or, in the case of data collected on an individual subject, diagnosis. Volume rendering is especially appropriate for these applications because of the nature of X-ray CT (computed tomography) and PET (positron emission tomography) scans, which provide true volumetric data.

One limitation of tomographic data is its low resolution. A typical X-ray CT scan results in a resolution of 512 by 512 pixels in each of 50 image planes (Watt, p. 298). This may be acceptable for some applications, but greater precision is available.

Enter the Visible Human Project. The Visible Human is a human cadaver, frozen and sliced into thousands of very thin cross sections. This provides us with data whose only limitation is the resolution at which we can scan it in.

The disadvantage, then, is that we are back to two-dimensional data, and must reconstruct the three-dimensional model from those data. Since it is photographic data, we must figure out how to separate the geometric data from artifacts of the data collection process, such as lighting effects. (This is a neat reversal of much of traditional graphics, which is concerned with the mapping from clean geometry to realistic images.)

In light of all this, we propose to build an application specifically for volume rendering of thorax data from the Visible Human project. This application will have two main modules; the first will interpret the data and use our knowledge of the anatomy that we expect to be present to create a three-dimensional model. The second will volume render that model to create realistic images.

Rather than reinvent the wheel, we will use existing volume rendering packages for the second part of the application. Specifically, the VTK (Visualization Toolkit) fits our needs, as the source code is available and new modules can be added; it also provides the user interface to interact with the volume renderer.

Volume rendering necessitates having a transfer function to calculate the accumulation of light through the rendered volume, and in the case of the human body this is most likely a representation of the densities of the tissues shown. Another approach, however, is to represent transitions between values as being more dense, resulting in the appearance of relatively solid outer boundaries around areas of like color, which are assumed to be body structures. I am leaning towards the second approach, as it is more abstract and requires less anatomical information.
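
For concreteness, that accumulation can be sketched as a standard front-to-back compositing loop; the struct and function names below are purely illustrative and not tied to any particular toolkit.

    #include <vector>

    // One sample along a ray: an emitted grey level and an opacity, both
    // already produced by some transfer function from the raw voxel value.
    struct Sample {
        double color;    // emitted intensity in [0, 1]
        double opacity;  // alpha in [0, 1]
    };

    // Front-to-back compositing: light from deeper samples is attenuated
    // by the opacity already accumulated closer to the eye.
    double compositeRay(const std::vector<Sample>& samples)
    {
        double color = 0.0;
        double alpha = 0.0;
        for (const Sample& s : samples) {
            color += (1.0 - alpha) * s.opacity * s.color;
            alpha += (1.0 - alpha) * s.opacity;
            if (alpha > 0.99) break;  // nearly opaque: early ray termination
        }
        return color;
    }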


Work done in this area

First, consider the Visible Human Project's own list of ongoing research (http://www.nlm.nih.gov/research/visible/visible_human.html). Among the links are NPAC's planar and axial viewers and NLM-Maryland's Visible Human Explorer, both of which show cross-sections (coronal, axial, sagittal) rather than attenuation-based 3-D images.

In fact, Loyola, CieMed, Queensland Tech, Michigan, the Colorado CHS, Stanford, Penn, and WUStL are all doing cross-sections, not volumetric views.

Cross-sectioning is not particularly difficult, once registration issues have been worked out. Some of these systems produce output that is also tagged with segmentation data (i.e. you click on a point and a status line reads "left ventricle"). But it isn't real-time. Nor is it what we are really doing here.

People doing surfaces

Some very well-known work by Bill Lorensen and the GE group (http://www.crd.ge.com/esl/cgsp/projects/vm/) involves extraction of surfaces from the volumetric data. They are getting excellent results, but this is not volume rendering in the sense of propagating a ray through a volume and calculating attenuation to determine illumination.

The Vesalius Project (http://www.cs.stevens-tech.edu/vesalius/) had the best images at the Visible Human Conference, if for no other reason than that they eschewed specular reflections. They have an interesting segmentation algorithm that we may or may not want to look into -- we aren't doing segmentation as such, but we do want to preserve color regions. However, we are more likely to approach this by considering local gradients. They are also doing surface reconstruction, not volume rendering.

People doing volume rendering

Image-based rendering

Image-based volume rendering is another related but not identical approach. Marc Levoy and Pat Hanrahan are doing this at Stanford. To quote their page,

The general notion of generating new views from pre-acquired imagery is called image-based rendering.... We are investigating a new image-based rendering technique that does not require depth information or image correspondences, yet allows full freedom in the range of possible views.

Well, depth information and image correspondences are exactly what we have in the Visible Human photographs, and we intend to make full use of them.

Preliminary Conclusions

The innovative aspect of this project is that it proposes to volume render based on color, as opposed to the actual density data that would be obtained from many scientific visualization applications, or indeed from the corresponding tomographic data of the Visible Human itself.

As for the volume rendering aspect, it proved difficult to find any work that attempts photograph- or color-based volume rendering. This isn't too surprising, considering there has probably never before been such a volume of points where the color of every point is known over all three dimensions. A search of SIGGRAPH's bibliography database turned up articles and papers from 1988 to 1998 about volume rendering, none of which appeared to be relevant. The ACM Digital Library similarly turns up papers about color maps, Victoria Interrante's paper about 3D lines and surface shape, and so forth, but nothing that seems congruent with what I'm doing.

So I would say that from a volume rendering perspective, this is a somewhat unusual visualization; and from a Visible Human perspective, most people who are volume rendering are doing so based on the CT and MRI data, not the color of every pixel, which is what we're doing. The advantage of this is that we should get much higher resolution.


Creating the Application

To create a volume to volume-render, it is necessary to infer volume data from the given color data. (Again, this is a reversal of the usual pipeline.) Thus the first part of our application is code to preprocess the data we are given to work from.

The initial image slices look like this:

As you can see, this image contains a great deal of extraneous information. The first filter I wrote cropped the images down to size. The second removed the blue from around the image. The blue is the gel in which the body was frozen. It's very pretty but we don't want it. Making use of the fact that little or none of the human body is green, blue or magenta, this filter simply replaces with black any pixel whose blue value is more than 1.2 times the red and green values. The result:
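
A sketch of that filter is below. The 1.2 threshold is the one described above (read here as blue exceeding both red and green); the struct and function names, and the assumption that each slice is held as interleaved 8-bit RGB triples from a raw PPM, are just illustrative.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // One slice held as interleaved 8-bit RGB triples, as read from a raw PPM.
    struct Image {
        int width = 0;
        int height = 0;
        std::vector<std::uint8_t> rgb;  // size = width * height * 3
    };

    // Blank out the blue freezing gel: any pixel whose blue channel exceeds
    // 1.2 times its red and green channels is assumed to be gel, not tissue,
    // and is replaced with black.
    void removeBlueGel(Image& img)
    {
        for (std::size_t i = 0; i + 2 < img.rgb.size(); i += 3) {
            const double r = img.rgb[i];
            const double g = img.rgb[i + 1];
            const double b = img.rgb[i + 2];
            if (b > 1.2 * r && b > 1.2 * g) {
                img.rgb[i] = img.rgb[i + 1] = img.rgb[i + 2] = 0;
            }
        }
    }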

Now what we have here is an image that is 1280 x 560 x 24-bit color. This is 2.15 megabytes of data. Eventually we will want to access this entire resolution, but for now, everything we are doing will work just as well with a lower-resolution version. So I shrank all these images down to one-sixteenth their size, resulting in images this actual size (the above are displayed smaller than actual size):
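
The shrinking step can be sketched as a simple box filter, assuming that one-sixteenth of the pixel count means averaging each 4 x 4 block down to one pixel (the filter actually used may differ). This reuses the Image struct from the de-bluing sketch above.

    // Shrink a slice to one-sixteenth of its pixel count by averaging each
    // 4 x 4 block of source pixels into one output pixel.
    Image downsampleBy4(const Image& src)
    {
        Image dst;
        dst.width = src.width / 4;
        dst.height = src.height / 4;
        dst.rgb.assign(static_cast<std::size_t>(dst.width) * dst.height * 3, 0);

        for (int y = 0; y < dst.height; ++y) {
            for (int x = 0; x < dst.width; ++x) {
                int sum[3] = {0, 0, 0};
                for (int dy = 0; dy < 4; ++dy) {
                    for (int dx = 0; dx < 4; ++dx) {
                        std::size_t s = ((static_cast<std::size_t>(y) * 4 + dy) * src.width
                                         + (static_cast<std::size_t>(x) * 4 + dx)) * 3;
                        for (int c = 0; c < 3; ++c) sum[c] += src.rgb[s + c];
                    }
                }
                std::size_t d = (static_cast<std::size_t>(y) * dst.width + x) * 3;
                for (int c = 0; c < 3; ++c)
                    dst.rgb[d + c] = static_cast<std::uint8_t>(sum[c] / 16);
            }
        }
        return dst;
    }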

By the way, the "de-bluing" process exposed a glitch that occurred only on certain slices. In some way, the red, green, and blue values, which are stored sequentially, got knocked out of phase, so that blues turned green, greens turned red, and reds turned blue. When such a slice was de-blued, the results looked like this:

I honestly still don't know why this happened -- some vagary of the PPM file format, I suppose. The fix was to take the first few bytes of the file (where you would expect a phase problem to begin) and replace them with the corresponding bytes from a healthy file. Since these bytes correspond to a few pixels at the upper left corner, there is no important difference in the result, but the phase shift disappears.

This done, the next step was to create a preliminary mapping of the color data onto density, and apply it to our color volume. The first attempt at this, for proof of concept, simply mapped brightness onto density. The brightness was taken as the average of the red, green and blue values, and this was divided by ten to produce a volume that would be mostly transparent (low density) but still retain structural information. It is still too dark to be distinguished clearly on many monitors, however, so this is a brightened example:
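
A sketch of that brightness-to-density mapping follows (again reusing the Image struct from the de-bluing sketch; the divide-by-ten scale is the one described above):

    // Map each RGB pixel of a slice to one 8-bit density value:
    // density = average(r, g, b) / 10, so bright tissue becomes slightly
    // denser than dark tissue but the volume stays mostly transparent.
    std::vector<std::uint8_t> brightnessToDensity(const Image& slice)
    {
        std::vector<std::uint8_t> density(slice.rgb.size() / 3);
        for (std::size_t i = 0; i < density.size(); ++i) {
            const double r = slice.rgb[3 * i];
            const double g = slice.rgb[3 * i + 1];
            const double b = slice.rgb[3 * i + 2];
            density[i] = static_cast<std::uint8_t>(((r + g + b) / 3.0) / 10.0);
        }
        return density;
    }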

With a density volume created, the next phase of the project was to implement the volume renderer in VTK. The toolkit already includes a vtkVolumeRenderer object, so all that was necessary was to customize it with our image name, size, and density-to-opacity mapping.

The source code can be found in code/ as Try.C.
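
As a rough illustration of the density-to-opacity part of that customization, here is a transfer-function sketch. Note that it is written against a later VTK API (vtkPiecewiseFunction and vtkVolumeProperty) rather than the vtkVolumeRenderer interface Try.C actually uses, and the breakpoints are arbitrary.

    #include <vtkPiecewiseFunction.h>
    #include <vtkVolumeProperty.h>

    // Build a simple linear density-to-opacity ramp: density 0 is fully
    // transparent, density 255 is moderately opaque.  The breakpoint values
    // are assumptions for illustration, not the ones used in Try.C.
    vtkVolumeProperty* makeDensityToOpacityProperty()
    {
        vtkPiecewiseFunction* opacity = vtkPiecewiseFunction::New();
        opacity->AddPoint(0.0, 0.0);
        opacity->AddPoint(255.0, 0.2);

        vtkVolumeProperty* property = vtkVolumeProperty::New();
        property->SetScalarOpacity(opacity);
        property->SetInterpolationTypeToLinear();
        opacity->Delete();  // the property keeps its own reference
        return property;
    }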

This is what the results look like from one view:

You can see that structure is maintained, especially in the bones at the top and bottom, the opening of the spinal column in the center, and the layers of muscle and fat. This, then, would seem to be a good start.

At this point I set out to extend the functionality of vtk's VolumeMapper to display color information on a voxel-by-voxel basis, rather than with a density-to-color function such as is usually used in volume rendering applications. I asked a question about this on the VTK mailing list and was contacted by Dr. Lisa Sobierajski Avila, one of the VTK developers. She informed me that she was in fact in the process of adding just such a capability to the vtk class, and that the code would be available soon.

If this is done for us, then the remainder of our project is the determination of the mapping from color to density. Therefore I set out to implement the next step up in complexity.

Our main interest in viewing this anatomical data is to be able to discern structures. One way of doing this is to create surfaces that correspond to the boundaries of anatomical structures, such as bones. This is why so many people are using the Visible Human volume data for surface extraction.

In our case, we would represent the surface of a structure by making it more visible than the rest of the tissue, implying higher density. To achieve this, I wrote a filter that applies a Laplacian operator to the image data, extracting the edges between areas of like color. This was the result:

You can see that this really brings out the structure in the data. When we use this as our density data, we get:

This result is adversely affected by the noise in the image, but the spinal column opening appears solid, as does the outer skin. This is clearly an approach with promise.
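
A sketch of the edge-extraction step described above, using the standard four-neighbour discrete Laplacian on per-pixel brightness (the kernel and scaling actually used may differ; the Image struct is the one from the de-bluing sketch):

    #include <cstdlib>

    // Apply a four-neighbour discrete Laplacian to per-pixel brightness and
    // keep the magnitude of the response, so transitions between areas of
    // like color come out dense and flat regions come out transparent.
    std::vector<std::uint8_t> laplacianDensity(const Image& slice)
    {
        const int w = slice.width, h = slice.height;
        auto brightness = [&](int x, int y) {
            std::size_t i = (static_cast<std::size_t>(y) * w + x) * 3;
            return (slice.rgb[i] + slice.rgb[i + 1] + slice.rgb[i + 2]) / 3;
        };

        std::vector<std::uint8_t> density(static_cast<std::size_t>(w) * h, 0);
        for (int y = 1; y < h - 1; ++y) {
            for (int x = 1; x < w - 1; ++x) {
                int lap = brightness(x - 1, y) + brightness(x + 1, y)
                        + brightness(x, y - 1) + brightness(x, y + 1)
                        - 4 * brightness(x, y);
                int v = std::abs(lap);
                density[static_cast<std::size_t>(y) * w + x] =
                    static_cast<std::uint8_t>(v > 255 ? 255 : v);
            }
        }
        return density;
    }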


Conclusions: Immediate Future Work

At this point, the end of the semester put a halt to my progress. I realize there is still much to be done on this project.

The brightness-based and edge-extraction density maps both show promise. The best results would probably come from a combination of the two techniques. The brightness technique could be extended by taking color into account; for example, assigning lower densities to redder voxels. The edge extraction probably just needs tweaking; posterizing the image before the transform could clamp out the noise that is causing the problems.
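
A posterization pass of the kind suggested might look like this (the number of levels is an arbitrary choice; the Image struct is the one from the de-bluing sketch):

    // Quantize each 8-bit channel to a small number of levels before the
    // Laplacian pass, so minor photographic noise no longer produces edges.
    void posterize(Image& img, int levels = 8)
    {
        const int step = 256 / levels;
        for (std::uint8_t& channel : img.rgb) {
            // Snap the channel to the centre of its quantization bucket.
            channel = static_cast<std::uint8_t>((channel / step) * step + step / 2);
        }
    }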

Also, as of this writing, the updated functionality of the vtkVolumeMapper has not yet been made available. I am waiting to hear from Dr. Avila about this. As soon as the change is released, I will incorporate it into my code.

To see the application in action, you can run /home/cerberus/sfa/steve/vtk/Try. The command line is:

Try 400 175 42 1280 1.0 1.0 3.0 0 2000 0 255

I believe the permissions are set up correctly -- in any case I would be happy to demonstrate the application.

And, of course, once it seems to run well, we can try it out on the full-resolution images, and should get some truly impressive results.


All the images
All the source code


Steven Matuszek
December 14, 1998