Hi, everyone. I am Ked. The title of my presentation is “Gradient Free Visualization with Multiple Light Approximates”.
This work grew out of the observation that MRI datasets produce worse rendering quality than CT scans. These are our previous rendering results. When the widely used Phong shading was applied, the MRI visualization produced more artifacts, which could easily be misleading.
This can be attributed to unreliable gradients caused by noise and numerical error.
For example, the left image is a sagittal slice of a brain dataset scanned by MRI. The estimated gradient is shown on the right: the top image shows the direction and the bottom shows the magnitude.
As marked by the red rectangles, unreliable gradients usually occur in regions of low magnitude and are easily affected by noise.
Traditional visualization methods such as Phong shading depend heavily on gradient information, so the volume rendering is noisy, as shown on the right.
Here, we propose a gradient-free approach that avoids this tricky estimation and improves the quality of medical imaging. In this image, white matter and grey matter are classified as the green material and the white material.
Our algorithm is a two-pass approach.
The first pass is an off-line process that accumulates attenuation maps along orthogonal directions.
In the second pass, the contribution of each light is approximated by linearly interpolating the attenuation maps. The whole process is gradient-free and supports real-time interaction.
The attenuation maps are derived from the standard volume rendering integral.
Here, the function c hat gives the color of each voxel, and that color is exponentially attenuated by the summation over the voxels in front of it.
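Written out under the usual emission-absorption model, the integral takes a form like the following (my own hedged reconstruction in common notation; the exact symbols on the slide may differ):

```latex
I(\mathbf{p}) = \int_{0}^{L} \hat{c}(t)\,
    \exp\!\Big(-\gamma \int_{0}^{t} \tau(s)\, ds\Big)\, dt
```

where $\hat{c}(t)$ is the voxel color along the ray, $\tau(s)$ is the opacity of the voxels in front of it, and $\gamma$ is the global attenuation factor mentioned later in the talk.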
So our attenuation map is the pre-computed summation along a direction. In the implementation, a smoothing function is applied to avoid artifacts.
The attenuation map for the opposite direction can be obtained by subtracting the two attenuation values.
Therefore, we create six orthogonal attenuation maps in total in the first pass.
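The first pass can be sketched as follows. This is a minimal NumPy illustration under my own assumptions (the function name, the use of `cumsum`, and the exponential form are not taken from the talk; the smoothing step is omitted):

```python
import numpy as np

def attenuation_maps(volume, gamma=0.05):
    """Hypothetical sketch of the first pass: precompute six
    directional attenuation maps, one per signed axis.

    For each axis, opacity is accumulated along the positive
    direction; the opposite direction follows by subtracting the
    running sum from the per-ray total, as described in the talk.
    """
    maps = {}
    for axis in range(3):
        forward = np.cumsum(volume, axis=axis)   # accumulated opacity toward +axis
        total = forward.take([-1], axis=axis)    # total opacity along each ray
        backward = total - forward               # accumulated opacity toward -axis
        maps[(axis, +1)] = np.exp(-gamma * forward)
        maps[(axis, -1)] = np.exp(-gamma * backward)
    return maps
```

In practice the maps would be computed at a lower resolution than the volume, which is what keeps this pass down to a few seconds.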
In the second pass, the light vector D is decomposed onto the x, y, and z axes by inner products, and the signs of the projections are used to select the attenuation maps for computation.
Finally, the lighting of a voxel is the sum of the contributions from each light. The lighting value is then multiplied by the assigned color to produce the shaded result.
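A minimal sketch of the second pass for a single voxel and a single light source. The function name and the weighting by absolute projection are my assumptions; the talk only states that the signs of the projections select the maps:

```python
import numpy as np

def shade(voxel_idx, light_dir, maps, color, brightness=1.0):
    """Hypothetical second-pass shading for one voxel.

    Each component of the normalized light direction supplies both
    the map selection (by its sign) and the interpolation weight
    (by its magnitude); the weighted sum is then multiplied by the
    assigned material color.
    """
    d = np.asarray(light_dir, dtype=float)
    d /= np.linalg.norm(d)
    light = 0.0
    for axis in range(3):
        sign = 1 if d[axis] >= 0 else -1        # select the map by projection sign
        weight = abs(d[axis])                   # weight by projection magnitude
        light += weight * maps[(axis, sign)][voxel_idx]
    return brightness * light * np.asarray(color)
```

With multiple light sources, the same routine would simply be summed over the lights before the color multiplication.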
In this implementation, the global attenuation factor gamma is adjustable by users. A smaller attenuation factor creates a more uniform lighting result, while a larger one enhances the outer surface and blends inner structures into the background. This image shows the effects of different attenuation factors and brightness scales.
This is a comparison of different lighting results. It is obvious that lighting improves the spatial perception compared with unshaded rendering, and our multiple-light approach provides more detail than a single light source.
Our method can be easily extended to exploratory modes. For example, when unnecessary structures are removed, the lighting should be corrected by subtracting the attenuation value at the cutting plane.
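The clipping correction might look like the following. This is a speculative sketch assuming the accumulated sums (before exponentiation) are kept for one axis; the function and parameter names are mine, not the speaker's:

```python
import numpy as np

def clip_correct(forward_sum, cut_index, axis=0, gamma=0.05):
    """Hypothetical correction of an attenuation map after a clipping
    plane removes material along one axis.

    Subtracting the accumulated sum at the plane from every deeper
    voxel removes the clipped structures' contribution; voxels in
    front of the plane are clamped to zero accumulated opacity.
    """
    plane = forward_sum.take([cut_index], axis=axis)   # accumulated sum at the cut
    corrected = np.clip(forward_sum - plane, 0.0, None)
    return np.exp(-gamma * corrected)
```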
Moreover, we can rewrite the volume rendering integral so that the spatial information is mainly provided by the lighting.
As shown in the image, this visualization is similar to a maximum intensity projection but contains more spatial information.
Our system was built on an Acer Aspire laptop. This table lists the datasets used in our experiments and their time costs. It is worth noting that the resolution of the attenuation maps is lower than that of the original data, and computing the attenuation maps takes only a few seconds. The rendering stage achieves about ten FPS for interaction.
These are additional results from our experiments.