Konstantin Shkurko

CS 6630, Introduction to Visualization

The goal of this course is to develop a broad understanding of the principles, methods, and techniques for designing effective visualizations of data. The course spanned a wide range of topics, encompassing foundations in both information and scientific visualization. We gained experience using cutting-edge data analysis tools, as well as developing our own visualization tools using a variety of languages and toolkits.

Projects

  1. data exploration
  2. time series
  3. parallel coordinates
  4. 2D spatial data
  5. 3D spatial data
  6. DVR with photon map


Final. DVR with Photon Map

Updated: 12.13.12

Description

You should pick a paper, find a data set, and implement the technique. In your notebook, you'll need to include a (short) summary of the paper you are implementing and why you chose it. Then, describe in detail your implementation process, including challenges, design decisions, any deviations or improvements on the original description, etc. Is this an effective technique? Why? In what situations is this a good technique to use? What extensions do you think would be interesting to follow-up on? Also, make sure to cite any outside libraries or code that you used.

Note: This is a joint project between Sam Quinan and me.

Summary

This project was inspired by a recent paper, "Historygrams: Enabling Interactive Global Illumination in Direct Volume Rendering using Photon Mapping" by Jonsson et al., IEEE Transactions on Visualization and Computer Graphics, 2012.

The main contribution of this work is an incremental update algorithm that allows interactive changes to transfer functions by leveraging an additional acceleration structure utilizing key properties of photon mapping. Besides providing global illumination effects like shadows and color bleeding, photons are inherently nested such that any change to a parent propagates to all of its descendants. The key novel acceleration structure proposed by the authors leverages this fact and introduces historygrams, which encode this strongly-linked relationship between parents and descendants.

There are two key questions we feel this paper raises within the context of direct volume rendering. The first is: what are the benefits of global illumination (GI)? The second is whether photon mapping is a suitable alternative to conventional methods for computing indirect lighting effects. It may also be possible to extract further benefits, allowing fully dynamic volume exploration beyond changing the transfer function.

Implementation

The backbone of the paper consists of three major components: a raycaster for direct volume visualization, a volumetric photon mapper for GI and a novel incremental update algorithm. While each requires a substantial amount of work on its own, the incremental update relies on full functionality of the other two components. As a result, we focused our efforts on the first two components. Even though the incremental update is the key contribution of this paper, its implementation is not essential to answering the questions we posed earlier.

The primary goal was to implement an integrated raycaster and a photon mapper on a GPU. If time permitted, we then planned to add the incremental update algorithm. In order to verify that our code achieves the appropriate functionality, we chose to first implement each piece on the CPU. This also mitigated the risk associated with utilizing a foreign architecture. Despite our careful planning, we were able to achieve most but not all of our goals. We successfully implemented both a CPU and a GPU raycaster, as well as the CPU volumetric photon mapper.

Raycaster

We started with the raycaster because it's essential for generating all of our images. The first step was implementing a flexible framework capable of seamlessly supporting various data set types and their corresponding 1D transfer functions. This provided a few challenges, mostly organizational (e.g., heavy C++ template use). Because the volumes and the raycaster were destined for execution on the GPU, we consulted CUDA samples to better understand the interoperability between the host and the hardware device. We were pleasantly surprised to find a simple volume raycaster implementation, which we were able to use to test our brand new framework and to convince ourselves that it was effective.

As a GPU raycaster is unsuitable for prototyping and testing a photon mapper (either CPU or GPU), we still needed to implement a full raycaster on the CPU. We faced challenges ranging from learning the intricacies of casting rays through the screen buffer (with all the corresponding spatial transformations), to computing correct ray-object intersections, to accumulating samples correctly.
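
For reference, the accumulation step boils down to standard front-to-back compositing; a minimal Python sketch (our actual implementation is in C++, and the function names here are placeholders):

    import numpy as np

    def march_ray(sample_fn, t_near, t_far, step, transfer_fn):
        """Front-to-back compositing along a single ray.
        sample_fn(t) returns the scalar volume value at parameter t along the ray;
        transfer_fn(s) returns (rgb, alpha) for a scalar value s."""
        color = np.zeros(3)
        alpha = 0.0
        t = t_near
        while t < t_far and alpha < 0.99:  # early ray termination
            rgb, a = transfer_fn(sample_fn(t))
            color += (1.0 - alpha) * a * np.asarray(rgb)
            alpha += (1.0 - alpha) * a
            t += step
        return color, alpha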

Once complete, we added gradient-based diffuse shading. This presented us with the additional challenge of handling homogeneous sections of the volume, where gradients evaluate to the zero vector. We initially approximated the shading normal at a sample location with zero gradient as the zero vector [0, 0, 0], but this biased the output because homogeneous sections of the volume were left out of the ray-march accumulation (recall: diffuse shading is calculated using the dot product of the normal and light direction). If we instead skip the shading calculation within these homogeneous regions, we introduce a bias in the opposite direction. Our eventual solution was a tweaked approximation where a small constant factor (15% of the sample color) is always contributed, like an artificial ambient term. The remainder of the contribution (85%) is based entirely on the diffuse shading calculation. This configuration seemed to provide the best visual results.
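
The resulting shading rule, as a small Python sketch (again, the real code is C++; names are placeholders):

    import numpy as np

    def shade_sample(rgb, gradient, light_dir, ambient=0.15):
        """Gradient-based diffuse shading with the constant-contribution tweak:
        15% of the sample color is always added, and the remaining 85% is scaled
        by the diffuse term. Homogeneous regions (zero gradient) fall back to the
        ambient term alone."""
        magnitude = np.linalg.norm(gradient)
        diffuse = 0.0
        if magnitude > 1e-6:
            normal = gradient / magnitude
            diffuse = max(np.dot(normal, light_dir), 0.0)
        return (ambient + (1.0 - ambient) * diffuse) * np.asarray(rgb)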

We then extended the GPU raycaster to include gradient-based shading in two ways: either by interpolating pre-computed (on the CPU) gradients or computing them on the fly. The latter proved to have superior performance.

Photon Mapper

Implementing a volumetric photon mapper proved to be much trickier. First, we extended a basic OpenGL photon visualization tool provided by Cem Yuksel to build the infrastructure necessary to generate, store, and sample the photons using a CPU raycaster. Although simple in theory, the challenge in handling participating media came from importance sampling the transmissivity term once the scattering direction was known (sampled from an isotropic Henyey-Greenstein phase function). This took quite a bit of time to get right, but in the end our photon mapper properly supports scattering in heterogeneous volumes. The image shown below was a welcome sight because it demonstrated this functionality.

Heterogeneous volume photon mapping works!
Heterogeneous volume photon mapping works!
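
The two sampling steps described above look roughly like the sketch below. The direction comes from the Henyey-Greenstein phase function (isotropic when g = 0); the free-path sampling is shown here as Woodcock (delta) tracking, which is one common way to importance sample transmittance in heterogeneous media and may differ in details from our actual implementation.

    import numpy as np

    def sample_hg_direction(w, g, rng):
        """Sample a scattering direction around the incoming direction w from the
        Henyey-Greenstein phase function; g = 0 reduces to isotropic scattering."""
        if abs(g) < 1e-3:
            cos_t = 1.0 - 2.0 * rng.random()
        else:
            s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
            cos_t = (1.0 + g * g - s * s) / (2.0 * g)
        sin_t = np.sqrt(max(0.0, 1.0 - cos_t * cos_t))
        phi = 2.0 * np.pi * rng.random()
        # build an orthonormal frame around w
        a = np.array([1.0, 0.0, 0.0]) if abs(w[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        u = np.cross(w, a)
        u /= np.linalg.norm(u)
        v = np.cross(w, u)
        return sin_t * np.cos(phi) * u + sin_t * np.sin(phi) * v + cos_t * w

    def sample_free_path(x, w, sigma_t, sigma_max, rng):
        """Woodcock (delta) tracking: distance to the next scattering/absorption
        event in a heterogeneous medium whose extinction sigma_t(x) is bounded
        above by sigma_max. In practice, also terminate once the ray leaves the
        volume."""
        t = 0.0
        while True:
            t -= np.log(1.0 - rng.random()) / sigma_max
            if rng.random() < sigma_t(x + t * w) / sigma_max:
                return t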

Initially, we estimated photon density without using any acceleration structures, which unsurprisingly came at a huge performance cost. As a result, we borrowed a KD-tree implementation from PBRT2. As a stretch goal, we had wanted to implement the hashed linked list data structure on the CPU before porting it to the GPU. This particular data structure is essential for GPU-based photon mapping because it allows performing nearest neighbor searches (necessary for photon density estimation) without sorting the entire photon database per lookup. Unfortunately, we weren't able to reach this stretch goal, which prohibited the GPU implementation.
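
At each ray sample, the gather step is essentially a k-nearest-neighbor query over the photon positions. A rough sketch using SciPy's KD-tree (our code borrows PBRT2's KD-tree instead, and the array and parameter names here are placeholders):

    import numpy as np
    from scipy.spatial import cKDTree

    def photon_gather(x, w_out, positions, powers, directions, phase, k=64):
        """Gather the k nearest photons around x and normalize by the volume of
        the enclosing sphere. Dividing the result by the scattering coefficient
        at x gives the in-scattered radiance estimate."""
        tree = cKDTree(positions)      # in practice, build once and reuse per frame
        dists, idx = tree.query(x, k=k)
        radius = dists.max()
        volume = (4.0 / 3.0) * np.pi * radius ** 3
        return sum(phase(w_out, directions[i]) * powers[i] for i in idx) / volume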

Transfer Functions

Finding appropriate transfer functions for our volumes also turned out to be a non-trivial task, as modifications to ray-marching step-size can have major effects on the apparent accumulated opacity of a given ray. As such, the transfer function opacity values for translucent portions of the volume must take this step-size into account during design. We wound up using ImageVis3D to get rough segmentations, but even those required significant modification by hand in order to achieve quality results.
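
We ended up tuning the opacities by hand, but the step-size dependence can also be handled with the standard opacity-correction formula; a minimal sketch, assuming a reference step size at which the transfer function was designed:

    def correct_opacity(alpha, step, reference_step):
        """Rescale an opacity defined for `reference_step` to a different
        ray-marching step size so the accumulated opacity stays comparable."""
        return 1.0 - (1.0 - alpha) ** (step / reference_step)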

Results

Comparison between ImageVis3D, CUDA DVR and CPU DVR

Below, you can see a comparison of output images for the Bonsai dataset (from left to right) between ImageVis3D, CUDA direct volume renderer, and CPU DVR without photon mapping. The stark differences between ImageVis3D and our DVR version can be attributed to the crudeness of the first iteration in transfer function design, initially generated with ImageVis3D. There is also a slight difference between the CUDA and CPU versions of the images below, which can be attributed to gamma correction (none in our CPU DVR) as well as a slightly more refined transfer function for the CPU DVR version.

ImageVis3D, Bonsai dataset
ImageVis3D
Our CUDA DVR, Bonsai dataset
Our CUDA DVR
Our CPU DVR, Bonsai dataset
Our CPU DVR

The next set of results shows the images generated for the Boston Teapot dataset. Once again they are ordered from left to right as ImageVis3D, CUDA DVR, and CPU DVR. The behavior is very similar to the Bonsai dataset listed above. One artifact visible in the ImageVis3D image is due to the sampling distance being too large, which results in a jagged appearance. Visualizing structures which result in thin shells will always be sensitive to the choice of marching step size. For the other two images, we set the step size far smaller than the maximum allowed by the Nyquist-Shannon sampling theorem, and therefore we can resolve the thin shell without much trouble. This of course comes at a great computational cost.

ImageVis3D, Boston Teapot dataset
ImageVis3D
Our CUDA DVR, Boston Teapot dataset
Our CUDA DVR
Our CPU DVR, Boston Teapot dataset
Our CPU DVR

The final dataset that we explored was the Engine dataset. Below you can see the images from ImageVis3D, CUDA DVR, and CPU DVR, from left to right. There isn't much of a difference between the images generated by ImageVis3D and the CUDA DVR, which may be attributed to a similar ray-marching sampling distance. One big difference, however, is the lack of specular highlights in the CUDA DVR image. The image generated using the CPU DVR is more transparent because we didn't set the sampling distance small enough to resolve very thin shells.

ImageVis3D, Engine dataset
ImageVis3D
Our CUDA DVR, Engine dataset
Our CUDA DVR
Our CPU DVR, Engine dataset
Our CPU DVR

There are some visual differences between the images generated with ImageVis3D and our DVR implementations. The primary reason for this might be ImageVis3D's progressive refinement-style ray marching, which results in a very high effective sampling rate. As a result, the images from ImageVis3D may look more solid.

Another difference is the fact that ImageVis3D implements a surface-like appearance, including specular highlights. Even though our DVR implementations include diffuse shading (approximating 'surface' normals using gradients), none of them use specular highlights.

Evaluation of the implementation of photon mapping

NOTE: there are a few simulations still computing the photon map images, so the images shown here are incomplete.

Let's start by looking at some of the early test images, which seem to show significant benefits from using a photon map for GI, if we ignore the clear issues with undersampling during ray-marching. One clear distinction between the two images is that the one using the photon map brightens up; however, the brightening seems to be non-uniform.

Early test, no photon mapping, Bonsai
No photon mapping
Early test, photon mapping, Bonsai
With photon mapping

Since the sampled distance between absorption/scattering events when tracing photons is inversely proportional to the opacity (or density, when no transfer function is used) at a sampling location, more photons scatter in areas of high opacity/density.

This behavior is clearly shown in the images below. From left to right: CPU DVR without photon map, photon map with 1,000 photons, and a photon map with 1 million photons. Photons in the two images on the right were allowed to scatter up to 3 times before Russian Roulette was utilized for absorption.

CPU DVR, no photon map
No photons
CPU DVR with photon map, 1,000 photons
1,000 photons
CPU DVR with photon map, 1 million photons
1 million photons

Technique Effectiveness

One key takeaway is that using many photons seems to introduce too much scattering and adds too much brightness. This may be an issue with our particular implementation, or with letting photons scatter too much (as in too many bounces per photon). As a result, the final images appear to be highly sensitive to parameters like light power, maximum recursion depth, volume albedo, and the maximum number of photons to trace. Unfortunately, we would need a lot more simulations (including a Cornell Box) to understand which of these possibilities and parameter settings, if any, contribute to the results we've seen so far.

As an aside, it may be worth noting that the authors limited the number of bounces to two for all of their examples.

Another interesting observation is that we expected to see shadowing appear in many of the images. So far, this hasn't been the case, which may be due to poor camera and light placement, or to the fact that we don't currently support surface-like reflection when sampling the volume phase function. Adding this feature would be fairly straightforward.

Extensions

Using photon mapping to compute global illumination effects within direct volume rendering offers a few excellent properties. For example, the ability to incrementally update only the relevant information allows interactive exploration of the data by changing transfer functions. Another benefit of photon mapping is that the photon map is unchanged as long as the light remains static relative to the volume.

There are a few challenges left to be solved for this method (many inherent to photon mapping itself): if the light moves, the entire photon map needs to be recomputed; density estimation at ray samples is expensive, which currently prohibits interactive camera movement; producing sharp lighting features requires tracing many photons (the actual number is volume dependent); and storing the photon map incurs considerable memory overhead in addition to the volume itself, which might hinder out-of-core rendering techniques that utilize photon mapping.

There were a couple of issues we ran into besides the ones mentioned above. One of the most important is the lack of shadows in our images, which could be fixed by using phase functions that approximate surface behavior in areas with high gradient magnitude or high opacity. This extension should also allow reproducing surfaces within the volume.

Mixed Conclusions

Sam and I agree that utilizing GI to an appropriate degree benefits our spatial awareness of structures that reside within the data, especially when considering effects like shadows and color bleeding, which suggest proximity and orientation. Of course, as with any visualization technique, these effects are strongly guided by parameters, task, and perception. Therefore, without much practice and experience, it may be easy to produce misleading visualizations, independent of the resulting "beauty".

Konstantin

Unfortunately, Sam and I mostly disagree about the concluding remarks from this project. I strongly believe that volumetric photon mapping offers quite a few strong benefits for DVR, which, once the technology evolves, should allow real-time, physically-based, light-weight (potentially replacing the volume itself), user-controlled methods to increase spatial (and perhaps even temporal) awareness within the data. However, at this stage, there's much work to be done, and it may seem as though this approach is destined for failure.

Sam

I would argue that I do not see any clear and present benefits of volumetric photon mapping over other GI approximation methods for direct volume rendering. Papers like Kniss et al.'s A Model for Volume Lighting and Modeling (2003) and Schlegel et al.'s Extinction-Based Shading and Illumination in GPU Volume Ray-Casting (2011) appear to present methods that offer visually similar (though not truly physically based) light transport support. If we actually got benefits like free shadows, there might be a stronger argument for encouraging the use of photon mapping in DVR; but I am not entirely convinced that the theoretical benefits outweigh the known costs, especially when other viable options exist. If the simulation of physically accurate effects like color bleeding interferes with our ability to accurately perform tasks, then it is by definition not an optimal solution. I think the choice to use volumetric photon mapping is not clear cut and will be highly dependent on task.

Code

Download the final version source(zip, 15.5 MB)



Project 5: 3D Spatial Data

Updated: 11.04.12

Description

The need to visualize 3D volumes is a ubiquitous problem in medicine, engineering, and the computational sciences. These volumes are often complex and contain a wide range of scalar values that represent things like different tissue types, pressure fields, and temperatures. In this assignment you'll explore several methods for visualizing the geometry embedded in a scalar field, and compare and contrast the results. You'll be ramping up your VTK skills by using a number of functions for dealing with volumes. For this assignment you will explore the volume rendering capabilities of VTK to produce direct volume renderings of the head of a mummy.

Part 1: Color Compositing Transfer Functions

Probably the hardest part of this project is coming up with a transfer function which generates a meaningful visualization. Experiment with different transfer functions (both types: the function for opacity, and the function for color).

The top row below shows a few images of the volume rendering, which makes the skin surface semi-transparent and the bone surface white and opaque. The bottom row below shows the visualization using isosurfaces.

Volume renderingDirect volume rendering, view 1Direct volume rendering, view 2Direct volume rendering, view 3
Isosurface renderingIsosurfaces, view 1Isosurfaces, view 2Isosurfaces, view 3
Question

Describe your transfer functions. What relationship do they have to the isovalues shown in your isosurface rendering?

The color transfer function was as follows: (0, (0.0, 0.0, 0.0)), (80, (1.0, 0.4, 0.3)), (85, (0, 0, 0)), (90, (1, 1, 1)), (255, (1, 1, 1)), which is (value, (color)). The opacity transfer function was: (74, 0.0), (78, 0.35), (84, 0.55), (90, 0.0), (120, 0.0), (140, 1.0) , which is (value, opacity).

These values are selected such that the skin is mapped to (74-90) and the bone is mapped to (120-255).

Besides transfer functions for opacity and color, I decided to include a gradient opacity transfer function to try to filter out pieces of cloth over the face. It seems to work fairly well.
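
For reference, setting these transfer functions up through VTK's Python bindings looks roughly like the sketch below. The scalar color and opacity points are the ones listed above; the gradient-opacity ramp values are hypothetical, since I tuned that function interactively.

    import vtk

    color_tf = vtk.vtkColorTransferFunction()
    for value, (r, g, b) in [(0, (0.0, 0.0, 0.0)), (80, (1.0, 0.4, 0.3)),
                             (85, (0.0, 0.0, 0.0)), (90, (1.0, 1.0, 1.0)),
                             (255, (1.0, 1.0, 1.0))]:
        color_tf.AddRGBPoint(value, r, g, b)

    opacity_tf = vtk.vtkPiecewiseFunction()
    for value, alpha in [(74, 0.0), (78, 0.35), (84, 0.55),
                         (90, 0.0), (120, 0.0), (140, 1.0)]:
        opacity_tf.AddPoint(value, alpha)

    # hypothetical gradient-opacity ramp (the actual values were tuned interactively)
    gradient_tf = vtk.vtkPiecewiseFunction()
    gradient_tf.AddPoint(0, 0.0)
    gradient_tf.AddPoint(70, 0.7)
    gradient_tf.AddPoint(255, 1.0)

    volume_property = vtk.vtkVolumeProperty()
    volume_property.SetColor(color_tf)
    volume_property.SetScalarOpacity(opacity_tf)
    volume_property.SetGradientOpacity(gradient_tf)
    volume_property.ShadeOn()
    volume_property.SetInterpolationTypeToLinear()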

Question

Do you think volume rendering the mummy dataset offers a clear advantage over isosurfacing? Explain why or why not.

I think that volume rendering is a better visualization technique overall because it allows us to see structures that might otherwise be occluded by an isosurface. Also, isosurfaces select a specific isovalue to show as a surface, while volume rendering allows us to visualize a range of values simultaneously. This is extremely helpful for CT scans, where a certain organ or structure may contain a range of values due to small variations in tissue densities. Of course, isosurfaces are extremely useful for datasets where it's possible to separate the data into a few representative values or thresholds (a distance field, for example, or maybe some highly dense material in a CT scan). Volume renderers nowadays can also include advanced lighting techniques like shadows, which add the cues for curvature and relative spatial location that arise naturally from isosurfaces.

Part 2: Maximum Intensity Projection Renderings

There is a simpler way of generating an image with volume ray-casting: maximum intensity projection, or MIP. Use VTK to produce MIP renderings of the mummy datasets. Your MIP renderings should be gray-scale only. Generate a MIP rendering and a compositing-based volume rendering of one of the mummy datasets.

The top row of the images below shows the MIP renderings of the dataset, while the bottom row shows the compositing-based volume rendering of the same dataset with a small change to the opacity transfer function (reduced to produce less saturated images).

MIPMIP, view 1MIP, view 2MIP, view 3
MIP-like direct
volume rendering
MIP-like direct volume rendering, view 1MIP-like direct volume rendering, view 2MIP-like direct volume rendering, view 3
Question

What are some advantages and disadvantages of MIP versus compositing-based volume rendering?

MIP applies transfer functions to the maximum intensity along the ray. This visualization method should be excellent for finding really dense tissues in a CT scan which may help identify certain structures or diseases. Considering something like a stress computation for a mechanical part or a building, one could use MIP to identify areas which may need to be addressed. Compositing-based volume rendering would identify these areas as well, except they may be occluded or misidentified (when all the samples along a ray contribute just the right amount to the final pixel color).

Unlike compositing-based volume rendering, MIP doesn't offer the ability to see what structures within the dataset look like, say within a certain value range. Also, MIP by definition is missing any sort of depth cues: imagine a dataset where two materials are arranged in lines that cross over each other, making an X. When viewing this dataset from the side, MIP will pick out only the denser material.
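
With the classic (VTK 5-era) ray cast mapper used in this assignment, switching between MIP and compositing is a one-line change; a rough sketch, with the reader filename and gray-scale transfer function made up for illustration:

    import vtk

    reader = vtk.vtkStructuredPointsReader()
    reader.SetFileName("mummy.vtk")            # hypothetical filename

    # simple gray-scale transfer functions for the MIP rendering
    gray = vtk.vtkColorTransferFunction()
    gray.AddRGBPoint(0, 0.0, 0.0, 0.0)
    gray.AddRGBPoint(255, 1.0, 1.0, 1.0)
    opacity = vtk.vtkPiecewiseFunction()
    opacity.AddPoint(0, 0.0)
    opacity.AddPoint(255, 1.0)

    prop = vtk.vtkVolumeProperty()
    prop.SetColor(gray)
    prop.SetScalarOpacity(opacity)

    mapper = vtk.vtkVolumeRayCastMapper()
    mapper.SetInputConnection(reader.GetOutputPort())
    mapper.SetVolumeRayCastFunction(vtk.vtkVolumeRayCastMIPFunction())
    # mapper.SetVolumeRayCastFunction(vtk.vtkVolumeRayCastCompositeFunction())

    volume = vtk.vtkVolume()
    volume.SetMapper(mapper)
    volume.SetProperty(prop)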

Part 3: Extra for cs6630 students

Direct volume rendering is a very flexible process: there are many parameters (in addition to the transfer function) which will affect the final image. In these two options you explore three parameters: sample distance spacing, interpolation type, and dataset resolution.

1. Sample distance along the rays

The vtkVolumeRayCastMapper has a SetSampleDistance method. What is the relationship between image quality, rendering time, and sample distance? Give an example of a feature in the dataset which can be lost by increasing the sample distance too much. Is there a sample distance that seems 'small enough', so that smaller distances offer no clear advantage?

Below are the images resulting from changing the sampling distance for the first visualization. The top row, from left to right, uses the following sampling distances: 2, 0.8, and 0.5. The bottom row, from left to right, uses the following sampling distances: 0.4, 0.2, and 0.1. Locations that show the differences the most are the forehead and the left ear.

DVR, sample distance 2
sample distance 2
DVR, sample distance 0.8
sample distance 0.8
DVR, sample distance 0.5
sample distance 0.5
DVR, sample distance 0.4
sample distance 0.4
DVR, sample distance 0.2
sample distance 0.2
DVR, sample distance 0.1
sample distance 0.1

When the sampling distance is too large, we see a lot of artifacts in the form of visible lines similar to contours (in my images, they are red). Decreasing the sampling distance removes these artifacts, but only after a certain point (which seems to be about 0.4). The correct sampling distance depends on the smallest spacing between our voxel data: according to the Nyquist sampling theorem, the minimum sampling rate (the inverse of the sampling distance) must be at least twice as high as the maximum frequency of the signal we'd like to recover. So for our 128-resolution dataset the voxel spacing is about 0.88, which requires a sampling distance of less than 0.44 to resolve all details and remove the contour-like artifacts.

As the images show, decreasing the sampling distance below 0.4 doesn't improve the image quality as dramatically as going from 1 to 0.4. Any feature smaller than twice the sampling distance will be lost (see the paragraph above - hence the requirement for the sampling distance to be at most half the voxel spacing). Even though the increase in sampling rate improves the image quality, it comes at the expense of computational time (the growth seems worse than linear, but I didn't time image generation).
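
The sweep itself is just a loop over the mapper's SetSampleDistance; a small sketch, assuming the mapper and render window from the existing pipeline are passed in:

    import time

    def render_with_sample_distances(mapper, render_window,
                                     distances=(2.0, 0.8, 0.5, 0.4, 0.2, 0.1)):
        """Re-render the volume at each sample distance and report rough timings.
        `mapper` is the vtkVolumeRayCastMapper and `render_window` the
        vtkRenderWindow set up elsewhere."""
        for d in distances:
            mapper.SetSampleDistance(d)
            start = time.time()
            render_window.Render()
            print("sample distance %.2f: %.2f s" % (d, time.time() - start))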

2. Interpolation method and data resolution

We talked about nearest neighbor and trilinear interpolation in class; VTK lets you choose either one with methods in the vtkVolumeProperty class. Describe and demonstrate the differences between the two different interpolation methods. Experiment with different resolutions of the data. How does increasing the dataset resolution change the difference between the two interpolation methods?

The images below show the differences between the three dataset resolutions as well as the interpolation strategies within the first visualization. From left to right, the dataset resolution changes from 50, to 80, to 128. The top row uses tri-linear interpolation while the bottom uses nearest neighbor.

Data resolution 50Data resolution 80Data resolution 128
Tri-linear interpolationDataset resolution 50, tri-linear interpolationDataset resolution 80, tri-linear interpolationDataset resolution 128, tri-linear interpolation
Nearest neighborDataset resolution 50, nearest neighborDataset resolution 80, nearest neighborDataset resolution 128, nearest neighbor

One clear difference between tri-linear and nearest neighbor interpolation is the blocky, cube-like appearance in the renderings using nearest neighbor. Each cube corresponds to a voxel of data in our dataset. One thing to notice is that as the data resolution increases, the nearest neighbor interpolation slowly approaches the results from tri-linear interpolation; however, this only holds up to a certain zoom level, beyond which the blockiness reappears. Using higher-order interpolation like quadratic or cubic (or even radial basis functions) would offer improvements in the final images over tri-linear interpolation.
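
A sketch of how the two parameters are toggled in VTK (the downsampling filter is only needed if a lower-resolution file isn't already available):

    import vtk

    def set_interpolation(volume_property, trilinear=True):
        """Toggle between the two interpolation modes compared above."""
        if trilinear:
            volume_property.SetInterpolationTypeToLinear()
        else:
            volume_property.SetInterpolationTypeToNearest()

    def downsample(reader, factor):
        """Resample the volume (e.g. factor = 50.0 / 128.0 to mimic the
        lower-resolution dataset) when only the full-resolution file is on hand."""
        resample = vtk.vtkImageResample()
        resample.SetInputConnection(reader.GetOutputPort())
        for axis in range(3):
            resample.SetAxisMagnificationFactor(axis, factor)
        return resample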

Code

Download the final version source(zip, 1.2 MB)



Project 4: 2D Spatial Data

Updated: 10.22.12

Description

Scalar data defined over a grid is an incredibly common data type, showing up in medical applications, computational science, radar applications, and many other places. The purpose of this assignment is to give you a little experience with VTK by exploring several different ways to visualize scalar data. For this assignment you will create visualizations of several data sets, all of which are mappings from R2 to R1. You will learn a bit of Python and VTK and gain some skills at data format hacking.

The assignment is broken into several parts: height field, contour maps and color maps.

Part 1: Height Field

In this first part of the assignment you will be visualizing a 2D spatial data set as a height field. The two spatial dimensions will determine the X and Y coordinates, while the value at each data point will determine the Z coordinate.

All of the meshes below were generated using the Delaunay triangulation. Below are the images generated with the MRI data set. Basically, I created a vtkPolyData from the points with x, y coordinates coming from one file and z coordinate from the measurements.

Delaunay triangulated thorax datasetDelaunay triangulated thorax dataset, rotated
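
A minimal sketch of this pipeline in VTK's Python bindings (argument names are placeholders; the actual script differs in details):

    import vtk

    def height_field_actor(xy_points, values):
        """Triangulate (x, y) sample locations with vtkDelaunay2D and use the
        measured value as the z coordinate."""
        points = vtk.vtkPoints()
        for (x, y), z in zip(xy_points, values):
            points.InsertNextPoint(x, y, z)

        poly = vtk.vtkPolyData()
        poly.SetPoints(points)

        delaunay = vtk.vtkDelaunay2D()
        delaunay.SetInput(poly)          # VTK 5 style; use SetInputData in VTK 6+

        mapper = vtk.vtkPolyDataMapper()
        mapper.SetInputConnection(delaunay.GetOutputPort())
        actor = vtk.vtkActor()
        actor.SetMapper(mapper)
        return actor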

Below are the images for the small mountain image dataset. These were generated by reading in each image and then generating points to be placed into a vtkPolyData - the x, y coordinates came from the grid location normalized to [0, 1], and the z coordinate from the image color scaled to [0, 0.5]. Below are the images for the small image; the image on the right is the view from behind of the image on the left.

Delaunay triangulated smaller mountain datasetDelaunay triangulated smaller mountain dataset, rotated

Below are the images for the large image; the image on the right is the view from behind of the image on the left.

Delaunay triangulated larger mountain datasetDelaunay triangulated larger mountain dataset, rotated
Question

Is this a good way to look at these data sets? Why or why not?

This visualization technique might make sense for topographic data, like the mountain we have a picture of or canyons on Mars, especially for a fly-through of the data. However, I don't think this works too well for scientific data, like the MRI data set we have. The first major issue is the use of perspective projection, which is nonlinear in depth: because of the z-divide (the final projection onto the image plane, which follows the transformation of a point by the perspective matrix), we can't accurately judge distances between any two untransformed points based on their locations in the image.

The previous issue can be fixed by using an orthographic projection. However, the visualization would still suffer from a second issue: occlusion. For example, we can't see any data behind the large peaks. This technique might be rather good for 3D viewing (as in with polarized glasses) because the visualization can utilize our visual ability for depth perception.

Part 2: Contour Maps

Another way to visualize this type of data is to produce a contour map. We have provided two additional data sets, body.vtk and brain.vtk, that we would like you to visualize as contour maps. The first is an infrared image of the human body, the second is a brain dataset.

The images below show the contour-mapped datasets (also color mapped from cyan to magenta); the images on the right additionally place the different contours at different heights.

Contour map, body datasetHeight elevated contour map, body dataset, rotated
Contour map, brain datasetHeight elevated contour map, brain dataset, rotated
Questions

1. What properties does a dataset need to have in order to have contour lines that make sense and contribute to the visualization?
2. In what kinds of data can these properties typically be found?

An obvious part of the answer is that the data must have either 2D or 3D support (in 3D we get isosurfaces). I typically think of contour lines as most helpful when they are rather long (or enclose a large area), so that the image isn't littered with lots of small circles. So the data should not be too oscillatory (examples of this are the body data set, or the noise outside of the brain). Another perhaps useful feature is a decently large data range (this way, errors in the data won't affect the contours too much).

One requirement is that the data is scalar. Contour maps make the most sense for topographical data. Datasets with clear structures should also work well with this technique.

Extra for 6630: Own dataset

The dataset that I have chosen to visualize comes from my friend's PhD research - collisionless N-body simulations of particles representing stars in a disc galaxy, used to study an effect called radial migration. During this effect, a star corotating with the spiral pattern (the angular velocity of the star equals the spiral's pattern speed) gains or loses angular momentum and ends up on a nearly-circular orbit with a different radius, without an increase in its orbit's eccentricity. From simulations, we find that spirals are transient and keep recurring in typical disc galaxies.

I was hoping to extract some spiral structure using contour maps, but was not really able to do so. First, I had to convert the point data into 2D density data that can be viewed as an image; the results are shown below. The image on the left has a resolution of 512x512 and produced contour maps that were too complicated. The image on the right has a resolution of 150x150, which worked better.

Density image of high res spiral galaxy simulation
512x512
Density image of low res spiral galaxy simulation
150x150
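
The conversion from particles to a density image is just 2D binning; a small sketch (the array names are placeholders):

    import numpy as np

    def density_image(x, y, bins=150):
        """Bin particle positions into a 2D density image that can then be saved
        (or wrapped as vtkImageData) and contoured."""
        hist, xedges, yedges = np.histogram2d(x, y, bins=bins)
        return hist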

The images below go through several different numbers of contours for the smaller dataset: 10, 20, and 30 (left to right).

10 contours20 contours30 contours
Contour map, low res spiral galaxy, 10 contoursContour map, low res spiral galaxy, 20 contoursContour map, low res spiral galaxy, 30 contours
Height elevated contour map, low res spiral galaxy, 10 contours, rotatedHeight elevated contour map, low res spiral galaxy, 20 contours, rotatedHeight elevated contour map, low res spiral galaxy, 30 contours, rotated

This data has an interesting spiral structure, which I wanted to see with contours. However, what surprised me is that the contour maps showed that the galaxy is elliptical rather than circular. Also, looking at how the contour values vary suggests a geometric-type increase in density toward the center (which is expected, given the data shows a collection of points rotating around a central mass of points).

Part 3: Color Maps

Visualize the above two data sets with vtk in two ways: 1. as a color-mapped plane (data is planar, but map the data value to color) and 2. using the colormap on the height field.

The output of the visualization tool is shown below. From left to right: small mountain data, large mountain data, and thorax data. From top to bottom: plane visualization, height field, height field from the back.

Mountain, smallMountain, largeThorax
Plane visualizationPlane color map, low res mountainPlane color map, high res mountainPlane color map, thorax
Height fieldHeight elevated color map, low res mountainHeight elevated color map, high res mountainHeight elevated color map, thorax
Height field
rear view
Height elevated color map, low res mountain, back viewHeight elevated color map, high res mountain, back viewHeight elevated color map, thorax, back view
Questions

1. What are the benefits of the different approaches?
2. Which do you think works best, and why?

The color-mapped plane is very nice if the data is very large and should allow a quick overview of the entire dataset. Unfortunately, this visualization suffers from the same issues (linked with color perception) as heat maps. On the other hand, adding color relative to height to a height map improves the perception of the data.

I personally like color-mapped height fields because the visualization uses two strong visual encoding channels - color and spatial position. Because they represent the same data, one reinforces the other and helps judge the data when one of the two encodings fails.

Extra for 6630

What is visually wrong with the color map of the thorax data? What is causing this? (Answer this question with what you know about graphics, colormaps and interpolation.) Now that you have identified the problem, find a way for vtk to visualize the thorax data without the "artifacts".

Even though it can be induced by the VTK filter used, vtkShepardMethod, the basic issue is with resampling. Because the locations of some of the data points don't lie exactly on any grid, we have to interpolate the values at those locations onto a grid. I have supersampled the data at 100x100 resolution, which should satisfy the Nyquist-Shannon sampling theorem (we must sample at least at 2x the highest frequency we'd like to resolve). This becomes a big issue in the locations where the data oscillates a lot - the interpolated values at grid locations will end up somewhere in between the correct values, thus producing the spikes we see.

This can be fixed by applying a Gaussian blur to the grid (image) after the initial sampling has been computed. The result is shown below. Another way to fix it might be to extract some contour maps and use their projections onto the grid to rebuild the height field (or even sample the height field directly from a triangulation).

Plane color map, thorax, fixed
Plane visualization
Height elevated color map, thorax, fixed
Height field
Height elevated color map, thorax, fixed, back view
Height field, rear view
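
A sketch of the blur step, applied to the output of the vtkShepardMethod filter (the standard deviation here is a hypothetical value, tuned by eye):

    import vtk

    def smooth_resampled_grid(shepard_filter, sigma=1.5):
        """Gaussian-blur the resampled grid to suppress the interpolation spikes."""
        blur = vtk.vtkImageGaussianSmooth()
        blur.SetInputConnection(shepard_filter.GetOutputPort())
        blur.SetDimensionality(2)
        blur.SetStandardDeviations(sigma, sigma, 0.0)
        return blur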

Another possible issue is the excessive smoothing/blurring given by the bi-linear interpolation of data values onto the grid. To fix it, one could use bi-cubic (or any other higher-order) interpolation.

Code

Download the final version source(zip, 315 KB)



Project 3: Parallel Coordinates

Updated: 10.12.12

Description

Exploring multidimensional data is a fundamental challenge in visualization. In this assignment you will implement a widely used visual representation: parallel coordinates. The goal of the assignment is two-fold. First, you will gain experience designing and implementing a variety of interaction mechanisms to support data exploration. And second, you will evaluate the effectiveness of parallel coordinates on a variety of multidimensional data sets. This assignment is intended to get you sketching your ideas on paper before coding, and to further your understanding and experience programming in Processing.

Part 1

Create a paper prototype by sketching out (on paper) what you want your basic parallel coordinates visualization to look like. Make sure to include things like labels and titles in your sketch. Think about what you might do to deal with many overlapping lines (like using transparency...). Below is the sketch for the parallel coordinates implementation.

Initial sketch for parallel coordinate visualization
Initial sketch

Next, create a Processing sketch for the assignment. Use the FloatTable.pde from the previous assignment to read in the cars data set. Write a basic renderer to show the data set according to your paper prototype. Make sure to label everything that you can! (titles, axes, ranges, etc).

Below is the image showing the initial implementation of the parallel coordinates, showing the entire cars data set. Because there is a lot of data on the screen, each line is drawn with transparency.

Cars dataset shown in working prototype
Cars dataset shown in working prototype
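
The core of the renderer is a per-axis normalization that maps each record to a polyline; a minimal sketch of that mapping in Python (the actual implementation is a Processing sketch, and these names are placeholders):

    def record_to_polyline(record, axes, plot_top, plot_bottom):
        """Map one data record to a list of (x, y) vertices, one per axis.
        Each axis is described by (x_position, min_value, max_value, inverted)."""
        vertices = []
        for value, (x, vmin, vmax, inverted) in zip(record, axes):
            t = (value - vmin) / (vmax - vmin)
            if inverted:
                t = 1.0 - t
            # screen y grows downward, so t = 1 maps to the top of the plot
            vertices.append((x, plot_bottom + t * (plot_top - plot_bottom)))
        return vertices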

Part 2

Parallel coordinates is hardly useful without interactivity. Before diving into the code, you need to spend some time thinking about what interaction mechanisms you think you need to make your visualization useful, as well as how you want to design those mechanisms. You should develop a series of story boards on paper that explore different ways to interact with the visualization. You must support the following basic tasks: filter the data across multiple attributes, reorder the axes, and invert the axes.

Below is the storyboard describing all of the interactions our parallel coordinates visualizer supports.

Sketch of interactions within parallel coordinates
Sketch of interactions within parallel coordinates

Now, we can look at the implementation of the interactions mentioned above. The first image is another base visualization of the cars dataset. The second image shows the result of selecting a different axis (weight). The visual cues are a darkening of the axis and its labels, and a re-coloring of lines based on their value at the selected axis (red means closer to the top, blue closer to the bottom).

Base visualization
Base visualization
Selecting different axis, weight
Selecting different axis, weight

The next image shows reordering axes - moving the mpg axis to be the rightmost axis. The ability to move an axis is shown by selection cues and the mouse cursor changing to the MOVE type. Also, I've used the Integrator.pde to interpolate the horizontal positions of axes during a move.

Reordering axes, mpg moved rightmost
Reordering axes, mpg moved rightmost

The next image shows reversing an axis, weight. The triangle above an axis shows its direction, so upwards means max value is on top. An axis can be reversed by clicking on this triangle. When the mouse hovers over a triangle it increases in size and becomes black to show the ability to interact, as shown with acceleration.

Reversing an axis, weight
Reversing an axis, weight

The final interaction is filtering. The first image below shows the visual cue to create a filter, which is the line across the cylinders axis. The color corresponds to the selected value, which is also displayed above. To use this feature, the mouse must be sufficiently close to an axis.

Creating a filter
Creating a filter

The final image shows all of the possible interactions and cues for filters. In this image we have filtered the cylinders, horsepower, and acceleration axes. The currently selected axis is cylinders, hence its filter is rendered as a color gradient with colors corresponding to the filter values. Filters on an inactive axis, like horsepower, have other cues: the axis limit labels have a lighter color and the filter itself is light gray. Hovering over a filter produces the following cues: the mouse cursor changes to the HAND type, the axis shows selection cues, and the color of the filter and its limits changes to dark gray. Finally, lines that fall within all filters are colored based on their location on the active axis; otherwise they are light gray.

Creating a new filter requires clicking and dragging the mouse along an axis. To delete a filter, a single click on the filter is needed. To move a filter, the user clicks and drags a filter along an axis. Note that selecting an axis requires a click on the axis itself.

Filters demonstration
Filters demonstration

I think this design works fairly well but it only provides the basics of interaction. I hope a lot of the interaction is quite intuitive. There are quite a few things that could improve the interaction experience:

1. Most importantly, the visualization is sluggish and needs optimization (at least in line drawing).
2. Creating custom cursor images could guide usage.
3. Adding staggered animations when an axis reverses could help visual understanding of the effect (by staggered, I mean moving the upper lines down first, delaying the ones below so they move in order).
4. Displaying data details when the mouse hovers over a line, and specifying filter limits using text input, could add precision in understanding the data.
5. Zooming in via a magnifying glass (a non-linear filter) on a portion of an axis would help when data clumps together; another option is logarithmic scaling.

Part 3

You must use k-means clustering to aggregate the data, and develop a way to show the clusters in your parallel coordinates visualization. You may use existing k-means clustering implementations to create your clusters, or be adventurous and implement it yourself (it is straightforward to do). Experiment with the number of clusters you create and find a number that does a good job of creating meaningful clusters. The most obvious way to show the clusters is to assign each cluster a color (hue) and render the data points in the appropriate color according to their cluster membership. Remember that we can only distinguish a limited number of colors. However you encode the clusters, allow the user to choose to show the clusters, or not.

Extra credit

A way to accommodate a limited color palette is by creating an interactive color legend. The visualization widget serves dual purpose: it works as a legend to inform the user about the encoding, and it also lets the user dynamically set the color of each group.

I've implemented the k-means clustering within the Processing framework using the code found on this page. There were a few changes to the visualization, shown in the images below. The second image shows the default cluster visualization, where each black line is the centroid of a cluster and all data lines are gray. By default, the clustering algorithm computes 6 clusters using the Euclidean distance, starting from random seeds. A new panel is introduced which controls cluster parameters: the top portion shows the clusters and their colors, the middle section shows possible color selections, and the bottom shows control buttons.

Base image for clustering
Base data ready for clustering
Base image showing clusters
Clusters
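
For reference, the clustering itself is the plain k-means loop; a minimal sketch in Python (the project reuses an existing Processing implementation, and in practice each column should be normalized first so no single attribute dominates the distance):

    import numpy as np

    def kmeans(data, k=6, iterations=50, seed=None):
        """Plain k-means with Euclidean distance and random initial seeds."""
        rng = np.random.default_rng(seed)
        data = np.asarray(data, dtype=float)
        centroids = data[rng.choice(len(data), size=k, replace=False)].copy()
        for _ in range(iterations):
            # assign every record to its nearest centroid
            dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # move each centroid to the mean of its members (skip empty clusters)
            for j in range(k):
                members = data[labels == j]
                if len(members):
                    centroids[j] = members.mean(axis=0)
        return labels, centroids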

The visualization relies on pairs of colors to identify clusters, where the darker color corresponds to the centroid and the lighter one to the data within the cluster. In the image below, we show the first cluster colored blue. The image also depicts all of the interactions the user can make. Hovering the mouse over a cluster shows it as a possible choice by darkening the background. Once a selection has been made, the cluster background turns white. Hovering the mouse over a color on the bottom puts a dark gray line underneath it. Hovering over a button also makes it gray.

To change a color for a cluster, the user first clicks on a cluster, then on a color below. The button functions are as follows: clear - clears color selections, show/hide - shows/hides the cluster visualization or the original, -/+ - remove/add a cluster. The second image below shows a cluster visualization with a color for each cluster.

Interacting with clusters
Interacting with clusters
All clusters colored
Coloring all clusters
Data Exploration

Let's look at the cars data set first. Looking at 3 clusters (image shown below), we can see some not very surprising patterns emerge: cars that have lower mpg tend to have 8 cylinders and higher displacement, horsepower, and weight, which then translates into faster acceleration. It's a bit surprising to see the centroid around 1974, which was right after the muscle car era in the US (origin).

The second image below shows another interesting find, which is a combination of 2 images generated separately. The image looks at cars that weigh 3,000 - 3,200 lbs, and we find an outlier (shown in red) that is the only one with 8 cylinders. It also happens to be the oldest and fastest car of the bunch. This is interesting, since I typically think of 8 cylinder cars from the 60s as very heavy (supported by the 3rd image).

Strong correlations within cars dataset
Strong correlations within cars dataset
Interesting outlier in cars dataset
Interesting outlier in cars dataset
Muscle car era shown
Muscle car era shown

Let's look at the cameras data set. This is a tough one because of its density as well as the clumping of the data values. Looking at clustering with 2 vs 8 clusters, we see that it's tough to gather much meaning out of the data using them.

2 clusters within camera dataset
Two clusters within camera dataset
8 clusters within camera dataset
Eight clusters within camera dataset

I decided to filter the data based on zoom wide to pick out cameras without lenses - I'm thinking those will be DSLRs. Clicking on weight so that the colors reflect its ordering, we see that the heavier cameras are either really old or new, with a strong correlation to the number of effective pixels, but (surprisingly) not to price. It seems that the basic selling features of cameras, like resolution and production year, are strongly correlated. Other features seem to be all over the place, which makes sense: as the technology matures, the same features can be put into a smaller package at the same price, or better features can be placed in the same package at a higher price. Because of this, there aren't really strongly correlated axes across the board.

Looking at wide zoom of 1, camera dataset
Looking at wide zoom of 1, camera dataset

There seems to be a sweet spot for the dimensions, weight, and included storage of cameras produced under 540 dollars, which makes sense. Locking to the 2006-2007 years produces the second image below, and looking at the majority of tele zoom features we get the 3rd image below - the clumping of weight and size is incredible. Looking at the same visualization but for cameras made in 1998-1999, we see that both weight and size have increased. Also, the number of lines indicates a much smaller number of cameras being produced.

Cameras costing below 540
Cameras that cost below 540
Cameras costing below 540 produced in 2006-2007
Cameras that cost below 540, produced in 2006-2007
Cameras costing below 540 produced in 2006-2007 with specific tele zoom
Cameras that cost below 540, produced in 2006-2007, with specific tele zoom
Cameras costing below 540 produced in 1998-1999 with specific tele zoom
Cameras that cost below 540, produced in 1998-1999, with specific tele zoom
Critique

This visualization technique seems to work well when there aren't too many data points to look through. Also, it seems that having a good distribution of data points within each dimension helps with visualization (improving the tool could help with this, though). When there are really strong positive or negative correlations between dimensions in our data, parallel coordinates are excellent at showing them (cars data set); however, when the correlations are rather weak (as shown in the zoom, weight, size, etc. features of the cameras data set), this technique is very difficult to use.

Interactivity certainly helped in finding some of the correlations - especially the ability to create/move filters, the combined application of filters, moving axes around, and reversing some of them. There could be other additions that could help more, but filters are probably the most important of the essential interactions.

Using parallel coordinates for ordinal data might not be too tough, as long as our qualitative data can be assigned a numeric value. Unfortunately, parallel coordinates imply some sort of ordering within the data along any particular axis, which imposes a condition on our qualitative data. For example, when looking at the country of origin for the cars data set, it should make no difference which way the axis is oriented, but without knowing what the data represents it may be easy to forget this and draw incorrect conclusions.

Code

Download the final version source(zip, 37 KB)



Project 2: Time Series

Updated: 09.16.12

Description

In this assignment you will develop an interactive viewer for looking at time series that explores several different visual representations. The main goal is to get your feet wet with the Processing programming language, specifically the functionality for: drawing basic primitives; handling mouse and keyboard events; and working with text. You will be following along and implementing the code described in Chapter 4 of Ben Fry's book Visualizing Data.

Part 1

Read pages 54-72, and implement the code described in the book. Change the numerical labels on the x- and y-axes to use the Georgia font, and all of the text titles to use the Verdana font. Choose a font size that seems most appropriate to you. Change the y-axis title to run along the axis, i.e., rotate it by 90 degrees (hint: look under the Transform section of the reference page). Make any other changes you feel are appropriate. What did you change and why?

Below are the two images that I've generated - the one on the left is the result of the unmodified code from the book, while the one on the right is the final updated version. There are a few things I've changed. The fonts are larger and anti-aliased. The rotated y-axis title moved left and the plot width increased. I added tick marks to the x-axis and horizontal lines for the y-axis. Finally, the main title can be centered, but that doesn't seem necessary with a single-word title.

Making the plot wider maximizes the viewing area, while adding horizontal mark lines makes it easier to identify the locations of data points.

Initial plot
Initial plot
Part 1 implemented
Part 1 implemented

Part 2

Read pages 73-82, and implement the code described in the book. Allow the user to select which representation to render the data using the keyboard. Specifically, let 1 select dots; 2 select connected dots; 3 select line chart; 4 select filled chart. (hint: look under the Input section of the reference page). Use the Georgia font for the highlight tags on mouse rollover. What representation do you think is the most appropriate, and why?

I think the best representation for this data is either connected points or lines, which would depend on the density of points. In our particular case, there aren't too many points, so connecting them shows both the measured data as well as the interpolation in between. Also, lines provide an easier interpretation of data with trends in it.

The top row of images below shows (left to right) points, connected points, and smoothed curve. The bottom row of images below shows (left to right) area, smoothed area, and smoothed curve with mouse over a data point.

Points
Points
Connected points
Connected points
Smoothed curve
Smoothed curve
Area
Area
Smoothed area
Smoothed area
Smoothed curve with selected point
Smoothed curve, selection

Part 3

Read pages 83-93, and implement the code described in the book. You may skip the Better Tab Images section if you want. For the interpolation, use the provided Integrator instead. Is this tool an effective way to look at time series data? Why or why not? What is missing? How would you change the tool to support looking at time series with many time points (like >1000)?

The animation the integrator provides is an excellent way to see how the shape changes between different data sets. This is a great way to compare two or maybe three data sets interactively, but it's not as good as overlaying several curves on the same figure, which allows comparing all of the data simultaneously. Another issue with the tool (or perhaps more of a question about polish) is handling interactive interpolation when two data sets have different numbers of data points.

Improving the tool to handle very wide data sets could be done in two steps. First, have an interactive window which would allow scrolling/zooming around the entire data set, while the plot shows a part of it. Secondly, it might be useful to allow the interpolation to happen from left to right instead of all at once. Also perhaps showing the functions being interpolated during the animation could make things more clear. Interpolating between colors of different functions as well as their values might also be interesting.

Another possibility is to break the whole time period up into equal portions and overlay all of those plots. This would make it easy to see trends within each period. Of course, this latter technique wouldn't scale beyond 5-6 periods, and therefore there'll be a number of data points at which it will also break down.

Part 4

For 6630 students only. Add an additional tab for a summary view of all of the time series at once. How you do this is up to you. You do not need to support mouseover in this view (but, you can if you want!). Why did you choose this visualization method?

I've implemented the basic overlay since it seemed to provide the easiest way to see all of the data at once. I kept all of the different types of plotting (except for area) and the mouse hover effects as can be seen in the images below. The highlight has been updated to have the same color as the plot. I've also added a legend on the right.

There are several things that could be better. First, I could have used different marks for each line. Another visualization could have overlaid three axes stacked vertically. Another way to display these might be to look at the plots as deviations from the average for each beverage.

Part 4, points
Points
Part 4, connected points
Connected points
Part 4, curved lines selected
Curved lines, selection

For the final visualization of the data, I used smooth lines interpolating the data. I decided not to add any of the data points because the plot seemed too cluttered. As a result, the only way to distinguish between data sets is color, which is listed in the legend in the top-right corner of the plot.

Part 4, final visualization
Final visualization

Code

Download the final version source(zip, 7 KB)



Project 1: Data Exploration

Updated: 09.11.12

Description

The task in this assignment is to use an existing visualization tool (Tableau) to formulate and answer a series of specific questions about a data set of your choice. After answering the questions you should create a final visualization that is designed to communicate your findings. You should maintain a web notebook that documents all the questions you asked and the steps you performed from start to finish.

Data-set

I chose to use the data set from the FAA describing every commercial flight during the month of December 2009. The original data came from www.transtats.bts.gov. The data contains the following information: date, airline code, flight number, origin, destination, plane tail number, departure and arrival times, departure and arrival delays, time in air, and distance travelled.

Initial question

There are a whole lot of things one can ask about this data set. What are the busiest routes around Christmas time? What is the correlation between airplanes arriving late and average temperature (or temperature at destination)? How many miles does a single plane fly?

Process

As I was playing with the data, I realized I needed to clean some of it up. So I added a custom field for the dates to unlock date-based plotting capability within Tableau. The command was simple: Date(Str([MONTH])+"/"+Str([DAY_OF_MONTH])+"/"+Str([YEAR])) (although I suspect the result wasn't cached, making plot updates rather slow).

Secondly, I added a custom geotag which specified airport locations based on their codes, from Open Data. The initial plot was a mirror image of the US (and placed everything in Russia), so I had to negate all longitudes of the original data. There were a few airports missing, so I filled in the missing information based on data from Open Flights. You can see the resulting map below.

Fixed airport locations as origin
Fixed airport locations as origin

I tried adding paths which would connect origins to destinations, which didn't work easily out of the box. So I had to add latitude and longitude columns for the origin and destination. Then, with some custom SQL loading tips from Creating hub and spoke diagrams, I was able to generate paths between airports. After a bit more pain and time, I got the following image, which shows the connections between airports.

Flight paths around the US
Flight paths around the US

After some tweaking, we can see how connected Salt Lake City is with the rest of the US. Tweaking included filtering by origin airport (SLC). As a measure of popularity, the line thickness indicates how many times the particular airport was counted as a destination. Also, I turned on marks at end points of paths and annotations for the airports. On the right, you can see the top 10 most common flights from SLC.

Flights originating in SLC airport
Flights from SLC
Top 10 most common flights originating in SLC airport
Top 10 flights from SLC

Now we can look at the most flown-to airports in the month of December 2009, and then compare that to the image showing airports with more than 100 planes scheduled to arrive on December 25th; unsurprisingly, those appear to be the large airport hubs.

Most flown to airports for Dec 2009
Most visited airports, Dec 2009
Most flown to airports for Dec 25, 2009
Most visited airports, Dec 25 2009
Trends in delays

Back to trying to answer the original question about delays, which have a few interesting trends.

The scatter plot shows a strong correlation between departure and arrival delays, which is logical. One interesting thing that popped out is how linear the data is - after fitting a line to it, we see that on average planes arrived at least 11 min late in December of 2009 (i.e., a non-zero y-intercept), but would make up 4% of the departure delay time (from the line slope).
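
Put together, the fitted line is roughly: arrival delay ≈ 0.96 × departure delay + 11 (both in minutes), where the 0.96 slope reflects the 4% of the departure delay that gets made up in the air.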

Correlation between departure and arrival delays
Correlation between departure and arrival delays

I then decided to see how this differs between airlines, which is shown in the image below. The top plot shows the slopes of the lines fitted to the data, while the bottom shows the y-intercepts for those lines. In the end, the lower these values are, the better. It's evident that AS and UA arrived earlier on average in December 2009. It seems that DL and NW might be best overall because they arrived about on time, while often making up for departure delays.

Classifying lines fitted to data
Classifying lines fitted to data
Adding temperature data

First off, I found some data for the average temperature for the month of December 2009 from the National Climatic Data Center. However, the absolute temperature isn't completely relevant because it doesn't capture airport preparedness for things like de-icing. What is more relevant is the difference between the December 2009 average and the average temperature over a few decades. I obtained the monthly data for 1971-2000. Below are the two maps showing the average temperatures for December 2009 (left) and the difference from the normalized temperatures (right).

Average temperatures in Dec 2009
Average temperatures in Dec 2009
Average temperature difference from norm
Difference from normalized temperatures
Final visualization

Below you can see the final visualization that I have come up with. It took a bit of time to get here, which included playing with color maps to make sure not to use red-green, as well as avoiding overlapping shades of similar color between the plane paths and the map. I couldn't figure out how to layer all of the information in the plot at the same time in Tableau, so I ended up using Illustrator to layer things appropriately and then Photoshop to rasterize. See the caption below the image.

Final visualization
Final visualization
Caption

These plots compare the average arrival delay (in minutes) for a select number of airports as recorded during December 2009. The left plot shows the average arrival delays during December 25 as the color of the paths. The right plot shows the average arrival delays for the entire month as the color. The color ranges are the same in both plots and indicate that December 25th had particularly severe delays at the DEN, DFW, and ORD airports; meanwhile, looking at the entire month shows that arrivals along many routes were delayed, but less severely. To correlate how strongly unexpected poor weather conditions influence airport delays, the colors of the states display the difference of the average December 2009 temperature from the average over 1970-2000. Texas, where DFW is located, was about 40F below the 30 year average, which may explain such arrival delays. Both JFK and EWR, located in New York and New Jersey, have fared well, which corresponds to about 30F above the average temperature.

Paragraph

We often ponder why our plane is late as we wait at the gate during the typical holiday-season trek every year. Many factors may contribute to arrival delays pushing back our departures. We have looked at the flight arrival delay data gathered during December of 2009. To investigate a connection between differences in temperatures and arrival delays, we created a visualization plotting average flight delays over a map showing temperatures. However, the absolute temperature might not have the strongest correlation with delays if it's expected for a geographical region. Therefore, we looked at the difference between the average December 2009 temperature and the normalized temperature for each state, measured by averaging temperatures from 1970 to 2000. Some interesting patterns emerge. For example, it seems travel during December 25th may incur a much wider range of delays when compared to the entire month of December. Also, much lower than normal temperatures seem to have affected DFW negatively, while much higher than normal temperatures affected the North-East of the US positively (see JFK and EWR). When looking at the arrival delays averaged over the entire month, it seems that many of the commonly-flown routes have some delay (up to 30 min).

Updated: 12.13.12 © Konstantin Shkurko, 2010