*animation techniques*- These techniques simulate continuous motion by rapidly displaying a series of images, giving the viewer the impression of watching continuous motion. To achieve this, the graphics hardware needs display rates of at least 25 images per second; otherwise the motion will look shaky. Most graphics hardware cannot reach that display rate for moderate-sized images, so one uses video hardware.

*animation script*- An animation script is used to determine which image will be the next one to be displayed and how long it will be displayed.

*back-to-front order*- Compositing can also be done in front-to-back order without altering the final result.

*camera and object movement*- A *camera* is a means to view data in a viewport. An *object* represents the result of a visualization or annotation tool, e.g. an isosurface, a cutting plane, a bounding box, etc.

*CGM*- A general image file format. The Computer Graphics Metafile has been an ISO standard since 1987. It can encompass both graphical and image data.
*color coding*- The use of colors (usually ranging from red to blue), in most visualization techniques, to reveal transitions in some quantity.
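A minimal sketch of such a red-to-blue mapping; the linear ramp and the clamping here are illustrative assumptions, not any particular package's colormap:

```python
def color_code(value, vmin, vmax):
    """Map a scalar to an (R, G, B) color on a red-to-blue ramp.

    Values at vmin map to blue, values at vmax to red; intermediate
    values blend linearly between the two.
    """
    t = (value - vmin) / (vmax - vmin)  # normalize to [0, 1]
    t = max(0.0, min(1.0, t))           # clamp out-of-range data
    return (t, 0.0, 1.0 - t)            # red grows, blue fades

# The minimum maps to pure blue, the maximum to pure red.
print(color_code(0.0, 0.0, 10.0))   # → (0.0, 0.0, 1.0)
print(color_code(10.0, 0.0, 10.0))  # → (1.0, 0.0, 0.0)
```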
*computational steering*- To interactively change the simulation parameters and immediately see the effect of this change through the new data.
*'compute' modules*- One could, of course, write these modules oneself. However, writing an interpreter, as used in the Compute module, requires great expertise.

*Cutting planes*- This technique makes it possible to view scalar data on a cross-section of the data volume with a cutting plane. One defines a regular, Cartesian grid on the plane and the data values on this grid are found by interpolation of the original data. A convenient colormap is used to make the data visible.
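The interpolation step can be sketched as follows. This assumes a 2D cell with four known corner values; in practice a package would interpolate trilinearly from the surrounding 3D data cells:

```python
def bilinear(f00, f10, f01, f11, u, v):
    """Bilinearly interpolate between four corner values of a grid cell.

    (u, v) are the fractional coordinates of the sample point inside
    the cell, each in [0, 1]; f00..f11 are the data at the corners.
    """
    return (f00 * (1 - u) * (1 - v) +
            f10 * u * (1 - v) +
            f01 * (1 - u) * v +
            f11 * u * v)

# Sampling the centre of a cell averages the four corners.
print(bilinear(0.0, 1.0, 2.0, 3.0, 0.5, 0.5))  # → 1.5
```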

*data browsing*- The use of visualization techniques to allow one to easily browse and understand enormous quantities of data.
*data manipulation or selection*- It is possible to sample a volume of interest (VOI), but this VOI must be defined within the data file format.
*data size*- We assume one floating point number to occupy 4 bytes.
*data value*- This was done considering video limitations. Fully saturated colors cannot be represented well on video. Hence we have used less than full saturation (90%).
*distributive processing*- As applied to scientific visualization, this is when the computation is distributed over a set of high-performance computers and the actual visualization is done on a graphical workstation.
*flipbook animation*- This is a well known animation technique. The generated images are displayed one after the other. Its name derives from the practice of thumbing or flipping through a series of images.
*GIF*- GIF, the Graphical Interchange Format, is quite widespread and can encode a number of separate images of different sizes and colors.

*the HDF format*- The Hierarchical Data Format was developed by NCSA, University of Illinois at Urbana-Champaign. It is a multi-object file format for the transfer of graphical and floating-point data between different hardware platforms. Data Visualizer only supports HDF on Silicon Graphics workstations.
*high-performance computers*- These can be high-performance workstations or supercomputers depending on what is available and what is needed for the simulation.

*images in memory*- It is clear that the number of images that can be cached in physical memory is limited. Increasing the number of images will eventually lead to paging and hence to a slower display rate.

*keyframe animation*- For this technique one only has to generate so-called *keyframes*. Keyframes mark changes in the characteristics of the motion, for example the sudden change in the direction of motion of an electron due to a collision with an ion. Interpolation techniques are used to generate a set of images between two keyframes. The larger the interpolated set of images, the smoother the transition from one keyframe to the next will appear to the viewer.

*Isosurfaces*- This technique produces surfaces in the domain of the scalar quantity on which the scalar quantity has the same value, the so-called *isosurface value*.

*Lossless methods*- Lossless compression methods are methods for which the original, uncompressed data can be recovered exactly. Examples of this category are Run Length Encoding and the Lempel-Ziv Welch algorithm.
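Run Length Encoding can be sketched in a few lines. The (count, value) pair representation below is illustrative, not any package's actual on-disk encoding; the round trip shows why the method is lossless:

```python
def rle_encode(data):
    """Run Length Encoding: store each run as a [count, value] pair."""
    runs = []
    for value in data:
        if runs and runs[-1][1] == value:
            runs[-1][0] += 1       # extend the current run
        else:
            runs.append([1, value])  # start a new run
    return runs

def rle_decode(runs):
    """Expand the runs back; the original data is recovered exactly."""
    out = []
    for count, value in runs:
        out.extend([value] * count)
    return out

pixels = [7, 7, 7, 7, 0, 0, 3]
encoded = rle_encode(pixels)
print(encoded)                        # → [[4, 7], [2, 0], [1, 3]]
assert rle_decode(encoded) == pixels  # lossless round trip
```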

*lossy methods*- In contrast to lossless compression, the original data cannot be recovered exactly after a lossy compression of the data. An example of this category is the Color Cell Compression method. Lossy compression techniques can reach reduction rates of 0.9, whereas lossless compression techniques normally have a maximum reduction rate of 0.5.
*necessary information*- This information consists of connections, connection element type, vertex (node) normals, and vertex (node) colors.
*the netCDF format*- The network Common Data Form is a data abstraction for the storage and retrieval of scientific data, in particular multi-dimensional data. It is implemented as a distributed, machine-independent software library based on this data abstraction, which allows the creation, access, and sharing of data in a self-describing and network-transparent form. It is defined by Unidata.
*reading netCDF format*- There are some restrictions. If the coordinate grid is irregularly spaced some modifications in the netCDF file have to be made.

*Orthogonal slicers*- Often one wants to focus on the influence of only two independent variables (i.e. coordinates). Thus, the other independent variables are kept constant. This is what the orthogonal slicer method does.
*parameter function editing in AVS*- The Compute module of DX can perform the same actions.
*particle advection or streamlines*- A method for outlining the topology, i.e. the field lines, of a vector field. One takes a set of starting points, finds the vectors at these points by interpolation, if necessary, and integrates the points along the direction of the vector. At the new positions the vector values are found by interpolation and one integrates again. This process stops if a predetermined number of integration steps has been reached or if the points end up outside the data volume. The calculated points are connected by lines. The particle advection method places little spheres at the starting points representing massless particles. The particles are also integrated along the field lines. After every integration step each particle is drawn together with a line or ribbon tail indicating the direction in which the particle is moving.
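The integration loop above can be sketched as follows. This is a simple Euler integrator in 2D; `field`, `in_volume`, and the step size are hypothetical stand-ins for the interpolated data grid and the volume bounds:

```python
def streamline(field, start, step, max_steps, in_volume):
    """Trace a field line through a vector field (Euler sketch).

    `field(p)` returns the vector at point p (in practice obtained by
    interpolating the data grid). Integration stops after `max_steps`
    steps or once the point leaves the data volume.
    """
    points = [start]
    p = start
    for _ in range(max_steps):
        vx, vy = field(p)                       # vector at current point
        p = (p[0] + step * vx, p[1] + step * vy)  # integrate along it
        if not in_volume(p):
            break                               # left the data volume
        points.append(p)
    return points

# A hypothetical uniform field pointing in +x: the traced line is straight.
line = streamline(lambda p: (1.0, 0.0), (0.0, 0.0), 0.5, 4,
                  lambda p: 0.0 <= p[0] <= 10.0)
print(line)  # → [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0), (1.5, 0.0), (2.0, 0.0)]
```

The calculated points would then be connected by lines, or, for particle advection, drawn as spheres with a tail after each step.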
*the PLOT3D format*- The PLOT3D format, defined by NASA, is a commonly used data format in the computational fluid dynamics world.
*PostScript*- PostScript, or more specifically the Encapsulated PostScript Format (EPSF), is a page description language with sophisticated text facilities. For graphics, as compared to CGM, it tends to be expensive in terms of storage.
*PPM, PGM, PBM*- PPM, the Portable Pixmap Format (24 bits per pixel), PGM, the Portable Graymap Format (8 bits per pixel), and PBM, the Portable Bitmap Format (1 bit per pixel), are pixel-based formats distributed with the X Window System.
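As an illustration, the ASCII ("P3") variant of PPM is simple enough to write by hand. The function below is a sketch of the format, not the distributed pbmplus code:

```python
def write_ppm_ascii(path, pixels):
    """Write a 24-bit image in the ASCII PPM ('P3') variant.

    `pixels` is a 2D list of (R, G, B) tuples with components in 0..255.
    The header gives the magic number, dimensions, and maximum value.
    """
    height = len(pixels)
    width = len(pixels[0])
    with open(path, "w") as f:
        f.write("P3\n%d %d\n255\n" % (width, height))
        for row in pixels:
            for r, g, b in row:
                f.write("%d %d %d\n" % (r, g, b))

# A 2x1 image: one red pixel next to one blue pixel.
write_ppm_ascii("tiny.ppm", [[(255, 0, 0), (0, 0, 255)]])
```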

*property editor*- The property editor is used to set the material properties of a geometrical object, like transparency, diffusion coefficient, specular coefficient, specular color, etc.

*Ray casting*- For every pixel in the output image a ray is shot into the data volume. At a predetermined number of evenly spaced locations along the ray the color and opacity values are obtained by interpolation. The interpolated colors and opacities are merged with each other and with the background by *compositing* in back-to-front order to yield the color of the pixel. These compositing calculations are simply linear transformations. Performing them in back-to-front order, i.e. starting at the background and moving towards the image plane, produces the pixel color. The opacity acts as a data selector: sample points with opacity values close to 1 (opaque) hide almost all information along the ray between the background and the sample point, whereas opacity values close to 0 (transparent) transfer information almost unaltered.
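The back-to-front compositing step can be sketched as follows. A single gray channel is used for brevity; the blend `C = a*c + (1 - a)*C` is the linear transformation referred to above:

```python
def composite_back_to_front(samples, background):
    """Composite (color, opacity) samples along one ray, back to front.

    `samples` is ordered from the background towards the image plane;
    each step blends the sample over the accumulated color:
    C = a * c + (1 - a) * C.
    """
    color = background
    for c, a in samples:
        color = a * c + (1.0 - a) * color
    return color

# A fully opaque sample (a = 1) hides everything behind it, while a
# fully transparent one (a = 0) passes the accumulated color through.
print(composite_back_to_front([(0.8, 1.0), (0.2, 0.0)], 0.5))  # → 0.8
```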

*RGB*- RGB, the Red Green Blue format, is used by most visualization software packages as the internal image format. The format consists of a header containing the dimensions of the image, followed by the actual image data. The image data is stored as a 2D array of tuples. Each tuple is a vector with 3 components: R, G, and B. The RGB components determine the color of every pixel (picture element) in the image.

*Scalar glyphs*- A technique which puts a sphere or a diamond on every data point.

*scene*- A *scene* is an arrangement of objects (generated by visualization techniques), geometries, and annotations in 3D space that can be rendered.

*sequence number*- The sequence number of the first image in the sequence is 0000, the second is 0001, and so on. This format of the sequence numbers has the considerable advantage of producing the correct ordering of the sequence with the UNIX *ls* command. For example, the ordering of *ls* is used by a command that can write the sequence to laser disk.
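The effect of the zero-padding on the lexicographic ordering can be illustrated briefly (the `img%04d.rgb` name pattern is hypothetical):

```python
# Zero-padded sequence numbers sort correctly in a lexicographic
# listing such as the one produced by `ls`; unpadded numbers would
# place img10 before img2.
names = ["img%04d.rgb" % i for i in (0, 1, 10, 2)]
print(sorted(names))
# → ['img0000.rgb', 'img0001.rgb', 'img0002.rgb', 'img0010.rgb']
```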

*splatting*- This technique was developed to improve the speed of calculation of volume rendering techniques like ray casting, at the price of less accurate rendering. It differs from ray casting in the projection method. Splatting projects voxels, i.e. volume elements, onto the 2D viewing plane. It approximates this projection by a *Gaussian splat*, which depends on the opacity and the color of the voxel (other splat types, like linear splats, can also be used). A projection is made for every voxel and the resulting splats are composited on top of each other in back-to-front order to produce the final image.

*surface rendering techniques*- We will stick to the names used in sections 2.3 and 2.4. The names of the actual techniques in the visualization packages may differ from them.
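The Gaussian splat from the splatting entry can be sketched as follows; the footprint radius, the sigma, and the single gray channel are illustrative assumptions:

```python
import math

def gaussian_splat(image, cx, cy, color, opacity, radius=2, sigma=1.0):
    """Accumulate one voxel's Gaussian footprint onto a 2D image.

    The voxel has already been projected to pixel (cx, cy); its
    contribution falls off as a Gaussian with width `sigma` and is
    blended on top of what is already in the image (back to front).
    """
    for y in range(cy - radius, cy + radius + 1):
        for x in range(cx - radius, cx + radius + 1):
            if 0 <= y < len(image) and 0 <= x < len(image[0]):
                d2 = (x - cx) ** 2 + (y - cy) ** 2
                w = opacity * math.exp(-d2 / (2.0 * sigma * sigma))
                image[y][x] = w * color + (1.0 - w) * image[y][x]

image = [[0.0] * 5 for _ in range(5)]
gaussian_splat(image, 2, 2, color=1.0, opacity=1.0)
print(image[2][2])  # → 1.0 (full contribution at the splat centre)
```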

*streaklines*- A method for outlining the topology, i.e. the field lines, of a vector field; the tracing procedure is the same as described under *particle advection or streamlines*. The streaklines technique considers the vector field to be time dependent. Hence, it interpolates not only in the spatial direction, but also in the time direction.

*streamlines*- A method for outlining the topology, i.e. the field lines, of a vector field; the tracing procedure is the same as described under *particle advection or streamlines*. The streamlines technique considers the vector field to be static.

*textures*- This is a technique to color arbitrary surfaces, e.g. those generated by the isosurface techniques, according to a 3D scalar field. An interpolation scheme is used to determine the values of the scalar field on the surface. A colormap is used to assign the color.
*3-dimensional dependences*- As we need the full 4D data sets with four variables per grid point (**v** and *OD*), we have split the data set into parts containing 100 time steps each.

*turnkey application*- This kind of application allows the user to supply data and select from a fixed set of operations; functional extension is not supported. This is one of the most severe drawbacks of turnkey systems. If none of the operations suits the user's needs, the user cannot develop a desired operation and plug it into the application so that it would be accessible from within the graphical user interface mode or the command language mode.
*vector glyphs*- This technique uses needle or arrow glyphs to represent vectors at each data point. The direction of the glyph corresponds to the direction of the vector and its magnitude corresponds to the magnitude of the vector.
*vector quantities*- Unless otherwise stated, it is implicitly assumed that a vector has three components. So a vector is often written (x, y, z), and x, y, and z are typically floating points (scalar quantities).
*velocity components*- The individual components of the velocity vector.
*widgets*- A widget is a graphical representation of a logical input device. More loosely, the term is used to describe any abstract device.

*wireframe*- Wireframe rendering is a fast method in which objects are drawn as if they have been made of wires, with only their edges showing. All edges are drawn because hidden-edge removal is not performed.

*XBM*- XBM is the X-Window one-bit image file format, which has been standardized by the MIT X-consortium.

A major constraint on the use of images is the large data volume which has to be dealt with. Large sets of image data can have severe implications for storage, memory, and transmission costs. Therefore, compression techniques are very important. There are two categories, lossless and lossy methods, based on whether or not it is possible to reconstruct the initial picture after compression.