Visualization Concepts


5 Catalog of Visualization Techniques


This chapter provides a comprehensive catalog of techniques that you can employ using AVS/Express to effectively visualize your data. Examples of the use of these techniques in real-world scenarios, together with sample images, can be found in Chapter 3, Example scenarios.

This chapter explains:

The Data Visualization libraries
Overview of the data visualization techniques
Visualization techniques
Geometries
Data preprocessing
Image processing
Adding your own image readers and writers
Geographic Information System (GIS) components

5.1 The Data Visualization Libraries

This section provides an overview of the Data Visualization libraries. The following sections describe the use of these objects as visualization techniques.

Macros

Visualization macro objects (cut, streamlines, data_math, and so on) are higher-level visualization tools that combine visualization base modules with UI objects in a structure that is convenient for building visualization networks and applications. Visualization macros have input/output ports and UI widgets that automatically appear on the Module control panel in the SingleWindowApp and MultiWindowApp applications. You connect them using the Network Editor.

For example, the advector macro that does particle advection is actually a subnetwork composed of a series of hierarchical macros that are in turn composed of the base modules DVstream, DVloop, DVadvect, DVglyph, and various UI widgets.

But one need not be aware of this underlying hierarchy to use the advector macro. Simply drag the advector icon into the Network Editor's workspace. Its widgets appear organized together on a Module Stack panel, and then you can connect suitable input modules or macros and an output renderer and begin work.

Macros are defined in v/modules.v.

In Libraries.Main, the visualization macros are classified as follows:

Input
Modules and macros that input data files from disk or create data. Geometries, for example, create objects such as slice planes, 2D and 3D axes, crosshairs, arrows, and diamonds that are used by other modules to slice data, represent vector quantities, act as data probes, show data, and so forth.
Filters
Modules and macros that primarily modify the field's Node_Data.
Mappers
Modules that primarily modify the field's mesh.
Imaging
Modules and macros that use the ip_Image data type for image processing. These are documented in Image Processing, later in this chapter.
Viewers
Modules and macros that render data on the screen. These objects are part of the Graphics Display Kit, not the Data Visualization Kit. They are documented in the Graphics Display Kit manual.
Output
Modules and macros that write data to disk files.

DV prefix base modules

The "DV" base modules (DVcut, DVstream, DVdata_math, and so on) are the lower-level functions from which the visualization macros are constructed. They perform one basic operation on a defined Field input and produce a new, transformed Field output. These base modules have no user interface widgets to control their parameters. Rather, users compose their own user interfaces using either the AVS/Express supplied User Interface Kit widgets, or using widgets that they code and import into the system.

The DV base modules are defined in v/dv.v.

Base visualization objects can be viewed as special V group objects, from which an application can be built. Defined as groups, the base modules define their input and output data templates and the name of the method to call when the object is notified. For example, a V description of an isosurface group object template includes "input_field," "output_field," and "level." The description also includes the method "iso," the name of the data processing function associated with the object. Note that this group object can be used to process any field that has mesh and node data.

group DViso {
    Mesh+Node_Data+Iparam &in {
        xform+nonotify;
        nnodes+req;
    };
    float+Iparam level;
    Mesh+Oparam out {
        &xform<weight=-1> => in.xform;
    };
    NParam_Data+Oparam nparam;
    method+notify_val+notify_inst iso_update = "iso";
};

V provides the ability to associate the name of a data processing method ("iso" in the example above) with a callable function that gets executed when the object is notified. The set of all visualization data functions provides a powerful and extensible interface between the Object Manager and the Data Visualization Kit.

Base modules are also sometimes referred to as primitives.

Mid-level and DV_Param macros

When you look at the icons under Libraries.Visualization.Macros, you will also see a large number of icons with the DV_Param prefix, or no prefix at all. These are V macros that define a series of objects that sit between the low-level DV base modules and the high-level AVS/Express macros. They define a collection of user interface macros, plus a set of parameter ports, that together make it convenient to create the AVS/Express macros.

The AVS/Express macros use these intermediate level hierarchical macro objects to define themselves. You can use them to construct macros with your own user interface design.

These intermediate level macro objects are not documented on their own reference pages. See the corresponding high-level macros for documentation of their parameters and as an example of their use.

5.2 Overview of the Data Visualization Techniques

Having reached this point in this book, you should already be aware of the core concepts of data placement in AVS/Express - you should know how to import information and how to manipulate fields and other AVS/Express-specific constructs. However, until you begin to understand the fundamentals of information visualization, the branch of computer graphics that deals with rendering information in a graphical context, attempting to interpret this data in a meaningful fashion may prove daunting.

This chapter deals with some of the underpinnings of visualization - the computer science and mathematical techniques that allow you to take your data from the information space in which it currently dwells to the visualization space that graphically represents this data in a form that is meaningful in a visual context. While these techniques are applicable everywhere, you should use the descriptions of the visualization techniques in this chapter to focus on using AVS/Express as the tool for realizing a useful visual representation. Think of the computer science and mathematical techniques discussed in this chapter as a blueprint, and think of AVS/Express as the box of tools required to make your blueprint a reality.

Visualization techniques

The techniques discussed in this chapter are:

Exteriors and edges
Contours, isolines, and isosurfaces
Slices and cross-sections
Colors, Lookup Tables (LUTs), and colormaps
Glyphs
Vector fields
City scapes, ribbon plots, and surface plots
Probes and interactions

Although these techniques only scratch the surface of the methods available to you in the field of information visualization, they will give you a solid foundation.

Data preprocessing

Prior to using the above techniques, indeed, sometimes prior to reading the data sets into AVS/Express at all, it can be useful to apply various data manipulation tricks to prepare the data for the visualization. These techniques, known as data preprocessing, are also defined and discussed below.

These preprocessing tools, combined with the computer science and mathematics techniques described previously, can help you make sense of both collected and generated data sets. By applying a few of these simple techniques, insight can be obtained from extraordinarily large collections of numeric information.

5.3 Visualization techniques

Information with a spatial component collected as a volume is typically represented as discrete points in 3-space. Volumes of data can also be generated, rather than collected (for example, if you are running a fluid flow simulation tracking the location of each particle in your fluid model). Alternatively, information that is only two dimensional but also has a time component is often represented as a series of 2D slices stacked one on top of another. These types of views into your data are called volumetric rendering, and they pose a number of problems. Most notably, if your data is represented as a solid volume of points, how do you see the internal structure?

A number of techniques exist for allowing the information at your disposal to reveal internal structure. This section deals with a few of the more popular techniques, and although they are presented separately for the sake of clarity, there is no reason why several different visualization techniques cannot be used in combination.

Exteriors and edges

Chances are good that the data to be analyzed is finite - it exists in its own data space and it has its own boundaries, dimensionality, and context. It is often necessary to know the outer reaches of both the data and the data space that it inhabits. Although the terms data boundaries and data space extents may sound like the same concept, this is not necessarily always the case. It is easiest to talk about these terms when dealing with information that has a strong spatial component, such as map data or fluid flow regions. However, these terms apply equally well to information that is not anchored in space. Information of this type can include stock portfolio history or the economic forecasts of the European Union (EU).

The data collected by an experiment or generated in a simulation has two containment properties. The first is the sum total of the data points themselves: all of the data that has been collected or generated (using AVS/Express terminology, this would be the coordinate array). All of this data falls within its own data boundaries. By definition, the data boundary is the set that inclusively contains the data itself. When visualized, the data boundaries manifest themselves as visible edges, boundaries, and surfaces. In other words, the data itself has minimum and maximum values in each of the dimensions the analyst chooses to represent. These minima and maxima are the data boundaries.

By contrast, the data space extents are the limits imposed on the boundaries of the coordinate system in which the data set has been placed. Ideally, the data space extents are a superset of the data boundaries, but this is not necessarily the case. (It is quite possible for information in an analyst's data set to lie outside the data space extents.) When forced into a graphical context, these data space extents can be rendered as edges, walls, or surfaces. Therefore, the data space extents can be thought of as the minima and maxima of the coordinate system(s) containing the data.

Let's consider an example to illustrate the difference between these two concepts. Assume that a market analyst wishes to examine all of the census information for the United States. In addition, the researcher is expecting the maximum data value to be 5,000,000 individuals.

The census data collected represents local populations rounded to the nearest 1000 individuals. A quick statistical analysis shows that the minimum value for the data set is 1000, and that the maximum value is 7,000,000. Furthermore, even though the researcher is interested in all 50 US states, there is no census data for Alaska and Hawaii.

Given this example, the data boundaries retrieved from the analysis show the minimum and maximum data values to be 1000 and 7,000,000 respectively, and the minima and maxima for the latitude and longitude to be restricted to the continental US. The researcher, however, is using data space extents that run from 1000 to 5,000,000 in value, with spatial extents from 175°W to 45°W longitude and 15°N to 85°N latitude (the continental U.S. plus Canada and Mexico).

edges

Produces a wireframe representation of a mesh, including internal cell edges. This allows you to see the internal structure of the mesh.

Contours, isolines, and isosurfaces

Digital data, whether generated or collected, consists of discrete values over its own data boundaries. It is often convenient to group these values into "buckets" or "bins" which represent ranges of the possible data values. When examining demographic data, for instance, it might be interesting to see how the information is divided into age groups: 0-15 years, 16-25 years, and so on.

In traditional 2D visualization, contouring is a technique used for visualizing these bins and their relationship to each other. When employing this technique, an algorithm produces potential boundary lines, or isolines, between data that falls on either side of a selected data value. Most often used in mapping to show elevation differences, contouring can be applied to nearly any 2D data set. Differences in magnetic field strength across a given surface, for example, can be represented with contour lines.
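As a sketch of how a contouring algorithm locates these isolines, the following C fragment classifies each cell of a regular 2D grid by testing whether its corner values straddle the selected level - the core test behind marching-squares-style contouring. This is hypothetical illustration code, not an AVS/Express routine; the function name and data layout are assumptions.

#include <stdio.h>

/* Report the grid cells that an isoline at "level" passes through.
 * A cell is crossed whenever its four corner values do not all lie
 * on the same side of the level. */
void find_isoline_cells(const float *data, int nx, int ny, float level)
{
    for (int j = 0; j < ny - 1; j++) {
        for (int i = 0; i < nx - 1; i++) {
            float v00 = data[j * nx + i];
            float v10 = data[j * nx + i + 1];
            float v01 = data[(j + 1) * nx + i];
            float v11 = data[(j + 1) * nx + i + 1];

            /* Build a 4-bit case index: one bit per corner above the level. */
            int index = (v00 > level)
                      | ((v10 > level) << 1)
                      | ((v11 > level) << 2)
                      | ((v01 > level) << 3);

            /* Cases 0 and 15 mean all corners lie on one side, so the
             * isoline does not cross this cell. */
            if (index != 0 && index != 15)
                printf("isoline crosses cell (%d, %d), case %d\n", i, j, index);
        }
    }
}

A full contouring module would go on to compute the exact edge crossings for each nonzero case and join them into line segments.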

Internal structures of solid volumes may reveal a tremendous amount of information about the nature of the data. Being able to see inside a 3D model of a storm cloud, for instance, will provide the meteorologist great insight into the formation of the mesocyclone hidden beneath the opacity of the water vapor. To be able to perform this feat, it would be useful to take the concept of contouring data and extend it to three dimensions.

Contour lines can be taken across any plane in a given data set. By taking contours of various data values across each xy plane at discrete z intervals, the resulting set of stacked contour slices begins to reveal the internal structure of the data set. Wire frames of the internal contours can be achieved by taking a second series of contours of the same data values across the xz or yz planes (or both).

Once the wire frame objects that represent the internal structure of a volumetric data set have been created, it is only a small conceptual leap to envision solid objects in place of these wire frames. A visually solid surface placed at a specific contour layer for a given data value is called an isosurface. A solid isosurface created at a 35°F temperature boundary inside an atmospheric model, for example, would allow a researcher to see the relative shape and penetration of a storm core region.

Discretionary use of isolines, contours, wire frames, and isosurfaces will allow you to glimpse into the inner structure of your data.

Slices and cross-sections

Detecting patterns and structure using contouring and isosurfaces may prove elusive in a very complex volume, in a volume composed of very high-resolution data, or in a data set that does not lend itself easily to "binning". Some volumetric data, such as medical magnetic resonance imaging (MRI) scans, may reveal more structure when the volume is dissected with arbitrary, two dimensional slices.

Imagine the structure inside a freshwater lake. If data from this lake were collected in digitized form, and a limnologist wanted to see the precise location of the pockets of cold water, a wire frame representing the boundary of each water pocket might be all that is required. The locations of schools of fish swimming in each cold water pocket could likewise be easily represented with additional wire frame surfaces.

However, what if the limnologist was interested in additional structure within the water of the lake? How could he or she easily visualize the location of each underwater eddy? Creating a wire frame surface for each eddy, although possible, would consume a lot of computer time, and more than likely result in a meaningless jumble of visual information.

A technique that might be employed in this case would be to carefully divide the representation of the lake into an arbitrary number of slices. These slices could be through any plane of the data set, no matter how it is oriented within the lake. By keeping track of the location of the slice within the original data set (see Exteriors and edges, earlier in this section), the limnologist can begin to gain insight into the complex structure of the lake. Color coding of the data, as well as some of the additional techniques discussed in Image Processing later in this chapter, would aid the researcher in understanding the structure inherent in each slice of data.

Colors, Lookup Tables (LUTs), and colormaps

One of the visual cues that aid humans as they navigate through their environment is the ability to detect and discern individual wavelengths of visible light. This information is represented in our brains as color and is passed off to different areas of our brain for further processing. The human ability to detect small differences in the wavelengths of visible light emitted or reflected by an object is probably an evolutionary survival skill - it is, after all, easier to spot the bright orange tiger against the dark brown grass if the eye can discern the difference between the wavelengths reflected off these two objects. (Wavelengths which are, incidentally, extremely close together in the spectrum.) Because we evolved with such a fine sensitivity to discriminate between two different wavelengths of light, color is one of the best cues available in information visualization.

Digital data values in a data set can be assigned colors in a variety of ways, but one of the most common is a specialized form of binning. Specific data values or specific ranges of data values are assigned an ordinal number. This number is used as an index into a list of colors in a color Lookup Table, or LUT. The color values at each index in the LUT are represented by a distribution of intensities on each of the red, green, and blue channels. The intensities in each of the RGB channels (the total number of color choices available to any one piece of data) are restricted by your computer's graphics hardware.
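As a minimal sketch of this kind of binning - hypothetical code, not part of any AVS/Express API - the following C function maps a scalar data value to a LUT index by dividing the data range [dmin, dmax] evenly among the available entries:

/* Map a data value into one of nbins LUT indices.  Values outside
 * [dmin, dmax] are clamped to the first or last bin. */
int lut_index(float value, float dmin, float dmax, int nbins)
{
    if (value <= dmin) return 0;
    if (value >= dmax) return nbins - 1;
    return (int)((value - dmin) / (dmax - dmin) * (nbins - 1));
}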

You may hear talk of 1-bit color, 4-bit color, 8-bit color, 12-bit color, 16-bit color, 24-bit color or 32-bit color. The number of color bits, or color depth, assigned to your computer's graphics hardware is the maximum limit on the color values that can be placed in your software's LUT. The relationship between this maximum number of colors available and the color depth of your hardware is simple for bit depths of 24 or less: 2^n, where n is the number of bits of color depth. So, 1-bit color means your software can only have a maximum of two colors in the LUT, 4-bit color allows for 16 colors, and so on. (Note: Typically 32-bit color systems allow each index to be represented by three 8-bit intensity values of red, green, and blue. The remaining 8 bits are normally reserved for an overlay plane or alpha channel to use for annotations, labels, and other objects not associated with the data values being displayed.)

There are several different formats for the LUTs themselves, only two of which we will discuss here: pseudocolor and true color. For pseudocolor, there is a single index pointing to a single value representing the color. This value is broken up into some combination of bits to represent the red, green, and blue intensities. (24-bit color, for example, can have the first 8 bits represent the red, the next 8 bits the green, and the final 8 bits the blue.) True color, on the other hand, allows greater flexibility of color choice. Rather than each data value having a single index into a single LUT, each data value has three indices referencing three separate LUTs, one for each intensity of red, green, and blue. This allows for a greater total number of color values that can be placed on the screen at one time.
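A minimal sketch of the pseudocolor packing just described might look like the following C fragment. The 8-bit red/green/blue split shown is only one possible layout, and the function names are illustrative:

/* Pack three 8-bit channel intensities into a single 24-bit value. */
unsigned int pack_rgb(unsigned char r, unsigned char g, unsigned char b)
{
    return ((unsigned int)r << 16) | ((unsigned int)g << 8) | b;
}

/* Recover the individual channel intensities from a packed value. */
void unpack_rgb(unsigned int c, unsigned char *r, unsigned char *g, unsigned char *b)
{
    *r = (c >> 16) & 0xff;
    *g = (c >> 8) & 0xff;
    *b = c & 0xff;
}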

The complete list of color values placed at each index of the LUT is referred to as a colormap. Researchers often choose from a catalog of colormaps that are specific to their field of interest. A common colormap is the cold-hot colormap used by thermodynamics researchers: low indexes in the LUT start out in the blue color range, proceed through yellow, and then move into red at the higher indexes. AVS/Express comes complete with a standard catalog of colormaps to meet the needs of most users and developers. However, if your needs require a specialized colormap, AVS/Express allows you to create and save your own.

Glyphs

There will be many times when you want to place markers in your data set or represent some data values as specific 3D graphic icons. Such a small 3D graphic object, which represents one or more data values at a single location in space, is called a glyph. A single glyph can be used to represent many properties or variables of collected data at any given point in space. As such, glyphs are useful tools for interpreting a large quantity of information at a single glance.

As a complex example of glyphs employed as a multi-variate analysis tool, consider a stock market analyst who is viewing a client's stock portfolio. The client may have 15 different stock holdings, and the analyst would like an effective way to represent not only the client's current stock holdings, but also each stock's current value, history, and projected market trend. Furthermore, the analyst wants this representation in a single, comprehensible view that allows rapid comparisons between the client's stock holdings. One way to achieve these objectives would be through the use of glyphs.

To begin, the client's portfolio could be represented as a volume in space: the x and y axes of the entire volume could represent the time of purchase of each stock and each stock's current number of shares, respectively. The z axis of the entire volume could represent the current dollar value of each stock. At each (x,y,z) location, the analyst might place a cylindrical glyph. The height (z value) of each glyph could represent that stock's position in time throughout the client's holding. The diameter of the glyph could represent the value of that stock holding at the particular moment in time denoted by the glyph's z value. The color of the glyph could indicate the projected forecast for that stock. (Green, for example, could indicate the stock was not ready to sell; red could indicate that the client should consider selling the stock.)

The resulting image would resemble a room of floating, tubular glyphs. The room could be rotated, and the relative size, shapes, positions, and colors of each glyph could be observed. In a single image, the use of glyphs allows a researcher to absorb a large quantity of information easily.

Another common use of glyphs is as an intelligent marker. Particle positions in a fluid flow model, for example, could be represented as spheres. The size of the sphere could indicate the type of particle that is being tracked in the model, and the color of the sphere could indicate fluid temperature at that specific location.

Vector fields

A vector field is traditionally used to denote data that not only has a position in space, but also one or more additional components representing direction, velocity, or energy. Information is either collected, generated, or interpolated at regular intervals in a Cartesian spatial system. Additionally, the information may be collected at these locations over time, enabling you to view a fourth dimension if the fields are displayed in rapid succession.

Vector fields predate computer graphics by many decades. Early vector fields were laboriously plotted by hand on 2D graphs to represent magnetic fields, winds, and other rigorous, multi-variate data. Today, computers are used to represent vector fields not only in two dimensions but also in three or four dimensions. In many ways, a vector field can be considered a highly specific form of a glyph.

At each (x,y) or (x,y,z) location, a small vector (which can be thought of as an arrow glyph) is drawn. The direction of the vector can give information to the researcher as to which path a particle at that position would take. The length of the vector may represent the velocity of a particle at that location. Additional information could be added to the vector in the form of the thickness of the arrow glyph (perhaps field strength) and the color of the arrow glyph (perhaps fluid temperature).
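The geometry of such an arrow glyph reduces to a few lines of arithmetic. The C sketch below - hypothetical illustration code, assuming vectors stored as simple (x, y, z) triples - computes the tip of an arrow anchored at a node, so that direction follows the vector and length encodes magnitude:

#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* Tip of an arrow glyph anchored at "base": direction follows the
 * vector, length is the vector magnitude times a global scale. */
Vec3 arrow_tip(Vec3 base, Vec3 v, float scale)
{
    Vec3 tip;
    tip.x = base.x + scale * v.x;
    tip.y = base.y + scale * v.y;
    tip.z = base.z + scale * v.z;
    return tip;
}

/* The magnitude is also handy for mapping speed onto the glyph's
 * color or thickness. */
float magnitude(Vec3 v)
{
    return sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
}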

When the entire scene is viewed as a whole, immediate insight (even to an untrained eye) can be gained on the behavior of the field that is being examined. The behavior of fluid flowing through a pipe, of the field direction and intensities of a magnetic source, or of the migratory patterns of birds become immediately obvious to the observer.

City scapes, ribbon plots, and surface plots

Recall from previous discussions that data is often organized into bins or buckets that represent some scalar value for a given variable. For example, the number of sales for each month of the year may be represented as a single-row table of numbers: 120 for January, 95 for February, and so on. Additionally, the entries in this table may not represent the total number of items for a given variable but rather a specific value for that variable. Say, the height of the landscape at 50-foot intervals taken along a 1000-foot straight line: x=0 has a height of 25 feet, x=50 has a height of 5 feet, and so on.

The traditional way of viewing information like this is via histograms: a solid bar plotted for each of the variables (along the x axis) up to a height (along the y axis) that represents the value of that variable. Histograms are still valid ways of viewing single-row tabular information, and they can even be used to represent simple multi-variate data by placing multiple histogram bars at each variable entry. Say, for instance, the sales for the German and US offices for each month are to be represented: 120 (US) and 110 (German) for January, 95 (US) and 150 (German) for February, and so on.

However, what if your data is not in a single row table? What if your table contains multiple rows? What if the data is not evenly spaced? What if some of the values in some of the rows/columns are missing, or what if some of the row/column positions contain multiple entries? Viewing multiple histograms for each of these combinations can become confusing. In the realm of 3D information visualization, there are answers to these questions.

City scapes

City scape plots are used to represent information that has no xy relationship.

The two dimensional histogram has an analogous concept in three dimensions called a city scape. Rather than columns running along a 2D x axis, a city scape consists of square columns at xy locations. The height of the column represents the value at the location that an analyst wants to examine. Again, multivariate information can be introduced by varying the x width and y width of each column as well as the color. The name city scape is derived from the effect of viewing one of these plots edge-on: the blockiness of the square columns gives one the impression of a city's skyline.

(The AVS/Express module city_plot takes 2-space fields with node data and creates a 3-space field city scape using the node data as the z value.)

Ribbon plots

Ribbon plots are used to represent information that has a relationship in either the x or the y dimension.

Ribbon plots are the 3D versions of line graphs. Information that has an xy position and a z value representing the information to be examined can be plotted in an xyz volume with ribbons or streamers connecting either the xz locations or the yz locations. Additional information can be added to the scene by varying the width or thickness of each ribbon by some value, by adding color to the ribbon that varies with a change in value, or by some combination of the two.

(The AVS/Express module ribbon_plot takes 2-space fields with node data and creates a 3-space field ribbon plot using the node data as the z value.)

Surface plots

Surface plots are used to represent information that has a relationship in both the x and y dimensions.

Surface plots are a logical extension of ribbon plots. Here, rather than ribbons connecting either the xz or yz locations, the z values are treated as the height of a surface at some xy location. Gradients are computed between xy locations to keep the appearance of a smooth, unbroken surface. Additional information can be added by varying the color of the surface based on some extra value that is important to the researcher. A common example of a surface plot is a topographical terrain map of some region of the earth.

More exotic examples of surface plots can also be constructed. Consider a 2D map of the US that contains additional information concerning the population of the cities and towns. The xy coordinates would correspond to longitude and latitude values. A surface could then be constructed based on the z value representing the population measurement at each longitude and latitude location (mountainous areas would appear near New York, Chicago and Los Angeles, and valleys would appear in the Dakotas, the Rockies, and the southwestern states of Arizona and New Mexico). Additional information, perhaps income information, could be represented as colors on the surface.

The interesting point about the surface plot in this scenario is that not every xy location would need a census value. Since surface plots essentially interpolate z values when the gradients of the surface are computed, the US census plot would appear as an unbroken landscape.

(The AVS/Express module surf_plot takes 2-space fields with node data and creates a 3-space field surface plot using the node data as the z value.)
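The in-between z values that make a surface plot appear unbroken are produced by interpolation between grid points. A minimal sketch of one common scheme, bilinear interpolation on a regular grid of heights stored row by row, follows; this is illustrative C, not the surf_plot implementation:

/* Height of the surface at fractional grid position (fx, fy),
 * with 0 <= fx < nx-1 and 0 <= fy < ny-1. */
float surface_height(const float *z, int nx, float fx, float fy)
{
    int i = (int)fx, j = (int)fy;
    float tx = fx - i, ty = fy - j;
    float z00 = z[j * nx + i],       z10 = z[j * nx + i + 1];
    float z01 = z[(j + 1) * nx + i], z11 = z[(j + 1) * nx + i + 1];
    return (1 - ty) * ((1 - tx) * z00 + tx * z10)
         +      ty  * ((1 - tx) * z01 + tx * z11);
}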

Probes and interactions

Often, data visualization is required to be more than a one-way paradigm. It may become necessary for a researcher to interact with their data set. Observing a surface plot of recent stock performance, for example, a stock analyst might notice an anomaly represented as a spike in the surface. The stock analyst may then want to interact with the data by clicking on the spike, which would call up another data set that might explain the anomaly.

A simpler mode of interaction would be through the use of a probe, a geometric construct that you can insert into a scene and move. The probe then generates an event when it collides with a data location. The response to this event is determined by the user and application needs: often it is no more than a printout of the value of the data point or its location, but other reactions can be tied to the event as well.
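At its core, a probe of this kind is a spatial search. The C sketch below is hypothetical illustration code (a real probe module would raise an event rather than simply return an index); it assumes node coordinates stored as packed (x, y, z) triples:

#include <float.h>

/* Find the data node nearest the probe position and report its value.
 * Returns the index of the node the probe "hit", or -1 if none. */
int probe_nearest(const float *xyz, const float *data, int nnodes,
                  float px, float py, float pz, float *value)
{
    int best = -1;
    float best_d2 = FLT_MAX;
    for (int n = 0; n < nnodes; n++) {
        float dx = xyz[3 * n]     - px;
        float dy = xyz[3 * n + 1] - py;
        float dz = xyz[3 * n + 2] - pz;
        float d2 = dx * dx + dy * dy + dz * dz;
        if (d2 < best_d2) { best_d2 = d2; best = n; }
    }
    if (best >= 0) *value = data[best];
    return best;
}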

5.4 Geometries

Geometries are macros that create simple geometric objects (lines, planes, diamonds, arrows, crosshairs, etc.) as unstructured Meshes. You can use these simple geometries in two basic ways:

When used with the glyph macro, a copy of the geometry will be drawn at each node in the mesh. The copies can be colored and scaled according to the data values at that node location. In some sense, this visualization technique is the closest representation of a field to reality - it shows data only at the nodes where it actually exists; no interpolation is done that "fills in" data between the real node locations.
In advector, the geometry becomes the representation of the particles that are being advected.
In probe, streamlines, and interp_data, the geometry is a pointer that you move around the field, sampling the data it points to or intersects.
In slice and cut, the geometry is a slicing object that either divides a field in two, or extracts the data where the slicing object intersects the field.

Geometries can be 1, 2, or 3D objects (a point, a line, a box). Geometries can be specialized according to the type of mesh you are intending to sample with them. The "F" prefix geometries take a field as input, tailoring their output to the extents of this input field. For example, FLine2D assumes a 2D input field, while FLine3D assumes a 3D input field. Other geometries do not take inputs (Line2D, Line3D).

Lastly, some geometries come with their own Transformation Panel user interface that lets you move them around in space, while others require that you use the data viewer's transformation facilities. In this latter case, to move the geometry you select the geometry as the current object, then transform it with the mouse buttons or the viewer's object transformation panel.

Example network

Here is one sample network (Libraries.Examples.Visualization.Grad) that shows both basic functions of geometries - drawing a glyph at node points and creating a sampling object - used at once. The network creates a picture of the vector gradient in a field. In this example:

And here is the resulting output:

Figure E-1


Note that Arrow1 could be replaced with any number of geometries (Arrow2, Axis3D, Cross3D, Diamond3D). The only difference is that you would see that geometry instead of the wireframe arrows. FPlane could be replaced with FPoint3D, FLine3D, or FBox. Then, instead of a slice plane of vectors, you would see respectively, a single point vector, all vectors along a line, or all vectors within the volume of a box.

Network Editor locations

The Geometries are located in these paths in the Network Editor:

Libraries.Main.Geometries
Libraries.Visualization.Geometries
Libraries.Templates.GEOMS

V locations

Geometries are defined in v/geoms.v.

5.5 Data preprocessing

More often than not, collected data is imperfect. There may be a lot of noise, too much data, too little data, and so on. Although it is certainly possible to read the information and perform any number of the above visualization techniques to extract the information desired, the simple fact is: visualization can be expensive. It costs unnecessary CPU cycles to process information through the system that is ultimately meaningless to the research at hand.

Rather than carry around unnecessary baggage, or shunt noisy data through a CPU intensive system, it is often advantageous to preprocess the information prior to mining it for information. At any point in the data cycle, data can be added, removed, reduced, enlarged or cleaned up. Where these preprocessing filters are placed depends on what kind of preprocessing needs to be performed.

This section will help guide you through some of the more popular preprocessing techniques. By examining the techniques and then re-examining your data, you should be able to decide which of these filters will help reduce your information to its essential components. These techniques are just starting points; others will become obvious to you as you continue to work with your data sets.

Combiners and extractors

It is often the case that in order to understand a given scenario presented by a data set, it is necessary to create a new context for the information. This can be done by either adding information to the data or removing information from it.

Combiners are processes that are used to merge two or more separate data sets. A geologist wishing to understand the potential impact of an earthquake in the San Francisco Bay area would need several data sets: data representing the geologic activity and forecasts for the area, GIS information to place that data in a geographic context, census information showing the population densities in the area, and perhaps residential zoning information. The process of ingesting all of this information, registering it so that the data appears in a common spatial context, and correlating this information along common timestamps would be the responsibility of a combiner.

An extractor is a process or group of processes that do the opposite - they remove information of interest from a larger context. Given a large data set of information, it may be necessary for a researcher to pare down the information to focus on a core set of data specific to his research. Given a full set of medical data for seniors in the New England region, a researcher may only be interested in males over 60 who smoke. An extraction process would be required to sift through the large data set looking for the specific information.

Data filters

Collected information is often accompanied by noise: sensor inefficiency, unwanted information from outside the collection area, missing or incomplete data, and so on. Although it is possible in a sophisticated information visualization system, such as AVS/Express, to process a data set containing all of this extraneous information, it may make more sense to remove this information up front.

Data filter is a term that refers to processes at various stages of the visualization cycle which either remove unwanted information or let only certain pieces of information through. Although most commonly employed as a front end to the visualization software, these filters can also be applied at the data collection point, as a post-processor to the data collection, or at various stages of visualization where information is added or subtracted via combiners and extractors.

Wherever it is applied, the main goal of these data filters is to restrict the amount of information that gets processed for the final analysis, or to tag dubious or missing data values so they can be easily identified by downstream processes.

Interpolation

Missing, erroneous, unavailable, or non-contiguous data is a fact of life in most data sets. This is true regardless of the data collection process, whether collection is via sophisticated electronic sensors or by door-to-door pollsters. Unfortunately, many analysis algorithms require a uniform distribution of information and do not tolerate (or, at least, do not gracefully handle) missing or non-contiguous information.

In these cases, it is necessary for a computer algorithm to "fill in the blanks" with best guesses at the missing information. The myriad algorithms that perform these guesses fall under the category of interpolation routines. Rather than patch the holes in a data set with random numbers, interpolation routines examine surrounding information in the data set and compute a best estimate based on trends in that information.

The simplest form of an interpolation routine merely takes the average or the mean of the surrounding data. Consider a large n x m matrix of numbers containing missing data. By passing a 3 x 3 window over the data, missing data values can be interpolated by averaging the remaining values in the window. As an example, consider the window:

3   4  -1
6   X   2
5   3   1
By averaging the eight remaining values in the window, a value for X can be estimated as (3 + 4 - 1 + 6 + 2 + 5 + 3 + 1) / 8 = 23 / 8, or approximately 2.9.
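In code, such a window average might look like the following C sketch. This is hypothetical illustration code; it assumes missing entries are flagged with a sentinel value:

#define MISSING -9999.0f

/* Estimate a missing value at (i, j) in an ny-by-nx matrix by
 * averaging the valid neighbors in the surrounding 3 x 3 window. */
float window_average(const float *m, int nx, int ny, int i, int j)
{
    float sum = 0.0f;
    int count = 0;
    for (int dj = -1; dj <= 1; dj++) {
        for (int di = -1; di <= 1; di++) {
            int x = i + di, y = j + dj;
            if (x < 0 || x >= nx || y < 0 || y >= ny) continue;
            if (di == 0 && dj == 0) continue;        /* skip X itself */
            if (m[y * nx + x] == MISSING) continue;  /* skip other holes */
            sum += m[y * nx + x];
            count++;
        }
    }
    return count ? sum / count : MISSING;
}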

More sophisticated examples of interpolation methods can be drawn from the field of numerical analysis, such as filling in points of a curve with a piecewise polynomial approximation, a method known as spline interpolation. The term spline comes from a construction method originally used to make canoes. A piece of wet, flexible wood, called a spline, was bent between two fixed points. The resulting smooth curve became the shape of the canoe body.

The mathematical equivalent of the wooden spline is essentially the same idea: unreported points between two known points on a curve are fitted via a spline, whose shape has been determined based on the known behavior of the curve before and after the two points. Splines, the concept of which can be extended to three dimensions as well, can be approximated using a variety of mathematical formulae. The basic idea, however, is to derive a simple polynomial equation that fits the behavior of the known points. Values at missing locations can then be interpolated by plugging the desired locations into the polynomial.
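As one concrete instance of a piecewise polynomial scheme, the C fragment below evaluates a Catmull-Rom segment - a simple cubic whose shape is steered by the known samples on either side of the gap. This is illustrative code, not an AVS/Express routine:

/* Interpolate between known samples p1 and p2; p0 and p3 are the
 * neighboring samples that shape the curve "before and after" the
 * gap.  t runs from 0 (at p1) to 1 (at p2). */
float catmull_rom(float p0, float p1, float p2, float p3, float t)
{
    float t2 = t * t, t3 = t2 * t;
    return 0.5f * ((2.0f * p1)
                 + (-p0 + p2) * t
                 + (2.0f * p0 - 5.0f * p1 + 4.0f * p2 - p3) * t2
                 + (-p0 + 3.0f * p1 - 3.0f * p2 + p3) * t3);
}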

There are obvious problems with interpolation schemes: the interpolation method may not be the best fit for the behavior of the known data, boundary problems exist for most interpolation methods, and so on. However, the most severe danger of interpolation schemes comes not from the interpolation results, but from the mind of the researcher - it is deceptively easy for a researcher to forget to take into account the nature of interpolated data. In essence, the researcher runs the risk of using interpolated information as though it were real data.

Cropping and cutting

The easiest way to remove extraneous or unwanted data from a data set is to simply throw it away! Most commonly used in image processing, cropping and cutting have applications in other visualization fields as well.

Cropping refers to the technique of removing all of the information except the region of interest. In the image processing realm, the technique can be easily imagined: outline an area with a mouse, and everything outside the area is removed from the data set. The same principle can apply to, for example, a three-dimensional matrix of data: specify a region of interest (ROI) using eight (x,y,z) vertices to define a bounding cube, and remove all data from outside that ROI.

Cutting is the opposite: an ROI is again defined, but this time only the data outside the ROI is retained; the data inside the ROI is removed.
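Both operations reduce to the same point-in-ROI test, applied with opposite polarity. A minimal C sketch - hypothetical code, assuming points stored as packed (x, y, z) triples and an axis-aligned ROI box - follows:

/* Crop or cut a point set against an axis-aligned ROI box.
 * keep_inside = 1 crops (keeps the ROI); 0 cuts (removes it).
 * Kept points are compacted in place; returns how many remain. */
int roi_filter(float *xyz, int npoints, const float lo[3],
               const float hi[3], int keep_inside)
{
    int kept = 0;
    for (int n = 0; n < npoints; n++) {
        const float *p = &xyz[3 * n];
        int inside = p[0] >= lo[0] && p[0] <= hi[0]
                  && p[1] >= lo[1] && p[1] <= hi[1]
                  && p[2] >= lo[2] && p[2] <= hi[2];
        if (inside == keep_inside) {
            xyz[3 * kept]     = p[0];
            xyz[3 * kept + 1] = p[1];
            xyz[3 * kept + 2] = p[2];
            kept++;
        }
    }
    return kept;
}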

Data sampling and regridding

If the researcher is fortunate enough, he or she may be able to exercise a certain amount of control over the data collection method. If this is the case, thought should be given to the spatial location of the data collection points as well as the frequency of data collection at those points. Taken together, these factors are often referred to as data sampling.

By paying attention to the data sampling rate and the data sampling location up front, it is possible to avoid certain processing techniques after the data has been collected. The mathematics in many visualization techniques (such as both two- and three-dimensional contouring) require that information be placed at regular intervals in a rigid grid structure. Likewise, many techniques, such as certain frequency resolution filters, require that data be collected at regular intervals.

Unfortunately, many researchers either do not have the luxury of being able to control their data sampling locations and rates or the nature of the data itself makes regular data sampling difficult or impossible. For these situations, analysts often turn to a specialized form of interpolation called regridding: the interpolation of data to fit a regularized grid.

Several interpolation routines exist for the regridding of data. One of the more popular methods is a simple plane (3D) or line (2D) fitting routine. Using the data that does exist, a system of linear equations representing a plane (or line) that best fits the observed points is determined. Linear algebra techniques (Gauss-Jordan Elimination, for example) are then employed to reduce the resulting matrix to yield a solution for the linear equations. All that remains at this point is to iterate through a regular grid, plugging the grid coordinates into the reduced equation for the plane. The interpolated values at those grid locations are the end result.
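A minimal sketch of the 2D (line-fitting) case in C follows. For a line the normal equations reduce to two unknowns, so no general matrix elimination is needed; the function name and layout are illustrative, and at least two samples with distinct x values are assumed:

/* Fit z = a*x + b to scattered samples by least squares, then
 * evaluate the fit at ngrid regular positions starting at x0 with
 * spacing dx. */
void regrid_line_fit(const float *x, const float *z, int n,
                     float x0, float dx, float *grid, int ngrid)
{
    float sx = 0, sz = 0, sxx = 0, sxz = 0;
    for (int i = 0; i < n; i++) {
        sx  += x[i];        sz  += z[i];
        sxx += x[i] * x[i]; sxz += x[i] * z[i];
    }
    float a = (n * sxz - sx * sz) / (n * sxx - sx * sx);  /* slope */
    float b = (sz - a * sx) / n;                          /* intercept */

    /* Plug the regular grid coordinates into the reduced equation. */
    for (int g = 0; g < ngrid; g++)
        grid[g] = a * (x0 + g * dx) + b;
}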

Coordinate transformations

Although modern computers are quite adept at handling complex floating point math, it is advantageous to save these processing cycles wherever possible. Often, one of the easiest ways to save costly mathematical computation is to merely translate your data from one coordinate system to another. This process is called coordinate transformation.

Consider astronomical data collected with a sensor that returns its information in the Cartesian coordinate system as a collection of (x,y,z) data points with Earth at the center of the coordinate system. While this may have been the most efficient way for the sensor to return its information, the burden is now placed on the analyst's software to plot this data as information in a sun-centered, spherical coordinate system as (r, θ, φ) data points.

Conversions between these coordinate spaces employ quite a bit of trigonometry. The standard Cartesian-to-spherical mapping, for example, is:

r = sqrt(x^2 + y^2 + z^2)
θ = atan2(y, x)     (azimuth)
φ = arccos(z / r)   (polar angle)

Rather than requiring the analyst's visualization algorithm to perform this computation on every data point every time a new visualization operation is performed, the analyst can spend the CPU cycles up front and convert the data set from Cartesian to spherical coordinates prior to importing the data into the visualization system.
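A sketch of such a one-time conversion pass in C - illustrative code, assuming the radius/azimuth/polar convention given above - might look like this:

#include <math.h>

/* Convert one (x, y, z) sample to spherical (r, theta, phi) so the
 * visualization pipeline never has to repeat the trigonometry. */
void cartesian_to_spherical(const double xyz[3], double sph[3])
{
    double r = sqrt(xyz[0] * xyz[0] + xyz[1] * xyz[1] + xyz[2] * xyz[2]);
    sph[0] = r;                                   /* radius r      */
    sph[1] = atan2(xyz[1], xyz[0]);               /* azimuth theta */
    sph[2] = (r > 0.0) ? acos(xyz[2] / r) : 0.0;  /* polar phi     */
}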

Conclusion

Information visualization is more than just taking data into a complicated graphics package and rendering that information on the screen. It is often necessary to "coax" your data into revealing the hidden information within.

There is no single panacea among visualization or data preprocessing techniques. However, with time and experience, the proper use of the methods described here, as well as of the many more that you will learn and invent along the way, will become second nature.

5.6 Image Processing

A subfield of information visualization dealing with imagery is called image processing. In this usage, an "image" is any 2D data field obtained from sensors, from generated information, or from a two-dimensional cross-section (see Slices and cross-sections, earlier in this chapter) of a larger volume of data. Image processing refers to the manipulation of an image as data in order to extract as much information as possible.

The terms image and imagery are slightly misleading in this context, since they imply that the sensors are gathering information from the spectrum of visible light. While this is true in the case of data obtained from photography, visible-light satellite imagery, and so on, many sensors collect information from a variety of electromagnetic sources outside the range of human vision: radar, radio astronomy, electron microscopy, and so on.

However, because human senses remain rooted in that area of the electromagnetic spectrum we call visible light, the data from these images taken outside the visible spectrum are assigned color maps that allow the information to be viewed directly on a computer screen. Images displayed and stored in this way are referred to as false color or pseudocolor images, because the colors assigned to the data values are essentially taken from arbitrary color maps that are meaningful to the researcher in some way. Likewise, imagery taken from the visible spectrum can also have pseudocolors assigned to the data values. Often, information that may be hidden due to low contrast between two adjacent data values reveals itself in this manner.

Image processing, however, is not just tinkering with the color maps. It also involves manipulation of the image as a data set: cropping, rotation, additions and subtractions of two or more images, bitwise ands and ors of two or more images, and so on. In addition to these relatively simple mathematical tricks, the field of image processing includes far more advanced techniques, such as:

AVS/Express' Image Processing Kit includes modules and methods to perform all of these tasks and more. AVS/Express can read and write a variety of popular image formats, such as GIF, TIFF, and JPG, as well as AVS/Express' own image format.

The Image Processing Libraries

The IP image processing macros are located in these paths in the Network Editor:

IP macros with user interfaces
Format conversion macros
Base modules and definitions

V locations

The Image Processing Kit objects are defined in these V files:

IP macros with user interfaces.
v/ip.v
IP base modules and data object definitions.
v/ip_pkg.v

Using the Image Processing objects

The AVS/Express Image Processing Kit provides objects that perform common image processing functions.

The ip_Image and ip_Roi formats

To use the IP objects, data of any type must be in ip_Image format. ip_Image is defined in v/ip_pkg.v.

To use region of interest (ROI) objects with IP objects, data must be in ip_Roi format. ip_Roi is defined in v/ip_pkg.v.

Format conversion

Various macros are provided that convert data between AVS/Express Field format and ip_Image and ip_Roi formats:

Viewing images in a renderer

To view the output of one of the IP image processing macros, you have a choice between two dimensional and three dimensional rendering.

A two dimensional rendering, using Uviewer2D or the 2D port of Uviewer, means that the image is displayed as an X pixmap in a 2D camera. You can translate and scale the image, but you cannot rotate it in the (nonexistent) Z plane.

A three dimensional rendering, using Uviewer3D or the 3D port of Uviewer, means that the image is displayed as a mesh in a 3D camera. You can translate, scale, and rotate the image in X, Y, and Z. Note that rendering an image as a mesh is slower than rendering it in a 2D camera as an X pixmap.

Note: In order to see the image in the viewer, and before moving or scaling the image, you should initialize the image in the view by clicking the Reset, Normalize, Center button on the DataViewer tool bar.

Reset resets the input to its original position. Normalize makes the image fill the view. Center sets the center of transformations performed on the object to the center of its extents instead of its lower left hand corner.

All renderers can be found under Libraries.Main.Viewers.

5.7 Adding your own image readers and writers

This section outlines how to write your own image readers and writers to supplement the image formats already supported by the DVread_image/Read_Image and DVwrite_image/Write_Image objects.

AVS/Express provides Application Programming Interfaces (APIs) that allow you to more easily write your own image readers and writers for AVS/Express.

Image reader API

The image reader API provides five function calls:

Image writer API

The image writer API provides ten calls:

Implementing an image reader

To add an image reader you write a library consisting (minimally) of this set of functions and "register" the library in the modules/rd_image.c source code. The necessary steps are outlined in the following procedures:

1. Edit v/modules.v and add the name of your reader to the UIradioBoxLabel labels list. Its position in the list must correspond to that in the FUNCread_map array in modules/rd_image.c, since the integer value returned for the type will correspond to this order. See later in these procedures for further information.
2. Edit modules/image.h to add a #define DV_IMAGE_FORMAT_xxx entry for the new file format. Add function prototypes for the interface functions to the new image reader library, also in image.h. Note that the function arguments must correspond to the library API, that is, be identical to others in image.h. The API function names can be any (unique) name you choose.
3. Edit modules/rd_image.c to add the new format to the FUNCread_map list. This array is a list of entries of the form:
<DV_IMAGE_FORMAT_xxx>, <format info>
This is used to map the UI type value to an independent file format definition. Ensure your reader's position in the list corresponds with that in the UIradiobox list.
Add an xxx_info[] static char array of library and function names, and a funcs_t static struct for the new reader library. The function names, of course, correspond to those in modules/image.h. The library name in the xxx_info array can be anything you choose but must be unique. Ensure that the info and funcs lists are correctly added as the <format info> in the FUNCread_map array.
The xxx_info[] array is used if the reader library is to be dynamically loaded; the funcs_t struct is used if the reader library is linked into the AVS/Express application (either statically or as a shared library). For further explanation of this subject, see below.
4. Also in modules/rd_image.c, add a DV_IMAGE_FORMAT_xxx case to the format switch in DVread_image_update().
Add an else clause for the new filetype in FUNCget_image_filetype(). This is the function that determines the format from the file itself.
Add a DV_FILETYPE_xxx case to the type switch in FUNCget_image_filetype_name(). This function returns the ASCII name of a filetype given its integer value.
5. Write the library functions. These will likely consist of the set of API interface functions, plus some lower-level reader functions that access header information, colormap data (if present), and the raw image data from the file. These functions will also do any data decompression required.
You may look at the modules/image/libavsx library functions as an example of how to write the interface functions. This is a very simple example, but it does illustrate, in particular, how the API defines a void struct to reference data between the library and the caller. This is the mechanism by which the generic FUNCread_image() function in modules/rd_image.c could be written: the caller has no knowledge of the data structure internals being passed to and from the library.
6. Decide whether you want to build the reader library into the AVS/Express executable (either statically or dynamically) at link time, or dynamically load the library at run time. For most platforms, loading the library dynamically at run time is the default (see the appropriate include/<machine>/machinc.mk file to find out if this is the case for your platform).
Loading the library dynamically at run time has the benefit of occupying no physical memory when you run the application until a call is made into the library. This is particularly advantageous if the library is very large and never referenced. There are disadvantages, however:
To link the reader libraries into the executable, the NO_DL_LOAD compiler flag must be defined when compiling modules/rd_image.c. You can do this in two ways:
Note: Regardless of the method it will be necessary to edit the express.mk makefile to add the list of libraries normally referenced at run time to the express link line. To do this, search for "-lmods" in that file and add the list right after that library. You can find this list in v/templ.v. Search this file for "NO_DL_LOAD" - the list of libraries is right after that reference. Add your new library to this list.
To permanently bind the dynamic libraries at link time the best solution is to edit the include/<machine>/machinc.mk file and add NO_DL_LOAD to the list of CONFIGFLAGS. This will cause NO_DL_LOAD to be defined in the compiler flags list, and it will automatically add all the normally dynamically loaded libraries to the express link line. Note you will need to add your library to the v/templ.v NO_DL_LOAD library list for inclusion in express.mk.
There is currently no easy way to selectively add dynamic libraries to the list for inclusion at link time and reference others at run time, although this might be done by editing v/templ.v, the machinc.mk and rd_image.c files.

Implementing an image writer

To add an image writer you write a library consisting (minimally) of the set of API functions and "register" the library in the modules/wr_image.c source code. The necessary steps are outlined in the following procedures:

1. Edit v/modules.v and add a new UIoption to the format list following the format_label UIlabel. Add this option to the format_db UIradioBox cmdList. Its position in the list must correspond to that in the FUNCwrite_map array in modules/wr_image.c, since the integer value returned for the type will correspond to this order. See later in these procedures for further information. Update the active fields in the UIoptions for the new writer. This enables or disables the various radiobox buttons and should be set appropriately for your new writer. Add the new value in the correct place to the switched list of values that are set according to the selectedItem in the format_rb. This looks complicated but is actually straightforward once you understand the mechanism. This is the mechanism to use if you wish to add new parameters to the write_image module.
2. Edit modules/image.h and add a #define DV_IMAGE_FORMAT_xxx entry for the new file format. Add function prototypes for the interface functions to the new image writer library, also in image.h. Note that the function arguments must correspond to the library API, that is, be identical to others in image.h. The API function names can be any (unique) name you choose.
3. Edit modules/wr_image.c and add the new format to the FUNCwrite_map list. This array is a list of entries of the form:
<DV_IMAGE_FORMAT_xxx>, <format info>
This is used to map the UI type value to an independent file format definition. Ensure your writer's position in the list corresponds with that in the UIradiobox list.
Add an xxx_info[] static char array of library and function names, and a funcs_t static struct for the new library. The function names, of course, correspond to those in modules/image.h. The library name in the xxx_info array can be anything you choose but must be unique. Ensure that the info and funcs lists are correctly added as the <format info> in the FUNCwrite_map array.
The xxx_info[] array is used if the writer library is to be dynamically loaded; the funcs_t struct is used if the writer library is linked into the AVS/Express application (either statically or as a shared library). For further explanation of this subject, see below.
4. Also in modules/wr_image.c, add a DV_IMAGE_FORMAT_xxx case to the format switch in DVwrite_image_update().
5. Write the library functions. These will likely consist of the set of API interface functions, plus some lower-level writer functions that write header information, colormap data (if present) and the encoded image data from the AVS field. These functions will also do any data compression required.
You can look at the modules/image/libavsx library functions as an example of how to write the interface functions. It is a very simple example, but it illustrates, in particular, how the API uses an opaque (void *) structure to reference data between the library and the caller. This is the mechanism that allows the generic FUNCwrite_image() function in modules/wr_image.c to be written without any knowledge of the internals of the data structure being passed to and from the library. (An illustrative sketch of these pieces appears after these procedures.)
6. Decide whether you want to build the writer library into the AVS/Express executable (either statically or dynamically) at link time, or dynamically load the library at run time. For most platforms, loading the library dynamically at run time is the default (see the appropriate include/<machine>/machinc.mk file to find out if this is the case for your platform).
Loading the library dynamically at run time has the benefit that the library occupies no physical memory while the application runs until a call is made into it. This is particularly advantageous if the library is very large and never referenced. There are disadvantages, however:
To link the writer libraries into the executable, the NO_DL_LOAD compiler flag must be defined when compiling modules/wr_image.c. This can be achieved in two ways:
Note: Regardless of the method, you must edit the express.mk makefile to add the list of libraries normally referenced at run time to the express link line. To do this, search for "-lmods" in that file and add the list immediately after that library. The list can be found in v/templ.v: search that file for "NO_DL_LOAD"; the list of libraries appears right after that reference. Add your new library to this list.
To permanently bind the dynamic libraries at link time, the best solution is to edit the include/<machine>/machinc.mk file and add NO_DL_LOAD to the list of CONFIGFLAGS. This causes NO_DL_LOAD to be defined in the compiler flags list and automatically adds all the normally dynamically loaded libraries to the express link line. Note that you still need to add your library to the NO_DL_LOAD library list in v/templ.v so that it is included in express.mk.
There is currently no easy way to add some dynamic libraries to the list for inclusion at link time while referencing others at run time, although this might be done by editing the v/templ.v, machinc.mk, and wr_image.c files.
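To make the registration pieces concrete, here is a minimal, hypothetical C sketch of what steps 2, 3, and 5 produce for an imaginary "xpng" writer. The function names, argument lists, and the funcs_t layout shown here are illustrative assumptions only; the real prototypes and struct definition are fixed by modules/image.h and modules/wr_image.c and should be copied from an existing writer such as modules/image/libavsx.

/* Hypothetical sketch only: all names and signatures below are
 * illustrative assumptions, not the actual AVS/Express API. */

/* API interface functions for an imaginary "xpng" writer library.
 * An opaque (void *) handle carries state between the generic caller
 * and the library, so the caller needs no knowledge of its internals. */
void *xpngWriteOpen(char *filename);                       /* open file, allocate handle */
int   xpngWriteHeader(void *handle, int w, int h, int nc); /* write the format header    */
int   xpngWriteData(void *handle, unsigned char *pixels);  /* encode and write the image */
int   xpngWriteClose(void *handle);                        /* flush and free the handle  */

/* Stand-in for the funcs_t struct; the real definition lives in the
 * AVS/Express sources. */
typedef struct {
    void *(*open)(char *);
    int   (*header)(void *, int, int, int);
    int   (*data)(void *, unsigned char *);
    int   (*close)(void *);
} funcs_t;

/* xxx_info[]: library name (must be unique) plus the function names,
 * used when the library is dynamically loaded at run time. */
static char *xpng_info[] = {
    "libxpng",
    "xpngWriteOpen", "xpngWriteHeader", "xpngWriteData", "xpngWriteClose",
    0
};

/* funcs_t struct: used when the library is linked into the executable,
 * that is, when NO_DL_LOAD is defined. */
static funcs_t xpng_funcs = {
    xpngWriteOpen, xpngWriteHeader, xpngWriteData, xpngWriteClose
};

/* The corresponding FUNCwrite_map entry would then pair the new
 * DV_IMAGE_FORMAT_xxx define with this format info, for example:
 *     { DV_IMAGE_FORMAT_XPNG, ... xpng_info ... xpng_funcs ... }
 * with the exact entry shape copied from the existing entries. */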

XIL on SUNOS5

If you are running SunOS5, the following image processing functions use Sun's X Image Processing Library (XIL):

Note that XIL supports only the byte and short data types; the IP library functions process the other data types.

Regions of interest (ROIs) are not currently accelerated; therefore, the IP library function is used whenever there are any regions of interest.

You can define the following environment variables for control and timing (note that their values are ignored):

5.8 Geographic Information System (GIS) Components

This section introduces the Geographic Information System (GIS) functionality provided by AVS/Express.

For detailed information, see the online documentation for the GIS objects.

Geographic Information Systems (GIS) is a branch of cartography and mathematics dealing with computer-generated mapping and coordinate transformation systems. GIS can be readily applied to fields as diverse as geology, oceanography, demographics, planetary science, statistics, and finance.

The basis of GIS is to provide a common coordinate system for representing spatial data in a real-world or planetary model. Information tends to be represented as a latitude and longitude coordinate, with an optional altitude component. Typical GIS applications render information that has a natural spatial component, such as oil and mineral detection or air traffic routing, into a world model that already contains information in a geo- or planetary-centric view, such as a map of Colorado.

AVS/Express provides most of the components necessary for importing and manipulating information in these real-world models. Modules are provided for converting from (x,y,z) coordinate space to (latitude, longitude, altitude) coordinate space, as well as modules for transforming those coordinates from one cartographic representation to another (say, from Mercator map projections to Albers map projections).

Data can be passed into these modules in ways that are natural for AVS/Express users and developers: via existing data fields. Point, polyline, or mesh data fields can easily be passed into one of the GIS transformation modules, and the resulting data field can be passed on to other, existing AVS/Express modules.

Note: When importing (x,y,z) data into the GIS modules and macros, remember the following rule: when cartographers translate data to a Cartesian plane, the meridian (longitude) values map to the x-axis and the parallel (latitude) values map to the y-axis. Therefore, all GIS components expect (x,y,z) data to be ordered as (longitude, latitude, altitude). If you do not see the expected results while using the GIS components, this is the first item to check.
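As a concrete illustration of the ordering rule (plain C, not a specific AVS/Express API), the following fragment packs a sample point into a coordinate triple; the sample coordinates are assumptions chosen for the example:

#include <stdio.h>

int main(void)
{
    /* Illustration only: the GIS components expect (x,y,z) ordered as
     * (longitude, latitude, altitude).  The sample point is arbitrary. */
    double lon = -104.9903, lat = 39.7392, alt = 1609.0;
    double coords[3];

    coords[0] = lon;   /* x = longitude (meridian) */
    coords[1] = lat;   /* y = latitude (parallel)  */
    coords[2] = alt;   /* z = altitude (optional)  */

    printf("(x, y, z) = (%g, %g, %g)\n", coords[0], coords[1], coords[2]);
    return 0;
}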

AVS/Express also provides data readers for two widely used GIS data sets:

To learn more about Geographic Information Systems, there are two excellent references from the U.S. Geological Survey. These references were used extensively in the creation of the AVS/Express GIS modules:

"Map Projections Used by the USGS," Geologic Survey Bulletin #1532, John P. Synder, U.S. Government Printing Office.
"Map Projections: A Working Manual," US Geologic Survey Professional Paper #1395, John P. Synder, U.S. Government Printing Office

5.9 Accessing Data on the World Wide Web

IMPORTANT: The macros and the web-related functionality that is discussed in this section are not supported on the IBM or the Digital platforms.

The advent of the World Wide Web (WWW) has resulted in a common method of data sharing: information is placed on a remote web server and the address, or uniform resource locator (URL), of that data is made public.

Although accessing data sets over the web is convenient, it typically requires a reader on the client side to understand the information as it comes over the wire.

IMPORTANT: Without a reader on the client side, you would have to download the data sets into a directory on the local machine prior to use; however, the modules that are introduced in this section do not support "reading" from a local machine.

The following modules make it possible to read AVS/Express data sets that are published on the WWW directly into an existing AVS/Express network:

Retrieves and caches the contents located at a specific URL.
A user interface wrapper for the W3Cget_URL module
An application module that reads a web-based AVS/Express field.
An application module that reads a web-based AVS/Express geometry.
A sample module showing the capabilities of W3Cget_URL and ReadWebField.

These modules and this web-related functionality allow for convenient data sharing as well as access to potentially dynamic information using the web as a medium of transport. For details on these modules, see the online documentation for these objects.

Note: The URL reading modules in this version of AVS/Express rely heavily on an AVS-modified version of the WWWLibrary from the World Wide Web Consortium (W3C). The W3C, formed in 1994, is an international consortium of industry and academic institutions devoted to developing common standards for the web. For more information on the W3C, see http://www.w3.org/pub/WWW/.

Virtual Reality Modeling Language (VRML) Output

The OutputVRML macro contains a VRML renderer: a stream device that captures the view and writes graphics primitives to a VRML-formatted file.

For details on the OutputVRML macro, see the online documentation for the object.

VRML is a network-transparent protocol for communicating 3D graphics over the World Wide Web. VRML files are bound with an identifying MIME type and transported using standard HTTP services on a web server. An HTML web browser processes VRML files with special-purpose 3D viewers, called VRML browsers. A VRML browser may be supplied with your HTML browser, but it is usually downloaded separately, perhaps from a third-party vendor. The VRML browser can be either a stand-alone browser or a plug-in component that can be embedded in frames or within HTML pages displayed in the main HTML browser.

The AVS/Express VRML renderer supports two versions of the VRML standard.

Anchors

The OutputVRML module can create anchors in the output scene. An anchor is the head of a hyperlink within a web document. In HTML the anchor is implemented using the following syntax (with an optional TITLE attribute):

<A HREF="url" TITLE="label"> anchor text </A>

When the pointer is placed over the anchor text in the browser, the label, or destination URL, may be displayed in the status bar. If the anchor text is selected by clicking with a mouse button, the hyperlink is activated and the destination URL becomes the current document in the browser.

A similar hyperlink paradigm is supported in VRML. There is a 3D anchor object which has URL and label attributes. When the pointer is placed over the 3D object within the VRML scene, the label, or destination URL, may be displayed in the browser. When the 3D anchor object is selected by clicking a mouse button, the destination URL becomes the current target of the browser. The destination URL can be an HTML text document or another 3D VRML world.

The AVS/Express GDobject has two string sub-objects that support the creation of anchors in VRML output. The new sub-objects are:

The string specifies the destination URL for the hyperlink, for example, "http://www.avs.com".
The string specifies the hyperlink label, for example, "AVS Home Page".

When the VRML renderer encounters a GDobject with a non-empty WWW_url value, it creates the appropriate anchor syntax for the object's geometry. That object becomes the anchor for a hyperlink in the VRML output.

Note: VRML 2 has optional support for a frame target attribute, which allows VRML anchors to operate in a framed web environment. This feature is not supported in this AVS/Express release.

Here are some examples of AVS/Express visualizations using anchors:

Example 1

An orthoslice through a uniform field is written as a JPEG image. A 3D scene containing the orthoslice mesh is written in VRML with the slice object hyperlinked to the image. Selecting the 3D slice in the VRML browser loads the 2D slice image into the browser.

Example 2

Two versions of the orthoslice scene are created using a downsize module, one at high resolution and one at low resolution. The two scenes are written to separate VRML files. The slice object in each scene is the anchor for a hyperlink to the other scene. The slices are given labels "high-resolution version" and "low-resolution version". In this scenario, the low-resolution scene is used as a 3D thumbnail for the more detailed visualization. In other words, clicking on the low-resolution slice displays the higher quality rendering.

Example 3

The orthoslice is animated through the dataset using the Loop module. A sequence of n VRML scenes is written using dynamic mode in the OutputVRML module. The output filenames are generated by concatenating the loop counter to a fixed filename. Each scene also contains a user interface to control the animation, with reset/next/previous actions that have the appropriate URLs set in their WWW_url attributes. For example, a non-cyclic algorithm might be:
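A hedged C sketch of one such scheme follows; the base filename "slice" and the clamping behavior are assumptions for illustration, not the OutputVRML implementation:

#include <stdio.h>

/* Sketch of a non-cyclic reset/next/previous URL scheme for scene i of n.
 * Filenames are formed by concatenating the loop counter to a fixed base
 * name, as described above; "slice" is an assumed base name. */
void anchor_urls(int i, int n, char *reset, char *next, char *prev)
{
    sprintf(reset, "slice0.wrl");                          /* back to the first frame */
    sprintf(next,  "slice%d.wrl", i < n - 1 ? i + 1 : i);  /* clamp at the last frame */
    sprintf(prev,  "slice%d.wrl", i > 0 ? i - 1 : i);      /* clamp at frame zero     */
}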

Node Names

Node names are added to VRML 2 output using the DEF syntax. A named node can be queried from a Java application, and its attributes can be modified. A name is added to the transform group node containing the primitives, and also to the appearance node, which contains the material and texture nodes.

The DEF name is derived from the name of the corresponding AVS/Express data object, but underscores ("_") are added to generate a legal VRML name. The rules for adding the underscore are:

For example, when the Read Geoms module loads the math.geo geometry, a data object called math.obj1 is created in the AVS/Express viewer. The OutputVRML module converts this name to math_obj1, because the period character (".") is illegal in VRML 2 names. The appearance node gets the same name, but with an _app suffix; for example, the math.obj1 appearance node is called math_obj1_app.
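The following C sketch shows one way to perform this substitution; treating every non-alphanumeric character (other than "_") as illegal is a simplifying assumption, since the full set of rules is given by the VRML 2 grammar:

#include <ctype.h>
#include <stdio.h>

/* Replace characters that are illegal in VRML 2 DEF names with "_" and
 * derive the matching appearance-node name with an "_app" suffix. */
void vrml_def_name(const char *objname, char *defname, char *appname, size_t len)
{
    size_t i;

    for (i = 0; objname[i] != '\0' && i + 1 < len; i++) {
        int c = (unsigned char)objname[i];
        defname[i] = (isalnum(c) || c == '_') ? (char)c : '_';
    }
    defname[i] = '\0';
    snprintf(appname, len, "%s_app", defname);   /* e.g. math_obj1_app */
}

/* vrml_def_name("math.obj1", ...) yields "math_obj1" and "math_obj1_app". */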

Multiple primitive groups can correspond to a single GDobject name. If these are derived from additional render modes, then the non-default groups are given an additional suffix as described below:

For example, if the math surface has lines mode Regular, then the outline object in VRML 2 output will be called math_obj1_outline, and the appearance node for the lines will be called math_obj1_outline_app.

There can still be multiple occurrences of a DEF node name, either multiple primitive groups per object, or multiple primitive appearances per group. Any Java application manipulating these nodes should make sure it loops over repeated names.

World Wide Language Support

The VRML 2 standard supports a UTF8 encoding of the Unicode character set. This encoding leaves ASCII values unchanged, so ASCII text requires only one byte per character. Characters from other languages, including the ISO8859 family of 8-bit European character sets, require two or more bytes per character. See http://vag.vrml.org/VRML2.0/FINAL/spec/part1/nodesRef.html#Text.
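As a generic illustration of this property (ordinary C, nothing AVS/Express-specific), the following function encodes a single Unicode code point as UTF-8; ASCII values pass through as one byte, while ISO8859-range characters take two:

/* Encode Unicode code point cp as UTF-8; returns the byte count.
 * ASCII (< 0x80) is unchanged and takes one byte; ISO8859-range
 * characters take two; most other BMP characters take three. */
int utf8_encode(unsigned long cp, unsigned char *out)
{
    if (cp < 0x80) {
        out[0] = (unsigned char)cp;
        return 1;
    } else if (cp < 0x800) {
        out[0] = (unsigned char)(0xC0 | (cp >> 6));
        out[1] = (unsigned char)(0x80 | (cp & 0x3F));
        return 2;
    } else if (cp < 0x10000) {
        out[0] = (unsigned char)(0xE0 | (cp >> 12));
        out[1] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
        out[2] = (unsigned char)(0x80 | (cp & 0x3F));
        return 3;
    }
    return 0;   /* code points beyond the BMP are omitted in this sketch */
}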

Several components of VRML 2 output can be internationalized. These components include:

Stroke text is converted to line primitives by the AVS/Express Graphics Display Kit, so it cannot be internationalized in AVS/Express viewers or in VRML output.

The scene title and viewpoint name are translated using the standard AVS/Express dictionary mechanism; in the default C locale, the English names are used. The keywords VRML_TITLE and VRML_VIEWPOINT are translated using the vrml.dct dictionary file under the current project.

For example, the full filename on a UNIX system would be:

<project>/runtime/nls/<locale>/vrml.dct

where:

project is the project directory

locale is the AVS/Express locale name for the current session

Node names are derived from the AVS/Express GDobject names as they appear in the AVS/Express viewers. Annotation text labels are standard text objects from AVS/Express. Both the object names and the text labels are AVS/Express string objects that can hold internationalized text in the usual way.

AVS/Express Features Supported by OutputVRML

These AVS/Express features are supported by the OutputVRML macro:

AVS/Express Features Not Supported in VRML

The VRML standards provide different levels of support for capturing the contents of complex AVS/Express scenes, so the following AVS/Express features are not supported in VRML 1 or VRML 2:

The AVS/Express Graphics Display Kit has some restrictions, so the following feature is not supported in this release:

The implementation of OutputVRML has some restrictions, so the following AVS/Express features are not supported in this release:

The following features are not supported in the VRML 1 standard:

Orthographic camera projections are not supported in the VRML 2 standard.

VRML 1 Browser Requirements

The list below outlines the generic features that are required for VRML 1 browsers:

VRML 2 Browser Requirements

The OutputVRML module does not use PROTO, EXTERNPROTO, ROUTE or the Script node in VRML 2. The list below outlines the generic features that are required for VRML 2 browsers:

