Fatih Erol, Stefan Eilemann, Renato Pajarola, Cross-segment load balancing in parallel rendering, In: Eurographics Symposium on Parallel Graphics and Visualization, Eurographics, 2011-04-10. (Conference or Workshop Paper published in Proceedings)
Faster graphics hardware makes it possible to realize ever more complex applications that require more detailed data and deliver better presentation. Processors are continually challenged with larger amounts of data and higher-resolution outputs, motivating further research in the parallel/distributed rendering domain. Optimizing resource usage to improve throughput is one important topic, which we address in this article for multi-display applications using the Equalizer parallel rendering framework. This paper introduces and analyzes cross-segment load balancing, which efficiently assigns all available shared graphics resources to all display output segments with dynamic task partitioning to improve performance in parallel rendering. |
|
Philipp Schlegel, Automatic transfer function generation and extinction- based approaches in direct volume visualization, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2011. (Dissertation)
Direct volume visualization has become an important tool in many domains for visualizing and examining volumetric datasets. The tremendous increase in the computing power of the hardware over the past years makes it possible to immediately visualize volumetric datasets obtained from scanning devices at fully interactive frame rates. However, despite this change of paradigm compared to the slow offline methods of the past, direct volume visualization suffers from disadvantages constricting an immediate, reliable analysis of volumetric datasets. This thesis begins with an overview of different methods for direct volume visualization, followed by an in-depth review of the theoretical foundation including inherent challenges. Subsequently, selected state-of-the-art techniques used in this thesis are explained in detail. One challenge that all techniques have in common is the dependency on good transfer functions. Only good transfer functions allow for the right insight into the dataset, permitting a reliable analysis. These transfer functions are often constructed manually in a time-consuming and cumbersome trial-and-error process. We propose an automated general-purpose approach for generating a set of best transfer functions based on information theory. Our algorithm appraises the information content of the images generated by a particular transfer function when rotating the dataset, as is the case in interactive sessions. Quantifying the quality of a transfer function in this way enables a directed search for the set of best transfer functions in a feedback loop employing a combination of two different optimization algorithms. This set of best, distinct transfer functions helps the user to gain an immediate overview of each facet of a dataset. When visualizing volumetric datasets, it is of major importance that domain experts are able to recognize small features, to distinguish the relationship and connectivity between them, and to get the right perception. For this, the applied illumination and shading model plays an important part. Sophisticated models including realistic-looking directional shadows, ambient occlusion and color bleeding effects can greatly enhance the perception. Unfortunately, common models exhibiting these effects are expensive to compute and not suitable for interactive applications. We present a method showing how these effects can be applied to GPU volume ray-casting while fully maintaining interactivity, based on the original, exponential extinction coefficient of the volume rendering integral. Exploiting the fact that the original, exponential extinction coefficient is summable, our framework is built on top of a 3D summed area table that allows for quick lookups of extinction queries. Technically, volumetric datasets consist of discrete scalar or sometimes vector data. As the resolution of this data hardly ever fits the resolution of the output device, the data needs to be interpolated or reconstructed. Volume visualization methods based on 3D textures can profit from the fast built-in trilinear interpolation of the hardware. However, trilinear interpolation is not the first choice when it comes to image quality. Volume splatting, on the other hand, is a volume visualization technique that makes it easy to integrate arbitrary interpolation schemes. The performance of volume splatting is directly related to the applied interpolation scheme and the resulting interpolation kernel.
In this thesis we introduce an algorithm for volume splatting that greatly enhances the performance by reducing the required number of splatting operations for interpolation kernel slices. Further, we show how the image quality of volume visualization can be enhanced by using the original, exponential extinction coefficient of the volume rendering integral instead of common alpha-blending simplifications. |
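The 3D summed area table underlying the extinction lookups can be sketched as follows. This is an illustrative, simplified CPU version with an assumed dense row-major voxel layout, not the thesis implementation: each cell stores the inclusive prefix sum of the extinction values, so the summed extinction over any axis-aligned box follows from eight lookups by inclusion-exclusion.

    // Sketch: 3D summed-area table (SAT) over per-voxel extinction values.
    // Simplified CPU version with an assumed dense row-major layout (x fastest).
    #include <vector>
    #include <cstddef>

    struct Sat3D {
        size_t nx, ny, nz;
        std::vector<double> s;   // inclusive prefix sums

        double& at(size_t x, size_t y, size_t z) { return s[(z * ny + y) * nx + x]; }
        double  get(long x, long y, long z) const {
            if (x < 0 || y < 0 || z < 0) return 0.0;          // outside the volume
            return s[((size_t)z * ny + (size_t)y) * nx + (size_t)x];
        }

        // Build the SAT from a dense extinction volume of the same dimensions.
        Sat3D(const std::vector<double>& tau, size_t nx_, size_t ny_, size_t nz_)
            : nx(nx_), ny(ny_), nz(nz_), s(nx_ * ny_ * nz_, 0.0) {
            for (size_t z = 0; z < nz; ++z)
                for (size_t y = 0; y < ny; ++y)
                    for (size_t x = 0; x < nx; ++x) {
                        long xi = (long)x, yi = (long)y, zi = (long)z;
                        at(x, y, z) = tau[(z * ny + y) * nx + x]
                            + get(xi - 1, yi, zi) + get(xi, yi - 1, zi) + get(xi, yi, zi - 1)
                            - get(xi - 1, yi - 1, zi) - get(xi - 1, yi, zi - 1) - get(xi, yi - 1, zi - 1)
                            + get(xi - 1, yi - 1, zi - 1);
                    }
        }

        // Summed extinction over the inclusive box [x0,x1] x [y0,y1] x [z0,z1].
        double boxSum(long x0, long y0, long z0, long x1, long y1, long z1) const {
            return get(x1, y1, z1)
                 - get(x0 - 1, y1, z1) - get(x1, y0 - 1, z1) - get(x1, y1, z0 - 1)
                 + get(x0 - 1, y0 - 1, z1) + get(x0 - 1, y1, z0 - 1) + get(x1, y0 - 1, z0 - 1)
                 - get(x0 - 1, y0 - 1, z0 - 1);
        }
    };

A per-box average extinction, as needed for shadow or ambient occlusion estimates, is then simply the box sum divided by the number of voxels in the box.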
|
Daniel Pfeifer, Parallel out-of-core rendering of massive meshes, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2011. (Master's Thesis)
Advances in computer-aided design technologies and 3D scanning lead to geometric
models of ever increasing complexity. Today's massive models not only exceed the
capabilities for interactive rendering on high-end graphics hardware, they also
exceed the amount of main memory available on a common PC. Out-of-core
algorithms are a common approach to keeping only a subset of the data in main
memory. In addition, parallel rendering can be used to distribute rendering
tasks to a number of rendering nodes, where each node only loads a subset of the
complete model. Loading only a subset of the model, however, is not easily
achievable with current standard polygonal mesh file formats. This thesis presents
a new, flexible file format for massive meshes along with a set of reusable
programming libraries and utilities to convert, modify, and visualize files in this format.
|
|
Susanne Suter, José A Iglesias Guitian, F Marton, M Agus, Andreas Elsener, C P E Zollikofer, M Gopi, E Gobbetti, Renato Pajarola, Interactive multiscale tensor reconstruction for multiresolution volume visualization, IEEE Transactions on Visualization and Computer Graphics, Vol. 17 (12), 2011. (Journal Article)
Large scale and structurally complex volume datasets from high-resolution 3D imaging devices or computational simulations pose a number of technical challenges for interactive visual analysis. In this paper, we present the first integration of a multiscale volume representation based on tensor approximation within a GPU-accelerated out-of-core multiresolution rendering framework. Specific contributions include (a) a hierarchical brick-tensor decomposition approach for pre-processing large volume data, (b) a GPU-accelerated tensor reconstruction implementation exploiting CUDA capabilities, and (c) an effective tensor-specific quantization strategy for reducing data transfer bandwidth and out-of-core memory footprint. Our multiscale representation allows for the extraction, analysis and display of structural features at variable spatial scales, while adaptive level-of-detail rendering methods make it possible to interactively explore large datasets within a constrained memory footprint. The quality and performance of our prototype system are evaluated on large structurally complex datasets, including gigabyte-sized micro-tomographic volumes. |
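The reconstruction step at the heart of such a tensor approximation can be sketched as a plain CPU version of a rank-(R,R,R) Tucker reconstruction of a single brick; names, the row-major layout and the use of equal ranks are assumptions for illustration, and the paper performs this step per brick in CUDA:

    // Sketch: reconstruct one volume brick from a rank-(R,R,R) Tucker approximation,
    // A(i,j,k) = sum over r1,r2,r3 of core(r1,r2,r3) * U1(i,r1) * U2(j,r2) * U3(k,r3).
    // Illustrative CPU version; dense row-major storage assumed.
    #include <vector>
    #include <cstddef>

    std::vector<float> reconstructBrick(const std::vector<float>& core,  // R*R*R core tensor
                                        const std::vector<float>& U1,    // I x R factor matrix
                                        const std::vector<float>& U2,    // J x R factor matrix
                                        const std::vector<float>& U3,    // K x R factor matrix
                                        size_t I, size_t J, size_t K, size_t R) {
        std::vector<float> A(I * J * K, 0.0f);
        for (size_t i = 0; i < I; ++i)
            for (size_t j = 0; j < J; ++j)
                for (size_t k = 0; k < K; ++k) {
                    float v = 0.0f;
                    for (size_t r1 = 0; r1 < R; ++r1)
                        for (size_t r2 = 0; r2 < R; ++r2)
                            for (size_t r3 = 0; r3 < R; ++r3)
                                v += core[(r1 * R + r2) * R + r3]
                                   * U1[i * R + r1] * U2[j * R + r2] * U3[k * R + r3];
                    A[(i * J + j) * K + k] = v;
                }
        return A;
    }

Since only the small core tensor and three thin factor matrices need to be stored and transferred per brick, the data reduction grows with the ratio of brick size to rank.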
|
Christian Venzin, Constraint simplification of vector data on 3D terrain surfaces, 2011. (Other Publication)
The efficient and high-quality rendering of large vector data sets onto multiresolution 3D landscapes poses a significant problem in three-dimensional (3D) geographic information systems. Advances in remote sensing technology have led to very complex and high-resolution digital elevation models (DEM), which can only be rendered in real time with elaborate level-of-detail (LOD) models, making the combination of 2D vector data and 3D terrain surfaces very challenging. In this work the problems, constraints, requirements and solutions of combining variable level-of-detail DEM triangulations with adaptive vector maps are studied in the context of interactive 3D geovisualization. The survey shows that the geometry- and texture-based approaches to rendering vector data onto 3D landscapes have their respective issues; therefore a solution based on the shadow volume algorithm, which overcomes the limitations of the other approaches, is provided and discussed in more detail. This approach allows a per-pixel exact mapping and an easy integration of the vector data into a variable-LOD-based interactive visualization, rendering the vector data independently of the underlying 3D terrain model.
|
|
Fabian Schneider, Automatic generalization and simplification of massive vector and network maps, 2011. (Other Publication)
Vector maps and network graph data are widely used in visualization applications and GIS-based decision making processes. In order to meet the rising demand for powerful 3D visualizations of massive vector maps, level-of-detail (LOD) models have to be developed and applied. Accurate and efficient vector map generalization is needed to generate the different LODs. Among all generalization operators, line simplification is the one that is most investigated and used. The Douglas-Peucker line simplification algorithm delivers visually pleasing results and preserves the shape of the original line, but introduces topological inconsistencies. The introduction of Epsilon-Voronoi diagrams solves the problem of intersecting polylines, but cannot avoid self-intersections. Therefore, the polylines have to be split into monotone subpolylines. The use of frame buffers and out-of-core systems enables fast interactive visualization. Even the integration of large vector maps into 3D terrain visualization is possible by using the shadow volume approach and texture-based mapping. However, the on-the-fly generation of continuous LODs remains an open problem. |
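For reference, the Douglas-Peucker simplification referred to above keeps the intermediate point farthest from the current chord whenever its distance exceeds a tolerance and recurses on both halves. A minimal 2D sketch follows; it is illustrative only and omits the topology safeguards discussed in the text:

    // Minimal recursive Douglas-Peucker line simplification in 2D (illustrative sketch).
    // Keeps the endpoints and, whenever the farthest intermediate point deviates from
    // the chord by more than 'eps', keeps that point and recurses on both halves.
    #include <vector>
    #include <cmath>
    #include <algorithm>
    #include <cstddef>

    struct Pt { double x, y; };

    static double distToSegment(const Pt& p, const Pt& a, const Pt& b) {
        double dx = b.x - a.x, dy = b.y - a.y;
        double len2 = dx * dx + dy * dy;
        if (len2 == 0.0) return std::hypot(p.x - a.x, p.y - a.y);
        double t = ((p.x - a.x) * dx + (p.y - a.y) * dy) / len2;
        t = std::max(0.0, std::min(1.0, t));
        return std::hypot(p.x - (a.x + t * dx), p.y - (a.y + t * dy));
    }

    static void dpRec(const std::vector<Pt>& pts, size_t lo, size_t hi, double eps,
                      std::vector<bool>& keep) {
        double dmax = 0.0; size_t idx = lo;
        for (size_t i = lo + 1; i < hi; ++i) {
            double d = distToSegment(pts[i], pts[lo], pts[hi]);
            if (d > dmax) { dmax = d; idx = i; }
        }
        if (dmax > eps) {                       // significant deviation: keep and recurse
            keep[idx] = true;
            dpRec(pts, lo, idx, eps, keep);
            dpRec(pts, idx, hi, eps, keep);
        }
    }

    std::vector<Pt> simplify(const std::vector<Pt>& pts, double eps) {
        if (pts.size() < 3) return pts;
        std::vector<bool> keep(pts.size(), false);
        keep.front() = keep.back() = true;
        dpRec(pts, 0, pts.size() - 1, eps, keep);
        std::vector<Pt> out;
        for (size_t i = 0; i < pts.size(); ++i) if (keep[i]) out.push_back(pts[i]);
        return out;
    }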
|
C Papageorgopoulou, Susanne Suter, Frank J Rühli, F Siegmund, Harris lines revisited: prevalence, comorbidities, and possible etiologies, American Journal of Human Biology, Vol. 23 (3), 2011. (Journal Article)
Objectives: The occurrence of transverse radiopaque lines in long bones (Harris lines, HLs) is correlated with episodes of temporary arrest of longitudinal growth and has been used as an indicator of health and nutritional status of modern and historical populations. However, the interpretation of HLs as a stress indicator remains debatable. The aim of this article is to evaluate the perspectives and limitations of HL analyses and to examine their reliability as a stress indicator. Methods: The study was conducted on 241 tibiae from medieval Swiss skeletal material and was carried out using a standardized, semiautomated HL detection and analysis tool developed by the authors. We compared four different age-at-formation estimation methods and analyzed the correlation of HL occurrence to life expectancy, mean age at death, stature, tibia length, and metabolic disorders as expressed by linear enamel hypoplasia and hypothyroidism. Results: The evaluation of the age-at-formation estimation methods showed statistically significant differences. Therefore, a mathematical framework for the conversion between the methods has been developed. Remodeling had eliminated about half of the HLs formed during adolescence, and a further half of the remaining ones during early adulthood, whereas no association between the aforementioned conditions and HL prevalence could be determined. The peaks of high HL frequency among various populations were found to parallel normal growth spurts and growth hormone secretion. Conclusions: We suggest reconsidering HLs as more of a result of normal growth and growth spurts rather than a pure outcome of nutritional or pathologic stress. |
|
Philipp Schlegel, Maxim Makhinya, Renato Pajarola, Extinction-based shading and illumination in GPU volume ray-casting, IEEE Transactions on Visualization and Computer Graphics, Vol. 17 (12), 2011. (Journal Article)
Direct volume rendering has become a popular method for visualizing volumetric datasets. Even though computers are continually getting faster, it remains a challenge to incorporate sophisticated illumination models into direct volume rendering while maintaining interactive frame rates. In this paper, we present a novel approach for advanced illumination in direct volume rendering based on GPU ray-casting. Our approach features directional soft shadows taking scattering into account, ambient occlusion and color bleeding effects while achieving very competitive frame rates. In particular, multiple dynamic lights and interactive transfer function changes are fully supported. Commonly, direct volume rendering is based on a very simplified discrete version of the original volume rendering integral, including the development of the original exponential extinction into α-blending. In contrast to α-blending, which forms a product when sampling along a ray, the original exponential extinction coefficient is an integral and its discretization a Riemann sum. The fact that it is a sum can cleverly be exploited to implement volume lighting effects, i.e. soft directional shadows, ambient occlusion and color bleeding. We will show how this can be achieved and how it can be implemented on the GPU. |
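The distinction the abstract draws can be made explicit in standard volume rendering notation (textbook formulation, not copied from the paper). The transparency accumulated along a ray up to depth t is

    T(t) = \exp\!\left(-\int_0^t \tau(s)\,ds\right) \approx \exp\!\left(-\sum_{i=0}^{n-1} \tau_i\,\Delta t\right),

that is, a single exponential of a Riemann sum of extinction samples, whereas the usual α-blending simplification turns this into a product of per-sample opacities,

    T(t) \approx \prod_{i=0}^{n-1} (1 - \alpha_i), \qquad \alpha_i = 1 - e^{-\tau_i\,\Delta t}.

Keeping the exponential-of-a-sum form makes extinction additive along ray segments, so precomputed partial sums of extinction directly yield transparencies for shadowing and occlusion estimates.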
|
Proceedings Symposium on Parallel Graphics and Visualization, Edited by: Torsten Kuhlen, Renato Pajarola, Kun Zhou, Eurographics Association, Oxford, UK, 2011. (Proceedings)
|
|
Yongwei Miao, Jieqing Feng, Renato Pajarola, Visual saliency guided normal enhancement technique for 3D shape depiction, Computers & Graphics, Vol. 35 (3), 2011. (Journal Article)
Visual saliency can effectively guide the viewer's visual attention to salient regions of a 3D shape. By incorporating the visual saliency measure of a polygonal mesh into the normal enhancement operation, a novel saliency-guided shading scheme for shape depiction is developed in this paper. Driven by the visual saliency measure of the 3D shape, our approach adjusts the illumination and shading to enhance the geometrically salient features of the underlying model by dynamically perturbing the surface normals. The experimental results demonstrate that our non-photorealistic shading scheme can enhance the depiction of the underlying shape and the visual perception of its salient features for expressive rendering. Compared with previous normal enhancement techniques, our approach can effectively convey surface details to improve shape depiction without impairing the desired appearance. |
|
Prashant Goswami, Y Zhang, Renato Pajarola, E Gobbetti, High quality interactive rendering of massive point models using multi-way kd-trees, In: Pacific Graphics, 2010-09-25. (Conference or Workshop Paper published in Proceedings)
We present a simple and efficient technique for out-of-core multiresolution construction and high quality visualization of large point datasets. The method introduces a novel hierarchical LOD data organization based on multi-way kd-trees that simplifies memory management and allows controlling the LOD tree’s height. The technique is incorporated in a full end-to-end system, which is evaluated on complex models made of hundreds of millions of points. |
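The multi-way kd-tree idea can be sketched as follows: instead of a binary split, each inner node sorts its points along the axis of largest extent and cuts them into N equally sized slabs, so the tree height is controlled by the fan-out N. This is an illustrative in-core construction sketch only; the node layout and the per-node LOD point sets of the actual system are not modeled here.

    // Sketch of multi-way kd-tree construction: each inner node splits its points
    // along the longest axis into 'fanout' equally sized slabs (fanout >= 2 assumed).
    #include <vector>
    #include <algorithm>
    #include <memory>
    #include <cstddef>

    struct Point3 { float p[3]; };

    struct MWKdNode {
        int axis = -1;                                   // split axis, -1 for a leaf
        std::vector<float> splitPos;                     // fanout-1 boundary positions
        std::vector<std::unique_ptr<MWKdNode>> children;
        std::vector<Point3> points;                      // only filled in leaves
    };

    std::unique_ptr<MWKdNode> build(std::vector<Point3> pts, size_t fanout, size_t leafSize) {
        auto node = std::make_unique<MWKdNode>();
        if (pts.size() <= leafSize) { node->points = std::move(pts); return node; }

        // Choose the axis with the largest spatial extent.
        float lo[3] = { 1e30f, 1e30f, 1e30f }, hi[3] = { -1e30f, -1e30f, -1e30f };
        for (const Point3& q : pts)
            for (int a = 0; a < 3; ++a) {
                lo[a] = std::min(lo[a], q.p[a]);
                hi[a] = std::max(hi[a], q.p[a]);
            }
        int axis = 0;
        for (int a = 1; a < 3; ++a) if (hi[a] - lo[a] > hi[axis] - lo[axis]) axis = a;
        node->axis = axis;

        // Sort along that axis and cut into 'fanout' equally sized chunks.
        std::sort(pts.begin(), pts.end(),
                  [axis](const Point3& u, const Point3& v) { return u.p[axis] < v.p[axis]; });
        size_t chunk = (pts.size() + fanout - 1) / fanout;
        for (size_t i = 0; i < pts.size(); i += chunk) {
            size_t end = std::min(i + chunk, pts.size());
            if (i > 0) node->splitPos.push_back(pts[i].p[axis]);
            node->children.push_back(
                build(std::vector<Point3>(pts.begin() + i, pts.begin() + end), fanout, leafSize));
        }
        return node;
    }

With a fan-out of N and leaves of a fixed target size, the height of the tree is roughly log base N of the number of points divided by the leaf size, which is what lets the construction control tree height directly.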
|
Prashant Goswami, P Schlegel, B Solenthaler, Renato Pajarola, Interactive SPH simulation and rendering on the GPU, In: ACM SIGGRAPH / Eurographics Symposium on Computer Animation (SCA), 2010-07-02. (Conference or Workshop Paper published in Proceedings)
In this paper we introduce a novel parallel and interactive SPH simulation and rendering method on the GPU using CUDA which allows for high quality visualization. The crucial particle neighborhood search is based on Z-indexing and parallel sorting which eliminates GPU memory overhead due to grid or hierarchical data structures. Furthermore, it overcomes limitations imposed by shading languages allowing it to be very flexible and approaching the practical limits of modern graphics hardware. For visualizing the SPH simulation we introduce a new rendering pipeline. In the first step, all surface particles are efficiently extracted from the SPH particle cloud exploiting the simulation data. Subsequently, a partial and therefore fast distance field volume is rasterized from the surface particles. In the last step, the distance field volume is directly rendered using state-of-the-art GPU raycasting. This rendering pipeline allows for high quality visualization at very high frame rates. |
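The Z-indexing used for the neighborhood search can be sketched as a Morton code: particle positions are quantized to a virtual grid and the bits of the three cell coordinates are interleaved, so that sorting particles by this key clusters spatially nearby particles. Bit counts and grid resolution below are assumptions for illustration:

    // Sketch of a Z-index (Morton code) for particle sorting (illustrative only).
    #include <cstdint>
    #include <algorithm>

    // Spread the lower 10 bits of v so that two zero bits separate each original bit.
    static inline uint32_t expandBits(uint32_t v) {
        v &= 0x3ffu;                              // keep 10 bits -> 30-bit Morton code
        v = (v | (v << 16)) & 0x030000FFu;
        v = (v | (v <<  8)) & 0x0300F00Fu;
        v = (v | (v <<  4)) & 0x030C30C3u;
        v = (v | (v <<  2)) & 0x09249249u;
        return v;
    }

    // Morton code for a position inside the cubic domain [lo, hi]^3,
    // quantized to a 1024^3 virtual grid.
    uint32_t zIndex(float x, float y, float z, float lo, float hi) {
        float s = 1023.0f / (hi - lo);
        uint32_t xi = (uint32_t)std::min(1023.0f, std::max(0.0f, (x - lo) * s));
        uint32_t yi = (uint32_t)std::min(1023.0f, std::max(0.0f, (y - lo) * s));
        uint32_t zi = (uint32_t)std::min(1023.0f, std::max(0.0f, (z - lo) * s));
        return (expandBits(xi) << 2) | (expandBits(yi) << 1) | expandBits(zi);
    }

After a parallel sort by this key, particles in the same or adjacent grid cells end up in contiguous ranges, which is what removes the need for an explicit grid or hierarchy in GPU memory.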
|
Prashant Goswami, Maxim Makhinya, Jonas Bösch, Renato Pajarola, Scalable parallel out-of-core terrain rendering, In: Eurographics Symposium on Parallel Graphics and Visualization, 2010-05-02. (Conference or Workshop Paper published in Proceedings)
In this paper, we introduce a novel out-of-core parallel and scalable technique for rendering massive terrain datasets. The parallel rendering task decomposition is implemented on top of an existing terrain renderer using an open source framework for cluster-parallel rendering. Our approach achieves parallel rendering by division of the rendering task either in sort-last (database) or sort-first (screen domain) manner and presents an optimal method for implicit load balancing in the former mode. The efficiency of our approach is validated using massive elevation models. |
|
Maxim Makhinya, S Eilemann, Renato Pajarola, Fast compositing for cluster-parallel rendering, In: Eurographics Symposium on Parallel Graphics and Visualization, 2010-05-02. (Conference or Workshop Paper published in Proceedings)
The image compositing stages in cluster-parallel rendering for gathering and combining partial rendering results into a final display frame are fundamentally limited by node-to-node image throughput. Therefore, efficient image coding, compression and transmission must be considered to minimize that bottleneck. This paper studies the different performance limiting factors such as image representation, region-of-interest detection and fast image compression. Additionally, we show improved compositing performance using lossy YUV subsampling and we propose a novel fast region-of-interest detection algorithm that can improve in particular sort-last parallel rendering. |
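The lossy YUV subsampling idea can be sketched as a standard 4:2:0 conversion: luma is kept at full resolution while the two chroma channels are averaged over 2x2 pixel blocks, halving the per-pixel payload relative to RGB. The sketch below uses the common BT.601 integer constants and is an illustration of the principle, not the framework's actual codec:

    // Sketch: RGB -> YUV 4:2:0 with 2x2 chroma averaging (w and h assumed even).
    #include <vector>
    #include <cstdint>
    #include <cstddef>

    struct YUV420 {
        size_t w, h;
        std::vector<uint8_t> y;     // w*h luma samples
        std::vector<uint8_t> u, v;  // (w/2)*(h/2) chroma samples each
    };

    YUV420 rgbToYuv420(const std::vector<uint8_t>& rgb, size_t w, size_t h) {
        YUV420 out{w, h, std::vector<uint8_t>(w * h),
                   std::vector<uint8_t>((w / 2) * (h / 2)),
                   std::vector<uint8_t>((w / 2) * (h / 2))};
        for (size_t py = 0; py < h; py += 2)
            for (size_t px = 0; px < w; px += 2) {
                int uSum = 0, vSum = 0;
                for (size_t dy = 0; dy < 2; ++dy)
                    for (size_t dx = 0; dx < 2; ++dx) {
                        size_t i = ((py + dy) * w + (px + dx)) * 3;
                        int r = rgb[i], g = rgb[i + 1], b = rgb[i + 2];
                        out.y[(py + dy) * w + (px + dx)] =
                            (uint8_t)((66 * r + 129 * g + 25 * b + 128) / 256 + 16);
                        uSum += (-38 * r - 74 * g + 112 * b + 128) / 256 + 128;
                        vSum += (112 * r - 94 * g - 18 * b + 128) / 256 + 128;
                    }
                out.u[(py / 2) * (w / 2) + px / 2] = (uint8_t)(uSum / 4);
                out.v[(py / 2) * (w / 2) + px / 2] = (uint8_t)(vSum / 4);
            }
        return out;
    }

Combined with region-of-interest detection, only the screen-space regions that actually contain rendered content need to be converted and transmitted at all.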
|
Susanne Suter, C P E Zollikofer, Renato Pajarola, Multiscale Tensor Approximation for Volume Data, Department of Informatics, University of Zurich, Zurich, 2010-02. (Book/Research Monograph)
Advanced 3D microstructural analysis in natural sciences and engineering depends ever more on modern data acquisition and imaging technologies such as micro-computed or synchrotron tomography and interactive visualization. The acquired high-resolution volume data sets have sizes in the order of tens to hundreds of GBs, and typically exhibit spatially complex internal structures. Such large structural volume data sets represent a grand challenge to be explored, analyzed and interpreted by means of interactive visualization, since the amount of data to be rendered is typically far beyond the current performance limits of interactive graphics systems. As a new approach to tackle this bottleneck problem, we employ higher-order tensor approximations (TAs). We demonstrate the power of TA to represent, and focus on, structural features in volume data. We show that TA yields a high data reduction at competitive rate distortion and that, at the same time, it provides a natural means for multiscale volume feature representation. |
|
Igor Bozic, Efficient compression for the visualization of large volume data, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2010. (Bachelor's Thesis)
Due to the enormous size of high-resolution volume data visualized in areas such as bio-medical imaging and computational fluid dynamics, as well as the increasing importance of video communication, efficient and compact data representations are important. Most of the compression techniques known today, especially in signal analysis and image compression, use wavelet filters for compression and decompression. This thesis focuses mainly on the Haar and the Daubechies wavelet transforms; a library for the decomposition and reconstruction of volume data has been implemented and tested. An introduction to filter banks in 1-D, 2-D and 3-D is given, and the associated properties of common wavelet transforms, especially of the Haar and Daubechies filters, are discussed. Papers in the field of data compression from the last ten years have been investigated and classified. |
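One level of the 1-D Haar transform, the basic building block that a separable 3-D decomposition applies along each axis in turn, can be sketched as follows. This is an illustrative, orthonormal formulation, not the thesis library code:

    // Sketch: one level of the orthonormal 1-D Haar wavelet transform and its inverse.
    // The forward step stores averages (low-pass) in the first half of the output and
    // details (high-pass) in the second half; the input length is assumed to be even.
    #include <vector>
    #include <cmath>
    #include <cstddef>

    std::vector<double> haarForward(const std::vector<double>& in) {
        size_t n = in.size() / 2;
        std::vector<double> out(in.size());
        const double s = 1.0 / std::sqrt(2.0);
        for (size_t i = 0; i < n; ++i) {
            out[i]     = (in[2 * i] + in[2 * i + 1]) * s;   // approximation coefficient
            out[n + i] = (in[2 * i] - in[2 * i + 1]) * s;   // detail coefficient
        }
        return out;
    }

    std::vector<double> haarInverse(const std::vector<double>& c) {
        size_t n = c.size() / 2;
        std::vector<double> out(c.size());
        const double s = 1.0 / std::sqrt(2.0);
        for (size_t i = 0; i < n; ++i) {
            out[2 * i]     = (c[i] + c[n + i]) * s;         // exact reconstruction
            out[2 * i + 1] = (c[i] - c[n + i]) * s;
        }
        return out;
    }

Compression then comes from quantizing or discarding small detail coefficients after several such levels, while the inverse transform reconstructs the data from the remaining coefficients.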
|
Samuel Mezger, Spatial data structures benchmarking framework, 2010. (Other Publication)
This paper documents a 'Facharbeit' (term project) at the Visualization
and Multimedia Lab at the University of Zürich. The task was to
implement a framework in C++ to benchmark the performance of different spatial
data structures for three-dimensional point data. A focus was kept on a direct
representation of the theory, with clean object orientation, a clear
separation of concerns, and good maintainability and extensibility of the resulting
code.
Data is read from vertices stored in a PLY file and loaded into a
structure as configured by the user. Various manipulations of this data are performed,
such as accessing and moving points or finding neighbours. For benchmarking, the time
these operations take is measured.
This paper begins with a short introduction to the theoretical background of the
selected data structures; then the implementation is described by explaining
the general approach as well as specific problems and their solutions. Finally, some
initial benchmarking results are shown that lead to the conclusion that there
is no single best data structure among those tested (grid, bucket point-region
kd-tree and bucket sliding-midpoint kd-tree), but the sliding-midpoint kd-tree
performs more predictably than the point-region version.
For actual use, the data structures would possibly have to be re-implemented,
as their implementations for the benchmark framework are intended for relative
performance comparison only, not for absolute efficiency. |
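The core of such a benchmark is a simple timing harness; the sketch below shows the idea, with interface names that are hypothetical rather than the framework's actual API:

    // Sketch: average the wall-clock time of one operation over many iterations.
    #include <chrono>
    #include <cstddef>
    #include <functional>

    double averageMicroseconds(const std::function<void()>& op, size_t iterations) {
        auto start = std::chrono::steady_clock::now();
        for (size_t i = 0; i < iterations; ++i)
            op();                                   // the operation under test
        auto end = std::chrono::steady_clock::now();
        double us = std::chrono::duration<double, std::micro>(end - start).count();
        return us / (double)iterations;
    }

    // Usage idea (hypothetical structure interfaces): time the same operation on each
    // candidate and compare the averages.
    //   double tGrid   = averageMicroseconds([&] { grid.nearestNeighbour(q); },   10000);
    //   double tKdTree = averageMicroseconds([&] { kdTree.nearestNeighbour(q); }, 10000);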
|
Serge Hänni, Interactive feature detector for biomedical structural analysis, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2010. (Bachelor's Thesis)
Histological dental growth analyses are conducted to obtain various biological, anthropological and forensic conclusions,
ranging from phylogenetic insights to estimates such as the individual's age at death. They focus
on the manual counting and measurement of certain growth structures found in dental hard tissues. As these analyses
are tedious to perform and prone to observer errors, computational tools and algorithms are needed to facilitate
their execution and increase their reliability. To address this problem, concepts that support a high degree
of user interactivity within a software framework are developed. The implementation allows dental
structures on digital images to be annotated manually in several ways and includes a first approach to the automatic detection of
these dental structures. It is shown that the semi-automatic detection needs to be further improved and that
additional tools are needed to simplify studies and improve their reproducibility among researchers. |
|
Barbara Solenthaler, Renato Pajarola, Performance Comparison of Parallel PCISPH and WCSPH, No. IFI-2010.0003, Version: 1, 2010. (Technical Report)
|
|
Proceedings Symposium on Parallel Graphics and Visualization, Edited by: J Ahrens, K Debattista, Renato Pajarola, Eurographics Association, Oxford, UK, 2010. (Edited Scientific Work)
|
|