Philipp Schlegel, Renato Pajarola, Visibility-difference entropy for automatic transfer function generation, In: Proceedings SPIE Conference on Visualization and Data Analysis, SPIE - International Society for Optical Engineering, 2013-02-03. (Conference or Workshop Paper published in Proceedings)
Direct volume rendering allows for interactive exploration of volumetric data and has become an important tool in many visualization domains. However, the insight and information that can be obtained depend on the transfer function defining the transparency of voxels. Constructing good transfer functions is one of the most time-consuming and cumbersome tasks in volume visualization. We present a novel general-purpose method for automatically generating an initial set of best transfer function candidates. The generated transfer functions reveal the major structural features within the volume and allow for an efficient initial visual analysis, serving as a basis for further interactive exploration, in particular of previously unknown data. The basic idea is to introduce a metric of the goodness of a transfer function that indicates how much information can be gained from the rendered images during interactive visualization. In contrast to prior methods, our approach does not require a user feedback loop, operates exclusively in image space, and takes the characteristics of interactive data exploration into account. We show how our new transfer function generation method can uncover the major structures of an unknown dataset within only a few minutes. |
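As a minimal illustration of the kind of image-space information measure such an approach can build on, the sketch below computes the Shannon entropy of a rendered image's intensity histogram; the paper's actual visibility-difference entropy metric is not reproduced here, and the function name and binning are assumptions.

```python
# Illustrative sketch (not the paper's metric): Shannon entropy of a rendered
# image's intensity histogram, a basic image-space information measure.
import numpy as np

def image_entropy(image, bins=256):
    """image: array of intensities in [0, 1]; returns entropy in bits."""
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```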
|
Marcos Balsa Rodríguez, Enrico Gobbetti, José Antonio Iglesias Guitián, Maxim Makhinya, Fabio Marton, Renato Pajarola, Susanne Suter, A Survey of Compressed GPU Direct Volume Rendering, 2013. (Other Publication)
Great advancements in commodity graphics hardware have favored GPU-based volume rendering as the main adopted solution for interactive exploration of rectilinear scalar volumes on commodity platforms. Nevertheless, long data transfer times and GPU memory size limitations are often the main limiting factors, especially for massive, time-varying or multi-volume visualization, or for networked visualization on emerging mobile devices. To address this issue, a variety of level-of-detail data representations and compression techniques have been introduced. In order to improve capabilities and performance over the entire storage, distribution and rendering pipeline, the encoding/decoding process is typically highly asymmetric, and systems should ideally compress at data production time and decompress on demand at rendering time. Compression and level-of-detail pre-computation do not have to adhere to real-time constraints and can be performed off-line for high-quality results. In contrast, adaptive real-time rendering from compressed representations requires fast, transient, and spatially independent decompression. In this report, we review the existing compressed GPU volume rendering approaches, covering compact representation models, compression techniques, GPU rendering architectures and fast decoding techniques. |
|
Shmuel Friedland, Volker Mehrmann, Renato Pajarola, Susanne Suter, On best rank one approximation of tensors, Numerical Linear Algebra with Applications, Vol. 20 (6), 2013. (Journal Article)
Today, compact and reduced data representations using low-rank data approximation are common for representing high-dimensional data sets in many application areas such as genomics, multimedia, quantum chemistry, social networks or visualization. In order to produce such low-rank data representations, the input data is typically approximated by so-called alternating least squares (ALS) algorithms. However, not all of these ALS algorithms are guaranteed to converge. To address this issue, we suggest a new algorithm for the computation of a best rank one approximation of tensors, called alternating singular value decomposition. This method is based on the computation of maximal singular values and the corresponding singular vectors of matrices. We also introduce a modification for this method and the alternating least squares method that ensures that the alternating iterations will always converge to a semi-maximal point. (A critical point in several vector variables is semi-maximal if it is maximal with respect to each vector variable while the other vector variables are kept fixed.) We present several numerical examples that illustrate the computational performance of the new method in comparison to the alternating least squares method. |
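For readers unfamiliar with the baseline the paper compares against, the following is a minimal sketch of a standard ALS iteration for the best rank-one approximation of a third-order tensor; it is not the paper's alternating SVD method, and the function name and initialization are illustrative assumptions.

```python
# Illustrative sketch: best rank-one approximation of a 3-way tensor via
# standard alternating least squares (ALS) updates, T ~ lam * outer(x, y, z).
import numpy as np

def rank_one_als(T, iters=50):
    I, J, K = T.shape
    rng = np.random.default_rng(0)
    # random unit-vector initialization (an assumption, not from the paper)
    x = rng.standard_normal(I); x /= np.linalg.norm(x)
    y = rng.standard_normal(J); y /= np.linalg.norm(y)
    z = rng.standard_normal(K); z /= np.linalg.norm(z)
    for _ in range(iters):
        # each update is a least-squares solve with the other two vectors fixed
        x = np.einsum('ijk,j,k->i', T, y, z); x /= np.linalg.norm(x)
        y = np.einsum('ijk,i,k->j', T, x, z); y /= np.linalg.norm(y)
        z = np.einsum('ijk,i,j->k', T, x, y); z /= np.linalg.norm(z)
    lam = np.einsum('ijk,i,j,k->', T, x, y, z)
    return lam, x, y, z
```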
|
Susanne Suter, Maxim Makhinya, Renato Pajarola, TAMRESH: Tensor Approximation Multiresolution Hierarchy for Interactive Volume Visualization, Computer Graphics Forum, Vol. 32 (3), 2013. (Journal Article)
Interactive visual analysis of large and complex volume datasets is an ongoing and challenging problem. We tackle this challenge in the context of state-of-the-art out-of-core multiresolution volume rendering by introducing a novel hierarchical tensor approximation (TA) volume visualization approach. The TA framework allows us (a) to use a rank-truncated basis for compact volume representation, (b) to visualize features at multiple scales, and (c) to visualize the data at multiple resolutions. In this paper, we exploit the special properties of the TA factor matrix bases and define a novel multiscale and multiresolution volume rendering hierarchy. Different from previous approaches, we represent one volume dataset with a single set of global bases (TA factor matrices), from which we reconstruct at all resolution levels and feature scales. In particular, we propose a coupling of multiscale feature visualization and multiresolution DVR through the properties of the global TA bases. We demonstrate our novel TA multiresolution hierarchy based volume representation and visualization on a number of μCT volume datasets. |
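As background on the kind of rank-truncated reconstruction a TA framework enables, here is a minimal sketch that rebuilds a volume block from a Tucker-style core tensor and factor matrices truncated to rank r; the names and layout are assumptions and do not reflect the TAMRESH implementation.

```python
# Illustrative sketch: reconstructing a volume block from a Tucker-style tensor
# approximation (core tensor G and factor matrices U1, U2, U3), truncated to
# rank r to trade quality against memory.
import numpy as np

def tucker_reconstruct(G, U1, U2, U3, r):
    """Reconstruct using only the first r columns of each factor matrix."""
    Gr = G[:r, :r, :r]
    return np.einsum('abc,ia,jb,kc->ijk', Gr, U1[:, :r], U2[:, :r], U3[:, :r])
```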
|
Prashant Goswami, Fatih Erol, Rahul Mukhi, Renato Pajarola, Enrico Gobbetti, An efficient multiresolution framework for high quality interactive rendering of massive point clouds using multi-way kd-trees, Visual Computer, Vol. 28 (1), 2013. (Journal Article)
We present an efficient technique for out-of-core multi-resolution construction and high quality interactive visualization of massive point clouds. Our approach introduces a novel hierarchical level of detail (LOD) organization based on multi-way kd-trees, which simplifies memory management and allows control over the LOD-tree height. The LOD tree, constructed bottom-up using a fast high-quality point simplification method, is fully balanced and consists of uniformly sized nodes. To this end, we introduce and analyze three efficient point simplification approaches that yield a desired number of high-quality output points. For constant rendering performance, we propose an efficient rendering-on-a-budget method with asynchronous data loading, which delivers fully continuous high-quality rendering through LOD geo-morphing and deferred blending. Our algorithm is incorporated in a full end-to-end rendering system, which supports both local rendering and cluster-parallel distributed rendering. The method is evaluated on complex models made of hundreds of millions of point samples. |
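A minimal sketch of the multi-way kd-tree idea, assuming a simple longest-axis split into equally sized children; the actual construction, simplification and out-of-core handling in the paper are more involved, and the names here are hypothetical.

```python
# Illustrative sketch: a multi-way kd-tree built by recursively splitting the
# points along the longest bounding-box axis into 'fanout' equally sized
# children, so sibling nodes hold roughly the same number of points.
import numpy as np

def build_multiway_kdtree(points, fanout=4, leaf_size=1024):
    if len(points) <= leaf_size:
        return {"leaf": True, "points": points}
    axis = int(np.argmax(points.max(axis=0) - points.min(axis=0)))
    order = np.argsort(points[:, axis])
    chunks = np.array_split(order, fanout)  # equally sized children
    children = [build_multiway_kdtree(points[c], fanout, leaf_size) for c in chunks]
    return {"leaf": False, "axis": axis, "children": children}
```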
|
Rahul Mukhi, Real-time water simulation, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2012. (Master's Thesis)
In this work I present an implementation of water simulation techniques that runs in real time with user interaction for video games. The technique combines a local shallow water simulation, which handles user-driven rigid body interaction, with a computationally cheaper FFT-based water simulation method.
The scope of my work is to implement a solution that achieves plausible-looking rather than physically accurate results. It includes the generation of waves created by the interaction of rigid bodies with the water surface, reflecting boundaries for vertical walls, and absorbing boundaries between the shallow water domain and the FFT domain. The rendering of the water includes reflection and refraction with alpha blending using a Fresnel term. The parameters can be tuned to achieve the desired effects as per the requirements. |
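As an illustration of the reflection/refraction blending mentioned above, the sketch below uses Schlick's approximation of the Fresnel term, a common choice; the thesis does not state which approximation it uses, so the function names and constants are assumptions.

```python
# Illustrative sketch: blending reflection and refraction colors with a Fresnel
# term using Schlick's approximation (air/water indices of refraction assumed).
def schlick_fresnel(cos_theta, n1=1.0, n2=1.33):
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

def shade_water(reflected, refracted, cos_theta):
    f = schlick_fresnel(cos_theta)
    # grazing angles favor reflection, head-on views favor refraction
    return tuple(f * rl + (1.0 - f) * rf for rl, rf in zip(reflected, refracted))
```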
|
Fabio Guggeri, Riccardo Scateni, Renato Pajarola, Shape reconstruction from raw point clouds using depth carving, In: Proceedings EUROGRAPHICS Short Papers, 2012-05-13. (Conference or Workshop Paper)
Shape reconstruction from raw point sets is a hot research topic. Point sets are increasingly available as a primary input source, since low-cost acquisition methods are widely accessible nowadays, and these sets are noisier than they used to be. Standard reconstruction methods rely on normals or signed distance functions, and thus many methods aim at estimating these features. Human vision can, however, easily discern between the inside and the outside of a dense cloud even without the support of such additional measures. We propose here a perceptual method for estimating an indicator function for the shape, inspired by image-based methods. The resulting function closely approximates the shape, is robust to noise, and can be used for direct isosurface extraction or as input for other accurate reconstruction methods. |
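A conceptual sketch of the carving idea, assuming per-view depth maps of the cloud and an abstract projection function are available; it is not the paper's algorithm, only an indication of how depth comparisons can yield an inside/outside indicator.

```python
# Illustrative sketch: voxels that project in front of the observed depth in
# some view are marked as outside; the remainder approximates an indicator
# function. The 'project' callback is a placeholder for a camera model.
import numpy as np

def carve(volume_coords, depth_maps, project):
    """
    volume_coords: (V, 3) voxel centers.
    depth_maps:    list of 2D depth arrays, one per view.
    project:       function(coords, view) -> (pixel_rows, pixel_cols, depths).
    Returns a boolean 'inside' flag per voxel.
    """
    inside = np.ones(len(volume_coords), dtype=bool)
    for view, dmap in enumerate(depth_maps):
        r, c, d = project(volume_coords, view)
        observed = dmap[r, c]
        inside &= d >= observed  # carved away if closer to the camera than the cloud
    return inside
```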
|
Stefan Eilemann, Ahmet Bilgili, Marwan Abdellah, Juan Hernando, Maxim Makhinya, Renato Pajarola, Felix Schürmann, Parallel rendering on hybrid multi-GPU clusters, In: Eurographics Symposium on Parallel Graphics and Visualization, Eurographics, 2012-05-01. (Conference or Workshop Paper published in Proceedings)
Achieving efficient scalable parallel rendering for interactive visualization applications on medium-sized graphics clusters remains a challenging problem. Frame rates of up to 60 Hz require a carefully designed and fine-tuned parallel rendering implementation that fits all required operations into the 16 ms time budget available for each rendered frame. Furthermore, modern commodity hardware increasingly embraces a NUMA architecture, where multiple processor sockets each have their locally attached memory and where auxiliary devices such as GPUs and network interfaces are directly attached to one of the processors. Such so-called fat NUMA processing and graphics nodes are increasingly used to build cost-effective hybrid shared/distributed memory visualization clusters. In this paper we present a thorough analysis of the asynchronous parallelization of the rendering stages and we derive and implement important optimizations to achieve highly interactive frame rates on such hybrid multi-GPU clusters. We use both a benchmark program and a real-world scientific application used to visualize, navigate and interact with simulations of cortical neuron circuit models. |
|
Susanne Suter, Renato Pajarola, Tensor approximation properties for multiresolution and multiscale volume visualization, In: Posters IEEE Visualization Conference, Seattle, WA, 2012-01-14. (Conference or Workshop Paper)
Interactive visualization and analysis of large and complex volume data is still a big challenge. Compression-domain volume rendering methods have shown that mathematical tools to represent and compress large data are very successful. We build on tensor approximation (TA), a framework that is widely used for data approximation. Specific properties of the TA bases are elaborated in the context of multiresolution and multiscale volume visualization. |
|
Yongwei Miao, Jonas Bösch, Renato Pajarola, M Gopi, Jieqing Feng, Feature sensitive re-sampling of point set surfaces with Gaussian spheres, Science China Information Sciences, Vol. 55 (9), 2012. (Journal Article)
Feature-sensitive simplification and re-sampling of point set surfaces is an important and challenging issue for many computer graphics and geometric modeling applications. Based on a regular sampling of the Gaussian sphere and the mapping of surface normals onto the Gaussian sphere, an adaptive re-sampling framework for point set surfaces is presented in this paper, which includes a naive sampling step by index propagation and a novel cluster optimization step by normalized rectification. Our proposed re-sampling scheme can generate non-uniformly distributed discrete sample points for the underlying point sets in a feature-sensitive manner. The intrinsic geometric features of the underlying point set surfaces can be preserved efficiently thanks to our adaptive re-sampling scheme. A novel splat rendering technique is adopted to illustrate the efficiency of our re-sampling scheme. Moreover, numerical error statistics and surface reconstructions for the simplified models are also given to demonstrate the effectiveness of our algorithm in terms of the quality of the simplified point set surfaces. |
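As a minimal sketch of mapping surface normals onto a regularly sampled Gaussian sphere, the code below bins unit normals into longitude/latitude cells; the paper's actual sphere sampling and index propagation differ, so the binning scheme here is an assumption.

```python
# Illustrative sketch: clustering point normals by mapping them onto a regularly
# sampled Gaussian sphere (simple longitude/latitude binning).
import numpy as np

def gaussian_sphere_bin(normals, n_theta=16, n_phi=32):
    """Return one bin index per unit normal (normals: N x 3 array)."""
    theta = np.arccos(np.clip(normals[:, 2], -1.0, 1.0))          # polar angle
    phi = np.mod(np.arctan2(normals[:, 1], normals[:, 0]), 2 * np.pi)
    ti = np.minimum((theta / np.pi * n_theta).astype(int), n_theta - 1)
    pi_idx = np.minimum((phi / (2 * np.pi) * n_phi).astype(int), n_phi - 1)
    return ti * n_phi + pi_idx
```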
|
Kun Zhou, Renato Pajarola, Guest editors' introduction: Special Section on the Eurographics Symposium on Parallel Graphics and Visualization (EGPGV), IEEE Transactions on Visualization and Computer Graphics, Vol. 18 (6), 2012. (Journal Article)
|
|
Renato Pajarola, Information overload, Public Service Review: Science and Technology (16), 2012. (Journal Article)
|
|
David Klaper, Image registration of visible human modalities: Facharbeit, 2012. (Other Publication)
The Visible Human Project offers different medical modalities, including cryosections and CT, of a male and a female human. One interesting research question is what we can learn from combining these modalities. In order to facilitate this comparison, the goal of this work is to register the image modalities and produce a multimodal volume. A sequence of steps has been developed to clean, register and post-process the data. Eventually, two multimodal volumes were produced through rigid and non-rigid image registration. The Visible Male dataset was successfully registered with good precision throughout the body. Owing to the deformations between the fresh CT and the frozen cryosections, the precision of the Visible Female registration is lower. Nevertheless, about 85% of the body was successfully registered. |
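For illustration of the rigid registration step, here is a minimal landmark-based sketch using the Kabsch/Procrustes solution; the work's actual registration pipeline and software are not specified here, and the function name is hypothetical.

```python
# Illustrative sketch: rigid landmark-based registration (Kabsch algorithm),
# a common building block of multimodal registration pipelines.
import numpy as np

def rigid_register(src, dst):
    """Find rotation R and translation t minimizing ||R @ src_i + t - dst_i||."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```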
|
Juan Pablo Carbajal, Harnessing Nonlinearities: Behavior Generation from Natural Dynamics, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2012. (Dissertation)
|
|
Philipp Schlegel, Automatic transfer function generation and extinction-based approaches in direct volume visualization, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2012. (Dissertation)
Direct volume visualization has become an important tool in many domains for visualizing and examining volumetric datasets. The tremendous increase in computing power of the hardware over the past years makes it possible to immediately visualize volumetric datasets obtained from scanning devices at fully interactive frame rates. However, despite this change of paradigm compared to the slow offline methods of the past, direct volume visualization suffers from disadvantages constricting an immediate, reliable analysis of volumetric datasets.

This thesis begins with an overview of different methods for direct volume visualization, followed by an in-depth review of the theoretical foundation including inherent challenges. Subsequently, selected state-of-the-art techniques used in this thesis are explained in detail. One challenge that all techniques have in common is the dependency on good transfer functions. Only good transfer functions allow for the right insight into the dataset, permitting a reliable analysis. These transfer functions are often constructed manually in a time-consuming and cumbersome trial-and-error process. We propose an automated general-purpose approach for generating a set of best transfer functions based on information theory. Our algorithm appraises the information content of the images generated by a particular transfer function when rotating the dataset, as is the case in interactive sessions. Quantifying the quality of a transfer function in this way enables a directed search for the set of best transfer functions in a feedback loop employing a combination of two different optimization algorithms. This set of best, distinct transfer functions helps the user to gain an immediate overview of each facet of a dataset.

When visualizing volumetric datasets, it is of major importance that domain experts are able to recognize small features, to distinguish the relationship and connectivity between them and to get the right perception. For this, the applied illumination and shading model plays an important part. Sophisticated models including realistic-looking directional shadows, ambient occlusion and color bleeding effects can greatly enhance the perception. Unfortunately, common models exhibiting these effects are expensive to compute and not suitable for interactive applications. We present a method showing how these effects can be applied to GPU volume ray-casting while fully maintaining interactivity, based on the original, exponential extinction coefficient of the volume rendering integral. Exploiting the fact that the extinction coefficient is summable, our framework is built on top of a 3D summed area table that allows for quick lookups of extinction queries.

Technically, volumetric datasets consist of discrete scalar or sometimes vector data. As the resolution of this data hardly ever fits the resolution of the output device, the data needs to be interpolated or reconstructed. Volume visualization methods based on 3D textures can profit from the fast built-in trilinear interpolation of the hardware. However, trilinear interpolation is not the first choice when it comes to image quality. Volume splatting, on the other hand, is a volume visualization technique that makes it easy to integrate arbitrary interpolation schemes. The performance of volume splatting is directly related to the applied interpolation scheme and the resulting interpolation kernel, respectively. In this thesis we introduce an algorithm for volume splatting that greatly enhances the performance by reducing the required number of splatting operations for the interpolation kernel slices. Further, we show how the image quality of volume visualization can be enhanced by using the original, exponential extinction coefficient of the volume rendering integral instead of common alpha-blending simplifications. |
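As an illustration of the summed-area-table idea underlying the extinction-based approach, the sketch below builds a 3D summed area table over per-voxel extinction values and answers box-sum queries by inclusion-exclusion; names and layout are assumptions, not the dissertation's implementation.

```python
# Illustrative sketch: a 3D summed-area table over per-voxel extinction values,
# allowing constant-time box sums of optical depth.
import numpy as np

def build_sat3d(extinction):
    sat = np.cumsum(np.cumsum(np.cumsum(extinction, axis=0), axis=1), axis=2)
    return np.pad(sat, ((1, 0), (1, 0), (1, 0)))  # zero border simplifies queries

def box_sum(sat, lo, hi):
    """Sum of extinction over voxels lo[i] <= v < hi[i], by inclusion-exclusion."""
    x0, y0, z0 = lo
    x1, y1, z1 = hi
    return (sat[x1, y1, z1] - sat[x0, y1, z1] - sat[x1, y0, z1] - sat[x1, y1, z0]
            + sat[x0, y0, z1] + sat[x0, y1, z0] + sat[x1, y0, z0] - sat[x0, y0, z0])
```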
|
Maxim Makhinya, Performance challenges in distributed rendering systems, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2012. (Dissertation)
|
|
Prashant Goswami, Level-of-detail and parallel solutions in computer graphics, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2012. (Dissertation)
|
|
Jonas Bösch, Efficient stream-processing of large geometric data sets, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2012. (Dissertation)
|
|
Duck-Bong Kim, Renato Pajarola, Kwan Heng Lee, Efficient reduction of point data sets for surface splatting using geometry and color attributes, International Journal of Advanced Manufacturing and Technology, Vol. 61 (5-8), 2012. (Journal Article)
The surface splat, one of the point-based rendering primitives, has offered a powerful alternative to triangle meshes when it comes to the rendering of highly complex objects, due to its potential for high-performance and high-quality rendering. Recently, the technological advance of 3D scanners has made it possible to acquire color as well as geometry data of highly complex objects with very high speed and accuracy. However, scanning and acquisition systems often produce surfaces that are much denser than actually required for the intended application. Therefore, a reduction of the point data set is necessary to further process the model. Although many efficient sampling methods for point-based surfaces have been proposed to reduce the complexity of geometric models, none of these has taken color into account, which is fundamental for achieving a high-quality visual appearance. Therefore, we propose an efficient sampling method of point data sets for surface splatting which uses both geometry and color attributes. Our proposed method converts a dense set of point samples into a sparse set of object-space splats. It successfully approximates the original model within a given geometric and color error. In order to measure color differences between point samples consistently, the color error tolerance is evaluated in the CIELAB uniform color space. |
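A minimal sketch of evaluating a color error tolerance in CIELAB space, assuming colors are already given as (L*, a*, b*) triples and using the simple CIE76 Euclidean distance; the paper's exact tolerance test is not reproduced, and the names are hypothetical.

```python
# Illustrative sketch: color error between two point samples as the Euclidean
# distance Delta-E in CIELAB space (CIE76 formula).
import numpy as np

def delta_e_cie76(lab1, lab2):
    """lab1, lab2: (L*, a*, b*) triples."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

# usage: merge two splats only if delta_e_cie76(c1, c2) <= color_tolerance
```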
|
Prashant Goswami, Renato Pajarola, Time adaptive approximate SPH, In: Workshop on Virtual Reality Interaction and Physical Simulation VRIPHYS, Eurographics, VRIPHYS 2011, 2011-12-05. (Conference or Workshop Paper published in Proceedings)
In this paper, we present two different techniques to accelerate and approximate particle-based fluid simulations. The first technique identifies and employs larger time steps than dictated by the CFL condition. The second introduces the concept of approximation in the context of particle advection. For that, the fluid is segregated into active and inactive particles, and a significant amount of computation is saved on the inactive particles. Using these two optimization techniques, our approach can achieve up to 7 times speed-up compared to a standard SPH method, and it is compatible with other SPH improvement methods. We demonstrate the effectiveness of our method using up to one million particles and also compare it to a standard SPH particle simulation visually and statistically. |
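For reference, a minimal sketch of the conservative CFL-based time step that the first technique relaxes; the constant and function name are assumptions, and the paper's criterion for safely exceeding this bound is not shown.

```python
# Illustrative sketch: conservative SPH time step from the CFL condition,
# dt <= lambda * h / v_max, with h the smoothing length.
import numpy as np

def cfl_time_step(velocities, h, cfl_lambda=0.4, eps=1e-9):
    v_max = np.max(np.linalg.norm(velocities, axis=1))
    return cfl_lambda * h / max(v_max, eps)
```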
|