Björn Sommer, Alexandra Diehl, Michael Aichem, Philipp Meschenmoser, Kim Rehberg, David Weber, Ying Zhang, Karsten Klein, Daniel Keim, Falk Schreiber, Tiled Stereoscopic 3D Display Wall - Concept, Applications and Evaluation, Electronic Imaging, Vol. 2019 (3), 2019. (Journal Article)
Manuela Waldner, Alexandra Diehl, Denis Gračanin, Rainer Splechtna, Claudio Delrieux, Krešimir Matković, A Comparison of Radial and Linear Charts for Visualizing Daily Patterns, IEEE Transactions on Visualization and Computer Graphics, 2019. (Journal Article)
Rafael Ballester-Ripoll, David Steiner, Renato Pajarola, Multiresolution Volume Filtering in the Tensor Compressed Domain, IEEE Transactions on Visualization and Computer Graphics, Vol. 24 (10), 2018. (Journal Article)
Signal processing and filter operations are important tools for visual data processing and analysis. Due to GPU memory and bandwidth limitations, it is challenging to apply complex filter operators to large-scale volume data interactively. We propose a novel and fast multiscale compression-domain volume filtering approach integrated into an interactive multiresolution volume visualization framework. In our approach, the raw volume data is decomposed offline into a compact hierarchical multiresolution tensor approximation model. We then demonstrate how convolution filter operators can effectively be applied in the compressed tensor approximation domain. To prevent aliasing due to multiresolution filtering, our solution (a) filters accurately at the full spatial volume resolution at a very low cost in the compressed domain, and (b) reconstructs and displays the filtered result at variable level-of-detail. The proposed system is scalable, allowing interactive display and filtering of large volume datasets that may exceed the available GPU memory. The desired filter kernel mask and size can be modified online, producing immediate visual results.
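Since this entry centers on applying convolutions directly to a compressed tensor representation, a minimal NumPy sketch may help illustrate the underlying identity: for a separable kernel, filtering a Tucker-compressed volume reduces to convolving the columns of each small factor matrix. This only illustrates the compressed-domain principle, not the paper's multiresolution GPU pipeline; the fixed-rank setup and all names are assumptions for the example.

```python
import numpy as np

def tucker_reconstruct(core, U1, U2, U3):
    """Expand a Tucker model (core x1 U1 x2 U2 x3 U3) to a full volume."""
    t = np.einsum('abc,ia->ibc', core, U1)
    t = np.einsum('ibc,jb->ijc', t, U2)
    return np.einsum('ijc,kc->ijk', t, U3)

def convolve_columns(U, k):
    """Convolve each column of a factor matrix with a 1D kernel."""
    return np.stack([np.convolve(U[:, r], k, mode='same')
                     for r in range(U.shape[1])], axis=1)

def convolve_axis(V, k, axis):
    """Reference separable convolution of the full volume along one axis."""
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), axis, V)

rng = np.random.default_rng(0)
core = rng.standard_normal((4, 4, 4))
U = [rng.standard_normal((32, 4)) for _ in range(3)]
k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # 1D binomial smoothing kernel

# Filter in the compressed domain: O(rank * n) 1D convolutions instead of O(n^3).
compressed_result = tucker_reconstruct(core, *[convolve_columns(Un, k) for Un in U])

# Reference: reconstruct first, then filter the full volume along every axis.
full = tucker_reconstruct(core, *U)
for ax in range(3):
    full = convolve_axis(full, k, ax)

print(np.allclose(compressed_result, full))   # True: the two paths agree
```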
Yongwei Miao, Yuliang Sun, Xudong Fang, Jiazhou Chen, Xudong Zhang, Renato Pajarola, Relief Generation from 3D Scenes guided by Geometric Texture Richness, Computational Visual Media, Vol. 4 (3), 2018. (Journal Article)
Typically, relief generation from an input 3D scene is limited to either bas-relief or high-relief modeling. This paper presents a novel unified scheme for synthesizing reliefs guided by the geometric texture richness of 3D scenes; it can generate both bas- and high-reliefs. The type of relief and compression coefficient can be specified according to the user's artistic needs. We use an energy minimization function to obtain the surface reliefs, which contains a geometry preservation term and an edge constraint term. An edge relief measure determined by geometric texture richness and edge z-depth is utilized to achieve a balance between these two terms. During relief generation, the geometry preservation term keeps local surface detail in the original scenes, while the edge constraint term maintains regions of the original models with rich geometric texture. Elsewhere, in high-reliefs, the edge constraint term also preserves depth discontinuities in the higher parts of the original scenes. The energy function can be discretized to obtain a sparse linear system. The reliefs are obtained by solving this system iteratively. Finally, we apply non-linear compression to the relief to meet the user's artistic needs. Experimental results show the method's effectiveness for generating both bas- and high-reliefs for complex 3D scenes in a unified manner.
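The paper's energy combines a geometry preservation term with an edge constraint term and solves the discretized system iteratively. As a rough, much-simplified sketch of that pattern, the following sets up a quadratic two-term energy on a 1D height profile (gradient preservation plus soft constraints at detected edges), discretizes it into a sparse linear system, and solves it with conjugate gradients. The terms, weights and edge measure here are illustrative stand-ins, not those of the paper.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import cg

n = 200
x = np.linspace(0.0, 4.0 * np.pi, n)
d = np.sin(x) + 0.2 * np.sin(8.0 * x)          # toy input depth profile

# Geometry term: preserve (compressed) gradients of the input.
g = 0.3 * np.diff(d)                            # target gradients, length n-1
# Edge term: softly pin the relief to the input depth at "edge" samples.
edge = np.abs(np.diff(d, 2, prepend=0, append=0)) > 0.02
lam = 10.0

# Forward-difference operator; D^T D is the 1D Laplacian.
D = diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))
M = diags(edge.astype(float))

A = D.T @ D + lam * M + 1e-8 * identity(n)      # sparse normal equations
b = D.T @ g + lam * (M @ d)

h, info = cg(A, b, maxiter=500)                  # iterative solve
print(info, h[:5])                               # info == 0 means converged
```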
Thomas Huber, Volume Rendering in VR for Hurricane Simulations, University of Zurich, Faculty of Business, Economics and Informatics, 2018. (Bachelor's Thesis)
Volume rendering for scientific visualization has gained popularity in the last twenty years, accompanied by the evolution of data generating methods in fields like medical engineering or geoscience. Increasingly large datasets in the form of point clouds demand intricate but efficient algorithms to visualize and navigate them. To convey key features distinguishable by the human eye, the volumes need to be colored and illuminated in meaningful variations. Navigation in 3D space to change viewing angles and free interaction with the datasets are additional requirements. Due to the constant development and improvement of graphics hardware, real-time rendering of very large point clouds has become possible in recent years.
To enhance the immersion of volume visualizations, this thesis explores the presentation of hurricane data in a virtual reality environment. A volume renderer was implemented in the pre-existing GlobeEngine framework, incorporating a direct volume rendering algorithm based on ray casting. The application is able to perform physically based volume visualization using an emission-absorption model. Additional features are isosurface extraction and local maximum intensity projection, combined with Blinn-Phong illumination.
Various user interaction possibilities were included into the program. Transfer function loading, editing and saving is provided by a special widget. The illumination parameters can be controlled during runtime as well as the different rendering methods and their settings.
The volumes can be rendered for display on an HTC Vive VR system for an enhanced viewing experience, and basic controller interaction with the scene geometry is possible.
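The thesis's renderer is GPU-based and built into GlobeEngine; as a hedged, CPU-only sketch of the emission-absorption model it describes, the following marches rays front to back through a scalar volume with a toy transfer function and early ray termination. Trilinear sampling, illumination and the VR path are omitted; all names here are illustrative.

```python
import numpy as np

def transfer_function(s):
    """Toy transfer function: scalar in [0,1] -> (rgb emission, opacity)."""
    rgb = np.array([s, 0.2 * s, 1.0 - s])
    return rgb, 0.05 * s

def cast_ray(volume, origin, direction, step=0.5, max_steps=512):
    """Front-to-back emission-absorption compositing along one ray."""
    color = np.zeros(3)
    alpha = 0.0
    pos = origin.astype(float)
    for _ in range(max_steps):
        idx = np.round(pos).astype(int)            # nearest-neighbour sampling
        if np.any(idx < 0) or np.any(idx >= volume.shape):
            break                                  # ray left the volume
        rgb, a = transfer_function(volume[tuple(idx)])
        color += (1.0 - alpha) * a * rgb           # emission weighted by transparency
        alpha += (1.0 - alpha) * a                 # absorption accumulates
        if alpha > 0.99:                           # early ray termination
            break
        pos += step * direction
    return color, alpha

# Toy volume: a soft spherical blob in a 64^3 grid.
g = np.mgrid[0:64, 0:64, 0:64]
r = np.sqrt(((g - 32.0) ** 2).sum(axis=0))
volume = np.clip(1.0 - r / 32.0, 0.0, 1.0)

c, a = cast_ray(volume, origin=np.array([0.0, 32.0, 32.0]),
                direction=np.array([1.0, 0.0, 0.0]))
print(c, a)
```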
Silvo Sposetti, GPU Based Water Rendering for GIS Visualization Systems, University of Zurich, Faculty of Business, Economics and Informatics, 2018. (Bachelor's Thesis)
The rendering of water surfaces is one of the most important topics in game-centric, geographical or cartographical 3D applications. Until recently, the majority of Geographical Information Systems (GIS) usually preferred not to show complex water surfaces because of the direct performance impact that such a feature would imply on the application. For people observing terrain data, the visualisation of more realistic water for near-sea terrains and lake regions can easily put the data into perspective. Modern graphics hardware has reached a point where this feature can be feasibly rendered without drastically decreasing overall performance.
This Bachelor's thesis is divided into two parts: a written part and an implementation part. The latter takes the form of an example inside the GlobeEngine framework, allowing the direct customization of a geometric object representing the water surface. The written part covers related work, theoretical concepts, implementation details and possible future extensions.
Benjamin Bürgisser, Fabio Zünd, Renato Pajarola, Robert W. Sumner, Campus Explorer: Facilitating Student Communities through Gaming, In: Proceedings International Conference on Game and Entertainment Technologies, IADIS, Madrid, 2018-07-18. (Conference or Workshop Paper published in Proceedings)
University students are often highly focused on their current lectures and imminent exams and thus neglect to interact with students across departments and to engage in campus life. To facilitate a more closely-knit community of university students, we evaluate a set of suitable core game mechanics, social features, and reward systems to motivate students to explore their university and to meet other students. Our prototype mobile application implements a location-based approach and includes game mechanics such as building check-ins, meeting other students, campus expeditions, and campus events. We evaluate the potential of our approach using both qualitative and quantitative data collected during an initial playtesting phase. Our analysis has shown that our location-based mechanics and a focus on social features were well received by students. Players engaged in exploring the campus and saw potential in location-sharing and future collaboration features.
Claudio Mura, Gregory Wyss, Renato Pajarola, Robust Normal Estimation in Unstructured 3D Point Clouds by Selective Normal Space Exploration, Visual Computer, Vol. 34 (6-8), 2018. (Journal Article)
We present a fast and practical approach for estimating robust normal vectors in unorganized point clouds. Our proposed technique is robust to noise and outliers and can preserve sharp features in the input model while being significantly faster than the current state-of-the-art alternatives. The key idea is a novel strategy for the exploration of the normal space: First, an initial candidate normal vector, optimal under a robust least median norm, is selected from a discrete subregion of this space, chosen conservatively to include the correct normal; then, the final robust normal is computed, using a simple, robust procedure that iteratively refines the candidate normal initially selected. This strategy allows us to reduce the computation time significantly with respect to other methods based on sampling consensus and yet produces very reliable normals even in the presence of noise and outliers as well as along sharp features. The validity of our approach is confirmed by extensive testing on both synthetic and real-world data and by a comparison against the most relevant state-of-the-art approaches.
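To make the least-median strategy above concrete, here is a hedged NumPy sketch: candidate plane normals are scored by the median of squared point-to-plane residuals, the best candidate is kept, and a final normal is re-estimated from the resulting inliers. For brevity the candidates come from random point triples rather than from the paper's conservatively chosen subregion of the normal space, so this shows the general LMedS pattern, not the published algorithm.

```python
import numpy as np

def lmeds_normal(pts, n_candidates=100, seed=0):
    """Robust normal for one neighbourhood of 3D points (rows of `pts`)."""
    rng = np.random.default_rng(seed)
    best_n, best_d, best_score = None, 0.0, np.inf
    for _ in range(n_candidates):
        i, j, k = rng.choice(len(pts), size=3, replace=False)
        n = np.cross(pts[j] - pts[i], pts[k] - pts[i])
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                        # degenerate (collinear) sample
        n /= norm
        res = pts @ n - pts[i] @ n          # signed point-to-plane distances
        score = np.median(res ** 2)         # LMedS criterion
        if score < best_score:
            best_n, best_d, best_score = n, pts[i] @ n, score
    # Refinement: re-fit a plane to the inliers of the best candidate.
    res = np.abs(pts @ best_n - best_d)
    inliers = pts[res <= 2.5 * np.sqrt(best_score) + 1e-9]
    q = inliers - inliers.mean(axis=0)
    _, _, vt = np.linalg.svd(q, full_matrices=False)
    refined = vt[-1]                        # smallest singular vector = normal
    return refined if refined @ best_n > 0 else -refined

# Noisy samples of the plane z = 0 with a few gross outliers.
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200),
                       rng.normal(0, 0.01, 200)])
pts[:10, 2] += 5.0                          # outliers
print(lmeds_normal(pts))                    # approximately (0, 0, +-1)
```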
Benjamin Bürgisser, Campus Explorer: Facilitating Student Communities through Gaming, University of Zurich, Faculty of Business, Economics and Informatics, 2018. (Master's Thesis)
University students are often highly focused on their current lectures and imminent exams and thus miss out on interacting with students across departments and engaging in campus life.
To facilitate a more closely knit community of university students, we present Campus Explorer, a prototype mobile game aimed at gamifying campus life.
Campus Explorer allows for evaluating a set of suitable core game mechanics, social features and reward systems to motivate students to explore their university and meet other students. The prototype implements a location-based approach and includes game mechanics such as building check-ins, setting up meetings, campus expeditions, events, and more.
Each of these features is implemented as a point of interest displayed on a map. A point of interest can be accessed by players when they are physically close to a predefined location and can be checked into once a task is solved. The required task can be varied, which allows for testing different mechanics based on a common location-based core mechanic.
In this thesis, we present our basic point of interest framework and the implementation of different mechanics built on top of it. We evaluate the potential of our approach using both qualitative and quantitative data collected during an initial playtesting phase.
Georgios-Tsampikos Michailidis, Refinement and optimization methods for reconstructing and modeling indoor environments, University of Zurich, Faculty of Business, Economics and Informatics, 2018. (Dissertation)
Driven by developments in digital 3D imaging and scanning technology, the efficient capturing and modeling of 3D objects and environments has become a critical task in many application domains such as architecture, engineering, robotics, navigation, construction and facility management. In particular, the availability of highly expressive 3D models of indoor environments could create significant added value in these domains, since these models, generally known as Building Information Models (BIMs), provide semantically rich representations of the scene, allowing its analysis and manipulation. They could also provide accurate identification of the main architectural wall structures, such as windows and doors, based on their as-is condition, which may differ significantly from the as-designed state.
Despite many recent research efforts, the fully automatic generation of semantically enriched 3D models of building interiors remains a very challenging and time-consuming process. The main difficulties lie in the accurate and fast acquisition of the surrounding environment and the creation of a faithful and semantically rich 3D model. Although the open problems in the field are still manifold, this thesis focuses specifically on the following three important challenges: first, how to efficiently capture the 3D information of the scene, using a method that combines high accuracy with high performance and produces adequate results for the majority of real-world indoor applications; second, how to faithfully capture the finer details of structural building elements and generate a semantically rich building model; third, how to enable the successful and automatic reconstruction of 3D BIM models without relying on restrictive assumptions about the scene or the dataset.
For each of these issues, we propose a solution that advances the state-of-the-art and improves the performance and quality of the extracted results. Our first work focuses on the efficient and fast acquisition of the scene and introduces a new hardware-efficient stereo vision method. In our method, a local correlation algorithm computes the matching cost values, and an optimization technique based on Discrete Dynamical Systems refines the extracted depth information. In the same contribution, we also propose an efficient parallel-pipelined hardware architecture, which implements the proposed stereo reconstruction method on a custom FPGA device, allowing high processing speeds for high-resolution stereo images.
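The dissertation's stereo method pairs a local correlation cost with a Discrete-Dynamical-Systems refinement on an FPGA; as a loose, software-only sketch of the local matching-cost stage, the following computes a winner-takes-all disparity map from window-aggregated absolute differences. The cost function, window size and the absence of any refinement step are simplifications for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def block_matching_disparity(left, right, max_disp=32, radius=3):
    """Naive local stereo: window-aggregated SAD cost, winner takes all."""
    h, w = left.shape
    best_cost = np.full((h, w), np.inf)
    disparity = np.zeros((h, w), dtype=np.int32)
    for d in range(max_disp):
        # Per-pixel absolute difference for this disparity hypothesis.
        diff = np.abs(left[:, d:] - right[:, :w - d])
        # Aggregate over a (2*radius+1)^2 window (box filter ~ SAD).
        cost = uniform_filter(diff, size=2 * radius + 1)
        view = best_cost[:, d:]
        better = cost < view
        view[better] = cost[better]
        disparity[:, d:][better] = d
    return disparity

# Synthetic pair: the left image is the right image shifted by 7 pixels.
rng = np.random.default_rng(0)
right_img = rng.random((64, 96))
left_img = np.roll(right_img, 7, axis=1)
disp = block_matching_disparity(left_img, right_img, max_disp=16)
print(np.median(disp[:, 16:]))   # ~7, away from the wrap-around border
```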
The second research contribution of this thesis focuses on the 3D modeling stage of the reconstruction pipeline and aims to recover and semantically label the architectural wall elements of indoor environments. This is achieved by partitioning the wall surfaces of the reconstructed building models into small planar patches and classifying them using a Bayesian graph-cut optimization technique. Owing to its design, our approach can be embedded as a post-processing unit into the majority of modern modeling pipelines, enabling them to automatically extract semantically rich 3D models.
The last research contribution of this thesis lifts a restrictive and widely used assumption in the field: that the scanner viewpoint positions must be known a priori for the reconstruction and 3D modeling of the scene to succeed. Specifically, this work constitutes the first method for re-engineering and reconstructing the original scanner positions from raw point clouds. It relies on the scanning characteristics of the acquisition process and applies a statistical analysis to the raw point data in order to reveal the features that allow the retrieval of the true scanner positions.
Extensive qualitative and quantitative evaluations on real-world and synthetic datasets reveal the advantageous behavior of the proposed methods, as well as their efficiency and performance under challenging indoor conditions.
David Steiner, Scalable visualization of large datasets, University of Zurich, Faculty of Business, Economics and Informatics, 2018. (Dissertation)
An exponential growth of datasets from different fields of science creates the need for scalable visualization systems to display and explore the data interactively. Such datasets include laser scans of architecture or cultural heritage, which can consist of many hundreds of millions or even billions of points. Other examples include high-resolution X-ray microtomographies of objects that need to be closely examined in a non-destructive manner, in fields like biology, medicine, or anthropology. The resulting volumetric models can capture details in the micrometer range and also often consist of many billions of points.
Visualizing such datasets at interactive frame rates poses a major challenge to the underlying rendering system, as it often means processing gigabytes of data within a time frame of only a few milliseconds. Consequently, there are high demands regarding the system's throughput and latency. These are often met by scaling the system, i.e., allowing it to accommodate more workload.
Strategies for scaling can include making better use of the available resources, e.g., reducing bandwidth requirements and computational costs. A specific example is our volume visualization system that we extended to allow interactive filtering of volume models (e.g., for feature detection or denoising) in the tensor-compressed domain. These filter operations can be performed significantly faster than with comparable approaches, due to reduced computational and bandwidth costs.
More significantly, a visualization system can be scaled by utilizing additional resources within a machine, or additional machines. The latter in particular creates further challenges, such as additional communication and synchronization overheads as well as load imbalances. For the development of scalable visualization systems, overcoming such load imbalances is critical, especially when facing the unpredictable load often created by user interaction. Similarly, the amount of available resources might fluctuate if a machine is not dedicated to a single task, e.g., in the context of virtualization.
We consequently developed a scalable and flexible rendering task partitioning method and associated node affinity model which allow fine-grained implicit dynamic load balancing via a task pulling mechanism. Our method often outperforms traditional load balancing approaches in terms of performance and scalability, especially in the context of unpredictable load and varying compute resources.
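As a toy illustration of the implicit load balancing idea described above (not the Equalizer-based implementation), the following sketch has worker threads pull fixed-size render tiles from a shared queue; faster or less loaded workers naturally end up rendering more tiles, with no central scheduler assigning fixed partitions. Tile counts, timings and names are invented for the example.

```python
import queue
import random
import threading
import time

def render_tile(tile_id):
    """Stand-in for rendering work with unpredictable per-tile cost."""
    time.sleep(random.uniform(0.001, 0.01))

def worker(tasks, counts, name):
    """Pull tiles until the queue is empty; no fixed partition is assigned."""
    while True:
        try:
            tile = tasks.get_nowait()
        except queue.Empty:
            return
        render_tile(tile)
        counts[name] += 1   # each worker only touches its own counter

tasks = queue.Queue()
for tile in range(256):
    tasks.put(tile)

counts = {f"node{i}": 0 for i in range(4)}
threads = [threading.Thread(target=worker, args=(tasks, counts, name))
           for name in counts]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counts)   # tiles per worker adapt to each worker's actual speed
```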
Furthermore, we conducted a study in which we examined in detail the scalability of various load balancing methods provided by the Equalizer parallel rendering framework, on which our visualization systems are based. Finally, we also extended the set of utilities provided by the framework with diverse features for tasks such as systematically, reproducibly, and automatically evaluating the performance of scalable visualization systems, collecting data, and using optimized I/O.
Matthias Thöny, Raimund Schnürer, René Sieber, Lorenz Hurni, Renato Pajarola, Storytelling in Interactive 3D Geographic Visualization Systems, ISPRS International Journal of Geo-Information, Vol. 7 (3), 2018. (Journal Article)
The objective of interactive geographic maps is to provide geographic information to a large audience in a captivating and intuitive way. Storytelling helps to create exciting experiences and to explain complex or otherwise hidden relationships of geospatial data. Furthermore, interactive 3D applications offer a wide range of attractive elements for advanced visual story creation and offer the possibility to convey the same story in many different ways. In this paper, we discuss and analyze storytelling techniques in 3D geographic visualizations so that authors and developers working with geospatial data can use these techniques to conceptualize their visualization and interaction design. Finally, we outline two examples which apply the given concepts.
Rafael Ballester-Ripoll, Enrique G. Paredes, Renato Pajarola, Tensor Algorithms for Advanced Sensitivity Metrics, SIAM/ASA Journal on Uncertainty Quantification, Vol. 6 (3), 2018. (Journal Article)
Following up on the success of the analysis of variance (ANOVA) decomposition and the Sobol indices (SI) for global sensitivity analysis, various related quantities of interest have been defined in the literature, including the effective and mean dimensions, the dimension distribution, and the Shapley values. Such metrics combine up to exponential numbers of SI in different ways and can be of great aid in uncertainty quantification and model interpretation tasks, but are computationally challenging. We focus on surrogate-based sensitivity analysis for independently distributed variables, namely, via the tensor train (TT) decomposition. This format permits flexible and scalable surrogate modeling and can efficiently extract all SI at once in a compressed TT representation of their own. Based on this, we contribute a range of novel algorithms that compute more advanced sensitivity metrics by selecting and aggregating certain subsets of SI in the tensor compressed domain. Drawing on an interpretation of the TT model in terms of deterministic finite automata, we are able to construct explicit auxiliary TT tensors that encode exactly all necessary index selection masks. Having both the SI and the masks in the TT format allows efficient computation of all aforementioned metrics, as we demonstrate in a number of example models.
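For readers unfamiliar with Sobol indices, the following hedged sketch estimates first-order indices for a toy model with the standard pick-freeze Monte Carlo estimator. This is the brute-force baseline that the paper's tensor train approach avoids, shown only to make the quantity itself concrete; the model and sample size are arbitrary choices.

```python
import numpy as np

def first_order_sobol(f, dim, n=200_000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, dim))
    B = rng.random((n, dim))
    fA, fB = f(A), f(B)
    variance = np.var(np.concatenate([fA, fB]))
    s = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                  # swap in column i only
        s[i] = np.mean(fB * (f(ABi) - fA)) / variance
    return s

# Toy model on independent U(0,1) inputs: f = x0 + 2*x1 + x0*x2.
f = lambda X: X[:, 0] + 2.0 * X[:, 1] + X[:, 0] * X[:, 2]
print(first_order_sobol(f, dim=3))           # approx. [0.34, 0.61, 0.04]
```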
Luka Lapanashvili, Development of a Physically-Based 3D Rendering Framework in OpenGL, University of Zurich, Faculty of Business, Economics and Informatics, 2018. (Bachelor's Thesis)
In computer graphics, one of the main hurdles yet to be overcome is real-time photorealistic rendering. Recent updates to popular engines such as the Unreal Engine, and to proprietary engines such as the Frostbite engine, have demonstrated that near-photorealism is achievable. This sudden leap in visual quality stems in part from rethinking the lighting model and approaching it in a more physically plausible way. Given the growing number of academic papers available on this topic, the question arises whether it is possible for an independent developer to profit from these technological advances and develop a standalone system capable of achieving near real-life rendering quality. The project presented in this thesis shows that, even in a rather short amount of time, it is possible to develop a system that produces results visually comparable to some of the engines currently available on the market.
Andreas Milz, Text Label Rendering for Star System Data Visualization, University of Zurich, Faculty of Business, Economics and Informatics, 2018. (Bachelor's Thesis)
This bachelor thesis presents an approach to render text labels in the context of the GlobeEngine developed by the Visualization and Multimedia Lab of the University of Zurich. The presented approach uses an existing text rendering library called Slug to accomplish rendering tasks. The text rendering package implemented over the course of the thesis provides the ability to render text labels as well as to perform text label culling for a given scene. By using a sort-and-sweep technique for collision detection, the approach is able to determine which labels have to be displayed, resulting in generating an image containing only non-overlapping labels. The implemented text rendering package is then used to provide text label rendering functionality for the existing exoViewer solar system visualization application. It enables the application to generate a three-dimensional visualization scene containing multiple thousands of text labels while retaining interactive frame rates.
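A hedged sketch of the sort-and-sweep culling the thesis describes: label rectangles are sorted by their left edge, a sweep keeps an active set of rectangles still overlapping the sweep line, and overlap pairs are resolved by priority. The data layout and priority rule are assumptions; the thesis's Slug-based rendering is not reproduced here.

```python
def sweep_overlaps(boxes):
    """boxes: list of (xmin, ymin, xmax, ymax). Returns overlapping index pairs."""
    order = sorted(range(len(boxes)), key=lambda i: boxes[i][0])
    active, pairs = [], []
    for i in order:
        xmin, ymin, _, ymax = boxes[i]
        # Drop boxes the sweep line has fully passed.
        active = [j for j in active if boxes[j][2] > xmin]
        # Remaining boxes overlap on x; check the y extent.
        pairs += [(j, i) for j in active
                  if ymin < boxes[j][3] and boxes[j][1] < ymax]
        active.append(i)
    return pairs

def cull_labels(boxes, priority):
    """Greedy visibility: on overlap, hide the lower-priority label."""
    visible = [True] * len(boxes)
    for a, b in sweep_overlaps(boxes):
        if visible[a] and visible[b]:
            visible[a if priority[a] < priority[b] else b] = False
    return visible

boxes = [(0, 0, 4, 2), (3, 1, 7, 3), (8, 0, 12, 2)]
print(cull_labels(boxes, priority=[2, 1, 3]))   # [True, False, True]
```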
Matthias Thöny, Markus Billeter, Renato Pajarola, Large-Scale Pixel-Precise Deferred Vector Maps, Computer Graphics Forum, Vol. 37 (1), 2018. (Journal Article)
Rendering vector maps is a key challenge for high-quality geographic visualization systems. In this paper, we present a novel approach to visualize vector maps over detailed terrain models in a pixel-precise way. Our method proposes a deferred line rendering technique to display vector maps directly in a screen-space shading stage over the 3D terrain visualization. Due to the absence of traditional geometric polygonal rendering, our algorithm is able to outperform conventional vector map rendering algorithms for geographic information systems, and supports advanced line anti-aliasing as well as slope distortion correction. Furthermore, our deferred line rendering enables interactively customizable advanced vector styling methods as well as a tool for interactive pixel-based editing operations.
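A hedged NumPy stand-in for the kind of per-pixel test a deferred screen-space line pass performs: each pixel computes its distance to a line segment and converts it into an anti-aliased coverage value. The real system runs this in a GPU shading stage over the projected terrain with slope correction, none of which is modeled here; names and parameters are illustrative.

```python
import numpy as np

def segment_coverage(px, py, a, b, half_width=1.5, aa=1.0):
    """Anti-aliased coverage of segment a-b over pixel coordinate grids px, py."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    ab = b - a
    apx, apy = px - a[0], py - a[1]
    # Parameter of the closest point on the segment, clamped to [0, 1].
    t = np.clip((apx * ab[0] + apy * ab[1]) / (ab @ ab), 0.0, 1.0)
    dist = np.hypot(apx - t * ab[0], apy - t * ab[1])
    # Linear falloff of `aa` pixels around the line edge.
    return np.clip((half_width + 0.5 * aa - dist) / aa, 0.0, 1.0)

h, w = 32, 64
py, px = np.mgrid[0:h, 0:w].astype(float)
alpha = segment_coverage(px, py, a=(5, 5), b=(58, 25))
print(alpha.shape, alpha.max())   # (32, 64) 1.0
```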
Henry Raymond, Emergent Narrative through Reasoning Agents in Location-Based Multiplayer Games, University of Zurich, Faculty of Business, Economics and Informatics, 2017. (Master's Thesis)
Two essential elements of a modern video game are its story and how the story is told. But the number of possible sequences of events that the game's designers must create increases with the player's ability to affect the story. Designers therefore tend to restrict the player's freedom in unrealistic ways, thus making the player's experience less enjoyable.
In this thesis, we present an approach that addresses this problem by treating each non-player character in a game as a reasoning agent. Using a planning approach adopted from the field of artificial intelligence, we let the agents determine their own actions and on this basis let emergent narrative create the story.
The implementation of this thesis comprises two parts: a library called NPCengine that controls agents in the game world, and a mobile game intended as a technology demonstrator.
Francesca Monzeglio, Avalanche Transceiver Training Simulation, University of Zurich, Faculty of Business, Economics and Informatics, 2017. (Bachelor's Thesis)
Avalanches represent the main natural hazard in alpine regions. Technological progress, especially the widespread use of avalanche transceivers, has improved the first-response time of the companions of buried victims, thereby increasing the chances of survival. Despite their simple design and usability, rescuers are required to be trained in the handling of such transceivers.
This Bachelor's thesis consists of an implementation and a written part. The implementation, called Avalanche Transceiver Training Simulation, provides the experience of a transceiver search that can be performed without any spatial constraints. It consists of a first-person simulation in the form of a serious game, implemented using the GlobeEngine framework. At the end of the written thesis, some ideas for future extensions are proposed, considering the beneficial effects that an immersive experience offered by 3D virtual reality environments may have on learning outcomes.
|
A B M Tariqul Islam, Christian Scheel, Renato Pajarola, Oliver G. Staadt, Robust enhancement of depth images from depth sensors, Computers & Graphics, Vol. 68, 2017. (Journal Article)
In recent years, depth cameras (such as the Microsoft Kinect and ToF cameras) have gained much popularity in the computer graphics, visual computing and virtual reality communities due to their low price and easy availability. While depth cameras (e.g. the Microsoft Kinect) provide RGB images along with real-time depth information at a high frame rate, the depth images often suffer from several artifacts due to inaccurate depth measurement. These artifacts greatly degrade the visual quality of the depth frames. Most of these artifacts originate from two main sources: missing/invalid depth values and fluctuating valid depth values in the generated content. In this paper, we propose a new depth image enhancement method, for the content of depth cameras, which addresses these two main sources of artifacts. We introduce a robust 1D Least Median of Squares (1D LMedS) approach to estimate the depth values of those pixels which have missing/invalid depth values. We use a sequence of frames to look for invalid depth values (considered as outliers) and replace those values with stable and more plausible depth values. In doing so, our approach mitigates the instability of valid depth values in captured scenes that is perceived as flickering. We use self-recorded and reference datasets along with reference methods to evaluate the performance of our proposed 1D LMedS. Experimental results show improvements for both static and moving parts of a scene.
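To illustrate the 1D LMedS idea described above (a sketch, not the authors' exact pipeline), the following estimates a stable per-pixel depth from a temporal stack of frames. For one-dimensional data the LMedS location estimate can be computed exactly as the midpoint of the shortest half of the sorted samples; invalid (zero) measurements are skipped.

```python
import numpy as np

def lmeds_1d(samples):
    """Exact 1D least-median-of-squares location: midpoint of the shortest half."""
    s = np.sort(samples)
    h = s.size // 2 + 1                     # half-sample size
    widths = s[h - 1:] - s[: s.size - h + 1]
    i = int(np.argmin(widths))
    return 0.5 * (s[i] + s[i + h - 1])

def stabilize_depth(stack):
    """stack: (T, H, W) depth frames; zeros mark missing/invalid measurements."""
    T, H, W = stack.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            valid = stack[:, y, x][stack[:, y, x] > 0]
            if valid.size:
                out[y, x] = lmeds_1d(valid)
    return out

# Toy stack: constant depth 1.2 m with flicker, dropouts and gross outliers.
rng = np.random.default_rng(0)
stack = 1.2 + rng.normal(0, 0.01, (30, 8, 8))
stack[rng.random(stack.shape) < 0.2] = 0.0      # missing values
stack[rng.random(stack.shape) < 0.05] = 4.0     # outliers
print(stabilize_depth(stack).round(2))          # ~1.2 everywhere
```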
Rafael Ballester-Ripoll, Tensor methods for high-dimensional analysis and visualization, University of Zurich, Faculty of Business, Economics and Informatics, 2017. (Dissertation)
Most visual computing domains are witnessing a steady growth in sheer data set size, complexity, and dimensionality. Flexible and scalable mathematical models that can efficiently compress, process, store, manipulate, retrieve and visualize such data sets are therefore of paramount importance, especially for higher dimensions. In this context, tensor decompositions constitute a powerful mathematical framework for compactly representing and operating on both dense and sparse data. Initially proposed as an extension of the concept of matrix decomposition for three and more dimensions, they have found various applications in data-intensive machine learning and high-dimensional signal processing. This thesis aims to help bridge these aspects and tackle modern visual computing challenges under the paradigm of a common representation format, namely tensors. Many kinds of data admit a natural representation as higher-order tensors and/or can be parametrized, learned, or interpolated in the form of compact tensor models. Numerous tools that are native and unique to said decompositions exist for analysis and visualization, and such tools can be exploited as soon as the known ground-truth is abstracted into this kind of reduced representation. To this end we develop a volume compression algorithm tailored to high reduction rates in visualization applications; we explore compressed-domain processing possibilities including multiresolution convolution, derivation, integration and summed area tables; we produce visualization diagrams directly from compressed tensors via interactive reconstruction; and we propose sensitivity analysis algorithms for model interpretation and knowledge discovery. Emphasis is placed on compactness and interactivity and is addressed via careful tensor format selection and model building, as well as a range of auxiliary technical tools including out-of-core memory management, adaptive quantization, parallelized multilinear algebra operations, and others. We conclude that the models chosen result in a viable and fruitful toolbox for data of diverse origin, size, dimensionality, resolution, and sparsity.
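Since a common thread of the dissertation is compact tensor formats such as the tensor train (TT), a compact hedged sketch of the classic TT-SVD construction may help: sequential truncated SVDs fold an N-dimensional array into a chain of small 3D cores. The fixed-rank truncation and toy volume are simplifications; the thesis's out-of-core, quantized and GPU-oriented machinery goes far beyond this.

```python
import numpy as np

def tt_svd(A, max_rank):
    """Compress an N-D array into tensor train cores via sequential SVDs."""
    shape = A.shape
    cores, r = [], 1
    M = A.reshape(r * shape[0], -1)
    for k in range(len(shape) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rk = min(max_rank, s.size)                       # rank truncation
        cores.append(U[:, :rk].reshape(r, shape[k], rk))
        M = (s[:rk, None] * Vt[:rk]).reshape(rk * shape[k + 1], -1)
        r = rk
    cores.append(M.reshape(r, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the core chain back into a full array."""
    t = cores[0]
    for c in cores[1:]:
        t = np.tensordot(t, c, axes=(t.ndim - 1, 0))
    return t.squeeze(axis=(0, t.ndim - 1))

# Smooth separable toy volume compresses very well at low TT rank.
g = np.linspace(0.0, 1.0, 32)
vol = np.sin(6 * g)[:, None, None] * np.cos(4 * g)[None, :, None] * g[None, None, :]
cores = tt_svd(vol, max_rank=4)
approx = tt_reconstruct(cores)
print(np.linalg.norm(vol - approx) / np.linalg.norm(vol))   # near machine precision
```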