Joel Barmettler, Physical Sun and Sky Models in appleseed, University of Zurich, Faculty of Business, Economics and Informatics, 2020. (Bachelor's Thesis)
The sky is the most prominent - and thus most important - source of light in outdoor scenes. Unfortunately, offline ray-traced rendering engines have often relied on analytic sky models that deliver poor visual results and inaccurate radiance values for their visual representation of the sky. A physically-based sky model would overcome those issues and provide a more accurate portrayal. In this thesis, the current state of research on Computer Graphics (CG) sky rendering is assessed and an improved version of the Nishita93 sky model is proposed, which additionally considers the atmosphere's ozone layer to better account for the blueness of the sky. The suggested implementation is then compared to other state-of-the-art sky models, including the physically-based sky model implemented in the Cycles rendering engine by the Blender Foundation. The results show that the improved Nishita model delivers superior visuals and more accurate radiance values than all of the other analytic sky models, while also being faster, more flexible, and more accurate than Blender's implementation. The developed model is contributed in C++ to the open-source offline rendering engine appleseed. |
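A Nishita-style single-scattering sky can be sketched numerically. All constants, sample counts, and function names below are illustrative assumptions, not taken from the thesis (which also models ozone absorption, omitted here):

```python
import numpy as np

# Illustrative constants (assumed, not taken from the thesis).
EARTH_R = 6360e3   # planet radius [m]
ATMO_R = 6420e3    # top-of-atmosphere radius [m]
H_R = 8000.0       # Rayleigh scale height [m]
BETA_R = np.array([5.8e-6, 13.5e-6, 33.1e-6])  # sea-level Rayleigh coefficients (RGB) [1/m]

def exit_dist(p, d, radius):
    """Distance along the ray p + t*d to the sphere of the given radius."""
    b = np.dot(p, d)
    c = np.dot(p, p) - radius * radius
    return -b + np.sqrt(b * b - c)

def optical_depth(p, d, n=16):
    """Integral of relative air density from p along d to the atmosphere top."""
    t_max = exit_dist(p, d, ATMO_R)
    ts = (np.arange(n) + 0.5) * t_max / n
    pts = p[None, :] + ts[:, None] * d[None, :]
    h = np.linalg.norm(pts, axis=1) - EARTH_R
    return np.sum(np.exp(-h / H_R)) * (t_max / n)

def sky_radiance(view_dir, sun_dir, n=32):
    """Single Rayleigh scattering along a view ray, for a ground observer."""
    p0 = np.array([0.0, EARTH_R + 1.0, 0.0])
    dt = exit_dist(p0, view_dir, ATMO_R) / n
    mu = np.dot(view_dir, sun_dir)
    phase = 3.0 / (16.0 * np.pi) * (1.0 + mu * mu)  # Rayleigh phase function
    L = np.zeros(3)
    tau_view = 0.0
    for t in (np.arange(n) + 0.5) * dt:
        p = p0 + t * view_dir
        rho = np.exp(-(np.linalg.norm(p) - EARTH_R) / H_R)
        tau_view += rho * dt
        # Attenuation along the sun->sample and sample->viewer paths.
        transmittance = np.exp(-BETA_R * (optical_depth(p, sun_dir) + tau_view))
        L += rho * transmittance * dt
    return BETA_R * phase * L  # sun irradiance assumed to be 1
```

Marching secondary rays toward the sun at every view-ray sample is what makes naive Nishita evaluation expensive; precomputing optical depths is the usual optimization.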
|
Lizeth Fuentes Perez, Luciano Romero Calla, Anselmo A Montenegro, Claudio Mura, Renato Pajarola, A Robust Feature-aware Sparse Mesh Representation, In: Proceedings of Pacific Graphics Short Papers, Pacific Graphics, Wellington New Zealand, 2020-10-01. (Conference or Workshop Paper published in Proceedings)
The sparse representation of signals defined on Euclidean domains has been successfully applied in signal processing. Bringing the power of sparse representations to non-regular domains is still a challenge, but promising approaches have started emerging recently. In this paper, we investigate the problem of sparsely representing discrete surfaces and propose a new representation that is capable of providing tools for solving different geometry processing problems. The sparse discrete surface representation is obtained by combining innovative approaches into an integrated method. First, to deal with irregular mesh domains, we devised a new way to subdivide discrete meshes into a set of patches using feature-aware seed sampling. Second, we achieve good surface approximation with over-fitting control by combining the power of a continuous global dictionary representation with a modified Orthogonal Matching Pursuit. The resulting discrete surface approximations preserve the shape features while being robust to over-fitting. Our results show that the method is quite promising for applications like surface re-sampling and mesh compression. |
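The greedy sparse-coding step can be illustrated with a minimal Orthogonal Matching Pursuit; this is the textbook algorithm, not the paper's modified variant:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: find a k-sparse coefficient vector x
    such that D @ x approximates y, for a dictionary D of unit-norm atoms."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit the signal on all selected atoms (least squares).
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x
```

The re-fitting step is what distinguishes *orthogonal* matching pursuit from plain matching pursuit: the residual stays orthogonal to every atom selected so far.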
|
Rafael Ballester-Ripoll, Peter Lindstrom, Renato Pajarola, TTHRESH: tensor compression for multidimensional visual data, IEEE Transactions on Visualization and Computer Graphics, Vol. 26 (9), 2020. (Journal Article)
Memory and network bandwidth are decisive bottlenecks when handling high-resolution multidimensional data sets in visualization applications, and they increasingly demand suitable data compression strategies. We introduce a novel lossy compression algorithm for multidimensional data over regular grids. It leverages the higher-order singular value decomposition (HOSVD), a generalization of the SVD to three dimensions and higher, together with bit-plane, run-length and arithmetic coding to compress the HOSVD transform coefficients. Our scheme degrades the data particularly smoothly and achieves lower mean squared error than other state-of-the-art algorithms at low-to-medium bit rates, as required in data archiving and management for visualization purposes. Further advantages of the proposed algorithm include very fine bit rate selection granularity and the ability to manipulate data at very small cost in the compression domain, for example to reconstruct filtered and/or subsampled versions of all (or selected parts) of the data set. |
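The transform at the heart of the scheme is the truncated HOSVD; a minimal dense sketch (without the paper's bit-plane, run-length, and arithmetic coding stages) could look like this:

```python
import numpy as np

def mode_product(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    T = np.moveaxis(T, mode, 0)
    out = (M @ T.reshape(T.shape[0], -1)).reshape((M.shape[0],) + T.shape[1:])
    return np.moveaxis(out, 0, mode)

def hosvd(T, ranks):
    """Truncated HOSVD: one orthonormal factor per mode plus a small core."""
    factors = []
    for mode, r in enumerate(ranks):
        # Top-r left singular vectors of the mode-m unfolding of T.
        unf = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unf, full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = mode_product(core, U.T, mode)  # project onto each factor
    return core, factors

def reconstruct(core, factors):
    """Expand the core back through all factor matrices."""
    T = core
    for mode, U in enumerate(factors):
        T = mode_product(T, U, mode)
    return T
```

When the truncation ranks match the tensor's multilinear ranks the reconstruction is exact; compression comes from choosing smaller ranks and then coding the core coefficients.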
|
Giovanni Pintore, Claudio Mura, Fabio Ganovelli, Lizeth Fuentes Perez, Renato Pajarola, Enrico Gobbetti, Automatic 3D Reconstruction of Structured Indoor Environments, In: ACM SIGGRAPH Courses, ACM Digital Library, Los Angeles, 2020-08-17. (Conference or Workshop Paper)
Creating high-level structured 3D models of real-world indoor scenes from captured data is a fundamental task which has important applications in many fields. Given the complexity and variability of interior environments and the need to cope with noisy and partial captured data, many open research problems remain, despite the substantial progress made in the past decade. In this tutorial, we provide an up-to-date integrative view of the field, bridging complementary views coming from computer graphics and computer vision. After providing a characterization of input sources, we define the structure of output models and the priors exploited to bridge the gap between imperfect sources and desired output. We then identify and discuss the main components of a structured reconstruction pipeline, and review how they are combined in scalable solutions working at the building level. We finally point out relevant research issues and analyze research trends. |
|
Linda Samsinger, AI-powered Classification and Query of Color Patterns: With Applications to Movie Pictures, University of Zurich, Faculty of Business, Economics and Informatics, 2020. (Master's Thesis)
An exhaustive evaluation of 29 multi-class and multi-label classification algorithms for mapping self-specified color name categories to all color space values in the CIE-L*a*b* color solid enables an effective color-aware search system. Based on these classified colors, higher-chromatic patterns from color theory, such as color contrasts, can be detected in a repository of movie pictures through an exploratory attempt to concretize their scientific definitions from the realm of art. Color histograms are drawn indirectly from color palettes instead of images for pairwise histogram similarity computation. Hence, a retrieval system involving three components is built: (a) a query of colors in images or their color palettes, (b) their top-n similarity and (c) their automated color contrast annotation. The proposed method is applied to the ERC FilmColors project's sample movie Jigokumon, which consists of 569 subsequently shot video frames. A best macro F1-score of 92.7% was achieved using an Extra Trees classifier on Gaussian multi-label color classification, which outperforms other task-adapted classifiers in this line of research. The resulting system is adaptable to digital movie databases (DMDb) with implications for 21st-century cinematography. |
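The palette-based histogram comparison can be sketched as follows; the bin count, palette format, and intersection measure are illustrative assumptions, not the thesis's exact choices:

```python
import numpy as np

def palette_histogram(colors, weights, bins=8):
    """Histogram over a coarse RGB grid, built from a color palette
    (RGB triples in [0,1]) weighted by each color's share of the image."""
    colors = np.asarray(colors, float)
    idx = np.minimum((colors * bins).astype(int), bins - 1)
    hist = np.zeros((bins, bins, bins))
    for (r, g, b), w in zip(idx, weights):
        hist[r, g, b] += w
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical color distributions."""
    return float(np.minimum(h1, h2).sum())
```

Building histograms from palettes rather than raw pixels keeps the comparison focused on the dominant colors a human would actually perceive.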
|
Yuliang Sun, Yongwei Miao, Jiazhou Chen, Renato Pajarola, PGCNet: Patch Graph Convolutional Network for point cloud segmentation of indoor scenes, Visual Computer, Vol. 36, 2020. (Journal Article)
Semantic segmentation of 3D point clouds is a crucial task in scene understanding and is also fundamental to indoor scene applications such as indoor navigation, mobile robotics, and augmented reality. Recently, deep learning frameworks have been successfully applied to point clouds but are limited by the size of the data. While most existing works focus on individual sampling points, we use surface patches as a more efficient representation and propose a novel indoor scene segmentation framework called patch graph convolution network (PGCNet). This framework treats patches as input graph nodes and subsequently aggregates neighboring node features by a dynamic graph U-Net (DGU) module, which consists of dynamic edge convolution operations inside a U-shaped encoder–decoder architecture. The DGU module dynamically updates graph structures at each level to encode hierarchical edge features. Incorporating PGCNet, we can segment the input scene into two types, i.e., room layout and indoor objects, which is afterward utilized to carry out the final rich semantic labeling of various indoor scenes. With considerably faster training, the proposed framework achieves performance equivalent to the state of the art for segmenting standard indoor scene datasets. |
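The dynamic edge convolution the DGU module builds on can be illustrated with a DGCNN-style aggregation in plain NumPy; the learned MLP is reduced here to a single assumed weight matrix `W`:

```python
import numpy as np

def edge_conv(X, neighbors, W):
    """DGCNN-style edge convolution: node i aggregates, over its
    neighbors j, the max of relu(W @ concat(x_i, x_j - x_i))."""
    out = []
    for i, nbrs in enumerate(neighbors):
        edge_feats = np.array([np.concatenate([X[i], X[j] - X[i]]) for j in nbrs])
        out.append(np.maximum(edge_feats @ W.T, 0.0).max(axis=0))
    return np.array(out)
```

The concatenation of the node feature with the neighbor *difference* lets the layer see both absolute position in feature space and local geometry; rebuilding the neighbor lists from the new features at each level is what makes the graph "dynamic".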
|
Gaudenz Halter, Perceptual Texture Features in Moving Image: An Exploratory Visualization Approach, University of Zurich, Faculty of Business, Economics and Informatics, 2020. (Master's Thesis)
Computational methods have become a significant source of data in the humanities. However, the increasing size and dimensionality of these datasets pose new challenges to scholars working with them, including how to gain an overview of the dataset, how to form new hypotheses and how to test them. Material, pattern and texture usage in moving images is an attractive example of such multi-dimensional datasets in film studies, as an almost infinite number of combinations thereof is possible. This thesis proposes interactive data visualization and exploration of film features as a solution to assist film researchers in investigating the relationships between material, pattern, texture and a wide range of qualitative analysis labels in a large film database. By providing multiple visual components in combination with exhaustive computational methods and an interactive user interface, it allows the user to understand the global and local structure of the dataset as well as the relationships between different feature groups. Finally, it embeds the resulting visualization tool into the well-established research framework of the FilmColors project. |
|
Alireza Amiraghdam, Alexandra Diehl, Renato Pajarola, LOCALIS: Locally-adaptive Line Simplification for GPU-based Geographic Vector Data Visualization, Computer Graphics Forum, Vol. 39 (3), 2020. (Journal Article)
Visualization of large vector line data is a core task in geographic and cartographic systems. Vector maps are often displayed at different cartographic generalization levels, traditionally by using several discrete levels-of-detail (LODs). This limits the generalization levels to a fixed and predefined set of LODs, and generally does not support smooth LOD transitions. However, fast GPUs and novel line rendering techniques can be exploited to integrate dynamic vector map LOD management into GPU-based algorithms for locally-adaptive line simplification and real-time rendering. We propose a new technique that interactively visualizes large line vector datasets at variable LODs. It is based on the Douglas-Peucker line simplification principle, generating an exhaustive set of line segments whose specific subsets represent the lines at any variable LOD. At run time, an appropriate and view-dependent error metric supports screen-space adaptive LODs and the display of the correct subset of line segments accordingly. Our implementation shows that we can simplify and display large line datasets interactively. We can successfully apply line style patterns, dynamic LOD selection lenses, and anti-aliasing techniques to our line rendering. |
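The Douglas-Peucker principle the technique builds on can be sketched in a few lines; this is the classic recursive algorithm, not the paper's GPU-adapted variant:

```python
import numpy as np

def douglas_peucker(points, eps):
    """Classic Douglas-Peucker simplification: keep an interior point only
    if it lies farther than eps from the segment spanned by the endpoints."""
    points = np.asarray(points, float)
    if len(points) < 3:
        return points
    a, b = points[0], points[-1]
    ab = b - a
    norm = np.linalg.norm(ab)
    if norm == 0.0:
        d = np.linalg.norm(points - a, axis=1)
    else:
        # Perpendicular distance of every point to the segment a-b.
        d = np.abs(ab[0] * (points[:, 1] - a[1])
                   - ab[1] * (points[:, 0] - a[0])) / norm
    i = int(np.argmax(d))
    if d[i] <= eps:
        return np.array([a, b])  # the whole run is within tolerance
    # Split at the farthest point and recurse on both halves.
    left = douglas_peucker(points[: i + 1], eps)
    right = douglas_peucker(points[i:], eps)
    return np.vstack([left[:-1], right])
```

Running this for every `eps` would be wasteful; the paper's idea is to precompute the segment hierarchy once so that any tolerance can be answered by selecting a subset of segments on the GPU.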
|
Giovanni Pintore, Claudio Mura, Fabio Ganovelli, Lizeth Fuentes Perez, Renato Pajarola, Enrico Gobbetti, State-of-the-art in Automatic 3D Reconstruction of Structured Indoor Environments, Computer Graphics Forum, Vol. 39 (2), 2020. (Journal Article)
Creating high-level structured 3D models of real-world indoor scenes from captured data is a fundamental task which has important applications in many fields. Given the complexity and variability of interior environments and the need to cope with noisy and partial captured data, many open research problems remain, despite the substantial progress made in the past decade. In this survey, we provide an up-to-date integrative view of the field, bridging complementary views coming from computer graphics and computer vision. After providing a characterization of input sources, we define the structure of output models and the priors exploited to bridge the gap between imperfect sources and desired output. We then identify and discuss the main components of a structured reconstruction pipeline, and review how they are combined in scalable solutions working at the building level. We finally point out relevant research issues and analyze research trends. |
|
Louis Bienz, Spatial Music Visualization using Localized Manifold Harmonics, University of Zurich, Faculty of Business, Economics and Informatics, 2020. (Bachelor's Thesis)
Music can be visualized in multiple ways. Today's media players often have integrated visualizers, which make use of signal processing. This approach can be extended to 3D objects, i.e. triangular meshes. First, a normalized, symmetric Laplace operator is constructed for the embedded mesh. Then, the eigenvectors of this operator are computed; they form the so-called Manifold Harmonic Basis. The obtained basis enables a Fourier-like transformation of the mesh, which makes it possible to apply filters to a mesh analogously to applying filters to a signal. In this thesis, filters generated by music are applied to a mesh; this way, music is visualized on a 3D object. The whole process of filtering a mesh needs to be computed in real-time and is therefore implemented on the GPU. More specifically, a compute shader is used for the filtering task, calculating new positions for each vertex. Afterwards, the mesh is rendered with simple vertex and fragment shaders. Furthermore, sound speakers can be placed around the mesh on the xz-plane. The distance between the mesh's vertices and the center of a speaker is measured, and a function maps this distance to a factor that increases or decreases the filter's effect on a vertex. Thus, each speaker induces local visualizations on the mesh. |
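The pipeline up to the filtering step can be sketched with a symmetric normalized graph Laplacian standing in for the thesis's mesh Laplace operator; function names and the toy graph are illustrative:

```python
import numpy as np

def manifold_harmonics(edges, n_vertices):
    """Eigenbasis of a symmetric normalized graph Laplacian built from
    mesh edges; the eigenvectors play the role of Fourier modes."""
    W = np.zeros((n_vertices, n_vertices))
    for i, j in edges:
        W[i, j] = W[j, i] = 1.0
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(n_vertices) - D_inv_sqrt @ W @ D_inv_sqrt
    eigvals, eigvecs = np.linalg.eigh(L)  # ascending "frequencies"
    return eigvals, eigvecs

def spectral_filter(signal, eigvecs, gains):
    """Transform a per-vertex signal into the harmonic basis, scale each
    frequency by `gains` (e.g. driven by music), and transform back."""
    coeffs = eigvecs.T @ signal
    return eigvecs @ (gains * coeffs)
```

In the thesis this filtering runs per frame on the GPU; the eigendecomposition itself only has to be computed once per mesh.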
|
Kate Gadola, Large Scale Vegetation Rendering, University of Zurich, Faculty of Business, Economics and Informatics, 2020. (Bachelor's Thesis)
Many types of applications require the real-time rendering of vegetation. This is a challenging task, as plants are complex objects by nature. Large forest scenes can potentially contain millions of trees, each of which has thousands of leaves. However, rendering every last leaf is not only extremely inefficient, but also unnecessary. One method to reduce the complexity of such a scene is to replace the detailed tree models with simple proxy objects and then recreate the appearance of the trees by simulating the way these objects interact with light. This avenue is explored in the scope of this bachelor thesis. We create an application that models the vegetation of Switzerland with the goal of creating images that are as realistic as possible, while still being efficient enough to be rendered in real-time. |
|
Stefan Eilemann, David Steiner, Renato Pajarola, Equalizer 2.0 - Convergence of a Parallel Rendering Framework, IEEE Transactions on Visualization and Computer Graphics, Vol. 26 (2), 2020. (Journal Article)
Developing complex, real-world graphics applications that leverage multiple GPUs and computers for interactive 3D rendering is a challenging task. It requires expertise in distributed systems and parallel rendering in addition to the application domain itself. We present a mature parallel rendering framework which provides a large set of features, algorithms and system integration for a wide range of real-world research and industry applications. Using the Equalizer parallel rendering framework, we show how a wide set of generic algorithms can be integrated in the framework to help application scalability and development in many different domains, highlighting how concrete applications benefit from the diverse aspects and use cases of Equalizer. We present novel parallel rendering algorithms, powerful abstractions for large visualization setups and virtual reality, as well as new experimental results for parallel rendering and data distribution. |
|
Luciano Romero Calla, Lizeth Fuentes Perez, Anselmo A Montenegro, A minimalistic approach for fast computation of geodesic distances on triangular meshes, Computers & Graphics, Vol. 84, 2019. (Journal Article)
The computation of geodesic distances is an important research topic in Geometry Processing and 3D Shape Analysis as it is a basic component of many methods used in these areas. In this work, we present a minimalistic parallel algorithm based on front propagation to compute approximate geodesic distances on meshes. Our method is practical and simple to implement, and does not require any heavy pre-processing. The convergence of our algorithm depends on the number of discrete level sets around the source points from which distance information propagates. To appropriately implement our method on GPUs taking into account memory coalescence problems, we take advantage of a graph representation based on a breadth-first search traversal that works harmoniously with our parallel front propagation approach. We report experiments that show how our method scales with the size of the problem. We compare the mean error and processing time of our method with those of other methods. Our method produces results in competitive times with almost the same accuracy, especially for large meshes. We also demonstrate its use for solving two classical geometry processing problems: the regular sampling problem and the Voronoi tessellation on meshes. |
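As a serial baseline for the parallel front propagation, approximate geodesic distances can be computed as shortest paths along mesh edges (Dijkstra); this sketch is not the paper's algorithm, only the classical reference point it improves on:

```python
import heapq
import math

def edge_geodesics(vertices, edges, sources):
    """Approximate geodesic distances from a set of source vertices as
    shortest paths along mesh edges, weighted by Euclidean edge length."""
    adj = {i: [] for i in range(len(vertices))}
    for i, j in edges:
        w = math.dist(vertices[i], vertices[j])
        adj[i].append((j, w))
        adj[j].append((i, w))
    dist = {i: math.inf for i in adj}
    heap = [(0.0, s) for s in sources]
    for s in sources:
        dist[s] = 0.0
    heapq.heapify(heap)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist
```

The priority queue makes this inherently sequential; the paper's contribution is replacing it with level-set fronts that can be relaxed in parallel on the GPU.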
|
Georgios-Tsampikos Michailidis, Renato Pajarola, ASPIRE: Automatic scanner position reconstruction, Visual Computer, Vol. 35 (9), 2019. (Journal Article)
The recent advances in 3D laser range scanning have led to significant improvements in capturing and modeling 3D environments, allowing the creation of highly expressive and semantically rich 3D models from indoor environments, generally known as building information models. Despite the capabilities of state-of-the-art methods to generate faithful architectural 3D building models, the majority of them rely explicitly on the prior knowledge of scanner positions in order to reconstruct them successfully. However, in real-world applications, this metadata information typically gets lost after the point cloud registration, which means that none of these methods could work in practice and the creation of their building models would be impossible. Therefore, we present a novel pipeline that allows us to automatically and accurately reconstruct the original scanner positions under very challenging conditions, without requiring any prior knowledge about the environment or the dataset. Being independent from laser range scanner manufacturers, it can be applied to almost every real-world LiDAR application. Our method exploits only information derived from the raw point data and is applicable to all scientific and industrial applications, where the original scan positions typically get lost after registration by the proprietary software provided by the scanner manufacturers. We demonstrate the validity of our approach by evaluating it on several real-world and synthetic indoor environments. |
|
Gaudenz Halter, Rafael Ballester-Ripoll, Barbara Flückiger, Renato Pajarola, VIAN – a visual annotation tool for film analysis, Computer Graphics Forum, Vol. 38 (3), 2019. (Journal Article)
Color plays a fundamental role in film design and production. Unfortunately, existing solutions for film analysis in the digital humanities address perceptual and spatial color information only tangentially. We introduce VIAN, a visual film annotation system centered on the semantic aspects of color film analysis. The tool enables expert-assessed labeling, curation, visualization, and classification of film color features based on their perceived context and aesthetic quality. Crucially, it is also the first of its kind that incorporates foreground-background information as it is made possible by modern deep learning segmentation methods. The proposed visual front-end is seamlessly integrated with a multimedia data management system, so that films can undergo a full color-oriented analysis pipeline by scholars and practitioners. |
|
Georgios-Tsampikos Michailidis, Renato Pajarola, Enhanced Reconstruction of Architectural Wall Surfaces for 3D Building Models, In: Posters Eurographics Conference, DSpace platform, Genoa, 2019-05-06. (Conference or Workshop Paper)
The reconstruction of architectural structures from 3D building models is a challenging task and a lot of research has been done in recent years. However, most of this work focuses mainly on accurately reconstructing the architectural shape of interiors rather than the fine architectural details, such as the wall elements (e.g. windows and doors). We focus specifically on this problem and propose a method that extends current solutions to accurately reconstruct severely occluded wall surfaces. |
|
Jonathan Stahl, Winter Tour and Trek Planning Tool, University of Zurich, Faculty of Business, Economics and Informatics, 2019. (Bachelor's Thesis)
Snow avalanches pose a serious hazard for winter sports enthusiasts. Proper planning and risk assessment are indispensable for a safe ski tour. With today's technology, a lot of information can be gathered about the terrain and the avalanche situation, so it makes sense to use digital aids to facilitate the planning of a safe tour. Several digital planning tools already exist today, but most of them do not integrate all the necessary information into one application. Thus, the user has to combine several resources in order to assess the avalanche risk. Besides, the use of 3D visualizations in this field is still in its infancy. This Bachelor thesis consists of an implementation, an evaluation and a written part. It exploits the possibility of integrating several factors necessary for winter tour and trek planning into a single interactive 3D tool. The implemented tool, called TrekPlanningViewer, was developed with the GlobeEngine framework and supports traditional and new forms of visualizations. After the implementation, the prospective benefits of such a tool were evaluated in a controlled experiment. The written part documents the process and findings. |
|
Rafael Ballester-Ripoll, Enrique G Paredes, Renato Pajarola, Sobol Tensor Trains for Global Sensitivity Analysis, Reliability Engineering & System Safety, Vol. 183, 2019. (Journal Article)
Sobol indices are a widespread quantitative measure for variance-based global sensitivity analysis, but computing and utilizing them remains challenging for high-dimensional systems. We propose the tensor train decomposition (TT) as a unified framework for surrogate modeling and sensitivity analysis via Sobol indices. We first overview several strategies to build a TT surrogate using either an adaptive sampling strategy or a predefined set of samples. Our main contribution is the introduction of the Sobol TT, which compactly represents variance components for all possible joint variable interactions of any order. Our formulation allows efficient aggregation and subselection operations, and we are able to obtain related Sobol indices (closed, total, and superset indices) at negligible cost. Furthermore, we exploit an existing global optimization procedure within the TT framework for variable selection and model analysis tasks. We demonstrate our algorithms with two analytical models and a parallel computing simulation data set. |
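Independently of the TT machinery, first-order Sobol indices can be estimated by plain pick-freeze Monte Carlo, which is the costly baseline a surrogate accelerates; a hedged sketch over independent uniform inputs:

```python
import numpy as np

def first_order_sobol(f, dim, n=100_000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices
    for f acting on rows of uniform [0,1]^dim samples."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, dim))
    B = rng.random((n, dim))
    fA, fB = f(A), f(B)
    var = np.concatenate([fA, fB]).var()
    S = []
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]  # "freeze" coordinate i from the second sample
        # Saltelli-style estimator of Var(E[f | x_i]) / Var(f).
        S.append(np.mean(fB * (f(ABi) - fA)) / var)
    return np.array(S)
```

Each index needs a fresh batch of model evaluations, which is why a compact surrogate such as the Sobol TT, from which all indices follow at negligible cost, is attractive.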
|
Rafael Ballester-Ripoll, Renato Pajarola, Tensor Decompositions for Integral Histogram Compression and Look-Up, IEEE Transactions on Visualization and Computer Graphics, Vol. 25 (2), 2019. (Journal Article)
Histograms are a fundamental tool for multidimensional data analysis and processing, and many applications in graphics and visualization rely on computing histograms over large regions of interest (ROI). Integral histograms (IH) greatly accelerate the calculation in the case of rectangular regions, but come at a large extra storage cost. Based on the tensor train decomposition model, we propose a new compression and approximate retrieval algorithm to reduce the overall IH memory usage by several orders of magnitude at a user-defined accuracy. To this end we propose an incremental tensor decomposition algorithm that allows us to compress integral histograms of hundreds of gigabytes. We then encode the borders of any desired rectangular ROI in the IH tensor-compressed domain and reconstruct the target histogram at a high speed which is independent of the region size. We furthermore generalize the algorithm to support regions of arbitrary shape rather than only rectangles, as well as histogram field computation, i.e., recovering many histograms at once. We test our method with several multidimensional data sets and demonstrate that it radically speeds up costly histogram queries while avoiding storing massive, uncompressed IHs. |
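The uncompressed baseline the paper compresses — an integral histogram with its four-look-up ROI query — can be sketched as follows, here in 2D over an image of precomputed integer bin labels:

```python
import numpy as np

def integral_histogram(image, n_bins):
    """IH[i, j, b] = count of bin-b pixels in the rectangle [0:i, 0:j].
    A zero row/column is prepended so ROI queries need no bounds checks."""
    h, w = image.shape
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    one_hot = np.zeros((h, w, n_bins))
    one_hot[rows, cols, image] = 1.0  # scatter each pixel into its bin
    ih = np.zeros((h + 1, w + 1, n_bins))
    ih[1:, 1:] = one_hot.cumsum(axis=0).cumsum(axis=1)
    return ih

def roi_histogram(ih, r0, c0, r1, c1):
    """Histogram of rows r0..r1-1, cols c0..c1-1 via four IH look-ups."""
    return ih[r1, c1] - ih[r0, c1] - ih[r1, c0] + ih[r0, c0]
```

The query cost is independent of the ROI size, but the table is `n_bins` times larger than the image — exactly the storage blow-up the tensor-compressed representation addresses.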
|
Renato Pajarola, Tensor Methods for Global Sensitivity Analysis, 2019. (Other Publication)
Sobol indices and other, more recent quantities of interest (such as the effective and mean dimensions, the dimension distribution, or the Shapley values) are of great aid in sensitivity analysis, uncertainty quantification, and model interpretation. Unfortunately, computing such indices is still challenging for high-dimensional systems. We propose the tensor train decomposition (TT) as a unified framework for surrogate modeling and sensitivity analysis of independently distributed variables. To this end, we introduce the Sobol tensor train (Sobol TT) data structure, which compactly represents variance components for all possible joint variable interactions of any order. Our formulation allows efficient aggregation and subselection operations, and we are able to obtain related Sobol indices and other related quantities at low computational cost. |
|