Barbara Flückiger, Evirgin Noyan, Enrique G. Paredes, Rafael Ballester-Ripoll, Renato Pajarola, Deep Learning Tools for Foreground-Aware Analysis of Film Colors, In: Workshop on Computer Vision in Digital Humanities, Montreal, Canada, 2017. (Conference or Workshop Paper)
|
|
Claudio Mura, Renato Pajarola, Exploiting the Room Structure of Buildings for Scalable Architectural Modeling of Interiors, In: Posters ACM SIGGRAPH, ACM, Los Angeles, 2017. (Conference or Workshop Paper)
We propose a scalable strategy for the architectural modeling of large-scale interiors from 3D point clouds. We exploit the fact that buildings are structured into different rooms to cast the modeling of a large, multi-room environment as a set of simpler and independent reconstruction problems. This drastically reduces the complexity of the computation and makes the processing of large-scale datasets feasible even without using restrictive priors that affect the precision of the final output. |
|
Cyrill Halter, Static Occlusion Completion in TLS Point Clouds Using Handheld Devices, University of Zurich, Faculty of Business, Economics and Informatics, 2017. (Bachelor's Thesis)
Terrestrial Laser Scanners (TLS) produce point clouds which may have holes caused by occlusions in the scene. Even with multiple viewpoints, these holes can be difficult to rectify due to the limited mobility of the TLS. This thesis describes an approach for the completion of occlusions in TLS point clouds using data acquired with a handheld Google Tango device. First, viewpoints are suggested to the user for the completion of the occlusions present in the scan. The additional data is then aligned to the TLS point clouds and points suitable for completion are selected. The evaluation of the method shows promising results for a semi-automatic processing pipeline. |
|
Alireza Amiraghdam, Climate Data Visualization: A Study of Using Topology of Vector Fields for Creating Streamlines to Visualize Wind Flows, University of Zurich, Faculty of Business, Economics and Informatics, 2017. (Master's Thesis)
Recent advances in processing power, data storage capacity, and the accuracy of measuring devices have resulted in the generation of large datasets in various disciplines, and the size and accuracy of such datasets increase day by day. Climate datasets, for example, cover large areas such as cities, countries, or the whole world; in addition, they contain multiple variables and are nowadays available in high resolutions. The size of these datasets makes analysis, interpretation, and the extraction of meaningful information difficult. Scientific visualization of climate data helps to understand large, complex data by exploiting the power of human visual perception.
One of the variables measured in climate data is wind flow. When the speed and direction of the wind are measured, the resulting data form a 2D or 3D vector field; time can serve as an additional dimension.
Streamlines are a popular way of visualizing flows. Even after decades of studying methods for creating streamlines from vector fields, the challenges are not completely resolved. One challenge in creating comprehensible streamlines is the seeding strategy: different seeding strategies result in visualizations that differ in how well they reveal the properties of the underlying flow.
Another way of visualizing vector fields is by presenting their topology. Topology shows the structure of the flow and is based on the concept of critical points and the separatrices that connect them.
In this thesis, the application of vector field topology to visualizing wind flows is studied. The studied visualization technique is a combination of geometric and feature-based visualization. For this purpose, the critical points in the vector field are first identified and classified. Then, the topology of the vector field is extracted; it partitions the domain into regions, which are created by sorting the separatrices. In each region, lines perpendicular to the vector field are generated at equal distances, and the longest perpendicular line is used to place the seed point. While the streamlines are extended, their mutual distance is monitored at each step; new seed points are added where needed, and streamlines that come too close to each other are terminated.
Finally, we compared the topology-based streamline creation technique with the simple technique of creating streamlines from random seed points. The results show that the topology-based technique needs a lower total streamline length than the randomly-seeded technique to create equally comprehensible results. In addition, if we assume that the creation of the topology and the perpendicular lines can be done in a pre-processing step, the randomly-seeded technique can produce comparable results in the same amount of time needed to create the topology-based streamlines. |
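The seeding-and-termination loop described above can be sketched in a few lines of Python. This is a minimal illustration, not the thesis implementation: the saddle vector field, the step size and the separation threshold are all made up, the integrator is plain Euler, and each streamline is only tested against previously traced lines.

```python
import numpy as np

def velocity(p):
    # analytic saddle field v(x, y) = (x, -y); a critical point sits at the origin
    return np.array([p[0], -p[1]])

def trace_streamline(seed, existing, d_test=0.15, h=0.02, max_steps=500):
    """Euler-integrate a streamline; stop when it drifts too close to
    previously traced lines (the distance check described in the abstract)."""
    line = [np.asarray(seed, float)]
    for _ in range(max_steps):
        v = velocity(line[-1])
        n = np.linalg.norm(v)
        if n < 1e-9:          # reached a critical point
            break
        p = line[-1] + h * v / n
        if existing.size and np.min(np.linalg.norm(existing - p, axis=1)) < d_test:
            break             # too close to another streamline: terminate
        line.append(p)
    return np.array(line)

# seed along a line transverse to the flow near the critical point
existing = np.empty((0, 2))
streamlines = []
for y in np.linspace(-1, 1, 9):
    sl = trace_streamline((0.1, y), existing)
    streamlines.append(sl)
    existing = np.vstack([existing, sl])
```

The same loop, with seeds taken from the longest perpendicular line of each topological region, gives the thesis' seeding strategy.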
|
Benjamin Bürgisser, David Steiner, Renato Pajarola, bRenderer: A Flexible Basis for a Modern Computer Graphics Curriculum, In: Proceedings Eurographics Education Papers, Lyon, France, 2017-04-24. (Conference or Workshop Paper published in Proceedings)
In this article, we present bRenderer, a basic educational 3D rendering framework that has resulted from four years of experience in teaching an introductory-level computer graphics course at the University of Zurich. Our renderer is based on the observation that teaching a single basic but comprehensive computer graphics course often means facing a choice between having students learn a low-level graphics API bottom-up on the one hand, or a powerful (game) engine on the other. Solutions between these two extremes tend to be either too rudimentary to easily allow advanced visual effects in student projects, or too abstract to facilitate learning about the underlying principles of computer graphics. Our platform-independent framework abstracts the functionality of its underlying graphics API and libraries to an extent that still preserves the main concepts taught in a computer graphics course. Consequently, bRenderer can be used in student projects as well as in exercises. It helps students to easily understand how a renderer is implemented without getting distracted by the particular implementation of the framework or platform-specific characteristics. |
|
Josua Fröhlich, Tensorplot: Interactive Visualization of High-Dimensional Data, University of Zurich, Faculty of Business, Economics and Informatics, 2017. (Bachelor's Thesis)
We propose a web-based GUI to interactively explore multi-dimensional data in an efficient compressed format, the tensor decomposition, implemented as an open-source package using bokeh 0.12.3 in Python. Although many libraries and software solutions exist to interactively inspect sparse data, where each sample is a point, visualization and navigation in dense data is not as straightforward. Our solution generically adapts to the supplied tensor data and offers a dynamic layout that can be accessed via the browser, where plots and layouts can be put together by removing and adding plots on the fly, or exporting and importing layouts to be shared. Sophisticated sketch-based queries allow the user to find a specific spot in a data set, and various linking and brushing techniques ensure an intuitive user experience and provide a means for understanding complex data correlations and meanings. |
|
Gaudenz Halter, Color-Palettes in Movies and the ELAN Graphical Annotator, 2017. (Other Publication)
When it comes to the perception of films, color usage is undoubtedly one of the most influential and best remembered features. Consequently, a considerable amount of research has focused on film-color features and their visualization. The most common of these feature vectors is the color histogram, which has already proven its ability to predict a variety of features associated with films. However, these histograms have often been visualized by "hand-picking" the bins and their alignment in the one-dimensional feature vector. This thesis introduces a new method to create a feature vector from a color histogram, ordered in a perceptually meaningful manner using a Hilbert space-filling curve. It further investigates possible visualizations of this feature vector and its performance in dimensionality reduction using t-SNE. The result is a tool which allows the exploration of film colors using both the newly introduced "Hilbert-Histogram" and already existing descriptors.
A second part contains a brief introduction to the Graphical Annotator, a tool currently in development for the graphical annotation of movies.
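The curve-based bin ordering can be illustrated as follows, using a Z-order (Morton) curve as a simpler stand-in for the Hilbert space-filling curve used in the thesis (Hilbert preserves locality somewhat better but needs more bookkeeping). The 16x16x16 histogram and its contents are synthetic.

```python
import numpy as np

def morton3(x, y, z, bits=4):
    """Interleave the bits of three bin indices into a single curve index.
    Morton order is a simpler stand-in for the Hilbert curve here: nearby
    (r, g, b) bins still tend to land near each other on the 1D curve."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

# order the bins of a 16x16x16 RGB histogram along the curve
bins = 16
hist = np.random.default_rng(0).random((bins, bins, bins))
idx = np.array([[r, g, b] for r in range(bins)
                for g in range(bins) for b in range(bins)])
order = np.argsort([morton3(r, g, b) for r, g, b in idx])
feature_vector = hist.reshape(-1)[order]   # 1D, locality-preserving feature vector
```

The resulting 1D vector can then be fed to visualizations or to t-SNE, as the abstract describes for the Hilbert variant.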
|
|
Claudio Mura, Room-aware Architectural 3D Modeling of Building Interiors from Point Clouds, Institut für Informatik, Universität Zürich, Wirtschaftswissenschaftliche Fakultät, 2017. (Dissertation)
|
|
Matthias Thöny, Interactive visualization of large-scale geographic data, University of Zurich, Faculty of Business, Economics and Informatics, 2017. (Dissertation)
Geographic information systems are advancing strongly in many different application domains. In addition, ever more geographic information is being gathered, and large-scale geographic databases and server infrastructures need to handle this amount of data. Geographic data is growing in precision and complexity, so that it is necessary to explore and analyse datasets in an interactive way. Interactive visualization is key to exploring geographic data sets and is therefore indispensable for everyone using geographic data. In the following, new interactive rendering techniques are presented to handle the challenges in today's large-scale geographic information systems. The thesis starts with an introduction to geographic information systems, followed by an analysis of requirements and challenges for geographic visualization systems and virtual globe systems. Furthermore, an introduction to the rendering pipeline is presented. To face the challenges of today's geographic visualization systems, we introduce the GlobeEngine framework, which enables a modular structure for the rapid prototyping of geographic visualization applications and also contains all running prototypes of this work. The main algorithmic contribution of this thesis consists of new rendering techniques, namely a new version of the RASTeR terrain engine, an innovative technique for rendering vector maps for interactive terrain and map visualizations, and a novel graph bundling technique using vector maps as a basis for bundling paths in an interactive 3D perspective environment. Terrain visualization is the basis for an interactive 3D geographic visualization system. However, it is hard for decision-makers to clearly identify the best choice of terrain rendering algorithm. Therefore, this work provides an overview of terrain rendering requirements and existing terrain rendering solutions, as well as their applicability to modern graphics systems.
Furthermore, this thesis shows how RASTeR can be adapted to modern graphics hardware, and adds a set of terrain visualization features, such as edge highlighting, ambient occlusion, and terrain slope, aspect and flow visualizations, to extend the capabilities of the existing terrain visualization. Often, vector map visualizations are used on top of 3D terrain rendering. Interactive rendering of large-scale vector maps is a key challenge for high-quality geographic visualization software systems. This thesis contains a novel approach for the visualization of large-scale vector maps over detailed height-field terrains. This method uses a deferred line shading approach to render large-scale vector maps directly in a screen-space shading stage over a terrain visualization. The fact that no traditional geometric polygonal rendering is involved allows our algorithm to outperform conventional vector map rendering algorithms for geographic information systems. A flexible clustered deferred line rendering approach allows a user to interactively customize and apply advanced vector styling methods, as well as the integration into a vector map level-of-detail system. Dense line graphs and polyline maps are challenging for interactive visualization in geographic information systems. Bundling techniques are a common approach to reduce clutter and have successfully been demonstrated for the display of complex planar graphs. Previous techniques typically applied some form of attraction or repulsion forces to bundle edges. In geographic visualizations, it is often necessary to take semantic information into account and constrain path bundles to follow some reference network vector map. This thesis presents a novel method which uses geographic vector map reference information to route, visualize and simplify path bundles along their network paths in a constrained environment using adaptive B-splines.
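The slope and aspect features mentioned above can be computed from a height field with a few lines of NumPy. This is a hedged sketch on a synthetic Gaussian hill with assumed unit grid spacing, not the RASTeR implementation.

```python
import numpy as np

# synthetic height field on a regular grid (unit spacing assumed)
y, x = np.mgrid[0:64, 0:64]
height = 10.0 * np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 300.0)

# central-difference partial derivatives of the elevation
dz_dy, dz_dx = np.gradient(height)

# slope: angle of steepest descent; aspect: compass-like direction of that descent
slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
aspect = np.degrees(np.arctan2(-dz_dy, -dz_dx)) % 360.0
```

Per-pixel slope and aspect values like these can then be mapped to colors in the terrain shading stage.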
The thesis is concluded by a summary and a future-work section presenting open research topics. |
|
Alireza Amiraghdam, Matthias Thöny, Renato Pajarola, Visualization of a Large Climate Dataset, 2017. (Other Publication)
|
|
Georgios-Tsampikos Michailidis, Renato Pajarola, Bayesian graph-cut optimization for wall surfaces reconstruction in indoor environments, Visual Computer, Vol. 33 (10), 2017. (Journal Article)
In this paper, a new method capable of extracting the wall openings (windows and doors) of interior scenes from point clouds of cluttered and occluded environments is presented. For each wall surface extracted from the polyhedral model of a room, our method constructs a cell complex representation, which is used to segment the wall objects with a graph-cut method. We evaluate the results of the proposed approach on real-world 3D scans of indoor environments and demonstrate its validity. |
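As a toy illustration of binary graph-cut labelling, the following sketch segments a hypothetical 1D strip of wall cells into wall and opening via min-cut. The occupancy values, the smoothness weight and the plain Edmonds-Karp solver are simplifications and not the Bayesian formulation of the paper.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow; `cap` is a dict-of-dicts of residual capacities.
    The final BFS tree (`parent`) marks the source side of the min cut."""
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 1e-12 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow, parent
        path, v = [], t                 # trace augmenting path and its bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= bottleneck
            cap[v].setdefault(u, 0)
            cap[v][u] += bottleneck     # residual (reverse) edge
        flow += bottleneck

# toy 1D cell complex along a wall: high occupancy = wall, low = opening
occupancy = [0.9, 0.8, 0.1, 0.05, 0.15, 0.85, 0.9]   # hypothetical point densities
n, smooth = len(occupancy), 0.3
cap = {u: {} for u in ['S', 'T'] + list(range(n))}
for i, p in enumerate(occupancy):
    cap['S'][i] = p            # data term: cost of calling cell i an opening
    cap[i]['T'] = 1.0 - p      # data term: cost of calling cell i wall
for i in range(n - 1):         # smoothness term between adjacent cells
    cap[i][i + 1] = smooth
    cap[i + 1].setdefault(i, 0)
    cap[i + 1][i] += smooth

_, parent = max_flow(cap, 'S', 'T')
labels = ['wall' if i in parent else 'opening' for i in range(n)]
```

In the paper the same cut is computed over a 2D cell complex per wall, with data terms derived from the Bayesian model.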
|
Rafael Ballester-Ripoll, Enrique G. Paredes, Renato Pajarola, A Surrogate Visualization Model Using the Tensor Train Format, In: Proceedings ACM SIGGRAPH ASIA Symposium on Visualization, ACM, Macao, 2016-12-05. (Conference or Workshop Paper published in Proceedings)
Complex simulations and numerical experiments typically rely on a number of parameters and have an associated score function, e.g. with the goal of maximizing accuracy or minimizing computation time. However, the influence of each individual parameter is often poorly understood a priori and the joint parameter space can be difficult to explore, visualize and optimize. We model this space as an N-dimensional black-box tensor and apply a cross approximation strategy to sample it. Upon learning and compactly expressing this space as a surrogate visualization model, informative subspaces are interactively reconstructed and navigated in the form of charts, images, surface plots, etc. By exploiting efficient operations in the tensor train format, we are able to produce diagrams such as parallel coordinates, bivariate projections and dimensional stacking out of highly-compressed parameter spaces. We demonstrate the proposed framework with several scientific simulations that contain up to 6 parameters and billions of tensor grid points. |
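The kind of subspace reconstruction described here can be sketched with plain NumPy: a 2D chart is obtained from a tensor-train representation by contracting the cores of the fixed parameters first, so the cost stays independent of the full grid size. The random cores below merely stand in for cores learned by cross approximation.

```python
import numpy as np

rng = np.random.default_rng(1)
# a 4-parameter space, each parameter with 10 levels, in TT format:
# cores G_k of shape (r_{k-1}, n_k, r_k), boundary ranks 1
shapes = [(1, 10, 3), (3, 10, 3), (3, 10, 3), (3, 10, 1)]
cores = [rng.standard_normal(s) for s in shapes]

def tt_entry(cores, index):
    """Evaluate one tensor entry as a chain of small matrix products."""
    m = np.eye(1)
    for G, i in zip(cores, index):
        m = m @ G[:, i, :]
    return m[0, 0]

def tt_slice(cores, fixed=(4, 7)):
    """Fix parameters 2 and 3 and reconstruct the 2D chart over parameters
    0 and 1; the fixed cores are contracted into one small vector first."""
    left = cores[0][0]                                            # n_0 x r_1
    right = cores[2][:, fixed[0], :] @ cores[3][:, fixed[1], :]   # r_2 x 1
    return np.einsum('ar,rbs,s->ab', left, cores[1], right[:, 0])

chart = tt_slice(cores)
```

Parallel-coordinate or dimensional-stacking diagrams follow the same pattern, just with different sets of free and contracted cores.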
|
Matthias Thöny, Markus Billeter, Renato Pajarola, Deferred Vector Map Visualization, In: Proceedings ACM SIGGRAPH ASIA Symposium on Visualization, ACM, Macao, 2016-12-05. (Conference or Workshop Paper published in Proceedings)
Interactive rendering of large scale vector maps is a key challenge for high-quality geographic visualization software systems. In this paper we present a novel approach for the visualization of large scale vector maps over detailed height-field terrains. Our method uses a deferred line shading approach to render large scale vector maps directly in a screen-space shading stage over a terrain visualization. The fact that there is no traditional geometric polygonal rendering involved allows our algorithm to outperform conventional vector map rendering algorithms for geographic information systems. Our flexible clustered deferred line rendering approach allows a user to interactively customize and apply advanced vector styling methods, as well as the integration into a vector map level-of-detail system. |
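The screen-space idea can be mimicked on the CPU: compute, per pixel, the distance to the nearest map segment and threshold it by the line width. This NumPy sketch with made-up segments only illustrates the shading test; the paper performs it in a deferred GPU shading pass with clustering and styling on top.

```python
import numpy as np

def segment_distance(px, py, a, b):
    """Per-pixel distance from grid points (px, py) to segment a-b."""
    ab = np.subtract(b, a)
    t = ((px - a[0]) * ab[0] + (py - a[1]) * ab[1]) / (ab @ ab)
    t = np.clip(t, 0.0, 1.0)
    return np.hypot(px - (a[0] + t * ab[0]), py - (a[1] + t * ab[1]))

h, w = 64, 64
py, px = np.mgrid[0:h, 0:w].astype(float)
segments = [((5.0, 5.0), (60.0, 20.0)), ((10.0, 55.0), (55.0, 50.0))]  # tiny vector map

# screen-space pass: shade a pixel if it lies within half the line width of any
# segment; styling (width, color, anti-aliasing) stays a per-pass parameter
dist = np.minimum.reduce([segment_distance(px, py, a, b) for a, b in segments])
line_width = 2.0
vector_layer = dist <= line_width / 2.0   # boolean mask composited over the terrain
```

The clustering in the paper limits which segments each screen tile has to test, which is what makes the per-pixel loop feasible at scale.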
|
Rafael Ballester-Ripoll, Renato Pajarola, Tensor Decomposition Methods in Visual Computing, In: IEEE Visualization Tutorials, Baltimore, USA, 2016. (Conference or Workshop Paper)
Initially proposed as an extension of the concept of matrix decomposition for three and more dimensions, tensor decompositions have found numerous applications in visualization and visual computing. They constitute a powerful mathematical framework for compactly representing and manipulating dense data fields, especially in many dimensions. This course will introduce the most popular decomposition models and showcase emerging tensor methods for compression, interactive visualization, texture synthesis, denoising, and multidimensional inpainting. Multidimensional visual data types of interest include image and geometry ensembles, hyperspectral images, volumes and corresponding time-varying data. |
|
Rafael Ballester-Ripoll, Renato Pajarola, Compressing Bidirectional Texture Functions via Tensor Train Decomposition, In: Proceedings Pacific Graphics Short Papers, The Eurographics Association, Okinawa, 2016-10-11. (Conference or Workshop Paper published in Proceedings)
Material reflectance properties play a central role in photorealistic rendering. Bidirectional texture functions (BTFs) can faithfully represent these complex properties, but their inherent high dimensionality (texture coordinates, color channels, view and illumination spatial directions) requires many coefficients to encode. Numerous algorithms based on tensor decomposition have been proposed for efficient compression of multidimensional BTF arrays; however, these prior methods still grow exponentially in size with the number of dimensions. We tackle the BTF compression problem with a different model, the tensor train (TT) decomposition. The main difference is that TT compression scales linearly with the input dimensionality and is thus much better suited for high-dimensional data tensors. Furthermore, it allows faster random-access texel reconstruction than the previous Tucker-based approaches. We demonstrate the performance benefits of the TT decomposition in terms of accuracy and visual appearance, compression rate and reconstruction speed. |
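The construction behind the TT model can be sketched with sequential truncated SVDs (plain TT-SVD). The toy 4D array below stands in for a BTF and is rank-1 by construction, so the compressed form is exact; real BTF codecs add rank adaptivity and error control on top of this.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Build tensor-train cores by sequential truncated SVDs (plain TT-SVD)."""
    cores, rank, mat = [], 1, tensor
    for n in tensor.shape[:-1]:
        mat = mat.reshape(rank * n, -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(u[:, :r].reshape(rank, n, r))   # core: (r_prev, n, r)
        mat = s[:r, None] * vt[:r]                   # carry the rest forward
        rank = r
    cores.append(mat.reshape(rank, tensor.shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract all cores back into the dense tensor (for verification only)."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=(out.ndim - 1, 0))
    return out.reshape([c.shape[1] for c in cores])

# toy 4D "BTF": (view, light, u, v) axes; rank-1 by construction
rng = np.random.default_rng(2)
btf = np.einsum('i,j,k,l->ijkl', *[rng.random(8) for _ in range(4)])
cores = tt_svd(btf, max_rank=4)
```

Random texel access amounts to the same chain of small matrix products over one index per core, which is the speed advantage the paper reports over Tucker.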
|
Samuel von Baussnern, Semantic Labelling of Multi-Room Building Interiors using Machine Learning, University of Zurich, Faculty of Business, Economics and Informatics, 2016. (Bachelor's Thesis)
Recent improvements in laser scanning technologies have made it possible to quickly scan multiple rooms and acquire accurate 3D point cloud data. Using this data, 3D models can easily be created and further used in other systems. Identifying the semantic meaning - the type of a room - is the natural progression of this process and currently an active research topic, mainly in the autonomous robotics community.
The claim of this experimental thesis is that it is not necessary to identify and classify the objects placed in rooms in order to identify the underlying class of a given room, e.g. "living room". To test this hypothesis, unstructured laser measurements are mapped onto planar shapes. Features are then extracted directly from these shapes without any further classification process, and different machine learning algorithms are evaluated on these features. Previous work, in contrast, tried to reconstruct and classify real-world objects, such as tables and chairs, from the unstructured point clouds, used these objects to extract features, and only then classified rooms to add semantic meaning.
The developed pipeline was tested on a dataset of 65 rooms, with each of the five classes (bathrooms, bedrooms, corridors, offices, and kitchens) represented by 13 rooms. First results are promising: using all five classes, an accuracy of 0.80 is achieved. Further refinements in feature extraction and selection, in addition to testing on more data, are needed before the method can be used in a real-world environment. |
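The classification step can be sketched with a nearest-centroid classifier on made-up planar-shape features (floor area, patch count, mean patch height). Everything below is synthetic and hypothetical; the thesis evaluates several machine learning algorithms on features extracted from real scans.

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical per-room features extracted directly from planar shapes:
# [floor area m^2, number of planar patches, mean patch height m]
prototypes = {
    'bathroom': [6.0, 14, 1.2],
    'corridor': [10.0, 6, 1.5],
    'office':   [20.0, 25, 1.0],
}
rooms, labels = [], []
for name, proto in prototypes.items():
    for _ in range(13):                       # 13 rooms per class, as in the thesis
        rooms.append(proto + rng.normal(0, [1.0, 2.0, 0.1]))
        labels.append(name)
rooms, labels = np.array(rooms), np.array(labels)

# nearest-centroid stands in for the classifiers evaluated in the thesis
mean, std = rooms.mean(0), rooms.std(0)
z = (rooms - mean) / std                      # normalize the features
centroids = {c: z[labels == c].mean(0) for c in prototypes}

def classify(feature_row):
    zrow = (feature_row - mean) / std
    return min(centroids, key=lambda c: np.linalg.norm(zrow - centroids[c]))

accuracy = np.mean([classify(r) == l for r, l in zip(rooms, labels)])
```

On real data the feature vectors are noisier and the classes overlap, which is where the choice of learning algorithm starts to matter.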
|
Christian Tresch, CNN in RGB-D Image Segmentation Preprocessing, Training, Filtering and Visualization, University of Zurich, Faculty of Business, Economics and Informatics, 2016. (Bachelor's Thesis)
This thesis documents a project using a deep learning approach for image segmentation. The work's objective was to investigate in more depth how such a state-of-the-art machine learning algorithm works. The task was to use a Convolutional Neural Network (CNN) in combination with extracted image features to produce a segmented output image. For this task, a newly released network architecture called a Fully Convolutional Network (FCN) was used to segment the image data. The data used in the process was RGB-D image data from the NYUD2 dataset, containing color and depth information. The FCN was trained using an end-to-end, pixel-wise supervised learning strategy. Subsequently, the network's architecture, its learned filter parameters and the internal activation maps were extracted and filtered. The goal of this step was the acquisition of network information relevant to the user. In the filtering process, the network's data was searched for candidates of the strongest, most average and weakest connections between the intermediary processing layers. The extracted data was then labeled and visualized as a Directed Acyclic Graph (DAG) in a 3-dimensional viewer module. The visualization allows users to explore the data sent through the network and to investigate the internal state of the network more deeply after forwarding their own input.
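The connection-filtering step can be sketched from a weight tensor alone: rank channel-to-channel connections by the L1 norm of their convolution kernels and keep the weakest, median and strongest as DAG edges. The random weights below merely stand in for trained FCN parameters.

```python
import numpy as np

rng = np.random.default_rng(4)
# stand-in convolution weights between two layers: (out_channels, in_channels, 3, 3)
weights = rng.standard_normal((16, 8, 3, 3))

# connection strength between channel pairs: L1 norm of each 3x3 kernel
strength = np.abs(weights).sum(axis=(2, 3))          # shape (out, in)

flat = strength.ravel()
order = np.argsort(flat)
pick = {'weakest': order[0], 'median': order[len(order) // 2],
        'strongest': order[-1]}

# edges for the DAG visualization: (in_channel, out_channel, strength)
edges = {name: (int(i % 8), int(i // 8), float(flat[i]))
         for name, i in pick.items()}
```

Repeating this for every consecutive layer pair yields the labeled edge set the 3D viewer displays.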
|
|
David Steiner, Enrique G. Paredes, Stefan Eilemann, Renato Pajarola, Dynamic work packages in parallel rendering, In: Proceedings Eurographics Symposium on Parallel Graphics and Visualization, Groningen, Netherlands, 2016-06-06. (Conference or Workshop Paper published in Proceedings)
Interactive visualizations of large-scale datasets can greatly benefit from parallel rendering on a cluster with hardware accelerated graphics by assigning all rendering client nodes a fair amount of work each. However, interactivity regularly causes unpredictable distribution of workload, especially on large tiled displays. This requires a dynamic approach to adapt scheduling of rendering tasks to clients, while also considering data locality to avoid expensive I/O operations. This article discusses a dynamic parallel rendering load balancing method based on work packages which define rendering tasks. In the presented system, the nodes pull work packages from a centralized queue that employs a locality-aware dynamic affinity model for work package assignment. Our method allows for fully adaptive implicit workload distribution for both sort-first and sort-last parallel rendering. |
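The pull-based queue with a locality-aware affinity model can be sketched as follows; the package ids doubling as data-chunk ids, the binary affinity score and the node names are simplifications of the paper's model.

```python
from collections import defaultdict

class PackageQueue:
    """Centralized queue; each node pulls the pending package it has the highest
    affinity for, preferring data already cached on that node."""
    def __init__(self, packages):
        self.pending = set(packages)          # package id == id of the chunk it needs
        self.cache = defaultdict(set)         # node -> data chunks held locally

    def affinity(self, node, pkg):
        return 1.0 if pkg in self.cache[node] else 0.0

    def pull(self, node):
        if not self.pending:
            return None
        # highest affinity wins; ties broken by lowest package id
        best = max(self.pending, key=lambda p: (self.affinity(node, p), -p))
        self.pending.discard(best)
        self.cache[node].add(best)            # pulling the package caches its data
        return best

q = PackageQueue(range(6))
q.cache['gpu0'] = {3, 4}                      # gpu0 already holds chunks 3 and 4
schedule = [(n, q.pull(n)) for n in
            ['gpu0', 'gpu1', 'gpu0', 'gpu1', 'gpu0', 'gpu1']]
```

Note how gpu0 is handed packages 3 and 4 first, avoiding the I/O that assigning them to gpu1 would cause, while the overall load stays balanced.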
|
Siddhartha, Which Classifier to apply?: Bayesian Active Learning on ImageNet, University of Zurich, Faculty of Business, Economics and Informatics, 2016. (Master's Thesis)
In this work, we study the sequential decision-making problem where we seek to classify images from ImageNet by querying a pool of binary classifiers. ImageNet is a widely used benchmark image dataset with a hierarchical structure, and we show how this hierarchical structure can be used in our framework for classification. Our framework operates in a greedy fashion: given a test image, it sequentially and adaptively selects, from a given pool, a subset of classifiers that are most informative about the image. The classifiers are chosen based on a selection criterion. In our work, we extensively experiment with and compare various criteria for classifier selection in two broad scenarios, namely the Naive Bayes setting and the Equivalence Class setting. In the Naive Bayes setting, we try to identify the true label of an image, whereas in the Equivalence Class setting we try to identify the hypothesis group in which the true label of the image lies. In both scenarios, we consider the tests to be noisy. We show that the results are promising and that the framework has the potential to compete with other known active learning techniques such as MIAL and SVM-MIAL. As a baseline, we use a convolutional neural network to compare the performance of our algorithm, and we show that in certain cases, for the ECD and DRD problems, our framework performs nearly as well as a convolutional neural network. |
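The greedy selection step can be sketched as picking, at each round, the binary test with the largest expected entropy reduction over a Bayesian belief on the labels. The label set, test pool and noise rate below are synthetic, and expected entropy reduction is only one of the criteria a framework like this can use.

```python
import numpy as np

rng = np.random.default_rng(5)
n_labels, n_tests, eps = 8, 12, 0.1            # eps: test noise rate

# which labels each binary classifier fires on (e.g. "is it an animal?")
fires = rng.random((n_tests, n_labels)) < 0.5
prior = np.full(n_labels, 1.0 / n_labels)

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def posterior(p, test, outcome):
    """Bayes update of the label belief given a noisy binary test outcome."""
    like = np.where(fires[test] == outcome, 1 - eps, eps)
    p = p * like
    return p / p.sum()

def pick_test(p, remaining):
    """Greedy step: the test with the largest expected entropy reduction."""
    def expected_entropy(t):
        p_pos = (p * np.where(fires[t], 1 - eps, eps)).sum()
        return p_pos * entropy(posterior(p, t, True)) + \
               (1 - p_pos) * entropy(posterior(p, t, False))
    return min(remaining, key=expected_entropy)

# adaptively query tests for an image whose true label is 3
true_label, belief, remaining = 3, prior.copy(), set(range(n_tests))
for _ in range(6):
    t = pick_test(belief, remaining)
    remaining.discard(t)
    outcome = fires[t, true_label] ^ (rng.random() < eps)   # noisy response
    belief = posterior(belief, t, outcome)
```

In the Equivalence Class setting the same loop runs over a belief on hypothesis groups instead of individual labels.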
|
Reto Wettstein, Modeling the 3D Micro-Architecture of Lung Carcinoma, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2016. (Bachelor's Thesis)
Currently, much research is being done to improve the current tumor classification system and the 3D pathological grading of lung squamous-cell carcinoma and, further, to make predictions about the invasion patterns of this kind of tumor. A new approach, from which scientists hope to gain insightful information towards these goals, aims to model the 3D micro-architecture of such carcinomas. To create a 3D model, a successful segmentation of the relevant tissue types is needed first. This bachelor's thesis presents, in a first step, a processing pipeline which performs this segmentation on an image stack of a lung squamous-cell carcinoma imaged by propagation-based phase-contrast X-ray microscopy. Every image passes through multiple processing steps consisting of a foreground-background transformation, a watershed transformation, a graph cut algorithm to merge the results of the previously mentioned steps, and a neural network classification step. Based on the results of the segmentation, in a second step of this work, a 3D visualization is created in which the relevant tissue structures are clearly visible. |
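The early stages of such a pipeline can be sketched in pure NumPy: a foreground-background threshold followed by a connected-component pass, the latter a minimal stand-in for the watershed and graph-cut merging steps. The image slice and the threshold are synthetic.

```python
import numpy as np
from collections import deque

def connected_components(mask):
    """4-connected component labelling via BFS flood fill; a minimal stand-in
    for the watershed and graph-cut stages described in the abstract."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        current += 1
        q = deque([seed])
        labels[seed] = current
        while q:
            y, x = q.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1] \
                        and mask[ny, nx] and not labels[ny, nx]:
                    labels[ny, nx] = current
                    q.append((ny, nx))
    return labels, current

# synthetic grey-level slice with two bright tissue regions
img = np.zeros((32, 32))
img[4:10, 4:12] = 0.9      # region 1
img[18:28, 15:25] = 0.8    # region 2

foreground = img > 0.5                      # foreground-background transformation
labels, n_regions = connected_components(foreground)
```

Stacking the per-slice label maps is what makes the 3D visualization of the tissue structures possible in the second step.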
|