René Cutura

Work Experience


  • Universität Stuttgart

    Developing methods to make the results of dimensionality reduction more interpretable.

    Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 251654672 – TRR 161, project A08


  • Universität Wien

    Research project staff member

    Funded by the FFG ICT of the Future program via the ViSciPub project (no. 867378).


  • Schöller Bleckmann Edelstahlrohr GmbH

    First-level support, software deployment with OPSI, documentation with a wiki.



  • Communication Company Walzl & Schoissengayer OG

    Conducting interviews for market and opinion research.


Education


  • Universität Wien

    Teacher training programme in mathematics and computer science.
    Diploma thesis: "VisCoDeR: A tool for Visually Comparing Dimensionality Reduction Algorithms".



  • College of Industrial Engineering (Kolleg für Wirtschaftsingenieurwesen) at the HTBLuVA Wiener Neustadt

    Specialization: business informatics.



  • Technical School of Electrical Engineering (Fachschule für Elektrotechnik) at the HTBLuVA Wiener Neustadt

    Project work: "Messsystem Handbike" (a measurement system for a handbike).



  • Musikhauptschule Schöllerstraße in Neunkirchen (lower secondary school with a music focus)


Publications


  • SiGrid: Gridifying Scatterplots with Sector-Based Regularization and Hagrid

    René Cutura, Hennes Rave, Quynh Quang Ngo, Vladimir Molchanov, Lars Linsen, Daniel Weiskopf, Michael Sedlmair

    Hagrid is a state-of-the-art space-filling-curve-based method for gridifying scatterplots. However, it exhibits limitations in preserving the global structures of scatterplots with areas of varying density, because the granularity of the underlying space-filling curve cannot be adapted to regions of different density. To compensate for this shortcoming, we introduce SiGrid, which combines Hagrid with the Sector-Based Regularization (SBR) technique. SiGrid applies SBR to generate a scatterplot with a more uniform and generally lower density as an intermediate step. This intermediate scatterplot can then be fed to Hagrid for improved results. We quantitatively evaluate SiGrid by comparing it to Hagrid over a set of 502 scatterplots of different sizes, ranging from 50 to 10000 points per dataset, using relevant quality metrics. While generally slower, SiGrid outperforms Hagrid regarding the quality metrics of rank-wise neighborhood preservation (trustworthiness), ordering preservation, and pairwise distance preservation (cross-correlation).



  • Hagrid: using Hilbert and Gosper curves to gridify scatterplots

    René Cutura, Cristina Morariu, Zhanglin Cheng, Yunhai Wang, Daniel Weiskopf, Michael Sedlmair

    A common enhancement of scatterplots represents points as small multiples, glyphs, or thumbnail images. As this encoding often results in overlaps, a general strategy is to alter the position of the data points, for instance, to a grid-like structure. Previous approaches rely on solving expensive optimization problems or on dividing the space in ways that alter the global structure of the scatterplot. To find a good balance between efficiency and neighborhood and layout preservation, we propose HAGRID, a technique that uses space-filling curves (SFCs) to “gridify” a scatterplot without employing expensive collision detection and handling mechanisms. Using SFCs ensures that the points are plotted close to their original position, retaining approximately the same global structure. The resulting scatterplot is mapped onto a rectangular or hexagonal grid, using Hilbert and Gosper curves. We discuss and evaluate the theoretic runtime of our approach and quantitatively compare our approach to three state-of-the-art gridifying approaches, DGRID, Small Multiples with Gaps (SMWG), and CorrelatedMultiples (CMDS), in an evaluation comprising 339 scatterplots. Here, we compute several quality measures for neighborhood preservation together with an analysis of the actual runtimes. The main results show that, compared to the best other technique, HAGRID is faster by a factor of four, while achieving similar or even better quality of the gridified layout. Due to its computational efficiency, our approach also allows novel applications of gridifying approaches in interactive settings, such as removing local overlap upon hovering over a scatterplot.
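    The core idea behind SFC-based gridifying can be sketched in a few lines: quantize each point to a cell, compute its Hilbert index, sort points along the curve, and assign each point a distinct cell in curve order. The following self-contained JavaScript sketch illustrates this; the function names and the even-spacing assignment are illustrative only and are not Hagrid's actual implementation.

```javascript
// Hilbert index of integer cell (x, y) on an n×n grid (n a power of two).
function xy2d(n, x, y) {
  let d = 0;
  for (let s = n >> 1; s > 0; s >>= 1) {
    const rx = (x & s) > 0 ? 1 : 0;
    const ry = (y & s) > 0 ? 1 : 0;
    d += s * s * ((3 * rx) ^ ry);
    if (ry === 0) { // rotate the quadrant so the curve stays contiguous
      if (rx === 1) { x = s - 1 - x; y = s - 1 - y; }
      [x, y] = [y, x];
    }
  }
  return d;
}

// Inverse: cell coordinates of Hilbert index d on an n×n grid.
function d2xy(n, d) {
  let x = 0, y = 0;
  for (let s = 1; s < n; s <<= 1) {
    const rx = 1 & (d >> 1);
    const ry = 1 & (d ^ rx);
    if (ry === 0) {
      if (rx === 1) { x = s - 1 - x; y = s - 1 - y; }
      [x, y] = [y, x];
    }
    x += s * rx;
    y += s * ry;
    d >>= 2;
  }
  return [x, y];
}

// Gridify: quantize points to curve indices, sort, then spread the points
// over distinct cells along the curve (requires points.length <= n*n).
function gridify(points, order = 4) {
  const n = 1 << order;
  const xs = points.map(p => p[0]), ys = points.map(p => p[1]);
  const [x0, x1] = [Math.min(...xs), Math.max(...xs)];
  const [y0, y1] = [Math.min(...ys), Math.max(...ys)];
  const quant = (v, lo, hi) =>
    Math.min(n - 1, Math.floor(((v - lo) / ((hi - lo) || 1)) * n));
  const keyed = points
    .map((p, i) => ({ i, d: xy2d(n, quant(p[0], x0, x1), quant(p[1], y0, y1)) }))
    .sort((a, b) => a.d - b.d);
  const cells = new Array(points.length);
  keyed.forEach((k, rank) => {
    // even spacing along the curve keeps the assigned cells distinct
    cells[k.i] = d2xy(n, Math.floor((rank * n * n) / points.length));
  });
  return cells;
}
```

    Because sorting dominates, this sketch runs in O(m log m) for m points, which hints at why an SFC approach avoids the cost of explicit collision detection.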



  • DaRt: Generative Art using Dimensionality Reduction Algorithms

    René Cutura, Katrin Angerbauer, Frank Heyen, Natalie Hube, Michael Sedlmair

    Dimensionality Reduction (DR) is a popular technique that is often used in the Machine Learning and Visualization communities to analyze high-dimensional data. The approach is empirically proven to be powerful for uncovering previously unseen structures in the data. While observing the results of the intermediate optimization steps of DR algorithms, we coincidentally discovered the artistic beauty of the DR process. Enthusiastic about this beauty, we decided to look at DR through a generative art lens rather than its technical application aspects, and to use DR techniques to create artwork. In particular, we use the optimization process to generate images, by drawing each intermediate step of the optimization process with some opacity over the previous intermediate result. As an alternative input, we used a neural-network model for face-landmark detection to apply DR to portraits, while maintaining some facial properties, resulting in abstracted facial avatars. In this work, we provide a collection of such artwork.



  • Hagrid — Gridify Scatterplots with Hilbert and Gosper Curves

    René Cutura, Cristina Morariu, Zhanglin Cheng, Yunhai Wang, Daniel Weiskopf, Michael Sedlmair

    A common enhancement of scatterplots represents points as small multiples, glyphs, or thumbnail images. As this encoding often results in overlaps, a general strategy is to alter the position of the data points, for instance, to a grid-like structure. Previous approaches rely on solving expensive optimization problems or on dividing the space in ways that alter the global structure of the scatterplot. To find a good balance between efficiency and neighborhood and layout preservation, we propose Hagrid, a technique that uses space-filling curves (SFCs) to “gridify” a scatterplot without employing expensive collision detection and handling mechanisms. Using SFCs ensures that the points are plotted close to their original position, retaining approximately the same global structure. The resulting scatterplot is mapped onto a rectangular or hexagonal grid, using Hilbert and Gosper curves. We discuss and evaluate the theoretic runtime of our approach and quantitatively compare our approach to three state-of-the-art gridifying approaches, DGrid, Small Multiples with Gaps (SMWG), and CorrelatedMultiples (CMDS), in an evaluation comprising 339 scatterplots. Here, we compute several quality measures for neighborhood preservation together with an analysis of the actual runtimes. The main results show that, compared to the best other technique, Hagrid is faster by a factor of four, while achieving similar or even better quality of the gridified layout. Due to its computational efficiency, our approach also allows novel applications of gridifying approaches in interactive settings, such as removing local overlap upon hovering over a scatterplot.



  • [funded by: FFG (ViSciPub)]

    DruidJS — A JavaScript Library for Dimensionality Reduction

    René Cutura, Christoph Kralj, Michael Sedlmair

    Dimensionality reduction (DR) is a widely used technique for visualization. Nowadays, many of these visualizations are developed for the web, most commonly using JavaScript as the underlying programming language. So far, only a few DR methods have a JavaScript implementation, though, requiring developers to write wrappers around implementations in other languages. In addition, those DR methods that exist in JavaScript libraries, such as PCA, t-SNE, and UMAP, do not offer consistent programming interfaces, hampering the quick integration of different methods. Toward a coherent and comprehensive DR programming framework, we developed an open source JavaScript library named DruidJS. Our library contains implementations of ten different DR algorithms, as well as the required linear algebra techniques, tools, and utilities.
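    To illustrate the kind of linear-algebra machinery such a library has to bundle (this is a library-free sketch, not DruidJS's actual API — refer to its documentation for that), here is the simplest DR method, PCA, reduced to a 1D projection via power iteration on the covariance matrix:

```javascript
// Project rows of `data` (arrays of equal length) onto the top principal
// component. A minimal sketch: center, build the covariance matrix, find
// its dominant eigenvector by power iteration, and project.
function pca1d(data) {
  const n = data.length, dim = data[0].length;
  // center the data
  const mean = Array(dim).fill(0);
  for (const row of data) row.forEach((v, j) => (mean[j] += v / n));
  const X = data.map(row => row.map((v, j) => v - mean[j]));
  // sample covariance matrix (dim × dim)
  const C = Array.from({ length: dim }, () => Array(dim).fill(0));
  for (const row of X)
    for (let a = 0; a < dim; a++)
      for (let b = 0; b < dim; b++) C[a][b] += (row[a] * row[b]) / (n - 1);
  // power iteration for the dominant eigenvector
  let v = Array(dim).fill(1 / Math.sqrt(dim));
  for (let it = 0; it < 200; it++) {
    const w = C.map(row => row.reduce((s, c, j) => s + c * v[j], 0));
    const norm = Math.hypot(...w);
    v = w.map(c => c / norm);
  }
  // project each centered row onto the component
  return X.map(row => row.reduce((s, x, j) => s + x * v[j], 0));
}
```

    A consistent interface across methods — which is DruidJS's stated goal — means that t-SNE, UMAP, and PCA can be swapped behind one such projection call without changing calling code.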



  • [funded by: FFG (ViSciPub)]

    Compadre — Comparing and Exploring High-Dimensional Data with Dimensionality Reduction Algorithms and Matrix Visualizations

    René Cutura, Michael Aupetit, Jean-Daniel Fekete, Michael Sedlmair

    We propose Compadre, a visual analysis tool for comparing distances of high-dimensional (HD) data and their low-dimensional projections. At its heart is a matrix visualization to represent the discrepancy between distance matrices, linked side-by-side with 2D scatterplot projections of the data. Using different examples and datasets, we illustrate how this approach fosters (1) evaluating dimensionality reduction techniques w.r.t. how well they project the HD data, (2) comparing them to each other side-by-side, and (3) evaluating important data features through subspace comparison. We also present a case study, in which we analyze IEEE VIS authors from 1990 to 2018, and gain new insights into the relationships between coauthors, citations, and keywords. The coauthors are projected as accurately with UMAP as with t-SNE, but the projections show different insights. The structure of the citation subspace is very different from the coauthor subspace. The keyword subspace is noisy yet consistent among the three IEEE VIS sub-conferences.
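    The discrepancy matrix at the heart of this approach can be sketched as follows; normalizing each distance matrix by its maximum is one plausible way to make the two scales comparable, not necessarily the paper's exact choice:

```javascript
// Euclidean pairwise-distance matrix of a set of points (rows).
function distMatrix(rows) {
  return rows.map(a =>
    rows.map(b => Math.hypot(...a.map((v, k) => v - b[k]))));
}

// Entry (i, j) is the normalized HD distance minus the normalized
// low-dimensional distance: positive where the projection compresses
// a pair, negative where it stretches it.
function discrepancy(hd, ld) {
  const H = distMatrix(hd), L = distMatrix(ld);
  const maxH = Math.max(...H.flat()), maxL = Math.max(...L.flat());
  return H.map((row, i) =>
    row.map((h, j) => h / (maxH || 1) - L[i][j] / (maxL || 1)));
}
```

    Rendering this matrix with a diverging color scale, side by side with the 2D scatterplot, is the kind of linked view the abstract describes.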



  • [funded by: FFG (ViSciPub)]

    VisCoDeR — A tool for Visually Comparing Dimensionality Reduction Algorithms

    René Cutura, Stefan Holzer, Michael Aupetit, Michael Sedlmair

    We propose VisCoDeR, a tool that leverages comparative visualization to support learning and analyzing different dimensionality reduction (DR) methods. VisCoDeR offers two modes. The Discover mode allows users to qualitatively compare several DR results by juxtaposing and linking the resulting scatterplots. The Explore mode allows for analyzing hundreds of differently parameterized DR results in a quantitative way. We present case studies that show that our approach helps to understand similarities and differences between DR algorithms.


As Co-author


  • [funded by: DFG (TRR 161)]

    An Image Quality Dataset with Triplet Comparisons for Multi-dimensional Scaling

    Mohsen Jenadeleh, Frederik L. Dennig, René Cutura, Quynh Quang Ngo, Daniel A. Keim, Michael Sedlmair, Dietmar Saupe

    In the early days of perceptual image quality research more than 30 years ago, the multidimensionality of distortions in perceptual space was considered important. However, research focused on scalar quality as measured by mean opinion scores. With our work, we intend to revive interest in this relevant area by presenting a first pilot dataset of annotated triplet comparisons for image quality assessment. It contains one source stimulus together with distorted versions derived from 7 distortion types at 12 levels each. Our crowdsourced and curated dataset contains roughly 50,000 responses to 7,000 triplet comparisons. We show that the multidimensional embedding of the dataset poses a challenge for many established triplet embedding algorithms. Finally, we propose a new reconstruction algorithm, dubbed logistic triplet embedding (LTE) with Tikhonov regularization. It shows promising performance. This study helps researchers to create larger datasets and better embedding techniques for multidimensional image quality.



  • [funded by: DFG (TRR 161)]

    Predicting User Preferences of Dimensionality Reduction Embedding Quality

    Cristina Morariu, Adrien Bibal, René Cutura, Benoit Frenay, Michael Sedlmair

    A plethora of dimensionality reduction techniques have emerged over the past decades, leaving researchers and analysts with a wide variety of choices for reducing their data, all the more so given that some techniques come with additional hyper-parametrization (e.g., t-SNE, UMAP, etc.). Recent studies show that people often use dimensionality reduction as a black box, regardless of the specific properties the method itself preserves. Hence, evaluating and comparing 2D embeddings is usually decided qualitatively, by setting embeddings side-by-side and letting human judgment decide which embedding is the best. In this work, we propose a quantitative way of evaluating embeddings that nonetheless places human perception at the center. We run a comparative study, where we ask people to select “good” and “misleading” views between scatterplots of low-dimensional embeddings of image datasets, simulating the way people usually select embeddings. We use the study data as labels for a set of quality metrics for a supervised machine learning model whose purpose is to discover and quantify what exactly people are looking for when deciding between embeddings. Using the model as a proxy for human judgments, we rank embeddings on new datasets, explain why they are relevant, and quantify the degree of subjectivity when people select preferred embeddings.



  • [funded by: DFG (TRR 161)]

    Accessibility for Color Vision Deficiencies: Challenges and Findings of a Large Scale Study on Paper Figures

    Katrin Angerbauer, Nils Rodrigues, René Cutura, Seyda Öney, Nelusa Pathmanathan, Cristina Morariu, Daniel Weiskopf, Michael Sedlmair

    We present an exploratory study on the accessibility of images in publications when viewed with color vision deficiencies (CVDs). The study is based on 1,710 images sampled from a visualization dataset (VIS30K) over five years. We simulated four CVDs on each image. First, four researchers (one with a CVD) identified existing issues and helpful aspects in a subset of the images. Based on the resulting labels, 200 crowdworkers provided 30,000 ratings on present CVD issues in the simulated images. We analyzed this data for correlations, clusters, trends, and free text comments to gain a first overview of paper figure accessibility. Overall, about 60 % of the images were rated accessible. Furthermore, our study indicates that accessibility issues are subjective and hard to detect. On a meta-level, we reflect on our study experience to point out challenges and opportunities of large-scale accessibility studies for future research directions.



  • [funded by: DFG (EXC 2075 & TRR 161)]

    Metaphorical Visualization: Mapping Data to Familiar Concepts

    Gleb Tkachev, René Cutura, Michael Sedlmair, Steffen Frey, Thomas Ertl

    We present a new approach to visualizing data that is well-suited for personal and casual applications. The idea is to map the data to another dataset that is already familiar to the user, and then rely on their existing knowledge to illustrate relationships in the data. We construct the map by preserving pairwise distances or by maintaining relative values of specific data attributes. This metaphorical mapping is very flexible and allows us to adapt the visualization to its application and target audience. We present several examples where we map data to different domains and representations. This includes mapping data to cat images, encoding research interests with neural style transfer and representing movies as stars in the night sky. Overall, we find that although metaphors are not as accurate as the traditional techniques, they can help design engaging and personalized visualizations.



  • [funded by: FFG (ViSciPub)]

    Illegible Semantics: Exploring the Design Space of Metal Logos

    Gerrit J. Rijken, René Cutura, Frank Heyen, Michael Sedlmair, Michael Correll, Jason Dykes, Noeska Smit

    The logos of metal bands can be by turns gaudy, uncouth, or nearly illegible. Yet, these logos work: they communicate sophisticated notions of genre and emotional affect. In this paper we use the design considerations of metal logos to explore the space of “illegible semantics”: the ways that text can communicate information at the cost of readability, which is not always the most important objective. In this work, drawing on formative visualization theory, professional design expertise, and empirical assessments of a corpus of metal band logos, we describe a design space of metal logos and present a tool through which logo characteristics can be explored through visualization. We investigate ways in which logo designers imbue their text with meaning and consider opportunities and implications for visualization more widely.



  • [funded by: DFG (TRR 161)]

    DumbleDR: Predicting User Preferences of Dimensionality Reduction Projection Quality

    Cristina Morariu, Adrien Bibal, René Cutura, Benoit Frenay, Michael Sedlmair

    A plethora of dimensionality reduction techniques have emerged over the past decades, leaving researchers and analysts with a wide variety of choices for reducing their data, all the more so given that some techniques come with additional parametrization (e.g. t-SNE, UMAP, etc.). Recent studies show that people often use dimensionality reduction as a black box, regardless of the specific properties the method itself preserves. Hence, evaluating and comparing 2D projections is usually decided qualitatively, by setting projections side-by-side and letting human judgment decide which projection is the best. In this work, we propose a quantitative way of evaluating projections that nonetheless places human perception at the center. We run a comparative study, where we ask people to select 'good' and 'misleading' views between scatterplots of low-dimensional projections of image datasets, simulating the way people usually select projections. We use the study data as labels for a set of quality metrics whose purpose is to discover and quantify what exactly people are looking for when deciding between projections. Using this proxy for human judgments, we rank projections on new datasets, explain why they are relevant, and quantify the degree of subjectivity in the projections selected.



  • [funded by: FFG]

    Caarvida: Visual Analytics for Test Drive Videos

    Alexander Achberger, René Cutura, Oguzhan Türksoy, Michael Sedlmair

    We report on an interdisciplinary visual analytics project wherein automotive engineers analyze test drive videos. These videos are annotated with navigation-specific augmented reality (AR) content, and the engineers need to identify issues and evaluate the behavior of the underlying AR navigation system. With the increasing amount of video data, traditional analysis approaches can no longer be conducted in an acceptable timeframe. To address this issue, we collaboratively developed Caarvida, a visual analytics tool that helps engineers to accomplish their tasks faster and handle an increased number of videos. Caarvida combines automatic video analysis with interactive and visual user interfaces. We conducted two case studies which show that Caarvida successfully supports domain experts and speeds up their task completion time.


Training


  • The principles of effective leadership



  • Self-Management Between Structure and Flexibility



  • Agile Project Management with Scrum



  • Philosophy of Science