Pedro Miguel Sánchez Sánchez, José María Jorquera Valero, Alberto Huertas Celdran, Gérôme Bovet, Manuel Gil Pérez, Gregorio Martínez Pérez, LwHBench: A low-level hardware component benchmark and dataset for Single Board Computers, Internet of Things, Vol. 22 (1), 2023. (Journal Article)
In today’s computing environment, where Artificial Intelligence (AI) and data processing are moving toward the Internet of Things (IoT) and Edge computing paradigms, benchmarking resource-constrained devices is a critical task for evaluating their suitability and performance. Among these devices, Single-Board Computers stand out as multi-purpose and affordable systems. The literature has explored Single-Board Computer performance when running high-level benchmarks specialized in particular application scenarios, such as AI or medical applications. However, lower-level benchmarking applications and datasets are needed to enable new Edge-based AI solutions for network, system, and service management based on device and component performance, such as individual device identification. Thus, this paper presents LwHBench, a low-level hardware benchmarking application for Single-Board Computers that measures the performance of CPU, GPU, memory, and storage, taking into account the component constraints of these types of devices. LwHBench has been implemented for Raspberry Pi devices and run for 100 days on a set of 45 devices to generate an extensive dataset that enables the use of AI techniques in scenarios where performance data can help in the device management process. In addition, to demonstrate the cross-scenario capability of the dataset, a series of AI-enabled use cases on device identification and the impact of context on performance are presented as an exploration of the published data. Finally, the benchmark application has been adapted and applied to an agriculture-focused scenario with three RockPro64 devices.
Angel Luis Perales Gómez, Lorenzo Fernández Maimó, Alberto Huertas Celdran, Félix J García Clemente, An interpretable semi‐supervised system for detecting cyberattacks using anomaly detection in industrial scenarios, IET Information Security, Vol. 17 (4), 2023. (Journal Article)
When detecting cyberattacks in industrial settings, it is not sufficient to determine whether the system is suffering a cyberattack. It is also fundamental to explain why the system is under attack and which assets are affected. In this context, anomaly detection based on Machine Learning (ML) and Deep Learning (DL) techniques has shown great performance in detecting cyberattacks in industrial scenarios. However, two main limitations hinder their use in a real environment. Firstly, most solutions are trained using a supervised approach, which is impractical in the real industrial world. Secondly, the use of black-box ML and DL techniques makes it impossible to interpret the decisions made by the model. This article proposes an interpretable and semi-supervised system to detect cyberattacks in industrial settings. Our proposal was validated using data collected from the Tennessee Eastman Process. To the best of our knowledge, this system is the only one that offers interpretability together with a semi-supervised approach in an industrial setting. Our system discriminates between causes and effects of anomalies and achieved the best performance for 11 of 20 anomaly types, with an overall recall of 0.9577, a precision of 0.9977, and an F1-score of 0.9711.
Tzvetan Popov, Marius Tröndle, Zofia Barańczuk-Turska, Christian Pfeiffer, Stefan Haufe, Nicolas Langer, Test-retest reliability of resting-state EEG in young and older adults, Psychophysiology, Vol. 60 (7), 2023. (Journal Article)
The quantification of resting-state electroencephalography (EEG) is associated with a variety of measures. These include power estimates at different frequencies, microstate analysis, and frequency-resolved source power and connectivity analyses. Resting-state EEG metrics have been widely used to delineate the manifestation of cognition and to identify psychophysiological indicators of age-related cognitive decline. The reliability of the utilized metrics is a prerequisite for establishing robust brain-behavior relationships and clinically relevant indicators of cognitive decline. To date, however, a test-retest reliability examination of measures derived from resting human EEG, comparing different resting-state measures between young and older participants within the same adequately powered dataset, has been lacking. The present registered report examined test-retest reliability in a sample of 95 young (age range: 20-35 years) and 93 older (age range: 60-80 years) participants. Good-to-excellent test-retest reliability was confirmed in both age groups for power estimates at both scalp and source levels, as well as for individual alpha peak power and frequency. Partial confirmation was observed for hypotheses stating good-to-excellent reliability of microstate measures and connectivity. Equal levels of reliability between the age groups were confirmed for scalp-level power estimates, and partially so for source-level power and connectivity. In total, five of the nine postulated hypotheses were empirically supported and confirmed good-to-excellent reliability of the most commonly reported resting-state EEG metrics.
Leonard Bauersfeld, Angel Romero, Manasi Muglikar, Davide Scaramuzza, Cracking double-blind review: Authorship attribution with deep learning, PLoS ONE, Vol. 18 (6), 2023. (Journal Article)
Double-blind peer review is considered a pillar of academic research because it is perceived to ensure a fair, unbiased, and fact-centered scientific discussion. Yet, experienced researchers can often correctly guess from which research group an anonymous submission originates, biasing the peer-review process. In this work, we present a transformer-based neural-network architecture that uses only the text content and the author names in the bibliography to attribute an anonymous manuscript to an author. To train and evaluate our method, we created the largest authorship-identification dataset to date. It leverages all research papers publicly available on arXiv, amounting to over 2 million manuscripts. On arXiv subsets with up to 2,000 different authors, our method achieves unprecedented authorship-attribution accuracy, with up to 73% of papers attributed correctly. We present a scaling analysis to highlight the applicability of the proposed method to even larger datasets once sufficient compute capabilities are more widely available to the academic community. Furthermore, we analyze the attribution accuracy in settings where the goal is to identify all authors of an anonymous manuscript. Our method not only predicts the author of an anonymous work but also provides empirical evidence of the key aspects that make a paper attributable. We have open-sourced the tools necessary to reproduce our experiments.
Cataldo Musto, Amra Delic, Oana Inel, Marco Polignano, Amon Rapp, Giovanni Semeraro, Jürgen Ziegler, 5th Workshop on Explainable User Models and Personalised Systems (ExUM), In: Adjunct Proceedings of the 31st ACM Conference on User Modeling, Adaptation and Personalization, 2023. (Conference or Workshop Paper)
Riccardo Dal Bello, Mariia Lapaeva, Agustina La Greca Saint-Esteven, Philipp Wallimann, Manuel Günther, Ender Konukoglu, Nicolaus Andratschke, Matthias Guckenberger, Stephanie Tanadini-Lang, Patient-specific quality assurance strategies for synthetic computed tomography in magnetic resonance-only radiotherapy of the abdomen, Physics and Imaging in Radiation Oncology, Vol. 27, 2023. (Journal Article)
BACKGROUND AND PURPOSE
The superior tissue contrast of magnetic resonance (MR) compared to computed tomography (CT) has led to increasing interest in MR-only radiotherapy. For the latter, the dose calculation should be performed on a synthetic CT (sCT). Patient-specific quality assurance (PSQA) methods have not yet been established, and this study aimed to assess several software-based solutions.
MATERIALS AND METHODS
A retrospective study was performed on 20 patients treated at an MR-Linac, who were selected to evenly cover four subcategories: (i) standard, (ii) air pockets, (iii) lung, and (iv) implant cases. The neural network (NN) CycleGAN was adopted to generate a reference sCT, which was then compared to four PSQA methods: (A) water override of the body, (B) five tissue classes with bulk densities, (C) an sCT generated by a separate NN (pix2pix), and (D) a deformed CT.
RESULTS
The evaluation of the dose endpoints demonstrated that while all methods A-D provided statistically equivalent results (p = 0.05) within the 2% level for the standard cases (i), only methods C-D guaranteed the same result over the whole cohort. The bulk-density override was shown to be a valuable method in the absence of lung tissue within the beam path.
CONCLUSION
The observations of this study suggest that an additional sCT generated by a separate NN is an appropriate tool for performing PSQA of an sCT in an MR-only workflow at an MR-Linac. The time and dose-endpoint requirements, namely within 10 min and 2%, were respected.
Andri Färber, Alexandre de Spindler, Andrina Moser, Gerhard Schwabe, Closing the Loop for Patients with Chronic Diseases - from Problems to a Solution Architecture, In: The 11th IEEE International Conference on Healthcare Informatics, 2023-06-26. (Conference or Workshop Paper published in Proceedings)
There is growing evidence that mobile health (mHealth) applications can assist patients with chronic conditions. However, most mHealth apps are isolated from healthcare professional (HCP) workflows and IT infrastructure. The resulting fragmentation of digital support in healthcare calls for integrating architectures, which would benefit patients, HCPs, product managers, and software developers. Our analysis of existing architectures revealed valuable architectural elements, but none of the analyzed architectures provided sufficient integration for the chronically ill. Therefore, we propose an architecture for integrated mHealth solutions. We followed a design science research approach and performed all activities of the DSRM Process Model. By forming a closed control loop and engaging HCPs, the architecture is designed to improve patient adherence to treatment, health literacy, and recall of recommendations and information. The resulting Closing-the-Loop Architecture (LoopArt) deploys three software agents: a Health Literacy Agent, an Adherence Agent, and a Conversational Agent. For demonstration purposes, the Health Literacy Agent was implemented for obese patients as an integrated system consisting of an mHealth app and a collaboration tool that is part of the electronic medical record (EMR).
Pedro Miguel Sánchez Sánchez, Alberto Huertas Celdran, Gérôme Bovet, Gregorio Martínez Pérez, Burkhard Stiller, A Trustworthy Federated Learning Framework for Individual Device Identification, In: 2023 JNIC Cybersecurity Conference (JNIC), Institute of Electrical and Electronics Engineers, 2023-06-21. (Conference or Workshop Paper published in Proceedings)
IoT scenarios face cybersecurity concerns due to unauthorized devices that can impersonate legitimate ones by using identical software and hardware configurations. This can lead to sensitive information leaks, data poisoning, or privilege escalation. Behavioral fingerprinting and ML/DL techniques have been used in the literature to identify devices based on performance differences caused by manufacturing imperfections. In addition, maintaining data privacy through Federated Learning (FL) is also a challenge in IoT scenarios. FL allows multiple devices to collaboratively train a machine learning model without sharing their data, but it requires addressing issues such as communication latency, device heterogeneity, and data security concerns. In this sense, Trustworthy Federated Learning has emerged as a potential solution, combining privacy-preserving techniques and metrics to ensure data privacy, model integrity, and secure communication between devices. Therefore, this work proposes a trustworthy federated learning framework for individual device identification. It first analyzes the existing metrics for trustworthiness evaluation in FL and organizes them into six pillars (privacy, robustness, fairness, explainability, accountability, and federation) for computing the trustworthiness of FL models. The framework has a modular setup in which one component is in charge of federated model generation and another of trustworthiness evaluation. The framework is validated in a real scenario composed of 45 identical Raspberry Pi devices whose hardware components are monitored to generate individual behavior fingerprints. The solution achieves an average F1-score of 0.9724 for identification in a centralized setup, while the average F1-score in the federated setup is 0.8320. In addition, the model achieves a final trustworthiness score of 0.6 on state-of-the-art metrics, indicating that further privacy and robustness techniques are required to improve this score.
Chao Feng, Jan Von der Assen, Alberto Huertas Celdran, Steven Näf, Gérôme Bovet, Burkhard Stiller, FeDef: A Federated Defense Framework Using Cooperative Moving Target Defense, In: 2023 8th International Conference on Smart and Sustainable Technologies (SpliTech), Institute of Electrical and Electronics Engineers, 2023-06-20. (Conference or Workshop Paper published in Proceedings)
With growing concerns about cyberattacks on IoT devices, many different cybersecurity solutions have been introduced. Among them, the Moving Target Defense (MTD) paradigm aims to reduce the likelihood of a successful threat event by changing the attack surface proactively or reactively. While proactive approaches degrade the quality of service, reactive ones cannot prevent damage. Thus, this work proposes FeDef, a federated and cooperative framework able to deploy MTD techniques reactively and proactively on resource-constrained devices affected by command-and-control-based malware. The performance of FeDef has been evaluated in a scenario composed of several devices infected with Bashlite. Multiple experiments have demonstrated improvements in terms of system-wide infection time, service disruption, and resource consumption. Results show that FeDef can be implemented with limited resources and minimal impact on network and service availability.
Mathias Gehrig, Davide Scaramuzza, Recurrent Vision Transformers for Object Detection with Event Cameras, In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, Institute of Electrical and Electronics Engineers, 2023. (Conference or Workshop Paper published in Proceedings)
We present Recurrent Vision Transformers (RVTs), a novel backbone for object detection with event cameras. Event cameras provide visual information with sub-millisecond latency at a high dynamic range and with strong robustness against motion blur. These unique properties offer great potential for low-latency object detection and tracking in time-critical scenarios. Prior work in event-based vision has achieved outstanding detection performance, but at the cost of substantial inference time, typically beyond 40 milliseconds. By revisiting the high-level design of recurrent vision backbones, we reduce inference time by a factor of 6 while retaining similar performance. To achieve this, we explore a multi-stage design that utilizes three key concepts in each stage: first, a convolutional prior that can be regarded as a conditional positional embedding; second, local and dilated global self-attention for spatial feature interaction; third, recurrent temporal feature aggregation to minimize latency while retaining temporal information. RVTs can be trained from scratch to reach state-of-the-art performance on event-based object detection, achieving an mAP of 47.2% on the Gen1 automotive dataset. At the same time, RVTs offer fast inference (<12 ms on a T4 GPU) and favorable parameter efficiency (5x fewer than prior art). Our study brings new insights into effective design choices that can be fruitful for research beyond event-based vision.
Yannick Schnider, Stanislaw Woźniak, Mathias Gehrig, Jules Lecomte, Axel Von Arnim, Luca Benini, Davide Scaramuzza, Angeliki Pantazi, Neuromorphic Optical Flow and Real-time Implementation with Event Cameras, In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2023, Institute of Electrical and Electronics Engineers, 2023-06-18. (Conference or Workshop Paper published in Proceedings)
Optical flow provides information on relative motion, an important component in many computer vision pipelines. Neural networks provide high-accuracy optical flow, yet their complexity is often prohibitive for applications at the edge or in robots, where efficiency and latency play a crucial role. To address this challenge, we build on the latest developments in event-based vision and spiking neural networks. We propose a new network architecture, inspired by Timelens, that improves the state-of-the-art self-supervised optical flow accuracy when operated in both spiking and non-spiking modes. To implement a real-time pipeline with a physical event camera, we propose a methodology for principled model simplification based on activity and latency analysis. We demonstrate high-speed optical flow prediction with almost two orders of magnitude reduced complexity while maintaining accuracy, opening the path for real-time deployments.
Manasi Muglikar, Leonard Bauersfeld, Diederik Paul Moeys, Davide Scaramuzza, Event-Based Shape from Polarization, In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, Institute of Electrical and Electronics Engineers, 2023-06-18. (Conference or Workshop Paper published in Proceedings)
State-of-the-art solutions for Shape-from-Polarization (SfP) suffer from a speed-resolution tradeoff: they either sacrifice the number of polarization angles measured or necessitate lengthy acquisition times due to framerate constraints, thus compromising either accuracy or latency. We tackle this tradeoff using event cameras. Event cameras operate at microsecond resolution with negligible motion blur and output a continuous stream of events that precisely and asynchronously measures how light changes over time. We propose a setup consisting of a linear polarizer rotating at high speed in front of an event camera. Our method uses the continuous event stream caused by the rotation to reconstruct relative intensities at multiple polarizer angles. Experiments demonstrate that our method outperforms physics-based baselines using frames, reducing the MAE by 25% on synthetic and real-world datasets. We observe, however, that challenging real-world conditions (i.e., when few events are generated) harm the performance of physics-based solutions. To overcome this, we propose a learning-based approach that estimates surface normals even at low event rates, improving on the physics-based approach by 52% on the real-world dataset. The proposed system achieves an acquisition speed equivalent to 50 fps (more than twice the framerate of the commercial polarization sensor) while retaining a spatial resolution of 1 MP. Our evaluation is based on the first large-scale dataset for event-based SfP.
Nico Messikommer, Carter Fang, Mathias Gehrig, Davide Scaramuzza, Data-Driven Feature Tracking for Event Cameras, In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, Institute of Electrical and Electronics Engineers, 2023-06-18. (Conference or Workshop Paper published in Proceedings)
Because of their high temporal resolution, increased resilience to motion blur, and very sparse output, event cameras have been shown to be ideal for low-latency and low-bandwidth feature tracking, even in challenging scenarios. Existing feature tracking methods for event cameras are either handcrafted or derived from first principles, but they require extensive parameter tuning, are sensitive to noise, and do not generalize to different scenarios due to unmodeled effects. To tackle these deficiencies, we introduce the first data-driven feature tracker for event cameras, which leverages low-latency events to track features detected in a grayscale frame. We achieve robust performance via a novel frame attention module, which shares information across feature tracks. By transferring zero-shot from synthetic to real data, our data-driven tracker outperforms existing approaches in relative feature age by up to 120% while also achieving the lowest latency. This performance gap is further increased to 130% by adapting our tracker to real data with a novel self-supervision strategy.
Haiyu Wu, Grace Bezold, Manuel Günther, Terrance Boult, Michael C King, Kevin W Bowyer, Consistency and Accuracy of CelebA Attribute Values, In: Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Institute of Electrical and Electronics Engineers, 2023-06. (Conference or Workshop Paper published in Proceedings)
We report the first systematic analysis of the experimental foundations of facial attribute classification. Two annotators independently assigning attribute values show that only 12 of 40 common attributes are assigned values with >= 95% consistency, and three (high cheekbones, pointed nose, oval face) have essentially random consistency. Of 5,068 duplicate face appearances in CelebA, attributes have contradicting values on between 10 and 860 of the duplicates. A manual audit of a subset of CelebA estimates error rates as high as 40% for (no beard = false), even though the labeling consistency experiment indicates that no beard could be assigned with >= 95% consistency. Selecting mouth slightly open (MSO) for deeper analysis, we estimate the error rate for (MSO = true) at about 20% and for (MSO = false) at about 2%. A corrected version of the MSO attribute values enables learning a model that achieves higher accuracy than previously reported for MSO. Corrected values for CelebA MSO are available at https://github.com/HaiyuWu/CelebAMSO.
Gabriela Morgenshtern, Arnav Verma, Sana Tonekaboni, Robert Greer, Jürgen Bernard, Mjaye Mazwi, Anna Goldenberg, Fanny Chevalier, RiskFix: Supporting Expert Validation of Predictive Timeseries Models in High-Intensity Settings, In: EuroVis 2023 - Short Papers, The Eurographics Association, 2023. (Conference or Workshop Paper published in Proceedings)
Many real-world machine learning (ML) workflows exist in longitudinal, interactive settings. This longitudinal nature is often due to incrementally growing data, e.g., in clinical settings, where observations about patients evolve over their care period. Additionally, experts may become a bottleneck in the workflow, as their limited availability, combined with their role as human oracles, often leads to a lack of ground-truth data. In such cases, where ground-truth data is scarce, the validation of interactive machine learning workflows relies on domain experts: only they can assess the validity of a model prediction, especially in new situations that are only weakly covered by the available training data. Based on our experience working with domain experts of a pediatric hospital's intensive care unit, we derive requirements for the design of support interfaces for the validation of interactive ML workflows in fast-paced, high-intensity environments. We present RiskFix, a software package optimized for the validation workflow of domain experts in such contexts. RiskFix is adapted to the cognitive resources and needs of domain experts in validating and giving feedback to the model. RiskFix also supports data scientists in their model-building work, with appropriate data structuring for the re-calibration (and possible retraining) of ML models.
Luciano Romero Calla, Bipul Mohanto, Renato Pajarola, Oliver Staadt, Multi-Display Ray Tracing Framework, In: Eurographics Conference Posters, Eurographics Association, 2023. (Conference Presentation)
We present a framework that provides a highly efficient and scalable multi-display, ray-tracing-based rendering system capable of utilizing multiple GPU devices to produce high-quality images. Our system integrates advanced technologies, including MPI, CUDA, CUDA IPC, OptiX 7.6, and C++, resulting in a cutting-edge solution for interactive rendering.
Sverrir Arnórsson, Florian Abeillon, Ibrahim Al-Hazwani, Jürgen Bernard, Hanna Hauptmann, Mennatallah El-Assady, Why am I reading this? Explaining Personalized News Recommender Systems, In: EuroVis Workshop on Visual Analytics (EuroVA), The Eurographics Association, 2023. (Conference or Workshop Paper published in Proceedings)
Social media and online platforms significantly impact what millions of people are exposed to daily, mainly through recommended content. Hence, recommendation processes have to benefit individuals and society. With this in mind, we present the visual workspace NewsRecXplain, with the goals of (1) explaining and raising awareness about recommender systems, (2) enabling individuals to control and customize news recommendations, and (3) empowering users to contextualize their news recommendations and escape their filter bubbles. The visual workspace achieves these goals by allowing users to configure their own individualized recommender system, whose news recommendations can then be explained within the workspace by way of embeddings and statistics on content diversity.
Johanna Schmidt, Harald Piringer, Thomas Mühlbacher, Jürgen Bernard, Human-Based and Automatic Feature Ideation for Time Series Data: A Comparative Study, In: EuroVis Workshop on Visual Analytics (EuroVA), 2023-06-12. (Conference or Workshop Paper published in Proceedings)
Feature ideation is a crucial early step in the feature extraction process, in which new features are derived from raw data. For phenomena in time series data, this often includes the ideation of statistical parameters, representations of trends and periodicity, or other geometrical and shape-based characteristics. The strengths of automatic feature ideation methods are their generalizability, applicability, and robustness across cases, whereas human-based feature ideation is most useful in uncharted real-world applications, where incorporating domain knowledge is key. Naturally, both types of methods have proven their right to exist. The motivation for this work is our observation that, for time series data, surprisingly few human-based feature ideation approaches exist. In this work, we discuss requirements for human-based feature ideation in VA applications and outline a set of characteristics to assess the goodness of feature sets. Ultimately, we present the results of a comparative study of human-based and automated feature ideation methods for time series data in a real-world Industry 4.0 setting. One of our results and discussion items is a call to arms for more human-based feature ideation approaches.
Linda Weigl, Tom Barbereau, Johannes Sedlmeir, Liudmila Zavolokina, Mediating the Tension between Data Sharing and Privacy: The Case of DMA and GDPR, In: 31st European Conference on Information Systems (ECIS 2023), Norway, 2023-06-11. (Conference or Workshop Paper published in Proceedings)
The Digital Markets Act (DMA) constitutes a crucial part of the European legislative framework addressing the dominance of 'Big Tech'. It intends to foster fairness and competition in Europe's digital platform economy by imposing obligations on 'gatekeepers' to share end-user-related information with business users. Yet, this may involve the processing of personal data subject to the General Data Protection Regulation (GDPR). The obligation to provide access to personal data in a GDPR-compliant manner poses a regulatory and technical challenge and can serve as a justification for gatekeepers to refrain from data sharing. In this research-in-progress paper, we analyze key tensions between the DMA and the GDPR through the paradox perspective. We argue, through a task-technology fit approach, how privacy-enhancing technologies, particularly anonymization techniques, and portability could help mediate tensions between data sharing and privacy. Our contribution provides theoretical and practical insights to facilitate legal compliance.
Dario Staehelin, Maike Greve, Gerhard Schwabe, Empowering community health workers with mobile health: learnings from two projects on non-communicable disease care, In: European Conference on Information Systems ECIS 2023, AIS Electronic Library (AISeL), 2023. (Conference or Workshop Paper published in Proceedings)
Community-based healthcare is a promising approach to tackling the workforce shortage in healthcare, especially in low- and middle-income countries. Community health workers (CHWs) are lay cadres who bridge healthcare disparities by living in the communities where they provide basic health services, mainly through education. However, high attrition rates and underperformance among these health workers limit the scope of such programs, and mobile health has not been the hoped-for silver bullet for solving these two challenges. This paper examines two pilot projects using mobile health for non-communicable disease care from an empowerment perspective. We propose design knowledge of mobile health for the structural empowerment of CHWs. Furthermore, we evaluate their psychological empowerment by analyzing mobile health's intended and unintended consequences. Finally, our study demonstrates how the empowerment of CHWs could help overcome the persisting challenges and lead to a sustainable and resilient health system.