Remy Egloff, TaskSnap: Semi-Automatic Task Context Capturing & Task Resumption Support for Software Developers, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Master's Thesis)
The workday of software developers is highly fragmented, as many developers work on multiple tasks per day and frequently switch between them, for example, to help co-workers or when they are stuck. Frequent task switches introduce time overhead, as the user must repeatedly capture and restore a task's working context (e.g., applications, documents, folders) and mental context (e.g., task knowledge, goals, intentions). Previous work focused on supporting users in restoring their working context, for instance, by keeping track of task-related documents or web pages. However, these approaches frequently do not help users re-establish their mental task context and operate fully automatically, which can lead to restoring task-unrelated artifacts. In addition, existing approaches generally do not target software developers, as they do not display source-code-related information. To overcome these shortcomings, we propose an approach that facilitates task context capturing and resumption for software developers and data scientists by allowing the user to semi-automatically create a snapshot of a task's associated working and mental context at any time. Later, when resuming a task, all information stored in a snapshot can be restored. A two-week pilot study with six participants showed that the approach fit well into existing workflows, supported users in capturing their working and mental context, and saved them time when resuming a task. Users mainly created snapshots when they had enough time, for example, at the end of the workday to reflect on the day and detach from work. Creating snapshots during instant task switches was less common, as participants did not encounter these situations frequently during the pilot study, likely because they were part-time developers. In addition, participants curated snapshots by providing thorough descriptions of their intent regarding how a task should be continued and frequently restored snapshots within 24 hours.
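The working/mental context split described above can be illustrated with a small data model. This is purely a sketch: the field names, the `TaskSnapshot` class, and the `restore` helper are hypothetical illustrations of the idea, not TaskSnap's actual schema or API.

```python
# Illustrative data model for a task snapshot; all names are assumptions
# based on the working/mental context split described in the abstract.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TaskSnapshot:
    task: str
    created: datetime
    # Working context: what was open when the snapshot was taken.
    applications: list = field(default_factory=list)
    documents: list = field(default_factory=list)
    folders: list = field(default_factory=list)
    # Mental context: curated by the user before switching away.
    intent: str = ""          # how the task should be continued
    open_questions: list = field(default_factory=list)

def restore(snapshot):
    """Sketch of resumption: reopen the working context, surface the intent."""
    for app in snapshot.applications:
        print(f"launch {app}")
    for doc in snapshot.documents:
        print(f"open {doc}")
    print(f"Resume note: {snapshot.intent}")
```

The key design point is that the mental context (`intent`, `open_questions`) is user-curated text rather than automatically captured state, which is what distinguishes the semi-automatic approach from fully automated context restorers.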
Christoph Bachmann, ScreenCurator: Curation of digital knowledge with screenshots, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Bachelor's Thesis)
In today's world, knowledge workers are often overwhelmed by the vast amounts of information they encounter while carrying out their tasks. As a result, it is vital to develop effective strategies for efficiently reusing previously foraged information to minimize foraging effort. One of these strategies is information curation, the concept of keeping, managing, and exploiting foraged information. Existing prototypes that have addressed this topic mostly target specific use cases, such as web resource curation or task history curation. Only a few of them allow the capture of cross-application settings, and none are optimized to support users in information foraging tasks: they lack extensive retrieval functionality, semantic content analysis, and structuring options for curated assets. To fill this void, we designed and developed the ScreenCurator. Our application allows users to capture cross-application screen settings and store them with extensive metadata. This combination is intended to enable comprehensive retrievability and reusability of curated knowledge. To provide users with a simple and pleasant experience, the ScreenCurator implements a certain degree of automation combined with an intuitive interface. Our application was evaluated in a user study in which seven participants used the ScreenCurator for 10-15 working days alongside their daily tasks. The gathered feedback suggested that our approach improved the experience of taking and retrieving screenshots. Furthermore, two high-level use cases were identified: long-term backups and short-term to-dos. Nevertheless, we found that the ScreenCurator needs to increase its degree of automation and add further structuring options. Additionally, it would be of great value if the ScreenCurator enabled collaborative curation and knowledge sharing. Besides extending the feature set, care should be taken to maintain the simplicity and intuitiveness of the application.
Virginia Schmid, Wie kommen CSR-Aktivitäten einer Meta-Organisation in Krisensituationen zustande? Eine Fallstudie zur Gratisnutzung des Schweizer öffentlichen Verkehrs durch die ukrainischen Flüchtlinge im Jahr 2022, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Bachelor's Thesis)
Fiona Bühlmann, Legitimitätsstrategien im Kontext von radikalem und graduellem Legitimitätsverlust: Eine Analyse am Beispiel der Schweizer Skigebiete, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Bachelor's Thesis)
Dario Gagulic, Computing the Trustworthiness Level of Black Box Machine and Deep Learning Models, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Master's Thesis)
The field of Artificial Intelligence (AI) is rapidly evolving and increasingly being integrated into our everyday lives. Black Box Machine and Deep Learning systems support humans in making important decisions in safety-critical industries, decisions that consequently influence the lives of real people. This has raised the need to assess a model's trustworthiness. Trust is a subjective concept and depends on many factors. As Black Box models grow larger and more complex, it has become impossible, even for domain experts, to understand their reasoning and analyze how such models derive conclusions. Fortunately, early work has developed automatic tools that allow the computation and evaluation of trust in a particular system, based on four pillars: fairness, explainability, robustness, and methodology. The algorithm computes various metrics and relies on the user to upload the model, the dataset used, and the FactSheet describing the applied training methodology. This poses a problem when computing the trustworthiness level of Black Box Machine and Deep Learning models with limited data access. Notably, the presented work identified two common definitions of the term Black Box established in the research community: the first focuses on complex systems with limited interpretability, while the second, underexplored with respect to trustworthiness assessment, describes systems with limited information available. Therefore, this master's thesis introduces a Black Box Taxonomy, categorizing Machine Learning models into subgroups based on interpretability and adding another dimension distinguishing their available information levels. Further, a novel approach is proposed that introduces a synthetic dataset generator to compute the trust score of Black Box models. The generator offers two approaches (MUST and MAY) to balance privacy and accuracy concerns. This solution addresses incomputable metrics, leading to a more accurate trustworthiness assessment. To validate the approach, the implementation was evaluated in two real-world scenarios.
Lynn Zumtaugwald, Designing and Implementing an Advanced Algorithm to Measure the Trustworthiness Level of Federated Learning Models, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Master's Thesis)
Artificial intelligence (AI) has permeated our daily lives and assists in the decision processes of critical sectors such as medicine and law. Therefore, it is now more important than ever that the AI systems being developed are reliable, ethical, and do not cause harm to humans. The High-Level Expert Group on AI (AI-HLEG) of the European Commission has laid the foundation by defining seven key requirements for trustworthy AI systems.
To address concerns about privacy risks associated with centralized learning approaches, federated learning (FL) has emerged as a promising and widely used alternative. FL allows multiple clients to collaboratively train machine learning models without the need to share private data. Because of the high adoption of FL systems, ensuring that they are trustworthy is crucial. Previous research efforts have proposed a trustworthy FL taxonomy with six pillars, each comprehensively defined with notions and metrics. This taxonomy covers six of the seven requirements defined by the AI-HLEG. However, one notable aspect that has been largely overlooked by research is the requirement for environmental well-being in trustworthy AI/FL. This leaves a significant gap between the expectations set by governing bodies and the guidelines applied and measured by researchers.
This master's thesis addresses this gap by introducing the sustainability pillar to the trustworthy FL taxonomy, thus presenting the first taxonomy that comprehensively addresses all the requirements defined by the AI-HLEG. The sustainability pillar focuses on assessing the environmental impact of FL systems and incorporates three main aspects: hardware efficiency, federation complexity, and the carbon intensity of the energy grid, each with well-defined metrics. As a second contribution, this master's thesis extends an existing prototype for evaluating the trustworthiness of FL systems with the sustainability pillar.
The prototype is then extensively evaluated in various scenarios involving different federation configurations. The results shed light on the trustworthiness of different federation configurations in settings with varying complexities, hardware, and energy grids. Importantly, the sustainability pillar's score refines the overall trust score, which is computed across seven key pillars, by accounting for the environmental impact of FL systems. Thus, the proposed taxonomy and prototype are the first to comprehensively address all seven AI-HLEG requirements and lay the foundation for a more accurate trustworthiness assessment of FL systems.
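The three aspects of the sustainability pillar could be combined into a single score roughly as follows. This is a hypothetical sketch: the weights, the normalization ranges, and the 500 g CO2/kWh reference value are assumptions for illustration, not the metric definitions used in the thesis.

```python
# Hypothetical aggregation of the three sustainability aspects named above;
# weights and normalizations are illustrative assumptions.
def sustainability_score(hardware_efficiency, federation_complexity,
                         carbon_intensity_g_per_kwh,
                         weights=(1 / 3, 1 / 3, 1 / 3)):
    """All inputs are mapped to [0, 1], where 1 is most sustainable."""
    # hardware_efficiency: assumed already normalized to [0, 1]
    # federation_complexity: in [0, 1]; more complex federations score lower
    complexity_score = 1.0 - federation_complexity
    # carbon intensity: normalized against an assumed 500 g CO2/kWh worst case
    carbon_score = max(0.0, 1.0 - carbon_intensity_g_per_kwh / 500.0)
    parts = (hardware_efficiency, complexity_score, carbon_score)
    return sum(w * p for w, p in zip(weights, parts))
```

Under this shape, a federation trained on efficient hardware in a low-carbon grid scores close to 1, and the resulting value can then discount an overall trust score computed from the other pillars.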
Tim Portmann, Data Discovery in a DDoS Data Mesh Network, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Bachelor's Thesis)
Distributed Denial-of-Service (DDoS) attacks continue to pose a persistent threat in today's digital landscape. Collaborative defense approaches are steadily gaining popularity by countering a distributed attack with a distributed defense. Central to such collaborative approaches is the exchange of DDoS attack data amongst the parties of the defense architecture.
While existing research proposes concepts that enable the collaborative sharing of DDoS information, data-centric solutions remain scarce. Oftentimes, the proposed concepts share a common drawback: their dependence on specific technologies or hardware restricts their broad adoption.
This thesis aims to propose a data-centric solution that enables decentralized parties in a collaborative DDoS defense architecture to exchange DDoS attack information. The proposed solution utilizes a data mesh network to handle information exchange, complemented by a data discovery service to act upon the exchanged DDoS data.
First, the subject and the tools available for building a DDoS data mesh architecture are researched extensively. Subsequently, a design proposal for the DDoS data mesh architecture, including data discovery capabilities, is described. Based on this design, a DDoS data mesh prototype is implemented and deployed using the tools explored earlier. Finally, the data mesh is evaluated with regard to its performance and data discovery capabilities.
The proposed solution utilizes a technology stack consisting of MySQL instances as DDoS data repositories, Trino as a distributed query engine, and Apache Superset as the data discovery service. This combination enables the efficient exchange and exploration of DDoS data, making it effective for collaborative DDoS defense scenarios and a viable data-centric solution for the exchange of DDoS attack data.
Johanna Bieri, Visualization of Facial Attribute Classifiers via Class Activation Mapping, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Bachelor's Thesis)
The use of convolutional neural networks (CNNs) in image classification tasks is a rapidly progressing field of research, including the classification of facial attributes. However, it is not yet completely understood how CNNs make decisions. To improve the transparency of the decision-making process, and thus enhance the interpretability and trustworthiness of CNNs, methods have been developed to visualize this process. In this thesis, we use the Gradient-weighted Class Activation Mapping (Grad-CAM) technique proposed by Selvaraju et al. (2017) to identify the regions of an image that the CNN uses for classification. This technique produces class-specific heatmaps that are intuitively interpretable. To evaluate the class activation maps, we define a set of masks, one for each of the 40 facial attributes that we examine. Using an approach called the Acceptable Mask Ratio (AMR), we quantify how much of the activated area lies within the masked area. The higher the AMR, the more active the CNN is within the area we expect, which usually corresponds to the location of the attribute being classified. We compare two different CNNs: one considers the class imbalance inherent to the data set (balanced CNN), and the other does not (unbalanced CNN). Our results show that, overall, the balanced CNN more often uses image regions that lie within the masked area. Furthermore, the results show an unexpected pattern for the unbalanced CNN: for highly biased attributes, the Grad-CAMs for the majority class show no activity at all.
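The AMR idea, quantifying how much of a Grad-CAM's activated area falls inside an attribute mask, can be sketched in a few lines. The 0.5 activation threshold and the exact ratio definition here are assumptions for illustration; the thesis's precise formulation may differ.

```python
import numpy as np

def acceptable_mask_ratio(heatmap, mask, threshold=0.5):
    """Fraction of activated Grad-CAM pixels that fall inside the attribute mask.

    heatmap: 2-D array of Grad-CAM activations, scaled to [0, 1].
    mask: boolean 2-D array marking the expected attribute region.
    threshold: assumed activation cutoff above which a pixel counts as active.
    """
    active = heatmap >= threshold
    total_active = active.sum()
    if total_active == 0:   # no activation at all, as observed for majority
        return 0.0          # classes of highly biased attributes
    return float((active & mask).sum() / total_active)
```

An AMR of 1.0 would mean every activated pixel lies inside the expected region; the zero-activation branch corresponds to the degenerate case the abstract reports for the unbalanced CNN.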
Nimra Ahmed, “Women just have to accept it when the man wants it”: An Investigation of the Practice of Forced Marriage and the Potential for Design Interventions, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Master's Thesis)
There has been growing interest in Human-Computer Interaction (HCI) and Computer-Supported Cooperative Work (CSCW) research on marginalized communities and women's health and well-being. Important work has been done on domestic violence (DV), intimate partner violence (IPV), and technologies to address these problems, but little research thus far has looked at the issue of forced marriage. In this paper, we present a study investigating the experiences of individuals affected by forced marriage from various cultures, ethnicities, and backgrounds. We also examine the processes and challenges of organizations that provide assistance to people in forced marriage situations and explore opportunities for the design of technologies to support individuals affected by forced marriage. Through in-depth interviews and participatory design exercises with people affected by forced marriage and with staff members of help organizations, we offer a rich account of the experiences surrounding forced marriage and identify avenues through which the HCI and CSCW research communities can leverage their expertise to address the problem of forced marriage, potentially contributing to the reduction or elimination of this harmful practice.
Kartikey Sharma, Using Large Language Models (LLMs) to Expand Condensed Coordinated German and English Expressions into Explicit Paraphrases, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Master's Thesis)
This master's thesis explores fine-tuning Large Language Models (LLMs) to reformulate condensed coordinated expressions found in job postings, the target text genre of this work. Four gold-standard (GS) datasets were created for two tasks in English and German.
The first task focuses on truncated word completion, where elided text like “Haus- und Gartenarbeit” (house and garden work) needs to be completed to “Hausarbeit und Gartenarbeit”. The German GS dataset consists of 510 samples, while the English GS contains 402 samples. The primary goal is to assess the LLMs’ performance in this task and identify promising models for the second, more complex task.
The second task involves expanding condensed coordinated soft-skill requirements like “Sie arbeiten sehr selbständig, ziel- und kundenorientiert” into explicit, self-contained paraphrases such as “Sie arbeiten sehr selbständig, arbeiten zielorientiert und arbeiten kundenorientiert”. To achieve a proper mapping of soft-skill requirements to a detailed domain ontology, it is crucial to provide self-contained text spans that refer to a single concept. To create the German GS, we utilized in-context learning with ChatGPT, providing five examples in the prompt to generate additional samples. These samples were subsequently used to fine-tune GPT-3 and later manually verified to form a GS dataset comprising 1968 samples.
In the first task, T5-large, FLAN-T5-large, and GPT models showed similar levels of accuracy. However, in the second task, T5-large and FLAN-T5-large performed poorly. To improve results, we applied a PEFT-based technique, LoRA, to fine-tune BLOOM, T5-large, FLAN-T5-XXL, and mT5-XL on a single GPU. Among these, GPT-3 demonstrated superior performance, closely followed by mT5-XL in overall evaluations. For evaluation, we measured how incomplete soft-skill text spans were completed, assessed both completed and incomplete soft skills, and evaluated overall sentence similarity. Metrics such as ROUGE-L, average Levenshtein distance, percentage of matched skills, and cosine similarity were used to evaluate soft-skill changes and overall text similarity. In conclusion, Large Language Models (LLMs) effectively expanded condensed coordinated expressions into simpler formulations, including completing hyphenated words in German, without relying on traditional methods that are sensitive to grammatical and spelling errors.
Mark Rüetschi, How do Decentralised Finance Protocols compare to traditional financial products? Which taxonomic approach allows for their categorization?, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Bachelor's Thesis)
Decentralized finance (DeFi) has grown rapidly since 2020, but it has also seen a large correction in 2022. By the end of 2022, the total value locked in DeFi smart contracts had increased by a factor of almost 70 compared to 2020 (Nansen DeFi Statistics 2023). Due to open access, transparency, high interoperability, and low intermediation, DeFi applications face different circumstances than their traditional counterparts. The ecosystem has created new inventions and is still evolving: DeFi protocols are improving their services or adding new ones to their portfolios in order to become platforms that offer an enhanced user experience. This thesis creates a taxonomy of decentralized finance protocols with the goal of facilitating future research in this area. Additionally, a comparison to traditional financial applications is made in order to derive possible implications for traditional finance. Different approaches to loan issuance can be found. Even though there is no credit issuance or securities market in DeFi, blockchain technology seems to offer some benefits in this field. Decentralized exchanges are usually designed differently from traditional order book exchanges; they are finding innovative ways to adopt traditional order book functionalities, and under certain circumstances they can be advantageous over order book exchanges. Other DeFi inventions cannot be found in traditional finance. Inventions like flash loans, perpetual swaps, and yield farming bring new possibilities to the DeFi ecosystem, but they also carry certain risks and have led to several exploits. The risks and opportunities around these inventions are discussed in this thesis.
Thanh Cong Huynh, CO2-Emissionen und Energieverbrauch von Video-Livestreams: Die Plattform twitch.tv, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Bachelor's Thesis)
In this study, the energy consumption and greenhouse gas emissions of a four-hour video livestream on the livestreaming platform twitch.tv were calculated. The video transmission requires the end devices of the streamer and the viewer, the data centres, and the communication network. The communication network is separated into the wide area network, the radio access network, and the fixed network. In the reference scenario, the livestreamer uses a desktop PC and two screens to broadcast a video with a resolution of 1080p at 60 Hz through a fixed network connection. Viewers can play the livestream on different end devices. According to the calculations, the greenhouse gas emissions generated during the livestream range between 207 and 804 g CO2. The difference is due to the choice of end device and the difference between wireless and landline connections. For the end devices, screen size is an important factor in the contribution to the total energy consumption of a livestream. The radio access network has the highest energy intensity because older radio generations consume more energy than 4G.
The enhancement of the internet infrastructure to 5G will lead to more efficient transmission, and the main consumption will shift to the end devices. However, the choice of end device and its usage can offset the energy and greenhouse gas savings achieved by switching to a better internet infrastructure. Strategies against more intensive production and consumption are needed to counter climate change.
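The structure of such a bottom-up estimate, energy per component times usage, converted with a grid emission factor, can be sketched as follows. All numbers here (power draws, energy intensities per gigabyte, data volume, emission factor) are illustrative placeholders, not the values used in the study.

```python
# Illustrative only: power draws, energy intensities, and the grid's carbon
# factor are hypothetical placeholders, not the study's actual figures.
HOURS = 4                       # stream length considered in the study
DATA_GB_PER_HOUR = 2.7          # assumed data volume for 1080p at 60 Hz
DEVICE_W = {"desktop_pc_two_screens": 170, "laptop": 50, "smartphone": 2}
NETWORK_KWH_PER_GB = {"fixed": 0.02, "mobile_4g": 0.1}
DATA_CENTRE_KWH_PER_GB = 0.01
GRID_G_CO2_PER_KWH = 128        # assumed grid emission factor

def stream_emissions(viewer_device, access_network):
    """Grams of CO2 for one viewer watching the full stream."""
    data_gb = DATA_GB_PER_HOUR * HOURS
    device_kwh = DEVICE_W[viewer_device] / 1000 * HOURS
    network_kwh = NETWORK_KWH_PER_GB[access_network] * data_gb
    dc_kwh = DATA_CENTRE_KWH_PER_GB * data_gb
    return (device_kwh + network_kwh + dc_kwh) * GRID_G_CO2_PER_KWH
```

Even with placeholder numbers, the sketch reproduces the qualitative findings above: a mobile access network and a large-screen device both push the total up, which is why the study's range spans roughly a factor of four.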
Baiyun Yuan, The Analysis of Recruitment Criteria in China’s Internet and Finance Industries, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Master's Thesis)
This study investigates recruitment criteria in the Internet and finance industries across key Chinese cities: Beijing, Shanghai, Guangzhou, Shenzhen, and Hangzhou. Utilizing a web crawling technique to collect data from an online recruitment platform (https://www.zhaopin.com/), we examined 174,016 job postings and administered a questionnaire to explore recruitment discrimination. Using Chinese word segmentation and related techniques, our findings reveal variations in job opportunities, educational preferences, salaries, and essential skills between the Internet and finance sectors in these cities. Recruitment discrimination rates fluctuate across cities, with Shenzhen reporting elevated rates. Education discrimination prevails, accompanied by age and gender discrimination. Notably, women are more likely to perceive gender discrimination.
Jie Liao, Bluetooth Low Energy Device Classifier, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Master's Thesis)
In 2011, the introduction of Bluetooth Low Energy (BLE) marked a significant shift in wireless communication, paving the way for the Internet of Things (IoT) and the rise of location-based trackers. While devices like Apple's AirTag provide convenience, they pose security risks, notably the potential for malicious actors to track individuals without their knowledge. This work aims to address security concerns related to BLE trackers, especially considering the disparity between protections for iOS and Android users. The research focuses on creating an Android application, improving upon previous tools like HomeScout, which had limited classification capabilities. A feature-based prototype was proposed, and three classification models, SVM, Random Forest, and Multi-Layer Perceptron, were evaluated. The result was an effective classification method for BLE devices, with the Multi-Layer Perceptron model outperforming the others with 94.5% accuracy on test data. The model was further tested on unseen devices to evaluate its generalization capability, achieving 88% accuracy on the binary classification target (tracker vs. non-tracker). This model was integrated into the HomeScout app after resolving an identified bug in the original application. After integration, HomeScout is able to distinguish tracker from non-tracker devices. Future work entails refining the prototype, enhancing the dataset's diversity, and ensuring user privacy in public datasets.
Bulin Shaqiri, A System for Cost-Efficient Cybersecurity Planning, Compliance, and Investment Prioritization, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Master's Thesis)
While the digital era provides many advantages, it also comes with significant risks related to cybersecurity. Organizations must be proactive in reducing the risks involved in conducting business in a connected and complex digital world. However, despite the abundance of available resources on cybersecurity guidelines, frameworks, and certifications, Small and Medium-sized Enterprises (SMEs) still struggle to understand their unique cybersecurity requirements and develop tailored cybersecurity strategies. Most notably, existing resources are often too abstract, geared towards larger and more mature organizations, or lack practical guidance. Moreover, they often focus on technical aspects and neglect essential dimensions of cybersecurity, such as the economic and societal dimensions. This is especially apparent in the case of cybersecurity certifications. To address these gaps, this master's thesis introduces three key contributions.
Firstly, the CyberTEA methodology is extended to provide SMEs with practical cybersecurity guidelines and to allow them to verify compliance with a set of baseline cybersecurity requirements, while receiving formal acknowledgment for doing so. This, in turn, ensures a more holistic approach that incorporates technical, economic, and societal aspects. The methodology is further validated by mapping it against the components of the NIST Cybersecurity Framework (CSF). Secondly, a novel lightweight cybersecurity certification scheme called CERTSec is proposed to offer SMEs an invaluable entry point into the complex world of cybersecurity. This three-tiered certification scheme takes into account key dimensions of cybersecurity and allows businesses to continuously enhance their cybersecurity posture. CERTSec also underscores the importance of annual reassessments within an ever-evolving threat landscape. The final contribution of this work lies in the development of a prototype that automates processes within the proposed certification scheme.
Three technical requirements have been selected and automated, making the prototype able to (i) determine whether websites establish secure connections, (ii) perform network reachability analysis, and (iii) conduct comprehensive vulnerability analyses on the networks, technologies, and software provided. Evaluations have been conducted to highlight the feasibility of the key features used for the automation of the certification scheme's processes. The results suggest that it is possible to automate risk analysis without significant impact (in terms of resource consumption and overall time spent) on the entire process. Furthermore, a detailed case study demonstrates the feasibility and application of CERTSec for SMEs.
Janosch Baltensperger, A Secure Aggregation Protocol for Decentralized Federated Learning, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Master's Thesis)
Poisoning attacks pose a substantial threat to the trustworthiness of Federated Learning. For example, malicious participants can degrade the model performance of honest members or implant backdoors that can be exploited at inference time to take advantage of incorrect predictions. Researchers have been highly active in mitigating poisoning attacks, but existing approaches predominantly aim at defenses in centralized settings. While decentralized Federated Learning has gained significant attention as a promising approach without a central entity, the security aspects related to poisoning attacks remain largely unaddressed.
This work introduces a defense approach called “Sentinel” for mitigating poisoning attacks in horizontal, decentralized Federated Learning. Sentinel leverages the advantage of local data availability and defines a three-step aggregation protocol composed of similarity filtering, bootstrap validation, and normalization to protect against malicious model updates. The proposed defense mechanism is evaluated on various datasets under different types of poisoning attacks and threat levels. An extension of Sentinel, called SentinelGlobal, is presented, which incorporates a global trust protocol to reduce computational complexity and further improve effectiveness against adversaries. Both Sentinel and SentinelGlobal demonstrate promising results against untargeted and targeted poisoning attacks. Hence, this work contributes to the advancement of research on poisoning attacks in decentralized federated systems. Additionally, the results highlight the need for more sophisticated defense strategies against backdoor attacks, independent of the Federated Learning architecture.
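The three-step protocol, similarity filtering, bootstrap validation, normalization, can be sketched for a single node on flattened parameter vectors. This is a hedged illustration of the general shape: the cosine-similarity filter, the loss-based weighting, and the norm clipping are plausible instantiations of the three named steps, not Sentinel's exact rules or thresholds.

```python
import numpy as np

def sentinel_aggregate(local, neighbours, local_loss_fn,
                       sim_threshold=0.5, trust_temp=1.0):
    """Sketch of a Sentinel-style three-step aggregation for one node.

    local: this node's model parameters (1-D array).
    neighbours: list of parameter vectors received from peers.
    local_loss_fn: evaluates a parameter vector on this node's local data.
    Thresholds and weighting are illustrative assumptions.
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    # 1. Similarity filtering: discard updates dissimilar to the local model.
    kept = [m for m in neighbours if cos(local, m) >= sim_threshold]
    if not kept:
        return local
    # 2. Bootstrap validation: weight survivors by their local-data loss.
    losses = np.array([local_loss_fn(m) for m in kept])
    weights = np.exp(-losses / trust_temp)
    weights /= weights.sum()
    # 3. Normalization: clip each update's norm to the local model's norm.
    ref = np.linalg.norm(local)
    clipped = [m * min(1.0, ref / (np.linalg.norm(m) + 1e-12)) for m in kept]
    return sum(w * m for w, m in zip(weights, clipped))
```

The point of step 2 is the local-data advantage the abstract mentions: each node can validate incoming updates on data it trusts, rather than relying on a server-side defense.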
Maximilian Rümmelein, Exploring Risk Premia in Cryptocurrency Markets: An Analysis of Factors Influencing Returns, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Bachelor's Thesis)
Lukas David Emanuel Dekker, Protective Closing Strategy for Option Selling via Deep Reinforcement Learning, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Master's Thesis)
Selling put options can be lucrative; however, the returns tend to exhibit a strong negative skewness. Moreover, the seller may face liquidity issues during the holding period, especially when margin requirements become too large. Existing hedging techniques often overlook potential liquidity problems during the holding period, focusing solely on terminal losses. To address this limitation, we present a novel risk management approach by reformulating the closing time of the short position as an optimal stopping problem. To find solutions, we decompose the holding period into a sequence of binary stopping decisions, which fit naturally into the reinforcement learning framework. Multiple deep reinforcement learning algorithms, namely Deep Q-Learning, Rainbow, and Synchronous Advantage Actor-Critic, are employed to identify the optimal times for closing the position. Our training framework introduces a new reward function that enables the agents to maximize each option's profit and enhance its Sharpe ratio. In a simulated environment with nontrivial optimal stopping solutions, we demonstrate the effectiveness of the algorithms and our training setup. Furthermore, we apply these algorithms to market data, specifically SPY put option data from 2005 to 2022. During this analysis, we encounter a significant imbalance in the training data between paths with negative and positive returns, making it challenging for the algorithms to learn an optimal solution. Consequently, we propose several approaches to tackle this issue in future research. Overall, our work presents a promising approach to addressing liquidity concerns in option selling strategies, and our findings contribute to the advancement of reinforcement learning techniques in the financial domain.
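The decomposition of a holding period into binary hold/close decisions can be illustrated with a toy tabular Q-learning agent on a deterministic payoff path. The payoff sequence, hyperparameters, and tabular setup are illustrative stand-ins for the thesis's deep RL agents and option P&L; only the stopping-problem structure is the point.

```python
import random

# Toy optimal-stopping environment: at each step the agent may "close" (stop)
# and collect the current payoff, or "hold" and move on; the episode ends at
# maturity T at the latest. PAYOFF is a hypothetical stand-in for the short
# put's mark-to-market P&L and peaks at t=3, so the optimal stop is t=3.
T = 5
PAYOFF = [0.0, 0.2, 0.5, 1.0, 0.4]

Q = {(t, a): 0.0 for t in range(T) for a in (0, 1)}  # 0 = hold, 1 = close

def train(episodes=5000, eps=0.1, alpha=0.2):
    for _ in range(episodes):
        for t in range(T):
            a = random.choice((0, 1)) if random.random() < eps else \
                max((0, 1), key=lambda x: Q[(t, x)])
            if a == 1 or t == T - 1:            # closing (or maturity) ends it
                Q[(t, a)] += alpha * (PAYOFF[t] - Q[(t, a)])
                break
            target = max(Q[(t + 1, 0)], Q[(t + 1, 1)])  # value of continuing
            Q[(t, a)] += alpha * (target - Q[(t, a)])

random.seed(0)
train()
policy = [max((0, 1), key=lambda a: Q[(t, a)]) for t in range(T)]
```

After training, the greedy policy holds while the continuation value exceeds the immediate payoff and closes at the peak, which is exactly the binary-decision reformulation the abstract describes, scaled down to a table instead of a deep network.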
Pascal Kiechl, Simulator of Distributed Datasets for Pulse-wave DDoS Attacks, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Master's Thesis)
The ever-increasing scale and frequency of Distributed Denial-of-Service (DDoS) attacks, as well as the emergence of new forms of attack such as pulse-wave DDoS attacks, highlight the importance of ensuring that mitigation capabilities keep up with the escalating threat. To that end, much work has been done on the generation of DDoS datasets, which form the basis for developing effective mitigation tools such as Intrusion Detection Systems (IDS). However, existing datasets typically represent a single, victim-centric viewpoint, which has limitations compared to a distributed dataset that provides multiple perspectives on an attack. Thus, this thesis implements a simulator for distributed datasets specifically focused on pulse-wave DDoS attacks, for which no datasets are currently publicly available. The simulator provides high flexibility and configurability in the types of use cases that can be modeled, allowing for the creation of different topologies and attack compositions. The evaluation demonstrates the tool's capability to create a wide range of diverse datasets that exhibit different characteristics with regard to metrics commonly used in a DDoS attack's fingerprint. As such, this thesis represents a significant step towards enabling a better understanding of pulse-wave DDoS attacks and thereby the development of improved tools to help defend against them.
Elliott Wallace, Enforcing Privacy in a Smart Home Environment via Pi-hole Integration, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Bachelor's Thesis)
The Internet of Things (IoT) platform is one of the key drivers of the smart home market, having revolutionized the advancement of smart home technology. Besides the many benefits for convenience and efficiency, there are also concerns about security and privacy in such environments. The increasing complexity of smart homes and the hardware limitations of individual devices necessitate the storage and processing of data in remote cloud environments. This raises privacy issues due to the potential misuse or disclosure of sensitive information about residents. To the author's knowledge, no existing Privacy Enhancing Technology (PET) offers a lightweight approach to enforcing privacy in smart home environments by combining existing tools into a unifying framework. The goal of this thesis is to take a first step towards an extensible open source software system that integrates into the smart home environment to monitor smart home device communications and control their communication behavior through user-defined policies. To this end, a prototype application is developed, which monitors smart home devices' Domain Name System (DNS) requests and enforces policies via a DNS sinkhole mechanism. The prototype is deployed to a system-on-chip platform and evaluated in a live smart home environment to gain insight into its viability. The aim is to examine the performance, effectiveness, and limitations of the prototype with the intention of validating the general approach. The results of these experiments indicate that the prototype successfully achieves the goals outlined in this thesis. The application prototype is capable of monitoring the network activity of smart home devices. The collected data are processed to gain insights and make this information transparent to users. Furthermore, the prototype allows users to define simple allow/block policies, which are subsequently enforced by the system.
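The core of the allow/block enforcement via a DNS sinkhole can be sketched in a few lines. The device names, domains, policy layout, and sinkhole address here are illustrative assumptions, not the prototype's actual schema.

```python
# Minimal sketch of the allow/block idea behind a DNS sinkhole: blocked
# domains are answered with a non-routable sinkhole address instead of
# being resolved upstream. All names and the policy format are hypothetical.
SINKHOLE_IP = "0.0.0.0"

policies = {
    "smart-bulb-01": {"blocked": {"telemetry.example-vendor.com"}},
    "camera-02":     {"blocked": {"ads.example.net", "tracker.example.org"}},
}

def resolve(device, domain, upstream_lookup):
    """Return the sinkhole address for blocked domains, else resolve upstream."""
    policy = policies.get(device, {})
    if domain in policy.get("blocked", set()):
        return SINKHOLE_IP                 # enforce: answer with the sinkhole
    return upstream_lookup(domain)         # pass through to real DNS
```

Because the enforcement point is the DNS answer itself, the device needs no modification: any cloud endpoint the user blocks simply stops resolving for that device, which is what makes the approach lightweight.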