Achim Guldner, Rabea Bender, Coral Calero, Giovanni S Fernando, Markus Funke, Jens Gröger, Lorenz Hilty, Julian Hörnschemeyer, Geerd-Dietger Hoffmann, Dennis Junger, Tom Kennes, Sandro Kreten, Patricia Lago, Franziska Mai, Ivano Malavolta, Julien Murach, Kira Obergöker, Benno Schmidt, Arne Tarara, Joseph P De Veaugh-Geiss, Sebastian Weber, Max Westling, Volker Wohlgemuth, Stefan Naumann, Development and evaluation of a reference measurement model for assessing the resource and energy efficiency of software products and components—Green Software Measurement Model (GSMM), Future Generation Computer Systems, Vol. 155, 2024. (Journal Article)
In the past decade, research on measuring and assessing the environmental impact of software has gained significant momentum in science and industry. However, due to the large number of research groups, measurement setups, procedure models, and tools, and the general novelty of the research area, a comprehensive research framework has yet to be created. The literature documents several approaches from researchers and practitioners who have developed individual methods and models, along with more general ideas such as integrating software sustainability into the context of the UN Sustainable Development Goals, or science communication approaches that make the resource cost of software transparent to society. However, a reference measurement model for the energy and resource consumption of software is still missing. In this article, we jointly develop the Green Software Measurement Model (GSMM), in which we bring together the core ideas of the measurement models, setups, and methods of over 10 research groups in four countries who have done pioneering work in assessing the environmental impact of software. We briefly describe the different methods and models used by these research groups, derive the components of the GSMM from them, and then discuss and evaluate the resulting reference model. By categorizing the existing measurement models and procedures and by providing guidelines for assimilating and tailoring existing methods, we expect this work to aid new researchers and practitioners who want to conduct measurements for their individual use cases.
Narges Ashena, Oana Inel, Badrie L Persaud, Abraham Bernstein, Casual Users and Rational Choices within Differential Privacy, In: 2024 IEEE Symposium on Security and Privacy (SP), Institute of Electrical and Electronics Engineers, Los Alamitos, CA, USA, 2024-05. (Conference or Workshop Paper published in Proceedings)
In light of recent growth in privacy awareness and data ownership rights, differential privacy (DP) has emerged as a promising technique employed by several well-known data controller entities. This raises the question of how casual users, as the immediate recipients of privacy threats and risks, comprehend and perceive DP and its key parameter ε, on which DP's provided protection depends. Existing studies show that ordinary users can understand the fundamental mechanism of DP and its implications for the privacy–utility trade-off when it is communicated clearly through textual and visual aids, and can accordingly make informed decisions about sharing their data under DP protection. However, these attempts either only implicitly mention a few possible values for ε, such as low, medium, and high, or leave it out of the communication altogether. In this paper, we conduct a between-subjects user study (N=426) to investigate the effectiveness of nine interactive visual tools for communicating ε explicitly and on a continuous scale in a data-sharing scenario related to publishing positive COVID-19 test results. These interactive visual tools allow casual users to visualize DP's effects on data accuracy and/or privacy loss for various ε values. We found that visualizations incorporating the privacy loss component significantly assist users in selecting values closer to those recommended by experts. However, depending on the ratio between DP noise and the underlying data, the accuracy loss component affects users' ε decisions unevenly: the bigger the relative error, the bigger the selected ε, and vice versa. Thus, accuracy portrayals should be carried out with care. We contextualize our findings in the existing literature and conclude with insights and recommendations on effectively employing our findings to communicate DP to casual users.
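As background for the ε trade-off the study above communicates, here is a minimal sketch of the standard Laplace mechanism for a count query (this is an illustration of the general DP mechanism, not the paper's tooling; the count of 120 positives and all names are invented):

```python
import math
import random

def laplace_mechanism(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with Laplace noise of scale sensitivity/epsilon,
    drawn via inverse-CDF sampling."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5
    return true_count - scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

# Smaller epsilon -> larger noise scale -> more privacy, less accuracy.
random.seed(0)
positives = 120  # hypothetical count of positive test results
for eps in (0.1, 1.0, 10.0):
    noisy = [laplace_mechanism(positives, eps) for _ in range(1000)]
    mean_abs_err = sum(abs(x - positives) for x in noisy) / len(noisy)
    print(f"epsilon={eps:>4}: mean absolute error {mean_abs_err:.2f}")
```

The expected absolute error of Laplace noise equals its scale, so ε=0.1 distorts the released count roughly 100 times more than ε=10, which is the accuracy-versus-privacy-loss spectrum the paper's visual tools try to convey.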
Alexander Lill, André Meyer, Thomas Fritz, On the Helpfulness of Answering Developer Questions on Discord with Similar Conversations and Posts from the Past, In: 46th International Conference on Software Engineering (ICSE 2024), ACM Digital library, 2024-04-14. (Conference or Workshop Paper published in Proceedings)
A large part of software developers' time is spent finding answers to their coding-task-related questions. To answer their questions, developers usually perform web searches, ask questions on Q&A websites, or, more recently, in chat communities. Yet, many of these questions have often already been answered in previous chat conversations or other online communities. Automatically identifying and suggesting these previous answers to the askers could thus save time and effort. In an empirical analysis, we first explored the frequency of repeating questions on the Discord chat platform and assessed our approach to identify them automatically. The approach was then evaluated with real-world developers in a field experiment, through which we received 142 ratings on the helpfulness of the suggestions we provided to help answer 277 questions that developers posted in four Discord communities. We further collected qualitative feedback through 53 surveys and 10 follow-up interviews. We found that the suggestions were considered helpful in 40% of the cases, that suggesting Stack Overflow posts is more often considered helpful than suggesting past Discord conversations, and that developers have difficulties describing their problems as search queries and thus prefer describing them as natural language questions in online communities.
Francesco Barile, Tim Draws, Oana Inel, Alisa Rieger, Shabnam Najafian, Amir Ebrahimi Fard, Rishav Hada, Nava Tintarev, Evaluating explainable social choice-based aggregation strategies for group recommendation, User modeling and user-adapted interaction, Vol. 34 (1), 2024. (Journal Article)
Social choice aggregation strategies have been proposed as an explainable way to generate recommendations for groups of users. However, it is not trivial to determine the best strategy to apply for a specific group. Previous work highlighted that the performance of a group recommender system is affected by the internal diversity of the group members' preferences. However, few studies have empirically evaluated how the specific distribution of preferences in a group determines which strategy is the most effective. Furthermore, only a few studies have evaluated the impact of providing explanations for recommendations generated with social choice aggregation strategies by evaluating explanations and aggregation strategies in a coupled way. To fill these gaps, we present two user studies (N=399 and N=288) examining the effectiveness of social choice aggregation strategies in terms of users' fairness perception, consensus perception, and satisfaction. We study the impact of the level of (dis-)agreement within the group on the performance of these strategies. Furthermore, we investigate the added value of textual explanations of the underlying social choice aggregation strategy used to generate the recommendation. The results of both user studies show no benefits in using social choice-based explanations for group recommendations. However, we find significant differences in the effectiveness of the social choice-based aggregation strategies in both studies. Furthermore, the specific group configuration (i.e., various scenarios of internal diversity) seems to determine the most effective aggregation strategy. These results provide useful insights into how to select the appropriate aggregation strategy for a specific group based on the level of (dis-)agreement among the group members' preferences.
Pedro Miguel Sánchez Sánchez, Alberto Huertas Celdran, Timo Schenk, Adrian Lars Benjamin Iten, Gérôme Bovet, Gregorio Martínez Pérez, Burkhard Stiller, Studying the Robustness of Anti-Adversarial Federated Learning Models Detecting Cyberattacks in IoT Spectrum Sensors, IEEE Transactions on Dependable and Secure Computing, Vol. 21 (2), 2024. (Journal Article)
Device fingerprinting combined with Machine and Deep Learning (ML/DL) reports promising performance when detecting spectrum sensing data falsification (SSDF) attacks. However, the amount of data needed to train models and the privacy concerns of the scenario limit the applicability of centralized ML/DL. Federated learning (FL) addresses these drawbacks but is vulnerable to adversarial participants and attacks. The literature has proposed countermeasures, but more effort is required to evaluate the performance of FL in detecting SSDF attacks and its robustness against adversaries. Thus, the first contribution of this work is to create an FL-oriented dataset modeling the behavior of resource-constrained spectrum sensors affected by SSDF attacks. The second contribution is a pool of experiments analyzing the robustness of FL models according to i) three families of sensors, ii) eight SSDF attacks, iii) four FL scenarios dealing with anomaly detection and binary classification, iv) up to 33% of participants implementing data and model poisoning attacks, and v) four aggregation functions acting as anti-adversarial mechanisms. In conclusion, FL achieves promising performance when detecting SSDF attacks. Without anti-adversarial mechanisms, FL models are particularly vulnerable with more than 16% of adversaries. Coordinate-wise median is the best mitigation for anomaly detection, but binary classifiers are still affected with more than 33% of adversaries.
Pedro Miguel Sánchez Sánchez, Alberto Huertas Celdran, José R Buendía Rubio, Gérôme Bovet, Gregorio Martínez Pérez, Robust Federated Learning for execution time-based device model identification under label-flipping attack, Cluster Computing, Vol. 27 (1), 2024. (Journal Article)
The explosion in computing device deployment experienced in recent years, driven by advances in technologies such as the Internet of Things (IoT) and 5G, has led to a global scenario with increasing cybersecurity risks and threats. Among them, device spoofing and impersonation cyberattacks stand out due to their impact and the usually low complexity required to launch them. To solve this issue, several solutions have emerged that identify device models and types based on the combination of behavioral fingerprinting and Machine/Deep Learning (ML/DL) techniques. However, these solutions are not appropriate for scenarios where data privacy and protection are a must, as they require data centralization for processing. In this context, newer approaches such as Federated Learning (FL) have not been fully explored yet, especially when malicious clients are present in the scenario setup. The present work analyzes and compares the device model identification performance of a centralized DL model with that of an FL one while using execution time-based events. For experimental purposes, a dataset containing execution-time features of 55 Raspberry Pis belonging to four different models has been collected and published. Using this dataset, the proposed solution achieved 0.9999 accuracy in both setups, centralized and federated, showing no performance decrease while preserving data privacy. Later, the impact of a label-flipping attack during federated model training is evaluated using several aggregation mechanisms as countermeasures. Zeno and coordinate-wise median aggregation show the best performance, although their performance greatly degrades when the percentage of fully malicious clients (all training samples poisoned) grows over 50%.
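Coordinate-wise median aggregation, which both federated learning studies above identify as an effective countermeasure, can be sketched in a few lines (an illustrative minimal version, not the authors' implementation; the client updates are invented):

```python
from statistics import median
from typing import List

def coordinate_wise_median(updates: List[List[float]]) -> List[float]:
    """Aggregate client model updates by taking the median of each
    coordinate, limiting the influence of poisoned (e.g., label-flipped)
    updates instead of averaging them in."""
    return [median(coords) for coords in zip(*updates)]

# Three honest clients and one poisoned client sending an extreme update.
honest = [[0.9, -0.1], [1.1, 0.1], [1.0, 0.0]]
poisoned = [[100.0, -100.0]]
print(coordinate_wise_median(honest + poisoned))  # stays near [1.0, 0.0]
```

With plain averaging the poisoned client would drag the aggregate to roughly [25.75, -25.0]; the per-coordinate median stays close to the honest consensus, which also explains why the defense breaks down once a majority of clients is malicious.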
Beibei Han, Yingmei Wei, Qingyong Wang, Francesco Maria De Collibus, Claudio Tessone, MT²AD: multi-layer temporal transaction anomaly detection in ethereum networks with GNN, Complex & Intelligent Systems, Vol. 10 (1), 2024. (Journal Article)
In recent years, a surge of criminal activities involving cross-cryptocurrency trades has emerged in Ethereum, the second-largest public blockchain platform. Most existing anomaly detection methods utilize traditional machine learning with feature engineering or graph representation learning techniques to capture the information in the transaction network. However, these methods either ignore the timestamp and transaction flow direction information in the transaction network or only consider a single transaction network, so the cross-cryptocurrency trading patterns in Ethereum are usually ignored. In this paper, we introduce a Multi-layer Temporal Transaction Anomaly Detection (MT²AD) model for the Ethereum network with graph neural networks. Specifically, for a given Ethereum token transaction network, we first extract its initial features, including the structure subgraph and edge features. Then, we model the temporal information in the subgraph as a series of network snapshots according to the timestamp on each edge and a time window. To capture cross-cryptocurrency trading patterns, we combine the snapshots from multiple token transactions at a given timestamp and consider them as a new combined graph. We further use a graph convolution encoder with an attention mechanism and a pooling operation on this new graph to obtain graph-level embeddings, and we transform anomaly detection on dynamic multi-layer Ethereum transaction networks into a graph classification task with these graph-level embeddings. MT²AD integrates transaction structure features, edge features, and cross-cryptocurrency trading patterns into one framework to perform anomaly detection with graph neural networks. Experiments on three real-world multi-layer transaction networks show that the proposed MT²AD (0.8789 Precision, 0.9375 Recall, 0.4987 FbMacro, and 0.9351 FbWeighted) achieves the best performance on most evaluation metrics in comparison with several competing approaches, and its effectiveness in considering multiple tokens is also demonstrated.
Pedro Miguel Sánchez Sánchez, Alberto Huertas Celdran, Gérôme Bovet, Gregorio Martínez Pérez, Single-board device individual authentication based on hardware performance and autoencoder transformer models, Computers and Security, Vol. 137, 2024. (Journal Article)
The proliferation of the Internet of Things (IoT) has led to the emergence of crowdsensing applications, where a multitude of interconnected devices collaboratively collect and analyze data. Ensuring the authenticity and integrity of the data collected by these devices is crucial for reliable decision-making and maintaining trust in the system. Traditional authentication methods are often vulnerable to attacks or can be easily duplicated, posing challenges to securing crowdsensing applications. Moreover, current solutions leveraging device behavior are mostly focused on device identification, which is a simpler task than authentication. To address these issues, an individual IoT device authentication framework based on hardware behavior fingerprinting and Transformer autoencoders is proposed in this work. To support the design, a threat model details the security problems faced when performing hardware-based authentication in IoT. This solution leverages the inherent imperfections and variations in IoT device hardware to differentiate between devices with identical specifications. By monitoring and analyzing the behavior of key hardware components, such as the CPU, GPU, RAM, and storage, unique fingerprints are created for each device. The performance samples are treated as time series data and used to train outlier detection Transformer models, one per device, each aiming to model its normal data distribution. Then, the framework is validated within a spectrum crowdsensing system leveraging Raspberry Pi devices. After a pool of experiments, the model from each device is able to individually authenticate it among the 45 devices employed for validation. An average True Positive Rate (TPR) of 0.74±0.13 and an average maximum False Positive Rate (FPR) of 0.06±0.09 demonstrate the effectiveness of this approach in enhancing authentication, security, and trust in crowdsensing applications.
Yifei Liu, Mathias Gehrig, Nico Messikommer, Marco Cannici, Davide Scaramuzza, Revisiting Token Pruning for Object Detection and Instance Segmentation, In: IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024. (Conference or Workshop Paper published in Proceedings)
Vision Transformers (ViTs) have shown impressive performance in computer vision, but their high computational cost, quadratic in the number of tokens, limits their adoption in computation-constrained applications. However, this large number of tokens may not be necessary, as not all tokens are equally important. In this paper, we investigate token pruning to accelerate inference for object detection and instance segmentation, extending prior work from image classification. Through extensive experiments, we offer four insights for dense tasks: (i) tokens should not be completely pruned and discarded, but rather preserved in the feature maps for later use; (ii) reactivating previously pruned tokens can further enhance model performance; (iii) a dynamic pruning rate based on images is better than a fixed pruning rate; (iv) a lightweight, 2-layer MLP can effectively prune tokens, achieving accuracy comparable to complex gating networks with a simpler design. We evaluate the impact of these design choices on the COCO dataset and present a method integrating these insights that outperforms prior token pruning models, significantly reducing the performance drop from ~1.5 mAP to ~0.3 mAP for both boxes and masks. Compared to the dense counterpart that uses all tokens, our method achieves up to 34% faster inference speed for the whole network and 46% for the backbone.
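Insights (i) to (iv) above can be illustrated with a toy sketch of MLP-based token scoring under a per-image (dynamic) pruning rate. This is an untrained, dependency-free stand-in for the paper's model; every name and dimension here is invented:

```python
import math
import random
from typing import List

random.seed(42)

class TinyMLP:
    """A randomly initialized 2-layer MLP that scores token importance.
    In the paper's setting such a scorer is trained with the detector."""
    def __init__(self, dim: int, hidden: int = 8):
        self.w1 = [[random.gauss(0, 0.5) for _ in range(dim)] for _ in range(hidden)]
        self.w2 = [random.gauss(0, 0.5) for _ in range(hidden)]

    def score(self, token: List[float]) -> float:
        # Layer 1: linear + ReLU; layer 2: linear + sigmoid (tanh form, stable).
        h = [max(0.0, sum(w * x for w, x in zip(row, token))) for row in self.w1]
        logit = sum(w * x for w, x in zip(self.w2, h))
        return 0.5 * (1.0 + math.tanh(0.5 * logit))

def prune_tokens(tokens, mlp, threshold=0.5):
    """Keep tokens whose score clears the threshold, so the pruning rate
    adapts to each image (insight iii). Pruned tokens are set aside rather
    than discarded, so they can be restored to the feature map or
    reactivated later (insights i and ii)."""
    kept, pruned = [], []
    for i, t in enumerate(tokens):
        (kept if mlp.score(t) >= threshold else pruned).append((i, t))
    return kept, pruned

tokens = [[random.gauss(0, 1) for _ in range(16)] for _ in range(10)]
mlp = TinyMLP(dim=16)
kept, pruned = prune_tokens(tokens, mlp)
print(f"kept {len(kept)} of {len(tokens)} tokens")
```

Because the keep/drop split is thresholded per token rather than fixed to a top-k budget, easy images shed more tokens than cluttered ones, which is what makes the dynamic rate preferable to a fixed one.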
Dzmitry Katsiuba, Mateusz Dolata, Gerhard Schwabe, Power of Language Automation: The Potential for Closing the Loop in Responding to Online Customer Feedback, In: Hawaii International Conference on System Sciences 2024 (HICSS-57), Hawaii International Conference on System Sciences (HICSS), 2024-01-03. (Conference or Workshop Paper published in Proceedings)
Online customer feedback management is playing an increasingly important role for businesses. Quickly providing guests with good responses to their reviews can be challenging, especially as the number of reviews increases. To address these challenges, this paper explores the response process and the potential for AI augmentation in the formulation and quality assurance of responses. As part of a design science research approach, it proposes an orchestration concept for humans and AI in intelligent co-writing in the hospitality industry and a novel NLP-based solution that combines the advantages of humans and AI in one application. The evaluation of the developed artifact shows that it is currently not possible to close the loop and automate the response process completely. This study describes the necessary components and provides transferable design knowledge. It opens possibilities for practical applications of NLP and further IS research.
Christophe Viguerie, Raffaele Fabio Ciriello, Liudmila Zavolokina, Formative Archetypes in Enterprise Blockchain Governance: Exploring the Dynamics of Participant Dominance and Platform Openness, In: 57th Hawaii International Conference on System Sciences, Hawaii International Conference on System Sciences (HICSS), 2024-01-03. (Conference or Workshop Paper published in Proceedings)
It is widely assumed that blockchain should, in principle, lead to decentralization. Yet, in practice, many enterprise blockchains are highly centralized. To explain this conundrum, we conduct a multi-case study of four enterprise blockchains: Walmart DL Freight, Contour, Chronicled MediLedger, and Cardossier. Exploring the dynamics of participant dominance and platform openness during their formative stages, we theorize that these blockchains correspond to the distinct archetypes of Chief, Clan, Custodian, and Consortium, respectively. Importantly, these archetypes shape the subsequent evolution of the governance approach, thus explaining why and how enterprise blockchains with dominant participants and limited openness later exhibit more centralized governance.
Mateusz Dolata, Gerhard Schwabe, Towards the Socio-Algorithmic Construction of Fairness: The Case of Automatic Price-Surging in Ride-Hailing, International Journal of Human-Computer Interaction, Vol. 40 (1), 2024. (Journal Article)
Algorithms take decisions that affect humans and have been shown to perpetuate biases and discrimination. Decisions by algorithms are subject to different interpretations. Algorithms' behaviors are the basis for the construal of moral assessments and standards. Yet we lack an understanding of how algorithms impact social construction processes, and vice versa. Without such understanding, social construction processes may be disrupted and, eventually, may impede moral progress in society. We analyze the public discourse that emerged after a significant (five-fold) price surge following the Brooklyn Subway Shooting on April 12, 2022, in New York City. There was much controversy around the decisions of the two ride-hailing firms' algorithms. The discussions evolved around various notions of fairness and the justifiability of the algorithms' decisions. Our results indicate that algorithms, even if not explicitly addressed in the discourse, strongly impact the construction of fairness assessments and notions. They initiate the exchange, form people's expectations, evoke people's solidarity with specific groups, and are a vehicle for moral crusading. However, they are also subject to adjustments based on social forces. We claim that the process of constructing notions of fairness is no longer just social; it has become a socio-algorithmic process. We propose a theory of socio-algorithmic construction as a mechanism for establishing notions of fairness and other ethical constructs.
Mathias Gehrig, Manasi Muglikar, Davide Scaramuzza, Dense Continuous-Time Optical Flow from Event Cameras, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024. (Journal Article)
We present a method for estimating dense continuous-time optical flow from event data. Traditional dense optical flow methods compute the pixel displacement between two images. Due to missing information, these approaches cannot recover the pixel trajectories in the blind time between two images. In this work, we show that it is possible to compute per-pixel, continuous-time optical flow using events from an event camera. Events provide temporally fine-grained information about movement in pixel space due to their asynchronous nature and microsecond response time. We leverage these benefits to predict pixel trajectories densely in continuous time via parameterized Bézier curves. To achieve this, we build a neural network with strong inductive biases for this task: First, we build multiple sequential correlation volumes in time using event data. Second, we use Bézier curves to index these correlation volumes at multiple timestamps along the trajectory. Third, we use the retrieved correlation to update the Bézier curve representations iteratively. Our method can optionally include image pairs to boost performance further. To the best of our knowledge, our model is the first method that can regress dense pixel trajectories from event data. To train and evaluate our model, we introduce a synthetic dataset (MultiFlow) that features moving objects and ground truth trajectories for every pixel. Our quantitative experiments not only suggest that our method successfully predicts pixel trajectories in continuous time but also that it is competitive in the traditional two-view pixel displacement metric on MultiFlow and DSEC-Flow.	Open source code and datasets are released to the public.
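The parameterized Bézier trajectories at the core of the method above can be sketched as follows: evaluating the Bernstein form of a cubic curve yields a pixel's position at any continuous time t in [0, 1] (the control points here are made up for illustration; the network regresses them per pixel):

```python
from math import comb
from typing import List, Tuple

def bezier_point(control: List[Tuple[float, float]], t: float) -> Tuple[float, float]:
    """Evaluate a Bézier curve at t in [0, 1] using the Bernstein basis:
    B(t) = sum_i C(n, i) * (1 - t)^(n - i) * t^i * P_i."""
    n = len(control) - 1
    x = sum(comb(n, i) * (1 - t) ** (n - i) * t ** i * px
            for i, (px, _) in enumerate(control))
    y = sum(comb(n, i) * (1 - t) ** (n - i) * t ** i * py
            for i, (_, py) in enumerate(control))
    return (x, y)

# One pixel's continuous-time trajectory: four control points of a cubic.
ctrl = [(10.0, 20.0), (12.0, 21.0), (15.0, 19.0), (18.0, 22.0)]
for t in (0.0, 0.5, 1.0):
    print(t, bezier_point(ctrl, t))
```

A handful of control points thus represents a whole smooth trajectory, which is what lets the network index correlation volumes at arbitrary intermediate timestamps rather than only at the two frame instants.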
Rafael Henrique Vareto, Yu Linghu, Terrance Edward Boult, William Robson Schwartz, Manuel Günther, Open-set face recognition with maximal entropy and Objectosphere loss, Image and Vision Computing, Vol. 141, 2024. (Journal Article)
Open-set face recognition characterizes a scenario where unknown individuals, unseen during the training and enrollment stages, appear at operation time. This work concentrates on watchlists, an open-set task that is expected to operate at a low false-positive identification rate and generally includes only a few enrollment samples per identity. We introduce a compact adapter network that benefits from additional negative face images when combined with distinct cost functions, such as Objectosphere Loss (OS) and the proposed Maximal Entropy Loss (MEL). MEL modifies the traditional cross-entropy loss in favor of increasing the entropy for negative samples and attaches a penalty to known target classes in pursuance of gallery specialization. The proposed approach adopts pre-trained deep neural networks (DNNs) for face recognition as feature extractors. Then, the adapter network takes deep feature representations and acts as a substitute for the output layer of the pre-trained DNN in exchange for an agile domain adaptation. Promising results have been achieved following open-set protocols for three different datasets (LFW, IJB-C, and UCCS), as well as state-of-the-art performance when supplementary negative data is properly selected to fine-tune the adapter network.
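The core idea MEL builds on, pushing predictions on negative samples toward maximal entropy over the gallery classes, can be sketched as cross-entropy against a uniform target (an illustrative simplification that omits the paper's additional penalty on known classes; all numbers are invented):

```python
import math
from typing import List, Optional

def softmax(logits: List[float]) -> List[float]:
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy_style_loss(logits: List[float], label: Optional[int]) -> float:
    """Known identities (label is an index): standard cross-entropy.
    Negative samples (label is None): cross-entropy against the uniform
    distribution, minimized exactly when the prediction has maximal
    entropy over the gallery classes."""
    probs = softmax(logits)
    if label is not None:
        return -math.log(probs[label])
    n = len(probs)
    return -sum(math.log(p) for p in probs) / n

# A confident prediction on a negative sample is penalized far more than
# an uncertain (near-uniform) one.
confident = [8.0, 0.0, 0.0, 0.0]
uncertain = [0.1, 0.0, 0.1, 0.0]
print(entropy_style_loss(confident, None) > entropy_style_loss(uncertain, None))
```

The negative-sample branch bottoms out at log(n) when the prediction is uniform, so the network learns to "know that it does not know" for faces outside the watchlist.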
Alberto Huertas Celdran, Pedro Miguel Sánchez Sánchez, Gérôme Bovet, Gregorio Martínez Pérez, Burkhard Stiller, CyberSpec: Behavioral Fingerprinting for Intelligent Attacks Detection on Crowdsensing Spectrum Sensors, IEEE Transactions on Dependable and Secure Computing, Vol. 21 (1), 2024. (Journal Article)
Integrated sensing and communication is a novel paradigm using crowdsensing spectrum sensors to help with the management of spectrum scarcity. However, well-known vulnerabilities of resource-constrained spectrum sensors and the possibility of being manipulated by users with physical access complicate their protection against spectrum sensing data falsification (SSDF) attacks. Most recent literature suggests using behavioral fingerprinting and Machine/Deep Learning (ML/DL) for addressing similar cybersecurity issues. Nevertheless, the applicability of these techniques in resource-constrained devices, the impact of attacks affecting spectrum data integrity, and the performance and scalability of models suitable for heterogeneous sensor types are still open challenges. To address these limitations, this work presents seven SSDF attacks affecting spectrum sensors and introduces CyberSpec, an ML/DL-oriented framework using device behavioral fingerprinting to detect anomalies produced by SSDF attacks. CyberSpec has been implemented and validated in ElectroSense, a real crowdsensing RF monitoring platform where several configurations of the proposed SSDF attacks have been executed in different sensors. A pool of experiments with different unsupervised ML/DL-based models has demonstrated the suitability of CyberSpec in detecting the previous attacks within an acceptable timeframe.
Liudmila Zavolokina, Andreas Hein, Arthur Carvalho, Gerhard Schwabe, Helmut Krcmar, Preface to the special issue on “Enterprise and organizational applications of distributed ledger technologies”, Electronic Markets, Vol. 34 (1), 2024. (Journal Article)
Mateusz Dolata, Kevin Crowston, Making sense of AI systems development, IEEE Transactions on Software Engineering, Vol. 50 (1), 2024. (Journal Article)
We identify and describe episodes of sensemaking around challenges in modern Artificial-Intelligence (AI)-based systems development that emerged in projects carried out by IBM and client companies. All projects used IBM Watson as the development platform for building tailored AI-based solutions to support workers or customers of the client companies. Yet, many of the projects turned out to be significantly more challenging than IBM and its clients had expected. The analysis reveals that project members struggled to establish reliable meanings about the technology, the project, context, and data to act upon. The project members report multiple aspects of the projects that they were not expecting to need to make sense of yet were problematic. Many issues bear upon the current-generation AI’s inherent characteristics, such as dependency on large data sets and continuous improvement as more data becomes available. Those characteristics increase the complexity of the projects and call for balanced mindfulness to avoid unexpected problems.
Yu Zhou, Weilin Zhan, Zi Li, Tingting Han, Taolue Chen, Harald Gall, DRIVE: Dockerfile Rule Mining and Violation Detection, ACM Transactions on Software Engineering and Methodology, Vol. 33 (2), 2023. (Journal Article)
A Dockerfile defines a set of instructions to build Docker images, which can then be instantiated to support containerized applications. Recent studies have revealed a considerable number of quality issues with Dockerfiles. In this article, we propose a novel approach, Dockerfile Rule mIning and Violation dEtection (DRIVE), to mine implicit rules and detect potential violations of such rules in Dockerfiles. DRIVE first parses Dockerfiles and transforms them into an intermediate representation. It then leverages an efficient sequential pattern mining algorithm to extract potential patterns. With heuristic-based reduction and moderate human intervention, potential rules are identified, which can then be utilized to detect potential violations in Dockerfiles. DRIVE identifies 34 semantic rules and 19 syntactic rules, including 9 new semantic rules that have not been reported elsewhere. Extensive experiments on real-world Dockerfiles demonstrate the efficacy of our approach.
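The mining step above can be illustrated with a toy frequent ordered-pair miner over Dockerfile instruction sequences (DRIVE uses an efficient sequential pattern mining algorithm over a richer intermediate representation; the sample instructions and support threshold here are invented):

```python
from collections import Counter
from typing import Dict, List, Tuple

def frequent_pairs(dockerfiles: List[List[str]],
                   min_support: float) -> Dict[Tuple[str, str], int]:
    """Count ordered instruction pairs (a occurring before b, not
    necessarily adjacent) once per Dockerfile and keep those whose
    support meets the threshold; frequent pairs are rule candidates."""
    counts: Counter = Counter()
    for instrs in dockerfiles:
        seen_pairs = set()
        for i, a in enumerate(instrs):
            for b in instrs[i + 1:]:
                seen_pairs.add((a, b))
        counts.update(seen_pairs)  # each pair counted at most once per file
    threshold = min_support * len(dockerfiles)
    return {p: c for p, c in counts.items() if c >= threshold}

files = [
    ["apt-get update", "apt-get install", "rm -rf /var/lib/apt/lists/*"],
    ["apt-get update", "apt-get install"],
    ["pip install", "apt-get update", "apt-get install"],
]
rules = frequent_pairs(files, min_support=0.9)
print(rules)  # {('apt-get update', 'apt-get install'): 3}
```

A mined candidate such as "`apt-get update` should precede `apt-get install`" can then be checked against new Dockerfiles, flagging files that contain the second instruction without the first; the heuristic reduction and human review described above filter out coincidental patterns.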
Benjamin Kraner, Nicolo Vallarano, Claudio Tessone, Tokenization of the Common: An Economic Model of Multidimensional Incentives, In: Middleware '23: 24th International Middleware Conference, ACM Digital library, 2023-12-11. (Conference or Workshop Paper published in Proceedings)
The concept of the tragedy of the commons, originally rooted in economics, describes the depletion of shared resources due to self-interested actions by individuals. This work proposes a novel solution to address this economic challenge by leveraging tokens to capture its multidimensional nature. By utilising blockchain and DLTs, this decentralised approach aims to achieve a social optimum while promoting self-regulation. The paper presents a mathematical treatment of the tragedy of the commons, incorporating multi-dimensional tokens and exploring the divergence from the classic optimal solution, highlighting the potential of tokenisation in shaping a sustainable and efficient economy.
Dario Staehelin, Gianluca Miscione, Mateusz Dolata, From Solution Trap to Solution Patchwork: Tensions in Digital Health in the Global Context, In: 2023 International Conference on Information Systems, Association for Information Systems, 2023-12-10. (Conference or Workshop Paper published in Proceedings)
This paper problematizes underlying assumptions in Design Science Research, and in Information Systems Research more broadly, by conceptualizing the “solution trap”. The solution trap is caused by the incompatibility of co-existing solutions in complex socio-technical contexts. Information systems bring diverse cultures and theories together, causing tensions between different institutional logics. We emphasize the need for a nuanced understanding of context unevenness and propose solution patchwork as a coordination approach to evade the solution trap. Substantiating the preliminary insights and propositions with a literature review and further empirical grounding will transition this research-in-progress to a full paper.