Rafael Polanía, Denis Burdakov, Todd Anthony Hare, Rationality, preferences, and emotions with biological constraints: it all starts from our senses, Trends in Cognitive Sciences, Vol. 28 (3), 2024. (Journal Article)
Is the role of our sensory systems to represent the physical world as accurately as possible? If so, are our preferences and emotions, often deemed irrational, decoupled from these 'ground-truth' sensory experiences? We show why the answer to both questions is 'no'. Brain function is metabolically costly, and the brain loses some fraction of the information that it encodes and transmits. Therefore, if brains maximize objective functions that increase the fitness of their species, they should adapt to the objective-maximizing rules of the environment at the earliest stages of sensory processing. Consequently, observed 'irrationalities', preferences, and emotions stem from the necessity for our early sensory systems to adapt and process information while considering the metabolic costs and internal states of the organism. |
|
David R Bell, Olivier Ledoit, Michael Wolf, A novel estimator of Earth’s curvature (allowing for inference as well), Annals of Applied Statistics, Vol. 18 (1), 2024. (Journal Article)
This paper estimates the curvature of the Earth, defined as one over its radius, without relying on physical measurements. The orthodox model states that the Earth is (nearly) spherical with a curvature of π/20,000km. By contrast, the heterodox flat-Earth model stipulates a curvature of zero. Abstracting from the well-worn arguments for and against both models, rebuttals and counter-rebuttals ad infinitum, we propose a novel statistical methodology based on verifiable flight times along regularly scheduled commercial airline routes; this methodology allows for both estimating and making inference for Earth’s curvature. In particular, a formal hypothesis test resolutely rejects the flat-Earth model, whereas it does not reject the orthodox spherical-Earth model. |
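The logic of the flight-time test can be illustrated with a back-of-the-envelope computation (not the authors' estimator): under the spherical model, route distances follow the great-circle formula, while a flat disc with the North Pole at its centre (a common flat-Earth depiction, assumed here for illustration) implies very different distances for southern-hemisphere routes, and hence very different flight times. The city coordinates below are approximate.

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, R=6371.0):
    """Distance on a sphere of radius R (haversine formula)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def flat_earth_km(lat1, lon1, lat2, lon2, R=6371.0):
    """Distance on a flat disc with the North Pole at the centre
    (azimuthal equidistant layout): each point sits at radius
    proportional to its angular distance from the pole."""
    r1 = R * math.radians(90 - lat1)
    r2 = R * math.radians(90 - lat2)
    dtheta = math.radians(lon2 - lon1)
    return math.sqrt(r1 ** 2 + r2 ** 2 - 2 * r1 * r2 * math.cos(dtheta))

# Sydney -> Santiago, a southern-hemisphere route where the two
# models disagree most sharply.
syd = (-33.87, 151.21)
scl = (-33.45, -70.67)
d_sphere = great_circle_km(*syd, *scl)   # roughly 11,300 km
d_flat = flat_earth_km(*syd, *scl)       # more than twice as far
```

At a cruise speed of roughly 900 km/h, the spherical model predicts about 12.5 hours for this route, broadly in line with actual schedules, while the flat-disc model predicts more than double that — the intuition the paper's hypothesis test formalizes.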
|
Steven Ongena, Which banks for green growth? A review and a tentative research agenda, Journal of Sustainable Finance and Accounting, Vol. 1, 2024. (Journal Article)
Which commercial banks are best for green growth? This note aims to review the literature and to provide some potential elements for a research agenda in this space. Commercial bank size (business model, too-big-to-fail status, political or media connections), ownership (captivity, common), depositors, nationality, and orientation are discussed as salient dimensions. |
|
Pedro Miguel Sánchez Sánchez, Alberto Huertas Celdran, Gérôme Bovet, Gregorio Martínez Pérez, Adversarial attacks and defenses on ML- and hardware-based IoT device fingerprinting and identification, Future Generation Computer Systems, Vol. 152, 2024. (Journal Article)
In recent years, the number of deployed IoT devices has exploded, reaching the scale of billions. However, this growth has been accompanied by new cybersecurity issues, such as the deployment of unauthorized devices, malicious code modification, malware deployment, and vulnerability exploitation. These issues have motivated the need for new device identification mechanisms based on behavior monitoring. Moreover, such solutions have recently leveraged Machine and Deep Learning (ML/DL) techniques, thanks to advances in the field and increased processing capabilities. Attackers, however, do not stand still: they have developed adversarial attacks, focused on context modification and ML/DL evasion, against IoT device identification solutions. The literature has not yet analyzed in detail the impact of these attacks on individual identification solutions or their countermeasures. This work explores the performance of hardware behavior-based individual device identification, how it is affected by possible context- and ML/DL-focused attacks, and how its resilience can be improved using defense techniques. To this end, it proposes an LSTM-CNN architecture based on hardware performance behavior for individual device identification. The most common ML/DL classification techniques are then compared with the proposed architecture using a hardware performance dataset collected from 45 Raspberry Pi devices running identical software. The LSTM-CNN improves on previous solutions, achieving an average F1-score above 0.96 and a minimum TPR of 0.8 across all devices. Afterward, context- and ML/DL-focused adversarial attacks were applied against this model to test its robustness. A temperature-based context attack was not able to disrupt the identification, but several state-of-the-art ML/DL evasion attacks were successful. 
Finally, adversarial training and model distillation defense techniques are selected to improve the model's resilience to evasion attacks, reducing the attack success ratio from up to 0.88 to 0.17 in the worst case, without meaningfully degrading performance. |
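The evasion principle behind such attacks can be sketched in a few lines (a toy linear classifier, not the paper's LSTM-CNN; the weights and samples below are made up): a small, targeted perturbation of the behavioral features flips the identification decision.

```python
import math

# A toy linear "fingerprint classifier": w . x + b > 0 means "device A".
# Weights and bias are assumptions for illustration only.
w = [0.9, -0.5, 0.3]
b = -0.1

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fgsm(x, eps):
    """Fast-gradient-sign-style evasion: nudge each feature against
    the decision, i.e. opposite the sign of its weight."""
    return [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]

x = [1.0, 0.2, 0.5]        # benign sample, classified as "device A"
x_adv = fgsm(x, eps=0.6)   # bounded perturbation per feature
```

Adversarial training, one of the defenses the paper evaluates, amounts to adding such perturbed samples (with correct labels) back into the training set so the decision boundary no longer flips this easily.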
|
Pedro Miguel Sánchez Sánchez, Alberto Huertas Celdran, Ning Xie, Gérôme Bovet, Gregorio Martínez Pérez, Burkhard Stiller, FederatedTrust: A solution for trustworthy federated learning, Future Generation Computer Systems, Vol. 152, 2024. (Journal Article)
The rapid expansion of the Internet of Things (IoT) and Edge Computing has presented challenges for centralized Machine and Deep Learning (ML/DL) methods due to the presence of distributed data silos that hold sensitive information. To address concerns regarding data privacy, collaborative and privacy-preserving ML/DL techniques like Federated Learning (FL) have emerged. FL ensures data privacy by design, as the local data of participants remains undisclosed during the creation of a global and collaborative model. However, data privacy and performance alone are insufficient, as there is a growing demand for trust in model predictions. Existing literature has proposed various approaches to trustworthy ML/DL (excluding data privacy), identifying robustness, fairness, explainability, and accountability as important pillars. Nevertheless, further research is required to identify trustworthiness pillars and evaluation metrics specifically relevant to FL models, as well as to develop solutions that can compute the trustworthiness level of FL models. This work examines the existing requirements for evaluating trustworthiness in FL and introduces a comprehensive taxonomy consisting of six pillars (privacy, robustness, fairness, explainability, accountability, and federation), along with over 30 metrics for computing the trustworthiness of FL models. Subsequently, an algorithm named FederatedTrust is designed based on the pillars and metrics identified in the taxonomy to compute the trustworthiness score of FL models. A prototype of FederatedTrust is implemented and integrated into the learning process of FederatedScope, a well-established FL framework. Finally, five experiments are conducted using different configurations of FederatedScope (with different numbers of participants, selection rates, training rounds, and differential privacy settings) to demonstrate the utility of FederatedTrust in computing the trustworthiness of FL models. 
Three experiments employ the FEMNIST dataset, and two utilize the N-BaIoT dataset, considering a real-world IoT security use case. |
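The idea of collapsing many metrics into one trustworthiness score can be sketched as a hierarchical weighted average (the pillar names come from the abstract; the metric names, scores, and equal weights are illustrative, not FederatedTrust's actual prototype):

```python
# Hypothetical metric scores in [0, 1], grouped by the six pillars
# named in the abstract.
pillars = {
    "privacy":        {"dp_enabled": 1.0, "entropy": 0.7},
    "robustness":     {"attack_tolerance": 0.6},
    "fairness":       {"selection_rate_parity": 0.8},
    "explainability": {"model_interpretability": 0.5},
    "accountability": {"logging_coverage": 0.9},
    "federation":     {"client_participation": 0.75},
}
weights = {p: 1 / len(pillars) for p in pillars}  # equal pillar weights

def trust_score(pillars, weights):
    """Average the metrics within each pillar, then take a weighted
    mean across pillars to obtain one score in [0, 1]."""
    pillar_scores = {p: sum(m.values()) / len(m) for p, m in pillars.items()}
    return sum(weights[p] * s for p, s in pillar_scores.items())
```

Any real deployment would of course compute the metric values from the FL run itself (participants, rounds, DP settings) rather than hard-coding them.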
|
Stephan Nebe, André Kretzschmar, Maike C Brandt, Philippe Tobler, Characterizing human habits in the lab, Collabra: Psychology, Vol. 10 (1), 2024. (Journal Article)
Habits pose a fundamental puzzle for those aiming to understand human behavior. They pervade our everyday lives and dominate some forms of psychopathology but are extremely hard to elicit in the lab. In this Registered Report, we developed novel experimental paradigms grounded in computational models, which suggest that habit strength should be proportional to the frequency of behavior and, in contrast to previous research, independent of value. Specifically, we manipulated how often participants performed responses in two tasks varying action repetition without, or separately from, variations in value. Moreover, we asked how this frequency-based habitization related to value-based operationalizations of habit and self-reported propensities for habitual behavior in real life. We find that choice frequency during training increases habit strength at test and that this form of habit shows little relation to value-based operationalizations of habit. Our findings empirically ground a novel perspective on the constituents of habits and suggest that habits may arise in the absence of external reinforcement. We further find no evidence for an overlap between different experimental approaches to measuring habits and no associations with self-reported real-life habits. Thus, our findings call for a rigorous reassessment of our understanding and measurement of human habitual behavior in the lab. |
|
Delia Coculescu, Gabriele Visentin, A default system with overspilling contagion, Frontiers of Mathematical Finance, Vol. 3 (1), 2024. (Journal Article)
Several dynamical contagion models for default risk have been proposed in the literature, in which a system (composed of individual debtors) evolves as a Markov process, conditionally on the observation of its stochastic environment, with interacting intensities. The Markovian assumption requires that the environment evolves autonomously and is not influenced by the transitions of the system. We extend this classical literature and allow a default system to have a contagious impact on its environment: with a certain probability, the transition of a debtor to the default state affects the system's environment, which in turn alters the transition intensities of the other debtors in the system. Therefore, in our framework, contagion can either be contained within the default system (i.e., direct contagion from one counterparty to another) or spill from the default system over to its environment (indirect contagion). This type of model is of interest whenever one wants to capture the possible impacts of the defaults of a class of debtors on the broader economy, and vice versa. |
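A minimal discrete-time sketch of the overspilling channel (the intensities, spill probability, and stress multiplier below are illustrative assumptions; the paper works in a continuous-time Markov framework):

```python
import random

def simulate(n_debtors=10, base_intensity=0.05, spill_prob=0.5,
             stress_multiplier=3.0, horizon=50, seed=1):
    """Each period, every surviving debtor defaults with the current
    intensity; a default may (with probability spill_prob) push the
    environment into a stressed state that scales every remaining
    debtor's intensity -- the 'overspilling' channel."""
    rng = random.Random(seed)
    alive = n_debtors
    stressed = False
    defaults = 0
    for _ in range(horizon):
        intensity = base_intensity * (stress_multiplier if stressed else 1.0)
        for _ in range(alive):
            if rng.random() < intensity:
                defaults += 1
                if rng.random() < spill_prob:
                    stressed = True
        alive = n_debtors - defaults
        if alive <= 0:
            break
    return defaults, stressed
```

With `spill_prob=0` the environment is autonomous and the model collapses to the classical conditionally-Markov setting; raising it strictly increases contagion on average.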
|
Giuseppe Sorrenti, Ulf Zölitz, Denis Ribeaud, Manuel Eisner, The causal impact of socio-emotional skills training on educational success, Review of Economic Studies, 2024. (Journal Article)
We study the long-term effects of a randomized intervention targeting children's socio-emotional skills. The classroom-based intervention for primary school children has positive impacts that persist for over a decade. Treated children become more likely to complete academic high school and enrol in university. Two mechanisms drive these results. Treated children show fewer attention deficit/hyperactivity disorder symptoms: they are less impulsive and less disruptive. They also attain higher grades, but they do not score higher on standardized tests. The long-term effects on educational attainment thus appear to be driven by changes in socio-emotional skills rather than cognitive skills. |
|
Thomas F Epper, Helga Fehr-Duda, Risk in Time: The Intertwined Nature of Risk Taking and Time Discounting, Journal of the European Economic Association, Vol. 22 (1), 2024. (Journal Article)
Standard economic models view risk taking and time discounting as two independent dimensions of decision making. However, mounting experimental evidence demonstrates striking parallels in patterns of risk taking and time discounting behavior and systematic interaction effects, which suggests that there may be common underlying forces driving these interactions. Here, we show that the inherent uncertainty associated with future prospects together with individuals’ proneness to probability weighting generates a unifying framework for explaining a large number of puzzling behavioral findings: delay-dependent risk tolerance, aversion to sequential resolution of uncertainty, preferences for the timing of the resolution of uncertainty, the differential discounting of risky and certain outcomes, hyperbolic discounting, subadditive discounting, and the order dependence of prospect valuation. Furthermore, all these phenomena can be accommodated by the same set of preference parameter values and plausible levels of inherent uncertainty. |
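The delay-dependent risk tolerance mentioned above can be reproduced numerically with a standard Prelec weighting function (the parameter value and the survival probability of delayed payments are illustrative assumptions, not the paper's calibration): because the weighting function is subproportional, the weight attached to risk conditional on delay, w(p·s)/w(s), exceeds the undelayed weight w(p).

```python
import math

def prelec(p, alpha=0.5):
    """Prelec probability-weighting function w(p) = exp(-(-ln p)^alpha)."""
    return math.exp(-((-math.log(p)) ** alpha))

p = 0.5  # stated winning probability of the risky prospect
s = 0.9  # assumed survival probability of any delayed payment

immediate_weight = prelec(p)                 # weight of the risk, no delay
delayed_weight = prelec(p * s) / prelec(s)   # weight of the risk, given delay
```

That `delayed_weight > immediate_weight` is exactly the mechanism the paper exploits: inherent uncertainty about the future interacts with probability weighting to make decision makers effectively more risk tolerant toward delayed prospects.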
|
Pedro Miguel Sánchez Sánchez, Alberto Huertas Celdran, José R Buendía Rubio, Gérôme Bovet, Gregorio Martínez Pérez, Robust Federated Learning for execution time-based device model identification under label-flipping attack, Cluster Computing, Vol. 27 (1), 2024. (Journal Article)
The explosion in computing device deployment experienced in recent years, driven by advances in technologies such as the Internet of Things (IoT) and 5G, has led to a global scenario with increasing cybersecurity risks and threats. Among them, device spoofing and impersonation cyberattacks stand out due to their impact and the typically low complexity required to launch them. To solve this issue, several solutions have emerged that identify device models and types by combining behavioral fingerprinting with Machine/Deep Learning (ML/DL) techniques. However, these solutions are not appropriate for scenarios where data privacy and protection are a must, as they require data centralization for processing. In this context, newer approaches such as Federated Learning (FL) have not been fully explored yet, especially when malicious clients are present in the scenario setup. The present work analyzes and compares the device model identification performance of a centralized DL model with that of an FL one, using execution time-based events. For experimental purposes, a dataset containing execution-time features of 55 Raspberry Pis belonging to four different models has been collected and published. Using this dataset, the proposed solution achieved 0.9999 accuracy in both setups, centralized and federated, showing no performance decrease while preserving data privacy. The impact of a label-flipping attack during federated model training is then evaluated, using several aggregation mechanisms as countermeasures. Zeno and coordinate-wise median aggregation show the best performance, although their performance degrades considerably when the percentage of fully malicious clients (all training samples poisoned) grows over 50%. |
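Coordinate-wise median aggregation, one of the countermeasures evaluated, can be sketched as follows (toy two-parameter client updates; real FL updates are full weight vectors):

```python
from statistics import mean, median

def fedavg(updates):
    """Plain federated averaging of client weight vectors."""
    return [mean(c) for c in zip(*updates)]

def coordinate_median(updates):
    """Coordinate-wise median: robust to a minority of poisoned clients,
    since outlier coordinates cannot drag the aggregate."""
    return [median(c) for c in zip(*updates)]

honest = [[1.0, 2.0], [1.1, 1.9], [0.9, 2.1]]  # benign client updates
poisoned = honest + [[-9.0, 9.0]]              # one label-flipping client
```

One extreme update ruins the average but barely moves the median — and, as the abstract notes, this protection collapses once poisoned clients become the majority, because the median itself then sits among poisoned values.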
|
Tobias Schultheiss, Uschi Backes-Gellner, Does updating education curricula accelerate technology adoption in the workplace? Evidence from dual vocational education and training curricula in Switzerland, Journal of Technology Transfer, Vol. 49 (1), 2024. (Journal Article)
In an environment of accelerating technological change and increasing digitalization, firms need to adopt new technologies faster than ever before to stay competitive. This paper examines whether updates of education curricula help to bring new technologies faster into firms’ workplaces. We study technology changes and curriculum updates from an early wave of digitalization (i.e., computer-numerically controlled machinery, computer-aided design, and desktop publishing software). We take a text-as-data approach and tap into two novel data sources to measure change in educational content and the use of technology at the workplace: first, vocational education curricula and, second, firms’ job advertisements. To examine the causal effects of adding new technology skills to curricula on the diffusion of these technologies in firms’ workplaces (measured by job advertisements), we use an event study design. Our results show that curriculum updates substantially shorten the time it takes for new technologies to arrive in firms’ workplaces, especially for mainstream firms. |
|
Daniel Fasnacht, Christian Straube, Quantum computing as an enabling technology for the next business cycle, HMD Praxis der Wirtschaftsinformatik, Vol. 61 (1), 2024. (Journal Article)
We need more computing capacity for the next growth cycle, and computers based on conventional transistor technology are reaching their limits, so new ideas are required. The quantum computer, which overcomes the binary system and is not based on silicon microchips, could be a solution. This technology will continue to develop exponentially and transform science, the economy, and society. Furthermore, the paradigm of quantum communication offers an entirely novel possibility for distributed computing by allowing quantum computers to be networked via intrinsically secure quantum channels. This article explains how quantum computers exploit phenomena that do not occur in classical physics. Along the four primary application areas identified (optimization, simulation, machine learning, and cryptography), we describe possible applications in various industries. Our critical appraisal presents the technical challenges that must still be overcome before quantum computing can complement traditional computing systems. Accordingly, small and mid-sized companies do not necessarily need to invest in quantum computers themselves but rather in their use. Quantum as a service can be a first step for visionary leaders to become familiar with the technology and gain a competitive advantage early on. |
|
Alin Marius Andrieş, Alexandra Maria Chiper, Steven Ongena, Nicu Sprincean, External wealth of nations and systemic risk, Journal of Financial Stability, Vol. 70, 2024. (Journal Article)
External imbalances played a pivotal role in the lead-up to the global financial crisis and were an important cause of the turmoil. While current account (flow) imbalances narrowed in the aftermath of the crisis, net international investment position (NIIP) (stock) imbalances persisted. This study explores the implications of countries' net foreign positions for systemic risk. Using a sample of 470 banks located in 49 advanced economies, emerging countries, and developing economies over 2000–2020, we find robust empirical evidence that banks can reduce their systemic risk exposure when the countries in which they are incorporated improve their NIIPs and maintain creditor status vis-à-vis the rest of the world. However, only the equity component of the NIIP is responsible for this outcome, whereas debt flows are not significant. Similarly, we find that the mitigating effect of an external balance sheet on systemic risk derives from valuation gains rather than from the incremental net acquisition of assets or liabilities represented by the current account. Our findings are particularly relevant for policymakers seeking to improve banks' resilience to adverse shocks and maintain financial stability. |
|
Mahmoud Fatouh, Simone Giansante, Steven Ongena, Leverage ratio, risk-based capital requirements, and risk-taking in the United Kingdom, Financial markets, institutions & instruments, Vol. 33 (1), 2024. (Journal Article)
We assess the impact of leverage ratio capital requirements on the risk-taking behaviour of banks, both theoretically and empirically. Conceptually, introducing a binding leverage ratio requirement into a regulatory framework with risk-based capital requirements induces banks to re-optimise, shifting from safer to riskier assets (higher asset risk). Yet this shift would not be one-for-one, due to risk weight differences, so it would be associated with a lower level of leverage (lower insolvency risk). The interaction of these two changes determines the impact on the aggregate level of risk. Empirically, we use a difference-in-differences setup to compare the behaviour of UK banks subject to the leverage ratio requirement (LR banks) with otherwise similar banks (non-LR banks). Our results show that, after the introduction of the leverage ratio in the UK, LR banks did not increase asset risk and slightly reduced leverage levels compared with the control group. As expected, these two changes led to a lower aggregate level of risk. Empirical results indicate that credit default swap spreads on the 5-year subordinated debt of LR banks decreased relative to non-LR banks after the introduction of the leverage ratio, suggesting that the market viewed LR banks as less risky, especially during the COVID-19 stress. |
|
Fadong Chen, Zhi Zhu, Qiang Shen, Ian Krajbich, Todd Anthony Hare, Intrachoice dynamics shape social decisions, Management Science, Vol. 70 (2), 2024. (Journal Article)
Do people have well-defined social preferences waiting to be applied when making decisions? Or do they have to construct social decisions on the spot? If the latter, how are those decisions influenced by the way in which information is acquired and evaluated? These temporal dynamics are fundamental to understanding how people trade off selfishness and prosociality in organizations and societies. Here, we investigate how the temporal dynamics of the choice process shape social decisions in three studies using response times and mouse tracking. In the first study, participants made binary decisions in mini-dictator games with and without time constraints. Using mouse trajectories and a starting time drift diffusion model, we find that, regardless of time constraints, selfish participants were delayed in processing others’ payoffs, whereas the opposite was true for prosocial participants. The independent mouse trajectory and computational modeling analyses identified consistent measures of the delay between considering one’s own and others’ payoffs (self-onset delay, SOD). This measure correlated with individual differences in prosociality and predicted heterogeneous effects of time constraints on preferences. We confirmed these results in two additional studies, one a purely behavioral study in which participants made decisions by pressing computer keys, and the other a replication of the mouse-tracking study. Together, these results indicate that people preferentially process either self or others’ payoffs early in the choice process. The intrachoice dynamics are crucial in shaping social preferences and might be manipulated via nudge policies (e.g., manipulating the display order or saliency of self and others’ outcomes) for behavior in managerial or other contexts. |
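A minimal sketch of a starting-time drift diffusion model with a self-onset delay (all drift, noise, and threshold values are illustrative assumptions; the paper fits such a model to choice, response-time, and mouse-tracking data): before the delay only one's own payoff drives evidence accumulation, and the other's payoff enters afterwards.

```python
import random

def sddm_trial(self_gain, other_gain, sod=50, w_self=1.0, w_other=1.0,
               noise=1.0, threshold=10.0, dt_limit=2000, seed=0):
    """Starting-time DDM sketch: before step `sod` (the self-onset
    delay) only one's own payoff drives the drift; afterwards the
    other's payoff enters as well. Positive boundary = prosocial
    choice, negative boundary = selfish choice."""
    rng = random.Random(seed)
    x = 0.0
    for t in range(dt_limit):
        drift = -w_self * self_gain            # own gain pulls toward selfish
        if t >= sod:
            drift += w_other * other_gain      # other's gain pulls toward prosocial
        x += drift * 0.01 + rng.gauss(0.0, noise) * 0.1
        if abs(x) >= threshold:
            return ("prosocial" if x > 0 else "selfish"), t
    return "timeout", dt_limit
```

The sketch reproduces the qualitative predictions: with a late onset of others' payoffs, time pressure (a small `dt_limit`) cuts off the prosocial evidence before it can dominate, so constraints shift choices toward whichever payoff is processed first.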
|
Antonello Cirulli, Michal Kobak, Urban Ulrych, Portfolio Construction with Hierarchical Momentum, The Journal of Portfolio Management, Vol. 50 (4), 2024. (Journal Article)
This article presents a portfolio construction approach that combines hierarchical clustering of a large asset universe with stock price momentum. On the one hand, investing in high-momentum stocks enhances returns by capturing the momentum premium. On the other hand, hierarchical clustering of a high-dimensional asset universe ensures sparse diversification, stabilizes the portfolio across economic regimes, and mitigates the problem of increased drawdowns typically present in momentum portfolios. Moreover, the proposed portfolio construction approach avoids covariance matrix inversion. An out-of-sample backtest on a non-survivorship-biased dataset of international stocks shows that, compared with model-based and model-free benchmarks, hierarchical momentum portfolios achieve improved cumulative and risk-adjusted returns as well as decreased drawdowns, net of transaction costs. The study further suggests that the unique characteristics of hierarchical momentum portfolios arise from both dimensionality reduction via clustering and momentum-based stock selection. |
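The construction can be sketched end-to-end with toy data (a greedy correlation-threshold grouping stands in for the paper's proper hierarchical clustering, and all returns and prices are made up): cluster correlated assets, keep the top-momentum name per cluster, and weight the picks equally — no covariance matrix inversion anywhere.

```python
def momentum(prices):
    """Simple momentum signal: total return over the lookback window."""
    return prices[-1] / prices[0] - 1.0

def corr(a, b):
    """Pearson correlation of two equal-length return series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def hierarchical_momentum(returns, prices, threshold=0.8):
    """Greedy sketch: group assets whose returns correlate above the
    threshold, then hold each cluster's top-momentum asset, equally
    weighted across clusters (sparse diversification)."""
    clusters = []
    for name in returns:
        for cl in clusters:
            if corr(returns[name], returns[cl[0]]) >= threshold:
                cl.append(name)
                break
        else:
            clusters.append([name])
    picks = [max(cl, key=lambda n: momentum(prices[n])) for cl in clusters]
    return {p: 1.0 / len(picks) for p in picks}

returns = {
    "A": [0.010, -0.020, 0.030, 0.010],
    "B": [0.012, -0.018, 0.028, 0.009],   # tracks A closely
    "C": [-0.010, 0.020, -0.005, 0.010],  # uncorrelated with A/B
}
prices = {"A": [100, 110], "B": [100, 120], "C": [100, 105]}
weights = hierarchical_momentum(returns, prices)
```

A and B fall into one cluster, so only the stronger momentum name (B) is held alongside C, at 50% each.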
|
Maximilian D Gilger, Lydia Hellrung, Philipp T Neukam, Nils B Kroemer, Stephan Nebe, Shakoor Pooseh, Yacila I Deza-Lougovski, Michael N Smolka, Arbitration between model-free and model-based control is not affected by transient changes in tonic serotonin levels, Journal of Psychopharmacology, Vol. 38 (2), 2024. (Journal Article)
Background: Serotonin has been suggested to modulate decision-making by influencing the arbitration between model-based and model-free control. Disruptions in these control mechanisms are involved in mental disorders such as drug dependence or obsessive-compulsive disorder. While previous reports indicate that lower brain serotonin levels reduce model-based control, it remains unknown whether increases in serotonergic availability might likewise increase model-based control. Moreover, the mediating neural mechanisms have not been studied yet. Aim: The first aim of this study was to investigate whether increased/decreased tonic serotonin levels affect the arbitration between model-free and model-based control. Second, we aimed to identify the underlying neural processes. Methods: We employed a sequential two-stage Markov decision task and measured brain responses during functional magnetic resonance imaging in 98 participants in a randomized, double-blind, cross-over, within-subject design. To investigate the influence of serotonin on the balance between model-free and model-based control, we used a tryptophan intervention with three intervention levels (loading, balanced, depletion). We hypothesized that model-based behaviour would increase with higher serotonin levels. Results: We found evidence that neither model-free nor model-based control was affected by changes in tonic serotonin levels. Furthermore, our tryptophan intervention did not elicit relevant changes in Blood-Oxygenation-Level Dependent activity. |
|
Pedro Miguel Sánchez Sánchez, Alberto Huertas Celdran, Gérôme Bovet, Gregorio Martínez Pérez, Single-board device individual authentication based on hardware performance and autoencoder transformer models, Computers and Security, Vol. 137, 2024. (Journal Article)
The proliferation of the Internet of Things (IoT) has led to the emergence of crowdsensing applications, where a multitude of interconnected devices collaboratively collect and analyze data. Ensuring the authenticity and integrity of the data collected by these devices is crucial for reliable decision-making and maintaining trust in the system. Traditional authentication methods are often vulnerable to attacks or can be easily duplicated, posing challenges to securing crowdsensing applications. Besides, current solutions leveraging device behavior are mostly focused on device identification, which is a simpler task than authentication. To address these issues, this work proposes an individual IoT device authentication framework based on hardware behavior fingerprinting and Transformer autoencoders. To support the design, a threat model details the security problems faced when performing hardware-based authentication in IoT. The solution leverages the inherent imperfections and variations in IoT device hardware to differentiate between devices with identical specifications. By monitoring and analyzing the behavior of key hardware components, such as the CPU, GPU, RAM, and storage, unique fingerprints are created for each device. The performance samples are treated as time series data and used to train outlier-detection transformer models, one per device, each aiming to model that device's normal data distribution. The framework is then validated within a spectrum crowdsensing system leveraging Raspberry Pi devices. After a pool of experiments, the model for each device is able to individually authenticate it among the 45 devices employed for validation. An average True Positive Rate (TPR) of 0.74±0.13 and an average maximum False Positive Rate (FPR) of 0.06±0.09 demonstrate the effectiveness of this approach in enhancing authentication, security, and trust in crowdsensing applications. |
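The per-device authentication decision can be sketched independently of the transformer itself (the error values and the mean-plus-k-sigma threshold rule are illustrative assumptions, not the paper's method): each device's model yields reconstruction errors, and a sample is accepted only if its error stays below a threshold fitted on that device's benign data.

```python
from statistics import mean, stdev

def fit_threshold(train_errors, k=3.0):
    """Per-device threshold from benign reconstruction errors:
    mean + k * std (the autoencoder producing the errors is
    abstracted away in this sketch)."""
    return mean(train_errors) + k * stdev(train_errors)

def authenticate(error, threshold):
    """Accept the device iff its sample reconstructs below threshold."""
    return error <= threshold

# Hypothetical reconstruction errors of the claimed device's own model
# on benign samples.
benign = [0.10, 0.12, 0.09, 0.11, 0.10, 0.13]
thr = fit_threshold(benign)
```

A sample from the genuine device (error near 0.11) passes, while a sample from an impostor device, which the model reconstructs poorly (error near 0.40), is rejected.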
|
Reto Eberle, Alexandra Allgaier, Andreas Buchs, FER-Leitfaden: Nachhaltigkeitsmanagement und -berichterstattung bei KMU, Expert Focus, Vol. 98 (1), 2024. (Journal Article)
On 5 December 2023, the FER expert commission approved the publication of a discussion paper on sustainability within FER. The paper contains a guide that supports SMEs, in seven steps, in anchoring sustainability in their organization and reporting on it transparently. The public is invited to submit comments until 14 April 2024. |
|
Beibei Han, Yingmei Wei, Qingyong Wang, Francesco Maria De Collibus, Claudio Tessone, MT²AD: multi-layer temporal transaction anomaly detection in ethereum networks with GNN, Complex & Intelligent Systems, Vol. 10 (1), 2024. (Journal Article)
In recent years, a surge of criminal activities involving cross-cryptocurrency trades has emerged in Ethereum, the second-largest public blockchain platform. Most existing anomaly detection methods use traditional machine learning with feature engineering or graph representation learning techniques to capture the information in the transaction network. However, these methods either ignore the timestamp and transaction flow direction information in the transaction network or consider only a single transaction network, so the cross-cryptocurrency trading patterns in Ethereum are usually ignored. In this paper, we introduce a Multi-layer Temporal Transaction Anomaly Detection (MT²AD) model for the Ethereum network based on graph neural networks. Specifically, for a given Ethereum token transaction network, we first extract its initial features, including the structure subgraph and edge features. Then, we model the temporal information in the subgraph as a series of network snapshots according to the timestamp on each edge and a time window. To capture cross-cryptocurrency trading patterns, we combine the snapshots from multiple token transactions at a given timestamp into a new combined graph. We further apply a graph convolution encoder with an attention mechanism and a pooling operation on this new graph to obtain graph-level embeddings, and we cast anomaly detection on dynamic multi-layer Ethereum transaction networks as a graph classification task over these embeddings. MT²AD integrates transaction structure features, edge features, and cross-cryptocurrency trading patterns into a single framework to perform anomaly detection with graph neural networks. 
Experiments on three real-world multi-layer transaction networks show that the proposed MT²AD (0.8789 precision, 0.9375 recall, 0.4987 FbMacro, and 0.9351 FbWeighted) achieves the best performance on most evaluation metrics compared with competing approaches, and also demonstrate the effectiveness of considering multiple tokens. |
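The snapshot-and-combine step described in the abstract can be sketched with toy edge lists (token names, timestamps, and the window size are illustrative; node and edge features, and the GNN itself, are omitted):

```python
from collections import defaultdict

def snapshots(edges, window):
    """Bucket timestamped edges (src, dst, t) into time-window snapshots."""
    snaps = defaultdict(list)
    for src, dst, t in edges:
        snaps[t // window].append((src, dst))
    return dict(snaps)

def combine_layers(layers, window):
    """Merge per-token edge streams into one multi-layer snapshot per
    window, tagging each edge with its token -- the 'combined graph'
    on which the graph convolution encoder would then operate."""
    combined = defaultdict(list)
    for token, edges in layers.items():
        for w, es in snapshots(edges, window).items():
            combined[w].extend((token, s, d) for s, d in es)
    return dict(combined)

layers = {
    "USDT": [("a", "b", 5), ("b", "c", 15)],
    "DAI": [("a", "c", 7)],
}
multi = combine_layers(layers, window=10)
```

Each combined window graph would then be encoded into a graph-level embedding and classified as normal or anomalous.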
|