Hui Chen, Alexander Wenning, Higher-Order Beliefs, Market-Based Incentives, and Information Quality, European Accounting Review, Vol. 33 (2), 2024. (Journal Article)
We investigate how interdependence among investors' beliefs affects the reliance on market prices as a performance measure and how this in turn affects the firm's preference for financial reporting quality. When investors want to align their values more with other investors' beliefs, optimal contracts become more reliant on the accounting report and less on the market price, emphasizing the stewardship role of accounting in a herding market. If the baseline accounting quality required by a reporting standard is high enough, the firm prefers to increase its accounting quality for the sake of contracting efficiency. However, if the baseline quality is low, the firm further lowers accounting quality for the same reason. The benchmark level that determines whether the firm prefers to increase accounting quality increases with the interdependence of investors' beliefs, implying that it is difficult to align the information and stewardship roles of accounting in a herding market.

Piotr P Brud, Jan Cieciuch, Temperamental underpinnings of borderline personality disorder and its facets, Personality and Mental Health, 2024. (Journal Article)
Temperament is claimed to be the basis for personality; therefore, discovering the temperamental underpinnings of borderline personality disorder and its facets is crucial for understanding this personality disorder. In this article, we explore these underpinnings by using a new model of temperament, based on the Regulative Theory of Temperament, the Big Two of temperament, and the Circumplex of Personality Metatraits. Two studies were conducted on adults: the first in a general population sample (N = 315) and the second in a clinical sample (N = 113) of people with a diagnosis of borderline personality disorder. The following measures were used: the Screening Instrument for Borderline Personality Disorder (SI‐Bord), the Five‐Factor Borderline Inventory‐Short Form (FFBI‐SF), and the Temperament Metadimensions Questionnaire (TMQ). General borderline was explained by Reactivity (high Sensitivity) and Activity (high Dynamism). At the facet level, the Borderline Internalizing Facet was mainly explained by Reactivity (high Sensitivity), while the Borderline Externalizing Facet was explained by Activity (high Dynamism) in addition to Reactivity (high Sensitivity). The results of our study revealed specific temperamental underpinnings of borderline and its facets. Reactivity underlies all borderline facets, while Activity differentiates between the Borderline Externalizing Facet and the Borderline Internalizing Facet.

Lauren Howe, Laura M Giurge, Alexander Wagner, Jochen Menges, CEOs Showing Humanity: Seemingly Generic Human Care Statements in Conference Calls and Stock Market Performance during Crisis, Academy of Management Discoveries, 2024. (Journal Article)
Conference calls provide opportunities for CEOs to inform market participants (i.e., financial analysts and investors) about their companies’ prospects. Much research has focused on how CEOs speak about business-related topics in these calls, yet surprisingly the literature has not considered how statements that go beyond financial information affect market participants. When we explored archival data of how CEOs of publicly traded U.S.-based companies from the Russell 3000 Index spoke about COVID-19 in conference calls as the pandemic began in 2020, we noticed that about half of CEOs made human care statements that expressed a concern for people, with seemingly little direct financial relevance. However, although these statements were largely generic, vague expressions rather than clear plans, we discovered that the more such statements CEOs made, the better their companies fared on the stock market when stock prices tumbled globally. Follow-up explorations unveiled a negative association between CEO human care statements and stock volatility, meaning that market participants discounted these companies’ future earnings less. Our explorations suggest that it pays off for CEOs to go beyond mere financial information and show some humanity, with implications for downstream theorizing about CEO impression management.

Francesco Barile, Tim Draws, Oana Inel, Alisa Rieger, Shabnam Najafian, Amir Ebrahimi Fard, Rishav Hada, Nava Tintarev, Evaluating explainable social choice-based aggregation strategies for group recommendation, User modeling and user-adapted interaction, Vol. 34 (1), 2024. (Journal Article)
Social choice aggregation strategies have been proposed as an explainable way to generate recommendations to groups of users. However, it is not trivial to determine the best strategy to apply for a specific group. Previous work highlighted that the performance of a group recommender system is affected by the internal diversity of the group members’ preferences. However, few studies have empirically evaluated how the specific distribution of preferences in a group determines which strategy is the most effective. Furthermore, only a few studies have assessed the impact of providing explanations for the recommendations generated with social choice aggregation strategies by evaluating explanations and aggregation strategies in a coupled way. To fill these gaps, we present two user studies (N=399 and N=288) examining the effectiveness of social choice aggregation strategies in terms of users’ fairness perception, consensus perception, and satisfaction. We study the impact of the level of (dis-)agreement within the group on the performance of these strategies. Furthermore, we investigate the added value of textual explanations of the underlying social choice aggregation strategy used to generate the recommendation. The results of both user studies show no benefits in using social choice-based explanations for group recommendations. However, we find significant differences in the effectiveness of the social choice-based aggregation strategies in both studies. Furthermore, the specific group configuration (i.e., various scenarios of internal diversity) seems to determine the most effective aggregation strategy. These results provide useful insights into how to select the appropriate aggregation strategy for a specific group based on the level of (dis-)agreement among the group members’ preferences.

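The specific strategies the study compares are not detailed in this abstract; as a hedged illustration, two classic social choice aggregation strategies from this literature (additive/average utilitarian and least misery) can be sketched as follows, with hypothetical items and ratings:

```python
# Two classic social choice aggregation strategies for group recommendation.
# Items and ratings are illustrative, not taken from the study.

def average_strategy(ratings):
    """Score each item by the mean of the group members' ratings."""
    return {item: sum(r) / len(r) for item, r in ratings.items()}

def least_misery_strategy(ratings):
    """Score each item by the minimum rating, protecting the least happy member."""
    return {item: min(r) for item, r in ratings.items()}

group_ratings = {            # each list holds three group members' ratings
    "movie_a": [5, 4, 1],
    "movie_b": [3, 3, 3],
    "movie_c": [4, 5, 2],
}

avg = average_strategy(group_ratings)
lm = least_misery_strategy(group_ratings)
best_avg = max(avg, key=avg.get)   # "movie_c": highest mean rating
best_lm = max(lm, key=lm.get)      # "movie_b": no member is left miserable
```

Note how the two strategies diverge exactly when the group disagrees, which is the kind of preference-distribution effect the user studies measure.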
Sergei Ketkov, A study of distributionally robust mixed-integer programming with Wasserstein metric: on the value of incomplete data, European Journal of Operational Research, Vol. 313 (2), 2024. (Journal Article)
This study addresses a class of mixed-integer linear programming (MILP) problems that involve uncertainty in the objective function parameters. The parameters are assumed to form a random vector, whose probability distribution can only be observed through a finite training data set. Unlike most related studies in the literature, we also consider uncertainty in the underlying data set. The data uncertainty is described by a set of linear constraints for each random sample, and the uncertainty in the distribution (for a fixed realization of data) is defined using a type-1 Wasserstein ball centered at the empirical distribution of the data. The overall problem is formulated as a three-level distributionally robust optimization (DRO) problem. First, we prove that the three-level problem admits a single-level MILP reformulation if the class of loss functions is restricted to biaffine functions. Second, we show that for several particular forms of data uncertainty, the outlined problem can be solved reasonably fast by leveraging the nominal MILP problem. Finally, we conduct a computational study, in which the out-of-sample performance of our model and the computational complexity of the proposed MILP reformulation are explored numerically for several application domains.

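As a minimal sketch of the ambiguity set the abstract refers to, assuming the standard type-1 Wasserstein construction (the symbols below are illustrative, not the paper's notation): given the empirical distribution of the training samples, the distributional ball and the resulting worst-case objective for an MILP with uncertain objective coefficients take the form

```latex
\mathcal{M}_\varepsilon(\widehat{\mathbb{P}}_N)
  = \bigl\{ \mathbb{Q} : W_1(\mathbb{Q}, \widehat{\mathbb{P}}_N) \le \varepsilon \bigr\},
\qquad
\min_{x \in X} \; \sup_{\mathbb{Q} \in \mathcal{M}_\varepsilon(\widehat{\mathbb{P}}_N)}
  \mathbb{E}_{\mathbb{Q}}\bigl[\xi^\top x\bigr],
```

where $\widehat{\mathbb{P}}_N$ is the empirical distribution of the $N$ samples, $W_1$ the type-1 Wasserstein distance, $\xi$ the random objective coefficients, and $X$ the mixed-integer feasible set. The paper's three-level formulation additionally accounts for uncertainty in the data the ball is centered on, which is what distinguishes it from this standard two-level setup.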
Pedro Miguel Sánchez Sánchez, Alberto Huertas Celdran, Timo Schenk, Adrian Lars Benjamin Iten, Gérôme Bovet, Gregorio Martínez Pérez, Burkhard Stiller, Studying the Robustness of Anti-Adversarial Federated Learning Models Detecting Cyberattacks in IoT Spectrum Sensors, IEEE Transactions on Dependable and Secure Computing, Vol. 21 (2), 2024. (Journal Article)
Device fingerprinting combined with Machine and Deep Learning (ML/DL) reports promising performance when detecting spectrum sensing data falsification (SSDF) attacks. However, the amount of data needed to train models and the privacy concerns of such scenarios limit the applicability of centralized ML/DL. Federated learning (FL) addresses these drawbacks but is vulnerable to adversarial participants and attacks. The literature has proposed countermeasures, but more effort is required to evaluate the performance of FL models in detecting SSDF attacks and their robustness against adversaries. Thus, the first contribution of this work is to create an FL-oriented dataset modeling the behavior of resource-constrained spectrum sensors affected by SSDF attacks. The second contribution is a pool of experiments analyzing the robustness of FL models according to i) three families of sensors, ii) eight SSDF attacks, iii) four FL scenarios dealing with anomaly detection and binary classification, iv) up to 33% of participants implementing data and model poisoning attacks, and v) four aggregation functions acting as anti-adversarial mechanisms. In conclusion, FL achieves promising performance when detecting SSDF attacks. Without anti-adversarial mechanisms, FL models are particularly vulnerable with > 16% of adversaries. Coordinate-wise median is the best mitigation for anomaly detection, but binary classifiers are still affected with > 33% of adversaries.

Reint Gropp, Thomas Mosk, Steven Ongena, Ines Simac, Carlo Wix, Supranational Rules, National Discretion: Increasing versus Inflating Regulatory Bank Capital?, Journal of Financial and Quantitative Analysis, Vol. 59 (2), 2024. (Journal Article)
We study how banks use "regulatory adjustments" to inflate their regulatory capital ratios and whether this depends on forbearance on the part of national authorities. Using the 2011 EBA capital exercise as a quasi-natural experiment, we find that banks substantially inflated their levels of regulatory capital via a reduction in regulatory adjustments, without a commensurate increase in book equity and without a reduction in bank risk. We document substantial heterogeneity in regulatory capital inflation across countries, suggesting that national authorities exercise forbearance to help their domestic banks meet supranational requirements, with a focus on short-term economic considerations.

Julia Wamsler, Denis Vuckovac, Martin Natter, Alexander Ilic, Live shopping promotions: which categories should a retailer discount to shoppers already in the store?, OR Spektrum, Vol. 46 (1), 2024. (Journal Article)
Digitalization allows retailers to target customers with personalized promotions when they enter the store. Although traditional promotional retailer objectives, such as driving store visits, become obsolete once the shopper is already in the store, retailers still tend to target customers based on indicators that drive store visits, such as recency, frequency, and monetary value (RFM). To improve promotional efficiency, the authors propose targeting shoppers based on information derived from regularity patterns in individual interpurchase times at the point of sale. When compared to RFM-based targeting, the proposed live targeting approach translates into higher redemption rates (+ 10.5 percentage points), revenues (+ 42.3 percentage points), and purchase frequencies (+ 44.2 percentage points). The findings emphasize the importance of promotional timing and of considering customers’ outside potential for dynamic in-store targeting.

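The RFM indicators used as the baseline here are a standard construction; a minimal sketch of how they are computed per customer, with hypothetical customers and transactions, might look as follows:

```python
from datetime import date

# Illustrative RFM (recency, frequency, monetary value) computation for the
# baseline targeting the abstract mentions; customers and dates are made up.
transactions = {
    "cust_1": [(date(2024, 1, 5), 20.0), (date(2024, 2, 1), 35.0)],
    "cust_2": [(date(2023, 11, 2), 5.0)],
}
today = date(2024, 2, 15)

rfm = {}
for cust, txs in transactions.items():
    recency = min((today - d).days for d, _ in txs)  # days since last purchase
    frequency = len(txs)                              # number of purchases
    monetary = sum(v for _, v in txs)                 # total spend
    rfm[cust] = (recency, frequency, monetary)
```

The proposed live targeting instead exploits the regularity of interpurchase times, i.e., how predictable the gaps between an individual customer's purchases are.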
Tobias Schimanski, Andrin Reding, Nico Reding, Julia Bingler, Mathias Kraus, Markus Leippold, Bridging the gap in ESG measurement: Using NLP to quantify environmental, social, and governance communication, Finance Research Letters, Vol. 61, 2024. (Journal Article)
Environmental, social, and governance (ESG) criteria play a central role in fostering sustainable development in economies. This paper introduces a class of novel Natural Language Processing (NLP) models to assess corporate disclosures in the ESG subdomains. Specific E, S, and G models were pretrained on over 13.8 million texts from reports and news. Additionally, three 2k datasets were developed for classifying ESG-related texts. The models effectively explain variations in ESG ratings, showcasing a robust method for enhancing transparency and accuracy in evaluating corporate sustainability. This approach addresses the gap in precise, transparent ESG measurement, advancing sustainable development in economies.

Thomas Puschmann, Marine Huang-Sui, A taxonomy for decentralized finance, International Review of Financial Analysis, Vol. 92, 2024. (Journal Article)
Decentralized Finance (‘DeFi’) has gained tremendous momentum over the past three years by using novel approaches to disintermediate financial institutions in the provision of financial services. However, empirical research in this field is still rare, and a more comprehensive understanding of the domain is still missing in academic research. This paper develops a taxonomy based on a comprehensive literature analysis to structure this emerging field systematically. The taxonomy includes three perspectives (strategy, organization, technology), seven dimensions (blockchain, value proposition, token type, business process, price mechanism, protocol type, integration type), and thirty-six characteristics. The application of the taxonomy to 278 DeFi start-ups reveals that most DeFi start-ups focus on Ethereum (36.3%) and on analytics and automation (52%), while, surprisingly, only a few incorporate decentralized governance approaches (3.3%), provide decentralized exchanges (14%), or integrate off-chain data.

Dario Mazzilli, Manuel Mariani, Flaviano Morone, Aurelio Patelli, Equivalence between the Fitness-Complexity and the Sinkhorn-Knopp algorithms, Journal of Physics: Complexity, Vol. 5 (1), 2024. (Journal Article)
We uncover the connection between the Fitness-Complexity algorithm, developed in the economic complexity field, and the Sinkhorn-Knopp algorithm, widely used in diverse domains ranging from computer science and mathematics to economics. Despite minor formal differences between the two methods, both converge to the same fixed-point solution up to normalization. The discovered connection allows us to derive a rigorous interpretation of the Fitness and the Complexity metrics as the potentials of a suitable energy function. Under this interpretation, high-energy products are unfeasible for low-fitness countries, which explains why the algorithm is effective at displaying nested patterns in bipartite networks. We also show that the proposed interpretation reveals the scale invariance of the Fitness-Complexity algorithm, which has practical implications for the algorithm's implementation in different datasets. Further, analysis of empirical trade data under the new perspective reveals three categories of countries that might benefit from different development strategies.

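As a hedged sketch of the algorithm under discussion, one standard formulation of the Fitness-Complexity iteration (synchronous updates with per-step mean normalization; the toy matrix is illustrative, not trade data) is:

```python
import numpy as np

def fitness_complexity(M, n_iter=100):
    """Fitness-Complexity iteration on a binary country-product matrix M
    (rows: countries, columns: products)."""
    F = np.ones(M.shape[0])
    Q = np.ones(M.shape[1])
    for _ in range(n_iter):
        F_new = M @ Q                    # fitness: sum of exported products' complexities
        Q_new = 1.0 / (M.T @ (1.0 / F))  # complexity: harmonically penalized by weak exporters
        F = F_new / F_new.mean()         # normalize each step; the map itself is scale free
        Q = Q_new / Q_new.mean()
    return F, Q

# Perfectly nested toy matrix: country 0 exports everything,
# product 2 is exported only by the fittest country.
M = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0]])
F, Q = fitness_complexity(M)
```

The per-step normalization reflects the scale invariance noted in the abstract: rescaling F or Q leaves the fixed point unchanged up to normalization, which is also what makes the comparison with Sinkhorn-Knopp natural.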
Alex Mari, Andreina Mandelli, René Algesheimer, Empathic voice assistants: Enhancing consumer responses in voice commerce, Journal of Business Research, Vol. 175, 2024. (Journal Article)
Artificial intelligence (AI)-enabled voice assistants (VAs) are transforming firm-customer interactions but often come across as lacking empathy. This challenge may cause business managers to question the overall effectiveness of VAs in shopping contexts. Recognizing empathy as a core design element in the next generation of VAs and the limits of scenario-based studies in voice commerce, this article investigates how empathy exhibited by an existing AI agent (Alexa) may alter consumer shopping responses. AI empathy moderates the original structural model bridging functional, relational, and social-emotional dimensions. Findings of an individual-session online experiment show higher intentions to delegate tasks, seek decision assistance, and trust recommendations from AI agents perceived as empathic. In contrast to individual shoppers, families respond better to functional VA attributes such as ease of use when AI empathy is present. The results contribute to the literature on AI empathy and conversational commerce while informing managerial AI design decisions.

Delia Coculescu, Mederic Motte, Huyen Pham, Opinion dynamics in communities with major influencers and implicit social influence via mean-field approximation, Mathematics and Financial Economics, 2024. (Journal Article)
We study binary opinion formation in a large population where individuals are influenced by the opinions of other individuals. The population is characterised by the existence of (i) communities where individuals share some similar features, (ii) opinion leaders that may trigger unpredictable opinion shifts in the short term, and (iii) some degree of incomplete information in the observation of the individual or public opinion processes. In this setting, we study three different approximate mechanisms: common sampling approximation, independent sampling approximation, and, what will be our main focus in this paper, McKean–Vlasov (or mean-field) approximation. We show that all three approximations perform well in terms of different metrics that we introduce for measuring population level and individual level errors. In the presence of a common noise represented by the major influencers' opinion processes, and despite the absence of idiosyncratic noises, we derive a propagation of chaos type result. For the particular case of a linear model and particular specifications of the major influencers' opinion dynamics, we provide additional analysis, including long term behavior and fluctuations of the public opinion. The theoretical results are complemented by some concrete examples and numerical analysis, illustrating the formation of echo chambers, the propagation of chaos, and phenomena such as the snowball effect and social inertia.

Fang Zhou, Linyuan Lu, Jianguo Liu, Manuel Mariani, Beyond network centrality: Individual-level behavioral traits for predicting information superspreaders in social media, National Science Review, 2024. (Journal Article)
Understanding the heterogeneous role of individuals in large-scale information spreading is essential to manage online behavior as well as its potential offline consequences. To this end, most existing studies from diverse research domains focus on the disproportionate role played by highly connected “hub” individuals. However, we demonstrate here that information superspreaders in online social media are best understood and predicted by simultaneously considering two individual-level behavioral traits: influence and susceptibility. Specifically, we derive a nonlinear network-based algorithm to quantify individuals’ influence and susceptibility from multiple spreading event data. By applying the algorithm to large-scale data from Twitter and Weibo, we demonstrate that individuals’ estimated influence and susceptibility scores enable predictions of future superspreaders above and beyond network centrality, and reveal new insights into the network positions of the superspreaders.

Christian Ewerhart, A game-theoretic implication of the Riemann hypothesis, Mathematical Social Sciences, Vol. 128, 2024. (Journal Article)
The Riemann hypothesis (RH) is one of the major unsolved problems in pure mathematics. In the present paper, a parameterized family of non-cooperative games is constructed with the property that, if RH is true, then any game in the family admits a unique Nash equilibrium. We argue that this result is not degenerate. Indeed, neither is the conclusion a tautology, nor is RH used to define the family of games.

Stephan Nebe, André Kretzschmar, Maike C Brandt, Philippe Tobler, Characterizing Human Habits in the Lab, Collabra: Psychology, Vol. 10 (1), 2024. (Journal Article)
Habits pose a fundamental puzzle for those aiming to understand human behavior. They pervade our everyday lives and dominate some forms of psychopathology but are extremely hard to elicit in the lab. In this Registered Report, we developed novel experimental paradigms grounded in computational models, which suggest that habit strength should be proportional to the frequency of behavior and, in contrast to previous research, independent of value. Specifically, we manipulated how often participants performed responses in two tasks varying action repetition without, or separately from, variations in value. Moreover, we asked how this frequency-based habitization related to value-based operationalizations of habit and self-reported propensities for habitual behavior in real life. We find that choice frequency during training increases habit strength at test and that this form of habit shows little relation to value-based operationalizations of habit. Our findings empirically ground a novel perspective on the constituents of habits and suggest that habits may arise in the absence of external reinforcement. We further find no evidence for an overlap between different experimental approaches to measuring habits and no associations with self-reported real-life habits. Thus, our findings call for a rigorous reassessment of our understanding and measurement of human habitual behavior in the lab.

Delia Coculescu, Gabriele Visentin, A default system with overspilling contagion, Frontiers of Mathematical Finance, Vol. 3 (1), 2024. (Journal Article)
Some dynamical contagion models for default risk have been proposed in the literature, where a system (composed of individual debtors) evolves as a Markov process conditionally on the observation of its stochastic environment, with interacting intensities. The Markovian assumption necessitates that the environment evolves autonomously and is not influenced by the transitions of the system. We extend this classical literature and allow a default system to have a contagious impact on its environment. With a certain probability, the transition of a debtor to the default state has an impact on the system's environment. This in turn affects the transition intensities of the other debtors inside the system. Therefore, in our framework, contagion can either be contained within the default system (i.e., direct contagion from one counterparty to another) or spill over from the default system to its environment (indirect contagion). This type of model is of interest whenever one wants to capture, within a model, possible impacts of the defaults of a class of debtors on the broader economy and vice versa.

Giuseppe Sorrenti, Ulf Zölitz, Denis Ribeaud, Manuel Eisner, The causal impact of socio-emotional skills training on educational success, Review of Economic Studies, 2024. (Journal Article)
We study the long-term effects of a randomized intervention targeting children's socio-emotional skills. The classroom-based intervention for primary school children has positive impacts that persist for over a decade. Treated children become more likely to complete academic high school and enrol in university. Two mechanisms drive these results. Treated children show fewer attention deficit/hyperactivity disorder symptoms: they are less impulsive and less disruptive. They also attain higher grades, but they do not score higher on standardized tests. The long-term effects on educational attainment thus appear to be driven by changes in socio-emotional skills rather than cognitive skills.

Thomas F Epper, Helga Fehr-Duda, Risk in Time: The Intertwined Nature of Risk Taking and Time Discounting, Journal of the European Economic Association, Vol. 22 (1), 2024. (Journal Article)
Standard economic models view risk taking and time discounting as two independent dimensions of decision making. However, mounting experimental evidence demonstrates striking parallels in patterns of risk taking and time discounting behavior and systematic interaction effects, which suggests that there may be common underlying forces driving these interactions. Here, we show that the inherent uncertainty associated with future prospects together with individuals’ proneness to probability weighting generates a unifying framework for explaining a large number of puzzling behavioral findings: delay-dependent risk tolerance, aversion to sequential resolution of uncertainty, preferences for the timing of the resolution of uncertainty, the differential discounting of risky and certain outcomes, hyperbolic discounting, subadditive discounting, and the order dependence of prospect valuation. Furthermore, all these phenomena can be accommodated by the same set of preference parameter values and plausible levels of inherent uncertainty.

Pedro Miguel Sánchez Sánchez, Alberto Huertas Celdran, José R Buendía Rubio, Gérôme Bovet, Gregorio Martínez Pérez, Robust Federated Learning for execution time-based device model identification under label-flipping attack, Cluster Computing, Vol. 27 (1), 2024. (Journal Article)
The computing device deployment explosion experienced in recent years, motivated by the advances of technologies such as the Internet-of-Things (IoT) and 5G, has led to a global scenario with increasing cybersecurity risks and threats. Among them, device spoofing and impersonation cyberattacks stand out due to their impact and the usually low complexity required to launch them. To solve this issue, several solutions have emerged to identify device models and types based on the combination of behavioral fingerprinting and Machine/Deep Learning (ML/DL) techniques. However, these solutions are not appropriate for scenarios where data privacy and protection are a must, as they require data centralization for processing. In this context, newer approaches such as Federated Learning (FL) have not been fully explored yet, especially when malicious clients are present in the scenario setup. The present work analyzes and compares the device model identification performance of a centralized DL model with that of an FL one while using execution time-based events. For experimental purposes, a dataset containing execution-time features of 55 Raspberry Pis belonging to four different models has been collected and published. Using this dataset, the proposed solution achieved 0.9999 accuracy in both setups, centralized and federated, showing no performance decrease while preserving data privacy. Later, the impact of a label-flipping attack during the federated model training is evaluated using several aggregation mechanisms as countermeasures. Zeno and coordinate-wise median aggregation show the best performance, although their performance greatly degrades when the percentage of fully malicious clients (all training samples poisoned) grows over 50%.

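Coordinate-wise median aggregation, one of the countermeasures that performs best in these experiments, can be sketched in a few lines (the client updates below are hypothetical, not from the paper's dataset):

```python
import numpy as np

def coordinate_wise_median(client_updates):
    """Aggregate client model updates by taking the median of each parameter
    independently, a robust alternative to plain averaging when some clients
    are label-flipping or model-poisoning."""
    stacked = np.stack(client_updates)  # shape: (n_clients, n_params)
    return np.median(stacked, axis=0)

# Toy example: 4 honest clients agree on a direction, 1 poisoned client
# pushes a large opposite update.
honest = [np.array([0.9, 1.1]), np.array([1.0, 1.0]),
          np.array([1.1, 0.9]), np.array([1.0, 1.05])]
poisoned = [np.array([-50.0, 50.0])]

mean_agg = np.mean(np.stack(honest + poisoned), axis=0)   # dragged far off course
median_agg = coordinate_wise_median(honest + poisoned)    # stays near the honest consensus
```

Because the median ignores extreme values in each coordinate, a minority of poisoned clients cannot drag the aggregate, which is consistent with the finding that robustness degrades sharply once fully malicious clients exceed roughly half of the participants.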