Elliott Wallace, Enforcing Privacy in a Smart Home Environment via Pi-hole Integration, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Bachelor's Thesis)
The Internet of Things (IoT) is one of the key drivers of the smart home market and has revolutionized smart home technology. Besides the many benefits for convenience and efficiency, there are also concerns about security and privacy in such environments. The increasing complexity of smart homes and the hardware limitations of individual devices necessitate the storage and processing of data in remote cloud environments. This raises privacy issues due to the potential misuse or disclosure of sensitive information about residents. To the author's knowledge, no existing Privacy Enhancing Technology (PET) offers a lightweight approach to enforcing privacy in smart home environments by combining existing tools into a unifying framework. The goal of this thesis is to take a first step towards an extensible open-source software system that integrates into the smart home environment with the purpose of monitoring smart home device communications and controlling their communication behavior through user-defined policies. To this end, a prototype application is developed, which monitors smart home devices' Domain Name System (DNS) requests and enforces policies via a DNS sinkhole mechanism. The prototype is deployed to a system-on-chip platform and evaluated in a live smart home environment to gain insight into its viability. The aim is to examine the performance, effectiveness, and limitations of the prototype with the intention of validating the general approach. The results of these experiments indicate that the prototype successfully achieves the goals outlined in this thesis. The prototype is capable of monitoring the network activity of smart home devices. The collected data are processed to gain insights and make this information transparent to the users. Furthermore, the prototype allows users to define simple allow/block policies which are subsequently enforced by the system. |
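The core enforcement idea, a per-device allow/block policy answered by a DNS sinkhole, can be illustrated with a minimal sketch. The device identifiers, domains, and the decision function below are hypothetical and not taken from the thesis prototype; they only show the general mechanism of answering blocked queries with a sinkhole address.

```python
# Minimal sketch of a Pi-hole-style policy check: device-specific
# allow/block lists decide whether a DNS query is forwarded normally
# or redirected to a sinkhole address. All names and policies here
# are invented for illustration.
SINKHOLE_IP = "0.0.0.0"

POLICIES = {
    # device MAC -> {"mode": "block" or "allow", "domains": {...}}
    "aa:bb:cc:dd:ee:01": {"mode": "block", "domains": {"tracker.example.com"}},
    "aa:bb:cc:dd:ee:02": {"mode": "allow", "domains": {"api.vendor.example"}},
}

def resolve_decision(device: str, domain: str) -> str:
    """Return 'sinkhole' or 'forward' for a DNS query from a device."""
    policy = POLICIES.get(device)
    if policy is None:
        return "forward"           # no policy defined: default-allow
    listed = domain in policy["domains"]
    if policy["mode"] == "block":  # blocklist: sinkhole only listed domains
        return "sinkhole" if listed else "forward"
    # allowlist: sinkhole everything that is not explicitly listed
    return "forward" if listed else "sinkhole"
```

A real resolver would answer sinkholed queries with `SINKHOLE_IP` instead of the upstream record; the allowlist mode shows why the thesis calls the policies "simple allow/block".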
|
Michael Blum, Tag Explorer - An Interactive Exploration Tool for Digital Edition Annotation Practices, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Master's Thesis)
The Digital Humanities are increasingly employing computational methods for the curation of their research artifacts. The digitization of historical documents and their subsequent curation and annotation is common practice. The resulting digital editions often utilize a semi-structured data format to enhance the digitized research objects with annotations. Despite the presence of established annotation standards, annotation practices can still differ significantly within and across editions, resulting in considerable heterogeneity. This hampers the interoperability and reusability of digital editions.
We contribute a visual analytics (VA) approach for the exploration of annotation practices within and across digital editions. We worked closely with the digital edition community to develop Tag Explorer, a VA tool tailored to their needs. Multiple coordinated views visualize annotation practices on various granularity levels, enabling users to better understand common practices and differences of editions stemming from heterogeneous sources. The users can adapt the visualizations to their information needs by delineating the exploration space and switching between different viewpoints. Tag Explorer fills a gap in the existing landscape of VA tools for the Digital Humanities, allowing the exploration of annotation strategies within and across heterogeneous digital editions.
We evaluated our approach by means of two case studies with domain experts. Tag Explorer enabled the domain experts to check existing hypotheses, inspired potential improvements in their own editions, and uncovered unexpected findings regarding the annotation practices within and across digital editions. These insights help domain experts make more informed decisions during the annotation process, leading to more interoperable and reusable digital editions. |
|
Andrianos Michail, Automatic Re-Generation of Sentences To Different Readability Levels, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Master's Thesis)
The task of text simplification is to reduce the linguistic complexity of a text in order to make it more accessible. Previous work on text simplification has primarily focused on either a single level of simplification or multiple levels of simplification, but always with the goal of making the text simpler. In this work, we explore a related task: re-generating sentences to produce equivalent text that targets an audience at a different readability level, whether that level is simpler or more advanced. We formulate the problem as a sequence-to-sequence task and explore different methods of using the pre-trained T5 encoder-decoder model to perform it. In particular, we investigate the use of the hyperformer++ \cite{mahabadi2021parameter} architecture to solve the task, and propose and evaluate custom variants of the architecture designed to maximize positive transfer between different transformation pairs. According to automatic metrics, our custom variant of hyperformer++ is able to compete with strong baselines while only storing a small fraction of the parameters required for updating the entire language model. |
|
Christoph Mayer, Adaptive factorised data processing via reinforcement learning, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Master's Thesis)
Query optimisation remains an open problem in the field of database research. Inspired by the recent successes of reinforcement learning in various domains, adaptive approaches have emerged for addressing the problem. This thesis introduces a novel system called FRANTIC, which builds on recent advances and extends the adaptive approach to encompass factorised databases. By combining adaptivity and factorisation, FRANTIC outperforms competitors.
Unlike previous research on factorised databases, which often assumed knowledge of a good factorised query plan, FRANTIC leverages reinforcement learning to efficiently explore the vast space of potential query plans, seeking effective execution strategies for queries.
In addition to providing a performant implementation of FRANTIC, this thesis explores the system's inner workings in detail. Experiments reveal that design choices around the data partitioning, which is required for parallel processing, significantly impact the system's performance and even influence the effectiveness of different execution strategies. By shedding light on the interplay between the used join algorithm and data partitioning, robust heuristics are derived, enabling the reduction of the optimisation problem's search space. Consequently, this approach reduces the number of potential execution plans, making the optimisation problem more tractable. |
|
Minjoo Kwak, Multi-dimensional Data Clustering based on Parallel Histogram Plot, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Master's Thesis)
Histograms are widely used because they are easy to implement and provide a simple overview of the underlying data. However, histograms are limited to two dimensions and are thus not suited for multi-dimensional data. To resolve this, several models have been designed in the existing literature. These typically combine the parallel coordinates plot (PCP) with histograms into a parallel histogram plot (PHP), so that multidimensional data can be represented. However, these existing models typically do not enable clustering of multivariate data or user interaction. To fill this gap, this thesis introduces a new "clustering PHP application" which offers a visual explorative framework with user interaction for the purpose of clustering. This application integrates the PHP, Principal Component Analysis (PCA), and scatter plots to merge their respective advantages. First, the PCA part offers insights about variables, such as how important they are and how they are related. Variables of interest can then be plotted on the PHP, which was adjusted for clustering (clustering PHP), to visually find relationships between variables. Axes on the clustering PHP can be reordered to focus on specific variables. Finally, a scatter plot helps users to observe local features and allows for the selection of principal components or variables. Interactions are immediately synchronized between the scatter plot and the clustering PHP to detect data points sharing similarities on subspaces effortlessly. Overall, this "clustering PHP application" helps users to determine cluster groups and improve clustering accuracy, making exploration and subspace clustering of complex multi-dimensional data easier and more efficient. |
|
Karin Thommen, Swiss German Speech-to-Text: Test and Improve the Performance of Models on Spontaneous Speech, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Master's Thesis)
Translators, voice recordings, and voice control are often pre-installed on mobile devices to make everyday life easier. However, Swiss German speakers must use Standard German or English when using speech recognition systems. The latest research shows that most of these systems are trained and evaluated on prepared speech. It remains an open question how these speech-to-text systems behave if they are applied to spontaneous speech, which consists of incomplete sentences, hesitations, and fillers. This can be summarised in the following research question: How does the performance of pre-trained speech models drop when fine-tuning on spontaneous speech compared to fine-tuning on prepared speech? Differences in speech styles lead to the assumption that performance drops when it comes to spontaneous speech. To assess the differences between prepared and spontaneous speech, two state-of-the-art pre-trained multilingual models were fine-tuned on the corresponding data. One is XLS-R developed by Facebook and proposed in 2022. Another model is Whisper by OpenAI, proposed in 2023. Thus, one main challenge is to make the models that are trained on two distinct speech styles comparable. Surprisingly, the results of both models disprove the hypothesis, as they perform better on spontaneous speech. Multiple improvement techniques were evaluated on their impact on the models. On the one hand, increasing the size of the data set significantly increases performance. However, one main issue in automatically transcribing Swiss German is finding the correct word boundaries. As many errors occur at the character level, it remains open which evaluation metric is the most appropriate for spontaneous speech and a low-resource language like Swiss German. |
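The abstract's closing question about the right evaluation metric for Swiss German can be made concrete: word error rate (WER) penalizes a wrong word boundary as heavily as a wrong word, while character error rate (CER) counts only the differing characters. A minimal sketch of both metrics, computed via Levenshtein distance (the example strings are invented, not thesis data):

```python
# Word error rate (WER) and character error rate (CER): Levenshtein
# edit distance over tokens, divided by the reference length.
def levenshtein(ref, hyp):
    """Edit distance between two sequences (insert/delete/substitute)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (r != h)))   # substitution
        prev = cur
    return prev[-1]

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    return levenshtein(ref, hyp) / len(ref)

def cer(reference: str, hypothesis: str) -> float:
    return levenshtein(reference, hypothesis) / len(reference)
```

A transcript that writes "morgen" for the reference "morge" scores a full word error under WER but only one character error under CER, which is exactly why the choice of metric matters for a language without standardized spelling.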
|
Fabio Suter, Können «Contemplation Questions» und «Regulatory Focus» die Entscheidungsfähigkeit unterstützen?, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Bachelor's Thesis)
The promotion of ethical behavior has gained increasing importance in the context of misconduct and corporate scandals. Within this context, this thesis aims to answer the research question of whether Contemplation Questions and Regulatory Focus can support decision-making ability. To this end, a literature review was conducted to deepen the understanding of the topic, and on this basis assumptions and hypotheses were formed and examined by means of a thought experiment. The findings lead to the conclusion that Contemplation Questions and Regulatory Focus should, in individual cases and under suitable conditions, be able to support decision-making ability. Whether this is also possible when the two components are combined can only be conclusively confirmed by further empirical studies. |
|
Luca Andrea Comino, Backtesting des UBS Swiss Real Estate Bubble Index. Ist der Real Estate Bubble Index der UBS indikativ für Schweizer Wohneigentumspreise? , University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Bachelor's Thesis)
This bachelor's thesis examined, by means of a backtest, the informative value of the UBS Swiss Real Estate Bubble Index as an indicator for Swiss residential property prices. The resulting research question was: "Is the UBS Real Estate Bubble Index indicative of Swiss residential property prices?"
The UBS Swiss Real Estate Bubble Index is an important reference value for real estate market participants for measuring risk and is used to assess potential overvaluation and bubble risk in the Swiss residential property market. The Bubble Index is published quarterly by UBS and consists of six equally weighted subindices composed of macroeconomic, housing-market, and financing-related data. The values of the UBS Swiss Real Estate Bubble Index are the deviations of the equally weighted subindices from their historical means, measured in standard deviations. If the subindicators deviate strongly and positively from the historical mean, prices can be considered overvalued. Through its subindices, the index thereby attempts to combine the fundamental, behavioral, and chart-technical aspects of bubble identification (Holzhey, 2013).
Several studies have already analyzed potential indicative factors for Swiss residential property prices, focusing mainly on demographic and economic factors. The present thesis addresses whether and how changes in the index affected Swiss residential property prices. In an efficient market, rational market participants would react to changes in the index's overvaluation or undervaluation assessments by making buying or selling decisions that lead to price adjustments. These adjustments should cause prices to converge to their fundamental value (Wienkamp, 2019). |
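The index construction described in the abstract, subindex deviations from their historical means expressed in standard deviations, then averaged with equal weights, can be sketched as follows. The data are synthetic and for illustration only; the actual UBS methodology may differ in detail.

```python
# Sketch of an equally weighted z-score index: each subindex value is
# standardized against its own historical mean and standard deviation,
# and the average of the z-scores gives the index reading.
from statistics import mean, pstdev

def zscore(history, value):
    """Deviation of `value` from the historical mean, in standard deviations."""
    return (value - mean(history)) / pstdev(history)

def bubble_index(histories, current):
    """Equally weighted average z-score over all subindices."""
    return mean(zscore(h, c) for h, c in zip(histories, current))
```

A strongly positive index value then means the subindicators sit well above their historical means, which the abstract interprets as a sign of overvaluation.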
|
Raffael Mogicato, Learning Semantics of Classes in Image Classification; Attention-Sharing between Hierarchies, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Master's Thesis)
Deep convolutional neural networks (CNNs) have become the state-of-the-art approach for image classification.
While these networks are very effective at identifying the class to which an image belongs, they often do not properly learn the semantic relationship between classes.
This means that models treat all misclassifications equally during training, regardless of the semantic distance between the predicted and actual class.
This approach does not reflect the complexity of the real world, where some entities are more similar to each other than others, making mistakes between related classes less severe than those between unrelated classes.
An architecture suited for hierarchical classification is presented as a potential solution to this problem. Rather than just predicting a single class, networks predict a simplified hierarchy consisting of higher-level concepts.
This thesis explores how the architecture of CNNs can be adapted to incorporate hierarchical information in order to improve performance and the semantic conditioning of CNNs.
The ultimate goal is to enhance the accuracy and robustness of image classification models by improving their understanding of the semantic relationships between classes, which could potentially lead to fewer and less severe misclassifications.
To achieve this, several architectures are explored -- all using a ResNet backbone with classifiers for each hierarchical level -- that are compared with a baseline model that does not utilize the hierarchy for predictions.
Most importantly, this thesis proposes an attention mechanism that does not contain any extra trainable parameters.
This attention mechanism transforms the deep features given to a lower-level classifier based on the weight matrix from the higher-level classifier.
This transformation aims to highlight features relevant to the classification of the higher-level concept, thus enabling the model to learn the decision boundary between classes of different higher-level concepts.
This attention mechanism can effectively increase the classification accuracy for the ImageNet classes compared to a baseline architecture.
Furthermore, when provided with ground-truth information about the class hierarchy during training, it effectively learns the decision boundaries between classes from different higher-level concepts.
This thesis also explores whether these architectures can be used for open-set classification.
The attention mechanism shows some potential and could likely be adapted for open-set classification, representing a promising direction for future research. |
|
Livia Stöckli, Opening the Black Box of IT-Supported Patient-Centered Care: How the Digital Companion Influences Obesity Counselling, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Bachelor's Thesis)
Patient-centered care enables physicians to understand their patients as people with individual needs. It helps the patient to be informed, respected, and involved in decisions about their treatment. Thus, it should be the standard approach in medical practice. However, with the current increase in people suffering from chronic diseases, the healthcare system is reaching its limits and new forms of treatment have to be explored. Technology can provide relief for both the physician and the patient. During a consultation, technological means can help recall knowledge and assist in the decision-making process. At home, technology can support the patient in adhering to the treatment and provide motivation. Yet currently those two aspects are often disconnected from each other, and no exchange of data happens between the technologies used at home and the ones in the medical practice. Additionally, the provision of patient-centered care might suffer from the involvement of technology in the consultation and lead to the further scattering of information about the patient. The Digital Companion Project aims to close the loop between obesity consultations and improve the connection between physician and patient. The relevant data gathered by the patient at home can be accessed by the physician, and the patient receives the information discussed during the consultation on their device. This bachelor's thesis analyzes the use of the Digital Companion in a field study with twenty-seven patients and six physicians to determine whether proper patient-centered care was provided. Furthermore, emerging practices regarding patient-centered care and the influence of the device on the consultation were observed. The results show that the Digital Companion could improve the provision of patient-centered care in all aspects. It helped involve the patients in the decision-making and led to the formulation of a realistic treatment plan.
Trust was established quickly, and the patients were openly sharing personal details about their lives. During the consultation, the Digital Companion worked as calm technology, did not disrupt the conversation, and did not attract unnecessary attention. |
|
Laurin Van den Bergh, Improved Losses for Open-Set Classification, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Master's Thesis)
Open-Set classification (OSC) addresses one of the core issues of traditional classification techniques, namely, the underlying closed-world assumption. The goal of OSC methods is to classify known classes correctly while also rejecting unknown classes. We propose two novel generic loss functions, Margin-OS and Margin-EOS, which combine the Entropic Open-Set and Objectosphere loss with margin-based loss functions used in face recognition tasks, CosFace and ArcFace, to learn discriminative features. We find that the margin has a positive effect on the closed-set accuracy but a mixed effect on the open-set performance. For applications that can tolerate high false positive rates, our losses improve the classification of known classes, but for low false positive rates the margin negatively impacts the training which leads to subpar classification of known samples. |
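The margin component the abstract borrows from face recognition can be illustrated with the CosFace formulation: the target-class cosine similarity is reduced by a margin m before scaling by s, so the network must separate known classes by at least that margin. This is a generic sketch of the CosFace logit adjustment, not the thesis's Margin-OS/Margin-EOS losses themselves, and the values of s and m are illustrative.

```python
# CosFace-style margin: subtract a margin m from the target-class
# cosine before scaling all cosines by s, then apply softmax
# cross-entropy. The margin makes the training objective stricter.
import math

def cosface_logits(cosines, target, s=16.0, m=0.2):
    """Scale cosine similarities; subtract margin m from the target class."""
    return [s * (c - m) if i == target else s * c
            for i, c in enumerate(cosines)]

def cross_entropy(logits, target):
    """Softmax cross-entropy on the margin-adjusted logits."""
    mx = max(logits)
    exps = [math.exp(l - mx) for l in logits]
    return -math.log(exps[target] / sum(exps))
```

Because the margin lowers the target logit, the loss for a given sample is higher with the margin than without, which is what pushes the learned features apart.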
|
Dennis Arend, Option Trading using Implied and Breakeven Volatility, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Master's Thesis)
This study explores the concept of breakeven volatility (BEV) as an unbiased estimator of an option's fair implied volatility and its applicability in option pricing. Historical BEVs are computed for 230,000 S&P 500 index option contracts, and a predictive model is built to produce contemporaneous estimates. The model is compared to a similar implementation based on implied volatility instead of BEV, by testing both models' pricing ability in an out-of-sample trading environment. The findings consistently demonstrate that the BEV model outperforms the implied volatility model, illustrating the unique value of using BEV as a measure of an option's fair volatility level to price options. This research contributes to the limited literature on empirical option pricing using BEV, emphasizing its potential as a tool for accurate option pricing.
Keywords: breakeven volatility, implied volatility, option pricing, SPX index options
|
|
Dario Samuele Spielmann, Analyse der Preis- und Volatilitätsentwicklung auf dem europäischen und Schweizer Elektrizitätsmarkt, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Bachelor's Thesis)
A well-functioning electricity market is of essential importance for our society. To understand it better, it has already been researched extensively. However, the COVID-19 pandemic and the war in Ukraine have brought major changes that have hardly been studied so far. This thesis addresses this research gap by analyzing electricity prices and consumption by means of correlations and regressions. The results show that the electricity markets became more strongly interdependent during the price explosions of 2021 and 2022, and that the demand for electricity had almost no influence on prices. In Switzerland, a slightly negative correlation between price and consumption was even observed in 2022.
|
|
Weixian Nie, Comparison of Value-at-Risk using regime-switching GARCH models for industrial metals futures, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Master's Thesis)
This thesis compares GARCH models, Stochastic Volatility (SV) models, and Markov-switching GARCH (MSGARCH) models in terms of forecasting one-day-ahead Value-at-Risk (VaR) for industrial metals futures. GARCH and MSGARCH models are estimated with three innovation distributions: normal, Student's t, and the generalized error distribution (GED). For the in-sample analysis, we implement these models to compare the Akaike information criterion (AIC) as well as their in-sample conditional volatility. Out-of-sample VaR forecasting performance is evaluated based on the conditional coverage test. The results show that the MSGARCH models outperform the other models in predicting one-day-ahead VaR for both long and short trading positions. |
|
Greta Benetazzo, Comparative Analysis of Predictive Models: Backtest Using the Basel Traffic Light Approach, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Master's Thesis)
Value-at-Risk (VaR) is a widely used statistical measure of financial risk. It provides an estimate of the maximum expected loss at a given confidence level over a specified time horizon. VaR models can be based on different statistical approaches, including parametric models (Standard Normal, Weighted Standard Normal Increasing and Decreasing) and non-parametric models (Historical, Weighted Historical Increasing and Decreasing). However, there is no consensus on which approach is best for predicting financial risk, and the choice of model can have a significant impact on the accuracy and robustness of VaR estimates.
The goal of this study is to compare the performance of two different VaR models in forecasting the Value-at-Risk of indices from the three asset classes of equities, commodities, and fixed income. The models to be tested are the Standard Normal and the Historical, and for both we analyze the three cases of equally weighted, weighted increasing, and weighted decreasing observations. The primary objective is to identify the model that provides the most accurate and robust risk predictions for the three indices.
To achieve this objective, we calculate VaR using the two models for three indices from three different asset classes. For each model, we apply different weights to the sample observations (equal, increasing, and decreasing), so that different importance is attributed to older or more recent data. We then backtest the VaR estimates using the Traffic Light Approach from the Basel II regulation, a supervisory tool used by regulators to assess the accuracy of banks' internal market risk models. The VaR estimates are classified into three categories based on their performance: green, yellow, and red. The green category represents VaR estimates that perform well, the yellow category represents estimates that need improvement, and the red category represents estimates that are not acceptable.
We compare the performance of the VaR models based on the number of green lights achieved during the backtesting process and analyze the results to determine which model is the most robust and accurate for predicting the risk of the different indices. Moreover, we compare the results obtained using two years of data with those obtained using ten years of data in order to add robustness to the findings.
The expected finding of this study is that one of the VaR models will perform better than the others in terms of accuracy and robustness. We also expect the best-performing model to vary across asset classes, showing how one model or another best suits the characteristics of each. The findings will contribute to the existing literature on VaR modeling and model selection.
In conclusion, this study aims to provide insights into the performance of different VaR models and their suitability for predicting financial risk. The findings will be of interest to risk managers, investors, and regulators who use VaR as a tool for measuring and managing financial risk. |
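The traffic light classification used in the backtest follows the Basel rules: count the days on which the realized loss exceeds the reported 99% VaR over roughly 250 trading days, then map the exception count to a zone (green for 0-4 exceptions, yellow for 5-9, red for 10 or more). A minimal sketch:

```python
# Basel traffic light backtest: count VaR exceptions over a testing
# window and map the count to the green/yellow/red zone defined by
# the Basel framework for 250 daily observations at the 99% level.
def exceptions(losses, var_estimates):
    """Number of days the realized loss exceeds the VaR estimate."""
    return sum(l > v for l, v in zip(losses, var_estimates))

def traffic_light(n_exceptions: int) -> str:
    """Basel zone for a 250-day, 99% VaR backtest."""
    if n_exceptions <= 4:
        return "green"
    if n_exceptions <= 9:
        return "yellow"
    return "red"
```

Comparing models by the number of green lights, as the study proposes, then amounts to running this count over each index and weighting scheme.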
|
Alexander Werder, The Value of Dividend Growth Models in Nord American Stock Markets, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Bachelor's Thesis)
Criticism towards the “standard” Dividend Growth Model (Gordon Growth Model) and its exponential growth pattern has prompted alternative models such as the “modified” Dividend Growth Model with a linear growth pattern, introduced by Balschun and Schindler (2015). Applying standard portfolio construction techniques, this thesis assesses whether the models can enhance the portfolio management process with regard to risk-adjusted excess returns in the North American stock markets. The investment universe is given by the S&P 500 from 2003 until 2022. Based on the “modified” Dividend Growth Model, value is provided with certain input parameters, whereas the “standard” Dividend Growth Model does not provide value. |
|
Simeon Nathan Vogt, Faktor-Modelle im Schweizer Aktienmarkt, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Bachelor's Thesis)
This thesis conducts an empirical analysis of the Swiss equity market using the Fama-French three-factor and Carhart four-factor models. The risk premiums of these models are calculated for the period from July 2012 to June 2022. Independently of the chosen model assumptions, the average market risk and momentum premiums are shown to be significantly different from zero. The mean size premium is not found to be significantly different from zero under any set of assumptions. The conclusion on the significance of the average value premium is ambiguous, as it depends on the chosen model assumptions. The model comparison by means of regression analysis shows that the Carhart model consistently exhibits a higher goodness of fit than the Fama-French model, also under adjusted conditions. |
|
Nicolas Heierli, Cross-Sectional Momentum in the Swiss Equity Market, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Bachelor's Thesis)
This study implements momentum strategies on the Swiss Performance Index (SPI) and examines their profitability over an 18-year time horizon. The robustness of these strategies is analyzed by adjusting certain parameters including lags, transaction costs, size and weight of the winner portfolios and short selling. The results are consistent with the existing literature and confirm the validity of momentum strategies in the Swiss equity market. The results remain significant after risk adjustments, indicating that the momentum effect persists after controlling for risk. While the momentum phenomenon is partially explained, certain underlying factors remain unanswered, suggesting potential areas for future research. |
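The winner-portfolio construction at the core of a cross-sectional momentum strategy can be sketched in a few lines: rank stocks by their formation-period return and hold the top k equally weighted. The tickers and returns below are invented for illustration and are not data from the study.

```python
# Cross-sectional momentum sketch: rank assets by past return and
# hold the top-k "winners" with equal weights.
def winner_portfolio(past_returns: dict, k: int):
    """Top-k assets by formation-period return, equally weighted."""
    winners = sorted(past_returns, key=past_returns.get, reverse=True)[:k]
    return {name: 1.0 / k for name in winners}
```

The parameters the study varies, portfolio size, weighting, lags, and an optional short leg of losers, are all adjustments to this basic ranking step.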
|
Runyu Qi, Carry trade past and now: Is 2008 really a turning point?, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Master's Thesis)
I conduct a longitudinal study of the carry trade with 2008 as the cutoff point. Employing a sample of 17 currencies and over 30 years of data since 1993, I find that the basic carry trade strategy with G10 currencies does underperform after 2008, both relative to its pre-2008 performance and relative to the market. I demonstrate that refined carry strategies, including basic carry with an extended currency basket, carry momentum, and volatility-adjusted carry, can avoid this downturn. I observe that, overall, traditional risk factors fail to explain the excess return of the carry trade. However, I show that more significance can be observed during certain periods, and that higher returns are associated with a lower and more negative correlation with the market.
Keywords: Carry, Momentum, Investment Strategy, Factor Model
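The basic carry trade the study benchmarks can be sketched as a sort on interest-rate differentials: go long the highest-yielding currencies against the funding currency and short the lowest-yielding ones. The currencies and rate differentials below are invented for illustration; the study's actual baskets and weights may differ.

```python
# Basic carry portfolio sketch: long the top-n currencies by
# interest-rate differential, short the bottom-n, equal weights
# on each leg (assumes 2·n_legs <= number of currencies).
def carry_portfolio(rate_diffs: dict, n_legs: int):
    """Long/short weights from a ranking on rate differentials."""
    ranked = sorted(rate_diffs, key=rate_diffs.get)
    weights = {c: -1.0 / n_legs for c in ranked[:n_legs]}        # short low yield
    weights.update({c: 1.0 / n_legs for c in ranked[-n_legs:]})  # long high yield
    return weights
```

The refined variants mentioned in the abstract, extended baskets, carry momentum, and volatility adjustment, modify either the ranking signal or the leg weights of this basic construction.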
|
|
Christoph Julian Mück, Post-Jump Return Dynamics and News Sentiment, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Master's Thesis)
We investigate the stock return predictive power at the high-frequency level after statistically significant overnight jumps, conditioned on prevailing stock- and market-level news sentiment. We provide evidence that sentiment variables as well as the jump direction explain variation in intraday returns following a jump event, and document the effect over the trading day. We identify overnight jumps through high-frequency-based jump tests and calculate our sentiment variables from the Thomson Reuters News Analytics dataset. We document our findings for S&P 500 constituents from 2004 to 2021. In the case of positive jumps, we document a stronger overreaction behaviour to both the direction of the jump and the prevailing news sentiment, whilst for negative jumps, we can only document a reversal behaviour relating to the direction of the jump. In addition, the paper presents a trading strategy based on the observed phenomena. The strategy exhibits no correlation to the market portfolio and exhibits tail hedging characteristics, whilst maintaining a positive drift component.
Keywords: stock return predictability, statistical jumps, private information, news sentiment, non-trading hour information, market level sentiment, company-specific sentiment
|
|