Kathrin Hondl, Thorsten Hens, Notfallrettung mit Risiken, In: Tagesschau.de, 20 March 2023. (Media Coverage)

|
|
Thorsten Hens, Übernahme durch Konkurrentin UBS, In: Tagesschau.de, 19 March 2023. (Media Coverage)

|
|
Karoline Arn, Urs Birchler, Urs Birchler: «Auch eine solvente Bank kann Opfer werden», In: SRF, 17 March 2023. (Media Coverage)

|
|
Sven Zaugg, Marc Chesney, Wie die CS das Vertrauen der Kunden zurückgewinnen kann, In: SRF, 16 March 2023. (Media Coverage)

|
|
Redaktion, Marc Chesney, Krise bei der Credit Suisse - Wie die CS das Vertrauen der Kunden zurückgewinnen kann, In: Schweizer Radio DRS, 16 March 2023. (Media Coverage)

CS must earn back the trust of its customers. That takes time, but time is something CS does not have much of. |
|
Fynn Bachmann, Philipp Hennig, Dmitry Kobak, Wasserstein t-SNE, In: Machine Learning and Knowledge Discovery in Databases, Springer, Switzerland, p. 104 - 120, 2023-03-16. (Book Chapter)
 
Scientific datasets often have hierarchical structure: for example, in surveys, individual participants (samples) might be grouped at a higher level (units) such as their geographical region. In these settings, the interest is often in exploring the structure on the unit level rather than on the sample level. Units can be compared based on the distance between their means, however this ignores the within-unit distribution of samples. Here we develop an approach for exploratory analysis of hierarchical datasets using the Wasserstein distance metric that takes into account the shapes of within-unit distributions. We use t-SNE to construct 2D embeddings of the units, based on the matrix of pairwise Wasserstein distances between them. The distance matrix can be efficiently computed by approximating each unit with a Gaussian distribution, but we also provide a scalable method to compute exact Wasserstein distances. We use synthetic data to demonstrate the effectiveness of our Wasserstein t-SNE, and apply it to data from the 2017 German parliamentary election, considering polling stations as samples and voting districts as units. The resulting embedding uncovers meaningful structure in the data. |
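The Gaussian-approximation step described above has a simple closed form in one dimension. The following is a minimal sketch (not the authors' code) that builds a pairwise 2-Wasserstein distance matrix from toy units; the data and unit sizes are purely illustrative:

```python
import numpy as np

def gaussian_w2(mu1, sigma1, mu2, sigma2):
    """Closed-form 2-Wasserstein distance between two 1D Gaussians."""
    return np.sqrt((mu1 - mu2) ** 2 + (sigma1 - sigma2) ** 2)

def pairwise_w2(units):
    """Pairwise W2 matrix for units given as arrays of samples."""
    stats = [(np.mean(u), np.std(u)) for u in units]
    n = len(units)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = gaussian_w2(*stats[i], *stats[j])
    return D

# Toy units: the first two share a mean but differ in spread, so a
# purely mean-based distance would treat them as nearly identical.
rng = np.random.default_rng(0)
units = [rng.normal(0, 1, 500), rng.normal(0, 3, 500), rng.normal(5, 1, 500)]
D = pairwise_w2(units)
```

The matrix `D` could then be passed to a t-SNE implementation that accepts precomputed distances (e.g. scikit-learn's `TSNE(metric="precomputed", init="random")`) to obtain the 2D unit-level embedding.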
|
Anna Katharina Spälti, Benjamin Lyons, Florian Stoeckel, Sabrina Stöckli, Paula Szewach, Vittorio Mérola, Christine Stednitz, Paola López González, Jason Reifler, Partisanship and anti-elite worldviews as correlates of science and health beliefs in the multi-party system of Spain, Public Understanding of Science, 2023. (Journal Article)
 
In a national sample of 5087 Spaniards, we examine the prevalence of 10 specific misperceptions over five separate science and health domains (climate change, 5G technology, genetically modified foods, vaccines, and homeopathy). We find that misperceptions about genetically modified foods and general health risks of 5G technology are particularly widespread. While we find that partisan affiliation is not strongly associated with any of the misperceptions aside from climate change, we find that two distinct dimensions of an anti-elite worldview—anti-expert and conspiratorial mindsets—are better overall predictors of having science and health misperceptions in the Spanish context. These findings help extend our understanding of polarization around science beyond the most common contexts (e.g. the United States) and support recent work suggesting anti-elite sentiments are among the most important predictors of factual misperceptions. |
|
Eflamm Mordrelle, Alexander Wagner, Nachhaltigkeit am Ende? Ukraine-Krieg und schlechte Performance stürzen Investitionen mit ökologischen Ansprüchen in eine Sinnkrise, In: Neue Zürcher Zeitung, 11 March 2023. (Media Coverage)

The trend toward sustainable investing is unbroken, but criticism is growing. Curbing greenwashing and the political appropriation of ESG are moving to the fore. |
|
Alexander Soutschek, Philippe Tobler, A process model account of the role of dopamine in intertemporal choice, eLife, Vol. 12, 2023. (Journal Article)
 
Theoretical accounts disagree on the role of dopamine in intertemporal choice and assume that dopamine either promotes delay of gratification by increasing the preference for larger rewards or that dopamine reduces patience by enhancing the sensitivity to waiting costs. Here, we reconcile these conflicting accounts by providing empirical support for a novel process model according to which dopamine contributes to two dissociable components of the decision process, evidence accumulation and starting bias. We re-analyzed a previously published data set where intertemporal decisions were made either under the D2 antagonist amisulpride or under placebo by fitting a hierarchical drift diffusion model that distinguishes between dopaminergic effects on the speed of evidence accumulation and the starting point of the accumulation process. Blocking dopaminergic neurotransmission not only strengthened the sensitivity to whether a reward is perceived as worth the delay costs during evidence accumulation (drift rate) but also attenuated the impact of waiting costs on the starting point of the evidence accumulation process (bias). In contrast, re-analyzing data from a D1 agonist study provided no evidence for a causal involvement of D1R activation in intertemporal choices. Taken together, our findings support a novel, process-based account of the role of dopamine for cost-benefit decision making, highlight the potential benefits of process-informed analyses, and advance our understanding of dopaminergic contributions to decision making. |
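The two components the model distinguishes, the rate of evidence accumulation and the starting bias, can be illustrated with a minimal drift-diffusion simulation. This is a generic sketch, not the hierarchical model fitted in the paper, and all parameter values are illustrative:

```python
import numpy as np

def simulate_ddm(drift, bias, n_trials=1000, threshold=1.0, dt=0.005, noise=1.0, seed=0):
    """Simulate a drift diffusion process.

    Evidence starts at `bias` (a fraction of the distance between the
    bounds 0 and `threshold`) and accumulates with rate `drift` until it
    hits a bound. Returns the fraction of trials ending at the upper bound.
    """
    rng = np.random.default_rng(seed)
    upper = 0
    for _ in range(n_trials):
        x = bias * threshold
        while 0 < x < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        upper += x >= threshold
    return upper / n_trials

# A starting point shifted toward the upper bound raises upper-bound
# choices even with zero drift; a positive drift does so from a neutral start.
p_bias = simulate_ddm(drift=0.0, bias=0.7)
p_drift = simulate_ddm(drift=1.0, bias=0.5)
```

With zero drift, the probability of reaching the upper bound equals the starting bias; with a neutral start, it is governed by the drift rate. This dissociation is what allows the model to attribute dopaminergic effects to one component or the other.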
|
Marek Pycia, M Bumin Yenmez, Matching with externalities, Review of Economic Studies, Vol. 90 (2), 2023. (Journal Article)
 
We incorporate externalities into the stable matching theory of two-sided markets. Extending the classical substitutes condition to markets with externalities, we establish that stable matchings exist when agent choices satisfy substitutability. We show that substitutability is a necessary condition for the existence of a stable matching in a maximal-domain sense and provide a characterization of substitutable choice functions. In addition, we extend the standard insights of matching theory, like the existence of side-optimal stable matchings and the deferred acceptance algorithm, to settings with externalities even though the standard fixed-point techniques do not apply. |
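For readers unfamiliar with the baseline being extended, the classical deferred acceptance algorithm (without externalities) can be sketched as follows; the agent names are illustrative:

```python
def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Gale-Shapley deferred acceptance (proposer-proposing, no externalities).

    proposer_prefs / receiver_prefs: dicts mapping each agent to an
    ordered list of acceptable partners, most preferred first.
    Returns a dict {receiver: proposer} describing the stable matching.
    """
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    free = list(proposer_prefs)
    next_choice = {p: 0 for p in proposer_prefs}
    matched = {}  # receiver -> proposer
    while free:
        p = free.pop()
        if next_choice[p] >= len(proposer_prefs[p]):
            continue  # p has exhausted their list and stays unmatched
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        current = matched.get(r)
        worst = len(rank[r])
        if current is None:
            matched[r] = p
        elif rank[r].get(p, worst) < rank[r].get(current, worst):
            matched[r] = p          # r rejects the current partner for p
            free.append(current)
        else:
            free.append(p)          # r rejects p; p proposes again later
    return matched

match = deferred_acceptance(
    {"a": ["x", "y"], "b": ["x", "y"]},
    {"x": ["b", "a"], "y": ["a", "b"]},
)
# match == {"x": "b", "y": "a"}
```

The returned matching is stable and proposer-optimal; the paper's contribution is to extend such results to choice functions with externalities, where these fixed-point arguments no longer apply directly.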
|
Andrea Giuffredi-Kähr, Malin Sophie Pimper, Sybilla Merian, Sabrina Stöckli, Martin Natter, Share a Future Without Plastic - by Strengthening Group Identity and Group Efficacy, In: Climate Challenge Conference (Pre-conference of the Society of Consumer Psychology Conference). 2023. (Conference Presentation)

|
|
Shiyuan Zhang, Gender difference and language: an empirical analysis based on survey data, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Master's Thesis)

|
|
Daniil Ratarov, The Impact of Pre-training on Automated Code Revision After Review, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Master's Thesis)
 
Code review is a process in which developers assess code changes submitted by their peers. Despite its numerous benefits, code review is a time-consuming and costly endeavor for both the reviewers and the code author. Reviewers are tasked with meticulously scrutinizing the author’s code and offering natural language comments to identify functional or non-functional issues. Meanwhile, the author must comprehend the review feedback and revise the submitted changes accordingly, a task referred to as ‘Code Revision After Review’ (CRA). Existing research has explored methods to automate the CRA task by pre-training large language models (LLMs), such as CodeBERT and CodeT5, on source code data and fine-tuning them to generate revised code. Although these models utilize distinct pre-training strategies, the impact of these strategies on the CRA task has yet to be investigated. In this paper, we present an empirical study investigating the effects and efficacy of various pre-training strategies on the CRA task. In this context, we also introduce and evaluate CodeRef, a novel ensemble of pre-training strategies that substantially surpasses baseline performance, achieving at least a four times greater likelihood of producing perfectly revised code. Our findings underscore the significance of pre-training in achieving optimal performance and offer insights into pre-training strategies that may be applicable to other code refinement tasks. |
|
Dominic Bachmann, Data Analysis on the Scalability and Fairness of Polygon, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Bachelor's Thesis)
 
Ethereum in its current version reaches a maximum throughput of 15 transactions per second and thus suffers from a scalability problem. The Polygon proof-of-stake blockchain presents itself as an already active solution to this problem. Previous research focuses on fairness in other proof-of-stake blockchains and on the scalability issue in general. Our contribution is a careful investigation of incentives and decentralisation in Polygon PoS. To this end, we analyse its scalability potential by examining transactions, usage, and the distribution of rewards to participants in the network. Our results indicate that Polygon PoS, as a cheap solution, can enhance transaction throughput. Furthermore, the blockchain has fairly good user adoption paired with climate-friendliness. However, to become the ultimate scaling solution, its incentives need to be strengthened and its performance increased substantially. By applying measures such as the Gini and Nakamoto indices to the data, we also show that certain participants receive disproportionately more rewards than others. Centralisation appears to be a problem throughout the network. In other words, we find that Polygon PoS at its current stage lacks incentives and decentralisation, and only early adopters of the blockchain can profit from it. |
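The two concentration measures the thesis applies can be sketched in a few lines; this is an illustrative re-implementation with made-up stake values, not the thesis data:

```python
def gini(values):
    """Gini coefficient: 0 = perfectly equal, approaching 1 = maximally concentrated."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    # Standard formula based on the rank-weighted sum of sorted values.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

def nakamoto_index(values):
    """Smallest number of participants jointly controlling more than 50% of the total."""
    xs = sorted(values, reverse=True)
    total = sum(xs)
    running, count = 0, 0
    for x in xs:
        running += x
        count += 1
        if running > total / 2:
            return count
    return count

stakes = [40, 30, 10, 10, 5, 5]  # hypothetical validator stakes
g = gini(stakes)       # moderate inequality
nk = nakamoto_index(stakes)  # the two largest stakes already exceed 50%
```

A low Nakamoto index (here, 2) is the kind of signal the thesis reads as evidence of centralisation: very few participants jointly control a majority of the stake.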
|
Ledri Thaqi, Multimodal Clinical NLP in Radiology; Visual Question Generation task, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Master's Thesis)
 
With the recent emergence of Vision Language models at the intersection of Computer Vision and Natural Language Processing, novel capabilities are becoming available to a wide variety of tasks in different domains. Tasks such as Visual Question Answering and Visual Question Generation are increasingly being studied in both the general and the medical domain. However, such Vision Language tasks are still in the early adoption phase in the medical domain. Thus, recent studies are starting to focus more on the Visual Question Answering and Visual Question Generation tasks in radiology, mainly because of the potential benefits of applying Vision Language models in that field.
The main focus of this thesis is the Visual Question Generation task in the radiology domain; we explore how it can be implemented and what multimodal considerations it requires. We investigate the differences and capabilities of model architectures by first implementing a baseline model with a CNN-RNN architecture and then, to our knowledge, the first Transformer-based model architecture focused on the VQG task in radiology. Lastly, we contribute to future work in this domain by providing comprehensive reasoning about model architectures with respect to the textual and visual data modalities and their implications for performance. We show that Visual Question Generation for radiology images is a complex task with many factors influencing model performance, ranging from the quality and size of the dataset to model architecture decisions. |
|
Said Haji Abukar, Creation of a Platform to Compute the Trustworthiness Level of Unsupervised and Supervised ML/DL Models, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Bachelor's Thesis)
 
AI has the potential to revolutionize industries and improve daily life through the development of advanced machine learning (ML) and deep learning (DL) models. These models, such as chatbots and language models, use algorithms or artificial neural networks to recognize patterns and make decisions. ML involves training algorithms on large datasets to identify patterns and make decisions, while DL uses artificial neural networks composed of interconnected nodes called artificial neurons to process and transmit information. Neural networks can learn and make decisions by adjusting the connections between neurons based on input data. There are two types of ML and DL: unsupervised and supervised. Unsupervised learning involves using algorithms or neural networks to learn from data without labeled outcomes, while supervised learning involves training algorithms or neural networks on labeled data to make predictions or decisions.
As AI becomes more advanced and widespread, it is important to have confidence in the decisions and actions of these systems. Trusted AI refers to the reliability and ethical behavior of AI systems. It is crucial to have a framework for evaluating the trustworthiness of different AI models to ensure their safe and responsible deployment. A taxonomy of pillars and metrics can be used to quantify the trustworthiness of AI models, allowing for a structured and comprehensive evaluation of their strengths and limitations. This bachelor thesis aims to survey existing platforms, define requirements, and develop a web app that allows the computation of the trust score, pillar scores, and metric scores of supervised and unsupervised ML and DL models. The platform is extended to allow for user management and the return of the trustworthiness levels via API endpoints. |
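The taxonomy-based aggregation described above can be sketched as a two-level weighted average. The pillar and metric names below are hypothetical placeholders, not the taxonomy defined in the thesis:

```python
def pillar_scores(metric_scores):
    """Average the metric scores within each pillar."""
    return {pillar: sum(m.values()) / len(m)
            for pillar, m in metric_scores.items()}

def trust_score(metric_scores, pillar_weights):
    """Weighted average of pillar scores; the weights should sum to 1."""
    pillars = pillar_scores(metric_scores)
    return sum(pillar_weights[p] * s for p, s in pillars.items())

# Hypothetical pillars and metrics, each scored in [0, 1].
scores = {
    "fairness":       {"statistical_parity": 0.8, "equal_opportunity": 0.6},
    "robustness":     {"noise_tolerance": 0.9},
    "explainability": {"feature_importance": 0.7},
}
weights = {"fairness": 0.4, "robustness": 0.3, "explainability": 0.3}
t = trust_score(scores, weights)
```

An API endpoint of the kind the thesis describes would then return the metric scores, the per-pillar averages, and the final trust score together, so that clients can inspect every level of the taxonomy.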
|
Jiani Zheng, Climate Change, Biodiversity Losses and Financial Risks, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Dissertation)

|
|
Marco Lang, Spannungsfeld zwischen sozialer, rechtlicher und politischer Verantwortung von Unternehmen: Eine empirische Fallstudie eines Pharmaunternehmens im Kontext des Ukrainekriegs, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Bachelor's Thesis)

|
|
Loris Keist, Integration of Matrix Transposition into Database Systems, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Bachelor's Thesis)
 
Due to increased use in real-world applications, merging relational database management systems with linear algebra operations has been an ongoing topic. It allows the analysis of large amounts of data stored in database systems. Multiple approaches have been integrated, but one linear algebra operation, matrix transposition, remains particularly difficult to implement. This thesis attempts to enable the transposition of relations in database management systems and to avoid issues previously encountered with matrix transposition. The solution is based on decoupling the logical and physical levels in a database management system. Decoupling the two levels adds flexibility to the system and makes it possible to store relations differently from how they appear on the logical level. It was possible to implement the idea directly in the database management system MonetDB and evaluate it against a basic version of transpose. The evaluation shows some improvements in performance, and the solution allows the transposition of relations with a large number of tuples, which has been a main issue for matrix transposition in database management systems. |
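The decoupling idea can be illustrated with a toy column store in which transposition merely flips a logical flag instead of moving physical data. This is a deliberately simplified sketch of the concept, not MonetDB's implementation:

```python
class Relation:
    """Toy column store.

    The physical level is a dict of columns; a flag decides whether the
    logical view is the stored orientation or its transpose. Transposing
    therefore costs O(1): it flips the flag rather than rewriting data.
    """
    def __init__(self, columns):
        self.columns = columns      # physical level: name -> list of values
        self.names = list(columns)
        self.transposed = False     # logical level

    def transpose(self):
        self.transposed = not self.transposed
        return self

    def cell(self, row, col):
        if self.transposed:
            row, col = col, row     # remap logical to physical coordinates
        return self.columns[self.names[col]][row]

m = Relation({"c0": [1, 2], "c1": [3, 4]})   # logical matrix [[1, 3], [2, 4]]
before = m.cell(0, 1)                        # physical value at (0, c1)
after = m.transpose().cell(1, 0)             # same physical value, swapped coordinates
```

The point of the sketch is that the logical operation (transpose) never touches the physical storage, which is what makes transposition of relations with many tuples feasible.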
|
Yifei Liu, Improving Vision Transformers by Incorporating Spatial Priors and Sparse Computation, University of Zurich, Faculty of Business, Economics and Informatics, 2023. (Master's Thesis)
 
Vision Transformers (ViTs) are powerful deep learning models and have recently made impressive strides in the field of computer vision. However, vision transformers are not data-efficient, and their high computational cost, quadratic in the number of tokens, currently limits their adoption in power- and computation-constrained applications. To improve the data and inference efficiency of ViTs, we explore two different paths. First, we notice that the tokens in ViTs do not incorporate any inductive bias. We extract more fine-grained tokens (dubbed subtokens) from each token by expanding its channel dimension into spatial dimensions, and introduce convolutions or shifting on the subtokens to insert intra-token spatial priors. The subtoken convolution improves the classification accuracy of ViTs trained from scratch by 2.21% on small datasets (Cifar100) and 1.14% on larger datasets (ImageNet-1K), and also shows faster convergence. Secondly, recent studies have shown that not all tokens are helpful for the final task, and ViTs can be made more efficient by pruning redundant tokens. However, active research has mostly focused on high-level tasks like image classification.
To extend token pruning methods to more complex downstream tasks, we revisit the design of token pruning and find three key components that lead to better performance: (1) the token selection should not be based on the class token, (2) a dynamic pruning rate is better than a static pruning rate, and (3) preserving the feature map of all tokens is better than dropping tokens for all later layers. To this end, we propose SViT, a simple yet effective dynamic token selection scheme that selects and processes highly informative tokens while preserving a structured feature map, thus maintaining compatibility with downstream tasks.
On the image classification task (ImageNet-1K), we improve the throughput of DeiT-S by 49% with only a 0.4% accuracy drop. On object detection and instance segmentation tasks (COCO), we improve the inference speed by 32.5% with a 0.3-point drop in box AP and no drop in mask AP. |
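The structured-feature-map idea behind the token selection scheme can be sketched with NumPy. This is a conceptual illustration only; the scoring function and the processing step are placeholders, not the SViT modules:

```python
import numpy as np

def sparse_token_update(tokens, scores, keep_ratio, process):
    """Process only the highest-scoring tokens and scatter the results back.

    Unselected tokens pass through unchanged, so the full, structured
    feature map is preserved for downstream tasks.
    """
    n = tokens.shape[0]
    k = max(1, int(n * keep_ratio))   # a dynamic rate: k can vary per input
    keep = np.argsort(scores)[-k:]    # indices of the most informative tokens
    out = tokens.copy()
    out[keep] = process(tokens[keep]) # compute is spent only on selected tokens
    return out

rng = np.random.default_rng(0)
tokens = rng.normal(size=(8, 4))        # 8 tokens, 4 channels
scores = np.abs(tokens).mean(axis=1)    # stand-in informativeness score
out = sparse_token_update(tokens, scores, keep_ratio=0.5, process=lambda t: t * 2.0)
```

Because the output retains one row per input token, a detection or segmentation head that expects a dense feature map can still consume it, which is the compatibility property the abstract emphasizes.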
|