Marc Oliver Rieger, Mei Wang, Prospect theory for continuous distributions, Journal of Risk and Uncertainty, Vol. 36 (1), 2008. (Journal Article)
We extend the original form of prospect theory by Kahneman and Tversky from finite lotteries to arbitrary probability distributions, using an approximation method based on weak convergence. The resulting formula is computationally easier than the corresponding formula for cumulative prospect theory and makes it possible to use prospect theory in future applications in economics and finance. Moreover, we suggest a method for incorporating a crucial step of the “editing phase” into prospect theory, thereby removing the discontinuity of the original model.
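For reference, the value of a finite lottery under original prospect theory, together with a sketch of a continuous analogue of the kind the abstract describes (the normalized form below mirrors the structure of such an extension and is an assumption, not a quotation of the paper's formula):

% Original Kahneman-Tversky valuation of a finite lottery with outcomes x_i,
% probabilities p_i, value function v, and probability weighting function w:
PT = \sum_{i} w(p_i)\, v(x_i)

% Sketch of a continuous analogue for a distribution with density f; the
% normalization keeps the functional well-behaved (an assumption about the
% form, not a verbatim reproduction of the paper's formula):
PT = \frac{\int v(x)\, w(f(x))\, \mathrm{d}x}{\int w(f(x))\, \mathrm{d}x}
|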
|
D Schunk, A Markov chain Monte Carlo algorithm for multiple imputation in large surveys, Advances in Statistical Analysis (AStA), Vol. 92 (1), 2008. (Journal Article)
Important empirical information on household behavior and household finances, used heavily by researchers, central banks, and policy consultants, is obtained from surveys. However, various interdependent factors that can only be controlled to a limited extent lead to unit and item nonresponse, and missing data on certain items is a frequent source of difficulties in statistical practice. It is therefore all the more important to explore techniques for the imputation of large survey datasets. This paper presents the theoretical underpinnings of a Markov chain Monte Carlo multiple imputation procedure and outlines important technical aspects of the application of MCMC-type algorithms to large socio-economic datasets. In an exemplary application it is found that MCMC algorithms have good convergence properties even on large datasets with complex patterns of missingness, and that the use of a rich set of covariates in the imputation models has a substantial effect on the distributions of key financial variables.
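The structure of an MCMC imputation procedure can be sketched for the simplest case, a single normally distributed variable with missing entries: the sampler alternates between redrawing the missing values given the current parameters and redrawing the parameters given the completed data. This is an illustration of the general data-augmentation scheme, not the paper's algorithm:

# Minimal data-augmentation Gibbs sampler for multiply imputing a
# univariate normal variable with missing entries. Illustrative sketch
# only; the paper's procedure targets large multivariate survey data.
import numpy as np

rng = np.random.default_rng(0)

def gibbs_impute(y, n_iter=500, n_imputations=5):
    y = np.asarray(y, dtype=float)
    miss = np.isnan(y)
    y_comp = y.copy()
    y_comp[miss] = np.nanmean(y)          # crude starting values
    imputations, n = [], len(y)
    for it in range(n_iter):
        # P-step: draw parameters from their posterior given completed data
        # (noninformative prior: sigma^2 ~ inverse-chi^2, mu ~ normal)
        s2 = y_comp.var(ddof=1)
        sigma2 = (n - 1) * s2 / rng.chisquare(n - 1)
        mu = rng.normal(y_comp.mean(), np.sqrt(sigma2 / n))
        # I-step: redraw the missing values from the current model
        y_comp[miss] = rng.normal(mu, np.sqrt(sigma2), miss.sum())
        # keep the last few completed datasets (real applications would
        # space the retained draws to reduce autocorrelation)
        if it >= n_iter - n_imputations:
            imputations.append(y_comp.copy())
    return imputations

data = np.array([1.2, np.nan, 0.7, 2.1, np.nan, 1.5, 0.9, 1.8])
completed = gibbs_impute(data)
|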
|
Christian Ewerhart, Natacha Valla, Liquidité des marchés financiers et prêteur en dernier ressort, Revue de la stabilité financière / Financial Stability Review (11; Nu), 2008. (Journal Article)
In the summer of 2007, problems related to subprime debt in the United States caused disruptions in numerous segments of the financial system, in particular the interbank money markets, forcing the American and European central banks to intervene repeatedly in order to restore their orderly functioning. This article examines the circumstances under which a liquidity shortage can arise and assesses the different options available to the lender of last resort for restoring financial stability. It also shows that the risk assessment of leveraged financial entities must not rely on balance-sheet data alone, but should also explicitly take into account collateral, illiquidity, and the potential unavailability of market prices.
We draw two main conclusions. First, we establish a clear hierarchy among policy instruments. Given the relationship between risk and efficiency, targeted liquidity injections (emergency facilities) are to be preferred. Indeed, when liquidity is used for speculative purposes in times of crisis, non-discriminatory open market operations risk attracting participants who are short of funds and who may divert central bank money away from those who need it most. Targeted liquidity injections then become strictly preferable.
Second, in our view, forced asset sales can disrupt markets when investors are highly leveraged. Setting aside external financing and the renegotiability of loan contracts, if a fully leveraged investor is hit by a liquidity shock, he will be forced to sell off part of his assets. In markets that are not perfectly liquid, these liquidations induce price declines which, in the presence of impediments to standard risk management, trigger a reassessment of marked-to-market balance sheets, margin calls, and further sales. In the worst-case scenario, the highly leveraged investor may be unable to cope with all these liquidity contractions and the margin calls that accompany them. The result is then a collapse of the market for illiquid assets, which makes the valuation of these assets rather ambiguous. For the investor, given the potential breakdown of trading, the level of losses triggering operational default is probably lower than that implied by standard risk measures. |
|
Cerstin Mahlow, Michael Piotrowski, Linguistic Support for Revising and Editing, In: Proc. of Computational Linguistics and Intelligent Text Processing. 9th International Conference, CICLing 2008, Haifa, Israel, Springer, February 2008. (Conference or Workshop Paper)
Revising and editing are important parts of the writing process. In fact, multiple revision and editing cycles are crucial for the production of high-quality texts. However, revising and editing are also tedious and error-prone, since changes may introduce new errors.
Grammar checkers, as offered by some word processors, are not a solution. Besides the fact that they are only available for a few languages, and regardless of their questionable quality, their conceptual approach is not suitable for experienced writers, who actively create their texts. Word processors offer few, if any, functions for handling text on the same cognitive level as the author: while the author is thinking in high-level linguistic terms, editors and word processors mostly provide low-level, character-oriented functions. Mapping the intended outcome to these low-level operations is distracting for the author, who has to focus for a long time on small parts of the text. This results in a loss of global overview of the text and in typical revision errors (duplicate verbs, extraneous conjunctions, etc.).
We therefore propose functions for text processors that work on the conceptual level of writers. These functions operate on linguistic elements, not on lines and characters. We describe how these functions can be implemented by making use of NLP methods and linguistic resources.
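As an illustration of such a function, the sketch below implements one linguistic-level editing operation, removing the adjectival modifiers of nouns, on top of the spaCy NLP library (an assumption; the paper does not prescribe a specific toolkit):

# Sketch of an editing command that operates on linguistic units rather
# than characters: remove the adjectival modifiers of every noun in a
# sentence. Assumes spaCy and its small English model are installed.
import spacy

nlp = spacy.load("en_core_web_sm")

def strip_adjectives(text: str) -> str:
    doc = nlp(text)
    # keep every token that is not an adjectival modifier of a noun
    keep = [t for t in doc if not (t.dep_ == "amod" and t.head.pos_ == "NOUN")]
    return "".join(t.text_with_ws for t in keep)

print(strip_adjectives("The tedious revision introduced new errors."))
# e.g. -> "The revision introduced errors."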
|
|
Matthias Spinner, Combining Ajax with Semantics - Development of a Culturally Adaptive User Interface, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2008. (Master's Thesis)
Adaptations of applications to the individual user are often missing or only insufficiently developed. If an application is adapted at all, it is usually only for certain countries or groups of people; the effort to create an individualized adaptation is rarely made. The reasons are, on the one hand, the high cost of realization and, on the other hand, the difficulty of knowing what such a personalized adaptation should look like, that is, which individual needs and requirements the user has.
This thesis describes the implementation of a Web platform named CUMOWeb, which demonstrates an approach to individual adaptation based on a to-do application. The site elements are built modularly and can be freely combined, so that an individual adaptation can be generated automatically for each user. This generation is based on user-specific cultural dimensions provided by the ontology CUMO. As a consequence, it is possible to present an individually adapted interface already on the user's first visit.
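The adaptation step can be pictured as a rule that maps cultural dimension scores from the user model onto a combination of interface modules. The dimension names, thresholds, and module variants below are hypothetical, not CUMO's actual vocabulary:

# Illustrative sketch of rule-based interface assembly from cultural
# dimension scores in [0, 1] (names and thresholds are invented):
def assemble_interface(dimensions: dict) -> dict:
    config = {"navigation": "flat", "guidance": "minimal", "density": "high"}
    if dimensions.get("uncertainty_avoidance", 0.0) > 0.7:
        config["guidance"] = "step-by-step"    # more guidance for high UA
    if dimensions.get("power_distance", 0.0) > 0.6:
        config["navigation"] = "hierarchical"  # deeper menu structures
    if dimensions.get("individualism", 1.0) < 0.4:
        config["density"] = "low"              # calmer, less crowded layout
    return config

# A user model (e.g. read from the ontology) yields the dimension scores:
print(assemble_interface({"uncertainty_avoidance": 0.8, "power_distance": 0.3}))
|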
|
Marius Flückiger, Entwicklung eines Lernspiels zur Messung von Lernerfolg und Flow, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2008. (Master's Thesis)
This thesis describes the development of a learning game to measure learning success and “flow”. A person in the flow state is fully immersed in what he or she is doing; flow could therefore have a positive effect on learning. The goal of this thesis is to examine the correlation between flow and learning success. Based on several game and flow theories, a learning game was developed in order to induce flow in the player and to measure the player's learning success. An experiment conducted with the developed learning game showed that a higher flow value leads to higher learning success. Thus a positive correlation between flow and learning success could be identified.
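The reported analysis amounts to correlating per-participant flow scores with learning-success scores; a minimal sketch with made-up numbers (the thesis's actual data and measures are not reproduced here):

# Correlate flow scores with learning-success scores (illustration only).
from scipy.stats import pearsonr

flow =    [3.1, 4.5, 2.8, 4.9, 3.7, 4.2, 2.5, 4.0]
success = [0.55, 0.80, 0.50, 0.90, 0.65, 0.75, 0.45, 0.70]

r, p_value = pearsonr(flow, success)
print(f"r = {r:.2f}, p = {p_value:.3f}")  # a positive r matches the finding
|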
|
Lukas Wälli, Effort Estimation Methods for defined work-packages of the IT development at UBS, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2008. (Master's Thesis)
This diploma thesis was carried out in the context of the UBS environment. It focuses on an enmeshed software effort estimation method that depends on integrated requirements definition, resource planning, and project management. The thesis discusses general estimation principles and approaches that lead to more accurate and precise effort estimates. It points out the differentiation between business targets, effort estimated as work, and duration, and discusses the current situation at UBS before stressing the main problems and issues. The target situation presents a light iterative estimation approach based on experience-value-driven estimating factors. The approach separates effort from duration and focuses on enmeshed project management activities. The effort estimation discipline consists of estimation attributes, which are mainly input and parameter definitions; the estimation model, which is the algorithmic approach; and the estimation process, which defines management activities. Since the diploma thesis was written in the context of a practical implementation, the realization is part of it too, exposing hurdles to be taken. An analysis chapter discusses the set targets and potential risks and gives an outlook. The thesis answers the questions whether and how UBS applies its effort estimation process in an adequate way. It is shown that effort estimation cannot be separated from surrounding and directly dependent processes. It presents a modular estimation approach grounded in estimating factors.
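A minimal sketch of the kind of experience-value-driven, factor-based model the abstract describes: a size-based base estimate is scaled by multiplicative factors calibrated from past projects. The factor names and multipliers are invented for illustration, not UBS's actual parameters:

# Sketch of a factor-based effort estimation model (values are invented).
def estimate_effort(base_person_days: float, factors: dict) -> float:
    """Multiply a size-based base estimate by experience-derived factors."""
    effort = base_person_days
    for name, multiplier in factors.items():
        effort *= multiplier
    return effort

effort = estimate_effort(
    base_person_days=120,
    factors={
        "requirements_stability": 1.15,  # volatile requirements add effort
        "team_experience": 0.90,         # a seasoned team reduces effort
        "technical_complexity": 1.25,
    },
)
print(f"Estimated effort: {effort:.0f} person-days")  # duration derived separately
|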
|
Christian Jaldon, Die erfolgskritische Rolle des mittleren Managements im Wissenstransferprozess bei strategischen IT Projekten, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2008. (Master's Thesis)
Systematic knowledge management can contribute remarkably to making IT projects successful. In IT outsourcing projects, the knowledge transfer represents one of the critical success factors. The success thus basically depends on the transferred knowledge, the transfer methods, and how effectively and efficiently the knowledge transfer process has been designed. Ikujiro Nonaka and Hirotaka Takeuchi, two renowned business experts from Japan, have developed a new management model, the so-called “middle-up-down management”. Contrary to the widely spread negative attitude towards middle management, they claim that middle management in particular plays a critical part in making knowledge projects a success. This thesis focuses on the role of middle management in strategic IT projects and analyzes, on the basis of four case studies from well-known IT service providers, how far middle management influences the success of the knowledge transfer and thus the success of the entire IT project. |
|
Sonja Näf, Mining Software Repositories with Relational Data Mining Methods, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2008. (Master's Thesis)
In complex software projects, a lot of information about defect, release, and source code history is gathered. Researchers have found that mining these software repositories can provide valuable information about software development. So far, software repositories have been mined with traditional data mining methods, which are suitable for propositional data: flat, homogeneous data held in a single-table database. This thesis compares the traditional approach with relational data mining methods, which are able to handle heterogeneous data. First, an introduction to relational data mining is given, and a few relational data mining tools are introduced. In a next step we present the data for our experiments and the necessary data preparations. Finally, we conduct several experiments which show the advantages as well as the weaknesses of the relational approach.
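The contrast between the two representations can be sketched in a few lines: relational repository data (separate commit and bug tables) must be flattened by aggregation before a propositional, single-table miner can use it, whereas a relational miner works on the tables directly. The column names below are invented:

# Propositionalization of relational repository data (illustration only).
import pandas as pd

commits = pd.DataFrame({
    "file": ["A.java", "A.java", "B.java"],
    "lines_changed": [120, 30, 15],
})
bugs = pd.DataFrame({"file": ["A.java", "A.java"], "severity": [3, 5]})

# One row per file, with the one-to-many relations collapsed to aggregates:
flat = (
    commits.groupby("file")
    .agg(n_commits=("lines_changed", "size"), churn=("lines_changed", "sum"))
    .join(bugs.groupby("file").size().rename("n_bugs"))
    .fillna({"n_bugs": 0})
    .reset_index()
)
print(flat)  # a relational miner would instead work on the tables directly
|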
|
Daniel Gassmann, Kritische Erfolgsfaktoren für den Wissenstransfer in IT-Offshore-Beziehungen, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2008. (Master's Thesis)
To remain competitive, many companies go offshore, meaning that parts of their business processes are relocated to a distant country. The basic principle of a successful IT offshore project is the knowledge transfer: the knowledge available in the company must be transferred to the employees of the distant company so that they can fulfill their mandate as part of a global value chain.
An effective and efficient knowledge transfer depends on a number of critical success factors. The present case study therefore identifies the factors which have a considerable influence on the efficiency and effectiveness of the knowledge transfer. Based on these results, an integrated process model is proposed, giving management a well-established basis for the execution of IT offshore projects and the related knowledge transfer. |
|
Mark Furrer, Call Pattern Anomaly Detection in Voice over IP Systems, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2008. (Master's Thesis)
Anomaly detection systems have been applied successfully to a great variety of applications. To the present day, there is still no system available that detects anomalies in call patterns in a Voice over IP system in order to prevent possible abuses. This work analyses different established methods for anomaly detection and examines their applicability in this context. The goal of this work is to design, develop, and prototypically implement an anomaly detection system which is able to monitor a user's call behavior and detect anomalies in real time. The system developed in this work takes into account the call parameters destination number, day of the week, and time of day. The profile creation, as well as the classification process, is realized with statistical methods. The implementation is done in C++ and connected to Asterisk® using Asterisk's FastAGI protocol. The evaluation shows that the prototype can operate successfully in real time. The false positive and false negative rates depend on the actual values of the classifier settings (thresholds, number of calls used for profile creation, etc.). The results show that using the same values for all profiles does not lead to optimal classification results for every profile. Further investigations with respect to a dynamic adjustment of the configuration values to the user profile are necessary. Also, the expansion of the model to take into account additional parameters (for example, the location of the user) must be considered. The model could therefore be expanded to a multi-stage classifier.
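To illustrate the statistical profiling idea, the sketch below models a user profile as smoothed relative frequencies over the three call parameters named above and flags calls whose estimated probability falls below a threshold. The threshold and data are invented, and the actual prototype is implemented in C++ against Asterisk:

# Sketch of statistical, profile-based call classification (illustration only).
from collections import Counter

class CallProfile:
    def __init__(self, threshold=0.02):
        self.counts = Counter()
        self.total = 0
        self.threshold = threshold

    def train(self, calls):
        # calls: iterable of (destination_prefix, weekday, hour) triples
        for call in calls:
            self.counts[call] += 1
            self.total += 1

    def is_anomalous(self, call) -> bool:
        # Laplace-smoothed relative frequency of the observed triple
        prob = (self.counts[call] + 1) / (self.total + len(self.counts) + 1)
        return prob < self.threshold

profile = CallProfile()
profile.train([("0041", 0, 9), ("0041", 1, 9), ("0041", 2, 10)] * 20)
print(profile.is_anomalous(("0099", 6, 3)))  # unusual destination/time -> True
|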
|
Marcel Lanz, Einführung des PUA-Tools, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2008. (Master's Thesis)
The PUA-Tool is a computer program to support the analysis of a project environment, implemented by the MIO group at the University of Zurich. Version 2.0 of the PUA-Tool has been tested in a productive environment and is market-ready. The MIO group is interested in releasing the software under an open source license. This thesis describes a suitable concept for introducing the software to the market. The work further depicts the process of planning and implementing the steps of the concept. Finally, the results of this process are presented. These are, among others, the implementation of a further module of the test framework, the release of the PUA-Tool under the GNU GPL license, a project homepage, and an enhanced version 2.1. |
|
Sascha Karlen, Vorverarbeitung von Punktwolken-Daten für das Stream-Processing, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2008. (Master's Thesis)
This student project presents several approaches to a more efficient neighbor search as needed in particle-based fluid simulations. One of these approaches has each particle memorize its local spatial environment (environmental knowledge); this knowledge can be used at a later time to determine the neighbors, so that a neighbor search from scratch can be avoided. The other approaches presented do not rely on environmental knowledge to improve the neighbor search. The approaches do not compete with one another.
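One established form of such environmental knowledge is a Verlet-style neighbor list: neighbors are collected within the cutoff plus a "skin" margin and reused until some particle has moved more than half the skin. The sketch below illustrates this idea; the thesis's own scheme may differ in detail:

# Verlet-style neighbor caching to avoid searches from scratch (sketch).
import numpy as np

def build_neighbor_list(pos, cutoff, skin):
    # O(N^2) distance matrix is fine for a small illustration
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return [np.nonzero(row < cutoff + skin)[0] for row in d], pos.copy()

def neighbors_still_valid(pos, ref_pos, skin):
    moved = np.linalg.norm(pos - ref_pos, axis=1)
    return bool(np.all(moved < 0.5 * skin))

pos = np.random.default_rng(1).random((100, 3))
nlist, ref = build_neighbor_list(pos, cutoff=0.1, skin=0.04)
pos += 0.005  # particles drift a little during a simulation step
if not neighbors_still_valid(pos, ref, skin=0.04):
    nlist, ref = build_neighbor_list(pos, cutoff=0.1, skin=0.04)
|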
|
Stefan Bösch, Mechanisms for Mapping End Systems in the Internet to Geographic Location Information, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2008. (Master's Thesis)
Today’s Internet constitutes a global network of networks with a broad variety of service providers. These offer goods such as music files or live video streams that can be distributed electronically over the Internet. In order to sell such a service in a global environment, a legally valid contract between a provider and his customer has to be concluded, based on the legal foundations of both parties. In some situations, a service provider requires means to determine a client’s geographic location at the country level. Geographic location information is not obtainable directly from the Internet’s TCP/IP protocol stack. Accordingly, this work investigates suitable methods that allow a mapping of end systems or intermediate systems in the Internet to their actual geographic location and provides a comprehensive compilation of feasible approaches. Furthermore, it addresses a prototypical implementation of the two candidate systems that are considered best suited for this task. The first candidate is a location information server that provides position data within an administrative domain. The second candidate is an IP address query tool that enables country lookups at all five regional Internet registries. Both candidates are evaluated regarding their location information granularity, reliability, automation, and performance in order to provide an indication of their applicability in a productive environment and to draw a comparison with existing solutions.
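The second candidate's core step can be sketched as a plain whois query (a line-based protocol on TCP port 43) against a regional Internet registry. The "country:" field name follows RIPE's response format; other RIRs may label the field differently:

# Look up the registration country of an IP address via whois (sketch).
import socket

def whois_country(ip: str, server: str = "whois.ripe.net") -> str | None:
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall((ip + "\r\n").encode())
        response = b""
        while chunk := sock.recv(4096):
            response += chunk
    for line in response.decode(errors="replace").splitlines():
        if line.lower().startswith("country:"):
            return line.split(":", 1)[1].strip()
    return None

print(whois_country("193.5.0.1"))  # e.g. "CH" for a Swiss allocation
|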
|
Thierry Kramis, Distributed Storage Strategies for IP Traffic Traces / DIPStorage, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2008. (Master's Thesis)
Measuring IP traffic traces becomes increasingly complex as data flows tend to grow over time. Processing and storage may no longer be handled by a single unit, such as a router, but may be distributed across multiple units. This thesis therefore develops two kinds of distributed storage strategies for IP traffic traces and responds to the need of joining multiple nodes into a processing and storage network on top of one of the most widely used P2P frameworks, FreePastry. The first storage strategy adopts a random approach towards distributing either processing power or storage capacity. The second strategy is based on a data-centric replication strategy and is therefore completely different from the random approach. The thesis also compares these two strategies and explains why one strategy is preferable to the other. This work was started on the basis of an existing project from a previous semester thesis. The analysis chapter therefore focuses on the existing project, whereas the subsequent conclusion shows which parts of that application might be reusable and which parts should be replaced for a variety of reasons. Furthermore, this thesis includes work from an ongoing project at the University of Konstanz, where an XML-based storage system called TreeTank is being built and tested. The integration of this storage system will hopefully result in a more robust and highly efficient storage and processing platform for IP traffic traces. The solution to the overall problem is an architecture design called DIPStorage.
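The data-centric strategy can be illustrated by a hash-based placement rule: a flow record's key is hashed onto a ring of nodes, in the spirit of Pastry's key-based routing, so that the same key always lands on the same nodes. Node names and the replication degree below are invented; this is a sketch, not DIPStorage's actual placement logic:

# Hash-based, data-centric placement of trace records on a node ring (sketch).
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]

def responsible_nodes(flow_key: str, replicas: int = 2) -> list[str]:
    digest = int(hashlib.sha1(flow_key.encode()).hexdigest(), 16)
    first = digest % len(NODES)
    # the record is stored on `replicas` consecutive nodes of the ring
    return [NODES[(first + i) % len(NODES)] for i in range(replicas)]

# All traces of one source/destination pair end up on the same nodes:
print(responsible_nodes("10.0.0.1->192.168.1.5"))
|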
|
Lukas Isliker, Peer-to-Peer-based Multi-Path Large File Transfer, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2008. (Master's Thesis)
Transfer protocols like TCP and UDP exist for broad use. Their handicap is the lack of flexibility in sending data in parallel along different paths through the Internet. Studies [28] have shown that in 30 to 80 percent of cases the data do not take the best way through the Internet. The aim of this work is to develop a mechanism that enables the data to reach the receiver by means of different paths, in order to improve the efficiency of the data transfer. The developed mechanism has to be implemented, tested, and then evaluated. The implemented application is able to find a path through different autonomous systems. Tests of the application demonstrated that in an overlay network consisting of 9 nodes, at most two different paths may result. To clarify the efficiency of data transfer via parallel paths, further tests are necessary. The implementation is based on the assumption that two paths disturb each other if they are routed via the same physical path; however, this is not guaranteed. This thesis provides a basis for further and more detailed research work in order to verify the existence of bottlenecks.
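A minimal sketch of the underlying mechanism: a payload is split into sequence-numbered chunks that are assigned round-robin to the available paths for parallel transmission. The path handles are placeholders for real overlay connections:

# Round-robin assignment of payload chunks to parallel paths (sketch).
def schedule_chunks(data: bytes, paths: list[str], chunk_size: int = 4):
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    # (sequence number, path, chunk); numbering allows in-order reassembly
    return [(seq, paths[seq % len(paths)], chunk)
            for seq, chunk in enumerate(chunks)]

for seq, path, chunk in schedule_chunks(b"abcdefghijkl", ["path-1", "path-2"]):
    print(seq, path, chunk)
|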
|
Stephan Blatti, Entwurf und Implementierung eines Provenance Browsers für die Visualisierung von Data Provenance, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2008. (Master's Thesis)
|
|
Nicolas Bettenburg, Sascha Just, Adrian Schröter, Cathrin Weiss, Rahul Premraj, Thomas Zimmermann, What Makes a Good Bug Report?, In: Proceedings of the 16th ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE), February 2008. (Conference or Workshop Paper)
|
|
Jonas Luell, Abraham Bernstein, Alexandra Schaller, Hans Geiger, Foreign Exchange (pp. 114-177), In: Swiss Financial Center Watch Monitoring Report, February 2008. (Conference or Workshop Paper)
|
|
Christian Kündig, User Model Editor for Ontology-based Cultural Personalization, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2008. (Bachelor's Thesis)
Past research has shown that personalized applications can increase user satisfaction and productivity. Cultural user modelling helps to exploit these advantages by lowering the impact of the bootstrapping process. Cultural user models do not require tedious capturing processes, as they can profit from already known preferences grounded in the user's cultural background. This bachelor thesis explains the fundamentals of cultural user modelling and personalization as well as the privacy aspects of concern. Ultimately, a user modelling system based on the cultural user model ontology CUMO is presented and implemented. This system allows a user to maintain his user model and to give external applications access to it. |
|