R Foellmi, Josef Zweimüller, Income distribution and demand-induced innovation, Review of Economic Studies, Vol. 73 (4), 2006. (Journal Article)
We introduce non-homothetic preferences into an innovation-based growth model and study how income and wealth inequality affect economic growth. We identify a positive price effect (higher inequality allows innovators to charge higher prices) and negative market-size effects (higher inequality implies smaller markets for new goods and/or a slower transition of new goods into mass markets). It turns out that price effects dominate market-size effects. We also show that a redistribution from the poor to the rich may be Pareto improving for low levels of inequality.
Reto Wettstein, Kundenverhalten in web-basierten sozialen Netzwerken: Eine Evaluation von Vorhersagemodellen, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2006. (Master's Thesis)
In every business, customer data are a big asset. Analyzing them allows a company to segment, target, and position its offers in terms of price and channel. Data mining methods as an explorative way to analyze customer data made their way into corporate data warehouses more than ten years ago. Nowadays, when web-based social networks offer customer-created behavioural network data in real time, the mining community sees new applications for relational data mining approaches that take features of connected member profiles and their relations into account. Two freely available workbenches that incorporate such relational algorithms are NetKit-SRL and Proximity. Our work applied these two software packages to a data set of 42,044 interconnected member profiles of a web-based social network and compared them with widely used propositional algorithms such as C5, logistic regression, and neural nets. The data have been enriched with ego-net centrality and density measures from the corpus of measures commonly known in the social network analysis (SNA) field. We show that incorporating SNA measures does not necessarily improve the mining results, neither with traditional algorithms nor with relational ones. Furthermore, relational algorithms on networked data are not in every case superior to traditional algorithms on propositionalized data. Our work names the moderating variables that led to these outcomes. Based on our key finding of meaningful correlations between SNA and activity measures, we were able to design the "social mailing model", a direct mailing model that could lead to a substantial improvement in conversion rate. A real-world experiment would therefore be one of the proposed next steps.
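A minimal sketch of one of the SNA measures mentioned, ego-net density, on a hypothetical member graph (names and structure invented for illustration; this is not the thesis's code):

```python
# Minimal sketch (hypothetical data): computing the ego-net density measure
# mentioned above for an undirected member graph stored as adjacency sets.
from itertools import combinations

def ego_net_density(graph: dict, ego) -> float:
    """Density of the subgraph induced by `ego` and its direct neighbours.

    `graph` maps each member to the set of members it is connected to.
    """
    nodes = graph[ego] | {ego}
    n = len(nodes)
    if n < 2:
        return 0.0
    # Count edges actually present among the ego-net nodes.
    edges = sum(1 for u, v in combinations(nodes, 2)
                if v in graph.get(u, set()))
    return edges / (n * (n - 1) / 2)

# Toy member graph; member profiles would carry such scores as extra features.
g = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "carol"},
    "carol": {"alice", "bob", "dave"},
    "dave": {"carol"},
}
print(ego_net_density(g, "alice"))  # 1.0: alice's neighbourhood is fully connected
print(ego_net_density(g, "carol"))  # lower: dave is not linked to alice or bob
```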
Michael Würsch, Improving Abstract Syntax Tree based Source Code Change Detection, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2006. (Master's Thesis)
Changes are a crucial part of the life-cycle of modern software systems. Common versioning systems such as CVS store version histories of source code. Usually, they are not capable of tracking changes on a more sophisticated level: they provide lexical but not syntactical change analysis.
The existing Eclipse plug-in ChangeDistiller bridges this gap by providing a sophisticated analysis of structural source code changes. It uses an abstract syntax tree (AST) representation of subsequent revisions of source code files and compares the trees with a change detection algorithm for hierarchically structured information. The outcome is an edit script describing the operations necessary to transform the original version of the tree into the modified one.
We aim at improving the sub-algorithm responsible for matching trees. It exhibits shortcomings in matching leaves in general, often produces sub-optimal results for small subtrees, and is not able to handle large numbers of changes adequately. To overcome these issues, we propose customized similarity measures and a similarity-ranking algorithm for leaves, as well as dynamic modulation of the tree-similarity thresholds whenever small tree structures are encountered.
To prove our claims, we establish an extensive benchmark for investigating runtime performance and accuracy. The benchmark is based on the JUnit regression testing framework and relies on artificial source code examples, as well as on examples taken from a medium-sized real project.
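A minimal sketch, not ChangeDistiller's actual code, of what a customized leaf similarity with ranking could look like; the bigram measure and the 0.6 threshold are illustrative assumptions:

```python
# Minimal sketch (illustrative, not the thesis's implementation): a bigram-based
# string similarity and a greedy ranking that pairs each leaf of the original
# AST with its most similar unmatched leaf in the revised AST.
def bigrams(s: str) -> set:
    return {s[i:i + 2] for i in range(len(s) - 1)}

def dice_similarity(a: str, b: str) -> float:
    """Dice coefficient over character bigrams, in [0, 1]."""
    ba, bb = bigrams(a), bigrams(b)
    if not ba and not bb:
        return 1.0
    return 2 * len(ba & bb) / (len(ba) + len(bb))

def rank_leaf_matches(old_leaves, new_leaves, threshold=0.6):
    """Greedily pair each old leaf with the most similar unmatched new leaf."""
    matches, used = [], set()
    for old in old_leaves:
        candidates = [(dice_similarity(old, new), new)
                      for new in new_leaves if new not in used]
        candidates.sort(reverse=True)
        if candidates and candidates[0][0] >= threshold:
            score, best = candidates[0]
            matches.append((old, best, score))
            used.add(best)
    return matches

old = ["return a + b;", "int sum = 0;"]
new = ["return a + b + c;", "int total = 0;"]
print(rank_leaf_matches(old, new))  # pairs each old leaf with its best match
```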
Andreas Scherer, Guido Palazzo, Towards a Political Conception of Corporate Social Responsibility. Business & Society and the Contribution of Recent Habermasian Political Philosophy, In: IFSAM VIIIth World Congress 2006. 2006. (Conference Presentation)
Cristian Morariu, Implementation of a Non-Repudiation Approach for Web-Services in Mobile Grids, In: 12th Eunice Summer School. 2006. (Conference Presentation)
Cristian Morariu, Reto Zimmermann, Burkhard Stiller, Implementation of a Non-Repudiation Approach for Web-Services in Mobile Grids, In: 12th Eunice Summer School 2006, Stuttgart, Germany, 2006-09-18. (Conference or Workshop Paper published in Proceedings)
Andrea Schenker-Wicki, Comment: Funding Schools by Formula, In: International Conference: Educational Systems and the Challenge of Improving Results. 2006. (Conference Presentation)
Helmut Max Dietl, Egon Franck, Wenn Kapitalisten zu Sozialisten werden, In: Neue Zürcher Zeitung, 214, p. 59, 15 September 2006. (Newspaper Article)
Marc Ziegler, Fumiya Iida, Rolf Pfeifer, "Cheap" underwater locomotion: Roles of morphological properties and behavioral diversity, In: 9th Int. Conference on Climbing and Walking Robots, 2006. (Conference or Workshop Paper published in Proceedings)
Toward adaptive underwater locomotion, this paper presents experimental results for fish-like swimming robots that we have newly developed. Using motor control with only one degree of freedom, these robots exhibit surprisingly rich behavioural diversity in a three-dimensional underwater environment. This paper focuses on some of the behavioural variations, i.e. forward, turning, and vertical movement, which are required for three-dimensional underwater navigation. The visual behaviour analysis shows that, even though there is only one motor, these behaviours are possible because the robots exploit the unique interaction with the environment derived from their morphological properties. For a better understanding of how the material influences the swimming behaviour, a second robot with bending sensors implemented in its tail fin measures the deflection during swimming. Moreover, some of the behaviours demonstrated by these robots have a considerable similarity to those of biological systems, which would also contribute to understanding the adaptive behaviour of animals. Based on the experimental results, we speculate on further issues of "cheap" underwater locomotion.
Nikolaus Augsten, Michael Böhlen, Johann Gamper, An Incrementally Maintainable Index for Approximate Lookups in Hierarchical Data, In: VLDB 2006: 32nd International Conference on Very Large Data Bases, 2006-09-12. (Conference or Workshop Paper published in Proceedings)
Arturas Mazeika, Michael Hanspeter Böhlen, Andrej Taliun, Adaptive density estimation, In: 32nd International Conference on Very Large Data Bases, VLDB Endowment, 2006-09-12. (Conference or Workshop Paper published in Proceedings)
This demonstration illustrates the APDF tree: an adaptive tree that supports the effective and efficient computation of continuous density information. The APDF tree allocates more partition points in non-linear areas of the density function and fewer points in linear areas. This yields not only bounded but tight control of the error. The demonstration explains the core steps of the computation of the APDF tree (split, kernel additions, tree optimization, kernel additions, unsplit) and demonstrates the implementation for different datasets.
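A minimal sketch of the adaptive idea described above, assuming Gaussian kernels and a midpoint linearity test; the actual APDF split criterion and tree structure may differ:

```python
# Minimal sketch (assumed mechanics, not the APDF implementation): place
# partition points adaptively where a kernel density estimate is non-linear,
# so a piecewise-linear approximation keeps the error small.
import math

def kde(data, x, bandwidth=0.5):
    """Gaussian kernel density estimate at point x."""
    n = len(data)
    return sum(math.exp(-0.5 * ((x - d) / bandwidth) ** 2)
               for d in data) / (n * bandwidth * math.sqrt(2 * math.pi))

def adaptive_partition(data, lo, hi, eps=1e-3, depth=0, max_depth=12):
    """Split [lo, hi] while linear interpolation misses the KDE by more than eps."""
    mid = (lo + hi) / 2
    interpolated = (kde(data, lo) + kde(data, hi)) / 2
    if depth >= max_depth or abs(kde(data, mid) - interpolated) <= eps:
        return [lo, hi]  # density is close to linear here: keep a coarse cell
    left = adaptive_partition(data, lo, mid, eps, depth + 1, max_depth)
    right = adaptive_partition(data, mid, hi, eps, depth + 1, max_depth)
    return left[:-1] + right  # merge, dropping the duplicated midpoint

data = [1.0, 1.2, 1.1, 4.0, 4.3, 4.1, 4.2]
points = adaptive_partition(data, 0.0, 6.0)
print(len(points))  # more partition points cluster around the two density bumps
```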
Arturas Mazeika, Michael Hanspeter Böhlen, Cleansing databases of misspelled proper nouns, In: CleanDB 2006, 2006-09-11. (Conference or Workshop Paper published in Proceedings)
The paper presents a data cleansing technique for string databases. We propose and evaluate an algorithm that identifies groups of strings, each consisting of (multiple) occurrences of a correctly spelled string plus nearby misspelled strings. All strings in a group are replaced by the most frequent string of that group. Our method targets proper noun databases, including names and addresses, which are not handled by dictionaries. At the technical level, we give an efficient solution for computing the center of a group of strings and for determining the border of the group. We use inverse strings together with sampling to efficiently identify and cleanse a database. The experimental evaluation shows that for proper nouns the center calculation and border detection algorithms are robust, and even very small sample sizes yield good results.
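A minimal sketch of the grouping-and-replacement idea, without the paper's inverse strings and sampling machinery; the toy data and the edit-distance radius are hypothetical:

```python
# Minimal sketch (assumed grouping logic, hypothetical data): cluster strings
# within a small edit distance of a frequent "center" string and replace every
# group member by that most frequent spelling.
from collections import Counter

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def cleanse(strings, radius=2):
    """Replace each string by the most frequent string within `radius` edits."""
    counts = Counter(strings)
    # Candidate correct spellings, most frequent first.
    centers = [s for s, _ in counts.most_common()]
    return [next((c for c in centers if edit_distance(s, c) <= radius), s)
            for s in strings]

names = ["Zurich", "Zurich", "Zurich", "Zurch", "Zuerich", "Basel", "Basel"]
print(cleanse(names))  # misspellings collapse onto the frequent spelling
```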
Paolo Arena, Luigi Fortuna, Mattia Frasca, Luca Patanè, Cristiano Alessandro, Donato Barbagallo, Learning high sensors from reflexes via spiking networks in roving robots, In: 8th International IFAC Symposium on Robot Control, 2006. (Conference or Workshop Paper published in Proceedings)
Arturas Mazeika, Janis Petersons, Michael Hanspeter Böhlen, PPPA: Push and Pull Pedigree Analyzer for large and complex pedigree databases, In: 10th East-European Conference on Advances in Databases and Information Systems, Springer, 2006-09-03. (Conference or Workshop Paper published in Proceedings)
In this paper we introduce a novel push and pull technique to analyze pedigree data. We present the Push and Pull Pedigree Analyzer (PPPA) to organize large and complex pedigrees and investigate the development of genetic diseases. PPPA receives as input a pedigree (ancestry information) of different families. For each person the pedigree contains information about the occurrence of a specific genetic disease. We propose a new solution to arrange and visualize the individuals of the pedigree based on the relationships between individuals and information about the disease. PPPA starts with random positions of the individuals, and iteratively pushes apart non-relatives with opposite disease patterns and pulls together relatives with identical disease patterns. The goal is a visualization that groups families with homogeneous disease patterns. We investigate our solution experimentally with genetic data from people from South Tyrol, Italy. We show that the algorithm converges independently of the number of individuals n and the complexity of the relationships. The runtime of the algorithm is super-linear in n, and its space complexity is linear in n. The visual analysis confirms that our push and pull technique successfully deals with large and complex pedigrees.
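A minimal sketch of the iterative push/pull update on toy data; the specific force rule, step size, and data are assumptions for illustration, not the paper's actual model:

```python
# Minimal sketch (assumed update rule, hypothetical data): one push/pull
# iteration in 2-D. Relatives with identical disease patterns attract each
# other; non-relatives with opposite patterns repel.
import random

def push_pull_step(positions, relatives, patterns, step=0.1):
    """One iteration over all pairs; mutates and returns `positions`."""
    people = list(positions)
    for i, a in enumerate(people):
        for b in people[i + 1:]:
            (ax, ay), (bx, by) = positions[a], positions[b]
            dx, dy = bx - ax, by - ay
            kin = frozenset((a, b)) in relatives
            if kin and patterns[a] == patterns[b]:
                sign = 1    # pull relatives with identical patterns together
            elif not kin and patterns[a] != patterns[b]:
                sign = -1   # push non-relatives with opposite patterns apart
            else:
                continue
            positions[a] = (ax + sign * step * dx, ay + sign * step * dy)
            positions[b] = (bx - sign * step * dx, by - sign * step * dy)
    return positions

random.seed(0)
positions = {p: (random.random(), random.random()) for p in ("p1", "p2", "p3")}
relatives = {frozenset(("p1", "p2"))}      # p1 and p2 are kin
patterns = {"p1": 1, "p2": 1, "p3": 0}     # 1 = affected, 0 = unaffected
for _ in range(20):
    push_pull_step(positions, relatives, patterns)
print(positions)  # p1 and p2 converge; p3 is pushed away from both
```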
Stefania Leone, Ela Hunt, Thomas B Hodel, Michael Böhlen, Klaus R Dittrich, Design and implementation of a document database extension, In: 10th East-European Conference on Advances in Databases and Information Systems, Alexander Technological Educational Institute of Thessaloniki, 2006-09-03. (Conference or Workshop Paper published in Proceedings)
Integration of text and documents into database management systems has been the subject of much research. However, most approaches are limited to data retrieval. Collaborative text editing, i.e. the ability for multiple users to work on a document instance simultaneously, is rarely supported. Also, documents mostly consist of plain text only and support only very limited metadata storage or search. We address the problem by proposing an extended definition of the document data type which comprises not only the text itself but also structural information such as layout, template, and semantics, as well as document creation metadata. We implemented a new collaborative data type Document which supports document manipulation via a text editing API and an extended SQL syntax (TX SQL), as detailed in this work. We also report on the search capabilities of our document management system and present some of the future challenges for collaborative document management.
Margit Osterloh, Bruno Frey, Corporate governance for knowledge production: theoretical foundations and practical implications, Corporate Ownership and Control, Vol. 3 (4), 2006. (Journal Article)
Agency theory as the dominant view of corporate governance disregards that the key task of firm governance is to generate, accumulate, transfer, and protect firm-specific knowledge. Three different foundations of the theory of the firm, which underpin different concepts of corporate governance, are discussed: the traditional view of the firm as a nexus of contracts, the view of the firm as a nexus of firm-specific investments, and the view of the firm as a nexus of firm-specific knowledge investments. The latter view distinguishes two fundamental differences between contracting firm-specific knowledge investments and financial investments: (1) a knowledge worker cannot contract his or her future knowledge in the same way as the exchange of tangible goods; (2) only insiders can evaluate firm-specific knowledge generation and transformation. We suggest a concept of corporate governance that takes investments in firm-specific knowledge into account: (1) the board should rely more on insiders; (2) those employees of the firm making firm-specific knowledge investments should elect the insiders; (3) a neutral person should chair the board. This concept provides a theoretical foundation of corporate governance grounded in the knowledge-based theory of the firm.
David Kurz, Katrin Hunt, Abraham Bernstein, Dragana Radovanovic, Paul E. Erne, Jean-Christophe Stauffer, Osmund Bertel, Development of a novel risk stratification model to improve mortality prediction in acute coronary syndromes: the AMIS (Acute Myocardial Infarction in Switzerland) model, In: World Congress of Cardiology 2006, September 2006. (Book Chapter)
Background: Current established models predicting mortality in acute coronary syndrome (ACS) patients are derived from randomised controlled trials performed in the 1990s and are thus based on, and predictive for, selected populations. These scores perform inadequately in patients treated according to current guidelines. The aim of this study was to develop a model with improved predictive performance applicable to all kinds of ACS, based on outcomes in real-world patients from the new millennium.
Methods: The AMIS (Acute Myocardial Infarction in Switzerland)-Plus registry prospectively collects data from ACS patients admitted to 56 Swiss hospitals. Patients included in this registry between October 2001 and May 2005 (n = 7520) were the basis for model development. Modern data mining methods using new classification learning algorithms were tested to optimise mortality risk prediction using well-defined and non-ambiguous variables available at first patient contact. Predictive performance was quantified as the area under the curve (AUC, range 0-1) of a receiver operating characteristic and was compared to the benchmark risk score from the TIMI study group. Results were verified using 10-fold cross-validation.
Results: Overall, hospital mortality was 7.5%. The final prediction model was based on the "Averaged One-Dependence Estimators" algorithm and included the following 7 input variables: 1) age, 2) Killip class, 3) systolic blood pressure, 4) heart rate, 5) pre-hospital mechanical resuscitation, 6) history of heart failure, 7) history of cerebrovascular disease. The output of the model was an estimate of in-hospital mortality risk for each patient. The AUC for the entire cohort was 0.875, compared to 0.803 for the TIMI risk score. The AMIS model performed equally well for patients with or without ST elevation myocardial infarction (AUC 0.879 and 0.868, respectively). Subgroup analysis according to the initial revascularisation modality indicated that the AMIS model performed best in patients undergoing PCI (AUC 0.884 vs. 0.783 for TIMI) and worst in patients receiving no revascularisation therapy (AUC 0.788 vs. 0.673 for TIMI). The model delivered an accurate and reproducible prediction over the complete range of risks and for all kinds of ACS.
Conclusions: The AMIS model performs about 10% better than established risk prediction models for hospital mortality in patients with all kinds of ACS in the modern era. Modern data mining algorithms proved useful to optimise the model development.
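The AUC used above to compare the AMIS and TIMI models is a standard rank statistic; a self-contained sketch on hypothetical data (not the AMIS code or data) shows how it is computed:

```python
# Minimal sketch (hypothetical data, standard formula): the area under the ROC
# curve computed as the Mann-Whitney probability that a patient who died is
# ranked above (assigned a higher risk than) a patient who survived.
def auc(risks, died):
    """AUC of predicted `risks` against binary outcomes `died`."""
    pos = [r for r, d in zip(risks, died) if d]
    neg = [r for r, d in zip(risks, died) if not d]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy cohort: predicted in-hospital mortality risks and observed outcomes.
risks = [0.02, 0.10, 0.40, 0.05, 0.70, 0.15]
died  = [False, False, True, False, True, False]
print(round(auc(risks, died), 3))  # 1.0 here: deaths received the highest risks
```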
David Kurz, Katrin Hunt, Abraham Bernstein, Dragana Radovanovic, Paul E. Erne, Osmund Bertel, Inadequate performance of the TIMI risk prediction score for patients with ST-elevation myocardial infarction treated according to current guidelines, In: World Congress of Cardiology 2006, September 2006. (Book Chapter)
Background: Mortality prediction for patients admitted with ST-elevation myocardial infarction (STEMI) is currently based on models derived from randomised controlled trials performed in the 1990s, with selective inclusion and exclusion criteria. It is unclear whether such models remain valid in community-based populations in the modern era.
Methods: The AMIS (Acute Myocardial Infarction in Switzerland)-Plus registry prospectively collects data from ACS patients admitted to 56 Swiss hospitals. We analysed hospital mortality for patients with STEMI included in this registry between 1997 and 2005 and compared it to the mortality predicted by the benchmark risk score from the TIMI study group. This is an integer score calculated from 10 weighted parameters available at admission. Each score value delivers a hospital mortality risk prediction (ranging from 0.7% for 0 points to 31.7% for >8 points).
Results: Among 7875 patients with STEMI, overall hospital mortality was 7.3%. The TIMI risk score overestimated mortality risk at each score level for the entire population. Subgroup analysis according to initial revascularisation treatment (PCI n=3358, thrombolysis n=1842, none n=2675) showed an especially poor performance of the TIMI risk score for patients treated by PCI. In this subgroup no relevant increase in mortality was observed up to a score of 5 points (actual mortality 2.7%, predicted 11.6%), and actual mortality remained below 5% up to 7 points (predicted 21.5%) (Figure 1).
Conclusions: The TIMI risk score overestimates mortality risk and delivers poor stratification in real-life patients with STEMI treated according to current guidelines.
Enrico De Giorgi, Thorsten Hens, Making prospect theory fit for finance, Financial Markets and Portfolio Management, Vol. 20 (3), 2006. (Journal Article)
The prospect theory of Kahneman and Tversky (in Econometrica 47(2), 263–291, 1979) and the cumulative prospect theory of Tversky and Kahneman (in J. Risk Uncertainty 5, 297–323, 1992) are descriptive models for decision making that summarize several violations of expected utility theory. This paper gives a survey of applications of prospect theory to the portfolio choice problem and the implications for asset pricing. We demonstrate that prospect theory (and similarly cumulative prospect theory) has to be re-modelled if one wants to apply it to portfolio selection. We suggest replacing the piecewise power value function of Tversky and Kahneman (in J. Risk Uncertainty 5, 297–323, 1992) with a piecewise negative exponential value function. This latter functional form is still compatible with laboratory experiments but has the following advantages over Tversky and Kahneman's piecewise power function:
1. The Bernoulli Paradox does not arise for lotteries with finite expected value.
2. No infinite leverage/robustness problem arises.
3. CAPM-equilibria with heterogeneous investors and prospect utility do exist.
4. It is able to simultaneously resolve the following asset pricing puzzles: the equity premium, the value and the size puzzle.
5. In contrast to the piecewise power value function, it is able to explain the disposition effect.
Resolving these problems of prospect theory, we show how it can be combined with mean–variance portfolio theory.
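For reference, the two value functions compared in the list above can be written out explicitly. The power form is the standard Tversky and Kahneman (1992) parameterization; the negative exponential form is a generic sketch of the proposed replacement, and the paper's exact parameter names may differ:

```latex
% Piecewise power value function (Tversky and Kahneman, 1992):
v(x) =
\begin{cases}
x^{\alpha} & x \ge 0, \\
-\lambda\,(-x)^{\beta} & x < 0,
\end{cases}
\qquad \alpha, \beta \in (0,1), \ \lambda > 1.

% Piecewise negative exponential alternative (generic sketch; the paper's
% exact parameterization may differ). Note that v is bounded, which is what
% rules out the Bernoulli paradox and the infinite leverage problem:
v(x) =
\begin{cases}
1 - e^{-\alpha x} & x \ge 0, \\
-\lambda\,\bigl(1 - e^{\beta x}\bigr) & x < 0,
\end{cases}
\qquad \alpha, \beta, \lambda > 0.
```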
René Algesheimer, Andreas Herrmann, M Dimpfel, Die Wirkung von Brand Communities auf die Markenloyalität – eine dynamische Analyse, Journal of Business Economics / Zeitschrift für Betriebswirtschaft, Vol. 76 (9), 2006. (Journal Article)
Interactions in brand communities influence brand choice and further variables such as loyalty to a brand or the willingness to recommend it. In light of this finding, the effect of brand communities on these variables was analyzed, with the aim of shaping brand communities in the interest of the firm. The theoretical basis is formed by Thibaut and Kelley's classical exchange theories as well as Festinger's theory of informal social communication. From these, hypotheses were derived and a model was developed to capture the effect of selected determinants on managerial target variables. An empirical study in the market for automobile communities, based on causal analysis, served to test the formulated hypotheses. The results yield suggestions for the conceptualization and operationalization of the brand community phenomenon. Moreover, they show the effect of specific facets of a brand community on customers' loyalty to the brand, which adds to the discussion of brand loyalty.