Peter Lukas Weibel, Simulative performance evaluation for the design of distributed systems, University of Zurich, Faculty of Business, Economics and Informatics, 2004. (Dissertation)
Performance evaluations have mostly been measurements to determine the processing speed of a system or component. For distributed systems, performance is often tested only once the system is deployed in a test environment or even in the production environment. Only then are real usage scenarios, real amounts of data, and real effects of workload and disturbances present, and only then are measurements realistic. Many modern approaches allow the realization of all kinds of design conceptions for distributed systems, but only a few of them seriously consider the performance aspect. In this thesis we present an approach that allows statements, from the performance perspective, about the usefulness and consequences of design conceptions for a system even before the system has been realized or changed. The intention is to complement systems design, not to examine a system after its realization is complete. The core of our approach is an evaluation process that is closely integrated with the design process for a distributed system. The design model created there is translated into an evaluation model to be examined. The aim is to allow statements about resource usage, response time, and other performance indicators in order to find out whether the chosen system architecture can satisfy the requirements. Different usage scenarios can be used for this purpose. Once an evaluation model is created, evaluation strategies are applied to gain knowledge about its performance. We present different strategies in this dissertation. The so-called Cold Start Protocol, for example, is a simple strategy to efficiently determine a throughput maximum for simple cases. More complex strategies have to be applied if the system usage is complex; they typically rely on the simpler strategies for their own realization. The strategies are the core of our research. We use them to test hypotheses and to perform learning processes. They allow an evaluation system to execute standard tasks of performance evaluation without necessarily being controlled by an expert. A tool implementing these strategies gives designers a means to examine their design decisions by executing an evaluation, and even to compare alternatives directly. Even simple examinations of scalability are possible with this approach. The strategies are realized by varying specific parameters of the evaluation models; the variations refer to user-determined model parameters. The strategies determine individual configurations, for each of which a simulation experiment is executed. From the resulting simulation series, the strategies can determine the effects of the variation. Finally, the results are presented in a suitable way, mostly as graphical representations. Such a representation usually contains the results of multiple experiments; it aims to facilitate interpretation and to support users in drawing the right conclusions from the evaluation. |
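A minimal sketch of the evaluation loop described above, assuming a toy saturation model: user-chosen parameter values are enumerated, one simulation experiment is run per configuration, and the series is compared to locate a throughput maximum. The function run_simulation, the parameter names, and the per-server service rate are invented for illustration; this is not the dissertation's actual tooling or its Cold Start Protocol.

```python
import itertools

def run_simulation(config):
    """Hypothetical stand-in for one simulation experiment on the
    evaluation model; returns the measured throughput (requests/s)."""
    capacity = config["servers"] * 50.0   # assumed service rate per server
    return min(config["arrival_rate"], capacity)  # toy saturation model

def vary_and_evaluate(param_grid):
    """Strategy skeleton: enumerate the user-determined parameter values,
    run one experiment per configuration, and compare the series."""
    names = list(param_grid)
    results = []
    for values in itertools.product(*(param_grid[n] for n in names)):
        config = dict(zip(names, values))
        results.append((config, run_simulation(config)))
    return max(results, key=lambda r: r[1])   # e.g. the throughput maximum

best = vary_and_evaluate({"arrival_rate": [10, 50, 100, 200],
                          "servers": [1, 2, 4]})
print(best)   # configuration with the highest observed throughput
```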
|
Konstantin Beck, Risiko Krankenversicherung : Risikomanagement in einem regulierten Krankenversicherungsmarkt, University of Zurich, Faculty of Business, Economics and Informatics, 2004. (Habilitation)
«Risiko Krankenversicherung» describes Switzerland's social and private health insurance using statistical methods. The authors' roots in both academia and practice lead to a unique blend of analysis and practical experience. The book follows the current questions surrounding the KVG and VVG: cost growth, fair premium calculation, solvency, optimal risk adjustment, and the development of managed care. |
|
Mathias Hoffmann, Comment on Michael D. Bordo and Thomas F. Helbling "Have National Business Cycles become more synchronized?", In: Macroeconomic policies in the world economy, Berlin, pp. 40-52, 2004. (Book Chapter)
|
|
Yong Xia, Martin Glinz, Rigorous EBNF-based Definition for a Graphic Modeling Language, In: Proceedings of the Tenth Asia-Pacific Software Engineering Conference (APSEC’03), December 2003. (Conference or Workshop Paper published in Proceedings)
Today, the syntax of visual specification languages such as UML is typically defined using meta-modelling techniques. However, this kind of syntax definition has drawbacks. In particular, graphic meta-models are not powerful enough, so they must be augmented by a textual constraint language. As an alternative, we present in this paper a text-based technique for the syntax definition of a graphic specification language. We exploit the fact that in a graphic specification language, most syntactic features are independent of the layout of the graph. So we map the graphic elements to textual ones and define the context-free syntax of this textual language in EBNF. Using our mapping, this grammar also defines the syntax of the graphic language. Simple spatial and context-sensitive constraints are then added by attributing the context-free grammar. Finally, for handling complex structural and dynamic information in the syntax, we give a set of operational rules that work on the attributed EBNF. We explain our syntax definition technique by applying it to the modelling language ADORA, which is being developed in our research group. We also briefly discuss the application of our technique to the syntax definition of UML. We close by mentioning the advantages of our method over the meta-modelling techniques. |
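The mapping idea can be illustrated with a toy grammar. The sketch below is not the ADORA grammar but a minimal stand-in: graphic elements (nodes and edges) are mapped to a token sequence whose context-free syntax follows a small EBNF, and one context-sensitive constraint, loosely in the spirit of the paper's attributed grammar, is checked alongside parsing. All rules and names are assumptions made for the example.

```python
# Toy EBNF for the textual image of a diagram:
#   diagram = node , { node | edge } ;
#   node    = "node" , ident ;
#   edge    = "edge" , ident , "->" , ident ;

def parse_diagram(tokens):
    """Recursive-descent check of the toy EBNF; returns declared nodes."""
    pos, nodes = 0, set()

    def expect(tok):
        nonlocal pos
        if pos >= len(tokens) or tokens[pos] != tok:
            raise SyntaxError(f"expected {tok!r} at position {pos}")
        pos += 1

    def ident():
        nonlocal pos
        name = tokens[pos]
        pos += 1
        return name

    while pos < len(tokens):
        if tokens[pos] == "node":
            expect("node")
            nodes.add(ident())
        elif tokens[pos] == "edge":
            expect("edge")
            src = ident()
            expect("->")
            dst = ident()
            # context-sensitive constraint: edges join declared nodes
            if src not in nodes or dst not in nodes:
                raise SyntaxError("edge refers to an undeclared node")
        else:
            raise SyntaxError(f"unexpected token {tokens[pos]!r}")
    return nodes

# textual image of a two-node diagram with one connection
print(parse_diagram(["node", "A", "node", "B", "edge", "A", "->", "B"]))
```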
|
N E Fuchs, U Schwertel, Reasoning in Attempto Controlled English, In: Principles and Practice of Semantic Web Reasoning, International Workshop PPSWR 2003, 2003-12-08. (Conference or Workshop Paper published in Proceedings)
Attempto Controlled English (ACE) – a subset of English that can be unambiguously translated into first-order logic – is a knowledge representation language. To support automatic reasoning in ACE we have developed the Attempto Reasoner RACE (Reasoning in ACE). RACE proves that one ACE text is the logical consequence of another one, and gives a justification for the proof in ACE. Variations of the basic proof procedure permit query answering and consistency checking. Reasoning in RACE is supported by auxiliary first-order axioms and by evaluable functions. The current implementation of RACE is based on the model generator Satchmo. |
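As a rough illustration of the kind of deduction RACE performs, the sketch below translates a toy controlled-English fragment (far smaller than ACE) into subject-predicate facts and rules and checks logical consequence by naive forward chaining. RACE itself works on full first-order translations and on Satchmo-style model generation, neither of which this sketch reproduces.

```python
def translate(sentence):
    """Toy translation: 'Every man is mortal.' -> rule,
    'Socrates is a man.' -> fact. Far below ACE's coverage."""
    words = sentence.rstrip(".").split()
    if words[0] == "Every":
        return ("rule", words[1], words[-1])
    return ("fact", words[0], words[-1])

def follows(axioms, query):
    """Does the query sentence follow from the axiom sentences?"""
    parsed = [translate(s) for s in axioms]
    facts = {(s, p) for t, s, p in parsed if t == "fact"}
    rules = [(p, q) for t, p, q in parsed if t == "rule"]
    changed = True
    while changed:                    # saturate: apply rules to facts
        changed = False
        for subj, pred in list(facts):
            for p, q in rules:
                if pred == p and (subj, q) not in facts:
                    facts.add((subj, q))
                    changed = True
    _, s, p = translate(query)
    return (s, p) in facts

print(follows(["Every man is mortal.", "Socrates is a man."],
              "Socrates is mortal."))   # True
```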
|
Marc Chesney, Bharat R Hazari, Illegal Migrants, Tourism and Welfare: A Trade Theoretic Approach, Pacific Economic Review, Vol. 8 (3), 2003. (Journal Article)
Many countries receive illegal migrants but are reluctant to accept them due to possible negative externalities. We provide a rationale for not policing illegal migration by linking it to the tourism industry. By paying illegal migrants less than local workers, the relative price of the non-traded goods is shown to be lower than it would be in the absence of such workers. An expansion in tourist trade, under certain intensity conditions, necessarily raises resident welfare and employment. This tourist boom necessarily lowers the welfare of the illegal migrants. It is established that an increase in tourism increases the supply of illegal migrants. |
|
Pascal Botteron, Marc Chesney, Rajna Gibson-Asner, Analyzing firms' strategic investment decisions in a real options' framework, Journal of International Financial Markets, Institutions & Money, Vol. 13 (5), 2003. (Journal Article)
Within the context of investment under uncertainty, the real options literature has led to models that primarily capture the time-to-wait flexibility of monopolistic corporations' investment decisions. In this paper, we propose an approach which relies on barrier options to model production and/or sales delocalization flexibility for multinational enterprises making decisions under exchange rate uncertainty. We then extend the model by introducing game-theoretic considerations to show how the information set and the competitive structure of the market may lead firms to act strategically and exercise their delocalization options preemptively at an endogenously fixed exchange rate barrier. |
|
Alexandre Ziegler, Alfonso Sousa-Poza, Asymmetric Information on Workers' Productivity as a Cause for Inefficient Long Working Hours, Labour Economics, Vol. 10 (6), 2003. (Journal Article)
In this paper, a model of labor contracting with asymmetric information is developed in order to explain the existence of inefficiently long working hours. Since firms cannot observe workers' true productivity, they use long working hours as a mechanism to sort productive workers. The model therefore predicts that workers with high productivity will tend to work inefficiently long hours. An empirical analysis confirms this prediction: high-productivity workers are more likely to experience hours constraints in the form of overemployment than low-productivity workers. Moreover, the extent of overemployment is positively related to productivity. |
|
Egon Franck, Beyond market power: Efficiency explanations for the basic structures of North American major league organizations, European Sports Management Quarterly, Vol. 3 (4), 2003. (Journal Article)
So far the “market power view” has been the dominant perspective of looking at the institutional setup of North American major leagues. As useful as the insights generated by this approach may be at the level of competition policy, they do not shed much light on the question of internal league organization. The reason is straightforward. A wide range of hybrid as well as all integrated structures of league organization provide an institutional infrastructure for crafting regulations aimed at the extraction of monopoly rents. A different perspective is needed in order to understand institutional choice within the range of structures, allowing for the potential abuse of market power. This paper shows that basic structures of major league organization can be explained as the result of a general attempt to increase internal efficiency in sports production. They contribute to the reduction of shirking in teams and to the protection of specific investments. |
|
Egon Franck, C Jungwirth, Reconciling rent-seekers and donators – the governance structure of open source, Journal of Management and Governance, Vol. 7 (4), 2003. (Journal Article)
Software developed and produced in open source projects has become an important competitor in the software industry. Since it can be downloaded for free and no wages are paid to developers, the open source endeavor seems to rest on voluntary contributions by hobbyists. In the discussion of this puzzle two basic patterns of argumentation stand out. In what we call rent-seeker approaches, emphasis is put on the fact that although no wages are paid to contributors, other pay-offs may turn their effort into a profitable investment. In what we call donator approaches the point is made that many people contribute to open source projects without expecting to ever receive any individual rewards.
We argue that the basic institutional innovation in open source has been the crafting of a governance structure which enables rent-seeking without crowding out donations. The focus of the presented analysis lies on the specific institutional mechanisms by which the open source governance structure manages to reconcile the interests of rent-seekers and donators. |
|
E Droste, M Kosfeld, M Voorneveld, Best-reply matching in games, Mathematical Social Sciences, Vol. 46 (3), 2003. (Journal Article)
We study a new equilibrium concept in non-cooperative games, where players follow a behavioral rule called best-reply matching. Under this rule a player matches the probability of playing a pure strategy to the probability that this strategy is a best reply. Kosfeld, Droste, and Voorneveld [Games and Economic Behavior 40 (2002) 270] show that best-reply matching equilibria are stationary states in a simple model of social learning, where newborns adopt a best-reply to recent observations of play. In this paper we analyze best-reply matching in more detail and illustrate the concept by means of well-known examples. For example in the centipede game it is shown that players will continue with large probability. |
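A hedged sketch of the concept for two-player games, assuming an even split of probability among tied best replies and simple fixed-point iteration; the authors' own formulation and any convergence guarantees are not reproduced here.

```python
import numpy as np

def br_share(payoff, opp_mix):
    """For each opponent pure strategy j (weighted by opp_mix[j]), give
    each best reply of this player an equal share of the probability."""
    p = np.zeros(payoff.shape[0])
    for j, w in enumerate(opp_mix):
        col = payoff[:, j]
        best = np.flatnonzero(col == col.max())
        p[best] += w / len(best)
    return p

def best_reply_matching(A, B, iters=500):
    """A: row player's payoff matrix, B: column player's payoff matrix.
    Iterate the matching condition; convergence is not guaranteed
    in general, so treat the result as illustrative."""
    x = np.full(A.shape[0], 1 / A.shape[0])   # row player's mix
    y = np.full(A.shape[1], 1 / A.shape[1])   # column player's mix
    for _ in range(iters):
        x, y = br_share(A, y), br_share(B.T, x)
    return x, y

# Matching pennies: both players end up mixing (1/2, 1/2).
A = np.array([[1, -1], [-1, 1]])
print(best_reply_matching(A, -A))
```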
|
David Hausheer, Design of a Distributed P2P-based Content Management Middleware, In: 29th Euromicro Conference. 2003. (Conference Presentation)
|
|
Carmen Tanner, Sybille Wölfing Kast, Promoting sustainable consumption: Determinants of green purchases by Swiss consumers, Psychology & Marketing, Vol. 20 (10), 2003. (Journal Article)
Given that overconsumption in industrial countries is a main cause of environmental degradation, a shift toward more sustainable consumption patterns is required. This study attempts to uncover personal and contextual barriers to consumers' purchases of green food and to strengthen knowledge about fostering green purchases. Survey data are used to examine the influence of distinct categories of personal factors (such as attitudes, personal norms, perceived behavior barriers, knowledge) and contextual factors (such as socioeconomic characteristics, living conditions, and store characteristics) on green purchases of Swiss consumers. Results from regression analysis suggest that green food purchases are facilitated by positive attitudes of consumers toward (a) environmental protection, (b) fair trade, (c) local products, and (d) availability of action-related knowledge. In turn, green behavior is negatively associated with (e) perceived time barriers and (f) frequency of shopping in supermarkets. Surprisingly, green purchases are not significantly related to moral thinking, monetary barriers, or the socioeconomic characteristics of the consumers. Implications for policy makers and for companies and marketers engaged in the promotion and commercialization of green products are discussed. |
|
Alexander Wagner, The efficiency of tradable permit markets: A few comments, In: 4th International Energy Symposium, London, 2003-10-01. (Conference or Workshop Paper published in Proceedings)
|
|
Fabio Rinaldi, James Dowdall, Michael Hess, Jeremy Elleman, Gian Piero Zarri, Andreas Persidis, Luc Bernard, Haralampos Karanikas, Multilayer annotations in Parmenides, In: K-CAP2003 workshop, Sanibel, Florida, USA, October 2003. (Conference or Workshop Paper)
Most of the thrust in the semantic web movement comes from the observation that existing NLP tools are not sophisticated or efficient enough to process the full richness of Natural Language, and that Machine Understandable annotations therefore need to be added to Web Resources in order to make them accessible to remote agents. However, when the target application is not required to handle a huge amount of documents, but only more limited sets, it is conceivable and practical to take advantage of NLP tools to pre-process textual documents in order to generate annotations (to be verified by human editors).
We discuss an approach based on a combination of various Natural Language Processing techniques that addresses this issue. Documents are analyzed fully automatically and converted into a semantic annotation, which can then be stored together with the original documents. It is this annotation that constitutes the machine-understandable resource that remote agents can query. |
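A crude stand-in for such a pipeline, assuming regex patterns in place of the paper's NLP components; the Parmenides annotation scheme itself is not reproduced, and the tag set below is invented for the example.

```python
import re

# Invented tag patterns standing in for real NLP analysis components.
PATTERNS = {
    "DATE": re.compile(r"\b\d{4}\b"),
    "ORG":  re.compile(r"\b[A-Z][a-zA-Z]+ (?:Inc|Ltd|GmbH)\b"),
}

def annotate(text):
    """Return stand-off annotations: (start, end, label, surface form).
    Stored alongside the original document, these are the machine-
    queryable layer (to be verified by a human editor)."""
    spans = []
    for label, pat in PATTERNS.items():
        for m in pat.finditer(text):
            spans.append((m.start(), m.end(), label, m.group()))
    return sorted(spans)

doc = "Acme Inc was founded in 1999."
for span in annotate(doc):
    print(span)
```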
|
Abraham Bernstein, Process Recombination: An Ontology Based Approach for Business Process Re-Design, SAP Design Guild, Vol. 7, 2003. (Journal Article)
A critical need for many organizations is the ability to quickly (re-)design their business processes in response to changing needs and capabilities. Current process design tools and methodologies, however, are very resource-intensive and provide little support for generating (as opposed to merely recording) new design alternatives.
This paper describes 'process recombination,' a novel approach for template-based business process re-design based upon the MIT Process Handbook. This approach allows one to systematically generate different process (re-)designs using the repository of process alternatives stored in the Process Handbook. Our experience to date has shown that this approach can be effective in helping users produce innovative process designs.
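A minimal sketch of the recombination idea, assuming a toy repository in which each process step has a set of known alternatives (loosely in the spirit of the Process Handbook's specializations); candidate re-designs are generated systematically as combinations of one alternative per step. The repository contents are invented for illustration.

```python
from itertools import product

# Invented toy repository: step -> known alternatives for that step.
repository = {
    "identify need":   ["sales call", "web form"],
    "fulfil order":    ["ship from stock", "make to order"],
    "receive payment": ["invoice", "credit card"],
}

def recombine(repo):
    """Yield every candidate process design: one alternative per step."""
    steps = list(repo)
    for choice in product(*(repo[s] for s in steps)):
        yield dict(zip(steps, choice))

for design in recombine(repository):
    print(design)   # 2 * 2 * 2 = 8 candidate process designs
```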
|
|
David Dorn, Alfonso Sousa-Poza, Why is the employment rate of older Swiss so high? An analysis of the social security system, Geneva Papers on Risk and Insurance, Vol. 28 (4), 2003. (Journal Article)
Extracts of this paper were presented at the conference "Work Beyond 60: Preparing for the Demographic Shock", 6–7 March 2003 in Vienna organized by The Geneva Association, The Club of Rome, and The Risk Institute. Parts were also presented at the Bertelsmann Foundation conference "Strategien gegen den Fachkräftemangel" in Berlin, 2 July 2002 and at the Bertelsmann Foundation conference "Reformen zur Steigerung der Beschäftigungsfähigkeit älterer Arbeitskräfte" in Berlin, 26 October 2001. The authors would like to thank the participants as well as Jaap van Dam, Thomas Liebig, Fred Henneberger, and Geneviève Reday-Mulvey for their valuable comments and discussions. Alfonso Sousa-Poza would like to thank the Swiss National Science Foundation for financial assistance. The usual disclaimer applies. |
|
Marc Chesney, Pauline Barrieu, Optimal Timing to Adopt Environmental Policy in a Strategic Framework, Environmental Modeling & Assessment, Vol. 8 (3), 2003. (Journal Article)
In this paper, the problem of the optimal timing of adopting an environmental policy in a strategic framework is considered. Using real options theory and some basic tools of game theory, we show that, under certain assumptions, a country behaving strategically should wait longer before adopting such a policy than if it behaves unstrategically or within a larger entity. Such a postponed decision is suboptimal with regard to environmental protection. |
|
Fabio Rinaldi, James Dowdall, Michael Hess, Diego Mollà Aliod, Rolf Schwitter, Kaarel Kaljurand, Knowledge-Based Question Answering, In: Knowledge-Based Intelligent Information & Engineering Systems, KES-2003, Springer, Oxford, UK, September 2003. (Conference or Workshop Paper)
Large amounts of technical documentation are available in machine-readable format; however, there is a lack of effective ways to access them. In this paper we propose an approach based on linguistic techniques, geared towards the creation of a domain-specific Knowledge Base, starting from the available technical documentation. We then discuss an effective way to access the information encoded in the Knowledge Base. Given a user question phrased in natural language, the system is capable of retrieving the encoded semantic information that most closely matches the user input, and presents it by highlighting the textual elements that were used to deduce it. |
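A very rough sketch of the retrieval step, assuming the Knowledge Base reduces to pairs of key terms and source sentences; the system's actual linguistic analysis and semantic matching are not shown, and the example entries are invented.

```python
# Invented Knowledge Base: (semantic key terms, supporting source sentence),
# distilled from the technical documentation.
KB = [
    ({"replace", "battery", "handset"},
     "To replace the battery, open the rear cover of the handset."),
    ({"reset", "handset"},
     "Hold the power key for ten seconds to reset the handset."),
]

def answer(question):
    """Return the source sentence whose key terms best overlap
    the content words of the question, if any overlap exists."""
    terms = set(question.lower().replace("?", "").split())
    score, text = max((len(keys & terms), src) for keys, src in KB)
    return text if score else None

print(answer("How do I replace the battery?"))
```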
|