Helmut Schauer, Franziska Keller, Individualized Assessments, In: Proceedings of the ICL2004, University Press, Kassel, Germany, 2004. (Conference or Workshop Paper)
|
|
Angelica Marte, Ursula Schneider, Helmut Schauer, Transmediales Kommunikationswissen: Eine Anleitung in Zehn Geboten, LO Lernende Organisation (No. 17), 2004. (Journal Article)
|
|
Matthias Meili, Der E-Learning-Boom stockt, NZZ am Sonntag, 2004. (Journal Article)
|
|
Helmut Schauer, Langlebige Standards in einer schnelllebigen Welt, Standards in der Schulinformatik, Vol. CD-Austria (No. 5), 2004. (Journal Article)
|
|
Claudia Schlienger, Helmut Schauer, Coaching, In: CSCL-Kompendium: Lehr- und Handbuch zum computerunterstützten kooperativen Lernen, Oldenbourg, Munich, Germany, p. 219 - 228, 2004. (Book Chapter)
|
|
Marcel Brugger, Caroline Soltermann, Informationsmanagement bei technisch-organisatorischen Veränderungen, vdf Hochschulverlag AG an der ETH Zürich, Zurich, Switzerland, 2004. (Book/Research Monograph)
|
|
Simon Clematide, GermaNet und UniNet, LDV Forum, Vol. 19 (1/2), 2004. (Journal Article)
|
|
Mark Klein, Abraham Bernstein, Towards High-Precision Service Retrieval, IEEE Internet Computing, Vol. 8 (1), 2004. (Journal Article)
Online repositories are increasingly being called on to provide access to services that describe or provide useful behaviors. Existing techniques for finding these services offer low retrieval precision, returning many irrelevant matches. This article describes a novel service retrieval approach that captures service semantics via process models and applies a pattern-matching algorithm to locate desired services. |
|
Guruduth Banavar, Abraham Bernstein, Challenges in design and software infrastructure for ubiquitous computing applications, Advances in Computers, Vol. 62, 2004. (Journal Article)
|
|
Patrick Ziegler, Klaus R. Dittrich, Three Decades of Data Integration - All Problems Solved?, In: 18th IFIP World Computer Congress (WCC 2004), Volume 12, Building the Information Society, Kluwer Academic Publishers, Toulouse, France, August 22-27, 2004. (Conference or Workshop Paper)
|
|
Patrick Ziegler, Klaus R. Dittrich, User-Specific Semantic Integration of Heterogeneous Data: The SIRUP Approach, In: First International IFIP Conference on Semantics of a Networked World (ICSNW 2004), Springer, Paris, France, June 17-19, 2004. (Conference or Workshop Paper)
|
|
C. Stocker, D. Macher, R. Studler, N. Bubenhofer, D. Crvelin, R. Liniger, Martin Volk, Studien-CD Linguistik. Multimediale Einführungen und Interaktive Übungen zur Germanistischen Sprachwissenschaft, Niemeyer Verlag, Tübingen, 2004. (Book/Research Monograph)
|
|
Thomas Hodel, Harald Gall, Klaus R. Dittrich, Dynamic Collaborative Business Processes within Documents, In: Proceedings of the 22nd Annual International Conference of Communication, 2004. (Conference or Workshop Paper)
Effective collaborative business process support is essential in today’s business. In this paper, we address this aspect within documents. Text documents are often stored unsystematically in a rather confusing file structure with an inscrutable hierarchy and little access control. Business data, on the other hand, are stored systematically in databases that allow multi-user, multi-site, user- and role-specific controlled access. We store text documents in databases and exploit these database capabilities: collaborative business processes can then be defined per document or for any part of a document. In this paper, we present this dynamic collaborative business process concept and its prototype within our database-based collaborative editor. We evaluate the potential of such business processes for the quality of communication and documentation. |
|
Schahram Dustdar, Harald Gall, Roman Schmidt, Web services for Groupware in Distributed and Mobile Collaboration, In: Proceedings of the 12th Euromicro Conference on Parallel, Distributed and Network-Based Processing, 2004. (Conference or Workshop Paper)
While some years ago the focus of many Groupware systems was the support of “Web computing”, i.e. access with Web browsers, the focus today is shifting towards programmatic access to “software services”, regardless of their location and of the application used to manipulate them. Whereas the goal of “Web computing” was to support group work on the Web (browser), Web services support for Groupware aims to provide interoperability between many Groupware systems. The contribution of this paper is threefold: (i) a framework consisting of three levels of Web services for Groupware support, (ii) a novel Web services management and configuration architecture that aims to integrate various Groupware systems in one overall configurable architecture, and (iii) a use case scenario and a preliminary proof-of-concept implementation example. Our overall goal in this paper is to provide a sound and flexible architecture for gluing together various Groupware systems using Web services technologies. |
|
Michael Fischer, Harald Gall, Visualizing Feature Evolution of Large-Scale Software based on Problem and Modification Report Data, Journal of Software Maintenance and Evolution: Research and Practice, Vol. 16 (6), 2004. (Journal Article)
Gaining higher-level evolutionary information about large software systems is a key challenge in dealing with increasing complexity and architectural deterioration. Modification reports and problem reports (PRs) taken from systems such as the concurrent versions system (CVS) and Bugzilla contain an overwhelming amount of information about the reasons and effects of particular changes. Such reports can be analyzed to provide a clearer picture of the problems concerning a particular feature or a set of features. Hidden dependencies of structurally unrelated but over time logically coupled files exhibit a good potential to illustrate feature evolution and possible architectural deterioration. In this paper, we describe the visualization of feature evolution by taking advantage of this logical coupling introduced by changes required to fix a reported problem. We compute the proximity of PRs by applying a standard technique called multidimensional scaling (MDS). The visualization of these data enables us to depict feature evolution by projecting PR dependence onto (a) feature-connected files and (b) the project directory structure of the software system. These two different views show how PRs, features and the directory tree structure relate. As a result, our approach uncovers hidden dependencies between features and presents them in an easy-to-assess visual form. A visualization of interwoven features can indicate locations of design erosion in the architectural evolution of a software system. As a case study, we used Mozilla and its CVS and Bugzilla data to show the applicability and effectiveness of our approach. |
|
7th International Workshop on Principles of Software Evolution, Edited by: Katsuro Inoue, Tsuneo Ajisaka, Harald Gall, Kyoto, Japan, 2004. (Proceedings)
|
|
Niklas Auerbach, Anonymous digital identity in e-government, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2004. (Dissertation)
The ongoing implementation of e-government has led many governments to consider issuing digital identity cards. This thesis focuses on the impact of digital identity cards on the citizen’s privacy. Potential privacy threats are discussed and countermeasures that enhance privacy are proposed. We advocate that digital identity should not be based solely on elements that disclose a citizen’s identity. Instead, this thesis proposes a concept for digital identity cards that includes an anonymous component; this approach differs from the approach taken by current projects for digital identity cards. We propose a concept that comprises pseudonymous credentials as part of the citizen’s digital identity. We discuss current implementations of pseudonymous credential systems and consider problems resulting from their implementation in resource-restricted smart card environments. We discuss requirements for the use of credentials as part of the citizen’s digital identity, the infrastructure necessary to support pseudonymous credentials, and conceptual issues that must be addressed for a deployment of credentials, such as the choice of credential system, devices for the secure storage of credentials, and the non-transferability and revocation of digital credentials. An architecture is proposed that supports the use of this extended form of digital identity, and we discuss barriers that must be overcome on the way to implementation. With the ongoing migration towards digital identity cards, we expect that privacy will become an issue of growing importance. This thesis contributes to the discussion on privacy in the domain of e-government and proposes anonymous services based on pseudonymous credentials as a means to alleviate potential privacy problems related to the use of electronic identity cards. |
|
Peter Lukas Weibel, Simulative performance evaluation for the design of distributed systems, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2004. (Dissertation)
Performance evaluations have mostly been measurements to determine the processing speed of a system or component. For distributed systems, performance is often tested only when the system is used in a test environment or even in the production environment. Only then are real usage scenarios, real amounts of data, and real effects of workload and disturbances present, making measurements realistic. Many modern approaches allow the realization of all kinds of design conceptions for distributed systems, but only few of them seriously consider the performance aspect. In this thesis we present an approach that allows statements about the usefulness and consequences of design conceptions for a system from the performance perspective even before the system has been realized or changed. The intention is a complement to systems design, not an examination after completion of a system’s realization. The core of our approach is an evaluation process that is closely integrated with the design process for a distributed system. The design model created there is translated into an evaluation model to be examined. The aim is to allow statements about resource usage, response time, and other indicators of the system’s performance in order to find out whether the chosen system architecture can satisfy the requirements; different usage scenarios can be used for this. Once an evaluation model is created, evaluation strategies are applied to gain knowledge about its performance. We present different strategies in this dissertation. The so-called Cold Start Protocol, for example, is a simple strategy to efficiently determine a throughput maximum for simple cases. More complex strategies have to be applied if the system usage is complex; they typically rely on the simpler strategies for their own realization. The strategies are the core of our research. We use them to test hypotheses and to perform learning processes. They allow an evaluation system to execute standard tasks of performance evaluation without necessarily being controlled by an expert. A tool implementing these strategies gives designers a means to examine their design decisions by executing an evaluation, and even to compare alternatives directly; even simple examinations of scalability are possible with this approach. The strategies are realized by varying specific parameters of the evaluation models; the variations refer to user-determined model parameters. The strategies determine individual configurations, for each of which a simulation experiment is executed. From the resulting simulation series, the strategies can determine the effects of the variation. Finally, the results are presented in a suitable way, mostly as graphical representations that usually contain the results of multiple experiments. This presentation aims to facilitate interpretation and to support users in drawing the right conclusions from the evaluation. |
|
Konstantin Beck, Risiko Krankenversicherung : Risikomanagement in einem regulierten Krankenversicherungsmarkt, University of Zurich, Faculty of Business, Economics and Informatics, 2004. (Habilitation)
«Risiko Krankenversicherung» describes Switzerland’s social and private health insurance using statistical methods. The authors’ grounding in both research and practice results in a unique blend of analysis and practical experience. The book addresses current questions concerning the KVG and VVG: cost growth, fair premium calculation, solvency, optimal risk adjustment, and the development of managed care. |
|