Gerhard Schwabe, Marianne Valerius, Adaptive Bücher für das kooperative Lernen - Anwendungen - Konzepte - Erfahrungen, In: Engelien, M.; Homann, J.: Virtuelle Organisation und Neue Medien 2001 - Workshop GeNeMe 2001: Gemeinschaften in neuen Medien, Josef Eul Verlag, Lohmar, Deutschland, 2001. (Conference or Workshop Paper published in Proceedings)
Gerhard Schwabe, Christian Filk, Marianne Valerius, Warum Kooperation neu erfinden? Zum Beitrag der CSCW-Forschung für das kollaborative E-Learning, In: 5. Internationale Tagung Wirtschaftsinformatik, Physica, Saarbrücken, Deutschland, 2001. (Conference or Workshop Paper published in Proceedings)
Gerhard Schwabe, Koordinationswerkzeuge, In: CSCW-Kompendium: Lehr- und Handbuch zum computerunterstützten kooperativen Arbeiten, Springer, Berlin / Heidelberg, Deutschland, p. 174 - 179, 2001. (Book Chapter)
Gerhard Schwabe, Mediensynchronizität - Theorie und Anwendung bei Gruppenarbeit und Lernen, In: Partizipation und Interaktion im virtuellen Seminar, Waxmann, München / Berlin, p. 111 - 134, 2001. (Book Chapter)
Gerhard Schwabe, Theorien zur Mediennutzung bei der Gruppenarbeit, In: CSCW Kompendium - Lehr- und Handbuch zur computerunterstützten Gruppenarbeit, Springer, Berlin / Heidelberg, 2001. (Book Chapter)
Gerhard Schwabe, Bedarfsanalyse, In: CSCW Kompendium - Lehr- und Handbuch zur computerunterstützten Gruppenarbeit, Springer, Berlin / Heidelberg, 2001. (Book Chapter)
Gerhard Schwabe, Gemeinsames Material und Gruppengedächtnis, In: CSCW Kompendium - Lehr- und Handbuch zur computerunterstützten Gruppenarbeit, Springer, Berlin / Heidelberg, 2001. (Book Chapter)
Gerhard Schwabe, Electronic Communities, WISU - das Wirtschaftsstudium, Vol. 30 (2), 2001. (Journal Article)
Gerhard Schwabe, Marianne Valerius, Grundlagen des kollaborativen Lernens mit neuen Medien, WISU - das Wirtschaftsstudium, Vol. 30 (10), 2001. (Journal Article)
Gerhard Schwabe, Helmut Krcmar, Cuparla - Telekooperation im Stuttgarter Kommunalparlament, In: Inter@ktive Medien - Trends und Zukunftsperspektiven, Band 1, 2001. (Other Publication)
Engin Kirda, Harald Gall, Gerald Reif, Pascal Fenkam, Clemens Kerer, Supporting Mobile Users and Distributed Teamwork, In: 6th International Conference on Telecommunications (ConTEL 2001), Zagreb, Croatia, 2001. (Conference or Workshop Paper)
Recent years have shown a strong trend toward electronic information management in many fields. The use of office applications has generated vast amounts of digital information, and global multi-site organizations are increasingly faced with the need for advanced Information and Communication Technology (ICT) facilities for information management and distributed working. We address these requirements and problems in the MObile Teamwork Infrastructure for Organizations Networking (MOTION) project and aim at creating a highly flexible, open and scalable ICT architecture for mobile teamwork support. In this paper, we discuss key concepts, design goals and requirements we have identified to build a component-based mobile teamwork ICT architecture for complex, multi-site, multi-process organizations. We give a brief overview of the MOTION architecture and technologies and introduce the evaluation criteria for the MOTION platform.
Mark Klein, Abraham Bernstein, Searching for Services on the Semantic Web using Process Ontologies, In: The First Semantic Web Working Symposium (SWWS-1), 2001. (Conference or Workshop Paper)
The ability to rapidly locate useful on-line services (e.g. software applications, software components, process models, or service organizations), as opposed to simply useful documents, is becoming increasingly critical in many domains. As the sheer number of such services increases, it will become ever more important to provide tools that allow people (and software) to quickly find the services they need, while minimizing the burden for those who wish to list their services with these search engines. This can be viewed as a critical enabler of the ‘friction-free’ markets of the ‘new economy’. Current service retrieval technology is, however, seriously deficient in this regard. The information retrieval community has focused on the retrieval of documents, not services per se, and has as a result emphasized keyword-based approaches. These approaches achieve fairly high recall but low precision. The software agents and distributed computing communities have developed simple ‘frame-based’ approaches for ‘matchmaking’ between tasks and on-line services, which increase precision at the substantial cost of requiring all services to be modeled as frames and of supporting only perfect matches. This paper proposes a novel, ontology-based approach that employs the characteristics of a process taxonomy to increase the recall of service retrieval without sacrificing precision or computational tractability.
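The core retrieval idea sketched in the abstract - indexing services under nodes of a process taxonomy and retrieving, for a queried process, every service attached at or below that node - can be illustrated with a minimal sketch. The taxonomy, the service names, and the function names below are illustrative assumptions, not the paper's actual data or API.

```python
# Hypothetical sketch of taxonomy-based service retrieval: a service indexed
# under a process node matches a query for that node or any of its ancestors,
# which raises recall (subprocess services are found) without keyword noise.

taxonomy = {                      # child -> parent (illustrative process taxonomy)
    "sell": None,
    "sell-online": "sell",
    "sell-via-auction": "sell-online",
}

services = {                      # service name -> taxonomy node it implements
    "eShop": "sell-online",
    "AuctionHouse": "sell-via-auction",
}

def subsumes(ancestor, node):
    """True if `node` equals `ancestor` or lies below it in the taxonomy."""
    while node is not None:
        if node == ancestor:
            return True
        node = taxonomy[node]
    return False

def find_services(query_node):
    """All services whose process is subsumed by the queried process."""
    return sorted(name for name, node in services.items()
                  if subsumes(query_node, node))
```

A query for the general process `"sell"` returns both services, while the more specific `"sell-via-auction"` returns only the auction service - specificity of the query controls precision.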
Rui P. Brandao, Die Identifikation von Schwachstellen der Informationssicherheit als Ausgangspunkt für die Prävention und Erkennung geschäftsschädigenden Informatikmissbrauchs, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2001. (Dissertation)
Thomas Grotehen, Objectbase design: a heuristic approach, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2001. (Dissertation)
This thesis presents a methodology extension named MeTHOOD (Measures, Transformation Rules and Heuristics for Object-Oriented Design) that supports the design of objectbases. MeTHOOD integrates measures describing the design objectives, heuristics showing design alternatives, and transformation rules that enable design transformations. MeTHOOD makes design knowledge for conceptual objectbase schemas (conceptual object-oriented class schemas) more tangible. Although a large amount of this important knowledge is available in the literature, it is hardly usable for designers because it is scattered, on different levels of abstraction, and not integrated. Thus, the objective of this thesis is to enable a design process for efficient and continuous quality inspection and improvement of conceptual class schemas by efficiently providing design knowledge. The core of MeTHOOD is a catalogue of integrated design knowledge consisting of existing as well as new object-oriented design heuristics, transformation rules and measures. This knowledge is the basis for the iterative MeTHOOD design process, which consists of three main activities: measurement, design inspection and design transformation. Using measurement, important properties of the schema are assessed. Based on the measured values and the heuristics, the schema can be inspected systematically. The result of the inspection is a set of potential design flaws. The actual design flaws (identified by the designer) can then be eliminated using schema transformation. The result is a new schema that can be compared to the old one using new and old measures. These activities are supported by concrete measures and heuristics that act as drivers for transformations. The process is supported by a design support system called MEx (MeTHOOD Expert). MEx provides semi-automatic design monitoring, heuristic checking and design transformation using the design knowledge of the MeTHOOD catalogue.
Andreas Behm, Migrating relational databases to object technology, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2001. (Dissertation)
Tremendous changes have been taking place in information technology for a few decades. Due to the rapid evolution in this area, the demand for innovation is much higher than elsewhere. This requires large efforts from companies to respond quickly to market conditions in order to organize work and conduct business more efficiently. In particular, companies have to reengineer existing information systems in order to benefit from new key technologies such as the WWW or e-commerce. A typical scenario in many companies applying a reengineering process is that, on the one hand, a large body of data is captured in relational (or even hierarchical or network) databases, while on the other hand new object-oriented applications have to be developed. Thereby, a new object model is constructed which represents the current state of the company's business processes. However, the new object model and the model of the existing relational database usually do not go well together. In other words, a large semantic gap between the two models must be bridged.
The approach proposed in this thesis is database migration. Basically, this approach comprises two tasks. In the first task, the relational database schema is reengineered: the schema is transformed into a well-designed and intuitively understandable object-oriented schema, which the new applications can adopt. Afterwards, the data are (automatically) migrated into an object-oriented DBMS. Existing approaches to migration do not exploit the full potential of the object-oriented paradigm, so that the resulting object-oriented schema still "looks rather relational" and retains the drawbacks and weaknesses of the relational schema. Therefore, one of the goals of this approach is to support schema transformation into an adequate object-oriented schema as obtained by forward engineering, rigorously using an object-oriented design method.
In the first part of the thesis, the fundamental differences between relational and object-oriented database design are discussed. The results of this part are classified into four categories: specification of object-oriented behaviour, navigational vs. set-oriented data access, object life cycles, and deficiencies of relational database schemas. Each category represents a specific aspect in which relational database design differs principally from object-oriented design.
For the implementation of the database migration process, an intermediate data model (SOT, Semi Object Types) is proposed which allows both schema transformation and data migration to be defined. This data model contains all object-oriented modeling constructs and supports flexible schema transformation. Furthermore, an algebra is proposed for a formal definition of the data migration process.
The schema transformation process is subdivided into three sequential phases. In the first phase, the relational schema is transformed (automatically) into an SOT schema. This initial SOT schema is then redesigned, resulting in an adequate object-oriented schema. Finally, in the third phase, the resulting SOT schema is (again automatically) transformed into an object-oriented schema according to the ODMG standard. The data migration process is generated automatically for each schema transformation phase.
In order to implement schema transformation, the concept of the transformation rule is proposed. Transformation rules define elementary restructuring operations within the SOT model. A basic, extensible set of transformation rules is proposed.
Finally, a prototype has been implemented as a proof of concept.
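The notion of an elementary restructuring operation described in this abstract can be sketched in miniature. The sketch below shows one rule in the same spirit - replacing a foreign-key column by an object reference - but the dictionary representation, function name, and `ref(...)` notation are illustrative assumptions, not the thesis's actual SOT model or rule set.

```python
# Illustrative sketch of one elementary schema-transformation rule:
# a foreign-key column pointing at a target type becomes a direct
# reference attribute, moving the schema toward an object-oriented form.

def replace_foreign_key(schema, source, fk_column, target):
    """Turn `source.fk_column` (a foreign key to `target`) into a reference attribute."""
    new_schema = {name: dict(attrs) for name, attrs in schema.items()}
    attrs = new_schema[source]
    del attrs[fk_column]                      # drop the raw key value
    attrs[target.lower()] = f"ref({target})"  # add a reference in its place
    return new_schema

relational = {
    "Order":    {"id": "int", "customer_id": "int"},
    "Customer": {"id": "int", "name": "str"},
}

object_like = replace_foreign_key(relational, "Order", "customer_id", "Customer")
# "Order" now holds a direct reference to "Customer" instead of a key value.
```

A real rule catalogue would pair each such schema operation with the corresponding data-migration step, as the thesis's algebra does for the SOT model.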
Lei Yu, Agent oriented and role based business process management for computational media, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2001. (Dissertation)
With on-line ordering and electronic commerce, the Internet moves from a passive information role to an active role as a computational medium for supporting the execution of business processes. This study is motivated by this trend and it investigates agent oriented and role based business process management for computational media.
In this thesis, a conceptual framework for agent-oriented and role-based business process management has been established. We view a business process as a collection of autonomous, problem-solving agents which interact with other agents when they have interdependencies. In particular, we model a process as a set of roles and a set of conversations among roles. A protocol defines a set of rules governing role interaction in a conversation. We define roles in terms of goals, qualifications, obligations, permissions, relations to other roles, etc. Roles are assigned to agents based on the evaluation of qualifications and agents' capabilities. Once a role is assigned to an agent, the agent becomes the subject of the rights and duties specified by that role. Coordination of processes is achieved by communication among agents.
Moreover, the main constructs of the role-based process description language have been identified, and a role-based approach for developing agent-oriented business process management systems has been developed. An agent is an active entity whose state is viewed as consisting of mental components such as beliefs, capabilities, choices, and commitments. We have shown that our role model maps naturally onto these mental components.
We supplemented our conceptual framework with a practical dimension, which led us to build an agent-enhanced peer review process management system. In the case study, we have presented the standard peer review process, discussed the administrative and ad hoc peer review processes, and analyzed the Internet peer review process. Thereafter, the agent-enhanced peer review process has been developed. We have modeled the process with Role Activity Diagrams and Finite State Machines, and derived the obligations of roles and the protocol rules of conversations from them. Furthermore, a general set of messages for the agent-enhanced peer review process has been defined and a deontic specification of the protocols has been given. Finally, the peer review process management system is designed as a multi-agent system.
The work has both practical and theoretical implications. The results contribute to the study of computational media, multi-agent systems, and business process management. From this work, we have found that agent-oriented processes take a distributed, and thus more robust, flexible, and scalable, approach to business process management. Business processes and the Internet are more effective in combination than alone. Electronic commerce and many other "virtual organization" applications can easily be imagined.
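The role-assignment step described in this abstract - a role is given to an agent only after evaluating the role's qualifications against the agent's capabilities - can be sketched as follows. The class layout, the `Reviewer` role, and the capability strings are illustrative assumptions, not the thesis's actual constructs.

```python
# Illustrative sketch: a role carries required qualifications; an agent
# receives the role only if its capabilities cover them, after which the
# agent becomes the subject of that role's rights and duties.

class Role:
    def __init__(self, name, qualifications):
        self.name = name
        self.qualifications = set(qualifications)

class Agent:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = set(capabilities)
        self.roles = []           # roles this agent currently holds

def assign(role, agent):
    """Assign `role` to `agent` if the agent is qualified; return success."""
    if role.qualifications <= agent.capabilities:
        agent.roles.append(role.name)
        return True
    return False

reviewer = Role("Reviewer", {"domain-expertise", "no-conflict-of-interest"})
alice = Agent("alice", {"domain-expertise", "no-conflict-of-interest", "editing"})
bob = Agent("bob", {"editing"})
```

In the thesis's fuller framework the assigned role would also bind obligations and conversation protocols, which this sketch omits.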
Denis N. Antonioli, Compressing Java binaries: the ristretto project, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2001. (Dissertation)
This thesis presents the development of a wire format for Java applications. Whereas file formats, such as the Java class file format, nowadays put the emphasis on the ease with which they can be manipulated at the expense of the space they need, a wire format is specialized to achieve short transmission times: it puts the emphasis on the space needed and accepts a complex decoding step.
The thesis presents the principal features of code systems, such as how programs are represented, how they are executed, and how portable the resulting environments are; from these features it builds a taxonomy of code systems. The special requirements of mobile code systems are discussed in this context.
Java class files belong to the category of object files, which are files that contain compiled code. All these files have a strict syntax, and they are frequently transformed to acquire specific properties, such as a smaller size. The thesis discusses the tools and techniques used to reduce the file size.
The thesis presents for the first time detailed results of the analysis of Java applications. Besides external characteristics, such as the size and number of files, the analysis explores the content of the files.
The development of the wire format takes place in two stages. In the first stage, the format of the class file is changed in order to reduce the size of individual files. In the second stage, redundancies are detected in groups of classes and the format is adapted to take advantage of them. All these transformations are confirmed experimentally.
Finally, a measure of the distance between two classes is introduced. With this measure, data clustering algorithms can be used to define groups of classes that best compress together.
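The last idea - a distance between classes that lets clustering algorithms find groups of classes that compress well together - can be sketched in a few lines. The Jaccard-style measure, the token sets standing in for class contents, and the greedy grouping are illustrative assumptions; the thesis defines its own measure and uses proper clustering algorithms.

```python
# Illustrative sketch: classes that share many entries (here, tokens standing
# in for constant-pool contents) get a small distance and are grouped for
# joint compression; unrelated classes stay apart.

def distance(a, b):
    """1 - |shared entries| / |all entries|: 0.0 for identical, 1.0 for disjoint."""
    a, b = set(a), set(b)
    return 1.0 - len(a & b) / len(a | b)

classes = {
    "Account": {"java/lang/String", "balance", "deposit"},
    "Savings": {"java/lang/String", "balance", "interest"},
    "Logger":  {"java/io/PrintStream", "log"},
}

def group(classes, threshold=0.6):
    """Greedy grouping: join a group only if close to all its members."""
    groups = []
    for name, content in classes.items():
        for g in groups:
            if all(distance(content, classes[m]) < threshold for m in g):
                g.append(name)
                break
        else:
            groups.append([name])
    return groups
```

Here `Account` and `Savings` share most entries (distance 0.5) and land in one compression group, while `Logger` is disjoint from both and forms its own.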
Monika Krüsi Schädle, Unterschiede zwischen erfolgreichen und nicht erfolgreichen Business-Process-Reengineering-Projekten, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2001. (Dissertation)
Martin Glinz, Stefan Berner, Stefan Joos, Johannes Ryser, Nancy Schett, Yong Xia, The ADORA Approach to Object-Oriented Modeling of Software, In: Proceedings of the 13th International Conference on Advanced Information Systems Engineering, Springer-Verlag, 2001. (Conference or Workshop Paper)
Johannes Ryser, Martin Glinz, Dependency Charts as a Means to Model Inter-Scenario Dependencies, In: Modellierung 2001, GI-Edition, 2001. (Conference or Workshop Paper)