Marco Prestipino, Gerhard Schwabe, Wissensintensive Dienstleistungen im Internet: Teledienstleistungen und kollaborative Wissensgenerierung, WISU - das Wirtschaftsstudium, Vol. 31 (5), 2003. (Journal Article)
|
|
Gerhard Schwabe, Ratsinformationssysteme, In: Verwaltungslexikon, Nomos, Baden-Baden, p. ?, 2003. (Book Chapter)
|
|
Gerhard Schwabe, Groupware, In: Taschenbuch der Wirtschaftsinformatik und Wirtschaftsmathematik, Verlag Harri Deutsch, Frankfurt am Main, Deutschland, p. 13 - 16, 2003. (Book Chapter)
|
|
Claudio Nodari, Denise Da Rin, Standards in interkultureller Kommunikation, Schweizerische UNESCO-Kommission, Sektion Bildung und Gesellschaft, Bern, Switzerland, 2003. (Book/Research Monograph)
|
|
Denise Da Rin, E-Learning - The new generation of workforce training? An empirical study on computer supported collaborative learning in workforce training, In: 21st World Conference on Open Learning and Distance Education, Hong Kong, 2003. (Conference or Workshop Paper)
|
|
Angelica Marte, Kerstin Michelbacher, Claudia Schlienger, Aspekte virtueller Gruppenkommunikation im Ausbildungskontext, In: Schrift-basierte, asynchrone Kommunikation in Virtuellen Konferenzen: Potentiale, Grenzen und Einsatzbereiche, Frankfurt, Deutschland, 2003. (Conference or Workshop Paper)
|
|
Abraham Bernstein, Foster Provost, Scott Clearwater, The Relational Vector-space Model and Industry Classification, No. IFI-2008.0005, Version: 1, 2003. (Technical Report)
This paper addresses the classification of linked entities. We introduce a relational vector-space (VS) model (in analogy to the VS model used in information retrieval) that abstracts the linked structure, representing entities by vectors of weights. Given labeled data as background knowledge/training data, classification procedures can be defined for this model, including a straightforward, “direct” model using weighted adjacency vectors. Using a large set of tasks from the domain of company affiliation identification, we demonstrate that such classification procedures can be effective. We then examine the method in more detail, showing that as expected the classification performance correlates with the relational autocorrelation of the data set. We then turn the tables and use the relational VS scores as a way to analyze/visualize the relational autocorrelation present in a complex linked structure. The main contribution of the paper is to introduce the relational VS model as a potentially useful addition to the toolkit for relational data mining. It could provide useful constructed features for domains with low to moderate relational autocorrelation; it may be effective by itself for domains with high levels of relational autocorrelation, and it provides a useful abstraction for analyzing the properties of linked data.
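The "direct" model described in the abstract can be illustrated with a minimal sketch: an entity is represented by its weighted adjacency vector, and each labeled neighbour casts a link-weighted vote for its class. All names and weights below are hypothetical, and this is an illustration of the idea, not the authors' implementation.

```python
from collections import defaultdict

def classify_direct(entity, adjacency, labels):
    """Direct relational VS classifier: score each class by the
    link-weighted vote of the entity's labeled neighbours."""
    scores = defaultdict(float)
    for neighbour, weight in adjacency.get(entity, {}).items():
        if neighbour in labels:                 # only labeled neighbours vote
            scores[labels[neighbour]] += weight
    if not scores:
        return None                             # no labeled neighbours at all
    return max(scores, key=scores.get)

# Toy company-affiliation example (entities and weights are invented)
adjacency = {
    "AcmeCorp": {"BetaBank": 0.9, "GammaSteel": 0.2},
}
labels = {"BetaBank": "finance", "GammaSteel": "manufacturing"}
print(classify_direct("AcmeCorp", adjacency, labels))  # → finance
```

With high relational autocorrelation (neighbours sharing the entity's class), this weighted vote alone can be effective, which matches the abstract's observation.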
|
|
Harald Gall, Mehdi Jazayeri, Jacek Kra, CVS Release History Data for Detecting Logical Couplings, In: Proceedings of the International Workshop on Principles of Software Evolution, 2003. (Conference or Workshop Paper)
The dependencies and interrelations between classes and modules affect the maintainability of object-oriented systems. It is therefore important to capture weaknesses of the software architecture to make necessary corrections. This paper describes a method for software evolution analysis. It consists of three complementary steps, which form an integrated approach for the reasoning about software structures based on historical data: 1) The Quantitative Analysis uses version information for the assessment of growth and change behavior; 2) the Change Sequence Analysis identifies common change patterns across all system parts; and 3) the Relation Analysis compares classes based on CVS release history data and reveals the dependencies within the evolution of particular entities. In this paper, we focus on the Relation Analysis and discuss its results; it has been validated based on empirical data collected from a Concurrent Versions System (CVS) covering 28 months of a Picture Archiving and Communication System (PACS). Our software evolution analysis approach enabled us to detect shortcomings of PACS such as architectural weaknesses, poorly designed inheritance hierarchies, or blurred interfaces of modules. |
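The core of the Relation Analysis, detecting logical couplings from co-change data, can be sketched by counting how often pairs of files appear in the same change set. This is a simplified illustration under assumed inputs (plain lists of changed files per commit), not the method as implemented against real CVS logs.

```python
from collections import Counter
from itertools import combinations

def logical_couplings(commits, min_support=2):
    """Count how often each pair of files changes in the same commit;
    pairs co-changing at least `min_support` times are candidate
    logical couplings."""
    pair_counts = Counter()
    for files in commits:
        for a, b in combinations(sorted(set(files)), 2):
            pair_counts[(a, b)] += 1
    return {pair: n for pair, n in pair_counts.items() if n >= min_support}

# Hypothetical change sets extracted from a version history
commits = [
    ["gui/view.c", "core/model.c"],
    ["gui/view.c", "core/model.c", "util/log.c"],
    ["util/log.c"],
]
print(logical_couplings(commits))  # {('core/model.c', 'gui/view.c'): 2}
```

Files that repeatedly change together without any structural dependency are the kind of hidden coupling that points at architectural weaknesses or blurred module interfaces.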
|
Schahram Dustdar, Harald Gall, Manfred Hauswirth, Software-Architekturen für Verteilte Systeme, Springer, 2003. (Book/Research Monograph)
|
|
Yuan Xiangru, An adaptable approach for integrity control in federated database systems, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2003. (Dissertation)
In database federations the integrity problem arises from the degree of heterogeneity and autonomy of the participating component database systems. This makes integrity control more complicated than in traditional centralized database systems. Semantic integrity has to be considered in two phases: constraint federation and constraint enforcement. Otherwise, the administrators of component database systems might not agree to join the federation, or some or even all component databases may end up in an inconsistent state when an update is executed.
The semantic integrity of a federated database comprises representation integrity and enforcement integrity. Representation integrity means that the constraint definitions of the federated database schema correctly reflect the characteristics of the schema integration. Enforcement integrity means that the federated database state remains consistent with the constraints defined over it when an update is executed. In this thesis, we propose a new component to preserve the semantic integrity of a federated database.
Firstly, we specify an integrity control component that forms an extensible part of an FDBMS. A set of modules within a reference architecture is given; different degrees of evaluation autonomy are distinguished; and a coupling principle is developed to resolve conflicts between the evaluation state of component constraints and that of global ones.
Secondly, we develop integrity control enforcement policies for the federation. We show that this policy model is adaptable and flexible enough to represent the characteristics of the federation and the application requirements of a constraint enforcement plan. Thirdly, we define an integrity constraint type model that acts as a canonical model: it represents component constraints as integrity constraint types. Based on this model, federation users can translate component constraints and then integrate them to form global ones. These two models, taken together, preserve representation integrity during the constraint federation phase.
Fourthly, we give a two-step approach to enforcing constraints. It consists of two parts: type-based two-phase evaluation and policy-based two-phase commitment. It maintains enforcement integrity during the constraint enforcement phase.
Finally, the feasibility of the proposed component is demonstrated by a prototype called MIGI, which also takes spatial constraint federation and spatial constraint enforcement into account. |
|
Daniela Damm, Eine IS-Plattform zur Unterstützung kooperativer interorganisationaler Netzwerke, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2003. (Dissertation)
Markets, and as a consequence the requirements on enterprises, have changed in the past years. Nowadays companies have to deal with global markets, shorter product lifecycles and the need for more customer orientation. The increased dynamics of innovation, together with the proliferation of knowledge, force enterprises to concentrate on their core competencies and to consider outsourcing further activities. The functional construction of cooperative inter-organizational networks has become a success factor for an enterprise, especially for small and medium-sized enterprises. Inter-organizational cooperation offers the possibility to join the organizational knowledge and abilities of the involved parties and to expand both their business areas and their market access. These cooperative networks enable enterprises to merge efforts, to distribute their products not only faster but also further, and to establish higher trade barriers for competitors. Several new cooperative forms have been developed to allow the necessary amount of flexibility (e.g. virtual organizations, communities of practice). Distributed collaborations are often not possible without the use of information and communication technologies (ICT). To achieve efficient cooperation, ICT have to be adjusted to the special requirements of the particular inter-organizational cooperation. In order to allow a systematic elicitation of the ICT requirements resulting from the specific organizational structure of an inter-organizational cooperation, a classification frame for inter-organizational network organizations is developed in this thesis. Based on the identified classification criteria, generic ICT functionalities for the support of the business partners are specified. During the cooperation process the way of collaboration and the relationship between the partners may change.
It is necessary to know the single stages of the cooperation to determine the participants' particular needs for information and coordination. Thus, one goal of this work is to define a common model of the inter-organizational cooperation process and the main phases within this process. The tasks and responsibilities identified for each phase offer a better understanding of the needs of cooperative environments. Based on the obtained results, overall ICT requirements for inter-organizational cooperations are determined. Finally, both the structural-organizational and the process aspects of the requirements elicitation are combined, and specific components of an IS architecture are determined to offer general support for cooperative network organizations. This work focuses mainly on three components: the translator, the coordinator and the evaluator. Enterprises offer their services on the platform in the form of service descriptions. There are several industrial service description standards, such as ebXML, RosettaNet and XPDL, that can be applied for this purpose. Although these standards capture similar concepts, they rely on diverging interpretations of processes and their components. In order to give enterprises standard-independent access to the platform, both to publish their services and to search for service partners, the platform includes a translator component. This translator offers a semantically correct translation between two or more service description standards based on the semantic similarities between them. These similarities can be determined automatically by using formal ontologies. Once the semantic similarities are detected, a translation structure between two standards is built. This translation structure contains the necessary information to translate a service-based document from one standard to another.
The translation implemented by the translator component offers a complete mapping of the information contained in the source document to the target standard, and the generated target document is correct with respect to the target standard's specification. Using the translator component, the heterogeneity between service offer and request can be resolved, and thus both the service selection and the service enactment can be supported. The coordination component implements a service-based, mediator-oriented coupling of the internal processes of the enterprises participating in the cooperation. Several interaction protocols are developed in order to support the interaction between business partners during the cooperative service enactment. Using standardized e-service technologies, enterprises can combine their services without time-consuming and expensive adaptation of their internal IS. To preserve the experiences gained in the evaluation step of the termination phase, the platform offers an evaluation component. The evaluation component allows the explicit description of an enterprise's reputation in the form of a culturally independent rating and can thus be used especially for supporting distributed cooperations. The evaluation itself is supported by a standardized questionnaire. Based on given criteria, enterprises can be categorized, which allows a more fine-grained search for partners within the network platform. For the specification of the components of the network platform, use cases, scenarios, interface prototypes, code fragments and class models are specified. These models provide an implementation-focused description of the platform's functionality that can be used as a basis for an implementation of the platform. |
|
Susanne Röhrig, Using Process Models to Analyse IT Security Requirements, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2003. (Dissertation)
As more enterprises start to transfer their business processes to electronic media, the need for appropriate information security arises. In today's business the security of information systems is not only necessary to ensure business continuity and to protect one's assets from harm; some authors even state that security is the enabler to do business at all. Unfortunately, it is difficult to specify which measures to take in order to achieve appropriate security. Security reviews or analyses to that aim are usually both tedious and expensive, because they require knowledge of the IT systems as well as of the business processes around them, even though the latter are generally not regarded explicitly. The idea pursued in this thesis is, therefore, to examine business process descriptions in order to analyse IT security. The thesis investigates the relations between business process reengineering (BPR), or general process modelling, and security analysis; mutual influences are explained and synergies are pointed out. Starting from this analysis, the so-called POSeM method, the main contribution of this thesis, was developed. The abbreviation POSeM denotes Process Oriented Security Models. These models are used to select appropriate security measures for a pre-defined business process. In four steps, the security objectives for the business process in general are converted into a catalogue of appropriate safeguards, i.e., safeguards that have to be implemented in order to achieve the security objectives defined before. Two rule bases are used: one to ensure the consistency of the defined security objectives and another to derive appropriate security safeguards. Both can be configured to suit the user's security needs. The first step of POSeM examines the security objectives of the enterprise and the business process in general.
During a second step these security objectives are broken down into security levels that are assigned to all parts of the business process for the protection objectives confidentiality, integrity, availability and accountability. These levels are defined on a discrete, ordered scale consisting of the values none, low, medium, high and very high. Furthermore, components can be assigned types that later influence the security measures to be implemented for them.
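The ordered scale of security levels lends itself to a small sketch. The five level names come from the abstract; the consistency rule shown is an invented illustration of the kind of check a configurable rule base could perform, not the thesis's actual rule.

```python
# Ordered security-level scale (values taken from the abstract)
LEVELS = ["none", "low", "medium", "high", "very high"]
RANK = {name: i for i, name in enumerate(LEVELS)}

def consistent(component_level, process_level):
    """Illustrative rule (hypothetical): a component must be protected
    at least as strongly as the process it belongs to requires."""
    return RANK[component_level] >= RANK[process_level]

print(consistent("high", "medium"))    # True
print(consistent("low", "very high"))  # False
```

A rule base of such comparisons over the assigned levels is one plausible way to flag inconsistent security objectives before safeguards are derived.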
After explaining the method and its goals in general, a formal description of its rules and constraints is given, and its implementation is explained and illustrated. To prove the method's applicability, it is exercised on an example from the e-business sector (the use of a content management system) and on a detailed example process from the health care sector. The last chapter discusses the method's use cases and its advantages compared to "classical" methods, as well as further research directions. |
|
Patric Lienhard, Information Supply Chain: Handlungs- und ursachenbezogene Informationsversorgung durch Collaboration Workflow-Unterstützung, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2003. (Dissertation)
The information managers receive today about their business activities, most of which has a financial or historical nature, is no longer sufficient for the effective management of their companies. They require information which allows them to quantify and measure the fulfillment of qualitative goals within their organization, such as customer satisfaction, responsiveness, flexibility or quality. Furthermore, the information they receive needs to identify the impact of changes within the business process on the financial results (e.g. revenue, production cost, etc.) of the past, but more importantly the impact these changes have on the future. To fulfill these business requirements, a "Logbook of the collaborative production" is proposed. The logbook records the cause-effect chains along the business processes and is to be seen as an extension of a Workflow Management System (WFMS). Instead of logging only the state transitions of business process objects (the WFMS focus), the logbook records the state transitions of all business objects (customer, product, offer, order, etc.) that are transformed by a business process. By assigning these state transitions along the business process, cause-effect chains in business can be explained.
The required information needs to be structured in such a way that it can be used immediately to perform or fulfill a given task. Because a WFMS can produce metadata on the context of an act in the business process, business information can be captured and delivered automatically in that context. For example, the user of the information system does not need to enter a search query, because the WFMS can derive the query automatically from the context attributes of the workflow step. Moreover, the user does not need to store documents manually in the right context, because the WFMS knows the context and can automatically save a document in the context of the user's act. The "Logbook of the collaborative production" is used to log the collaborative acts and the triggered transformations of the business objects. A state transition of a business object is physically represented by a set of data generations and modifications that are fired by a workflow-act event. This work tries to demonstrate, from a business point of view, how the correlations between a workflow step (activity) and the resulting state transformation of the given business object can be captured in the logbook. In this case, the state transformation comprises a specific set of new and updated information (data), which is a consequence of the workflow step taken.
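The idea of deriving a search query from a workflow step's context attributes can be sketched as follows. All attribute names here are hypothetical; the point is only that the query is assembled from metadata the WFMS already holds, so the user never types it.

```python
def query_from_context(step):
    """Build a key-value search query from a workflow step's context
    attributes (attribute names are invented for illustration)."""
    terms = {k: v for k, v in step.items()
             if k in ("customer", "order", "activity")}
    return " AND ".join(f"{k}:{v}" for k, v in sorted(terms.items()))

# A hypothetical workflow step with its context metadata
step = {"customer": "C-4711", "order": "O-0815",
        "activity": "approve_offer", "actor": "jdoe"}
print(query_from_context(step))
# activity:approve_offer AND customer:C-4711 AND order:O-0815
```

The same context dictionary could equally serve as the storage key when the WFMS files a document automatically, which is the second scenario the paragraph describes.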
To fulfill the requirements of a given business, the "Logbook of the collaborative production" is applied in combination with a Collaboration Workflow Management System that uses a generic speech-act model to specify collaborative acts in a process. The reason is that the data logged and captured within a Collaboration Workflow Management System is more expressive than that recorded by a "traditional" Workflow Management System, which in most cases can only provide information about who forwarded what to whom, and when.
This work discusses the issues of a business-act- and cause-effect-chain-focused information supply. The investigations consider these issues from a business, information system-modeling and technical perspective. |
|
André Aregger, Multichannel Portal Framework für e-Banking Applikationen, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2003. (Dissertation)
The goal of multichannel banking is the provision of banking services and products for customers via different sales channels whilst offering various means of access to a financial institute.
The user of a multichannel banking offering can decide to use one or more of the sales channels on offer, depending on the occasion, the situation and his needs, and thus conduct his daily banking business optimally and as efficiently as possible.
In this work a new application framework is presented which can be used as the basis for a multichannel infrastructure. The discussion of the fundamental problems in the development, operations and maintenance of today's banking applications leads to a new architectural model for multichannel infrastructures.
The use of XML as the central data format for data administration and of XSL for channel-specific formatting stylesheets opens the way for the development of promising application scenarios, e.g. multichannel-based banking and brokerage applications. Areas of application with especially high potential are the banking domains of account information, payments and securities trading. Up until now it has only been possible to satisfy the intensely information-related, geographically dependent and time-critical character of banking services and processes in the financial services sector through the development of proprietary applications. The use of integrated, multichannel-based banking infrastructures can make a significant contribution to the development of more efficient and more flexible solutions.
A portal framework is presented which enables the integration of various Internet based end-devices. The innovative system architecture, the portable screen layouts and concepts for screen design as well as the various user interfaces for the different end-devices are presented and discussed. In addition, the end-devices are grouped into categories and examined as to their suitability for various customer segments. In the description of the prototype system, it is shown how end-device specific style sheets in a 4-tier architecture can be dynamically deployed at run-time to generate the required target format for the specific end-devices. The various client applications, methods and techniques of implementation are discussed in order to postulate adequate recommendations for multichannel management. Aspects of usability are demonstrated by means of the implemented prototypes and the problems of specific end-devices are discussed. Finally, a possible approach to a portal application is shown which allows the integration of cross-media information and applications into a multichannel portal framework.
The close relationship of the prototype application to financial services providers allows the design of new architectural models and user interfaces for multichannel applications for financial institutes based on the results of this work.
|
|
Bianca von Bredow, Sichere Online-Transaktionen im Bereich von Electronic Government : ein Analyse- und Gestaltungsrahmen, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2003. (Dissertation)
|
|
Henrik Stormer, Ein flexibles und sicheres agentenbasiertes Workflow Management System, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2003. (Dissertation)
The research areas of agents and workflow management systems were analysed almost independently in the past. Agents offer solutions to a number of problems in workflow management systems. It therefore has to be examined in which fields the use of agents in workflow management is promising and what a solution could look like.
Traditional workflow management systems are monolithic, i.e. they have a central server that controls the running processes. In a system with hundreds of clients this server must deliver very high performance. One problem is the reliability of the server: if the server crashes, the whole system stops. Another problem is the connection between the workflow system and other systems. Especially in large companies, a number of different systems are in use, and for each system a special interface must be implemented, which is a time-consuming task.
This thesis presents a solution to the problems mentioned above with AWA, an agent-based workflow architecture. The general idea of AWA is to use mobile agents to control the workflow. This is done by defining specialised agents that work in a team. For each new process instance, a dedicated agent is created that controls the overall process by creating further agents.
One focus of this work is security. On the one hand, all agents running on a system can be attacked from that system; on the other hand, a company must protect a system not only from outside but also from inside attacks. Therefore, an editor is presented that can be used to model security rules graphically.
A detailed description of the prototype AWA/P, which implements the AWA architecture, concludes this thesis. |
|
Helmut Kneer, Extended service level management for the provisioning of streaming internet services, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2003. (Dissertation)
The global Internet is widely used to trade products, digital contents, and services. A new form of service over the Internet, also referred to as E-Commerce service, is the delivery of real-time multimedia streams on demand. The service of providing streamed information comprises the reliable transmission of single data packets from the E-Commerce Service Provider (ESP) to the end-customer via several intermediate Internet Service Providers (ISP). Reliable service delivery implies the transmission of data with specifications for the Quality of Service (QoS), hence called service provisioning. A big problem for service provisioning on today’s best effort Internet is the lack of traffic prioritization mechanisms to guarantee QoS and the fact that many ISPs are involved in the delivery process. There are approaches to enable QoS-based end-to-end communication on different levels. On the communication level, the problem is approached rather technically by implementing new Internet Protocol Architectures. Generic business models are developed on a business level, while Service Level Management (SLM) provides a framework on the contracting level to specify, negotiate, stipulate, and monitor quality and quantity of service levels between service provider and service customer. Since all of these approaches do not enable service provisioning of streaming Internet services independent from each other, we have designed and implemented a holistic approach that connects all three levels of abstraction. On the business level, we have developed an Integrated Business Model for streaming Internet services with a focus on charging and accounting for the data transport. Our Integrated Business Model includes a role model, a service model, and a business process model, which combines tasks and roles during different phases of the service delivery. 
On the contracting level, we have enhanced the concepts of the general SLM to form the Extended Service Level Management, which translates business needs on the business level into infrastructure requirements on the communication level. Our Extended Service Level Management is based on the specification and stipulation of Service Level Agreements (SLA) and Operational Level Agreements (OLA) among all the involved business entities, and it provides architectures for negotiating, trading, and monitoring these service contracts. |
|
Hans Fritschi, A component framework to construct active database management systems, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2003. (Dissertation)
Currently active database systems (ADBMSs) are principally conceived as rather tightly integrated software systems. They are either realized as a part of monolithic DBMSs, or the active database mechanisms are provided as a layer that resides on top of traditional DBMSs. As these modules still can be coupled tightly with the underlying DBMS, the latter approach does not ensure that the respective modules are adapted easily to other DBMSs.
Providing active database mechanisms as individual and customizable database services would therefore open the opportunity to use them in a variety of ways and environments. In that sense, this thesis investigates the systematic provision of sophisticated active mechanisms in database or database-related environments and proposes an engineering approach to construct active database systems in a cost-effective way.
In a first stage the basic concepts of active database management systems as well as the foundations of (Active) DBMS construction are discussed. These investigations enable the conclusion that the principal approach to be devised in this thesis relies best on decomposing ADBMSs into reusable components that are recombined later on into specific active database services.
In a next step, concise meta models to describe the software components and software architectures are elaborated, followed by the definition of specific reuse-oriented software processes to take ADBMSs apart into components and to build active database services out of these components. Subsequently, a reference architecture underlying the prospective active database services is devised by applying specialized architecture design techniques. The reference architecture is specified formally by means of an architecture definition language.
Techniques to transform the reference architecture into actual software components are conceived afterwards. The procedure consists of a method to specify components in an implementation-independent way, a technique to generalize them systematically and a process to develop the components with a chosen component infrastructure. In order to recombine the components into a coherent ensemble a method to specify the prospective active database service is devised as well as a schema to classify components and specific tools that assist the software engineer in the assembly of an active database service.
Finally, a prototype has been implemented as a proof of concept.
|
|
Konstantin Beck, Urs Käser-Meier, Die Krankheitskosten im Todesfall: eine deskriptiv statistische Analyse, Managed Care : Schweizer Zeitschrift für Managed Care und Care Management, Vol. 7 (2), 2003. (Journal Article)
The costs incurred immediately before death are a frequent topic in the health policy debate. This article tests various hypotheses on the course of these costs and on their relationship with various characteristics of the deceased. It is based on a statistical analysis of 14,944 deaths. |
|
John C Edwards, Kathleen G Rust, William McKinley, Gyewan Moon, Business ideologies and perceived breach of contract during downsizing: the role of the ideology of employee self-reliance, Journal of Organizational Behavior, Vol. 24 (1), 2002. (Journal Article)
This paper represents an initial effort to explore the empirical relationship between business ideologies and perceptions of organizational downsizing. The results of four studies, two conducted in the US and one each in Singapore and Korea, suggest that respondents' belief in the ideology of employee self-reliance reduces the degree to which they perceive layoffs as a breach of the psychological contract. This finding appears to generalize to respondents' perceptions of their own layoffs and also to respondents' perceptions of layoffs happening to others. We spell out the implications of these results for the evolving theory of the ideological foundations of perceptions of downsizing. |
|