Contributions published at Department of Informatics (Burkhard Stiller)
Norbert E. Fuchs, Uta Schwertel, Sunna Torge, A Natural Language Front-End to Model Generation, Journal of Language and Computation, Vol. 1 (2), 2000. (Journal Article)
Stefan Joos, Adora-L - Eine Modellierungssprache zur Spezifikation von Software-Anforderungen, Universität Zürich, Institut für Informatik, Wirtschaftswissenschaftliche Fakultät, 2000. (Dissertation) The scope of this work is the development of a specification language (Adora-L) intended to describe software requirements and architecture in a single object-oriented framework. This work is motivated in two ways: first, by the severe weaknesses of existing methods in terms of system decomposition; second, by general ideas about specification such as object-orientation and the use of hierarchical models. The general goal is a comprehensive specification which describes requirements and architecture in an understandable, clear and structured way - even for large-scale specifications. As already mentioned, the basic idea of the specification language Adora-L is to model the aspects of data, functionality and behaviour in a single hierarchical object framework. Modeling is based on objects (so-called abstract objects) instead of classes. Thus, we resolve modeling anomalies that occur in class models. Additionally, modeling with abstract objects is easier, more understandable and more precise than modeling with classes. Whole-part hierarchies are a key feature of Adora-L. Systems are decomposed into objects which are components of other objects; these components are first-class objects with full object semantics. All aspect descriptions (such as descriptions of behaviour, structure or functions) use this primary structure; all aspects are integrated and represented in this single structure. In particular, the behaviour description is based on the statechart mechanism of Harel [Harel87] and therefore supports integrated behaviour modeling. Providing powerful abstraction mechanisms is crucial for managing and understanding large-scale specifications in particular. System decomposition through whole-part hierarchies has proven to be a convenient and powerful abstraction mechanism: it allows aspects like system structure or behaviour to be described on different levels of abstraction. The use of abstractions is a fundamental precondition for managing complex problem descriptions. Adora-L is primarily a graphical language: a graphical notation is used to represent the basic structure of a system, while detailed descriptions are represented textually. Another key feature of Adora-L is the modeling of requirements with a variable degree of formalism. This enables the developer to adjust the description of requirements to cost and risk factors; it is thus up to the developer to model different aspects or parts of the system with an arbitrary degree of detail.
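To make the whole-part idea concrete, here is a minimal Python sketch of decomposition with abstract objects whose components are themselves first-class objects. All class, attribute, and example names are invented for illustration; Adora-L itself is a graphical/textual specification language, not a programming API.

```python
# Minimal sketch of whole-part decomposition with abstract objects.
# All names are hypothetical illustrations, not Adora-L syntax.

class AbstractObject:
    """An abstract object that may contain other objects as parts."""

    def __init__(self, name, states=None):
        self.name = name
        self.states = states or []   # behaviour sketched as named states
        self.parts = []              # the whole-part hierarchy

    def add_part(self, part):
        """Add a component; components are first-class objects themselves."""
        self.parts.append(part)
        return part

    def describe(self, depth=0):
        """Print the hierarchy, one level of abstraction per indent step."""
        print("  " * depth + self.name)
        for part in self.parts:
            part.describe(depth + 1)

# A toy specification: a library system decomposed into parts.
library = AbstractObject("Library")
catalog = library.add_part(AbstractObject("Catalog"))
loan_desk = library.add_part(AbstractObject("LoanDesk",
                                            states=["idle", "lending"]))
library.describe()
```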
Anca Vaduva, Rule development for active database systems, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2000. (Dissertation) Active database management systems promise to provide an effective integration of database concepts with the rule paradigm. Their strength resides in the centralized representation of real-world semantics in the form of rules, instead of hiding and replicating them in application programs. However, despite their incontestable advantages, active database management systems are not widely used in practice. One of the reasons is the lack of support for application development. This thesis analyzes specific needs for support and proposes solutions, finally materialized as tools, for assisting the process of active application development. First, we provide a comprehensive overview of the life-cycle of active applications, focussing on the development of rules. Among the considered phases, we stress rule verification and validation, which have to cope with critical problems typical of rules, such as rule conflicts. In this context, we present a novel approach to termination analysis that significantly improves the accuracy of existing methods: by considering composite events, more precise results can be achieved for avoiding nontermination of rule execution. The presented solution is essential for the termination analysis of expressive rule languages, as provided by many advanced active DBMS. Another contribution of this thesis is in the area of rule testing. We present a new approach for dealing with rule-specific problems that have not been addressed until now; in particular, our work focuses on determining the existence of defects caused by conflicts and dependencies between rules. Finally, we introduce and evaluate a set of tools to assist application developers during their work. The toolset provides graphical interfaces supporting both static activities, such as rule editing, browsing and termination analysis, and dynamic activities, such as testing and debugging. Static tools are used during the specification and design of active database systems, i.e., before the execution of applications; dynamic tools assist the application developer at runtime, when the active database system is operational and rules are processed.
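Termination analysis of active rules is classically done via a triggering graph: an edge from rule A to rule B means that A's action may raise an event that triggers B, and a cycle signals potential nontermination. The sketch below illustrates only this classical idea with invented rule names; the thesis's refinement, which gains precision by analysing composite events, is not modelled here.

```python
# Minimal triggering-graph sketch: rules are nodes; an edge A -> B means
# rule A's action may raise an event that triggers rule B. A cycle in
# this graph signals *potential* nontermination. Rule names are made up.

def find_cycle(graph):
    """Return one cycle in the directed graph (dict: node -> set of nodes),
    or None if the graph is acyclic."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}
    stack = []

    def visit(node):
        color[node] = GRAY
        stack.append(node)
        for succ in graph.get(node, ()):
            if color[succ] == GRAY:                  # back edge: cycle found
                return stack[stack.index(succ):] + [succ]
            if color[succ] == WHITE:
                cycle = visit(succ)
                if cycle:
                    return cycle
        stack.pop()
        color[node] = BLACK
        return None

    for node in graph:
        if color[node] == WHITE:
            cycle = visit(node)
            if cycle:
                return cycle
    return None

# Hypothetical rule set: updating a salary triggers an audit rule whose
# action updates the salary again -> potential nontermination.
triggering = {
    "update_salary": {"audit_change"},
    "audit_change": {"update_salary"},
    "log_access": set(),
}
print(find_cycle(triggering))
# ['update_salary', 'audit_change', 'update_salary']
```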
Marcus Holthaus, Management der Informationssicherheit in Unternehmen, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2000. (Dissertation) An Information-Oriented Examination of the Reference Object, the Subjects, Institutions, Instruments and Processes of Information Security. Implementing information security in a business environment is complex and time-consuming. Many forces and influences contribute to the positive outcome of an information security project, so it is an advantage to know as many of them as early as possible. These influences are examined and formalised in this dissertation, arranged as a framework and integrated into a process. There is one paramount prerequisite for successfully managing information security: getting top management commitment to implement information security and to free the resources needed to do so. Initially, this requires management to agree on an appropriate, uniquely formulated information security goal. Concrete information security goals can then be deduced from the general business policy. Developments in the business environment must be considered as well as internal requirements. An information security policy - informal in the beginning - begins to develop. This policy must be communicated within the enterprise as early and as consistently as possible, and management must lead by example in following it. Based on this policy, an enterprise-wide organisation must be brought to life. It is needed to initiate and support the execution of a corresponding information security project within the enterprise. This organisation, which can be led by an Information Security Delegate, has the following tasks:
* to formulate, formalise and spread the information security policy,
* to split the overall project into individual, manageable parts and subjects, e.g. along department boundaries,
* to promote the co-operation on information security activities between existing institutions inside the enterprise and their integration into business processes,
* to supply strategic and tactical methodologies and procedures for implementing information security within the individual departments,
* to co-ordinate the information security activities, most importantly when problems must be solved at a level above an individual project,
* to look after, advise and promote the individual information security projects,
* to collect know-how and pass it on,
* to supply tools for information security administration, awareness promotion etc., and
* to check on implementation and results.
Furthermore, a role model has to be defined. It should describe the information security responsibilities, functions and authorities of each individual person in the enterprise, and it must be formulated in such a way that each person fits into at least one of the roles. In the proposal formulated in this dissertation, one person per department co-ordinates the local implementation of information security (this role is called Information Security Co-ordinator). This person must guide a process covering the following activities:
* to set the boundaries of the investigation target, concerning width and depth: the width boundary is set by the express inclusion or exclusion of parts of the object investigated; the depth boundary is set by limiting the investigation to specified kinds of objects (information, hardware, software, co-workers etc.) and by restricting the requirements to be considered (availability, confidentiality, obligation etc.),
* to identify and administer protection objects, possibly collecting additional characteristics (requirements on the individual objects, important risks, object values etc.),
* to carry out a general risk analysis to identify the major risks to which the protection objects are subject,
* to carry out specific risk analyses for the closer consideration of important risks,
* to select measures to reduce the identified risks,
* to discuss and promote decisions on measures to reduce the identified risks, considering costs and setting deadlines,
* to co-ordinate the implementation of measures, and
* to check on implementation, done by the person in charge, by those affected, by the Information Security Delegate and by other parties.
These components (goal, organisation, role model, process) make up the Information Security Management Framework presented in this dissertation. A short description can be found in chapter 1; a broad view is presented at the beginning of chapter 3, and the framework is described in detail in chapters 3 to 6. The framework looks at information security as a management function and is deduced from the approaches of Rühli [Rühli85] and Heinrich [Heinrich93]. Parts I and II of this dissertation are structured according to these approaches:
* the foundation (chapter 2) identifies and defines terms and general information security goals and concepts,
* the reference object (chapter 3) defines which part of the enterprise must be selected for information security and how it must be split into parts,
* the elements of information security (chapter 4) cover institutions, motivations, specific goals and instruments,
* the activities of information security (chapter 5) describe the various information security subjects which can be applied to the parts of the reference model, and
* the information security process (chapter 6) describes the procedure in four cycles with five phases each.
A framework like this has not been described in any known approach. Nevertheless, the existing procedures, most of which have proven useful in practice, are subjected to a detailed analysis in order to identify their strengths and weaknesses. From this analysis it is deduced how a new procedure must be constituted if it is to combine the strengths and avoid the weaknesses. This new procedure, called ISIWAY 4, covers the four framework components and is defined step by step in chapter 8. ISIWAY 4 is the first of two ways in which the framework is put into concrete form. It defines the procedure for reaching information security in four cycles (hence the 4 in ISIWAY 4), called Minimal Information Security, Appropriate Information Security, Risk-Related Information Security and Comprehensive Information Security. ISIWAY 4 is subjected to the same detailed analysis as the existing procedures mentioned before, in order to explain how the requirements are fulfilled. The requirements of the new procedure are divided into the groups Initiation, Organisation, Implementation and Content. ISIWAY 4 has been designed to be adaptable and scalable. Another required property of ISIWAY is ease of use; to achieve this, the procedure must not be too complex. The full ISIWAY 4 procedure is complex, so a simpler version was designed which can be applied in smaller projects, or which may serve as an easier-to-learn introduction to the information security process. ISIWAY 1.5 is such a simplification. It is introduced in chapter 10 and is the second way described here of putting the Information Security Framework into concrete form. Additionally, this procedure is fully supported by a tool named ISIGO 1.5. This tool is presented in chapter 9 and supports the execution of each step of the ISIWAY 1.5 procedure. In addition, it covers some items of the ISIWAY 4 procedure presented in chapter 8 and is based on the structures of an overall data model, which is presented in chapter 9 under the title ISIGO CENTRAL. Thus, this dissertation offers a structured, broadly supported and detailed analysis of the information security problem field, defines an information security management framework as a general solution, contains two procedures named ISIWAY 4 and ISIWAY 1.5 which differ in complexity, and supplies a corresponding tool named ISIGO 1.5. Therefore, all components necessary to implement information security efficiently and effectively in a business environment have been presented in this dissertation. In particular, the framework developed here (part II) can be used as a basis for further work in the information security management field.
Christian Brauchle, Qualitätscontrolling für Informationssysteme : Ein prozessorientierter Ansatz zur Verbesserung des Informatikcontrolling am Beispiel einer Universalbank, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2000. (Dissertation) Information systems are used not only within companies but also from outside the company, e.g. for electronic banking or electronic commerce. In addition to quantitative requirements, these systems also have to satisfy the qualitative needs of internal users and the company's external customers. Thus, the quality of a company's information systems is an increasingly important success factor. Quality management is necessary for ensuring the success of corporate information systems. In order to permit company-wide coordination of quality management, a control function is needed which provides the relevant feedback - a quality control. Quality control for information systems concerns all quality-relevant activities of a Management Information Systems (MIS) department and co-ordinates these activities with the aim of securing the required quality of information systems effectively and efficiently. Starting with a detailed analysis of the current state of MIS, quality management, and the control of information systems, this thesis systematically develops a framework for the quality control of information systems. This framework provides better company-wide co-ordination of quality management for information systems, as well as an increase in performance for the control of information systems. Quality control makes available the specific instruments needed for the planning, implementation and operation of information systems, as well as information about the effectiveness and efficiency of quality management activities, e.g. the cost-performance ratio or process reengineering. Few of these instruments are currently in use; their future application should rely on computer-based systems, as shown in this thesis by a prototype system. The data currently available for the control of information systems are not sufficient for effective quality control. In order to analyse the functions, data, and organisational implementation of quality control, a specific architecture for information systems is used; with this architecture, processes can be clearly structured and modelled. The modelling of the target processes of quality control for information systems was carried out at a bank, as the banking sector works almost exclusively with computer data and its information systems therefore have to satisfy a high level of requirements. Functionality and quality of information systems are highly valued in this sector, which leads to stringent requirements for the quality control of information systems. The results of a case study at a large bank show that quality control of information systems is already covered by some activities of the MIS department. Furthermore, recommendations are given for the organisation and the operation of quality control at a bank. This quality control has to provide the MIS department with cross-process and cross-hierarchical information on the quality of information systems. The preconditions for producing this information are:
* communication of the requirements of external customers and internal users of information systems to all MIS departments throughout the company,
* implementation of a unique, process-wide rating system,
* implementation and periodical revision of standardised documentation, and
* the use of continuous control mechanisms.
The requirement that quality control of information systems also provide data about the effectiveness and efficiency of quality management can be fulfilled by quality cost accounting based on activity-based costing. If predefined processes are given, the cost of quality and the value of the quality achieved can be calculated easily and automatically. The quality key figures that are sometimes used have to be evaluated further; a homogeneous system of key figures for information systems quality has to be developed, e.g. to enable cross-functional performance comparisons. The use of a system based on key financial figures is suggested. Also, the use of a quality balance sheet is especially recommended if the cost and value of quality are to be completely accounted for and aggregated; this kind of balance sheet provides a quantitative result for the quality management of information systems. Emphasis must be placed on a simple and understandable implementation of such cost accounting, key figure and balance sheet systems. For companies, regardless of their sector, the result of this thesis is that they all have to implement the various elements of quality control for information systems. Differences arise from company size: for large companies it is important to implement nearly all the processes shown in this thesis; smaller companies also have to rely on quality control for information systems, but for economic reasons they are advised to implement a selection of these processes, or perhaps individual functions. Further research based on this thesis could define an implementation framework for quality control in companies, develop an integrated computer-based information system for quality control, or analyse common aspects of quality control and risk management. Any further research has to focus on the influence that economic, technical, social, and political changes have on information systems, as well as on a greater fulfillment of requirements for information systems.
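The activity-based quality cost accounting mentioned above can be illustrated with a toy calculation. All activities, categories and figures below are invented; the sketch only shows how activities of predefined processes, once tagged with a quality-cost category, can be aggregated automatically.

```python
# Toy activity-based quality cost accounting. All activities, categories
# and figures are invented for illustration; in practice the records
# would come from predefined, documented processes.

activities = [
    # (process, activity, quality-cost category, hours, hourly rate)
    ("release",  "code review",         "prevention", 12, 150.0),
    ("release",  "regression testing",  "appraisal",  20, 120.0),
    ("release",  "hotfix after defect", "failure",     8, 180.0),
    ("helpdesk", "user training",       "prevention",  6, 100.0),
]

def quality_costs(records):
    """Aggregate cost of quality per category and per process."""
    by_category, by_process = {}, {}
    for process, _activity, category, hours, rate in records:
        cost = hours * rate
        by_category[category] = by_category.get(category, 0.0) + cost
        by_process[process] = by_process.get(process, 0.0) + cost
    return by_category, by_process

per_category, per_process = quality_costs(activities)
print(per_category)  # {'prevention': 2400.0, 'appraisal': 2400.0, 'failure': 1440.0}
print(per_process)   # {'release': 5640.0, 'helpdesk': 600.0}
```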
Markus Kradolfer, A workflow metamodel supporting dynamic, reuse-based model evolution, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2000. (Dissertation) Workflow management has received great attention in recent years since it is a key technology for the implementation and automation of business processes. The basic idea in workflow management is to capture formal descriptions of business processes and to support the automatic enactment of the processes based on these formal descriptions. The focus of this thesis is on the development of a workflow metamodel that supports dynamic workflow model evolution and the reuse of workflow types. The metamodel comprises concepts to capture functional/structural, informational, behavioral, and organizational aspects of workflows. Furthermore, the metamodel includes explicit correctness criteria for the workflow model (i.e., the workflow types defined at a certain point in time) as well as for workflow instances. Based on the workflow metamodel the problem of model evolution is investigated. A workflow model cannot be assumed to be unchanged during long periods of time. Rather, the workflow model of a workflow management system, similar to the schema of a database system, has to be adapted to its changing environment, reflecting, e.g., new customer requirements and re-engineered business processes. Therefore, workflow model evolution, i.e., the modification of the workflow model over time, should be supported. Furthermore, since workflows may be of long duration and should not have to be aborted in case of model evolution, it should be possible to modify the workflow model in the presence of workflow instances. Thus, dynamic model evolution should be supported. In the approach proposed in this thesis, in contrast to most existing approaches, workflow types are not updated in place, but they are versioned. Whenever a workflow type has to be modified, a new version of the type is derived. Workflow type versioning has the advantage that workflow instances that are in accordance with the new version can be migrated to the new version, whereas the other workflow instances remain associated with the existing version. To efficiently determine whether a workflow can actually be migrated to a target version, an approach is proposed that takes into account the operations by which the versions have been derived from each other. To modify the workflow model, a set of modification operations is provided. The set includes operations to add and delete workflow types and versions, operations to change the state of versions, and operations to change the interface as well as the body of versions. Besides workflow model evolution, the issue of workflow type reuse is addressed. The different phases of the workflow type reuse process are discussed and a workflow type development process is proposed, which, in contrast to existing approaches, poses a special emphasis on the reuse of existing workflow types. In order to better support the finding of workflow types, a faceted classification scheme is used. Furthermore, the information contained in the model and in the workflow execution history is considered, since adequate information about workflow types should be available to the workflow modeler during the reuse process in order to understand and evaluate workflow types. Finally, a prototype has been implemented as a proof of concept. |
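The versioning idea can be made concrete with a small sketch. Below, a new version is derived with recorded modification operations, and an instance may migrate only if no activity it has already executed was deleted on the derivation path. This compatibility rule is a deliberate simplification of the thesis's criteria, and all names are hypothetical.

```python
# Sketch of workflow type versioning with dynamic instance migration.
# A new version is derived by recorded operations; an instance may migrate
# if none of the activities it has already executed were deleted on the
# derivation path. This rule is a simplification for illustration only.

class WorkflowVersion:
    def __init__(self, name, activities, derived_from=None, deleted=()):
        self.name = name
        self.activities = set(activities)
        self.derived_from = derived_from     # parent version, if any
        self.deleted = set(deleted)          # activities removed vs. parent

def deleted_on_path(source, target):
    """Collect activities deleted on the derivation chain source -> target."""
    removed, version = set(), target
    while version is not None and version is not source:
        removed |= version.deleted
        version = version.derived_from
    return removed

def can_migrate(executed, source, target):
    """An instance (given its set of executed activities) may migrate if it
    has not executed any activity the target version no longer contains."""
    return not (executed & deleted_on_path(source, target))

v1 = WorkflowVersion("order/v1", {"receive", "check_credit", "ship"})
v2 = WorkflowVersion("order/v2", {"receive", "ship", "invoice"},
                     derived_from=v1, deleted={"check_credit"})

print(can_migrate({"receive"}, v1, v2))                  # True: compatible
print(can_migrate({"receive", "check_credit"}, v1, v2))  # False: stays on v1
```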
André Meyer, A rapid application development framework for distributed mobile multi-media: a mobile multi-media architecture for the virtual workplace, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2000. (Dissertation) The trends towards increased mobility and global business connections force workers and managers alike to communicate faster using multiple new types of media. At the same time, the means of communication must become easier to use and more secure, because the statically localized workplace of today will dissolve in the near future into a distributed and highly mobile multi-media communication platform. As the need for - or the freedom of - mobility as part of changing life styles becomes more and more commonplace, new tools are required that support these new forms of life and work, also called Virtual Workplaces. A virtual workplace is a distributed wireless multi-media system that is aimed at mobile individuals and supports the organization of people working together in groups on a number of projects. The workplace is a virtual one because it provides for mobility in two senses: the mobile individuals and project members may be working at any place using a mobile tool, and the individual project members may be distributed all over the world. Hence, there is no necessity for a permanent physical workplace that is owned or shared by any group of individuals. Furthermore, each individual member may be part of a large number of projects, cooperating with sets of different people. The current trends in the world of work - namely, the specialization and globalization of competence - reinforce this new kind of work paradigm. The focus on the workplace is chosen here as an example of the use of the new means of mobility in general; the resulting techniques may be adapted to a wide range of application domains where communication between mobile people and mobile information access play the central roles. The virtual workplace is designed to provide mobile members of distributed work groups with a multi-media platform that supports them in communicating with each other and in retrieving and editing documents and information collaboratively, wherever and whenever they need it. The individual project members are supported by a set of user interface, communication, and information retrieval agents; these mobile agents act for the user in the background in order not to distract him from his work. The result of this thesis is the conceptualization and implementation of a Distributed Rapid Application Development Framework for the creation of Mobile Multi-Media applications that work on numerous current and future hardware devices; the virtual workplace is an example of such an application. The usability of mobile communication and information devices is facilitated by novel paradigms for computer-human interaction, such as pen-based user interfaces that mimic the behavior and ease of use of natural paper, and speech recognition. In the combined employment of sophisticated mobile and agent technologies, virtual workplace scenarios leap far beyond the current state of research in the field of Computer Supported Cooperative Work. The enormous potential of the framework architecture of virtual workplaces stems from new paradigms being developed, in contrast to the mere extension and retrofitting of old ones.
Joachim Kreutzberg, Qualitätsmanagement auf dem Prüfstand : Analyse des Qualitätsmanagements von Informationssystemen, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2000. (Dissertation)
Walter Keller, Petri nets for reverse engineering, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2000. (Dissertation) The aim of this work is to investigate synergies between Petri-net theory and reverse engineering. The existence of such synergies is not obvious because each field is based on different assumptions. These differences relate to two modelling paradigms: clustering and folding. Clustering merges neighbouring nodes and corresponds to the construction of complex systems from subsystems; it is widely used in software engineering and in practical applications of Petri nets. Foldings merge only transitions with transitions, places with places and arcs with arcs; they group similar functionality. Hence they preserve behaviour, allow the transfer of semantics and provide deep theoretical insights by means of far-reaching connections to other models of concurrency. A folding-based Petri-net algorithm for reverse engineering is introduced. It recovers a coloured net from an unstructured flat Petri net; the two nets are connected by a folding, which amounts to a compact specification of the source net. The algorithm is both flexible and scalable, and this work shows how application heuristics can be integrated into it. Its cost is almost linear with respect to the size of the input net, which is remarkable in the field of reverse engineering. Petri nets may serve as an intuitive model of the interplay between the structural, functional and dynamic aspects of a system. Various methods of modelling aspects other than concurrency with Petri nets represent an innovation. With such a translation, the algorithm may also analyse legacy systems outside the realm of Petri nets. The result is a novel and powerful method of reverse engineering. A specific example shows how a high-level design may be recovered from low-level implementation information; moreover, the recovered colouring contains a complete specification, inclusive of the data model. The reverse-engineering part of this work concentrates on folding-based Petri-net methods because they contain new features, whereas clustering-based techniques share many similarities with known methods. For best results, however, clustering and folding should be appropriately combined; the foundations for such combinations are laid in the first part of this thesis. Many Petri-net classes known from the literature may be grouped into folding-based and clustering-based types. However, no well-defined link exists between them which would allow the strengths of both approaches to be combined, especially in practical applications. Such a link is presented here in the form of an adjunction - a strong two-way relationship taken from category theory - which links folding-based and clustering-based categories. It is shown that these categories have properties typical of folding-based and clustering-based Petri nets respectively. Further compatible adjunctions express the Petri-net dichotomy of structure and behaviour. To the best of the author's knowledge, this basic principle of Petri-net theory has not yet been formulated categorically. For practical applications, it is important that these concepts can be integrated fairly easily with existing Petri-net tools; this will enrich them with the power of categorical machinery, e.g. with morphisms, universal constructions and the transfer of behaviour. Coloured nets are simply defined as special comma categories, i.e. essentially folding morphisms. The reduction algorithm introduced here is a proof of the practical value of this approach: it is an iteration of couniversal constructions, and the reduction itself has couniversal properties.
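The folding mechanism itself can be sketched compactly: places are merged only with places and transitions only with transitions according to a grouping function, and the merged instances become the colours of the folded node. The code below is a toy illustration of that mechanism under invented names; it is not the couniversal reduction algorithm of the thesis.

```python
# Toy folding of a flat Petri net: places merge only with places and
# transitions only with transitions, grouped by a similarity key (here,
# a label). The merged instances become the 'colours' of the folded node.

from collections import defaultdict

# Flat net: nodes are (kind, name, label); arcs connect node names.
nodes = [
    ("place", "buf_a", "buffer"), ("place", "buf_b", "buffer"),
    ("trans", "send_a", "send"),  ("trans", "send_b", "send"),
]
arcs = [("buf_a", "send_a"), ("buf_b", "send_b")]

def fold(nodes, arcs):
    """Fold nodes by (kind, label); return folded nodes with their colour
    sets and folded arcs (duplicate arcs collapse)."""
    group_of = {}                # flat node name -> folded node key
    colours = defaultdict(set)   # folded node key -> merged instances
    for kind, name, label in nodes:
        key = (kind, label)
        group_of[name] = key
        colours[key].add(name)
    folded_arcs = {(group_of[src], group_of[dst]) for src, dst in arcs}
    return dict(colours), folded_arcs

folded_nodes, folded_arcs = fold(nodes, arcs)
print(folded_nodes)
# {('place', 'buffer'): {'buf_a', 'buf_b'}, ('trans', 'send'): {'send_a', 'send_b'}}
print(folded_arcs)
# {(('place', 'buffer'), ('trans', 'send'))}
```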
Takashi Suezawa, Concepts for migrating running virtual machines: design and implementation of a Java virtual machine migration system, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2000. (Dissertation) The idea of moving running programs (or processes) across the network in order to resume them on another computer was developed some years ago, and many process migration systems have been built since then. Process migration is used especially for load balancing and load distribution purposes. Most of these systems, however, cannot be used in a heterogeneous environment where different kinds of workstations are connected to each other, because in most process migration systems the representation of the execution state contains runtime data that depend on the underlying operating system. A solution to this problem is virtual machine (VM) migration. Migration of a running VM means suspending the execution of the VM on a source computer, relocating the execution state to a target computer, and resuming execution on that target computer. VM migration differs from process migration in that VM migration considers a well-defined subset of the execution state of a process. In other words, VM migration does not migrate runtime data that are operating system dependent and thus allows cross-platform migration. This thesis develops concepts for a system that allows the migration of a running VM and answers the following questions:
* What mechanisms are required for VM migration purposes? In order to enable a VM to correctly resume execution after a migration, it is necessary to capture and represent the execution state of the VM; on the target computer, a VM must be initialised by means of the appropriate execution state.
* Which runtime data are contained in a representation of an execution state? An execution state constitutes all relevant runtime data, such as the stack (which contains temporary data such as subroutine parameters or temporary variables), the data area (which contains global variables) and the text area (which contains the instructions of the program). The structure of the execution states may, however, differ depending on the architecture of the virtual machine.
* Is it necessary to tag the runtime data with supplementary data? In order to correctly reproduce the frozen execution state on the target computer, it is necessary to associate the runtime data with type information. The necessity stems from the fact that the data types have different byte representations; if the runtime data are tagged with type information, the target VM knows which byte formats it has to read from the representation. (A sketch of this tagging idea follows below.)
A mechanism for virtual machine migration is a powerful technique for distributed applications. VM migration is well suited for the development of fault-tolerant (e.g. a trading system), highly available (e.g. an air-traffic control system) or resource-aware (e.g. mobile computing) systems. The described migration concepts are applicable to many different virtual machine architectures. In this thesis we describe the adaptation and implementation of these migration concepts for the Java virtual machine (JVM) in a system called Merpati. Merpati comprises an extended Java virtual machine that facilitates the migration of running Java programs between JVMs located on different remote computers. Furthermore, Merpati provides a Java application programming interface. This API enables the Java programmer to migrate a running JVM to another computer; in addition, the API allows the checkpointing and recovery of a running JVM.
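The tagging question above can be illustrated in a few lines of code: each runtime value is written with a leading type tag, so the target VM knows which byte format to decode, independently of the source platform. The tag values and the tiny format below are invented for illustration and are unrelated to Merpati's actual state representation.

```python
# Sketch of tagging runtime data with type information so a frozen
# execution state can be decoded on a different platform. Tag values
# and the format are invented; Merpati's real representation is richer.

import struct

TAG_INT, TAG_DOUBLE = 0x01, 0x02
FORMATS = {TAG_INT: ">i", TAG_DOUBLE: ">d"}   # fixed big-endian formats

def freeze(stack):
    """Serialize a list of (tag, value) stack slots into tagged bytes."""
    out = bytearray()
    for tag, value in stack:
        out.append(tag)                          # 1-byte type tag
        out += struct.pack(FORMATS[tag], value)  # tagged payload
    return bytes(out)

def thaw(data):
    """Rebuild the stack; the tag tells us which byte format to read."""
    stack, offset = [], 0
    while offset < len(data):
        tag = data[offset]
        offset += 1
        fmt = FORMATS[tag]
        (value,) = struct.unpack_from(fmt, data, offset)
        offset += struct.calcsize(fmt)
        stack.append((tag, value))
    return stack

frozen = freeze([(TAG_INT, 42), (TAG_DOUBLE, 3.14)])
print(thaw(frozen))   # [(1, 42), (2, 3.14)]
```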
Johannes Ryser, Martin Glinz, A Scenario-Based Approach to Validating and Testing Software Systems Using Statecharts, In: 12th International Conference on Software and Systems Engineering and their Applications (ICSSEA’99), CNAM, Paris, 1999-12-08. (Conference or Workshop Paper published in Proceedings) Scenarios (use cases) are used to describe the functionality and behavior of a (software) system from a user-centered perspective. As scenarios form a kind of abstract-level test case for the system under development, the idea of using them to derive test cases for system test is quite intriguing. Yet in practice, scenarios from the analysis phase are seldom used to create concrete system test cases. In this paper we present a procedure for creating scenarios in the analysis phase and using those scenarios in system test to systematically determine test cases. This is done by formalizing scenarios into statecharts, annotating the statecharts with information helpful for test case creation/generation, and traversing paths in the statecharts to determine concrete test cases.
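The last step, deriving test cases by path traversal, can be sketched as follows: the statechart is reduced to a flat transition graph, and every simple path from the initial state to a final state yields one abstract test case. State and event names are invented; hierarchical and annotated statecharts, as used in the paper, would need more machinery.

```python
# Sketch of deriving test cases by path traversal over a (flattened)
# statechart: every simple path from the initial state to a final state
# yields one abstract test case (a sequence of events). Names invented.

def test_cases(transitions, initial, finals):
    """transitions: list of (source, event, target). Returns the event
    sequences of all simple paths from initial to a final state."""
    cases = []

    def walk(state, visited, events):
        if state in finals:
            cases.append(list(events))
        for src, event, dst in transitions:
            if src == state and dst not in visited:
                walk(dst, visited | {dst}, events + [event])

    walk(initial, {initial}, [])
    return cases

# Hypothetical ticket-machine scenario formalized as a statechart.
chart = [
    ("idle", "insert_coin", "paid"),
    ("paid", "select_ticket", "printing"),
    ("paid", "cancel", "idle"),
    ("printing", "take_ticket", "idle"),
]
for case in test_cases(chart, "idle", {"printing"}):
    print(case)
# ['insert_coin', 'select_ticket']
```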
Peter Trommler, The Application Profile Model: A Security Model for Downloaded Executable Content, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 1999. (Dissertation) With the introduction of Java, interest in transferring executable code over the Internet has increased considerably. At first, the downloaded executable content paradigm was deployed for animations and active forms; nowadays, with the introduction of the Network Computer, there is a tendency to deploy it as a distribution mechanism for general application programs on the Internet. Executing code downloaded from the Internet raises security issues beyond those found in operating systems; therefore most systems that implement downloaded executable content offer additional security mechanisms to complement those of the operating system. In this thesis a novel security model for downloaded executable content is developed. The application profile model is defined to grant only the set of access rights needed by the application, the application profile. Without breaching security, the definition can be relaxed, which results in the weak application profile model. The issue of application profile selection is addressed, and an algorithm is presented to select the application profile in the weak application profile model dynamically at runtime of an application. A prototype implementation demonstrates the feasibility of this approach. As an alternative approach, a method of code analysis is developed to determine the set of access rights required for the execution of an application. Based on an analysis of the theoretical limitations of code analysis, methods to approximate the set of access rights are discussed; the method of generalized constants is developed and applied to Java. The application profile model and code analysis are compared, and combinations of both approaches are discussed. To define a security policy based on the application profile model, a specification language is defined as an extension to PLAS, a general-purpose policy language; the extension is defined in such a way that it can be integrated with other specification languages. The new model is studied in the context of a company environment, and management strategies for a security policy for downloaded executable content are developed and evaluated.
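The core of the application profile model, granting only the rights an application actually needs, can be sketched as a simple reference monitor. Resource and operation names below are hypothetical, and the sketch ignores profile selection and the weak variant discussed in the thesis.

```python
# Sketch of the application-profile idea: downloaded content executes
# under a profile granting only the access rights it needs. Names are
# hypothetical; profile selection and the weak model are not shown.

class AccessDenied(Exception):
    pass

class ApplicationProfile:
    """The set of (resource, operation) rights granted to one application."""

    def __init__(self, name, rights):
        self.name = name
        self.rights = frozenset(rights)

    def check(self, resource, operation):
        """Reference-monitor check before any sensitive operation."""
        if (resource, operation) not in self.rights:
            raise AccessDenied(
                f"{self.name}: {operation} on {resource} not in profile")

# A downloaded form-filling applet needs its config file and one host.
profile = ApplicationProfile("forms-applet", {
    ("config.dat", "read"),
    ("forms.example.org:443", "connect"),
})

profile.check("config.dat", "read")       # allowed, returns silently
try:
    profile.check("/etc/passwd", "read")  # not in the profile
except AccessDenied as err:
    print(err)
```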
Johannes Ryser, Martin Glinz, A Practical Approach to Validating and Testing Software Systems Using Scenarios, In: QWE'99: Third International Software Quality Week Europe, November 1999. (Conference or Workshop Paper)
Abraham Bernstein, Populating the Specificity Frontier: IT-Support for Dynamic Business Processes, No. IFI-2008.0003, Version: 1, October 1999. (Technical Report)
Rüdiger Lause, Gerhard Schwabe, Towards a groupware didactic - Experiences from the training of groupware in Cuparla, In: ECSCW '99 Workshop on Evolving Use of Groupware, 1999-09-12. (Conference or Workshop Paper)
Abraham Bernstein, Executing Programs with various degrees of Specificities: Populating the Spectrum of Specificity, No. IFI-2008.0002, Version: 1, September 1999. (Technical Report)
Abraham Bernstein, Process/Task Grammar, No. IFI-2008.0001, Version: 1, June 1999. (Technical Report)
Abraham Bernstein, Chrysanthos Dellarocas, Mark Klein, Towards Adaptive Workflow Systems - CSCW-98 Workshop Report, SIGMOD Record and SIGGROUP Bulletin, 1999. (Journal Article)
Thomas W. Malone, Kevin Crowston, Jintae Lee, Brian Pentland, Chrysanthos Dellarocas, George Wyner, John Quimby, Charley Osborne, Abraham Bernstein, George Herman, Mark Klein, Elissa O'Donnell, Tools for inventing organizations: Toward a handbook of organizational processes, Management Science, Vol. 45 (3), 1999. (Journal Article) A critical need for many organizations in the next century will be the ability to quickly develop innovative business processes to take advantage of rapidly changing technologies and markets. Current process design tools and methodologies, however, are very resource-intensive and provide little support for generating (as opposed to merely recording) new design alternatives. This paper describes the Process Recombinator, a novel tool for generating new business process ideas by recombining elements from a richly structured repository of knowledge about business processes. The key contribution of the work is the technical demonstration of how such a repository can be used to automatically generate a wide range of innovative process designs. We have also informally evaluated the Process Recombinator in several field studies, which are briefly described here as well.
Abraham Bernstein, Mark Klein, Thomas W. Malone, The Process Recombinator: A Tool for Generating New Business Process Ideas, In: ICIS, 1999. (Conference or Workshop Paper) A critical need for many organizations in the next century will be the ability to quickly develop innovative business processes to take advantage of rapidly changing technologies and markets. Current process design tools and methodologies, however, are very resource-intensive and provide little support for generating (as opposed to merely recording) new design alternatives. This paper describes the Process Recombinator, a novel tool for generating new business process ideas by recombining elements from a richly structured repository of knowledge about business processes. The key contribution of the work is the technical demonstration of how such a repository can be used to automatically generate a wide range of innovative process designs. We have also informally evaluated the Process Recombinator in several field studies, which are briefly described here as well.
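The recombination mechanism described in this abstract can be sketched as a cartesian product over design dimensions: for each subactivity of a process, a repository offers alternative specializations, and every combination of alternatives is a candidate design. The repository contents below are invented toy examples; the real Process Recombinator draws on the richly structured Process Handbook repository.

```python
# Sketch of recombination: each subactivity of a process has alternative
# specializations in a repository; every combination of alternatives is
# a candidate process design. Repository contents are invented examples.

from itertools import product

# Hypothetical repository: subactivity -> alternative specializations.
repository = {
    "identify customers": ["advertise", "direct mail", "referrals"],
    "deliver product":    ["ship from stock", "produce on demand"],
    "receive payment":    ["invoice", "credit card"],
}

def recombine(repo):
    """Generate all candidate designs (one alternative per subactivity)."""
    subactivities = list(repo)
    for combo in product(*(repo[s] for s in subactivities)):
        yield dict(zip(subactivities, combo))

designs = list(recombine(repository))
print(len(designs))   # 3 * 2 * 2 = 12 candidate designs
print(designs[0])     # e.g. {'identify customers': 'advertise', ...}
```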