Alex Muller, Reengineering of a Ticket Sales System, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2006. (Master's Thesis)
The goal of this diploma thesis is to develop a reengineering plan for the front-end of a ticket sales system written in Java, using reengineering techniques and tools. The front-end component is operational, but defective from a maintenance point of view. The analysis of the component and the change propositions are based on theoretical insights, practically established reengineering patterns, and supporting software tools. Elements of these domains are applied in the reengineering process to find a best-practice solution for the given task. The thesis guides the reader through the reverse and forward engineering process. It is organized in three parts. The first part gives an overview of the software reengineering domain and introduces the instruments used in the thesis. The second part covers the reverse engineering of the component, applying a stepwise, goal-oriented analysis process supported by object-oriented reverse engineering patterns and tools; its result is the set of classes that need to be reengineered. The third part discusses the options based on the reverse engineering results and delivers a goal architecture and a plan describing how to change the ticket sales component to make it easier to maintain and to enable its further evolution.
Martin Morger, Semantic Clipboard, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2006. (Master's Thesis)
The Semantic Web provides a framework to share data across the boundaries of applications, enterprises, and communities. It uses the Resource Description Framework to provide metadata describing any resource accessible, or at least identifiable, on the Web. Current clipboard applications allow the exchange of data between applications running on the same platform, while the semantics of the data are usually only retained if the source and target applications are part of a specific application suite. When the range of data sources is expanded from desktop applications to Websites, copying data from a Web resource into an application loses most of the semantics of the data, as the target application recognizes the pasted data as formatted or plain text only. This thesis presents an implementation of the Semantic Clipboard concept using an extensible plugin architecture. The implemented Java application extracts RDF metadata describing a Web resource from accordingly annotated Websites and pastes it into a supported desktop application, retaining the semantics of the data. By implementing a plugin architecture, the Semantic Clipboard uses individual plugin modules to extract ontology-specific data from the source location, to store this data temporarily in specific data containers, and to paste it into a suitable desktop application. To extend the range of supported source ontology vocabularies and target applications, additional plugins may be developed and registered with the Semantic Clipboard. The current implementation provides a number of plugins, supporting various ontology vocabularies as well as different desktop applications on the Mac and Windows platforms.
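The extract/store/paste plugin pipeline described in this abstract can be illustrated with a minimal Java sketch. All names below (ExtractorPlugin, PastePlugin, SemanticClipboard) are invented for illustration and do not reflect the actual thesis implementation:

```java
// Hypothetical sketch of a plugin-based clipboard; names are illustrative.
import java.util.ArrayList;
import java.util.List;

/** A plugin that extracts ontology-specific data from an annotated source. */
interface ExtractorPlugin {
    boolean supports(String ontology);          // e.g. "vCard", "iCal"
    List<String> extract(String source);        // returns extracted statements
}

/** A plugin that pastes extracted data into a target desktop application. */
interface PastePlugin {
    boolean accepts(String ontology);
    String paste(List<String> data);            // returns what would be pasted
}

/** Central registry: new plugins extend the clipboard without changing it. */
class SemanticClipboard {
    private final List<ExtractorPlugin> extractors = new ArrayList<>();
    private final List<PastePlugin> pasters = new ArrayList<>();

    void register(ExtractorPlugin p) { extractors.add(p); }
    void register(PastePlugin p) { pasters.add(p); }

    /** Copy data for the given ontology from a source to a suitable target. */
    String copyPaste(String ontology, String source) {
        for (ExtractorPlugin e : extractors) {
            if (!e.supports(ontology)) continue;
            List<String> data = e.extract(source);   // temporary container
            for (PastePlugin p : pasters) {
                if (p.accepts(ontology)) return p.paste(data);
            }
        }
        return null; // no matching plugin pair registered
    }
}
```

The design point is that the clipboard core never needs to know any ontology vocabulary or target application; both ends are supplied by registered plugins.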
Dane Marjanovic, Developing a Meta Model for Release History Systems, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2006. (Master's Thesis)
The goal of this thesis is to construct a meta model for release history systems, based on SVN (Subversion) and CVS. The meta model encompasses the core concepts of versioning systems as they are present in these tools. The release history aspect is then extended by an issue tracking data model, for which we take the Bugzilla data representation. With the meta model's semantics, one is able to model arbitrary release history systems similar to CVS or SVN. Furthermore, the meta model can represent the release history aspect of configuration management systems (CMS) such as ClearCase or Visual SourceSafe; we validate the meta model against the Rational ClearCase data model. The focus of this thesis lies in modeling a meta concept that describes the notion of software history as it is present in representative release history tools. The model is conceptualized in UML 2.0 and implemented in Java using Hibernate [Hib05]. The s.e.a.l. research group conducts a software evolution project of which the release history meta model developed in this thesis is a base part. The meta model is developed conceptually in this thesis; the actual implementation focuses on the issue tracking aspect, since the meta model incorporates the issue tracking domain as well. The release history aspect was implemented in the scope of another project in the s.e.a.l. research group, where a release history model based on CVS was implemented. The tools used to implement the CVS data model are reused to implement the issue tracking model in this effort; hence, both the CVS data model and the issue tracking model implementations are very closely related to a possible implementation of the release history aspect of the meta model. Keywords: meta model, conceptual world, release history, issue tracking.
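The core versioning concepts named above (versioned files, revisions, and linked issue reports) can be sketched as plain Java classes. The class and field names here are hypothetical and do not reproduce the thesis's actual UML model:

```java
// Illustrative sketch of core release-history concepts; names are invented.
import java.util.ArrayList;
import java.util.List;

/** A file under version control, carrying its full revision history. */
class VersionedFile {
    final String path;
    final List<Revision> revisions = new ArrayList<>();
    VersionedFile(String path) { this.path = path; }

    /** Record a new revision of this file, as CVS or SVN would. */
    Revision commit(String number, String author) {
        Revision r = new Revision(this, number, author);
        revisions.add(r);
        return r;
    }
}

/** One revision of a file; may reference issue reports it resolves. */
class Revision {
    final VersionedFile file;
    final String number, author;
    final List<Issue> issues = new ArrayList<>(); // issue-tracking link
    Revision(VersionedFile f, String number, String author) {
        this.file = f; this.number = number; this.author = author;
    }
}

/** A Bugzilla-style issue report linked to the revisions that address it. */
class Issue {
    final int id; final String status;
    Issue(int id, String status) { this.id = id; this.status = status; }
}
```

The link from Revision to Issue mirrors how the thesis couples the release history aspect with the issue tracking domain in one model.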
Roger Loosli, Design and Development of a Client-Server Architecture for a GIS Application, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2006. (Master's Thesis)
ArcGIS is a GIS product line comprising desktop applications (ArcView / ArcEditor / ArcInfo) and, with ArcGIS Server, an enterprise GIS server. Enterprise GIS servers are an example of enterprise application servers. They provide GIS services to a wide range of distributed users, so that GIS resources can be exploited more effectively, as they are no longer available only to a small number of selected GIS specialists. This diploma thesis provides an overview of the structure and operating mode of ArcGIS Server and shows how functionality of the ArcGIS-Desktop-based GEONIS can be made available organization-wide through ArcGIS Server.
Artan Kurtisi, Quality Management of Application Interface Specifications, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2006. (Master's Thesis)
Most software development concerns systems of an operational or technical nature. When specifying the requirements of such a system, it is very important to determine how the system should be embedded into its environment and where the system boundaries lie. Difficulties such as organizational responsibilities, adaptations of neighbouring applications, and data transformations complicate this identification. It is therefore important to define quality requirements in this development phase as well, and to verify at the end that they are satisfied. The goal of this thesis is the development of a quality management model applicable to interface specifications, that is, to the integration of an application within a complex IT landscape. Applying the model should make it possible to state the quality of interfaces and to derive measures for quality improvement.
Andreas Jetter, Assessing Software Quality Attributes with Source Code Metrics, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2006. (Master's Thesis)
This thesis is about quality assessment of software systems using source code metrics. We define four dimensions and relate them to a number of popular quality models, i.e., the models of McCall, Boehm, ISO 9126, Dromey, and Bansiya. We also relate source code metric based quality models (SMQM) to these dimensions and show that the usefulness of SMQM is limited to an architectural view; from this point of view, however, it is an expressive tool to assess software. We discuss several aspects of source code measuring. The objective and subjective viewpoints are contrasted: the former is more of an engineering approach, the latter more of an artistic one. The danger of use and abuse of metrics is also highlighted, as well as the problem of validating and combining source code metrics. We developed an SMQM inspired by the quality model for object-oriented design (QMOOD) introduced by Bansiya. The quality assessor tool we implemented computes source code metrics for Java and summarizes them into abstract quality attributes. These high-level attributes can be visualized in a plot to trace the evolution of the design quality over time. In a case study we use the quality assessor tool to analyze the open source project "Azureus", a medium-sized BitTorrent client. We consider three years, during which "Azureus" grew from 22'000 to 222'000 lines of code. We measure 19 releases and analyze them by comparing the evolution of the design metrics with the changelog data from the developer's website. This way we are able to show that a recognizable correlation exists between the two.
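The idea of summarizing source code metrics into an abstract quality attribute can be sketched as a weighted sum, in the spirit of QMOOD. The metric, the weights, and the attribute below are invented for illustration; they are not Bansiya's actual coefficients or the tool's real formulas:

```java
// Minimal sketch of metric aggregation into a quality attribute.
// Weights and metric choices are illustrative assumptions only.
class QualityAttribute {
    /** Count non-blank lines as a crude size metric. */
    static int linesOfCode(String source) {
        int loc = 0;
        for (String line : source.split("\n")) {
            if (!line.trim().isEmpty()) loc++;
        }
        return loc;
    }

    /**
     * Combine normalized metrics (each in [0, 1]) into one score:
     * cohesion contributes positively, coupling and size negatively.
     */
    static double score(double coupling, double cohesion, double size) {
        return -0.25 * coupling + 0.5 * cohesion - 0.25 * size;
    }
}
```

Plotting such a score per release is what makes the design-quality trend over 19 releases visible in a single curve.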
Christian Hanimann, Towards an Integrated Tool Platform for Software Architecture and Evolution Analysis, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2006. (Master's Thesis)
Software maintenance and evolution are important tasks in the software lifecycle. To make them easier, procedures exist to represent the software as a model and to measure it, and there are several graphical approaches to represent this generated data. This thesis contributes a part of that work. To persist a generated FAMIX model of a software system, the model data is stored with Hibernate in a relational database. The metrics of this software, computed with the Metrics plug-in, are mapped to the corresponding entities of the FAMIX model and are also stored in the database. The metrics are visualised with the Kiviat Visualizer. On the basis of these graphs, several questions concerning architecture, design, and evolution are answered.
Béla Grossmann, Change Prediction Cost Model - Developing a CPCM based on Version History Data and Change Couplings, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2006. (Master's Thesis)
Software maintenance and evolution is an expensive part of the life cycle of a software system. Applying cost estimation models to predict future maintenance costs is of great interest, especially for management. In this thesis we examine whether release history database metrics such as change couplings and modification reports can support the prediction of software maintenance and evolution costs. In three approaches we show the development of a change prediction cost model. The thesis presents an insight into the data retrieval and provides a detailed description of the significance analysis of the input values.
Emanuel Giger, Evolving Code Clones: An Approach towards a Fine-Grained Analysis of Code Clone Changes and Change Couplings, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2006. (Master's Thesis)
The term code clone in the field of software engineering refers to duplications in the source code of a software system. Such code clones are considered a bad smell. They are assumed to cause problems during the evolution and maintenance of a system, because programmers and developers may need to locate code clones in the entire source code to change them consistently. This problem manifests itself in change coupling groups: groups of source code files that are often changed together. It is thus important to have a methodology to identify and "disarm" such critical files specifically. A systematic correlation between code clones and change couplings has so far been assumed, but recent research could neither verify this correlation nor reject it entirely. In this thesis we use a new approach that combines various technologies to investigate the relation between code clones and change couplings. We applied our approach in two case studies. The evaluation of the results could not establish a systematic correlation or interaction between code clones and change couplings. Nevertheless, the case studies pointed out certain file groups in which code clones indeed caused change couplings. The approach developed in this thesis can be used to investigate a software system and to identify such critical file groups in a well-defined process.
Roman Flückiger, KiviatNavigator: Navigation of Source Code Data using Kiviat Graphs, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2006. (Master's Thesis)
Source code data of large software systems tend to be very complex. Visualizing and navigating these data pools in a way that reveals specific software traits remains a challenge to date. In this thesis we present an exploration strategy for navigating such source code data. We generate graphical views that expose specific design aspects, such as bad smells and hotspots in general. The approach uses sequences of such views to incrementally gather knowledge about the code in scope, which finally allows us to identify entities of questionable design. Our approach uses the measurement mapping principle combined with Kiviat diagrams to visualize system entities. We further present a prototype implementation as an Eclipse plug-in and evaluate it in a case study, analyzing parts of the Mozilla source code.
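The measurement mapping behind a Kiviat diagram can be sketched in a few lines: each of n normalized metric values is mapped to a point on one of n evenly spaced axes, and the points form the entity's polygon. This is a generic geometric sketch, not the KiviatNavigator's actual code:

```java
// Illustrative Kiviat mapping: metric values in [0, 1] become polygon
// vertices on evenly spaced axes of a unit-radius diagram.
class KiviatMapper {
    /** Returns {x, y} pairs, one vertex per metric value. */
    static double[][] toPolygon(double[] values) {
        double[][] points = new double[values.length][2];
        for (int i = 0; i < values.length; i++) {
            double angle = 2 * Math.PI * i / values.length; // axis direction
            points[i][0] = values[i] * Math.cos(angle);
            points[i][1] = values[i] * Math.sin(angle);
        }
        return points;
    }
}
```

An entity with uniformly high values yields a large regular polygon, while outliers on single metrics produce the spikes that make hotspots visually recognizable.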
Markus Fehlmann, XML to RDF Transformation, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2006. (Master's Thesis)
XML continues to be the primary format for data exchange in distributed systems. However, since several serializations of domain-specific knowledge are possible, XML documents have no inherent semantics. The Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. The Resource Description Framework (RDF), which is part of the Semantic Web, formalizes the meaning of information. While many documents are encoded in XML, only a few are represented in RDF. In his PhD thesis, Reif proposed an algorithm, with a prototype implementation called WEESA, that generates RDF graphs from arbitrary XML documents by applying processing instructions defined in a mapping. In this thesis we propose an object-oriented architecture for the mapping algorithm in order to improve its maintainability, efficiency, and extensibility. In addition, we introduce new mapping directives that simplify the mapping definition process. The result of this thesis is a new implementation of the mapping algorithm that incorporates the suggested object-oriented architecture and the additional mapping constructs. Thus, the transformation from XML data to RDF could be simplified to a reasonable extent. A prominent example that benefits from our results is the semantic annotation of Web sites.
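A WEESA-style mapping can be illustrated in miniature: mapping rules pair an RDF predicate with an XPath expression, and applying them to an XML document yields triples. The rule format, method names, and vocabulary below are invented for illustration and are far simpler than WEESA's actual mapping language:

```java
// Hypothetical miniature of an XML-to-RDF mapping using JDK XPath.
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import org.w3c.dom.Document;

class XmlToRdf {
    /** Apply (predicate -> XPath) rules to a document; emit N-Triples. */
    static List<String> map(String xml, String subject,
                            Map<String, String> rules) {
        List<String> triples = new ArrayList<>();
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(
                            xml.getBytes(StandardCharsets.UTF_8)));
            XPath xpath = XPathFactory.newInstance().newXPath();
            for (Map.Entry<String, String> rule : rules.entrySet()) {
                String value = xpath.evaluate(rule.getValue(), doc);
                if (!value.isEmpty()) {
                    triples.add("<" + subject + "> <" + rule.getKey()
                            + "> \"" + value + "\" .");
                }
            }
        } catch (Exception e) {
            // parsing failed: return whatever was collected so far
        }
        return triples;
    }
}
```

Keeping the rules as data rather than code is what makes the mapping approach applicable to arbitrary XML documents.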
Alexander Poeffel, Simulation of Heavily Loaded Roundabouts (Simulation hochbelasteter Kreisel), University of Zurich, 2006. (Dissertation)
This dissertation deals mainly with the modelling and simulation of traffic flows on single- and multi-lane roundabouts. The focus lies on assessing traffic quality and determining the capacity of heavily used roundabouts.
The results are used to achieve the most fluent traffic flow possible: one that can handle the existing traffic and also matches future requirements. Intersections can thus be built adequately, which leads to fluent traffic flow with fewer delays and hold-ups.
In the course of this dissertation a computer program was built to simulate and analyse traffic flows on roundabouts. All kinds of roundabout shapes existing in Switzerland can be modelled, with the exception of bypasses.
To keep the modelling of vehicle behaviour flexible, an interpretable language for describing behaviour rules was created and implemented. These statements are interpreted by the vehicle objects at runtime and define their behaviour.
The simulation program also supports analysis of the measured queue length and waiting time. Various graphic and spreadsheet tools allow a detailed analysis of the results.
3rd International Conference on Mining Software Repositories (MSR2006), Edited by: Stephan Diehl, Harald Gall, Martin Pinzger, Ahmed Hassan, ACM, Shanghai, China, 2006. (Proceedings)
4th International Workshop on Ubiquitous Mobile Information and Collaboration Systems (UMICS 2006), Edited by: Moira C. Norrie, Schahram Dustdar, Harald Gall, Luxembourg, 2006. (Proceedings)
Ksenia Ryndina, Jochen M. Küster, Harald Gall, Consistency of Business Process Models and Object Life Cycles, In: Models in Software Engineering, Springer, 2006. (Conference or Workshop Paper)
Michael Fischer, Harald Gall, EvoGraph: A Lightweight Approach to Evolutionary and Structural Analysis of Large Software Systems, In: 13th Working Conference on Reverse Engineering (WCRE), IEEE Computer Society, Benevento, Italy, 2006. (Conference or Workshop Paper)
Structural analyses frequently fall short of an adequate representation of historical changes for retrospective analysis. By combining the two underlying information spaces in a single approach, the comprehension of the interaction between evolving requirements and system development can be improved significantly. We therefore propose a lightweight approach based on release history data and source code changes, which first selects entities with evolutionarily outstanding characteristics and then indicates their structural dependencies via commonly used source code entities. The resulting data sets and visualizations aim at a holistic view to point out and assess structural stability, recurring modifications, or changes in the dependencies of the file sets under inspection. In this paper we describe our approach and its results in terms of the Mozilla case study. Our approach complements typical release history mining and source code analysis approaches, so that past restructuring events as well as new, shifted, and removed dependencies can be spotted easily.
Sunghun Kim, Thomas Zimmermann, Miryung Kim, Ahmed Hassan, Audris Mockus, Tudor Girba, Martin Pinzger, E. James Whitehead Jr., Andreas Zeller, TA-RE: An Exchange Language for Mining Software Repositories, In: Proceedings of the International Workshop on Mining Software Repositories, ACM, Shanghai, China, 2006. (Conference or Workshop Paper)
Software repositories have been getting a lot of attention from researchers in recent years. In order to analyze software repositories, it is necessary to first extract raw data from the version control and problem tracking systems. This poses two challenges: (1) extraction requires a non-trivial effort, and (2) the results depend on the heuristics used during extraction. These challenges burden researchers who are new to the community and make it difficult to benchmark software repository mining, since it is almost impossible to reproduce experiments done by another team. In this paper we present the TA-RE corpus. TA-RE collects extracted data from software repositories in order to build a collection of projects that simplifies the extraction process. Additionally, the collection can be used for benchmarking. As a first step, we propose an exchange language that makes sharing and reusing data as simple as possible.
Gerald Reif, Harald Gall, Using WEESA to Semantically Annotate Cocoon Web Applications, In: 1st Semantic Authoring and Annotation Workshop 2006 at the 5th International Semantic Web Conference ISWC2006, Athens, Georgia, US, 2006. (Conference or Workshop Paper)
The Semantic Web is based on the idea that Web applications provide semantically annotated Web pages. This meta-data is typically added in the semantic annotation process, which is currently not part of the Web engineering process. Web engineering, however, proposes methodologies to design, implement, and maintain Web applications but lacks semantic annotation. In this paper we show how WEESA, a mapping from XML documents to ontologies, can be used in Apache Cocoon Web applications to semantically annotate Web pages. We introduce Cocoon transformer components that use the WEESA mapping definition to automatically generate RDF meta-data from XML documents. We further show how existing Cocoon Web applications can be extended to Semantic Web applications, and we discuss the experiences gained in an industry case study.
Gerald Reif, Harald Gall, An Architecture for a Semantic Portal, In: International Workshop on Data Integration and Semantic Web (DISWeb'06) at the 18th Conference on Advanced Information Systems Engineering (CAiSE 2006), Springer, Luxembourg, 2006. (Conference or Workshop Paper)
Current Web applications provide their information and functionalities to human users only. To make Web applications accessible to machines as well, the Semantic Web proposes an extension of the current Web that describes the semantics of the content and the services explicitly with machine-processable meta-data. In this paper we introduce the architecture of a Semantic Portal that provides a unique front-end to the information and functionalities of individual Semantic Web applications. To realize the portal, we use WEESA to semantically annotate Web applications and provide the annotations in a knowledge base (KB) for download and querying. Based on that, the Semantic Harvester collects the KBs from individual Semantic Web applications to build the global KB of the Semantic Portal. Finally, we use Semantic Web services to make the portal a unique interface to the services of the Web applications.
Gerald Reif, Semantic Annotation, In: Semantic Web - Wege zur vernetzten Wissensgesellschaft, Springer, 2006. (Book Chapter)
This chapter first introduces the term semantic annotation and discusses techniques for linking annotations to the original document. It then addresses problems that arise when creating annotations. Subsequently, software tools are presented that support the user in the annotation process. Finally, methods are discussed that integrate the annotation process into the development process of a Web application.