Daron Acemoglu, Philippe Aghion, Fabrizio Zilibotti, Vertical Integration and Distance to Frontier, Journal of the European Economic Association, Vol. 1 (2-3), 2003. (Journal Article)
We construct a model in which the equilibrium organization of firms changes as an economy approaches the world technology frontier. In vertically integrated firms, owners (managers) must divide their time between production and innovation activities, which creates managerial overload and discourages innovation. Outsourcing some production activities mitigates the managerial overload but creates a holdup problem, causing part of the owners' rents to be dissipated to the supplier. Far from the technology frontier, imitation activities are more important, and vertical integration is preferred. Closer to the frontier, the value of innovation increases, encouraging outsourcing.
Peter Zweifel, Roger Zäch, Vertical restraints: the case of multinationals, Antitrust Bulletin, Vol. 48 (1), 2003. (Journal Article)
Josef Falkinger, Volker Grossmann, Workplaces in the primary economy and wage pressure in the secondary labor market, Journal of Institutional and Theoretical Economics, Vol. 159 (3), 2003. (Journal Article)
This paper develops a two-sector general-equilibrium model in which firms in the primary economy have to create workplaces prior to production and product market competition. To this end, we introduce the endogenous sunk-cost approach with two-stage firm decisions, familiar from industrial organization, into the macro labor literature. By hypothesizing that technological change has lowered marginal costs but has raised the nonproduction requirements for providing workplaces, we are able to explain the downsizing of low-skilled jobs in the primary economy despite ex-ante wage flexibility. This leads to more accentuated labor-market segmentation, i.e., an increase in wage pressure in the secondary economy.
Michael Fischer, Martin Pinzger, Harald Gall, Analyzing and Relating Bug Report Data for Feature Tracking, In: Proceedings of the 10th Working Conference on Reverse Engineering (WCRE'03), IEEE Computer Society, Victoria, B.C., Canada, January 2003. (Conference or Workshop Paper)
Gaining higher-level evolutionary information about large software systems is key to validating past and adjusting future development processes. In this paper, we analyze the proximity of software features based on modification and problem report data that capture the system's evolution history. Features are instrumented and tracked, the relationships of modification and problem reports to these features are established, and the tracked features are visualized to illustrate their otherwise hidden dependencies. Our approach uncovers these hidden relationships between features via problem report analysis and presents them in an easy-to-evaluate visual form. Particular feature dependencies can then be selected to assess feature evolution by zooming in to an arbitrary level of detail. Such visualization of interwoven features can therefore indicate locations of design erosion in the architectural evolution of a software system. Our approach has been validated on the large open source software project Mozilla and its bug reporting system Bugzilla.
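The core idea above – features become related when the same problem reports touch them – can be sketched in a few lines. This is a minimal illustration with invented report ids and feature names, not the authors' implementation:

```python
# Sketch: inferring hidden feature relationships from shared problem
# reports. Two features are considered coupled when the same problem
# report touches both of them.
from itertools import combinations
from collections import Counter

# Hypothetical mapping: problem report id -> features whose code the
# report's fixes modified (names invented for illustration).
report_features = {
    101: {"mail", "addressbook"},
    102: {"mail"},
    103: {"mail", "addressbook", "news"},
    104: {"news"},
}

def feature_coupling(report_features):
    """Count, for each feature pair, how many problem reports they share."""
    coupling = Counter()
    for features in report_features.values():
        for pair in combinations(sorted(features), 2):
            coupling[pair] += 1
    return coupling

coupling = feature_coupling(report_features)
# e.g. "addressbook" and "mail" co-occur in reports 101 and 103
```

Pairs with high counts would then be candidates for the "hidden dependency" visualization the paper describes.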
Michael Fischer, Martin Pinzger, Harald Gall, Populating a Release History Database from Version Control and Bug Tracking Systems, In: Proceedings of the International Conference on Software Maintenance (ICSM'03), IEEE Computer Society, Amsterdam, Netherlands, January 2003. (Conference or Workshop Paper)
Version control and bug tracking systems contain large amounts of historical information that can give deep insight into the evolution of a software project. Unfortunately, these systems provide only insufficient support for a detailed analysis of software evolution aspects. We address this problem and introduce an approach for populating a release history database that combines version data with bug tracking data and adds missing data not covered by version control systems, such as merge points. Simple queries can then be applied to the structured data to obtain meaningful views showing the evolution of a software project. Such views enable more accurate reasoning about evolutionary aspects and facilitate the anticipation of software evolution. We demonstrate our approach on the large open source project Mozilla, which offers great opportunities to compare results and validate our approach.
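As a minimal illustration of such a release history database – an invented two-table schema, not the authors' actual one – version records can be joined with bug tracking data using ordinary SQL:

```python
# Sketch: a tiny "release history database" linking file revisions to
# bug reports via bug ids mentioned in commit messages, then queried
# for a simple evolution view.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE revision (file TEXT, rev TEXT, bug_id INTEGER);
CREATE TABLE bug (id INTEGER PRIMARY KEY, severity TEXT);
""")
conn.executemany("INSERT INTO revision VALUES (?, ?, ?)", [
    ("nsHttp.cpp", "1.4", 7001),
    ("nsHttp.cpp", "1.5", 7002),
    ("nsCache.cpp", "1.2", 7001),
])
conn.executemany("INSERT INTO bug VALUES (?, ?)", [
    (7001, "critical"),
    (7002, "minor"),
])

# View: which files were touched by fixes for critical bugs?
rows = conn.execute("""
    SELECT DISTINCT r.file FROM revision r
    JOIN bug b ON r.bug_id = b.id
    WHERE b.severity = 'critical'
    ORDER BY r.file
""").fetchall()
files = [f for (f,) in rows]
```

The point of the paper's approach is that once the data is in this structured form, such views are one query away rather than requiring ad-hoc log scraping.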
Thomas Gschwind, Johann Oberleitner, Martin Pinzger, Using Run-Time Data for Program Comprehension, In: Proceedings of the 11th International Workshop on Program Comprehension, IEEE Computer Society, Washington, DC, USA, 2003. (Conference or Workshop Paper)
Traditional approaches to program comprehension use static program analysis or dynamic program analysis in the form of execution traces. Our approach, by contrast, makes use of run-time data such as parameter and object values. Compared to traditional program comprehension techniques, this enables fundamentally new kinds of program analysis. One such analysis is reflection analysis, which allows engineers to understand programs that make use of reflective (dynamic) method invocations. Another is object tracing, which allows engineers to trace and track the use of a given instance of a class within the program to be understood. In this paper, we present these techniques along with a case study to which we have applied them.
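The object-tracing idea can be approximated in miniature with Python's tracing hook. This is an illustrative sketch with invented class and function names; the paper's tooling is not Python-based:

```python
# Sketch: record which functions a particular object instance flows
# through at run time -- "object tracing" in miniature.
import sys

def make_object_tracer(target, log):
    def tracer(frame, event, arg):
        # At each function-call event, check whether the target
        # instance appears among the callee's bound arguments.
        if event == "call" and any(
            v is target for v in frame.f_locals.values()
        ):
            log.append(frame.f_code.co_name)
        return None  # no per-line tracing needed
    return tracer

class Order:
    pass

def validate(order): return True
def price(order, factor): return 42 * factor
def unrelated(x): return x

order = Order()
log = []
sys.settrace(make_object_tracer(order, log))
validate(order)
price(order, 2)
unrelated(3)
sys.settrace(None)
# log now lists the functions that received `order`
```

An engineer reading `log` sees exactly which parts of the program handled that one instance, which is the comprehension question object tracing answers.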
Jens Knodel, Martin Pinzger, Improving Fact Extraction of Framework-Based Software Systems, In: Proceedings of the 10th Working Conference on Reverse Engineering (WCRE'03), IEEE Computer Society, Victoria, B.C., Canada, January 2003. (Conference or Workshop Paper)
Modern software frameworks provide a set of common, prefabricated software artifacts that support engineers in developing large-scale software systems. Framework-related information can be implemented in source code, comments, or configuration files; in the latter two cases, current reverse engineering approaches miss important facts, reducing the quality of subsequent analysis tasks. We introduce a generic fact extraction approach for framework-based systems that combines traditional parsing with lexical pattern matching to obtain framework-specific facts from all three sources. We evaluate our approach on an industrial software application that was built using the Avalon/Phoenix framework. In particular, we give examples to point out the benefits of considering framework-related information and reflect on experiences gained during the case study.
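A toy version of the lexical side of this combination – with an invented configuration format and pattern, not the actual Avalon/Phoenix file syntax – shows how facts declared outside the source code can still be extracted:

```python
# Sketch: complement parser-based facts with lexical pattern matching
# over configuration text, so framework wiring declared outside the
# source code is not lost to the fact extractor.
import re

# Hypothetical XML-ish framework configuration snippet.
config = """
<block name="logger" class="org.example.LoggerImpl"/>
<block name="store"  class="org.example.StoreImpl"/>
"""

BLOCK_RE = re.compile(r'<block\s+name="(\w+)"\s+class="([\w.]+)"')

def extract_block_facts(text):
    """Return component-name -> implementing-class facts."""
    return dict(BLOCK_RE.findall(text))

facts = extract_block_facts(config)
```

Facts like these would then be merged with the facts a conventional parser extracts from the source code proper.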
Martin Pinzger, Harald Gall, Jean-Francois Girard, Jens Knodel, Claudio Riva, Wim Pasman, Chris Broerse, Jan Gerben Wijnstra, Architecture Recovery for Product Families, In: Proceedings of the 5th International Workshop on Product Family Engineering, Springer, Siena, Italy, 2003. (Conference or Workshop Paper)
Software product families are rarely created right away; they emerge when a domain becomes mature enough to sustain the long-term investment. The typical pattern is to start with a small set of products to quickly enter a new market. As soon as the business proves successful, new investments are directed to consolidating the software assets. The various products are migrated towards a flexible platform on which assets are shared and from which new products can be derived. In order to create and maintain the platform, the organization needs to carry out several activities, such as recovering the architectures of single products and product families, designing the reference architecture, isolating the variable parts, and generalizing software components. In this paper, we introduce a product family construction process that exploits related systems and product families, and we describe the methods and tools used. We also present an approach for classifying platforms according to platform coverage and variation, and describe three techniques to handle variability across single products and whole product families.
Martin Pinzger, Johann Oberleitner, Harald Gall, Analyzing and understanding architectural characteristics of COM+ components, In: Proceedings of the International Workshop on Program Comprehension (IWPC'03), IEEE Computer Society, Portland, Oregon, USA, 2003. (Conference or Workshop Paper)
Understanding the architectural characteristics of software components that constitute distributed systems is crucial for maintaining and evolving them. One component framework heavily used for developing component-based software systems is Microsoft's COM+. In this paper, we concentrate on the analysis of COM+ components and introduce an iterative and interactive approach that combines component inspection techniques with source code analysis to obtain a complete abstract model of each COM+ component. The model describes important architectural characteristics such as transactions, security, and persistence, as well as create and use dependencies between components, and maps these higher-level concepts down to their implementation in source files. Based on the model, engineers can browse a software system's COM+ components and navigate from the list of architectural characteristics to the corresponding source code statements. We also discuss the Island Hopper application, with which our approach has been validated.
Gerold Schneider, Extracting and Using Trace-Free Functional Dependencies from the Penn Treebank to Reduce Parsing Complexity, In: Proceedings of Treebanks and Linguistic Theories (TLT) 2003, Växjö, Sweden, 2003. (Conference or Workshop Paper)
Many extensions to text-based, data-intensive knowledge management approaches, such as Information Retrieval or Data Mining, focus on integrating the impressive recent advances in language technology. For this, they need fast, robust parsers that deliver linguistic data which is meaningful for the subsequent processing stages. This paper introduces such a parsing system. Its output is a hierarchical structure of syntactic relations: functional dependency structures.
Elia Yuste, Fabio Rinaldi, Extracción automática de respuestas para documentación técnica, In: SEPLN 2003 (XIX Congreso de la Sociedad Española para el Procesamiento del Lenguaje Natural), Alcalá de Henares (Madrid), Spain, 2003. (Conference or Workshop Paper)
Fabio Rinaldi, Kaarel Kaljurand, James Dowdall, Michael Hess, Breaking the Deadlock, In: ODBASE, 2003 (International Conference on Ontologies, Databases and Applications of SEmantics), Springer, Catania, Italy, 2003. (Conference or Workshop Paper)
Many of the proposed approaches to the semantic web have a substantial drawback. They are all based on the idea that web pages (or, more generally, resources) will contain semantic annotations that allow remote agents to access them. However, the problem of creating those annotations is seldom addressed. Manual creation of the annotations is not a feasible option, except in a few experimental cases.
We propose an approach based on Language Processing techniques that addresses this issue, at least for textual resources (which still constitute the vast majority of the material available on the web). Documents are analyzed fully automatically and converted into a semantic annotation, which can then be stored together with the original documents. It is this annotation that constitutes the machine-understandable resource that remote agents can query.
A semi-automatic approach is also considered, in which the system suggests candidate annotations and the user simply has to approve or reject them. Advantages and drawbacks of both approaches are discussed.
Diego Mollà Aliod, Fabio Rinaldi, Rolf Schwitter, James Dowdall, Michael Hess, NLP for Answer Extraction in Technical Domains, In: 10th Conference of The European Chapter of the Association for Computational Linguistics. Workshop: Natural Language Processing for Question Answering EACL-2003, Budapest, Hungary, 2003. (Conference or Workshop Paper)
In this paper we argue that question answering (QA) over technical domains is distinctly different from TREC-based QA or Web-based QA and cannot benefit from data-intensive approaches. Technical questions arise in situations where concrete problems require specific answers and explanations. Finding a justification of the answer in the context of the document is essential if we are to solve a real-world problem. We show that NLP techniques can be used successfully in technical domains for high-precision access to information stored in documents. We present ExtrAns, an answer extraction system for technical domains, its architecture, its use of logical forms for answer extraction, and how terminology extraction becomes an important part of the system.
Johannes Ryser, Szenarienbasiertes Validieren und Testen von Softwaresystemen (Scenario-based Validation and Testing of Software Systems; in German), University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2003. (Dissertation)
Scenarios (use cases) – descriptions of sequences of interactions between two or more partners, usually between a system and its users – have attracted much attention and gained widespread use in requirements and software engineering over the last couple of years. In scenarios, the functionality and behavior of a (software) system is captured from a user-centered perspective. To date, scenarios are mainly used in the requirements elicitation and analysis phase of software development.
Even though scenarios are mainly used in system analysis, their use in other phases of software development is of much interest, as it could help cut costs through reuse and improved validation and verification. As scenarios form a kind of abstract test case for the system under development, the idea of using them to derive test cases for system test is quite intriguing. Yet in practice, scenarios from the analysis phase are seldom used to create concrete system test cases. An analysis of the central problems of software testing, and of the reasons why scenarios from the analysis phase are not used for creating system test cases, leads us to propose the following premises or concepts, which we consider valuable approaches to improving testing and which consequently serve as the basis for the approach presented in this thesis:
* Reuse of analysis scenarios for validation purposes and in testing.
* Integration of test methods & activities and of the methods & activities of the software development methodology used.
* Creation of test cases for system test in a systematic manner and early in the development cycle.
* Modeling the dependencies and relationships among scenarios and using this dependency model to refine test case derivation from the scenario model.
In this document, a method is developed and described that is based on the concepts presented above and on the deficiencies found in existing approaches. A step-by-step procedure for creating scenarios is defined, and the further use of analysis scenarios in development and testing is discussed. We call the approach the SCENT method: a method for SCENario-based validation and Test of software.
The main issues of requirements elicitation and documentation are analyzed, and the advantages and disadvantages of formal specification languages versus natural language are discussed. A detailed step-by-step procedure for the creation of descriptive, narrative scenarios is defined, and a template to help describe, structure, and document natural language scenarios is presented. Natural language scenario descriptions are formalized as statecharts. This helps to avoid, or at least alleviate, some of the problems of natural language specifications (ambiguity, inconsistencies, imprecise or vague expressions, and the like). These statecharts – being a more formal representation of the narrative scenarios – are then used to systematically derive test cases. Furthermore, the statechart-based functional model is annotated with non-functional requirements and data, and used to validate the system. Dependencies and relations among scenarios are captured in a special model that is complementary to the behavioral model. This so-called SCENT dependency chart is used to derive further test cases, and thus to expand and refine the test suite created from the behavioral model. A language and a notation are defined to capture and document the dependencies among the scenarios, and an example illustrates their use.
The tester uses the formalized and annotated scenarios and the dependency charts to derive test cases for system test. This is done in a systematic way by traversing paths in the statecharts and the dependency charts, respectively. First results, gained by applying the method to real projects in an international company, are presented and discussed, and problems and open questions are addressed.
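The path-traversal step can be sketched as a small graph search over a labeled transition graph. This uses a hypothetical "withdraw cash" statechart; SCENT itself prescribes far more (annotations, dependency charts, coverage criteria) than this sketch shows:

```python
# Sketch: model a scenario statechart as state -> [(event, next_state)]
# and enumerate event paths from the start state to an end state; each
# path is a candidate system test case.
def derive_test_paths(transitions, start, end):
    """Depth-first enumeration of event sequences from start to end.

    Cycles are cut by forbidding repeated states within one path.
    """
    paths = []
    def walk(state, events, visited):
        if state == end:
            paths.append(tuple(events))
            return
        for event, nxt in transitions.get(state, []):
            if nxt not in visited:
                walk(nxt, events + [event], visited | {nxt})
    walk(start, [], {start})
    return sorted(paths)

# Hypothetical "withdraw cash" scenario statechart.
atm = {
    "idle":           [("insert card", "authenticating")],
    "authenticating": [("PIN ok", "menu"), ("PIN wrong", "idle")],
    "menu":           [("withdraw", "done"), ("cancel", "done")],
}
tests = derive_test_paths(atm, "idle", "done")
```

Each returned event sequence would then be fleshed out with concrete input data and expected results to become an executable system test case.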
Special section on 'Modellierung 2002', Edited by: Martin Glinz, Günther Müller-Luschnat, Springer, Heidelberg, Germany, 2003. (Edited Scientific Work)
Martin Glinz, Desert Island Column, Automated Software Engineering, Vol. 10 (4), 2003. (Journal Article)
Dirk Frohberg, Communities - The MOBIlearn perspective, In: International Conference on Communities and Technologies, Workshop Ubiquitous and mobile computing for educational communities: Enriching and enlarging community spaces, 2003. (Conference or Workshop Paper)
Andreas Majer, Gerhard Schwabe, Korvis - Ein kommunales Rats- und Verwaltungsinformationssystem, In: Vom E-Business zur E-Society - New Economy im Wandel, Hampp, München, p. 145 - 156, 2003. (Book Chapter)
Gerhard Schwabe, Growing an application from collaboration to management support - the example of Cupark, In: Management Information Systems: Managing the Digital Firm, Pearson Prentice Hall, Upper Saddle River, p. 529, 2003. (Book Chapter)
Helen Sharp, Josie Taylor, Andreas Löber, Dirk Frohberg, Daisy Mwanza, Elena Murelli, Establishing user requirements for a mobile learning environment, In: Eurescom Summit 2003, 2003. (Conference or Workshop Paper)