Barbara Good, Technologie zwischen Markt und Staat: Die Kommission für Technologie und Innovation und die Wirksamkeit ihrer Förderung, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2005. (Dissertation)
This book centers on the Swiss Commission for Technology and Innovation (KTI), which supports collaborative R&D projects between a research institution and a firm. The questions of interest are what effects KTI funding has and how these effects come about. To answer them, the study first sets out theoretical considerations on the effects and impact mechanisms of research, technology and innovation funding. It then undertakes a meta-evaluation of fourteen earlier evaluation studies of KTI funding, which serves as quality assurance for the subsequent evaluation synthesis. The synthesis compiles the effects identified in the existing evaluation studies; in a final empirical step, an original impact analysis is carried out, building on the theoretical and empirical findings gathered so far.
Sandra Hopkins, Peter Zweifel, The Australian health policy changes of 1999 and 2000: an evaluation, Applied Health Economics and Health Policy, Vol. 4 (4), 2005. (Journal Article)
This article evaluates three measures introduced by the Australian Federal Government in 1999 and 2000 that were designed to encourage private health insurance and relieve financial pressure on the public healthcare sector. These policy changes were (i) a 30% premium rebate, (ii) health insurers offering lifetime enrolment on existing terms and the future relaxation of premium regulation by permitting premiums to increase with age, and (iii) a mandate for insurers to offer complementary coverage for bridging the gap between actual hospital billings and benefits paid.
These measures were evaluated first in terms of expected benefits and costs at the individual level. On this first set of criteria, the policy changes as a whole may have been efficiency-increasing. The Australian Government mandate to launch gap policies may well have created a spillover moral hazard effect, to the extent that full insurance coverage encouraged policy holders to also use more public hospital services, thus undermining the government's stated objective of relieving public hospitals from demand pressure. Without this spillover moral hazard effect, there might have been a reduction in waiting times in the public sector. Second, the measures were evaluated against the additional benchmarks of cost to the public purse, access and equity, and dynamic efficiency. Although the policy changes were found to be largely justifiable on the first set of criteria, they do not appear justifiable on the second: uncertainties and doubts remain about their effects on overall cost, access and equity, and dynamic efficiency. This is a common experience in countries that have considered shifting their healthcare systems between the private and public sectors.
H Egger, P Egger, The determinants of EU processing trade, World Economy, Vol. 28 (2), 2005. (Journal Article)
This paper assesses the determinants of European outward and inward processing trade, distinguishing between size, relative factor endowment, (other) cost factors and infrastructure variables. Using a large panel of bilateral processing trade flows of the EU12 countries at the aggregate level over the period 1988–1999, we find that infrastructure variables, relative factor endowments and other cost variables are important determinants of the EU's outward processing trade. Costs also play a key role for the EU's inward processing trade.
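The estimation strategy described here, a panel of bilateral flows with country-pair and year fixed effects, might look roughly like the following sketch. All variable names, coefficients and panel dimensions below are illustrative placeholders on synthetic data, not the authors' specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Build a synthetic bilateral panel: 40 country pairs, 1988-1999.
rng = np.random.default_rng(0)
rows = []
for pair in range(40):
    for year in range(1988, 2000):
        size = rng.normal(10, 1)    # log joint market size
        endow = rng.normal(0, 1)    # log relative factor endowment
        cost = rng.normal(0, 1)     # log other cost factors
        infra = rng.normal(0, 1)    # log infrastructure quality
        log_opt = 1.2 * size + 0.3 * endow - 0.5 * cost + 0.4 * infra \
            + rng.normal(0, 1)      # log outward processing trade flow
        rows.append((pair, year, log_opt, size, endow, cost, infra))

df = pd.DataFrame(
    rows, columns=["pair", "year", "log_opt", "size", "endow", "cost", "infra"])

# Two-way fixed effects: pair and year dummies absorb unobserved
# heterogeneity; standard errors are clustered by country pair.
fit = smf.ols("log_opt ~ size + endow + cost + infra + C(pair) + C(year)",
              data=df).fit(cov_type="cluster", cov_kwds={"groups": df["pair"]})
print(fit.params[["size", "endow", "cost", "infra"]])
```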
H Egger, V Grossmann, The double role of skilled labor, new technologies and wage inequality, Metroeconomica, Vol. 56 (1), 2005. (Journal Article)
We examine the relationship between the supply of skilled labor, technological change and relative wages. Accounting for the role of skilled labor in both production activities and productivity-enhancing "support" activities, we derive the following results. First, an increase in the supply of skilled labor raises the employment share of non-production labor within firms without lowering relative wages. Second, new technologies raise wage inequality only insofar as they give firms incentives to reallocate skilled labor towards non-production activities. In contrast, skill-biased technological change of the sort usually considered in the literature does not affect wage inequality.
Hannes Egli, The environmental Kuznets Curve: theory and evidence, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2005. (Dissertation)
S Buehler, The promise and pitfalls of restructuring network industries, German Economic Review, Vol. 6 (2), 2005. (Journal Article)
This paper examines the competitive effects of reorganizing a network industry's vertical structure. In this industry, an upstream monopolist operates a network used as an input to produce horizontally differentiated final products that are imperfect substitutes. Three potential pitfalls of restructuring integrated network industries are analyzed: (i) double marginalization, (ii) underinvestment and (iii) vertical foreclosure. The paper studies the net effect of restructuring on retail prices and cost-reducing investment and discusses policy implications.
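Of the three pitfalls, double marginalization lends itself to a quick worked example: with linear demand, a separated chain stacks two markups and ends up with a higher retail price and lower output than an integrated monopolist. The numbers below are textbook placeholders, not taken from the paper.

```python
# Linear demand P(Q) = a - b*Q, upstream marginal cost c.
a, b, c = 10.0, 1.0, 2.0

# Integrated monopolist: max (a - b*Q - c)*Q  ->  Q = (a - c) / (2b).
q_int = (a - c) / (2 * b)
p_int = a - b * q_int

# Separated chain: downstream takes wholesale price w and sets
# Q(w) = (a - w) / (2b); upstream then maximizes (w - c)*Q(w),
# which gives w = (a + c) / 2, a second markup on top of the first.
w = (a + c) / 2
q_sep = (a - w) / (2 * b)
p_sep = a - b * q_sep

print(f"integrated: P = {p_int:.2f}, Q = {q_int:.2f}")   # P = 6.00, Q = 4.00
print(f"separated:  P = {p_sep:.2f}, Q = {q_sep:.2f}")   # P = 8.00, Q = 2.00
```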
Christian Ewerhart, The pure theory of multilateral obligations, Journal of Institutional and Theoretical Economics JITE, Vol. 161 (2), 2005. (Journal Article)
Beat Fluri, Harald Gall, Martin Pinzger, Fine-Grained Analysis of Change Couplings, In: Proceedings of the 5th International Workshop on Source Code Analysis and Manipulation, IEEE Computer Society, January 2005. (Conference or Workshop Paper)
In software evolution analysis, many approaches analyze release history data available through versioning systems. Recent investigations of CVS data have shown that files which are commonly committed together highlight change couplings. However, CVS stores modifications on a textual basis and does not track structural changes such as the insertion, removal, or modification of methods or classes. Current approaches therefore do not analyze in detail whether change couplings are caused by source code couplings or by other textual modifications, such as updates in license terms.
The focus of this paper is on adding structural change information to existing release history data. We present an approach that uses the structure compare services shipped with the Eclipse IDE to obtain the corresponding fine-grained changes between two subsequent versions of any Java class. This information supports filtering those change couplings that result from structural changes, so we can distill the causes of change couplings along releases and single out those that are structurally relevant. A first validation of our approach with a medium-sized open source software system showed that a considerable number of change couplings are not caused by source code changes.
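As a rough illustration of the gap the paper addresses, the toy sketch below compares the method sets of two versions of a Java class to recover structural changes that text-level versioning data hides. It uses regex-based extraction for brevity; the paper itself relies on the Eclipse structure compare services over full program structure.

```python
import re

# Crude extraction of Java method names; the real approach works on the
# structural (AST-level) representation, not on regular expressions.
METHOD = re.compile(r"(?:public|protected|private)[^;{=]*?(\w+)\s*\([^)]*\)\s*\{")

def methods(source: str) -> set:
    return set(METHOD.findall(source))

v1 = """
class Cart {
    public void add(Item i) { items.add(i); }
    public int total() { return sum; }
}
"""
v2 = """
class Cart {
    public void add(Item i) { items.add(i); notify(); }
    public int total() { return sum; }
    private void notify() { }
}
"""

old, new = methods(v1), methods(v2)
print("inserted:", new - old)   # {'notify'}
print("removed: ", old - new)   # set()
# Methods present in both versions may still have modified bodies; a
# per-method diff would classify those as modifications.
print("shared:  ", old & new)   # {'add', 'total'}
```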
Gerald Reif, Harald Gall, Mehdi Jazayeri, WEESA - Web Engineering for Semantic Web Applications, In: Proceedings of the 14th International World Wide Web Conference, Chiba, Japan, January 2005. (Conference or Workshop Paper)
The success of the Semantic Web crucially depends on the existence of Web pages that provide machine-understandable meta-data. This meta-data is typically added in the semantic annotation process, which is currently not part of the Web engineering process. Web engineering proposes methodologies to design, implement and maintain Web applications, but these methodologies lack the generation of meta-data. In this paper we introduce a technique to extend existing Web engineering methodologies to develop semantically annotated Web pages. The novelty of this approach is the definition of a mapping from XML Schema to ontologies, called WEESA, that can be used to automatically generate RDF meta-data from XML content documents. We further show how we integrated the WEESA mapping into an Apache Cocoon transformer to easily extend XML-based Web applications to semantically annotated Web applications.
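The core idea, generating RDF meta-data from XML content via a declarative mapping to an ontology, can be sketched as follows. This is a Python/rdflib stand-in with an invented mini-ontology; the actual WEESA mapping is defined against the XML Schema and runs inside a Cocoon transformer.

```python
import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, Namespace, RDF, URIRef

# Hypothetical content document and mini-ontology; WEESA defines the
# mapping against the XML Schema rather than hard-coding it like this.
XML = "<event id='e1'><title>Opening Night</title><venue>Main Hall</venue></event>"
EX = Namespace("http://example.org/ontology#")
MAPPING = {"title": EX.hasTitle, "venue": EX.hasVenue}  # element -> property

root = ET.fromstring(XML)
subject = URIRef("http://example.org/events/" + root.get("id"))

# Apply the mapping: one RDF triple per mapped XML element.
g = Graph()
g.add((subject, RDF.type, EX.Event))
for child in root:
    if child.tag in MAPPING:
        g.add((subject, MAPPING[child.tag], Literal(child.text)))

print(g.serialize(format="turtle"))
```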
Gerald Reif, WEESA - Web Engineering for Semantic Web Applications, TU Vienna, 2005. (Dissertation)
In the last decade the increasing popularity of the World Wide Web has
led to an exponential growth in the number of pages available on the
Web. This huge number of Web pages makes it increasingly difficult for
users to find required information. In searching the Web for specific
information, one gets lost in the vast number of irrelevant search
results and may miss relevant material. Current Web applications
provide Web pages in HTML format representing the content in natural
language only and the semantics of the content is therefore not
accessible by machines. To enable machines to support the user in
solving information problems, the Semantic Web proposes an extension
to the existing Web that makes the semantics of the Web pages
machine-processable. The semantics of the information of a Web page is
formalized using RDF meta-data describing the meaning of the content.
The existence of semantically annotated Web pages is therefore crucial
in bringing the Semantic Web into existence.
Semantic annotation addresses this problem and aims to turn
human-understandable content into a machine-processable form by adding
semantic markup. Many tools have been developed that support the user
during the annotation process. The annotation process, however, is a
separate task and is not integrated in the Web engineering process.
Web engineering proposes methodologies to design, implement and
maintain Web applications but these methodologies lack the generation
of meta-data.
In this thesis we introduce a technique to extend existing XML-based
Web engineering methodologies to develop semantically annotated Web
pages. The novelty of this approach is the definition of a mapping
from XML Schema to ontologies, called WEESA, that can be used to
automatically generate RDF meta-data from XML content documents. We
further demonstrate the integration of the WEESA meta-data generator
into the Apache Cocoon Web development framework to easily extend
XML-based Web applications to semantically annotated Web applications.
Looking at the meta-data of a single Web page gives only a limited
view of the information available in a Web application. For
querying and reasoning purposes it is better to have the full meta-data
model of the whole Web application as a knowledge base at hand. In
this thesis we introduce the WEESA knowledge base, which is generated
at server side by accumulating the meta-data from individual Web
pages. The WEESA knowledge base is then offered for download and
querying by software agents.
Finally, the Vienna International Festival industry case study
illustrates the use of WEESA within an Apache Cocoon Web application
in real life. We discuss the lessons learned while implementing the
case study and give guidelines for developing Semantic Web
applications using WEESA.
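The WEESA knowledge base described above amounts to accumulating the per-page RDF graphs into one server-side model that software agents can query. A minimal sketch with rdflib, using invented page data and an invented query:

```python
from rdflib import Graph

# Per-page RDF meta-data as produced by the WEESA mapping (invented data).
page1 = Graph().parse(data="""
@prefix ex: <http://example.org/ontology#> .
<http://example.org/events/e1> ex:hasTitle "Opening Night" .
""", format="turtle")
page2 = Graph().parse(data="""
@prefix ex: <http://example.org/ontology#> .
<http://example.org/events/e2> ex:hasTitle "Closing Gala" .
""", format="turtle")

# Accumulate the individual page graphs into one knowledge base.
kb = Graph()
for page in (page1, page2):
    for triple in page:
        kb.add(triple)

# The combined model can now be queried as a whole, e.g. with SPARQL.
for row in kb.query(
        "SELECT ?e ?t WHERE { ?e <http://example.org/ontology#hasTitle> ?t }"):
    print(row.e, row.t)
```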
Michele Lanza, Stephane Ducasse, Harald Gall, Martin Pinzger, CodeCrawler: An Information Visualization Tool for Program Comprehension, In: Proceedings of the 27th International Conference on Software Engineering, ACM, St. Louis, MO, USA, 2005. (Conference or Workshop Paper)
CODECRAWLER is a language-independent, interactive software visualization tool. It is mainly targeted at visualizing object-oriented software and, in its newest implementation, has become a general-purpose information visualization tool. It has been successfully validated in several industrial case studies over the past few years. CODECRAWLER strongly adheres to lightweight principles: it implements and visualizes polymetric views, visualizations of software enriched with information such as software metrics and other source code semantics. CODECRAWLER is built on top of Moose, an extensible, language-independent reengineering environment that implements the FAMIX metamodel.
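The polymetric view principle, mapping metrics onto a node's width, height and color, can be illustrated with a small standalone sketch (matplotlib rather than the Moose/Smalltalk stack CODECRAWLER builds on; the class data is invented):

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

# name, #attributes, #methods, lines of code (invented sample classes)
classes = [("Parser", 4, 20, 800), ("Token", 2, 5, 60), ("Lexer", 6, 12, 400)]
max_loc = max(loc for _, _, _, loc in classes)

fig, ax = plt.subplots()
x = 0.0
for name, n_attr, n_meth, loc in classes:
    # Polymetric mapping: width = #attributes, height = #methods,
    # gray shade = lines of code (darker means larger).
    shade = 1.0 - 0.8 * loc / max_loc
    ax.add_patch(Rectangle((x, 0), n_attr, n_meth, color=str(shade)))
    ax.text(x + n_attr / 2, n_meth + 1, name, ha="center")
    x += n_attr + 2
ax.set_xlim(-1, x)
ax.set_ylim(0, 30)
ax.set_axis_off()
plt.show()
```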
Martin Pinzger, Harald Gall, Michael Fischer, Michele Lanza, Visualizing multiple evolution metrics, In: Proceedings of the ACM Symposium on Software Visualization (SoftVis'2005), ACM, St. Louis, Missouri, USA, 2005. (Conference or Workshop Paper)
Observing the evolution of very large software systems requires analyzing large, complex data models and visualizing condensed views of the system. Software metrics have been used to compute such condensed views, but current techniques concentrate on visualizing data of one particular release and provide only insufficient support for visualizing data of several releases. In this paper we present the RelVis visualization approach, which provides integrated, condensed graphical views on source code and release history data of up to n releases. Metric values of source code entities and relationships are composed into Kiviat diagrams as annual rings. The diagrams highlight the good and bad times of an entity and facilitate the identification of entities and relationships with critical trends; these represent potential refactoring candidates that should be addressed before further evolving the system. The paper provides the necessary background information and an evaluation of the approach with a large open source software project.
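The Kiviat encoding, one axis per metric and one polygon per release drawn as an annual ring, might be sketched like this (matplotlib; metric names and values are invented):

```python
import numpy as np
import matplotlib.pyplot as plt

metrics = ["LOC", "methods", "attributes", "fan-in", "fan-out"]
# One normalized metric vector per release of some module (invented values).
releases = {"r1.0": [0.2, 0.3, 0.2, 0.1, 0.2],
            "r2.0": [0.5, 0.4, 0.3, 0.4, 0.5],
            "r3.0": [0.9, 0.6, 0.4, 0.8, 0.9]}

angles = np.linspace(0, 2 * np.pi, len(metrics), endpoint=False).tolist()
angles += angles[:1]   # repeat the first angle to close each polygon

ax = plt.subplot(polar=True)
for name, values in releases.items():
    vals = values + values[:1]
    ax.plot(angles, vals, label=name)   # each release is one "annual ring"
    ax.fill(angles, vals, alpha=0.1)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(metrics)
ax.legend()
plt.show()
```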
Jens Knodel, Isabel John, Dharmalingam Ganesan, Martin Pinzger, Fernando Usero, Jose L. Arciniegas, Claudio Riva, Asset Recovery and Incorporation into Product Lines, In: Proceedings of the 12th IEEE Working Conference on Reverse Engineering, IEEE Computer Society, Pittsburgh, Pennsylvania, USA, January 2005. (Conference or Workshop Paper)
Software product lines aim at having a common platform from which several similar products can be derived. The elements of the platform are called assets, and they are managed in an asset base that is part of the product line infrastructure. The products are then built on top of the assets. Assets can include in-house developments, open source or third-party software modules, as well as design and project documents. In the context of the European-wide project FAMILIES, we concentrated on techniques used to build the platform, with a focus on the recovery of these assets from existing systems. We present an approach for incorporating existing assets into the product line infrastructure that explicitly distinguishes between asset origins and the different information sources available. The incorporation is a quality-driven process that is backed up by a set of reverse engineering techniques to evaluate an asset's internal quality. This quality assessment is the critical measure for industrial development organizations when incorporating assets into their product line infrastructure.
Fabio Rinaldi, Elia Yuste, Gerold Schneider, Michael Hess, David Roussel, Exploiting Technical Terminology for Knowledge Management, In: Ontology Learning from Text: Methods, Evaluation and Applications, Amsterdam: IOS Press (Frontiers in artificial intelligence and applications, edited by J. Breuker et al., volume 123), 2005. (Conference or Workshop Paper)
Julie Weeds, James Dowdall, Gerold Schneider, Bill Keller, David Weir, Using Distributional Similarity to Organise BioMedical Terminology, Terminology, Vol. 11 (1), 2005. (Journal Article)
We investigate an application of distributional similarity techniques to the problem of structurally organising biomedical terminology. Our application domain is the relatively small GENIA corpus. Using terms that have been accurately marked up by hand within the corpus, we consider the problem of automatically determining semantic proximity. Terminological units are defined for our purposes as normalised classes of individual terms. Syntactic analysis of the corpus data is carried out using the Pro3Gres parser and provides the data required to calculate distributional similarity using a variety of measures. Evaluation is performed against a hand-crafted gold standard for this domain in the form of the GENIA ontology. We show that distributional similarity can be used to predict semantic type with a good degree of accuracy, reaching an optimal value of 63.1%.
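The underlying computation, representing each term by a vector of syntactic co-occurrence features from parser output and comparing the vectors, can be sketched with cosine similarity as one example measure (toy feature counts, not GENIA data):

```python
from math import sqrt

# Term -> counts of (grammatical relation, co-occurring word) features,
# as would be extracted from parser output (invented toy values).
vectors = {
    "IL-2":      {("obj", "activate"): 5, ("mod", "human"): 2, ("subj", "bind"): 3},
    "IL-4":      {("obj", "activate"): 4, ("mod", "human"): 1, ("subj", "bind"): 2},
    "apoptosis": {("obj", "induce"): 6, ("mod", "cell"): 4},
}

def cosine(u: dict, v: dict) -> float:
    # Dot product over shared features, normalized by vector lengths.
    dot = sum(u[f] * v[f] for f in u.keys() & v.keys())
    norm = sqrt(sum(x * x for x in u.values())) \
        * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# Terms with similar syntactic contexts score high; unrelated terms score 0.
print(cosine(vectors["IL-2"], vectors["IL-4"]))       # ~0.99
print(cosine(vectors["IL-2"], vectors["apoptosis"]))  # 0.0
```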
Gerhard Schwabe, Christoph Göth, Mobile Learning with a Mobile Game: Design and Motivational Effects, Journal of Computer Assisted Learning, Vol. 21, 2005. (Journal Article)
Marco Prestipino, Virtual Communities and Wikis from a knowledge management perspective, 2005. (Other Publication)
Malgorzata Bugajska, Framework for Spatial Visual Design of Abstract Information, In: International Conference on Information Visualization, IEEE, 2005. (Conference or Workshop Paper)
Marco Prestipino, Community Based Electronic Guidebooks, In: Proceedings of CollECTeR Europe 2005, 2005. (Conference or Workshop Paper)
Denise Da Rin, Was Mitarbeiter vom E-Learning halten, wirtschaft und weiterbildung (03), 2005. (Journal Article)