Peter Zweifel, Stefan Felder, Andreas Werblow, Population ageing and health care expenditure: new evidence on the "red herring", Geneva Papers on Risk and Insurance, Vol. 29 (4), 2004. (Journal Article)
The observation that average health care expenditure rises with age generally leads experts and laymen alike to conclude that population ageing is the main driver of health care costs. In recently published studies we challenged this view (Zweifel et al., 1999; Felder et al., 2000). Analysing health care expenditure of deceased persons, we showed that age is insignificant if proximity to death is controlled for. Thus, we argued that population ageing per se will not have a significant impact on future health care expenditure. Several authors (Salas and Raftery, 2001; Dow and Norton, 2002; Seshamani and Gray, 2004a) disputed the robustness of these findings, pointing to potential weaknesses in the econometric methodology. This paper revisits the debate and provides new empirical evidence, taking into account the methodological concerns that have been raised. We also include surviving individuals to test for the possibility that the relative importance of proximity to death and age differs between the deceased and survivors. The results vindicate our earlier findings of no significant age effect on health care expenditure of the deceased. However, with respect to the survivors, we find that age may matter. Still, a naive estimation that does not control for proximity to death will grossly overestimate the effect of population ageing on aggregate health care expenditure. Following Stearns and Norton (2004), we conclude that "it is time for time to death" in projections of future health care costs. |
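The core methodological point of the abstract, that omitting proximity to death makes age appear to drive expenditure, can be illustrated with a small synthetic sketch. This is not the authors' Swiss sick-fund data or their econometric model; the data-generating assumptions (expenditure driven only by time to death, which correlates with age) are invented purely to show the omitted-variable effect.

```python
# Illustrative sketch only: synthetic data, not the paper's estimation.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
age = rng.uniform(60, 95, n)
# Assumption: expected remaining lifetime shrinks with age.
years_to_death = np.maximum(rng.exponential(scale=100 - age), 0.1)
# Assumption: expenditure depends on proximity to death, not on age directly.
expenditure = 5_000 + 20_000 / (1 + years_to_death) + rng.normal(0, 1_000, n)

def ols(regressors, y):
    """Ordinary least squares via lstsq; returns [intercept, coefficients...]."""
    X = np.column_stack([np.ones(len(y))] + list(regressors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

naive = ols([age], expenditure)                                   # expenditure ~ age
controlled = ols([age, 1 / (1 + years_to_death)], expenditure)    # ... + time to death

print(f"naive age coefficient:      {naive[1]:8.1f} per year of age")
print(f"controlled age coefficient: {controlled[1]:8.1f} per year of age")
# The naive coefficient is large and positive; once proximity to death is
# controlled for, the age coefficient collapses towards zero.
```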
|
Johannes Binswanger, Public debt and pension policy under lexicographic choice behavior: a new psychological economics approach, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2004. (Dissertation)
|
|
Peter Zweifel, Reexamining drug regulation from the perspective of innovation policy: comment, Journal of Institutional and Theoretical Economics JITE, Vol. 160 (1), 2004. (Journal Article)
This is a very colorful paper that makes for interesting reading. The author shows that the U.S. Food and Drug Administration (FDA) increasingly does not only decide on market access for drugs, but influences the innovation process as a whole. And although "information" does not appear in the title of the paper, the distribution of knowledge as affected by the FDA plays an important role at several stages of this process. For this reason, the body of this commentary is arranged according to the stages of the innovation process. |
|
Adrian Berwert, Barbara Good, Beat Hotz-Hart, Andreas Reuter-Hofer, The Finnish system of innovation - lessons for Switzerland?, Swiss Academy of Engineering Sciences, SATW, Zurich, 2004. (Book/Research Monograph)
Innovation has become a key element of the economic growth of highly developed countries. Moreover, it is undisputed that Switzerland needs to strengthen its innovation efforts. This has become clear, among other things, in the message of the Federal Council, the Swiss government, which emphasises the promotion of education, research and technology for the years 2004 to 2007. The speedy translation of technological and scientific potential into innovative products and services is one of the primary requirements for being competitive in the marketplace and, hence, for securing jobs. Primarily, this is a challenge to entrepreneurs: launching innovative products and services is more demanding, and involves greater risks, than rationalising existing production. Nevertheless, although entrepreneurial skills and qualities are at the fore in successful innovation processes, the influence of the state and the framework it sets should not be overlooked. It is well worth examining and reconsidering these factors from time to time. Comparing the Swiss innovation system with those of other countries can be a highly profitable exercise. |
|
Rob Euwals, Rainer Winkelmann, Training intensity and first labor market outcomes of apprenticeship graduates, International Journal of Manpower, Vol. 25 (5), 2004. (Journal Article)
The apprenticeship system is the most important source of formal post-secondary training in Germany. Using German register data – the IAB Employment Sample – it is found that apprentices staying with their training firm after graduation have longer first-job durations but not higher wages than apprentices leaving the training firm. Retention rates, first-job durations and post-apprenticeship wages are all increasing functions of training intensity. Some implications for the ongoing debate as to why firms are willing to invest in general training are discussed. |
|
Gerald Reif, Harald Gall, Mehdi Jazayeri, Towards Semantic Web Engineering: WEESA - Mapping XML Schema to Ontologies, In: Workshop on Application Design, Development and Implementation Issues in the Semantic Web at the 13th International World Wide Web Conference, CEUR Workshop Proceedings, New York, USA, January 2004. (Conference or Workshop Paper)
The existence of semantically tagged Web pages is crucial to bring the Semantic Web to life, but it is still costly to develop and maintain Web applications that offer both data and meta-data. Several standard Web engineering methodologies exist for designing and implementing Web applications. In this paper we introduce an approach for extending existing Web engineering methodologies to develop semantically tagged Web applications. The novelty of this approach is the definition and implementation of a mapping from XML Schema to ontologies that can be used to automatically generate RDF meta-data from XML content documents. |
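The following sketch illustrates the general idea of generating RDF meta-data from XML content via a mapping, as described in the abstract. The actual WEESA mapping definition, its XML Schema handling and its vocabulary are not given in the abstract, so the namespace, element paths and mapping table below are invented for illustration only.

```python
# Minimal sketch of WEESA-style meta-data generation (illustrative mapping only).
import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/ontology#")   # hypothetical ontology namespace

XML_CONTENT = """
<conference id="www2004">
  <title>13th International World Wide Web Conference</title>
  <location>New York</location>
</conference>
"""

# Hypothetical mapping: XML element name -> ontology property.
MAPPING = {
    "title": EX.hasTitle,
    "location": EX.heldIn,
}

def xml_to_rdf(xml_text: str) -> Graph:
    """Generate RDF meta-data from an XML content document via the mapping."""
    root = ET.fromstring(xml_text)
    graph = Graph()
    subject = EX[root.attrib["id"]]
    graph.add((subject, RDF.type, EX.Conference))
    for child in root:
        prop = MAPPING.get(child.tag)
        if prop is not None:
            graph.add((subject, prop, Literal(child.text)))
    return graph

print(xml_to_rdf(XML_CONTENT).serialize(format="turtle"))
```

In WEESA the mapping is derived from the XML Schema and an ontology rather than hand-written per document; the hard-coded dictionary here merely stands in for that mapping step.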
|
Martin Pinzger, Michael Fischer, Mehdi Jazayeri, Harald Gall, Abstracting module views from source code, In: Proceedings of the International Conference on Software Maintenance (ICSM'04), IEEE Computer Society, Chicago, USA, 2004. (Conference or Workshop Paper)
In this paper we present ArchView, an approach for abstracting and visualizing software module views from source code. ArchView computes abstraction metrics that are used to filter out architectural elements and relationships of minor interest, resulting in more manageable and comprehensible module views of software architectures. |
|
Gerold Schneider, Fabio Rinaldi, Kaarel Kaljurand, Michael Hess, Steps towards a GENIA Dependency Treebank, In: Proc. of the Third Workshop on Treebanks and Linguistic Theories (TLT) 2004, Tübingen, Germany, 2004. (Conference or Workshop Paper)
In this paper we describe ongoing work aimed at creating a dependency-based annotated treebank for the biomedical domain. Our starting point is the GENIA corpus, a corpus of 2000 MEDLINE abstracts that has been manually annotated for various biological entities according to the GENIA Ontology. There is an exponential growth of published research in this sector, which makes it difficult even for experts to follow recent developments. This creates the need for tools that can automatically process the research literature and extract only relevant information, such as interactions between genes and proteins. For these tools to be developed, annotated resources such as corpora and treebanks are of fundamental importance. Such resources will support the development of practical domain-specific information extraction tools. |
|
Gerold Schneider, Fabio Rinaldi, James Dowdall, Fast, Deep-Linguistic Statistical Minimalist Dependency Parsing, In: Proc. of COLING-2004 Recent Advances in Dependency Grammars, Geneva, Switzerland, 2004. (Conference or Workshop Paper)
We present and evaluate an implemented statistical minimal parsing strategy that exploits DG characteristics to permit fast, robust, deep-linguistic analysis of unrestricted text, and compare its probability model to (Collins, 1999) and an adaptation (Dubey and Keller, 2003). We show that DG allows the majority of English LDDs to be expressed in a context-free way and offers simple yet powerful statistical models. |
|
Gerold Schneider, Combining Shallow and Deep Processing for a Robust, Fast, Deep-Linguistic Dependency Parser, In: Proc. of the European Summer School in Logic, Language and Information ESSLLI 2004, Nancy, France, 2004. (Conference or Workshop Paper)
This paper describes Pro3Gres, a fast, robust, broad-coverage parser that delivers deep-linguistic
grammatical relation structures as output, which are closer to predicate-argument structures and
more informative than pure constituency structures. The parser stays as shallow as is possible
for each task, combining shallow and deep-linguistic methods by integrating chunking and by expressing
the majority of long-distance dependencies in a context-free way. It combines statistical
and rule-based approaches, different linguistic grammar theories and different linguistic resources.
Preliminary evaluations indicate that the parser’s performance is state-of-the-art. |
|
Eric SanJuan, James Dowdall, Fidelia Ibekwe-SanJuan, Fabio Rinaldi, A symbolic approach to Automatic MultiWord Term Structuring, Computer Speech and Language, 2004. (Journal Article)
This paper presents a three-level structuring of multiword terms based on lexical inclusion, WordNet similarity and a clustering approach. Term clustering by automatic data analysis methods offers an interesting way of organizing a domain's knowledge structure, useful for several information-oriented tasks such as science and technology watch, text mining, computer-assisted ontology population and Question Answering (Q–A). This paper explores how this three-level term structuring brings to light the knowledge structures of a genomics corpus and compares the mapping of the domain topics against a hand-built ontology (the GENIA ontology). Ways of integrating the results into a Q–A system are discussed. |
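A small sketch of the first two structuring levels mentioned above, under assumptions of my own: "lexical inclusion" is approximated as one term's tokens being a subset of another's, and term relatedness is computed as WordNet (Wu-Palmer) similarity between head nouns via NLTK. The paper's clustering level is not reproduced here, and the example terms are invented.

```python
# Illustrative sketch of lexical inclusion and WordNet-based term similarity.
from itertools import combinations
from nltk.corpus import wordnet as wn   # requires: nltk.download("wordnet")

terms = ["transcription factor", "nuclear transcription factor",
         "binding site", "DNA binding site"]

def lexically_includes(shorter: str, longer: str) -> bool:
    """True if every token of the shorter term also occurs in the longer term."""
    return set(shorter.split()) <= set(longer.split())

def head_similarity(t1: str, t2: str) -> float:
    """Wu-Palmer similarity between head nouns (assumed to be the last token)."""
    s1, s2 = wn.synsets(t1.split()[-1], "n"), wn.synsets(t2.split()[-1], "n")
    if not s1 or not s2:
        return 0.0
    return max(a.wup_similarity(b) or 0.0 for a in s1 for b in s2)

for t1, t2 in combinations(terms, 2):
    if lexically_includes(t1, t2) or lexically_includes(t2, t1):
        print(f"inclusion: {t1!r} <-> {t2!r}")
    else:
        print(f"WordNet head similarity {head_similarity(t1, t2):.2f}: {t1!r} / {t2!r}")
```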
|
Fabio Rinaldi, Michael Hess, James Dowdall, Diego Mollà Aliod, Rolf Schwitter, New Directions in Question Answering, AAAI/MIT Press, 2004. (Book Chapter)
The current tendency in Question Answering is towards the processing of large volumes of open-domain text. This tendency is spurred by the creation of the Question Answering track in TREC and the recent increase in systems that use the Web to extract answers to questions. This undoubtedly has the advantage that narrow, application-specific concerns can be set aside in favor of more general approaches. However, the unconstrained nature of the domain and questions does not necessarily lead to systems that are better at the specific tasks that might be required in a deployed application.
It has already been observed in other competitions (notably the Information Extraction competitions organized under the name of Message Understanding Conferences) that the nature of the competitive process tends to select the type of system that best adapts to the evaluation itself, rather than systems that deal with the problem in an optimal way. To use a comparison from evolutionary theory, too severe a selection in a given local environment leads to convergence of the population to a very limited gene pool, which is then incapable of coping with even a minor change in the environment.
In restricted domains, systems cannot take advantage of the so-called "Zipf's law of questions" (Prager), which states that there is an inverse relation between the frequency of certain types of questions and their complexity. In other words, the questions asked most frequently are those that can be solved with simpler techniques. By targeting a small set of frequent question types, a system can achieve good results with limited effort.
By contrast, the non-redundant nature of most technical documentation, and the use of domain-specific sublanguage and terminology, make it unsuitable for (some of) the approaches seen in the TREC QA competition. In the proposed contribution we discuss the specific nature of technical documentation, with examples from real domains (e.g. the maintenance manual of a large commercial aircraft), and illustrate solutions that have been adopted in a deployed system.
An example of the difference between technical documents and open-domain texts is the focus on specific types of entities. While Named Entities play a major role in open-domain systems, they are almost irrelevant in technical documentation; a far greater role is played by domain terminology.
Technical domains present the additional problem of "domain navigation". Because such systems assume that users are familiar with domain concepts, inexpert users face a barrier separating questions from answers. Unfamiliarity with domain terminology may lead to questions that contain imperfect formulations of domain terms. A question answering system for junior doctors or trainee technicians therefore needs to use whatever scarce domain knowledge is contained in a query to extract relevant answers. Detecting terminological variants and exploiting the relations between terms (such as synonymy, meronymy and antonymy) is vital to this task.
Another idiosyncrasy of technical domains is the tendency towards definitional questions ("what is the ANT connection?"), which are tricky to answer precisely in a generic document collection (and for this reason they were deliberately left out of the recent TREC 2002). In technical domains such questions can be expected to play a major role, and systems must therefore be capable of coping with them.
In this book chapter we aim to explain the above concepts and illustrate them with examples taken from texts in technical domains. We also illustrate why techniques that are typically used in data-intensive open-domain question answering systems would not work effectively in technical domains with less data redundancy. In sum, we show that question answering in technical domains presents a better opportunity to explore content-based approaches, while at the same time offering the possibility of producing commercially viable systems in the short term. |
|
Manfred Klenner, Fabio Rinaldi, Michael Hess, Steps towards Semantically Annotated Language Resources, In: Proc. of LREC-2004, Lisbon, Portugal, 2004. (Conference or Workshop Paper)
|
|
Kaarel Kaljurand, Fabio Rinaldi, James Dowdall, Michael Hess, Exploiting Language Resources for Semantic Web Annotations, In: Proc. of LREC-2004, Lisbon, Portugal, 2004. (Conference or Workshop Paper)
|
|
James Dowdall, Will Lowe, Jeremy Elleman, Fabio Rinaldi, Michael Hess, The role of MultiWord Terminology in Knowledge Management, In: Proc. of LREC-2004, Lisbon, Portugal, 2004. (Conference or Workshop Paper)
|
|
James Dowdall, Fabio Rinaldi, Andreas Persidis, Kaarel Kaljurand, Gerold Schneider, Michael Hess, Terminology expansion and relation identification between genes and pathways, In: Proc. of the Workshop on Terminology, Ontology and Knowledge Representation, Université Jean Moulin (Lyon 3), January 2004. (Conference or Workshop Paper)
|
|
Yong Xia, Martin Glinz, Extending a Graphic Modeling Language to Support Partial and Evolutionary Specification, In: APSEC '04: Proceedings of the 11th Asia-Pacific Software Engineering Conference (APSEC'04), IEEE Computer Society, Busan, Korea, 2004. (Conference or Workshop Paper)
The notion of partial and evolutionary specification has gained attention in both research and industry in recent years. While many people regard this merely as a process issue, we are convinced that it is also a language problem. Unfortunately, UML is not expressive enough to deal with evolutionary information in a system. In this paper, we propose an extension of a graphic modeling language called ADORA, which is being developed in our research group. We conservatively extend the semantics of some ADORA constructs so that intentional incompleteness can be expressed in the language, and we define a calculus for refining such specifications. With the help of these extensions, evolutionary specifications can be written in a controlled and systematic way. As the language and its extensions are formally defined, the consistency of evolutionary refinements can be checked mechanically by a tool. |
|
Christian Seybold, Silvio Meier, Martin Glinz, Evolution of Requirements Models by Simulation, In: IWPSE '04: Proceedings of the 7th International Workshop on Principles of Software Evolution (IWPSE'04), IEEE Computer Society, Kyoto, Japan, 2004. (Conference or Workshop Paper)
Simulation is a common means of validating requirements models. Simulating formal models is state-of-the-art. However, requirements models usually are not formal, for two reasons. Firstly, a formal model cannot be created in one step: requirements are vague in the beginning and are refined stepwise towards a more formal representation. Secondly, requirements change, leading to a continuously evolving model. Hence, a requirements model will be complete and formal only at the end of the modeling process, if at all. If we want to use simulation as a means of continuous validation during requirements evolution, the simulation technique employed must be capable of dealing with semi-formal, incomplete models. In this paper, we present an approach for handling partial models during simulation and for using simulation to support the evolution of these models. Our approach transfers the ideas of drivers, stubs, and regression from testing to the simulation of requirements models, and it uses the simulation results to evolve an incomplete model systematically towards a more formal and complete one.
|
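As a rough illustration of the drivers/stubs/regression idea described in the abstract above, the sketch below simulates a partially refined model by substituting a stub for an unrefined component and re-running a recorded scenario after the component is refined. The component names, stub default and scenario are invented; this is not the paper's ADORA-based notation or tooling.

```python
# Illustrative sketch: simulating a partial requirements model with stubs.
from typing import Callable, Dict, List, Optional

# A component is either refined (has executable behaviour) or still informal (None).
components: Dict[str, Optional[Callable[[int], int]]] = {
    "validate_order": lambda qty: qty if 0 < qty <= 100 else 0,  # refined
    "compute_price": None,                                        # not yet refined
}

def stub(name: str) -> Callable[[int], int]:
    """Stand-in for an unrefined component: returns a fixed, labelled default."""
    print(f"  [stub] {name} not refined yet, returning default 0")
    return lambda _x: 0

def simulate(scenario: List[int]) -> List[int]:
    """Driver: push scenario inputs through the (partially refined) model."""
    results = []
    for qty in scenario:
        accepted = components["validate_order"](qty)
        price_fn = components["compute_price"] or stub("compute_price")
        results.append(price_fn(accepted))
    return results

baseline = simulate([5, 120, 30])                 # results for the partial model
components["compute_price"] = lambda q: 10 * q    # evolution step: refine the stub
regression = simulate([5, 120, 30])               # regression: re-run old scenario
print("before refinement:", baseline)
print("after refinement: ", regression)
```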
|
Arun Mukhija, Martin Glinz, A Framework for Dynamically Adaptive Applications in a Self-Organized Mobile Network Environment, In: ICDCSW '04: Proceedings of the 24th International Conference on Distributed Computing Systems Workshops (ICDCSW'04), IEEE Computer Society, Tokyo, Japan, 2004. (Conference or Workshop Paper)
Self-organized mobile networks present a challenging environment for the execution of software applications, due to their dynamic topologies and constantly changing resource conditions. In view of this, a desirable property of software applications running over such networks is the ability to adapt dynamically to changing execution environments. The Contract-based Adaptive Software Architecture (CASA) provides a framework for the development of adaptive applications that are able to adapt their functionality and/or performance dynamically in response to runtime changes in their execution environments. The approach of the CASA framework is to decouple application code from any assumptions about resource availability, while enabling the application to execute under varying resource conditions. The CASA framework relies on specifying the adaptation behavior of applications in application contracts, which enables dynamic adaptation to be carried out in an application-transparent manner. |
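The contract idea described above, keeping adaptation behavior in a declarative specification outside the application code, can be sketched as follows. The contract fields, resource names and configurations are illustrative assumptions, not CASA's actual contract schema or API.

```python
# Hedged sketch of a contract-driven configuration selection mechanism.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Configuration:
    name: str
    required: Dict[str, float]   # minimum resources, e.g. bandwidth in kbit/s
    utility: int                 # higher = preferred when resources permit

# Hypothetical application contract for a video component.
contract: List[Configuration] = [
    Configuration("full_video", {"bandwidth": 512, "battery": 30}, utility=3),
    Configuration("low_res_video", {"bandwidth": 128, "battery": 15}, utility=2),
    Configuration("audio_only", {"bandwidth": 32, "battery": 5}, utility=1),
]

def select_configuration(resources: Dict[str, float]) -> Configuration:
    """Pick the highest-utility configuration whose requirements are satisfied."""
    feasible = [c for c in contract
                if all(resources.get(r, 0) >= v for r, v in c.required.items())]
    return max(feasible, key=lambda c: c.utility) if feasible else contract[-1]

# A runtime would re-evaluate the contract whenever monitored resources change.
print(select_configuration({"bandwidth": 600, "battery": 80}).name)  # full_video
print(select_configuration({"bandwidth": 90, "battery": 50}).name)   # audio_only
```

The point of the sketch is the separation of concerns: the application code never inspects resource levels itself; a generic mechanism reads the contract and switches configurations transparently.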
|
Proceedings of the Doctoral Consortium at the 12th IEEE International Requirements Engineering Conference, Edited by: Martin Glinz, Kyoto, Japan, 2004. (Proceedings)
|
|