D J Kurz, Abraham Bernstein, K Hunt, D Radovanovic, P Erne, Z Siudak, O Bertel, Simple point of care risk stratification in acute coronary syndromes: the AMIS model, Heart, Vol. 95 (8), 2009. (Journal Article)
Background: Early risk stratification is important in the management of patients with acute coronary syndromes (ACS).
Objective: To develop a rapidly available risk stratification tool for use in all ACS.
Design and methods: Application of modern data mining and machine learning algorithms to a derivation cohort of 7520 ACS patients included in the AMIS (Acute Myocardial Infarction in Switzerland)-Plus registry between 2001 and 2005; prospective model testing in two validation cohorts.
Results: The most accurate prediction of in-hospital mortality was achieved with the “Averaged One-Dependence Estimators” (AODE) algorithm, with input of seven variables available at first patient contact: age, Killip class, systolic blood pressure, heart rate, pre-hospital cardiopulmonary resuscitation, history of heart failure, and history of cerebrovascular disease. The c-statistic for the derivation cohort (0.875) was essentially maintained in important subgroups, and calibration over five risk categories, ranging from <1% to >30% predicted mortality, was accurate. Results were validated prospectively against an independent AMIS-Plus cohort (n=2854, c-statistic 0.868) and the Krakow-Region ACS Registry (n=2635, c-statistic 0.842). The AMIS model significantly outperformed established “point-of-care” risk prediction tools in both validation cohorts. Compared with a logistic regression-based model, the AODE-based model proved more robust when tested on the Krakow validation cohort (c-statistic 0.842 vs. 0.746). The accuracy of the AMIS model prediction was maintained at 12-month follow-up in an independent cohort (n=1972, c-statistic 0.877).
Conclusions: The AMIS model is a reproducibly accurate point-of-care risk stratification tool for the complete range of ACS, based on variables available at first patient contact.
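For context, the AODE classifier named in the results has a standard closed form (this is the textbook formulation of Webb et al.'s algorithm, not an equation taken from the paper): each input variable in turn serves as the single parent on which all other variables depend, and the resulting one-dependence estimates are averaged:

$$
\hat{P}(y, x_1, \dots, x_n) \;=\; \frac{1}{|\{i : F(x_i) \ge m\}|} \sum_{i \,:\, F(x_i) \ge m} \hat{P}(y, x_i) \prod_{j=1}^{n} \hat{P}(x_j \mid y, x_i)
$$

Here $F(x_i)$ is the training frequency of the value $x_i$ and $m$ is a minimum-frequency threshold. Averaging over all qualifying parent variables avoids model selection entirely, which is one plausible reason for the robustness across cohorts reported above.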
Michael Imhof, Optimization strategies for RDFS-aware data storage, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2009. (Master's Thesis)
Indexing and storing triple-based Semantic Web data in a way that allows for efficient query processing has long been a difficult task. A recent approach to address this issue is the indexing scheme Hexastore. In this work, we propose two novel on-disk storage models for Hexastore that use RDF Schema information to gather data that semantically belong together and store them contiguously. In the clustering approach, elements of the same classes are stored contiguously within the indices. In the subindex approach, data of the same categories are saved in separate subindices. We thus expect to simplify and accelerate Hexastore's retrieval process. The experimental evaluation, however, shows a clear advantage of the standard storage model over the proposed approaches in terms of index creation time and required disk space.
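For readers unfamiliar with Hexastore, the following minimal in-memory sketch illustrates its sixfold indexing scheme; all names and structures here are illustrative assumptions, not the thesis implementation:

```python
from collections import defaultdict
from itertools import permutations

# Illustrative sketch: one two-level index per ordering of
# (subject, predicate, object), so that any triple pattern can be
# answered by direct lookups in the appropriate index.
ORDERS = ["".join(p) for p in permutations("spo")]  # spo, sop, pso, pos, osp, ops

class Hexastore:
    def __init__(self):
        self.idx = {o: defaultdict(lambda: defaultdict(set)) for o in ORDERS}

    def add(self, s, p, o):
        t = {"s": s, "p": p, "o": o}
        for order in ORDERS:
            a, b, c = t[order[0]], t[order[1]], t[order[2]]
            self.idx[order][a][b].add(c)

    def objects(self, s, p):
        # the pattern (s, p, ?o) is served by the "spo" index
        return self.idx["spo"][s][p]
```

The clustering and subindex models proposed in the thesis would then arrange such index entries on disk grouped by their RDF Schema classes rather than purely by value.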
Adrian Bachmann, Abraham Bernstein, Data Retrieval, Processing and Linking for Software Process Data Analysis, No. IFI-2009.0003b, Version: 1, 2009. (Technical Report)
Many projects in the mining software repositories community rely on software process data gathered from bug tracking databases and the commit logs of version control systems. These data are then used to predict defects, gain insight into a project's life-cycle, and support other tasks. In this technical report we introduce the software systems which hold such data and present our approach for retrieving, processing and linking them. Specifically, we first introduce the bug fixing process and the software products that support it. We then present step-by-step guidance for our approach to retrieve, parse, convert and link the data sources. Additionally, we introduce an improved approach for linking the change log file with the bug tracking database, with which we achieve a higher linking rate than other approaches.
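The report's improved linking algorithm is not reproduced here, but the baseline heuristic that such approaches build on (scanning commit messages for bug identifiers and validating them against the bug database) can be sketched as follows; all names are hypothetical:

```python
import re

# Hypothetical baseline sketch: find bug references such as "bug 1234",
# "fixes #1234" or "issue 1234" in commit messages and keep only those
# that match a known bug ID in the bug tracking database.
BUG_REF = re.compile(r"(?:bug|fix(?:es|ed)?|issue)[\s#:]*(\d+)", re.IGNORECASE)

def link_commits_to_bugs(commits, known_bug_ids):
    """commits: iterable of (revision, message); known_bug_ids: set of ints."""
    links = []
    for revision, message in commits:
        for match in BUG_REF.finditer(message):
            bug_id = int(match.group(1))
            if bug_id in known_bug_ids:
                links.append((revision, bug_id))
    return links
```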
Ausgezeichnete Informatikdissertationen 2008, Edited by: Abraham Bernstein, Steffen Hölldobler, et al, Gesellschaft für Informatik, Bonn, 2009. (Edited Scientific Work)
The Semantic Web - ISWC 2009, Edited by: Abraham Bernstein, D R Karger, T Heath, L Feigenbaum, D Maynard, E Motta, K Thirunarayan, Springer, Berlin, 2009. (Edited Scientific Work)
This book constitutes the refereed proceedings of the 8th International Semantic Web Conference, ISWC 2009, held in Chantilly, VA, USA, during October 25-29, 2009.
The volume contains 43 revised full research papers selected from a total of 250 submissions; 15 papers out of 59 submissions to the Semantic Web in-use track; and 7 papers and 12 posters accepted out of 19 submissions to the doctoral consortium.
The topics covered in the research track are ontology engineering; data management; software and service engineering; non-standard reasoning with ontologies; semantic retrieval; OWL; ontology alignment; description logics; user interfaces; Web data and knowledge; Semantic Web services; semantic social networks; and rules and relatedness. The Semantic Web in-use track covers knowledge management; business applications; applications from home to space; and services and infrastructure.
Esther Kaufmann, Talking to the semantic web - natural language query interfaces for casual end-users, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2009. (Dissertation)
Abraham Bernstein, E Kaufmann, C Kiefer, Querying the semantic web with ginseng - A guided input natural language search engine, In: Searching answers : Festschrift in honour of Michael Hess on the occasion of his 60th birthday, MV-Wissenschaft (Monsenstein und Vannerdat), Münster, p. 1 - 10, 2009. (Book Chapter)
K Reinecke, Abraham Bernstein, S Hauske, To Make or to Buy? Sourcing Decisions at the Zurich Cantonal Bank, In: International Conference on Information Systems (ICIS), 2008-12-14. (Conference or Workshop Paper published in Proceedings)
The case study describes the IT situation at Zurich Cantonal Bank around the turn of the millennium. It shows how the legacy systems, incapable of fulfilling the company’s strategic goals, force the company to decide whether to modify the old systems or to replace them with standard software packages: to make or to buy? The case study introduces the bank’s strategic goals and their importance for the three make-or-buy alternatives. All solutions are described in detail; however, the bank’s decision is left open for students to decide. For a thorough analysis of the situation, students are required to put themselves in the position of the key decision maker at Zurich Cantonal Bank, calculating risks and balancing the advantages and disadvantages of each solution. Six video interviews reveal further technical and interpersonal aspects of the decision-making process at the bank, as well as of the situation today.
Michael Meier, The extended GraphSlider-Framework, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2008. (Master's Thesis)
This diploma thesis addresses the automatic recognition of fraudulent activities in the transaction databases of a bank. To this end, the existing fraud detection program GraphSlider is extended with new functions. The first addresses the recognition of fraud based on temporal data in the database, because such data is almost always available but very seldom used for fraud detection. The second addresses the recognition of internal fraud at the employee level; to achieve this, our approach tries to trace fraudulent actions back to the individual employee. Finally, the new approaches are tested on synthetic data to assess their capability and performance.
Raphael Pirker, Erweiterung des DBDoc-Systems um inkrementelle Dokumentationserstellung und Dokumentation von Schemaänderungen, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2008. (Master's Thesis)
The database documentation tool DBDoc accurately portrays the status quo of a database schema and the database management system itself. When using database documentation, however, the evolution of the database schema is also of interest. While the documentation could be copied at given intervals to an archive and then compared manually, the changes would be very hard to detect and out of context.
This thesis discusses the implementation of plugins that tie into the existing infrastructure of DBDoc. The plugins automatically detect and store modified schemas and augment the documentation with information on what was changed over time.
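As a rough illustration of what such a plugin has to do (a sketch over assumed data structures, not the DBDoc code), two schema snapshots can be compared and the differences reported:

```python
# Hypothetical sketch: each snapshot maps table -> {column: type};
# the diff reports added/removed tables and columns and changed types.
def diff_schemas(old, new):
    changes = []
    for table in sorted(old.keys() | new.keys()):
        if table not in new:
            changes.append(("table removed", table))
        elif table not in old:
            changes.append(("table added", table))
        else:
            for col in sorted(old[table].keys() | new[table].keys()):
                if col not in new[table]:
                    changes.append(("column removed", f"{table}.{col}"))
                elif col not in old[table]:
                    changes.append(("column added", f"{table}.{col}"))
                elif old[table][col] != new[table][col]:
                    changes.append(("type changed", f"{table}.{col}"))
    return changes
```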
Basil Wirz, Dynamische Adaption von Benutzerschnittstellen an das Interaktionsverhalten, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2008. (Master's Thesis)
To meet users' demand for an individually adapted user interface that reflects their preferences, their abilities or their cultural background, the interface can be changed either before or during use. This work follows the second approach and describes the structure of such an adaptation system for user interfaces. For the adaptation, the interaction behavior of the users is tracked in order to make appropriate dynamic adjustments. To demonstrate the applicability of the proposed solution, it is implemented in an existing web application. This implementation demonstrates the adaptation possibilities in a real example. An experiment on user interaction serves to collect the baseline data necessary for the adaptation.
Eirik Aune, Adrian Bachmann, Abraham Bernstein, Christian Bird, Premkumar Devanbu, Looking Back on Prediction: A Retrospective Evaluation of Bug-Prediction Techniques, November 2008. (Other Publication)
Thomas Scharrenbach, End-user assisted ontology evolution in uncertain domains, In: The Semantic Web - ISWC 2008, 7th International Semantic Web Conference, Springer, Heidelberg, 2008-10-26. (Conference or Workshop Paper published in Proceedings)
Learning ontologies from large text corpora is a well-understood task, while evolving ontologies dynamically from user input has rarely been addressed so far. Evolution of ontologies has to deal with vague or incomplete information. Accordingly, the formalism used for knowledge representation must be able to handle this kind of information. Classical logical approaches such as description logics are particularly poor at addressing uncertainty. Ontology evolution may benefit from exploring probabilistic or fuzzy approaches to knowledge representation. In this thesis an approach to evolving and updating ontologies is developed which uses explicit and implicit user input and extends probabilistic approaches to ontology engineering.
S Ferndriger, Abraham Bernstein, J S Dong, Y Feng, Y F Li, J Hunter, Enhancing semantic web services with inheritance, In: 7th International Semantic Web Conference (ISWC 2008), Springer, Berlin / Heidelberg, 2008-10-26. (Conference or Workshop Paper published in Proceedings)
Currently proposed Semantic Web Services technologies allow the creation of ontology-based semantic annotations of Web services so that software agents are able to discover, invoke, compose and monitor these services with a high degree of automation. The OWL Services (OWL-S) ontology is an upper ontology in the OWL language, providing essential vocabulary to semantically describe Web services. Currently, OWL-S services can only be developed independently; if one service is unavailable, finding a suitable alternative requires an expensive and difficult global search/match. It is desirable to have a new OWL-S construct that can systematically support substitution tracing as well as incremental development and reuse of services. Introducing inheritance relationships (IRs) into OWL-S is a natural solution. However, OWL-S, as well as most of the other currently discussed formalisms for Semantic Web Services such as WSMO or SAWSDL, has yet to define a concrete and self-contained mechanism for establishing inheritance relationships among services, which we believe is very important for the automated annotation and discovery of Web services as well as for the human organization of services into a taxonomy-like structure. In this paper, we extend OWL-S with the ability to define and maintain inheritance relationships between services. Through the definition of an additional “inheritance profile”, inheritance relationships can be stated and reasoned about. Two types of IRs are allowed to grant service developers the choice of whether or not to respect the “contract” between services. The proposed inheritance framework has been implemented and the prototype is briefly evaluated as well.
Amancio Bouza, G Reif, Abraham Bernstein, Harald Gall, SemTree: ontology-based decision tree algorithm for recommender systems, In: International Semantic Web Conference, 2008-10-26. (Conference or Workshop Paper)
Recommender systems play an important role in supporting people when choosing items from an overwhelmingly large number of choices. So far, no recommender system makes use of domain knowledge. We model user preferences with a machine learning approach to recommend items to people by predicting the item ratings. Specifically, we propose SemTree, an ontology-based decision tree learner that uses a reasoner and an ontology to semantically generalize item features and thereby improve the effectiveness of the decision tree built. We show that SemTree outperforms comparable approaches by providing more accurate recommendations through the consideration of domain knowledge.
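A minimal sketch of the generalization step at the heart of this idea, lifting item features to their ontology superclasses before splits are evaluated (a toy superclass map stands in for the reasoner and ontology; this is not the authors' implementation):

```python
# Illustrative sketch: generalize a feature by walking up a toy
# subclass hierarchy, so that related features (e.g. Comedy, Thriller)
# can be grouped into one more predictive decision tree split.
def generalize(feature, superclass_of, levels=1):
    for _ in range(levels):
        feature = superclass_of.get(feature, feature)
    return feature

superclass_of = {"Comedy": "Movie", "Thriller": "Movie", "Movie": "Item"}
print(generalize("Comedy", superclass_of))               # Movie
print(generalize("Thriller", superclass_of, levels=2))   # Item
```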
Rolf Grütter, Thomas Scharrenbach, Bettina Bauer-Messmer, Improving an RCC-Derived Geospatial Approximation by OWL Axioms, In: The Semantic Web - ISWC 2008, 7th International Semantic Web Conference, Springer, October 2008. (Conference or Workshop Paper)
Cathrin Weiss, Abraham Bernstein, Sandro Boccuzzo, i-MoCo: Mobile Conference Guide - Storing and querying huge amounts of Semantic Web data on the iPhone/iPod Touch, October 2008. (Other Publication)
Querying and storing huge amounts of Semantic Web data has usually required a lot of computational power. This is no longer true if one makes use of recent research outcomes like modern RDF indexing strategies. We present a mobile conference guide application that combines several different RDF data sets to present interlinked information about publications, conferences, authors, locations, and more to the user. With our application we show that it is possible to store a large amount of indexed data on an iPhone/iPod Touch device. That querying is also efficient is demonstrated by creating the application's actual content out of real-time queries on the data.
Bettina Bauer-Messmer, Thomas Scharrenbach, Rolf Grütter, Improving an environmental ontology by incorporating user-input, In: Environmental Informatics and Industrial Ecology. Proceedings of the 22nd International Conference on Informatics for Environmental Protection, Shaker Verlag, Aachen, 2008-09-10. (Conference or Workshop Paper published in Proceedings)
Samuel Galliker, Generierung von synthetischen Banktransaktionsdaten, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2008. (Bachelor's Thesis)
This bachelor's thesis explains the Java code developed to generate synthetic bank transactions with realistic distribution figures: the Transaction Evaluator analyses the structure of the original data, before the Transaction Builder generates the synthetic data based on the ascertained properties. Furthermore, the performance of the program is evaluated on the basis of two test sets. It turns out that the implementation works and the results are satisfactory; however, the ideal settings still remain to be found.
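The thesis code itself is Java and is not reproduced here; a compact sketch of the two-stage Evaluator/Builder pipeline it describes might look as follows (statistics and names are simplified assumptions):

```python
import random

# Hypothetical sketch: the evaluator derives per-account distribution
# figures from original data; the builder samples synthetic transactions
# from those figures.
def evaluate(transactions):
    """transactions: iterable of (account, amount) pairs."""
    amounts = {}
    for account, amount in transactions:
        amounts.setdefault(account, []).append(amount)
    return {a: (sum(v) / len(v), max(v) - min(v)) for a, v in amounts.items()}

def build(stats, n):
    """Generate n synthetic (account, amount) pairs."""
    accounts = list(stats)
    synthetic = []
    for _ in range(n):
        account = random.choice(accounts)
        mean, spread = stats[account]
        # crude spread-to-sigma conversion, purely illustrative
        synthetic.append((account, round(random.gauss(mean, spread / 4), 2)))
    return synthetic
```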
Simon Berther, Implementierung eines skalierbare Triple Stores, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2008. (Master's Thesis)
The growing number of Semantic Web applications produces more and more data in RDF/OWL format. The persistent storage of this strongly interlinked data is not trivial. The data is often mapped to relational databases, even though it is not relational but rather graph-based. There are already various approaches to storing this data in a persistent way; frequently, however, these systems neglect some requirements. Hexastore was developed at the University of Zurich. This sixfold indexing approach for Semantic Web data is scalable, does not discriminate against any RDF elements, and is applicable to any dataset without requiring prior knowledge of it. The idea of Hexastore was implemented as an in-memory prototype. This work presents an approach to storing Hexastore persistently on disk. The suitability of this method is demonstrated with several queries over two datasets and evaluated against some reference systems.
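As a toy illustration of what persisting one of Hexastore's six indices to disk involves (Python's dbm module is used purely for illustration; the thesis' actual storage layout is not reproduced here):

```python
import dbm

# Illustrative sketch: keys concatenate the first two triple components,
# values hold the associated third components, giving on-disk lookups
# for patterns with two bound positions.
def persist_index(index, path):
    """index: dict mapping (a, b) -> iterable of c values."""
    with dbm.open(path, "c") as db:
        for (a, b), cs in index.items():
            db[f"{a}\x00{b}".encode()] = "\x00".join(sorted(cs)).encode()

def lookup(path, a, b):
    with dbm.open(path, "r") as db:
        raw = db.get(f"{a}\x00{b}".encode())
        return raw.decode().split("\x00") if raw else []
```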