Ausgezeichnete Informatikdissertationen 2010, Edited by: Steffen Hölldobler, Abraham Bernstein, et al., Gesellschaft für Informatik, Bonn, 2011. (Edited Scientific Work)
|
|
The Semantic Web - ISWC 2011 - 10th International Semantic Web Conference, Bonn, Germany, October 23-27, 2011, Proceedings, Part II, Edited by: Lora Aroyo, Chris Welty, Harith Alani, Jamie Taylor, Abraham Bernstein, Lalana Kagal, Natasha Noy, Eva Blomqvist, Springer, Heidelberg, Germany, 2011. (Proceedings)
|
|
The Semantic Web - ISWC 2011 - 10th International Semantic Web Conference, Bonn, Germany, October 23-27, 2011, Proceedings, Part I, Edited by: Lora Aroyo, Chris Welty, Harith Alani, Jamie Taylor, Abraham Bernstein, Lalana Kagal, Natasha Noy, Eva Blomqvist, Springer, Heidelberg, 2011. (Proceedings)
|
|
Dengping Wei, Ting Wang, Ji Wang, Abraham Bernstein, SAWSDL-iMatcher: A customizable and effective Semantic Web Service matchmaker, Web Semantics: Science, Services and Agents on the World Wide Web, Vol. 9 (4), 2011. (Journal Article)
As the number of publicly available services grows, discovering suitable services becomes an important issue and has attracted many research efforts. This paper presents a new customizable and effective matchmaker, called SAWSDL-iMatcher. It supports a matchmaking mechanism, named iXQuery, which extends XQuery with various similarity joins for SAWSDL service discovery. Using SAWSDL-iMatcher, users can flexibly customize their preferred matching strategies according to different application requirements. SAWSDL-iMatcher currently supports several matching strategies, including syntactic and semantic matching strategies as well as several statistical-model-based matching strategies, which can effectively aggregate similarity values from matching on various types of service description information such as service name, description text, and semantic annotation. Furthermore, we propose a semantic matching strategy to measure the similarity among SAWSDL semantic annotations. These matching strategies have been evaluated in SAWSDL-iMatcher on SAWSDL-TC2 and the Jena Geography Dataset (JGD). The evaluation shows that different matching strategies are suitable for different tasks and contexts, which implies the necessity of a customizable matchmaker. In addition, it provides evidence for the claim that the effectiveness of SAWSDL service matching can be significantly improved by statistical-model-based matching strategies. Our matchmaker is competitive with other matchmakers on the benchmark tests of the S3 contest 2009. |
|
Francisco de Freitas, Distributed signal/collect, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2011. (Master's Thesis)
New demands for analyzing and working with large data sets pose new challenges for computation models, especially when dealing with Semantic Web information. Signal/Collect proposes an elegant model for applying graph algorithms to various data sets. However, a distributed mode for horizontally scaling and processing large volumes of data is missing. This thesis analyzes existing graph computation models and compares distributed message-passing frameworks in order to propose an integrated Distributed Signal/Collect solution that addresses the problem of limited scalability. We show that it is possible to implement distributed mechanisms using the Actor Model, although with some caveats. We also propose future work to further enhance our solution. |
|
Yves Bilgerig, Word sense disambiguation with signal/collect, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2011. (Bachelor's Thesis)
Word Sense Disambiguation is an interesting problem in the field of Natural Language Processing. This bachelor thesis presents a graph-based implementation of a word sense disambiguation algorithm. In addition, it aims to show that this algorithm achieves better results when combined with a part-of-speech tagger. Unfortunately, the presented algorithm could not reach the baseline set by others' work; possible reasons are discussed in the last section. However, a slight but positive improvement could be shown for the variant of the algorithm that uses a part-of-speech tagger. |
|
Daniel Strebel, Making signal/collect scale, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2011. (Bachelor's Thesis)
The size of the indexable web, and of other data collections structured as graphs, is growing at an exorbitant pace and undoubtedly exceeds the available memory resources of a single machine. This thesis presents a way that allows Signal/Collect to process data sets that would not fit into main memory by storing the elements of the graph on disk. It describes different back-end solutions for holding the vertices, as well as other measures that have to be taken in order to load large graphs onto disk and execute algorithms on them. In the evaluation we show the effect of these optimizations and that on-disk storage allows processing a graph with one million vertices with only 500 MB of RAM. We also show that the on-disk version of an SSSP computation is considerably slower than a comparable distributed implementation, and why the computation times of an on-disk SSSP computation will not scale linearly with the graph size or with the number of worker threads. |
|
Jayalath Ekanayake, Jonas Tappolet, Harald Gall, Abraham Bernstein, Time variance and defect prediction in software projects: additional figures, Version: 2, 2011. (Technical Report)
This technical report contains the complete set of figures that could not be included in the article "Time variance and defect prediction in software projects". |
|
Claudia D'Amato, Abraham Bernstein, Volker Tresp, Guest editorial Preface, International Journal on Semantic Web and Information Systems (IJSWIS), Vol. 7 (2), 2011. (Journal Article)
|
|
Proc. of the ECML/PKDD 2011 Workshop on Planning to Learn and Service-Oriented Knowledge Discovery, Edited by: Jörg-Uwe Kietz, Simon Fischer, Nada Lavrac, Vid Podpecan, Zurich, Switzerland, 2011. (Proceedings)
|
|
Patrick Minder, Abraham Bernstein, CrowdLang - First steps towards programmable human computers for general computation, In: 3rd Human Computation Workshop (HCOMP 2011), AAAI Publications, San Francisco, CA, USA, 2011-01-01. (Conference or Workshop Paper published in Proceedings)
Crowdsourcing markets such as Amazon’s Mechanical Turk provide an enormous potential for accomplishing work by combining human and machine computation. Today crowdsourcing is mostly used for massively parallel information processing for a variety of tasks such as image labeling. However, as we move to more sophisticated problem-solving, there is little knowledge about managing dependencies between steps and a lack of tools for doing so. As the contribution of this paper, we present the concept of an executable, model-based programming language and a general-purpose framework for accomplishing more sophisticated problems. Our approach is inspired by coordination theory and an analysis of emergent collective intelligence. We illustrate the applicability of our proposed language by combining machine and human computation based on existing interaction patterns for several general computation problems. |
|
Yannick Koechlin, Tygrstore: a flexible framework for high performance large scale RDF storage, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2011. (Bachelor's Thesis)
This thesis describes the architecture of a highly flexible triplestore framework. Its main features are pluggable back-end storage facilities, horizontal scalability, a simple API, and the generation of endless result streams. Special attention has been paid to easy extensibility. First a detailed view of the architecture is given; later, more details on the actual implementation are presented. Finally, two possible triplestore setups are benchmarked and profiled. It is shown that the currently limiting factors do not lie within the architecture but in the library code of the back ends. Possible solutions and enhancements to the framework are also discussed. |
|
Clemens Wilding, IfiPipes - the RDF UI widget framework, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2011. (Master's Thesis)
This master's thesis is about the creation of a semi-automatic data retrieval and visualization framework. The framework describes how semantic data can be retrieved from web services or SPARQL endpoints. Using Semantic Web technologies like RDF and OWL, the queried data is analyzed and, depending on the type of data, a fitting visualization is displayed. The framework is implemented as a Java web application built with the Google Web Toolkit. The application allows a user with knowledge of Semantic Web basics to query data from web services and to upload new data sets. These can be visualized with little or no programming skill, which should make it easier for non-engineers to use semantic data. The application makes use of the cutting-edge tGraph framework to analyze and display temporal data. Special visualization features provided in the web application are the display of information on a map, temporal sliders for selecting time intervals, and a tabular representation for larger data sets. |
|
Katja Kevic, Planning service composition using eProPlan, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2011. (Bachelor's Thesis)
This thesis tackles the problem of automating Web Service composition using eProPlan. A main conceptualization was elaborated, the ontology was modeled in OWL 2, and several Web Services were implemented. The thesis demonstrates to what extent the planner can solve planning problems that are not related to data mining. We developed an application which, although currently only partially working, shows that eProPlan is a well-defined, coherent system able to generate Web Service compositions; the thesis thus demonstrates the extensibility of eProPlan. |
|
Katharina Reinecke, Abraham Bernstein, Improving performance, perceived usability, and aesthetics with culturally adaptive user interfaces, Transactions on Computer-Human Interaction, Vol. 18 (2), 2011. (Journal Article)
When we investigate the usability and aesthetics of user interfaces, we rarely take into account that what users perceive as beautiful and usable strongly depends on their cultural background. In this paper, we argue that it is not feasible to design one interface that appeals to all users of an increasingly global audience. Instead, we propose to design culturally adaptive systems, which automatically generate personalized interfaces that correspond to cultural preferences. In an evaluation of one such system, we demonstrate that a majority of international participants preferred their personalized versions over a non-adapted interface of the same web site. Results show that users were 22% faster using the culturally adapted interface, needed fewer clicks, and made fewer errors, in line with subjective results demonstrating that they found the adapted version significantly easier to use. Our findings show that interfaces that adapt to cultural preferences can substantially improve the user experience. |
|
Proceedings of the 10th International Conference on Wirtschaftsinformatik WI 2.011 Volume 2, Edited by: Abraham Bernstein, Gerhard Schwabe, Lulu, Zurich, 2011. (Proceedings)
|
|
Proceedings of the 10th International Conference on Wirtschaftsinformatik WI 2.011 Volume 1, Edited by: Abraham Bernstein, Gerhard Schwabe, Lulu, Zurich, 2011. (Proceedings)
|
|
Markus Christen, Rachel Neuhaus Bühler, Brigitte Stump Wendt, Warum eine pauschale Entschädigung für Lebendorganspender fair ist, Bioethica Forum, Vol. 3 (2), 2010. (Journal Article)
|
|
Thomas Scharrenbach, C. d'Amato, N. Fanizzi, R. Grütter, B. Waldvogel, Abraham Bernstein, Unsupervised conflict-free ontology evolution without removing axioms, In: 4th International Workshop on Ontology Dynamics (IWOD 2010), 2010-11-08. (Conference or Workshop Paper published in Proceedings)
In the early days of the Semantic Web, ontologies were usually constructed once by a single knowledge engineer and then used as a static conceptualization of some domain. Nowadays, knowledge bases are increasingly dynamically evolving and incorporate new knowledge from different heterogeneous domains -- some of which is contributed by casual users (i.e., non-knowledge engineers) or even software agents. Given that ontologies are based on the rather strict formalism of Description Logics and their inference procedures, conflicts are likely to occur during ontology evolution. Conflicts, in turn, may cause an ontological knowledge base to become inconsistent, making reasoning impossible. Hence, every formalism for ontology evolution should provide a mechanism for resolving conflicts. In this paper we provide a general framework for conflict-free ontology evolution without changing the knowledge representation. Using a variant of Lehmann's Default Logics and Probabilistic Description Logics, we can invalidate unwanted implicit inferences without removing explicitly stated axioms. We show that this method outperforms classical ontology repair w.r.t. the amount of information lost, while allowing for automatic conflict resolution when evolving ontologies. |
|
S. N. Wrigley, K. Elbedweihy, Dorothee Reinhard, Abraham Bernstein, F. Ciravegna, Evaluating semantic search tools using the SEALS platform, In: International Workshop on Evaluation of Semantic Technologies (IWEST 2010) Workshop, 2010-11-08. (Conference or Workshop Paper published in Proceedings)
As with many state-of-the-art semantic technologies, there is a lack of comprehensive, established evaluation mechanisms for semantic search tools. In this paper, we describe a new evaluation and benchmarking approach for semantic search tools using the infrastructure under development within the SEALS initiative. To our knowledge, it is the first effort to present a comprehensive evaluation methodology for semantic search tools. The paper describes the evaluation methodology, including our two-phase approach in which tools are evaluated both in a fully automated fashion and within a user-based study. We also present preliminary results from the first SEALS evaluation campaign, together with a discussion of some of the key findings. |
|