M Waldburger, Decision support in contract formation for commercial electronic services with international connection, University of Zurich, Faculty of Business, Economics and Informatics, 2010. (Dissertation)
|
|
Rolf Pfeifer, L Aryananda, Dorit Assaf, Robot competition with teachers, In: 2nd International Conference on SIMULATION, MODELING, and PROGRAMMING for AUTONOMOUS ROBOTS, 2010-11-15. (Conference or Workshop Paper published in Proceedings)
This paper describes the predator and prey robot competition that took place within a robotics class for teachers. The robotics class was part of a degree program that aims at educating upper secondary school teachers of different backgrounds in informatics, a discipline that is not yet a mandatory part of the Swiss school curriculum. The aim of this robot competition was to familiarize the teachers with robotic hardware and software such that they would be able to design their own informatics class syllabus. This paper describes the custom robotic platform used, the competition, its aims and results. |
|
Susanne Suter, Christoph P E Zollikofer, Renato Pajarola, Application of tensor approximation to multiscale volume feature representations, In: VMV 2010 - 15th International Workshop on Vision, Modeling and Visualization, Eurographics Association, 2010-11-15. (Conference or Workshop Paper published in Proceedings)
Advanced 3D microstructural analysis in natural sciences and engineering depends ever more on modern data acquisition and imaging technologies such as micro-computed or synchrotron tomography and interactive visualization. The acquired volume data sets are not only of high resolution but in particular exhibit complex spatial structures at different levels of scale (e.g. variable spatial expression of multiscale periodic growth structures in tooth enamel). Such highly structured volume data sets are difficult to analyze and explore by means of interactive visualization due to the amount of raw volume data to be processed and filtered for the desired features. To address this bottleneck through multiscale feature-preserving data reduction, we propose higher-order tensor approximations (TAs). We demonstrate the power of TA to represent and highlight the structural features in volume data. We visually and quantitatively show that TA yields high data reduction and preserves volume features at multiple scales. |
|
Anja Feierabend, Klatsch und Tratsch am Arbeitsplatz, In: Neue Zürcher Zeitung, 265, p. 81, 13 November 2010. (Newspaper Article)
|
|
Cosmin Basca, Abraham Bernstein, Avalanche: putting the spirit of the web back into semantic web querying, In: Proceedings Of The 6th International Workshop On Scalable Semantic Web Knowledge Base Systems (SSWS2010), CEUR-WS, 2010-11-08. (Conference or Workshop Paper published in Proceedings)
Traditionally, Semantic Web applications either included a web crawler or relied on external services to gain access to the Web of Data. Recent efforts have enabled applications to query the entire Semantic Web for up-to-date results. Such approaches are based on either centralized indexing of semantically annotated metadata or link traversal and URI dereferencing as in the case of Linked Open Data. By making limiting assumptions about the information space, they violate the openness principle of the Web - a key factor for its ongoing success. In this article we propose a technique called Avalanche, designed to allow a data surfer to query the Semantic Web transparently without making any prior assumptions about the distribution of the data - thus adhering to the openness criterion. Specifically, Avalanche can perform "live" (SPARQL) queries over the Web of Data. First, it gathers on-line statistical information about the data distribution as well as bandwidth availability. Then, it plans and executes the query in a distributed manner, trying to quickly provide first answers. The main contribution of this paper is the presentation of this open and distributed SPARQL querying approach. Furthermore, we propose to extend the query planning algorithm with qualitative statistical information. We empirically evaluate Avalanche using a realistic dataset, show its strengths, but also point out the challenges that still exist. |
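The plan-then-execute idea from the abstract (gather per-source statistics, then query the most promising sources first to deliver early answers) can be illustrated with a minimal sketch. The endpoint names, statistics, and ranking heuristic below are hypothetical stand-ins, not the paper's actual protocol.

```python
# Illustrative sketch of statistics-driven federated querying:
# rank sources by estimated matching triples, query the best ones
# first, and merge their answers. All names are assumptions.

def plan(sources, stats):
    """Order sources by estimated matching-triple count (descending)."""
    return sorted(sources, key=lambda s: stats.get(s, 0), reverse=True)

def execute(sources, stats, fetch, first_k=2):
    """Query only the top-ranked sources to provide first answers quickly."""
    answers = []
    for source in plan(sources, stats)[:first_k]:
        answers.extend(fetch(source))  # fetch() stands in for a SPARQL call
    return answers
```

In the real system the statistics are themselves obtained on-line before planning; here they are passed in directly to keep the sketch self-contained.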
|
Minh Khoa Nguyen, Cosmin Basca, Abraham Bernstein, B+Hash Tree: optimizing query execution times for on-disk semantic web data structures, In: Proceedings Of The 6th International Workshop On Scalable Semantic Web Knowledge Base Systems (SSWS2010), 2010-11-08. (Conference or Workshop Paper published in Proceedings)
The increasing growth of the Semantic Web has substantially enlarged the amount of data available in RDF format. One proposed solution is to map RDF data to relational databases (RDBs). The lack of a common schema, however, makes this mapping inefficient. Some RDF-native solutions use B+Trees, which are potentially becoming a bottleneck, as the single key-space approach of the Semantic Web may make even their O(log(n)) worst-case performance too costly. Alternatives, such as hash-based approaches, suffer from insufficient update and scan performance. In this paper we propose a novel type of index structure called a B+Hash Tree, which combines the strengths of traditional B+Trees with the speedy constant-time lookup of a hash-based structure. Our main research idea is to enhance the B+Tree with a Hash Map to enable constant retrieval time instead of the common logarithmic one of the B+Tree. The result is a scalable, updatable, and lookup-optimized on-disk index structure that is especially suitable for the large key-spaces of RDF datasets. We evaluate the approach against existing RDF indexing schemes using two commonly used datasets and show that a B+Hash Tree is at least twice as fast as its competitors - an advantage that we show should grow as dataset sizes increase. |
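The core idea of the abstract (pair an ordered index with a hash map so point lookups become constant-time while ordered scans stay possible) can be sketched in a few lines. This is a minimal in-memory illustration, not the authors' on-disk implementation; a sorted list stands in for the B+Tree and all names are assumptions.

```python
import bisect

class BPlusHashIndex:
    """Toy sketch of the B+Hash Tree idea: an ordered key sequence
    (stand-in for the B+Tree) plus a hash map on the side, so that
    point lookups are O(1) while range scans remain ordered."""

    def __init__(self):
        self._keys = []   # ordered keys (B+Tree stand-in)
        self._hash = {}   # key -> value, constant-time lookup path

    def insert(self, key, value):
        if key not in self._hash:
            bisect.insort(self._keys, key)  # keep ordered structure in sync
        self._hash[key] = value

    def lookup(self, key):
        return self._hash.get(key)          # O(1) instead of O(log n)

    def range_scan(self, lo, hi):
        i = bisect.bisect_left(self._keys, lo)
        j = bisect.bisect_right(self._keys, hi)
        return [(k, self._hash[k]) for k in self._keys[i:j]]
```

The design point the paper argues is visible even in the toy: the hash map accelerates exact-match retrieval (the dominant operation over large RDF key-spaces), while the ordered structure is retained solely for scans and updates.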
|
D J Lutz, D Lamp, P Mandic, Fabio Victora Hecht, Burkhard Stiller, Charging of SAML-based federated VoIP services, In: 5th International Conference for Internet Technology and Secured Transactions (ICITST-2010), IEEE, London, UK, 2010-11-08. (Conference or Workshop Paper published in Proceedings)
Whilst SAML-based federations are most often used by academic and semi-commercial institutions that focus only on attribute-based authentication, we foresee a growing interest from service providers offering charged services. Since more and more academic and semi-commercial federation participants offer Voice-over-IP (VoIP) services, this type of service provides an entry point into identity-federation-based payment. This paper therefore describes an approach to harmonizing SAML-based federation technology with the needs of a payment infrastructure for enabling the charging of VoIP services within a federation. However, since the different aspects of our approach (SAML Payment, SIP Discovery, and Tariff Function) are not bound to VoIP applications, each of them could be used separately or combined for several service types. |
|
S N Wrigley, K Elbedweihy, Dorothee Reinhard, Abraham Bernstein, F Ciravegna, Evaluating semantic search tools using the SEALS platform, In: International Workshop on Evaluation of Semantic Technologies (IWEST 2010) Workshop, 2010-11-08. (Conference or Workshop Paper published in Proceedings)
In common with many state-of-the-art semantic technologies, semantic search tools lack comprehensive, established evaluation mechanisms. In this paper, we describe a new evaluation and benchmarking approach for semantic search tools using the infrastructure under development within the SEALS initiative. To our knowledge, it is the first effort to present a comprehensive evaluation methodology for semantic search tools. The paper describes the evaluation methodology, including our two-phase approach in which tools are evaluated both in a fully automated fashion and within a user-based study. We also present preliminary results from the first SEALS evaluation campaign together with a discussion of some of the key findings. |
|
Thomas Scharrenbach, C d'Amato, N Fanizzi, R Grütter, B Waldvogel, Abraham Bernstein, Unsupervised conflict-free ontology evolution without removing axioms, In: 4th International Workshop on Ontology Dynamics (IWOD 2010), 2010-11-08. (Conference or Workshop Paper published in Proceedings)
In the beginning of the Semantic Web, ontologies were usually constructed once by a single knowledge engineer and then used as a static conceptualization of some domain. Nowadays, knowledge bases are increasingly dynamically evolving and incorporate new knowledge from different heterogeneous domains -- some of which is even contributed by casual users (i.e., non-knowledge engineers) or even software agents. Given that ontologies are based on the rather strict formalism of Description Logics and their inference procedures, conflicts are likely to occur during ontology evolution. Conflicts, in turn, may cause an ontological knowledge base to become inconsistent, making reasoning impossible. Hence, every formalism for ontology evolution should provide a mechanism for resolving conflicts. In this paper we provide a general framework for conflict-free ontology evolution without changing the knowledge representation. Using a variant of Lehmann's Default Logics and Probabilistic Description Logics, we can invalidate unwanted implicit inferences without removing explicitly stated axioms. We show that this method outperforms classical ontology repair w.r.t. the amount of information lost, while allowing for automatic conflict-solving when evolving ontologies. |
|
Felix Schläpfer, Friedrich Schneider, Messung der akademischen Forschungsleistung in den Wirtschaftswissenschaften: Reputation vs. Zitierhäufigkeiten, Perspektiven der Wirtschaftspolitik, Vol. 11 (4), 2010. (Journal Article)
Research output in economics is commonly measured based on the reputation of the journals in which an author has published. Using data from the 2010 Handelsblatt ranking of economists in German speaking countries and citation data from the Web of Science, we examine the relationship between reputation and citation frequency at the level of individual researchers. We find that the variation (variance) in individual researcher citations explains only a small fraction of the scores based on traditional measures of reputation. Our findings suggest that individual citation data are indispensable for a relevant measurement of individual research output and for providing more productive incentives in academic research. |
|
Floarea Serban, Auto-experimentation of KDD workflows based on ontological planning, In: The 9th International Semantic Web Conference (ISWC 2010), Doctoral Consortium, 2010-11-07. (Conference or Workshop Paper published in Proceedings)
One of the problems of Knowledge Discovery in Databases (KDD) is the lack of user support for solving KDD problems. Current Data Mining (DM) systems enable the user to manually design workflows, but this becomes difficult when there are too many operators to choose from or the workflow's size is too large. Therefore, we propose to use auto-experimentation based on ontological planning to provide the users with automatically generated workflows as well as rankings of workflows based on several criteria (execution time, accuracy, etc.). Moreover, auto-experimentation will help to validate the generated workflows and to prune and reduce their number. Furthermore, we will use mixed-initiative planning to allow the users to set parameters and criteria to limit the planning search space as well as to guide the planner towards better workflows. |
|
Cosmin Basca, Abraham Bernstein, Avalanche - Putting the spirit of the web back into semantic web querying, In: ISWC 2010 Posters & Demonstrations Track: Collected Abstracts, 2010-11-07. (Conference or Workshop Paper published in Proceedings)
Traditionally, Semantic Web applications either included a web crawler or relied on external services to gain access to the Web of Data. Recent efforts have enabled applications to query the entire Semantic Web for up-to-date results. Such approaches are based on either centralized indexing of semantically annotated metadata or link traversal and URI dereferencing as in the case of Linked Open Data. They pose a number of limiting assumptions, thus breaking the openness principle of the Web. In this demo we present a novel technique called Avalanche, designed to allow a data surfer to query the Semantic Web transparently. The technique makes no prior assumptions about data distribution. Specifically, Avalanche can perform “live” queries over the Web of Data. First, it gets on-line statistical information about the data distribution, as well as bandwidth availability. Then, it plans and executes the query in a distributed manner, trying to quickly provide first answers. |
|
C Bird, A Bachmann, F Rahman, Abraham Bernstein, LINKSTER: enabling efficient manual inspection and annotation of mined data, In: ACM SIGSOFT / FSE '10: eighteenth International Symposium on the Foundations of Software Engineering, 2010-11-07. (Conference or Workshop Paper published in Proceedings)
While many uses of mined software engineering data are automatic in nature, some techniques and studies either require, or can be improved by, manual methods. Unfortunately, manually inspecting, analyzing, and annotating mined data can be difficult and tedious, especially when information from multiple sources must be integrated. Oddly, while there are numerous tools and frameworks for automatically mining and analyzing data, there is a dearth of tools that facilitate manual methods. To fill this void, we have developed LINKSTER, a tool that integrates data from bug databases, source code repositories, and mailing list archives to allow manual inspection and annotation. LINKSTER has already been used successfully by an OSS project lead to obtain data for one empirical study. |
|
Philip Stutz, Abraham Bernstein, William Cohen, Signal/Collect: graph algorithms for the (Semantic) Web, In: ISWC 2010, 2010-11-07. (Conference or Workshop Paper published in Proceedings)
The Semantic Web graph is growing at an incredible pace, enabling opportunities to discover new knowledge by interlinking and analyzing previously unconnected data sets. This confronts researchers with a conundrum: whilst the data is available, the programming models that facilitate scalability and the infrastructure to run various algorithms on the graph are missing. Some use MapReduce - a good solution for many problems. However, even some simple iterative graph algorithms do not map nicely to that programming model, requiring programmers to shoehorn their problem into the MapReduce model. This paper presents the Signal/Collect programming model for synchronous and asynchronous graph algorithms. We demonstrate that this abstraction can capture the essence of many algorithms on graphs in a concise and elegant way by giving Signal/Collect adaptations of various relevant algorithms. Furthermore, we built and evaluated a prototype Signal/Collect framework that executes algorithms in our programming model. We empirically show that this prototype transparently scales and that guiding computations by scoring as well as asynchronicity can greatly improve the convergence of some example algorithms. We released the framework under the Apache License 2.0 (at http://www.ifi.uzh.ch/ddis/research/sc). |
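The signal/collect split described in the abstract (vertices emit signals along edges, then combine incoming signals into a new state) can be sketched with a toy synchronous scheduler computing single-source shortest paths, a standard example for vertex-centric models. This is a hedged illustration, not the released framework's API; the class names, graph, and fixed-step loop are assumptions.

```python
import math

class Vertex:
    """Toy vertex in a synchronous signal/collect sketch."""

    def __init__(self, vid, state):
        self.id = vid
        self.state = state   # current shortest known distance
        self.edges = []      # outgoing (target_id, weight) pairs

    def signal(self):
        # signal phase: propagate this vertex's state along each edge
        return [(target, self.state + w) for target, w in self.edges]

    def collect(self, signals):
        # collect phase: fold incoming signals into the old state
        return min([self.state] + signals)

def run(vertices, steps=10):
    """Run a fixed number of synchronous signal/collect steps."""
    for _ in range(steps):
        inbox = {v.id: [] for v in vertices.values()}
        for v in vertices.values():
            for target, value in v.signal():
                inbox[target].append(value)
        for v in vertices.values():
            v.state = v.collect(inbox[v.id])
    return {v.id: v.state for v in vertices.values()}
```

The actual framework additionally supports asynchronous execution and score-guided scheduling, which the abstract credits for the improved convergence; this sketch shows only the synchronous core of the model.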
|
Eya Ben Charrada, Updating requirements from tests during maintenance and evolution, In: Doctoral Symposium of the 18th ACM SIGSOFT International Symposium on Foundations of Software Engineering, 2010-11-07. (Conference or Workshop Paper published in Proceedings)
Keeping requirements specification up-to-date during the evolution of a software system is an expensive task. Consequently, specifications are usually not updated and rapidly become obsolete and unreliable. The goal of our research is to preserve the alignment between requirements and the implementation by supporting the maintenance of the specification. In this proposal, we explore the idea of using tests to automatically generate hints about the evolution of requirements. We discuss the main research questions that need to be addressed, and propose ideas to approach them. |
|
Thomas Scharrenbach, C d'Amato, N Fanizzi, R Grütter, B Waldvogel, Abraham Bernstein, Default logics for plausible reasoning with controversial axioms, In: 6th International Workshop on Uncertainty Reasoning for the Semantic Web (URSW-2010), 2010-11-07. (Conference or Workshop Paper published in Proceedings)
Using a variant of Lehmann's Default Logics and Probabilistic Description Logics, we recently presented a framework that invalidates those unwanted inferences that cause concept unsatisfiability without the need to remove explicitly stated axioms. The solutions of this method were shown to outperform classical ontology repair w.r.t. the number of inferences invalidated. However, conflicts may still exist in the knowledge base and can make reasoning ambiguous. Furthermore, solutions with a minimal number of inferences invalidated do not necessarily minimize the number of conflicts. In this paper we provide an overview of finding solutions that have a minimal number of conflicts while invalidating as few inferences as possible. Specifically, we propose to evaluate solutions w.r.t. the quantity of information they convey by recurring to the notion of entropy, and discuss a possible approach towards computing the entropy w.r.t. an ABox. |
|
T Kuhn, Controlled English for knowledge representation, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2010. (Dissertation)
Knowledge representation is a long-standing research area of computer science that aims at representing human knowledge in a form that computers can interpret. Most knowledge representation approaches, however, have suffered from poor user interfaces. It turns out to be difficult for users to learn and use the logic-based languages in which the knowledge has to be encoded. A new approach to design more intuitive but still reliable user interfaces for knowledge representation systems is the use of controlled natural language (CNL). CNLs are subsets of natural languages that are restricted in a way that allows their automatic translation into formal logic. A number of CNLs have been developed but the resulting tools are mostly just prototypes so far. Furthermore, nobody has yet been able to provide strong evidence that CNLs are indeed easier to understand than other logic-based languages. The goal of this thesis is to give the research area of CNLs for knowledge representation a shift in perspective: from the present explorative and proof-of-concept-based approaches to a more engineering-focussed point of view. For this reason, I introduce theoretical and practical building blocks for the design and application of controlled English for the purpose of knowledge representation. I first show how CNLs can be defined in an adequate and simple way by the introduction of a novel grammar notation and I describe efficient algorithms to process such grammars. I then demonstrate how these theoretical concepts can be implemented and how CNLs can be embedded in knowledge representation tools so that they provide intuitive and powerful user interfaces that are accessible even to untrained users. Finally, I discuss how the understandability of CNLs can be evaluated. I argue that the understandability of CNLs cannot be assessed reliably with existing approaches, and for this reason I introduce a novel testing framework. Experiments based on this framework show that CNLs are not only easier to understand than comparable languages but also need less time to be learned and are preferred by users. |
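The defining property of a CNL named in the abstract, a restricted subset of natural language that translates deterministically into formal logic, can be illustrated with a single-pattern toy. This is not the thesis's grammar notation or any real CNL tool; the sentence pattern and the output formula notation are assumptions chosen for illustration only.

```python
import re

# Toy illustration of controlled-language translation: exactly one
# sentence pattern is accepted, and it maps deterministically to a
# first-order formula. Anything outside the pattern is rejected,
# which is what makes the language "controlled".
PATTERN = re.compile(r"^Every (\w+) is a (\w+)\.$")

def translate(sentence):
    match = PATTERN.match(sentence)
    if match is None:
        raise ValueError("sentence is outside the controlled language")
    noun1, noun2 = match.groups()
    return f"forall x: {noun1}(x) -> {noun2}(x)"
```

The restriction does the work: because only sanctioned patterns are parseable, every accepted sentence has exactly one logical reading, unlike unrestricted natural language.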
|
Bruno Staffelbach, Ethik im Nebel von Anglizismen, In: Neue Zürcher Zeitung, 259, p. 79, 6 November 2010. (Newspaper Article)
|
|
Dennis Schoeneborn, Deparadoxification as the driving force: Luhmannian contributions to current debates on "communication constitutes organization" (CCO), In: Deutsche Gesellschaft für Publizistik- und Kommunikationsforschung (DGPuK) - Fachgruppentagung PR und Organisationskommunikation. 2010. (Conference Presentation)
|
|
David Fritschi, Nettoneugeldentwicklung im Jahre 2009 im schweizerischen Private Banking, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2010. (Bachelor's Thesis)
|
|