Shen Gao, Thomas Scharrenbach, Abraham Bernstein, The CLOCK Data-Aware Eviction Approach: Towards Processing Linked Data Streams with Limited Resources, In: The 11th Extended Semantic Web Conference, Springer, 2014-05-25. (Conference or Workshop Paper published in Proceedings)
Processing streams rather than static files of Linked Data has gained increasing importance in the Web of Data. When processing data streams, system builders face the conundrum of guaranteeing a constant maximum response time with limited resources and, possibly, no prior information on the data arrival frequency. One approach to address this issue is to delete data from a cache during processing, a process we call eviction. The goal of this paper is to show that data-driven eviction outperforms today's dominant data-agnostic approaches such as first-in-first-out or random deletion. Specifically, we first introduce a method called Clock that evicts data from a join cache based on an estimate of the likelihood that the data will contribute to a join in the future. Second, using the well-established SR-Bench benchmark as well as a data set from the IPTV domain, we show that Clock outperforms data-agnostic approaches, indicating its usefulness for resource-limited Linked Data stream processing.
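The core idea, evicting from the join cache whatever is least likely to contribute to a future join, can be sketched as follows. This is a minimal illustration only: the class name and the recency/frequency scoring heuristic are our own assumptions, not the likelihood model used in the paper.

```python
class DataAwareJoinCache:
    """Fixed-capacity join cache that evicts the entry with the lowest
    estimated likelihood of contributing to a future join."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}  # key -> (value, score)

    def _score(self, hits, age):
        # Illustrative estimate: entries that joined often and recently
        # are assumed more likely to join again (not the paper's model).
        return hits / (1.0 + age)

    def put(self, key, value, hits, age):
        if key not in self.store and len(self.store) >= self.capacity:
            # Evict the entry judged least likely to join in the future.
            victim = min(self.store, key=lambda k: self.store[k][1])
            del self.store[victim]
        self.store[key] = (value, self._score(hits, age))

    def get(self, key):
        entry = self.store.get(key)
        return entry[0] if entry is not None else None
```

A data-agnostic baseline such as first-in-first-out would instead evict the oldest entry regardless of any such score.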
Jörg-Uwe Kietz, Floarea Serban, Simon Fischer, Abraham Bernstein, “Semantics Inside!” But let’s not tell the Data Miners: Intelligent Support for Data Mining, In: European Semantic Web Conference ESWC 2014, Springer, 2014-05-25. (Conference or Workshop Paper published in Proceedings)
Knowledge Discovery in Databases (KDD) has evolved significantly over the past years and reached a mature stage, offering plenty of operators to solve complex data analysis tasks. User support for building data analysis workflows, however, has not progressed sufficiently: the large number of operators currently available in KDD systems and the interactions between these operators complicate successful data analysis. To help data miners, we enhanced one of the most widely used open source data mining tools, RapidMiner, with semantic technologies. Specifically, we first semantically annotated all elements involved in the Data Mining (DM) process (the data, the operators, models, data mining tasks, and KDD workflows) using our eProPlan modelling tool, which allows us to describe operators and to build a task/method decomposition grammar specifying the desired workflows, embedded in an ontology. Second, we enhanced RapidMiner to employ these semantic annotations to actively support data analysts. Third, we built an Intelligent Discovery Assistant, eIda, that leverages the semantic annotations as well as HTN planning to automatically support KDD process generation. We found that the use of Semantic Web approaches and technologies in the KDD domain helped us to lower the barrier to data analysis. We also found that using a generic ontology editor overwhelmed KDD-centric users; we therefore provided them with problem-centric extensions to Protégé. Last, and most surprisingly, we found that our semantic modeling of the KDD domain served as a rapid prototyping approach for several hard-coded improvements of RapidMiner, namely correctness checking of workflows and quick fixes, reinforcing the finding that even a little semantic modeling can go a long way in improving the understanding of a domain, even for domain experts.
Cosmin Basca, Abraham Bernstein, Querying a messy web of data with Avalanche, Journal of Web Semantics, Vol. 26, 2014. (Journal Article)
Recent efforts have enabled applications to query the entire Semantic Web. Such approaches are either based on a centralised store or on link traversal and URI dereferencing, as often used in the case of Linked Open Data. These approaches make additional assumptions about the structure and/or location of data on the Web and are likely to limit the diversity of resulting usages. In this article we propose a technique called Avalanche, designed for querying the Semantic Web without making any prior assumptions about data location or distribution, schema alignment, pertinent statistics, data evolution, or the accessibility of servers. Specifically, Avalanche finds up-to-date answers to queries over SPARQL endpoints. It first gathers online statistical information about potential data sources and their data distribution. Then, it plans and executes the query in a concurrent and distributed manner, trying to provide first answers quickly. We empirically evaluate Avalanche using the realistic FedBench data set over 26 servers and investigate its behaviour for varying degrees of instance-level distribution "messiness" using the LUBM synthetic data set spread over 100 servers. Results show that Avalanche is robust and stable in spite of varying network latency, finding first results for 80% of the queries in under 1 second. It also exhibits stability for some classes of queries when instance-level distribution messiness increases. We also illustrate how Avalanche addresses the other sources of messiness (pertinent data statistics, data evolution, and data presence) by design and show its robustness by removing endpoints during query execution.
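The "first answers quickly" behaviour, querying many sources concurrently and returning as soon as enough results arrive, can be sketched as below. The function name and the caller-supplied `fetch` callback are illustrative assumptions, not Avalanche's actual planner or executor.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def first_answers(endpoints, query, fetch, first_n=1):
    """Query several sources concurrently and return as soon as the first
    `first_n` bindings arrive. `fetch` is a caller-supplied function
    (endpoint, query) -> list of result bindings."""
    results = []
    with ThreadPoolExecutor(max_workers=len(endpoints)) as pool:
        futures = [pool.submit(fetch, ep, query) for ep in endpoints]
        for fut in as_completed(futures):
            rows = fut.result()
            if rows:
                results.extend(rows)
                if len(results) >= first_n:
                    # Enough answers: stop consuming further futures.
                    break
    return results
```

The design choice mirrored here is that fast endpoints determine time-to-first-answer, so a slow or failing endpoint delays completeness but not the first results.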
Kevin Mettenberger, Automated Pricing Mechanisms for Crowdsourcing Markets, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2014. (Master's Thesis)
Nowadays, crowdsourcing systems can be built easily using crowdsourcing markets like Amazon's Mechanical Turk or CrowdFlower. The pricing of tasks is, however, still very simple: wages are paid to the workers per task and have to be set in advance. In the context of this master's thesis, a market interface is developed that allows tasks on Amazon's Mechanical Turk to be continuously allocated and dynamically priced. When allocating tasks using an item contract, workers are paid per completed task. In the case of a time contract, workers are paid to work on tasks for a certain amount of time. We compare the two payment policies in terms of the requester's utility and the acceptance by the crowd. The results show that the requester's utility is significantly higher with the time contract, while maintaining very good acceptance by the crowd. In a second step, three pricing mechanisms are developed and evaluated against each other in experiments similar to those with the two contract types. These second experiments reveal that the mean cost per task can be further reduced by combining the pricing mechanisms with the time contract.
Torsten Eymann, Dennis Kundisch, Jan Recker, Abraham Bernstein, Judith Gebauer, Oliver Günther, Wolfgang Ketter, Michael zur Mühlen, Kai Riemer, Should I Stay or Should I Go. Herausforderungen und Chancen eines Wechsels zwischen Hochschulsystemen, Wirtschaftsinformatik, Vol. 56 (2), 2014. (Journal Article)
Torsten Eymann, Dennis Kundisch, Jan Recker, Abraham Bernstein, Judith Gebauer, Oliver Günther, Wolfgang Ketter, Michael zur Mühlen, Kai Riemer, Should I Stay or Should I Go: The Challenges and Opportunities of Moving Between University Systems, Business & Information Systems Engineering, Vol. 6 (2), 2014. (Journal Article)
Aaron Shaw, Haoqi Zhang, Andrés Monroy-Hernández, Sean Munson, Benjamin Mako Hill, Elizabeth Gerber, Peter Kinnaird, Patrick Minder, Computer supported collective action, Magazine interactions, Vol. 21 (2), 2014. (Journal Article)
Social media has become globally ubiquitous, transforming how people are networked and mobilized. This forum explores research and applications of these new networked publics at individual, organizational, and societal levels.
Markus Christen, Peter Brugger, Mapping collective behavior--beware of looping, Behavioral and Brain Sciences, Vol. 37 (1), 2014. (Journal Article)
We discuss ambiguities of the two main dimensions of the map proposed by Bentley and colleagues that relate to the degree of self-reflection the observed agents have about their behavior. This self-reflection is a variant of the "looping effect", which denotes that, in social research, the product of investigation influences the object of investigation. We outline how this can be understood as a dimension of "height" in the map of Bentley et al.
Frank Neugebauer, Combining streams of linked data with rich background data: Impact of the inverse cache on recall and response time, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2014. (Master's Thesis)
Stream processing engines often need to adhere to QoS contracts during their operation. As they also query external data sources for supplemental information, they might not be able to receive all results in time.
This thesis proposes and implements a local cache for the Esper complex event processing engine. This 'inverse cache' stores the results of Esper's background queries that complete after the Esper query has timed out and provides this data to subsequent queries.
The evaluation of the inverse cache shows that it enables Esper to receive additional external results, leading to higher recall and faster processing times.
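The mechanism described above, keeping the answers of background queries that arrive only after the stream query's deadline so that later queries can reuse them, can be sketched as a simple key-value store. The class and method names below are illustrative assumptions, not part of Esper's API.

```python
class InverseCache:
    """Stores results of external background queries that completed only
    after the stream query timed out, for reuse by subsequent queries."""

    def __init__(self):
        self._store = {}  # query key -> late-arriving result rows

    def on_late_result(self, query_key, rows):
        # The external lookup finished after its deadline: the answer
        # missed the current window, but is kept for later queries.
        self._store[query_key] = rows

    def lookup(self, query_key):
        # Subsequent queries check the cache first; a hit avoids a new
        # (possibly slow) external request and thus improves recall.
        return self._store.get(query_key)
```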
Sabine Müller, Henrik Walter, Markus Christen, When benefitting a patient increases the risk for harm for third persons — The case of treating pedophilic Parkinsonian patients with deep brain stimulation, International Journal of Law and Psychiatry, Vol. 37 (3), 2014. (Journal Article)
This paper investigates the question of whether it is ethically justified to treat Parkinsonian patients with known or suspected pedophilia with deep brain stimulation, given increasing evidence that this treatment might cause impulse control disorders, disinhibition, and hypersexuality. This specific question is not as exotic as it looks at first glance. First, the same issue arises for all other types of sexual orientation or behavior that imply a high risk of harming other persons, e.g. sexual sadism. Second, there are also several (psychotropic) drugs as well as legal and illegal leisure drugs that bear severe risks for other persons. We show that Beauchamp and Childress' biomedical ethics fails to derive a veto against medical interventions that produce risks for third persons by making patients dangerous to others. Our case discussion therefore reveals a blind spot of the ethics of principles. Although the first intuition might be to forbid the application of deep brain stimulation to pedophilic patients, we argue against such a simple way out, since in some patients the reduction of dopaminergic drugs allowed by deep brain stimulation of the nucleus subthalamicus improves impulse control disorders, including hypersexuality. Therefore, we propose a strategy consisting of three steps: (1) risk assessment, (2) shared decision-making, and (3) risk management and safeguards.
Markus Christen, Overcoming Moral Hypocrisy in a Virtual Society, In: Complexity and Human Experiences, Pan Stanford Publishing, Stanford, p. 29 - 49, 2014. (Book Chapter)
Markus Christen, Florian Faller, Ulrich Goetz, Cornelius Müller, Outlining a serious moral game in bioethics, EAI Endorsed Transactions on Ambient Systems, Vol. 14 (3), 2014. (Journal Article)
Our contribution discusses the possibilities and limits of using video games for apprehending and reflecting on the moral actions of their players. We briefly present the results of an extended study that introduces the conceptual idea of a Serious Moral Game (SMG). Then, we outline its possible application in the domain of bioethics for training medical professionals so that they can better deal with moral problems in medical practice. We briefly sketch the major components of an SMG Bioethics. The contribution demonstrates how such an instrument may improve the psychological competences needed for dealing with various ethical questions within healthcare. It is an intermediate step of a project that aims at actually creating an SMG for training the moral competences needed for putting bioethics into practice.
Markus Christen, Endre Bangerter, Informatisierung in der Medizin, In: Ethik und Recht in Medizin und Biowissenschaften : Aktuelle Fallbeispiele aus klinischer Praxis und Forschung, De Gruyter, Berlin, p. 279 - 285, 2014. (Book Chapter)
Markus Christen, Christian Ineichen, Carmen Tanner, How “moral” are the principles of biomedical ethics? – A cross-domain evaluation of the common morality hypothesis, BMC Medical Ethics, Vol. 15 (47), 2014. (Journal Article)
BACKGROUND: The principles of biomedical ethics - autonomy, non-maleficence, beneficence, and justice - are of paradigmatic importance for framing ethical problems in medicine and for teaching ethics to medical students and professionals. To underline this significance, Tom L. Beauchamp and James F. Childress ground the principles in the common morality, i.e., they claim that the principles represent basic moral values shared by all persons committed to morality and are thus grounded in human moral psychology. We empirically investigated the relationship of the principles to other moral and non-moral values that provide orientation in medicine. By way of comparison, we performed a similar analysis for the business & finance domain. METHODS: We evaluated the perceived degree of "morality" of 14 values relevant to medicine (n1 = 317, students and professionals) and 14 values relevant to business & finance (n2 = 247, students and professionals). Ratings were made along four dimensions intended to characterize different aspects of morality. RESULTS: We found that, compared to other values, the principles-related values received lower ratings across several dimensions that characterize morality. Interpreting our findings using a clustering and a network analysis approach, we suggest that the principles can be understood as "bridge values" that are connected both to moral and non-moral aspects of ethical dilemmas in medicine. We also found that the social domain (medicine vs. business & finance) influences the degree of perceived morality of values. CONCLUSIONS: Our results conflict with the common morality hypothesis of Beauchamp and Childress, which would imply domain-independent high morality ratings of the principles. Our findings support the suggestions by other scholars that the principles of biomedical ethics serve primarily as instruments in deliberated justifications, but lack grounding in a universal "common morality". We propose that the specific manner in which the principles are taught and discussed in medicine, namely by referring to conflicts that require a balancing of principles, may partly explain why the perceived "morality" of the principles is lower compared to other moral values.
Markus Christen, Effy Vayena, Gesünder leben dank sozialen Netzen?, Digma, Vol. 14 (2), 2014. (Journal Article)
Markus Christen, Christian Ineichen, Merlin Bittlinger, Hans-Werner Bothe, Sabine Müller, Ethical focal points in the international practice of deep brain stimulation, AJOB Neuroscience, Vol. 5 (4), 2014. (Journal Article)
Deep brain stimulation (DBS) is a standard therapy for several movement disorders, and the list of further indications that are investigated is growing rapidly. We performed two surveys among DBS experts (n1 = 113) and centers (n2 = 135) to identify ethical focal points in the current global practice of DBS. The data indicate a mismatch between the patients' fears and the frequencies of the suspected side effects, a significant "satisfaction gap," signs of improvements of outcome, habituation effects in terms of involved disciplines, a growing spectrum of novel indications that partly conflicts with the experts' success probability ratings, and differences in the density of supply between countries that might affect the future development of DBS. We formulate ethical recommendations with regard both to patient-related practices (e.g., recruitment, assurance of alternatives) and to institutional development (e.g., measures for quality assurance and for the development of novel DBS indications).
Sabine Müller, Markus Christen, Henrik Walter, DBS combined with optogenetics — fine-tuning the mind?, AJOB Neuroscience, Vol. 5 (1), 2014. (Journal Article)
Ausgezeichnete Informatikdissertationen 2013, Edited by: Steffen Hölldobler, Abraham Bernstein, et al., Gesellschaft für Informatik, Bonn, 2014. (Edited Scientific Work)
Abraham Bernstein, Jan Marco Leimeister, Natasha Noy, Cristina Sarasua, Elena Simperl, Crowdsourcing and the Semantic Web (Dagstuhl Seminar 14282), Dagstuhl Reports, Vol. 4 (7), 2014. (Journal Article)
Semantic technologies provide flexible and scalable solutions to master and make sense of an increasingly vast and complex data landscape. However, while this potential has been acknowledged for various application scenarios and domains, and a number of success stories exist, it is equally clear that the development and deployment of semantic technologies will always remain reliant on human input and intervention. This is due to the very nature of some of the tasks associated with the semantic data management life cycle, which are known for their knowledge-intensive and/or context-specific character; examples range from conceptual modeling in almost any flavor, to labeling resources (in different languages), describing their content using ontological terms, or recognizing similar concepts and entities. For this reason, the Semantic Web community has always looked into applying the latest theories, methods, and tools from CSCW (Computer Supported Cooperative Work), participatory design, Web 2.0, social computing, and, more recently, crowdsourcing to find ways to engage users and encourage their involvement in the execution of technical tasks. Existing approaches include the use of wikis as semantic content authoring environments and the leveraging of folksonomies to create formal ontologies, but also human computation approaches such as games with a purpose or micro-tasks. This document provides a summary of Dagstuhl Seminar 14282: Crowdsourcing and the Semantic Web, which in July 2014 brought together researchers of the emerging scientific community at the intersection of crowdsourcing and Semantic Web technologies. We collect the position statements written by the participants of the seminar, which played a central role in the discussions about the evolution of our research field.
Abraham Bernstein, Mit Computer Sprechen: Unterschiede und Gemeinsamkeiten zwischen menschlicher und maschineller Sprache, In: Sprache(n) verstehen, vdf, Zurich, p. 197 - 214, 2014. (Book Chapter)