Markus Christen, Endre Bangerter, Is cyberpeace possible?, In: The nature of peace and the morality of armed conflict, Springer International Publishing, Cham, p. 243 - 263, 2017. (Book Chapter)
 
Markus Christen, Josep Domingo-Ferrer, Dominik Herrmann, Jeroen van den Hoven, Beyond Informed Consent—Investigating Ethical Justifications for Disclosing, Donating or Sharing Personal Data in Research, In: Philosophy and Computing, Springer, Cham, p. 193 - 207, 2017. (Book Chapter)
 
In the last two decades, we have experienced a tremendous growth of the digital infrastructure, leading to an emerging web ecosystem that involves a variety of new types of services. A characteristic element of this web ecosystem is the massive increase in the amount, availability and interpretability of digitalized information, a development for which the buzzword “big data” has been coined. For research, this offers opportunities that just 20 years ago were believed to be impossible. Researchers can now access large participant pools directly using services like Amazon Mechanical Turk, they can collaborate with companies like Facebook to analyze their massive data sets, they can create their own research infrastructures by, e.g., providing data-collecting apps for smartphones, or they can enter new types of collaborations with citizens who donate personal data. Traditional research ethics, with its focus on informed consent, is challenged by such developments: How can informed consent be given when big data research searches for unknown patterns? How can people control their data? How can unintended effects (e.g., discrimination) be prevented when a person donates personal data? In this contribution, we discuss the ethical justification for big data research and argue that a focus on informed consent is insufficient to provide its moral basis. We propose that the ethical issues cluster along three core values that need to be addressed: autonomy, fairness and responsibility. Finally, we outline what a research infrastructure that allows for ethical big data research could look like.
Georg J P Link, Kevin Lumbard, Kieran Conboy, Michael Feldman, Joseph Feller, Jordana George, Matt Germonprez, Sean Goggins, Debora Jeske, Gaye Kiely, Kristen Schuster, Matt Willis, Contemporary issues of open data in information systems research: considerations and recommendations, Communications of the Association for Information Systems, Vol. 41 (25), 2017. (Journal Article)
 
Researchers, governments, and funding agencies are calling on research disciplines to embrace open data: data that is publicly accessible and usable beyond the original authors. The premise is that research efforts can derive several benefits from open data, as such data may provide further insight and enable the replication and extension of current knowledge in different contexts. These potential benefits, coupled with a global push towards open data policies, bring open data onto the agenda of research disciplines, including Information Systems (IS). This paper responds to these developments as follows. We outline themes in the ongoing discussion around open data in the IS discipline. The themes fall into two clusters: (1) the motivation for open data includes themes of mandated sharing, benefits to the research process, extending the life of research data, and career impact; (2) the implementation of open data includes themes of governance, socio-technical systems, standards, data quality, and ethical considerations. In this paper, we outline the findings from a pre-ICIS 2016 workshop on the topic of open data. The workshop discussion confirmed these themes and identified issues with the approaches currently used by IS researchers that require attention. The IS discipline offers a unique knowledge base, tools, and methods that can advance open data across disciplines. Based on our findings, we provide suggestions on how IS researchers can drive the open data conversation. Further, we provide advice for the adoption and establishment of procedures and guidelines for the archival, evaluation, and use of open data.
Joint Proceedings of the 3rd Stream Reasoning (SR 2016) and the 1st Semantic Web Technologies for the Internet of Things (SWIT 2016) workshops, Edited by: Daniele Dell'Aglio, Emanuele Della Valle, Thomas Eiter, Markus Krötzsch, Maria Maleshkova, Ruben Verborgh, Federico Facca, Michael Mrissa, Aachen : M. Jeusfeld c/o Redaktion Sun SITE, Informatik V, RWTH Aachen, Germany, 2017. (Proceedings)

Joint Proceedings of the Web Stream Processing workshop (WSP 2017) and the 2nd International Workshop on Ontology Modularity, Contextuality, and Evolution (WOMoCoE 2017), Edited by: Daniele Dell'Aglio, Darko Anicic, Payam Barnaghi, Emanuele Della Valle, Deborah McGuinness, Loris Bozzato, Thomas Eiter, Martin Homola, Daniele Porello, R. Piskac c/o Redaktion Sun SITE, Informatik V, RWTH Aachen, Germany, 2017. (Proceedings)

Joint Proceedings of the 2nd RDF Stream Processing (RSP 2017) and the Querying the Web of Data (QuWeDa 2017) Workshops, Edited by: Jean-Paul Calbimonte, Minh Dao-Tran, Daniele Dell'Aglio, Danh Le Phuoc, Muhammed Saleem, Ricardo Usbeck, Ruben Verborgh, Axel-Cyrille Ngonga Ngomo, R. Piskac c/o Redaktion Sun SITE, Informatik V, RWTH Aachen, Germany, 2017. (Proceedings)

Emad Yaghmaei, Ibo van de Poel, Markus Christen, Bert Gordijn, Nadine Kleine, Michele Loi, Gwenyth Morgan, Karsten Weber, Cybersecurity and Ethics, University of Zurich / CANVAS, Zurich, https://ssrn.com/abstract=3091909, 2017. (Published Research Report)
 
Jan Mendling, Bart Baesens, Abraham Bernstein, Michael Fellmann, Challenges of Smart Business Process Management: An Introduction to the Special Issue, Decision Support Systems, 2017. (Journal Article)
 
This paper describes the foundations of smart business process management and serves as an editorial to the corresponding special issue. To this end, we introduce a framework that distinguishes three levels of business process management: multi-process management, process model management, and process instance management. For each of these levels, we identify major contributions of prior research and describe to what extent the papers assembled in this special issue extend our understanding of smart business process management.
André Golliez, Doris Albisser, Abraham Bernstein, Adelheid Bürgi-Schmelz, Claudio Dioniso, Felix Frei, Hannes Gassert, Balthasar Glättli, Edith Graf-Litscher, Franz Grüter, Peter Grütter, Ernst Hafen, Jean-Marc Hensch, Andreas Hugi, Thomas Kleiber, Denise Koopmans, Christian Laux, Alessia Neuroni, Hans-Rudolf Sprenger, Matthias Stürmer, Swiss Data Alliance -- Für eine zukunftsorientierte Datenpolitik in der Schweiz [For a forward-looking data policy in Switzerland], 2017. (Other Publication)
 
Mark Alfano, Kathryn Iurino, Paul Stey, Brian Robinson, Markus Christen, Feng Yu, Daniel Lapsley, Development and validation of a multi-dimensional measure of intellectual humility, PLoS ONE, Vol. 12 (8), 2017. (Journal Article)
 
This paper presents five studies on the development and validation of a scale of intellectual humility. This scale captures cognitive, affective, behavioral, and motivational components of the construct that have been identified by various philosophers in their conceptual analyses of intellectual humility. We find that intellectual humility has four core dimensions: Open-mindedness (versus Arrogance), Intellectual Modesty (versus Vanity), Corrigibility (versus Fragility), and Engagement (versus Boredom). These dimensions display adequate self-informant agreement, and adequate convergent, divergent, and discriminant validity. In particular, Open-mindedness adds predictive power beyond the Big Six for an objective behavioral measure of intellectual humility, and Intellectual Modesty is uniquely related to Narcissism. We find that a similar factor structure emerges in Germanophone participants, giving initial evidence for the model’s cross-cultural generalizability.
Bibek Paudel, Fabian Christoffel, Chris Newell, Abraham Bernstein, Updatable, accurate, diverse, and scalable recommendations for interactive applications, ACM Transactions on Interactive Intelligent Systems, Vol. 7 (1), 2016. (Journal Article)
 
Recommender systems form the backbone of many interactive systems. They incorporate user feedback to personalize the user experience, typically via personalized recommendation lists. As users interact with a system, an increasing amount of data about a user’s preferences becomes available, which can be leveraged to improve the system’s performance. Incorporating these new data into the underlying recommendation model is, however, not always straightforward. Many models used by recommender systems are computationally expensive and, therefore, have to perform offline computations to compile the recommendation lists. For interactive applications, it is desirable to be able to update the computed values as soon as new user interaction data is available: updating recommendations in interactive time using new feedback data leads to better accuracy and increases the system’s attractiveness to users. Additionally, there is a growing consensus that accuracy alone is not enough and that user satisfaction also depends on diverse recommendations.
In this work, we tackle this problem of updating personalized recommendation lists for interactive applications in order to provide both accurate and diverse recommendations. To that end, we explore algorithms that exploit random walks as a sampling technique to obtain diverse recommendations without compromising on efficiency and accuracy. Specifically, we present a novel graph vertex ranking recommendation algorithm called RP3β that reranks items based on three-hop random walk transition probabilities. We show empirically that RP3β provides accurate recommendations with high long-tail item frequency at the top of the recommendation list. We also present approximate versions of RP3β and the two most accurate previously published vertex ranking algorithms based on random walk transition probabilities and show that these approximations converge with an increasing number of samples.
To obtain interactively updatable recommendations, we additionally show how our algorithm can be extended for online updates at interactive speeds. The underlying random walk sampling technique makes it possible to perform the updates without having to recompute the values for the entire dataset.
In an empirical evaluation with three real-world datasets, we show that RP3β provides highly accurate and diverse recommendations that can easily be updated with newly gathered information at interactive speeds (≪ 100 ms).
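The reranking idea described above can be illustrated with a short sketch. The following Python snippet assumes a binary user-item interaction matrix in NumPy and an illustrative popularity exponent beta of 0.6 (neither taken from the paper); it computes three-hop random-walk probabilities from a user to items and penalizes popular items. The sampling-based approximations and online updates discussed in the article are not reproduced here.

```python
import numpy as np

def rp3_beta_scores(R, user, beta=0.6):
    """Score items for one user via three-hop random-walk transition
    probabilities, penalized by item popularity (a sketch of the RP3beta
    idea; assumes every user and item has at least one interaction)."""
    # Row-stochastic transition matrices: user -> item and item -> user.
    P_ui = R / R.sum(axis=1, keepdims=True)
    P_iu = R.T / R.T.sum(axis=1, keepdims=True)

    # Three-hop walk starting at `user`: user -> item -> user -> item.
    p = P_ui[user]        # hop 1: distribution over items
    p = p @ P_iu          # hop 2: distribution over users
    p = p @ P_ui          # hop 3: distribution over items

    # Popularity penalty: dividing by item degree^beta promotes long-tail items.
    item_degree = R.sum(axis=0)
    scores = p / np.power(np.maximum(item_degree, 1.0), beta)

    # Exclude items the user has already interacted with.
    scores[R[user] > 0] = -np.inf
    return scores

# Toy usage: three users, four items.
R = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)
print(rp3_beta_scores(R, user=0))
```

Penalizing the three-hop score by item popularity is what lets the top of the list surface long-tail items without discarding the accuracy of the underlying random-walk model.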
Yiftach Nagar, Patrick De Boer, Ana Cristina Bicharra Garcia, Accelerating the review of complex intellectual artifacts in crowdsourced innovation challenges, In: Thirty Seventh International Conference on Information Systems, Dublin, 2016-12-11. (Conference or Workshop Paper published in Proceedings)
 
A critical bottleneck in crowdsourced innovation challenges is the process of reviewing and selecting the best submissions. This bottleneck is especially problematic in settings where submissions are complex intellectual artifacts whose evaluation requires expertise. To help reduce the review load on experts, we offer a computational approach that scores submissions by analyzing sociolinguistic and other characteristics of the submission text, as well as the activities of the crowd and the submission authors. We developed and tested models based on data from contests run on a large citizen-science platform, the Climate CoLab, and find that they accurately predict expert decisions about the submissions and can lead to a substantial reduction of review labor and an acceleration of the review process.
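As an illustration of this general approach, the sketch below combines text-derived and activity-derived features and trains a classifier to predict expert screening decisions. The column names (proposal_text, num_comments, expert_advanced, etc.), the CSV input, and the TF-IDF plus logistic-regression pipeline are assumptions made for the example, not details taken from the paper.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Hypothetical dataset: one row per contest submission.
df = pd.read_csv("submissions.csv")
text_col = "proposal_text"
activity_cols = ["num_comments", "num_supporters", "num_revisions", "author_prior_entries"]

# Combine sociolinguistic/text features with crowd- and author-activity features.
features = ColumnTransformer([
    ("text", TfidfVectorizer(max_features=5000, ngram_range=(1, 2)), text_col),
    ("activity", "passthrough", activity_cols),
])

model = Pipeline([
    ("features", features),
    ("clf", LogisticRegression(max_iter=1000, class_weight="balanced")),
])

# Target: whether experts advanced the submission to the next round (0/1).
scores = cross_val_score(model, df, df["expert_advanced"], cv=5, scoring="roc_auc")
print("Cross-validated AUC:", scores.mean())
```

A model of this kind can be used as a pre-filter: submissions scored confidently low are deprioritized, shrinking the pool that experts must review by hand.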
Markus Göckeritz, Quantifying and Correcting the Majority Illusion in Social Networks, University of Zurich, Faculty of Business, Economics and Informatics, 2016. (Bachelor's Thesis)
 
The majority illusion that was discovered by Lerman et al. tricks individuals into perceiving a social behavior to be popular when in reality, it is not. That is, vertices in a network overestimate the presence of an attribute as highly connected vertices skew the perception of their neighbors. We show how the majority illusion can be quantified on a vertex-centric and a global perspective for binary as well as for continuous attributes. In the context of social contagion, the majority illusion is an interesting case of disproportionate experiences that can cause a false truth to propagate through a network. We propose an approach to exploit the majority illusion in order to artificially promote the diffusion of a binary attribute in a network in a threshold model Granovetter (1978). Our approach returns target vertex sets that are guaranteed to cause an influence cascade that eventually activates the entire network. Our approach out-performs a naive highest-degree approach in scale-free networks that exhibit network structures as described by Barabàsi et al. (2000) and Dorogovtsev and Mendes (2002). In small-word networks as described by Watts and Strogatz (1998) our approach returns target vertex sets that, on average, have twice the size of target vertex sets retrieved with a highest-degree approach. Additionally, we introduce an alternative dynamic diffusion model that considers the time dimension and incorporates assumptions we make about human behavior in the real world. In the diffusion model we introduce, we were unable to confirm or to disprove that the extent and speed at which a social behavior propagates in a diffusion process profits from highly clustered network structures as suggested by Centola (2010) and Centola and Baronchelli (2015).
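A minimal sketch of the vertex-centric versus global quantification for a binary attribute is given below. The node attribute name and the strict 0.5 majority threshold are illustrative assumptions; the thesis's exact measures, its handling of continuous attributes, and the diffusion models are not reproduced here.

```python
import networkx as nx

def majority_illusion(G, attr="active"):
    """Return (global prevalence, share of vertices that locally see a majority)."""
    n_active = sum(1 for v in G if G.nodes[v][attr])
    global_prevalence = n_active / G.number_of_nodes()

    deceived = 0
    for v in G:
        neighbors = list(G.neighbors(v))
        if not neighbors:
            continue
        local_share = sum(1 for u in neighbors if G.nodes[u][attr]) / len(neighbors)
        if local_share > 0.5:          # a majority of v's neighbors look active
            deceived += 1
    return global_prevalence, deceived / G.number_of_nodes()

# Example: a star graph where only the hub is active. Globally the attribute is
# rare, yet every leaf sees 100% of its neighbors as active.
G = nx.star_graph(10)
nx.set_node_attributes(G, {v: (v == 0) for v in G}, "active")
print(majority_illusion(G))   # roughly (0.09, 0.91)
```

The gap between the two returned numbers is one way to express how strongly highly connected vertices distort their neighbors' perception of the attribute's prevalence.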
Daniel Ritter, Interactive Visual Analysis for the Semantic Web via Spectral Coarsening, University of Zurich, Faculty of Business, Economics and Informatics, 2016. (Bachelor's Thesis)
 
In the Big Data era, data exploration and visualization systems are becoming more and more important. In the last few years, very large datasets have become a major research challenge, and with the development of the Semantic Web an increasing amount of semantic data has been created in the form of the Resource Description Framework (RDF).
In this Design Science thesis, a tool was created to display large amounts of semantic data in serialized N-Triples format as graphs in a web application. The Spectral Coarse Graining method was used in order to produce this representation for very large amounts of data.
This Bachelor's thesis provides a basic understanding of the topics Semantic Web, Linked Data, RDF, and Spectral Coarse Graining. It shows the results of the prototype, explains the architecture of the application, presents the frameworks and databases used, and describes the implemented features and functionalities. For the visualization of the RDF data, seven state-of-the-art, graph-based JavaScript frameworks are analyzed and evaluated using a comprehensive criteria catalog. Finally, the work provides an overview of similar visualization systems for the desktop and the web.
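To give an impression of the coarsening step, the sketch below groups vertices with similar components in a leading eigenvector of the random-walk matrix and contracts each group into a single coarse vertex. It is a simplified illustration of the general Spectral Coarse Graining idea (one eigenvector, a fixed number of intervals, self-loops dropped), not the implementation described in the thesis.

```python
import numpy as np
import networkx as nx

def spectral_coarse_grain(G, num_groups=20):
    """Contract groups of vertices with similar leading-eigenvector components
    of the random-walk matrix into single coarse vertices (simplified sketch)."""
    nodes = list(G.nodes())
    A = nx.to_numpy_array(G, nodelist=nodes)
    W = A / A.sum(axis=1, keepdims=True)          # random-walk transition matrix

    # The eigenvector for the second-largest eigenvalue carries coarse structure.
    eigvals, eigvecs = np.linalg.eig(W)
    order = np.argsort(-eigvals.real)
    ev2 = eigvecs[:, order[1]].real

    # Bin vertices into intervals of similar eigenvector components.
    cuts = np.linspace(ev2.min(), ev2.max(), num_groups)
    mapping = {node: int(b) for node, b in zip(nodes, np.digitize(ev2, cuts))}

    # Build the coarse graph; edge weights count the merged original edges.
    coarse = nx.Graph()
    for u, v in G.edges():
        a, b = mapping[u], mapping[v]
        if a != b:
            prev = coarse.get_edge_data(a, b, default={"weight": 0})["weight"]
            coarse.add_edge(a, b, weight=prev + 1)
    return coarse, mapping

coarse, mapping = spectral_coarse_grain(nx.barabasi_albert_graph(2000, 3))
print(coarse.number_of_nodes(), "coarse vertices,", coarse.number_of_edges(), "edges")
```

The coarse graph is small enough to render interactively in the browser, while the mapping lets the application expand a coarse vertex back to its underlying RDF resources on demand.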
Daniele Dell'Aglio, Minh Dao-Tran, Jean-Paul Calbimonte, Danh Le Phuoc, Emanuele Della Valle, A Query Model to Capture Event Pattern Matching in RDF Stream Processing Query Languages, In: Knowledge Engineering and Knowledge Management - 20th International Conference, EKAW 2016, Springer International Publishing, Cham, 2016-11-19. (Conference or Workshop Paper published in Proceedings)
 
Michael Schneider, Exploring the Suitable Workflows for Collaborative Data Analysis, University of Zurich, Faculty of Business, Economics and Informatics, 2016. (Master's Thesis)
 
In the first part of this thesis, the second iteration of the platform COLDATA is presented. The goal of COLDATA is to let freelancers work on sub-tasks of a data analysis project, led and supervised by a data scientist. Following a Design Science approach, the evaluation of the first iteration is analyzed and improvements are derived, implemented and evaluated with a usability test.
The second part contains the technical documentation, consisting of different manuals.
In the third part, the solutions of an exercise in a master's course in data analysis at EPFL are analyzed. The central question was why the results of such data analysis tasks differ although the data and the initial questions were the same. The result is a set of factors that influence explicit and implicit decisions made during the analysis.
Andrea Mauri, Jean-Paul Calbimonte, Daniele Dell'Aglio, Marco Balduini, Marco Brambilla, Emanuele Della Valle, Karl Aberer, TripleWave: Spreading RDF Streams on the Web, In: The Semantic Web - ISWC 2016 - 15th International Semantic Web Conference, Springer International Publishing, Cham, 2016-10-17. (Conference or Workshop Paper published in Proceedings)
 
Shen Gao, Daniele Dell'Aglio, Soheila Dehghanzadeh, Abraham Bernstein, Emanuele Della Valle, Alessandra Mileo, Planning Ahead: Stream-Driven Linked-Data Access under Update-Budget Constraints, In: The 15th International Semantic Web Conference, Heidelberg, 2016. (Conference or Workshop Paper published in Proceedings)
 
Data stream applications are becoming increasingly popular on the web. In these applications, one query pattern is especially prominent: a join between a continuous data stream and some background data (BGD). Oftentimes, the target BGD is large, maintained externally, changing slowly, and costly to query (both in terms of time and money). Hence, practical applications usually maintain a local (cached) view of the relevant BGD. Given that these caches are not updated together with the original BGD, they have to be refreshed under realistic budget constraints (in terms of latency, computation time, and possibly financial cost) to avoid stale data leading to wrong answers. This paper proposes to model the join between streams and the BGD as a bipartite graph. By exploiting the graph structure, we keep the quality of results good enough without refreshing the entire cache for each evaluation. We also introduce two extensions to this method: first, we consider a continuous join between recent portions of a data stream and some BGD to focus on updates that have the longest effect. Second, we consider the future impact of a query on the BGD by proposing to delay some updates to provide fresher answers in the future. By extending an existing stream processor with the proposed policies, we empirically show that we can improve result freshness by 93% over baseline algorithms such as Random Selection or Least Recently Updated.
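The refresh problem can be illustrated with a small sketch: on every evaluation, spend a fixed refresh budget on the cached BGD entries that are expected to matter most for the current stream window. The staleness-times-join-impact heuristic, the class and attribute names (BudgetedCache, join_key, fetch_fn), and the budget value are illustrative assumptions; they stand in for, and do not reproduce, the bipartite-graph-based policies proposed in the paper.

```python
import time
from dataclasses import dataclass, field

@dataclass
class CacheEntry:
    key: str                        # join key into the background data (BGD)
    value: dict = field(default_factory=dict)
    last_refresh: float = 0.0

class BudgetedCache:
    def __init__(self, fetch_fn, budget_per_evaluation=10):
        self.fetch_fn = fetch_fn                  # remote (costly) BGD lookup
        self.budget = budget_per_evaluation
        self.entries = {}                         # key -> CacheEntry

    def refresh(self, window):
        """Refresh at most `budget` entries, chosen by staleness x join impact."""
        now = time.time()
        impact = {}
        for item in window:                       # items in the current stream window
            impact[item.join_key] = impact.get(item.join_key, 0) + 1

        def score(key):
            entry = self.entries.setdefault(key, CacheEntry(key))
            staleness = now - entry.last_refresh
            return staleness * impact[key]

        for key in sorted(impact, key=score, reverse=True)[: self.budget]:
            entry = self.entries[key]
            entry.value = self.fetch_fn(key)      # one costly remote call
            entry.last_refresh = now

    def join(self, window):
        self.refresh(window)
        return [(item, self.entries[item.join_key].value)
                for item in window if item.join_key in self.entries]
```

Delaying some of these refreshes so that they benefit later evaluations, as the paper's second extension proposes, could be emulated here by discounting the staleness term for keys whose join impact is expected to grow.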
Abraham Bernstein, Society Rules, In: 10th International Conference on Web Reasoning and Rule Systems (RR 2016), Springer, 2016-09-09. (Conference or Workshop Paper)
 
Our society is full of rules: rules authorize us to achieve our goals by endowing us with legitimation, they provide the necessary structure to understand the chaos of conflicting indications or tell-tales of a situation, and oftentimes they legitimate our actions. But rules in society are different from what logical rules suggest: they are not as unshakeable, they are continuously renegotiated, they are often even accepted to be wrong but still used, and they serve as inspiration in a situated context rather than as universal truth.
Based on theories about the role of technology in society, this talk will first try to convey the role of rules in social science theory. Extending these insights, it will draw on examples to illustrate how they might be transferred to computer science or artificial intelligence to derive systems that are attuned to the role of rules in social environments and adhere to the social rules of the environment in which they are used.