Thomas Kaul, Building an agent for Texas Hold'em Poker based on a recommender system, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2011. (Bachelor's Thesis)
With its well-defined rules, poker provides an environment of great potential for research in artificial intelligence. The popular card game features incomplete information about the game state, non-deterministic outcomes, and stochastic elements whose effects only become apparent after thousands of hands have been played. These circumstances are comparable to decision making in the real world and make the research interesting for applications beyond poker.
A major theme of this thesis is the development of an agent for Texas Hold'em Poker Sit and Go tournaments that plays skillful poker. Our decision-making approach is based on a recommender system: we mimic the behavior and strategies of a human poker player with an artificial intelligence agent. In various simulation setups we show that our approach is superior to simple poker opponents.
Matthias Z'Brun, Online transaction platforms : implementation and multi-agent simulation of the effects of ratings and trust inferencing on online transaction platforms : a game theoretic perspective, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2011. (Master's Thesis)
Online transaction platforms rely on the fair behavior of their participants. They are two-sided markets, which need both customers and suppliers to function. The transaction platform must provide trading conditions that prevent abuse, which would otherwise cause customers or suppliers to migrate away. In its first part, this thesis investigates mechanisms that resist abusive provider strategies. For this purpose, a game-theoretic model of a transaction platform was created, and countermeasures against abusive strategies (rating, trust, no protection) were evaluated in a simulation. The simulation results show that applying trust inference succeeds in eliminating malicious behavior. On this basis, we have implemented the transaction platform TextKing, which is already in use.
Pascal Schöni, Multi-touch in software engineering : augmenting software engineering tasks with multi-touch technology, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2011. (Master's Thesis)
The goal of this thesis is to support software engineering tasks with the Microsoft Surface. Two prototypes were developed. The first prototype addresses agreeing on a good object-oriented design during a Class Responsibility Collaboration (CRC) brainstorming session. The second prototype supports developers in reverse engineering: it examines how the new user interface paradigm fosters cooperation when developers collaborate on recovering a high-level design from a given code base. The evaluation shows that both prototypes work well and suggests new ideas for enhancing them.
33rd International Conference on Software Engineering (ICSE 2011), Edited by: Harald Gall, Nenad Medvidovic, Honolulu, USA, 2011. (Proceedings)
Matthias Hert, G Reif, H C Gall, 'Semantic Web 2.0' - write-enabling the Web of Data, In: 6th Workshop on Semantic Web Applications and Perspectives, 2010-09-21. (Conference or Workshop Paper published in Proceedings)
The Semantic Web today is mainly a read-only Web of Data. Many of the data sets that contribute to the Semantic Web are not stored as native RDF, but generated on demand via wrappers. Despite the fact that user contribution is the key success factor in the Web 2.0, current wrapper approaches and standardization efforts still focus on read-only data access. In this paper, we argue that the Semantic Web should learn from the evolution of the Web 2.0 and consider write-enabled semantic data wrappers.
P Knab, Martin Pinzger, H C Gall, Visual patterns in issue tracking data, In: International Conference on Software Process, 2010-07-08. (Conference or Workshop Paper published in Proceedings)
Software development teams gather valuable data about features and bugs in issue tracking systems. This information can be used to measure and improve the efficiency and effectiveness of the development process. In this paper we present an approach that harnesses the extraordinary capability of the human brain to detect visual patterns. We specify generic visual process patterns that can be found in issue tracking data. With these patterns we can analyze information about effort estimation and about the length and sequence of problem resolution activities. In an industrial case study we apply our interactive tool to identify instances of these patterns and discuss our observations. Our approach was validated through extensive discussions with multiple project managers and developers, as well as feedback from the project review board.
Sandro Boccuzzo, H C Gall, Multi-Touch Collaboration for Software Exploration, In: International Conference on Program Comprehension, 2010-06-30. (Conference or Workshop Paper published in Proceedings)
Software systems have grown so complex and their design is so intricate that no individual can grasp the whole picture. Touch screen technology combined with 3D software visualization offers a promising way for the software engineers involved in a project to share knowledge about a software system in an intuitive way. In this paper we present first results on how such emerging technologies can be combined to support software exploration tasks, such as identifying high-impact changes or revealing problematic parts of the design. As demonstrated with a scenario, this turns the collaborative environment into a vehicle usable during software reviews.
Emanuel Giger, Martin Pinzger, Harald Gall, Predicting the fix time of bugs, In: 2nd International Workshop on Recommendation Systems for Software Engineering, 2010-05-04. (Conference or Workshop Paper published in Proceedings)
Two important questions concerning the coordination of development effort are which bugs to fix first and how long it takes to fix them. In this paper we investigate empirically the relationships between bug report attributes and the time to fix. The objective is to compute prediction models that can be used to recommend whether a new bug should and will be fixed fast or will take more time for resolution. We examine in detail if attributes of a bug report can be used to build such a recommender system. We use decision tree analysis to compute prediction models and 10-fold cross-validation to test them. We explore prediction models in a series of empirical studies with bug report data of six systems of the three open source projects Eclipse, Mozilla, and Gnome. Results show that our models perform significantly better than random classification. For example, fast fixed Eclipse Platform bugs were classified correctly with a precision of 0.654 and a recall of 0.692. We also show that the inclusion of post-submission bug report data of up to one month can further improve prediction models.
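The idea of classifying bugs as fast or slow to fix from report attributes can be illustrated with a minimal sketch. The attributes, thresholds, and data below are purely hypothetical and not taken from the paper's decision-tree models:

```python
# Hypothetical sketch of a fix-time recommender: a tiny hand-written
# decision stump over bug-report attributes. Attribute names and
# thresholds are illustrative, not the paper's actual model.

def predict_fast_fix(bug):
    """Classify a bug report as a 'fast' or 'slow' fix."""
    if bug["severity"] in ("blocker", "critical"):
        return "fast"          # high-severity bugs tend to get fixed quickly
    if bug["comments_first_month"] > 5:
        return "fast"          # active discussion often precedes a quick fix
    return "slow"

def precision_recall(bugs, positive="fast"):
    """Evaluate predictions against known labels for one class."""
    tp = fp = fn = 0
    for bug in bugs:
        predicted = predict_fast_fix(bug)
        if predicted == positive and bug["label"] == positive:
            tp += 1
        elif predicted == positive:
            fp += 1
        elif bug["label"] == positive:
            fn += 1
    return tp / (tp + fp), tp / (tp + fn)

bugs = [
    {"severity": "critical", "comments_first_month": 2, "label": "fast"},
    {"severity": "minor",    "comments_first_month": 8, "label": "fast"},
    {"severity": "minor",    "comments_first_month": 1, "label": "slow"},
    {"severity": "normal",   "comments_first_month": 0, "label": "slow"},
]
precision, recall = precision_recall(bugs)
print(precision, recall)
```

A real decision tree learns such splits from training data instead of hard-coding them; the precision/recall computation mirrors how the paper reports its results.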
Michael Würsch, Giacomo Ghezzi, G Reif, H C Gall, Supporting developers with natural language queries, In: 32nd ACM/IEEE International Conference on Software Engineering, 2010-05-02. (Conference or Workshop Paper published in Proceedings)
The feature list of modern IDEs is growing steadily and mastering these tools becomes more and more demanding, especially for novice programmers. Despite their remarkable capabilities, IDEs often still cannot directly answer the questions that arise during program comprehension tasks. Instead, developers have to map their questions to multiple concrete queries that can be answered only by combining several tools and examining the output of each of them manually to distill an appropriate answer. Existing approaches have in common that they are either limited to a set of predefined, hardcoded questions, or that they require learning a specific query language only suitable for that limited purpose. We present a framework to query for information about a software system using guided-input natural language resembling plain English. For that, we model data extracted by classical software analysis tools with an OWL ontology and use knowledge processing technologies from the Semantic Web to query it. We also present a case study that demonstrates how our framework can be used to answer queries about static source code information for program comprehension purposes.
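The guided-input idea can be sketched in miniature: each question template is bound to a query over a code model. The thesis uses an OWL ontology and Semantic Web query machinery; in this hypothetical stand-in, a plain dictionary plays the ontology's role and the class names are invented:

```python
# Hypothetical sketch of guided-input question answering over a code
# model. A dictionary stands in for the OWL ontology used in the paper;
# classes, methods, and templates are illustrative only.

code_model = {  # class -> facts extracted by static analysis
    "Parser":   {"methods": ["parse", "reset"], "callers": ["Compiler"]},
    "Compiler": {"methods": ["compile"], "callers": []},
}

# Each guided question template is bound to a query function.
TEMPLATES = {
    "What methods does {cls} have?": lambda cls: code_model[cls]["methods"],
    "Who calls {cls}?":              lambda cls: code_model[cls]["callers"],
}

def answer(template, cls):
    """Dispatch a guided-input question to its query."""
    return TEMPLATES[template](cls)

print(answer("What methods does {cls} have?", "Parser"))
```

Guided input sidesteps full natural language parsing: the user composes a question from known templates and slot values, so every question maps unambiguously to a query.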
A Lamkanfi, S Demeyer, Emanuel Giger, B Goethals, Predicting the severity of a reported bug, In: 7th Working Conference on Mining Software Repositories, 2010-05-02. (Conference or Workshop Paper published in Proceedings)
The severity of a reported bug is a critical factor in deciding how soon it needs to be fixed. Unfortunately, while clear guidelines exist on how to assign the severity of a bug, it remains an inherently manual process left to the person reporting the bug. In this paper we investigate whether we can accurately predict the severity of a reported bug by analyzing its textual description using text mining algorithms. Based on three cases drawn from the open-source community (Mozilla, Eclipse and GNOME), we conclude that given a training set of sufficient size (approximately 500 reports per severity), it is possible to predict the severity with a reasonable accuracy (both precision and recall vary between 0.65-0.75 with Mozilla and Eclipse; 0.70-0.85 in the case of GNOME).
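Text-mining severity prediction of this kind can be illustrated with a minimal naive-Bayes-style classifier over the words of a report summary. This is an illustrative sketch, not the paper's exact algorithm, and the training examples are made up:

```python
# Illustrative sketch: a minimal naive-Bayes-style text classifier that
# predicts bug severity from the words of the one-line summary.
# Not the paper's exact method; the training data is invented.
from collections import Counter, defaultdict
import math

def train(reports):
    """reports: list of (summary, severity) pairs."""
    word_counts = defaultdict(Counter)   # severity -> word frequencies
    class_counts = Counter()             # severity -> number of reports
    for summary, severity in reports:
        class_counts[severity] += 1
        word_counts[severity].update(summary.lower().split())
    return word_counts, class_counts

def predict(summary, word_counts, class_counts):
    """Return the severity with the highest log-probability score."""
    total = sum(class_counts.values())
    best, best_score = None, float("-inf")
    for severity, n in class_counts.items():
        score = math.log(n / total)      # class prior
        vocab = sum(word_counts[severity].values())
        for word in summary.lower().split():
            # add-one smoothing so unseen words do not zero the score
            score += math.log((word_counts[severity][word] + 1) / (vocab + 1))
        if score > best_score:
            best, best_score = severity, score
    return best

reports = [
    ("crash on startup data loss", "major"),
    ("application crash when saving", "major"),
    ("typo in preferences dialog label", "minor"),
    ("misaligned button in dialog", "minor"),
]
wc, cc = train(reports)
print(predict("crash during startup", wc, cc))
```

With roughly 500 reports per severity class, as the paper suggests, such word statistics become stable enough for reasonable precision and recall.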
Giacomo Ghezzi, SOFAS: Software Analysis Services, In: 32nd ACM/IEEE International Conference on Software Engineering, 2010-05-02. (Conference or Workshop Paper published in Proceedings)
We propose a distributed and collaborative software analysis platform to enable seamless interoperability of software analysis tools across platform, geographical and organizational boundaries. In particular, we devise software analysis tools as services that can be accessed and composed over the Internet. These distributed services shall be widely accessible through a software analysis broker where organizations and research groups can register and share their tools. To enable (semi-)automatic use and composition of these tools, they will be classified and mapped into a software analysis taxonomy and adhere to specific meta-models and ontologies for their category of analysis. We claim that moving software analysis "outside the lab and into the Web" is highly beneficial from many points of view. Simple, common analyses can be effortlessly combined into much more meaningful, complex and novel ones. Analyses can be run everywhere and anytime without the need to install several tools and to cope with many output formats. Empirical studies can be easily replicated. Finally, we claim that this will greatly help the field mature and boost its role in supporting software development practices.
Michael Würsch, G Reif, S Demeyer, H C Gall, Fostering synergies - how semantic web technology could influence software repositories, In: 2nd International Workshop on Search-driven Development: Users, Infrastructure, Tools and Evaluation, 2010-05-01. (Conference or Workshop Paper published in Proceedings)
The state-of-the-art in mining software repositories mirrors software artifacts from various sources into monolithic relational databases. This puts a lot of querying power in the hands of the software miners; however, it comes at the cost of enclosing the data and hampering cross-application reuse. In this paper we discuss four problem scenarios to illustrate that Semantic Web technology is able to overcome these limitations. However, it requires that the software engineering research community agrees on two prerequisites: (a) a common vocabulary to talk about software repositories -- an ontology; (b) a strategy for generating unique and stable references to all software artifacts inside such a repository -- a Uniform Resource Identifier (URI).
Matthias Hert, G Reif, H C Gall, Updating relational data via SPARQL/Update, In: Workshop on Updates in XML, 2010-03-22. (Conference or Workshop Paper published in Proceedings)
Relational Databases (RDBs) are used in most current enterprise environments to store and manage data. The semantics of the data is not explicitly encoded in the relational model, but implicitly at the application level. Ontologies and Semantic Web technologies provide explicit semantics that allows data to be shared and reused across application, enterprise, and community boundaries. Converting all relational data to RDF is often not feasible; therefore we adopt a mediation approach for ontology-based access to RDBs. Existing mapping approaches focus on read-only access via SPARQL or as Linked Data, but other data access interfaces exist, including approaches for updating RDF data. In this paper we present OntoAccess, an extensible platform for ontology-based read and write access to existing relational data. It encapsulates the translation logic in the core layer that provides the foundation of an extensible set of data access interfaces in the interface layer. We further present the formal definition of our RDB-to-RDF mapping, the architecture of our mediator platform, and a performance evaluation of the prototype implementation.
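The mediation idea can be sketched as follows. This is a toy mapping, not OntoAccess's actual translation logic: an RDF property is tied to a table/column pair, and a triple arriving via a SPARQL/Update INSERT is rewritten into the corresponding SQL statement:

```python
# Toy sketch of RDB-to-RDF write mediation (not OntoAccess's real
# logic): a mapping ties RDF properties to relational columns, and an
# incoming triple (as produced by a SPARQL/Update INSERT) is rewritten
# into SQL. Table, column, and property names are purely illustrative.

MAPPING = {
    # RDF property -> (table, key column, value column)
    "foaf:name": ("person", "id", "name"),
    "foaf:mbox": ("person", "id", "email"),
}

def triple_to_sql(subject_id, rdf_property, value):
    """Translate one inserted triple into an SQL UPDATE statement."""
    table, key_col, val_col = MAPPING[rdf_property]
    return (f"UPDATE {table} SET {val_col} = '{value}' "
            f"WHERE {key_col} = {subject_id}")

sql = triple_to_sql(42, "foaf:name", "Alice")
print(sql)
```

A real mediator would of course use parameterized statements and handle inserts, deletes, and schema constraints; the point here is only the property-to-column rewriting step at the heart of the mediation approach.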
Nicolas Hoby, Software analyses on a multi-touch table: enhancing EvoSpaces 2 with regard to multi-user multi-touch environments, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2010. (Bachelor's Thesis)
In this bachelor thesis we show how EVOSPACES 2 can be extended to take full advantage of a multi-user, touch sensitive environment such as the Microsoft Surface. The goal of EVOSPACES 2 is to allow software engineers to collaborate on software understanding, software evolution and software production. The idea is to combine the capabilities that are provided by this specific environment with the principle of graphically visualizing software. Software visualization can greatly improve an engineer's perception and understanding of a piece of software. Combining it with the intuitive interaction of a multi-touch environment further enhances the usability and usefulness.
Roger Wolfer, BibViz, 2010. (Other Publication)
BibViz is a software tool developed by Amancio Bouza for visualizing citation and reference relationships. It helps a user who already has a bibliography for a specific topic to explore further publications that are relevant to that topic. A publication counts as relevant if it cites publications that are already in the bibliography, or if it is cited by a publication in the bibliography. The existing solution had two issues. First, the process of manually adding citations to the bibliography was very time consuming. Second, visualizing a bibliography required adjusting the file path in the source code. The first issue is solved by a web crawler which automatically adds citations and further publication information to the bibliography. The second issue is solved by a GUI which allows the user to choose an existing bibliography, to extend it with the information the web crawler collects, and to save the extended bibliography.
Claudio Steffen, Quality recovery: an evaluation of static code analysis tools, 2010. (Other Publication)
The quality of an evolving system typically degrades as time passes, for example because of new and unforeseen requirements. RAPS is a tool used by SwissLife AG for complex calculations of product data. RAPS is written in C, comprises about 1.5 million lines of code, and has evolved for over ten years. We assume that the quality of RAPS can be improved by identifying and fixing quality issues. This thesis evaluates three tools, namely Bauhaus Suite, Imagix 4D, and the combination of Sotograph and Sotoarc, that might facilitate the process of recovering the quality of RAPS. The goal is to evaluate what can be achieved by using the tools and to estimate which of them is best suited to analyze RAPS or similar systems. Special attention is paid to the role that visualization techniques play in the process of identifying and fixing quality issues.
Thomas Hunziker, Universal data transformation: an event-driven approach for enterprise application integration, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2010. (Bachelor's Thesis)
In the current ERP market, only limited integrated e-commerce solutions are available. E-commerce leaders implement the integration between the shopping solution and the ERP system with a proprietary application. Small and medium-sized businesses can only access the benefits of an integration between the ERP system and the shop system by accepting the limitations of the ERP vendor's shopping solution, as integrating the applications themselves is too expensive. This work provides an application that pursues a universal approach to synchronizing business data between an ERP system and a shop system. The universal approach reduces the integration effort for additional applications. As a result, an integration between the popular open source systems OpenERP and Magento is delivered at the prototype level.
Bo Chen, Synchronization event processing: an event-driven approach for universal data synchronization, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2010. (Bachelor's Thesis)
The current market of integrated ERP and online shop systems is very small and limited. Big players like Amazon, eBay, and Digitec mainly use unique in-house developed solutions. Meanwhile, small and mid-sized businesses struggle with the integration of their ERP systems and e-commerce solutions, as they can afford neither to spend millions on the integration nor to migrate to "one-size-fits-all" solutions. This work aims to provide an approach to a universal integration tool between e-commerce solutions and back-end management systems, easing the integration effort for small and mid-sized businesses. "Universal" means that it should be usable for any combination of e-commerce and back-end management system.
Daniel Maciej Lawniczak, Developing an online chronic pain information system, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2010. (Master's Thesis)
Although chronic pain is widespread and has a heavy impact on the quality of life, the knowledge about this disease is scarce and the treatment is difficult. We developed a prototype for an online chronic pain information system, which is aimed at gathering and providing chronic pain knowledge. Our prototype enables patients to express their pain sensation with pain drawings and additional verbal descriptions. Multiple such entries can be reviewed in a diary. We implemented an algorithm which identifies similar pain drawings. Based on our prototype, we conducted usability tests and interviews with seven chronic pain patients. Various caregivers were involved in the development process and evaluated our prototype. We were able to show that modern information technology can be used to better understand chronic pain.
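The abstract does not spell out how similar pain drawings are identified. As an illustrative stand-in only, two drawings could be compared by the Jaccard overlap of the body regions a patient has marked:

```python
# Hypothetical similarity measure for pain drawings (the thesis's
# actual algorithm is not described in the abstract): compare the sets
# of marked body regions via Jaccard overlap.

def jaccard(regions_a, regions_b):
    """Fraction of body regions shared by two drawings (0.0 to 1.0)."""
    a, b = set(regions_a), set(regions_b)
    return len(a & b) / len(a | b)

drawing1 = ["lower_back", "left_hip", "left_leg"]
drawing2 = ["lower_back", "left_hip", "neck"]
print(round(jaccard(drawing1, drawing2), 2))
```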
Thomas Maurer, Environs: visualization of recommendation clouds on the iPhone, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2010. (Master's Thesis)
Recommender systems are a type of information retrieval and filtering system that tries to propose items to users according to their individual preferences. Collaborative filtering is a method to implement such a recommender system by predicting ratings for items based on the social environment of the user.

In a location recommender system the recommended items are locations, places, or areas of interest. Commonly, such location recommendations focus only on the current location of the user, leaving out other important contextual factors such as time and the locations of other users. This thesis builds on the assumption that users might be interested in places or areas where other users with similar preferences are currently situated. We developed a visualization following the metaphor of a heatmap -- as used, e.g., for precipitation radar images -- where the locations of users are drawn on a map and shape clouds that visually recommend areas of interest. In addition, we developed an abstracted view of the cloud visualization, called projection, which recommends areas and places depending on hour, weekday, and user preferences.

We present our implementation of such a location recommender system, in particular the visualizations. Finally, we evaluate our visual recommendation approach on a synthetic data set against other collaborative filtering algorithms and present promising results.
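The collaborative filtering step underlying such recommendations can be sketched in a few lines: a user's unknown rating for a location is predicted as a similarity-weighted average of other users' ratings. The users, locations, and ratings below are invented, and this is a generic user-based scheme rather than the thesis's specific algorithm:

```python
# Minimal user-based collaborative filtering sketch (illustrative data,
# not the thesis's algorithm): predict a user's rating for a location
# as a similarity-weighted average of other users' ratings.
import math

ratings = {  # user -> {location: rating on a 1-5 scale}
    "u1": {"cafe": 5, "park": 3, "museum": 4},
    "u2": {"cafe": 4, "park": 2, "museum": 5},
    "u3": {"cafe": 1, "park": 5},
}

def cosine_sim(a, b):
    """Cosine similarity over the locations both users have rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[i] * b[i] for i in common)
    na = math.sqrt(sum(a[i] ** 2 for i in common))
    nb = math.sqrt(sum(b[i] ** 2 for i in common))
    return dot / (na * nb)

def predict(user, item):
    """Similarity-weighted average of other users' ratings for item."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        sim = cosine_sim(ratings[user], r)
        num += sim * r[item]
        den += sim
    return num / den if den else None

print(round(predict("u3", "museum"), 2))
```

The heatmap visualization described above replaces the explicit numeric prediction with a spatial one: locations of similar users accumulate into clouds, so areas of interest emerge visually instead of as rating values.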