Gianluca Brero, Machine Learning-powered Iterative Combinatorial Auctions, University of Zurich, Faculty of Business, Economics and Informatics, 2020. (Dissertation)
|
|
Michael Bucher, Overbidding Deviations of Single-Minded Bidders in Combinatorial Auctions, University of Zurich, Faculty of Business, Economics and Informatics, 2020. (Bachelor's Thesis)
In combinatorial auctions, single-minded bidders are bidders who value exactly one bundle. In certain settings, such bidders can manipulate an auction by placing a second bid on a larger bundle they do not actually want: they overbid. In this work, we study this behaviour by comparing two restricted strategy spaces: one in which bidders play a naive strategy, and one in which bidders may place one additional bid on a second bundle. By comparing these strategies, we identify whether naive bidders are missing out on a seemingly inefficient strategy. We find that in some infrequent cases, bidders can be incentivized to deviate from a straightforward strategy. Since combinatorial auctions are often used in markets where large amounts of money are involved, our results suggest that overbidding deviations should be given more attention in future research. |
|
Vitor Bosshard, Sven Seuken, The Cost of Simple Bidding in Combinatorial Auctions, In: ArXiv.org, No. 12237, 2020. (Working Paper)
We study the complexity of bidding optimally in one-shot combinatorial auction mechanisms. Specifically, we consider the two most commonly used payment rules: first-price and VCG-nearest. Prior work has largely assumed that bidders only submit bids on their bundles of interest. However, we show the surprising result that a single-minded bidder may lose an exponential amount of utility by playing his optimal simple strategy (only bidding on his bundle of interest) compared to playing his optimal complex strategy (which involves bidding on an exponential number of bundles). Our work suggests that it is important for future research on combinatorial auctions to fully take these effects into account. |
|
Manuel Beyeler, Bidding with Upper and Lower Bounds in Machine Learning Powered Combinatorial Auctions, University of Zurich, Faculty of Business, Economics and Informatics, 2019. (Bachelor's Thesis)
In Combinatorial Auctions (CAs), multiple items are allocated to multiple bidders at the same time. CAs play an important role in today's economy. However, practical auctions exhibit problems that lead to inefficiency in the final allocation. The bundle space grows exponentially in the number of items, such that it is not possible for a bidder to specify values for all bundles. Furthermore, even determining a single bundle value can be a costly task for a bidder.
This thesis addresses these problems by allowing bidders to report their values with some uncertainty and by helping them focus on important bundles only. The new auction, the Pseudo VCG Mechanism (PVM) with Bounds and Refinement, combines PVM (Brero, Lubin and Seuken, 2018) with the refinement of Lubin et al. (2008). While Brero, Lubin and Seuken (2018) focus on Machine Learning powered bundle elicitation, Lubin et al. (2008) provide a framework to refine bids with uncertainty such that the efficient allocation can be determined.
In addition, this thesis provides a new, improved pricing algorithm which reduces the number of rounds during the refinement.
The evaluation shows that the allocative efficiency of the new auction design is good while the properties of PVM are maintained. Furthermore, the thesis sketches some promising ideas for how the efficiency could be further improved. Meanwhile, the current design has some issues concerning low or negative payments, which need to be addressed in future designs. |
|
Wei Qiu, Dynamic Clearing with Liquidity Assistance in Financial Networks with Credit Default Swaps, University of Zurich, Faculty of Business, Economics and Informatics, 2019. (Master's Thesis)
Existing financial network clearing models can only be applied to limited classes of financial networks with credit default swaps (CDSs). The simultaneous clearing model suffers from default ambiguity. The dynamic clearing model is not defined for financial networks without default costs and does not guarantee that a covered CDS holder is insured. This study presents an alternative clearing model, the modified dynamic clearing model. It extends and modifies the dynamic clearing model by adding default costs and a liquidity loan provided by a central bank. This study shows that the modified dynamic clearing model always has a solution and that a covered CDS holder is actually insured. Further, through a small-scale simulation, this study illustrates some common phenomena. First, there are two reasons for partial repayment of the liquidity loan: excessive debt liability and unforeseeable CDS liability. Second, there are two sources of the central bank's loss: default costs and liquidity redistribution. Last, providing a liquidity loan is effective for the central bank whenever the loan prevents at least one bank from defaulting. |
|
Dmitry Moor, Data markets with dynamic arrival of buyers and sellers, In: The 14th Workshop on the Economics of Networks, Systems and Computation, ACM Press, New York, New York, USA, 2019-07-28. (Conference or Workshop Paper published in Proceedings)
We propose a market design solution for a market for distributed data. The main challenges addressed by our solution are (1) different data providers produce different databases that can be joined to produce answers for users' queries; (2) data providers have high fixed costs for producing their databases; and (3) buyers and sellers can arrive dynamically to the market. Our design relies on using a Markov chain with states corresponding to different numbers of allocated databases. The transition probabilities between different states are governed by the payments suggested by the market platform to the data providers. The main challenge in this setting is to guarantee dynamic incentive compatibility, i.e., to ensure that buyers and sellers are not incentivized to arrive late to the market or to misreport their costs or values. To achieve this, we disentangle the payments suggested by the market platform to the sellers from the posted prices exposed to the buyers. We prove that the buyer-optimal payments that are exposed to sellers are non-increasing which prevents late arrivals of sellers. Further, we demonstrate that the posted prices exposed to buyers constitute a martingale process (i.e., late arrivals lead to the same expected price). Finally, we show that our design guarantees zero expected average budget deficit and we perform a number of simulations to validate our model. |
|
David Lehnherr, The Good, the Bad and the Ugly - Evaluation of Cluster Admission Policies on Historical Deployments, University of Zurich, Faculty of Business, Economics and Informatics, 2019. (Bachelor's Thesis)
Cloud computing has gained recent interest due to its wide audience and the possibility of large revenue. While there is a moderate amount of literature on pricing the cloud and managing tasks, there is little on cluster scheduling, despite today's clusters struggling with low overall utilization. In this work, we compare existing Bayesian admission policies with a greedy baseline approach on historical data. Moreover, we present a new admission policy based on the inequalities of Hoeffding and Bernstein. By simulating one month of historical workload on an artificially filled cluster, we find strong evidence that Bayesian admission policies perform significantly better than the greedy approach. Among the Bayesian policies, we find that policies that take the variance of active cores into account perform better than the others. When provided with additional information, the improvement increases further. |
|
Nils Kristof Olberg, Sven Seuken, Enabling Trade-offs in Machine Learning-based Matching for Refugee Resettlement, In: -, No. -, 2019. (Working Paper)
|
|
Ludwig Dierks, Ian Kash, Sven Seuken, On the cluster admission problem for cloud computing, In: Proceedings of the 14th Workshop on the Economics of Networks, Systems and Computation (NetEcon 2019) , New York, NY, USA, 2019. (Conference or Workshop Paper published in Proceedings)
Cloud computing providers must handle heterogeneous customer workloads for resources such as (virtual) CPU or GPU cores. This is particularly challenging if customers, who are already running a job on a cluster, scale their resource usage up and down over time. The provider therefore has to continuously decide whether she can add additional workloads to a given cluster or if doing so would impact existing workloads' ability to scale. Currently, this is often done using simple threshold policies to reserve large parts of each cluster, which leads to low average utilization of the cluster. In this paper, we propose more sophisticated policies for controlling admission to a cluster and demonstrate that they significantly increase cluster utilization. We first introduce the cluster admission problem and formalize it as a constrained Partially Observable Markov Decision Process (POMDP). As it is infeasible to solve the POMDP optimally, we then systematically design heuristic admission policies that estimate moments of each workload's distribution of future resource usage. Via simulations we show that our admission policies lead to a substantial improvement over the simple threshold policy. We then evaluate how much further this can be improved with learned or elicited prior information and how to incentivize users to provide this information.
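As a rough illustration of the moment-based idea (the function names, the Cantelli bound, and the independence assumption are simplifications for exposition, not the paper's exact heuristics), an admission policy can compare a tail bound on total future usage against the cluster capacity:

```python
def admit_threshold(active_cores, capacity, reserve=0.3):
    """Simple threshold policy: admit as long as utilization stays
    below a fixed headroom fraction (the baseline the paper improves on)."""
    return active_cores <= (1 - reserve) * capacity

def admit_moment(workload_means, workload_vars, new_mean, new_var,
                 capacity, eps=0.01):
    """Illustrative moment-based policy: admit the new workload if a
    one-sided Chebyshev (Cantelli) bound keeps the probability of total
    usage exceeding capacity below eps (assumes independent workloads)."""
    mu = sum(workload_means) + new_mean
    var = sum(workload_vars) + new_var
    slack = capacity - mu
    if slack <= 0:
        return False
    # Cantelli: P(usage - mu >= slack) <= var / (var + slack^2)
    return var / (var + slack * slack) <= eps
```

Unlike the threshold rule, such a moment-based rule can safely admit a low-variance workload even at high mean utilization, which is what drives the utilization gains reported in the paper.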
|
|
Ludwig Dierks, Sven Seuken, Cloud Pricing: The Spot Market Strikes Back, In: Proceedings of the 2019 ACM Conference on Economics and Computation, New York, NY, USA , 2019. (Conference or Workshop Paper published in Proceedings)
Cloud computing providers must constantly hold many idle compute instances available (e.g., for maintenance, or for users with long-term contracts). The resulting low average utilization is undesirable, as a large part of the costs a provider incurs for compute instances is independent of usage. A natural idea to increase the provider's profit is to sell these idle instances on a spot market where users can be preempted. However, this ignores the possible ``market cannibalization'' that may occur in equilibrium. In particular, users who would generate more profit in the provider's existing fixed-price market might move to the spot market and generate less profit. In this paper, we model the provider's profit optimization problem using queuing theory and game theory and analyze the equilibria of the resulting queuing system. Our model notably takes into account both the running and fixed costs of compute resources, as well as the cost users incur when being preempted. Our main result is an easy-to-check condition on the setting under which offering a market consisting of fixed-price instances as well as some spot instances (using idle resources) increases the provider's profit over offering only fixed-price instances. Furthermore, we show that under our condition, such a profit increase can always be combined with a Pareto improvement for the users. This shows that offering such an additional spot market is desirable even when providers face competition. Finally, we illustrate our results numerically to demonstrate the effects the provider's costs and her strategy have on her profit.
|
|
Gianluca Brero, Sébastien Lahaie, Sven Seuken, Fast Iterative Combinatorial Auctions via Bayesian Learning, In: Proceedings of the AAAI Conference on Artificial Intelligence, arXiv, 2019-01-27. (Conference or Workshop Paper published in Proceedings)
|
|
Pouyan Rezakhani, Analysis of Systemic Risk in Financial Networks with Credit Default Swaps via Monte Carlo Simulations, University of Zurich, Faculty of Business, Economics and Informatics, 2019. (Master's Thesis)
We examine, via Monte Carlo simulations, what effect the presence of CDSs has on systemic risk, mainly default ambiguity. Default ambiguity is a situation in which it is impossible to decide which banks are in default. This phenomenon is observed when we attempt to solve the clearing problem. It is known that, in financial networks with CDSs, the clearing problem is computationally hard and the desired properties proposed by Eisenberg and Noe (2001) are no longer guaranteed to hold. Our first contribution is the development of a stress-testing algorithm that is both efficient and able to provide certificates for instances in which these desiderata are not satisfied. Furthermore, we define an analysis framework which enables us to quantify the effects of the network structure on default ambiguity. Finally, we observe that this type of systemic risk arises in realistic networks, and we therefore discuss policy implications to prevent its occurrence in the CDS market. |
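For context, the baseline clearing problem without CDSs is well behaved: the greatest clearing payment vector of Eisenberg and Noe (2001) can be found by a simple monotone fixed-point iteration. The following minimal textbook sketch (not the thesis's stress-testing algorithm) shows the object whose CDS extension becomes computationally hard:

```python
def eisenberg_noe_clearing(liabilities, external_assets, tol=1e-10):
    """Greatest clearing payment vector for a plain Eisenberg-Noe network
    (no CDSs, no default costs), via fixed-point iteration from full payment.
    liabilities[i][j] is bank i's nominal debt to bank j."""
    n = len(liabilities)
    bar_p = [sum(row) for row in liabilities]      # total nominal obligations
    # relative liabilities matrix (zero row for banks with no debt)
    pi = [[(liabilities[i][j] / bar_p[i]) if bar_p[i] else 0.0
           for j in range(n)] for i in range(n)]
    p = bar_p[:]                                   # start from full payment
    while True:
        new_p = []
        for i in range(n):
            # bank i pays the minimum of its obligations and its assets
            assets = external_assets[i] + sum(p[k] * pi[k][i] for k in range(n))
            new_p.append(min(bar_p[i], assets))
        if max(abs(a - b) for a, b in zip(new_p, p)) < tol:
            return new_p
        p = new_p
```

Starting the iteration from full payment makes the payment vector decrease monotonically, which is why the greatest clearing vector is reached; with CDSs, payments also determine CDS payouts, and this monotonicity breaks down.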
|
Tobias Grubenmann, Daniele Dell'Aglio, Abraham Bernstein, Dmitrii Moor, Sven Seuken, Make restaurants pay your server bills, In: ISWC 2018 Posters & Demonstrations and Industry Tracks, CEUR-WS.org, Aachen, 2018-10-08. (Conference or Workshop Paper)
|
|
Marius Högger, Bayesian Optimization for Best Response Computation in Combinatorial Auctions, University of Zurich, Faculty of Business, Economics and Informatics, 2018. (Bachelor's Thesis)
The Bayes-Nash equilibrium (BNE) solution to combinatorial auctions (CAs) carries much information that helps in understanding allocation problems, since it represents a stable situation in which none of the bidders would want to deviate. This thesis explores the possibility of using Bayesian optimization (BO) to reduce the number of expected-utility evaluations when searching for the pointwise best response used in finding the BNE. This research is based on the algorithm presented in [Bosshard et al., 2017]. Comparing BO with pattern search (PS) on the univariate LLG domain, we estimate the reduction in costly evaluations and examine the influence of the variance of the Monte Carlo (MC) integration used to calculate the expected utility. Finally, we verify the quality of the results obtained with BO by comparing them against known analytical results. While confirming that BO can find equilibria of similar quality with fewer evaluations, this thesis also highlights certain reservations concerning the effectiveness of BO for finding best responses in the LLG domain. |
|
Vitor Bosshard, Ye Wang, Sven Seuken, Non-decreasing Payment Rules for Combinatorial Auctions, In: Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}, International Joint Conferences on Artificial Intelligence Organization, California, 2018. (Conference or Workshop Paper published in Proceedings)
|
|
Gianluca Brero, Benjamin Lubin, Sven Seuken, Combinatorial Auctions via Machine Learning-based Preference Elicitation, In: International Joint Conference on Artificial Intelligence and the Twenty-third European Conference on Artificial Intelligence (IJCAI-ECAI-18), 2018. (Conference or Workshop Paper published in Proceedings)
|
|
Fabio Elia Isler, Design of a Combinatorial Market for Offloading Cellular Traffic via Wireless Access Points, University of Zurich, Faculty of Business, Economics and Informatics, 2018. (Master's Thesis)
Mobile network operators (MNOs) are facing explosive growth in cellular data demand in urban areas, which can overload their cellular network and lead to a bad user experience for their customers. To cope with this, some MNOs have started to deploy their own Wi-Fi access points to support their cellular network. At the same time, many already deployed third-party Wi-Fi access points stay idle most of the time and could potentially be used for on-demand offloading as well. This thesis proposes a design of a market that is built upon a combinatorial auction in which MNOs bid for offloading part of their customers' demand to such third-party Wi-Fi access points. The performance of this market is evaluated by carrying out an extensive computational study. The experimental results indicate that the market achieves high efficiency, from which all market participants can potentially benefit. |
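At the core of such a market sits the winner determination problem of the underlying combinatorial auction: select a welfare-maximizing set of non-overlapping bids. A minimal brute-force sketch (illustrative only; the thesis does not specify this implementation, and the enumeration is exponential in the number of bids):

```python
from itertools import combinations

def winner_determination(bids):
    """Brute-force winner determination. bids is a list of
    (bidder, frozenset_of_items, value) tuples; returns the maximum
    welfare and a welfare-maximizing set of non-overlapping bids."""
    best_val, best_set = 0, []
    for r in range(1, len(bids) + 1):
        for combo in combinations(bids, r):
            bundles = [b[1] for b in combo]
            # feasible iff the bundles are pairwise disjoint
            if sum(len(s) for s in bundles) == len(frozenset().union(*bundles)):
                val = sum(b[2] for b in combo)
                if val > best_val:
                    best_val, best_set = val, list(combo)
    return best_val, best_set
```

In practice such markets use integer programming rather than enumeration, but the sketch shows the combinatorial structure that MNO bids on access-point bundles would induce.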
|
Gianluca Brero, Sébastien Lahaie, A Bayesian clearing mechanism for combinatorial auctions, In: Thirty-Second AAAI Conference on Artificial Intelligence, 2018. (Conference or Workshop Paper published in Proceedings)
|
|
Vitor Bosshard, Benedikt Bünz, Benjamin Lubin, Sven Seuken, Computing Bayes-Nash Equilibria in Combinatorial Auctions with Continuous Value and Action Spaces, In: Twenty-Sixth International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence Organization, California, 2017. (Conference or Workshop Paper published in Proceedings)
Combinatorial auctions (CAs) are widely used in practice, which is why understanding their incentive properties is an important problem. However, finding Bayes-Nash equilibria (BNEs) of CAs analytically is tedious, and prior algorithmic work has only considered limited solution concepts (e.g. restricted action spaces). In this paper, we present a fast, general algorithm for computing symmetric pure ε-BNEs in CAs with continuous values and actions. In contrast to prior work, we separate the search phase (for finding the BNE) from the verification step (for estimating the ε), and always consider the full (continuous) action space in the best response computation. We evaluate our method in the well-studied LLG domain, against a benchmark of 16 CAs for which analytical BNEs are known. In all cases, our algorithm converges quickly, matching the known results with high precision. Furthermore, for CAs with quasi-linear utility functions and independently distributed valuations, we derive a theoretical bound on ε. Finally, we introduce the new Multi-Minded LLLLGG domain with eight goods and six bidders, and apply our algorithm to finding an equilibrium in this domain. Our algorithm is the first to find an accurate BNE in a CA of this size. |
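The search-phase idea of iterating Monte Carlo best responses can be illustrated in a deliberately simplified setting, a first-price single-item auction with uniform values rather than the paper's LLG or LLLLGG domains (all names and parameters here are illustrative, not the paper's algorithm):

```python
import random

def best_response_factor(opponent_factor, samples=20000, grid=101, seed=0):
    """Monte Carlo best response within linear (shading-factor) strategies
    for a first-price single-item auction, values i.i.d. uniform on [0, 1].
    Returns the shading factor maximizing empirical expected utility."""
    rng = random.Random(seed)
    vals = [rng.random() for _ in range(samples)]                 # own values
    opp_bids = [opponent_factor * rng.random() for _ in range(samples)]
    best_f, best_u = 0.0, float("-inf")
    for k in range(grid):
        f = k / (grid - 1)
        # empirical expected utility of bidding f * value against the opponent
        u = sum(v - f * v for v, b in zip(vals, opp_bids) if f * v > b)
        if u > best_u:
            best_f, best_u = f, u
    return best_f

# iterated best response: converges toward the analytical BNE factor 1/2
f = 1.0
for _ in range(5):
    f = best_response_factor(f)
```

Using common random numbers across candidate strategies keeps the noise correlated, which is also why separating the search phase from an independent verification step (as the paper does) matters: the search alone can overfit its own samples.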
|
Daniel Abächerli, A Parametric Proof for Strategyproofness in the Large of the Probabilistic Serial Mechanism, University of Zurich, Faculty of Business, Economics and Informatics, 2017. (Master's Thesis)
The probabilistic serial mechanism (PS) (Bogomolnaia and Moulin, 2001) is one of the most well-understood mechanisms for the assignment problem. The main result of this thesis is that the degree of strategyproofness for PS converges to 1 as markets get large. This generalizes the large market results of Kojima and Manea (2010) and leads to an elegant, parametric proof that PS is strategyproof in the large (Azevedo and Budish, 2017). A second result in this thesis is the observation that the set of utility functions for which PS makes truthful reporting a dominant strategy is not, in general, equal to the URBI(r) set, which refutes a conjecture of Mennle and Seuken (2017c). |
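For reference, PS itself is the simultaneous eating algorithm of Bogomolnaia and Moulin (2001): every agent "eats" probability share of its most-preferred remaining item at unit speed until time 1. A minimal sketch using exact rational arithmetic:

```python
from fractions import Fraction

def probabilistic_serial(prefs):
    """Simultaneous eating algorithm. prefs[i] is agent i's strict
    preference order over item indices; returns p with
    p[i][o] = probability that agent i receives item o."""
    n = len(prefs)
    m = max(max(pr) for pr in prefs) + 1
    remaining = [Fraction(1)] * m              # leftover share of each item
    p = [[Fraction(0)] * m for _ in range(n)]
    ptr = [0] * n                              # next-preferred item per agent
    t = Fraction(0)
    while t < 1:
        # advance each agent to its best item with supply left
        for i in range(n):
            while ptr[i] < len(prefs[i]) and remaining[prefs[i][ptr[i]]] == 0:
                ptr[i] += 1
        eaters = {}                            # item -> agents eating it
        for i in range(n):
            if ptr[i] < len(prefs[i]):
                eaters.setdefault(prefs[i][ptr[i]], []).append(i)
        if not eaters:
            break                              # all agents are satiated
        # eat until some item runs out or total time reaches 1
        dt = min([remaining[o] / len(ag) for o, ag in eaters.items()] + [1 - t])
        for o, ag in eaters.items():
            for i in ag:
                p[i][o] += dt
            remaining[o] -= dt * len(ag)
        t += dt
    return p
```

For example, two agents who both rank item 0 first split it equally and then split item 1, yielding the uniform random assignment.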
|