Simon Hediger, Jeffrey Näf, Michael Wolf, R-NL: covariance matrix estimation for elliptical distributions based on nonlinear shrinkage, In: ArXiv.org, No. 2210.14854, 2023. (Working Paper)
We combine Tyler's robust estimator of the dispersion matrix with nonlinear shrinkage. This approach delivers a simple and fast estimator of the dispersion matrix in elliptical models that is robust against both heavy tails and high dimensions. We prove convergence of the iterative part of our algorithm and demonstrate the favorable performance of the estimator in a wide range of simulation scenarios. Finally, an empirical application demonstrates its state-of-the-art performance on real data.

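The iterative part referenced in the abstract builds on Tyler's fixed-point estimator. As a rough illustration only (not the paper's R-NL estimator, which adds the nonlinear-shrinkage step and targets high dimensions), here is a minimal sketch of the classical Tyler iteration for dimension d = 2, assuming centred, nonzero observations:

```python
def tyler_2x2(data, n_iter=100, tol=1e-10):
    """Tyler's fixed-point iteration for the shape matrix, d = 2.

    data: list of centred, nonzero 2-d observations (x, y).
    Returns the 2x2 shape matrix, normalised to trace d.
    """
    d, n = 2, len(data)
    S = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n_iter):
        # explicit 2x2 inverse of the current shape matrix
        det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
        inv = [[S[1][1] / det, -S[0][1] / det],
               [-S[1][0] / det, S[0][0] / det]]
        new = [[0.0, 0.0], [0.0, 0.0]]
        for (x, y) in data:
            # Mahalanobis-type weight based on x' S^{-1} x
            q = inv[0][0] * x * x + 2 * inv[0][1] * x * y + inv[1][1] * y * y
            w = d / (n * q)
            new[0][0] += w * x * x
            new[0][1] += w * x * y
            new[1][1] += w * y * y
        new[1][0] = new[0][1]
        # normalise to trace d (the shape matrix is only defined up to scale)
        tr = new[0][0] + new[1][1]
        new = [[d * v / tr for v in row] for row in new]
        diff = max(abs(new[i][j] - S[i][j]) for i in range(2) for j in range(2))
        S = new
        if diff < tol:
            break
    return S
```

Each step reweights the observations by their current Mahalanobis-type distance, which is what makes the estimator insensitive to heavy tails.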
Simon Hediger, Loris Michel, Jeffrey Näf, On the use of random forest for two-sample testing, Computational Statistics & Data Analysis, Vol. 170, 2022. (Journal Article)
Following the line of classification-based two-sample testing, tests based on the Random Forest classifier are proposed. The developed tests are easy to use, require almost no tuning, and are applicable for any distribution on R^d. Furthermore, the built-in variable importance measure of the Random Forest gives potential insights into which variables drive the difference in distribution. An asymptotic power analysis for the proposed tests is conducted. Finally, two real-world applications illustrate the usefulness of the introduced methodology. To simplify the use of the method, the R-package “hypoRF” is provided.

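The classification-based testing idea can be sketched in a few lines: if a classifier can tell the two samples apart better than chance, the distributions differ. The sketch below substitutes a leave-one-out 1-nearest-neighbour classifier for the Random Forest of the paper (to stay dependency-free) and uses a permutation p-value; the hypoRF package implements the actual tests:

```python
import random

def loo_1nn_accuracy(points, labels):
    """Leave-one-out accuracy of a 1-nearest-neighbour classifier."""
    n, correct = len(points), 0
    for i in range(n):
        best_d, best_j = float("inf"), -1
        for j in range(n):
            if j == i:
                continue
            d = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
            if d < best_d:
                best_d, best_j = d, j
        correct += labels[best_j] == labels[i]
    return correct / n

def classifier_two_sample_test(x, y, n_perm=200, seed=0):
    """Permutation p-value: can a classifier tell sample x from sample y?"""
    rng = random.Random(seed)
    points = x + y
    labels = [0] * len(x) + [1] * len(y)
    stat = loo_1nn_accuracy(points, labels)
    count = 0
    for _ in range(n_perm):
        perm = labels[:]
        rng.shuffle(perm)  # break any real group structure
        count += loo_1nn_accuracy(points, perm) >= stat
    return (1 + count) / (1 + n_perm)  # permutation p-value
```

A small p-value indicates that the classifier separates the samples better than it separates randomly relabelled ones, i.e. evidence against equality of the distributions.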
Simon Hediger, Jeffrey Näf, Shrinking in COMFORT, In: SSRN, No. 4069441, 2022. (Working Paper)
The present paper combines nonlinear shrinkage with the Multivariate Generalized Hyperbolic (MGHyp) distribution to account for heavy tails in estimating the first and second moments in high dimensions. An Expectation-Maximization (EM) algorithm is developed that is fast, stable, and applicable in high dimensions. Theoretical arguments for the monotonicity of the proposed algorithm are provided and it is shown in simulations that it is able to accurately retrieve parameter estimates. Finally, in an extensive Markowitz portfolio optimization analysis, the approach is compared to state-of-the-art benchmark models. The proposed model excels with a strong out-of-sample portfolio performance combined with a comparably low turnover.

Arberim Bibaj, Regularized Instrumental Variable Regression, University of Zurich, Faculty of Business, Economics and Informatics, 2020. (Bachelor's Thesis)
In econometrics, instrumental variables (IV) regression is an alternative estimation method to ordinary least squares (OLS) when a regressor is potentially correlated with the error terms. However, the literature has revealed that IV estimators, especially the two-stage least squares (2SLS) estimator, suffer from bias when the number of instruments is large compared to the sample size. As shown in previous studies, this so-called many-instruments problem can be addressed by using regularization methods such as ridge or lasso. The goal of this study was to compute regularized 2SLS estimators and to determine whether these estimators have an advantage over standard 2SLS and OLS. A Monte Carlo simulation demonstrates that regularizing the 2SLS estimator helps reduce the many-instruments problem in many scenarios, especially when the number of instruments is near the sample size. In particular, regularization of 2SLS based on the lasso proved useful. These results are consistent with the literature. Furthermore, this work extends existing studies by considering additional parameter values in the Monte Carlo simulation.

Moritz Vandenhirtz, Additive Models for High-Dimensional Financial Data, University of Zurich, Faculty of Business, Economics and Informatics, 2020. (Bachelor's Thesis)
Shajivan Satkurunathan, Multi-Factor Models for Portfolio Selection in Large Dimensions, University of Zurich, Faculty of Business, Economics and Informatics, 2019. (Bachelor's Thesis)
Joseph P Romano, Azeem M Shaikh, Michael Wolf, A practical two-step method for testing moment inequalities, In: Working paper series / Department of Economics, No. 90, 2014. (Working Paper)
This paper considers the problem of testing a finite number of moment inequalities. We propose a two-step approach. In the first step, a confidence region for the moments is constructed. In the second step, this set is used to provide information about which moments are “negative.” A Bonferroni-type correction is used to account for the fact that with some probability the moments may not lie in the confidence region. It is shown that the test controls size uniformly over a large class of distributions for the observed data. An important feature of the proposal is that it remains computationally feasible, even when the number of moments is large. The finite-sample properties of the procedure are examined via a simulation study, which demonstrates, among other things, that the proposal remains competitive with existing procedures while being computationally more attractive.

David B Bell, Olivier Ledoit, Michael Wolf, A new portfolio formation approach to mispricing of marketing performance indicators with an application to customer satisfaction, In: Working paper series / Department of Economics, No. 79, 2013. (Working Paper)
The mispricing of marketing performance indicators (such as brand equity, churn, and customer satisfaction) is an important element of arguments in favor of the financial value of marketing investments. Evidence for mispricing can be assessed by examining whether or not portfolios composed of firms that load highly on marketing performance indicators deliver excess returns. Unfortunately, extant portfolio formation methods that require the use of a risk model are open to the criticism of time-varying risk factor loadings due to the changing composition of the portfolio over time. This is a serious critique, as the direction of the induced bias is unknown. As an alternative, we propose a new method and construct portfolios that are neutral with respect to the desired risk factors a priori. Consequently, no risk model is needed when analyzing the observed returns of our portfolios. We apply our method to a frequently studied marketing performance indicator, customer satisfaction. Using various ways of measuring customer satisfaction, we do not find any convincing evidence that portfolios that load on high customer satisfaction lead to abnormal returns.

Olivier Ledoit, Michael Wolf, Spectrum estimation: a unified framework for covariance matrix estimation and PCA in large dimensions, In: Working paper series / Department of Economics, No. 105, 2013. (Working Paper)
Covariance matrix estimation and principal component analysis (PCA) are two cornerstones of multivariate analysis. Classic textbook solutions perform poorly when the dimension of the data is of a magnitude similar to the sample size, or even larger. In such settings, there is a common remedy for both statistical problems: nonlinear shrinkage of the eigenvalues of the sample covariance matrix. The optimal nonlinear shrinkage formula depends on unknown population quantities and is thus not available. It is, however, possible to consistently estimate an oracle nonlinear shrinkage, which is motivated on asymptotic grounds. A key tool to this end is consistent estimation of the set of eigenvalues of the population covariance matrix (also known as the spectrum), an interesting and challenging problem in its own right. Extensive Monte Carlo simulations demonstrate that our methods have desirable finite-sample properties and outperform previous proposals.

Jann Stoz, Risikovorhersage zwecks optimaler Ausschöpfung des gegebenen Risiko-Budgets für Rohstoff-Portfolios, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2013. (Bachelor's Thesis)
Michael Wolf, Dan Wunderli, Bootstrap joint prediction regions, In: Working paper series / Department of Economics, No. 64, 2013. (Working Paper)
Many statistical applications require the forecast of a random variable of interest over several periods into the future. The sequence of individual forecasts, one period at a time, is called a path forecast, where the term path refers to the sequence of individual future realizations of the random variable. The problem of constructing a corresponding joint prediction region has been rather neglected in the literature so far: such a region is supposed to contain the entire future path with a prespecified probability. We develop bootstrap methods to construct joint prediction regions. The resulting regions are proven to be asymptotically consistent under a mild high-level assumption. We compare the finite-sample performance of our joint prediction regions to some previous proposals via Monte Carlo simulations. An empirical application to a real data set is also provided.

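The core construction can be sketched generically: given a set of bootstrap paths, widen a pointwise band until a fraction (1 - alpha) of the paths lie entirely inside it. This sup-t envelope sketch is an illustration only (the paper's studentized procedure and its consistency proof are more involved), and the scale floor of 1e-12 is just a numerical safeguard:

```python
def joint_prediction_region(paths, alpha=0.1):
    """Envelope-style joint prediction region from bootstrap path forecasts.

    paths: list of equal-length bootstrap paths (lists of floats).
    Returns (lower, upper) bands such that roughly a fraction
    (1 - alpha) of the bootstrap paths lie entirely inside them.
    """
    h, b = len(paths[0]), len(paths)
    # pointwise centre and scale across bootstrap paths
    centre = [sum(p[t] for p in paths) / b for t in range(h)]
    scale = [max(1e-12,
                 (sum((p[t] - centre[t]) ** 2 for p in paths) / b) ** 0.5)
             for t in range(h)]
    # sup-t statistic of each path; its (1 - alpha) quantile sets the width
    sup_t = sorted(max(abs(p[t] - centre[t]) / scale[t] for t in range(h))
                   for p in paths)
    q = sup_t[min(b - 1, int((1 - alpha) * b))]
    lower = [centre[t] - q * scale[t] for t in range(h)]
    upper = [centre[t] + q * scale[t] for t in range(h)]
    return lower, upper
```

Because the width is calibrated on the supremum over the whole horizon rather than pointwise, the band covers the entire path jointly, which is exactly what distinguishes a joint prediction region from a sequence of marginal intervals.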
Joseph P Romano, Michael Wolf, Testing for monotonicity in expected asset returns, In: Working paper series / Department of Economics, No. 17, 2013. (Working Paper)
Many postulated relations in finance imply that expected asset returns strictly increase in an underlying characteristic. To examine the validity of such a claim, one needs to take the entire range of the characteristic into account, as is done in the recent proposal of Patton and Timmermann (2010). But their test is only a test for the direction of monotonicity, since it requires the relation to be monotonic from the outset: either weakly decreasing under the null or strictly increasing under the alternative. When the relation is non-monotonic or weakly increasing, the test can break down and falsely ‘establish’ a strictly increasing relation with high probability. We offer some alternative tests that do not share this problem. The behavior of the various tests is illustrated via Monte Carlo studies. We also present empirical applications to real data.

Samuel Mösle, The Economic Consequences of Mr Churchill, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2012. (Master's Thesis)
Stefan Bruder, Comparing Several Methods to Compute Joint Prediction Regions, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2012. (Master's Thesis)
Olivier Ledoit, Michael Wolf, Nonlinear shrinkage estimation of large-dimensional covariance matrices, The Annals of Statistics, Vol. 40 (2), 2012. (Journal Article)
Many statistical applications require an estimate of a covariance matrix and/or its inverse. When the matrix dimension is large compared to the sample size, which happens frequently, the sample covariance matrix is known to perform poorly and may suffer from ill-conditioning. There already exists an extensive literature concerning improved estimators in such situations. In the absence of further knowledge about the structure of the true covariance matrix, the most successful approach so far, arguably, has been shrinkage estimation. Shrinking the sample covariance matrix to a multiple of the identity, by taking a weighted average of the two, turns out to be equivalent to linearly shrinking the sample eigenvalues to their grand mean, while retaining the sample eigenvectors. Our paper extends this approach by considering nonlinear transformations of the sample eigenvalues. We show how to construct an estimator that is asymptotically equivalent to an oracle estimator suggested in previous work. As demonstrated in extensive Monte Carlo simulations, the resulting bona fide estimator can result in sizeable improvements over the sample covariance matrix and also over linear shrinkage.

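The linear-shrinkage baseline described in the abstract has a one-line form on the eigenvalues. A minimal sketch (the shrinkage intensity is taken as given here; in practice it is estimated from the data, and the paper's nonlinear method replaces the single intensity with an eigenvalue-specific transformation):

```python
def linear_shrink_eigenvalues(sample_eigs, intensity):
    """Pull each sample eigenvalue toward the grand mean.

    intensity = 0 keeps the sample eigenvalues unchanged; intensity = 1
    replaces them all by their average, which corresponds to shrinking
    the covariance matrix all the way to a multiple of the identity.
    """
    if not 0.0 <= intensity <= 1.0:
        raise ValueError("intensity must lie in [0, 1]")
    grand_mean = sum(sample_eigs) / len(sample_eigs)
    return [(1 - intensity) * lam + intensity * grand_mean
            for lam in sample_eigs]
```

The shrunk eigenvalues are then recombined with the unchanged sample eigenvectors. Note that the transformation preserves the trace: the total variance is redistributed, not altered.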
Maarten Jan Manders, Relative-value arbitrage, excess volatility and market efficiency, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2012. (Master's Thesis)
Michael Wolf, Dan Wunderli, Fund-of-funds construction by statistical multiple testing methods, In: The Oxford Handbook of Quantitative Asset Management, Oxford University Press, Oxford, p. 116 - 135, 2011-12-15. (Book Chapter)
Olivier Ledoit, Michael Wolf, Nonlinear Shrinkage Estimation of Large-Dimensional Covariance Matrices, In: Working paper series / Institute for Empirical Research in Economics, No. 515, 2011. (Working Paper)
Many statistical applications require an estimate of a covariance matrix and/or its inverse. When the matrix dimension is large compared to the sample size, which happens frequently, the sample covariance matrix is known to perform poorly and may suffer from ill-conditioning. There already exists an extensive literature concerning improved estimators in such situations. In the absence of further knowledge about the structure of the true covariance matrix, the most successful approach so far, arguably, has been shrinkage estimation. Shrinking the sample covariance matrix to a multiple of the identity, by taking a weighted average of the two, turns out to be equivalent to linearly shrinking the sample eigenvalues to their grand mean, while retaining the sample eigenvectors. Our paper extends this approach by considering nonlinear transformations of the sample eigenvalues. We show how to construct an estimator that is asymptotically equivalent to an oracle estimator suggested in previous work. As demonstrated in extensive Monte Carlo simulations, the resulting bona fide estimator can result in sizeable improvements over the sample covariance matrix and also over linear shrinkage.

Olivier Ledoit, Michael Wolf, Robust performance hypothesis testing with the variance, Wilmott Magazine, Vol. 2011 (55), 2011. (Journal Article)
Applied researchers often test for the difference of the variance of two investment strategies; in particular, when the investment strategies under consideration aim to implement the global minimum variance portfolio. A popular tool to this end is the F-test for the equality of variances. Unfortunately, this test is not valid when the returns are correlated, have tails heavier than the normal distribution, or are of a time series nature. Instead, we propose the use of robust inference methods. In particular, we suggest constructing a studentized time series bootstrap confidence interval for the ratio of the two variances and declaring the two variances different if the value one is not contained in the obtained interval. This approach has the advantage that one can simply resample from the observed data as opposed to some null-restricted data. A simulation study demonstrates the improved finite-sample performance compared to existing methods.

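The resampling logic can be sketched as follows. This is a simplified percentile version (the paper advocates a studentized interval, which has better finite-sample properties); blocks of consecutive observations are resampled jointly to preserve both the serial dependence and the cross-strategy correlation, and the block length here is an illustrative choice:

```python
import random

def circular_block_bootstrap(series, block_len, rng):
    """One circular-block-bootstrap resample of a time series."""
    n, out = len(series), []
    while len(out) < n:
        start = rng.randrange(n)
        out.extend(series[(start + k) % n] for k in range(block_len))
    return out[:n]

def variance(xs):
    """Unbiased sample variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def bootstrap_variance_ratio_ci(r1, r2, block_len=5, n_boot=999,
                                alpha=0.05, seed=0):
    """Percentile bootstrap CI for var(r1)/var(r2).

    r1, r2: return series of the two strategies over the same periods.
    Pairs of returns are resampled jointly in blocks.
    """
    rng = random.Random(seed)
    pairs = list(zip(r1, r2))
    ratios = []
    for _ in range(n_boot):
        bs = circular_block_bootstrap(pairs, block_len, rng)
        a, b = zip(*bs)
        ratios.append(variance(a) / variance(b))
    ratios.sort()
    lo = ratios[int(alpha / 2 * n_boot)]
    hi = ratios[int((1 - alpha / 2) * n_boot)]
    return lo, hi
```

One would then declare the two variances different at level alpha if the value one falls outside the returned interval; note that, exactly as the abstract emphasizes, the resampling is done from the observed data, with no null restriction imposed.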
Joseph P Romano, Michael Wolf, Alternative Tests for Monotonicity in Expected Asset Returns, In: Department of Economics Working Paper Series, No. 17, 2011. (Working Paper)
Many postulated relations in finance imply that expected asset returns should monotonically increase in a certain characteristic. To examine the validity of such a claim, one typically considers a finite number of return categories, ordered according to the underlying characteristic. A standard approach is to simply test for a difference in expected returns between the highest and the lowest return category. However, such an approach can be misleading, since the relation of expected returns could be flat, or even decreasing, in the range of intermediate categories. A new test, taking the entire range of categories into account, has been proposed by Patton and Timmermann (2010). Unfortunately, the test is based on an additional assumption that can be violated in many applications of practical interest. As a consequence, it can be quite likely for the test to ‘establish’ strict monotonicity of expected asset returns when such a relation actually does not exist. We offer some alternative tests which do not share this problem. The behavior of the various tests is illustrated via Monte Carlo studies. We also present empirical applications to real data.