Oliver Bachmann, Comparing commuters’ short-term and long-term travel mode demand: evidence from the Canton of Zurich, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2007. (Dissertation)
|
|
Joseph P Romano, Michael Wolf, Improved Nonparametric Confidence Intervals in Time Series Regressions, In: Working paper series / Institute for Empirical Research in Economics, No. 273, 2006. (Working Paper)
Confidence intervals in econometric time series regressions suffer from notorious coverage problems. This is especially true when the dependence in the data is noticeable and sample sizes are small to moderate, as is often the case in empirical studies. This paper suggests using the studentized block bootstrap and discusses practical issues, such as the choice of the block size. A particular data-dependent method is proposed to automate the method. As a side note, it is pointed out that symmetric confidence intervals are preferred over equal-tailed ones, since they exhibit improved coverage accuracy. The improvements in small sample performance are supported by a simulation study. |
|
Joseph P Romano, Michael Wolf, Improved nonparametric confidence intervals in time series regressions, Journal of Nonparametric Statistics, Vol. 18 (2), 2006. (Journal Article)
Confidence intervals in econometric time series regressions suffer from notorious coverage problems. This is especially true when the dependence in the data is noticeable and sample sizes are small to moderate, as is often the case in empirical studies. This article suggests using the studentized block bootstrap and discusses practical issues such as the choice of the block size. A particular data-dependent method is proposed to automate the method. As a side note, it is pointed out that symmetric confidence intervals are preferred over equal-tailed ones, as they exhibit improved coverage accuracy. The improvements in small sample performance are supported by a simulation study. |
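The studentized block bootstrap described in this abstract can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the authors' implementation: it uses a naive OLS standard error for studentization, a fixed block length (the paper proposes a data-dependent choice), and the moving-block resampling scheme; the symmetric interval is built from quantiles of the absolute studentized statistic, as the abstract recommends.

```python
import numpy as np

def ols_slope(y, x):
    """OLS slope and its (naive) standard error for y = a + b*x + e."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = X.shape
    sigma2 = resid @ resid / (n - k)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1], np.sqrt(cov[1, 1])

def block_bootstrap_ci(y, x, block_len=5, B=999, alpha=0.05, seed=0):
    """Symmetric studentized moving-block bootstrap CI for the slope.

    `block_len` is held fixed here for illustration; choosing it in a
    data-dependent way is the practical issue the paper addresses.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    b_hat, se_hat = ols_slope(y, x)
    n_blocks = int(np.ceil(n / block_len))
    t_abs = np.empty(B)
    for i in range(B):
        # Draw overlapping blocks of consecutive observations to
        # preserve the serial dependence within each block.
        starts = rng.integers(0, n - block_len + 1, size=n_blocks)
        idx = np.concatenate([np.arange(s, s + block_len) for s in starts])[:n]
        b_star, se_star = ols_slope(y[idx], x[idx])
        t_abs[i] = abs(b_star - b_hat) / se_star
    q = np.quantile(t_abs, 1 - alpha)  # symmetric: quantile of |t*|
    return b_hat - q * se_hat, b_hat + q * se_hat
```

Studentizing (dividing by the resampled standard error) is what yields the higher-order coverage accuracy the abstract refers to, compared with bootstrapping the raw slope estimates.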
|
Michael Wolf, Resampling vs. Shrinkage for Benchmarked Managers, In: Working paper series / Institute for Empirical Research in Economics, No. 263, 2006. (Working Paper)
A well-known pitfall of Markowitz (1952) portfolio optimization is that the sample covariance matrix, which is a critical input, is very erroneous when there are many assets to choose from. If unchecked, this phenomenon skews the optimizer towards extreme weights that tend to perform poorly in the real world. One solution that has been proposed is to shrink the sample covariance matrix by pulling its most extreme elements towards more moderate values. An alternative solution is the resampled efficiency suggested by Michaud (1998). This paper compares shrinkage estimation to resampled efficiency. In addition, it studies whether the two techniques can be combined to achieve a further improvement. All this is done in the context of an active portfolio manager who aims to outperform a benchmark index and who is evaluated by his realized information ratio. |
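The shrinkage idea in the abstract, pulling extreme elements of the sample covariance matrix towards more moderate values, can be sketched as a convex combination of the sample matrix and a structured target. This is a simplified illustration, not the paper's estimator: the scaled-identity target and the fixed intensity `delta` are assumptions made here for brevity, whereas the literature derives a data-dependent optimal intensity.

```python
import numpy as np

def shrink_covariance(returns, delta=0.3):
    """Shrink the sample covariance matrix toward a scaled-identity target.

    `delta` is a hypothetical fixed shrinkage intensity chosen for
    illustration; in practice it would be estimated from the data.
    """
    S = np.cov(returns, rowvar=False)     # sample covariance (rows = dates)
    p = S.shape[0]
    mu = np.trace(S) / p                  # average sample variance
    F = mu * np.eye(p)                    # shrinkage target
    return delta * F + (1 - delta) * S    # convex combination
```

Even when there are more assets than observations, so that the sample matrix is singular, the shrunk matrix is positive definite and therefore usable as input to a Markowitz-type optimizer.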
|
Dennis Gärtner, Essays in industrial organization and mechanism design, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2006. (Dissertation)
|
|
Joseph P Romano, Azeem M Shaikh, Michael Wolf, Formalized Data Snooping Based on Generalized Error Rates, In: Working paper series / Institute for Empirical Research in Economics, No. 259, 2005. (Working Paper)
It is common in econometric applications that several hypothesis tests are carried out at the same time. The problem then becomes how to decide which hypotheses to reject, accounting for the multitude of tests. The classical approach is to control the familywise error rate (FWE), that is, the probability of one or more false rejections. But when the number of hypotheses under consideration is large, control of the FWE can become too demanding. As a result, the number of false hypotheses rejected may be small or even zero. This suggests replacing control of the FWE by a more liberal measure. To this end, we review a number of proposals from the statistical literature. We briefly discuss how these procedures apply to the general problem of model selection. A simulation study and two empirical applications illustrate the methods. |
|
David Afshartous, Michael Wolf, Avoiding Data Snooping in Multilevel and Mixed Effects Models, In: Working paper series / Institute for Empirical Research in Economics, No. 260, 2005. (Working Paper)
Multilevel or mixed effects models are commonly applied to hierarchical data; for example, see Goldstein (2003), Raudenbush and Bryk (2002), and Laird and Ware (1982). Although there exist many outputs from such an analysis, the level-2 residuals, otherwise known as random effects, are often of both substantive and diagnostic interest. Substantively, they are frequently used for institutional comparisons or rankings. Diagnostically, they are used to assess the model assumptions at the group level. Current inference on the level-2 residuals, however, typically does not account for data snooping, that is, for the harmful effects of carrying out a multitude of hypothesis tests at the same time. We provide a very general framework that encompasses both of the following inference problems: (1) Inference on the 'absolute' level-2 residuals to determine which are significantly different from zero, and (2) Inference on any prespecified number of pairwise comparisons. Thus, the user has the choice of testing the comparisons of interest. As our methods are flexible with respect to the estimation method invoked, the user may choose the desired estimation method accordingly. We demonstrate the methods with the London Education Authority data used by Rasbash et al. (2004), the Wafer data used by Pinheiro and Bates (2000), and the NELS data used by Afshartous and de Leeuw (2004). |
|
Joseph P Romano, Michael Wolf, Control of Generalized Error Rates in Multiple Testing, In: Working paper series / Institute for Empirical Research in Economics, No. 245, 2005. (Working Paper)
Consider the problem of testing s hypotheses simultaneously. The usual approach to dealing with the multiplicity problem is to restrict attention to procedures that control the probability of even one false rejection, the familiar familywise error rate (FWER). In many applications, particularly if s is large, one might be willing to tolerate more than one false rejection if the number of such cases is controlled, thereby increasing the ability of the procedure to reject false null hypotheses. One possibility is to replace control of the FWER by control of the probability of k or more false rejections, which is called the k-FWER. We derive both single-step and stepdown procedures that control the k-FWER in finite samples or asymptotically, depending on the situation. Lehmann and Romano (2005a) derive some exact methods for this purpose, which apply whenever p-values are available for individual tests; no assumptions are made on the joint dependence of the p-values. In contrast, we construct methods that implicitly take into account the dependence structure of the individual test statistics in order to further increase the ability to detect false null hypotheses. We also consider the false discovery proportion (FDP), defined as the number of false rejections divided by the total number of rejections (and defined to be 0 if there are no rejections). The false discovery rate proposed by Benjamini and Hochberg (1995) controls E(FDP). |
|