
Contribution Details

Type Master's Thesis
Scope Discipline-based scholarship
Title Reinforcement Learning for Minimum Variance Portfolios
Organization Unit
Authors
  • Jonas Roth
Supervisors
  • Gianluca De Nard
  • Markus Leippold
  • Markus Kalisch
Language
  • English
Institution University of Zurich
Faculty Faculty of Business, Economics and Informatics
Number of Pages 43
Date 2022
Abstract Text
Many portfolio managers need new ideas to create long-term portfolios that remain robust during volatile times. Especially in current times, a portfolio with minimal variance helps investors keep a cool head when markets turn turbulent. While supervised learning has already been applied in various setups in finance, direct optimisation through reinforcement learning, letting an agent interact directly with the market, is a relatively new approach for financial applications. I analyse a deep reinforcement learning model that creates a minimum variance portfolio. The central part of this thesis is to compare, in terms of standard deviation, a global minimum variance portfolio created by covariance shrinkage methods, which yield state-of-the-art performance, against the portfolio generated by a deep reinforcement learning algorithm. For the deep reinforcement learning setup, I follow Cong et al. (2021), who originally suggested this direct optimisation approach.

Two benchmarks are generated. The first is the equal-weighted portfolio, 1/N. The second uses covariance shrinkage to create a global minimum variance portfolio. Quadratic inverse shrinkage, an approach introduced by Ledoit and Wolf (2022), is used to obtain state-of-the-art performance with respect to the target performance measure, minimal standard deviation. The empirical analysis is based on daily return data from CRSP and the firm characteristics from Gu et al. (2020) for the largest US stocks from January 1, 2005, until December 31, 2021. The portfolios are generated from January 1, 2010, until December 31, 2021, and are rebalanced monthly. I compare the portfolios in terms of their standard deviation, average return, maximum drawdown, information ratio, Sharpe ratio, turnover and gross leverage. The most relevant performance measure, however, is the standard deviation, as this is the main objective of the deep reinforcement learner. I train the reinforcement learning algorithm on the data from January 1, 2005, until December 31, 2009, using a wide variety of different parameter values to achieve the maximal reward. The training of a deep reinforcement learning algorithm is, however, computationally very intensive and therefore also very time-consuming.

After generating the portfolios, I find that the 1/N portfolio, with a standard deviation of 17.15%, is outperformed by a relatively large margin by the deep reinforcement learning and the covariance shrinkage portfolios, with standard deviations of 13.98% and 12.44%, respectively. The deep reinforcement learning model, however, could not outperform the global minimum variance portfolio with quadratic inverse shrinkage; thus, I could not beat the leading benchmark with the new direct optimisation approach by Cong et al. (2021). Nevertheless, with the correct parameters, it should be possible for the deep reinforcement learning architecture proposed by Cong et al. (2021) to outperform the global minimum variance portfolio generated with covariance shrinkage methods. Further empirical analysis on this topic would therefore be worthwhile.
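To make the shrinkage benchmark concrete, the following is a minimal sketch of a global minimum variance portfolio built from a shrunk covariance estimate. It uses scikit-learn's linear Ledoit–Wolf shrinkage as a stand-in, since the quadratic inverse shrinkage estimator of Ledoit and Wolf (2022) used in the thesis has no standard library implementation; the closed-form GMV weights w = Σ⁻¹1 / (1ᵀΣ⁻¹1) are standard. All variable names and the simulated data are illustrative, not the thesis's actual setup.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

def gmv_weights(returns: np.ndarray) -> np.ndarray:
    """Global minimum variance weights from a shrinkage covariance estimate.

    returns: (T, N) array of daily asset returns.
    Note: linear Ledoit-Wolf shrinkage stands in here for the quadratic
    inverse shrinkage (QIS) estimator used in the thesis.
    """
    sigma = LedoitWolf().fit(returns).covariance_   # shrunk N x N covariance
    ones = np.ones(sigma.shape[0])
    w = np.linalg.solve(sigma, ones)                # Sigma^{-1} 1
    return w / (ones @ w)                           # w = Sigma^-1 1 / (1' Sigma^-1 1)

# Toy usage with simulated data (illustrative only).
rng = np.random.default_rng(0)
rets = rng.normal(0.0, 0.01, size=(250, 50))        # ~one year, 50 stocks
w = gmv_weights(rets)
print(w.sum(), (rets @ w).std() * np.sqrt(252))     # weights sum to 1; annualised std
```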
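The performance measures listed above can be computed from the realised daily portfolio returns and the weight series; the sketch below covers the annualised standard deviation, Sharpe ratio, maximum drawdown and average turnover. The annualisation factor of 252 trading days is an assumption, as the abstract does not state one.

```python
import numpy as np

def annualised_std(r: np.ndarray, periods: int = 252) -> float:
    """Annualised standard deviation of daily portfolio returns."""
    return r.std(ddof=1) * np.sqrt(periods)

def sharpe_ratio(r: np.ndarray, rf: float = 0.0, periods: int = 252) -> float:
    """Annualised Sharpe ratio; rf is the per-period risk-free rate."""
    excess = r - rf
    return excess.mean() / excess.std(ddof=1) * np.sqrt(periods)

def max_drawdown(r: np.ndarray) -> float:
    """Largest peak-to-trough loss of the cumulative return path."""
    wealth = np.cumprod(1.0 + r)
    peak = np.maximum.accumulate(wealth)
    return (wealth / peak - 1.0).min()

def turnover(weights: np.ndarray) -> float:
    """Average one-way turnover across rebalancing dates; weights is (M, N)."""
    return np.abs(np.diff(weights, axis=0)).sum(axis=1).mean()
```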
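Finally, a highly simplified sketch of the direct optimisation idea: because a portfolio-variance reward is differentiable in the weights, a policy network mapping firm characteristics to weights can be trained by gradient descent on the realised variance directly. This is a toy illustration only; the network, the long-only softmax mapping, the dimensions and the simulated inputs are all assumptions and do not reproduce the architecture of Cong et al. (2021) used in the thesis.

```python
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Maps per-stock characteristics to portfolio weights via softmax.

    A deliberately small stand-in; the architecture of Cong et al. (2021)
    and the thesis's feature set differ.
    """
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n_stocks, n_features) -> long-only weights summing to 1
        return torch.softmax(self.score(x).squeeze(-1), dim=0)

# Direct optimisation: minimise realised portfolio variance end to end.
torch.manual_seed(0)
n_stocks, n_features, horizon = 50, 10, 21
policy = PolicyNet(n_features)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for episode in range(200):
    # Simulated inputs; in the thesis these would be CRSP returns and
    # Gu et al. (2020) firm characteristics.
    chars = torch.randn(n_stocks, n_features)
    future_rets = 0.01 * torch.randn(horizon, n_stocks)

    w = policy(chars)                 # choose weights for the next month
    port_rets = future_rets @ w       # realised daily portfolio returns
    loss = port_rets.var()            # reward = -variance, so minimise variance

    opt.zero_grad()
    loss.backward()
    opt.step()
```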