Abstract
Chapter 2 studies the problem of inference in a standard linear regression model with heteroskedastic errors. A GLS estimator is proposed in which the volatility process is estimated by a nonparametric kernel estimator. The resulting feasible GLS estimator is shown to be √T-consistent for a wide range of deterministic and stochastic processes for the time-varying volatility. Moreover, the kernel-GLS estimator is asymptotically more efficient than OLS, and hence inference based on its asymptotic distribution is sharper. A Monte Carlo exercise designed to study the finite-sample properties of the proposed estimator shows that tests based on it are correctly sized for a variety of DGPs, whereas in some cases testing based on OLS is invalid. Crucially, even when tests based on OLS or on OLS with heteroskedasticity-consistent (HC) standard errors are correctly sized, inference based on the proposed GLS estimator is more powerful, even for relatively small sample sizes.

Chapter 3 suggests a multiplicative volatility model in which volatility is decomposed into a stationary and a non-stationary persistent part, and provides a testing procedure to determine which type of volatility is prevalent in the data. The persistent part of volatility is associated with a nonstationary persistent process satisfying some smoothness and moment conditions, while the stationary part is related to stationary conditional heteroskedasticity. We outline theory and conditions that allow the persistent part to be extracted from the data, so that standard conditional heteroskedasticity tests can detect stationary volatility once persistent volatility is taken into account. Monte Carlo results support the testing strategy in small samples. An empirical application of the theory supports the persistent volatility paradigm, suggesting that stationary conditional heteroskedasticity is considerably less pronounced than previously thought.
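The kernel-GLS idea described for Chapter 2 above can be sketched in a few lines: smooth the squared OLS residuals with a kernel to estimate the time-varying variance, then reweight. This is only a minimal illustration; the function name, the Gaussian kernel, and the rule-of-thumb bandwidth are assumptions, not the thesis's exact specification.

```python
import numpy as np

def kernel_gls(y, X, h=None):
    """Illustrative kernel-based feasible GLS (interface and bandwidth rule assumed)."""
    T, k = X.shape
    # Step 1: OLS residuals
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    e2 = (y - X @ beta_ols) ** 2
    # Step 2: Nadaraya-Watson smoothing of squared residuals over rescaled time,
    # giving a nonparametric estimate of sigma_t^2
    t = np.arange(T) / T
    h = h if h is not None else T ** (-1 / 5)  # rule-of-thumb bandwidth (assumption)
    K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / h) ** 2)
    sigma2 = K @ e2 / K.sum(axis=1)
    # Step 3: weighted least squares with weights 1 / sigma_hat_t^2
    Xw = X * (1.0 / sigma2)[:, None]
    beta_gls = np.linalg.solve(Xw.T @ X, Xw.T @ y)
    return beta_gls, sigma2
```

Under smoothly varying volatility, the reweighting downweights high-variance observations, which is the source of the efficiency gain over OLS noted in the abstract.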
Chapter 4 proposes a deep quantile estimator, using neural networks and their universal approximation property to examine a non-linear association between the conditional quantiles of a dependent variable and predictors. We present a Monte Carlo exercise examining the finite-sample properties of the proposed estimator and show that our approach delivers good finite-sample performance. We use the deep quantile estimator to forecast Value-at-Risk and find significant gains over linear quantile regression alternatives, supported by various testing schemes. This chapter also contributes to the interpretability of neural network output by comparing the commonly used SHAP values with an alternative method based on partial derivatives.
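The objective behind quantile estimation, deep or linear, is the check (pinball) loss. As a minimal sketch under assumed names, the snippet below implements the pinball loss and fits a linear quantile regression by subgradient descent; a deep quantile estimator of the kind described in Chapter 4 would replace the linear index `X @ beta` with a neural network trained on the same loss.

```python
import numpy as np

def pinball_loss(u, tau):
    """Check (pinball) loss: tau * u for u >= 0, (tau - 1) * u for u < 0."""
    return np.mean(np.maximum(tau * u, (tau - 1) * u))

def fit_quantile(y, X, tau, lr=0.05, steps=5000):
    """Linear tau-quantile regression by subgradient descent (illustrative sketch)."""
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        u = y - X @ beta
        # Subgradient of the mean pinball loss with respect to beta
        grad = -X.T @ np.where(u >= 0, tau, tau - 1) / len(y)
        beta -= lr * grad
    return beta
```

Minimising this loss at, say, tau = 0.01 yields a Value-at-Risk forecast; the intercept-only case recovers the empirical tau-quantile of the data.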
| Date of Award | 1 Apr 2022 |
|---|---|
| Original language | English |
| Awarding Institution | |
| Supervisor | George Kapetanios (Supervisor) |