In this paper we propose tests of the null hypothesis that a time series process displays a constant level against the alternative that it displays (possibly) multiple changes in level. Our proposed tests are based on functions of appropriately standardised sequences of differences between sub-sample mean estimates from the series under investigation. The tests we propose differ notably from extant tests for level breaks in the literature in that they are designed to be robust to whether the process admits an autoregressive unit root (the data are I(1)) or stable autoregressive roots (the data are I(0)). We derive the asymptotic null distributions of our proposed tests, along with representations for their asymptotic local power functions against Pitman drift alternatives in both I(0) and I(1) environments. Associated estimators of the level break fractions are also discussed. We initially outline our procedure for the case of non-trending series, but our analysis is subsequently extended to allow for series which display an underlying linear trend, in addition to possible level breaks. Monte Carlo simulation results are presented which suggest that the proposed tests perform well in small samples, showing good size control under the null regardless of the order of integration of the data, and good power when level breaks occur. An empirical application of the methods proposed in this paper suggests that the majority of the stock price series which comprise the NASDAQ 100 index display level breaks.
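As a rough illustration of the kind of statistic described above, the following Python sketch computes, for each candidate break date, the difference between the pre-break and post-break sample means of a series. This is not the authors' actual construction: the paper's tests apply an appropriate standardisation (not shown here) so that the same procedure is valid whether the data are I(0) or I(1), and the trimming fraction and function name `subsample_mean_differences` are hypothetical choices made for this example.

```python
import numpy as np

def subsample_mean_differences(y, trim=0.15):
    """Illustrative sketch only: for each candidate break date k in the
    trimmed range, compute the difference between the post-break mean
    of y[k:] and the pre-break mean of y[:k]. The trimming fraction and
    the absence of any standardisation are simplifying assumptions."""
    T = len(y)
    lo = int(np.floor(trim * T))
    hi = int(np.ceil((1.0 - trim) * T))
    ks = np.arange(lo, hi)
    diffs = np.array([y[k:].mean() - y[:k].mean() for k in ks])
    return ks, diffs

# Example usage on simulated I(0) data with a single level break at mid-sample
rng = np.random.default_rng(0)
T = 200
y = np.concatenate([rng.normal(0.0, 1.0, T // 2),
                    rng.normal(2.0, 1.0, T - T // 2)])
ks, diffs = subsample_mean_differences(y)
k_hat = ks[np.argmax(np.abs(diffs))]  # candidate break date maximising |mean difference|
print(f"Estimated break date: {k_hat} (true break at {T // 2})")
```

In this toy setting the break date maximising the absolute mean difference lands near the true break; the paper's actual statistics and break-fraction estimators are built from standardised versions of such sequences so that their limiting behaviour does not depend on the order of integration.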
David I. Harvey, Stephen J. Leybourne and A. M. Robert Taylor