Home Page of Jonathan B. Hill

Associate Professor of Economics

University of North Carolina Chapel Hill

Dept. of Economics
Gardner Hall 208B
University of North Carolina
Chapel Hill, NC 27599-3305

jbhill at email dot unc dot edu



Home Page




Published Papers


Working Papers


Papers on SSRN




Courses Taught



WORKING PAPERS (Under Submission or Invited Revision for Publication)


A Smoothed P-Value Test When There is a Nuisance Parameter under the Alternative (2015): revised and resubmitted to Journal of the American Statistical Association (second round)


*      Paper: PDF (updated Aug. 2016)

*      Supplemental Appendices: PDF and PDF


We present a new test when there is a nuisance parameter λ under the alternative hypothesis. The test exploits the p-value occupation time [PVOT], the measure of the subset of λ on which a p-value test based on a test statistic Tn(λ) rejects the null hypothesis. Our key contributions are: (i) if λ is part of the true data generating process, then the PVOT is a point estimate of the weighted average probability of p-value test rejection, under the null; (ii) an asymptotic critical value upper bound for our test is the significance level itself, making inference easy; (iii) we only require Tn(λ) to have a known or bootstrappable limit distribution, hence we do not require √n-Gaussian asymptotics as is nearly always assumed, and we allow for some parameters to be weakly or non-identified; and (iv) a numerical experiment, in which local asymptotic power is computed for a test of omitted nonlinearity, reveals the asymptotic critical value is exactly the significance level, and the PVOT test is virtually equivalent to a test with the greatest weighted average power in the sense of Andrews and Ploberger (1994). Since the PVOT test does not require a bootstrap step, it is especially relevant when bootstrap procedures are invalid, including estimation with heavy tailed data, when a parameter value lies on the boundary of the feasible space, and when there is a discontinuity of a test statistic over a parameter space (e.g. under weak identification). We give examples of PVOT tests of omitted nonlinearity, GARCH effects, and a one-time structural break. A simulation study demonstrates the merits of these PVOT tests: the asymptotic critical value is exactly the significance level in each case, and the PVOT test is much better at detecting a one-time structural break than bootstrapped average and supremum transforms.
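As a hypothetical illustration of the occupation-time idea (a simplified sketch, not the paper's implementation; the function name `pvot` and the toy p-values are ours), the PVOT can be approximated on a finite grid of nuisance-parameter values as the fraction of the grid on which the p-value test rejects:

```python
import numpy as np

def pvot(p_values, alpha=0.05):
    """P-value occupation time: the fraction of the lambda grid on
    which the p-value test rejects, i.e. on which p(lambda) < alpha."""
    return float(np.mean(np.asarray(p_values) < alpha))

# Toy illustration: p-values of some statistic T_n(lambda) computed
# on a grid of nuisance-parameter values lambda.
rng = np.random.default_rng(0)
p_grid = rng.uniform(size=200)      # mimics a null-hypothesis case
occ = pvot(p_grid, alpha=0.05)
# Decision rule: reject the null at level alpha when PVOT > alpha,
# since alpha is an asymptotic upper bound on the critical value.
reject = occ > 0.05
```

The appeal sketched above is that no bootstrap step is needed: the significance level itself serves as an asymptotic critical value upper bound.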




A Max-Correlation White Noise Test for Weakly Dependent Time Series (2016; with K. Motegi): submitted


*      Paper: PDF (May 2016)

*      Supplemental Appendix: PDF (May 2016)


This paper presents bootstrapped p-value white noise tests based on the max-correlation, for a time series that may be weakly dependent under the null hypothesis. The time series may be prefiltered residuals based on a root(n)-convergent estimator. Our test statistic is a scaled maximum sample correlation coefficient where the maximum lag increases at a rate slower than the sample size n. We only require uncorrelatedness under the null hypothesis, along with a moment contraction dependence property that includes mixing and non-mixing sequences, and exploit two wild bootstrap methods for p-value computation. We operate either on a first order expansion of the sample correlation, or Delgado and Velasco's (2011) orthogonalized correlation for a fixed maximum lag, both to control for the impact of residual estimation. A numerical study shows the first order expansion is superior, especially when the maximum lag is large. When the filter involves a GARCH model then the orthogonalization breaks down, while the first order expansion works quite well. We show Shao's (2011) dependent wild bootstrap is valid for a much larger class of processes than originally considered. Since only the most relevant sample serial correlation is exploited amongst a set of sample correlations that are consistent asymptotically, empirical size tends to be sharp and power is comparatively large for many time series processes. The test has non-trivial local power against root(n)-local alternatives, and can detect very weak and distant serial dependence better than a variety of other tests. Finally, we prove that our bootstrapped p-value leads to a valid test without exploiting extreme value theoretic arguments, the standard in the literature.
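A minimal sketch of the two ingredients above (our own simplified stand-in, not the paper's code: the wild bootstrap here perturbs the centered cross-products with iid normal multipliers, and prefiltering is omitted):

```python
import numpy as np

def max_corr_stat(x, max_lag):
    """sqrt(n) times the largest absolute sample autocorrelation
    over lags 1..max_lag."""
    x = np.asarray(x, float)
    n = len(x)
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    rho = [np.sum(xc[h:] * xc[:-h]) / denom for h in range(1, max_lag + 1)]
    return float(np.sqrt(n) * np.max(np.abs(rho)))

def wild_bootstrap_pvalue(x, max_lag, reps=200, seed=0):
    """Wild-bootstrap p-value: rebuild the max-correlation statistic
    from cross-products multiplied by iid standard normal draws, and
    report the share of bootstrap draws at least as large as the
    observed statistic."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    n = len(x)
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    stat = max_corr_stat(x, max_lag)
    boot = np.empty(reps)
    for b in range(reps):
        w = rng.standard_normal(n)
        rho_b = [np.sum(w[h:] * xc[h:] * xc[:-h]) / denom
                 for h in range(1, max_lag + 1)]
        boot[b] = np.sqrt(n) * np.max(np.abs(rho_b))
    return float(np.mean(boot >= stat))
```

A strongly negatively autocorrelated series such as an alternating ±1 sequence yields a large statistic and a p-value near zero.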



Simple Granger Causality Tests for Mixed Frequency Data (2016; with E. Ghysels and K. Motegi): submitted.


*      Paper: PDF (July 2016)

*      Supplemental Appendix: PDF (July 2016)


This paper presents simple Granger causality tests that are applicable to any mixed frequency data sampling setting and feature remarkable power properties even with relatively small low frequency data samples and a considerable wedge between sampling frequencies (for example, quarterly and daily or weekly data). Our tests are based on a seemingly overlooked, but simple, dimension reduction technique for regression models. If the number of parameters of interest is large then in small or even large samples any of the trilogy test statistics may not be well approximated by their asymptotic distribution. A bootstrap method can be employed to improve empirical test size, but this generally results in a loss of power. A shrinkage estimator can be employed, including Lasso, Adaptive Lasso, or Ridge Regression, but these are valid only under a sparsity assumption which does not apply to Granger causality tests. The procedure, which is of general interest when testing potentially large sets of parameter restrictions, involves multiple parsimonious regression models where each model regresses a low frequency variable onto only one individual lag or lead of a high frequency series, where that lag or lead slope parameter is necessarily zero under the null hypothesis of non-causality. Our test is then based on a max test statistic that selects the largest squared estimator among all parsimonious regression models. Parsimony ensures sharper estimates and therefore improved power in small samples. Inference requires a simple simulation-bootstrap step since the test statistic has a non-standard limit distribution. We show via Monte Carlo simulations that the max test is particularly powerful for causality with a large time lag.
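The parsimonious-regression step can be sketched as follows (a hypothetical illustration under the assumption of a squared t-statistic as the "largest squared estimator"; the simulation step that delivers critical values is omitted, and the function name is ours):

```python
import numpy as np

def max_granger_stat(y, x_lags):
    """Regress y on each candidate lag/lead of the high-frequency
    regressor one at a time (plus an intercept), and return the
    largest squared t-statistic among the slope estimates. Inference
    would require simulating the non-standard limit distribution."""
    y = np.asarray(y, float)
    stats = []
    for xj in x_lags:            # one parsimonious regression per lag
        X = np.column_stack([np.ones_like(y), np.asarray(xj, float)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        n, k = X.shape
        s2 = resid @ resid / (n - k)
        cov = s2 * np.linalg.inv(X.T @ X)
        stats.append(beta[1] ** 2 / cov[1, 1])
    return float(max(stats))
```

Because each regression carries only one causality parameter, the slope estimates stay sharp even when many lags are screened.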


Robust Estimation and Inference for Average Treatment Effects (2014: with S. Chaudhuri). Revised and resubmitted to Journal of Econometrics (reject and resubmit)


*      Paper: PDF (March 2016)


*      Supplemental Appendix I (omitted theory, proofs): PDF (March 2016)


*      Supplemental Appendix II (omitted tables): PDF (March 2016)



We study the probability tail properties of Inverse Probability Weighting (IPW) estimators of the Average Treatment Effect (ATE) when there is limited overlap between the covariate distributions of the treatment and control groups. Under unconfoundedness of treatment assignment conditional on covariates, such limited overlap is manifested in the propensity score for certain units being very close (but not equal) to 0 or 1. This renders IPW estimators possibly heavy tailed, with a slower-than-√n rate of convergence. Most existing estimators are either based on the assumption of strict overlap, i.e. the propensity score is bounded away from 0 and 1; or they truncate the propensity score; or trim observations based on a variety of techniques based on covariate or propensity score values. Trimming or truncation is ultimately based on the covariates, ignoring important information about the inverse probability weighted random variable Z that identifies ATE by E[Z] = ATE. We propose a tail-trimmed IPW estimator whose performance is robust to limited overlap. Since the propensity score is generally unknown, we plug in its parametric estimator in the infeasible Z, and then negligibly trim the resulting feasible Z adaptively by its large values. Trimming leads to bias if Z has an asymmetric distribution and an infinite variance, hence we estimate and remove the bias using important improvements on existing theory and methods. Our estimator sidesteps dimensionality, bias and poor correspondence properties associated with trimming by the covariates or propensity score. Monte Carlo experiments demonstrate that trimming by the covariates or the propensity score requires the removal of a substantial portion of the sample to render a low bias and close to normal estimator, while our estimator has low bias and mean-squared error, and is close to normal, based on the removal of very few sample extremes.
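The core trimming step can be sketched in a few lines (a hypothetical simplification: the propensity score is taken as given, and the bias-correction step described above is omitted):

```python
import numpy as np

def tail_trimmed_ipw(y, t, pscore, k):
    """Form the IPW variable Z with E[Z] = ATE, then drop the k
    observations with the largest |Z| before averaging (negligible
    trimming). The paper additionally estimates and removes the
    trimming bias, which is omitted in this sketch."""
    y = np.asarray(y, float)
    t = np.asarray(t, float)
    p = np.asarray(pscore, float)
    z = t * y / p - (1.0 - t) * y / (1.0 - p)
    keep = np.argsort(np.abs(z))[:len(z) - k]   # smallest |Z| kept
    return float(np.mean(z[keep]))
```

Trimming acts on Z directly, so no covariate- or propensity-score-based removal rule is needed.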


Heavy Tail Robust Frequency Domain Estimation (2014; with A. McCloskey): under revision.


*      Paper: PDF (Sept. 2014)

*      Supplemental Appendix: PDF (Sept. 2014)


We develop heavy tail robust frequency domain estimators for covariance stationary time series with a parametric spectrum, including ARMA, GARCH and stochastic volatility. We use robust techniques to reduce the moment requirement down to only a finite variance. In particular, we negligibly trim the data, permitting both identification of the parameter for the candidate model, and asymptotically normal frequency domain estimators, while leading to a classic limit theory when the data have a finite fourth moment. The transform itself can lead to asymptotic bias in the limit distribution of our estimators when the fourth moment does not exist, hence we correct the bias using extreme value theory that applies whether tails decay according to a power law or not. In the case of symmetrically distributed data, we compute the mean-squared-error of our biased estimator and characterize the mean-squared-error-minimizing number of sample extremes. A simulation experiment shows our QML estimator works well and in general has lower bias than the standard estimator, even when the process is Gaussian, suggesting robust methods have merit even for thin tailed processes.
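For intuition, here is a hypothetical toy version for an AR(1) spectrum (our own closed-form reduction of the variance-profiled Whittle objective; the trimming rule is simplified to zeroing extremes, and the paper's extreme-value bias correction is omitted):

```python
import numpy as np

def whittle_ar1(x, k=0):
    """Negligibly trim the k largest |x - mean| observations (set them
    to zero), then estimate the AR(1) spectral parameter phi by the
    variance-profiled Whittle criterion. For an AR(1) spectrum
    f(w) proportional to 1 / (1 - 2*phi*cos(w) + phi^2), the minimizer
    has the closed form sum(I*cos(w)) / sum(I) over Fourier frequencies,
    where I is the periodogram."""
    xc = np.asarray(x, float) - np.mean(x)
    if k > 0:
        cut = np.sort(np.abs(xc))[-k]        # k-th largest |x - mean|
        xc = np.where(np.abs(xc) < cut, xc, 0.0)
    n = len(xc)
    freqs = 2.0 * np.pi * np.arange(1, n // 2) / n
    I = np.abs(np.fft.fft(xc)[1:n // 2]) ** 2   # unscaled periodogram
    return float(np.sum(I * np.cos(freqs)) / np.sum(I))
```

For thin-tailed data the trimmed and untrimmed estimates are nearly identical, consistent with the negligibility of the trimming.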





Robust M-Estimation for Heavy Tailed Nonlinear AR-GARCH (2011).


*      Paper: PDF (paper plus supplemental appendix)



We develop new tail-trimmed M-estimation methods for heavy tailed Nonlinear AR-GARCH models. Tail-trimming allows both identification of the true parameter and asymptotic normality for nonlinear models with asymmetric errors. In heavy tailed cases the rate of convergence is infinitesimally close to the highest possible amongst M-estimators for a particular loss function, hence super-root(n)-convergence can be achieved in nonlinear AR models with infinite variance errors, and arbitrarily near root(n)-convergence for GARCH with errors that have an infinite fourth moment. We present a consistent estimator of the covariance matrix that permits classic inference without knowledge of the rate of convergence, and explore asymptotic covariance and bootstrap mean-squared-error methods for selecting trimming parameters. A simulation study shows the estimator trumps existing ones for AR and GARCH models based on sharpness, approximate normality, rate of convergence, and test accuracy. We then use the estimator to study asset returns data.
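A hypothetical linear-AR(1) toy version of the trimming idea (a sketch only: it drops the k pairs with the largest regressor magnitude and runs least squares on the remainder, whereas the paper trims the estimating equations and handles GARCH and nonlinear models):

```python
import numpy as np

def trimmed_ar1_lse(y, k):
    """Tail-trimmed least squares sketch for an AR(1): drop the k
    observation pairs (y_{t-1}, y_t) with the largest |y_{t-1}|, then
    compute the OLS slope on the kept pairs."""
    y = np.asarray(y, float)
    y0, y1 = y[:-1], y[1:]               # regressor, response
    keep = np.argsort(np.abs(y0))[:len(y0) - k]   # drop regressor extremes
    return float(np.sum(y0[keep] * y1[keep]) / np.sum(y0[keep] ** 2))
```

Trimming on the regressor alone leaves the slope estimate essentially unbiased in thin-tailed cases, while guarding against the influence of extremes when tails are heavy.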



Robust Estimation and Inference for Extremal Dependence in Time Series (2009)


*      Paper: PDF

*      Appendix C : Omitted Proofs : PDF

*      Appendix D : Omitted Figures and Tables : PDF

*      Gauss: code



Dependence between extreme values is predominantly measured by first assuming a parametric joint distribution function, and almost always for otherwise marginally iid processes. We develop semi-nonparametric and nonparametric measures, estimators and tests of bivariate tail dependence for non-iid data based on tail exceedances and events. The measures and estimators capture extremal dependence decay over time and can be re-scaled to provide robust estimators of canonical conditional tail probability and tail copula notions of tail dependence. Unlike extant offerings, the tests obtain asymptotic power of one against infinitesimal deviations from tail independence. Further, the estimators apply to dependent, heterogeneous processes with or without extremal dependence and irrespective of non-extremal properties and joint distribution specifications. Finally, we study the extremal associations within and between equity returns in the U.S., U.K. and Japan.
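A minimal nonparametric sketch of the exceedance-based idea (our own simplified cross-sectional estimator, not the paper's time-decay measures; the function name is hypothetical):

```python
import numpy as np

def tail_dependence(x, y, k):
    """Nonparametric tail-dependence sketch: among the k observations
    where x exceeds its k-th largest order statistic, the share whose
    paired y value also exceeds y's k-th largest order statistic.
    Near 0 under tail independence, near 1 under perfect tail
    dependence."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xk = np.sort(x)[-k]                  # k-th largest value of x
    yk = np.sort(y)[-k]
    joint = np.sum((x >= xk) & (y >= yk))
    return float(joint / k)
```

Because only exceedance events enter, no parametric joint distribution needs to be assumed.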



Gaussian Tests of 'Extremal White Noise' for Dependent, Heterogeneous, Heavy Tailed Time Series with an Application (2008) 


*      Paper: PDF (this version: Feb. 2008)

*      Appendix: PDF

*      Gauss: code


We develop a portmanteau test of extremal serial dependence. The test statistic is asymptotically chi-squared under a null of "extremal white noise", as long as extremes are Near-Epoch-Dependent, covering linear and nonlinear distributed lags, stochastic volatility, and GARCH processes with possibly unit or explosive roots. We apply tail specific tests to equity market and exchange rate returns.
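The portmanteau construction can be sketched as follows (a hypothetical simplification assuming serial correlations of tail-exceedance indicators; the paper's statistic and its Near-Epoch-Dependence conditions are more general):

```python
import numpy as np

def extremal_portmanteau(x, k, max_lag):
    """Portmanteau sketch: serial correlations of tail-exceedance
    indicators I{x_t >= threshold} at lags 1..max_lag, combined into
    n * sum of squared correlations. Under an extremal white noise
    null this behaves roughly like chi-squared with max_lag degrees
    of freedom."""
    x = np.asarray(x, float)
    n = len(x)
    thresh = np.sort(x)[-k]              # tail threshold: k-th largest value
    ic = (x >= thresh).astype(float)
    ic -= ic.mean()                      # centered exceedance indicators
    denom = np.sum(ic ** 2)
    stat = 0.0
    for h in range(1, max_lag + 1):
        rho = np.sum(ic[h:] * ic[:-h]) / denom
        stat += n * rho ** 2
    return float(stat)
```

A persistent series whose extremes cluster, such as a near-unit-root autoregression, produces a far larger statistic than an iid series.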


