
Home Page of Jonathan B. Hill

Associate Professor of Economics

Director of Graduate Studies

University of North Carolina Chapel Hill

 
 
 
Dept. of Economics
Gardner Hall 200F
University of North Carolina
Chapel Hill, NC 27599-3305

jbhill at email dot unc dot edu

 

Office Hours (Fall 2017):

Mon. 10am-1pm

 

Wed. 10am-1pm

 

or by appointment

 

 

 

 

Home Page

 

CV

 

Published Papers

 

Working Papers

 

Papers on SSRN

 

Software

 

Courses Taught

 

 

WORKING PAPERS (Under Submission or Invited Revision for Publication)

 

Inference When There is a Nuisance Parameter under the Alternative and Some Parameters are Possibly Weakly Identified (2017)

 

*     Paper: PDF (July 2017)

*     Supplemental Material: PDF

 

We present a new robust bootstrap method for tests in which there is a nuisance parameter under the alternative and some parameters are possibly weakly or non-identified. We focus on a Bierens (1990)-type test of omitted nonlinearity for convenience of illustration, and because of difficulties that have been ignored to date. Nevertheless, our method applies to a wide range of tests, including Wald and Quasi-Likelihood Ratio tests. Methods for handling the nuisance parameter under the alternative include the supremum p-value, which promotes a conservative test, and test statistic transforms like the supremum and average, which cannot be validly bootstrapped under weak identification. We propose a new bootstrap method for p-value computation that targets specific identification cases. We then combine bootstrapped p-values across polar identification cases to form an asymptotically valid p-value that is robust to any identification case. Our method and theory also allow for robust bootstrap critical value computation, which leads to tests with correct asymptotic level. Our bootstrap method, like conventional ones (e.g. Hansen 1996), does not lead to a consistent p-value approximation for test statistic functions like the supremum and average. We therefore smooth over the robust bootstrapped p-value as the basis for several tests that achieve the correct asymptotic level, and are consistent, for any degree of identification. A simulation study reveals possibly large empirical size distortions in non-robust tests when weak or non-identification arises. One of our smoothed p-value tests, however, dominates all other tests by delivering accurate empirical size and comparatively high power.
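
A rough numerical illustration of the combining step (not code from the paper; the chi-squared null law, the grid, and the 0.8 weak-identification rescaling are all hypothetical stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: test statistics T_n(lambda) on a grid of nuisance
# parameters, with a chi-squared(1) law standing in for the (unknown)
# null distribution in each identification case.
lam_grid = np.linspace(0.1, 1.0, 10)
T = rng.chisquare(df=1, size=lam_grid.size)

def bootstrap_pvalues(stats, n_boot=999):
    """P-value per lambda: share of simulated null draws at least as large."""
    draws = rng.chisquare(df=1, size=(n_boot, stats.size))
    return (draws >= stats).mean(axis=0)

# Polar identification cases (strong vs. weak) yield different bootstrap
# laws; a least-favorable combination takes the pointwise maximum p-value.
p_strong = bootstrap_pvalues(T)
p_weak = bootstrap_pvalues(T * 0.8)   # hypothetical weak-identification case
p_robust = np.maximum(p_strong, p_weak)

# Smoothing over the nuisance parameter space (a simple average here)
# stands in for the paper's smoothed p-value construction.
p_smoothed = p_robust.mean()
```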

 

Testing Weak Form Efficiency in Stock Markets (2017: with K. Motegi) : submitted

 

*     Paper: PDF (July 2017)

 

Weak form efficiency of stock markets is tested predominantly under an independence or martingale difference assumption. Since these properties rule out weak dependence that may exist in stock returns, it is of interest to test whether returns are white noise. We perform multiple white noise tests assisted by Shao's (2011) dependent wild bootstrap. We show that, in rolling windows, the bootstrap's block structure induces an artificial periodicity in the bootstrapped confidence bands. We eliminate the periodicity by randomizing the block size. In crisis periods, returns of the FTSE and S&P have negative autocorrelations that are large enough to reject the white noise hypothesis.
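
A minimal sketch of a dependent wild bootstrap with a randomized block size (the candidate block sizes, lag cap, and simulated data are assumptions for illustration, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.standard_normal(n)   # stand-in for demeaned stock returns

def max_abs_corr(z, max_lag=10):
    """Maximum absolute sample autocorrelation over lags 1..max_lag."""
    zc = z - z.mean()
    denom = (zc ** 2).sum()
    return max(abs((zc[k:] * zc[:-k]).sum() / denom)
               for k in range(1, max_lag + 1))

def dwb_pvalue(x, n_boot=299, block_sizes=(8, 12, 16)):
    """Dependent wild bootstrap p-value for a max-correlation statistic.
    The block size is redrawn each replication; randomizing it is the
    device for removing the artificial periodicity a fixed block induces
    in rolling-window confidence bands."""
    n = x.size
    xc = x - x.mean()
    stat = max_abs_corr(x)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        blk = int(rng.choice(block_sizes))     # randomized block size
        # One N(0,1) multiplier per block, constant within blocks.
        w = np.repeat(rng.standard_normal(n // blk + 1), blk)[:n]
        boot[b] = max_abs_corr(xc * w)
    return stat, (boot >= stat).mean()

stat, pval = dwb_pvalue(x)
```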

 

Asymptotic Theory for the Maximum of an Increasing Sequence of Parametric Functions (2017) : submitted

 

*     Paper: PDF (July 2017)

*     Supplemental Appendix: PDF

 

We present a new general asymptotic theory for the maximum of a random array {X(n,i)}, where each X(n,i) is assumed to converge in probability as n → ∞. The array dimension L(n) is allowed to increase with the sample size n. Existing extreme value theory arguments focus on observed data X(n,i), and require a well defined limit law for the maximum of X(n,i) over i by restricting dependence across i. The high dimensional central limit theory literature presumes approximability by a Gaussian law. We require neither that the maximum of X(n,i) have a well defined limit nor that it be approximable by a Gaussian random variable, and we do not make any assumptions about dependence across i. We apply the theory to filtered data. The main results are illustrated with unit root tests for a high dimensional random variable, and a residuals white noise test.
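
A toy simulation of the setting (the choice of sample means as X(n,i) and the rate L(n) = floor(sqrt(n)) are illustrative assumptions only):

```python
import numpy as np

rng = np.random.default_rng(2)

# Each X(n,i) is a sample mean converging in probability to 0, and the
# array dimension L(n) = floor(sqrt(n)) grows with the sample size n.
def max_of_array(n):
    L = int(np.sqrt(n))
    data = rng.standard_normal((L, n))
    return np.abs(data.mean(axis=1)).max()

# No limit law for the maximum is imposed: convergence of each coordinate
# is enough for the maximum over the growing index set to collapse to 0.
vals = [max_of_array(n) for n in (100, 2500, 40000)]
```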

 

A Smoothed P-Value Test When There is a Nuisance Parameter under the Alternative (2017) : submitted

 

*     Paper: PDF (first version 2015, this version July 2017)

*     Supplemental Appendix: PDF

 

We present a new test when there is a nuisance parameter λ under the alternative hypothesis. The test exploits the p-value occupation time [PVOT], the measure of the subset of λ on which a p-value test based on a test statistic Tn(λ) rejects the null hypothesis. Key contributions are: (i) An asymptotic critical value upper bound for our test is the significance level α, making inference easy. Conversely, test statistic functionals need a bootstrap or simulation step which can still lead to size and power distortions, and bootstrapped or simulated critical values are not asymptotically valid under weak or non-identification. (ii) We only require Tn(λ) to have a known or bootstrappable limit distribution, hence we do not require √n-Gaussian asymptotics, and weak or non-identification is allowed. Finally, (iii) a test based on the transformed p-value sup_{λ ∈ Λ} pn(λ) may be conservative and in some cases have nearly trivial power, while the PVOT naturally controls for this by smoothing over the nuisance parameter space. We give examples and related controlled experiments concerning PVOT tests of: omitted nonlinearity; GARCH effects; and a one time structural break. Across cases, the PVOT test variously matches, dominates or strongly dominates standard tests based on the supremum p-value, or supremum or average test statistic (with wild bootstrapped p-value).
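
The occupation-time idea can be sketched in a few lines (the grid, and uniform draws standing in for p-values pn(λ), are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

alpha = 0.05
lam_grid = np.linspace(0.1, 1.0, 200)   # grid on the nuisance parameter space

# Hypothetical p-values p_n(lambda), one per grid point; uniform draws
# stand in for p-values computed from T_n(lambda)'s limit distribution.
p_n = rng.uniform(size=lam_grid.size)

# P-value occupation time: the measure (here, grid fraction) of the set
# of lambda on which the alpha-level p-value test rejects the null.
pvot = (p_n <= alpha).mean()

# The asymptotic critical value is bounded above by alpha itself.
reject = bool(pvot > alpha)
```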

 

A Max-Correlation White Noise Test for Weakly Dependent Time Series (2016: with K. Motegi) : submitted

 

*     Paper: PDF (first version May 2016; this version July 2017)

*     Supplemental Appendix: PDF

 

This paper presents bootstrapped p-value white noise tests based on the maximum correlation, for a time series that may be weakly dependent under the null hypothesis. The time series may be prefiltered residuals. The test statistic is a normalized weighted maximum sample correlation coefficient where the maximum lag increases at a rate slower than the sample size. We only require uncorrelatedness under the null hypothesis, along with a moment contraction dependence property that includes mixing and non-mixing sequences. We show Shao's (2011) dependent wild bootstrap is valid for a much larger class of processes than originally considered. It is also valid for residuals from a general class of parametric models as long as the bootstrap is applied to a first order expansion of the sample correlation. The test has non-trivial local power against √n-local alternatives, and can detect very weak and distant serial dependence better than a variety of other tests. Finally, we prove that our bootstrapped p-value leads to a valid test without exploiting extreme value theoretic arguments (the standard in the literature), or recent Gaussian approximation theory.
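
A bare-bones version of the statistic (the rate L_n ~ sqrt(n), unit weights, and the AR(1) comparison process are assumptions for illustration; the bootstrap step is sketched under the preceding paper):

```python
import numpy as np

rng = np.random.default_rng(4)

def max_corr_stat(x, L_n=None):
    """sqrt(n) times the maximum absolute sample autocorrelation over lags
    1..L_n, with L_n growing slower than n; L_n ~ sqrt(n) is only an
    illustrative choice, and the weights are set to one here."""
    n = x.size
    if L_n is None:
        L_n = int(np.sqrt(n))
    xc = x - x.mean()
    denom = (xc ** 2).sum()
    rho = np.array([(xc[k:] * xc[:-k]).sum() / denom
                    for k in range(1, L_n + 1)])
    return np.sqrt(n) * np.abs(rho).max()

n = 2000
stat_wn = max_corr_stat(rng.standard_normal(n))   # true white noise

eps = rng.standard_normal(n)                      # weakly dependent AR(1)
ar = np.empty(n)
ar[0] = eps[0]
for t in range(1, n):
    ar[t] = 0.3 * ar[t - 1] + eps[t]
stat_ar = max_corr_stat(ar)
```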

 

Testing a Large Set of Zero Restrictions in Regression Models, with an Application to Mixed Frequency Granger Causality (2016: with E. Ghysels and K. Motegi) : submitted.

 

*     Paper: PDF (March 2017)

*     Supplemental Appendix: PDF (March 2016)

 

This paper proposes a test for a large set of zero restrictions in regression models based on a seemingly overlooked, but simple, dimension reduction technique. The procedure involves multiple parsimonious regression models where key regressors are split across simple regressions: each parsimonious model has one key regressor, and other regressors that are not associated with the null hypothesis. The test is based on the maximum squared key-parameter estimate across all parsimonious regressions. Parsimony ensures sharper estimates and therefore improves power in small samples. We present the general theory of the max test and focus on mixed frequency Granger causality as a prominent application since parameter proliferation is a major challenge in mixed frequency settings.
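
The split-and-maximize step can be sketched as follows (the design, sample size, and use of plain least squares are illustrative assumptions; the critical value computation is omitted):

```python
import numpy as np

rng = np.random.default_rng(5)

n, p = 300, 20
X = rng.standard_normal((n, p))                            # key regressors
W = np.column_stack([np.ones(n), rng.standard_normal(n)])  # other regressors
y = W @ np.array([1.0, 0.5]) + rng.standard_normal(n)      # H0 true: no X effect

def max_test_stat(y, X, W):
    """Run one parsimonious regression per key regressor (controls plus a
    single X_j) and return the maximum squared key-parameter estimate."""
    stats = []
    for j in range(X.shape[1]):
        Z = np.column_stack([W, X[:, j]])
        beta = np.linalg.lstsq(Z, y, rcond=None)[0]
        stats.append(beta[-1] ** 2)    # squared key-parameter estimate
    return max(stats)

stat_null = max_test_stat(y, X, W)
stat_alt = max_test_stat(y + X[:, 0], X, W)   # H0 violated via regressor 0
```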

 

Robust Estimation and Inference for Average Treatment Effects (2014: with S. Chaudhuri).

 

*     Paper: PDF (March 2016)

 

*     Supplemental Appendix I (omitted theory, proofs): PDF (March 2016)

 

*     Supplemental Appendix II (omitted tables): PDF (March 2016)

 
 

 

We study the probability tail properties of Inverse Probability Weighting (IPW) estimators of the Average Treatment Effect (ATE) when there is limited overlap between the covariate distributions of the treatment and control groups. Under unconfoundedness of treatment assignment conditional on covariates, such limited overlap is manifested in the propensity score for certain units being very close (but not equal) to 0 or 1. This renders IPW estimators possibly heavy tailed, with a slower than √n rate of convergence. Most existing estimators are either based on the assumption of strict overlap, i.e. the propensity score is bounded away from 0 and 1; or they truncate the propensity score; or they trim observations using a variety of techniques based on covariate or propensity score values. Trimming or truncation is ultimately based on the covariates, ignoring important information about the inverse probability weighted random variable Z that identifies ATE by E[Z] = ATE. We propose a tail-trimmed IPW estimator whose performance is robust to limited overlap. Since the propensity score is generally unknown, we plug its parametric estimator into the infeasible Z, and then negligibly trim the resulting feasible Z adaptively by its large values. Trimming leads to bias if Z has an asymmetric distribution and an infinite variance, hence we estimate and remove the bias using important improvements on existing theory and methods. Our estimator sidesteps dimensionality, bias and poor correspondence properties associated with trimming by the covariates or propensity score. Monte Carlo experiments demonstrate that trimming by the covariates or the propensity score requires the removal of a substantial portion of the sample to render a low bias and close to normal estimator, while our estimator has low bias and mean-squared error, and is close to normal, based on the removal of very few sample extremes.
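
A stylized version of the trimming step (the logistic design with true ATE = 1, the known propensity score, and the trimming rate k_n ~ n^0.25 are illustrative assumptions; the paper's bias correction is omitted):

```python
import numpy as np

rng = np.random.default_rng(6)

n = 5000
Xc = rng.standard_normal(n)                      # a single covariate
pscore = 1.0 / (1.0 + np.exp(-2.5 * Xc))         # limited overlap near 0 and 1
D = (rng.uniform(size=n) < pscore).astype(float)
Y = 1.0 * D + Xc + rng.standard_normal(n)        # true ATE = 1

# The inverse probability weighted variable Z identifies ATE by E[Z] = ATE,
# but is possibly heavy tailed when the propensity score approaches 0 or 1.
Z = D * Y / pscore - (1.0 - D) * Y / (1.0 - pscore)

def tail_trimmed_mean(Z, k_n=None):
    """Negligibly trim the k_n largest |Z| values, with k_n/n -> 0;
    k_n ~ n^0.25 is illustrative only, and no bias correction is applied."""
    n = Z.size
    if k_n is None:
        k_n = int(n ** 0.25)
    thresh = np.sort(np.abs(Z))[n - k_n - 1]
    return Z[np.abs(Z) <= thresh].mean()

ate_trimmed = tail_trimmed_mean(Z)
```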

 
 

Heavy Tail Robust Frequency Domain Estimation (2014: with A. McCloskey).

 
 

*     Paper: PDF (Sept. 2014)

*     Supplemental Appendix: PDF (Sept. 2014)

 

We develop heavy tail robust frequency domain estimators for covariance stationary time series with a parametric spectrum, including ARMA, GARCH and stochastic volatility. We use robust techniques to reduce the moment requirement down to only a finite variance. In particular, we negligibly trim the data, permitting both identification of the parameter for the candidate model and asymptotically normal frequency domain estimators, while leading to a classic limit theory when the data have a finite fourth moment. The transform itself can lead to asymptotic bias in the limit distribution of our estimators when the fourth moment does not exist, hence we correct the bias using extreme value theory that applies whether tails decay according to a power law or not. In the case of symmetrically distributed data, we compute the mean-squared-error of our biased estimator and characterize the mean-squared-error-minimizing number of sample extremes. A simulation experiment shows our QML estimator works well and in general has lower bias than the standard estimator, even when the process is Gaussian, suggesting robust methods have merit even for thin tailed processes.
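
A simplified sketch of trimming before the frequency domain transform (the Student-t data, k_n ~ log(n), and zeroing out extremes are all illustrative simplifications, not the paper's trimming scheme or estimator):

```python
import numpy as np

rng = np.random.default_rng(7)

n = 2048
x = rng.standard_t(df=2.5, size=n)   # heavy tailed, but finite variance

def trimmed_periodogram(x, k_n=None):
    """Zero out the k_n most extreme observations (k_n/n -> 0) before the
    discrete Fourier transform, then form the periodogram. Both k_n ~ log(n)
    and zeroing (rather than a formal trimming rule) are simplifications."""
    n = x.size
    if k_n is None:
        k_n = int(np.log(n))
    thresh = np.sort(np.abs(x))[n - k_n - 1]
    xt = np.where(np.abs(x) <= thresh, x, 0.0)
    xt = xt - xt.mean()
    dft = np.fft.rfft(xt)
    return (np.abs(dft) ** 2) / (2.0 * np.pi * n)

pgram = trimmed_periodogram(x)
```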

 

 

OLD WORKING PAPERS

 

Robust Estimation and Inference for Extremal Dependence in Time Series (2009)

 

*     Paper: PDF

*     Appendix C : Omitted Proofs : PDF

*     Appendix D : Omitted Figures and Tables : PDF

*     Gauss: code

 

 

Dependence between extreme values is predominantly measured by first assuming a parametric joint distribution function, and almost always for otherwise marginally iid processes. We develop semi-nonparametric and nonparametric measures, estimators and tests of bivariate tail dependence for non-iid data based on tail exceedances and events. The measures and estimators capture extremal dependence decay over time and can be re-scaled to provide robust estimators of canonical conditional tail probability and tail copula notions of tail dependence. Unlike extant offerings, the tests obtain asymptotic power of one against infinitesimal deviations from tail independence. Further, the estimators apply to dependent, heterogeneous processes with or without extremal dependence and irrespective of non-extremal properties and joint distribution specifications. Finally, we study the extremal associations within and between equity returns in the U.S., U.K. and Japan.
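
A minimal nonparametric exceedance-based estimate of a conditional tail probability (the simulated factor structure and the choice k = 200 are illustrative assumptions, not the paper's estimator or data):

```python
import numpy as np

rng = np.random.default_rng(8)

n = 10_000
z = rng.standard_t(df=4, size=n)              # shared heavy tailed factor
x = z + 0.5 * rng.standard_t(df=4, size=n)    # two dependent series
y = z + 0.5 * rng.standard_t(df=4, size=n)    # (illustrative, not the data)

def cond_tail_prob(x, y, k):
    """Nonparametric estimate of P(Y > u_y | X > u_x), where the thresholds
    u_x, u_y are the k-th largest order statistics of each sample."""
    ux = np.sort(x)[-k]
    uy = np.sort(y)[-k]
    return np.sum((x > ux) & (y > uy)) / float(k)

lam_dep = cond_tail_prob(x, y, k=200)
lam_ind = cond_tail_prob(x, rng.permutation(y), k=200)  # dependence destroyed
```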

 

 

Gaussian Tests of 'Extremal White Noise' for Dependent, Heterogeneous, Heavy Tailed Time Series with an Application (2008) 

 

*     Paper: PDF (this version: Feb. 2008)

*     Appendix: PDF

*     Gauss: code

 

We develop a portmanteau test of extremal serial dependence. The test statistic is asymptotically chi-squared under a null of "extremal white noise", as long as extremes are Near-Epoch-Dependent, covering linear and nonlinear distributed lags, stochastic volatility, and GARCH processes with possibly unit or explosive roots. We apply tail specific tests to equity market and exchange rate returns.
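
The flavor of an extremal portmanteau statistic can be conveyed with tail-event indicators (the 95% threshold, lag cap, and textbook Box-Pierce form are a sketch only, not the paper's exact statistic or its NED-based limit theory):

```python
import numpy as np

rng = np.random.default_rng(9)

n, L = 5000, 5
x = rng.standard_t(df=3, size=n)    # heavy tailed stand-in for returns

# Tail-event indicators: 1 when an observation exceeds a high sample quantile.
u = np.quantile(x, 0.95)
I = (x > u).astype(float)

# Portmanteau statistic: n times the sum of squared tail-event sample
# autocorrelations over lags 1..L, small under "extremal white noise".
Ic = I - I.mean()
denom = (Ic ** 2).sum()
rho = np.array([(Ic[k:] * Ic[:-k]).sum() / denom for k in range(1, L + 1)])
Q = n * (rho ** 2).sum()
```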

 

 
