Similar Documents
20 similar documents found.
1.
2.
Let r(x, z) be a function that, along with its derivatives, can be consistently estimated nonparametrically. This paper discusses the identification and consistent estimation of the unknown functions H, M, G and F, where r(x, z) = H[M(x, z)], M(x, z) = G(x) + F(z), and H is strictly monotonic. An estimation algorithm is proposed for each of the model's unknown components when r(x, z) represents a conditional mean function. The resulting estimators use marginal integration to separate the components G and F. Our estimators are shown to have a limiting Normal distribution with a faster rate of convergence than unrestricted nonparametric alternatives. Their small-sample performance is studied in a Monte Carlo experiment. We apply our results to estimate generalized homothetic production functions for four industries in the Chinese economy.
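The marginal integration step lends itself to a compact illustration. Below is a minimal sketch, taking H as the identity and using a Gaussian product kernel with ad hoc bandwidths (none of these choices come from the paper), of how averaging a bivariate Nadaraya-Watson fit over the empirical distribution of Z recovers G up to a location constant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data from the additive structure with H taken as the identity:
# r(x, z) = G(x) + F(z), with G(x) = x^2 and F(z) = sin(pi * z).
n = 400
X = rng.uniform(-1, 1, n)
Z = rng.uniform(-1, 1, n)
Y = X**2 + np.sin(np.pi * Z) + 0.1 * rng.standard_normal(n)

def r_hat(x0, z0, hx=0.25, hz=0.25):
    """Bivariate Nadaraya-Watson estimate of r(x0, z0) = E[Y | X=x0, Z=z0]."""
    w = np.exp(-0.5 * (((X - x0) / hx) ** 2 + ((Z - z0) / hz) ** 2))
    return np.sum(w * Y) / np.sum(w)

def G_hat(x0):
    """Marginal integration: averaging the joint fit over the empirical
    distribution of Z isolates G(x0) up to an additive constant."""
    return np.mean([r_hat(x0, zj) for zj in Z])

grid = np.linspace(-0.8, 0.8, 9)
est = np.array([G_hat(x) for x in grid])
est -= est.mean()                 # remove the unidentified location constant
print(np.round(est, 2))           # should track x^2 minus its grid mean
```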

3.
4.
We correct the limit theory presented in an earlier paper by Hu and Phillips [2004a. Nonstationary discrete choice. Journal of Econometrics 120, 103–138] for nonstationary time series discrete choice models with multiple choices and thresholds. The new limit theory shows that, in contrast to the binary choice model with nonstationary regressors and a zero threshold, where there are dual rates of convergence (n^{1/4} and n^{3/4}), all parameters including the thresholds converge at the rate n^{3/4}. The presence of nonzero thresholds therefore materially affects rates of convergence. Dual rates of convergence reappear when stationary variables are present in the system. Some simulation evidence is provided, showing how the magnitude of the thresholds affects finite sample performance. A new finding is that predicted probabilities and marginal effect estimates have finite sample distributions that manifest a pile-up, or increasing density, towards the limits of the domain of definition.
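The pile-up phenomenon mentioned at the end is easy to visualize. The toy simulation below illustrates only the mechanism, not the paper's estimator; the coefficient beta and threshold mu are made-up values. With a random-walk regressor the index wanders off to plus or minus infinity, so the choice probabilities concentrate near 0 and 1.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(10)
x = np.cumsum(rng.standard_normal(2000))  # integrated (random-walk) regressor
beta, mu = 0.5, 1.0                       # illustrative coefficient and threshold
p = norm.cdf(beta * x - mu)               # implied choice probabilities
hist, _ = np.histogram(p, bins=10, range=(0.0, 1.0))
print(hist)   # mass piles up in the first and last bins
```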

5.
We propose a test for the slope of a trend function when it is a priori unknown whether the series is trend-stationary or contains an autoregressive unit root. The procedure is based on a Feasible Quasi Generalized Least Squares method from an AR(1) specification with parameter α, the sum of the autoregressive coefficients. The estimate of α is the OLS estimate obtained from an autoregression applied to detrended data and is truncated to take the value 1 whenever the estimate is in a T^{-δ} neighborhood of 1. This makes the estimate "super-efficient" when α = 1 and implies that inference on the slope parameter can be performed using the standard Normal distribution whether α = 1 or |α| < 1. Theoretical arguments and simulation evidence show that δ = 1/2 is the appropriate choice. Simulations show that our procedure has better size and power properties than the tests proposed by [Bunzel, H., Vogelsang, T.J., 2005. Powerful trend function tests that are robust to strong serial correlation with an application to the Prebisch–Singer hypothesis. Journal of Business and Economic Statistics 23, 381–394] and [Harvey, D.I., Leybourne, S.J., Taylor, A.M.R., 2007. A simple, robust and powerful test of the trend hypothesis. Journal of Econometrics 141, 1302–1330].
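A minimal sketch of the procedure as described above (OLS detrending, a truncated AR(1) estimate with δ = 1/2, then inference from quasi-differenced data). The simulated DGP and the use of a pseudo-inverse to handle the α̂ = 1 case are my own illustrative choices.

```python
import numpy as np

def trend_slope_test(y, delta=0.5):
    T = len(y)
    t = np.arange(1.0, T + 1)
    Xmat = np.column_stack([np.ones(T), t])
    # Step 1: OLS detrending.
    beta = np.linalg.lstsq(Xmat, y, rcond=None)[0]
    u = y - Xmat @ beta
    # Step 2: OLS estimate of alpha from the detrended data, truncated to 1
    # whenever it lies in a T^{-delta} neighborhood of 1 (delta = 1/2).
    a = np.sum(u[1:] * u[:-1]) / np.sum(u[:-1] ** 2)
    if abs(a - 1.0) < T ** (-delta):
        a = 1.0
    # Step 3: feasible quasi-GLS on quasi-differenced data. When a = 1 the
    # intercept column differences to zero, so use a pseudo-inverse.
    yq, Xq = y[1:] - a * y[:-1], Xmat[1:] - a * Xmat[:-1]
    b = np.linalg.lstsq(Xq, yq, rcond=None)[0]
    e = yq - Xq @ b
    V = (e @ e / (len(e) - 2)) * np.linalg.pinv(Xq.T @ Xq)
    return b[1], b[1] / np.sqrt(V[1, 1])   # slope estimate and N(0,1) t-ratio

rng = np.random.default_rng(1)
T = 200
y = 0.05 * np.arange(1, T + 1) + np.cumsum(rng.standard_normal(T))  # alpha = 1
print(trend_slope_test(y))
```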

6.
7.
8.
This paper introduces the concept of a risk parameter in conditional volatility models of the form ε_t = σ_t(θ_0)η_t and develops statistical procedures to estimate this parameter. For a given risk measure r, the risk parameter is expressed as a function of the volatility coefficients θ_0 and the risk, r(η_t), of the innovation process. A two-step method is proposed to successively estimate these quantities. An alternative one-step approach, relying on a reparameterization of the model and the use of a non-Gaussian QML, is proposed. Asymptotic results are established for smooth risk measures, as well as for the Value-at-Risk (VaR). Asymptotic comparisons of the two approaches for VaR estimation suggest a superiority of the one-step method when the innovations are heavy-tailed. For standard GARCH models, the comparison depends only on characteristics of the innovations distribution, not on the volatility parameters. Monte Carlo experiments and an empirical study illustrate the superiority of the one-step approach for financial series.
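The two-step method can be sketched for a GARCH(1,1) with VaR as the risk measure: Gaussian QML for θ_0, then the empirical quantile of the standardized residuals for r(η_t). Everything below (the simulated t(5) innovations, starting values, choice of optimizer) is an illustrative assumption, not the paper's exact implementation.

```python
import numpy as np
from scipy.optimize import minimize

def sigma2_path(eps, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1)."""
    s2 = np.empty_like(eps)
    s2[0] = np.var(eps)
    for t in range(1, len(eps)):
        s2[t] = omega + alpha * eps[t - 1] ** 2 + beta * s2[t - 1]
    return s2

def neg_gaussian_qml(theta, eps):
    omega, alpha, beta = theta
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf
    s2 = sigma2_path(eps, omega, alpha, beta)
    return 0.5 * np.sum(np.log(s2) + eps ** 2 / s2)

# Simulate a GARCH(1,1) with heavy-tailed (Student-t(5)) innovations.
rng = np.random.default_rng(2)
n, omega0, a0, b0 = 2000, 0.1, 0.1, 0.8
eta = rng.standard_t(5, n) / np.sqrt(5 / 3)   # rescaled to unit variance
eps = np.empty(n)
s2 = omega0 / (1 - a0 - b0)
for t in range(n):
    eps[t] = np.sqrt(s2) * eta[t]
    s2 = omega0 + a0 * eps[t] ** 2 + b0 * s2

# Step 1: Gaussian QML for the volatility coefficients theta_0.
fit = minimize(neg_gaussian_qml, x0=[0.05, 0.05, 0.9], args=(eps,),
               method="Nelder-Mead")
om, al, be = fit.x
s2_hat = sigma2_path(eps, om, al, be)

# Step 2: risk of the innovation, here the empirical 1% quantile of
# eta_t = eps_t / sigma_t, combined with the one-step-ahead volatility.
q = np.quantile(eps / np.sqrt(s2_hat), 0.01)
var_next = -np.sqrt(om + al * eps[-1] ** 2 + be * s2_hat[-1]) * q
print("theta_hat:", np.round(fit.x, 3), " 1% VaR forecast:", round(var_next, 3))
```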

9.
10.
This paper develops a bootstrap theory for models including autoregressive time series with roots approaching unity as the sample size increases. In particular, we consider processes with roots converging to unity at rates slower than n^{-1}. We call such processes weakly integrated processes. It is established that the bootstrap relying on the estimated autoregressive model is generally consistent for weakly integrated processes. Both the sample and bootstrap statistics of weakly integrated processes are shown to yield the same normal asymptotics. Moreover, for asymptotically pivotal statistics of weakly integrated processes, the bootstrap is expected to provide an asymptotic refinement and give better approximations to the finite sample distributions than first-order asymptotic theory. For weakly integrated processes, the magnitudes of potential refinements by the bootstrap are shown to be proportional to the rate at which the root of the underlying process converges to unity. The order of bootstrap refinement can be as large as o(n^{-1/2+ε}) for any ε > 0. Our theory helps to explain the improvements that many practitioners have observed when using the bootstrap to analyze models with roots close to unity.
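A minimal sketch of the autoregressive (residual-based) bootstrap applied to a root approaching unity; the particular drift of the root toward 1 and the confidence-interval use are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
rho = 1 - 5 / np.sqrt(n)      # root drifting to unity more slowly than n^{-1}
y = np.zeros(n)
for t in range(1, n):
    y[t] = rho * y[t - 1] + rng.standard_normal()

# Fit the AR(1) and collect centered residuals.
rho_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
res = y[1:] - rho_hat * y[:-1]
res -= res.mean()

def bootstrap_rho(B=999):
    """Rebuild series from the fitted model with resampled residuals and
    re-estimate the autoregressive coefficient on each bootstrap sample."""
    out = np.empty(B)
    for b in range(B):
        e = rng.choice(res, size=n, replace=True)
        yb = np.zeros(n)
        for t in range(1, n):
            yb[t] = rho_hat * yb[t - 1] + e[t]
        out[b] = np.sum(yb[1:] * yb[:-1]) / np.sum(yb[:-1] ** 2)
    return out

boot = bootstrap_rho()
print(rho_hat, np.quantile(boot, [0.025, 0.975]))  # bootstrap interval for rho
```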

11.
We study estimation of the date of a change in persistence, from I(0) to I(1) or vice versa. Contrary to statements in the original papers, our analytical results establish that the ratio-based break point estimators of Kim [Kim, J.Y., 2000. Detection of change in persistence of a linear time series. Journal of Econometrics 95, 97–116], Kim et al. [Kim, J.Y., Belaire-Franch, J., Badillo Amador, R., 2002. Corrigendum to "Detection of change in persistence of a linear time series". Journal of Econometrics 109, 389–392] and Busetti and Taylor [Busetti, F., Taylor, A.M.R., 2004. Tests of stationarity against a change in persistence. Journal of Econometrics 123, 33–66] are inconsistent when a mean (or other deterministic component) is estimated for the process. In such cases, the estimators converge to random variables with upper bound given by the true break date when persistence changes from I(0) to I(1). A Monte Carlo study confirms the large-sample downward bias and also finds substantial biases in moderate-sized samples, partly due to properties at the end points of the search interval.
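The ratio-based estimator can be sketched as follows. This is a stylized version of the statistic with my own trimming choice, not the exact estimators cited above; running it also exhibits the downward bias under an estimated mean that the paper establishes.

```python
import numpy as np

def ratio_break_date(y, trim=0.2):
    """For each candidate break index k, compare scaled squared partial sums
    of demeaned data after k with those before k; pick the maximizer."""
    T = len(y)
    def css(u):
        s = np.cumsum(u - u.mean())
        return np.sum(s ** 2) / len(u) ** 2
    ks = np.arange(int(trim * T), int((1 - trim) * T))
    ratios = np.array([css(y[k:]) / css(y[:k]) for k in ks])
    return ks[np.argmax(ratios)] / T          # estimated break fraction

rng = np.random.default_rng(4)
T, k0 = 400, 200
e = rng.standard_normal(T)
y = np.concatenate([e[:k0], np.cumsum(e[k0:])])   # I(0) segment, then I(1)
print(ratio_break_date(y))   # true fraction is 0.5; note the downward bias
```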

12.
Consider a class of power-transformed and threshold GARCH(p, q) (PTTGARCH(p, q)) models, a natural generalization of the power-transformed and threshold GARCH(1,1) model in Hwang and Basawa [2004. Stationarity and moment structure for Box–Cox transformed threshold GARCH(1,1) processes. Statistics & Probability Letters 68, 209–220], which includes the standard GARCH model and many other models as special cases. We first establish the asymptotic normality of the quasi-maximum likelihood estimator (QMLE) of the parameters under the condition that the error distribution has a finite fourth moment. For the case of heavy-tailed errors, we propose a least absolute deviations estimator (LADE) for the PTTGARCH(p, q) model, and prove that the LADE is asymptotically normally distributed under very weak moment conditions. This paves the way for statistical inference based on asymptotic normality for heavy-tailed PTTGARCH(p, q) models. As a consequence, we can construct a Wald test for GARCH structure and discuss the order selection problem in heavy-tailed cases. Numerical results show that the LADE is more accurate than the QMLE for heavy-tailed errors. Furthermore, the theory is applied to the daily returns of the Hong Kong Hang Seng Index, which suggests that asymmetry and nonlinearity could be present in this financial time series and that the PTTGARCH model is capable of capturing these characteristics. As for the probabilistic structure of the PTTGARCH(p, q) model, we give in the appendix a necessary and sufficient condition for the existence of a strictly stationary solution of the model, the existence of its moments, and the tail behavior of the strictly stationary solution.
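The QMLE/LADE contrast can be illustrated on the simplest member of the class, a plain ARCH(1). The LAD criterion below, minimizing |log ε_t² − log σ_t²(θ)|, is one standard formulation of an LADE for volatility models (it identifies θ up to the median of η², hence the separate targets printed at the end); the DGP and optimizer settings are illustrative assumptions, not the paper's estimator for the full PTTGARCH class.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
n, omega0, a0 = 3000, 0.5, 0.3
eta = rng.standard_t(3, n) / np.sqrt(3.0)  # unit-variance t(3): infinite 4th moment
eps = np.empty(n)
eps[0] = eta[0]
for t in range(1, n):
    eps[t] = np.sqrt(omega0 + a0 * eps[t - 1] ** 2) * eta[t]

def s2_path(theta):
    omega, a = theta
    return omega + a * eps[:-1] ** 2

def qml_loss(theta):
    if min(theta) <= 0:
        return np.inf
    s2 = s2_path(theta)
    return 0.5 * np.sum(np.log(s2) + eps[1:] ** 2 / s2)

def lad_loss(theta):
    if min(theta) <= 0:
        return np.inf
    return np.sum(np.abs(np.log(eps[1:] ** 2) - np.log(s2_path(theta))))

qml = minimize(qml_loss, x0=[0.3, 0.2], method="Nelder-Mead").x
lad = minimize(lad_loss, x0=[0.3, 0.2], method="Nelder-Mead").x
m = np.median(eta ** 2)   # the LAD criterion identifies theta up to this scale
print("QMLE:", np.round(qml, 3), " target:", (omega0, a0))
print("LADE:", np.round(lad, 3), " target:", (round(m * omega0, 3), round(m * a0, 3)))
```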

13.
14.
15.
We consider the problem of testing whether the observations X_1, …, X_n of a time series are independent with unspecified (possibly nonidentical) distributions symmetric about a common known median. Various bounds on the distributions of serial correlation coefficients are proposed: exponential bounds, Eaton-type bounds, Chebyshev bounds and Berry–Esséen–Zolotarev bounds. The bounds are exact in finite samples, distribution-free and easy to compute. The performance of the bounds is evaluated and compared with traditional serial dependence tests in a simulation experiment. The procedures proposed are applied to U.S. data on interest rates (commercial paper rate).
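The paper's bounds are analytic; as a loose companion illustration, a sign-flip randomization test exploits the same assumptions (independence and symmetry about a known median, here zero) to obtain an exact finite-sample p-value for a serial correlation statistic. This is a related device, not the bounds themselves.

```python
import numpy as np

def r1(u):
    """Lag-1 serial correlation coefficient."""
    return np.sum(u[1:] * u[:-1]) / np.sum(u ** 2)

def signflip_pvalue(x, B=4999, seed=5):
    """Under independence and symmetry about zero, randomly flipping signs
    leaves the null distribution unchanged, giving an exact p-value."""
    rng = np.random.default_rng(seed)
    stat = abs(r1(x))
    null = np.array([abs(r1(x * rng.choice([-1.0, 1.0], size=len(x))))
                     for _ in range(B)])
    return (1 + np.sum(null >= stat)) / (B + 1)

rng = np.random.default_rng(6)
x_indep = rng.standard_t(3, 200)   # heavy-tailed but independent
e = rng.standard_normal(201)
x_ma = e[1:] + 0.5 * e[:-1]        # serially dependent MA(1)
print(signflip_pvalue(x_indep), signflip_pvalue(x_ma))
```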

16.
17.
This paper is about how to estimate the integrated covariance ⟨X, Y⟩_T of two assets over a fixed time horizon [0, T], when the observations of X and Y are "contaminated" and when such noisy observations are at discrete, but not synchronized, times. We show that the usual previous-tick covariance estimator is biased, and the size of the bias is more pronounced for less liquid assets. This is an analytic characterization of the Epps effect. We also provide the optimal sampling frequency, which balances the tradeoff between the bias and various sources of stochastic error, including nonsynchronous trading, microstructure noise, and time discretization. Finally, a two-scales covariance estimator is provided which simultaneously cancels (to first order) the Epps effect and the effect of microstructure noise. The gain is demonstrated in data.
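A minimal simulation (with made-up liquidity, noise, and correlation parameters) of the previous-tick covariance estimator, showing the bias toward zero at fine sampling grids that the abstract identifies as the Epps effect.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20000                                     # fine "true" time grid on [0, 1]
dW = rng.standard_normal((2, n)) * np.sqrt(1.0 / n)
dW[1] = 0.6 * dW[0] + np.sqrt(1 - 0.6 ** 2) * dW[1]   # true <X,Y>_1 = 0.6
X, Y = np.cumsum(dW, axis=1)

# Nonsynchronous observation times plus additive microstructure noise.
tx = np.concatenate([[0], np.sort(rng.choice(np.arange(1, n), 1500, replace=False))])
ty = np.concatenate([[0], np.sort(rng.choice(np.arange(1, n), 900, replace=False))])
Xobs = X[tx] + 0.005 * rng.standard_normal(len(tx))
Yobs = Y[ty] + 0.005 * rng.standard_normal(len(ty))

def previous_tick_cov(m):
    """Sample both series at m common grid times using the most recent
    observation at or before each time, then sum the cross increments."""
    grid = np.linspace(0, n - 1, m)
    xi = Xobs[np.searchsorted(tx, grid, side="right") - 1]
    yi = Yobs[np.searchsorted(ty, grid, side="right") - 1]
    return np.sum(np.diff(xi) * np.diff(yi))

for m in (50, 200, 1000, 5000):
    print(m, round(previous_tick_cov(m), 3))  # finer grids: bias toward zero
```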

18.
This paper establishes the relatively weak conditions under which causal inferences from a regression-discontinuity (RD) analysis can be as credible as those from a randomized experiment, and hence under which the validity of the RD design can be tested by examining whether or not there is a discontinuity in any predetermined (or "baseline") variables at the RD threshold. Specifically, consider a standard treatment evaluation problem in which treatment is assigned to an individual if and only if V > v_0, where v_0 is a known threshold and V is observable. V can depend on the individual's characteristics and choices, but there is also a random chance element: for each individual, there exists a well-defined probability distribution for V. The density function, which is allowed to differ arbitrarily across the population, is assumed to be continuous. It is formally established that treatment status here is as good as randomized in a local neighborhood of V = v_0. These ideas are illustrated in an analysis of U.S. House elections, where inherent uncertainty in the final vote count is plausible, implying that the winning party is essentially randomized among elections decided by a narrow margin. The evidence is consistent with this prediction, which is then used to generate "near-experimental" causal estimates of the electoral advantage to incumbency.
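Both ideas, local randomization at the threshold and the implied balance test on predetermined covariates, can be illustrated with a toy window estimator. The DGP, window width, and the simple difference in means are illustrative choices; the paper's empirical analysis is far more careful.

```python
import numpy as np

rng = np.random.default_rng(8)
n, v0 = 5000, 0.0
V = rng.standard_normal(n)                      # running variable
D = (V > v0).astype(float)                      # treatment assignment rule
W = rng.standard_normal(n)                      # predetermined covariate
Y = 0.5 * V + 2.0 * D + rng.standard_normal(n)  # true effect at v0 is 2

def rd_gap(outcome, h=0.2):
    """Difference in means just above vs just below the threshold."""
    above = outcome[(V > v0) & (V <= v0 + h)]
    below = outcome[(V <= v0) & (V > v0 - h)]
    return above.mean() - below.mean()

print("effect estimate:", round(rd_gap(Y), 2))  # near 2, plus O(h) bias
print("balance check:  ", round(rd_gap(W), 2))  # near 0 if design is valid
```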

19.
20.