Similar Documents
20 similar documents found.
1.
Cointegration, common cycle, and related test statistics are often constructed using logged data, even when there is no clear reason why logs should be used rather than levels. Unfortunately, standard data transformation tests, such as those based on Box–Cox transformations, cannot be shown to be consistent unless assumptions are made concerning whether the variables are I(0) or I(1). In this paper, we propose a simple randomized procedure for choosing between levels and log-levels specifications in the (possible) presence of deterministic and/or stochastic trends, and discuss the impact of incorrect data transformation on common cycle, cointegration and unit root tests.
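The levels-versus-logs choice in this abstract sits inside the Box–Cox family: λ = 1 recovers (shifted) levels and λ → 0 recovers logs. A minimal numpy sketch of that family (illustrative only; it is not the paper's randomized selection procedure):

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox transform: lam = 1 gives levels (shifted by 1), lam -> 0 gives logs."""
    y = np.asarray(y, dtype=float)
    if lam == 0.0:
        return np.log(y)
    return (y ** lam - 1.0) / lam
```

The continuity at λ = 0 is what makes the family a natural bridge between the two specifications the paper chooses between.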

2.
Consider the location-scale regression model Y = m(X) + σ(X)ε, where the error ε is independent of the covariate X, and m and σ are smooth but unknown functions. We construct tests for the validity of this model and show that the asymptotic limits of the proposed test statistics are distribution free. We also investigate the finite sample properties of the tests through a simulation study, and we apply the tests in the analysis of data on food expenditures.
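Under the model above, the standardized residuals (Y − m(X))/σ(X) should behave like an i.i.d. error independent of X. A rough illustration with Nadaraya–Watson smoothers (the data-generating functions and bandwidth are made up for the sketch; the paper's actual test statistics are different):

```python
import numpy as np

def nw(x0, x, y, h):
    """Nadaraya-Watson kernel regression estimate of E[y|x] at points x0."""
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 500)
eps = rng.standard_normal(500)
y = np.sin(2 * np.pi * x) + (0.2 + 0.3 * x) * eps   # m(x) and sigma(x), illustrative

m_hat = nw(x, x, y, h=0.05)                          # estimate m
s2_hat = np.clip(nw(x, x, (y - m_hat) ** 2, h=0.05), 1e-8, None)
resid = (y - m_hat) / np.sqrt(s2_hat)                # ~i.i.d., independent of x if the model holds
```

A validity test would then check independence between `resid` and `x`; failure of that independence is what the paper's tests are designed to detect.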

3.
Consider a class of power-transformed and threshold GARCH(p, q) (PTTGARCH(p, q)) models, a natural generalization of the power-transformed and threshold GARCH(1,1) model in Hwang and Basawa [2004. Stationarity and moment structure for Box–Cox transformed threshold GARCH(1,1) processes. Statistics & Probability Letters 68, 209–220], which includes the standard GARCH model and many other models as special cases. We first establish the asymptotic normality of the quasi-maximum likelihood estimators (QMLE) of the parameters under the condition that the error distribution has a finite fourth moment. For the case of heavy-tailed errors, we propose a least absolute deviations estimator (LADE) for the PTTGARCH(p, q) model, and prove that the LADE is asymptotically normally distributed under very weak moment conditions. This paves the way for statistical inference based on asymptotic normality for heavy-tailed PTTGARCH(p, q) models. As a consequence, we can construct the Wald test for GARCH structure and discuss the order selection problem in heavy-tailed cases. Numerical results show that the LADE is more accurate than the QMLE for heavy-tailed errors. Furthermore, the theory is applied to the daily returns of the Hong Kong Hang Seng Index, which suggests that asymmetry and nonlinearity could be present in the financial time series and that the PTTGARCH model is capable of capturing these characteristics. As for the probabilistic structure of the PTTGARCH(p, q) model, we give in the appendix a necessary and sufficient condition for the existence of a strictly stationary solution of the model, the existence of its moments, and the tail behavior of the strictly stationary solution.
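In the (p, q) = (1, 1) special case of Hwang and Basawa, the conditional scale raised to the power δ is linear in the positive and negative parts of the lagged shock. A simulation sketch of that recursion (the exact parameterization here is an assumption based on the cited GARCH(1,1) form, not taken from this paper):

```python
import numpy as np

def simulate_pttgarch11(n, omega, a_pos, a_neg, beta, delta, seed=0):
    """Simulate a power-transformed threshold GARCH(1,1):
    sigma_t^delta = omega + a_pos*(e_{t-1}^+)^delta + a_neg*(-e_{t-1}^-)^delta
                    + beta*sigma_{t-1}^delta,   e_t = sigma_t * z_t, z_t ~ N(0,1)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    sig_d = np.empty(n)                       # sigma_t^delta
    e = np.empty(n)
    sig_d[0] = omega / (1.0 - beta)           # start at the no-shock fixed point
    e[0] = sig_d[0] ** (1.0 / delta) * z[0]
    for t in range(1, n):
        pos = max(e[t - 1], 0.0) ** delta
        neg = max(-e[t - 1], 0.0) ** delta
        sig_d[t] = omega + a_pos * pos + a_neg * neg + beta * sig_d[t - 1]
        e[t] = sig_d[t] ** (1.0 / delta) * z[t]
    return e
```

Setting a_pos ≠ a_neg produces the asymmetric response to good and bad news that the Hang Seng application in the abstract points to.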

4.
Y is conditionally independent of Z given X if Pr{f(y|X, Z) = f(y|X)} = 1 for all y on its support, where f(·|·) denotes the conditional density of Y given (X, Z) or X. This paper proposes a nonparametric test of conditional independence based on the notion that two conditional distributions are equal if and only if the corresponding conditional characteristic functions are equal. We extend the test of Su and White (2005. A Hellinger-metric nonparametric test for conditional independence. Discussion Paper, Department of Economics, UCSD) in two directions: (1) our test is less sensitive to the choice of bandwidth sequences; (2) our test has power against deviations on the full support of the density of (X, Y, Z). We establish asymptotic normality of our test statistic under weak data dependence conditions. Simulation results suggest that the test is well behaved in finite samples. Applications to stock market data indicate that our test can reveal some interesting nonlinear dependence that a traditional linear Granger causality test fails to detect.

5.
Most panel unit root tests are designed to test the joint null hypothesis of a unit root for each individual series in a panel. After a rejection, it will often be of interest to identify which series can be deemed to be stationary and which series can be deemed nonstationary. Researchers will sometimes carry out this classification on the basis of n individual (univariate) unit root tests based on some ad hoc significance level. In this paper, we suggest and demonstrate how to use the false discovery rate (FDR) in evaluating I(1)/I(0) classifications.
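One standard FDR controller that can be applied to the n univariate unit-root p-values is the Benjamini–Hochberg step-up procedure (shown here as an illustration of the FDR idea; the paper's specific procedure may differ):

```python
import numpy as np

def bh_fdr(pvals, q=0.05):
    """Benjamini-Hochberg step-up at FDR level q: returns a boolean mask of
    rejected unit-root nulls, i.e. series classified as I(0)."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    thresh = q * np.arange(1, n + 1) / n       # i*q/n for the i-th smallest p-value
    below = p[order] <= thresh
    reject = np.zeros(n, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])        # largest i with p_(i) <= i*q/n
        reject[order[: k + 1]] = True
    return reject
```

Unlike a fixed ad hoc significance level, the rejection threshold here adapts to how many small p-values the panel produces.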

6.
The objective of this paper is to integrate the generalized gamma (GG) distribution into the information theoretic literature. We study information properties of the GG distribution and provide an assortment of information measures for the GG family, which includes the exponential, gamma, Weibull, and generalized normal distributions as its subfamilies. The measures include entropy representations of the log-likelihood ratio, AIC, and BIC, discriminating information between GG and its subfamilies, a minimum discriminating information function, power transformation information, and a maximum entropy index of fit to histogram. We provide full parametric Bayesian inference for the discrimination information measures. We also provide Bayesian inference for the fit of the GG model to a histogram, using a semi-parametric Bayesian procedure referred to as the maximum entropy Dirichlet (MED). The GG information measures are computed for duration of unemployment and duration of CEO tenure.
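The entropy of a GG variate can be approximated by Monte Carlo using the fact that if G ~ Gamma(a, 1) then G^{1/c} follows a unit-scale GG(a, c); the exponential subfamily (a = c = 1, entropy exactly 1) gives a check. The unit-scale parameterization below is an assumption for illustration, not necessarily the paper's:

```python
import math
import numpy as np

def gg_logpdf(x, a, c):
    """Log density of the unit-scale generalized gamma GG(a, c):
    f(x) = c * x**(a*c - 1) * exp(-x**c) / Gamma(a),  x > 0, a, c > 0."""
    return math.log(c) + (a * c - 1) * np.log(x) - x ** c - math.lgamma(a)

def gg_entropy_mc(a, c, n=200_000, seed=0):
    """Monte Carlo differential entropy -E[log f(X)], sampling X = G**(1/c)
    with G ~ Gamma(a, 1)."""
    rng = np.random.default_rng(seed)
    x = rng.gamma(a, 1.0, size=n) ** (1.0 / c)
    return -gg_logpdf(x, a, c).mean()
```

Setting c = 1 recovers the gamma subfamily and a = 1 the Weibull, which is how the abstract's subfamily comparisons arise.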

7.
Panels with non-stationary multifactor error structures
The presence of cross-sectionally correlated error terms invalidates much inferential theory of panel data models. Recently, work by Pesaran (2006) has suggested a method which makes use of cross-sectional averages to provide valid inference in the case of stationary panel regressions with a multifactor error structure. This paper extends this work and examines the important case where the unobservable common factors follow unit root processes. The extension to I(1) processes is remarkable on two counts. First, it is of great interest to note that while intermediate results needed for deriving the asymptotic distribution of the panel estimators differ between the I(1) and I(0) cases, the final results are surprisingly similar. This is in direct contrast to the standard distributional results for I(1) processes that radically differ from those for I(0) processes. Second, it is worth noting the significant extra technical demands required to prove the new results. The theoretical findings are further supported for small samples via an extensive Monte Carlo study. In particular, the results of the Monte Carlo study suggest that the cross-sectional-average-based method is robust to a wide variety of data generation processes and has lower biases than the alternative estimation methods considered in the paper.
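The cross-sectional-average idea can be sketched in a few lines: the averages of y and x proxy the unobserved common factor, and projecting them out of each unit's data before pooling yields a consistent slope estimate. The data-generating process below (I(1) factor, heterogeneous loadings) is illustrative, not the paper's Monte Carlo design:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, beta = 50, 100, 1.5
f = np.cumsum(rng.standard_normal(T))              # unobserved I(1) common factor
lam_x = 1.0 + 0.3 * rng.standard_normal(N)         # heterogeneous factor loadings
lam_y = 1.0 + 0.3 * rng.standard_normal(N)
x = lam_x[:, None] * f + rng.standard_normal((N, T))
y = beta * x + lam_y[:, None] * f + rng.standard_normal((N, T))

# cross-sectional averages proxy f; project them (and a constant) out
Z = np.column_stack([np.ones(T), x.mean(axis=0), y.mean(axis=0)])
M = np.eye(T) - Z @ np.linalg.solve(Z.T @ Z, Z.T)
beta_cce = (sum(x[i] @ M @ y[i] for i in range(N))
            / sum(x[i] @ M @ x[i] for i in range(N)))   # pooled CCE-type estimate
```

Even with a unit-root factor, `beta_cce` lands close to the true slope of 1.5, consistent with the abstract's finding that the final results carry over from the I(0) case.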

9.
This paper extends the cross-sectionally augmented panel unit root test (CIPS) proposed by Pesaran (2007) to the case of a multifactor error structure, and proposes a new panel unit root test based on a simple average of cross-sectionally augmented Sargan–Bhargava statistics (CSB). The basic idea is to exploit information regarding the m unobserved factors that are shared by k observed time series in addition to the series under consideration. Initially, we develop the tests assuming that m0, the true number of factors, is known and show that the limit distribution of the tests does not depend on any nuisance parameters, so long as k ≥ m0 − 1. Small sample properties of the tests are investigated by Monte Carlo experiments and are shown to be satisfactory. In particular, the proposed CIPS and CSB tests have the correct size for all combinations of the cross section (N) and time series (T) dimensions considered. The power of both tests rises with N and T, although the CSB test performs better than the CIPS test for smaller sample sizes. The various testing procedures are illustrated with empirical applications to real interest rates and real equity prices across countries.

10.
We consider estimation of the regression function in a semiparametric binary regression model defined through an appropriate link function (with emphasis on the logistic link) using likelihood-ratio based inversion. The dichotomous response variable Δ is influenced by a set of covariates that can be partitioned as (X, Z), where Z (real valued) is the covariate of primary interest and X (vector valued) denotes a set of control variables. For any fixed X, the conditional probability of the event of interest (Δ = 1) is assumed to be a non-decreasing function of Z. The effect of the control variables is captured by a regression parameter β. We show that the baseline conditional probability function (corresponding to X = 0) can be estimated by isotonic regression procedures and develop a likelihood ratio based method for constructing asymptotic confidence intervals for the conditional probability function (the regression function) that avoids the need to estimate nuisance parameters. Interestingly enough, the calibration of the likelihood ratio based confidence sets for the regression function no longer involves the usual χ² quantiles, but those of the distribution of a new random variable that can be characterized as a functional of convex minorants of Brownian motion with quadratic drift. Confidence sets for the regression parameter β can however be constructed using asymptotically χ² likelihood ratio statistics. The finite sample performance of the methods is assessed via a simulation study. The techniques of the paper are applied to data sets on primary school attendance among children belonging to different socio-economic groups in rural India.
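The isotonic regression step in the abstract is typically computed with the pool-adjacent-violators algorithm (PAVA): sort the binary responses by Z, then average adjacent blocks until the fitted values are nondecreasing. A compact sketch (a generic PAVA, not the paper's full likelihood-ratio machinery):

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: least-squares nondecreasing fit to y
    (applied to 0/1 responses sorted by Z, this estimates P(Delta=1 | Z))."""
    means, weights = [], []
    for v in np.asarray(y, dtype=float):
        means.append(v)
        weights.append(1.0)
        # merge the last two blocks while monotonicity is violated
        while len(means) > 1 and means[-2] > means[-1]:
            w = weights[-2] + weights[-1]
            m = (means[-2] * weights[-2] + means[-1] * weights[-1]) / w
            means[-2:] = [m]
            weights[-2:] = [w]
    return np.repeat(means, np.asarray(weights, dtype=int))
```

The output is a step function, which is why the limiting theory involves convex-minorant functionals rather than the usual χ² quantiles.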

11.
In a sample-selection model with the ‘selection’ variable Q and the ‘outcome’ variable Y, Y is observed only when Q = 1. For a treatment D affecting both Q and Y, three effects are of interest: the ‘participation’ (i.e., selection) effect of D on Q, the ‘visible performance’ (i.e., observed outcome) effect of D on QY, and the ‘invisible performance’ (i.e., latent outcome) effect of D on Y. This paper shows the conditions under which the three effects are identified, respectively, by the three corresponding mean differences of Q, QY, and Y|Q=1 across the control (D = 0) and treatment (D = 1) groups. Our nonparametric estimators for those effects adopt a two-sample framework and have several advantages over the usual matching methods. First, there is no need to select the number of matched observations. Second, the asymptotic distribution is easily obtained. Third, over-sampling the control/treatment group is allowed. Fourth, there is a built-in mechanism that takes into account the ‘non-overlapping support problem’, which the usual matching deals with by choosing a ‘caliper’. Fifth, a sensitivity analysis to gauge the presence of unobserved confounders is available. A simulation study is conducted to compare the proposed methods with matching methods, and a real data illustration is provided.
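The three mean differences are easy to see in a simulated example. The toy design below assumes selection is unrelated to the latent outcome given D (so the naive Y|Q=1 contrast happens to recover the latent effect); the paper's contribution is precisely the conditions under which such contrasts identify each effect:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
d = rng.integers(0, 2, n)                              # treatment D
q = (rng.random(n) < 0.4 + 0.2 * d).astype(int)        # selection Q: P(Q=1|D) = 0.4 + 0.2*D
y_latent = 1.0 + 0.5 * d + rng.standard_normal(n)      # latent outcome Y
y_obs = q * y_latent                                   # observed outcome Q*Y

participation = q[d == 1].mean() - q[d == 0].mean()                 # effect of D on Q  (~0.2)
visible = y_obs[d == 1].mean() - y_obs[d == 0].mean()               # effect of D on QY (~0.5)
latent = (y_latent[(d == 1) & (q == 1)].mean()
          - y_latent[(d == 0) & (q == 1)].mean())                   # Y | Q=1 contrast  (~0.5)
```

When selection depends on unobservables correlated with Y, the last contrast would no longer equal the invisible-performance effect, which is what the paper's sensitivity analysis is for.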

12.
In this paper we show that the Quasi ML estimation method yields consistent Random and Fixed Effects estimators for the autoregression parameter ρ in the panel AR(1) model with arbitrary initial conditions and possibly time-series heteroskedasticity, even when the error components are drawn from heterogeneous distributions. We investigate both analytically and by means of Monte Carlo simulations the properties of the QML estimators for ρ. The RE(Q)MLE for ρ is asymptotically at least as robust to individual heterogeneity and, when the data are i.i.d. and normal, at least as efficient as the FE(Q)MLE for ρ. Furthermore, the QML estimators for ρ only suffer from a ‘weak moment conditions’ problem when ρ is close to one if the cross-sectional average of the variances of the errors is (almost) constant over time, e.g. under time-series homoskedasticity. However, in this case the QML estimators for ρ are still consistent when ρ is local to or equal to one, although they converge to a non-normal, possibly asymmetric distribution at a rate that is lower than N^{1/2} but at least N^{1/4}. Finally, we study the finite sample properties of two types of estimators for the standard errors of the QML estimators for ρ, and the bounds of QML based confidence intervals for ρ.

15.
We consider a stochastic frontier model with error ε = v − u, where v is normal and u is half normal. We derive the distribution of the usual estimate of u, E(u|ε). We show that as the variance of v approaches zero, E(u|ε) − u converges to zero, while as the variance of v approaches infinity, E(u|ε) converges to E(u). We graph the density of E(u|ε) for intermediate cases. To show that E(u|ε) is a shrinkage of u towards its mean, we derive and graph the distribution of E(u|ε) conditional on u. We also consider the distribution of estimated inefficiency in the fixed-effects panel data setting.
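The "usual estimate" E(u|ε) in this normal/half-normal setting is the Jondrow et al. (1982) conditional-mean formula, which exhibits both limits stated in the abstract. A numpy sketch (standard JLMS formula; the distributional results derived in the paper are not reproduced here):

```python
import math
import numpy as np

def norm_pdf(x):
    return np.exp(-0.5 * np.asarray(x) ** 2) / math.sqrt(2 * math.pi)

def norm_cdf(x):
    return 0.5 * (1.0 + np.vectorize(math.erf)(np.asarray(x) / math.sqrt(2)))

def e_u_given_eps(eps, sigma_u, sigma_v):
    """Jondrow et al. (1982): E(u | eps) for eps = v - u with
    v ~ N(0, sigma_v^2) and u half-normal with scale sigma_u."""
    s2 = sigma_u ** 2 + sigma_v ** 2
    mu_star = -eps * sigma_u ** 2 / s2          # mean of u | eps before truncation
    sig_star = sigma_u * sigma_v / math.sqrt(s2)
    z = mu_star / sig_star
    return mu_star + sig_star * norm_pdf(z) / norm_cdf(z)   # truncated-normal mean
```

As σ_v → 0 the estimate reproduces u itself (here ε = −u), and as σ_v → ∞ it collapses to the unconditional mean E(u) = σ_u·√(2/π), matching the two limits in the abstract.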

19.
We provide sufficient conditions for the first-order approach in the principal-agent problem when the agent’s utility has the nonseparable form u(y − c(a)), where y is the contractual payoff and c(a) is the money cost of effort. We first consider a decision-maker facing prospects which cost c(a) and with distributions of returns y that depend on a. The decision problem is shown to be concave if the primitive of the cdf of returns is jointly convex in a and y, a condition we call Concavity of the Cumulative Quantile (CCQ), which is satisfied by many common distributions. Next we apply CCQ to the distribution of outcomes (or their likelihood-ratio transforms) in the principal-agent problem and derive restrictions on the utility function that validate the first-order approach. We also discuss another condition, log-convexity of the distribution, and show that it allows binding limited liability constraints, which CCQ does not.

20.
In the first part of the paper, we study concepts of supremum and maximum as subsets of a topological space X endowed with preference relations. Several rather general existence theorems are obtained for the case where the preferences are defined by countable semicontinuous multi-utility representations. In the second part of the paper, we consider partial orders and preference relations “lifted” from a metric separable space X endowed with a random preference relation to the space L⁰(X) of X-valued random variables. We provide an example of application of the notion of essential maximum to the problem of the minimal portfolio super-replicating an American-type contingent claim under transaction costs.
