Similar Documents
20 similar documents retrieved (search time: 0 ms)
1.
We propose a consistent test of a linear functional form against a nonparametric alternative in a fixed effects panel data model. We show that the test statistic has a limiting standard normal distribution under the null hypothesis and that the test is consistent against the nonparametric alternative. We also establish the asymptotic validity of a bootstrap procedure used to better approximate the finite-sample null distribution of the test statistic. Simulation results show that the proposed test performs well for panel data with a large number of cross-sectional units and a finite number of observations across time.
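The bootstrap step described in this abstract can be sketched generically. The snippet below is a minimal illustration, not the authors' actual statistic: it uses an ad hoc within-residual moment as a stand-in test statistic and a wild (sign-flip) bootstrap at the unit level to approximate its finite-sample null distribution; the design and the statistic itself are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy panel: n units, T periods; null model is linear, y = 2x + unit effect + noise.
n, T = 100, 5
x = rng.normal(size=(n, T))
alpha = rng.normal(size=(n, 1))               # fixed effects
y = 2.0 * x + alpha + rng.normal(size=(n, T))

def stat(y, x):
    """Ad hoc within-residual moment used as a stand-in test statistic."""
    yd = y - y.mean(axis=1, keepdims=True)    # within transform wipes out fixed effects
    xd = x - x.mean(axis=1, keepdims=True)
    b = (xd * yd).sum() / (xd * xd).sum()     # within (fixed-effects) slope
    resid = yd - b * xd
    return float(np.abs((resid * xd ** 2).sum()) / np.sqrt(n * T))

t_obs = stat(y, x)

# Wild bootstrap: flip residual signs, one sign per unit, to mimic the null law.
yd = y - y.mean(axis=1, keepdims=True)
xd = x - x.mean(axis=1, keepdims=True)
b = (xd * yd).sum() / (xd * xd).sum()
resid = yd - b * xd

B = 199
boot = np.empty(B)
for i in range(B):
    w = rng.choice([-1.0, 1.0], size=(n, 1))
    y_star = b * xd + w * resid               # data regenerated under the linear null
    boot[i] = stat(y_star, xd)
p_value = (1 + (boot >= t_obs).sum()) / (B + 1)
```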

2.
In this paper we consider the problem of testing for equality of two density or two conditional density functions defined over mixed discrete and continuous variables. We smooth both the discrete and continuous variables, with the smoothing parameters chosen via least-squares cross-validation. The test statistics are shown to have asymptotic normal null distributions. However, we advocate the use of bootstrap methods to better approximate their null distributions in finite-sample settings, and we establish the asymptotic validity of the proposed bootstrap method. Simulations show that the proposed tests have better power than both conventional frequency-based tests and smoothing tests based on ad hoc smoothing parameter selection, while a demonstrative empirical application to the joint distribution of earnings and educational attainment underscores the utility of the proposed approach in mixed data settings.
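A hedged sketch of the resampling idea: the code below compares two univariate densities with an integrated-squared-difference statistic between Gaussian kernel estimates and approximates the null by permuting group labels. It uses a fixed bandwidth and a permutation scheme for simplicity, whereas the paper selects bandwidths by least-squares cross-validation and handles mixed discrete/continuous data.

```python
import numpy as np

rng = np.random.default_rng(1)

def kde(u, data, h):
    """Gaussian kernel density estimate evaluated at the points u."""
    z = (u[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z ** 2).mean(axis=1) / (h * np.sqrt(2.0 * np.pi))

def isd(a, b, grid, h):
    """Integrated squared difference between the two density estimates."""
    d = kde(grid, a, h) - kde(grid, b, h)
    return float((d ** 2).sum() * (grid[1] - grid[0]))

a = rng.normal(0.0, 1.0, 200)
b = rng.normal(0.0, 1.0, 200)             # drawn from the same law: the null is true
grid = np.linspace(-4.0, 4.0, 201)
h = 0.5                                    # fixed bandwidth (the paper uses LSCV instead)

t_obs = isd(a, b, grid, h)

# Approximate the null distribution by reshuffling the pooled group labels.
pooled = np.concatenate([a, b])
B = 99
boot = np.empty(B)
for i in range(B):
    perm = rng.permutation(pooled)
    boot[i] = isd(perm[:200], perm[200:], grid, h)
p_value = (1 + (boot >= t_obs).sum()) / (B + 1)
```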

3.
Propensity score matching has become a popular method for the estimation of average treatment effects. In empirical applications, researchers almost always impose a parametric model for the propensity score. This practice raises the possibility that the model for the propensity score is misspecified and therefore the propensity score matching estimator of the average treatment effect may be inconsistent. We show that the common practice of calculating estimates of the densities of the propensity score conditional on the participation decision provides a means for examining whether the propensity score is misspecified. In particular, we derive a restriction between the density of the propensity score among participants and the density among nonparticipants. We show that this restriction between the two conditional densities is equivalent to a particular orthogonality restriction and derive a formal test based upon it. The resulting test is shown via a simulation study to have dramatically greater power than competing tests for many alternatives. The principal disadvantage of this approach is loss of power against some alternatives.
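The restriction between the two conditional densities follows from Bayes' rule: with propensity score p(X) and participation indicator D, f(p | D=1) / f(p | D=0) = [p/(1-p)] * [(1-pi)/pi], where pi = Pr(D=1). Below is a minimal simulation check of this identity under a correctly specified logit score, using crude histogram-bin density estimates; it is an illustration of the identity, not the authors' formal test.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a correctly specified logit propensity score (illustrative design).
n = 200_000
x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-(0.3 + 0.8 * x)))          # true propensity score
d = rng.uniform(size=n) < p                          # participation decision
pi = d.mean()

# Bayes' rule: f(p | D=1) / f(p | D=0) = [p / (1 - p)] * [(1 - pi) / pi].
# Check it on a histogram bin around p0 = 0.5, where p0 / (1 - p0) = 1.
lo, hi = 0.48, 0.52
in_bin = (p > lo) & (p < hi)
f1 = (in_bin & d).mean() / pi / (hi - lo)            # density of p among participants
f0 = (in_bin & ~d).mean() / (1.0 - pi) / (hi - lo)   # density among nonparticipants
ratio = f1 / f0
implied = 1.0 * (1.0 - pi) / pi                      # identity evaluated at p0 = 0.5
```

Under misspecification of p(X), the estimated conditional densities generally violate this identity, which is what the paper's test exploits.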

4.
Gábor Szűcs, Metrika, 2008, 67(1): 63–81
Statistical procedures based on the estimated empirical process are well known for testing goodness of fit to parametric distribution families. These methods are usually not distribution-free, so the asymptotic critical values of the test statistics depend on unknown parameters. This difficulty may be overcome by the use of parametric bootstrap procedures. The aim of this paper is to prove a weak approximation theorem for the bootstrapped estimated empirical process under very general conditions, which allow both the most important continuous and discrete distribution families, along with most parameter estimation methods. The emphasis is on families of discrete distributions, and simulation results for families of negative binomial distributions are also presented.

5.
Many macroeconomic and financial variables show highly persistent and correlated patterns but are not necessarily cointegrated. Recently, Sun et al. (2011) proposed a semiparametric varying coefficient approach to capture correlations between integrated but non-cointegrated variables. Due to the complications arising from the integrated disturbance term and the semiparametric functional form, consistent estimation of such a semiparametric model requires stronger conditions than are usually needed for consistent estimation of a linear (spurious) regression model or of a semiparametric varying coefficient model with a stationary disturbance. It is therefore important to develop a testing procedure to examine, for a given data set, whether a linear relationship holds, while allowing the disturbance to be an integrated process. In this paper we propose two test statistics for detecting linearity against a semiparametric varying coefficient alternative. Monte Carlo simulations are used to examine the finite-sample performance of the proposed tests.

6.
We adapt the Bierens (1990) test to the I-regular models of Park and Phillips (2001). Bierens (1990) defines the test hypothesis in terms of a conditional moment condition. Under the null hypothesis, the moment condition holds with probability one. The probability measure used is the one induced by the variables in the model, which are assumed to be strictly stationary. Our framework is nonstationary, so this approach is not always applicable. We show that the Lebesgue measure can be used instead in a meaningful way. The resulting test is consistent against all I-regular alternatives.

7.
This paper is concerned with inference about a function g that is identified by a conditional quantile restriction involving instrumental variables. The paper presents a test of the hypothesis that g belongs to a finite-dimensional parametric family against a nonparametric alternative. The test is not subject to the ill-posed inverse problem of nonparametric instrumental variable estimation. Under mild conditions, the test is consistent against any alternative model. In large samples, its power is arbitrarily close to 1 uniformly over a class of alternatives whose distance from the null hypothesis is proportional to n^(−1/2), where n is the sample size. Monte Carlo simulations illustrate the finite-sample performance of the test.

8.
The loss given default (LGD) distribution is known to have a complex structure. Consequently, the parametric approach of fitting a density function for prediction may suffer a loss of predictive power. To overcome this potential drawback, we use the cumulative probability model (CPM) to predict the LGD distribution. The CPM models the LGD distribution through a transformed variable with a semiparametric structure: predictor effects are modeled parametrically, while the functional form of the transformation is left unspecified. The CPM thus provides more flexibility and simplicity in modeling the LGD distribution. To implement the CPM, we collect a sample of defaulted debts from Moody's Default and Recovery Database. Given this sample, we use an expanding rolling window approach to investigate the out-of-time performance of the CPM and its alternatives. Our results confirm that the CPM is better than its alternatives, in the sense of yielding more accurate LGD distribution predictions.

9.
In this paper, we draw on both the consistent specification testing and the predictive ability testing literatures and propose an integrated conditional moment (ICM) type predictive accuracy test that is similar in spirit to that developed by Bierens (J. Econometr. 20 (1982) 105; Econometrica 58 (1990) 1443) and Bierens and Ploberger (Econometrica 65 (1997) 1129). The test is consistent against generic nonlinear alternatives and is designed for comparing nested models. One important feature of our approach is that the same loss function is used for in-sample estimation and out-of-sample prediction. In this way, we rule out the possibility that the null model can outperform the nesting generic alternative model. It turns out that the limiting distribution of the ICM type test statistic that we propose is a functional of a Gaussian process with a covariance kernel that reflects both the time series structure of the data and the contribution of parameter estimation error. As a consequence, critical values are data dependent and cannot be directly tabulated. One approach in this case is to obtain critical value upper bounds using the approach of Bierens and Ploberger (Econometrica 65 (1997) 1129). Here, we establish the validity of a conditional p-value method for constructing critical values. The method is similar in spirit to that proposed by Hansen (Econometrica 64 (1996) 413) and Inoue (Econometric Theory 17 (2001) 156), although we additionally account for parameter estimation error. In a series of Monte Carlo experiments, the finite sample properties of three variants of the predictive accuracy test are examined. Our findings suggest that all three variants of the test have good finite sample properties when quadratic loss is specified, even for samples as small as 600 observations. However, non-quadratic loss functions such as linex loss require larger sample sizes (of 1000 observations or more) in order to ensure reasonable finite sample performance.

10.
Forecast evaluations aim to choose an accurate forecast for decision making by using loss functions. However, different loss functions often generate different rankings of forecasts, which complicates the task of comparison. In this paper, we develop statistical tests for comparing the performance of forecasts of expectiles and quantiles of a random variable under consistent loss functions. The test statistics are constructed with the extremal consistent loss functions of Ehm et al. (2016). The null hypothesis of the tests is that a benchmark forecast performs at least as well as a competing one under all extremal consistent loss functions. It can be shown that if such a null holds, the benchmark will also perform at least as well as the competitor under all consistent loss functions. Thus, under the null, the conclusion that the competitor does not outperform the benchmark is unchanged regardless of which consistent loss function is used. We establish asymptotic properties of the proposed test statistics and propose using the re-centered bootstrap to construct their empirical distributions. Through simulations, we show that the proposed test statistics perform reasonably well. We then apply the proposed method to evaluations of several different forecast methods.
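The extremal (elementary) consistent scores of Ehm et al. (2016) for an alpha-quantile can be written as S_theta(x, y) = (1{y < x} - alpha)(1{theta < x} - 1{theta < y}). The sketch below compares an ideal 0.9-quantile forecast with a misspecified median forecast over a grid of theta; if the benchmark's mean score difference is non-positive at every theta, it weakly dominates under all consistent quantile losses. This is an illustration of the dominance idea only; the paper's tests add proper asymptotics and a re-centered bootstrap.

```python
import numpy as np

rng = np.random.default_rng(3)

def extremal_quantile_score(x, y, theta, alpha):
    """Ehm et al. (2016) elementary (extremal) consistent score for the alpha-quantile."""
    return ((y < x).astype(float) - alpha) * ((theta < x).astype(float) - (theta < y).astype(float))

alpha = 0.9
n = 5000
y = rng.normal(size=n)                            # realisations
bench = np.full(n, 1.2816)                        # true N(0,1) 0.9-quantile: ideal forecast
rival = np.full(n, 0.0)                           # misspecified forecast (the median)

thetas = np.linspace(-3.0, 3.0, 61)
diff = np.array([(extremal_quantile_score(bench, y, t, alpha)
                  - extremal_quantile_score(rival, y, t, alpha)).mean()
                 for t in thetas])

# Non-positive difference at every theta (up to sampling noise) means the benchmark
# weakly dominates under *all* consistent loss functions for the 0.9-quantile.
dominates = bool((diff <= 0.01).all())
```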

11.
In this paper, I introduce a simple test for the presence of the data-generating process among several non-nested alternatives. The test is an extension of the classical J test for non-nested regression models. I also provide a bootstrap version of the test that avoids possible size distortions inherited from the J test.
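The classical J test that this paper extends can be sketched as follows: fit the rival model, append its fitted values to the null model's regressors, and t-test the added coefficient. The code below is a generic illustration under a simulated design, not the paper's extension or its bootstrap version.

```python
import numpy as np

rng = np.random.default_rng(4)

def ols(X, y):
    """OLS: coefficients and conventional standard errors."""
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
    return beta, se

n = 500
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)         # rival, non-nested regressor
y = 1.0 + 2.0 * x1 + rng.normal(size=n)    # model M1 (with x1) is the true model

c = np.ones(n)
X1 = np.column_stack([c, x1])
X2 = np.column_stack([c, x2])

def j_stat(X_null, X_alt, y):
    """J test: add the alternative model's fitted values to the null model, t-test them."""
    b_alt, _ = ols(X_alt, y)
    XJ = np.column_stack([X_null, X_alt @ b_alt])
    bJ, seJ = ols(XJ, y)
    return float(bJ[-1] / seJ[-1])

t_m1 = j_stat(X1, X2, y)   # testing the true model: |t| should be moderate
t_m2 = j_stat(X2, X1, y)   # testing the false model: |t| should be large
```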

12.
In this paper it is shown that a convenient score test against non‐nested alternatives can be constructed from the linear combination of the likelihood functions of the competing models. This is essentially a test for the correct specification of the conditional distribution of the variable of interest. Given its characteristics, the proposed test is particularly attractive to check the distributional assumptions in models for discrete data. The usefulness of the test is illustrated with an application to models for recreational boating trips. Copyright © 2001 John Wiley & Sons, Ltd.

13.
The analysis of residence histories and other longitudinal panel data is fraught with methodological problems. Much recent progress has been made in methods of analysis in discrete time. This paper extends the development of empirically tractable mixed continuous-time stochastic models. Analysis of a sample of intra-urban residential histories identifies the effects of tenure type, age of household head, household size and duration of stay on movement probabilities. Surprisingly, no further variation, as represented by a gamma mixing distribution over a hazard rate parameter, can be identified.

14.
The present paper introduces a methodology for the semiparametric or non‐parametric two‐sample equivalence problem when the effects are specified by statistical functionals. The mean relative risk functional of two populations is given by the average of the time‐dependent risk. This functional is a meaningful non‐parametric quantity that is invariant under strictly monotone transformations of the data. In the case of proportional hazards models, the functional determines exactly the proportional hazard risk factor. It is shown that an equivalence test of the two‐sample Savage rank type is appropriate for this functional. Under proportional hazards, this test can be carried out as an exact level-α test. It also works quite well under other semiparametric models. Similar results are presented for a Wilcoxon rank‐sum test for equivalence based on the Mann–Whitney functional, which is given by the relative treatment effect.

15.
In this paper, we build a generalized two-sector Kaleckian growth model and explore the dynamics towards long-run positions. The model incorporates conflicting claims of labour and firms over income distribution and endogenous labour-saving technical progress. Adopting a stock-flow consistent framework, our simulation experiments yield the following results. First, the ‘paradox of thrift’ and the ‘paradox of costs’ hold, meaning that lower saving rates generate higher growth rates while higher real wages generate higher profit rates, but the magnitude of the impact depends on the initial status of income distribution and monetary policy. Second, changes in autonomous labour-saving innovations might explain the phenomenon of the ‘New Economy’ of the second half of the 1990s within an alternative framework. Our simulations with a two-sector model retrieve the analytical results achieved with a one-sector Kaleckian model, with the addition of path dependence.

16.
Provision of most public goods (e.g., health care, libraries, education, police, fire protection, utilities) can be characterized by a two-stage production process. In the first stage, basic inputs (e.g., labor and capital) are used to generate service potential (e.g., opening hours, materials), which is then, in the second stage, transformed into observed outputs (e.g., school outcomes, library circulation, crimes solved). As final outputs are also affected by demand-side factors, conflating both production stages is likely to lead to biased inferences about public productive (in)efficiency and its determinants. Hence, this paper uses a specially tailored, fully non-parametric efficiency model allowing for both outlying observations and heterogeneity to analyse efficient public good provision in stage one only. We employ a dataset comprising all 290 Flemish public libraries. Our findings suggest that the ideological stance of the local government, the wealth and density of the local population, and the source of library funding (i.e., local funding versus intergovernmental transfers) strongly affect library productive efficiency.

17.
The stratified logrank test can be used to compare the survival distributions of several groups of patients while adjusting for the effect of a discrete variable that may be predictive of the survival outcome. In practice, this discrete variable may be missing for some patients. An inverse-probability-weighted version of the stratified logrank statistic is introduced to tackle this issue. Its asymptotic distribution is derived under the null hypothesis of equality of the survival distributions. A simulation study is conducted to assess the behavior of the proposed test statistic in finite samples. An analysis of a medical dataset illustrates the methodology.

18.
In this paper we estimate a dynamic structural model of employment at the firm level. Our dataset consists of a balanced panel of 2790 Greek manufacturing firms. The empirical evidence from this dataset highlights three important stylized facts: (a) there are periods in which firms decide not to change their labour input, (b) there are periods of large employment changes (the lumpy nature of labour adjustment) and (c) employment spikes are typically followed by periods of smooth and low employment growth. Following Cooper and Haltiwanger [Cooper, R.W. and Haltiwanger, J. “On the Nature of Capital Adjustment Costs”, Review of Economic Studies, 2006; 73(3); 611–633], we consider a dynamic discrete choice model with a general specification of adjustment costs including convex, non-convex and “disruption of production” components. We use a method of simulated moments procedure to estimate the structural parameters. Our results indicate considerable fixed costs in Greek employment adjustment.

19.
As the volume and complexity of data continue to grow, more attention is being focused on solving so-called big data problems. One field where this focus is pertinent is credit card fraud detection. Model selection approaches can identify key predictors for preventing fraud. Stagewise selection is a classic model selection technique that has experienced revitalized interest due to its computational simplicity and flexibility. Over a sequence of simple learning steps, stagewise techniques build a sequence of candidate models that is less greedy than the stepwise approach. This paper introduces a new stochastic stagewise technique that integrates a sub-sampling approach into the stagewise framework, yielding a simple tool for model selection when working with big data. Simulation studies demonstrate that the proposed technique offers a reasonable trade-off between computational cost and predictive performance. We apply the proposed approach to synthetic credit card fraud data to demonstrate the technique’s application.
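A minimal sketch of folding sub-sampling into forward stagewise selection (an illustrative reconstruction, not the authors' exact algorithm): at each step a random sub-sample of rows is used to pick the predictor most correlated with the current residual, and that coordinate receives a tiny update.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy data: 2 informative predictors out of 20.
n, p = 1000, 20
X = rng.normal(size=(n, p))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(size=n)

def stochastic_stagewise(X, y, eps=0.05, steps=400, subsample=0.3, rng=rng):
    """Forward stagewise with a random sub-sample of rows per step (illustrative)."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y.astype(float).copy()
    m = int(subsample * n)
    for _ in range(steps):
        idx = rng.choice(n, size=m, replace=False)   # sub-sample the rows
        corr = X[idx].T @ r[idx]                     # correlations on the sub-sample
        j = int(np.argmax(np.abs(corr)))             # most correlated predictor
        delta = eps * np.sign(corr[j])
        beta[j] += delta                             # tiny step on that coordinate
        r -= delta * X[:, j]                         # residual updated on full data
    return beta

beta = stochastic_stagewise(X, y)
selected = set(np.flatnonzero(np.abs(beta) > 0.5))
```

Working on a sub-sample per step cuts the per-iteration cost from O(np) to O(mp), which is the computational trade-off the abstract refers to.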

20.
In a binary choice panel data model with individual effects and two time periods, Manski proposed the maximum score estimator based on a discontinuous objective function and proved its consistency under weak distributional assumptions. The rate of convergence is low (N^(1/3)) and its limit distribution cannot easily be used for statistical inference. In this paper we apply the idea of Horowitz to smooth Manski's objective function. The resulting smoothed maximum score estimator is consistent and asymptotically normal with a rate of convergence that can be made arbitrarily close to N^(1/2), depending on the strength of the smoothness assumptions imposed. The estimator can be applied to panels with more than two time periods and to unbalanced panels. We apply the estimator to analyze the labour force participation of married Dutch females.
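A hedged sketch of the two-period case: among "switchers" the individual effect drops out, Manski's objective sums sgn(Δy)·1{Δx'b ≥ 0}, and the Horowitz-style smoothing replaces the indicator with a smooth kernel CDF. The code below simulates such a panel and grid-searches the smoothed objective over directions on the unit circle (β is identified only up to scale); the logistic kernel, the bandwidth, and the crude grid search are illustrative choices, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(6)

# Two-period binary panel: y_it = 1{x_it'beta + alpha_i + e_it > 0}, logistic errors.
n = 4000
alpha = rng.normal(size=n)                       # individual effects
x = rng.normal(size=(n, 2, 2))                   # (unit, period, regressor)
beta_true = np.array([1.0, 2.0])
e = rng.logistic(size=(n, 2))
yy = ((x @ beta_true + alpha[:, None] + e) > 0).astype(float)

# Keep switchers only and work with first differences: the fixed effect drops out.
sw = yy[:, 0] != yy[:, 1]
dy = yy[sw, 1] - yy[sw, 0]                       # +1 or -1
dx = x[sw, 1, :] - x[sw, 0, :]

def smoothed_score(b, h=0.3):
    """Smoothed maximum score objective: indicator replaced by a logistic kernel CDF."""
    return np.mean(dy / (1.0 + np.exp(-(dx @ b) / h)))

# beta is identified up to scale: grid-search over directions on the unit circle.
angles = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
scores = [smoothed_score(np.array([np.cos(t), np.sin(t)])) for t in angles]
t_hat = angles[int(np.argmax(scores))]
b_hat = np.array([np.cos(t_hat), np.sin(t_hat)])
cos_sim = float(b_hat @ beta_true / np.linalg.norm(beta_true))
```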
