Similar Documents
20 similar documents found.
1.
This paper proposes an estimation strategy that exploits recent non-parametric panel data methods allowing for a multifactor error structure, and extends a recently proposed data-driven model-selection procedure, which has its roots in cross-validation and aims to test whether two competing approximate models are equivalent in terms of their expected true error. We extend this procedure to a large panel data framework by using moving block bootstrap resampling techniques in order to preserve cross-sectional dependence in the bootstrapped samples. Such an estimation strategy is illustrated by revisiting an analysis of international technology diffusion. The model-selection procedures clearly point to the superiority of a fully non-parametric (non-additive) specification over parametric and even semi-parametric (additive) specifications. This work also refines previous results by showing threshold effects, non-linearities, and interactions that are obscured in parametric specifications and which have relevant implications for policy.
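The moving block bootstrap step can be sketched as follows. This is a minimal illustration, assuming a simulated T-by-N panel and an arbitrary block length; it only shows how resampling whole time blocks keeps each cross-section intact, not the paper's full model-selection procedure.

```python
# A minimal sketch of moving-block bootstrap resampling for a (T x N) panel,
# keeping entire cross-sections together so that cross-sectional dependence
# is preserved in each bootstrap sample. Panel dimensions and block length
# below are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)
T, N, l = 40, 10, 5                               # time periods, units, block length
Y = rng.standard_normal((T, N)).cumsum(axis=0)    # fake panel data

def moving_block_bootstrap(panel, block_len, rng):
    T = panel.shape[0]
    n_blocks = int(np.ceil(T / block_len))
    starts = rng.integers(0, T - block_len + 1, size=n_blocks)
    idx = np.concatenate([np.arange(s, s + block_len) for s in starts])[:T]
    # Resampling whole time blocks keeps each cross-section (row) intact.
    return panel[idx, :]

Y_star = moving_block_bootstrap(Y, l, rng)
print(Y_star.shape)    # (40, 10): same panel dimensions as the original
```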

2.
We develop a Bayesian semi-parametric approach to the instrumental variable problem. We assume linear structural and reduced form equations, but model the error distributions non-parametrically. A Dirichlet process prior is used for the joint distribution of structural and instrumental variable equations errors. Our implementation of the Dirichlet process prior uses a normal distribution as a base model. It can therefore be interpreted as modeling the unknown joint distribution with a mixture of normal distributions with a variable number of mixture components. We demonstrate that this procedure is both feasible and sensible using actual and simulated data. Sampling experiments compare inferences from the non-parametric Bayesian procedure with those based on procedures from the recent literature on weak instrument asymptotics. When errors are non-normal, our procedure is more efficient than standard Bayesian or classical methods.
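As a rough analogue of the "mixture of normals with a variable number of components" interpretation, one can fit a truncated Dirichlet-process Gaussian mixture to simulated error pairs with scikit-learn's variational implementation. This is only a hedged illustration of the error-distribution modeling step, not the authors' Gibbs sampler for the full instrumental-variable model; all data and settings below are made up.

```python
# Truncated Dirichlet-process Gaussian mixture (variational inference) fitted
# to simulated, non-normal joint errors from a structural and a reduced-form
# equation. Illustrative only.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)
n = 500
z = rng.integers(0, 2, n)
errors = np.where(
    z[:, None] == 0,
    rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], n),
    rng.multivariate_normal([2, 2], [[0.3, 0.1], [0.1, 0.3]], n),
)

dpm = BayesianGaussianMixture(
    n_components=10,                                   # truncation level
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    random_state=0,
).fit(errors)
print("effective components:", int(np.sum(dpm.weights_ > 0.01)))
```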

3.
L. E. Storm, Metrika, 1962, 5(1): 158–183
Methods of nested analysis-of-variance procedures are discussed and a formalized three-way nested model is outlined. A summary of the statistical tests for the assumptions relevant to the model is given, together with an outline of the numerical methods for calculating the sums of squares, the degrees of freedom, the mean squares, the tests of significance, and the estimates of variance components. Several methods are given for obtaining confidence limits for variance components. Procedures are presented for utilizing the variance components to estimate process and/or control limits and to obtain the most efficient or optimum sampling scheme. Some differences between the nested analysis of variance model and the crossed analysis of variance model are explained.
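A minimal numeric sketch of the ANOVA (method-of-moments) variance-component logic is given below for a balanced two-stage nested design; the three-way nested model in the paper follows the same expected-mean-squares reasoning with one more nesting stage. All data and dimensions are simulated assumptions.

```python
# Variance-component estimation in a balanced two-stage nested design:
# compute mean squares at each stage and solve the expected-mean-squares
# equations. Simulated data; not the paper's three-way worked example.
import numpy as np

rng = np.random.default_rng(2)
a, b, n = 5, 4, 3                       # A levels, B-within-A levels, replicates
sa, sb, se = 2.0, 1.0, 0.5              # true variance components
y = (rng.normal(0, np.sqrt(sa), (a, 1, 1))
     + rng.normal(0, np.sqrt(sb), (a, b, 1))
     + rng.normal(0, np.sqrt(se), (a, b, n)))

grand = y.mean()
mean_a = y.mean(axis=(1, 2))            # means at each nesting level
mean_ab = y.mean(axis=2)

ms_a = b * n * np.sum((mean_a - grand) ** 2) / (a - 1)
ms_b = n * np.sum((mean_ab - mean_a[:, None]) ** 2) / (a * (b - 1))
ms_e = np.sum((y - mean_ab[:, :, None]) ** 2) / (a * b * (n - 1))

print("sigma2_error =", ms_e)
print("sigma2_B(A)  =", (ms_b - ms_e) / n)
print("sigma2_A     =", (ms_a - ms_b) / (b * n))
```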

4.
This paper extends the links between the non-parametric data envelopment analysis (DEA) models for efficiency analysis, duality theory and multi-criteria decision making models for the linear and non-linear case. By drawing on the properties of a partial Lagrangean relaxation, a correspondence is shown between the CCR, BCC and free disposable hull (FDH) models in DEA and the MCDM model. One of the implications is a characterization that verifies the sufficiency of the weighted scalarizing function, even for the non-convex case FDH. A linearization of FDH is presented along with dual interpretations. Thus, an input/output-oriented model is shown to be equivalent to a maximization of the weighted input/output, subject to production space feasibility. The discussion extends to the recent developments: the free replicability hull (FRH), the new elementary replicability hull (ERH) and the non-convex models by Petersen (1990). FRH is shown to be a true mixed integer program, whereas the latter can be characterized as the CCR and BCC models.
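For concreteness, the input-oriented CCR envelopment problem mentioned above can be written as a small linear program and solved directly. This is a generic textbook formulation under constant returns to scale, not the paper's MCDM reformulation; the data are invented, and adding the convexity constraint sum(lambda) = 1 would give the BCC variant.

```python
# Input-oriented CCR envelopment LP: minimise theta subject to
# X'lambda <= theta * x_o, Y'lambda >= y_o, lambda >= 0.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 3.0], [5.0, 2.0]])   # inputs  (n x m)
Y = np.array([[1.0], [1.0], [1.5], [2.0]])                       # outputs (n x s)
n, m = X.shape
s = Y.shape[1]

def ccr_efficiency(o):
    c = np.r_[1.0, np.zeros(n)]                    # minimise theta
    A_in = np.hstack([-X[o].reshape(-1, 1), X.T])  # X'lambda - theta*x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])    # -Y'lambda <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n))
    return res.x[0]

print([round(ccr_efficiency(o), 3) for o in range(n)])   # efficiency scores in (0, 1]
```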

5.
This paper proposes a new rank-based test of extreme-value dependence. The procedure is based on the first three moments of the bivariate probability integral transform of the underlying copula. It is seen that the test statistic is asymptotically normal, and its finite- and large-sample variances are calculated explicitly. Consistent plug-in estimators for the variance are proposed, and a fast algorithm for their computation is given. Although it is shown via counterexamples that no test based on the probability integral transform can be consistent, the proposed procedure achieves good power against common alternatives, both in finite samples and asymptotically.

6.
A broad class of generalized linear mixed models, e.g. variance components models for binary data, percentages or count data, will be introduced by incorporating additional random effects into the linear predictor of a generalized linear model structure. Parameters are estimated by a combination of quasi-likelihood and iterated MINQUE (minimum norm quadratic unbiased estimation), the latter being numerically equivalent to REML (restricted, or residual, maximum likelihood). First, conditional upon the additional random effects, observations on a working variable and weights are derived by quasi-likelihood, using iteratively re-weighted least squares. Second, a linear mixed model is fitted to the working variable, employing the weights for the residual error terms, by iterated MINQUE. The latter may be regarded as a least squares procedure applied to squared and product terms of error contrasts derived from the working variable. No full distributional assumptions are needed for estimation. The model may be fitted with standard, readily available software for weighted regression and REML.
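The quasi-likelihood step can be sketched for the binary logit case as follows: given current estimates, form the working variable and iterative weights that would then be passed to the linear mixed-model / REML stage. As a simplification, the mixed-model fit is replaced here by a plain weighted least-squares update, so only the first of the two stages is shown; the data are simulated.

```python
# One IRLS loop for a binary-response GLM (logit link): working variable
# z = eta + (y - mu)/w and weights w = mu(1 - mu), followed by a weighted
# least-squares update. In the two-stage scheme above, a linear mixed model
# would be fitted to z with these weights instead of this fixed-effects fit.
import numpy as np

rng = np.random.default_rng(3)
n = 200
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([-0.5, 1.0])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

beta = np.zeros(2)
for _ in range(10):                      # iteratively re-weighted least squares
    eta = X @ beta
    mu = 1 / (1 + np.exp(-eta))
    w = mu * (1 - mu)                    # binomial variance function, logit link
    z = eta + (y - mu) / w               # working variable
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, X.T @ (w * z))
print(beta)
```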

7.
Simultaneous optimal estimation in linear mixed models is considered. A necessary and sufficient condition is presented for the least squares estimator of the fixed effects and the analysis of variance estimator of the variance components to be of uniformly minimum variance simultaneously in a general variance components model: the matrix obtained by orthogonally projecting the covariance matrix onto the orthogonal complement of the column space of the design matrix is symmetric, each of its eigenvalues is a linear combination of the variance components, and the number of its distinct eigenvalues equals the number of variance components. Under this condition, uniformly optimal unbiased tests and uniformly most accurate unbiased confidence intervals are constructed for the parameters of interest. A necessary and sufficient condition is also given for the equivalence of several common estimators of variance components. Two examples of their application are given.

8.
In this paper we develop wavelet methods for detecting and estimating jumps and cusps in the mean function of a non-parametric regression model. An important characteristic of the model considered here is that it allows for conditional heteroscedastic variance, a feature frequently encountered with economic and financial data. Wavelet analysis of change-points in this model has been considered in a limited way in a recent study by Chen et al. (2008) with a focus on jumps only. One problem with the aforementioned paper is that the test statistic developed there has an extreme value null limit distribution. The results of other studies have shown that the rate of convergence to the extreme value distribution is usually very slow, and critical values derived from this distribution tend to be much larger than the true ones. Here, we develop a new test and show that the test statistic has a convenient null limit N(0,1) distribution. This feature gives the proposed approach an appealing advantage over the existing approach. Another attractive feature of our results is that the asymptotic theory developed here holds for both jumps and cusps. Implementation of the proposed method for multiple jumps and cusps is also examined. The results from a simulation study show that the new test has excellent power and the estimators developed also yield very accurate estimates of the positions of the discontinuities.
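The flavour of Haar-type jump detection can be illustrated with a deliberately simplified scan of local mean differences. This is only a caricature of the idea, not the paper's standardised statistic (which has an N(0,1) null limit and also covers cusps and conditional heteroscedasticity); the regression function and noise level below are assumptions.

```python
# Haar-style jump scan: standardised difference of means over adjacent
# windows, with the largest absolute value used as a crude jump locator.
import numpy as np

rng = np.random.default_rng(4)
n, h = 1000, 25
x = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * x) + 1.5 * (x > 0.6) + 0.2 * rng.standard_normal(n)

sigma = np.std(np.diff(y)) / np.sqrt(2)          # rough noise-level estimate
stat = np.array([
    y[t:t + h].mean() - y[t - h:t].mean() for t in range(h, n - h)
]) * np.sqrt(h / 2) / sigma

t_hat = np.argmax(np.abs(stat)) + h
print("estimated jump location:", x[t_hat])      # close to the true break at 0.6
```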

9.
Interval estimation is an important objective of most experimental and observational studies. Knowing at the design stage of the study how wide the confidence interval (CI) is expected to be and where its limits are expected to fall can be very informative. The asymptotic distribution of the confidence limits can also be used to answer complex questions of power analysis by computing power as the probability that a CI will exclude a given parameter value. The CI-based approach to power and methods of calculating the expected size and location of asymptotic CIs as a measure of expected precision of estimation are reviewed in the present paper. The theory is illustrated with commonly used estimators, including unadjusted risk differences, odds ratios and rate ratios, as well as more complex estimators based on multivariable linear, logistic and Cox regression models. It is noted that in applications with the non-linear models, some care must be exercised when selecting the appropriate variance expression. In particular, the well-known 'short-cut' variance formula for the Cox model can be very inaccurate under unequal allocation of subjects to comparison groups. A more accurate expression is derived analytically and validated in simulations. Applications with 'exact' CIs are also considered.
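The CI-based view of power can be written down directly for an asymptotically normal estimator: power is the probability that the (1 − alpha) Wald interval excludes the null value. The effect size and standard error in the example below are illustrative assumptions.

```python
# Power as the probability that a Wald CI excludes the null value, plus the
# expected CI half-width, for an asymptotically normal estimator.
from scipy.stats import norm

def ci_power(theta, theta0, se, alpha=0.05):
    z = norm.ppf(1 - alpha / 2)
    d = (theta - theta0) / se
    # The CI excludes theta0 when the standardised estimate falls beyond +/- z.
    return norm.cdf(-z + d) + norm.cdf(-z - d)

# Example: log odds ratio of 0.5 estimated with standard error 0.2.
print(round(ci_power(0.5, 0.0, 0.2), 3))          # approx. 0.70
print(round(norm.ppf(0.975) * 0.2, 3))            # expected CI half-width
```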

10.
Principal components, analysis of variance and data structure
The relation between principal components and analysis of variance is examined. It is shown that the model underlying the extended analysis of variance developed by Gollob and Mandel is useful also as a model for principal component analysis. The elucidation of structure of two-factor data using the new analysis of variance model is illustrated by an example taken from thermodynamics.
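The link can be sketched numerically: after removing the grand mean and the additive row and column effects from a two-factor table, a singular value decomposition of the interaction residuals gives the Gollob/Mandel-type multiplicative terms. The table below is simulated, not the thermodynamics example from the paper.

```python
# Extended ANOVA sketch: additive two-way fit, then principal components
# (SVD) of the interaction residuals.
import numpy as np

rng = np.random.default_rng(5)
I, J = 6, 5
row = rng.normal(0, 1, I)[:, None]
col = rng.normal(0, 1, J)[None, :]
table = 10 + row + col + 0.8 * row * col + 0.1 * rng.standard_normal((I, J))

# Additive ANOVA part: grand mean, row effects, column effects.
resid = (table - table.mean()
         - (table.mean(axis=1, keepdims=True) - table.mean())
         - (table.mean(axis=0, keepdims=True) - table.mean()))

# Principal components of the interaction residuals.
u, s, vt = np.linalg.svd(resid)
print("share of interaction SS in first component:",
      round(s[0] ** 2 / np.sum(s ** 2), 3))
```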

11.
In this paper a very natural generalization of the two-way analysis of variance rank statistic of Friedman is given. The general distribution-free test procedure based on this statistic for the effect of J treatments in a random block design can be applied in general two-way layouts without interactions and with different numbers of continuous observations per cell, provided the design scheme is connected. The asymptotic distribution of the test statistic under the null hypothesis is derived. A comparison with the method of m rankings of Benard and van Elteren is made. The disadvantage of Benard and van Elteren's test procedure is that the number of observations per block influences the statistic twice: firstly by the number itself, as it should, and secondly by the level of the ranks, which will differ between blocks if the numbers of observations per block are different. The proposed test statistic is not sensitive to differences in the levels of the ranks caused by the different numbers of observations per block. The test is derived by considering the Kruskal-Wallis statistics per block.
Finally, the results of simulation experiments are given. The simulation is carried out for three designs and a number of normal location alternatives and gives some information about the power of the suggested test procedure. A comparison is made with Benard and van Elteren's test and with the classical analysis of variance technique. For some simple orthogonal designs the exact null distributions of Benard and van Elteren's test and the proposed test are compared.
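A hedged numerical sketch of the construction: for a balanced one-observation-per-cell layout the classical Friedman statistic applies, while for unequal cell counts one can compute a Kruskal-Wallis statistic per block and combine them. The simple sum and chi-square reference used below are only an illustration of the idea, not the paper's exact standardised statistic; all data are simulated.

```python
# Balanced case: classical Friedman test. Unbalanced case: per-block
# Kruskal-Wallis statistics combined across blocks.
import numpy as np
from scipy.stats import friedmanchisquare, kruskal, chi2

rng = np.random.default_rng(6)
J, B = 3, 8                                       # treatments, blocks

# Balanced layout, one observation per cell.
balanced = rng.standard_normal((B, J)) + np.array([0.0, 0.3, 0.6])
print(friedmanchisquare(*balanced.T))

# Unbalanced layout: 2-4 observations per cell within each block.
stat = 0.0
for b in range(B):
    samples = [rng.standard_normal(rng.integers(2, 5)) + 0.4 * j
               for j in range(J)]
    stat += kruskal(*samples).statistic           # per-block Kruskal-Wallis
print("combined statistic:", round(stat, 2),
      "approx p-value:", round(chi2.sf(stat, df=B * (J - 1)), 3))
```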

12.
For a balanced two-way mixed model, the maximum likelihood (ML) and restricted ML (REML) estimators of the variance components were obtained and compared under the non-negativity requirements of the variance components by Lee and Kapadia (1984). In this note, for a mixed (random blocks) incomplete block model, explicit forms for the REML estimators of variance components are obtained. They are always non-negative and have smaller mean squared error (MSE) than the analysis of variance (AOV) estimators. The asymptotic sampling variances of the ML estimators and the REML estimators are compared and the balanced incomplete block design (BIBD) is considered as a special case. The ML estimators are shown to have smaller asymptotic variances than the REML estimators, but a numerical result in the randomized complete block design (RCBD) demonstrated that the performances of the REML and ML estimators are not much different in the MSE sense.
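For orientation, REML estimates of the two variance components (random blocks and residual error) in an incomplete-block layout can be obtained numerically with statsmodels. This is only a loose illustration of the quantities involved, not the closed-form estimators derived in the note; the data frame and the "incomplete" design below are simulated assumptions.

```python
# REML fit of a random-blocks model on a simulated incomplete-block data set.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
blocks, treats = 10, 4
rows = []
for b in range(blocks):
    present = rng.choice(treats, size=3, replace=False)   # 3 of 4 treatments per block
    u_b = rng.normal(0, 1.0)                               # random block effect
    for t in present:
        rows.append({"block": b, "treat": t,
                     "y": 5 + 0.5 * t + u_b + rng.normal(0, 0.7)})
df = pd.DataFrame(rows)

model = sm.MixedLM.from_formula("y ~ C(treat)", groups="block", data=df)
fit = model.fit(reml=True)
print("block variance:   ", float(fit.cov_re.iloc[0, 0]))
print("residual variance:", fit.scale)
```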

13.
A single outlier in a regression model can be detected by the effect of its deletion on the residual sum of squares. An equivalent procedure is the simple intervention in which an extra parameter is added for the mean of the observation in question. Similarly, for unobserved components or structural time-series models, the effect of elaborations of the model on inferences can be investigated by the use of interventions involving a single parameter, such as trend or level changes. Because such time-series models contain more than one variance, the effect of the intervention is measured by the change in individual variances. We examine the effect on the estimated parameters of moving various kinds of intervention along the series. The heavy computational burden involved is overcome by the use of score statistics combined with recent developments in filtering and smoothing. Interpretation of the resulting time-series plots of diagnostics is aided by simulation envelopes. Our procedures, illustrated with four examples, permit keen insights into the fragility of inferences to specific shocks, such as outliers and level breaks. Although the emphasis is mostly on parameter estimation, forecasts are also considered. Possible extensions include seasonal adjustment and detrending of series.
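The deletion/intervention equivalence mentioned in the first two sentences can be verified directly in ordinary regression: adding an indicator ("intervention") variable for one observation is equivalent to deleting it, and the indicator's t-statistic equals the externally studentised residual. The score-based interventions moved along a structural time series in the paper extend this idea to models with several variances; the example below is only the regression special case on simulated data.

```python
# Mean-shift intervention for a single observation in OLS, compared with the
# externally studentised residual from the base fit.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 50
x = rng.standard_normal(n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)
y[20] += 4.0                                   # plant a single outlier

X = sm.add_constant(x)
base = sm.OLS(y, X).fit()

d = np.zeros(n); d[20] = 1.0                   # intervention dummy for observation 20
aug = sm.OLS(y, np.column_stack([X, d])).fit()

print("t-stat of intervention dummy:", round(aug.tvalues[-1], 3))
print("externally studentised resid:",
      round(base.get_influence().resid_studentized_external[20], 3))
```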

14.
Logistic Regression, a review
A review is given of the development of logistic regression as a multi-purpose statistical tool.
A historical introduction shows several lines of development culminating in the unifying paper of Cox (1966), in which theory developed in the field of bio-assay is shown to be applicable to designs such as discriminant analysis and the case-control study. A review is given of several designs all leading to the same analysis. The link is made with the epidemiological literature.
Several optimization criteria that can be used in the case of more observations per cell are discussed, namely maximum likelihood, minimum chi-square and weighted regression on the observed logits. Recent literature on the goodness-of-fit problem is reviewed and, finally, comments are made about the non-parametric approach to logistic regression, which is still in rapid development.

15.
A procedure to test hypotheses about the population variance of a fuzzy random variable is analyzed. The procedure is based on the theory of UH-statistics. The variance is defined in terms of a general metric to quantify the variability of the fuzzy values about its (fuzzy) mean. An asymptotic one-sample test in a wide setting is developed and a bootstrap test, which is more suitable for small and moderate sample sizes, is also studied. Moreover, the power function of the asymptotic procedure through local alternatives is analyzed. Some simulations showing the empirical behavior and consistency of both tests are carried out. Finally, some illustrative examples of the practical application of the proposed tests are presented.

16.
Volatility forecasts aim to measure future risk and they are key inputs for financial analysis. In this study, we forecast the realized variance, as an observable measure of volatility, for several major international stock market indices and account for the different predictive information present in jump, continuous, and option-implied variance components. We allow for volatility spillovers between different stock markets by using a multivariate modeling approach. We use heterogeneous autoregressive (HAR)-type models to obtain the forecasts. Based on an out-of-sample forecast study, we show that: (i) including option-implied variances in the HAR model substantially improves the forecast accuracy, (ii) lasso-based lag selection methods do not outperform the parsimonious day-week-month lag structure of the HAR model, and (iii) cross-market spillover effects embedded in the multivariate HAR model have long-term forecasting power.
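The univariate baseline HAR regression is easy to write down: tomorrow's realized variance is regressed on the daily, weekly and monthly averages of past realized variance. The option-implied and multivariate spillover extensions discussed above would simply add further regressors; the series below is simulated, not market data.

```python
# Day-week-month HAR regression on a simulated realized-variance series.
import numpy as np

rng = np.random.default_rng(9)
T = 1000
rv = np.convolve(rng.gamma(2, 0.5, T), np.ones(5) / 5, mode="same")   # persistent positive series

rows, target = [], []
for t in range(22, T):
    rows.append([1.0, rv[t - 1], rv[t - 5:t].mean(), rv[t - 22:t].mean()])
    target.append(rv[t])
Xh, yh = np.array(rows), np.array(target)

beta, *_ = np.linalg.lstsq(Xh, yh, rcond=None)
print("const, daily, weekly, monthly coefficients:", np.round(beta, 3))
```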

17.
Strike proneness is analyzed through the industrial relations system conceptualization. The actors, rules and ideology components of 60 industrial relations systems, of which 15 exhibited a low propensity to strike and 45 a high propensity to strike, are compared via a discriminant analysis procedure. Strike-vulnerable establishments, compared with harmonious units, tend to include more strikers and have a more complicated structure of labour representation. In highly strike-prone organizations, negotiations are usually handled by representatives equipped with limited authority. These organizations tend to be less prepared, in terms of rules, for a strike situation, and their ideology is less critical of promoting union leaders to managerial positions and less favourable towards opening recruitment to out-of-plant competition. The comparison of the technological, economic and power constraints of the two strike-propensity groups indicates that the conflictual units, compared with organizations with a low strike propensity, tend to be larger units and service organizations; they are mostly budget controlled and publicly owned. A comprehensive industrial relations system analysis indicates that the internal components of the industrial relations system take priority in discriminating between the two strike groups. Theoretical and substantive conclusions close the analysis.

18.
The risk-adjusted discount rate method for evaluating capital investment projects applies the risk-adjusted rate to equilibrium as well as disequilibrium expected returns, leading to biased NPV calculations. This paper uses the CAPM framework, and suggests a procedure for applying the risk-adjusted rate without causing a bias. The procedure is shown to result in NPVs identical to those obtained by the certainty equivalent approach. A comparison with a previously suggested procedure is also provided.
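A stylised one-period numerical illustration of the bias: the certainty-equivalent value is computed from the CAPM price of risk, the implied risk-adjusted rate is backed out, and that same rate is then (incorrectly) applied to a cash flow containing an extra zero-beta, "disequilibrium" component. All numbers are assumptions for illustration and this is not the paper's proposed procedure.

```python
# Certainty-equivalent valuation versus a naively applied risk-adjusted rate.
rf, Erm, var_m = 0.03, 0.10, 0.04
lam = (Erm - rf) / var_m                 # CAPM market price of risk

E_cf, cov_cf_m, outlay = 120.0, 8.0, 100.0
ce_value = (E_cf - lam * cov_cf_m) / (1 + rf)        # certainty-equivalent value
k = E_cf / ce_value - 1                              # implied risk-adjusted rate
print("CE value %.2f, NPV %.2f, implied rate %.3f" % (ce_value, ce_value - outlay, k))

# Add an expected "disequilibrium" component with zero market covariance.
extra = 10.0
correct = (E_cf + extra - lam * cov_cf_m) / (1 + rf)   # CE discounts it at the risk-free rate
naive = (E_cf + extra) / (1 + k)                       # same risk-adjusted rate applied to everything
print("correct value %.2f vs naive risk-adjusted value %.2f" % (correct, naive))
```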

19.
D. Gebhardt, Metrika, 1981, 28(1): 13–21
Part A of this paper describes a statistical test procedure for examining the hitting accuracy of missiles by the evaluation of hit patterns. The procedure is illustrated by a numerical example which is supplemented by a sensitivity analysis. Part B presents the theoretical background of the test procedure. In this connection approximate methods are sometimes employed, mainly to simplify practical applications. Special emphasis is placed on an examination of the size of errors in the approximations used, and on an appraisal of the power of the test.

20.
Journal of Econometrics, 2002, 108(1): 133–156
By combining two alternative formulations of a test statistic with two alternative resampling schemes we obtain four different bootstrap tests. In the context of static linear regression models two of these are shown to have serious size and power problems, whereas the remaining two are adequate and in fact equivalent. The equivalence between the two valid implementations is shown to break down in dynamic regression models. Then, the procedure based on the test statistic approach performs best, at least in the AR(1)-model. Similar finite-sample phenomena are illustrated in the ARMA(1,1)-model through a small-scale Monte Carlo study and an empirical example.
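A generic sketch of one such bootstrap-test implementation in the static case: the t-statistic for a zero restriction is recomputed on samples generated from the null-restricted model with resampled residuals, and the observed statistic is compared with the bootstrap distribution. The particular statistic form and resampling scheme are exactly the design choices the paper shows can matter, especially in dynamic models; the data below are simulated.

```python
# Restricted-residual bootstrap of the t-statistic for H0: slope = 0 in a
# static linear regression.
import numpy as np

rng = np.random.default_rng(10)
n, B = 100, 999
x = rng.standard_normal(n)
y = 1.0 + 0.25 * x + rng.standard_normal(n)       # true slope 0.25
X = np.column_stack([np.ones(n), x])

def tstat(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

t_obs = tstat(y, X)
r0 = y - y.mean()                                  # residuals of the null (intercept-only) model
t_boot = np.array([tstat(y.mean() + rng.choice(r0, n, replace=True), X)
                   for _ in range(B)])
pval = np.mean(np.abs(t_boot) >= abs(t_obs))
print("observed t = %.2f, bootstrap p-value = %.3f" % (t_obs, pval))
```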
