Similar Documents
20 similar documents found.
1.
Incomplete knowledge of data usually hinders the establishment of detailed input-output tables. It is for this reason that updating procedures (RAS) as well as short-cut methods have been developed. In this paper the short-cut output multiplier formula by Drake (1976) is compared with the output multiplier implied by the RAS procedure. It turns out that both output multiplier estimates are special cases of a more general class of estimating procedures. An empirical example demonstrates how this generalized procedure should be applied in practice. The reasoning underlying this paper can be extended to other problems of input-output analysis as well.
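Not from the paper, but useful context for what the short-cut formula approximates: a minimal sketch of the standard RAS biproportional updating step, with an invented flow matrix and margin targets.

```python
import numpy as np

def ras_update(A0, row_targets, col_targets, tol=1e-10, max_iter=1000):
    """Biproportionally scale a prior matrix A0 so its row and column
    sums match the given targets (classic RAS balancing)."""
    A = A0.astype(float).copy()
    for _ in range(max_iter):
        r = row_targets / A.sum(axis=1)      # row scaling factors
        A = r[:, None] * A
        s = col_targets / A.sum(axis=0)      # column scaling factors
        A = A * s[None, :]
        if np.allclose(A.sum(axis=1), row_targets, atol=tol):
            break
    return A

# Hypothetical 3-sector flow matrix and new margins (illustrative numbers;
# row and column totals must agree, here both sum to 54).
A0 = np.array([[10.0, 5.0, 2.0],
               [ 4.0, 8.0, 6.0],
               [ 3.0, 2.0, 9.0]])
rows = np.array([20.0, 19.0, 15.0])
cols = np.array([18.0, 16.0, 20.0])
A1 = ras_update(A0, rows, cols)
print(A1.round(3), A1.sum(axis=1), A1.sum(axis=0))
```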

2.
We consider estimation of panel data models with sample selection when the equation of interest contains endogenous explanatory variables as well as unobserved heterogeneity. Assuming that appropriate instruments are available, we propose several tests for selection bias and two estimation procedures that correct for selection in the presence of endogenous regressors. The tests are based on the fixed effects two-stage least squares estimator, thereby permitting arbitrary correlation between unobserved heterogeneity and explanatory variables. The first correction procedure is parametric and is valid under the assumption that the errors in the selection equation are normally distributed. The second procedure estimates the model parameters semiparametrically using series estimators. In the proposed testing and correction procedures, the error terms may be heterogeneously distributed and serially dependent in both selection and primary equations. Because these methods allow for a rather flexible structure of the error variance and do not impose any nonstandard assumptions on the conditional distributions of explanatory variables, they provide a useful alternative to the existing approaches presented in the literature.
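The parametric correction under normality can be illustrated with a minimal cross-sectional Heckman-style sketch on simulated data. This is a simplification, not the paper's procedure: it omits the fixed effects, the endogenous regressors, and the panel structure that the paper's tests and corrections handle.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
w = rng.normal(size=n)                       # selection-equation variable
v = rng.normal(size=n)                       # selection error
u = 0.6 * v + rng.normal(scale=0.8, size=n)  # correlated errors -> selection bias
s = (0.5 + 1.0 * w + v > 0).astype(int)      # selection indicator
y = 1.0 + 2.0 * x + u                        # outcome, observed only when s == 1

# Step 1: probit for selection, then the inverse Mills ratio.
W = sm.add_constant(w)
gamma = sm.Probit(s, W).fit(disp=0).params
idx = W @ gamma
imr = norm.pdf(idx) / norm.cdf(idx)

# Step 2: primary equation on the selected sample with the IMR added;
# the slope on x should be close to the true value 2.
sel = s == 1
X = sm.add_constant(np.column_stack([x[sel], imr[sel]]))
print(sm.OLS(y[sel], X).fit().params)
```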

3.
Instrumental variable estimation in the presence of many moment conditions
This paper develops shrinkage methods for addressing the “many instruments” problem in the context of instrumental variable estimation. It has been observed that instrumental variable estimators may behave poorly if the number of instruments is large. This problem can be addressed by shrinking the influence of a subset of instrumental variables. The procedure can be understood as a two-step process of shrinking some of the OLS coefficient estimates from the regression of the endogenous variables on the instruments, then using the predicted values of the endogenous variables (based on the shrunk coefficient estimates) as the instruments. The shrinkage parameter is chosen to minimize the asymptotic mean square error. The optimal shrinkage parameter has a closed form, which makes it easy to implement. A Monte Carlo study shows that the shrinkage method works well and performs better in many situations than do existing instrument selection procedures.
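A minimal sketch of the two-step idea on simulated data. This is not the paper's estimator: the closed-form MSE-optimal shrinkage parameter is replaced by a grid of illustrative values, and only a designated subset of first-stage coefficients is shrunk.

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 200, 30                        # many instruments relative to n
Z = rng.normal(size=(n, K))
pi = np.r_[0.5, np.zeros(K - 1)]      # only the first instrument is strong
e = rng.normal(size=n)
x = Z @ pi + e                        # endogenous regressor
u = 0.8 * e + rng.normal(scale=0.6, size=n)
y = 1.0 * x + u                       # true beta = 1; OLS of y on x is biased

pi_hat = np.linalg.lstsq(Z, x, rcond=None)[0]   # first-stage OLS

def iv_shrunk(s, keep=1):
    """2SLS with the first-stage coefficients of all but the first `keep`
    instruments scaled by s: s = 1 is ordinary 2SLS, s = 0 drops the
    doubtful instruments entirely."""
    pi_s = pi_hat.copy()
    pi_s[keep:] *= s
    x_hat = Z @ pi_s                  # shrunk fitted values as instrument
    return (x_hat @ y) / (x_hat @ x)

for s in (1.0, 0.5, 0.0):
    print(f"s={s}: beta_hat={iv_shrunk(s):.3f}")
```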

4.
Bootstrap-based methods for bias-correcting the first-stage parameter estimates used in some recently developed bootstrap implementations of co-integration rank tests are investigated. The procedure constructs estimates of the bias in the original parameter estimates by using the average bias in the corresponding parameter estimates taken across a large number of auxiliary bootstrap replications. A number of possible implementations of this procedure are discussed, and concrete recommendations are made on the basis of finite sample performance evaluated by Monte Carlo simulation methods. The results show that bootstrap-based bias-correction methods can significantly improve the small sample performance of the bootstrap co-integration rank tests.
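A minimal sketch of the bias-correction recipe, using a univariate AR(1) coefficient as a stand-in for the paper's first-stage co-integration parameters; the sample size and design are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

def ar1_ols(y):
    """OLS estimate of rho in y_t = rho * y_{t-1} + e_t."""
    return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

def gen_ar1(rho, e):
    y = np.zeros(len(e))
    for t in range(1, len(e)):
        y[t] = rho * y[t - 1] + e[t]
    return y

T, rho_true = 50, 0.9
y = gen_ar1(rho_true, rng.normal(size=T))
rho_hat = ar1_ols(y)                     # downward-biased in small samples

# Auxiliary bootstrap replications: resample centred residuals, rebuild
# series from the fitted model, re-estimate, and average the bias.
res = y[1:] - rho_hat * y[:-1]
res -= res.mean()
B = 999
boot = np.array([ar1_ols(gen_ar1(rho_hat, np.r_[0.0, rng.choice(res, size=T - 1)]))
                 for _ in range(B)])
rho_bc = rho_hat - (boot.mean() - rho_hat)   # bias-corrected estimate
print(rho_hat, rho_bc)
```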

5.
We present a new method for obtaining fast and accurate estimates of the price of an American put option by binomial trees. The method is based on the interpolation of suitable values obtained by modifying the contractual strike. A time-saving procedure allows us to derive all the interpolating data from a single standard backward procedure.
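For context, a minimal sketch of the standard backward-induction pricer on a Cox-Ross-Rubinstein tree, the baseline procedure the authors accelerate by strike interpolation (not their method); the parameter values are illustrative.

```python
import numpy as np

def american_put_crr(S0, K, r, sigma, T, n):
    """Price an American put on a CRR binomial tree with n steps,
    by standard backward induction with an early-exercise check."""
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)    # risk-neutral up probability
    disc = np.exp(-r * dt)
    # Terminal stock prices S0 * u^j * d^(n-j), j = 0..n.
    S = S0 * u ** np.arange(n + 1) * d ** np.arange(n, -1, -1)
    V = np.maximum(K - S, 0.0)            # payoff at maturity
    for i in range(n - 1, -1, -1):
        S = S[1:] / u                     # node prices one step earlier
        V = disc * (p * V[1:] + (1 - p) * V[:-1])
        V = np.maximum(V, K - S)          # exercise if intrinsic value is higher
    return V[0]

print(american_put_crr(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n=500))
```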

6.
A common strategy within the framework of regression models is the selection of variables with possible predictive value, which are incorporated in the regression model. Two recently proposed methods, Breiman's garotte (Breiman, 1995) and Tibshirani's lasso (Tibshirani, 1996), try to combine variable selection and shrinkage. We compare these with pure variable selection and shrinkage procedures. We consider the backward elimination procedure as a typical variable selection procedure and, as an example of a shrinkage procedure, the approach of van Houwelingen and le Cessie (1990). Additionally, an extension of van Houwelingen and le Cessie's approach proposed by Sauerbrei (1999) is considered. The ordinary least squares method is used as a reference.
With the help of a simulation study we compare these approaches with respect to the distribution of the complexity of the selected model, the distribution of the shrinkage factors, selection bias, the bias and variance of the effect estimates, and the average prediction error.
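A minimal sketch contrasting ordinary least squares, the lasso (selection plus shrinkage), and a crude global-shrinkage analogue of the van Houwelingen and le Cessie approach on simulated data. The garotte and backward elimination are omitted, and the global shrinkage factor is fixed for illustration rather than estimated by cross-validation as in the original approach.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LassoCV

rng = np.random.default_rng(3)
n, p = 100, 10
X = rng.normal(size=(n, p))
beta = np.r_[2.0, -1.5, 1.0, np.zeros(p - 3)]   # only 3 real predictors
y = X @ beta + rng.normal(scale=2.0, size=n)

ols = LinearRegression().fit(X, y)
lasso = LassoCV(cv=5).fit(X, y)      # selection and shrinkage in one step

# Global shrinkage: multiply all OLS slopes by a single factor c < 1
# (c fixed here; cross-validated in the shrinkage procedure proper).
c = 0.9
print("OLS:   ", ols.coef_.round(2))
print("Lasso: ", lasso.coef_.round(2))
print("Shrunk:", (c * ols.coef_).round(2))
```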

7.
This article proposes a balancing procedure for the deflation of input–output (I-O) tables from the viewpoint of users. This is a 'subjective' variant of the weighted least squares method (SWLS), already known in the literature. It is argued that SWLS is more flexible than other methods, and it is shown to subsume the first-order approximation of RAS as a special case. Flexibility stems from the facts that (a) users can attach differential 'reliability' weights to first (unbalanced) estimates, depending on the confidence they have in the different parts of their pre-balancing work, (b) unlike RAS, one is not bound to take any row or column total as exogenously given, and (c) additional constraints can be added. The article also describes how SWLS was used to estimate a yearly (1959–2000) series of constant-price I-O tables for the Italian economy.
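A minimal sketch of weighted least squares balancing under linear constraints, the core computation behind such procedures; the table values, reliability weights, and margin constraints are invented.

```python
import numpy as np

def wls_balance(x0, w, G, h):
    """Minimise sum_i w_i * (x_i - x0_i)^2 subject to G @ x = h.
    A larger w_i marks the initial estimate x0_i as more reliable,
    so it is moved less. Closed form via Lagrange multipliers."""
    Winv = np.diag(1.0 / w)
    lam = np.linalg.solve(G @ Winv @ G.T, h - G @ x0)
    return x0 + Winv @ G.T @ lam

# Toy example: balance a flattened 2x2 table to given row/column totals
# (the column-2 constraint is dropped because it is implied by the rest).
x0 = np.array([10.0, 6.0, 5.0, 9.0])    # initial (unbalanced) cells
w = np.array([4.0, 1.0, 1.0, 4.0])      # diagonal cells deemed reliable
G = np.array([[1, 1, 0, 0],             # row 1 total
              [0, 0, 1, 1],             # row 2 total
              [1, 0, 1, 0]])            # column 1 total
h = np.array([17.0, 13.0, 16.0])
x = wls_balance(x0, w, G, h)
print(x, G @ x)
```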

8.
Standard model-based small area estimates perform poorly in the presence of outliers. Sinha & Rao (2009) developed robust frequentist predictors of small area means. In this article, we present a robust Bayesian method to handle outliers in unit-level data by extending the nested error regression model. We consider a finite mixture of normal distributions for the unit-level error to model outliers and produce noninformative Bayes predictors of small area means. Our modelling approach generalises that of Datta & Ghosh (1991) under the normality assumption. Application of our method to a data set suspected to contain an outlier confirms this suspicion, correctly identifies the suspected outlier, and produces robust predictors and posterior standard deviations of the small area means. Evaluation of several procedures, including the M-quantile method of Chambers & Tzavidis (2006), via simulations shows that our proposed method is as good as other procedures in terms of bias, variability, and coverage probability of confidence and credible intervals when there are no outliers. In the presence of outliers, our method and the Sinha–Rao method perform similarly, and both improve over the other methods. This superior performance shows the procedure's dual (Bayes and frequentist) dominance, which should make it attractive to all practitioners of small area estimation, Bayesian and frequentist alike.

9.
The focus of the research described in this paper is an automated forecasting system that combines an objective ARIMA method with the Holt-Winters procedure in a weighted averaging scheme. The system is applied to M-Competition data and the results are compared to the subjective Box-Jenkins forecasts as well as to results from two other automated methods, CAPRI and SIFT. The comparison reveals that, especially for one-step-ahead forecasting, the automated system competes favorably with both automated methods and the individualized Box-Jenkins analysis.
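A minimal sketch of the weighted-averaging step using off-the-shelf statsmodels fits on simulated monthly data. The paper's system selects its ARIMA model objectively and chooses the weights automatically; here the ARIMA order and the 0.5/0.5 weights are simply assumed.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Simulated monthly series with trend and seasonality (illustrative only).
rng = np.random.default_rng(4)
t = np.arange(120)
y = pd.Series(10 + 0.1 * t + 3 * np.sin(2 * np.pi * t / 12)
              + rng.normal(scale=0.5, size=120))

train, test = y[:-12], y[-12:]
f_arima = ARIMA(train, order=(1, 1, 1),
                seasonal_order=(0, 1, 1, 12)).fit().forecast(12)
f_hw = ExponentialSmoothing(train, trend="add", seasonal="add",
                            seasonal_periods=12).fit().forecast(12)

w = 0.5                                  # assumed combination weight
combo = w * np.asarray(f_arima) + (1 - w) * np.asarray(f_hw)
for f, name in [(f_arima, "ARIMA"), (f_hw, "HW"), (combo, "combo")]:
    print(name, np.mean(np.abs(np.asarray(f) - test.values)))   # MAE
```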

10.
Variable selection for additive partially linear models with measurement error is considered. Using the backfitting technique, we first propose a variable selection procedure for the parametric components based on the smoothly clipped absolute deviation (SCAD) penalty, and one-step sparse estimates for the parametric components are also presented. The resulting estimates are asymptotically normal and possess an oracle property. Then, two-stage backfitting estimators are presented for the nonparametric components using the local linear method; the structures of the asymptotic biases and covariances of the proposed estimators are the same as those in the partially linear model with measurement error. The finite sample performance of the proposed procedures is illustrated by simulation studies.
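A minimal sketch of the SCAD ingredient itself: the penalty of Fan and Li (2001) with the conventional a = 3.7, and its univariate thresholding rule under an orthonormal design. This is not the paper's backfitting procedure for the partially linear model.

```python
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty evaluated at |theta|: linear near zero, quadratic
    in the middle, constant for large coefficients."""
    t = np.abs(theta)
    p1 = lam * t
    p2 = -(t**2 - 2 * a * lam * t + lam**2) / (2 * (a - 1))
    p3 = (a + 1) * lam**2 / 2
    return np.where(t <= lam, p1, np.where(t <= a * lam, p2, p3))

def scad_threshold(z, lam, a=3.7):
    """Univariate SCAD solution for least squares with orthonormal design:
    soft-thresholds small z, leaves large z untouched (near-unbiasedness)."""
    az = np.abs(z)
    soft = np.sign(z) * np.maximum(az - lam, 0.0)
    mid = ((a - 1) * z - np.sign(z) * a * lam) / (a - 2)
    return np.where(az <= 2 * lam, soft, np.where(az <= a * lam, mid, z))

z = np.linspace(-4, 4, 9)
print(scad_threshold(z, lam=1.0).round(3))
```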

11.
This paper addresses the issue of estimating seasonal indices for multi-item, short-term forecasting, based upon both individual time series estimates and groups of similar time series. The joint use of individual and group seasonal estimation is extended in two directions. One class of methods is derived from the procedures developed for combining forecasts. The second employs the general class of Stein rules to obtain shrinkage estimates of seasonal components. A comparative evaluation of several versions of these methods has been undertaken, based upon a sample of retail sales data. The results favour the newly developed methods and provide some interesting insights for practical implementation.
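A minimal sketch of the shrinkage idea on simulated data: each item's seasonal indices are pulled toward the group pattern. The shrinkage weight is fixed here, whereas the paper's Stein-rule versions let the data choose it.

```python
import numpy as np

rng = np.random.default_rng(5)
n_items, seasons = 8, 4
true_group = np.array([1.2, 0.9, 0.8, 1.1])   # common seasonal pattern
# Individual indices = group pattern plus item-level noise (simulated).
indiv = true_group + rng.normal(scale=0.15, size=(n_items, seasons))
group = indiv.mean(axis=0)                    # group seasonal estimate

w = 0.6                                       # assumed shrinkage weight
shrunk = w * group + (1 - w) * indiv          # item indices pulled to group
print(shrunk.round(3))
```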

12.
Sequential methods have been used in many applications, especially when fixed-sample procedures are not possible and/or when “early stopping” of sampling is beneficial. At the same time, the issue of how to make correct inferences in the presence of measurement errors has drawn considerable attention from statisticians. In this paper, the problem of sequential estimation of generalized linear models with measurement errors is studied in both adaptive and fixed design cases. The proposed sequential procedure is proved to be asymptotically consistent and efficient in the sense of Chow and Robbins [Ann Math Stat 36(2):457–462, 1965] when measurement errors decay gradually as the number of sequentially selected design points increases. This assumption is useful in sequentially designed experiments and can also be fulfilled when replicate measurements are available. Numerical studies based on a Rasch model and a logistic regression model evaluate the performance of the proposed procedure.
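The Chow and Robbins benchmark referred to above can be illustrated with a minimal fixed-width confidence interval loop for a simple mean; simulated data, not the paper's GLM procedure.

```python
import numpy as np

rng = np.random.default_rng(6)
z, d = 1.96, 0.1          # 95% confidence, desired half-width d
n_min = 10
x = list(rng.normal(loc=2.0, scale=1.0, size=n_min))
while True:
    s2 = np.var(x, ddof=1)
    # Chow-Robbins rule: stop at the first n with n >= (z/d)^2 (S_n^2 + 1/n).
    if len(x) >= (z / d) ** 2 * (s2 + 1.0 / len(x)):
        break
    x.append(rng.normal(loc=2.0, scale=1.0))   # sample one more observation
print("stopped at n =", len(x), "mean =", np.mean(x))
```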

13.
Because of weaknesses in traditional tests, a Bayesian approach is developed to investigate whether unit roots exist in macroeconomic time series. Bayesian posterior odds comparing unit root models to stationary and trend-stationary alternatives are calculated using informative priors. Two classes of reference priors, which are informative but require minimal subjective prior input, are used; in this sense the Bayesian unit root tests developed here are objective. Bayesian procedures are carried out on the Nelson–Plosser and Shiller data sets as well as on generated data. The conclusion is that the failure of classical procedures to reject the unit root hypothesis is not necessarily proof that a unit root is present with high probability.

14.
We show that using ordinary least squares to explore relationships involving firm-level stock returns as the dependent variable, in the face of structured dependence between individual firms, leads to an endogeneity problem. This in turn leads to biased and inconsistent least-squares estimates. A maximum likelihood estimation procedure that produces consistent estimates in these situations is illustrated. This is done using methods developed to deal with spatial dependence between regional data observations, which can be applied to firm-level observations that exhibit a structure of dependence. In addition, we show how to correctly interpret maximum likelihood parameter estimates from these models in the context of firm-level dependence, and provide Monte Carlo as well as applied illustrations of the magnitude of bias that can arise.
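A minimal sketch of the maximum likelihood idea for a spatial lag model y = ρWy + Xβ + ε, with simulated data and an invented weight matrix; the log-determinant term in the concentrated likelihood is exactly what OLS omits.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
n = 100
# Row-standardised sparse weight matrix (random, purely illustrative).
W = rng.random((n, n)) * (rng.random((n, n)) < 0.05)
np.fill_diagonal(W, 0.0)
W = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)

X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true, rho_true = np.array([1.0, 0.5, -0.3]), 0.4
y = np.linalg.solve(np.eye(n) - rho_true * W, X @ beta_true + rng.normal(size=n))

def neg_concentrated_loglik(rho):
    """Concentrated log-likelihood of y = rho*W*y + X*beta + e,
    profiled over beta and sigma^2."""
    Ay = y - rho * (W @ y)
    beta = np.linalg.lstsq(X, Ay, rcond=None)[0]
    e = Ay - X @ beta
    sign, logdet = np.linalg.slogdet(np.eye(n) - rho * W)
    if sign <= 0:
        return np.inf                     # outside the valid rho region
    return 0.5 * n * np.log(e @ e / n) - logdet

res = minimize_scalar(neg_concentrated_loglik, bounds=(-0.99, 0.99),
                      method="bounded")
rho_hat = res.x
beta_hat = np.linalg.lstsq(X, y - rho_hat * (W @ y), rcond=None)[0]
print(rho_hat, beta_hat)
```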

15.
A key requirement of repeated surveys conducted by national statistical institutes is the comparability of estimates over time, resulting in uninterrupted time series describing the evolution of finite population parameters. This is often an argument to keep survey processes unchanged as long as possible. It is nevertheless inevitable that a survey process will need to be redesigned from time to time, for example, to improve or update methods or implement more cost-effective data collection procedures. It is important to quantify the systematic effects or discontinuities of a new survey process on the estimates of a repeated survey to avoid a disturbance in the comparability of estimates over time. This paper reviews different statistical methods that can be used to measure discontinuities and manage the risk due to a survey process redesign.

16.
Versions 5 and 6 of LISREL (Joreskog and Sorbom, 1981) contain procedures that estimate the underlying correlation between continuous variables on the basis of crude rank category measures. The procedures assume that the distribution of the measured variables would have been bivariate normal had they not been categorized. Using survey data and simulations, the accuracy of these polyserial/polychoric (P/P) based estimates of the underlying correlations is compared with that of estimates based on simple equal-distance scoring of the categories. The results indicate that under some conditions, e.g., nearly normally distributed variables and moderate to high correlations, the polyserial/polychoric based estimates are better. Under other conditions, e.g., a moderate to high degree of skew and kurtosis and low correlations, the equal-distance score based estimates are better. Under all conditions, the amount of error decreases fairly rapidly as the number of categories is increased from two to five.

17.
Maximum Likelihood (ML) estimation of probit models with correlated errors typically requires high-dimensional truncated integration. Prominent examples of such models are multinomial probit models and binomial panel probit models with serially correlated errors. In this paper we propose to use a generic procedure known as Efficient Importance Sampling (EIS) for the evaluation of likelihood functions for probit models with correlated errors. Our proposed EIS algorithm covers the standard GHK probability simulator as a special case. We perform a set of Monte Carlo experiments in order to illustrate the relative performance of both procedures for the estimation of a multinomial multiperiod probit model. Our results indicate substantial numerical efficiency gains for ML estimates based on the GHK–EIS procedure relative to those obtained by using the GHK procedure.
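A minimal sketch of the standard GHK probability simulator that EIS nests as a special case: the Cholesky factor turns a multivariate normal orthant probability into a product of sequential truncated-normal draws. The correlation matrix and truncation point are invented; this is not the authors' EIS code.

```python
import numpy as np
from scipy.stats import norm

def ghk_prob(b, Sigma, R=10000, rng=None):
    """GHK simulator for P(Y <= b) with Y ~ N(0, Sigma)."""
    rng = np.random.default_rng(0) if rng is None else rng
    L = np.linalg.cholesky(Sigma)
    m = len(b)
    eta = np.zeros((R, m))
    w = np.ones(R)                      # running product of Phi terms
    for j in range(m):
        upper = (b[j] - eta[:, :j] @ L[j, :j]) / L[j, j]
        pj = norm.cdf(upper)
        w *= pj
        # Draw eta_j from a standard normal truncated above at `upper`
        # by inverse-CDF sampling.
        u = rng.uniform(size=R)
        eta[:, j] = norm.ppf(np.clip(u * pj, 1e-12, 1 - 1e-12))
    return w.mean()

Sigma = np.array([[1.0, 0.5, 0.3],
                  [0.5, 1.0, 0.4],
                  [0.3, 0.4, 1.0]])
b = np.array([0.5, 0.0, 1.0])
print(ghk_prob(b, Sigma))
```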

18.
Classical exploratory factor analysis (EFA) finds estimates for the factor loadings matrix and the matrix of unique factor variances that give the best fit to the sample correlation matrix with respect to some goodness-of-fit criterion. Common factor scores can then be obtained as a function of these estimates and the data. Alternatively, the EFA model can be fitted directly to the data, which yields factor loadings and common factor scores simultaneously. Recently, new algorithms were introduced for the simultaneous least squares estimation of all EFA model unknowns. The new methods are based on the numerical procedure for singular value decomposition of matrices and work equally well when the number of variables exceeds the number of observations. This paper provides an expository review of methods for simultaneous parameter estimation in EFA. The methods are illustrated on Harman's five socio-economic variables data and a high-dimensional data set from genome research.
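As a hedged illustration of the SVD idea, and not the reviewed algorithms themselves (which additionally estimate the unique factor variances): a truncated SVD fits X ≈ FΛ′ in the least squares sense and delivers loadings and factor scores simultaneously from one decomposition, shown here on simulated data.

```python
import numpy as np

rng = np.random.default_rng(8)
n, p, k = 200, 5, 2
F_true = rng.normal(size=(n, k))
L_true = rng.normal(size=(p, k))
X = F_true @ L_true.T + 0.3 * rng.normal(size=(n, p))
X = (X - X.mean(axis=0)) / X.std(axis=0)    # standardise the variables

# Truncated SVD: scores and loadings in one pass, as the rank-k least
# squares solution of X ~ F @ L.T (Eckart-Young).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
F = np.sqrt(n) * U[:, :k]                    # scores, scaled so F'F/n = I
L = Vt[:k].T * s[:k] / np.sqrt(n)            # loadings
print(L.round(2))
print(np.allclose(F.T @ F / n, np.eye(k)))   # identification check
```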

19.
Auditing Quality of Research in Social Sciences
A growing body of studies involves complex research processes facing many interpretations and iterations during analysis. Complex research generally has an explorative, in-depth, qualitative nature. Because such studies rely less on standardized procedures of data gathering and analysis, it is often not clear how quality was ensured or assessed, and techniques suitable for assessing the quality of such complex research processes are not easy to find. In this paper, we discuss and present a suitable validation procedure. We first discuss how 'diagnosing' quality involves three generic criteria. Next, we present findings of previous research on possible procedures to assure the quality of research in the social sciences. We introduce the audit procedure designed by Halpern [(1983) Auditing Naturalistic Inquiries: The Development and Application of a Model. Unpublished doctoral dissertation, Indiana University], which we found an appropriate starting point for a suitable procedure for quality judgment. Subsequently, we present a redesign of the original procedure, with corresponding guidelines for the researcher (the auditee) and for the evaluator of the quality of the study (the auditor). With that design, we aim to enable researchers to present their explorative qualitative studies as equally strong and valuable compared with studies that can rely on standardized procedures.

20.
Many papers have regressed non-parametric estimates of productive efficiency on environmental variables in two-stage procedures to account for exogenous factors that might affect firms' performance. None of these describes a coherent data-generating process (DGP). Moreover, the conventional approaches to inference employed in these papers are invalid because of complicated, unknown serial correlation among the estimated efficiencies. We first describe a sensible DGP for such models. We then propose single and double bootstrap procedures; both permit valid inference, and the double bootstrap improves statistical efficiency in the second-stage regression. We examine the statistical performance of our estimators using Monte Carlo experiments.
