Similar articles
Found 20 similar articles (search time: 31 ms)
1.
In a seminal paper, Mak (Journal of the Royal Statistical Society B, 55, 1993, 945) derived an efficient algorithm for solving non-linear unbiased estimation equations. In this paper, we show that when Mak's algorithm is applied to biased estimation equations, it yields the estimates that would result from solving a bias-corrected estimation equation, so the estimator is consistent under the usual regularity conditions. Moreover, the properties Mak established for his algorithm carry over to biased estimation equations, applying to the estimates from the bias-corrected equations. The marginal likelihood estimator is obtained when the approach is applied to both maximum likelihood and least squares estimation of the covariance matrix parameters in the general linear regression model. Applied to the profile and marginal likelihood functions for estimating the lagged dependent variable coefficient in the dynamic linear regression model, the approach yields two new estimators. Monte Carlo simulation results show that the new approach leads to a better estimator when applied to the standard profile likelihood. It is therefore recommended for situations in which standard estimators are known to be biased.
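As a toy illustration of the estimating-equation idea (a hedged sketch, not Mak's actual algorithm; the data and the `solve_estimating_equation` helper are invented for the example), the biased ML equation for a normal variance and its bias-corrected counterpart can both be solved by Newton iteration:

```python
# Illustrative sketch only (not Mak's algorithm itself): solve a scalar
# estimating equation psi(theta) = 0 by Newton iteration, contrasting the
# biased ML equation for a normal variance with its bias-corrected form.
def solve_estimating_equation(psi, dpsi, theta0, tol=1e-12, max_iter=100):
    theta = theta0
    for _ in range(max_iter):
        step = psi(theta) / dpsi(theta)
        theta -= step
        if abs(step) < tol:
            break
    return theta

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]   # invented sample
n = len(data)
mean = sum(data) / n
sse = sum((x - mean) ** 2 for x in data)

# Biased ML equation: sse - n*theta = 0          -> theta = sse/n
biased = solve_estimating_equation(lambda t: sse - n * t,
                                   lambda t: -n, 1.0)
# Bias-corrected equation: sse - (n-1)*theta = 0 -> theta = sse/(n-1)
corrected = solve_estimating_equation(lambda t: sse - (n - 1) * t,
                                      lambda t: -(n - 1), 1.0)
print(biased, corrected)  # 4.0 and ~4.571
```

Because both equations are linear in theta, Newton converges in one step; the bias-corrected root is SSE/(n-1) rather than SSE/n.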

2.
A stochastic simulation procedure is proposed in this paper for obtaining median unbiased (MU) estimates in macroeconometric models. MU estimates are computed for lagged dependent variable (LDV) coefficients in 18 equations of a macroeconometric model. The 2SLS bias for a coefficient, defined as the difference between the 2SLS estimate and the MU estimate, is on average smaller in absolute value than would be expected from Andrews' exact results for an equation with only a constant term, time trend, and LDV. The results also show that, in a practical sense, the estimated biases are not very large, because they have little effect on the overall predictive accuracy of the model and on its multiplier properties.
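The median-unbiased idea can be sketched for a single AR(1)/LDV coefficient (an invented toy, not the paper's macro-model procedure; the `observed` value, grid, and sample size are hypothetical): simulate the median OLS estimate at each candidate rho and pick the rho whose median matches the estimate seen in the data.

```python
import random
import statistics

def ols_ar1(y):
    # OLS slope of y_t on y_{t-1}, no intercept
    num = sum(y[t - 1] * y[t] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

def median_ols(rho, T=50, reps=500, seed=0):
    # Median of the small-sample OLS estimator when the true value is rho
    rng = random.Random(seed)
    ests = []
    for _ in range(reps):
        y = [0.0]
        for _ in range(T):
            y.append(rho * y[-1] + rng.gauss(0, 1))
        ests.append(ols_ar1(y))
    return statistics.median(ests)

observed = 0.80                           # hypothetical OLS estimate
grid = [r / 100 for r in range(72, 94)]
mu_estimate = min(grid, key=lambda r: abs(median_ols(r) - observed))
print(mu_estimate)   # typically above 0.80: corrects the downward bias
```

The inversion step is crude (a grid search with a shared seed), but it conveys why the MU estimate of a persistent LDV coefficient lies above the biased OLS estimate.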

3.
This paper estimates lost sales resulting from adverse publicity for a small company soon after it began selling solar hot-water units. Ordinary least squares analysis generates estimates from seasonally adjusted data. To capture the structural shift in sales, alternative series of dummy variables are tested along with other explanatory variables. The ‘best’ of the equations is reported, and the outcome of the court suit is discussed.

4.
In this paper, I develop a regression-based system of labour productivity equations that accounts for capital-embodied technological change, and I incorporate this system into IDLIFT, a structural, macroeconomic input-output model of the US economy. Builders of regression-based forecasting models have long had difficulty finding labour productivity equations that exhibit the 'Solowian' property that movements in investment should cause accompanying movements in labour productivity. The production theory developed by Solow and others dictates that this causation is driven by the effect of traditional capital deepening as well as technological change embodied in capital. Lack of measurement of the latter has hampered researchers' ability to estimate the productivity-investment relationship properly. Recent research by Wilson (2001) has alleviated this difficulty by estimating industry-level embodied technological change. In this paper, I utilize those estimates to construct capital stocks adjusted for technological change and then use these adjusted stocks to estimate Solow-type labour productivity equations. Replacing IDLIFT's former productivity equations, based on changes in output and time trends, with the new equations results in a convergence between the dynamic behaviour of the model and that predicted by traditional (Solowian) production theory.

5.
This paper reviews the state of the art in the computation of robust estimates of multivariate location and shape using combinatorial estimators such as the minimum volume ellipsoid (MVE) and iterative M- and S-estimators. We also present new results on the behavior of M- and S-estimators in the presence of different types of outliers, and give the first computational evidence on compound estimators that use the MVE as a starting point for an S-estimator. Problems with too many data points in too many dimensions cannot be handled by any available technology; however, the methods presented in this paper substantially extend the size of problem that can be successfully handled.
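A univariate analogue of the iterative M-estimators discussed here can be sketched as follows (a toy Huber location estimate with the scale fixed at the normalized MAD, not the paper's multivariate MVE/S routines; the data are invented):

```python
import statistics

# Sketch: iteratively reweighted Huber M-estimate of location.
# Observations far from the current center get weight c/|z| < 1,
# so a gross outlier barely moves the estimate.
def huber_location(xs, c=1.345, tol=1e-8, max_iter=200):
    mu = statistics.median(xs)
    scale = statistics.median([abs(x - mu) for x in xs]) / 0.6745
    for _ in range(max_iter):
        w = [1.0 if abs((x - mu) / scale) <= c
             else c * scale / abs(x - mu) for x in xs]
        new = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
        if abs(new - mu) < tol:
            return new
        mu = new
    return mu

data = [9.8, 9.9, 10.0, 10.1, 10.2, 30.0]   # one gross outlier
est = huber_location(data)
print(est)   # stays near 10, while the sample mean is ~13.3
```

The same downweighting logic, applied to Mahalanobis distances rather than scalar residuals, is the core of the multivariate M- and S-estimators the paper benchmarks.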

6.
An empirical balance of payments model involving the demand and supply of imports and exports for 31 developing countries is estimated utilizing panel data over 1964–1987. In order to compute error-components 3SLS estimates of this model, which requires different instruments for different equations, we propose a generalization of the Fuller–Battese transformation to obtain GMM estimates. Our empirical results suggest that very little of the short-run adjustment in external imbalances is likely to be achieved by exchange rate policies, and most of the burden must fall on aggregate demand management.
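The basic Fuller–Battese quasi-demeaning transformation that underlies such error-components estimators can be sketched as follows (variance components assumed known for illustration; the panel data are invented):

```python
import math

# Sketch of the Fuller-Battese (random-effects) transformation:
# each unit's observations are quasi-demeaned by theta * group mean,
# with theta = 1 - sigma_e / sqrt(sigma_e^2 + T * sigma_u^2).
def fuller_battese(y_groups, sigma_e2, sigma_u2):
    out = []
    for y in y_groups:                 # y: observations for one unit
        T = len(y)
        theta = 1.0 - math.sqrt(sigma_e2 / (sigma_e2 + T * sigma_u2))
        ybar = sum(y) / T
        out.append([yi - theta * ybar for yi in y])
    return out

panel = [[1.0, 2.0, 3.0], [4.0, 6.0, 8.0]]   # two units, T = 3
transformed = fuller_battese(panel, sigma_e2=1.0, sigma_u2=1.0)
print(transformed)   # with theta = 0.5: [[0, 1, 2], [1, 3, 5]]
```

OLS on the transformed data is the random-effects GLS estimator; theta near 0 recovers pooled OLS and theta near 1 the within (fixed-effects) estimator.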

7.
This paper compares the familiar probit model with three semiparametric estimators of binary response models in an application to labour market participation of married women. This exercise is performed using two different cross-section data sets from Switzerland and Germany. For the Swiss data the probit specification cannot be rejected and the models yield similar results. In the German case the probit model is rejected, but the coefficient estimates do not vary substantially across the models. The predicted choice probabilities, however, differ systematically for a subset of the sample. The results of this paper indicate that more work is necessary on specification tests of semiparametric models and on simulations using these models.

8.
There is a lack of uniformity concerning the appropriate degrees of freedom to use in estimating simultaneous equations. This issue is examined through a Monte Carlo study comparing estimates and inferences obtained using alternative choices of degrees of freedom in two- and three-stage least squares (2SLS and 3SLS). While 2SLS estimates do not depend on this choice, 3SLS estimates do; in this study, however, the choice had little impact on the 3SLS estimates. The results strongly suggest that the approximate tests conventionally used are much more accurate, for both methods, if the estimated variances account for lost degrees of freedom.
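The degrees-of-freedom choice at issue can be illustrated with the residual-variance estimator that feeds the test statistics (a minimal sketch; the residuals and k are invented):

```python
# Sketch: residual variance behind 2SLS/3SLS standard errors, with and
# without a degrees-of-freedom correction (divide by n - k versus n).
def residual_variance(residuals, k, correct_df=True):
    n = len(residuals)
    ssr = sum(e * e for e in residuals)
    return ssr / (n - k) if correct_df else ssr / n

e = [0.5, -1.2, 0.3, 0.9, -0.5, 0.1, -0.4, 0.3]   # invented residuals
v_corrected = residual_variance(e, k=3)                    # SSR / 5
v_uncorrected = residual_variance(e, k=3, correct_df=False)  # SSR / 8
print(v_corrected, v_uncorrected)
```

With few observations relative to parameters, the uncorrected divisor understates the variance and makes conventional t- and F-type tests reject too often, which is the pattern the Monte Carlo study documents.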

9.
A method is presented for the estimation of the parameters in the dynamic simultaneous equations model with vector autoregressive moving average disturbances. The estimation procedure is derived from the full information maximum likelihood approach and is based on Newton-Raphson techniques applied to the likelihood equations. The resulting two-step Newton-Raphson procedure involves only generalized instrumental variables estimation in the second step. This procedure also serves as the basis for an iterative scheme to solve the normal equations and obtain the maximum likelihood estimates of the conditional likelihood function. A nine-equation variant of the quarterly forecasting model of the US economy developed by Fair is then used as a realistic example to illustrate the estimation procedure described in the paper.

10.
Conclusions: In this paper we have proposed new techniques for simplifying the estimation of disequilibrium models by avoiding constrained maximum likelihood methods (which cannot avoid the numerous theoretical and practical difficulties mentioned above), including the unrealistic assumption of independence between the errors of the demand and supply equations. In the proposed first stage, one estimates the relative magnitude of the residuals from the demand and supply equations nonparametrically; although these residuals suffer from omitted-variables bias, the coefficient of the omitted variable is known to be the same in both equations. Nonparametric methods are used because they do not depend on parametric functional forms of the biased (bent inward) demand and supply equations. The first stage compares the absolute values of residuals from conditional expectations in order to classify each data point as belonging to the demand or the supply curve. At the second stage we estimate the economically meaningful scale elasticity and distribution parameters from the classified (separated) data. We extend nonparametric kernel estimation to the r = 4 case to improve the speed of convergence, as predicted by Singh's (1981) theory. In the first stage, the r = 4 results give generally improved R2 and |t| values in our study of the Dutch data used by many authors concerned with estimating floorspace productivity. We find that reasonable results can be obtained with our approximate but simpler two-stage method. Detailed results are reported for four types of Dutch retail establishments. More research is needed to gain further experience and to extend the methodology to other disequilibrium models and other productivity estimation problems. This paper was processed by W. Eichhorn.

11.
This paper concerns estimating parameters in a high-dimensional dynamic factor model by the method of maximum likelihood. To accommodate missing data in the analysis, we propose a new model representation for the dynamic factor model. It allows the Kalman filter and related smoothing methods to evaluate the likelihood function and to produce optimal factor estimates in a computationally efficient way when missing data are present. The implementation details of our methods for signal extraction and maximum likelihood estimation are discussed. The computational gains of the new devices are demonstrated on simulated data sets with varying numbers of missing entries.
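The missing-data device can be illustrated with a univariate local-level Kalman filter (a toy stand-in for the paper's high-dimensional factor model; all parameters are invented): when an observation is missing, the measurement update is skipped and the prediction is carried forward with inflated uncertainty.

```python
# Sketch: local-level Kalman filter that tolerates missing observations
# (None). Predict always runs; the update runs only when data exist.
def local_level_filter(y, q=0.1, r=1.0, a0=0.0, p0=1e6):
    a, p, out = a0, p0, []
    for obs in y:
        p = p + q                      # predict: variance grows by q
        if obs is None:                # missing: no measurement update
            out.append(a)
            continue
        k = p / (p + r)                # Kalman gain
        a = a + k * (obs - a)          # update state toward observation
        p = (1.0 - k) * p              # update variance
        out.append(a)
    return out

series = [1.0, 1.2, None, 0.9, None, 1.1]   # invented, with gaps
filtered = local_level_filter(series)
print(filtered)
```

The same skip-the-update logic, applied row-wise to a multivariate observation equation, is what lets likelihood evaluation proceed on incomplete panels.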

12.
Amidst the lack of consensus in previous academic studies, this paper contributes to the existing literature by re-examining the commencement date of the Sovereign Debt Crisis for the Greek economy. It argues that the Greek crisis began much earlier than previously reported, and its empirical results challenge earlier studies that may have underestimated the degree of persistence in the volatility of bond returns. The analysis uses monthly 10-year Greek government bond data and three independent structural-break model tests that allow the detection of possible endogenous break dates marking the beginning of the crisis. Each model provides an empirically plausible and robust framework for examining the volatility of bond returns in an evolving time series. Results from a series of autoregressive EGARCH estimations, with and without dummy variables for the break dates, are compared; the dummy variables are incorporated in the mean and variance equations to validate the structural breaks in each series. Overall, the results show a significant presence of non-constant parameters capturing a structural break in the sample. The detected break, November 2009, represents a major regime change triggered by the start of the debt crisis for the Greek economy. Such excess volatility in sovereign bond markets has poignant implications for regulators, investors and portfolio risk managers alike.
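A far simpler endogenous break-date diagnostic than the paper's EGARCH machinery, the Inclán–Tiao centered cumulative sum of squares, conveys the idea (synthetic data; this is a generic sketch, not the paper's test):

```python
# Sketch: Inclan-Tiao statistic D_k = C_k/C_n - k/n, where C_k is the
# cumulative sum of squared returns. A variance break shows up as a
# pronounced peak in |D_k|; its location is the candidate break date.
def css_break(x):
    n = len(x)
    total = sum(v * v for v in x)
    c, best_k, best_d = 0.0, 0, 0.0
    for k, v in enumerate(x, start=1):
        c += v * v
        d = c / total - k / n
        if abs(d) > abs(best_d):
            best_d, best_k = d, k
    return best_k

# Synthetic returns: quiet regime for 50 periods, then high volatility.
x = ([0.1 if i % 2 else -0.1 for i in range(50)]
     + [1.0 if i % 2 else -1.0 for i in range(50)])
print(css_break(x))  # 50
```

On real bond returns one would add a significance threshold for |D_k| and iterate over sub-samples; here the deterministic regime shift makes the peak land exactly at the true break.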

13.
This paper investigates the replicability of three important studies on growth theory uncertainty that employed Bayesian model averaging tools. We compare these results with estimates obtained using alternative, recently developed model averaging techniques. Overall, we successfully replicate all three studies, find that the sign and magnitude of these new estimates are reasonably close to those produced via traditional Bayesian methods and deploy a novel strategy to implement one of the new averaging estimators. Copyright © 2012 John Wiley & Sons, Ltd.

14.
This paper constructs and estimates a structural dynamic model of occupational choice in which all occupations are characterized in a skill-requirement space, using data from the Dictionary of Occupational Titles and the NLSY79. The skill-requirement-space approach has the merits of computational simplicity and ease of interpretation: it allows the model to include hundreds of occupations at the three-digit census classification level without a large number of parameters. Parameter estimates indicate that wages grow with the skill requirements of an occupation and that educated and experienced individuals are better rewarded in occupations demanding cognitive and interpersonal skills. They also suggest that ignoring self-selection into occupations and individual heterogeneity may produce counter-intuitive and biased estimates of the returns to skill requirements.

15.
This paper presents a generalized method of moments algorithm for estimating the structural parameters of a macroeconomic model subject to the restriction that the coefficients of the monetary policy rule minimize the central bank's expected loss function. The algorithm combines least-squares normal equations with moment restrictions derived from the first-order necessary conditions of the auxiliary optimization. We assess the performance of the algorithm with Monte Carlo simulations using three increasingly complex models. We find that imposing the optimizing restrictions when they are true improves estimation accuracy and that imposing those restrictions when they are false biases estimates of some of the structural parameters but not of the policy-rule coefficients.

16.
Ridge-type analysis of the Theil–Goldberger mixed model, considered earlier by Saxena and Bhattacharya (1983) in a non-Bayesian set-up, is discussed from the Bayesian viewpoint when a has a closed prior and the loss function is squared error.

17.
Generalized extreme value (GEV) random utility choice models have been suggested as a development of multinomial logit models that allows the random components of the various alternatives to be statistically dependent. This paper establishes existence, and provides necessary and sufficient uniqueness conditions, for the solutions to a set of equations that may be interpreted as an equilibrium of an economy whose demand side is described by a multiple-segment GEV random choice model. The same equations may alternatively be interpreted in a maximum likelihood estimation context. The method employed is based on optimization theory and may provide a useful computational approach. The uniqueness results suggest a way to introduce segregation/integration effects into logit-type choice models. Generalizations to non-GEV models are touched upon.
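The multinomial logit, the simplest member of the GEV family, illustrates the choice probabilities underlying such models (the utilities are invented; this is a generic sketch, not the paper's multiple-segment model):

```python
import math

# Sketch: multinomial logit choice probabilities
# P_j = exp(v_j) / sum_k exp(v_k), computed with the usual
# subtract-the-max trick for numerical stability.
def mnl_probs(v):
    m = max(v)
    ex = [math.exp(x - m) for x in v]
    s = sum(ex)
    return [e / s for e in ex]

p = mnl_probs([1.0, 2.0, 3.0])   # hypothetical systematic utilities
print(p)   # probabilities sum to 1; highest utility gets largest share
```

In a GEV model the denominator is replaced by derivatives of a generating function, which is what permits correlated random components across alternatives; the equilibrium equations the paper studies set such demand probabilities against supply-side constraints.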

18.
The New York Times model is a large-scale model which forecasts sales and earnings for the New York Times newspaper. Structurally, it is composed of two major blocks: a demand module, and a production, cost and revenue module. The demand module, the heart of the model, is a set of simultaneous nonlinear econometric equations which forecast physical volume: approximately 35 categories of advertising lines and 10 categories of circulation. The second block is recursive and contains roughly 300 equations, some of which are stochastic behavioral equations. This block converts the volume forecasts into paging, newsprint consumption, newsprint distribution and manning requirements. These physical flows are then monetized, using price and wage forecasts, to produce estimates of revenue, fixed and variable costs, and operating profit. This paper summarizes the development of the model, with emphasis on the advertising and circulation model. It should be noted that the structure of the model is constantly evolving; consequently, emphasis is placed on the conceptual underpinnings of the model, not on a detailed presentation of the current structure.

19.
20.
This Briefing Paper is the first of a series of three. The process discussed is that of making 'constant adjustments' in forecasts, which involves modifying the results generated by the econometric model. For the first time we are publishing tables of the constant adjustments used in the current forecast. We explain in general why such adjustments are made, and also explain the actual adjustments we have made for this forecast.
The second article of the series, to be published in our February 1983 edition, will describe the potential sources of error in forecasts. In particular it will describe the inevitable stochastic or random element involved in statistical attempts to quantify economic behaviour. As a completely new departure, the article will report estimates of future errors based on stochastic simulations of the LBS model and will provide statistical error bands for the main elements of the forecast.
The final article, to be published in our June 1983 edition, will contrast the measures of forecast error that we obtain from the estimation process and our stochastic simulations with the errors that we have actually made, as revealed by an examination of our forecasting 'track record'. It is hoped to draw, from this comparison, some general conclusions about the scope and limits of econometric forecasting procedures.
