Similar articles
 20 similar articles found (search time: 62 ms)
1.
Both the theoretical and empirical literature on the estimation of allocative and technical inefficiency has grown enormously. To minimize aggregation bias, ideally one should estimate firm- and input-specific parameters describing allocative inefficiency. However, identifying these parameters has often proven difficult. For a panel of Chilean hydroelectric power plants, we obtain a full set of such parameters using Gibbs sampling, which draws sequentially from conditional generalized method of moments (GMM) estimates obtained via instrumental variables estimation. We find an economically significant range of firm-specific efficiency estimates with differing degrees of precision. The standard GMM approach estimates virtually no allocative inefficiency for industry-wide parameters. Copyright © 2009 John Wiley & Sons, Ltd.

2.
In this paper, we study a Bayesian approach to flexible modeling of conditional distributions. The approach uses a flexible model for the joint distribution of the dependent and independent variables and then extracts the conditional distributions of interest from the estimated joint distribution. We use a finite mixture of multivariate normals (FMMN) to estimate the joint distribution. The conditional distributions can then be assessed analytically or through simulations. The discrete variables are handled through the use of latent variables. The estimation procedure employs an MCMC algorithm. We provide a characterization of the Kullback–Leibler closure of FMMN and show that the joint and conditional predictive densities implied by the FMMN model are consistent estimators for a large class of data generating processes with continuous and discrete observables. The method can be used as a robust regression model with discrete and continuous dependent and independent variables and as a Bayesian alternative to semi- and non-parametric models such as quantile and kernel regression. In experiments, the method compares favorably with classical nonparametric and alternative Bayesian methods.
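
As a sketch of the key extraction step, the conditional distribution of one variable given the others is available in closed form once the joint mixture is estimated: each component contributes a conditional normal, reweighted by that component's marginal density at the conditioning value. A minimal bivariate illustration (the mixture parameters below are hypothetical, standing in for EM/MCMC output):

```python
import numpy as np

def conditional_mixture(x, weights, means, covs):
    """Given a bivariate normal mixture over (x, y), return the mixture
    weights, means and variances of y | x, using the exact formulas for
    conditioning a multivariate normal component."""
    w_post, mu_c, var_c = [], [], []
    for w, m, S in zip(weights, means, covs):
        sxx, sxy, syy = S[0][0], S[0][1], S[1][1]
        # marginal density of x under this component (reweights the mixture)
        px = np.exp(-0.5 * (x - m[0]) ** 2 / sxx) / np.sqrt(2 * np.pi * sxx)
        w_post.append(w * px)
        mu_c.append(m[1] + sxy / sxx * (x - m[0]))   # conditional mean
        var_c.append(syy - sxy ** 2 / sxx)           # conditional variance
    w_post = np.array(w_post)
    w_post /= w_post.sum()
    return w_post, np.array(mu_c), np.array(var_c)

# Two-component example: E[y | x] is a weighted blend of two regression lines.
w, mu, var = conditional_mixture(
    x=0.0,
    weights=[0.5, 0.5],
    means=[[0.0, 1.0], [0.0, -1.0]],
    covs=[[[1.0, 0.5], [0.5, 1.0]], [[1.0, -0.5], [-0.5, 1.0]]],
)
cond_mean = float(w @ mu)
```

At x = 0 the two components are symmetric, so the conditional mean is zero while the conditional density itself is bimodal, which is exactly the flexibility the FMMN approach buys over a single normal.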

3.
Some recent specifications for GARCH error processes explicitly assume a conditional variance that is generated by a mixture of normal components, albeit with some parameter restrictions. This paper analyses the general normal mixture GARCH(1,1) model which can capture time variation in both conditional skewness and kurtosis. A main focus of the paper is to provide evidence that, for modelling exchange rates, generalized two‐component normal mixture GARCH(1,1) models perform better than those with three or more components, and better than symmetric and skewed Student's t‐GARCH models. In addition to the extensive empirical results based on simulation and on historical data on three US dollar foreign exchange rates (British pound, euro and Japanese yen), we derive: expressions for the conditional and unconditional moments of all models; parameter conditions to ensure that the second and fourth conditional and unconditional moments are positive and finite; and analytic derivatives for the maximum likelihood estimation of the model parameters and standard errors of the estimates. Copyright © 2006 John Wiley & Sons, Ltd.
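
The normal mixture GARCH data-generating process can be sketched as follows: each period the innovation is drawn from one of K normal components (with a zero-mean mixture), while every component carries its own GARCH(1,1) variance recursion driven by the common realized shock. A hedged two-component simulation with illustrative parameter values:

```python
import numpy as np

def simulate_nm_garch(T, p, mu, omega, alpha, beta, seed=0):
    """Simulate a two-component normal mixture GARCH(1,1): the component
    is drawn with probabilities p, and all component variances update
    from the common realized innovation (illustrative parameterization)."""
    rng = np.random.default_rng(seed)
    h = omega / (1.0 - alpha - beta)   # start at component unconditional variances
    eps = np.zeros(T)
    for t in range(T):
        k = rng.choice(len(p), p=p)                    # pick mixture component
        eps[t] = mu[k] + np.sqrt(h[k]) * rng.standard_normal()
        h = omega + alpha * eps[t] ** 2 + beta * h     # update all components
    return eps

p = np.array([0.8, 0.2])
mu = np.array([0.05, -0.2])            # component means sum to zero under p
omega = np.array([0.02, 0.10])
alpha = np.array([0.05, 0.10])
beta = np.array([0.90, 0.80])
r = simulate_nm_garch(5000, p, mu, omega, alpha, beta)
```

The second, low-probability component has a larger variance level and a negative mean, which is the mechanism by which the model generates conditional skewness and excess kurtosis.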

4.
We propose a new conditionally heteroskedastic factor model, the GICA-GARCH model, which combines independent component analysis (ICA) and multivariate GARCH (MGARCH) models. This model assumes that the data are generated by a set of underlying independent components (ICs) that capture the co-movements among the observations, which are assumed to be conditionally heteroskedastic. The GICA-GARCH model separates the estimation of the ICs from their fitting with a univariate ARMA-GARCH model. Here, we will use two ICA approaches to find the ICs: the first estimates the components, maximizing their non-Gaussianity, while the second exploits the temporal structure of the data. After estimating and identifying the common ICs, we fit a univariate GARCH model to each of them in order to estimate their univariate conditional variances. The GICA-GARCH model then provides a new framework for modelling the multivariate conditional heteroskedasticity in which we can explain and forecast the conditional covariances of the observations by modelling the univariate conditional variances of a few common ICs. We report some simulation experiments to show the ability of ICA to discover leading factors in a multivariate vector of financial data. Finally, we present an empirical application to the Madrid stock market, where we evaluate the forecasting performances of the GICA-GARCH and two additional factor GARCH models: the orthogonal GARCH and the conditionally uncorrelated components GARCH.

5.
Instrumental variable estimation in the presence of many moment conditions
This paper develops shrinkage methods for addressing the “many instruments” problem in the context of instrumental variable estimation. It has been observed that instrumental variable estimators may behave poorly if the number of instruments is large. This problem can be addressed by shrinking the influence of a subset of instrumental variables. The procedure can be understood as a two-step process of shrinking some of the OLS coefficient estimates from the regression of the endogenous variables on the instruments, then using the predicted values of the endogenous variables (based on the shrunk coefficient estimates) as the instruments. The shrinkage parameter is chosen to minimize the asymptotic mean square error. The optimal shrinkage parameter has a closed form, which makes it easy to implement. A Monte Carlo study shows that the shrinkage method works well and performs better in many situations than do existing instrument selection procedures.
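
The two-step procedure can be illustrated in a few lines: run the first-stage OLS of the endogenous variable on all instruments, shrink the coefficients on the doubtful instruments toward zero, and use the shrunk fitted values as the instrument. This sketch uses a fixed, hand-picked shrinkage factor rather than the paper's MSE-minimizing closed form, and the split into "kept" and "shrunk" instruments is illustrative:

```python
import numpy as np

def shrinkage_iv(y, x, Z, s, keep=1):
    """Illustrative two-step shrinkage IV: shrink the first-stage OLS
    coefficients beyond the first `keep` instruments toward zero by a
    factor s in [0, 1]; s = 1 recovers ordinary 2SLS."""
    pi_hat = np.linalg.lstsq(Z, x, rcond=None)[0]  # first-stage OLS
    pi_s = pi_hat.copy()
    pi_s[keep:] *= s                               # shrink the doubtful ones
    x_hat = Z @ pi_s                               # shrunk fitted values
    return float((x_hat @ y) / (x_hat @ x))        # IV estimate of beta

rng = np.random.default_rng(1)
n, L = 500, 30
Z = rng.standard_normal((n, L))
pi = np.r_[0.5, np.zeros(L - 1)]                   # one strong, 29 irrelevant instruments
u = rng.standard_normal(n)
x = Z @ pi + 0.8 * u + rng.standard_normal(n)      # endogenous regressor
y = 1.0 * x + u                                    # true coefficient is 1
beta_2sls = shrinkage_iv(y, x, Z, s=1.0)           # plain 2SLS, all 30 instruments
beta_shrunk = shrinkage_iv(y, x, Z, s=0.2)         # damps the 29 doubtful ones
```

With many weak instruments, plain 2SLS is biased toward OLS; damping the first stage reduces that bias at some cost in variance, which is the trade-off the optimal shrinkage parameter balances.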

6.
We consider estimation of panel data models with sample selection when the equation of interest contains endogenous explanatory variables as well as unobserved heterogeneity. Assuming that appropriate instruments are available, we propose several tests for selection bias and two estimation procedures that correct for selection in the presence of endogenous regressors. The tests are based on the fixed effects two-stage least squares estimator, thereby permitting arbitrary correlation between unobserved heterogeneity and explanatory variables. The first correction procedure is parametric and is valid under the assumption that the errors in the selection equation are normally distributed. The second procedure estimates the model parameters semiparametrically using series estimators. In the proposed testing and correction procedures, the error terms may be heterogeneously distributed and serially dependent in both selection and primary equations. Because these methods allow for a rather flexible structure of the error variance and do not impose any nonstandard assumptions on the conditional distributions of explanatory variables, they provide a useful alternative to the existing approaches presented in the literature.

7.
We present the sparse estimation of one-sided dynamic principal components (ODPCs) to forecast high-dimensional time series. The forecast can be made directly with the ODPCs or by using them as estimates of the factors in a generalized dynamic factor model. It is shown that a large reduction in the number of parameters estimated for the ODPCs can be achieved without affecting their forecasting performance.

8.
In this paper we consider estimation of demand systems with flexible functional forms, allowing an error term with a general conditional heteroskedasticity function that depends on observed covariates, such as demographic variables. We propose a general model that can be estimated either by quasi-maximum likelihood (in the case of exogenous regressors) or by the generalized method of moments (GMM) if the covariates are endogenous. The specification proposed in the paper nests several demand functions from the literature, and the results can be applied to the recently proposed Exact Affine Stone Index (EASI) demand system of Lewbel and Pendakur (2008, “Tricks with Hicks: The EASI implicit Marshallian demand system for unobserved heterogeneity and flexible Engel curves”, American Economic Review, in press). Furthermore, flexible nonlinear expenditure elasticities can be estimated.

9.
We show how a wide range of stochastic frontier models can be estimated relatively easily using variational Bayes. We derive approximate posterior distributions and point estimates for parameters and inefficiency effects for (a) time-invariant models with several alternative inefficiency distributions, (b) models with time-varying effects, (c) models incorporating environmental effects, and (d) models with more flexible forms for the regression function and error terms. Despite the abundance of stochastic frontier models, there have been few attempts to test the various models against each other, probably due to the difficulty of performing such tests. One advantage of the variational Bayes approximation is that it facilitates the computation of marginal likelihoods that can be used to compare models. We apply this idea to test stochastic frontier models with different inefficiency distributions. Estimation and testing are illustrated using three examples.

10.
This paper studies the empirical performance of stochastic volatility models for twenty years of weekly exchange rate data for four major currencies. We concentrate on the effects of the distribution of the exchange rate innovations on both the parameter estimates and the estimates of the latent volatility series. The density of the log of squared exchange rate innovations is modelled as a flexible mixture of normals. We use three different estimation techniques: quasi-maximum likelihood, simulated EM, and a Bayesian procedure. The estimated models are applied to pricing currency options. The major findings of the paper are that: (1) explicitly incorporating fat-tailed innovations increases the estimates of the persistence of volatility dynamics; (2) the estimation error of the volatility time series is very large; (3) this in turn causes the standard errors on calculated option prices to be so large that these prices are rarely significantly different from those of a model with constant volatility. © 1998 John Wiley & Sons, Ltd.
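
Of the three techniques, quasi-maximum likelihood can be sketched compactly: take logs of squared returns to linearize the model, then run a Kalman filter treating the log-chi-squared measurement noise as Gaussian with matched moments (the Harvey-Ruiz-Shephard linearization, shown here without the flexible mixture refinement and with illustrative parameter values):

```python
import numpy as np

def sv_qml_loglik(params, r):
    """Quasi-log-likelihood of a basic SV model via the Kalman filter on
    log r_t^2, with measurement noise given the mean and variance of a
    log chi-squared(1) variable."""
    mu, phi, s_eta = params
    y = np.log(r ** 2 + 1e-12) + 1.2704        # center by E[log chi^2_1] = -1.2704
    R = np.pi ** 2 / 2                         # Var[log chi^2_1]
    a, P = mu, s_eta ** 2 / (1 - phi ** 2)     # start at stationary moments
    ll = 0.0
    for yt in y:
        F = P + R                              # prediction-error variance
        v = yt - a                             # prediction error
        ll += -0.5 * (np.log(2 * np.pi * F) + v ** 2 / F)
        a, P = a + P / F * v, P - P ** 2 / F   # measurement update
        a, P = mu + phi * (a - mu), phi ** 2 * P + s_eta ** 2  # time update
    return ll

# Simulate an SV path and evaluate the quasi-likelihood at the true values.
rng = np.random.default_rng(2)
T, mu, phi, s_eta = 2000, -1.0, 0.95, 0.2
h = np.empty(T); h[0] = mu
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + s_eta * rng.standard_normal()
r = np.exp(h / 2) * rng.standard_normal(T)
ll_true = sv_qml_loglik((mu, phi, s_eta), r)
```

Maximizing this quasi-likelihood over (mu, phi, s_eta) gives the QML estimator; the paper's point is that replacing the matched-Gaussian approximation with a mixture of normals changes both the persistence estimates and the volatility filter.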

11.
Like several recent studies, in this paper we employ a Bayesian approach to estimation and inference in models with endogeneity concerns, imposing prior assumptions weaker than complete excludability. When instrument imperfection of this type is allowed, the model is only partially identified, and as a consequence standard estimates obtained from the Gibbs simulations can be unacceptably imprecise. We therefore describe a substantially improved ‘semi-analytic’ method for calculating the parameter marginal posteriors of interest that requires only the well-mixing simulations associated with the identifiable model parameters and the form of the conditional prior. Our methods are applied in an illustrative application involving the impact of body mass index on earnings. Copyright © 2014 John Wiley & Sons, Ltd.

12.
In this paper, we consider testing distributional assumptions in multivariate GARCH models based on empirical processes. Using the fact that the joint distribution carries the same amount of information as the marginal together with the conditional distributions, we first transform the multivariate data into univariate independent data based on the marginal and conditional cumulative distribution functions. We then apply Khmaladze's martingale transformation (K-transformation) to the empirical process in the presence of estimated parameters. The K-transformation eliminates the effect of parameter estimation, allowing a distribution-free test statistic to be constructed. We show that the K-transformation takes a very simple form for testing multivariate normal and multivariate t-distributions. The procedure is applied to a multivariate financial time series data set.
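
The first step, mapping multivariate data to independent univariate data via marginal and conditional CDFs, is the Rosenblatt transform. A minimal sketch for a standardized bivariate normal with known correlation (in practice the CDFs would come from the estimated MGARCH model, and the correlation would itself be an estimate):

```python
import numpy as np
from scipy import stats

def rosenblatt_bvn(X, rho):
    """Rosenblatt transform for a standardized bivariate normal with
    correlation rho: map (x1, x2) to independent U(0,1) draws via the
    marginal CDF of x1 and the conditional CDF of x2 given x1."""
    u1 = stats.norm.cdf(X[:, 0])
    u2 = stats.norm.cdf((X[:, 1] - rho * X[:, 0]) / np.sqrt(1 - rho ** 2))
    return np.column_stack([u1, u2])

rng = np.random.default_rng(6)
rho = 0.7
C = np.array([[1.0, rho], [rho, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], C, size=3000)
U = rosenblatt_bvn(X, rho)
```

Under a correct distributional assumption the transformed data are i.i.d. uniform, so a univariate empirical-process test can be applied; the K-transformation then removes the effect of the estimated parameters.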

13.
This paper is concerned with the Bayesian estimation and comparison of flexible, high dimensional multivariate time series models with time varying correlations. The model proposed and considered here combines features of the classical factor model with those of the heavy tailed univariate stochastic volatility model. A unified analysis of the model, and its special cases, is developed that encompasses estimation, filtering and model choice. The centerpieces of the estimation algorithm (which relies on MCMC methods) are: (1) a reduced blocking scheme for sampling the free elements of the loading matrix and the factors and (2) a special method for sampling the parameters of the univariate SV process. The resulting algorithm is scalable in the number of series and factors, and simulation-efficient. Methods for estimating the log-likelihood function and the filtered values of the time-varying volatilities and correlations are also provided. The performance and effectiveness of the inferential methods are extensively tested using simulated data where models up to 50 dimensions and 688 parameters are fit and studied. The performance of our model, in relation to various multivariate GARCH models, is also evaluated using a real data set of weekly returns on a set of 10 international stock indices. We consider the performance along two dimensions: the ability to correctly estimate the conditional covariance matrix of future returns and the unconditional and conditional coverage of the 5% and 1% value-at-risk (VaR) measures of four pre-defined portfolios.

14.
We use frequency domain techniques to estimate a medium‐scale dynamic stochastic general equilibrium (DSGE) model on different frequency bands. We show that goodness of fit, forecasting performance and parameter estimates vary substantially with the frequency bands over which the model is estimated. Estimates obtained using subsets of frequencies are characterized by significantly different parameters, an indication that the model cannot match all frequencies with one set of parameters. In particular, we find that: (i) the low‐frequency properties of the data strongly affect parameter estimates obtained in the time domain; (ii) the importance of economic frictions in the model changes when different subsets of frequencies are used in estimation. This is particularly true for the investment adjustment cost and habit persistence: when low frequencies are present in the estimation, the investment adjustment cost and habit persistence are estimated to be higher than when low frequencies are absent. Copyright © 2014 John Wiley & Sons, Ltd.

15.
We introduce a class of semiparametric time series models (SemiParTS) driven by a latent factor process. The proposed SemiParTS class is flexible because, given the latent process, only the conditional mean and variance of the time series are specified. These are the primary features of SemiParTS: (i) no parametric form is assumed for the conditional distribution of the time series given the latent process; (ii) it is suitable for a wide range of data: non-negative, count, bounded, binary, and real-valued time series; (iii) it does not constrain the dispersion parameter to be known. Quasi-likelihood inference is employed to estimate the parameters in the mean function. We derive explicit expressions for the marginal moments and for the autocorrelation function of the time series process, so that a method of moments can be employed to estimate the dispersion parameter as well as the parameters related to the latent process. Simulation results that check the proposed estimation procedure are presented. Forecasting procedures are proposed and evaluated on simulated and real data. Analyses of the number of hospital admissions due to asthma and of a total insolation time series illustrate the potential of the proposed models in practical situations.

16.
This paper proposes a conditional density model that allows for differing left/right tail indices and time-varying volatility based on the dynamic conditional score (DCS) approach. The asymptotic properties of the maximum likelihood estimates are presented under verifiable conditions together with simulations showing effective estimation with practical sample sizes. It is shown that tail asymmetry is prevalent in global equity index returns and can be mistaken for skewness through the center of the distribution. The importance of tail asymmetry for asset allocation and risk premia is demonstrated in-sample. Application to portfolio construction out-of-sample is then considered, with a representative investor willing to pay economically and statistically significant management fees to use the new model instead of traditional skewed models to determine their asset allocation.

17.
In this paper we consider estimating an approximate factor model in which candidate predictors are subject to sharp spikes such as outliers or jumps. Given that these sharp spikes are assumed to be rare, we formulate the estimation problem as a penalized least squares problem by imposing a norm penalty function on those sharp spikes. Such a formulation allows us to disentangle the sharp spikes from the common factors and estimate them simultaneously. Numerical values of the estimates can be obtained by solving a principal component analysis (PCA) problem and a one-dimensional shrinkage estimation problem iteratively. In addition, it is easy to incorporate methods for selecting the number of common factors in the iterations. We compare our method with PCA by conducting simulation experiments in order to examine their finite-sample performances. We also apply our method to the prediction of important macroeconomic indicators in the U.S., and find that it can deliver performances that are comparable to those of the PCA method.
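
The alternating iteration can be sketched directly: fit a rank-r common component by PCA/SVD, soft-threshold the residual to isolate the spikes, and repeat. A hedged numpy sketch with a hand-picked penalty level and a single planted spike (an l1 penalty is assumed, which makes the shrinkage step elementwise soft-thresholding):

```python
import numpy as np

def soft(x, lam):
    """Elementwise soft-thresholding, the solution of the one-dimensional
    l1-penalized least squares problem."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def factor_with_spikes(X, r, lam, n_iter=50):
    """Alternate a rank-r PCA fit with soft-thresholding of the residual
    to separate common factors from rare sharp spikes (illustrative)."""
    S = np.zeros_like(X)
    for _ in range(n_iter):
        U, d, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = U[:, :r] * d[:r] @ Vt[:r]      # rank-r common component
        S = soft(X - L, lam)               # shrinkage step isolates the spikes
    return L, S

rng = np.random.default_rng(3)
T, N, r = 200, 20, 2
F = rng.standard_normal((T, r))
Lam = rng.standard_normal((N, r))
X = F @ Lam.T + 0.1 * rng.standard_normal((T, N))
X_spiked = X.copy()
X_spiked[10, 3] += 25.0                    # one planted sharp spike
L, S = factor_with_spikes(X_spiked, r, lam=1.0)
```

Because the ordinary noise is far below the threshold, S ends up sparse and concentrated on the planted spike, while the low-rank part L recovers the common factor structure.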

18.
In this work, we analyze the performance of production units using the directional distance function, which measures the distance to the frontier of the production set along any direction in input/output space. We show that this distance can be expressed as a simple transformation of the radial or hyperbolic distance. This formulation allows us to define robust directional distances along the lines of α-quantile or order-m partial frontiers, as well as conditional directional distance functions that are conditional on environmental factors. We propose simple estimation methods and derive the asymptotic properties of our estimators.

19.
The exponentiated Weibull distribution is a convenient alternative to the generalized gamma distribution for modeling time-to-event data. It accommodates both monotone and nonmonotone hazard shapes, and is flexible enough to describe data with wide-ranging characteristics. It can also be used for regression analysis of time-to-event data. The maximum likelihood method is thus far the most widely used technique for inference, though there is a considerable body of research on improving maximum likelihood estimators in terms of asymptotic efficiency. For example, there has recently been considerable attention to applying James–Stein shrinkage ideas to parameter estimation in regression models. We propose nonpenalty shrinkage estimation for the exponentiated Weibull regression model for time-to-event data. Comparative studies suggest that the shrinkage estimators outperform the maximum likelihood estimators in terms of statistical efficiency. Overall, the shrinkage method leads to more accurate statistical inference, a fundamental and desirable component of statistical theory.
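
For reference, the baseline maximum likelihood fit (the benchmark the proposed shrinkage estimators improve upon) is readily available, since SciPy ships the exponentiated Weibull as `exponweib` (shape `a` exponentiates the Weibull CDF, `c` is the Weibull shape). A minimal sketch on simulated data, with the location pinned at zero:

```python
import numpy as np
from scipy import stats

# Simulate exponentiated Weibull time-to-event data with known shapes.
rng = np.random.default_rng(4)
data = stats.exponweib.rvs(a=2.0, c=1.5, scale=1.0, size=2000, random_state=rng)

# Maximum likelihood fit; floc=0 fixes the location at zero, as is usual
# for lifetime data.
a_hat, c_hat, loc_hat, scale_hat = stats.exponweib.fit(data, floc=0)
```

The two shape parameters trade off against each other, so individual estimates can be imprecise even when the fitted distribution tracks the data closely; this instability is part of what shrinkage approaches exploit.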

20.
Multivariate GARCH (MGARCH) models need to be restricted so that their estimation is feasible in large systems and so that covariance stationarity and the positive definiteness of the conditional covariance matrices are guaranteed. This paper analyzes the limitations of some of the popular restricted parametric MGARCH models that are often used to represent the dynamics observed in real systems of financial returns. These limitations are illustrated using simulated data generated by general VECH models of different dimensions in which volatilities and correlations are interrelated. We show that the restrictions imposed by the BEKK model are very unrealistic, generating potentially misleading forecasts of conditional correlations. On the other hand, models based on the DCC specification provide appropriate forecasts. Alternative estimators of the parameters simplify the computations and have no implications for the estimates of the conditional correlations. The implications of the restrictions imposed by the different MGARCH specifications considered are illustrated by forecasting the volatilities and correlations of a five-dimensional system of exchange rate returns.
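
The DCC specification favored here can be sketched via its scalar correlation recursion: a quasi-correlation matrix Q_t mean-reverts toward the unconditional target, reacts to outer products of standardized residuals, and is rescaled to a proper correlation matrix each period. A minimal sketch with illustrative parameters:

```python
import numpy as np

def dcc_correlations(Z, a, b):
    """Scalar DCC(1,1) correlation recursion: given standardized residuals
    Z (T x N), return the implied conditional correlation matrices R_t.
    Requires a, b >= 0 and a + b < 1 for mean reversion."""
    T, N = Z.shape
    S = np.corrcoef(Z, rowvar=False)   # unconditional correlation target
    Q = S.copy()
    R = np.empty((T, N, N))
    for t in range(T):
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)      # rescale Q to a correlation matrix
        z = Z[t]
        Q = (1 - a - b) * S + a * np.outer(z, z) + b * Q
    return R

rng = np.random.default_rng(5)
Z = rng.standard_normal((500, 5))      # stand-in for GARCH-standardized residuals
R = dcc_correlations(Z, a=0.05, b=0.90)
```

Because each update is a convex combination of positive semidefinite matrices, every Q_t (and hence every R_t) is a valid correlation matrix, which is the feasibility property that motivates restricted specifications like DCC in the first place.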


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号