Similar Literature
20 similar documents found.
1.
Labour Economics, 2007, 14(1): 73-98
Regression models of wage determination are typically estimated by ordinary least squares using the logarithm of the wage as the dependent variable. These models provide consistent estimates of the proportional impact of wage determinants only under the assumption that the distribution of the error term is independent of the regressors, an assumption that can be violated by the presence of heteroskedasticity, for example. Failure of this assumption is particularly relevant in the estimation of the impact of union status on wages. Alternative wage-equation estimators based on quasi-maximum-likelihood methods are consistent under weaker assumptions about the dependence between the error term and the regressors, and they also provide a way to check the specification of the underlying wage model. Applying this approach to a standard data set, I find that the impact of unions on wages is overstated by 20-30 percent when estimates from log-wage regressions are used for inference.
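As a sketch of the issue (with simulated data and illustrative parameter values, not the paper's data set), the snippet below contrasts OLS on log wages with a Poisson quasi-maximum-likelihood regression of the wage level, one common QML alternative, when the error spread depends on union status:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 50_000
union = rng.binomial(1, 0.3, n)            # union status dummy
exper = rng.uniform(0, 30, n)              # years of experience

# True proportional union premium: 15% on the wage level.
mu = np.exp(1.0 + 0.15 * union + 0.02 * exper)
# Multiplicative error whose spread depends on union status, so the
# error is not independent of the regressors and log-wage OLS is biased.
sigma = 0.6 - 0.4 * union
eps = np.exp(rng.normal(0, sigma) - sigma**2 / 2)   # E[eps | x] = 1
wage = mu * eps

X = sm.add_constant(np.column_stack([union, exper]))
ols = sm.OLS(np.log(wage), X).fit()                         # conventional
qmle = sm.GLM(wage, X, family=sm.families.Poisson()).fit()  # quasi-ML

print("log-wage OLS union coefficient:", round(ols.params[1], 3))   # overstated
print("Poisson QMLE union coefficient:", round(qmle.params[1], 3))  # ~ 0.15
```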

2.
In this paper, we introduce the one-step generalized method of moments (GMM) estimation methods considered in Lee (2007a) and Liu, Lee, and Bollinger (2010) to spatial models that impose a spatial moving average process for the disturbance term. First, we determine the set of best linear and quadratic moment functions for GMM estimation. Second, we show that the optimal GMM estimator (GMME) formulated from this set is the most efficient estimator within the class of GMMEs formulated from the set of linear and quadratic moment functions. Our analytical results show that the one-step GMME can be more efficient than the quasi-maximum likelihood estimator (QMLE) when the disturbance term is simply i.i.d. With an extensive Monte Carlo study, we compare its finite sample properties against the MLE, the QMLE and the estimators suggested in Fingleton (2008a).
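A heavily simplified sketch of the idea, with identity moment weighting, a single quadratic moment, and a made-up ring-shaped weight matrix, so this illustrates the moment-stacking logic rather than the authors' optimal GMME:

```python
import numpy as np

def sma_gmm_objective(rho, y, X, M):
    """GMM objective for a regression with SMA(1) disturbance
    u = xi + rho * M @ xi, using a linear moment block E[X' xi] = 0 and
    one quadratic moment E[xi' M xi] = 0 (valid since diag(M) = 0).
    Identity weighting only; the optimal GMME would weight the moments."""
    n = len(y)
    S = np.eye(n) + rho * M
    Xs, ys = np.linalg.solve(S, X), np.linalg.solve(S, y)
    beta = np.linalg.lstsq(Xs, ys, rcond=None)[0]  # beta concentrated out
    xi = ys - Xs @ beta
    g = np.concatenate([X.T @ xi / n, [xi @ M @ xi / n]])
    return g @ g

# Toy data: ring-shaped weight matrix, true rho = 0.5.
rng = np.random.default_rng(8)
n = 200
M = np.roll(np.eye(n), 1, axis=1)    # each unit has a single neighbour
X = np.column_stack([np.ones(n), rng.normal(size=n)])
xi = rng.normal(size=n)
y = X @ [1.0, 2.0] + xi + 0.5 * (M @ xi)
grid = np.linspace(-0.9, 0.9, 91)
print(grid[np.argmin([sma_gmm_objective(r, y, X, M) for r in grid])])
# grid minimizer should be near the true rho = 0.5
```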

3.
We propose new forecast combination schemes for predicting turning points of business cycles. The proposed combination schemes are based on the forecasting performances of a given set of models, with the aim of providing better turning point predictions. In particular, we consider predictions generated by autoregressive (AR) and Markov-switching AR models, which are commonly used for business cycle analysis. In order to account for parameter uncertainty we adopt a Bayesian approach for both estimation and prediction and compare, in terms of statistical accuracy, the individual models and the combined turning point predictions for the United States and the Euro area business cycles.
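As a stylized illustration of performance-based combination (the weighting rule and all numbers below are assumptions for exposition, not the authors' Bayesian scheme), one can weight each model's turning point probability by its historical log predictive score:

```python
import numpy as np

def log_score_weights(probs_hist, outcomes):
    """Combination weights proportional to exp(average log predictive
    score), so models that predicted past turning points well get more
    weight.  probs_hist: (T, M) past probabilities; outcomes: (T,) 0/1."""
    p = np.clip(probs_hist, 1e-6, 1 - 1e-6)
    ls = outcomes[:, None] * np.log(p) + (1 - outcomes[:, None]) * np.log(1 - p)
    w = np.exp(ls.mean(axis=0))
    return w / w.sum()

# Toy example: a useful model and a pure-noise model over 40 periods.
rng = np.random.default_rng(1)
truth = (rng.random(40) < 0.2).astype(float)
good = np.clip(0.6 * truth + rng.normal(0.2, 0.1, 40), 0.01, 0.99)
noise = rng.uniform(0.01, 0.99, 40)
w = log_score_weights(np.column_stack([good, noise]), truth)
print("weights:", w.round(2))
print("combined next-period probability:", w @ np.array([0.7, 0.4]))
```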

4.
5.
A new semi-parametric expected shortfall (ES) estimation and forecasting framework is proposed. The proposed approach is based on a two-step estimation procedure. The first step involves the estimation of value at risk (VaR) at different quantile levels through a set of quantile time series regressions. Then, the ES is computed as a weighted average of the estimated quantiles. The quantile weighting structure is parsimoniously parameterized by means of a beta weight function whose coefficients are optimized by minimizing a joint VaR and ES loss function of the Fissler–Ziegel class. The properties of the proposed approach are first evaluated with an extensive simulation study using two data generating processes. Two forecasting studies with different out-of-sample sizes are then conducted, one of which focuses on the 2008 Global Financial Crisis period. The proposed models are applied to seven stock market indices, and their forecasting performances are compared to those of a range of parametric, non-parametric, and semi-parametric models, including GARCH, conditional autoregressive expectile (CARE), joint VaR and ES quantile regression models, and a simple average of quantiles. The results of the forecasting experiments provide clear evidence in support of the proposed models.
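A minimal sketch of the two-step logic, assuming the step-one quantile estimates are already available (here the exact normal quantiles, so the weighted average can be checked against the known normal ES); the paper's optimization of the beta coefficients via the Fissler–Ziegel loss is omitted:

```python
import numpy as np
from scipy import stats

def beta_weighted_es(quantiles, u, a, b):
    """ES as a weighted average of step-one quantile (VaR) estimates.
    u holds the quantile levels rescaled to (0, 1] within the tail, and
    the weights come from a beta density parameterized by (a, b)."""
    w = stats.beta.pdf(u, a, b)
    w = w / w.sum()
    return np.sum(w * quantiles)

# Step one (assumed done): VaR at several levels inside the 2.5% tail.
levels = np.array([0.005, 0.01, 0.015, 0.02, 0.025])
quantiles = stats.norm.ppf(levels)          # exact N(0,1) quantiles here

# Step two: weighted average; with a = b = 1 the weights are uniform.
# True 2.5% ES of N(0,1) is -pdf(ppf(0.025))/0.025, roughly -2.34.
print(beta_weighted_es(quantiles, levels / levels[-1], a=1.0, b=1.0))
```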

6.
Production takes time, and labor supply and profit maximization decisions that relate to current production are typically made before all shocks affecting that production have been realized. In this paper we re-examine the problem of stochastic optimal growth with aggregate risk where the timing of the model conforms to this information structure. We provide a set of conditions under which the economy has a unique, nontrivial and stable stationary distribution. In addition, we verify key optimality properties in the presence of unbounded shocks and rewards, and provide the sample path laws necessary for consistent estimation and simulation.
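A minimal simulation sketch of this logic, with illustrative functional forms and parameter values (a fixed saving rate and lognormal shocks are assumptions made here, not the paper's primitives): under stability, time averages along a simulated sample path approximate expectations under the stationary distribution.

```python
import numpy as np

# Stylized stochastic growth: k' = s * A * eps * k**alpha + (1 - delta) * k,
# with shocks realized after the current-period decisions are made.
rng = np.random.default_rng(2)
alpha, s, delta, A = 0.36, 0.2, 0.1, 1.0
T, burn = 50_000, 1_000

k = np.empty(T)
k[0] = 1.0
shocks = rng.lognormal(mean=-0.02, sigma=0.2, size=T - 1)
for t in range(T - 1):
    k[t + 1] = s * A * shocks[t] * k[t] ** alpha + (1 - delta) * k[t]

# Under the stability conditions, time averages along the sample path
# approximate expectations under the unique stationary distribution.
print("sample-path estimate of the stationary mean of k:",
      round(k[burn:].mean(), 3))
```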

7.
Bayesian modification indices are presented that provide information for the process of model evaluation and model modification. These indices can be used to investigate the improvement in a model if fixed parameters are re-specified as free parameters, and they can be seen as a Bayesian analogue to the modification indices commonly used in a frequentist framework. The aim is to provide diagnostic information for multi-parameter models where the number of possible model violations, and the related number of alternative models, is too large to render estimation of each alternative practical. As an example, the method is applied to an item response theory (IRT) model, namely the two-parameter model, and used to investigate differential item functioning and violations of the assumption of local independence.
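For concreteness, the two-parameter model in question and a crude check of local independence can be sketched as follows (the residual-correlation diagnostic is an illustrative simplification that stands in for a modification index, not the Bayesian indices themselves):

```python
import numpy as np

def p_correct(theta, a, b):
    """Two-parameter logistic IRT model: P(correct | ability theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

rng = np.random.default_rng(3)
n = 5_000
a, b = np.array([1.2, 0.8]), np.array([-0.5, 0.4])  # discrimination, difficulty
theta = rng.normal(size=n)
probs = p_correct(theta[:, None], a, b)
responses = (rng.random((n, 2)) < probs).astype(float)

# Local independence implies item residuals are uncorrelated given theta;
# a sizeable residual correlation flags the kind of violation a
# modification index would suggest freeing a parameter for.
resid = responses - probs
print("residual correlation:", round(np.corrcoef(resid.T)[0, 1], 3))  # ~ 0
```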

8.
We assess the predictive accuracy of a large number of multivariate volatility models in terms of pricing options on the Dow Jones Industrial Average. We measure the value of model sophistication in terms of dollar losses by considering a set of 444 multivariate models that differ in their specification of the conditional variance, conditional correlation, innovation distribution, and estimation approach. All of the models belong to the dynamic conditional correlation class, which is particularly suitable because it allows consistent estimation of the risk-neutral dynamics with a manageable amount of computational effort for relatively large-scale problems. It turns out that increasing the sophistication of the marginal variance processes (i.e., nonlinearity, asymmetry and component structure) leads to important gains in pricing accuracy, whereas enriching the model with more complex existing correlation specifications does not improve performance significantly. Estimating the standard dynamic conditional correlation model by composite likelihood, in order to take into account potential biases in the parameter estimates, generates only slightly better results. To remedy the poor performance of the correlation models, we propose a new model that allows for correlation spillovers without too many parameters; it performs about 60% better than the existing correlation models we consider. Replacing the Gaussian innovation assumption with a Laplace one improves the pricing in a more modest way. In addition to investigating the value of model sophistication in terms of dollar losses directly, we also use the model confidence set approach to statistically infer the set of models that delivers the best pricing performance.
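The common core of this model class is the dynamic conditional correlation recursion of Engle (2002); a minimal sketch, with illustrative parameter values and standardized residuals assumed to come from previously fitted univariate GARCH models:

```python
import numpy as np

def dcc_correlations(z, a=0.05, b=0.93):
    """Correlation recursion of the standard DCC model (Engle, 2002):
    Q_t = (1 - a - b) * Qbar + a * z_{t-1} z_{t-1}' + b * Q_{t-1},
    R_t = diag(Q_t)^{-1/2} Q_t diag(Q_t)^{-1/2}.
    z: (T, N) residuals standardized by univariate GARCH volatilities."""
    T, N = z.shape
    Qbar = np.cov(z.T)
    Q = Qbar.copy()
    R = np.empty((T, N, N))
    for t in range(T):
        if t > 0:
            e = z[t - 1][:, None]
            Q = (1 - a - b) * Qbar + a * (e @ e.T) + b * Q
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)
    return R

rng = np.random.default_rng(4)
z = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=500)
print(dcc_correlations(z)[-1].round(2))   # conditional correlations at T
```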

9.
Efficient market models cannot explain the high level of trading in financial markets in terms of asset portfolio adjustment. It is presumed that much of this excessive trading is irrational ‘noise’ trading. A corollary is that there must either be irrational traders in the market or rational traders with irrational aberrations. The paper reviews the various attempts to explain noise trading in the finance literature, concluding that the persistence of irrationality is not well explained. Data from a study of 118 traders in four large investment banks are presented to advance reasons why traders might seek to trade more frequently than financial models predict. The argument is advanced that trades are not made simply to generate profit, but it does not follow that such trading is irrational. Trading may generate information, accelerate learning, create commitments and enhance social capital, all of which sustain traders' long-term survival in the market. The paper treats noise trading as a form of operational risk facing firms operating in financial markets and discusses approaches to the management of such risk.

10.
Survey Estimates by Calibration on Complex Auxiliary Information
In the last decade, calibration estimation has developed into an important field of research in survey sampling. Calibration is now an important methodological instrument in the production of statistics. Several national statistical agencies have developed software designed to compute calibrated weights based on auxiliary information available in population registers and other sources. This paper reviews some recent progress and offers some new perspectives. Calibration estimation can be used to advantage in a range of different survey conditions. This paper examines several situations, including estimation for domains in one-phase sampling, estimation for two-phase sampling, and estimation for two-stage sampling with integrated weighting. Typical of those situations is complex auxiliary information, a term that we use for information made up of several components. An example occurs when a two-stage sample survey has information both for units and for clusters of units, or when estimation for domains relies on information from different parts of the population. Complex auxiliary information opens up more than one way of computing the final calibrated weights to be used in estimation. They may be computed in a single step or in two or more successive steps. Depending on the approach, the resulting estimates do differ to some degree. All significant parts of the total information should be reflected in the final weights. The effectiveness of the complex information is mirrored by the variance of the resulting calibration estimator. Its exact variance is not presentable in simple form. Close approximation is possible via the corresponding linearized statistic. We define and use automated linearization as a shortcut in finding the linearized statistic. Its variance is easy to state, to interpret and to estimate. The variance components are expressed in terms of residuals, similar to those of standard regression theory. Visual inspection of the residuals reveals how the different components of the complex auxiliary information interact and work together toward reducing the variance.
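As a sketch of the basic mechanism, linear (chi-square distance) calibration has a closed form; the example below, with made-up design weights and auxiliary totals, adjusts the weights so that they reproduce known population totals:

```python
import numpy as np

def linear_calibration(d, x, totals):
    """Chi-square-distance calibration: minimize sum((w - d)**2 / d)
    subject to x.T @ w = totals; closed form w = d * (1 + x @ lam)."""
    xd = x * d[:, None]
    lam = np.linalg.solve(x.T @ xd, totals - x.T @ d)
    return d * (1 + x @ lam)

rng = np.random.default_rng(5)
n = 200
d = np.full(n, 10.0)                                       # design weights
x = np.column_stack([np.ones(n), rng.uniform(20, 60, n)])  # 1, age
totals = np.array([2_000.0, 82_000.0])   # known pop. size and age total
w = linear_calibration(d, x, totals)
print(x.T @ w)   # calibrated weights reproduce the known totals exactly
```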

11.
In this article, we merge two strands of the recent econometric literature. The first is factor models based on large sets of macroeconomic variables, which have generally proven useful for forecasting, although there is some disagreement in the literature as to the appropriate estimation method. The second is forecast methods based on mixed-frequency data sampling (MIDAS). This regression technique can take into account unbalanced datasets that emerge from the publication lags of high- and low-frequency indicators, a problem practitioners have to cope with in real time. In this article, we introduce Factor MIDAS, an approach for nowcasting and forecasting low-frequency variables like gross domestic product (GDP) by exploiting the information in a large set of higher-frequency indicators. We consider three alternative MIDAS approaches (basic, smoothed and unrestricted) that provide harmonized projection methods and allow for a comparison of the alternative factor estimation methods with respect to nowcasting and forecasting. Common to all the factor estimation methods employed here is that they can handle unbalanced datasets, as typically faced in real-time forecast applications owing to publication lags. In particular, we focus on variants of static and dynamic principal components as well as Kalman filter estimates in state-space factor models. As an empirical illustration of the technique, we use a large monthly dataset of the German economy to nowcast and forecast quarterly GDP growth. We find that the factor estimation methods do not differ substantially, whereas the most parsimonious MIDAS projection performs best overall. Finally, quarterly models are in general outperformed by the Factor MIDAS models, which confirms the usefulness of mixed-frequency techniques that can exploit timely information from business cycle indicators.
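A minimal sketch of the MIDAS weighting idea underlying these projections (the exponential Almon parameter values and monthly readings below are illustrative assumptions):

```python
import numpy as np

def exp_almon_weights(theta1, theta2, n_lags):
    """Exponential Almon lag polynomial used in MIDAS regressions:
    w_j proportional to exp(theta1 * j + theta2 * j**2), normalized."""
    j = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * j + theta2 * j ** 2)
    return w / w.sum()

# Aggregate 3 months of a monthly indicator into one quarterly regressor:
# y_q = beta0 + beta1 * sum_j w_j(theta) * x_{month j of quarter q} + error.
w = exp_almon_weights(0.1, -0.05, n_lags=3)
x_monthly = np.array([0.4, 0.9, 1.1])      # illustrative monthly readings
print(w, w @ x_monthly)
```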

12.
A new framework for the joint estimation and forecasting of dynamic value at risk (VaR) and expected shortfall (ES) is proposed by incorporating intraday information into the generalized autoregressive score (GAS) models introduced by Patton et al. (2019) to estimate risk measures in a quantile-regression set-up. We consider four intraday measures: the realized volatility at 5-min and 10-min sampling frequencies, and the overnight return incorporated into these two realized volatilities. In a forecasting study, the set of newly proposed semiparametric models is applied to four international stock market indices (S&P 500, Dow Jones Industrial Average, Nikkei 225 and FTSE 100) and compared with a range of parametric, nonparametric and semiparametric models, including historical simulation, generalized autoregressive conditional heteroscedasticity (GARCH) models and the original GAS models. VaR and ES forecasts are backtested individually, and the joint loss function is used for comparisons. Our results show that GAS models enhanced with the realized volatility measures outperform the benchmark models consistently across all indices and various probability levels.
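The joint loss typically used for such comparisons comes from the Fissler–Ziegel class; a sketch of the zero-homogeneous FZ0 form used by Patton, Ziegel and Chen (2019), with made-up forecasts and simulated returns:

```python
import numpy as np

def fz0_loss(y, var, es, alpha):
    """FZ0 joint loss for a (VaR, ES) forecast pair at level alpha
    (the zero-homogeneous member of the Fissler-Ziegel class used in
    Patton, Ziegel and Chen, 2019); requires es <= var < 0."""
    hit = (y <= var).astype(float)
    return -hit * (var - y) / (alpha * es) + var / es + np.log(-es) - 1.0

rng = np.random.default_rng(6)
y = 0.01 * rng.standard_t(df=5, size=5_000)          # simulated returns
calibrated = fz0_loss(y, var=-0.034, es=-0.045, alpha=0.01).mean()
too_shallow = fz0_loss(y, var=-0.010, es=-0.015, alpha=0.01).mean()
print(calibrated, too_shallow)   # the calibrated pair should score lower
```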

13.
We consider improved estimation strategies for the parameter matrix in multivariate multiple regression under a general and natural linear constraint. In the context of two competing models, where one model includes all predictors and the other restricts the variable coefficients to a candidate linear subspace based on prior information, there is a need to combine the two estimation techniques in an optimal way. In this scenario, we suggest shrinkage estimators for the targeted parameter matrix and examine their performance relative to the subspace and candidate-subspace restricted type estimators. We develop a large-sample theory for the estimators, including derivation of the asymptotic bias and asymptotic distributional risk of the suggested estimators. Furthermore, we conduct Monte Carlo simulation studies to appraise the relative performance of the suggested estimators against the classical estimators. The methods are also applied to a real data set for illustrative purposes.
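A textbook scalar Stein-type rule conveys the flavour of such shrinkage (the matrix-valued estimators and risk analysis in the paper are more involved; all numbers here are illustrative):

```python
import numpy as np

def stein_shrinkage(beta_full, beta_restricted, wald, q):
    """Scalar Stein-type rule: shrink the full-model estimator toward the
    restricted one, with the amount of shrinkage inversely related to the
    Wald statistic testing the restriction (q restrictions, q > 2)."""
    shrink = (q - 2) / wald
    return beta_restricted + (1.0 - shrink) * (beta_full - beta_restricted)

beta_full = np.array([1.4, -0.3, 0.8])   # unrestricted estimates (made up)
beta_restr = np.array([1.2, 0.0, 1.0])   # estimates under the restriction
print(stein_shrinkage(beta_full, beta_restr, wald=15.0, q=4))
```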

14.
Generalized extreme value (GEV) random utility choice models have been suggested as a development of the multinomial logit model that allows the random components of the various alternatives to be statistically dependent. This paper establishes the existence of, and provides necessary and sufficient uniqueness conditions for, the solutions to a set of equations that may be interpreted as an equilibrium of an economy whose demand side is described by a multiple-segment GEV random choice model. The same equations may alternatively be interpreted in a maximum likelihood estimation context. The method employed is based on optimization theory and may provide a useful computational approach. The uniqueness results suggest a way to introduce segregation/integration effects into logit-type choice models. Generalizations to non-GEV models are touched upon.
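The nested logit is the canonical GEV member beyond the multinomial logit; a sketch of its choice probabilities (the utilities, nest structure and dissimilarity parameters below are made up):

```python
import numpy as np

def nested_logit_probs(v, nests, lam):
    """Choice probabilities for a two-level nested logit, a GEV model that
    allows correlated utilities within nests.
    v: utilities per alternative; nests: nest label per alternative;
    lam: dict of dissimilarity parameters in (0, 1] per nest."""
    labels = sorted(set(nests))
    # Inclusive value per nest: IV_k = log(sum_{i in k} exp(v_i / lam_k)).
    iv = {k: np.log(np.sum(np.exp(v[nests == k] / lam[k]))) for k in labels}
    denom = sum(np.exp(lam[k] * iv[k]) for k in labels)
    probs = np.empty_like(v)
    for k in labels:
        p_nest = np.exp(lam[k] * iv[k]) / denom          # nest probability
        within = np.exp(v[nests == k] / lam[k]) / np.exp(iv[k])
        probs[nests == k] = p_nest * within
    return probs

v = np.array([1.0, 1.2, 0.5])            # illustrative utilities
nests = np.array(["car", "car", "bus"])  # two alternatives share a nest
print(nested_logit_probs(v, nests, {"car": 0.5, "bus": 1.0}))
```

Setting every dissimilarity parameter to 1 removes the within-nest correlation and recovers the plain multinomial logit.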

15.
Quality-adjusted life year (QALY) models are widely used for economic evaluation in the health care sector. In the first part of the paper, we establish an overview of QALY models where health varies over time and provide a theoretical analysis of model identification and parameter estimation from time trade-off (TTO) and standard gamble (SG) scores. We investigate deterministic and probabilistic models and consider five different families of discounting functions in all. The second part of the paper discusses four issues recurrently debated in the literature: questioning the SG method as the gold standard for estimation of the health state index, re-examining the role of the constant-proportional trade-off condition, revisiting the problem of double discounting of QALYs, and suggesting that it is not a matter of choosing between TTO and SG procedures, as the combination of the two can be used to disentangle risk aversion from discounting. We find that caution must be taken when drawing conclusions from models with chronic health states to situations where health varies over time. One notable difference is that in the former case, risk aversion may be indistinguishable from discounting.
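For reference, a minimal sketch of TTO scoring and constant-rate QALY discounting (the rate and durations are illustrative; the paper's point is precisely that discounting and risk attitude interact in subtler ways than this baseline suggests):

```python
import numpy as np

def discounted_qalys(q, r):
    """QALYs for a health profile q_1..q_T (one value per year) under
    constant-rate discounting: sum_t q_t / (1 + r)**t."""
    t = np.arange(1, len(q) + 1)
    return np.sum(np.asarray(q) / (1 + r) ** t)

# TTO logic: if x years in full health are judged equivalent to t years
# in state h, the undiscounted value of state h is x / t.
x, t = 7.0, 10.0
q_tto = x / t                                    # health-state index = 0.7
print(discounted_qalys([q_tto] * 10, r=0.03))    # discounted QALYs, 10 years
```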

16.
The minimum discrimination information principle is used to identify an appropriate parametric family of probability distributions and the corresponding maximum likelihood estimators for binary response models. Estimators in the family subsume the conventional logit model and form the basis for a set of parametric estimation alternatives with the usual asymptotic properties. Sampling experiments are used to assess finite sample performance.

17.
For multifactor designs based on linear models, the information matrix generally depends on a certain set of marginal tables created from the design itself. This note considers the problems of whether a set of marginal tables is consistent, in the sense that a design exists that can yield them, and of calculating such a design when at least one does exist. The results are obtained by direct analogy with the problem of maximum likelihood estimation in log-linear models for categorical data.

18.
The paper proposes a novel approach to predicting intraday directional movements of currency pairs in the foreign exchange market based on news story events in the economic calendar. Prior work on using textual data for forecasting foreign exchange market developments does not consider economic calendar events. We consider a rich set of text analytics methods to extract information from news story events and propose a novel sentiment dictionary for the foreign exchange market. The paper shows how news events and the corresponding news stories provide valuable information that increases forecast accuracy and informs trading decisions. More specifically, using textual data together with technical indicators as inputs to different machine learning models reveals that the accuracy of market predictions shortly after the release of news is substantially higher than in other periods, which suggests the feasibility of news-based trading. Furthermore, empirical results identify a combination of a gradient boosting algorithm, our new sentiment dictionary, and text features based on term-frequency weighting as offering the most accurate forecasts. These findings are valuable for traders, risk managers and other consumers of foreign exchange market forecasts, and offer guidance on how to design accurate prediction systems.
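A toy sketch of the text pipeline (plain tf-idf features and scikit-learn's gradient boosting stand in for the paper's sentiment dictionary and full feature set; the corpus and labels are made up):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Made-up calendar-event news snippets and subsequent currency-pair moves
# (1 = up, 0 = down); real work would add technical indicators as features.
news = [
    "nonfarm payrolls beat expectations strong hiring",
    "ecb signals further easing weak growth outlook",
    "fed holds rates steady dovish statement",
    "inflation surprises to the upside hawkish tone",
]
direction = np.array([0, 1, 1, 0])   # illustrative intraday directions

model = make_pipeline(
    TfidfVectorizer(),                        # term-frequency weighted text
    GradientBoostingClassifier(random_state=0),
)
model.fit(news, direction)
print(model.predict(["payrolls miss expectations dovish fed"]))
```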

19.
Under a quantile restriction, randomly censored regression models can be written in terms of conditional moment inequalities. We study the identified features of these moment inequalities with respect to the regression parameters, allowing for covariate-dependent censoring, endogenous censoring and endogenous regressors. These inequalities restrict the parameters to a set. We show that regular point identification can be achieved under a set of interpretable sufficient conditions. We then provide a simple way to convert conditional moment inequalities into unconditional ones while preserving the informational content. Our method obviates the need for nonparametric estimation, which would require the selection of smoothing parameters and trimming procedures. Without the point identification conditions, our objective function can be used to conduct inference on the partially identified parameter. Maintaining the point identification conditions, we propose a quantile minimum distance estimator which converges at the parametric rate to the parameter vector of interest and has an asymptotically normal distribution. A small-scale simulation study and an application using drug relapse data demonstrate satisfactory finite sample performance.

20.
This paper proposes LASSO estimation tailored to panel vector autoregressive (PVAR) models. The penalty term allows for shrinkage for different lags, shrinkage towards homogeneous coefficients across panel units, penalization of lags of variables belonging to another cross-sectional unit, and varying penalization across equations. The penalty parameters therefore build on time series and cross-sectional properties that are commonly found in PVAR models. Simulation results point towards advantages of the proposed LASSO for PVAR models over ordinary least squares in terms of forecast accuracy. An empirical forecasting application including 20 countries supports these findings.
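A simplified sketch of equation-by-equation lasso on stacked lags (an ordinary lasso with a single penalty; the paper's PVAR-specific penalty that differentiates lags, own- versus cross-unit variables and equations is not implemented here):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)
T, K, p = 200, 4, 2          # periods, stacked unit-variables, lag order

# Simulate a stable VAR(1) written in VAR(p) notation (true lag-2 block = 0).
A = 0.4 * np.eye(K)
y = np.zeros((T, K))
for t in range(1, T):
    y[t] = y[t - 1] @ A.T + rng.normal(0, 1.0, K)

X = np.hstack([y[p - l:T - l] for l in range(1, p + 1)])  # [y_{t-1}, y_{t-2}]
Y = y[p:]

# One lasso per equation with a common penalty; the PVAR refinement would
# scale the penalty by lag order, by own- versus cross-unit variables,
# and across equations.
coefs = np.vstack([
    Lasso(alpha=0.01, max_iter=10_000).fit(X, Y[:, k]).coef_
    for k in range(K)
])
print(coefs.round(2))   # own first lags near 0.4; redundant lags near 0
```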
