Similar articles
20 similar articles found.
1.
The paper demonstrates how various parametric models for duration data such as the exponential, Weibull, and log-normal may be embedded in a single framework, and how such competing models may be assessed relative to a more comprehensive one. To illustrate the issues addressed, the survival patterns of marriages among 1203 Swedish men born 1936–1964 are studied by parametric and non-parametric survival methods. In particular, we study the sensitivity of model choice with respect to the level of aggregation of the time variable, and of covariate effects with respect to the model chosen. In accordance with previous work, our empirical results indicate that the choice of a parametric model for the duration variable is affected by the level of time aggregation. In contrast to previous results, however, our analysis shows that estimates of covariate effects are not always robust to distributional assumptions for the duration variable.
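A minimal sketch (simulated durations, not the paper's Swedish marriage data) of how competing parametric duration models can be fitted by maximum likelihood and compared on a common scale; all names and numbers below are illustrative.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
durations = rng.weibull(1.4, size=500) * 10.0   # hypothetical uncensored durations

fits = {
    "exponential": stats.expon.fit(durations, floc=0),
    "weibull":     stats.weibull_min.fit(durations, floc=0),
    "log-normal":  stats.lognorm.fit(durations, floc=0),
}
dists = {"exponential": stats.expon, "weibull": stats.weibull_min, "log-normal": stats.lognorm}

for name, params in fits.items():
    ll = dists[name].logpdf(durations, *params).sum()
    print(f"{name:12s} log-likelihood = {ll:.1f}")

# The exponential is the Weibull with shape fixed at 1, so those two can be compared
# by a likelihood-ratio test; the log-normal is non-nested and would be assessed
# against a more comprehensive family, as the paper discusses.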

2.
This paper applies a large number of models to three previously-analyzed data sets, and compares the point estimates and confidence intervals for technical efficiency levels. Classical procedures include multiple comparisons with the best, based on the fixed effects estimates; a univariate version, marginal comparisons with the best; bootstrapping of the fixed effects estimates; and maximum likelihood given a distributional assumption. Bayesian procedures include a Bayesian version of the fixed effects model, and various Bayesian models with informative priors for efficiencies. We find that fixed effects models generally perform poorly; there is a large payoff to distributional assumptions for efficiencies. We do not find much difference between Bayesian and classical procedures, in the sense that the classical MLE based on a distributional assumption for efficiencies gives results that are rather similar to a Bayesian analysis with the corresponding prior.
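A hedged sketch of the fixed-effects efficiency estimate referred to above (in the spirit of the Schmidt-Sickles estimator), computed on simulated panel data; the within estimator and the normalisation against the best firm are standard, but the data and names are placeholders.

import numpy as np

rng = np.random.default_rng(1)
n_firms, n_periods = 50, 10
firm = np.repeat(np.arange(n_firms), n_periods)
x = rng.normal(size=firm.size)
alpha = rng.normal(size=n_firms)                      # true firm effects (unobserved)
y = 0.5 * x + alpha[firm] + 0.1 * rng.normal(size=firm.size)

# Within (fixed effects) estimator of the slope
x_dem = x - np.bincount(firm, x)[firm] / n_periods
y_dem = y - np.bincount(firm, y)[firm] / n_periods
beta = (x_dem @ y_dem) / (x_dem @ x_dem)

# Firm effects and efficiency relative to the best firm
alpha_hat = np.bincount(firm, y - beta * x) / n_periods
efficiency = np.exp(alpha_hat - alpha_hat.max())      # in (0, 1], best firm = 1
print(efficiency.round(3)[:10])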

3.
Although each statistical unit on which measurements are taken is unique, typically there is not enough information available to account totally for its uniqueness. Therefore, heterogeneity among units has to be limited by structural assumptions. One classical approach is to use random effects models, which assume that heterogeneity can be described by distributional assumptions. However, inference may depend on the assumed mixing distribution, and it is assumed that the random effects and the observed covariates are independent. An alternative considered here is fixed effects models, which let each unit have its own parameter. They are quite flexible but suffer from the large number of parameters. The structural assumption made here is that there are clusters of units that share the same effects. It is shown how clusters can be identified by tailored regularised estimators. Moreover, it is shown that the regularised estimates compete well with estimates for the random effects model, even if the latter is the data-generating model. They dominate if clusters are present.

4.
Covariate Measurement Error in Quadratic Regression
We consider quadratic regression models where the explanatory variable is measured with error. The effect of classical measurement error is to flatten the curvature of the estimated function. The effect on the observed turning point depends on the location of the true turning point relative to the population mean of the true predictor. Two methods for adjusting parameter estimates for the measurement error are compared. First, two versions of regression calibration estimation are considered. This approximates the model between the observed variables using the moments of the true explanatory variable given its surrogate measurement. For certain models an expanded regression calibration approximation is exact. The second approach uses moment-based methods which require no assumptions about the distribution of the covariates measured with error. The estimates are compared in a simulation study, and used to examine the sensitivity to measurement error in models relating income inequality to the level of economic development. The simulations indicate that the expanded regression calibration estimator dominates the other estimators when its distributional assumptions are satisfied. When they fail, a small-sample modification of the method-of-moments estimator performs best. Both estimators are sensitive to misspecification of the measurement error model.
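A small simulation sketch of the attenuation effect and of plain regression calibration (replacing the surrogate by E[x | w]); the measurement-error variance is assumed known here, which the paper's estimators do not require, and all names are illustrative.

import numpy as np

rng = np.random.default_rng(2)
n = 5000
x = rng.normal(0.0, 1.0, n)                  # true covariate
w = x + rng.normal(0.0, 0.8, n)              # surrogate measured with error
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(0.0, 0.5, n)

def quad_ols(z, y):
    Z = np.column_stack([np.ones_like(z), z, z**2])
    return np.linalg.lstsq(Z, y, rcond=None)[0]

naive = quad_ols(w, y)                       # curvature attenuated toward zero

# Regression calibration: E[x | w] = mu + lam * (w - mu), lam = var(x) / var(w)
lam = 1.0**2 / (1.0**2 + 0.8**2)             # variances assumed known in this sketch
x_rc = w.mean() + lam * (w - w.mean())
calibrated = quad_ols(x_rc, y)

print("naive      :", naive.round(2))
print("calibrated :", calibrated.round(2))
# The slope and curvature move back toward (2.0, -1.5); the intercept still absorbs
# a term in Var(x | w), which the "expanded" calibration in the abstract addresses.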

5.
We take as a starting point the existence of a joint distribution implied by different dynamic stochastic general equilibrium (DSGE) models, all of which are potentially misspecified. Our objective is to compare "true" joint distributions with ones generated by given DSGEs. This is accomplished via comparison of the empirical joint distributions (or confidence intervals) of historical and simulated time series. The tool draws on recent advances in the theory of the bootstrap, Kolmogorov type testing, and other work on the evaluation of DSGEs, aimed at comparing the second order properties of historical and simulated time series. We begin by fixing a given model as the "benchmark" model, against which all "alternative" models are to be compared. We then test whether at least one of the alternative models provides a more "accurate" approximation to the true cumulative distribution than does the benchmark model, where accuracy is measured in terms of distributional square error. Bootstrap critical values are discussed, and an illustrative example is given, in which it is shown that alternative versions of a standard DSGE model in which calibrated parameters are allowed to vary slightly perform equally well. On the other hand, there are stark differences between models when the shocks driving the models are assigned implausible variances and/or distributional assumptions.
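A rough sketch, on simulated placeholder series rather than actual DSGE output, of the kind of distributional squared-error comparison described above: each candidate model's simulated series is scored by the squared distance between its empirical CDF and that of the historical series. The bootstrap critical values used in the paper are not reproduced.

import numpy as np

def ecdf(sample, grid):
    return (sample[:, None] <= grid[None, :]).mean(axis=0)

def cdf_sq_error(historical, simulated, n_grid=200):
    grid = np.linspace(min(historical.min(), simulated.min()),
                       max(historical.max(), simulated.max()), n_grid)
    return np.mean((ecdf(historical, grid) - ecdf(simulated, grid)) ** 2)

rng = np.random.default_rng(3)
historical  = rng.normal(0.0, 1.0, 400)
benchmark   = rng.normal(0.2, 1.0, 400)      # stand-in for the benchmark model's series
alternative = rng.normal(0.0, 1.3, 400)      # stand-in for an alternative model's series

print("benchmark  :", cdf_sq_error(historical, benchmark))
print("alternative:", cdf_sq_error(historical, alternative))
# In the paper, bootstrap critical values decide whether any alternative beats the
# benchmark; here only the point estimates of the distance are computed.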

6.
A framework for the detection of change points in the expectation in sequences of random variables is presented. Specifically, we investigate time series with general distributional assumptions that may show an unknown number of change points in the expectation occurring on multiple time scales and that may also contain change points in other parameters. To that end we propose a multiple filter test (MFT) that tests the null hypothesis of constant expectation and, in case of rejection of the null hypothesis, an algorithm that estimates the change points. The MFT has three important benefits. First, it allows for general distributional assumptions in the underlying model, assuming piecewise sequences of i.i.d. random variables, where relaxations of the identical-distribution or independence requirements are also possible. Second, it uses a MOSUM-type statistic and an asymptotic setting in which the MOSUM process converges weakly to a functional of a Brownian motion, which is then used to simulate the rejection threshold of the statistical test. This approach enables a simultaneous application of multiple MOSUM processes, which improves the detection of change points that occur on different time scales. Third, we also show that the method is practically robust against changes in other distributional parameters such as the variance or higher order moments, which might occur with or even without a change in expectation. A function implementing the described test and change point estimation is available in the R package MFT.
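A simplified illustration of a single MOSUM-type statistic for a change in expectation; the actual MFT combines several window sizes and derives its threshold from the Brownian-motion limit, neither of which is done in this sketch.

import numpy as np

def mosum(x, h):
    """Moving-sum contrast: scaled difference of means in adjacent windows of size h."""
    x = np.asarray(x, dtype=float)
    stats = np.full(x.size, np.nan)
    for t in range(h, x.size - h):
        left, right = x[t - h:t], x[t:t + h]
        pooled_var = 0.5 * (left.var(ddof=1) + right.var(ddof=1))
        stats[t] = (right.mean() - left.mean()) / np.sqrt(2.0 * pooled_var / h)
    return stats

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(1.5, 1, 300)])  # one change at t = 300
g = mosum(x, h=50)
print("largest |MOSUM| near index", np.nanargmax(np.abs(g)))          # close to 300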

7.
This paper reviews aspects of the application of logit models in economics. We consider some economic models that lead to a simple or a multinomial logit specification. A detailed account is given of the possible specifications of the multinomial model. We stress the relationship between distributional assumptions and functional form assumptions. Some (mis)specification tests with individual data are discussed. Moreover, we present a new test. We also consider grouping of continuously recorded data, and the biases introduced by an inappropriate grouping.
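A minimal illustration of the multinomial logit specification, with made-up coefficients, showing how choice probabilities follow from alternative-specific indices.

import numpy as np

def mnl_probabilities(X, betas):
    """X: (n, k) covariates; betas: (J, k), one coefficient vector per alternative."""
    utilities = X @ betas.T                              # (n, J) systematic utilities
    utilities -= utilities.max(axis=1, keepdims=True)    # numerical stability
    expu = np.exp(utilities)
    return expu / expu.sum(axis=1, keepdims=True)

X = np.array([[1.0, 0.5], [1.0, -1.0]])                  # e.g. intercept and one covariate
betas = np.array([[0.0, 0.0],                            # base alternative (normalised to zero)
                  [0.3, 1.2],
                  [-0.5, 0.8]])
print(mnl_probabilities(X, betas).round(3))              # each row sums to 1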

8.
We evaluate the performance of several volatility models in estimating one-day-ahead Value-at-Risk (VaR) of seven stock market indices using a number of distributional assumptions. Because all returns series exhibit volatility clustering and long-range memory, we examine GARCH-type models, including fractionally integrated models, under normal, Student-t and skewed Student-t distributions. Consistent with the idea that the accuracy of VaR estimates is sensitive to the adequacy of the volatility model used, we find that the AR(1)-FIAPARCH(1,d,1) model, under a skewed Student-t distribution, outperforms all the models that we have considered, including widely used ones such as GARCH(1,1) or HYGARCH(1,d,1). The superior performance of the skewed Student-t FIAPARCH model holds for all stock market indices, and for both long and short trading positions. Our findings can be explained by the fact that the skewed Student-t FIAPARCH model can jointly account for the salient features of financial time series: fat tails, asymmetry, volatility clustering and long memory. In the same vein, because it fails to account for most of these stylized facts, the RiskMetrics model provides the least accurate VaR estimation. Our results corroborate the calls for the use of more realistic assumptions in financial modeling.
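A bare-bones sketch of how a volatility forecast is turned into a one-day-ahead VaR, using a plain GARCH(1,1) with normal errors fitted by maximum likelihood on simulated returns; the paper's preferred AR(1)-FIAPARCH with a skewed Student-t is considerably richer, and the series below is a placeholder.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)
r = 0.01 * rng.standard_t(df=6, size=1500)        # placeholder return series, mean assumed zero

def garch_filter(params, r):
    omega, alpha, beta = params
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()
    for t in range(1, r.size):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

def neg_loglik(params, r):
    sigma2 = garch_filter(params, r)
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + r ** 2 / sigma2)

res = minimize(neg_loglik, x0=[1e-6, 0.05, 0.90], args=(r,),
               bounds=[(1e-12, None), (0.0, 1.0), (0.0, 1.0)])
omega, alpha, beta = res.x

# One-day-ahead variance forecast and 99% VaR for a long position
sigma2 = garch_filter(res.x, r)
sigma2_next = omega + alpha * r[-1] ** 2 + beta * sigma2[-1]
var_99 = -norm.ppf(0.01) * np.sqrt(sigma2_next)
print(f"one-day 99% VaR: {var_99:.4f}")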

9.
The two-tiered stochastic frontier model has enjoyed success across a range of application domains where it is believed that incomplete information on both sides of the market leads to surplus which buyers and sellers can extract. Currently, this model is hindered by the fact that estimation relies on very restrictive distributional assumptions on the behavior of incomplete information on both sides of the market. However, this reliance on specific parametric distributional assumptions can be eschewed if the scaling property is invoked. The scaling property has been well studied in the stochastic frontier literature, but as of yet, has not been used in the two-tier frontier setting.

10.
This paper estimates food Engel curves using data from the first wave of the Survey on Health, Aging and Retirement in Europe (SHARE). Our statistical model simultaneously takes into account selectivity due to unit and item nonresponse, endogeneity problems, and issues related to flexible specification of the relationship of interest. We estimate both parametric and semiparametric specifications of the model. The parametric specification assumes that the unobservables in the model follow a multivariate Gaussian distribution, while the semiparametric specification avoids distributional assumptions about the unobservables. Copyright © 2011 John Wiley & Sons, Ltd.

11.
This study estimates and compares the hedge ratios of the conventional and the error correction models using Japan's Nikkei Stock Average (NSA) index and the NSA index futures with different time intervals. Comparisons of out-of-sample hedging performance reveal that the error correction model outperforms the conventional model, suggesting that the hedge ratios obtained by using the error correction model do a better job in reducing the risk of the cash position than those from the conventional model. In addition, this paper evaluates the effects of temporal aggregation on hedge ratios. It is found that temporal aggregation has important effects on the hedge ratio estimates.
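A sketch, on simulated cointegrated prices rather than the NSA data, of the two hedge-ratio estimators being compared: the conventional OLS ratio from return regressions and the ratio from an error correction specification that adds the lagged long-run residual. Variable names are illustrative.

import numpy as np

rng = np.random.default_rng(6)
n = 1000
futures = np.cumsum(rng.normal(0, 1, n))
spot = futures + rng.normal(0, 0.5, n)                 # spot and futures cointegrated

d_spot, d_fut = np.diff(spot), np.diff(futures)

# Conventional hedge ratio: OLS slope of spot changes on futures changes
h_conv = (d_fut @ d_spot) / (d_fut @ d_fut)

# Error correction model: add the lagged cointegrating residual as a regressor
resid = spot - futures * ((futures @ spot) / (futures @ futures))   # long-run relation
Z = np.column_stack([np.ones(n - 1), d_fut, resid[:-1]])
h_ecm = np.linalg.lstsq(Z, d_spot, rcond=None)[0][1]

print(f"conventional hedge ratio: {h_conv:.3f}")
print(f"ECM hedge ratio:          {h_ecm:.3f}")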

12.
This paper introduces a class of robust estimators of the parameters of a stochastic utility function. Existing maximum likelihood and regression estimation methods require the assumption of a particular distributional family for the random component of utility. In contrast, estimators of the ‘maximum score’ class require only weak distributional assumptions for consistency. Following presentation and proof of the basic consistency theorem, additional results are given. An algorithm for achieving maximum score estimates and some small sample Monte Carlo tests are also described.
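A toy version of the maximum score idea for a binary choice model: the estimator maximizes the fraction of correctly "signed" choices over coefficient vectors on the unit sphere, needing no particular error distribution. A crude random search stands in for the algorithm described in the paper, and the logistic error below is only one admissible choice.

import numpy as np

rng = np.random.default_rng(7)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([0.5, 1.0, -1.0])
y = (X @ beta_true + rng.logistic(size=n) > 0).astype(int)   # any error law would do

def score(b, X, y):
    # average agreement between the sign of the index and the observed choice
    return np.mean((2 * y - 1) * np.sign(X @ b))

candidates = rng.normal(size=(20000, 3))
candidates /= np.linalg.norm(candidates, axis=1, keepdims=True)   # normalise away the scale
best = max(candidates, key=lambda b: score(b, X, y))
print("maximum score direction:", best.round(2))
print("true direction:         ", (beta_true / np.linalg.norm(beta_true)).round(2))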

13.
The paper provides a new and more explicit formulation of the assumptions needed by the ordinary ecological regression to provide unbiased estimates and clarifies why violations of these assumptions will affect any method of ecological inference. Empirical evidence is obtained by showing that estimates provided by three main ecological inference methods are heavily biased when compared with multilevel logistic regression applied to a unique set of individual data on voting behaviour. The main findings of our paper have two important implications that can be extended to all situations where the assumptions needed to apply ecological inference are violated in the data: (i) only ecological inference methods that allow one to model the effect of covariates have a chance to produce unbiased estimates, and (ii) there are certain data generating mechanisms producing a kind of bias in ecological estimates that cannot be corrected by modelling the effect of covariates.

14.
Using stratified microdata from the Canadian FAMEX (78–86) surveys, this paper investigates whether observed heterogeneity in marginal propensities to consume across strata actually hinders the aggregation process. Despite significant heterogeneity in marginal responses, the divergences between aggregate predicted consumption and the predictions from a model that uses average strata responses are found to be small, whenever the strata demands are approximately linear at the mean and the commodity group considered is not a luxury good. On the other hand, some cross-sectional estimates obtained by pooling the strata are shown to be contaminated by unwanted cross-moments. Further, the analysis reconciles the fact that while there exists significant heterogeneity in consumer demands, the related distributional effects in the aggregate equation have not been found to be important.

15.
We demonstrate the use of a Naïve Bayes model as a recession forecasting tool. The approach is closely connected with Markov-switching models and logistic regression, but also has important differences. In contrast to Markov-switching models, our Naïve Bayes model treats National Bureau of Economic Research business cycle turning points as data, rather than as hidden states to be inferred by the model. Although Naïve Bayes and logistic regression are asymptotically equivalent under certain distributional assumptions, the assumptions do not hold for business cycle data. As a result, Naïve Bayes has a larger asymptotic error rate, but converges to the error rate more quickly than logistic regression, resulting in more accurate recession forecasts with limited data. We show that Naïve Bayes outperforms competing models and the Survey of Professional Forecasters consistently for real-time recession forecasting up to 12 months in advance. These results hold under standard error measures, and also under a novel measure that varies the penalty on false signals, depending on when they occur within a cycle; for example, a false signal in the middle of an expansion is penalized more heavily than one that occurs close to a turning point.
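A hand-rolled Gaussian Naive Bayes sketch of the recession-forecasting setup, with simulated features and labels standing in for the NBER turning-point and predictor data; the point is only to show how regime-specific feature distributions combine into a recession probability.

import numpy as np

rng = np.random.default_rng(8)
n = 600
recession = rng.binomial(1, 0.2, n)                       # 1 = recession month (placeholder labels)
X = np.column_stack([rng.normal(-1.0 * recession, 1.0),   # e.g. a yield-curve-type indicator
                     rng.normal(0.5 * recession, 1.0)])   # e.g. an unemployment-change indicator

def fit_gnb(X, y):
    params = {}
    for c in (0, 1):
        Xc = X[y == c]
        params[c] = (np.log(np.mean(y == c)), Xc.mean(axis=0), Xc.var(axis=0))
    return params

def predict_proba(params, x):
    logp = {}
    for c, (log_prior, mu, var) in params.items():
        logp[c] = log_prior - 0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
    m = max(logp.values())
    w = {c: np.exp(v - m) for c, v in logp.items()}
    return w[1] / (w[0] + w[1])

params = fit_gnb(X, recession)
print("P(recession | x):", round(predict_proba(params, np.array([-1.2, 0.7])), 3))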

16.
In structural equation modeling the statistician needs assumptions in order (1) to guarantee that the estimates are consistent for the parameters of interest, and (2) to evaluate the precision of the estimates and the significance level of test statistics. With respect to purpose (1), the typical types of analysis (ML and WLS) are robust against violations of distributional assumptions; i.e., estimates remain consistent for any type of WLS analysis and distribution of z. (It should be noted, however, that (1) is sensitive to structural misspecification.) A typical assumption used for purpose (2) is that the vector z of observables follows a multivariate normal distribution. In relation to purpose (2), distributional misspecification may have consequences for efficiency, as well as for the power of test statistics (see Satorra, 1989a); that is, some estimation methods may be more precise than others for a given specific distribution of z. For instance, ADF-WLS is asymptotically optimal under a variety of distributions of z, while the asymptotic optimality of NT-WLS may be lost when the data are non-normal.

17.
This paper presents and estimates an input–output model in which input coefficient changes are functions of changing prices. The model produces results that mirror the characteristics of input demand functions based on the model of cost minimization subject to producing a desired level of output. It does not rely on the specification of a functional form for input coefficients, and it does not require the use of assumptions regarding the elasticity of substitution. Instead, it allows the actual price and coefficient changes that occur between periods to identify the implicit elasticities and own- and cross-price derivatives. Using this model, it is shown how accurate measures of price effects, including the full array of own and cross-elasticities of demand, can be estimated for models comprising up to 15 sectors given data for only two time periods.

18.
Several optimum non-parametric tests for heteroscedasticity are proposed and studied, along with the tests introduced in the literature, in terms of power and robustness properties. It is found that all tests are reasonably robust to the Ordinary Least Squares (OLS) residual estimates and to the number and character of the regressors. Only a few are robust to both the distributional and independence assumptions about the errors. The power of the tests can be improved with the OLS residual estimates, increased sample size and greater variability of the regressors. It can be substantially reduced if the observations are not normally distributed, and may increase or decrease if the errors are dependent. Each test is optimum for detecting a specific form of heteroscedasticity, and a serious power loss may occur if the form of heteroscedasticity in the data generation deviates from the one assumed.
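A generic rank-based heteroscedasticity check, shown only as an illustration of the non-parametric idea on simulated data; it is not one of the optimum tests proposed in the paper, and all names are placeholders.

import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n = 300
x = rng.uniform(1, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 0.2 * x, n)          # error variance grows with x

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Rank-correlate the absolute OLS residuals with the regressor
rho, pval = stats.spearmanr(np.abs(resid), x)
print(f"Spearman rho = {rho:.3f}, p-value = {pval:.4f}")   # small p-value flags heteroscedasticity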

19.
This paper provides a framework for building and estimating non-linear real exchange rate models. The approach derives the stationary distribution from a continuous time error correction model and estimates this by MLE methods. The derived distribution exhibits a wide variety of distributional shapes including multimodality. The main result is that swings in the US/UK rate over the period 1973:3 to 1990:5 can be attributed to the distribution becoming bimodal with the rate switching between equilibria. By capturing these changes in the distribution, the non-linear model yields improvements over the random walk, the speculative efficiency model, and Hamilton's stochastic segmented trends model.

20.
Studies of efficiency in banking and elsewhere often impose arbitrary assumptions on the distributions of efficiency and random error in order to separate one from the other. In this study, we impose much less structure on these distributions and only assume that efficiencies are stable over time while random error tends to average out. We are able to do so by estimating firm-specific effects on costs using panel data sets of over 28,000 observations on U.S. banks from 1980 to 1989. We find results similar to the literature: X-efficiencies, or managerial differences in efficiency, are important in banking, while scale-efficiency differences are not. However, we also find that the distributional assumptions usually imposed in the literature are not very consistent with these data.

