Similar Literature
Found 20 similar documents (search time: 875 ms).
1.
Comparisons between alternative scenarios are used in many disciplines, from macroeconomics through epidemiology to climate science, to help with planning future responses. Differences between scenario paths are often interpreted as signifying likely differences between outcomes that would materialise in reality. However, even when using correctly specified statistical models of the in-sample data generation process, additional conditions are needed to sustain inferences about differences between scenario paths. We consider two questions in scenario analyses: First, does testing the difference between scenarios yield additional insight beyond simple tests conducted on the model estimated in-sample? Second, when does the estimated scenario difference yield unbiased estimates of the true difference in outcomes? Answering the first question, we show that the calculation of uncertainties around scenario differences raises difficult issues, since the underlying in-sample distributions are identical for both ‘potential’ outcomes when the reported paths are deterministic functions. Under these circumstances, a scenario comparison adds little beyond testing for the significance of the perturbed variable in the estimated model. Resolving the second question, when models include multiple covariates, inferences about scenario differences depend on the relationships between the conditioning variables, especially their invariance to the interventions being implemented. Tests for invariance based on the automatic detection of structural breaks can help identify the in-sample invariance of models to evaluate likely constancy in projected scenarios. Applications of scenario analyses to impacts on the UK’s wage share from unemployment and agricultural growth from climate change illustrate the concepts.
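To make the first point concrete, here is a minimal toy sketch in Python (our own construction, not the paper's UK applications): when the scenario input follows a deterministic path, the gap between the two projected outcome paths is a deterministic function of the estimated coefficient, so "testing" that gap collapses onto the in-sample significance test of the coefficient.

```python
# Toy sketch: scenario difference under deterministic scenario inputs.
import numpy as np

rng = np.random.default_rng(1)
n = 200
z = rng.normal(size=n)                       # conditioning/policy variable
y = 0.4 * z + rng.normal(scale=0.5, size=n)  # in-sample data generation

beta_hat = (z @ y) / (z @ z)                 # OLS slope estimate

# Two deterministic future paths for z: baseline vs. intervention
z_base = np.zeros(10)
z_alt = np.ones(10)
gap = beta_hat * (z_alt - z_base)            # projected scenario difference
print(gap)  # every element equals beta_hat: nothing beyond its t-test
```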

2.
Covariate Measurement Error in Quadratic Regression
We consider quadratic regression models where the explanatory variable is measured with error. The effect of classical measurement error is to flatten the curvature of the estimated function. The effect on the observed turning point depends on the location of the true turning point relative to the population mean of the true predictor. Two methods for adjusting parameter estimates for the measurement error are compared. First, two versions of regression calibration estimation are considered. This approximates the model between the observed variables using the moments of the true explanatory variable given its surrogate measurement. For certain models an expanded regression calibration approximation is exact. The second approach uses moment-based methods which require no assumptions about the distribution of the covariates measured with error. The estimates are compared in a simulation study, and used to examine the sensitivity to measurement error in models relating income inequality to the level of economic development. The simulations indicate that the expanded regression calibration estimator dominates the other estimators when its distributional assumptions are satisfied. When they fail, a small-sample modification of the method-of-moments estimator performs best. Both estimators are sensitive to misspecification of the measurement error model.
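A minimal simulation sketch of the attenuation effect described above (our own toy setup; the paper's regression calibration and method-of-moments corrections are not implemented here): classical measurement error flattens the estimated quadratic curvature.

```python
# Classical measurement error attenuates the quadratic coefficient.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
x = rng.normal(0.0, 1.0, n)          # true predictor
w = x + rng.normal(0.0, 0.8, n)      # surrogate with classical error
y = 1.0 + 0.5 * x - 0.7 * x**2 + rng.normal(0.0, 0.3, n)

def quad_fit(z, y):
    """OLS fit of y on [1, z, z^2]; returns (b0, b1, b2)."""
    Z = np.column_stack([np.ones_like(z), z, z**2])
    return np.linalg.lstsq(Z, y, rcond=None)[0]

print("true x :", quad_fit(x, y))    # curvature near -0.7
print("noisy w:", quad_fit(w, y))    # curvature attenuated toward 0
```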

3.
This paper deals with estimation of a production technology where endogenous choice of input and output variables is explicitly recognized. In particular, we assume that producers maximize return to the outlay (RO). For simplicity and tractability we start with a Cobb–Douglas transformation function with multiple inputs and outputs and show how the first-order conditions of RO maximization can be used to derive an estimating equation, which is nothing but a partial input productivity equation. This equation does not suffer from the econometric endogeneity problem although the output and input variables are endogenous. First, we consider the case where producers are fully efficient allocatively but technically inefficient. The model is estimated using a single-equation stochastic frontier approach. The model is then extended to allow allocative inefficiency, and it is estimated as a system using the generalized method of moments. Algebraic expressions are derived to decompose the effects of technical and allocative inefficiencies on RO. We also consider translog specifications that are estimated both as (1) a single-equation frontier model and (2) a system. We use panel data on Norwegian fishing trawlers to estimate the model. Outputs are the different species caught, while inputs are labor and vessel size. We also control for the number of days of operation, the age of the vessel and year effects. Empirical results show that the average rate of RO is reduced by about 20 to 30% due to technical inefficiency. On the other hand, average allocative efficiency is found to be about 78%. The average overall efficiency is found to be around 60%.

4.
We investigate a novel database of 10,217 extreme operational losses from the Italian bank UniCredit. Our goal is to shed light on the dependence between the severity distribution of these losses and a set of macroeconomic, financial, and firm‐specific factors. To do so, we use generalized Pareto regression techniques, where both the scale and shape parameters are assumed to be functions of these explanatory variables. We perform the selection of the relevant covariates with a state‐of‐the‐art penalized‐likelihood estimation procedure relying on L1‐penalty terms. A simulation study indicates that this approach efficiently selects covariates of interest and tackles spurious regression issues encountered when dealing with integrated time series. Lastly, we illustrate the impact of different economic scenarios on the requested capital for operational risk. Our results have important implications in terms of risk management and regulatory policy.
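A hedged sketch of the modelling idea, assuming a single covariate, a log link for the scale parameter and a constant shape parameter; the paper's L1-penalized covariate selection step is omitted:

```python
# Generalized Pareto regression: scale depends on a covariate via exp link.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(9)
n = 2_000
z = rng.normal(size=n)                        # macro/firm-specific covariate
sigma = np.exp(0.5 + 0.4 * z)                 # true scale regression
xi = 0.3                                      # true (positive) shape
losses = stats.genpareto.rvs(xi, scale=sigma, random_state=rng)

def negloglik(theta):
    b0, b1, xi = theta
    return -stats.genpareto.logpdf(losses, xi, scale=np.exp(b0 + b1 * z)).sum()

fit = optimize.minimize(negloglik, x0=[0.0, 0.0, 0.1], method="Nelder-Mead")
print(fit.x)   # roughly (0.5, 0.4, 0.3)
```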

5.
Many new statistical models may enjoy better interpretability and numerical stability than traditional models in survival data analysis. Specifically, the threshold regression (TR) technique based on the inverse Gaussian distribution is a useful alternative to the Cox proportional hazards model to analyse lifetime data. In this article we consider a semi‐parametric modelling approach for TR and contribute implementational and theoretical details for model fitting and statistical inferences. Extensive simulations are carried out to examine the finite sample performance of the parametric and non‐parametric estimates. A real example is analysed to illustrate our methods, along with a careful diagnosis of model assumptions.
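A hedged parametric sketch of threshold regression under our own minimal parameterization (the article's semi-parametric estimator is not reproduced): the lifetime is the first hitting time of a unit-variance Wiener process starting at level y0 > 0 with strictly negative drift mu, so it follows an inverse Gaussian law with mean y0/|mu| and shape y0^2, and both parameters are linked to a covariate.

```python
# Parametric threshold regression via the inverse Gaussian hitting-time law.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(8)
n = 500
x = rng.normal(size=n)
y0 = np.exp(0.5 + 0.3 * x)                    # initial process level
mu = -np.exp(-0.2 + 0.1 * x)                  # strictly negative drift

def ig_params(y0, mu):
    """Map (y0, mu) to scipy's invgauss parameterization."""
    mean, shape = y0 / np.abs(mu), y0**2      # IG(mean, shape), sigma = 1
    return mean / shape, shape                # scipy: invgauss(mean/shape, scale=shape)

m_param, shape = ig_params(y0, mu)
t = stats.invgauss.rvs(m_param, scale=shape, random_state=rng)

def negloglik(theta):
    a0, a1, b0, b1 = theta
    m, s = ig_params(np.exp(a0 + a1 * x), -np.exp(b0 + b1 * x))
    return -stats.invgauss.logpdf(t, m, scale=s).sum()

fit = optimize.minimize(negloglik, x0=[0.0, 0.0, 0.0, 0.0], method="Nelder-Mead")
print(fit.x)  # roughly (0.5, 0.3, -0.2, 0.1)
```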

6.
This paper examines the asymptotic and finite‐sample properties of tests of equal forecast accuracy when the models being compared are overlapping in the sense of Vuong (Econometrica 1989; 57: 307–333). Two models are overlapping when the true model contains just a subset of variables common to the larger sets of variables included in the competing forecasting models. We consider an out‐of‐sample version of the two‐step testing procedure recommended by Vuong but also show that an exact one‐step procedure is sometimes applicable. When the models are overlapping, we provide a simple‐to‐use fixed‐regressor wild bootstrap that can be used to conduct valid inference. Monte Carlo simulations generally support the theoretical results: the two‐step procedure is conservative, while the one‐step procedure can be accurately sized when appropriate. We conclude with an empirical application comparing the predictive content of credit spreads to growth in real stock prices for forecasting US real gross domestic product growth.

7.
The effect of differencing all of the variables in a properly specified regression equation is examined. Excessive use of the difference transformation induces a non-invertible moving average (MA) process in the disturbances of the transformed regression. Monte Carlo techniques are used to examine the effects of overdifferencing on the efficiency of regression parameter estimates, inferences based on these estimates, and tests for overdifferencing based on the estimator of the MA parameter for the disturbances of the differenced regression. Overall, the problem of overdifferencing is not serious if careful attention is paid to the properties of the disturbances of regression equations.
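The mechanism is easy to see numerically: differencing a series whose disturbances are already white noise produces an MA(1) with theta = -1, whose lag-1 autocorrelation is theta/(1 + theta^2) = -0.5. A short sketch:

```python
# Overdifferencing white noise yields a non-invertible MA(1).
import numpy as np

rng = np.random.default_rng(7)
u = rng.normal(size=100_000)          # well-behaved disturbances
du = np.diff(u)                       # overdifferenced: u_t - u_{t-1}

acf1 = np.corrcoef(du[:-1], du[1:])[0, 1]
print(acf1)                           # close to -0.5, as theta = -1 implies
```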

8.
Hira L. Koul, Metrika, 2002, 55(1–2): 75–90
Often in the robust analysis of regression and time series models there is a need for a robust estimator of a scale parameter of the errors. One often-used scale estimator is the median of the absolute residuals, s1. It is of interest to know its limiting distribution and consistency rate. Its limiting distribution generally depends on the estimator of the regression and/or autoregressive parameter vector unless the errors are symmetrically distributed around zero. To overcome this difficulty it is then natural to use the median of the absolute differences of pairwise residuals, s2, as a scale estimator. This paper derives the asymptotic distributions of these two estimators for a large class of nonlinear regression and autoregressive models when the errors are independent and identically distributed. It is found that the asymptotic distribution of a suitably standardized s2 is free of the initial estimator of the regression/autoregressive parameters. A similar conclusion also holds for s1 in linear regression models through the origin and with centered designs, and in linear autoregressive models with zero-mean errors. This paper also investigates the limiting distributions of these estimators in nonlinear regression models with long memory moving average errors. An interesting finding is that if the errors are symmetric around zero, then not only is the limiting distribution of a suitably standardized s1 free of the regression estimator, but it is degenerate at zero. On the other hand, a similarly standardized s2 converges in distribution to a normal distribution, regardless of whether the errors are symmetric or not. One clear conclusion is that under symmetry of the long memory moving average errors, the rate of consistency of s1 is faster than that of s2.
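A short Python sketch of the two estimators, computed from the residuals of a fitted linear regression (the standardizing constants needed to make them consistent for a particular error law are omitted):

```python
# s1: median absolute residual; s2: median absolute pairwise difference.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(6)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=1.5, size=n)

X = np.column_stack([np.ones(n), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ b                                          # residuals

s1 = np.median(np.abs(e))
s2 = np.median([abs(e[i] - e[j]) for i, j in combinations(range(n), 2)])
print(s1, s2)   # both estimate (different multiples of) the error scale
```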

9.
We discuss the problem of constructing a suitable regression model from a nonparametric Bayesian viewpoint. For this purpose, we consider the case where the error terms have symmetric and unimodal densities. By the Khintchine and Shepp theorem, the density of the response variable can be written as a scale mixture of uniform densities. The mixing distribution is assumed to have a Dirichlet process prior. We further consider appropriate prior distributions for the other parameters as components of the predictive device. Among the possible submodels, we select the one with the highest posterior probability. An example is given to illustrate the approach.

10.
We consider the ability to detect interaction structure from data in a regression context. We derive an asymptotic power function for a likelihood-based test for interaction in a regression model, with possibly misspecified alternative distribution. This allows a general investigation of different types of interactions which are poorly or well detected via data. Principally we contrast pairwise-interaction models with ‘diffuse interaction models’ as introduced in Gustafson et al. (Stat Med 24:2089–2104, 2005).

11.
In this paper, we develop and compare two alternative approaches for calculating the effect of the actual intake when treatments are randomized, but compliance with the assignment in the treatment arm is less than perfect for reasons that are correlated with the outcome. The approaches are based on different identification assumptions about these unobserved confounders. In the first approach, which stems from [Sommer, A., Zeger, S., 1991. On estimating efficacy in clinical trials. Statistics in Medicine 10, 45–52], the unobserved confounders are modeled by a discrete indicator variable that represents subject-type, defined in terms of the potential intake in the face of each possible assignment. In the second approach, confounding is modeled without reference to subject-type in the spirit of the Roy model. Because the two models are non-nested, and model comparison and assessment of the approaches in a real data setting is one of our central goals, we formulate the discussion from a Bayesian perspective, comparing the two models in terms of marginal likelihoods and Bayes factors, and in terms of inferences about the treatment effects. The latter we calculate from a predictive perspective in a way that is different from that in the literature, where typically only a point summary of that effect is calculated. Our real data analysis focuses on the JOBS II eligibility trial that was implemented to test the effectiveness of a job search seminar in decreasing the negative mental health effects commonly associated with job loss. We provide a comparative analysis of the data from the two approaches with prior distributions that are both reasonable in the context of the data and comparable across the model specifications. We show that the approaches can lead to different evaluations of the treatment.

12.
A simulation study was carried out to study the behaviour of the polychoric correlation coefficient in data not compliant with the assumption of underlying continuous variables. Such data can produce relatively high estimated polychoric correlations (on the order of .62). Applied researchers are prone to accept these artefacts as input for elaborate modelling (e.g., structural equation models) and to draw inferences about reality justified by the sheer magnitude of the correlations. In order to prevent this questionable research practice, it is recommended that, in applications of the polychoric correlation coefficient, the data be tested with a goodness-of-fit test of the underlying bivariate normal distribution (BND), that this statistic be reported in published applications, and that the polychoric correlation not be applied when the test is significant.

13.
A common strategy within the framework of regression models is the selection of variables with possible predictive value, which are incorporated in the regression model. Two recently proposed methods, Breiman's Garotte (Breiman, 1995) and Tibshirani's Lasso (Tibshirani, 1996), try to combine variable selection and shrinkage. We compare these with pure variable selection and shrinkage procedures. We consider the backward elimination procedure as a typical variable selection procedure and, as an example of a shrinkage procedure, the approach of van Houwelingen and le Cessie (1990). Additionally, an extension of van Houwelingen and le Cessie's approach proposed by Sauerbrei (1999) is considered. The ordinary least squares method is used as a reference.
With the help of a simulation study we compare these approaches with respect to the distribution of the complexity of the selected model, the distribution of the shrinkage factors, selection bias, the bias and variance of the effect estimates, and the average prediction error.
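A hedged sketch contrasting three of the approaches on one simulated data set (our own toy design and tuning choices; the Garotte and the van Houwelingen and le Cessie shrinkage are omitted):

```python
# OLS vs. lasso vs. backward elimination on a sparse linear model.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(5)
n, p = 100, 10
X = rng.normal(size=(n, p))
beta = np.array([2.0, -1.5, 0.8] + [0.0] * (p - 3))   # 3 true signals
y = X @ beta + rng.normal(size=n)

ols = LinearRegression().fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)  # shrinks and zeros small coefficients

def backward_eliminate(X, y, threshold=2.0):
    """Drop the variable with the smallest |t| until all exceed threshold."""
    keep = list(range(X.shape[1]))
    while keep:
        Xk = np.column_stack([np.ones(len(y)), X[:, keep]])
        b, *_ = np.linalg.lstsq(Xk, y, rcond=None)
        resid = y - Xk @ b
        s2 = resid @ resid / (len(y) - Xk.shape[1])
        se = np.sqrt(s2 * np.diag(np.linalg.inv(Xk.T @ Xk)))
        t = np.abs(b[1:] / se[1:])
        if t.min() >= threshold:
            break
        keep.pop(int(t.argmin()))
    return keep

print("OLS:     ", np.round(ols.coef_, 2))
print("lasso:   ", np.round(lasso.coef_, 2))
print("selected:", backward_eliminate(X, y))
```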

14.
J. Medhi, Metrika, 1973, 20(1): 215–218
In this note we consider a generalisation of the Stirling number of the second kind. The distribution of the sum of n independent zero-truncated Poisson variables, which can be expressed in terms of such a generalised number, may be called the horizontally generalised Stirling distribution of the second kind. A recurrence relation for the probability function of this distribution, which will be useful for tabulation purposes, is given. The distribution function is obtained in terms of a linear combination of incomplete gamma functions.
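For the classical (non-generalised) Stirling number S(k, n), the sum of n iid zero-truncated Poisson(λ) variables has the closed-form probability function P(S = k) = n! S(k, n) λ^k / (k! (e^λ − 1)^n), which follows from the pgf identity (e^{λt} − 1)^n = n! Σ_{k≥n} S(k, n) (λt)^k / k!. A hedged Python sketch with a Monte Carlo check:

```python
# Pmf of a sum of zero-truncated Poissons via Stirling numbers.
import math
import numpy as np

def stirling2(k, n):
    """S(k, n) via the recurrence S(k, n) = n*S(k-1, n) + S(k-1, n-1)."""
    S = [[0] * (n + 1) for _ in range(k + 1)]
    S[0][0] = 1
    for i in range(1, k + 1):
        for j in range(1, min(i, n) + 1):
            S[i][j] = j * S[i - 1][j] + S[i - 1][j - 1]
    return S[k][n]

def pmf_sum_ztp(k, n, lam):
    """P(S = k) for S = X_1 + ... + X_n, X_i iid zero-truncated Poisson(lam)."""
    return (math.factorial(n) * stirling2(k, n) * lam**k
            / (math.factorial(k) * math.expm1(lam) ** n))

rng = np.random.default_rng(4)
def rztpois(shape, lam):
    """Rejection sampler: redraw any zero Poisson draws."""
    out = rng.poisson(lam, shape)
    while (mask := out == 0).any():
        out[mask] = rng.poisson(lam, mask.sum())
    return out

n, lam = 3, 1.2
s = rztpois((200_000, n), lam).sum(axis=1)
print(pmf_sum_ztp(5, n, lam), (s == 5).mean())   # both near 0.25
```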

15.
This paper introduces a rank-based test for the instrumental variables regression model that dominates the Anderson–Rubin test in terms of finite sample size and asymptotic power in certain circumstances. The test has correct size for any distribution of the errors with weak or strong instruments. The test has noticeably higher power than the Anderson–Rubin test when the error distribution has thick tails and comparable power otherwise. Like the Anderson–Rubin test, the rank tests considered here perform best, relative to other available tests, in exactly identified models.

16.
We compare four different estimation methods for the coefficients of a linear structural equation with instrumental variables. As the classical methods we consider the limited information maximum likelihood (LIML) estimator and the two-stage least squares (TSLS) estimator, and as the semi-parametric estimation methods we consider the maximum empirical likelihood (MEL) estimator and the generalized method of moments (GMM) (or estimating equation) estimator. Tables and figures of the distribution functions of the four estimators are given for enough values of the parameters to cover most linear models of interest, and we include some heteroscedastic and nonlinear cases. We have found that the LIML estimator performs well in terms of the bounded loss functions and probabilities when the number of instruments is large, that is, in the micro-econometric models with “many instruments” in the terminology of the recent econometric literature.
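As a point of reference for the classical estimators, a minimal TSLS sketch on simulated data (the LIML, MEL and GMM estimators compared in the abstract are omitted):

```python
# Two-stage least squares with three instruments and one endogenous regressor.
import numpy as np

rng = np.random.default_rng(2)
n = 1_000
Z = rng.normal(size=(n, 3))                   # instruments
u = rng.normal(size=n)                        # structural error
x = Z @ np.array([0.5, 0.3, 0.2]) + 0.8 * u + rng.normal(size=n)
y = 1.0 * x + u                               # true beta = 1

x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]  # first stage: project x on Z
beta_tsls = (x_hat @ y) / (x_hat @ x)             # second stage
beta_ols = (x @ y) / (x @ x)
print(beta_tsls, beta_ols)                    # TSLS near 1; OLS biased upward
```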

17.
We consider the popular ‘bounds test’ for the existence of a level relationship in conditional equilibrium correction models. By estimating response surface models based on about 95 billion simulated F-statistics and 57 billion t-statistics, we improve upon and substantially extend the set of available critical values, covering the full range of possible sample sizes and lag orders, and allowing for any number of long-run forcing variables. By computing approximate P-values, we find that the bounds test can be easily oversized by more than 5 percentage points in small samples when using asymptotic critical values.

18.
In this paper we describe methods and evaluate programs for linear regression by maximum likelihood when the errors have a heavy-tailed stable distribution. The asymptotic Fisher information matrix for both the regression coefficients and the error distribution parameters is derived, giving large-sample confidence intervals for all parameters. Simulated examples are shown where the errors are stably distributed and also where the errors are heavy tailed but not stable, as well as a real example using financial data. The results are then extended to nonlinear models and to non-homogeneous error terms.
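A hedged sketch of the maximum likelihood fit, assuming symmetric stable errors and using scipy's levy_stable density (the setup and parameter names are ours; evaluating this density is numerically expensive):

```python
# ML linear regression with symmetric stable errors.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=n)
e = stats.levy_stable.rvs(1.5, 0.0, size=n, random_state=rng)  # alpha = 1.5
y = 2.0 + 1.0 * x + e

def negloglik(theta):
    b0, b1, log_scale, alpha = theta
    resid = y - b0 - b1 * x
    return -stats.levy_stable.logpdf(resid, alpha, 0.0,
                                     scale=np.exp(log_scale)).sum()

fit = optimize.minimize(negloglik, x0=[0.0, 0.0, 0.0, 1.8],
                        bounds=[(None, None), (None, None),
                                (None, None), (1.1, 1.99)],
                        method="L-BFGS-B")
print(fit.x)   # roughly (2, 1, 0, 1.5)
```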

19.
The identification of structural parameters in the linear instrumental variables (IV) model is typically achieved by imposing the prior identifying assumption that the error term in the structural equation of interest is orthogonal to the instruments. Since this exclusion restriction is fundamentally untestable, there are often legitimate doubts about the extent to which the exclusion restriction holds. In this paper I illustrate the effects of such prior uncertainty about the validity of the exclusion restriction on inferences based on linear IV models. Using a Bayesian approach, I provide a mapping from prior uncertainty about the exclusion restriction into increased uncertainty about parameters of interest. Moderate prior uncertainty about exclusion restrictions can lead to a substantial loss of precision in estimates of structural parameters. This loss of precision is relatively more important in situations where IV estimates appear to be more precise, for example in larger samples or with stronger instruments. I illustrate these points using several prominent recent empirical papers that use linear IV models. An accompanying electronic table allows users to readily explore the robustness of inferences to uncertainty about the exclusion restriction in their particular applications.

20.
Exact tests in single equation autoregressive distributed lag models
For hypotheses on the coefficient values of the lagged dependent variables in the ARX class of dynamic regression models, test procedures are developed which yield exact inference for a given (up to an unknown scale factor) distribution of the innovation errors. They include exact tests on the maximum lag length, for structural change, and on the presence of (seasonal or multiple) unit roots, i.e. they cover situations where asymptotic, non-exact t, F, AOC, ADF or HEGY tests are usually employed. The various procedures are demonstrated and compared in illustrative empirical models and the approach is critically discussed.
