Similar Literature
20 similar documents found.
1.
Given the widespread use of regression methods in the social sciences, the recent concern with regression diagnostics is timely and important. Every diagnosis must be followed by an appropriate data analytic procedure: diagnosis and treatment are complementary and interdependent tasks. We propose a graphical diagnostic procedure for use in ordinary least squares contexts, together with several data analytic treatments, and illustrate both in an extant data set.
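As a reference point only (not the authors' specific graphical procedure), here is a minimal sketch of the standard OLS diagnostic quantities such a plot would typically be built from, assuming the design matrix X already contains an intercept column:

```python
import numpy as np

def ols_diagnostics(X, y):
    """Leverage, internally studentized residuals and Cook's distance for an
    OLS fit -- the raw material for a leverage-vs-residual diagnostic plot."""
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X, X.T)     # hat matrix
    h = np.diag(H)                            # leverages
    e = y - H @ y                             # residuals
    s2 = e @ e / (n - p)                      # residual variance estimate
    r = e / np.sqrt(s2 * (1 - h))             # studentized residuals
    cooks = r**2 * h / (p * (1 - h))          # Cook's distance
    return h, r, cooks
```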

2.
This paper examines the asymptotic and finite‐sample properties of tests of equal forecast accuracy when the models being compared are overlapping in the sense of Vuong (Econometrica 1989; 57: 307–333). Two models are overlapping when the true model contains just a subset of variables common to the larger sets of variables included in the competing forecasting models. We consider an out‐of‐sample version of the two‐step testing procedure recommended by Vuong but also show that an exact one‐step procedure is sometimes applicable. When the models are overlapping, we provide a simple‐to‐use fixed‐regressor wild bootstrap that can be used to conduct valid inference. Monte Carlo simulations generally support the theoretical results: the two‐step procedure is conservative, while the one‐step procedure can be accurately sized when appropriate. We conclude with an empirical application comparing the predictive content of credit spreads to growth in real stock prices for forecasting US real gross domestic product growth. Copyright © 2013 John Wiley & Sons, Ltd.
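A minimal sketch of a fixed-regressor wild bootstrap applied to an out-of-sample MSE comparison. It is not the paper's one-step or two-step procedure; the null model `Xnull`, the estimation window `R` and the replication count `B` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def oos_sq_errors(y, X, R):
    """Recursive out-of-sample squared forecast errors from OLS,
    starting with an estimation window of R observations."""
    errs = []
    for t in range(R, len(y)):
        beta, *_ = np.linalg.lstsq(X[:t], y[:t], rcond=None)
        errs.append((y[t] - X[t] @ beta) ** 2)
    return np.asarray(errs)

def mse_diff(y, X1, X2, R):
    """Out-of-sample MSE of model 1 minus that of model 2."""
    return oos_sq_errors(y, X1, R).mean() - oos_sq_errors(y, X2, R).mean()

def wild_bootstrap_pvalue(y, X1, X2, Xnull, R, B=499):
    """Fixed-regressor wild bootstrap: residuals from a null model are
    re-signed with Rademacher draws, the regressors are held fixed, and the
    MSE difference is recomputed on each artificial sample."""
    beta0, *_ = np.linalg.lstsq(Xnull, y, rcond=None)
    fit, resid = Xnull @ beta0, y - Xnull @ beta0
    stat = mse_diff(y, X1, X2, R)
    boot = np.array([mse_diff(fit + resid * rng.choice([-1.0, 1.0], len(y)),
                              X1, X2, R) for _ in range(B)])
    return np.mean(np.abs(boot - boot.mean()) >= np.abs(stat))
```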

3.
Conformity testing is a systematic examination of the extent to which an entity conforms to specified requirements. Such testing is performed in industry as well as in regulatory agencies in a variety of fields. In this paper we discuss conformity testing under measurement or sampling uncertainty. Although the situation has many analogies to statistical testing of a hypothesis concerning the unknown value of the measurand, there are no generally accepted rules for handling measurement uncertainty when testing for conformity. Usually the objective of a test for conformity is to provide assurance of conformity. We therefore suggest that an appropriate statistical test for conformity should be devised such that there is only a small probability of declaring conformity when in fact the entity does not conform. An operational way of formulating this principle is to require that whenever an entity has been declared to be conforming, it should not be possible to alter that declaration, even if the entity was investigated with better (more precise) measuring instruments, or measurement procedures. Some industries and agencies designate specification limits under consideration of the measurement uncertainty. This practice is not invariant under changes of measurement procedure. We therefore suggest that conformity testing should be based upon a comparison of a confidence interval for the value of the measurand with some limiting values that have been designated without regard to the measurement uncertainty. Such a procedure is in line with the recently established practice of reporting measurement uncertainty as “an interval of values that could reasonably be attributed to the measurand”. The price to be paid for a reliable assurance of conformity is a relatively large risk that the procedure will fail to establish conformity for entities that only marginally conform. We suggest a two‐stage procedure that may improve this situation and provide a better discriminatory ability. In an example we illustrate the determination of the power function of such a two‐stage procedure.
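A minimal sketch of the decision rule described above: conformity is declared only when an entire confidence interval for the measurand lies inside specification limits set without regard to measurement uncertainty, with a naive two-stage extension. The t-interval and the `remeasure` callable are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def conf_interval(measurements, alpha=0.05):
    """Two-sided t-based confidence interval for the value of the measurand."""
    x = np.asarray(measurements, dtype=float)
    m, s, n = x.mean(), x.std(ddof=1), len(x)
    half = stats.t.ppf(1 - alpha / 2, n - 1) * s / np.sqrt(n)
    return m - half, m + half

def conforms(measurements, lower, upper, alpha=0.05):
    """Declare conformity only if the whole interval lies inside [lower, upper]."""
    lo, hi = conf_interval(measurements, alpha)
    return lower <= lo and hi <= upper

def two_stage_conforms(stage1, remeasure, lower, upper, alpha=0.05):
    """Two-stage variant: if stage 1 fails to establish conformity but the
    point estimate is inside the limits, obtain more precise measurements
    (the callable `remeasure`) and test again."""
    if conforms(stage1, lower, upper, alpha):
        return True
    if lower <= np.mean(stage1) <= upper:
        return conforms(remeasure(), lower, upper, alpha)
    return False
```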

4.
In this paper we provide a method for estimating multivariate distributions defined through hierarchical Archimedean copulas. In general, the true structure of the hierarchy is unknown, but we develop a computationally efficient technique to determine it from the data. For this purpose we introduce a hierarchical estimation procedure for the parameters and provide an asymptotic analysis. We consider both parametric and nonparametric estimation of the marginal distributions. A simulation study and an empirical application show the effectiveness of the grouping procedure in the sense of structure selection.
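The paper's estimator is likelihood-based with a formal hierarchical grouping step and asymptotic theory. Purely as a rough illustration of the grouping idea, the sketch below pairs the two most dependent margins (the pair an agglomerative procedure would join at the lowest level of the hierarchy) and maps Kendall's tau to a Gumbel parameter via tau = 1 - 1/theta.

```python
import numpy as np
from scipy.stats import kendalltau

def gumbel_theta(tau):
    """Moment-type inversion: Kendall's tau = 1 - 1/theta for the Gumbel family."""
    return 1.0 / (1.0 - tau)

def closest_pair(U):
    """Find the pair of margins (columns of U, pseudo-observations in [0,1])
    with the strongest dependence; an agglomerative structure-selection step
    would join this pair at the lowest level of the hierarchy."""
    d = U.shape[1]
    best, best_tau = None, -np.inf
    for i in range(d):
        for j in range(i + 1, d):
            t, _ = kendalltau(U[:, i], U[:, j])
            if t > best_tau:
                best, best_tau = (i, j), t
    return best, gumbel_theta(best_tau)
```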

5.
In this paper, we present results of investigations concerning randomized response sampling. We consider a simple generalization of some existing procedures and then provide suitable choices for the design parameters. We also demonstrate the superiority of the proposed procedure over Warner's (1965) procedure.
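For reference, a sketch of the Warner (1965) baseline that the proposed procedure generalizes; the design parameter p (probability that the randomizing device selects the sensitive statement) is assumed known and different from 1/2.

```python
def warner_estimate(yes_count, n, p):
    """Warner (1965) randomized response estimator of the sensitive
    proportion pi, with a plug-in estimate of its variance."""
    lam = yes_count / n                               # observed 'yes' proportion
    pi_hat = (lam - (1 - p)) / (2 * p - 1)
    var_hat = lam * (1 - lam) / (n * (2 * p - 1) ** 2)
    return pi_hat, var_hat

# Hypothetical example: 180 'yes' answers out of 500 respondents with p = 0.7
print(warner_estimate(180, 500, 0.7))
```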

6.
Under minimal assumptions, finite sample confidence bands for quantile regression models can be constructed. These confidence bands are based on the “conditional pivotal property” of estimating equations that quantile regression methods solve and provide valid finite sample inference for linear and nonlinear quantile models with endogenous or exogenous covariates. The confidence regions can be computed using Markov Chain Monte Carlo (MCMC) methods. We illustrate the finite sample procedure through two empirical examples: estimating a heterogeneous demand elasticity and estimating heterogeneous returns to schooling. We find pronounced differences between asymptotic and finite sample confidence regions in cases where the usual asymptotics are suspect.
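A stylized sketch of the idea for the exogenous linear case: at the true coefficient the indicators 1{y <= x'b} are Bernoulli(tau) conditional on X, so the exact conditional distribution of a GMM-type statistic can be simulated with the regressors held fixed, and the confidence region collects the parameter values at which the statistic stays below the simulated critical value. The paper explores the region with MCMC rather than the brute-force grid suggested here.

```python
import numpy as np

rng = np.random.default_rng(0)

def pivotal_stat(b, y, X, tau):
    """GMM-type statistic built from the quantile-regression estimating
    equation; at the true coefficient the indicators are Bernoulli(tau)."""
    n = len(y)
    g = X.T @ (tau - (y <= X @ b)) / np.sqrt(n)
    W = np.linalg.inv(tau * (1 - tau) * X.T @ X / n)
    return g @ W @ g

def finite_sample_critical_value(X, tau, alpha=0.10, S=2000):
    """Simulate the statistic's exact conditional distribution by drawing
    Bernoulli(tau) indicators with the regressors held fixed."""
    n = X.shape[0]
    W = np.linalg.inv(tau * (1 - tau) * X.T @ X / n)
    sims = []
    for _ in range(S):
        u = tau - (rng.random(n) < tau)
        g = X.T @ u / np.sqrt(n)
        sims.append(g @ W @ g)
    return np.quantile(sims, 1 - alpha)

# Confidence region: all b on a grid with
# pivotal_stat(b, y, X, tau) <= finite_sample_critical_value(X, tau).
```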

7.
This paper studies the semiparametric binary response model with interval data investigated by Manski and Tamer (2002). In this partially identified model, we propose a new estimator based on Manski and Tamer's modified maximum score (MMS) method, introducing density weights into the objective function, which allows us to develop asymptotic properties of the proposed set estimator for inference. We show that the density-weighted MMS estimator converges at a nearly cube-root-n rate. We propose an asymptotically valid inference procedure for the identified region based on subsampling. Monte Carlo experiments provide support for our inference procedure.
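The paper's inference is for an identified region rather than a point, so the following shows only the generic scalar subsampling mechanics at the cube-root rate, as a hedged sketch; `estimator`, the subsample size `b` and the replication count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def subsampling_ci(data, estimator, b, rate=1/3, alpha=0.05, B=500):
    """Generic subsampling confidence interval for a scalar parameter whose
    estimator converges at rate n**rate (1/3 for maximum-score-type
    estimators): the law of n**rate * (theta_hat - theta) is approximated by
    the subsample distribution of b**rate * (theta_b - theta_hat)."""
    n = len(data)
    theta_hat = estimator(data)
    draws = []
    for _ in range(B):
        idx = rng.choice(n, size=b, replace=False)
        draws.append(b ** rate * (estimator(data[idx]) - theta_hat))
    lo, hi = np.quantile(draws, [alpha / 2, 1 - alpha / 2])
    return theta_hat - hi / n ** rate, theta_hat - lo / n ** rate
```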

8.
We study a Tikhonov Regularized (TiR) estimator of a functional parameter identified by conditional moment restrictions in a linear model with both exogenous and endogenous regressors. The nonparametric instrumental variable estimator is based on a minimum distance principle with penalization by the norms of the parameter and its derivatives. After showing its consistency in the Sobolev norm and uniform consistency under an embedding condition, we derive the expression of the asymptotic Mean Integrated Square Error and the rate of convergence. The optimal value of the regularization parameter is characterized in two examples. We illustrate our theoretical findings and the small sample properties with simulation results. Finally, we provide an empirical application to estimation of an Engel curve, and discuss a data driven selection procedure for the regularization parameter.
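A finite-dimensional sketch of Tikhonov regularization with a Sobolev-type penalty (identity plus first-difference operator). The paper applies this to an estimated conditional-moment operator for the nonparametric IV problem, which is not reproduced here; `K`, `r` and `alpha` are illustrative inputs.

```python
import numpy as np

def tikhonov_solve(K, r, alpha):
    """Tikhonov-regularized solution of K @ phi ~ r, penalizing both the norm
    of phi and the norm of its discrete first derivative."""
    m = K.shape[1]
    D = np.diff(np.eye(m), axis=0)                  # first-difference operator
    A = K.T @ K + alpha * (np.eye(m) + D.T @ D)     # penalized normal equations
    return np.linalg.solve(A, K.T @ r)
```

In practice the regularization parameter alpha would be chosen by a data-driven rule such as the one the paper discusses.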

9.
We consider estimation of panel data models with sample selection when the equation of interest contains endogenous explanatory variables as well as unobserved heterogeneity. Assuming that appropriate instruments are available, we propose several tests for selection bias and two estimation procedures that correct for selection in the presence of endogenous regressors. The tests are based on the fixed effects two-stage least squares estimator, thereby permitting arbitrary correlation between unobserved heterogeneity and explanatory variables. The first correction procedure is parametric and is valid under the assumption that the errors in the selection equation are normally distributed. The second procedure estimates the model parameters semiparametrically using series estimators. In the proposed testing and correction procedures, the error terms may be heterogeneously distributed and serially dependent in both selection and primary equations. Because these methods allow for a rather flexible structure of the error variance and do not impose any nonstandard assumptions on the conditional distributions of explanatory variables, they provide a useful alternative to the existing approaches presented in the literature.
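A compressed sketch of the mechanics behind the parametric correction, under strong simplifications: a probit selection index is assumed to have been estimated already (`gamma_hat`), its inverse Mills ratio is appended to the regressors of the selected sample, and the primary equation is estimated by a hand-rolled fixed-effects 2SLS. The paper's selection tests and the semiparametric series variant are not shown.

```python
import numpy as np
from scipy import stats

def inverse_mills(Z, gamma_hat):
    """Inverse Mills ratio evaluated at a (pre-estimated) probit index."""
    xb = Z @ gamma_hat
    return stats.norm.pdf(xb) / stats.norm.cdf(xb)

def within(a, ids):
    """Fixed-effects (within) transformation: demean by individual."""
    out = np.asarray(a, dtype=float).copy()
    for i in np.unique(ids):
        out[ids == i] -= out[ids == i].mean(axis=0)
    return out

def fe_2sls(y, X, Z, ids):
    """Fixed-effects 2SLS: within-transform everything, project X on the
    instruments Z, then regress the transformed y on the fitted X."""
    yt, Xt, Zt = within(y, ids), within(X, ids), within(Z, ids)
    Xhat = Zt @ np.linalg.lstsq(Zt, Xt, rcond=None)[0]
    return np.linalg.lstsq(Xhat, yt, rcond=None)[0]

# Correction: append inverse_mills(Z_sel, gamma_hat) as an extra column of X
# (selected observations only) before calling fe_2sls.
```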

10.
Standard-essential patents (SEPs) have become a key element of technical coordination via standard-setting organizations. Yet, in many cases, it remains unclear whether a declared SEP is truly standard-essential. To date, there is no automated procedure that allows for a scalable and objective assessment of SEP status. This paper introduces a semantics-based method for approximating the standard essentiality of patents. We provide details on the procedure that generates the measure of standard essentiality and present the results of several validation and robustness exercises. We illustrate the measure's usefulness in estimating the share of true SEPs in firm patent portfolios for several telecommunication standards.
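The paper's semantic measure is considerably more involved and is validated against expert assessments. Purely as an illustrative baseline of the general idea, patents can be scored by their maximum TF-IDF cosine similarity to the sections of a standard:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def essentiality_scores(patent_texts, standard_sections):
    """Score each patent (both arguments are lists of strings) by its maximum
    cosine similarity, in TF-IDF space, to any section of the standard."""
    vec = TfidfVectorizer(stop_words="english")
    M = vec.fit_transform(standard_sections + patent_texts)
    S, P = M[: len(standard_sections)], M[len(standard_sections):]
    return cosine_similarity(P, S).max(axis=1)
```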

11.
A decision maker, who is overwhelmed by the number of available alternatives, limits her consideration. We investigate a model where a decision maker’s capacity determines whether she is overwhelmed: She considers all the available alternatives if their number does not exceed her capacity; otherwise, she applies a shortlisting procedure to reduce the number of alternatives to within her capacity. We show how to deduce the decision maker’s capacity, her preference and the alternatives that she considers from the observed behavior. Furthermore, we provide the necessary and sufficient conditions for a consideration function to be derived by the shortlisting procedure with a limited capacity.
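A minimal sketch of the choice procedure; the particular shortlisting rule and the salience/utility numbers in the example are hypothetical.

```python
def choose(menu, capacity, shortlist, prefer):
    """Limited-consideration choice: if the menu does not exceed the decision
    maker's capacity, maximize preference over the whole menu; otherwise
    shortlist first, then maximize over the shortlisted alternatives."""
    considered = list(menu)
    if len(considered) > capacity:
        considered = shortlist(considered, capacity)
    return max(considered, key=prefer)

# Hypothetical example: shortlist by salience, then pick by utility.
salience = {"a": 3, "b": 1, "c": 2, "d": 5}
utility = {"a": 10, "b": 40, "c": 30, "d": 20}
top_by_salience = lambda menu, k: sorted(menu, key=salience.get, reverse=True)[:k]
print(choose(["a", "b", "c", "d"], capacity=2,
             shortlist=top_by_salience, prefer=utility.get))
# 'b' has the highest utility but is never considered once capacity binds.
```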

12.
Espasa and Mayo provide consistent forecasts for an aggregate economic indicator and its basic components, as well as for useful sub-aggregates. To do so, they develop a procedure based on single-equation models that incorporates the restrictions arising from the fact that some components share common features. The classification by common features provides a disaggregation map that is useful in several applications. We discuss their classification procedure and suggest some issues that should be taken into account when designing an algorithm to identify subsets of series that share one common trend.
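Not Espasa and Mayo's procedure nor the refinement suggested here, but as a very rough illustration of the kind of algorithm in question, series could be grouped greedily by pairwise Engle-Granger cointegration tests:

```python
from statsmodels.tsa.stattools import coint

def common_trend_groups(series, alpha=0.05):
    """Greedy grouping of {name -> array} series that appear to share a
    common trend, using pairwise cointegration tests as a rough criterion."""
    groups = []
    for name, x in series.items():
        for g in groups:
            if coint(x, series[g[0]])[1] < alpha:   # p-value of the pairwise test
                g.append(name)
                break
        else:
            groups.append([name])
    return groups
```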

13.
We provide a general methodology for forecasting in the presence of structural breaks induced by unpredictable changes to model parameters. Bayesian methods of learning and model comparison are used to derive a predictive density that takes into account the possibility that a break will occur before the next observation. Estimates for the posterior distribution of the most recent break are generated as a by‐product of our procedure. We discuss the importance of using priors that accurately reflect the econometrician's opinions as to what constitutes a plausible forecast. Several applications to macroeconomic time‐series data demonstrate the usefulness of our procedure. Copyright © 2008 John Wiley & Sons, Ltd.
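A stylized Gaussian-location sketch of the central idea: the one-step-ahead predictive density mixes "no break occurs before the next observation" (parameters from the current posterior) with "a break occurs" (parameters drawn afresh from the prior). The break probability and all distributional inputs are illustrative assumptions, not the authors' full framework.

```python
import numpy as np
from scipy import stats

def predictive_density(y_grid, post_mean, post_var, prior_mean, prior_var,
                       sigma2, p_break):
    """One-step-ahead predictive density for a Gaussian location model that
    mixes 'no break before the next observation' (mean drawn from the current
    posterior) with 'a break occurs' (mean drawn afresh from the prior)."""
    no_break = stats.norm.pdf(y_grid, post_mean, np.sqrt(sigma2 + post_var))
    after_break = stats.norm.pdf(y_grid, prior_mean, np.sqrt(sigma2 + prior_var))
    return (1 - p_break) * no_break + p_break * after_break
```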

14.
We outline a new voting procedure for representative democracies. This procedure should be used for important decisions only and consists of two voting rounds: a randomly-selected subset of the citizens is awarded a one-time voting right. The parliament also votes, and the two decisions are weighted according to a pre-defined key. The final decision is implemented. As this procedure gives the society—represented by the randomly-chosen subset—a better say for important decisions, the citizens might be more willing to accept the consequences of these decisions.
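A minimal sketch of the aggregation step; the 50/50 weighting key and the simple-majority threshold are illustrative assumptions rather than part of the proposal.

```python
def decide(citizen_votes, parliament_votes, citizen_weight=0.5):
    """Weighted two-round decision: the 'yes' share in the random citizen
    sample and in parliament are combined with a pre-defined key, and the
    proposal passes on a simple majority of the weighted share."""
    c = sum(citizen_votes) / len(citizen_votes)          # votes coded 1 = yes, 0 = no
    p = sum(parliament_votes) / len(parliament_votes)
    return citizen_weight * c + (1 - citizen_weight) * p > 0.5
```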

15.
Instead of solving fixed horizon production scheduling problems with specified terminal inventory conditions, we use forecasting to extend the problem horizon until stopping rule conditions are met. Major questions for this procedure relate to how to provide data for the extended problem and when to stop the process. We provide extensive computational results indicating that relatively simple methods perform quite well.
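A generic sketch of the horizon-extension loop; `solve` and `forecast` stand in for any scheduling routine and demand-forecasting method, and the stopping rule used here (the first-period decision stops changing) is just one simple possibility.

```python
def rolling_horizon_schedule(solve, forecast, base_horizon, max_extension, tol=0.0):
    """Extend the planning horizon with forecast data until the first-period
    decision stops changing, then commit to that decision."""
    previous, horizon = None, base_horizon
    for extra in range(max_extension + 1):
        horizon = base_horizon + extra
        plan = solve(forecast(horizon))       # forecasted demand -> schedule
        if previous is not None and abs(plan[0] - previous) <= tol:
            return plan[0], horizon           # first-period decision stabilized
        previous = plan[0]
    return previous, horizon                  # fall back to the last plan
```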

16.
The present paper deals with sequential urn designs for the balanced allocation of two treatments. Wei (1977, 1978b) was among the first authors to suggest an algorithm based on the probabilistic properties of the generalized Friedman urn, and more recently Chen (2000) suggested the Ehrenfest design, namely a sequential procedure based on the Ehrenfest urn process. Some extensions of these algorithms are discussed: in particular, we focus on a generalization of Chen's procedure, called Ehrenfest-type urn designs, recently introduced by Baldi Antognini (2004). By analyzing some convergence properties of the Ehrenfest process, we show that for an Ehrenfest-type urn it is possible to evaluate the variance of the design at each step and analyze the convergence to balance. Furthermore, the generalized Friedman urn and the Ehrenfest-type procedures are compared in terms of speed of convergence. The Ehrenfest design proposed by Chen converges to balance faster than the other urn procedures.
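A simulation sketch of an Ehrenfest-type allocation rule, assuming the mechanism "draw a ball at random, assign its treatment, return a ball of the opposite type", which mimics the mean-reverting Ehrenfest process; the precise designs of Chen (2000) and Baldi Antognini (2004) are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def ehrenfest_allocation(n, w):
    """Sequentially assign n subjects to treatments 'A' and 'B' with an
    Ehrenfest-type urn: start with w balls of each type, draw a ball at
    random, assign that treatment, and return a ball of the OTHER type,
    so the urn composition drifts back toward balance."""
    balls = {"A": w, "B": w}
    assignments = []
    for _ in range(n):
        p_a = balls["A"] / (2 * w)
        t = "A" if rng.random() < p_a else "B"
        other = "B" if t == "A" else "A"
        balls[t] -= 1
        balls[other] += 1
        assignments.append(t)
    return assignments
```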

17.
18.
We exploit the information derived from geographical coordinates to endogenously identify spatial regimes in technologies that are the result of a variety of complex, dynamic interactions among site-specific environmental variables and farmer decision making about technology, which are often not observed at the farm level. Controlling for unobserved heterogeneity is a fundamental challenge in empirical research, as failing to do so can produce model misspecification and preclude causal inference. In this article, we adopt a two-step procedure to deal with unobserved spatial heterogeneity while accounting for spatial dependence in a cross-sectional setting. The first step explicitly takes unobserved spatial heterogeneity into account to endogenously identify subsets of farms that follow a similar local production econometric model, i.e. spatial production regimes. The second step consists of specifying a spatial autoregressive model with autoregressive disturbances and spatial regimes. The method is applied to two regional samples of olive-growing farms in Italy. The main finding is that the identification of spatial regimes can help draw a more detailed picture of the production environment and provide more accurate information to guide extension services and policy makers.
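A deliberately simplified sketch of the two-step logic: group observations in space, then fit a separate production model per regime. The paper identifies the regimes endogenously and estimates a spatial autoregressive model with autoregressive disturbances in the second step; neither refinement appears below.

```python
import numpy as np
from sklearn.cluster import KMeans

def spatial_regime_ols(coords, X, y, n_regimes):
    """Simplified two-step sketch: (1) group farms by geographical
    coordinates, (2) fit a separate linear production model in each regime."""
    labels = KMeans(n_clusters=n_regimes, n_init=10, random_state=0).fit_predict(coords)
    betas = {}
    for g in np.unique(labels):
        mask = labels == g
        Xg = np.column_stack([np.ones(mask.sum()), X[mask]])
        betas[g] = np.linalg.lstsq(Xg, y[mask], rcond=None)[0]
    return labels, betas
```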

19.
We examine the performance of a metric entropy statistic as a robust test for time-reversibility (TR), symmetry, and serial dependence. It also serves as a measure of goodness-of-fit. The statistic provides a consistent and unified basis in model search, and is a powerful diagnostic measure with surprising ability to pinpoint areas of model failure. We provide empirical evidence comparing the performance of the proposed procedure with some of the modern competitors in nonlinear time-series analysis, such as robust implementations of the BDS and characteristic function-based tests of TR, along with correlation-based competitors such as the Ljung–Box Q-statistic. Unlike our procedure, each of its competitors is motivated for a different, specific context and hypothesis. Our evidence is based on Monte Carlo simulations along with an application to several stock indices for the US equity market.
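A histogram-based sketch of a Hellinger/Matusita-type metric entropy between the joint distribution of (x_t, x_{t-k}) and the product of its marginals, which is zero under serial independence; the paper's statistic uses smooth density estimates and resampled critical values.

```python
import numpy as np

def metric_entropy_dependence(x, lag=1, bins=20):
    """Hellinger/Matusita-type metric entropy between the joint distribution
    of (x_t, x_{t-lag}) and the product of its marginals, using plain
    histogram cell frequencies; equals zero under serial independence."""
    a, b = x[lag:], x[:-lag]
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    joint = joint / joint.sum()
    marginals = np.outer(joint.sum(axis=1), joint.sum(axis=0))
    return 0.5 * np.sum((np.sqrt(joint) - np.sqrt(marginals)) ** 2)
```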

20.
We propose methods for constructing confidence sets for the timing of a break in level and/or trend that have asymptotically correct coverage for both I(0) and I(1) processes. These are based on inverting a sequence of tests for the break location, evaluated across all possible break dates. We separately derive locally best invariant tests for the I(0) and I(1) cases; under their respective assumptions, the resulting confidence sets provide correct asymptotic coverage regardless of the magnitude of the break. We suggest use of a pre-test procedure to select between the I(0)- and I(1)-based confidence sets, and Monte Carlo evidence demonstrates that our recommended procedure achieves good finite sample properties in terms of coverage and length across both I(0) and I(1) environments. An application using US macroeconomic data is provided which further evinces the value of these procedures.
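A skeleton of the test-inversion construction: evaluate a break-location test statistic at every admissible date and keep the dates at which it does not reject. The statistic itself would be one of the paper's locally best invariant tests, selected via the I(0)/I(1) pre-test; it is left here as a user-supplied function.

```python
def break_date_confidence_set(y, stat_fn, crit_value, trim=2):
    """Confidence set for the timing of a level/trend break, built by
    inverting a sequence of tests: keep every candidate date at which the
    null 'the break occurs at this date' is not rejected by stat_fn."""
    n = len(y)
    return [tau for tau in range(trim, n - trim)
            if stat_fn(y, tau) <= crit_value]
```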
