Similar Documents
20 similar documents found.
1.
This paper considers the problem of forecasting under continuous and discrete structural breaks and proposes weighting observations to obtain optimal forecasts in the MSFE sense. We derive optimal weights for one-step-ahead forecasts. Under continuous breaks, our approach largely recovers exponential smoothing weights. Under discrete breaks, we provide analytical expressions for optimal weights in models with a single regressor, and asymptotically valid weights for models with more than one regressor. It is shown that in these cases the optimal weight is the same across observations within a given regime and differs only across regimes. In practice, where information on structural breaks is uncertain, a forecasting procedure based on robust optimal weights is proposed. The relative performance of our proposed approach is investigated using Monte Carlo experiments and an empirical application to forecasting real GDP using the yield curve across nine industrial economies.
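A minimal sketch of the continuous-break case, where the optimal weights decay roughly geometrically with distance from the forecast origin; the function name and the decay parameter `lam` are our illustrative choices, not the paper's notation:

```python
import numpy as np

def exp_weighted_forecast(y, X, x_next, lam=0.95):
    """One-step-ahead forecast from exponentially down-weighted least squares.

    Illustrative only: under continuous structural change the optimal MSFE
    weights decay roughly geometrically in the distance from the forecast
    origin, which the smoothing parameter `lam` mimics here.
    """
    T = len(y)
    w = lam ** np.arange(T - 1, -1, -1)        # newest observation gets weight 1
    W = np.diag(w / w.sum())
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return x_next @ beta

# toy example: regression with a slowly drifting intercept
rng = np.random.default_rng(0)
T = 200
x = np.column_stack([np.ones(T), rng.normal(size=T)])
beta_t = 1.0 + 0.01 * np.arange(T)             # slowly breaking intercept
y = beta_t * x[:, 0] + 0.5 * x[:, 1] + rng.normal(scale=0.3, size=T)
print(exp_weighted_forecast(y, x, np.array([1.0, 0.0]), lam=0.97))
```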

2.
This paper proposes a criterion for simultaneous generalized method of moments model and moment selection: the generalized focused information criterion (GFIC). Rather than attempting to identify the "true" specification, the GFIC chooses from a set of potentially misspecified moment conditions and parameter restrictions to minimize the mean squared error (MSE) of a user-specified target parameter. The intent of the GFIC is to formalize a situation common in applied practice. An applied researcher begins with a set of fairly weak "baseline" assumptions, assumed to be correct, and must decide whether to impose any of a number of stronger, more controversial "suspect" assumptions that yield parameter restrictions, additional moment conditions, or both. Provided that the baseline assumptions identify the model, we show how to construct an asymptotically unbiased estimator of the asymptotic MSE to select over these suspect assumptions: the GFIC. We go on to provide results for postselection inference and model averaging that can be applied both to the GFIC and various alternative selection criteria. To illustrate how our criterion can be used in practice, we specialize the GFIC to the problem of selecting over exogeneity assumptions and lag lengths in a dynamic panel model, and show that it performs well in simulations. We conclude by applying the GFIC to a dynamic panel data model for the price elasticity of cigarette demand.
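The GFIC itself requires GMM machinery, but the bias-variance accounting it formalizes can be sketched in a scalar toy problem. The helper below is hypothetical (not from the paper): it chooses between a restricted estimator, which is more precise but biased if a suspect assumption fails, and an unrestricted one, using an approximately unbiased estimate of MSE:

```python
import numpy as np

def select_by_estimated_mse(theta_r, var_r, theta_u, var_u):
    """Stylized focused selection between a restricted estimator (theta_r,
    lower variance, possibly biased) and an unrestricted one (theta_u,
    assumed consistent). Not the GFIC itself -- just the bias/variance
    accounting it formalizes.

    (theta_r - theta_u)^2 overstates the restricted estimator's squared bias
    by roughly Var(theta_r - theta_u) ~ var_u - var_r (a Hausman-type identity
    when the restricted estimator is efficient under the suspect assumption),
    so we subtract that and truncate at zero.
    """
    bias_sq = max(0.0, (theta_r - theta_u) ** 2 - (var_u - var_r))
    mse_r = bias_sq + var_r
    mse_u = var_u
    return ("restricted", mse_r) if mse_r < mse_u else ("unrestricted", mse_u)

# toy numbers: a small estimated discrepancy favors the tighter estimator
print(select_by_estimated_mse(theta_r=0.52, var_r=0.010,
                              theta_u=0.60, var_u=0.025))
```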

3.
We propose new information criteria for impulse response function matching estimators (IRFMEs). These estimators yield sampling distributions of the structural parameters of dynamic stochastic general equilibrium (DSGE) models by minimizing the distance between sample and theoretical impulse responses. First, we propose an information criterion to select only the responses that produce consistent estimates of the true but unknown structural parameters: the Valid Impulse Response Selection Criterion (VIRSC). The criterion is especially useful for mis-specified models. Second, we propose a criterion to select the impulse responses that are most informative about DSGE model parameters: the Relevant Impulse Response Selection Criterion (RIRSC). These criteria can be used in combination to select the subset of valid impulse response functions with minimal dimension that yields asymptotically efficient estimators. The criteria are general enough to apply to impulse responses estimated by VARs, local projections, and simulation methods. We show that the use of our criteria significantly affects estimates and inference about key parameters of two well-known new Keynesian DSGE models. Monte Carlo evidence indicates that the criteria yield gains in terms of finite-sample bias as well as test statistics whose behavior is better approximated by first-order asymptotic theory. Thus, our criteria improve existing methods used to implement IRFMEs.
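The matching step underlying these estimators can be illustrated with a one-parameter toy model; the selection criteria themselves (VIRSC and RIRSC) add consistency and efficiency comparisons that this sketch omits. All names here are illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def theoretical_irf(rho, H):
    """Impulse responses of a stylized AR(1) 'model': psi_h = rho**h."""
    return rho ** np.arange(H + 1)

def irf_match(psi_hat, W):
    """Toy impulse response function matching estimator: choose the
    structural parameter minimizing (psi_hat - psi(rho))' W (psi_hat - psi(rho)).
    """
    H = len(psi_hat) - 1
    def loss(rho):
        d = psi_hat - theoretical_irf(rho, H)
        return d @ W @ d
    return minimize_scalar(loss, bounds=(-0.99, 0.99), method="bounded").x

# 'sample' IRFs: true rho = 0.7 plus estimation noise, diagonal weighting
rng = np.random.default_rng(1)
psi_hat = 0.7 ** np.arange(9) + rng.normal(scale=0.02, size=9)
print(irf_match(psi_hat, W=np.eye(9)))   # close to 0.7
```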

4.
In this paper we introduce a new regression model in which the response variable is bounded by two unknown parameters. A special case is a bounded alternative to the four-parameter logistic model. The four-parameter model, which has unbounded responses, is widely used, for instance, in bioassays, nutrition, genetics, calibration and agriculture. In reality, the responses are often bounded although the bounds may be unknown, and in that situation our model reflects the data-generating mechanism better. Complications arise for the new model, however, because the likelihood function is unbounded, and the global maximizers are not consistent estimators of the unknown parameters. Although the two sample extremes, the smallest and the largest observations, are consistent estimators for the two unknown boundaries, they have a slow convergence rate and are asymptotically biased. Improved estimators are developed by correcting for the asymptotic biases of the two sample extremes in the one-sample case; but even these consistent estimators do not attain the optimal convergence rate. To obtain efficient estimation, we suggest using the local maximizers of the likelihood function, i.e., the solution to the likelihood equations. We prove that, with probability approaching one as the sample size goes to infinity, there exists a solution to the likelihood equations that is consistent at the rate of the square root of the sample size and asymptotically normally distributed.
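A numeric sketch of the two ingredients above, under simplifying assumptions: the boundary estimates use a Robson-Whitlock-style outward correction of the sample extremes (our illustrative choice of bias correction, not necessarily the paper's), and the curve is then fit by least squares rather than the paper's local maximum likelihood:

```python
import numpy as np
from scipy.optimize import curve_fit

def robson_whitlock_bounds(y):
    """Bias-corrected boundary estimates from the two sample extremes.

    The raw extremes are consistent but asymptotically biased inward; this
    correction pushes each extreme outward by the gap to its neighbouring
    order statistic (an illustrative correction).
    """
    s = np.sort(y)
    return s[0] - (s[1] - s[0]), s[-1] + (s[-1] - s[-2])

def bounded_logistic(x, mid, slope, lo, hi):
    """Four-parameter logistic mean curve with bounds lo < hi."""
    return lo + (hi - lo) / (1.0 + np.exp(-slope * (x - mid)))

# simulate a bounded dose-response and fit, seeding the boundary parameters
# with the corrected extremes instead of leaving them free in a blind search
rng = np.random.default_rng(2)
x = np.linspace(-3, 3, 120)
y = bounded_logistic(x, 0.3, 1.5, 1.0, 5.0) + rng.normal(scale=0.08, size=x.size)
lo0, hi0 = robson_whitlock_bounds(y)
params, _ = curve_fit(bounded_logistic, x, y, p0=[0.0, 1.0, lo0, hi0])
print(params)   # [mid, slope, lo, hi], near [0.3, 1.5, 1.0, 5.0]
```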

5.
In this paper consistent and, in a well-defined sense, optimal moment estimators of the regression coefficient in a simple regression model with errors in variables are derived. The asymptotic variance and other asymptotic properties of these estimators are given. As has long been known, serious estimation problems exist in this model. There are two ways out of this problem: using either additional assumptions or additional information in the data. A lot of attention has been paid to the use of additional assumptions; however, quite often this leads to rather unrealistic models. In this paper we use additional information in the data. Here this means that, besides first- and second-order moments, third-order moments are formulated as functions of the model parameters. Besides the theoretical derivations, a small study with generated data is discussed. This study shows that for samples larger than 50 the estimators we consider behave well.
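One classical version of the third-moment idea is the Geary-type slope estimator, sketched below; the paper's optimal moment estimators combine such conditions in a more refined way than this single ratio:

```python
import numpy as np

def third_moment_slope(x, y):
    """Geary-type third-moment slope estimator for errors in variables.

    Model sketch: y = a + b*x_true + e, x = x_true + u, with e and u mean
    zero, mutually independent and independent of x_true. Then
        E[(x-mu_x)(y-mu_y)^2] = b^2 * mu3,  E[(x-mu_x)^2(y-mu_y)] = b * mu3,
    where mu3 is the third central moment of x_true, so the ratio identifies
    b whenever x_true is skewed (mu3 != 0).
    """
    xc, yc = x - x.mean(), y - y.mean()
    return np.mean(xc * yc**2) / np.mean(xc**2 * yc)

rng = np.random.default_rng(3)
n = 2000
x_true = rng.gamma(shape=2.0, scale=1.0, size=n)     # skewed regressor, mu3 > 0
y = 1.0 + 0.8 * x_true + rng.normal(scale=0.5, size=n)
x = x_true + rng.normal(scale=0.5, size=n)           # measurement error
print(third_moment_slope(x, y))   # near 0.8; naive OLS is biased toward zero
```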

6.
In this paper, we introduce a threshold stochastic volatility model with explanatory variables. The Bayesian method is considered in estimating the parameters of the proposed model via the Markov chain Monte Carlo (MCMC) algorithm. Gibbs sampling and Metropolis–Hastings sampling methods are used for drawing the posterior samples of the parameters and the latent variables. In the simulation study, the accuracy of the MCMC algorithm, the sensitivity of the algorithm to model assumptions, and the robustness of the posterior distribution under different priors are considered. Simulation results indicate that our MCMC algorithm converges fast and that the posterior distribution is robust under different priors and model assumptions. A real data example is analyzed to explain the asymmetric behavior of stock markets.
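As a minimal illustration of the Metropolis–Hastings ingredient, the sketch below samples one scalar parameter from a toy posterior; the paper's full scheme alternates such updates with Gibbs draws of the latent volatilities and the threshold parameters, which this snippet does not attempt:

```python
import numpy as np

def rw_metropolis(log_post, theta0, n_iter=5000, step=0.1, seed=0):
    """Random-walk Metropolis-Hastings sampler for a scalar parameter."""
    rng = np.random.default_rng(seed)
    theta, lp = theta0, log_post(theta0)
    draws = np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + step * rng.normal()           # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:     # MH acceptance step
            theta, lp = prop, lp_prop
        draws[i] = theta
    return draws

# toy target: posterior of a persistence parameter ~ N(0.9, 0.05^2)
draws = rw_metropolis(lambda t: -0.5 * ((t - 0.9) / 0.05) ** 2, theta0=0.0)
print(draws[1000:].mean(), draws[1000:].std())   # near 0.9 and 0.05
```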

7.
The problem of estimating a linear function of k normal means with unknown variances is considered under an asymmetric loss function such that the associated risk is bounded from above by a known quantity. In the absence of a fixed-sample-size rule, sequential stopping rules satisfying a general set of assumptions are considered. Two estimators are proposed and second-order asymptotic expansions of their risk functions are derived. It is shown that the usual estimator, namely the linear function of the sample means, is asymptotically inadmissible, being dominated by a shrinkage-type estimator. An example illustrates the use of different multistage sampling schemes and provides asymptotic expansions of the risk functions.
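The flavor of a bounded-risk sequential rule can be sketched for the simplest case of one normal mean under squared-error loss; the paper's asymmetric-loss, k-mean, multistage setting and its shrinkage estimator are more involved than this:

```python
import numpy as np

def sequential_bounded_risk_mean(sample_next, w, n0=10):
    """Purely sequential rule for estimating a mean with risk at most w.

    Since the sample mean's squared-error risk is about sigma^2/n, keep
    sampling until n >= s_n^2 / w, starting from a pilot of n0 draws.
    """
    xs = [sample_next() for _ in range(n0)]
    while len(xs) < max(n0, np.var(xs, ddof=1) / w):
        xs.append(sample_next())
    return float(np.mean(xs)), len(xs)

rng = np.random.default_rng(4)
est, n_stop = sequential_bounded_risk_mean(lambda: rng.normal(5.0, 2.0), w=0.05)
print(est, n_stop)    # stopping point near sigma^2 / w = 80
```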

8.
L. S. Hayre, Metrika, 1983, 30(1): 101–107
Sequential procedures are proposed and studied for point and interval estimation of the difference between the means of two normal populations with unknown variances. The costs of sampling are allowed to be different for the two populations and to depend on the difference between the means. The procedures are shown to be asymptotically optimal, and small sample behaviour is studied by computer simulation.

9.
A class of stochastic unit-root bilinear processes, allowing for GARCH-type effects with asymmetries, is studied. Necessary and sufficient conditions for the strict and second-order stationarity of the error process are given. The strictly stationary solution is shown to be strongly mixing under mild additional assumptions. It follows that, in this model, the standard (non-stochastic) unit-root tests of Phillips–Perron and Dickey–Fuller are asymptotically valid for detecting the presence of a (stochastic) unit root. The finite-sample properties of these tests are studied via Monte Carlo experiments.
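The Monte Carlo exercise can be mimicked with a stylized process: a random walk driven by simple bilinear increments (without the GARCH-type asymmetries of the paper), for which the augmented Dickey–Fuller test should keep approximately its nominal size:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def simulate_bilinear_unit_root(T, b=0.3, seed=None):
    """Random walk whose increments follow the bilinear recursion
    u_t = eta_t + b * u_{t-1} * eta_{t-1} (a stylized stand-in for the
    paper's stochastic unit-root bilinear model)."""
    rng = np.random.default_rng(seed)
    eta = rng.normal(size=T + 1)
    u = np.zeros(T + 1)
    for t in range(1, T + 1):
        u[t] = eta[t] + b * u[t - 1] * eta[t - 1]
    return np.cumsum(u[1:])

# empirical size of the ADF test at the 5% level
n_rej = 0
for rep in range(200):
    y = simulate_bilinear_unit_root(T=300, seed=rep)
    n_rej += adfuller(y, regression="c", autolag="AIC")[1] < 0.05
print("rejection rate:", n_rej / 200)   # should stay near the nominal 0.05
```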

10.
The population characteristics observed by selecting a complex sample from a finite identified population are the result of at least two processes: the process which generates the values attached to the units in the finite population, and the process of selecting the sample of units from the population. In this paper we propose that the resulting observations be viewed as the joint realization of both processes. We overcome the inherent difficulty in modelling the joint processes of generation and selection by exploring second-moment and other simplifying assumptions. We obtain general expressions for the mean and covariance function of the joint processes and show that several overdispersion models discussed in the literature for the analysis of complex surveys are a direct consequence of our formulation, under particular sampling schemes and population structures.

11.
Arijit Chaudhuri and Debesh Roy, Metrika, 1994, 41(1): 355–362
Postulating a super-population regression model connecting a size variable, a cheaply measurable variable and an expensively observable variable of interest, an asymptotically optimal double sampling strategy to estimate the survey population total of the third variable is specified. To render it practicable, unknown model parameters in the optimal estimator are replaced by appropriate statistics. The resulting generalized regression estimator is then shown to have a model-cum-asymptotic-design-based expected squared error equal to that of the asymptotically optimal estimator itself. An estimator of the design variance of this estimator is also proposed.

12.
We consider the estimation problem under the linear regression model with the modified case-cohort design. The extensions of the Buckley–James estimator (BJE) under case-cohort designs have been studied under an additional assumption that the censoring variable and the covariate are independent. If this assumption is violated, as is the case in a typical real data set in the literature, our simulation results suggest that those extensions are not consistent, and we propose a new extension. Our estimator is based on the generalized maximum likelihood estimator (GMLE) of the underlying distributions. We propose a self-consistent algorithm, which is quite different from the one for multivariate interval-censored data. We also show that under certain regularity conditions, the GMLE and the BJE are consistent and asymptotically normally distributed. Some simulation results are presented. The BJE is also applied to the real data set in the literature.
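A toy version of the Buckley–James iteration (complete-data start, Kaplan–Meier imputation of censored residuals, refit, repeat) is sketched below; the case-cohort weighting and the GMLE refinements of the paper are omitted, and ties and convergence checks are ignored:

```python
import numpy as np

def km_jumps(e, delta):
    """Kaplan-Meier survival jumps at the sorted residuals (no ties assumed)."""
    order = np.argsort(e)
    e_s, d_s = e[order], delta[order]
    at_risk = len(e_s) - np.arange(len(e_s))
    surv = np.cumprod(1.0 - d_s / at_risk)
    surv_prev = np.concatenate([[1.0], surv[:-1]])
    return e_s, np.where(d_s == 1, surv_prev - surv, 0.0)

def buckley_james(y, delta, X, n_iter=20):
    """Toy Buckley-James iteration for right-censored linear regression.

    Censored responses are replaced by their conditional expectations under
    the Kaplan-Meier estimate of the residual distribution, then OLS is
    refit; iterate for a fixed number of steps.
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        e = y - X @ beta
        e_s, jump = km_jumps(e, delta)
        y_imp = y.astype(float)
        for i in np.where(delta == 0)[0]:
            w = jump[e_s > e[i]]
            if w.sum() > 0:        # E[e | e > e_i], KM mass renormalized
                y_imp[i] = X[i] @ beta + w @ e_s[e_s > e[i]] / w.sum()
        beta = np.linalg.lstsq(X, y_imp, rcond=None)[0]
    return beta

rng = np.random.default_rng(5)
n = 400
X = np.column_stack([np.ones(n), rng.normal(size=n)])
t = X @ np.array([1.0, 2.0]) + rng.normal(size=n)     # latent responses
c = rng.normal(loc=3.0, scale=2.0, size=n)            # censoring times
y, delta = np.minimum(t, c), (t <= c).astype(int)
print(buckley_james(y, delta, X))                     # roughly [1.0, 2.0]
```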

13.
Knowledge of scale economies drives the incentives of regulators, governments and individual utilities to scale up or scale down operations. This paper considers returns to scale (RTS) in non-convex frontier models. In particular, we evaluate RTS assumptions in a Free Disposal Hull model that accounts for uncertainty and heterogeneity in the sample. Additionally, we provide a three-step framework to empirically analyze the existence and extent of RTS in real-world applications. In the first step, the presence of scale (and scope) economies is verified. In the second, RTS for individual observations are examined, while in the third step we derive the optimal scale for a sector as a whole. The framework is applied to the Portuguese drinking water sector, where we find the optimal scale to be situated around 7–10 million m³.
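For the non-convex frontier ingredient, an input-oriented FDH efficiency score reduces to a simple dominance comparison, as in the sketch below (toy data; the paper's robust, RTS-specific variants go further):

```python
import numpy as np

def fdh_input_efficiency(X, Y, k):
    """Input-oriented Free Disposal Hull score of unit k under variable RTS.

    Unit k is compared only with observed units whose outputs weakly dominate
    its own. The score equals 1 for undominated units and falls below 1 when
    some observed unit produces at least as much output with proportionally
    fewer inputs.
    """
    dominating = np.all(Y >= Y[k], axis=1)
    return np.max(X[dominating] / X[k], axis=1).min()

# toy sector: inputs = (opex, network length), output = water delivered (million m3)
X = np.array([[4.0, 2.0], [6.0, 3.0], [3.0, 2.5], [8.0, 5.0]])
Y = np.array([[5.0], [4.5], [4.0], [9.0]])
for k in range(len(X)):
    print(k, round(fdh_input_efficiency(X, Y, k), 3))   # unit 1 is inefficient
```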

14.
This paper presents some two-step estimators for a wide range of parametric panel data models with censored endogenous variables and sample selection bias. Our approach is to derive estimates of the unobserved heterogeneity responsible for the endogeneity/selection bias and include them as additional explanatory variables in the primary equation. These estimates are obtained through a decomposition of the reduced-form residuals. The panel nature of the data allows adjustment, and testing, for two forms of endogeneity and/or sample selection bias. Furthermore, it incorporates roles for dynamics and state dependence in the reduced form. Finally, we provide an empirical illustration which features our procedure and highlights the ability to test several of the underlying assumptions.
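A stripped-down version of the two-step idea, for a continuous (rather than censored) endogenous regressor and a Mundlak-style decomposition of the reduced-form residuals, might look as follows; all names and the data-generating process are illustrative:

```python
import numpy as np
import statsmodels.api as sm

def two_step_control_function(y, x_endog, z, ids):
    """Two-step control-function sketch for a panel with one endogenous regressor.

    Step 1: reduced form of x on the instrument z; keep residuals v.
    Step 2: split v into the individual mean (picking up the heterogeneity
    that drives the endogeneity) and the within deviation, and add both as
    controls in the primary equation; their t-statistics test the two forms
    of bias. The paper's censored variables, selection equations, and
    reduced-form dynamics are all omitted here.
    """
    v = sm.OLS(x_endog, sm.add_constant(z)).fit().resid
    v_bar = np.array([v[ids == i].mean() for i in ids])   # individual means
    W = sm.add_constant(np.column_stack([x_endog, v_bar, v - v_bar]))
    return sm.OLS(y, W).fit()

rng = np.random.default_rng(6)
N, T = 200, 10
ids = np.repeat(np.arange(N), T)
alpha = rng.normal(size=N)[ids]                     # unobserved heterogeneity
z = rng.normal(size=N * T)
x = 0.8 * z + 0.6 * alpha + rng.normal(size=N * T)  # endogenous through alpha
y = 1.0 + 0.5 * x + alpha + rng.normal(size=N * T)
print(two_step_control_function(y, x, z, ids).params)   # slope close to 0.5
```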

15.
Journal of Econometrics, 1986, 31(2): 151–178
Two of the most widely used statistical techniques for analyzing discrete economic phenomena are discriminant analysis (DA) and logit analysis. For purposes of parameter estimation, logit has been shown to be more robust than DA. However, under certain distributional assumptions both procedures yield consistent estimates and the DA estimator is asymptotically efficient. This suggests a natural Hausman specification test of these distributional assumptions by comparing the two estimators. In this paper, such a test is proposed and an empirical example involving corporate bankruptcies is provided. The finite-sample properties of the test statistic are also explored through some sampling experiments.
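Given coefficient estimates and covariance matrices from the two procedures, the Hausman statistic is immediate; the numbers below are made-up stand-ins for estimates from bankruptcy data:

```python
import numpy as np
from scipy.stats import chi2

def hausman_test(b_eff, V_eff, b_rob, V_rob):
    """Hausman specification test comparing two estimators of the same vector.

    Here b_eff plays the role of the DA estimator (efficient under the
    distributional null) and b_rob the logit MLE (consistent more generally).
    Under the null, Var(b_rob - b_eff) = V_rob - V_eff, giving a chi-squared
    statistic; a large value rejects the assumptions that justify DA.
    """
    d = b_rob - b_eff
    stat = float(d @ np.linalg.inv(V_rob - V_eff) @ d)
    return stat, chi2.sf(stat, df=len(d))

b_da  = np.array([0.90, -1.20]); V_da  = np.diag([0.010, 0.020])
b_log = np.array([1.10, -1.45]); V_log = np.diag([0.018, 0.031])
print(hausman_test(b_da, V_da, b_log, V_log))   # stat ~ 10.7, p ~ 0.005
```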

16.
Traditional multi-echelon inventory theory focuses on arborescent supply chains that use a central warehouse which replenishes remote warehouses. The remote warehouses serve customers in their respective regions. Common assumptions in the academic literature include use of the Poisson demand process and instantaneous unit-by-unit replenishment. In the practitioner literature, single-echelon approximations are advised for setting safety stock to deal with lead time, demand, and supply variations in these settings. Using data from a U.S. supplier of home improvement products, we find that neither the assumptions from the academic literature nor the approximations from the practitioner literature necessarily work well in practice.

In a variation of the strictly arborescent supply chain, the central warehouse at our real company not only replenishes other warehouses but also meets demand from customers in the region near the central warehouse. In this paper, we study this dual-role central warehouse structure, which we believe is common in practice. Using high- and low-volume product demand data from this company, we run Monte Carlo simulations to study the impact of (1) the use of a dual-role centralized warehouse, (2) common demand assumptions made in multi-echelon research, and (3) single-echelon approximations for managing a multi-echelon supply chain. We explore each of these under both centralized and decentralized control logic. We find that the common assumptions of theoretical models impede their usefulness and that heuristics that ignore the actual supply chain structure fail to account for additional opportunities to utilize safety stock more effectively. Researchers should be aware of the gap between standard assumptions in traditional literature and actual practice, and should critically evaluate their assumptions to find a reasonable balance between tractability and relevance.
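A compact discrete-time sketch of the dual-role structure: one stock pile at the central site serves both local Poisson demand and the remote warehouse's base-stock replenishment orders. Policies, lead times, and parameters are invented for illustration, not taken from the company data:

```python
import numpy as np

def simulate_dual_role(T=20000, mu_central=4.0, mu_remote=6.0,
                       S_central=60, S_remote=40,
                       L_supplier=5, L_transfer=3, seed=0):
    """Dual-role central warehouse under order-up-to policies.

    The central site fills its own region's Poisson demand AND the remote
    warehouse's replenishment orders from the same inventory; unmet demand
    is backordered. Returns (central fill rate, remote fill rate).
    """
    rng = np.random.default_rng(seed)
    inv_c, inv_r = S_central, S_remote            # on-hand, may go negative
    pipe_c = [0] * L_supplier                     # supplier -> central
    pipe_r = [0] * L_transfer                     # central -> remote
    filled_c = total_c = filled_r = total_r = 0
    for _ in range(T):
        inv_c += pipe_c.pop(0)                    # receive shipments
        inv_r += pipe_r.pop(0)
        d_c, d_r = rng.poisson(mu_central), rng.poisson(mu_remote)
        filled_c += min(max(inv_c, 0), d_c); total_c += d_c
        filled_r += min(max(inv_r, 0), d_r); total_r += d_r
        inv_c -= d_c
        inv_r -= d_r
        # remote orders up to S_remote; central ships what it has on hand
        want = S_remote - (inv_r + sum(pipe_r))
        ship = min(max(want, 0), max(inv_c, 0))
        inv_c -= ship
        pipe_r.append(ship)
        # central restores its position from an uncapacitated supplier
        pipe_c.append(max(S_central - (inv_c + sum(pipe_c)), 0))
    return filled_c / total_c, filled_r / total_r

print(simulate_dual_role())   # (central fill rate, remote fill rate)
```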

17.
As a result of novel data collection technologies, it is now common to encounter data in which the number of explanatory variables collected is large, while the number of variables that actually contribute to the model remains small. Thus, a method that can identify the variables with impact on the model, without including other non-effective ones, will make analysis much more efficient. Many methods have been proposed to resolve the model selection problem under such circumstances; however, it is still unknown how large a sample size is sufficient to identify the "effective" variables. In this paper, we apply a sequential sampling method so that the effective variables can be identified efficiently, and sampling is stopped as soon as the "effective" variables are identified and their corresponding regression coefficients are estimated with satisfactory accuracy, which is new to sequential estimation. Both fixed and adaptive designs are considered. The asymptotic properties of the estimates of the number of effective variables and their coefficients are established, and the proposed sequential estimation procedure is shown to be asymptotically optimal. Simulation studies are conducted to illustrate the performance of the proposed estimation method, and a diabetes data set is used as an example.
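A heuristic rendering of the stopping idea, assuming a lasso-type selector and a crude stability rule in place of the paper's formal stopping criterion (the threshold, batch size, and stability counts below are arbitrary choices):

```python
import numpy as np
from sklearn.linear_model import LassoCV

def sequential_selection(gen_batch, batch=50, tol=0.05, stable_steps=3):
    """Keep sampling in batches until the selected variables stabilize.

    Heuristic sketch: stop once the support (coefficients above a crude
    magnitude threshold) is unchanged and the coefficients move less than
    `tol` for `stable_steps` consecutive refits.
    """
    X, y = gen_batch(batch)
    prev_support, prev_coef, run = None, None, 0
    while True:
        coef = LassoCV(cv=5).fit(X, y).coef_
        support = frozenset(np.flatnonzero(np.abs(coef) > 0.1))
        if (prev_coef is not None and support == prev_support
                and np.max(np.abs(coef - prev_coef)) < tol):
            run += 1
            if run >= stable_steps:
                return sorted(support), coef, len(y)
        else:
            run = 0
        prev_support, prev_coef = support, coef
        Xb, yb = gen_batch(batch)
        X, y = np.vstack([X, Xb]), np.concatenate([y, yb])

rng = np.random.default_rng(7)
def gen_batch(m, p=20):
    Xb = rng.normal(size=(m, p))
    yb = 2.0 * Xb[:, 0] - 1.5 * Xb[:, 3] + rng.normal(size=m)  # 2 effective vars
    return Xb, yb

support, coef, n_used = sequential_selection(gen_batch)
print(support, n_used)   # should recover columns [0, 3]
```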

18.
We reconsider the question of allocating resources in large teams using decentralized procedures. Our assumption on the distribution of producers is that it approximates a given underlying distribution in producer space; thus we relax prior approaches which utilize replicas or i.i.d. sampling. We examine when solving the allocation problem for the underlying distribution yields an appropriate solution for the specific sample. We then show how market-like mechanisms may be used to obtain a decentralized decision process which is asymptotically optimal.

19.
Empirical evidence suggests that ambiguity is prevalent in insurance pricing and underwriting, and that often insurers tend to exhibit more ambiguity than the insured individuals (e.g., Hogarth and Kunreuther, 1989). Motivated by these findings, we consider a problem of demand for insurance indemnity schedules, where the insurer has ambiguous beliefs about the realizations of the insurable loss, whereas the insured is an expected-utility maximizer. We show that if the ambiguous beliefs of the insurer satisfy a property of compatibility with the non-ambiguous beliefs of the insured, then optimal indemnity schedules exist and are monotonic. By virtue of monotonicity, no ex-post moral hazard issues arise at our solutions (e.g., Huberman et al., 1983). In addition, in the case where the insurer is either ambiguity-seeking or ambiguity-averse, we show that the problem of determining the optimal indemnity schedule reduces to that of solving an auxiliary problem that is simpler than the original one in that it does not involve ambiguity. Finally, under additional assumptions, we give an explicit characterization of the optimal indemnity schedule for the insured, and we show how our results naturally extend the classical result of Arrow (1971) on the optimality of the deductible indemnity schedule.

20.
In recent years, real estate firms have been expanding both the scale and the scope of their investments, which has not only lowered investment efficiency but also wasted social resources. This paper analyzes, from a theoretical standpoint, the factors behind overinvestment by China's listed real estate companies and proposes corresponding hypotheses. Using sample data on listed real estate companies for 2003–2009, we construct a model and test it empirically. The results show that overinvestment is widespread among China's listed real estate companies; however, given the constraints of the current domestic corporate environment, cash dividend payouts are not yet sufficient to serve as a primary mechanism for curbing overinvestment.
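Studies of this kind commonly measure overinvestment Richardson-style, as positive residuals from an expected-investment regression; the abstract does not spell out the paper's exact model, so the sketch below is a generic stand-in with simulated data:

```python
import numpy as np
import statsmodels.api as sm

def overinvestment_residuals(invest, fundamentals):
    """Richardson-style overinvestment measure (a common design in this
    literature, not necessarily the paper's exact specification).

    Expected investment is fitted from fundamentals such as growth
    opportunities, leverage and cash holdings; positive residuals are
    read as overinvestment.
    """
    return sm.OLS(invest, sm.add_constant(fundamentals)).fit().resid

# toy firm-year panel standing in for the 2003-2009 sample (all values made up)
rng = np.random.default_rng(8)
n = 600
fund = rng.normal(size=(n, 3))                     # growth, leverage, cash
invest = 0.2 + fund @ np.array([0.3, -0.1, 0.2]) + rng.normal(scale=0.2, size=n)
resid = overinvestment_residuals(invest, fund)
over = resid[resid > 0]                            # overinvesting firm-years
dividends = rng.uniform(0, 0.05, size=over.size)   # hypothetical payout ratios
test = sm.OLS(over, sm.add_constant(dividends)).fit()
print(test.params, test.pvalues)   # an insignificant dividend coefficient
                                   # mirrors the paper's conclusion
```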

