Related Articles
20 related articles found (search time: 785 ms)
1.
In practice, inventory decisions depend heavily on demand forecasts, but the literature typically assumes that demand distributions are known. This means that estimates are substituted directly for the unknown parameters, leading to insufficient safety stocks, stock-outs, low service, and high costs. We propose a framework for addressing this estimation uncertainty that is applicable to any inventory model, demand distribution, and parameter estimator. The estimation errors are modeled and a predictive lead-time demand distribution is obtained, which is then substituted into the inventory model. We illustrate this framework for several different demand models. When the estimates are based on ten observations, the relative savings are typically between 10% and 30% for mean-stationary demand. However, the savings are larger when the estimates are based on fewer observations, when backorders are costlier, or when the lead time is longer. In the presence of a trend, the savings are between 50% and 80% for several scenarios.
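As a rough illustration of the gap between plug-in and predictive inventory decisions, the sketch below compares the two base-stock quantiles for normal demand estimated from ten observations. The rescaled Student-t predictive form is a standard result, and all numbers are invented rather than taken from the paper.

```python
# Sketch: plug-in vs. predictive quantiles for a newsvendor-style base-stock level.
# Assumes normal demand with both parameters estimated from n observations; the
# predictive distribution is then a rescaled Student-t (a standard result, not
# the paper's exact framework).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu_true, sigma_true, n = 100.0, 20.0, 10
sample = rng.normal(mu_true, sigma_true, size=n)
xbar, s = sample.mean(), sample.std(ddof=1)

service_level = 0.95  # critical fractile implied by holding/backorder costs

# Plug-in: treat the estimates as the true parameters (ignores estimation error).
plug_in = stats.norm.ppf(service_level, loc=xbar, scale=s)

# Predictive: account for estimation error via the t predictive distribution.
predictive = xbar + s * np.sqrt(1 + 1 / n) * stats.t.ppf(service_level, df=n - 1)

print(f"plug-in base stock:    {plug_in:.1f}")
print(f"predictive base stock: {predictive:.1f}")  # higher => more safety stock
```

The predictive quantile is systematically higher; the difference is exactly the extra safety stock that the plug-in approach misses.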

2.
We introduce the matrix exponential as a way of modelling spatially dependent data. The matrix exponential spatial specification (MESS) simplifies the log-likelihood, allowing a closed-form solution to the problem of maximum-likelihood estimation, and greatly simplifies the Bayesian estimation of the model. The MESS can produce estimates and inferences similar to those from conventional spatial autoregressive models, but has analytical, computational, and interpretive advantages. We present maximum likelihood and Bayesian approaches to the estimation of this spatial model specification, along with methods of model comparison over different explanatory variables and spatial specifications.
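A minimal sketch of the closed-form convenience: because a spatial weight matrix W has a zero diagonal, tr(W) = 0 and det(e^{αW}) = e^{α·tr(W)} = 1, so the Jacobian term vanishes and maximum likelihood reduces to least squares in α. The simulated data and dimensions below are our own assumptions, not the authors' application.

```python
# Sketch of MESS estimation: e^{alpha W} y = X beta + eps. Since tr(W) = 0,
# |e^{alpha W}| = 1 and ML reduces to least squares in alpha — a toy
# illustration on simulated data, not the paper's full treatment.
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
n = 50
W = rng.random((n, n)); np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)          # row-standardized weights
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true, alpha_true = np.array([1.0, 0.5]), -0.7
y = np.linalg.solve(expm(alpha_true * W), X @ beta_true + 0.1 * rng.normal(size=n))

def sse(alpha):
    Sy = expm(alpha * W) @ y               # transform y, then plain OLS
    beta = np.linalg.lstsq(X, Sy, rcond=None)[0]
    resid = Sy - X @ beta
    return resid @ resid

res = minimize_scalar(sse, bounds=(-2, 2), method="bounded")
print("alpha_hat:", res.x)                 # should be near -0.7
```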

3.
Qualitative response models (QRMs) are analyzed from the Bayesian point of view, using diffuse and informative prior distributions. Exact finite-sample Bayesian and large-sample Bayesian and non-Bayesian estimation results are compared. In addition, the paper provides: (1) plots and discussion of the properties of likelihood functions for QRMs, (2) posterior distributions for logit models' derivatives and elasticities, (3) Bayesian prediction procedures for QRMs, (4) new estimates for the median and other fractiles of the logistic distribution, (5) posterior odds ratios for model selection problems, and (6) comparisons of two alternative Monte Carlo numerical integration procedures. It is concluded that asymptotic approximations are not accurate for small- to moderate-sized samples even when only a single input variable is used, and that operational Bayesian methods are available for providing both exact small-sample and approximate large-sample inferences for QRMs.
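The small-sample point can be illustrated with a one-parameter logit: under a flat prior the exact posterior is available by brute-force grid integration and can be compared with the MLE that centres the normal asymptotic approximation. A toy sketch with invented data:

```python
# Sketch: exact (grid-based) posterior for a one-parameter logit under a flat
# prior, versus the MLE around which the asymptotic approximation is centred —
# a toy illustration of the small-sample point, not the paper's procedures.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
n, beta_true = 15, 1.0
x = rng.normal(size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-beta_true * x)))

def negloglik(b):
    eta = b * x
    return -np.sum(y * eta - np.log1p(np.exp(eta)))

# Exact posterior under a flat prior, by brute-force grid integration.
grid = np.linspace(-5, 5, 2001)
dx = grid[1] - grid[0]
logpost = np.array([-negloglik(b) for b in grid])
post = np.exp(logpost - logpost.max())
post /= post.sum() * dx                    # normalize to a density

mle = minimize_scalar(negloglik, bounds=(-5, 5), method="bounded").x
print("exact posterior mean:    ", (grid * post).sum() * dx)
print("MLE (asymptotic centre): ", mle)
```

With only fifteen observations the exact posterior is noticeably skewed, so its mean and the MLE can differ visibly, which is the paper's warning in miniature.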

4.
While the likelihood ratio measures statistical support for an alternative hypothesis about a single parameter value, it is undefined for an alternative hypothesis that is composite in the sense that it corresponds to multiple parameter values. Regarding the parameter of interest as a random variable enables measuring support for a composite alternative hypothesis without requiring the elicitation or estimation of a prior distribution, as described below. In this setting, in which parameter randomness represents variability rather than uncertainty, the ideal measure of the support for one hypothesis over another is the difference in the posterior and prior log‐odds. That ideal support may be replaced by any measure of support that, on a per‐observation basis, is asymptotically unbiased as a predictor of the ideal support. Such measures of support are easily interpreted and, if desired, can be combined with any specified or estimated prior probability of the null hypothesis. Two qualifying measures of support are minimax‐optimal. An application to proteomics data indicates that a modification of optimal support computed from data for a single protein can closely approximate the estimated difference in posterior and prior odds that would be available with the data for 20 proteins.

5.
This paper discusses some simple practical advantages of Markov chain Monte Carlo (MCMC) methods in estimating entry and exit transition probabilities from repeated independent surveys. Simulated data are used to illustrate the usefulness of MCMC methods when the likelihood function has multiple local maxima. Actual data on the evaluation of an HIV prevention intervention program among drug users are used to demonstrate the advantage of using prior information to enhance parameter identification. The latter example also demonstrates an important strength of the MCMC approach, namely the ability to make inferences on arbitrary functions of model parameters.
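A generic illustration of why MCMC helps with multimodality: a random-walk Metropolis sampler visits both modes of a deliberately bimodal posterior, whereas a hill-climbing maximizer would report whichever local maximum it happened to reach. The target below is a toy, not the surveys' transition-probability model.

```python
# Sketch: random-walk Metropolis on a deliberately bimodal log-posterior,
# illustrating how MCMC traverses multiple local maxima that would trap a
# hill-climbing optimizer — a generic illustration, not the paper's model.
import numpy as np

rng = np.random.default_rng(3)

def log_post(p):                       # toy bimodal target on (0, 1)
    if not 0 < p < 1:
        return -np.inf
    return np.logaddexp(-0.5 * ((p - 0.25) / 0.05) ** 2,
                        -0.5 * ((p - 0.75) / 0.05) ** 2)

p, draws = 0.5, []
for _ in range(20000):
    prop = p + 0.1 * rng.normal()      # symmetric random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(p):
        p = prop
    draws.append(p)

draws = np.array(draws[5000:])         # discard burn-in
print("mass near each mode:", np.mean(draws < 0.5), np.mean(draws > 0.5))
```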

6.
We propose imposing data‐driven identification constraints to alleviate the multimodality problem arising in the estimation of poorly identified dynamic stochastic general equilibrium models under non‐informative prior distributions. We also devise an iterative procedure based on the posterior density of the parameters for finding these constraints. An empirical application to the Smets and Wouters (2007) model demonstrates the properties of the estimation method, and shows how the problem of multimodal posterior distributions caused by parameter redundancy is eliminated by identification constraints. Out‐of‐sample forecast comparisons as well as Bayes factors lend support to the constrained model.

7.
This paper considers factor estimation from heterogeneous data, where some of the variables—the relevant ones—are informative for estimating the factors, and others—the irrelevant ones—are not. We estimate the factor model within a Bayesian framework, specifying a sparse prior distribution for the factor loadings. Based on identified posterior factor loading estimates, we provide alternative methods to identify relevant and irrelevant variables. Simulations show that both types of variables are identified quite accurately. Empirical estimates for a large multi‐country GDP dataset and a disaggregated inflation dataset for the USA show that a considerable share of variables is irrelevant for factor estimation.

8.
This paper introduces measures for how each moment contributes to the precision of parameter estimates in generalized method of moments settings. For example, one of the measures asks what would happen to the variance of the parameter estimates if a particular moment were dropped from the estimation. The measures are all easy to compute. We illustrate the usefulness of the measures through two simple examples as well as an application to a model of joint retirement planning of couples. We estimate the model using the British Household Panel Survey, and we find evidence of complementarities in leisure. Our sensitivity measures illustrate that the estimate of the complementarity is primarily informed by the distribution of differences in planned retirement dates. The estimated econometric model can be interpreted as a bivariate ordered-choice model that allows for simultaneity. This makes the model potentially useful in other applications.
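A sketch of one such drop-one-moment measure: with the efficient-GMM asymptotic variance (G'S⁻¹G)⁻¹, recompute the variance with each moment deleted and report the inflation. The Jacobian G and moment covariance S below are hypothetical placeholders, not the paper's retirement model.

```python
# Sketch of a drop-one-moment sensitivity check: compare the efficient-GMM
# asymptotic variance (G' S^{-1} G)^{-1} using all moments versus each moment
# removed — a toy version of the idea, under an assumed G and S.
import numpy as np

# Hypothetical Jacobian of 4 moments w.r.t. 2 parameters, and moment covariance.
G = np.array([[1.0, 0.2],
              [0.5, 1.0],
              [0.1, 0.1],
              [0.8, 0.4]])
S = np.diag([1.0, 1.5, 2.0, 1.2])

def gmm_var(G, S):
    return np.linalg.inv(G.T @ np.linalg.inv(S) @ G)

base = gmm_var(G, S)
for j in range(G.shape[0]):
    keep = [i for i in range(G.shape[0]) if i != j]
    V = gmm_var(G[keep], S[np.ix_(keep, keep)])
    print(f"drop moment {j}: variance inflation {np.diag(V) / np.diag(base)}")
```

Moments whose removal inflates the variance the most are the ones primarily informing the estimate.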

9.
Parametric mixture models are commonly used in applied work, especially empirical economics, where they are often employed to learn, for example, about the proportions of various types in a given population. This paper examines the inference question on the proportions (mixing probability) in a simple mixture model in the presence of nuisance parameters when the sample size is large. It is well known that likelihood inference in mixture models is complicated by (1) lack of point identification, and (2) parameters (for example, mixing probabilities) whose true value may lie on the boundary of the parameter space. These issues cause the profiled likelihood ratio (PLR) statistic to admit asymptotic limits that differ discontinuously depending on how the true density of the data approaches the regions of singularities where point identification fails. This lack of uniformity in the asymptotic distribution suggests that confidence intervals based on pointwise asymptotic approximations might lead to faulty inferences. This paper examines this problem in detail in a finite mixture model and provides possible fixes based on the parametric bootstrap. We examine the performance of this parametric bootstrap in Monte Carlo experiments and apply it to data from Beauty Contest experiments. We also examine small-sample inferences and projection methods.
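A stripped-down analog of the proposed fix: estimate a mixing probability near the boundary by profile likelihood, then calibrate its uncertainty by simulating new samples from the fitted mixture rather than relying on the chi-square limit. Component densities and sample sizes below are invented.

```python
# Sketch: parametric-bootstrap confidence interval for a mixing probability in
# a two-component mixture with known components — a stripped-down analog of the
# paper's fix, where boundary effects break the usual chi-square limit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
f0, f1 = stats.norm(0, 1), stats.norm(3, 1)
grid = np.linspace(0, 1, 201)

def mle_pi(x):
    # profile the likelihood over the mixing probability (nuisances fixed here)
    ll = [np.sum(np.log((1 - p) * f0.pdf(x) + p * f1.pdf(x))) for p in grid]
    return grid[int(np.argmax(ll))]

x = np.concatenate([f0.rvs(95, random_state=rng), f1.rvs(5, random_state=rng)])
pi_hat = mle_pi(x)

boot = []
for _ in range(500):                    # simulate from the *fitted* mixture
    z = rng.random(len(x)) < pi_hat
    xb = np.where(z, f1.rvs(len(x), random_state=rng),
                     f0.rvs(len(x), random_state=rng))
    boot.append(mle_pi(xb))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"pi_hat = {pi_hat:.3f}, bootstrap 95% CI = [{lo:.3f}, {hi:.3f}]")
```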

10.
Nonparametric estimation and inferences of conditional distribution functions with longitudinal data have important applications in biomedical studies. We propose in this paper an estimation approach based on time-varying parametric models. Our model assumes that the conditional distribution of the outcome variable at each given time point can be approximated by a parametric model, but the parameters are smooth functions of time. Our estimation is based on a two-step smoothing method, in which we first obtain the raw estimators of the conditional distribution functions at a set of disjoint time points, and then compute the final estimators at any time by smoothing the raw estimators. Asymptotic properties, including the asymptotic biases, variances and mean squared errors, are derived for the local polynomial smoothed estimators. Applicability of our two-step estimation method is demonstrated through a large epidemiological study of childhood growth and blood pressure. Finite sample properties of our procedures are investigated through a simulation study.
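The two-step logic is easy to sketch: fit a raw parametric estimate at each time point, then smooth the raw estimates across time with a kernel-weighted local-linear fit. The sketch below smooths a time-varying mean rather than a full conditional distribution, with invented data.

```python
# Sketch of the two-step method: (1) raw parametric fits at each time point,
# (2) local-linear smoothing of the raw parameter estimates across time.
# Toy data: a normal outcome whose mean drifts over time; assumptions ours.
import numpy as np

rng = np.random.default_rng(5)
times = np.arange(10)
raw_mu = []
for t in times:                        # step 1: raw estimate at each time point
    y_t = rng.normal(np.sin(t / 3), 1.0, size=40)
    raw_mu.append(y_t.mean())
raw_mu = np.array(raw_mu)

def local_linear(t0, t, theta, h=2.0): # step 2: kernel-weighted linear fit
    w = np.exp(-0.5 * ((t - t0) / h) ** 2)
    coef = np.polyfit(t - t0, theta, deg=1, w=np.sqrt(w))
    return coef[1]                     # intercept = smoothed value at t0

smooth = [local_linear(t0, times, raw_mu) for t0 in np.linspace(0, 9, 5)]
print(np.round(smooth, 3))
```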

11.
A balanced panel of data is used to estimate technical efficiency, employing a fixed-effects stochastic frontier specification for wool producers in Australia. Both point estimates and confidence intervals for technical efficiency are reported. The confidence intervals are constructed using the multiple comparisons with the best (MCB) procedure of Horrace and Schmidt (1996, 2000). The confidence intervals make explicit the precision of the technical efficiency estimates and underscore the dangers of drawing inferences based solely on point estimates. Additionally, they allow identification of wool producers that are statistically efficient and those that are statistically inefficient. The data reveal at the 95% level that twenty-one of the twenty-six wool farms analyzed may be efficient.
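A simulation-based stand-in for such interval estimates: draw the fixed effects from their estimated sampling distribution, convert each draw to efficiencies TE_i = exp(α_i − max_j α_j), and read off percentile intervals. This is a generic approximation, not Horrace and Schmidt's exact MCB procedure; the effects and covariance below are hypothetical.

```python
# Sketch: simulation-based analog of interval estimates for technical
# efficiency TE_i = exp(alpha_i - max_j alpha_j), given fixed-effect estimates
# and their covariance. A generic approximation, not the exact MCB procedure.
import numpy as np

rng = np.random.default_rng(6)
alpha_hat = np.array([0.00, -0.10, -0.25, -0.05])      # hypothetical effects
cov = np.diag([0.02, 0.02, 0.02, 0.02])                # hypothetical covariance

draws = rng.multivariate_normal(alpha_hat, cov, size=10000)
te = np.exp(draws - draws.max(axis=1, keepdims=True))  # efficiency per draw

for i, (lo, hi) in enumerate(zip(*np.percentile(te, [2.5, 97.5], axis=0))):
    print(f"farm {i}: TE 95% interval [{lo:.2f}, {hi:.2f}]")
```

Farms whose interval includes 1 cannot be statistically distinguished from the best, which is how "statistically efficient" producers are identified.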

12.
We use frequency domain techniques to estimate a medium‐scale dynamic stochastic general equilibrium (DSGE) model on different frequency bands. We show that goodness of fit, forecasting performance and parameter estimates vary substantially with the frequency bands over which the model is estimated. Estimates obtained using subsets of frequencies are characterized by significantly different parameters, an indication that the model cannot match all frequencies with one set of parameters. In particular, we find that: (i) the low‐frequency properties of the data strongly affect parameter estimates obtained in the time domain; (ii) the importance of economic frictions in the model changes when different subsets of frequencies are used in estimation. This is particularly true for the investment adjustment cost and habit persistence: when low frequencies are present in the estimation, the investment adjustment cost and habit persistence are estimated to be higher than when low frequencies are absent.
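The generic device behind band-limited estimation is the Whittle likelihood restricted to a frequency band. The sketch below estimates an AR(1) coefficient from all frequencies and then from high frequencies only; it is a toy illustration of the mechanics, not the paper's DSGE exercise.

```python
# Sketch: Whittle (frequency-domain) estimation of an AR(1) over a restricted
# frequency band — the generic device behind band-limited estimation, not the
# paper's DSGE setup. Innovation variance is fixed at its true value of one.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(10)
T, phi_true = 500, 0.9
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi_true * x[t - 1] + rng.normal()

freqs = 2 * np.pi * np.arange(1, T // 2) / T
I = np.abs(np.fft.fft(x)[1:T // 2]) ** 2 / (2 * np.pi * T)   # periodogram

def whittle(phi, mask):
    # AR(1) spectral density with unit innovation variance
    f = 1.0 / (2 * np.pi * np.abs(1 - phi * np.exp(-1j * freqs[mask])) ** 2)
    return np.sum(np.log(f) + I[mask] / f)

all_f = np.ones_like(freqs, dtype=bool)
high_f = freqs > np.pi / 16                   # drop the lowest frequencies
for mask, name in [(all_f, "all bands"), (high_f, "high only")]:
    est = minimize_scalar(lambda p: whittle(p, mask),
                          bounds=(-0.99, 0.99), method="bounded").x
    print(f"{name:9s}: phi_hat = {est:.3f}")
```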

13.
We consider Bayesian inference techniques for agent-based (AB) models, as an alternative to simulated minimum distance (SMD). Three computationally heavy steps are involved: (i) simulating the model, (ii) estimating the likelihood and (iii) sampling from the posterior distribution of the parameters. Computational complexity of AB models implies that efficient techniques have to be used with respect to points (ii) and (iii), possibly involving approximations. We first discuss non-parametric (kernel density) estimation of the likelihood, coupled with Markov chain Monte Carlo sampling schemes. We then turn to parametric approximations of the likelihood, which can be derived by observing the distribution of the simulation outcomes around the statistical equilibria, or by assuming a specific form for the distribution of external deviations in the data. Finally, we introduce Approximate Bayesian Computation techniques for likelihood-free estimation. These allow embedding SMD methods in a Bayesian framework, and are particularly suited when robust estimation is needed. These techniques are first tested in a simple price discovery model with one parameter, and then employed to estimate the behavioural macroeconomic model of De Grauwe (2012), with nine unknown parameters.
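A compact sketch of step (ii) combined with step (iii): at each candidate parameter, simulate the model, estimate the likelihood of the observed statistic by a kernel density over the simulated outcomes, and feed that estimate into a Metropolis acceptance step. The "model" below is a trivial stub standing in for an expensive agent-based simulator.

```python
# Sketch: kernel-density likelihood estimation inside Metropolis sampling for a
# simulation-based model — here a trivial stub standing in for an actual
# agent-based simulator, under a flat prior.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)

def simulate(theta, n=200):            # stand-in for an expensive AB model
    return theta + rng.normal(0, 1, size=n)

obs = 0.8                              # observed summary statistic
theta, draws = 0.0, []
ll = gaussian_kde(simulate(theta)).logpdf(obs)[0]
for _ in range(2000):
    prop = theta + 0.3 * rng.normal()
    ll_prop = gaussian_kde(simulate(prop)).logpdf(obs)[0]
    if np.log(rng.random()) < ll_prop - ll:   # flat prior => likelihood ratio
        theta, ll = prop, ll_prop
    draws.append(theta)

print("posterior mean of theta:", np.mean(draws[500:]))
```

Because the likelihood is only estimated, the acceptance step is noisy; in practice this is where the approximations the abstract mentions come in.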

14.
This article generalizes production risk from a single-output production function to a multiple-output cost frontier, which makes it possible to examine input-oriented technical efficiencies and production risk simultaneously in the context of panel data. Furthermore, joint confidence interval estimates for technical efficiencies are constructed by means of the multiple comparisons with the best approach. Whether or not production risk is taken into account has quite different implications for the average technical efficiency measure and for the identification of the multiple efficient banks achieving the optimal cost frontier. It is suggested that inferences drawn on the basis of the confidence intervals of technical efficiency provide much more fruitful and insightful information than point estimates alone. Bank-specific risk parameters are found to be highly and positively correlated with fixed-effect estimates, implying that the more risk-averse a bank is, the more technically efficient it will be.

15.
The past forty years have seen a great deal of research into the construction and properties of nonparametric estimates of smooth functions. This research has focused primarily on two sides of the smoothing problem: nonparametric regression and density estimation. Theoretical results for these two situations are similar, and multivariate density estimation was an early justification for the Nadaraya-Watson kernel regression estimator.
A third, less well-explored, strand of applications of smoothing is to the estimation of probabilities in categorical data. In this paper the position of categorical data smoothing as a bridge between nonparametric regression and density estimation is explored. Nonparametric regression provides a paradigm for the construction of effective categorical smoothing estimates, and use of an appropriate likelihood function yields cell probability estimates with many desirable properties. Such estimates can be used to construct regression estimates when one or more of the categorical variables are viewed as response variables. They also lead naturally to the construction of well-behaved density estimates using local or penalized likelihood estimation, which can then be used in a regression context. Several real data sets are used to illustrate these points.
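A minimal sketch of categorical smoothing: raw cell proportions for an ordered variable are shrunk toward neighbouring cells by a kernel, trading a little bias for much lower variance in sparse cells. Counts and bandwidth below are invented.

```python
# Sketch: kernel smoothing of cell probabilities for an ordered categorical
# variable — raw cell proportions are shrunk toward their neighbours, the
# bridge between density estimation and regression described above. Toy data.
import numpy as np

counts = np.array([2, 0, 5, 9, 4, 0, 1])       # sparse observed cell counts
cells = np.arange(len(counts))
p_raw = counts / counts.sum()

h = 1.0                                        # bandwidth
K = np.exp(-0.5 * ((cells[:, None] - cells[None, :]) / h) ** 2)
K /= K.sum(axis=1, keepdims=True)              # rows sum to one
p_smooth = K @ p_raw
p_smooth /= p_smooth.sum()                     # renormalize to a probability vector

print(np.round(p_raw, 3))                      # note the empty cells
print(np.round(p_smooth, 3))                   # positive, smoothed estimates
```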

16.
In a regression context, consider the difference in expected outcome associated with a particular difference in one of the input variables. If the true regression relationship involves interactions, then this predictive comparison can depend on the values of the other input variables. Therefore, one may wish to consider an average predictive comparison as a target of inference, where the averaging is with respect to the population distribution of the input variables. We consider inferences about such targets, with emphasis on inferential performance when the regression model is misspecified. In particular, in light of the difficulties in dealing with interaction terms in regression models, we examine inferences about average predictive comparisons when additive models are fitted to relationships truly involving pairwise interaction terms. We identify some circumstances where such inferences are consistent despite the model misspecification, notably when the input variables are independent or have a multivariate normal distribution.
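A sketch of the target and the misspecification experiment: generate data with a true u×v interaction, fit an additive OLS model, and compare the fitted average predictive comparison for u against the true one averaged over v. With independent inputs the two agree, in line with the consistency result described above.

```python
# Sketch: an average predictive comparison for input u, averaging the change in
# fitted expectation over the empirical distribution of the other inputs — here
# with an additive OLS fit to data that truly contain an interaction.
import numpy as np

rng = np.random.default_rng(8)
n = 2000
u, v = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * u + 0.5 * v + 0.8 * u * v + rng.normal(size=n)

X = np.column_stack([np.ones(n), u, v])        # misspecified: no u*v term
beta = np.linalg.lstsq(X, y, rcond=None)[0]

d = 1.0                                        # size of the input change
pred = lambda uu, vv: beta[0] + beta[1] * uu + beta[2] * vv
apc_fitted = np.mean(pred(u + d, v) - pred(u, v))
apc_true = np.mean((2.0 + 0.8 * v) * d)        # true comparison, averaged over v
print(f"fitted APC {apc_fitted:.3f} vs true APC {apc_true:.3f}")
```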

17.
Effective linkage detection and gene mapping requires analysis of data jointly on members of extended pedigrees, jointly at multiple genetic markers. Exact likelihood computation is then often infeasible, but Markov chain Monte Carlo (MCMC) methods permit estimation of posterior probabilities of genome sharing among relatives, conditional upon marker data. In principle, MCMC also permits estimation of linkage analysis location score curves, but in practice effective MCMC samplers are hard to find. Although the whole-meiosis Gibbs sampler (M-sampler) performs well in some cases, for extended pedigrees and tightly linked markers better samplers are needed. However, using the M-sampler as a proposal distribution in a Metropolis-Hastings algorithm does allow genetic interference to be incorporated into the analysis.

18.
We develop new tests of the capital asset pricing model that take account of and are valid under the assumption that the distribution generating returns is elliptically symmetric; this assumption is necessary and sufficient for the validity of the CAPM. Our test is based on semiparametric efficient estimation procedures for a seemingly unrelated regression model where the multivariate error density is elliptically symmetric, but otherwise unrestricted. The elliptical symmetry assumption allows us to avoid the curse of dimensionality problem that typically arises in multivariate semiparametric estimation procedures, because the multivariate elliptically symmetric density function can be written as a function of a scalar transformation of the observed multivariate data. The elliptically symmetric family includes a number of thick‐tailed distributions and so is potentially relevant in financial applications. Our estimated betas are lower than the OLS estimates, and our parameter estimates are much less consistent with the CAPM restrictions than the corresponding OLS estimates.

19.
We review generalized dynamic models for time series of count data. Usually temporal counts are modelled as following a Poisson distribution, and a transformation of the mean depends on parameters which evolve smoothly with time. We generalize the usual dynamic Poisson model by considering continuous mixtures of the Poisson distribution. We consider Poisson‐gamma and Poisson‐log‐normal mixture models. These models have a parameter for each time t which captures possible extra‐variation present in the data. If the time interval between observations is short, many observed zeros might result. We also propose zero inflated versions of the models mentioned above. In epidemiology, when a count is equal to zero, one does not know if the disease is present or not. Our model has a parameter which provides the probability of presence of the disease given no cases were observed. We rely on the Bayesian paradigm to obtain estimates of the parameters of interest, and discuss numerical methods to obtain samples from the resultant posterior distribution. We fit the proposed models to artificial data sets and also to a weekly time series of the registered number of cases of dengue fever in a district of the city of Rio de Janeiro, Brazil, during 2001 and 2002.
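The zero-inflated pieces are easy to write down. Below are a static ZIP log-likelihood and the probability that the disease is present given a zero count, P(present | y = 0) = (1 − w)e^{−λ} / (w + (1 − w)e^{−λ}); the dynamic Bayesian machinery of the paper is not reproduced here.

```python
# Sketch: zero-inflated Poisson pieces — the ZIP log-likelihood and the
# probability that the disease is present given a zero count. A static toy,
# not the paper's dynamic Bayesian model.
import numpy as np
from scipy.special import gammaln

def zip_loglik(y, w, lam):
    y = np.asarray(y)
    # zeros arise either structurally (prob w) or from the Poisson (prob 1-w)
    ll_zero = np.log(w + (1 - w) * np.exp(-lam))
    ll_pos = np.log(1 - w) - lam + y * np.log(lam) - gammaln(y + 1)
    return np.sum(np.where(y == 0, ll_zero, ll_pos))

def prob_present_given_zero(w, lam):
    # Bayes' rule: P(present | y = 0)
    return (1 - w) * np.exp(-lam) / (w + (1 - w) * np.exp(-lam))

y = [0, 0, 3, 0, 1, 0, 0, 2]
print(zip_loglik(y, w=0.4, lam=1.5))
print(prob_present_given_zero(w=0.4, lam=1.5))
```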

20.
This paper studies the empirical performance of stochastic volatility models for twenty years of weekly exchange rate data for four major currencies. We concentrate on the effects of the distribution of the exchange rate innovations for both parameter estimates and for estimates of the latent volatility series. The density of the log of squared exchange rate innovations is modelled as a flexible mixture of normals. We use three different estimation techniques: quasi-maximum likelihood, simulated EM, and a Bayesian procedure. The estimated models are applied for pricing currency options. The major findings of the paper are that (1) explicitly incorporating fat-tailed innovations increases the estimates of the persistence of volatility dynamics; (2) the estimation error of the volatility time series is very large; (3) this in turn causes standard errors on calculated option prices to be so large that these prices are rarely significantly different from a model with constant volatility.
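A textbook quasi-maximum-likelihood sketch for a basic SV model: linearize via log y_t² = h_t + ξ_t, approximate ξ_t (a log-χ²₁ variable) by a normal with mean −1.27 and variance π²/2, and run a Kalman filter. This is the QML baseline only, not the paper's mixture-of-normals specification; all settings below are invented.

```python
# Sketch: quasi-maximum likelihood for a basic stochastic volatility model via
# the linearization log y_t^2 = h_t + xi_t, treating xi_t as Gaussian with the
# moments of log chi^2_1 (mean -1.27, variance pi^2/2). A textbook QML sketch.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
T, mu, phi, sig_eta = 1000, -1.0, 0.95, 0.2
h = np.empty(T); h[0] = mu
for t in range(1, T):                          # latent log-volatility AR(1)
    h[t] = mu + phi * (h[t - 1] - mu) + sig_eta * rng.normal()
y = np.exp(h / 2) * rng.normal(size=T)
z = np.log(y**2 + 1e-12) + 1.27                # de-meaned linearized observation

def neg_qml(params):
    mu_, phi_, se = params
    if not (abs(phi_) < 0.999 and se > 0):
        return np.inf
    H = np.pi**2 / 2                           # observation-noise variance
    a, P, ll = mu_, se**2 / (1 - phi_**2), 0.0
    for zt in z:                               # standard Kalman recursion
        F = P + H
        v = zt - a
        ll += -0.5 * (np.log(2 * np.pi * F) + v**2 / F)
        K = P / F
        a, P = a + K * v, P * (1 - K)          # update
        a, P = mu_ + phi_ * (a - mu_), phi_**2 * P + se**2   # predict
    return -ll

res = minimize(neg_qml, x0=[-1.0, 0.9, 0.3], method="Nelder-Mead")
print("mu, phi, sigma_eta:", np.round(res.x, 3))
```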
