Similar Documents
20 similar documents found (search time: 0 ms)
1.
This paper investigates the implications of bounded speculative storage (storage bounded from below at zero and from above at a capacity) on commodity prices. Binding capacity mirrors the non-negativity constraint on storage and leads to negative price spiking and higher volatility when the market is in deep contango, i.e. low current prices at high stock levels. With bounded storage there is no need to restrict storage to be costly to ensure a rational expectations equilibrium. This allows the model to cover a wide range of storage technologies, including free and productive storage. We also provide an alternative expression for speculative prices that highlights the key role of the storage boundaries. The competitive equilibrium price is the sum of discounted, probability-weighted future boundary prices. The boundary prices can be viewed as dividends on commodities in storage, reflecting the realization of economic profits from storage.

2.
In this paper, we propose an estimator for the population mean when some observations on the study and auxiliary variables are missing from the sample. The proposed estimator is valid for any unequal probability sampling design, and is based upon the pseudo empirical likelihood method. The proposed estimator is compared with other estimators in a simulation study.

3.
We consider the problem of estimating a probability density function based on data that are corrupted by noise from a uniform distribution. The (nonparametric) maximum likelihood estimator for the corresponding distribution function is well defined. For the density function this is not the case. We study two nonparametric estimators for this density. The first is a type of kernel density estimate based on the empirical distribution function of the observable data. The second is a kernel density estimate based on the MLE of the distribution function of the unobservable (uncorrupted) data.
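The first estimator described above, a kernel density estimate built from the observable (noise-corrupted) data, can be illustrated with a minimal sketch. The simulation setup, bandwidth, and uniform-noise model here are illustrative assumptions, not the authors' exact construction:

```python
import numpy as np

def gaussian_kde(data, grid, bandwidth):
    # Plain kernel density estimate: an average of Gaussian bumps
    # centred at the observed data points
    z = (grid[:, None] - data[None, :]) / bandwidth
    weights = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
    return weights.sum(axis=1) / (len(data) * bandwidth)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=2000)        # unobservable "clean" draws
y = x + rng.uniform(0.0, 1.0, size=2000)   # corrupted by additive uniform noise
grid = np.linspace(-5.0, 6.0, 400)
f_hat = gaussian_kde(y, grid, bandwidth=0.3)

mass = f_hat.sum() * (grid[1] - grid[0])   # Riemann sum, should be close to 1
```

A density estimate for the unobservable x would additionally require deconvolving the uniform noise, e.g. via the MLE of the distribution function mentioned in the abstract.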

4.
A very well-known model in software reliability theory is that of Littlewood (1980). The (three) parameters in this model are usually estimated by means of the maximum likelihood method. The system of likelihood equations can have more than one solution. Only one of them will be consistent, however. In this paper we present a different, more analytical approach, exploiting the mathematical properties of the log-likelihood function itself. Our belief is that the ideas and methods developed in this paper could also be of interest for statisticians working on the estimation of the parameters of the generalised Pareto distribution. For those more generally interested in maximum likelihood the paper provides a 'practical case', indicating how complex matters may become when only three parameters are involved. Moreover, readers not familiar with counting process theory and software reliability are given a first introduction.

5.
For a multilevel model with two levels and only a random intercept, the quality of different estimators of the random intercept is examined. Analytical results are given for the marginal model interpretation, where negative estimates of the variance components are allowed. Except for four or five level-2 units, the Empirical Bayes Estimator (EBE) has a lower average Bayes risk than the Ordinary Least Squares Estimator (OLSE). The EBEs based on restricted maximum likelihood (REML) estimators of the variance components have a lower Bayes risk than the EBEs based on maximum likelihood (ML) estimators. For the hierarchical model interpretation, where estimates of the variance components are restricted to being positive, Monte Carlo simulations were done. In this case the EBE has a lower average Bayes risk than the OLSE, also for four or five level-2 units. For large numbers of level-1 (30) or level-2 units (100), the performances of REML-based and ML-based EBEs are comparable. For small numbers of level-1 (10) and level-2 units (25), the REML-based EBEs have a lower Bayes risk than ML-based EBEs only for high intraclass correlations (0.5).
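The shrinkage mechanism behind the Empirical Bayes Estimator can be sketched in a small Monte Carlo. For simplicity the variance components are taken as known here (the abstract's EBEs estimate them by ML or REML), and the sample sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
J, n = 200, 10                  # level-2 units and level-1 units per group
tau2, sigma2 = 1.0, 4.0         # intercept and residual variances (known here)

u = rng.normal(0.0, np.sqrt(tau2), J)                     # true random intercepts
y = u[:, None] + rng.normal(0.0, np.sqrt(sigma2), (J, n))

ols = y.mean(axis=1)                    # OLSE: the per-group sample mean
w = tau2 / (tau2 + sigma2 / n)          # shrinkage weight
ebe = w * ols + (1.0 - w) * ols.mean()  # shrink each group toward the grand mean

mse_ols = np.mean((ols - u) ** 2)
mse_ebe = np.mean((ebe - u) ** 2)
```

With many groups, the EBE's average risk is visibly below the OLSE's, matching the direction of the abstract's analytical comparison.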

6.
We explore the tail dependence between crude oil prices and exchange rates via a dynamic quantile association regression model based on the flexible Fourier form. This method allows us to describe the quantile dependence between the conditional distributions of assets. We first perform simulation exercises to gauge the estimation precision of our model. We then undertake empirical analyses to examine the dynamic relation between crude oil and nine exchange rates. We find a mild, symmetric tail dependence between these two assets, which increases sharply during the Great Recession of 2008. Further robustness checks substantiate the baseline results.

7.
For estimating p (≥ 2) independent Poisson means, the paper considers a compromise between maximum likelihood and empirical Bayes estimators. Such compromise estimators enjoy good componentwise as well as ensemble properties. Research supported by NSF Grant Number MCS-8218091.

8.
Hedonic methods are a prominent approach in the construction of quality-adjusted price indexes. This paper shows that the process of computing such indexes is substantially simplified if arithmetic (geometric) price indexes are computed based on exponential (log-linear) hedonic functions estimated by the Poisson pseudo-maximum likelihood (ordinary least squares) method. A Monte Carlo simulation study based on housing data illustrates the convenience of the links identified and the very attractive properties of the Poisson estimator in the hedonic framework.
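A Poisson pseudo-maximum likelihood fit of an exponential hedonic function can be sketched as follows; the single-attribute "housing" data, coefficient values, and Newton solver are illustrative assumptions:

```python
import numpy as np

def ppml(X, y, iters=50):
    # Poisson pseudo-maximum likelihood for E[y | X] = exp(X @ beta),
    # solved by Newton-Raphson on the Poisson score equations.
    # y only needs to be non-negative, not an integer count.
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        grad = X.T @ (y - mu)
        hess = X.T @ (X * mu[:, None])
        beta += np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(2)
n = 5000
quality = rng.uniform(0.0, 1.0, n)             # a single hedonic attribute
X = np.column_stack([np.ones(n), quality])
price = np.exp(1.0 + 0.5 * quality) * rng.lognormal(0.0, 0.2, n)

beta_hat = ppml(X, price)   # slope near 0.5; intercept absorbs E[error]
```

The point of PPML in this setting is that price is continuous and positive, not a count, yet the Poisson score equations still yield consistent estimates of the exponential conditional mean.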

9.
The exponentiated Weibull distribution is a convenient alternative to the generalized gamma distribution for modelling time-to-event data. It accommodates both monotone and nonmonotone hazard shapes, and is flexible enough to describe data with wide-ranging characteristics. It can also be used for regression analysis of time-to-event data. The maximum likelihood method is thus far the most widely used technique for inference, though there is a considerable body of research on improving the maximum likelihood estimators in terms of asymptotic efficiency. For example, there has recently been considerable attention on applying James–Stein shrinkage ideas to parameter estimation in regression models. We propose nonpenalty shrinkage estimation for the exponentiated Weibull regression model for time-to-event data. Comparative studies suggest that the shrinkage estimators outperform the maximum likelihood estimators in terms of statistical efficiency. Overall, the shrinkage method leads to more accurate statistical inference, a fundamental and desirable component of statistical theory.
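The efficiency gain from shrinkage can be conveyed with the classic James–Stein normal-means example. This is a stand-in illustration of the idea, not the paper's nonpenalty shrinkage estimator for the exponentiated Weibull model:

```python
import numpy as np

rng = np.random.default_rng(5)
p = 200
theta = rng.normal(0.0, 1.0, p)        # true means
x = theta + rng.normal(0.0, 1.0, p)    # one unit-variance observation each

# Positive-part James-Stein estimator: shrink the MLE toward zero
shrink = max(0.0, 1.0 - (p - 2) / np.sum(x ** 2))
theta_js = shrink * x

mse_mle = np.mean((x - theta) ** 2)        # risk of the MLE (x itself)
mse_js = np.mean((theta_js - theta) ** 2)  # risk of the shrinkage estimator
```

For p ≥ 3 the James–Stein estimator dominates the MLE in total squared-error risk, the same kind of efficiency improvement the abstract reports for its regression setting.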

10.
张丽丽 (Zhang Lili). 《价值工程》 (Value Engineering), 2011, 30(30): 211-211.
Using two concrete worked examples, this paper shows that when the range of possible values of a continuous population is not (-∞, +∞), solving point-estimation problems with the first definition of the likelihood function not only lets students master the maximum likelihood method easily, but also deepens their understanding of the statistical idea that the sample comes from the population and that estimation cannot be separated from the sample.

11.
In this paper we study the Candy model, a marked point process introduced by Stoica et al. (2000). We prove Ruelle and local stability, investigate its Markov properties, and discuss how the model may be sampled. Finally, we consider estimation of the model parameters and present a simulation study.

12.
Zellner (1976) proposed a regression model in which the data vector (or the error vector) is represented as a realization from the multivariate Student t distribution. This model has attracted considerable attention because it seems to broaden the usual Gaussian assumption to allow for heavier-tailed error distributions. A number of results in the literature indicate that the standard inference procedures for the Gaussian model remain appropriate under the broader distributional assumption, leading to claims of robustness of the standard methods. We show that, although mathematically the two models are different, for purposes of statistical inference they are indistinguishable. The empirical implications of the multivariate t model are precisely the same as those of the Gaussian model. Hence the suggestion of a broader distributional representation of the data is spurious, and the claims of robustness are misleading. These conclusions are reached from both frequentist and Bayesian perspectives.
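The indistinguishability claim has a simple numerical illustration: in the multivariate t model the whole error vector is a Gaussian vector scaled by one common mixing draw, and that common scale cancels from the usual t-statistic, so inference is unchanged. The regression setup below is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(6)
n, k = 40, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
g = rng.normal(size=n)                # Gaussian error draw
w = np.sqrt(3 / rng.chisquare(3))     # one mixing draw for the whole vector
e_t = w * g                           # multivariate-t error (scale mixture)

def t_stat(e):
    # t-statistic for the slope in a regression of e on X (true slope 0)
    b = np.linalg.lstsq(X, e, rcond=None)[0]
    resid = e - X @ b
    s2 = resid @ resid / (n - k)
    var_b = s2 * np.linalg.inv(X.T @ X)[1, 1]
    return b[1] / np.sqrt(var_b)

# The common scale w multiplies both the coefficient and its standard
# error, so it cancels exactly from the t-statistic
t_gauss = t_stat(g)
t_mvt = t_stat(e_t)
```

Because the cancellation is exact realization by realization, the sampling distribution of the test statistic is identical under the two error models, which is the paper's point.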

13.
This paper reviews methods for handling complex sampling schemes when analysing categorical survey data. It is generally assumed that the complex sampling scheme does not affect the specification of the parameters of interest, only the methodology for making inference about these parameters. The organisation of the paper is loosely chronological. Contingency table data are emphasised first before moving on to the analysis of unit-level data. Weighted least squares methods, introduced in the mid 1970s along with methods for two-way tables, receive early attention. They are followed by more general methods based on maximum likelihood, particularly pseudo maximum likelihood estimation. Point estimation methods typically involve the use of survey weights in some way. Variance estimation methods are described in broad terms. There is a particular emphasis on methods of testing. The main modelling methods considered are log-linear models, logit models, generalised linear models and latent variable models. There is no coverage of multilevel models.

14.
This paper introduces the Random Walk with Drift plus AutoRegressive model (RWDAR) for time-series forecasting. Owing to the presence of a random walk plus drift term, this model shares some similarities with the Theta model of Assimakopoulos and Nikolopoulos (2000). However, the addition of a first-order autoregressive term in the state equation provides additional adaptability and flexibility. Indeed, it is shown that RWDAR tends to outperform the Theta model when forecasting both stationary and nearly non-stationary time series. This paper also proposes a simple estimation method for the RWDAR model based on the solution of the algebraic Riccati equation for the prediction error covariance of the state vector. Simulation results show that this estimator performs as well as the standard Kalman filter approach. Finally, using yearly data from the M3 and M4 competition datasets, it is found that RWDAR outperforms traditional forecasting methods.
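A state-space sketch of a random walk with drift plus an AR(1) component is below, filtered with a standard Kalman filter rather than the paper's Riccati-equation shortcut; all parameter values, the observation equation, and the assumption that the parameters are known to the filter are illustrative:

```python
import numpy as np

# State [level l_t, AR part a_t]: l_t = l_{t-1} + drift + xi_t,
# a_t = phi * a_{t-1} + eta_t; observation y_t = l_t + a_t + eps_t
phi, drift = 0.8, 0.1
T_mat = np.array([[1.0, 0.0], [0.0, phi]])   # transition matrix
c = np.array([drift, 0.0])                    # drift enters the level equation
Q = np.diag([0.04, 0.25])                     # state innovation variances
H = np.array([1.0, 1.0])                      # observation loads on both states
R = 0.25                                      # observation noise variance

rng = np.random.default_rng(4)
n_steps = 2000
x = np.zeros(2)
levels, ys = [], []
for _ in range(n_steps):
    x = T_mat @ x + c + rng.normal(0.0, np.sqrt(np.diag(Q)))
    levels.append(x[0])
    ys.append(H @ x + rng.normal(0.0, np.sqrt(R)))
levels, ys = np.array(levels), np.array(ys)

# Standard Kalman filter with known parameters
m, P = np.zeros(2), np.eye(2) * 10.0
filt = []
for y in ys:
    m, P = T_mat @ m + c, T_mat @ P @ T_mat.T + Q   # predict
    S = H @ P @ H + R                               # innovation variance
    K = P @ H / S                                   # Kalman gain
    m = m + K * (y - H @ m)                         # update
    P = P - np.outer(K, H @ P)
    filt.append(m[0])
filt = np.array(filt)

rmse_filt = np.sqrt(np.mean((filt - levels) ** 2))  # filtered level error
rmse_raw = np.sqrt(np.mean((ys - levels) ** 2))     # raw observation error
```

The filter separates the slowly drifting level from the mean-reverting AR component and noise, so its level estimate tracks the true level far more closely than the raw observations do.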

15.
The optimal number of levels is studied for the one-way random model with normally distributed effects. The optimum criteria used are based on the variances of the traditional analysis of variance estimators of the variance components. Exact solutions are compared to earlier results based on lower bounds of the sampling variances. Comparisons are also made to the large-sample variances of the estimates based on restricted maximum likelihood. Received February 2002

16.
The purpose of integrating warehousing and transportation resources is to reallocate resources, raise the efficiency with which they are used, and ultimately obtain higher returns. Resource integration is especially important for traditional warehousing and transportation enterprises that have positioned, or intend to position, their future strategic goals on further strengthening competitiveness. Resource integration is both a means of strategic adjustment and part of everyday business management; to integrate is to optimize the allocation of resources. From the perspective of the development of Shanghai's Pudong New Area, this paper discusses the integration of warehousing and transportation resources in the area and presents the author's views on the issue.

17.
On the analysis of multivariate growth curves
Growth curve data arise when repeated measurements are observed on a number of individuals with an ordered dimension for occasions. Such data appear frequently in almost all fields in which statistical models are used, for instance in medicine, agriculture and engineering. In medicine, for example, more than one variable is often measured on each occasion. However, analyses are usually based on exploration of repeated measurements of only one variable, with the consequence that the information contained in the between-variables correlation structure is discarded. In this study we propose a multivariate model based on the random coefficient regression model for the analysis of growth curve data. Closed-form expressions for the model parameters are derived under the maximum likelihood (ML) and restricted maximum likelihood (REML) frameworks. It is shown that in certain situations the estimated variances of growth curve parameters are greater for REML. A method is also proposed for testing general linear hypotheses. A numerical example is provided to illustrate the methods discussed. Received: 22 February 1999

18.
It is proposed to study the graphical representation of the parametric space of the maximum likelihood function of the two-parameter logistic function, as used in Item Response Theory. This proposal is made more from a point of view of understanding than of discovery.

19.
The methodologies that have been used in existing research to assess the efficiency with which organic farms are operating are generally based either on the stochastic frontier methodology or on a deterministic non-parametric approach. Recently, Kumbhakar et al. (J Econom 137:1–27, 2007) proposed a new nonparametric, stochastic method based on the local maximum likelihood principle. We use this methodology to compare the efficiency ratings of organic and conventional arable crop farms in the Spanish region of Andalucía. Nonparametrically encompassing the stochastic frontier model is especially useful when comparing the performance of two groups that are likely to be characterized by different production technologies.

20.
Typically, a Poisson model is assumed for count data. In many cases, however, the dependent variable contains many zeros, so its mean is not equal to its variance and the Poisson model is no longer suitable. We therefore suggest using a hurdle-generalized Poisson regression model. Furthermore, the response variable in such cases may be censored at large values. This paper introduces a censored hurdle-generalized Poisson regression model for count data with many zeros. The estimation of regression parameters using the maximum likelihood method is discussed, and the goodness-of-fit of the regression model is examined. An example and a simulation are used to illustrate the effects of right censoring on the parameter estimation and the standard errors.
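The hurdle idea (zeros from a separate process, positives from a zero-truncated count distribution) can be sketched in the simplest intercept-only case with a plain Poisson rather than the generalized Poisson, and without censoring; the parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, pi0, lam = 20000, 0.4, 2.5          # hurdle prob. of a zero, Poisson rate

zero = rng.random(n) < pi0
pos = rng.poisson(lam, size=4 * n)
pos = pos[pos > 0][:n]                  # zero-truncated Poisson draws
y = np.where(zero, 0, pos)

# Hurdle ML in the intercept-only case: the zero part and the positive
# part of the likelihood separate cleanly
pi_hat = np.mean(y == 0)
m = y[y > 0].mean()                     # truncated-Poisson sample mean

# lam_hat solves lam / (1 - exp(-lam)) = m; the left side is increasing
# in lam, so simple bisection suffices
lo, hi = 1e-6, 50.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mid / (1.0 - np.exp(-mid)) < m:
        lo = mid
    else:
        hi = mid
lam_hat = 0.5 * (lo + hi)
```

Because zeros come only from the hurdle, the sample fraction of zeros estimates the hurdle probability directly, and the rate is recovered from the truncated mean.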

