Similar Literature (20 results)
1.
In two recent papers by Balakrishnan et al. (J Qual Technol 39:35–47, 2007; Ann Inst Stat Math 61:251–274, 2009), the maximum likelihood estimators θ̂_1 and θ̂_2 of the parameters θ_1 and θ_2 have been derived in the framework of exponential simple step-stress models under Type-II and Type-I censoring, respectively. Here, we prove that these estimators are stochastically monotone with respect to θ_1 and θ_2, respectively, which has been conjectured in these papers and then utilized to develop exact conditional inference for the parameters θ_1 and θ_2. For proving these results, we have established a multivariate stochastic ordering of a particular family of trinomial distributions under truncation, which is also of independent interest.
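For orientation, the cumulative exposure model gives these estimators an explicit form; the following is a sketch of the standard Type-II censored case (notation here is ours: τ is the stress-change time, N_1 the number of failures before τ, r the total number of observed failures, t_{i:n} the ordered failure times; details may differ from the cited papers):

\hat{\theta}_1 = \frac{1}{N_1}\Big[\sum_{i=1}^{N_1} t_{i:n} + (n - N_1)\,\tau\Big], \qquad \hat{\theta}_2 = \frac{1}{r - N_1}\Big[\sum_{i=N_1+1}^{r} (t_{i:n} - \tau) + (n - r)(t_{r:n} - \tau)\Big],

both defined conditionally on at least one failure occurring under each stress level.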

2.
This paper addresses the design for supporting the optimal decision on tree cutting in a Portuguese eucalyptus production forest. Trees are usually cut at the biological rotation age, i.e. the age which maximizes the yearly volume production. Here we aim to maximize the long-term yearly volume yield net of harvest costs. We consider different growth curves, with a known prior distribution, that can occur in each rotation. The optimal cutting time at each rotation depends both on the current growth curve and on the prior distribution. Different priors and strategies are compared with respect to the long-term production. Optimizing the cutting time allows an improvement of 16% in the long-term volume production. We conclude that the use of optimal designs can be beneficial for tree cutting in modern production forests.
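One way to formalize the objective is a renewal-reward argument; this is a sketch under assumptions not stated in the abstract (harvest cost c per rotation, growth curve G drawn from the prior at each rotation, stand volume V_G(t) at age t, and a cutting rule T_G that may depend on the observed curve):

t^{*}_{\text{bio}} = \arg\max_{t} \frac{V(t)}{t}, \qquad \text{long-term net yearly yield} = \frac{\mathbb{E}\big[V_G(T_G) - c\big]}{\mathbb{E}\big[T_G\big]},

with the second quantity maximized over cutting rules T_G rather than over a single fixed rotation age.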

3.
A step-stress accelerated life test is a special life test in which test units are subjected to stress levels higher than normal operating conditions, so that information on the lifetime parameters of a test unit can be obtained in a shorter period of time. Progressive Type-I censoring is a generalized form of time censoring in which functional test units are withdrawn successively from the life test at prefixed, nonterminal time points. Despite its flexibility and efficient utilization of the available resources, progressively censored sampling has not gained much popularity, owing to its analytical complexity compared to conventional censoring schemes. In particular, understanding the mean completion time of a life test is of great practical interest for designing and managing the life test optimally under the budgetary and time constraints that frequently arise. In this work, the expected termination time of a general k-level step-stress accelerated life test under progressive Type-I censoring is derived using a recursive relationship for the stochastic termination time based on conditional block independence. To be comprehensive, two different modes of failure inspection are considered: continuous inspection, where the exact failure times are observed, and interval inspection, where the exact failure times are not available and only the numbers of failures occurring in each inspection interval are recorded.
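A minimal Monte Carlo sketch of the expected termination time (not the recursive formula derived in the paper), assuming exponential lifetimes with mean theta[i] at stress level i under the cumulative exposure model; the stress-change/censoring times tau, withdrawal numbers R, and sample size n below are illustrative:

import numpy as np

def simulate_termination(theta, tau, R, n, rng):
    # One run of a k-level step-stress test under progressive Type-I censoring,
    # assuming exponential lifetimes (mean theta[i] on level i) and the cumulative
    # exposure model. tau[i] are the stress-change/censoring times; R[i] units are
    # withdrawn at tau[0], ..., tau[k-2]; all survivors are removed at tau[k-1].
    k = len(theta)
    e = rng.exponential(size=n)               # standard-exponential hazard budgets
    lifetimes = np.empty(n)
    for j in range(n):
        h, t_prev = 0.0, 0.0
        for i in range(k):
            seg = (tau[i] - t_prev) / theta[i]          # hazard accumulated on level i
            if h + seg >= e[j]:
                lifetimes[j] = t_prev + (e[j] - h) * theta[i]
                break
            h, t_prev = h + seg, tau[i]
        else:
            lifetimes[j] = np.inf                        # would survive past tau[k-1]
    alive = np.ones(n, dtype=bool)
    for i in range(k):
        fail = alive & (lifetimes <= tau[i])
        alive[fail] = False
        if not alive.any():                              # everyone failed: end at last failure
            return lifetimes[fail].max()
        if i < k - 1:                                    # progressive withdrawal of R[i] survivors
            drop = np.flatnonzero(alive)[: R[i]]
            alive[drop] = False
            if not alive.any():
                return tau[i]
    return tau[-1]                                       # remaining units censored at tau[k-1]

rng = np.random.default_rng(0)
theta, tau, R, n = [10.0, 4.0, 1.5], [5.0, 9.0, 12.0], [2, 2], 20
runs = [simulate_termination(theta, tau, R, n, rng) for _ in range(20000)]
print("estimated expected termination time:", np.mean(runs))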

4.
We examine the use of the likelihood ratio (LR) statistic to test for unobserved heterogeneity in duration models, based on mixtures of exponential or Weibull distributions. We consider both the uncensored and censored duration cases. The asymptotic null distribution of the LR test statistic is not the standard chi-square, because the standard regularity conditions do not hold: there is a nuisance parameter identified only under the alternative, and a null parameter value on the boundary of the parameter space, as in Cho and White (2007a). We accommodate these features and provide methods that deliver consistent asymptotic critical values. We conduct a number of Monte Carlo simulations, comparing the level and power of the LR test statistic to an information matrix (IM) test due to Chesher (1984) and Lagrange multiplier (LM) tests of Kiefer (1985) and Sharma (1987). Our simulations show that the LR test statistic generally outperforms the IM and LM tests. We also revisit the work of van den Berg and Ridder (1998) on unemployment durations and of Ghysels et al. (2004) on interarrival times between stock trades, and affirm their original informal inferences.
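For concreteness, the simplest version of the testing problem is a two-component exponential mixture in the uncensored case (a sketch in illustrative notation):

H_0: f(t) = \lambda e^{-\lambda t} \quad\text{vs.}\quad H_1: f(t) = \pi\,\lambda_1 e^{-\lambda_1 t} + (1-\pi)\,\lambda_2 e^{-\lambda_2 t}, \qquad LR_n = 2\Big[\sup_{\pi,\lambda_1,\lambda_2} \ell_n(\pi,\lambda_1,\lambda_2) - \sup_{\lambda} \ell_n(\lambda)\Big].

The null can be reached either on the boundary π ∈ {0, 1} or along λ_1 = λ_2, so a nuisance parameter is identified only under the alternative and LR_n is not asymptotically chi-square.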

5.
This paper gives an analytical expression for the best linear unbiased estimator (BLUE) of the unknown parameters in the linear Haar-wavelet model. From this expression, we obtain the eigenvalues of the covariance matrix of the BLUE in closed form. We then use these eigenvalues to construct several conventional discrete optimal designs for the model. The equivalences among these optimal designs are demonstrated and some examples are given.
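For reference, the generic expressions behind these results (a sketch; the Haar-wavelet design matrix and error covariance specific to the paper are not spelled out here): for y = Xθ + ε with Cov(ε) = Σ,

\hat{\theta}_{\mathrm{BLUE}} = (X^{\top}\Sigma^{-1}X)^{-1}X^{\top}\Sigma^{-1}y, \qquad \operatorname{Cov}(\hat{\theta}_{\mathrm{BLUE}}) = (X^{\top}\Sigma^{-1}X)^{-1},

and the classical design criteria are functions of the eigenvalues λ_1, …, λ_p of this covariance matrix: D-optimality minimizes ∏_i λ_i, A-optimality minimizes ∑_i λ_i, and E-optimality minimizes max_i λ_i.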

6.
The strong consistency and asymptotic normality of the Whittle estimate of the parameters in a class of exponential volatility processes are established. Our main focus is the EGARCH model of Nelson (1991, Conditional heteroscedasticity in asset pricing: A new approach, Econometrica 59, 347–370) and other one-shock models such as the GJR model of Glosten, Jagannathan and Runkle (1993, On the relation between the expected value and the volatility of the nominal excess returns on stocks, Journal of Finance 48, 1779–1801), but two-shock models, such as the SV model of Taylor (1986, Modelling Financial Time Series, Wiley, Chichester, UK), are also covered by our assumptions. The variable of interest need not have a finite fractional moment of any order; in particular, finite variance is not imposed. We allow for a wide range of degrees of persistence of shocks to the conditional variance, covering both short and long memory.
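As a reminder of the kind of model covered, one common parameterization of Nelson's EGARCH(1,1) is (a sketch; the paper's assumptions cover a wider class of exponential volatility models):

r_t = \sigma_t z_t, \qquad \log \sigma_t^{2} = \omega + \beta \log \sigma_{t-1}^{2} + \alpha\big(|z_{t-1}| - \mathbb{E}|z_{t-1}|\big) + \gamma z_{t-1},

with z_t i.i.d., mean zero and unit variance; γ ≠ 0 captures the asymmetric (leverage) response of volatility to shocks.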

7.
Linear-in-variables continuous-time processes are estimated nonlinearly, because the coefficients of the implied linear-in-variables discrete-time estimating equations are the exponential of a matrix formed from the continuous-time parameters. Even with sampling complications such as irregular intervals, mixed frequencies, and stock and flow variables, Van Loan's (1978) results allow the mapping from continuous- to discrete-time parameters and its derivatives to be expressed as submatrices of a matrix exponential. For quicker estimation and more accurate hypothesis testing or sensitivity analysis, it is often better to compute the first-order derivatives of the mapping analytically. This paper explains how to compute efficiently the continuous- to discrete-time parameter mapping and its derivatives without computing an eigenvalue decomposition, the common way of doing this. By linking the present results with previous ones, a complete chain rule is obtained for computing the Gaussian likelihood function and its derivatives with respect to the continuous-time parameters.
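A minimal numerical sketch of the Van Loan-style identity that makes this possible (not the paper's full chain rule): the exact-discretization map A ↦ e^{Ah} and its directional derivative with respect to a continuous-time parameter can both be read off one augmented matrix exponential. The matrices A, dA and the sampling interval h below are made-up illustrations.

import numpy as np
from scipy.linalg import expm

def discrete_map_and_derivative(A, dA, h):
    # Returns e^{A h} and d/d(eps) e^{(A + eps dA) h} at eps = 0, both taken from
    # submatrices of expm of the augmented block matrix [[A h, dA h], [0, A h]].
    n = A.shape[0]
    M = np.zeros((2 * n, 2 * n))
    M[:n, :n] = A * h
    M[:n, n:] = dA * h
    M[n:, n:] = A * h
    E = expm(M)
    return E[:n, :n], E[:n, n:]

A = np.array([[-0.5, 0.2],
              [0.1, -0.3]])              # illustrative continuous-time coefficient matrix
dA = np.zeros((2, 2)); dA[0, 1] = 1.0    # direction: derivative with respect to A[0, 1]
h = 0.25                                 # illustrative sampling interval

Phi, dPhi = discrete_map_and_derivative(A, dA, h)

# Central finite-difference check of the derivative.
eps = 1e-6
fd = (expm((A + eps * dA) * h) - expm((A - eps * dA) * h)) / (2 * eps)
print(np.max(np.abs(dPhi - fd)))         # should be tiny (round-off level)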

8.
Analytical expressions for the unconditional state-covariance matrix are combined with an efficient scheme for computing theoretical autocovariances to simplify the exact maximum-likelihood estimation of multivariate ARMA models. Alternative state-space representations for pure AR and pure MA processes, giving rise to straightforward expressions for their unconditional state covariance, are suggested.
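A minimal sketch of the two ingredients for a univariate AR(2) put in companion state-space form (illustrative only; not the alternative representations proposed in the paper): the unconditional state covariance solves a discrete Lyapunov equation, and the theoretical autocovariances follow from it.

import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# AR(2): y_t = phi1*y_{t-1} + phi2*y_{t-2} + e_t, Var(e_t) = sigma2 (made-up values)
phi1, phi2, sigma2 = 0.5, 0.3, 1.0

# State-space form: alpha_t = T alpha_{t-1} + R e_t, y_t = Z alpha_t
T = np.array([[phi1, phi2],
              [1.0,  0.0]])
R = np.array([[1.0], [0.0]])
Z = np.array([[1.0, 0.0]])

# Unconditional state covariance: P = T P T' + sigma2 R R'
P = solve_discrete_lyapunov(T, sigma2 * (R @ R.T))

# Theoretical autocovariances: Gamma(k) = Z T^k P Z'
def autocov(k):
    return (Z @ np.linalg.matrix_power(T, k) @ P @ Z.T).item()

print([round(autocov(k), 4) for k in range(5)])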

9.
Several exact results on the second moments of sample autocorrelations, for both Gaussian and non-Gaussian series, are presented. General formulae for the means, variances and covariances of sample autocorrelations are given for the case where the variables in a sequence are exchangeable. Bounds for the variances and covariances of sample autocorrelations from an arbitrary random sequence are derived. Exact and explicit formulae for the variances and covariances of sample autocorrelations from Gaussian white noise are given; it is observed that the latter results hold for all spherically symmetric distributions. A simulation experiment with Gaussian series indicates that normalizing each sample autocorrelation with its exact mean and variance, instead of the usual approximate moments, can considerably improve the accuracy of the asymptotic N(0,1) approximation used to obtain critical values for tests of randomness. The exact second moments of rank autocorrelations are also studied.
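A small simulation in the spirit of the experiment described above (a sketch, not the paper's exact formulae), comparing the empirical mean and variance of the lag-1 sample autocorrelation from Gaussian white noise with the usual asymptotic approximation (mean 0, variance 1/n):

import numpy as np

def sample_autocorr(x, k):
    # Conventional mean-corrected lag-k sample autocorrelation.
    x = x - x.mean()
    return np.dot(x[:-k], x[k:]) / np.dot(x, x)

rng = np.random.default_rng(1)
n, k, reps = 30, 1, 50000
r = np.array([sample_autocorr(rng.standard_normal(n), k) for _ in range(reps)])

print("empirical mean:", r.mean(), "(asymptotic: 0)")
print("empirical var :", r.var(), "(asymptotic 1/n:", 1 / n, ")")
# For n this small the mean is noticeably negative and the variance departs from 1/n,
# which is why normalizing with exact moments can sharpen tests of randomness.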

10.
Interval-valued time series are interval-valued data collected in a chronological sequence over time. This paper introduces three approaches to forecasting interval-valued time series. The first two approaches are based on multilayer perceptron (MLP) neural networks and on Holt's exponential smoothing, respectively. In Holt's method for interval-valued time series, the smoothing parameters are estimated using techniques for non-linear optimization problems with bound constraints. The third approach is a hybrid methodology that combines the MLP and Holt models. The practicality of the methods is demonstrated through simulation studies and applications to real interval-valued stock market time series.
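A minimal sketch of the Holt-type idea, under simplifying assumptions not made in the paper (the interval series is reduced to its midpoint and half-range, each smoothed by Holt's linear method with hand-picked parameters instead of the constrained optimization described above):

import numpy as np

def holt(y, alpha, beta, h=1):
    # Holt's linear exponential smoothing; returns the h-step-ahead forecast.
    level, trend = y[0], y[1] - y[0]
    for t in range(1, len(y)):
        prev_level = level
        level = alpha * y[t] + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + h * trend

# Illustrative interval-valued series: rows are (lower, upper) bounds.
intervals = np.array([[10, 14], [11, 16], [13, 17], [12, 18], [14, 20]], dtype=float)
mid = intervals.mean(axis=1)
half = (intervals[:, 1] - intervals[:, 0]) / 2.0

mid_f = holt(mid, alpha=0.5, beta=0.1)
half_f = max(holt(half, alpha=0.5, beta=0.1), 0.0)   # keep the forecast interval well-defined

print("one-step-ahead forecast interval:", (mid_f - half_f, mid_f + half_f))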

11.
Let X_1, …, X_{n_1} and Y_1, …, Y_{n_2} be two independent random samples from exponential populations. The statistical problem is to test whether or not the two exponential populations are the same, based on the order statistics X_{[1]}, …, X_{[r_1]} and Y_{[1]}, …, Y_{[r_2]}, where 1 ≤ r_1 ≤ n_1 and 1 ≤ r_2 ≤ n_2. A new test is given and an asymptotic optimality property of the test is proved.
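For context, the classical benchmark for this problem (a sketch of the standard total-time-on-test procedure, not the new test proposed in the paper):

T_X = \sum_{i=1}^{r_1} X_{[i]} + (n_1 - r_1)\,X_{[r_1]}, \qquad T_Y = \sum_{i=1}^{r_2} Y_{[i]} + (n_2 - r_2)\,Y_{[r_2]},

where 2T_X/\theta_X \sim \chi^2_{2r_1} and 2T_Y/\theta_Y \sim \chi^2_{2r_2}, so that under H_0:\theta_X=\theta_Y the ratio (T_X/r_1)/(T_Y/r_2) has an F_{2r_1,\,2r_2} distribution.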

12.
F. Brodeau, Metrika (1999) 49(2): 85–105
This paper is devoted to the study of the least squares estimator of f for the classical, fixed design, nonlinear model X(t_i) = f(t_i) + ε(t_i), i = 1, 2, …, n, where the ε(t_i), i = 1, …, n, are independent second-order random variables. The estimation of f is based upon a given parametric form. In Brodeau (1993) this subject was studied in the homoscedastic case. Here we assume that the ε(t_i) have non-constant and unknown variances σ²(t_i). Our main goal is to develop two statistical tests, one for testing that f belongs to a given class of functions possibly discontinuous in their first derivative, and another for comparing two such classes. The fundamental tool is an approximation of the elements of these classes by more regular functions, which leads to asymptotic properties of estimators based on the least squares estimator of the unknown parameters. We point out that Neubauer and Zwanzig (1995) have obtained interesting results on related problems using the same approximation technique.

13.
This paper considers the problem of solving an optimal control problem for large dynamic economic models that are both nonlinear and stochastic. It proposes a technique which combines conventional deterministic optimal control algorithms with stochastic simulation, which calculates a numerical approximation to the distribution of the model's endogenous variables. The new technique is computationally feasible even for large nonlinear models and, as an illustration of this, the Bank of England's large quarterly forecasting model is used in an example.

14.
Enterprise modeling methodologies have made enterprises more likely to be the object of systems engineering rather than craftsmanship. However, the current state of research in enterprise modeling methodologies lacks investigations of the mathematical background embedded in these methodologies. Abstract algebra, a broad subfield of mathematics, and the study of algebraic structures may provide interesting implications for both theory and practice. Therefore, this research takes up the challenge of establishing an algebraic structure for one aspect model proposed in the Design & Engineering Methodology for Organizations (DEMO), a major enterprise modeling methodology that has attracted attention as a modeling principle for capturing the skeleton of enterprises when developing enterprise information systems. The results show that the aspect model behaves well under the algebraic operations and indeed forms a Boolean algebra. This article also discusses comparisons with other modeling languages and suggests future work.

15.
Although various theoretical and applied papers have appeared in recent years concerning the estimation and use of regression models with stochastically varying coefficients, little is available in the literature on the properties of the proposed estimators or the identifiability of the parameters of such models. The present paper derives sufficient conditions under which the maximum likelihood estimator is consistent and asymptotically normal, and also provides sufficient conditions for the estimation of regression models with stationary stochastically varying coefficients. In many instances these requirements are found to have simple, intuitively appealing interpretations. Consistency and asymptotic normality are also proven for a two-step estimator and for a method suggested by Rosenberg for generating initial estimates.
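A sketch of the type of model meant, in illustrative notation (the paper's setting is more general): a regression with stationary stochastically varying coefficients can be written in state-space form as

y_t = x_t^{\top}\beta_t + \varepsilon_t, \qquad \beta_t - \bar{\beta} = \Phi\,(\beta_{t-1} - \bar{\beta}) + \eta_t,

with the eigenvalues of Φ inside the unit circle; the Gaussian likelihood is then computable with the Kalman filter and can be maximized over \bar{\beta}, Φ and the variance parameters.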

16.
The purpose of this paper is to establish the complexity of alternative versions of the weak axiom of revealed preference (WARP) for collective consumption models. In contrast to the unitary consumption model, these collective models explicitly take the multi-member nature of the household into account. We consider the three collective settings that are most often considered in the literature. We start with the private setting, in which all goods are privately consumed by the household members. Next, we consider the public setting, in which all goods are publicly consumed inside the household. Finally, we also consider the general setting, where no information on the (private or public) nature of the goods consumed in the household is available. We prove that the collective version of WARP is NP-hard to test for both the private and public settings. Surprisingly, we also find that for the general setting the collective version of WARP is easy to test for two-member households.
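For reference, the unitary version of WARP that the collective axioms generalize (a sketch; the collective versions replace the single aggregate bundle by member-specific private quantities and/or public quantities): for observed price-quantity pairs (p^t, q^t),

p^{t} q^{s} \le p^{t} q^{t} \ \text{and}\ q^{s} \ne q^{t} \;\Longrightarrow\; p^{s} q^{t} > p^{s} q^{s} \qquad \text{for all observations } s, t.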

17.
This paper presents the winning submission of the M4 forecasting competition. The submission utilizes a dynamic computational graph neural network system that enables a standard exponential smoothing model to be mixed with advanced long short-term memory (LSTM) networks in a common framework. The result is a hybrid and hierarchical forecasting method.

18.
19.
The problem of jointly designing a land use plan and a transportation network for a new town is formalized as a combinatorial programming problem that can be considered an extension of the Koopmans-Beckmann problem. Accessibility and capital costs are the criteria taken into account to evaluate a plan. One exact technique and several heuristic techniques are reported and evaluated.
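For orientation, the Koopmans-Beckmann (quadratic assignment) problem that this formulation extends (a sketch; the paper adds network-design decisions and capital-cost criteria): assign n activities to n locations via a permutation π so as to

\min_{\pi}\; \sum_{i=1}^{n}\sum_{k=1}^{n} f_{ik}\, d_{\pi(i)\pi(k)} \;+\; \sum_{i=1}^{n} c_{i\pi(i)},

where f_{ik} is the interaction (flow) between activities i and k, d_{jl} the distance between locations j and l, and c_{ij} the cost of placing activity i at location j.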

20.
Organizations such as the military and those involved with disaster relief are vitally concerned with their ability to redeploy resources between various geographical locations in response to cataclysmic events. Measuring the effectiveness of a redeployment plan involves multiple objectives and differing priorities. The primary objective of redeployment is meeting requirements at affected locations, with a secondary concern for transportation costs. In order to reflect these features, the problem is formulated as a goal programming model. Several variations of the model that can enhance its value are investigated. An example redeployment problem is formulated and solved to illustrate the approach.
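A sketch of a preemptive goal-programming formulation of the kind described (the notation is illustrative, not the paper's exact model): with x_{ijm} the amount of resource m redeployed from location i to location j, requirement targets r_{jm}, unit transport costs c_{ij}, a cost target B, and deviation variables d^{\pm},

\min\; P_1 \sum_{j,m} d_{jm}^{-} + P_2\, d_{c}^{+} \quad\text{s.t.}\quad \sum_{i} x_{ijm} + d_{jm}^{-} - d_{jm}^{+} = r_{jm}\ \ \forall j,m, \qquad \sum_{i,j,m} c_{ij}\, x_{ijm} - d_{c}^{+} + d_{c}^{-} = B, \qquad x,\, d^{\pm} \ge 0,

where the preemptive priorities P_1 \gg P_2 encode that meeting the requirements at affected locations comes before keeping transportation cost near its target.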
