Similar Documents
Found 20 similar documents (search time: 15 ms).
1.
This paper investigates the discrete Part-Period Balancing (PPB) lot-sizing algorithm and its optional feature, the Look Ahead-Look Back tests. PPB is the most commonly used dynamic lot-sizing procedure in practice and has also been tested extensively in simulation experiments. Although its overall cost performance relative to other heuristics has been fairly good, a fundamental flaw in the model has been noted in the literature. This deficiency leads to poor performance under certain conditions. In this paper a simple adjustment to the main algorithm is analytically derived under the assumptions of a constant demand rate and an infinite planning horizon. The adjustment leads to optimal behavior of the PPB heuristic under the stated conditions. Subsequent experimental analysis, through simulation of lot-sizing performance in environments with time-varying, discrete demand, shows that the proposed adjustment leads to significant cost reductions. This paper also analyzes the Look Ahead-Look Back tests, which are the distinguishing feature between the PPB procedure and the Least Total Cost algorithm. The tests were devised to improve the cost performance of the PPB heuristic by marginally adjusting each tentative lot size. The effect of the Look Ahead-Look Back tests has, however, never been verified in the literature. The tests have undergone some changes over time as they have been included in commercial software packages for inventory management. We suggest yet another modified version in this paper. In the last portion of the paper, the cost effectiveness of the Look Ahead-Look Back tests is confirmed through simulation. That is, when used together with the original PPB procedure, they lead to improved cost performance. It is also shown that a combination of these tests and the adjustment to the PPB procedure mentioned earlier leads to an even lower average total cost. All cost improvements are statistically significant. 
It is finally noted that the Look Ahead-Look Back tests perform poorly in certain constant demand situations. Additional analytic and experimental analysis shows that these results stem from a dominance of the Look Back test over the Look Ahead test, leading to the former test being performed more often. This can easily be corrected, however, by checking for sufficient variability in the data before the Look Back test is employed.
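As a hedged illustration of the basic idea behind the procedure discussed above — not the paper's exact algorithm, and without the Look Ahead-Look Back tests — a minimal Part-Period Balancing rule can be sketched as follows; the function name and the stopping rule ("start a new lot once cumulative part-periods would exceed the economic part-period") are illustrative assumptions:

```python
def ppb_lot_sizes(demand, setup_cost, unit_holding_cost):
    """Group discrete period demands into lots (basic PPB, illustrative only).

    A period joins the current lot while the cumulative part-periods
    (units carried times periods carried) stay at or below the economic
    part-period EPP = setup_cost / unit_holding_cost.
    Returns one lot size per setup.
    """
    epp = setup_cost / unit_holding_cost
    lots, lot, part_periods, offset = [], 0, 0.0, 0
    for d in demand:
        if lot > 0 and part_periods + d * offset > epp:
            lots.append(lot)                  # close the lot; a new setup occurs here
            lot, part_periods, offset = 0, 0.0, 0
        lot += d
        part_periods += d * offset            # demand d is carried `offset` periods
        offset += 1
    if lot > 0:
        lots.append(lot)
    return lots
```

For a constant demand of 10 units per period, a setup cost of 30 and unit holding cost of 1 (EPP = 30 part-periods), this rule groups three periods per lot — the constant-demand regime in which the abstract derives its adjustment.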

2.
This paper develops a test procedure for serial correlation for discrete switching disequilibrium models which include both an endogenous price adjustment equation and lagged dependent variables. The tests are applied to a model of the UK labour market and the model is respecified in the light of the test results.

3.
Fang Duan, Dominik Wied. Metrika, 2018, 81(6): 653–687.
We propose a new multivariate constant correlation test based on residuals. This test takes into account the whole correlation matrix instead of merely the marginal correlations between pairs of series. In financial markets, it is unrealistic to assume that the marginal variances are constant, which motivates us to develop a constant correlation test that allows for non-constant marginal variances in multivariate time series. However, when the assumption of constant marginal variances is relaxed, it can be shown that the residual effect leads to nonstandard limit distributions of the test statistics based on residual terms. The critical values of the test statistics are not directly available, and we use a bootstrap approximation to obtain the corresponding critical values for the test. We also derive the limit distribution of the residual-based test statistics under the null hypothesis. Monte Carlo simulations show that the test has appealing size and power properties in finite samples. We also apply our test to the stock returns in the Euro Stoxx 50 and integrate the test into a binary segmentation algorithm to detect multiple break points.
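The paper's residual-based statistic is beyond the scope of an abstract, but the general mechanics of bootstrapping a critical value for a constant-correlation test can be sketched in a deliberately simplified form: a split-sample break statistic and an i.i.d. pairs bootstrap that imposes a single common correlation. All names and modelling choices here are illustrative assumptions, not the authors' procedure:

```python
import numpy as np

def split_corr_stat(x, y):
    """|corr(first half) - corr(second half)|: a crude constant-correlation statistic."""
    n = len(x) // 2
    r1 = np.corrcoef(x[:n], y[:n])[0, 1]
    r2 = np.corrcoef(x[n:], y[n:])[0, 1]
    return abs(r1 - r2)

def bootstrap_pvalue(x, y, n_boot=500, seed=0):
    """Approximate p-value by resampling (x, y) pairs with replacement.

    Resampling pairs i.i.d. preserves one common correlation across the
    resampled series, i.e. it generates data under the null hypothesis.
    """
    rng = np.random.default_rng(seed)
    observed = split_corr_stat(x, y)
    n, hits = len(x), 0
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        hits += split_corr_stat(x[idx], y[idx]) >= observed
    return hits / n_boot
```

The i.i.d. pairs bootstrap is only valid for serially independent data; the paper works with time series and therefore needs a more careful bootstrap approximation.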

4.
The practice of using stress tests to complement Value at Risk (VaR) estimates suffers from some limitations, such as the lack of coherence between a statistical risk measure and a subjective one. On the other hand, there is a wide consensus that using the same correlation matrix to design various stress tests is not likely to provide an accurate representation of the relationships among risk factors in periods of market stress. In this paper we introduce a solution to these problems by explicitly considering different correlation regimes and incorporating the results of the stress tests into the traditional market risk measurement models.
Carlos Blanco

5.
In this paper a Lagrange multiplier test of the hypothesis that the covariance matrix of a multivariate time series model is constant over time is considered. It is assumed that under the alternative, the error variances are time-varying, whereas the correlations remain constant over time. Under the parameterized alternative hypothesis the variances may change continuously as a function of time or some observable stochastic variables. Small-sample properties of the test statistic are investigated by simulation. The assumption of constant correlations does not appear overly restrictive.

6.
This paper considers a spatial panel data regression model with serial correlation on each spatial unit over time as well as spatial dependence between the spatial units at each point in time. In addition, the model allows for heterogeneity across the spatial units using random effects. The paper then derives several Lagrange multiplier tests for this panel data regression model including a joint test for serial correlation, spatial autocorrelation and random effects. These tests draw upon two strands of earlier work. The first is the LM tests for the spatial error correlation model discussed in Anselin and Bera [1998. Spatial dependence in linear regression models with an introduction to spatial econometrics. In: Ullah, A., Giles, D.E.A. (Eds.), Handbook of Applied Economic Statistics. Marcel Dekker, New York] and in the panel data context by Baltagi et al. [2003. Testing panel data regression models with spatial error correlation. Journal of Econometrics 117, 123–150]. The second is the LM tests for the error component panel data model with serial correlation derived by Baltagi and Li [1995. Testing AR(1) against MA(1) disturbances in an error component model. Journal of Econometrics 68, 133–151]. Hence, the joint LM test derived in this paper encompasses those derived in both strands of earlier works. In fact, in the context of our general model, the earlier LM tests become marginal LM tests that ignore either serial correlation over time or spatial error correlation. The paper then derives conditional LM and LR tests that do not ignore these correlations and contrast them with their marginal LM and LR counterparts. The small sample performance of these tests is investigated using Monte Carlo experiments. As expected, ignoring any correlation when it is significant can lead to misleading inference.

7.
Forecast evaluations aim to choose an accurate forecast for making decisions by using loss functions. However, different loss functions often generate different rankings of forecasts, which complicates comparisons. In this paper, we develop statistical tests for comparing the performance of forecasts of expectiles and quantiles of a random variable under consistent loss functions. The test statistics are constructed with the extremal consistent loss functions of Ehm et al. (2016). The null hypothesis of the tests is that a benchmark forecast performs at least as well as a competing one under all extremal consistent loss functions. It can be shown that if such a null holds, the benchmark will also perform at least as well as the competitor under all consistent loss functions. Thus, under the null, the finding that the competitor does not outperform the benchmark is not altered when different consistent loss functions are used. We establish asymptotic properties of the proposed test statistics and propose using the re-centered bootstrap to construct their empirical distributions. Through simulations, we show that the proposed test statistics perform reasonably well. We then apply the proposed method to evaluations of several different forecast methods.
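The extremal consistent loss framework of Ehm et al. (2016) is more elaborate, but the object being compared can be illustrated with the familiar pinball (check) loss, which is consistent for the τ-quantile. This is a simplified stand-in for exposition, not the paper's test statistic:

```python
def pinball_loss(y, q, tau):
    """Check loss: a consistent scoring function for the tau-quantile of y."""
    return tau * max(y - q, 0.0) + (1.0 - tau) * max(q - y, 0.0)

def mean_loss_difference(ys, benchmark, competitor, tau):
    """Average loss of benchmark minus competitor; negative favours the benchmark."""
    diffs = [pinball_loss(y, b, tau) - pinball_loss(y, c, tau)
             for y, b, c in zip(ys, benchmark, competitor)]
    return sum(diffs) / len(diffs)
```

A formal comparison would studentize this mean difference (Diebold-Mariano style); the paper instead considers it simultaneously over a whole family of extremal consistent losses, so that a non-rejection is robust to the choice of consistent loss function.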

8.
A restricted forecasting compatibility test for Vector Autoregressive Error Correction models is analyzed in this work. It is shown that a variance–covariance matrix associated with the restrictions can be used to cancel out model dynamics and interactions between restrictions. This allows us to interpret the joint compatibility test as a composition of the corresponding single restriction compatibility tests. These tests are useful for appreciating the contribution of each and every restriction to the joint compatibility between the whole set of restrictions and the unrestricted forecasts. An estimated process adjustment for the test is derived and the resulting feasible joint compatibility test turns out to have better performance than the original one. An empirical illustration of the usefulness of the proposed test makes use of Mexican macroeconomic data and the targets proposed by the Mexican Government for the year 2003.

9.
We investigate estimation and inference in difference-in-differences econometric models used in the analysis of treatment effects. When the innovations in such models display serial correlation, commonly used ordinary least squares (OLS) procedures are inefficient and may lead to tests with incorrect size. Implementation of feasible generalized least squares (FGLS) procedures is often hindered by having too few observations in the cross-section to allow unrestricted estimation of the weight matrix without producing tests with size distortions similar to those of conventional OLS-based procedures. We analyze the small-sample properties of FGLS-based tests with a formal higher-order Edgeworth expansion that allows us to construct a size-corrected version of the test. We also address the question of optimal temporal aggregation as a method to reduce the dimension of the weight matrix. We apply our procedure to data on regulation of mobile telephone service prices. We find that a size-corrected FGLS-based test outperforms tests based on OLS.
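The higher-order size corrections in the abstract above go well beyond a sketch, but the baseline FGLS step they build on — estimating the serial correlation from OLS residuals and quasi-differencing the data — can be illustrated in a Cochrane-Orcutt style for a simple AR(1) case. This is an assumed textbook simplification, not the authors' panel estimator:

```python
import numpy as np

def fgls_ar1(y, X):
    """One feasible-GLS step for a linear model with AR(1) errors.

    1. Fit OLS and compute residuals.
    2. Estimate the AR(1) coefficient rho from the residuals.
    3. Quasi-difference y and X, and re-fit OLS on the transformed data,
       whose errors are (approximately) serially uncorrelated.
    """
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta_ols
    rho = (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1])   # AR(1) coefficient of residuals
    y_star = y[1:] - rho * y[:-1]                # quasi-differenced data
    X_star = X[1:] - rho * X[:-1]
    beta_fgls, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
    return beta_fgls, rho
```

With few cross-sectional units, the analogue of `rho` becomes a whole weight matrix estimated from limited data — exactly the small-sample problem the paper's Edgeworth-based size correction addresses.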

10.
New Nonlinear Approaches for the Adjustment and Updating of a SAM
Many structural relationships should be taken into account in any reasonable adjustment and updating process. These structural relationships are mainly represented by ratios of different types, such as technical coefficients or the proportion of a cell value relative to its row or column total. We believe that in many cases (either because of a lack of information, or when the time elapsed since the estimation of a social accounting matrix is not long enough to allow for any significant structural change) the updating process should try to minimize the relative deviation of the new coefficients from the initial ones in a homogeneous way. Homogeneity here means that the magnitude of this relative deviation is similar among the elements of each row or column, thereby avoiding the concentration of the changes in particular cells of the SAM. In this work, we propose some new adjustment criteria in order to obtain a more homogeneous relative adjustment of the structural coefficients. These criteria combine the adjustment method proposed by Matuszewski et al. (1964) with other deviation functions. Each of the proposed adjustment criteria leads to a nonlinear optimization problem which is reformulated as a linear program. We test the usefulness of this proposal by comparing its results with those obtained by more standard approaches, and we show that, under certain circumstances, those approaches tend to produce a less homogeneous pattern of coefficient adjustment than the ones we put forward.

11.
Journal of Econometrics, 2003, 117(1): 123–150.
This paper derives several Lagrange multiplier (LM) tests for the panel data regression model with spatial error correlation. These tests draw upon two strands of earlier work. The first is the LM tests for the spatial error correlation model discussed in Anselin (Spatial Econometrics: Methods and Models, Kluwer Academic Publishers, Dordrecht; Rao's score test in spatial econometrics, J. Statist. Plann. Inference 97 (2001) 113) and Anselin et al. (Regional Sci. Urban Econom. 26 (1996) 77), and the second is the LM tests for the error component panel data model discussed in Breusch and Pagan (Rev. Econom. Stud. 47 (1980) 239) and Baltagi et al. (J. Econometrics 54 (1992) 95). The idea is to allow for both spatial error correlation and random region effects in the panel data regression model and to test for their joint significance. Additionally, this paper derives conditional LM tests, which test for random regional effects given the presence of spatial error correlation, and for spatial error correlation given the presence of random regional effects. These conditional LM tests are an alternative to the one-directional LM tests that test for random regional effects while ignoring spatial error correlation, or for spatial error correlation while ignoring random regional effects. We argue that these joint and conditional LM tests guard against possible misspecification. Extensive Monte Carlo experiments are conducted to study the performance of these LM tests as well as the corresponding likelihood ratio tests.

12.
Summary. Following a review of the effects of familial or intra-class correlation (ϱ) on the univariate F, or analysis-of-variance, tests, and of methods for obtaining confidence limits for ϱ, results are presented on the effects of familial correlations in tests in multivariate 'analysis of dispersion'. Methods for obtaining confidence limits are given for the case where a common variance-covariance matrix may be assumed for the successive multivariate samples.

13.
A significant correlation between integrated time series does not necessarily imply a meaningful relation; the relation can also be meaningless, i.e. spurious. Cointegration is sometimes illustrated by the metaphor of 'a drunk and her dog': the relation between integrated processes is meaningful if they are cointegrated. To prevent spurious correlations, integrated series are usually transformed, which implies a loss of information. In the case of cointegration, these transformations are no longer necessary. Moreover, it can be shown that cointegration tests are instruments to detect spurious correlations between integrated time series. This paper compares the Dickey–Fuller and the Johansen cointegration tests. By means of Monte Carlo simulations, we find that these cointegration tests are a much more accurate alternative for identifying spurious relations than the rather imprecise method of using the R²- and DW-statistics recommended by some authors. Furthermore, we demonstrate that cointegration techniques distinguish precisely between spurious and meaningful relations even if the dependency between the processes is very low. Using these tests, the researcher is in danger neither of neglecting a small but meaningful relation nor of regarding as meaningful a relation that is actually spurious.

14.
In this paper, we develop two cointegration tests for two varying-coefficient cointegration regression models, respectively. Our test statistics are residual based. We derive the asymptotic distributions of the test statistics under the null hypothesis of cointegration and show that they are consistent against the alternative hypotheses. We also propose a wild bootstrap procedure, combined with the continuous moving-block bootstrap method proposed in Paparoditis and Politis (2001) and Phillips (2010), to rectify severe distortions found in simulations when the sample size is small. We apply the proposed test statistic to examine the purchasing power parity (PPP) hypothesis between the US and Canada. In contrast to the existing results from linear cointegration tests, our varying-coefficient cointegration test does not reject that PPP holds between the US and Canada.

15.
The inverse normal method, which is used to combine P-values from a series of statistical tests, requires independence of the single test statistics in order to obtain asymptotic normality of the joint test statistic. The paper discusses the modification by Hartung (1999, Biometrical Journal, Vol. 41, pp. 849–855), which is designed to allow for a certain correlation matrix of the transformed P-values. First, the modified inverse normal method is shown here to be valid with more general correlation matrices. Secondly, a necessary and sufficient condition for (asymptotic) normality is provided, using the copula approach. Thirdly, applications to panels of cross-correlated time series, stationary as well as integrated, are considered. The behaviour of the modified inverse normal method is quantified by means of Monte Carlo experiments.
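The inverse normal combination, and the way an equicorrelation of the probit-transformed P-values enters the variance, can be sketched with the standard library. Here the common correlation `rho` is taken as known, which is a simplification (Hartung's modification estimates it from the data):

```python
import math
from statistics import NormalDist

def inverse_normal_pvalue(p_values, rho=0.0):
    """Combine one-sided P-values by the inverse normal method.

    Each P-value is mapped to a probit t_i = Phi^{-1}(p_i). Under
    equicorrelation rho, Var(sum t_i) = n + n(n-1)rho, so the combined
    statistic is sum(t_i) / sqrt(n + n(n-1)rho). With rho = 0 this is
    Stouffer's classical method.
    """
    nd = NormalDist()
    t = [nd.inv_cdf(p) for p in p_values]
    n = len(t)
    variance = n + n * (n - 1) * rho
    return nd.cdf(sum(t) / math.sqrt(variance))
```

Using `rho = 0` when the probits are actually positively correlated understates the variance of the sum and makes the combined test anti-conservative, which is precisely the failure of independence the modification is designed to repair.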

16.
This paper develops an estimation and testing framework for a stationary large panel model with observable regressors and unobservable common factors. We allow for slope heterogeneity and for correlation between the common factors and the regressors. We propose a two-stage estimation procedure for the unobservable common factors and their loadings, based on the Common Correlated Effects estimator and the Principal Component estimator. We also develop two tests for the null of no factor structure: one for the null that the loadings are cross-sectionally homogeneous, and one for the null that the common factors are homogeneous over time. Our tests are based on extremes of the estimated loadings and common factors. The test statistics have an asymptotic Gumbel distribution under the null, and have power against alternatives where only one loading or common factor differs from the others. Monte Carlo evidence shows that the tests have the correct size and good power.

17.
In the present paper, we construct a new, simple, consistent and powerful test for spatial independence, called the SG test, by using the new concept of symbolic entropy as a measure of spatial dependence. The standard asymptotic distribution of the test is an affine transformation of the symbolic entropy under the null hypothesis. The test statistic, with the proposed symbolization procedure, and its standard limit distribution have appealing theoretical properties that guarantee the general applicability of the test. An important aspect is that the test does not require specification of the W matrix and is free of a priori assumptions. We include a Monte Carlo study of our test in comparison with the well-known Moran's I, the SBDS test (de Graaff et al., 2001) and the τ test (Brett and Pinkse, 1997), two non-parametric tests, to better appreciate the properties and behaviour of the new test. Apart from being competitive with other tests, the results underline the outstanding power of the new test for non-linear dependent spatial processes.

18.
This paper contributes to the literature on forecast evaluation by conducting an extensive Monte Carlo experiment using the evaluation procedure proposed by Elliott, Komunjer and Timmermann. We consider recent developments in weighting matrices for GMM estimation and testing. We pay special attention to the size and power properties of variants of the J‐test of forecast rationality. Proceeding from a baseline scenario to a more realistic setting, our results show that the approach leads to precise estimates of the degree of asymmetry of the loss function. For correctly specified models, we find the size of the J‐tests to be close to the nominal size, while the tests have high power against misspecified models. These findings are quite robust to inducing fat tails, serial correlation and outliers.

19.
We consider improved estimation strategies for the parameter matrix in multivariate multiple regression under a general and natural linear constraint. In the context of two competing models, where one model includes all predictors and the other restricts the coefficients to a candidate linear subspace based on prior information, there is a need to combine the two estimation techniques in an optimal way. In this scenario, we suggest some shrinkage estimators for the targeted parameter matrix and examine their performance relative to the candidate subspace restricted type estimators. We develop a large-sample theory for the estimators, including derivation of the asymptotic bias and asymptotic distributional risk of the suggested estimators. Furthermore, we conduct Monte Carlo simulation studies to appraise the performance of the suggested estimators relative to the classical estimators. The methods are also applied to a real data set for illustrative purposes.

20.
The covariance matrix plays a crucial role in portfolio optimization problems as the risk and correlation measure of asset returns. An improved estimation of the covariance matrix can enhance the performance of the portfolio. In this paper, based on the Cholesky decomposition of the covariance matrix, a Stein-type shrinkage strategy for portfolio weights is constructed under the mean-variance framework. Furthermore, according to the agent’s maximum expected utility value, a portfolio selection strategy is proposed. Finally, simulation experiments and an empirical study are used to test the feasibility of the proposed strategy. The numerical results show our portfolio strategy performs satisfactorily.
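As a hedged sketch of one building block of the abstract above — not the paper's shrinkage strategy — here is how the Cholesky factor of the covariance matrix can be used to compute global minimum-variance weights w ∝ Σ⁻¹1 via two triangular solves, without forming the inverse explicitly:

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance portfolio weights via the Cholesky factor.

    Solves Sigma w* = 1 using Sigma = L L^T (forward solve, then back
    solve), and normalizes so the weights sum to one.
    """
    cov = np.asarray(cov, dtype=float)
    ones = np.ones(cov.shape[0])
    L = np.linalg.cholesky(cov)        # lower-triangular factor of Sigma
    z = np.linalg.solve(L, ones)       # forward solve: L z = 1
    w = np.linalg.solve(L.T, z)        # back solve:   L^T w = Sigma^{-1} 1
    return w / w.sum()
```

A Stein-type strategy of the kind the abstract describes would shrink the estimated covariance (or the resulting weights) toward a structured target before this step, to reduce the estimation error that plagues sample covariance matrices in high dimensions.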
