Similar documents
Found 20 similar documents (search took 31 ms).
1.
This paper presents a methodology based on genetic algorithms, which finds feasible and reasonably adequate solutions to problems of robust design in multivariate systems. We use a genetic algorithm to determine the appropriate control factor levels for simultaneously optimizing all of the responses of the system, considering the noise factors which affect it. The algorithm is guided by a desirability function that works with a single fitness function even when the system has many responses. We validated the methodology using data obtained from a real system and also from a process simulator, considering univariate and multivariate systems. In all cases, the methodology delivered feasible solutions that accomplished the goals of robust design: responses very close to their target values, with minimum variability. Regarding the adjustment of the mean of each response to the target value, the algorithm performed very well. However, the algorithm was able to significantly reduce the variability of the responses in only some of the multivariate cases.
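A minimal sketch of the idea, not the authors' implementation: the process model, targets and tolerances below are hypothetical stand-ins, and the geometric mean of per-response desirabilities collapses several responses into the single fitness function the abstract mentions.

```python
import random

random.seed(0)

def desirability(y, target, tol):
    """Target-is-best desirability: 1 at the target, 0 beyond +/- tol."""
    return max(0.0, 1.0 - abs(y - target) / tol)

def responses(x1, x2):
    # Hypothetical two-response system (stand-in for a real process model).
    y1 = 3.0 * x1 + 2.0 * x2
    y2 = x1 - x2 + 4.0
    return y1, y2

def fitness(ind):
    y1, y2 = responses(*ind)
    d1 = desirability(y1, target=5.0, tol=3.0)
    d2 = desirability(y2, target=4.0, tol=3.0)
    return (d1 * d2) ** 0.5          # geometric mean -> one fitness value

def evolve(pop_size=40, gens=60):
    pop = [(random.uniform(0, 2), random.uniform(0, 2)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            # crossover by averaging, plus a small Gaussian mutation
            children.append(tuple((u + v) / 2 + random.gauss(0, 0.05)
                                  for u, v in zip(a, b)))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```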

2.
In this paper we derive the exact risk (under quadratic loss) of pre-test estimators of the prediction vector and of the error variance of a linear regression model with spherically symmetric disturbances. The pre-test in question is one of the validity of a set of exact linear restrictions on the model's coefficient vector. We demonstrate how the known results for the model with normal disturbances can be extended to this broader case. We also show that the critical value of unity results in a minimum of the risk of the pre-test estimator of the error variance. To illustrate the results we assume multivariate Student-t regression disturbances and numerically evaluate the derived expressions.

3.
In multivariate analysis, the measure of variance accounted for plays a central role. In this paper, we show that an alternative approach, distance-based multivariate analysis, also yields solutions that can be summarized by a ratio of variances. For classical multivariate analysis, this ratio is equal to the variance accounted for (VAF) and in distance-based multivariate analysis it equals distance accounted for (DAF). We show that DAF in distance-based multivariate analysis can always be made higher than VAF in classical multivariate analysis. This property is illustrated for principal components analysis, multiple correspondence analysis, multiple regression, and analysis of variance.
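The classical quantity is simple to compute; a sketch of the VAF of the first principal component for a two-variable dataset (the simulated data are an assumption for illustration; the paper's distance-based DAF is not reproduced here):

```python
import random
import statistics

random.seed(1)
# Two correlated variables
x = [random.gauss(0, 1) for _ in range(500)]
y = [0.8 * xi + random.gauss(0, 0.6) for xi in x]

def cov(a, b):
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    return sum((u - ma) * (v - mb) for u, v in zip(a, b)) / (len(a) - 1)

sxx, syy, sxy = cov(x, x), cov(y, y), cov(x, y)
# Eigenvalues of the 2x2 covariance matrix (closed form)
tr, det = sxx + syy, sxx * syy - sxy ** 2
disc = (tr ** 2 / 4 - det) ** 0.5
lam1, lam2 = tr / 2 + disc, tr / 2 - disc
vaf = lam1 / (lam1 + lam2)   # VAF of the first principal component
print(round(vaf, 3))
```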

4.
Melanie Frick, Metrika 2012, 75(6): 819-831
Asymptotic dependence can be interpreted as the property that realizations of the single components of a random vector occur simultaneously with high probability. Information about the asymptotic dependence structure can be captured by dependence measures such as the tail dependence parameter or the residual dependence index. We introduce these measures in the bivariate framework and then extend them to the multivariate case. Within extreme value theory, asymptotic dependence structures can be modeled by Pickands dependence functions and spectral expansions. Both in the bivariate and in the multivariate case we also compute the tail dependence parameter and the residual dependence index on the basis of this statistical model. They then take a specific shape and are related to the Pickands dependence function and the exponent of variation of the underlying density expansion.
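The bivariate tail dependence parameter has a simple empirical counterpart; a sketch with an assumed common-shock toy model (note that a finite-threshold count only approximates the limit defining the parameter, and the Gaussian-type model below is in fact asymptotically independent):

```python
import random

random.seed(2)
n = 5000
# Bivariate sample with a common shock (assumed toy model)
z = [random.gauss(0, 1) for _ in range(n)]
u = [zi + 0.5 * random.gauss(0, 1) for zi in z]
v = [zi + 0.5 * random.gauss(0, 1) for zi in z]

def empirical_tail_dep(a, b, q=0.95):
    """Estimate P(Y above its q-quantile | X above its q-quantile)."""
    ta = sorted(a)[int(q * len(a))]
    tb = sorted(b)[int(q * len(b))]
    both = sum(1 for xi, yi in zip(a, b) if xi > ta and yi > tb)
    one = sum(1 for xi in a if xi > ta)
    return both / one

lam = empirical_tail_dep(u, v)
print(round(lam, 3))
```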

5.
Ornstein–Uhlenbeck models are continuous-time processes which have broad applications in finance as, e.g., volatility processes in stochastic volatility models or spread models in spread options and pairs trading. The paper presents a least squares estimator for the model parameter in a multivariate Ornstein–Uhlenbeck model driven by a multivariate regularly varying Lévy process with infinite variance. We show that the estimator is consistent. Moreover, we derive its asymptotic behavior and test statistics. The results are compared to the finite variance case. For the proof we require some new results on multivariate regular variation of products of random vectors and central limit theorems. Furthermore, we embed this model in the setup of a co-integrated model in continuous time.
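A sketch of the least squares idea for a one-dimensional Ornstein–Uhlenbeck process, assuming a Gaussian driver and an Euler discretization for simplicity (the paper's setting is a multivariate heavy-tailed Lévy driver):

```python
import random

random.seed(3)
theta, sigma, dt, n = 1.5, 0.5, 0.01, 20000   # assumed parameters

# Euler-discretized OU path: dX = -theta * X dt + sigma dW
x = [0.0]
for _ in range(n):
    x.append(x[-1] - theta * x[-1] * dt + sigma * dt ** 0.5 * random.gauss(0, 1))

# Least squares: regress the increments on the current state
num = sum(x[i] * (x[i + 1] - x[i]) for i in range(n))
den = sum(x[i] ** 2 for i in range(n))
theta_hat = -num / (den * dt)
print(round(theta_hat, 2))
```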

6.
This paper introduces a new family of portmanteau tests for serial correlation. Using the wavelet transform, we decompose the variance of the underlying process into the variance of its low frequency and of its high frequency components and we design a variance ratio test of no serial correlation in the presence of dependence. Such decomposition can be carried out iteratively, each wavelet filter leading to a rich family of tests whose joint limiting null distribution is a multivariate normal. We illustrate the size and power properties of the proposed tests through Monte Carlo simulations.
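One step of the decomposition can be sketched with the Haar filter, assuming white noise as the null: the high-frequency (detail) coefficients carry about half of the sample energy, and a variance ratio far from 1/2 signals serial correlation (the paper's actual test statistics and their limiting distribution are not reproduced here):

```python
import random

random.seed(9)
n = 2048
x = [random.gauss(0, 1) for _ in range(n)]   # white noise: no serial correlation

# One level of the Haar transform splits the sample into low-frequency
# (smooth) and high-frequency (detail) coefficients.
detail = [(x[2 * t + 1] - x[2 * t]) / 2 ** 0.5 for t in range(n // 2)]
smooth = [(x[2 * t + 1] + x[2 * t]) / 2 ** 0.5 for t in range(n // 2)]

energy_detail = sum(d * d for d in detail)
energy_smooth = sum(s * s for s in smooth)
energy_total = sum(v * v for v in x)

ratio = energy_detail / energy_total          # variance ratio statistic
print(round(ratio, 3))
```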

7.
The Shewhart and Bonferroni-adjustment R and S charts are usually applied to monitor the range and the standard deviation of a quality characteristic. These charts are used to recognize the process variability of a quality characteristic. The control limits of these charts are constructed on the assumption that the population follows approximately the normal distribution, with the standard deviation parameter either known or unknown. In this article, we establish two new charts based approximately on the normal distribution. The constant values needed to construct the new control limits depend on the number of sample groups (k) and the subgroup size (n). Additionally, the unknown standard deviation for the proposed approaches is estimated by a uniformly minimum variance unbiased estimator (UMVUE). This estimator has smaller variance than the estimator used in the Shewhart and Bonferroni approaches. In the case of an unknown standard deviation, the proposed approaches give an out-of-control average run length slightly less than the Shewhart approach and considerably less than the Bonferroni-adjustment approach.
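For reference, a sketch of the classical Shewhart S chart with the standard c4 unbiasing constant (the article's UMVUE-based limits differ and are not reproduced; the data below are simulated):

```python
import math
import random
import statistics

def c4(n):
    """Unbiasing constant: E[S] = c4(n) * sigma for normal samples."""
    return math.sqrt(2.0 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)

random.seed(4)
k, n = 25, 5                       # k subgroups of size n
subgroups = [[random.gauss(10, 2) for _ in range(n)] for _ in range(k)]
s_values = [statistics.stdev(g) for g in subgroups]
s_bar = statistics.fmean(s_values)
sigma_hat = s_bar / c4(n)          # unbiased estimate of sigma

# Classical 3-sigma S-chart limits (LCL clipped at zero)
half_width = 3 * sigma_hat * math.sqrt(1 - c4(n) ** 2)
ucl = s_bar + half_width
lcl = max(0.0, s_bar - half_width)
print(round(lcl, 3), round(s_bar, 3), round(ucl, 3))
```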

8.
In this paper, for each solution for TU games, we define its “dual” and “anti-dual”. Then, we apply these notions to axioms: two axioms are (anti-)dual to each other if whenever a solution satisfies one of them, its (anti-)dual satisfies the other. It turns out that these definitions allow us not only to organize existing axiomatizations of various solutions but also to find new axiomatizations of some solutions. As an illustration, we show that two well-known axiomatizations of the core are essentially equivalent in the sense that one can be derived from the other, and derive new axiomatizations of the Shapley value and the Dutta–Ray solution.
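The duality for games that underlies these notions is easy to state concretely; a sketch with a made-up symmetric 3-player game, illustrating the known fact that the Shapley value is self-dual (it assigns the same payoffs in the dual game):

```python
import math
from itertools import permutations

# Hypothetical symmetric 3-player TU game (not from the paper)
N = (1, 2, 3)
v = {(): 0, (1,): 1, (2,): 1, (3,): 1,
     (1, 2): 3, (1, 3): 3, (2, 3): 3, (1, 2, 3): 6}

def key(players):
    return tuple(sorted(players))

def dual(v, N):
    """Dual game: v*(S) = v(N) - v(N \\ S)."""
    return {s: v[key(N)] - v[key(set(N) - set(s))] for s in v}

def shapley(v, N):
    """Shapley value as the average marginal contribution over all orders."""
    phi = dict.fromkeys(N, 0.0)
    for order in permutations(N):
        coalition = []
        for p in order:
            before = v[key(coalition)]
            coalition.append(p)
            phi[p] += (v[key(coalition)] - before) / math.factorial(len(N))
    return phi

phi = shapley(v, N)
phi_dual = shapley(dual(v, N), N)
print(phi, phi_dual)
```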

9.
This study investigates business models for frugal innovation (i.e. a specific form of resource-constrained innovation) in the medical device and laboratory equipment industry in the context of emerging markets. Based on original data from five case studies, we investigate how firms can set up value creation and value capturing mechanisms to reach new customer segments in remote rural areas with unprecedented value propositions. With this research, we contribute to the literature on frugal innovation and business models in emerging markets. It is among the first empirical studies to apply a fine-grained perspective on resource-constrained innovation in emerging markets. In doing so, we focus on its most disruptive form, in which these innovations entail entirely new applications. We advance and detail the value proposition for frugal innovation in these industries and argue that frugal innovations create new markets. Further, we show how firms set up their value creation and value capturing mechanisms to achieve the value proposition, and we identify two distinct research and development (R&D) strategies for frugal innovation.

10.
Vector autoregressions (VARs) are important tools in time series analysis. However, relatively little is known about the finite-sample behaviour of parameter estimators. We address this issue, by investigating ordinary least squares (OLS) estimators given a data generating process that is a purely nonstationary first-order VAR. Specifically, we use Monte Carlo simulation and numerical optimisation to derive response surfaces for OLS bias and variance, in terms of VAR dimensions, given correct specification and several types of over-parameterisation of the model: we include a constant, and a constant and trend, and introduce excess lags. We then examine the correction factors that are required for the least squares estimator to attain the minimum mean squared error (MSE). Our results improve and extend one of the main finite-sample multivariate analytical bias results of Abadir, Hadri and Tzavalis [Abadir, K.M., Hadri, K., Tzavalis, E., 1999. The influence of VAR dimensions on estimator biases. Econometrica 67, 163–181], generalise the univariate variance and MSE findings of Abadir [Abadir, K.M., 1995. Unbiased estimation as a solution to testing for random walks. Economics Letters 47, 263–268] to the multivariate setting, and complement various asymptotic studies.
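The downward OLS bias at the unit root is easy to reproduce by Monte Carlo; a univariate sketch with an assumed sample size and no deterministic terms (the paper's response surfaces across VAR dimensions and specifications are not reproduced):

```python
import random
import statistics

random.seed(5)
T, reps = 100, 500

def ols_rho(y):
    """OLS estimate of the AR(1) coefficient, no intercept."""
    num = sum(y[t - 1] * y[t] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

estimates = []
for _ in range(reps):
    y = [0.0]
    for _ in range(T):
        y.append(y[-1] + random.gauss(0, 1))   # pure random walk (rho = 1)
    estimates.append(ols_rho(y))

bias = statistics.fmean(estimates) - 1.0
print(round(bias, 4))
```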

11.
Modeling the correlation structure of returns is essential in many financial applications. Considerable evidence from empirical studies has shown that the correlation among asset returns is not stable over time. A recent development in the multivariate stochastic volatility literature is the application of inverse Wishart processes to characterize the evolution of return correlation matrices. Within the inverse Wishart multivariate stochastic volatility framework, we propose a flexible correlated latent factor model to achieve dimension reduction and capture the stylized fact of ‘correlation breakdown’ simultaneously. The parameter estimation is based on existing Markov chain Monte Carlo methods. We illustrate the proposed model with several empirical studies. In particular, we use high-dimensional stock return data to compare our model with competing models based on multiple performance metrics and tests. The results show that the proposed model not only describes historic stylized facts reasonably but also provides the best overall performance.

12.
Pooling of data is often carried out to protect privacy or to save cost, with the claimed advantage that it does not lead to much loss of efficiency. We argue that this does not give the complete picture as the estimation of different parameters is affected to different degrees by pooling. We establish a ladder of efficiency loss for estimating the mean, variance, skewness and kurtosis, and more generally multivariate joint cumulants, in powers of the pool size. The asymptotic efficiency of the pooled data non-parametric/parametric maximum likelihood estimator relative to the corresponding unpooled data estimator is reduced by a factor equal to the pool size whenever the order of the cumulant to be estimated is increased by one. The implications of this result are demonstrated in case–control genetic association studies with interactions between genes. Our findings provide a guideline for the discriminate use of data pooling in practice and the assessment of its relative efficiency. As exact maximum likelihood estimates are difficult to obtain if the pool size is large, we address briefly how to obtain computationally efficient estimates from pooled data and suggest Gaussian estimation and non-parametric maximum likelihood as two feasible methods.
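The first rung of the efficiency ladder can be illustrated by simulation: estimating the variance (a second-order cumulant) from pool means loses roughly a factor equal to the pool size relative to the unpooled estimator, while the mean loses nothing. The pool size and sample sizes below are arbitrary assumptions:

```python
import random
import statistics

random.seed(6)
k, n_pools, reps = 4, 50, 400      # pool size, number of pools, replications

var_unpooled, var_pooled = [], []
for _ in range(reps):
    data = [[random.gauss(0, 1) for _ in range(k)] for _ in range(n_pools)]
    flat = [x for pool in data for x in pool]
    var_unpooled.append(statistics.variance(flat))
    pool_means = [statistics.fmean(pool) for pool in data]
    # k * Var(pool means) is unbiased for sigma^2 from pooled data
    var_pooled.append(k * statistics.variance(pool_means))

# Monte Carlo variance ratio of the two estimators of sigma^2
eff = statistics.variance(var_pooled) / statistics.variance(var_unpooled)
print(round(eff, 2))   # roughly the pool size k
```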

13.
A Hotelling-type model of spatial competition is considered, in which two firms compete in uniform delivered prices. First, it is shown that no uniform delivered price–location equilibrium exists when the product sold by the firms is perfectly homogeneous and consumers buy from the firm quoting the lower delivered price. Second, when the product is heterogeneous and preferences are identically and independently Weibull-distributed with standard deviation μ, we prove that a unique uniform delivered price–location equilibrium exists if and only if μ is at least one eighth of the transportation rate times the size of the market. In equilibrium, firms are located at the center of the market and charge the same uniform delivered price, which equals their average transportation cost plus a mark-up of 2μ. Finally, we discuss how our result extends to the case of n firms and compare equilibria under uniform mill and delivered pricing.
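A numerical reading of the equilibrium characterization, assuming a unit market with uniformly distributed consumers and illustrative parameter values:

```python
# Equilibrium price on a unit market [0, 1] (illustrative values only)
t = 1.0            # transportation rate
L = 1.0            # market size
mu = 0.2           # preference heterogeneity; existence needs mu >= t * L / 8
assert mu >= t * L / 8

# Firms locate at the market center; the average transport cost to a
# uniformly distributed consumer at distance |x - 1/2| is t * L / 4.
avg_transport = t * L / 4
p_star = avg_transport + 2 * mu    # equilibrium uniform delivered price
print(round(p_star, 2))
```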

14.
15.
Green Lean Six Sigma has recently been proposed to improve the environmental sustainability performance of operations, but studies that integrate the concepts of green, lean, and Six Sigma into one unified application remain scarce. This paper is accordingly aimed at establishing the application of Green Lean Six Sigma as a cleaner production approach. To this end, an approach based on Define, Measure, Analyze, Improve, and Control (DMAIC), one of Six Sigma's well-known methods, was proposed to systematize a Green Lean tool: environmental value stream mapping. Thus, this paper, as one of the preliminary studies in this area, aligns environmental value stream mapping with DMAIC by presenting a methodological approach that relies on the five DMAIC phases and considers green wastes in each phase simultaneously. To support the narrow body of knowledge, the proposed approach was validated via an action research-oriented case study implemented in a substrate manufacturing system seeking to develop the environmental sustainability of its production processes and, subsequently, its general competitiveness. The findings indicated the effectiveness of a DMAIC-based approach in systematizing environmental value stream mapping and improving its efficacy in achieving environmental sustainability. The case analysis revealed that the application can significantly lessen the consumption of chemicals and energy in the system, by 28% and 21%, respectively.

16.
We assess the predictive accuracy of a large number of multivariate volatility models in terms of pricing options on the Dow Jones Industrial Average. We measure the value of model sophistication in terms of dollar losses by considering a set of 444 multivariate models that differ in their specification of the conditional variance, conditional correlation, innovation distribution, and estimation approach. All of the models belong to the dynamic conditional correlation class, which is particularly suitable because it allows consistent estimation of the risk-neutral dynamics with a manageable amount of computational effort for relatively large-scale problems. It turns out that increasing the sophistication of the marginal variance processes (i.e., nonlinearity, asymmetry and component structure) leads to important gains in pricing accuracy. Enriching the model with more complex existing correlation specifications does not improve the performance significantly. Estimating the standard dynamic conditional correlation model by composite likelihood, in order to take into account potential biases in the parameter estimates, generates only slightly better results. To remedy the poor performance of correlation models, we propose a new model that allows for correlation spillovers without too many parameters. This model performs about 60% better than the existing correlation models we consider. Replacing the Gaussian innovation assumption with a Laplace innovation improves the pricing to a more minor degree. In addition to investigating the value of model sophistication in terms of dollar losses directly, we also use the model confidence set approach to statistically infer the set of models that delivers the best pricing performance.

17.
Many static and dynamic models exist to forecast Value-at-Risk and other quantile-related metrics used in financial risk management. Industry practice favours simpler, static models such as historical simulation or its variants. Most academic research focuses on dynamic models in the GARCH family. While numerous studies examine the accuracy of multivariate models for forecasting risk metrics, there is little research on accurately predicting the entire multivariate distribution. However, this is an essential element of asset pricing or portfolio optimization problems having non-analytic solutions. We approach this highly complex problem using various proper multivariate scoring rules to evaluate forecasts of eight-dimensional multivariate distributions: exchange rates, interest rates and commodity futures. This way, we test the performance of static models, namely, empirical distribution functions and a new factor-quantile model with commonly used dynamic models in the asymmetric multivariate GARCH class.
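Proper multivariate scoring rules are straightforward to evaluate from forecast samples; a sketch of the energy score, a common example, with two assumed forecast distributions for a made-up three-dimensional observation (the paper's specific rules, dimensions and models are not reproduced):

```python
import math
import random

random.seed(7)
d, m = 3, 200

def norm(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def energy_score(samples, obs):
    """Sample-based energy score: lower is better; proper for the mean form."""
    m = len(samples)
    term1 = sum(norm(x, obs) for x in samples) / m
    term2 = sum(norm(x, y) for x in samples for y in samples) / (2 * m * m)
    return term1 - term2

obs = [0.0] * d
good = [[random.gauss(0, 1) for _ in range(d)] for _ in range(m)]  # correct model
bad = [[random.gauss(3, 1) for _ in range(d)] for _ in range(m)]   # biased model
es_good, es_bad = energy_score(good, obs), energy_score(bad, obs)
print(round(es_good, 2), round(es_bad, 2))
```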

18.
We propose global and disaggregated spillover indices that allow us to assess variance and covariance spillovers, locally in time and conditionally on time-t information. Key to our approach is the vector moving average representation of the half-vectorized ‘squared’ multivariate GARCH process of the popular BEKK model. In an empirical application to a four-dimensional system of broad asset classes (equity, fixed income, foreign exchange and commodities), we illustrate the new spillover indices at various levels of (dis)aggregation. Moreover, we demonstrate that they are informative of the value-at-risk violations of portfolios composed of the considered asset classes.

19.
The aim of this paper is to investigate extensively the performance of estimators for the Greeks of multidimensional complex path-dependent options obtained with the aid of Malliavin calculus. The study analyzes both the computational effort and the variance reduction in the quasi-Monte Carlo simulation framework. For this purpose, we adapt the approach employed by Montero and Kohatsu-Higa to the multidimensional case. The multidimensional setting shows the convenience of the Malliavin calculus approach over different techniques that have been previously proposed. Indeed, these techniques may be computationally expensive and do not provide enough flexibility for variance reduction. In contrast, the Malliavin approach provides a class of functions that return the same expected value (the Greek) with different accuracies. This versatility for variance reduction is not possible without the use of the generalized integration-by-parts formula of Malliavin calculus. In the multidimensional context, we find convenient formulas that permit improving the localization technique, introduced in Fournié et al., and reduce both the computational cost and the variance. Moreover, we show that the parameters for the variance reduction can be obtained on the fly in the simulation. We illustrate the efficiency of the proposed procedures, coupled with the enhanced version of quasi-Monte Carlo simulations discussed in Sabino, for the numerical estimation of the Deltas of call, digital, Asian-style and exotic basket options with a fixed and a floating strike price in a multidimensional Black-Scholes market. Given that the Gammas of a call option coincide, apart from a constant, with the Deltas of digital options, this setting also covers the analysis of formulas tailored to the second-order Greeks of call options.
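In the one-dimensional Black-Scholes case, the Malliavin weight for the Delta of a digital option reduces to the familiar likelihood-ratio weight Z / (S0 σ √T), which avoids differentiating the discontinuous payoff. A sketch checking the Monte Carlo estimate against the closed form, with illustrative parameters (the paper's multidimensional, localized estimators are not reproduced):

```python
import math
import random

random.seed(8)
S0, K, r, sigma, T, n = 100.0, 100.0, 0.05, 0.2, 1.0, 200_000

acc = 0.0
for _ in range(n):
    z = random.gauss(0, 1)
    ST = S0 * math.exp((r - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * z)
    payoff = 1.0 if ST > K else 0.0
    acc += payoff * z / (S0 * sigma * math.sqrt(T))   # Malliavin/LR weight

delta_mc = math.exp(-r * T) * acc / n

# Closed-form digital Delta: exp(-rT) * phi(d2) / (S0 * sigma * sqrt(T))
d2 = (math.log(S0 / K) + (r - 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
phi_d2 = math.exp(-0.5 * d2 ** 2) / math.sqrt(2 * math.pi)
delta_exact = math.exp(-r * T) * phi_d2 / (S0 * sigma * math.sqrt(T))
print(round(delta_mc, 4), round(delta_exact, 4))
```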

20.
Differencing is a very popular stationary transformation for series with stochastic trends. Moreover, when the differenced series is heteroscedastic, authors commonly model it using an ARMA-GARCH model. The corresponding ARIMA-GARCH model is then used to forecast future values of the original series. However, the heteroscedasticity observed in the stationary transformation should be generated by the transitory and/or the long-run component of the original data. In the former case, the shocks to the variance are transitory and the prediction intervals should converge to homoscedastic intervals with the prediction horizon. We show that, in this case, the prediction intervals constructed from the ARIMA-GARCH models could be inadequate because they never converge to homoscedastic intervals. All of the results are illustrated using simulated and real time series with stochastic levels.
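The transitory nature of variance shocks can be sketched with a stationary GARCH(1,1) on the differenced series (parameters assumed): the h-step conditional variance forecasts decay geometrically to the unconditional variance, the homoscedastic limit to which the prediction intervals should converge:

```python
# h-step-ahead variance forecasts from a stationary GARCH(1,1)
omega, alpha, beta = 0.1, 0.1, 0.8            # assumed parameters
uncond = omega / (1 - alpha - beta)           # unconditional variance (= 1)
sigma2_next = 3.0                             # large transitory volatility shock

forecasts = []
s = sigma2_next
for h in range(1, 51):
    forecasts.append(s)
    s = omega + (alpha + beta) * s            # recursion for the next horizon
print(round(forecasts[0], 3), round(forecasts[-1], 3), round(uncond, 3))
```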


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)