Similar Literature
20 similar articles found (search time: 15 ms)
1.
We have developed a generalised iterative scaling method (KRAS) that is able to balance and reconcile input–output tables and SAMs under conflicting external information and inconsistent constraints. Like earlier RAS variants, KRAS can: (a) handle constraints on arbitrarily sized and shaped subsets of matrix elements; (b) incorporate the reliability of the initial estimate and of the external constraints; and (c) deal with negative values and preserve the sign of matrix elements. Applying KRAS in four case studies, we find that, as with constrained optimisation, KRAS is able to find a compromise solution between inconsistent constraints. This feature does not exist in conventional RAS variants such as GRAS. KRAS can constitute a major advance for the practice of balancing input–output tables and Social Accounting Matrices, in that it removes the necessity of manually tracing inconsistencies in external information. Nor does this quality come at the expense of the substantial programming and computational requirements of conventional constrained optimisation techniques.
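None of the KRAS extensions described above (constraint reliabilities, sign preservation, reconciliation of conflicting constraints) are reproduced here, but the plain biproportional RAS core that KRAS generalises can be sketched in a few lines; the matrix and margin totals are made up for illustration:

```python
import numpy as np

def ras(A, row_targets, col_targets, iters=500, tol=1e-10):
    """Classic biproportional (RAS) scaling: alternately rescale rows and
    columns until the matrix margins match the target totals."""
    X = A.astype(float).copy()
    for _ in range(iters):
        X *= (row_targets / X.sum(axis=1))[:, None]   # fix row sums
        X *= (col_targets / X.sum(axis=0))[None, :]   # fix column sums
        if np.allclose(X.sum(axis=1), row_targets, atol=tol):
            break
    return X

# Hypothetical 2x2 prior table balanced to consistent margin totals.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
X = ras(A, row_targets=np.array([4.0, 6.0]), col_targets=np.array([5.0, 5.0]))
```

Note that plain RAS, unlike KRAS, requires the row and column totals to be mutually consistent; with conflicting margins the alternation above would cycle rather than converge.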

2.
Small area estimation is concerned with methodology for estimating population parameters associated with a geographic area defined by a cross-classification that may also include non-geographic dimensions. In this paper, we develop constrained estimation methods for small area problems: those requiring smoothness with respect to similarity across areas, such as geographic proximity or clustering by covariates, and benchmarking constraints, which require weighted means of estimates to agree across levels of aggregation. We develop methods for constrained estimation decision-theoretically and discuss their geometric interpretation. The constrained estimators are the solutions to tractable optimisation problems and have closed-form solutions. Mean squared errors of the constrained estimators are calculated via bootstrapping. Our approach assumes the Bayes estimator exists and is applicable to any proposed model. In addition, we give special cases of our techniques under certain distributional assumptions. We illustrate the proposed methodology using web-scraped data on Berlin rents, aggregated over areas to ensure privacy.

3.
This paper formalises the assembly information of an autobody and derives all feasible assembly sequences. An assembly information model is built using polychromatic sets, and the mathematical forms of these models are given. The model is described by locating relation equations, which express the constrained relationships of locating, and displacement interference equations, which express the constrained relationships of possible displacement. First, the assembly information model is used to model the assembly information of the autobody; then, according to the locating relation equations and displacement interference equations, all feasible assembly sequences can be derived. Finally, a case study illustrates the application of the proposed assembly information model. The proposed assembly information model and sequence generation approach are successfully demonstrated on autobody assembly planning, and could facilitate assembly sequence optimisation under robust and reliable conditions.

4.
This paper documents the development of a time series of Australian input–output tables. It describes the construction techniques employed to overcome the major issues encountered. Environmentally important processes were delineated using a range of detailed commodity data, thus expanding the original tables from roughly 100 industries into a temporally consistent 344 industries. Data confidentiality and inconsistency were overcome using an iterative constrained optimisation method called KRAS, a recent modification of RAS (Lenzen et al. 2006; 2007; 2009). The article concludes by analysing the stability of input–output coefficients over time, similar to the work of Dietzenbacher and Hoen (2006). The stability of coefficients and multipliers was investigated under the Leontief and Ghosh models of supply/demand. Finally, the predictability of the models was examined under updated final demand or primary inputs and over varying time scales.
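The Leontief side of the stability analysis mentioned above rests on the total-requirements matrix (the Leontief inverse). A minimal sketch with a purely hypothetical 2-industry technical-coefficient matrix:

```python
import numpy as np

# Hypothetical technical-coefficient matrix A; entry A[i, j] is the input
# from industry i needed per unit of industry j's output.
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])

L = np.linalg.inv(np.eye(2) - A)         # Leontief inverse (total requirements)
final_demand = np.array([100.0, 50.0])   # made-up final demand vector
gross_output = L @ final_demand          # gross output needed to satisfy it
column_multipliers = L.sum(axis=0)       # simple output multipliers per industry
```

The coefficient stability question studied in the paper then amounts to asking how much A, and hence L and the multipliers, drift between table years.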

5.
The purpose of this paper is to investigate the role of the process approach in the implementation of an optimisation programme. It is proposed that applying a process model of a company helps overcome functional boundaries and, consequently, the sub-optimisation of logistics system performance. A process model of the internal logistics system of a wholesale trading company, based on the Supply Chain Operations Reference (SCOR) model, has been developed. Relations between business functions, processes and performance indicators (metrics) have been analysed. An optimisation model has been developed, and a comparative analysis of the possible results of optimising processes versus functions has been conducted. The results demonstrate that optimisation of functions yields a sub-optimal solution, caused by functional boundaries, whereas optimisation of processes yields an optimal one. The research provides the rationale for implementing the process approach in order to make optimal decisions regarding logistics activities, together with a technique for the practical implementation of an optimisation programme.

6.
Computer networks have become very popular in enterprise applications. However, optimising network designs so that networks can be used more efficiently in industrial environments and enterprise applications remains an interesting research topic. This article discusses the topology optimisation theory and methods for network control systems based on switched Ethernet in an industrial context. Factors that affect the real-time performance of the industrial control network are presented in detail, and optimisation criteria with their internal relations are analysed. After defining the performance parameters, normalised indices for evaluating the topology optimisation are proposed. The topology optimisation problem is formulated as a multi-objective optimisation problem, and an evolutionary algorithm is applied to solve it. The special communication characteristics of the industrial control network are considered in the optimisation process. With respect to the design of the evolutionary algorithm, an improved arena algorithm is proposed for constructing the non-dominated set of the population. In addition, for the evaluation of individuals, the integrated use of the dominance relation method and the objective function combination method is described, which reduces the computational cost of the algorithm. Simulation tests show that the performance of the proposed algorithm is superior to that of other algorithms. The final solution greatly improves the following indices: traffic localisation, traffic balance and the utilisation rate balance of switches. In addition, a new performance index, with its estimation process, is proposed.

7.
Designing a complex product such as an aircraft usually requires both qualitative and quantitative data and reasoning. To assist the design process, a critical issue is how to represent qualitative data and utilise it in the optimisation. In this study, a new method is proposed for the optimal design of complex products: to make full use of the available data, information and knowledge, qualitative reasoning is integrated into the optimisation process. The transformation and fusion of qualitative and quantitative data are achieved via fuzzy set theory and a cloud model. To shorten the design process, parallel computing is employed to solve the formulated optimisation problems. A parallel adaptive hybrid algorithm (PAHA) is proposed, and its performance has been verified by comparing its results with those of two other existing algorithms. Further, PAHA has been applied to determine the shape parameters of an aircraft model for aerodynamic optimisation purposes.

8.
In an incomplete market model where convex trading constraints are imposed upon the underlying assets, it is no longer possible to obtain unique arbitrage-free prices for derivatives using standard replication arguments. Most existing derivative pricing approaches involve the selection of a suitable martingale measure, or the optimisation of utility functions or risk measures from the perspective of a single trader. We propose a new and effective derivative pricing method, referred to as the equal risk pricing approach, for markets with convex trading constraints. The approach analyses the risk exposure of both the buyer and the seller of the derivative, and seeks an equal risk price which evenly distributes the expected loss between both parties under optimal hedging. The existence and uniqueness of the equal risk price are established for both European and American options. Furthermore, if the trading constraints are removed, the equal risk price agrees with the standard arbitrage-free price. Finally, the equal risk pricing approach is applied to a constrained Black–Scholes market model where short-selling is banned. In particular, simple pricing formulas are derived for European calls, European puts and American puts.

9.
Because it can function under harsh environmental conditions and serve as a distributed source of condition information in a networked monitoring system, the fibre Bragg grating (FBG) sensor network has attracted considerable attention for online equipment condition monitoring. To provide an overall view of the condition of mechanical equipment in operation, a networked service-oriented condition monitoring framework based on FBG sensing is proposed, together with an intelligent matching method supporting monitoring service management. In the novel framework, three classes of progressive service matching approaches, namely service-chain knowledge database service matching, multi-objective constrained service matching and workflow-driven human-interactive service matching, are developed and integrated with an enhanced particle swarm optimisation (PSO) algorithm as well as a workflow-driven mechanism. Moreover, the manufacturing domain ontology, the FBG sensor network structure and the monitoring object are considered to facilitate the automatic matching of condition monitoring services, overcoming the limitations of traditional service processing methods. The experimental results demonstrate that FBG monitoring services can be selected intelligently, and that the developed condition monitoring system can be rebuilt rapidly as new equipment joins the framework. The effectiveness of the service matching method is also verified by implementing a prototype system and analysing its performance.

10.
Multi-disciplinary design optimisation (MDO) is one of the critical methodologies for the implementation of enterprise systems (ES). MDO that requires the analysis of fluid dynamics poses a special challenge because of its extremely intensive computation. The rapid development of computational fluid dynamics (CFD) techniques has led to a rise in their application in various fields. Especially for the exterior design of vehicles, CFD has become one of the three main design tools, comparable to analytical approaches and wind tunnel experiments. CFD-based design optimisation is an effective way to achieve the desired performance under given constraints. However, due to the complexity of CFD, integrating CFD analysis into an intelligent optimisation algorithm is not straightforward: a CFD-based design problem usually has high dimensionality, multiple objectives and multiple constraints. An integrated architecture for CFD-based design optimisation is therefore desirable, yet our review of existing work found that very few researchers have studied assistive tools to facilitate it. In this paper, a multi-layer architecture and a general procedure are proposed to integrate different CFD toolsets with intelligent optimisation algorithms, parallel computing and other techniques for efficient computation. In the proposed architecture, the integration is performed either at the code level or at the data level to fully utilise the capabilities of the different assistive tools. Two intelligent algorithms are developed and embedded with parallel computing. These algorithms, together with the supporting architecture, lay a solid foundation for various applications of CFD-based design optimisation. To illustrate the effectiveness of the proposed architecture and algorithms, case studies on the aerodynamic shape design of a hypersonic cruising vehicle are provided; the results show that the proposed architecture and algorithms perform successfully and efficiently on a design optimisation problem with over 200 design variables.

11.
In the general vector autoregressive process AR(p), multivariate least squares estimation (LSE)/maximum likelihood estimation (MLE) of a subset of the parameters is considered when the complementary subset is suspected to be redundant. This may be viewed as a special case of linear constraints on the autoregressive parameters. We incorporate this non-sample information into the estimation process and propose preliminary test and Stein-type estimators for the target subset of parameters. Under local alternatives, their asymptotic properties are investigated and compared with those of the unrestricted and restricted LSE. The dominance picture of the estimators is presented.

12.
Following Parsian and Farsipour (1999), we consider the problem of estimating the mean of the selected normal population, from two normal populations with unknown means and a common known variance, under the LINEX loss function. Some admissibility results for a subclass of equivariant estimators are derived, and a sufficient condition for the inadmissibility of an arbitrary equivariant estimator is provided. As a consequence, several of the estimators proposed by Parsian and Farsipour (1999) are shown to be inadmissible, and better estimators are obtained. Received January 2001/Revised May 2002
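For reference, the LINEX (linear-exponential) loss function underlying these admissibility results has the standard form, for an estimator $\delta$ of $\theta$ (the constants $a$, $b$ follow the common convention; the exact normalisation used by Parsian and Farsipour may differ):

```latex
L(\delta,\theta) \;=\; b\left( e^{a(\delta-\theta)} - a(\delta-\theta) - 1 \right),
\qquad a \neq 0,\; b > 0 .
```

For $a > 0$ overestimation is penalised roughly exponentially and underestimation roughly linearly; as $a \to 0$ the loss behaves like the scaled squared error $\tfrac{1}{2} b a^2 (\delta-\theta)^2$.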

13.
Synthetic collateralised loan obligations (CLOs) often have a replenishment feature: if the securitised assets amortise, the unused CLO volume can be replenished (refilled). While this replenishment problem is usually not a linear optimisation problem, we show that under additional assumptions it can be converted into one. The replenishment problem thus becomes numerically tractable even for a large number of assets.
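The paper's actual replenishment problem carries many constraints (the assumptions under which it linearises are its main result). As a deliberately simplified illustration, if only a single volume cap binds, the resulting linear programme is a fractional knapsack, whose optimum is the greedy fill; all figures below are hypothetical:

```python
def replenish(assets, capacity):
    """Replenishment as a one-constraint linear programme: choose asset
    amounts maximising yield subject to the freed-up volume cap. With a
    single linear constraint, the LP optimum is the fractional-knapsack
    greedy solution: fill the highest-yielding assets first."""
    purchases, income = [], 0.0
    for rate, volume in sorted(assets, key=lambda a: -a[0]):
        take = min(volume, capacity)  # buy as much of this asset as fits
        if take <= 0:
            break
        purchases.append((rate, take))
        capacity -= take
        income += rate * take
    return purchases, income

# Hypothetical pool of (yield rate, available volume) pairs;
# 60 units of CLO volume have been freed by amortisation.
purchases, income = replenish([(0.05, 40.0), (0.07, 30.0), (0.04, 50.0)],
                              capacity=60.0)
```

With several simultaneous linear constraints (rating buckets, diversity limits, and so on) a general LP solver would replace the greedy loop, but the numerical tractability argument is the same.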

14.
Survey statisticians use either approximate or optimisation-based methods to stratify finite populations. Examples of the former are the cumrootf (Dalenius & Hodges, 1957) and geometric (Gunning & Horgan, 2004) methods, while examples of the latter are the Sethi (1963) and Kozak (2004) algorithms. The approximate procedures result in inflexible stratum boundaries; this lack of flexibility leads to non-optimal boundaries. Optimisation-based methods, on the other hand, provide stratum boundaries that can simultaneously account for (i) a chosen allocation scheme, (ii) the overall sample size or required reliability of the estimator of a studied parameter and (iii) the presence or absence of a take-all stratum. Given these additional conditions, optimisation-based methods will result in optimal boundaries. The only disadvantage of these methods is their complexity; however, in the second decade of the 21st century, this complexity no longer poses a real problem. We illustrate how these two groups of methods differ by comparing their efficiency for two artificial populations and a real population. Our final point is that statistical offices should prefer optimisation-based over approximate stratification methods; such a decision will help them either save much public money or, if funds are already allocated to a survey, obtain more precise estimates of national statistics.
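The cumrootf method criticised above is simple enough to sketch: histogram the stratification variable, accumulate the square roots of the frequencies, and cut the cumulative curve into equal slices. The population below is made up; the rule itself is Dalenius and Hodges (1957):

```python
import numpy as np

def cum_root_f_boundaries(values, n_bins, n_strata):
    """Dalenius-Hodges cum-root-f rule: histogram the stratification
    variable, accumulate sqrt(frequency), and place stratum boundaries
    where the cumulative curve crosses equal slices."""
    freq, edges = np.histogram(values, bins=n_bins)
    crf = np.cumsum(np.sqrt(freq))
    cuts = [crf[-1] * h / n_strata for h in range(1, n_strata)]
    # boundary = upper edge of the bin where cum sqrt(f) reaches each cut
    return [edges[int(np.searchsorted(crf, c)) + 1] for c in cuts]

# A skewed, made-up population; 4 strata from a 10-bin histogram.
values = np.arange(1, 201) ** 1.5
boundaries = cum_root_f_boundaries(values, n_bins=10, n_strata=4)
```

The inflexibility the abstract points to is visible here: the boundaries can only fall on histogram bin edges, and nothing in the rule reflects the allocation scheme, the target precision, or a take-all stratum.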

15.
It is proved that there exists an unbiased estimator for some real parameter of a class of distributions, which has minimal variance for some fixed distribution among all corresponding unbiased estimators, if and only if the corresponding minimal variances for all related unbiased estimation problems concerning finite subsets of the underlying family of distributions are bounded. As an application it is shown that there does not exist an unbiased estimator for θk + c (c ≥ 0) with minimal variance at θ = 0 among all corresponding unbiased estimators on the basis of k i.i.d. random variables with a Cauchy distribution, where θ denotes a location parameter.

16.
Recent research has emphasised that an increasing number of enterprises need computation environments for executing HPC (High Performance Computing) applications. Rather than paying the cost of owning and maintaining physical, fixed-capacity clusters, enterprises can reserve or rent resources for undertaking the required tasks. With the emergence of new computation paradigms such as cloud computing, it has become possible to solve a wider range of problems thanks to their capability to handle and process massive amounts of data. At the same time, given the pressing regulatory requirement to reduce the carbon footprint of our built environment, significant research efforts have recently been directed towards simulation-based building energy optimisation with the overall objective of reducing energy consumption. Energy optimisation in buildings represents a class of problems that requires significant computation resources and is generally time consuming, especially when undertaken with building simulation software such as EnergyPlus. In this paper we present how an HPC-based cloud model can be used efficiently for running and deploying EnergyPlus simulation-based optimisation in order to fulfil a number of objectives related to energy consumption. We describe and evaluate the establishment of such an application-based environment, and take a cost perspective to determine its efficiency over the several cases we explore. This study makes the following contributions: (i) a comprehensive examination of issues relevant to the HPC community, including performance, cost, user perspectives and the range of user activities; (ii) a comparison of two different execution environments, HTCondor and CometCloud, and an assessment of their effectiveness in supporting simulation-based optimisation; and (iii) a detailed performance analysis to locate the limiting factors of these execution environments.

17.
The paper addresses the following question: how efficient is the market system in allocating resources if trade takes place at prices that are not competitive? Even though there are many partial answers to this question, an answer that stands comparison to the rigor by which the first and second welfare theorems are derived is lacking. We first prove a “Folk Theorem” on the generic suboptimality of equilibria at non-competitive prices. The more interesting problem is whether equilibria are constrained optimal, i.e. efficient relative to all allocations that are consistent with prices at which trade takes place. We discuss an optimality notion due to Bénassy, and argue that this notion admits no general conclusions. We then turn to the notion of p-optimality and give a necessary condition, called the separating property, for constrained optimality: each constrained household should be constrained in each constrained market. If the number of commodities is less than or equal to two, the case usually treated in the textbook, then this necessary condition is also sufficient. In that case equilibria are constrained optimal. When there are three or more commodities, two or more constrained households, and two or more constrained markets, this necessary condition is typically not sufficient and equilibria are generically constrained suboptimal.

18.
The information flow in modern financial markets is continuous, but major stock exchanges are open for trading for only a limited number of hours. No consensus has yet emerged on how to deal with overnight returns when calculating and forecasting realized volatility in markets where trading does not take place 24 hours a day. Based on a recently introduced formal testing procedure, we find that for the S&P 500 index, a realized volatility estimator that optimally incorporates overnight information is more accurate in-sample. In contrast, estimators that do not incorporate overnight information are more accurate for individual stocks. We also show that accounting for overnight returns may affect the conclusions drawn in an out-of-sample horserace of forecasting models. Finally, there is considerably less variation in the selection of the best out-of-sample forecasting model when only the most accurate in-sample RV estimators are considered.
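The choice the abstract describes can be sketched as a weighting decision. The function below is a generic weighted-overnight realized-variance estimator, not the paper's specific optimal estimator; the return figures are made up, and in the paper the weight is estimated from data rather than chosen freely:

```python
import numpy as np

def realized_variance(intraday_returns, overnight_return, w=1.0):
    """Realized variance from high-frequency intraday returns, with the
    squared overnight return included at weight w; w = 0 discards
    overnight information entirely."""
    return w * overnight_return ** 2 + float(np.sum(np.square(intraday_returns)))

# Hypothetical day: two intraday returns plus a 3% overnight gap.
rv_no_overnight = realized_variance([0.01, -0.02], overnight_return=0.03, w=0.0)
rv_with_overnight = realized_variance([0.01, -0.02], overnight_return=0.03, w=1.0)
```

The paper's in-sample finding maps onto this knob: for the S&P 500 index a well-chosen positive weight helps, while for individual stocks setting w = 0 tends to be more accurate.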

19.
S. E. Ahmed, Metrika (1998) 47(1), 35–45
The problem of simultaneous asymptotic estimation of the eigenvalues of the covariance matrix of a Wishart matrix is considered under a weighted quadratic loss function. James–Stein-type estimators are obtained which dominate the sample eigenvalues. The relative merits of the proposed estimators are compared with those of the sample eigenvalues using the asymptotic quadratic distributional risk under local alternatives. It is shown that the proposed estimators are asymptotically superior to the sample eigenvalues. Further, it is demonstrated that the James–Stein-type estimator is dominated by its truncated part.

20.
Dynamic virtualised resource allocation is the key to quality-of-service assurance for multi-tier web application services in cloud data centres. In this paper, we develop a self-management architecture for cloud data centres with a virtualisation mechanism for multi-tier web application services. Based on this architecture, we establish a flexible hybrid queueing model to determine the number of virtual machines for each tier of the virtualised application service environment. We then propose a non-linear constrained optimisation problem with restrictions defined in the service level agreement. Furthermore, we develop a heuristic mixed optimisation algorithm to maximise the profit of cloud infrastructure providers while meeting the performance requirements of different clients. Finally, we compare the effectiveness of our dynamic allocation strategy with two other allocation strategies. The simulation results show that the proposed resource allocation method is efficient in improving overall performance and reducing resource energy costs.
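The paper's hybrid queueing model is not specified in the abstract; as a deliberately simplified stand-in, sizing one tier can be sketched with a textbook M/M/c queue and the Erlang C formula, choosing the smallest VM count that keeps the waiting probability below an SLA target (all rates below are hypothetical):

```python
from math import factorial

def erlang_c(c, a):
    """Erlang C formula: probability that an arrival has to wait in an
    M/M/c queue with offered load a = lambda/mu (requires a < c)."""
    num = a ** c / (factorial(c) * (1 - a / c))
    den = sum(a ** k / factorial(k) for k in range(c)) + num
    return num / den

def vms_needed(lam, mu, max_wait_prob):
    """Smallest number of identical VMs keeping the waiting probability
    below the SLA target; a simplified stand-in for the paper's hybrid
    queueing model."""
    a = lam / mu
    c = int(a) + 1                      # smallest stable server count
    while erlang_c(c, a) > max_wait_prob:
        c += 1
    return c

# Hypothetical tier: 8 requests/s arriving, 1 request/s served per VM,
# SLA demands that at most 20% of requests queue.
n_vms = vms_needed(lam=8.0, mu=1.0, max_wait_prob=0.2)
```

The real allocation problem in the paper is richer (multiple tiers, provider profit, energy cost), but the same monotone trade-off between VM count and waiting probability is what its heuristic algorithm searches over.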
