Similar Documents
Found 20 similar documents (search time: 46 ms)
1.
This paper develops Bayesian methodology for estimating long-term trends in the daily maxima of tropospheric ozone. The methods are then applied to study long-term trends in ozone at six monitoring sites in the state of Texas. The methodology controls for the effects of meteorological variables because it is known that variables such as temperature, wind speed and humidity substantially affect the formation of tropospheric ozone. A semiparametric regression model is estimated in which a nonparametric trivariate surface is used to model the relationship between ozone and these meteorological variables because, while it is known that the relationship is a complex nonlinear one, its functional form is unknown. The model also allows for the effects of wind direction and seasonality. The errors are modeled as an autoregression, which is methodologically challenging because the observations are unequally spaced over time. Each function in the model is represented as a linear combination of basis functions located at all of the design points. We also estimate an appropriate data transformation simultaneously with the functions. The functions are estimated nonparametrically by a Bayesian hierarchical model that uses indicator variables to allow a non-zero probability that the coefficient of each basis term is zero. The entire model, including the nonparametric surfaces, data transformation and autoregression for the unequally spaced errors, is estimated using a Markov chain Monte Carlo sampling scheme with a computationally efficient transition kernel for generating the indicator variables. The empirical results indicate that key meteorological variables explain most of the variation in daily ozone maxima through a nonlinear interaction and that their effects are consistent across the six sites. However, the estimated trends vary considerably from site to site, even within the same city.
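The autoregression on unequally spaced errors can be illustrated with a continuous-time AR(1), in which the correlation between two errors decays as φ raised to the power of the time gap. The sketch below is a minimal illustration of that idea; the value of φ, the sampling scheme and the sample size are assumptions for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, size=50))  # unequally spaced observation days
phi = 0.8                                  # assumed one-day autocorrelation

# Continuous-time AR(1): corr(e_i, e_j) = phi ** |t_i - t_j|,
# which reduces to an ordinary AR(1) when the days are equally spaced.
cov = phi ** np.abs(t[:, None] - t[None, :])
errors = rng.multivariate_normal(np.zeros(t.size), cov)
```

On an equally spaced grid this covariance collapses to the familiar AR(1) correlation phi**k at lag k, which is why the construction handles irregular spacing gracefully.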

2.
李美华 (Li Meihua), 《价值工程》 (Value Engineering), 2011, 30(21): 197-198
The paper introduces the functions and features of the simulation software Proteus and, through electronic-circuit examples, describes its application in design-oriented electronics experiments, along with the respective advantages and disadvantages of Proteus simulation experiments and hands-on experiments. It argues that combining virtual and physical experiments is one of the best teaching modes for design-oriented electronics experiments.

3.
In designing an experiment with a single continuous predictor, the design questions are: what is the optimal number of distinct values of the predictor, what are these values, and how many subjects should be assigned to each of these values? In this study, locally D-optimal designs for such experiments with discrete-time event occurrence data are studied using a sequential construction algorithm. Using the Weibull survival function to model the underlying time-to-event function, it is shown that the optimal designs for a linear effect of the predictor have two points that coincide with the design region's boundaries, but the design weights depend strongly on the predictor effect size and its direction, the survival pattern, and the number of time points. For a quadratic effect of the predictor, three or four design points are needed.
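The discrete-time event probabilities implied by a Weibull survival function can be sketched as follows. The proportional-hazards parameterisation, the shape and scale values, and the effect size `beta` are illustrative assumptions, not the paper's exact specification:

```python
import numpy as np

def event_probabilities(x, beta, time_points, shape=1.5, scale=1.0):
    """P(event in each discrete interval) under an assumed Weibull model
    S(t | x) = exp(-(t / scale) ** shape * exp(beta * x))."""
    t = np.asarray(time_points, dtype=float)
    surv = np.exp(-((t / scale) ** shape) * np.exp(beta * x))
    surv = np.concatenate(([1.0], surv))   # S(0) = 1
    return -np.diff(surv)                  # drops in survival = event probs

p = event_probabilities(x=0.5, beta=1.0, time_points=[1, 2, 3, 4])
```

These interval probabilities are the likelihood ingredients that the information matrix, and hence the locally D-optimal weights, would be built from.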

4.
This paper studies minimally-supported D-optimal designs for polynomial regression models with logarithmically concave (log-concave) weight functions. Many commonly used weight functions in the design literature are log-concave; for example, the weight functions in Theorem 2.3.2 of Fedorov (Theory of Optimal Experiments, 1972), such as exp(−x²), are all log-concave. We show that the determinant of the information matrix of a minimally-supported design is a log-concave function of the ordered support points and that the D-optimal design is unique. Therefore, numerically D-optimal designs can be constructed efficiently by a cyclic exchange algorithm.

5.
Copulas are distributions with uniform marginals. Non‐parametric copula estimates may violate the uniformity condition in finite samples. We look at whether it is possible to obtain valid piecewise linear copula densities by triangulation. The copula property imposes strict constraints on design points, making an equi‐spaced grid a natural starting point. However, the mixed‐integer nature of the problem makes a pure triangulation approach impractical on fine grids. As an alternative, we study ways of approximating copula densities with triangular functions that guarantee the estimator is a valid copula density. The family of resulting estimators can be viewed as a non‐parametric MLE of B‐spline coefficients on possibly non‐equally spaced grids under simple linear constraints. As such, it can be easily solved using standard convex optimization tools and allows for a degree of localization. A simulation study shows an attractive performance of the estimator in small samples and compares it with some of the leading alternatives. We demonstrate the empirical relevance of our approach using three applications. In the first application, we investigate how the body mass index of children depends on that of parents. In the second application, we construct a bivariate copula underlying the Gibson paradox from macroeconomics. In the third application, we show the benefit of using our approach in testing the null of independence against the alternative of an arbitrary dependence pattern.
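A piecewise-constant (histogram) simplification shows how the uniform-marginal constraint can be enforced on an equi-spaced grid. The grid size and the alternating-rescaling (Sinkhorn-style) scheme below are illustrative assumptions, much simpler than the paper's constrained B-spline MLE:

```python
import numpy as np

n = 4                                    # 4 x 4 equi-spaced grid on [0, 1]^2
rng = np.random.default_rng(1)
c = rng.uniform(0.5, 1.5, size=(n, n))   # candidate cell densities

# Alternately rescale rows and columns until every row and column of cell
# densities averages to 1, i.e. both marginals are uniform.
for _ in range(200):
    c /= c.mean(axis=1, keepdims=True)
    c /= c.mean(axis=0, keepdims=True)

total_mass = c.mean()                    # integral over [0,1]^2 (cell area 1/n^2)
```

The paper's piecewise-linear version imposes analogous linear constraints on spline coefficients, which is why the problem stays solvable with standard convex optimisation tools.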

6.
One frequent application of microarray experiments is monitoring gene activity in a cell during the cell cycle or cell division. High-throughput gene expression time series data are produced by such experiments. A computational and statistical challenge in analyzing such time course data from cell cycle microarray experiments is discovering genes that are statistically significantly periodically expressed during the cell cycle. The challenge arises from the large number of genes measured simultaneously, the moderate to small number of measurements per gene taken at different time points, and the high levels of non-normal random noise inherent in the data. Computational and statistical approaches to the discovery and validation of periodic patterns of gene expression are, however, very limited. A good method of analysis should be able to search for significant periodic genes with a controlled family-wise error (FWE) rate or a controlled false discovery rate (FDR), or variations of FDR, when all gene expression profiles are compared simultaneously. This review paper briefly summarizes the methods currently used for finding periodic genes, and surveys two methods in detail. The first is a novel statistical inference approach, the C & G procedure, which can effectively detect statistically significantly periodically expressed genes when expression is measured at evenly spaced time points. The second is the Lomb–Scargle periodogram analysis, which can discover periodic genes when the profiles are not measured at evenly spaced time points or when the profiles contain missing values. The ultimate goal of this review is to give an exposition of the two surveyed methods for researchers in related fields.
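The Lomb–Scargle periodogram is available in SciPy. The sketch below recovers an assumed period from an unevenly sampled noisy sinusoid; the sampling scheme, noise level and period are illustrative, not from any real expression profile:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 4, size=60))    # uneven sampling times
y = np.sin(2 * np.pi * t) + 0.3 * rng.normal(size=t.size)  # true period = 1
y -= y.mean()                              # remove the mean before analysis

freqs = np.linspace(0.5, 20.0, 1000)       # angular frequencies to scan
power = lombscargle(t, y, freqs)
peak = freqs[np.argmax(power)]             # expect a peak near 2*pi
```

Note that `scipy.signal.lombscargle` works in angular frequency, so a period of 1 corresponds to a peak near 2π ≈ 6.28.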

7.
Many industrial and engineering applications are built on the basis of differential equations. In some cases, parameters of these equations are not known and are estimated from measurements, leading to an inverse problem. Unlike many other papers, we suggest constructing new designs adaptively, 'on the go', using the A‐optimality criterion. This approach is demonstrated on the determination of optimal locations of measurements and temperature sensors in several engineering applications: (1) determination of the optimal location to measure the height of a hanging wire in order to estimate the sagging parameter with minimum variance (a toy example), (2) adaptive determination of optimal locations of temperature sensors in a one‐dimensional inverse heat transfer problem and (3) adaptive design in the framework of a one‐dimensional diffusion problem whose solution is found numerically using the finite difference approach. In all these problems, statistical criteria for parameter identification and optimal design of experiments are applied. Statistical simulations confirm that estimates derived from the adaptive optimal design converge to the true parameter values with minimum sum of variances as the number of measurements increases. We deliberately chose technically uncomplicated industrial problems to transparently introduce the principal ideas of statistical adaptive design.
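For a simple linear model the A-criterion is the trace of the inverse information matrix, and the adaptive 'on the go' idea amounts to picking the next measurement location that minimises it. The model, candidate grid and current points below are illustrative assumptions, far simpler than the paper's inverse problems:

```python
import numpy as np

def a_criterion(points):
    # A-optimality: trace of the inverse information matrix for (1, x) regressors
    X = np.column_stack([np.ones(len(points)), points])
    return np.trace(np.linalg.inv(X.T @ X))

current = [0.0, 1.0]                        # measurements taken so far
candidates = np.linspace(0.0, 1.0, 11)      # admissible sensor locations
scores = [a_criterion(current + [c]) for c in candidates]
next_point = candidates[int(np.argmin(scores))]
```

Here the next point lands at x = 0 because the A-criterion sums the variances of intercept and slope, and the intercept is best pinned down by measurements at the origin; with the paper's nonlinear sensitivities the criterion would instead use the Jacobian of the numerical solution.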

8.
Shuangzhe Liu, Heinz Neudecker. Metrika, 1997, 45(1): 53-66
Extending Scheffé's simplex-centroid design for experiments with mixtures, we introduce a weighted simplex-centroid design for a class of mixture models. Becker's homogeneous functions of degree one belong to this class. By applying optimal design theory, we obtain A-, D- and I-optimal allocations of observations for Becker's models.

9.
张志义 (Zhang Zhiyi), 《价值工程》 (Value Engineering), 2010, 29(12): 79
Structural system selection is closely tied to a building's intended function and to construction cost: sometimes the choice of structural system is constrained by the building's function, and sometimes it determines the cost. When the structural system must be selected under the owner's functional requirements, increased construction cost can be an unavoidable trade-off; optimal structural system selection and design should therefore be a required lesson for every structural designer.

10.
李元生 (Li Yuansheng), 《价值工程》 (Value Engineering), 2012, 31(21): 203-204
As required, 40 new first-order GPS control points are to be established along the river reach below Yichang, with usable existing points and nearby national high-order points tied into the new network. Second-order control, 350 points in total, is densified under the first-order network, allowing for the needs of conventional survey methods and connected and transformed through existing usable control points; for vertical control, 420 third-order levelling points are established along the middle and lower Yangtze, forming an independent local first-order control system. On completion of the first phase, a GPS control network for waterway surveying will be in place, and the accuracy of existing usable control points will have been checked and improved, providing high-precision, easy-to-use control data for waterway development and construction and meeting the needs of waterway survey work.

11.
A minimum-cost zoned effluent charge program for the control of air pollution is considered, with attention to the general situation in which polluters' cost functions for treating pollutant are unknown to the policy authority, and an iterative procedure by which the authority can attain a set of optimal charges is presented. The algorithm alternates between steps that estimate the unknown treatment cost functions by observing polluter behavior and steps that revise charges based on the estimated cost functions. The latter steps involve a newly developed solution procedure for the zoned charge programming problem. Simulated use of the algorithm indicates that the proposed iterative charge revision procedure can effectively provide the optimal scheme of zoned effluent charges.

13.
If one wants to test the hypothesis as to whether a set of observations comes from a completely specified continuous distribution or not, one can use the Kuiper test. But if one or more parameters have to be estimated, the standard tables for the Kuiper test are no longer valid. This paper presents a table to use with the Kuiper statistic for testing whether a sample comes from a normal distribution when the mean and variance are to be estimated from the sample. The critical points are obtained by means of Monte-Carlo calculation; the power of the test is estimated by simulation; and the results of the powers for several alternative distributions are compared with the estimated powers of the Kolmogorov-Smirnov test.
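The Monte-Carlo tabulation can be sketched as follows: simulate normal samples, estimate mean and variance from each sample, compute the Kuiper statistic against the fitted normal, and take empirical quantiles. The sample size and replication count are illustrative and far smaller than what accurate tables require:

```python
import numpy as np
from scipy.stats import norm

def kuiper_statistic(x):
    # Kuiper V = D+ + D-, against a normal CDF with estimated mean/variance
    x = np.sort(np.asarray(x))
    n = x.size
    cdf = norm.cdf(x, loc=x.mean(), scale=x.std(ddof=1))
    d_plus = np.max(np.arange(1, n + 1) / n - cdf)
    d_minus = np.max(cdf - np.arange(0, n) / n)
    return d_plus + d_minus

rng = np.random.default_rng(3)
n, reps = 50, 2000
stats = np.array([kuiper_statistic(rng.normal(size=n)) for _ in range(reps)])
crit_95 = np.quantile(stats, 0.95)   # Monte-Carlo critical point, alpha = 0.05
```

Because the parameters are estimated from the same sample, the null distribution of V is tighter than in the fully specified case, which is exactly why the standard Kuiper tables do not apply.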

14.
This paper studies a divisionalized firm with sequential transfers in which central management wants to motivate two division managers who receive pre-decision information. Central management can contract only on the observables price, cost and quantity. Taking the optimal compensation schemes as a benchmark, the paper asks whether using transfer prices as substitutes for price and cost, respectively, can replicate the optimal solution, that is, whether using an aggregate measure comes at a loss. The results depend on the design constraints: (i) single or 'dual' transfer prices and (ii) simultaneous design of the reward functions or exogenously given reward functions. Only when central management is restricted to given reward functions and wants to use the same single transfer price for both divisions is there a loss relative to the benchmark solution. In the other cases there is generally enough latitude to design the available functions to mimic the benchmark. The paper goes on to discuss special cases: first, it finds conditions under which purely cost-based transfer prices are optimal, and second, it derives explicit solutions for given linear compensation schemes over divisional book profits.

15.
The purpose of this paper is to investigate the role of the process approach in implementing an optimisation programme. It is proposed that applying a process model of a company overcomes functional boundaries and, consequently, the sub-optimisation of logistics system performance. A process model of the internal logistics system of a wholesale trading company, based on the Supply Chain Operations Reference (SCOR) model, has been developed. Relations between business functions, processes and performance indicators (metrics) have been analysed. An optimisation model has been developed, and a comparative analysis of the possible results of optimising processes versus functions has been conducted. The results demonstrate that optimising functions yields a sub-optimal solution, caused by functional boundaries, whereas optimising processes yields an optimal one. The research provides a rationale for implementing the process approach in order to make optimal decisions about logistics activities, together with a technique for the practical implementation of an optimisation programme.

16.
In industry sectors where market prices for goods and services are unavailable, it is common to use estimated output and input distance functions to estimate rates of productivity change. It is also possible, but less common, to use estimated distance functions to estimate the normalised support (or efficient) prices of individual inputs and outputs. A problem that arises in the econometric estimation of these functions is that more than one variable in the estimating equation may be endogenous. In such cases, maximum likelihood estimation can lead to biased and inconsistent parameter estimates. To solve the problem, we use linear programming to construct a quantity index. The distance function is then written in the form of a conventional stochastic frontier model where the explanatory variables are unambiguously exogenous. We use this approach to estimate productivity indexes, measures of environmental change, levels of efficiency, and support prices for a sample of Australian public hospitals.

17.
The aim of this paper is to derive methodology for designing 'time to event' experiments. In comparison with estimation, the design aspects of 'time to event' experiments have received relatively little attention. We show that gains in the efficiency of estimators of parameters and in the use of experimental material can be made using optimal design theory. The types of models considered include classical failure data and accelerated testing situations, as well as frailty models, each involving covariates which influence the outcome. The objective is to construct an optimal design based on the values of the covariates and the associated model, or indeed a candidate set of models. We consider D-optimality and create compound optimality criteria to derive optimal designs for multi-objective situations which, for example, focus on the number of failures as well as the estimation of parameters. The approach is motivated and demonstrated using common failure/survival models, for example, the Weibull distribution, product assessment and frailty models.

18.
Research on developing efficient methodologies for constructing optimal experimental designs has been very active in the last decade. Uniform design is one of the most popular approaches, carried out by filling experimental points in a deterministically uniform fashion. Applications of coding theory in experimental design are interesting and promising. Quaternary codes and their binary Gray map images have attracted much attention from design-of-experiments researchers in recent years. The present paper explores new results for constructing uniform designs based on quaternary codes and their binary Gray map images. It studies the optimality of quaternary designs and their two and three binary Gray map image designs in terms of uniformity criteria measured by the Lee, wrap-around, symmetric, centered and mixture discrepancies. Strong relationships between quaternary designs and their two and three binary Gray map image designs are obtained, which can be used to efficiently construct two-level designs from four-level designs and vice versa. The significance of this work is evaluated by comparing the results to the existing literature.
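The binary Gray map underlying these constructions sends each Z4 symbol to a pair of bits (0→00, 1→01, 2→11, 3→10), so each four-level factor becomes two two-level factors. A minimal sketch, where the toy design itself is an assumption for illustration:

```python
# Standard Gray map on Z4: 0 -> 00, 1 -> 01, 2 -> 11, 3 -> 10
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def gray_image(design):
    # Each four-level column of the design becomes two two-level columns
    return [tuple(b for q in row for b in GRAY[q]) for row in design]

z4_design = [(0, 1), (1, 2), (2, 3), (3, 0)]   # toy 4-run, 2-factor Z4 design
binary_design = gray_image(z4_design)
```

Adjacent Z4 symbols differ in exactly one bit under this map, which is the property that ties the Lee discrepancy of the quaternary design to the uniformity of its binary image.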

19.
Thermal power generation has long been China's main mode of electricity generation, and as coal resources become increasingly scarce, combustion optimisation and nitrogen oxide control have become major research topics for large power stations. Nitrogen oxides (NOx), among the principal emissions of large utility boilers, are highly polluting. At present, large utility boilers typically combine low-NOx combustion with selective catalytic reduction for joint denitrification; however, low-NOx combustion can sacrifice boiler economy under certain operating conditions, so combustion optimisation should be studied.

20.
Adalbert Wilhelm. Metrika, 1995, 42(1): 365-377
The calculus of concave functions is a widely accepted tool for optimum experimental design problems. However, as a function of the support points and the weights, the design problem fails to be concave. In this paper we make use of generalized gradients in the sense of Rockafellar (1980) and Clarke (1983). A chain rule is presented for the subdifferential of the composition of an information function with the moment matrix mapping. Lipschitz continuity of the global design function is proved and conditions for strict differentiability are given.
