Similar Articles
20 similar articles found (search time: 15 ms)
1.
In designing an experiment with a single continuous predictor, the questions are what the optimal number of the predictor's values is, what those values are, and how many subjects should be assigned to each. In this study, locally D-optimal designs for such experiments with discrete-time event occurrence data are constructed with a sequential algorithm. Using the Weibull survival function to model the underlying time-to-event process, it is shown that the optimal designs for a linear effect of the predictor have two points coinciding with the boundaries of the design region, but that the design weights depend strongly on the predictor's effect size and direction, the survival pattern, and the number of time points. For a quadratic effect of the predictor, three or four design points are needed.
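The two-point structure described above can be illustrated numerically. The sketch below is not the paper's sequential algorithm: it assumes a complementary log-log discrete-time hazard derived from a Weibull baseline, illustrative parameter values, and a brute-force grid over the weight of a two-point design on the boundaries of the design region [0, 1].

```python
import numpy as np

t = np.arange(0, 6, dtype=float)          # interval boundaries 0, 1, ..., 5
theta = (-2.0, 1.0, 1.5)                  # (b0, b1, gamma), illustrative values

def info(x, theta):
    """Fisher information for (b0, b1, gamma) from one subject at predictor x."""
    b0, b1, g = theta
    eta = b0 + b1 * x
    tg = t ** g
    dtg = np.where(t > 0, tg * np.log(np.where(t > 0, t, 1.0)), 0.0)
    delta = tg[1:] - tg[:-1]              # Weibull cumulative-hazard increments
    ddelta = dtg[1:] - dtg[:-1]           # their derivatives w.r.t. gamma
    H = np.exp(eta) * delta
    h = 1.0 - np.exp(-H)                  # discrete-time interval hazards
    risk = np.concatenate(([1.0], np.cumprod(1.0 - h)[:-1]))  # P(still at risk)
    # gradient of h_j w.r.t. (b0, b1, gamma), one row per interval
    grad = np.exp(-H)[:, None] * np.column_stack([H, H * x, np.exp(eta) * ddelta])
    w = risk / (h * (1.0 - h))
    return (grad.T * w) @ grad            # sum_j w_j * grad_j grad_j^T

I0, I1 = info(0.0, theta), info(1.0, theta)
ws = np.linspace(0.01, 0.99, 99)          # weight on the lower boundary point
dets = np.array([np.linalg.det(w * I0 + (1 - w) * I1) for w in ws])
w_opt = ws[np.argmax(dets)]
```

Because the shape parameter is estimated jointly with the predictor effect, the D-optimal weight is generally unequal across the two boundary points, in line with the abstract's observation that the weights depend on the effect size and the survival pattern.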

2.
Recent survey literature shows an increasing interest in survey designs that adapt data collection to characteristics of the survey target population. Given a specified quality objective function, these designs attempt to find an optimal balance between quality and costs. Finding the optimal balance may not be straightforward, as the corresponding optimisation problems are often highly non-linear and non-convex. In this paper, we discuss how to choose strata in such designs and how to allocate these strata in a sequential design with two phases. We use partial R-indicators to build profiles of the data units indicating where more or less attention is required in the data collection. In allocating cases, we look at two extremes: surveys that are run only once, or infrequently, and surveys that are run continuously. We demonstrate the impact of the sample size in a simulation study and provide an application to a real survey, the Dutch Crime Victimisation Survey.

3.
This study examines financial analyst coverage for U.S. firms following an increase in foreign product market competition. To capture exogenous shocks to domestic firms' competitive environments, we exploit a quasi-natural experiment from large import tariff reductions over the 1984 to 2005 period in the manufacturing sector. Using data for the years before and after large tariff reductions, our difference-in-differences analysis shows evidence of a significant decrease in analyst coverage for incumbent U.S. firms when they face greater entry threat from foreign competitors. We also find that analysts with less firm-specific experience and less accurate prior-period forecasts are more likely to stop following the domestic firm when foreign competition intensifies. Overall, the findings suggest that foreign product market competition from global trade liberalization is an important determinant of financial analysts' coverage decisions.

4.
Many industrial and engineering applications are built on the basis of differential equations. In some cases, the parameters of these equations are not known and are estimated from measurements, leading to an inverse problem. Unlike many other papers, we suggest constructing new designs adaptively, 'on the go', using the A-optimality criterion. This approach is demonstrated on the determination of optimal locations of measurements and temperature sensors in several engineering applications: (1) determination of the optimal location to measure the height of a hanging wire in order to estimate the sagging parameter with minimum variance (a toy example), (2) adaptive determination of optimal locations of temperature sensors in a one-dimensional inverse heat transfer problem and (3) adaptive design in the framework of a one-dimensional diffusion problem when the solution is found numerically using the finite difference approach. In all these problems, statistical criteria for parameter identification and optimal design of experiments are applied. Statistical simulations confirm that estimates derived from the adaptive optimal design converge to the true parameter values with minimum sum of variances as the number of measurements increases. We deliberately chose technically uncomplicated industrial problems to transparently introduce the principal ideas of statistical adaptive design.

5.
Estimation of nitrogen response functions has a long history, and yet there is still considerable uncertainty about how much nitrogen to apply to agricultural crops. Nitrogen recommendations are usually based on estimated agronomic production functions that typically use data from designed experiments; nitrogen experiments, for example, often use equally spaced levels of nitrogen. Past agronomic research is mostly supportive of plateau-type functional forms. The question addressed here is: if one is willing to accept a specific plateau-type functional form as the true model, which experimental design is best for estimating the production function? The objective is to minimize the variance of the estimated expected profit-maximizing level of input. Of particular interest is how well the commonly used equally spaced design performs in comparison with the optimal design. Mixed effects models for winter wheat (Triticum aestivum L.) yield are estimated for both Mitscherlich and linear plateau functions. With three design points, one should be high enough to be on the plateau and one should be at zero. The choice of the middle design point makes little difference over a wide range of values; the optimal middle design point is lower for the Mitscherlich functional form than for the plateau function. Equally spaced designs with more design points have similar precision, and thus the loss from using a nonoptimal experimental design is small.
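The trade-off between the two functional forms can be made concrete with a small worked example. The sketch below assumes illustrative prices and curve parameters (not the paper's winter wheat estimates) and computes the profit-maximising nitrogen rate under a Mitscherlich and a linear plateau response.

```python
import numpy as np

p_out, p_n = 150.0, 0.5          # crop price ($/t) and nitrogen price ($/kg), assumed

# Mitscherlich response: y = a * (1 - exp(-c * (N + b)))
a, b, c = 8.0, 20.0, 0.02        # illustrative curve parameters
# first-order condition p_out * a * c * exp(-c*(N+b)) = p_n
n_mit = -b + np.log(p_out * a * c / p_n) / c

def profit_mit(n):
    return p_out * a * (1 - np.exp(-c * (n + b))) - p_n * n

# Linear plateau: y = min(b0 + b1*N, ymax); the optimum sits at the kink
b0, b1, ymax = 4.0, 0.03, 8.0    # illustrative parameters
n_plat = (ymax - b0) / b1 if p_out * b1 > p_n else 0.0
```

The Mitscherlich optimum lies on the smoothly flattening part of the curve, while the plateau optimum sits exactly at the kink, which is why the middle design point matters so little for the plateau model.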

6.
This paper uses semidefinite programming (SDP) to construct Bayesian optimal designs for nonlinear regression models. The setup here extends the formulation of the optimal design problem as an SDP problem from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare with results in the literature. Additionally, we investigate how the optimal design is affected by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and a two-variable generalised linear model with a gamma-distributed response are discussed, and some limitations of our approach are noted.
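The role of the quadrature in the Bayesian criterion can be sketched in a few lines. The code below is not the paper's SDP formulation: it swaps SDP for a simple multiplicative weight-update algorithm on a discretised design space, uses a plain logistic model rather than the power-logistic, and assumes an illustrative normal prior on the slope with the intercept fixed at zero.

```python
import numpy as np

xs = np.linspace(-3.0, 3.0, 61)                  # discretised design space
nodes, qw = np.polynomial.hermite.hermgauss(7)   # Gauss-Hermite rule
b1s = 1.5 + np.sqrt(2.0) * 0.4 * nodes           # prior b1 ~ N(1.5, 0.4^2), assumed
qw = qw / np.sqrt(np.pi)                         # normalised quadrature weights
F = np.column_stack([np.ones_like(xs), xs])      # regressors f(x) = (1, x)

def crit_and_dirs(w):
    """Bayesian log-det criterion and its directional derivatives d(x)."""
    crit, dirs = 0.0, np.zeros_like(xs)
    for b1, q in zip(b1s, qw):
        p = 1.0 / (1.0 + np.exp(-b1 * xs))
        lam = p * (1.0 - p)                      # logistic information weight
        M = (F * (w * lam)[:, None]).T @ F       # information matrix of design w
        crit += q * np.log(np.linalg.det(M))
        Minv = np.linalg.inv(M)
        dirs += q * lam * np.einsum('ij,jk,ik->i', F, Minv, F)
    return crit, dirs

w = np.full(xs.size, 1.0 / xs.size)              # start from the uniform design
crit0, _ = crit_and_dirs(w)
for _ in range(300):
    _, d = crit_and_dirs(w)
    w *= d / 2.0                                 # multiplicative update, p = 2 params
    w /= w.sum()
crit1, _ = crit_and_dirs(w)
support = xs[w > 0.01]                           # design points carrying weight
```

The multiplicative update is shown only because it needs nothing beyond NumPy; for larger models, or when the design must satisfy side constraints, the SDP route described in the abstract scales better.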

7.
Without accounting for sensitive items in sample surveys, sampled units may not respond (nonignorable nonresponse) or may respond untruthfully. Several survey designs address this problem, and we review some of them. In our study, we have binary data from clusters within small areas, obtained from a version of the unrelated-question design, and the sensitive proportion is of interest for each area. A hierarchical Bayesian model is used to capture the variation in the observed binomial counts from the clusters within the small areas and to estimate the sensitive proportions for all areas. Both our example on college cheating and a simulation study show significant reductions in the posterior standard deviations of the sensitive proportions under the small-area model as compared with an analogous individual-area model. The simulation study also demonstrates that the estimates under the small-area model are closer to the truth than the corresponding estimates under the individual-area model. Finally, for small areas, we discuss many extensions to accommodate covariates, finite population sampling, multiple sensitive items and optional designs.
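For readers unfamiliar with the unrelated-question design, the moment estimator behind it is short enough to show. The sketch below simulates one sample with illustrative values (it is not the paper's hierarchical small-area model): with probability `p_dev` a respondent answers the sensitive question, otherwise an innocuous question with known prevalence.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p_dev, pi_u, pi_s_true = 50_000, 0.7, 0.5, 0.20   # illustrative values

answers_sensitive = rng.random(n) < p_dev
yes = np.where(answers_sensitive,
               rng.random(n) < pi_s_true,            # truthful sensitive answer
               rng.random(n) < pi_u)                 # innocuous answer
lam_hat = yes.mean()                                 # P(yes) = p*pi_s + (1-p)*pi_u
pi_s_hat = (lam_hat - (1 - p_dev) * pi_u) / p_dev    # moment estimator
```

Because interviewers never learn which question was answered, respondents can answer truthfully; the hierarchical model in the abstract then pools such cluster-level counts across small areas.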

8.
Previous design studies of unpaced assembly lines with stochastic task times indicate that an unbalanced allocation of task times results in optimal output rates. In this article, we present a comprehensive review of the previous literature on this topic and discuss the results of simulation experiments that test the bowl distribution for unbalancing unpaced lines. The simulation experiment was designed to test the bowl distribution in more realistic environments than previously tested and illustrates that a balanced line configuration is as good as or better than an unbalanced line configuration when task times are modeled with more typical values of variance.

Stochastic unpaced assembly line research employs both simulation and analytical approaches to test the allocation of buffer capacity and task times to work stations. Analytical models are utilized to investigate simple line designs with exponential or Erlang task time distributions; simulation is used for longer lines and for normal task time distributions. From the review of the previous research using both approaches, we note five major findings: 1) unbalancing task time allocation is optimal when task time variation is large; 2) unbalanced allocation of buffer storage capacity improves line output rate when task time variation is large; 3) the output rate of an unpaced line decreases as the number of sequential workstations increases; 4) output rate increases as more buffer storage capacity is available; and 5) output rate decreases as task time variation increases.

Most of the previous research on unpaced lines investigated lines with few workstations and large task time variation. Empirical research by Dudley (6) suggests that variation of task times in practice is much less than the variations employed in previous unpaced line studies. We present the results from simulation experiments that model longer unpaced lines with lower levels of task time variance of the magnitude that is likely to occur in practice.

The results of our simulation experiments verify the benefits of using the bowl distribution for task time allocation when line lengths are short and task times experience large variance. However, when line lengths are extended or task time variation is reduced, the use of the bowl distribution for unbalancing the line degrades the line's efficiency. In these situations, the optimal task time allocation is a balanced line.

Two important implications for managers follow from the results of our experiments: 1) unpaced line output rate is relatively insensitive to moderate deviations from optimal task time allocations when buffer storage is limited; and 2) perfectly balanced line designs are optimal for most cases in practice.
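The kind of experiment reviewed above can be reproduced in miniature. The sketch below simulates an unpaced serial line with manufacturing blocking via a standard departure-time recursion (not the article's exact simulation model), comparing a balanced allocation with a mild bowl allocation of equal total work content at a modest task-time CV; all numbers are illustrative.

```python
import numpy as np

def throughput(means, cv, n_jobs, buf, rng):
    """Jobs per time unit for a serial line with buffer capacity `buf` between stations."""
    m = np.asarray(means, dtype=float)
    S = m.size
    s = np.maximum(rng.normal(m, cv * m, size=(n_jobs, S)), 0.01)  # truncated normal times
    D = np.zeros((n_jobs, S))                    # departure times
    for i in range(n_jobs):
        for j in range(S):
            arrive = D[i, j - 1] if j > 0 else 0.0      # job reaches station j
            free = D[i - 1, j] if i > 0 else 0.0        # station released by prior job
            done = max(arrive, free) + s[i, j]
            if j < S - 1 and i - buf - 1 >= 0:          # blocked until downstream space
                done = max(done, D[i - buf - 1, j + 1])
            D[i, j] = done
    return n_jobs / D[-1, -1]

rng = np.random.default_rng(3)
rate_bal = throughput([1.0] * 5, 0.15, 3000, 2, rng)                     # balanced line
rate_bowl = throughput([1.05, 0.975, 0.95, 0.975, 1.05], 0.15, 3000, 2, rng)  # bowl
```

At this low level of variation the two configurations perform almost identically, which is the article's central point about task-time variance of realistic magnitude.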

9.
This article is primarily concerned with exploring the relationships between organizational climate and characteristics of organizational environments. Environmental characteristics include dependencies, competition and uncertainty. In addition, the relationship of climate and environments with dimensions of organizational structure and size are examined. Using data from 15 industrial organizations in Britain, the results have shown that different environmental characteristics have different associations with organizational climate. Also, the relationships between organizational environments and climate are not similar to those found between environments and structure. It is suggested that the creation of appropriate climates and structural design as responses to environmental pressures may be considered as complementary strategies in an attempt to maintain administrative control. The results, therefore, provide support for the suggestion that, in order to improve our understanding of the dynamics of organizational climate, characteristics of organizational environments should be incorporated into future research designs.

10.
The collection of longitudinal data over the full time scope of the model is often an appropriate way to estimate a dynamic state space model with time-varying parameters. Nevertheless, in many situations it is possible and preferable to collect and combine data from independent groups of subjects, each covering a shorter interval than the full dynamic model. Several quasi-longitudinal designs are discussed: overlapping designs (the overlapping cohort design OCD and the overlapping samples design OSD) as well as nonoverlapping designs, up to the exclusively cross-sectional design. The use of the structural equation modeling (SEM) program Mx and continuous time state space modeling is recommended. Finally, several of the quasi-longitudinal designs are empirically evaluated by comparing their results with those of the full longitudinal design.

11.
Non-response is a common source of error in many surveys. Because surveys are often costly instruments, quality-cost trade-offs play a continuing role in the design and analysis of surveys. Advances in telephone, computer, and Internet technology have had, and continue to have, considerable impact on the design of surveys. Recently, a strong focus on methods for survey data collection monitoring and tailoring has emerged as a new paradigm to efficiently reduce non-response error; paradata and adaptive survey designs are key words in these developments. Prerequisites to evaluating, comparing, monitoring, and improving the quality of survey response are a conceptual framework for representative survey response, indicators to measure deviations from it, and indicators to identify subpopulations that need increased effort. In this paper, we present an overview of representativeness indicators, or R-indicators, that are fit for these purposes. We give several examples and provide guidelines for their use in practice.
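The overall R-indicator referenced above has a simple closed form, R = 1 - 2*S(rho), where S(rho) is the standard deviation of the response propensities. The sketch below assumes the propensities are known for two illustrative subpopulations (in practice they are estimated, e.g. by logistic regression on auxiliary variables) and also computes the between-stratum spread used by unconditional partial R-indicators.

```python
import numpy as np

strata = np.repeat([0, 1], [400, 600])          # two subpopulations, assumed sizes
rho = np.where(strata == 0, 0.8, 0.5)           # assumed response propensities

R = 1.0 - 2.0 * rho.std()                       # overall representativeness indicator

# between-stratum component: spread of stratum propensity means around the overall mean
means = np.array([rho[strata == k].mean() for k in (0, 1)])
shares = np.array([(strata == k).mean() for k in (0, 1)])
P_between = np.sqrt(np.sum(shares * (means - rho.mean()) ** 2))
```

R = 1 corresponds to perfectly uniform propensities (fully representative response); here the between-stratum component accounts for all of the propensity variation because propensities are constant within strata.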

12.
In this paper, we consider balanced hierarchical data designs for both one-sample and two-sample (two-treatment) location problems. The variances of the relevant estimates and the powers of the tests strongly depend on the data structure through the variance components at each hierarchical level. Also, the costs of a design may depend on the number of units at different hierarchy levels, and these costs may be different for the two treatments. Finally, the number of units at different levels may be restricted by several constraints. Knowledge of the variance components, the costs at each level, and the constraints allow us to find the optimal design. Solving such problems often requires advanced optimization tools and techniques, which we briefly explain in the paper. We develop new analytical tools for sample size calculations and cost optimization and apply our method to a data set on Baltic herring.
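For the one-sample two-level case, the cost-optimal allocation has a well-known closed form that the sketch below verifies by brute force: with Var(mean) = s_b^2/k + s_w^2/(k m) at cost k(c1 + m c2), the optimal number of subunits per unit is m* = sqrt((c1/c2)(s_w^2/s_b^2)). Variance components, unit costs, and the budget are illustrative assumptions, not the Baltic herring values.

```python
import numpy as np

c1, c2 = 100.0, 5.0          # assumed cost per primary unit / per subunit
s_b2, s_w2 = 1.0, 4.0        # assumed between- and within-unit variance components
budget = 2000.0

m_star = np.sqrt((c1 / c2) * (s_w2 / s_b2))   # closed-form optimum

def var_mean(m):
    k = budget / (c1 + m * c2)                # primary units affordable at this m
    return (s_b2 + s_w2 / m) / k              # variance of the overall mean

m_grid = np.arange(1, 31)                     # brute-force check over integer m
m_best = m_grid[np.argmin([var_mean(m) for m in m_grid])]
```

The brute-force integer optimum lands next to the continuous m*, which illustrates why knowledge of variance components and level-specific costs is enough to pin down the design.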

13.
Beiyan Ou, Julie Zhou, Metrika (2009) 69(1): 45–54
Experimental designs for field experiments are useful in planning agricultural experiments, environmental studies, etc. Optimal designs depend on the spatial correlation structures of field plots. Without knowing the correlation structures exactly in practice, we can study robust designs. Various neighborhoods of covariance matrices are introduced and discussed. Minimax robust design criteria are proposed, and useful results are derived. The generalized least squares estimator is often more efficient than the least squares estimator if the spatial correlation structure belongs to a small neighborhood of a covariance matrix. Examples are given to compare robust designs with optimal designs. The work was partially supported by research grants from the Natural Science and Engineering Research Council of Canada.

14.
The Effect of Using Household as a Sampling Unit
The effect of sampling people through households is considered. Results on design effects for two-stage surveys are reviewed and applied to give design effects of household samples. The main factors that determine the design effect are identified for the designs in which one person, or all people, are selected from each selected household. Within-household correlation is one factor; we show that the relationships between household size and the mean and variance within households are also important factors. Census and survey data are used to empirically compare the design effects for a range of estimators, variables and designs.
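The household-size effects mentioned above enter through a small formula. The sketch below uses the standard cluster design effect deff = 1 + (b - 1)*rho for take-all household designs, with the size-weighted mean household size b* = E[b^2]/E[b] replacing the plain mean when household sizes vary; the numbers are illustrative, not the paper's census data.

```python
import numpy as np

sizes = np.array([1, 2, 3, 4])               # assumed household sizes, equal frequencies
rho = 0.2                                    # assumed within-household correlation

b_bar = sizes.mean()                         # plain mean household size
b_star = (sizes ** 2).sum() / sizes.sum()    # size-weighted mean household size
deff_naive = 1 + (b_bar - 1) * rho           # ignores household-size variation
deff = 1 + (b_star - 1) * rho                # accounts for it
```

Size variation pushes the effective cluster size above the plain mean, so ignoring the size distribution understates the design effect, which is the relationship the abstract highlights.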

15.
Classical and modern organization designs are reviewed and evaluated in terms of their capabilities for handling radical or major innovations. The choice of an effective organization design is shown to be related to the nature of the technological and market environments. Only Type IV organization designs, the modern-integrative organization designs, are shown to have the necessary qualities for handling all the phases in the life cycle of a major innovation. Though difficult to implement and maintain, Type IV organization designs are shown to be a potent tool for those modern managers who fully understand how to use them.

16.
The use of teams that incorporate autonomy in their designs continues to be an important element of many organizations. However, prior research has emphasized projects with mostly routine tasks and has assumed that autonomy resides primarily with a team leader. We investigate how two aspects of team autonomy are related to teamwork quality, a multifaceted indicator of team collaboration (Hoegl & Gemuenden, 2001). Specifically, we hypothesize that team-external influence over operational project decisions is negatively related to teamwork quality, while team-internal equality of influence over project decisions is positively related to teamwork quality. Testing our hypotheses on responses from 430 team members and team leaders pertaining to 145 software development teams, results support both predictions. Acknowledging the possible benefits of certain types of external influence (e.g., constructive feedback), the findings demonstrate that team-external managers of innovative projects should generally refrain from interfering in team-internal operational decisions. Likewise, the study shows that all team members should share decision authority, recognizing that their contributions to team discussion and decision making may well differ given differences in experience and expertise. © 2006 Wiley Periodicals, Inc.

17.
Spatially distributed data exhibit particular characteristics that should be considered when designing a survey of spatial units. Unfortunately, traditional sampling designs generally do not allow for spatial features, even though it is usually desirable to use information concerning spatial dependence in a sampling design. This paper reviews and compares some recently developed randomised spatial sampling procedures, using simple random sampling without replacement as a benchmark for comparison. The approach taken is design-based and serves to corroborate intuitive arguments about the need to explicitly integrate spatial dependence into sampling survey theory. Some guidance for choosing an appropriate spatial sampling design is provided, and some empirical evidence for the gains from using these designs with spatial populations is presented, using two datasets as illustrations.

18.
In contemporary business environments, the ability to manage operational knowledge is an important predictor of organizational competitiveness. Organizations invest large sums in various types of information technologies (ITs) to manage operational knowledge. Because of their superior storage, processing and communication capabilities, ITs offer technical platforms on which to build knowledge management (KM) capabilities. However, merely acquiring ITs is not sufficient, and organizations must structure information system (IS) designs to leverage ITs for building KM capabilities. We study how technical and strategic IS designs enhance operational absorptive capacity (OAC) – the KM capability of an operations management (OM) department. Specifically, we use a capabilities perspective of absorptive capacity to examine potential absorptive capacity (POAC) and realized absorptive capacity (ROAC) capabilities – the two OAC capabilities that create and utilize knowledge, respectively. Our theory proposes that integrated IS capability – an aspect of technical IS design – is an antecedent of POAC and ROAC capabilities, and that business-IT alignment – an aspect of strategic IS design – moderates the relationship between integrated IS capability and ROAC capability. Combining data gleaned from a multi-respondent survey with archival data from COMPUSTAT, we test our hypotheses using a dataset from 153 manufacturing organizations. By proposing that IS design enables an OM department's KM processes, i.e., the POAC and ROAC capabilities, our interdisciplinary theoretical framework opens the "black box" of OAC and contributes to improved understanding of IS and OM synergies. We offer a detailed discussion of our contributions to the literature at the IS-OM interface and implications for practitioners.

19.
We establish the inferential properties of the mean-difference estimator for the average treatment effect in randomised experiments where each unit in a population is randomised to one of two treatments and then units within treatment groups are randomly sampled. The properties of this estimator are well understood in the experimental design scenario where units are first randomly sampled and treatment is then randomly assigned, but not for the scenario in which the sampling and treatment assignment stages are reversed. We find that the inferential properties of the mean-difference estimator under this design are identical to those under the more common sample-first-randomise-second design. This finding clarifies sampling-based randomised designs for causal inference, particularly for settings with a finite super-population. Finally, we explore to what extent pre-treatment measurements can be used to improve upon the mean-difference estimator for the randomise-first-sample-second design. Unfortunately, we find that pre-treatment measurements are often unhelpful in improving the precision of average treatment effect estimators under this design, unless a large number of pre-treatment measurements that are highly associated with the post-treatment measurements can be obtained. We confirm these results using a simulation study based on a real experiment in nanomaterials.
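The equivalence claimed above is easy to check by simulation. The sketch below compares the empirical variance of the mean-difference estimator under the two orderings on a synthetic finite population with a constant treatment effect (illustrative values, not the nanomaterials experiment).

```python
import numpy as np

rng = np.random.default_rng(7)
N, n, reps, tau = 2000, 200, 4000, 1.0       # population, sample, replications, effect
y0 = rng.normal(0.0, 1.0, N)                 # control potential outcomes
y1 = y0 + tau                                # constant treatment effect

est_a = np.empty(reps)                       # sample first, randomise second
est_b = np.empty(reps)                       # randomise first, sample second
for r in range(reps):
    # (A) draw n units, then randomise half of them to treatment
    s = rng.choice(N, n, replace=False)
    t = rng.permutation(n) < n // 2
    est_a[r] = y1[s[t]].mean() - y0[s[~t]].mean()
    # (B) randomise the whole population, then sample n/2 from each arm
    z = rng.permutation(N) < N // 2
    st = rng.choice(np.flatnonzero(z), n // 2, replace=False)
    sc = rng.choice(np.flatnonzero(~z), n // 2, replace=False)
    est_b[r] = y1[st].mean() - y0[sc].mean()
```

Both orderings give an unbiased estimator with essentially the same Monte Carlo variance, consistent with the paper's equivalence result.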

20.
Social and economic studies are often implemented as complex survey designs. For example, multistage, unequal probability sampling designs utilised by federal statistical agencies are typically constructed to maximise the efficiency of the target domain level estimator (e.g. indexed by geographic area) within cost constraints for survey administration. Such designs may induce dependence between the sampled units; for example, with employment of a sampling step that selects geographically indexed clusters of units. A sampling‐weighted pseudo‐posterior distribution may be used to estimate the population model on the observed sample. The dependence induced between coclustered units inflates the scale of the resulting pseudo‐posterior covariance matrix that has been shown to induce under coverage of the credibility sets. By bridging results across Bayesian model misspecification and survey sampling, we demonstrate that the scale and shape of the asymptotic distributions are different between each of the pseudo‐maximum likelihood estimate (MLE), the pseudo‐posterior and the MLE under simple random sampling. Through insights from survey‐sampling variance estimation and recent advances in computational methods, we devise a correction applied as a simple and fast postprocessing step to Markov chain Monte Carlo draws of the pseudo‐posterior distribution. This adjustment projects the pseudo‐posterior covariance matrix such that the nominal coverage is approximately achieved. We make an application to the National Survey on Drug Use and Health as a motivating example and we demonstrate the efficacy of our scale and shape projection procedure on synthetic data on several common archetypes of survey designs.  相似文献   


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号