Similar Documents
20 similar documents retrieved.
1.
One frequent application of microarray experiments is the monitoring of gene activities in a cell during the cell cycle or cell division. Such experiments produce high-throughput gene expression time series data. A computational and statistical challenge in analyzing these time course data from cell cycle microarray experiments is to discover genes that are statistically significantly periodically expressed during the cell cycle. The challenge arises from the large number of genes measured simultaneously, the moderate to small number of measurements per gene taken at different time points, and the high levels of non-normal random noise inherent in the data. Computational and statistical approaches to the discovery and validation of periodic patterns of gene expression are, however, very limited. A good method of analysis should be able to search for significant periodic genes with a controlled family-wise error (FWE) rate, or a controlled false discovery rate (FDR) and its variants, when all gene expression profiles are compared simultaneously. In this review paper, a brief summary of currently used methods for searching for periodic genes is given. In particular, two methods are surveyed in detail. The first is a novel statistical inference approach, the C & G Procedure, which can be used to effectively detect statistically significantly periodically expressed genes when gene expression is measured at evenly spaced time points. The second is the Lomb–Scargle periodogram analysis, which can be used to discover periodic genes when the gene profiles are not measured at evenly spaced time points or when there are missing values in the profiles. The ultimate goal of this review paper is to give an exposition of the two surveyed methods for researchers in related fields.
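As a rough illustration of the second surveyed method, the sketch below applies the Lomb–Scargle periodogram to a single unevenly sampled expression profile, assuming Python with NumPy/SciPy; the sampling times, the 60-hour period, and the frequency grid are all made-up illustrative values, not data from the surveyed studies.

import numpy as np
from scipy.signal import lombscargle

# Illustrative unevenly spaced sampling times (hours) and one gene's centred profile.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 120, size=18))            # 18 irregular time points
period = 60.0                                        # assumed cell-cycle length (hours)
y = np.sin(2 * np.pi * t / period) + 0.3 * rng.standard_normal(t.size)
y = y - y.mean()                                     # lombscargle expects a centred series

# Scan a grid of angular frequencies; the peak estimates the gene's period.
freqs = np.linspace(0.01, 0.5, 500)                  # rad/hour
power = lombscargle(t, y, freqs)
est_period = 2 * np.pi / freqs[np.argmax(power)]
print(f"estimated period ≈ {est_period:.1f} hours")

In a genome-wide screen, the peak power of each gene would then be converted into a p-value and adjusted for multiplicity (FWE or FDR), in line with the requirements the review sets out.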

2.
MicroRNAs (miRNAs) are a class of small non-coding RNAs about 22 nt in length that play important roles in the regulation of gene expression. miRNAs show a positional preference in the genome: some are located in intergenic regions, while other miRNA genes lie within the introns of protein-coding genes, and a few within exons; the latter are referred to as intragenic miRNAs. Alternative splicing (AS) refers to the different ways in which the pre-mRNA of the same gene can be spliced, so that a single gene can produce multiple mRNAs. Studies have found that miRNAs are closely related to alternative splicing, and some miRNAs are located in alternatively spliced regions of genes. This paper uses bioinformatics methods to analyze the characteristics of miRNAs located in alternatively spliced regions.

3.
王波 《价值工程》2014,(25):327-328
Objective: To use gene chip (microarray) technology to examine changes in the gene expression profile of insulin-resistant hypertensive rats treated with Tangzhixiao (糖脂消), and to study the mechanism by which Tangzhixiao treats insulin resistance and hypertension. Methods: An insulin-resistance model was induced in SD rats with a high-fructose diet. After oral administration of Tangzhixiao, gene chips were used to assay the high-fructose group and the treatment group, and changes in gene expression were observed after analysis with computer software. Results: 95 genes were differentially expressed in the treatment group, of which 23 were novel genes and 34 were known genes. Conclusion: Tangzhixiao can alter the gene expression profile, which lays the groundwork for further investigation of its therapeutic mechanism.

4.
Summary: In the case of absolute error loss, we investigate, for an arbitrary class of probability distributions, whether or not a two-point prior can be least favourable and a corresponding Bayes estimator minimax when the parameter is restricted to a closed and bounded interval of ℝ. The general results are applied to several examples; for instance, location and scale parameter families are considered. We give examples for which, independent of the length of the parameter interval, no two-point priors exist. On the other hand, examples are given that admit a least favourable two-point prior when the parameter interval is sufficiently small.

5.
This paper explores the consequences for quality of introducing yardstick competition in a duopoly when a verifiable quality indicator is available. Yardstick competition can be implemented by a menu of transfers that are linear in the cost differential between the two firms and in quality. Cost and quality incentives are stronger in larger firms when improvements are highly valued by consumers and firms can significantly influence quality. Expenditures on quality improvement can increase or decrease following the introduction of yardstick competition. The crucial factor is the likelihood ratio of productivity between the two firms, not productivity differences.

6.
In this paper we develop a dynamic discrete-time bivariate probit model in which the conditions for Granger non-causality can be represented and tested. The conditions for simultaneous independence are also worked out. The model is extended to allow for covariates, representing individual as well as time heterogeneity. The proposed model can be estimated by maximum likelihood. Granger non-causality and simultaneous independence can be tested by likelihood ratio or Wald tests. A specialized version of the model, aimed at testing Granger non-causality with bivariate discrete-time survival data, is also discussed. The proposed tests are illustrated in two empirical applications.
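The Granger non-causality test mentioned here ultimately reduces to a standard likelihood-ratio comparison of an unrestricted and a restricted model. A minimal sketch of that mechanic in Python, assuming the two maximized log-likelihoods have already been obtained from some bivariate probit fit (the numbers and the number of restrictions below are purely hypothetical):

from scipy.stats import chi2

# Hypothetical maximized log-likelihoods: the unrestricted dynamic bivariate probit
# and the restricted model with the Granger-causality lag coefficients set to zero.
ll_unrestricted = -1042.7
ll_restricted = -1049.3
n_restrictions = 2                      # number of lag coefficients restricted to zero

lr_stat = 2.0 * (ll_unrestricted - ll_restricted)
p_value = chi2.sf(lr_stat, df=n_restrictions)
print(f"LR = {lr_stat:.2f}, p = {p_value:.4f}")   # small p rejects Granger non-causality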

7.
Following previous literature, we define randomized inverse sampling for comparing two treatments with respect to a binary response as sampling that stops when a fixed total number of successes, irrespective of treatment, has been observed. We have obtained elsewhere the asymptotic distributions for the counting variables involved and have shown them to be equivalent to the corresponding asymptotic distributions for multinomial sampling. In this paper, we start by deriving the same basic results using different techniques, and we then show how they give rise to genuinely novel procedures when translated into finite-sample approximations. As the main example, a novel confidence interval for the logarithm of the odds ratio of two success probabilities can be constructed in the case of comparative randomized inverse sampling. Some advantages over standard multinomial sampling in terms of coverage probabilities are visible when no adjustment for cells with zero counts is applied; otherwise, the two sampling schemes appear to be fairly equivalent. This is reassurance that, under certain circumstances, inverse sampling can safely be chosen over more traditional sampling schemes.
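For context, the interval proposed in the paper is specific to inverse sampling; the standard multinomial-sampling benchmark it is compared against is the familiar Wald interval for the log odds ratio. A minimal sketch with illustrative counts (not data from the paper):

import math

# Illustrative 2x2 table: successes and failures under treatments A and B.
a, b = 18, 32      # treatment A: successes, failures
c, d = 9, 41       # treatment B: successes, failures

log_or = math.log((a * d) / (b * c))
se = math.sqrt(1/a + 1/b + 1/c + 1/d)        # large-sample standard error
z = 1.959964                                 # normal quantile for a 95% interval
lo, hi = log_or - z * se, log_or + z * se
print(f"log odds ratio = {log_or:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")

Intervals of this kind are the ones whose coverage suffers when cells contain zero counts and no adjustment is applied, which is where the abstract notes an advantage for the inverse-sampling construction.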

8.
This paper investigates the extent to which cross-country differences in aggregate participation rates can be explained by differences in tax-benefit systems. We take the example of two countries, the Czech Republic and Hungary, which, despite many similarities, differ markedly in labour force participation rates. Using comparable individual-level labour supply estimates, we simulate how the aggregate participation rate would change in one country if the other country's tax and social welfare system were adopted. The estimation results for the two countries are quite similar, suggesting that individual preferences are essentially identical in the two countries. The simulation results show that about one-third of the difference in the participation rates of the 15–74 year-old population, and more than two-thirds of the difference for the prime-age population, can be explained by differences in the tax-benefit systems.

9.
Quantile estimation is important in a wide range of applications. While point estimates based on one or two order statistics are common, constructing confidence intervals around them is a more difficult problem. This paper has two goals. First, it surveys the numerous distribution-free methods for constructing approximate confidence intervals for quantiles. These techniques can be divided roughly into four categories: pivotal quantities, resampling, interpolation, and empirical likelihood methods. Second, a method based on a pivotal quantity that has received limited attention in the past is extended. Comprehensive simulation studies are used to compare performance across methods. The proposed method is simple and performs similarly to linear interpolation methods and a smoothed empirical likelihood method. While the proposed method has slightly wider interval widths, it can be calculated for more extreme quantiles even when there are few observations.
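As background to the pivotal-quantity approach, the sketch below shows the classical distribution-free interval for a quantile built from two order statistics, using the fact that the number of observations below the true quantile is Binomial(n, p); the sample and the target quantile are illustrative, not taken from the paper's simulations.

import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(1)
x = np.sort(rng.exponential(size=50))        # illustrative sample
n, p, alpha = x.size, 0.90, 0.05             # 90th percentile, nominal 95% confidence

# Choose order-statistic ranks l <= u so that P(l <= B <= u - 1) >= 1 - alpha,
# where B ~ Binomial(n, p) counts observations below the true quantile.
l = max(int(binom.ppf(alpha / 2, n, p)), 1)
u = min(int(binom.ppf(1 - alpha / 2, n, p)) + 1, n)
coverage = binom.cdf(u - 1, n, p) - binom.cdf(l - 1, n, p)
ci = (x[l - 1], x[u - 1])                    # the order statistics X_(l) and X_(u)
print(f"ranks ({l}, {u}), exact coverage {coverage:.3f}, CI ({ci[0]:.3f}, {ci[1]:.3f})")

For extreme quantiles or very small samples the required ranks may not exist, which is the situation where interpolation and the method extended in this paper become attractive.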

10.
The demand for home care (HC) services has been growing steadily for two main types of services: healthcare and social care. While for the former caregivers' skills are of utmost importance, in the latter caregivers are not distinguishable in terms of skills. This work focuses on social care and models caregivers' synchronization as a means of improving human resource management. Moreover, in social care services, several visits need to be performed on the same day, since patients are frequently alone and need assistance throughout the day. Depending on the patient's autonomy, some tasks have to be performed by two caregivers (e.g. assisting bedridden patients). Adequate decision support tools are therefore crucial for assisting managers (often social workers) in designing operational plans and efficiently assigning caregivers to tasks. This paper advances the literature by 1) considering teams of one caregiver that can synchronize to perform tasks requiring two caregivers (instead of having teams of two caregivers), 2) simultaneously modelling daily continuity of care and team synchronization, and 3) associating dynamic time windows with team synchronization, introducing scheduling flexibility while minimizing service and travel times. These concepts are embedded into a daily routing and scheduling MIP model that decides on the number of caregivers and on the number and type of teams needed to serve all patient tasks. The main HC features of the problem, synchronization and continuity of care, are evaluated by comparing the proposed planning with the current situation of a home social care service provider in Portugal. The results show that synchronization is the feature that most increases efficiency with respect to the current situation. It reveals a surplus in working-time capacity by proposing plans in which all requests can be served with a smaller number of caregivers. Consequently, new patients from long waiting lists can now be served by the “available” caregivers.

11.
To study interaction effects, two sets of data are created for fixed-effects ANOVA, both with combinatory effects of the two factors. In the first, both factors and their interaction contribute independently and directly to the dependent variable. In the second, each factor contributes indirectly to the dependent score. Data created with the first model can be analyzed flawlessly. The second often shows relatively large main effects and relatively small interaction effects, and as a consequence the interaction effect may be rejected. Even when the dependent variable results solely from the multiplication of the two factor scores, highly significant main effects can be obtained while the interaction effect remains insignificant. Although mathematically correct, the relative contributions of the main effects are in that case difficult to interpret.
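A minimal simulation sketch of the second scenario, assuming Python with statsmodels: the dependent score is just the product of the two factor scores plus noise, yet a standard two-way fixed-effects ANOVA attributes most of the explained variance to the main effects, and the interaction term may fail to reach significance. All design values here are illustrative.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
levels_a, levels_b, n_per_cell = [1, 2, 3], [1, 2, 3], 5

rows = []
for a in levels_a:
    for b in levels_b:
        # Dependent score is purely multiplicative in the factor scores, plus noise.
        y = a * b + 2.0 * rng.standard_normal(n_per_cell)
        rows += [{"A": a, "B": b, "y": v} for v in y]
df = pd.DataFrame(rows)

model = smf.ols("y ~ C(A) * C(B)", data=df).fit()
print(anova_lm(model, typ=2))   # main effects dominate; C(A):C(B) is often insignificant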

12.
We give two optimization programs for determining whether Pareto-improving local changes are possible. When they are, the programs compute them. Any procedure generating efficient and Pareto-improving changes can be replicated by these programs. The two programs are dual to each other. We apply the programs to Pareto-improving exchange processes and to Pareto-improving tax-tariff reforms.

13.
This paper studies a model in which two payers contract with one hospital. True costs per patient are not a possible basis for payment, and contracts can only be written on the basis of allocated cost. Payers choose a contract that is fully prospective, fully based on cost allocation, or a payment scheme that gives some weight to each of these two. We characterize the payers' equilibrium contracts and show how, in equilibrium, hospital input decisions are distorted by the payers' incentives to engage in cost shifting. Two cost-shifting incentives work in opposite directions, and the equilibrium can be characterized by too little or too much care relative to the socially efficient level.

14.
The paper studies how the optimal nonlinear quantity-payment allocation can be truthfully implemented by optional tariffs in a differentiated-goods duopoly. Consumers choose from a menu of tariffs and are subsequently billed according to their choice. We find that implementation by simple two-part tariffs may not be a feasible strategy in a duopoly because the optimal nonlinear tariff exhibits a convexity at lower quantities. We show that the optimal outcome can be implemented if the firms can use two-part tariffs with inclusive consumption. The fixed fee includes a free consumption allowance, whereas subsequent consumption is charged at a steep unit price. In this way the firm is able to secure voluntary participation without violating the incentive constraint. The paper gives some examples from the telecommunications industry where firms offer two-part tariffs with free call minutes to low-demand segments.
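A minimal numerical sketch of the billing rule described above, with a purely hypothetical fixed fee, free allowance, and unit price (none of these figures come from the paper):

def two_part_tariff_with_allowance(q, fixed_fee=20.0, allowance=100.0, unit_price=0.5):
    # Bill for q units under a two-part tariff with inclusive consumption:
    # the fixed fee covers the first `allowance` units, and any consumption
    # beyond the allowance is charged at a steep per-unit price.
    return fixed_fee + unit_price * max(0.0, q - allowance)

for q in (50, 100, 150):
    print(q, two_part_tariff_with_allowance(q))   # 20.0, 20.0, 45.0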

15.
张向国 《价值工程》2011,30(25):226-226
Using the listening section of the national College English Test Band 4 (CET-4) as the research instrument, this study investigated the effectiveness of listening strategies among more than 225 non-English-major undergraduates in a web-based class and a regular class. The results show that providing the web-based class with two semesters of online courses before the listening test was a strategy with no obvious effect, whereas the regular class, taught in the basic mode based on the traditional language laboratory, showed a comparatively clearer effect.

16.
Distributive analysis typically involves comparisons of distributions where individuals differ in more than just one attribute. In the particular case where there are two attributes and the distribution of one of them is fixed, one can appeal to sequential rank order dominance for comparing distributions. We show that sequential rank order dominance of one distribution over another implies that the dominating distribution can be obtained from the dominated one by means of a finite sequence of favourable permutations, and conversely. We provide two examples where favourable permutations prove to have interesting implications from a normative point of view.

17.
We discuss a regression model in which the regressors are dummy variables. The basic idea is that the observation units can be assigned to some well-defined combination of treatments, corresponding to the dummy variables. This assignment cannot be made without some error, i.e. misclassification can play a role. This situation is analogous to regression with errors in variables. It is well known that in these situations identification of the parameters is a prominent problem. We first show that, in our case, the parameters are not identified by the first two moments but can be identified by the likelihood. We then analyze two estimators. The first is a moment estimator involving moments up to the third order, and the second is a maximum likelihood estimator calculated with the help of the EM algorithm. Both estimators are evaluated on the basis of a small Monte Carlo experiment.

18.
Because of the uncertainty in estimating both the demand for end products and the supply of components from lower levels, buffering techniques should be included before the loading of a material requirements planning (MRP) system. Safety stocks and safety lead time are two techniques for providing buffering for loading. There have been many studies concerning the determination of the amount of safety stock and safety lead time, and some guidelines for choosing between safety stocks and safety lead time when dealing with uncertainty in both demand and supply have also been established. Although these two methods have been used successfully, it has not been documented that using them in a given situation will yield essentially the same results; that is, the interchangeability of these two buffering techniques has not been explored quantitatively.

Since the net influence of safety stocks and safety lead time and their quantitative interchangeability are of major interest, an analytical model is proposed for this study. The lead-time offset procedure for component loading is represented by a matrix model based on a lot-for-lot lot-sizing technique. The lead-time offset matrix is the product of the precedence matrix and the fixed-duration matrix: the precedence matrix is formed from the total requirement factor matrix, and the duration matrix is formed from each component's process time. The lead-time offset matrix thus generates the starting period of each component.

Once the lead-time offset procedure is modeled, the net influence of the buffering quantity can be analyzed. The planned safety stock that is normally used to accommodate unexpected demand, shortages in supply, and defects from the operation at each process can be combined with demand to form the master production schedule. The revised lead time due to the integration of the safety stocks can be calculated through the lead-time offset model. Safety lead time may extend the component process time as well as the overall production lead time if the designated safety lead time is longer than the available slack time in a fixed lead-time loading system.

Further examination of the proposed lead-time offset model shows that planned safety stocks at a higher level can buffer fluctuations in the quantity of lower-level components as well as of same-level components. Safety stocks can also buffer shortages caused by delays in raw materials and manufacturing processes; thus, safety stocks can be used to buffer unexpected delay time up to certain limits. A planned safety lead time for a higher-level component process can buffer fluctuations in the process times of lower-level components as well as of same-level components. Safety lead time can be used to produce additional products to meet unexpected excess demand up to certain limits under the following conditions: 1. The excess demand is known before the actual processing of the components at the lowest level. 2. The raw material at the lowest level is available.

Although safety stocks and safety lead time are interchangeable in terms of their ability to buffer variations in quantity, the conditions for safety lead time are seldom met in actual practice. Thus, the slack time in a fixed lead-time loading system cannot be considered an effective substitute for safety stocks. However, all or part of a delay in manufacturing processes or in the supply of lower-level components can be buffered by safety stock, and the MPS will still be met. From this study, it is clear that slack time can be reduced when safety stocks are planned for an MRP system. The reduction of the fixed lead-time duration will be beneficial to overall planning and scheduling in MRP systems.
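One way to read the matrix construction described above is sketched below with NumPy, assuming a small serial product structure and illustrative process times; row i of the precedence matrix marks every component that lies between the end item and component i (including i itself), so multiplying it by the vector of process times gives each component's cumulative lead-time offset.

import numpy as np

# Components 0, 1, 2 form a serial structure: item 0 is the end product,
# item 1 goes into item 0, and item 2 goes into item 1.
lead_time = np.array([1, 2, 3])          # process time (periods) of each component

precedence = np.array([[1, 0, 0],        # illustrative precedence matrix derived
                       [1, 1, 0],        # from the total requirement factor matrix
                       [1, 1, 1]])

due_period = 10                          # period in which the end product is due
offset = precedence @ lead_time          # cumulative lead-time offset per component
start_period = due_period - offset
print(start_period)                      # [9 7 4]: component 2 must be started first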

19.
E.J. Bell  R.C. Hinojosa 《Socio》1977,11(1):13-17
In an earlier paper, one of the authors used actual land use maps to establish the validity of the Markov process for describing and projecting land use changes. In a subsequent paper, the same data were used to calibrate a non-homogeneous birth-and-death process in which land uses were dichotomized into one of two states: developed or undeveloped. The present paper pursues two aspects raised by the earlier ones: whether the process of land use change can be considered a stationary process and, if so, whether it can be fitted by a continuous-time, rather than discrete-time, model. Necessary and sufficient conditions for the existence of a feasible fit are indicated.
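A minimal sketch of the discrete-time Markov projection underlying the earlier papers, assuming an illustrative two-state (undeveloped/developed) transition matrix; the probabilities and initial shares are made up.

import numpy as np

# Two land-use states: 0 = undeveloped, 1 = developed.
P = np.array([[0.95, 0.05],      # illustrative annual transition probabilities
              [0.00, 1.00]])     # development treated as irreversible here
p0 = np.array([0.80, 0.20])      # current shares of land in each state

# Project shares t periods ahead: p_t = p_0 P^t.
for t in (1, 5, 10):
    pt = p0 @ np.linalg.matrix_power(P, t)
    print(t, pt.round(3))

The stationarity question the paper raises is whether a single matrix P of this kind can legitimately be applied period after period; the continuous-time alternative replaces P by a matrix of transition rates.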

20.
A variety of demographic statistical models exist for studying population dynamics when individuals can be tracked over time. In cases where data are missing due to imperfect detection of individuals, the associated measurement error can be accommodated under certain study designs (e.g. those that involve multiple surveys or replication). However, the interaction of the measurement error and the underlying dynamic process can complicate the implementation of statistical agent-based models (ABMs) for population demography. In a Bayesian setting, traditional computational algorithms for fitting hierarchical demographic models can be prohibitively cumbersome to construct. Thus, we discuss a variety of approaches for fitting statistical ABMs to data and demonstrate how to use multi-stage recursive Bayesian computing and statistical emulators to fit models in a way that alleviates the need for analytical knowledge of the ABM likelihood. Using two examples, a demographic model for survival and a compartment model for COVID-19, we illustrate statistical procedures for implementing ABMs. The approaches we describe are intuitive and accessible to practitioners and can be parallelised easily for additional computational efficiency.
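A minimal sketch of the forward-simulation (data-generating) side of such an agent-based survival model with imperfect detection, assuming illustrative survival and detection probabilities; the fitting machinery (recursive Bayesian computing, emulators) discussed in the paper is not shown.

import numpy as np

rng = np.random.default_rng(3)
n_individuals, n_occasions = 200, 5
phi, p_detect = 0.85, 0.6                 # illustrative survival and detection probabilities

alive = np.ones(n_individuals, dtype=bool)
capture_history = np.zeros((n_individuals, n_occasions), dtype=int)
for t in range(n_occasions):
    if t > 0:                             # each agent survives to occasion t with probability phi
        alive &= rng.random(n_individuals) < phi
    detected = alive & (rng.random(n_individuals) < p_detect)
    capture_history[:, t] = detected      # observed data: 1 if seen, 0 otherwise

print(capture_history[:5])                # rows of 0/1 detections, one per individual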
