20 similar documents found (search time: 0 ms)
1.
Cristina Sotto, Caroline Beunckens, Geert Molenberghs, Ivy Jansen, Geert Verbeke 《Metrika》2009,69(2-3):305-336
Many models have been developed for analyzing incomplete data that allow the missingness to be non-random. Since such models necessarily rely on unverifiable assumptions, considerable research is now devoted to assessing the sensitivity of the resulting inferences. A popular sensitivity route, next to local influence (Cook in J Roy Stat Soc Ser B 2:133–169, 1986; Jansen et al. in Biometrics 59:410–419, 2003) and so-called intervals of ignorance (Molenberghs et al. in Appl Stat 50:15–29, 2001), is based on contrasting the more conventional selection models with members of the pattern-mixture model family. In the first family, the outcome of interest is modeled directly, while in the second the natural parameter describes the measurement process conditional on the missingness pattern. This implies that a direct comparison ought not to be made in terms of parameter estimates, but should instead proceed by marginalizing the pattern-mixture model over the patterns. While this is relatively straightforward for linear models, the picture is less clear for the important setting of categorical outcomes, since such models ordinarily exhibit a certain amount of non-linearity. Following ideas laid out in Jansen and Molenberghs (Pattern-mixture models for categorical outcomes with non-monotone missingness. Submitted for publication, 2007), we offer ways to marginalize pattern-mixture-model-based parameter estimates and supplement these with asymptotic variance formulas. The modeling context is provided by the multivariate Dale model. The performance of the method and its usefulness for sensitivity analysis are scrutinized using simulations.
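The marginalization step described above can be illustrated with a toy numerical sketch (not using the multivariate Dale model; all probabilities, proportions, and variances below are hypothetical): pattern-specific outcome probabilities are averaged over the estimated pattern proportions, and a delta-method variance is attached, treating the two sets of estimates as independent.

```python
import numpy as np

# Hypothetical pattern-specific estimates (toy values, not from the paper):
# probability of a positive outcome within each missingness pattern,
# and the observed proportion of subjects falling in each pattern.
p_pattern = np.array([0.62, 0.48, 0.35])    # P(Y=1 | pattern t)
pi = np.array([0.55, 0.30, 0.15])           # pattern proportions (sum to 1)
var_p = np.array([0.0004, 0.0009, 0.0025])  # variances of the p_pattern estimates
n = 400                                     # sample size used to estimate pi

# Marginal probability: average the pattern-specific estimates over patterns.
p_marg = np.dot(pi, p_pattern)

# Delta-method variance, treating p_pattern and the multinomial pi as
# independent estimates (a simplifying assumption of this sketch).
cov_pi = (np.diag(pi) - np.outer(pi, pi)) / n
var_marg = np.dot(pi**2, var_p) + p_pattern @ cov_pi @ p_pattern

print(f"marginal P(Y=1) = {p_marg:.4f}, s.e. = {np.sqrt(var_marg):.4f}")
```

The same weighted-average logic underlies the non-linear case, except that the pattern-specific parameters must first be mapped to the probability scale before averaging.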
2.
Martin Weale 《Journal of Applied Econometrics》1992,7(2):167-174
Variables are often measured subject to error, whether they are collected as part of an experiment or by sample surveys. A consequence of this is that there will be different estimates of the same variable, or, more generally, linear restrictions which the observations should satisfy but fail to. With knowledge of the variances of the various observations, it has been shown elsewhere that maximum-likelihood estimates of the observations can be produced. This paper shows how, given a sequence of such observations, estimates can be produced without knowledge of data reliabilities. The method is applied to estimates of constant price US GNP. It suggests that 64 per cent of the discrepancy should be attributed to the expenditure estimate, with only 36 per cent going to the income/output estimate. The current method of presentation, on the other hand, places the whole of the error in the income/output estimate.
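The kind of variance-weighted reconciliation behind these percentages can be sketched as follows (the levels and error variances are illustrative, not Weale's estimates): with two independent measurements of the same total, the maximum-likelihood adjustment allocates the discrepancy between them in proportion to their error variances.

```python
import numpy as np

# Two independent estimates of the same quantity (e.g. GNP measured from
# the expenditure side and from the income/output side), with assumed
# error variances. All numbers are illustrative.
y = np.array([1000.0, 1012.0])  # expenditure-based, income-based estimates
var = np.array([16.0, 9.0])     # assumed error variances

# ML / GLS reconciled estimate: the precision-weighted average.
w = (1.0 / var) / np.sum(1.0 / var)
y_hat = np.dot(w, y)

# The discrepancy y[1] - y[0] is allocated in proportion to the variances:
# the noisier series absorbs the larger share of the adjustment.
share = var / var.sum()
print(f"reconciled estimate: {y_hat:.2f}")
print(f"discrepancy shares: {share[0]:.2f} (expenditure), {share[1]:.2f} (income)")
```

The paper's contribution is to estimate these shares from a sequence of observations when the variances themselves are unknown; the sketch above takes them as given.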
3.
The field of productive efficiency analysis is currently divided between two main paradigms: the deterministic, nonparametric Data Envelopment Analysis (DEA) and the parametric Stochastic Frontier Analysis (SFA). This paper examines an encompassing semiparametric frontier model that combines the DEA-type nonparametric frontier, which satisfies monotonicity and concavity, with the SFA-style stochastic homoskedastic composite error term. To estimate this model, a new two-stage method is proposed, referred to as Stochastic Non-smooth Envelopment of Data (StoNED). The first stage of the StoNED method applies convex nonparametric least squares (CNLS) to estimate the shape of the frontier without any assumptions about its functional form or smoothness. In the second stage, the conditional expectations of inefficiency are estimated based on the CNLS residuals, using the method of moments or pseudolikelihood techniques. Although in a cross-sectional setting distinguishing inefficiency from noise in general requires distributional assumptions, we also show how these can be relaxed in our approach if panel data are available. Performance of the StoNED method is examined using Monte Carlo simulations.
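The second-stage moment decomposition can be sketched in a few lines (the first-stage CNLS quadratic program is omitted; we start directly from simulated composite residuals, and the normal-half-normal specification and parameter values are assumptions of this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate composite residuals e = v - u with v ~ N(0, sigma_v^2) noise and
# u ~ half-normal(sigma_u) inefficiency, as in the normal-half-normal
# specification. In StoNED these residuals would come from the CNLS stage.
sigma_v, sigma_u, n = 0.5, 1.0, 200_000
v = rng.normal(0.0, sigma_v, n)
u = np.abs(rng.normal(0.0, sigma_u, n))
e = v - u

# Method of moments: the second and third central moments of e identify
# sigma_u and sigma_v under the half-normal assumption (the third moment
# of e is sqrt(2/pi) * (1 - 4/pi) * sigma_u^3).
m2 = np.mean((e - e.mean()) ** 2)
m3 = np.mean((e - e.mean()) ** 3)
sigma_u_hat = np.cbrt(m3 / (np.sqrt(2 / np.pi) * (1 - 4 / np.pi)))
sigma_v_hat = np.sqrt(m2 - (1 - 2 / np.pi) * sigma_u_hat**2)

print(f"sigma_u: true {sigma_u}, estimated {sigma_u_hat:.3f}")
print(f"sigma_v: true {sigma_v}, estimated {sigma_v_hat:.3f}")
```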
4.
The mathematical-programming-based technique of data envelopment analysis (DEA) has often treated data as deterministic. In response to the criticism that in most applications there is error and random noise in the data, a number of mathematically elegant solutions for incorporating stochastic variations in data have been proposed. In this paper, we propose a chance-constrained formulation of DEA that allows random variations in the data. We study properties of the ensuing efficiency measure using a small sample in which multiple inputs and a single output are correlated and are the result of a stochastic process. We replicate the analysis using Monte Carlo simulations and conclude that simulation provides a more flexible and computationally less cumbersome approach to studying the effects of noise in the data. We suggest that, in keeping with the tradition of DEA, the simulation approach allows users to explicitly consider different data-generating processes and allows for greater flexibility in implementing DEA under stochastic variations in data.
5.
This paper introduces the residential waste sorting and recycling model used in communities in Ontario, Canada. By analyzing Canadian researchers' studies of residents' waste sorting and recycling behaviour based on behavioural design theory, and by summarizing related practical experience, it argues that residents' sorting and recycling behaviour should be promoted by strengthening personal and subjective norms, providing the necessary physical infrastructure, and deploying incentive factors.
6.
This paper discusses nonparametric identification in a model of sorting in which location choices depend on the location choices of other agents as well as on prices and exogenous location characteristics. In this model, demand slopes and hence preferences are not identifiable without further restrictions, because endogenous composition and exogenous location characteristics do not vary independently. Several solutions to this problem are presented and applied to data on neighborhoods in US cities. These solutions use exclusion restrictions based on subgroup demand shifters, the spatial structure of externalities, or the dynamics of prices and composition in response to an amenity shock. The empirical results consistently suggest the presence of strong social externalities, that is, a dependence of location choices on neighborhood composition.
7.
How to quickly and effectively mine latent, valuable information from the ever-expanding body of university logistics data, and to put it to effective use in logistics management, service operations, and decision-making in higher education, is a problem in urgent need of a solution. This article discusses and studies data mining technology and its application to university logistics.
8.
How to conduct model analysis in computer data auditing
Broadly speaking, computer auditing can be divided into system-oriented auditing and data-oriented auditing. Data-oriented auditing, generally called computer data auditing, refers to using computer audit techniques to audit the electronic data processed by computer systems related to fiscal and financial revenues and expenditures. Throughout the computer data auditing process, the core problem is the analysis of the electronic data after collection and processing.
9.
A discrete replacement model for a two-unit parallel system subject to failure rate interaction
In this paper, a repairable two-unit parallel system with failure rate interaction between units is studied. The failure rate interaction is as follows: whenever unit 1 fails, the failure rate of unit 2 increases by some amount, while a failure of unit 2 induces unit 1 into simultaneous failure. We consider a discrete replacement policy N based on the number of unit-1 failures: the system is replaced at the instant of the N-th failure of unit 1 or upon simultaneous failure of the system. Our problem is to determine an optimal replacement policy N* such that the expected cost rate per unit time is minimized. An explicit expression for the expected cost rate per unit time is derived by introducing relative costs, and the corresponding optimal number N* is shown to be finite and unique under some specific conditions.
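A renewal-reward simulation conveys the flavour of the optimization (the cost structure, rates, and the minimal-repair assumption for unit 1 are illustrative and not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (not from the paper): unit 1 fails at rate lam1
# and is minimally repaired; each unit-1 failure adds delta to unit 2's
# failure rate; a unit-2 failure brings unit 1 down with it (system failure).
lam1, lam2, delta = 1.0, 0.1, 0.3
c_r, c_p, c_f = 0.2, 1.0, 5.0   # unit-1 repair, preventive, corrective costs

def cost_rate(N, reps=10_000):
    """Renewal-reward estimate of the expected cost per unit time under policy N."""
    total_cost = total_time = 0.0
    for _ in range(reps):
        t, rate2, cost = 0.0, lam2, 0.0
        for k in range(N):
            gap1 = rng.exponential(1.0 / lam1)   # time to next unit-1 failure
            gap2 = rng.exponential(1.0 / rate2)  # time to a unit-2 failure
            if gap2 < gap1:                      # unit 2 fails first:
                t += gap2                        # simultaneous system failure
                cost += c_f                      # corrective replacement
                break
            t += gap1
            rate2 += delta                       # failure rate interaction
            cost += c_p if k == N - 1 else c_r   # replace at the N-th failure
        total_cost += cost
        total_time += t
    return total_cost / total_time

rates = {N: cost_rate(N) for N in range(1, 9)}
N_star = min(rates, key=rates.get)
print(f"estimated optimal policy N* = {N_star}")
```

The paper derives the cost rate in closed form; the simulation simply searches the same objective numerically over a grid of N.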
10.
Tony Lancaster 《Journal of Econometrics》1985,28(1):155-169
This paper deals with models for the duration of an event that are misspecified by the neglect of random multiplicative heterogeneity in the hazard function. This type of misspecification has been widely discussed in the literature [e.g., Heckman and Singer (1982), Lancaster and Nickell (1980)], but no study of its effect on maximum likelihood estimators has been given. This paper aims to provide such a study, with particular reference to the Weibull regression model, which is by far the most frequently used parametric model [e.g., Heckman and Borjas (1980), Lancaster (1979)]. We define generalised errors and residuals in the sense of Cox and Snell (1968, 1971) and show how their use materially simplifies the analysis of both true and misspecified duration models. We show that multiplicative heterogeneity in the hazard of the Weibull model has two errors-in-variables interpretations. We give the exact asymptotic inconsistency of ML estimation in the Weibull model and a general expression for the inconsistency of ML estimators due to neglected heterogeneity for any duration model to O(σ²), where σ² is the variance of the error term. We also discuss the information matrix test for neglected heterogeneity in duration models and consider its behaviour when σ² > 0.
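The downward bias in the Weibull shape parameter caused by neglected multiplicative heterogeneity is easy to reproduce in simulation (the gamma-distributed heterogeneity and all parameter values are assumptions of this sketch, not the paper's setup):

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(2)

# True model: Weibull hazard with shape alpha, scaled by a multiplicative
# gamma-distributed frailty theta (mean 1, variance 1/k). Conditional on
# theta, survival is S(t | theta) = exp(-theta * t**alpha), so durations
# can be drawn as T = (E / theta)**(1/alpha) with E ~ Exp(1).
alpha_true, k, n = 1.5, 2.0, 20_000
theta = rng.gamma(k, 1.0 / k, n)
e = rng.exponential(1.0, n)
t = (e / theta) ** (1.0 / alpha_true)

# ML fit of a homogeneous Weibull (location fixed at 0) -- the misspecified
# model that neglects theta. The shape estimate is biased downward.
alpha_hat, loc, scale = weibull_min.fit(t, floc=0)
print(f"true shape {alpha_true}, ML estimate under neglected heterogeneity {alpha_hat:.3f}")
```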
11.
A class of asymptotically distribution-free tests is considered for comparing several treatments with a control when the data are subject to unequal right-censorship. A particular member of this class is proposed for use in practice and an illustrative numerical example is given. A general result for the Pitman efficacy of a test based on an asymptotically normal test statistic is proved for the multiparameter case, and using this result the efficacy of the proposed class of tests is obtained under sequences of translation and proportional-hazards alternatives. Simulation studies are conducted to compare the power of the proposed test with that of some other tests.
12.
《International Journal of Forecasting》1988,4(2):177-192
In this paper, we develop a four-parameter generalization of the logistic growth curve, the flexible-logistic (FLOG) model. It is shown that the FLOG model is sufficiently general to locate its point of inflection anywhere between its upper and lower bounds, and it can offer wide variation in its degree of symmetry for a given point of inflection. Although additional parameters always produce a better within-sample fit, the specific flexibility introduced by the FLOG class of models improves forecasting properties by controlling the saturation level and the approach to that level. The model is subjected to a number of theoretical and empirical tests and is applied to three sets of telecommunications data.
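The paper's exact FLOG specification is not reproduced here, but a four-parameter generalised logistic (Richards-type) curve illustrates the idea of a movable inflection point and a controllable saturation level; all data and parameter values below are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def richards(t, L, k, t0, nu):
    """Four-parameter generalised logistic: saturation level L, growth rate k,
    timing t0, and shape nu, which moves the inflection point between bounds."""
    return L / (1.0 + np.exp(-k * (t - t0))) ** (1.0 / nu)

rng = np.random.default_rng(3)
t = np.linspace(0.0, 30.0, 60)
y = richards(t, 100.0, 0.5, 12.0, 0.7) + rng.normal(0.0, 0.5, t.size)

# Nonlinear least squares fit; p0 gives rough starting values.
popt, _ = curve_fit(richards, t, y, p0=[90.0, 0.4, 10.0, 1.0], maxfev=10_000)
L_hat, k_hat, t0_hat, nu_hat = popt
print(f"estimated saturation level: {L_hat:.1f}")
```

Controlling the saturation parameter L directly is what gives such models their forecasting leverage: out-of-sample behaviour is pinned down by the estimated ceiling and the shape of the approach to it.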
13.
This paper is concerned with developing a semiparametric panel model to explain the trend in UK temperatures and other weather outcomes over the last century. We work with the monthly averaged maximum and minimum temperatures observed at twenty-six Meteorological Office stations; the data form an unbalanced panel. We allow the trend to evolve nonparametrically so as to obtain a fuller picture of the evolution of common temperature over the medium timescale. Profile likelihood estimators (PLE) are proposed and their statistical properties studied. The proposed PLE has improved asymptotic properties compared with sequential two-step estimators. Finally, forecasting based on the proposed model is studied.
14.
Journal of Productivity Analysis - In this paper, we adapt the ZSG-DEA model to the case of a reverse output, whose larger (smaller) values reflect lower (higher) achievements. Then, we introduce...
15.
This paper reports a Monte Carlo study of the small-sample properties of various estimators of the linear regression model with first-order autocorrelated errors. When the independent variables are trended, estimators using T transformed observations (Prais–Winsten) are much more efficient than those using T−1 (Cochrane–Orcutt). The best of the feasible estimators is iterated Prais–Winsten using a sum-of-squared-errors-minimizing estimate of the autocorrelation coefficient ρ. None of the feasible estimators performs well in hypothesis testing; all seriously underestimate standard errors, making estimated coefficients appear much more significant than they actually are.
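A minimal version of such an experiment, with the autocorrelation coefficient treated as known rather than estimated, can be sketched as follows (sample size, ρ, and the trend design are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def slope(cols, yt):
    """OLS coefficient on the second (x) column of the transformed regression."""
    X = np.column_stack(cols)
    return np.linalg.lstsq(X, yt, rcond=None)[0][1]

# y_t = a + b*x_t + u_t with AR(1) errors u_t = rho*u_{t-1} + eps_t and a
# trending regressor. rho is treated as known here; the paper's feasible
# estimators must also estimate it.
T, a, b, rho, reps = 20, 1.0, 2.0, 0.9, 2000
x = np.arange(1.0, T + 1)
ones = np.ones(T)
w = np.sqrt(1 - rho**2)

err_pw, err_co = [], []
for _ in range(reps):
    eps = rng.normal(0.0, 1.0, T)
    u = np.zeros(T)
    u[0] = eps[0] / w                    # stationary initial condition
    for s in range(1, T):
        u[s] = rho * u[s - 1] + eps[s]
    y = a + b * x + u

    # Cochrane-Orcutt: quasi-difference all series, dropping observation 1.
    c_co = ones[1:] - rho * ones[:-1]
    x_co, y_co = x[1:] - rho * x[:-1], y[1:] - rho * y[:-1]
    err_co.append(slope([c_co, x_co], y_co) - b)

    # Prais-Winsten: keep observation 1, rescaled by sqrt(1 - rho^2).
    c_pw = np.concatenate([[w], c_co])
    x_pw = np.concatenate([[w * x[0]], x_co])
    y_pw = np.concatenate([[w * y[0]], y_co])
    err_pw.append(slope([c_pw, x_pw], y_pw) - b)

mse_pw = float(np.mean(np.square(err_pw)))
mse_co = float(np.mean(np.square(err_co)))
print(f"MSE of slope: Prais-Winsten {mse_pw:.4f}, Cochrane-Orcutt {mse_co:.4f}")
```

With a trended regressor the first observation carries a disproportionate share of the information about the slope, which is why discarding it (Cochrane–Orcutt) is so costly.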
16.
Philippe Mongrain Richard Nadeau Bruno Jérôme 《International Journal of Forecasting》2021,37(1):289-301
Election forecasting has become a fixture of election campaigns in a number of democracies. Structural modeling, the major approach to forecasting election results, relies on ‘fundamental’ economic and political variables to predict the incumbent’s vote share usually a few months in advance. Some political scientists contend that adding vote intention polls to these models—i.e., synthesizing ‘fundamental’ variables and polling information—can lead to important accuracy gains. In this paper, we look at the efficiency of different model specifications in predicting the Canadian federal elections from 1953 to 2015. We find that vote intention polls only allow modest accuracy gains late in the campaign. With this backdrop in mind, we then use different model specifications to make ex ante forecasts of the 2019 federal election. Our findings have a number of important implications for the forecasting discipline in Canada as they address the benefits of combining polls and ‘fundamental’ variables to predict election results; the efficiency of varying lag structures; and the issue of translating votes into seats.
17.
SLA-based optimisation of virtualised resource for multi-tier web applications in cloud data centres
《Enterprise Information Systems》2013,7(7):743-767
Dynamic virtualised resource allocation is key to quality-of-service assurance for multi-tier web application services in cloud data centres. In this paper, we develop a self-management architecture for cloud data centres with a virtualisation mechanism for multi-tier web application services. Based on this architecture, we establish a flexible hybrid queueing model to determine the number of virtual machines at each tier of the virtualised application service environment. We then formulate a non-linear constrained optimisation problem with restrictions defined in the service level agreement, and develop a heuristic mixed optimisation algorithm to maximise the profit of cloud infrastructure providers while meeting performance requirements from different clients. Finally, we compare the effectiveness of our dynamic allocation strategy with two other allocation strategies. Simulation results show that the proposed resource allocation method is efficient in improving overall performance and reducing resource energy cost.
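The paper's hybrid queueing model is not reproduced here, but a standard M/M/m (Erlang-C) sketch shows the basic SLA-driven sizing step: pick the smallest number of identical virtual machines at a tier whose mean response time meets the SLA bound (all arrival and service rates below are hypothetical).

```python
import math

def erlang_c(m, a):
    """Probability that an arrival must queue in an M/M/m system with
    offered load a = lam/mu (requires a < m)."""
    s = sum(a**k / math.factorial(k) for k in range(m))
    last = a**m / (math.factorial(m) * (1 - a / m))
    return last / (s + last)

def mean_response(m, lam, mu):
    """Mean response time of an M/M/m queue (requires lam < m*mu)."""
    a = lam / mu
    return erlang_c(m, a) / (m * mu - lam) + 1.0 / mu

def vms_needed(lam, mu, sla):
    """Smallest number of VMs whose mean response time meets the SLA bound."""
    m = max(1, math.ceil(lam / mu))
    while lam >= m * mu or mean_response(m, lam, mu) > sla:
        m += 1
    return m

# Hypothetical tier parameters: arrival rate (req/s), per-VM service rate
# (req/s), and an SLA bound on mean response time (s).
m = vms_needed(lam=120.0, mu=10.0, sla=0.15)
print(f"VMs needed for this tier: {m}")
```

The paper's optimisation layers a profit objective and multiple tiers on top of this kind of per-tier sizing constraint.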
18.
19.
We propose a natural conjugate prior for the instrumental variables regression model. The prior is a natural conjugate one since the marginal prior and posterior of the structural parameter have the same functional expressions which directly reveal the update from prior to posterior. The Jeffreys prior results from a specific setting of the prior parameters and results in a marginal posterior of the structural parameter that has an identical functional form as the sampling density of the limited information maximum likelihood estimator. We construct informative priors for the Angrist–Krueger [1991. Does compulsory school attendance affect schooling and earnings? Quarterly Journal of Economics 106, 979–1014] data and show that the marginal posterior of the return on education in the US coincides with the marginal posterior from the Southern region when we use the Jeffreys prior. This result occurs since the instruments are the strongest in the Southern region and the posterior using the Jeffreys prior, identical to maximum likelihood, focusses on the strongest available instruments. We construct informative priors for the other regions that make their posteriors of the return on education similar to that of the US and the Southern region. These priors show the amount of prior information needed to obtain comparable results for all regions.
20.
Martin R.J. Knapp 《Socio》1977,11(4):205-212
British central government policy regarding the size and design of residential homes for the elderly has changed direction a number of times in the post-war period. Employing data collected in the Census of Residential Accommodation of 1970, the effectiveness of these policy changes is analysed in a multiple regression framework. The results suggest that Central and Local Authority policies have been only partially successful in ensuring minimum standards of design, and they cast considerable doubt upon the efficacy of converting private dwellings into residential homes.