Found 20 similar records (search time: 0 ms)
1.
We consider the problem of estimating input parameters for a differential equation model, given experimental observations of the output. As time and cost limit both the number and quality of observations, the experimental design is critical. A generalized notion of leverage is derived and, with this, we define directional leverage. Effective designs are argued to be those that sample in regions of high directional leverage. We present an algorithm for finding optimal designs and then establish relationships to existing design optimality criteria. Numerical examples demonstrating the performance of the algorithm are presented.
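For orientation, classical regression leverage, the notion the paper generalizes, is the diagonal of the hat matrix. A minimal sketch follows; the design matrix is invented for illustration:

```python
# A minimal sketch of classical regression leverage, the notion the paper
# generalizes to "directional leverage" for ODE models. The design matrix
# here is invented for illustration.
import numpy as np

rng = np.random.default_rng(6)
X = np.column_stack([np.ones(30), rng.uniform(0, 10, 30)])  # design matrix
H = X @ np.linalg.solve(X.T @ X, X.T)     # hat matrix X (X'X)^{-1} X'
leverage = np.diag(H)
print(leverage.max(), leverage.argmax())  # sample points with high leverage
```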
2.
Enterprise Information Systems, 2013, 7(2): 139–159
Business processes and their related workflow systems have received growing interest in practice and research over the last decade, and many analytical methodologies for the analysis and design of workflow systems have emerged. A recent formal approach studies workflows with a graph-theoretic construct called the 'metagraph', which has proved effective for analysing the connectivity and interactions of information and resources between workflow components. However, past work on metagraph analysis has been element-based. Since nodes in a metagraph represent either the input or the output of an activity, it is natural to treat the information contained in a node as a unit. This paper takes a node-centric view of metagraphs, a major departure from the element-based approach used today. The change in focus requires an analysis framework built around the node-centric view. New basic constructs are introduced, including 'surplus sets', 'deficit sets', the 'state of a path', and a 'node-centric view of adjacency matrices'. The approach yields computationally feasible methods for identifying elements that are oversupplied and/or undersupplied along any path from a source node to a target node of the metagraph. Such information can be valuable for designing workflow systems. The node-centric approach is also shown to extend the basic constructs of element-view metagraphs and to complement them as a method for validating the information requirements of workflow models. Illustrative examples are given.
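As a loose illustration of the node-centric bookkeeping described above, the sketch below walks a path of activity edges (input set to output set) and tracks undersupplied ('deficit') and oversupplied ('surplus') elements. The path, element names and set operations are assumptions, not the paper's formalism:

```python
# A hedged sketch of surplus/deficit tracking along a metagraph path.
# Each edge is an activity taking a set of input elements to a set of
# output elements; this is an illustrative reading, not the paper's own.
def path_state(path, initial):
    """Return (surplus, deficit) after firing each edge of the path in order."""
    available = set(initial)
    deficit = set()
    for inputs, outputs in path:
        deficit |= set(inputs) - available  # inputs that were never produced
        available |= set(outputs)           # outputs become available downstream
    all_inputs = set().union(*(set(i) for i, _ in path))
    surplus = available - all_inputs - set(initial)
    return surplus, deficit

# edges as (input elements, output elements) of workflow activities
path = [({"order"}, {"invoice", "packing_list"}),
        ({"invoice", "approval"}, {"payment"})]
print(path_state(path, initial={"order"}))
# surplus: outputs no later activity consumes; deficit: {'approval'}
```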
3.
Tony Lancaster, Journal of Econometrics, 1985, 28(1): 155–169
This paper deals with models for the duration of an event that are misspecified by the neglect of random multiplicative heterogeneity in the hazard function. This type of misspecification has been widely discussed in the literature [e.g., Heckman and Singer (1982), Lancaster and Nickell (1980)], but no study of its effect on maximum likelihood estimators has been given. This paper aims to provide such a study, with particular reference to the Weibull regression model, which is by far the most frequently used parametric model [e.g., Heckman and Borjas (1980), Lancaster (1979)]. We define generalised errors and residuals in the sense of Cox and Snell (1968, 1971) and show how their use materially simplifies the analysis of both true and misspecified duration models. We show that multiplicative heterogeneity in the hazard of the Weibull model has two errors-in-variables interpretations. We give the exact asymptotic inconsistency of maximum likelihood estimation in the Weibull model and a general expression for the inconsistency of maximum likelihood estimators due to neglected heterogeneity for any duration model to O(σ²), where σ² is the variance of the error term. We also discuss the information matrix test for neglected heterogeneity in duration models and consider its behaviour when σ² > 0.
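For orientation, the Weibull hazard with a neglected multiplicative frailty can be written as follows (standard textbook notation, assumed rather than taken from the paper):

```latex
% Weibull regression hazard with a multiplicative frailty v; the
% misspecified model omits v.
\lambda(t \mid x, v) = v \, \alpha t^{\alpha - 1} \exp(x'\beta),
\qquad \mathbb{E}[v] = 1, \quad \operatorname{Var}(v) = \sigma^2 .
```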
4.
We present a unification of the Archimedean and the Lévy-frailty copula models for portfolio default modelling. The new default model exhibits a copula known as a scale mixture of Marshall-Olkin copulas, and an investigation of the dependence structure reveals that desirable properties of both original models are combined: a wider range of dependence patterns is allowed while analytical tractability is retained. Furthermore, simultaneous defaults and default clustering are incorporated, and a hierarchical extension allows for a heterogeneous dependence structure. Finally, the model is applied to the pricing of CDO contracts, for which an efficient Laplace transform inversion approach is developed. Because the model separates marginal default probabilities from the dependence structure, it can be calibrated to CDS contracts in a first step. In a second step, the calibration of several parametric families to CDO contracts demonstrates a good fit, further underlining the suitability of the approach.
5.
Using the relatively mature single-factor T-M (Treynor-Mazuy) and H-M (Henriksson-Merton) models, this article evaluates the performance of 10 funds over the 194 trading weeks from 1 January 2004 to 30 November 2007, examining whether securities investment funds display genuine stock-selection and market-timing ability in China's weak-form efficient market. The results show that, apart from a few individual funds, Chinese funds as a whole lack market-timing ability; they show some stock-selection ability, but it is not statistically significant.
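For readers unfamiliar with the T-M specification, a minimal sketch of the timing regression on weekly excess returns follows; the simulated data and variable names are assumptions, not the article's sample:

```python
# A minimal sketch of the single-factor Treynor-Mazuy (T-M) timing
# regression. The example data are simulated, not the article's funds.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
excess_mkt = rng.normal(0.002, 0.02, 194)          # weekly market excess returns
excess_fund = 0.001 + 0.9 * excess_mkt + rng.normal(0, 0.01, 194)

# T-M: R_p - r_f = alpha + beta*(R_m - r_f) + gamma*(R_m - r_f)^2 + eps
X = sm.add_constant(np.column_stack([excess_mkt, excess_mkt**2]))
res = sm.OLS(excess_fund, X).fit()
alpha, beta, gamma = res.params                    # gamma > 0 suggests timing ability
print(res.summary())
```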
6.
This paper deals with the specification, estimation and testing of single-equation, reduced-form type equations in which the dependent variable takes only non-negative integer values. Beginning with Poisson and compound Poisson models, which involve strong assumptions, a variety of possible stochastic models and their implications are discussed. A number of estimators and their properties are considered in the light of uncertainty about the data-generation process. The paper also considers the role of tests in the sequential revision of the model specification, beginning with the Poisson case, and provides a detailed application of the estimators and tests to a model of the number of doctor consultations.
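As a minimal illustration of the Poisson starting point, the sketch below fits a log-linear count model to simulated consultation counts; the covariates and data are invented:

```python
# A minimal sketch of a Poisson count-data regression of the kind the
# abstract starts from; covariates and data are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
age = rng.uniform(20, 80, 500)
income = rng.normal(50, 10, 500)
X = sm.add_constant(np.column_stack([age, income]))
mu = np.exp(-2.0 + 0.03 * age + 0.01 * income)     # true log-linear mean
visits = rng.poisson(mu)                           # number of consultations

model = sm.GLM(visits, X, family=sm.families.Poisson()).fit()
print(model.params)
# a crude overdispersion check against the Poisson assumption:
print(model.pearson_chi2 / model.df_resid)
```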
7.
Enterprise Information Systems, 2013, 7(3): 287–299
This research conducts a thorough study of X-party material flow (XPMF), its theory and its applications. The XPMF concept is an extension of material flow (MF) theory. To elucidate XPMF as a type of MF service model with a PMF (party, material, flow) fractal structure, and to characterise XPMF, we develop the three-pyramid synergetic operational model of XPMF. Analysis of several cases suggests that OIR (objective relational grade, information-sharing grade, and resource-complementary grade) is the set of order parameters that controls and determines the formation of the new XPMF structure and its degree of order, thereby revealing the mechanism of XPMF formation and evolution. We also provide principles and methods for controlling XPMF.
8.
Patrice Bertail, Christian Haefke, Dimitris N. Politis, Halbert White, Journal of Econometrics, 2004, 120(2): 295–326
In this paper we propose a subsampling estimator for the distribution of statistics diverging at either known or unknown rates when the underlying time series is strictly stationary and strong mixing. Based on these results, we provide a detailed discussion of how to estimate extreme order statistics with dependent data and present two applications to assessing financial market risk. Our method performs well in estimating Value at Risk and provides a superior alternative to Hill's estimator in operationalizing Safety First portfolio selection.
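For comparison, the classical Hill estimator against which the subsampling method is benchmarked can be sketched as follows; the simulated returns and the choice of k are assumptions:

```python
# A hedged sketch of the classical Hill estimator for the tail index,
# the benchmark mentioned above. Data and the choice of k are invented.
import numpy as np

def hill_estimator(x, k):
    """Hill estimate of the tail index from the k largest observations."""
    xs = np.sort(x)[::-1]                 # descending order statistics
    logs = np.log(xs[:k]) - np.log(xs[k])
    return 1.0 / logs.mean()              # estimate of the tail index alpha

rng = np.random.default_rng(2)
returns = rng.pareto(3.0, 5000) + 1.0     # Pareto tail with alpha = 3
print(hill_estimator(returns, k=200))     # should be near 3
```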
9.
A generalization of the Wald statistic for testing composite hypotheses is suggested for dependent data from exponential models which include Lévy processes and diffusion fields. The generalized statistic is proved to be asymptotically chi-squared distributed under regular composite hypotheses. It is simpler and more easily available than the generalized likelihood ratio statistic. Simulations in an example where the latter statistic is available show that the generalized Wald test achieves higher average power than the generalized likelihood ratio test.
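For reference, the classical Wald statistic being generalized has the familiar form (standard notation, assumed rather than quoted from the paper):

```latex
% Wald statistic for the composite hypothesis g(theta) = 0, with ML
% estimator \hat\theta_n, covariance estimate \hat\Sigma_n, and
% Jacobian G(\theta) = \partial g(\theta) / \partial \theta'.
W_n = g(\hat\theta_n)' \left[ G(\hat\theta_n)\, \hat\Sigma_n\,
      G(\hat\theta_n)' \right]^{-1} g(\hat\theta_n)
      \;\xrightarrow{d}\; \chi^2_{\operatorname{rank}(G)} .
```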
10.
Tamás Rudas, Quality and Quantity, 1991, 25(4): 345–358
The present paper considers some new models for the analysis of multidimensional contingency tables. Although the theoretical background already appeared in Haberman (1974), prescribed conditional interaction (PCIN) models were introduced by Rudas (1987) and their mathematical properties were worked out by Leimer and Rudas (1988). These models are defined by prescribing the values of certain conditional interactions in the contingency table. Conditional interaction is defined here as the logarithm of an appropriately defined conditional odds ratio. This conditional odds ratio is a conditional version of a generalization of the well-known odds ratio of a 2×2 table and of the three-factor interaction term of a 2×2×2 table, and it applies to any number of dimensions and any number of categories of the variables. The well-known log-linear (LL) models are special PCIN models. Estimated frequencies under PCIN models and tests of fit can be computed using existing statistical software (e.g. BMDP). The paper describes the class of PCIN models and compares it to the class of association models of Goodman (1981). As LL models are widely used in the analysis of social mobility tables, the application of more general PCIN models is illustrated.
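As a concrete anchor for the interaction terms mentioned above, the standard definitions (not quoted from the paper) are:

```latex
% Log odds ratio of a 2x2 table with cell probabilities p_{ij}:
\log\theta = \log\frac{p_{11}\,p_{22}}{p_{12}\,p_{21}}
% Three-factor interaction of a 2x2x2 table: the log ratio of the two
% conditional odds ratios at the levels of the third variable.
\log\theta_{123} = \log\frac{p_{111}\,p_{221} \,/\, (p_{121}\,p_{211})}
                            {p_{112}\,p_{222} \,/\, (p_{122}\,p_{212})}
```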
11.
Lawrence J. Christiano, Journal of Economic Dynamics and Control, 1985, 9(4): 363–404
This paper describes and implements a procedure for estimating the timing interval in any linear econometric model. The procedure is applied to Taylor's model of staggered contracts using annually averaged price and output data. The fit of the version of Taylor's model with serially uncorrelated disturbances improves as the timing interval of the model is reduced.
12.
13.
David Bimler, Quality and Quantity, 2013, 47(2): 775–790
In many of the social sciences it is useful to explore the "working models" or mental schemata that people use to organise items from some cognitive or perceptual domain. As the number of items increases, versions of the Method of Sorting become important techniques for collecting data about inter-item similarities. Because people do not necessarily all bring the same mental model to the items, sorting data may also identify a range of models within the population of interest, or even distinct subgroups. Anthropology provides one tool for this purpose in the form of Cultural Consensus Analysis (CCA). CCA itself proves to be a special case of the "Points of View" approach, in which factor analysis is applied to the subjects' method-of-sorting responses to obtain idealised or prototypal modes of organising the items, the "viewpoints". These idealised modes account for each subject's data by combining in proportions given by the subject's factor loadings. The separate organisation represented by each viewpoint can be made explicit with clustering or multidimensional scaling. The technique is illustrated with job-sorting data from occupational research and social-network data from primate behaviour.
14.
In this article we compare educational mobility in Hungary and the Netherlands. The analysis is based on representative samples from 1981/2 and covers both father-son and father-daughter educational mobility, specified for four age cohorts. Two different theoretical perspectives are adopted, the industrialisation thesis and the reproduction thesis, from which the expectations for the analysis are derived. These expectations are confronted with the empirical data: first by means of overall mobility-table measures, second by means of Hope-type log-linear analysis. The analysis shows that, contrary to the expectations from the reproduction thesis, no significant differences in circulation mobility can be detected between the two countries. Most of the differing mobility patterns are found in the structural component of mobility. This component can be uniform [linear] or non-uniform; the direction of the very significant differences supports the industrialisation thesis in this respect. The article ends with a discussion of the visible effects of the educational reforms that were introduced in both countries and with some suggestions for the next research steps.
15.
The tree that results from applying the Sonquist and Morgan method is based on the principle of dichotomizing the population, at each point and according to one of the independent variables, so as to explain as much variance of the dependent variable as possible. Before applying this tree-analysis method to a distribution of the dependent variable, we may imagine that this distribution was the result of some process Pj. The present paper describes such processes, in which each element of the population passes through a sequence of nodes, at each of which the value of the dependent variable is modified. These modifications can be expressed by certain types of regression equations, and the nodes can be considered as constituting a tree structure. If we analyse the resulting distribution, the tree underlying the process described above may emerge; however, the regression functions involved are not determined. Furthermore, processes that are not tree-like may also produce a given distribution. Thus tree analysis can reveal only part of the processes that led to a given distribution. The explanation of such processes by social science theories is studied.
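A minimal sketch of the variance-maximizing dichotomization step in the Sonquist and Morgan method, with invented data:

```python
# Illustrative best-split search: choose the binary split on one
# independent variable that explains the most variance of y. The data
# are invented; this shows one node-splitting step, not the full method.
import numpy as np

def best_split(x, y):
    """Return the threshold on x that maximizes between-group variance of y."""
    order = np.argsort(x)
    x_sorted, y_sorted = x[order], y[order]
    best_t, best_ss = None, -np.inf
    for i in range(1, len(x)):
        left, right = y_sorted[:i], y_sorted[i:]
        # between-group sum of squares for this dichotomy
        ss = (len(left) * (left.mean() - y.mean()) ** 2
              + len(right) * (right.mean() - y.mean()) ** 2)
        if ss > best_ss:
            best_ss, best_t = ss, (x_sorted[i - 1] + x_sorted[i]) / 2
    return best_t, best_ss

rng = np.random.default_rng(3)
x = rng.uniform(0, 1, 200)
y = np.where(x > 0.6, 2.0, 0.0) + rng.normal(0, 0.3, 200)
print(best_split(x, y))   # threshold should be near 0.6
```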
16.
17.
The paper demonstrates how the E-stability principle introduced by Evans and Honkapohja [2001. Learning and Expectations in Macroeconomics. Princeton University Press, Princeton, NJ] can be applied to models with heterogeneous and private information in order to assess the stability of rational expectations equilibria under learning. The paper extends known stability results for the Grossman and Stiglitz [1980. On the impossibility of informationally efficient markets. American Economic Review 70, 393–408] model to a more general case with many differentially informed agents and to the case where information is acquired endogenously by optimizing agents. In both cases it turns out that the rational expectations equilibrium of the model is inherently E-stable, and thus locally stable under recursive least squares learning.
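To make the learning scheme concrete, here is a generic recursive least squares (RLS) updating sketch of the kind under which E-stability implies local convergence; the forecasting model and data are assumptions, not the paper's asset-pricing setup:

```python
# A hedged sketch of recursive least squares learning: agents update
# belief coefficients phi for a generic forecasting model y = phi'x.
import numpy as np

rng = np.random.default_rng(4)
phi = np.zeros(2)                     # agents' belief coefficients
R = np.eye(2)                         # moment matrix estimate
true_phi = np.array([0.5, 0.8])

for t in range(1, 2001):
    x = np.array([1.0, rng.normal()])          # regressors
    y = true_phi @ x + rng.normal(0, 0.1)      # realized outcome
    gain = 1.0 / (t + 1)                       # decreasing gain
    R += gain * (np.outer(x, x) - R)           # update moment matrix
    phi += gain * np.linalg.solve(R, x * (y - phi @ x))
print(phi)                                     # converges toward true_phi
```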
18.
19.
Michael J. Todd, Journal of Mathematical Economics, 1979, 6(2): 135–144
Equilibria in economies without production can be approximated by computing fixed points of continuous functions. With production present, the functions are usually replaced by set-valued mappings, and fixed-point algorithms converge slowly. Here we propose continuous functions whose fixed points are equilibria in economies whose production is modelled by a finite list of activities. The penalty is that a least-distance program must be solved at each iteration. The approach generalizes to allow ad valorem taxes on production. Finally, analogous arguments apply to the computation of an invariant optimal vector of capital stocks.
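As a toy illustration of the fixed-point approach (not the paper's construction, which handles production via activity analysis and least-distance programs), the sketch below iterates a continuous price map for a pure-exchange economy:

```python
# Toy fixed-point computation of an exchange-economy equilibrium; the
# Cobb-Douglas economy and the tatonnement-style map are invented.
import numpy as np

def excess_demand(p):
    """Excess demand in a 2-good Cobb-Douglas exchange economy."""
    e = np.array([1.0, 1.0])               # endowments
    alpha = np.array([0.3, 0.7])           # expenditure shares
    wealth = p @ e
    return alpha * wealth / p - e

p = np.array([0.5, 0.5])
for _ in range(200):
    # a continuous map whose fixed point has zero excess demand;
    # prices are kept positive and normalized to the simplex
    p = np.maximum(p + 0.1 * excess_demand(p), 1e-8)
    p = p / p.sum()
print(p, excess_demand(p))                 # excess demand near zero
```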
20.
This paper shows that the R² and the standard error have fatal flaws and are inadequate accuracy tests. Using data from a Krusell-Smith economy, I show that approximations for the law of motion of aggregate capital that are clearly inaccurate, with the true standard deviation of aggregate capital up to 14% (119%) higher than the implied value, can nevertheless have an R² as high as 0.9999 (0.99). The key to a more powerful test is that the predictions of the aggregate law of motion are not updated with the aggregated simulated individual data.
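The contrast the paper draws can be sketched with a stand-in AR(1) law of motion: one-step-ahead predictions, updated with the true data each period, yield a very high R², while the dynamic simulation, never updated, misstates the variability. All numbers below are invented and the AR(1) is not the Krusell-Smith model itself:

```python
# A hedged sketch of the accuracy-test contrast: high one-step R^2 versus
# a drifting, variance-understating dynamic simulation.
import numpy as np

rng = np.random.default_rng(5)
T = 2000
k = np.empty(T)
k[0] = 1.0
for t in range(T - 1):                     # "true" aggregate series
    k[t + 1] = 0.1 + 0.98 * k[t] + rng.normal(0, 0.01)

a, b = 0.12, 0.975                         # a slightly misspecified fitted law
one_step = a + b * k[:-1]                  # updated with the true data each period
r2 = 1 - np.var(k[1:] - one_step) / np.var(k[1:])

dyn = np.empty(T)                          # dynamic simulation: never updated
dyn[0] = k[0]
for t in range(T - 1):
    dyn[t + 1] = a + b * dyn[t]

print(round(r2, 4))                        # near 1 despite the misspecification
print(k[-500:].std(), dyn[-500:].std())    # implied path understates true variability
```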