Similar Documents
20 similar documents found.
1.
As well as arising naturally in the study of non-intersecting random paths, random spanning trees, and eigenvalues of random matrices, determinantal point processes (sometimes also called fermionic point processes) are relatively easy to simulate and provide a quite broad class of models that exhibit repulsion between points. The fundamental ingredient used to construct a determinantal point process is a kernel giving the pairwise interactions between points: the joint distribution of any number of points then has a simple expression in terms of determinants of certain matrices defined from this kernel. In this paper we initiate the study of an analogous class of point processes that are defined in terms of a kernel giving the interaction between 2M points for some integer M. The role of matrices is now played by 2M-dimensional “hypercubic” arrays, and the determinant is replaced by a suitable generalization of it to such arrays—Cayley’s first hyperdeterminant. We show that some of the desirable features of determinantal point processes continue to be exhibited by this generalization.
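For orientation, the determinantal structure referred to above can be made explicit: for a kernel K, the n-point correlation (joint intensity) functions of a determinantal point process take the standard form

$$\rho_n(x_1,\dots,x_n) = \det\big[K(x_i,x_j)\big]_{i,j=1}^{n},$$

and the paper's generalization replaces this determinant by Cayley's first hyperdeterminant of a 2M-dimensional array built from the kernel.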

2.
A new approach in constructing orthogonal and nearly orthogonal arrays
Orthogonal arrays have been constructed by a number of mathematical tools such as orthogonal Latin squares, Hadamard matrices, group theory and finite fields. Wang and Wu (1992) proposed the concept of a nearly orthogonal array and found a number of such arrays with high efficiency. In this paper we propose some criteria for non-orthogonality and two algorithms for the construction of orthogonal and nearly orthogonal arrays achieving higher efficiency than that obtained by Wang and Wu. Received: September 1999
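As a minimal illustration of what orthogonality demands of an array (and hence of what a non-orthogonality criterion has to quantify), the sketch below sums the squared deviations of level-pair counts from perfect balance. The function and the measure are illustrative assumptions, not the criteria proposed in the paper.

```python
import numpy as np
from itertools import combinations
from collections import Counter

def pairwise_imbalance(array):
    """Total squared deviation of level-pair counts from perfect balance.
    An orthogonal array of strength 2 gives 0; larger values indicate a
    stronger departure from orthogonality (illustrative measure only)."""
    n, m = array.shape
    total = 0.0
    for i, j in combinations(range(m), 2):
        counts = Counter(zip(array[:, i], array[:, j]))
        levels_i, levels_j = np.unique(array[:, i]), np.unique(array[:, j])
        expected = n / (len(levels_i) * len(levels_j))   # balanced count per level pair
        total += sum((counts.get((a, b), 0) - expected) ** 2
                     for a in levels_i for b in levels_j)
    return total

# A full 2^3 factorial is an orthogonal array of strength 2, so imbalance is 0
oa = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)])
print(pairwise_imbalance(oa))   # 0.0
```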

3.
4.
In this paper, we introduce weighted estimators of the location and dispersion of a multivariate data set with weights based on the ranks of the Mahalanobis distances. We discuss some properties of the estimators, such as the breakdown point, influence function and asymptotic variance. The outlier detection capacities of different weight functions are compared. A simulation study is given to investigate the finite-sample behavior of the estimators. The research of Stefan Van Aelst was supported by a grant of the Fund for Scientific Research-Flanders (FWO-Vlaanderen) and by IAP research network grant nr. P6/03 of the Belgian government (Belgian Science Policy).
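A minimal sketch of the general idea, assuming classical starting estimates and a simple linearly decreasing weight function (both illustrative choices, not the authors' definitions):

```python
import numpy as np

def rank_weighted_location_scatter(X):
    """Illustrative one-step estimator: rank the squared Mahalanobis
    distances from the classical mean/covariance and down-weight points
    with large ranks. The weight function is an arbitrary choice here."""
    n, p = X.shape
    center = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    diff = X - center
    d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)  # squared Mahalanobis distances
    ranks = np.argsort(np.argsort(d2)) + 1        # ranks 1..n (largest distance -> rank n)
    w = 1.0 - (ranks - 1) / n                     # linearly decreasing weights in the rank
    w /= w.sum()
    loc = w @ X                                   # weighted location
    centered = X - loc
    scatter = (w[:, None] * centered).T @ centered  # weighted dispersion matrix
    return loc, scatter

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X[:5] += 8                        # contaminate a few observations
loc, scatter = rank_weighted_location_scatter(X)
print(loc)                        # much less affected by the shifted points than the plain mean
```

Different choices of the weight function trade off robustness against efficiency, which is what the comparison of weight functions in the paper is about.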

5.
Summary: It is shown that the idempotent matrices of the information matrix of an m-associate class PBIB design can be obtained in terms of its association matrices. Some special cases are also considered.

6.
We introduce a class of multivariate seasonal time series models with periodically varying parameters, abbreviated by the acronym SPVAR. The model is suitable for multivariate data, and combines a periodic autoregressive structure and a multiplicative seasonal time series model. The stationarity conditions (in the periodic sense) and the theoretical autocovariance functions of SPVAR stochastic processes are derived. Estimation and checking stages are considered. The asymptotic normal distribution of the least squares estimators of the model parameters is established, and the asymptotic distributions of the residual autocovariance and autocorrelation matrices in the class of SPVAR time series models are obtained. In order to check model adequacy, portmanteau test statistics are considered and their asymptotic distributions are studied. A simulation study is briefly discussed to investigate the finite-sample properties of the proposed test statistics. The methodology is illustrated with a bivariate quarterly data set on travelers entering Canada.
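For orientation, a plain periodic VAR skeleton, without the multiplicative seasonal component that the SPVAR model adds, can be written as

$$Y_t=\Phi_1(s)\,Y_{t-1}+\cdots+\Phi_p(s)\,Y_{t-p}+\varepsilon_t,\qquad s\equiv t\ (\mathrm{mod}\ S),$$

where the coefficient matrices $\Phi_k(s)$ repeat with period $S$ (e.g. $S=4$ for quarterly data such as the Canadian travel series) and $\varepsilon_t$ is multivariate white noise; the notation is illustrative rather than the paper's.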

7.
8.
We introduce the Speculative Influence Network (SIN) to decipher the causal relationships between sectors (and/or firms) during financial bubbles. The SIN is constructed in two steps. First, we develop a Hidden Markov Model (HMM) of regime-switching between a normal market phase represented by a geometric Brownian motion and a bubble regime represented by the stochastic super-exponential Sornette and Andersen (Int J Mod Phys C 13(2):171–188, 2002) bubble model. The calibration of the HMM provides the probability at each time for a given security to be in the bubble regime. Conditional on two assets being qualified in the bubble regime, we then use the transfer entropy to quantify the influence of the returns of one asset i onto another asset j, from which we introduce the adjacency matrix of the SIN among securities. We apply our technology to the Chinese stock market during the period 2005–2008, during which a normal phase was followed by a spectacular bubble ending in a massive correction. We introduce the Net Speculative Influence Intensity variable as the difference between the transfer entropies from i to j and from j to i, which is used in a series of rank ordered regressions to predict the maximum loss (%MaxLoss) endured during the crash. The sectors that influenced other sectors the most are found to have the largest losses. There is some predictability obtained by using the transfer entropy involving industrial sectors to explain the %MaxLoss of financial institutions but not vice versa. We also show that the bubble state variable calibrated on the Chinese market data corresponds well to the regimes when the market exhibits a strong price acceleration followed by a clear change of price regimes. Our results suggest that SIN may contribute significant skill to the development of general linkage-based systemic risk measures and early warning metrics.
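In its standard first-order form (the paper may condition on longer return histories), the transfer entropy from asset i to asset j and the Net Speculative Influence Intensity defined above read

$$T_{i\to j}=\sum p\big(x^{j}_{t+1},x^{j}_{t},x^{i}_{t}\big)\log\frac{p\big(x^{j}_{t+1}\mid x^{j}_{t},x^{i}_{t}\big)}{p\big(x^{j}_{t+1}\mid x^{j}_{t}\big)},\qquad \mathrm{NSII}_{ij}=T_{i\to j}-T_{j\to i},$$

so that a positive $\mathrm{NSII}_{ij}$ indicates that asset i is, on balance, the influencer of asset j.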

9.
This paper aims to clarify three issues concerning the weighting methodology generally used to evaluate interindustry R&D spillovers. These issues concern the likely nature of the spillovers estimated through different types of supporting matrices; the similarity between input–output (IO), technology flows and technological proximity matrices; and the relevance of the assumption that a single matrix can be used for different countries. Data analyses of weighting components show that technology flows matrices are in an intermediate position between IO matrices and technological proximity matrices, but closer to the former. The various IO matrices, as well as the three technological proximity matrices, are very similar to each other. The panel data estimates of the effect of different types of interindustry R&D spillovers on industrial productivity growth in the G7 countries reject the hypotheses that a technology flows matrix can be approximated by an IO matrix and that a single IO matrix can be used for different countries. By transitivity, the procedure that comprises using a single technology flow matrix for several countries is not reliable. The international comparison shows that each country benefits from different types of R&D externality. In Japan and, to a lesser extent, in the US, the rate of return to direct R&D is very high and is likely to compensate for relatively weak interindustry R&D spillover effects. In the five other industrialized countries, the reverse observation is true: strong social rates of return to R&D counterbalance the poor performances of direct R&D.

10.
In this article, we introduce and evaluate testing procedures for specifying the number k of nearest neighbours in the weights matrix of a spatial econometric model. An increasing and a decreasing neighbours testing procedure are suggested. Kelejian's J-test for non-nested spatial models is used in the testing procedures. The testing procedures give formal justification for the choice of k, something which has been lacking in the classical spatial econometric literature. Simulations show that the testing procedures can be used in large samples to determine k. An empirical example involving house price data is provided.
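The sketch below shows how the family of candidate weights matrices indexed by k is typically built (row-standardised k-nearest-neighbour weights from point coordinates); the testing procedures themselves, based on Kelejian's J-test, are not reproduced here, and the function name and coding are illustrative assumptions.

```python
import numpy as np

def knn_weights(coords, k):
    """Row-standardised k-nearest-neighbour spatial weights matrix:
    W[i, j] = 1/k if j is among the k nearest neighbours of i, else 0."""
    n = coords.shape[0]
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)  # pairwise distances
    np.fill_diagonal(d, np.inf)          # an observation is not its own neighbour
    W = np.zeros((n, n))
    for i in range(n):
        W[i, np.argsort(d[i])[:k]] = 1.0 / k
    return W

coords = np.random.default_rng(1).uniform(size=(50, 2))      # e.g. house locations
W_k4, W_k5 = knn_weights(coords, 4), knn_weights(coords, 5)   # competing specifications of k
```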

11.
Yan Liu, Min-Qian Liu, Metrika (2012) 75(1): 33–53
Supersaturated designs (SSDs) have been highly valued in recent years for their ability to screen out important factors in the early stages of experiments. Recently, Liu and Lin (in Statist Sinica 19:197–211, 2009) proposed a method to construct optimal mixed-level SSDs from smaller multi-level SSDs and transposed orthogonal arrays (OAs). This paper extends their method to construct more equidistant optimal SSDs by replacing the multi-level SSDs and transposed OAs with mixed-level SSDs and general transposed difference matrices, respectively, and then proposes two practical methods for constructing weak equidistant SSDs based on this extended method. A large number of new optimal SSDs can be constructed from these three methods. Some examples are provided and more new designs are listed in the Appendix for practical use.

12.
Under certain conditions, a broad class of qualitative and limited dependent variable models can be consistently estimated by the method of moments using a non-iterative correction to the ordinary least squares estimator, with only a small loss of efficiency compared to maximum likelihood estimation. The class of models is that obtained from a classical multinormal regression by any type of censoring or truncation and includes the tobit, probit, two-limit probit, truncated regression, and some variants of the sample selection models. The paper derives the estimators and their asymptotic covariance matrices.

13.
In computer-aided tolerance design (CAT), integrated design of dimensional and geometric tolerances remains one of the research hotspots. Polychromatic sets theory (PST) is a new mathematical tool, which is especially suitable for formal hierarchical structure models. Based on PST, in this article, a new hierarchical representation model for tolerance synthesis is presented to realise integrated design of dimensional and geometric tolerances. According to the inference relations between unified and individual colours of PST, the synthesis matrices of variational geometric constraints (VGC) are established in the VGC tier of the model, and the synthesis matrices of tolerance types are established in the tolerance type tier of the model. On this basis, the synthesis processes from the feature tier to the VGC tier and from the VGC tier to the tolerance type tier can be realised. VGCs, which are achieved by the synthesis matrices of VGCs, can be combined together to establish a well-constrained VGC network (VGCN). Tolerance types, which are achieved by the synthesis matrices of tolerance types, can be added to the well-constrained VGCN to construct a well-constrained tolerance network. An application example is given in the article to illustrate the synthesis steps.

14.
In this paper the extended growth curve model is considered. The literature comprises two versions of the model. These models can be connected by one-to-one reparameterizations, but since the estimators are non-linear it is not obvious how to transmit properties of estimators from one model to another. Detailed knowledge concerning the estimators is available for only one of the models (Kollo and von Rosen, Advanced multivariate statistics with matrices. Springer, Dordrecht, 2005); the objective of this paper is therefore to present uniqueness properties and moment relations for the estimators of the second model. A further aim of the paper is to complete the results for the model presented in Kollo and von Rosen (Advanced multivariate statistics with matrices. Springer, Dordrecht, 2005). The presented proofs of uniqueness for linear combinations of estimators are valid for both models and are simplifications of proofs given in Kollo and von Rosen (Advanced multivariate statistics with matrices. Springer, Dordrecht, 2005).
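For context, a sketch of one common parameterization of the extended growth curve (sum-of-profiles) model, assuming the Kollo and von Rosen formulation, is

$$X=\sum_{i=1}^{m}A_iB_iC_i+E,\qquad \mathcal{C}(C_m')\subseteq\cdots\subseteq\mathcal{C}(C_1'),$$

where X is the p×n data matrix, the A_i and C_i are known within- and between-individual design matrices, the B_i are unknown parameter matrices, the columns of E are independent N_p(0, Σ), and C(·) denotes column space; the second version in the literature is a one-to-one reparameterization of this form, as noted above.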

15.
Screening designs are useful for situations where a large number of factors are examined but only a few, k, of them are expected to be important. Traditionally, orthogonal arrays such as Hadamard matrices and Plackett–Burman designs have been studied for this purpose. It is therefore of practical interest for a given k to know all the classes of inequivalent projections of the design into the k dimensions that have certain statistical properties. In this paper we present 15 inequivalent Hadamard matrices of order n=32 constructed from circulant cores. We study their projection properties using several well-known statistical criteria and we provide minimum generalized aberration two-level designs with 32 runs and up to seven factors that are embedded into these Hadamard matrices. A concept of generalized projectivity and design selection of such designs is also discussed. AMS Subject Classification: Primary 62K15, Secondary 05B20

16.
Summary: We present a class of tests for exponentiality against IFRA alternatives. The class of tests of Deshpande (1983) is a subclass of ours. We also treat the same problem when the data are randomly censored from the right. The results of an asymptotic relative efficiency comparison indicate the superiority of our tests. This research was supported by an NSERC Canada operating grant at the University of Alberta.

17.
Two orthogonal arrays based on 3 symbols are said to be isomorphic or combinatorially equivalent if one can be obtained from the other by a sequence of row permutations, column permutations and permutations of symbols in each column. Orthogonal arrays are used as screening designs to identify active main effects, after which the properties of the subdesign for estimating these effects and possibly their interactions become important. Such a subdesign is known as a "projection design". In this paper we have identified all the inequivalent projection designs of an OA(27,13,3,2), an OA(18,7,3,2) and an OA(36,13,3,2) into k=3,4 and 5 factors. It is shown that the generalized wordlength pattern criterion proposed by Ma and Fang [23] can distinguish between most, but not all, inequivalent classes. We propose an extension of the Es2 criterion (which is commonly used for measuring efficiency of 2-level designs) to distinguish further between the non-isomorphic classes and to measure the efficiency of the designs in these classes. Some concepts on generalized resolution are also discussed.
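For a two-level design coded in -1/+1, the Es2 criterion mentioned above is simply the average squared inner product over all pairs of columns; a minimal sketch follows (the paper's extension to 3-level designs and its use of generalized wordlength patterns are not shown).

```python
import numpy as np
from itertools import combinations

def e_s2(D):
    """E(s^2): average squared inner product over all pairs of columns of a
    two-level design coded in -1/+1 (zero for an orthogonal design)."""
    s2 = [float(D[:, i] @ D[:, j]) ** 2 for i, j in combinations(range(D.shape[1]), 2)]
    return sum(s2) / len(s2)

# A 4-run orthogonal design in 3 two-level factors
D = np.array([[ 1,  1,  1],
              [ 1, -1, -1],
              [-1,  1, -1],
              [-1, -1,  1]])
print(e_s2(D))   # 0.0
```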

18.
While studies on the emergence of cooperation in structured populations abound, only a few of them have considered real social networks as the substrate on which individuals interact. As has been shown recently [Lozano et al., PLoS ONE 3(4):e1892, 2008], understanding cooperative behavior on social networks requires knowledge not only of their global (macroscopic) characteristics, but also a deep insight into their community (mesoscopic) structure. In this paper, we look at this problem from the viewpoint of the resilience of cooperation, in particular when there are directed exogenous attacks (insertion of pure defectors) at key locations in the network. We present results of agent-based simulations showing strong evidence that the resilience of social networks is crucially dependent on their community structure, ranging from no resilience to robust cooperative behavior. Our results have important implications for the understanding of how organizations work and can be used as a guide for organization design. This work was supported by Ministerio de Educación y Ciencia (Spain) under grants FIS2006-13321-2 and MOSAICO and by Comunidad de Madrid (Spain) under grant SIMUMAT-CM. S. Lozano was supported by URV through a FPU grant and by the EU Integrated Project IRRIIS (027568).

19.
The paper discusses, illustrates and possibly contributes to overcoming two methodological problems that emerge in applying Social Network Analysis (SNA) to the study of IO-based innovation flows matrices. The first has to do with the scale effects these matrices suffer from. The second refers to the need to dichotomise the matrices. Through an illustrative application to six OECD countries in the mid-1990s, the paper shows that, as for the former problem, different relativisation procedures can be, and have been, used, which either tend to alter the actual meaning of standard SNA indicators, or do not properly take into account the actual composition of countries' final demand. As for the latter problem, the paper shows that the choice of discrete cut-offs is extremely sensitive, as comparative results actually change along the continuum of the matrices' values. In order to overcome the scale problem, a new relativisation procedure is put forward that measures innovation flows embodied in a unit value basket of final demand and thus properly retains all the information provided by the original matrix of intersectoral innovation (embodied) flows. In addressing the problem of dichotomisation, the paper suggests, as a second best, to work with density distributions that can make the choice of discrete cut-off values less arbitrary.

20.
In a recent paper Ghosh and Sarkar [5] have developed a model of input-output systems as spatial configurations. Roy [15] has proposed a more efficient solution method, but computation time still increases factorially, which rules out its use for large matrices. This note shows that the problem they have formulated belongs to a class of discrete programming problems known as placement or assignment problems. Several natural extensions are briefly discussed. More importantly, an efficient algorithm for the quadratic assignment problem is used to compute the optimal ordering of five comparable input-output matrices (US, Norway, Japan, Italy, India). These preliminary empirical results do show rather stable assignment patterns for the industries; and certain clusters of industries are shown to emerge as hypothesized by Ghosh and Sarkar. The author wishes to thank an anonymous referee for some important clarifying remarks on a preliminary version of this paper.
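For reference, the quadratic assignment problem that the ordering problem is mapped onto has the generic form

$$\min_{\pi\in S_n}\ \sum_{i=1}^{n}\sum_{j=1}^{n} f_{ij}\,d_{\pi(i)\pi(j)},$$

where, in an illustrative reading of this setting (not quoted from the note), $f_{ij}$ would be the interindustry flow between industries i and j and $d_{kl}$ a penalty that grows with the distance between positions k and l in the ordering.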
