Similar Literature
20 similar documents found (search time: 484 ms).
1.
The paper proposes a framework for modelling cointegration in fractionally integrated processes, and considers methods for testing the existence of cointegrating relationships using the parametric bootstrap. In these procedures, ARFIMA models are fitted to the data, and the estimates used to simulate the null hypothesis of non-cointegration in a vector autoregressive modelling framework. The simulations are used to estimate p-values for alternative regression-based test statistics, including the F goodness-of-fit statistic, the Durbin–Watson statistic and estimates of the residual d. The bootstrap distributions are economical to compute, being conditioned on the actual sample values of all but the dependent variable in the regression. The procedures are easily adapted to test stronger null hypotheses, such as statistical independence. The tests are not in general asymptotically pivotal but, implemented by the bootstrap, are shown to be consistent against alternatives with both stationary and nonstationary cointegrating residuals. As an example, the tests are applied to the series for UK consumption and disposable income. The power properties of the tests are studied by simulations of artificial cointegrating relationships based on the sample data. The F test performs better in these experiments than the residual-based tests, although the Durbin–Watson test in turn dominates the test based on the residual d.
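A minimal sketch of the kind of parametric bootstrap described above, assuming the memory parameter of the dependent variable has already been estimated (the ARFIMA fitting step, the VAR structure and the other test statistics are omitted; the helper names and the d_y argument are illustrative, not the authors'):

```python
import numpy as np
import statsmodels.api as sm

def arfima_0d0(n, d, rng, burn=200):
    """Simulate an ARFIMA(0, d, 0) series via the MA expansion of (1-L)^(-d)."""
    m = n + burn
    psi = np.empty(m)
    psi[0] = 1.0
    for j in range(1, m):
        psi[j] = psi[j - 1] * (j - 1 + d) / j
    eps = rng.standard_normal(m)
    x = np.convolve(psi, eps)[:m]
    return x[burn:]

def f_statistic(y, x):
    """F statistic of the regression of y on a constant and x."""
    return sm.OLS(y, sm.add_constant(x)).fit().fvalue

def bootstrap_pvalue(y, x, d_y, n_boot=999, seed=0):
    """Bootstrap p-value for the regression F test: the null of non-cointegration
    is simulated by replacing y with an independent ARFIMA(0, d_y, 0) series,
    conditioning on the observed regressors x."""
    rng = np.random.default_rng(seed)
    f_obs = f_statistic(y, x)
    f_null = np.array([
        f_statistic(arfima_0d0(len(y), d_y, rng), x) for _ in range(n_boot)
    ])
    return (1 + np.sum(f_null >= f_obs)) / (n_boot + 1)
```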

2.
This paper presents a synthetic view of the historical development of, and the current research on, causal relationships, starting from the Aristotelian doctrine of causes, following the main philosophical streams until the middle of the twentieth century, and commenting on the present intensive research work in the statistical domain. The philosophical survey dwells upon various concepts of cause and some attempts at picking out spurious causes. Concerning statistical modelling, factorial models and directed acyclic graphs are examined and compared. Special attention is devoted to randomization and pseudo-randomization (for observational studies) with a view to avoiding the effect of possible confounders. An outline of the most common problems and pitfalls encountered in modelling empirical data closes the paper, with a warning to be very cautious in modelling and inferring conditional independence between variables.

3.
Ten More Years of Error Rate Research
The assessment of the performance of supervised classification rules by estimating their error rate (the proportion of objects misclassified) is an important area of work in statistical pattern recognition. This paper reviews the last ten years of error rate research, bringing up to date the reviews of Hand (1986a) and McLachlan (1987). Since those surveys were published, old estimators have been improved, new estimators have been introduced, and new approaches to error rate estimation have been developed. Some of this work has led to deep insights into classification methodology and statistical modelling in general.
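Two of the standard estimators compared in this literature, k-fold cross-validation and its leave-one-out variant, can be sketched with scikit-learn; the classifier and dataset below are illustrative stand-ins, not taken from the review:

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score, LeaveOneOut

X, y = load_iris(return_X_y=True)
clf = LinearDiscriminantAnalysis()

# 10-fold cross-validation estimate of the error rate
cv_err = 1 - cross_val_score(clf, X, y, cv=10).mean()

# Leave-one-out estimate (nearly unbiased, but with higher variance)
loo_err = 1 - cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()

print(f"10-fold CV error: {cv_err:.3f}, leave-one-out error: {loo_err:.3f}")
```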

4.
Natural language query systems over RDF data need to rely on the semantic relations in the query. First, we propose a new crowdsourcing model used to produce a semantic-relations dataset. The model not only inherits the completeness of the iterative model and the accuracy of the parallel model, but also saves human effort. Second, we mine rules for semantic relation recognition from the correlations between dependency structures and semantic relations. Third, we propose an algorithm for semantic relation recognition in natural language queries over RDF data, and experiments demonstrate that it recognizes more semantic relations than existing methods.
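A toy illustration of the kind of dependency-based rule the paper mines (not the authors' crowdsourced rules or algorithm), using spaCy: a verb that links a nominal subject to an object is taken as a candidate semantic relation.

```python
import spacy

# Assumes the small English model has been installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def candidate_relations(question):
    """Extract (subject, relation, object) triples from dependency arcs.
    Toy rule: a verb with a nominal-subject child and an object child."""
    doc = nlp(question)
    triples = []
    for token in doc:
        if token.pos_ == "VERB":
            subj = [c for c in token.children if c.dep_ == "nsubj"]
            obj = [c for c in token.children if c.dep_ in ("dobj", "obj")]
            if subj and obj:
                triples.append((subj[0].text, token.lemma_, obj[0].text))
    return triples

print(candidate_relations("Which universities did Alan Turing attend?"))
```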

5.
In this paper we develop a model for the conditional inflated multivariate density of integer count variables with domain ℤⁿ, n ∈ ℕ. Our modelling framework is based on a copula approach and can be used for a broad set of applications where the primary characteristics of the data are: (i) discrete domain; (ii) the tendency to cluster at certain outcome values; and (iii) contemporaneous dependence. These kinds of properties can be found for high- or ultra-high-frequency data describing the trading process on financial markets. We present a straightforward sampling method for such an inflated multivariate density through the application of an independence Metropolis–Hastings sampling algorithm. We demonstrate the power of our approach by modelling the conditional bivariate density of bid and ask quote changes in a high-frequency setup. We show how to derive the implied conditional discrete density of the bid–ask spread, taking quote clusterings (at multiples of 5 ticks) into account. Copyright © 2009 John Wiley & Sons, Ltd.
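The independence Metropolis–Hastings step used for sampling such an inflated discrete density can be sketched generically; the target and proposal below are toy placeholders, not the paper's copula-based density:

```python
import numpy as np
from scipy.stats import poisson

def independence_mh(log_target, proposal_rvs, proposal_logpmf, n_draws, rng):
    """Independence Metropolis-Hastings for a discrete multivariate target.
    log_target:      z -> unnormalised log target probability
    proposal_rvs:    rng -> one proposal draw (integer vector)
    proposal_logpmf: z -> log proposal probability of z"""
    current = proposal_rvs(rng)
    draws = []
    for _ in range(n_draws):
        cand = proposal_rvs(rng)
        # Acceptance ratio: target(cand)/target(cur) * proposal(cur)/proposal(cand)
        log_alpha = (log_target(cand) - log_target(current)
                     + proposal_logpmf(current) - proposal_logpmf(cand))
        if np.log(rng.uniform()) < log_alpha:
            current = cand
        draws.append(current)
    return np.array(draws)

# Toy usage: a bivariate Poisson target with extra probability mass at (0, 0)
def log_target(z):
    base = np.exp(poisson.logpmf(z[0], 2.0) + poisson.logpmf(z[1], 2.0))
    return np.log(0.2 * float(z[0] == 0 and z[1] == 0) + 0.8 * base)

rng = np.random.default_rng(1)
prop = lambda r: r.poisson(2.0, size=2)
prop_logpmf = lambda z: poisson.logpmf(z[0], 2.0) + poisson.logpmf(z[1], 2.0)
draws = independence_mh(log_target, prop, prop_logpmf, 5000, rng)
```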

6.
Zero-inflated ordered probit (ZIOP) and middle-inflated ordered probit (MIOP) models are finding increasing favour in the discrete choice literature. We propose generalizations to these models – which collapse to their ZIOP/MIOP counterparts under a set of simple parameter restrictions – with respect to the inflation process. These generalizations form the basis of a new specification test of the inflation process in ZIOP and MIOP models. Support for our generalization framework is principally demonstrated by revisiting a key ZIOP application from the economics literature, and reinforced by the reassessment of an important MIOP application from political science. Our specification test supports the generalized models over the original ZIOP/MIOP ones, suggesting an important role for it in modelling zero- and middle-inflation processes.
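For reference, a minimal log-likelihood for a basic ZIOP model (our own parameterisation, not the generalized inflation process proposed in the paper): a zero can arise either from non-participation or from the lowest ordered-probit category.

```python
import numpy as np
from scipy.stats import norm

def ziop_loglik(params, y, X, Z, n_cat):
    """Log-likelihood of a basic zero-inflated ordered probit.
    y: integer outcomes in {0, ..., n_cat-1}; X drives the ordered outcome,
    Z the participation (splitting) equation.
    params = [gamma (len Z cols), beta (len X cols), n_cat-1 cutpoints]."""
    kz, kx = Z.shape[1], X.shape[1]
    gamma = params[:kz]
    beta = params[kz:kz + kx]
    cuts = np.sort(params[kz + kx:])             # enforce mu_0 < ... < mu_{J-2}
    p_in = norm.cdf(Z @ gamma)                   # participation probability
    xb = X @ beta
    # Ordered-probit category probabilities given participation
    cdf = norm.cdf(cuts[None, :] - xb[:, None])  # shape (n, n_cat-1)
    cdf = np.column_stack([np.zeros(len(y)), cdf, np.ones(len(y))])
    p_cat = np.diff(cdf, axis=1)                 # shape (n, n_cat)
    prob = p_in * p_cat[np.arange(len(y)), y]
    prob[y == 0] += 1.0 - p_in[y == 0]           # extra zeros from non-participation
    return np.sum(np.log(np.clip(prob, 1e-300, None)))
```

The negative of this function can be passed to a numerical optimiser such as scipy.optimize.minimize to obtain maximum likelihood estimates.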

7.
Considering the growing volume of scientific literature, techniques that enable automatic detection of informational entities in scientific research articles may contribute to the extension of scientific knowledge and to practical applications. Although there have been several efforts to extract informative entities from patents and biomedical research articles, there are few attempts in other scientific fields. In this paper, we introduce an automatic semantic annotation framework for research articles based on entity recognition techniques. Our approach includes tag set modeling for semantic annotation, a semi-automatic annotation tool, manual annotation for training data preparation, and supervised machine learning to develop an entity type recognition module. For the experiments, we choose two domains, information and communication technology and chemical engineering, owing to their wide usage. In addition, we provide three application scenarios showing how our annotation framework can be used and extended further, intended to guide researchers who wish to link their own content with external data.

8.
This is an attempt to examine whether there is any causal relation between social development and economic growth. Social development in this context is measured by a social development index, a weighted composite index formed from eight social indicators of life representing various spheres of social life. Economic growth is indicated by Per Capita Real Gross Domestic Product (PCRGDP). The causality test proposed by Granger has been performed for the entire sample as well as for three income groups: high, middle and low. The study also tests causality between PCRGDP and the eight social indicators of life.
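The Granger test used in such a study is available in statsmodels; a minimal sketch with simulated stand-in series (the actual PCRGDP and social-indicator data are not reproduced here):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 200
social = np.zeros(n)      # stand-in for the social development index
pcrgdp = np.zeros(n)      # stand-in for PCRGDP growth
for t in range(1, n):
    social[t] = 0.6 * social[t - 1] + rng.standard_normal()
    pcrgdp[t] = 0.4 * pcrgdp[t - 1] + 0.5 * social[t - 1] + rng.standard_normal()

# Tests whether the series in the second column Granger-causes the first column
data = pd.DataFrame({"pcrgdp": pcrgdp, "social": social})
res = grangercausalitytests(data[["pcrgdp", "social"]], maxlag=2)
print(res[2][0]["ssr_ftest"])   # (F statistic, p-value, df_denom, df_num) at lag 2
```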

9.
We construct a copula from the skew t distribution of Sahu et al. (2003). This copula can capture asymmetric and extreme dependence between variables, and is one of the few copulas that can do so and still be used effectively in high dimensions. However, it is difficult to estimate the copula model by maximum likelihood when the multivariate dimension is high, or when some or all of the marginal distributions are discrete-valued, or when the parameters in the marginal distributions and copula are estimated jointly. We therefore propose a Bayesian approach that overcomes all these problems. The computations are undertaken using a Markov chain Monte Carlo simulation method which exploits the conditionally Gaussian representation of the skew t distribution. We employ the approach in two contemporary econometric studies. The first is the modelling of regional spot prices in the Australian electricity market. Here, we observe complex non-Gaussian margins and nonlinear inter-regional dependence. Accurate characterization of this dependence is important for the study of market integration and for risk management purposes. The second is the modelling of ordinal exposure measures for 15 major websites. Dependence between websites is important when measuring the impact of multi-site advertising campaigns. In both cases the skew t copula substantially outperforms symmetric elliptical copula alternatives, demonstrating that the skew t copula is a powerful modelling tool when coupled with Bayesian inference. Copyright © 2010 John Wiley & Sons, Ltd.
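As a much simpler stand-in, a symmetric t copula (which captures tail dependence but, unlike the skew t copula of the paper, not asymmetry) can be sampled directly with scipy; the margins and parameters below are illustrative:

```python
import numpy as np
from scipy.stats import multivariate_t, t, gamma

def t_copula_sample(n, corr, df, margins, seed=0):
    """Draw samples whose dependence follows a symmetric t copula and whose
    margins are arbitrary scipy distributions (probability integral transform)."""
    z = multivariate_t(loc=np.zeros(len(corr)), shape=corr, df=df).rvs(
        size=n, random_state=seed)
    u = t.cdf(z, df)                  # map to the copula (uniform) scale
    return np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(margins)])

corr = np.array([[1.0, 0.7],
                 [0.7, 1.0]])
# e.g. skewed, heavy-tailed margins such as electricity spot prices
x = t_copula_sample(10000, corr, df=4, margins=[gamma(a=2.0), gamma(a=5.0)])
```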

10.
It is argued that univariate long memory estimates based on ex post data tend to underestimate the persistence of ex ante variables (and, hence, that of the ex post variables themselves) because of the presence of unanticipated shocks whose short-run volatility masks the degree of long-range dependence in the data. Empirical estimates of long-range dependence in the Fisher equation are shown to manifest this problem and lead to an apparent imbalance in the memory characteristics of the variables in the Fisher equation. Evidence in support of this typical underestimation is provided by results obtained with inflation forecast survey data and by direct calculation of the finite sample biases. To address the problem of bias, the paper introduces a bivariate exact Whittle (BEW) estimator that explicitly allows for the presence of short memory noise in the data. The new procedure enhances the empirical capacity to separate low-frequency behaviour from high-frequency fluctuations, and it produces estimates of long-range dependence that are much less biased when the data are contaminated by noise. Empirical estimates from the BEW method suggest that the three Fisher variables are integrated of the same order, with memory parameter in the range (0.75, 1). Since the integration orders are balanced, the ex ante real rate has the same degree of persistence as expected inflation, thereby furnishing evidence against the existence of a (fractional) cointegrating relation among the Fisher variables and, correspondingly, showing little support for a long-run form of the Fisher hypothesis. Copyright © 2004 John Wiley & Sons, Ltd.
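The univariate building block of such estimators is the local Whittle objective; a minimal sketch follows (this is the standard local Whittle estimator, not the bivariate exact Whittle procedure of the paper, and for memory parameters near 1 the exact variant is preferable):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def local_whittle(x, m=None):
    """Local Whittle estimate of the memory parameter d, using the first m
    Fourier frequencies of the periodogram (a common rule of thumb is m ~ n**0.65)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(n ** 0.65)
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    # Periodogram at the first m Fourier frequencies
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    I = np.abs(dft) ** 2 / (2 * np.pi * n)

    def objective(d):
        g = np.mean(lam ** (2 * d) * I)
        return np.log(g) - 2 * d * np.mean(np.log(lam))

    res = minimize_scalar(objective, bounds=(-0.49, 1.0), method="bounded")
    return res.x
```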

11.
I argue that teaching statistical thinking is harder than teaching mathematics, that experimental design is particularly well suited to teaching statistical thinking and that in teaching statistics, variation is good. We need a mix of archival data, simulations and activities, of varying degrees of complexity. Within this context, I applaud the important contributions to our profession represented by Darius et al. (2007), and Nolan & Temple Lang (2007), the first for showing us how to make simulation-based learning simultaneously more flexible and more realistic than ever before, and the second for showing us a path-breaking technology that can make archival data the basis for active learning at an impressively high level of sophistication, embedding statistical thinking within real scientific and practical investigations.

12.
Strategic coalignment, viewed in terms of internal consistency among key strategic decisions or the alignment between strategic choices and critical contingencies posed by either environmental or organizational contexts, is an important theoretical perspective in strategic management. However, extant research is characterized by poor clarification of the theoretical meaning of coalignment as well as by inappropriate statistical modelling. This article adopts a methodological orientation to examining a general proposition on the performance implications of strategic coalignment among three generic strategy dimensions: marketing, manufacturing and administrative. The proposition is evaluated using three seemingly complementary perspectives of statistical modelling: (a) interactionist; (b) profile-derivation; and (c) covariation, and data collected from two hundred business units. The analysis and results generally support the proposition under two of the three perspectives, thus raising critical methodological issues relating to multiple specifications of the statistical form of coalignment.
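Two of the three perspectives can be sketched briefly: the interactionist view as a moderated regression with interaction terms, and the profile-derivation view as misfit measured by distance from the profile of top performers. The data and variable names below are illustrative, not the study's business-unit data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "marketing": rng.standard_normal(n),
    "manufacturing": rng.standard_normal(n),
    "administrative": rng.standard_normal(n),
})
df["performance"] = (0.3 * df["marketing"] * df["manufacturing"]
                     + rng.standard_normal(n))

# (a) Interactionist: coalignment as interaction terms in a moderated regression
inter = smf.ols("performance ~ marketing * manufacturing + administrative", data=df).fit()

# (b) Profile-derivation: misfit as Euclidean distance from the mean profile
# of top performers (here, the top decile)
dims = ["marketing", "manufacturing", "administrative"]
ideal = df.loc[df["performance"] >= df["performance"].quantile(0.9), dims].mean()
df["misfit"] = np.sqrt(((df[dims] - ideal) ** 2).sum(axis=1))
profile = smf.ols("performance ~ misfit", data=df).fit()

print(inter.params, profile.params, sep="\n")
```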

13.
Data Envelopment Analysis (DEA) methods were applied to data on 14 major oil companies (Majors) for the years 1980–1987. The oil model focuses on worldwide reserve exploration and oil production activities. Data were reported in Arthur Andersen & Co.'s Oil & Gas Reserve Disclosures. Newly developed DEA theory was used to link the input and output multiplier bounds and to measure maximum and minimum profit ratios. This theory was also used to identify uniquely inefficient firms and to project them uniquely onto the DEA frontier. The DEA profit and efficiency measures partitioned the firms into low and high achievers. Discriminant analysis of a similarly constructed database lends statistical support to this partition. With DEA, top managers of major oil companies may capture the cost savings/profit ratio gains of making inefficient firms efficient. DEA allows them to benchmark entire firms against the best-practice norm. Looking outwardly, they may gain much more from adopting best-practice competitors' methods than from merely looking inwardly in search of small marginal gains.
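The input-oriented CCR envelopment model underlying such a DEA study can be sketched with scipy's linear programming routine; the data below are toy figures, not the Arthur Andersen reserve disclosures:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency score of firm k.
    X: (n_firms, n_inputs), Y: (n_firms, n_outputs).
    Solves: min theta  s.t.  sum_j lam_j x_j <= theta * x_k,
                             sum_j lam_j y_j >= y_k,  lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lam_1, ..., lam_n]
    c = np.concatenate([[1.0], np.zeros(n)])
    # Input constraints:  X' lam - theta * x_k <= 0
    A_in = np.hstack([-X[k].reshape(m, 1), X.T])
    b_in = np.zeros(m)
    # Output constraints: -Y' lam <= -y_k
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[k]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# Toy example: 5 firms, 2 inputs (exploration cost, wells drilled), 1 output (reserves added)
X = np.array([[4., 3.], [7., 3.], [8., 1.], [4., 2.], [2., 4.]])
Y = np.array([[1.], [1.], [1.], [1.], [1.]])
print([round(ccr_efficiency(X, Y, k), 3) for k in range(len(X))])
```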

14.
Fuzzy clustering, optimality assessment and recognition are the foundations of fuzzy analysis theory. Taking fuzzy clustering as the core, a unified theory of fuzzy clustering, optimality assessment and recognition is established. Fuzzy cluster analysis is a multivariate statistical method that partitions a sample into classes according to multiple indicators; for optimality assessment, a statistic is used to determine the best classification; finally, new samples are recognized and assigned to classes by the principle of maximum closeness. The paper illustrates the application with the distribution of employment in the transport, post and telecommunications industries across China. The method can equally be applied to the analysis and classification of employment distributions in other industries.
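A minimal fuzzy c-means sketch of the clustering step (the optimality statistic and the maximum-closeness recognition step of the paper are not reproduced; the data are toy values, not the national employment figures):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Fuzzy c-means clustering.
    X: (n_samples, n_features); c: number of clusters; m: fuzzifier (> 1).
    Returns the membership matrix U (n_samples, c) and the cluster centres."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.dirichlet(np.ones(c), size=n)            # random initial memberships
    for _ in range(n_iter):
        W = U ** m
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        # Distances of every sample to every centre
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        d = np.clip(d, 1e-10, None)
        # Membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1)), axis=2)
    return U, centres

# Toy example: cluster regions described by two employment indicators
X = np.array([[1.0, 1.2], [0.9, 1.0], [3.1, 2.9], [3.0, 3.2], [5.0, 5.1]])
U, centres = fuzzy_c_means(X, c=2)
print(U.round(2))
```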

15.
The article deals with the problem of revealing cause–effect relations in social research using a set of statistical methods. Ways of solving the problem for experimental and non-experimental data are considered. The role of statistical methods in the formation of causal hypotheses is examined, and methods for verifying hypotheses under experimental conditions or through latent structure modelling are discussed. The possible uses of factor analysis and analysis of variance in revealing cause–effect relations are also considered. Special attention is paid to the interpretation of the results depending on the character of the empirical data and the conditions under which they were obtained. Approaches to revealing causal relations are suggested, based on schemes for the comprehensive statistical analysis of empirical data aimed at investigating cause-and-effect relations in long-term research. We take into account the hierarchical structure of these schemes, reflecting the structure of the subject of cognition, whose specificity requires the combined use of "rigid" and "flexible" models; one variant is structural equation modelling. The article gives an example of research into causal relations in the assessment of the quality of education services in a higher education institution. Each stage of the assessment involves building particular "rigid" or "flexible" mathematical models. Their combination allows the resulting quality parameters to be used as an instrument for regulating the interaction between education service providers and their clients.

16.
Existing high performance work system (HPWS) research has rarely considered cultural influences. This study investigates the relationships between guanxi, HPWS and employee attitudes in China. A dataset of 226 employees in a Chinese state-owned enterprise in the railway sector was used to test the hypotheses. Using structural equation modelling as an analytical tool, we found that guanxi was positively related to HPWS and trust. As in research in the Western context, HPWS was found to be positively related to trust and job satisfaction. Moreover, the results also revealed that HPWS mediated between guanxi and both trust and job satisfaction. Theoretical and practical implications are discussed.

17.
A time-varying parameter model with Markov-switching conditional heteroscedasticity is employed to investigate two sources of shifts in real interest rates: (1) shifts in the coefficients relating the ex ante real rate to the nominal rate, the inflation rate and a supply shock variable and (2) unconditional shifts in the variance of the stochastic process. The results underscore the importance of modelling continual change in the ex ante real rate in terms of other economic variables rather than relying on a statistical characterization that permits only a limited number of discrete jumps in the mean of the process. Copyright © 1999 John Wiley & Sons, Ltd.
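The time-varying-parameter component of such a model is typically filtered with a Kalman recursion in which the coefficients follow random walks; a minimal sketch, without the Markov-switching variance the paper adds and with the variances treated as known:

```python
import numpy as np

def tvp_kalman_filter(y, X, q=0.01, r=1.0):
    """Kalman filter for y_t = x_t' beta_t + e_t, beta_t = beta_{t-1} + v_t,
    with Var(e_t) = r and Var(v_t) = q * I (both treated as known here;
    in practice they would be estimated, e.g. by maximum likelihood)."""
    n, k = X.shape
    beta = np.zeros(k)                 # filtered state (coefficients)
    P = np.eye(k) * 10.0               # diffuse-ish initial state covariance
    betas = np.zeros((n, k))
    for t in range(n):
        # Prediction step (random-walk transition)
        P = P + q * np.eye(k)
        x = X[t]
        # Update step
        f = x @ P @ x + r              # prediction error variance
        kal = P @ x / f                # Kalman gain
        err = y[t] - x @ beta          # prediction error
        beta = beta + kal * err
        P = P - np.outer(kal, x) @ P
        betas[t] = beta
    return betas
```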

18.
A radically new approach to statistical modelling, which combines mathematical techniques of Bayesian statistics with the philosophy of the theory of competitive on-line algorithms, has arisen over the last decade in computer science (to a large degree under the influence of Dawid's prequential statistics). In this approach, which we call "competitive on-line statistics", it is not assumed that data are generated by some stochastic mechanism; the bounds derived for the performance of competitive on-line statistical procedures are guaranteed to hold (and not just hold with high probability or on average). This paper reviews some results in this area; the new material in it includes proofs of the performance of the Aggregating Algorithm in the problem of linear regression with square loss.
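For linear regression with square loss, the Aggregating Algorithm yields a ridge-like on-line forecaster (often presented as the Vovk–Azoury–Warmuth forecaster) whose cumulative square loss is bounded relative to any fixed linear predictor; a minimal sketch:

```python
import numpy as np

class AggregatingAlgorithmRegression:
    """On-line linear regression with square loss via the Aggregating Algorithm.
    Note that the current input x_t is added to the Gram matrix before predicting."""

    def __init__(self, dim, a=1.0):
        self.A = a * np.eye(dim)    # regularised Gram matrix a*I + sum x_s x_s'
        self.b = np.zeros(dim)      # sum of y_s * x_s over past rounds

    def predict(self, x):
        self.A += np.outer(x, x)    # include the current x before predicting
        return self.b @ np.linalg.solve(self.A, x)

    def update(self, x, y):
        self.b += y * x

# On-line protocol: predict, observe the outcome, then update
rng = np.random.default_rng(0)
learner = AggregatingAlgorithmRegression(dim=2)
loss = 0.0
for _ in range(1000):
    x = rng.standard_normal(2)
    y = 1.5 * x[0] - 0.5 * x[1] + 0.1 * rng.standard_normal()
    gamma = learner.predict(x)
    loss += (y - gamma) ** 2
    learner.update(x, y)
print(f"cumulative square loss: {loss:.2f}")
```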

19.
Since Provan and Milward’s article in 1995, a wide variety of conceptualizations and measures of network performance has been proposed. The lack of consensus among scholars generated a confusing landscape. In an attempt to wind a skein into a ball, our paper aims to synthesize the conceptualizations and measures of network performance proposed by the existing literature and explore their statistical and theoretical relationships. Structural equation modelling techniques were used for this purpose.

20.
Against the background of the widespread use of embedded devices, and building on speech recognition technology, this paper designs an embedded speech recognition device with an ARM processor at its core and Linux as the operating system. Speech recognition uses the popular DHMM (discrete hidden Markov model) and is implemented with the Viterbi algorithm, which incurs little system overhead. Overall, the speech recognition device designed in this paper offers low cost, strong performance, good generality and good extensibility.
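The Viterbi decoding referred to here is standard dynamic programming over a discrete HMM; a minimal log-space sketch with toy parameters (not the device's acoustic models):

```python
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """Most likely state path of a discrete HMM.
    obs:    sequence of observation symbol indices
    log_pi: (n_states,) log initial probabilities
    log_A:  (n_states, n_states) log transition probabilities
    log_B:  (n_states, n_symbols) log emission probabilities"""
    n_states = len(log_pi)
    T = len(obs)
    delta = np.full((T, n_states), -np.inf)
    psi = np.zeros((T, n_states), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A        # scores[i, j]: end in i, move to j
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = np.max(scores, axis=0) + log_B[:, obs[t]]
    # Backtrack the best path
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(delta[-1]))
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path, float(np.max(delta[-1]))

# Toy example: 2 states, 3 observation symbols
log_pi = np.log([0.6, 0.4])
log_A = np.log([[0.7, 0.3], [0.4, 0.6]])
log_B = np.log([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi([0, 1, 2, 2], log_pi, log_A, log_B))
```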
