Similar Articles
20 similar articles found.
1.
In 1951 R. A. Fisher described what had been achieved in the 20th century so far: “we have learnt (i) To conserve in its statistical reduction the scientific information latent in any body of observations. (ii) To conduct experimental and observational inquiries so as to maximise the information obtained for a given expenditure.” This paper asks what Fisher meant and, in particular, how he saw his work on experimental design as contributing to the objective of maximizing information for a given expenditure. The material examined ranges from detailed work on issues like “the information lost in measurement of error” to polemics against decision theory.

2.
The welcome rise of replication tests in economics has not been accompanied by a consensus standard for determining what constitutes a replication. A discrepant replication, in current usage of the term, can signal anything from an unremarkable disagreement over methods to scientific incompetence or misconduct. This paper proposes a standard for classifying one study as a replication of some other study. It is a standard that places the burden of proof on a study to demonstrate that it should have obtained identical results to the original, a conservative standard that is already used implicitly by many researchers. It contrasts this standard with decades of unsuccessful attempts to harmonize terminology, and argues that many prominent results described as replication tests should not be described as such. Adopting a conservative standard like this one can improve incentives for researchers, encouraging more and better replication tests.

3.
Various scientific studies have explored the causes of violent behaviour from different perspectives, with psychological tests, in particular, applied to the analysis of crime factors. Pairwise relationships between factors have also been extensively studied, including the link between age and crime. In reality, many factors interact to contribute to criminal behaviour, and as such there is a need for greater insight into its complex nature. In this article we analyse violent crime information systems containing data on psychological, environmental and genetic factors. Our approach combines elements of rough set theory with fuzzy logic and particle swarm optimisation to yield an algorithm and methodology that can effectively extract multi-knowledge from information systems. The experimental results show that our approach outperforms alternative genetic algorithm and dynamic reduct-based techniques for reduct identification and has the added advantage of identifying multiple reducts and hence multi-knowledge (rules). The identified rules are consistent with classical statistical analysis of violent crime data and also reveal new insights into the interaction between several factors. As such, the results are helpful in improving our understanding of the factors contributing to violent crime and in highlighting the existence of hidden and intangible relationships between crime factors.
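The reduct notion at the core of this approach can be illustrated with a minimal, self-contained sketch (the toy decision table and attribute meanings below are hypothetical, not the paper's crime data): an attribute subset is a reduct if objects indistinguishable on it always share a decision, and no attribute in the subset is redundant.

```python
def consistent(rows, decisions, attrs):
    """True if the attribute subset attrs determines the decision for every row."""
    seen = {}
    for row, d in zip(rows, decisions):
        key = tuple(row[a] for a in attrs)
        if seen.setdefault(key, d) != d:
            return False
    return True

def is_reduct(rows, decisions, attrs):
    """attrs is a reduct: consistent, and dropping any single attribute breaks consistency."""
    if not consistent(rows, decisions, attrs):
        return False
    return all(not consistent(rows, decisions, [a for a in attrs if a != b])
               for b in attrs)

rows = [  # hypothetical columns: age group, stress level, prior record
    (0, 1, 0), (0, 0, 0), (1, 1, 1), (1, 0, 0), (0, 1, 1),
]
decisions = [1, 0, 1, 0, 1]
print(is_reduct(rows, decisions, [1]))        # stress level alone
print(is_reduct(rows, decisions, [0, 1, 2]))  # full set is not minimal
```

Multi-knowledge in the paper's sense corresponds to finding several such reducts, each yielding its own rule set.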

4.
This paper gives an overview of several (mostly recent) statistical contributions to the theory of Limiting and Serial Dilution Assays (LDAs, SDAs). A simple and useful method is presented for the setup of a design for an LDA or an SDA. This method is based on several user-supplied design parameters, consisting of the researcher's advance information and other parameters inherent to the particular problem. The commonly used Maximum Likelihood (ML) and Minimum Chi-square methods for the estimation of the unknown parameter in an LDA or an SDA are described and compared to several bias-reducing estimation methods, e.g. jackknife and bootstrap versions of the ML method. One particular jackknife version is recommended.
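The bias-reduction idea behind a jackknifed ML estimator can be sketched for the simplest single-dilution, single-hit Poisson case, where a well at dose d is negative with probability exp(-lam*d) and the ML estimate has a closed form. The well counts and dose below are made up for illustration; this is not the paper's recommended multi-dilution version.

```python
import math

def ml_estimate(outcomes, d):
    """ML estimate of lam from one dilution; outcomes: 1 = negative well, 0 = positive."""
    p = sum(outcomes) / len(outcomes)       # observed fraction of negative wells
    return -math.log(p) / d                 # invert p = exp(-lam * d)

def jackknife(outcomes, d):
    """Quenouille jackknife bias correction: n*theta_hat - (n-1)*mean(leave-one-out)."""
    n = len(outcomes)
    full = ml_estimate(outcomes, d)
    loo = [ml_estimate(outcomes[:i] + outcomes[i + 1:], d) for i in range(n)]
    return n * full - (n - 1) * sum(loo) / n

wells = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]      # 7 negative wells out of 10 at dose d = 2.0
print(round(ml_estimate(wells, 2.0), 4))
print(round(jackknife(wells, 2.0), 4))
```

The jackknifed value removes the leading O(1/n) bias term of the plug-in ML estimate.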

5.
Convinced that scientific methodology cannot avoid the use of inductive methods, and that the justification of induction is a genuine epistemological problem whose careful consideration is indispensable to any investigation of scientific inference, both from a general point of view and with particular reference to the social sciences, in this paper we give an account of what seems to us the best attempt to solve this problem. We are referring to the pragmatic approach, which has been elaborated within the frequency interpretation of probability. We will present the pragmatic justification of induction, starting with Reichenbach's argument and dealing in particular with the most recent developments of this approach, mainly due to W.C. Salmon. In the last section, we will outline the main problems left unsolved by Salmon's arguments.

6.
This paper presents some results obtained in time series forecasting using two nonstandard approaches and compares them with those obtained by usual statistical techniques. In particular, a new method based on recent results of the General Theory of Optimal Algorithms is considered. This method may be useful when no reliable statistical hypotheses can be made or when only a limited number of observations is available. Moreover, a nonlinear modelling technique based on the Group Method of Data Handling (GMDH) is also considered to derive forecasts. The well-known Wolf Sunspot Numbers and Annual Canadian Lynx Trappings series are analyzed; the Optimal Error Predictor is also applied to a recently published demographic series on Australian Births. The reported results show that the Optimal Error and GMDH predictors provide more accurate one-step-ahead forecasts than some linear and nonlinear statistical models. Furthermore, the Optimal Error Predictor shows very good performance in multistep forecasting.

7.
This paper discusses the connection between mathematical finance and statistical modelling, which turns out to be more than a formal mathematical correspondence. We show how common results and notions in statistics, and their meaning, can be translated to the world of mathematical finance, and vice versa. Many of the similarities can be expressed in terms of LeCam's theory of statistical experiments, which is the theory of the behaviour of likelihood processes. For positive prices, arbitrage-free financial assets fit into statistical experiments. It is shown that they are given by filtered likelihood ratio processes. From the statistical point of view, martingale measures, completeness, and pricing formulas are revisited. The pricing formulas for various options are connected with the power functions of tests. For instance, the Black–Scholes price of a European option is related to Neyman–Pearson tests and has an interpretation as a Bayes risk. Under contiguity, the convergence of financial experiments and option prices is obtained. In particular, the approximation of Itô-type price processes by discrete models and the convergence of the associated option prices are studied. The result relies on the central limit theorem for statistical experiments, which is well known in statistics in connection with local asymptotic normal (LAN) families. As an application, certain continuous-time option prices can be approximated by related discrete-time pricing formulas.
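As a concrete anchor for the pricing-formula/power-function correspondence, the following sketch computes the standard Black–Scholes call price; the two normal-CDF terms N(d1) and N(d2) are exactly the quantities that reappear as power functions of Neyman–Pearson tests between Gaussian shift experiments in the framework described above.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(s, k, r, sigma, t):
    """Black–Scholes European call price.

    N(d1) and N(d2) can be read as powers of a Neyman–Pearson test
    between two equivalent Gaussian measures (the likelihood-ratio view).
    """
    d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

# At-the-money call: spot 100, strike 100, r = 5%, vol = 20%, 1 year.
print(round(bs_call(100, 100, 0.05, 0.2, 1.0), 4))
```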

8.
The multiplicity problem is evident in the simplest form of statistical analysis of gene expression data – the identification of differentially expressed genes. In more complex analysis, the problem is compounded by the multiplicity of hypotheses per gene. Thus, in some cases, it may be necessary to consider testing millions of hypotheses. We present three general approaches for addressing multiplicity in large research problems. (a) Use the scalability of false discovery rate (FDR) controlling procedures; (b) apply FDR-controlling procedures to a selected subset of hypotheses; (c) apply hierarchical FDR-controlling procedures. We also offer a general framework for ensuring reproducible results in complex research, where a researcher faces more than just one large research problem. We demonstrate these approaches by analyzing the results of a complex experiment involving the study of gene expression levels in different brain regions across multiple mouse strains.
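The basic building block scaled up in approaches (a)–(c) is the step-up FDR-controlling procedure of Benjamini and Hochberg, sketched below; the p-values are illustrative, not from the mouse-strain experiment.

```python
def benjamini_hochberg(pvals, q=0.05):
    """Return sorted indices of hypotheses rejected at FDR level q.

    Step-up rule: find the largest rank k with p_(k) <= q*k/m and
    reject the k hypotheses with the smallest p-values.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k = rank
    return sorted(order[:k])

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.21, 0.5]
print(benjamini_hochberg(pvals, q=0.05))
```

Note that the procedure scales trivially with m, which is the "scalability" exploited in approach (a).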

9.
A recent article (Tse, 1998) published in this journal analysed the conditional heteroscedasticity of the yen–dollar exchange rate based on the fractionally integrated asymmetric power ARCH model. In this paper, we present replication results using Tse's (1998) yen–dollar series. We also examine the robustness of Tse's (1998) findings across different currencies, sample periods and non-nested GARCH-type models. Unlike Tse (1998), we find some evidence of asymmetric conditional volatility for daily returns of currencies measured against the dollar or the yen. Copyright © 2004 John Wiley & Sons, Ltd.

10.
We consider how to estimate the trend and cycle of a time series, such as real gross domestic product, given a large information set. Our approach makes use of the Beveridge–Nelson decomposition based on a vector autoregression, but with two practical considerations. First, we show how to determine which conditioning variables span the relevant information by directly accounting for the Beveridge–Nelson trend and cycle in terms of contributions from different forecast errors. Second, we employ Bayesian shrinkage to avoid overfitting in finite samples when estimating models that are large enough to include many possible sources of information. An empirical application with up to 138 variables covering various aspects of the US economy reveals that the unemployment rate, inflation, and, to a lesser extent, housing starts, aggregate consumption, stock prices, real money balances, and the federal funds rate contain relevant information beyond that in output growth for estimating the output gap, with estimates largely robust to substituting some of these variables or incorporating additional variables.
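The univariate version of the Beveridge–Nelson decomposition that this approach extends to large VARs can be sketched as follows: if output growth follows an AR(1), the BN trend is the current level plus all expected future growth in excess of its long-run mean, which has the closed form trend = y + phi/(1-phi)*(dy - mu). The series below is simulated, and the AR parameters are treated as known rather than estimated.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, phi, n = 0.5, 0.4, 200

# Simulate growth rates dy from an AR(1) and cumulate to a level series y.
dy = np.empty(n)
dy[0] = mu
for t in range(1, n):
    dy[t] = mu + phi * (dy[t - 1] - mu) + rng.standard_normal()
y = np.cumsum(dy)

# BN decomposition: trend absorbs the expected future momentum in growth;
# the cycle is the (stationary, mean-zero) remainder.
trend = y + phi / (1 - phi) * (dy - mu)
cycle = y - trend
print(round(cycle.mean(), 3), round(cycle.std(), 3))
```

In the paper, the scalar phi/(1-phi) adjustment becomes a matrix expression in the VAR companion form, and the conditioning variables determine which forecast errors feed the trend.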

11.
A recent article by Krause (Qual Quant, 2012, doi:10.1007/s11135-012-9712-5) maintains that: (1) it is untenable to characterize the error term in multiple regression as simply an extraneous random influence on the outcome variable, because any amount of error implies the possibility of one or more omitted, relevant explanatory variables; and (2) the only way to guarantee the prevention of omitted variable bias, and thereby justify causal interpretations of estimated coefficients, is to construct fully specified models that completely eliminate the error term. The present commentary argues that such an extreme position is impractical and unnecessary, given the availability of specialized techniques for dealing with the primary statistical consequence of omitted variables, namely endogeneity, or the existence of correlations between included explanatory variables and the error term. In particular, the current article discusses the method of instrumental variables estimation, which can resolve the endogeneity problem in causal models where one or more relevant explanatory variables are excluded, thus allowing for accurate estimation of effects. An overview of recent methodological resources and software for conducting instrumental variables estimation is provided, with the aim of helping to place this crucial technique squarely in the statistical toolkit of applied researchers.
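A minimal sketch of the instrumental variables idea the commentary advocates, using simulated data: the regressor x is endogenous (correlated with the error u through an omitted influence), while z is a valid instrument, correlated with x but independent of u. OLS is biased; two-stage least squares (2SLS) recovers the true coefficient.

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta = 5000, 2.0
z = rng.standard_normal(n)                       # instrument
u = rng.standard_normal(n)                       # structural error
x = 0.8 * z + 0.6 * u + 0.3 * rng.standard_normal(n)  # endogenous regressor
y = beta * x + u

b_ols = (x @ y) / (x @ x)                # biased upward by cov(x, u)/var(x)
x_hat = z * ((z @ x) / (z @ z))          # first stage: project x on z
b_iv = (x_hat @ y) / (x_hat @ x_hat)     # second stage: regress y on fitted x
print(round(b_ols, 3), round(b_iv, 3))
```

With this design the OLS slope converges to beta + 0.6/1.09 ≈ 2.55, while the 2SLS slope is consistent for beta = 2.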

12.
Nonlinear panel models are widely used in economic research, but the associated estimation methods face serious challenges. In particular, for many microdata models the incidental parameters problem causes severe bias in the maximum likelihood estimators of the structural parameters under a fixed-effects specification. This paper gives a detailed account of this bias problem and reviews the bias-correction methods that have appeared in recent years, including analytical corrections, orthogonalization-based corrections, corrections based on prior information, and bootstrap corrections. It also briefly explains the connections among the different methods and summarizes a number of important results obtained to date. Finally, it emphasizes the differences among the bias-correction methods when applied to practical problems and outlines the most recent directions in which this research is developing.

13.
The No. 274 Geological Team of Guangxi is a public institution directly administered by the autonomous region. Over several decades it has achieved notable mineral exploration results and made important contributions to the establishment of the aluminum industry base in western Guangxi. Its work nevertheless faces problems that urgently need to be solved, and how to solve them while maintaining scientific development is an important task facing the unit. This article proposes "five new transformations" for the team's future scientific development and identifies the main countermeasures, which offer a useful reference for the work of geological prospecting units.

14.
Structural Analysis of Cointegrating VARs
This survey uses a number of recent developments in the analysis of cointegrating Vector Autoregressions (VARs) to examine their links to the older structural modelling traditions using Autoregressive Distributed Lag (ARDL) and Simultaneous Equations Models (SEMs). In particular, it emphasizes the importance of using judgement and economic theory to supplement the statistical information. After a brief historical review it sets out the statistical framework, discusses the identification of impulse responses using Generalized Impulse Response functions, reviews the analysis of cointegrating VARs and highlights the large number of choices applied workers have to make in determining a specification. In particular, it considers the problems of specifying intercepts and trends and the size of the VAR in more detail, and examines the advantages of the use of exogenous variables in cointegration analysis. The issues are illustrated with a small U.S. macroeconomic model.

15.
Since the 1990s, the Akaike Information Criterion (AIC) and its various modifications/extensions, including BIC, have found wide applicability in econometrics as objective procedures that can be used to select parsimonious statistical models. The aim of this paper is to argue that these model selection procedures invariably give rise to unreliable inferences, primarily because their choice within a prespecified family of models (a) assumes away the problem of model validation, and (b) ignores the relevant error probabilities. This paper argues for a return to the original statistical model specification problem, as envisaged by Fisher (1922), where the task is understood as one of selecting a statistical model in such a way as to render the particular data a truly typical realization of the stochastic process specified by the model in question. The key to addressing this problem is to replace trading goodness-of-fit against parsimony with statistical adequacy as the sole criterion for when a fitted model accounts for the regularities in the data.
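The fit-versus-parsimony trade-off the paper criticizes can be made concrete: for a Gaussian linear model, AIC = n·log(RSS/n) + 2k and BIC = n·log(RSS/n) + k·log(n), so both reward fit (lower RSS) and penalize parameter count k. The data below are simulated for illustration, comparing a correctly specified linear model against an overfitted cubic.

```python
import numpy as np

def aic_bic(y, X):
    """AIC and BIC for a Gaussian linear regression of y on columns of X."""
    n, k = len(y), X.shape[1]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + 2 * k, n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(2)
n = 100
x = rng.standard_normal(n)
y = 1.0 + 2.0 * x + rng.standard_normal(n)          # true model is linear
X1 = np.column_stack([np.ones(n), x])                # correct specification
X2 = np.column_stack([np.ones(n), x, x**2, x**3])    # overfitted
print(aic_bic(y, X1), aic_bic(y, X2))
```

The paper's point is that ranking these numbers says nothing about whether either model is statistically adequate for the data.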

16.
Changing demographics and the introduction of the relatively recent UK age discrimination legislation make it timely to consider some of the debates around the relevance of age to work. Issues surrounding ageing have been considered within a number of disciplines and research perspectives, which has led to some questioning of the dominant economic model that pivots on chronological age as a convenient, bureaucratic measure that proxies for ability. The role of the HR practitioner in being able to redefine expectations throughout the lifespan of employees is considered, while constraints on managing for the longer term are acknowledged. It is proposed that although the legislation will affect some age-related practices positively, there are likely to be unintended consequences that single out particular age groups as special cases and therefore make age more relevant to the work relationship.

17.
This narrow replication exercise of Yogo (Review of Economics and Statistics, 2004; 86(3): 797–810) finds results identical to the original paper, and provides results on overidentification tests for specifications that do not suffer from the weak instruments problem. The null hypothesis of the Sargan test is rejected in several cases, in particular for US and UK quarterly data. When other combinations of instruments are used, the null of the Sargan test is rejected again in several cases. These rejections cast doubts on either instrument validity or model specification. Copyright © 2011 John Wiley & Sons, Ltd.
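A hedged sketch of the Sargan overidentification test used here: with more instruments than endogenous regressors, regress the 2SLS residuals on the instrument set; n times the R² of that auxiliary regression is asymptotically chi-square, with degrees of freedom equal to the number of overidentifying restrictions, under instrument validity. The data below are simulated with two valid instruments for one regressor (one overidentifying restriction), so the statistic should typically fall below the chi-square(1) critical value of 3.84.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
z1, z2 = rng.standard_normal(n), rng.standard_normal(n)   # two valid instruments
u = rng.standard_normal(n)
x = 0.7 * z1 + 0.7 * z2 + 0.5 * u + 0.3 * rng.standard_normal(n)
y = 1.5 * x + u

Z = np.column_stack([z1, z2])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # first stage
b_iv = (x_hat @ y) / (x_hat @ x)                   # 2SLS slope
resid = y - b_iv * x

# Auxiliary regression of the 2SLS residuals on the instruments (with intercept).
Za = np.column_stack([np.ones(n), z1, z2])
g = np.linalg.lstsq(Za, resid, rcond=None)[0]
tss = np.sum((resid - resid.mean()) ** 2)
rss = np.sum((resid - Za @ g) ** 2)
sargan = n * (1 - rss / tss)
print(round(b_iv, 3), round(sargan, 2))  # compare sargan with chi2(1): 3.84
```

A rejection, as in the paper's US and UK quarterly data, indicates either an invalid instrument or a misspecified model, and the statistic alone cannot say which.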

18.
This paper studies how problem framing by research and development groups, in particular the extent of problem decomposition, impacts knowledge replication processes conducted through the use of virtual simulation tools (VSTs). It presents the results of a comparative study of two research and development groups working on the design of hybrid propulsion systems. The research contributes to the literature on strategy and innovation in four ways. First, we identify three organizational and strategic factors affecting the problem framing decision. Second, we analyse the impact of problem framing on the use of VSTs and the related effect on knowledge replication processes. Third, we show the emergence of a new VST-driven knowledge replication process, i.e. functional replication. Fourth, we explain how VST-driven knowledge replication processes can attenuate the dangers related to the adoption of modular design strategies and address the replication vs. imitation dilemma.

19.
Backshoring – the movement of manufacturing activities from locations abroad back to the home country – has gained some attention in policy discussions and in academic research in recent years. This paper presents empirical evidence on backshoring from a large sample of European manufacturing firms. The data indicate that backshoring is still uncommon among European firms: around 4% of all firms in our sample moved production back to the home country between 2013 and mid-2015. The most frequent reasons for backshoring are a loss of flexibility, a lack of quality in the goods produced abroad, and unused capacity at home. Flexibility and quality concerns are particularly relevant for firms that move production back from Asian countries. Backshoring is most likely for manufacturers of final products and in high-technology sectors, in particular in electrical equipment, information and communications equipment, and the automotive industry.

20.
The hypotheses that a firm's environmental performance has a positive impact on its financial performance, and vice versa, are statistically supported by Japanese data. However, this tendency for two-way positive interaction appears to be only a relatively recent phenomenon. The tendency is not limited to the top-scoring firms in terms of both financial and environmental performance; on the contrary, it can be observed fairly generally. Moreover, when we consider only the scores of those companies that published the relevant information in their environmental reports, and conduct the statistical causality test with this information as an additional input to the pooled time-series and cross-section data on financial performance, the results become more strongly significant. From the recent experience of environmental policies in Japan, we infer that information-based environmental policy measures are effective in encouraging the ongoing transition toward a more sustainable market economy. Copyright © 2006 John Wiley & Sons, Ltd and ERP Environment.
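A hedged sketch of the kind of two-way statistical causality test such a study relies on: a Granger-type F test with one lag, asking whether lagged values of one series improve the prediction of the other beyond its own lag, in both directions. The data below are simulated with genuine feedback in both directions; they are not the Japanese firm data.

```python
import numpy as np

def granger_f(y, x, lag=1):
    """F statistic for H0: one lag of x does not help predict y."""
    yt, y1, x1 = y[lag:], y[:-lag], x[:-lag]
    Xr = np.column_stack([np.ones(len(yt)), y1])        # restricted model
    Xu = np.column_stack([np.ones(len(yt)), y1, x1])    # unrestricted model
    rss = lambda X: np.sum((yt - X @ np.linalg.lstsq(X, yt, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    df = len(yt) - Xu.shape[1]
    return (rss_r - rss_u) / (rss_u / df)

rng = np.random.default_rng(4)
n = 300
env, fin = np.zeros(n), np.zeros(n)     # simulated environmental / financial scores
for t in range(1, n):
    env[t] = 0.5 * env[t - 1] + 0.3 * fin[t - 1] + rng.standard_normal()
    fin[t] = 0.5 * fin[t - 1] + 0.3 * env[t - 1] + rng.standard_normal()

print(round(granger_f(fin, env), 2), round(granger_f(env, fin), 2))
```

Large F statistics in both directions correspond to the paper's two-way positive interaction; in practice the test is run on pooled panel data rather than a single pair of series.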

