Similar documents
20 similar documents found (search time: 31 ms)
1.
The theory of financial markets is well developed, but before any of it can be applied there are statistical questions to be answered: Are the hypotheses of proposed models reasonably consistent with what data show? If so, how should we infer parameter values from data? How do we quantify the error in our conclusions? This paper examines these questions in the context of the two main areas of quantitative finance, portfolio selection and derivative pricing. By looking at these two contexts, we get a very clear understanding of the viability of the two main statistical paradigms, classical (frequentist) statistics and Bayesian statistics.

2.
3.
Macro‐integration is the process of combining data from several sources at an aggregate level. We review a Bayesian approach to macro‐integration with special emphasis on the inclusion of inequality constraints. In particular, an approximate method of dealing with inequality constraints within the linear macro‐integration framework is proposed. This method is based on a normal approximation to the truncated multivariate normal distribution. The framework is then applied to the integration of international trade statistics and transport statistics. By combining these data sources, transit flows can be derived as differences between specific transport and trade flows. Two methods of imposing the inequality restrictions that transit flows must be non‐negative are compared. Moreover, the figures are improved by imposing the equality constraints that aggregates of incoming and outgoing transit flows must be equal.
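As a toy illustration of the idea (with entirely hypothetical figures, not the paper's data), transit flows can be computed as clipped differences of transport and trade flows, after which the equality constraint on aggregate incoming and outgoing transit can be imposed by rescaling:

```python
# Illustrative sketch only, not the paper's estimator: transit flows derived as
# transport minus trade, with the non-negativity restriction imposed by
# truncating at zero. All figures below are hypothetical.

def transit_flow(transport: float, trade: float) -> float:
    """Transit = transport - trade, restricted to be non-negative."""
    return max(transport - trade, 0.0)

def balance_transit(incoming, outgoing):
    """Impose the equality constraint that aggregate incoming and outgoing
    transit must match, by scaling both sides to their common mean total."""
    total_in, total_out = sum(incoming), sum(outgoing)
    target = 0.5 * (total_in + total_out)
    return ([x * target / total_in for x in incoming],
            [x * target / total_out for x in outgoing])

# Hypothetical flows (tonnes): transport through the country vs. recorded trade.
incoming = [transit_flow(120.0, 80.0), transit_flow(60.0, 70.0)]  # [40.0, 0.0]
outgoing = [transit_flow(90.0, 60.0)]                             # [30.0]
incoming, outgoing = balance_transit(incoming, outgoing)
print(incoming, outgoing)  # aggregates of both sides now coincide
```

Rescaling to a common total is just one simple way to satisfy the equality constraint; the paper works within a full Bayesian macro-integration framework.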

4.
Consider the loglinear model for categorical data under the assumption of multinomial sampling. We are interested in testing between various hypotheses on the parameter space when some hypotheses on the model parameters can be written as constraints on the cell frequencies. The usual likelihood ratio test, with the maximum likelihood estimator for the unspecified parameters, is generalized to tests based on φ-divergence statistics, using the minimum φ-divergence estimator. These tests yield the classical likelihood ratio test as a special case. Asymptotic distributions for the new φ-divergence test statistics are derived under the null hypothesis.
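To make the connection concrete, the following sketch computes a φ-divergence statistic 2n·D_φ(p̂, p0) for multinomial data; with φ(x) = x·log(x) − x + 1 the divergence reduces to Kullback–Leibler and the statistic coincides with the classical G² likelihood-ratio statistic (the counts and null hypothesis below are hypothetical):

```python
import math

def phi_divergence(p, q, phi):
    """D_phi(p, q) = sum_i q_i * phi(p_i / q_i) for probability vectors p, q."""
    return sum(qi * phi(pi / qi) for pi, qi in zip(p, q))

def kl_phi(x):
    """phi(x) = x*log(x) - x + 1, the convex function that generates the
    Kullback-Leibler divergence (limit value 1 at x = 0)."""
    return x * math.log(x) - x + 1 if x > 0 else 1.0

counts = [30, 50, 20]                # hypothetical observed cell frequencies
n = sum(counts)
p_hat = [c / n for c in counts]      # MLE of the cell probabilities
p0 = [1 / 3, 1 / 3, 1 / 3]           # null hypothesis: equiprobable cells

stat = 2 * n * phi_divergence(p_hat, p0, kl_phi)
g2 = 2 * sum(c * math.log(c / (n * q)) for c, q in zip(counts, p0))
print(stat, g2)  # with this phi the two statistics coincide
```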

5.
This paper uses semidefinite programming (SDP) to construct Bayesian optimal designs for nonlinear regression models. The setup here extends the formulation of the optimal design problem as an SDP problem from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D‐, A‐ or E‐optimality. As an illustrative example, we demonstrate the approach using the power‐logistic model and compare our results with those in the literature. Additionally, we investigate how the optimal design is affected by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D‐optimal designs with two regressors for a logistic model and a two‐variable generalised linear model with a gamma distributed response are discussed, and some limitations of our approach are noted.
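The use of Gaussian quadrature here can be illustrated with a small sketch: under a normal prior, E[f(θ)] is approximated by a Gauss–Hermite rule after the change of variables θ = μ + √2·σ·x (the one-point logistic criterion below is a hypothetical stand-in for the paper's design criteria):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def gauss_hermite_expectation(f, mu, sigma, n_nodes=20):
    """Approximate E[f(theta)] for theta ~ N(mu, sigma^2) via the change of
    variables theta = mu + sqrt(2)*sigma*x in the Gauss-Hermite rule."""
    x, w = hermgauss(n_nodes)
    return (w * f(mu + np.sqrt(2.0) * sigma * x)).sum() / np.sqrt(np.pi)

# Hypothetical Bayesian criterion: expected log-information of a single-point
# logistic design at dose d, averaged over a normal prior on the slope theta.
def log_info(theta, d=1.0):
    p = 1.0 / (1.0 + np.exp(-theta * d))  # logistic response probability
    return np.log(d * d * p * (1.0 - p))  # log Fisher information of one trial

approx = gauss_hermite_expectation(log_info, mu=0.5, sigma=0.3)
print(approx)
```

The rule is exact for polynomial integrands up to degree 2·n_nodes − 1, which is why even modest node counts work well for smooth design criteria.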

6.
Generalized order statistics were introduced in Kamps (1995a). They enable a unified approach to several models of ordered random variables, e.g. (ordinary) order statistics, record values, sequential order statistics and record values from non-identical distributions. The purpose of this paper is to develop conditional distributions of one generalized order statistic given another and to characterize the underlying continuous distribution by different conditional expectations. Well-known results for ordinary order statistics and record values are extended to generalized order statistics.

7.
Much Ado About Nothing: the Mixed Models Controversy Revisited
We consider a well-known controversy that stems from the use of two mixed models for the analysis of balanced experimental data with a fixed and a random factor. It essentially originates in the different statistics developed from such models for testing that the variance parameter associated with the random factor is null. The corresponding hypotheses are interpreted as that of null random factor main effects in the presence of interaction. The controversy is further complicated by differing opinions regarding the appropriateness of such a hypothesis. Assuming that this is a sensible option, we show that the standard test statistics obtained under both models are really directed at different hypotheses and conclude that the problem lies in the definition of the main effects and interactions. We use expected values, as in the fixed effects case, to resolve the controversy, showing that under the most commonly used model, the test usually associated with the nonexistence of the random factor main effects addresses a different hypothesis. We discuss the choice of models, and some further problems that occur in the presence of unbalanced data.

8.
A radically new approach to statistical modelling, which combines mathematical techniques of Bayesian statistics with the philosophy of the theory of competitive on-line algorithms, has arisen over the last decade in computer science (to a large degree, under the influence of Dawid's prequential statistics). In this approach, which we call "competitive on-line statistics", it is not assumed that data are generated by some stochastic mechanism; the bounds derived for the performance of competitive on-line statistical procedures are guaranteed to hold (and not just hold with high probability or on the average). This paper reviews some results in this area; the new material in it includes the proofs for the performance of the Aggregating Algorithm in the problem of linear regression with square loss.
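For square-loss linear regression, the Aggregating Algorithm yields a ridge-like forecaster (often called the Vovk–Azoury–Warmuth forecaster); the sketch below shows one common form of it on hypothetical data and is not claimed to reproduce the paper's exact bounds:

```python
import numpy as np

def aa_regression_predictions(X, y, a=1.0):
    """Online ridge-type predictions of the kind used by the Aggregating
    Algorithm for square-loss linear regression (a sketch): the t-th prediction
    uses A_t = a*I + sum_{s<=t} x_s x_s' and b_{t-1} = sum_{s<t} y_s x_s."""
    n, d = X.shape
    A = a * np.eye(d)
    b = np.zeros(d)
    preds = np.empty(n)
    for t in range(n):
        x = X[t]
        A += np.outer(x, x)              # include the current input in A_t
        preds[t] = x @ np.linalg.solve(A, b)
        b += y[t] * x                    # y_t is revealed only after predicting
    return preds

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=300)
preds = aa_regression_predictions(X, y)
print(float(((preds - y) ** 2).mean()))  # average per-round square loss
```

Note the one-step "look ahead" in A_t, which includes the current input before the label is seen; this is what distinguishes the forecaster from plain online ridge regression.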

9.
A Treatise on Probability was published by John Maynard Keynes in 1921. The Treatise contains a critical assessment of the philosophical foundations of probability and of the statistical methodology of the time. We review the aspects of the book that are most closely related to statistics, avoiding a neophyte's uninteresting forays into philosophical issues. In particular, we examine the arguments provided by Keynes against the Bayesian approach, as well as the sketchy alternative he proposes of a return to Lexis' theory of analogies. Our conclusion is that the Treatise is a scholarly piece of work that looks at past advances rather than producing directions for the future.

10.
The decision maker receives signals imperfectly correlated with an unobservable state variable and must take actions whose payoffs depend on the state. The state randomly changes over time. In this environment, we examine the performance of simple linear updating rules relative to Bayesian learning. We show that a range of parameters exists for which linear learning results in exactly the same decisions as Bayesian learning, although not in the same beliefs. Outside this parameter range, we use simulations to demonstrate that the consumption level attainable under the optimal linear rule is virtually indistinguishable from the one attainable under Bayes’ rule, although the respective decisions will not always be identical. These results suggest that simple rules of thumb can have an advantage over Bayesian updating when more complex calculations are more costly to perform than less complex ones. We demonstrate the implications of such an advantage in an evolutionary model where agents “learn to learn.”
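A minimal simulation of this comparison might look as follows; the signal accuracy, switching rate and linear gain are hypothetical, and the Bayesian rule conditions on a binary signal while allowing for occasional state switches:

```python
import random

def bayes_update(belief, signal, q, pi):
    """One Bayes step: allow for the chance pi that the state switched, then
    condition on a binary signal that matches the state with probability q."""
    prior = belief * (1 - pi) + (1 - belief) * pi
    like1 = q if signal == 1 else 1 - q
    like0 = 1 - q if signal == 1 else q
    return prior * like1 / (prior * like1 + (1 - prior) * like0)

def linear_update(belief, signal, lam):
    """Linear rule of thumb: move the belief a fixed fraction toward the signal."""
    return belief + lam * (signal - belief)

random.seed(1)
q, pi, lam = 0.8, 0.05, 0.4   # hypothetical signal accuracy, switch rate, gain
state, b_bayes, b_lin = 1, 0.5, 0.5
T, same_decision = 2000, 0
for _ in range(T):
    if random.random() < pi:
        state = 1 - state      # the unobserved state occasionally switches
    signal = state if random.random() < q else 1 - state
    b_bayes = bayes_update(b_bayes, signal, q, pi)
    b_lin = linear_update(b_lin, signal, lam)
    same_decision += (b_bayes > 0.5) == (b_lin > 0.5)
print(same_decision / T)       # fraction of periods with the same decision
```

Even though the two belief paths differ, the induced threshold decisions agree in most periods, which is the sense in which the abstract says simple rules can match Bayesian learning.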

11.
This paper studies an alternative quasi likelihood approach under possible model misspecification. We derive a filtered likelihood from a given quasi likelihood (QL), called a limited information quasi likelihood (LI-QL), that contains relevant but limited information on the data generation process. Our LI-QL approach, on the one hand, extends the robustness of the QL approach to inference problems to which the existing approach does not apply. Our study in this paper, on the other hand, builds a bridge between the classical and Bayesian approaches for statistical inference under possible model misspecification. We establish a large sample correspondence between the classical QL approach and our LI-QL based Bayesian approach. An interesting finding is that the asymptotic distribution of an LI-QL based posterior and that of the corresponding quasi maximum likelihood estimator share the same “sandwich”-type second moment. Based on the LI-QL we develop inference methods that are useful for practical applications under possible model misspecification. In particular, we develop the Bayesian counterparts of classical QL methods that carry all the nice features of the latter studied in White (1982). In addition, we develop a Bayesian method for analyzing model specification based on an LI-QL.

12.
A separation between the academic subjects statistics and mathematical statistics has existed in Sweden almost as long as there have been statistics professors. The same distinction has not been maintained in other countries. Why has it been kept for so long in Sweden, and what consequences may it have had? In May 2015, it was 100 years since Mathematical Statistics was formally established as an academic discipline at a Swedish university where Statistics had existed since the turn of the century. We give an account of the debate in Lund and elsewhere about this division during the first decades after 1900 and present two of its leading personalities. The Lund University astronomer (and mathematical statistician) C. V. L. Charlier was a leading proponent for a position in mathematical statistics at the university. Charlier's adversary in the debate was Pontus Fahlbeck, professor in political science and statistics, who reserved the word statistics for ‘statistics as a social science’. Charlier not only secured the first academic position in Sweden in mathematical statistics for his former PhD student Sven Wicksell but also demonstrated that a mathematical statistician can be influential in matters of state, finance as well as in different natural sciences. Fahlbeck saw mathematical statistics as a set of tools that sometimes could be useful in his brand of statistics. After a summary of the organisational, educational and scientific growth of the statistical sciences in Sweden that has taken place during the last 50 years, we discuss what effects the Charlier–Fahlbeck divergence might have had on this development.

13.
Bayesian averaging, prediction and nonnested model selection
This paper studies the asymptotic relationship between Bayesian model averaging and post-selection frequentist predictors in both nested and nonnested models. We derive conditions under which their difference is of a smaller order of magnitude than the inverse of the square root of the sample size in large samples. This result depends crucially on the relation between posterior odds and frequentist model selection criteria. Weak conditions are given under which consistent model selection is feasible, regardless of whether models are nested or nonnested and regardless of whether models are correctly specified or not, in the sense that they select the best model with the least number of parameters with probability converging to 1. Under these conditions, Bayesian posterior odds and BICs are consistent for selecting among nested models, but are not consistent for selecting among nonnested models and possibly overlapping models. These findings have important implications for applied researchers who frequently use model selection tools for the empirical investigation of model predictions.
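The consistency result for nested models can be illustrated with BIC, which approximates −2 times the log posterior odds; in the hypothetical example below the smaller model is the true one, and with enough data its BIC is lower than that of the larger nesting model:

```python
import numpy as np

def bic_gaussian(y, X):
    """BIC for a Gaussian linear model fitted by least squares:
    n*log(RSS/n) + k*log(n); smaller is better."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(((y - X @ beta) ** 2).sum())
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)                # true model uses only x

X_small = np.column_stack([np.ones(n), x])            # correct, parsimonious
X_big = np.column_stack([np.ones(n), x, x**2, x**3])  # nests the small model
print(bic_gaussian(y, X_small), bic_gaussian(y, X_big))
```

The log(n) penalty is what makes BIC consistent among nested models; for nonnested or overlapping models, as the abstract notes, this consistency can fail.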

14.
Information-theoretic methodologies are increasingly being used in various disciplines. Frequently an information measure is adapted for a problem, yet the perspective of information as the unifying notion is overlooked. We set forth this perspective through presenting information-theoretic methodologies for a set of problems in probability and statistics. Our focal measures are Shannon entropy and Kullback–Leibler information. The background topics for these measures include notions of uncertainty and information, their axiomatic foundation, interpretations, properties, and generalizations. Topics with broad methodological applications include discrepancy between distributions, derivation of probability models, dependence between variables, and Bayesian analysis. More specific methodological topics include model selection, limiting distributions, optimal prior distribution and design of experiment, modeling duration variables, order statistics, data disclosure, and relative importance of predictors. Illustrations range from very basic to highly technical ones that draw attention to subtle points.
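As a basic illustration of the two focal measures, the following sketch computes Shannon entropy and Kullback–Leibler information for small discrete distributions (the distributions are arbitrary examples):

```python
import math

def shannon_entropy(p):
    """H(p) = -sum p_i log p_i, in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def kl_information(p, q):
    """Kullback-Leibler information K(p, q) = sum p_i log(p_i / q_i)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

uniform = [0.25] * 4
skewed = [0.7, 0.1, 0.1, 0.1]
print(shannon_entropy(uniform))  # log(4): maximum entropy on four outcomes
print(shannon_entropy(skewed))   # lower: the skewed distribution is less uncertain
print(kl_information(skewed, uniform), kl_information(uniform, skewed))
```

Note that the two KL values printed last differ: Kullback–Leibler information is a discrepancy, not a metric, which matters for several of the methodological topics listed above.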

15.
Journal of Econometrics, 2002, 109(2): 275–303
This article considers tests for parameter stability over time in general econometric models, possibly nonlinear-in-variables. Existing test statistics are commonly not asymptotically pivotal under nonstandard conditions. In such cases, the external bootstrap tests proposed in this paper are appealing from a practical viewpoint. We propose to use bootstrap versions of the asymptotic critical values based on a first-order asymptotic expansion of the test statistics under the null hypothesis, which consists of a linear transformation of the unobserved “innovations” partial sum process. The nature of these transformations under nonstandard conditions is discussed for the main testing principles. Also, we investigate the small sample performance of the proposed bootstrap tests by means of a small Monte Carlo experiment.
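A stripped-down sketch of the idea, with hypothetical residuals: a CUSUM-type statistic is computed from the partial-sum process, and external bootstrap critical values are obtained by rescaling the residuals with independent standard normal draws. This is only a schematic version, not the paper's exact procedure:

```python
import numpy as np

def cusum_stat(e):
    """Sup-norm of the standardised partial-sum process of residuals."""
    n = e.size
    s = np.cumsum(e) / (np.sqrt(n) * e.std())
    return float(np.abs(s).max())

def external_bootstrap_cv(e, level=0.05, B=999, seed=0):
    """Bootstrap critical value: rescale residuals by external N(0,1) draws
    and recompute the statistic B times."""
    rng = np.random.default_rng(seed)
    stats = [cusum_stat(e * rng.normal(size=e.size)) for _ in range(B)]
    return float(np.quantile(stats, 1 - level))

rng = np.random.default_rng(1)
e = rng.normal(size=200)          # hypothetical regression residuals
stat, cv = cusum_stat(e), external_bootstrap_cv(e)
print(stat, cv, stat > cv)        # reject parameter stability if stat > cv
```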

16.
When competing retailers lack full information about rivals' decision processes, how will dynamic pricing behavior vary from patterns observed in more traditional static or full-information models? We investigate this question in a dynamic alternating-moves duopoly model. Retailers update (linear) conjectures about rivals' future prices in a Bayesian fashion. We show that as observed and expected prices converge, a pricing equilibrium is always achieved, whether or not the conjectured and actual values of the slope of the rival's best response function are consistent. Assuming specific parameter values, we compare equilibrium prices and associated profits in our Bayesian learning model with those obtained under the assumptions of static Nash behavior, collusive behavior and dynamically optimal behavior with full information. We apply the notions of strategic substitutability and strategic complementarity to the analysis and find that when products are strategic complements, conjectures of higher rival price responsiveness lead to higher steady-state prices and profits. The reverse is true for strategic substitutes. We also find that learning about a rival's behavior proceeds more quickly, the less intensely related in demand are products. We find, in general, that equilibrium pricing patterns and profits can vary considerably from those in full-information environments, but that even with grossly wrong beliefs about rival behavior, competing retailers are still attracted to an equilibrium. The analysis suggests not only the value of investigating less-than-full-information situations but also the potential incremental value of signalling greater or less aggressiveness than truly characterizes one's behavior as a strategic option.

17.
Yuan Yanhua, Xu Ying, Chen Honghai. Value Engineering (《价值工程》), 2011, 30(15): 230-231
Through the analysis of examples in which probabilistic and statistical methods are applied to explain human behaviour and cognitive activity, this paper shows how such methods describe objective phenomena and account for human cognition. It concludes that probabilistic-statistical methods are deductive reasoning grounded in induction, and are in essence the inductive reasoning of epistemology.

18.
19.
We give a survey of different partitioning methods that have been applied to bacterial taxonomy. We introduce a theoretical framework, which makes it possible to treat the various models in a unified way. The key concepts of our approach are prediction and storing of microbiological information in a Bayesian forecasting setting. We show that there is a close connection between classification and probabilistic identification and that, in fact, our approach ties these two concepts together in a coherent way.
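The link between classification and probabilistic identification can be sketched with a toy Bayes computation: given binary test results and conditionally independent tests (a naive-Bayes assumption; the taxa, priors and rates below are invented), the posterior probability of each taxon follows from the priors and per-test positive rates:

```python
def identify(result, taxa):
    """Posterior probability of each taxon given a vector of binary test
    results, assuming tests are conditionally independent given the taxon."""
    posts = {}
    for name, (prior, positive_rates) in taxa.items():
        like = prior
        for r, rate in zip(result, positive_rates):
            like *= rate if r == 1 else 1.0 - rate
        posts[name] = like
    total = sum(posts.values())
    return {name: p / total for name, p in posts.items()}

# Hypothetical taxa: prior frequency and per-test probability of a positive result.
taxa = {
    "A": (0.5, [0.9, 0.8, 0.1]),
    "B": (0.3, [0.2, 0.7, 0.9]),
    "C": (0.2, [0.5, 0.5, 0.5]),
}
post = identify([1, 1, 0], taxa)
print(post)  # taxon A is the most probable given tests (+, +, -)
```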

20.
Health care research includes many studies that combine quantitative and qualitative methods. In this paper, we revisit the quantitative-qualitative debate and review the arguments for and against using mixed methods. In addition, we discuss the implications stemming from our view that the paradigms upon which the methods are based have a different view of reality and therefore a different view of the phenomenon under study. Because the two paradigms do not study the same phenomena, quantitative and qualitative methods cannot be combined for cross-validation or triangulation purposes. However, they can be combined for complementary purposes. Future standards for mixed-methods research should clearly reflect this recommendation.
