Similar Literature
20 similar documents found (search time 15 ms)
1.
In this paper the probability distribution of equilibrium outcomes is assumed to be a continuous but unknown function of agents' forecasts (which are probability measures). Agents start with a prior distribution on the set of mappings from forecasts into probabilities on outcomes. This induces an initial forecast. After observing the equilibrium outcome a posterior distribution is computed which induces a new forecast. The main result is that with probability one the forecasts converge to the set of fixed points of the unknown mapping. This can be interpreted as convergence to rational expectations.

2.
Summary  "Learning by experience" is a well-known part of the theory of subjective probabilities; the learning process is often derived from some prior distribution F(p), where p is, for instance, the unknown parameter of a binomial process. In this paper, the learning process is explicitly formulated and the corresponding prior distribution is derived from it. In this interpretation, subjective probabilities are part of an inference methodology, rather than a subjective evaluation of frequentistic probabilities. Implications are considered for a concept like the "non-informative prior"; the situation is considered in which the learning process appears to be in contact with some objectively determined prior.
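The kind of learning process described here is most easily seen in the conjugate Beta-binomial case, where a Beta prior plays the role of F(p) and each observation updates the belief. The following is a minimal Python sketch with invented starting values and data, not taken from the paper.

```python
def update_beta(alpha, beta, successes, failures):
    """Conjugate update of a Beta(alpha, beta) prior on a binomial parameter p."""
    return alpha + successes, beta + failures

def predictive_prob(alpha, beta):
    """Probability that the next trial succeeds under the current Beta belief."""
    return alpha / (alpha + beta)

# Start from a uniform ("non-informative") prior Beta(1, 1),
# then learn by experience from 7 successes and 3 failures.
a, b = 1.0, 1.0
a, b = update_beta(a, b, 7, 3)
print(predictive_prob(a, b))  # posterior predictive = 8/12
```

Each observed outcome shifts the predictive probability, which is the "learning by experience" the abstract refers to; the paper works in the reverse direction, deriving the prior from an explicitly formulated learning process.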

3.
This paper is concerned with the construction of prior probability measures for parametric families of densities where the framework is such that only beliefs or knowledge about a single observable data point is required. We pay particular attention to the parameter which minimizes a measure of divergence to the distribution providing the data. The prior distribution reflects this attention and we discuss the application of the Bayes rule from this perspective. Our framework is fundamentally non-parametric and we are able to interpret prior distributions on the parameter space using ideas of matching loss functions, one coming from the data model and the other from the prior.

4.
We propose a Bayesian nonparametric model to estimate rating migration matrices and default probabilities using the reinforced urn processes (RUP) introduced in Muliere et al. (2000). The estimated default probability becomes our prior information in a parametric model for the prediction of the number of bankruptcies, with the only assumption of exchangeability within rating classes. The Polya urn construction of the transition matrix justifies a Beta distributed de Finetti measure. Dependence among the processes is introduced through the dependence among the default probabilities, with the Bivariate Beta Distribution proposed in Olkin and Liu (2003) and its multivariate generalization.
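The Polya urn construction mentioned above can be sketched generically: drawing a "default" ball reinforces the urn with another ball of the same color, and the long-run default frequency is Beta-distributed (the de Finetti measure). This is a textbook one-urn illustration in Python with a hypothetical initial composition, not the authors' estimator.

```python
import random

def polya_urn_draws(alpha, beta, n, rng):
    """Simulate n draws from a Polya urn starting with alpha 'default' and
    beta 'survive' balls; each draw adds one ball of the drawn color, so the
    limiting default frequency is Beta(alpha, beta) distributed."""
    a, b = alpha, beta
    results = []
    for _ in range(n):
        default = rng.random() < a / (a + b)
        results.append(default)
        if default:
            a += 1
        else:
            b += 1
    return results

draws = polya_urn_draws(1.0, 9.0, 50, random.Random(0))
```

The sequence of draws is exchangeable, which is the assumption the abstract highlights within rating classes.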

5.
While the likelihood ratio measures statistical support for an alternative hypothesis about a single parameter value, it is undefined for an alternative hypothesis that is composite in the sense that it corresponds to multiple parameter values. Regarding the parameter of interest as a random variable enables measuring support for a composite alternative hypothesis without requiring the elicitation or estimation of a prior distribution, as described below. In this setting, in which parameter randomness represents variability rather than uncertainty, the ideal measure of the support for one hypothesis over another is the difference in the posterior and prior log-odds. That ideal support may be replaced by any measure of support that, on a per-observation basis, is asymptotically unbiased as a predictor of the ideal support. Such measures of support are easily interpreted and, if desired, can be combined with any specified or estimated prior probability of the null hypothesis. Two qualifying measures of support are minimax-optimal. An application to proteomics data indicates that a modification of optimal support computed from data for a single protein can closely approximate the estimated difference in posterior and prior odds that would be available with the data for 20 proteins.
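The ideal support described above, the difference between posterior and prior log-odds, is simple to compute once both probabilities are in hand; this equals the log Bayes factor. A minimal sketch (the function names are our own, not the paper's):

```python
import math

def log_odds(p):
    """Log-odds of an event with probability p (0 < p < 1)."""
    return math.log(p / (1.0 - p))

def ideal_support(prior_null, posterior_null):
    """Support for the alternative over the null: the posterior log-odds of
    the alternative minus its prior log-odds (the log Bayes factor)."""
    return log_odds(1.0 - posterior_null) - log_odds(1.0 - prior_null)
```

For example, moving from a prior null probability of 0.5 to a posterior of 0.2 yields support log(4) for the alternative.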

6.
Understanding what determines workplace performance is important for a variety of reasons. In the first place, it can inform the debate about the UK's low productivity growth. It also enables researchers to determine the efficacy of different organisational practices, policies and payment systems. In this article, we examine not the determinants of performance but how it is measured. Specifically, we assess the alternative measures of productivity and profitability that are available in the 2004 Workplace Employment Relations Survey (WERS). Previous WERS have been an important source of data in research into workplace performance. However, the subjective nature of the performance measures available in WERS prior to 2004 has attracted criticism. In the 2004 WERS, data were again collected on the subjective measure but, in addition, objective data on profitability and productivity were also collected. This allows a comparison to be made between the two types of measures. A number of validity tests are undertaken and the main conclusion is that subjective and objective measures of performance are weakly equivalent but that differences are also evident. Our findings suggest that it would be prudent to give most weight to results supported by both types of measure.

7.
Traditional clustering algorithms are deterministic in the sense that a given dataset always leads to the same output partition. This article modifies traditional clustering algorithms whereby data are associated with a probability model, and clustering is carried out on the stochastic model parameters rather than the data. This is done in a principled way using a Bayesian approach which allows the assignment of posterior probabilities to output partitions. In addition, the approach incorporates prior knowledge of the output partitions using Bayesian melding. The methodology is applied to two substantive problems: (i) a question of stylometry involving a simulated dataset and (ii) the assessment of potential champions of the 2010 FIFA World Cup.

8.
Let {P_S} be a family of choice probabilities that are caused by compatible latent factors. Then there should exist an underlying common probability measure P that in some degree induces each choice probability P_S. The problem of the existence of P will be considered with respect to a general concept of induced probability. In addition, the relevance of this concept for Mathematical Economics and the Social Sciences will be discussed.

9.
A probabilistic forecast is the estimated probability with which a future event will occur. One interesting feature of such forecasts is their calibration, or the match between the predicted probabilities and the actual outcome probabilities. Calibration has been evaluated in the past by grouping probability forecasts into discrete categories. We show here that we can do this without discrete groupings; the kernel estimators that we use produce efficiency gains and smooth estimated curves relating the predicted and actual probabilities. We use such estimates to evaluate the empirical evidence on the calibration error in a number of economic applications, including the prediction of recessions and inflation, using both forecasts made and stored in real time and pseudo-forecasts made using the data vintage available at the forecast date. The outcomes are evaluated using both first-release outcome measures and subsequent revised data. We find substantial evidence of incorrect calibration in professional forecasts of recessions and inflation from the SPF, as well as in real-time inflation forecasts from a variety of output gap models.
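A smooth calibration curve of the kind described, relating predicted to actual probabilities without discrete groupings, can be sketched with a Nadaraya-Watson kernel regression of outcomes on forecast probabilities. The Gaussian kernel and bandwidth below are illustrative choices, not necessarily those of the paper.

```python
import math

def kernel_calibration(forecasts, outcomes, grid, bandwidth=0.1):
    """Nadaraya-Watson estimate of P(outcome = 1 | forecast = p) at each p in
    `grid`, a smooth alternative to binning forecasts into discrete categories."""
    def k(u):
        return math.exp(-0.5 * u * u)  # Gaussian kernel
    curve = []
    for p in grid:
        weights = [k((p - f) / bandwidth) for f in forecasts]
        total = sum(weights)
        curve.append(sum(w * y for w, y in zip(weights, outcomes)) / total)
    return curve
```

A well-calibrated forecaster produces a curve close to the 45-degree line; deviations from it estimate the calibration error at each forecast level.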

10.
In the context of stationary point processes, measurements are usually made from a time point chosen at random or from an occurrence chosen at random. That is, either the stationary distribution P or its Palm distribution P° is the ruling probability measure. In this paper an approach is presented to bridge the gap between these distributions. We consider probability measures which give exactly the same events zero probability as P°, having simple relations with P. Relations between P and P° are derived with these intermediate measures as bridges. With the resulting Radon-Nikodym densities several well-known results can be proved easily. New results are derived. As a corollary of cross ergodic theorems a conditional version of the well-known inversion formula is proved. Several approximations of P° are considered, for instance the local characterization of P° as a limit of conditional probability measures P°_N. The total variation distance between P° and P₁ can be expressed in terms of the P-distribution function of the forward recurrence time.

11.
Randomized response (RR) techniques in surveys are used to collect data on sensitive issues while trying to protect the respondents' privacy. The degree of confidentiality will clearly determine whether or not respondents choose to cooperate. There have been many proposals for privacy measures, with very different implications for an optimal model design. These privacy-protection measures involve both conditional probabilities of being perceived as belonging to the sensitive group, denoted P(A|yes) and P(A|no). In this paper, we introduce an alternative criterion to measure privacy protection and reconsider and compare some RR models in the light of the trade-off between efficiency and privacy protection. This measure is known to the respondents before they agree to use the RR model, and it is helpful for choosing an optimal RR model in practice.
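For concreteness, Warner's classic RR design illustrates how the measures P(A|yes) and P(A|no) arise: each respondent answers the sensitive question ("Are you in A?") with probability p and its complement with probability 1-p. This is a standard textbook construction, not the alternative criterion proposed in the paper.

```python
def warner_estimate(prop_yes, p):
    """Moment estimator of the sensitive proportion pi under Warner's model,
    where a respondent answers about A with probability p and about not-A
    with probability 1 - p (requires p != 0.5)."""
    return (prop_yes - (1.0 - p)) / (2.0 * p - 1.0)

def privacy_measures(pi, p):
    """Conditional probabilities of membership in the sensitive group A
    given the reported answer: P(A|yes) and P(A|no)."""
    p_yes = p * pi + (1.0 - p) * (1.0 - pi)
    p_a_given_yes = p * pi / p_yes
    p_a_given_no = (1.0 - p) * pi / (1.0 - p_yes)
    return p_a_given_yes, p_a_given_no
```

As p approaches 1 the design becomes more efficient but P(A|yes) approaches 1, exposing respondents; the efficiency/privacy trade-off the abstract discusses is exactly this tension.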

12.
Information-theoretic methodologies are increasingly being used in various disciplines. Frequently an information measure is adapted for a problem, yet the perspective of information as the unifying notion is overlooked. We set forth this perspective through presenting information-theoretic methodologies for a set of problems in probability and statistics. Our focal measures are Shannon entropy and Kullback–Leibler information. The background topics for these measures include notions of uncertainty and information, their axiomatic foundation, interpretations, properties, and generalizations. Topics with broad methodological applications include discrepancy between distributions, derivation of probability models, dependence between variables, and Bayesian analysis. More specific methodological topics include model selection, limiting distributions, optimal prior distribution and design of experiment, modeling duration variables, order statistics, data disclosure, and relative importance of predictors. Illustrations range from very basic to highly technical ones that draw attention to subtle points.
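The two focal measures are easy to compute for discrete distributions; a minimal sketch using natural logarithms, so both quantities are in nats:

```python
import math

def shannon_entropy(p):
    """Shannon entropy (in nats) of a discrete distribution p."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def kl_divergence(p, q):
    """Kullback-Leibler information K(p || q); requires q_i > 0 wherever p_i > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

Entropy quantifies the uncertainty of a single distribution, while Kullback-Leibler information quantifies the discrepancy between two distributions; most of the methodological topics listed in the abstract are built from these two primitives.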

13.
The objective of this paper is to identify variational preferences and multiple-prior (maxmin) expected utility functions that exhibit aversion to risk under some probability measure from among the priors. Risk aversion has profound implications on agents' choices and on market prices and allocations. Our approach to risk aversion relies on the theory of mean-independent risk of Werner (2009). We identify necessary and sufficient conditions for risk aversion of convex variational preferences and concave multiple-prior expected utilities. The conditions are stability of the cost function and of the set of probability priors, respectively, with respect to a probability measure. The two stability properties are new concepts. We show that cost functions defined by the relative entropy distance or other divergence distances have that property. Sets of priors defined as cores of convex distortions of probability measures or as neighborhoods in divergence distances have that property, too.

14.
Two types of probability are discussed, one of which is additive whilst the other is non-additive. Popular theories that attempt to justify the importance of the additivity of probability are then critically reviewed. By making assumptions, the two types of probability put forward are utilised to justify a method of inference which involves betting preferences being revised in the light of the data. This method of inference can be viewed as a justification for a weighted likelihood approach to inference, where the plausibility of different values of a parameter θ given the data x is measured by the quantity q(θ) = l(x, θ)w(θ), where l(x, θ) is the likelihood function and w(θ) is a weight function. Even though, unlike Bayesian inference, the method has the disadvantageous property that the measure q(θ) is generally non-additive, it is argued that the method has other properties which may be considered very desirable and which have the potential to imply that, when everything is taken into account, the method is a serious alternative to the Bayesian approach in many situations. The methodology that is developed is applied to both a toy example and a real example.
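The weighted likelihood q(θ) = l(x, θ)w(θ) is straightforward to evaluate on a grid. The binomial example below (7 successes in 10 trials, flat weight) is purely illustrative and not taken from the paper.

```python
import math

def weighted_likelihood(theta_grid, log_lik, weight):
    """Evaluate the (generally non-additive) plausibility measure
    q(theta) = l(x, theta) * w(theta) on a grid of parameter values;
    note that q need not sum or integrate to one."""
    return [math.exp(log_lik(t)) * weight(t) for t in theta_grid]

grid = [i / 100 for i in range(1, 100)]
# Binomial likelihood for 7 successes in 10 trials, with a flat weight w = 1.
q = weighted_likelihood(grid,
                        lambda t: 7 * math.log(t) + 3 * math.log(1 - t),
                        lambda t: 1.0)
```

With a flat weight the most plausible value is the maximum likelihood estimate, here 0.7; a non-flat w(θ) tilts plausibility without the normalization that Bayesian inference would impose.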

15.
Tukey (1975) introduced the notion of halfspace depth in a data analytic context, as a multivariate analog of rank relative to a finite data set. Here we focus on the depth function of an arbitrary probability distribution on ℝ^p, and even of a non-probability measure. The halfspace depth of any point θ in ℝ^p is the smallest measure of a closed halfspace that contains θ. We review the properties of halfspace depth, enriched with some new results. For various measures, uniform as well as non-uniform, we derive an expression for the depth function. We also compute the Tukey median, which is the θ in which the depth function attains its maximal value. Various interesting phenomena occur. For the uniform distribution on a triangle, a square or any regular polygon, the depth function has ridges that correspond to an 'inversion' of depth contours. And for a product of Cauchy distributions, the depth contours are squares. We also consider an application of the depth function to voting theory.
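For an empirical (data) measure in the plane, halfspace depth can be approximated by scanning a finite set of directions: for each direction, count the fraction of points in the closed halfplane through the candidate point, and take the minimum. This is a rough illustrative sketch, not an exact algorithm from the paper.

```python
import math

def halfspace_depth(point, data, n_dirs=360):
    """Approximate halfspace (Tukey) depth of `point` w.r.t. the empirical
    measure on `data`: the smallest fraction of points lying in any closed
    halfplane containing `point`, over n_dirs scanned directions."""
    n = len(data)
    depth = 1.0
    for k in range(n_dirs):
        a = 2.0 * math.pi * k / n_dirs
        ux, uy = math.cos(a), math.sin(a)
        proj0 = ux * point[0] + uy * point[1]
        count = sum(1 for (x, y) in data
                    if ux * x + uy * y >= proj0 - 1e-12)
        depth = min(depth, count / n)
    return depth
```

For the four corners of the unit square, the center has depth 1/2 (every closed halfplane through it contains an endpoint of each diagonal) while each corner has depth 1/4, matching the rank-like interpretation of depth.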

16.
The expected value of information represents the maximum amount the decision maker should spend on inquiry before making a decision. This amount depends upon the accuracy of the information. In many cases of inquiry, prior objective knowledge of the accuracy is not available. This paper presents and compares two methods of subjectively assessing the value of imperfect information in the binary decision model. In the first method, the decision maker provides a likelihood function for the inquiry and hence the probabilities of error. The second method is the preposterior approach, in which the decision maker provides the prior distribution for the posterior probability.
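A useful benchmark for the binary decision model is the expected value of perfect information (EVPI), which bounds from above what any imperfect inquiry can be worth. A minimal sketch with an invented two-action, two-state payoff table (this is standard decision theory, not the paper's subjective assessment methods):

```python
def expected_value_perfect_info(p_state, payoff):
    """EVPI for a binary decision. payoff[action][state] is the payoff table
    and p_state is the prior probability of state 1.
    EVPI = E[best payoff with the state known] - best expected payoff without."""
    ev_actions = [payoff[a][0] * (1.0 - p_state) + payoff[a][1] * p_state
                  for a in (0, 1)]
    ev_without_info = max(ev_actions)
    ev_with_info = ((1.0 - p_state) * max(payoff[0][0], payoff[1][0])
                    + p_state * max(payoff[0][1], payoff[1][1]))
    return ev_with_info - ev_without_info
```

With payoffs [[10, 0], [0, 10]] and equal prior state probabilities, no action beats an expected payoff of 5 without information, while knowing the state yields 10, so EVPI is 5: the most any inquiry, perfect or imperfect, could be worth here.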

17.
To estimate the mean sojourn time, a sample of Tilburg fair visitors was asked for the duration of their stay on the fair grounds. The longer a visitor's sojourn, the larger his/her probability of being interviewed will be; therefore, longer sojourn times will be overrepresented in the sample. As a consequence, the arithmetic sample mean is not a good estimator.
The paper places this problem against a theoretical background. Sampling with unequal probabilities is considered in a general context. The special case in which the sampling probabilities are a function of the variable under investigation is discussed in detail. As a better estimator, the harmonic mean of the observations is presented. Most properties of this estimator are difficult to derive analytically, but a suitable variance estimator is derived. The behavior of the estimator and the variance estimator is studied in a number of quite different examples.
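The harmonic-mean estimator itself is a one-liner: when the probability of being sampled is proportional to the duration itself, weighting each observation by its reciprocal undoes the length bias. The data below are invented for illustration, not the Tilburg fair data.

```python
def harmonic_mean_estimator(observed_durations):
    """Estimate the population mean sojourn time from a length-biased sample
    (inclusion probability proportional to duration): the harmonic mean."""
    n = len(observed_durations)
    return n / sum(1.0 / x for x in observed_durations)
```

For the length-biased sample [2, 4, 4, 4] the arithmetic mean is 3.5, but the harmonic mean 3.2 corrects for the overrepresentation of long sojourns.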

18.
Researchers have widely studied the nexus between corporate environmental ("green") policy, its green performance and firm financial performance, but with mixed findings. A potential explanation for these mixed findings is the focus of extant studies on the direct and immediate impact of environmental performance on financial performance, to the exclusion of firm-specific boundary conditions. Furthermore, all prior research studies the effect of environmental performance on either stock market-based performance measures (i.e., stock return) or accounting-based performance measures (i.e., return on assets). A missing third dimension of firm performance, product-market-based performance (i.e., market share), has so far remained unexplored despite representing a crucial objective when innovating. Using Newsweek's annual green ranking as a novel measure of environmental performance for a panel of U.S. firms from 2010 to 2015, this paper attempts to fill these voids in the literature. The results show a positive relationship between firms' environmental performance and market share as a measure of product-market-based performance. The findings further demonstrate that this relationship is positively moderated by the level of customer awareness and innovativeness of the firm: the higher the level of awareness of a firm's environmental credentials and innovativeness, the stronger the effects of environmental performance on market share. Our results are robust against endogeneity concerns and alternative measures of firm financial and environmental performance.

19.
This paper examines the investments and performance of community development venture capital (CDVC). We find substantial differences between CDVCs and traditional VCs: CDVC investments are far more likely to be in nonmetropolitan regions and in regions with little prior venture activity. CDVC investments are likely to be in earlier stage investments and in industries outside the venture capital mainstream that have lower probabilities of successful exit. Even after controlling for this unattractive transaction mixture, the probability of a CDVC investment being successfully exited is lower. One benefit of CDVCs may be their effect in bringing traditional VCs to underserved regions: controlling for the presence of traditional VC investments, each additional CDVC investment results in an additional 0.06 new traditional VC firms in a region.

20.
ϕ-divergence statistics quantify the divergence between a joint probability measure and the product of its marginal probabilities on the basis of contingency tables. Asymptotic properties of these statistics are investigated, considering either random sampling or stratified random sampling with proportional allocation and independence among strata. Finally, some tests of hypotheses of independence are presented. The research in this paper was supported in part by DGICYT Grants No. PS91-0387 and No. PB91-0155. Their financial support is gratefully acknowledged.
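One standard parametric subfamily of ϕ-divergence statistics for independence in a two-way table is the Cressie-Read power-divergence family, which includes Pearson's chi-square (λ = 1) and the likelihood-ratio G² (as λ → 0). A sketch (the choice of this subfamily is ours for illustration, not necessarily the paper's; λ = 0 itself is a limit the code does not handle):

```python
def power_divergence_stat(table, lam=1.0):
    """Cressie-Read power-divergence statistic for independence in a two-way
    contingency table; lam=1 recovers Pearson's chi-square statistic."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_tot[i] * col_tot[j] / n  # independence fit
            if obs > 0:
                stat += obs * ((obs / expected) ** lam - 1.0)
    return 2.0 * stat / (lam * (lam + 1.0))
```

Under independence the statistic is asymptotically chi-square distributed with (rows - 1)(columns - 1) degrees of freedom, which underlies the tests the abstract refers to.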
