Similar Literature (20 results)
1.
It is well known that there is a large degree of uncertainty around Rogoff's consensus half‐life of the real exchange rate. To obtain a more efficient estimator, we develop a system method that combines the Taylor rule and a standard exchange rate model to estimate half‐lives. Further, we propose a median unbiased estimator for the system method based on the generalized method of moments with non‐parametric grid bootstrap confidence intervals. Applying the method to real exchange rates of 18 developed countries against the US dollar, we find that most half‐life estimates from the single‐equation method fall in the range of 3–5 years, with wide confidence intervals that extend to positive infinity. In contrast, the system method yields median‐unbiased estimates that are typically shorter than 1 year, with much sharper 95% confidence intervals. Our Monte Carlo simulations support the interpretation that the true half‐lives are short and that the long estimates produced by single‐equation methods reflect those methods' high degree of uncertainty. Copyright © 2014 John Wiley & Sons, Ltd.
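As a rough illustration of the single‐equation approach (not the paper's grid‐bootstrap GMM system estimator), a half‐life can be backed out of an estimated AR(1) persistence coefficient rho via h = ln(0.5)/ln(rho). A minimal Python sketch on simulated data:

```python
import numpy as np

def half_life_ar1(q):
    """Half-life of a deviation under an AR(1): h = ln(0.5) / ln(rho).

    q: array of log real exchange rates. Returns the half-life in the
    time units of the sampling frequency (np.inf if rho lies outside (0, 1)).
    """
    y, x = q[1:], q[:-1]
    xc = x - x.mean()
    rho = np.dot(xc, y - y.mean()) / np.dot(xc, xc)  # OLS slope estimate
    return np.log(0.5) / np.log(rho) if 0.0 < rho < 1.0 else np.inf

# Simulated example: annual AR(1) with rho = 0.85, so the true half-life
# is ln(0.5)/ln(0.85), roughly 4.3 years.
rng = np.random.default_rng(0)
q = np.zeros(200)
for t in range(1, 200):
    q[t] = 0.85 * q[t - 1] + rng.normal(scale=0.05)
print(f"estimated half-life: {half_life_ar1(q):.2f} periods")
```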

2.
In this study, we reconsider the classical positive association between the level of market uncertainty and an organization's propensity to form ties with organizations of similar status. Although prior research argues that the greater the uncertainty, the higher the level of status homophily, we suggest that this relationship is contingent upon framing that affects positive or negative valence towards uncertainty. In an up market, organizations tend to frame uncertainty as upside risk, and thus will favour explorative uncertainty‐mitigation devices; whereas, in a down market, organizations primarily frame uncertainty as downside risk, and thus will rely on more conservative uncertainty‐mitigation mechanisms. We therefore predict that a greater number of status‐heterophilous ties will be formed in an up market than in a down market. We discuss the implications of our results for status theory and more broadly for research on strategic decision making under uncertainty.

3.
This paper is concerned with the construction of prior probability measures for parametric families of densities in a framework where only beliefs or knowledge about a single observable data point are required. We pay particular attention to the parameter which minimizes a measure of divergence to the distribution providing the data. The prior distribution reflects this attention, and we discuss the application of the Bayes rule from this perspective. Our framework is fundamentally non‐parametric, and we are able to interpret prior distributions on the parameter space using ideas of matching loss functions, one coming from the data model and the other from the prior.
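For orientation, and assuming the divergence is Kullback–Leibler (the paper's framework may allow other choices), the parameter the abstract singles out can be written as

```latex
\theta_0
  = \arg\min_{\theta \in \Theta} d\left(F, F_{\theta}\right)
  = \arg\min_{\theta \in \Theta} \int \log \frac{f(x)}{f_{\theta}(x)}\, f(x)\, dx ,
```

where F (with density f) is the distribution providing the data and {F_theta} is the parametric family.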

4.
While the likelihood ratio measures statistical support for an alternative hypothesis about a single parameter value, it is undefined for an alternative hypothesis that is composite in the sense that it corresponds to multiple parameter values. Regarding the parameter of interest as a random variable enables measuring support for a composite alternative hypothesis without requiring the elicitation or estimation of a prior distribution, as described below. In this setting, in which parameter randomness represents variability rather than uncertainty, the ideal measure of the support for one hypothesis over another is the difference in the posterior and prior log‐odds. That ideal support may be replaced by any measure of support that, on a per‐observation basis, is asymptotically unbiased as a predictor of the ideal support. Such measures of support are easily interpreted and, if desired, can be combined with any specified or estimated prior probability of the null hypothesis. Two qualifying measures of support are minimax‐optimal. An application to proteomics data indicates that a modification of optimal support computed from data for a single protein can closely approximate the estimated difference in posterior and prior odds that would be available with the data for 20 proteins.
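For orientation, with two point hypotheses the difference between posterior and prior log‐odds reduces to the log Bayes factor, because the prior odds cancel:

```latex
S(x)
  = \log \frac{P(H_1 \mid x)}{P(H_0 \mid x)} - \log \frac{P(H_1)}{P(H_0)}
  = \log \frac{p(x \mid H_1)}{p(x \mid H_0)} .
```

In the composite setting of the paper, loosely speaking, p(x | H_1) becomes a marginal likelihood, an average of p(x | theta) over the distribution describing the variability of the parameter under H_1.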

5.
We propose a new nonlinear time series model of expected returns based on the dynamics of the cross‐sectional rank of realized returns. We model the joint dynamics of a sharp jump in the cross‐sectional rank and the asset return by analyzing (1) the marginal probability distribution of a jump in the cross‐sectional rank within the context of a duration model, and (2) the probability distribution of the asset return conditional on a jump, for which we specify different dynamics depending upon whether or not a jump has taken place. As a result, the expected returns are generated by a mixture of normal distributions weighted by the probability of jumping. The model is estimated for the weekly returns of the constituents of the S&P 500 index from 1990 to 2000, and its performance is assessed in an out‐of‐sample exercise from 2001 to 2005. Based on the one‐step‐ahead forecast of the mixture model, we propose a trading rule, which is evaluated according to several forecast evaluation criteria and compared to 18 alternative trading rules. We find that the proposed trading strategy is the dominant rule, providing superior risk‐adjusted mean trading returns and accurate value‐at‐risk forecasts. Copyright © 2008 John Wiley & Sons, Ltd.
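As a minimal sketch of the resulting forecast (illustrative parameter values, not the paper's estimates, and with the duration model for the jump probability replaced by a fixed probability):

```python
import numpy as np

# One-step-ahead mixture forecast: with probability p_jump the return is
# drawn from a 'jump' regime, otherwise from a 'no-jump' regime. All
# numbers below are illustrative assumptions.
p_jump = 0.15
mu_jump, sd_jump = -0.010, 0.045   # jump regime: lower mean, higher volatility
mu_stay, sd_stay = 0.002, 0.020    # no-jump regime

# Mixture mean and variance (law of total expectation / total variance).
mean = p_jump * mu_jump + (1 - p_jump) * mu_stay
second_moment = (p_jump * (sd_jump**2 + mu_jump**2)
                 + (1 - p_jump) * (sd_stay**2 + mu_stay**2))
sd = np.sqrt(second_moment - mean**2)
print(f"E[r] = {mean:.4f}, sd[r] = {sd:.4f}")

# A naive rule in the spirit of the paper: hold the asset only when the
# mixture forecast of the expected return is positive.
position = 1 if mean > 0 else 0
```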

6.
A classical problem in forensics is to determine, when a suspect S shares a property Γ with a criminal C, the probability that S = C. In this article we give a detailed account of this problem in various degrees of generality. We start with the classical case where the probability of having Γ, as well as the a priori probability of being the criminal, is the same for all individuals. We then generalize the solution to deal with heterogeneous populations, biased search procedures for the suspect, Γ‐correlations, uncertainty about the subpopulation of the criminal and the suspect, and uncertainty about the Γ‐frequencies. We also consider the effect of the way the search for S is conducted, in particular when this is done by a database search. A recurring theme is that conditioning is important when one wants to quantify the 'weight' of the evidence by a likelihood ratio. Apart from these mathematical issues, we also discuss the practical problems in applying them to the legal process. The posterior probabilities of C = S are typically the same for all reasonable choices of the hypotheses, but this is not the whole story. The legal process might force one to dismiss certain hypotheses, for instance when the relevant likelihood ratio depends on prior probabilities. We discuss this and related issues as well. As such, the article is relevant from both a theoretical and an applied point of view.
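The classical homogeneous baseline case mentioned first can be computed in a few lines; a sketch:

```python
def posterior_match_probability(n_others, gamma_freq):
    """P(S = C | S and C share trait Gamma) in the classical uniform case.

    Assumptions (the homogeneous baseline in the abstract): a uniform prior
    over the suspect plus n_others alternative candidates, trait frequency
    gamma_freq independent across individuals, and an unbiased search.
    Posterior odds = prior odds * likelihood ratio with LR = 1/gamma_freq,
    which simplifies to 1 / (1 + n_others * gamma_freq).
    """
    return 1.0 / (1.0 + n_others * gamma_freq)

# Example: 10,000 alternative candidates and a trait frequency of 1 in
# 20,000 give a posterior probability of about 2/3, far from certainty.
print(posterior_match_probability(10_000, 1 / 20_000))
```

The paper's generalizations (heterogeneous populations, biased searches, database searches) modify the prior odds and the likelihood ratio in this identity.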

7.
Inferences about unobserved random variables, such as future observations, random effects and latent variables, are often of interest. In this paper, to make probability statements about unobserved random variables without assuming priors on fixed parameters, we propose the use of the confidence distribution for fixed parameters. We focus on interval estimators and the related probability statements. In random‐effect models, intervals can be formed either for future (yet‐to‐be‐realised) random effects or for realised values of random effects. The consistency of intervals in these two cases requires different regularity conditions. We investigate their finite‐sample properties via numerical studies.

8.
This paper develops methods for estimating and forecasting in Bayesian panel vector autoregressions of large dimensions with time‐varying parameters and stochastic volatility. We exploit a hierarchical prior that takes into account possible pooling restrictions involving both VAR coefficients and the error covariance matrix, and propose a Bayesian dynamic learning procedure that controls for various sources of model uncertainty. We tackle computational concerns by means of a simulation‐free algorithm that relies on analytical approximations to the posterior. We use our methods to forecast inflation rates in the eurozone and show that these forecasts are superior to alternative methods for large vector autoregressions.

9.
In this paper, we propose a model of income dynamics which takes account of mobility both within and between jobs. The model is a hybrid of the mover‐stayer model of income dynamics and a geometric random walk. In any period, individuals face a discrete probability of 'moving', in which case their income is a random draw from a stationary recurrent distribution. Otherwise, they 'stay' and incomes follow a geometric random walk. The model is estimated on income transition data for the United Kingdom from the British Household Panel Survey (BHPS) and provides a good explanation of observed non‐linearities in income dynamics. The steady‐state distribution of the model provides a good fit for the observed, cross‐sectional distribution of earnings. We also evaluate the impact of tertiary education on income transitions and on the long‐run distribution of incomes. Copyright © 2001 John Wiley & Sons, Ltd.
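A minimal simulation sketch of the hybrid process (distributional choices and parameter values are illustrative assumptions, not the BHPS estimates):

```python
import numpy as np

def simulate_income(T, p_move, rng, y0=2_000.0):
    """Hybrid mover-stayer / geometric random walk income path."""
    y = np.empty(T)
    y[0] = y0
    for t in range(1, T):
        if rng.random() < p_move:
            # 'Move': income is a fresh draw from a stationary recurrent
            # distribution (log-normal here, as an assumption).
            y[t] = rng.lognormal(mean=np.log(2_000.0), sigma=0.5)
        else:
            # 'Stay': income follows a geometric random walk.
            y[t] = y[t - 1] * np.exp(rng.normal(loc=0.01, scale=0.05))
    return y

rng = np.random.default_rng(1)
path = simulate_income(T=120, p_move=0.05, rng=rng)  # ten years, monthly
```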

10.
We employ a neoclassical business‐cycle model to study two sources of business‐cycle fluctuations: marginal efficiency of investment shocks, and total factor productivity shocks. The parameters of the model are estimated using a Bayesian procedure that accommodates prior uncertainty about their magnitudes; from these estimates, posterior distributions of the two shocks are obtained. The postwar US experience suggests that both shocks are important in understanding fluctuations, but that total factor productivity shocks are primarily responsible for beginning and ending recessions. Copyright © 2000 John Wiley & Sons, Ltd.

11.
In principle, making credit decisions under uncertainty can be approached by estimating the potential future outcomes that will result from the various decision alternatives. In practice, estimation difficulties may arise as a result of selection bias and limited historic testing. We review some theoretical results and practical estimation tools from observational study design and causal modeling, and evaluate their relevance to credit decision problems. Building on these results and tools, we propose a novel approach for estimating potential outcomes for credit decisions with multiple alternatives, based on matching on multiple propensity scores. We demonstrate the approach and discuss results for risk-based pricing and credit line increase problems. Among the strengths of our approach are its transparency about data support for the estimates and its ability to incorporate prior knowledge in the extrapolative inference of treatment-response curves.
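A minimal single‐treatment sketch of propensity‐score matching on synthetic data (the paper generalizes this to multiple decision alternatives matched on multiple propensity scores):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic credit example: treatment assignment (e.g. a line increase)
# depends on applicant features, creating selection bias.
rng = np.random.default_rng(2)
X = rng.normal(size=(1_000, 3))                     # applicant features
treated = (X[:, 0] + rng.normal(size=1_000)) > 0    # biased assignment
outcome = X[:, 1] + 0.5 * treated + rng.normal(size=1_000)

# Step 1: estimate the propensity score P(treated | X).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: for each treated unit, match the control with the closest score.
t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]

# Step 3: average treated-minus-matched-control outcome (ATT estimate).
att = (outcome[t_idx] - outcome[matches]).mean()
print(f"estimated effect on the treated: {att:.3f}  (true effect 0.5)")
```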

12.
We study the relationship between wealth and labour market transitions. A lifecycle model, in which individuals face uncertainty about the availability of jobs, serves as a basis for a reduced‐form specification of the probabilities of labour market transitions, which depend on wealth according to the model. Theory implies a negative effect of wealth on the probability of becoming or staying employed. This implication is tested in a reduced‐form model of labour market transitions in which we allow for random effects, initial conditions, and measurement error in wealth. Elasticities of transition probabilities with respect to wealth are presented. Copyright © 2002 John Wiley & Sons, Ltd.

13.
Health outcomes, such as mortality and readmission rates, are commonly used as indicators of hospital quality and as a basis for designing pay‐for‐performance (P4P) incentive schemes. We propose a model of hospital behavior under P4P where patients differ in severity and can choose their hospital based on quality. We assume that risk‐adjustment is not fully accounted for and that unobserved dimensions of severity remain. We show that the introduction of P4P which rewards lower mortality and/or readmission rates can weaken or strengthen hospitals' incentives to provide quality. Since patients with higher severity have a different probability of exercising choice when quality varies, this introduces a selection bias (a patient composition effect) which in turn alters quality incentives. We also show that this composition effect increases with the degree of competition. Critically, readmission rates suffer from an additional source of selection bias through mortality rates, since quality affects the distribution of surviving patients. This implies that the scope for counterproductive effects of P4P is larger when financial rewards are linked to readmission rates rather than mortality rates.

14.
This paper develops a valuation model for a project or firm in the presence of uncertainty about the mean of the probability distribution of the cash flows generated by the project. Its major point is that in the presence of parameter uncertainty the value of the project is smaller than in the case where the mean cash flow is perfectly known. The second point is that when there is a known covariance between project cash flows and aggregate market cash flows, investors can learn about the unknown mean cash flow by observing the market. This is referred to as 'learning from the market'.
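As a generic illustration of the learning mechanism (a textbook normal–normal update, not the paper's valuation model): if the unknown mean cash flow mu has prior N(mu_0, 1/tau_0) and the market supplies a correlated signal s ~ N(mu, 1/tau_s), then

```latex
\mu \mid s \sim N\!\left( \frac{\tau_0 \mu_0 + \tau_s s}{\tau_0 + \tau_s},\;
                          \frac{1}{\tau_0 + \tau_s} \right),
```

so observing the market tightens the distribution of mu and thereby shrinks the parameter uncertainty that depresses the project's value.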

15.
Identification in most sample selection models depends on the independence of the regressors and the error terms conditional on the selection probability. All quantile and mean functions are parallel in these models, which implies that quantile estimators cannot reveal any heterogeneity (it is ruled out by assumption). Quantile estimators are nevertheless useful for testing the conditional independence assumption because they are consistent under the null hypothesis. We propose tests of the Kolmogorov–Smirnov type based on the conditional quantile regression process. Monte Carlo simulations show that their size is satisfactory and their power sufficient to detect deviations under plausible data‐generating processes. We apply our procedures to female wage data from the 2011 Current Population Survey and show that homogeneity is clearly rejected. Copyright © 2015 John Wiley & Sons, Ltd.
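A heuristic Python sketch of the parallel‐quantiles implication (it checks slope constancy across quantiles on simulated data; it is not the paper's formal Kolmogorov–Smirnov test, which requires resampling for critical values):

```python
import numpy as np
import statsmodels.api as sm

# Under conditional independence all conditional quantile functions are
# parallel, so quantile-regression slopes should be constant across tau.
# Simulate a heteroskedastic violation and inspect the slope spread.
rng = np.random.default_rng(3)
n = 2_000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + np.exp(0.3 * x) * rng.normal(size=n)

X = sm.add_constant(x)
taus = np.linspace(0.1, 0.9, 9)
slopes = np.array([sm.QuantReg(y, X).fit(q=t).params[1] for t in taus])

# A large spread of slopes over tau is evidence against parallelism.
print(f"slope range across quantiles: {slopes.max() - slopes.min():.3f}")
```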

16.
This study analyzes mean probability distributions reported by ASA-NBER forecasters on two macroeconomic variables, GNP and the GNP implicit price deflator. In the derivation of expectations, a critical assertion has been that the aggregate average expectation can be regarded as coming from a normal distribution. We find that, in fact, this assumption should be rejected in favor of distributions that are more peaked and skewed. For the IPD, they are mostly positively skewed, and for nominal GNP the reverse is true. We then show that a non-central scaled t-distribution fits the empirical distributions remarkably well. The practice of using the degree of consensus across a group of predictions as a measure of a typical forecaster's uncertainty about the prediction is called into question.
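As an illustration of the distributional comparison (synthetic data; scipy's generic maximum‐likelihood fit, which can be numerically delicate for this family):

```python
import numpy as np
from scipy import stats

# Fit a non-central scaled t and a normal to the same sample and compare
# log-likelihoods; the skewed, heavy-tailed t should win on such data.
rng = np.random.default_rng(4)
sample = stats.nct.rvs(df=5, nc=1.2, size=2_000, random_state=rng)

df_, nc_, loc_, scale_ = stats.nct.fit(sample)
mu_, sigma_ = stats.norm.fit(sample)

ll_t = stats.nct.logpdf(sample, df_, nc_, loc=loc_, scale=scale_).sum()
ll_n = stats.norm.logpdf(sample, mu_, sigma_).sum()
print(f"logL(non-central t) = {ll_t:.1f}  vs  logL(normal) = {ll_n:.1f}")
```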

17.
Conformity testing is a systematic examination of the extent to which an entity conforms to specified requirements. Such testing is performed in industry as well as in regulatory agencies in a variety of fields. In this paper we discuss conformity testing under measurement or sampling uncertainty. Although the situation has many analogies to statistical testing of a hypothesis concerning the unknown value of the measurand, there are no generally accepted rules for handling measurement uncertainty when testing for conformity. Usually the objective of a test for conformity is to provide assurance of conformity. We therefore suggest that an appropriate statistical test for conformity should be devised such that there is only a small probability of declaring conformity when in fact the entity does not conform. An operational way of formulating this principle is to require that whenever an entity has been declared to be conforming, it should not be possible to alter that declaration even if the entity were investigated with better (more precise) measuring instruments or measurement procedures. Some industries and agencies designate specification limits under consideration of the measurement uncertainty. This practice is not invariant under changes of measurement procedure. We therefore suggest that conformity testing should be based upon a comparison of a confidence interval for the value of the measurand with limiting values that have been designated without regard to the measurement uncertainty. Such a procedure is in line with the recently established practice of reporting measurement uncertainty as “an interval of values that could reasonably be attributed to the measurand”. The price to be paid for a reliable assurance of conformity is a relatively large risk that the procedure will fail to establish conformity for entities that only marginally conform. We suggest a two‐stage procedure that may improve this situation and provide better discriminatory ability. In an example we illustrate the determination of the power function of such a two‐stage procedure.
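A minimal sketch of the suggested decision rule, assuming i.i.d. normal measurement errors: declare conformity only if the entire confidence interval for the measurand lies inside specification limits that were set without regard to measurement uncertainty.

```python
import numpy as np
from scipy import stats

def conforms(measurements, lower, upper, confidence=0.95):
    """Declare conformity only if the whole confidence interval for the
    measurand lies inside [lower, upper] (i.i.d. normal errors assumed).
    """
    m = np.asarray(measurements, dtype=float)
    lo, hi = stats.t.interval(confidence, df=len(m) - 1,
                              loc=m.mean(), scale=stats.sem(m))
    return lower <= lo and hi <= upper

# Example with limits [9.8, 10.2]: a marginally conforming entity may fail
# at this first stage, motivating the two-stage procedure with a more
# precise second measurement.
print(conforms([10.02, 10.05, 9.98, 10.04, 10.01], 9.8, 10.2))  # True
```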

18.
This study builds upon March and Simon's proposition that individual‐level differences must be considered when explaining decision‐making performance. We extend their discussion of the importance of decision‐makers' attention to explain heterogeneous patterns of exploration and exploitation within the same uncertain environment. We develop a model of decision‐making under uncertainty in which 'working memory', i.e., the ability to hold multiple elements in mind and actively process them, explains the emergence of heterogeneity in exploration–exploitation choice patterns. We validated the model in a laboratory study and two replications involving 171 individuals. Our findings show that differences in working memory allow us to identify individuals who are more likely to choose appropriately between exploration and exploitation, and thus achieve higher performance. We discuss the implications for management theories and re‐propose the work of March and Simon as a unifying framework that can still be used to generate and test managerially relevant hypotheses.

19.
Information-theoretic methodologies are increasingly being used in various disciplines. Frequently an information measure is adapted for a problem, yet the perspective of information as the unifying notion is overlooked. We set forth this perspective through presenting information-theoretic methodologies for a set of problems in probability and statistics. Our focal measures are Shannon entropy and Kullback–Leibler information. The background topics for these measures include notions of uncertainty and information, their axiomatic foundation, interpretations, properties, and generalizations. Topics with broad methodological applications include discrepancy between distributions, derivation of probability models, dependence between variables, and Bayesian analysis. More specific methodological topics include model selection, limiting distributions, optimal prior distribution and design of experiment, modeling duration variables, order statistics, data disclosure, and relative importance of predictors. Illustrations range from very basic to highly technical ones that draw attention to subtle points.
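The two focal measures are easy to compute for discrete distributions; a minimal sketch:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy H(p) = -sum p_i log p_i (natural log, in nats)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                       # 0 log 0 = 0 by convention
    return -np.sum(p * np.log(p))

def kl_divergence(p, q):
    """Kullback-Leibler information K(p || q) = sum p_i log(p_i / q_i).

    Requires q_i > 0 wherever p_i > 0 (absolute continuity).
    """
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

uniform = np.full(4, 0.25)
skewed = np.array([0.7, 0.1, 0.1, 0.1])
print(shannon_entropy(uniform))        # log(4) ~ 1.386, maximal for 4 outcomes
print(kl_divergence(skewed, uniform))  # > 0; zero iff the distributions match
```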

20.
We review generalized dynamic models for time series of count data. Usually temporal counts are modelled as following a Poisson distribution, and a transformation of the mean depends on parameters which evolve smoothly with time. We generalize the usual dynamic Poisson model by considering continuous mixtures of the Poisson distribution, in particular Poisson‐gamma and Poisson‐log‐normal mixture models. These models have a parameter for each time t which captures possible extra‐variation present in the data. If the time interval between observations is short, many observed zeros might result, so we also propose zero‐inflated versions of the models mentioned above. In epidemiology, when a count is equal to zero, one does not know whether the disease is present or not; our model has a parameter which provides the probability that the disease is present given that no cases were observed. We rely on the Bayesian paradigm to obtain estimates of the parameters of interest and discuss numerical methods to obtain samples from the resulting posterior distribution. We fit the proposed models to artificial data sets and to a weekly time series of the registered number of dengue fever cases in a district of the city of Rio de Janeiro, Brazil, during 2001 and 2002.
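A static sketch of one time slice of the zero‐inflated model, including the quantity the abstract highlights, the probability that the disease is present given a zero count (parameter values are illustrative, not the dengue estimates):

```python
import numpy as np
from scipy.stats import poisson

def zip_pmf(y, omega, lam):
    """Zero-inflated Poisson pmf: with probability omega the process is
    'off' (disease absent, count forced to zero); otherwise the count is
    Poisson(lam).
    """
    base = (1 - omega) * poisson.pmf(y, lam)
    return np.where(y == 0, omega + base, base)

def prob_present_given_zero(omega, lam):
    """Bayes' rule: P(disease present | zero cases observed)."""
    p_zero_present = (1 - omega) * np.exp(-lam)
    return p_zero_present / (omega + p_zero_present)

# Illustration: if the disease is absent in 30% of weeks and cases average
# 2.5 when it is present, a zero-count week still carries a ~16% chance
# that the disease is circulating.
print(prob_present_given_zero(0.3, 2.5))
```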
