Similar Documents (20 results)
1.
Abstract

Numbers saturate news coverage and health and risk messaging. But while our expertise in creating statistical information increases, our ability to use those statistics in decision making remains frustratingly inadequate. There has been a wealth of research on how to train people to better use the numbers they interact with on a daily basis. Far less research, however, explores the appropriate way to use numbers in communication. Two experiments explored the role of numbers in risk communication infographics related to road safety while driving. Experiment 1 found that the presence of numbers influences risk perception, but whether those numbers reflect accurate statistics or random numbers does not change their influence. Experiment 2 found that removing all statistics from the infographics and replacing them with linguistic gist representations of the numbers (i.e., words like ‘some’, ‘many’, ‘none’) increased risk perception, even though people found these infographics less informative than the ones containing numbers. The results suggest that the gist representations of the numbers in the context of the infographics are equivalent regardless of their value, such that the very presence of statistics influences judgment and risk perception, but not their meaning. They also suggest that people do not always realize how they are using statistical information in their judgment and decision-making processes.

2.
We analyze the possibility of nonlinear trend stationarity as the alternative to unit roots in 23 OECD real exchange rates, 1974–1998, by adding nonlinear time terms to the CIPS panel unit root test of Pesaran (2007). We follow a thorough bootstrapping approach and propose a technique to adjust statistical significance for the use of multiple tests over several time trend orders. The unit root null that all real exchange rates have unit roots is rejected at better than the 0.05 level. Bootstrapped results from a procedure of Chortareas and Kapetanios (2009) suggest that the hypothesis that all are stationary is reasonable. We argue that nonlinear trend stationarity is the most likely alternative hypothesis for at least some of the real exchange rates because: (1) the strongest CIPS rejection occurs when quadratic trends are specified; (2) nonlinear time terms are statistically significant at the 0.10 level; (3) the actual CIPS statistics are more consistent with CIPS sampling distributions from bootstrapped nonlinear trend stationary processes than from linear trend or mean stationary processes.

3.
In high-frequency finance, the statistical terms ‘realized skewness’ and ‘realized kurtosis’ refer to the realized third- and fourth-order moments of high-frequency returns data normalized (or divided) by ‘realized variance’. In particular, before any computation of these two normalized realized moments is carried out, one often predetermines the holding-interval and sampling-interval, thereby implicitly influencing the actual magnitudes of the computed values of the normalized realized higher-order moments; i.e., they have been found to be interval-variant. To date, few theoretical or empirical studies in the high-frequency finance literature have properly investigated the effects of these two types of intervaling on the behaviour of the ensuing measures of realized skewness and realized kurtosis. This paper fills this gap by theoretically and empirically analyzing why and how these two normalized realized higher-order moments of market returns are influenced by the selected holding-interval and sampling-interval. Using simulated data and price index data from the G7 countries, we then illustrate, via count-based signature plots, the theoretical and empirical relationships between the realized higher-order moments and the sampling- and holding-intervals.
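A minimal sketch of the interval-variance the abstract describes, under one common convention (moment sums normalized by powers of realized variance; the function names and simulated Gaussian returns are illustrative assumptions, not the paper's definitions or data):

```python
import numpy as np

def realized_moments(returns):
    """Realized variance, plus realized skewness/kurtosis normalized by it."""
    rv = np.sum(returns**2)             # realized variance
    rs = np.sum(returns**3) / rv**1.5   # normalized realized skewness
    rk = np.sum(returns**4) / rv**2     # normalized realized kurtosis
    return rv, rs, rk

rng = np.random.default_rng(0)
# one trading day of simulated 1-second log-returns
fine = rng.normal(0.0, 1e-4, size=23400)

# coarsen the sampling interval by aggregating returns into 1-minute blocks
coarse = fine.reshape(-1, 60).sum(axis=1)

rv_f, rs_f, rk_f = realized_moments(fine)
rv_c, rs_c, rk_c = realized_moments(coarse)
print(rv_f, rv_c)   # realized variance is (nearly) invariant to the interval
print(rk_f, rk_c)   # normalized realized kurtosis changes with the interval
```

Even for i.i.d. Gaussian returns, the normalized kurtosis scales with the number of sampling intervals, so coarsening the sampling interval mechanically changes its magnitude, which is the interval-variance at issue.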

4.
We present a macroeconomic market experiment to isolate the impact of monetary shocks on the exchange rate, as an alternative to SVAR identification. In a non-stochastic treatment, covered interest rate parity holds and predicted exchange rates are tracked well. In a stochastic treatment, we model expectations using a Neyman–Pearson hypothesis test (inferential expectations) and find evidence of belief conservatism and uncovered interest rate parity failure. The market environment magnifies belief conservatism, which is opposite to the standard claim that markets tend to eliminate individual choice anomalies.

5.
Introduction.

Critics of the custom that bases actuarial theory on the probability calculus, whilst admitting that probability theory may be applicable, have denied that mortality data satisfy the requirements of independence and equi-probability demanded by ‘simple’ theory. We believe that the proper answer to these polemics is that ‘equi-probability’ and ‘independence’ are technical phrases introduced as part of a theory which is purely conceptual. To seek to deny the applicability of the theory on the grounds of the lack of one-to-one correspondence of these words with their counterparts in the everyday world is precipitate. The evaluation of the theory must be decided by the correctness of the results it forecasts. Few of the critics mentioned have produced statistics which support their case: some have even misapplied the theory they so earnestly criticise.

6.
This article and the companion paper aim at reviewing recent empirical and theoretical developments usually grouped under the term Econophysics. Since the name was coined in 1995 by merging the words ‘Economics’ and ‘Physics’, this new interdisciplinary field has grown in various directions: theoretical macroeconomics (wealth distribution), microstructure of financial markets (order book modeling), econometrics of financial bubbles and crashes, etc. We discuss the interactions between Physics, Mathematics, Economics and Finance that led to the emergence of Econophysics. We then present empirical studies revealing the statistical properties of financial time series. We begin the presentation with the widely acknowledged ‘stylized facts’, which describe the returns of financial assets—fat tails, volatility clustering, autocorrelation, etc.—and recall that some of these properties are directly linked to the way ‘time’ is taken into account. We continue with the statistical properties observed on order books in financial markets. For the sake of illustrating this review, (nearly) all the stated facts are reproduced using our own high-frequency financial database. Finally, contributions to the study of correlations of assets such as random matrix theory and graph theory are presented. The companion paper will review models in Econophysics from the point of view of agent-based modeling.

7.
Multifractal models and random cascades have been successfully used to model asset returns. In particular, the log-normal continuous cascade is a parsimonious model that has proven to reproduce most observed stylized facts. In this paper, several statistical issues related to this model are studied. We first present a quick, but extensive, review of its main properties and show that most of these properties can be studied analytically. We then develop an approximation theory in the limit of small intermittency λ2 ≪ 1, i.e. when the degree of multifractality is small. This allows us to prove that the probability distributions associated with these processes possess some very simple aggregation properties across time scales. Such a control of the process properties at different time scales allows us to address the problem of parameter estimation. We show that one has to distinguish two different asymptotic regimes: the first, referred to as the ‘low-frequency asymptotics’, corresponds to taking a sample whose overall size increases, whereas the second, referred to as the ‘high-frequency asymptotics’, corresponds to sampling the process at an increasing sampling rate. The first case leads to convergent estimators, whereas in the high-frequency asymptotics, the situation is much more intricate: only the intermittency coefficient λ2 can be estimated using a consistent estimator. However, we show that, in practical situations, one can detect the nature of the asymptotic regime (low frequency versus high frequency) and consequently decide whether the estimations of the other parameters are reliable or not. We apply our results to equity market (individual stocks and indices) daily return series and illustrate a possible application to the prediction of volatility and conditional value at risk.

8.
In recent years, there have been many cost-benefit studies on aviation safety, which deal mainly with economic issues and omit some strictly technical aspects. This study compares aircraft accidents in relation to the characteristics of the aircraft, environmental conditions, route, and traffic type. The study was conducted using a database of over 1500 aircraft accidents worldwide, occurring between 1985 and 2010. The data were processed and then aggregated into groups using cluster analysis based on a binary-partition ‘hard c-means’ algorithm. For each cluster, the ‘cluster representative’ accident was identified as the average of all the characteristics of the accidents in that cluster. Moreover, a ‘hazard index’ was defined for each cluster (according to annual movements); using this index, it was possible to establish the dangerousness of each cluster in terms of aviation accidents. The results obtained allowed the construction of an easy-to-use predictive model for accidents using multivariate analysis.
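As a sketch, the hard c-means partition step can be written as a plain Lloyd-style iteration (hard c-means coincides with k-means). The synthetic two-blob data below stands in for accident features; it is purely illustrative, not the paper's accident database:

```python
import numpy as np

def hard_c_means(X, c, iters=50, seed=0):
    """Plain 'hard c-means' (k-means) partition of the rows of X into c groups."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=c, replace=False)]
    for _ in range(iters):
        # hard assignment: each point joins its nearest center
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # each center becomes the mean ('cluster representative') of its group
        for k in range(c):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return labels, centers

# toy stand-in for accident features (e.g. weight class, visibility, traffic)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(5, 1, (100, 3))])
labels, centers = hard_c_means(X, c=2)

counts = np.bincount(labels, minlength=2)
print(counts)   # cluster sizes; dividing by annual movements would give a hazard index
```

The centroid returned for each cluster is exactly the "average of all the characteristics" that the abstract calls the cluster-representative accident.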

9.
This paper examines the small sample properties of three testing strategies used to analyze the rationality, monetary neutrality and market efficiency hypotheses. We focus on the original ‘two-step’ Barro test of the MRE hypothesis formed entirely from OLS results, a test that employs the correct variance-covariance formulae for these ‘two-step’ estimates, and Mishkin's FIMLE testing framework. Each test is examined under likely model respecifications. The findings highlight the extensive bias incurred by drawing inferences from simple unadjusted ‘two-step’ estimates and reveal the relative power of all tests in identifying alternatives to the null hypotheses.

10.
A number of recent studies have attempted to test propositions concerning ‘long-run’ economic relationships by means of frequency-domain time-series techniques that concentrate attention on low-frequency comovements of variables. The present paper emphasizes that many of these propositions involve expectational relationships that are not inherently related to specific frequencies or periodicities. Thus the association of low-frequency time-series test statistics with long-run economic propositions is not generally warranted. That such an association can be misleading is demonstrated by analysis of examples taken from notable papers by Geweke, Lucas, and Summers.

11.
The present paper investigates funding liquidity risk of banks. We present a new statistical multi-factor risk model that yields three new funding liquidity risk metrics, based on analysis of the liquidity gap's probability distribution. We test our model on a large sample of 593 US banking companies, which allows us to identify some stylized facts regarding the evolution of liquidity risk and its relationship with the size of banking companies. Our main motivation is to develop ‘the contractual maturity mismatch’ monitoring tool proposed within the Basel III reform.

12.
Recent market events have reinvigorated the search for realistic return models that capture greater likelihoods of extreme movements. In this paper we model the medium-term log-return dynamics in a market with both fundamental and technical traders. This is based on a trade arrival model with variable-size orders and a general arrival-time distribution. With simplifications we are led, in the jump-free case, to a local volatility model defined by a hybrid SDE mixing both arithmetic and geometric or CIR Brownian motions, whose solution in the geometric case is given by a class of integrals of exponentials of one Brownian motion against another, in forms considered by Yor and collaborators. The reduction of the hybrid SDE to a single Brownian motion leads to an SDE of the form considered by Nagahara, which is a type of ‘Pearson diffusion’, or, equivalently, a hyperbolic OU SDE. Various dynamics and equilibria are possible depending on the balance of trades. Under mean-reverting circumstances we arrive naturally at an equilibrium fat-tailed return distribution with a Student or Pearson Type IV form. Under less restrictive assumptions, richer dynamics are possible, including time-dependent Johnson-SU distributions and bimodal structures. We identify a phenomenon of variance explosion that gives rise to much larger price movements than might a priori have been expected, so that ‘25σ’ events are significantly more probable. We exhibit simple example solutions of the Fokker–Planck equation that show how such variance explosion can hide beneath a standard Gaussian facade. These are elementary members of an extended class of distributions with a rich and varied structure, capable of describing a wide range of market behaviors. Several approaches to the density function are possible, and an example of the computation of a hyperbolic VaR is given. The model also suggests generalizations of the Bougerol identity. We touch briefly on the extent to which such a model is consistent with the dynamics of a ‘flash-crash’ event, and briefly explore the statistical evidence for our model.
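A minimal Euler–Maruyama sketch of a hyperbolic OU ('Pearson diffusion') SDE, illustrating the fat-tailed equilibrium the abstract describes; the drift/volatility form and all parameters are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hyperbolic OU sketch:  dX = -theta * X dt + sigma * sqrt(1 + X^2) dW
# The state-dependent volatility makes the stationary law Student-type
# (fat-tailed), unlike the Gaussian stationary law of a plain OU process.
theta, sigma, dt, n = 2.0, 1.0, 0.01, 200_000
x = np.empty(n)
x[0] = 0.0
dW = rng.normal(0.0, np.sqrt(dt), n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] - theta * x[i] * dt + sigma * np.sqrt(1 + x[i] ** 2) * dW[i]

sample = x[n // 10:]                       # discard burn-in
z = (sample - sample.mean()) / sample.std()
excess_kurt = (z ** 4).mean() - 3.0
print(excess_kurt)                         # positive: fatter tails than Gaussian
```

For these parameters the stationary density is proportional to (1 + x²)^(-(1 + θ/σ²)), i.e. a Student-type law, so the sample excess kurtosis comes out clearly positive.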

13.
The stimulus for this paper is an article published in 2005 by Wüstemann and Kierzek, together with a critical comment thereon by Nobes in 2006. The original paper discusses the IFRS proposals, and the philosophy they represent, concerning revenue recognition. Nobes, in terms which this author broadly supports, criticises their revenue recognition proposals in some detail, together, again rightly, with some of their assumptions. This paper, rather than reopening these specific issues, considers the explicit statement (for example on p. 71, and elsewhere) that there is a ‘requirement of legal certainty in the European Union’, a statement given neither logical justification nor supporting references. IAS, EU requirements and GoB as a national concept are all argued to be flexible, judgemental and, necessarily and permanently, devoid of ‘legal certainty’. Implications of this analysis for regulation, harmonisation and for educational programmes in today's global environment are considered.

14.
Performance auditing is a longstanding feature of democratic government in many countries. It aims to lift the efficiency and effectiveness of public sector organisations, but numerous authors have voiced scepticism about its ability to do so. From three decades of performance auditing literature, this paper distils seven critiques of performance auditing: ‘anti‐innovation’, ‘nit‐picking’, ‘expectations gap’, ‘lapdog’, ‘headline hunting’, ‘unnecessary systems’ and ‘hollow ritual’. The paper concludes that the critiques are not valid in all cases, but serve to categorise risks to be managed in the design of performance audit programs and associated institutional arrangements. In light of the critiques, the paper proposes desirable elements of frameworks for monitoring and reporting the performance of institutions with performance audit mandates.

15.
The applications of techniques from statistical (and classical) mechanics to model interesting problems in economics and finance have produced valuable results. The principal movement which has steered this research direction is known under the name of ‘econophysics’. In this paper, we illustrate and advance some of the findings that have been obtained by applying the mathematical formalism of quantum mechanics to model human decision making under ‘uncertainty’ in behavioral economics and finance. Starting from Ellsberg's seminal article, decision making situations have been experimentally verified where the application of Kolmogorovian probability in the formulation of expected utility is problematic. Those probability measures which by necessity must situate themselves in Hilbert space (such as ‘quantum probability’) enable a faithful representation of experimental data. We thus provide an explanation for the effectiveness of the mathematical framework of quantum mechanics in the modeling of human decision making. We want to be explicit though that we are not claiming that decision making has microscopic quantum mechanical features.

16.
In this paper, a result for bivariate normal distributions from statistics is transformed into a financial asset context in order to build a tool which can translate a correlation matrix into an equivalent probability matrix and vice versa. This way, the correlation coefficient parameter is more understandable in terms of the joint probability of two stocks’ returns, and much more useful in terms of the information it provides. We validate, empirically, our result for a sample covering the three market capitalization categories in the S&P 500 index over a ten-year period. Finally, the accuracy of this new tool is measured theoretically and some applications from the practitioners’ point of view are offered. Such applications include, for instance, the calculation of the number of trading days in a year on which two stocks have same-sign returns, and how to split the average return of weighted stocks into four orthants.
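A hedged sketch of the correlation-to-probability translation for the zero-mean bivariate normal case, using the classical orthant probability formula P(X>0, Y>0) = 1/4 + arcsin(ρ)/(2π); the paper's exact tool may differ, and the function names are mine:

```python
import numpy as np

def corr_to_same_sign_prob(rho):
    """P(two zero-mean bivariate-normal returns share the same sign),
    from the orthant formula P(X>0, Y>0) = 1/4 + arcsin(rho)/(2*pi)."""
    return 0.5 + np.arcsin(rho) / np.pi

def same_sign_prob_to_corr(p):
    """Inverse map: recover the correlation from the same-sign probability."""
    return np.sin(np.pi * (p - 0.5))

p = corr_to_same_sign_prob(0.6)
print(round(p, 4))                            # 0.7048
print(round(252 * p))                         # 178 expected same-sign trading days/year
print(round(same_sign_prob_to_corr(p), 10))   # 0.6 recovered
```

The second line illustrates the trading-days application mentioned in the abstract: a correlation of 0.6 translates into roughly 178 of 252 trading days with same-sign returns under the bivariate-normal assumption.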

17.
Quantitative Finance, 2013, 13(4): 251–256
Abstract

We investigate several statistical properties of the order book of three liquid stocks of the Paris Bourse. The results are to a large degree independent of the stock studied. The most interesting features concern (i) the statistics of incoming limit order prices, which follow a power law around the current price with a diverging mean; and (ii) the shape of the average order book, which can be quantitatively reproduced using a ‘zero intelligence’ numerical model and qualitatively predicted using a simple approximation.

18.
Unpredictable dividend growth by the dividend–price ratio is considered a ‘stylized fact’ in post-war US data. Using long-term annual data from the US and three European countries, we revisit this stylized fact, and we also report results on return predictability. We make two main contributions. First, we document that for the US, results for long-horizon predictability depend crucially on whether returns and dividend growth are measured in nominal or real terms, a difference due to long-term inflation being strongly negatively predictable by the dividend–price ratio. The impact of inflation is to reinforce real return predictability and to reduce – or change the direction of – real dividend growth predictability. This provides an explanation for the strong predictability of long-horizon real returns in the ‘right’ direction, and the strong predictability of long-horizon real dividend growth in the ‘wrong’ direction, that we see in US post-war data. Second, we find that predictability patterns in three European stock markets differ in many ways from what characterizes the US stock market. In particular, in Sweden and Denmark dividend growth is strongly predictable by the dividend–price ratio in the ‘right’ direction, while returns are not predictable. The results for the UK are mixed. Our results are robust to a number of changes in the modeling framework. We discuss the results for dividend growth predictability in terms of the ‘dividend smoothing hypothesis’.
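A sketch of the basic one-period predictive regression underlying such studies, run on synthetic data with a known slope (the AR(1) dividend–price process and all parameters are illustrative assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic data: a persistent log dividend-price ratio dp_t and
# next-period returns generated with a known predictive slope b = 0.1.
T, b_true = 20_000, 0.1
dp = np.empty(T)
dp[0] = 0.0
for t in range(T - 1):
    dp[t + 1] = 0.9 * dp[t] + rng.normal(0, 0.1)
r = b_true * dp[:-1] + rng.normal(0, 0.15, T - 1)

# Predictive OLS regression: r_{t+1} = a + b * dp_t
X = np.column_stack([np.ones(T - 1), dp[:-1]])
a_hat, b_hat = np.linalg.lstsq(X, r, rcond=None)[0]
print(round(b_hat, 3))   # close to the true slope 0.1
```

With real annual data the same regression is run separately for nominal and real returns (and for dividend growth as the left-hand variable), which is exactly where the nominal-versus-real distinction the abstract emphasizes shows up.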

19.
For assessing risk-adjusted insurance premiums, we face the challenge that we do not know the distribution functions of the probable claims or their probabilities of occurrence. Purely chance-based deviations from expected damages are not a sufficient reason for premium increases, so in preparing for such deviations we have to distinguish between chance-based and other deviations between expected and realised damage events. For adjusting insurance contracts in light of new information, there are three possible strategies. First, we could ignore the past premium and recalculate it from the new data sample. Alternatively, we could use a Bayesian learning process, adjusting the past premium to take the new information into account. The third strategy is a statistical test of hypotheses: a premium is adjusted only if the original assumptions on the possible distribution of claims must be rejected statistically. Considering the certainty of the contracts and a steady calculation basis, there are many reasons in favour of the statistical test of hypotheses. Stringent use of this method can provide a sound basis for negotiations between insurer and policyholder. The improved transparency of the risks taken is desirable from a regulatory standpoint and helpful for the evaluation of solvency.
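A minimal sketch of the third strategy, assuming a normal approximation to a Poisson test on annual claim counts (the threshold, the test choice, and the premium re-basing rule are illustrative assumptions, not the paper's specification):

```python
import math

def adjust_premium(premium, expected_claims, observed_claims, z_crit=1.96):
    """Strategy 3: keep the premium unless the observed claim count is
    statistically incompatible with the assumed claim distribution.
    Normal approximation to a Poisson test on the count (a sketch; an
    exact Poisson test would be preferable for small counts)."""
    z = (observed_claims - expected_claims) / math.sqrt(expected_claims)
    if abs(z) > z_crit:
        # null rejected: re-base the premium on the new claim frequency
        return premium * observed_claims / expected_claims
    return premium   # chance-based deviation: leave the premium unchanged

print(adjust_premium(1000.0, 100, 110))   # 1000.0 (within chance, keep)
print(adjust_premium(1000.0, 100, 130))   # 1300.0 (null rejected, raise)
```

The contrast with the other two strategies is visible here: re-estimation from scratch would always move the premium, and a Bayesian update would move it a little with every observation, whereas the test-based rule holds it fixed until the deviation is statistically incompatible with chance.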

20.
Relationships and the internet: The mediating role of a relationship banker
The role technology plays in building (or weakening) relationships with customers has become increasingly debated since the advent of the internet. Arguments, and some emerging empirical evidence, show support for both sides: technology can both build and erode customer relationships. In this paper the authors seek to add further insight into this inquiry. The paper examines relationships that primarily exist within a technology context (in this case the internet) compared to those that primarily exist in a traditional face-to-face context (in this case the branch). The mediating role of a relationship banker is used to explore these dynamics further. Results show that the channel that the customer primarily adopts as the mode of interaction makes little difference to the strength of the relationship. What makes a difference, in terms of trust, is the existence of a ‘good’ or ‘excellent’ one-to-one relationship with an assigned banker. Intriguingly, the effects of the relationship banker depend on whether the customer primarily uses the branch or the internet.
