Similar Literature
20 similar documents retrieved.
1.
Summary Many books about probability and statistics mention only the weak and the strong law of large numbers for samples from distributions with finite expectation. However, these laws also hold for distributions with infinite expectation, in which case the sample average has to go to infinity with increasing sample size. Being curious about the way in which this would happen, we simulated increasing samples (up to n = 40,000) from three distributions with infinite expectation. The results were somewhat surprising at first sight, but understandable after some thought. Most statisticians, when asked, seem to expect a gradual increase of the average with the size of the sample. So did we. In general, however, this proves to be wrong, and different types of behaviour appear in this experiment for different parent distributions. The samples from the “absolute Cauchy” distribution are most interesting from a practical point of view: the average takes a high jump from time to time and decreases in between. In practice it might well happen that the observations causing the jumps would be discarded as outlying observations.
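A minimal illustrative sketch of the kind of simulation described above (not the authors' code; the sample size, seed and NumPy usage are assumptions): track the running average of a growing sample from the “absolute Cauchy” distribution, whose expectation is infinite.

```python
import numpy as np

rng = np.random.default_rng(seed=1)      # seed chosen arbitrarily for reproducibility

n = 40_000
x = np.abs(rng.standard_cauchy(n))       # |Cauchy| draws, E|X| = +inf
running_mean = np.cumsum(x) / np.arange(1, n + 1)

# The running mean drifts upward overall, but not gradually: occasional huge
# observations cause sudden jumps, and the average slowly decreases in between.
for k in (100, 1_000, 10_000, 40_000):
    print(f"n = {k:6d}   average = {running_mean[k - 1]:.2f}")
```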

2.
Summary Let x1, …, xn be a sample from a distribution with infinite expectation; then for n → ∞ the sample average x̄n tends to +∞ with probability 1 (see [4]).
Sometimes x̄n contains high jumps due to large observations. In this paper we consider samples from the “absolute Cauchy” distribution. In practice, one may consider the logarithm of the observations as a sample from a normal distribution; so we found in our simulation. After rejecting the log-normality assumption, one will be tempted to regard the extreme observations as outliers. It is shown that discarding the outlying observations gives an underestimation of the expectation, variance and 99th percentile of the actual distribution.
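A hedged illustration of the underestimation effect (simulated data and an arbitrary cut-off, not taken from the paper): dropping the largest |Cauchy| observations as “outliers” pulls the usual estimates of the mean, variance and 99th percentile downwards relative to the full sample.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
x = np.abs(rng.standard_cauchy(5_000))

trimmed = np.sort(x)[:-50]   # drop the 50 largest values as "outliers" (arbitrary cut-off)

for label, s in (("full sample", x), ("outliers removed", trimmed)):
    print(label, "mean:", round(s.mean(), 2),
          "var:", round(s.var(ddof=1), 2),
          "99th pct:", round(np.percentile(s, 99), 2))
# The trimmed estimates are systematically smaller, illustrating the
# underestimation of expectation, variance and the 99th percentile.
```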

3.
Summary Let x1, …, xn be a sample from a distribution with infinite expectation; then for n → ∞ the sample average x̄n tends to +∞ with probability 1 (see [4]). Sometimes x̄n contains high jumps due to large observations. In this paper we consider samples from the “absolute Cauchy” distribution. In practice, one may consider the logarithm of the observations as a sample from a normal distribution; so we found in our simulation. After rejecting the log-normality assumption, one will be tempted to regard the extreme observations as outliers. It is shown that discarding the outlying observations gives an underestimation of the expectation, variance and 99th percentile of the actual distribution.

4.
Over the past year a gap has opened up between the growth of manufacturing productivity and that of real wages. This gap cannot persist indefinitely, but it can be closed in many different ways. The best that can happen is that wage settlements fall while output and productivity accelerate. The worst outcome would be continued stagnation of real output and no deceleration of wages, in which case the required productivity improvement would have to come about through renewed labour shedding. There are worrying signs that this has started to happen. An intermediate solution might involve a fall in the exchange rate, with some improvement in competitiveness boosting real output (so that UK producers get a larger share of buoyant consumer spending) and some rise in prices holding back real wages.
We continue to believe that the most likely outcome is a rise in output and a fall in the rate of wage settlements. In our June forecast this occurs despite a fall in the real exchange rate. In these circumstances we expect the growth of unit labour costs to fall back from its current high level so that the current 3 per cent inflation rate becomes a true "core" rate. But a moderate fall in the real exchange rate may prove hard to achieve, especially if the oil price continues to weaken. We therefore explore what would happen if the required depreciation happens more rapidly, so that interest rates have to remain high to prevent it getting out of control. In this case we would expect lower growth and higher inflation than we forecast in June.

5.
In this paper, we apply a vine copula approach to investigate the dynamic relationship between energy, stock and currency markets. Dependence modeling using vine copulas offers greater flexibility and permits the modeling of complex dependency patterns for high-dimensional distributions. Using a sample of more than 10 years of daily return observations of WTI crude oil, the Dow Jones Industrial Average stock index and the trade-weighted US dollar index, we find evidence of a significant and symmetric relationship between these variables. Considering different sample periods shows that the dynamics of the relationship between returns are not constant over time. Our results also indicate that the dependence structure was strongly affected by the financial crisis and Great Recession of 2007–2009. Finally, there is evidence to suggest that the application of the vine copula model improves the accuracy of VaR estimates compared to traditional approaches.
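Fitting a vine copula requires a dedicated package, so the following is only a bivariate Gaussian-copula stand-in, on simulated returns rather than the WTI/Dow Jones/US-dollar data, intended to make the general workflow concrete: transform returns to pseudo-observations, fit the dependence parameter, simulate jointly, and read off a portfolio VaR.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=11)
r = rng.multivariate_normal([0, 0], [[1, 0.4], [0.4, 1]], size=2_500) * 0.01  # fake daily returns

# pseudo-observations (scaled ranks) and their normal scores
u = stats.rankdata(r, axis=0) / (len(r) + 1)
z = stats.norm.ppf(u)
rho = np.corrcoef(z, rowvar=False)[0, 1]

# simulate from the fitted Gaussian copula with the empirical marginals
sim_z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=100_000)
sim_u = stats.norm.cdf(sim_z)
sim_r = np.column_stack([np.quantile(r[:, j], sim_u[:, j]) for j in range(2)])

portfolio = sim_r.mean(axis=1)                       # equally weighted portfolio
print("1% VaR estimate:", round(-np.quantile(portfolio, 0.01), 4))
```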

6.
A one-sided testing problem based on an i.i.d. sample of observations is considered. The usual one-sided sequential probability ratio test would be based on a random walk derived from these observations. Here we propose a sequential test where the random walk is replaced by Lindley's random walk, which starts anew at zero as soon as it becomes negative. We derive the asymptotics of the expected sample size and the error probabilities of this sequential test. We discuss the advantages of this test for certain nonsymmetric situations. Acknowledgement: The authors thank the referee for helpful comments and suggestions. Their research was supported by the German Research Foundation (DFG) and the Russian Foundation for Basic Research (RFBR).
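A rough simulation sketch of the scheme (the increment distribution, drift and threshold are illustrative assumptions, not taken from the paper): the test statistic follows Lindley's recursion, restarting at zero whenever it would become negative, and sampling stops when it crosses a boundary.

```python
import numpy as np

def lindley_test(increments, b):
    """Return the sample size at which the reflected walk first exceeds b."""
    w = 0.0
    for n, z in enumerate(increments, start=1):
        w = max(0.0, w + z)      # Lindley recursion: restart at zero when negative
        if w >= b:
            return n
    return len(increments)       # boundary never reached within the sample

rng = np.random.default_rng(seed=3)
b = 5.0                          # illustrative boundary
sizes = [lindley_test(rng.normal(loc=0.2, scale=1.0, size=10_000), b)
         for _ in range(2_000)]
print("Monte-Carlo estimate of the expected sample size:", np.mean(sizes))
```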

7.
An important disadvantage of the h-index is that typically it cannot take into account the specific field of research of a researcher. Usually, sample point estimates of the average and median h-index values for the various fields are reported; these are highly variable and dependent on the specific samples, so it would be useful to provide confidence intervals indicating their accuracy. In this paper we apply the non-parametric bootstrap technique for constructing confidence intervals of the h-index for different fields of research. In this way no specific assumptions about the distribution of the empirical h-index are required, nor are large samples, since the methodology is based on resampling from the initial sample. The results of the analysis showed important differences between the various fields. The performance of the bootstrap intervals for the mean and median h-index for most fields seems to be rather satisfactory, as revealed by the simulation performed.
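A minimal sketch of the resampling idea on a hypothetical set of h-indices for one field (the values and the number of replications are made up): percentile bootstrap intervals for the mean and median h-index.

```python
import numpy as np

rng = np.random.default_rng(seed=4)
h = np.array([3, 5, 7, 8, 10, 12, 12, 15, 18, 22, 27, 35])  # hypothetical h-indices, one field

B = 10_000                                                   # bootstrap replications
boot = rng.choice(h, size=(B, h.size), replace=True)         # resample with replacement
for name, stat in (("mean", boot.mean(axis=1)), ("median", np.median(boot, axis=1))):
    lo, hi = np.percentile(stat, [2.5, 97.5])
    print(f"95% bootstrap CI for the {name} h-index: [{lo:.1f}, {hi:.1f}]")
```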

8.
This paper compares three new methods of estimating the covariance of asset returns and evaluates their performance against conventional covariance estimation methods. We find that taking a simple average of the historical sample covariance matrix and the covariance matrix estimated from the single-index model provides the best overall performance among all competing methods. In addition, we find that commonly used assessment criteria provide systematically different rankings, which explains the preference for different types of estimation methods in the existing literature. We believe the difference between our results and those of previous studies may be partly due to differences in the ratio of time-series observations to the number of stocks in the samples used in different studies.
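A hedged sketch (simulated returns and made-up betas, not the paper's data) of the best-performing estimator described above: a 50/50 average of the historical sample covariance matrix and the covariance matrix implied by a single-index (market) model.

```python
import numpy as np

def single_index_cov(returns, market):
    """Covariance matrix implied by R_i = a_i + b_i * R_m + e_i."""
    var_m = market.var(ddof=1)
    betas = np.array([np.cov(r, market, ddof=1)[0, 1] / var_m for r in returns.T])
    resid = returns - np.outer(market, betas)
    d = np.diag(resid.var(axis=0, ddof=1))           # idiosyncratic variances
    return var_m * np.outer(betas, betas) + d

rng = np.random.default_rng(seed=5)
market = rng.normal(0.0, 0.04, size=500)
returns = np.outer(market, [0.8, 1.0, 1.3]) + rng.normal(0.0, 0.02, size=(500, 3))

sample_cov = np.cov(returns, rowvar=False, ddof=1)
averaged = 0.5 * (sample_cov + single_index_cov(returns, market))   # simple 50/50 average
print(averaged)
```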

9.
Studies of efficiency in banking and elsewhere often impose arbitrary assumptions on the distributions of efficiency and random error in order to separate one from the other. In this study, we impose much less structure on these distributions and only assume that efficiencies are stable over time while random error tends to average out. We are able to do so by estimating firm-specific effects on costs using panel data sets of over 28,000 observations on U.S. banks from 1980 to 1989. We find results similar to those in the literature: X-efficiencies, or managerial differences in efficiency, are important in banking, while scale-efficiency differences are not. However, we also find that the distributional assumptions usually imposed in the literature are not very consistent with these data.

10.
This study analyzes mean probability distributions reported by ASA-NBER forecasters on two macroeconomic variables, GNP and the GNP implicit price deflator (IPD). In the derivation of expectations, a critical assertion has been that the aggregate average expectation can be regarded as coming from a normal distribution. We find that, in fact, this assumption should be rejected in favor of distributions which are more peaked and skewed. For IPD, they are mostly positively skewed, and for nominal GNP the reverse is true. We then show that a non-central scaled t-distribution fits the empirical distributions remarkably well. The practice of using the degree of consensus across a group of predictions as a measure of a typical forecaster's uncertainty about the prediction is called into question.
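A quick illustration on simulated (not ASA-NBER) data of the distributional comparison: fit a normal and a scaled non-central t to a peaked, skewed sample and compare log-likelihoods. SciPy's generic maximum-likelihood `.fit` is used here as a stand-in; it is not the paper's estimation method.

```python
from scipy import stats

data = stats.skewnorm.rvs(a=4, size=500, random_state=42)   # a peaked, skewed sample

norm_params = stats.norm.fit(data)
nct_params = stats.nct.fit(data)                             # (df, nc, loc, scale)

ll_norm = stats.norm.logpdf(data, *norm_params).sum()
ll_nct = stats.nct.logpdf(data, *nct_params).sum()
print("log-likelihood, normal:", round(ll_norm, 1), " non-central t:", round(ll_nct, 1))
```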

11.
For some non-parametric testing problems (the one-sided two-sample problem, the k-sample trend problem, testing independence against positive dependence) a partial ordering, denoted by ≥, over the alternatives is defined. This partial ordering expresses the strength of the deviation from the null hypothesis. All familiar rank tests turn out to become more powerful under "increasing" alternatives; that is, all familiar rank statistics preserve the ordering stochastically in samples whenever it is present between the underlying distributions. As a tool, the sample equivalence of ≥ is introduced as a partial ordering over pairs of permutations. Functions, defined on pairs of permutations, which preserve this ordering are studied.

12.
In this paper, we propose a novel approach to econometric forecasting of stationary and ergodic time series within a panel-data framework. Our key element is to employ the (feasible) bias-corrected average forecast. Using panel-data sequential asymptotics, we show that it is potentially superior to other techniques in several contexts. In particular, it is asymptotically equivalent to the conditional expectation, i.e., it has an optimal limiting mean-squared error. We also develop a zero-mean test for the average bias and discuss the forecast-combination puzzle in small and large samples. Monte Carlo simulations are conducted to evaluate the performance of the feasible bias-corrected average forecast in finite samples. An empirical exercise, based upon data from a well-known survey, is also presented. Overall, these results show promise for the feasible bias-corrected average forecast.
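A bare-bones sketch of the bias-corrected average forecast on a purely hypothetical data-generating process (the panel dimensions and bias magnitudes are made up): estimate each forecaster's average bias on past data, subtract it, then average across forecasters.

```python
import numpy as np

rng = np.random.default_rng(seed=6)
T, N = 200, 10
y = rng.normal(size=T)                                    # target series
bias = rng.normal(0.0, 0.5, size=N)                       # individual forecaster biases
f = y[:, None] + bias[None, :] + rng.normal(0.0, 1.0, size=(T, N))  # panel of forecasts

est_bias = (f[:-1] - y[:-1, None]).mean(axis=0)           # biases estimated on the "past"
bcaf = (f[-1] - est_bias).mean()                          # bias-corrected average forecast
plain = f[-1].mean()                                      # simple average forecast
print("simple average:", round(plain, 3), " bias-corrected average:", round(bcaf, 3))
```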

13.
To date, best practice in sampling credit applicants has been established largely on the basis of expert opinion, which generally recommends that small samples of 1500 instances each of both goods and bads are sufficient, and that the heavily biased datasets observed should be balanced by undersampling the majority class. Consequently, the topics of sample size and sample balance have not been subject to either formal study in credit scoring or empirical evaluation across different data conditions and algorithms of varying efficiency. This paper describes an empirical study of instance sampling in predicting consumer repayment behaviour, evaluating the relative accuracies of logistic regression, discriminant analysis, decision trees and neural networks on two datasets across 20 samples of increasing size and 29 rebalanced sample distributions created by gradually under- and over-sampling the goods and bads respectively. The paper makes a practical contribution to model building on credit scoring datasets, and provides evidence that using samples larger than those recommended in credit scoring practice yields a significant increase in accuracy across algorithms.
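A minimal sketch of one cell of such a design (scikit-learn on synthetic data with illustrative sizes; not the paper's datasets or its full grid): fit logistic regression on balanced samples of increasing size and record holdout accuracy.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# heavily imbalanced synthetic "goods"/"bads"
X, y = make_classification(n_samples=60_000, n_features=20, weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(seed=7)
for n_per_class in (1_500, 3_000, 6_000):            # balanced samples of increasing size
    idx = np.concatenate([rng.choice(np.where(y_train == c)[0], n_per_class, replace=False)
                          for c in (0, 1)])
    model = LogisticRegression(max_iter=1_000).fit(X_train[idx], y_train[idx])
    print(n_per_class, "per class -> holdout accuracy:", round(model.score(X_test, y_test), 4))
```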

14.
In this paper, we introduce a new algorithm for estimating non-negative parameters from Poisson observations of a linear transformation of the parameters. The proposed objective function fits both a weighted least squares (WLS) and a minimum χ² estimation framework, and results in a convex optimization problem. Unlike conventional WLS methods, the weights do not need to be estimated from the data, but are incorporated in the objective function. The iterative algorithm is derived from an alternating projection procedure in which "distance" is determined by the chi-squared test statistic, which is interpreted as a measure of the discrepancy between two distributions. This may be viewed as an alternative to the Kullback-Leibler divergence, which corresponds to maximum likelihood (ML) estimation. The algorithm is similar in form to, and shares many properties with, the expectation-maximization algorithm for ML estimation. In particular, we show that every limit point of the algorithm is an estimator, and the sequence of means projected (by the linear transformation) into the data space converges. Despite the similarities, we show that the new estimators are quite distinct from ML estimators, and we obtain conditions under which they are identical.
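The paper's iterative alternating-projection algorithm is not spelled out in the abstract, so the sketch below is only a direct numerical minimisation of a chi-squared-type discrepancy under non-negativity (with simulated A and Poisson counts), intended to make the kind of objective concrete rather than to reproduce the proposed method.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(seed=12)
A = rng.uniform(0.1, 1.0, size=(30, 5))           # known linear transformation
x_true = np.array([2.0, 0.5, 3.0, 1.0, 0.0])      # non-negative parameters
y = rng.poisson(A @ x_true)                        # Poisson observations

def chi2(x):
    mu = A @ x + 1e-9                              # guard against division by zero
    return np.sum((y - mu) ** 2 / mu)              # chi-squared-type discrepancy

res = minimize(chi2, x0=np.ones(5), bounds=[(0, None)] * 5)   # non-negativity constraints
print("estimate:", np.round(res.x, 3), " true:", x_true)
```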

15.
An idealized static equilibrium model of a circularly symmetric city is presented. The model allows one to compute the spatial distribution of residences, given certain simple and plausible assumptions about the “costs” of transport, housing and neighborhood crowding. The model is chosen so as to guarantee that, in first approximation, the residential population distribution which would be considered optimal by a perfect planner is identical to the distribution reached in a push-shove, laissez-faire equilibrium. This aspect of the construction is shown to be related in a simple way to the familiar “external diseconomy” situation in which a free resource is allocated among alternative uses by equating average, rather than marginal, products. The existence of an infinite class of models in which the associated planner's optimum and laissez-faire equilibria are equivalent follows naturally from the standard theory of the private and social costs of highway congestion. The model leads naturally to exponentially falling population distributions which exhibit an “urban-suburban” dichotomy, to a particular overall city size, and to an optimal allocation of land between transport and residential uses.

16.
One of the most frequently used classes of processes in time series analysis is that of linear processes. For many statistical quantities, among them sample autocovariances and sample autocorrelations, central limit theorems are available in the literature. We investigate classical linear processes under a nonstandard observation pattern; namely, we assume that we are only able to observe the linear process at a lower frequency. It is shown that such an observation pattern destroys the linear structure of the observations and leads to substantially different asymptotic results for standard statistical quantities. Central limit theorems are given for sample autocovariances and sample autocorrelations as well as for more general integrated periodograms and ratio statistics. Moreover, for specific autoregressive processes, the possibility of estimating the parameters of the underlying autoregression from lower-frequency observations is addressed. Finally, we suggest a valid bootstrap procedure for autoregressions of order 2. A small simulation study demonstrates the performance of the bootstrap proposal for finite sample sizes.
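A quick simulation sketch of the lower-frequency observation scheme (illustrative AR(2) coefficients and subsampling factor k = 3): simulate an AR(2) process, keep only every k-th value, and compare lag-1 sample autocorrelations.

```python
import numpy as np

rng = np.random.default_rng(seed=8)
n, phi1, phi2 = 100_000, 0.5, 0.3
x = np.zeros(n)
eps = rng.normal(size=n)
for t in range(2, n):
    x[t] = phi1 * x[t - 1] + phi2 * x[t - 2] + eps[t]   # AR(2) recursion

def acf(series, lag):
    s = series - series.mean()
    return np.dot(s[:-lag], s[lag:]) / np.dot(s, s)

k = 3                                    # observe only every third point
low_freq = x[::k]
print("lag-1 ACF, full series:      ", round(acf(x, 1), 3))
print("lag-1 ACF, every k-th point: ", round(acf(low_freq, 1), 3))
```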

17.
This paper evaluates the simplifying assumption that producers compete in a large market without substantial strategic interactions using nonparametric regressions of producers' choices on market size. With such atomistic competition, increasing the number of consumers leaves the distributions of producers' prices and other choices unchanged. In many models featuring non‐trivial strategic considerations, producers' prices fall as their numbers increase. I examine observations of restaurants' sales, seating capacities, exit decisions, and prices from 222 US cities. Given factor prices and demographic variables, increasing a city's size increases restaurants' average sales and decreases their exit rate and prices. These results suggest that strategic considerations lie at the heart of restaurant pricing and turnover.

18.
Summary Results are given of a sampling study of the sensitivities of two statistical procedures, viz. the D'Agostino and the Wilk-Shapiro tests of normality. Four alternative distributions were considered. For 5 small sample sizes the test of D'Agostino seems to be clearly dominated by the Wilk-Shapiro test for each of the 4 alternatives. For sample size n = 100 the recently developed approximate Wilk-Shapiro test seems to dominate the D'Agostino test for the U-shaped distribution; however, little can be said about the difference in power of these two tests for the other 3 alternatives. Random samples generated from the standard normal distribution showed the degree of approximation of the percentiles of each test procedure to be good.
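A small Monte-Carlo power sketch in the same spirit, using a U-shaped Beta(0.5, 0.5) alternative. Note that SciPy's `normaltest` implements the D'Agostino-Pearson K² test and `shapiro` the Shapiro-Wilk test; they are used here only as readily available stand-ins for the procedures compared in the paper, and the sample size and alternative are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=9)
n, reps, alpha = 20, 2_000, 0.05
power = {"D'Agostino-Pearson": 0, "Shapiro-Wilk": 0}
for _ in range(reps):
    x = rng.beta(0.5, 0.5, size=n)                      # U-shaped alternative
    power["D'Agostino-Pearson"] += stats.normaltest(x).pvalue < alpha
    power["Shapiro-Wilk"] += stats.shapiro(x).pvalue < alpha
for name, count in power.items():
    print(name, "estimated power:", count / reps)
```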

19.
Financing through the supply-driven green bond market has surged significantly in recent years. In this paper, we examine the factors influencing the size of financing through green bond supply, using cross-section OLS regressions on a global dataset covering 8 years (2010–2017) sourced from Bloomberg. We consider a set of tridimensional factors: bond characteristics, issuer characteristics, and market characteristics, and examine their effects on issue size. Alongside whole-sample estimation, we produce year-wise estimations to assess the evolution and persistence of the effects over time. We then produce estimates across rating grades of the bonds. Finally, we carry out a Blinder-Oaxaca decomposition to see whether average issue size has changed significantly over time and whether the factors considered can explain the difference. We find a large number of factors affecting issue size asymmetrically; however, many of the effects do not persist over time and are heterogeneous across rating grades. In contrast to the aggregate market trend, we find no evidence of an increase in average issue size in the most recent year. Furthermore, the average financing size is found to be significantly lower for high-grade bonds. The paper provides a basis for encouraging green bond supply, particularly considering the rating of the bonds and the issuers.
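A bare-bones two-fold Blinder-Oaxaca sketch on simulated data (not the Bloomberg sample; the covariates and coefficients are made up): the gap in mean issue size between two groups of issues is split into a part explained by covariate differences and an unexplained part.

```python
import numpy as np

rng = np.random.default_rng(seed=13)

def simulate(n, beta):
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # intercept + 2 covariates
    return X, X @ beta + rng.normal(0.0, 0.5, size=n)

Xa, ya = simulate(400, np.array([1.0, 0.5, 0.2]))   # e.g. later-period issues
Xb, yb = simulate(400, np.array([0.8, 0.3, 0.2]))   # e.g. earlier-period issues

ba, *_ = np.linalg.lstsq(Xa, ya, rcond=None)
bb, *_ = np.linalg.lstsq(Xb, yb, rcond=None)

gap = ya.mean() - yb.mean()
explained = (Xa.mean(axis=0) - Xb.mean(axis=0)) @ bb    # due to covariate differences
unexplained = Xa.mean(axis=0) @ (ba - bb)               # due to coefficient differences
print(f"gap {gap:.3f} = explained {explained:.3f} + unexplained {unexplained:.3f}")
```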

20.
Choosing the sample size in advance is a familiar problem: often, additional observations appear to be desirable. The final sample size then becomes a random variable, which has rather serious consequences.
Two such sample extension situations will be considered here. In the first situation, the observed sample variance determines whether or not to double the original sample size. In the second situation, the variances observed in two independent samples are compared; their ratio determines the number of additional observations.
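A simulation sketch of the first sample-extension rule (normal data, arbitrary cut-off and sizes, not taken from the paper): the observed variance decides whether to double the sample, so the final sample size is random and the usual variance estimator no longer behaves as in the fixed-n case.

```python
import numpy as np

rng = np.random.default_rng(seed=10)
n, cutoff, reps = 20, 1.2, 5_000
final_sizes, variances = [], []
for _ in range(reps):
    x = rng.normal(size=n)                     # true variance 1
    if x.var(ddof=1) > cutoff:                 # variance looks large: extend the sample
        x = np.concatenate([x, rng.normal(size=n)])
    final_sizes.append(len(x))
    variances.append(x.var(ddof=1))

print("average final sample size:", np.mean(final_sizes))
print("mean of the final variance estimate:", round(np.mean(variances), 3))
# compare with the fixed-n case: the random sample size distorts the usual properties
```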
