Similar Literature

20 similar documents found.
1.
The celebrated Blackwell's theorem demonstrates the equivalence of a notion of statistical informativeness and economic valuableness for the class of preferences that are represented by subjective expected utility. This note shows that this equivalence holds for a larger class of preferences, namely maxmin expected utility.
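Blackwell informativeness is computable: experiment P dominates experiment Q if and only if Q is a garbling of P, i.e. Q = PM for some row-stochastic matrix M. The following is a minimal feasibility-LP sketch of that standard criterion (not the note's maxmin argument); the function name and the toy matrices are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def is_garbling(P, Q):
    """Feasibility check for Blackwell dominance: does a row-stochastic
    matrix M exist with Q = P @ M?  P is (states x signals) for the
    finer experiment, Q for the coarser one; rows are likelihoods."""
    s, m = P.shape
    k = Q.shape[1]
    A_eq, b_eq = [], []
    # P @ M = Q, entry by entry (M flattened row-major: M[l, j] -> l*k + j)
    for i in range(s):
        for j in range(k):
            row = np.zeros(m * k)
            row[j::k] = P[i]
            A_eq.append(row)
            b_eq.append(Q[i, j])
    # each row of M sums to one
    for l in range(m):
        row = np.zeros(m * k)
        row[l*k:(l+1)*k] = 1.0
        A_eq.append(row)
        b_eq.append(1.0)
    res = linprog(np.zeros(m * k), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, 1)] * (m * k), method="highs")
    return res.status == 0

P = np.array([[0.9, 0.1], [0.2, 0.8]])        # more informative experiment
Q = P @ np.array([[0.8, 0.2], [0.3, 0.7]])    # a garbling of P by construction
print(is_garbling(P, Q))                      # True
```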

2.
We show that statistical inference on the risk premia in linear factor models that is based on the Fama–MacBeth (FM) and generalized least squares (GLS) two-pass risk premia estimators is misleading when the β's are small and/or the number of assets is large. We propose novel statistics, based on the maximum likelihood estimator of Gibbons [Gibbons, M., 1982. Multivariate tests of financial models: A new approach. Journal of Financial Economics 10, 3–27], which remain trustworthy in these cases. The inadequacy of the FM and GLS two-pass t/Wald statistics is highlighted in a power and size comparison using quarterly portfolio returns from Lettau and Ludvigson [Lettau, M., Ludvigson, S., 2001. Resurrecting the (C)CAPM: A cross-sectional test when risk premia are time-varying. Journal of Political Economy 109, 1238–1287]. This comparison shows that the FM and GLS two-pass t/Wald statistics can be severely size distorted. The 95% confidence sets for the risk premia in the above-cited work that result from the novel statistics differ substantially from those that result from the FM and GLS two-pass t-statistics. They show support for the human capital asset pricing model, although the 95% confidence set for the risk premia on labor income growth is unbounded. The 95% confidence sets show no support for the (scaled) consumption asset pricing model, since the 95% confidence set of the risk premia on scaled consumption growth consists of the whole real line, but they do not reject it either.
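For reference, the two-pass estimator under scrutiny can be sketched in a few lines. This is the textbook procedure, not the paper's proposed likelihood-based statistics; the array names `returns` and `factors` are illustrative assumptions.

```python
import numpy as np

def fama_macbeth(returns, factors):
    """Textbook Fama-MacBeth two-pass risk-premia estimator (a sketch).

    returns : (T, N) excess returns on N assets
    factors : (T, K) factor realizations
    """
    T, N = returns.shape
    # First pass: time-series regressions give each asset's betas
    X = np.column_stack([np.ones(T), factors])           # (T, 1+K)
    coefs = np.linalg.lstsq(X, returns, rcond=None)[0]   # (1+K, N)
    betas = coefs[1:].T                                  # (N, K)
    # Second pass: cross-sectional regression each period, then average
    Z = np.column_stack([np.ones(N), betas])             # (N, 1+K)
    lams = np.array([np.linalg.lstsq(Z, returns[t], rcond=None)[0]
                     for t in range(T)])
    lam = lams.mean(axis=0)                              # risk-premia estimates
    se = lams.std(axis=0, ddof=1) / np.sqrt(T)           # the FM standard errors
    return lam[1:], se[1:]
```

The paper's point is that t-statistics built from these standard errors can be badly size distorted when the betas are small; the sketch only shows what is being critiqued.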

3.
4.
5.
Many econometric quantities such as long-term risk can be modeled by Pareto-like distributions and may also display long-range dependence. If Pareto is replaced by Gaussian, then one can consider fractional Brownian motion, whose increments, called fractional Gaussian noise, exhibit long-range dependence. There are many extensions of that process in the infinite variance stable case. Log-fractional stable noise (log-FSN) is a particularly interesting one. It is a stationary mean-zero stable process with infinite variance, parametrized by a tail index α between 1 and 2, and hence with heavy tails. The lower the value of α, the heavier the tail of the marginal distributions. The fact that α is less than 2 renders the variance infinite. Thus dependence between past and future cannot be measured using the correlation. There are other dependence measures that one can use, for instance the “codifference” or the “covariation”. Since log-FSN is a moving average and hence “mixing”, these dependence measures converge to zero as the lags between past and future become very large. The codifference, in particular, decreases to zero like a power function as the lag goes to infinity. Two parameters play an important role: (a) the value of the exponent, which depends on α and measures the speed of the decay; (b) a multiplicative constant of asymptoticity c, which also depends on α. In this paper, it is shown that for symmetric α-stable log-FSN, the constant c is positive and that the rate of decay of the codifference is such that one has long-range dependence. It is also proved that the same conclusion holds for the second measure of dependence, the covariation, which converges to zero with the same intensity and with a constant of asymptoticity that is positive as well.
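The codifference itself is straightforward to estimate from a sample path via the empirical characteristic function. The sketch below computes the quantity whose decay the paper analyses; the function name and the choice θ = 1 are illustrative assumptions, and the paper's asymptotic analysis of the decay constant is not reproduced here.

```python
import numpy as np

def empirical_codifference(x, lag, theta=1.0):
    """Sample codifference at a given lag for a stationary series x.

    tau(k) = ln E e^{i theta (X_{t+k} - X_t)}
             - ln E e^{i theta X_{t+k}} - ln E e^{-i theta X_t},
    with expectations replaced by sample averages (a sketch only).
    """
    lead, base = x[lag:], x[:-lag]
    ecf = lambda z: np.mean(np.exp(1j * z))   # empirical characteristic function
    tau = (np.log(ecf(theta * (lead - base)))
           - np.log(ecf(theta * lead)) - np.log(ecf(-theta * base)))
    return tau.real
```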

6.
A recent strand of empirical work uses (S, s) models with time-varying stochastic bands to describe infrequent adjustments of prices and other variables. The present paper examines some properties of this model, which encompasses most micro-founded adjustment rules rationalizing infrequent changes. We illustrate that this model is flexible enough to fit data characterized by infrequent adjustment and variable adjustment size. We show that, to the extent that there is variability in the size of adjustments (e.g. if both small and large price changes are observed), (i) a large band parameter is needed to fit the data and (ii) the average band of inaction underlying the model may differ strikingly from the typical observed size of adjustment. The paper thus provides a rationalization for a recurrent empirical result: very large estimated values for the parameters measuring the band of inaction.
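The mechanism is easy to simulate. In the sketch below, the frictionless target follows a random walk and the observed price adjusts to it only when the gap exceeds a band redrawn each period; all parameter names and values are illustrative assumptions, chosen to show that a large average band can coexist with both small and large observed adjustments.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_Ss(T=1000, sigma_shock=0.05, band_mean=0.2, band_sd=0.15):
    """Simulate an (S, s) adjustment rule with a time-varying stochastic
    band: the target p_star is a random walk, and the observed price p
    jumps to p_star only when the gap exceeds the band drawn this period."""
    p_star = np.cumsum(rng.normal(0.0, sigma_shock, T))
    p = np.empty(T)
    p[0] = p_star[0]
    for t in range(1, T):
        band = max(rng.normal(band_mean, band_sd), 0.0)
        p[t] = p_star[t] if abs(p[t - 1] - p_star[t]) > band else p[t - 1]
    return p_star, p

p_star, p = simulate_Ss()
dp = np.diff(p)
print(f"adjustment frequency: {(dp != 0).mean():.2f}, "
      f"mean |adjustment|: {np.abs(dp[dp != 0]).mean():.3f}")
```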

7.
Bentler and Raykov (2000, Journal of Applied Psychology 85: 125–131) and Jöreskog (1999a, http://www.ssicentral.com/lisrel/column3.htm; 1999b, http://www.ssicentral.com/lisrel/column5.htm) proposed procedures for calculating R² for dependent variables involved in loops or possessing correlated errors. This article demonstrates that Bentler and Raykov's procedure cannot be routinely interpreted as a "proportion" of explained variance, while Jöreskog's reduced-form calculation is unnecessarily restrictive. The new blocked-error R² (beR²) uses a minimal hypothetical causal intervention to resolve the variance-partitioning ambiguities created by loops and correlated errors. Hayduk (1996) discussed how stabilising feedback models – models capable of counteracting external perturbations – can result in an acceptable error variance which exceeds the variance of the dependent variable to which that error is attached. For variables included within loops, whether stabilising or not, beR² provides the same value as Hayduk's (1996) loop-adjusted R². For variables not involved in loops and not displaying correlated residuals, beR² reports the same value as the traditional regression R². Thus, beR² provides a conceptualisation of the proportion of explained variance that spans both recursive and nonrecursive structural equation models. A procedure for calculating beR² in any SEM program is provided.

8.
In 2004, Predtetchinski and Herings [A. Predtetchinski, P.J.J. Herings, “A necessary and sufficient condition for non-emptiness of the core of a non-transferable utility game”, Journal of Economic Theory 116 (2004) 84–92] provided a necessary and sufficient condition for non-emptiness of the core of a non-transferable utility game. In this paper, we extend this theorem to its counterpart in fuzzy games and give a necessary and sufficient condition for a non-transferable utility fuzzy game to have a non-empty fuzzy core. As a consequence, we derive a necessary and sufficient condition for non-emptiness of the fuzzy core of a TU fuzzy game.

9.
This study develops two space-varying coefficient simultaneous autoregressive (SVC-SAR) models for areal data and applies them to the discrete/continuous choice model, an econometric model based on the consumer's utility maximization problem. A space-varying coefficient model is a statistical model in which the coefficients vary depending on their location. This study introduces the simultaneous autoregressive model for the underlying spatial dependence across coefficients, whereby the coefficients for one observation are affected by the sum of those for the other observations; this model is named the SVC-SAR model. Because of the model's flexibility, we take a Bayesian approach and construct an estimation method based on Markov chain Monte Carlo simulation. The proposed models are applied to estimate the Japanese residential water demand function, an example of the discrete/continuous choice model.
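The core prior is compact: stacking the location-specific coefficients in a vector β, the SAR structure β = ρWβ + ε implies β = (I − ρW)⁻¹ε, so each location's coefficient depends on the weighted sum of the others'. A minimal sketch of drawing from such a prior; the weight matrix, ρ, and σ are illustrative assumptions, not the paper's specification or estimates.

```python
import numpy as np

rng = np.random.default_rng(2)

def draw_svc_sar(W, rho=0.5, sigma=1.0):
    """Draw space-varying coefficients from an SAR prior:
    beta = rho * W @ beta + eps  =>  beta = (I - rho*W)^{-1} eps,
    where W is a row-standardized spatial weight matrix."""
    n = W.shape[0]
    eps = rng.normal(0.0, sigma, n)
    return np.linalg.solve(np.eye(n) - rho * W, eps)

# Toy 5-region chain: each region's neighbors are the adjacent regions
W = np.zeros((5, 5))
for i in range(5):
    for j in (i - 1, i + 1):
        if 0 <= j < 5:
            W[i, j] = 1.0
W /= W.sum(axis=1, keepdims=True)
print(draw_svc_sar(W))
```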

10.
This paper develops an asymptotic theory for test statistics in linear panel models that are robust to heteroskedasticity, autocorrelation and/or spatial correlation. Two classes of standard errors are analyzed. Both are based on nonparametric heteroskedasticity autocorrelation (HAC) covariance matrix estimators. The first class is based on averages of HAC estimators across individuals in the cross-section, i.e. “averages of HACs”. This class includes the well known cluster standard errors analyzed by Arellano (1987) as a special case. The second class is based on the HAC of cross-section averages and was proposed by Driscoll and Kraay (1998). The “HAC of averages” standard errors are robust to heteroskedasticity, serial correlation and spatial correlation, but weak dependence in the time dimension is required. The “averages of HACs” standard errors are robust to heteroskedasticity and serial correlation, including the nonstationary case, but they are not valid in the presence of spatial correlation. The main contribution of the paper is to develop a fixed-b asymptotic theory for statistics based on both classes of standard errors in models with individual and possibly time fixed-effects dummy variables. The asymptotics is carried out for large time sample sizes for both fixed and large cross-section sample sizes. Extensive simulations show that the fixed-b approximation is usually much better than the traditional normal or chi-square approximation, especially for the Driscoll–Kraay standard errors. The use of fixed-b critical values will lead to more reliable inference in practice, especially for tests of joint hypotheses.
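The “HAC of averages” construction is compact: sum the per-period moment contributions over the cross-section, then apply a kernel HAC estimator in the time dimension. A Bartlett-kernel sketch of that middle matrix; array shapes and names are assumptions.

```python
import numpy as np

def driscoll_kraay_middle(X, u, lags):
    """Driscoll-Kraay middle matrix for the moment sum -- a sketch.

    X : (T, N, K) regressors, u : (T, N) residuals.
    Sums x_{it} u_{it} over i each period, then applies a
    Bartlett-kernel HAC estimator over time.
    """
    T = X.shape[0]
    h = np.einsum('tnk,tn->tk', X, u)       # per-period cross-section sums, (T, K)
    S = h.T @ h / T                         # lag-0 term
    for ell in range(1, lags + 1):
        w = 1.0 - ell / (lags + 1)          # Bartlett weight
        G = h[ell:].T @ h[:-ell] / T
        S += w * (G + G.T)
    return S
```

The full variance estimator wraps this matrix in the usual sandwich with the inverse design moment matrix; the paper's contribution is to replace the normal or chi-square critical values used with such statistics by fixed-b ones.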

11.
Subsampling and the m out of n bootstrap have been suggested in the literature as methods for carrying out inference based on post-model selection estimators and shrinkage estimators. In this paper we consider a subsampling confidence interval (CI) that is based on an estimator that can be viewed either as a post-model selection estimator that employs a consistent model selection procedure or as a super-efficient estimator. We show that the subsampling CI (of nominal level 1−α for any α ∈ (0,1)) has asymptotic confidence size (defined to be the limit of finite-sample size) equal to zero in a very simple regular model. The same result holds for the m out of n bootstrap provided m²/n → 0 and the observations are i.i.d. Similar zero-asymptotic-confidence-size results hold in more complicated models that are covered by the general results given in the paper and for super-efficient and shrinkage estimators that are not post-model selection estimators. Based on these results, subsampling and the m out of n bootstrap are not recommended for obtaining inference based on post-consistent model selection or shrinkage estimators.
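The construction whose asymptotic size is shown to collapse is the generic subsampling interval. A sketch follows; the convergence rate 1/2 and the overlapping-blocks scheme are illustrative choices.

```python
import numpy as np

def subsampling_ci(data, estimator, b, alpha=0.05, rate=0.5):
    """Generic subsampling confidence interval -- a sketch.

    Approximates the law of n^rate * (theta_hat - theta) by the
    subsample statistics b^rate * (theta_b - theta_hat) over all
    overlapping blocks of size b, then inverts the quantiles.
    """
    n = len(data)
    theta_hat = estimator(data)
    stats = np.array([b**rate * (estimator(data[s:s + b]) - theta_hat)
                      for s in range(n - b + 1)])
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    # invert: theta_hat - hi/n^rate <= theta <= theta_hat - lo/n^rate
    return theta_hat - hi / n**rate, theta_hat - lo / n**rate
```

The paper's result is that when `estimator` is a post-consistent-model-selection or shrinkage estimator, the coverage of this nominal 1−α interval can tend to zero.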

12.
We study the problem of building confidence sets for ratios of parameters, from an identification-robust perspective. In particular, we address the simultaneous confidence set estimation of a finite number of ratios. Results apply to a wide class of models suitable for estimation by consistent asymptotically normal procedures. Conventional methods (e.g. the delta method), derived by excluding the parameter discontinuity regions entailed by the ratio functions and typically yielding bounded confidence limits, break down even if the sample size is large (Dufour, 1997). One solution to this problem, which we take in this paper, is to use variants of Fieller's (1940, 1954) method. By inverting a joint test that does not require identifying the ratios, Fieller-based confidence regions are formed for the full set of ratios. Simultaneous confidence sets for individual ratios are then derived by applying projection techniques, which allow for possibly unbounded outcomes. In this paper, we provide simple explicit closed-form analytical solutions for projection-based simultaneous confidence sets, in the case of linear transformations of ratios. Our solution further provides a formal proof for the expressions in Zerbe et al. (1982) pertaining to individual ratios. We apply the geometry of quadrics in a different although related context. The confidence sets so obtained are exact if the inverted test statistic admits a tractable exact distribution, for instance in the normal linear regression context. The proposed procedures are applied and assessed via illustrative Monte Carlo and empirical examples, with a focus on discrete choice models estimated by exact or simulation-based maximum likelihood. Our results underscore the superiority of Fieller-based methods.
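For a single ratio, the Fieller set has a closed form: inverting the normal test of H0: a − ρb = 0 leaves a quadratic inequality in ρ whose solution set may be a bounded interval, the complement of an interval, or the whole real line. A scalar-case sketch (the paper's contribution concerns simultaneous sets for several ratios via projections, which this does not cover):

```python
import numpy as np
from scipy.stats import norm

def fieller_set(a_hat, b_hat, vaa, vab, vbb, alpha=0.05):
    """Fieller confidence set for rho = a/b from estimates (a_hat, b_hat)
    with covariance [[vaa, vab], [vab, vbb]] -- a scalar-case sketch.

    Solves (a_hat - rho*b_hat)^2 <= c^2 (vaa - 2*rho*vab + rho^2*vbb),
    i.e. A*rho^2 + B*rho + C <= 0.  The degenerate case A == 0 is omitted.
    """
    c2 = norm.ppf(1 - alpha / 2) ** 2
    A = b_hat**2 - c2 * vbb
    B = -2 * (a_hat * b_hat - c2 * vab)
    C = a_hat**2 - c2 * vaa
    disc = B**2 - 4 * A * C
    if disc >= 0:
        r1, r2 = sorted([(-B - disc**0.5) / (2 * A), (-B + disc**0.5) / (2 * A)])
        return ("interval", (r1, r2)) if A > 0 else ("complement of interval", (r1, r2))
    return ("whole real line", None)   # denominator not significantly nonzero
```

Unbounded outcomes arise exactly when the denominator estimate is not significantly different from zero, which is the identification-robust feature that delta-method intervals miss.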

13.
A statistical treatment of the problem of division
The problem of division is one of the most important problems in the emergence of probability. It has long been considered solved from a probabilistic viewpoint. However, we do not find the solution satisfactory. In this study, the problem is recast as a statistical problem. The outcomes of matches of the game are considered as an infinitely exchangeable random sequence, and predictors/estimators are constructed in light of the de Finetti representation theorem. Bounds on the estimators are derived over wide classes of priors (mixing distributions). We find that, although conservative, the classical solutions are justifiable by our analysis, while the plug-in estimates are too optimistic for the winning player.
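The classical (Pascal–Fermat) solution divides the stakes in proportion to each player's probability of winning the match from the current score, with a fair coin; the plug-in alternative replaces 1/2 with the observed winning fraction. A small sketch reproducing both (the scores in the example are illustrative):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def win_prob(a, b, p=0.5):
    """P(player 1 takes the match) when she needs `a` more wins, the
    opponent needs `b`, and she wins each game with probability p.
    p = 1/2 gives the classical division; the plug-in solution uses
    the observed winning fraction instead."""
    if a == 0:
        return 1.0
    if b == 0:
        return 0.0
    return p * win_prob(a - 1, b, p) + (1 - p) * win_prob(a, b - 1, p)

# Stakes split when the match stands 5-3 in a first-to-7 game:
print(win_prob(2, 4))            # classical fair share for the leader
print(win_prob(2, 4, p=5 / 8))   # plug-in share using the observed 5/8 rate
```

With p = 1/2 the leader's share is 13/16 ≈ 0.81, while the plug-in share (p = 5/8) exceeds 0.93, illustrating the point that plug-in estimates favor the winning player.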

14.
The main goal of both Bayesian model selection and classical hypothesis testing is to make inferences with respect to the state of affairs in a population of interest. The main differences between the two approaches are the explicit use of prior information by Bayesians and the explicit use of null distributions by the classicists. Formalizing prior information in prior distributions is often difficult. In this paper two practical approaches to specifying prior distributions (encompassing priors and training data) are presented. The computation of null distributions is relatively easy. However, as will be illustrated, a straightforward interpretation of the resulting p-values is not always easy. Bayesian model selection can be used to compute posterior probabilities for each of a number of competing models. This provides an alternative to the currently prevalent testing of hypotheses using p-values. Both approaches are compared and illustrated using case studies. Each case study fits in the framework of the normal linear model, that is, analysis of variance and multiple regression.
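The encompassing-priors idea admits a very compact implementation: the Bayes factor of an inequality-constrained model against the unconstrained (encompassing) model is the proportion of posterior draws satisfying the constraint divided by the proportion of prior draws satisfying it. A toy sketch under a conjugate normal model; all numbers, names, and the constraint are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def encompassing_bf(prior_draws, post_draws, constraint):
    """BF of the constrained model vs the encompassing model: the
    posterior proportion satisfying the constraint over the prior
    proportion.  `constraint` maps an (n, p) draw array to booleans."""
    return constraint(post_draws).mean() / constraint(prior_draws).mean()

# Toy example: H1: mu1 > mu2 for two normal means, known variance
prior_sd = 10.0
prior = rng.normal(0.0, prior_sd, size=(100_000, 2))
xbar, n, sigma = np.array([1.2, 0.4]), 50, 1.0
post_var = 1.0 / (1.0 / prior_sd**2 + n / sigma**2)   # conjugate update
post_mean = (n * xbar / sigma**2) * post_var
post = rng.normal(post_mean, np.sqrt(post_var), size=(100_000, 2))

bf = encompassing_bf(prior, post, lambda d: d[:, 0] > d[:, 1])
print(f"BF(mu1 > mu2 vs unconstrained) ~ {bf:.2f}")
```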

15.
Recent years have seen an explosion of activity in the field of functional data analysis (FDA), in which curves, spectra, images and so on are considered as basic functional data units. A central problem in FDA is how to fit regression models with scalar responses and functional data points as predictors. We review some of the main approaches to this problem, categorising the basic model types as linear, non-linear and non-parametric. We discuss publicly available software packages and illustrate some of the procedures by application to a functional magnetic resonance imaging data set.
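A basic version of the scalar-on-function linear model can be sketched from scratch: expand the coefficient function in a fixed basis, so that the integral ∫ x_i(t) β(t) dt reduces to an ordinary regression on basis scores. The sketch below uses a Fourier basis and a ridge penalty; the basis size and penalty are illustrative choices that the packages reviewed in the paper automate.

```python
import numpy as np

def functional_linear_fit(curves, y, t, n_basis=7, ridge=1e-3):
    """Fit y_i = a + integral x_i(t) beta(t) dt + e_i -- a sketch.

    curves : (n, len(t)) discretized predictor curves on grid t.
    Expands beta(t) in a Fourier basis and solves a ridge regression
    on the resulting basis scores.
    """
    T = t[-1] - t[0]
    basis = [np.ones_like(t)]
    for k in range(1, (n_basis + 1) // 2 + 1):
        basis += [np.sin(2 * np.pi * k * (t - t[0]) / T),
                  np.cos(2 * np.pi * k * (t - t[0]) / T)]
    B = np.array(basis[:n_basis]).T                 # (len(t), n_basis)
    dt = np.gradient(t)
    scores = curves @ (B * dt[:, None])             # integral of x_i(t) b_j(t)
    Z = np.column_stack([np.ones(len(y)), scores])
    coef = np.linalg.solve(Z.T @ Z + ridge * np.eye(Z.shape[1]), Z.T @ y)
    beta_t = B @ coef[1:]                           # beta(t) on the grid
    return coef[0], beta_t
```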

16.
This paper studies an alternative quasi-likelihood approach under possible model misspecification. We derive a filtered likelihood from a given quasi-likelihood (QL), called a limited information quasi-likelihood (LI-QL), that contains relevant but limited information on the data generation process. Our LI-QL approach, on the one hand, extends the robustness of the QL approach to inference problems for which the existing approach does not apply. Our study in this paper, on the other hand, builds a bridge between the classical and Bayesian approaches for statistical inference under possible model misspecification. We establish a large-sample correspondence between the classical QL approach and our LI-QL-based Bayesian approach. An interesting finding is that the asymptotic distribution of an LI-QL-based posterior and that of the corresponding quasi maximum likelihood estimator share the same “sandwich”-type second moment. Based on the LI-QL we can develop inference methods that are useful for practical applications under possible model misspecification. In particular, we can develop the Bayesian counterparts of classical QL methods that carry all the nice features of the latter studied in White (1982). In addition, we can develop a Bayesian method for analyzing model specification based on an LI-QL.
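The shared “sandwich”-type second moment is White's (1982) A⁻¹BA⁻¹ form. A generic sketch of its quasi-ML estimate; the array names are assumptions.

```python
import numpy as np

def sandwich_cov(score_contribs, hessian):
    """White's sandwich covariance A^{-1} B A^{-1} -- a generic sketch.

    score_contribs : (n, p) per-observation scores of the quasi
                     log-likelihood at the estimate
    hessian        : (p, p) average Hessian of the quasi log-likelihood (A)
    """
    n = score_contribs.shape[0]
    B = score_contribs.T @ score_contribs / n   # outer product of scores
    A_inv = np.linalg.inv(hessian)
    return A_inv @ B @ A_inv / n                # estimated covariance of theta_hat
```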

17.
18.
We study the problem of testing hypotheses on the parameters of one- and two-factor stochastic volatility (SV) models, allowing for the possible presence of non-regularities such as singular moment conditions and unidentified parameters, which can lead to non-standard asymptotic distributions. We focus on the development of simulation-based exact procedures, whose level can be controlled in finite samples, as well as on large-sample procedures which remain valid under non-regular conditions. We consider Wald-type, score-type and likelihood-ratio-type tests based on a simple moment estimator, which can be easily simulated. We also propose a C(α)-type test which is very easy to implement and exhibits relatively good size and power properties. Besides usual linear restrictions on the SV model coefficients, the problems studied include testing homoskedasticity against an SV alternative (which involves singular moment conditions under the null hypothesis) and testing the null hypothesis of one factor driving the dynamics of the volatility process against two factors (which raises identification difficulties). Three ways of implementing the tests based on alternative statistics are compared: asymptotic critical values (when available), a local Monte Carlo (or parametric bootstrap) test procedure, and a maximized Monte Carlo (MMC) procedure. The size and power properties of the proposed tests are examined in a simulation experiment. The results indicate that the C(α)-based tests (built upon the simple moment estimator available in closed form) have good size and power properties for regular hypotheses, while Monte Carlo tests are much more reliable than those based on asymptotic critical values. Further, in cases where the parametric bootstrap appears to fail (for example, in the presence of identification problems), the MMC procedure easily controls the level of the tests. Moreover, MMC-based tests exhibit relatively good power performance despite the conservative feature of the procedure. Finally, we present an application to a time series of returns on the Standard and Poor's Composite Price Index.
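The local Monte Carlo (parametric bootstrap) test at the heart of the comparison takes a few lines: simulate the statistic under the null at estimated nuisance parameters and use the simulation rank of the observed statistic. A sketch; the MMC variant would additionally maximize this p-value over the nuisance-parameter space.

```python
import numpy as np

def monte_carlo_pvalue(stat_obs, simulate_stat, n_rep=99, rng=None):
    """Local Monte Carlo (parametric bootstrap) p-value -- a sketch.

    simulate_stat(rng) must draw a data set under the null (at the
    estimated nuisance parameters) and return the test statistic.
    The p-value (1 + #{S_j >= S_obs}) / (n_rep + 1) gives an
    exact-level test when the statistic is pivotal."""
    rng = rng or np.random.default_rng()
    sims = np.array([simulate_stat(rng) for _ in range(n_rep)])
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)
```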

19.
This paper introduces large-T bias-corrected estimators for nonlinear panel data models with both time invariant and time varying heterogeneity. These models include systems of equations with limited dependent variables and unobserved individual effects, and sample selection models with unobserved individual effects. Our two-step approach first estimates the reduced form by fixed effects procedures to obtain estimates of the time varying heterogeneity underlying the endogeneity/selection bias. We then estimate the primary equation by fixed effects, including an appropriately constructed control variable from the reduced form estimates as an additional explanatory variable. The fixed effects approach in this second step captures the time invariant heterogeneity, while the control variable accounts for the time varying heterogeneity. Since either or both steps might employ nonlinear fixed effects procedures, it is necessary to bias-adjust the estimates due to the incidental parameters problem. This problem is exacerbated by the two-step nature of the procedure. As these two-step approaches are not covered in the existing literature, we derive the appropriate correction, thereby extending the use of large-T bias adjustments to an important class of models. Simulation evidence indicates our approach works well in finite samples, and an empirical example illustrates the applicability of our estimator.
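The two-step logic can be seen in a linear control-function sketch. The paper's setting is nonlinear, which is what creates the incidental-parameters bias the authors correct; the linear version below only shows the mechanics, and all names are illustrative.

```python
import numpy as np

def two_step_control_function(y, x, z, ids):
    """Linear control-function sketch of the two-step idea.

    Step 1: fixed-effects regression of the endogenous x on the
    instrument z; the residual v_hat proxies the time-varying
    heterogeneity.  Step 2: fixed-effects regression of y on x and
    v_hat, where the within transformation handles the time-invariant
    heterogeneity."""
    def demean(a):  # within transformation by individual
        out = a.astype(float).copy()
        for g in np.unique(ids):
            out[ids == g] -= out[ids == g].mean(axis=0)
        return out

    xd, zd, yd = demean(x), demean(z), demean(y)
    pi = np.linalg.lstsq(zd[:, None], xd, rcond=None)[0]
    v_hat = xd - zd * pi
    X2 = np.column_stack([xd, v_hat])
    beta = np.linalg.lstsq(X2, yd, rcond=None)[0]
    return beta   # [coef on x, coef on the control variable]
```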

20.
This paper is devoted to studying optimal designs for estimating an extremal point of a multivariate quadratic regression model in the unit hyperball. The problem of estimating an extremal point is reduced to that of estimating certain parameters of a corresponding nonlinear (in parameters) regression model. For this reduced problem, truncated locally D-optimal designs are found in explicit form. The result generalizes the results of Fedorov and Müller (1997) for the one-dimensional quadratic regression function on the unit segment.
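For intuition, the simplest member of the family the paper generalizes, the D-optimal design for a one-dimensional quadratic on [-1, 1], can be approximated numerically by maximizing the log-determinant of the information matrix over designs supported on a grid; the known answer puts mass 1/3 on each of -1, 0 and 1. A sketch, with grid, optimizer and threshold as illustrative choices:

```python
import numpy as np
from scipy.optimize import minimize

def d_optimal_quadratic(n_grid=21):
    """Numerically approximate the D-optimal design for
    y = b0 + b1*x + b2*x^2 on [-1, 1] -- a sketch of the simplest
    version of the design problem the paper studies."""
    x = np.linspace(-1, 1, n_grid)
    F = np.column_stack([np.ones_like(x), x, x**2])   # regression functions f(x)

    def neg_logdet(v):
        w = np.exp(v) / np.exp(v).sum()               # softmax keeps weights on the simplex
        M = F.T @ (w[:, None] * F)                    # information matrix sum_i w_i f(x_i)f(x_i)'
        return -np.linalg.slogdet(M)[1]

    res = minimize(neg_logdet, np.zeros(n_grid), method="BFGS")
    w = np.exp(res.x) / np.exp(res.x).sum()
    keep = w > 1e-3
    return x[keep], w[keep]                           # expect mass ~1/3 near -1, 0, 1

print(d_optimal_quadratic())
```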
