Similar documents (20 results found)
1.
Microeconometric treatments of discrete choice under risk are typically homoscedastic latent variable models. Specifically, choice probabilities are given by preference functional differences (given by expected utility, rank-dependent utility, etc.) embedded in cumulative distribution functions. This approach has a problem: Estimated utility function parameters meant to represent agents’ degree of risk aversion in the sense of Pratt (1964) do not imply a suggested “stochastically more risk averse” relation within such models. A new heteroscedastic model called “contextual utility” remedies this, and estimates in one data set suggest it explains (and especially predicts) as well as or better than other stochastic models.
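A minimal numerical sketch of the contrast described above, assuming a simple expected-utility functional with CRRA utility: the homoscedastic model uses a fixed noise scale, while the heteroscedastic "contextual" variant rescales the utility difference by the utility spread between the best and worst outcomes available in the choice pair (one plausible reading of contextual utility; the function names, parameter values, and lotteries are illustrative, not taken from the paper).

```python
import numpy as np
from scipy.stats import norm

def crra(x, r):
    """CRRA utility: u(x) = x**(1-r)/(1-r), with log utility at r = 1."""
    return np.log(x) if r == 1 else x ** (1 - r) / (1 - r)

def eu(lottery, r):
    """Expected utility of a lottery given as [(prob, outcome), ...]."""
    return sum(p * crra(x, r) for p, x in lottery)

def p_choose_A_homoscedastic(A, B, r, sigma=1.0):
    # Fixed noise scale: a Fechner/probit latent-variable model.
    return norm.cdf((eu(A, r) - eu(B, r)) / sigma)

def p_choose_A_contextual(A, B, r, precision=5.0):
    # Noise scale tied to the choice context: rescale the EU difference by the
    # utility spread between the best and worst outcomes in the pair.
    outcomes = [x for lot in (A, B) for _, x in lot]
    spread = crra(max(outcomes), r) - crra(min(outcomes), r)
    return norm.cdf(precision * (eu(A, r) - eu(B, r)) / spread)

# Illustrative 50/50 lotteries given as (probability, payoff) pairs:
A = [(0.5, 10.0), (0.5, 40.0)]   # riskier
B = [(0.5, 20.0), (0.5, 25.0)]   # safer
print(p_choose_A_homoscedastic(A, B, r=0.5), p_choose_A_contextual(A, B, r=0.5))
```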

2.
We consider several ordinal formulations of submodularity, defined for arbitrary binary relations on lattices. Two of these formulations are essentially due to Kreps [Kreps, D.M., 1979. A representation theorem for “Preference for Flexibility”. Econometrica 47 (3), 565–578] and one is a weakening of a notion due to Milgrom and Shannon [Milgrom, P., Shannon, C., 1994. Monotone comparative statics. Econometrica 62 (1), 157–180]. We show that any reflexive binary relation satisfying either of Kreps’s definitions also satisfies Milgrom and Shannon’s definition, and that any transitive and monotonic binary relation satisfying Milgrom and Shannon’s condition satisfies both of Kreps’s conditions.

3.
This paper studies the decision-theoretic foundation for the notion of stability in the dynamic context of strategic interaction. We formulate this notion and show that common knowledge of rationality implies a “stable” pattern of behavior in extensive games with perfect information. In the “generic” case, our approach is consistent with Aumann’s [Aumann, R.J., 1995. Backward induction and common knowledge of rationality. Games and Economic Behavior 8, 6–19] result that common knowledge of rationality leads to the backward induction outcome.

4.
The “quiet life hypothesis” (QLH) of Hicks (1935) argues that, because reaching optimal profits is subjectively costly for management, firms use their market power to tolerate an inefficient allocation of resources. Increasing competitive pressure is therefore likely to force management to work harder to reach optimal profits. Another hypothesis that also relates market power to efficiency is the “efficient structure hypothesis” (ESH) of Demsetz (1973). ESH argues that firms with superior efficiency or technology have lower costs and therefore higher profits; such firms are assumed to gain larger market shares, which leads to higher concentration. Ignoring the efficiency levels of firms in a market power model can therefore cause both estimation and interpretation problems. Unfortunately, the literature on market power measurement largely ignores this relationship. In a dynamic setting, we estimate the market power of US airlines in two city-pairs, both with and without allowing for firm inefficiencies. Using industry-level cost data, we estimate the cost function parameters and time-varying efficiencies. An instrumental variables version of the square root Kalman filter is used to estimate time-varying conduct parameters.

5.
In this paper we focus primarily on the dynamic evolution of the world distribution of growth rates in per capita GDP. We propose new concepts and measures of “convergence” or “divergence” that are based on entropy distances and dominance relations between groups of countries over time.
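The paper's exact entropy-based measures are not reproduced in the abstract; as a hedged illustration, the sketch below computes a symmetrized relative-entropy (Kullback–Leibler) distance between the binned growth-rate distributions of two country groups, one simple member of the family of entropy distances. All data and function names are illustrative.

```python
import numpy as np

def binned_density(sample, edges, eps=1e-12):
    """Histogram-based probability mass on common bins (eps avoids log(0))."""
    counts, _ = np.histogram(sample, bins=edges)
    p = np.clip(counts / counts.sum(), eps, None)
    return p / p.sum()

def kl(p, q):
    """Relative entropy (KL divergence) of p with respect to q."""
    return float(np.sum(p * np.log(p / q)))

def symmetric_entropy_distance(growth_a, growth_b, n_bins=20):
    """Symmetrized KL divergence between two groups' growth-rate distributions."""
    edges = np.histogram_bin_edges(np.concatenate([growth_a, growth_b]), bins=n_bins)
    p, q = binned_density(growth_a, edges), binned_density(growth_b, edges)
    return 0.5 * (kl(p, q) + kl(q, p))

# Illustrative data: per-capita GDP growth rates (decimals) for two country groups.
rng = np.random.default_rng(0)
rich, poor = rng.normal(0.02, 0.01, 500), rng.normal(0.01, 0.03, 500)
print(symmetric_entropy_distance(rich, poor))
```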

6.
In this note I show that there is a mistake in the proof of uniqueness in Engelbrecht-Wiggans, Milgrom and Weber’s seminal “Competitive Bidding and Proprietary Information” and provide a correct proof.

7.
We take as a starting point the existence of a joint distribution implied by different dynamic stochastic general equilibrium (DSGE) models, all of which are potentially misspecified. Our objective is to compare “true” joint distributions with ones generated by given DSGEs. This is accomplished via comparison of the empirical joint distributions (or confidence intervals) of historical and simulated time series. The tool draws on recent advances in the theory of the bootstrap, Kolmogorov-type testing, and other work on the evaluation of DSGEs aimed at comparing the second order properties of historical and simulated time series. We begin by fixing a given model as the “benchmark” model, against which all “alternative” models are to be compared. We then test whether at least one of the alternative models provides a more “accurate” approximation to the true cumulative distribution than does the benchmark model, where accuracy is measured in terms of distributional square error. Bootstrap critical values are discussed, and an illustrative example is given, in which it is shown that alternative versions of a standard DSGE model in which calibrated parameters are allowed to vary slightly all perform equally well. On the other hand, there are stark differences between models when the shocks driving the models are assigned implausible variances and/or distributional assumptions.
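A minimal sketch of the accuracy measure the abstract calls distributional square error, assuming a simple version in which empirical CDFs of a historical series and of each model's simulated series are compared on a common grid; the weighting, evaluation grid, and bootstrap critical values of the actual procedure are not reproduced here, and all names and data are illustrative.

```python
import numpy as np

def ecdf(sample, grid):
    """Empirical CDF of `sample` evaluated on `grid`."""
    return np.searchsorted(np.sort(sample), grid, side="right") / len(sample)

def distributional_square_error(historical, simulated, grid):
    """Average squared distance between the two empirical CDFs over the grid."""
    return float(np.mean((ecdf(historical, grid) - ecdf(simulated, grid)) ** 2))

# Illustrative comparison: benchmark model vs. one alternative model.
rng = np.random.default_rng(1)
historical = rng.normal(0.0, 1.0, 400)          # stand-in for an observed series
sim_benchmark = rng.normal(0.2, 1.0, 4000)      # simulated from the benchmark DSGE
sim_alternative = rng.normal(0.0, 1.3, 4000)    # simulated from an alternative DSGE
grid = np.linspace(historical.min(), historical.max(), 100)

dse_bench = distributional_square_error(historical, sim_benchmark, grid)
dse_alt = distributional_square_error(historical, sim_alternative, grid)
print(dse_bench, dse_alt)   # smaller = closer to the observed distribution
```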

8.
We analyze optimality properties of maximum likelihood (ML) and other estimators when the problem does not necessarily fall within the locally asymptotically normal (LAN) class, therefore covering cases that are excluded from conventional LAN theory, such as unit root nonstationary time series. The classical Hájek–Le Cam optimality theory is adapted to cover this situation. We show that the expectations of certain monotone “bowl-shaped” functions of the squared estimation error are minimized by the ML estimator in locally asymptotically quadratic situations, which often occur in nonstationary time series analysis when the LAN property fails. Moreover, we demonstrate a direct connection between the (Bayesian property of) asymptotic normality of the posterior and the classical optimality properties of ML estimators.
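For readers unfamiliar with the terminology, a standard textbook statement of the locally asymptotically quadratic (LAQ) property referenced above (background only, not an equation taken from the paper): the local log-likelihood ratio admits the expansion

\[
\log \frac{dP^{(T)}_{\theta_0 + \delta_T h}}{dP^{(T)}_{\theta_0}}
= h^{\prime} S_T \;-\; \tfrac{1}{2}\, h^{\prime} J_T\, h \;+\; o_p(1),
\]

where $(S_T, J_T)$ converge jointly in distribution and the limit of $J_T$ may remain random; LAN is the special case in which $J_T$ converges to a nonrandom matrix $J$ and $S_T \Rightarrow N(0, J)$.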

9.
A simplified version of the Neyman (1937) “Smooth” goodness-of-fit test is extended to account for the presence of estimated model parameters, thereby removing overfitting bias. Using a Lagrange Multiplier approach rather than the Likelihood Ratio statistic proposed by Neyman greatly simplifies the calculations. Polynomials, splines, and the step function of Pearson’s test are compared as alternative perturbations to the theoretical uniform distribution. The extended tests have negligible size distortion and more power than standard tests. The tests are applied to competing symmetric leptokurtic distributions with US stock return data. These are generally rejected, primarily because of the presence of skewness.
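A minimal sketch of the classical Neyman smooth statistic that the paper extends, assuming the null distribution is fully specified (no estimated parameters, so none of the paper's corrections are applied): transform the data by the hypothesized CDF, project onto normalized shifted Legendre polynomials, and compare the sum of squared components to a chi-square distribution.

```python
import numpy as np
from scipy.stats import norm, chi2

# Normalized shifted Legendre polynomials on [0, 1] (orders 1-4).
LEGENDRE = [
    lambda u: np.sqrt(3) * (2 * u - 1),
    lambda u: np.sqrt(5) * (6 * u**2 - 6 * u + 1),
    lambda u: np.sqrt(7) * (20 * u**3 - 30 * u**2 + 12 * u - 1),
    lambda u: 3 * (70 * u**4 - 140 * u**3 + 90 * u**2 - 20 * u + 1),
]

def neyman_smooth_test(x, null_cdf, k=4):
    """Classical smooth test of H0: x ~ F, with F fully specified."""
    u = null_cdf(np.asarray(x))                       # PIT: uniform on [0,1] under H0
    b = [LEGENDRE[j](u).sum() / np.sqrt(len(u)) for j in range(k)]
    stat = float(np.sum(np.square(b)))                # asymptotically chi2(k) under H0
    return stat, chi2.sf(stat, df=k)

# Illustration: test standard normality of simulated leptokurtic returns.
rng = np.random.default_rng(2)
returns = rng.standard_t(df=5, size=1000) / np.sqrt(5 / 3)   # unit-variance, fat-tailed
print(neyman_smooth_test(returns, norm.cdf))
```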

10.
For financial assets whose best quotes almost always change by a jump equal to the market’s price tick size (one cent, five cents, etc.), this paper proposes an estimator of quadratic variation which controls for microstructure effects. It measures the prevalence of alternations, where quotes jump back to their just-previous price. It defines a simple property called “uncorrelated alternation”, which, under certain conditions, implies that the estimator is consistent in an asymptotic limit theory where jumps become very frequent and small. A feasible limit theory is developed and performs well in simulations.
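A minimal sketch of the quantity the abstract highlights, counting how often a tick-size quote jump is an alternation (a reversal back to the just-previous price) versus a continuation; the paper's actual quadratic-variation estimator built from this count is not reproduced here, so the naive realized variance is shown only for reference. The quote path is illustrative.

```python
import numpy as np

def alternation_summary(quotes):
    """Classify consecutive quote jumps as alternations (a reversal back toward
    the just-previous price) or continuations (a further move in the same direction)."""
    changes = np.diff(np.asarray(quotes, dtype=float))
    jumps = changes[changes != 0.0]            # keep only actual price changes
    signs = jumps[1:] * jumps[:-1]
    alternations = int(np.sum(signs < 0))      # direction reverses
    continuations = int(np.sum(signs > 0))     # direction persists
    naive_rv = float(np.sum(changes ** 2))     # plain realized variance, for reference
    return {"alternations": alternations,
            "continuations": continuations,
            "naive_realized_variance": naive_rv}

# Illustrative best-quote path moving on a one-cent grid.
quotes = [10.00, 10.01, 10.00, 10.01, 10.02, 10.01, 10.00, 10.01]
print(alternation_summary(quotes))
```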

11.
The recent general equilibrium theory of trade and multinationals emphasizes the importance of third countries and the complex integration strategies of multinationals. Little has been done to test this theory empirically. This paper attempts to rectify this situation by considering not only bilateral determinants, but also spatially weighted third-country determinants of foreign direct investment (FDI). Since the dependency among host markets is particularly related to multinationals’ trade between them, we use trade costs (distances) as spatial weights. Using panel data on U.S. industries and host countries observed over the 1989–1999 period, we estimate a “complex FDI” version of the knowledge-capital model of U.S. outward FDI by various recently developed spatial panel data generalized moments (GM) estimators. We find that third-country effects are significant, lending support to the existence of various modes of complex FDI.

12.
Data for discrete ordered dependent variables are often characterised by “excessive” zero observations, which may relate to two distinct data generating processes. Traditional ordered probit models have limited capacity to explain this preponderance of zero observations. We propose a zero-inflated ordered probit model using a double-hurdle combination of a split probit model and an ordered probit model. Monte Carlo results show favourable performance in finite samples. The model is applied to a consumer choice problem of tobacco consumption, indicating that policy recommendations could be misleading if the splitting process is ignored.
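A minimal sketch of the likelihood structure such a zero-inflated ordered probit implies, assuming independence between the split (participation) probit and the ordered probit; the paper's exact specification, including any error correlation, may differ, and all variable names and the simulated data are illustrative.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def ziop_negloglik(params, y, X_split, X_level, n_cats):
    """y in {0, ..., n_cats-1}; zeros can come either from non-participation or from
    participants whose latent level falls below the first cutpoint."""
    k1, k2 = X_split.shape[1], X_level.shape[1]
    gamma = params[:k1]                       # split (participation) probit coefficients
    beta = params[k1:k1 + k2]                 # ordered probit index coefficients
    cuts = np.sort(params[k1 + k2:])          # n_cats - 1 increasing cutpoints
    p_part = norm.cdf(X_split @ gamma)        # probability of participating
    xb = X_level @ beta
    bounds = np.concatenate(([-np.inf], cuts, [np.inf]))
    # P(level = j | participation) for each category j
    p_level = norm.cdf(bounds[1:][None, :] - xb[:, None]) - \
              norm.cdf(bounds[:-1][None, :] - xb[:, None])
    prob = p_part[:, None] * p_level
    prob[:, 0] += 1.0 - p_part                # extra zeros from non-participants
    lik = prob[np.arange(len(y)), y]
    return -np.sum(np.log(np.clip(lik, 1e-300, None)))

# Illustrative use with placeholder data (3 ordered categories):
rng = np.random.default_rng(3)
n, n_cats = 500, 3
X_split = np.column_stack([np.ones(n), rng.normal(size=n)])
X_level = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.integers(0, n_cats, size=n)           # placeholder outcome
start = np.concatenate([np.zeros(2), np.zeros(2), [0.0, 1.0]])
fit = minimize(ziop_negloglik, start, args=(y, X_split, X_level, n_cats), method="BFGS")
print(fit.x)
```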

13.
A regression discontinuity (RD) research design is appropriate for program evaluation problems in which treatment status (or the probability of treatment) depends on whether an observed covariate exceeds a fixed threshold. In many applications the treatment-determining covariate is discrete. This makes it impossible to compare outcomes for observations “just above” and “just below” the treatment threshold, and requires the researcher to choose a functional form for the relationship between the treatment variable and the outcomes of interest. We propose a simple econometric procedure to account for uncertainty in the choice of functional form for RD designs with discrete support. In particular, we model deviations of the true regression function from a given approximating function—the specification errors—as random. Conventional standard errors ignore the group structure induced by specification errors and tend to overstate the precision of the estimated program impacts. The proposed inference procedure that allows for specification error also has a natural interpretation within a Bayesian framework.
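A minimal sketch of the inference idea described above, under the common reading that one fits a global polynomial in the discrete running variable plus a treatment dummy and then clusters standard errors on the distinct values of the running variable; the data, variable names, and polynomial order are illustrative, not the paper's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n, cutoff = 2000, 50
x = rng.integers(40, 61, size=n)                    # discrete running variable (e.g. age in years)
treat = (x >= cutoff).astype(int)
# Outcome: smooth function of x, a treatment jump, and a cell-level specification error.
cell_error = {v: rng.normal(0, 0.3) for v in np.unique(x)}
y = 0.05 * x + 1.0 * treat + np.array([cell_error[v] for v in x]) + rng.normal(0, 1, n)

df = pd.DataFrame({"y": y, "x": x, "treat": treat})
model = smf.ols("y ~ treat + x + I(x**2)", data=df)

naive = model.fit()                                       # conventional (iid) standard errors
clustered = model.fit(cov_type="cluster",
                      cov_kwds={"groups": df["x"]})       # cluster on running-variable cells
print(naive.bse["treat"], clustered.bse["treat"])         # clustered SE is typically larger
```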

14.
We consider the problem of adjudicating conflicting claims in the context of a variable population. A property of rules is “lifted” if, whenever a bilaterally consistent rule satisfies it in the two-claimant case, the rule satisfies it for any number of claimants. We identify a number of properties that are lifted, such as equal treatment of equals, resource monotonicity, composition down and composition up, and show that continuity, anonymity and self-duality are not lifted. However, each of these three properties is lifted if the rule is resource monotonic.

15.
Recent work finds evidence that the volatility of the U.S. economy fell dramatically around the first quarter of 1984. We trace the timing of this so-called “Great Moderation” across many subsectors of the economy in order to better understand its root cause. We find that the interest-rate-sensitive sectors generally experience a much earlier volatility decline than other large sectors of the economy. The changes in Federal Reserve stabilization policies that occurred during the early 1980s support the view that an improved monetary policy played an important role in stabilizing real economic activity. We find only mild evidence that “good luck” was important and little evidence to support the claim that improved inventory management was important.

16.
Structural vs. atheoretic approaches to econometrics
In this paper I attempt to lay out the sources of conflict between the so-called “structural” and “experimentalist” camps in econometrics. Critics of the structural approach often assert that it produces results that rely on too many assumptions to be credible, and that the experimentalist approach provides an alternative that relies on fewer assumptions. Here, I argue that this is a false dichotomy. All econometric work relies heavily on a priori assumptions. The main difference between structural and experimental (or “atheoretic”) approaches is not in the number of assumptions but the extent to which they are made explicit.

17.
Volatility forecast comparison using imperfect volatility proxies
The use of a conditionally unbiased, but imperfect, volatility proxy can lead to undesirable outcomes in standard methods for comparing conditional variance forecasts. We motivate our study with analytical results on the distortions caused by some widely used loss functions, when used with standard volatility proxies such as squared returns, the intra-daily range or realised volatility. We then derive necessary and sufficient conditions on the functional form of the loss function for the ranking of competing volatility forecasts to be robust to the presence of noise in the volatility proxy, and derive some useful special cases of this class of “robust” loss functions. The methods are illustrated with an application to the volatility of returns on IBM over the period 1993 to 2003.
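A minimal sketch of the kind of comparison the abstract describes, assuming squared returns serve as the (conditionally unbiased but noisy) volatility proxy and using two losses commonly cited as members of the robust class, squared-error loss on variances and a QLIKE-type loss, together with a naive Diebold–Mariano-style t-statistic on the loss differential; the exact loss parameterization and HAC correction used in the paper are not reproduced, and all data are simulated placeholders.

```python
import numpy as np

def mse_loss(proxy, h):
    """Squared-error loss on the variance scale."""
    return (proxy - h) ** 2

def qlike_loss(proxy, h):
    """One common QLIKE parameterization (equals zero when h matches the proxy)."""
    ratio = proxy / h
    return ratio - np.log(ratio) - 1.0

def dm_tstat(loss_a, loss_b):
    """Naive Diebold-Mariano t-stat on the mean loss differential (no HAC correction)."""
    d = loss_a - loss_b
    return float(np.mean(d) / (np.std(d, ddof=1) / np.sqrt(len(d))))

# Illustrative data: daily returns and two competing conditional variance forecasts.
rng = np.random.default_rng(5)
true_var = 0.0001 * np.exp(rng.normal(0, 0.5, 2500))
returns = rng.normal(0, np.sqrt(true_var))
proxy = returns ** 2                      # conditionally unbiased but noisy proxy
forecast_a = true_var * 1.05              # slightly biased forecast
forecast_b = true_var * np.exp(rng.normal(0, 0.3, true_var.size))  # noisier forecast

for loss in (mse_loss, qlike_loss):
    print(loss.__name__, dm_tstat(loss(proxy, forecast_a), loss(proxy, forecast_b)))
```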

18.
We develop a general procedure to construct pairwise meeting processes characterized by two features. First, in each period the process maximizes the number of matches in the population. Second, over time agents meet everybody else exactly once. We call this type of meeting “absolute strangers.” Our methodological contribution to economics is to offer a simple procedure for constructing a type of decentralized trading environment commonly employed in both theoretical and experimental economics. In particular, we demonstrate how to make use of the mathematics of Latin squares to enrich the modeling of matching economies.
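The abstract's two features (maximal matching each period; every pair meets exactly once) are exactly the properties of a round-robin schedule. The sketch below uses the standard "circle method", one simple construction with these properties; it is not the paper's own Latin-squares construction, which is not reproduced here.

```python
def absolute_strangers_schedule(n):
    """Return n-1 rounds of pairings for an even number n of agents such that
    every round matches all agents and every unordered pair meets exactly once."""
    if n % 2 != 0:
        raise ValueError("the circle method shown here needs an even number of agents")
    agents = list(range(n))
    rounds = []
    for _ in range(n - 1):
        rounds.append([(agents[i], agents[n - 1 - i]) for i in range(n // 2)])
        # Keep agent 0 fixed and rotate the remaining agents one position.
        agents = [agents[0]] + [agents[-1]] + agents[1:-1]
    return rounds

schedule = absolute_strangers_schedule(6)
for t, pairs in enumerate(schedule, start=1):
    print(f"period {t}: {pairs}")

# Sanity check: every unordered pair appears exactly once across the n-1 periods.
all_pairs = [frozenset(p) for rnd in schedule for p in rnd]
assert len(all_pairs) == len(set(all_pairs)) == 6 * 5 // 2
```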

19.
We show that exact computation of a family of ‘max weighted score’ estimators, including Manski’s max score estimator, can be achieved efficiently by reformulating them as mixed integer programs (MIP) with disjunctive constraints. The advantage of our MIP formulation is that estimates are exact and can be computed using widely available solvers in reasonable time. In a classic work-trip mode choice application, our method delivers exact estimates that lead to a different economic interpretation of the data than previous heuristic estimates. In a small Monte Carlo study we find that our approach is computationally efficient for usual estimation problem sizes.
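A minimal sketch of the kind of MIP reformulation described, assuming a simple big-M / indicator-variable version of the maximum score objective with the first coefficient normalized to one and the others bounded; it uses the open-source PuLP modeller with its bundled CBC solver, which are not the solvers or the exact disjunctive formulation referenced in the paper, and all data are simulated placeholders.

```python
import numpy as np
import pulp

def max_score_mip(y, X, big_m=100.0, eps=1e-4):
    """Maximize the number of correctly signed observations: z_i = 1 only if
    sign(x_i'b) matches 2*y_i - 1. Normalization: b[0] = 1, |b[j]| <= 1."""
    n, k = X.shape
    prob = pulp.LpProblem("max_score", pulp.LpMaximize)
    z = [pulp.LpVariable(f"z{i}", cat="Binary") for i in range(n)]
    b = [pulp.LpVariable(f"b{j}", lowBound=-1, upBound=1) for j in range(k)]
    prob += pulp.lpSum(z)                      # score = number of matched signs
    prob += b[0] == 1                          # scale normalization
    for i in range(n):
        index = pulp.lpSum(float(X[i, j]) * b[j] for j in range(k))
        if y[i] == 1:
            prob += index >= -big_m * (1 - z[i])          # need x'b >= 0 when z_i = 1
        else:
            prob += index <= -eps + big_m * (1 - z[i])    # need x'b < 0 when z_i = 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return np.array([pulp.value(v) for v in b]), int(sum(pulp.value(v) for v in z))

# Illustrative binary-choice data.
rng = np.random.default_rng(6)
X = np.column_stack([rng.normal(size=100), rng.normal(size=100), np.ones(100)])
y = (X @ np.array([1.0, -0.5, 0.2]) + rng.logistic(size=100) > 0).astype(int)
beta_hat, score = max_score_mip(y, X)
print(beta_hat, score)
```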

20.
Recent approaches to testing for a unit root when uncertainty exists over the presence and timing of a trend break employ break detection methods, so that a with-break unit root test is used only if a break is detected by some auxiliary statistic. While these methods achieve near asymptotic efficiency in both fixed trend break and no trend break environments, in finite samples pronounced “valleys” in the power functions of the tests (when mapped as functions of the break magnitude) are observed, with power initially high for very small breaks, then decreasing as the break magnitude increases, before increasing again. In response to this problem, we propose two practical solutions, based either on the use of a with-break unit root test but with adaptive critical values, or on a union of rejections principle taken across with-break and without-break unit root tests. These new procedures are shown to offer improved reliability in terms of finite sample power. We also develop local limiting distribution theory for both the extant and the newly proposed unit root statistics, treating the trend break magnitude as local-to-zero. We show that this framework allows the asymptotic analysis to closely approximate the finite sample power valley phenomenon, thereby providing useful analytical insights.
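A minimal sketch of the union-of-rejections decision rule mentioned above, assuming the practitioner has already computed a without-break unit root statistic and a with-break statistic together with their individual critical values and a scaling constant (in this literature such a constant is obtained by simulation so that the union keeps the intended asymptotic size); all numerical values below are placeholders, not the paper's.

```python
def union_of_rejections(stat_no_break, stat_with_break,
                        cv_no_break, cv_with_break, scale):
    """Left-tailed unit root tests: reject H0 (unit root) if either statistic
    crosses its scaled critical value. A `scale` slightly above 1 makes each
    critical value more negative so the union of the two tests controls size."""
    reject_no_break = stat_no_break < scale * cv_no_break
    reject_with_break = stat_with_break < scale * cv_with_break
    return reject_no_break or reject_with_break

# Placeholder numbers purely for illustration (not calibrated values).
print(union_of_rejections(stat_no_break=-2.60, stat_with_break=-4.10,
                          cv_no_break=-3.41, cv_with_break=-4.18, scale=1.04))
```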
