Similar Articles (20 results)
1.
This paper introduces a numerical method for solving concave continuous-state dynamic programming problems which is based on a pair of polyhedral approximations of concave functions. The method is globally convergent and produces computable upper and lower bounds on the value function which can in theory be made arbitrarily tight. This is true regardless of the pattern of binding constraints, the smoothness of model primitives, and the dimensionality and rectangularity of the state space. We illustrate the method's performance using an optimal firm management problem subject to credit constraints and partial investment irreversibilities.
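
To fix ideas, here is a minimal sketch (not the paper's algorithm) of how a pair of polyhedral approximations brackets a concave function: supporting tangent lines form an upper envelope, chords between nodes a lower one. The test function, node grid, and tolerances are illustrative assumptions.

```python
# Polyhedral bounds on a concave function: tangents give an upper
# envelope, chords (piecewise-linear interpolation) a lower envelope.
import numpy as np

f = np.log                          # a concave test function (illustrative)
df = lambda x: 1.0 / x              # its derivative

nodes = np.linspace(0.5, 4.0, 6)    # approximation nodes
x = np.linspace(0.5, 4.0, 201)      # evaluation points

# Upper bound: lower envelope of the supporting (tangent) lines at the nodes.
tangents = f(nodes)[:, None] + df(nodes)[:, None] * (x[None, :] - nodes[:, None])
upper = tangents.min(axis=0)

# Lower bound: piecewise-linear interpolation through the nodes.
lower = np.interp(x, nodes, f(nodes))

assert np.all(lower - 1e-12 <= f(x)) and np.all(f(x) <= upper + 1e-12)
print("max gap between bounds:", (upper - lower).max())
```

Refining the node set shrinks the gap, which is the sense in which such bounds can be made arbitrarily tight.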

2.
We consider European options on a price process that follows the log-linear stochastic volatility model. Two stochastic integrals in the option pricing formula are costly to compute. We derive a central limit theorem to approximate them. At parameter settings appropriate to foreign exchange data, our formulas improve computation speed by a factor of 1000 over brute-force Monte Carlo, making MCMC statistical methods practicable. We provide estimates of model parameters from daily data on the Swiss Franc to Euro and Japanese Yen to Euro exchange rates over the period 1999–2002.
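
The flavor of the shortcut, in a hedged sketch: replace brute-force simulation of an integrated-variance functional with a CLT-type normal approximation whose moments come from a small pilot sample. The AR(1) parameters, payoff, and sample sizes below are illustrative, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(0)
phi, sigma_eta, n = 0.98, 0.15, 250     # log-variance AR(1), horizon

def avg_variance(n_paths):
    h = np.zeros(n_paths)
    total = np.zeros(n_paths)
    for _ in range(n):
        h = phi * h + sigma_eta * rng.standard_normal(n_paths)
        total += np.exp(h)
    return total / n                     # average variance along each path

# Brute force: a price-like functional E[sqrt(Vbar)] by heavy simulation.
brute = np.sqrt(avg_variance(200_000)).mean()

# CLT-style approximation: match the mean/variance of Vbar on a small
# pilot sample, then integrate the functional against a normal density.
pilot = avg_variance(2_000)
m, s = pilot.mean(), pilot.std()
grid = m + s * np.linspace(-4, 4, 81)
w = np.exp(-0.5 * ((grid - m) / s) ** 2)
approx = np.sum(np.sqrt(np.clip(grid, 1e-8, None)) * w) / w.sum()

print(f"brute force: {brute:.5f}  normal approx: {approx:.5f}")
```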

3.
We consider two likelihood ratio tests, the so-called maximum eigenvalue and trace tests, for the null of no cointegration when fractional cointegration is allowed under the alternative; this is a first step toward generalizing Johansen's procedure to the fractional cointegration case. Standard cointegration analysis considers only the assumption that deviations from equilibrium are integrated of order zero, which is very restrictive in many cases and may imply an important loss of power in the fractional case. We consider alternative hypotheses under which equilibrium deviations can be mean-reverting with order of integration possibly greater than zero. Moreover, the degree of fractional cointegration is not assumed to be known, and the asymptotic null distribution of both tests is derived when considering an interval of possible values. We investigate the power of the proposed tests under fractional alternatives and the size accuracy provided by the asymptotic distribution in finite samples.
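
I am not aware of an off-the-shelf implementation of the fractional versions studied here; as a baseline, this sketch runs the standard (integer-order) Johansen trace and maximum-eigenvalue tests via statsmodels on an illustrative simulated bivariate cointegrated system.

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(1)
T = 500
x = np.cumsum(rng.standard_normal(T))           # common stochastic trend
y = np.column_stack([x + rng.standard_normal(T),
                     0.5 * x + rng.standard_normal(T)])

res = coint_johansen(y, det_order=0, k_ar_diff=1)
for r, (tr, me) in enumerate(zip(res.lr1, res.lr2)):
    print(f"rank <= {r}: trace = {tr:.2f} (5% cv {res.cvt[r, 1]:.2f}), "
          f"max-eig = {me:.2f} (5% cv {res.cvm[r, 1]:.2f})")
```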

4.
Maximum Likelihood (ML) estimation of probit models with correlated errors typically requires high-dimensional truncated integration. Prominent examples of such models are multinomial probit models and binomial panel probit models with serially correlated errors. In this paper we propose to use a generic procedure known as Efficient Importance Sampling (EIS) for the evaluation of likelihood functions for probit models with correlated errors. Our proposed EIS algorithm covers the standard GHK probability simulator as a special case. We perform a set of Monte Carlo experiments in order to illustrate the relative performance of both procedures for the estimation of a multinomial multiperiod probit model. Our results indicate substantial numerical efficiency gains for ML estimates based on the GHK–EIS procedure relative to those obtained by using the GHK procedure.
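
For reference, a minimal sketch of the GHK probability simulator that EIS generalizes: estimate P(a < x < b) for x ~ N(0, Σ) by sequentially sampling truncated normals along the Cholesky factor. The dimensions, covariance, and bounds are illustrative.

```python
import numpy as np
from scipy.stats import norm

def ghk(Sigma, a, b, n_draws=10_000, seed=0):
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(Sigma)
    d = len(a)
    e = np.zeros((n_draws, d))
    w = np.ones(n_draws)
    for j in range(d):
        mean_j = e[:, :j] @ L[j, :j]            # contribution of earlier draws
        lo = norm.cdf((a[j] - mean_j) / L[j, j])
        hi = norm.cdf((b[j] - mean_j) / L[j, j])
        w *= hi - lo                            # mass of the j-th slice
        u = rng.uniform(size=n_draws)
        e[:, j] = norm.ppf(lo + u * (hi - lo))  # truncated-normal draw
    return w.mean()

Sigma = np.array([[1.0, 0.5, 0.3],
                  [0.5, 1.0, 0.4],
                  [0.3, 0.4, 1.0]])
a = np.array([-np.inf, -1.0, -1.0])
b = np.array([0.0, 1.0, 2.0])
print("GHK estimate:", ghk(Sigma, a, b))
```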

5.
We present a new specification for the multinomial multiperiod probit model with autocorrelated errors. In sharp contrast with commonly used specifications, ours is invariant with respect to the choice of a baseline alternative for utility differencing. It also nests these standard models as special cases, allowing for data-based selection of the baseline alternatives for the latter. Likelihood evaluation is achieved under an Efficient Importance Sampling (EIS) version of the standard GHK algorithm. Several simulation experiments highlight identification, estimation and pretesting within the new class of multinomial multiperiod probit models.

6.
We conduct an extensive Monte Carlo experiment to examine the finite-sample properties of maximum-likelihood-based inference in the bivariate probit model with an endogenous dummy. We analyse the relative performance of alternative exogeneity tests, the impact of distributional misspecification, and the role of exclusion restrictions in achieving parameter identification in practice. The results allow us to infer important guidelines for applied econometric practice.
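
A sketch of the kind of design such an experiment builds on: simulate a bivariate probit with an endogenous dummy (with an exclusion restriction z) and maximize its log-likelihood. Parameter values and sample size are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 1_000
beta, alpha, gamma, rho = 0.5, 1.0, 0.8, 0.6

x = rng.standard_normal(n)
z = rng.standard_normal(n)                      # excluded instrument
u, v = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n).T
d = (gamma * z + v > 0).astype(float)           # endogenous dummy
y = (beta * x + alpha * d + u > 0).astype(float)

def negloglik(theta):
    b, a, g, r = theta
    r = np.tanh(r)                              # keep correlation in (-1, 1)
    q1, q2 = 2 * y - 1, 2 * d - 1
    pts = np.column_stack([q1 * (b * x + a * d), q2 * (g * z)])
    p = np.empty(n)
    for s in (-1.0, 1.0):                       # sign of q1*q2 flips rho
        m = q1 * q2 == s
        if m.any():
            p[m] = multivariate_normal([0, 0],
                                       [[1, s * r], [s * r, 1]]).cdf(pts[m])
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

fit = minimize(negloglik, x0=[0.0, 0.0, 0.0, 0.0], method="BFGS")
b, a, g, r = fit.x
print("estimates:", b, a, g, np.tanh(r))
```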

7.
I propose a new estimation method for finite sequential games that is efficient, computationally attractive, and applicable to a fairly general class of finite sequential games that is beyond the scope of existing studies. The major challenge is the computation of high-dimensional truncated integration whose domain is complicated by strategic interaction. This complication resolves when unobserved off-the-equilibrium-path strategies are controlled for. Separately evaluating the likelihood contribution of each subgame-perfect equilibrium that generates the observed outcome allows the use of the GHK simulator, a widely used importance-sampling probit simulator. Monte Carlo experiments demonstrate the performance and robustness of the proposed method.

8.
This paper analyzes the higher-order properties of the estimators based on the nested pseudo-likelihood (NPL) algorithm and the practical implementation of such estimators for parametric discrete Markov decision models. We derive the rate at which the NPL algorithm converges to the MLE and provide a theoretical explanation for the simulation results in Aguirregabiria and Mira [Aguirregabiria, V., Mira, P., 2002. Swapping the nested fixed point algorithm: A class of estimators for discrete Markov decision models. Econometrica 70, 1519–1543], in which iterating the NPL algorithm improves the accuracy of the estimator. We then propose a new NPL algorithm that can achieve quadratic convergence without fully solving the fixed point problem in every iteration and apply our estimation procedure to a finite mixture model. We also develop one-step NPL bootstrap procedures for discrete Markov decision models. The Monte Carlo simulation evidence based on a machine replacement model of Rust [Rust, J., 1987. Optimal replacement of GMC bus engines: An empirical model of Harold Zurcher. Econometrica 55, 999–1033] shows that the proposed one-step bootstrap test statistics and confidence intervals improve upon the first order asymptotics even with a relatively small number of iterations.
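
A toy sketch of the basic NPL iteration on a Rust-style machine replacement model (binary choice, logit shocks, deterministic mileage growth). All primitives below are illustrative assumptions, and this is the plain NPL loop, not the paper's quadratically convergent variant.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

K, beta_disc, euler = 10, 0.95, 0.5772156649
F0 = np.eye(K, k=1); F0[-1, -1] = 1.0           # keep: mileage climbs
F1 = np.zeros((K, K)); F1[:, 0] = 1.0           # replace: reset to state 0

def psi(theta, P):
    """Best-response replacement probabilities given beliefs P (Hotz-Miller)."""
    c, RC = theta
    P = np.clip(P, 1e-10, 1 - 1e-10)
    u0, u1 = -c * np.arange(K), -RC * np.ones(K)
    e0, e1 = euler - np.log(1 - P), euler - np.log(P)   # logit corrections
    FP = (1 - P)[:, None] * F0 + P[:, None] * F1
    V = np.linalg.solve(np.eye(K) - beta_disc * FP,
                        (1 - P) * (u0 + e0) + P * (u1 + e1))
    return expit((u1 + beta_disc * F1 @ V) - (u0 + beta_disc * F0 @ V))

theta_true = (0.3, 4.0)
P = np.full(K, 0.5)
for _ in range(500):                            # solve the model once
    P = psi(theta_true, P)

rng = np.random.default_rng(3)                  # simulate a panel of choices
T = 20_000
xs, acts, x = np.zeros(T, int), np.zeros(T, int), 0
for t in range(T):
    a = rng.uniform() < P[x]
    xs[t], acts[t] = x, a
    x = 0 if a else min(x + 1, K - 1)

def npl_step(P):
    def nll(theta):
        pr = np.clip(psi(theta, P)[xs], 1e-12, 1 - 1e-12)
        return -np.sum(np.where(acts == 1, np.log(pr), np.log(1 - pr)))
    theta_hat = minimize(nll, x0=[0.1, 1.0], method="Nelder-Mead").x
    return theta_hat, psi(theta_hat, P)

P_k = np.full(K, 0.5)                           # crude starting beliefs
for k in range(5):                              # NPL iterations
    theta_hat, P_k = npl_step(P_k)
    print(f"iter {k}: theta =", np.round(theta_hat, 3))
```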

9.
This paper provides closed-form likelihood approximations for multivariate jump-diffusion processes widely used in finance. For a fixed order of approximation, the maximum-likelihood estimator (MLE) computed from this approximate likelihood achieves the asymptotic efficiency of the true yet uncomputable MLE as the sampling interval shrinks. This method is used to uncover the realignment probability of the Chinese Yuan. Since February 2002, the market-implied realignment intensity has increased fivefold. The term structure of the forward realignment rate, which completely characterizes future realignment probabilities, is hump-shaped and peaks at mid-2004. The realignment probability responds quickly to economic news releases and government interventions.

10.
This paper studies the contribution of demand, costs, and strategic factors to the adoption of hub-and-spoke networks in the US airline industry. Our results are based on the estimation of a dynamic game of network competition using data from the Airline Origin and Destination Survey, with information on quantities, prices, and entry and exit decisions for every airline company in the routes between the 55 largest US cities. As methodological contributions of the paper, we propose and apply a method to reduce the dimension of the state space in dynamic games, and a procedure to deal with the problem of multiple equilibria when implementing counterfactual experiments. Our empirical results show that the most important factor in explaining the adoption of hub-and-spoke networks is that the sunk cost of entry in a route declines substantially with the number of cities that the airline connects from the origin and destination airports of the route. For some carriers, the entry deterrence motive is the second most important factor explaining hub-and-spoke networks.

11.
This paper studies likelihood-based estimation and inference in parametric discontinuous threshold regression models with i.i.d. data. The setup allows heteroskedasticity and threshold effects in both mean and variance. By interpreting the threshold point as a “middle” boundary of the threshold variable, we find that the Bayes estimator is asymptotically efficient among all estimators in the locally asymptotically minimax sense. In particular, the Bayes estimator of the threshold point is asymptotically strictly more efficient than the left-endpoint maximum likelihood estimator and the newly proposed middle-point maximum likelihood estimator. Algorithms are developed to calculate asymptotic distributions and risk for the estimators of the threshold point. The posterior interval is proved to be an asymptotically valid confidence interval and is attractive in both length and coverage in finite samples.
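
A sketch contrasting the two point estimators of the threshold at issue: the profile-likelihood maximizer versus the posterior mean under a flat prior. The data-generating process (a threshold in the mean only, Gaussian errors) is an illustrative special case.

```python
import numpy as np

rng = np.random.default_rng(4)
n, gamma_true = 500, 0.0
q = rng.standard_normal(n)                      # threshold variable
x = rng.standard_normal(n)
y = np.where(q <= gamma_true, x, -x) + 0.5 * rng.standard_normal(n)

grid = np.sort(q)[25:-25]                       # candidate thresholds
loglik = np.empty(len(grid))
for j, g in enumerate(grid):
    ll = 0.0
    for m in (q <= g, q > g):                   # regime-wise Gaussian fit
        b = x[m] @ y[m] / (x[m] @ x[m])
        r = y[m] - b * x[m]
        ll += -0.5 * m.sum() * np.log(r @ r / m.sum())
    loglik[j] = ll

w = np.exp(loglik - loglik.max())               # flat-prior posterior weights
bayes = np.sum(grid * w) / w.sum()              # posterior-mean (Bayes) estimator
mle = grid[np.argmax(loglik)]                   # profile-likelihood maximizer
print(f"Bayes: {bayes:.4f}  MLE: {mle:.4f}")
```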

12.
We propose an alternative method for estimating the nonlinear component in semiparametric panel data models. Our method is based on marginal integration that allows us to recover the nonlinear component from an additive regression structure that results from the first differencing transformation. We characterize the asymptotic behavior of our estimator. We also extend the methodology to treat panel data models with two-way effects. Monte Carlo simulations show that our estimator behaves well in finite samples in both random effects and fixed effects settings.
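
Marginal integration in its simplest form, as a sketch: recover one additive component from a bivariate kernel regression by averaging the fitted surface over the other regressor. Bandwidths and the data-generating process are illustrative; the paper applies the idea to the first-differenced panel structure.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 800
x1, x2 = rng.uniform(-2, 2, n), rng.uniform(-2, 2, n)
y = np.sin(x1) + 0.5 * x2**2 + 0.3 * rng.standard_normal(n)

def nw2d(t1, t2, h=0.35):
    """Nadaraya-Watson estimate of E[y | x1=t1, x2=t2], product kernel."""
    k = np.exp(-0.5 * (((x1 - t1) / h) ** 2 + ((x2 - t2) / h) ** 2))
    return k @ y / k.sum()

def m1_hat(t1):
    # Marginal integration: average the surface over the empirical x2.
    return np.mean([nw2d(t1, x2i) for x2i in x2])

ts = np.linspace(-1.5, 1.5, 7)
est = np.array([m1_hat(t) for t in ts])
est -= est.mean()                               # components identified up to a constant
print(np.round(est, 3))                         # estimated first component
print(np.round(np.sin(ts) - np.sin(ts).mean(), 3))  # truth, same normalization
```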

13.
In this paper we consider estimation of nonlinear panel data models that include multiple individual fixed effects. Estimation of these models is complicated both by the difficulty of estimating models with possibly thousands of coefficients and by the incidental parameters problem; that is, when the time dimension is short, noisy estimates of the fixed effects contaminate the estimates of the common parameters due to the nonlinearity of the problem. We propose a simple variation of existing bias-corrected estimators that exploits the additivity of the effects for numerical optimization. We demonstrate the performance of the estimators in simulations.

14.
We consider the problem of adjudicating conflicting claims in the context of a variable population. A property of rules is “lifted” if, whenever a bilaterally consistent rule satisfies it in the two-claimant case, the rule satisfies it for any number of claimants. We identify a number of properties that are lifted, such as equal treatment of equals, resource monotonicity, composition down, and composition up, and show that continuity, anonymity, and self-duality are not lifted. However, each of these three properties is lifted if the rule is resource monotonic.
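
A small numerical companion: the constrained equal awards (CEA) rule is a standard bilaterally consistent rule, so two-claimant properties such as resource monotonicity lift to larger populations. The checks below are spot illustrations on example data, not proofs.

```python
import numpy as np

def cea(claims, endowment):
    """Constrained equal awards: each claimant gets min(claim, lam), with
    lam chosen by bisection so the awards exhaust the endowment."""
    lo, hi = 0.0, float(max(claims))
    for _ in range(100):
        lam = (lo + hi) / 2
        if np.minimum(claims, lam).sum() > endowment:
            hi = lam
        else:
            lo = lam
    return np.minimum(claims, lam)

claims = np.array([10.0, 30.0, 60.0])
a1 = cea(claims, 50.0)                          # awards (10, 20, 20)
a2 = cea(claims, 60.0)                          # larger endowment
print(a1, a2)
assert np.all(a2 >= a1 - 1e-9)                  # resource monotonicity
# Equal treatment of equals: equal claims receive equal awards.
print(cea(np.array([20.0, 20.0, 50.0]), 45.0))  # (15, 15, 15)
```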

15.
We present a variety of semiparametric models that produce bounds on the average causal effect of a binary treatment on a binary outcome. The semiparametric assumptions exploit variation in observable covariates to narrow the bounds. In our main model, the outcome is determined by a generalized linear model, but the treatment may be arbitrarily endogenous. Our bounding strategy does not require the existence of an instrument, but incorporating an instrument narrows the bounds. The bounds are further improved by combining the semiparametric model with the joint threshold-crossing assumption of Shaikh and Vytlacil (2005).
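
For context, a sketch of the no-assumption (worst-case) benchmark that such semiparametric models improve on: bounds on the average treatment effect of a binary treatment on a binary outcome, treating the unobserved potential outcomes as free in [0, 1]. The data are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5_000
d = rng.integers(0, 2, n)                       # binary treatment
y = (rng.uniform(size=n) < np.where(d == 1, 0.7, 0.4)).astype(int)

p1, p0 = (d == 1).mean(), (d == 0).mean()
ey1_lo = (y * (d == 1)).mean()                  # E[Y(1)] if untreated have Y(1)=0
ey1_hi = ey1_lo + p0                            # ... if untreated have Y(1)=1
ey0_lo = (y * (d == 0)).mean()
ey0_hi = ey0_lo + p1

print(f"ATE bounds: [{ey1_lo - ey0_hi:.3f}, {ey1_hi - ey0_lo:.3f}]")
# The no-assumption interval always has width 1; added structure
# (and, when available, an instrument) is what narrows it.
```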

16.
This paper develops methods of inference for nonparametric and semiparametric parameters defined by conditional moment inequalities and/or equalities. The parameters need not be identified. Confidence sets (CSs) and tests are introduced. The correct uniform asymptotic size of these procedures is established. The false coverage probabilities and power of the CSs and tests are established for fixed alternatives and some local alternatives. Finite-sample simulation results are given for a nonparametric conditional quantile model with censoring and a nonparametric conditional treatment effect model. The recommended CS/test uses a Cramér–von Mises-type test statistic and employs a generalized moment selection critical value.

17.
This article deals with the estimation of the parameters of an α-stable distribution by indirect inference, using the skewed-t distribution as an auxiliary model. The latter distribution appears as a good candidate since it has the same number of parameters as the α-stable distribution, with each parameter playing a similar role. To improve the properties of the estimator in finite samples, we use constrained indirect inference. In a Monte Carlo study we show that this method delivers estimators with good properties in finite samples. We provide an empirical application to the distribution of jumps in the S&P 500 index returns.
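
A sketch of the indirect-inference loop: match auxiliary ML estimates on real and simulated data. For brevity the auxiliary model here is a plain symmetric Student-t rather than the article's skewed-t, and the stable law is restricted to beta = 0 so both models have three parameters; seeds are held fixed across evaluations (common random numbers) to keep the objective deterministic.

```python
import numpy as np
from scipy.stats import levy_stable, t
from scipy.optimize import minimize

data = levy_stable.rvs(1.7, 0.0, loc=0.0, scale=1.0, size=2_000,
                       random_state=np.random.default_rng(7))

aux_data = np.array(t.fit(data))                # (df, loc, scale) on the data

def distance(theta, n_sim=2_000, n_rep=5):
    alpha, loc, scale = theta
    if not (1.1 < alpha <= 2.0) or scale <= 0:
        return 1e6                              # crude parameter constraints
    sims = [np.array(t.fit(levy_stable.rvs(
                alpha, 0.0, loc=loc, scale=scale, size=n_sim,
                random_state=np.random.default_rng(100 + r))))
            for r in range(n_rep)]
    return np.sum((np.mean(sims, axis=0) - aux_data) ** 2)

fit = minimize(distance, x0=[1.5, 0.0, 1.0], method="Nelder-Mead",
               options={"xatol": 1e-2, "fatol": 1e-2, "maxiter": 60})
print("alpha, loc, scale ->", np.round(fit.x, 3))
```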

18.
We explore convenient analytic properties of distributions constructed as mixtures of scaled and shifted t-distributions. Particularly desirable for econometric applications are closed-form expressions for antiderivatives (e.g., the cumulative distribution function). We illustrate the usefulness of these distributions in two applications. In the first application, we produce density forecasts of U.S. inflation and show that these forecasts are more accurate, out-of-sample, than density forecasts obtained using normal or standard t-distributions. In the second application, we replicate the option-pricing exercise of Abadir and Rockinger [Density functionals, with an option-pricing application. Econometric Theory 19, 778–811] and obtain comparably good results, while gaining analytical tractability.
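
A sketch of the convenience being exploited: a finite mixture of scaled and shifted t-distributions inherits closed-form density and distribution functions component by component. The weights and component parameters below are illustrative.

```python
import numpy as np
from scipy.stats import t

weights = np.array([0.6, 0.4])
dfs     = np.array([5.0, 12.0])
locs    = np.array([-0.5, 1.0])
scales  = np.array([1.0, 0.7])

def pdf(x):
    return sum(w * t.pdf(x, df, loc=m, scale=s)
               for w, df, m, s in zip(weights, dfs, locs, scales))

def cdf(x):
    # The antiderivative is just the weighted sum of component CDFs.
    return sum(w * t.cdf(x, df, loc=m, scale=s)
               for w, df, m, s in zip(weights, dfs, locs, scales))

x = np.linspace(-4, 4, 5)
print(np.round(pdf(x), 4))
print(np.round(cdf(x), 4))
```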

19.
This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates from a high dimensional set of psychological measurements.

20.
We show that the regression approach to estimating the standard error of the Gini index can produce incorrect results, as it does not account for the correlations introduced in the error terms once the data are ordered. To assess the effect of ignoring this correlation, we examine two distributions and show that the regression method overestimates the standard error of the Gini index. We recommend that the more mathematically complex or computationally intensive methods be used instead.
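
A sketch of one computationally intensive alternative of the kind recommended here: a jackknife standard error for the Gini index. The lognormal sample is illustrative.

```python
import numpy as np

def gini(x):
    # Gini index via the sorted-data formula: sum((2i - n - 1) x_(i)) / (n sum x).
    x = np.sort(x)
    n = len(x)
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

rng = np.random.default_rng(8)
x = rng.lognormal(mean=0.0, sigma=1.0, size=500)

g = gini(x)
loo = np.array([gini(np.delete(x, i)) for i in range(len(x))])  # leave-one-out
se_jack = np.sqrt((len(x) - 1) * np.mean((loo - loo.mean()) ** 2))
print(f"Gini = {g:.4f}, jackknife SE = {se_jack:.4f}")
```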
