Similar Literature
20 similar articles found (search time: 171 ms)
1.
A recent strand of empirical work uses (S, s) models with time-varying stochastic bands to describe infrequent adjustments of prices and other variables. The present paper examines some properties of this model, which encompasses most micro-founded adjustment rules rationalizing infrequent changes. We illustrate that this model is flexible enough to fit data characterized by infrequent adjustment and variable adjustment size. We show that, to the extent that there is variability in the size of adjustments (e.g. if both small and large price changes are observed), (i) a large band parameter is needed to fit the data and (ii) the average band of inaction underlying the model may differ strikingly from the typical observed size of adjustment. The paper thus provides a rationalization for a recurrent empirical result: very large estimated values for the parameters measuring the band of inaction.
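The following is a minimal simulation sketch of such an adjustment rule: a desired change accumulates through small shocks and is carried out only when it exceeds an inaction band that is redrawn stochastically every period. The drift and band parameters are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000
drift_sd = 0.02                    # per-period shock to the desired adjustment (assumed)
band_mean, band_sd = 0.15, 0.10    # time-varying inaction band (assumed values)

deviation = 0.0
adjustments = []

for t in range(T):
    deviation += rng.normal(0.0, drift_sd)       # desired change accumulates
    band = abs(rng.normal(band_mean, band_sd))   # stochastic band redrawn each period
    if abs(deviation) > band:                    # adjust only when outside the band
        adjustments.append(deviation)            # observed size of adjustment
        deviation = 0.0                          # reset after adjusting

adjustments = np.array(adjustments)
print(f"adjustment frequency: {len(adjustments) / T:.3f}")
print(f"mean |adjustment|:    {np.abs(adjustments).mean():.3f}")
print(f"band mean parameter:  {band_mean:.3f}")
```

Because adjustments tend to occur in periods when the realized band happens to be small, the typical observed adjustment can be far smaller than the band parameter itself, which is the pattern the abstract uses to rationalize large estimated bands of inaction.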

2.
This Briefing Paper is the first of a series of three. The topic discussed is the process of making 'constant adjustments' in forecasts. This process involves modifying the results generated by the econometric model. For the first time we are publishing tables of the constant adjustments used in the current forecast. We explain in general why such adjustments are made and also explain the actual adjustments we have made for this forecast.
The second article of the series, to be published in our February 1983 edition, will describe the potential sources of error in forecasts. In particular it will describe the inevitable stochastic or random element involved in statistical attempts to quantify economic behaviour. As a completely new departure the article will report estimates of future errors based on stochastic simulations of the LBS model and will provide statistical error bands for the main elements of the forecast.
The final article, to be published in our June 1983 edition, will contrast the measures of forecast error that we obtain from the estimation process and our stochastic simulations with the errors that we have actually made, as revealed by an examination of our forecasting 'track record'. It is hoped to draw, from this comparison, some general conclusions about the scope and limits of econometric forecasting procedures.

3.
Although the literature on purchasing power parity (PPP) is rich in controversy, the relative contribution of prices and nominal exchange rates to the real exchange rate movements that restore PPP disequilibria has rarely been put under close scrutiny. As a first step, this paper applies a cointegrated VAR framework to test for stationary real exchange rates and linear adjustments in prices and nominal exchange rates. As a second step, ESTR error correction models are fitted to test whether nonlinear error-correction behaviour characterizes the data. The results clearly indicate that the nominal exchange rate is responsible for the nonlinear mean-reverting behaviour in real exchange rates and also mainly drives overall adjustment. Applying dynamic stochastic simulations based on the estimated models, this study also confirms recent results that the half-life times of real exchange rate shocks are significantly smaller than the consensus benchmark of 3–5 years.
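As a hedged illustration of the second step (the notation, lag structure and transition variable below are assumed for exposition, not taken from the paper), an exponential smooth transition (ESTR) error-correction equation for the nominal exchange rate could be written as:

```latex
\Delta s_t = \mu + \sum_{i=1}^{p}\gamma_i\,\Delta s_{t-i}
           + \phi\, z_{t-1}\Bigl[1-\exp\bigl(-\gamma\,(z_{t-d}-c)^2\bigr)\Bigr] + \varepsilon_t ,
\qquad \gamma > 0,
```

where z_t denotes the deviation from the PPP (cointegrating) relation. The correction term is negligible for small deviations and approaches the linear adjustment speed \phi for large ones, which is the nonlinear mean reversion the abstract refers to.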

4.
Treating infrastructure inputs as quasi-fixed in the short run, a multi-equation econometric model of production–infrastructure (social overhead capital) interlinkages and adjustments is developed based on a flexible functional form. Adjustment dynamics are endogenized and costs of adjustment are explicitly incorporated. The model is estimated with regional and national data from India; results include optimal paths and speeds of adjustment for infrastructure inputs; market inputs' own- and cross-price elasticities and demand elasticities with respect to the level of output, infrastructure stocks and associated user costs; and production cost elasticities with respect to output and infrastructure stocks.

5.
We introduce a general class of periodic unobserved component (UC) time series models with stochastic trend and seasonal components and with a novel periodic stochastic cycle component. The general state space formulation of the periodic model allows for exact maximum likelihood estimation, signal extraction and forecasting. The consequences for model-based seasonal adjustment are discussed. The new periodic model is applied to postwar monthly US unemployment series from which we identify a significant periodic stochastic cycle. A detailed periodic analysis is presented including a comparison between the performances of periodic and non-periodic UC models.

6.
宋炯  杨宏进  张发龙  邢忠义 《价值工程》2012,31(13):136-137
In urban multi-intersection environments, where complex and rapidly changing traffic conditions interact, commonly used traffic signal control methods often prove inefficient and of limited practical use. A traffic signal control model based on Q-learning is proposed to handle irregular and rapidly changing traffic flows. The autonomous learning and adaptive capability inherent in Q-learning are used to find optimal control strategies for different traffic conditions. The main advantage of this model is that it requires no fixed mathematical control model, which makes it well suited to this environment. Experimental results obtained in a simulation environment show that the algorithm is feasible and effective.
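The sketch below shows a minimal tabular Q-learning controller for a single intersection. The state discretisation (queue lengths bucketed in groups of five vehicles), the two-action phase choice, the reward, and the learning parameters are illustrative assumptions, not the authors' specification.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration rate
ACTIONS = [0, 1]                        # 0 = keep current green phase, 1 = switch phase

Q = defaultdict(lambda: [0.0, 0.0])     # Q[state] -> estimated value of each action

def observe_state(queue_ns, queue_ew, phase):
    """Discretise the waiting-vehicle counts into a small state space."""
    return (min(queue_ns // 5, 3), min(queue_ew // 5, 3), phase)

def choose_action(state):
    """Epsilon-greedy action selection."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    values = Q[state]
    return max(ACTIONS, key=lambda a: values[a])

def update(state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])

# In a simulation loop, the reward would typically be the negative total queue
# length (or delay) after applying the action, so shorter queues are preferred:
#   reward = -(queue_ns + queue_ew)
```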

7.
This article investigates the impact of institutions on bank efficiency and technology, using a stochastic frontier analysis of a data set of 7,959 banks across 136 countries over 10 years. The results confirm the importance of well-developed institutions for the efficient operation of commercial banks. Furthermore, the insights reveal the impact of institutional reforms in improving bank efficiency. The results are robust to adjustments in country-specific effects, achieved by including country dummies, as well as across different risk profiles. Moreover, they provide empirical evidence in support of the public view of the banking sector.

8.
This paper examines the reliability of option fair value estimates in the presence of transaction costs. The Black Scholes Merton (BSM) framework assumes zero transaction costs and thus might not provide a reasonable approximation in this context. We investigate the model adjustments companies make to their BSM models to deal with these transaction costs. We specifically examine Employee Stock Option (ESO) plans listed on the French stock exchange, as detailed disclosure on modeling is available for these ESOs. Our analysis questions the reliability of these model adjustments, especially their bias and the extent to which they provide a faithful representation of option fair values. Holding parameter values constant, we find that the model adjustments lead to a median understatement of 52% compared to the BSM model price, higher than the discount we observe for the opportunistic determination of model parameters (below 20%). The paper contributes to the fair value literature by highlighting model risk in the fair valuation of options. This model risk stems from assumptions made about the size of transaction costs and complements the notion of parameter risk analyzed in previous literature. As a result, the model itself might be a possible channel for fair value management.
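For reference, the unadjusted benchmark mentioned above is the standard Black-Scholes-Merton formula; a minimal sketch with a continuous dividend yield and illustrative inputs (not taken from the paper) is:

```python
from math import log, sqrt, exp
from statistics import NormalDist

def bsm_call(S, K, T, r, sigma, q=0.0):
    """Black-Scholes-Merton European call price with continuous dividend yield q."""
    N = NormalDist().cdf
    d1 = (log(S / K) + (r - q + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * exp(-q * T) * N(d1) - K * exp(-r * T) * N(d2)

# Illustrative inputs (hypothetical at-the-money 5-year option, not from the paper).
print(bsm_call(S=100, K=100, T=5.0, r=0.02, sigma=0.30))
```

Company-specific adjustments for transaction costs would then discount this reference price, which is the comparison the abstract's 52% median understatement refers to.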

9.
Multivariate frailty approaches are most commonly used to define distributions of random vectors, which represent lifetimes of individuals or components, and to compare them stochastically in terms of various multivariate orders. In this paper, we study a multivariate shared reversed frailty model and a general multivariate reversed frailty mixture model, and derive sufficient conditions for some of the stochastic orderings to hold among the random vectors. We also consider a particular case of a general multivariate mixture model in which the baseline distribution function is represented in terms of a copula and study stochastic comparisons (stochastic and lower orthant order) between the two random vectors.
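For concreteness, one common formulation of a shared reversed frailty model (stated here under standard assumptions, not necessarily the exact setup of the paper) lets a common frailty \Theta act as a power on the baseline distribution functions, so that, conditionally on \Theta = \theta, the components are independent with distribution functions F_i(x_i)^{\theta}. Unconditionally,

```latex
F(x_1,\dots,x_n) \;=\; \mathbb{E}\Bigl[\prod_{i=1}^{n} F_i(x_i)^{\Theta}\Bigr]
                 \;=\; L_{\Theta}\Bigl(-\sum_{i=1}^{n}\ln F_i(x_i)\Bigr),
```

where L_\Theta(s) = \mathbb{E}[e^{-s\Theta}] is the Laplace transform of the frailty distribution; stochastic comparisons between such vectors can then be driven by orderings of the frailties.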

10.
A Stochastic Frontier Production Function with Flexible Risk Properties
This paper considers a stochastic frontier production function which has an additive, heteroscedastic error structure. The model allows for negative or positive marginal production risks of inputs, as originally proposed by Just and Pope (1978). The technical efficiencies of individual firms in the sample are a function of the levels of the input variables in the stochastic frontier, in addition to the technical inefficiency effects. These are two features of the model which are not exhibited by the commonly used stochastic frontiers with multiplicative error structures. An empirical application is presented using cross-sectional data on Ethiopian peasant farmers. The null hypothesis of no technical inefficiencies of production among these farmers is accepted. Further, the flexible risk models do not fit the data on peasant farmers as well as the traditional stochastic frontier model with multiplicative error structure.
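To make the additive, heteroscedastic structure concrete, here is a small simulation sketch of a Just-Pope style data-generating process with a one-sided inefficiency term. The functional forms and parameter values are invented for illustration and are not the estimates reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Two inputs (e.g., land and labour), log-normally distributed for illustration.
x1 = rng.lognormal(mean=0.5, sigma=0.4, size=n)
x2 = rng.lognormal(mean=0.2, sigma=0.4, size=n)

# Mean (frontier) part, entering output additively rather than multiplicatively.
f = np.exp(0.3 + 0.5 * np.log(x1) + 0.3 * np.log(x2))

# Risk part: input-dependent standard deviation. With these signs x1 is
# risk-increasing and x2 risk-reducing (Just-Pope marginal production risks).
g = np.exp(-0.5 + 0.4 * np.log(x1) - 0.3 * np.log(x2))

v = rng.normal(0.0, 1.0, size=n)           # symmetric production risk
u = np.abs(rng.normal(0.0, 0.2, size=n))   # one-sided technical inefficiency

y = f + g * v - u                          # additive, heteroscedastic frontier
print(y[:5])
```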

11.

This paper explains how to calibrate a stochastic collocation polynomial against market option prices directly. The method is first applied to the interpolation of short-maturity equity option prices in a fully arbitrage-free manner and then to the joint calibration of the constant maturity swap convexity adjustments with the interest rate swaptions smile. To conclude, we explore some limitations of the stochastic collocation technique.


12.
This paper studies symmetry among countably infinitely many agents who randomly enter into a stochastic process, one for each period. Upon entry, they observe only the current period signal and try to draw inference about the underlying state governing the stochastic process. We show that there exist random entry models under which agents are ex post symmetric. That is, all agents have identical posterior belief about the underlying states, although they are not ex ante symmetric. The form of the posterior belief is uniquely pinned down by ex post symmetry and a stationarity condition. Our results provide a common prior foundation for the model studied in Liu and Skrzypacz (2014).

13.
In this paper I evaluate a number of training, recruitment and employment programmes. The evaluation method is a combination of a quasi-experimental design (a simple pre-treatment-post-treatment design) and a stochastic process model to describe the response variables. I conclude that the programmes have no effect on older workers. Female and minority workers benefit most from the programmes. The training programmes are less effective than the recruitment programmes, which are in turn less effective than the employment programmes.

14.
It is a challenge to incorporate randomness into the financial projections that are at the core of new venture assessment. We present a model based on Schwartz and Moon (2001) and apply it to real firm data. We find that our 10-year projections conform to the actual realized values. The model allows us to address crucial questions regarding the venture's survival, its extreme potential outcomes, and its sensitivity to its parameters. It facilitates identifying risk drivers and assessing potential remedies. To our knowledge, we are the first to propose such a comprehensive stochastic model for the simulation and analysis of risky ventures.
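A minimal Monte Carlo sketch in the spirit of a simplified Schwartz-Moon setup is shown below: revenue grows at a stochastic rate that mean-reverts toward a long-run growth rate. The parameter values, the quarterly time step and the omission of features such as cost dynamics, cash balances and default are all simplifying assumptions made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

n_paths, n_years, dt = 10_000, 10, 0.25
steps = int(n_years / dt)

R0, mu0 = 50.0, 0.40        # initial revenue and initial expected growth rate (assumed)
mu_bar, kappa = 0.03, 0.35  # long-run growth rate and speed of mean reversion (assumed)
sigma_R, sigma_mu = 0.30, 0.15

R = np.full(n_paths, R0)
mu = np.full(n_paths, mu0)

for _ in range(steps):
    dW_R = rng.normal(0.0, np.sqrt(dt), n_paths)
    dW_mu = rng.normal(0.0, np.sqrt(dt), n_paths)
    R *= np.exp((mu - 0.5 * sigma_R**2) * dt + sigma_R * dW_R)   # revenue dynamics
    mu += kappa * (mu_bar - mu) * dt + sigma_mu * dW_mu          # growth mean-reverts

print("median 10-year revenue:", np.median(R))
print("5%-95% range:          ", np.percentile(R, [5, 95]))
```

Simulating the full cross-section of paths is what lets this kind of model speak to survival probabilities, extreme outcomes and parameter sensitivities of the venture.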

15.
‘Fat big data’ characterise data sets that contain many more variables than observations. We discuss the use of both principal components analysis and equilibrium correction models to identify cointegrating relations that handle stochastic trends in non-stationary fat data. However, most time series are wide-sense non-stationary—induced by the joint occurrence of stochastic trends and distributional shifts—so we also handle the latter by saturation estimation. Seeking substantive relationships when there are vast numbers of potentially spurious connections cannot be achieved by merely choosing the best-fitting equation or trying hundreds of empirical fits and selecting a preferred one, perhaps contradicted by others that go unreported. Conversely, fat big data are useful if they help ensure that the data generation process is nested in the postulated model, and increase the power of specification and mis-specification tests without raising the chances of adventitious significance. We model the monthly UK unemployment rate, using both macroeconomic and Google Trends data, searching across 3000 explanatory variables, yet identify a parsimonious, statistically valid, and theoretically interpretable specification.

16.
Journal of Econometrics, 2005, 126(2): 305-334
The paper analyzes a number of competing approaches to modeling efficiency in panel studies. The specifications considered include the fixed effects stochastic frontier, the random effects stochastic frontier, the Hausman–Taylor random effects stochastic frontier, and the random and fixed effects stochastic frontier with an AR(1) error. I have summarized the foundations and properties of estimators that have appeared elsewhere and have described the model assumptions under which each of the estimators has been developed. I discuss parametric and nonparametric treatments of time-varying efficiency, including the Battese–Coelli estimator and linear programming approaches to efficiency measurement. Monte Carlo simulation is used to compare the various estimators and to assess their relative performance under a variety of misspecified settings. A brief illustration of the estimators is conducted using U.S. banking data.

17.
This paper provides an empirical estimation of a stochastic frontier Cobb-Douglas production function using micro data from a cross-section of Brazilian manufacturing firms. Following a procedure developed by Aigner, Lovell and Schmidt that incorporates both stochastic and efficiency disturbance terms in the estimating model, maximum likelihood techniques are used for the estimation of the stochastic frontier. A measure of mean technical efficiency is also developed and employed with the Brazilian data. Unlike in previous empirical exercises carried out with aggregated data, the efficiency disturbance in the Brazilian micro data estimates is not swamped by the stochastic disturbance.
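As a rough sketch of the kind of estimator referred to above, the normal/half-normal composed-error log-likelihood (the form usually associated with Aigner, Lovell and Schmidt, 1977) can be coded and maximized directly. The simulated data, starting values and optimizer choice below are illustrative assumptions, not the paper's dataset or implementation.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def neg_loglik(params, y, X):
    """Negative log-likelihood of the normal/half-normal stochastic frontier
    y = X b + v - u,  v ~ N(0, s_v^2),  u ~ |N(0, s_u^2)|."""
    k = X.shape[1]
    b = params[:k]
    s_v, s_u = np.exp(params[k]), np.exp(params[k + 1])   # log-parameterised std devs
    eps = y - X @ b
    sigma = np.sqrt(s_v**2 + s_u**2)
    lam = s_u / s_v
    ll = (np.log(2.0 / sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

# Simulated example: log output regressed on log inputs (Cobb-Douglas in logs).
rng = np.random.default_rng(3)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 0.6, 0.3]) + rng.normal(0, 0.2, n) - np.abs(rng.normal(0, 0.3, n))

start = np.zeros(X.shape[1] + 2)
res = minimize(neg_loglik, start, args=(y, X), method="BFGS")
print(res.x[:X.shape[1]])   # estimated frontier coefficients
```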

18.
The mathematical programming-based technique data envelopment analysis (DEA) has often treated data as being deterministic. In response to the criticism that in most applications there is error and random noise in the data, a number of mathematically elegant solutions to incorporating stochastic variations in data have been proposed. In this paper, we propose a chance-constrained formulation of DEA that allows random variations in the data. We study properties of the ensuing efficiency measure using a small sample in which multiple inputs and a single output are correlated, and are the result of a stochastic process. We replicate the analysis using Monte Carlo simulations and conclude that using simulations provides a more flexible and computationally less cumbersome approach to studying the effects of noise in the data. We suggest that, in keeping with the tradition of DEA, the simulation approach allows users to explicitly consider different data generating processes and allows for greater flexibility in implementing DEA under stochastic variations in data.
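For context, a basic input-oriented CCR DEA score is a small linear program, and the effect of noise can be explored by re-solving it on perturbed data, broadly in the spirit of the Monte Carlo exercise described above. The data, noise level and single-output setup below are illustrative assumptions, not the paper's design.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0.
    X: (m, n) inputs, Y: (s, n) outputs for n units.
    min theta  s.t.  X @ lam <= theta * x_j0,  Y @ lam >= y_j0,  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.concatenate(([1.0], np.zeros(n)))        # decision variables: [theta, lam_1..lam_n]
    A_ub = np.vstack([
        np.hstack([-X[:, [j0]], X]),                 # X lam - theta x_j0 <= 0
        np.hstack([np.zeros((s, 1)), -Y]),           # -Y lam <= -y_j0
    ])
    b_ub = np.concatenate([np.zeros(m), -Y[:, j0]])
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun

# Monte Carlo: perturb inputs with multiplicative noise and look at the spread
# of the resulting efficiency scores (noise level chosen purely for illustration).
rng = np.random.default_rng(4)
X = rng.uniform(1, 10, size=(2, 15))    # 2 inputs, 15 units
Y = rng.uniform(1, 10, size=(1, 15))    # 1 output
scores = []
for _ in range(200):
    Xn = X * rng.lognormal(0.0, 0.05, size=X.shape)
    scores.append(ccr_input_efficiency(Xn, Y, j0=0))
print(np.mean(scores), np.std(scores))
```

Repeating the solve over many noise draws gives a distribution of efficiency scores for each unit rather than a single deterministic number, which is the basic idea behind studying DEA under stochastic variations in the data.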

19.
We propose a novel mixed-frequency dynamic factor model with time-varying parameters and stochastic volatility for macroeconomic nowcasting and develop a fast estimation algorithm. This enables us to generate forecast densities based on a large space of factor models. We apply our framework to nowcast US GDP growth in real time. Our results reveal that stochastic volatility seems to improve the accuracy of point forecasts the most, compared to the constant-parameter factor model. These gains are most prominent during unstable periods such as the Covid-19 pandemic. Finally, we highlight indicators driving the US GDP growth forecasts and associated downside risks in real time.

20.
This paper introduces and studies the econometric properties of a general new class of models, which I refer to as jump-driven stochastic volatility models, in which the volatility is a moving average of past jumps. I focus attention on two particular semiparametric classes of jump-driven stochastic volatility models. In the first, the price has a continuous component with time-varying volatility and time-homogeneous jumps. The second jump-driven stochastic volatility model analyzed here has only jumps in the price, which have time-varying size. In the empirical application I model the memory of the stochastic variance with a CARMA(2,1) kernel and set the jumps in the variance to be proportional to the squared price jumps. The estimation, which is based on matching moments of certain realized power variation statistics calculated from high-frequency foreign exchange data, shows that the jump-driven stochastic volatility model containing a continuous component in the price performs best. It outperforms not only a standard two-factor affine jump-diffusion model but also the pure-jump jump-driven stochastic volatility model for the particular jump specification.

