Similar literature
20 similar records found.
1.
This paper is concerned with developing a semiparametric panel model to explain the trend in UK temperatures and other weather outcomes over the last century. We work with the monthly averaged maximum and minimum temperatures observed at twenty-six Meteorological Office stations. The data form an unbalanced panel. We allow the trend to evolve in a nonparametric way so that we obtain a fuller picture of the evolution of the common temperature over the medium timescale. Profile likelihood estimators (PLEs) are proposed and their statistical properties are studied. The proposed PLE has improved asymptotic properties compared with sequential two-step estimators. Finally, forecasting based on the proposed model is studied.
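As a hedged illustration of the nonparametric trend component described above (not the paper's profile-likelihood estimator), the following sketch kernel-smooths station-demeaned monthly temperatures from an unbalanced panel. The column names and the bandwidth are assumptions made only for the example.

```python
# A minimal sketch: recover a nonparametric common trend from an unbalanced panel
# by kernel-smoothing station-demeaned monthly temperatures.
import numpy as np
import pandas as pd

def common_trend(df, bandwidth=24.0):
    """df has columns 'station', 'time' (months since start), 'temp' (hypothetical names)."""
    # Remove station-specific levels so the remaining variation reflects the common trend.
    demeaned = df['temp'] - df.groupby('station')['temp'].transform('mean')
    t_obs = df['time'].to_numpy(dtype=float)
    y = demeaned.to_numpy(dtype=float)
    grid = np.arange(t_obs.min(), t_obs.max() + 1)
    trend = np.empty(len(grid))
    for i, t0 in enumerate(grid):
        w = np.exp(-0.5 * ((t_obs - t0) / bandwidth) ** 2)  # Gaussian kernel weights
        trend[i] = np.sum(w * y) / np.sum(w)                # Nadaraya-Watson estimate at t0
    return pd.Series(trend, index=grid, name='trend')
```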

2.
When location shifts occur, cointegration-based equilibrium-correction models (EqCMs) face forecasting problems. We consider alleviating such forecast failure by updating, intercept corrections, differencing, and estimating the future progress of an ‘internal’ break. Updating leads to a loss of cointegration when an EqCM suffers an equilibrium-mean shift, but helps when collinearities are changed by an ‘external’ break with the EqCM staying constant. Both mechanistic corrections help compared to retaining a pre-break estimated model, but an estimated model of the break process could outperform. We apply the approaches to EqCMs for UK M1, compared with updating a learning function as the break evolves.
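A minimal sketch of two of the mechanistic devices mentioned above, intercept correction and differencing, applied to a generic one-step forecasting function. The function `f`, the series `y`, and the correction window are illustrative assumptions, not the paper's UK M1 models.

```python
import numpy as np

def intercept_corrected_forecast(f, y, window=4):
    """Add the average of the last `window` in-sample forecast errors to the model forecast,
    which offsets a recent location shift."""
    residuals = np.array([y[t] - f(y[:t]) for t in range(len(y) - window, len(y))])
    return f(y) + residuals.mean()

def double_difference_forecast(y):
    """Model-free 'differencing' device: predict no change in the growth rate,
    i.e. y[T+1] = y[T] + (y[T] - y[T-1]); robust to location shifts but noisier."""
    return 2 * y[-1] - y[-2]
```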

3.
We derive indirect estimators of conditionally heteroskedastic factor models in which the volatilities of common and idiosyncratic factors depend on their past unobserved values by calibrating the score of a Kalman-filter approximation with inequality constraints on the auxiliary model parameters. We also propose alternative indirect estimators for large-scale models, and explain how to apply our procedures to many other dynamic latent variable models. We analyse the small sample behaviour of our indirect estimators and several likelihood-based procedures through an extensive Monte Carlo experiment with empirically realistic designs. Finally, we apply our procedures to weekly returns on the Dow 30 stocks.

4.
This paper develops a dynamic approximate factor model in which returns are time-series heteroskedastic. The heteroskedasticity has three components: a factor-related component, a common asset-specific component, and a purely asset-specific component. We develop a new multivariate GARCH model for the factor-related component. We develop a univariate stochastic volatility model linked to a cross-sectional series of individual GARCH models for the common asset-specific component and the purely asset-specific component. We apply the analysis to monthly US equity returns for the period January 1926 to December 2000. We find that all three components contribute to the heteroskedasticity of individual equity returns. Factor volatility and the common component in asset-specific volatility have long-term secular trends as well as short-term autocorrelation. Factor volatility is correlated with interest rates and the business cycle.
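For concreteness, the sketch below shows the plain GARCH(1,1) variance recursion that underlies this kind of volatility modelling; it is a single-series simplification, not the paper's multivariate factor GARCH or the linked stochastic-volatility specification.

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Fitted conditional variance path h_t = omega + alpha*r_{t-1}^2 + beta*h_{t-1}."""
    h = np.empty(len(returns))
    h[0] = returns.var()                       # initialise at the sample variance
    for t in range(1, len(returns)):
        h[t] = omega + alpha * returns[t - 1] ** 2 + beta * h[t - 1]
    return h
```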

5.
A persistent question in the development of models for macroeconomic policy analysis has been the relative role of economic theory and evidence in their construction. This paper looks at some popular strategies that involve setting up a theoretical or conceptual model (CM) which is transformed to match the data and then made operational for policy analysis. A dynamic general equilibrium model is constructed that is similar to standard CMs. After calibration to UK data it is used to examine the utility of formal econometric methods in assessing the match of the CM to the data and also to evaluate some standard model-building strategies.

6.
We take as a starting point the existence of a joint distribution implied by different dynamic stochastic general equilibrium (DSGE) models, all of which are potentially misspecified. Our objective is to compare “true” joint distributions with ones generated by given DSGEs. This is accomplished via comparison of the empirical joint distributions (or confidence intervals) of historical and simulated time series. The tool draws on recent advances in the theory of the bootstrap, Kolmogorov-type testing, and other work on the evaluation of DSGEs aimed at comparing the second-order properties of historical and simulated time series. We begin by fixing a given model as the “benchmark” model, against which all “alternative” models are to be compared. We then test whether at least one of the alternative models provides a more “accurate” approximation to the true cumulative distribution than does the benchmark model, where accuracy is measured in terms of distributional square error. Bootstrap critical values are discussed, and an illustrative example is given in which alternative versions of a standard DSGE model, whose calibrated parameters are allowed to vary slightly, are shown to perform equally well. On the other hand, there are stark differences between models when the shocks driving the models are assigned implausible variances and/or distributional assumptions.
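The accuracy measure described above can be illustrated with a short sketch that computes the mean squared distance between the empirical CDFs of historical and model-simulated series on a common grid. The bootstrap critical values used in the paper are not reproduced here, and the inputs are assumed arrays.

```python
import numpy as np

def ecdf(sample, grid):
    """Empirical CDF of `sample` evaluated on `grid`."""
    return np.array([(sample <= u).mean() for u in grid])

def distributional_square_error(historical, simulated, n_grid=200):
    """Mean squared distance between the two empirical CDFs on a shared grid."""
    grid = np.linspace(min(historical.min(), simulated.min()),
                       max(historical.max(), simulated.max()), n_grid)
    return np.mean((ecdf(historical, grid) - ecdf(simulated, grid)) ** 2)
```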

7.
This paper considers methods for estimating the slope coefficients in large panel data models that are robust to the presence of various forms of error cross-section dependence. It introduces a general framework where error cross-section dependence may arise because of unobserved common effects and/or error spill-over effects due to spatial or other forms of local dependencies. Initially, this paper focuses on a panel regression model where the idiosyncratic errors are spatially dependent and possibly serially correlated, derives the asymptotic distributions of the mean group and pooled estimators under heterogeneous and homogeneous slope coefficients, and proposes non-parametric variance matrix estimators for these estimators. The paper then considers the more general case of a panel data model with a multifactor error structure and spatial error correlations. Under this framework, the Common Correlated Effects (CCE) estimator, recently advanced by Pesaran (2006), continues to yield estimates of the slope coefficients that are consistent and asymptotically normal. Small sample properties of the estimators under various patterns of cross-section dependence, including spatial forms, are investigated by Monte Carlo experiments. Results show that the CCE approach works well in the presence of weakly and/or strongly cross-sectionally correlated errors.
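A minimal sketch of the CCE mean-group idea referred to above: each unit's regression is augmented with cross-sectional averages of the dependent variable and regressor to absorb unobserved common factors, and the unit-specific slopes are then averaged. A balanced panel with a single regressor is assumed for brevity; the paper handles much richer settings.

```python
import numpy as np

def cce_mean_group(y, x):
    """y and x are (N, T) arrays; returns the mean-group estimate of the slope on x."""
    N, T = y.shape
    ybar, xbar = y.mean(axis=0), x.mean(axis=0)               # cross-sectional averages per period
    slopes = []
    for i in range(N):
        Z = np.column_stack([np.ones(T), x[i], ybar, xbar])   # unit regressors plus CCE augmentation
        beta_i = np.linalg.lstsq(Z, y[i], rcond=None)[0]
        slopes.append(beta_i[1])                               # coefficient on the unit's own regressor
    return np.mean(slopes)
```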

8.
This paper develops a new approach to the estimation of consumer demand models with unobserved heterogeneity subject to revealed preference inequality restrictions. Particular attention is given to nonseparable heterogeneity. The inequality restrictions are used to identify bounds on counterfactual demand. A nonparametric estimator for these bounds is developed and asymptotic properties are derived. An empirical application using data from the UK Family Expenditure Survey illustrates the usefulness of the methods.

9.
A new model class for univariate asset returns is proposed which involves the use of mixtures of stable Paretian distributions, and readily lends itself to use in a multivariate context for portfolio selection. The model nests numerous ones currently in use, and is shown to outperform all its special cases. In particular, an extensive out-of-sample risk forecasting exercise for seven major FX and equity indices confirms the superiority of the general model compared to its special cases and other competitors. Estimation issues related to problems associated with mixture models are discussed, and a new, general method is proposed that successfully circumvents these. The model is straightforwardly extended to the multivariate setting by using an independent component analysis framework. The tractability of the relevant characteristic function then facilitates portfolio optimization using expected shortfall as the downside risk measure.
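As a small illustration of the downside-risk measure used in the portfolio step above, the sketch below computes expected shortfall from a vector of simulated or historical portfolio returns; the simulation from the stable-mixture/ICA model itself is not reproduced.

```python
import numpy as np

def expected_shortfall(returns, alpha=0.05):
    """Average loss beyond the alpha-quantile of the return distribution (reported as a positive number)."""
    var = np.quantile(returns, alpha)          # value-at-risk threshold at level alpha
    return -returns[returns <= var].mean()     # mean return in the tail, sign-flipped to a loss
```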

10.
A comparative study of the development of higher vocational education abroad, and lessons to be drawn
Higher vocational education has developed in different ways and with distinct characteristics in developed countries such as Germany, the United States, the United Kingdom, and Japan. Their common features are an emphasis, in talent cultivation, on developing the ability to apply and develop technology, and teaching methods that are strongly practice-oriented. Selectively introducing and drawing on the beneficial experience and models of higher vocational education abroad will be of great significance for the development of higher vocational education in China.

11.
We consider classes of multivariate distributions which can model skewness and are closed under orthogonal transformations. We review two classes of such distributions proposed in the literature and focus our attention on a particular, yet quite flexible, subclass of one of these classes. Members of this subclass are defined by affine transformations of univariate (skewed) distributions that ensure the existence of a set of coordinate axes along which there is independence and the marginals are known analytically. The choice of an appropriate m-dimensional skewed distribution is then restricted to the simpler problem of choosing m univariate skewed distributions. We introduce a Bayesian model comparison setup for selection of these univariate skewed distributions. The analysis does not rely on the existence of moments (allowing for any tail behaviour) and uses equivalent priors on the common characteristics of the different models. Finally, we apply this framework to multi-output stochastic frontiers using data from Dutch dairy farms.
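A minimal sketch of the construction described above: an m-dimensional skewed distribution generated as an affine transformation of independent univariate skewed draws. Skew-normal marginals, and the matrix A and shift b, are purely illustrative choices, not the paper's preferred specification.

```python
import numpy as np
from scipy.stats import skewnorm

def sample_affine_skew(A, b, skew_params, size=1000, seed=0):
    """Draw `size` samples of X = A Z + b, with Z_j ~ skew-normal(skew_params[j]) independent."""
    rng = np.random.default_rng(seed)
    Z = np.column_stack([skewnorm.rvs(a, size=size, random_state=rng) for a in skew_params])
    return Z @ A.T + b   # affine transformation mixes the independent skewed coordinates
```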

12.
It is well understood that the two most popular empirical models of location choice, conditional logit and Poisson, return identical coefficient estimates when the regressors are not individual-specific. We show that these two models differ starkly in terms of their implied predictions. The conditional logit model represents a zero-sum world, in which one region’s gain is the other regions’ loss. In contrast, the Poisson model implies a positive-sum economy, in which one region’s gain is no other region’s loss. We also show that all intermediate cases can be represented as a nested logit model with a single outside option. The nested logit turns out to be a linear combination of the conditional logit and Poisson models. Conditional logit and Poisson elasticities mark the polar cases and can therefore serve as boundary values in applied research.

13.
The familiar concept of cointegration enables us to determine whether or not there is a long-run relationship between two integrated time series. However, this may not capture short-run effects such as seasonality. Two series which display different seasonal effects can still be cointegrated. Seasonality may arise independently of the long-run relationship between two time series or, indeed, the long-run relationship may itself be seasonal. The market for recycled ferrous scrap displays these features: the US and UK scrap prices are cointegrated, yet the local markets exhibit different forms of seasonality. The paper addresses the problem of using both cointegrating and seasonal relationships in forecasting time series through the use of periodic transfer function models. We consider the problems of testing for cointegration between series with differing seasonal patterns and develop a periodic transfer function model for the US and UK scrap markets. Forecast comparisons with other time series models suggest that forecasting efficiency may be improved by allowing for periodicity but that such improvement is by no means guaranteed. The correct specification of the periodic component of the model is critical for forecast accuracy.
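As a hedged illustration of the periodic idea discussed above, the sketch below builds a regression in which the error-correction adjustment coefficient is allowed to differ by month, so the speed of correction toward the US-UK long-run relation is seasonal. The data layout, the alignment, and the cointegrating coefficient `theta` are assumptions; the paper's full periodic transfer function models are considerably richer.

```python
import numpy as np

def periodic_adjustment_speeds(y_uk, y_us, months, theta):
    """Regress the change in the UK price on month-specific error-correction terms.

    y_uk, y_us: price series; months: calendar month (1-12) of each observation;
    theta: assumed cointegrating coefficient linking the two prices."""
    ect = y_uk[:-1] - theta * y_us[:-1]                       # lagged equilibrium error
    dy = np.diff(y_uk)
    X = np.zeros((len(dy), 12))
    for m in range(12):
        X[:, m] = np.where(months[1:] == m + 1, ect, 0.0)     # ECT active only in month m+1
    return np.linalg.lstsq(X, dy, rcond=None)[0]              # twelve month-specific adjustment speeds
```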

14.
Linear parabolic partial differential equations (PDEs) and diffusion models are closely linked through the celebrated Feynman–Kac representation of solutions to PDEs. In asset pricing theory, this leads to the representation of derivative prices as solutions to PDEs. Very often implied derivative prices are calculated given preliminary estimates of the diffusion model for the underlying variable. We demonstrate that the implied derivative prices are consistent and derive their asymptotic distribution under general conditions. We apply this result to three leading cases of preliminary estimators: nonparametric, semiparametric and fully parametric ones. In all three cases, the asymptotic distribution of the solution is derived. We demonstrate the use of these results in obtaining confidence bands and standard errors for implied prices of bonds, options and other derivatives. Our general results are also of interest for the estimation of diffusion models using either historical data of the underlying process or option prices; these issues are also discussed.
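For reference, the Feynman–Kac representation invoked above can be stated compactly (standard form in generic notation, not the paper's specific setup): if the underlying follows the diffusion

```latex
\[
  dX_t = \mu(X_t)\,dt + \sigma(X_t)\,dW_t,
\]
and $u$ solves the parabolic PDE with terminal payoff $\psi$,
\[
  \partial_t u + \mu(x)\,\partial_x u + \tfrac{1}{2}\sigma^2(x)\,\partial_{xx} u - r(x)\,u = 0,
  \qquad u(T,x)=\psi(x),
\]
then the derivative price admits the probabilistic representation
\[
  u(t,x) \;=\; \mathbb{E}\!\left[\, e^{-\int_t^T r(X_s)\,ds}\,\psi(X_T) \,\middle|\, X_t = x \right].
\]
```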

15.
Bayesian averaging, prediction and nonnested model selection
This paper studies the asymptotic relationship between Bayesian model averaging and post-selection frequentist predictors in both nested and nonnested models. We derive conditions under which their difference is of a smaller order of magnitude than the inverse of the square root of the sample size in large samples. This result depends crucially on the relation between posterior odds and frequentist model selection criteria. Weak conditions are given under which consistent model selection is feasible, regardless of whether models are nested or nonnested and regardless of whether models are correctly specified or not, in the sense that they select the best model with the fewest parameters with probability converging to 1. Under these conditions, Bayesian posterior odds and BICs are consistent for selecting among nested models, but are not consistent for selecting among nonnested models and possibly overlapping models. These findings have an important bearing on the work of applied researchers who are frequent users of model selection tools for the empirical investigation of model predictions.
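The link between posterior odds and frequentist criteria that these results rest on can be sketched with the standard Laplace/Schwarz approximation (a textbook expansion, not the paper's precise conditions):

```latex
\[
  \log p(y \mid M_j) \;=\; \log p(y \mid \hat{\theta}_j, M_j) \;-\; \frac{k_j}{2}\,\log n \;+\; O_p(1)
  \;=\; -\tfrac{1}{2}\,\mathrm{BIC}_j + O_p(1),
\]
so the posterior odds between two models behave, to first order, like the difference in their BICs:
\[
  \log \frac{p(M_1 \mid y)}{p(M_2 \mid y)}
  \;=\; -\tfrac{1}{2}\left(\mathrm{BIC}_1 - \mathrm{BIC}_2\right) \;+\; \log\frac{p(M_1)}{p(M_2)} \;+\; O_p(1).
\]
```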

16.
This paper focuses on the provision of consistent forecasts for an aggregate economic indicator, such as a consumer price index, and its components. The procedure developed is a disaggregated approach based on single-equation models for the components, which take into account the stable features that some components share, such as a common trend and common serial correlation. Our procedure starts by classifying a large number of components based on restrictions from common features. The result of this classification is a disaggregation map, which may also be useful in applying dynamic factors, defining intermediate aggregates or formulating models with unobserved components. We use the procedure to forecast inflation in the Euro area, the UK and the US. Our forecasts are significantly more accurate than either a direct forecast of the aggregate or various other indirect forecasts.

17.
In this paper we consider the problem of estimating semiparametric panel data models with cross section dependence, where the individual-specific regressors enter the model nonparametrically whereas the common factors enter the model linearly. We consider both heterogeneous and homogeneous regression relationships when both the time and cross-section dimensions are large. We propose sieve estimators for the nonparametric regression functions by extending Pesaran’s (2006) common correlated effect (CCE) estimator to our semiparametric framework. Asymptotic normal distributions for the proposed estimators are derived and asymptotic variance estimators are provided. Monte Carlo simulations indicate that our estimators perform well in finite samples.

18.
Policy makers must base their decisions on preliminary and partially revised data of varying reliability. Realistic modeling of data revisions is required to guide decision makers in their assessment of current and future conditions. This paper provides a new framework with which to model data revisions. Recent empirical work suggests that measurement errors typically have much more complex dynamics than existing models of data revisions allow. This paper describes a state-space model that allows for richer dynamics in these measurement errors, including the noise, news and spillover effects documented in this literature. We also show how to relax the common assumption that “true” values are observed after a few revisions. The result is a unified and flexible framework that allows for more realistic data revision properties, and allows the use of standard methods for optimal real-time estimation of trends and cycles. We illustrate the application of this framework with real-time data on US real output growth.
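A minimal sketch of the state-space idea described above: a latent "true" value follows a random walk, a preliminary vintage observes it with measurement error, and a Kalman filter delivers real-time estimates. A single vintage and fixed variances q and r are assumed for brevity; the paper's framework allows far richer revision dynamics.

```python
import numpy as np

def kalman_filter_revisions(y_obs, q=0.1, r=0.5):
    """Random-walk true value x_t observed as y_t = x_t + noise; returns the filtered path of x."""
    x_filt = np.empty(len(y_obs))
    x, P = y_obs[0], 1.0                      # initial state estimate and variance
    for t, y in enumerate(y_obs):
        P = P + q                             # predict: state variance grows by the state-noise variance
        K = P / (P + r)                       # Kalman gain given measurement-error variance r
        x = x + K * (y - x)                   # update with the preliminary observation
        P = (1 - K) * P
        x_filt[t] = x
    return x_filt
```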

19.
We consider the properties of weighted linear combinations of prediction models, or linear pools, evaluated using the log predictive scoring rule. Although exactly one model receives posterior probability one in the limit, an optimal linear combination typically includes several models with positive weights. We derive several interesting results: for example, a model with positive weight in a pool may have zero weight if some other models are deleted from that pool. The results are illustrated using S&P 500 returns with six prediction models. In this example, models that are clearly inferior by the usual scoring criteria have positive weights in optimal linear pools.
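A minimal sketch of an optimal two-model linear pool evaluated with the log score: given each model's predictive density evaluated at the realised returns, choose the weight that maximises the average log predictive score. The inputs p1 and p2 are assumed arrays of predictive density values.

```python
import numpy as np

def optimal_pool_weight(p1, p2, grid_size=1001):
    """Weight w in [0, 1] maximising the average log score of the pool w*p1 + (1-w)*p2."""
    weights = np.linspace(0.0, 1.0, grid_size)
    scores = [np.mean(np.log(w * p1 + (1 - w) * p2)) for w in weights]
    return weights[int(np.argmax(scores))]
```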

20.
During the past two decades, innovations protected by patents have played a key role in business strategies. This fact has spurred studies of the determinants of patents and the impact of patents on innovation and competitive advantage. Sustaining competitive advantages is as important as creating them. Patents help sustain competitive advantages by increasing the production cost of competitors, by signaling a better quality of products and by serving as barriers to entry. If patents are rewards for innovation, more R&D should be reflected in more patent applications, but this is not the end of the story. There is empirical evidence showing that patents through time are becoming easier to get and more valuable to the firm due to increasing damage awards from infringers. These facts call into question the constant and static nature of the relationship between R&D and patents. Furthermore, innovation creates important knowledge spillovers due to its imperfect appropriability. Our paper investigates these dynamic effects using US patent data from 1979 to 2000 with alternative model specifications for patent counts. We introduce a general dynamic count panel data model with dynamic observable and unobservable spillovers, which encompasses previous models, is able to control for the endogeneity of R&D and therefore can be consistently estimated by maximum likelihood. Apart from allowing for firm-specific fixed and random effects, we introduce a common unobserved component, or secret stock of knowledge, that affects the propensity to patent of each firm differently across sectors, owing to their different absorptive capacities.
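As a simple point of reference for the count models discussed above, the sketch below fits a static Poisson benchmark of patent counts on log R&D and the lagged patent count; it is far simpler than the paper's dynamic panel model with unobserved spillovers, and the variable names are illustrative.

```python
import numpy as np
import statsmodels.api as sm

def poisson_patent_benchmark(patents, rnd):
    """Static Poisson regression of patent counts on log R&D and lagged patents for one firm."""
    y = patents[1:]
    X = sm.add_constant(np.column_stack([np.log(rnd[1:]), patents[:-1]]))
    model = sm.GLM(y, X, family=sm.families.Poisson())
    return model.fit()
```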
