Similar literature (20 results)
1.
We introduce a general class of periodic unobserved component (UC) time series models with stochastic trend and seasonal components and with a novel periodic stochastic cycle component. The general state space formulation of the periodic model allows for exact maximum likelihood estimation, signal extraction and forecasting. The consequences for model‐based seasonal adjustment are discussed. The new periodic model is applied to postwar monthly US unemployment series from which we identify a significant periodic stochastic cycle. A detailed periodic analysis is presented including a comparison between the performances of periodic and non‐periodic UC models.
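The state space formulation described here can be illustrated with a standard (non-periodic) unobserved components model; the sketch below, using statsmodels, fits a stochastic trend, a monthly seasonal and a stochastic cycle by maximum likelihood via the Kalman filter. The periodic extension proposed in the paper is not part of the library, and the file name and series are placeholders.

```python
# Sketch: a baseline (non-periodic) UC model with stochastic trend, seasonal and
# cycle components, estimated by exact maximum likelihood via the Kalman filter.
# The periodic model of the paper is not available in statsmodels; this only
# illustrates the underlying state space formulation and signal extraction.
import pandas as pd
import statsmodels.api as sm

# assumed: a monthly unemployment series in a CSV file (placeholder name)
unrate = pd.read_csv("unemployment.csv", index_col=0, parse_dates=True).iloc[:, 0]

model = sm.tsa.UnobservedComponents(
    unrate,
    level="local linear trend",   # stochastic trend
    seasonal=12,                  # stochastic monthly seasonal
    cycle=True,
    stochastic_cycle=True,
    damped_cycle=True,
)
res = model.fit()
print(res.summary())

# signal extraction: smoothed trend and cycle estimates
trend = res.level.smoothed
cycle = res.cycle.smoothed
```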

2.
This paper investigates Alfred Marshall’s hypothesis that knowledge spillovers increase where industries are localized. At the same time, we take a fresh look at the role of distance in the diffusion of knowledge spillovers. Relying on a cited-citing gravity-like equation with high-dimensional fixed effects that control for multiple sources of observed and non-observed heterogeneity, we implement a Poisson pseudo-maximum-likelihood (PPML) estimator. We find that knowledge spillovers correlate positively with industry localization and that the agglomeration of an industry can offset the adverse effect of distance. The results also corroborate the distance decay effect uncovered in earlier research. Our new approach to estimate the PPML with two high-dimensional fixed effects should prove valuable in applications to a variety of other problems in economics, such as the estimation of gravity equations widely used in modeling migration, trade and other flows among countries and regions.
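As a rough illustration of the estimation strategy, the sketch below runs a gravity-style PPML regression of citation counts on distance and a localization measure with origin and destination fixed effects. The data file and column names are hypothetical, and plain dummy variables stand in for the paper's high-dimensional fixed-effects routine, which is only practical when the number of units is modest.

```python
# Sketch: a gravity-like PPML regression with origin and destination fixed
# effects. For truly high-dimensional fixed effects one would use a dedicated
# estimator; explicit dummies (C(...)) are only workable for small problems.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# assumed columns: citations, log_dist, localization, origin, dest
df = pd.read_csv("citation_flows.csv")

ppml = smf.glm(
    "citations ~ log_dist + localization + C(origin) + C(dest)",
    data=df,
    family=sm.families.Poisson(),
).fit(cov_type="HC1")   # heteroskedasticity-robust errors, standard for PPML

print(ppml.params[["log_dist", "localization"]])
```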

3.
Multiple event data are frequently encountered in medical follow‐up, engineering and other applications when the multiple events are considered as the major outcomes. They may be repetitions of the same event (recurrent events) or may be events of different nature. Times between successive events (gap times) are often of direct interest in these applications. The stochastic‐ordering structure and within‐subject dependence of multiple events generate statistical challenges for analysing such data, including induced dependent censoring and non‐identifiability of marginal distributions. This paper provides an overview of a class of existing non‐parametric estimation methods for gap time distributions for various types of multiple event data, where sampling bias from induced dependent censoring is effectively adjusted. We discuss the statistical issues in gap time analysis, describe the estimation procedures and illustrate the methods with a comparative simulation study and a real application to an AIDS clinical trial. A comprehensive understanding of challenges and available methods for non‐parametric analysis can be useful because there is no existing standard approach to identifying an appropriate gap time method that can be used to address the research question of interest. The methods discussed in this review would allow practitioners to effectively handle a variety of real‐world multiple event data.

4.
The evaluation of economic data and the monitoring of the economy is often concerned with an assessment of the mid- and long-term dynamics of time series (trend and/or cycle). Frequently, one is interested in the most recent estimate of a target signal, a so-called real-time estimate. Unfortunately, real-time signal extraction is a difficult estimation problem that involves linear combinations of possibly infinitely many multi-step ahead forecasts of a series. Here, we address the performances of real-time designs by proposing a generic direct filter approach. We decompose the ordinary mean squared error into accuracy, timeliness and smoothness error components, and we propose a new tradeoff between these competing terms, the so-called ATS-trilemma. This formalism enables us to derive a general class of optimization criteria that allow the user to address specific research priorities, in terms of the accuracy, timeliness and smoothness properties of the corresponding concurrent filter. We illustrate the new methods through simulations, and present an application to Indian industrial production data.
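The ATS decomposition can be written out in the frequency domain; the block below is a sketch in the notation of the direct-filter literature (target transfer function Γ, concurrent filter Γ̂, spectral density h), and the exact weighting scheme in the paper may differ.

```latex
% Target filter \Gamma(\omega)=A(\omega)e^{-i\Phi(\omega)}, concurrent filter
% \hat\Gamma(\omega)=\hat A(\omega)e^{-i\hat\Phi(\omega)}, spectrum h(\omega),
% passband \mathcal{P} and stopband \mathcal{S} of the target.
\[
  |\Gamma(\omega)-\hat\Gamma(\omega)|^{2}
  =\bigl(A(\omega)-\hat A(\omega)\bigr)^{2}
  +2A(\omega)\hat A(\omega)\bigl(1-\cos(\Phi(\omega)-\hat\Phi(\omega))\bigr)
\]
\[
  \text{Accuracy}=\int_{\mathcal{P}}\bigl(A-\hat A\bigr)^{2}h\,d\omega,\qquad
  \text{Timeliness}=\int_{\mathcal{P}}2A\hat A\bigl(1-\cos(\Phi-\hat\Phi)\bigr)h\,d\omega,
\]
\[
  \text{Smoothness}=\int_{\mathcal{S}}\bigl(A-\hat A\bigr)^{2}h\,d\omega,\qquad
  \text{MSE}\approx\text{Accuracy}+\text{Timeliness}+\text{Smoothness}+\text{residual}.
\]
% A customized criterion reweights the three competing terms, e.g.
% \min_{\hat\Gamma}\ \lambda_A\,\text{Accuracy}+\lambda_T\,\text{Timeliness}+\lambda_S\,\text{Smoothness}.
```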

5.
In this paper we propose a flexible model to describe nonlinearities and long-range dependence in time series dynamics. The new model is a multiple regime smooth transition extension of the Heterogeneous Autoregressive (HAR) model, which is specifically designed to model the behavior of the volatility inherent in financial time series. The model is able to simultaneously approximate long memory behavior, as well as describe sign and size asymmetries. A sequence of tests is developed to determine the number of regimes, and an estimation and testing procedure is presented. Monte Carlo simulations evaluate the finite-sample properties of the proposed tests and estimation procedures. We apply the model to several Dow Jones Industrial Average index stocks using transaction level data from the Trades and Quotes database that covers ten years of data. We find strong support for long memory and both sign and size asymmetries. Furthermore, the new model, when combined with the linear HAR model, is viable and flexible for purposes of forecasting volatility.
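The linear HAR building block that the multiple-regime model extends can be sketched in a few lines; the realized-volatility input and file name below are placeholders, and the smooth-transition regime weights of the paper are not implemented here.

```python
# Sketch: the linear HAR regression of realized volatility on its daily,
# weekly and monthly components, estimated by OLS with Newey-West errors.
# The multiple-regime smooth-transition extension adds regime weights to
# these terms; only the linear building block is shown.
import pandas as pd
import statsmodels.api as sm

# assumed: a daily realized volatility series in a CSV file (placeholder name)
rv = pd.read_csv("realized_vol.csv", index_col=0, parse_dates=True).iloc[:, 0]

X = pd.DataFrame({
    "rv_d": rv.shift(1),                     # previous day
    "rv_w": rv.shift(1).rolling(5).mean(),   # average over the previous week
    "rv_m": rv.shift(1).rolling(22).mean(),  # average over the previous month
})
data = pd.concat([rv.rename("rv"), X], axis=1).dropna()

har = sm.OLS(data["rv"], sm.add_constant(data[["rv_d", "rv_w", "rv_m"]])).fit(
    cov_type="HAC", cov_kwds={"maxlags": 22}
)
print(har.summary())
```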

6.
Small area estimation (SAE) entails estimating characteristics of interest for domains, often geographical areas, in which there may be few or no samples available. SAE has a long history and a wide variety of methods have been suggested, from a bewildering range of philosophical standpoints. We describe design-based and model-based approaches and models that are specified at the area level and at the unit level, focusing on health applications and fully Bayesian spatial models. The use of auxiliary information is a key ingredient for successful inference when response data are sparse, and we discuss a number of approaches that allow the inclusion of covariate data. SAE for HIV prevalence, using data collected from a Demographic Health Survey in Malawi in 2015–2016, is used to illustrate a number of techniques. The potential use of SAE techniques for outcomes related to coronavirus disease 2019 is discussed.
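As a point of reference, the block below sketches the standard area-level (Fay–Herriot type) model that many SAE approaches build on; the fully Bayesian spatial variants discussed in the paper replace the exchangeable area effect with structured (e.g. ICAR/BYM-type) random effects.

```latex
% Area-level model: direct estimate \hat\theta_i for area i with known
% sampling variance \hat V_i, area covariates \mathbf{x}_i, random effect u_i.
\[
  \hat\theta_i \mid \theta_i \sim N\bigl(\theta_i,\hat V_i\bigr),\qquad
  \theta_i = \mathbf{x}_i^{\top}\boldsymbol\beta + u_i,\qquad
  u_i \sim N\bigl(0,\sigma_u^{2}\bigr).
\]
% The small area estimate shrinks the direct estimate toward the regression
% prediction; spatial Bayesian variants replace the exchangeable u_i with a
% sum of spatially structured and unstructured random effects.
```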

7.
This paper provides an econometric examination of technological knowledge spillovers among countries by focusing on the issue of error cross‐sectional dependence, particularly on the different ways—weak and strong—that this dependence may affect model specification and estimation. A preliminary analysis based on estimation of the exponent of cross‐sectional dependence provides a clear result in favor of strong cross‐sectional dependence. This result has relevant implications in terms of econometric modeling and suggests that a factor structure is preferable to a spatial error model. The common correlated effects approach is then used because it remains valid in a variety of situations that are likely to occur, such as the presence of both forms of dependence or the existence of nonstationary factors. According to the estimation results, richer countries benefit more from domestic R&D and geographic spillovers than poorer countries, while smaller countries benefit more from spillovers originating from trade. The results also suggest that when the problem of (possibly many) correlated unobserved factors is addressed the quantity of education no longer has a significant effect. Finally, a comparison of the results with those obtained from a spatial model provides interesting insights into the bias that may arise when we allow only for weak dependence, despite the presence of strong dependence in the data.
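The common correlated effects idea can be sketched by augmenting the regression with cross-sectional averages that proxy the unobserved common factors. The version below is a simplified pooled variant with a single coefficient on the averages (the full CCE estimator lets each country load on them differently, and the mean-group version also allows heterogeneous slopes); the file and variable names are hypothetical.

```python
# Sketch: a simplified pooled common-correlated-effects regression. The
# cross-sectional averages of the dependent variable and regressors proxy the
# unobserved common factors responsible for strong cross-sectional dependence.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("country_panel.csv")   # assumed: country, year, tfp, rd, spill

# cross-sectional averages by year, added back as extra regressors
means = df.groupby("year")[["tfp", "rd", "spill"]].transform("mean")
df[["tfp_bar", "rd_bar", "spill_bar"]] = means.to_numpy()

cce = smf.ols(
    "tfp ~ rd + spill + tfp_bar + rd_bar + spill_bar + C(country)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["country"]})
print(cce.params[["rd", "spill"]])
```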

8.
This paper concerns estimating parameters in a high-dimensional dynamic factor model by the method of maximum likelihood. To accommodate missing data in the analysis, we propose a new model representation for the dynamic factor model. It allows the Kalman filter and related smoothing methods to evaluate the likelihood function and to produce optimal factor estimates in a computationally efficient way when missing data is present. The implementation details of our methods for signal extraction and maximum likelihood estimation are discussed. The computational gains of the new devices are presented based on simulated data sets with varying numbers of missing entries.
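For a small-scale illustration of likelihood-based dynamic factor estimation with missing observations handled inside the Kalman filter, the sketch below uses the generic statsmodels state space machinery rather than the paper's dedicated low-dimensional representation; the data file is a placeholder.

```python
# Sketch: ML estimation of a one-factor dynamic factor model where missing
# entries are left as NaN and handled by the Kalman filter.
import pandas as pd
import statsmodels.api as sm

# assumed: a (T x N) panel with some missing values (placeholder file name)
panel = pd.read_csv("panel.csv", index_col=0, parse_dates=True)
panel = (panel - panel.mean()) / panel.std()    # standardize; NaNs are preserved

dfm = sm.tsa.DynamicFactor(panel, k_factors=1, factor_order=2)
res = dfm.fit(maxiter=1000, disp=False)
print(res.summary())

# smoothed common factor (signal extraction) despite the missing observations
factor = pd.Series(res.factors.smoothed[0], index=panel.index)
```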

9.
Coal seams exhibit characteristic well-log responses of high sonic transit time, low density, low acoustic impedance, low gamma ray and high resistivity, and therefore produce fairly distinct seismic responses. Seismic numerical modeling is used to simulate the relatively strong reflection interfaces between a coal seam and its roof and floor strata: as the thickness of the coal seam varies, the strength of the seismic reflections between the seam and the roof and floor strata changes accordingly, and as the number of layers within the same coal-bearing sequence increases, the seismic reflection strength between the seams and the surrounding strata also increases. Based on these characteristics of coal seams, the relevant seismic attribute analysis methods are screened with respect to structure, coal quality, thickness of the overlying strata, the immediate cap rock and petrophysical properties; seismic amplitude, seismic travel time, curvature, frequency, absorption attenuation and acoustic impedance inversion are selected as methods suited to coalbed methane reservoir prediction. Applying these methods to field seismic data yields good prediction results.

10.
Health Effects of Air Pollution: A Statistical Review
We critically review and compare epidemiological designs and statistical approaches to estimate associations between air pollution and health. More specifically, we aim to address the following questions:
  1. Which epidemiological designs and statistical methods are available to estimate associations between air pollution and health?
  2. What are the recent methodological advances in the estimation of the health effects of air pollution in time series studies?
  3. What are the main methodological challenges and future research opportunities relevant to regulatory policy?
In question 1, we identify strengths and limitations of time series, cohort, case‐crossover and panel sampling designs. In question 2, we focus on time series studies and we review statistical methods for: 1) combining information across multiple locations to estimate overall air pollution effects; 2) estimating the health effects of air pollution taking model uncertainties into account; 3) investigating the consequences of exposure measurement error in the estimation of the health effects of air pollution; and 4) estimating air pollution‐health exposure‐response curves. Here, we also discuss the extent to which these statistical contributions have addressed key substantive questions. In question 3, within a set of policy‐relevant questions, we identify research opportunities and point out current data limitations.
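The core single-city time series regression underlying much of this literature can be sketched as a (quasi-)Poisson model of daily death counts on a pollutant with smooth controls for time and temperature; the column names and spline degrees of freedom below are illustrative assumptions.

```python
# Sketch: a single-city quasi-Poisson time series regression of daily deaths on
# PM10, with spline controls for long-term trend/seasonality and temperature
# and a day-of-week factor. Column names and spline df are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("city_daily.csv", parse_dates=["date"])
df["t"] = (df["date"] - df["date"].min()).dt.days
df["dow"] = df["date"].dt.dayofweek

mod = smf.glm(
    "deaths ~ pm10 + bs(t, df=49) + bs(temp, df=6) + C(dow)",  # ~7 df/year for a 7-year series
    data=df,
    family=sm.families.Poisson(),
).fit(scale="X2")   # Pearson-based scale: a simple quasi-Poisson overdispersion adjustment

# percent increase in daily mortality per 10 ug/m3 increase in PM10
print(100 * (np.exp(10 * mod.params["pm10"]) - 1))
```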

11.
Many structural break and regime-switching models have been used with macroeconomic and financial data. In this paper, we develop an extremely flexible modeling approach which can accommodate virtually any of these specifications. We build on earlier work showing the relationship between flexible functional forms and random variation in parameters. Our contribution is based around the use of priors on the time variation that is developed from considering a hypothetical reordering of the data and distance between neighboring (reordered) observations. The range of priors produced in this way can accommodate a wide variety of nonlinear time series models, including those with regime-switching and structural breaks. By allowing the amount of random variation in parameters to depend on the distance between (reordered) observations, the parameters can evolve in a wide variety of ways, allowing for everything from models exhibiting abrupt change (e.g. threshold autoregressive models or standard structural break models) to those which allow for a gradual evolution of parameters (e.g. smooth transition autoregressive models or time varying parameter models). Bayesian econometric methods for inference are developed for estimating the distance function and types of hypothetical reordering. Conditional on a hypothetical reordering and distance function, a simple reordering of the actual data allows us to estimate our models with standard state space methods by a simple adjustment to the measurement equation. We use artificial data to show the advantages of our approach, before providing two empirical illustrations involving the modeling of real GDP growth.

12.
This paper describes a method for finding optimal transformations for analyzing time series by autoregressive models. 'Optimal' implies that the agreement between the autoregressive model and the transformed data is maximal. Such transformations help 1) to increase the model fit, and 2) to analyze categorical time series. The method uses an alternating least squares algorithm that consists of two main steps: estimation and transformation. Nominal, ordinal and numerical data can be analyzed. Some alternative applications of the general idea are highlighted: intervention analysis, smoothing categorical time series, predictable components, spatial modeling and cross-sectional multivariate analysis. Limitations, modeling issues and possible extensions are briefly indicated.
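A stripped-down version of the two-step alternating least squares idea is sketched below for nominal data: an estimation step fits an AR(p) model to the currently scored series, and a transformation step re-quantifies each category from the model's fitted values. The actual algorithm imposes richer scaling restrictions (nominal, ordinal, numerical); this sketch is only meant to show the alternation.

```python
# Sketch: alternating least squares for an AR(p) model on a nominal series.
# Estimation step: AR coefficients by least squares on the scored series.
# Transformation step: each category's quantification becomes the mean fitted
# value over its occurrences. Greatly simplified relative to the paper.
import numpy as np

def als_ar(categories, p=1, n_iter=50):
    categories = np.asarray(categories)
    cats = np.unique(categories)
    quant = {c: float(i) for i, c in enumerate(cats)}   # arbitrary starting scores
    y = np.array([quant[c] for c in categories])
    phi = np.zeros(p)
    for _ in range(n_iter):
        y = (y - y.mean()) / (y.std() + 1e-12)          # identification: mean 0, variance 1
        # estimation step: AR(p) by least squares
        X = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
        target = y[p:]
        phi, *_ = np.linalg.lstsq(X, target, rcond=None)
        fitted = X @ phi
        # transformation step: new quantification per category
        labels = categories[p:]
        quant = {c: fitted[labels == c].mean() if (labels == c).any() else 0.0 for c in cats}
        y = np.array([quant[c] for c in categories])
    return phi, y

# usage on a toy series: a latent AR(1) process discretized into 3 categories
rng = np.random.default_rng(0)
latent = np.zeros(300)
for t in range(1, 300):
    latent[t] = 0.8 * latent[t - 1] + rng.standard_normal()
labels = np.array(["low", "mid", "high"])[np.digitize(latent, np.quantile(latent, [1 / 3, 2 / 3]))]
phi, transformed = als_ar(labels, p=1)
print("AR(1) coefficient on the optimally scaled series:", phi)
```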

13.
We describe and employ a Bayesian posterior simulator for fitting a high-dimensional system of ordinal or count outcome equations. The model is then applied to describe the multiple site recreation demands of individual agents, and we argue that our approach provides advantages relative to existing methods commonly applied in this area. In particular, our model flexibly adjusts to match observed frequencies in trip outcomes, permits a flexible correlation pattern among the sites visited by individuals, and the posterior simulator for fitting this model is relatively easy to implement in practice. We also describe how the posterior simulations produced from the model can be used to conduct a variety of counterfactual experiments, including predicting behavioral changes and describing welfare implications resulting from shifts in exogenous demographic and site characteristics. We illustrate our method using data from the Iowa Lakes Project by modeling the visitation patterns of individuals to a set of twenty-nine large Iowa lakes. Consistent with previous findings in the literature, we see strong evidence that own and cross-price effects on trip demand are negative and positive, respectively, that higher income increases the likelihood of visiting most sites, and that a commonly used indicator of water quality, Secchi transparency, is positively correlated with the number of trips taken. In addition, the correlation structure among the errors reveals a complex pattern in which unobserved factors affecting trip demand are generally (though not strictly) positively correlated across sites. The flexibility and richness with which we are able to characterize the demand system provides a solid platform for counterfactual analysis, where we find significant behavioral and welfare effects from changes in site availability, water quality, and travel costs.

14.
Action variety of planners: Cognitive load and requisite variety
The complexity of planning tasks has increased over the past decade. There is relatively poor understanding of what the implications of increased task complexity are for planning and scheduling operations. Previous work in the behavioral sciences has investigated the concept of cognitive load, addressing both task complexity and task workload or stress, and has concluded that decision makers tend to resort to routine action and reduce the variety in their actions with increasing complexity and workload. Alternatively, control theory suggests that a higher variety of actions is needed to deal with more complex problems. In this paper, we investigate the effects of task complexity in a chemical plant on the variety of actions deployed by the planners. The single work center resource structure and the availability of actual planning data from an MRP-application database allow us both to use field data and to study a situation which is simple enough to measure the main effect. Our results suggest that increased task complexity without time pressure does indeed lead to increased action variety deployed by the planners.

15.
Counternarcotics interdiction efforts have traditionally relied on historically determined sorting criteria or “best guess” to find and classify suspected smuggling traffic. We present a more quantitative approach which incorporates customized database applications, graphics software and statistical modeling techniques to develop forecasting and classification models. Preliminary results show that statistical methodology can improve interdiction rates and reduce forecast error. The idea of predictive modeling is thus gaining support in the counterdrug community. The problem is divided into sea, air and land forecasting, only part of which will be addressed here. The maritime problem is solved using multiple regression in lieu of multivariate time series. This model predicts illegal boat counts by behavior and geographic region. We developed support software to present the forecasts and to automate the process of performing periodic model updates. During the period, the model was in use at Coast Guard Headquarters. Because of deterrence provided by improved intervention, the vessel seizure rate declined from 1 every 36 hours to 1 every 6 months. Due in part to the success of the sea model, the maritime movement of marijuana has ceased to be a major threat. The air problem is more complex, and required us to locally design data collection and display software. Intelligence analysts are using a customized relational database application with a map overlay to perform visual pattern recognition of smuggling routes. We are solving the modeling portion of the air problem using multiple regression for regional forecasts of traffic density, and discriminant analysis to develop tactical models that classify “good guys” and “bad guys”. The air models are still under development, but we discuss some modeling considerations and preliminary results. The land problem is even more difficult, and data collection is still in progress.

16.
We analyze whether it is better to forecast air travel demand using aggregate data at (say) a national level, or to aggregate the forecasts derived for individual airports using airport-specific data. We compare the US Federal Aviation Administration’s (FAA) practice of predicting the total number of passengers using macroeconomic variables with an equivalently specified AIM (aggregating individual markets) approach. The AIM approach outperforms the aggregate forecasting approach in terms of its out-of-sample air travel demand predictions for different forecast horizons. Variants of AIM, where we restrict the coefficient estimates of some explanatory variables to be the same across individual airports, generally dominate both the aggregate and AIM approaches. The superior out-of-sample performances of these so-called quasi-AIM approaches depend on the trade-off between heterogeneity and estimation uncertainty. We argue that the quasi-AIM approaches exploit the heterogeneity across individual airports efficiently, without suffering from as much estimation uncertainty as the AIM approach.
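The contrast between the AIM and aggregate approaches can be sketched as follows: fit one demand equation per airport and sum the airport-level forecasts, versus fitting a single equation to national totals. The log-linear specification, variable names and 2015 train/test split below are illustrative assumptions, not the FAA's or the paper's exact models.

```python
# Sketch: AIM (sum of airport-level forecasts) versus a single aggregate
# equation on national totals, with a hypothetical airport panel.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("airport_panel.csv")   # assumed: airport, year, passengers, income, fare
train, test = df[df.year <= 2015], df[df.year > 2015]

# AIM: one log-linear demand equation per airport, forecasts summed up
aim_forecast = 0.0
for airport, g in train.groupby("airport"):
    fit = smf.ols("np.log(passengers) ~ np.log(income) + np.log(fare)", data=g).fit()
    new = test[test.airport == airport]
    aim_forecast += np.exp(fit.predict(new)).sum()

# aggregate benchmark: a single equation on national totals
nat_train = train.groupby("year").agg({"passengers": "sum", "income": "mean", "fare": "mean"})
nat_test = test.groupby("year").agg({"passengers": "sum", "income": "mean", "fare": "mean"})
nat_fit = smf.ols("np.log(passengers) ~ np.log(income) + np.log(fare)", data=nat_train).fit()
agg_forecast = np.exp(nat_fit.predict(nat_test)).sum()

print("AIM total:", aim_forecast, "aggregate total:", agg_forecast)
```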

17.
18.
There is a large literature evaluating job-training programmes. In this paper, we evaluate three such job-training programmes that are used in Slovakia. Individuals participating in a job-training programme during their unemployment spell are usually not a random subsample of the population. To determine the treatment effect of a job-training programme on unemployment duration, one has to correct for this selection effect. Therefore, we use a multivariate mixed proportional hazard-type model to describe the hazard of getting a job simultaneously with the hazard of entering a programme. We allow for a heterogeneous treatment effect and for the treatment effect to vary over time. Furthermore, we add the monthly unemployment rate as a time-varying explanatory variable. The estimation results show that two of the three programmes shorten the unemployment duration quite substantially.
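The kind of specification described here, often called the timing-of-events approach, can be sketched as a pair of mixed proportional hazards with correlated unobserved heterogeneity; the notation below is generic and need not match the paper's exact model.

```latex
% Hazard of leaving unemployment for work (\theta_e) and hazard of entering a
% training programme (\theta_p), with programme entry time t_p, time-varying
% covariates x(t) (including the monthly unemployment rate), baseline hazards
% \lambda(\cdot) and jointly distributed unobserved heterogeneity (v_e, v_p).
\[
  \theta_e\bigl(t \mid x(t), t_p, v_e\bigr)
    = \lambda_e(t)\,\exp\bigl(x(t)'\beta_e + \delta(t, t_p)\,\mathbf{1}\{t > t_p\} + v_e\bigr),
\]
\[
  \theta_p\bigl(t \mid x(t), v_p\bigr)
    = \lambda_p(t)\,\exp\bigl(x(t)'\beta_p + v_p\bigr).
\]
% The treatment effect \delta(t, t_p) may vary with elapsed time and across
% individuals; the joint distribution of (v_e, v_p) is what corrects for
% non-random selection into the programme.
```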

19.
Because there has been little research on some essential issues of the Variance Gamma (VG) process, we have recognized a gap in the literature regarding the performance of the various estimation methods for modeling empirical share returns. While some papers present only a few estimated parameters for a very small, selected empirical database, Finlay and Seneta (Int Stat Rev 76:167–186, 2008) compare most of the possible estimation methods using simulated data. In contrast to Finlay and Seneta (2008), we utilize a broad, daily, empirical data set consisting of the stocks of each company listed on the DOW JONES over the period from 1991 to 2011. We also apply a regime switching model in order to identify normal and turbulent times within our data set and fit the VG process to the data in the respective period. We find that the VG process parameters vary over time, and in accordance with the regime switching model, we recognize significantly increasing fitting rates which are due to the chosen periods.
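One of the simpler estimation methods compared in this strand of work is a method-of-moments fit of the VG parameters; the sketch below uses the common small-drift approximation, so it is best viewed as a way to obtain starting values for maximum likelihood rather than a final estimator.

```python
# Sketch: simplified method-of-moments estimates for the Variance Gamma
# parameters (location c, drift theta, volatility sigma, shape nu), assuming
# the drift theta is small so that higher-moment formulas simplify.
import numpy as np
from scipy import stats

def vg_moment_fit(returns):
    returns = np.asarray(returns, dtype=float)
    m = returns.mean()
    s2 = returns.var()
    skew = stats.skew(returns)
    kurt = stats.kurtosis(returns, fisher=False)   # raw kurtosis (3 for a normal)
    nu = max(kurt / 3.0 - 1.0, 1e-6)               # excess kurtosis drives nu
    sigma = np.sqrt(s2)
    theta = skew * sigma / (3.0 * nu)
    c = m - theta
    return {"c": c, "theta": theta, "sigma": sigma, "nu": nu}

# usage on daily log-returns of a single stock (assumed array `log_returns`)
# params = vg_moment_fit(log_returns)
```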

20.
We consider estimating binary response models on an unbalanced panel, where the outcome of the dependent variable may be missing due to nonrandom selection, or there is self‐selection into a treatment. In the present paper, we first consider estimation of sample selection models and treatment effects using a fully parametric approach, where the error distribution is assumed to be normal in both primary and selection equations. Arbitrary time dependence in errors is permitted. Estimation of both coefficients and partial effects, as well as tests for selection bias, are discussed. Furthermore, we consider a semiparametric estimator of binary response panel data models with sample selection that is robust to a variety of error distributions. The estimator employs a control function approach to account for endogenous selection and permits consistent estimation of scaled coefficients and relative effects.
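A rough two-step control-function sketch in the spirit of the parametric approach is given below: period-by-period probit selection equations, with an estimated correction term added to the pooled outcome probit. Column names are hypothetical, and the paper's estimator treats time dependence, partial effects and the binary-outcome correction more carefully than this simplified version.

```python
# Sketch: pooled two-step control-function correction for a binary outcome
# observed only when selected. Step 1: probit selection per period, inverse
# Mills ratio as correction term. Step 2: outcome probit on the selected
# subsample including the correction term. A rough approximation only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

df = pd.read_csv("panel.csv")  # assumed: id, year, selected, y, x1, x2, z (exclusion restriction)

# Step 1: probit selection equation, estimated separately for each period
df["imr"] = np.nan
for year, g in df.groupby("year"):
    Zg = sm.add_constant(g[["x1", "x2", "z"]])
    sel = sm.Probit(g["selected"], Zg).fit(disp=False)
    xb = sel.fittedvalues                                   # linear index
    df.loc[g.index, "imr"] = norm.pdf(xb) / norm.cdf(xb)    # inverse Mills ratio

# Step 2: outcome probit on the selected subsample with the correction term
sub = df[df["selected"] == 1]
X = sm.add_constant(sub[["x1", "x2", "imr"]])
out = sm.Probit(sub["y"], X).fit(disp=False)
print(out.summary())   # a significant coefficient on imr signals selection bias
```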
