Similar Literature
20 similar documents retrieved (search time: 15 ms)
1.
This paper investigates the benefits of internet search data in the form of Google Trends for nowcasting real U.S. GDP growth in real time through the lens of mixed-frequency Bayesian Structural Time Series (BSTS) models. We augment and enhance both the model and the methodology to make them better suited to nowcasting with a large number of potential covariates. Specifically, we allow state variances to shrink towards zero to avoid overfitting, extend the SSVS (spike-and-slab variable selection) prior to the more flexible normal-inverse-gamma prior, which stays agnostic about the underlying model size, and adapt the horseshoe prior to the BSTS framework. The application to nowcasting GDP growth, as well as a simulation study, demonstrates that the horseshoe-prior BSTS improves markedly upon the SSVS and the original BSTS model, with the largest gains in dense data-generating processes. Our application also shows that a high-dimensional set of search terms is able to improve nowcasts early in a given quarter, before other macroeconomic data become available. Search terms with high inclusion probability lend themselves to sound economic interpretation, reflecting leading signals of economic anxiety and wealth effects.
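The horseshoe prior at the heart of the abstract can be illustrated with a small Monte Carlo sketch. The hierarchy below (half-Cauchy local scales, a fixed global scale `tau`) is the standard horseshoe of Carvalho, Polson, and Scott, not the authors' exact BSTS implementation; `tau = 0.1` is an arbitrary illustrative value.

```python
import numpy as np

rng = np.random.default_rng(0)

def horseshoe_draw(p, tau=0.1, n_draws=10000):
    """Draw regression coefficients from a horseshoe prior:
    lambda_j ~ C+(0, 1) local scales, beta_j ~ N(0, tau^2 * lambda_j^2).
    The spike at zero shrinks noise variables aggressively, while the
    heavy Cauchy tails leave genuinely large signals nearly untouched."""
    lam = np.abs(rng.standard_cauchy(size=(n_draws, p)))  # half-Cauchy scales
    beta = rng.normal(0.0, tau * lam)
    return beta

draws = horseshoe_draw(p=5)
# Most prior mass sits near zero, but occasional draws are far from it.
print(np.median(np.abs(draws)), np.max(np.abs(draws)))
```

This shape (heavy shrinkage plus heavy tails) is what makes the prior attractive when only a few of many search terms carry signal.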

2.
In this paper, we propose a time-varying parameter vector autoregression (VAR) model with stochastic volatility which allows for estimation on data sampled at different frequencies. Our contribution is twofold. First, we extend the methodology developed by Cogley and Sargent (Drifts and volatilities: monetary policies and outcomes in the post-WWII U.S. Review of Economic Dynamics 2005; 8: 262–302) and Primiceri (Time varying structural vector autoregressions and monetary policy. Review of Economic Studies 2005; 72: 821–852) to a mixed-frequency setting. In particular, our approach allows for the inclusion of two different categories of variables (high-frequency and low-frequency) in the same time-varying model. Second, we use this model to study the macroeconomic effects of government spending shocks in Italy over the 1988:Q4–2013:Q3 period. Italy, like most other euro-area economies, is characterized by short quarterly time series for fiscal variables, whereas annual data are generally available for a longer sample before 1999. Our results show, first, that the proposed time-varying mixed-frequency model improves on the performance of a simple linear interpolation model in generating the true path of the missing observations. Second, our empirical analysis suggests that government spending shocks tend to have positive effects on output in Italy. The fiscal multiplier, which is maximized at the one-year horizon, follows a U-shape over the sample considered: it peaks at around 1.5 at the beginning of the sample, stabilizes between 0.8 and 0.9 from the mid 1990s to the late 2000s, and rises again to above unity during the recent crisis. Copyright © 2015 John Wiley & Sons, Ltd.
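The linear-interpolation benchmark that the proposed model is compared against can be sketched in a few lines. The annual series below is invented for illustration; the idea is simply to fill missing quarterly observations by interpolating between annual values.

```python
import numpy as np

# Annual fiscal observations (e.g. government spending), one value per year.
annual_years = np.array([1990, 1991, 1992, 1993])
annual_values = np.array([100.0, 104.0, 103.0, 108.0])

# Quarterly grid covering the same span: linearly interpolate the annual
# values to stand in for the unobserved quarterly path.
quarterly_time = annual_years[0] + np.arange(13) * 0.25  # 1990.00 .. 1993.00
quarterly_interp = np.interp(quarterly_time, annual_years, annual_values)
print(quarterly_interp)
```

The mixed-frequency VAR in the paper replaces this mechanical fill-in with model-based estimates of the missing quarters.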

3.
A new data set is employed to construct an index of the Swiss rental residential market starting as early as 1936. Given the data sample at our disposal of slightly fewer than 1000 paired data points spread across all of Switzerland, we focus on using the most efficient type of repeated-measurement index to evaluate the yearly price development of the rental property market. In the process of building the index, an alternative to the SPAR (Sale Price Appraisal Ratio) method is developed and compared against a structural time series model and the Case–Shiller approach. The newly developed ISPAR (Inverse SPAR) method yields qualitatively similar results to the regression-based methodology, yet is influenced to a lesser extent by the sample size. The structural time series model is the variant least influenced by the sample size. An interesting finding in our sample is that, despite the large time span between successive price measurements, no notable improvement is obtained using the 3SLS method of Case–Shiller instead of the traditional Bailey et al. method.
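The traditional Bailey et al. repeated-measurement regression mentioned above can be sketched directly: each observation is a pair of measurements on the same property, and the log index is recovered by least squares. The sale pairs below are invented, not the Swiss data.

```python
import numpy as np

# Each tuple: (first period, second period, log price ratio between the two).
pairs = [(0, 1, 0.05), (0, 2, 0.12), (1, 2, 0.06), (1, 3, 0.10), (2, 3, 0.04)]
T = 4  # number of periods; period 0 is the base with log index fixed at 0

# Design matrix: -1 in the first measurement's period, +1 in the second
# (the base period's column is dropped for identification).
X = np.zeros((len(pairs), T - 1))
y = np.zeros(len(pairs))
for i, (t0, t1, r) in enumerate(pairs):
    if t0 > 0:
        X[i, t0 - 1] = -1.0
    if t1 > 0:
        X[i, t1 - 1] = 1.0
    y[i] = r

log_index, *_ = np.linalg.lstsq(X, y, rcond=None)
index = np.exp(np.concatenate([[0.0], log_index]))  # index level, base = 1
print(index)
```

The 3SLS Case–Shiller refinement reweights this regression for heteroscedasticity growing with the time gap between measurements, which is the step the abstract finds unnecessary for this sample.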

4.
In this paper we study an optimal control problem with mixed constraints related to a multisector linear model with endogenous growth. The main aim is to establish a set of necessary and a set of sufficient conditions which are the basis for studying the qualitative properties of optimal trajectories. The presence of possibly degenerate mixed constraints, together with the unboundedness and non-strict convexity of the Hamiltonian, makes the problem difficult to deal with. We first develop the dynamic programming approach, proving that the value function is a bilateral viscosity solution to the associated Hamilton–Jacobi–Bellman (HJB) equation. Then, using our results, we give a set of sufficient and a set of necessary optimality conditions which involve a so-called co-state inclusion: this can be interpreted as the existence of a dual path of prices supporting the optimal path.

5.
Many popular methods of model selection involve minimizing a penalized function of the data (such as the maximized log-likelihood or the residual sum of squares) over a set of models. The penalty in the criterion function is controlled by a penalty multiplier λ which determines the properties of the procedure. In this paper, we first review model selection criteria of the simple form “Loss + Penalty” and then propose studying such model selection criteria as functions of the penalty multiplier. This approach can be interpreted as exploring the stability of model selection criteria through what we call model selection curves. It leads to new insights into model selection and new proposals on how to select models. We use the bootstrap to enhance the basic model selection curve and develop convenient numerical and graphical summaries of the results. The methodology is illustrated on two data sets and supported by a small simulation. We show that the new methodology can outperform methods such as AIC and BIC which correspond to single points on a model selection curve.
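A model selection curve of the kind described above can be traced directly: fix the candidate models, vary the penalty multiplier λ, and record which model minimizes "Loss + λ·k". The model sizes and losses below are illustrative; AIC corresponds to λ = 2 and BIC to λ = log n.

```python
import numpy as np

n = 100  # sample size
# Candidate models: (number of parameters k, loss = -2 * max log-likelihood).
models = {"M1": (1, 290.0), "M2": (3, 270.0), "M3": (6, 262.0), "M4": (10, 260.0)}

def selected(lam):
    """Minimize Loss + lam * k over the candidate models."""
    return min(models, key=lambda m: models[m][1] + lam * models[m][0])

# The model selection curve: the winning model as the multiplier varies.
lams = np.linspace(0.1, 10, 100)
curve = [selected(lam) for lam in lams]
print(selected(2.0), selected(np.log(n)))  # AIC-like vs BIC-like choice
```

Reading the whole curve, rather than the two single points AIC and BIC pick out, is what the paper's stability analysis exploits.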

6.
This paper presents a Bayesian model averaging regression framework for forecasting US inflation, in which the set of predictors included in the model is automatically selected from a large pool of potential predictors and the set of regressors is allowed to change over time. Using real-time data on the 1960–2011 period, this model is applied to forecast personal consumption expenditures and gross domestic product deflator inflation. The results of this forecasting exercise show that, although it is not able to beat a simple random-walk model in terms of point forecasts, it does produce superior density forecasts compared with a range of alternative forecasting models. Moreover, a sensitivity analysis shows that the forecasting results are relatively insensitive to prior choices and the forecasting performance is not affected by the inclusion of a very large set of potential predictors.

7.
We introduce a modified conditional logit model that takes account of uncertainty associated with mis-reporting in revealed preference experiments estimating willingness-to-pay (WTP). Like Hausman et al. [Journal of Econometrics (1998) Vol. 87, pp. 239–269], our model captures the extent and direction of uncertainty by respondents. Using a Bayesian methodology, we apply our model to a choice modelling (CM) data set examining UK consumer preferences for non-pesticide food. We compare the results of our model with the Hausman model. WTP estimates are produced for different groups of consumers, and we find that modified estimates of WTP that take account of mis-reporting are substantially revised downwards. We find that a significant proportion of respondents mis-report in favour of the non-pesticide option. Finally, with this data set, Bayes factors suggest that our model is preferred to the Hausman model.

8.
In this study, we addressed the problem of point and probabilistic forecasting by describing a blending methodology for machine learning models from the gradient boosted trees and neural networks families. These principles were successfully applied in the recent M5 Competition in both the Accuracy and Uncertainty tracks. The key points of our methodology are: (a) transforming the task into regression on sales for a single day; (b) information-rich feature engineering; (c) creating a diverse set of state-of-the-art machine learning models; and (d) carefully constructing validation sets for model tuning. We show that the diversity of the machine learning models and careful selection of validation examples are most important for the effectiveness of our approach. The forecasting data have an inherent hierarchical structure (12 levels), but none of our proposed solutions exploited the hierarchical scheme. Using the proposed methodology, we ranked within the gold medal range in the Accuracy track and within the prizes in the Uncertainty track. Inference code with pre-trained models is available on GitHub.
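The blending step, combining gradient-boosted-tree and neural-network forecasts with a weight chosen on a validation set, can be sketched as follows. The forecasts and actuals are simulated; this is a minimal illustration of the idea, not the competition code.

```python
import numpy as np

rng = np.random.default_rng(1)
actual = rng.uniform(10, 20, size=200)         # validation-set sales
gbt = actual + rng.normal(0, 1.0, size=200)    # GBT forecasts (noisier here)
nn = actual + rng.normal(0, 0.5, size=200)     # NN forecasts (sharper here)

def rmse(pred):
    return float(np.sqrt(np.mean((pred - actual) ** 2)))

# Grid-search the convex blend weight on the validation set.
weights = np.linspace(0, 1, 101)
errors = [rmse(w * gbt + (1 - w) * nn) for w in weights]
best_w = weights[int(np.argmin(errors))]
blend_rmse = min(errors)
print(best_w, blend_rmse, rmse(gbt), rmse(nn))
```

Because the grid includes the pure-GBT and pure-NN endpoints, the chosen blend can never do worse on the validation set than either model alone; the abstract's point is that model diversity makes the blend do strictly better out of sample.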

9.
Research on inclusion and exclusion at work has grown in recent years, but the two have largely been treated as separate domains. In this paper, we integrate these literatures to build greater understanding of leader inclusion and leader exclusion. Leaders play a critical role in determining group members' experiences of inclusion and exclusion through their direct treatment of employees and by serving as role models (Bandura, 1977). According to social identity theory, when a leader is rewarded by the organization, this signifies that the leader is a prototypical organizational member who exemplifies the set of norms and behaviors most consistent with the organizational ideal (Hogg & van Knippenberg, 2003). We argue that through both social learning and social identity mechanisms, leaders can encourage inclusionary and exclusionary behavior in their work groups. We first examine leader inclusion and present the types of behaviors that help create inclusive team member experiences. By exhibiting these behaviors, a leader can be a role model, an advocate, and an ally for building work group inclusion. Next, we present the negative roles of ostracizer and bystander adopted by leaders that signal support for exclusionary behavior, which can lead to exclusion among coworkers. We then describe leader remedies for social exclusion. Finally, we discuss the implications of our model and directions for future research.

10.
This paper evaluates the effects of high-frequency uncertainty shocks on a set of low-frequency macroeconomic variables representative of the US economy. Rather than estimating models at the same common low frequency, we use recently developed econometric models, which allow us to deal with data of different sampling frequencies. We find that credit and labor market variables react the most to uncertainty shocks in that they exhibit a prolonged negative response to such shocks. When looking at detailed investment subcategories, our estimates suggest that the most irreversible investment projects are the most affected by uncertainty shocks. We also find that the responses of macroeconomic variables to uncertainty shocks are relatively similar across single-frequency and mixed-frequency data models, suggesting that the temporal aggregation bias is not acute in this context.

11.
We explore a new approach to the forecasting of macroeconomic variables based on a dynamic factor state space analysis. Key economic variables are modeled jointly with principal components from a large time series panel of macroeconomic indicators using a multivariate unobserved components time series model. When the key economic variables are observed at a low frequency and the panel of macroeconomic variables is at a high frequency, we can use our approach for both nowcasting and forecasting purposes. Given a dynamic factor model as the data generation process, we provide Monte Carlo evidence of the finite-sample justification of our parsimonious and feasible approach. We also provide empirical evidence for a US macroeconomic dataset. The unbalanced panel contains quarterly and monthly variables. The forecasting accuracy is measured against a set of benchmark models. We conclude that our dynamic factor state space analysis can lead to higher levels of forecasting precision when the panel size and time series dimensions are moderate.

12.
This paper examines two pairs of hypotheses about the effect of the Mexican Peso crisis on U.S. bank stock returns. We use a three-index market model as our empirical methodology because bank stocks are influenced more strongly by interest rate risk and foreign exchange risk than non-banking stocks are. The results show that the market reacted to each event promptly, supporting semi-strong market efficiency. To determine whether these effects created a domino effect in the U.S. banking system, a set of cross-sectional regressions was run. In general, the empirical results support the investor-contagion hypothesis, which indicates that the market penalized or rewarded banks without regard to their exposure to the market for Mexican loans.
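A three-index market model of the kind used here regresses a bank's stock return on a market index, an interest-rate factor, and a foreign-exchange factor. The simulated illustration below recovers made-up factor sensitivities by OLS; it is a sketch of the model form, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 250  # daily return observations
market = rng.normal(0, 1, T)
rate = rng.normal(0, 1, T)
fx = rng.normal(0, 1, T)
# Assumed "true" sensitivities of the bank stock to the three risk factors.
bank = 0.01 + 1.2 * market - 0.5 * rate + 0.3 * fx + rng.normal(0, 0.1, T)

# OLS on the three-index model: intercept, market beta, rate beta, FX beta.
X = np.column_stack([np.ones(T), market, rate, fx])
betas, *_ = np.linalg.lstsq(X, bank, rcond=None)
print(betas)  # close to (0.01, 1.2, -0.5, 0.3)
```

In the event study, abnormal returns around each crisis event are the residuals from this regression.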

13.
Economic Systems, 2020, 44(4): 100836
In this article, we investigate the impact of institutional quality on financial inclusion with a sample of fifty-one African countries. We specify and estimate a dynamic panel data model by the system generalized method of moments (sys-GMM) over the period 2004–2018, based on different approaches to measuring financial inclusion. Our results show that institutional quality increases financial inclusion as well as the penetration, accessibility, and use of financial services in Africa. These results remain robust when accounting for financial education. Accelerating income-generating activities in Africa will require improving financial institutions.

14.
This paper presents a new methodology combining multiple criteria sorting or ranking methods with a project portfolio selection procedure. The multicriteria method permits the comparison of projects in terms of their priority, based on qualitative and quantitative criteria. A feasible set of projects, i.e. a portfolio, is then selected according to the priority defined by the multiple criteria method while satisfying a set of resource and logical constraints. The proposed portfolio selection methodology, called Priority Based Portfolio Selection (PBPS), can be applied in different contexts. We present an application in the urban planning domain, where our approach allows us to select a set of urban projects based on their priority, budgetary constraints, and urban policy requirements. Given the increasing interest of historic cities in reusing their cultural heritage, we applied and tested our methodology in this context. In particular, we show how the methodology can support the prioritization of interventions on buildings of historical value in the historic city center of Naples (Italy), taking several points of view into account.
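The priority-then-budget selection step can be sketched as a greedy pass over the ranked projects. The project names, priorities, and costs below are invented, and this simplified stand-in ignores the logical constraints between projects that the full PBPS procedure also handles.

```python
def select_portfolio(projects, budget):
    """Admit projects in priority order (1 = highest) while the
    remaining budget allows; a simplified stand-in for PBPS."""
    chosen, spent = [], 0.0
    for name, priority, cost in sorted(projects, key=lambda p: p[1]):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen, spent

projects = [
    ("facade restoration", 1, 40.0),
    ("structural retrofit", 2, 70.0),
    ("roof repair", 3, 25.0),
    ("courtyard renewal", 4, 30.0),
]
portfolio, total = select_portfolio(projects, budget=100.0)
print(portfolio, total)  # the highest-priority projects that fit the budget
```

Here the second-ranked project is skipped because it would exceed the budget, while the cheaper lower-priority projects still fit, which is exactly the kind of trade-off the portfolio stage resolves.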

15.
In this paper we model production technology in a state-contingent framework. Our model analyzes production under uncertainty without being explicit about the nature of producer risk preferences. In our model, producers' risk preferences are captured by the risk-neutral probabilities they assign to the different states of nature. Using a state-general state-contingent specification of technology, we show that rational producers who face the same stochastic technology can make significantly different production choices. Further, we develop an econometric methodology to estimate the risk-neutral probabilities and the parameters of the stochastic technology when there are two states of nature, only one of which is observed. Finally, we simulate data based on our state-general state-contingent specification of technology. Biased estimates of the technology parameters are obtained when we apply a conventional ordinary least squares estimator to the simulated data.

16.
Classification problems of functional data arise naturally in many applications. Several approaches have been considered for solving the problem of finding groups based on functional data. In this paper, we are interested in detecting groups when the functional data are spatially correlated. Our methodology allows us to find spatially homogeneous groups of sites when the observations at each sampling location consist of samples of random functions. In univariable and multivariable geostatistics, various methods of incorporating spatial information into the clustering analysis have been considered. Here, we extend these methods to the functional context to fulfil the task of clustering spatially correlated curves. In our approach, we initially use basis functions to smooth the observed data, and then we weight the dissimilarity matrix among curves by either the trace-variogram or the multivariable variogram calculated with the coefficients of the basis functions. This paper contains a simulation study as well as the analysis of a real data set of average daily temperatures measured at 35 Canadian weather stations.

17.
The paper estimates a large-scale mixed-frequency dynamic factor model for the euro area, using monthly series along with gross domestic product (GDP) and its main components, obtained from the quarterly national accounts (NA). The latter define broad measures of real economic activity (such as GDP and its decomposition by expenditure type and by branch of activity) that we are willing to include in the factor model, in order to improve its coverage of the economy and thus the representativeness of the factors. The main problem with their inclusion is not one of model consistency, but rather of data availability and timeliness, as the NA series are quarterly and are available with a large publication lag. Our model is a traditional dynamic factor model formulated at the monthly frequency in terms of the stationary representation of the variables, which however becomes nonlinear when the observational constraints are taken into account. These are of two kinds: nonlinear temporal aggregation constraints, due to the fact that the model is formulated in terms of the unobserved monthly logarithmic changes, but we observe only the sum of the monthly levels within a quarter, and nonlinear cross-sectional constraints, since GDP and its main components are linked by the NA identities, but the series are expressed in chained volumes. The paper provides an exact treatment of the observational constraints and proposes iterative algorithms for estimating the parameters of the factor model and for signal extraction, thereby producing nowcasts of monthly GDP and its main components, as well as measures of their reliability.
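The nonlinear temporal aggregation constraint described above (the model tracks monthly log changes, but only the sum of the monthly levels within a quarter is observed) can be written out explicitly. The numbers are illustrative; the point is that the observed quarterly aggregate is a nonlinear function of the unobserved log changes.

```python
import numpy as np

# Unobserved monthly log-changes over one quarter, and the level entering it.
initial_level = 100.0
log_changes = np.array([0.01, -0.005, 0.02])

# Monthly levels implied by cumulating the log-changes: the exponential
# makes the mapping from log-changes to levels nonlinear.
monthly_levels = initial_level * np.exp(np.cumsum(log_changes))

# What is actually observed: the quarterly aggregate of the monthly levels.
quarterly_obs = monthly_levels.sum()
print(monthly_levels, quarterly_obs)
```

The iterative algorithms in the paper are needed precisely because this exponential link prevents the usual linear state space treatment of temporal aggregation.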

18.
19.
This paper seeks to empirically extend the gravity model, which has been widely used to analyze volumes of trade between pairs of countries. We generalize the basic threshold tobit model by allowing for the inclusion of country-specific effects into the analysis and also show how one can explore the relationship between trade volumes and a given covariate via a non-parametric approach. We use our derived methodology to investigate the impact of a particular aspect of institutions, the enforcement of contracts, on bilateral trade. We find that contract enforcement matters in predicting trade volumes for all types of goods, that it matters most for the trade of differentiated goods, and that the relationship between contract enforcement and trade in our threshold tobit exhibits some nonlinearities. Copyright © 2007 John Wiley & Sons, Ltd.

20.
The improvement of service quality so as to enhance customer satisfaction has been widely discussed over the past few decades. However, a creative and systematic way of achieving higher customer satisfaction in terms of service quality is rarely examined. Recently, TRIZ, a Russian acronym meaning "Theory of Inventive Problem Solving," has been shown to be a well-structured and innovative way to solve problems in both technical and non-technical areas. In this study, a systematic model based on the TRIZ methodology is proposed to generate creative solutions for service quality improvement. This is done by first examining the determinants of service quality through a comprehensive qualitative study in the electronic commerce sector. The correlation between imprecise customer requirements and the determinants of service quality is then analyzed with Fuzzy Quality Function Deployment (QFD) in order to identify the determinants critical to customer satisfaction. The corresponding TRIZ engineering parameters can then be applied in the TRIZ contradiction matrix to identify the relevant inventive principles. A case study of an e-commerce company is presented to demonstrate the effectiveness of our approach, and its results show the applicability of the TRIZ methodology in the e-service sector.
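The QFD scoring step, mapping customer-requirement weights through a relationship matrix to rank the service-quality determinants, reduces in its crisp (non-fuzzy) form to a weighted matrix product. The requirements, determinants, and matrix entries below are invented for illustration; the paper's fuzzy version replaces the crisp numbers with fuzzy sets.

```python
import numpy as np

# Customer requirement importance weights (e.g. from surveys), normalized.
req_weights = np.array([0.5, 0.3, 0.2])

# Relationship matrix: rows = customer requirements, columns = hypothetical
# service-quality determinants (e.g. responsiveness, reliability, ease of
# use); entries use the common 1/3/9 weak/medium/strong QFD scale.
R = np.array([
    [9, 3, 1],
    [3, 9, 3],
    [1, 3, 9],
])

# Determinant scores: requirement weights pushed through the matrix.
scores = req_weights @ R
critical = int(np.argmax(scores))  # determinant most tied to satisfaction
print(scores, critical)
```

The top-scoring determinant is the one carried forward into the TRIZ contradiction matrix to look up candidate inventive principles.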
