Similar Literature
 20 similar documents retrieved (search time: 796 ms)
1.
This paper analyzes patterns in the earnings development of young labor market entrants over their life cycle. We identify four distinctly different types of transition patterns between discrete earnings states in a large administrative dataset. Further, we investigate the effects of labor market conditions at the time of entry on the probability of belonging to each transition type. To estimate our statistical model we use a model-based clustering approach. The statistical challenge in our application comes from the difficulty in extending distance-based clustering approaches to the problem of identifying groups of similar time series in a panel of discrete-valued time series. We use Markov chain clustering, which is an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to analyze group membership we present an extension to this approach by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule using a multinomial logit model. Copyright © 2011 John Wiley & Sons, Ltd.
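As a rough illustration of clustering discrete-valued sequences with a finite mixture of first-order, time-homogeneous Markov chains, the sketch below fits such a mixture with a simple EM algorithm. The paper itself works in a Bayesian finite-mixture framework with a multinomial-logit model for group membership; the EM version, the number of groups K, the three earnings states, and all variable names here are illustrative assumptions.

```python
import numpy as np

def em_markov_mixture(seqs, n_states, K, n_iter=200, seed=0):
    """Fit a K-component mixture of first-order Markov chains by EM.

    seqs: array of shape (N, T) with integer states in {0, ..., n_states-1}.
    Returns mixture weights, per-component transition matrices, and
    posterior group-membership probabilities for each sequence.
    """
    rng = np.random.default_rng(seed)
    N, T = seqs.shape
    # Per-sequence transition count matrices (sufficient statistics).
    counts = np.zeros((N, n_states, n_states))
    for i in range(N):
        for t in range(T - 1):
            counts[i, seqs[i, t], seqs[i, t + 1]] += 1

    # Random initialisation: equal weights, random row-stochastic transition matrices.
    weights = np.full(K, 1.0 / K)
    trans = rng.dirichlet(np.ones(n_states), size=(K, n_states))

    for _ in range(n_iter):
        # E-step: log-likelihood of each sequence under each component.
        log_lik = np.einsum('nij,kij->nk', counts, np.log(trans + 1e-12))
        log_post = np.log(weights + 1e-12) + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)

        # M-step: update mixture weights and transition matrices.
        weights = post.mean(axis=0)
        expected = np.einsum('nk,nij->kij', post, counts) + 1e-6
        trans = expected / expected.sum(axis=2, keepdims=True)

    return weights, trans, post

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Simulate two groups of earnings-state trajectories with 3 discrete states.
    P1 = np.array([[0.8, 0.15, 0.05], [0.2, 0.7, 0.1], [0.1, 0.2, 0.7]])
    P2 = np.array([[0.3, 0.3, 0.4], [0.1, 0.3, 0.6], [0.05, 0.15, 0.8]])

    def simulate(P, n, T):
        out = np.zeros((n, T), dtype=int)
        for i in range(n):
            for t in range(1, T):
                out[i, t] = rng.choice(3, p=P[out[i, t - 1]])
        return out

    data = np.vstack([simulate(P1, 100, 20), simulate(P2, 100, 20)])
    w, P_hat, post = em_markov_mixture(data, n_states=3, K=2)
    print("estimated mixture weights:", np.round(w, 2))
```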

2.
We consider Bayesian estimation of a stochastic production frontier with ordered categorical output, where the inefficiency error is assumed to follow an exponential distribution, and where output, conditional on the inefficiency error, is modelled as an ordered probit model. Gibbs sampling algorithms are provided for estimation with both cross-sectional and panel data, with panel data being our main focus. A Monte Carlo study and a comparison of results from an example where data are used in both continuous and categorical form support the usefulness of the approach. New efficiency measures are suggested to overcome a lack-of-invariance problem suffered by traditional efficiency measures. Potential applications include health and happiness production, university research output, financial credit ratings, and agricultural output recorded in broad bands. In our application to individual health production we use data from an Australian panel survey to compute posterior densities for marginal effects, outcome probabilities, and a number of within-sample and out-of-sample efficiency measures.
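A minimal sketch of the data-generating process described above, under purely illustrative parameter values: latent output is a linear index minus an exponentially distributed inefficiency term plus Gaussian noise, and only an ordered category of output is observed. The Gibbs samplers developed in the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative settings (not taken from the paper).
n = 1000
beta = np.array([1.0, 0.5])             # frontier coefficients
sigma_v = 0.3                           # std deviation of symmetric noise
lambda_u = 1.0                          # rate of the exponential inefficiency term
cutpoints = np.array([-0.5, 0.5, 1.5])  # thresholds defining 4 ordered categories

# Inputs, non-negative inefficiency u, and symmetric noise v.
X = np.column_stack([np.ones(n), rng.normal(size=n)])
u = rng.exponential(scale=1.0 / lambda_u, size=n)
v = rng.normal(scale=sigma_v, size=n)

# Latent (log) output around the stochastic frontier, then ordered categorisation.
y_latent = X @ beta - u + v
y_cat = np.digitize(y_latent, cutpoints)   # observed category in {0, 1, 2, 3}

print("category frequencies:", np.bincount(y_cat, minlength=4) / n)
print("mean technical efficiency exp(-u):", np.exp(-u).mean().round(3))
```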

3.
This paper investigates the accuracy of forecasts from four dynamic stochastic general equilibrium (DSGE) models for inflation, output growth and the federal funds rate using a real-time dataset synchronized with the Fed's Greenbook projections. Conditioning the model forecasts on the Greenbook nowcasts leads to forecasts that are as accurate as the Greenbook projections for output growth and the federal funds rate. Only for inflation are the model forecasts dominated by the Greenbook projections. A comparison with forecasts from Bayesian vector autoregressions shows that the economic structure of the DSGE models, which is useful for interpreting the forecasts, does not lower forecast accuracy. Combining the forecasts of several DSGE models increases precision relative to individual model forecasts. Comparing density forecasts with the actual distribution of observations shows that DSGE models overestimate the uncertainty around point forecasts. Copyright © 2013 John Wiley & Sons, Ltd.

4.
This paper introduces a novel meta-learning algorithm for time series forecast model performance prediction. We model the forecast error as a function of time series features calculated from historical time series with an efficient Bayesian multivariate surface regression approach. The minimum predicted forecast error is then used to identify an individual model or a combination of models to produce the final forecasts. It is well known that the performance of most meta-learning models depends on the representativeness of the reference dataset used for training. In such circumstances, we augment the reference dataset with a feature-based time series simulation approach, namely GRATIS, to generate a rich and representative time series collection. The proposed framework is tested using the M4 competition data and is compared against commonly used forecasting approaches. Our approach provides comparable performance to other model selection and combination approaches but at a lower computational cost and a higher degree of interpretability, which is important for supporting decisions. We also provide useful insights regarding which forecasting models are expected to work better for particular types of time series, the intrinsic mechanisms of the meta-learners, and how the forecasting performance is affected by various factors.
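A toy version of the feature-based meta-learning idea, assuming synthetic series, three simple features (mean, linear-trend slope, lag-1 autocorrelation), and two candidate forecasting models: a regression is trained to predict each model's out-of-sample error from the features, and the model with the smallest predicted error is selected. The paper's Bayesian multivariate surface regression and the GRATIS simulator are not reproduced; the linear meta-learner below is a stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(y):
    """Simple time series features: mean, linear-trend slope, lag-1 autocorrelation."""
    t = np.arange(len(y))
    slope = np.polyfit(t, y, 1)[0]
    ac1 = np.corrcoef(y[:-1], y[1:])[0, 1]
    return np.array([y.mean(), slope, ac1])

def forecast_errors(y, horizon=6):
    """Hold out the last `horizon` points and score two candidate models (MAE)."""
    train, test = y[:-horizon], y[-horizon:]
    naive = np.repeat(train[-1], horizon)                       # random-walk forecast
    trend = np.polyval(np.polyfit(np.arange(len(train)), train, 1),
                       np.arange(len(train), len(train) + horizon))
    return np.array([np.abs(test - naive).mean(), np.abs(test - trend).mean()])

# Reference collection: a mix of trending and noisy-level synthetic series.
series = []
for _ in range(200):
    n = 60
    if rng.random() < 0.5:
        y = 0.1 * np.arange(n) + rng.normal(scale=1.0, size=n)        # trending
    else:
        y = 0.1 * np.cumsum(rng.normal(size=n)) + rng.normal(size=n)  # noisy level
    series.append(y)

F = np.array([features(y[:-6]) for y in series])
E = np.array([forecast_errors(y) for y in series])

# Meta-learner: one linear regression of holdout error on features per candidate model.
F1 = np.column_stack([np.ones(len(F)), F])
coefs = np.linalg.lstsq(F1, E, rcond=None)[0]     # shape (n_features + 1, n_models)

# For a new series, predict the errors and pick the model with the smallest one.
y_new = 0.08 * np.arange(60) + rng.normal(scale=1.0, size=60)
pred_err = np.append(1.0, features(y_new[:-6])) @ coefs
print("predicted MAE per model (naive, trend):", np.round(pred_err, 3))
print("selected model:", ["naive", "trend"][int(np.argmin(pred_err))])
```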

5.
A basic concern in statistical disclosure limitation is the re-identification of individuals in anonymised microdata. Linking against a second dataset that contains identifying information can result in a breach of confidentiality. Almost all linkage approaches are based on comparing the values of variables that are common to both datasets. It is tempting to think that if datasets contain no common variables, then there can be no risk of re-identification. However, linkage has been attempted between such datasets via the extraction of structural information using ordered weighted averaging (OWA) operators. Although this approach has been shown to perform better than randomly pairing records, it is debatable whether it demonstrates a practically significant disclosure risk. This paper reviews some of the main aspects of statistical disclosure limitation. It then goes on to show that a relatively simple, supervised Bayesian approach can consistently outperform OWA linkage. Furthermore, the Bayesian approach demonstrates a significant risk of re-identification for the types of data considered in the OWA record linkage literature.

6.
In this article, we propose new Monte Carlo methods for computing a single marginal likelihood or several marginal likelihoods for the purpose of Bayesian model comparison. The methods are motivated by Bayesian variable selection, in which the marginal likelihoods of all subset variable models must be computed. The proposed estimators use only a single Markov chain Monte Carlo (MCMC) output from the joint posterior distribution and do not require knowledge of the specific structure or form of the MCMC sampling algorithm used to generate the sample. The theoretical properties of the proposed method are examined in detail. The applicability and usefulness of the proposed method are demonstrated via ordinal data probit regression models. A real dataset involving ordinal outcomes is used to further illustrate the proposed methodology.
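To make concrete what estimating a marginal likelihood from a single MCMC output can mean in the simplest possible setting, the sketch below uses the basic marginal-likelihood identity p(y) = p(y | t*) p(t*) / p(t* | y) for a conjugate normal-mean model, estimating the posterior ordinate p(t* | y) by a kernel density over posterior draws and checking the result against the closed-form answer. This is only a didactic stand-in under invented parameter values, not the estimators proposed in the article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Conjugate normal-mean model: y_i ~ N(theta, sigma2), theta ~ N(mu0, tau0^2).
sigma2, mu0, tau0 = 1.0, 0.0, 2.0
y = rng.normal(loc=0.7, scale=np.sqrt(sigma2), size=50)
n = len(y)

def log_prior(theta):
    return stats.norm.logpdf(theta, mu0, tau0)

def log_lik(theta):
    return stats.norm.logpdf(y, theta, np.sqrt(sigma2)).sum()

# A single random-walk Metropolis run targeting the posterior of theta.
theta = 0.0
curr = log_lik(theta) + log_prior(theta)
draws = []
for _ in range(20000):
    prop = theta + rng.normal(scale=0.3)
    cand = log_lik(prop) + log_prior(prop)
    if np.log(rng.random()) < cand - curr:
        theta, curr = prop, cand
    draws.append(theta)
draws = np.array(draws[2000:])          # discard burn-in

# Basic identity: log p(y) = log p(y|t*) + log p(t*) - log p(t*|y),
# with the posterior ordinate at a high-density point t* estimated by a KDE of the draws.
theta_star = draws.mean()
log_post_ord = np.log(stats.gaussian_kde(draws)(np.array([theta_star]))[0])
log_ml_est = log_lik(theta_star) + log_prior(theta_star) - log_post_ord

# Exact log marginal likelihood for this conjugate model (for comparison only).
cov = sigma2 * np.eye(n) + tau0**2 * np.ones((n, n))
log_ml_exact = stats.multivariate_normal.logpdf(y, mean=np.full(n, mu0), cov=cov)

print("estimated log marginal likelihood:", round(float(log_ml_est), 3))
print("exact log marginal likelihood:   ", round(float(log_ml_exact), 3))
```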

7.
This paper develops a Bayesian method for quantile regression for dichotomous response data. The frequentist approach to this type of regression has proven problematic in both optimizing the objective function and making inferences on the parameters. By accepting additional distributional assumptions on the error terms, the proposed Bayesian method sets the problem in a parametric framework in which these problems are avoided. To test the applicability of the method, we ran two Monte Carlo experiments and applied it to Horowitz's (1993) often-studied work-trip mode choice dataset. Compared to previous estimates for the latter dataset, the proposed method leads to a different economic interpretation. Copyright © 2010 John Wiley & Sons, Ltd.
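A minimal sketch of Bayesian quantile regression for a binary response, assuming (as in the asymmetric-Laplace formulation commonly used in this literature) a latent-variable model y = 1{x'beta + eps > 0} with eps following a standard asymmetric Laplace distribution at quantile tau. The random-walk Metropolis sampler, the normal prior, and the simulated data below are illustrative assumptions, not the authors' exact implementation or Horowitz's dataset.

```python
import numpy as np
from scipy import stats

def ald_cdf(u, tau):
    """CDF of the standard asymmetric Laplace distribution with skewness tau."""
    u = np.asarray(u, dtype=float)
    return np.where(u <= 0.0,
                    tau * np.exp((1.0 - tau) * u),
                    1.0 - (1.0 - tau) * np.exp(-tau * u))

def log_posterior(beta, X, y, tau, prior_sd=10.0):
    """Binary quantile regression: y = 1{x'beta + eps > 0}, eps ~ ALD(0, 1, tau)."""
    p1 = np.clip(1.0 - ald_cdf(-X @ beta, tau), 1e-12, 1 - 1e-12)
    loglik = np.sum(y * np.log(p1) + (1 - y) * np.log(1.0 - p1))
    logprior = stats.norm.logpdf(beta, 0.0, prior_sd).sum()
    return loglik + logprior

rng = np.random.default_rng(3)

# Simulated mode-choice-style data (illustrative only).
n, tau = 500, 0.5
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, 1.0])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

# Random-walk Metropolis over beta.
beta = np.zeros(2)
curr = log_posterior(beta, X, y, tau)
draws = []
for _ in range(20000):
    prop = beta + rng.normal(scale=0.1, size=2)
    cand = log_posterior(prop, X, y, tau)
    if np.log(rng.random()) < cand - curr:
        beta, curr = prop, cand
    draws.append(beta.copy())
draws = np.array(draws[5000:])

print("posterior means:", draws.mean(axis=0).round(2))
print("95% intervals:", np.percentile(draws, [2.5, 97.5], axis=0).round(2))
```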

8.
The Components of Output Growth: A Stochastic Frontier Analysis
This paper uses Bayesian stochastic frontier methods to decompose output change into technical, efficiency and input changes. In the context of macroeconomic growth exercises, which typically involve small and noisy data sets, we argue that stochastic frontier methods are useful since they incorporate measurement error and assume a (flexible) parametric form for the production relationship. These properties enable us to calculate measures of uncertainty associated with the decomposition and minimize the risk of overfitting the noise in the data. Tools for Bayesian inference in such models are developed. An empirical investigation using data from 17 OECD countries for 10 years illustrates the practicality and usefulness of our approach.
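In log terms, a decomposition of this kind is additive: output change equals technical change (shift of the frontier), plus efficiency change (movement toward or away from the frontier), plus input change (movement along the frontier). The small numerical sketch below illustrates the accounting with made-up numbers and a simple averaging over the two reference points; it is not the paper's Bayesian estimation procedure.

```python
# Illustrative two-period decomposition (all values in logs; numbers are made up).
# Frontier output under each period's technology, evaluated at each period's inputs.
log_frontier = {("t0", "x0"): 1.00,   # period-0 technology at period-0 inputs
                ("t1", "x0"): 1.05,   # period-1 technology at period-0 inputs
                ("t0", "x1"): 1.12,   # period-0 technology at period-1 inputs
                ("t1", "x1"): 1.17}   # period-1 technology at period-1 inputs
log_y0, log_y1 = 0.90, 1.10           # observed (log) output in the two periods

# Efficiency = observed output minus frontier output (log scale, so <= 0).
eff0 = log_y0 - log_frontier[("t0", "x0")]
eff1 = log_y1 - log_frontier[("t1", "x1")]
efficiency_change = eff1 - eff0

# Technical and input changes, each averaged over the two possible reference points.
technical_change = 0.5 * ((log_frontier[("t1", "x0")] - log_frontier[("t0", "x0")]) +
                          (log_frontier[("t1", "x1")] - log_frontier[("t0", "x1")]))
input_change = 0.5 * ((log_frontier[("t0", "x1")] - log_frontier[("t0", "x0")]) +
                      (log_frontier[("t1", "x1")] - log_frontier[("t1", "x0")]))

output_change = log_y1 - log_y0
print("output change:      ", round(output_change, 3))
print("sum of components:  ", round(efficiency_change + technical_change + input_change, 3))
print("  efficiency change:", round(efficiency_change, 3))
print("  technical change: ", round(technical_change, 3))
print("  input change:     ", round(input_change, 3))
```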

9.
Computationally efficient methods for Bayesian analysis of seemingly unrelated regression (SUR) models are described and applied. These methods use a direct Monte Carlo (DMC) approach to compute Bayesian estimation and prediction results under diffuse or informative priors. The DMC approach is employed to compute Bayesian marginal posterior densities, moments, intervals and other quantities, using data simulated from known models and also using data from an empirical example involving firms' sales. The results obtained by the DMC approach are compared to those yielded by a Markov chain Monte Carlo (MCMC) approach. It is concluded from these comparisons that the DMC approach is worthwhile and applicable to many SUR and other problems.

10.
This paper proposes a new framework for the estimation of product-level global and interregional feedback and spillover (FS) factor multipliers. The framework is based directly on interregional supply and use tables (SUTs), which may be rectangular, and makes it possible to take account of the inherent uncertainty in input–output data. A Bayesian econometric approach is applied to the framework using the first version of the international SUTs in the World Input–Output Database. The resulting estimates of the global and intercountry FS output effects are discussed and presented at the world, country and product levels for the period 1995–2009.

11.
The mixed logit model is widely used in applied econometrics. Researchers are typically free to choose between classical and Bayesian estimation approaches. However, empirical evidence on the similarity of their parameter estimates is sparse. The presumed similarity rests mainly on one empirical study that analyzes a single dataset (Huber J, Train KE. 2001. On the similarity of classical and Bayesian estimates of individual mean partworths. Marketing Letters 12 (3): 259–269). Our replication study generalizes their results by comparing classical and Bayesian parameter estimates from six additional datasets, and specifically for panel versus cross-sectional data. In general, our results suggest that the two methods provide similar results, with less similarity for cross-sectional data than for panel data. Copyright © 2016 John Wiley & Sons, Ltd.

12.
The purpose of this article is to propose a method to minimize the difference between electoral predictions and electoral results. It builds on findings from established democracies, where most of the research has been carried out, but it focuses on filling the gap for developing nations, which have thus far been neglected by the literature. It proposes a two-stage model in which data are first collected, filtered and weighted according to their biases, and forecasts are then produced using Bayesian algorithms and Markov chains. It tests the specification using data from 11 Latin American countries. It shows that the model is remarkably accurate. Compared to polls, it not only produces more precise estimates for every election, but also produces a more accurate forecast for nine out of every ten candidates. The article closes with a discussion of the limitations of the model and a proposal for future research.
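A stripped-down illustration of the second stage, assuming a Beta-Binomial model for a single candidate's vote share aggregated over already filtered and bias-weighted polls. The poll numbers, weights, and prior are invented; the paper's full two-stage, Markov-chain-based model is not reproduced.

```python
import numpy as np
from scipy import stats

# Hypothetical polls for one candidate: (sample size, share for the candidate, weight).
# Weights stand in for the first-stage filtering/bias adjustment (invented numbers).
polls = [(1000, 0.46, 1.0),
         (800,  0.50, 0.6),   # down-weighted, e.g. a house-effect-prone pollster
         (1200, 0.44, 0.9)]

# Weakly informative Beta prior on the vote share.
alpha, beta = 2.0, 2.0

# Sequential conjugate updating with effective (weighted) counts.
for n, share, w in polls:
    alpha += w * n * share
    beta += w * n * (1.0 - share)

posterior = stats.beta(alpha, beta)
print("posterior mean vote share: %.3f" % posterior.mean())
print("90%% credible interval: (%.3f, %.3f)" % tuple(posterior.ppf([0.05, 0.95])))
print("probability of winning a two-way race: %.3f" % (1.0 - posterior.cdf(0.5)))
```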

13.
Multiple time series data may exhibit clustering over time, and the clustering effect may differ across series. This paper is motivated by Bayesian non-parametric modelling of the dependence between clustering effects in multiple time series analysis. We follow a Dirichlet process mixture approach and define a new class of multivariate dependent Pitman–Yor processes (DPY). The proposed DPY are represented in terms of vectors of stick-breaking processes which determine dependent clustering structures in the time series. We follow a hierarchical specification of the DPY base measure to account for various degrees of information pooling across the series. We discuss some theoretical properties of the DPY and use them to define Bayesian non-parametric repeated measurement and vector autoregressive models. We provide efficient Markov chain Monte Carlo algorithms for posterior computation in the proposed models and illustrate the effectiveness of the method with a simulation study and an application to the United States and European Union business cycles.
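The stick-breaking representation referred to above can be sketched directly: for a Pitman–Yor process with discount d and concentration alpha, the stick-breaking fractions are V_k ~ Beta(1 - d, alpha + k d) and the weights are w_k = V_k * prod_{j<k}(1 - V_j). The truncated, univariate draw below is illustrative only; the paper's dependent multivariate construction and hierarchical base measure are not reproduced, and the parameter values are assumptions.

```python
import numpy as np

def draw_pitman_yor(alpha, d, base_sampler, truncation=200, rng=None):
    """Draw a (truncated) random probability measure from a Pitman-Yor process.

    Returns atom locations and stick-breaking weights w_k = V_k * prod_{j<k}(1 - V_j),
    with V_k ~ Beta(1 - d, alpha + k * d), k = 1, 2, ...
    """
    rng = np.random.default_rng() if rng is None else rng
    k = np.arange(1, truncation + 1)
    v = rng.beta(1.0 - d, alpha + k * d)
    v[-1] = 1.0                                   # close the truncated stick exactly
    log_remaining = np.concatenate([[0.0], np.cumsum(np.log1p(-v[:-1]))])
    weights = v * np.exp(log_remaining)
    atoms = base_sampler(truncation, rng)         # i.i.d. draws from the base measure
    return atoms, weights

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    atoms, w = draw_pitman_yor(alpha=1.0, d=0.25,
                               base_sampler=lambda n, r: r.normal(size=n), rng=rng)
    print("weights sum to:", w.sum().round(6))
    print("number of atoms with weight > 1%:", int((w > 0.01).sum()))
    # Sampling observations from the discrete random measure induces ties,
    # i.e. the clustering behaviour exploited in the paper.
    obs = rng.choice(atoms, size=20, p=w)
    print("distinct values among 20 draws:", np.unique(obs).size)
```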

14.
There are two main methods for measuring the efficiency of decision-making units (DMUs): data envelopment analysis (DEA) and stochastic frontier analysis (SFA). Each of these methods has advantages and disadvantages. DEA is more popular in the literature due to its simplicity, as it does not require any pre-assumptions and can measure the efficiency of DMUs with multiple inputs and multiple outputs, whereas SFA is a parametric approach that handles multiple inputs but only a single output. Since many applied studies feature multiple output variables, SFA cannot be used directly in such cases. In this research, a unique method for transforming multiple outputs into a single virtual output is proposed. The efficiency scores computed from this virtual single output are close to (or, depending on the targeted parameters and at the expense of computation time and resources, even identical to) the efficiency scores obtained from DEA with multiple outputs. This makes it possible to apply SFA with the virtual single output. The proposed method is validated using a simulation study, and its usefulness is demonstrated in a real application using a hospital dataset from Turkey.
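For reference, a compact sketch of the output-oriented, constant-returns-to-scale DEA envelopment problem whose multi-output scores the virtual single output is meant to reproduce; it solves one linear programme per DMU with scipy. The data are random and the formulation is the textbook CCR model, not the transformation method proposed in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def dea_output_oriented_crs(X, Y):
    """Output-oriented CCR (constant returns) DEA efficiency scores.

    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs). Returns scores in (0, 1],
    computed as 1 / phi* where phi* is the maximal radial output expansion.
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        # Decision variables z = (phi, lambda_1, ..., lambda_n); maximise phi.
        c = np.concatenate([[-1.0], np.zeros(n)])
        # Input constraints:  sum_j lambda_j * x_ij <= x_io
        A_in = np.hstack([np.zeros((m, 1)), X.T])
        b_in = X[o]
        # Output constraints: phi * y_ro - sum_j lambda_j * y_rj <= 0
        A_out = np.hstack([Y[o].reshape(-1, 1), -Y.T])
        b_out = np.zeros(s)
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.concatenate([b_in, b_out]),
                      bounds=[(0, None)] * (n + 1), method="highs")
        scores[o] = 1.0 / res.x[0]
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(11)
    X = rng.uniform(1, 10, size=(20, 2))          # two inputs
    Y = rng.uniform(1, 10, size=(20, 3))          # three outputs
    print(np.round(dea_output_oriented_crs(X, Y), 3))
```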

15.
Journal of Econometrics, 2005, 124(2): 311–334
We introduce a set of new Markov chain Monte Carlo algorithms for Bayesian analysis of the multinomial probit model. Our Bayesian representation of the model places a new, and possibly improper, prior distribution directly on the identifiable parameters and thus is relatively easy to interpret and use. Our algorithms, which are based on the method of marginal data augmentation, involve only draws from standard distributions and dominate other available Bayesian methods in that they are as quick to converge as the fastest methods but with a more attractive prior specification. C code along with an R interface for our algorithms is publicly available.

16.
Journal of Econometrics, 2005, 126(2): 493–523
The estimated parameters of output distance functions frequently violate the monotonicity, quasi-convexity and convexity constraints implied by economic theory, leading to estimated elasticities and shadow prices that are incorrectly signed, and ultimately to perverse conclusions concerning the effects of input and output changes on productivity growth and relative efficiency levels. We show how a Bayesian approach can be used to impose these constraints on the parameters of a translog output distance function. Implementing the approach involves the use of a Gibbs sampler with data augmentation. A Metropolis–Hastings algorithm is also used within the Gibbs sampler to simulate observations from truncated pdfs. Our methods are developed for the case where panel data are available and technical inefficiency effects are assumed to be time-invariant. Two models, a fixed effects model and a random effects model, are developed and applied to panel data on 17 European railways. We observe significant changes in estimated elasticities and shadow price ratios when the regularity restrictions are imposed.

17.
Real-time nowcasting is an assessment of current-quarter GDP from timely economic and financial series released before the GDP figure is disseminated. Providing a reliable current-quarter nowcast in real time, based on the most recently released monthly economic and financial data, is crucial for central banks' policy decisions and longer-term forecasting exercises. In this study, we use dynamic factor models to bridge the monthly information with quarterly GDP and to reduce the dimensionality of the monthly data. We develop a Bayesian approach that deals with the unbalanced features of the dataset and estimates the latent common factors. We demonstrate the validity of our approach through simulation studies, and explore its applicability through an empirical study nowcasting China's GDP using 117 monthly data series from several categories in the Chinese market. The simulation studies and the empirical study indicate that our Bayesian approach may be a viable option for nowcasting China's GDP.
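A crude, non-Bayesian stand-in for the bridging idea: extract a common factor from standardised monthly indicators by principal components, aggregate it to quarterly frequency, and regress quarterly GDP growth on the aggregated factor to form a nowcast. The panel dimensions, variable names, and data are invented, and the paper's Bayesian treatment of unbalanced (ragged-edge) data is not attempted here.

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented monthly panel: T_m months of K indicators driven by one common AR(1) factor.
T_m, K = 120, 20
factor_m = np.zeros(T_m)
for t in range(1, T_m):
    factor_m[t] = 0.8 * factor_m[t - 1] + rng.normal(scale=0.5)
monthly = np.outer(factor_m, rng.uniform(0.5, 1.5, K)) + rng.normal(scale=0.5, size=(T_m, K))

# Quarterly GDP growth driven by the quarterly-average factor (invented relationship).
factor_q = factor_m.reshape(-1, 3).mean(axis=1)                  # 40 quarters
gdp_growth = 1.5 + 0.8 * factor_q + rng.normal(scale=0.2, size=len(factor_q))

# Step 1: principal-components estimate of the common factor from standardised monthlies.
Z = (monthly - monthly.mean(axis=0)) / monthly.std(axis=0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
factor_hat_m = Z @ Vt[0]                                         # first PC, monthly

# Step 2: bridge regression of quarterly GDP growth on the quarterly-averaged factor,
# leaving the final quarter out to be nowcast.
factor_hat_q = factor_hat_m.reshape(-1, 3).mean(axis=1)
F = np.column_stack([np.ones(len(factor_hat_q)), factor_hat_q])
coef = np.linalg.lstsq(F[:-1], gdp_growth[:-1], rcond=None)[0]

nowcast = F[-1] @ coef
print("nowcast for the final quarter: %.2f" % nowcast)
print("actual value (simulated):      %.2f" % gdp_growth[-1])
```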

18.
This article examines how efficient art organizations are in raising funds from private giving. We measure fundraising efficiency with a Bayesian estimation approach based on a stochastic frontier production model. We show that fundraising efficiencies are generally quite low for art organizations in the U.S. when private giving alone is considered as the fundraising output; however, when the effect of fundraising on ticket sales is also considered, fundraising efficiencies improve substantially. We also show that government grants have a negative impact on fundraising efficiency and therefore partially crowd out private giving.

19.
This paper develops a method to estimate the U.S. output gap by exploiting the cross-sectional variation of state-level output and unemployment rate data. The model assumes that there are common output and unemployment rate trend and cycle components, and that each state's output and unemployment rate are subject to idiosyncratic trend and cycle perturbations. I estimate the model with Bayesian methods using quarterly data from 2005:Q1 to 2018:Q2 for the 50 states and the District of Columbia. Results show that the U.S. output gap reached about negative 4.6% around the years of the Great Recession and was about 0.9% in 2018:Q2.
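As a much simpler univariate point of comparison for the trend-cycle idea, the snippet below separates a simulated log-output series into trend and cycle with the Hodrick-Prescott filter from statsmodels and reads the output gap off the cycle component. The paper's Bayesian, multi-state unobserved-components model is not reproduced, and the series and parameters here are simulated assumptions.

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(2)

# Simulated quarterly log output: deterministic trend plus a persistent AR(2) cycle plus noise.
T = 56                                          # e.g. 2005:Q1 to 2018:Q4
trend = 9.5 + 0.005 * np.arange(T)
cycle = np.zeros(T)
for t in range(2, T):
    cycle[t] = 1.4 * cycle[t - 1] - 0.5 * cycle[t - 2] + rng.normal(scale=0.004)
log_output = trend + cycle + rng.normal(scale=0.002, size=T)

# HP filter with the conventional quarterly smoothing parameter lambda = 1600.
gap, hp_trend = hpfilter(log_output, lamb=1600)

# Output gap in percent of trend output (log deviation times 100).
gap_pct = 100 * gap
print("output gap in the last quarter: %.2f%%" % gap_pct[-1])
print("largest negative gap: %.2f%%" % gap_pct.min())
```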

20.
This editorial summarizes the papers published in issue 14(1) so as to raise the bar in applied spatial economic research and highlight new trends. The first paper applies the Shapley-based decomposition approach to determine the impact of firm-, linkage- and location-specific factors on the survival probability of enterprises. The second paper applies Bayesian comparison methods to identify simultaneously the most likely spatial econometric model and spatial weight matrix explaining new business creation. The third paper compares the performance of continuous and discrete approaches to explaining subjective well-being across space. The fourth paper applies a multiple imputation approach to determine regional purchasing power parities at the NUTS-3 level using data available at the NUTS-2 level. Finally, the last paper constructs a regional input–output table for Japan from its national counterpart using four non-survey techniques and comparing their performance.

