20 similar documents found (search time: 0 ms)
1.
This paper presents a Bayesian analysis of an ordered probit model with endogenous selection. The model can be applied when analyzing ordered outcomes that depend on endogenous covariates that are discrete choice indicators modeled by a multinomial probit model. The model is illustrated by analyzing the effects of different types of medical insurance plans on the level of hospital utilization, allowing for potential endogeneity of insurance status. The estimation is performed using Markov chain Monte Carlo (MCMC) methods to approximate the posterior distribution of the model parameters.
2.
We present a new specification for the multinomial multiperiod probit model with autocorrelated errors. In sharp contrast with commonly used specifications, ours is invariant with respect to the choice of a baseline alternative for utility differencing. It also nests these standard models as special cases, allowing for data-based selection of the baseline alternatives for the latter. Likelihood evaluation is achieved under an Efficient Importance Sampling (EIS) version of the standard GHK algorithm. Several simulation experiments highlight identification, estimation and pretesting within the new class of multinomial multiperiod probit models.
3.
Rosalia Condorelli 《Quality and Quantity》2013,47(2):1143-1161
The identification of change points in a sequence of suicide rates is one of the fundamental aspects of Durkheim’s theory. The specification of a statistical standard suitable for this purpose is the main condition for making inferences about the causes of suicide with distinctive trends of persistency and variability, just as Durkheim theorized. By present-day standards, the statistical ‘strategy’ employed by the French social scientist is ‘rudimentary’. A hundred years later, I take the opportunity to test Durkheim’s theory with modern methodological instruments, specifically Bayesian change-point analysis. First, I analyzed the same suicide data that Durkheim considered. Change-point analysis corroborates the Durkheimian analysis, revealing the same change points identified by the author. Second, I analyzed Italian suicide rates from 1864 to 2005, for which change-point analysis proved very useful. Durkheim’s theory ‘works’ until 1961: suicide rates increased as industrial development increased. After 1961 and the economic boom, however, rates declined, and when they began increasing again after 1984 they did not reach their earlier level. This finding obliges us to ‘adjust’ Durkheim’s theory, giving space to Halbwachs’s convergence law. As high levels of economic and social development are attained, suicide rates tend to level off: people adapt to the stress of modernization associated with low levels of social integration. Although we are more ‘egoist’, individualism does not destroy identity and the sense of life as Durkheim had maintained.
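For readers unfamiliar with the technique, the posterior for a single change point under Gaussian noise can be sketched in a few lines. This is a deliberately simplified stand-in (discrete uniform prior over locations, known variance, segment means profiled out), not the exact model used in the paper, and the simulated series is invented for illustration:

```python
import numpy as np

def changepoint_posterior(y, sigma=1.0):
    """Posterior over a single change-point location k, with a discrete
    uniform prior, Gaussian noise with known sigma, and segment means
    set to their MLEs (a profile-likelihood approximation)."""
    n = len(y)
    logpost = np.full(n, -np.inf)
    for k in range(1, n):  # change occurs between index k-1 and k
        m1, m2 = y[:k].mean(), y[k:].mean()
        resid = np.concatenate([y[:k] - m1, y[k:] - m2])
        logpost[k] = -0.5 * np.sum(resid**2) / sigma**2
    logpost -= logpost.max()          # stabilize before exponentiating
    post = np.exp(logpost)
    return post / post.sum()

# Simulated rate series: a clear level shift at t = 50.
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.0, 1.0, 50), rng.normal(3.0, 1.0, 50)])
post = changepoint_posterior(y)
print(post.argmax())  # most probable change-point location
```

With a shift this pronounced, the posterior mass concentrates tightly around the true break, which is the behavior the paper relies on when dating regime changes in the historical series.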
4.
《International Journal of Forecasting》2023,39(3):1384-1412
This paper investigates the benefits of internet search data in the form of Google Trends for nowcasting real U.S. GDP growth in real time through the lens of mixed frequency Bayesian Structural Time Series (BSTS) models. We augment and enhance both the model and the methodology to make them more amenable to nowcasting with a large number of potential covariates. Specifically, we allow state variances to shrink towards zero to avoid overfitting, extend the spike-and-slab variable selection (SSVS) prior to the more flexible normal-inverse-gamma prior, which stays agnostic about the underlying model size, and adapt the horseshoe prior to the BSTS framework. The application to nowcasting GDP growth, as well as a simulation study, demonstrates that the horseshoe-prior BSTS improves markedly upon the SSVS and the original BSTS model, with the largest gains in dense data-generating processes. Our application also shows that a large-dimensional set of search terms is able to improve nowcasts early in a quarter, before other macroeconomic data become available. Search terms with high inclusion probability have a clear economic interpretation, reflecting leading signals of economic anxiety and wealth effects.
5.
We present a Bayesian approach for analyzing aggregate level sales data in a market with differentiated products. We consider the aggregate share model proposed by Berry et al. [Berry, Steven, Levinsohn, James, Pakes, Ariel, 1995. Automobile prices in market equilibrium. Econometrica. 63 (4), 841–890], which introduces a common demand shock into an aggregated random coefficient logit model. A full likelihood approach is possible with a specification of the distribution of the common demand shock. We introduce a reparameterization of the covariance matrix to improve the performance of the random walk Metropolis for covariance parameters. We illustrate the usefulness of our approach with both actual and simulated data. Sampling experiments show that our approach performs well relative to the GMM estimator even in the presence of a mis-specified shock distribution. We view our approach as useful for those who are willing to trade off one additional distributional assumption for increased efficiency in estimation.
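The abstract does not spell out the covariance reparameterization, but a common device serving the same purpose is a log-Cholesky transform: random-walk proposals made in the unconstrained space always map back to a valid covariance matrix, so no proposal is ever rejected for leaving the positive-definite cone. A minimal sketch (illustrative only, not necessarily the authors' exact parameterization):

```python
import numpy as np

def cov_to_unconstrained(S):
    """Lower Cholesky factor with logged diagonal, flattened into a vector.
    A random walk on this vector always maps back to a valid covariance."""
    L = np.linalg.cholesky(S)
    idx = np.tril_indices_from(L)
    v = L[idx].copy()
    diag_pos = np.where(idx[0] == idx[1])[0]
    v[diag_pos] = np.log(np.diag(L))   # log keeps the diagonal positive
    return v

def unconstrained_to_cov(v, d):
    L = np.zeros((d, d))
    L[np.tril_indices(d)] = v
    L[np.diag_indices(d)] = np.exp(np.diag(L))
    return L @ L.T

S = np.array([[2.0, 0.5], [0.5, 1.0]])
v = cov_to_unconstrained(S)
assert np.allclose(unconstrained_to_cov(v, 2), S)  # exact round trip

# Any random-walk step in v-space yields a positive-definite proposal:
rng = np.random.default_rng(0)
S_prop = unconstrained_to_cov(v + 0.1 * rng.standard_normal(v.size), 2)
print(np.linalg.eigvalsh(S_prop).min() > 0)
```

The design choice is that the Metropolis step operates on an unconstrained Euclidean space, which typically improves acceptance rates for covariance parameters relative to proposing matrices directly.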
6.
Jamie L. Crandell, Corrine I. Voils, YunKyung Chang, Margarete Sandelowski 《Quality and Quantity》2011,45(3):653-669
The possible utility of Bayesian methods for the synthesis of qualitative and quantitative research has been repeatedly suggested but insufficiently investigated. In this project, we developed and used a Bayesian method for synthesis, with the goal of identifying factors that influence adherence to HIV medication regimens. We investigated the effect of 10 factors on adherence. Recognizing that not all factors were examined in all studies, we considered standard methods for dealing with missing data and chose a Bayesian data augmentation method. We were able to summarize, rank, and compare the effects of each of the 10 factors on medication adherence. This is a promising methodological development in the synthesis of qualitative and quantitative research.
7.
Journal of Productivity Analysis - This paper proposes a probabilistic frontier regression model for multinomial ordinal type output data. We consider some of the output categories as...
8.
9.
The paper investigates cross-country differences in wage mobility in Europe using the European Community Household Panel. We examine the impact on wage mobility of specific wage-setting institutions, such as collective bargaining and trade union density, employment protection regulation, and the welfare state regime. We apply a log-linear approach that is closely related to a restricted multinomial logit model and much more flexible than the standard probit approach typically applied in research on wage mobility. It is shown that the macro-economic context and the aforementioned institutions explain a substantial part of the cross-country variation, larger than the part explained by regime type. The findings also confirm the existence of an inverse U-shaped pattern of wage mobility, showing a great deal of low-wage and high-wage persistence in all countries.
10.
《Journal of econometrics》2004,123(2):307-325
This paper presents a method for estimating the posterior probability density of the cointegrating rank of a multivariate error correction model. A second contribution is the careful elicitation of the prior for the cointegrating vectors derived from a prior on the cointegrating space. This prior obtains naturally from treating the cointegrating space as the parameter of interest in inference and overcomes problems previously encountered in Bayesian cointegration analysis. Using this new prior and Laplace approximation, an estimator for the posterior probability of the rank is given. The approach performs well compared with information criteria in Monte Carlo experiments.
11.
NAFTA has arguably been the most important and elaborate free-trade agreement in history, providing a blueprint for potential new agreements. So far, the evidence is mixed as to whether NAFTA has been successful in terms of its economic impact. We fit a multivariate stochastic volatility model that directly measures financial information linkages across the three participating countries in a trivariate setting. The model detects significant changes in information linkages across the countries from the pre- to post-NAFTA period with a high degree of reliability. This has implications not only for measuring these linkages but also for hedging and portfolio diversification policies. An MCMC procedure is used to fit the model, and the accuracy and robustness of the method are confirmed by simulations.
12.
Fixed effects estimators of nonlinear panel models can be severely biased due to the incidental parameters problem. In this paper, I characterize the leading term of a large-T expansion of the bias of the MLE and estimators of average marginal effects in parametric fixed effects panel binary choice models. For probit index coefficients, the former term is proportional to the true value of the coefficients being estimated. This result allows me to derive a lower bound for the bias of the MLE. I then show that the resulting fixed effects estimates of ratios of coefficients and average marginal effects exhibit no bias in the absence of heterogeneity and negligible bias for a wide variety of distributions of regressors and individual effects in the presence of heterogeneity. I subsequently propose new bias-corrected estimators of index coefficients and marginal effects with improved finite sample properties for linear and nonlinear models with predetermined regressors.
13.
Bayesian model selection using encompassing priors
This paper deals with Bayesian selection of models that can be specified using inequality constraints among the model parameters. The concept of an encompassing prior is introduced: a prior distribution for an unconstrained model from which the prior distributions of the constrained models can be derived. It is shown that the Bayes factor for the encompassing and a constrained model has a very natural interpretation: it is the ratio of the proportions of the prior and posterior distributions of the encompassing model in agreement with the constrained model. It is also shown that, for a specific class of models, selection based on encompassing priors renders a virtually objective selection procedure. The paper concludes with three illustrative examples: an analysis of variance with ordered means, a contingency table analysis with ordered odds ratios, and a multilevel model with ordered slopes.
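The proportion-based Bayes factor described above can be estimated by simple Monte Carlo: draw from the prior and posterior of the encompassing model and count how often the inequality constraint holds. A toy sketch for two group means with the constraint mu1 < mu2 (the data, prior variance, and known error variance are all invented for illustration, and the constrained-versus-encompassing Bayes factor is taken as posterior proportion over prior proportion):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data for two groups; error sd is treated as known (sigma = 1).
y1 = rng.normal(0.5, 1.0, 30)
y2 = rng.normal(1.5, 1.0, 30)

def posterior_draws(y, prior_var=100.0, sigma=1.0, size=100_000):
    """Conjugate normal update for a mean with known error variance
    and a N(0, prior_var) prior under the encompassing model."""
    post_var = 1.0 / (1.0 / prior_var + len(y) / sigma**2)
    post_mean = post_var * (y.sum() / sigma**2)
    return rng.normal(post_mean, np.sqrt(post_var), size)

size = 100_000
prior1, prior2 = rng.normal(0, 10, size), rng.normal(0, 10, size)
post1, post2 = posterior_draws(y1, size=size), posterior_draws(y2, size=size)

c = np.mean(prior1 < prior2)   # prior mass in agreement with mu1 < mu2
f = np.mean(post1 < post2)     # posterior mass in agreement
bf_constrained_vs_encompassing = f / c
print(c, f)
```

By the symmetry of the prior, c sits near 1/2, while the data push f toward 1, so the constrained model is favored; with more than two ordered means the prior proportion shrinks (1/m! for m means), which is where the encompassing-prior correction matters.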
14.
Following the major reforms of the UK health service in 1990, general practitioners (primary care physicians) have been able to purchase specialist care from any hospital they choose. To date, little research has been conducted with respect to the decision process by which such hospitals are chosen. Based on the results of a questionnaire survey conducted in central England and using probit analysis, the significant arguments in practitioners' decision functions are identified. Locality and clinical variables emerge as being considerably more important than price in determining referral destination.
15.
This article develops a new portfolio selection method using Bayesian theory. The proposed method accounts for the uncertainties in estimation parameters and the model specification itself, both of which are ignored by the standard mean-variance method. The critical issue in constructing an appropriate predictive distribution for asset returns is evaluating the goodness of individual factors and models. This problem is investigated from a statistical point of view; we propose using the Bayesian predictive information criterion. Two Bayesian methods and the standard mean-variance method are compared through Monte Carlo simulations and in a real financial data set. The Bayesian methods perform very well compared to the standard mean-variance method.
16.
Markov chain Monte Carlo (MCMC) methods have become a ubiquitous tool in Bayesian analysis. This paper implements MCMC methods for Bayesian analysis of stochastic frontier models using the WinBUGS package, freely available software. General code for cross-sectional and panel data is presented, and various ways of summarizing posterior inference are discussed. Several examples illustrate that analyses with models of genuine practical interest can be performed straightforwardly and that model changes are easily implemented. Although WinBUGS may not be very efficient for more complicated models, it makes Bayesian inference with stochastic frontier models easily accessible to applied researchers, and its generic structure allows a great deal of flexibility in model specification.
17.
We model the hospital as seeking to balance the costs to itself in providing care, as well as the societal cost of people waiting for care. We use queuing theory to show that the optimal capacity and the corresponding optimal occupancy rate are dependent on the marginal cost of expanding capacity, the marginal cost of waiting, and the rates of patient arrival and discharge. Therefore, a universal occupancy target is unfounded. As well, the model shows that increasing capacity to respond to increased patient influxes is inadequate, suggesting that the health care system must explore alternate responses to burgeoning patient populations. 相似文献
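The trade-off this abstract describes can be illustrated with a textbook M/M/c queue: the Erlang C formula gives the probability an arrival must wait, and minimizing bed cost plus waiting cost over the number of beds yields an occupancy target that moves with the cost ratio. The arrival rate, discharge rate, and costs below are invented purely for illustration, not taken from the paper:

```python
import math

def erlang_c(lam, mu, c):
    """Probability an arrival must wait in an M/M/c queue (Erlang C)."""
    a = lam / mu                      # offered load in beds
    if c <= a:
        return 1.0                    # unstable regime: everyone waits
    top = (a**c / math.factorial(c)) * (c / (c - a))
    bottom = sum(a**k / math.factorial(k) for k in range(c)) + top
    return top / bottom

def total_cost(lam, mu, c, cost_bed, cost_wait):
    wq = erlang_c(lam, mu, c) / (c * mu - lam)   # mean wait in queue
    return cost_bed * c + cost_wait * lam * wq   # beds + waiting patients

lam, mu = 20.0, 1.0            # arrivals/day; discharges per bed per day
cost_bed, cost_wait = 1.0, 5.0 # hypothetical marginal costs
costs = {c: total_cost(lam, mu, c, cost_bed, cost_wait) for c in range(21, 60)}
c_star = min(costs, key=costs.get)
occupancy = lam / (c_star * mu)
print(c_star, round(occupancy, 2))
```

Re-running with a different ratio of waiting cost to bed cost shifts the optimal occupancy, which is precisely the paper's point against a universal occupancy target.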
18.
Jacques H. Drèze 《Journal of econometrics》1977,6(3):329-354
Poly-t densities are defined by the property that their kernel is a product, or ratio of products, of Student-t kernels. These multivariate densities arise as Bayesian posterior densities for regression coefficients, under a surprising variety of specifications for the prior density and the data generating process. Although no analytical expression exists for the integrating constant and moments of these densities, these parameters are obtained through numerical integration in a number of dimensions given by the number of Student-t kernels in the numerator, minus one. The paper reviews how poly-t densities arise in regression analysis, and summarizes the results obtained for a number of models.
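In one dimension the objects involved are easy to compute directly: the kernel is a product of Student-t kernels, and the normalizing constant and moments follow from numerical quadrature. A small sketch with two arbitrarily chosen kernels (the centers, scales, and degrees of freedom are illustrative, not from the paper):

```python
import numpy as np
from scipy import integrate

def polyt_kernel(x, centers, scales, dofs):
    """Kernel of a one-dimensional poly-t density: a product of
    (unnormalized) Student-t kernels."""
    val = 1.0
    for m, s, v in zip(centers, scales, dofs):
        val *= (1.0 + ((x - m) / s) ** 2 / v) ** (-(v + 1) / 2)
    return val

centers, scales, dofs = [0.0, 1.0], [1.0, 0.5], [5, 8]

# Normalizing constant and posterior mean by numerical integration.
Z, _ = integrate.quad(polyt_kernel, -np.inf, np.inf,
                      args=(centers, scales, dofs))
mean, _ = integrate.quad(lambda x: x * polyt_kernel(x, centers, scales, dofs) / Z,
                         -np.inf, np.inf)
print(round(Z, 4), round(mean, 4))
```

The mean lands between the two kernel centers, pulled toward the tighter kernel; in higher dimensions the paper's point is that the required integration collapses to (number of numerator kernels minus one) dimensions rather than the full parameter space.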
19.
C. A. Rohde 《Metrika》1972,18(1):220-226
20.
《Enterprise Information Systems》2013,7(2):207-217
In exploring the business operation of Internet companies, few researchers have used data envelopment analysis (DEA) to evaluate their performance. Since Internet companies have a two-stage production process, marketability and profitability, this study employs a relational two-stage DEA model to assess the efficiency of 40 dot-com firms. The results show that the model performs better in measuring efficiency and is able to discriminate the causes of inefficiency, thus helping business management become more effective by providing more specific guidance for performance improvement.