1.
This paper is concerned with the use of a Bayesian approach to fuzzy regression discontinuity (RD) designs for understanding the returns to education. The discussion is motivated by the change in government policy in the UK in April 1947, when the minimum school leaving age was raised from 14 to 15—a change that had a discontinuous impact on the probability of leaving school at age 14 for cohorts who turned 14 around the time of the policy change. We develop a Bayesian fuzzy RD framework that allows us to take advantage of this discontinuity to calculate the effect of an additional year of education on subsequent log earnings for the (latent) class of subjects that complied with the policy change. We illustrate this approach with a new dataset compiled from the UK General Household Surveys. Copyright © 2015 John Wiley & Sons, Ltd.
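For orientation, here is a minimal, hypothetical sketch of the local Wald (fuzzy RD) estimator that such designs build on: the jump in mean log earnings at the cutoff divided by the jump in completed schooling. All variable names, the bandwidth, and the simulated cohorts are illustrative assumptions; the paper's Bayesian latent-class treatment of compliers is not reproduced here.

```python
import numpy as np

def fuzzy_rd_wald(running, years_of_schooling, log_earnings, bandwidth=2.0):
    """Local Wald estimate: jump in the outcome at the cutoff divided by the
    jump in treatment (schooling) at the cutoff. Illustrative only."""
    r = np.asarray(running, dtype=float)      # cohort distance from the 1947 cutoff
    d = np.asarray(years_of_schooling, dtype=float)
    y = np.asarray(log_earnings, dtype=float)

    left = (r >= -bandwidth) & (r < 0)
    right = (r >= 0) & (r <= bandwidth)

    jump_in_outcome = y[right].mean() - y[left].mean()
    jump_in_treatment = d[right].mean() - d[left].mean()
    return jump_in_outcome / jump_in_treatment  # LATE for compliers

# Toy example with simulated cohorts (purely illustrative numbers).
rng = np.random.default_rng(0)
r = rng.uniform(-2, 2, 5000)
school = 9 + (r >= 0) * rng.binomial(1, 0.5, r.size) + rng.normal(0, 0.3, r.size)
earn = 2.0 + 0.07 * school + rng.normal(0, 0.2, r.size)
print(fuzzy_rd_wald(r, school, earn))          # should be near 0.07
```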
2.
This paper develops an integer programming (IP) model to determine item allocation across a hybrid retailer's store network, comprising bricks-and-mortar stores and an online store. Products with low carrying costs are distributed between the bricks-and-mortar stores and the online store. Products with high carrying costs can be withdrawn from the bricks-and-mortar stores and made available exclusively at the online store, where inventory carrying costs are comparatively lower. This strategy helps the hybrid retailer not only improve the profitability of its bricks-and-mortar stores but also retain the custom of the market segment that is loyal to the items withdrawn from the traditional stores. In this framework, the online channel complements rather than competes with traditional channels. The model is used to conduct an extensive simulation study analyzing the impact of important business factors on system profitability.
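As a rough illustration of the underlying trade-off, the sketch below brute-forces a tiny channel-assignment problem: each item is stocked either in the physical stores (higher carrying cost, slight demand lift, limited shelf space) or online only. The item data, cost structure, and capacity constraint are hypothetical and far simpler than the paper's IP formulation, which would be handed to an integer-programming solver.

```python
from itertools import product

# Hypothetical per-item margin, demand, carrying cost per unit, and shelf space.
items = {
    "A": {"margin": 5.0, "demand": 100, "carry_store": 0.8, "carry_online": 0.5, "space": 2},
    "B": {"margin": 3.0, "demand": 150, "carry_store": 0.4, "carry_online": 0.3, "space": 1},
    "C": {"margin": 8.0, "demand": 40,  "carry_store": 2.5, "carry_online": 1.0, "space": 3},
}
STORE_SPACE = 4           # shelf capacity of the bricks-and-mortar channel
STORE_DEMAND_BOOST = 1.1  # assume in-store availability lifts demand slightly

def profit(assignment):
    """assignment[i] == 1 -> item stocked in the physical stores; 0 -> online only.
    Returns total profit, or None if the shelf-capacity constraint is violated."""
    used = sum(items[i]["space"] for i, a in zip(items, assignment) if a == 1)
    if used > STORE_SPACE:
        return None
    total = 0.0
    for i, a in zip(items, assignment):
        d = items[i]
        demand = d["demand"] * (STORE_DEMAND_BOOST if a else 1.0)
        carry = d["carry_store"] if a else d["carry_online"]
        total += demand * (d["margin"] - carry)
    return total

best_assignment, best_profit = None, float("-inf")
for a in product([0, 1], repeat=len(items)):   # enumerate all channel assignments
    p = profit(a)
    if p is not None and p > best_profit:
        best_assignment, best_profit = a, p
print(dict(zip(items, best_assignment)), best_profit)
```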
3.
The complexity of policy decision-making raises the need to elicit opinions from large and heterogeneous groups of stakeholders with broad and diverse sets of expertise. Existing options for elicitation include small face-to-face panels of experts using the Nominal Group Technique (NGT), large Delphi panels whose members do not interact with each other face-to-face, and crowdsourcing, which involves an open call for input issued to a large community of people. In an attempt to close the gap between the practical needs of policy makers and the methodological challenges of eliciting opinions from large, diverse, and distributed groups, we have developed a new online elicitation system and methodology called ExpertLens. By combining the direct interaction of NGT with the larger panel sizes of Delphi and the wisdom of "selected crowds," our approach is designed to save on the costs associated with traditional expert panels while increasing accuracy in elicitation by reducing the potential for group process losses that can occur in large, diverse, and non-collocated panels whose members interact via asynchronous online discussion boards. The ExpertLens approach is iterative, does not require participants to reach consensus, and determines what the group "thinks" by statistically analyzing data collected in all rounds of the elicitation. This paper describes the ExpertLens system and methodology, briefly discusses recent ExpertLens trials, provides conceptual arguments for why it is an appropriate model for eliciting expert opinions, illustrates its main components and analytics using an infrastructure investment example, and discusses a research agenda for testing the underlying tenets of the ExpertLens approach.
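To make the round-based statistical aggregation concrete, here is a hedged sketch of one common way to summarize a round of panel ratings: report the group median and flag disagreement when the interquartile range is wide. The rating scale and threshold are assumptions for illustration; ExpertLens's actual analytics are not reproduced here.

```python
import statistics

def summarize_round(ratings, disagreement_width=3.0):
    """Summarize one elicitation round: group median plus a disagreement flag
    based on the interquartile range. Generic illustration only."""
    q1, q2, q3 = statistics.quantiles(sorted(ratings), n=4)
    return {
        "median": q2,
        "iqr": q3 - q1,
        "disagreement": (q3 - q1) >= disagreement_width,
    }

# Hypothetical 9-point ratings of an infrastructure investment option.
print(summarize_round([7, 8, 6, 9, 7, 3, 8, 7, 2]))
```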
4.
A simple Arrow-Debreu model with production and adjustment costs is developed, and it is shown how the production parameters interact in determining the steady-state equity premium. We also show that the equity premium is highest in an exchange economy.
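For reference, the exchange-economy benchmark against which the production parameters act is the textbook consumption-based approximation (CRRA utility with risk aversion γ, joint lognormality); this is a standard formula, not the paper's own derivation:

```latex
% Standard consumption-CAPM approximation of the equity premium
% in an exchange economy (illustrative benchmark only):
\[
  \mathbb{E}[r^{e}] - r^{f} \;\approx\; \gamma \,\operatorname{Cov}\!\left(\Delta \ln c_{t+1},\; r^{e}_{t+1}\right)
\]
```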
5.
This paper is concerned with the Bayes estimation of an arbitrary multivariate density f(x), x ∈ R^k. Such an f(x) may be represented as a mixture of a given parametric family of densities {h(x | θ)} with support in R^k, where θ (in R^d) is chosen according to a mixing distribution G. We consider the semiparametric Bayes approach in which G, in turn, is chosen according to a Dirichlet process prior with given parameter α. We then specialize these results to the case where f is expressed as a mixture of multivariate normal densities φ(x | μ, Λ), where μ is the mean vector and Λ is the precision matrix. The results are finally applied to estimating a regression parameter.
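A small sketch of the prior being described, assuming a truncated stick-breaking representation: it draws one random density as a Dirichlet-process mixture of bivariate normals. The choices of α, base measure, and identity covariance are illustrative assumptions; the paper's posterior computations are not shown.

```python
import numpy as np
from scipy.stats import multivariate_normal

def sample_dp_mixture_density(alpha=2.0, truncation=50, dim=2, seed=0):
    """Draw one random density f(x) = sum_j w_j N(x | mu_j, I) from a
    truncated stick-breaking representation of a DP mixture.
    Base measure G0 is N(0, 4 I); this illustrates the prior only."""
    rng = np.random.default_rng(seed)
    betas = rng.beta(1.0, alpha, size=truncation)
    weights = betas * np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    weights /= weights.sum()                      # absorb truncation error
    means = rng.normal(0.0, 2.0, size=(truncation, dim))

    def density(x):
        return sum(w * multivariate_normal.pdf(x, mean=m, cov=np.eye(dim))
                   for w, m in zip(weights, means))
    return density

f = sample_dp_mixture_density()
print(f([0.0, 0.0]))   # evaluate the random density at the origin
```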
6.
This paper revisits the question of whether CEO compensation practices are in keeping with those justified by agency theory. We develop and analyze a new panel Tobit model, estimated by modern Bayesian methods, in which the heterogeneity of covariate effects across firms is modeled in a hierarchical way. We find that our specification of heterogeneity provides a significantly improved fit to the data. Our results show support for the hypothesis that companies increase option awards to their CEOs when agency problems become more pronounced. We also find that liquidity constraints matter in defining the cash–option mix of CEO compensation.
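To fix ideas about the censoring mechanism, here is a sketch of a plain Tobit log-likelihood with a single common coefficient vector; in the paper the coefficients vary by firm under a hierarchical prior and estimation is Bayesian. All names and the simulated data are illustrative.

```python
import numpy as np
from scipy.stats import norm

def tobit_loglik(y, X, beta, sigma):
    """Log-likelihood of a Tobit model censored at zero:
    y = max(0, X @ beta + e), e ~ N(0, sigma^2)."""
    y = np.asarray(y, float)
    mu = np.asarray(X, float) @ np.asarray(beta, float)
    censored = y <= 0
    ll = np.where(
        censored,
        norm.logcdf(-mu / sigma),             # P(latent value <= 0)
        norm.logpdf(y, loc=mu, scale=sigma),  # density of observed awards
    )
    return ll.sum()

# Toy check on simulated data.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
latent = X @ np.array([0.2, 0.8]) + rng.normal(0, 1.0, 500)
y = np.maximum(latent, 0.0)
print(tobit_loglik(y, X, [0.2, 0.8], 1.0))
```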
7.
Stochastic Volatility: Likelihood Inference and Comparison with ARCH Models
In this paper, Markov chain Monte Carlo sampling methods are exploited to provide a unified, practical likelihood-based framework for the analysis of stochastic volatility models. A highly effective method is developed that samples all the unobserved volatilities at once using an approximating offset mixture model, followed by an importance reweighting procedure. This approach is compared with several alternative methods using real data. The paper also develops simulation-based methods for filtering, likelihood evaluation and model failure diagnostics. The issue of model choice using non-nested likelihood ratios and Bayes factors is also investigated. These methods are used to compare the fit of stochastic volatility and GARCH models. All the procedures are illustrated in detail.
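A sketch of the setup, under assumed illustrative parameters: simulate a basic SV model and apply the log-squared transformation that yields a linear state-space form whose measurement error is log chi-squared with one degree of freedom (mean about -1.27, variance about pi^2/2). It is this error that the offset mixture of normals approximates; the mixture constants themselves are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a basic stochastic-volatility model (illustrative parameters):
#   y_t = exp(h_t / 2) * eps_t,   h_t = mu + phi*(h_{t-1} - mu) + eta_t
T, mu, phi, sigma_eta = 1000, -1.0, 0.97, 0.15
h = np.empty(T)
h[0] = mu + sigma_eta / np.sqrt(1 - phi**2) * rng.normal()
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.normal()
y = np.exp(h / 2) * rng.normal(size=T)

# Linearizing transform used by the mixture-sampler approach:
#   log y_t^2 = h_t + log eps_t^2, with log eps_t^2 ~ log chi^2_1.
offset = 1e-6                       # small offset to avoid log(0)
y_star = np.log(y**2 + offset)
resid = y_star - h
print(resid.mean(), resid.var())    # roughly -1.27 and 4.93
```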
8.
Liberalization and globalization of the economy have created many opportunities and challenges for Indian technical education. One of these challenges is how to assess the performance of technical institutions on multiple criteria. This paper focuses on the performance evaluation and ranking of seven Indian Institutes of Technology (IITs) with respect to stakeholders' preferences, using an integrated model that combines fuzzy AHP and COPRAS. Findings based on 2007–2008 data show that the performance of two IITs needs considerable improvement. To the best of our knowledge, this is one of the few studies that evaluate the performance of technical institutions in India.
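As an illustration of the aggregation step, the sketch below applies one common statement of COPRAS to hypothetical scores: weighted normalization, benefit/cost sums, and relative significance Q_i. The institutions, criteria, scores, and weights are invented; in the paper the criteria weights come from fuzzy AHP.

```python
import numpy as np

def copras_rank(matrix, weights, is_benefit):
    """One common statement of COPRAS: weighted-normalize the decision matrix,
    sum benefit and cost criteria separately, then compute Q_i and utility degree.
    Inputs are hypothetical."""
    X = np.asarray(matrix, float)
    w = np.asarray(weights, float)
    D = w * X / X.sum(axis=0)                  # weighted normalization
    benefit = np.asarray(is_benefit, bool)
    S_plus = D[:, benefit].sum(axis=1)
    S_minus = D[:, ~benefit].sum(axis=1)
    Q = S_plus + S_minus.sum() / (S_minus * (1.0 / S_minus).sum())
    utility = 100.0 * Q / Q.max()              # utility degree in percent
    return Q, utility

# Toy example: 3 institutes, criteria = [research output, placements, fee burden].
scores = [[80, 90, 30],
          [70, 85, 25],
          [60, 95, 40]]
Q, utility = copras_rank(scores, weights=[0.5, 0.3, 0.2],
                         is_benefit=[True, True, False])
print(np.argsort(-Q), utility.round(1))        # ranking and utility degrees
```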
9.
We describe a method for estimating the marginal likelihood, based on Chib (1995) and Chib and Jeliazkov (2001), when simulation from the posterior distribution of the model parameters is by the accept–reject Metropolis–Hastings (ARMH) algorithm. The method is developed for one-block and multiple-block ARMH algorithms and does not require the (typically) unknown normalizing constant of the proposal density. The problem of calculating the numerical standard error of the estimates is also considered, and a procedure based on batch means is developed. Two examples, dealing with a multinomial logit model and a Gaussian regression model with non-conjugate priors, are provided to illustrate the efficiency and applicability of the method.
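The starting point is Chib's basic marginal likelihood identity, evaluated at a high-posterior-density point θ*; the paper's contribution is estimating the posterior ordinate when sampling is by ARMH, which is not reproduced here:

```latex
% Chib's basic marginal likelihood identity, evaluated at a point theta^*
% of high posterior density (the ARMH-specific ordinate estimator is omitted):
\[
  \log m(y) \;=\; \log f(y \mid \theta^{*}) \;+\; \log \pi(\theta^{*}) \;-\; \log \pi(\theta^{*} \mid y)
\]
```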
10.
This paper is concerned with the Bayesian analysis of stochastic volatility (SV) models with leverage. Specifically, the paper shows how the widely used Kim et al. [1998. Stochastic volatility: likelihood inference and comparison with ARCH models. Review of Economic Studies 65, 361–393] method, which was developed for SV models without leverage, can be extended to models with leverage. The approach relies on the novel idea of approximating the joint distribution of the outcome and volatility innovations by a suitably constructed ten-component mixture of bivariate normal distributions. The resulting posterior distribution is summarized by MCMC methods, and the small approximation error from working with the mixture approximation is corrected by a reweighting procedure. The overall procedure is fast and highly efficient. We illustrate the ideas on daily returns of the Tokyo Stock Price Index. Finally, extensions of the method are described for superposition models (where the log-volatility is a linear combination of heterogeneous and independent autoregressions) and heavy-tailed error distributions (Student-t and log-normal).
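A sketch of the data-generating process with leverage, under assumed illustrative parameters: the return shock and the next-period log-volatility shock are drawn from a correlated bivariate normal, so negative returns raise future volatility when rho < 0. The simulator below is not the paper's estimation method, and the ten-component mixture approximation is not reproduced.

```python
import numpy as np

def simulate_sv_leverage(T=1000, mu=-1.0, phi=0.97, sigma_eta=0.15, rho=-0.4, seed=7):
    """Simulate an SV model with leverage: eps_t (return shock) and eta_{t+1}
    (log-volatility shock) are correlated with coefficient rho. Illustrative only."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho * sigma_eta],
           [rho * sigma_eta, sigma_eta**2]]
    shocks = rng.multivariate_normal([0.0, 0.0], cov, size=T)  # (eps_t, eta_{t+1})
    h = np.empty(T + 1)
    h[0] = mu
    y = np.empty(T)
    for t in range(T):
        y[t] = np.exp(h[t] / 2) * shocks[t, 0]
        h[t + 1] = mu + phi * (h[t] - mu) + shocks[t, 1]
    return y, h[:-1]

y, h = simulate_sv_leverage()
# Leverage appears as negative correlation between returns and subsequent
# changes in log-volatility.
print(np.corrcoef(y[:-1], np.diff(h))[0, 1])
```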