Similar Documents
20 similar documents found (search time: 171 ms)
1.
We compare four common data collection techniques to elicit preferences: the rating of items, the ranking of items, the partitioning of a given number of points among items, and a reduced form of the technique for comparing items in pairs. University students were randomly assigned a questionnaire employing one of the four techniques. All questionnaires incorporated the same collection of items. The data collected with the four techniques were converted into analogous preference matrices and analyzed with the Bradley–Terry model. The techniques were evaluated with respect to their fit to the model, the precision and reliability of the item estimates, and the consistency among the produced item sequences. The rating, ranking and budget partitioning techniques performed similarly, whereas the reduced pair comparisons technique performed somewhat worse. The item sequence produced by the rating technique was very close to the sequence obtained by averaging over the three other techniques.
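The pipeline the abstract describes, pairwise win counts fitted by the Bradley–Terry model, can be sketched with the classic minorisation–maximisation (MM) iteration. The win matrix below is invented for illustration, not the study's data.

```python
def fit_bradley_terry(wins, n_iter=200):
    """Fit Bradley-Terry worth parameters by the classic MM iteration.

    wins[i][j] = number of times item i was preferred over item j.
    Returns worths normalised to sum to one.
    """
    k = len(wins)
    p = [1.0 / k] * k
    for _ in range(n_iter):
        new_p = []
        for i in range(k):
            w_i = sum(wins[i])  # total wins of item i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(k) if j != i)
            new_p.append(w_i / denom if denom > 0 else p[i])
        total = sum(new_p)
        p = [x / total for x in new_p]  # renormalise each sweep
    return p

# Invented preference matrix for three items
wins = [[0, 8, 9],
        [2, 0, 7],
        [1, 3, 0]]
worths = fit_bradley_terry(wins)
print(worths)  # item 0 obtains the largest worth
```

The same fit applies to any of the four techniques once their responses are converted to pairwise win counts, which is exactly what makes the comparison in the abstract possible.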

2.
Traditional oligopoly models hold that firms compete in the same strategic variable, output (Cournot) or price (Bertrand). Alternatively, a hybrid model allows some firms to compete in output and other firms to compete in price, also known as the Cournot–Bertrand model. When the choice of strategic variable is endogenous, the established dominant strategy is output competition. A growing body of work demonstrates, however, that the Cournot–Bertrand outcome can be a subgame‐perfect Nash equilibrium in the presence of market asymmetries. Observations of real‐world markets consistent with Cournot–Bertrand behavior bolster justification for the model and have stimulated an impressive and evolving literature on advances and applications. We lay out the roots of the Cournot–Bertrand model and explore a number of model developments. We categorize 12 primary models in the literature based on alternative assumptions. In particular, some authors consider when the timing of play as well as the choice of strategic variable are endogenous. Altogether, this research identifies when Cournot–Bertrand behavior can emerge in a dynamic setting and under alternative market conditions. We also review the Cournot–Bertrand model applications in the fields of international economics, industrial organization, labor, and public economics. We expect the literature to continue to expand in the future.

3.
Jean Gottmann and Terry G. McGee are both world-renowned geographers. Working in different eras, studying the development of different regions, and attending to different urbanisation phenomena, they proposed two distinctly different classic theories of the city, and both are representative figures in the development of modern urban theory. The multi-faceted comparison in this paper helps us clearly recognise the essential differences between the American and the Asian models of urban development, and to reflect, from an objective standpoint, on a path of urban development with Chinese characteristics.

4.
Junming Liu, Kaoru Tone 《Socio》, 2008, 42(2): 75–91
When measuring technical efficiency with existing data envelopment analysis (DEA) techniques, mean efficiency scores generally exhibit volatile patterns over time. This appears to be at odds with the general perception of learning-by-doing management, due to Arrow [The economic implications of learning by doing. Review of Economic Studies 1964; 154–73]. Further, this phenomenon is largely attributable to the fundamental assumption of deterministic data maintained in DEA models, and to the difficulty such models have in incorporating environmental influences. This paper proposes a three-stage method to measure DEA efficiency while controlling for the impacts of both statistical noise and environmental factors. Using panel data on Japanese banking over the period 1997–2001, we demonstrate that the proposed approach greatly mitigates these weaknesses of DEA models. We find a stable upward trend in mean measured efficiency, indicating that, on average, the bankers were learning over the sample period. Therefore, we conclude that this new method is a significant improvement relative to those DEA models currently used by researchers, corporate management, and industrial regulatory bodies to evaluate performance of their respective interests.

5.
Appropriate modelling of Likert‐type items should account for the scale level and the specific role of the neutral middle category, which is present in most Likert‐type items in common use. Powerful hierarchical models that account for both aspects are proposed. To avoid biased estimates, the models separate the neutral category when modelling the effects of explanatory variables on the outcome. The main model proposed uses binary response models as building blocks in a hierarchical way. It has the advantage that it can be easily extended to include response style effects and non‐linear smooth effects of explanatory variables. By a simple transformation of the data, available software for binary response variables can be used to fit the model. The proposed hierarchical model can be used to investigate the effects of covariates on single Likert‐type items and also for the analysis of a combination of items. For both cases, estimation tools are provided. The usefulness of the approach is illustrated by applying the methodology to a large data set.
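The "simple transformation of the data" can be sketched for a 5-point item: each response is recoded into a binary neutral/non-neutral outcome and, conditionally on a non-neutral answer, a binary direction outcome, each of which standard binary-response software can then fit. The coding below is an illustrative assumption, not the authors' exact scheme.

```python
def split_likert(response, neutral=3):
    """Recode a 5-point Likert response into hierarchical binary outcomes.

    Step 1: is the response the neutral middle category?
    Step 2 (only observed if not neutral): is it on the positive side?
    Returns (is_neutral, is_positive_or_None).
    """
    is_neutral = 1 if response == neutral else 0
    if is_neutral:
        return (1, None)  # step 2 is not observed for neutral answers
    return (0, 1 if response > neutral else 0)

rows = [1, 2, 3, 4, 5, 3]
expanded = [split_likert(r) for r in rows]
print(expanded)
```

Stacking these binary records is what lets any binary-response routine serve as the building block of the hierarchical model.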

6.
We propose a new methodology for designing flexible proposal densities for the joint posterior density of parameters and states in a nonlinear, non‐Gaussian state space model. We show that a highly efficient Bayesian procedure emerges when these proposal densities are used in an independent Metropolis–Hastings algorithm or in importance sampling. Our method provides a computationally more efficient alternative to several recently proposed algorithms. We present extensive simulation evidence for stochastic intensity and stochastic volatility models based on Ornstein–Uhlenbeck processes. For our empirical study, we analyse the performance of our methods for corporate default panel data and stock index returns. Copyright © 2016 John Wiley & Sons, Ltd.
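The independent Metropolis–Hastings step at the heart of such procedures can be sketched with a toy standard-normal target and a fixed Laplace proposal standing in for the flexible proposal densities the paper constructs; all tuning choices here are illustrative.

```python
import math
import random

random.seed(42)

def log_target(x):
    """Unnormalised log density of the target: standard normal."""
    return -0.5 * x * x

def log_proposal(x):
    """Unnormalised log density of the Laplace(0, 1) proposal."""
    return -abs(x)

def draw_proposal():
    """Draw from Laplace(0, 1) by inverse CDF."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -sign * math.log(1.0 - 2.0 * abs(u))

def independent_mh(n):
    """Independence sampler: the proposal does not depend on the state."""
    x, chain = 0.0, []
    for _ in range(n):
        y = draw_proposal()
        log_alpha = (log_target(y) - log_target(x)) + (log_proposal(x) - log_proposal(y))
        if math.log(random.random()) < log_alpha:
            x = y
        chain.append(x)
    return chain

chain = independent_mh(20000)
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
print(mean, var)  # roughly 0 and 1
```

Because the importance ratio target/proposal is bounded here, the chain mixes quickly, which is the property a well-designed proposal density buys in the paper's setting.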

7.
Data that have a multilevel structure occur frequently across a range of disciplines, including epidemiology, health services research, public health, education and sociology. We describe three families of regression models for the analysis of multilevel survival data. First, Cox proportional hazards models with mixed effects incorporate cluster‐specific random effects that modify the baseline hazard function. Second, piecewise exponential survival models partition the duration of follow‐up into mutually exclusive intervals and fit a model that assumes that the hazard function is constant within each interval. This is equivalent to a Poisson regression model that incorporates the duration of exposure within each interval. By incorporating cluster‐specific random effects, generalised linear mixed models can be used to analyse these data. Third, after partitioning the duration of follow‐up into mutually exclusive intervals, one can use discrete time survival models that use a complementary log–log generalised linear model to model the occurrence of the outcome of interest within each interval. Random effects can be incorporated to account for within‐cluster homogeneity in outcomes. We illustrate the application of these methods using data on patients hospitalised with a heart attack, implemented in three statistical programming languages (R, SAS and Stata).
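The piecewise exponential family rests on a person-period expansion: follow-up is split at chosen cut-points, and each subject contributes one row per interval with an exposure time and an event indicator, which a Poisson model with a log-exposure offset can then fit. The cut-points and field names below are illustrative.

```python
def person_period(time, event, cuts):
    """Expand one subject's (time, event) into piecewise-exponential rows.

    cuts defines interval boundaries [0, c1), [c1, c2), ...; each row carries
    the exposure time spent in that interval and an event indicator, ready
    for a Poisson model with log(exposure) as the offset.
    """
    rows = []
    bounds = [0.0] + list(cuts) + [float("inf")]
    for k in range(len(bounds) - 1):
        lo, hi = bounds[k], bounds[k + 1]
        if time <= lo:
            break  # subject left observation before this interval
        exposure = min(time, hi) - lo
        event_here = 1 if (event == 1 and time <= hi) else 0
        rows.append({"interval": k, "exposure": exposure, "event": event_here})
    return rows

# A subject who has the event at t = 3.5, with cut-points at 1, 2 and 3
print(person_period(time=3.5, event=1, cuts=[1, 2, 3]))
```

Summing the exposures recovers the total follow-up, and exactly one row carries the event, which is what makes the Poisson likelihood equivalent to the piecewise exponential one.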

8.
We develop a Bayesian median autoregressive (BayesMAR) model for time series forecasting. The proposed method utilizes time-varying quantile regression at the median, favorably inheriting the robustness of median regression in contrast to the widely used mean-based methods. Motivated by a working Laplace likelihood approach in Bayesian quantile regression, BayesMAR adopts a parametric model bearing the same structure as autoregressive models by altering the Gaussian error to Laplace, leading to a simple, robust, and interpretable modeling strategy for time series forecasting. We estimate model parameters by Markov chain Monte Carlo. Bayesian model averaging is used to account for model uncertainty, including the uncertainty in the autoregressive order, in addition to a Bayesian model selection approach. The proposed methods are illustrated using simulations and real data applications. An application to U.S. macroeconomic data forecasting shows that BayesMAR leads to favorable and often superior predictive performance compared to the selected mean-based alternatives under various loss functions that encompass both point and probabilistic forecasts. The proposed methods are generic and can be used to complement a rich class of methods that build on autoregressive models.
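A minimal sketch of the working Laplace likelihood idea: a random-walk Metropolis sampler for an AR(1) whose Gaussian error is replaced by a Laplace error, so the conditional median is what the autoregression models. The simulated series, flat prior and tuning constants are illustrative assumptions, not the BayesMAR implementation.

```python
import math
import random

random.seed(1)

def laplace_loglik(theta, y):
    """Working Laplace log-likelihood of an AR(1) at the median (scale fixed at 1)."""
    c, phi = theta
    return -sum(abs(y[t] - c - phi * y[t - 1]) for t in range(1, len(y)))

def rw_metropolis(y, n_iter=5000, step=0.1):
    theta = [0.0, 0.0]  # intercept, AR coefficient
    ll = laplace_loglik(theta, y)
    draws = []
    for _ in range(n_iter):
        prop = [theta[0] + step * random.gauss(0, 1),
                theta[1] + step * random.gauss(0, 1)]
        ll_prop = laplace_loglik(prop, y)
        if math.log(random.random()) < ll_prop - ll:  # flat prior cancels
            theta, ll = prop, ll_prop
        draws.append(list(theta))
    return draws

# Simulate an AR(1) with coefficient 0.6 and Laplace noise
y = [0.0]
for _ in range(300):
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    y.append(0.6 * y[-1] + (-sign) * math.log(1.0 - 2.0 * abs(u)))

draws = rw_metropolis(y)
post = draws[2000:]  # discard burn-in
phi_hat = sum(d[1] for d in post) / len(post)
print(round(phi_hat, 2))  # posterior mean of the AR coefficient
```

Averaging such samplers over different autoregressive orders is the model-averaging layer the abstract mentions; it is omitted here.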

9.
Input–output (IO) models, describing trade between different sectors and regions, are widely used to study the environmental repercussions of human activities. A frequent challenge in assembling an IO model or linking several such models is the absence of flow data with the same level of detail for all components. Such problems can be addressed using proportional allocation, which is a form of algebraic transformation. In this paper, we propose a novel approach whereby the IO system is viewed as a network, the topology of which is transformed with the addition of virtual nodes so that available empirical flow data can be mapped directly to existing links, with no additional estimation required and no impact on results. As IO systems become increasingly disaggregated, and coupled to adjacent databases and models, the adaptability of IO frameworks becomes increasingly important. We show that topological transformations also offer large advantages in terms of transparency, modularity and, increasingly important for global IO models, efficiency. We illustrate the results in the context of trade linking, multi-scale integration and other applications.
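The virtual-node idea can be sketched on a toy flow table: part of a flow is rerouted through an added node so that an empirical flow is represented explicitly while every row and column total, and hence every downstream result, is unchanged. The sector names are invented.

```python
def add_virtual_node(flows, src, dst, node, amount):
    """Reroute `amount` of the src->dst flow through a virtual node,
    leaving all row and column totals (and so all results) unchanged."""
    flows = dict(flows)
    flows[(src, dst)] = flows.get((src, dst), 0.0) - amount
    flows[(src, node)] = flows.get((src, node), 0.0) + amount
    flows[(node, dst)] = flows.get((node, dst), 0.0) + amount
    return flows

flows = {("steel", "autos"): 100.0}
new = add_virtual_node(flows, "steel", "autos", "steel_via_rail", 40.0)
print(new)

# Outflow from 'steel' and inflow to 'autos' are both still 100
outflow = sum(v for (s, _), v in new.items() if s == "steel")
inflow = sum(v for (_, d), v in new.items() if d == "autos")
print(outflow, inflow)
```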

10.
We develop a Bayesian random compressed multivariate heterogeneous autoregressive (BRC-MHAR) model to forecast the realized covariance matrices of stock returns. The proposed model randomly compresses the predictors and reduces the number of parameters. We also construct several competing multivariate volatility models with alternative shrinkage methods to compress the parameters' dimensions. We compare the forecast performance of the proposed models with that of the competing models based on both statistical and economic evaluations. The results of the statistical evaluation suggest that the BRC-MHAR models have better forecast precision than the competing models for the short-term horizon. The results of the economic evaluation suggest that the BRC-MHAR models are superior to the competing models in terms of the average return, the Sharpe ratio and economic value.
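The random-compression step can be sketched as a random Gaussian projection that maps the original predictors to a lower-dimensional set before estimation; the dimensions and scaling here are illustrative, not the paper's exact scheme.

```python
import random

random.seed(7)

def random_compress(X, m):
    """Project n predictors down to m via a random Gaussian matrix,
    reducing the parameter dimension before estimation."""
    n = len(X[0])
    R = [[random.gauss(0, 1 / m ** 0.5) for _ in range(m)] for _ in range(n)]
    return [[sum(row[k] * R[k][j] for k in range(n)) for j in range(m)]
            for row in X]

X = [[1.0, 2.0, 3.0, 4.0],
     [0.5, 0.1, 0.2, 0.9]]  # 2 observations, 4 predictors
Z = random_compress(X, 2)
print(len(Z), len(Z[0]))  # 2 2
```

A model fitted to Z has far fewer coefficients than one fitted to X, which is the source of the parameter reduction the abstract describes.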

11.
The linear mixed-effects model has been widely used for the analysis of continuous longitudinal data. This paper demonstrates that the linear mixed model can be adapted and used for the analysis of structured repeated measurements. A computational advantage of the proposed methodology is that there is no extra burden on the analyst since any software for linear mixed-effects models can be used to fit the proposed models. Two data sets from clinical psychology are used as motivating examples and to illustrate the methods.

12.
We estimate productivity growth without recourse to data on factor input shares or prices. In the proposed model, the economy is represented by the Leontief input–output model, which is extended by the constraints of primary inputs. A Luenberger productivity indicator is proposed to estimate productivity change; this is then decomposed in a way that enables us to examine the contributions of individual production factors and individual commodities to productivity change. The results allow for the identification of inputs or outputs that are the drivers of the overall productivity change. Their contributions are then decomposed into efficiency change and technical change components. Using input–output tables of the US economy for the period 1977–2006, we show that technical progress has been the main source of productivity change. Technical progress was mostly driven by capital, whereas low-skilled labour contributed negatively.
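The Leontief quantity model underlying this framework determines gross outputs x from final demand f via x = (I - A)^(-1) f. A two-sector sketch with invented coefficients:

```python
def leontief_output(A, f):
    """Solve x = (I - A)^(-1) f for a 2-sector Leontief economy
    using the closed-form 2x2 matrix inverse."""
    a, b = 1 - A[0][0], -A[0][1]
    c, d = -A[1][0], 1 - A[1][1]
    det = a * d - b * c
    return [(d * f[0] - b * f[1]) / det,
            (-c * f[0] + a * f[1]) / det]

A = [[0.2, 0.3],   # technical coefficients: input i per unit of output j
     [0.4, 0.1]]
f = [100.0, 50.0]  # final demand by sector
x = leontief_output(A, f)
print(x)  # gross outputs: [175.0, 133.33...]
```

Checking that (I - A)x reproduces f confirms the solution; the primary-input constraints and the Luenberger decomposition of the paper are built on top of this core system.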

13.
We contribute to an emerging literature that brings the constant elasticity of substitution (CES) specification of the production function into the analysis of business cycle fluctuations. Using US data, we estimate by Bayesian maximum likelihood methods a standard medium-sized DSGE model with a CES rather than Cobb–Douglas (CD) technology. We estimate an elasticity of substitution between capital and labour well below unity, at 0.15–0.18. In a marginal likelihood race, CES decisively beats the CD production function, and this is matched by its ability to fit the data better in terms of second moments. We show that this result is mainly driven by the implied fluctuations of factor shares under the CES specification. The CES model's performance is further improved when the estimation is carried out under an imperfect information assumption. Hence the main message for DSGE models is that we should dismiss once and for all the use of CD for business cycle analysis.
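For reference, the CES technology nests CD as the elasticity of substitution sigma tends to one, while an elasticity in the estimated 0.15–0.18 range markedly changes output for given inputs. The parameter values below are chosen purely for illustration:

```python
def ces(K, L, alpha=0.3, sigma=0.17, A=1.0):
    """CES output; sigma is the elasticity of substitution between K and L."""
    if abs(sigma - 1.0) < 1e-9:
        return A * K ** alpha * L ** (1 - alpha)  # Cobb-Douglas limit
    rho = (sigma - 1.0) / sigma
    return A * (alpha * K ** rho + (1 - alpha) * L ** rho) ** (1.0 / rho)

cd = ces(K=8.0, L=27.0, sigma=1.0)
near_cd = ces(K=8.0, L=27.0, sigma=0.999)   # nearly reproduces CD
low_sub = ces(K=8.0, L=27.0, sigma=0.17)    # in the estimated 0.15-0.18 range
print(cd, near_cd, low_sub)
```

With sigma well below one the scarce factor dominates output, which is why the CES specification can generate the factor-share fluctuations the abstract points to.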

14.
We examine how the accuracy of real‐time forecasts from models that include autoregressive terms can be improved by estimating the models on ‘lightly revised’ data instead of using data from the latest‐available vintage. The benefits of estimating autoregressive models on lightly revised data are related to the nature of the data revision process and the underlying process for the true values. Empirically, we find improvements in root mean square forecasting error of 2–4% when forecasting output growth and inflation with univariate models, and of 8% with multivariate models. We show that multiple‐vintage models, which explicitly model data revisions, require large estimation samples to deliver competitive forecasts. Copyright © 2012 John Wiley & Sons, Ltd.

15.
In this paper, we extend the heterogeneous panel data stationarity test of Hadri [Econometrics Journal, Vol. 3 (2000) pp. 148–161] to cases where breaks are taken into account. Four models with different patterns of breaks under the null hypothesis are specified. Two of the models have already been proposed by Carrion‐i‐Silvestre et al. [Econometrics Journal, Vol. 8 (2005) pp. 159–175]. The moments of the statistics corresponding to the four models are derived in closed form via characteristic functions. We also provide the exact moments of a modified statistic that does not asymptotically depend on the location of the break point under the null hypothesis. The cases where the break point is unknown are also considered. For the model with breaks in the level and no time trend and for the model with breaks in the level and in the time trend, Carrion‐i‐Silvestre et al. [Econometrics Journal, Vol. 8 (2005) pp. 159–175] showed that the number of breaks and their positions may be allowed to differ across individuals for cases with known and unknown breaks. Their results can easily be extended to the proposed modified statistic. The asymptotic distributions of all the statistics proposed are derived under the null hypothesis and are shown to be normal. We show by simulations that our suggested tests have, in general, good finite-sample performance, with the exception of the modified test. In an empirical application to the consumer prices of 22 OECD countries during the period from 1953 to 2003, we found evidence of stationarity once a structural break and cross‐sectional dependence are accommodated.
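The KPSS-type building block of the Hadri statistic (here without breaks) can be sketched as follows; the panel is artificial, and the short-run variance estimator omits the kernel correction a real implementation would use.

```python
def kpss_component(y):
    """KPSS-type statistic for one unit: squared partial sums of the
    demeaned series, normalised by T^2 times the residual variance."""
    T = len(y)
    mean = sum(y) / T
    e = [v - mean for v in y]
    s, partial = 0.0, []
    for v in e:
        s += v
        partial.append(s)
    sigma2 = sum(v * v for v in e) / T
    return sum(p * p for p in partial) / (T * T * sigma2)

def hadri_lm(panel):
    """Hadri's LM: average the unit statistics across the panel."""
    return sum(kpss_component(y) for y in panel) / len(panel)

# Level-stationary units give a small statistic; a trending unit does not.
stationary = [[(-1) ** t * 0.5 * (i + 1) for t in range(100)] for i in range(3)]
trending = [float(t) for t in range(100)]
print(hadri_lm(stationary), kpss_component(trending))
```

The paper's extension replaces the single mean with segment means around the break points, which is what changes the moments the authors derive.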

16.
This article considers the long‐run relationship between nominal exchange rates and fundamentals from a different perspective. We apply a long‐horizon regression approach proposed by Fisher and Seater (1993) and find evidence supporting the explanatory power of exchange rate models. In particular, the Taylor‐rule model outperforms other conventional models. We then use the inverse power function (IPF) proposed by Andrews (1989) to investigate the power of the Fisher–Seater test. The IPF analysis provides additional evidence supporting exchange rate models.

17.
book reviews     
WAGE RESTRAINT BY CONSENSUS: BRITAIN'S SEARCH FOR AN INCOMES POLICY AGREEMENT, 1965–79, by Warren H. Fishbein
CONSENT AND EFFICIENCY: LABOUR RELATIONS AND MANAGEMENT IN THE STATE ENTERPRISE, by E. Batstone, A. Ferner and M. Terry
INDUSTRIAL DEMOCRACY AT SEA, edited by Robert Schrank
PARTICIPATION AND INDUSTRIAL DEMOCRACY: THE SHOPFLOOR VIEW, by Paul Rathkey
SOLVING COSTLY ORGANIZATIONAL CONFLICTS, by R. Blake and J. Mouton

18.
Terry Arthur argues that the current UK banking system represents the very opposite of a free market, and that a genuine free market, in which there is no place for a nationalised central bank, is the best solution. One of the most successful examples of such a market is the Scottish experience of the nineteenth century, destroyed only by Peel's Banking Acts of 1844–45.

19.
Goodman (1972) proposed several models for the analysis of general I × I square tables, with particular emphasis on social mobility data. We demonstrate in this paper that most of his models can be reproduced by combinations of new models proposed here and the various well-known models that have received considerable attention in the literature. Our presentation is both concise and simple to comprehend. The various models considered in this study are fitted to ten data sets, including the much-analyzed 5×5 Danish and British social mobility data sets. The results suggest that in some cases more parsimonious models than those considered earlier by various authors are possible for explaining the variation in the data analyzed in this study.

20.
We propose a nonparametric Bayesian approach to estimate time‐varying grouped patterns of heterogeneity in linear panel data models. Unlike the classical approach in Bonhomme and Manresa (Econometrica, 2015, 83, 1147–1184), our approach can accommodate selection of the optimal number of groups and model estimation jointly, and also be readily extended to quantify uncertainties in the estimated group structure. Our proposed approach performs well in Monte Carlo simulations. Using our approach, we successfully replicate the estimated relationship between income and democracy in Bonhomme and Manresa and the group characteristics when we use the same number of groups. Furthermore, we find that the optimal number of groups could depend on model specifications on heteroskedasticity and discuss ways to choose models in practice.
