Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
Do groups make better judgments and decisions than individuals? We tested the hypothesis that the advantage of groups over individuals in decision-making depends on the group composition. Our study used susceptibility to the framing effect as a measure of decision quality. Individuals were assigned to one of two perspectives on a choice problem. The individuals were asked to indicate their individual preference between a risky option and a risk-free option. Next, they were asked to consider the same (or a related) choice problem as a group. Homogeneous groups were composed of similarly framed individuals, while the heterogeneous ones were composed of differently framed individuals. In comparison to individual preferences, the homogeneous groups’ preferences were polarized, and thus the framing effect was amplified; in contrast, the heterogeneous groups’ preferences converged, and thus the framing effect was reduced to zero. The findings are discussed with regard to group polarization, the effects of heterogeneity on group performance, and the Delphi forecasting method.

2.
We show that exact computation of a family of ‘max weighted score’ estimators, including Manski’s max score estimator, can be achieved efficiently by reformulating them as mixed integer programs (MIP) with disjunctive constraints. The advantage of our MIP formulation is that estimates are exact and can be computed using widely available solvers in reasonable time. In a classic work-trip mode choice application, our method delivers exact estimates that lead to a different economic interpretation of the data than previous heuristic estimates. In a small Monte Carlo study we find that our approach is computationally efficient for usual estimation problem sizes.
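The abstract's MIP formulation requires a dedicated solver, but the max score criterion itself is easy to state. Below is a minimal sketch, with a brute-force grid search standing in for the exact mixed-integer optimization; the simulated design, sample size, and grid are all hypothetical, not taken from the paper.

```python
import numpy as np

def max_score_objective(beta, X, y):
    """Manski's max score criterion: the number of observations whose
    observed binary choice agrees with the sign of the index X @ beta."""
    return int(np.sum(y == (X @ beta >= 0)))

# Simulated binary choice data (hypothetical design, not the paper's).
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = (X @ np.array([1.0, -2.0]) + rng.logistic(size=n) >= 0).astype(int)

# Scale is not identified, so fix the first coefficient at 1 and search
# the second on a grid -- a brute-force stand-in for the exact MIP search.
grid = np.linspace(-5.0, 5.0, 1001)
scores = [max_score_objective(np.array([1.0, b]), X, y) for b in grid]
beta_hat = grid[int(np.argmax(scores))]
```

Unlike the step-function objective's local heuristics, the MIP approach in the paper guarantees a global maximizer; the grid search above only approximates it.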

3.
This paper proposes a nonlinear panel data model which can endogenously generate both ‘weak’ and ‘strong’ cross-sectional dependence. The model’s distinguishing characteristic is that a given agent’s behaviour is influenced by an aggregation of the views or actions of those around them. The model allows for considerable flexibility in terms of the genesis of this herding or clustering type behaviour. At an econometric level, the model is shown to nest various extant dynamic panel data models. These include panel AR models, spatial models, which accommodate weak dependence only, and panel models where cross-sectional averages or factors exogenously generate strong, but not weak, cross-sectional dependence. An important implication is that the appropriate model for the aggregate series becomes intrinsically nonlinear, due to the clustering behaviour, and thus requires the disaggregates to be simultaneously considered with the aggregate. We provide the associated asymptotic theory for estimation and inference. This is supplemented with Monte Carlo studies and two empirical applications which indicate the utility of our proposed model as a vehicle to model different types of cross-sectional dependence.

4.
This paper examines the usefulness of a more refined business cycle classification for monthly industrial production (IP), beyond the usual distinction between expansions and contractions. Univariate Markov-switching models show that a three regime model is more appropriate than a model with only two regimes. Interestingly, the third regime captures ‘severe recessions’, contrasting the conventional view that the additional third regime represents a ‘recovery’ phase. This is confirmed by means of Markov-switching vector autoregressive models that allow for phase shifts between the cyclical regimes of IP and the Conference Board's Leading Economic Index (LEI). The timing of the severe recession regime mostly corresponds with periods of substantial financial market distress and severe credit squeezes, providing empirical evidence for the ‘financial accelerator’ theory.
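A three-regime structure of the kind described can be illustrated by simulating regime paths from a hand-picked transition matrix. All numbers below are illustrative, not the paper's estimates; they are chosen so that severe recessions are rare and entered mainly from mild contractions.

```python
import numpy as np

# Hypothetical 3-state transition matrix: 0 = expansion,
# 1 = mild contraction, 2 = severe recession.
P = np.array([[0.95, 0.05, 0.00],
              [0.10, 0.85, 0.05],
              [0.00, 0.20, 0.80]])
means = np.array([0.3, -0.2, -1.5])   # monthly IP growth per regime (%)

rng = np.random.default_rng(1)
T = 600
states = np.zeros(T, dtype=int)
for t in range(1, T):
    # Draw the next regime from the row of P for the current regime.
    states[t] = rng.choice(3, p=P[states[t - 1]])
growth = means[states] + rng.normal(scale=0.4, size=T)
```

Estimation would reverse this exercise: given only `growth`, a Markov-switching model recovers `P`, the regime means, and the smoothed regime probabilities.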

5.
Nonparametric transfer function models   (total citations: 1; self-citations: 0; citations by others: 1)
In this paper a class of nonparametric transfer function models is proposed to model nonlinear relationships between ‘input’ and ‘output’ time series. The transfer function is smooth with unknown functional forms, and the noise is assumed to be a stationary autoregressive-moving average (ARMA) process. The nonparametric transfer function is estimated jointly with the ARMA parameters. By modeling the correlation in the noise, the transfer function can be estimated more efficiently. The parsimonious ARMA structure improves the estimation efficiency in finite samples. The asymptotic properties of the estimators are investigated. The finite-sample properties are illustrated through simulations and one empirical example.
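A simplified stand-in for this setup: estimate a smooth transfer function between an 'input' and an 'output' series by kernel regression, with AR(1) noise in the simulated data. The paper estimates the transfer function jointly with the ARMA parameters for efficiency; the sketch below ignores that refinement, and all simulation settings are assumptions.

```python
import numpy as np

def nw_smooth(x_grid, x, y, h):
    """Nadaraya-Watson kernel estimate of a smooth transfer function --
    a simplified stand-in for the paper's joint nonparametric/ARMA fit."""
    w = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, 400)            # 'input' series
noise = np.zeros(400)                  # AR(1) noise, phi = 0.5
for t in range(1, 400):
    noise[t] = 0.5 * noise[t - 1] + rng.normal(scale=0.3)
y = np.sin(x) + noise                  # 'output' series

grid = np.linspace(-2, 2, 41)
f_hat = nw_smooth(grid, x, y, h=0.3)   # estimate of the transfer function
```

Exploiting the AR structure of `noise` in estimation, as the paper does, would shrink the variance of `f_hat` further.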

6.
This brief article first investigates key dimensions underlying the progress realized by data envelopment analysis (DEA) methodologies. The resulting perspective is then used to encourage reflection on future paths for the field. Borrowing from the social sciences literature, we distinguish between problematization and gap identification in suggesting strategies to push the DEA research envelope. Emerging evidence of a declining number of influential methodological (theory)-based publications, and a flattening diffusion of applications imply an unfolding maturity of the field. Such findings suggest that focusing on known limitations of DEA, and/or of its applications, while searching for synergistic partnerships with other methodologies, can create new and fertile grounds for research. Possible future directions might thus include ‘DEA in practice’, ‘opening the black-box of production,’ ‘rationalizing inefficiency,’ and ‘the productivity dilemma.’ What we are therefore proposing is a strengthening of the methodology's contribution to fields of endeavor both including, and beyond, those considered in the past.

7.
This paper considers Bayesian estimation strategies for first-price auctions within the independent private value paradigm. We develop an ‘optimization’ error approach that allows for estimation of values assuming that observed bids differ from optimal bids. We further augment this approach by allowing systematic over- or underbidding by bidders, using ideas from the stochastic frontier literature. We perform a simulation study to showcase the appeal of the method and apply the techniques to timber auction data collected in British Columbia. Our results suggest that significant underbidding is present in the timber auctions.

8.
The literature on the characterization of aggregate excess and market demand has generated three types of results: global, local, or ‘at a point’. In this note, we study the relationship between the last two approaches. We prove that within the class of functions satisfying standard conditions and whose Jacobian matrix is negative semi-definite, only n/2+1 agents are needed for the ‘at’ decomposition. We ask whether, within the same class, the ‘around’ decomposition also requires only n/2+1 agents.

9.
Microeconometric treatments of discrete choice under risk are typically homoscedastic latent variable models. Specifically, choice probabilities are given by preference functional differences (given by expected utility, rank-dependent utility, etc.) embedded in cumulative distribution functions. This approach has a problem: Estimated utility function parameters meant to represent agents’ degree of risk aversion in the sense of Pratt (1964) do not imply a suggested “stochastically more risk averse” relation within such models. A new heteroscedastic model called “contextual utility” remedies this, and estimates in one data set suggest it explains (and especially predicts) as well as or better than other stochastic models.
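The heteroscedastic idea can be sketched as follows: rescale the preference functional difference by the utility range of the choice context before it enters the link function, so noise scales with the stakes. The functional form, link, and parameter names below are assumptions based on the abstract's description, not the paper's exact specification.

```python
import math

def contextual_choice_prob(eu_a, eu_b, u_best, u_worst, lam):
    """Probability of choosing option A under a 'contextual utility'-style
    heteroscedastic model: the expected-utility difference is divided by
    the utility range of the context before a logistic link is applied.
    This is a sketch, not the paper's exact formulation."""
    z = lam * (eu_a - eu_b) / (u_best - u_worst)
    return 1.0 / (1.0 + math.exp(-z))

# Widening the context's utility range attenuates the same EU difference:
p_narrow = contextual_choice_prob(1.2, 1.0, 1.5, 0.5, lam=4.0)
p_wide = contextual_choice_prob(1.2, 1.0, 3.0, 0.0, lam=4.0)
```

In a homoscedastic model the two probabilities above would coincide; here the wider context makes the same utility difference less decisive.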

10.
We extend the analytical results for reduced form realized volatility based forecasting in ABM (2004) to allow for market microstructure frictions in the observed high-frequency returns. Our results build on the eigenfunction representation of the general stochastic volatility class of models developed by Meddahi (2001). In addition to traditional realized volatility measures and the role of the underlying sampling frequencies, we also explore the forecasting performance of several alternative volatility measures designed to mitigate the impact of the microstructure noise. Our analysis is facilitated by a simple unified quadratic form representation for all these estimators. Our results suggest that the detrimental impact of the noise on forecast accuracy can be substantial. Moreover, the linear forecasts based on a simple-to-implement ‘average’ (or ‘subsampled’) estimator obtained by averaging standard sparsely sampled realized volatility measures generally perform on par with the best alternative robust measures.
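The 'average' (subsampled) estimator mentioned at the end is straightforward to implement: compute a sparsely sampled realized variance at each possible sampling offset and average the results. A sketch on simulated noisy high-frequency returns follows; all parameter values are illustrative.

```python
import numpy as np

def realized_vol(returns):
    """Plain realized variance: sum of squared high-frequency returns."""
    return float(np.sum(returns ** 2))

def averaged_rv(returns, k):
    """'Average' (subsampled) estimator: sparse realized variance computed
    at each of the k possible sampling offsets, then averaged."""
    n = len(returns)
    rvs = []
    for offset in range(k):
        m = (n - offset) // k
        sparse = returns[offset:offset + m * k].reshape(m, k).sum(axis=1)
        rvs.append(float(np.sum(sparse ** 2)))
    return float(np.mean(rvs))

# Simulated day of 1-second returns contaminated by i.i.d. microstructure
# noise: observed price = efficient price + noise, so observed returns
# pick up a first difference of the noise. Parameters are illustrative.
rng = np.random.default_rng(3)
n = 23400
true_r = rng.normal(0.0, np.sqrt(1e-4 / n), n)   # daily variance 1e-4
u = rng.normal(0.0, 5e-4, n + 1)                 # noise in log prices
obs_r = true_r + np.diff(u)

rv_dense = realized_vol(obs_r)    # dominated by noise at 1-second sampling
rv_avg = averaged_rv(obs_r, 300)  # 5-minute subsampled average
```

Sparse sampling curbs the noise-induced upward bias, while averaging across offsets recovers most of the efficiency lost by discarding data.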

11.
This paper proposes an extension to Global Vector Autoregressive (GVAR) models to capture time-varying interdependence among financial variables. Government bond spreads in the euro area feature a time-varying pattern of co-movement that poses a serious challenge for econometric modelling and forecasting. This pattern of the data is not captured by the standard specification that models spreads as persistent processes reverting to a time-varying mean determined by two factors: a local factor, driven by fiscal fundamentals and growth, and a global world factor, driven by the market’s appetite for risk. This paper argues that a third factor, expectations of exchange rate devaluation, gained traction during the crises. This factor is well captured via a GVAR that models the interdependence among spreads by making each country’s spread a function of global European spreads. Global spreads capture the exposure of each country’s spread to other spreads in the euro area in terms of the time-varying ‘distance’ between their fiscal fundamentals. This new specification dominates the standard one in modelling the time-varying pattern of co-movements among spreads and the response of euro area spreads to the Greek debt crisis.

12.
When location shifts occur, cointegration-based equilibrium-correction models (EqCMs) face forecasting problems. We consider alleviating such forecast failure by updating, intercept corrections, differencing, and estimating the future progress of an ‘internal’ break. Updating leads to a loss of cointegration when an EqCM suffers an equilibrium-mean shift, but helps when collinearities are changed by an ‘external’ break with the EqCM staying constant. Both mechanistic corrections help compared to retaining a pre-break estimated model, but an estimated model of the break process could outperform. We apply the approaches to EqCMs for UK M1, compared with updating a learning function as the break evolves.
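Intercept correction can be illustrated with a toy equilibrium-correction model whose equilibrium mean shifts: adding the most recent in-sample forecast error back onto the forecast absorbs the shift. In the noiseless example below (all numbers hypothetical) the correction is exact; with noise it would instead trade bias for variance.

```python
def ic_forecast(y, beta, mu_old):
    """One-step forecasts from a toy equilibrium-correction model
    dy_t = beta * (y_{t-1} - mu), using the pre-break mean mu_old.
    The intercept-corrected forecast adds back the most recent
    in-sample forecast error, which absorbs an equilibrium-mean shift."""
    last_error = (y[-1] - y[-2]) - beta * (y[-2] - mu_old)
    plain = y[-1] + beta * (y[-1] - mu_old)
    return plain, plain + last_error

# Equilibrium mean shifts from 0 to 5; the data follow the new mean,
# but the forecaster still uses mu_old = 0.
beta, mu_old, mu_new = -0.5, 0.0, 5.0
y = [0.0]
for _ in range(2):
    y.append(y[-1] + beta * (y[-1] - mu_new))
plain, corrected = ic_forecast(y, beta, mu_old)
true_next = y[-1] + beta * (y[-1] - mu_new)
```

Here `plain` misses `true_next` by a constant reflecting the mean shift, while `corrected` hits it exactly, which is the intuition behind intercept corrections after location shifts.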

13.
In many econometric models the asymptotic variance of a parameter estimate depends on the value of another structural parameter in such a way that the data contain little information about the former when the latter is close to a critical value. This paper introduces the zero-information-limit-condition (ZILC) to identify such models where ‘weak identification’ leads to spurious inference. We find that standard errors tend to be underestimated in these cases, but the size of the asymptotic t-test may either be too great (the intuitive case emphasized in the ‘weak instrument’ literature) or too small as in two cases illustrated here.

14.
This paper investigates the spurious effect in forecasting asset returns when signals from technical trading rules are used as predictors. Against economic intuition, the simulation result shows that, even if past information has no predictive power, buy or sell signals based on the difference between the short-period and long-period moving averages of past asset prices can be statistically significant when the forecast horizon is relatively long. The theoretical analysis reveals that both ‘momentum’ and ‘contrarian’ strategies can be falsely supported, while the probability of obtaining each result depends on the type of test statistic employed.
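The moving-average crossover signals studied here are simple to generate. A sketch on a simulated driftless random walk, where by construction the signals carry no true predictive content; the window lengths are arbitrary choices.

```python
import numpy as np

def ma_signal(prices, short, long):
    """+1 (buy) when the short moving average exceeds the long one,
    -1 (sell) otherwise -- the classic crossover rule."""
    s = np.convolve(prices, np.ones(short) / short, mode="valid")
    l = np.convolve(prices, np.ones(long) / long, mode="valid")
    s = s[len(s) - len(l):]          # align both averages to common dates
    return np.where(s > l, 1, -1)

rng = np.random.default_rng(4)
prices = 100 + np.cumsum(rng.normal(0, 1, 500))   # driftless random walk
signals = ma_signal(prices, short=5, long=50)
```

Regressing long-horizon returns of such a walk on `signals` can still produce 'significant' coefficients, which is precisely the spurious effect the paper documents.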

15.
We propose a novel methodology for identification of first-price auctions, when bidders’ private valuations are independent conditional on one-dimensional unobserved heterogeneity. We extend the existing literature by allowing the unobserved heterogeneity to be non-separable from bidders’ valuations. Our central identifying assumption is that the distribution of bidder values is increasing in the state. When the state-space is finite, such monotonicity implies the full-rank condition needed for identification. Further, we extend our approach to the conditionally independent private values model of Li et al. (2000), as well as to unobserved heterogeneity settings in which the implicit reserve price or the cost of bidding varies across auctions.

16.
This paper proposes a two-step maximum likelihood estimation (MLE) procedure to deal with the problem of endogeneity in Markov-switching regression models. A joint estimation procedure provides us with an asymptotically most efficient estimator, but it is not always feasible, due to the ‘curse of dimensionality’ in the matrix of transition probabilities. A two-step estimation procedure, which ignores potential correlation between the latent state variables, suffers less from the ‘curse of dimensionality’, and it provides a reasonable alternative to the joint estimation procedure. In addition, our Monte Carlo experiments show that the two-step estimation procedure can be more efficient than the joint estimation procedure in finite samples, when there is zero or low correlation between the latent state variables.

17.
Using data from a large, U.S. federal job training program, we investigate whether enrolment incentives that exogenously vary the ‘shadow prices’ for serving different demographic subgroups of clients influence case workers' intake decisions. We show that case workers enroll more clients from subgroups whose shadow prices increase but select at the margin weaker-performing members from those subgroups. We conclude that enrolment incentives curb cream-skimming across subgroups, leaving a residual potential for cream-skimming within a subgroup.

18.
Textbooks could be a cheap and efficient input to primary school education in Africa. In this paper, we examine the effects of textbooks on student outcomes and separate between direct effects and externalities. Using the rich data set provided by the ‘Program on the Analysis of Education Systems’ (PASEC) for five Francophone, sub-Saharan African countries, this paper goes beyond the estimation of direct effects of textbooks on students' learning and focuses on peer effects resulting from textbooks owned by students' classmates. Using nonparametric estimation methods, we separate the direct effect of textbooks from their peer effect. The latter clearly dominates but depends upon the initial level of textbook availability.

19.
In this paper, we introduce a new Poisson mixture model for count panel data where the underlying Poisson process intensity is determined endogenously by consumer latent utility maximization over a set of choice alternatives. This formulation accommodates the choice and count in a single random utility framework with desirable theoretical properties. Individual heterogeneity is introduced through a random coefficient scheme with a flexible semiparametric distribution. We deal with the analytical intractability of the resulting mixture by recasting the model as an embedding of infinite sequences of scaled moments of the mixing distribution, and newly derive their cumulant representations along with bounds on their rate of numerical convergence. We further develop an efficient recursive algorithm for fast evaluation of the model likelihood within a Bayesian Gibbs sampling scheme. We apply our model to a recent household panel of supermarket visit counts. We estimate the nonparametric density of three key variables of interest (price, driving distance, and their interaction) while controlling for a range of consumer demographic characteristics. We use this econometric framework to assess the opportunity cost of time and analyze the interaction between store choice, trip frequency, search intensity, and household and store characteristics. We also conduct a counterfactual welfare experiment and compute the compensating variation for a 10%-30% increase in Walmart prices.

20.
The choice of a college major plays a critical role in determining the future earnings of college graduates. Students make their college major decisions in part due to the future earnings streams associated with the different majors. We survey students about what their expected earnings would be both in the major they have chosen and in counterfactual majors. We also elicit students’ subjective assessments of their abilities in chosen and counterfactual majors. We estimate a model of college major choice that incorporates these subjective expectations and assessments. We show that both expected earnings and students’ abilities in the different majors are important determinants of a student’s choice of a college major. We also consider how differences between students’ forecasts of what the average Duke student would earn in different majors and what they expect they themselves would earn influence the choice of a college major. In particular, our estimates suggest that 7.8% of students would switch majors if they had the same expectations about the average returns to different majors and differed only in their perceived comparative advantages across these majors.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号