Similar articles
20 similar articles found (search time: 31 ms)
1.

Models of consumer heterogeneity play a pivotal role in marketing and economics, specifically in random coefficient or mixed logit models for aggregate or individual data and in hierarchical Bayesian models of heterogeneity. In applications, the inferential target often pertains to a population beyond the sample of consumers providing the data. For example, optimal prices inferred from the model are expected to be optimal in the population and not just optimal in the observed, finite sample. The population model, random coefficients distribution, or heterogeneity distribution is the natural and correct basis for generalizations from the observed sample to the market. However, in many, if not most, applications, standard heterogeneity models such as the multivariate normal or its finite mixture generalization lack economic rationality because they support regions of the parameter space that contradict basic economic arguments. For example, such population distributions support positive price coefficients or preferences against fuel efficiency in cars. Likely as a consequence, it is common practice in applied research to rely on the collection of individual-level mean estimates as a representation of population preferences, a practice that often substantially reduces the support for parameters that violate economic expectations. To overcome the choice between relying on a mis-specified heterogeneity distribution and a collection of individual-level means that fails to measure heterogeneity consistently, we develop an approach that facilitates the formulation of more economically faithful heterogeneity distributions based on prior constraints. In the common situation where the heterogeneity distribution comprises both constrained and unconstrained coefficients (e.g., brand and price coefficients), the choice of subjective prior parameters is an unresolved challenge. As a solution to this problem, we propose a marginal-conditional decomposition that avoids the conflict between wanting to be more informative about constrained parameters and only weakly informative about unconstrained parameters. We show how to efficiently sample from the implied posterior and illustrate the merits of our prior as well as the drawbacks of relying on means of individual-level preferences for decision-making in two illustrative case studies.
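A minimal sketch of the basic idea behind a sign-constrained heterogeneity distribution (not the marginal-conditional prior proposed in the paper): the price coefficient is modeled as minus the exponential of a normally distributed coefficient, so every draw from the population distribution respects the economic constraint. All parameter values below are illustrative assumptions.

# Illustrative only: a heterogeneity distribution in which the price
# coefficient is constrained to be negative via a log-normal transform.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population (hyper)parameters: the brand coefficient is
# unconstrained normal, log(-price coefficient) is normal.
mu = np.array([1.0, -0.5])                      # means of [brand, log(-price)]
L = np.linalg.cholesky(np.array([[0.5, 0.1],
                                 [0.1, 0.2]]))  # population covariance factor

draws = mu + rng.standard_normal((10_000, 2)) @ L.T
beta_brand = draws[:, 0]            # unconstrained coefficient
beta_price = -np.exp(draws[:, 1])   # strictly negative by construction

print("share of positive price coefficients:", np.mean(beta_price > 0))  # 0.0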

  相似文献   

2.
Computable expressions are derived for the Expected Shortfall of portfolios whose value is a quadratic function of a number of risk factors, as arise from a Delta–Gamma–Theta approximation. The risk factors are assumed to follow an elliptical multivariate t distribution, reflecting the heavy‐tailed nature of asset returns. Both an exact expression and a uniform asymptotic expansion are presented. The former involves only a single rapidly convergent integral. The latter is essentially explicit, and numerical experiments suggest that its error is negligible compared to that incurred by the Delta–Gamma–Theta approximation.  相似文献   
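The paper's exact and asymptotic expressions are not reproduced here; the sketch below is only a brute-force Monte Carlo benchmark for the same quantity, assuming an illustrative two-factor Delta-Gamma-Theta approximation and an elliptical multivariate t with made-up parameters.

# Monte Carlo Expected Shortfall of a Delta-Gamma-Theta P&L under a
# multivariate t distribution of risk factors (illustrative parameters).
import numpy as np

rng = np.random.default_rng(1)
nu, n = 6.0, 500_000                              # degrees of freedom, sample size
Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])    # factor scale matrix

# Elliptical multivariate t draws: normal divided by sqrt(chi-square / nu).
z = rng.multivariate_normal(np.zeros(2), Sigma, size=n)
x = z / np.sqrt(rng.chisquare(nu, size=(n, 1)) / nu)

theta, delta = -0.01, np.array([1.0, -0.5])       # hypothetical sensitivities
Gamma = np.array([[0.3, 0.05], [0.05, -0.2]])
pnl = theta + x @ delta + 0.5 * np.einsum("ij,jk,ik->i", x, Gamma, x)

alpha = 0.99
loss = -pnl
var = np.quantile(loss, alpha)            # Value-at-Risk at level alpha
es = loss[loss >= var].mean()             # Expected Shortfall: mean loss beyond VaR
print(f"VaR_{alpha:.0%} = {var:.4f}, ES_{alpha:.0%} = {es:.4f}")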

3.
We propose an efficient individually adapted sequential Bayesian approach for constructing conjoint-choice experiments, which uses Bayesian updating, a Bayesian analysis, and a Bayesian design criterion to generate a conjoint-choice design for each individual respondent based on the previous answers of that particular respondent. The proposed design approach is compared with three non-adaptive design approaches, two aggregate-customization approaches (based on the conditional logit model and on a mixed logit model), and the (nearly) orthogonal design approach, under various degrees of response accuracy and consumer heterogeneity. A simulation study shows that the individually adapted sequential Bayesian conjoint-choice designs perform better than the benchmark approaches in all scenarios we investigated in terms of the efficient estimation of individual-level part-worths and the prediction of individual choices. In the presence of high consumer heterogeneity, the improvements are impressive. The new method also performs well when the response accuracy is low, in contrast with the recently proposed adaptive polyhedral approach. Furthermore, the new methodology yields precise population-level parameter estimates, even though the design criterion focuses on the individual-level parameters.  相似文献   

4.
The random coefficients logit model is a workhorse in marketing and empirical industrial organization research. When only aggregate data are available, it is customary to calibrate the model based on market shares as data input, even if the data are available in the form of aggregate counts. However, market shares are functionally related to model primitives in the random coefficients model whereas finite aggregate counts are only probabilistic functions of these model primitives. A recent paper by Park and Gupta (Journal of Marketing Research, 46(4), 531–543 2009) stresses this distinction but is hamstrung by numerical problems when demonstrating its potential practical importance. We develop Bayesian inference for the likelihood function proposed by Park and Gupta (Journal of Marketing Research, 46(4), 531–543 2009), sidestepping the numerical problems encountered by these authors. We show how modeling counts directly, and thereby accounting for the amount of information the data carry about shares, results in improved inference.  相似文献   
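A minimal illustration of the distinction the abstract draws, with made-up numbers: when the aggregate data are counts, the natural likelihood treats the model-implied shares as multinomial probabilities instead of equating them to the observed shares, so the same observed shares carry very different information depending on how many purchases lie behind them.

# Multinomial count likelihood given model-implied shares (illustrative values).
import numpy as np
from scipy.stats import multinomial

model_shares = np.array([0.5, 0.3, 0.2])       # shares implied by model primitives
observed_shares = np.array([0.6, 0.25, 0.15])  # observed aggregate shares

for market_size in (100, 10_000):
    counts = np.round(observed_shares * market_size).astype(int)
    ll = multinomial.logpmf(counts, n=counts.sum(), p=model_shares)
    print(market_size, ll)   # identical shares are far less likely with many counts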

5.
Using a unique dataset on U.S. beer consumption, we investigate brand preferences of consumers across various social-group- and context-related consumption scenarios ("scenarios"). As sufficient data are not available for each scenario, understanding these preferences requires us to share information across scenarios. Our proposed modeling framework has two main building blocks. The first is a standard continuous random coefficients logit model that the framework reduces to in the absence of information on social groups and consumption contexts. The second component captures variations in mean preferences across scenarios in a parsimonious fashion by decomposing the deviations in preferences from a base scenario into a low-dimensional brand map in which the brand locations are fixed across scenarios but the importance weights vary by scenario. In addition to heterogeneity in brand preferences that is reflected in the random coefficients, heterogeneity in preferences across scenarios is accounted for by allowing the brand map itself to have a discrete heterogeneity distribution across consumers. Finally, heterogeneity in preferences within a scenario is accounted for by allowing the importance weights to vary across consumers. Together, these factors allow us to parsimoniously account for preference heterogeneity across brands, consumers and scenarios. We conduct a simulation study to confirm that, with the kind of data available to us, our proposed estimator can recover the true model parameters. We find that brand preferences vary considerably across the different social groups and consumption contexts as well as across different consumer segments. Despite the sparse data on specific brand-scenario combinations, our approach facilitates such an analysis and an assessment of the relative strengths of brands in each of these scenarios. This could provide useful guidance to managers of smaller brands whose overall preference level might be low but which enjoy a customer franchise in a particular segment, context, or social group setting.  相似文献   
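A small numerical sketch of the kind of low-dimensional decomposition described above, with invented dimensions: deviations of mean brand preferences from a base scenario are written as fixed brand locations multiplied by scenario-specific importance weights, which requires far fewer parameters than a free scenario-by-brand matrix.

# Scenario-specific mean preferences = base preferences + brand map
# (fixed brand locations) weighted by scenario-specific importance weights.
import numpy as np

n_brands, n_scenarios, map_dim = 10, 8, 2     # illustrative sizes
rng = np.random.default_rng(2)

base_pref = rng.normal(size=n_brands)                       # base-scenario preferences
brand_locations = rng.normal(size=(n_brands, map_dim))      # fixed across scenarios
scenario_weights = rng.normal(size=(n_scenarios, map_dim))  # vary by scenario

# mean preference of brand j in scenario s
mean_pref = base_pref[None, :] + scenario_weights @ brand_locations.T
print(mean_pref.shape)   # (8, 10), built from 10 + 20 + 16 numbers instead of 80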

6.
Reference prices have long been studied in applied economics and business research. One of the classic formulations of the reference price is in terms of an iterative function of past prices. There are a number of limitations of such a formulation, however. Such limitations include burdensome computational time to estimate parameters, an inability to truly account for customer heterogeneity, and an estimation procedure that implies a misspecified model. Managerial recommendations based on inferences from such a model can be quite misleading. We mathematically reformulate the reference price by developing a closed-form expansion that addresses the aforementioned issues, enabling one to elicit truly meaningful managerial advice from the model. We estimate our model on a real world data set to illustrate the efficacy of our approach. Our work is not only important from a modeling perspective, but also has valuable behavioral and managerial implications, which modelers and non-modelers alike should find useful.  相似文献   
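One classic iterative formulation of the reference price is exponential smoothing of past prices; a closed-form expansion of that recursion, of the kind the abstract alludes to, simply unrolls it into a weighted sum of past prices. The snippet below (illustrative smoothing constant and prices, not the paper's own expansion) checks that the two coincide.

# Reference price as exponential smoothing, and its unrolled closed form:
#   r_t = lam * r_{t-1} + (1 - lam) * p_{t-1}
#   r_T = lam**(T-1) * r_1 + (1 - lam) * sum_{k=0}^{T-2} lam**k * p_{T-1-k}
import numpy as np

lam = 0.7                                  # illustrative carry-over constant
prices = np.array([2.0, 2.5, 1.8, 2.2, 2.0, 2.4])
r = prices[0]                              # initialize the reference price at p_1

for t in range(1, len(prices)):
    r = lam * r + (1 - lam) * prices[t - 1]          # the recursion

T = len(prices)
closed = lam ** (T - 1) * prices[0] + (1 - lam) * sum(
    lam ** k * prices[T - 2 - k] for k in range(T - 1))
print(r, closed)                           # identical up to floating point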

7.
Two prominent approaches exist nowadays for estimating the distribution of willingness-to-pay (WTP) based on choice experiments. One is to work in the usual preference space in which the random utility model is expressed in terms of partworths. These partworths or utility coefficients are estimated together with their distribution. The WTP and the corresponding heterogeneity distribution of WTP is derived from these results. The other approach reformulates the utility in terms of WTP (called WTP-space) and estimates the WTP and the heterogeneity distribution of WTP directly. Though often used, working in preference space has severe drawbacks as it often leads to WTP-distributions with long flat tails, infinite moments and therefore many extreme values. By moving to WTP-space, authors have tried to improve the estimation of WTP and its distribution from a modeling perspective. In this paper we will further improve the estimation of individual level WTP and corresponding heterogeneity distribution by designing the choice sets more efficiently. We will generate individual sequential choice designs in WTP space. The use of this sequential approach is motivated by findings of Yu et al. (2011) who show that this approach allows for superior estimation of the utility coefficients and their distribution. The key feature of this approach is that it uses Bayesian methods to generate individually optimized choice sets sequentially based on prior information of each individual which is further updated after each choice made. Based on a simulation study in which we compare the efficiency of this sequential design procedure with several non-sequential choice designs, we can conclude that the sequential approach improves the estimation results substantially.  相似文献   
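A small check on the reparameterization described above, with invented coefficients: writing utility in preference space as beta'x - alpha*p, or equivalently in WTP space as alpha*(w'x - p) with w = beta/alpha, leaves the logit choice probabilities unchanged; the two spaces differ only in what is modeled as heterogeneous.

# Preference space vs WTP space: two parameterizations of the same utilities.
import numpy as np

alpha = 0.8                           # illustrative price coefficient (scale)
beta = np.array([1.2, -0.4])          # illustrative attribute coefficients
w = beta / alpha                      # willingness-to-pay per attribute unit

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])    # attributes of 3 alternatives
p = np.array([2.0, 1.5, 3.0])                          # prices

u_pref = X @ beta - alpha * p         # preference-space utility
u_wtp = alpha * (X @ w - p)           # WTP-space utility

def logit_probs(u):
    e = np.exp(u - u.max())
    return e / e.sum()

print(np.allclose(logit_probs(u_pref), logit_probs(u_wtp)))   # True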

8.
Extending the traditional discrete choice model by incorporating latent psychological factors can help to better understand the individual’s decision-making process and therefore to yield more reliable part-worth estimates and market share predictions. Several integrated choice and latent variable (ICLV) models which merge the conditional logit model with a structural equation model exist in the literature. They assume homogeneity in the part-worths and use latent variables to model the heterogeneity among the respondents. This paper starts from the mixed logit model that describes the heterogeneity in the part-worths and uses the latent variables to decrease the unexplained part of the heterogeneity. The empirical study presented here shows these ICLV models perform very well with respect to model fit and prediction.  相似文献   

9.
This paper proposes a Bayesian structural equation model (SEM) of miners’ work injury for an underground coal mine in India. The environmental and behavioural variables relevant to work injury were identified and causal relationships were developed. Bayesian modelling requires prior distributions for the SEM parameters, and two approaches were adopted to obtain priors for the factor loading and structural parameters. In the first approach, the priors were fixed distributions with specific parameter values, whereas in the second approach the priors were elicited from experts’ opinions. The posterior distributions of these parameters were obtained by applying Bayes’ rule, with Markov Chain Monte Carlo sampling in the form of Gibbs sampling used to draw from the posterior. The results revealed that all structural and measurement model coefficients are statistically significant under the experts’ opinion-based priors, whereas two coefficients are not statistically significant under the fixed priors. The error statistics reveal that the Bayesian structural model provides a reasonably good fit for work injury, with a high coefficient of determination (0.91) and a lower mean squared error than the traditional SEM.  相似文献   

10.
In this paper, we study the impact of customer stochasticity on firm price discrimination strategies. We develop a new model termed the Bayesian Mixture Scale Heterogeneity (BMSH) model that incorporates both parameter heterogeneity and customer stochasticity using a mixture model approach, and demonstrate model identification using extensive simulations. We estimate the model on yogurt scanner data and find that compared to the benchmark mixed logit and multinomial probit models, our model shows that markets are less price elastic, and that a majority of customers exhibit stochasticity in purchases; our model also obtains better prediction and more profitable targeting strategies.  相似文献   

11.
Nowadays, brand choice models are standard tools in quantitative marketing. In most applications, parameters representing brand intercepts and covariate effects are assumed to be constant over time. However, marketing theories, as well as the experience of marketing practitioners, suggest the existence of trends or short-term variations in particular parameters. Hence, having constant parameters over time is a highly restrictive assumption, which is not necessarily justified in a marketing context and may lead to biased inferences and misleading managerial insights.

In this paper, we develop flexible, heterogeneous multinomial logit models based on penalized splines to estimate time-varying parameters. The estimation procedure is fully data-driven, determining the flexible function estimates and the corresponding degree of smoothness in a unified approach. The model flexibly accounts for parameter dynamics without any prior knowledge needed by the analyst or decision maker. Thus, we position our approach as an exploratory tool that can uncover interesting and managerially relevant parameter paths from the data without imposing assumptions on their shape and smoothness.

Our approach further allows for heterogeneity in all parameters by additively decomposing parameter variation into time variation (at the population level) and cross-sectional heterogeneity (at the individual household level). It comprises models without time-varying parameters or heterogeneity, as well as random walk parameter evolutions used in recent state space models, as special cases. The results of our extensive model comparison suggest that models considering parameter dynamics and household heterogeneity outperform less complex models regarding fit and predictive validity. Although models with random walk dynamics for brand intercepts and covariate effects perform well, the proposed semiparametric approach still provides a higher predictive validity for two of the three data sets analyzed.

For joint estimation of all regression coefficients and hyperparameters, we employ the publicly available software BayesX, making the proposed approach directly applicable.  相似文献   

12.
Heterogeneity distributions of willingness-to-pay in choice models
We investigate direct and indirect specification of the distribution of consumer willingness-to-pay (WTP) for changes in product attributes in a choice setting. Typically, choice models identify WTP for an attribute as a ratio of the estimated attribute and price coefficients. Previous research in marketing and economics has discussed the problems with allowing for random coefficients on both attribute and price, especially when the distribution of the price coefficient has mass near zero. These problems can be avoided by combining a parameterization of the likelihood function that directly identifies WTP with a normal prior for WTP. We show that the typical likelihood parameterization in combination with what are regarded as standard heterogeneity distributions for attribute and price coefficients results in poorly behaved posterior WTP distributions, especially in small sample settings. The implied prior for WTP readily allows for substantial mass in the tails of the distribution and extreme individual-level estimates of WTP. We also demonstrate the sensitivity of profit maximizing prices to parameterization and priors for WTP.
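The tail problem the abstract describes can be seen in a few lines of simulation (purely illustrative normal heterogeneity distributions): when the price coefficient has mass near zero, the WTP implied as the ratio of attribute to price coefficient produces extreme individual-level values, whereas a normal distribution placed directly on WTP does not.

# WTP implied as a ratio of normal attribute and price coefficients vs
# a normal heterogeneity distribution placed directly on WTP.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

beta_attr = rng.normal(1.0, 0.5, n)        # attribute coefficient
beta_price = rng.normal(-0.5, 0.4, n)      # price coefficient with mass near zero
wtp_ratio = beta_attr / -beta_price        # WTP implied by the ratio

wtp_direct = rng.normal(2.0, 1.0, n)       # WTP modeled directly as normal

for name, w in [("ratio", wtp_ratio), ("direct", wtp_direct)]:
    print(name, np.percentile(w, [1, 50, 99]))
# the ratio-based WTP shows extreme percentiles and a huge spread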
  相似文献   

13.
There is growing interest in exploring the view that both revealed preference (RP) and stated preference (SP) data have useful information and that their integration will enrich the overall explanatory power of RP choice models. These two types of data have been independently used in the estimation of a wide variety of discrete choice applications in marketing. In order to combine the two data sources, each with independent choice outcomes, allowance must be made for their different scaling properties. The approach uses a full information maximum likelihood estimation procedure of the hierarchical logit form to obtain suitable scaling parameters to make one or more data sets comparable. We illustrate the advantages of the dual data strategy by comparing the results with those obtained from models estimated independently with RP and SP data. Data collected as part of a study of high speed rail is used to estimate a set of illustrative mode choice models.  相似文献   
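A stylized version of the pooling idea, with hypothetical data and parameters: the SP utilities are multiplied by a relative scale parameter before entering the logit probabilities, and the two data sources contribute additively to one joint log-likelihood. The paper itself uses a full information maximum likelihood, hierarchical logit formulation; the sketch below only illustrates the core scaling idea.

# Joint log-likelihood of RP and SP choice data with a relative scale
# parameter on the SP utilities (illustrative data and parameters).
import numpy as np

def logit_loglik(V, choices):
    # Multinomial logit log-likelihood; V is (n_obs, n_alternatives).
    V = V - V.max(axis=1, keepdims=True)
    logp = V - np.log(np.exp(V).sum(axis=1, keepdims=True))
    return logp[np.arange(len(choices)), choices].sum()

beta = np.array([0.9, -0.3])                      # taste parameters common to RP and SP
mu = 1.6                                          # SP-to-RP scale parameter

rng = np.random.default_rng(4)
X_rp = rng.normal(size=(50, 3, 2))                # revealed-preference attributes
X_sp = rng.normal(size=(80, 3, 2))                # stated-preference attributes
y_rp = rng.integers(0, 3, size=50)                # observed RP choices
y_sp = rng.integers(0, 3, size=80)                # observed SP choices

loglik = logit_loglik(X_rp @ beta, y_rp) + logit_loglik(mu * (X_sp @ beta), y_sp)
print(loglik)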

14.
Green consumption is a very common phrase in our daily lives, yet product characteristics that mainly contribute to the diffusion of green products are largely unknown. Based on microeconomic theory, we conduct a conjoint survey of consumer preferences for a ubiquitous green product—laundry detergent. We analyze the correlation between consumers' demographic variables and attributes of laundry detergents through a hierarchical Bayesian mixed logit model. We find that consumer preferences for attributes display significant heterogeneity. Age and income significantly influence the marginal preferences for attributes. An examination of consumer willingness to pay and of the relative importance of each attribute reveals that price and base material are the most important attributes. Green attributes, such as skin irritation potential and biodegradability, tend to be less important. This study also examines preference heterogeneity based on previous purchase experience. To promote green consumption, we emphasize the need for policies that reduce the value‐action gap.  相似文献   

15.
This study develops an ex-ante model for estimating financial distress likelihood (FDL), and contributes to the literature by presenting a financially-based definition of distress that is independent of its legal consequences, a theoretically supported model for the FDL, and an appropriate methodology that uses panel data to eliminate the unobservable heterogeneity. The model is then estimated cross-sectionally to obtain an indicator of the likelihood of financial distress that incorporates the specificity of each company. In doing so, this study provides a well-specified model that is stable in terms of magnitude, sign and significance of the coefficients and, more importantly, that yields a measure of the FDL that is more robust to time and the international context than the estimates of FDL that are based on seminal models. This measure could be appropriate for use in future research that deals with FDL, such as capital structure and the prevention of financial distress.  相似文献   

16.
Estimation of household parameters in scanner panel data requires the introduction of prior information. Traditionally, prior information is incorporated by restricting parameters to be constant across households or by specifying a random coefficient distribution. An alternative solution is to incorporate stochastic prior information in a formal Bayesian approach. In standard Bayesian analysis, a prior distribution over the model parameters is specified and combined with the household likelihood to obtain the Bayes estimates. The construction of the prior distribution over model parameters may be difficult, especially when working with new models whose parameters are difficult to interpret. In this paper, we propose a solution which specifies prior information through the marginal distribution of the data, i.e., the outcomes. We evaluate this marginal-predictive approach, using both actual and simulated panel data, and show it to be highly accurate relative to other available alternatives.  相似文献   

17.
Huber, Joel, and Kenneth Train. Marketing Letters, 2001, 12(3): 259-269.
An exciting development in modeling has been the ability to estimate reliable individual-level parameters for choice models. Individual partworths derived from these parameters have been very useful in segmentation, identifying extreme individuals, and in creating appropriate choice simulators. In marketing, hierarchical Bayes models have taken the lead in combining information about the aggregate distribution of tastes with the individual's choices to arrive at a conditional estimate of the individual's parameters. In economics, the same behavioral model has been derived from a classical rather than a Bayesian perspective. That is, instead of Gibbs sampling, the method of maximum simulated likelihood provides estimates of both the aggregate and the individual parameters. This paper explores the similarities and differences between classical and Bayesian methods and shows that they result in virtually equivalent conditional estimates of partworths for customers. Thus, the choice between Bayesian and classical estimation becomes one of implementation convenience and philosophical orientation, rather than pragmatic usefulness.  相似文献   
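Both the hierarchical Bayes and the classical (maximum simulated likelihood) routes end in the same kind of conditional estimate. One common way to compute it, sketched below with toy data and an assumed population distribution, is to weight draws from the estimated population distribution by each respondent's choice likelihood.

# Conditional (individual-level) partworths as a likelihood-weighted average
# of draws from the estimated population distribution (toy example).
import numpy as np

rng = np.random.default_rng(5)
R, K, T, J = 2_000, 2, 8, 3          # draws, partworths, choice tasks, alternatives

pop_mean, pop_sd = np.array([1.0, -0.8]), np.array([0.6, 0.4])   # estimated population
beta_draws = pop_mean + pop_sd * rng.standard_normal((R, K))

X = rng.normal(size=(T, J, K))        # one respondent's choice tasks
y = rng.integers(0, J, size=T)        # and observed choices

V = np.einsum("tjk,rk->rtj", X, beta_draws)            # utilities per draw
P = np.exp(V) / np.exp(V).sum(axis=2, keepdims=True)   # logit choice probabilities
lik = P[:, np.arange(T), y].prod(axis=1)               # likelihood of this respondent

beta_i = (lik[:, None] * beta_draws).sum(axis=0) / lik.sum()   # conditional estimate
print(beta_i)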

18.
Continuous models of respondent heterogeneity assume the existence of a response function where variables of interest are continuously related to explanatory variables. In many situations this assumption may not be true. In this paper we propose an approach of modeling respondent heterogeneity that identifies abrupt changes in the distribution of response coefficients around a threshold specification. Our model differs from traditional threshold models by introducing the threshold effect to describe across-unit behavior as opposed to within-unit behavior. We illustrate our proposed Bayesian threshold model for survey data from a large national retail bank that examines the effects of service wait times on customer satisfaction. We find evidence of a threshold effect where long in-process wait times are associated with bank branches characterized by weak associations between service quality drivers and overall perceptions of service quality. Branches with wait times below the threshold are found to have much stronger associations.  相似文献   

19.
Extant customer-base models like the beta geometric/negative binomial distribution (BG/NBD) predict future purchasing based on customers' observed purchase history. We extend the BG/NBD by adding an important non-transactional element that also drives future purchases: complaint history. Our model retains several desirable properties of the BG/NBD: it can be implemented in readily available software, and estimation requires only customer-specific statistics, rather than detailed transaction-sequence data. The likelihood function is closed-form, and managerially relevant metrics are obtained by drawing from beta and gamma densities and transforming these draws to a sample average. Based on more than two years of individual-level data from a major U.S. internet and catalog retailer, our model with complaints outperforms both the original BG/NBD and a modified version. Even though complaints are rare and non-transactional events, they lead to different substantive insights about customer purchasing and drop-out: customers purchase faster but also drop out much faster. Furthermore, there is more heterogeneity in drop-out rates following a purchase than a complaint.  相似文献   
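The abstract's remark that managerial metrics are obtained by drawing from beta and gamma densities and transforming the draws to a sample average can be read as a simple Monte Carlo scheme. The sketch below illustrates it for expected transactions over a horizon under BG/NBD-style assumptions and invented hyperparameters; the complaint component of the extended model is not reproduced.

# Monte Carlo estimate of expected transactions over a horizon under
# BG/NBD-style assumptions: purchase rate lambda ~ Gamma(r, alpha),
# dropout probability p ~ Beta(a, b), with dropout possible after each purchase.
import numpy as np

rng = np.random.default_rng(6)
r, alpha, a, b = 0.25, 4.0, 0.8, 2.5       # invented hyperparameters
horizon, n_draws = 52.0, 20_000            # e.g. weeks, number of simulated customers

lam = rng.gamma(shape=r, scale=1.0 / alpha, size=n_draws)   # individual purchase rates
p = rng.beta(a, b, size=n_draws)                            # individual dropout probs

def simulate_transactions(lam_i, p_i):
    t, n = 0.0, 0
    while True:
        t += rng.exponential(1.0 / lam_i)   # next interpurchase time
        if t > horizon:
            return n
        n += 1
        if rng.random() < p_i:              # customer becomes inactive
            return n

counts = np.array([simulate_transactions(l, q) for l, q in zip(lam, p)])
print("expected transactions over the horizon:", counts.mean())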

20.
Using firm‐level data for Japan, this paper examines the determinants of the export and foreign direct investment (FDI) decision. We contribute to the literature by employing a mixed logit model, i.e. a multinomial logit model with random intercepts and random coefficients, to incorporate any unobserved firm heterogeneity and by paying special attention to the quantitative significance of the determinants. We find that while the impact of productivity on the export and FDI decision is positive and statistically significant, it is economically negligible. The effect of firm size, credit constraints and information spillovers from experienced firms is also small in magnitude. A quantitatively dominant determinant of the export and FDI decision is instead the prior status of firms in terms of internationalisation. In addition, the use of the mixed logit model enables us to find a substantial role of unobserved firm characteristics in internationalisation of the firm. These findings suggest that entry costs to foreign markets, which substantially vary in size across firms, play an important role in the export and FDI decision. In addition, given that the negligible effect of productivity and the dominant effect of prior status appear to be more prominent in Japan than in some other countries, this study helps highlight the uniqueness of Japanese firms.  相似文献   
