Found 20 similar documents (search time: 0 ms)
1.
In this paper, we introduce a new flexible mixed model for multinomial discrete choice in which the key individual- and alternative-specific parameters of interest are allowed to follow an assumption-free nonparametric density specification, while other alternative-specific coefficients are assumed to be drawn from a multivariate normal distribution, which eliminates the independence-of-irrelevant-alternatives assumption at the individual level. A hierarchical specification allows us to break a complex data structure down into a set of submodels with the desired features that are naturally assembled in the original system. We estimate the model using a Bayesian Markov chain Monte Carlo technique with a multivariate Dirichlet Process (DP) prior on the coefficients whose density is estimated nonparametrically. We employ a “latent class” sampling algorithm, which is applicable to a general class of models, including non-conjugate DP base priors. The model is applied to the supermarket choices of a panel of Houston households whose shopping behavior was observed over a 24-month period in 2004–2005. We estimate the nonparametric density of two key variables of interest: the price of a basket of goods based on scanner data, and driving distance to the supermarket based on the respective locations. Our semi-parametric approach allows us to identify a complex multi-modal preference distribution, which distinguishes between inframarginal consumers and consumers who strongly value either lower prices or shopping convenience.
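A minimal sketch of the Chinese restaurant process that underlies the Dirichlet Process prior used above. This is purely illustrative, not the authors' sampler; the concentration parameter alpha = 1.0 and sample size are arbitrary choices.

```python
import random

def crp_partition(n, alpha, seed=0):
    """Draw cluster assignments for n observations from a Chinese
    restaurant process with concentration parameter alpha."""
    rng = random.Random(seed)
    counts = []            # current cluster sizes ("customers per table")
    labels = []
    for i in range(n):
        # observation i joins cluster k with prob counts[k]/(i+alpha),
        # or opens a new cluster with prob alpha/(i+alpha)
        weights = counts + [alpha]
        r = rng.uniform(0, i + alpha)
        acc, k = 0.0, 0
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if k == len(counts):
            counts.append(1)   # open a new cluster
        else:
            counts[k] += 1
        labels.append(k)
    return labels

labels = crp_partition(500, alpha=1.0)
print(len(set(labels)))  # number of occupied clusters, roughly alpha*log(n)
```

In a full DP mixture sampler, each cluster would additionally carry a parameter drawn from the base measure; the partition draw above is only the combinatorial core of the "latent class" idea.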
2.
3.
Mariano Matilla-García, Manuel Ruiz Marín, Mohammed I. Dore, Rina B. Ojeda. Decisions in Economics and Finance, 2014, 37(1): 181–193
The BDS test is the best-known correlation-integral-based test, and it is now a standard part of most econometric software packages. The test depends on the proximity parameter $\varepsilon$ and the embedding dimension $m$, both of which are chosen by the researcher. Although several studies (e.g., Kanzler, Very fast and correctly sized estimation of the BDS statistic, Department of Economics, Oxford University, Oxford, 1999) have addressed the selection of the proximity parameter, no comparable work has been done on $m$. In practice, researchers usually compute the BDS statistic for several values of $m$, but the results are sometimes contradictory, with some accepting the null and others rejecting it. This paper aims to fill this gap. To that end, we propose a new, simple yet powerful aggregate test for independence, based on the BDS outputs for a given data set, that uses all of the information contained in several embedding dimensions without the ambiguity of the standard BDS test.
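To make the objects concrete, here is a hypothetical sketch of the correlation integral and the raw gap $C_m(\varepsilon) - C_1(\varepsilon)^m$ that the BDS statistic standardizes; the asymptotic variance normalization of the full BDS statistic is omitted, and the choice $\varepsilon = 0.5\,\hat\sigma$ is arbitrary.

```python
import numpy as np

def correlation_integral(x, m, eps):
    """Fraction of pairs of m-histories within sup-norm distance eps."""
    n = len(x) - m + 1
    # embed the series into overlapping m-histories
    emb = np.column_stack([x[k:k + n] for k in range(m)])
    # pairwise sup-norm distances between all m-histories
    d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
    iu = np.triu_indices(n, k=1)
    return np.mean(d[iu] < eps)

rng = np.random.default_rng(42)
x = rng.standard_normal(300)
eps = 0.5 * x.std()
# Under iid data, C_m(eps) should be close to C_1(eps)**m for every m,
# which is the null restriction the BDS test exploits.
for m in (2, 3):
    gap = correlation_integral(x, m, eps) - correlation_integral(x, 1, eps) ** m
    print(m, round(gap, 4))
```

An aggregate test in the spirit of the abstract would combine such gaps across several embedding dimensions into a single statistic rather than inspecting each $m$ separately.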
4.
In the construction of input–output models from supply-use tables, technology assumptions disambiguate how an industry uses inputs in the production recipes of its multiple outputs. This paper uses Bayes' theorem to select technology assumptions in light of empirical observations. The paper presents a formulation that explores hybrids between the product and industry technology assumptions in product-by-product tables. We then present Markov chain Monte Carlo techniques to implement the Bayesian selection of technology assumptions. We apply the method in a case study using Eurostat supply-use tables for 2004 and 2005, which exhibit a volume of secondary products of less than 13%, with 59 products and industries per country. The results show that the choice of technology assumption is not critical, given that there is no strong evidence in favour of any of them.
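The two classical constructs, and a convex hybrid between them, can be sketched on a toy supply-use system. All numbers are invented, and the simple mixing form A(λ) = λ·A_product + (1−λ)·A_industry is one illustrative parameterization of a hybrid, not necessarily the paper's.

```python
import numpy as np

# Toy 2-product, 2-industry supply-use system (illustrative numbers only)
V = np.array([[90.0, 10.0],    # make matrix: industries (rows) x products (cols)
              [5.0,  95.0]])
U = np.array([[30.0, 20.0],    # use matrix: products (rows) x industries (cols)
              [15.0, 40.0]])
g = V.sum(axis=1)              # industry output
q = V.sum(axis=0)              # product output

B = U / g                      # industry input coefficients (divide columns by g)
D = V / q                      # market shares (divide columns by q)

A_product  = U @ np.linalg.inv(V.T)   # product-technology assumption
A_industry = B @ D                    # industry-technology assumption

def hybrid(lam):
    """Convex combination of the two technology assumptions."""
    return lam * A_product + (1.0 - lam) * A_industry

print(np.round(hybrid(0.5), 3))
```

An MCMC implementation along the abstract's lines would place a prior on the mixing weights and update them against observed data; the matrices above are only the deterministic building blocks.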
5.
Under the Bayesian–Walrasian Equilibrium (BWE) (see Balder and Yannelis, 2009), agents form price estimates based on their own private information, and in terms of those prices they formulate estimated budget sets. Each agent then maximizes interim expected utility subject to his or her own estimated budget set. Because of the imprecision in the price estimates, the resulting equilibrium allocation may not clear the markets in every state of nature; that is, exact feasibility of allocations may fail. This paper shows that if the economy is repeated from period to period and agents refine their private information by observing past BWE outcomes, then in the limit all agents obtain the same information and market clearing is reached. The converse is also true. The analysis provides a new way of looking at the asymmetric equilibrium, one with a statistical foundation.
6.
Technovation, 2015
The innovation process has traditionally been understood as a predefined sequence of phases: idea generation, selection, development, and launch/diffusion/sales. Drawing upon contingency theory, we argue that the innovation process may instead follow a number of different paths. Our research addresses a clear theoretical and managerial question: how does a firm organize and plan resource allocation for innovation processes that do not easily fit traditional models? This leads to our research question: which configuration of innovation processes and resource allocation should be employed in a given situation, and what is the rationale behind the choice? Based on a large-scale study analyzing 132 innovation projects in 72 companies, we propose a taxonomy of eight different innovation processes, each with a specific rationale that depends on a project's contingencies.
7.
8.
Ornstein–Uhlenbeck models are continuous-time processes with broad applications in finance, e.g., as volatility processes in stochastic volatility models or as spread models in spread options and pairs trading. The paper presents a least squares estimator for the model parameter of a multivariate Ornstein–Uhlenbeck process driven by a multivariate regularly varying Lévy process with infinite variance. We show that the estimator is consistent. Moreover, we derive its asymptotic behavior and the associated test statistics, and compare the results to the finite-variance case. The proofs require new results on the multivariate regular variation of products of random vectors and new central limit theorems. Furthermore, we embed this model in the setup of a co-integrated model in continuous time.
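A hedged illustration of the least squares idea in the univariate case. For simplicity this sketch uses a Brownian rather than a heavy-tailed Lévy driver, so it shows the finite-variance benchmark the abstract compares against; all parameter values are arbitrary.

```python
import numpy as np

def simulate_ou(lam, dt, n, rng):
    """Euler scheme for dX_t = -lam * X_t dt + dW_t (Brownian driver)."""
    x = np.empty(n)
    x[0] = 0.0
    noise = rng.standard_normal(n - 1) * np.sqrt(dt)
    for k in range(n - 1):
        x[k + 1] = x[k] - lam * x[k] * dt + noise[k]
    return x

def ls_estimator(x, dt):
    """Least squares estimate of the mean-reversion parameter:
    regress increments dX on the lagged level X."""
    dx = np.diff(x)
    return -np.sum(x[:-1] * dx) / (dt * np.sum(x[:-1] ** 2))

rng = np.random.default_rng(7)
x = simulate_ou(lam=2.0, dt=0.01, n=50_000, rng=rng)
print(round(ls_estimator(x, 0.01), 2))  # close to the true value 2.0
```

In the infinite-variance regularly varying case studied in the paper, the same estimator remains consistent but its limit distribution and convergence rate change, which is the substance of the results.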
9.
Etienne Wasmer. Labour Economics, 2012, 19(5): 769–771
The four papers in this issue are part of a collective effort coordinated by the ECB and European National Central Banks, the Eurosystem's Wage Dynamics Network Survey. They provide new and systematic evidence on wage and price flexibility in Europe and attempt to explain their determinants.
10.
We present a purification result for incomplete information games with a large but finite number of players that allows compact metric spaces for both actions and types. We then compare our framework and findings to the early purification theorems of Rashid (1983. Equilibrium points of non-atomic games: asymptotic results. Economics Letters 12, 7–10), Cartwright and Wooders (2002. On equilibrium in pure strategies in games with many players. University of Warwick Working Paper 686 (revised 2005)), Kalai (2004. Large robust games. Econometrica 72, 1631–1665) and Wooders, Cartwright and Selten (2006. Behavioral conformity in games with many players. Games and Economic Behavior 57, 347–360). Our proofs are elementary and rely on the Shapley–Folkman theorem.
11.
Simple transformations are given for reducing or stabilizing bias, skewness and kurtosis, including the first such transformations for kurtosis. The transformations are based on cumulant expansions and the effect of transformations on their leading coefficients. The proposed transformations are compared to the traditional Box–Cox transformations and are shown to be more efficient.
12.
The log transformation of realized volatility is often preferred to the raw version because of its superior finite-sample properties. One possible explanation is that the skewness of the log-transformed statistic is smaller than that of the raw statistic. Simulation evidence presented here confirms that this is the case. It also shows that the log transform does not completely eliminate skewness in finite samples, which suggests that other nonlinear transformations may be more effective at reducing finite-sample skewness.
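A small simulation in the spirit of the abstract, assuming a stylized model in which daily realized volatility is a scaled sum of 78 squared intraday returns (both the day count and the intraday frequency are invented for illustration):

```python
import numpy as np

def sample_skewness(a):
    """Standardized third central moment of a sample."""
    a = np.asarray(a, dtype=float)
    c = a - a.mean()
    return np.mean(c ** 3) / np.mean(c ** 2) ** 1.5

rng = np.random.default_rng(0)
# 2000 "days" of realized volatility: mean of 78 squared intraday returns
rv = (rng.standard_normal((2000, 78)) ** 2).mean(axis=1)
raw_skew = sample_skewness(rv)
log_skew = sample_skewness(np.log(rv))
print(round(raw_skew, 2), round(log_skew, 2))
```

In this toy setting the raw series inherits the right skew of a chi-squared average, while the log transform shrinks the skewness markedly without driving it exactly to zero, matching the abstract's point.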
13.
The importance that users or customers attach to various services and products is an essential part of customer satisfaction surveys, and several proposals for linking satisfaction and importance can be found in the literature. The objective is to identify and understand the dimensions with high importance but low perceived quality, since these dimensions are primary candidates for focused improvement initiatives. In this study, we propose applying a class of statistical models, denoted CUB models and generally used to estimate feeling and uncertainty, to measure the importance of items for observed overall satisfaction. A questionnaire with explicit importance variables for each dimension is used to compare the estimated ranks with the observed ones. The estimated importance and the perceived quality, both obtained with CUB models, are then jointly analyzed in datasets from various fields, and the approach is compared with others reported in the literature.
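For readers unfamiliar with CUB models, the following sketch writes out the standard CUB probability mass function: a mixture of a shifted Binomial component (feeling) and a discrete Uniform component (uncertainty) on the rating scale 1..m. Parameter values are arbitrary illustrations, not estimates from the paper.

```python
from math import comb

def cub_pmf(m, pi, xi):
    """CUB probabilities on ratings r = 1..m:
    pi * shifted-Binomial(m-1, 1-xi) + (1-pi) * Uniform{1..m}."""
    return [
        pi * comb(m - 1, r - 1) * (1 - xi) ** (r - 1) * xi ** (m - r)
        + (1 - pi) / m
        for r in range(1, m + 1)
    ]

p = cub_pmf(m=7, pi=0.8, xi=0.3)
print([round(v, 3) for v in p])  # mass tilted toward high ratings for low xi
```

Fitting a CUB model amounts to maximizing the likelihood built from this pmf over (pi, xi), with pi read as one minus the uncertainty share and xi governing the feeling toward the item.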
14.
The process capability index $C_{pk}$ has been widely used in the manufacturing industry to provide numerical measures of process performance. Since $C_{pk}$ is a yield-based index that is independent of the target $T$, it fails to account for process centering with symmetric tolerances, and presents an even greater problem with asymmetric tolerances. Pearn and Chen (1998) considered a generalization $C''_{pk}$, which was shown to be superior to other existing generalizations of $C_{pk}$ for processes with asymmetric tolerances. In this paper, we investigate the relation between the fraction nonconforming and the value of $C''_{pk}$. Furthermore, we derive explicit forms of the cumulative distribution function and the probability density function of the natural estimator $\hat{C}''_{pk}$ under the assumption of normality. We also develop a decision-making rule based on $\hat{C}''_{pk}$ that can be used to test whether the process is capable.
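The link between $C_{pk}$ and the fraction nonconforming under normality can be illustrated directly. This sketch uses the basic index and the textbook bound $p \le 2\Phi(-3C_{pk})$, not the paper's generalized index for asymmetric tolerances; the specification limits and process parameters are invented.

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def cpk(mu, sigma, lsl, usl):
    """Basic process capability index min(USL-mu, mu-LSL)/(3 sigma)."""
    return min(usl - mu, mu - lsl) / (3.0 * sigma)

def fraction_nonconforming(mu, sigma, lsl, usl):
    """P(X < LSL) + P(X > USL) for X ~ N(mu, sigma^2)."""
    return phi((lsl - mu) / sigma) + 1.0 - phi((usl - mu) / sigma)

mu, sigma, lsl, usl = 10.2, 0.5, 8.0, 12.0
c = cpk(mu, sigma, lsl, usl)
p = fraction_nonconforming(mu, sigma, lsl, usl)
# Under normality the yield bound p <= 2 * Phi(-3 * Cpk) always holds
print(round(c, 3), p <= 2 * phi(-3 * c))
```

The bound is tight when the process mean sits exactly midway between the limits, which is why yield-based indices say little about centering, the deficiency the generalized index addresses.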
15.
Stein’s method is used to derive an error bound in the normal approximation for sums of pairwise negative quadrant dependent random variables, under the assumption of second moments only. This allows us to derive a central limit theorem for pairwise negative quadrant dependent random variables satisfying Lindeberg’s condition.
Research supported by the Science Foundation of Zhejiang Provincial Education (No. 20060122).
16.
This paper studies the Hodges and Lehmann (1956) optimality of tests in a general setup. Tests are compared by the exponential rate at which their power functions, evaluated at a fixed alternative, approach one, while the asymptotic sizes are kept bounded by some constant. We present two sets of sufficient conditions for a test to be Hodges–Lehmann optimal. These new conditions extend the scope of Hodges–Lehmann optimality analysis to setups that cannot be covered by existing conditions in the literature. The general result is illustrated by two applications of interest: testing moment conditions and testing overidentifying restrictions. In particular, we show that (i) the empirical likelihood test does not necessarily satisfy existing conditions for optimality but does satisfy our new conditions; and (ii) the generalized method of moments (GMM) test and the generalized empirical likelihood (GEL) tests are Hodges–Lehmann optimal under mild primitive conditions. These results support the view that Hodges–Lehmann optimality is a weak asymptotic requirement.
17.
Journal of Economic Interaction and Coordination - We present a dynamical model for the price evolution of financial assets. The model is based on a two-level approach: In the first stage, one...
18.
Zhaohui Wu, Thomas Y. Choi, M. Johnny Rungtusanatham. Journal of Operations Management, 2010, 28(2): 115–123
A growing number of studies, and evidence from industry, suggest that, besides managing the relationship with its suppliers, a buyer needs to proactively manage the relationships between those suppliers. In a buyer–supplier–supplier triad, the buyer, as the contracting entity, influences the suppliers’ behaviors and the relationship between them. Considering the relationships in such a triad gives a richer and more realistic perspective on buyer–supplier relationships. In this study, our goal is to examine supplier–supplier relationships in buyer–supplier–supplier triads, focusing on how such relationships affect supplier performance. We frame the supplier–supplier relationship as co-opetition: one in which competing suppliers work together to meet the buyer's requirements. We investigate the buyer's role in such relationships, and how the buyer and co-opetitive supplier–supplier relationships affect supplier performance. We find mixed empirical support for our hypotheses, but we are able to demonstrate the dynamics of supplier–supplier co-opetition in the buyer–supplier–supplier triad, and we point out the need for further studies in this area.
19.
Ecological inference refers to the study of individuals using aggregate data and is employed in an impressive number of studies; it is well known, however, that the study of individuals using group data suffers from the ecological fallacy problem (Robinson in Am Sociol Rev 15:351–357, 1950). This paper evaluates the accuracy of two recent methods, those of Rosen et al. (Stat Neerl 55:134–156, 2001) and Greiner and Quinn (J R Stat Soc Ser A (Statistics in Society) 172:67–81, 2009), and the long-standing Goodman (Am Sociol Rev 18:663–664, 1953; Am J Sociol 64:610–625, 1959) method, all designed to estimate every cell of an R × C table simultaneously using exclusively aggregate data. To conduct these tests, we leverage extensive electoral data for which the true quantities of interest are known. In particular, we examine the extent to which the confidence intervals provided by the three methods contain the true values. The paper also provides guidelines on the appropriate contexts for employing these models.
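Goodman's method, the oldest of the three, reduces to OLS of the aggregate outcome on group composition shares. The following synthetic-data sketch (all numbers invented) shows the mechanics in the simplest 2-group case:

```python
import numpy as np

rng = np.random.default_rng(1)
n_units = 200
# group composition shares within each areal unit (two groups summing to one)
x1 = rng.uniform(0.2, 0.8, n_units)
X = np.column_stack([x1, 1.0 - x1])
beta_true = np.array([0.70, 0.30])   # group-level support rates to recover
y = X @ beta_true + rng.normal(0, 0.02, n_units)  # observed aggregate outcome

# Goodman's ecological regression: OLS of the aggregate outcome on shares
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta_hat, 2))
```

The regression recovers the group rates here only because the synthetic data satisfy Goodman's constancy assumption; when group behavior varies systematically across units, the same regression can return badly biased or even out-of-range estimates, which is the ecological fallacy the paper's tests probe.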
20.
Extended input–output models require careful estimation of disaggregated household consumption and comparable sources of labor income by sector; the latter components most often have to be estimated. The primary focus of this paper is to produce labor demand disaggregated by workers’ age. The results are evaluated by checking their consistency with a static labor demand model restricted by theoretical requirements; a Bayesian approach is used for a more straightforward imposition of the regularity conditions. The Bayesian model confirms elastic labor demand for young workers, which is consistent with past studies. Additionally, to explore the effects of changes in age structure on a regional economy, the estimated age-group-specific labor demand model is integrated into a regional input–output model. The integrated model suggests that, ceteris paribus, an ageing population lowers aggregate economic multipliers, owing to the rapidly growing number of elderly workers, who earn less than younger workers.
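The multiplier calculation underlying the last sentence can be sketched with a toy technical coefficients matrix (invented numbers): output multipliers are the column sums of the Leontief inverse.

```python
import numpy as np

# Illustrative 3-sector technical coefficients matrix (toy numbers only)
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.25],
              [0.05, 0.10, 0.10]])

# Leontief inverse: total (direct plus indirect) requirements per unit
# of final demand, L = (I - A)^{-1}
L = np.linalg.inv(np.eye(3) - A)
output_multipliers = L.sum(axis=0)   # column sums: sector-level multipliers
print(np.round(output_multipliers, 3))
```

In an extended model of the kind the paper builds, the household (and here, age-group) income and consumption rows and columns are appended to A before inversion, so changes in the age structure feed through to these multipliers.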