Similar Articles
20 similar articles found (search time: 31 ms)
1.
2.
This study empirically investigates the theory that odd‐numbered pricing points can be used as focal points to facilitate tacit collusion. Like other retailers, gasoline stations in the United States disproportionately sell at prices ending in odd digits. I show that station prices are higher and change less frequently in locations using more odd prices (particularly those ending in 5 or 9), even after controlling for other market characteristics. The evidence suggests that the use of pricing points can be an effective mechanism for tacitly coordinating prices, providing an alternative explanation for the widespread use of odd prices in retail markets.

3.
This paper introduces a flexible multiproduct cost function that permits zero values of one or more of the outputs and can impose restrictions quite easily, if not automatically satisfied, to ensure the global concavity property. It satisfies the linear homogeneity (in prices) property and is flexible in the output space. Thus the function is ideal for estimating, for example, economies of scope, cost complementarity, product-specific returns to scale, etc., without worrying about zero values of output(s) and extrapolations to points far from the point of approximation. As an empirical application, we use panel data (1978–1985) on 12 Finnish foundry plants to estimate technical progress, overall returns to scale, product-specific returns to scale and economies of scope.
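The economies-of-scope measure mentioned above can be illustrated with the standard textbook definition, which is exactly where zero output values enter (the stand-alone costs C(y1, 0) and C(0, y2)). The cost function below is purely hypothetical and does not reproduce the paper's flexible functional form:

```python
def economies_of_scope(c, y1, y2):
    """Degree of economies of scope for a two-output cost function c(q1, q2).
    Positive values mean joint production is cheaper than separate production.
    Note that evaluating it requires the cost function at zero outputs."""
    joint = c(y1, y2)
    return (c(y1, 0.0) + c(0.0, y2) - joint) / joint

# Hypothetical cost function with a fixed cost of 10 per operating plant:
# producing both outputs in one plant saves one fixed cost.
def cost(q1, q2):
    fixed = 10.0 if (q1 > 0 or q2 > 0) else 0.0
    return fixed + 2.0 * q1 + 3.0 * q2

# Separate production: (10 + 20) + (10 + 30) = 70; joint: 10 + 20 + 30 = 60.
s = economies_of_scope(cost, 10.0, 10.0)  # (70 - 60) / 60
```

The example shows why a functional form that breaks down at zero outputs (as many flexible forms do) makes this measure impossible to compute directly.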

4.
Labour Economics (2001), 8(4), 419–442
This paper summarizes our recent research on the relationship between wages and measured cognitive ability. In it, we make three main points. First, we find that wage payment by ability does vary across race and gender in the US, and that the fraction of wage variance explained by cognitive ability is modest. Second, measured cognitive ability and schooling are so highly correlated that one cannot separate their effects without imposing strong, arbitrary parametric structure in estimation which, when tested, is rejected by the data. Third, controlling for cognitive ability, personality traits (socialization skills) are correlated with earnings, although they primarily operate through schooling attainment.

5.
We investigate the temporal structure that maximizes the winner’s effort in large homogeneous contests. We find that the winner’s effort ranges from a lower bound of 0 to an upper bound of one third of the value of the prize, depending on the temporal structure; the upper (lower) bound is approached with an infinite number of players playing sequentially (simultaneously) in the first periods (period). Nevertheless, when the number of players is large but finite, we show that the winner’s effort is maximized when all players play sequentially except in the very last period and that, within the family of such optimal temporal structures, more players play simultaneously in the very last period than sequentially in all other periods. Furthermore, out of all players, the percentage of those playing simultaneously in the very last period goes to 100% as the number of players grows large.

6.
We consider Bayesian inference techniques for agent-based (AB) models, as an alternative to simulated minimum distance (SMD). Three computationally heavy steps are involved: (i) simulating the model, (ii) estimating the likelihood and (iii) sampling from the posterior distribution of the parameters. Computational complexity of AB models implies that efficient techniques have to be used with respect to points (ii) and (iii), possibly involving approximations. We first discuss non-parametric (kernel density) estimation of the likelihood, coupled with Markov chain Monte Carlo sampling schemes. We then turn to parametric approximations of the likelihood, which can be derived by observing the distribution of the simulation outcomes around the statistical equilibria, or by assuming a specific form for the distribution of external deviations in the data. Finally, we introduce Approximate Bayesian Computation techniques for likelihood-free estimation. These allow embedding SMD methods in a Bayesian framework, and are particularly suited when robust estimation is needed. These techniques are first tested in a simple price discovery model with one parameter, and then employed to estimate the behavioural macroeconomic model of De Grauwe (2012), with nine unknown parameters.
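The likelihood-free Approximate Bayesian Computation step can be sketched as plain rejection sampling: draw a parameter from the prior, simulate, and keep the draw if a summary statistic of the simulation falls close to the observed one. The one-parameter "simulator" below is a hypothetical stand-in (a Gaussian location model), not the price discovery or De Grauwe model:

```python
import random
import statistics

def simulate(theta, n=200, rng=random):
    """Toy stand-in for an agent-based simulator: one unknown parameter
    theta shifts the mean of the simulated outcome series."""
    return [theta + rng.gauss(0.0, 1.0) for _ in range(n)]

def abc_rejection(observed, prior_draw, n_draws=2000, tol=0.1, seed=42):
    """ABC rejection sampling: keep prior draws whose simulated summary
    statistic (here, the sample mean) lies within `tol` of the observed one."""
    rng = random.Random(seed)
    target = statistics.fmean(observed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_draw(rng)
        summary = statistics.fmean(simulate(theta, rng=rng))
        if abs(summary - target) < tol:
            accepted.append(theta)
    return accepted

# "Observed" data generated with true theta = 1.5; uniform prior on [0, 3].
rng = random.Random(0)
observed = simulate(1.5, rng=rng)
post = abc_rejection(observed, prior_draw=lambda r: r.uniform(0.0, 3.0))
# The accepted draws approximate the posterior and concentrate near 1.5.
```

In a real AB-model application the bottleneck is the `simulate` call, which is why the abstract stresses efficiency in steps (ii) and (iii).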

7.
Technovation (2007), 27(11), 676–692
In seeking to support the longevity of firms in high technology industries, much research effort has been directed at understanding the stages of growth and development of these firms. One industry regarded as vitally important to most national economies is biotechnology. Although our knowledge of the growth of biotechnology firms remains embryonic, we know that it is a multistage process requiring a changing blend of scientific and business skills at points along a developmental path. In this paper data are presented from a multiple case study, in which new biotechnology firms (NBFs) from five different countries were analyzed using in-depth interviews with CEOs, supported by archival and observational research. A conceptual model is developed from the literature which is further refined using the empirical evidence of the multiple case study. The resultant model captures the temporal aspects of the tension between the science and business agendas as the NBF traverses its commercialization pathways. The authors find that a common feature of successful NBFs is their ability to harmonize the changing scientific and business agendas as the company progresses through its development cycle.

8.
Using longitudinal data on individuals from the European Community Household Panel (ECHP) for eleven countries during 1995-2001, I investigate temporary job contract duration and job search effort. The countries are Austria, Belgium, Denmark, Finland, France, Greece, Ireland, Italy, the Netherlands, Portugal and Spain. I construct a search model for workers in temporary jobs which predicts that shorter duration raises search intensity. Calibration of the model to the ECHP data implies that at least 75% of the increase in search intensity over the life of a 2+ year temporary contract occurs in the last six months of the contract. I then estimate regression models for search effort that control for human capital, pay, local unemployment, and individual and time fixed effects. I find that workers on temporary jobs indeed search harder than those on permanent jobs. Moreover, search intensity increases as temporary job duration falls, and roughly 84% of this increase occurs on average in the shortest duration jobs. These results are robust to disaggregation by gender and by country. These empirical results are noteworthy, since it is not necessary to assume myopia or hyperbolic discounting in order to explain them, although the data clearly also do not rule out such explanations.

9.

This study employed prospect theory to examine relationships between effort invested in developing financial forecasts and risk taking. Results of an experimental study indicated that the more effort subjects invested in developing forecasts, the more likely they were to use those forecasts as their reference points when evaluating venture performance. Results also indicated that subjects who used forecasts as their reference points and exerted greater effort developing those forecasts were more likely to take risky actions when performance fell below their reference points. This study is the first to link effort to the type of reference point used and the first to link effort and the use of financial forecasts to risky decisions. In addition, it is one of only a few studies to employ prospect theory to examine risk taking decisions subsequent to start-up. Its results enhance our understanding of risk taking, prospect theory and reference points.


10.

The aim of this study is to explain the determinants of entrepreneurship in the agriculture industry. What are the drivers of early-stage entrepreneurial activity among agri-business entrepreneurs, and how is that activity influenced by various cognitive and social capital factors? To answer these questions, various driving factors of entrepreneurial activity have been explored in the literature. To achieve the objective, the study uses APS (Adult Population Survey) 2013 data on 69 countries provided by GEM (Global Entrepreneurship Monitor). A total of 1470 respondents were selected from the data set: those who, alone or with others, were currently trying to start a new business in the agriculture industry, including any self-employment or the selling of goods or services to others. To measure the influence of cognitive and social capital factors on early-stage entrepreneurial activity, logistic regression was employed. The findings show that those who see entrepreneurial opportunities, are confident in their own skills and ability, have personal relationships or social networks with existing entrepreneurs, and have invested in others' businesses as business angels are more likely to become entrepreneurs. Additionally, fear of failure or risk perception does not prevent people from becoming entrepreneurs. Policy implications are discussed. This is one of the first studies of its kind, and it contributes to the existing literature by explaining agricultural entrepreneurship through an integrated approach combining entrepreneurial cognition and social networking.
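As a rough sketch of the kind of logistic regression used, here is a stdlib-only fit on synthetic data. The binary predictors loosely mirror the factors named in the abstract, but the data, coefficients, and function names are invented for illustration and have no connection to the actual APS/GEM data:

```python
import math
import random

def logistic_fit(X, y, lr=0.1, epochs=500):
    """Plain stochastic-gradient-descent logistic regression (an
    illustrative stand-in; a real study would use a stats package)."""
    n_feat = len(X[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi                      # gradient of the log-loss
            b -= lr * err
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
    return w, b

# Hypothetical binary predictors:
# [sees_opportunity, confident_in_skills, knows_entrepreneur]
rng = random.Random(1)
X, y = [], []
for _ in range(400):
    xi = [float(rng.random() < 0.5) for _ in range(3)]
    # Data-generating process: each factor raises the odds of starting up.
    z = -2.0 + 1.5 * xi[0] + 1.0 * xi[1] + 1.2 * xi[2]
    y.append(1.0 if rng.random() < 1.0 / (1.0 + math.exp(-z)) else 0.0)
    X.append(xi)

w, b = logistic_fit(X, y)
# All fitted coefficients should come out positive, matching the
# data-generating odds (positive association with entrepreneurship).
```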


11.
We study all‐pay contests with an exogenous minimal effort constraint where a player can participate in a contest only if his effort (output) is equal to or higher than the minimal effort constraint. Contestants are privately informed about a parameter (ability) that affects their cost of effort. The designer decides about the size and number of prizes. We analyze the optimal prize allocation for the contest designer who wishes to maximize either the total effort or the highest effort. It is shown that if the minimal effort constraint is relatively high, the winner‐take‐all contest in which the contestant with the highest effort wins the entire prize sum does not maximize the expected total effort or the expected highest effort. Rather, a random contest in which the entire prize sum is equally allocated to all the participants yields a higher expected total effort as well as a higher expected highest effort.

12.
Scholars in our field, Operations and Supply Chain Management (OSCM), are under high pressure to show research productivity. At most schools, this productivity is measured by the number of journal articles published. One possible response to such pressure is to improve research efficiency: publishing more journal articles from each data collection effort. In other words, using one dataset for multiple publications. As long as each publication makes a sufficient contribution, and authors ensure transparency in methods and consistency across publications, generating more than one publication from one data collection effort is possible. The aim of this Notes and Debates article, however, is to draw attention to inappropriate reuse of empirical data in OSCM research, to explain its implications and to suggest ways in which to promote research quality and integrity. Based on two cases of extensive data reuse in OSCM, eighteen problematic practices associated with the reuse of data across multiple journal articles are identified. Recommendations on this issue of data reuse are provided for authors, reviewers, editors and readers.

13.
In many of the social sciences it is useful to explore the “working models” or mental schemata that people use to organise items from some cognitive or perceptual domain. With an increasing number of items, versions of the Method of Sorting become important techniques for collecting data about inter-item similarities. Because people do not necessarily all bring the same mental model to the items, there is also the prospect that sorting data can identify a range within the population of interest, or even distinct subgroups. Anthropology provides one tool for this purpose in the form of Cultural Consensus Analysis (CCA). CCA itself proves to be a special case of the “Points of View” approach. Here factor analysis is applied to the subjects’ method-of-sorting responses, obtaining idealized or prototypal modes of organising the items—the “viewpoints”. These idealised modes account for each subject’s data by combining them in proportions given by the subject’s factor loadings. The separate organisation represented by each viewpoint can be made explicit with clustering or multidimensional scaling. The technique is illustrated with job-sorting data from occupational research, and social-network data from primate behaviour.
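Method-of-Sorting responses are commonly summarized as an item-by-item co-occurrence matrix before any factor analysis, clustering, or scaling is applied. A minimal sketch, with hypothetical job-sorting data rather than the paper's, might look like:

```python
from itertools import combinations

def cooccurrence(sorts, items):
    """Build an item-by-item similarity matrix from sorting data:
    similarity = fraction of subjects who placed the two items in the
    same pile. Diagonal entries are left at zero."""
    n = len(items)
    idx = {it: i for i, it in enumerate(items)}
    sim = [[0.0] * n for _ in range(n)]
    for piles in sorts:                      # one subject's sorting
        for pile in piles:
            for a, b in combinations(pile, 2):
                sim[idx[a]][idx[b]] += 1.0
                sim[idx[b]][idx[a]] += 1.0
    m = len(sorts)
    return [[v / m for v in row] for row in sim]

# Three hypothetical subjects, each sorting five jobs into piles.
sorts = [
    [["nurse", "doctor"], ["miner", "farmer"], ["clerk"]],
    [["nurse", "doctor", "clerk"], ["miner", "farmer"]],
    [["nurse", "doctor"], ["miner"], ["farmer", "clerk"]],
]
items = ["nurse", "doctor", "miner", "farmer", "clerk"]
sim = cooccurrence(sorts, items)
# nurse & doctor share a pile in all three sorts, so sim is 1.0 for them.
```

In the Points of View approach, the factor analysis is applied to the subjects' individual responses rather than this pooled matrix, so each subject's own co-occurrence pattern would be kept separate.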

14.
This paper examines how a number of decision context variables affect the cognitive effort required for decision making in dichotomous choice tasks. Subjects are trained in the use of a strategy in which information processing is alternative-based. The correlation between the attributes of the alternatives, and the mean and variance of the difference between the attributes, are manipulated. The results show that the effort needed for decision making increases as the mean of the differences decreases. Yet neither the variance of the differences nor the correlation context affects the decision-making effort with this type of strategy.

15.
In this paper we develop a regularity theory for stationary overlapping generations economies. We show that generically there is an odd number of steady states in which a non-zero amount of nominal debt (fiat money) is passed from generation to generation and an odd number in which there is no nominal debt. We are also interested in non-steady state perfect foresight paths. As a first step in this direction we analyze the behavior of paths near a steady state. We show that generically they are given by a second order difference equation that satisfies strong regularity properties. Economic theory alone imposes little restriction on those paths: With n goods and consumers who live for m periods, for example, the only restriction on the set of paths converging to the steady state is that they form a manifold of dimension no less than one, no more than 2nm.

16.
One frequent application of microarray experiments is the monitoring of gene activity in a cell during the cell cycle or cell division. High throughput gene expression time series data are produced from such microarray experiments. A new computational and statistical challenge for analyzing such gene expression time course data, resulting from cell cycle microarray experiments, is to discover genes that are statistically significantly periodically expressed during the cell cycle. Such a challenge occurs due to the large number of genes that are simultaneously measured, a moderate to small number of measurements per gene taken at different time points and high levels of non-normal random noise inherent in the data. Computational and statistical approaches to discovery and validation of periodic patterns of gene expression are, however, very limited. A good method of analysis should be able to search for significant periodic genes with a controlled family-wise error (FWE) rate or controlled false discovery rate (FDR) and any other variations of FDR, when all gene expression profiles are compared simultaneously. In this review paper, a brief summary of currently used methods in searching for periodic genes will be given. In particular, two methods will be surveyed in detail. The first one is a novel statistical inference approach, the C & G Procedure, which can be used to effectively detect statistically significantly periodically expressed genes when the gene expression is measured on evenly spaced time points. The second one is the Lomb–Scargle periodogram analysis, which can be used to discover periodic genes when the gene profiles are not measured on evenly spaced time points or when there are missing values in the profiles. The ultimate goal of this review paper is to give an exposition of the two surveyed methods for researchers in related fields.
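A bare-bones Lomb–Scargle periodogram for unevenly spaced samples can be written directly from the classical formula. This sketch omits the part that actually matters for the review's purpose, significance testing with FWE/FDR control, and the "expression profile" is a synthetic sinusoid, not real microarray data:

```python
import math
import random

def lomb_scargle(t, y, freqs):
    """Minimal classical Lomb–Scargle periodogram: spectral power of the
    mean-centered series y, sampled at arbitrary times t, at each frequency."""
    ybar = sum(y) / len(y)
    yc = [yi - ybar for yi in y]
    power = []
    for f in freqs:
        w = 2.0 * math.pi * f
        # Time offset tau makes the sine and cosine terms orthogonal.
        tau = math.atan2(sum(math.sin(2 * w * ti) for ti in t),
                         sum(math.cos(2 * w * ti) for ti in t)) / (2.0 * w)
        c = [math.cos(w * (ti - tau)) for ti in t]
        s = [math.sin(w * (ti - tau)) for ti in t]
        cterm = sum(yi * ci for yi, ci in zip(yc, c)) ** 2 / sum(ci * ci for ci in c)
        sterm = sum(yi * si for yi, si in zip(yc, s)) ** 2 / sum(si * si for si in s)
        power.append(0.5 * (cterm + sterm))
    return power

# Unevenly spaced noisy "expression profile" with a true period of 8 units.
rng = random.Random(3)
t = sorted(rng.uniform(0.0, 40.0) for _ in range(60))
y = [math.sin(2 * math.pi * ti / 8.0) + rng.gauss(0.0, 0.3) for ti in t]
freqs = [0.01 + 0.005 * k for k in range(60)]   # 0.01 .. 0.305 cycles/unit
power = lomb_scargle(t, y, freqs)
best = freqs[power.index(max(power))]
# The periodogram peak should land near the true 1/8 = 0.125 cycles/unit.
```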

17.
In imperfectly discriminating contests the contestants contribute effort to win a prize but the highest contributed effort does not necessarily secure a win. The contest success function (CSF) is the technology that translates an individual's effort into his or her probability of winning. This paper provides an axiomatization of CSF when there is the possibility of a draw (the sum of winning probabilities across all contestants does not add up to one).

18.
We develop a twofold analysis of how the information provided by several economic indicators can be used in Markov switching dynamic factor models to identify the business cycle turning points. First, we compare the performance of a fully nonlinear multivariate specification (one‐step approach) with the ‘shortcut’ of using a linear factor model to obtain a coincident indicator, which is then used to compute the Markov switching probabilities (two‐step approach). Second, we examine the role of increasing the number of indicators. Our results suggest that one step is generally preferred to two steps, especially in the vicinity of turning points, although its gains diminish as the quality of the indicators increases. Additionally, we also obtain decreasing returns of adding more indicators with similar signal‐to‐noise ratios. Using the four constituent series of the Stock–Watson coincident index, we illustrate these results for US data. Copyright © 2014 John Wiley & Sons, Ltd.
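The second stage of the 'two-step' shortcut, computing Markov switching probabilities from an already-built coincident indicator, can be sketched with a simple two-state Hamilton-style filter. The means, variance, persistence, and the indicator series below are invented for illustration and are not the Stock–Watson data:

```python
import math

def hamilton_filter(y, mu, sigma, p_stay):
    """Two-state Markov-switching filter: given an indicator series y and
    state-dependent Gaussian means mu = (expansion, recession), recursively
    update the filtered probability of being in each regime."""
    def dens(x, m):
        return math.exp(-0.5 * ((x - m) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    probs = [0.5, 0.5]          # flat initial state distribution
    out = []
    for yt in y:
        # Predict: apply the transition matrix (symmetric persistence p_stay).
        pred = [p_stay * probs[0] + (1 - p_stay) * probs[1],
                (1 - p_stay) * probs[0] + p_stay * probs[1]]
        # Update: weight predicted probabilities by state-conditional densities.
        lik = [pred[0] * dens(yt, mu[0]), pred[1] * dens(yt, mu[1])]
        tot = lik[0] + lik[1]
        probs = [lik[0] / tot, lik[1] / tot]
        out.append(probs[1])    # filtered probability of the recession state
    return out

# Hypothetical indicator: growth near +1 in expansions, -1 in a mid-sample slump.
y = [1.1, 0.9, 1.2, -0.8, -1.1, -0.9, 1.0, 1.2]
rec_prob = hamilton_filter(y, mu=(1.0, -1.0), sigma=0.5, p_stay=0.9)
# The recession probability spikes during the middle observations.
```

The one-step approach the abstract prefers would instead estimate the factor and the switching process jointly, rather than filtering a pre-computed indicator as above.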

19.
The objective of the study was to develop a valid measurement scale for green human resource management (HRM). Even though the common practices of green HRM have been presented in much of the literature, the previous studies focused only on a small number of functions in integrating environmental management with HRM. Additionally, the measurement of green HRM practices still calls for empirical validation. The two‐stage methodology of structural equation modeling in AMOS was employed for data analysis. Exploratory factor analysis revealed seven dimensions of the construct measured by 28 items. Confirmatory factor analysis confirmed the factor structure. The measuring instruments revealed convergent and discriminant validity. Several model fit indices indicated the model fitness. The study provided supplementary evidence on the underlying structure of the construct that can be valuable to researchers and practitioners in this area.

20.
We explore a direct approach to estimating household equivalence scales from income satisfaction data. Our method differs from previous approaches to using satisfaction data for this purpose in that it can be used to directly fit and evaluate closed‐form and non‐parametric equivalence scales of any desired form. Its flexibility makes it easy to consider specific aspects such as income dependence or more specific information on household composition (such as whether household members live in a partner relationship). We estimate and evaluate a number of scales used in the literature. If the equivalence scale is assumed to be independent of income and to depend only on household size, we do not reject the validity of the widely used square‐root scale at conventional significance levels. We also test GESE and GAESE restrictions (Donaldson and Pendakur, 2003, 2006) and investigate in detail to what extent household economies of scale depend on income. Our results suggest that the income dependence differs fundamentally across household types (rising economies of scale for ‘family’ households, falling economies of scale for multi‐adult households without children and no income dependence for other households).
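The square-root scale the abstract refers to is easy to state concretely; a minimal sketch:

```python
import math

def equivalent_income(household_income, household_size):
    """Per-adult-equivalent income under the widely used square-root
    equivalence scale: needs are assumed to grow with the square root
    of household size rather than proportionally."""
    return household_income / math.sqrt(household_size)

# A four-person household with 40,000 has the same equivalent income as a
# single person with 20,000, since the scale factor is sqrt(4) = 2.
family = equivalent_income(40000.0, 4)
single = equivalent_income(20000.0, 1)
```

Income-dependent scales of the kind the paper investigates would let the divisor vary with `household_income` as well, which this fixed scale rules out by construction.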


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)