Similar literature (20 records found)
1.
High technology incubators have been funded in universities by the UK government as part of the ‘third mission’ for higher education (DTI 2000a). The provision of such facilities is premised on the notion that new technology firms achieve success at least in part from the benefits of incubators as rich networked environments where specialist knowledge acquisition can occur. This paper presents an exploration of how this process takes place, based on a case study of the high-tech incubator at the University of Southampton. The paper shows that firm founders adopt different approaches to the networked environment provided by the incubator; in this case the shift from Directorial support to embedding in external networks was significant as firms grew. Taking account of this process should enable incubator managers to develop practices that ensure firms gain maximum advantage from the available resources.

2.
We extend an earlier model of innovation dynamics based on percolation by adding endogenous R&D search by economically motivated firms. The {0, 1} seeding of the technology lattice is now replaced by draws from a lognormal distribution for technology ‘difficulty’. Firms are rewarded for successful innovations by increases in their R&D budget. We compare two regimes. In the first, firms are fixed in a region of technology space. In the second, they can change their location by myopically comparing progress in their local neighborhoods and probabilistically moving to the region with the highest recent progress. We call this the moving or self-organizational regime (SO). The SO regime always outperforms the fixed one, but its performance is a complex function of the ‘rationality’ of firm search (in terms of search radius and speed of movement). The clustering of firms in the SO regime grows rapidly and then fluctuates in a complex way around a high value that increases with the search radius. We also investigate the size distributions of the innovations generated in each regime. In the fixed regime, the distribution is approximately lognormal and certainly not fat tailed. In the SO regime, the distributions are radically different: they are much more highly right skewed and show scaling over at least two decades with a slope around one, for a wide range of parameter settings. Thus we argue that firm self-organization leads to self-organized criticality.
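A minimal simulation sketch of the kind of setup described above: a lattice of lognormally distributed ‘difficulty’, firms that spend an R&D budget on the cell just above the frontier of their niche, a budget reward for success, and (in the self-organizing regime) myopic movement toward the neighbouring niche with the most recent progress. The parameter values, reward rule and movement rule are illustrative assumptions, not the authors' calibration.

```python
import numpy as np

# Illustrative sketch of a percolation-style innovation lattice with lognormal
# 'difficulty' and R&D-searching firms.  Parameters, the reward rule and the
# movement rule are assumptions made for this example, not the paper's model.
rng = np.random.default_rng(0)
WIDTH, HEIGHT, N_FIRMS, STEPS, RADIUS = 50, 500, 20, 3000, 3

def run(self_organizing: bool):
    difficulty = rng.lognormal(mean=1.0, sigma=1.0, size=(HEIGHT, WIDTH))
    frontier = np.zeros(WIDTH, dtype=int)          # highest conquered level per niche
    recent = np.zeros(WIDTH)                       # recent progress per niche
    firms = rng.integers(0, WIDTH, size=N_FIRMS)   # niche each firm searches in
    budget = np.ones(N_FIRMS)                      # per-period R&D spending
    innovations = []
    for _ in range(STEPS):
        recent *= 0.9                              # 'recent progress' decays
        for i in range(N_FIRMS):
            col, row = firms[i], frontier[firms[i]]
            if row >= HEIGHT:
                continue
            difficulty[row, col] -= budget[i]      # spend R&D on the next cell
            if difficulty[row, col] <= 0:          # cell conquered -> innovation
                frontier[col] += 1
                recent[col] += 1.0
                innovations.append(budget[i])      # crude 'size' proxy
                budget[i] += 0.05                  # reward: a larger R&D budget
        if self_organizing:                        # myopic move within RADIUS
            for i in range(N_FIRMS):
                lo, hi = firms[i] - RADIUS, firms[i] + RADIUS + 1
                window = np.arange(lo, hi) % WIDTH # wrap the lattice edges
                firms[i] = window[np.argmax(recent[window])]
    return frontier.mean(), len(innovations)

print("fixed regime :", run(False))
print("SO regime    :", run(True))
```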

3.
4.
Previous studies have shown that regulated firms diversify for reasons that differ from those of unregulated firms. We explore some of these differences by providing a theoretical model that starts by considering the firm–regulator relationship as a problem of incomplete information, in which a regulated incumbent has knowledge that the regulator does not have, but the firm cannot convey hard information about this knowledge. The incumbent faces both market and nonmarket competition from a new entrant. In that context, we show that when the firm faces tough nonmarket competition domestically, going abroad can create a mechanism that makes information transmission to the regulator more credible. International expansion can thus be a way to solve domestic nonmarket issues in addition to being a catalyst for growth. Copyright © 2012 John Wiley & Sons, Ltd.

5.
Previous research examining mixed duopolies shows that the use of an optimal incentive contract for the public firm increases welfare and that privatization reduces welfare. We demonstrate that these results do not generalize to a mixed oligopoly with multiple private firms. We derive the optimal incentive contract for a public firm that weighs both profit and welfare and show that its use may either increase or decrease welfare depending on the number of private firms and the exact nature of costs. We also identify the conditions that determine whether or not privatizing the public firm facing an optimal incentive contract reduces welfare. Copyright © 2008 John Wiley & Sons, Ltd.
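A stylized statement of the incentive-contract idea, in generic notation (a sketch, not the paper's exact specification): the public firm's manager maximizes a weighted combination of own profit and social welfare, and the weight is the contract parameter chosen by the government.

```latex
% Stylized mixed-oligopoly incentive contract (generic notation; firm 0 is
% public, firms 1..n are private).  This is an illustrative sketch only.
\[
  \max_{q_0}\; G_0 \;=\; \theta\,\pi_0(q_0, q_{-0}) + (1-\theta)\,W(q_0, q_{-0}),
  \qquad 0 \le \theta \le 1,
\]
\[
  W \;=\; \underbrace{\int_0^{Q} P(s)\,ds - P(Q)\,Q}_{\text{consumer surplus}}
        \;+\; \sum_{i=0}^{n} \pi_i ,
  \qquad Q = \sum_{i=0}^{n} q_i .
\]
% theta = 1 reproduces pure profit maximisation (privatised behaviour) and
% theta = 0 pure welfare maximisation; the welfare-maximising theta depends on
% the number of private firms n and on the cost structure.
```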

6.
The main objective of this study is to investigate the impact of corporate research and development (R&D) activities on firm performance, measured by labour productivity. To this end, the stochastic frontier technique is applied to a unique unbalanced longitudinal dataset comprising top European R&D investors over the period 2000–2005. In this framework, the study quantifies the technical inefficiency of individual firms. From a policy perspective, the results suggest that if the aim is to leverage firms’ productivity, the emphasis should be put on supporting corporate R&D in high-tech sectors and, to some extent, in medium-tech sectors. Corporate R&D in the low-tech sector, on the other hand, is found to have only a minor effect on productivity; instead, encouraging investment in fixed assets appears important for the productivity of low-tech industries. Hence, the allocation of support for corporate R&D seems to be as important as its overall increase, and an ‘erga omnes’ approach across all sectors appears inappropriate. With regard to technical efficiency, however, R&D intensity is found to be a pivotal factor in explaining firm efficiency, and this turns out to be true for all industries.
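A generic sketch of the stochastic frontier framework such a study estimates, with labour productivity as the dependent variable; the regressors and distributional choices shown here are illustrative assumptions, not the study's exact specification.

```latex
% Generic stochastic production frontier with a one-sided inefficiency term
% (illustrative; the regressors and functional form are assumptions).
\[
  \ln y_{it} \;=\; \beta_0 + \beta_1 \ln K_{it} + \beta_2 \ln R_{it}
                 + v_{it} - u_{it},
  \qquad v_{it} \sim N(0,\sigma_v^2), \quad u_{it} \ge 0,
\]
\[
  \mathrm{TE}_{it} \;=\; \exp(-u_{it}) \;\in\; (0,1].
\]
% y: labour productivity, K: fixed assets per employee, R: R&D capital per
% employee.  Technical efficiency TE is recovered from the estimated one-sided
% error u (e.g. half-normal or truncated-normal).
```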

7.
A firm issues bonds before undertaking a risky continuous investment project that is costly to expand or contract later. The firm’s existing debt load causes it to install a smaller capacity because equity has limited liability. This lowers debt value, but such a cost is ultimately borne by equityholders because debtholders rationally anticipate equityholders’ future behavior. The firm’s choice of debt level balances this agency cost against the tax shield benefit. As the firm incurs higher costs to later expand capacity, its growth option value becomes lower. The simulation results of this article are in line with Myers’ (1977) conjecture that a firm’s debt capacity is inversely related to its growth option value.

8.
Broadband access is widely considered to be a productivity-enhancing factor, but there are few firm-level estimates of its benefits. We use a large micro-survey of firms linked to longitudinal firm financial data to determine the impact that broadband access has on firm productivity. Propensity score matching is used to control for factors, including the firm’s own lagged productivity, that determine a firm’s internet access choice. Instrumental variables estimates are employed as a robustness check. Results indicate that broadband adoption boosts firm productivity by 7–10%; effects are consistent across urban versus rural locations and across high versus low knowledge-intensive sectors.
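A minimal sketch of the propensity-score-matching step on synthetic data, assuming a binary broadband indicator, lagged productivity and an urban dummy as covariates, and one-to-one nearest-neighbour matching on the estimated score; the variable names, data-generating process and matching rule are illustrative assumptions, not the study's actual data or procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Illustrative propensity-score matching on synthetic firm data; covariates,
# treatment assignment and the outcome equation are made up for the example.
rng = np.random.default_rng(1)
n = 5000
lagged_prod = rng.normal(size=n)                   # firm's lagged productivity
urban = rng.integers(0, 2, size=n)                 # urban location dummy
p_adopt = 1 / (1 + np.exp(-(0.8 * lagged_prod + 0.5 * urban - 0.3)))
broadband = rng.binomial(1, p_adopt)               # selective adoption
productivity = 0.6 * lagged_prod + 0.08 * broadband + rng.normal(0, 0.5, n)

X = np.column_stack([lagged_prod, urban])

# 1. Estimate the propensity score P(broadband = 1 | X).
score = LogisticRegression().fit(X, broadband).predict_proba(X)[:, 1]

# 2. Match each adopter to the non-adopter with the closest score.
treated = np.where(broadband == 1)[0]
control = np.where(broadband == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(score[control].reshape(-1, 1))
_, idx = nn.kneighbors(score[treated].reshape(-1, 1))
matched_control = control[idx.ravel()]

# 3. Average treatment effect on the treated: mean outcome gap across pairs.
att = (productivity[treated] - productivity[matched_control]).mean()
print(f"estimated ATT of broadband on productivity: {att:.3f}")
```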

9.
In this study it is argued that the perceived distribution of opinions among others is important for opinion research. Three different ways of measuring the perception of opinion distributions in survey research are compared: (a) by means of a question asking what most people think about an issue, (b) by means of a question asking how many people are perceived to agree with an issue statement, and (c) by means of ‘line-production-boxes’, a special version of magnitude estimation. The results indicate that ‘line-production-boxes’ can improve data quality, but they also have some drawbacks that will have to be dealt with. ‘Line-production-boxes’ give a wealth of information about individual differences in the forms of perceived opinion distributions. Although the normal distribution is used often, many other distribution forms are also used. The method of ‘line-production-boxes’ is compared with the method of estimating percentage points. Although high correlations suggest good concurrent validity, some systematic differences do exist. New research directions are suggested.

10.
Acquisition is one way for entrepreneurial firms to capture the assets needed to achieve their strategic objectives. We investigate shareholder value creation by Spanish listed firms in response to announcements of acquisitions over the period 1991–2006. As in foreign markets, bidders earn insignificant average abnormal returns regardless of the pricing model used in the estimation procedure. When we relate these results to company and transaction characteristics, our evidence suggests that the listing status of the target firm is a critical factor in the strategic decision to acquire a company. This listing status effect is mainly associated with the fact that unlisted firms tend to be smaller, lesser-known firms, and thus suffer from a lack of competition in the market for corporate control. Consequently, the payment of lower premiums and the possibility of diversifying shareholders’ portfolios lead to unlisted firm acquisitions being viewed as value-orientated transactions, which has major implications for managers.
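For reference, the standard market-model abnormal-return calculation that underlies event studies of this kind (a textbook sketch; the paper also considers other pricing models in estimation):

```latex
% Market-model abnormal returns around an acquisition announcement
% (generic event-study sketch, not the authors' exact estimation).
\[
  AR_{it} \;=\; R_{it} - \bigl(\hat{\alpha}_i + \hat{\beta}_i R_{mt}\bigr),
  \qquad
  CAR_i(\tau_1,\tau_2) \;=\; \sum_{t=\tau_1}^{\tau_2} AR_{it}.
\]
% alpha_i and beta_i are estimated over a pre-event window against the market
% return R_m; the cross-sectional average CAR over bidders is tested against
% zero.
```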

11.
This paper examines whether rate-of-return regulation alters the input quantities firms use to produce their selected output level when the corresponding input prices change, in a manner similar to the Le Chatelier principle. More specifically, is the change in a rate-regulated firm’s input quantity due to a change in its input price less price elastic than the corresponding change for an unregulated firm? We follow Färe and Logan (1986) and Nelson and Wohar (1983) in estimating a rate-regulated cost function and capital input share system of equations. Using a 1992–2000 panel of 34 US major investor-owned electric utilities, the empirical results indicate that the regulated own-price elasticities of demand for labor and fuel are smaller in absolute value than their corresponding unregulated own-price elasticities of demand (a Le Chatelier principle type effect). Having a fuel clause (1) reduces the firm’s willingness to substitute from fuel to either non-fuel input (capital or labor) when the price of fuel rises, and (2) enhances the firm’s willingness to substitute from non-fuel inputs to fuel when the price of non-fuel inputs rises.
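The Le Chatelier-type hypothesis being tested, written in generic elasticity notation (a sketch of the comparison, not the estimated cost-share system):

```latex
% Le Chatelier-type hypothesis for input i (labor, fuel): demand responds less
% to its own price under rate-of-return regulation than without it.
\[
  \bigl|\varepsilon^{R}_{ii}\bigr| \;\le\; \bigl|\varepsilon^{U}_{ii}\bigr|,
  \qquad
  \varepsilon_{ii} \;=\; \frac{\partial \ln x_i}{\partial \ln w_i},
\]
% x_i: quantity of input i, w_i: its price; R = rate-of-return regulated,
% U = unregulated input demand.
```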

12.
A classic puzzle in the economic theory of the firm concerns the fundamental cause of decreasing returns to scale. If a plant producing product quantity X at cost C can be replicated as often as desired, then the quantity rX need never cost more than rC. Traditionally the firm is imagined to take its identity from a fixed non-replicable input, namely a ‘top manager’; as more plants or divisions are added, the communication and computation burden imposed on the top manager (who has information not possessed by the divisions) grows more than proportionately. Decreasing returns are experienced as the top manager hires more variable inputs to cope with the rising burden. Suppose it turns out, however, that when the divisions are assembled, and are given exactly the same totally independent tasks that they fulfilled when they were autonomous, then a saving can be achieved if they adopt a joint procedure for performing those tasks rather than replicating their previous separate procedures. Then the top manager's rising burden must be shown to be particularly onerous; otherwise there may actually be increasing returns. We show that for a certain model of the information-processing procedure used by the separate divisions and by the firm, there may indeed be such an odd unexpected saving. The saving occurs with respect to the size of the language in which members of each division, or of the firm, communicate with one another, provided that language is finite. If instead the language is a continuum then the saving cannot occur, provided that the procedures used obey suitable ‘smoothness’ conditions. We show that the saving for the finite case can be ruled out in two ways: by requiring the procedures used to obey a regularity condition that is a crude analogue of the smoothness conditions we impose on the continuum procedures, or by insisting that the procedure used be a ‘deterministic’ protocol. Such a protocol prescribes a conversation among the participants, in which a participant has only one choice whenever that participant has to make an announcement to the others. The results suggest that a variety of information-processing models will have to be studied before the traditional explanation for decreasing returns to scale is understood in a rigorous way.

13.
The succession process in family firms has been identified as by far the most critical phase in the family business life-cycle (e.g. Morris et al. Journal of Business Venturing 18:513–531, 1997; Wang et al. 2000) and characterized as the period in which most family firm fatalities occur (Handler and Kram Family Business Review 1:361–381, 1988). This paper is an empirical study of Greek family firms and seeks to identify the critical success factors that have a major impact on the outcome of a generational transition in the leadership of the family firm. Based on an integrated conceptual framework proposed by Pyromalis et al. (2006), we test the impact of five factors, namely the incumbent’s propensity to step aside, the successor’s willingness to take over, positive family relations and communication, succession planning, and the successor’s appropriateness and preparation, on both the satisfaction of the stakeholders with the succession process and the effectiveness of the succession process per se. The results provide useful insight and confirm the importance of the aforementioned factors by mapping a safe passage through the family business succession process, thereby contributing not only to the overall family business literature but also generating strong arguments in favor of the family firm as an integral entrepreneurial element of a region’s sustainable economic development.

14.
Although the Big Five Questionnaire for Children (BFQ-C) (C. Barbaranelli et al., Manuale del BFQ-C. Big Five Questionnaire Children, O.S. Organizazioni, Firenze, 1998) is an ordinal scale, its dimensionality has often been studied using factor analysis with Pearson correlations. In contrast, the present study takes this ordinal metric into account and examines the dimensionality of the scale using factor analysis based on a matrix of polychoric correlations. The sample comprised 852 subjects (40.90% boys and 59.10% girls). As in previous studies, the results obtained through exploratory factor analysis revealed a five-factor structure (extraversion, agreeableness, conscientiousness, emotional instability and openness). However, the results of the confirmatory factor analysis were consistent with both a four- and a five-factor structure, the former showing a slightly better fit and adequate theoretical interpretation. These data confirm the need for further research as to whether the factor ‘Openness’ should be maintained as an independent factor (five-factor model), or whether it would be best to omit it and distribute its items among the factors ‘Extraversion’ and ‘Conscientiousness’ (four-factor model).
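A sketch of how a single polychoric correlation between two ordinal items can be estimated in two steps (thresholds from the marginal proportions, then maximum likelihood over the latent correlation); in practice a dedicated package would build the full polychoric matrix for all items before factoring it. The toy data and the clipping of extreme thresholds at ±8 are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import minimize_scalar

# Two-step polychoric correlation for two ordinal items (illustrative sketch;
# a real analysis would use a dedicated package over the full item set).
def polychoric(x, y, n_cats=5):
    table = np.zeros((n_cats, n_cats))
    for a, b in zip(x, y):
        table[a, b] += 1
    # Step 1: thresholds from marginal cumulative proportions (clipped, not +-inf).
    def thresholds(margin):
        cum = np.cumsum(margin) / margin.sum()
        return np.clip(norm.ppf(np.concatenate(([0.0], cum))), -8, 8)
    tx, ty = thresholds(table.sum(axis=1)), thresholds(table.sum(axis=0))
    # Step 2: maximise the bivariate-normal likelihood of the cell counts over rho.
    def neg_loglik(rho):
        cov = [[1.0, rho], [rho, 1.0]]
        bvn = lambda a, b: multivariate_normal.cdf([a, b], mean=[0, 0], cov=cov)
        ll = 0.0
        for i in range(n_cats):
            for j in range(n_cats):
                if table[i, j] == 0:
                    continue
                p = (bvn(tx[i + 1], ty[j + 1]) - bvn(tx[i], ty[j + 1])
                     - bvn(tx[i + 1], ty[j]) + bvn(tx[i], ty[j]))
                ll += table[i, j] * np.log(max(p, 1e-12))
        return -ll
    return minimize_scalar(neg_loglik, bounds=(-0.99, 0.99), method="bounded").x

# Toy ordinal items driven by a latent correlation of 0.6.
rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=800)
cuts = [-1.0, -0.3, 0.3, 1.0]
x, y = np.digitize(z[:, 0], cuts), np.digitize(z[:, 1], cuts)
print(f"estimated polychoric correlation: {polychoric(x, y):.3f}")  # close to 0.6
```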

15.
Within the class of weighted averages of ordered measurements on criteria of ‘equal importance’, we discuss aggregators that help assess to what extent ‘most’ criteria are satisfied, or to what extent a ‘good performance across the board’ is attained. All aggregators discussed are compromises between the simple mean and the very demanding minimum score. The weights of the aggregators are described in terms of the convex polytopes induced by the characteristics of the aggregators, such as concavity. We take the barycentres of the polytopes as representative weights.
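A small sketch of this family of aggregators as ordered weighted averages: the simple mean, the minimum and a ‘most criteria satisfied’ compromise differ only in how the weight mass is placed on the ordered scores. The compromise weights below are illustrative, not the barycentric weights derived in the paper.

```python
import numpy as np

# Ordered weighted averaging (OWA): sort the criterion scores from worst to
# best and take a weighted average.  The compromise weights are illustrative;
# the paper derives representative weights as barycentres of weight polytopes.
def owa(scores, weights):
    scores = np.sort(np.asarray(scores, dtype=float))    # ascending: worst first
    weights = np.asarray(weights, dtype=float)
    assert scores.shape == weights.shape and np.isclose(weights.sum(), 1.0)
    return float(scores @ weights)

scores = [0.9, 0.8, 0.4, 0.95, 0.7]     # performance on 5 equally important criteria
n = len(scores)

mean_w = np.full(n, 1 / n)              # simple mean
min_w = np.eye(n)[0]                    # all mass on the worst score = minimum
most_w = np.array([0.4, 0.3, 0.2, 0.1, 0.0])  # compromise stressing weaker scores

print("mean    :", owa(scores, mean_w))   # 0.75
print("minimum :", owa(scores, min_w))    # 0.40
print("'most'  :", owa(scores, most_w))   # 0.62, between the two
```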

16.
This paper discusses ways forward in terms of making efficiency measurement in the area of health care more useful. Options are discussed in terms of the potential introduction of guidelines for the undertaking of studies in this area, in order to make them more useful to policy makers and those involved in service delivery. The process of introducing such guidelines is discussed using the example of the development of guidelines in the economic evaluation of health technologies. This presents two alternative ways forward: ‘revolution’, the establishment of a panel to set initial guidelines, or ‘evolution’, the more gradual development of such guidelines over time. The third alternative, ‘status quo’, representing the current state of play, is seen as the base case scenario. It is concluded that although we are quite a way on in terms of techniques and publications, perhaps revolution, followed by evolution, is the way forward.

17.
A Data Envelopment Analysis (DEA) cost minimization model is employed to estimate the cost to thrift institutions of achieving a rating of ‘outstanding’ under the anti-redlining Community Reinvestment Act, which is viewed as an act of voluntary Corporate Social Responsibility (CSR). There is no difference in overall cost efficiency between ‘outstanding’ and minimally compliant ‘satisfactory’ thrifts. However, the sources of cost inefficiency do differ, and an ‘outstanding’ rating involves an annual extra cost of $6.547 million, or 1.2% of total costs. This added cost is the shadow price of CSR, since CSR is not an explicit output or input in the DEA cost model. Before- and after-tax rates of return are the same for ‘outstanding’ and ‘satisfactory’ thrifts, which implies recoupment of the extra cost. The findings are consistent with CSR as a management choice based on balancing marginal cost and marginal revenue. An incidental finding is that larger thrifts are less efficient.
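A minimal sketch of the DEA cost-minimization linear program for one decision-making unit under constant returns to scale; the tiny synthetic data set, the CRS assumption and the use of scipy's linprog are illustrative choices, not the thrift data or the exact model of the paper.

```python
import numpy as np
from scipy.optimize import linprog

# DEA cost minimisation for one unit under constant returns to scale
# (illustrative sketch with made-up data).  Decision variables:
# [x_1..x_m (cost-minimising inputs), lambda_1..lambda_n (reference weights)].
X = np.array([[2.0, 4.0, 3.0, 5.0],      # input 1 used by the 4 observed units
              [3.0, 1.0, 2.0, 4.0]])     # input 2
Y = np.array([[1.0, 1.0, 1.5, 2.0]])     # single output
w = np.array([1.0, 2.0])                 # input prices faced by the evaluated unit
unit = 3                                 # evaluate the 4th unit

m, n = X.shape
c = np.concatenate([w, np.zeros(n)])     # minimise w'x; lambdas only in constraints

# X @ lam - x <= 0 : the reference technology can operate with the chosen inputs.
A_ub = np.hstack([-np.eye(m), X])
b_ub = np.zeros(m)
# -Y @ lam <= -y0 : the reference point produces at least the unit's output.
A_ub = np.vstack([A_ub, np.hstack([np.zeros((Y.shape[0], m)), -Y])])
b_ub = np.concatenate([b_ub, -Y[:, unit]])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (m + n), method="highs")
min_cost, actual_cost = res.fun, w @ X[:, unit]
print(f"cost efficiency of unit {unit}: {min_cost / actual_cost:.3f}")
```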

18.
The paper is a preliminary research report and presents a method for generating new records using an evolutionary algorithm (close to, but different from, a genetic algorithm). This method, called the Pseudo-Inverse Function (in short, P-I Function), was designed and implemented at Semeion Research Centre (Rome). The P-I Function is a method for generating new (virtual) data from a small set of observed data. It can be of aid when budget constraints limit the number of interviewees, when a population shows some sociologically interesting trait but its small size can seriously affect the reliability of estimates, or when secondary analysis is carried out on small samples. The field of application is research designs with one or more dependent variables and a set of independent variables. The estimation of new cases takes place through the maximization of a fitness function and yields as many ‘virtual’ cases as needed, which reproduce the statistical traits of the original population. The algorithm used by the P-I Function is known as the Genetic Doping Algorithm (GenD), designed and implemented by Semeion Research Centre; among its features is an innovative crossover procedure, which tends to select individuals with average fitness values rather than those showing the best values at each ‘generation’. A particularly thorough research design was adopted: (1) the observed sample is split in half to obtain a training and a testing set, which are analysed by means of a back-propagation neural network; (2) testing is performed to find out how good the parameter estimates are; (3) a 10% sample is randomly extracted from the training set and used as a reduced training set; (4) on this narrow basis, GenD calculates the pseudo-inverse of the estimated parameter matrix; (5) ‘virtual’ data are tested against the testing data set (which has never been used for training). The algorithm has been tested on particularly difficult ground, since the data set used as a basis for generating ‘virtual’ cases counts only 44 respondents, randomly sampled from a broader data set taken from the General Social Survey 2002. The major result is that networks trained on the ‘virtual’ resample show a model fit as good as that of the observed data, though ‘virtual’ and observed data differ on some features. It can be seen that GenD ‘refills’ the joint distribution of the independent variables, conditioned by the dependent one. This paper is the result of close collaboration among all authors. Cinzia Meraviglia wrote §1, 3, 4, 6, 7 and 8; Giulia Massini wrote §5; Daria Croce performed some elaborations with neural networks and linear regression; Massimo Buscema wrote §2.

19.
Inspired by the success of geographical clustering in California, many governments pursue cluster policy in the hope of building the next Silicon Valley. In this paper we critically assess the relationship between geographical clustering and public policy. With the help of a range of theoretical insights and case study examples we show that cluster policy is in fact a risky venture, especially when it attempts to copy the success of regional ‘best practices’. We therefore advise policy makers to move away from the Silicon Valley model and to modestly start from a place-specific approach of ‘Regional Realism’.

20.
We use a stochastic frontier model with firm-specific technical inefficiency effects in a panel framework (Battese and Coelli in Empir Econ 20:325–332, 1995) to assess two popular probability of bankruptcy (PB) measures, one based on the Merton model (Merton in J Financ 29:449–470, 1974) and one on the discrete-time hazard model (DHM; Shumway in J Bus 74:101–124, 2001). Three important results are obtained from our empirical studies. First, a firm with a higher PB generally has lower technical efficiency. Second, for an ex-post bankrupt firm, its PB tends to increase and its technical efficiency of production tends to decrease as the time to its bankruptcy draws near. Finally, the information content about a firm’s technical inefficiency provided by the PB based on the DHM is significantly greater than that based on the Merton model. Given this last result and the fact that economic-based efficiency measures are reasonable indicators of the long-term health and prospects of firms (Baek and Pagán in Q J Bus Econ 41:27–41, 2002), we conclude that the PB based on the DHM is a better credit risk proxy for firms.
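A sketch of the Merton-style probability-of-bankruptcy calculation for a single firm, taking the asset value and asset volatility as given (in practice they are backed out from equity value and equity volatility by solving the Merton option-pricing equations); all input numbers are made up.

```python
import numpy as np
from scipy.stats import norm

# Merton (1974)-style probability of bankruptcy for one firm (sketch).
# Assumes the market value of assets V and asset volatility sigma_V are known;
# in practice they are inferred from equity value and equity volatility.
def merton_pb(V, F, mu, sigma_V, T=1.0):
    """P(asset value < face value of debt at horizon T) under GBM dynamics."""
    dd = (np.log(V / F) + (mu - 0.5 * sigma_V**2) * T) / (sigma_V * np.sqrt(T))
    return norm.cdf(-dd)          # distance to default mapped to a probability

V = 120.0        # market value of assets (illustrative)
F = 100.0        # face value of debt due at T
mu = 0.06        # expected asset return (drift)
sigma_V = 0.25   # asset volatility

dd = (np.log(V / F) + (mu - 0.5 * sigma_V**2)) / sigma_V   # T = 1
print(f"distance to default: {dd:.2f}")
print(f"probability of bankruptcy: {merton_pb(V, F, mu, sigma_V):.3%}")
```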
