Similar Documents
 20 similar documents found (search time: 31 ms)
1.
Since the financial crisis in Korea, IT startups focusing on core technology have played an important role in the recovery of Korea’s economy by innovating technologies and creating new jobs. Even though there are many startups, relatively few reach an initial public offering (IPO), and post-IPO performance is mostly declining. Since it is difficult to apply conventional performance measures to very young firms, the IPO has been used as a tool for performance evaluation, and this study adopts the IPO as an early-stage measure of the performance of high-technology startups. The key question is whether an earlier IPO leads to better firm performance and capability. We investigate the relationship between firms’ time to IPO and their performance for three years after the IPO, using a sample of 79 information technology hardware firms founded after 1996 and listed on the KOSDAQ between 2000 and 2004. Four factors that determine the time a firm takes to reach its IPO are identified: the entrepreneurs’ experience, venture capital investment, the startup’s technology sourcing, and its technology portfolio. The findings are as follows. First, patents have positive effects on firms’ post-IPO performance and on their pre-IPO growth. Second, faster technology acquisition via technology alliances has a positive influence on the IPO regardless of internal technologies. Third, concentrating on core technology instead of diversifying helps startup firms mature faster. These results indicate that an efficient initial strategy is critical for a startup’s performance and enhances the market’s credit and confidence in the firm.

2.
This paper investigates the post-offering performance of initial public offerings (IPOs) in the health care industry using a sample of 223 IPOs issued between 1985 and 1996. Statistically insignificant abnormal returns for IPOs, relative both to matched control firms and to a risk-adjusted health care index, are evident for the whole sample. Thus, our empirical results support overall informational efficiency in the IPO market. However, numerical and statistical differences in the IPOs’ abnormal returns are documented in every subgroup specified by issuance year and sector. We conjecture that these differences are due to the growing threat of government intervention and significant structural changes. (JEL11, C11) The views expressed herein are our own and do not necessarily reflect the views of our colleagues.
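The abstract does not spell out the return metric, so purely as an illustration (and not necessarily the measure used in the paper), one common construction in the IPO aftermarket literature is the buy-and-hold abnormal return of IPO firm i against its matched control firm over T post-offering periods:

$$\mathrm{BHAR}_i(T) \;=\; \prod_{t=1}^{T}\bigl(1+R_{i,t}\bigr)\;-\;\prod_{t=1}^{T}\bigl(1+R_{\mathrm{match},t}\bigr),$$

with the cross-sectional average tested against zero; cumulative abnormal returns against a risk-adjusted index are the usual alternative.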

3.
We characterize the equilibrium of the all-pay auction with general convex cost of effort and sequential effort choices. We consider a set of n players who are arbitrarily partitioned into a group of players who choose their efforts ‘early’ and a group of players who choose ‘late’. Only the player with the lowest cost of effort has a positive payoff in any equilibrium. This payoff depends on his own timing vis-à-vis the timing of the others. We also show that the choice of timing can be endogenized, in which case the strongest player typically chooses ‘late’, whereas all other players are indifferent with respect to their choice of timing. In the most prominent equilibrium the player with the lowest cost of effort wins the auction at zero aggregate cost. We thank Dan Kovenock and Luis C. Corchón for discussion and helpful comments. The usual caveat applies. Wolfgang Leininger would like to express his gratitude to the Wissenschaftszentrum Berlin (WZB) for its generous hospitality and financial support.

4.
The aim of this paper is to derive methodology for designing ‘time to event’ experiments. In comparison to estimation, the design aspects of ‘time to event’ experiments have received relatively little attention. We show that gains in the efficiency of parameter estimators and in the use of experimental material can be made using optimal design theory. The types of models considered include classical failure data, accelerated testing situations, and frailty models, each involving covariates which influence the outcome. The objective is to construct an optimal design based on the values of the covariates and the associated model, or indeed a candidate set of models. We consider D-optimality and create compound optimality criteria to derive optimal designs for multi-objective situations which, for example, focus on the number of failures as well as on the estimation of parameters. The approach is motivated and demonstrated using common failure/survival models, for example the Weibull distribution, product assessment and frailty models.
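As an illustration of the design ideas described above, the following sketch compares two-point designs by their D-criterion under a deliberately simplified time-to-event model: an exponential distribution with log-linear rate in a single covariate and no censoring, for which the per-observation Fisher information about (b0, b1) reduces to z z' with z = (1, x). The paper's own Weibull, accelerated-testing and frailty models, and its compound criteria, would require the corresponding information matrices instead; all function and variable names here are illustrative.

```python
import numpy as np
from itertools import combinations_with_replacement

def info_matrix(xs, weights):
    """Fisher information for (b0, b1) in an exponential time-to-event model
    with rate lambda_i = exp(b0 + b1 * x_i) and no censoring.
    Per observation at covariate x the information is z z' with z = (1, x)."""
    M = np.zeros((2, 2))
    for x, w in zip(xs, weights):
        z = np.array([1.0, x])
        M += w * np.outer(z, z)
    return M

def d_criterion(xs, weights):
    """D-criterion: determinant of the (normalized) information matrix."""
    return np.linalg.det(info_matrix(xs, weights))

# Candidate covariate levels (e.g. stress levels in accelerated testing, scaled to [0, 1])
levels = np.linspace(0.0, 1.0, 5)

# Search all equally weighted two-point designs and keep the D-best one
best = max(
    (d_criterion(pair, [0.5, 0.5]), pair)
    for pair in combinations_with_replacement(levels, 2)
)
print("best two-point design:", best[1], "det(information) =", best[0])
```

Under this simplified model the search returns the design that places half the observations at each extreme covariate level, the familiar D-optimal answer for a two-parameter linear predictor.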

5.
Voting operators map n-tuples of subsets of a given set X of candidates (the voters’ choices) into subsets of X (the social choice). This paper characterizes dictatorial voting operators by means of three conditions (the non-emptiness condition A1, the independence condition A2 and the resoluteness condition A3) motivated by the idea of transferring to the social choice properties common to all the voters’ choices. The result is used to refine Lahiri’s (2001) characterization and to derive dictatorial results in three other types of aggregation problems, in which choice functions are transformed into choice functions, binary relations into choices, and binary relations into binary relations. Received: 20 May 2002, Accepted: 5 August 2003, JEL Classification: D70, D71. Antonio Quesada: Present address: Departament d’Economia, Facultat de Ciències Econòmiques i Empresarials, Universitat Rovira i Virgili, Avinguda de la Universitat 1, 43204 Reus (Tarragona), Spain. I would like to express my gratitude to the referees for their contribution to improving this paper. Part of this work was done at the Departament d’Anàlisi Econòmica, Facultat d’Economia, Universitat de València, Avinguda dels Tarongers s/n, 46022 València, Spain.

6.
The best known achievement of the literature on resource-allocating mechanisms and their message spaces is the first rigorous proof of the competitive mechanism’s informational efficiency. In an exchange economy with N persons and K+1 commodities (including a numeraire), that mechanism announces K prices as well as a K-component trade vector for each of N−1 persons, making a total of NK message variables. Trial messages are successively announced and after each announcement each person privately determines, using private information, whether she finds the proposed trades acceptable at the announced prices. When a message is reached with which all are content, then the trades specified in that message take place, and they satisfy Pareto optimality and individual rationality. The literature shows that no (suitably regular) mechanism can achieve the same thing with fewer than NK message variables. In the classic proof, all the candidate mechanisms have the privacy property, and the proof uses that property in a crucial way. ‘Non-private’ mechanisms are, however, well-defined. We present a proof that for N>K, NK remains a lower bound even when we permit ‘non-private’ mechanisms. Our new proof does not use privacy at all. But in a non-private mechanism, minimality of the number of message variables can hardly be defended as the hallmark of informational efficiency, since a non-private mechanism requires some persons to know something about the private information of others in addition to the information contained in the messages. The new proof of the lower bound NK invites a new interpretation of the competitive mechanism’s informational efficiency. We provide a new concept of efficiency which the competitive mechanism exhibits and which does rest on privacy even when N>K. To do so, we first define a class of projection mechanisms, wherein some of the message variables are proposed values of the action to be taken, and the rest are auxiliary variables. The competitive mechanism has the projection property, with a trade vector as its action and prices as the auxiliary variables. A projection mechanism proposes an action; for each proposal, the agents then use the auxiliary variables, together with their private information, to verify that the proposed action meets the mechanism’s goal (Pareto optimality and individual rationality for the competitive mechanism) if, indeed, it does meet that goal. For a given goal, we seek projection mechanisms for which the verification effort (suitably measured) is not greater than that of any other projection mechanism that achieves the goal. We show the competitive mechanism to be verification-minimal within the class of private projection mechanisms that achieve Pareto optimality and individual rationality; that proof does use the privacy of the candidate mechanisms. We also show, under certain conditions, that a verification-minimal projection mechanism achieving a given goal has smallest ‘total communication effort’ (which is locally equivalent to the classic ‘message-space size’) among all private mechanisms that achieve the goal, whether or not they have the projection property.
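The NK count quoted above follows from simple arithmetic, given that (as is standard in this setting) the remaining person's trade is pinned down by the requirement that trades balance:

$$\underbrace{(N-1)K}_{\text{trade vectors for }N-1\text{ persons}} \;+\; \underbrace{K}_{\text{prices}} \;=\; NK .$$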

7.
In this paper, we present an algorithm suitable for analysing the variance of panel data when some observations are either given in grouped form or are missing. The analysis is carried out from the perspective of ANOVA panel data models with general errors. The classification intervals of the grouped observations may vary from one observation to another, so missing observations are in fact a particular case of grouping. The proposed algorithm (1) estimates the parameters of the panel data models; (2) evaluates the covariance matrices of the asymptotic distribution of the time-dependent parameters, assuming that the number of time periods, T, is fixed and the number of individuals, N, tends to infinity, and similarly of the individual parameters when T → ∞ and N is fixed; and, finally, (3) uses these asymptotic covariance matrix estimates to analyse the variance of the panel data.

8.
In the process of coding open-ended questions, the evaluation of interjudge reliability is a critical issue. In this paper, using real data, the behavior of three coefficients of reliability among coders, Cohen’s κ, Krippendorff’s α and Perreault and Leigh’s Ir, is characterized in terms of the number of judges involved and the answer categories defined. The results underline the importance of both variables in assessments of interjudge reliability, as well as the greater adequacy of Perreault and Leigh’s Ir and Krippendorff’s α for marketing and opinion research.
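For concreteness, here is a minimal sketch of the simplest of the three coefficients, Cohen's κ for two coders; Krippendorff's α and Perreault and Leigh's Ir, which the paper finds more adequate for multi-judge settings, follow different formulas and are not shown. The coded data below are hypothetical.

```python
import numpy as np

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders assigning one category per item:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and
    p_e is the agreement expected by chance from the coders' marginal frequencies."""
    codes_a, codes_b = np.asarray(codes_a), np.asarray(codes_b)
    categories = np.union1d(codes_a, codes_b)
    p_o = np.mean(codes_a == codes_b)
    p_e = sum(np.mean(codes_a == c) * np.mean(codes_b == c) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical coding of 10 open-ended answers into categories 0-2 by two judges
judge1 = [0, 1, 2, 1, 0, 2, 1, 1, 0, 2]
judge2 = [0, 1, 2, 0, 0, 2, 1, 2, 0, 2]
print("kappa =", round(cohens_kappa(judge1, judge2), 3))
```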

9.
An agency-theory model of IPO management retention is presented and empirically explored. The model is based upon the differences between the investment public’s and underwriters’ fears of the consequences of management entrenchment and other agency problems. The model suggests that IPO underpricing should be a curvilinear hump-shaped function of retention. A large-sample empirical exploration verifies the curvilinear relation.
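The hump-shaped prediction can be written, purely as an illustrative reduced form (the abstract does not give the model's actual functional form), as a quadratic in the fraction of shares retained by management:

$$\text{Underpricing}_i \;=\; \alpha + \beta_1\,\text{Retention}_i + \beta_2\,\text{Retention}_i^{2} + \varepsilon_i, \qquad \beta_1>0,\ \beta_2<0,$$

so that underpricing first rises and then falls as retention increases.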

10.
A Data Envelopment Analysis (DEA) cost minimization model is employed to estimate the cost to thrift institutions of achieving a rating of ‘outstanding’ under the anti-redlining Community Reinvestment Act, which is viewed as an act of voluntary Corporate Social Responsibility (CSR). There is no difference in overall cost efficiency between ‘outstanding’ and minimally compliant ‘satisfactory’ thrifts. However, the sources of cost inefficiency do differ, and an ‘outstanding’ rating involves annual extra cost of $6.547 million, or 1.2% of total costs. This added cost is the shadow price of CSR since it is not an explicit output or input in the DEA cost model. Before- and after-tax rates of return are the same for the ‘outstanding’ and ‘satisfactory’ thrifts, which implies a recoupment of the extra cost. The findings are consistent with CSR as a management choice based on balancing marginal cost and marginal revenue. An incidental finding is that larger thrifts are less efficient.
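A minimal sketch of the kind of DEA cost-minimization program such a study builds on, under constant returns to scale and with hypothetical data; the paper's actual variable set, price data and returns-to-scale assumption are not given in the abstract, and all names below are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def cost_efficiency(X, Y, W, o):
    """DEA cost minimization under constant returns to scale.
    X: (m x n) observed inputs, Y: (s x n) observed outputs, W: (m x n) input prices,
    o: index of the evaluated decision-making unit (DMU).
    Solves  min_{x, lam} w_o'x  s.t.  Y lam >= y_o,  X lam <= x,  lam >= 0,
    and returns minimal cost / observed cost (= 1 for fully cost-efficient units)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.concatenate([W[:, o], np.zeros(n)])   # minimize priced inputs
    A_out = np.hstack([np.zeros((s, m)), -Y])    # -Y lam <= -y_o  (output constraints)
    A_in = np.hstack([-np.eye(m), X])            #  X lam - x <= 0 (input constraints)
    A_ub = np.vstack([A_out, A_in])
    b_ub = np.concatenate([-Y[:, o], np.zeros(m)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
    return res.fun / float(W[:, o] @ X[:, o])

# Hypothetical data: 2 inputs, 1 output, 4 thrifts, unit input prices
X = np.array([[2.0, 4.0, 3.0, 5.0],
              [3.0, 1.0, 3.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
W = np.ones_like(X)
print([round(cost_efficiency(X, Y, W, o), 3) for o in range(4)])
```

The gap between a unit's cost-efficiency score and its technical-efficiency score is what allows the sources of inefficiency to differ even when overall cost efficiency does not, as the abstract reports.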

11.
Summary. Dynamic exponential family regression provides a framework for nonlinear regression analysis with time-dependent parameters β_0, β_1, …, β_t, …, dim β_t = p. In addition to the familiar conditionally Gaussian model, it covers, for example, models for categorical or counted responses. Parameters can be estimated by extended Kalman filtering and smoothing. In this paper, further algorithms are presented. They are derived from posterior mode estimation of the whole parameter vector (β_0, …, β_t) by Gauss-Newton or Fisher scoring iterations. By factorizing the information matrix into block-bidiagonal matrices, the algorithms can be given in a forward-backward recursive form in which only inverses of “small” p×p matrices occur. Approximate error covariance matrices are obtained by an inversion formula for the information matrix, which is explicit up to p×p matrices. Heinz Leo Kaufmann, my friend and coauthor for many years, died in a tragic rock climbing accident in August 1989. This paper is dedicated to his memory.
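A minimal sketch of the conditionally Gaussian special case mentioned above: a Kalman filter for a dynamic regression with random-walk coefficients. The categorical and count-response models treated in the paper require the extended filtering, scoring and smoothing steps, which are not shown; all names and data here are illustrative.

```python
import numpy as np

def kalman_filter_tvreg(y, X, sigma2_obs, Q, beta0, P0):
    """Kalman filter for a Gaussian dynamic regression with random-walk coefficients:
        beta_t = beta_{t-1} + w_t,   w_t ~ N(0, Q)
        y_t    = x_t' beta_t + v_t,  v_t ~ N(0, sigma2_obs)
    Returns filtered means and covariances of beta_t."""
    T, p = X.shape
    beta, P = beta0.copy(), P0.copy()
    means, covs = [], []
    for t in range(T):
        P = P + Q                            # prediction: random-walk state, covariance grows
        x = X[t]
        S = x @ P @ x + sigma2_obs           # innovation variance (scalar)
        K = P @ x / S                        # Kalman gain (p-vector)
        beta = beta + K * (y[t] - x @ beta)  # update of the filtered mean
        P = P - np.outer(K, x @ P)           # update of the filtered covariance
        means.append(beta.copy())
        covs.append(P.copy())
    return np.array(means), np.array(covs)

# Hypothetical example: intercept plus one covariate, T = 50
rng = np.random.default_rng(0)
T, p = 50, 2
X = np.column_stack([np.ones(T), rng.normal(size=T)])
true_beta = np.cumsum(rng.normal(scale=0.1, size=(T, p)), axis=0) + np.array([1.0, 0.5])
y = np.sum(X * true_beta, axis=1) + rng.normal(scale=0.3, size=T)
means, _ = kalman_filter_tvreg(y, X, 0.3**2, 0.01 * np.eye(p), np.zeros(p), np.eye(p))
print("last filtered beta:", means[-1].round(2))
```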

12.
This study evaluates the economics of the choice of payout initiation mechanism adopted by IPO firms. Our results suggest that IPO firms demonstrate a preference for repurchases over dividends as the specific form of payout initiation. We find, however, that while the market views post-IPO payout initiations favorably, it is indifferent to the specific form of payout mechanism adopted. Further, we find that dividends and repurchases represent distinct payout mechanisms adopted by IPO firms with fundamentally different characteristics and motivations for initiating payouts during the post-IPO phase. Our results suggest that while dividend initiations are primarily driven by life-cycle and catering-theory considerations, signaling theory provides the more likely explanation for payout initiations through share repurchases.

13.
We consider nonparametric estimation of multivariate versions of Blomqvist’s beta, also known as the medial correlation coefficient. For a two-dimensional population, the sample version of Blomqvist’s beta describes the proportion of data which fall into the first or third quadrant of a two-way contingency table with cutting points being the sample medians. Asymptotic normality and strong consistency of the estimators are established by means of the empirical copula process, imposing weak conditions on the copula. Though the asymptotic variance takes a complicated form, we are able to derive explicit formulas for large families of copulas. For the copulas of elliptically contoured distributions we obtain a variance stabilizing transformation which is similar to Fisher’s z-transformation. This allows for an explicit construction of asymptotic confidence bands used for hypothesis testing and eases the analysis of asymptotic efficiency. The computational complexity of estimating Blomqvist’s beta corresponds to the sample size n, which is lower than the complexity of most competing dependence measures.
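A minimal sketch of the bivariate sample version described above (the proportion of points in the first or third quadrant relative to the componentwise sample medians, rescaled to [-1, 1]); the multivariate extensions, asymptotic variance formulas and the variance-stabilizing transformation derived in the paper are not reproduced here.

```python
import numpy as np

def blomqvist_beta(x, y):
    """Sample version of Blomqvist's beta (medial correlation coefficient):
    2 * (proportion of points in the first or third quadrant relative to the
    componentwise sample medians) - 1.  Points tied with a median are ignored
    here; a refined estimator would treat them explicitly."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    q13 = np.mean((x - np.median(x)) * (y - np.median(y)) > 0)
    return 2.0 * q13 - 1.0

# Hypothetical positively dependent sample
rng = np.random.default_rng(1)
z = rng.normal(size=500)
x = z + 0.5 * rng.normal(size=500)
y = z + 0.5 * rng.normal(size=500)
print("estimated Blomqvist beta:", round(blomqvist_beta(x, y), 3))
```

After the two medians are found, the estimator is a single pass over the data, which matches the abstract's remark that the computational complexity corresponds to the sample size n.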

14.
Let X_1, …, X_m and Y_1, …, Y_n be two independent samples from continuous distributions F and G respectively. Using a Hoeffding (1951) type theorem, we obtain the distribution of the vector S = (S_(1), …, S_(n)), where S_(j) = #(X_i’s ≤ Y_(j)) and Y_(j) is the j-th order statistic of the Y sample, under three truncation models: (a) G is a left truncation of F or G is a right truncation of F; (b) F is a right truncation of H and G is a left truncation of H, where H is some continuous distribution function; (c) G is a two-tail truncation of F. Exploiting the relation between S and the vector R of the ranks of the order statistics of the Y sample in the pooled sample, we can obtain exact distributions of many rank tests. We use these to compare the powers of the Hajek test (Hajek 1967), the Sidak-Vondracek test (1957) and the Mann-Whitney-Wilcoxon test. We derive some order relations between the values of the probability functions under each model, and hence find that the tests based on S_(1) and S_(n) are the UMP rank tests for alternative (a). We also find LMP rank tests under alternatives (b) and (c).
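A small illustration of the statistic S defined above, computed on hypothetical continuous samples; for samples without ties the sum of the S_(j)'s coincides with the Mann-Whitney-Wilcoxon count of pairs with X_i ≤ Y_j.

```python
import numpy as np

def s_vector(x, y):
    """S = (S_(1), ..., S_(n)) with S_(j) = #{ X_i <= Y_(j) },
    where Y_(j) is the j-th order statistic of the Y sample."""
    x = np.sort(np.asarray(x, float))
    y_ord = np.sort(np.asarray(y, float))
    # searchsorted on the sorted X sample counts the X_i's not exceeding each Y_(j)
    return np.searchsorted(x, y_ord, side="right")

# Hypothetical continuous samples (no ties, as in the paper's setting)
rng = np.random.default_rng(2)
x = rng.normal(size=8)               # X_1, ..., X_m ~ F
y = rng.normal(loc=0.5, size=6)      # Y_1, ..., Y_n ~ G
S = s_vector(x, y)
print("S =", S)
print("Mann-Whitney U =", S.sum())   # sum of the S_(j)'s for tie-free data
```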

15.
A social design x evokes a response y from a set of individuals. The value of the design is expressed in terms of a social welfare function derived from Arrow’s formulation of social choice. Under certain simplifying assumptions, the social welfare function can be expressed in terms of individuals’ ideal designs. A method for estimating the social welfare function from quite limited empirical evidence is developed. The method is applied to an educational case study. There was considerable variation in individuals’ ideal designs. The components of the social welfare function were estimated: the welfare ideal, the population sensitivity, the population variation, the deviation from the ideal and the welfare ceiling. Methodological problems are discussed.

16.
N. Giri, M. Behara, P. Banerjee. Metrika, 1992, 39(1): 75-84
Summary. Let X = (X_ij) = (X_1, …, X_n)′, with X_i = (X_i1, …, X_ip)′, i = 1, 2, …, n, be a matrix having a multivariate elliptical distribution depending on a convex function q, with parameters 0 and σ. Let ϱ² be the squared multiple correlation coefficient between the first and the remaining p_2 + p_3 = p − 1 components of each X_i. We consider the problem of testing H_0: ϱ² = 0 against alternatives H_1 under which ϱ² > 0, on the basis of X together with n_1 additional observations Y_1 (n_1 × 1) on the first component, n_2 observations Y_2 (n_2 × p_2) on the following p_2 components, and n_3 additional observations Y_3 (n_3 × p_3) on the last p_3 components, and we derive the locally minimax test of H_0 against H_1 as ϱ² → 0 for a given q. This test, in general, depends on the choice of q within the family Q of elliptically symmetric distributions and is not optimality robust for Q.

17.
The Indian leather industry has massive potential for generating employment and achieving high export-oriented growth. However, its economic performance has not been assessed much to date. The present paper attempts to fill this gap by examining the technical efficiency (TE) of individual leather-producing firms for several years since the mid-1980s. Analyzing the industry’s firm-level data with two conventional tools, data envelopment analysis and stochastic frontier analysis, the paper finds a significant positive association between a firm’s size and its TE, but no such clear relation between a firm’s age and its TE. It also finds significant variation in TE across firms in different groups of states as well as under different organizational structures, and observes some technological heterogeneity across states. Although the non-availability of panel data does not allow an assessment of the effects of economic reforms on the performance of Indian leather firms, average firm-level TE seems to be on an increasing path, except for a downswing in the immediate post-reform years.
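The abstract names the two tools without further detail; as one hedged illustration of the stochastic-frontier side, the following sketch maximizes the normal / half-normal frontier log-likelihood of Aigner, Lovell and Schmidt (1977) on simulated data. The paper's own specification (outputs, inputs, inefficiency distribution) is not known from the abstract, and all names and data here are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def sfa_neg_loglik(theta, y, X):
    """Negative log-likelihood of the normal / half-normal stochastic production
    frontier y_i = x_i'beta + v_i - u_i, with v ~ N(0, sv^2) and u ~ |N(0, su^2)|.
    theta = (beta, log su, log sv)."""
    k = X.shape[1]
    beta, su, sv = theta[:k], np.exp(theta[k]), np.exp(theta[k + 1])
    sigma = np.sqrt(su**2 + sv**2)
    lam = su / sv
    eps = y - X @ beta
    ll = np.log(2.0 / sigma) + norm.logpdf(eps / sigma) + norm.logcdf(-eps * lam / sigma)
    return -np.sum(ll)

# Hypothetical firm-level data: log output regressed on log input
rng = np.random.default_rng(3)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
u = np.abs(rng.normal(scale=0.3, size=n))        # inefficiency term
v = rng.normal(scale=0.2, size=n)                # noise term
y = X @ np.array([1.0, 0.6]) + v - u
theta0 = np.concatenate([np.linalg.lstsq(X, y, rcond=None)[0], [np.log(0.2), np.log(0.2)]])
fit = minimize(sfa_neg_loglik, theta0, args=(y, X), method="BFGS")
print("estimates (beta0, beta1, su, sv):",
      np.round(np.concatenate([fit.x[:2], np.exp(fit.x[2:])]), 3))
```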

18.
Liu, Shuangzhe and Neudecker, Heinz. Metrika, 1997, 45(1): 53-66
Extending Scheffé’s simplex-centroid design for experiments with mixtures, we introduce a weighted simplex-centroid design for a class of mixture models. Becker’s homogeneous functions of degree one belong to this class. By applying optimal design theory, we obtain A-, D- and I-optimal allocations of observations for Becker’s models.
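For reference, a short sketch that enumerates the points of Scheffé's (unweighted) simplex-centroid design, on which the paper's weighted version builds; the A-, D- and I-optimal allocations of observations across these points are what the paper derives and are not computed here.

```python
from itertools import combinations

def simplex_centroid_points(q):
    """Scheffe's simplex-centroid design for q mixture components:
    one point per non-empty subset S of components, with proportion 1/|S|
    on each component in S (2**q - 1 points in total)."""
    points = []
    for size in range(1, q + 1):
        for subset in combinations(range(q), size):
            x = [0.0] * q
            for j in subset:
                x[j] = 1.0 / size
            points.append(tuple(x))
    return points

# For q = 3: the 3 pure blends, the 3 binary 50/50 blends, and the overall centroid
for p in simplex_centroid_points(3):
    print(p)
```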

19.
The class of p2 models is suitable for modeling binary relation data in social network analysis. A p2 model is essentially a regression model for bivariate binary responses, featuring within-dyad dependence and correlated crossed random effects to represent heterogeneity of actors. Despite some desirable properties, these models are used less frequently in empirical applications than other models for network data. One possible reason is the model’s limited ability to account for (and explicitly model) structural dependence beyond the dyad, as can be done in exponential random graph models. Another reason, however, may lie in the computational difficulty of estimating such models by the methods proposed in the literature, such as joint maximization methods and Bayesian methods. The aim of this article is to investigate maximum likelihood estimation based on the Laplace approximation approach, which can be refined by importance sampling. Such methods can be implemented efficiently in practice, and the article provides details of a software implementation using R. Numerical examples and simulation studies illustrate the methodology.
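A minimal illustration of the Laplace step referred to above, reduced to a single random intercept in a logistic model; the actual p2 model has bivariate dyadic responses and correlated crossed random effects, so this shows only the basic approximation idea, with hypothetical data and names (the article's R implementation is not reproduced).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def laplace_marginal_loglik(y, eta_fixed, sigma_u):
    """Laplace approximation to the log marginal likelihood of one cluster in a
    random-intercept logistic model: the integral over u of the Bernoulli
    likelihood with p_j(u) = logistic(eta_fixed_j + u) times N(u; 0, sigma_u^2)
    is approximated by exp(h(u_hat)) * sqrt(2*pi / (-h''(u_hat))), where h is the
    log of the integrand and u_hat its mode."""
    def h(u):
        eta = eta_fixed + u
        loglik = np.sum(y * eta - np.log1p(np.exp(eta)))
        logprior = -0.5 * (u / sigma_u) ** 2 - 0.5 * np.log(2 * np.pi * sigma_u**2)
        return loglik + logprior

    u_hat = minimize_scalar(lambda u: -h(u)).x
    d = 1e-4
    # numerical second derivative of h at its mode
    h2 = (h(u_hat + d) - 2 * h(u_hat) + h(u_hat - d)) / d**2
    return h(u_hat) + 0.5 * np.log(2 * np.pi / -h2)

# Hypothetical cluster: 5 binary ties and their fixed-effect linear predictors
y = np.array([1, 0, 1, 1, 0])
eta_fixed = np.array([0.2, -0.4, 0.5, 0.1, -0.2])
print("approx. log marginal likelihood:", round(laplace_marginal_loglik(y, eta_fixed, 1.0), 4))
```

Importance sampling, as mentioned in the abstract, refines this by reweighting draws from the Gaussian approximation around the mode rather than relying on the approximation alone.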

20.
This paper investigates whether various start-up motivations and a country’s level of social security can explain the prevalence of entrepreneurial aspirations. For entrepreneurial aspirations and motivations we use country-level data from the Global Entrepreneurship Monitor (GEM) for the year 2005. We distinguish between the necessity motive, independence motive and increase wealth motive and between aspirations in terms of innovativeness, job growth and export orientation. Our findings indicate that social security negatively affects a country’s supply of ambitious entrepreneurship. Our results also suggest that entrepreneurial aspirations in terms of job growth and export relate positively to the increase wealth motive.
