51.
Under certain conditions, a broad class of qualitative and limited dependent variable models can be consistently estimated by the method of moments using a non-iterative correction to the ordinary least squares estimator, with only a small loss of efficiency compared to maximum likelihood estimation. The class of models is that obtained from a classical multinormal regression by any type of censoring or truncation and includes the tobit, probit, two-limit probit, truncated regression, and some variants of the sample selection models. The paper derives the estimators and their asymptotic covariance matrices.
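As a hedged illustration of the flavor of such a non-iterative correction (a sketch, not the paper's general estimator), take the simplest member of the class, a tobit censored at zero: the OLS slope on the censored data is attenuated approximately by the proportion of uncensored observations, so dividing by that proportion gives a one-step moment-based fix. All variable names and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20_000
x = rng.normal(size=n)
beta0, beta1, sigma = 0.5, 1.0, 1.0              # illustrative true values

y_star = beta0 + beta1 * x + sigma * rng.normal(size=n)  # latent regression
y = np.maximum(y_star, 0.0)                      # tobit censoring at zero

X = np.column_stack([np.ones(n), x])
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)    # inconsistent under censoring

# Non-iterative moment correction: rescale the OLS slope by the fraction
# of uncensored observations (a first-order approximation, not exact).
p_uncensored = (y > 0).mean()
b1_corrected = b_ols[1] / p_uncensored

print(f"true slope {beta1:.2f}, OLS {b_ols[1]:.3f}, corrected {b1_corrected:.3f}")
```

With values like these the corrected slope should land much closer to the truth than raw OLS; the paper's estimator generalizes the idea to the full censored and truncated multinormal class and supplies the asymptotic covariance matrices.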
52.
The normal-gamma stochastic frontier model was proposed in Greene (1990) and Beckers and Hammond (1987) as an extension of the normal-exponential model proposed in the original derivations of the stochastic frontier by Aigner, Lovell and Schmidt (1977). The normal-gamma model has the virtue of providing a richer and more flexible parameterization of the inefficiency distribution in the stochastic frontier model than either of the canonical forms, normal-half normal and normal-exponential. However, several attempts to operationalize the normal-gamma model have met with very limited success, as the log likelihood possesses a significant degree of complexity. This note proposes an alternative approach to estimation of this model based on maximum simulated likelihood, in contrast to earlier attempts, which approached the problem by direct maximization.
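A minimal sketch of what maximum simulated likelihood means here, under assumed notation (production frontier y = x′β + v − u with v normal and u gamma): the intractable integral over u is replaced by an average of normal densities evaluated at inverse-CDF gamma draws that are held fixed across optimizer iterations. The names, starting values, and the crude Nelder-Mead search below are illustrative assumptions, not the note's implementation.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
n, R = 1_000, 200
x = rng.normal(size=n)
beta0, beta1, sigma_v, P, theta = 1.0, 0.5, 0.3, 2.0, 0.25  # illustrative truth

u = rng.gamma(shape=P, scale=theta, size=n)         # gamma inefficiency
y = beta0 + beta1 * x + sigma_v * rng.normal(size=n) - u

U = rng.uniform(size=(n, R))                        # fixed uniforms for the simulator

def neg_sim_loglik(params):
    b0, b1, log_sv, log_P, log_th = params
    sv, Pg, th = np.exp(log_sv), np.exp(log_P), np.exp(log_th)  # enforce positivity
    eps = (y - b0 - b1 * x)[:, None]                # composed error e = v - u
    u_r = stats.gamma.ppf(U, a=Pg, scale=th)        # inverse-CDF gamma draws of u
    f = stats.norm.pdf((eps + u_r) / sv) / sv       # normal density of v given u
    return -np.sum(np.log(f.mean(axis=1) + 1e-300)) # negative simulated log likelihood

start = np.array([y.mean(), 0.0, np.log(0.5), np.log(1.0), np.log(0.5)])
res = optimize.minimize(neg_sim_loglik, start, method="Nelder-Mead",
                        options={"maxiter": 5_000})
print(res.x[:2], np.exp(res.x[2:]))                 # beta, then (sigma_v, P, theta)
```

Holding the uniforms fixed keeps the simulated objective smooth in the parameters, which is what makes direct maximization of the simulated likelihood feasible where the exact log likelihood is not.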
53.
54.
This paper examines the impact of an endogenous cost function variable on the inefficiency estimates generated by stochastic frontier analysis (SFA). The specific variable of interest in this application is endogenous quality in nursing homes. We simulate a dataset based on the characteristics of for-profit nursing homes in California, which we use to assess the impact of an endogenous regressor on SFA-generated inefficiency estimates under a variety of scenarios, including variations in the strength and direction of the endogeneity and whether the correlation is with the random noise or with the inefficiency component of the error term. We compare each of these cases when quality is included in, and when it is excluded from, the cost equation. We provide evidence of the impact of endogeneity on the inefficiency estimates yielded by SFA under these various scenarios, both with the endogenous regressor included in the model and with it excluded.
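A compact sketch of the kind of data-generating process such a simulation study needs, here for the scenario in which quality is correlated with the inefficiency component; the coefficients, the correlation strength `rho`, and all names are illustrative assumptions, not the paper's calibration to the California data.

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho = 5_000, 0.6                        # rho: strength/direction of endogeneity

z = rng.normal(size=n)                     # exogenous cost shifter (e.g. log output)
u = np.abs(rng.normal(scale=0.4, size=n))  # half-normal inefficiency component
v = rng.normal(scale=0.2, size=n)          # idiosyncratic noise component

# Endogenous quality: constructed to correlate with u (set rho < 0 to flip
# the direction, or mix with v instead for the noise-endogeneity scenario).
u_std = (u - u.mean()) / u.std()
quality = rho * u_std + np.sqrt(1.0 - rho**2) * rng.normal(size=n)

# Cost frontier: inefficiency u pushes observed cost above the frontier.
cost = 1.0 + 0.8 * z + 0.3 * quality + v + u
```

Fitting a stochastic cost frontier to `cost` with and without `quality` on the right-hand side, while varying `rho`, reproduces the kind of comparisons described above.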
55.
A stochastic frontier model with correction for sample selection
Heckman’s (Ann Econ Soc Meas 4(5), 475–492, 1976; Econometrica 47, 153–161, 1979) sample selection model has been employed in three decades of applications of linear regression studies. This paper builds on this framework to obtain a sample selection correction for the stochastic frontier model. We first show a surprisingly simple way to estimate the familiar normal-half normal stochastic frontier model using maximum simulated likelihood. We then extend the technique to a stochastic frontier model with sample selection. In an application that seems superficially obvious, the method is used to revisit the World Health Organization data (WHO in The World Health Report, WHO, Geneva 2000; Tandon et al. in Measuring the overall health system performance for 191 countries, World Health Organization, 2000) where the sample partitioning is based on OECD membership. The original study pooled all 191 countries. The OECD members appear to be discretely different from the rest of the sample. We examine the difference in a sample selection framework.
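The "surprisingly simple" first step can be written out, under assumed notation for a production frontier y_i = x_i′β + v_i − u_i with v_i ~ N(0, σ_v²) and half-normal u_i = σ_u|Z_i|, as a simulated log likelihood (a sketch; the selection-corrected version replaces each normal density with the joint contribution of the selection and frontier equations):

```latex
\log L_S(\beta,\sigma_v,\sigma_u)
  = \sum_{i=1}^{n} \log\!\left[ \frac{1}{R} \sum_{r=1}^{R}
      \frac{1}{\sigma_v}\,
      \phi\!\left(\frac{y_i - \mathbf{x}_i'\beta + \sigma_u |Z_{ir}|}{\sigma_v}\right)
    \right],
  \qquad Z_{ir} \sim N(0,1).
```

Conditioned on a draw of the inefficiency term, the composed error is simply normal, which is why each simulated term reduces to an ordinary normal density.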
56.
This article challenges the conventional wisdom that increasing size is the key to effective trade union representation. We draw on case study research at the National Union of Lock and Metal Workers (NULMW), which has allowed certain assumptions about small and independent unions to be challenged and suggests some possible alternatives to concentration strategies as a response to a hostile environment.
57.
The productive efficiency of a firm can be seen as composed of two parts, one persistent and one transient. The received empirical literature on the measurement of productive efficiency has paid relatively little attention to the difference between these two components. Ahn and Sickles (Econ Rev 19(4):461–492, 2000) suggested some approaches that pointed in this direction. The possibility was also raised in Greene (Health Econ 13(10):959–980, 2004. doi:10.1002/hec.938), who expressed some pessimism over the possibility of distinguishing the two empirically. Recently, Colombi (A skew normal stochastic frontier model for panel data, 2010) and Kumbhakar and Tsionas (J Appl Econ 29(1):110–132, 2012), in a milestone extension of the stochastic frontier methodology, have proposed a tractable model based on panel data that promises to provide separate estimates of the two components of efficiency. The approach developed in the original presentation proved very cumbersome to implement in practice; Colombi (2010) notes that FIML estimation of the model is ‘complex and time consuming.’ In a sequence of papers, Colombi (2010), Colombi et al. (A stochastic frontier model with short-run and long-run inefficiency random effects, 2011; J Prod Anal, 2014), Kumbhakar et al. (J Prod Anal 41(2):321–337, 2012) and Kumbhakar and Tsionas (2012) have suggested other strategies, including a four-step least squares method. The main point of this paper is that full maximum likelihood estimation of the model is neither complex nor time consuming. The extreme complexity of the log likelihood noted in Colombi (2010) and Colombi et al. (2011, 2014) is reduced by using simulation and exploiting the Butler and Moffitt (Econometrica 50:761–764, 1982) formulation. In this paper, we develop a practical full information maximum simulated likelihood estimator for the model. The approach is very effective and strikingly simple to apply, and it uses all of the sample distributional information to obtain the estimates. We also implement the panel data counterpart of the Jondrow et al. (J Econ 19(2–3):233–238, 1982) estimator for technical or cost inefficiency. The technique is applied in a study of the cost efficiency of Swiss railways.
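A hedged sketch of the computational point, not the paper's full four-component model: the persistent (time-invariant) component is integrated out of one unit's panel likelihood by averaging the product of per-period normal densities over draws of that component, in the spirit of the Butler and Moffitt one-factor formulation. The function name, its signature, and the two-component simplification are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def sim_panel_loglik_unit(y, X, beta, sigma_v, sigma_w, draws):
    """Simulated log-likelihood contribution of one panel unit.

    Integrates the persistent half-normal component w = sigma_w * |Z| out
    of the unit's likelihood by averaging over draws (Butler-Moffitt-style
    one-factor integration). y: (T,), X: (T, k), draws: (R,) N(0,1) draws.
    Transient inefficiency is folded into the noise term here for brevity.
    """
    eps = y - X @ beta                               # (T,) composed residuals
    w = sigma_w * np.abs(draws)                      # persistent component, (R,)
    # Conditional on w, each period's error eps_t + w is N(0, sigma_v^2) and
    # periods are independent, so the unit likelihood is a product over t.
    f = stats.norm.pdf((eps[:, None] + w[None, :]) / sigma_v) / sigma_v  # (T, R)
    return np.log(f.prod(axis=0).mean() + 1e-300)

# Usage sketch: sum this over units and hand the negative sum to an optimizer,
# keeping one fixed block of N(0,1) draws per unit across all iterations.
```

Because only a one-dimensional integral per unit is simulated, the T-fold product collapses the panel problem to roughly the cost of a cross section, which is the source of the tractability claimed above.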
58.
In a roundtable published in this journal a year ago, there was a clear consensus that the R&D function in big pharma was inefficient and in need of major restructuring, possibly through increased investments by venture capital and private equity firms. In this discussion, an accomplished group of industry practitioners begins by looking at the prospects for both venture capital and private equity to play meaningful roles in financing early- and mid-stage drug development. In so doing, they explore questions like the following:
  • Are there ways for big pharma and biotech to reduce “science risk” and make R&D funding more profitable and attractive to venture capital and private equity—and perhaps even hedge funds?
  • What roles do you see for specialty PE firms like Symphony Capital and Paul Capital, which are now bundling mid-stage development assets and securitizing royalties?
Then the panelists turn to the broader life sciences industry and consider the outlook for leveraged private equity transactions involving marketed products, late-stage development, and services. Here they consider issues like the following:
  • Will PE be attracted to less-R&D-intensive activities like medtech and generics?
  • Have the recent consolidation of big pharma through mergers, and its reorganization into decentralized business units, created opportunities for carve-outs of certain businesses?
For big pharma and life sciences companies in general, the answers to such questions point to greater specialization and focus achieved partly through strategic alliances with venture capital, private equity, and even hedge funds, and involving marketed products and services as well as early-stage drug development.
59.
Although ‘horse and buggy’ usually connotes a quaint icon of the preindustrial world, The Carriage Trade by Thomas A. Kinney shows that horse-drawn vehicles were anything but quaint. Carriage and wagon making was a major nineteenth-century industry, by 1890 employing 130,000 employees in 13,000 firms producing $200 million in value (p. 262). These firms were leaders in production, management, and marketing innovations. Kinney’s straightforward account shows how mechanization, interchangeability, and rationalization changed the nature of …
60.
The paper reviews selected literature on the theory and practice of constructive technology assessment (CTA), which represents a promising approach for managing technology through society. CTA emphasises the involvement and interaction of diverse participants to facilitate ‘upstream’ (or anticipatory) learning about possible impacts of technology and socially robust decision-making. The paper seeks to identify limitations of CTA, as these relate to the broadening of debate about nascent, controversial technology. In particular, it considers the relevance of CTA to the achievement of more democratic decision-making about technology. In addition, the paper directs attention towards differences in participants' discursive capacities and rhetorical skills that may affect the role and contribution of non-expert citizens in technology assessment. The paper draws upon the debate between Habermas and Foucault to suggest promising avenues for future research based on technology assessment conceptualised as discourse. It concludes that the theory and practice of CTA may be improved by addressing explicitly possible structural limitations on the broadening of debate, whilst invoking a notion of technology assessment as discourse to point up cultural, subjective or cognitive limitations on agency.