41.
We propose an extension to the basic DEA models that guarantees that if an intensity is positive, it must be at least as large as a pre-defined lower bound. This requirement adds an integer programming constraint known within Operations Research as a Fixed-Charge (FC) constraint; accordingly, we term the new model DEA_FC. The proposed model lies between the DEA models, which allow units to be scaled arbitrarily low, and the Free Disposal Hull model, which allows no scaling. We analyze 18 datasets from the literature to demonstrate that sufficiently low intensities (those for which the scaled Decision-Making Unit (DMU) has inputs and outputs below the minimum observed values) are pervasive, and that the new model ensures fairer comparisons without sacrificing the required discriminating power. We explain why the low-intensity phenomenon exists. In sharp contrast to standard DEA models, we demonstrate via examples that an inefficient DMU may play a pivotal role in determining the technology. We also propose a goal programming model that determines how deviations from the lower bounds affect efficiency, which we term the trade-off between the deviation gap and the efficiency gap.
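For readers unfamiliar with the envelopment formulation the paper extends, the following is a minimal sketch of the standard input-oriented CCR model (the baseline the DEA_FC extension modifies; the FC lower-bound constraints themselves would require a mixed-integer solver). The function name, toy data, and use of `scipy.optimize.linprog` are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency score of DMU o.

    X: (n_dmu, n_inputs) input matrix, Y: (n_dmu, n_outputs) output matrix.
    Solves: min theta  s.t.  sum_j lam_j x_ij <= theta * x_io  (each input i)
                             sum_j lam_j y_rj >= y_ro          (each output r)
                             lam_j >= 0  (intensities may be arbitrarily small,
                             which is exactly what DEA_FC would restrict)."""
    n, m = X.shape
    _, s = Y.shape
    c = np.zeros(1 + n)            # variables: [theta, lam_1, ..., lam_n]
    c[0] = 1.0                     # minimize theta
    A_ub, b_ub = [], []
    for i in range(m):             # sum_j lam_j x_ij - theta * x_io <= 0
        A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(s):             # -sum_j lam_j y_rj <= -y_ro
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.x[0]

# Toy data: one input, one output, three DMUs.
X = np.array([[2.0], [4.0], [4.0]])
Y = np.array([[2.0], [2.0], [4.0]])
eff_B = ccr_efficiency(X, Y, 1)    # DMU B is dominated: efficiency 0.5
eff_A = ccr_efficiency(X, Y, 0)    # DMU A is on the frontier: efficiency 1.0
```

The DEA_FC idea would replace the plain bound `lam_j >= 0` with the disjunction "lam_j = 0 or lam_j >= l_j", modeled with binary variables in a mixed-integer program.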
42.
Selecting Sites for New Facilities Using Data Envelopment Analysis
This paper develops a mathematical programming model for obtaining a best set of sites for planned facilities. The model is concerned with those situations where resource constraints are present. The specific setting for the paper involves the selection of sites for a set of retail outlets, wherein the ratio of aggregate outputs to inputs for the selected set is maximal among all possible sets that could be chosen. At the same time, the model guarantees that the only sets of stores allowable are those for which the available resources are used to the maximum extent possible.
43.
This paper deals with a dynamic adjustment process in which the adjustment of a key variable input (labor) towards its desired level is modeled in a panel data context. The partial adjustment model is extended to make the adjustment parameter both firm- and time-specific by specifying it as a function of firm- and time-specific variables. The desired level of labor use is represented by a labor requirement function, which depends on outputs and other firm-specific variables. The catch-up factor is defined as the ratio of the actual to the desired level of employment. Productivity growth is then defined in terms of a shift in the desired level of labor use and the change in the catch-up factor. Swedish banking data are used to illustrate the model.
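The partial adjustment mechanism described above can be sketched numerically. This toy simulation assumes a constant adjustment parameter, whereas the paper makes it firm- and time-specific; the firm sizes and the value of `delta` are hypothetical.

```python
def adjust(L_prev, L_star, delta):
    """One partial-adjustment step: close a fraction delta of the gap
    between actual labor (L_prev) and the desired level (L_star)."""
    return L_prev + delta * (L_star - L_prev)

# Hypothetical firm: 100 employees, desired level 80, delta fixed at 0.5.
L, L_star, delta = 100.0, 80.0, 0.5
path = []
for _ in range(5):
    L = adjust(L, L_star, delta)
    path.append(L)

# Catch-up factor: ratio of actual to desired employment (approaches 1).
catch_up = L / L_star
```

Each step halves the remaining gap, so the path converges geometrically toward the desired level; in the paper's setting, delta itself would vary across firms and over time.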
44.
The purpose of this paper is to discuss the use of Value Efficiency Analysis (VEA) in efficiency evaluation when preference information is taken into account. Value efficiency analysis applies ideas developed for Multiple Objective Linear Programming (MOLP) to Data Envelopment Analysis (DEA). Preference information is given through a desirable structure of input and output values; the same values can be used for all units under evaluation, or the values can be specific to each unit. A decision-maker can specify the input and output values subjectively without any support, or use a multiple criteria support system to locate those values on the efficient frontier. The underlying assumption is that the most preferred values maximize the decision-maker's implicitly known value function over the production possibility set or a subset of it. The purpose of value efficiency analysis is to estimate the increase in outputs and/or decrease in inputs needed to reach the indifference contour of the value function at its optimum. In this paper, we briefly review the main ideas of value efficiency analysis, discuss practical aspects of its use, and consider some extensions.
45.
Deterministic frontier analysis (DFA), stochastic frontier analysis (SFA), and data envelopment analysis (DEA) are alternative analytical techniques designed to measure the efficiency of producers. All three techniques were originally developed within a cross-sectional context, in which the objective is to compare the efficiencies of producers. More recently all three techniques have been extended for use in a panel data context. In the latter context it is possible to measure productivity change, and to decompose measured productivity change into its sources, one of which is efficiency change. However when efficiency measurement techniques, particularly SFA, have been applied to panel data, it has infrequently been made clear what the objective of the analysis is: the measurement of efficiency, which may vary through time as well as across producers, or the measurement and decomposition of productivity change. In this paper I explore the use of each technique in a panel data context. I find DFA and DEA to have achieved a more satisfactory reorientation toward productivity measurement than SFA has.
46.
This article provides a series of reflections on the practice of carrying out processual research on organisational change. At a broad level, it describes the main tasks associated with conducting company case studies and outlines the benefits of this approach for dealing with complex change data. At a more specific level, the article addresses three areas tied to the actual "doing" of processual research: first, the notion of tacit knowledge and "getting your hands dirty" through ongoing in-depth fieldwork; second, the design and implementation of a longitudinal case study research programme; and third, the advantages and concerns of combining a range of different data collection techniques in processual studies. Overall, the main intention is to provide useful reflections and practical insights, as well as something of the flavour of carrying out this type of research.
47.
A Review of Research on Supply Chain Information Flow
Based on an analysis of the literature on supply chain information flow, this paper reviews research in the field from three perspectives: the impact of information flow operations on compressing total supply chain response time, the content of information flow research, and its research methods. Directions for further research are also proposed.
48.
This paper describes Bayesian methods for life test planning with Type II censored data from a Weibull distribution, when the Weibull shape parameter is given. We use conjugate prior distributions and criteria based on estimating a quantile of interest of the lifetime distribution. One criterion is based on a precision factor for a credibility interval for a distribution quantile, and the other is based on the length of the credibility interval. We provide simple closed-form expressions for the relationship between the needed number of failures and the precision criteria. Examples are used to illustrate the results. (Received: October 2002 / Revised: March 2004)
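The conjugate setup described above can be sketched as follows. With known shape beta, the transformed lifetimes t^beta are exponential, so a gamma prior on their rate lambda is conjugate, and a credible interval for lambda maps monotonically into one for the quantile t_p = (-ln(1-p)/lambda)^(1/beta). This is a sketch under that standard setup; the function name, prior parameters, and toy data are illustrative assumptions and not necessarily the paper's exact formulation.

```python
import math
from scipy.stats import gamma

def quantile_interval(times_sorted, n, beta, p=0.10, a=1.0, b=1.0, level=0.90):
    """Equal-tailed credible interval for the p-quantile of a Weibull lifetime
    with known shape beta, under Type II censoring at the r-th failure out of n.

    Prior: lambda ~ Gamma(a, rate b) on the rate of the exponential t^beta.
    Posterior: Gamma(a + r, rate b + s), where s is the total transformed
    time on test (failures plus the n - r units censored at the last failure)."""
    r = len(times_sorted)
    s = sum(t ** beta for t in times_sorted) + (n - r) * times_sorted[-1] ** beta
    post = gamma(a + r, scale=1.0 / (b + s))
    lam_lo, lam_hi = post.ppf((1 - level) / 2), post.ppf((1 + level) / 2)
    k = -math.log(1 - p)
    # t_p = (k / lambda)**(1/beta) is decreasing in lambda, so bounds swap.
    return (k / lam_hi) ** (1 / beta), (k / lam_lo) ** (1 / beta)

# Hypothetical test: 5 units, stop at the 3rd failure, known shape beta = 1.
lo, hi = quantile_interval([1.0, 2.0, 3.0], 5, 1.0)
```

The ratio hi/lo plays the role of the precision factor the paper uses: planning then means choosing the number of failures r so that this factor falls below a target.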
49.
This paper discusses the consequences of violating the normal distribution assumption embedded in Structural Equation Modeling (SEM). Based on real data from a large-sample customer satisfaction survey, we follow the procedures suggested in leading textbooks. We document the consequences of this practice and discuss its impact on decision making in marketing.