Fee-based full text: 579 articles
Free: 9 articles
By discipline: finance & banking: 73; industrial economics: 31; planning & management: 83; economics: 173; general: 15; transport economics: 5; tourism economics: 5; trade economics: 125; agricultural economics: 15; economic overview: 63
By year (articles): 2022: 3; 2021: 3; 2020: 5; 2019: 9; 2018: 7; 2017: 11; 2016: 12; 2015: 11; 2014: 12; 2013: 70; 2012: 34; 2011: 13; 2010: 23; 2009: 10; 2008: 25; 2007: 17; 2006: 17; 2005: 15; 2004: 6; 2003: 6; 2002: 20; 2001: 11; 2000: 12; 1999: 11; 1998: 10; 1997: 9; 1996: 7; 1995: 11; 1994: 4; 1993: 4; 1992: 13; 1991: 9; 1990: 7; 1989: 11; 1987: 3; 1986: 10; 1985: 3; 1984: 10; 1983: 8; 1982: 4; 1981: 13; 1980: 7; 1979: 12; 1978: 3; 1977: 4; 1975: 8; 1972: 3; 1934: 6; 1891: 2; 1890: 2
Sort order: 588 results found (search time: 12 ms)
81.
82.
This paper explores the possibility of using the Classification of the Functions of Government (COFOG), recently published by the United Nations, to separate intermediate from final use of government production in the national accounts. It is argued that the notorious difficulties of doing so can be traced to two causes: the multiplicity of theoretical concepts, and the lack of sufficient detail at the statistical level. The first can be removed by clarifying that the production account of an economy is to measure only production, not welfare. The second seems to be overcome by the three-digit detail of COFOG. It is shown that many of these categories are now sufficiently homogeneous for a panel of experts to agree in assigning them to either intermediate or final use, although for a number of categories this remains difficult. The question is whether consensus on the major categories is broad enough to treat the remaining controversial ones as border cases, normal in any classification and settled in the last instance not by argument but by convention. Some preliminary figures for the intermediate part of government production are given.
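The allocation exercise the abstract describes can be illustrated with a toy computation. The COFOG codes, labels, intermediate/final assignments, and expenditure figures below are hypothetical placeholders, not the paper's data:

```python
# Toy illustration: assigning three-digit COFOG-style categories to
# intermediate or final use, then computing the intermediate share of
# government production. All codes, assignments, and figures are invented.

assignments = {
    # code: (label, use, expenditure in arbitrary currency units)
    "01.1": ("Executive and legislative organs",  "intermediate", 40.0),
    "03.1": ("Police services",                   "intermediate", 55.0),
    "04.5": ("Transport",                         "intermediate", 35.0),
    "07.2": ("Outpatient services",               "final",        120.0),
    "09.1": ("Pre-primary and primary education", "final",        150.0),
}

total = sum(exp for _, _, exp in assignments.values())
intermediate = sum(exp for _, use, exp in assignments.values()
                   if use == "intermediate")

share = intermediate / total
print(f"Intermediate share of government production: {share:.1%}")
```

The point of the three-digit detail is precisely that each code is homogeneous enough for a single intermediate/final assignment, so the share reduces to simple aggregation as above.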
83.
84.
The aim of this paper is twofold. First, we introduce a novel semiparametric technique, Genetic Programming, to estimate and explain the willingness to pay to maintain the environmental conditions of a specific natural park in Spain. To the authors' knowledge, this is the first time Genetic Programming has been employed in contingent valuation. Second, we investigate the existence of bias due to the functional rigidity of the traditional parametric techniques commonly employed in contingent valuation problems. We applied standard parametric methods (logit and probit) and compared the results with those obtained using semiparametric methods (a proportional hazard model and a genetic program). The parametric and semiparametric methods give similar results in terms of the variables finally retained in the model. The results therefore confirm the internal validity of our contingent valuation exercise.
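The parametric step can be sketched for the standard dichotomous-choice setting: respondents answer yes/no to a proposed bid, a logit is fitted, and mean willingness to pay (WTP) is recovered from the coefficients. This is a minimal illustration with synthetic data, not the paper's dataset or its Genetic Programming estimator:

```python
# Logit step of a dichotomous-choice contingent valuation on synthetic data.
# P(yes) = logistic(alpha + beta * bid); for this linear logit model the
# mean WTP is -alpha / beta.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
bids = rng.uniform(1, 50, n)                    # proposed payment amounts
true_alpha, true_beta = 3.0, -0.12              # implies mean WTP = 25
p_true = 1 / (1 + np.exp(-(true_alpha + true_beta * bids)))
yes = (rng.uniform(size=n) < p_true).astype(float)

# Newton-Raphson maximum likelihood for the logit model.
X = np.column_stack([np.ones(n), bids])
w = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ w))
    grad = X.T @ (yes - p)                      # score vector
    hess = X.T @ (X * (p * (1 - p))[:, None])   # observed information
    w += np.linalg.solve(hess, grad)

alpha, beta = w
print(f"estimated mean WTP: {-alpha / beta:.2f}")  # true value: 25.00
```

The "functional rigidity" the paper investigates is visible here: the logit imposes a specific response curve, whereas semiparametric estimators relax that assumption.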
85.
The small farmers and landless in India face a vicious cycle of low incomes, low savings, low investment and uneconomic operational holdings, and are often nonviable. Using data for the Haryana State of Northern India for 1977/78, a linear discriminant function was used to identify the factors that would make the majority of small farmers and landless in India a viable entity. The analysis indicated that per-hectare fertiliser use, area under high-yielding crop varieties, operational size of holding and working capital are the factors that affect the viability of the small farmers and landless. Compared with simple regression analysis, the discriminant function approach was found to be an effective tool for discriminating between the two social groups and for predicting the sponsored social change. However, the use of the discriminant function may be limited if a qualitative independent variable has more than two classes whose orderings are not in sequence.
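A two-group linear discriminant of the kind the paper applies can be sketched as follows. The group means, covariances, and indicator values are synthetic stand-ins, not the Haryana data:

```python
# Fisher's two-group linear discriminant on synthetic farm-household data:
# separating "viable" from "non-viable" households using the four indicators
# the abstract names. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
# columns: fertiliser use/ha, HYV area share, holding size (ha), working capital
mu_nonviable = np.array([40.0, 0.2, 1.0, 200.0])
mu_viable    = np.array([90.0, 0.6, 2.5, 600.0])
cov = np.diag([15.0, 0.15, 0.6, 120.0]) ** 2     # independent indicators

X0 = rng.multivariate_normal(mu_nonviable, cov, 200)
X1 = rng.multivariate_normal(mu_viable, cov, 200)

# Discriminant direction: w = S_w^{-1} (mu1 - mu0)
Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
w = np.linalg.solve(Sw, X1.mean(0) - X0.mean(0))
threshold = w @ (X0.mean(0) + X1.mean(0)) / 2    # midpoint cut-off

pred = np.concatenate([X0, X1]) @ w > threshold
truth = np.repeat([False, True], 200)
print(f"classification accuracy: {(pred == truth).mean():.1%}")
```

The caveat in the abstract also shows up here: a qualitative regressor with more than two unordered classes cannot simply be fed in as a single numeric column.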
86.
Abstract

This paper provides a new and accessible approach to establishing certain results concerning the discounted penalty function. The direct approach consists of two steps. In the first step, closed-form expressions are obtained in the special case in which the claim amount distribution is a combination of exponential distributions; a rational function is useful in this context. For the second step, one observes that the family of combinations of exponential distributions is dense, so it suffices to reformulate the results of the first step to obtain general results. The surplus process has downward and upward jumps, modeled by two independent compound Poisson processes. If the distribution of the upward jumps is exponential, a series of new results can be obtained with ease. Subsequently, certain results of Gerber and Shiu [H. U. Gerber and E. S. W. Shiu, North American Actuarial Journal 2(1): 48–78 (1998)] can be reproduced. The two-step approach is also applied when an independent Wiener process is added to the surplus process. Certain results are related to Zhang et al. [Z. Zhang, H. Yang, and S. Li, Journal of Computational and Applied Mathematics 233: 1773–1784 (2010)], which uses different methods.
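The simplest instance of the exponential-claims special case is the textbook Cramér–Lundberg model (downward jumps only, no upward jumps or diffusion), where the ruin probability has a closed form. This is a sketch of that classical special case, not the paper's two-sided model:

```python
# Classical Cramer-Lundberg model with Exp(beta) claim amounts, Poisson claim
# intensity lam and premium rate c satisfying the net profit condition
# c > lam / beta. The ruin probability is then
#   psi(u) = (lam / (c * beta)) * exp(-(beta - lam / c) * u).
import math

def ruin_probability(u, lam=1.0, beta=1.0, c=1.5):
    """Closed-form ruin probability for exponential claim amounts."""
    assert c > lam / beta, "net profit condition must hold"
    return (lam / (c * beta)) * math.exp(-(beta - lam / c) * u)

print(ruin_probability(0.0))   # psi(0) = lam / (c * beta) = 2/3
print(ruin_probability(5.0))   # decays exponentially in initial surplus u
```

Combinations of exponentials extend this to sums of such exponential terms, which is why the rational-function machinery mentioned in the abstract applies.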
87.
Static tradeoff theories, which do not explicitly treat the impact of transaction costs, do not explain the policy of asymmetry between frequent small debt transactions and infrequent large equity transactions. Nor do these theories explain why the debt ratio is allowed to wander a considerable distance from its alleged static optimum, or how much of a distance should be tolerated. We offer a class of diffusion models that mimic this behaviour in a stochastic-dynamic framework and are designed to optimize a financing strategy using any static tradeoff theory as input. The models developed reveal the determinants of the size and frequency of equity transactions and the range of values over which leverage variations are tolerated in four generic scenarios. They also yield a new formulation of the cost of capital that recognizes stochastic transaction costs and a penalty for deviation from any static-optimal leverage. Our class of models augments the pecking order theory, provides a flexible quantitative framework for its implementation as a decision tool, and facilitates the formulation of additional hypotheses for its empirical validation. Symmetrically, our results show the importance of dynamic factors in designing and interpreting empirical tests of static tradeoff theories. The results presented have important implications for the role played by static tradeoff theories in a stochastic-dynamic framework. One such implication is that the static-optimal leverage has no direct effect on the firm's leverage policy in this setting. The target leverage for refinancing transactions is different from the static-optimal leverage, and the mean leverage is generally different from both. As a consequence, the latter cannot be used to estimate the former. Another implication is that even when the mean leverage equals the static optimum, mean reversion is not an optimal behaviour and therefore not a legitimate test for the existence of a static tradeoff in a dynamic context. 
Still another implication is that wide variations in leverage ratios cannot be interpreted as evidence of leverage indifference. It follows that the pecking order theory is consistent with static tradeoff theories and does not require the assumption of leverage indifference.
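The asymmetry the paper models, continuous small debt-driven movements punctuated by infrequent large equity recapitalisations, can be mimicked with a simple band-control simulation. The target, band width, and step size below are illustrative parameters, not the authors' calibration:

```python
# Band-control sketch of dynamic leverage policy: leverage drifts randomly
# through small ongoing debt transactions, and a costly equity
# recapitalisation resets it to a refinancing target only when it exits a
# tolerance band. All parameters are hypothetical.
import random

random.seed(42)
target, lower, upper = 0.40, 0.25, 0.55   # refinancing target and band
leverage = target
recaps = 0

for _ in range(10_000):                    # e.g. daily steps
    leverage += random.gauss(0, 0.005)     # small frequent debt-driven moves
    if not lower <= leverage <= upper:     # band breached: large equity move
        leverage = target
        recaps += 1

print(f"equity recapitalisations over 10,000 steps: {recaps}")
```

Note that in this setting the refinancing target need not coincide with any static-optimal leverage, and the time-average of leverage generally differs from both, which is the paper's point about mean reversion being an invalid test.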
88.
It has been argued that fundamentally different methodological approaches have made for 'two sociologies'. This view has obscured the fact that the problem of validity must be tackled independently of any specific methodological premises because of the textuality of sociological data. This does not, however, necessarily imply a single, unified strategy for validity testing. In this paper, some basic theoretical presuppositions underlying the approach to validity testing in quantitative research are contrasted with the strategies offered by Max Weber's methodological writings on the ideal type. It is argued that the use of ideal-typical constructs in qualitative research (exemplified by patients' illness careers) allows systematic validity testing despite the important differences from the conceptualization of social reality used in quantitative research, thus serving the purpose of any empirical sociological research: to gain valid insight into society's concrete reality.
89.
90.
Within the context of the Jülich Compatibility Study on Energy Supply Systems, the model of the planning cell was used to incorporate participation into the process of policy formulation and evaluation and to gain information about intuitive preferences concerning the four basic energy scenarios constructed by the Enquete Commission of the German Federal Parliament. Planning cells consist of groups of citizens who are selected at random and given paid leave from their workday obligations for a limited period of time to work out solutions for social problems. A total of 24 planning cells were organized throughout Germany to evaluate the four energy scenarios and to formulate recommendations for the policy maker. Most citizens favored the more moderate scenarios [1, 5], but were almost equally divided in their preferences between the pro-nuclear (option 2) and non-nuclear (option 3) scenarios. Using a simplified MAU model to determine the preferences of each citizen, the surprising result emerged that more than 40% of the participants gave the highest positive score to the most antinuclear, soft-energy scenario. This result could be partly explained by cognitive factors and by preference group influence.
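A simplified additive multi-attribute utility (MAU) scoring of the kind the study used can be sketched as follows. The attributes, weights, and scenario scores are invented placeholders, not the citizens' elicited values:

```python
# Minimal additive MAU scoring of four energy scenarios: each scenario's
# utility is a weighted sum of attribute scores. All numbers are hypothetical.

attributes = ["supply security", "cost", "environment", "risk"]
weights = {"supply security": 0.3, "cost": 0.2, "environment": 0.3, "risk": 0.2}

# scenario -> attribute scores on a 0-10 scale (hypothetical)
scenarios = {
    "option 1 (moderate)":    {"supply security": 7, "cost": 6, "environment": 6, "risk": 7},
    "option 2 (pro-nuclear)": {"supply security": 8, "cost": 7, "environment": 4, "risk": 3},
    "option 3 (non-nuclear)": {"supply security": 5, "cost": 4, "environment": 8, "risk": 8},
    "option 4 (soft energy)": {"supply security": 4, "cost": 3, "environment": 9, "risk": 9},
}

for name, scores in scenarios.items():
    utility = sum(weights[a] * scores[a] for a in attributes)
    print(f"{name}: {utility:.2f}")
```

In the study, each citizen's own weights were elicited, so the ranking could differ sharply across participants even with identical attribute scores, which is how a soft-energy scenario could top 40% of the individual rankings.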
Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号