84.
When price-cap rules determine the structure of prices for a long period, they suffer from a credibility problem and introduce an element of risk, especially if a firm's profits are "too large". Profit sharing may be seen as a device to pre-determine price adjustments and thus to decrease regulatory risk. We analyse the effects of profit sharing on the incentives to invest, using a real option approach. Absent credibility issues, a well-designed profit-sharing system may be neutral relative to a pure price cap. With regulatory risk, profit sharing is preferable to a pure price cap if it intervenes at high enough profit levels.
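The mechanics of such a rule can be illustrated with a minimal sketch. Everything here (the function name, the linear sharing rule, the threshold and rate values) is a hypothetical example, not the paper's model; it shows only the basic idea that the cap stays untouched below a profit trigger and a share of the excess is returned to consumers above it.

```python
# Hypothetical sketch of a profit-sharing price adjustment under a price cap.
# All names and parameter values are illustrative, not taken from the paper.

def adjusted_price(cap_price: float, profit: float,
                   threshold: float, sharing_rate: float,
                   quantity: float) -> float:
    """Return next period's price: the cap, minus a rebate that
    passes a share of excess profit back to consumers."""
    excess = max(0.0, profit - threshold)      # profit above the trigger level
    rebate = sharing_rate * excess / quantity  # spread over units sold
    return max(0.0, cap_price - rebate)

# Below the threshold the rule is neutral: the pure price cap applies.
print(adjusted_price(10.0, profit=50.0, threshold=100.0,
                     sharing_rate=0.5, quantity=1000.0))   # 10.0
# Above the threshold, half of the 200 excess lowers the price.
print(adjusted_price(10.0, profit=300.0, threshold=100.0,
                     sharing_rate=0.5, quantity=1000.0))   # 9.9
```

The neutrality claim in the abstract corresponds to the first case: as long as profits stay below the trigger, the rule is observationally identical to a pure price cap.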
Carlo Scarpa (Corresponding author)
85.
Security of Big Data is a huge concern. In a broad sense, Big Data contains two types of data: structured and unstructured. Providing security to unstructured data is more difficult than providing security to structured data. In this paper, we develop an approach that provides adequate security to unstructured data by considering the types of data and their sensitivity levels. We review the different analytics methods of Big Data to build nodes for the different types of data. Each type of data is classified so as to provide adequate security while reducing the overhead of the security system. To secure a data node, a security suite has been designed by incorporating different security algorithms; these algorithms collectively form the suite, which is interfaced with the data node. Information on data sensitivity was collected through a survey. We show through several experiments on multiple computer systems with varied configurations that classifying data by sensitivity level improves the performance of the system. The experimental results show how, and by how much, the designed security suite reduces overhead while simultaneously increasing security.
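The routing idea behind such a suite can be sketched as follows. The sensitivity levels, the toy "algorithms", and the dispatch table are all hypothetical stand-ins (a real suite would use vetted ciphers such as AES-GCM); the sketch shows only the classification-based dispatch that lets low-sensitivity data avoid the cost of heavy encryption.

```python
# Illustrative sketch: classify data items by sensitivity level, then route
# each item to an algorithm of matching strength. Levels, algorithm names,
# and the routing table are hypothetical examples, not the paper's design.

from dataclasses import dataclass

@dataclass
class DataItem:
    payload: str
    sensitivity: int  # 0 = public ... 2 = highly sensitive

def no_op(text: str) -> str:
    return text                      # public data: zero encryption overhead

def light_scramble(text: str) -> str:
    return text[::-1]                # toy stand-in for a lightweight cipher

def heavy_scramble(text: str) -> str:
    # Toy stand-in for a strong cipher (XOR is NOT secure; use AES-GCM etc.)
    return "".join(chr(ord(c) ^ 0x5A) for c in text)

SUITE = {0: no_op, 1: light_scramble, 2: heavy_scramble}

def secure(item: DataItem) -> str:
    """Route the item to the algorithm matching its sensitivity level."""
    return SUITE[item.sensitivity](item.payload)

print(secure(DataItem("press release", 0)))   # press release
print(secure(DataItem("salary table", 2)))    # scrambled output
```

The performance claim in the abstract corresponds to the level-0 path: items classified as non-sensitive skip the expensive transformation entirely.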
87.
This systematic review analyses the literature on hybrid value creation, i.e. the process of generating additional value by innovatively combining products (tangible components) and services (intangible components). A state-of-the-art report on hybrid value creation is delivered by first systematically identifying and then analysing 169 publications on the topic. The identified publications are clustered into eight categories based on their links and interactions, and a mapping of this evolving field is thus suggested. The findings are discussed and reflected upon with respect to the pervasiveness of the literature and the research methodologies used. The paper concludes by identifying dominant gaps in the overall research landscape and provides directions for future research.
89.
Ansgar Steland, Metrika (2004) 60(3): 229-249
Motivated in part by applications to model selection in statistical genetics and to sequential monitoring of financial data, we study an empirical process framework for a class of stopping rules which rely on kernel-weighted averages of past data. We are interested in the asymptotic distribution for time series data and in an analysis of the joint influence of the smoothing policy and of the alternative defining the deviation from the null model (in-control state). We employ a certain type of local alternative which provides meaningful insights. Our results hold true for short-memory processes which satisfy a weak mixing condition. By relying on an empirical process framework we obtain asymptotic laws both for the classical fixed sample design and for the sequential monitoring design. As a by-product we establish the asymptotic distribution of the Nadaraya-Watson kernel smoother when the regressors do not become dense as the sample size increases.

Acknowledgements: The author is grateful to two anonymous referees for their constructive comments, which improved the paper. One referee drew my attention to Lifshits' paper. The financial support of the Collaborative Research Centre "Reduction of Complexity in Multivariate Data Structures" (SFB 475) of the German Research Foundation (DFG) is gratefully acknowledged.
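For readers unfamiliar with the estimator named in the abstract, a minimal Nadaraya-Watson kernel smoother looks like this. The Gaussian kernel, the bandwidth value, and the test data are illustrative choices; the paper's asymptotic analysis, not this implementation, is its contribution.

```python
# Minimal Nadaraya-Watson kernel smoother with a Gaussian kernel: at each
# evaluation point, y-values are averaged with weights that decay with the
# distance of their regressor from that point.
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, bandwidth):
    """Kernel-weighted average of y_train, evaluated at each x_eval point."""
    x_train = np.asarray(x_train, float)
    y_train = np.asarray(y_train, float)
    # Pairwise scaled distances, shape (n_eval, n_train)
    u = (np.asarray(x_eval, float)[:, None] - x_train[None, :]) / bandwidth
    w = np.exp(-0.5 * u**2)               # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)  # weighted average per eval point

# Sanity check: smoothing a constant signal returns the constant.
x = np.linspace(0.0, 1.0, 50)
y = np.full(50, 3.0)
print(nadaraya_watson(x, y, [0.25, 0.75], bandwidth=0.1))  # [3. 3.]
```

The "non-dense regressors" setting of the abstract corresponds to the case where, as the sample grows, the x-values do not fill in around the evaluation point, which changes the estimator's limiting distribution.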
90.
The setting of the individual X-factor is a core element of every incentive regulation system. The problem faced by the regulator is the choice among a wide variety of methods for setting the individual efficiency objectives; so far no single method has achieved acceptance as best practice in either scientific research or regulatory practice. The German incentive regulation, which started in January 2009, uses the so-called "Best-of-Four Method" to define individual X-factors. The regulator, the Bundesnetzagentur, has announced an in-depth evaluation of this method because it potentially leads to an unacceptable downward bias in the individual efficiency objectives. This article illustrates the problems of the Best-of-Four Method and offers alternatives. The author additionally develops a new approach based on a multi-stage process using economic and engineering methods. Finally, all alternatives are compared against various criteria.

It can be shown that the complementary use of Data Envelopment Analysis and Stochastic Frontier Analysis is a reasonable approach to efficiency analysis. But this raises the question of how to transform the resulting efficiency scores into individual X-factors. The Best-of-Four approach is not appropriate because it distorts the X-factors, creates opportunities for strategic behaviour, and cannot guarantee comparability of the efficiency objectives. Comparing the alternatives shows that no approach clearly dominates all others across all of the considered criteria. The multi-stage approach offers a way of transforming a "Nordic walking" routine into an ambitious fitness programme while also setting appropriate and comparable individual X-factors.
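The bias the article criticises follows directly from the selection rule, which can be sketched in a few lines. The method labels and score values below are made-up examples (the actual German rule combines DEA and SFA results on different data specifications); the point is only that taking the maximum of several noisy efficiency scores systematically favours the firm.

```python
# Hypothetical sketch of a "Best-of-Four" rule: each firm is assigned the most
# favourable (highest) of four efficiency scores, e.g. DEA and SFA each run on
# two data specifications. Labels and values are illustrative examples.

def best_of_four(scores: dict) -> float:
    """Take the maximum of the four scores. Because the max of noisy
    estimates is biased upward, the resulting efficiency objective is
    systematically too lenient (the X-factor is biased downward)."""
    assert len(scores) == 4, "the rule expects exactly four scores"
    return max(scores.values())

firm_scores = {
    "DEA/nominal": 0.78, "DEA/standardised": 0.85,
    "SFA/nominal": 0.81, "SFA/standardised": 0.74,
}
print(best_of_four(firm_scores))  # 0.85
```

Even if the four methods were unbiased individually, the firm's reported efficiency here (0.85) exceeds the mean of the four estimates (0.795), which is the downward bias in the efficiency objective that motivates the Bundesnetzagentur's evaluation.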