81.
82.
83.
When price-cap rules determine the structure of prices for a long period, they suffer from a credibility problem and introduce an element of risk, especially if a firm’s profits are “too large”. Profit sharing may be seen as a device to pre-determine price adjustments and thus to reduce regulatory risk. We analyse the effects of profit sharing on the incentives to invest, using a real-option approach. Absent credibility issues, a well-designed profit-sharing system may be neutral relative to a pure price cap. With regulatory risk, profit sharing is preferable to a pure price cap if it intervenes only at high enough profit levels.
Carlo Scarpa (Corresponding author)
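A hedged illustration of profit sharing as a pre-determined price adjustment; the profit trigger, the sharing rate, and the adjustment rule below are assumptions made for exposition, not the model of the paper:

```latex
% Assumed notation: p_t regulated price, x the X-factor, RPI inflation,
% \pi_t realized profit, \bar{\pi} the profit trigger, s in (0,1] the
% sharing rate, q_t output.
\[
  p_{t+1} \;=\; p_t\,(1 + \mathrm{RPI} - x)
  \;-\; s\,\frac{\max\{\pi_t - \bar{\pi},\, 0\}}{q_t}
\]
% Prices follow the pure cap as long as \pi_t \le \bar{\pi} and are adjusted
% downward only when profits become "too large", which is the sense in which
% a well-designed scheme can stay neutral relative to a pure price cap.
```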
84.
Security of Big Data is a major concern. Broadly, Big Data contains two types of data: structured and unstructured. Providing security to unstructured data is more difficult than providing security to structured data. In this paper, we have developed an approach that provides adequate security to unstructured data by considering the type of data and its sensitivity level. We have reviewed the different Big Data analytics methods in order to build nodes for the different types of data. Each type of data has been classified so as to provide adequate security while limiting the overhead of the security system. To provide security to a data node, a security suite has been designed by incorporating different security algorithms; together, these algorithms form the security suite that is interfaced with the data node. Information on data sensitivity has been collected through a survey. We have shown, through several experiments on multiple computer systems with varied configurations, that classifying data by sensitivity level improves the performance of the system. The experimental results show how, and by how much, the designed security suite reduces overhead while increasing security.
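A minimal sketch of the idea of matching protection strength to sensitivity level so that low-sensitivity data does not pay the full security overhead; the class, level names, and algorithm labels below are hypothetical illustrations, not the security suite described in the paper:

```python
from dataclasses import dataclass

# Hypothetical mapping from sensitivity level to the protection a security
# suite would apply: stronger (and slower) mechanisms only where needed.
SECURITY_SUITE = {
    "low":    "integrity checksum only",
    "medium": "AES-128 encryption",
    "high":   "AES-256 encryption + access auditing",
}

@dataclass
class DataNode:
    name: str
    unstructured: bool
    sensitivity: str  # "low" | "medium" | "high"

def assign_protection(node: DataNode) -> str:
    """Select a protection scheme for one data node based on its sensitivity."""
    if node.sensitivity not in SECURITY_SUITE:
        raise ValueError(f"unknown sensitivity level: {node.sensitivity}")
    return SECURITY_SUITE[node.sensitivity]

if __name__ == "__main__":
    nodes = [
        DataNode("web-logs", unstructured=True, sensitivity="low"),
        DataNode("patient-notes", unstructured=True, sensitivity="high"),
    ]
    for n in nodes:
        print(n.name, "->", assign_protection(n))
```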
85.
This paper considers the combination of pollution taxes and abatement subsidies when some polluting firms procure their abatement goods and services from an oligopolistic eco-industry. The regulator must then cope with two simultaneous price distortions: one that comes from pollution and one that is caused by the eco-industry’s market power. In this context, we show that taxing emissions while subsidizing polluters’ abatement efforts cannot lead to the first best, whereas it can if it is the eco-industry’s output that is subsidized. When public transfers also create distortions, welfare can be higher if the regulator uses only an emission tax, but subsidizing abatement suppliers while taxing emissions remains optimal when the eco-industry is concentrated.
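One hedged way to see the two distortions side by side; the notation below is assumed for illustration and is not taken from the paper. Let d be the marginal damage from emissions, c the eco-industry's marginal cost of abatement goods, and mu > 0 its oligopoly markup:

```latex
\[
  \text{first best:}\quad \tau = d \ \text{ and } \ p = c,
  \qquad
  \text{with market power:}\quad p = c\,(1+\mu) > c .
\]
% Two wedges (the pollution externality and the markup) call for two
% instruments: the emission tax \tau addresses the first, while a subsidy
% paid on the eco-industry's output works directly on the second wedge,
% which is why subsidizing suppliers rather than polluters matters here.
```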
86.
87.
This systematic review analyses the literature on hybrid value creation, i.e. the process of generating additional value by innovatively combining products (the tangible component) and services (the intangible component). A state-of-the-art report on hybrid value creation is delivered by first systematically identifying and then analysing 169 publications focusing on hybrid value creation. The identified publications are clustered into eight categories based on their links and interactions, and a mapping of this evolving field is thus suggested. The findings are discussed and reflected upon with respect to the pervasiveness of the literature and the research methodologies used. The paper concludes by identifying some dominant strategic gaps in the overall research landscape and provides directions for future research.
88.
89.
Ansgar Steland, Metrika, 2004, 60(3): 229–249
Motivated in part by applications to model selection in statistical genetics and to sequential monitoring of financial data, we study an empirical process framework for a class of stopping rules which rely on kernel-weighted averages of past data. We are interested in the asymptotic distribution for time series data and in an analysis of the joint influence of the smoothing policy and of the alternative defining the deviation from the null model (the in-control state). We employ a certain type of local alternative which provides meaningful insights. Our results hold true for short-memory processes which satisfy a weak mixing condition. By relying on an empirical process framework we obtain asymptotic laws both for the classical fixed-sample design and for the sequential monitoring design. As a by-product we establish the asymptotic distribution of the Nadaraya-Watson kernel smoother when the regressors do not become dense as the sample size increases.

Acknowledgements: The author is grateful to two anonymous referees for their constructive comments, which improved the paper. One referee drew my attention to Lifshits' paper. The financial support of the Collaborative Research Centre “Reduction of Complexity in Multivariate Data Structures” (SFB 475) of the German Research Foundation (DFG) is gratefully acknowledged.
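For reference, the kernel-weighted averages behind such stopping rules and the Nadaraya-Watson smoother take the following form; the bandwidth h, kernel K, and threshold c are generic choices used here for illustration rather than the specific design of the paper:

```latex
% Nadaraya-Watson kernel smoother at a point x:
\[
  \hat{m}_h(x) \;=\;
  \frac{\sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right) y_i}
       {\sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right)} .
\]
% A kernel-weighted average of the past observations y_1, ..., y_t and an
% associated one-sided stopping rule:
\[
  \hat{m}_t \;=\;
  \frac{\sum_{s=1}^{t} K\!\left(\frac{t - s}{h}\right) y_s}
       {\sum_{s=1}^{t} K\!\left(\frac{t - s}{h}\right)},
  \qquad
  T \;=\; \inf\bigl\{\, t : \hat{m}_t > c \,\bigr\} .
\]
```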
90.
The setting of the individual X-factor is a core element of every incentive regulation system. The problem faced by the regulator is the choice among a wide variety of methods for setting the individual efficiency objectives. So far no single method has achieved acceptance as best practice in both scientific research and regulatory practice. The German incentive regulation, which started in January 2009, uses the so-called “Best-of-Four” method to define individual X-factors. The regulator, the Bundesnetzagentur, has announced an in-depth evaluation of this method, because it potentially leads to an unacceptable downward bias in setting the individual efficiency objectives. This article illustrates the problems of the Best-of-Four method and offers alternatives. The author additionally develops a new approach based on a multi-stage process that uses economic and engineering methods. Finally, all alternatives are compared according to various criteria.

It can be shown that the complementary use of Data Envelopment Analysis and Stochastic Frontier Analysis is a reasonable approach to efficiency analysis. But this raises the question of how to transform the resulting efficiency scores into individual X-factors. The Best-of-Four approach is not appropriate, because it distorts the X-factors, offers possibilities for strategic behaviour and cannot guarantee comparability of the efficiency objectives. Comparing the alternatives shows that no approach clearly dominates all others with respect to all considered criteria. The multi-stage approach offers a possibility of transforming a “Nordic Walking” into an ambitious fitness program while also setting appropriate and comparable individual X-factors.
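A minimal sketch of why a best-of-N rule biases efficiency objectives downward; the four score sources and the numbers are illustrative assumptions, not the Bundesnetzagentur's benchmarking data, and the X-factor formula is a simplification rather than the actual regulatory rule:

```python
# Illustrative "Best-of-Four": each firm keeps the most favourable of four
# benchmarking results (e.g. DEA and SFA, each on two cost bases). Taking
# the maximum can only raise a firm's efficiency score, which shrinks the
# efficiency gap the individual X-factor has to close.

def best_of_four(scores: dict[str, float]) -> float:
    """Return the most favourable (highest) of four efficiency scores."""
    if len(scores) != 4:
        raise ValueError("expects exactly four benchmarking results")
    return max(scores.values())

def annual_x_factor(efficiency: float, years: int = 5) -> float:
    """Spread the remaining efficiency gap evenly over a regulatory period."""
    return (1.0 - efficiency) / years

if __name__ == "__main__":
    firm = {"DEA/raw": 0.71, "DEA/std": 0.78, "SFA/raw": 0.74, "SFA/std": 0.83}
    eff = best_of_four(firm)   # 0.83, versus a mean score of about 0.77
    print(f"best-of-four efficiency: {eff:.2f}, "
          f"annual X-factor: {annual_x_factor(eff):.3f}")
```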