131.
This paper examines the effects of the Economic and Monetary Union on the demand for foreign reserves. The traditional theory of demand for international reserves assigns a pivotal role to imports. In a currency union, however, part of imports is settled in the common currency, leaving no incentive to hold foreign reserves against that share of trade. Moreover, the pooling of reserve demand within the currency union and an increasing role of the union's currency as an international reserve currency may also influence the union's demand for reserves, among other things. Based on estimated demand functions for reserves, it is shown that the Economic and Monetary Union has reduced the demand for reserves substantially, and it is argued that enlargement of the European Union with new member countries will result in further savings of reserves. A simple calculation at the end of the paper illustrates the welfare gain associated with the reduced need for reserves in the Economic and Monetary Union.
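As a rough illustration of what an "estimated demand function for reserves" in the traditional import-based framework can look like, the sketch below fits a log-linear specification by OLS on synthetic data. The variable names, coefficients, and the statsmodels-based setup are illustrative assumptions only, not the paper's actual specification or data.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic cross-section of countries (purely hypothetical numbers).
rng = np.random.default_rng(1)
n = 200
log_imports = rng.normal(10.0, 1.0, n)            # log of annual imports
volatility = rng.gamma(2.0, 0.5, n)               # proxy for external volatility
log_reserves = 0.8 * log_imports + 0.3 * volatility + rng.normal(0.0, 0.2, n)

# Log-linear buffer-stock style specification:
#   ln(R) = b0 + b1 * ln(imports) + b2 * volatility + error
X = sm.add_constant(np.column_stack([log_imports, volatility]))
fit = sm.OLS(log_reserves, X).fit()
print(fit.params)   # b1 is the import elasticity of reserve demand (about 0.8 in this toy data)
```

In such a specification, a lower import elasticity after monetary unification would show up as a smaller estimated b1, which is the sense in which a currency union can reduce the demand for reserves.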
132.
This paper examines the impact of foreign firm entry on the industry consolidation process in a host country, a process that operates through mergers and exits of incumbent firms. Using a three-stage oligopoly model, the paper shows that foreign direct investment (FDI) may trigger consolidation via a merger: when a foreign firm enters via FDI, the antitrust authority is more likely to approve a domestic merger and the incumbents' incentive to merge is greater. In turn, the possibility of merging and becoming more efficient modifies the outcome of the game by making FDI less likely relative to exports.
136.
A vast body of literature suggests that the European Alpine Region is extremely sensitive to climate change. Winter tourism is closely related to climate variations, especially in mountain regions where resorts are heavily dependent on snow. This paper explores how to effectively integrate a climate change adaptation perspective with local discourses about sustainability and tourism, a growing priority for policy-makers in the region and elsewhere. It reports on the development and application of a participatory decision support process for the analysis of adaptation strategies for the local development of an Alpine tourism destination, Auronzo di Cadore (Dolomites, Italy). This experience lends substantial support to the idea that an efficient combination of modelling capabilities, decision support tools, and participatory processes can substantially improve decision-making for sustainability. The authors show that, in this case study, such a combination of methods and tools made it possible to manage the involvement of local actors, stimulate local debate on climate change adaptation and its possible consequences for winter tourism, encourage creativity and smooth potential conflicts, and integrate the actors' qualitative knowledge and preferences with quantitative information. This contributed to an integrated sustainability assessment of alternative strategies for sustainable tourism planning.
137.
We introduce the Speculative Influence Network (SIN) to decipher the causal relationships between sectors (and/or firms) during financial bubbles. The SIN is constructed in two steps. First, we develop a Hidden Markov Model (HMM) of regime-switching between a normal market phase, represented by a geometric Brownian motion, and a bubble regime, represented by the stochastic super-exponential bubble model of Sornette and Andersen (Int J Mod Phys C 13(2):171–188, 2002). The calibration of the HMM provides the probability, at each time, that a given security is in the bubble regime. Conditional on two assets being qualified as in the bubble regime, we then use the transfer entropy to quantify the influence of the returns of one asset i on another asset j, from which we construct the adjacency matrix of the SIN among securities. We apply our method to the Chinese stock market during the period 2005–2008, in which a normal phase was followed by a spectacular bubble ending in a massive correction. We introduce the Net Speculative Influence Intensity variable as the difference between the transfer entropies from i to j and from j to i, which is used in a series of rank-ordered regressions to predict the maximum loss (%MaxLoss) endured during the crash. The sectors that influenced other sectors the most are found to have the largest losses. Some predictability is obtained by using the transfer entropy involving industrial sectors to explain the %MaxLoss of financial institutions, but not vice versa. We also show that the bubble state variable calibrated on the Chinese market data corresponds well to periods when the market exhibits strong price acceleration followed by a clear change of price regime. Our results suggest that the SIN may contribute significant skill to the development of general linkage-based systemic-risk measures and early warning metrics.
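The core quantities in this abstract are pairwise transfer entropy between return series and its antisymmetric combination, the Net Speculative Influence Intensity. The following is a minimal illustrative sketch (not the authors' implementation): it uses a simple quantile-binned plug-in estimator with a one-step lag, and the bin count and synthetic return series are assumptions for demonstration only.

```python
import numpy as np

def discretize(x, n_bins=3):
    """Map a continuous return series to integer symbols via quantile bins."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1)[1:-1])
    return np.digitize(x, edges)

def transfer_entropy(source, target, n_bins=3):
    """Plug-in estimate of T_{source -> target} with a one-step lag (in nats)."""
    s, t = discretize(source, n_bins), discretize(target, n_bins)
    y_next, y_now, x_now = t[1:], t[:-1], s[:-1]
    te = 0.0
    for a in range(n_bins):
        for b in range(n_bins):
            for c in range(n_bins):
                p_abc = np.mean((y_next == a) & (y_now == b) & (x_now == c))
                if p_abc == 0.0:
                    continue
                p_bc = np.mean((y_now == b) & (x_now == c))
                p_ab = np.mean((y_next == a) & (y_now == b))
                p_b = np.mean(y_now == b)
                # p(y'|y,x) / p(y'|y) rewritten as p_abc * p_b / (p_bc * p_ab)
                te += p_abc * np.log(p_abc * p_b / (p_bc * p_ab))
    return te

def net_speculative_influence(r_i, r_j, n_bins=3):
    """NSII_ij = T_{i->j} - T_{j->i}: positive values mean i tends to drive j."""
    return transfer_entropy(r_i, r_j, n_bins) - transfer_entropy(r_j, r_i, n_bins)

# Toy check on synthetic returns where "sector" j follows "sector" i with a lag.
rng = np.random.default_rng(42)
r_i = rng.normal(0.0, 0.01, 2000)
r_j = 0.5 * np.concatenate(([0.0], r_i[:-1])) + rng.normal(0.0, 0.01, 2000)
print(net_speculative_influence(r_i, r_j))  # expected to be positive
```

In the paper's setting this quantity would only be computed conditional on both assets being qualified as in the bubble regime by the HMM, a step omitted here for brevity.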
140.
Twitter has a high presence in modern society, media and science. Numerous studies based on Twitter data – not only in communication research – show that tweets are a popular data source for science. This popularity can be explained by the largely free data and its high technical availability, as well as by the platform's distinct and open communication structure. Yet even though much research is based on Twitter data, it is only suitable for research to a limited extent. For example, some studies have already revealed that Twitter data has low explanatory power when predicting election outcomes. Furthermore, the rise of automated communication by bots is a pressing problem for Twitter data analysis. Although critical aspects of Twitter data have already been discussed to some extent (mostly in the concluding remarks of studies), comprehensive evaluations of data quality are relatively rare.

To contribute to a deeper understanding of the problems surrounding the scientific use of Twitter data, and thereby to a more deliberate and critical handling of this data, this study examines different aspects of data quality, usability and explanatory power. Building on previous research on data quality, it takes a critical look at four dimensions: availability and completeness; quality (authenticity, reliability and interpretability); language; and representativeness. Based on a small case study, the paper evaluates the scientific use of Twitter data by elaborating problems in data collection, analysis and interpretation. For this illustrative purpose, the author gathered data in a typical setup via Twitter's Streaming APIs: 73,194 tweets containing the search term "#merkel", collected between 20 and 24 February 2017 (starting at 8 p.m. each day) with the Streaming APIs (POST statuses/filter).

Concerning data availability and completeness, several aspects diminish data usability. Twitter provides two types of data gateways: the Streaming APIs (for real-time data) and the REST APIs (for historical data). The Streaming APIs offer only the free Spritzer bandwidth, which is limited to one percent of the overall (global) tweet volume at any given time. This limit is a prevalent problem when collecting Twitter data on major events such as elections and sports. The REST APIs do not usually provide data older than seven days. Furthermore, Twitter gives no information about the total or search-term-related tweet volume at any time.

In addition to incomplete data, several quality-related aspects complicate data gathering and analysis, such as the lack of user-specific and verified information (age, gender, location), inconsistent hashtag usage, missing conversational context and poor data/user authenticity. Geodata on Twitter is – if available at all – rarely correct and not useful for filtering relevant tweets. Searching for and filtering relevant tweets by search terms can be deceptive, because not every tweet on a topic contains the corresponding hashtags, and it is difficult to find a perfect search term for broader and dynamically changing topics. The missing conversational context of tweets also impedes the interpretation of statements (especially with regard to irony or sarcasm). In addition, the rise of social bots diminishes dataset quality enormously: in the dataset generated for this work, only three of the top 30 accounts (by tweet count) could be directly identified as genuine, and one quarter of all accounts generated about 60% of all tweets. If these high-performing accounts predominantly consist of bots, the negative impact on data quality is immense.

Another problem of Twitter analysis is Internet language: emojis can be misinterpreted, while abbreviations, neologisms, mixed languages and a lack of grammar impede text analysis. Beyond data quality in general, the quality of tweet content and its representativeness is crucial. This work compares user statistics with research articles indexed in SCOPUS as well as with the coverage of two selected German quality newspapers. Relative to its user count, Twitter is enormously overrepresented in both media and science: only 16% of German adults (over 18 years) are monthly active users (MAUs) and merely four percent are daily active users.

Considering all the problems presented, Twitter can be a good data source for research, but only to a limited extent. Researchers must bear in mind that Twitter does not guarantee complete, reliable and representative data, and ignoring these critical points can mislead data analysis. While Twitter data can be suitable for specific case studies, such as the usage and spread of selected hashtags or the Twitter usage of specific politicians, it cannot be used for broader, nation-based surveys such as the prediction of elections or the measurement of public opinion on a specific topic. Twitter has low representativeness and is mostly an "elite medium" with an uncertain future (given its stagnating user numbers and financial problems).
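For readers unfamiliar with the collection step described above, the sketch below shows how a stream of tweets matching "#merkel" could be captured from the v1.1 statuses/filter endpoint using the tweepy library (version 4.x). This is not the author's actual script: the credentials are placeholders, the output file name is arbitrary, and Twitter has since retired this endpoint, so it illustrates the workflow of the 2017 study rather than a currently working pipeline.

```python
import json
import tweepy

class HashtagCollector(tweepy.Stream):
    """Write every tweet matching the tracked terms to a JSON-lines file."""
    def on_status(self, status):
        with open("merkel_tweets.jsonl", "a", encoding="utf-8") as f:
            f.write(json.dumps(status._json) + "\n")

stream = HashtagCollector(
    "CONSUMER_KEY", "CONSUMER_SECRET",          # placeholder credentials
    "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET",
)
# Free "Spritzer" access delivers at most about one percent of the global tweet volume.
stream.filter(track=["#merkel"])
```

The one-percent cap and the lack of any baseline volume figure from Twitter are exactly why a dataset collected this way cannot be treated as complete or representative.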