Sorted results: 10,000 matches in total (search time 15 ms)
101.
This paper studies the financing status of small and medium enterprises (SMEs) in transition economies. Factors causing financing obstacles are identified and further analyzed to determine their influence on financing patterns. Bank regulatory practices relevant to SMEs' access to bank loans, and their influence on loan structures, are identified. This study contributes to the existing body of knowledge by exploring the impact of specific bank regulatory practices on credit lending to SMEs in transition economies.
102.
Bank crises in emerging economies have been a feature of the recent global crisis, and their incidence has increased in the post-Bretton Woods era. This paper investigates the impact of financial globalization on the incidence of systemic bank crises in 20 emerging markets over the years 1976–2002, using measures of de facto and de jure financial openness. An increase in foreign debt liabilities contributes to an increase in the incidence of crises, but foreign direct investment and portfolio equity liabilities have the opposite effect. A more liberal de jure capital regime lowers the incidence of banking crises, while a regime of fixed exchange rates increases their frequency. The results of the econometric analysis are consistent with the experience of East European and Central Asian emerging markets, which attracted a relatively large proportion of capital flows in the form of debt in recent years and have been particularly hard hit by the global financial crisis.
103.
This paper follows Bailey (J Polit Econ 64:93–110, 1956) and Lucas (Econometrica 68:247–274, 2000) and estimates the welfare cost of inflation for 17 Latin American economies. We use annual data from 1955 to 2000, together with recent advances in applied econometrics, to estimate the interest-rate elasticity of money demand, and we report significantly high and widely differing welfare cost estimates for these economies.
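The consumer-surplus logic behind such estimates can be sketched as follows (our notation, not necessarily the paper's exact specification): the welfare cost at nominal interest rate i is the area under the money demand curve above the holdings actually chosen.

```latex
% Bailey's consumer-surplus measure of the welfare cost of inflation,
% where m(i) is money demand as a function of the nominal rate i:
w(i) \;=\; \int_{0}^{i} m(x)\,dx \;-\; i\,m(i)
% For the double-log demand m(i) = A\,i^{-\eta} with 0 < \eta < 1,
% as in Lucas (2000), this evaluates in closed form to
w(i) \;=\; A\,\frac{\eta}{1-\eta}\, i^{\,1-\eta}
```

The estimated demand elasticity η thus translates directly into the welfare cost, which is why the elasticity estimate is the paper's central econometric object.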
107.
We introduce the Speculative Influence Network (SIN) to decipher the causal relationships between sectors (and/or firms) during financial bubbles. The SIN is constructed in two steps. First, we develop a Hidden Markov Model (HMM) of regime-switching between a normal market phase, represented by a geometric Brownian motion, and a bubble regime, represented by the stochastic super-exponential Sornette and Andersen (Int J Mod Phys C 13(2):171–188, 2002) bubble model. The calibration of the HMM provides the probability, at each time, that a given security is in the bubble regime. Conditional on two assets qualifying for the bubble regime, we then use the transfer entropy to quantify the influence of the returns of one asset i on another asset j, from which we construct the adjacency matrix of the SIN among securities. We apply our method to the Chinese stock market during the period 2005–2008, in which a normal phase was followed by a spectacular bubble ending in a massive correction. We introduce the Net Speculative Influence Intensity variable as the difference between the transfer entropies from i to j and from j to i, which is used in a series of rank-ordered regressions to predict the maximum loss (%MaxLoss) endured during the crash. The sectors that influenced other sectors the most are found to have the largest losses. There is some predictability obtained by using the transfer entropy involving industrial sectors to explain the %MaxLoss of financial institutions, but not vice versa. We also show that the bubble state variable calibrated on the Chinese market data corresponds well to the regimes in which the market exhibits a strong price acceleration followed by a clear change of price regime. Our results suggest that the SIN may contribute significant skill to the development of general linkage-based systemic risk measures and early-warning metrics.
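The transfer-entropy step can be illustrated with a minimal plug-in estimator on quantile-binned returns. This is a rough sketch under our own simplifications (binned, one-lag histories), not the authors' actual calibration; the function name is hypothetical.

```python
import numpy as np

def transfer_entropy(x, y, bins=3):
    """Estimate transfer entropy TE_{x->y} (in bits) between two return
    series, using quantile binning and one-step histories:
    TE = sum p(y', y, x) * log2[ p(y'|y, x) / p(y'|y) ]."""
    def discretize(s):
        # Map each observation to its quantile bin (0 .. bins-1)
        edges = np.quantile(s, np.linspace(0, 1, bins + 1)[1:-1])
        return np.digitize(s, edges)

    xs, ys = discretize(np.asarray(x)), discretize(np.asarray(y))
    y_next, y_cur, x_cur = ys[1:], ys[:-1], xs[:-1]

    def prob(*cols):
        # Empirical joint distribution over the given symbol columns
        keys, counts = np.unique(np.stack(cols), axis=1, return_counts=True)
        return {tuple(k): c / counts.sum() for k, c in zip(keys.T, counts)}

    p_xyz = prob(y_next, y_cur, x_cur)
    p_yz = prob(y_cur, x_cur)
    p_xy = prob(y_next, y_cur)
    p_y = prob(y_cur)

    te = 0.0
    for (a, b, c), p in p_xyz.items():
        # p(y'|y,x) / p(y'|y) = p(y',y,x) p(y) / (p(y,x) p(y',y))
        te += p * np.log2(p * p_y[(b,)] / (p_yz[(b, c)] * p_xy[(a, b)]))
    return te
```

Computing `transfer_entropy(x, y) - transfer_entropy(y, x)` for a pair of sector return series then gives the kind of net-influence quantity the Net Speculative Influence Intensity is built from.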
Twitter has a high presence in modern society, media and science. Numerous studies based on Twitter data, and not only in communication research, show that tweets are a popular data source for science. This popularity can be explained by the mostly free data and its high technical availability, as well as by Twitter's distinct and open communication structure. Yet even though much research is based on Twitter data, it is suitable for research only to a limited extent. For example, some studies have already revealed that Twitter data has low explanatory power when predicting election outcomes. Furthermore, the rise of automated communication by bots is an urgent problem for Twitter data analysis. Although critical aspects of Twitter data have already been discussed to some extent (mostly in the final remarks of studies), comprehensive evaluations of data quality are relatively rare.
To contribute to a deeper understanding of the problems surrounding the scientific use of Twitter data, and thus to a more deliberate and critical handling of this data, this study examines different aspects of data quality, usability and explanatory power. Building on previous research on data quality, it takes a critical look at four dimensions: availability and completeness; quality (authenticity, reliability and interpretability); language; and representativeness. Based on a small case study, the paper evaluates the scientific use of Twitter data by elaborating problems in data collection, analysis and interpretation. For this illustrative purpose, the author gathered data via Twitter's Streaming APIs: 73,194 tweets containing the search term "#merkel", collected between 20 and 24 February 2017 (8 pm each day) with the Streaming APIs (POST statuses/filter).
Concerning data availability and completeness, several aspects diminish data usability. Twitter provides two types of data gateways: the Streaming APIs (for real-time data) and the REST APIs (for historical data). The only freely available Streaming API tier, the "Spritzer" bandwidth, is limited to one percent of the overall (global) tweet volume at any given time. This limit is a prevalent problem when collecting Twitter data on major events such as elections and sports. The REST APIs do not usually provide data older than seven days. Furthermore, Twitter gives no information about the total or search-term-related tweet volume at any time.
In addition to incomplete data, several quality-related aspects complicate data gathering and analysis, such as the lack of user-specific and verified information (age, gender, location), inconsistent hashtag usage, missing conversational context, and poor data/user authenticity. Geodata on Twitter is rarely correct, if available at all, and is not useful for filtering relevant tweets. Searching and filtering relevant tweets by search terms can be deceptive, because not every tweet on a topic contains the corresponding hashtags, and it is difficult to find a perfect search term for broader and dynamically changing topics. Moreover, the missing conversational context of tweets impedes the interpretation of statements (especially with regard to irony or sarcasm). In addition, the rise of social bots diminishes dataset quality enormously. In the dataset generated for this work, only three of the top 30 accounts (by tweet count) could be directly identified as genuine, and one fourth of all accounts generated about 60% of all tweets. If the high-performing accounts consist predominantly of bots, the negative impact on data quality is immense.
Another problem for Twitter analysis is Internet language: emojis can be misinterpreted, while abbreviations, neologisms, mixed languages and a lack of grammar impede text analysis. Beyond data quality in general, the quality of tweet content and its representativeness is crucial. This work compares user statistics with research articles on SCOPUS as well as with the coverage in two selected German quality newspapers. Compared to its user count, Twitter is enormously overrepresented in media and science: only 16% of German adults (over 18 years) are monthly active users (MAUs), and merely four percent are daily active users.
Considering all the problems presented, Twitter can be a good data source for research, but only to a limited extent. Researchers must bear in mind that Twitter does not guarantee complete, reliable and representative data; ignoring these critical points can mislead data analysis. While Twitter data can be suitable for specific case studies, such as the usage and spread of selected hashtags or the Twitter usage of specific politicians, it cannot be used for broader, nation-based surveys such as the prediction of elections or of public opinion on a specific topic. Twitter has low representativeness and is mostly an "elite medium" with an uncertain future (given its stagnating user numbers and financial problems).
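The account-concentration statistic cited above (a quarter of accounts producing about 60% of all tweets) is simple to compute from a list of tweet authors. A minimal sketch, with a hypothetical helper name and one author handle per collected tweet assumed as input:

```python
from collections import Counter

def top_quartile_share(authors):
    """Fraction of all tweets produced by the most active 25% of
    accounts, given one author handle per tweet."""
    counts = sorted(Counter(authors).values(), reverse=True)
    k = max(1, len(counts) // 4)  # number of accounts in the top quartile
    return sum(counts[:k]) / sum(counts)
```

Applied to a collected dataset, a value around 0.6 would reproduce the concentration reported in the study, and a high value combined with unverified high-volume accounts is exactly the bot-related quality warning the text describes.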