81.
This paper studies the financing status of small and medium enterprises (SMEs) in transition economies. Factors causing financing obstacles are identified and further analyzed to determine their influence over financing patterns. Bank regulatory practices relevant to SMEs' access to bank loans, and their influence over loan structures, are identified. This study contributes to the existing body of knowledge by exploring the impact of specific bank regulatory practices on credit lending to SMEs in transition economies.
82.
Joseph P. Joyce. Open Economies Review, 2011, 22(5): 875–895
Bank crises in emerging economies have been a feature of the recent global crisis, and their incidence has increased in the post-Bretton Woods era. This paper investigates the impact of financial globalization on the incidence of systemic bank crises in 20 emerging markets over the years 1976–2002, using measures of de facto and de jure financial openness. An increase in foreign debt liabilities contributes to an increase in the incidence of crises, but foreign direct investment and portfolio equity liabilities have the opposite effect. A more liberal de jure capital regime lowers the incidence of banking crises, while a regime of fixed exchange rates increases their frequency. The results of the econometric analysis are consistent with the experience of East European and Central Asian emerging markets, which attracted a relatively large proportion of capital flows in the form of debt in recent years and were hit particularly hard by the global financial crisis.
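The kind of specification described here, the incidence of a systemic crisis modelled as a function of openness measures, can be illustrated with a minimal logistic sketch. Everything below is synthetic: the variable names echo the abstract (debt liabilities raising crisis risk, FDI/equity liabilities lowering it, fixed exchange rates raising it), but the data and coefficients are illustrative, not the paper's.

```python
import numpy as np

# Synthetic panel of country-year observations: crisis incidence as a
# logistic function of financial-openness measures.
rng = np.random.default_rng(0)
n = 2000
debt = rng.normal(size=n)         # foreign debt liabilities (standardized)
fdi = rng.normal(size=n)          # FDI + portfolio equity liabilities
fixed_fx = rng.integers(0, 2, n)  # 1 = fixed exchange rate regime

# "True" effects chosen only to match the signs reported in the abstract
logit = -2.0 + 1.0 * debt - 0.8 * fdi + 0.6 * fixed_fx
crisis = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Fit by gradient ascent on the logistic log-likelihood
X = np.column_stack([np.ones(n), debt, fdi, fixed_fx])
beta = np.zeros(4)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (crisis - p) / n

print(dict(zip(["const", "debt", "fdi_equity", "fixed_fx"], beta.round(2))))
```

On this synthetic draw the estimated signs reproduce the abstract's pattern: positive for debt liabilities and fixed exchange rates, negative for FDI and portfolio equity.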
83.
This paper follows Bailey (J Polit Econ 64:93–110, 1956) and Lucas (Econometrica 68:247–274, 2000) and estimates the welfare cost of inflation for 17 Latin American economies. We use annual data from 1955 to 2000, together with recent advances in applied econometrics, to estimate the inflation-rate elasticity of money demand, and report substantial and widely differing welfare cost estimates for these economies.
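Bailey's measure behind such estimates is the consumer surplus lost under the money demand curve. A minimal numeric sketch, assuming the log-log demand form used by Lucas (2000); the parameter values A and eta below are illustrative, not estimates from the paper:

```python
# With log-log money demand m(i) = A * i**(-eta), where m is real balances
# relative to income and i the nominal interest rate, Bailey's welfare cost is
#   w(i) = integral_0^i m(x) dx - i * m(i) = A * eta/(1-eta) * i**(1-eta).
A, eta = 0.05, 0.5  # illustrative values only

def money_demand(i):
    return A * i ** (-eta)

def welfare_cost_closed_form(i):
    return A * eta / (1 - eta) * i ** (1 - eta)

def welfare_cost_numeric(i, steps=100_000):
    # Trapezoid rule on (eps, i]; the integrand is integrable at 0 for eta < 1.
    eps = i / steps
    xs = [eps + k * (i - eps) / steps for k in range(steps + 1)]
    area = sum((money_demand(a) + money_demand(b)) / 2 * (b - a)
               for a, b in zip(xs, xs[1:]))
    return area - i * money_demand(i)

i = 0.14  # e.g. a 14% nominal rate under high inflation
print(welfare_cost_closed_form(i), welfare_cost_numeric(i))
```

The numeric integral agrees with the closed form, and the cost rises with the nominal rate, which is why chronically high-inflation economies yield large estimates.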
84.
Marketing IS management: The wisdom of Peter Drucker
86.
Henning Vöpel. Wirtschaftsdienst, 2017, 97(10): 755–756
87.
We introduce the Speculative Influence Network (SIN) to decipher the causal relationships between sectors (and/or firms) during financial bubbles. The SIN is constructed in two steps. First, we develop a Hidden Markov Model (HMM) of regime-switching between a normal market phase represented by a geometric Brownian motion and a bubble regime represented by the stochastic super-exponential Sornette and Andersen (Int J Mod Phys C 13(2):171–188, 2002) bubble model. The calibration of the HMM provides the probability at each time for a given security to be in the bubble regime. Conditional on two assets being qualified in the bubble regime, we then use the transfer entropy to quantify the influence of the returns of one asset i onto another asset j, from which we introduce the adjacency matrix of the SIN among securities. We apply our technology to the Chinese stock market during the period 2005–2008, during which a normal phase was followed by a spectacular bubble ending in a massive correction. We introduce the Net Speculative Influence Intensity variable as the difference between the transfer entropies from i to j and from j to i, which is used in a series of rank-ordered regressions to predict the maximum loss (%MaxLoss) endured during the crash. The sectors that influenced other sectors the most are found to have the largest losses. There is some predictability obtained by using the transfer entropy involving industrial sectors to explain the %MaxLoss of financial institutions but not vice versa. We also show that the bubble state variable calibrated on the Chinese market data corresponds well to the regimes when the market exhibits a strong price acceleration followed by clear change of price regimes. Our results suggest that SIN may contribute significant skill to the development of general linkage-based systemic risks measures and early warning metrics.
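The pairwise building block of the SIN, transfer entropy between two return streams and its asymmetry (the Net Speculative Influence Intensity), can be sketched with a plain plug-in estimator on sign-discretized returns. This illustrates the quantities only; it is not the authors' estimator, calibration, or data, and the function names are mine:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate of TE(x -> y), lag 1, on symbol sequences x, y."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))  # (y_next, y_prev, x_prev)
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    singles_y = Counter(y[:-1])
    n = len(x) - 1
    te = 0.0
    for (yn, yp, xp), c in triples.items():
        p_joint = c / n                         # p(y_next, y_prev, x_prev)
        p_cond_full = c / pairs_yx[(yp, xp)]    # p(y_next | y_prev, x_prev)
        p_cond_y = pairs_yy[(yn, yp)] / singles_y[yp]  # p(y_next | y_prev)
        te += p_joint * np.log2(p_cond_full / p_cond_y)
    return te

def nsii(x, y):
    """Net Speculative Influence Intensity: TE(x->y) - TE(y->x)."""
    return transfer_entropy(x, y) - transfer_entropy(y, x)

# Synthetic example: y copies x's previous sign 80% of the time,
# so net influence should flow from x to y.
rng = np.random.default_rng(1)
x = list((rng.standard_normal(20_000) > 0).astype(int))
noise = rng.random(20_000) < 0.2
y = [0] + [x[t - 1] if not noise[t] else 1 - x[t - 1]
           for t in range(1, 20_000)]
print(nsii(x, y))  # positive
```

A positive NSII identifies x as the net influencer of y, which is how the sectors driving others (and, per the abstract, suffering the largest crash losses) are ranked.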
Fabian Pfaffenberger. Publizistik, 2018, 63(1): 53–72
Twitter has a high presence in modern society, media and science. Numerous studies using Twitter data, and not only in communication research, show that tweets are a popular data source. This popularity can be explained by the mostly free and technically highly available data, as well as the distinct and open communication structure. Yet even though much research is based on Twitter data, it is suitable for research only to a limited extent. For example, some studies have already shown that Twitter data has low explanatory power when predicting election outcomes. Furthermore, the rise of automated communication by bots is an urgent problem for Twitter data analysis. Although critical aspects of Twitter data have been discussed to some extent (mostly in the final remarks of studies), comprehensive evaluations of data quality are relatively rare.

To contribute to a deeper understanding of the problems surrounding the scientific use of Twitter data, and thus to a more deliberate and critical handling of it, this study examines different aspects of data quality, usability and explanatory power. Building on previous research on data quality, it takes a critical look at four dimensions: availability and completeness; quality (authenticity, reliability and interpretability); language; and representativeness. Using a small case study, the paper evaluates the scientific use of Twitter data by elaborating problems in data collection, analysis and interpretation. For this illustrative purpose, the author gathered data in the typical way via Twitter's Streaming APIs: 73,194 tweets containing the search term "#merkel", collected between 20 and 24 February 2017 (8 pm each day) with the Streaming APIs (POST statuses/filter).

Concerning availability and completeness, several aspects diminish data usability. Twitter provides two types of data gateways: the Streaming APIs (for real-time data) and the REST APIs (for historical data). The only freely available Streaming API bandwidth, Spritzer, is limited to one percent of the overall (global) tweet volume at any given time, a prevalent problem when collecting Twitter data around major events such as elections and sports. The REST APIs usually provide no data older than seven days. Furthermore, Twitter gives no information about the total or search-term-related tweet volume at any time.

In addition to incomplete data, several quality-related aspects complicate data gathering and analysis: the lack of user-specific and verified information (age, gender, location), inconsistent hashtag usage, missing conversational context, and poor data/user authenticity. Geodata on Twitter is, if available at all, rarely correct and not useful for filtering relevant tweets. Searching and filtering tweets by search terms can be deceptive, because not every tweet on a topic contains the corresponding hashtags, and it is difficult to find a perfect search term for broad and dynamically changing topics. The missing conversational context of tweets also impedes the interpretation of statements (especially with regard to irony or sarcasm). Moreover, the rise of social bots diminishes dataset quality enormously. In the dataset generated for this work, only three of the top 30 accounts (by tweet count) could be directly identified as genuine, and one fourth of all accounts generated about 60% of all tweets. If the high-performing accounts predominantly consist of bots, the negative impact on data quality is immense.

Another problem of Twitter analysis is Internet language: emojis can be misinterpreted, while abbreviations, neologisms, mixed languages and a lack of grammar impede text analysis.

Beyond data quality in general, the quality of tweet content and its representativeness is crucial. This work compares user statistics with research articles on SCOPUS as well as the coverage of two selected German quality newspapers. Compared to its user count, Twitter is enormously overrepresented in media and science: only 16% of German adults (over 18 years) are monthly active users (MAUs), and merely four percent are daily active users.

Considering all these problems, Twitter can be a good data source for research, but only to a limited extent. Researchers must bear in mind that Twitter does not guarantee complete, reliable and representative data; ignoring these critical points can mislead analysis. While Twitter data can be suitable for specific case studies, such as the usage and spread of selected hashtags or the Twitter usage of particular politicians, it cannot support broader, nation-wide surveys such as the prediction of elections or of public opinion on a given topic. Twitter has low representativeness and is mostly an "elite medium" with an uncertain future (given its stagnating user numbers and financial problems).
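The concentration finding reported here (a quarter of accounts producing roughly 60% of tweets) is straightforward to reproduce on any collected dataset. A minimal sketch on synthetic records shaped like Twitter's v1.1 payloads (each tweet a JSON object with a `user.screen_name` field); the account names and counts below are invented for illustration:

```python
from collections import Counter

def top_quartile_share(tweets):
    """Share of all tweets produced by the most active 25% of accounts."""
    counts = Counter(t["user"]["screen_name"] for t in tweets)
    by_volume = sorted(counts.values(), reverse=True)
    k = max(1, len(by_volume) // 4)
    return sum(by_volume[:k]) / sum(by_volume)

# Synthetic stream: a few hyperactive (bot-like) accounts, many casual ones,
# standing in for lines collected via POST statuses/filter.
raw = (
    [{"user": {"screen_name": f"bot_{i}"}} for i in range(5) for _ in range(100)]
    + [{"user": {"screen_name": f"user_{i}"}} for i in range(60) for _ in range(5)]
)
print(f"top-quartile share: {top_quartile_share(raw):.0%}")
```

A share well above 25% signals heavy concentration; combined with manual inspection of the top accounts, this is a cheap first screen for bot-driven distortion before any content analysis.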