134.
Strategic Partnerships in New Product Development: an Italian Case Study   (cited 6 times: 0 self-citations, 6 citations by others)
This article provides revealing insights into what a leading Italian firm, operating in markets where innovation is a focal point of competition, has learned about partnering with suppliers in the new product development process. To succeed in a rapidly changing environment, the firm promoted and sustained tightly linked, integrated supplier relationships. These relationships were one key element of a shorter product cycle, led to better products, and increased the firm's ability to compete. Andrea Bonaccorsi and Andrea Lipparini explore why partnering is critical for new product success and highlight the steps that should be taken to make this relationship a productive one.
135.
We introduce the Speculative Influence Network (SIN) to decipher the causal relationships between sectors (and/or firms) during financial bubbles. The SIN is constructed in two steps. First, we develop a Hidden Markov Model (HMM) of regime-switching between a normal market phase, represented by a geometric Brownian motion, and a bubble regime, represented by the stochastic super-exponential bubble model of Sornette and Andersen (Int J Mod Phys C 13(2):171–188, 2002). The calibration of the HMM provides the probability, at each time, that a given security is in the bubble regime. Conditional on two assets being qualified as in the bubble regime, we then use the transfer entropy to quantify the influence of the returns of one asset i on another asset j, from which we construct the adjacency matrix of the SIN among securities. We apply this method to the Chinese stock market during the period 2005–2008, in which a normal phase was followed by a spectacular bubble ending in a massive correction. We introduce the Net Speculative Influence Intensity variable as the difference between the transfer entropies from i to j and from j to i, which is used in a series of rank-ordered regressions to predict the maximum loss (%MaxLoss) endured during the crash. The sectors that influenced other sectors the most are found to have the largest losses. Some predictability is obtained by using the transfer entropy involving industrial sectors to explain the %MaxLoss of financial institutions, but not vice versa. We also show that the bubble state variable calibrated on the Chinese market data corresponds well to the regimes in which the market exhibits strong price acceleration followed by a clear change of price regime. Our results suggest that the SIN may contribute significant skill to the development of general linkage-based systemic risk measures and early warning metrics.
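To make the influence measure concrete, the following is a minimal Python sketch of estimating pairwise transfer entropy and the resulting Net Speculative Influence Intensity from two return series. The quantile binning, history length of one, and synthetic test data are illustrative assumptions rather than the authors' calibration, and the HMM step that qualifies assets as being in the bubble regime is omitted.

```python
# Minimal sketch of the Net Speculative Influence Intensity (NSII) idea:
# transfer entropy between two discretized return series, estimated from
# empirical joint frequencies. Bin count, lag structure, and the synthetic
# data are illustrative assumptions, not the paper's calibration.
import numpy as np

def transfer_entropy(x, y, bins=3):
    """TE(x -> y) with history length 1, from quantile-binned frequencies."""
    # Discretize each series into quantile bins so states are roughly equiprobable.
    xd = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
    yd = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
    # Joint counts over (y_{t+1}, y_t, x_t).
    p_xyz = np.zeros((bins, bins, bins))
    for a, b, c in zip(yd[1:], yd[:-1], xd[:-1]):
        p_xyz[a, b, c] += 1
    p_xyz /= p_xyz.sum()
    p_ab = p_xyz.sum(axis=2)        # p(y_{t+1}, y_t)
    p_b = p_xyz.sum(axis=(0, 2))    # p(y_t)
    p_bc = p_xyz.sum(axis=0)        # p(y_t, x_t)
    te = 0.0
    for a in range(bins):
        for b in range(bins):
            for c in range(bins):
                pj = p_xyz[a, b, c]
                if pj > 0:  # denominators are >= pj, hence nonzero here
                    te += pj * np.log((pj * p_b[b]) / (p_bc[b, c] * p_ab[a, b]))
    return te

def nsii(x, y, bins=3):
    """Net Speculative Influence Intensity of x on y: TE(x->y) - TE(y->x)."""
    return transfer_entropy(x, y, bins) - transfer_entropy(y, x, bins)

# Synthetic check: y lags x, so the net influence of x on y should be positive.
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = 0.6 * np.roll(x, 1) + rng.normal(size=2000)
print(nsii(x, y))
```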
138.
Twitter has a high presence in modern society, media and science. Numerous studies using Twitter data, and not only in communication research, show that tweets are a popular data source for science. This popularity can be explained by the mostly free data and its high technical availability, as well as the distinct and open communication structure. Yet even though much research is based on Twitter data, the data is only suitable for research to a limited extent. For example, some studies have already revealed that Twitter data has low explanatory power when predicting election outcomes. Furthermore, the rise of automated communication by bots is an urgent problem for Twitter data analysis. Although critical aspects of Twitter data have already been discussed to some extent (mostly in the final remarks of studies), comprehensive evaluations of data quality are relatively rare.

To contribute to a deeper understanding of the problems surrounding the scientific use of Twitter data, and thus to a more deliberate and critical handling of this data, the study examines different aspects of data quality, usability and explanatory power. Building on previous research on data quality, it takes a critical look at four dimensions: availability and completeness; quality (regarding authenticity, reliability and interpretability); language; and representativeness. Based on a small case study, this paper evaluates the scientific use of Twitter data by elaborating problems in data collection, analysis and interpretation. For this illustrative purpose, the author gathered data in the typical way via Twitter's Streaming APIs: 73,194 tweets collected between 20 and 24 February 2017 (each day at 8 p.m.) with the Streaming APIs (POST statuses/filter), containing the search term "#merkel".

Concerning data availability and completeness, several aspects diminish data usability. Twitter provides two types of data gateways: the Streaming APIs (for real-time data) and the REST APIs (for historical data). Of the Streaming APIs, only the "Spritzer" bandwidth is freely available, and it is limited to one percent of the overall (global) tweet volume at any given time. This limit is a prevalent problem when collecting Twitter data on major events such as elections and sports. The REST APIs do not usually provide data older than seven days. Furthermore, Twitter gives no information about the total or search-term-related tweet volume at any time.

In addition to incomplete data, several quality-related aspects complicate data gathering and analysis, such as the lack of user-specific and verified information (age, gender, location), inconsistent hashtag usage, missing conversational context, and poor data and user authenticity. Geo data on Twitter is rarely correct, if available at all, and is not useful for filtering relevant tweets. Searching and filtering relevant tweets by search terms can be deceptive, because not every tweet on a topic contains the corresponding hashtags, and it is difficult to find a perfect search term for broader and dynamically changing topics. Moreover, the missing conversational context of tweets impedes the interpretation of statements (especially with regard to irony or sarcasm). In addition, the rise of social bots diminishes dataset quality enormously. In the dataset generated for this work, only three of the top 30 accounts (by tweet count) could be directly identified as genuine, and one fourth of all accounts generated about 60% of all tweets. If the high-performing accounts predominantly consist of bots, the negative impact on data quality is immense.

Another problem of Twitter analysis is Internet language: emojis can be misinterpreted, while abbreviations, neologisms, mixed languages and a lack of grammar impede text analysis. Beyond data quality in general, the quality of tweet content and its representativeness is crucial. This work compares user statistics with research articles on SCOPUS as well as with the coverage of two selected German quality newspapers: relative to its user count, Twitter is enormously overrepresented in both media and science. Only 16% of German adults (over 18 years) are monthly active users (MAUs), and merely four percent are daily active users.

Considering all the problems presented, Twitter can be a good data source for research, but only to a limited extent. Researchers must consider that Twitter does not guarantee complete, reliable and representative data, and ignoring these critical points can mislead data analysis. While Twitter data can be suitable for specific case studies, such as the usage and spread of selected hashtags or the Twitter usage of specific politicians, it cannot be used for broader, nation-wide surveys such as predicting elections or gauging public opinion on a specific topic. Twitter has low representativeness and is mostly an "elite medium" with an uncertain future (given its stagnating user numbers and financial problems).
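For readers who want to see what this collection setup looked like in practice, here is a minimal sketch of the Streaming API call named above (POST statuses/filter) with the "#merkel" track term. The credentials and output file name are placeholders, and note that Twitter/X has since retired this v1.1 streaming endpoint, so the sketch documents the historical setup rather than a currently working pipeline.

```python
# Minimal sketch of collecting tweets via the (since-retired) Twitter
# Streaming API v1.1 endpoint POST statuses/filter, tracking "#merkel".
# Credentials are placeholders; the endpoint required OAuth 1.0a signing,
# which requests_oauthlib provides.
import json
import requests
from requests_oauthlib import OAuth1

auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET",     # placeholder credentials
              "ACCESS_TOKEN", "ACCESS_SECRET")

resp = requests.post(
    "https://stream.twitter.com/1.1/statuses/filter.json",
    data={"track": "#merkel"},   # the search term used in the case study
    auth=auth,
    stream=True,                 # keep the HTTP connection open for live tweets
)

with open("merkel_tweets.jsonl", "a", encoding="utf-8") as out:
    for line in resp.iter_lines():
        if not line:             # skip keep-alive newlines
            continue
        tweet = json.loads(line)
        # Persist raw JSON. Because of the 1% "Spritzer" cap described above,
        # whatever this loop collects is a sample, not the full tweet volume.
        out.write(json.dumps(tweet) + "\n")
```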
139.
Erik Koenen, Publizistik, 2018, 63(4): 535–556
In the discussion about the future of communication and media studies in digital times, this article focuses on the position and perspective of communication and media history. The challenges, problems and potentials associated with digitization are illustrated using the example of historical press research. Within the media ensemble of classical mass communication, the periodic press in particular benefits from the retrospective digitization of historical media and their digital edition in databases and portals. For historical press research, digitized newspapers and digital newspaper portals represent a genuinely new, because increasingly digital, research situation: digital newspaper portals not only ease the path to newspapers and their contents, they also open them up as machine-readable digital resources and thus open completely new paths for research, not least supported by digital methods.

The main objective of this article is to discuss the epistemological and methodological problems and the practical operationalization of digitally framed or supported research processes in historical press research, and to present concrete epistemic perspectives and research strategies for practice. With this aim in mind, the paper discusses three points:

(1.) Methodological and practical consequences of historical press research in digital research environments. With the digitization of newspapers and their digital reproduction in newspaper portals, their source character shifts in three essential dimensions: they are edited and indexed digitally, and their complete content is made accessible through optical character recognition (OCR). Previously unimportant technical aspects such as data formats, portal interfaces, search algorithms and programming interfaces thus become highly relevant for the methodology of historical press research. A primary methodological effect of the digital reorganization of newspapers in data and portals is the reversal of the usual reading practice, from "top down" to "bottom up": with the help of keyword searching, newspapers can now be searched comprehensively and transversely to the order of the newspaper original. Nevertheless, the article warns against an all too naïve and uncritical use of digitized newspapers and newspaper portals. In practice, several problems and risks are crucial for the conception of historical newspaper research in digital research environments: besides a hardly standardized and in large parts "wild", because often uncoordinated and selective, digitization of newspapers, the portals differ in their conception and are shaped by different content-related, technical, legal and entrepreneurial conditions.

(2.) Historical newspapers as digital sources in practice. The methodological and technical framework is fundamental and far-reaching for the further practical use of newspapers as digital sources in research. In each research step, it must be kept in mind that digitized newspapers are genuinely new and, depending on the quality and depth of digitization, very complex sources, with gains and losses of information compared to the originals. Newspapers are not simply digitized; they are digitally constructed, and they differ from each other in this construction. In this respect, historical press researchers are increasingly becoming "users". However simple and uncomplicated newspaper portals may be in practice, the implicit functions (hidden in algorithms, data and code) and the limits of these knowledge engines and their "correct" use must be incorporated into the research process. Combining and mediating classical hermeneutic methods with search technologies is an essential moment in the practical handling of digitized newspapers.

(3.) Historical press research and digital methods. In light of the new research situation emerging with digitized newspapers and newspaper portals, historical press research should increasingly open up to the possibilities of digital methods. In this methodological discussion, one concept forms a central point of reference: Franco Moretti's concept of "distant reading". Essentially, distant reading is about the quantitative, automated indexing of large text corpora using methods and techniques of text mining, which is what makes this perspective so interesting for historical press research confronted with the considerable metadata and full-text volumes of digitized newspapers. Digital text methods are thus seriously changing the way we look at texts and the research practice with texts such as newspapers: in part they automate and accelerate reading processes, produce "new" text extracts by computer, and generate new interpretation contexts between individual text, corpus and condensate, thereby setting new orientation points for "close reading". Computers and digital text methods do not relieve researchers of interpretation; rather, they constantly challenge researchers to interpret, in a continuous interplay, in order to give meaning to the text patterns discovered by machines.

In spite of all these advantages, digital methods have so far been used only sporadically in historical press research. For this reason, the article finally presents a digital workflow for research processes in historical press research, which illustrates and summarizes essential challenges, problems, solutions and potentials of digitally framed or supported research in press history.
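As a concrete illustration of the keyword-searching and distant-reading workflow sketched above, the following minimal Python example counts occurrences of a search term across a directory of OCR'd newspaper issues. The file layout (one plain-text file per issue) and the sample keyword are assumptions for illustration; a real pipeline would also have to cope with OCR errors and historical spelling variants.

```python
# Minimal sketch of "distant reading" over an OCR'd newspaper corpus:
# transversal keyword searching that yields a per-issue frequency table,
# which can then guide the selection of issues for close reading.
import re
from collections import Counter
from pathlib import Path

def keyword_frequencies(corpus_dir, keyword):
    """Count occurrences of `keyword` in each OCR'd issue (one .txt per issue)."""
    freq = Counter()
    pattern = re.compile(re.escape(keyword), re.IGNORECASE)
    for issue in sorted(Path(corpus_dir).glob("*.txt")):
        # OCR output is noisy, so tolerate decoding problems.
        text = issue.read_text(encoding="utf-8", errors="replace")
        freq[issue.stem] = len(pattern.findall(text))
    return freq

# Issues with the highest hit counts are candidates for close reading.
# Directory name and keyword are hypothetical examples.
for issue, hits in keyword_frequencies("ocr_issues", "Inflation").most_common(10):
    print(issue, hits)
```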