  Paid full text: 30307 articles
  Free: 254 articles
Finance & Banking: 5346 articles
Industrial Economics: 1794 articles
Planning & Management: 4640 articles
Economics: 6763 articles
General: 975 articles
Transport Economics: 86 articles
Tourism Economics: 209 articles
Trade Economics: 6384 articles
Agricultural Economics: 694 articles
Economic Overview: 3047 articles
Information Industry Economics: 44 articles
Post & Telecommunications Economics: 579 articles
  2021: 72 articles
  2020: 146 articles
  2019: 213 articles
  2018: 2489 articles
  2017: 2262 articles
  2016: 1404 articles
  2015: 265 articles
  2014: 341 articles
  2013: 1486 articles
  2012: 758 articles
  2011: 2319 articles
  2010: 2147 articles
  2009: 1842 articles
  2008: 1881 articles
  2007: 2141 articles
  2006: 314 articles
  2005: 626 articles
  2004: 719 articles
  2003: 814 articles
  2002: 530 articles
  2001: 354 articles
  2000: 384 articles
  1999: 251 articles
  1998: 277 articles
  1997: 274 articles
  1996: 250 articles
  1995: 245 articles
  1994: 239 articles
  1993: 286 articles
  1992: 260 articles
  1991: 239 articles
  1990: 227 articles
  1989: 186 articles
  1988: 184 articles
  1987: 175 articles
  1986: 202 articles
  1985: 251 articles
  1984: 317 articles
  1983: 261 articles
  1982: 255 articles
  1981: 269 articles
  1980: 238 articles
  1979: 251 articles
  1978: 205 articles
  1977: 189 articles
  1976: 174 articles
  1975: 138 articles
  1974: 139 articles
  1973: 126 articles
  1972: 86 articles
Sort order: 10000 query results in total; search time: 15 ms
273.
Twitter has a high presence in modern society, media and science. Numerous studies based on Twitter data, not only in communication research, show that tweets are a popular data source for science. This popularity can be explained by the largely free and technically easy access to the data, as well as by Twitter's distinct and open communication structure. Yet even though much research is based on Twitter data, the data are suitable for research only to a limited extent. For example, several studies have already shown that Twitter data have low explanatory power for predicting election outcomes. Furthermore, the rise of automated communication by bots is an urgent problem for Twitter data analysis. Although critical aspects of Twitter data have been discussed to some extent (mostly in the concluding remarks of studies), comprehensive evaluations of data quality are relatively rare.

To contribute to a deeper understanding of the problems surrounding the scientific use of Twitter data, and thus to a more deliberate and critical handling of these data, the study examines different aspects of data quality, usability and explanatory power. Building on previous research on data quality, it takes a critical look along four dimensions: availability and completeness; quality (authenticity, reliability and interpretability); language; and representativeness. Based on a small case study, the paper evaluates the scientific use of Twitter data by elaborating problems in data collection, analysis and interpretation. For this illustrative purpose, the author gathered data in a typical setup via Twitter's Streaming APIs: 73,194 tweets containing the search term "#merkel", collected between 20 and 24 February 2017 (starting at 8 pm each day) with the Streaming APIs (POST statuses/filter).

Concerning availability and completeness, several aspects diminish data usability. Twitter provides two types of data gateways: the Streaming APIs (for real-time data) and the REST APIs (for historical data). Of the Streaming APIs, only the "Spritzer" bandwidth is freely available, and it is limited to one percent of the overall (global) tweet volume at any given time. This limit is a prevalent problem when collecting Twitter data on major events such as elections or sports. The REST APIs usually do not provide data older than seven days. Furthermore, Twitter gives no information about the total or search-term-related tweet volume at any point in time.

In addition to incomplete data, several quality-related aspects complicate data gathering and analysis, such as the lack of user-specific and verified information (age, gender, location), inconsistent hashtag usage, missing conversational context and poor data/user authenticity. Geodata on Twitter are, if available at all, rarely correct and not useful for filtering relevant tweets. Searching and filtering relevant tweets by search terms can be deceptive, because not every tweet on a topic contains the corresponding hashtags, and it is difficult to find a perfect search term for broader, dynamically changing topics. The missing conversational context of tweets further impedes the interpretation of statements (especially with regard to irony or sarcasm). In addition, the rise of social bots diminishes dataset quality enormously: in the dataset generated for this work, only three of the top 30 accounts (by tweet count) could be directly identified as genuine, and one fourth of all accounts generated about 60% of all tweets. If these high-volume accounts consist predominantly of bots, the negative impact on data quality is immense.

Another problem of Twitter analysis is Internet language: emojis can be misinterpreted, and abbreviations, neologisms, mixed languages and a lack of grammar impede text analysis. Beyond general data quality, the quality of tweet content and its representativeness are crucial. This work compares user statistics with research articles indexed in SCOPUS as well as with the coverage of two selected German quality newspapers. Measured against its user count, Twitter is enormously overrepresented in both media and science: only 16% of German adults (over 18 years) are monthly active users (MAUs), and merely four percent are daily active users.

Considering all of these problems, Twitter can be a good data source for research, but only to a limited extent. Researchers must keep in mind that Twitter does not guarantee complete, reliable and representative data; ignoring these critical points can mislead data analysis. Twitter data can be suitable for specific case studies, such as the usage and spread of selected hashtags or the Twitter usage of specific politicians, but it cannot be used for broader, nation-wide questions such as predicting elections or measuring public opinion on a topic. Twitter has low representativeness and is mostly an "elite medium" with an uncertain future, given its stagnating user numbers and financial problems.
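To illustrate the kind of collection setup described in this abstract, the sketch below shows how tweets matching "#merkel" could be gathered from the v1.1 Streaming API endpoint (POST statuses/filter), here wrapped by the tweepy library (version 4.x assumed). This is a minimal sketch, not the author's original script; the credentials, output file name and class name are placeholders, and Twitter has since retired this endpoint.

```python
# Minimal sketch (not the study's original script): collect tweets containing
# "#merkel" via the v1.1 Streaming API (POST statuses/filter) using tweepy.
# Credentials and file name are placeholders.
import json
import tweepy

class HashtagCollector(tweepy.Stream):
    """Append every matching status to a JSON-lines file."""
    def on_status(self, status):
        with open("merkel_tweets.jsonl", "a", encoding="utf-8") as fh:
            fh.write(json.dumps(status._json) + "\n")

    def on_limit(self, track):
        # The free "Spritzer" bandwidth caps delivery at roughly 1% of global
        # volume; limit notices report how many matching tweets were dropped.
        print(f"Limit notice: {track} matching tweets were not delivered")

stream = HashtagCollector(
    "CONSUMER_KEY", "CONSUMER_SECRET",        # placeholder credentials
    "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET",
)
stream.filter(track=["#merkel"])  # filter by search term, as in the case study
```

The limit notices handled in on_limit are one practical way to observe the one-percent Spritzer cap the abstract criticizes: during peak events, the number of undelivered tweets reported there can exceed the number actually collected.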
274.
Erik Koenen, Publizistik, 2018, 63(4): 535–556
In the debate about the future of communication and media science in digital times, this article focuses on the position and perspective of communication and media history. The challenges, problems and potentials associated with digitization are illustrated using the example of historical press research. Within the media ensemble of classical mass communication, the periodical press in particular benefits from the retrospective digitization of historical media and their digital publication in databases and portals. For historical press research, digitized newspapers and digital newspaper portals represent a genuinely new, because increasingly digital, research situation: digital newspaper portals, as a novel and natively digital environment for newspapers, not only ease the path to newspapers and their contents, they also make them available as machine-readable digital resources and thereby open completely new paths for research, not least with the support of digital methods.

The main objective of this article is to discuss the epistemological and methodological problems and the practical operationalization of digitally framed or supported research processes in historical press research, and to present concrete perspectives and research strategies for practice. With this aim in mind, the paper discusses three points:

(1) Methodological and practical consequences of historical press research in digital research environments. With the digitization of newspapers and their digital reproduction in newspaper portals, their character as sources shifts in essentially three dimensions: they are edited and indexed digitally, and their complete content is made accessible through optical character recognition. Previously unimportant technical aspects such as data formats, portal interfaces, search algorithms and programming interfaces thus become highly relevant for the methodology of historical press research. A primary methodological effect of the digital reorganization of newspapers into data and portals is the reversal of the usual reading practice, from "top down" to "bottom up": with the help of keyword searching, newspapers can now be searched comprehensively and transversely to the order of the newspaper original. Nevertheless, the article warns against an all too naïve and uncritical use of digitized newspapers and newspaper portals. In practice, several problems and risks are crucial for designing historical newspaper research in digital research environments: besides a barely standardized and in large parts "wild" digitization of newspapers (often uncoordinated and selective), the newspaper portals differ in their conception and are shaped by different content-related, technical, legal and commercial conditions.

(2) Historical newspapers as digital sources in practice. The methodological and technical framework is fundamental and far-reaching for the further practical use of newspapers as digital sources in research. At each research step it must be kept in mind that digitized newspapers are genuinely new and, depending on the quality and depth of digitization, very complex sources, with information gains and losses compared to the originals. Newspapers are not simply digitized; they are digitally constructed, and they differ from one another in this construction. In this respect, historical press researchers are increasingly becoming "users". However simple and uncomplicated newspaper portals may appear in practice, the implicit functions (hidden in algorithms, data and code) and the limits of these knowledge engines, and their "correct" use, must be incorporated into the research process. Combining and mediating classical hermeneutic methods with search technologies is an essential element of the practical handling of digitized newspapers.

(3) Historical press research and digital methods. In the light of the new research situation emerging with digitized newspapers and newspaper portals, historical press research should increasingly open up to the possibilities of digital methods. In this methodological discussion, one concept forms a central point of reference: Franco Moretti's notion of "distant reading". In essence, distant reading is about the quantitative, automated indexing of large text corpora with methods and techniques of text mining, and this is precisely what makes the perspective so interesting for historical press research, which must deal with the considerable metadata and full-text volumes of digitized newspapers. Digital text methods are thus seriously changing the way we look at texts and the research practice with texts such as newspapers: in part they automate and accelerate reading processes, produce "new" computer-generated text extracts, create new contexts of interpretation between individual text, corpus and condensate, and thereby set new orientation points for "close reading". Computers and digital text methods do not relieve researchers of interpretation; rather, they constantly challenge them to interpret, in a continuous interplay, in order to give meaning to the text patterns discovered by machines.

In spite of these advantages, digital methods have so far been used only sporadically in historical press research. For this reason, the article finally presents a digital workflow for research processes in historical press research, which illustrates and summarizes the essential challenges, problems, solutions and potentials of digitally framed or supported research in press history.
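As a small illustration of what "keyword searching" and a basic distant-reading pass over digitized newspapers can look like in practice, the sketch below counts hits for a set of query terms across OCR full texts, one file per issue. It is illustrative only and not part of the article; the directory layout, file naming and keyword list are assumptions for the example.

```python
# Illustrative sketch (not from the article): a minimal "distant reading" pass
# over OCR full texts of digitized newspaper issues. The corpus directory,
# one-file-per-issue layout and keyword list are assumptions.
import re
from collections import Counter
from pathlib import Path

KEYWORDS = ["inflation", "reparationen", "völkerbund"]  # hypothetical query terms

def keyword_profile(corpus_dir: str) -> dict[str, Counter]:
    """Count keyword hits per issue, i.e. search transversely to the order of
    the newspaper original ("bottom up" instead of "top down" reading)."""
    profile: dict[str, Counter] = {}
    for issue in sorted(Path(corpus_dir).glob("*.txt")):  # one OCR file per issue
        tokens = re.findall(r"\w+", issue.read_text(encoding="utf-8").lower())
        profile[issue.stem] = Counter(t for t in tokens if t in KEYWORDS)
    return profile

if __name__ == "__main__":
    for issue, counts in keyword_profile("ocr_corpus").items():
        print(issue, dict(counts))
```

Even a frequency table like this already produces the "new text extracts" and interpretation contexts the article describes: the researcher still has to read the hits closely, but the machine decides where to look first.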
275.
This article aims to identify the determinants of business tourism income that can be controlled by economic agents and policy makers in destination countries. For the empirical study, a dynamic panel model was estimated by the Generalized Method of Moments (GMM) using the Gretl 2016a software, with a sample of 122 countries over the period 2002–2013 (12 years). The study reveals that, to foster short- and long-term growth of business tourism income, countries should develop measures that encourage capital investment in tourism and foreign direct investment.
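For orientation, a dynamic panel specification of the kind typically estimated by GMM can be written as follows. This is a generic illustration only: the abstract does not reproduce the article's exact equation or variable list.

```latex
% Generic dynamic panel model (illustrative only):
y_{it} = \alpha\, y_{i,t-1} + \boldsymbol{\beta}^{\prime} \mathbf{x}_{it} + \mu_i + \varepsilon_{it},
\qquad i = 1,\dots,122, \quad t = 2002,\dots,2013
```

Here y_{it} would denote business tourism income of country i in year t, x_{it} the controllable determinants (for example capital investment in tourism and foreign direct investment), mu_i a country-specific effect and epsilon_{it} the idiosyncratic error. Because the lagged dependent variable is correlated with the country effect, standard panel estimators are inconsistent, which motivates the GMM approach used in the study.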
279.
In manufacturing industries, product inspection is automated, and image data are increasingly employed for defect detection. A manufacturing company in Japan produces an item and inspects the products using image data. Reducing the error rate is important in product inspection, because poor inspection might lead to the delivery of defective products to consumers (consumer's risk), while overly strict inspection increases production cost (producer's risk). To reduce the error rate, we highlighted fault points using a two-dimensional moving range filter and classified defective products through a unanimous vote among Mahalanobis classifiers, one for each color component. As a result, we achieved a lower error rate than the current system. This research is an empirical study of how image data can be used in defect detection.
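The per-channel voting step can be sketched as follows. This is an illustration rather than the paper's code: the feature extraction, thresholds and the direction of the unanimous vote (here, flag a defect only if every color-channel classifier flags it) are assumptions, since the abstract does not spell out the exact rule.

```python
# Illustrative sketch (not the paper's implementation): unanimous vote among
# per-color-channel Mahalanobis classifiers. Features, thresholds and the
# voting direction are assumptions for the example.
import numpy as np

class ChannelClassifier:
    """Mahalanobis-distance classifier fitted on features of good products
    for a single color channel (e.g. R, G or B)."""
    def __init__(self, good_features: np.ndarray, threshold: float):
        self.mean = good_features.mean(axis=0)
        self.inv_cov = np.linalg.pinv(np.cov(good_features, rowvar=False))
        self.threshold = threshold

    def is_defective(self, x: np.ndarray) -> bool:
        d = x - self.mean
        dist = float(np.sqrt(d @ self.inv_cov @ d))  # Mahalanobis distance
        return dist > self.threshold

def unanimous_defect_vote(classifiers: list[ChannelClassifier],
                          features_per_channel: list[np.ndarray]) -> bool:
    """Declare a defect only if every channel classifier votes 'defective'."""
    return all(clf.is_defective(x)
               for clf, x in zip(classifiers, features_per_channel))
```

Requiring unanimity across channels trades sensitivity for fewer false alarms; relaxing the rule (e.g. a majority vote) would shift the balance between consumer's and producer's risk the abstract describes.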
280.
This paper examines how environmental resources and costs feature in the business models of small- and medium-sized tourism enterprises (SMTEs). Several studies have pointed to the generally positive relationship between the economic and environmental performance of tourism firms. Yet, although business models act as a vector between these aspects of firm performance, they have been overlooked in sustainable tourism discourse. The paper reports findings from discussion groups with SMTE businesses in South West England during the global economic downturn. Environmental costs and cost control were afforded relatively little importance in terms of value creation; conversely, there was a strong and predictable emphasis on revenue generation. Indirect tactics emerged for dealing with guests' environmental behaviours, reflecting this prevailing commercial logic. Green credentials were routinely de-emphasized, and sometimes regarded as liabilities, in a form of "greenhushing". Responses were framed by reference to social media and to how online reviews might negatively affect future value capture. Conceptually, the business model emerged as an important lens for understanding how environmental resources and costs were valorized. The paper highlights the need to ensure that contemporary approaches to environmental management in SMTEs reflect the current and fast-changing conditions that frame business models.