141.
Twitter has a high presence in modern society, media and science. Numerous studies based on Twitter data, and not only in communication research, show that tweets are a popular data source for science. This popularity can be explained by the mostly free data and its high technical availability, as well as by Twitter's distinct and open communication structure. Yet even though much research is based on Twitter data, the data is suitable for research only to a limited extent. For example, some studies have already revealed that Twitter data has low explanatory power when predicting election outcomes. Furthermore, the rise of automated communication by bots is an urgent problem for Twitter data analysis. Although critical aspects of Twitter data have already been discussed to some extent (mostly in the final remarks of studies), comprehensive evaluations of data quality are relatively rare. To contribute to a deeper understanding of the problems surrounding the scientific use of Twitter data, and thereby to a more deliberate and critical handling of this data, the study examines different aspects of data quality, usability and explanatory power. Building on previous research on data quality, it takes a critical look at four dimensions: availability and completeness; quality (regarding authenticity, reliability and interpretability); language; and representativeness. Based on a small case study, this paper evaluates the scientific use of Twitter data by elaborating problems in data collection, analysis and interpretation. For this illustrative purpose, the author gathered 73,194 tweets containing the search term “#merkel” between 20 and 24 February 2017 (8pm each day) via Twitter's Streaming API (POST statuses/filter). Concerning data availability and completeness, several aspects diminish data usability. Twitter provides two types of data gateways: Streaming APIs (for real-time data) and REST APIs (for historical data).
The Streaming APIs offer only a free “Spritzer” bandwidth, which is limited to one percent of the overall (global) tweet volume at any given time. This limit is a prevalent problem when collecting Twitter data on major events such as elections and sports. The REST APIs usually do not provide data older than seven days. Furthermore, Twitter gives no information about the total or search-term-related tweet volume at any time. In addition to incomplete data, several quality-related aspects complicate data gathering and analysis, such as the lack of user-specific and verified information (age, gender, location), inconsistent hashtag usage, missing conversational context and poor data/user authenticity. Geodata on Twitter is rarely correct, if available at all, and is not useful for filtering relevant tweets. Searching and filtering relevant tweets by search terms can be deceptive, because not every tweet on a topic contains the corresponding hashtags, and it is difficult to find an ideal search term for broader, dynamically changing topics. The missing conversational context of tweets further impedes the interpretation of statements (especially with regard to irony or sarcasm). In addition, the rise of social bots diminishes dataset quality enormously. In the dataset generated for this work, only three of the top 30 accounts (by tweet count) could be directly identified as genuine, and one fourth of all accounts generated about 60% of all tweets. If these high-performing accounts predominantly consist of bots, the negative impact on data quality is immense. Another problem of Twitter analysis is Internet language: emojis can be misinterpreted, while abbreviations, neologisms, mixed languages and a lack of grammar impede text analysis. Beyond data quality in general, the quality of tweet content and its representativeness is crucial.
This work compares user statistics with research articles on SCOPUS as well as media coverage in two selected German quality newspapers. Compared to its user count, Twitter is enormously overrepresented in media and science: only 16% of German adults (over 18 years) are monthly active users (MAUs) and merely four percent are daily active users. Considering all the problems presented, Twitter can be a good data source for research, but only to a limited extent. Researchers must consider that Twitter does not guarantee complete, reliable and representative data; ignoring these critical points can mislead data analysis. While Twitter data can be suitable for specific case studies, such as the usage and spread of selected hashtags or the Twitter usage of specific politicians, it cannot support broader, nation-wide conclusions such as the prediction of elections or of public opinion on a specific topic. Twitter has low representativeness and is mostly an “elite medium” with an uncertain future (given its stagnating user numbers and financial problems).
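The account-concentration check described in the abstract (one fourth of accounts producing about 60% of all tweets) can be computed with a simple helper. The function name and toy data below are illustrative assumptions for exposition, not the author's code or dataset:

```python
# Hypothetical sketch: measuring account concentration in a tweet dataset.
# The sample data is invented for illustration only.

def top_share(tweets_per_account, fraction=0.25):
    """Share of all tweets produced by the top `fraction` of accounts."""
    counts = sorted(tweets_per_account.values(), reverse=True)
    k = max(1, int(len(counts) * fraction))
    return sum(counts[:k]) / sum(counts)

# Toy example: 4 accounts, the most active one dominates.
sample = {"a": 60, "b": 20, "c": 15, "d": 5}
share = top_share(sample, fraction=0.25)  # share of the top 1 of 4 accounts
```

A high value of `share` flags exactly the kind of skew the study treats as a bot-related data-quality warning sign.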
142.
Erik Koenen 《Publizistik》2018,63(4):535-556
In the discussion about the future of communication and media studies in digital times, this article focuses on the position and perspective of communication and media history. The challenges, problems and potentials associated with digitization are illustrated using the example of historical press research. Within the media ensemble of classical mass communication, the periodical press in particular benefits from the retrospective digitization of historical media and their digital edition in databases and portals. For historical press research, digitized newspapers and digital newspaper portals represent a genuinely new, increasingly digital research situation: digital newspaper portals, as a novel and genuinely digital environment for newspapers, not only ease the path to newspapers and their contents; they also open them up as machine-readable digital resources and thus open completely new paths for research, not least supported by digital methods. The main objective of this article is to discuss the epistemological and methodological problems and the practical operationalization of digitally framed or supported research processes in historical press research, and to present concrete perspectives and research strategies for practice. With this aim in mind, the paper discusses three points: (1.) Methodological and practical consequences of historical press research in digital research environments. With the digitization of newspapers and their digital reproduction in newspaper portals, their source character shifts in three essential dimensions: they are edited and indexed digitally, and their complete content is made accessible through optical character recognition. This makes previously unimportant technical aspects such as data formats, portal interfaces, search algorithms and programming interfaces highly relevant for the methodology of historical press research.
A primary methodological effect of the digital reorganization of newspapers into data and portals is the reversal of the usual reading practice, from “top down” to “bottom up”. With the help of keyword searching, newspapers can now be searched comprehensively and transversely to the order of the newspaper original. Nevertheless, a warning is warranted against an all-too-naïve and uncritical use of digitized newspapers and newspaper portals. In practice, several problems and risks are crucial for the conception of historical newspaper research in digital research environments: besides a largely unstandardized and in parts “wild” (often uncoordinated and selective) digitization of newspapers, the portals differ in their conception and are characterized by different content-related, technical, legal and entrepreneurial conditions. (2.) Historical newspapers as digital sources in practice. The methodological and technical framework is fundamental and far-reaching for the further practical use of newspapers as digital sources in research. In each research step, it must be considered that digitized newspapers are genuinely new and, depending on the quality and depth of digitization, very complex sources, with information gains and losses compared to the originals. Newspapers are not simply digitized; they are digitally constructed, and they differ from one another in this construction. In this respect, historical press researchers are increasingly becoming “users”. However simple newspaper portals may be to use in practice, one must incorporate the implicit functions (hidden in algorithms, data and code) and the limits of these knowledge engines and their “correct” use into the research process. Combining and mediating classical hermeneutic methods with search technologies is an essential moment in the practical handling of digitized newspapers. (3.) Historical press research and digital methods.
In light of the new research situation emerging with digitized newspapers and newspaper portals, it is obvious that historical press research should increasingly open up to the possibilities of digital methods. In the digital-methods discussion of historical press research, one concept in particular forms a central point of reference: Franco Moretti's concept of “distant reading”. Basically, distant reading is about the quantitative, automated indexing of large text corpora using methods and techniques of text mining, and this is what makes the perspective so interesting for historical press research in dealing with the considerable metadata and full-text volumes of digitized newspapers. Digital text methods are thus seriously changing the way we look at texts and the research practice with texts such as newspapers: in part they automate and accelerate reading processes, produce “new” computer-generated text extracts, generate new interpretation contexts between individual text, corpus and condensate, and thus set new orientation points for “close reading”. Computers and digital text methods therefore do not relieve researchers of interpretation; rather, they constantly challenge researchers to interpret, in a continuous interplay, in order to give meaning to the text patterns discovered by machines. In spite of all these advantages, digital methods have so far been used only sporadically in historical press research. For this reason, a digital workflow for research processes in historical press research is finally presented, which illustrates and summarizes essential challenges, problems, solutions and potentials of digitally framed or supported research in press history.
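The keyword-searching and distant-reading idea discussed in the abstract can be sketched minimally: counting hits for a search term across a corpus of OCR'd newspaper issues, grouped by year. The corpus and search term below are invented for illustration and stand in for the metadata and full texts a real portal would deliver:

```python
# Illustrative sketch of keyword searching over a (toy) digitized corpus.
from collections import defaultdict
import re

def keyword_hits_by_year(corpus, term):
    """corpus: list of (year, ocr_text) pairs; returns {year: hit count}."""
    pattern = re.compile(re.escape(term), re.IGNORECASE)
    hits = defaultdict(int)
    for year, text in corpus:
        hits[year] += len(pattern.findall(text))
    return dict(hits)

toy_corpus = [
    (1912, "The press reports on the press and its readers."),
    (1912, "A press conference was held."),
    (1913, "No relevant coverage this issue."),
]
counts = keyword_hits_by_year(toy_corpus, "press")
```

Such frequency profiles are the kind of machine-produced "text pattern" that, as the article stresses, still requires hermeneutic interpretation, not least because OCR errors silently distort the counts.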
143.
Rwanda's Nyungwe National Park is a biodiversity hotspot with the most endemic species in the ecoregion and the highest number of threatened species internationally. Nyungwe supplies critical ecosystem services to the Rwandan population including water provisioning and tourism services. Tourism in the Park has strong potential for financing enhanced visitor experiences and the sustainable management of the Park. This paper explores quantitatively the economic impacts of adjustment in Park visitation fees and tourism demand as a source of revenues to improve Park tourism opportunities and ongoing operations and maintenance. The methods developed in this paper are novel in integrating the results of stated preference techniques with a regional computable general equilibrium modelling approach to capture multisectoral, direct, indirect and induced impacts. Such methods have strong potential for assessing revenue generation alternatives in other contexts where park managers are faced with the need to generate additional revenue for sustainable park management while facing diminishing budget allocations. Results of this analysis demonstrate that adjustment of Park fees has a relatively small impact on the regional economy and well-being when compared with a strategy aimed at generating increased tourism demand through investment in improving the visitor experience at Nyungwe National Park.
147.
In manufacturing industries, product inspection is automated, and image data is increasingly employed for defect detection. A manufacturing company in Japan produces an item and inspects the products using image data. Reducing the error rate is important in product inspection, because poor inspection might lead to the delivery of defective products to consumers (consumer's risk), while overly strict inspection increases production cost (producer's risk). To reduce the error rate, we highlighted fault points using a two-dimensional moving range filter and discriminated defective products through a unanimous vote among Mahalanobis classifiers, one for each color component. As a result, we achieved a lower error rate than the current system. This research is an empirical study of how to use image data in defect detection.
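The pipeline described in the abstract can be sketched in miniature. Everything below is a hedged illustration of the general technique, not the authors' implementation: the filter takes neighbour differences on a toy grayscale grid, the per-channel classifier reduces to a one-dimensional Mahalanobis distance (|x − mean| / std), and the threshold of 3 is an assumed value:

```python
# Hedged sketch: 2-D moving range filter + per-channel Mahalanobis-style
# outlier test + unanimous vote across color channels.
import statistics

def moving_range_2d(img):
    """Max absolute difference to the right/down neighbour per pixel."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            right = abs(img[i][j] - img[i][j + 1]) if j + 1 < w else 0
            down = abs(img[i][j] - img[i + 1][j]) if i + 1 < h else 0
            out[i][j] = max(right, down)
    return out

def channel_outlier(value, reference, threshold=3.0):
    """1-D Mahalanobis distance against known-good samples of one channel."""
    mean = statistics.mean(reference)
    std = statistics.stdev(reference)
    return abs(value - mean) / std > threshold

def is_defect(pixel_rgb, reference_rgb):
    """Unanimous vote: flag a defect only if every channel is an outlier."""
    return all(channel_outlier(v, ref)
               for v, ref in zip(pixel_rgb, reference_rgb))
```

The unanimous vote is a conservative design: requiring all channels to agree lowers the producer's risk (false alarms) at the cost of possibly missing defects visible in only one channel.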
148.
Brazilian cities, like many of the large urban centers in the world, continue to expand, increasing the demand for mobility and transport, while at the same time investing in greenhouse gas (GHG) mitigation to avoid climate change. Brazil's urbanization rate increased from 26% in 1940 to almost 70% in 1980; during this period, the Brazilian population tripled and the urban population multiplied by seven. In 2010, the transport sector in São Paulo accounted for 71% of the total emissions released by the energy sector. Ethanol has been considered a fuel with lower greenhouse gas emissions than fossil fuels; however, ethanol production would have to double to meet the expected demand. The electric vehicle (EV) market is expanding around the world and is also an option for reducing transport emissions, if powered by clean electricity. To assess whether the adoption of EVs might bring more benefits than the current ethanol, we develop prospective scenarios supported by the Long-range Energy Alternatives Planning (LEAP) simulation tool, taking a bottom-up tank-to-wheel approach to the CO2 emissions of cars in São Paulo. The scenario substituting 25% of gasoline-powered cars with EVs by 2030 showed reductions in energy consumption and CO2 emissions of around 15% and 26%, respectively, in that year compared with 2015. We discuss the interplay between ethanol and EVs, also considering emission coefficients from life cycle analyses conducted in Brazil, and conclude that EVs will have a higher positive impact on climate change mitigation than ethanol.
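A bottom-up tank-to-wheel accounting of the kind the abstract describes multiplies fleet size, annual mileage, fuel intensity and the fuel's CO2 emission factor. The sketch below is a toy illustration of that arithmetic; all numbers are invented and do not come from the paper or from LEAP:

```python
# Toy bottom-up tank-to-wheel sketch (all figures illustrative).

def fleet_co2_tonnes(vehicles, km_per_year, litres_per_km, kg_co2_per_litre):
    """Annual fleet CO2 in tonnes = N * km * L/km * kgCO2/L / 1000."""
    return vehicles * km_per_year * litres_per_km * kg_co2_per_litre / 1000.0

gasoline = fleet_co2_tonnes(1_000_000, 10_000, 0.10, 2.3)  # gasoline fleet
electric = 0.0  # tank-to-wheel only: EVs have no tailpipe CO2
```

Note the scope caveat the paper itself raises: tank-to-wheel credits EVs with zero emissions, which is why the authors additionally consult life-cycle emission coefficients for Brazil.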
149.
This study investigates job location and its relationship with employee turnover intention within the casino-entertainment industry. The researchers analyzed turnover intention at the supervisor and company level from the perspective of employees who hold front-of-house or back-of-house jobs in three Nevada casinos and their corporate office. The results of this research fill existing gaps in the academic literature related to turnover intention and provide beneficial implications for industry and academic practitioners. Organizations within the casino-entertainment industry may develop strategies related to the management of human capital that could provide fiscal and operational benefits.
150.
An increase in broadband demand is forecast by traditional methods that consider many factors driving this increase, such as customer requirement diversification and the introduction and deployment of new services under competition. Broadband demand forecasting has become important for closing the digital divide, promoting regional development, and constructing networks economically; a demand forecast model that considers the mechanisms of market structure is therefore necessary. In this paper, a demand analysis method for broadband access combining macro- and micro-data mining is proposed, and the service choice behaviour of customers is introduced as a customer model, not only to express the macro trend of market structure but also to support area marketing. The proposed method can estimate potential demand, determine the point at which broadband demand growth peaks in a specified area, and support decisions on installing ultra-high-speed broadband access facilities.
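Customer service-choice behaviour of the kind this abstract models is conventionally captured with a discrete-choice model. The multinomial logit below is my illustrative assumption of such a customer model, not the paper's actual formulation, and the utilities are toy values:

```python
# Illustrative sketch: multinomial logit for service choice behaviour.
import math

def logit_shares(utilities):
    """Choice probabilities P_i = exp(U_i) / sum_j exp(U_j)."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Toy utilities for three hypothetical access services (e.g. DSL, FTTH, mobile).
shares = logit_shares([1.0, 2.0, 0.5])
```

Aggregating such per-customer choice probabilities over the customers of an area is one standard way to link a micro-level customer model to the macro demand trend the paper's method targets.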