Full-Text Access Type
Paid full text | 18109 articles |
Free | 5 articles |
Subject Classification
Finance and Banking | 2670 articles |
Industrial Economics | 760 articles |
Planning and Management | 2566 articles |
Economics | 3889 articles |
General | 482 articles |
Transport Economics | 5 articles |
Tourism Economics | 3 articles |
Trade Economics | 4491 articles |
Agricultural Economics | 4 articles |
Economic Overview | 1356 articles |
Water Resources Engineering | 1272 articles |
Information Industry Economics | 44 articles |
Posts and Telecommunications Economics | 572 articles |
Publication Year
2024 | 1 article |
2023 | 1 article |
2022 | 3 articles |
2021 | 2 articles |
2020 | 4 articles |
2019 | 9 articles |
2018 | 2572 articles |
2017 | 2250 articles |
2016 | 1435 articles |
2015 | 93 articles |
2014 | 89 articles |
2013 | 65 articles |
2012 | 514 articles |
2011 | 2131 articles |
2010 | 1989 articles |
2009 | 1523 articles |
2008 | 1581 articles |
2007 | 1968 articles |
2006 | 69 articles |
2005 | 388 articles |
2004 | 464 articles |
2003 | 553 articles |
2002 | 252 articles |
2001 | 62 articles |
2000 | 51 articles |
1999 | 1 article |
1998 | 17 articles |
1997 | 1 article |
1996 | 13 articles |
1986 | 13 articles |
Sort order: 10,000 query results found (search time: 31 ms)
91.
Fabian Pfaffenberger 《Publizistik》2018,63(1):53-72
Twitter has a strong presence in modern society, media and science. Numerous studies based on Twitter data – not only in communication research – show that tweets are a popular data source. This popularity can be explained by the largely free data and its high technical availability, as well as Twitter's distinct and open communication structure. Yet although much research builds on Twitter data, the data is suitable for research only to a limited extent. For example, some studies have already revealed that Twitter data has low explanatory power when predicting election outcomes. Furthermore, the rise of automated communication by bots is an urgent problem for Twitter data analysis. Although critical aspects of Twitter data have been discussed to some extent (mostly in the concluding remarks of studies), comprehensive evaluations of data quality remain relatively rare. To contribute to a deeper understanding of the problems surrounding the scientific use of Twitter data, and thus to a more deliberate and critical handling of this data, the study examines different aspects of data quality, usability and explanatory power. Building on previous research on data quality, it takes a critical look at four dimensions: availability and completeness; quality (regarding authenticity, reliability and interpretability); language; and representativeness. Based on a small case study, this paper evaluates the scientific use of Twitter data by elaborating problems in data collection, analysis and interpretation. For this illustrative purpose, the author gathered 73,194 tweets containing the search term "#merkel" via Twitter's Streaming API (POST statuses/filter) between 20 and 24 February 2017 (starting at 8 p.m. each day). Concerning data availability and completeness, several aspects diminish data usability. Twitter provides two types of data gateways: the Streaming APIs (for real-time data) and the REST APIs (for historical data).
Of the Streaming APIs, only the Spritzer bandwidth is freely available, and it is limited to one percent of the overall (global) tweet volume at any given time. This limit is a prevalent problem when collecting Twitter data around major events such as elections and sports. The REST APIs usually do not provide data older than seven days. Furthermore, Twitter gives no information about the total or search-term-related tweet volume at any point in time. In addition to incomplete data, several quality-related aspects complicate data gathering and analysis: the lack of user-specific, verified information (age, gender, location), inconsistent hashtag usage, missing conversational context, and poor data/user authenticity. Geodata on Twitter is – if available at all – rarely correct and not useful for filtering relevant tweets. Searching and filtering relevant tweets by search terms can be deceptive, because not every tweet on a topic contains the corresponding hashtags, and it is difficult to find a perfect search term for broad, dynamically changing topics. The missing conversational context of tweets also impedes the interpretation of statements (especially with regard to irony or sarcasm). Moreover, the rise of social bots diminishes dataset quality enormously: in the dataset generated for this work, only three of the top 30 accounts (by tweet count) could be directly identified as genuine, and one fourth of all accounts generated about 60% of all tweets. If the high-performing accounts predominantly consist of bots, the negative impact on data quality is immense. Another problem of Twitter analysis is Internet language: emojis can be misinterpreted, while abbreviations, neologisms, mixed languages and a lack of grammar impede text analysis. Beyond data quality in general, the quality of tweet content and its representativeness are crucial.
This work compares user statistics with research articles on SCOPUS as well as the coverage of two selected German quality newspapers. Relative to its user count, Twitter is enormously overrepresented in media and science: only 16% of German adults (over 18 years) are monthly active users (MAUs), and merely four percent are daily active users. Considering all of the problems presented, Twitter can be a good data source for research, but only to a limited extent. Researchers must bear in mind that Twitter does not guarantee complete, reliable or representative data; ignoring these critical points can mislead data analysis. While Twitter data can be suitable for specific case studies, such as the usage and spread of selected hashtags or the Twitter usage of specific politicians, it cannot be used for broader, nation-wide questions such as predicting elections or measuring public opinion on a specific topic. Twitter has low representativeness and is mostly an "elite medium" with an uncertain future (given its stagnating user numbers and financial problems).
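The concentration finding above (one fourth of accounts producing about 60% of all tweets) can be recomputed on any tweet dataset with a few lines. A minimal sketch in Python, using hypothetical field names and toy data rather than the author's actual pipeline:

```python
from collections import Counter

def top_share(tweets, top_fraction=0.25):
    """Fraction of all tweets produced by the most active
    `top_fraction` of accounts (field name is hypothetical)."""
    counts = Counter(t["user_screen_name"] for t in tweets)
    per_account = sorted(counts.values(), reverse=True)
    k = max(1, int(len(per_account) * top_fraction))
    return sum(per_account[:k]) / sum(per_account)

# Toy data: one heavy (possibly automated) account, three light ones.
tweets = [{"user_screen_name": "bot_a"}] * 6 + [
    {"user_screen_name": name} for name in ("u1", "u2", "u3")
]
print(round(top_share(tweets), 2))  # top 25% of accounts post 0.67 of tweets
```

A skew far above the uniform baseline is a cheap first warning sign that high-volume (bot-like) accounts dominate the dataset, though it is no substitute for per-account inspection.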
92.
Erik Koenen 《Publizistik》2018,63(4):535-556
In the discussion about the future of communication and media science in digital times, this article focuses on the position and perspective of communication and media history. The challenges, problems and potentials associated with digitization are illustrated using the example of historical press research. Within the media ensemble of classical mass communication, the periodical press in particular benefits from the retrospective digitization of historical media and their digital edition in databases and portals. For historical press research, digitized newspapers and digital newspaper portals represent a genuinely new, increasingly digital research situation: digital newspaper portals not only ease the path to newspapers and their contents, they also open newspapers up as machine-readable digital resources and thus open completely new avenues for research, not least supported by digital methods. The main objective of this article is to discuss the epistemological and methodological problems, as well as the practical operationalization, of digitally framed or supported research processes in historical press research, and to present concrete research strategies and perspectives for practice. With this aim in mind, the paper discusses three points: (1) Methodological and practical consequences of historical press research in digital research environments. With the digitization of newspapers and their digital reproduction in newspaper portals, their source character shifts in essentially three dimensions: they are edited and indexed digitally, and their complete content is made accessible through optical character recognition. Previously unimportant technical aspects such as data formats, portal interfaces, search algorithms and programming interfaces thereby become highly relevant for the methodology of historical press research.
A primary methodological effect of the digital reorganization of newspapers into data and portals is the reversal of the usual reading practice, from "top down" to "bottom up": with keyword searching, newspapers can now be searched comprehensively, cutting across the order of the newspaper original. Nevertheless, an all too naïve and uncritical use of digitized newspapers and newspaper portals must be warned against. In practice, several problems and risks are crucial when designing historical newspaper research in digital research environments: besides a barely standardized and in large parts "wild" digitization of newspapers (often uncoordinated and selective), the newspaper portals differ in their conception and are characterized by different content-related, technical, legal and entrepreneurial conditions. (2) Historical newspapers as digital sources in practice. The methodological and technical framework is fundamental and far-reaching for the further practical use of newspapers as digital sources in research. In each research step, it must be considered that digitized newspapers are genuinely new and, depending on the quality and depth of digitization, very complex sources, with information gains and losses compared to the originals. Newspapers are not simply digitized; they are digitally constructed, and they differ from each other in this construction. In this respect, historical press researchers are increasingly becoming "users". However simple and uncomplicated newspaper portals may be in practice, the implicit functions (hidden in algorithms, data and code) and the limits of these knowledge engines and their "correct" use must be incorporated into the research process. Combining and mediating classical hermeneutic methods with search technologies is an essential moment in the practical handling of digitized newspapers. (3) Historical press research and digital methods.
In light of the new research situation emerging with digitized newspapers and newspaper portals, historical press research should increasingly open up to the possibilities of digital methods. In this methodological discussion, one concept forms a central point of reference: Franco Moretti's notion of "distant reading". At its core, distant reading is about the quantitative, automated indexing of large text corpora using methods and techniques of text mining, which is precisely what makes the perspective so interesting for historical press research dealing with the considerable metadata and full-text volumes of digitized newspapers. Digital text methods are thus seriously changing the way we look at texts and the research practice with texts such as newspapers: in part they automate and accelerate reading processes, produce "new" text extracts by computer, and generate new interpretation contexts between individual texts, the corpus and its condensates, thereby setting new orientation points for "close reading". Computers and digital text methods therefore do not relieve researchers of interpretation; rather, they constantly challenge researchers to interpret, in a continuous interplay, in order to give meaning to the text patterns discovered by machines. In spite of these advantages, digital methods have so far been used only sporadically in historical press research. For this reason, the article finally presents a digital workflow for research processes in historical press research, which illustrates and summarizes essential challenges, problems, solutions and potentials of digitally framed or supported research in press history.
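The "keyword searching" that reverses the usual reading practice can be sketched as a tiny concordance over OCR'd newspaper pages: instead of reading issue by issue ("top down"), the researcher retrieves every hit across the corpus ("bottom up"). A minimal illustration, with hypothetical page identifiers and text:

```python
import re

def keyword_concordance(pages, term, window=30):
    """Return (page_id, snippet) hits for `term` across OCR'd pages,
    cutting across the original order of the newspaper issues."""
    pattern = re.compile(re.escape(term), re.IGNORECASE)
    hits = []
    for page_id, text in pages.items():
        for m in pattern.finditer(text):
            start = max(0, m.start() - window)
            snippet = text[start:m.end() + window].strip()
            hits.append((page_id, snippet))
    return hits

# Hypothetical OCR output keyed by issue-date/page.
pages = {
    "1912-05-01/p3": "Der Rundfunk war noch fern, doch die Presse ...",
    "1923-10-29/p1": "Erste Rundfunksendung in Berlin ausgestrahlt.",
}
for page, snippet in keyword_concordance(pages, "rundfunk"):
    print(page, "->", snippet)
```

Note that such a search inherits every OCR error in the underlying text, which is one reason the abstract warns against uncritical use of newspaper portals.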
93.
94.
95.
96.
In manufacturing industries, product inspection is automated, and image data is increasingly employed for defect detection. A manufacturing company in Japan produces an item and inspects the produced products using image data. Reducing the error rate is important in product inspection, because lax inspection may lead to the delivery of defective products to consumers (consumer's risk), while overly strict inspection increases production cost (producer's risk). To reduce the error rate, we highlighted fault points using a two-dimensional moving range filter and discriminated defective products through a unanimous vote among Mahalanobis classifiers, one per color component. As a result, we achieved a lower error rate than the current system. This research is an empirical study of how to use image data in defect detection.
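The abstract gives no implementation details, but the two building blocks it names, a two-dimensional moving range filter and per-channel Mahalanobis classifiers combined by unanimous vote, can be sketched roughly as follows. Window size, threshold, and the direction of the vote are all assumptions, not the authors' actual settings:

```python
import numpy as np

def moving_range_2d(img):
    """Two-dimensional moving range: max minus min over each pixel's
    3x3 neighbourhood, which highlights abrupt local changes."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            patch = img[max(0, i - 1):i + 2, max(0, j - 1):j + 2]
            out[i, j] = patch.max() - patch.min()
    return out

def mahalanobis_ok(x, mean, cov, threshold=3.0):
    """True if feature vector x lies within `threshold` Mahalanobis
    units of the reference (non-defective) distribution."""
    d = x - mean
    dist2 = d @ np.linalg.inv(cov) @ d
    return np.sqrt(dist2) <= threshold

def unanimous_vote(features, models, threshold=3.0):
    """Flag a product as defective only if every per-channel
    classifier agrees it is out of range (one model per color)."""
    return all(
        not mahalanobis_ok(f, mean, cov, threshold)
        for f, (mean, cov) in zip(features, models)
    )
```

In practice the reference mean and covariance per color component would be estimated from images of known-good products, and the threshold tuned to balance the two risks named above.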
97.
Motoi Iwashita Ken Nishimatsu Takeshi Kurosawa Shinsuke Shimogawa 《The Review of Socionetwork Strategies》2010,4(1):17-28
An increase in broadband demand is forecast by transitional methods that consider the effect of this increase through many factors, such as customer requirement diversification, and new service introduction and deployment under competition. Broadband demand forecasting has become important for closing the digital divide, promoting regional development, and constructing networks economically; therefore, a demand forecast model that considers the mechanisms of market structure is necessary. In this paper, a demand analysis method for broadband access combining macro- and micro-data mining is proposed, and the service choice behaviour of customers is introduced as a customer model, not only to express the macro trend of market structure but also to support area marketing. The proposed method can estimate the potential demand, determine the point at which broadband demand growth peaks in a specified area, and support decisions on the installation of ultra-high-speed broadband access facilities.
98.
Specialized advertising media and product market competition (total citations: 1; self-citations: 1; citations by others: 1)
99.
100.