991.
The Economic Value of Water Quality
Stated preference values for water quality ratings based on the US Environmental Protection Agency National Water Quality Inventory ratings provide an operational basis for benefit assessment. Iterative choice survey results for a very large, nationally representative, Web-based panel imply an average valuation of $32 for each percent increase in lakes and rivers in the region for which water quality is rated “Good.” Valuations are skewed, with the mean value more than double the median. Sources of heterogeneity in benefit values include differences in responses to average water quality information and the base level of water quality. Conjoint estimates are somewhat lower than the iterative choice values. The annual economic value of the decline in inland US water quality from 1994 to 2000 is over $20 billion.
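The skew reported above, with a mean valuation more than double the median, is a common feature of willingness-to-pay data: a few very large valuations pull the mean far above the typical respondent. A toy sketch with invented values (not the study's data) makes the mechanism concrete:

```python
import statistics

# Hypothetical willingness-to-pay values (in $ per percentage point of
# "Good"-rated waters). These are invented for illustration, NOT the
# study's data -- one large valuation creates the right skew.
wtp = [10, 12, 14, 15, 16, 18, 20, 200]

mean_wtp = statistics.mean(wtp)      # pulled upward by the $200 outlier
median_wtp = statistics.median(wtp)  # robust to the outlier

print(f"mean = {mean_wtp:.2f}, median = {median_wtp:.2f}")
# Here the mean (38.12) is more than double the median (15.50),
# mirroring the kind of skew the study reports.
```

Because of this asymmetry, policy analyses based on such data often report both statistics: aggregate benefits scale with the mean, while the median describes the typical household.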
992.
Business strategy is fundamentally concerned with the actions required to create superior customer value in the firm's target markets, with the ultimate goal of achieving superior performance. Marketing theory suggests that two critical marketing activities are required to achieve this end: (1) the adoption of appropriate strategic behaviors (i.e., customer-oriented, competitor-oriented, or technology-oriented) and (2) the targeting of appropriate market segments (i.e., innovators, early adopters, early majority, late majority, or laggards). This study builds on prior research demonstrating that the relationship between strategic behavior and firm performance is contingent on the firm's strategy, by examining this relationship in high-tech markets and by considering the incremental contribution of appropriate target market selection. Responses from 160 senior marketing managers in high-tech firms reveal strong support for our framework. The study thus provides useful guidance to executives and managers in high-tech firms regarding the steps they should take to increase their probability of success.
993.
This study investigates whether consumers' perceptions of motives influence their evaluation of corporate social responsibility (CSR) efforts. The study reveals the mediating role of consumer trust in CSR evaluation frameworks; managers should monitor consumer trust, which appears to be an important subprocess regulating the effect of consumer attributions on patronage and recommendation intentions. Further, managers may offset the negative effects of profit-motivated giving by performing well on service quality. Appropriately motivated giving, by contrast, continues to affect trust positively regardless of the firm's performance on service quality provision.
994.
Erik Koenen, Publizistik, 2018, 63(4): 535–556
In the debate about the future of communication and media studies in the digital age, this article examines the position and perspective of communication and media history. The challenges, problems, and potentials associated with digitization are illustrated using the example of historical press research. Within the ensemble of classical mass media, the periodical press in particular benefits from the retrospective digitization of historical media and their digital publication in databases and portals. For historical press research, digitized newspapers and digital newspaper portals constitute a genuinely new, increasingly digital research situation: newspaper portals not only ease access to newspapers and their contents, they also open them up as machine-readable digital resources and thus create entirely new paths for research, not least with the support of digital methods.

The main objective of this article is to discuss the epistemological and methodological problems and the practical operationalization of digitally framed or supported research processes in historical press research, and to present concrete research strategies for practice. To this end, the paper discusses three points:

(1) Methodological and practical consequences of historical press research in digital research environments. With the digitization of newspapers and their digital reproduction in portals, their character as sources shifts in three essential dimensions: they are edited and indexed digitally, and their complete content is made searchable through optical character recognition. Previously unimportant technical aspects such as data formats, portal interfaces, search algorithms, and programming interfaces thereby become highly relevant to the methodology of historical press research. A primary methodological effect of this reorganization of newspapers into data and portals is the reversal of the usual reading practice, from "top down" to "bottom up": with keyword searching, newspapers can now be searched comprehensively and transversely to the order of the original. Nevertheless, a naïve and uncritical use of digitized newspapers and newspaper portals is to be warned against. In practice, several problems and risks shape the design of historical newspaper research in digital environments: besides a hardly standardized and in large parts "wild" digitization of newspapers (often uncoordinated and selective), the portals differ in their conception and are characterized by different content-related, technical, legal, and commercial conditions.

(2) Historical newspapers as digital sources in practice. The methodological and technical framework is fundamental and far-reaching for the practical use of newspapers as digital sources. At every research step it must be kept in mind that digitized newspapers are genuinely new and, depending on the quality and depth of digitization, highly complex sources, with gains and losses of information compared with the originals. Newspapers are not simply digitized; they are digitally constructed, and they differ from one another in this construction. In this respect, historical press researchers increasingly become "users". However simple and uncomplicated newspaper portals may appear in practice, the implicit functions (hidden in algorithms, data, and code) and the limits of these knowledge engines and of their "correct" use must be incorporated into the research process. Combining and mediating classical hermeneutic methods with search technologies is an essential element of the practical handling of digitized newspapers.

(3) Historical press research and digital methods. In light of the new research situation emerging with digitized newspapers and newspaper portals, historical press research should increasingly open itself to the possibilities of digital methods. In this methodological discussion, one concept forms a central point of reference: Franco Moretti's "distant reading". At its core, distant reading is about the quantitative, automated indexing of large text corpora with methods and techniques of text mining, and this is precisely what makes the perspective so interesting for historical press research, given the considerable metadata and full-text volumes of digitized newspapers. Digital text methods are thus seriously changing how we look at texts and how we work with texts such as newspapers: they partly automate and accelerate reading processes, produce "new" computer-generated text extracts, generate new contexts of interpretation between individual text, corpus, and condensate, and thereby set new orientation points for "close reading". Computers and digital text methods do not relieve researchers of interpretation; rather, they constantly challenge researchers to interpret, in a continuous interplay, in order to give meaning to the text patterns discovered by machines.

Despite all these advantages, digital methods have so far been used only sporadically in historical press research. For this reason, a digital workflow for research processes in historical press research is finally presented, illustrating and summarizing the essential challenges, problems, solutions, and potentials of digitally framed or supported research in press history.
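The "bottom-up" keyword searching that the article contrasts with traditional page-by-page reading can be sketched as a minimal inverted index over a toy corpus of OCR'd newspaper pages. All page identifiers and texts below are invented for illustration; real portals add layers such as fuzzy matching and OCR-error tolerance on top of this basic idea:

```python
from collections import defaultdict

# Toy "digitized newspaper" corpus: page id -> OCR full text.
# Page ids and texts are invented for illustration only.
pages = {
    "gazette-1912-03-01-p1": "The harbour strike continues as dock workers rally.",
    "gazette-1912-03-02-p4": "Grain prices rise; the harbour remains closed.",
    "courier-1912-03-02-p2": "Parliament debates the new press law.",
}

def build_index(corpus):
    """Map each lowercased token to the set of pages containing it."""
    index = defaultdict(set)
    for page_id, text in corpus.items():
        cleaned = text.lower().replace(";", " ").replace(".", " ").replace(",", " ")
        for token in cleaned.split():
            index[token].add(page_id)
    return index

index = build_index(pages)

# Keyword searching cuts across the order of the original newspaper:
# one query returns hits from different titles, dates, and pages at once.
hits = sorted(index["harbour"])
print(hits)  # ['gazette-1912-03-01-p1', 'gazette-1912-03-02-p4']
```

The reversal the article describes is visible here: instead of reading a given issue "top down", the researcher queries the whole corpus "bottom up" and receives page-level hits detached from the newspaper's original order, which then require exactly the contextualizing close reading the article calls for.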
999.
Twitter has a high presence in modern society, the media, and science. The large number of studies using Twitter data, not only in communication research, shows that tweets are a popular data source. This popularity is explained by the mostly free data and its high technical availability, as well as by Twitter's distinct and open communication structure. Yet even though much research is based on Twitter data, the data are suitable for research only to a limited extent. For example, several studies have shown that Twitter data have low explanatory power for predicting election outcomes, and the rise of automated communication by bots is an urgent problem for Twitter data analysis. Although critical aspects of Twitter data have been discussed to some extent (mostly in the concluding remarks of studies), comprehensive evaluations of data quality are relatively rare.

To contribute to a deeper understanding of the problems surrounding the scientific use of Twitter data, and thereby to a more deliberate and critical handling of such data, this study examines different aspects of data quality, usability, and explanatory power. Building on previous research on data quality, it takes a critical look along four dimensions: availability and completeness; quality (authenticity, reliability, and interpretability); language; and representativeness. Based on a small case study, the paper evaluates the scientific use of Twitter data by working through problems in data collection, analysis, and interpretation. For this illustrative purpose, the author gathered data in the typical way via Twitter's Streaming APIs: 73,194 tweets containing the search term "#merkel", collected between 20 and 24 February 2017 (at 8 pm each day) with the Streaming APIs (POST statuses/filter).

Concerning availability and completeness, several aspects diminish data usability. Twitter provides two types of data gateway: the Streaming APIs (for real-time data) and the REST APIs (for historical data). The Streaming APIs offer, free of charge, only the "Spritzer" bandwidth, which is limited to one percent of the overall (global) tweet volume at any given time. This limit is a prevalent problem when collecting Twitter data around major events such as elections and sports. The REST APIs do not usually provide data older than seven days. Furthermore, Twitter gives no information about the total or search-term-related tweet volume at any point in time.

In addition to incomplete data, several quality-related aspects complicate data gathering and analysis: the lack of user-specific and verified information (age, gender, location), inconsistent hashtag usage, missing conversational context, and poor data and user authenticity. Geo data on Twitter is, if available at all, rarely correct and not useful for filtering relevant tweets. Searching and filtering relevant tweets by search terms can be deceptive, because not every tweet on a topic contains the corresponding hashtags, and it is difficult to find a perfect search term for broad, dynamically changing topics. The missing conversational context of tweets also impedes the interpretation of statements (especially with regard to irony or sarcasm). Moreover, the rise of social bots diminishes dataset quality enormously. In the dataset generated for this work, only three of the top 30 accounts (by tweet count) could be directly identified as genuine, and one fourth of all accounts generated about 60% of all tweets. If the high-performing accounts consist predominantly of bots, the negative impact on data quality is immense.

Another problem for Twitter analysis is Internet language: emojis can be misinterpreted, while abbreviations, neologisms, mixed languages, and a lack of grammar impede text analysis. Beyond data quality in general, the quality of tweet content and its representativeness is crucial. This work compares user statistics with research articles on SCOPUS as well as with the coverage of two selected German quality newspapers. Compared to its user count, Twitter is enormously overrepresented in both the media and science: only 16% of German adults (over 18 years) are monthly active users (MAUs), and merely four percent are daily active users.

Considering all of these problems, Twitter can be a good data source for research, but only to a limited extent. Researchers must bear in mind that Twitter guarantees neither complete, nor reliable, nor representative data; ignoring these critical points can mislead the analysis. Twitter data can be suitable for specific case studies, such as the usage and spread of selected hashtags or the Twitter usage of particular politicians, but it cannot support broader, nation-wide claims such as the prediction of elections or the measurement of public opinion on a given topic. Twitter has low representativeness and is mostly an "elite medium" with an uncertain future (given its stagnating user numbers and financial problems).
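The concentration finding reported above (a quarter of the accounts producing about 60% of the tweets) corresponds to a simple diagnostic that can be run on any collected dataset before analysis. A minimal sketch with invented account names and counts, not the study's data:

```python
from collections import Counter

# One entry per collected tweet: the posting account's name.
# Names and counts are invented; in practice the account comes from
# each tweet's JSON payload as delivered by the Streaming API.
tweet_authors = (
    ["news_bot_a"] * 40 + ["retweet_hub"] * 30 + ["user_c"] * 10 +
    ["user_d"] * 8 + ["user_e"] * 5 + ["user_f"] * 3 +
    ["user_g"] * 2 + ["user_h"] * 2
)

counts = Counter(tweet_authors)
total_tweets = sum(counts.values())
n_accounts = len(counts)

# Share of all tweets produced by the most active quarter of accounts.
top_quarter = max(1, n_accounts // 4)
top_share = sum(c for _, c in counts.most_common(top_quarter)) / total_tweets

print(f"{n_accounts} accounts, top {top_quarter} produce {top_share:.0%} of tweets")
```

If a handful of high-volume accounts dominates the dataset, inspecting those accounts by hand (as the study did for its top 30) is a cheap way to estimate how much of the corpus may be bot-generated before drawing any substantive conclusions.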
1000.
This study examines the influence of the valence of online customer reviews on sales outcomes, drawing on prospect theory. Numerous studies have revealed the importance of customer reviews in online marketing, but only a few have explored the impact of online customer reviews on sales outcomes as a dynamic process. Prior work in the behavioral economics literature indicates that people value gains and losses differently, and that losses have more emotional impact than an equivalent amount of gains. This study verifies whether prospect theory applies to the relation between online customer reviews and sales outcomes. Relevant data were collected from Amazon.co.jp, and three statistical models were employed to investigate the relation between the two factors. The major findings confirm that negative customer reviews have a considerably greater impact on online sales than positive reviews. Furthermore, the marginal effects of both positive and negative reviews decrease as their volume increases. These results enable marketers to compare the relative sales effects of different types of customer reviews and to improve the effectiveness of customer service management.
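The asymmetry the study builds on can be made concrete with Kahneman and Tversky's value function, v(x) = x^α for gains and v(x) = −λ(−x)^β for losses. The sketch below uses their commonly cited parameter estimates (α = β = 0.88, λ = 2.25); it is a generic illustration of prospect theory, not the paper's estimated model:

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky value function: concave for gains,
    convex and steeper (loss aversion) for losses."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

# Loss aversion: a negative review of magnitude 1 "hurts" more than a
# positive review of equal magnitude helps.
gain = prospect_value(1.0)    # 1.0
loss = prospect_value(-1.0)   # -2.25
print(gain, loss)

# Diminishing sensitivity: the 10th unit of gain adds less value than
# the 1st (concavity), matching the finding that the marginal effect
# of reviews shrinks as their volume grows.
first = prospect_value(1.0) - prospect_value(0.0)
tenth = prospect_value(10.0) - prospect_value(9.0)
assert tenth < first
```

Both properties mirror the study's two main results: the steeper loss branch corresponds to negative reviews outweighing positive ones, and concavity corresponds to the declining marginal effect of review volume.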

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号