313.
Standard jackknife confidence intervals for a quantile Q_y(β) are usually preferred to confidence intervals based on analytical variance estimators because of their operational simplicity. However, the standard jackknife confidence intervals can give undesirable coverage probabilities for small sample sizes and for large or small values of β. In this paper, confidence intervals for a population quantile are derived from several existing quantile estimators. These intervals are based on an approximation to the cumulative distribution function of a studentized quantile estimator. The confidence intervals are evaluated empirically using real data, and some applications are illustrated. Results from simulation studies show that the proposed confidence intervals are narrower than confidence intervals based on the standard jackknife technique, which relies on a normal approximation. The proposed confidence intervals also achieve coverage probabilities above their nominal level. This study indicates that the proposed method can be an alternative to asymptotic confidence intervals, which can be unknown in practice, and to standard jackknife confidence intervals, which can have poor coverage probabilities and give wider intervals.
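The abstract does not give the proposed estimators' formulas. As a rough illustration of the baseline method it argues against, here is a minimal Python sketch of a standard delete-one jackknife interval for a sample quantile; the quantile level, sample, and normal critical value are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a standard delete-one jackknife confidence interval for a
# sample quantile; illustrative only, not the estimators proposed in the paper.
import numpy as np
from scipy import stats

def jackknife_quantile_ci(x, beta=0.25, level=0.95):
    x = np.asarray(x, dtype=float)
    n = len(x)
    q_hat = np.quantile(x, beta)                      # point estimate Q_y(beta)
    # Delete-one replicates of the quantile estimator.
    reps = np.array([np.quantile(np.delete(x, i), beta) for i in range(n)])
    # Jackknife variance estimate.
    var_jack = (n - 1) / n * np.sum((reps - reps.mean()) ** 2)
    # Normal-approximation interval, the baseline the paper compares against.
    z = stats.norm.ppf(0.5 + level / 2)
    half = z * np.sqrt(var_jack)
    return q_hat, (q_hat - half, q_hat + half)

rng = np.random.default_rng(0)
print(jackknife_quantile_ci(rng.exponential(size=50), beta=0.1))
```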
314.
In 2007 Nicholas Stern's Review (in Science 317:201–202, 2007) estimated that global GDP would shrink by 5–20% due to climate change, which brought forth calls to reduce emissions by 30–70% over the next 20 years. Stern's results were contested by Weitzman (in J Econ Lit XLV(3):703–724, 2007), who argued for more modest reductions in the near term, and by Nordhaus (in Science 317:201–202, 2007), who questioned the low discount rate and coefficient of relative risk aversion employed in the Stern Review and argued that 'the central questions about global-warming policy—how much, how fast, and how costly—remain open.' We present a simulation model of intertemporal resource allocation, developed by Färe et al. (in Time substitution with application to data envelopment analysis, 2009), that allows us to shed some light on these questions. The empirical specification constrains the amount of undesirable output a country can produce over a given period while letting the country choose the magnitude and timing of its reductions. We examine the production technology of 28 OECD countries over 1992–2006, in which countries produce real GDP and CO2 using capital and labor, and simulate the magnitude and timing of cuts necessary to comply with the Kyoto Protocol. This tells us 'how fast' and 'how much'. Comparing observed GDP with GDP simulated under the emissions constraints tells us 'how costly'. We find these costs to be relatively low if countries are allowed to reallocate production decisions across time, and that emissions should be cut gradually at the beginning of the period, with larger cuts starting in 2000.
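The abstract does not reproduce the Färe et al. time-substitution DEA programme. The toy sketch below only illustrates the underlying idea of reallocating a fixed emissions budget across years to maximize cumulative output; the concave output relation, productivity trend, and budget figure are invented for illustration.

```python
# Toy intertemporal allocation: spread a fixed CO2 budget over T years to
# maximize cumulative GDP, a stand-in for the time-substitution idea of
# Färe et al.; the production function and all numbers are illustrative only.
import numpy as np
from scipy.optimize import minimize

T = 15                        # planning horizon, e.g. 1992-2006
budget = 100.0                # total allowed emissions over the horizon (assumed)
a = np.linspace(1.0, 1.5, T)  # assumed productivity trend (hypothetical)

def neg_total_gdp(e):
    # Concave "GDP from emissions" relation, purely illustrative.
    return -np.sum(a * np.sqrt(e))

res = minimize(
    neg_total_gdp,
    x0=np.full(T, budget / T),
    bounds=[(1e-9, None)] * T,
    constraints=[{"type": "ineq", "fun": lambda e: budget - e.sum()}],
    method="SLSQP",
)
print("emissions path:", np.round(res.x, 2))
print("cumulative GDP:", -res.fun)
```

In this toy version the optimizer shifts emissions toward later, more productive years, which is the kind of 'magnitude and timing' choice the paper simulates.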
315.
We extend the standard textbook search and matching model by introducing deep habits in consumption. This assumption generates amplification in the response of labour market variables to technology shocks by producing endogenous countercyclical mark-ups. The cyclical fluctuations of vacancies and unemployment in our model can replicate those observed in the US data, with labour market tightness being 20 times more volatile than consumption. Vacancies display a hump-shaped response to technology shocks, and the numerical simulations generate an artificial Beveridge curve that is in line with the data. Our model preserves the assumption of fully flexible wages for new hires, and the calibration is consistent with the estimated elasticity of unemployment with respect to unemployment benefits. Finally, we show that, in contrast to models with exogenous mark-up shocks, the deep habits model does not require an implausible variation in the elasticity of demand to match the volatility of labour market variables, and the cyclical properties of the mark-up are in line with empirical evidence.
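The abstract summarizes the model rather than stating it. As a reminder of the search-and-matching block it builds on, here is a minimal sketch of a Cobb-Douglas matching function and labour market tightness; the parameter values are illustrative assumptions, not the paper's calibration.

```python
# Minimal sketch of the standard search-and-matching block the paper extends:
# Cobb-Douglas matching, labour market tightness, and the implied job-finding
# and vacancy-filling rates. Parameter values are illustrative only.
mu, alpha = 0.6, 0.5          # matching efficiency and elasticity (assumed)

def matches(u, v):
    return mu * u**alpha * v**(1 - alpha)

def tightness(u, v):
    return v / u                      # theta = v / u

def job_finding_rate(u, v):
    return matches(u, v) / u          # f(theta) = mu * theta**(1 - alpha)

def vacancy_filling_rate(u, v):
    return matches(u, v) / v          # q(theta) = mu * theta**(-alpha)

u, v = 0.06, 0.04
print(tightness(u, v), job_finding_rate(u, v), vacancy_filling_rate(u, v))
```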
316.
This research examines collaborative and transactional relationships in buying firms to determine whether collaborative relationships offer greater benefits than transactional relationships, and to ascertain which relational factors drive satisfaction and performance for both relationship types. Factor analysis, t-tests and step-wise regression were used to analyze data from a survey of buying firms. Results show that collaborative relationships offer higher levels of satisfaction and performance than transactional relationships. Satisfaction factored into two separate constructs: satisfaction with the relationship and satisfaction with the results. Of the relational factors examined, trust is the most important predictor of performance and of satisfaction with the relationship, regardless of relationship type.
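The abstract names the tests but not the data. A minimal sketch of the kind of two-sample comparison it describes, a t-test on satisfaction scores for collaborative versus transactional relationships, might look like the following; the scores are invented, not the survey data analysed in the paper.

```python
# Illustrative two-sample comparison of satisfaction scores for collaborative
# vs. transactional relationships; the scores below are made up, not the
# survey data analysed in the paper.
from scipy import stats

collaborative = [5.8, 6.1, 5.5, 6.4, 5.9, 6.0, 5.7]
transactional = [4.9, 5.2, 4.7, 5.0, 5.3, 4.8, 5.1]

# Welch's t-test (no equal-variance assumption).
t_stat, p_value = stats.ttest_ind(collaborative, transactional, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```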
317.
An Austrian interpretation of the New Keynesian small menu cost model of the business cycle is proposed. Austrian and New Keynesian business cycle theories share the feature that the cycle is generated by rigidities which prevent the economy from adapting instantaneously to changing conditions. Austrian business cycle theory is capital-based, focusing on credit expansion which artificially lowers interest rates and causes an investment boom and unsustainable business expansion. In contrast, the New Keynesian small menu cost model of the business cycle is based on nominal rigidities which prevent markets from clearing. Small menu costs introduce dichotomous behavior: firms find it locally optimal to avoid instantaneous output price adjustments in the face of the cost, but this local optimum results in economy-wide output and employment fluctuations that are much greater in relative magnitude. The small menu cost model of the business cycle is extended and reinterpreted in light of Austrian business cycle theory with heterogeneous, multiply-specific capital, thus providing a rigorous formalization of the Austrian business cycle. The Austrian interpretation of this New Keynesian model fortuitously addresses several of its shortcomings. JEL classification: B53, E12, E23, E32.
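The abstract describes the menu-cost mechanism verbally. A minimal sketch of the dichotomous adjustment rule it refers to, in which the firm resets its price only when the profit loss from keeping the old price exceeds the menu cost, could look like the following; the quadratic loss approximation and the numbers are assumptions for illustration, not taken from the paper.

```python
# Dichotomous price-setting under a small menu cost: the firm pays the menu
# cost and resets to the frictionless optimal price only if the profit loss
# from keeping the old price exceeds that cost. The quadratic loss term and
# all numbers are illustrative assumptions.
def adjust_price(p_old, p_star, menu_cost, curvature=1.0):
    loss_if_inactive = curvature * (p_old - p_star) ** 2   # second-order profit loss
    return p_star if loss_if_inactive > menu_cost else p_old

print(adjust_price(p_old=1.00, p_star=1.02, menu_cost=0.001))  # small gap: keep old price
print(adjust_price(p_old=1.00, p_star=1.10, menu_cost=0.001))  # large gap: adjust
```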
318.
We study a dynamic duopoly model with network externalities. The value of the product depends on the current and past network size. We compare the market outcome to that chosen by a planner. With equal-quality products, the market outcome may result in too little standardization (i.e. too many products active in the long run) but never too much. The potential inefficiency is non-monotonic in the strength of the network effect, being most likely for intermediate levels. When products differ in quality, an inferior product may dominate even when the planner would choose otherwise, but only if the discount factor is sufficiently large.
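To illustrate the statement that product value depends on current and past network size, one could write the per-period value as quality plus a network term in the current user base and a depreciated installed base; the functional form and parameters below are assumptions for illustration, not the paper's model.

```python
# Illustrative per-period product value with network externalities: quality
# plus a network term in the current user base and a depreciated stock of
# past users. Functional form and parameters are assumed, not the paper's.
def product_value(quality, current_users, past_installed_base,
                  network_strength=0.3, memory=0.8):
    network_size = current_users + memory * past_installed_base
    return quality + network_strength * network_size

print(product_value(quality=1.0, current_users=10, past_installed_base=25))
```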
319.
A learning-based model of repeated games with incomplete information
This paper tests a learning-based model of strategic teaching in repeated games with incomplete information. The repeated game has a long-run player whose type is unknown to a group of short-run players. The proposed model assumes that a fraction of 'short-run' players follow a one-parameter learning model (self-tuning EWA). In addition, some 'long-run' players are myopic, while others are sophisticated: they rationally anticipate how short-run players adjust their actions over time and 'teach' the short-run players in order to maximize their own long-run payoffs. All players optimize noisily. The proposed model nests an agent-based quantal-response equilibrium (AQRE) and the standard equilibrium models as special cases. Using data from 28 experimental sessions of repeated trust and entry games, including 8 previously unpublished sessions, the model fits substantially better than chance and much better than standard equilibrium models. Estimates show that most of the long-run players are sophisticated, and that short-run players become more sophisticated with experience.
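The abstract only names self-tuning EWA and noisy optimization. As a hedged reminder of the building blocks, here is a minimal sketch of a basic (not self-tuning) EWA attraction update together with a logit 'noisy best response' choice rule; all parameter values are invented for illustration.

```python
# Minimal sketch of basic EWA learning with a logit choice rule; the
# self-tuning variant used in the paper replaces the fixed phi and delta
# with data-driven functions. All parameter values are illustrative.
import numpy as np

def ewa_update(attractions, N, payoffs, chosen, phi=0.9, delta=0.5, kappa=0.0):
    # payoffs[j] = payoff strategy j would have earned against opponents' play.
    N_new = phi * (1 - kappa) * N + 1
    weights = delta + (1 - delta) * (np.arange(len(attractions)) == chosen)
    new_attr = (phi * N * attractions + weights * payoffs) / N_new
    return new_attr, N_new

def logit_choice_probs(attractions, lam=2.0):
    # Noisy best response: larger lambda means closer to exact optimization.
    z = lam * (attractions - attractions.max())   # subtract max for stability
    expz = np.exp(z)
    return expz / expz.sum()

A, N = np.zeros(2), 1.0
A, N = ewa_update(A, N, payoffs=np.array([3.0, 1.0]), chosen=0)
print(logit_choice_probs(A))
```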
320.
We distinguish learning in a static environment from learning in a dynamic environment to show the existence of an important interaction between the development of new technologies and human capital accumulation. Because technological progress creates a more dynamic environment, it complements education in the production of human capital by enhancing adaptive skills. Higher levels of adaptability and human capital in turn determine the profitability of new inventions and the incentive to invest in new R&D. Differences in the history of technological progress produce different levels of adaptability, and our results suggest why countries with comparable levels of education and per capita income may differ significantly in their growth performance.