891.
A Jump-diffusion Model for Exchange Rates in a Target Zone   (Total citations: 1; self-citations: 0; citations by others: 1)
We propose a simple jump-diffusion model for an exchange rate target zone. The model captures most stylized facts from the existing target zone models while remaining analytically tractable. The model is based on a modified two-limit version of the Cox, Ingersoll and Ross (1985) model. In the model the exchange rate is kept within the band because the variance decreases as the exchange rate approaches the upper or lower limits of the band. We also consider an extension of the model with parity adjustments, which are modeled as Poisson jumps. The model is estimated by GMM based on conditional moments. We derive prices of currency options in our model, assuming that realignment jump risk is idiosyncratic. Throughout, we apply the theory to EMS exchange rate data. We show that, after the EMS crisis of 1993, currencies remain in an implicit target zone which is narrower than the officially announced target zones.
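As a rough illustration of the mechanism the abstract describes (volatility shrinking towards the band limits, plus Poisson realignment jumps), the Python sketch below simulates a band-limited jump-diffusion by Euler-Maruyama. The drift, diffusion and jump specifications and all parameter values are illustrative assumptions, not the paper's exact two-limit CIR parameterization.

```python
import numpy as np

def simulate_target_zone(s0=0.0, lower=-0.0225, upper=0.0225,
                         kappa=2.0, sigma=1.5, lam=0.5, realign=0.03,
                         T=5.0, n_steps=5000, seed=0):
    """Euler-Maruyama sketch of a band-limited diffusion with Poisson
    realignment jumps.  The diffusion term sigma*sqrt((s-lower)*(upper-s))
    vanishes at both edges of the band, so volatility falls as the rate
    approaches a limit; the paper's exact specification may differ and all
    numbers here are placeholders."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.empty(n_steps + 1)
    s[0] = s0
    lo, hi = lower, upper
    for t in range(n_steps):
        drift = kappa * (0.5 * (lo + hi) - s[t])              # pull towards band centre
        diff = sigma * np.sqrt(max((s[t] - lo) * (hi - s[t]), 0.0))
        ds = drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
        if rng.random() < lam * dt:                           # Poisson parity realignment
            shift = realign * rng.choice([-1.0, 1.0])
            lo, hi = lo + shift, hi + shift                   # band is shifted
            ds += shift
        s[t + 1] = min(max(s[t] + ds, lo), hi)                # stay inside the (new) band
    return s

path = simulate_target_zone()
print(path.min(), path.max())
```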
892.
Within the framework of the proportional hazard model proposed in Cox (1972), Han and Hausman (1990) treat the logarithm of the integrated baseline hazard function as constant in each time period. We, however, propose an alternative semiparametric estimator of the parameters of the covariate part. The estimator is semiparametric in that no prespecified functional form is needed for the error terms (or a certain convolution of them). This estimator, proposed in Lewbel (2000) in another context, has at least four advantages: the distribution of the latent-variable error is unknown and may be related to the regressors; it takes censored observations into account; it allows for heterogeneity of unknown form; and it is quite easy to implement, since it does not require numerical searches. Using the Spanish Labour Force Survey, we empirically compare the results of estimating several alternative models, based primarily on the estimator proposed in Han and Hausman (1990) and on our semiparametric estimator.
893.
In contrast to a posterior analysis given a particular sampling model, posterior model probabilities in the context of model uncertainty are typically rather sensitive to the specification of the prior. In particular, ‘diffuse’ priors on model-specific parameters can lead to quite unexpected consequences. Here we focus on the practically relevant situation where we need to entertain a (large) number of sampling models and we have (or wish to use) little or no subjective prior information. We aim at providing an ‘automatic’ or ‘benchmark’ prior structure that can be used in such cases. We focus on the normal linear regression model with uncertainty in the choice of regressors. We propose a partly non-informative prior structure related to a natural conjugate g-prior specification, where the amount of subjective information requested from the user is limited to the choice of a single scalar hyperparameter g0j. The consequences of different choices for g0j are examined. We investigate theoretical properties, such as consistency of the implied Bayesian procedure. Links with classical information criteria are provided. More importantly, we examine the finite sample implications of several choices of g0j in a simulation study. The use of the MC3 algorithm of Madigan and York (Int. Stat. Rev. 63 (1995) 215), combined with efficient coding in Fortran, makes it feasible to conduct large simulations. In addition to posterior criteria, we shall also compare the predictive performance of different priors. A classic example concerning the economics of crime will also be provided and contrasted with results in the literature. The main findings of the paper will lead us to propose a ‘benchmark’ prior specification in a linear regression context with model uncertainty.
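To make the mechanics concrete, the sketch below enumerates all regressor subsets of a small design matrix and computes posterior model probabilities under a natural-conjugate g-prior, using the standard g-prior marginal-likelihood algebra. The default g0 = 1/max(n, K^2), the simulated data, and the full enumeration (instead of MC3 sampling) are assumptions for illustration, not a statement of the paper's exact recommendations.

```python
import itertools
import numpy as np

def log_marginal(y, X, cols, g0):
    """Log marginal likelihood (up to a model-independent constant) of a
    normal linear model with intercept, regressors X[:, cols], and a
    natural-conjugate g-prior with precision factor g0 on the slopes."""
    n, k = len(y), len(cols)
    yc = y - y.mean()
    sst = yc @ yc                                  # total sum of squares
    if k == 0:
        sse = sst                                  # intercept-only (null) model
    else:
        Z = np.column_stack([np.ones(n), X[:, list(cols)]])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        sse = resid @ resid                        # residual sum of squares
    shrink = g0 / (1.0 + g0)
    return 0.5 * k * np.log(shrink) - 0.5 * (n - 1) * np.log(shrink * sst + sse / (1.0 + g0))

def posterior_model_probs(y, X, g0=None):
    """Posterior probabilities over all 2^K regressor subsets under a
    uniform model prior (brute-force enumeration instead of MC3)."""
    n, K = X.shape
    if g0 is None:
        g0 = 1.0 / max(n, K ** 2)                  # an assumed 'benchmark' choice
    models = [c for r in range(K + 1) for c in itertools.combinations(range(K), r)]
    logml = np.array([log_marginal(y, X, c, g0) for c in models])
    w = np.exp(logml - logml.max())
    return models, w / w.sum()

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 4))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.standard_normal(100)
models, probs = posterior_model_probs(y, X)
print(models[int(probs.argmax())], round(float(probs.max()), 3))
```

On these simulated data the subset containing regressors 0 and 2 should receive most of the posterior mass.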
894.
In this paper we introduce a family of test statistics for testing symmetry based on φ-divergence families. These test statistics yield the likelihood ratio test and the Pearson test statistic as special cases. Asymptotic distributions for the new test statistics are derived under both the null and the alternative hypotheses. A simulation study shows that some of the new test statistics offer an attractive alternative to the classical Pearson test statistic for the problem of symmetry. Received: May 2000
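For intuition, the sketch below implements the Cressie–Read power-divergence subfamily applied to testing symmetry in a square contingency table: lam = 1 reproduces the Pearson statistic and the lam → 0 limit the likelihood-ratio statistic, the two special cases mentioned in the abstract. The assumption that "symmetry" means p_ij = p_ji in a square table, and the example counts, are illustrative; the paper's φ-divergence setting is more general.

```python
import numpy as np
from scipy.stats import chi2

def power_divergence_symmetry(counts, lam=1.0):
    """Power-divergence test of H0: p_ij = p_ji in a square contingency table."""
    n = np.asarray(counts, dtype=float)
    expected = 0.5 * (n + n.T)                  # MLE of cell means under symmetry
    off = ~np.eye(n.shape[0], dtype=bool)       # diagonal cells carry no information
    obs, exp = n[off], expected[off]
    keep = exp > 0
    obs, exp = obs[keep], exp[keep]
    if abs(lam) < 1e-10:                        # likelihood-ratio (G^2) limit
        pos = obs > 0
        stat = 2.0 * np.sum(obs[pos] * np.log(obs[pos] / exp[pos]))
    else:                                       # lam = 1 gives the Pearson statistic
        stat = 2.0 / (lam * (lam + 1.0)) * np.sum(obs * ((obs / exp) ** lam - 1.0))
    k = n.shape[0]
    df = k * (k - 1) // 2                       # one constraint per off-diagonal pair
    return stat, chi2.sf(stat, df)

table = np.array([[20, 12, 5],
                  [18,  9, 7],
                  [ 3, 10, 4]])
for lam in (1.0, 0.0, 2.0 / 3.0):
    print(lam, power_divergence_symmetry(table, lam))
```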
895.
896.
Given an increased emphasis on work teams in organizations, it is important to select applicants based on their ability to contribute to a given work team. This paper proposes that person–group fit should be useful for selecting applicants for work teams and suggests that effective use of person–group fit will create both more cohesive and more effectively functioning work units. It proposes ways to make valid and reliable assessments of person–group fit that could be used to minimize bias in the selection process. Finally, it addresses several implications of using the person–group fit paradigm for human resource management practice. © 2001 John Wiley & Sons, Inc.
897.
In this paper, an empirically stable money demand model for M3 in the euro area is constructed. Starting with a multivariate system, three cointegrating relationships with economic content are found: (i) the spread between the long-term and the short-term nominal interest rates, (ii) the long-term real interest rate, and (iii) a long-run demand for broad money M3. There is evidence that the determinants of M3 money demand are weakly exogenous with respect to the long-run parameters. Hence, following a general-to-specific modelling approach, a parsimonious conditional error-correction model for M3 money demand is derived which can be interpreted economically. For the conditional model, long-run and short-run parameter stability is extensively tested and not rejected. Copyright © 2001 John Wiley & Sons, Ltd.
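As a simplified, single-equation illustration of the error-correction mechanics the abstract describes, the following sketch runs an Engle–Granger-style two-step on simulated data: a long-run (cointegrating) money-demand regression followed by a conditional error-correction regression for money growth. The simulated series and the two-step shortcut are assumptions for illustration; the paper itself works with a multivariate cointegrated system and a general-to-specific reduction.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
# Simulated stand-ins for log real M3 (m), log real income (y) and an
# interest-rate spread (s); these are NOT the euro-area data used in the paper.
y = np.cumsum(rng.normal(0.005, 0.01, T))           # I(1) income
s = 0.02 + np.cumsum(rng.normal(0.0, 0.001, T))     # I(1) spread
u = np.zeros(T)
for t in range(1, T):                               # stationary deviation from equilibrium
    u[t] = 0.7 * u[t - 1] + rng.normal(0.0, 0.005)
m = 1.4 * y - 0.8 * s + u                           # long-run money demand relation

# Step 1: long-run (cointegrating) regression  m_t = a + b*y_t + c*s_t + e_t
Z = np.column_stack([np.ones(T), y, s])
b_lr, *_ = np.linalg.lstsq(Z, m, rcond=None)
ec = m - Z @ b_lr                                   # error-correction term

# Step 2: conditional ECM  dm_t = const + alpha*ec_{t-1} + g1*dy_t + g2*ds_t + eps_t
dm, dy, ds = np.diff(m), np.diff(y), np.diff(s)
W = np.column_stack([np.ones(T - 1), ec[:-1], dy, ds])
b_ecm, *_ = np.linalg.lstsq(W, dm, rcond=None)
print("long-run coefficients (const, income, spread):", b_lr.round(3))
print("loading on the error-correction term (should be negative):", round(b_ecm[1], 3))
```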
898.
This article attempts to bring coherence to the diversity that characterizes organizational learning research. It argues that organizational learning is embedded in four schools of thought: an economic school, a managerial school, a developmental school, and a process school. The article provides a comprehensive analysis of the schools, describes how they differ from each other, and outlines how each of them can be employed effectively. To demonstrate the benefits of theoretical plurality, the four schools are applied to the key marketing topics of market orientation and new product development. Implications for future research in marketing are provided. Simon J. Bell is a lecturer in marketing in the Faculty of Economics and Commerce at the University of Melbourne. Gregory J. Whitwell is an associate professor of marketing in the Faculty of Economics and Commerce at the University of Melbourne. Bryan A. Lukas is an associate professor of marketing and director of the Master of Applied Commerce Program in the Faculty of Economics and Commerce at the University of Melbourne.
899.
Random urinalysis strategies stratified by time since the last test are characterized by a set of Markov chain models. The probability of a person being tested depends on the amount of time since the person's last test. The Nuclear Regulatory Commission (NRC) has proposed a two-strata drug testing strategy based on time since last test: a high testing rate for people not yet tested in a given time period and a low testing rate for people who have already tested negative in that period. Southern California Edison has implemented a variation of the NRC proposal. These strategies can be modeled within a Markov chain framework. Time to detection is calculated as a function of testing probabilities and drug usage levels. Drug user gaming strategies are discussed with illustrations. These models are implemented as part of a U.S. Navy drug policy analysis system.
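The sketch below is a small Monte Carlo stand-in for the kind of calculation the abstract describes: a two-strata random testing policy in which the monthly selection probability depends on whether the person has already tested negative in the current period, with time to detection averaged over simulated histories. All probabilities and the period length are illustrative placeholders, not the NRC's or Edison's actual figures, and the paper's own treatment is analytic (Markov chain) rather than simulation-based.

```python
import numpy as np

def mean_months_to_detection(p_untested=0.5, p_retest=0.10,
                             p_detect_if_tested=0.3,
                             period=12, n_sims=20000, seed=0):
    """Average time (in months) until a continuing drug user is detected under
    a two-strata policy: within each `period`-month cycle, someone not yet
    tested that cycle is selected with probability p_untested per month, while
    someone who has already tested negative is selected with probability
    p_retest.  p_detect_if_tested is the chance a test on a user comes back
    positive.  All rates are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_sims):
        month, tested_this_cycle = 0, False
        while True:
            month += 1
            if (month - 1) % period == 0:        # a new cycle starts: stratum resets
                tested_this_cycle = False
            p = p_retest if tested_this_cycle else p_untested
            if rng.random() < p:                 # selected for a random test this month
                if rng.random() < p_detect_if_tested:
                    total += month               # detected
                    break
                tested_this_cycle = True         # negative result -> low-rate stratum
    return total / n_sims

print(round(mean_months_to_detection(), 2), "months to detection on average")
```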
900.