2,976 search results (query time: 0 ms)
1.
The Local Whittle Estimator of Long-Memory Stochastic Volatility (total citations: 1; self-citations: 0; other citations: 1)
We propose a new semiparametric estimator of the degree of persistence in volatility for long-memory stochastic volatility (LMSV) models. The estimator uses the periodogram of the log squared returns in a local Whittle criterion which explicitly accounts for the noise term in the LMSV model. Finite-sample and asymptotic standard errors for the estimator are provided. An extensive simulation study reveals that the local Whittle estimator is much less biased and that the finite-sample standard errors yield more accurate confidence intervals than the widely used GPH estimator. The estimator is also found to be robust against possible leverage effects. In an empirical analysis of the daily Deutsche Mark/US Dollar exchange rate, the new estimator indicates stronger persistence in volatility than the GPH estimator, provided that a large number of frequencies is used.
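As a rough illustration, the standard local Whittle criterion can be sketched in Python (function name and bandwidth choice are ours; the paper's estimator additionally includes an explicit noise-term correction, which is omitted here):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def local_whittle(x, m):
    """Standard local Whittle estimate of the memory parameter d,
    using the periodogram at the first m Fourier frequencies."""
    n = len(x)
    lam = 2 * np.pi * np.arange(1, m + 1) / n       # Fourier frequencies
    fft = np.fft.fft(x - np.mean(x))
    I = (np.abs(fft[1:m + 1]) ** 2) / (2 * np.pi * n)  # periodogram

    def objective(d):
        # concentrated local Whittle objective R(d)
        g = np.mean(lam ** (2 * d) * I)
        return np.log(g) - 2 * d * np.mean(np.log(lam))

    res = minimize_scalar(objective, bounds=(-0.49, 0.99), method="bounded")
    return res.x

# Illustration: log squared "returns" of serially independent noise
# have memory parameter d = 0, so the estimate should be near zero.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
d_hat = local_whittle(np.log(x ** 2 + 1e-12), m=200)
```

In the LMSV setting one would apply this to the log squared returns, as above; the noise-corrected criterion of the paper modifies `objective` to account for the additive measurement-error term.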
2.
Price variations at speculative markets exhibit positive autocorrelation and cross-correlation. Due to the large parameter spaces necessary for joint modeling of variances and covariances, multivariate parametric volatility models easily become intractable in practice. We propose an adaptive procedure that identifies periods of second-order homogeneity for each moment in time. To overcome the high dimensionality of the problem we transform the multivariate series into a set of univariate processes. We discuss thoroughly the implementation of the adaptive technique. Theoretical and Monte Carlo results are given. We provide two applications of the new method. For a bivariate exchange rate series we compare the multivariate GARCH approach with our method and find the latter to be more in line with the underlying assumption of independently distributed innovations. Analyzing a 23-dimensional vector of asset returns we underscore the case for adaptive modeling in high-dimensional systems.
3.
Abstract. Economists devote considerable energies towards refining their econometric techniques to overcome difficulties connected with conducting empirical research. Despite advances in technique, it is not clear whether further refinement in this direction is worthwhile for policy purposes. It may be that no further amount of statistical adjustment of inadequate data will increase understanding, and that better data is simply necessary to add to our knowledge. But rarely is sufficient credit paid to new forms of data. In short, econometric technique is emphasized to the neglect of data innovation, as if new data were merely lying about waiting for an ingenious suggestion for use. This paper surveys advances of the last twenty-five years in estimating labour supply for policy purposes, with a view towards appreciating the relative contributions of improvements in econometric technique and developments of new data.
After briefly detailing the key parameters which economists have sought to estimate, we describe the early 'first generation' research (circa 1970), which is plagued by problems of unobservable variables, measurement errors, truncation and selectivity bias, and non-linear budget constraints. 'Second generation' research constitutes attempts to resolve one or more of these difficulties, and the respective contributions of econometric technique and new data are acknowledged and assessed, including the contribution of data generated by large-scale social experiments in which participants are randomly assigned to different guaranteed income plans and their labour supply behaviour measured.
4.
Three tests for the skewness of an unknown distribution are derived for iid data. They are based on suitable normalization of estimators of some usual skewness coefficients. Their asymptotic null distributions are derived. The tests are then shown to be consistent, and their power under some sequences of local alternatives is investigated. Their finite-sample properties are also studied through a simulation experiment and compared to those of the √b₂-test.
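The paper's exact statistics are not reproduced here, but a test in this general spirit — a studentized skewness coefficient whose asymptotic variance is estimated from the data, so normality is not assumed — can be sketched as follows (function name and the influence-function variance estimate are our choices):

```python
import numpy as np
from scipy.stats import norm

def skewness_test(x):
    """Test H0: skewness = 0 via the standardized third central moment.
    The asymptotic variance of sqrt(n)*g1 under H0 is estimated from an
    influence-function approximation, so normality is not required."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    m2 = np.mean(xc ** 2)
    m3 = np.mean(xc ** 3)
    g1 = m3 / m2 ** 1.5                      # sample skewness coefficient
    # influence function of g1 under H0 (m3 = 0): (xc^3 - 3*m2*xc)/m2^1.5;
    # its second moment estimates the asymptotic variance (6 under normality)
    u = (xc ** 3 - 3 * m2 * xc) / m2 ** 1.5
    v = np.mean(u ** 2)
    z = np.sqrt(n) * g1 / np.sqrt(v)
    pval = 2 * norm.sf(abs(z))               # two-sided asymptotic p-value
    return z, pval

rng = np.random.default_rng(1)
z_norm, p_norm = skewness_test(rng.standard_normal(2000))   # symmetric null
z_exp, p_exp = skewness_test(rng.exponential(size=2000))    # skewness = 2
```

For the exponential sample the statistic should be large and the null clearly rejected; for the normal sample the p-value is uniform under the null.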
5.
The truncated Poisson regression model is used to arrive at point and interval estimates of the size of two offender populations, i.e. drunk drivers and persons who illegally possess firearms. The dependent capture–recapture variables are constructed from Dutch police records and are counts of individual arrests for both violations. The population size estimates are derived assuming that each count is a realization of a Poisson distribution, and that the Poisson parameters are related to covariates through the truncated Poisson regression model. These assumptions are discussed in detail, and the tenability of the second assumption is assessed by evaluating the marginal residuals and performing tests for overdispersion. For the firearms example, the second assumption seems to hold well, but for the drunk drivers example there is some overdispersion. It is concluded that the method is useful, provided it is used with care.
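A homogeneous (no-covariate) simplification of this idea is easy to sketch: fit a zero-truncated Poisson to the arrest counts of the observed individuals, then scale the observed sample up by the estimated probability of being observed at least once (a Horvitz–Thompson-style estimate). The regression model in the text replaces the single λ with a covariate-dependent rate; names below are ours:

```python
import math

def truncated_poisson_N(counts, tol=1e-10):
    """Population-size estimate from zero-truncated Poisson counts
    (one arrest count per *observed* individual), assuming a common
    Poisson rate lambda across individuals."""
    n = len(counts)
    xbar = sum(counts) / n
    # the truncated-Poisson MLE solves xbar = lam / (1 - exp(-lam));
    # solve by fixed-point iteration lam <- xbar * (1 - exp(-lam))
    lam = xbar
    for _ in range(200):
        new = xbar * (1 - math.exp(-lam))
        if abs(new - lam) < tol:
            lam = new
            break
        lam = new
    # each observed unit represents 1 / P(at least one arrest) individuals
    N_hat = n / (1 - math.exp(-lam))
    return N_hat, lam

# toy counts with mean 2: lam solves lam/(1-exp(-lam)) = 2, i.e. lam ~ 1.594
N_hat, lam = truncated_poisson_N([1, 1, 2, 2, 3, 3])
```

The fixed-point map is a contraction for positive λ, so the iteration converges; with covariates one would instead maximize the truncated-Poisson regression likelihood.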
6.
We extend the concept of piecewise linear histogram introduced recently by Beirlant, Berlinet and Györfi. The disadvantage of that histogram is that in many models it takes on negative values with probability close to 1. We show that for a wide set of models, the extended class of estimates contains a bona fide density with probability tending to 1 as the sample size n increases to infinity. The mean integrated absolute error in the extended class of estimators decreases at the same rate n^(-2/5) as in the original narrower class.
7.
Hypothesis tests using data envelopment analysis (total citations: 5; self-citations: 4; other citations: 1)
A substantial body of recent work has opened the way to exploring the statistical properties of DEA estimators of production frontiers and related efficiency measures. The purpose of this paper is to survey several possibilities that have been pursued, and to present them in a unified framework. These include the development of statistics to test hypotheses about the characteristics of the production frontier, such as returns to scale, input substitutability, and model specification, and also about variation in efficiencies relative to the production frontier.
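For concreteness, the basic input-oriented CCR (constant returns to scale) efficiency score — one of the building blocks such test statistics are computed from — amounts to one small linear program per decision-making unit. This is the generic textbook envelopment form, not the paper's test statistics; names are ours:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR DEA efficiency scores (envelopment form).
    X: m x n array of inputs, Y: s x n array of outputs, n units."""
    X, Y = np.atleast_2d(np.asarray(X, float)), np.atleast_2d(np.asarray(Y, float))
    m, n = X.shape
    s = Y.shape[0]
    scores = []
    for o in range(n):
        # variables: [theta, lambda_1 .. lambda_n]; minimize theta
        c = np.r_[1.0, np.zeros(n)]
        # inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
        A_in = np.hstack([-X[:, [o]], X])
        # outputs: -sum_j lambda_j * y_rj <= -y_ro
        A_out = np.hstack([np.zeros((s, 1)), -Y])
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[:, o]],
                      bounds=[(None, None)] + [(0, None)] * n)
        scores.append(res.fun)
    return np.array(scores)

# one input, one output; the third unit uses twice the input per output
scores = dea_ccr_input([2, 4, 8], [2, 4, 4])
```

Under constant returns to scale the first two units lie on the frontier (score 1) and the third is half-efficient (score 0.5); bootstrap-based hypothesis tests resample from such scores.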
8.
We test the robustness of the APT to two alternative estimation procedures: the Fama and MacBeth (1973) two-step methodology, and the one-step procedure due to Burmeister and McElroy (1988). We find that the APT is indeed sensitive to the chosen estimator and assumptions about the factor structure of stock returns. We believe that our findings have implications for the estimation of asset pricing models in general.
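The first of the two procedures, the Fama and MacBeth (1973) two-step estimator, can be sketched as follows (a generic version, not the paper's exact specification; the simulated data are purely illustrative):

```python
import numpy as np

def fama_macbeth(returns, factors):
    """Two-step Fama-MacBeth procedure.
    Step 1: time-series regression of each asset on the factors -> betas.
    Step 2: cross-sectional regression of returns on betas, each period.
    Risk premia are the time averages of the period-by-period slopes."""
    T, N = returns.shape
    X = np.column_stack([np.ones(T), factors])              # T x (K+1)
    betas = np.linalg.lstsq(X, returns, rcond=None)[0][1:]  # K x N loadings
    Z = np.column_stack([np.ones(N), betas.T])              # N x (K+1)
    lambdas = np.array([np.linalg.lstsq(Z, returns[t], rcond=None)[0]
                        for t in range(T)])                 # T x (K+1)
    prem = lambdas.mean(axis=0)                             # [const, premia]
    se = lambdas.std(axis=0, ddof=1) / np.sqrt(T)           # Fama-MacBeth SEs
    return prem, se

# illustrative one-factor economy with known betas and no mispricing
rng = np.random.default_rng(0)
T, N = 500, 20
beta = np.linspace(0.5, 1.5, N)
f = 0.05 + 0.1 * rng.standard_normal(T)
R = np.outer(f, beta) + 0.01 * rng.standard_normal((T, N))
prem, se = fama_macbeth(R, f)
```

With no pricing errors the estimated premium should track the factor's sample mean and the cross-sectional intercept should be near zero; the one-step Burmeister-McElroy approach instead estimates betas and premia jointly.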
9.
An extensive collection of continuous-time models of the short-term interest rate is evaluated over data sets that have appeared previously in the literature. The analysis, which uses the simulated maximum likelihood procedure proposed by Durham and Gallant (2002), provides new insights regarding several previously unresolved questions. For single factor models, I find that the volatility, not the drift, is the critical component in model specification. Allowing for additional flexibility beyond a constant term in the drift provides negligible benefit. While constant drift would appear to imply that the short rate is nonstationary, in fact, stationarity is volatility-induced. The simple constant elasticity of volatility model fits weekly observations of the three-month Treasury bill rate remarkably well but is easily rejected when compared with more flexible volatility specifications over daily data. The methodology of Durham and Gallant can also be used to estimate stochastic volatility models. While adding the latent volatility component provides a large improvement in the likelihood for the physical process, it does little to improve bond-pricing performance.
10.
Progressive stress accelerated life tests under finite mixture models (total citations: 1; self-citations: 0; other citations: 1)
In this paper, progressive stress accelerated life tests are considered when the lifetime of a product under use conditions follows a finite mixture of distributions. The experiment is performed when each of the components in the mixture follows a general class of distributions which includes, among others, the Weibull, compound Weibull, power function, Gompertz and compound Gompertz distributions. It is assumed that the scale parameter of each component satisfies the inverse power law, the progressive stress is directly proportional to time, and the cumulative exposure model for the effect of changing stress holds. Based on type-I censoring, the maximum likelihood estimates (MLEs) of the parameters under consideration are obtained. Special attention is paid to a mixture of two Rayleigh components. Simulations are carried out to study the precision of the MLEs and to obtain confidence intervals for the parameters involved.
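For the special case highlighted in the text — a two-component Rayleigh mixture under type-I censoring at a fixed time tau — a direct maximum-likelihood fit can be sketched as follows (no accelerating stress; this simplified likelihood, the parameterization, and all names are ours):

```python
import numpy as np
from scipy.optimize import minimize

def rayleigh_mix_mle(t, censored, tau):
    """Type-I censored MLE for a two-component Rayleigh mixture.
    Observed failures contribute the mixture density; units still
    alive at tau contribute the mixture survival function at tau."""
    t = np.asarray(t, float)
    censored = np.asarray(censored, bool)

    def pdf(x, s):   # Rayleigh density with scale s
        return (x / s ** 2) * np.exp(-x ** 2 / (2 * s ** 2))

    def sf(x, s):    # Rayleigh survival function
        return np.exp(-x ** 2 / (2 * s ** 2))

    def nll(theta):
        p = 1 / (1 + np.exp(-theta[0]))              # mixing weight in (0,1)
        s1, s2 = np.exp(theta[1]), np.exp(theta[2])  # positive scales
        obs = ~censored
        ll = np.sum(np.log(p * pdf(t[obs], s1) + (1 - p) * pdf(t[obs], s2)))
        ll += np.sum(censored) * np.log(p * sf(tau, s1) + (1 - p) * sf(tau, s2))
        return -ll

    res = minimize(nll, x0=[0.0, 0.0, 1.0], method="Nelder-Mead")
    return 1 / (1 + np.exp(-res.x[0])), np.exp(res.x[1]), np.exp(res.x[2])

# simulate a 50/50 mixture of Rayleigh(1) and Rayleigh(3), censored at 6
rng = np.random.default_rng(2)
n = 4000
sig = np.where(rng.random(n) < 0.5, 1.0, 3.0)
t = sig * np.sqrt(-2 * np.log(rng.random(n) + 1e-300))  # inverse-CDF sampling
tau = 6.0
cens = t > tau
p_hat, s1_hat, s2_hat = rayleigh_mix_mle(np.minimum(t, tau), cens, tau)
```

The progressive-stress version in the paper further links the scale parameters to the stress level via the inverse power law and the cumulative exposure model.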