1.
The Local Whittle Estimator of Long-Memory Stochastic Volatility   (Cited by 1: 0 self-citations, 1 other citation)
We propose a new semiparametric estimator of the degree of persistence in volatility for long memory stochastic volatility (LMSV) models. The estimator uses the periodogram of the log squared returns in a local Whittle criterion which explicitly accounts for the noise term in the LMSV model. Finite-sample and asymptotic standard errors for the estimator are provided. An extensive simulation study reveals that the local Whittle estimator is much less biased and that the finite-sample standard errors yield more accurate confidence intervals than the widely used GPH estimator. The estimator is also found to be robust against possible leverage effects. In an empirical analysis of the daily Deutsche Mark/US Dollar exchange rate, the new estimator indicates stronger persistence in volatility than the GPH estimator, provided that a large number of frequencies is used.
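The abstract does not spell out the exact criterion. As a rough, hedged illustration of a local Whittle objective with an additive flat noise term (the parameterization g_j = lambda_j^(-2d) + theta and the bandwidth m are assumptions for this sketch, not taken from the paper), one might write in Python:

```python
import numpy as np
from scipy.optimize import minimize

def periodogram(x):
    """Periodogram I(lambda_j) at the Fourier frequencies lambda_j = 2*pi*j/n."""
    n = len(x)
    dft = np.fft.fft(x - x.mean())
    I = np.abs(dft) ** 2 / (2 * np.pi * n)
    lam = 2 * np.pi * np.arange(n) / n
    return lam[1:n // 2 + 1], I[1:n // 2 + 1]

def local_whittle_with_noise(log_sq_returns, m):
    """Estimate the memory parameter d from log squared returns.

    The low-frequency spectrum is modelled as G * (lam**(-2*d) + theta):
    a long-memory signal plus an additive flat noise term; the scale G is
    profiled out of the Whittle criterion. m is the bandwidth, i.e. the
    number of Fourier frequencies used.
    """
    lam, I = periodogram(log_sq_returns)
    lam, I = lam[:m], I[:m]

    def objective(params):
        d, log_theta = params
        g = lam ** (-2.0 * d) + np.exp(log_theta)  # spectral shape near zero
        G_hat = np.mean(I / g)                     # profiled scale parameter
        return np.log(G_hat) + np.mean(np.log(g))

    res = minimize(objective, x0=np.array([0.3, 0.0]), method="Nelder-Mead")
    return res.x[0]  # estimated memory parameter d
```

In use, the input would be the series of log squared returns and m a bandwidth growing slowly with the sample size.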
2.
Price variations at speculative markets exhibit positive autocorrelation and cross correlation. Due to the large parameter spaces necessary for joint modeling of variances and covariances, multivariate parametric volatility models easily become intractable in practice. We propose an adaptive procedure that identifies periods of second-order homogeneity for each moment in time. To overcome the high dimensionality of the problem we transform the multivariate series into a set of univariate processes. We discuss thoroughly the implementation of the adaptive technique. Theoretical and Monte Carlo results are given. We provide two applications of the new method. For a bivariate exchange rate series we compare the multivariate GARCH approach with our method and find the latter to be more in line with the underlying assumption of independently distributed innovations. Analyzing a 23-dimensional vector of asset returns we underscore the case for adaptive modeling in high-dimensional systems.
3.
We consider the problem of estimating a probability density function based on data that are corrupted by noise from a uniform distribution. The (nonparametric) maximum likelihood estimator for the corresponding distribution function is well defined. For the density function this is not the case. We study two nonparametric estimators for this density. The first is a type of kernel density estimate based on the empirical distribution function of the observable data. The second is a kernel density estimate based on the MLE of the distribution function of the unobservable (uncorrupted) data.
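To make the setting concrete, suppose the corruption is additive with Uniform(0,1) noise (the exact support is an assumption for this illustration): with Y = X + U, the observable density g and the target distribution function F are related by

```latex
g(y) \;=\; \int_{0}^{1} f(y-u)\,du \;=\; F(y) - F(y-1).
```

The two estimators then differ in what is smoothed: a kernel applied to the empirical distribution of the observed data, or a kernel applied to the NPMLE of the distribution of the unobserved data.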
4.
The Development of the Stochastic Production Frontier Approach and Its Application in China   (Cited by 8: 0 self-citations, 0 other citations)
This paper reviews the development of the stochastic frontier production function model and its application to productivity analysis in China. It first introduces the basic principles of the stochastic frontier approach, its estimation methods, and the decomposition of total factor productivity growth under panel data. It then reviews the latest advances in stochastic frontier production function models and their strengths and roles in empirical analysis. Finally, it summarizes the achievements and shortcomings of stochastic frontier methods in research on industry-level and regional economic growth in China, and discusses directions for future research.
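For readers unfamiliar with the approach, a textbook panel stochastic frontier specification (a generic form, not necessarily the exact models covered in the review) is

```latex
\ln y_{it} \;=\; \beta_0 + \sum_{k}\beta_k \ln x_{kit} + v_{it} - u_{it},
\qquad v_{it}\sim N(0,\sigma_v^{2}),\quad u_{it}\ge 0,
```

where v_{it} is symmetric noise, u_{it} is one-sided inefficiency, and technical efficiency is TE_{it} = exp(-u_{it}); with panel data, total factor productivity growth can then be decomposed into technical change, efficiency change, and scale components.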
5.
Abstract. Economists devote considerable energies towards refining their econometric techniques to overcome difficulties connected with conducting empirical research. Despite advances in technique, it is not clear whether further refinement in this direction is worthwhile for policy purposes. It may be that no further amount of statistical adjustment of inadequate data will increase understanding, and that better data are simply necessary to add to our knowledge. But rarely is sufficient credit paid to new forms of data. In short, econometric technique is emphasized to the neglect of data innovation, as if new data were merely lying about waiting for an ingenious suggestion for use. This paper surveys advances of the last twenty-five years in estimating labour supply for policy purposes, with a view towards appreciating the relative contributions of improvements in econometric technique and the development of new data.
After briefly detailing the key parameters which economists have sought to estimate, we describe the early 'first generation' research (circa 1970), which is plagued by problems of unobservable variables, measurement errors, truncation and selectivity bias, and nonlinear budget constraints. 'Second generation' research constitutes attempts to resolve one or more of these difficulties, and the respective contributions of econometric technique and new data are acknowledged and assessed, including the contribution of data generated by large-scale social experiments in which participants are randomly assigned to different guaranteed income plans and their labour supply behaviour measured.
6.
Three tests for the skewness of an unknown distribution are derived for i.i.d. data. They are based on suitable normalization of estimators of some usual skewness coefficients. Their asymptotic null distributions are derived. The tests are next shown to be consistent and their power under some sequences of local alternatives is investigated. Their finite-sample properties are also studied through a simulation experiment, and compared to those of the √b2-test.
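As one concrete example of such a normalization (the classical moment-based coefficient under a normal null, not necessarily one of the exact statistics of the paper):

```latex
\sqrt{b_1} \;=\; \frac{m_3}{m_2^{3/2}},\qquad
m_k=\frac{1}{n}\sum_{i=1}^{n}\left(X_i-\bar X\right)^{k},\qquad
\sqrt{\frac{n}{6}}\,\sqrt{b_1}\;\xrightarrow{d}\;N(0,1)\ \text{under normality}.
```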
7.
The truncated Poisson regression model is used to arrive at point and interval estimates of the size of two offender populations, i.e. drunk drivers and persons who illegally possess firearms. The dependent capture–recapture variables are constructed from Dutch police records and are counts of individual arrests for both violations. The population size estimates are derived assuming that each count is a realization of a Poisson distribution, and that the Poisson parameters are related to covariates through the truncated Poisson regression model. These assumptions are discussed in detail, and the tenability of the second assumption is assessed by evaluating the marginal residuals and performing tests on overdispersion. For the firearms example, the second assumption seems to hold well, but for the drunk drivers example there is some overdispersion. It is concluded that the method is useful, provided it is used with care.
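A minimal sketch of the estimation idea, assuming the standard zero-truncated Poisson likelihood and a Horvitz–Thompson-type population size correction (function and variable names are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def fit_truncated_poisson(X, y):
    """Zero-truncated Poisson regression: log lambda_i = x_i' beta.

    X is the (n_obs, n_covariates) design matrix (including an intercept
    column); y holds the observed arrest counts, all >= 1.
    """
    def neg_loglik(beta):
        lam = np.exp(X @ beta)
        ll = (-lam + y * np.log(lam) - gammaln(y + 1)
              - np.log1p(-np.exp(-lam)))      # truncation correction P(Y >= 1)
        return -ll.sum()

    beta0 = np.zeros(X.shape[1])
    return minimize(neg_loglik, beta0, method="BFGS").x

def population_size(X, beta_hat):
    """Horvitz-Thompson-type estimate of the total population size:
    sum of inverse detection probabilities over the observed cases."""
    lam = np.exp(X @ beta_hat)
    p_observed = 1.0 - np.exp(-lam)           # P(at least one arrest)
    return np.sum(1.0 / p_observed)
```

The tenability of the Poisson assumption would then be checked via marginal residuals and overdispersion tests, as the abstract describes.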
8.
Making quantified statements about the uncertainty associated with the lifelength of an item is one of the most fundamental tasks of reliability assessment. Most practitioners routinely do this using one of the several available statistical techniques. The purpose of this paper is two-fold. The first is to give the user an overview of the key tenets of two of the most commonly used parametric approaches. The second is to point out that these commonly used approaches involve strategies that are either ad hoc, or are in violation of some of the underlying tenets. A method that is devoid of logical flaws can be proposed, but this method is difficult to implement. The user must therefore resign themselves to using the technique against which the fewest objections can be hurled.
9.
We extend the concept of piecewise linear histogram introduced recently by Beirlant, Berlinet and Györfi. The disadvantage of that histogram is that in many models it takes on negative values with probability close to 1. We show that for a wide set of models, the extended class of estimates contains a bona fide density with probability tending to 1 as the sample size n increases to infinity. The mean integrated absolute error in the extended class of estimators decreases with the same rate n^{-2/5} as in the original narrower class.
10.
Hypothesis tests using data envelopment analysis   (Cited by 5: 4 self-citations, 1 other citation)
A substantial body of recent work has opened the way to exploring the statistical properties of DEA estimators of production frontiers and related efficiency measures. The purpose of this paper is to survey several possibilities that have been pursued, and to present them in a unified framework. These include the development of statistics to test hypotheses about the characteristics of the production frontier, such as returns to scale, input substitutability, and model specification, and also about variation in efficiencies relative to the production frontier.
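For background, the DEA efficiency scores that such tests build on are themselves solutions of linear programs. A minimal sketch of the input-oriented, constant-returns-to-scale (CCR) score in Python using scipy (standard DEA, not the paper's test statistics):

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of decision-making unit o.

    X: (n_dmu, n_inputs) input matrix, Y: (n_dmu, n_outputs) output matrix.
    Returns theta in (0, 1]; theta == 1 means unit o lies on the frontier.
    """
    n, m = X.shape
    s = Y.shape[1]
    # Decision vector z = [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_inputs = np.hstack([-X[o].reshape(-1, 1), X.T])
    # Outputs: -sum_j lambda_j * y_rj <= -y_ro
    A_outputs = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(
        c,
        A_ub=np.vstack([A_inputs, A_outputs]),
        b_ub=np.r_[np.zeros(m), -Y[o]],
        bounds=[(None, None)] + [(0, None)] * n,
        method="highs",
    )
    return res.fun
```

Test statistics for, e.g., returns to scale or model specification then compare efficiency scores computed under competing frontier assumptions, typically against bootstrap reference distributions.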