1776 query results found; search took 15 ms.
1.
Abstract. Economists devote considerable energy to refining their econometric techniques to overcome difficulties in conducting empirical research. Despite advances in technique, it is not clear whether further refinement in this direction is worthwhile for policy purposes. It may be that no amount of further statistical adjustment of inadequate data will increase understanding, and that better data are simply necessary to add to our knowledge. Yet new forms of data rarely receive sufficient credit. In short, econometric technique is emphasized to the neglect of data innovation, as if new data were merely lying about waiting for an ingenious suggestion for use. This paper surveys advances of the last twenty-five years in estimating labour supply for policy purposes, with a view towards appreciating the relative contributions of both improvements in econometric technique and the development of new data.
After briefly detailing the key parameters which economists have sought to estimate, we describe the early 'first generation' research (circa 1970), which is plagued by problems of unobservable variables, measurement error, truncation and selectivity bias, and non-linear budget constraints. 'Second generation' research constitutes attempts to resolve one or more of these difficulties, and the respective contributions of econometric technique and new data are acknowledged and assessed, including the contribution of data generated by large-scale social experiments in which participants are randomly assigned to different guaranteed income plans and their labour supply behaviour measured.
2.
Three tests for the skewness of an unknown distribution are derived for i.i.d. data. They are based on suitable normalizations of estimators of some usual skewness coefficients. Their asymptotic null distributions are derived. The tests are then shown to be consistent, and their power under some sequences of local alternatives is investigated. Their finite-sample properties are also studied through a simulation experiment and compared to those of the √b₂-test.
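The normalization idea in this abstract can be illustrated with the textbook case (a sketch only, not one of the paper's three tests): under normality, the usual sample skewness coefficient is asymptotically N(0, 6/n), so its normalized version can be compared with standard normal quantiles.

```python
import math
import random

def skewness_test(xs):
    """Classic asymptotic test of zero skewness: under normality the
    sample skewness coefficient g1 is approximately N(0, 6/n), so
    z = g1 * sqrt(n / 6) is referred to standard normal quantiles."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    g1 = m3 / m2 ** 1.5            # usual sample skewness coefficient
    z = g1 * math.sqrt(n / 6.0)    # normalized test statistic
    return g1, z

# Illustrative samples: a symmetric one and a right-skewed one.
rng = random.Random(0)
symmetric = [rng.gauss(0.0, 1.0) for _ in range(2000)]
skewed = [rng.expovariate(1.0) for _ in range(2000)]
```

For the symmetric sample the statistic stays small; for the exponential sample (population skewness 2) it is far out in the tail, so the null of symmetry is rejected.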
3.
The truncated Poisson regression model is used to arrive at point and interval estimates of the size of two offender populations: drunk drivers and persons who illegally possess firearms. The dependent capture–recapture variables are constructed from Dutch police records and are counts of individual arrests for both violations. The population size estimates are derived assuming that each count is a realization of a Poisson distribution, and that the Poisson parameters are related to covariates through the truncated Poisson regression model. These assumptions are discussed in detail, and the tenability of the second assumption is assessed by evaluating the marginal residuals and performing tests for overdispersion. For the firearms example, the second assumption seems to hold well, but for the drunk drivers example there is some overdispersion. It is concluded that the method is useful, provided it is used with care.
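The core inflation step can be sketched for the covariate-free special case (the paper's model relates the Poisson rates to covariates, which is not reproduced here): fit a zero-truncated Poisson to the observed arrest counts, then divide the number of observed individuals by the estimated probability of being seen at least once.

```python
import math

def truncated_poisson_population(counts):
    """Population size estimate under a zero-truncated Poisson model
    without covariates.  The MLE of lam solves
    lam / (1 - exp(-lam)) = mean of the observed (>= 1) counts; the
    observed population is then inflated by the detection probability."""
    n_obs = len(counts)
    ybar = sum(counts) / n_obs       # must exceed 1 for truncated counts
    lo, hi = 1e-9, ybar              # truncated mean is increasing in lam
    for _ in range(200):             # bisection for the MLE of lam
        lam = (lo + hi) / 2.0
        if lam / (1.0 - math.exp(-lam)) < ybar:
            lo = lam
        else:
            hi = lam
    p_seen = 1.0 - math.exp(-lam)    # P(at least one arrest)
    return n_obs / p_seen

# Hypothetical arrest counts for the individuals appearing in the records.
arrests = [1, 1, 1, 2, 1, 1, 3, 1, 2, 1]
n_hat = truncated_poisson_population(arrests)
```

Here ten observed offenders with mean count 1.4 yield an estimated detection probability of about 0.51, so the estimated total population is roughly twice the observed one.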
4.
We extend the concept of the piecewise linear histogram introduced recently by Beirlant, Berlinet and Györfi. The disadvantage of that histogram is that in many models it takes on negative values with probability close to 1. We show that for a wide set of models, the extended class of estimates contains a bona fide density with probability tending to 1 as the sample size n increases to infinity. The mean integrated absolute error in the extended class of estimators decreases at the same rate n^(−2/5) as in the original narrower class.
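For context, the sketch below implements a frequency polygon, a simple piecewise linear density estimate built by interpolating histogram heights at bin midpoints. It is only a plain relative of the Beirlant–Berlinet–Györfi estimator (their construction, and the extension studied here, choose the linear pieces differently and can go negative); this naive variant is nonnegative by construction.

```python
import random

def frequency_polygon(data, n_bins, a, b):
    """Frequency polygon on [a, b): linear interpolation of histogram
    bin heights at bin midpoints.  Returns a density function that is
    zero outside the range of the midpoints (crude boundary handling)."""
    n = len(data)
    width = (b - a) / n_bins
    counts = [0] * n_bins
    for x in data:
        if a <= x < b:
            counts[min(int((x - a) / width), n_bins - 1)] += 1
    heights = [c / (n * width) for c in counts]
    mids = [a + (k + 0.5) * width for k in range(n_bins)]

    def density(x):
        if x < mids[0] or x > mids[-1]:
            return 0.0
        for k in range(n_bins - 1):
            if mids[k] <= x <= mids[k + 1]:
                t = (x - mids[k]) / width
                return (1 - t) * heights[k] + t * heights[k + 1]
        return heights[-1]

    return density

# Illustrative check against a Uniform(0, 1) sample, true density 1 on (0, 1).
rng = random.Random(7)
sample = [rng.random() for _ in range(10000)]
density = frequency_polygon(sample, 10, 0.0, 1.0)
```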
5.
Hypothesis tests using data envelopment analysis
A substantial body of recent work has opened the way to exploring the statistical properties of DEA estimators of production frontiers and related efficiency measures. The purpose of this paper is to survey several possibilities that have been pursued, and to present them in a unified framework. These include the development of statistics to test hypotheses about the characteristics of the production frontier, such as returns to scale, input substitutability, and model specification, and also about variation in efficiencies relative to the production frontier.
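As a minimal illustration of the efficiency measures these test statistics are built on, the sketch below computes DEA efficiency scores in the simplest single-input, single-output, constant-returns-to-scale case, where the usual linear program collapses to comparing productivity ratios against the best observed ratio. The multi-input case, and the hypothesis tests surveyed in the paper, require solving linear programs and resampling, which are not shown.

```python
def dea_crs_efficiency(inputs, outputs):
    """DEA technical efficiency under constant returns to scale for one
    input and one output: each decision-making unit's output/input ratio
    relative to the best ratio.  Scores lie in (0, 1]; a score of 1
    marks a unit on the estimated frontier."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical decision-making units: inputs and outputs.
scores = dea_crs_efficiency([2.0, 4.0, 5.0], [4.0, 6.0, 5.0])
```

The first unit attains the best ratio (2.0) and is scored 1; the others are scored 0.75 and 0.5 relative to it.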
6.
We test the robustness of the APT to two alternative estimation procedures: the Fama and MacBeth (1973) two-step methodology, and the one-step procedure due to Burmeister and McElroy (1988). We find that the APT is indeed sensitive to the chosen estimator and assumptions about the factor structure of stock returns. We believe that our findings have implications for the estimation of asset pricing models in general.
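A sketch of the first of the two procedures, the Fama and MacBeth (1973) two-step estimator, for a single factor and plain OLS (the paper's actual factor structure and the Burmeister–McElroy one-step estimator are not reproduced here): step one estimates each asset's beta from a time-series regression; step two regresses the cross-section of returns on those betas at each date and averages the resulting risk premia.

```python
import random

def ols_slope(x, y):
    """OLS slope and intercept for a single regressor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    slope = cov / var
    return slope, my - slope * mx

def fama_macbeth(returns, factor):
    """Two-step Fama-MacBeth estimate of the factor risk premium.
    returns: list of per-asset return series; factor: factor series."""
    # Step 1: time-series regressions give each asset's beta.
    betas = [ols_slope(factor, asset)[0] for asset in returns]
    # Step 2: cross-sectional regression at each date, then average.
    premia = []
    for t in range(len(factor)):
        cross_section = [asset[t] for asset in returns]
        lam, _ = ols_slope(betas, cross_section)
        premia.append(lam)
    return sum(premia) / len(premia)

# Simulated example (hypothetical parameters): factor with a 5% premium.
rng = random.Random(1)
true_betas = [0.5, 1.0, 1.5, 2.0]
factor = [0.05 + rng.gauss(0.0, 0.02) for _ in range(500)]
returns = [[b * f + rng.gauss(0.0, 0.01) for f in factor] for b in true_betas]
premium = fama_macbeth(returns, factor)
```

On the simulated data the averaged cross-sectional slope recovers the 5% premium built into the factor.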
7.
An extensive collection of continuous-time models of the short-term interest rate is evaluated over data sets that have appeared previously in the literature. The analysis, which uses the simulated maximum likelihood procedure proposed by Durham and Gallant (2002), provides new insights regarding several previously unresolved questions. For single factor models, I find that the volatility, not the drift, is the critical component in model specification. Allowing for additional flexibility beyond a constant term in the drift provides negligible benefit. While constant drift would appear to imply that the short rate is nonstationary, in fact, stationarity is volatility-induced. The simple constant elasticity of volatility model fits weekly observations of the three-month Treasury bill rate remarkably well but is easily rejected when compared with more flexible volatility specifications over daily data. The methodology of Durham and Gallant can also be used to estimate stochastic volatility models. While adding the latent volatility component provides a large improvement in the likelihood for the physical process, it does little to improve bond-pricing performance.
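The constant elasticity of volatility model with a constant drift can be simulated with a basic Euler scheme, as sketched below. The parameter values are illustrative, not estimates from the paper, and the simulated maximum likelihood machinery of Durham and Gallant is well beyond this snippet.

```python
import math
import random

def simulate_cev_short_rate(r0, mu, sigma, gamma, dt, n_steps, seed=0):
    """Euler discretization of dr = mu dt + sigma * r^gamma dW, the
    constant-drift, constant-elasticity-of-volatility short-rate model.
    The rate is clamped above zero after each step, a crude fix for
    discretization error in the sketch."""
    rng = random.Random(seed)
    r = r0
    path = [r]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))          # Brownian increment
        r = r + mu * dt + sigma * max(r, 0.0) ** gamma * dw
        r = max(r, 1e-8)
        path.append(r)
    return path

# One year of daily steps from an illustrative 5% starting rate.
path = simulate_cev_short_rate(r0=0.05, mu=0.0, sigma=0.8, gamma=1.5,
                               dt=1 / 252, n_steps=252)
```

With gamma above 1, volatility shrinks rapidly as the rate falls, which is the mechanism behind the volatility-induced stationarity discussed in the abstract.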
8.
Statistical Decision Problems and Bayesian Nonparametric Methods
This paper considers parametric statistical decision problems conducted within a Bayesian nonparametric context. Our work was motivated by the realisation that typical parametric model selection procedures are essentially incoherent. We argue that one solution to this problem is to use a flexible enough model in the first place, a model that will not be checked no matter what data arrive. Ideally, one would use a nonparametric model to describe all the uncertainty about the density function generating the data. However, parametric models are the preferred choice for many statisticians, despite the incoherence involved in model checking, incoherence that is quite often ignored for pragmatic reasons. In this paper we show how coherent parametric inference can be carried out via decision theory and Bayesian nonparametrics. None of the ingredients discussed here are new, but our main point only becomes evident when one sees all priors—even parametric ones—as measures on sets of densities as opposed to measures on finite-dimensional parameter spaces.
9.
Progressive stress accelerated life tests under finite mixture models
In this paper, progressive stress accelerated life tests are considered when the lifetime of a product under use conditions follows a finite mixture of distributions. The experiment is performed when each of the components in the mixture follows a general class of distributions which includes, among others, the Weibull, compound Weibull, power function, Gompertz and compound Gompertz distributions. It is assumed that the scale parameter of each component satisfies the inverse power law, that the progressive stress is directly proportional to time, and that the cumulative exposure model for the effect of changing stress holds. Based on Type-I censoring, the maximum likelihood estimates (MLEs) of the parameters under consideration are obtained. Special attention is paid to a mixture of two Rayleigh components. Simulations are carried out to study the precision of the MLEs and to obtain confidence intervals for the parameters involved.
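To make the Type-I censoring step concrete, the sketch below works out the MLE for a single Rayleigh component (not the two-component mixture, progressive stress, or cumulative exposure model treated in the paper), where the censored likelihood has a closed-form maximizer.

```python
import math
import random

def rayleigh_mle_type1(times, tau):
    """MLE of the Rayleigh scale sigma under Type-I censoring at tau.
    Failures contribute the density term, censored units the survival
    term exp(-tau^2 / (2 sigma^2)); maximizing the log-likelihood gives
    sigma^2 = (sum of squared failure times + censored * tau^2) / (2d),
    where d is the number of observed failures."""
    failures = [t for t in times if t <= tau]
    d = len(failures)
    n_censored = len(times) - d
    ss = sum(t * t for t in failures) + n_censored * tau * tau
    return math.sqrt(ss / (2 * d))

# Simulated check with hypothetical parameters: Rayleigh(sigma = 2.0)
# draws via the inverse CDF t = sigma * sqrt(-2 ln(1 - U)).
rng = random.Random(42)
sigma = 2.0
sample = [sigma * math.sqrt(-2 * math.log(1 - rng.random()))
          for _ in range(5000)]
sigma_hat = rayleigh_mle_type1(sample, tau=3.0)
```

Roughly a third of the simulated lifetimes exceed the censoring time, yet the closed-form estimator recovers the true scale closely.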
10.
This paper utilizes the average derivative estimation of Stoker (1986) and the pseudo-likelihood estimation of Fan, Li, and Weersink (1996) to estimate a semiparametric stochastic frontier regression, y = g(x) + ε, where the function g(·) is unknown and ε is a composite error in a standard setting. The proposed semiparametric method of estimation is applied to data on farmers' credit unions in Taiwan. Empirical results show that the banking services of the farmers' credit unions are subject to economies of scale, but exhibit a high degree of cost inefficiency in operation.
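As a rough illustration of the average derivative idea, the sketch below averages kernel-weighted local linear slopes for a scalar regressor. Stoker's (1986) estimator is the density-weighted version, and the frontier model adds the composite error; neither refinement is reproduced here.

```python
import math
import random

def average_derivative(xs, ys, bandwidth):
    """Average derivative estimate of E[g'(x)] for y = g(x) + noise:
    fit a Gaussian-kernel-weighted local linear regression at each
    sample point and average the fitted slopes."""
    slopes = []
    for x0 in xs:
        w = [math.exp(-0.5 * ((x - x0) / bandwidth) ** 2) for x in xs]
        sw = sum(w)
        mx = sum(wi * x for wi, x in zip(w, xs)) / sw
        my = sum(wi * y for wi, y in zip(w, ys)) / sw
        cov = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys))
        var = sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs))
        slopes.append(cov / var)
    return sum(slopes) / len(slopes)

# Sanity check on a linear g (hypothetical data): g(x) = 2x, so the
# average derivative should be close to 2.
rng = random.Random(3)
xs = [rng.uniform(0.0, 1.0) for _ in range(300)]
ys = [2.0 * x + rng.gauss(0.0, 0.05) for x in xs]
ad = average_derivative(xs, ys, bandwidth=0.2)
```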

Copyright©北京勤云科技发展有限公司  京ICP备09084417号