71.
We provide a partial ordering view of horizontal inequity (HI), based on the Lorenz criterion, associated with different post‐tax income distributions and a (bistochastic) non-parametrically estimated benchmark distribution. As a consequence, several measures consistent with the Lorenz criterion can be rationalized. In addition, we establish the so‐called HI transfer principle, which imposes a normative minimum requirement that any HI measure must satisfy. Our proposed HI ordering is consistent with this principle. Moreover, we adopt a cardinal view to decompose the total effect of a tax system into a welfare gain caused by HI‐free income redistribution and a welfare loss caused by HI, without any additive decomposability restriction on the indices. Hence, more robust tests can be applied. Other decompositions in the literature are seen as particular cases.
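The Lorenz-criterion partial ordering the abstract builds on is easy to illustrate. Below is a minimal numpy sketch, not the authors' method; the two income vectors and the function name `lorenz_curve` are purely illustrative. One distribution Lorenz-dominates another when its Lorenz curve lies weakly above everywhere:

```python
import numpy as np

def lorenz_curve(incomes):
    """Cumulative income share at each cumulative population share."""
    x = np.sort(np.asarray(incomes, dtype=float))
    cum = np.cumsum(x)
    return np.insert(cum / cum[-1], 0, 0.0)  # curve starts at (0, 0)

# Two hypothetical post-tax income distributions over the same population.
a = np.array([10, 20, 30, 40])   # more equal
b = np.array([5, 10, 25, 60])    # more unequal

La, Lb = lorenz_curve(a), lorenz_curve(b)

# Partial ordering: a dominates b if its curve is weakly above at every point.
a_dominates_b = np.all(La >= Lb)
```

Because the criterion is only a partial order, two curves that cross are simply incomparable, which is why the abstract speaks of a partial ordering view rather than a single index.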
72.
Probability theory in fuzzy sample spaces   (total citations: 2; self-citations: 0; citations by others: 2)
This paper tries to develop a neat and comprehensive probability theory for sample spaces where the events are fuzzy subsets of . The investigations focus on how to equip those sample spaces with suitable σ-algebras and metrics. In the end we can point out a unified concept of random elements in the sample spaces under consideration which is linked with compatible metrics to express random errors. The result is supported by presenting a strong law of large numbers, a central limit theorem and a Glivenko–Cantelli theorem for these kinds of random elements, formulated simultaneously w.r.t. the selected metrics. As a by-product, the line of reasoning followed within the paper enables us to generalize as well as to bring together already known results and concepts from the literature.

Acknowledgement. The author would like to thank the participants of the 23rd Linz Seminar on Fuzzy Set Theory for the intensive discussion of the paper. Especially he is indebted to Professors Diamond and Höhle, whose remarks have helped to get deeper insights into the subject. Additionally, the author is grateful to one anonymous referee for careful reading and valuable proposals which have led to an improvement of the first draft. This paper was presented at the 23rd Linz Seminar on Fuzzy Set Theory, Linz, Austria, February 5–9, 2002.
73.
Robustness issues in multilevel regression analysis   (total citations: 8; self-citations: 0; citations by others: 8)
A multilevel problem concerns a population with a hierarchical structure. A sample from such a population can be described as a multistage sample. First, a sample of higher level units is drawn (e.g. schools or organizations), and next a sample of the sub‐units from the available units (e.g. pupils in schools or employees in organizations). In such samples, the individual observations are in general not completely independent. Multilevel analysis software accounts for this dependence and in recent years these programs have been widely accepted. Two problems that occur in the practice of multilevel modeling will be discussed. The first problem is the choice of the sample sizes at the different levels. What are sufficient sample sizes for accurate estimation? The second problem is the normality assumption of the level‐2 error distribution. When one wants to conduct tests of significance, the errors need to be normally distributed. What happens when this is not the case? In this paper, simulation studies are used to answer both questions. With respect to the first question, the results show that a small sample size at level two (meaning a sample of 50 or less) leads to biased estimates of the second‐level standard errors. The answer to the second question is that only the standard errors for the random effects at the second level are highly inaccurate if the distributional assumptions concerning the level‐2 errors are not fulfilled. Robust standard errors turn out to be more reliable than the asymptotic standard errors based on maximum likelihood.
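The first simulation question, how the number of level-2 units affects estimation accuracy, can be illustrated with a toy random-intercept model. This is an illustrative sketch, not the paper's simulation design; the function names and parameter values are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_groups(n_groups, n_per_group, tau=1.0, sigma=1.0):
    """Random-intercept data: y_ij = u_j + e_ij, with u_j ~ N(0, tau^2)."""
    u = rng.normal(0.0, tau, size=n_groups)
    return u[:, None] + rng.normal(0.0, sigma, size=(n_groups, n_per_group))

def level2_variance_estimate(y):
    """Moment estimator of the level-2 (between-group) variance tau^2."""
    n = y.shape[1]
    between = np.var(y.mean(axis=1), ddof=1)     # variance of group means
    within = np.mean(np.var(y, axis=1, ddof=1))  # average within-group variance
    return between - within / n

# Sampling variability of the level-2 estimate shrinks with the number of groups:
few = [level2_variance_estimate(simulate_groups(10, 20)) for _ in range(200)]
many = [level2_variance_estimate(simulate_groups(100, 20)) for _ in range(200)]
sd_few, sd_many = np.std(few), np.std(many)
```

With only 10 groups the between-group variance estimate is far noisier than with 100 groups, echoing the paper's finding that a small number of level-2 units (around 50 or fewer) makes second-level inference unreliable.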
74.
We take as a starting point the existence of a joint distribution implied by different dynamic stochastic general equilibrium (DSGE) models, all of which are potentially misspecified. Our objective is to compare “true” joint distributions with ones generated by given DSGEs. This is accomplished via comparison of the empirical joint distributions (or confidence intervals) of historical and simulated time series. The tool draws on recent advances in the theory of the bootstrap, Kolmogorov-type testing, and other work on the evaluation of DSGEs, aimed at comparing the second order properties of historical and simulated time series. We begin by fixing a given model as the “benchmark” model, against which all “alternative” models are to be compared. We then test whether at least one of the alternative models provides a more “accurate” approximation to the true cumulative distribution than does the benchmark model, where accuracy is measured in terms of distributional square error. Bootstrap critical values are discussed, and an illustrative example is given, in which it is shown that alternative versions of a standard DSGE model in which calibrated parameters are allowed to vary slightly perform equally well. On the other hand, there are stark differences between models when the shocks driving the models are assigned non-plausible variances and/or distributional assumptions.
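The distributional square-error accuracy measure can be mimicked with empirical CDFs. This is a hedged toy version with Gaussian stand-ins for the historical data and the two model-implied distributions, not the paper's bootstrap procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

def ecdf(sample, grid):
    """Empirical CDF of `sample` evaluated on `grid`."""
    return np.searchsorted(np.sort(sample), grid, side="right") / len(sample)

def sq_distance(hist, sim, grid):
    """Mean squared distance between two empirical CDFs (the accuracy measure)."""
    return np.mean((ecdf(hist, grid) - ecdf(sim, grid)) ** 2)

historical = rng.normal(0.0, 1.0, 500)   # stand-in for historical data
benchmark = rng.normal(0.0, 1.5, 500)    # "benchmark" model, too-large variance
alternative = rng.normal(0.0, 1.0, 500)  # "alternative" model, well calibrated

grid = np.linspace(-4.0, 4.0, 201)
d_bench = sq_distance(historical, benchmark, grid)
d_alt = sq_distance(historical, alternative, grid)
```

The alternative model here attains the smaller distributional square error; the paper's contribution is a formal bootstrap test of whether such a difference is statistically meaningful.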
75.
We provide a convenient econometric framework for the analysis of nonlinear dependence in financial applications. We introduce models with constrained nonparametric dependence, which specify the conditional distribution or the copula in terms of a one-dimensional functional parameter. Our approach is intermediate between standard parametric specifications (which are in general too restrictive) and the fully unrestricted approach (which suffers from the curse of dimensionality). We introduce a nonparametric estimator defined by minimizing a chi-square distance between the constrained densities in the family and an unconstrained kernel estimator of the density. We derive the nonparametric efficiency bound for linear forms and show that the minimum chi-square estimator is nonparametrically efficient for linear forms.
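A minimum chi-square estimator of the kind described can be sketched in a one-parameter toy family. This is a simplified illustration, not the authors' estimator: the family N(θ, 1), the kernel bandwidth, the grid, and the small floor that stabilizes the tails of the kernel estimate are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(2.0, 1.0, 400)       # unconstrained sample
grid = np.linspace(-2.0, 6.0, 201)
dg = grid[1] - grid[0]

def kde(x, grid, h=0.3):
    """Gaussian kernel density estimate evaluated on a grid."""
    u = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * u ** 2).mean(axis=1) / (h * np.sqrt(2.0 * np.pi))

f_hat = kde(data, grid)  # unconstrained kernel estimator

def chi_sq(theta):
    """Chi-square distance between the N(theta, 1) density and the kernel estimate."""
    f = np.exp(-0.5 * (grid - theta) ** 2) / np.sqrt(2.0 * np.pi)
    return np.sum((f - f_hat) ** 2 / (f_hat + 1e-6)) * dg  # floor avoids tail blow-up

# Minimum chi-square fit over a grid of candidate parameters.
thetas = np.linspace(0.0, 4.0, 401)
theta_hat = thetas[np.argmin([chi_sq(t) for t in thetas])]
```

The fitted θ lands near the true location 2.0; in the paper the parameter is an infinite-dimensional functional rather than a scalar, which is what makes the efficiency-bound analysis nontrivial.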
76.
We examine the performance of a metric entropy statistic as a robust test for time-reversibility (TR), symmetry, and serial dependence. It also serves as a measure of goodness-of-fit. The statistic provides a consistent and unified basis in model search, and is a powerful diagnostic measure with surprising ability to pinpoint areas of model failure. We provide empirical evidence comparing the performance of the proposed procedure with some of the modern competitors in nonlinear time-series analysis, such as robust implementations of the BDS and characteristic function-based tests of TR, along with correlation-based competitors such as the Ljung–Box Q-statistic. Unlike our procedure, each of its competitors is motivated for a different, specific, context and hypothesis. Our evidence is based on Monte Carlo simulations along with an application to several stock indices for the US equity market.
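A crude time-reversibility diagnostic conveys the idea, though it is far simpler than the metric entropy statistic the abstract proposes: the standardized third moment of k-differences is zero for any time-reversible series. The two toy series below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def tr_stat(x, k=1):
    """Standardized E[(x_t - x_{t-k})^3]; zero for a time-reversible series."""
    d = x[k:] - x[:-k]
    return np.mean(d ** 3) / np.std(d) ** 3

n = 20000

# A Gaussian AR(1) is time-reversible.
ar = np.empty(n)
ar[0] = 0.0
eps = rng.normal(size=n)
for t in range(1, n):
    ar[t] = 0.5 * ar[t - 1] + eps[t]

# A slow-rise / fast-fall sawtooth is strongly irreversible.
saw = np.tile(np.concatenate([np.linspace(0.0, 1.0, 9), [0.0]]), n // 10)

stat_ar, stat_saw = tr_stat(ar), tr_stat(saw)
```

The statistic is near zero for the reversible AR(1) and far from zero for the sawtooth, whose slow climbs and abrupt drops produce a heavily skewed difference distribution.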
77.
We examine the statistical performance of inequality indices in the presence of extreme values in the data and show that these indices are very sensitive to the properties of the income distribution. Estimation and inference can be dramatically affected, especially when the tail of the income distribution is heavy, even when standard bootstrap methods are employed. However, use of appropriate semiparametric methods for modelling the upper tail can greatly improve the performance of even those inequality indices that are normally considered particularly sensitive to extreme values.
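The sensitivity of inequality indices to extreme values is easy to demonstrate with the Gini coefficient (the numbers are illustrative; the semiparametric tail-modelling remedy itself is not shown here):

```python
import numpy as np

def gini(x):
    """Gini coefficient of a sample via the sorted-weights formula."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    return np.sum((2 * np.arange(1, n + 1) - n - 1) * x) / (n * n * x.mean())

incomes = np.full(99, 50.0)                      # 99 identical incomes: Gini = 0
g_base = gini(incomes)
g_outlier = gini(np.append(incomes, 50_000.0))   # one extreme observation
```

A single extreme observation moves the index from perfect equality to nearly its maximum, which is exactly why heavy upper tails wreck estimation and inference for such indices.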
78.
This paper considers finite sample motivated structural change tests in the multivariate linear regression model with application to energy demand models, in which case commonly used structural change tests remain asymptotic. As in Dufour and Kiviet [1996. Exact tests for structural change in first-order dynamic models. Journal of Econometrics 70, 39–68], we account for intervening nuisance parameters through a two-stage maximized Monte Carlo test procedure. Our contributions can be classified into five categories: (i) we extend tests for which a finite-sample theory has been supplied for Gaussian distributions to the non-Gaussian context; (ii) we show that Bai et al. [1998. Testing and dating common breaks in multi-variate time series. The Review of Economic Studies 65 (3), 395–432] test severely over-rejects and propose exact variants of this test; (iii) we consider predictive break test approaches which generalize tests in Dufour [1980. Dummy variables and predictive tests for structural change. Economics Letters 6, 241–247] and Dufour and Kiviet [1996. Exact tests for structural change in first-order dynamic models. Journal of Econometrics 70, 39–68]; (iv) we propose exact (non-Bonferroni based) extensions of the multivariate outliers test from Wilks [1963. Multivariate statistical outliers. Sankhya Series A 25, 407–426] to models with covariates; (v) we apply these tests to the energy demand system analyzed by Arsenault et al. [1995. A total energy demand model of Québec: forecasting properties. Energy Economics 17 (2), 163–171]. For two out of the six industrial sectors analyzed over the 1962–2000 period, break and further goodness-of-fit and diagnostic tests allow us to identify (and correct) specification problems arising from historical regulatory changes or (possibly random) industry-specific effects. The procedures we propose have potentially useful applications in statistics, econometrics and finance (e.g. event studies).
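The Monte Carlo test logic underlying the procedure, without the nuisance-parameter maximization step, can be sketched as a rank-based p-value for a toy break statistic. The statistic and the data are illustrative assumptions, not the paper's tests:

```python
import numpy as np

rng = np.random.default_rng(3)

def mc_pvalue(observed, simulate, n_rep=199):
    """Monte Carlo p-value: rank of the observed statistic among null simulations."""
    sims = np.array([simulate() for _ in range(n_rep)])
    return (1 + np.sum(sims >= observed)) / (n_rep + 1)

def break_stat(y):
    """Toy break statistic: |mean(first half) - mean(second half)|."""
    h = len(y) // 2
    return abs(y[:h].mean() - y[h:].mean())

n = 200
# Series with a genuine mid-sample break in the mean.
y_break = np.concatenate([rng.normal(0.0, 1.0, n // 2), rng.normal(1.5, 1.0, n // 2)])
p = mc_pvalue(break_stat(y_break), lambda: break_stat(rng.normal(0.0, 1.0, n)))
```

With 199 replications the smallest attainable p-value is 1/200, a standard choice making the test exact at conventional levels; the paper's maximized variant additionally maximizes this p-value over the nuisance-parameter space.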
79.
We show that given a value function approximation V of a strongly concave stochastic dynamic programming problem (SDDP), the associated policy function approximation is Hölder continuous in V.
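Spelled out, Hölder continuity of the policy map means the following, writing $g_V$ for the policy function associated with a value function approximation $V$ (notation assumed, not taken from the abstract):

```latex
\exists\, C > 0,\ \alpha \in (0,1] :\qquad
\bigl\| g_{V_1} - g_{V_2} \bigr\|_{\infty}
\;\le\;
C \,\bigl\| V_1 - V_2 \bigr\|_{\infty}^{\alpha}.
```

That is, small errors in the value function approximation translate into controllably small errors in the induced policy.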
80.
This paper presents a new approximation to the exact sampling distribution of the instrumental variables estimator in simultaneous equations models. It differs from many of the approximations currently available, Edgeworth expansions for example, in that it is specifically designed to work well when the concentration parameter is small. The approximation is remarkable in that simultaneously: (i) it has an extremely simple final form; (ii) in situations for which it is designed it is typically much more accurate than is the large sample normal approximation; and (iii) it is able to capture most of those stylized facts that characterize lack of identification and weak instrument scenarios. The development leading to the approximation is also novel in that it introduces techniques of some independent interest not seen in this literature hitherto.
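The weak-instrument phenomenon the approximation targets, a small concentration parameter, already shows up in a bare-bones simulation. This is a generic just-identified IV setup for illustration, not the paper's approximation; all names and parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def iv_estimate(y, x, z):
    """Just-identified instrumental variables estimator cov(z, y) / cov(z, x)."""
    return np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

def simulate_iv(pi, n=100, beta=1.0, rho=0.8):
    """x is endogenous (structural and first-stage errors correlated via rho);
    pi governs instrument strength, so the concentration parameter is ~ n * pi^2."""
    z = rng.normal(size=n)
    u = rng.normal(size=n)
    v = rho * u + np.sqrt(1.0 - rho ** 2) * rng.normal(size=n)
    x = pi * z + v
    y = beta * x + u
    return iv_estimate(y, x, z)

# Weak instrument (tiny concentration parameter) versus a strong one.
weak = np.array([simulate_iv(pi=0.05) for _ in range(500)])
strong = np.array([simulate_iv(pi=1.0) for _ in range(500)])

# Medians, since weak-IV estimates have very heavy tails.
bias_weak = abs(np.median(weak) - 1.0)
bias_strong = abs(np.median(strong) - 1.0)
```

With a tiny concentration parameter the estimator's distribution collapses toward the (inconsistent) OLS limit, exactly the regime in which the large-sample normal approximation fails and a purpose-built approximation is needed.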
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号