1.
The paper concerns the study of equilibrium points, or steady states, of economic systems arising in modeling optimal investment with vintage capital, namely systems where all key variables (capital, investment, prices) are indexed not only by time but also by age. Capital accumulation is hence described by a partial differential equation (PDE), and equilibrium points are in fact equilibrium distributions in the age variable. A general method is developed to compute and study equilibrium points of a wide range of infinite-dimensional, infinite-horizon optimal control problems. We apply the method to optimal investment with vintage capital, for a variety of data, deriving existence and uniqueness of the equilibrium distribution as well as analytic formulas for optimal controls and trajectories in the long run. The examples suggest that the same method can be applied to other economic problems displaying heterogeneity, and they show how effective the theoretical machinery of infinite-dimensional optimal control is in computing equilibrium distributions explicitly. To this extent, the results of this work constitute a first crucial step towards a thorough understanding of the long-run behavior of optimal paths.
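For orientation (this sketch is standard in the vintage-capital literature and is not reproduced from the paper itself), the age-structured capital accumulation law described above typically reads:

```latex
\frac{\partial K}{\partial t}(t,a) + \frac{\partial K}{\partial a}(t,a)
  = -\mu(a)\,K(t,a), \qquad K(t,0) = I(t),
```

where \(K(t,a)\) is the stock of capital of age \(a\) at time \(t\), \(I(t)\) is investment in new capital goods, and \(\mu(a)\) is an age-dependent depreciation rate. An equilibrium point of such a system is then a stationary age profile \(a \mapsto K^*(a)\) rather than a single number, which is what the abstract means by an "equilibrium distribution".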
2.
We consider the problem of estimating a probability density function from data that are corrupted by noise from a uniform distribution. The (nonparametric) maximum likelihood estimator of the corresponding distribution function is well defined; for the density function this is not the case. We study two nonparametric estimators for this density. The first is a type of kernel density estimate based on the empirical distribution function of the observable data. The second is a kernel density estimate based on the MLE of the distribution function of the unobservable (uncorrupted) data.
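A minimal simulation of this setup (not the paper's estimators: those smooth the empirical distribution function and the MLE of the uncorrupted distribution function, respectively). It draws unobservable exponential data, corrupts it with Uniform(0, 1) noise, and applies a plain Gaussian kernel density estimate to the observable data, illustrating why deconvolution is needed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unobservable draws X corrupted by additive Uniform(0, 1) noise; only y is seen.
x = rng.exponential(scale=1.0, size=5000)
y = x + rng.uniform(0.0, 1.0, size=5000)

def kde(data, grid, h):
    """Plain Gaussian kernel density estimate (illustrative only; the paper's
    estimators are built from distribution-function estimates instead)."""
    z = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z**2).mean(axis=1) / (h * np.sqrt(2.0 * np.pi))

grid = np.linspace(0.0, 6.0, 121)
f_y = kde(y, grid, h=0.3)
# Since Y = X + U with U ~ Uniform(0, 1), f_Y(t) = F_X(t) - F_X(t - 1):
# the observable density is a unit-width moving average of the target density,
# so estimating f_X from y requires undoing that smoothing.
mass = f_y.sum() * (grid[1] - grid[0])
print(mass)
```

The printed Riemann sum is close to one (slightly below, because the grid truncates the right tail and the kernel leaks a little mass below zero).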
3.
The Development of the Stochastic Production Frontier Approach and Its Applications in China
This paper reviews the development of stochastic frontier production function models and their application to productivity analysis in China. It first introduces the basic principles of the stochastic frontier approach, its estimation methods, and the decomposition of total factor productivity growth under panel data. It then reviews recent advances in stochastic frontier production function models and their strengths and roles in empirical analysis. Finally, it summarizes the achievements and shortcomings of the stochastic frontier approach in studies of industry-level and regional economic growth in China, and discusses directions for future research.
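A minimal sketch of the basic stochastic frontier estimation the review describes, using the classic normal/half-normal specification (Aigner, Lovell and Schmidt) on simulated Cobb-Douglas data; the data, parameter values, and starting point are hypothetical, not drawn from any study in the review:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 2000
logL = rng.normal(2.0, 0.5, n)
logK = rng.normal(3.0, 0.5, n)
v = rng.normal(0.0, 0.2, n)            # symmetric noise
u = np.abs(rng.normal(0.0, 0.4, n))    # half-normal inefficiency
logy = 1.0 + 0.6 * logL + 0.3 * logK + v - u
X = np.column_stack([np.ones(n), logL, logK])

def negloglik(theta):
    beta, ls_v, ls_u = theta[:3], theta[3], theta[4]
    sv, su = np.exp(ls_v), np.exp(ls_u)      # log-parameterized scales
    sigma = np.hypot(sv, su)
    lam = su / sv
    eps = logy - X @ beta
    # Normal/half-normal composed-error log-density
    ll = (np.log(2.0 / sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

res = minimize(negloglik, x0=[0.5, 0.5, 0.5, np.log(0.3), np.log(0.3)],
               method="BFGS")
beta_hat = res.x[:3]
print(beta_hat)  # close to the true [1.0, 0.6, 0.3]
```

Under panel data, the same likelihood extends with time-varying inefficiency terms, which is what permits the TFP-growth decomposition the review discusses.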
4.
Making quantified statements about the uncertainty associated with the lifelength of an item is one of the most fundamental tasks of reliability assessment. Most practitioners routinely do this using one of several available statistical techniques. The purpose of this paper is two-fold. The first is to give the user an overview of the key tenets of two of the most commonly used parametric approaches. The second is to point out that these commonly used approaches involve strategies that are either ad hoc or in violation of some of the underlying tenets. A method that is devoid of logical flaws can be proposed, but it is difficult to implement. The user must therefore settle for the technique against which the fewest objections can be raised.
5.
Modeling Conditional Yield Densities
Given the increasing interest in agricultural risk, many have sought improved methods to characterize conditional crop-yield densities. While most have postulated the Beta as a flexible alternative to the Normal, others have chosen nonparametric methods. Unfortunately, yield data tend not to be sufficiently abundant to invalidate many reasonable parametric models. This is problematic because conclusions from economic analyses, which require estimated conditional yield densities, tend not to be invariant to the modeling assumption. We propose a semiparametric estimator that, because of its theoretical properties and our simulation results, enables one to proceed empirically with a higher degree of confidence.
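A small illustration of the parametric-choice question raised above, comparing Beta and Normal fits on hypothetical yield data (the data and parameters are invented for the example; this is not the paper's semiparametric estimator):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical yields, scaled to (0, 1) as a fraction of maximum attainable yield.
yields = rng.beta(a=6.0, b=2.0, size=300)

# Fit a Beta with known support [0, 1] and a Normal to the same sample.
a_hat, b_hat, _, _ = stats.beta.fit(yields, floc=0.0, fscale=1.0)
mu_hat, sd_hat = stats.norm.fit(yields)

ll_beta = stats.beta.logpdf(yields, a_hat, b_hat).sum()
ll_norm = stats.norm.logpdf(yields, mu_hat, sd_hat).sum()
print(ll_beta - ll_norm)  # positive: the skewed sample favors the Beta fit
```

The point the abstract makes is that with only a few hundred observations, this likelihood gap is often too small to rule out either family decisively, even though downstream economic conclusions differ between them.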
6.
Estimation in the interval censoring model is considered. A class of smooth functionals is introduced, of which the mean is an example. The asymptotic information lower bound for such functionals can be represented as an inner product of two functions. In case 1, i.e. one observation time per unobservable event time, both functions can be given explicitly. We mainly consider case 2, with two observation times for each unobservable event time, in the situation where the observation times cannot become arbitrarily close to each other. For case 2, one of the functions in the inner product can only be given implicitly, as the solution to a Fredholm integral equation. We study properties of this solution and, in a sequel to this paper, prove that the nonparametric maximum likelihood estimator of the functional asymptotically attains the information lower bound.
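For reference, a Fredholm integral equation of the second kind has the generic form (the paper's specific kernel and inhomogeneous term are not reproduced here):

```latex
\varphi(x) = f(x) + \int_a^b K(x, y)\,\varphi(y)\,dy,
```

where \(\varphi\) is the unknown function and \(f\) and the kernel \(K\) are given. The function in the inner product mentioned above is characterized implicitly as such a solution \(\varphi\), which is why its properties must be studied indirectly.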
7.
An extensive collection of continuous-time models of the short-term interest rate is evaluated over data sets that have appeared previously in the literature. The analysis, which uses the simulated maximum likelihood procedure proposed by Durham and Gallant (2002), provides new insights regarding several previously unresolved questions. For single factor models, I find that the volatility, not the drift, is the critical component in model specification. Allowing for additional flexibility beyond a constant term in the drift provides negligible benefit. While constant drift would appear to imply that the short rate is nonstationary, in fact, stationarity is volatility-induced. The simple constant elasticity of volatility model fits weekly observations of the three-month Treasury bill rate remarkably well but is easily rejected when compared with more flexible volatility specifications over daily data. The methodology of Durham and Gallant can also be used to estimate stochastic volatility models. While adding the latent volatility component provides a large improvement in the likelihood for the physical process, it does little to improve bond-pricing performance.
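A minimal sketch of the constant-elasticity-of-volatility short-rate dynamics mentioned above, simulated with an Euler scheme; the parameter values are illustrative assumptions, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_cev(r0, kappa, theta, sigma, gamma, dt, n_steps):
    """Euler scheme for dr = kappa*(theta - r) dt + sigma * r**gamma dW.
    With kappa near zero the drift is essentially constant, yet the level
    effect in the volatility (gamma > 1) can still induce stationarity,
    which is the 'volatility-induced stationarity' the abstract refers to.
    A small floor keeps the discretized rate positive."""
    r = np.empty(n_steps + 1)
    r[0] = r0
    for t in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        r[t + 1] = max(r[t] + kappa * (theta - r[t]) * dt
                       + sigma * r[t] ** gamma * dW, 1e-8)
    return r

path = simulate_cev(r0=0.05, kappa=0.1, theta=0.05, sigma=0.8, gamma=1.5,
                    dt=1.0 / 52.0, n_steps=520)  # ten years of weekly steps
print(path[-1])
```

Simulated maximum likelihood fills in many such fine-grained Euler steps between each pair of observed rates and integrates out the intermediate values, which is how Durham and Gallant's procedure makes the discretely observed likelihood tractable.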
8.
Robustness issues in multilevel regression analysis
A multilevel problem concerns a population with a hierarchical structure. A sample from such a population can be described as a multistage sample: first, a sample of higher-level units is drawn (e.g. schools or organizations), and next a sample of the sub-units within the selected units (e.g. pupils in schools or employees in organizations). In such samples, the individual observations are in general not completely independent. Multilevel analysis software accounts for this dependence, and in recent years these programs have been widely accepted. Two problems that occur in the practice of multilevel modeling are discussed. The first is the choice of the sample sizes at the different levels: what are sufficient sample sizes for accurate estimation? The second is the normality assumption on the level-2 error distribution: when one wants to conduct tests of significance, the errors need to be normally distributed. What happens when this is not the case? In this paper, simulation studies are used to answer both questions. With respect to the first question, the results show that a small sample size at level two (meaning a sample of 50 or less) leads to biased estimates of the second-level standard errors. The answer to the second question is that only the standard errors for the random effects at the second level are highly inaccurate if the distributional assumptions concerning the level-2 errors are not fulfilled. Robust standard errors turn out to be more reliable than the asymptotic standard errors based on maximum likelihood.
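A small simulation in the spirit of the study above, showing how few level-2 units make the level-2 variance estimate unstable. It uses a simple one-way random-effects ANOVA estimator as a stand-in for the maximum likelihood estimates the paper examines; all sample sizes and variances are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def between_var_hat(n_groups, n_per_group, tau2=1.0, sigma2=1.0):
    """Method-of-moments (ANOVA) estimate of the level-2 variance tau2
    from one simulated random-intercept data set."""
    u = rng.normal(0.0, np.sqrt(tau2), n_groups)                    # group effects
    y = u[:, None] + rng.normal(0.0, np.sqrt(sigma2), (n_groups, n_per_group))
    group_means = y.mean(axis=1)
    msb = n_per_group * group_means.var(ddof=1)   # between-group mean square
    msw = y.var(axis=1, ddof=1).mean()            # within-group mean square
    return (msb - msw) / n_per_group

reps = 500
small = np.array([between_var_hat(10, 20) for _ in range(reps)])
large = np.array([between_var_hat(100, 20) for _ in range(reps)])
print(small.std(), large.std())  # far more variable with only 10 groups
```

The spread of the estimates shrinks roughly with the square root of the number of level-2 units, which is why the paper's threshold of about 50 groups concerns the level-2 quantities specifically.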
9.
Progressive stress accelerated life tests under finite mixture models
In this paper, progressive stress accelerated life tests are considered when the lifetime of a product under use conditions follows a finite mixture of distributions. The experiment is performed when each of the components in the mixture follows a general class of distributions which includes, among others, the Weibull, compound Weibull, power function, Gompertz and compound Gompertz distributions. It is assumed that the scale parameter of each component satisfies the inverse power law, the progressive stress is directly proportional to time, and the cumulative exposure model for the effect of changing stress holds. Based on type-I censoring, the maximum likelihood estimates (MLEs) of the parameters under consideration are obtained. Special attention is paid to a mixture of two Rayleigh components. Simulations are carried out to study the precision of the MLEs and to obtain confidence intervals for the parameters involved.
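A minimal sketch of maximum likelihood for the two-Rayleigh-component mixture highlighted above, fitted by EM on complete simulated lifetimes (the paper's full setting additionally involves progressive stress and type-I censoring, which this sketch omits; all parameter values are invented):

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated lifetimes from a 0.4/0.6 mixture of Rayleigh(1) and Rayleigh(3).
n = 3000
z = rng.random(n) < 0.4
x = np.where(z, rng.rayleigh(1.0, n), rng.rayleigh(3.0, n))

def rayleigh_pdf(x, s):
    return (x / s**2) * np.exp(-x**2 / (2.0 * s**2))

# EM iterations for the mixing weight p and the scale parameters s1, s2.
p, s1, s2 = 0.5, 0.5, 2.0
for _ in range(200):
    r1 = p * rayleigh_pdf(x, s1)
    r2 = (1.0 - p) * rayleigh_pdf(x, s2)
    g = r1 / (r1 + r2)                 # E-step: component responsibilities
    p = g.mean()                       # M-step: closed-form Rayleigh updates
    s1 = np.sqrt((g * x**2).sum() / (2.0 * g.sum()))
    s2 = np.sqrt(((1.0 - g) * x**2).sum() / (2.0 * (1.0 - g).sum()))

print(p, s1, s2)  # roughly 0.4, 1.0, 3.0
```

The closed-form M-step uses the Rayleigh MLE, \(\hat{s}^2 = \sum x_i^2 / (2n)\), weighted by the responsibilities; under censoring and progressive stress the updates lose this closed form and require numerical maximization.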
10.
Following Parsian and Farsipour (1999), we consider the problem of estimating the mean of the selected normal population, from two normal populations with unknown means and common known variance, under the LINEX loss function. Some admissibility results for a subclass of equivariant estimators are derived, and a sufficient condition for the inadmissibility of an arbitrary equivariant estimator is provided. As a consequence, several of the estimators proposed by Parsian and Farsipour (1999) are shown to be inadmissible, and better estimators are obtained. Received January 2001 / Revised May 2002
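For readers unfamiliar with it, the LINEX (linear-exponential) loss referred to above penalizes estimation errors asymmetrically; a minimal sketch of the standard form:

```python
import numpy as np

def linex_loss(delta, a=1.0):
    """LINEX loss L(delta) = exp(a*delta) - a*delta - 1, where delta is the
    estimation error (estimate minus target). For a > 0, overestimation is
    penalized roughly exponentially and underestimation roughly linearly."""
    return np.exp(a * delta) - a * delta - 1.0

print(linex_loss(1.0), linex_loss(-1.0))  # e - 2 vs 1/e: asymmetric
```

This asymmetry is what separates the admissibility analysis here from the squared-error case: the two selected-mean estimation directions are no longer interchangeable.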