1.
Luby Transform (LT) codes, used as channel codes in Power Line Communication (PLC), enable reliable transmission of electrical signals. The degree distribution is critical to the encoding and decoding performance of LT codes. To obtain a better degree distribution, the proportions of the degrees in the Binary Exponential Distribution (BED) are first adjusted, yielding an Improved BED (IBED) with better decoding performance. Then, exploiting the fact that IBED achieves a high decoding success rate at low redundancy while the Robust Soliton Distribution (RSD) performs better once the redundancy grows, the strengths of the two degree distributions are combined by summing and normalizing them, producing a new Binary Robust Soliton Distribution (BRSD). Simulation results show that, compared with other methods and the conventional RSD, LT encoding with the new degree distribution markedly reduces decoding overhead and saves encoding/decoding time. Applying the new degree distribution to an LT-code-based PLC system effectively suppresses the interference of various PLC channel noises on the electrical signal and improves communication efficiency.
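The sum-normalization step that merges the two degree distributions can be sketched in Python. The robust soliton distribution below follows its standard textbook definition; the binary-exponential weights (degree d weighted proportionally to 2^-d) and the parameter values are illustrative assumptions, not the paper's tuned IBED.

```python
import math

def robust_soliton(k, c=0.05, delta=0.5):
    # standard robust soliton distribution over degrees 1..k
    R = c * math.log(k / delta) * math.sqrt(k)
    pivot = int(round(k / R))
    rho = [0.0] * (k + 1)
    rho[1] = 1.0 / k
    for d in range(2, k + 1):
        rho[d] = 1.0 / (d * (d - 1))
    tau = [0.0] * (k + 1)
    for d in range(1, min(pivot, k + 1)):
        tau[d] = R / (d * k)
    if 1 <= pivot <= k:
        tau[pivot] = R * math.log(R / delta) / k
    z = sum(rho) + sum(tau)
    return [(rho[d] + tau[d]) / z for d in range(k + 1)]

def binary_exponential(k):
    # assumption: weight degree d proportionally to 2**-d (illustrative BED)
    w = [0.0] + [2.0 ** -d for d in range(1, k + 1)]
    s = sum(w)
    return [x / s for x in w]

def sum_normalize(p, q):
    # combine two degree distributions by summing and renormalizing
    z = sum(p) + sum(q)
    return [(a + b) / z for a, b in zip(p, q)]
```

An LT encoder would then draw each encoding symbol's degree from the combined distribution instead of the plain RSD.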
2.
S. Wang, Metrika (1991) 38(1), 259–267
Summary. Using Silverman and Young's (1987) idea of rescaling, a rescaled smoothed empirical distribution function is defined and investigated for the case where the smoothing parameter depends on the data. The rescaled smoothed estimator is shown to often be better than the commonly used ordinary smoothed estimator.
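A minimal sketch of the idea: a Gaussian-kernel smoothed empirical distribution function inflates the sample variance by h², and the rescaling shrinks the centred data (and bandwidth) by (1 + h²/s²)^(-1/2) so the smoothed distribution keeps the sample mean and variance. The shrinkage factor follows the standard variance correction for Gaussian-kernel smoothing; the bandwidth h here is a free argument, whereas the paper lets it depend on the data.

```python
import math
import statistics

def smoothed_cdf(data, x, h):
    # Gaussian-kernel smoothed empirical distribution function at x
    return sum(0.5 * (1.0 + math.erf((x - xi) / (h * math.sqrt(2.0))))
               for xi in data) / len(data)

def rescaled_smoothed_cdf(data, x, h):
    # shrink centred data and bandwidth by (1 + h^2/s^2)^(-1/2) so that
    # the smoothed distribution retains the sample mean and variance
    m = statistics.fmean(data)
    s2 = statistics.pvariance(data)
    shrink = 1.0 / math.sqrt(1.0 + h * h / s2)
    rescaled = [m + (xi - m) * shrink for xi in data]
    return smoothed_cdf(rescaled, x, h * shrink)
```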
4.
This paper tests Barro's (1979) tax‐smoothing hypothesis using Swedish central government data for the period 1952–1999. According to the tax‐smoothing hypothesis, the government sets the budget surplus equal to expected changes in government expenditure. When expenditure is expected to increase, the government runs a budget surplus, and when expenditure is expected to fall, the government runs a budget deficit. The empirical evidence suggests that the model provides a useful benchmark and that tax‐smoothing behavior can explain about 60 percent of the variability in the Swedish central government budget surplus.
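A minimal version of such a test is a bivariate OLS regression of the budget surplus on the expected change in expenditure, with the hypothesis bearing on the slope. The helper below is a generic least-squares fit; the data and any variable names are illustrative, not the paper's Swedish series.

```python
def ols_fit(x, y):
    # bivariate ordinary least squares: returns (intercept, slope)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope
```

Under strict tax smoothing, the surplus should move one-for-one (with the appropriate sign convention) with expected expenditure changes, so the empirical question is how close the estimated slope comes to that benchmark.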
5.
This paper describes Bayesian methods for life test planning with Type II censored data from a Weibull distribution, when the Weibull shape parameter is given. We use conjugate prior distributions and criteria based on estimating a quantile of interest of the lifetime distribution. One criterion is based on a precision factor for a credibility interval for a distribution quantile and the other is based on the length of the credibility interval. We provide simple closed form expressions for the relationship between the needed number of failures and the precision criteria. Examples are used to illustrate the results. Received: October 2002 / Revised: March 2004
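The conjugate structure behind such closed forms can be sketched as follows (standard reasoning for a known shape parameter, not necessarily the paper's exact expressions): with shape \(\beta\) known, the power transform \(y_i = t_i^{\beta}\) is exponential, so a gamma prior on the rate is conjugate even under Type II censoring,
\[
y_i = t_i^{\beta} \sim \mathrm{Exp}(\lambda), \qquad
\lambda \mid \text{data} \sim \mathrm{Gamma}\!\Big(a + r,\; b + \sum_i y_i\Big),
\]
where \(r\) is the number of observed failures and censored units contribute \(y = t_{(r)}^{\beta}\). Since the \(p\)-quantile is
\[
t_p = \left(\frac{-\ln(1-p)}{\lambda}\right)^{1/\beta},
\]
a credibility interval for \(\lambda\) maps monotonically to one for \(t_p\), which is why the precision of the quantile interval can be linked in closed form to the required number of failures \(r\).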
7.
This paper is motivated by automated valuation systems, which would benefit from an ability to estimate spatial variation in location value. It develops theory for the local regression model (LRM), a semiparametric approach to estimating a location value surface. There are two parts to the LRM: (1) an ordinary least squares (OLS) model to control for interior square footage, land area, bathrooms, and other structural characteristics; and (2) a non-parametric smoother (local polynomial regression, LPR) which calculates location value as a function of latitude and longitude. Several methods are used to consistently estimate both parts of the model. The LRM was fit to geocoded hedonic sales data for six towns in the suburbs of Boston, MA. The estimates yield substantial, significant and plausible spatial patterns in location values. Using the LRM as an exploratory tool, local peaks and valleys in location value identified by the model are close to points identified by the tax assessor, and they are shown to add to the explanatory power of an OLS model. Out-of-sample MSE shows that the LRM with a first-degree polynomial (local linear smoothing) is somewhat better than polynomials of degree zero or degree two. Future applications might use degree zero (the well-known NW estimator) because it is available in popular commercial software. The optimized LRM reduces MSE from the OLS model by between 5 and 11 percent while adding information on statistically significant variations in location value.
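Stage (2) in its degree-zero form, the NW estimator mentioned above, can be sketched as a Gaussian-kernel Nadaraya-Watson smoother applied to the stage-1 OLS residuals over coordinates; the function and parameter names are illustrative, not the paper's.

```python
import math

def location_value(coords, resid, lat, lon, h):
    # Nadaraya-Watson (degree-0 local polynomial) estimate of the location
    # value surface at (lat, lon), from stage-1 hedonic OLS residuals
    w = [math.exp(-((la - lat) ** 2 + (lo - lon) ** 2) / (2.0 * h * h))
         for la, lo in coords]
    return sum(wi * ri for wi, ri in zip(w, resid)) / sum(w)
```

Evaluating this on a grid of (lat, lon) points produces the location value surface whose peaks and valleys the paper compares with the assessor's points.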
8.
An increase (decrease) in loan loss provisions decreases (increases) bank earnings but increases (decreases) regulatory capital. Previous studies have separately documented earnings and capital management behavior via loan loss provisions by commercial banks. However, it is difficult to isolate a bank's demand for increasing earnings from its demand for regulatory capital because earnings are a source of capital. Based on the bank objective function, this study investigates the impact of SFAS No. 114 on the information content of loan loss provisions in relation to both earnings quality and capital adequacy in a linear information dynamics framework. Test results show that the association of market value with loan loss provisions became significantly stronger for commercial banks in the post-adoption period than in the pre-adoption period. As a result, SFAS No. 114 is also found to positively affect the association of market value with both bank earnings and regulatory capital through the clean surplus relation, because of the higher value relevance of loan loss provisions. The findings thus provide empirical evidence that SFAS No. 114 has significantly complemented banking regulations by enhancing the accounting measurement construct of loan loss provisions and reducing dispersion from it.
9.
In this paper, we consider the problem of optimal investment by an insurer. The wealth of the insurer is described by a Cramér–Lundberg process. The insurer invests in a market consisting of a bank account and m risky assets. The mean returns and volatilities of the risky assets depend linearly on economic factors that are formulated as the solutions of linear stochastic differential equations. Moreover, the insurer preferences are exponential. With this setting, a Hamilton–Jacobi–Bellman equation that is derived via a dynamic programming approach has an explicit solution found by solving the matrix Riccati equation. Hence, the optimal strategy can be constructed explicitly. Finally, we present some numerical results related to the value function and the ruin probability using the optimal strategy.
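Schematically, with exponential preferences the usual ansatz separates wealth from the economic factors, so only the factor part has to solve a Riccati equation (the notation below is illustrative, not the paper's):
\[
V(t,x,z) \;=\; -\exp\!\big(-\theta\, x\, e^{r(T-t)}\big)\,
\exp\!\big(z^{\top}K(t)\,z + k(t)^{\top}z + c(t)\big),
\]
where \(x\) is wealth, \(z\) the vector of factors, \(\theta\) the risk-aversion parameter and \(r\) the bank-account rate. Substituting this form into the HJB equation reduces the PDE to a matrix Riccati ODE for \(K(t)\) plus linear ODEs for \(k(t)\) and \(c(t)\), solved backward from the terminal conditions \(K(T)=0\), \(k(T)=0\), \(c(T)=0\), which is what makes the optimal strategy explicit.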
10.
The construction of an importance density for partially non‐Gaussian state space models is crucial when simulation methods are used for likelihood evaluation, signal extraction, and forecasting. The method of efficient importance sampling is successful in this respect, but we show that it can be implemented in a computationally more efficient manner using standard Kalman filter and smoothing methods. Efficient importance sampling is generally applicable for a wide range of models, but it is typically a custom‐built procedure. For the class of partially non‐Gaussian state space models, we present a general method for efficient importance sampling. Our novel method makes the efficient importance sampling methodology more accessible because it does not require the computation of a (possibly) complicated density kernel that needs to be tracked for each time period. The new method is illustrated for a stochastic volatility model with a Student's t distribution.
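The computational core of this family of methods is to replace each non-Gaussian observation log-density with a Gaussian whose first two derivatives match at an expansion point, after which standard Kalman filtering and smoothing deliver the importance density. A one-period sketch of that matching step (helper names are illustrative, not from the paper):

```python
def gaussian_match(grad, hess, a):
    # Match a Gaussian log-density to a target log-density at point a:
    # the (negative inverse) curvature gives the variance, and the
    # gradient shifts the mean away from the expansion point
    var = -1.0 / hess(a)
    mean = a + var * grad(a)
    return mean, var
```

Iterating this matching over time periods, with the matched means and variances fed to a Kalman filter/smoother as pseudo-observations, yields the Gaussian importance density without tracking a separate density kernel per period.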