81.
The spot commodities market exhibits both extreme volatility and price spikes, which lead to heavy-tailed distributions of price changes and to autocorrelation. This article uses various Lévy jump models to capture these features in a panel of agricultural commodities observed between January 1990 and February 2014. The results show that Lévy jump models outperform the continuous Gaussian model. Our results show that assuming a constant, or even a deterministic, volatility and drift structure for agricultural commodity spot prices is unrealistic and less efficient than a stochastic specification. The findings demonstrate an interesting correlation between volatility and jumps for a given commodity i, but no relationship between the volatility of commodity i and the probability of jumps of commodity j.
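As a concrete illustration of why jump models capture these features, consider the following minimal sketch (not the authors' exact specification): it simulates a Merton-style jump-diffusion, one simple member of the Lévy jump family, and shows that adding Poisson jumps to a Gaussian diffusion produces the heavy tails referred to above. All parameter values are hypothetical.

# A Merton-style jump-diffusion vs. a pure Gaussian diffusion
# (all parameter values are hypothetical, chosen only for illustration).
import numpy as np

rng = np.random.default_rng(0)
n, dt = 10_000, 1.0 / 252                    # daily steps
mu, sigma = 0.05, 0.2                        # drift and diffusion volatility
lam, jump_mu, jump_sd = 5.0, -0.02, 0.05     # jump intensity and jump-size law

# Gaussian diffusion increments of the log price
gauss = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)

# add compound Poisson jumps: N_t jumps per step, each ~ N(jump_mu, jump_sd^2)
n_jumps = rng.poisson(lam * dt, size=n)
jumps = n_jumps * jump_mu + np.sqrt(n_jumps) * jump_sd * rng.standard_normal(n)
jump_diff = gauss + jumps

def excess_kurtosis(x):
    z = (x - x.mean()) / x.std()
    return (z**4).mean() - 3.0

print("Gaussian kurtosis:      ", excess_kurtosis(gauss))      # close to 0
print("jump-diffusion kurtosis:", excess_kurtosis(jump_diff))  # clearly > 0, heavy tails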
82.
Wavelet shrinkage and thresholding methods constitute a powerful way to carry out signal denoising, especially when the underlying signal has a sparse wavelet representation. They are computationally fast and automatically adapt to the smoothness of the signal to be estimated. Nearly minimax properties of simple threshold estimators, over a large class of function spaces and for a wide range of loss functions, were established in a series of papers by Donoho and Johnstone. The notion behind these wavelet methods is that the unknown function is well approximated by a function with a relatively small proportion of nonzero wavelet coefficients. In this paper, we propose a framework in which this notion of sparseness can be naturally expressed by a Bayesian model for the wavelet coefficients of the underlying signal. Our Bayesian formulation is grounded on the empirical observation that the wavelet coefficients can be summarized adequately by exponential power prior distributions, and it allows us to establish close connections between wavelet thresholding techniques and maximum a posteriori (MAP) estimation for two classes of noise distributions, including heavy-tailed noises. We prove that a great variety of thresholding rules can be derived from these MAP criteria. Simulation examples are presented to substantiate the proposed approach.
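To make the thresholding-as-MAP connection concrete, here is a minimal denoising sketch using PyWavelets. Soft thresholding is the MAP rule under a Laplace prior (the exponential power distribution with exponent 1) and Gaussian noise; the test signal, the db4 wavelet, and the Donoho-Johnstone universal threshold below are illustrative assumptions, not the paper's exact setup.

# Wavelet soft-threshold denoising of a synthetic piecewise-smooth signal.
import numpy as np
import pywt

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024)
signal = np.sin(8 * np.pi * t) * (t > 0.3)            # test signal
noisy = signal + 0.2 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise scale from finest level
thresh = sigma * np.sqrt(2 * np.log(noisy.size))      # universal threshold
denoised_coeffs = [coeffs[0]] + [
    pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]
]
denoised = pywt.waverec(denoised_coeffs, "db4")

print("noisy MSE:   ", np.mean((noisy - signal) ** 2))
print("denoised MSE:", np.mean((denoised[: signal.size] - signal) ** 2))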
83.
84.
The main objective of this study is to analyse whether the combination of regional predictions generated with machine learning (ML) models leads to improved forecast accuracy. To this end, we construct one set of forecasts by estimating models on the aggregate series and another set by using the same models to forecast the individual series prior to aggregation, and we then compare the accuracy of both approaches. We use three ML techniques: support vector regression, Gaussian process regression, and neural network models, with an autoregressive moving average model as a benchmark. We find that the ML methods improve on the benchmark as the forecast horizon increases, suggesting the suitability of these techniques for mid- and long-term forecasting. Although the disaggregated approach yields more accurate predictions, the direct approach improves on the benchmark at shorter forecast horizons.
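A toy sketch of the two forecasting routes compared above, using synthetic regional series and scikit-learn's support vector regression as a stand-in for the paper's model suite (the series, lag order, and SVR settings are all assumptions):

# Direct (aggregate) vs. bottom-up (disaggregate-then-sum) one-step forecasts.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
T, n_regions = 200, 4
regions = np.cumsum(rng.standard_normal((T, n_regions)), axis=0)  # regional series
aggregate = regions.sum(axis=1)

def lagged(y, p=4):
    # rows of X hold p consecutive lags; target is the next observation
    X = np.column_stack([y[i : len(y) - p + i] for i in range(p)])
    return X, y[p:]

def one_step_forecast(y):
    X, target = lagged(y)
    model = SVR(C=10.0).fit(X[:-1], target[:-1])   # hold out the final point
    return model.predict(X[-1:])[0], target[-1]

direct_pred, actual = one_step_forecast(aggregate)
bottom_up_pred = sum(one_step_forecast(regions[:, j])[0] for j in range(n_regions))
print("direct error:   ", abs(direct_pred - actual))
print("bottom-up error:", abs(bottom_up_pred - actual))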
85.
This paper describes the first application to an economy-wide macroeconometric model of recently developed methods for the exact Gaussian estimation of higher-order continuous-time dynamic models. The new model is formulated as a system of second-order differential equations, thus providing a much richer dynamic specification than the predominantly first-order continuous-time macroeconometric models developed during the last 15 years. It also makes intensive use of economic theory to obtain a parsimonious parametrization, is designed to permit a rigorous mathematical investigation of its steady-state and asymptotic stability properties, and makes systematic use of the assumption of long-run rational expectations. In addition to the exact Gaussian estimates of the structural parameters, the paper includes the first set of continuous lag distributions derived from estimates that take account of the exact restrictions on the distribution of the discrete data implied by a continuous-time model. It also includes the first estimates of the coefficient matrices of the exact discrete model, in its VARMAX form, satisfied by the discrete stock and flow data generated by a higher-order continuous-time dynamic model.
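The core idea behind "exact" estimation can be sketched in a few lines: a linear continuous-time system sampled at a fixed interval satisfies an exact discrete-time autoregression whose coefficient matrix is a matrix exponential, which is precisely the kind of restriction a naive discrete approximation ignores. The 2x2 drift matrix below is hypothetical; the paper's model is a second-order structural system, which can be cast in this first-order form by stacking levels and derivatives.

# Exact vs. Euler discretization of a linear continuous-time system
# dx = A x dt + dW sampled at interval h.
import numpy as np
from scipy.linalg import expm

A = np.array([[-0.5, 0.2],
              [0.1, -0.3]])            # hypothetical continuous-time drift matrix
h = 0.25                               # sampling interval (e.g. quarterly)

F = expm(A * h)                        # exact discrete VAR(1) coefficient matrix
print("exact discrete coefficient:\n", F)

# the naive Euler approximation I + A h differs, and the gap grows with h
print("Euler approximation:\n", np.eye(2) + A * h)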
86.
In the asset pricing model proposed by Lucas, consumption growth is independently and identically distributed and the investor's utility function is separable. The Lucas model cannot explain the well-known equity premium puzzle and risk-free rate puzzle. We make two modifications to the Lucas model: consumption growth follows a first-order Gaussian autoregressive process, and investor preferences are described by a utility function with habit formation. The modified model, first, admits a closed-form solution for asset prices and, second, can explain the equity premium puzzle and the risk-free rate puzzle.
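A minimal sketch of one ingredient of the modified model: log consumption growth following a first-order Gaussian autoregression, priced here by a plain CRRA stochastic discount factor via Monte Carlo. Habit formation and the closed-form solution are omitted for brevity, and all parameter values are illustrative.

# AR(1) Gaussian consumption growth and an implied (unconditional) risk-free rate.
import numpy as np

rng = np.random.default_rng(3)
beta, gamma = 0.98, 2.0           # discount factor, risk aversion
mu, rho, sigma = 0.02, 0.4, 0.01  # AR(1) parameters for log consumption growth

T = 100_000
g = np.empty(T)
g[0] = mu
for t in range(1, T):
    g[t] = (1 - rho) * mu + rho * g[t - 1] + sigma * rng.standard_normal()

sdf = beta * np.exp(-gamma * g[1:])   # SDF: beta * (C_{t+1}/C_t)^(-gamma)
risk_free = 1.0 / sdf.mean() - 1.0    # E[m R_f] = 1 gives gross rate 1/E[m]
print("implied risk-free rate:", risk_free)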
87.
We present a scheme by which a probabilistic forecasting system with poor probabilistic calibration may be recalibrated through the incorporation of past performance information. The resulting forecasting system is demonstrably superior to the original, inasmuch as one may use it to win wagers consistently against someone who is using the original system. The scheme utilizes Gaussian process (GP) modeling to estimate a probability distribution over the probability integral transform (PIT) of a scalar predictand. The GP density estimate gives closed-form access to information entropy measures associated with the estimated distribution, which allows the prediction of winnings in wagers against the base forecasting system. A separate consequence of the procedure is that the recalibrated forecast has a uniform expected PIT distribution. One distinguishing feature of the procedure is that it is appropriate even if the PIT values are not i.i.d. The recalibration scheme is formulated in a framework that exploits the deep connections among information theory, forecasting, and betting. We demonstrate the effectiveness of the scheme in two case studies: a laboratory experiment with a nonlinear circuit, and seasonal forecasts of the intensity of the El Niño-Southern Oscillation phenomenon.
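The recalibration idea can be sketched as follows: compute the PIT values of a miscalibrated forecaster, estimate their distribution from past performance, and compose that estimate with the original forecast CDF so that recalibrated PITs are uniform. An empirical CDF stands in below for the paper's Gaussian process density estimate, and the overdispersed Gaussian forecaster is an assumed toy example.

# Recalibrating an overdispersed forecaster through its PIT distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
y = rng.standard_normal(5000)                           # truth ~ N(0, 1)
forecast_cdf = lambda x: stats.norm.cdf(x, scale=2.0)   # overdispersed forecasts

pit = forecast_cdf(y)            # non-uniform: values pile up near 0.5
pit_sorted = np.sort(pit)
# recalibrated CDF = (empirical CDF of past PITs) composed with forecast CDF
recal = lambda x: np.searchsorted(pit_sorted, forecast_cdf(x)) / pit.size

y_new = rng.standard_normal(5000)                       # fresh data
pit_new = np.array([recal(v) for v in y_new])           # approximately uniform
print("original PIT variance:    ", pit.var(), " (uniform is ~0.0833)")
print("recalibrated PIT variance:", pit_new.var())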
88.
To ensure a timely response to emergencies, governments must implement effective ambulance allocation plans. In practice, an emergency medical service (EMS) system works in an uncertain environment with stochastic demand, response times, and travel times, and this uncertainty significantly affects ambulance allocation planning. However, few studies in this field adequately consider the effect of spatiotemporal uncertainty in demand, because it is difficult to quantify. As a result, few analytic models capture the dynamic nature of an EMS system, and the allocation plans they generate are therefore not efficient in practice. This study proposes a simulation-based optimization method for ambulance allocation. A simulation model is constructed to mimic the operational processes of an EMS system and to evaluate the performance of an ambulance allocation plan in an uncertain environment. Gaussian mixture model clustering is used to derive the uncertain spatial demand, and the simulation then generates emergency demand based on the obtained spatial distribution. A Gaussian-process-based search algorithm is used together with the simulation model to identify optimal solutions. To validate the proposed method, a case study is conducted using data on emergency patients in the Shanghai Songjiang District. The experimental results demonstrate that, compared with the plan currently adopted in Songjiang, the proposed methods significantly reduce the delay time and delay frequency of the EMS system. Furthermore, nearly 41% of the allocation cost can be saved.
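The demand-modeling step can be illustrated directly: fit a Gaussian mixture to historical call locations, then sample synthetic calls from it to drive the simulation. The coordinates below are synthetic stand-ins (loosely in the vicinity of Songjiang's longitude and latitude), and the downstream allocation search is omitted.

# Gaussian mixture model of spatial emergency demand, then demand sampling.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
# synthetic historical call coordinates around two hypothetical hot spots
calls = np.vstack([
    rng.normal([121.2, 31.0], 0.02, size=(300, 2)),
    rng.normal([121.3, 31.1], 0.03, size=(200, 2)),
])

gmm = GaussianMixture(n_components=2, random_state=0).fit(calls)
synthetic_calls, _ = gmm.sample(1000)   # spatial demand fed to the simulator
print("component weights:", gmm.weights_)
print("component means:\n", gmm.means_)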
89.
This paper introduces a new positional momentum management strategy based on the expected future ranks of asset returns and trade volume changes, predicted by a bivariate vector autoregressive (VAR) model. The new method is applied to a dataset of 1330 stocks traded on the NASDAQ between 2008 and 2016. It is shown that return ranks are correlated with their own past values and with the current and past ranks of trade volume changes. This result leads to a new expected positional momentum strategy providing portfolios of predicted winners, conditional on past ranks of returns and volume changes. The approach further extends to positional liquidity management: the expected liquid positional strategy selects portfolios of stocks with the strongest realized or predicted increase in trading volume. These new positional management strategies outperform the standard momentum strategies and the equally weighted portfolio in terms of average returns and Sharpe ratio.
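A sketch of the prediction step under stated assumptions: for one stock at a time, fit a bivariate VAR(1) to the cross-sectional ranks of its returns and volume changes, forecast the next-period ranks, and pick predicted winners by rank. The panel below is synthetic, and the paper's lag choice and portfolio construction may differ.

# Rank-based VAR forecasts for an expected positional momentum portfolio.
import numpy as np
from scipy.stats import rankdata
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(6)
T, n_stocks = 120, 50
returns = rng.standard_normal((T, n_stocks))
dvolume = rng.standard_normal((T, n_stocks))

# cross-sectional ranks, scaled into (0, 1) at each date
r_rank = rankdata(returns, axis=1) / (n_stocks + 1)
v_rank = rankdata(dvolume, axis=1) / (n_stocks + 1)

predicted = np.empty(n_stocks)
for j in range(n_stocks):
    series = np.column_stack([r_rank[:, j], v_rank[:, j]])
    fit = VAR(series).fit(1)
    predicted[j] = fit.forecast(series[-1:], steps=1)[0, 0]  # next return rank

winners = np.argsort(predicted)[-10:]   # portfolio of predicted winners
print("predicted winners:", winners)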
90.
This paper tests affine, quadratic, and Black-type Gaussian models on Euro area triple-A Government bond yields for maturities up to 30 years. Quadratic Gaussian models beat affine Gaussian models both in-sample and out-of-sample. A Black-type model best fits the shortest maturities and the extremely low yields observed since 2013, but fits the longest maturities worst. Even for quadratic models, the latent factors can be inferred from some yields observed without errors, which makes quasi-maximum likelihood (QML) estimation feasible. New specifications of quadratic models fit the longest maturities better than does the 'classic' specification of Ahn et al. [2002. 'Quadratic Term Structure Models: Theory and Evidence.' The Review of Financial Studies 15 (1): 243–288], but the opposite is true for the shortest maturities. These new specifications are also more suitable for QML estimation. Overall, quadratic models seem preferable to affine Gaussian models, because of superior empirical performance, and to Black-type models, because of superior tractability. This paper also proposes the vertical method of lines (MOL) to numerically solve the partial differential equations (PDEs) for pricing bonds under multiple non-independent stochastic factors. 'Splitting' the PDE drastically reduces computation. Vertical MOL can be considerably faster and more accurate than finite difference methods.
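A minimal sketch of the vertical method of lines, applied here to the one-factor Vasicek bond PDE because its closed-form price makes the result checkable (the paper applies MOL to multi-factor quadratic models; all parameters below are illustrative). Space is discretized while time is left continuous and handed to an ODE solver.

# Vertical MOL for the Vasicek bond-pricing PDE, checked against the closed form.
import numpy as np
from scipy.integrate import solve_ivp

kappa, theta, sigma, T = 0.5, 0.03, 0.01, 5.0
r = np.linspace(-0.05, 0.15, 201)      # short-rate grid
dr = r[1] - r[0]

def rhs(tau, P):
    # dP/dtau = 0.5 sigma^2 P_rr + kappa (theta - r) P_r - r P, tau = T - t
    P_r = np.gradient(P, dr)
    P_rr = np.gradient(P_r, dr)
    return 0.5 * sigma**2 * P_rr + kappa * (theta - r) * P_r - r * P

sol = solve_ivp(rhs, (0.0, T), np.ones_like(r), rtol=1e-8, atol=1e-10)
P_mol = sol.y[:, -1]

# closed-form Vasicek price for comparison
B = (1 - np.exp(-kappa * T)) / kappa
A = np.exp((theta - sigma**2 / (2 * kappa**2)) * (B - T) - sigma**2 * B**2 / (4 * kappa))
P_exact = A * np.exp(-B * r)

i = len(r) // 2
print("MOL price at r=%.3f: %.6f (exact %.6f)" % (r[i], P_mol[i], P_exact[i]))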