7 query results (search time: 0 ms)
1.
2.
Research objective: To construct an enhanced index tracking model in which the trade-off between tracking error and excess return can be tuned, and to develop a generalized least angle regression algorithm (GLARS) that computes the full compromise path of model solutions as the tuning parameter varies. Research methods: Using simulated data and historical data on five major world stock market indices, the proposed model and algorithm are benchmarked against comparable models and algorithms; in addition, several sparse and stable portfolios are constructed to track the SSE 50 Index and evaluated with indicators such as the information ratio. Findings: The proposed model yields sparse portfolios that balance tracking performance against excess return, and GLARS solves the model faster and more reliably than traditional algorithms with pre-specified parameters. Innovations: A tuning parameter is introduced to balance tracking performance and excess return; reflecting the characteristics of the Chinese stock market, a non-negativity constraint is imposed on the enhanced index tracking model; and GLARS traverses all compromise-optimal solutions. Value: The proposed enhanced index tracking model is well suited to the Chinese market, delivers excess returns while keeping the portfolio sparse, and enriches the methodological literature on portfolio selection.
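The trade-off the abstract describes can be made concrete with a toy sketch. This is not the paper's GLARS path algorithm, and all data and parameter values below are invented: for a single value of a tuning parameter `lam`, we minimize tracking error minus `lam` times mean excess return, under the non-negativity (no short sales) and full-investment constraints mentioned in the abstract.

```python
# Hypothetical sketch, not the paper's model or algorithm: enhanced index
# tracking for one fixed value of the trade-off parameter, solved with SLSQP.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, N = 200, 10                                  # observations, candidate assets
R = rng.normal(0.0005, 0.01, size=(T, N))       # simulated asset returns
r_idx = R @ np.full(N, 1.0 / N) + rng.normal(0, 0.001, size=T)  # index returns

def objective(w, lam):
    excess = R @ w - r_idx
    tracking_error = np.mean(excess ** 2)        # stay close to the index
    excess_return = np.mean(excess)              # reward outperformance
    return tracking_error - lam * excess_return  # lam tunes the trade-off

cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)  # fully invested
bounds = [(0.0, 1.0)] * N                                  # non-negativity
w0 = np.full(N, 1.0 / N)

res = minimize(objective, w0, args=(0.5,), bounds=bounds, constraints=cons)
weights = res.x
```

GLARS, by contrast, traces the whole path of solutions as `lam` varies; the sketch above recovers a single point on that path.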
3.
This paper proposes a novel methodology for detecting Granger causality on average in vector autoregressive settings using feedforward neural networks. The approach accommodates unknown dependence structures between elements of high-dimensional multivariate time series with weak and strong persistence. We propose a two-stage procedure: first, we maximize the transfer of information between input and output variables in the network in order to obtain an optimal number of nodes in the intermediate hidden layers. Second, we apply a novel sparse double group lasso penalty to the weights that characterize the nodes of the neural network, in order to identify the variables with predictive ability and, hence, the presence of Granger causality. We show that these weights are correctly identified as the sample size increases. We apply this method to the recently created Tobalaba network of renewable energy companies and, using Granger causality measures to map the connections, show the increase in connectivity between companies after the creation of the network.
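The group-penalty idea can be illustrated with a toy version (this is a sketch under invented data and sizes, not the paper's double group lasso or its information-transfer stage): a one-hidden-layer network whose input-to-hidden weights are penalized with a group lasso, one group per input series, trained by proximal gradient descent with a group soft-thresholding step. Inputs whose weight group survives are flagged as candidate Granger causes.

```python
# Illustrative sketch only: group lasso on the input weights of a tiny network.
import numpy as np

rng = np.random.default_rng(1)
T, P, H = 500, 5, 8                       # samples, input series, hidden units
X = rng.normal(size=(T, P))
y = np.tanh(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=T)  # only x0, x1 matter

W1 = rng.normal(scale=0.1, size=(P, H))   # input weights, grouped by row
w2 = rng.normal(scale=0.1, size=H)        # output weights
lr, lam = 0.05, 0.02

for _ in range(2000):
    Hid = np.tanh(X @ W1)                 # forward pass
    err = Hid @ w2 - y
    gW2 = Hid.T @ err / T                 # backprop: output layer gradient
    gH = np.outer(err, w2) * (1 - Hid ** 2)
    gW1 = X.T @ gH / T                    # backprop: input layer gradient
    w2 -= lr * gW2
    W1 -= lr * gW1
    # proximal step: group soft-thresholding of each input's weight row
    norms = np.linalg.norm(W1, axis=1, keepdims=True)
    W1 *= np.maximum(0.0, 1.0 - lr * lam / np.maximum(norms, 1e-12))

active = np.linalg.norm(W1, axis=1) > 1e-3   # surviving input groups
mse = np.mean((np.tanh(X @ W1) @ w2 - y) ** 2)
```

The penalty shrinks whole rows of `W1` to zero, so variable selection happens at the level of input series rather than individual weights.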
4.
When a portfolio consists of a large number of assets, it generally incorporates many small and illiquid positions and requires a large amount of rebalancing, which can incur high transaction costs. For financial index tracking, it is desirable to avoid such atomized, unstable portfolios, which are difficult to realize and manage. A natural way of achieving this goal is to build a tracking portfolio that is sparse, holding only a small number of assets. A direct approach is to impose a cardinality constraint restricting the number of assets held in the tracking portfolio. However, this requires pre-specifying the maximum number of assets to select, which is rarely practicable, and the cardinality-constrained optimization problem is NP-hard, making it computationally expensive to solve, especially in high-dimensional settings. Motivated by this, this paper employs a regularization approach based on the adaptive elastic-net (Aenet) model for high-dimensional index tracking. The proposed method represents a family of convex regularization methods, which nests the traditional Lasso, adaptive Lasso (Alasso), and elastic-net (Enet) as special cases. To make the formulation more practical and general, we also take the full-investment condition and turnover restrictions (or transaction costs) into account. An efficient algorithm based on coordinate descent with closed-form updates is derived to solve the resulting optimization problem. Empirical results show that the proposed method is computationally efficient and has competitive out-of-sample performance, especially in high-dimensional settings.
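The closed-form coordinate descent updates can be sketched for the plain elastic-net special case (the paper's Aenet model adds adaptive weights plus the full-investment and turnover terms, which are omitted here; data and penalty values are invented for illustration):

```python
# Minimal sketch: coordinate descent with soft-thresholding for the plain
# elastic-net regression; not the paper's full Aenet formulation.
import numpy as np

def soft_threshold(z, gamma):
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def enet_coordinate_descent(R, y, lam, alpha, n_iter=200):
    """Minimize (1/2T)||y - Rw||^2 + lam*(alpha*||w||_1 + (1-alpha)/2*||w||^2)."""
    T, N = R.shape
    w = np.zeros(N)
    col_sq = (R ** 2).sum(axis=0) / T
    for _ in range(n_iter):
        for j in range(N):
            r_j = y - R @ w + R[:, j] * w[j]       # partial residual for asset j
            rho = R[:, j] @ r_j / T
            # closed-form one-coordinate update
            w[j] = soft_threshold(rho, lam * alpha) / (col_sq[j] + lam * (1 - alpha))
    return w

rng = np.random.default_rng(2)
R = rng.normal(size=(300, 20))                      # simulated asset returns
true_w = np.zeros(20)
true_w[:3] = [0.5, 0.3, 0.2]                        # only 3 assets matter
y = R @ true_w + 0.01 * rng.normal(size=300)        # simulated index returns
w_hat = enet_coordinate_descent(R, y, lam=0.01, alpha=0.9)
```

Because each coordinate update is available in closed form, one full pass over the assets is cheap, which is what makes the approach attractive in high dimensions.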
5.
We propose two data-based priors for vector error correction models. Both priors lead to highly automatic approaches that require only minimal user input. For the first, we propose a reduced-rank prior that encourages shrinkage towards a low-rank, row-sparse, and column-sparse long-run matrix. For the second, we propose the use of the horseshoe prior, which shrinks all elements of the long-run matrix towards zero. Two empirical investigations reveal that Bayesian vector error correction (BVEC) models equipped with our proposed priors scale well to higher dimensions and forecast well. In comparison to VARs in first differences, they are able to exploit the information in the level variables, which turns out to be relevant for improving the forecasts of some macroeconomic variables. A simulation study shows that the BVEC with data-based priors possesses good frequentist estimation properties.
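The shrinkage behaviour of the horseshoe prior is easy to see from prior draws (an illustration only, not the paper's estimation code; the global scale is fixed at 1 for clarity): each element gets its own half-Cauchy local scale, which concentrates mass near zero while keeping very fat tails.

```python
# Illustrative draws from the horseshoe prior versus a standard normal.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
tau = 1.0                                  # global scale (fixed here)
lam = np.abs(rng.standard_cauchy(size=n))  # half-Cauchy local scales
theta = rng.normal(size=n) * tau * lam     # horseshoe draws
normal = rng.normal(size=n)                # reference: standard normal draws

near_zero_hs = np.mean(np.abs(theta) < 0.1)   # mass near zero (shrinkage)
near_zero_n = np.mean(np.abs(normal) < 0.1)
tail_hs = np.mean(np.abs(theta) > 10)         # tail mass (large signals survive)
tail_n = np.mean(np.abs(normal) > 10)
```

The horseshoe simultaneously has more mass near zero and heavier tails than the normal, which is why it can shrink most elements of the long-run matrix aggressively while leaving genuinely large elements almost untouched.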
6.
Machine learning methods used in finance for corporate credit rating lack transparency as to which accounting features are important for the respective rating. A counterfactual explanation is a methodology that attempts to find the smallest modification of the input values that changes the prediction of a learned algorithm to a new output, other than the original one. We propose a “sparsity algorithm” that uses counterfactual explanations to identify the most important features for obtaining a higher credit score. We validate the algorithm on synthetically generated data and apply it to quarterly financial statements of companies in the US market. We provide evidence that the counterfactual explanation can capture the majority of features that change between two quarters when corporate ratings improve. The results also show that the higher a company's rating, the greater the “effort” required to improve it further.
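The sparsity idea can be demonstrated on a toy scorer (all weights, features, and step sizes below are invented; this is not the paper's algorithm or data): given a fixed scoring function, greedily change one feature at a time, starting with the most influential one, until the predicted class flips. The features touched along the way form a sparse counterfactual explanation.

```python
# Toy greedy counterfactual search against a fixed linear scorer.
import numpy as np

w = np.array([0.8, -0.5, 0.3, 0.05])       # invented "learned" weights
b = -0.2

def predict(x):                             # 1 = good rating, 0 = bad rating
    return int(w @ x + b > 0)

x = np.array([-0.5, 0.5, 0.0, 0.0])        # instance currently rated "bad"
cf = x.copy()
changed = set()
step = 0.25
order = np.argsort(-np.abs(w))             # most influential features first

for j in order:
    # push feature j in the score-increasing direction, within a cap
    while predict(cf) == 0 and abs(cf[j] - x[j]) < 1.0:
        cf[j] += step * np.sign(w[j])
        changed.add(j)
    if predict(cf) == 1:
        break

n_changed = len(changed)                   # sparsity of the explanation
```

Here only two of the four features need to move before the rating flips; the total distance travelled is a crude proxy for the “effort” the abstract refers to.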
7.
A class of global-local hierarchical shrinkage priors for estimating large Bayesian vector autoregressions (BVARs) has recently been proposed. We examine whether three such priors (Dirichlet–Laplace, Horseshoe, and Normal–Gamma) can systematically improve the forecast accuracy of two commonly used benchmarks, the hierarchical Minnesota prior and the stochastic search variable selection (SSVS) prior, when predicting key macroeconomic variables. Using both small and large data sets, point and density forecasts alike suggest that the answer is no. Instead, our results indicate that the hierarchical Minnesota prior remains a solid practical choice when forecasting macroeconomic variables. In light of existing optimality results, a possible explanation for our finding is that macroeconomic data are not sparse but dense.
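To make the benchmark concrete, here is a schematic sketch of the first and second prior moments of a (non-hierarchical) Minnesota prior for a VAR; the hierarchical version in the paper treats the overall tightness as a parameter to be estimated, and the constants below are illustrative defaults, not the paper's settings.

```python
# Schematic Minnesota prior moments for the coefficients of a VAR(p).
import numpy as np

def minnesota_prior(K, p, sigma, lam1=0.2, lam2=0.5):
    """Prior mean and variance for VAR coefficient matrices A_1, ..., A_p.

    Prior mean: 1 on own first lags (random-walk prior), 0 elsewhere.
    Prior variance: decays with the lag as 1/l^2, and cross-variable lags
    are shrunk harder (factor lam2) with a scale correction sigma_i/sigma_j.
    """
    mean = np.zeros((p, K, K))
    var = np.zeros((p, K, K))
    mean[0] = np.eye(K)                       # random-walk prior mean
    for l in range(1, p + 1):
        for i in range(K):
            for j in range(K):
                scale = 1.0 if i == j else (lam2 * sigma[i] / sigma[j]) ** 2
                var[l - 1, i, j] = (lam1 / l) ** 2 * scale
    return mean, var

mean, var = minnesota_prior(K=3, p=2, sigma=np.array([1.0, 2.0, 0.5]))
```

The global-local priors compared in the abstract replace this fixed, equation-specific variance structure with per-coefficient local scales, which is exactly where the sparse-versus-dense question bites.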
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号