International Journal of Forecasting, 2023, 39(1): 332–345
Probabilistic time series forecasting is crucial in many application domains, such as retail, e-commerce, finance, and biology. With the increasing availability of large volumes of data, a number of neural architectures have been proposed for this problem. In particular, Transformer-based methods achieve state-of-the-art performance on real-world benchmarks. However, these methods require a large number of parameters to be learned, which imposes high memory requirements on the computational resources for training such models. To address this problem, we introduce a novel bidirectional temporal convolutional network that requires an order of magnitude fewer parameters than a common Transformer-based approach. Our model combines two temporal convolutional networks: the first network encodes future covariates of the time series, whereas the second network encodes past observations and covariates. We jointly estimate the parameters of an output distribution via these two networks. Experiments on four real-world datasets show that our method performs on par with four state-of-the-art probabilistic forecasting methods, including a Transformer-based approach and WaveNet, on two point metrics (sMAPE and NRMSE) as well as on a set of range metrics (quantile loss percentiles) in the majority of cases. We also demonstrate that our method requires significantly fewer parameters than Transformer-based methods, so the model can be trained faster with significantly lower memory requirements, which in turn reduces the infrastructure cost of deploying these models.
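The core architectural idea — one temporal convolutional network looking forward over known future covariates, another looking backward over past observations, with their outputs combined to parameterize a predictive distribution — can be sketched with plain NumPy. This is a minimal single-channel illustration of the causal/anti-causal padding mechanism, not the authors' implementation: the linear heads, shared sequence length, and Gaussian output are placeholder assumptions.

```python
import numpy as np

def causal_conv1d(x, w):
    """Causal 1-D convolution: output at step t depends only on x[:t+1].
    Achieved by left-padding the input with k-1 zeros."""
    k = len(w)
    xp = np.concatenate([np.zeros(k - 1), x])
    return np.array([xp[t:t + k] @ w for t in range(len(x))])

def anticausal_conv1d(x, w):
    """Anti-causal 1-D convolution: output at step t depends only on x[t:].
    Achieved by right-padding the input with k-1 zeros."""
    k = len(w)
    xp = np.concatenate([x, np.zeros(k - 1)])
    return np.array([xp[t:t + k] @ w for t in range(len(x))])

def forecast_params(past_obs, future_cov, w_past, w_future):
    """Toy bidirectional combination: sum the two encoders' outputs and
    emit per-step (mu, sigma) of a Gaussian predictive distribution."""
    h = causal_conv1d(past_obs, w_past) + anticausal_conv1d(future_cov, w_future)
    mu = h                          # location: identity head (placeholder)
    sigma = np.log1p(np.exp(h))     # scale: softplus keeps sigma > 0
    return mu, sigma
```

In the actual model these would be stacks of dilated convolutions with many channels, and the distribution parameters would be fit by maximizing the likelihood of the training data; the sketch only shows how the two padding directions restrict each network's receptive field.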