1.
Trades and Quotes: A Bivariate Point Process   (total citations: 3; self-citations: 0; citations by others: 3)
This article formulates a bivariate point process to jointly analyze trade and quote arrivals. In microstructure models, trades may reveal private information that is then incorporated into new price quotes. This article examines the speed of this information flow and the circumstances that govern it. A joint likelihood function for trade and quote arrivals is specified in a way that recognizes that an intervening trade sometimes censors the time between a trade and the subsequent quote. Models of trades and quotes are estimated for eight stocks using Trade and Quote database (TAQ) data. The essential finding for the arrival of price quotes is that information flow variables, such as high trade arrival rates, large volume per trade, and wide bid–ask spreads, all predict more rapid price revisions. This means prices respond more quickly to trades when information is flowing, so that the price impacts of trades and ultimately the volatility of prices are high in such circumstances.
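The censoring idea in the joint likelihood can be sketched in a few lines. This is a minimal illustration under an assumed constant (exponential) hazard for the trade-to-quote duration — the article's actual model is far richer — where a duration cut short by an intervening trade contributes only a survival term:

```python
import numpy as np

def censored_loglik(rate, durations, censored):
    """Log-likelihood for exponential trade-to-quote durations.

    censored[i] is True when an intervening trade arrived first, so we
    only know the quote waited at least durations[i] (survival term);
    uncensored spells contribute the full density term.
    """
    durations = np.asarray(durations, dtype=float)
    censored = np.asarray(censored, dtype=bool)
    # Density term: log(rate) - rate * t; survival term: -rate * t.
    return -rate * durations.sum() + np.log(rate) * (~censored).sum()

# With an exponential hazard the MLE has a closed form:
# observed quote arrivals divided by total time at risk.
durations = [0.5, 1.2, 0.3, 2.0]
censored = [False, False, True, False]
rate_hat = sum(1 for c in censored if not c) / sum(durations)
```

The maximiser counts the censored spell in the exposure time but not in the event count, which is exactly how censoring avoids biasing the estimated quote-arrival intensity upward.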
2.
This article demonstrates that a dual pair of input-output price and quantity models, taken together, constitutes a composite network flow model. The network flow model has become a dominant analytical tool within the fields of operations analysis and electrical engineering, and the aim of the article is to enable input-output users within non-standard applications to take advantage of the large body of literature within these disciplines. First, through a simple example, it is shown that the solution of a dual pair of input-output models is a special case of the general problem of finding a minimum cost circulation in a transportation network. Second, it is shown that the common method of determining quantities and prices in input-output models is identical to the classical node method for determination of currents and voltages in an electrical network. The theory of transportation networks offers well developed methods for analysis of capacity constraints on the network flows, while the theory of electrical networks supplies methods for analysis of models with price sensitivities and for dynamic analysis. In addition, the elegant symmetry of currents and voltages in electrical networks contributes significantly to a better understanding of the logical relationship between price and quantity models.
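The dual pair the article builds on can be written down directly. A minimal sketch with a hypothetical two-industry technology matrix `A`, final demand `f`, and primary-input coefficients `v` (all numbers illustrative, not from the article):

```python
import numpy as np

# Hypothetical 2-industry example: A[i, j] is the input from industry i
# needed per unit of output of industry j.
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])
f = np.array([10.0, 20.0])   # final demand (quantity model)
v = np.array([0.5, 0.3])     # primary input per unit of output (price model)

L = np.linalg.inv(np.eye(2) - A)   # Leontief inverse

x = L @ f    # gross outputs solving x = A x + f
p = v @ L    # prices solving p = p A + v (row-vector form)

# Duality: total primary-input cost equals total value of final demand,
# since v @ x = v @ L @ f = p @ f.
assert np.isclose(v @ x, p @ f)
```

The shared Leontief inverse is what makes the two models two views of one composite flow model: quantities propagate forward through `L @ f`, prices propagate backward through `v @ L`.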
3.
We examine the impact of monetary policy on the S&P 500 using intraday data. The analysis shows an economically and statistically significant relationship between S&P 500 intraday returns and changes in the Fed funds target rate. The significance and magnitude of the response depend on whether the change was expected or unexpected. An expected change in the Fed funds target rate has no impact on prices in the broad equity market; however, an unexpected change of 25 basis points in the Fed funds target rate results in an approximately 48 basis point decline in the broad equity market's return. These market reactions are rapid, with the equity market reaching a new equilibrium within 15 minutes.
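The expected/unexpected decomposition can be illustrated as follows. In this sketch the surprise is measured as the announced target change minus a market-expected change (in practice such expectations are typically derived from Fed funds futures, though the abstract does not specify the method), and the return response is estimated by OLS; the event data below are invented for illustration:

```python
import numpy as np

# Hypothetical event data: announced target change and market-expected
# change (basis points), plus the S&P 500 return (bp) around each event.
actual   = np.array([25, -25,  50, 0,  25])
expected = np.array([25,   0,  25, 0,   0])
ret      = np.array([-2,  46, -46, 1, -47])

surprise = actual - expected   # unexpected component, in bp

# OLS of returns on the surprise: the slope is the bp move in the index
# per bp of unexpected target change (expected changes drop out).
X = np.column_stack([np.ones_like(surprise), surprise]).astype(float)
beta, *_ = np.linalg.lstsq(X, ret.astype(float), rcond=None)
intercept, slope = beta
```

With numbers mimicking the abstract's finding, the slope comes out near -48/25, i.e. roughly a 1.9 bp decline per bp of positive surprise, while fully anticipated changes (zero surprise) load only on the intercept.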
4.
Non-negative matrix factorisation (NMF) is an increasingly popular unsupervised learning method. However, parameter estimation in the NMF model is a difficult high-dimensional optimisation problem. We consider algorithms of the alternating least squares type. Solutions to the least squares problem fall in two categories. The first category is iterative algorithms, which include algorithms such as the majorise–minimise (MM) algorithm, coordinate descent, gradient descent and the Févotte-Cemgil expectation–maximisation (FC-EM) algorithm. We introduce a new family of iterative updates based on a generalisation of the FC-EM algorithm. The coordinate descent, gradient descent and FC-EM algorithms are special cases of this new EM family of iterative procedures. Curiously, we show that the MM algorithm is never a member of our general EM algorithm. The second category is based on cone projection. We describe and prove a cone projection algorithm tailored to the non-negative least squares problem. We compare the algorithms on a test case and on the problem of identifying mutational signatures in human cancer. We generally find that cone projection is an attractive choice. Furthermore, in the cancer application, we find that a mix-and-match strategy performs better than running each algorithm in isolation.
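For orientation, here is a minimal sketch of one standard iterative scheme for NMF — the classical Lee–Seung multiplicative updates, an instance of the majorise–minimise idea the abstract mentions (not the article's new EM family or its cone projection algorithm):

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=200, seed=0, eps=1e-9):
    """Lee-Seung multiplicative updates for V ~ W @ H under Frobenius loss."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        # Each update is a majorise-minimise step: the loss never increases,
        # and non-negativity is preserved because every factor is non-negative.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Factorise an exactly rank-2 non-negative matrix.
rng = np.random.default_rng(1)
V = rng.random((6, 2)) @ rng.random((2, 5))
W, H = nmf_multiplicative(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

On an exactly low-rank non-negative matrix the relative reconstruction error drops close to zero, which is the easy case; the hard high-dimensional instances the article targets are where the choice among iterative schemes and cone projection starts to matter.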
5.
The aggregation level of industries in the Danish macroeconomic model ADAM is examined using a new indicator of aggregation bias. The indicator is decomposed into contributions from the original industries, thereby clearly identifying the aggregation problems which caused the six industry groups of the older versions of ADAM to be disaggregated into the current 19 groups. An aggregation key minimizing the new bias indicator is found: from the microlevel of 64 industries, 18 ‘optimal’ industry groups are formed through ‘clustering’; these groups are very similar to the current ADAM groups. Altogether, the conclusions based on the new indicator closely resemble those reached through years of practical experience.
6.
This paper develops a joint theory of aggregation of input–output quantity and price models. The main emphasis is on the problem of aggregation of industries in the models. While aggregation of quantity models is a familiar topic in economic literature, the aggregation of price models is a largely unexplored subject. Here, however, quantity and price models are considered as two parts of a single, composite flow model. This understanding implies that each result in the quantity model has a dual counterpart in the price model and vice versa. Through consistent use of this duality principle, significant results are developed for both model types, and a number of well-known results can be stated in a simpler way. Specifically, a number of conditions for perfect aggregation of final demands, primary inputs and industries in price models as well as quantity models are formulated. On this basis a new indicator of aggregation bias is suggested. The indicator can be decomposed into the contributions of each detailed industry, enabling the user to identify atypical industries in each group. Furthermore, the indicator can be computed even with no knowledge of the coefficients of the detailed models.
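The kind of bias at stake can be demonstrated directly: solve a detailed quantity model, aggregate the solution, and compare with the solution of the aggregated model. This sketch uses a hypothetical three-industry example with industries 1 and 2 merged; the base-year-output weighting below is the standard consistent-aggregation construction, not necessarily the article's own indicator:

```python
import numpy as np

# Detailed 3-industry technology and final demand (hypothetical numbers).
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.10, 0.10],
              [0.05, 0.05, 0.20]])
f_base = np.array([10.0, 20.0, 30.0])

S = np.array([[1.0, 1.0, 0.0],   # aggregation key: merge industries 1 and 2
              [0.0, 0.0, 1.0]])

def solve(Amat, fvec):
    """Solve the Leontief quantity model (I - A) x = f."""
    return np.linalg.solve(np.eye(len(fvec)) - Amat, fvec)

x_base = solve(A, f_base)
x_grp = S @ x_base
# Aggregated coefficients weighted by base-year detailed outputs
# (each aggregated column is divided by its group's output).
A_agg = (S @ A @ np.diag(x_base) @ S.T) / x_grp

# At the base-year final demand the aggregation is exact by construction...
bias_base = solve(A_agg, S @ f_base) - S @ x_base
# ...but a different final demand exposes an aggregation bias.
f_new = np.array([30.0, 10.0, 20.0])
bias_new = solve(A_agg, S @ f_new) - S @ solve(A, f_new)
```

Perfect aggregation for all final demands would require the merged industries to have identical aggregated input structures, which fails here: that gap is what a bias indicator decomposed by detailed industry is designed to expose.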
7.
We compare 330 ARCH‐type models in terms of their ability to describe the conditional variance. The models are compared out‐of‐sample using DM–$ exchange rate data and IBM return data, where the latter is based on a new data set of realized variance. We find no evidence that a GARCH(1,1) is outperformed by more sophisticated models in our analysis of exchange rates, whereas the GARCH(1,1) is clearly inferior to models that can accommodate a leverage effect in our analysis of IBM returns. The models are compared with the test for superior predictive ability (SPA) and the reality check for data snooping (RC). Our empirical results show that the RC lacks power to an extent that makes it unable to distinguish ‘good’ and ‘bad’ models in our analysis. Copyright © 2005 John Wiley & Sons, Ltd.
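The GARCH(1,1) benchmark at the centre of the comparison is just a three-parameter recursion; a minimal sketch (parameter values illustrative):

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1):
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty(len(r))
    # Initialise at the unconditional variance (requires alpha + beta < 1).
    sigma2[0] = omega / (1.0 - alpha - beta)
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# Illustrative daily returns and typical parameter magnitudes.
rng = np.random.default_rng(0)
r = rng.standard_normal(1000) * 0.01
sig2 = garch11_variance(r, omega=1e-6, alpha=0.05, beta=0.90)
```

Its persistence term `beta * sigma2[t-1]` captures volatility clustering, but the squared-return term is symmetric in the sign of `r[t-1]`, which is exactly why leverage-effect extensions beat it on equity returns like IBM's while adding nothing for the exchange rate data.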
8.
We introduce a simulation scheme for Brownian semistationary processes, which is based on discretizing the stochastic integral representation of the process in the time domain. We assume that the kernel function of the process is regularly varying at zero. The novel feature of the scheme is to approximate the kernel function by a power function near zero and by a step function elsewhere. The resulting approximation of the process is a combination of Wiener integrals of the power function and a Riemann sum, which is why we call this method a hybrid scheme. Our main theoretical result describes the asymptotics of the mean square error of the hybrid scheme, and we observe that the scheme leads to a substantial improvement of accuracy compared to the ordinary forward Riemann-sum scheme, while having the same computational complexity. We exemplify the use of the hybrid scheme by two numerical experiments, where we examine the finite-sample properties of an estimator of the roughness parameter of a Brownian semistationary process and study Monte Carlo option pricing in the rough Bergomi model of Bayer et al. (Quant. Finance 16:887–904, 2016), respectively.
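For orientation, the ordinary forward Riemann-sum scheme that the hybrid scheme improves upon can be sketched in a few lines. This sketch takes unit volatility and the gamma-type kernel g(t) = t**alpha * exp(-lam * t), which is regularly varying at zero as the article assumes; the hybrid scheme itself (Wiener integrals of a power function near zero) is not implemented here:

```python
import numpy as np

def bss_riemann(alpha, lam, n_steps, dt, cutoff, seed=0):
    """Forward Riemann-sum approximation of a Brownian semistationary
    process Y(t) = int_{-inf}^{t} g(t - s) dW(s) with unit volatility
    and kernel g(t) = t**alpha * exp(-lam * t), truncated at 'cutoff'.

    The singular kernel is simply evaluated on the grid, so its behaviour
    at lag zero is captured poorly -- the inaccuracy the hybrid scheme's
    power-function approximation near zero is designed to fix."""
    rng = np.random.default_rng(seed)
    n_lags = int(cutoff / dt)
    lags = np.arange(1, n_lags + 1) * dt
    g = lags ** alpha * np.exp(-lam * lags)
    # Brownian increments covering the pre-sample history plus the sample.
    dW = rng.standard_normal(n_steps + n_lags) * np.sqrt(dt)
    # Each Y[i] is a moving average of increments weighted by the kernel.
    Y = np.convolve(dW, g, mode="valid")[:n_steps]
    return Y

Y = bss_riemann(alpha=-0.4, lam=1.0, n_steps=500, dt=1e-3, cutoff=1.0)
```

With alpha = -0.4 the kernel blows up at zero, so the paths are rougher than Brownian motion; it is precisely in this rough regime (relevant for the rough Bergomi model) that the plain Riemann sum loses accuracy and the hybrid correction pays off.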
9.
In a recent paper we have introduced the class of realised kernel estimators of the increments of quadratic variation in the presence of noise. We showed that this estimator is consistent and derived its limit distribution under various assumptions on the kernel weights. In this paper we extend our analysis, looking at the class of subsampled realised kernels and we derive the limit theory for this class of estimators. We find that subsampling is highly advantageous for estimators based on discontinuous kernels, such as the truncated kernel. For kinked kernels, such as the Bartlett kernel, we show that subsampling is impotent, in the sense that subsampling has no effect on the asymptotic distribution. Perhaps surprisingly, for the efficient smooth kernels, such as the Parzen kernel, we show that subsampling is harmful as it increases the asymptotic variance. We also study the performance of subsampled realised kernels in simulations and in empirical work.
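A minimal sketch of a realised kernel with Bartlett weights — one of the kinked kernels discussed — applied to simulated noisy prices; the flat-top modification and end-effect treatment of the authors' actual estimator are omitted, and all simulation parameters are illustrative:

```python
import numpy as np

def realised_kernel_bartlett(prices, H):
    """Realised kernel estimate of quadratic variation from prices,
    using Bartlett weights k(h) = 1 - h/(H+1) on return autocovariances."""
    r = np.diff(np.log(prices))
    rk = np.sum(r * r)                        # realised variance term
    for h in range(1, H + 1):
        gamma_h = np.sum(r[h:] * r[:-h])      # h-th realised autocovariance
        rk += 2.0 * (1.0 - h / (H + 1.0)) * gamma_h
    return rk

# Efficient log price with known integrated variance, plus i.i.d. noise.
rng = np.random.default_rng(0)
n, true_var = 23400, 1e-4                     # one obs/second, 6.5h day
efficient = np.cumsum(rng.standard_normal(n) * np.sqrt(true_var / n))
noisy = np.exp(efficient + rng.standard_normal(n) * 5e-5)

rv = np.sum(np.diff(np.log(noisy)) ** 2)      # plain realised variance
rk = realised_kernel_bartlett(noisy, H=30)
```

Microstructure noise makes successive returns negatively autocorrelated, so plain realised variance is biased upward while the weighted autocovariance terms pull the kernel estimate back toward the true quadratic variation; the article's question is then what subsampling adds on top, which for this Bartlett-type kernel turns out to be nothing asymptotically.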
10.
Maier, Norbert; Jørgensen, Julie Runge; Lunde, Asger; Toivanen, Otto. De Economist (2021) 169(2): 141–178.
We provide an ex-post analysis of the 2005 TeliaSonera-Chess merger in the Norwegian mobile telecommunication market. Applying a difference-in-difference approach and a synthetic...