31.
This paper studies the long- and short-run relationship between financial liberalization and stock market efficiency. It expands the extant body of knowledge by investigating Granger causality using the mean group, common correlated effects mean group, and common correlated effects pooled estimators on balanced panel data for 27 emerging markets over the period 1996–2011. We find evidence that financial liberalization Granger-causes stock market efficiency, consistent with the liberalization-leads-to-efficiency hypothesis. Our work thus makes a fresh contribution to the literature by focusing on the informational efficiency of stock markets rather than on financial development. Furthermore, we find that a negative long-term relationship between financial liberalization and stock return autocorrelation coexists with a positive short-term relationship between the two. The finding that financial liberalization deteriorates stock market efficiency in the short run but improves it in the long run allows us to draw an analogy with the J-curve hypothesis.
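The estimators named in the abstract are heterogeneous-panel techniques. The sketch below illustrates only the plain mean-group step of a Granger-type regression on a simulated balanced panel; all variable names, data, and parameter values are invented for illustration, and the common correlated effects variants would additionally augment each country regression with cross-section averages.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated balanced panel: N countries, T years, a liberalization index and a
# market-efficiency proxy (e.g. return autocorrelation). All values are invented.
N, T = 27, 16
lib = np.cumsum(rng.normal(0.05, 0.10, size=(N, T)), axis=1)   # liberalization index
eff = np.zeros((N, T))                                         # efficiency proxy
for t in range(1, T):
    eff[:, t] = 0.4 * eff[:, t - 1] - 0.3 * lib[:, t - 1] + rng.normal(0, 0.2, N)

# Mean-group (MG) estimator of a Granger-type regression
#   eff[i, t] = a_i + rho_i * eff[i, t-1] + gamma_i * lib[i, t-1] + u[i, t]:
# fit the regression country by country, then average the gamma_i coefficients.
gammas = np.empty(N)
for i in range(N):
    y = eff[i, 1:]
    X = np.column_stack([np.ones(T - 1), eff[i, :-1], lib[i, :-1]])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    gammas[i] = b[2]

mg = gammas.mean()
se = gammas.std(ddof=1) / np.sqrt(N)
print(f"MG coefficient on lagged liberalization: {mg:.3f}  (t = {mg / se:.2f})")
# A significantly negative coefficient would indicate that liberalization
# Granger-causes lower return autocorrelation, i.e. greater weak-form efficiency.
```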
32.
The traditional chain-ladder method is the most commonly used deterministic technique for estimating outstanding claims reserves. The Munich chain-ladder method, built on the assumptions of Mack's model, uses the correlation between paid claims and incurred (reported) claims to adjust the development factors, effectively narrowing the gap between the reserve estimates obtained by applying the chain ladder separately to paid and to incurred claims. Building on a systematic introduction to the Munich chain-ladder method and its model assumptions, this paper proposes two stochastic Munich chain-ladder methods based on the Bootstrap and carries out an empirical analysis of a numerical example from actuarial practice using R. The results provide a useful reference for the accuracy and adequacy of insurers' reserve valuations.
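The full Munich chain ladder works with paired paid and incurred triangles. The sketch below shows only the ordinary chain-ladder projection it builds on, together with a crude bootstrap of the age-to-age factors; the triangle figures and the resampling scheme are illustrative assumptions of mine, not the paper's two proposed methods.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented cumulative paid-claims run-off triangle: rows = accident years,
# columns = development years, NaN = cells not yet observed.
tri = np.array([
    [1001., 1855., 2423., 2988., 3335.],
    [1113., 2103., 2774., 3422., np.nan],
    [1265., 2433., 3233., np.nan, np.nan],
    [1490., 2873., np.nan, np.nan, np.nan],
    [1725., np.nan, np.nan, np.nan, np.nan],
])

def latest_diagonal(triangle):
    return np.array([row[~np.isnan(row)][-1] for row in triangle])

def reserve_from_factors(triangle, factors):
    """Roll the latest diagonal forward with the given development factors and
    return the total outstanding reserve (ultimate minus paid to date)."""
    latest = latest_diagonal(triangle)
    last_obs = np.array([np.sum(~np.isnan(row)) - 1 for row in triangle])
    ultimates = latest.copy()
    for i in range(triangle.shape[0]):
        for j in range(last_obs[i], triangle.shape[1] - 1):
            ultimates[i] *= factors[j]
    return float(np.sum(ultimates - latest))

# Observed individual age-to-age link ratios for each development period.
links = [tri[:, j + 1] / tri[:, j] for j in range(tri.shape[1] - 1)]
links = [l[~np.isnan(l)] for l in links]

# Deterministic chain-ladder reserve (simple-average factors, for illustration).
print("chain-ladder reserve:", round(reserve_from_factors(tri, [l.mean() for l in links]), 1))

# Crude bootstrap: resample the link ratios within each development period,
# recompute the factors, and collect the resulting reserve distribution.
boot = np.array([
    reserve_from_factors(tri, [rng.choice(l, size=l.size, replace=True).mean() for l in links])
    for _ in range(2000)
])
print("bootstrap mean / std of the reserve:", round(boot.mean(), 1), round(boot.std(), 1))
```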
33.
The paper examines the relative importance of ten anomaly-based trading strategies. We employ mean–variance spanning methodologies in a classical unconditional setting and a novel conditional setting. Fixed-weight optimal portfolios stemming from the unconditional methodology indicate that all the strategies are needed to enhance the mean–variance tradeoff. This conclusion is completely reversed when we allow for time-varying portfolio weights as a nonlinear function of lagged economic indicators. The overall results suggest that diversified anomaly-based holdings are of limited benefit to sophisticated investors who employ dynamic trading strategies.
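As a hedged illustration of the unconditional setting, the sketch below runs a Huberman–Kandel style spanning regression of one hypothetical anomaly strategy on a set of benchmark returns and tests the two spanning restrictions with a Wald statistic; the data are simulated and the single-equation test is a simplification of the paper's full methodology.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(5)

# Toy monthly returns: K benchmark assets and one extra "anomaly" strategy.
T, K = 240, 3
R_bench = rng.normal(0.006, 0.04, size=(T, K))
R_test = 0.002 + R_bench @ np.array([0.4, 0.3, 0.2]) + rng.normal(0.0, 0.02, T)

# Spanning regression: R_test = a + b'R_bench + e.
# The benchmarks span the test asset iff a = 0 and sum(b) = 1.
X = np.column_stack([np.ones(T), R_bench])
beta, *_ = np.linalg.lstsq(X, R_test, rcond=None)
resid = R_test - X @ beta
s2 = resid @ resid / (T - K - 1)
cov_beta = s2 * np.linalg.inv(X.T @ X)

R = np.zeros((2, K + 1))          # restriction matrix for a = 0 and sum(b) = 1
R[0, 0] = 1.0
R[1, 1:] = 1.0
q = np.array([0.0, 1.0])
diff = R @ beta - q
wald = diff @ np.linalg.solve(R @ cov_beta @ R.T, diff)
print(f"Wald statistic = {wald:.2f}, p-value = {chi2.sf(wald, 2):.4f}")
# A small p-value rejects spanning, i.e. the anomaly strategy improves the
# attainable mean-variance tradeoff relative to the benchmarks alone.
```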
34.
It is well established that, in a market with inclusion of a risk-free asset, the single-period mean–variance efficient frontier is a straight line tangent to the risky region, a fact that is the very foundation of the classical CAPM. In this paper, it is shown that, in a continuous-time market where the risky prices are described by Itô processes and the investment opportunity set is deterministic (albeit time-varying), any efficient portfolio must involve allocation to the risk-free asset at any time. As a result, the dynamic mean–variance efficient frontier, although still a straight line, is strictly above the entire risky region. This in turn suggests a positive premium, in terms of the Sharpe ratio of the efficient frontier, arising from dynamic trading. Another implication is that the inclusion of a risk-free asset boosts the Sharpe ratio of the efficient frontier, which again contrasts sharply with the single-period case.
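For reference, the single-period fact the abstract starts from: with risk-free rate $r$, risky mean vector $\mu$, and covariance matrix $\Sigma$, the efficient frontier in (standard deviation, mean) space is the straight line

$$
\mu_P = r + \theta\,\sigma_P, \qquad \theta = \sqrt{(\mu - r\mathbf{1})^{\top}\Sigma^{-1}(\mu - r\mathbf{1})},
$$

where the slope $\theta$ is the maximal Sharpe ratio and the line is tangent to the risky region. The paper's result is that in the continuous-time setting the frontier remains a straight line of this form but lies strictly above the risky region rather than tangent to it.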
35.
The center of a univariate data set $\{x_1,\dots,x_n\}$ can be defined as the point $\mu$ that minimizes the norm of the vector of distances $y' = (|x_1-\mu|,\dots,|x_n-\mu|)$. As the median and the mean are the minimizers of, respectively, the $L_1$- and the $L_2$-norm of $y$, they are two alternatives for describing the center of a univariate data set. The center $\mu$ of a multivariate data set $\{x_1,\dots,x_n\}$ can also be defined as the minimizer of the norm of a vector of distances. In multivariate situations, however, there are several kinds of distances. In this note, we consider the vector of $L_1$-distances $y_1=(\|x_1-\mu\|_1,\dots,\|x_n-\mu\|_1)$ and the vector of $L_2$-distances $y_2=(\|x_1-\mu\|_2,\dots,\|x_n-\mu\|_2)$. We define the $L_1$-median and the $L_1$-mean as the minimizers of, respectively, the $L_1$- and the $L_2$-norm of $y_1$; and the $L_2$-median and the $L_2$-mean as the minimizers of, respectively, the $L_1$- and the $L_2$-norm of $y_2$. In doing so, we obtain four alternatives for describing the center of a multivariate data set. While three of them have already been investigated in the statistical literature, the $L_1$-mean appears to be a new concept. Received January 1999
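A small sketch, using simulated data and generic numerical optimization rather than any procedure from the paper, that computes all four centers directly from the definitions by minimizing the outer norm of the chosen vector of distances:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))          # toy multivariate data set

def center(inner, outer):
    """Minimize the `outer`-norm of the vector of `inner`-norm distances ||x_i - mu||."""
    def objective(mu):
        d = np.linalg.norm(X - mu, ord=inner, axis=1)     # the vector y_1 or y_2
        return np.linalg.norm(d, ord=outer)
    # Nelder-Mead copes with the non-smooth objectives of the two medians.
    return minimize(objective, X.mean(axis=0), method="Nelder-Mead").x

l1_median = center(1, 1)   # componentwise median
l1_mean   = center(1, 2)   # the paper's new concept
l2_median = center(2, 1)   # spatial (geometric) median
l2_mean   = center(2, 2)   # ordinary arithmetic mean

# Sanity checks against the known closed forms (agreement is approximate,
# since Nelder-Mead stops at a numerical tolerance).
print(np.round(l2_mean, 3), np.round(X.mean(axis=0), 3))
print(np.round(l1_median, 3), np.round(np.median(X, axis=0), 3))
```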
36.
The optimal investment policy for a standard multi-period mean–variance model is not time-consistent because the variance operator is not separable in the sense of the dynamic programming principle. With a nested conditional expectation mapping, we develop an investment model with time consistency in Markovian markets. Furthermore, we examine how investment policies with a riskless asset differ from those without one. Analytical solutions for time-consistent optimal investment policies and the resulting mean–variance efficient frontier are obtained. Finally, using numerical examples, we show that the optimal investment policy derived from our model is more efficient than that of the standard mean–variance model, in which the trade-off is determined between the mean and variance of terminal wealth.
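A one-line reminder of why the variance criterion breaks the dynamic programming principle: by the law of total variance,

$$
\operatorname{Var}_t(W_T) = \mathbb{E}_t\!\big[\operatorname{Var}_{t+1}(W_T)\big] + \operatorname{Var}_t\!\big(\mathbb{E}_{t+1}[W_T]\big),
$$

so minimizing $\operatorname{Var}_t(W_T)$ is not equivalent to recursively minimizing the next-period objective. This non-separability is what the paper's nested conditional expectation construction is meant to address.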
37.
We use the theory of large deviations to investigate the large-time behavior and the small-noise asymptotics of random economic processes whose evolution is governed by mean-reverting stochastic differential equations with (i) constant and (ii) state-dependent noise terms. We show explicitly that the probability that the time averages of these processes occupy regions distinct from their stable equilibrium position is exponentially small. We also demonstrate that, as the noise parameter decreases, there is exponential convergence to the stable position. Applications of large deviation techniques and public policy implications of our results for regulators are explored. Received: December 7, 1998; revised version: October 25, 1999
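To make the constant-noise case concrete, the sketch below simulates a mean-reverting (Ornstein–Uhlenbeck) diffusion by Euler–Maruyama and estimates, by Monte Carlo, the probability that its time average strays from the stable equilibrium; the parameter values and threshold are my own illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)

# Euler-Maruyama simulation of a mean-reverting (Ornstein-Uhlenbeck) process
#   dX_t = kappa * (theta - X_t) dt + sigma dW_t,
# started at its stable equilibrium theta. Parameter values are illustrative.
kappa, theta, sigma = 2.0, 1.0, 0.3
dt, n_paths = 0.01, 20_000

def tail_prob(T, delta=0.1):
    """Monte-Carlo estimate of P( |(1/T) * int_0^T X_t dt - theta| > delta )."""
    x = np.full(n_paths, theta)
    running = np.zeros(n_paths)
    for _ in range(int(T / dt)):
        x += kappa * (theta - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        running += x * dt
    return float(np.mean(np.abs(running / T - theta) > delta))

for T in (1.0, 2.0, 4.0, 8.0):
    print(f"T = {T:4.1f}   P(|time average - theta| > 0.1) ~ {tail_prob(T):.4f}")
# The large-deviation result says these probabilities decay roughly like exp(-c*T)
# as T grows, for some positive rate constant c.
```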
38.
Robust portfolio optimization has been developed to resolve the high sensitivity to inputs of the Markowitz mean–variance model. Although much effort has been put into forming robust portfolios, there have not been many attempts to analyze the characteristics of portfolios formed from robust optimization. We investigate the behavior of robust portfolios by analytically describing how robustness leads to higher dependency on factor movements. Focusing on the robust formulation with an ellipsoidal uncertainty set for expected returns, we show that as the robustness of a portfolio increases, its optimal weights approach the portfolio whose variance is maximally explained by factors.
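A minimal sketch of the robust formulation with an ellipsoidal uncertainty set for expected returns, solved numerically on invented inputs; the penalty form, the toy covariances, and the risk-aversion parameter are assumptions for illustration, not the paper's calibration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)

# Toy inputs: estimated mean vector, return covariance, and an estimation-error
# covariance Omega defining the ellipsoidal uncertainty set for the means.
n = 5
A = rng.normal(size=(n, n))
Sigma = A @ A.T / n + np.eye(n) * 0.05        # return covariance (positive definite)
mu_hat = rng.normal(0.08, 0.03, size=n)       # point estimates of expected returns
Omega = Sigma / 60.0                          # e.g. sampling error with 60 observations
lam = 4.0                                     # risk-aversion coefficient

def robust_weights(kappa):
    """Solve max_w  w'mu_hat - kappa*||Omega^{1/2} w|| - (lam/2) w'Sigma w,  s.t. sum(w)=1."""
    L = np.linalg.cholesky(Omega)
    def neg_objective(w):
        penalty = kappa * np.linalg.norm(L.T @ w)        # worst-case mean shortfall
        return -(w @ mu_hat - penalty - 0.5 * lam * w @ Sigma @ w)
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    res = minimize(neg_objective, np.full(n, 1.0 / n), constraints=cons, method="SLSQP")
    return res.x

for kappa in (0.0, 0.5, 1.0, 2.0):
    print(f"kappa={kappa:3.1f}  weights={np.round(robust_weights(kappa), 3)}")
# As kappa (the size of the uncertainty ellipsoid) grows, the weights drift away
# from the nominal mean-variance solution, in line with the paper's point that
# robustness changes the character of the optimal portfolio.
```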
39.
Peer effects have been shown to affect behavior and can generally lead to investment choices that are mean–variance inefficient. This paper analyzes optimal diversification with peer effects. We show that if individuals have keeping-up-with-the-Joneses preferences and take their peer-group reference to be the market portfolio, Markowitz's mean–variance efficiency analysis and the CAPM equilibrium remain intact. This holds for any keeping-up preferences, as well as for heterogeneous combinations of such preferences. These results also extend to the Merton–Levy segmented-market model.
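For concreteness, one common way to write keeping-up-with-the-Joneses preferences (my notation, not necessarily the paper's) is expected utility over own terminal wealth with the peer reference $\bar W$ entering as a second argument,

$$
\max_{w_i}\; \mathbb{E}\big[u_i(W_i,\bar W)\big],
$$

and the abstract's claim is that when $\bar W$ is the wealth of the market portfolio, the standard CAPM pricing relation is preserved:

$$
\mathbb{E}[R_i] - r_f = \beta_i\big(\mathbb{E}[R_M] - r_f\big), \qquad \beta_i = \frac{\operatorname{Cov}(R_i, R_M)}{\operatorname{Var}(R_M)}.
$$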
40.
We introduce an iterative procedure for estimating the unknown density of a random variable X from n independent copies of Y = X + ε, where ε is normally distributed measurement error independent of X. Mean integrated squared error convergence rates are studied over function classes arising from Fourier conditions. Minimax rates are derived for these classes. It is found that the sequence of estimators defined by the iterative procedure attains the optimal rates. In addition, it is shown that the sequence of estimators converges exponentially fast to an estimator within the class of deconvoluting kernel density estimators. The iterative scheme shows how, in practice, density estimation from indirect observations may be performed by simply correcting an appropriate ordinary density estimator. This makes it possible to assess the effect that the perturbation due to contamination by ε has on the density to be estimated. We also suggest a method to select the smoothing parameter required by the iterative approach and, using this method, perform a simulation study.
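The sketch below implements a standard deconvoluting kernel density estimator (the class the abstract's limit lives in), not the paper's iterative procedure; the mixture data, the kernel with Fourier transform $(1-t^2)^3$ on $[-1,1]$, and the bandwidth are my own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Contaminated sample: Y = X + eps, where eps is N(0, sigma_eps^2) measurement
# error with *known* variance, independent of X. X itself is a normal mixture.
n, sigma_eps = 1000, 0.3
X = np.where(rng.random(n) < 0.5, rng.normal(-1.0, 0.5, n), rng.normal(1.0, 0.5, n))
Y = X + rng.normal(0.0, sigma_eps, n)

def deconv_kde(y, x_grid, h, sigma):
    """Deconvoluting kernel density estimate of f_X on x_grid.

    Uses a kernel whose Fourier transform is (1 - t^2)^3 on [-1, 1], divided by
    the normal-error characteristic function exp(-sigma^2 t^2 / 2) evaluated at t/h."""
    t = np.linspace(0.0, 1.0, 201)
    dt = t[1] - t[0]
    weight = (1.0 - t ** 2) ** 3 * np.exp(sigma ** 2 * t ** 2 / (2.0 * h ** 2))
    f = np.empty_like(x_grid)
    for k, x0 in enumerate(x_grid):
        u = (x0 - y) / h
        kern = (np.cos(np.outer(u, t)) * weight).sum(axis=1) * dt / np.pi
        f[k] = kern.mean() / h
    return f

grid = np.linspace(-3.0, 3.0, 121)
f_hat = deconv_kde(Y, grid, h=0.4, sigma=sigma_eps)
print("integral of the estimate:", round(float((f_hat * (grid[1] - grid[0])).sum()), 3))
# The estimate should integrate to roughly 1 (up to grid truncation) and show the
# two modes of f_X more clearly than an ordinary kernel estimate applied to Y.
```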