  Full-text (subscription)   48
  Free   4
By subject:
  Finance & taxation   18
  Industrial economics   4
  Planning & management   8
  Economics   5
  Trade economics   3
  Agricultural economics   13
  Economic overview   1
By year:
  2024   1
  2021   1
  2020   2
  2018   3
  2017   2
  2016   2
  2014   2
  2013   4
  2012   1
  2011   2
  2009   6
  2008   2
  2007   1
  2006   1
  2005   2
  2004   2
  2003   1
  2002   1
  1994   1
  1992   3
  1991   1
  1989   2
  1988   1
  1985   2
  1981   1
  1979   1
  1975   2
  1966   2
52 results in total (search time: 203 ms)
31.
We study the sensitivity to estimation error of portfolios optimized under various risk measures, including variance, absolute deviation, expected shortfall and maximal loss. We introduce a measure of portfolio sensitivity and test the various risk measures on simulated portfolios of varying size N and for different lengths T of the time series. We find that the effect of noise is very strong in all the investigated cases: asymptotically it depends only on the ratio N/T, and it diverges at a critical value of N/T that depends on the risk measure in question. This divergence is the manifestation of a phase transition, analogous to the algorithmic phase transitions recently discovered in a number of hard computational problems. The transition is accompanied by a number of critical phenomena, including divergent sample-to-sample fluctuations of the portfolio weights. While optimization under variance and mean absolute deviation is always feasible below the critical value of N/T, expected shortfall and maximal loss display a probabilistic feasibility problem: they can become unbounded from below even for small values of the ratio N/T, in which case no solution to the optimization problem exists under these risk measures. Although powerful filtering techniques exist to mitigate this instability in the case of variance, our findings point to the need to develop similar filtering procedures adapted to the other risk measures, for which they are much less developed or non-existent. Another important message of this study is that the requirement of robustness (noise tolerance) should be given special attention among the theoretical and practical criteria imposed on a risk measure.
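For intuition, here is a minimal simulation sketch, not taken from the paper: it assumes an identity true covariance and a sum-to-one constraint, and shows how the out-of-sample risk of the estimated minimum-variance portfolio blows up as N/T approaches its critical value (N/T = 1 for the variance risk measure).

```python
# A minimal sketch (assumptions noted above, not the paper's code):
# estimation error of the minimum-variance portfolio vs. the ratio N/T.
import numpy as np

rng = np.random.default_rng(0)

def min_var_weights(cov):
    """Global minimum-variance weights under a sum-to-one constraint."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

N = 50                                   # number of assets (assumed)
for T in (60, 100, 200, 1000):          # lengths of the time series
    # true covariance = identity, so the true optimum is equal weights
    returns = rng.standard_normal((T, N))
    w = min_var_weights(np.cov(returns, rowvar=False))
    # out-of-sample risk relative to the optimum (optimal variance = 1/N)
    rel_risk = np.sqrt(N * w @ w)
    print(f"N/T = {N/T:.2f}  relative risk = {rel_risk:.2f} "
          f"(theory ~ {1/np.sqrt(1 - N/T):.2f})")
```

The printed ratio tracks the known 1/sqrt(1 - N/T) scaling for variance, diverging as T shrinks toward N.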
32.
In this paper we examine the relationship between bond re-ratings and changes in systematic risk. Using both time-series and cross-sectional regressions, we find that upgrades are not associated with a change in beta. Across the entire sample, downgrades are associated with an increase in beta, and the increase is positively correlated with firm size. There is no evidence that movement within or across rating categories, the number of grades changed, or a change across the investment-grade boundary has a differential impact on the change in beta.
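A hedged illustration of the kind of time-series regression involved (synthetic returns and an assumed mid-sample event date, not the authors' data): the change in beta around a re-rating can be estimated by interacting the market return with a post-event dummy.

```python
# Sketch only: market-model regression with a post-event beta shift.
import numpy as np

rng = np.random.default_rng(1)
T = 250                                      # daily observations, illustrative
market = rng.normal(0, 0.01, T)
post = (np.arange(T) >= 125).astype(float)   # event assumed at mid-sample
# simulate a downgrade that raises beta from 1.0 to 1.3
stock = 1.0 * market + 0.3 * post * market + rng.normal(0, 0.01, T)

# regressors: intercept, market return, post-event * market return
X = np.column_stack([np.ones(T), market, post * market])
alpha, beta, dbeta = np.linalg.lstsq(X, stock, rcond=None)[0]
print(f"pre-event beta = {beta:.2f}, change after the event = {dbeta:+.2f}")
```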
33.
We examine several event-study test statistics that can be used to detect abnormal performance during a multiperiod event window. We demonstrate that one of the most commonly used test statistics does not, under the assumptions made, have the claimed standard normal distribution, so tests based on it will be biased. The magnitude of that bias is shown to increase with the length of the event window and can generally be expected to lead to excessive rejection of the null hypothesis. We also compare the relative power of alternative test statistics that are normally distributed and straightforward to apply.
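As a sketch of one correctly normalized statistic (synthetic abnormal returns generated under the null, with firm-level variances assumed known), each firm's cumulative abnormal return over the L-day window is standardized by sigma_i * sqrt(L) before cross-sectional aggregation:

```python
# Hedged sketch: a multiperiod event-window test statistic that is
# approximately standard normal under the null.
import numpy as np

rng = np.random.default_rng(2)
n, L = 200, 11                               # firms, event-window length
sigma = rng.uniform(0.01, 0.03, n)           # firm-specific AR volatilities
ar = rng.normal(0, sigma[:, None], (n, L))   # abnormal returns, null true

scar = ar.sum(axis=1) / (sigma * np.sqrt(L)) # standardized CARs ~ N(0, 1)
z = np.sqrt(n) * scar.mean()                 # cross-sectional test statistic
print(f"z = {z:.2f} (approximately standard normal under the null)")
```

Dividing each firm's CAR by sigma_i * sqrt(L), rather than summing daily standardized abnormal returns without rescaling, is what keeps the statistic's variance at one regardless of the window length.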
34.
We construct contour maps of the error of historical and parametric estimates of the global minimum risk for large random portfolios optimized under the Expected Shortfall (ES) risk measure. Similar maps for the VaR of the ES-optimized portfolio are also presented, along with results for the distribution of portfolio weights over the random samples and for the out-of-sample and in-sample estimates of ES. The contour maps allow one to determine quantitatively the sample size (the length of the time series) required by the optimization for a given number of assets in the portfolio, at a given confidence level and a given level of relative estimation error. The necessary sample sizes invariably turn out to be unrealistically large for any reasonable choice of the number of assets and the confidence level. These results are obtained via analytical calculations based on methods borrowed from the statistical physics of random systems, supported by numerical simulations.
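For reference, scenario-based ES optimization reduces to the standard Rockafellar–Uryasev linear program. The sketch below uses assumed values of N, T and the confidence level with simulated Gaussian returns (not the paper's setup), and flags the unbounded case, in which no ES-optimal portfolio exists for the sample:

```python
# Hedged sketch: Rockafellar-Uryasev LP for ES optimization over scenarios.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
N, T, alpha = 10, 500, 0.95              # assets, sample size, ES level (assumed)
R = rng.normal(0.0005, 0.01, (T, N))     # simulated return scenarios

# variables x = (w_1..w_N, t, u_1..u_T); minimize t + mean(u) / (1 - alpha)
c = np.concatenate([np.zeros(N), [1.0], np.full(T, 1.0 / ((1 - alpha) * T))])
# scenario constraints: -R_i . w - t - u_i <= 0
A_ub = np.hstack([-R, -np.ones((T, 1)), -np.eye(T)])
b_ub = np.zeros(T)
# budget constraint: sum(w) = 1
A_eq = np.concatenate([np.ones(N), [0.0], np.zeros(T)])[None, :]
bounds = [(None, None)] * (N + 1) + [(0, None)] * T   # only u >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=bounds, method="highs")
print(f"in-sample ES estimate: {res.fun:.4f}" if res.success
      else "LP unbounded: no ES-optimal portfolio for this sample")
```

Rerunning this over many random samples at fixed N/T is exactly the kind of experiment whose error statistics the paper's contour maps summarize.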
35.
Regression analysis is often used to estimate a linear relationship between security abnormal returns and firm-specific variables. If the abnormal returns are caused by a common event (i.e., there is "event clustering"), the error term of the cross-sectional regression will be heteroskedastic and correlated across observations. The size and power of alternative test statistics for the event-clustering case have been evaluated under ideal conditions (Monte Carlo experiments using normally distributed synthetic security returns) by Chandra and Balachandran (J Finance 47:2055–2070, 1992) and Karafiath (J Financ Quant Anal 29(2):279–300, 1994). Harrington and Shrider (J Financ Quant Anal 42(1):229–256, 2007) evaluate cross-sectional regressions using actual (not simulated) stock returns, but only for the case of cross-sectional independence, i.e., in the absence of clustering. To evaluate the event-clustering case, random samples of security returns are drawn from the data set provided by the Center for Research in Security Prices (CRSP) and the empirical distributions of alternative test statistics are compared. The simulations cover OLS, WLS, GLS, two heteroskedasticity-consistent estimators, and a bootstrap test for GLS; in addition, the Sefcik and Thompson (J Accounting Res 24(2):316–334, 1986) portfolio counterparts to OLS, WLS, and GLS are evaluated. The main result from these simulations is that none of the other estimators shows clear advantages over OLS or WLS. Researchers should be aware, however, that in these simulations the variance of the error term in the cross-sectional regression is unrelated to the explanatory variable.
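To illustrate one of the compared corrections on synthetic data (not CRSP draws), White's heteroskedasticity-consistent covariance can be requested directly from a fitted statsmodels OLS result:

```python
# Hedged sketch: OLS vs. White (HC0) standard errors in a cross-sectional
# regression of event-window abnormal returns on a firm characteristic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 300
x = rng.normal(size=n)                        # firm-specific variable
# heteroskedastic errors: variance rises with |x|
e = rng.normal(scale=0.02 * (1 + np.abs(x)), size=n)
car = 0.01 * x + e                            # cumulative abnormal returns

fit = sm.OLS(car, sm.add_constant(x)).fit()
print("OLS t-stat:", round(fit.tvalues[1], 2))
print("HC0 t-stat:", round(fit.get_robustcov_results("HC0").tvalues[1], 2))
```

Note that this only corrects for heteroskedasticity; the cross-correlation induced by event clustering, the paper's focus, requires the GLS- or portfolio-based approaches it evaluates.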
36.
A production–recycling system is investigated. A constant demand is satisfied by production and recycling; used items are bought back and then recycled, and non-recycled products are disposed of. Two types of models are analyzed. The first examines the EOQ-related costs and minimizes the relevant costs. The second generalizes the first by introducing a cost function with linear waste-disposal, recycling, production and buyback costs. We ask whether pure strategies (production only or recycling only) or mixed strategies are optimal, and show that under these circumstances the mixed strategies are dominated by the pure ones. The paper generalizes a model previously proposed by the authors for the case of one recycling batch and one production batch to the case of arbitrary batch numbers.
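A toy numeric comparison of the two pure strategies under the classical EOQ cost formula (all parameter values are assumed for illustration and are not taken from the paper):

```python
# Hedged sketch: annual cost of "produce everything" vs. "recycle
# everything" using the classical EOQ cost sqrt(2 * K * d * h).
import math

d = 1000.0                     # constant annual demand (assumed)
h = 2.0                        # holding cost per unit per year (assumed)
K_prod, K_rec = 120.0, 80.0    # setup costs: production / recycling (assumed)
c_prod, c_rec = 5.0, 3.5       # linear unit costs incl. buyback (assumed)

def annual_cost(K, c):
    """EOQ-related (setup + holding) cost plus linear unit cost."""
    return math.sqrt(2 * K * d * h) + c * d

cost_p, cost_r = annual_cost(K_prod, c_prod), annual_cost(K_rec, c_rec)
best = "pure recycling" if cost_r < cost_p else "pure production"
print(f"production: {cost_p:.0f}, recycling: {cost_r:.0f} -> {best}")
```

With linear unit costs, whichever pure strategy is cheaper dominates any mix of the two, which is the dominance result the abstract states.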
37.
This article investigates farm technical efficiency (TE) and the effect of heterogeneity on production using the Slovenian Farm Accountancy Data Network (FADN) sample of farms for the period 2007–2013. We model production technology with a random parameter model that allows us to examine both the direct effect of heterogeneity on production and its indirect effect through the interaction of unobserved heterogeneity with time and input variables. Additionally, we consider intersectoral heterogeneity among types of farming. Results confirm the importance of all of these sources of heterogeneity. The second contribution of the article is that, in addition to using conventional statistical methods, we examine the differences between less favored area (LFA) and non-LFA farms using matching techniques. Results indicate only a minor and statistically nonsignificant difference in TE between these groups; the difference is, however, highly significant in terms of heterogeneity and technology. In other words, farms in LFAs are not more inefficient but rather use different, production-environment-specific technologies. These findings call attention to the fact that omitting the effect of heterogeneity on production technology leads to biased TE estimates and, in turn, to potentially imperfect policy choices.
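As a simplified sketch of the matching step (synthetic data; the article uses the Slovenian FADN sample and its own covariate set), LFA farms can be matched to non-LFA farms on a propensity score before comparing TE:

```python
# Hedged sketch: nearest-neighbour propensity-score matching, then a
# mean TE difference between matched LFA and non-LFA farms.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 400
size = rng.normal(size=n)                        # a farm covariate, illustrative
lfa = rng.binomial(1, 1 / (1 + np.exp(size)))    # LFA more likely if small
te = 0.7 + 0.05 * size + rng.normal(0, 0.05, n)  # TE driven by size, not LFA

# propensity score from a logit of LFA status on the covariate
ps = sm.Logit(lfa, sm.add_constant(size)).fit(disp=0).predict()
treated, control = np.where(lfa == 1)[0], np.where(lfa == 0)[0]
# match each LFA farm to the nearest non-LFA farm on the score
matches = control[np.abs(ps[treated][:, None] - ps[control]).argmin(axis=1)]
att = (te[treated] - te[matches]).mean()
print(f"matched TE difference (LFA minus non-LFA): {att:+.3f}")
```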
38.
We investigate relative productivity levels and decompose productivity change for European agriculture between 2004 and 2013. Specifically, (i) we contribute to the debate on whether agricultural Total Factor Productivity (TFP) has declined in the European Union (EU); (ii) we compare relative TFP levels across EU Member States and investigate the difference between ‘old’ Member States (OMS, i.e. the EU-15) and ‘new’ Member States (NMS); and (iii) we test whether TFP is converging among Member States. The empirical analysis applies an aggregate quantity framework to country-level panel data from the Economic Accounts for Agriculture for 23 EU Member States. The results imply that TFP decreased slightly in the EU over the analysed period; however, there are significant differences between the OMS and NMS and across Member States. Finally, our estimates suggest that productivity is generally converging over this period, albeit slowly.
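For concreteness, a Törnqvist-type TFP growth computation between two periods looks as follows (made-up quantities and value shares; the paper aggregates Economic Accounts for Agriculture data):

```python
# Hedged sketch: TFP growth = Tornqvist output growth - input growth,
# with log quantity changes weighted by average value shares.
import numpy as np

# output and input quantities in periods 0 and 1 (assumed)
q0, q1 = np.array([100.0, 50.0]), np.array([104.0, 51.0])            # outputs
x0, x1 = np.array([80.0, 40.0, 20.0]), np.array([82.0, 40.0, 21.0])  # inputs
s_q = np.array([0.7, 0.3])       # average revenue shares of outputs (assumed)
s_x = np.array([0.5, 0.3, 0.2])  # average cost shares of inputs (assumed)

dln_output = (s_q * np.log(q1 / q0)).sum()   # Tornqvist output growth
dln_input = (s_x * np.log(x1 / x0)).sum()    # Tornqvist input growth
tfp_growth = dln_output - dln_input
print(f"TFP growth: {100 * tfp_growth:.2f}%")
```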
39.
We investigate the determinants of quality upgrades in EU agri-food exports using panel data models for the period 2000–2011. Employing highly disaggregated data, we show that the unit value of exports is positively correlated with the level of economic development and the size of the population. Our results highlight the negative impact of comparative advantage and trade costs on upgrades in export quality. Our analysis partly confirms the role of income distribution in quality specialisation: greater income inequality increases specialisation in quality upgrades. The findings are robust across alternative subsamples, including vertically specialised and final agri-food products.
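A minimal sketch of the kind of fixed-effects panel regression behind such results (synthetic panel; the variable names ln_unit_value, ln_gdp_pc and ln_pop are hypothetical stand-ins, not the paper's specification):

```python
# Hedged sketch: unit values regressed on development and population
# with country and year fixed effects via dummy variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
countries, years = [f"c{i}" for i in range(20)], range(2000, 2012)
rows = [(c, y, rng.normal(), rng.normal()) for c in countries for y in years]
df = pd.DataFrame(rows, columns=["country", "year", "ln_gdp_pc", "ln_pop"])
df["ln_unit_value"] = (0.3 * df.ln_gdp_pc + 0.1 * df.ln_pop
                       + rng.normal(0, 0.5, len(df)))

fit = smf.ols("ln_unit_value ~ ln_gdp_pc + ln_pop + C(country) + C(year)",
              data=df).fit()
print(fit.params[["ln_gdp_pc", "ln_pop"]].round(3))
```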
40.
We propose to interpret distribution model risk as the sensitivity of expected loss to changes in the risk-factor distribution, and to measure the distribution model risk of a portfolio by the maximum expected loss over a set of plausible distributions defined in terms of some divergence from an estimated distribution. The divergence may be relative entropy, another f-divergence, or a Bregman distance. We use the theory of minimizing convex integral functionals under moment constraints to give formulae for the calculation of distribution model risk and to determine explicitly the worst-case distribution in the set of plausible distributions. We also evaluate related risk measures describing divergence preferences.
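For the relative-entropy case, the worst-case expected loss over all Q with KL(Q||P) <= eta admits the well-known one-dimensional dual min over theta > 0 of theta * log E_P[exp(L/theta)] + theta * eta, sketched numerically below (the scenario losses and the KL radius eta are assumed for illustration):

```python
# Hedged sketch: worst-case expected loss within a KL ball around a
# reference distribution P, via the dual formula evaluated on scenarios.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
loss = rng.normal(0.0, 1.0, 10_000)   # scenario losses under P (assumed)
eta = 0.1                             # radius of the KL ball (assumed)

def dual(theta):
    # theta * log-mean-exp(loss / theta) + theta * eta, computed stably
    z = loss / theta
    lme = np.log(np.mean(np.exp(z - z.max()))) + z.max()
    return theta * lme + theta * eta

res = minimize_scalar(dual, bounds=(1e-3, 100.0), method="bounded")
print(f"nominal E[L] = {loss.mean():.3f}, "
      f"worst-case E[L] within the KL ball = {res.fun:.3f}")
```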