1.
Graphical models provide a powerful and flexible approach to the analysis of complex problems in genetics. While task-specific software may be extremely efficient for any particular analysis, it is often difficult to adapt to new computational challenges. By viewing these genetic applications in a more general framework, many problems can be handled by essentially the same software. This is advantageous in an area where fast methodological development is essential. Once a method has been fully developed and tested, problem-specific software may then be required. The aim of this paper is to illustrate the potential of the graphical model approach to genetic analyses, using a very simple and well-understood problem as an example.
2.
The problem of comparing the precisions of two instruments using repeated measurements can be cast as an extension of the Pitman-Morgan problem of testing equality of variances of a bivariate normal distribution. Hawkins (1981) decomposes the hypothesis of equal variances in this model into two subhypotheses for which simple tests exist. For the overall hypothesis he proposes to combine the tests of the subhypotheses using Fisher's method and empirically compares the component tests and their combination with the likelihood ratio test. In this paper an attempt is made to resolve some discrepancies and puzzling conclusions in Hawkins's study and to propose simple modifications.
The new tests are compared to the tests discussed by Hawkins and to each other, both in terms of finite-sample power (estimated by Monte Carlo simulation) and theoretically in terms of asymptotic relative efficiencies.
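Fisher's method, used above to combine the tests of the two subhypotheses, merges independent p-values through a chi-square statistic. A minimal sketch for the two-test case (the function name and example p-values are illustrative, not from the paper); for 4 degrees of freedom the chi-square survival function has a simple closed form, so no statistics library is needed:

```python
import math

def fisher_combine(p1: float, p2: float) -> float:
    """Combine two independent p-values via Fisher's method.

    Under H0, X = -2*(ln p1 + ln p2) follows a chi-square
    distribution with 4 df, whose survival function is
    P(X > x) = exp(-x/2) * (1 + x/2).
    """
    x = -2.0 * (math.log(p1) + math.log(p2))
    return math.exp(-x / 2.0) * (1.0 + x / 2.0)

# Two individually marginal sub-tests can combine into stronger evidence:
print(fisher_combine(0.08, 0.06))
```

Note how the combined p-value can fall below either component, which is what makes the combination attractive for the overall hypothesis of equal variances.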
3.
This paper extends the joint Value-at-Risk (VaR) and expected shortfall (ES) quantile regression model of Taylor (2019) by incorporating a realized measure to drive the tail risk dynamics, as a potentially more efficient driver than daily returns. Furthermore, we propose and test a new model for the dynamics of the ES component. Both a maximum likelihood and an adaptive Bayesian Markov chain Monte Carlo method are employed for estimation, the properties of which are compared in a simulation study. The results favour the Bayesian approach, which is employed subsequently in a forecasting study of seven financial market indices. The proposed models are compared to a range of parametric, non-parametric and semi-parametric competitors, including GARCH, realized GARCH, the extreme value theory method and the joint VaR and ES models of Taylor (2019), in terms of the accuracy of one-day-ahead VaR and ES forecasts, over a long forecast sample period that includes the global financial crisis in 2007–2008. The results are favorable for the proposed models incorporating a realized measure, especially when employing the sub-sampled realized variance and the sub-sampled realized range.
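The VaR and ES quantities being forecast can be illustrated with their simplest empirical versions (this sketch is generic, not the paper's semi-parametric model): VaR at level alpha is the alpha-quantile of the return distribution reported as a loss, and ES is the mean loss beyond it.

```python
import random

def var_es(returns, alpha=0.025):
    """Empirical one-day VaR and ES at level alpha from a return
    sample: VaR is the alpha-quantile reported as a positive loss,
    ES the average loss in the tail beyond the VaR."""
    srt = sorted(returns)
    k = max(1, int(alpha * len(srt)))
    tail = srt[:k]                 # the worst alpha fraction of returns
    var = -srt[k - 1]
    es = -sum(tail) / len(tail)
    return var, es

random.seed(1)
sample = [random.gauss(0.0, 0.01) for _ in range(10000)]
var, es = var_es(sample)
print(var, es)
```

By construction ES is at least as large as VaR, which is the coherence property that motivates joint modelling of the pair.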
4.
Usual inference methods for stable distributions are typically based on limit distributions. But asymptotic approximations can easily be unreliable in such cases, for standard regularity conditions may not apply or may hold only weakly. This paper proposes finite-sample tests and confidence sets for tail thickness and asymmetry parameters (α and β) of stable distributions. The confidence sets are built by inverting exact goodness-of-fit tests for hypotheses which assign specific values to these parameters. We propose extensions of the Kolmogorov–Smirnov, Shapiro–Wilk and Filliben criteria, as well as the quantile-based statistics proposed by McCulloch (1986) in order to better capture tail behavior. The suggested criteria compare empirical goodness-of-fit or quantile-based measures with their hypothesized values. Since the distributions involved are quite complex and non-standard, the relevant hypothetical measures are approximated by simulation, and p-values are obtained using Monte Carlo (MC) test techniques. The properties of the proposed procedures are investigated by simulation. In contrast with conventional wisdom, we find reliable results with sample sizes as small as 25. The proposed methodology is applied to daily electricity price data in the US over the period 2001–2006. The results show clearly that heavy kurtosis and asymmetry are prevalent in these series.
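The MC test technique mentioned in the abstract ranks the observed goodness-of-fit statistic within a simulated null distribution. A minimal sketch using the Kolmogorov–Smirnov statistic, illustrated with a standard normal null rather than a stable law (simulating stable variates needs extra machinery such as `scipy.stats.levy_stable`); all names here are illustrative:

```python
import math
import random

def ks_stat(sample, cdf):
    """Kolmogorov-Smirnov distance between the empirical CDF and cdf."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, (i + 1) / n - f, f - i / n)
    return d

def mc_pvalue(sample, cdf, draw, reps=500, seed=0):
    """Monte Carlo p-value: simulate the null distribution of the
    statistic and count how often it meets or exceeds the observed
    value; the +1 terms make the test exact in finite samples."""
    rng = random.Random(seed)
    n = len(sample)
    obs = ks_stat(sample, cdf)
    hits = sum(ks_stat([draw(rng) for _ in range(n)], cdf) >= obs
               for _ in range(reps))
    return (hits + 1) / (reps + 1)

std_normal_cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
rng = random.Random(42)
data = [rng.gauss(0.0, 1.0) for _ in range(25)]   # n = 25, as in the paper
p = mc_pvalue(data, std_normal_cdf, lambda r: r.gauss(0.0, 1.0))
print(p)
```

The same recipe applies to any simulable null, which is why it suits stable distributions where the CDF itself must be approximated numerically.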
5.
窦晓峰 《价值工程》2012,31(32):262-263
This paper proposes a new computer-based approach to common problems in random events and probability: random numbers are generated automatically by machine, a corresponding mathematical model is constructed, and the final result is obtained by programmed computation. The method brings innovative thinking to the teaching of probability theory, while also posing new challenges for teachers.
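The approach described, generating random numbers and computing the answer by program, can be sketched with the classic birthday problem (the example is mine, not from the paper):

```python
import random

def birthday_prob(k=23, trials=100000, seed=0):
    """Estimate by simulation the probability that among k people
    at least two share a birthday (uniform 365-day year)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        days = [rng.randrange(365) for _ in range(k)]
        if len(set(days)) < k:          # a duplicate birthday occurred
            hits += 1
    return hits / trials

print(birthday_prob())   # the exact value is about 0.5073
```

The simulation sidesteps the combinatorial derivation entirely, which is precisely the pedagogical point the abstract makes.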
6.
In this paper, we present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho (SABR) model. The proposed method extends the one time step Monte Carlo method that we proposed in an accompanying paper, Leitao et al. [Appl. Math. Comput. 2017, 293, 461–479], for pricing European options in the context of model calibration. The result is a highly efficient method with several interesting and nontrivial components (Fourier inversion for the sum of log-normals, stochastic collocation, the Gumbel copula, and a correlation approximation) that have not previously been seen in combination within a Monte Carlo simulation. The multiple time step Monte Carlo method is especially useful for long-term options and for exotic options.
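For orientation, the SABR dynamics being simulated are dF = s F^beta dW1, ds = alpha s dW2 with corr(dW1, dW2) = rho. The sketch below is a naive Euler discretisation, not the authors' efficient method, and every parameter value is an illustrative assumption:

```python
import math
import random

def sabr_call_mc(f0=0.05, k=0.05, t=1.0, alpha=0.4, beta=0.5, rho=-0.3,
                 sigma0=0.2, paths=10000, steps=50, seed=7):
    """Naive Euler Monte Carlo for a SABR European call under the
    forward measure (zero rates): Euler step for F with the negative
    part truncated, exact log-normal step for the volatility s."""
    rng = random.Random(seed)
    dt = t / steps
    sq = math.sqrt(dt)
    payoff_sum = 0.0
    for _ in range(paths):
        f, s = f0, sigma0
        for _ in range(steps):
            z1 = rng.gauss(0.0, 1.0)
            z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
            f += s * max(f, 0.0) ** beta * sq * z1
            s *= math.exp(-0.5 * alpha * alpha * dt + alpha * sq * z2)
        payoff_sum += max(f - k, 0.0)
    return payoff_sum / paths

print(sabr_call_mc())
```

The many small time steps this scheme needs for accuracy are exactly the cost that the paper's multiple time step method, with its collocation and copula components, is designed to avoid.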
7.
We describe a simple Importance Sampling strategy for Monte Carlo simulations based on a least-squares optimization procedure. With several numerical examples, we show that such Least-squares Importance Sampling (LSIS) provides efficiency gains comparable to the state-of-the-art techniques, for problems that can be formulated in terms of the determination of the optimal mean of a multivariate Gaussian distribution. In addition, LSIS can be naturally applied to more general Importance Sampling densities and is particularly effective when the ability to adjust higher moments of the sampling distribution, or to deal with non-Gaussian or multi-modal densities, is critical to achieve variance reductions.
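The core idea, shifting the mean of a Gaussian sampling density and reweighting by the likelihood ratio, can be sketched for a rare-event probability. Choosing the shift by simply placing the mean at the threshold is a crude stand-in for the least-squares optimisation the paper describes; all names and parameters here are illustrative:

```python
import math
import random

def tail_prob_is(threshold=4.0, mu=None, n=50000, seed=3):
    """Estimate P(Z > threshold) for Z ~ N(0,1) by importance
    sampling: draw from N(mu, 1) and reweight each hit by the
    likelihood ratio phi(x) / phi(x - mu) = exp(-mu*x + mu^2/2)."""
    if mu is None:
        mu = threshold            # crude proxy for the optimal mean
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(mu, 1.0)
        if x > threshold:
            total += math.exp(-mu * x + 0.5 * mu * mu)
    return total / n

est = tail_prob_is()
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))   # about 3.17e-5
print(est, exact)
```

A plain Monte Carlo estimate with the same budget would see only one or two exceedances of 4, so the variance reduction from shifting the sampling mean is dramatic.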
8.
In Foreign Exchange Markets vanilla and barrier options are traded frequently. The market standard is a cutoff time of 10:00 a.m. in New York for the strike of vanillas and a knock-out event based on a continuously observed barrier in the interbank market. However, many clients, particularly from Italy, prefer the cutoff and knock-out event to be based on the fixing published by the European Central Bank on the Reuters Page ECB37. These barrier options are called discretely monitored barrier options. While these options can be priced in several models by various techniques, the ECB source of the fixing causes two problems: first, it is not tradable; second, it is published with a delay of about 10–20 min. We examine here the effect of these problems on the hedge of those options and consequently suggest a cost based on the additional uncertainty encountered.
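The pricing difference between continuous and discrete monitoring can be illustrated with a simple Monte Carlo for a down-and-out call where the knock-out is checked only at the fixing times. This is a generic GBM sketch with invented parameters, not the paper's FX setup:

```python
import math
import random

def down_out_call_mc(s0=100.0, k=100.0, b=90.0, r=0.02, vol=0.15,
                     t=0.5, fixings=125, paths=10000, seed=11):
    """Price a discretely monitored down-and-out call by Monte Carlo:
    GBM for the spot, with the knock-out condition checked only at
    the discrete fixing times (e.g. one daily ECB fixing)."""
    rng = random.Random(seed)
    dt = t / fixings
    drift = (r - 0.5 * vol * vol) * dt
    diff = vol * math.sqrt(dt)
    total = 0.0
    for _ in range(paths):
        s = s0
        alive = True
        for _ in range(fixings):
            s *= math.exp(drift + diff * rng.gauss(0.0, 1.0))
            if s <= b:            # knocked out at a fixing
                alive = False
                break
        if alive:
            total += max(s - k, 0.0)
    return math.exp(-r * t) * total / paths

print(down_out_call_mc())
```

Because the spot can dip below the barrier between fixings without triggering a knock-out, the discretely monitored option is worth more than its continuously monitored counterpart, which is why the monitoring convention matters commercially.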
9.
Using Monte Carlo simulation, this paper judges whether a real estate developer defaults by the criterion of whether its net cash flow is negative, and computes the default probability by stochastically simulating the firm's cash flows. The stress-test scenario is falling house prices and rising interest rates. The stress propagates as follows: changes in house prices and interest rates alter the firm's sales revenue, and the change in sales revenue in turn alters its cash flow statement. Since the shocks of house prices and interest rates on sales revenue are random, the firm's cash flow is random as well; the paper estimates the frequency with which the simulated cash flow is negative and takes this as the firm's default probability. The stress test shows that when the house-price decline approaches 15%, developers' default probability begins to rise sharply.
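The simulation logic in the abstract can be sketched as follows. Every coefficient here (baseline revenue, pass-through elasticities, cost distribution) is an invented illustration, not the paper's calibration; only the structure, random shocks to revenue and default when net cash flow turns negative, follows the abstract:

```python
import random

def default_prob(price_drop=0.15, rate_rise=0.02, trials=20000, seed=5):
    """Toy stress test: a house-price drop and a rate rise hit sales
    revenue with random pass-through, and default is declared when
    net cash flow (revenue minus outflows) turns negative."""
    rng = random.Random(seed)
    defaults = 0
    for _ in range(trials):
        # assumed price elasticity 1.5 and rate sensitivity 3.0,
        # each with multiplicative noise, on a baseline revenue of 100
        shock = (1.5 * price_drop * rng.uniform(0.5, 1.5)
                 + 3.0 * rate_rise * rng.uniform(0.5, 1.5))
        revenue = 100.0 * (1.0 - shock)
        costs = rng.gauss(70.0, 8.0)      # stochastic cash outflows
        if revenue - costs < 0.0:
            defaults += 1
    return defaults / trials

# default probability climbs steeply as the assumed price drop deepens
print(default_prob(0.05), default_prob(0.15), default_prob(0.25))
```

Sweeping the price-drop parameter this way is how the abstract's "sharp rise near a 15% decline" pattern would be traced out.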
10.
Tournament outcome uncertainty depends on two things: the design of the tournament, and the relative strengths of the competitors, i.e. the competitive balance. A tournament design comprises the arrangement of the individual matches, which we call the tournament structure, the seeding policy and the progression rules. In this paper, we investigate the effect of seeding policy for various tournament structures, while taking account of competitive balance. Our methodology uses tournament outcome uncertainty to consider the effect of seeding policy and other design changes. The tournament outcome uncertainty is measured using the tournament outcome characteristic, which is the probability P(q,R) that a team in the top 100q pre-tournament rank percentile progresses forward from round R, for all q and R. We use Monte Carlo simulation to calculate the values of this metric. We find that, in general, seeding favours stronger competitors, but that the degree of favouritism varies with the type of seeding. Reseeding after each round favours the strong to the greatest extent. The ideas in the paper are illustrated using the soccer World Cup Finals tournament.
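The simulation approach can be sketched for an eight-team single-elimination tournament. The Bradley–Terry win model, the strength values, and the brackets below are my illustrative assumptions, not the paper's World Cup calibration:

```python
import random

def simulate_knockout(strengths, bracket, rng):
    """Play one single-elimination tournament: in each match the
    winner is drawn with probability proportional to its strength
    (a simple Bradley-Terry assumption). Returns the champion."""
    teams = list(bracket)
    while len(teams) > 1:
        survivors = []
        for a, b in zip(teams[::2], teams[1::2]):
            p = strengths[a] / (strengths[a] + strengths[b])
            survivors.append(a if rng.random() < p else b)
        teams = survivors
    return teams[0]

def win_probs(strengths, bracket, sims=20000, seed=9):
    """Monte Carlo estimate of each team's championship probability."""
    rng = random.Random(seed)
    wins = [0] * len(strengths)
    for _ in range(sims):
        wins[simulate_knockout(strengths, bracket, rng)] += 1
    return [w / sims for w in wins]

strengths = [8, 7, 6, 5, 4, 3, 2, 1]    # team 0 strongest
seeded = [0, 7, 3, 4, 1, 6, 2, 5]       # standard 1v8, 4v5, 2v7, 3v6 draw
unseeded = [0, 1, 2, 3, 4, 5, 6, 7]     # strong teams meet early
print(win_probs(strengths, seeded)[0], win_probs(strengths, unseeded)[0])
```

Comparing the top team's championship probability under the two brackets reproduces, in miniature, the paper's finding that seeding favours stronger competitors.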