Similar Articles
Found 20 similar articles (search time: 218 ms)
1.
Parametric estimators, such as OLS, attain high efficiency for well-specified models. Nonparametric estimators greatly reduce specification error, but at the cost of efficiency. Semiparametric estimators compromise between these dual goals of efficiency and specification error, and can assume general forms within classes of functional forms. This paper applies OLS, the kernel nonparametric regression estimator, and the semiparametric estimator of Powell, Stock, and Stoker (1989) to a data set that, based on theory and previous empirical work, should yield positive coefficients. The semiparametric estimator, on average, displayed the performance most consistent with prior expectations, followed by the nonparametric and parametric estimators. In addition, the paper shows how the semiparametric estimator can provide insights into the form of misspecification and suggest data transformations.
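A minimal sketch of the kernel regression estimator mentioned above, in Nadaraya-Watson form with a Gaussian kernel; the sine target, sample size, and bandwidth are illustrative choices, not the paper's settings:

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, bandwidth):
    # Gaussian-kernel weights between each evaluation point and every sample point.
    u = (x_eval[:, None] - x_train[None, :]) / bandwidth
    w = np.exp(-0.5 * u ** 2)
    # Locally weighted average of y: estimates E[y|x] without assuming a functional form.
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2 * np.pi, 200)
y = np.sin(x) + rng.normal(scale=0.3, size=200)
grid = np.linspace(0.5, 5.5, 11)
fit = nadaraya_watson(x, y, grid, bandwidth=0.4)  # tracks sin(x) up to noise
```

The bandwidth governs the efficiency/specification-error tradeoff the abstract describes: a smaller bandwidth lowers specification bias at the cost of variance, and in practice it would be chosen by cross-validation.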

2.
This article provides various paradigms for the grid estimator, the most useful being a representation of the grid estimator as a combination of the nonparametric nearest neighbor estimator and a parametric estimator. Hence, the grid estimator falls into the class of semiparametric estimators. The article uses this representation to derive the relative efficiency of the nearest neighbor, grid, and OLS estimators. Under statistically perfect conditions, the OLS estimator dominated the grid estimator, which in turn dominated the nearest neighbor estimator. A Monte Carlo experiment verified the theoretical results. A second Monte Carlo experiment showed the fragility of the OLS superiority under misspecification. The results cast light upon appraisal practice.

3.
The Local Whittle Estimator of Long-Memory Stochastic Volatility
We propose a new semiparametric estimator of the degree of persistence in volatility for long memory stochastic volatility (LMSV) models. The estimator uses the periodogram of the log squared returns in a local Whittle criterion which explicitly accounts for the noise term in the LMSV model. Finite-sample and asymptotic standard errors for the estimator are provided. An extensive simulation study reveals that the local Whittle estimator is much less biased and that the finite-sample standard errors yield more accurate confidence intervals than the widely-used GPH estimator. The estimator is also found to be robust against possible leverage effects. In an empirical analysis of the daily Deutsche Mark/US Dollar exchange rate, the new estimator indicates stronger persistence in volatility than the GPH estimator, provided that a large number of frequencies is used.
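For orientation, the plain local Whittle criterion that this estimator builds on can be sketched as below. The LMSV version described in the abstract adds an explicit noise term to the local spectral model, which this sketch omits, and the grid search is an illustrative shortcut for a proper numerical minimization:

```python
import numpy as np

def local_whittle(x, m):
    # Periodogram of x at the first m Fourier frequencies.
    n = len(x)
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)

    # Robinson's local Whittle objective; its minimizer estimates d.
    def objective(d):
        return np.log(np.mean(lam ** (2 * d) * I)) - 2 * d * np.mean(np.log(lam))

    grid = np.linspace(-0.49, 0.99, 297)
    return grid[np.argmin([objective(d) for d in grid])]

rng = np.random.default_rng(1)
d_hat = local_whittle(rng.normal(size=4096), m=64)  # white noise: d near 0
```

The asymptotic standard error of the plain estimator is 1/(2*sqrt(m)), so with m = 64 an estimate within roughly 0.12 of zero is expected on a white-noise input.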

4.
The ruin probability of an insurance company is a central topic in risk theory. We consider the classical Poisson risk model when the claim size distribution and the Poisson arrival rate are unknown. Given a sample of inter-arrival times and corresponding claims, we propose a semiparametric estimator of the ruin probability. We establish strong consistency and asymptotic normality of the estimator and study bootstrap confidence bands. Further, we present a simulation example to investigate the finite sample properties of the proposed estimator.
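As a rough illustration of the setting (not the paper's estimator), a finite-horizon ruin probability can be approximated by a plug-in Monte Carlo that resamples observed inter-arrival times and claims. The data below are simulated stand-ins and every parameter value is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)
# Stand-in "observed" data from the classical model: exponential inter-arrival
# times (Poisson arrivals) and exponential claims, chosen only for illustration.
inter_arrivals = rng.exponential(1.0, size=500)
claims = rng.exponential(1.0, size=500)

def ruin_prob_mc(inter_arrivals, claims, u, c, horizon, n_paths, rng):
    # Resample observed inter-arrivals and claims to simulate surplus paths
    # U(t) = u + c*t - S(t); count paths whose surplus goes negative before
    # the horizon. 400 resampled events comfortably cover the horizon here.
    ruined = 0
    for _ in range(n_paths):
        ia = rng.choice(inter_arrivals, size=400, replace=True)
        cl = rng.choice(claims, size=400, replace=True)
        times = np.cumsum(ia)                    # claim arrival times
        surplus = u + c * times - np.cumsum(cl)  # surplus just after each claim
        if np.any((surplus < 0) & (times < horizon)):
            ruined += 1
    return ruined / n_paths

p = ruin_prob_mc(inter_arrivals, claims, u=5.0, c=1.5, horizon=100.0,
                 n_paths=1000, rng=rng)
```

For exponential claims with these parameters the classical closed-form ruin probability is about 0.13, which the plug-in estimate should roughly reproduce; the finite horizon truncates the few paths that would ruin later.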

5.
Constant-quality commercial indices generated by ordinary least squares may suffer an efficiency loss due to leptokurtosis caused by outliers in transaction data. When such nonnormality occurs, substantial improvement in index precision is obtained by estimating the hedonic model with a semiparametric adaptive estimator. When this method was applied to 1,846 office transactions that occurred in the Phoenix metropolitan area from January 1997 through June 2004, a standard error reduction of approximately 9% was realized relative to ordinary least squares estimates. The difference in average returns between the semiparametric method and ordinary least squares was about 0.25% in each period, which represents a substantial increase in commercial property index precision. JEL Classification: C4, R0

6.
This article proposes time-varying nonparametric and semiparametric estimators of the conditional cross-correlation matrix in the context of portfolio allocation. Simulation results show that the nonparametric and semiparametric models perform best in DGPs with substantial variability or structural breaks in correlations; only when correlations are constant does the parametric DCC model deliver the best outcome. The methodologies are illustrated by evaluating two portfolios: the first consists of the equity sector SPDRs and the S&P 500, while the second contains major currencies. Results show that the nonparametric model generally dominates the others in-sample, whereas the semiparametric model is best for out-of-sample analysis.

7.
Non-logit maximum-likelihood estimators are inconsistent when using data on a subset of the choices available to agents. I show that the semiparametric, multinomial maximum-score estimator is consistent when using data on a subset of choices. No information is required for choices outside of the subset. The required conditions on the error terms are the same as when using all the choices. Estimation can proceed under additional restrictions if agents have unobserved, random consideration sets. A solution exists for instrumenting endogenous continuous variables. Monte Carlo experiments show the estimator performs well using small subsets of choices.

8.
When correlations between assets turn positive, multi-asset portfolios can become riskier than single assets. This article presents the estimation of tail risk at very high quantiles using a semiparametric estimator which is particularly suitable for portfolios with a large number of assets. The estimator simultaneously captures the information contained in each individual asset return that composes the portfolio and the interrelation between assets. Notably, the accuracy of the estimates does not deteriorate as the number of assets in the portfolio increases, and the implementation is as easy for a large number of assets as for a small number. We estimate the probability distribution of large losses for the American stock market, considering portfolios with ten, fifty, and one hundred assets of stocks with different market capitalizations. In each case, the approximation of the portfolio tail risk is very accurate. We compare our results with well-known benchmark models.

9.
This study examines the relationship between expected stock returns and volatility in the 12 largest international stock markets during January 1980 to December 2001. Consistent with most previous studies, we find a positive but insignificant relationship during the sample period for the majority of the markets based on parametric EGARCH-M models. However, using a flexible semiparametric specification of conditional variance, we find evidence of a significant negative relationship between expected returns and volatility in 6 out of the 12 markets. The results lend some support to the recent claim [Bekaert, G., Wu, G., 2000. Asymmetric volatility and risk in equity markets. Review of Financial Studies 13, 1–42; Whitelaw, R., 2000. Stock market risk and return: an empirical equilibrium approach. Review of Financial Studies 13, 521–547] that stock market returns are negatively correlated with stock market volatility.

10.
Utility-based models of asset pricing may be estimated with or without assuming a distribution for security returns; both approaches are developed and compared here. The chief strength of a parametric estimator lies in its computational simplicity and statistical efficiency when the added distributional assumption is true. In contrast, the nonparametric estimator is robust to departures from any particular distribution, and it is more consistent with the spirit underlying utility-based asset pricing models since the distribution of asset returns remains unspecified even in the empirical work. The nonparametric approach turns out to be easy to implement with precision nearly indistinguishable from its parametric counterpart in this particular application. The application shows that log utility is consistent with the data over the period 1926–1981.

11.
We consider a general problem of modeling a mortality law of a population of failing units with some parametric function. In this setting we define a mortality table of crude rates as a statistical estimator with a multinomial distribution and show its consistency as well as asymptotic normality. We further derive the statistical properties of parameter estimators in a parametric mortality model based on a weighted square loss function. We use the obtained results to study consistency and appropriateness of the parametric bootstrap method in our setting. We derive the conditions on the assumed parametric mortality law and the loss function under which the bootstrap is consistent for estimating the model parameters, their standard errors, and corresponding confidence intervals. We apply our results to a model of the Aggregate US Mortality Table based on a so-called mixture of extreme value distributions suggested by Carriere ().

12.
I use parametric and semiparametric methods to test for the order of integration in stock market indexes. The results, which are based on the EOE (Amsterdam), DAX (Frankfurt), Hang Seng (Hong Kong), FTSE100 (London), S&P500 (New York), CAC40 (Paris), Singapore All Shares, and the Japanese Nikkei, show that in almost all of the series the unit root hypothesis cannot be rejected. The Hang Seng and the Singapore All Shares appear to be the most nonstationary series, with orders of integration higher than one, while the S&P500 is the least nonstationary series, with values smaller than one, showing mean reversion.
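A common semiparametric estimator for this kind of analysis is the GPH log-periodogram regression; the abstract does not say which variant was used, so this is a generic sketch. The order of integration d is estimated from the slope of a regression of log I(lambda_j) on -2*log(lambda_j) over the first m Fourier frequencies:

```python
import numpy as np

def gph_d(x, m):
    # Periodogram at the first m Fourier frequencies.
    n = len(x)
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    # Regress log I(lam_j) on -2*log(lam_j); the slope estimates d.
    X = np.column_stack([np.ones(m), -2 * np.log(lam)])
    beta, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
    return beta[1]

rng = np.random.default_rng(2)
prices = np.cumsum(rng.normal(size=4097))   # toy random-walk "index": d = 1
d_hat = gph_d(np.diff(prices), m=64) + 1    # difference, estimate, add 1 back
```

An estimate near one is consistent with a unit root; values above or below one correspond to the more and less nonstationary cases reported in the abstract.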

13.
We propose a method of evaluating the accuracy of implied default probabilities. We modify the model proposed by Duffie et al. (Rev Fin Stud 12:678–720, 1999) to allow parametric statistical analysis. The pseudo maximum likelihood estimator is defined, and to justify our method we prove the consistency and asymptotic normality of the estimator. The key step is to define a pseudo score vector and apply the method of Wald (Ann Math Stat 20:595–601, 1949) together with a delta method. We also introduce a bootstrap for estimating the accuracies, similar to that for regression models. To apply our method to real data, we recommend the bootstrap rather than relying on asymptotic normality.

14.
This study proposes a new approach to the estimation of daily realised volatility in financial markets from intraday data. Initially, an examination of intraday returns on S&P 500 Index Futures reveals that returns can be characterised by heteroscedasticity and time-varying autocorrelation. After reviewing a number of daily realised volatility estimators cited in the literature, it is concluded that these estimators are based upon a number of restrictive assumptions in regard to the data generating process for intraday returns. We use a weak set of assumptions about the data generating process for intraday returns, including transaction returns, given in den Haan and Levin [den Haan, W.J., Levin, A., 1996. Inferences from parametric and non-parametric covariance matrix estimation procedures, Working paper, NBER, 195.], which allows for heteroscedasticity and time-varying autocorrelation in intraday returns. These assumptions allow the VARHAC estimator to be employed in the estimation of daily realised volatility. An empirical analysis of the VARHAC daily volatility estimator employing intraday transaction returns concludes that this estimator performs well in comparison to other estimators cited in the literature.

15.
We use stock market data to analyze the quality of alternative models and procedures for forecasting expected shortfall (ES) at different significance levels. We compute ES forecasts from conditional models applied to the full distribution of returns as well as from models that focus on tail events using extreme value theory (EVT). We also apply the semiparametric filtered historical simulation (FHS) approach to obtain 10-day ES forecasts; at the 10-day horizon we combine FHS with EVT. The performance of the different models is assessed using six ES backtests recently proposed in the literature. Our results suggest that conditional EVT-based models produce more accurate 1-day and 10-day ES forecasts than non-EVT-based models. Under either approach, asymmetric probability distributions for return innovations tend to produce better forecasts. Incorporating EVT in parametric or semiparametric approaches also improves ES forecasting performance. These qualitative results also hold for the recent crisis period, even though all models then underestimate the level of risk. FHS narrows the range of numerical forecasts obtained from alternative models, thereby reducing model risk. Combining EVT and FHS seems to be the best approach for obtaining accurate ES forecasts.
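The FHS step can be sketched with an EWMA volatility filter standing in for the GARCH-type filter usually paired with it; the decay parameter, the toy return sample, and the significance level are illustrative assumptions, not the paper's choices:

```python
import numpy as np

def fhs_es(returns, alpha, lam=0.94, n_sims=10000, rng=None):
    # Filter returns with an EWMA variance recursion (stand-in for a GARCH
    # filter), bootstrap the standardized residuals, and rescale them by the
    # one-day-ahead volatility forecast to simulate next-day returns.
    rng = np.random.default_rng(0) if rng is None else rng
    var = np.empty(len(returns))
    var[0] = returns.var()
    for t in range(1, len(returns)):
        var[t] = lam * var[t - 1] + (1 - lam) * returns[t - 1] ** 2
    z = returns / np.sqrt(var)                        # standardized residuals
    sigma_next = np.sqrt(lam * var[-1] + (1 - lam) * returns[-1] ** 2)
    sims = sigma_next * rng.choice(z, size=n_sims, replace=True)
    var_alpha = np.quantile(sims, alpha)              # alpha-level VaR (a return)
    return sims[sims <= var_alpha].mean()             # ES: mean return beyond VaR

rng = np.random.default_rng(5)
returns = 0.01 * rng.standard_t(df=5, size=2000)      # heavy-tailed toy returns
es_1d = fhs_es(returns, alpha=0.025, rng=rng)         # negative: an average loss
```

This sketch covers only the 1-day case; per the abstract, the 10-day horizon additionally combines the resampling with EVT for the tail.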

16.
The new proposal of the Basel Committee on banking regulation issued in January 2001 allows banks to use internal rating systems to classify firms. Within this context, the main problem is to find a model that fits the data as well as possible while also providing good predictive and explanatory capabilities. In this paper, our aim is to compare two kinds of classification models applied to creditworthiness, using weighted classification error as the performance function: the standard logistic model and a mixed logistic model, adopting, respectively, a parametric and a semiparametric approach. The main problem with the former is its i.i.d. assumption, whereas it is often necessary to allow for the unobservable heterogeneity that characterizes microeconomic data. To account for this phenomenon, we defined and applied a random effects logistic model, avoiding parametric assumptions on the random effect distribution. This leads to a likelihood defined as the integral of the kernel density with respect to the mixing density, which has no analytical solution. This problem can be obviated by approximating the integral with a finite sum of kernel densities, each characterized by a different set of model parameters. This discrete nature helps us detect non-overlapping clusters characterized by homogeneous values of insolvency risk, and classify firms into one of these clusters by means of estimated posterior probabilities of component membership.

17.
We propose a performance measure that generalizes the Sharpe ratio. The new performance measure is monotone with respect to stochastic dominance and consistently accounts for mean, variance, and higher moments of the return distribution. It is equivalent to the Sharpe ratio if returns are normally distributed. Moreover, the two performance measures are asymptotically equivalent as the underlying distributions converge to the normal distribution. We suggest a parametric and a non-parametric estimator for the new performance measure and provide an empirical illustration using mutual fund and hedge fund data.

18.
Current real estate statistical valuation involves the estimation of parameters within a posited specification. Such parametric estimation requires judgment concerning model (1) variables and (2) functional form. In contrast, nonparametric regression estimation requires attention to (1) but permits greatly reduced attention to (2). Parametric estimators functionally model the parameters and variables affecting E(y|x), while nonparametric estimators directly model pdf(y, x) and hence E(y|x). This article applies the kernel nonparametric regression estimator to two different data sets and specifications. The article shows the nonparametric estimator outperforms the standard parametric estimator (OLS) across variable transformations and across data subsets differing in quality. In addition, the article reviews properties of nonparametric estimators, presents the history of nonparametric estimators in real estate, and discusses a representation of the kernel estimator as a nonparametric grid method.

19.
This paper is motivated by automated valuation systems, which would benefit from an ability to estimate spatial variation in location value. It develops theory for the local regression model (LRM), a semiparametric approach to estimating a location value surface. There are two parts to the LRM: (1) an ordinary least squares (OLS) model that holds constant interior square footage, land area, bathrooms, and other structural characteristics; and (2) a nonparametric smoother (local polynomial regression, LPR) which calculates location value as a function of latitude and longitude. Several methods are used to consistently estimate both parts of the model. The LRM was fit to geocoded hedonic sales data for six towns in the suburbs of Boston, MA. The estimates yield substantial, significant, and plausible spatial patterns in location values. Using the LRM as an exploratory tool, local peaks and valleys in location value identified by the model are close to points identified by the tax assessor, and they are shown to add to the explanatory power of an OLS model. Out-of-sample MSE shows that the LRM with a first-degree polynomial (local linear smoothing) is somewhat better than polynomials of degree zero or two. Future applications might use degree zero (the well-known Nadaraya-Watson estimator), because it is available in popular commercial software. The optimized LRM reduces MSE relative to the OLS model by between 5 and 11 percent while adding information on statistically significant variations in location value.

20.
Estimation of the tail index parameter of a single-parameter Pareto model has wide application in actuarial and other sciences. Here we examine various estimators from the standpoint of two competing criteria: efficiency and robustness against upper outliers. With the maximum likelihood estimator (MLE) being efficient but nonrobust, we desire alternative estimators that retain a relatively high degree of efficiency while also being adequately robust. A new generalized median-type estimator is introduced and compared with the MLE and several well-established estimators associated with the methods of moments, trimming, least squares, quantiles, and percentile matching. The method-of-moments and least squares estimators are found to be relatively deficient with respect to both criteria and should be disfavored, while the trimmed mean and generalized median estimators tend to dominate the other competitors; the generalized median type performs best overall. These findings provide a basis for revising prevailing viewpoints. Other topics discussed are applications to robust estimation of upper quantiles, tail probabilities, and actuarial quantities such as stop-loss and excess-of-loss reinsurance premiums, which arise in connection with portfolio solvency. Robust parametric methods are compared with empirical nonparametric methods, which are typically nonrobust.
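The efficiency/robustness tension can be made concrete for the single-parameter Pareto model. The quantile-type alternative below (built from the sample median) is only an illustrative stand-in, not the generalized median estimator introduced in the paper; the sample size and parameter values are hypothetical:

```python
import numpy as np

def pareto_mle(x, sigma):
    # MLE of the tail index with known scale sigma: n / sum(log(x_i / sigma)).
    # Efficient, but a single huge upper outlier inflates the denominator.
    return len(x) / np.sum(np.log(x / sigma))

def pareto_quantile(x, sigma):
    # Quantile-type estimator from the sample median: for a Pareto law the
    # median is sigma * 2**(1/alpha), so alpha = log 2 / log(median / sigma).
    # Less efficient, but one extreme observation barely moves the median.
    return np.log(2) / np.log(np.median(x) / sigma)

rng = np.random.default_rng(3)
alpha, sigma = 2.0, 1.0
x = sigma * (1 - rng.random(500)) ** (-1 / alpha)   # Pareto(alpha) sample
x_bad = np.append(x, 1e6)                           # one extreme upper outlier
```

One upper outlier pulls the MLE downward, while the median-based estimate is essentially unchanged; the price of that robustness is a larger sampling variance for the quantile-type estimator.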
