Similar Literature
20 similar documents found.
1.
A normality assumption is usually made for the discrimination between two stationary time series processes. A nonparametric approach is desirable whenever there is doubt concerning the validity of this normality assumption. In this paper a nonparametric approach is suggested based on kernel density estimation, applied first to (p+1) sample autocorrelations and second to (p+1) consecutive observations. A numerical comparison is made between Fisher's linear discrimination based on sample autocorrelations and kernel density discrimination for AR and MA processes with and without Gaussian noise. The methods are applied to some seismological data.
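As a concrete illustration of the first variant, here is a minimal sketch of kernel density discrimination on sample autocorrelations, assuming two labelled sets of training series. The AR(1) example, the choice p = 2, and all function names are illustrative assumptions, not details from the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

def sample_acf(x, p):
    """Sample autocorrelations at lags 1..p+1 (lag 0 is identically 1)."""
    xc = np.asarray(x, float) - np.mean(x)
    denom = xc @ xc
    return np.array([xc[:-k] @ xc[k:] / denom for k in range(1, p + 2)])

def fit_class_kde(series_list, p):
    """Fit a (p+1)-dimensional KDE to one class's ACF feature vectors.
    Needs clearly more training series than p+1 dimensions."""
    feats = np.column_stack([sample_acf(s, p) for s in series_list])
    return gaussian_kde(feats)            # rows = dimensions, cols = series

def classify(x_new, kde0, kde1, p):
    """Assign a new series to the class with the higher estimated density."""
    f = sample_acf(x_new, p)
    return 0 if kde0(f) >= kde1(f) else 1

# toy check: AR(1) with phi = 0.8 versus phi = -0.8
rng = np.random.default_rng(0)
def ar1(phi, n=200):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

p = 2
kde0 = fit_class_kde([ar1(0.8) for _ in range(50)], p)
kde1 = fit_class_kde([ar1(-0.8) for _ in range(50)], p)
print(classify(ar1(0.8), kde0, kde1, p))  # expected: 0
```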

2.
Multi-input multi-output production technologies can be represented using distance functions. Econometric estimation of these functions typically involves factoring out one of the outputs or inputs and estimating the resulting equation using maximum likelihood methods. A problem with this approach is that the outputs or inputs that are not factored out may be correlated with the composite error term. Fernandez et al. (J Econ 98:47–79, 2000) show how to solve this so-called ‘endogeneity problem’ using Bayesian methods. In this paper I use the approach to estimate an output distance function and an associated index of total factor productivity (TFP) change. The TFP index is a new index that satisfies most, if not all, economically-relevant axioms from index number theory. It can also be exhaustively decomposed into a measure of technical change and various measures of efficiency change. I illustrate the methodology using state-level data on U.S. agricultural input and output quantities (no prices are needed). Results are summarized in terms of the characteristics (e.g., means) of estimated probability density functions for measures of TFP change, technical change and efficiency change.

3.
In spite of the current availability of numerous methods of cluster analysis, evaluating a clustering configuration is questionable without the definition of a true population structure, representing the ideal partition that clustering methods should try to approximate. A precise statistical notion of cluster, unshared by most of the mainstream methods, is provided by the density‐based approach, which assumes that clusters are associated with some specific features of the probability distribution underlying the data. The non‐parametric formulation of this approach, known as modal clustering, draws a correspondence between the groups and the modes of the density function. An appealing implication is that the ill‐specified task of cluster detection can be regarded as a more circumscribed problem of estimation, and the number of clusters is also conceptually well defined. In this work, modal clustering is critically reviewed from both conceptual and operational standpoints. The main directions of current research are outlined, along with some challenges and avenues for further research.
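Mean shift is one standard operational form of the modal idea: it ascends a kernel density estimate so that each group corresponds to one mode. A minimal sketch with simulated two-mode data follows; the bandwidth rule and sample sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)),    # points around mode 1
               rng.normal(5, 1, (100, 2))])   # points around mode 2

bw = estimate_bandwidth(X, quantile=0.2)      # bandwidth controls how many modes appear
labels = MeanShift(bandwidth=bw).fit_predict(X)
print("clusters found:", len(np.unique(labels)))  # expected: 2
```

Note that the bandwidth, not a preset number of clusters, is the only tuning constant: the number of groups follows from the estimated density.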

4.
In this paper we have attempted to provide an integrated approach to the estimation of models with risk terms. It was argued that there exist orthogonality conditions between variables in the information set and higher-order moments of the unanticipated variable density. These could be exploited to provide consistent estimators of the parameters associated with the risk term. Specifically, it was recommended that an IV estimator should be applied, with instruments constructed from the information set. Four existing methods commonly used to estimate models with risk terms are examined, and applications of the techniques are made to the estimation of the risk term in the $US/$C exchange market, and the effects of price uncertainty upon production.
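A minimal sketch of the recommended IV strategy in its simplest form: two-stage least squares with instruments constructed from the information set. The data-generating step and all variable names are illustrative assumptions, not the paper's exchange-rate or production applications.

```python
import numpy as np

def tsls(y, X, Z):
    """Two-stage least squares: regress X on Z, then y on the fitted X."""
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # first stage
    return np.linalg.lstsq(X_hat, y, rcond=None)[0]    # second stage

rng = np.random.default_rng(1)
n = 500
z = rng.normal(size=(n, 2))               # instruments from the information set
u = rng.normal(size=n)                    # structural error
x = z @ np.array([1.0, -0.5]) + 0.8 * u   # regressor correlated with the error
y = 2.0 * x + u                           # true slope = 2; OLS would be biased
Z = np.column_stack([np.ones(n), z])
X = np.column_stack([np.ones(n), x])
print(tsls(y, X, Z))                      # intercept near 0, slope near 2
```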

5.
In this paper, we study a robust and efficient estimation procedure for the order of finite mixture models based on minimizing a penalized density power divergence. For this task, we use the locally conic parametrization approach developed by Dacunha-Castelle and Gassiat (ESAIM Probab Stat 285–317, 1997a; Ann Stat 27:1178–1209, 1999), and verify that the resulting minimum penalized density power divergence estimator is consistent. Simulation results are provided for illustration.
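A simplified sketch of the order-selection logic for univariate Gaussian mixtures: evaluate a penalized density power divergence (DPD) criterion across candidate orders and pick the minimizer. For brevity the mixtures are fitted by EM and the DPD is only evaluated at those fits rather than minimized directly as in the paper; alpha, the penalty weight and the integration grid are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def dpd_criterion(x, gm, alpha=0.25):
    """Empirical density power divergence: ∫f^(1+α) − (1+1/α)·mean(f(x)^α)."""
    grid = np.linspace(x.min() - 3, x.max() + 3, 2000)
    f_grid = np.exp(gm.score_samples(grid[:, None]))
    integral = np.sum(f_grid ** (1 + alpha)) * (grid[1] - grid[0])
    f_x = np.exp(gm.score_samples(x[:, None]))
    return integral - (1 + 1 / alpha) * np.mean(f_x ** alpha)

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 300)])
n = len(x)
scores = {}
for k in range(1, 5):
    gm = GaussianMixture(n_components=k, random_state=0).fit(x[:, None])
    scores[k] = dpd_criterion(x, gm) + k * np.log(n) / n   # penalty grows with order
print("selected order:", min(scores, key=scores.get))       # expected: 2
```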

6.
Computationally efficient methods for Bayesian analysis of seemingly unrelated regression (SUR) models are described and applied, involving a direct Monte Carlo (DMC) approach to calculate Bayesian estimation and prediction results using diffuse or informative priors. This DMC approach is employed to compute Bayesian marginal posterior densities, moments, intervals and other quantities, using data simulated from known models and also using data from an empirical example involving firms' sales. The results obtained by the DMC approach are compared to those yielded by a Markov chain Monte Carlo (MCMC) approach. It is concluded from these comparisons that the DMC approach is worthwhile and applicable to many SUR and other problems.

7.
Households' choice of the number of leisure trips and the total number of overnight stays is empirically studied using Swedish tourism data. A bivariate hurdle approach separating the participation decision (whether to travel and stay the night) from the quantity decision (the number of trips and nights) is employed. The quantity decision is modelled with a bivariate mixed Poisson lognormal model allowing for both positive and negative correlation between the count variables. The observed endogenous variables are drawn from a truncated density and estimation is pursued by simulated maximum likelihood. The estimation results indicate a negative correlation between the number of trips and nights. In most cases own-price effects are, as expected, negative, while estimates of cross-price effects vary between samples.

8.
Stochastic FDH/DEA estimators for frontier analysis
In this paper we extend the work of Simar (J Product Anal 28:183–201, 2007), introducing noise in nonparametric frontier models. We develop an approach that synthesizes the best features of the two main methods in the estimation of production efficiency. Specifically, our approach first allows for statistical noise, similar to stochastic frontier analysis (and in a more flexible way), and second, it allows modelling multiple-input, multiple-output technologies without imposing parametric assumptions on the production relationship, as is done in nonparametric methods like Data Envelopment Analysis (DEA) and Free Disposal Hull (FDH). The methodology is based on the theory of local maximum likelihood estimation and extends recent work of Kumbhakar et al. (J Econom 137(1):1–27, 2007) and Park et al. (J Econom 146:185–198, 2008). Our method is suitable for modelling and estimating marginal effects on the inefficiency level jointly with marginal effects of inputs. The approach is robust to heteroskedasticity and to various (unknown) distributions of statistical noise and inefficiency, despite assuming simple anchorage models. The method also improves the DEA/FDH estimators by making them quite robust to statistical noise and especially to outliers, which were the main problems of the original DEA/FDH estimators. The procedure shows great performance for various simulated cases and is also illustrated on some real data sets. Even in the single-output case, our simulated examples show that our stochastic DEA/FDH improves on the Kumbhakar et al. (J Econom 137(1):1–27, 2007) method by making the resulting frontier smoother, monotonic and, if we wish, concave.

9.
This paper is an up-to-date survey of the state of the art in consumer demand modelling. We review and evaluate advances in a number of related areas, including different approaches to empirical demand analysis, such as the differential approach, the locally flexible functional forms approach, the semi-nonparametric approach, and a nonparametric approach. We also address estimation issues, including sampling-theoretic and Bayesian estimation methods, and discuss the limitations of the currently common approaches. Finally, we highlight the challenge inherent in achieving economic regularity, for consistency with the assumptions of the underlying neoclassical economic theory, as well as econometric regularity when variables are nonstationary.

10.
This paper describes improvements on methods developed by Burgstahler and Dichev (1997, Earnings management to avoid earnings decreases and losses, Journal of Accounting and Economics, 24(1), pp. 99–126) and Bollen and Pool (2009, Do hedge fund managers misreport returns? Evidence from the pooled distribution, Journal of Finance, 64(5), pp. 2257–2288) to test for earnings management by identifying discontinuities in distributions of scaled earnings or earnings forecast errors. While existing methods use preselected bandwidths for kernel density estimation and histogram construction, the proposed test procedure addresses the key problem of bandwidth selection by using a bootstrap test to endogenise the selection step. The main advantage offered by the bootstrap procedure over prior methods is that it provides a reference distribution that cannot be globally distinguished from the empirical distribution rather than assuming a correct reference distribution. This procedure limits the researcher's degrees of freedom and offers a simple procedure to find and test a local discontinuity. I apply the bootstrap density estimation to earnings, earnings changes, and earnings forecast errors in US firms over the period 1976–2010. Significance levels found in earlier studies are greatly reduced, often to insignificant values. Discontinuities cannot be detected in analysts’ forecast errors, while such findings of discontinuities in earlier research can be explained by a simple rounding mechanism. Earnings data show a large drop in loss aversion after 2003 that cannot be detected in changes of earnings.
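A stripped-down sketch of the bootstrap logic: draw resamples from a kernel-smoothed version of the data to obtain a reference distribution for the count in the bin just below zero, then compare the observed count against it. Unlike the paper's procedure, this sketch does not endogenise bandwidth selection (it uses SciPy's rule of thumb), and the bin edges and simulated data are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def discontinuity_pvalue(x, lo=-0.01, hi=0.0, B=999):
    """One-sided bootstrap p-value for a 'too empty' bin just below zero."""
    observed = np.sum((x >= lo) & (x < hi))
    kde = gaussian_kde(x)                  # smooth, continuous reference density
    boot = np.empty(B)
    for b in range(B):
        xb = kde.resample(len(x)).ravel()  # resample from the smoothed density
        boot[b] = np.sum((xb >= lo) & (xb < hi))
    return (1 + np.sum(boot <= observed)) / (B + 1)

np.random.seed(0)                          # kde.resample draws from NumPy's global state
x = np.random.normal(0.02, 0.05, 2000)     # scaled 'earnings' with no built-in discontinuity
print(discontinuity_pvalue(x))             # typically well above 0.05
```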

11.
Model averaging has become a popular method of estimation, following increasing evidence that model selection and estimation should be treated as one joint procedure. Weighted-average least squares (WALS) is a recent model-averaging approach, which takes an intermediate position between frequentist and Bayesian methods, allows a credible treatment of ignorance, and is extremely fast to compute. We review the theory of WALS and discuss extensions and applications.

12.
A quasi-maximum likelihood procedure for estimating the parameters of multi-dimensional diffusions is developed in which the transitional density is a multivariate Gaussian density with first and second moments approximating the true moments of the unknown density. For affine drift and diffusion functions, the moments are exactly those of the true transitional density and for nonlinear drift and diffusion functions the approximation is extremely good and is as effective as alternative methods based on likelihood approximations. The estimation procedure generalises to models with latent factors. A conditioning procedure is developed that allows parameter estimation in the absence of proxies.
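For affine models the Gaussian quasi-likelihood coincides with the exact likelihood, and an Ornstein-Uhlenbeck process makes a compact illustration. The parameter values, simulated path and optimizer settings below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def ou_negloglik(params, x, dt):
    """Gaussian (quasi-)likelihood built from the exact OU transition moments."""
    kappa, theta, sigma = params
    if kappa <= 0 or sigma <= 0:
        return np.inf
    e = np.exp(-kappa * dt)
    mean = theta + (x[:-1] - theta) * e             # exact conditional mean
    var = sigma**2 * (1 - e**2) / (2 * kappa)       # exact conditional variance
    return -np.sum(norm.logpdf(x[1:], mean, np.sqrt(var)))

rng = np.random.default_rng(3)                      # simulate with kappa=1, theta=0.5, sigma=0.2
dt, n = 0.1, 5000
e0 = np.exp(-1.0 * dt)
sd0 = np.sqrt(0.2**2 * (1 - e0**2) / (2 * 1.0))
x = np.empty(n); x[0] = 0.5
for t in range(1, n):
    x[t] = 0.5 + (x[t-1] - 0.5) * e0 + sd0 * rng.normal()

fit = minimize(ou_negloglik, x0=[0.5, 0.0, 0.1], args=(x, dt), method="Nelder-Mead")
print(fit.x)                                        # roughly [1.0, 0.5, 0.2]
```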

13.
This paper considers Bayesian estimation of the threshold vector error correction (TVECM) model in moderate to large dimensions. Using the lagged cointegrating error as a threshold variable gives rise to additional difficulties that are typically solved by utilizing large-sample approximations. By relying on Markov chain Monte Carlo methods, we are able to circumvent these issues and avoid computationally prohibitive estimation strategies like the grid search. Due to the proliferation of parameters, we use novel global-local shrinkage priors in the spirit of Griffin and Brown (2010). We illustrate the merits of our approach in an application to five exchange rates vis-à-vis the US dollar by means of a forecasting comparison. Our findings indicate that adopting a non-linear modeling approach improves predictive accuracy for most currencies relative to a set of simpler benchmark models and the random walk.

14.
A simple and robust approach is proposed for the parametric estimation of scalar homogeneous stochastic differential equations. We specify a parametric class of diffusions and estimate the parameters of interest by minimizing criteria based on the integrated squared difference between kernel estimates of the drift and diffusion functions and their parametric counterparts. The procedure does not require simulations or approximations to the true transition density and has the simplicity of standard nonlinear least-squares methods in discrete time. A complete asymptotic theory for the parametric estimates is developed. The limit theory relies on infill and long span asymptotics and is robust to deviations from stationarity, requiring only recurrence.
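A minimal sketch of the criterion: form Nadaraya-Watson kernel estimates of the drift and squared diffusion from the discretely sampled path, then fit a parametric model by least squares against those curves on a grid. The linear-drift, constant-diffusion model and all tuning constants are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def nw_estimates(x, dt, grid, h):
    """Kernel estimates of drift E[dX|x]/dt and squared diffusion E[dX^2|x]/dt."""
    dx = np.diff(x)
    w = np.exp(-0.5 * ((grid[:, None] - x[:-1]) / h) ** 2)  # Gaussian kernel weights
    w /= w.sum(axis=1, keepdims=True)
    return (w @ dx) / dt, (w @ dx**2) / dt

def criterion(params, grid, drift_hat, diff2_hat):
    """Squared distance between kernel curves and the parametric candidates."""
    a, b, s2 = params
    return np.sum((drift_hat - (a + b * grid))**2) + np.sum((diff2_hat - s2)**2)

rng = np.random.default_rng(4)                       # path with drift 0.5 - x, sigma = 0.3
dt, n = 0.1, 20000
x = np.empty(n); x[0] = 0.0
for t in range(1, n):
    x[t] = x[t-1] + (0.5 - x[t-1]) * dt + 0.3 * np.sqrt(dt) * rng.normal()

grid = np.linspace(np.quantile(x, 0.05), np.quantile(x, 0.95), 50)
drift_hat, diff2_hat = nw_estimates(x, dt, grid, h=0.1)
fit = minimize(criterion, [0.0, 0.0, 0.1], args=(grid, drift_hat, diff2_hat),
               method="Nelder-Mead")
print(fit.x)                                         # roughly [0.5, -1.0, 0.09]
```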

15.
Journal of Econometrics, 1999, 88(2): 341–363
Optimal estimation of missing values in ARMA models is typically performed by using the Kalman filter for likelihood evaluation, ‘skipping’ in the computations the missing observations, obtaining the maximum likelihood (ML) estimators of the model parameters, and using some smoothing algorithm. The same type of procedure has been extended to nonstationary ARIMA models in Gómez and Maravall (1994). An alternative procedure suggests filling in the holes in the series with arbitrary values and then performing ML estimation of the ARIMA model with additive outliers (AO). When the model parameters are not known the two methods differ, since the AO likelihood is affected by the arbitrary values. We develop the proper likelihood for the AO approach in the general non-stationary case and show the equivalence of this and the skipping method. Finally, the two methods are compared through simulation, and their relative advantages assessed; the comparison also includes the AO method with the uncorrected likelihood.
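The 'skipping' idea is available off the shelf: the state-space Kalman filter in statsmodels evaluates the ARMA likelihood while treating NaN entries as missing. The ARMA(1,1) order and the simulated series below are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)                   # simulate an ARMA(1,1) series
n = 300
e = rng.normal(size=n)
y = np.empty(n); y[0] = e[0]
for t in range(1, n):
    y[t] = 0.7 * y[t-1] + e[t] + 0.3 * e[t-1]

y[50:55] = np.nan                                # punch holes in the series
model = sm.tsa.SARIMAX(y, order=(1, 0, 1))       # the Kalman filter skips the NaNs
res = model.fit(disp=False)
print(res.params)                                # ML estimates despite the gaps
```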

16.
The asymptotic approach and Fisher's exact approach have often been used for testing the association between two dichotomous variables. The asymptotic approach may be appropriate in large samples but is often criticized for unacceptably high actual type I error rates in small to medium samples. Fisher's exact approach suffers from conservative type I error rates and low power. For these reasons, a number of exact unconditional approaches have been proposed, which are generally more powerful than their exact conditional counterparts. We consider the traditional unconditional approach based on maximization and compare it to our presented approach, which is based on estimation and maximization. We extend the unconditional approach based on estimation and maximization to designs with the total sum fixed. The procedures based on the Pearson chi-square, Yates's corrected, and likelihood ratio test statistics are evaluated with regard to actual type I error rates and power. A real example is used to illustrate the various testing procedures. The unconditional approach based on estimation and maximization performs well, having an actual level much closer to the nominal level. The Pearson chi-square and likelihood ratio test statistics work well with this efficient unconditional approach. This approach is generally more powerful than the other p-value calculation methods in the scenarios considered.
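A compact sketch of the estimation-and-maximization p-value for a 2×2 design with fixed row totals: estimate the common nuisance proportion, then maximize the exact tail probability of the Pearson statistic over a small grid around that estimate. The grid width and the example counts are illustrative assumptions.

```python
import numpy as np
from scipy.stats import binom

def pearson_stat(x1, n1, x2, n2):
    """Pearson chi-square for comparing two independent proportions."""
    p = (x1 + x2) / (n1 + n2)
    if p in (0.0, 1.0):
        return 0.0
    e = p * (1 - p)
    return (x1 - n1 * p)**2 / (n1 * e) + (x2 - n2 * p)**2 / (n2 * e)

def em_pvalue(x1, n1, x2, n2, width=0.1, npts=41):
    t_obs = pearson_stat(x1, n1, x2, n2)
    p_hat = (x1 + x2) / (n1 + n2)                        # estimation step
    grid = np.clip(np.linspace(p_hat - width, p_hat + width, npts), 1e-6, 1 - 1e-6)
    a, b = np.arange(n1 + 1), np.arange(n2 + 1)
    T = np.array([[pearson_stat(i, n1, j, n2) for j in b] for i in a])
    reject = T >= t_obs - 1e-12                          # tables at least as extreme
    return max(np.outer(binom.pmf(a, n1, p), binom.pmf(b, n2, p))[reject].sum()
               for p in grid)                            # maximization step

print(em_pvalue(3, 10, 9, 12))   # example table: 3/10 successes vs 9/12
```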

17.
We analyze the quantile combination approach (QCA) of Lima and Meng (2017) in situations with mixed-frequency data. The estimation of quantile regressions with mixed-frequency data leads to a parameter proliferation problem, which can be addressed through extensions of the MIDAS and soft (hard) thresholding methods towards quantile regression. We use the proposed approach to forecast the growth rate of the industrial production index, and our results show that including high-frequency information in the QCA achieves substantial gains in terms of forecasting accuracy.

18.
This paper presents a Bayesian approach to bandwidth selection for multivariate kernel regression. A Monte Carlo study shows that under the average squared error criterion, the Bayesian bandwidth selector is comparable to the cross-validation method and clearly outperforms the bootstrapping and rule-of-thumb bandwidth selectors. The Bayesian bandwidth selector is applied to a multivariate kernel regression model that is often used to estimate the state-price density of Arrow–Debreu securities with the S&P 500 index options data and the DAX index options data. The proposed Bayesian bandwidth selector represents a data-driven solution to the problem of choosing bandwidths for the multivariate kernel regression involved in the nonparametric estimation of the state-price density pioneered by Aït-Sahalia and Lo [Aït-Sahalia, Y., Lo, A.W., 1998. Nonparametric estimation of state-price densities implicit in financial asset prices. The Journal of Finance, 53, 499–547.]
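A univariate sketch of the idea of giving bandwidths a posterior: a random-walk Metropolis sampler on log(h), with a Gaussian likelihood built from leave-one-out Nadaraya-Watson residuals. The flat prior, proposal scale and simulated data are illustrative assumptions; the paper's multivariate setting is richer.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
x = rng.uniform(-2, 2, n)
y = np.sin(2 * x) + 0.3 * rng.normal(size=n)

def loo_loglik(log_h):
    """Profile Gaussian log-likelihood of leave-one-out kernel regression residuals."""
    h = np.exp(log_h)
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    np.fill_diagonal(K, 0.0)                  # leave-one-out weights
    den = K.sum(axis=1)
    if np.any(den < 1e-12):
        return -np.inf                        # bandwidth far too small
    resid = y - (K @ y) / den
    s2 = np.mean(resid ** 2)
    return -0.5 * n * (np.log(2 * np.pi * s2) + 1.0)

draws, log_h = [], np.log(0.3)
lp = loo_loglik(log_h)
for it in range(5000):
    prop = log_h + 0.2 * rng.normal()         # random-walk proposal, flat prior on log(h)
    lp_prop = loo_loglik(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
        log_h, lp = prop, lp_prop
    draws.append(np.exp(log_h))
print("posterior mean bandwidth:", np.mean(draws[1000:]))
```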

19.
Predicting the geo-temporal variations of crime and disorder
Traditional police boundaries (precincts, patrol districts, etc.) often fail to reflect the true distribution of criminal activity and thus do little to assist in the optimal allocation of police resources. This paper introduces methods for crime incident forecasting that focus upon geographical areas of concern transcending traditional policing boundaries. The computerised procedure utilises a geographical crime incidence-scanning algorithm to identify clusters with relatively high levels of crime (hot spots). These clusters provide sufficient data for training artificial neural networks (ANNs) capable of modelling trends within them. ANN specification and estimation are enhanced by application of a novel technique, the Gamma test (GT).
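A toy sketch of the pipeline: a density-based scan for hot spots, then a small neural network trained on each cluster's recent counts. DBSCAN and the MLP stand in for the paper's scanning algorithm and Gamma-test-guided ANN specification; all data are simulated.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
pts = np.vstack([rng.normal([0, 0], 0.1, (200, 2)),   # hot spot A
                 rng.normal([3, 3], 0.1, (150, 2)),   # hot spot B
                 rng.uniform(-1, 4, (50, 2))])        # scattered background incidents
labels = DBSCAN(eps=0.2, min_samples=10).fit_predict(pts)
print("hot spots found:", len(set(labels) - {-1}))    # expected: 2

# weekly counts for one hot spot; predict week t from weeks t-4..t-1
counts = 20 + 5 * np.sin(np.arange(104) / 8) + rng.normal(0, 2, 104)
lags = np.column_stack([counts[i:i - 4] for i in range(4)])
target = counts[4:]
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(lags[:-10], target[:-10])                     # hold out the last 10 weeks
print("forecasts:", net.predict(lags[-10:]).round(1))
```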

20.
An approach to developing a possibly misspecified econometric model that will be used as the beliefs of an expected utility maximiser is proposed. This approach builds on a novel objective function that measures the value of predictive distributions in decision-making and is used in model estimation, selection and evaluation. The methods proposed also provide an econometric approach for developing arbitrary parametric action rules such as technical trading rules. The approach is compared in detail with existing methods and is applied in the context of a CARA investor's decision problem where analytical and empirical results suggest it is very effective.
