Similar Articles
Found 20 similar articles (search time: 15 ms)
1.
Single-index models are popular regression models that are more flexible than linear models while still maintaining more structure than purely nonparametric models. We consider the problem of estimating the regression parameters under a monotonicity constraint on the unknown link function. In contrast to the standard approach of using smoothing techniques, we review different "non-smooth" estimators that avoid the difficult choice of smoothing parameters. For about 30 years it has been conjectured that the profile least squares estimator is a √n-consistent estimator of the regression parameter, but the only non-smooth argmin/argmax estimators actually known to achieve this √n-rate are not based on the nonparametric least squares estimator of the link function. However, solving a score equation corresponding to the least squares approach does yield √n-consistent estimators. We illustrate the good behavior of the score approach via simulations. The connection with the binary choice and current status linear regression models is also discussed.
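As a concrete illustration of the nonparametric least squares step mentioned above, the monotone link function can be estimated at a fixed index parameter by isotonic regression, computed with the pool-adjacent-violators algorithm. The sketch below uses simulated data; the function names, the design, and the true link are illustrative and not taken from the paper.

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators algorithm: isotonic (non-decreasing)
    least-squares fit to the sequence y."""
    blocks = []                      # stack of (block mean, block size)
    for v in np.asarray(y, dtype=float):
        m, s = v, 1.0
        # merge backwards while monotonicity is violated
        while blocks and blocks[-1][0] > m:
            pm, ps = blocks.pop()
            m = (pm * ps + m * s) / (ps + s)
            s += ps
        blocks.append((m, s))
    return np.concatenate([np.full(int(s), m) for m, s in blocks])

def monotone_link_fit(X, y, beta):
    """Least-squares monotone link estimate at the index values X @ beta."""
    t = X @ beta
    order = np.argsort(t)            # sort responses by the index
    psi_sorted = pava(y[order])      # isotonic fit along the sorted index
    psi = np.empty_like(psi_sorted)
    psi[order] = psi_sorted          # map back to original observation order
    return psi

rng = np.random.default_rng(0)
n, beta0 = 500, np.array([0.6, 0.8])                   # ||beta0|| = 1
X = rng.normal(size=(n, 2))
y = (X @ beta0) ** 3 + rng.normal(scale=0.5, size=n)   # monotone link t -> t^3
psi_hat = monotone_link_fit(X, y, beta0)
```

At the true beta0 the fitted values trace out a monotone step function; in a profile least squares scheme this fit would be recomputed for each candidate beta.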

2.
The focus of this article is modeling the magnitude and duration of monotone periods of log-returns. For this, we propose a new bivariate law in which the probabilistic framework for magnitude and duration is based on the joint distribution of (X, N), where N is geometrically distributed and X is the sum of an identically distributed sequence of inverse-Gaussian random variables independent of N. In this sense, X and N represent the magnitude and duration of the log-returns, respectively, and the magnitude follows an infinite mixture of inverse-Gaussian distributions. This new model is named the bivariate inverse-Gaussian geometric law. We provide statistical properties of the model and explore stochastic representations. In particular, we show that the law is infinitely divisible, and with this an induced Lévy process is proposed and studied in some detail. Estimation of the parameters is performed via maximum likelihood, and Fisher's information matrix is obtained. An empirical illustration with the log-returns of Tyco International stock demonstrates the superior performance of the proposed law compared to an existing model. We expect that the proposed law can serve as a powerful tool in the modeling of log-returns and in other episode analyses, such as water resources management, risk assessment, and civil engineering projects.

3.
We consider Grenander-type estimators for a monotone function λ, obtained as the slope of a concave (convex) estimate of the primitive of λ. Our main result is a central limit theorem for the Hellinger loss, which applies to estimation of a probability density, a regression function, or a failure rate. In the case of density estimation, the limiting variance of the Hellinger loss turns out to be independent of λ.
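In the density case, the Grenander estimator for a non-increasing density on [0, ∞) is the vector of left slopes of the least concave majorant of the empirical distribution function. A minimal sketch, with the hull built by the standard monotone-chain idea; all names are illustrative:

```python
import numpy as np

def grenander(x):
    """Grenander estimator of a non-increasing density on [0, inf):
    slopes of the least concave majorant (LCM) of the ECDF."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    px = np.concatenate([[0.0], x])     # knots (0, x_(1), ..., x_(n))
    py = np.arange(n + 1) / n           # ECDF heights 0, 1/n, ..., 1
    hull = [0]                          # indices of the LCM vertices
    for i in range(1, n + 1):
        # pop the last vertex while it lies on or below the new chord,
        # so that the retained vertices stay concave (slopes decreasing)
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            cross = (px[i1] - px[i0]) * (py[i] - py[i0]) \
                  - (py[i1] - py[i0]) * (px[i] - px[i0])
            if cross >= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    vx, vy = px[hull], py[hull]
    slopes = np.diff(vy) / np.diff(vx)  # piecewise-constant density values
    return vx, slopes

rng = np.random.default_rng(1)
knots, dens = grenander(rng.exponential(size=400))
```

The returned step heights are non-increasing by construction and integrate to one, since the majorant rises from 0 to 1 over the knots.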

4.
In this study, we consider residual-based bootstrap methods to construct confidence intervals for structural impulse response functions in factor-augmented vector autoregressions. In particular, we compare the bootstrap with factor estimation (Procedure A) with the bootstrap without factor estimation (Procedure B). Both procedures are asymptotically valid under the condition √T/N → 0, where N and T are the cross-sectional dimension and the time dimension, respectively. However, Procedure A remains valid even when √T/N → c with 0 ≤ c < ∞, because it accounts for the effect of the factor estimation errors on the impulse response function estimator. Our simulation results suggest that Procedure A achieves more accurate coverage rates than Procedure B, especially when N is much smaller than T. In the monetary policy analysis of Bernanke et al. (Quarterly Journal of Economics, 2005, 120(1), 387–422), the proposed methods can produce statistically different results.

5.
We investigate the prevalence and sources of reporting errors in 30,993 hypothesis tests from 370 articles in three top economics journals. We define reporting errors as inconsistencies between the significance levels reported by means of eye-catchers and p-values calculated from reported statistical values, such as coefficients and standard errors. While 35.8% of the articles contain at least one reporting error, only 1.3% of the investigated hypothesis tests are afflicted by reporting errors. For strong reporting errors, in which either the eye-catcher or the calculated p-value signals statistical significance but the other does not, the error rate is 0.5% of the investigated hypothesis tests, corresponding to 21.6% of the articles having at least one strong reporting error. Our analysis suggests a bias in favor of errors for which eye-catchers signal statistical significance but calculated p-values do not. Survey responses from the respective authors, replications, and exploratory regression analyses indicate some solutions to mitigate the prevalence of reporting errors in future research.

6.
The presence of weak instruments translates into a nearly singular problem in a control function representation. We therefore propose a norm-type regularization to implement the 2SLS estimation as a way of addressing the weak instrument problem. The regularization, with a regularized parameter O(n), allows us to obtain the Rothenberg (1984) type of higher-order approximation of the 2SLS estimator in the weak instrument asymptotic framework. The proposed regularized parameter yields a regularized concentration parameter O(n), which is used as a standardizing factor in the higher-order approximation. We also show that the proposed regularization consequently reduces the finite sample bias. A number of existing estimators that address finite sample bias in the presence of weak instruments, most notably Fuller's limited information maximum likelihood estimator, are compared with our proposed estimator in a simple Monte Carlo exercise.

7.
We compare several representative sophisticated model averaging and variable selection techniques for forecasting stock returns. When estimated traditionally, our results confirm that the simple combination of individual predictors is superior. However, sophisticated models improve dramatically once we combine them with the historical average and take parameter instability into account. An equal-weighted combination of the historical average with the standard multivariate predictive regression estimated using the average-windows method, for example, achieves a statistically significant monthly out-of-sample R² of 1.10% and annual utility gains of 2.34%. We obtain similar gains for predicting future macroeconomic conditions.
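The combination step is simple to reproduce: average the predictive-regression forecast with the historical-average forecast, and score the result with the out-of-sample R² against the historical-average benchmark. The sketch below uses simulated data and an expanding estimation window rather than the paper's average-windows method; all numbers and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
T, burn = 240, 120
x = rng.normal(size=T)                          # a single predictor
r = np.empty(T)
r[0] = rng.normal()
r[1:] = 0.5 * x[:-1] + rng.normal(size=T - 1)   # r_{t+1} = 0.5 x_t + noise

pred_reg, pred_hist, actual = [], [], []
for t in range(burn, T - 1):
    # expanding-window OLS of r_{s} on x_{s-1}, s = 1..t
    Xmat = np.column_stack([np.ones(t), x[:t]])
    b = np.linalg.lstsq(Xmat, r[1:t + 1], rcond=None)[0]
    pred_reg.append(b[0] + b[1] * x[t])         # predictive-regression forecast
    pred_hist.append(r[1:t + 1].mean())         # historical-average benchmark
    actual.append(r[t + 1])

pred_reg, pred_hist, actual = map(np.array, (pred_reg, pred_hist, actual))
pred_comb = 0.5 * (pred_reg + pred_hist)        # equal-weight combination
oos_r2 = 1 - np.sum((actual - pred_comb) ** 2) / np.sum((actual - pred_hist) ** 2)
```

A positive oos_r2 means the combined forecast beats the historical average out of sample, which is the comparison the abstract's 1.10% figure refers to.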

8.
In manufacturing industries, it is often seen that the bilateral specification limits corresponding to a particular quality characteristic are not symmetric with respect to the stipulated target. A unified superstructure of univariate process capability indices was specially designed for processes with asymmetric specification limits. However, since in most practical situations a process consists of a number of inter-related quality characteristics, a multivariate analogue of this superstructure, called CM(u,v), was subsequently developed. In the present paper, we study some properties of CM(u,v), such as its threshold value and its compatibility with the asymmetry in the loss function. We also discuss estimation procedures for plug-in estimators of some of the member indices of CM(u,v). Finally, the superstructure is applied to a numerical example to supplement the theory developed in this article.

9.
This paper considers a continuous three-phase polynomial regression model with two threshold points for dependent data with heteroscedasticity. We assume the model is polynomial of order zero in the middle regime and polynomial of higher orders elsewhere. We denote this model by M_2, which includes models with one or no threshold points, denoted by M_1 and M_0 respectively, as special cases. We provide an ordered iterative least squares (OiLS) method for estimating M_2 and establish the consistency of the OiLS estimators under mild conditions. When the underlying model is M_1 and is (d_0 − 1)th-order differentiable but not d_0th-order differentiable at the threshold point, we further show the O_p(N^{−1/(d_0+2)}) convergence rate of the OiLS estimators, which can be faster than the O_p(N^{−1/(2d_0)}) convergence rate given in Feder when d_0 ≥ 3. We also apply a model-selection procedure for selecting M_κ, κ = 0, 1, 2. When the underlying model exists, we establish the selection consistency under the aforementioned conditions. Finally, we conduct simulation experiments to demonstrate the finite-sample performance of our asymptotic results.

10.
We review some first-order and higher-order asymptotic techniques for M-estimators, and we study their stability in the presence of data contamination. We show that the estimating function ψ and its derivative with respect to the parameter play a central role. We discuss in detail the first-order Gaussian density approximation, the saddlepoint density approximation, the saddlepoint test, the tail area approximation via the Lugannani–Rice formula, and the empirical saddlepoint density approximation (a technique related to the empirical likelihood method). For all these asymptotics, we show that a ψ bounded in the Euclidean norm, together with a bounded parameter derivative of ψ (e.g., in the Frobenius norm), yields stable inference in the presence of data contamination. We motivate and illustrate our findings with theoretical and numerical examples on the benchmark case of the one-dimensional location model.
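The role of a bounded ψ is easiest to see in the benchmark location model: with Huber's ψ, each observation's influence is capped, so contamination moves the estimate far less than it moves the sample mean. A sketch using iteratively reweighted least squares with a MAD scale; the tuning constant 1.345 is the usual choice, but the whole setup is illustrative rather than taken from the paper:

```python
import numpy as np

def huber_location(y, c=1.345, tol=1e-8, max_iter=200):
    """Location M-estimate with Huber's bounded psi, computed by iteratively
    reweighted least squares; scale fixed at the normalized MAD."""
    mu = np.median(y)
    s = np.median(np.abs(y - mu)) / 0.6745      # robust scale estimate
    for _ in range(max_iter):
        r = (y - mu) / s
        # Huber weights psi(r)/r: 1 inside [-c, c], c/|r| outside
        w = np.minimum(1.0, c / np.maximum(np.abs(r), 1e-12))
        mu_new = np.sum(w * y) / np.sum(w)
        if abs(mu_new - mu) < tol:
            return mu_new
        mu = mu_new
    return mu

rng = np.random.default_rng(3)
clean = rng.normal(size=200)
contaminated = np.concatenate([clean, np.full(20, 50.0)])  # ~9% outliers at 50
```

On the contaminated sample the mean is dragged above 4, while the bounded-ψ estimate stays near zero, which is the stability phenomenon the review formalizes.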

11.
Mixed causal–noncausal autoregressive (MAR) models have been proposed to model time series exhibiting nonlinear dynamics. Possible exogenous regressors are typically substituted into the error term to maintain the MAR structure of the dependent variable. We introduce a representation including these covariates called MARX to study their direct impact. The asymptotic distribution of the MARX parameters is derived for a class of non-Gaussian densities. For a Student likelihood, closed-form standard errors are provided. By simulations, we evaluate the MARX model selection procedure using information criteria. We examine the influence of the exchange rate and industrial production index on commodity prices.

12.
This paper proposes a test for the null that, in a cointegrated panel, the long-run correlation between the regressors and the error term is different from zero. As is well known, in such a case the OLS estimator is T-consistent, whereas it is √N T-consistent when there is no endogeneity. Other estimators, such as the FM-OLS, are √N T-consistent irrespective of whether exogeneity is present or not. Using the difference between the former and the latter estimator, we construct a test statistic which diverges under the null of endogeneity, whilst it is bounded under the alternative of exogeneity, and we employ a randomization approach to carry out the test. Monte Carlo evidence shows that the test has the correct size and good power.

13.
State space models play an important role in macroeconometric analysis, and the Bayesian approach has been shown to have many advantages. This paper outlines recent developments in state space modelling applied to macroeconomics using Bayesian methods. We outline the directions of recent research, specifically the problems being addressed and the solutions proposed. After presenting a general form for the linear Gaussian model, we discuss the interpretations and virtues of alternative estimation routines and their outputs. This discussion includes the Kalman filter and smoother, and precision-based algorithms. As the advantages of using large models have become better understood, a focus has developed on dimension reduction and computational advances to cope with high-dimensional parameter spaces. We give an overview of a number of recent advances in these directions. Many models suggested by economic theory are either non-linear or non-Gaussian, or both. We discuss work on the particle filtering approach to such models, as well as other techniques that use various approximations – to either the state and measurement equations or to the full posterior for the states – to obtain draws.

14.
This paper provides the first comprehensive review of the empirical and theoretical literature on the determinants of the elasticity of substitution between capital and labor. Our focus is on the two-input constant elasticity of substitution (CES) production function. We start by presenting four concise observations that summarize the empirical literature on the estimation of the elasticity. Motivated by these observations, the main part of this survey then focuses on potential determinants of capital–labor substitution. We first review several approaches to the microfoundation of production functions in which the elasticity of substitution (EOS) is treated as a purely technological parameter. Second, we outline the construction of an aggregate elasticity of substitution (AES) in a multi-sectoral framework and investigate its dependence on underlying intra- and inter-sectoral substitution. Third, we discuss the influence of the institutional framework on the extent of factor substitution. Overall, this survey highlights that the effective elasticity of substitution (EES), which is typically estimated in empirical studies, is generally not an immutable deep parameter but depends on a multitude of technological, non-technological, and institutional determinants. Based on these insights, the final section identifies a number of potential empirical and theoretical avenues for future research.
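For the two-input CES case that the survey focuses on, the elasticity of substitution is the constant σ = 1/(1 − ρ), which can be checked numerically from its definition σ = d ln(K/L) / d ln(MPL/MPK). A sketch with arbitrary illustrative parameter values:

```python
import numpy as np

def ces(K, L, a=0.4, rho=0.5):
    """Two-input CES production function Y = (a K^rho + (1-a) L^rho)^(1/rho);
    its elasticity of substitution is sigma = 1 / (1 - rho)."""
    return (a * K**rho + (1 - a) * L**rho) ** (1 / rho)

def elasticity_of_substitution(k, a=0.4, rho=0.5, h=1e-6, dk=1e-4):
    """sigma = d ln(K/L) / d ln(MPL/MPK), by central finite differences
    at the capital-labor ratio k (with L normalized to 1)."""
    def log_wage_rental_ratio(k):
        mpk = (ces(k + h, 1.0, a, rho) - ces(k - h, 1.0, a, rho)) / (2 * h)
        mpl = (ces(k, 1.0 + h, a, rho) - ces(k, 1.0 - h, a, rho)) / (2 * h)
        return np.log(mpl / mpk)
    dlnk = np.log(k + dk) - np.log(k - dk)
    dlnw = log_wage_rental_ratio(k + dk) - log_wage_rental_ratio(k - dk)
    return dlnk / dlnw

sigma = elasticity_of_substitution(2.0, rho=0.5)   # analytically 1/(1-0.5) = 2
```

The numerical derivative matches the closed form for any k, illustrating why, for CES, the EOS is a single technological parameter rather than a function of factor proportions.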

15.
In dynamic panel regression, when the variance ratio of individual effects to disturbance is large, the system-GMM estimator will have large asymptotic variance and poor finite sample performance. To deal with this variance ratio problem, we propose a residual-based instrumental variables (RIV) estimator, which uses as the instrument for the level equation the residual from a first-stage regression of Δy_{i,t−1}. The proposed RIV estimator is consistent and asymptotically normal under general assumptions. More importantly, its asymptotic variance is almost unaffected by the variance ratio of individual effects to disturbance. Monte Carlo simulations show that the RIV estimator has better finite sample performance than alternative estimators. The RIV estimator generates less finite sample bias than the difference-GMM, system-GMM, collapsing-GMM and Level-IV estimators in most cases. Under RIV estimation, the variance ratio problem is well controlled, and the empirical distribution of its t-statistic is close to the standard normal distribution for moderate sample sizes.

16.
17.
Univariate continuous distributions are one of the fundamental components on which statistical modelling, ancient and modern, frequentist and Bayesian, multi-dimensional and complex, is based. In this article, I review and compare some of the main general techniques for providing families of typically unimodal distributions on ℝ with one or two, or possibly even three, shape parameters controlling skewness and/or tailweight, in addition to their all-important location and scale parameters. One important and useful family comprises the 'skew-symmetric' distributions brought to prominence by Azzalini. As these are covered in considerable detail elsewhere in the literature, I focus more on their complements and competitors. Principal among these are distributions formed by transforming random variables, by what I call 'transformation of scale' – including two-piece distributions – and by probability integral transformation of non-uniform random variables. I also treat briefly the issues of multivariate extension, of distributions on subsets of ℝ, and of distributions on the circle. The review and comparison is not comprehensive, necessarily being selective and therefore somewhat personal. © 2014 The Authors. International Statistical Review © 2014 International Statistical Institute
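One of the two-piece competitors mentioned above is easy to simulate: use scale s1 to the left of the mode and s2 to the right, picking the side with probability proportional to its scale so that the density remains continuous at the mode. A sketch of the two-piece (split) normal with arbitrary illustrative parameters:

```python
import numpy as np

def rtwo_piece_normal(size, mu=0.0, s1=1.0, s2=3.0, rng=None):
    """Two-piece normal: half-normal of scale s1 reflected to the left of mu,
    half-normal of scale s2 to the right. Choosing the right side with
    probability s2 / (s1 + s2) makes the density continuous at mu."""
    rng = rng or np.random.default_rng()
    z = np.abs(rng.normal(size=size))            # half-normal magnitudes
    right = rng.random(size) < s2 / (s1 + s2)    # side indicator
    return np.where(right, mu + s2 * z, mu - s1 * z)

x = rtwo_piece_normal(200_000, rng=np.random.default_rng(5))
# E[X] = mu + sqrt(2/pi) * (s2 - s1), and P(X > mu) = s2 / (s1 + s2)
```

With s2 > s1 the draws are right-skewed: here three quarters of the mass lies above the mode, and the mean sits well to its right.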

18.
Finding a suitable representation of multivariate data is fundamental in many scientific disciplines. Projection pursuit (PP) aims to extract interesting 'non-Gaussian' features from multivariate data, and tends to be computationally intensive even when applied to data of low dimension. In high-dimensional settings, recent work (Bickel et al., 2018) on PP gives an asymptotic characterization of, and conjectures about, the feasible projections as the dimension grows with the sample size. To make PP practically useful and to gain theoretical insight into it, data-analytic tools for evaluating the behaviour of PP in high dimensions are increasingly desirable but remain underexplored in the literature. This paper focuses on developing computationally fast and effective approaches, central to finite-sample studies, for (i) visualizing the feasibility of PP in extracting features from high-dimensional data, as compared with alternative methods such as PCA and ICA, and (ii) assessing the plausibility of PP in cases where asymptotic studies are lacking or unavailable, with the goal of better understanding the practicality, limitations and challenges of PP in the analysis of large data sets.

19.
Several exact inference procedures for logistic regression require the simulation of a 0-1 dependent vector according to its conditional distribution, given the sufficient statistics for some nuisance parameters. In this work, this is viewed as a sampling problem involving a population of n units, unequal selection probabilities and balancing constraints. The basis for this reformulation of exact inference is a proposition deriving the limit, as n goes to infinity, of the conditional distribution of the dependent vector given the logistic regression sufficient statistics. We propose to sample from this distribution using the cube sampling algorithm. The interest of this approach to exact inference is illustrated by tackling new problems. First, it allows exact inference to be carried out with continuous covariates. It is also useful for investigating a partial correlation between several 0-1 vectors. This is illustrated in an example dealing with presence-absence data in ecology.

20.
This paper provides a characterisation of the degree of cross-sectional dependence in a two-dimensional array, {x_it, i = 1, 2, ..., N; t = 1, 2, ..., T}, in terms of the rate at which the variance of the cross-sectional average of the observed data varies with N. Under certain conditions this is equivalent to the rate at which the largest eigenvalue of the covariance matrix of x_t = (x_1t, x_2t, ..., x_Nt)′ rises with N. We represent the degree of cross-sectional dependence by α, which we refer to as the 'exponent of cross-sectional dependence', and define it through the standard deviation of the simple cross-sectional average x̄_t of the x_it, with Std(x̄_t) = O(N^{α−1}). We propose bias-corrected estimators, derive their asymptotic properties for α > 1/2 and consider a number of extensions. We include a detailed Monte Carlo simulation study supporting the theoretical results. We also provide a number of empirical applications investigating the degree of inter-linkages of real and financial variables in the global economy. Copyright © 2015 John Wiley & Sons, Ltd.
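The definition can be illustrated by simulation: if only ⌊N^α⌋ of the N units load on a common factor, the standard deviation of the cross-sectional average scales like N^(α−1) up to an O(N^(−1/2)) idiosyncratic term, so a log-log regression of Std(x̄) on N approximately recovers α. A sketch (all settings illustrative; this is not the paper's bias-corrected estimator):

```python
import numpy as np

rng = np.random.default_rng(11)
alpha, T = 0.8, 2_000

def cs_average_std(N):
    """Std over t of the cross-sectional average when only floor(N^alpha)
    of the N units load on a common factor."""
    m = int(N ** alpha)
    loadings = np.zeros(N)
    loadings[:m] = 1.0                           # m 'loaded' units
    f = rng.normal(size=T)                       # common factor
    x = np.outer(f, loadings) + rng.normal(size=(T, N))  # factor + noise
    return x.mean(axis=1).std()

Ns = np.array([100, 200, 400, 800, 1600])
stds = np.array([cs_average_std(N) for N in Ns])
slope = np.polyfit(np.log(Ns), np.log(stds), 1)[0]   # approximately alpha - 1
```

Adding one to the fitted slope gives a crude estimate of α; the idiosyncratic term biases it slightly, which is one reason the paper works with bias-corrected estimators.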


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号