Similar literature
20 similar documents found (search time: 31 ms)
1.
Summary  This paper is an exposition of the model and techniques of factor analysis, a method of studying the covariance matrix of several properties on the basis of a sample covariance matrix of independent observations on n individuals. The indeterminacy of the basis of the so-called factor space and several possibilities of interpretation are discussed. The scale-invariant maximum likelihood estimation of the parameters of the assumed normal distribution, which also provides a test of the dimension of the factor space, is compared with the customary but unjustified treatment of the estimation problem by means of component analysis or modifications of it. The prohibitively slow convergence of the iterative procedures recommended until now can be remedied by steepest-ascent methods together with Aitken's acceleration method. An estimate of the original observations under the assumed model, to be compared with the data, is also given.
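As a concrete companion to the exposition, the covariance structure being fitted is Sigma = Lambda Lambda' + Psi. Below is a minimal iterated principal-factor sketch of that fit; this is a simple classical algorithm for illustration, not the paper's scale-invariant ML method, and all dimensions, seeds, and iteration counts are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate n observations on p variables from a k-factor model:
# x = Lambda f + e,  Cov(x) = Lambda Lambda' + Psi (Psi diagonal).
n, p, k = 500, 6, 2
Lambda_true = rng.normal(size=(p, k))
Psi_true = np.diag(rng.uniform(0.5, 1.0, p))
F = rng.normal(size=(n, k))
E = rng.normal(size=(n, p)) @ np.sqrt(Psi_true)
X = F @ Lambda_true.T + E

R = np.cov(X, rowvar=False)              # sample covariance matrix

# Iterated principal-factor algorithm: alternate between
# (1) eigendecomposition of R - Psi to update Lambda,
# (2) Psi = diag(R - Lambda Lambda').
psi = np.diag(R).copy() * 0.5
for _ in range(200):
    w, V = np.linalg.eigh(R - np.diag(psi))
    idx = np.argsort(w)[::-1][:k]        # top-k eigenpairs
    Lam = V[:, idx] * np.sqrt(np.maximum(w[idx], 0))
    psi = np.maximum(np.diag(R - Lam @ Lam.T), 1e-6)

fit_err = np.linalg.norm(R - (Lam @ Lam.T + np.diag(psi)))
```

Note that Lam is identified only up to an orthogonal rotation, which is precisely the indeterminacy of the factor-space basis the abstract discusses.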

2.
This paper studies the efficient estimation of large-dimensional factor models with both time and cross-sectional dependence, assuming (N,T) separability of the covariance matrix. The asymptotic distribution of the estimator of the factor and factor-loading space under factor stationarity is derived and compared with that of the principal component (PC) estimator. The paper also considers the case where the factors exhibit a unit root. We provide feasible estimators and show in a simulation study that they are more efficient than the PC estimator in finite samples. In an application, the estimation procedure is employed to estimate the Lee–Carter model and forecast life expectancy. The Dutch gender gap is explored, and the relationship between life expectancy and the level of economic development is examined in a cross-country comparison.
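The PC estimator that serves as the benchmark above can be sketched in a few lines: factors are estimated as the scaled top eigenvectors of XX'. Since the factor space is identified only up to rotation, recovery is judged by a regression R-squared. Sizes, seed, and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Panel X (T x N) generated from r common factors plus idiosyncratic noise.
T, N, r = 200, 50, 2
F_true = rng.normal(size=(T, r))
L_true = rng.normal(size=(N, r))
X = F_true @ L_true.T + 0.5 * rng.normal(size=(T, N))

# Principal component (PC) estimator: the factors are sqrt(T) times the
# top-r eigenvectors of X X'; the loadings follow by cross-sectional regression.
w, V = np.linalg.eigh(X @ X.T)
F_hat = np.sqrt(T) * V[:, np.argsort(w)[::-1][:r]]
L_hat = X.T @ F_hat / T

# Measure recovery of the factor space by regressing the true factors
# on the estimated ones (rotation-invariant R^2).
beta, *_ = np.linalg.lstsq(F_hat, F_true, rcond=None)
resid = F_true - F_hat @ beta
r2 = 1 - (resid**2).sum() / (F_true**2).sum()
```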

3.
The classical exploratory factor analysis (EFA) finds estimates for the factor loadings matrix and the matrix of unique factor variances which give the best fit to the sample correlation matrix with respect to some goodness-of-fit criterion. Common factor scores can be obtained as a function of these estimates and the data. Alternatively, the EFA model can be fitted directly to the data, which yields factor loadings and common factor scores simultaneously. Recently, new algorithms were introduced for the simultaneous least squares estimation of all EFA model unknowns. The new methods are based on the numerical procedure for singular value decomposition of matrices and work equally well when the number of variables exceeds the number of observations. This paper provides an expository review of methods for simultaneous parameter estimation in EFA. The methods are illustrated on Harman's five socio-economic variables data and a high-dimensional data set from genome research.
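The core of the simultaneous approach, fitting scores and loadings to the data in one least squares problem, can be sketched with a truncated SVD, which minimizes ||X - F A'|| over rank-k fits and works even when p > n. This is a simplified sketch of the SVD-based idea, not the authors' full algorithm (which also handles unique factors); dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# "High-dimensional" setting: more variables than observations (p > n).
n, p, k = 30, 100, 3
scores_true = rng.normal(size=(n, k))
load_true = rng.normal(size=(p, k))
X = scores_true @ load_true.T + 0.3 * rng.normal(size=(n, p))
X -= X.mean(axis=0)                      # centre the data

# Simultaneous least squares fit of scores F and loadings A in X ~ F A'
# via truncated SVD: the rank-k SVD minimises ||X - F A'||_F.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
F_hat = np.sqrt(n) * U[:, :k]            # scores scaled so F'F/n = I
A_hat = Vt[:k].T * s[:k] / np.sqrt(n)

fit = np.linalg.norm(X - F_hat @ A_hat.T)
full = np.linalg.norm(X)
```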

4.
The methods listed in the title are compared by means of a simulation study and a real-world application. The aspects compared via simulations are the performance of the tests for the cointegrating rank and the quality of the estimated cointegrating space. The subspace algorithm method, formulated in the state space framework and thus applicable to vector autoregressive moving average (VARMA) processes, performs at least comparably to the Johansen method. Both the Johansen procedure and the subspace algorithm cointegration analysis perform significantly better than Bierens' method. The real-world application is an investigation of the long-run properties of the one-sector neoclassical growth model for Austria. The results do not fully support the implications of the model with respect to cointegration. Furthermore, the results differ greatly between the different methods. The amount of variability depends strongly upon the number of variables considered, and huge differences occur for the full system with six variables. Therefore we conclude that the results of such applications with about five or six variables and 100 observations, which are typical in the applied literature, should be interpreted with more caution than is commonly done.

5.
This article considers the asymptotic estimation theory for the proportion in randomized response surveys using uncertain prior information (UPI) about the true proportion parameter, which is assumed to be available on the basis of some sort of realistic conjecture. Three estimators, namely the unrestricted estimator, the shrinkage restricted estimator, and an estimator based on a preliminary test, are proposed. Their asymptotic mean squared errors are derived and compared. The relative dominance picture of the estimators is presented.
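The three estimators can be illustrated on Warner's classic randomized response design, where P(yes) = p*pi + (1-p)(1-pi). The shrinkage weight c, the prior guess pi0, and the use of Warner's design are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Warner's randomized response design: with probability p the respondent
# answers the sensitive question, otherwise its complement, so
# P(yes) = p*pi + (1-p)*(1-pi).
p, pi_true, n = 0.7, 0.30, 2000
answers_direct = rng.random(n) < pi_true
use_sensitive = rng.random(n) < p
yes = np.where(use_sensitive, answers_direct, ~answers_direct)

lam_hat = yes.mean()
pi_unrestricted = (lam_hat - (1 - p)) / (2 * p - 1)

# Shrinkage estimator toward an uncertain prior guess pi0 (the weight c
# and the guess pi0 are illustrative choices, not the paper's).
pi0, c = 0.25, 0.5
pi_shrink = pi0 + c * (pi_unrestricted - pi0)

# Preliminary-test estimator: keep pi0 unless the data reject it.
se = np.sqrt(lam_hat * (1 - lam_hat) / n) / abs(2 * p - 1)
pi_pretest = pi_unrestricted if abs(pi_unrestricted - pi0) > 1.96 * se else pi0
```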

6.
We propose and study the finite-sample properties of a modified version of the self-perturbed Kalman filter of Park and Jun (Electronics Letters 1992; 28: 558–559) for the online estimation of models subject to parameter instability. The perturbation term in the updating equation of the state covariance matrix is weighted by the estimate of the measurement error variance. This avoids the calibration of a design parameter, as the perturbation term is scaled by the amount of uncertainty in the data. It is shown by Monte Carlo simulations that this perturbation method is associated with good tracking of the dynamics of the parameters compared with other online algorithms and with classical and Bayesian methods. The standardized self-perturbed Kalman filter is adopted to forecast the equity premium on the S&P 500 index under several model specifications and to determine the extent to which realized variance can be used to predict excess returns. Copyright © 2016 John Wiley & Sons, Ltd.
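The key idea, adding a perturbation to the state variance scaled by a running estimate of the measurement error variance rather than a calibrated constant, can be sketched on a scalar time-varying regression. This is our reading of the idea with illustrative constants, not the authors' exact standardized filter.

```python
import numpy as np

rng = np.random.default_rng(4)

# Time-varying regression y_t = x_t * beta_t + eps_t with a drifting beta.
T = 300
x = rng.normal(size=T)
beta = np.cumsum(0.05 * rng.normal(size=T)) + 1.0
y = x * beta + 0.3 * rng.normal(size=T)

# Kalman filter with a self-perturbation term: instead of a calibrated
# state-noise constant, add to the state variance a term weighted by a
# running estimate of the measurement error variance.
b_hat, P, sigma2 = 0.0, 1.0, 1.0
track = np.empty(T)
for t in range(T):
    e = y[t] - x[t] * b_hat                 # prediction error
    S = x[t] * P * x[t] + sigma2            # innovation variance
    K = P * x[t] / S                        # Kalman gain
    b_hat += K * e
    P = (1 - K * x[t]) * P
    sigma2 = 0.99 * sigma2 + 0.01 * e**2    # measurement-variance estimate
    P += 0.01 * sigma2                      # self-perturbation, scaled
    track[t] = b_hat

rmse = np.sqrt(np.mean((track[50:] - beta[50:])**2))
```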

7.
Simultaneous optimal estimation in linear mixed models is considered. A necessary and sufficient condition is presented for the least squares estimator of the fixed effects and the analysis of variance estimator of the variance components to be of uniformly minimum variance simultaneously in a general variance components model. The condition is that the matrix obtained by orthogonally projecting the covariance matrix onto the orthogonal complement of the column space of the design matrix is symmetric, each eigenvalue of that matrix is a linear combination of the variance components, and the number of its distinct eigenvalues equals the number of variance components. Under this condition, uniformly optimal unbiased tests and uniformly most accurate unbiased confidence intervals are constructed for the parameters of interest. A necessary and sufficient condition is also given for the equivalence of several common estimators of variance components. Two examples of their application are given.

8.
In this paper we introduce the Random Recursive Partitioning (RRP) matching method. RRP generates a proximity matrix which may be useful in econometric applications such as average treatment effect estimation. RRP is a Monte Carlo method that randomly generates non-empty recursive partitions of the data and evaluates the proximity between two observations as the empirical frequency with which they fall in the same cell of these random partitions over all Monte Carlo replications. From the proximity matrix it is possible to derive both graphical and analytical tools to evaluate the extent of the common support between data sets. The RRP method is "honest" in that it does not match observations "at any cost": if data sets are separated, the method clearly states it. The match obtained with RRP is invariant under monotonic transformation of the data. Average treatment effect estimators derived from the proximity matrix seem to be competitive compared with more commonly used estimators. The RRP method does not require a particular structure of the data, and for this reason it can be applied when distances such as the Mahalanobis or Euclidean distance are not suitable, in the presence of missing data, or when the estimated propensity score is too sensitive to model specification. Copyright © 2008 John Wiley & Sons, Ltd.  相似文献
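The proximity-matrix construction can be sketched directly: draw many random recursive partitions and count how often each pair of observations lands in the same cell. The splitting rule below (random coordinate, random observed cut point, stop at a minimum cell size) is a sketch of the RRP idea, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(5)

# A small covariate matrix (e.g. treated and control units stacked).
X = rng.normal(size=(40, 3))

def random_partition(X, rng, min_size=5):
    """One random recursive partition: repeatedly split a cell on a random
    coordinate at a random observed value until cells are small."""
    cells = [np.arange(len(X))]
    out = []
    while cells:
        idx = cells.pop()
        if len(idx) <= min_size:
            out.append(idx)
            continue
        j = rng.integers(X.shape[1])
        cut = rng.choice(X[idx, j])
        left, right = idx[X[idx, j] <= cut], idx[X[idx, j] > cut]
        if len(left) == 0 or len(right) == 0:
            out.append(idx)          # degenerate split: finalise the cell
        else:
            cells += [left, right]
    return out

# Proximity = empirical frequency of landing in the same cell across
# B Monte Carlo replications.
B, n = 200, len(X)
prox = np.zeros((n, n))
for _ in range(B):
    for cell in random_partition(X, rng):
        prox[np.ix_(cell, cell)] += 1
prox /= B
```

Because splits compare covariates only with observed cut values, the resulting proximities are invariant under monotone transformations of each covariate, matching the invariance property claimed in the abstract.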

9.
In this paper we consider the exact D-optimal designs for estimation of the unknown parameters in the two-factor main effects model, with each factor at only two levels and with autocorrelated errors. The vector of the n random errors in the observed responses is assumed to follow a first-order autoregressive (AR(1)) model. The exact D-optimal designs seek the optimal combinations of the design levels as well as the optimal run orders, so that the determinant of the information matrix of the BLUEs for the unknown parameters is maximized. Bora-Senta and Moyssiadis (1999) gave some conjectures about the exact D-optimal designs based on their experience of several exhaustive searches. In this paper their conjectures are partially proved to be true. Received: January 2003 / Accepted: October 2003. Partially supported by the National Science Council of Taiwan, R.O.C., under grants NSC 91-2115-M-008-013 and NSC 89-2118-M-110-003.
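For a small number of runs the exhaustive search mentioned above is easy to reproduce: enumerate run orders and maximize det(X'V⁻¹X) under the AR(1) error covariance. The choice n = 4 (each level combination used once) and rho = 0.5 are illustrative assumptions.

```python
import numpy as np
from itertools import permutations

# Two-factor main-effects model, each factor at levels +/-1; n = 4 runs use
# each level combination once, and we search all run orders for the one
# maximising det(X' V^{-1} X) under AR(1) errors (a brute-force sketch).
rho = 0.5
combos = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
n = 4
V = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
Vinv = np.linalg.inv(V)

def d_value(order):
    X = np.array([[1, a, b] for a, b in order], dtype=float)
    return np.linalg.det(X.T @ Vinv @ X)

dets = {order: d_value(order) for order in permutations(combos)}
best_order = max(dets, key=dets.get)
best_det = dets[best_order]
```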

10.
Despite the solid theoretical foundation on which the gravity model of bilateral trade is based, empirical implementation requires several assumptions which do not follow directly from the underlying theory. First, unobserved trade costs are assumed to be a (log‐)linear function of observables. Second, the effects of trade costs on trade flows are assumed to be constant across country pairs. Maintaining consistency with the underlying theory, but relaxing these assumptions, we estimate gravity models—in levels and logs—using two data sets via nonparametric methods. The results are striking. Despite the added flexibility of the nonparametric models, parametric models based on these assumptions offer equally or more reliable in‐sample predictions and out‐of‐sample forecasts in the majority of cases, particularly in the levels model. Moreover, formal statistical tests fail to reject either parametric functional form. Thus, concerns in the gravity literature over functional form appear unwarranted, and estimation of the gravity model in levels is recommended. Copyright © 2008 John Wiley & Sons, Ltd.
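The parametric benchmark in the comparison above is the log-linear gravity equation. A textbook OLS sketch on simulated country-pair data (coefficients, scales, and noise level are illustrative, and this is not the paper's nonparametric estimator):

```python
import numpy as np

rng = np.random.default_rng(12)

# Log-linear gravity equation:
# log(trade_ij) = b0 + b1*log(GDP_i) + b2*log(GDP_j) + b3*log(dist_ij) + e
n = 500
log_gdp_i = rng.normal(10, 1, n)
log_gdp_j = rng.normal(10, 1, n)
log_dist = rng.normal(8, 0.5, n)
b_true = np.array([-5.0, 1.0, 1.0, -1.0])
Z = np.column_stack([np.ones(n), log_gdp_i, log_gdp_j, log_dist])
log_trade = Z @ b_true + 0.3 * rng.normal(size=n)

# OLS on logs recovers the elasticities.
b_hat = np.linalg.lstsq(Z, log_trade, rcond=None)[0]
```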

11.
Quasi-maximum-likelihood (QML) estimation of a model combining cointegration in the conditional mean and rare large shocks (outliers) with a factor structure in the innovations is studied. The goal is not only to robustify inference on the conditional-mean parameters, but also to find regularities and conduct inference on the instantaneous and long-run effect of the large shocks. Given the cointegration rank and the factor order, χ² asymptotic inference is obtained for the cointegration vectors, the short-run parameters, and the direction of each column of both the factor loading matrix and the matrix of long-run impacts of the large shocks. Large shocks, whose location is assumed unknown a priori, can be detected and classified consistently into the factor components.

12.
This paper introduces purchase-distance and sales-distance matrices, which capture inter-industry productivity linkages, into a spatial autoregressive model in order to study the effect of inter-industry productivity linkages on the growth of China's industrial productivity. To deal with the heteroskedasticity and the row-standardization issues that arise when socio-economic distance matrices are introduced, the model is estimated by a spatial GMM method. The results show that inter-industry productivity linkages have a significantly positive effect on the growth of China's industrial productivity, and that in resource-intensive, labour-intensive and capital-intensive industries the effect of inter-industry productivity linkages on industrial productivity growth is more robust than that of other factors. Moreover, the linkage effect embodied in the sales-distance matrix is on the whole larger than the corresponding effect embodied in the purchase-distance matrix.
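The spatial autoregressive model with an IV/GMM-type estimator can be sketched as a spatial two-stage least squares with instruments [X, WX, W²X]. This is a generic spatial-2SLS sketch on simulated data with a random row-standardized weight matrix, not the authors' exact spatial GMM with purchase- and sales-distance matrices.

```python
import numpy as np

rng = np.random.default_rng(6)

# Spatial autoregressive (SAR) model y = rho*W*y + X*beta + eps.
n = 200
W = rng.random((n, n)) * (rng.random((n, n)) < 0.05)   # sparse random links
np.fill_diagonal(W, 0)
W /= np.maximum(W.sum(axis=1, keepdims=True), 1e-12)   # row-standardise

rho_true, beta_true = 0.4, 2.0
X = rng.normal(size=(n, 1))
eps = rng.normal(size=n)
y = np.linalg.solve(np.eye(n) - rho_true * W, X[:, 0] * beta_true + eps)

# Spatial 2SLS: instrument the endogenous spatial lag Wy with [X, WX, W^2X].
Z = np.column_stack([X, W @ X, W @ W @ X])             # instruments
D = np.column_stack([W @ y, X])                        # regressors [Wy, X]
Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)                 # projection onto Z
theta = np.linalg.solve(D.T @ Pz @ D, D.T @ Pz @ y)
rho_hat, beta_hat = theta[0], theta[1]
```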

13.
The value of a share is given by the dividend discount model as a simple function of future dividends; but the actual determination of the share price is rarely based upon the direct estimation of these future dividends. A ranking of the valuation models used by analysts and fund managers shows a preference for ‘unsophisticated’ valuation using, for example, the dividend yield rather than the dividend discount model. This finding is shown to depend upon the practical difficulty of using currently-available information to forecast future cash flows. This difficulty limits the quantitative basis of valuations to short forecast horizons, while the subjective, qualitative estimation of terminal value assumes great importance. Crucially, both analysts and fund managers use their own assessment of management quality to underpin the estimation of terminal value, on the basis that superior quality causes outperformance and that, whereas management quality can be assessed now, future performance itself is unobservable. Linked with this and with information asymmetry, valuation is a dynamic, company-specific process, focused on personal communication with management and embodying ongoing signalling and implicit contracting, using both dividends and other variables. This method of valuation causes formal valuation models such as the dividend yield to play only a limited role. They offer a benchmark of relative price differences, which serves as a basis from which to conduct subjective, company-specific analysis and to make investment decisions; but valuation models are not used exclusively, in themselves, to value shares.

14.
This paper summarizes the standard approaches to constructing spatial weighting matrices, together with construction methods newly proposed in the field, and quantifies, at both the macro and the micro level, the effect of the spatial weighting matrix specification on the efficiency of spatial panel parameter estimation and on the identification of spatial effects. The conclusions are as follows. At the macro level, as the complexity of the spatial weighting matrix increases, the efficiency and consistency of parameter estimation improve significantly for both fixed-effects and random-effects spatial panel models; generalized method of moments estimation outperforms quasi-maximum likelihood estimation; and under a composite spatial weighting matrix the Lagrange multiplier test has higher power. At the micro level, the regression results show that the four types of spatial weighting matrix considered lead to significantly different identification of the spatial boundaries of firm total factor productivity growth induced by agglomeration externalities, with the composite spatial weighting matrix again the most effective.
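Two of the standard constructions the abstract refers to, a k-nearest-neighbour contiguity matrix and an inverse-distance matrix, followed by the usual row standardization, can be sketched as follows; coordinates and k are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Coordinates of n spatial units; build two common weighting matrices and
# row-standardise them.
n, k = 50, 4
coords = rng.random((n, 2))
d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
np.fill_diagonal(d, np.inf)              # no unit is its own neighbour

# k-nearest-neighbour contiguity matrix
W_knn = np.zeros((n, n))
nearest = np.argsort(d, axis=1)[:, :k]
for i in range(n):
    W_knn[i, nearest[i]] = 1.0

# Inverse-distance matrix
W_inv = 1.0 / d
np.fill_diagonal(W_inv, 0.0)

# Row standardisation: each row sums to one, so W @ y is a weighted
# average of neighbours' values.
W_knn /= W_knn.sum(axis=1, keepdims=True)
W_inv /= W_inv.sum(axis=1, keepdims=True)
```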

15.
We propose a new conditionally heteroskedastic factor model, the GICA-GARCH model, which combines independent component analysis (ICA) and multivariate GARCH (MGARCH) models. This model assumes that the data are generated by a set of underlying independent components (ICs) that capture the co-movements among the observations, which are assumed to be conditionally heteroskedastic. The GICA-GARCH model separates the estimation of the ICs from their fitting with a univariate ARMA-GARCH model. Here, we will use two ICA approaches to find the ICs: the first estimates the components, maximizing their non-Gaussianity, while the second exploits the temporal structure of the data. After estimating and identifying the common ICs, we fit a univariate GARCH model to each of them in order to estimate their univariate conditional variances. The GICA-GARCH model then provides a new framework for modelling the multivariate conditional heteroskedasticity in which we can explain and forecast the conditional covariances of the observations by modelling the univariate conditional variances of a few common ICs. We report some simulation experiments to show the ability of ICA to discover leading factors in a multivariate vector of financial data. Finally, we present an empirical application to the Madrid stock market, where we evaluate the forecasting performances of the GICA-GARCH and two additional factor GARCH models: the orthogonal GARCH and the conditionally uncorrelated components GARCH.
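The univariate stage of the approach fits a GARCH model to each extracted component. A GARCH(1,1) conditional-variance recursion for one simulated component looks like this; the parameters omega, alpha, beta are illustrative, not estimated, and the ICA extraction step is omitted.

```python
import numpy as np

rng = np.random.default_rng(8)

# GARCH(1,1): h_t = omega + alpha*r_{t-1}^2 + beta*h_{t-1},
# r_t = sqrt(h_t) * z_t with z_t standard normal.
T = 1000
omega, alpha, beta = 0.05, 0.10, 0.85
h = np.empty(T)
r = np.empty(T)
h[0] = omega / (1 - alpha - beta)      # unconditional variance (= 1 here)
r[0] = np.sqrt(h[0]) * rng.normal()
for t in range(1, T):
    h[t] = omega + alpha * r[t - 1]**2 + beta * h[t - 1]
    r[t] = np.sqrt(h[t]) * rng.normal()
```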

16.
Luc Pronzato, Metrika (2010) 71(2): 219–238
We study the consistency of parameter estimators in adaptive designs generated by a one-step ahead D-optimal algorithm. We show that when the design space is finite, under mild conditions the least-squares estimator in a nonlinear regression model is strongly consistent and the information matrix evaluated at the current estimated value of the parameters strongly converges to the D-optimal matrix for the unknown true value of the parameters. A similar property is shown to hold for maximum-likelihood estimation in Bernoulli trials (dose–response experiments). Some examples are presented.
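A one-step-ahead D-optimal rule on a finite design space picks, at each step, the candidate point that most increases the determinant of the information matrix. The sketch below uses a linear model, where the rule does not depend on the current estimate (in the nonlinear case of the paper, the information matrix is evaluated at the current parameter estimate); grid, regression function, and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

# Linear model E[y] = f(x)' theta with f(x) = (1, x, x^2) on a finite grid.
design_space = np.linspace(-1, 1, 21)
f = lambda x: np.array([1.0, x, x**2])
theta_true = np.array([1.0, -2.0, 0.5])

M = 1e-6 * np.eye(3)                     # regularised information matrix
XtY = np.zeros(3)
for step in range(100):
    # one-step-ahead D-optimality: maximise det(M + f(x) f(x)')
    gains = [np.linalg.det(M + np.outer(f(x), f(x))) for x in design_space]
    x = design_space[int(np.argmax(gains))]
    y = f(x) @ theta_true + 0.1 * rng.normal()
    M += np.outer(f(x), f(x))
    XtY += f(x) * y

theta_hat = np.linalg.solve(M, XtY)      # least-squares estimate
```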

17.
In this paper we consider estimating an approximate factor model in which candidate predictors are subject to sharp spikes such as outliers or jumps. Given that these sharp spikes are assumed to be rare, we formulate the estimation problem as a penalized least squares problem by imposing a norm penalty function on those sharp spikes. Such a formulation allows us to disentangle the sharp spikes from the common factors and estimate them simultaneously. Numerical values of the estimates can be obtained by solving a principal component analysis (PCA) problem and a one-dimensional shrinkage estimation problem iteratively. In addition, it is easy to incorporate methods for selecting the number of common factors in the iterations. We compare our method with PCA by conducting simulation experiments in order to examine their finite-sample performances. We also apply our method to the prediction of important macroeconomic indicators in the U.S., and find that it can deliver performances that are comparable to those of the PCA method.  相似文献
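The iteration described above, alternating a PCA step with a one-dimensional shrinkage (soft-thresholding) step, can be sketched on simulated data with a low-rank common component plus rare large spikes. The threshold lam is an illustrative tuning choice, and this is a generic sketch of the PCA-plus-shrinkage idea rather than the paper's exact penalty.

```python
import numpy as np

rng = np.random.default_rng(10)

# Data = low-rank common component + sparse "sharp spikes" + small noise.
T, N, r = 100, 30, 2
L_true = rng.normal(size=(T, r)) @ rng.normal(size=(r, N))
S_true = np.zeros((T, N))
spikes = rng.random((T, N)) < 0.02
S_true[spikes] = 10.0 * rng.choice([-1, 1], size=spikes.sum())
X = L_true + S_true + 0.1 * rng.normal(size=(T, N))

# Alternate: (1) rank-r PCA fit of X - S, (2) entrywise soft-thresholding
# of the residual to update the spike estimate S.
lam = 2.0
S = np.zeros_like(X)
for _ in range(30):
    U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
    L = (U[:, :r] * s[:r]) @ Vt[:r]
    resid = X - L
    S = np.sign(resid) * np.maximum(np.abs(resid) - lam, 0.0)

spike_recall = (np.abs(S[spikes]) > 0).mean()   # spikes correctly flagged
```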

18.
Cross-validation is a method used to estimate the expected prediction error of a model. Such estimates may be of interest in themselves, but their use for model selection is more common. Unfortunately, cross-validation is viewed as being computationally expensive in many situations. In this paper it is shown that the h-block cross-validation function for least-squares based estimators can be expressed in a form which enormously reduces the amount of calculation required. The standard approach is of O(T²), where T denotes the sample size, while the proposed approach is of O(T) and yields identical numerical results. The proposed approach has widespread potential application, ranging from the estimation of expected prediction error to least-squares-based model specification to the selection of the series order for nonparametric series estimation. The technique is valid for general stationary observations. Simulation results and applications are considered. © 1997 by John Wiley & Sons, Ltd.
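The flavour of the result is easiest to see in the ordinary leave-one-out case (h = 0), where the cross-validated residual of a least-squares fit equals e_i / (1 - h_ii), with h_ii the leverage; one full fit then replaces T refits, with identical numerical results. The general h-block formula of the paper is analogous but not shown here.

```python
import numpy as np

rng = np.random.default_rng(11)

# Linear regression with T observations and k coefficients.
T, k = 120, 3
X = np.column_stack([np.ones(T), rng.normal(size=(T, k - 1))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=T)

# O(T) shortcut: one fit, then scale residuals by 1 - leverage.
beta = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ beta
H_diag = np.einsum('ij,jk,ik->i', X, np.linalg.inv(X.T @ X), X)
cv_fast = e / (1 - H_diag)

# O(T^2) brute force: refit T times, each time deleting one observation.
cv_slow = np.empty(T)
for i in range(T):
    mask = np.arange(T) != i
    b_i = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    cv_slow[i] = y[i] - X[i] @ b_i
```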

19.
In this paper, we use the local influence method to study a vector autoregressive model under Student's t-distributions. We present the maximum likelihood estimators and the information matrix. We establish the normal curvature diagnostics for the vector autoregressive model under three usual perturbation schemes for identifying possible influential observations. The effectiveness of the proposed diagnostics is examined in a simulation study, followed by a data analysis in which the model is fitted to the weekly log returns of Chevron stock and the Standard & Poor's 500 Index.

20.
Measuring technical efficiency in European railways: A panel data approach
We estimate a factor requirement frontier for European railways using a panel data approach in which technical efficiency is assumed to be endogenously determined. This approach has two main outcomes: on the one hand, it allows the identification of factors influencing technical efficiency; on the other, it allows the estimation of alternative efficiency indicators free of these influences. In the case under study, particular attention is devoted to an autonomy indicator representing the managerial freedom from public authorities experienced by firms, which appears to be positively correlated with technical efficiency.

