Similar Articles
20 similar articles found (search took 15 ms).
1.
Quality & Quantity - The purpose of this paper is to provide a nonparametric kernel method to estimate nonlinear structural equation models involving the functional effects between the latent...

2.
The practical relevance of several concepts of exogeneity of treatments for the estimation of causal parameters from observational data is discussed. We show that the traditional concepts, such as strong ignorability and weak and super-exogeneity, are too restrictive if interest lies in average effects (i.e. not in distributional effects of the treatment). We suggest a new definition of exogeneity, KL-exogeneity. It does not rely on distributional assumptions and is not based on counterfactual random variables. As a consequence, it can be empirically tested using a proposed test that is simple to implement and distribution-free.
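The "KL" in KL-exogeneity refers to the Kullback–Leibler divergence. The paper's actual test statistic is not reproduced here; as background, a minimal plug-in estimate of the KL divergence between two samples can be sketched as follows (the shared histogram bins and add-one smoothing are illustrative assumptions, not the paper's construction):

```python
import numpy as np

def kl_divergence(p_samples, q_samples, bins=20):
    """Plug-in estimate of KL(P || Q) from two samples via shared histogram bins.

    Illustrative only: the KL-exogeneity test statistic itself is not
    specified in the abstract above.
    """
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p, _ = np.histogram(p_samples, bins=bins, range=(lo, hi))
    q, _ = np.histogram(q_samples, bins=bins, range=(lo, hi))
    # Add-one smoothing avoids log(0) and yields proper probability vectors.
    p = (p + 1) / (p + 1).sum()
    q = (q + 1) / (q + 1).sum()
    return float(np.sum(p * np.log(p / q)))
```

By Gibbs' inequality the estimate is nonnegative, and it is near zero when the two samples come from the same distribution.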

3.
Prior research has emphasized the relevance of adequate statistical power for covariance-based structural equation modeling (CSEM). Nevertheless, reviews in domains other than supply chain management (SCM) found that the magnitude of power tends to be inadequate. This finding is worrisome because statistical power directly affects the meaningfulness of the conclusions based on CSEM. The issue is particularly relevant for the field of SCM in light of the increasing use of CSEM. An investigation of the statistical power of CSEM published in seven major SCM journals since 1999 confirms this criticism. Specifically, an analysis of 988 applications of CSEM indicates that 32% of all applications have too little power, increasing the probability of Type II errors, and that another 43% of all applications exhibit excessive power, increasing the probability of Type I errors. This paper emphasizes the importance of adequate statistical power for CSEM in SCM.
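Statistical power for a CSEM application is commonly evaluated with an RMSEA-based test of close fit (in the style of MacCallum, Browne, and Sugawara). A generic sketch follows; the df, n, and RMSEA values are illustrative assumptions, not the survey's own procedure:

```python
from scipy.stats import ncx2  # noncentral chi-squared distribution

def rmsea_power(df, n, rmsea0=0.05, rmsea_a=0.08, alpha=0.05):
    """Power of an RMSEA-based test of close fit for a model with df degrees
    of freedom fitted to a sample of size n."""
    ncp0 = (n - 1) * df * rmsea0 ** 2   # noncentrality under H0 (close fit)
    ncp_a = (n - 1) * df * rmsea_a ** 2 # noncentrality under the alternative
    crit = ncx2.ppf(1 - alpha, df, ncp0)  # critical value at level alpha
    return ncx2.sf(crit, df, ncp_a)       # P(reject H0 | alternative)

# Power rises with sample size for fixed df and effect size.
low_n = rmsea_power(df=50, n=100)
high_n = rmsea_power(df=50, n=500)
```

Both "too little" and "excessive" power in the survey's sense correspond to this quantity falling far below or far above conventional targets such as 0.80.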

4.
A condition is given by which optimal normal theory methods, such as the maximum likelihood methods, are robust against violation of the normality assumption in a general linear structural equation model. Specifically, the estimators and the goodness of fit test are robust. The estimator is efficient within some defined class, and its standard errors can be obtained by a correction formula applied to the inverse of the information matrix. Some special models, like the factor analysis model and path models, are discussed in more detail. A method for evaluating the robustness condition is given.

5.
Yet another paper on fit measures? To our knowledge, very few papers discuss how fit measures are affected by error variance in the data generating process (DGP). The present paper addresses this gap. Based on an extensive simulation study, it shows that the effects of increased error variance differ significantly across fit measures. Besides error variance, the effects depend on sample size and on the severity of misspecification. The findings confirm the general notion that good fit as measured by the chi-square, RMSEA, GFI, etc. does not necessarily mean that the model is correctly specified and reliable. One finding is that the chi-square test may lend support to misspecified models when the level of error variance in the DGP is high and the sample size is small. Another is that the chi-square test loses power, even for large sample sizes, when the model is only negligibly misspecified. Incremental fit indices such as NFI and RFI prove to be more informative indicators under these circumstances. We conclude with some guidelines for the use of different fit measures.
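For reference, the RMSEA point estimate discussed above is a simple function of the model chi-square, its degrees of freedom, and the sample size. A minimal sketch of the standard formula (not the paper's simulation code):

```python
import math

def rmsea(chi2_stat, df, n):
    """Point estimate of RMSEA from a model chi-square statistic, its
    degrees of freedom, and the sample size n (standard formula)."""
    return math.sqrt(max(chi2_stat - df, 0.0) / (df * (n - 1)))

# A chi-square close to (or below) its df yields an RMSEA near zero,
# i.e. "good fit" -- which, as the abstract notes, does not by itself
# guarantee that the model is correctly specified.
```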

6.
In structural equation modeling the statistician needs assumptions in order (1) to guarantee that the estimates are consistent for the parameters of interest, and (2) to evaluate the precision of the estimates and the significance level of test statistics. With respect to purpose (1), the typical types of analysis (ML and WLS) are robust against violation of distributional assumptions; i.e., estimates remain consistent for any type of WLS analysis and any distribution of z. (It should be noted, however, that (1) is sensitive to structural misspecification.) A typical assumption used for purpose (2) is that the vector z of observables follows a multivariate normal distribution. In relation to purpose (2), distributional misspecification may have consequences for efficiency, as well as for the power of test statistics (see Satorra, 1989a); that is, some estimation methods may be more precise than others for a given specific distribution of z. For instance, ADF-WLS is asymptotically optimal under a variety of distributions of z, while the asymptotic optimality of NT-WLS may be lost when the data are non-normal.

7.
There is compelling evidence that many macroeconomic and financial variables are not generated by linear models. This evidence is based on testing linearity against either smooth nonlinearity or piece-wise linearity, but there is no framework that encompasses both. This paper provides an econometric framework that allows for both breaks and smooth nonlinearity in between breaks. We estimate the unknown break-dates simultaneously with other parameters via nonlinear least-squares. Using new central limit results for nonlinear processes, we provide inference methods on break-dates and parameter estimates and several instability tests. We illustrate our methods via simulated and empirical smooth transition models with breaks.
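The least-squares principle behind break-date estimation can be illustrated, in its simplest pure-break form, by a grid search that minimizes the combined sum of squared residuals of two OLS fits. This is a sketch of the idea only, not the paper's estimator (which additionally handles smooth transitions between breaks); the trimming fraction is an illustrative assumption:

```python
import numpy as np

def estimate_break(y, x, trim=0.15):
    """Grid-search a single break date in a piecewise-linear regression by
    minimizing the total SSR of separate OLS fits on each segment."""
    n = len(y)
    lo, hi = int(n * trim), int(n * (1 - trim))  # trim ends for stable fits
    best_tau, best_ssr = lo, np.inf
    for tau in range(lo, hi):
        ssr = 0.0
        for seg in (slice(0, tau), slice(tau, n)):
            X = np.column_stack([np.ones(len(x[seg])), x[seg]])
            beta, *_ = np.linalg.lstsq(X, y[seg], rcond=None)
            ssr += float(((y[seg] - X @ beta) ** 2).sum())
        if ssr < best_ssr:
            best_ssr, best_tau = ssr, tau
    return best_tau
```

With a sizeable shift in intercept or slope, the SSR profile is sharply minimized at the true break date.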

8.
In this paper, approximate counterparts of the exact tests earlier proposed by the authors are examined. Type I error probabilities and test powers are estimated and compared using Monte Carlo experiments. The effect on the Type I error probabilities of the misspecification that results when serial correlation is present elsewhere in the system is also investigated.

9.
Ten empirical models of travel behavior are used to measure the variability of structural equation model goodness-of-fit as a function of sample size, multivariate kurtosis, and estimation technique. The estimation techniques are maximum likelihood, asymptotic distribution free, bootstrapping, and the Mplus approach. The results highlight the divergence of these techniques when sample sizes are small and/or multivariate kurtosis is high. Recommendations include using multiple estimation techniques and, when sample sizes are large, sampling the data and re-estimating the models both to test the robustness of the specifications and to quantify, to some extent, the large-sample bias inherent in the χ2 test statistic.
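The bootstrapping technique compared above rests on resampling the data with replacement and recomputing the statistic of interest. A generic percentile-bootstrap sketch follows; the SEM-specific fit statistics are replaced here by an arbitrary `stat` function, an assumption for illustration:

```python
import numpy as np

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Nonparametric percentile bootstrap confidence interval for stat(data):
    resample rows with replacement, recompute the statistic, take quantiles."""
    rng = np.random.default_rng(seed)
    n = len(data)
    reps = np.array([stat(data[rng.integers(0, n, n)]) for _ in range(n_boot)])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])
```

In the SEM context, `stat` would refit the model on each resample and return a fit index, which is what makes the procedure informative about small-sample variability.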

10.
Election forecasting is a cottage industry among pollsters, the media, political scientists, and political anoraks. Here, we plow a fresh field in providing a systematic exploration of election forecasting in Ireland. We develop a structural forecast model for predicting incumbent government support in Irish general elections between 1977 and 2020 (the Iowa model). We contrast this structural model with forecasts from opinion polls, the dominant means of predicting Ireland’s elections to date. Our results show that with appropriate lead-in time, structural models perform similarly to opinion polls in predicting government support when the dependent variable is vote share. Most importantly, however, the Iowa model is superior to opinion polls in predicting government seat share, the ultimate decider of government fate in parliamentary systems, and especially significant in single transferable vote (STV) systems where vote and seat shares are not always in sync. Our results provide cumulative evidence of the potency of structural electoral forecast models globally, with the takeaway that the Iowa model estimating seat share outpaces other prediction approaches in anticipating government performance in Irish general elections.

11.
We consider the problem of estimating input parameters for a differential equation model, given experimental observations of the output. As time and cost limit both the number and quality of observations, the design is critical. A generalized notion of leverage is derived and, with this, we define directional leverage. Effective designs are argued to be those that sample in regions of high directional leverage. We present an algorithm for finding optimal designs and then establish relationships to existing design optimality criteria. Numerical examples demonstrating the performance of the algorithm are presented.
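Classical leverage, which the paper generalizes to "directional leverage", is the diagonal of the hat matrix. A minimal sketch for a linear design matrix (the ODE setting itself is not reproduced here):

```python
import numpy as np

def leverage(X):
    """Diagonal of the hat matrix H = X (X'X)^{-1} X'.

    h_ii measures how strongly observation i pulls its own fitted value;
    designs that sample where leverage is high are most informative.
    """
    H = X @ np.linalg.solve(X.T @ X, X.T)
    return np.diag(H).copy()
```

The leverages lie in [0, 1] and sum to the number of model parameters (the trace of a projection matrix equals its rank).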

12.
Starting from the dynamic factor model for nonstationary data we derive the factor‐augmented error correction model (FECM) and its moving‐average representation. The latter is used for the identification of structural shocks and their propagation mechanisms. We show how to implement classical identification schemes based on long‐run restrictions in the case of large panels. The importance of the error correction mechanism for impulse response analysis is analyzed by means of both empirical examples and simulation experiments. Our results show that the bias in estimated impulse responses in a factor‐augmented vector autoregressive (FAVAR) model is positively related to the strength of the error correction mechanism and the cross‐section dimension of the panel. We observe empirically in a large panel of US data that these features have a substantial effect on the responses of several variables to the identified permanent real (productivity) and monetary policy shocks.

13.
There is general agreement that attitudes towards entrepreneurship are determining factors in the decision to become an entrepreneur. In this context, this research focuses on analyzing the relationship between desirability and feasibility in university students' intentions to create a new firm in Catalonia. A structural equation model based on Krueger & Brazeal's model was tested with different groups of students. The main results reveal that most university students consider it desirable to create a new firm, although the perception of feasibility is not positive. There is also a statistically significant and positive relationship between credibility and the intention to create a new firm.
David Urbano (Corresponding author)

14.
We consider the (possibly nonlinear) regression model in \(\mathbb{R}^q\) with shift parameter \(\alpha\) in \(\mathbb{R}^q\) and other parameters \(\beta\) in \(\mathbb{R}^p\). Residuals are assumed to be from an unknown distribution function (d.f.). Let \(\widehat{\phi}\) be a smooth \(M\)-estimator of \(\phi = \binom{\beta}{\alpha}\) and \(T(\phi)\) a smooth function. We obtain the asymptotic normality, covariance, bias and skewness of \(T(\widehat{\phi})\) and an estimator of \(T(\phi)\) with bias \(\sim n^{-2}\) requiring \(\sim n\) calculations. (In contrast, the jackknife and bootstrap estimators require \(\sim n^2\) calculations.) For a linear regression with random covariates of low skewness, if \(T(\phi) = \nu \beta\), then \(T(\widehat{\phi})\) has bias \(\sim n^{-2}\) (not \(n^{-1}\)) and skewness \(\sim n^{-3}\) (not \(n^{-2}\)), and the usual approximate one-sided confidence interval (CI) for \(T(\phi)\) has error \(\sim n^{-1}\) (not \(n^{-1/2}\)). These results extend to random covariates.
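The \(\sim n^2\) cost attributed to the jackknife arises because it recomputes the statistic on each of the \(n\) leave-one-out samples, and each recomputation typically costs \(\sim n\) work. A minimal sketch of the jackknife bias estimate (the generic textbook construction, not the paper's analytic estimator):

```python
import numpy as np

def jackknife_bias(data, stat):
    """Jackknife bias estimate: (n - 1) * (mean of leave-one-out stats - full stat).

    Requires n recomputations of `stat`; if each costs ~n work, the total
    is ~n^2, versus the ~n calculations of the analytic estimator above.
    """
    n = len(data)
    full = stat(data)
    loo = np.array([stat(np.delete(data, i)) for i in range(n)])
    return (n - 1) * (loo.mean() - full)
```

For the plug-in variance (divide by n), the jackknife correction is exact: subtracting the estimated bias recovers the unbiased (divide by n-1) variance.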

15.
16.
This paper compares the economic questions addressed by instrumental variables estimators with those addressed by structural approaches. We discuss Marschak’s Maxim: estimators should be selected on the basis of their ability to answer well-posed economic problems with minimal assumptions. A key identifying assumption that allows structural methods to be more informative than IV can be tested with data and does not have to be imposed.

17.
Quality & Quantity - Despite the great efforts of healthcare providers, medical errors are inevitable, and the consequences of these errors may vary from little or no harm (near-miss) to being...

18.
Multilevel structural equation modeling (multilevel SEM) has become an established method to analyze multilevel multivariate data. The first useful estimation method was the pseudobalanced method. This method is approximate because it assumes that all groups have the same size, ignoring unbalance when it exists. Full information maximum likelihood (ML) estimation is now also available, often combined with robust chi‐squares and standard errors to accommodate unmodeled heterogeneity (MLR). More recently, diagonally weighted least squares (DWLS) methods have become available as well. This article compares the pseudobalanced estimation method, ML(R), and two DWLS methods by simulating a multilevel factor model with unbalanced data. The simulations included different sample sizes at the individual and group levels and different intraclass correlations (ICCs). The within‐group part of the model posed no problems. In the between part of the model, the different ICC sizes had no effect. There is a clear interaction effect between the number of groups and the estimation method: ML reaches unbiasedness fastest, then the two DWLS methods, then MLR, and then the pseudobalanced method (which needs more than 200 groups). We conclude that both ML(R) and DWLS are genuine improvements on the pseudobalanced approximation. With small sample sizes, the robust methods are not recommended.
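The intraclass correlation varied in the simulations is the share of total variance located at the group level. A one-way ANOVA estimate for balanced groups can be sketched as follows (a generic formula, not the article's code; the balanced-design assumption is mine to keep the sketch short):

```python
import numpy as np

def icc_anova(groups):
    """One-way ANOVA estimate of the intraclass correlation, assuming all
    groups have the same size n (balanced design)."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    k = len(groups)
    n = len(groups[0])
    grand = np.mean(np.concatenate(groups))
    # Between-group and within-group mean squares.
    msb = n * sum((g.mean() - grand) ** 2 for g in groups) / (k - 1)
    msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)
```

With equal between-group and within-group variance the true ICC is 0.5, and the estimate converges to it as the number of groups grows.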

19.
We contest Jaeger and Paserman's claim (Jaeger and Paserman, 2008. The cycle of violence? An empirical analysis of fatalities in the Palestinian–Israeli conflict. American Economic Review 98 (4): 1591–1604) that Palestinians did not react to Israeli aggression during Intifada 2. We address the differences between the two sides in terms of the timing and intensity of violence, estimate nonlinear vector autoregression models that are suitable when the linear vector autoregression innovations are not normally distributed, identify causal effects rather than Granger causality using the principle of weak exogeneity, and introduce the “kill‐ratio” as a concept for testing hypotheses about the cycle of violence. The Israelis killed 1.28 Palestinians for every killed Israeli, whereas the Palestinians killed only 0.09 Israelis for every killed Palestinian.

20.
Previous studies to explain why companies utilize particular human resource management (HRM) strategies have not adequately addressed the influence of contextual variables such as size, location, ownership, competitive pressure, technological change, age and growth. In this study, we investigate the extent to which these contextual variables are related to HRM strategy in seventy-six private-sector firms located in Hong Kong. Our analysis uses structural equations to examine the relationships among contextual variables and HRM strategy to develop and retain managers. The results show that contextual variables have both direct and indirect effects on an organization's HRM strategy. The indirect effects occur through the top management involvement of the HR function within an organization. Use of a human capital development HRM strategy reduces organizational uncertainty about having an adequate supply of managers to meet firm objectives. Contrary to our expectation, in Hong Kong firms, greater reliance on internal development and promotion tends to increase uncertainty and greater competition tends to reduce training investment. Both of these unanticipated relationships may reflect the high mobility of managers peculiar to the Hong Kong labour market.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号