Similar Documents

20 similar documents found.
1.
Abstract

Testing the assumption of independence between variables is a crucial aspect of spatial data analysis. However, the literature is limited and somewhat confusing. To our knowledge, the only available test is the bivariate generalization of Moran's statistic, which suffers from several restrictions: it is applicable only to pairs of variables; it requires a weighting matrix and the assumption of linearity; and its null hypothesis is not entirely clear. Given these limitations, we develop a new non-parametric test, Υ(m), based on symbolic dynamics, with better properties. We show that the Υ(m) test can be extended to a multivariate framework, is robust to departures from linearity, does not need a weighting matrix, and can be adapted to different specifications of the null. The test is consistent, computationally simple, and has good size and power, as shown by a Monte Carlo experiment. An application to the productivity of the manufacturing sector in the Ebro Valley illustrates our approach.
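For context, a minimal Python sketch of the bivariate Moran statistic that this abstract criticizes. The normalization convention varies across the literature, and the random symmetric weight matrix below is purely a stand-in for a genuine spatial contiguity structure:

```python
import numpy as np

def bivariate_morans_i(x, y, W):
    """Bivariate Moran's I between x and y under a spatial weight matrix W.
    One common form (conventions differ):
    I_xy = (n / S0) * (zx' W zy) / sqrt((zx'zx)(zy'zy)),
    where zx, zy are the centred variables and S0 is the sum of all weights."""
    zx, zy = x - x.mean(), y - y.mean()
    n, S0 = len(x), W.sum()
    return (n / S0) * (zx @ W @ zy) / np.sqrt((zx @ zx) * (zy @ zy))

# Toy example; a real application would use a contiguity or distance matrix.
rng = np.random.default_rng(0)
n = 25
W = rng.random((n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
x, y = rng.normal(size=n), rng.normal(size=n)
print(bivariate_morans_i(x, y, W))
```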

2.
This paper studies an alternative quasi-likelihood approach under possible model misspecification. We derive a filtered likelihood from a given quasi-likelihood (QL), called a limited information quasi-likelihood (LI-QL), that contains relevant but limited information on the data generation process. Our LI-QL approach, on the one hand, extends the robustness of the QL approach to inference problems for which the existing approach does not apply. On the other hand, our study builds a bridge between the classical and Bayesian approaches to statistical inference under possible model misspecification. We establish a large-sample correspondence between the classical QL approach and our LI-QL-based Bayesian approach. An interesting finding is that the asymptotic distribution of an LI-QL-based posterior and that of the corresponding quasi maximum likelihood estimator share the same “sandwich”-type second moment. Based on the LI-QL, we develop inference methods that are useful for practical applications under possible model misspecification. In particular, we develop Bayesian counterparts of classical QL methods that carry all the nice features of the latter studied in White (1982). In addition, we develop a Bayesian method for analyzing model specification based on an LI-QL.
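To make the “sandwich”-type second moment concrete, here is a hedged numpy sketch of the White (1982)-style robust covariance A⁻¹BA⁻¹ for the Gaussian quasi-MLE (i.e. OLS) of a linear model; the paper's LI-QL posterior is stated to share this structure asymptotically, but the code is only the classical building block, not the paper's procedure:

```python
import numpy as np

def sandwich_cov(X, y):
    """Sandwich covariance V = A^{-1} B A^{-1} / n for the Gaussian quasi-MLE
    of a linear model: A is the expected Hessian (X'X/n), B the outer
    product of the scores (heteroskedasticity-robust 'meat')."""
    n, k = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    u = y - X @ beta                                   # residuals
    A = X.T @ X / n                                    # "bread"
    B = (X * u[:, None]).T @ (X * u[:, None]) / n      # "meat"
    Ainv = np.linalg.inv(A)
    return beta, Ainv @ B @ Ainv / n

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n) * (1 + np.abs(X[:, 1]))
beta, V = sandwich_cov(X, y)
print(beta, np.sqrt(np.diag(V)))                       # robust standard errors
```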

3.
We propose to extend the cointegration rank determination procedure of Robinson and Yajima [2002. Determination of cointegrating rank in fractional systems. Journal of Econometrics 106, 217–242] to accommodate both (asymptotically) stationary and nonstationary fractionally integrated processes as the common stochastic trends and cointegrating errors by applying the exact local Whittle analysis of Shimotsu and Phillips [2005. Exact local Whittle estimation of fractional integration. Annals of Statistics 33, 1890–1933]. The proposed method estimates the cointegrating rank by examining the rank of the spectral density matrix of the dth differenced process around the origin, where the fractional integration order, d, is estimated by the exact local Whittle estimator. Similar to other semiparametric methods, the approach advocated here only requires information about the behavior of the spectral density matrix around the origin, but it relies on a choice of (multiple) bandwidth(s) and threshold parameters. It does not require estimating the cointegrating vector(s) and is easier to implement than regression-based approaches, but it only provides a consistent estimate of the cointegration rank, and formal tests of the cointegration rank or levels of confidence are not available except for the special case of no cointegration. We apply the proposed methodology to the analysis of exchange rate dynamics among a system of seven exchange rates. Contrary to both fractional and integer-based parametric approaches, which indicate at most one cointegrating relation, our results suggest three or possibly four cointegrating relations in the data.
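As background, a minimal sketch of the plain local Whittle estimator of the memory parameter d (Robinson's Gaussian semiparametric estimator). The exact local Whittle of Shimotsu and Phillips modifies the objective to handle nonstationary d; this simpler version only illustrates the minimisation over the first m Fourier frequencies:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def local_whittle_d(x, m):
    """Plain local Whittle estimate of d using the first m frequencies:
    minimise R(d) = log((1/m) sum lam_j^{2d} I_j) - (2d/m) sum log lam_j."""
    n = len(x)
    t = np.arange(1, n + 1)
    lam = 2 * np.pi * np.arange(1, m + 1) / n            # Fourier frequencies
    I = np.abs(np.exp(-1j * np.outer(lam, t)) @ x) ** 2 / (2 * np.pi * n)
    def R(d):
        return np.log(np.mean(lam ** (2 * d) * I)) - 2 * d * np.mean(np.log(lam))
    return minimize_scalar(R, bounds=(-0.49, 0.99), method="bounded").x

rng = np.random.default_rng(2)
x = rng.normal(size=2000)                                # white noise: d = 0
print(local_whittle_d(x, m=int(len(x) ** 0.65)))         # should be near 0
```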

4.
Abstract

This paper develops a unified framework for fixed effects (FE) and random effects (RE) estimation of higher-order spatial autoregressive panel data models with spatial autoregressive disturbances and heteroscedasticity of unknown form in the idiosyncratic error component. We derive the moment conditions and optimal weighting matrix without distributional assumptions for a generalized moments (GM) estimation procedure of the spatial autoregressive parameters of the disturbance process and define both an RE and an FE spatial generalized two-stage least squares estimator for the regression parameters of the model. We prove consistency of the proposed estimators and derive their joint asymptotic distribution, which is robust to heteroscedasticity of unknown form in the idiosyncratic error component. Finally, we derive a robust Hausman test of the spatial random against the spatial FE model.
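As a rough illustration of the spatial two-stage least squares idea used for the regression parameters, here is a cross-section sketch with Kelejian-Prucha-style instruments [X, WX, W²X] for the spatial lag. The panel structure, the GM step for the disturbance parameters, and the heteroscedasticity-robust weighting of the paper are all omitted; the data and weight matrix are made up:

```python
import numpy as np

def spatial_2sls(y, X, W):
    """Spatial 2SLS for y = rho*W y + X b + u, instrumenting W y with
    [X, WX, W^2 X]; a simplified cross-section stand-in for the paper's
    panel estimator."""
    Z = np.column_stack([W @ y, X])                  # regressors incl. W y
    H = np.column_stack([X, W @ X, W @ W @ X])       # instrument set
    P = H @ np.linalg.pinv(H.T @ H) @ H.T            # projection on col(H)
    Zh = P @ Z
    return np.linalg.solve(Zh.T @ Z, Zh.T @ y)       # [rho, b]

rng = np.random.default_rng(10)
n = 200
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5      # circular weights
X = np.column_stack([np.ones(n), rng.normal(size=n)])
rho, b = 0.4, np.array([1.0, 2.0])
y = np.linalg.solve(np.eye(n) - rho * W, X @ b + rng.normal(size=n))
print(spatial_2sls(y, X, W))                         # approx [0.4, 1.0, 2.0]
```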

5.
Abstract

We discuss a practical method to price and hedge European contingent claims on assets whose price processes follow a jump-diffusion. The method consists of a sequence of trinomial models for the asset price and option price processes which are shown to converge weakly to the corresponding continuous-time jump-diffusion processes. The main difference from many existing methods is that our approach ensures that the intermediate discrete-time approximations generate models which are themselves complete, just as in the Black-Scholes binomial approximations. This is only possible by dropping the assumption that the approximations of increments of the Wiener and Poisson processes on our trinomial tree are independent, but we show that the dependence between these processes disappears in the weak limit. The approximations thus define an easy and flexible method for pricing and hedging in jump-diffusion models using explicit trees for hedging and pricing.

Mathematics Subject Classification (2000): 60B10, 60H35
Journal of Economic Literature Classification: G13
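To show the backward-induction mechanics of a trinomial tree, here is a Boyle-style tree for a European call under pure diffusion. This is only a baseline sketch: the paper's construction adds a jump branch with increments that are dependent at each step so that the discrete model stays complete, which this code does not attempt:

```python
import numpy as np

def trinomial_european_call(S0, K, r, sigma, T, N):
    """Boyle (1986)-style trinomial tree price of a European call under
    geometric Brownian motion; diffusion only, no jump branch."""
    dt = T / N
    u = np.exp(sigma * np.sqrt(2 * dt))
    a = np.exp(r * dt / 2)
    b = np.exp(sigma * np.sqrt(dt / 2))
    pu = ((a - 1 / b) / (b - 1 / b)) ** 2    # up, middle, down probabilities
    pd = ((b - a) / (b - 1 / b)) ** 2
    pm = 1 - pu - pd
    j = np.arange(-N, N + 1)                 # terminal nodes: 2N+1 of them
    V = np.maximum(S0 * u ** j - K, 0.0)
    disc = np.exp(-r * dt)
    for _ in range(N):                       # backward induction to the root
        V = disc * (pu * V[2:] + pm * V[1:-1] + pd * V[:-2])
    return V[0]

# Should be close to the Black-Scholes value (about 10.45).
print(trinomial_european_call(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, N=200))
```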

6.
The detection of multicollinearity in econometric models is usually based on the so-called condition number (CN) of the data matrix X. However, the computation of the CN, which is the largest condition index, gives misleading results in particular cases, and many commercial computer packages produce an inflated CN even in cases of spurious multicollinearity, i.e. even if no collinearity exists among the explanatory variables. This is due to the very low total variation of some columns of the transformed data matrix used to compute the CN. On the other hand, we may face the problem of latent multicollinearity, which can be revealed by additionally computing a revised CN. With all this in mind, we characterize the ill-conditioned situations and suggest some practical rules of thumb for facing such problems with a single diagnostic in a fairly simple procedure, one not previously noted in the relevant literature.
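A minimal sketch of how condition indices and the CN are typically computed (Belsley-style column scaling and singular values); the paper's revised CN and rules of thumb are not reproduced here:

```python
import numpy as np

def condition_indices(X):
    """Condition indices of a data matrix after the usual scaling of columns
    to unit length; the largest index is the condition number (CN)."""
    Xs = X / np.linalg.norm(X, axis=0)        # scale columns to unit length
    s = np.linalg.svd(Xs, compute_uv=False)   # singular values, descending
    return s[0] / s                           # last entry is the CN

rng = np.random.default_rng(3)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + 1e-4 * rng.normal(size=n)           # nearly collinear with x1
X = np.column_stack([np.ones(n), x1, x2])
print(condition_indices(X))                   # CN blows up under collinearity
```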

7.
Sensitivity of the returns to scale (RTS) classifications in data envelopment analysis is studied by means of linear programming problems. The stability region for an observation preserving its current RTS classification (constant, increasing or decreasing returns to scale) can be easily investigated from the optimal values of a set of particular DEA-type formulations. Necessary and sufficient conditions are determined for preserving the RTS classifications when input or output data perturbations are non-proportional. It is shown that the sensitivity analysis method under proportional data perturbations can also be used to estimate the RTS classifications and discover the identical RTS regions yielded by the input-based and the output-based DEA methods. Thus, our approach provides information on both the RTS classifications and the stability of the classifications. This sensitivity analysis method can easily be applied via existing DEA codes.
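For orientation, a sketch of the standard input-oriented CCR envelopment LP with an RTS label from the sum of optimal intensity weights (Banker's criterion). This is a generic stand-in, not the paper's sensitivity formulations, and with alternative optimal lambdas the sum-of-lambdas label needs care:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_rts(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0 and an RTS label from
    sum(lambda): = 1 -> CRS, < 1 -> IRS, > 1 -> DRS (Banker's criterion,
    ignoring alternative optima). X: inputs (m x n), Y: outputs (s x n)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]              # minimise theta
    A_ub = np.zeros((m + s, n + 1))
    A_ub[:m, 0] = -X[:, j0]                  # X lambda - theta x0 <= 0
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y                        # -Y lambda <= -y0
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    theta, lam_sum = res.x[0], res.x[1:].sum()
    label = "CRS" if abs(lam_sum - 1) < 1e-6 else ("IRS" if lam_sum < 1 else "DRS")
    return theta, label

X = np.array([[2.0, 4.0, 8.0, 6.0]])         # one input, four DMUs
Y = np.array([[1.0, 2.0, 3.0, 2.5]])         # one output
print([ccr_rts(X, Y, j) for j in range(4)])
```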

8.
Repeated measurements are often analyzed by multivariate analysis of variance (MANOVA). An alternative approach is provided by multilevel analysis, also called the hierarchical linear model (HLM), which makes use of random coefficient models. This paper is a tutorial which shows that the HLM can be specified in many different ways, corresponding to different sets of assumptions about the covariance matrix of the repeated measurements. The possible assumptions range from the very restrictive compound symmetry model to the unrestricted multivariate model. Thus, the HLM can be used to steer a useful middle road between the two traditional methods for analyzing repeated measurements. Another important advantage of the multilevel approach is that it can easily be used even when the data are incomplete; it thus provides a way to achieve a fully multivariate analysis of repeated measures with incomplete data.
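A small illustration of this HLM specification range using statsmodels (an assumed tool, not one the paper names): a random-intercept model implies the compound-symmetry covariance, and relaxing the random-effects structure moves toward the unrestricted end:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated repeated measures: 30 subjects, 4 occasions each.
rng = np.random.default_rng(4)
n_subj, n_occ = 30, 4
subj = np.repeat(np.arange(n_subj), n_occ)
time = np.tile(np.arange(n_occ), n_subj)
u = rng.normal(size=n_subj)                          # random intercepts
y = 2.0 + 0.5 * time + u[subj] + rng.normal(size=n_subj * n_occ)
df = pd.DataFrame({"y": y, "time": time, "subject": subj})

# Random-intercept HLM (compound symmetry); adding re_formula="~time"
# would allow random slopes, relaxing the covariance assumptions.
model = smf.mixedlm("y ~ time", df, groups="subject")
print(model.fit().summary())
```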

9.
Abstract

In this article, we introduce and evaluate testing procedures for specifying the number k of nearest neighbours in the weights matrix of a spatial econometric model. Both an increasing-neighbours and a decreasing-neighbours testing procedure are suggested, using Kelejian's J-test for non-nested spatial models. The testing procedures give formal justification for the choice of k, something which has been lacking in the classical spatial econometric literature. Simulations show that the testing procedures can be used in large samples to determine k. An empirical example involving house price data is provided.
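A minimal sketch of the object whose k these procedures select: a row-standardised k-nearest-neighbour weight matrix, built here with scikit-learn (an assumed tool; the J-test itself is not implemented):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_weights(coords, k):
    """Row-standardised k-nearest-neighbour spatial weight matrix."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(coords)  # +1: self is nearest
    _, idx = nn.kneighbors(coords)
    n = len(coords)
    W = np.zeros((n, n))
    for i, neighbours in enumerate(idx[:, 1:]):           # drop self
        W[i, neighbours] = 1.0 / k                        # row-standardise
    return W

rng = np.random.default_rng(5)
coords = rng.uniform(size=(100, 2))                       # spatial locations
for k in (4, 6, 8):                                       # candidate k values
    W = knn_weights(coords, k)
    assert np.allclose(W.sum(axis=1), 1.0)                # rows sum to one
```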

10.
Abstract

This paper presents both a new approach to studying the consequences of accounting choice and a unique sample to examine the effects of accounting choice in the R&D context. We investigate the effect of firms' decision to capitalize R&D expenditures on the amount of information about future earnings reflected in current stock returns, as captured by the association between current-year returns and future earnings (the future earnings response coefficient, FERC). We use a sample of UK firms, which includes both R&D capitalizers and expensers. An important feature of our tests is our use of a two-equation system to control for the endogeneity of the accounting choice (i.e. self-selection). Proponents of capitalization claim that it enables management to better communicate information about the success of projects and their probable future benefits. Consistent with this, we find that capitalization is associated with a higher FERC than expensing.

11.
Stochastic FDH/DEA estimators for frontier analysis
In this paper we extend the work of Simar (J Prod Anal 28:183–201, 2007) introducing noise in nonparametric frontier models. We develop an approach that synthesizes the best features of the two main methods in the estimation of production efficiency. Specifically, our approach first allows for statistical noise, similar to stochastic frontier analysis (and even in a more flexible way), and second, it allows modelling multiple-input, multiple-output technologies without imposing parametric assumptions on the production relationship, similar to what is done in non-parametric methods like Data Envelopment Analysis (DEA) and Free Disposal Hull (FDH). The methodology is based on the theory of local maximum likelihood estimation and extends recent work of Kumbhakar et al. (J Econom 137(1):1–27, 2007) and Park et al. (J Econom 146:185–198, 2008). Our method is suitable for modelling and estimating marginal effects on the inefficiency level jointly with the marginal effects of inputs. The approach is robust to heteroskedasticity and to various (unknown) distributions of statistical noise and inefficiency, despite assuming simple anchorage models. The method also improves DEA/FDH estimators by making them quite robust to statistical noise and especially to outliers, which were the main weaknesses of the original DEA/FDH estimators. The procedure shows great performance in various simulated cases and is also illustrated on some real data sets. Even in the single-output case, our simulated examples show that our stochastic DEA/FDH improves on the Kumbhakar et al. (J Econom 137(1):1–27, 2007) method by making the resulting frontier smoother, monotonic and, if we wish, concave.

12.
This paper develops a statistical method for defining housing submarkets. The method is applied using household survey data for Sydney and Melbourne, Australia. First, principal component analysis is used to extract a set of factors from the original variables for both local government area (LGA) data and a combined set of LGA and individual dwelling data. Second, factor scores are calculated and cluster analysis is used to determine the composition of housing submarkets. Third, hedonic price equations are estimated for each city as a whole, for a priori classifications of submarkets, and for submarkets defined by the cluster analysis. The weighted mean squared errors from the hedonic equations are used to compare alternative classifications of submarkets. In Melbourne, the classification derived from a K-means clustering procedure on individual dwelling data is significantly better than classifications derived from all other methods of constructing housing submarkets. In some other cases, the statistical analysis produces submarkets that are better than the a priori classification, but the improvement is not significant.
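A compact sketch of the extract-score-cluster pipeline using scikit-learn, with PCA standing in for the paper's factor extraction. The variables are simulated placeholders; the paper's actual inputs come from the Sydney and Melbourne household surveys:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
X = rng.normal(size=(500, 12))         # hypothetical dwelling/LGA attributes

# Step 1: principal components of the standardised variables.
Z = StandardScaler().fit_transform(X)
scores = PCA(n_components=4).fit_transform(Z)   # Step 2: component scores

# Step 3 input: cluster the scores to define submarkets.
submarkets = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(scores)
print(np.bincount(submarkets))                  # submarket sizes
```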

13.
Abstract

In a recent critical review of de Finetti's paper “Il problema dei pieni”, the Nobel Prize winner Harry Markowitz recognized the primacy of de Finetti in applying the mean-variance approach to finance, but pointed out that de Finetti did not solve the problem for the general case of correlated risks. We argue in this paper that a fairer statement would be: de Finetti did solve the general problem, but under an implicit hypothesis of regularity which is not always satisfied. Moreover, a natural extension of de Finetti's procedure to non-regular cases offers a general solution for the correlated case and shows that de Finetti anticipated a modern mathematical programming approach to mean-variance problems.

Mathematics Subject Classification (2000): 91B30, 90C20
Journal of Economic Literature Classification: G11, C61, B23, D81, G22
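To illustrate the kind of mean-variance programme at issue, a sketch of the classical equality-constrained problem with correlated risks, solved through its Lagrangian linear system. De Finetti's original reinsurance problem additionally bounds the retention levels in [0, 1], which is where the regularity issue arises; that inequality-constrained case is not handled here:

```python
import numpy as np

def min_variance_weights(Sigma, mu, target):
    """Minimise w' Sigma w subject to w'mu = target and w'1 = 1.
    The first-order (KKT) conditions reduce to one linear system."""
    n = len(mu)
    ones = np.ones(n)
    K = np.block([[2 * Sigma, mu[:, None], ones[:, None]],
                  [mu[None, :], np.zeros((1, 2))],
                  [ones[None, :], np.zeros((1, 2))]])
    rhs = np.r_[np.zeros(n), target, 1.0]
    return np.linalg.solve(K, rhs)[:n]        # drop the two multipliers

Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])   # correlated risks
mu = np.array([0.05, 0.10])
w = min_variance_weights(Sigma, mu, target=0.07)
print(w, w @ mu, w @ Sigma @ w)                  # weights, mean, variance
```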

14.
Statistical agencies often release a masked or perturbed version of survey data to protect the confidentiality of respondents' information. Ideally, a perturbation procedure should provide confidentiality protection without much loss of data quality, so that the released data may practically be treated as original data for making inferences. One major objective is to control the risk of correctly identifying any respondent's records in released data, by matching the values of some identifying or key variables. For categorical key variables, we propose a new approach to measuring identification risk and setting strict disclosure control goals. The general idea is to ensure that the probability of correctly identifying any respondent or surveyed unit is at most ξ, which is pre‐specified. Then, we develop an unbiased post‐randomisation procedure that achieves this goal for ξ>1/3. The procedure allows substantial control over possible changes to the original data, and the variance it induces is of a lower order of magnitude than sampling variance. We apply the procedure to a real data set, where it performs consistently with the theoretical results and quite importantly, shows very little data quality loss.
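A generic post-randomisation (PRAM) sketch for a categorical key variable: each code is replaced by a draw from the corresponding row of a transition matrix. The paper's contribution is an unbiased design of that matrix calibrated to the identification bound ξ; the fixed matrix below is purely illustrative:

```python
import numpy as np

def pram(codes, P, rng):
    """Post-randomise category codes: code c is replaced by a draw from
    row c of the transition matrix P (rows sum to one)."""
    return np.array([rng.choice(len(P), p=P[c]) for c in codes])

rng = np.random.default_rng(7)
# Three categories; stay with probability 0.8, move to either other with 0.1.
P = np.full((3, 3), 0.1) + np.diag([0.7, 0.7, 0.7])
original = rng.integers(0, 3, size=1000)
released = pram(original, P, rng)
print(np.mean(original == released))      # share of unchanged records
```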

15.
Abstract

We attempt to clarify a number of points regarding the use of spatial regression models for regional growth analysis. We show that, as in the case of non-spatial growth regressions, the effect of initial regional income levels wears off over time. Unlike the non-spatial case, long-run regional income levels depend on own-region as well as neighbouring-region characteristics, the spatial connectivity structure of the regions, and the strength of spatial dependence. Given this, the search for regional characteristics that exert important influences on income levels or growth rates should be conducted using spatial econometric methods that account for spatial dependence, own- and neighbouring-region characteristics, the type of spatial regression model specification, and the weight matrix. The framework adopted here illustrates a unified approach for dealing with these issues.
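A small sketch of how own- and neighbouring-region effects combine in a spatial autoregressive model, using the LeSage-Pace style average direct and indirect impacts from the long-run multiplier matrix (I − ρW)⁻¹; the circular weight matrix is only a stand-in:

```python
import numpy as np

def sar_impacts(W, rho, beta_k):
    """Average direct and indirect impacts of regressor k in the SAR model
    y = rho*W*y + X*beta + e, via S = (I - rho*W)^{-1} * beta_k."""
    n = W.shape[0]
    S = np.linalg.inv(np.eye(n) - rho * W) * beta_k
    direct = np.trace(S) / n                 # average own-region effect
    total = S.sum() / n
    return direct, total - direct            # (direct, indirect/spillover)

n = 20
W = np.zeros((n, n))
for i in range(n):                           # row-standardised circular W
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5
print(sar_impacts(W, rho=0.4, beta_k=1.0))
```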

16.
Abstract

This article examines the extent to which a proactive attitude can be considered a component of the entrepreneurial mindset (EM) and can be learned in the entrepreneurial classroom. We test the impact on students' proactive attitude of two different teaching methods: a teacher-directed approach and a self-directed learning approach. We include group potency and emotions as variables that may moderate proactivity learning outcomes. Our sample is composed of 281 Master's students at a French business school. Using a mixed methodological approach, the results demonstrate that a proactive attitude can be learned and that collaborative teamwork, a creative team spirit and positive emotions contribute to its development. We offer guidelines for the pedagogical design of EM education, an alternative tool to assess its impact, and a better understanding of the emotional factors associated with group potency in student entrepreneurial teams.

17.
Abstract

This study analyses and reviews the literature on public leadership with a novel combination of bibliometric methods. We detect four generic approaches to public leadership (i.e. a functionalist, a behavioural, a biographical and a reformist approach) which differ with regard to their philosophy of science (i.e. objective vs subjective) and level of analysis (i.e. micro-level vs multi-level). From our findings, we derive four directions for future research which involve shifting the focus from the aspect of 'leadership' to the element of 'public', from simplicity to complexity, from universalism to cultural relativism and from public leadership to public followership.

18.
This paper proposes a simple procedure for obtaining monthly assessments of short-run perspectives for quarterly world GDP and trade. It combines high-frequency information from emerging and advanced countries so as to explain quarterly national accounts variables through bridge models. The union of all bridge equations leads to our world bridge model (WBM). The WBM approach of this paper is new for two reasons: its equations combine traditional short-run bridging with theoretical level relationships, and it is the first time that forecasts of world GDP and trade have been computed for both advanced and emerging countries on the basis of a real-time database of approximately 7000 time series. Although the performance of the automatically selected equations should be taken as a lower bound, our results show that the forecasting ability of the WBM is superior to the benchmark. Finally, our results confirm that the use of revised data leads to models' forecasting performance being significantly overstated.
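A toy sketch of one bridge equation of the kind the WBM pools by the thousands: a monthly indicator is aggregated to quarterly frequency, a quarterly regression is fitted, and the latest quarter is nowcast. All series and names here are simulated placeholders:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(8)
months = pd.date_range("2005-01-31", periods=180, freq="M")
indicator = pd.Series(rng.normal(size=180), index=months).rolling(3).mean().bfill()

quarterly = indicator.resample("Q").mean()           # monthly -> quarterly
gdp_growth = 0.5 + 0.8 * quarterly + rng.normal(scale=0.2, size=len(quarterly))

# Fit the bridge equation on all completed quarters.
X = sm.add_constant(quarterly.values[:-1])
fit = sm.OLS(gdp_growth.values[:-1], X).fit()
# Nowcast the latest quarter from the months already observed in it.
print(fit.predict([1.0, quarterly.values[-1]]))
```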

19.
20.
This paper presents an inference approach for dependent data in time series, spatial, and panel data applications. The method involves constructing t and Wald statistics using a cluster covariance matrix estimator (CCE). We use an approximation that takes the number of clusters/groups as fixed and the number of observations per group to be large. The resulting limiting distributions of the t and Wald statistics are standard t and F distributions where the number of groups plays the role of sample size. Using a small number of groups is analogous to the 'fixed-b' asymptotics of Kiefer and Vogelsang (2002, 2005) (KV) for heteroskedasticity and autocorrelation consistent inference. We provide simulation evidence demonstrating that the procedure substantially outperforms conventional inference procedures.
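A minimal sketch of the core computation: a cluster covariance matrix for OLS and a t-statistic referred to a t distribution with G − 1 degrees of freedom, in the spirit of the fixed-number-of-groups approximation (finite-sample scaling factors such as G/(G − 1) are omitted):

```python
import numpy as np
from scipy import stats

def cce_tstat(X, y, groups, coef=1):
    """t-statistic for one OLS coefficient using the cluster covariance
    estimator, with a t(G-1) reference distribution."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    u = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    labels = np.unique(groups)
    scores = [X[groups == g].T @ u[groups == g] for g in labels]
    meat = sum(np.outer(s, s) for s in scores)        # sum of cluster scores
    V = bread @ meat @ bread
    t = beta[coef] / np.sqrt(V[coef, coef])
    return t, 2 * stats.t.sf(abs(t), df=len(labels) - 1)   # t and p-value

rng = np.random.default_rng(9)
G, m = 10, 50                                         # few clusters, many obs
groups = np.repeat(np.arange(G), m)
X = np.column_stack([np.ones(G * m), rng.normal(size=G * m)])
y = rng.normal(size=G * m) + rng.normal(size=G)[groups]   # true slope = 0
print(cce_tstat(X, y, groups))
```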
