Similar Articles
20 similar articles found.
1.
An F test [Nelson (1976)] of Parzen's prediction variance horizon [Parzen (1982)] of an ARMA model yields the number of steps ahead for which forecasts contain information (short memory). A special 10-year pattern in Finnish GDP is introduced as a 'seasonal' in an ARMA model. Forecasts three years ahead are statistically informative, but exploiting the complete 10-year pattern raises doubts about both model memory and model validity.
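As a rough illustration of the idea (our own sketch, not the paper's code; the helper names and the 0.95 threshold are illustrative assumptions), the horizon can be read off the h-step forecast error variance computed from the ARMA psi-weights:

```python
import numpy as np

def psi_weights(ar, ma, hmax):
    # MA(infinity) weights of an ARMA model with AR coefficients `ar`
    # and MA coefficients `ma`: psi_j = theta_j + sum_i phi_i * psi_{j-i}.
    psi = np.zeros(hmax)
    psi[0] = 1.0
    for j in range(1, hmax):
        psi[j] = ma[j - 1] if j - 1 < len(ma) else 0.0
        psi[j] += sum(ar[i] * psi[j - 1 - i] for i in range(min(len(ar), j)))
    return psi

def variance_horizon(ar, ma, threshold=0.95, hmax=400):
    # h-step forecast error variance is sigma^2 * sum_{j<h} psi_j^2; the
    # horizon is the h at which it has essentially reached the unconditional
    # variance, so longer forecasts carry little information.
    psi = psi_weights(ar, ma, hmax)
    var_h = np.cumsum(psi ** 2)              # sigma^2 normalized to 1
    return int(np.argmax(var_h / var_h[-1] >= threshold)) + 1

print(variance_horizon(ar=[0.7], ma=[]))    # AR(1): horizon grows with persistence
```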

2.
Spatial lag operators play an important role in fields such as biometrics, geography and spatial econometrics. Higher-order spatial interactions can be represented by powering spatial lag operators. The paper discusses the conditions under which higher-order spatial lag operators will contain circular routes. Next, it is demonstrated that valid causal inference with spatial dynamic regression models necessitates the elimination of circular routes. It is shown that the ML-search procedure proposed by Ord (1975) and generalized by Blommestein (1983a) will generate spurious results in the case of circular routes. A ML-procedure to obtain non-spurious parameter estimates in general dynamic spatial regression models is outlined. In addition, some recently proposed approximate ML-search procedures [see Blommestein (1984b)], based on the spectral decomposition of a matrix, are modified in a similar way. The last section demonstrates empirically the import of the theoretical results derived in the paper.
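A toy illustration of the circular-route problem (our own sketch, not the authors' ML procedure): powering a 0/1 contiguity matrix counts all length-k paths, including routes that return to their origin.

```python
import numpy as np

# Powers of a contiguity matrix W count all length-k paths. The diagonal of
# W^k collects "circular routes" (paths returning to their origin); pairs
# already linked at lower orders are redundant. Filtering both is the kind
# of cleaning proper higher-order spatial lags require.
W = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

W2 = W @ W
print(np.diag(W2))                 # nonzero: length-2 routes i -> j -> i are circular

W2_clean = W2.copy()
np.fill_diagonal(W2_clean, 0.0)    # drop routes returning to the origin
W2_clean[W > 0] = 0.0              # drop pairs already first-order neighbors
print(W2_clean)                    # only genuine second-order neighbors remain
```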

3.
In our earlier paper [Srivastava, Agnihotri and Dwivedi (1980)], the dominance of the double k-class over the k-class with respect to the exact mean squared error matrix criterion was established. It was observed that, given a member of the k-class, one can pick a member of the double k-class that provides an improved estimator of the coefficients. This result prompted us to study the exact finite sample properties of the double k-class estimator. For this, we consider a structural equation containing two endogenous variables and investigate the properties of double k-class estimators of the coefficients of the explanatory endogenous variables, assuming the characterizing scalars to be non-stochastic.
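For orientation, under notation we are assuming here (not taken from the paper), the two estimator families can be written as follows:

```latex
% Sketch under assumed notation: y = Y_1\gamma + X_1\beta + u is the structural
% equation, A = [Y_1 \; X_1], and M = I - X(X'X)^{-1}X' for the full matrix of
% exogenous variables X.
\[
\hat{\delta}(k) \;=\; \bigl[A'(I - kM)A\bigr]^{-1} A'(I - kM)\,y ,
\qquad
\hat{\delta}(k_1,k_2) \;=\; \bigl[A'(I - k_1 M)A\bigr]^{-1} A'(I - k_2 M)\,y ,
\]
% so the k-class is recovered on the diagonal k_1 = k_2 = k, which is why an
% improving double k-class member can be sought for any given k-class member.
```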

4.
5.
Ecological inference refers to the study of individuals using aggregate data and is used in an impressive number of studies; it is well known, however, that the study of individuals using group data suffers from an ecological fallacy problem (Robinson in Am Sociol Rev 15:351–357, 1950). This paper evaluates the accuracy of two recent methods, those of Rosen et al. (Stat Neerl 55:134–156, 2001) and Greiner and Quinn (J R Stat Soc Ser A 172:67–81, 2009), and the long-standing method of Goodman (Am Sociol Rev 18:663–664, 1953; Am J Sociol 64:610–625, 1959), designed to estimate all cells of R × C tables simultaneously by employing exclusively aggregate data. To conduct these tests we leverage extensive electoral data for which the true quantities of interest are known. In particular, we focus on examining the extent to which the confidence intervals provided by the three methods contain the true values. The paper also provides important guidelines regarding the appropriate contexts for employing these models.
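As a reference point, Goodman's method reduces to a simple least squares fit; a minimal sketch under assumed notation (helper name and toy data are ours):

```python
import numpy as np

def goodman_regression(X, t):
    # Goodman (1953): with row fractions X[i, r] (share of group r in unit i)
    # and aggregate outcome share t[i], assume t_i = sum_r X[i, r] * beta_r
    # with constant cell fractions beta_r, estimated by least squares.
    beta, *_ = np.linalg.lstsq(X, t, rcond=None)
    return beta            # estimates may fall outside [0, 1], a known weakness

rng = np.random.default_rng(0)
X = rng.dirichlet([2, 2], size=100)           # two groups' population shares
true_beta = np.array([0.8, 0.3])              # true (constant) cell fractions
t = X @ true_beta + rng.normal(0, 0.01, 100)
print(goodman_regression(X, t))               # close to [0.8, 0.3]
```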

6.
In this paper the unit root tests proposed by Dickey and Fuller (DF) and their rank counterpart suggested by Breitung and Gouriéroux (J Econom 81(1):7–27, 1997) (BG) are analytically investigated under the presence of additive outlier (AO) contaminations. The results show that the limiting distribution of the former test is outlier dependent, while the latter is outlier free. The finite sample size properties of these tests are also investigated under different scenarios of testing contaminated unit root processes. In the empirical study, the alternative DF rank test suggested by Granger and Hallman (J Time Ser Anal 12(3):207–224, 1991) (GH) is also considered. In Fotopoulos and Ahn (J Time Ser Anal 24(6):647–662, 2003), these unit root rank tests were analytically and empirically investigated and compared to the DF test, but with outlier-free processes. The results provided in this paper thus complement those works, but in the context of time series with additive outliers. Like the DF and GH unit root tests, the BG test is shown to be sensitive to AO contaminations, but less severely. In practical situations where additive outliers are suspected, the general conclusion is that the DF and GH unit root tests should be avoided; the BG approach, however, can still be used.
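An illustrative simulation of the comparison (our own sketch, not the paper's experiments; note the rank statistic has its own critical values, not the DF ones):

```python
import numpy as np
from scipy.stats import rankdata
from statsmodels.tsa.stattools import adfuller

# Contaminate a unit root process with a few additive outliers and compare
# the DF statistic on the raw series with a BG-style rank version (the DF
# regression applied to the ranks of the series).
rng = np.random.default_rng(1)
n = 200
y = np.cumsum(rng.standard_normal(n))                 # random walk (unit root)
y_ao = y.copy()
y_ao[rng.choice(n, size=3, replace=False)] += 10.0    # additive outliers

for label, x in [("clean", y), ("with AOs", y_ao)]:
    df_stat = adfuller(x, maxlag=0)[0]                # Dickey-Fuller t-statistic
    rank_stat = adfuller(rankdata(x), maxlag=0)[0]    # rank counterpart
    print(f"{label:10s} DF: {df_stat:6.2f}   rank-DF: {rank_stat:6.2f}")
```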

7.
The literature on neighbor designs, as introduced by Rees (Biometrics 23:779–791, 1967), is mainly devoted to construction methods, providing few results on their statistical properties, such as efficiency and optimality. A review of the available literature, with special emphasis on the optimality of neighbor designs under various fixed effects interference models, is given in Filipiak and Markiewicz (Commun Stat Theory Methods 46:1127–1143, 2017). The aim of this paper is to verify whether the designs presented by Filipiak and Markiewicz (2017) as universally optimal under fixed interference models remain universally optimal under models with random interference effects. Moreover, it is shown that for a specified covariance matrix of random interference effects, a universally optimal design under mixed interference models with block effects is universally optimal over a wider class of designs. In this paper the method presented by Filipiak and Markiewicz (Metrika 65:369–386, 2007) is extended and then applied to mixed interference models with and without block effects.

8.
Sanyu Zhou, Metrika, 2017, 80(2):187–200
A simultaneous confidence band is a useful statistical tool in a simultaneous inference procedure. In recent years several papers have been published that consider various applications of simultaneous confidence bands; see for example Al-Saidy et al. (Biometrika 59:1056–1062, 2003), Liu et al. (J Am Stat Assoc 99:395–403, 2004), Piegorsch et al. (J R Stat Soc 54:245–258, 2005) and Liu et al. (Aust N Z J Stat 55(4):421–434, 2014). In this article, we provide methods for constructing one-sided hyperbolic simultaneous confidence bands for both the multiple regression model over a rectangular region and the polynomial regression model over an interval. These methods use numerical quadrature. Examples are included to illustrate the methods. These approaches can be applied to more general regression models, such as fixed-effect or random-effect generalized linear regression models, to construct large sample approximate one-sided hyperbolic simultaneous confidence bands.
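The paper's constants come from numerical quadrature; as a crude stand-in (our own sketch; the grid and helper names are illustrative assumptions), the critical constant of a one-sided hyperbolic band can be approximated by Monte Carlo:

```python
import numpy as np

def critical_constant(X, grid, alpha=0.05, nsim=20000, seed=0):
    # One-sided hyperbolic lower band: yhat(x) - c * s * sqrt(x'(X'X)^{-1}x).
    # Approximate c by simulating the sup over the grid of the pivotal ratio
    # x'(betahat - beta) / (s * sqrt(x'(X'X)^{-1}x)), with sigma = 1 wlog.
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    L = np.linalg.cholesky(XtX_inv)
    halfwidth = np.sqrt(np.einsum("ij,jk,ik->i", grid, XtX_inv, grid))
    sups = np.empty(nsim)
    for r in range(nsim):
        delta = L @ rng.standard_normal(p)             # betahat - beta
        s = np.sqrt(rng.chisquare(n - p) / (n - p))    # s / sigma
        sups[r] = np.max(grid @ delta / (s * halfwidth))
    return np.quantile(sups, 1 - alpha)

X = np.column_stack([np.ones(30), np.linspace(0, 1, 30)])     # straight-line fit
grid = np.column_stack([np.ones(50), np.linspace(0, 1, 50)])  # covariate region
print(critical_constant(X, grid))   # larger than the pointwise one-sided t quantile
```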

9.
The productive efficiency of a firm can be seen as composed of two parts, one persistent and one transient. The received empirical literature on the measurement of productive efficiency has paid relatively little attention to the difference between these two components. Ahn and Sickles (Econ Rev 19(4):461–492, 2000) suggested some approaches that pointed in this direction. The possibility was also raised in Greene (Health Econ 13(10):959–980, 2004. doi:10.1002/hec.938), who expressed some pessimism over the possibility of distinguishing the two empirically. Recently, Colombi (A skew normal stochastic frontier model for panel data, 2010) and Kumbhakar and Tsionas (J Appl Econ 29(1):110–132, 2012), in a milestone extension of the stochastic frontier methodology, have proposed a tractable model based on panel data that promises to provide separate estimates of the two components of efficiency. The approach developed in the original presentation proved very cumbersome to implement in practice; Colombi (2010) notes that FIML estimation of the model is 'complex and time consuming.' In a sequence of papers, Colombi (2010), Colombi et al. (A stochastic frontier model with short-run and long-run inefficiency random effects, 2011; J Prod Anal, 2014), Kumbhakar et al. (J Prod Anal 41(2):321–337, 2012) and Kumbhakar and Tsionas (2012) have suggested other strategies, including a four-step least squares method. The main point of this paper is that full maximum likelihood estimation of the model is neither complex nor time consuming. The extreme complexity of the log likelihood noted in Colombi (2010) and Colombi et al. (2011, 2014) is reduced by using simulation and exploiting the Butler and Moffitt (Econometrica 50:761–764, 1982) formulation. In this paper, we develop a practical full information maximum simulated likelihood estimator for the model. The approach is very effective and strikingly simple to apply, and it uses all of the sample distributional information to obtain the estimates. We also implement the panel data counterpart of the Jondrow et al. (J Econ 19(2–3):233–238, 1982) estimator for technical or cost inefficiency. The technique is applied in a study of the cost efficiency of Swiss railways.
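A minimal sketch of the simulated likelihood idea (model notation assumed; this is not Greene's implementation): conditional on draws of the two firm-level effects, the transient part has a closed-form skew-normal density, so each firm's likelihood is a plain average over draws.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def neg_sim_loglik(theta, y, X, ids, z):
    # Four-component frontier: y_it = x_it'b + alpha_i - eta_i + v_it - u_it,
    # alpha_i ~ N(0, sa^2), eta_i ~ |N(0, se^2)| (persistent inefficiency),
    # v_it ~ N(0, sv^2), u_it ~ |N(0, su^2)| (transient inefficiency).
    # z: (n_firms, R, 2) fixed standard normal draws (common random numbers).
    k = X.shape[1]
    b, (sa, se, sv, su) = theta[:k], np.exp(theta[k:])   # positivity via logs
    sig = np.hypot(sv, su)
    lam = su / sv
    e = y - X @ b
    ll = 0.0
    for i, zi in enumerate(z):
        ei = e[ids == i]                                  # firm i's residuals
        comp = ei - sa * zi[:, :1] + se * np.abs(zi[:, 1:])   # (R, T_i)
        # skew-normal density of v - u, evaluated given each draw
        dens = (2.0 / sig) * norm.pdf(comp / sig) * norm.cdf(-lam * comp / sig)
        ll += np.log(dens.prod(axis=1).mean())            # average over R draws
    return -ll

# Usage sketch: fixed draws keep the simulated likelihood smooth in theta.
# rng = np.random.default_rng(0); z = rng.standard_normal((n_firms, 200, 2))
# fit = minimize(neg_sim_loglik, theta0, args=(y, X, ids, z), method="BFGS")
```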

10.
Rosen [13], Freeman [4], Halvorsen and Pollakowski [6], and others have stressed that economic theory does not suggest an appropriate functional form for hedonic price functions. It consequently is reasonable to try several functional forms and use the multiple regression equation with the best performance. In this spirit, Halvorsen and Pollakowski [6] recommend using the Box-Cox flexible functional form for hedonic analysis and measuring best performance with a goodness-of-fit test. The Box-Cox methodology has also been adopted in hedonic studies by Goodman [5], Linneman [10], Blomquist and Worley [1], and Eberts and Gronberg [3]. The Box-Cox is particularly suited for testing functional forms because many familiar forms such as semilog, log linear, and translog are subsets of the flexible Box-Cox, permitting nested hypothesis testing. In this note, we illustrate that the formal hypothesis-testing advantage of the Box-Cox functional form is purchased at the expense of other important goals. The goal of most hedonic studies is to estimate the prices of the characteristics, to measure the response to changes in the prices, and/or to predict future expenditures. Using a best-fit criterion to choose functional forms does not necessarily lead to more accurate estimates of characteristic prices. In fact, the large number of coefficients estimated with the Box-Cox functional form reduces the accuracy of any single coefficient, which could lead to poorer estimates of specific prices. Second, because any negative number raised to a noninteger real power is imaginary, the traditional Box-Cox functional form is not suited to any data set containing negative numbers. Third, the Box-Cox functional form may be inappropriate for prediction. Since the mean predicted value of the untransformed dependent variable need not equal the mean of the sample upon which it is estimated, the predicted untransformed variable (housing value) will be biased. The predicted untransformed dependent variable may also be imaginary. Fourth, the nonlinear transformation results in complex estimates of slopes and elasticities which are often too cumbersome to use properly. We discuss each of these drawbacks and quantify them when possible in the remainder of this note.
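Two of these drawbacks are easy to see directly from the transform itself (illustrative snippet with a hypothetical helper, not from the note):

```python
import numpy as np

def boxcox(y, lam):
    # Box-Cox transform: log(y) at lambda = 0, else (y^lambda - 1) / lambda.
    # Undefined for non-positive y at non-integer lambda; and because the
    # transform is nonlinear, naively back-transforming fitted values is
    # biased, since E[g^{-1}(yhat)] != g^{-1}(E[yhat]).
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0.0 else (y ** lam - 1.0) / lam

print(boxcox([2.0, 0.5], 0.5))    # fine for positive data
print(boxcox([-1.0], 0.5))        # nan: negative base, non-integer power
```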

11.
The relevance of risk preference and forecasting accuracy for investor survival has recently been the focus of a series of theoretical and simulation studies. At one extreme, it has been proven that risk preference can be entirely irrelevant (Sandroni in Econometrica 68:1303–1341, 2000; Blume and Easley in Econometrica 74(4):929–966, 2006). However, the agent-based computational approach indicates that risk preference matters and can be more relevant for survivability than forecasting accuracy (Chen and Huang in Advances in Natural Computation, Springer, Berlin, 2005; J Econ Behav Organ 67(3):702–717, 2008; Huang in J Econ Interact Coord, 2015). Chen and Huang (Inf Sci 177(5):1222–1229, 2007, 2008) further explained that it is the saving behavior of traders that determines their survivability. However, institutional investors, who are the most influential investors in modern financial markets, do not have to consider saving decisions. Additionally, traders in the above series of theoretical and simulation studies learned to forecast the stochastic process that determines which asset will pay dividends, not the market prices and dividends themselves. To relate the research on survivability to issues concerning the efficient markets hypothesis, it is better to endow agents with the ability to forecast market prices and dividends. With the Santa Fe Artificial Stock Market, where traders do not have to consider saving decisions and can learn to forecast both asset prices and dividends, we revisit the issue of survivability and market efficiency. We find that the main finding of Chen and Huang (2008), that risk preference is much more relevant for survivability than forecasting accuracy, still holds for a wide range of market conditions but can fail when the baseline dividend becomes very small. Moreover, the advantage of traders who are less averse to risk is revealed in the market where saving decisions are not taken into account. Finally, Huang's (2015) argument regarding the degree of market inefficiency is confirmed.

12.
The present study proposes an analysis process to compare and contrast different approaches to content analysis. Building on previous findings (Righettini and Sbalchiero, ICPP International Conference on Public Policy, 2015) related to consumer protection in the annual speeches of the Italian Presidents of AGCOM, delivered between 2000 and 2015, statistical analyses of textual data are applied to the same set of texts in order to compare and contrast results and to evaluate the opportunity of integrating different approaches to enrich the results. This review of results resorts to topic-based methods for the classification of context units (Reinert, Les Cah l'Anal Donnees 8(2):187–198, 1983), text clustering and lexical correspondence analysis (Lebart et al., Exploring Textual Data, 1998) in a general framework of content analysis and 'lexical worlds' exploration (Reinert, Lang Soc 66:5–39, 1993), i.e., the identification of the main topics and words used by AGCOM Presidents to talk about consumer protection. Results confirm the strengths and opportunities of the topic detection approach and shed light on how quantitative methods might become useful to political scientists as available policy documents increase in number and size. One methodological innovation of this article is that it supplements the use of word categories in traditional content analysis with an automated topic analysis which overcomes the problems of reliability, replicability, and inferential circularity.

13.
Junius and Oosterhaven (2003) present a RAS matrix balancing variant that can incorporate negative elements in the balancing. There are, however, a couple of issues in the approach described: the first is the handling of zeros in the initial estimate, and the second is the formulation of their minimum-information principle. We present a corrected exposition of GRAS.
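A compact sketch of the GRAS iteration (our own minimal implementation in the spirit of Junius and Oosterhaven 2003 with later corrections; degenerate zero-margin cases are ignored): the matrix is split into positive and negative parts, and the row and column multipliers each solve a quadratic so the balanced matrix hits the prescribed totals.

```python
import numpy as np

def gras(A, u, v, tol=1e-10, max_iter=1000):
    # Balance A (which may contain negatives) to row totals u and column
    # totals v via x_ij = r_i * p_ij * s_j - n_ij / (r_i * s_j), where
    # P holds the positive entries of A and N the magnitudes of the negatives.
    P = np.where(A > 0, A, 0.0)
    N = np.where(A < 0, -A, 0.0)
    r = np.ones(A.shape[0])
    for _ in range(max_iter):
        p, n = r @ P, (1.0 / r) @ N            # column sums given current r
        s = np.where(p > 0, (v + np.sqrt(v**2 + 4*p*n)) / (2*p), -n / v)
        p, n = P @ s, N @ (1.0 / s)            # row sums given current s
        r_new = np.where(p > 0, (u + np.sqrt(u**2 + 4*p*n)) / (2*p), -n / u)
        if np.max(np.abs(r_new - r)) < tol:
            r = r_new
            break
        r = r_new
    return np.outer(r, s) * P - np.outer(1.0/r, 1.0/s) * N

A = np.array([[7.0, -2.0], [3.0, 4.0]])
X = gras(A, u=np.array([6.0, 8.0]), v=np.array([10.0, 4.0]))
print(X.sum(axis=1), X.sum(axis=0))   # margins match u and v; signs preserved
```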

14.
We present eight existing projection methods and test their relative performance in estimating Supply and Use tables (SUTs) of the Netherlands and Spain. Some of the methods presented have received little attention in the literature, and some have been slightly revised to better deal with negative elements and preserve the signs of original matrix entries. We find that (G)RAS and the methods proposed by Harthoorn and van Dalen (1987) and Kuroda (1988) produce the best estimates for the data in question. Their relative success also suggests the stability of ratios of larger transactions.
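For reference, a minimal sketch of plain RAS, the biproportional baseline that (G)RAS generalizes (helper name and toy margins are ours; consistent margins and no zero rows or columns assumed):

```python
import numpy as np

def ras(A, u, v, tol=1e-12, max_iter=1000):
    # Alternately scale rows to the target totals u and columns to the
    # target totals v until both sets of margins are satisfied.
    X = np.asarray(A, dtype=float).copy()
    for _ in range(max_iter):
        X *= (u / X.sum(axis=1))[:, None]    # row scaling
        X *= (v / X.sum(axis=0))[None, :]    # column scaling
        if np.allclose(X.sum(axis=1), u, atol=tol):
            return X
    return X

A = np.array([[4.0, 2.0], [1.0, 3.0]])
print(ras(A, u=np.array([10.0, 5.0]), v=np.array([8.0, 7.0])))
```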

15.
Junius and Oosterhaven (2003) developed the GRAS algorithm, which minimizes the information gain when updating input–output tables with both positive and negative signs. Jackson and Murray (2004), however, claim that minimizing squared differences in coefficients produces a smaller information gain, which is theoretically impossible. In this comment, calculation errors are sorted out from differences in measures, and it is shown that the information gain needs to be taken in absolute terms when increasing and decreasing cell values occur together. The numerical results show that GRAS outperforms both sign-preserving alternatives in all but one comparison of lesser economic importance. Moreover, as opposed to the result of Jackson and Murray, they show that minimizing absolute differences consistently outperforms minimizing squared differences, which overweights large errors in small coefficients.

16.
The stochastic search variable selection proposed by George and McCulloch (J Am Stat Assoc 88:881–889, 1993) is one of the most popular variable selection methods for linear regression models. Many efforts have been made in the literature to improve its computational efficiency; however, most of them change its original Bayesian formulation, so the comparisons are not fair. This work focuses on how to improve the computational efficiency of stochastic search variable selection while keeping its original Bayesian formulation unchanged. The improvement is achieved by developing a new Gibbs sampling scheme, different from that of George and McCulloch (1993). A remarkable feature of the proposed scheme is that it samples the regression coefficients from their posterior distributions in a componentwise manner, so that the expensive computation of the inverse of the information matrix, which is involved in the algorithm of George and McCulloch (1993), can be avoided. Moreover, since the original Bayesian formulation remains unchanged, stochastic search variable selection using the proposed Gibbs sampling scheme is as efficient as that of George and McCulloch (1993) in terms of assigning large probabilities to promising models. Some numerical results support these findings.
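A minimal sketch of the componentwise idea (our own illustration with assumed hyperparameters, not the authors' sampler): each coefficient is drawn from its univariate conditional posterior, so no k-by-k matrix inversion is needed per sweep.

```python
import numpy as np

def ssvs_gibbs(X, y, n_iter=2000, tau2=0.01, c2=100.0, p=0.5,
               a0=1.0, b0=1.0, seed=0):
    # Spike-and-slab prior: beta_j | gamma_j ~ N(0, tau2) or N(0, c2*tau2);
    # gamma_j ~ Bernoulli(p); sigma2 ~ inverse gamma(a0, b0).
    rng = np.random.default_rng(seed)
    n, k = X.shape
    beta, gamma, sigma2 = np.zeros(k), np.ones(k, dtype=bool), 1.0
    xtx = np.einsum('ij,ij->j', X, X)            # x_j'x_j, precomputed once
    resid = y - X @ beta
    draws = np.zeros((n_iter, k))
    for it in range(n_iter):
        for j in range(k):
            resid += X[:, j] * beta[j]           # remove j's contribution
            v_j = c2 * tau2 if gamma[j] else tau2
            prec = xtx[j] / sigma2 + 1.0 / v_j   # univariate conditional posterior
            mean = (X[:, j] @ resid / sigma2) / prec
            beta[j] = rng.normal(mean, np.sqrt(1.0 / prec))
            # update gamma_j given the new beta_j
            lw1 = np.log(p) - 0.5*np.log(c2*tau2) - beta[j]**2 / (2*c2*tau2)
            lw0 = np.log(1-p) - 0.5*np.log(tau2) - beta[j]**2 / (2*tau2)
            gamma[j] = rng.random() < 1.0 / (1.0 + np.exp(lw0 - lw1))
            resid -= X[:, j] * beta[j]           # restore full residual
        sigma2 = 1.0 / rng.gamma(a0 + n/2, 1.0 / (b0 + resid @ resid / 2))
        draws[it] = gamma
    return draws.mean(axis=0)    # inclusion probabilities (discard burn-in in practice)
```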

17.
This study aims to analyze the link between the construction of an effective psychological contract with the organization and the success of the socialization process. To this end, 241 employees of a call center organization were contacted. A questionnaire composed of measures of Organizational Socialization (Haueter et al., Journal of Vocational Behavior, 63, 20–39, 2003), Psychological Contract (Rousseau 1995), Job Satisfaction (Wanous et al., Journal of Applied Psychology, 82, 247–252, 1997) and Organizational Commitment (Allen and Meyer 1990) was administered. Results underline that organizational socialization may influence the development of the psychological contract, thus determining job satisfaction and organizational commitment. This research has been developed from an interdisciplinary perspective, taking into account the peculiarity of the Italian legal framework. In this regard, the analysis has focused on how the E.U. flexicurity strategy has been implemented in Italy, according to the recent reform of labour market regulation (2012–13), and on the specific regulations introduced for call centres.

18.
This paper is concerned with contrasting the impact of globalization pressures on industrial development in particular localities, with specific reference to the relative performance of regional clusters. A multiple case study approach is adopted in order to examine the decline of volume yacht manufacturing in a long-established English cluster and to compare its responses to globalization with those of major competitors located in other parts of Europe. The case study opens with an analysis of three sector-specific drivers of globalization that have exercised a decisive impact on the sector over the last three decades. In the main analytical section, two alternative approaches to the analysis of clusters (Porter 1990, 2000; Best 2001) are applied to the empirical material. The application of Porter's 'diamond' framework suggests some distinctive performance-related characteristics, while Best's 'cluster dynamics' model provides a more sophisticated explanation of the differential responses and outcomes identified in the English case. The implications for policy are that cluster-level outcomes may be predicated on the internal dynamics of their respective 'entrepreneurial firms', and that regional development initiatives would benefit from conceptual and empirical studies that can better address the historical and spatial complexity of the underlying processes.

19.
Lyu Ni, Fang Fang, Fangjiao Wan, Metrika, 2017, 80(6–8):805–828
Huang et al. (J Bus Econ Stat 32:237–244, 2014) first proposed a Pearson chi-square based feature screening procedure tailored to the multi-classification problem with ultrahigh dimensional categorical covariates, which is common in practice but has seldom been discussed in the literature. However, their work establishes the sure screening property only in a limited setting. Moreover, the p value based adjustments used when the number of categories involved by each covariate differs do not work well in several practical situations. In this paper, we propose an adjusted Pearson chi-square feature screening procedure and a modified method for tuning parameter selection. Theoretically, we establish the sure screening property of the proposed method in general settings. Empirically, the proposed method is more successful than Pearson chi-square feature screening in handling non-equal numbers of covariate categories in finite samples. Results of three simulation studies and one real data analysis are presented. Our work, together with Huang et al. (2014), establishes a solid theoretical foundation and empirical evidence for the family of Pearson chi-square based feature screening methods.
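A bare-bones version of the unadjusted screening statistic (illustrative helper, not the authors' adjusted procedure): rank each categorical covariate by the Pearson chi-square statistic of its covariate-by-class contingency table and keep the top d.

```python
import numpy as np
from scipy.stats import chi2_contingency

def pearson_chi2_screen(X, y, d):
    # X: (n, p) categorical covariates; y: (n,) class labels.
    # Build the contingency table of each covariate against the class label,
    # compute its Pearson chi-square statistic, and return the top-d indices.
    stats = []
    for j in range(X.shape[1]):
        table = np.array([[np.sum((X[:, j] == a) & (y == b))
                           for b in np.unique(y)]
                          for a in np.unique(X[:, j])])
        stats.append(chi2_contingency(table)[0])
    return np.argsort(stats)[::-1][:d]
```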

20.
In recent years, a large number of empirical articles on structural decomposition analysis, which aims at disentangling an aggregate change in a variable into its r factors, have been published in this journal. Commonly used methods are the average of the two polar decompositions and the average of all r! elementary decompositions (Dietzenbacher and Los, 1998, D&L). We propose instead the 'ideal' Montgomery decomposition, meaning that it satisfies the requirement of factor reversal imposed in index number theory. We prefer it to the methods previously mentioned: the average of the two polar decompositions is not 'ideal', so the outcome depends on the ordering of the factors; the average of all elementary decompositions is 'ideal', but requires the computation of an ever-increasing number of decompositions as the number of factors increases. Application to the example of D&L (four factors) shows that the three methods yield results that are close to each other.
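A two-factor toy sketch (our own illustration with scalar factors) of the decompositions being compared; both reproduce the total change exactly, but only Montgomery's is 'ideal' in the factor-reversal sense:

```python
import numpy as np

def polar_average(B0, F0, B1, F1):
    # Average of the two polar decompositions of x1 - x0 = B1*F1 - B0*F0.
    dB = 0.5 * ((B1 - B0) * F0 + (B1 - B0) * F1)
    dF = 0.5 * (B0 * (F1 - F0) + B1 * (F1 - F0))
    return dB, dF

def montgomery(B0, F0, B1, F1):
    # Montgomery decomposition: weight each factor's log-change by the
    # logarithmic mean of x0 and x1, so the effects add up exactly and
    # the decomposition is symmetric in the factors.
    x0, x1 = B0 * F0, B1 * F1
    L = (x1 - x0) / np.log(x1 / x0)          # logarithmic mean
    return L * np.log(B1 / B0), L * np.log(F1 / F0)

# Both decompositions reproduce the total change x1 - x0 = 2.5*4.0 - 2.0*3.0:
print(sum(polar_average(2.0, 3.0, 2.5, 4.0)))
print(sum(montgomery(2.0, 3.0, 2.5, 4.0)))
```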
