Similar Documents
20 similar documents found (search time: 31 ms)
1.
This paper considers a spatial panel data regression model with serial correlation on each spatial unit over time as well as spatial dependence between the spatial units at each point in time. In addition, the model allows for heterogeneity across the spatial units using random effects. The paper then derives several Lagrange multiplier tests for this panel data regression model, including a joint test for serial correlation, spatial autocorrelation and random effects. These tests draw upon two strands of earlier work. The first is the LM tests for the spatial error correlation model discussed in Anselin and Bera [1998. Spatial dependence in linear regression models with an introduction to spatial econometrics. In: Ullah, A., Giles, D.E.A. (Eds.), Handbook of Applied Economic Statistics. Marcel Dekker, New York] and in the panel data context by Baltagi et al. [2003. Testing panel data regression models with spatial error correlation. Journal of Econometrics 117, 123–150]. The second is the LM tests for the error component panel data model with serial correlation derived by Baltagi and Li [1995. Testing AR(1) against MA(1) disturbances in an error component model. Journal of Econometrics 68, 133–151]. Hence, the joint LM test derived in this paper encompasses those derived in both strands of earlier work. In fact, in the context of our general model, the earlier LM tests become marginal LM tests that ignore either serial correlation over time or spatial error correlation. The paper then derives conditional LM and LR tests that do not ignore these correlations and contrasts them with their marginal LM and LR counterparts. The small sample performance of these tests is investigated using Monte Carlo experiments. As expected, ignoring any correlation when it is significant can lead to misleading inference.

2.
This study develops an off-site emergency response plan for a nuclear power plant in Gujarat, India, subject to time constraints with resource limitations and risk of radiation exposure to victims. We formulate an optimization model to capture the effect of delay in evacuation, limited resource availability, and costs associated with resource allocation. A single chain closed queuing network model with class switching is used to model traffic congestion during evacuation. The throughput measures from the queuing network are used as inputs in the optimization model. Further, two resource allocation strategies are suggested and a genetic algorithm is used for optimizing resource utilization and evacuation risk. The results indicate that pooling resources among a cluster of affected areas is most suitable for evacuation. Numerical experiments are conducted to analyze the time trade-offs and the effect of service time variability on the expected evacuation time. The proposed model can serve as an important resource planning and allocation tool for emergency evacuation.

3.
Using two studies, we examine the dilution effect for green products, by testing whether advertising green benefits decreases their perceived instrumentality and thus harms sustainable development. We use a between‐subject design and ask participants to evaluate the efficacy of a pen (Study 1) and a dish detergent (Study 2) with and without environmental attributes. Our results are inconsistent with the predictions of the dilution model because the perceived instrumentality of both products does not decrease when environmental benefits are added. Our findings are relevant for eco‐labeling given anecdotal evidence suggesting that adding green information can harm the perceived quality of products.

4.
We consider the problems of estimation and testing in models with serially correlated discrete latent variables. A particular case of this is the time series regression model in which a discrete explanatory variable is measured with error. Test statistics are derived for detecting serial correlation in such a model. We then show that the likelihood function can be evaluated by a recurrence relation, and thus maximum likelihood estimation is computationally feasible. An illustrative example of these methods is given, followed by a brief discussion of their applicability to a Markov model of switching regressions.
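The recurrence relation the abstract refers to is, in modern terms, the forward recursion for a Markov-switching latent state. A minimal sketch, assuming an illustrative two-state chain with Gaussian measurement error (the transition matrix, initial distribution, and emission densities below are made-up examples, not the paper's specification):

```python
import math

def forward_likelihood(obs, P, pi, emit):
    """Evaluate the likelihood by the forward recurrence:
    alpha_t(j) = emit(j, y_t) * sum_i alpha_{t-1}(i) * P[i][j]."""
    n = len(pi)
    alpha = [pi[j] * emit(j, obs[0]) for j in range(n)]
    for y in obs[1:]:
        alpha = [emit(j, y) * sum(alpha[i] * P[i][j] for i in range(n))
                 for j in range(n)]
    return sum(alpha)  # marginalize over the final latent state

# Illustrative Gaussian emission density with state-dependent means.
def emit(state, y, means=(0.0, 2.0), sd=1.0):
    z = (y - means[state]) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2 * math.pi))

P = [[0.9, 0.1], [0.2, 0.8]]   # persistent, serially correlated states
pi = [2 / 3, 1 / 3]            # stationary distribution of P
lik = forward_likelihood([0.1, 0.3, 2.1], P, pi, emit)
```

The recursion costs O(T n²) rather than the O(nᵀ) of enumerating all latent paths, which is what makes maximum likelihood estimation computationally feasible.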

5.
The recent housing market boom and bust in the United States illustrates that real estate returns are characterized by short-term positive serial correlation and long-term mean reversion to fundamental values. We develop an econometric model that includes these two components, but with weights that vary dynamically through time depending on recent forecasting performances. The smooth transition weighting mechanism can assign more weight to positive serial correlation in boom times, and more weight to reversal to fundamental values during downturns. We estimate the model with US national house price index data. In-sample, the switching mechanism significantly improves the fit of the model. In an out-of-sample forecasting assessment the model performs better than competing benchmark models.
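A smooth transition weighting of two forecast components can be sketched as follows; the logistic transition function, the performance indicator `s`, and the parameters `gamma` and `c` are illustrative assumptions, not the authors' estimated specification:

```python
import math

def smooth_transition_forecast(momentum, reversion, s, gamma=5.0, c=0.0):
    """Blend a momentum forecast and a mean-reversion forecast with a
    logistic weight driven by a recent-performance indicator s."""
    w = 1.0 / (1.0 + math.exp(-gamma * (s - c)))  # w -> 1 in "boom" regimes
    return w * momentum + (1.0 - w) * reversion

# When s is large (momentum has been forecasting well), the combined
# forecast leans on the momentum component; when s is low, on reversion.
f_boom = smooth_transition_forecast(momentum=0.02, reversion=-0.01, s=2.0)
f_bust = smooth_transition_forecast(momentum=0.02, reversion=-0.01, s=-2.0)
```

The smoothness parameter `gamma` controls how abruptly the weight switches between regimes; as `gamma` grows, the model approaches a hard threshold switch.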

6.
Economists frequently encounter data which are subject to different temporal aggregation. In this paper we give a maximum likelihood approach to these problems with a minimum of mathematical manipulation. We show that the best prediction of the data by related series and efficient estimation of parameters are inseparable. The relative efficiency of the maximum likelihood estimator compared with other estimators is also indicated.

7.
This paper considers a panel data regression model with heteroskedastic as well as serially correlated disturbances, and derives a joint LM test for homoskedasticity and no first order serial correlation. The restricted model is the standard random individual error component model. It also derives a conditional LM test for homoskedasticity given serial correlation, as well as a conditional LM test for no first order serial correlation given heteroskedasticity, all in the context of a random effects panel data model. Monte Carlo results show that these tests along with their likelihood ratio alternatives have good size and power under various forms of heteroskedasticity including exponential and quadratic functional forms.

8.
This study investigates how to reduce future barriers to succession and other problems related to family governance by constructing a succession roadblock map. The study explores succession roadblocks in family businesses and provides a succession planning tool that is based on empirical data from 42 director members of the Taiwan Institute of Directors. An analytical hierarchy of family business succession is constructed, and succession roadblocks are divided into three categories: family roadblocks, institutional roadblocks, and market roadblocks. Next, this study calculates the weights and rankings of the severity of such roadblocks and the likelihood of their occurrence. Specifically, this study constructs a succession roadblock matrix that categorizes succession roadblocks into four categories: the ownership dilution model, sell or withdraw model, ownership management model, and dispersive ownership model. This study also establishes a roadblock strategy matrix for successor positioning and proposes suggestions for practical strategic planning to overcome the challenges of succession roadblocks.

9.
Multilevel growth curve models for repeated measures data have become increasingly popular and stand as a flexible tool for investigating longitudinal change in students’ outcome variables. In addition, these models allow the estimation of school effects on students’ outcomes, although they make strong assumptions about the serial independence of level-1 residuals. This paper introduces a method which takes into account the serial correlation of level-1 residuals and also introduces such serial correlation at level-2 in a complex double serial correlation (DSC) multilevel growth curve model. The results of this study from both real and simulated data show a great improvement in school effects estimates compared to those that have previously been found using multilevel growth curve models without correcting for DSC for both the students’ status and growth criteria.

10.
Trust mechanisms, as a solution to security problems in peer-to-peer networks, have been widely studied internationally and have produced many important results. This article analyzes the characteristics, properties and components of trust mechanisms, focusing on one P2P network trust model, a group-based local trust model, and outlines possible directions for future development.

11.
In this paper we consider a general oligopoly model introduced by Dafermos and Nagurney which subsumes some models treated earlier numerically and give alternative variational inequality formulations of the governing Nash equilibrium defined over Cartesian products of sets. We then exploit these formulations to construct non-linear and linearized serial decomposition schemes by firms and by firm and demand market pairs. We also introduce a new equilibration operator to solve the embedded mathematical programming problems. Our computational experience for randomly generated examples for three classes of general problems suggests strongly that the linearized Gauss-Seidel decomposition method by firms is computationally superior.

12.
Recently, there has been a renewed interest in the class of stochastic blockmodels (SBM) and their applications to multi-subject brain networks. In our most recent work, we have considered an extension of the classical SBM, termed heterogeneous SBM (Het-SBM), that models subject variability in the cluster-connectivity profiles through the addition of a logistic regression model with subject-specific covariates on the level of each block. Although this model has proved to be useful in both the clustering and inference aspects of multi-subject brain network data, including fleshing out differences in connectivity between patients and controls, it does not account for dependencies that may exist within subjects. To overcome this limitation, we propose an extension of Het-SBM, termed Het-Mixed-SBM, in which we model the within-subject dependencies by adding subject- and block-level random intercepts in the embedded logistic regression model. Using synthetic data, we investigate the accuracy of the partitions estimated by our proposed model as well as the validity of inference procedures based on the Wald and permutation tests. Finally, we illustrate the model by analyzing the resting-state fMRI networks of 99 healthy volunteers from the Human Connectome Project (HCP) using covariates like age, gender, and IQ to explain the clustering patterns observed in the data.

13.
We introduce the papers appearing in the special issue of this journal associated with WEHIA 2015. The papers in this issue deal with two growing fields in the literature inspired by the complexity-based approach to economic analysis. The first group of contributions develops network models of financial systems and shows how these models can shed light on relevant issues that emerged in the aftermath of the last financial crisis. The second group of contributions deals with the issue of validation of agent-based models. Agent-based models have proven extremely useful for accounting for key features of economic dynamics that are usually neglected by more standard models. At the same time, agent-based models have been criticized for the lack of adequate validation against empirical data. The works in this issue propose useful techniques to validate agent-based models, thus contributing to the wider diffusion of these models in the economic discipline.

14.
The problem of determining the appropriate stock replenishment quantity within a time-phased requirements planning environment has received considerable research attention in recent years. Relative performance characteristics of lot-sizing policies have been assessed as a function of the cost structure, the demand pattern, the product structure, forecast error, the length of the planning horizon, and the interaction between replenishment quantities and sequencing decisions. In particular, the relationship between lot-sizing behavior and variability in the requirements profile has been intensely investigated. However, despite these efforts, the empirical evidence linking lot-sizing performance with demand variability remains inconclusive. This article suggests that, in part, some of the ambiguity in the literature may be an artifact of a failure to adequately control for other important dimensions of simulated demand sequences. The features that have been thought to describe “lumpy” requirements profiles are discussed and the characteristic of periodicity or time-dependency in the demand entries is identified as a variable that has been insufficiently controlled in prior work. A reanalysis of the demand sequences originally published by Kaimann, and subsequently used in a number of comparative lot-sizing studies, reveals that the patterns differ not only in variability as measured by the coefficient of variation, but also in terms of correlation structure as described by the autocorrelation function. Alternative methods for simulating demand sequences are reviewed and a correlation transfer technique, which has the capability to simultaneously control both the degree of variability and correlation, is suggested as an improved method for the generation of synthetic sequences of “lumpy” demand. Using this technique, five of Kaimann's original sequences are rearranged, resulting in three sets of sequences differing only in the strength of serial correlation.
Four lot-sizing procedures are applied to each of these sets to discern if the correlation structure has any appreciable effect on lot-sizing performance. Results indicate that, on average, higher total inventory costs are experienced when the demand environment is characterized by randomness. Economic order quantity and part-period balancing achieve lowest average costs when confronted with highly autocorrelated demand or patterns of few runs; conversely, minimum cost per period and Wagner-Whitin perform best under conditions of many runs. Both economic order quantity and part-period balancing perform most favorably in comparison to Wagner-Whitin when runs are few. In addition, there appears to be a potential interaction between the level of demand variability and the degree of serial correlation. This finding is somewhat disconcerting since high variability demand sequences used in some prior research were also characterized by relatively high levels of autocorrelation; hence it becomes most difficult to identify and decompose the unique influences of each demand pattern dimension on lot-sizing behavior. Because of this phenomenon, it is suggested that future studies direct greater attention to the demand simulating methodology than has heretofore been accorded.
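The two demand-pattern dimensions the article argues must be controlled separately, variability (coefficient of variation) and serial correlation (autocorrelation), can be illustrated with a toy rearrangement; the demand values below are made up for illustration and are not Kaimann's published sequences:

```python
def coefficient_of_variation(xs):
    """Std. deviation divided by the mean (population variance)."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return (var ** 0.5) / mean

def autocorr(xs, lag=1):
    """Sample autocorrelation at the given lag."""
    mean = sum(xs) / len(xs)
    denom = sum((x - mean) ** 2 for x in xs)
    num = sum((xs[t] - mean) * (xs[t + lag] - mean)
              for t in range(len(xs) - lag))
    return num / denom

# Two rearrangements of the SAME demand values: identical CV, very
# different correlation structure ("few runs" vs. "many runs").
runs_few = [0, 0, 0, 0, 50, 50, 50, 50]   # positive serial correlation
runs_many = [0, 50, 0, 50, 0, 50, 0, 50]  # alternating, negative correlation
```

Because rearranging a sequence leaves its coefficient of variation unchanged while altering its autocorrelation, studies that vary only the CV cannot attribute lot-sizing cost differences to variability alone.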

15.
Barry M. Rubin 《Socio》1985,19(6):387-398
Research into wage determination and inflation at the level of the urban labor market has generally followed a Phillips curve adaptive expectations framework. This paper explores the accuracy of such specifications when national and intermarket linkages are ignored, and extends such specifications to incorporate these linkages. The present research also addresses the impact of serial correlation problems and time series aggregation bias on the ability to identify the local wage determination and inflation mechanism. The estimation results for both annual and quarterly specifications indicate that there is virtually no support for a Phillips curve adaptive expectations hypothesis when external linkages are included in the equations. It is demonstrated that specification errors and serial correlation problems are probably responsible for many of the contradictory and inconclusive results obtained in previous studies.

16.
This paper investigates the extent to which the technical and social contexts of organizations independently affect levels of workplace trust. We argue that, in an organizational context, trust is not just a relationship between an individual subject (the truster) and an object (the trustee) but is subject to effects from the conditions of the work relationship itself. We describe the organizational context as comprising both a technical system of production (where work gets done through the specification of tasks) and a social system of work (where problems of effort, compliance, conformity and motivation are managed). We analyse the relationship between trust and these two aspects of workplace context (technical and social systems). We also operationalize this in terms of differences between industries, occupational composition and human resource management practices. The model is tested using data drawn from the 1995 Australian Workplace Industrial Relations Survey. The results confirm that differences in industry, occupational composition and HRM practices all impact on levels of workplace trust. We review these results in terms of their implications for future research into the problem of analysing variation in trust at both the workplace and individual levels.

17.
This paper concerns a class of model selection criteria based on cross‐validation techniques and estimative predictive densities. Both the simple or leave‐one‐out and the multifold or leave‐m‐out cross‐validation procedures are considered. These cross‐validation criteria define suitable estimators for the expected Kullback–Leibler risk, which measures the expected discrepancy between the fitted candidate model and the true one. In particular, we shall investigate the potential bias of these estimators, under alternative asymptotic regimes for m. The results are obtained within the general context of independent, but not necessarily identically distributed, observations and by assuming that the candidate model may not contain the true distribution. An application to the class of normal regression models is also presented, and simulation results are obtained in order to gain some further understanding on the behavior of the estimators.
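A leave-one-out version of such a criterion can be sketched for a toy candidate model; the simple setup below (a normal with unknown mean and known unit variance) is an assumption for illustration, not the paper's general framework:

```python
import math

def normal_logpdf(y, mu, sd=1.0):
    return -0.5 * math.log(2 * math.pi * sd * sd) - 0.5 * ((y - mu) / sd) ** 2

def loo_cv_score(ys):
    """Average out-of-sample log predictive density: for each i, fit the
    candidate model (here, the sample mean) on the remaining n-1 points
    and score the held-out point under the estimative predictive density."""
    n = len(ys)
    total = 0.0
    for i in range(n):
        rest = ys[:i] + ys[i + 1:]
        mu_hat = sum(rest) / len(rest)
        total += normal_logpdf(ys[i], mu_hat)
    return total / n

score = loo_cv_score([0.2, -0.4, 0.1, 0.5, -0.3, 0.0])
```

Up to sign and an additive constant, the negative of this average log score estimates the expected Kullback–Leibler risk of the fitted candidate model; leave-m-out variants hold out m points per fold instead of one.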

18.
This article deals with the question of whether the inclusion of multiplicative terms to model conditional effects in multiple regression is legitimate. The major arguments in the controversy relating to this subject are reviewed. The main conclusion is that most of the objections against multiplicative terms are based on misinterpretations of the coefficients of conditional models. For the often-ignored possible numerical problems in the estimation of these models, due to multicollinearity, an indirect estimation technique is proposed. The potential of conditional regression analysis is demonstrated with a concrete example.
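A minimal sketch of such a conditional model, y = b0 + b1·x + b2·z + b3·x·z, fit by ordinary least squares on synthetic, noiseless data; note that b1 is then the effect of x conditional on z = 0, the kind of coefficient reading the article says is often misinterpreted:

```python
def ols(X, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(k)]
         for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(k)]
    # Forward elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            b[r] -= f * b[col]
    # Back substitution.
    coef = [0.0] * k
    for i in reversed(range(k)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, k))) / A[i][i]
    return coef

# Generate noiseless data from known coefficients to check exact recovery.
true = [1.0, 2.0, -1.0, 0.5]                       # b0, b1, b2, b3
data = [(x, z) for x in (-2, -1, 0, 1, 2) for z in (-1, 0, 1, 2)]
X = [[1.0, x, z, x * z] for x, z in data]
y = [true[0] + true[1] * x + true[2] * z + true[3] * x * z for x, z in data]
coef = ols(X, y)
```

Because x and x·z are necessarily correlated, the normal-equations matrix can become ill-conditioned in practice, which is the multicollinearity concern the article's indirect estimation technique addresses.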

19.
20.
In recent years, the effect of disclosure of environmental and social information has been the subject of much research in an Anglo‐Saxon context. The European field, and especially French companies, have not been sufficiently discussed. In this paper, we investigate the relationship between social and environmental disclosure and earnings persistence (as a proxy for earnings quality). We use the content analysis method with annual reports as a measure of social and environmental disclosure; the empirical validation is applied to the companies listed in the SBF 250 French stock market index over the 2005–2010 period. To measure earnings persistence we opt for a regression of a time‐series model on panel data. The findings show that French companies are characterized by a high level of social and environmental reporting; this may positively affect earnings quality, producing more persistent earnings. This means that companies with a higher level of social and environmental commitment are more likely to benefit, to report more persistent earnings, and to be attractive to investors. Copyright © 2012 John Wiley & Sons, Ltd and ERP Environment.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号