Similar Literature
Found 20 similar documents (search time: 46 ms)
1.
This paper reviews research issues in modeling panels of time series. Examples of this type of data are annually observed macroeconomic indicators for all countries in the world, daily returns on the individual stocks listed in the S&P500, and the sales records of all items in a retail store. A panel of time series concerns the case where both the cross-sectional dimension and the time dimension are large. Often there is no a priori reason to select a few series or to aggregate the series over the cross-sectional dimension, so the use of, for example, a vector autoregression or other multivariate models becomes cumbersome; panel models and associated estimation techniques are more useful. Because of the large time dimension, however, such models should incorporate the time-series features of the data, and they should not have too many parameters, so that interpretation remains feasible. This paper discusses representation, estimation and inference of relevant models and reviews recently proposed modeling approaches that explicitly aim to meet these requirements. It concludes with some reflections on the usefulness of large data sets, concerning sample selection issues and the notion that more detail also requires more complex models.

2.
The paper proposes a general framework for modeling multiple categorical latent variables (MCLV). MCLV models extend latent class analysis and latent transition analysis to allow flexible measurement and structural components between endogenous categorical latent variables and exogenous covariates. Modeling frameworks from conventional structural equation models, for example CFA and MIMIC models, are therefore feasible in the MCLV setting. Parameters of the MCLV models are estimated with a generalized expectation–maximization (EM) algorithm, and the adjusted Bayesian information criterion aids model selection. A substantive study of reading development is analyzed to illustrate the feasibility of MCLV models.

3.
Appropriate modelling of Likert-type items should account for the scale level and for the specific role of the neutral middle category, which is present in most Likert-type items in common use. Powerful hierarchical models that account for both aspects are proposed. To avoid biased estimates, the models separate out the neutral category when modelling the effects of explanatory variables on the outcome. The main model advocated uses binary response models as building blocks in a hierarchical way. It has the advantage that it can easily be extended to include response style effects and non-linear smooth effects of explanatory variables, and, by a simple transformation of the data, available software for binary response variables can be used to fit it. The proposed hierarchical model can be used to investigate the effects of covariates on single Likert-type items and also to analyse a combination of items; estimation tools are provided for both cases. The usefulness of the approach is illustrated on a large data set.
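The "simple transformation of the data" can be illustrated with a sketch that decomposes a 5-point response into a sequence of binary pseudo-observations. This is a generic decomposition, assuming a 1–5 coding with 3 as the neutral category; the function name and the particular three splits are illustrative, not necessarily the exact hierarchy of the paper.

```python
def expand_likert(y):
    """Decompose a 5-point Likert response (coded 1..5, 3 = neutral) into
    the binary decisions of a hierarchical response model:
      neutral_vs_not: neutral (0) vs non-neutral (1)
      direction:      given non-neutral, negative (0) vs positive (1)
      extremity:      given non-neutral, moderate (0) vs extreme (1)
    Each binary pseudo-observation can then be fitted with any standard
    software for binary response models.
    """
    rows = [("neutral_vs_not", int(y != 3))]
    if y != 3:
        rows.append(("direction", int(y > 3)))
        rows.append(("extremity", int(y in (1, 5))))
    return rows
```

Stacking these pseudo-observations over respondents yields a long binary data set to which logistic (or other binary-response) software applies directly.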

4.
Moors, Guy. Quality and Quantity (2003) 37(3): 277–302
It is generally accepted that response style behavior in survey research may seriously distort the measurement of attitudes and subsequent causal models that include attitudinal dimensions. However, there is no single accepted methodological approach for dealing with this issue. This article illustrates the flexibility of a latent class factor approach both in diagnosing response style behavior and in adjusting findings from causal models with latent variables. We present a substantive example from the Belgian MHSM research project on integration-related attitudes among ethnic minorities. We argue that an extreme response style can be detected by analyzing two independent sets of Likert-type questions referring to 'gender roles' and 'feelings of ethnic discrimination'. When the response style is taken into account, the effects of covariates on the attitudinal dimensions are estimated more adequately.

5.
Factor analysis models are used in data dimensionality reduction problems where the variability among observed variables can be described through a smaller number of unobserved latent variables. This approach is often used to estimate the multidimensionality of well-being. We employ factor analysis models and use the multivariate empirical best linear unbiased predictor (EBLUP) under a unit-level small area estimation approach to predict a vector of means of factor scores representing well-being for small areas. We compare this approach with the standard one, in which small area estimation (univariate and multivariate) is used to compute a dashboard of EBLUPs of the means of the original variables, which are then averaged. Our simulation study shows that the use of factor scores provides estimates with lower variability than weighted and simple averages of standardised multivariate EBLUPs and univariate EBLUPs. Moreover, we find that when the correlation in the observed data is taken into account before small area estimates are computed, multivariate modelling does not provide large improvements in the precision of the estimates over univariate modelling. We close with an application using the European Union Statistics on Income and Living Conditions data.
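The contrast between the averaged "dashboard" and a factor-score summary can be sketched as follows; the first principal component is used here as a crude stand-in for the paper's factor-analysis scores, and the function name is an illustrative assumption.

```python
import numpy as np

def wellbeing_scores(X):
    """Return two one-number-per-unit summaries of an n x p indicator
    matrix X: the dashboard composite (simple average of standardised
    indicators) and a one-factor score, approximated by the first
    principal component of the standardised data."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    dashboard = Z.mean(axis=1)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)  # PCA via SVD
    factor = Z @ Vt[0]
    return dashboard, factor
```

When the indicators load roughly equally on one common factor, the two summaries nearly coincide; they diverge as loadings become unequal, which is where factor scoring pays off.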

6.
The psychometric literature contains many indices for detecting aberrant respondents. A different, promising approach is ordered latent class analysis, with the goal of distinguishing latent classes of respondents that are scalable from latent classes of respondents that are not scalable (i.e., aberrant) according to the scaling model adopted. This article examines seven latent class models for a cumulative scale. A first simulation study examined the efficacy of the different models for data that follow the scale model perfectly; a second studied how well these models detect aberrant respondents.

7.
This paper proposes a three-step approach to forecasting time series of electricity consumption at different levels of household aggregation. These series are linked by hierarchical constraints—global consumption is the sum of regional consumption, for example. First, benchmark forecasts are generated for all series using generalized additive models. Second, for each series, the aggregation algorithm ML-Poly, introduced by Gaillard, Stoltz, and van Erven in 2014, finds an optimal linear combination of the benchmarks. Finally, the forecasts are projected onto a coherent subspace to ensure that the final forecasts satisfy the hierarchical constraints. By minimizing a regret criterion, we show that the aggregation and projection steps improve the root mean square error of the forecasts. Our approach is tested on household electricity consumption data; experimental results suggest that successive aggregation and projection steps improve the benchmark forecasts at different levels of household aggregation.
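The projection step can be sketched with ordinary least squares reconciliation onto the coherent subspace; the two-region hierarchy and the identity-weighted (OLS) projection below are simplifying assumptions for illustration, not necessarily the weighting used in the paper.

```python
import numpy as np

# Summing matrix for a toy two-region hierarchy:
# row 0 = global series (sum of regions), rows 1-2 = regional series.
S = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])

def project_coherent(y_hat, S):
    """Orthogonally project possibly incoherent forecasts y_hat onto the
    subspace spanned by the columns of S, so the projected forecasts
    satisfy the hierarchical summation constraints exactly."""
    P = S @ np.linalg.solve(S.T @ S, S.T)   # projection matrix S(S'S)^{-1}S'
    return P @ y_hat

y_hat = np.array([100.0, 55.0, 40.0])       # incoherent: 100 != 55 + 40
y_tilde = project_coherent(y_hat, S)        # coherent revision
```

Because the projection is orthogonal, it can only reduce (never increase) the Euclidean distance to any coherent target, which is the intuition behind the error improvement claimed for this step.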

8.
We introduce a new family of network models, called hierarchical network models, that allows us to represent explicitly the stochastic dependence among the dyads (random ties) of the network. In particular, each member of this family can be associated with a graphical model defining conditional independence clauses among the dyads of the network, called the dependency graph. Every network model with a dyadic independence assumption can be generalized to construct members of this new family. Using this framework, we generalize the Erdős–Rényi and β models to create hierarchical Erdős–Rényi and β models. We describe various methods for parameter estimation, as well as simulation studies for models with sparse dependency graphs.
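For reference, the dyadic-independence baseline that the hierarchical family generalises, the Erdős–Rényi model G(n, p), can be sampled in a few lines; the function name is illustrative.

```python
import random

def erdos_renyi(n, p, seed=None):
    """Sample an Erdős–Rényi graph G(n, p): each of the n(n-1)/2 dyads
    is an independent Bernoulli(p) tie. The hierarchical models replace
    this independence with a dependency graph among the dyads."""
    rng = random.Random(seed)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p}
```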

9.
Hierarchically structured data are common in many areas of scientific research. Such data are characterized by nested membership relations among the units of observation. Multilevel analysis is a class of methods that explicitly takes this hierarchical structure into account. Repeated measures data can be considered as having a hierarchical structure as well: measurements are nested within, for instance, individuals. This paper gives an overview of the multilevel analysis approach to repeated measures data, with a simple application to growth curves as an illustration. It is argued that multilevel analysis of repeated measures data is a powerful and attractive approach for several reasons, such as its flexibility and its emphasis on individual development.

10.

This study develops two space-varying coefficient simultaneous autoregressive (SVC-SAR) models for areal data and applies them to the discrete/continuous choice model, an econometric model based on the consumer's utility maximization problem. A space-varying coefficient model is a statistical model in which the coefficients vary with location. This study introduces a simultaneous autoregressive model for the underlying spatial dependence across coefficients, in which the coefficients for one observation are affected by the sum of those for the other observations; the result is named the SVC-SAR model. Because of its flexibility, we adopt a Bayesian approach and construct an estimation method based on Markov chain Monte Carlo simulation. The proposed models are applied to estimate the Japanese residential water demand function, an example of the discrete/continuous choice model.

11.
Predicting the geo-temporal variations of crime and disorder
Traditional police boundaries—precincts, patrol districts, etc.—often fail to reflect the true distribution of criminal activity and thus do little to assist in the optimal allocation of police resources. This paper introduces methods for crime incident forecasting that focus on geographical areas of concern transcending traditional policing boundaries. The computerised procedure uses a geographical crime incidence-scanning algorithm to identify clusters with relatively high levels of crime (hot spots). These clusters provide sufficient data for training artificial neural networks (ANNs) capable of modelling trends within them. ANN specification and estimation are enhanced by applying a novel technique, the Gamma test (GT).

12.
This paper describes a method for finding optimal transformations for analyzing time series with autoregressive models; 'optimal' means that the agreement between the autoregressive model and the transformed data is maximal. Such transformations help (1) to increase the model fit and (2) to analyze categorical time series. The method uses an alternating least squares algorithm consisting of two main steps, estimation and transformation, and can handle nominal, ordinal and numerical data. Some alternative applications of the general idea are highlighted: intervention analysis, smoothing of categorical time series, predictable components, spatial modeling and cross-sectional multivariate analysis. Limitations, modeling issues and possible extensions are briefly indicated.

13.
阳海渝, 温超. 价值工程 (Value Engineering) (2013) (12): 308–309
Many traditional methods for solving nonlinear programming problems suffer from low efficiency, a tendency to become trapped in local optima, and sometimes outright failure to find an optimal solution. The basic genetic algorithm, owing to limitations of the algorithm itself, is likewise prone to premature convergence during the search for the optimum, has weak local search ability, and converges slowly in its later stages. Motivated by these shortcomings, this paper proposes a hierarchical genetic algorithm for solving a class of nonlinear programming problems; numerical experiments show that the hierarchical genetic algorithm is highly effective on this class of problems.
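For orientation, the basic genetic algorithm that the paper improves upon can be sketched as below. This is the plain single-population baseline with tournament selection and Gaussian mutation, not the hierarchical variant; the operators and parameter values are illustrative assumptions.

```python
import random

def simple_ga(fitness, bounds, pop_size=40, gens=60, seed=0):
    """Minimal real-coded genetic algorithm maximising `fitness` on the
    interval `bounds`: binary tournament selection plus Gaussian
    mutation, with children clipped back into the feasible interval."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)                    # binary tournament
            parent = a if fitness(a) > fitness(b) else b
            child = parent + rng.gauss(0.0, 0.1 * (hi - lo))  # mutation
            nxt.append(min(hi, max(lo, child)))          # clip to bounds
        pop = nxt
    return max(pop, key=fitness)
```

A hierarchical GA would run several such sub-populations in layers and migrate good individuals between them, which is what mitigates the premature convergence noted in the abstract.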

14.
Computationally efficient methods for the Bayesian analysis of seemingly unrelated regression (SUR) models are described and applied. They involve a direct Monte Carlo (DMC) approach to computing Bayesian estimation and prediction results under diffuse or informative priors. The DMC approach is used to compute Bayesian marginal posterior densities, moments, intervals and other quantities, using data simulated from known models and also data from an empirical example involving firms' sales. The results obtained by the DMC approach are compared with those of a Markov chain Monte Carlo (MCMC) approach; the comparisons suggest that the DMC approach is worthwhile and applicable to many SUR and other problems.

15.
In this paper we propose a downside risk measure, the expectile-based Value at Risk (EVaR), which is more sensitive to the magnitude of extreme losses than the conventional quantile-based VaR (QVaR). The index θ of an EVaR is the relative cost of the expected margin shortfall and hence reflects the level of prudentiality. We also show that a given expectile corresponds to quantiles with distinct tail probabilities under different distributions; an EVaR may thus be interpreted as a flexible QVaR, in the sense that its tail probability is determined by the underlying distribution. We further consider conditional EVaR and propose various conditional autoregressive expectile models that can accommodate stylized facts of financial time series. For model estimation, we employ the method of asymmetric least squares proposed by Newey and Powell [Newey, W.K., Powell, J.L., 1987. Asymmetric least squares estimation and testing. Econometrica 55, 819–847] and extend their asymptotic results to stationary and weakly dependent data. We also derive an encompassing test for non-nested expectile models. As an illustration, we apply the proposed modeling approach to evaluate the EVaR of stock market indices.
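In its simplest unconditional form, the asymmetric least squares estimator of Newey and Powell reduces to a short fixed-point iteration; the function below is a sketch of that special case, not the conditional autoregressive expectile models of the paper.

```python
import numpy as np

def expectile(x, tau=0.5, tol=1e-10, max_iter=200):
    """Compute the tau-expectile of a sample by asymmetric least squares.

    The tau-expectile e minimises sum(|tau - 1{x <= e}| * (x - e)**2);
    the first-order condition makes e an asymmetrically weighted mean,
    which we iterate to a fixed point. tau = 0.5 recovers the mean.
    """
    x = np.asarray(x, dtype=float)
    e = x.mean()
    for _ in range(max_iter):
        w = np.where(x > e, tau, 1.0 - tau)   # asymmetric weights
        e_new = np.sum(w * x) / np.sum(w)
        if abs(e_new - e) < tol:
            break
        e = e_new
    return e
```

Raising tau above 0.5 penalises shortfalls below e more heavily, pushing the expectile into the upper tail, which is the mechanism behind EVaR's sensitivity to extreme losses.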

16.
We develop a new method for deriving minimal state variable (MSV) equilibria of a general class of Markov switching rational expectations models, together with a new algorithm for computing these equilibria. We compare our approach with previously known algorithms and demonstrate that it is both efficient and more reliable, in the sense that it is able to find MSV equilibria that previously known algorithms cannot. Further, our algorithm can find all possible MSV equilibria in such models, a feature that is essential if one is interested in a likelihood-based approach to estimation.

17.
The multiplicity problem is evident even in the simplest form of statistical analysis of gene expression data, the identification of differentially expressed genes; in more complex analyses it is compounded by the multiplicity of hypotheses per gene. Thus, in some cases, it may be necessary to consider testing millions of hypotheses. We present three general approaches for addressing multiplicity in large research problems: (a) use the scalability of false discovery rate (FDR) controlling procedures; (b) apply FDR-controlling procedures to a selected subset of hypotheses; (c) apply hierarchical FDR-controlling procedures. We also offer a general framework for ensuring reproducible results in complex research, where a researcher faces more than one large research problem. We demonstrate these approaches by analyzing the results of a complex experiment on gene expression levels in different brain regions across multiple mouse strains.
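Approach (a) builds on FDR-controlling procedures such as Benjamini–Hochberg, whose cost scales easily to millions of hypotheses; a minimal sketch of the step-up procedure at target level q follows (the function name is illustrative).

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of hypotheses rejected by the
    Benjamini-Hochberg step-up procedure at FDR level q: find the
    largest i with p_(i) <= q * i / m and reject the i smallest
    p-values."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    thresh = q * np.arange(1, m + 1) / m
    below = ranked <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest index passing threshold
        reject[order[: k + 1]] = True
    return reject
```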

18.
Multidimensional network data can have different levels of complexity, as nodes may be characterized by heterogeneous individual-specific features, which may vary across the networks. This article introduces a class of models for multidimensional network data in which different levels of heterogeneity within and between networks can be considered. The proposed framework is developed within the family of latent space models and aims to distinguish symmetric relations between the nodes from node-specific features. Model parameters are estimated via a Markov chain Monte Carlo algorithm. Simulated data and a real application to fruit import/export data are used to illustrate and comment on the performance of the proposed models.

19.
A Bayesian hierarchical mixed model is developed for multiple comparisons under a simple order restriction. The model facilitates inferences on the successive differences of the population means, for which we choose independent prior distributions that are mixtures of an exponential distribution and a discrete distribution with its entire mass at zero. We employ Markov chain Monte Carlo (MCMC) techniques to obtain parameter estimates and estimates of the posterior probabilities that any two of the means are equal. The latter estimates allow one both to determine if any two means are significantly different and to test the homogeneity of all of the means. We investigate the performance of the model-based inferences with simulated data sets, focusing on parameter estimation and successive-mean comparisons using posterior probabilities. We then illustrate the utility of the model in an application based on data from a study designed to reduce lead blood concentrations in children with elevated levels. Our results show that the proposed hierarchical model can effectively unify parameter estimation, tests of hypotheses and multiple comparisons in one setting.

20.
This article surveys estimation in stationary time-series models using the approach of optimal instrumentation. We review tools that allow the construction and implementation of optimal instrumental variables estimators in various circumstances: in single- and multi-period models, in the absence and presence of conditional heteroskedasticity, and with linear and non-linear instruments. We also discuss issues adjacent to the theme of optimal instruments. The article is directed primarily towards practitioners, but econometric theorists and teachers of graduate econometrics may also find it useful.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司), 京ICP备09084417号