Similar Articles
1.
This paper implements the generalized maximum entropy (GME) method in a longitudinal data setup to investigate the regression parameters and the correlation among the repeated measurements. We derive the GME system using Shannon's classical entropy as well as some higher-order entropies, assuming an autoregressive correlation structure. The method is illustrated using two simulated examples that study the effect of changing the support range and compare the performance of the GME approach with classical estimation methods.
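To make the GME idea concrete, here is a minimal cross-sectional sketch (not the paper's longitudinal, autoregressive version): each coefficient and each error term is expressed as a probability-weighted combination over a fixed support, and the joint entropy of those weights is maximized subject to the data constraints. The supports, simulated data, and optimizer settings are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# toy data: y = 1 + 2*x + noise (all values assumed for illustration)
n = 20
x = rng.uniform(-1, 1, n)
X = np.column_stack([np.ones(n), x])
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.5, n)

K, M, J = X.shape[1], 5, 3                   # coefficients, support points
Z = np.tile(np.linspace(-5, 5, M), (K, 1))   # coefficient supports (chosen range)
V = np.linspace(-3, 3, J)                    # error support, wide enough for the noise

def unpack(theta):
    P = theta[:K * M].reshape(K, M)          # coefficient probability weights
    W = theta[K * M:].reshape(n, J)          # error probability weights
    return P, W

def neg_entropy(theta):
    t = np.clip(theta, 1e-10, 1.0)
    return np.sum(t * np.log(t))             # minimizing this maximizes entropy

def data_constraint(theta):
    P, W = unpack(theta)
    beta = (Z * P).sum(axis=1)               # beta_k = sum_m Z[k,m] * P[k,m]
    return y - X @ beta - W @ V              # residuals must be exactly zero

cons = [{"type": "eq", "fun": data_constraint},
        {"type": "eq", "fun": lambda t: unpack(t)[0].sum(axis=1) - 1},
        {"type": "eq", "fun": lambda t: unpack(t)[1].sum(axis=1) - 1}]
theta0 = np.concatenate([np.full(K * M, 1 / M), np.full(n * J, 1 / J)])
res = minimize(neg_entropy, theta0, method="SLSQP", constraints=cons,
               bounds=[(0, 1)] * theta0.size, options={"maxiter": 500})
P_hat, _ = unpack(res.x)
print("GME estimates:", (Z * P_hat).sum(axis=1))   # should land near (1, 2)
```

Widening or narrowing the coefficient support Z is exactly the "support range" effect the paper studies: wider supports shrink the estimates more heavily toward the support midpoint.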

2.
In this paper we show how the Kalman filter, which is a recursive estimation procedure, can be applied to the standard linear regression model. The resulting "Kalman estimator" is compared with the classical least-squares estimator.
The applicability and (dis)advantages of the filter are illustrated by means of a case study consisting of two parts. In the first part we apply the filter to a regression model with constant parameters, and in the second part to a regression model with time-varying stochastic parameters. The predictive power of various "Kalman predictors" is compared with that of "least-squares predictors" using Theil's prediction-error coefficient U.
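A minimal sketch of the filter applied to regression, assuming the usual random-walk state-space form for the coefficients: with state noise q = 0 the recursion is recursive least squares and converges to the classical least-squares estimate (the first part of the case study), while q > 0 yields the time-varying-parameter variant (the second part). The function name and settings are illustrative.

```python
import numpy as np

def kalman_regression(X, y, q=0.0, r=1.0, p0=1e4):
    """Recursively estimate y_t = x_t' b_t + e_t in state-space form.
    q = 0 gives constant coefficients (recursive least squares);
    q > 0 lets the coefficients drift as a random walk."""
    n, k = X.shape
    b = np.zeros(k)                    # state (coefficient) estimate
    P = p0 * np.eye(k)                 # state covariance, diffuse prior
    path = np.empty((n, k))
    for t in range(n):
        x = X[t]
        P = P + q * np.eye(k)          # time update: random-walk drift
        f = x @ P @ x + r              # innovation variance
        gain = P @ x / f               # Kalman gain
        b = b + gain * (y[t] - x @ b)  # measurement update of the state
        P = P - np.outer(gain, x @ P)
        path[t] = b
    return path

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([0.5, 1.5]) + rng.normal(0, 1, 200)
print(kalman_regression(X, y)[-1])     # final estimate, close to OLS
```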

3.
Dynamic model averaging (DMA) has become a very useful tool for dealing with two important aspects of time-series analysis, namely parameter instability and model uncertainty. An important component of DMA is the Kalman filter. It is used to filter out the latent time-varying regression coefficients of the predictive regression of interest and to produce the model predictive likelihood, which is needed to construct the probability of each model in the model set. To apply the Kalman filter, one must write the model of interest in linear state-space form. In this study, we demonstrate that the state-space representation has implications for out-of-sample prediction performance and for the degree of shrinkage. Using Monte Carlo simulations as well as financial data at different sampling frequencies, we document that the way in which the current literature tends to formulate the candidate time-varying parameter predictive regression in linear state-space form ignores empirical features that are often present in the data at hand, namely predictor persistence and predictor endogeneity. We suggest a straightforward way to account for these features in the DMA setting. Results using the widely applied Goyal and Welch (2008) dataset document that modifying the DMA framework as we suggest has a bearing on equity premium point prediction performance from both a statistical and an economic viewpoint.
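The model-averaging layer that sits on top of the Kalman filters is compact enough to sketch. Assuming each candidate model already supplies one-step predictive means and variances, the recursion below updates the model probabilities with the standard forgetting-factor approximation; the function name and toy data are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

def dma_weights(pred_mean, pred_var, y, alpha=0.99):
    """Recursive DMA model probabilities with forgetting factor alpha.
    pred_mean, pred_var: (T, K) one-step predictive means and variances
    produced by K candidate state-space models (e.g. Kalman filters)."""
    T, K = pred_mean.shape
    pi = np.full(K, 1.0 / K)
    out = np.empty((T, K))
    for t in range(T):
        pi = pi ** alpha
        pi /= pi.sum()                       # prediction step (forgetting)
        pi = pi * norm.pdf(y[t], pred_mean[t], np.sqrt(pred_var[t]))
        pi /= pi.sum()                       # update with predictive likelihood
        out[t] = pi
    return out

# toy usage: model 2 starts tracking y better halfway through the sample
T = 100
y = np.r_[np.zeros(50), np.ones(50)]
means = np.column_stack([np.zeros(T), np.r_[np.zeros(50), np.ones(50)]])
print(dma_weights(means, np.ones((T, 2)), y)[-1])   # weight shifts to model 2
```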

4.
Generalized linear mixed models are widely used for analyzing clustered data. If the primary interest is in the regression parameters, one can proceed alternatively through the marginal mean model approach. In the present study, a joint model consisting of a marginal mean model and a cluster-specific conditional mean model is considered. This model is useful when both time-independent and time-dependent covariates are available. Furthermore, our model is semi-parametric, as we assume a flexible, smooth semi-nonparametric density for the cluster-specific effects. This semi-nonparametric density-based approach outperforms the normality-based approach with respect to some important features of 'between-cluster variation'. We employ a full likelihood-based approach and apply the Monte Carlo EM algorithm to fit the model. A simulation study demonstrates the consistency of the approach. Finally, we apply the method to a study of long-term illness data.

5.
Quantile regression techniques have been widely used in empirical economics. In this paper, we consider the estimation of a generalized quantile regression model when data are subject to fixed or random censoring. Through a discretization technique, we transform the censored regression model into a sequence of binary choice models and further propose an integrated smoothed maximum score estimator by combining individual binary choice models, following the insights of Horowitz (1992) and Manski (1985). Unlike the estimators of Horowitz (1992) and Manski (1985), our estimators converge at the usual parametric rate through an integration process. In the case of fixed censoring, our approach overcomes a major drawback of existing approaches associated with the curse-of-dimensionality problem. Our approach for the fixed censored case can be extended readily to the case with random censoring for which other existing approaches are no longer applicable. Both of our estimators are consistent and asymptotically normal. A simulation study demonstrates that our estimators perform well in finite samples.

6.
In missing data problems, it is often the case that there is a natural test statistic for testing a statistical hypothesis had all the data been observed. A fuzzy p-value approach to hypothesis testing has recently been proposed, implemented by imputing the missing values in the "complete data" test statistic with values simulated from the conditional null distribution given the observed data. We argue that imputing data in this way will inevitably lead to a loss of power. For the case of a scalar parameter, we show that the asymptotic efficiency of the score test based on the imputed "complete data" relative to the score test based on the observed data is given by the ratio of the observed-data information to the complete-data information. Three examples involving probit regression, a normal random effects model, and unidentified paired data are used for illustration. For testing linkage disequilibrium based on pooled genotype data, simulation results show that the imputed Neyman-Pearson and Fisher exact tests are less powerful than a Wald-type test based on the observed-data maximum likelihood estimator. In conclusion, we caution against the routine use of the fuzzy p-value approach in latent variable or missing data problems and suggest some viable alternatives.
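The information ratio is easy to see in a toy normal-mean problem: with a fraction p of observations missing completely at random and unit variance, the observed-data information is (1 - p)n while the complete-data information is n, so the imputation-based test loses power in roughly that proportion. The sketch below (all settings are illustrative assumptions) imputes the missing values from the null distribution, as the fuzzy p-value approach prescribes, and compares rejection rates.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p_miss, mu, reps = 100, 0.4, 0.3, 4000   # illustrative settings

rej_obs = rej_imp = 0
for _ in range(reps):
    x = rng.normal(mu, 1, n)                # data under the alternative
    seen = rng.random(n) > p_miss           # MCAR missingness indicator
    # observed-data score test of H0: mu = 0 (unit variance known)
    z_obs = x[seen].sum() / np.sqrt(seen.sum())
    # imputed "complete-data" test: fill in draws from the null N(0, 1)
    x_imp = np.where(seen, x, rng.normal(0, 1, n))
    z_imp = x_imp.sum() / np.sqrt(n)
    rej_obs += abs(z_obs) > 1.96
    rej_imp += abs(z_imp) > 1.96

print("power, observed-data test:", rej_obs / reps)   # approx. 0.64
print("power, imputed-data test: ", rej_imp / reps)   # approx. 0.44
```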

7.
As the internet's footprint continues to expand, cybersecurity is becoming a major concern for both governments and the private sector. One such cybersecurity issue relates to data integrity attacks. This paper focuses on the power industry, where forecasting processes rely heavily on the quality of the data. Data integrity attacks are expected to harm the performance of forecasting systems, with a major impact on both the financial bottom line of power companies and the resilience of power grids. This paper reveals the effect of data integrity attacks on the accuracy of four representative load forecasting models (multiple linear regression, support vector regression, artificial neural networks, and fuzzy interaction regression). We begin by simulating data integrity attacks through the random injection of multipliers that follow a normal or uniform distribution into the load series. The four load forecasting models are then used to generate one-year-ahead ex post point forecasts so that their forecast errors can be compared. The results show that the support vector regression model is the most robust, followed closely by the multiple linear regression model, while the fuzzy interaction regression model is the least robust of the four. Nevertheless, all four models fail to provide satisfactory forecasts when the scale of the data integrity attacks becomes large. This presents a serious challenge to both load forecasters and the broader forecasting community: the generation of accurate forecasts under data integrity attacks. We construct our case study using the publicly available data from the Global Energy Forecasting Competition 2012. Finally, we offer an overview of potential research topics for future studies.
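A stripped-down version of the experiment conveys the mechanism. The sketch below corrupts a share of a synthetic hourly load series with uniform multipliers, refits an hour-of-day dummy regression (a bare-bones stand-in for the paper's multiple linear regression model), and reports the out-of-sample MAPE against the clean series. The data generator, attack template, and parameters are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# one year of hourly load with a daily cycle (stand-in for a real load series)
t = np.arange(24 * 365)
hour = t % 24
load = 100 + 20 * np.sin(2 * np.pi * hour / 24) + rng.normal(0, 3, t.size)

def inject_attack(series, share, lam):
    """Scale a random share of observations by Uniform(1, 1 + lam) multipliers."""
    out = series.copy()
    idx = rng.choice(out.size, int(share * out.size), replace=False)
    out[idx] *= rng.uniform(1.0, 1.0 + lam, idx.size)
    return out

# hour-of-day dummy regression: a minimal cousin of the paper's MLR model
H = np.eye(24)[hour]                            # one-hot hour-of-day design
train, test = slice(0, 24 * 300), slice(24 * 300, None)

for share in (0.0, 0.1, 0.3):
    attacked = inject_attack(load, share, lam=1.0)
    beta, *_ = np.linalg.lstsq(H[train], attacked[train], rcond=None)
    mape = 100 * np.mean(np.abs(load[test] - H[test] @ beta) / load[test])
    print(f"attacked share {share:.0%}: test MAPE {mape:.2f}%")
```

Even this toy model shows the monotone degradation the paper documents: the larger the attacked share, the worse the forecasts, since the model is trained on the corrupted history but judged against the true load.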

8.
We discuss a regression model in which the regressors are dummy variables. The basic idea is that the observation units can be assigned to some well-defined combination of treatments, corresponding to the dummy variables. This assignment cannot be done without some error, i.e. misclassification can play a role. The situation is analogous to regression with errors in variables, where it is well known that identification of the parameters is a prominent problem. We first show that, in our case, the parameters are not identified by the first two moments but can be identified by the likelihood. We then analyze two estimators: the first is a moment estimator involving moments up to the third order, and the second is a maximum likelihood estimator computed with the help of the EM algorithm. Both estimators are evaluated in a small Monte Carlo experiment.

9.
We propose methods for testing hypotheses of non-causality at various horizons, as defined in Dufour and Renault (1998, Econometrica 66, 1099–1125). We study in detail the case of VAR models and propose linear methods based on running vector autoregressions at different horizons. While the hypotheses considered are nonlinear, the proposed methods require only linear regression techniques and standard Gaussian asymptotic distribution theory. Bootstrap procedures are also considered. For the case of integrated processes, we propose extended regression methods that avoid nonstandard asymptotics. The methods are applied to a VAR model of the US economy.
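In the spirit of those horizon-wise regressions, the sketch below tests whether x helps predict y at horizon h by regressing y_{t+h} on a few lags of both series and testing the x block, with HAC standard errors to handle the overlap in h-step-ahead errors. The lag length, data-generating process, and function name are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np
import statsmodels.api as sm

def horizon_noncausality_pvalue(y, x, h, p=2):
    """Regress y_{t+h} on p lags of y and p lags of x; test the x block.
    HAC errors handle the overlap in h-step-ahead forecast errors."""
    T = len(y)
    rows = range(p - 1, T - h)
    Y = np.array([y[t + h] for t in rows])
    X = np.array([np.r_[1.0,
                        [y[t - j] for j in range(p)],
                        [x[t - j] for j in range(p)]] for t in rows])
    fit = sm.OLS(Y, X).fit(cov_type="HAC", cov_kwds={"maxlags": h})
    R = np.zeros((p, X.shape[1]))
    R[:, 1 + p:] = np.eye(p)                 # restrict the x-lag coefficients
    return float(fit.f_test(R).pvalue)

rng = np.random.default_rng(7)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):                      # x causes y with a one-period lag
    y[t] = 0.4 * y[t - 1] + 0.5 * x[t - 1] + rng.normal()
for h in (1, 2, 4):
    print(h, horizon_noncausality_pvalue(y, x, h))   # small p-values expected
```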

10.
In some linear regression problems, samples may be unwittingly drawn from a population that is heterogeneous with respect to the intercept. For such problems, an exploratory method is developed to identify influential subgroups of the heterogeneous population. Discriminant analysis is used to characterize, interpret, and validate the subgroups. Four criteria are proposed for selecting a set of influential subgroups, which are then used in estimating a linear regression model. The method is illustrated through an analysis of real data on political violence.

11.
This paper surveys the state of the art in the econometrics of regression models with many instruments or many regressors, based on alternative (namely, dimension) asymptotics. We list critical results of dimension asymptotics that lead to better approximations of the properties of familiar and alternative estimators and tests when the instruments and/or regressors are numerous. We then consider the problem of estimation and inference in the basic linear instrumental variables regression setup with many strong instruments, describing the failures of conventional estimation and inference as well as alternative tools that restore consistency and validity. We then add various other features to the basic model, such as heteroskedasticity and instrument weakness, in each case reviewing the existing tools for proper estimation and inference. Subsequently, we consider the related but different problem of estimation and testing in a linear mean regression with many regressors. We also describe various extensions and connections to other settings, such as panel data models, spatial models, and time series models. Finally, we provide practical guidance on which tools are most suitable in various situations when many instruments and/or regressors turn out to be an issue.

12.
In this progress report, we first describe the origins and early development of the Marshallian macroeconomic model (MMM) and briefly review some of our past empirical forecasting experiments with the model. We then present recently developed one-sector, two-sector, and n-sector models of an economy that can be employed to explain past experience, predict future outcomes, and analyze policy problems. The results of simulation experiments with various versions of the model illustrate some of its dynamic properties, which include "chaotic" features. Finally, we comment on planned future work with the model.

13.
This article develops influence diagnostics for log-Birnbaum-Saunders (LBS) regression models with censored data based on the case-deletion model (CDM). One-step approximations of the estimates in the CDM are given and case-deletion measures are obtained. It is also shown that the CDM is equivalent to the mean-shift outlier model (MSOM) in LBS regression models, and an outlier test based on the MSOM is presented. Furthermore, we discuss a score test for homogeneity of the shape parameter in LBS regression models. Two numerical examples illustrate the methodology, and the properties of the score test statistic are investigated through Monte Carlo simulations under different censoring percentages.
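The case-deletion idea is easiest to see in the ordinary linear model, where the influence of deleting each observation has a closed one-step form through the hat matrix (Cook's distance). The sketch below is that OLS analogue, offered only as intuition; the article's censored LBS version replaces it with one-step approximations to the maximum likelihood fit.

```python
import numpy as np

def cooks_distance(X, y):
    """Leave-one-out influence for OLS from a single fit: the one-step
    hat-matrix formula avoids refitting the model n times."""
    n, k = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    h = np.diag(H)                       # leverages
    e = y - H @ y                        # residuals
    s2 = e @ e / (n - k)
    return e**2 * h / (s2 * k * (1 - h)**2)

rng = np.random.default_rng(9)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 1, 50)
y[0] += 8                                # plant one gross outlier
print(np.argmax(cooks_distance(X, y)))   # flags observation 0
```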

14.
Logistic regression analysis may well be used to develop a predictive model for a dichotomous medical outcome, such as short-term mortality. When the data set is small compared to the number of covariables studied, shrinkage techniques may improve predictions. We compared the performance of three variants of shrinkage techniques: 1) a linear shrinkage factor, which shrinks all coefficients with the same factor; 2) penalized maximum likelihood (or ridge regression), where a penalty factor is added to the likelihood function such that coefficients are shrunk individually according to the variance of each covariable; 3) the Lasso, which shrinks some coefficients to zero by setting a constraint on the sum of the absolute values of the coefficients of standardized covariables.
Logistic regression models were constructed to predict 30-day mortality after acute myocardial infarction. Small data sets were created from a large randomized controlled trial, half of which provided independent validation data. We found that all three shrinkage techniques improved the calibration of predictions compared to the standard maximum likelihood estimates. This study illustrates that shrinkage is a valuable tool to overcome some of the problems of overfitting in medical data.
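A sketch of variants 2) and 3) using scikit-learn, assuming version 1.2+ for penalty=None; the data are simulated and the penalty strengths C are arbitrary. Variant 1)'s uniform linear shrinkage factor has no off-the-shelf estimator here, but the ridge and lasso fits show the characteristic contrast: ridge shrinks every coefficient, while the lasso sets some exactly to zero.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# small sample relative to the number of covariables, inviting overfitting
n, k = 100, 20
X = rng.normal(size=(n, k))
beta = np.r_[1.0, -1.0, 0.5, np.zeros(k - 3)]       # mostly irrelevant covariables
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta)))

Xs = StandardScaler().fit_transform(X)              # standardize before penalizing

ml = LogisticRegression(penalty=None).fit(Xs, y)    # plain maximum likelihood
ridge = LogisticRegression(penalty="l2", C=0.5).fit(Xs, y)
lasso = LogisticRegression(penalty="l1", C=0.5, solver="liblinear").fit(Xs, y)

for name, m in [("ML", ml), ("ridge", ridge), ("lasso", lasso)]:
    print(f"{name:6s} sum|coef| = {np.abs(m.coef_).sum():5.2f}, "
          f"coefficients set to zero: {(m.coef_ == 0).sum()}")
```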

15.
This article examines the composition and characteristics of power-system load. Building on an analysis and comparison of the strengths and weaknesses of commonly used forecasting methods, it combines grey prediction with regression to build a medium- and long-term load forecasting model, splitting the forecasting task into two parts: grey prediction is used to forecast the relevant influencing factors, and regression is then used to forecast the load. The model exploits the advantages of grey prediction (it requires little load data, makes no assumptions about distribution or trend, is computationally convenient, and is easy to validate) together with regression's ability to account for the many factors that affect load; the parameter-estimation techniques involved are mature, and the forecasting procedure is simple.
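The grey stage is typically a GM(1,1) model: accumulate the series, fit a first-order grey differential equation to the accumulated values, and difference the fitted curve back. A minimal sketch follows, with invented numbers standing in for a load-driving factor such as regional GDP; its forecasts would then enter the regression stage as covariates.

```python
import numpy as np

def gm11(x, steps):
    """GM(1,1) grey forecast: fit dx1/dt + a*x1 = b on the accumulated
    series x1, then forecast `steps` values ahead on the original scale."""
    x = np.asarray(x, float)
    x1 = np.cumsum(x)                          # accumulated generating operation
    z = 0.5 * (x1[1:] + x1[:-1])               # background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]
    n = len(x)
    k = np.arange(n + steps)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a
    x_hat = np.r_[x1_hat[0], np.diff(x1_hat)]  # inverse accumulation
    return x_hat[n:]

# a few annual values of a load-driving factor; grey models need little data
gdp = [102.0, 108.5, 116.1, 124.8, 134.2]
print(gm11(gdp, steps=3))   # grey forecasts that feed the regression stage
```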

16.
T. J. Rao, Metrika, 1984, 31(1): 25–32
We first consider Neyman's optimum allocation of sample size to strata in the light of available auxiliary information, for which a suitable random permutation model is assumed. For a special case of this model, the allocation of the sample size reduces to the allocation obtained when a certain superpopulation regression model is assumed. Motivated by this, we discuss, more generally, some optimality results under random permutation models and compare them with the corresponding results obtained when a superpopulation regression model is assumed.
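For reference, Neyman's optimum allocation assigns n_h proportional to N_h * S_h, the stratum size times the stratum standard deviation (here taken as guesses based on the auxiliary information). A small sketch with invented strata; the rounding may need a one-unit correction to hit n exactly.

```python
import numpy as np

def neyman_allocation(N_h, S_h, n):
    """Neyman optimum allocation: n_h proportional to N_h * S_h."""
    N_h, S_h = np.asarray(N_h, float), np.asarray(S_h, float)
    w = N_h * S_h
    return np.rint(n * w / w.sum()).astype(int)

# three strata: sizes N_h and standard-deviation guesses S_h from auxiliary data
print(neyman_allocation(N_h=[500, 300, 200], S_h=[10.0, 20.0, 40.0], n=100))
# -> roughly [26, 32, 42]: the small but volatile stratum gets the most sample
```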

17.
This paper reports on a collaborative project involving organization scholars and clinicians that examines the ways in which individual and organizational health are conceptualized in the literature. We illustrate how the use of systems theories (in this case, complexity theory) in relation to organizational health introduces problems such as the risk of promoting organizational health at the expense of individual well-being. The phenomena of organizational health and individual health are often presented as having a symbiotic relationship, and we suggest some circumstances where this is not the case. Our central argument is that we need to move beyond current conceptual limitations towards a more process-based model of health in organization rather than organizational health.

18.
Efficient frontier estimation: a maximum entropy approach
An alternative efficiency estimation approach is developed utilizing generalized maximum entropy (GME). GME combines the strengths of both stochastic frontier analysis (SFA) and data envelopment analysis (DEA), allowing the estimation of a frontier that is stochastic without making an ad hoc assumption about the distribution of the efficiency component. GME results approach SFA results as the one-sided inefficiency bounds used by GME shrink, while results similar to DEA are achieved as the bounds increase. The GME results are distributed like DEA results but yield virtually the same rankings as SFA. This suggests that GME may provide a link between various estimators of efficiency.

19.
We consider a semiparametric distributed lag model in which the "news impact curve" m is nonparametric but the response is dynamic through some linear filters. A special case is a nonparametric regression with serially correlated errors. We propose an estimator of the news impact curve based on a dynamic transformation that produces white-noise errors. This yields an estimating equation for m that is a linear integral equation of the second kind. We investigate both the stationary case and the case where the error has a unit root. In the stationary case we establish pointwise asymptotic normality. In the special case of a nonparametric regression subject to time-series errors, our estimator achieves efficiency improvements over the usual estimators; see Xiao et al. (2003, "More efficient local polynomial estimation in nonparametric regression with autocorrelated errors", Journal of the American Statistical Association 98, 980–992). In the unit root case our procedure is consistent and asymptotically normal, unlike the standard regression smoother. We also present the distribution theory for the parameter estimates, which is nonstandard in the unit root case, and investigate the finite-sample performance of the estimator through simulation experiments.
