Similar Articles
20 similar articles found.
1.
We analyze the quantile combination approach (QCA) of Lima and Meng (2017) in situations with mixed-frequency data. The estimation of quantile regressions with mixed-frequency data leads to a parameter proliferation problem, which can be addressed through extensions of the MIDAS and soft (hard) thresholding methods towards quantile regression. We use the proposed approach to forecast the growth rate of the industrial production index, and our results show that including high-frequency information in the QCA achieves substantial gains in terms of forecasting accuracy.
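A quantile combination of this kind can be illustrated with a minimal sketch. This is not the Lima–Meng estimator itself; the forecasts, weights, and realized value below are hypothetical. Individual tau-quantile forecasts are averaged, and the result is scored with the pinball (check) loss standard in quantile forecasting.

```python
def pinball_loss(y, q_pred, tau):
    """Check loss rho_tau(y - q) used to score a tau-quantile forecast."""
    u = y - q_pred
    return u * tau if u >= 0 else u * (tau - 1.0)

def combine_quantile_forecasts(forecasts, weights=None):
    """Weighted average of individual tau-quantile forecasts (equal weights by default)."""
    if weights is None:
        weights = [1.0 / len(forecasts)] * len(forecasts)
    return sum(w * f for w, f in zip(weights, forecasts))

# Two hypothetical models' forecasts of the 0.5-quantile of next-period growth:
forecasts = [1.2, 0.8]
combined = combine_quantile_forecasts(forecasts)
realized = 1.1
loss = pinball_loss(realized, combined, tau=0.5)
```

In the paper's setting the combination weights would themselves be estimated, with high-frequency regressors entering through MIDAS-type polynomials.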

2.
The assessment of models of financial market behaviour requires evaluation tools. When complexity hinders a direct estimation approach, e.g., for agent based microsimulation models, simulation based estimators might provide an alternative. In order to apply such techniques, an objective function is required, which should be based on robust statistics of the time series under consideration. Based on the identification of robust statistics of foreign exchange rate time series in previous research, an objective function is derived. This function takes into account stylized facts about the unconditional distribution of exchange rate returns and properties of the conditional distribution, in particular, autoregressive conditional heteroscedasticity and long memory. A bootstrap procedure is used to obtain an estimate of the variance-covariance matrix of the different moments included in the objective function, which is used as the basis for the weighting matrix. Finally, the properties of the objective function are analyzed for two different agent based models of the foreign exchange market, a simple GARCH model and a stochastic volatility model, using the DM/US-$ exchange rate as a benchmark. It is also discussed how the results might be used for inference purposes. Research has been supported by the DFG grant WI 20024/2-1/2. We are indebted to two anonymous referees of this journal, Leigh Tesfatsion, Patrick Burns and other participants of the CEF’06 conference in Limassol for helpful comments on preliminary versions of this paper.
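The core of such a simulation-based estimator is the objective function itself: a quadratic form in the gap between simulated and empirical moments, weighted by a matrix. A minimal sketch with made-up moment values follows; the identity weighting matrix here stands in for the inverse of the bootstrap moment covariance that the paper uses.

```python
def smm_objective(sim_moments, emp_moments, weight_matrix):
    """Quadratic form g' W g with g = simulated moments - empirical moments."""
    g = [s - e for s, e in zip(sim_moments, emp_moments)]
    n = len(g)
    return sum(g[i] * weight_matrix[i][j] * g[j]
               for i in range(n) for j in range(n))

# Hypothetical moment vectors, e.g. mean return, kurtosis, ACF(1) of squared returns:
emp = [0.0, 4.5, 0.20]
sim = [0.1, 4.0, 0.25]
W = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # identity; in practice inverse bootstrap covariance
val = smm_objective(sim, emp, W)
```

The model's parameters would be chosen to minimize `val` over repeated simulations.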

3.
In this paper we investigate two-sample U-statistics in the case of clusters of repeated measurements observed on individuals from independent populations. The observations on the i-th individual in the first population are denoted by X_{i1}, …, X_{i n_i}, 1 ≤ i ≤ m, and those on the k-th individual in the second population by Y_{k1}, …, Y_{k n_k}, 1 ≤ k ≤ n. Given the kernel φ(x, y), we define the generalized two-sample U-statistic by averaging the kernel over all cross-population pairs of observations,
U_{m,n} = (1/(mn)) Σ_{i=1}^{m} Σ_{k=1}^{n} (n_i n_k)^{-1} Σ_{j=1}^{n_i} Σ_{l=1}^{n_k} φ(X_{ij}, Y_{kl}).
We derive the asymptotic distribution of U_{m,n} for large sample sizes. As an application we study the generalized Mann–Whitney–Wilcoxon rank sum test for clustered data.
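A clustered two-sample U-statistic can be sketched as follows: each pair of clusters (one from each population) contributes the mean of the kernel over its observation pairs, and those contributions are averaged. This is one common convention; the data and Mann–Whitney kernel below are hypothetical.

```python
def clustered_u_statistic(X, Y, phi):
    """Average of phi over all cross-population pairs of observations,
    giving each (cluster i, cluster k) pair equal weight."""
    m, n = len(X), len(Y)
    total = 0.0
    for xi in X:
        for yk in Y:
            total += sum(phi(x, y) for x in xi for y in yk) / (len(xi) * len(yk))
    return total / (m * n)

# Mann-Whitney kernel: 1 if x < y, else 0 (ties scored as 0 here)
phi = lambda x, y: 1.0 if x < y else 0.0
X = [[1, 2], [3]]      # m = 2 clusters from population 1
Y = [[2.5], [4, 5]]    # n = 2 clusters from population 2
u = clustered_u_statistic(X, Y, phi)
```

With this kernel, values of `u` near 0.5 indicate no stochastic ordering between the populations.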

4.
Quantile models and estimators for data analysis
Quantile regression is used to estimate the cross-sectional relationship between high school characteristics and student achievement as measured by ACT scores. The importance of school characteristics for student achievement has traditionally been framed in terms of the effect on the expected value. With quantile regression, the impact of school characteristics is allowed to differ between the mean and the quantiles of the conditional distribution. Like robust estimation, the quantile approach detects relationships missed by traditional data analysis. Robust estimates detect the influence of the bulk of the data, whereas quantile estimates detect the influence of covariates on alternate parts of the conditional distribution. Since our design consists of multiple responses (individual student ACT scores) at fixed explanatory variables (school characteristics), the quantile model can be estimated by the usual regression quantiles, but additionally by a regression on the empirical quantile at each school. This is similar to least squares, where the estimate based on the entire data is identical to weighted least squares on the school averages. Unlike least squares, however, the regression through the quantiles produces a different estimate than the regression quantiles.
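The school-level variant described above, a regression on the empirical quantile at each school, can be sketched in a few lines. The data, the quantile convention, and the single covariate are all hypothetical; a real analysis would use many schools and covariates.

```python
def empirical_quantile(values, tau):
    """Order-statistic tau-quantile (one of several common conventions)."""
    s = sorted(values)
    idx = max(0, min(len(s) - 1, int(tau * len(s))))
    return s[idx]

def ols_slope(x, y):
    """Simple-regression slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

# Hypothetical ACT scores grouped by school, plus one covariate per school:
scores_by_school = [[18, 20, 22], [24, 26, 28]]
school_covariate = [0.0, 1.0]   # e.g. a resource index
q75 = [empirical_quantile(s, 0.75) for s in scores_by_school]
slope = ols_slope(school_covariate, q75)
```

Repeating this for several values of tau traces out how the covariate's effect varies across the conditional distribution.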

5.
A widely used approach to modeling discrete-time network data assumes that discrete-time network data are generated by an unobserved continuous-time Markov process. While such models can capture a wide range of network phenomena and are popular in social network analysis, the models are based on the homogeneity assumption that all nodes share the same parameters. We remove the homogeneity assumption by allowing nodes to belong to unobserved subsets of nodes, called blocks, and assuming that nodes in the same block have the same parameters, whereas nodes in distinct blocks have distinct parameters. The resulting models capture unobserved heterogeneity across nodes and admit model-based clustering of nodes based on network properties chosen by researchers. We develop Bayesian data-augmentation methods and apply them to discrete-time observations of an ownership network of non-financial companies in Slovenia in its critical transition from a socialist economy to a market economy. We detect a small subset of shadow-financial companies that outpaces others in terms of the rate of change and the desire to accumulate stocks of other companies.

6.
Quantile regression for dynamic panel data with fixed effects
This paper studies a quantile regression dynamic panel model with fixed effects. Panel data fixed effects estimators are typically biased in the presence of lagged dependent variables as regressors. To reduce the dynamic bias, we suggest the use of the instrumental variables quantile regression method of Chernozhukov and Hansen (2006) along with lagged regressors as instruments. In addition, we describe how to employ the estimated models for prediction. Monte Carlo simulations show evidence that the instrumental variables approach sharply reduces the dynamic bias, and the empirical levels for prediction intervals are very close to nominal levels. Finally, we illustrate the procedures with an application to forecasting output growth rates for 18 OECD countries.

7.
This paper develops methods of inference for nonparametric and semiparametric parameters defined by conditional moment inequalities and/or equalities. The parameters need not be identified. Confidence sets (CSs) and tests are introduced. The correct uniform asymptotic size of these procedures is established. The false coverage probabilities and power of the CSs and tests are established for fixed alternatives and some local alternatives. Finite-sample simulation results are given for a nonparametric conditional quantile model with censoring and a nonparametric conditional treatment effect model. The recommended CS/test uses a Cramér–von Mises-type test statistic and employs a generalized moment selection critical value.

8.
In this article, a novel approach is proposed for analyzing clustered survival data that are subject to extra variation induced by the clustering of survival times. This is accomplished by extending the Cox proportional hazards model to a frailty model in which the cluster-specific shared frailty is modeled nonparametrically, assuming a Dirichlet process for the frailty distribution. In this semiparametric setup, we propose a hybrid method for drawing model-based inferences, in which parameter estimation is performed with a Monte Carlo expected conditional maximization algorithm. A simulation study is conducted to assess the efficiency of our methodology, which is then illustrated on real-life data on recurrence times to infection in kidney patients.

9.
Generalized linear mixed models are widely used for analyzing clustered data. If the primary interest is in regression parameters, one can proceed alternatively, through the marginal mean model approach. In the present study, a joint model consisting of a marginal mean model and a cluster-specific conditional mean model is considered. This model is useful when both time-independent and time-dependent covariates are available. Furthermore our model is semi-parametric, as we assume a flexible, smooth semi-nonparametric density of the cluster-specific effects. This semi-nonparametric density-based approach outperforms the approach based on normality assumption with respect to some important features of 'between-cluster variation'. We employ a full likelihood-based approach and apply the Monte Carlo EM algorithm to analyze the model. A simulation study is carried out to demonstrate the consistency of the approach. Finally, we apply this to a study of long-term illness data.

10.
In most empirical studies, once the best model has been selected according to a certain criterion, subsequent analysis is conducted conditionally on the chosen model. In other words, the uncertainty of model selection is ignored once the best model has been chosen. However, the true data-generating process is in general unknown and may not be consistent with the chosen model. In the analysis of productivity and technical efficiencies in the stochastic frontier settings, if the estimated parameters or the predicted efficiencies differ across competing models, then it is risky to base the prediction on the selected model. Buckland et al. (Biometrics 53:603–618, 1997) have shown that if model selection uncertainty is ignored, the precision of the estimate is likely to be overestimated, the estimated confidence intervals of the parameters are often below the nominal level, and consequently, the prediction may be less accurate than expected. In this paper, we suggest using the model-averaged estimator based on the multimodel inference to estimate stochastic frontier models. The potential advantages of the proposed approach are twofold: incorporating the model selection uncertainty into statistical inference; reducing the model selection bias and variance of the frontier and technical efficiency estimators. The approach is demonstrated empirically via the estimation of an Indian farm data set.
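The model-averaging step can be illustrated with smoothed-AIC weights in the spirit of Buckland et al. (1997), where each model's estimate is weighted by exp(-ΔAIC/2), normalized to sum to one. The estimates and AIC values below are hypothetical; a real application would use the paper's multimodel-inference machinery.

```python
import math

def model_average_weights(aics):
    """Smoothed-AIC weights: w_i proportional to exp(-(AIC_i - AIC_min) / 2)."""
    a_min = min(aics)
    raw = [math.exp(-(a - a_min) / 2.0) for a in aics]
    s = sum(raw)
    return [r / s for r in raw]

def model_averaged_estimate(estimates, aics):
    """AIC-weighted average of the competing models' estimates."""
    w = model_average_weights(aics)
    return sum(wi * e for wi, e in zip(w, estimates))

# Two competing frontier models' efficiency estimates and (equal) AICs:
est = model_averaged_estimate([0.80, 0.90], [100.0, 100.0])
```

With equal AICs the weights are equal, so the averaged estimate is the simple mean of the two model estimates.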

11.
We consider pseudo-panel data models constructed from repeated cross sections in which the number of individuals per group is large relative to the number of groups and time periods. First, we show that, when time-invariant group fixed effects are neglected, the OLS estimator does not converge in probability to a constant but rather to a random variable. Second, we show that, while the fixed-effects (FE) estimator is consistent, the usual t statistic is not asymptotically normally distributed, and we propose a new robust t statistic whose asymptotic distribution is standard normal. Third, we propose efficient GMM estimators using the orthogonality conditions implied by grouping and we provide t tests that are valid even in the presence of time-invariant group effects. Our Monte Carlo results show that the proposed GMM estimator is more precise than the FE estimator and that our new t test has good size and is powerful.
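The fixed-effects (within) estimator that the paper builds on can be sketched as OLS on group-demeaned data. The two-group data below are hypothetical, and the paper's robust t statistic and GMM refinements are not shown.

```python
def within_estimator(groups):
    """Fixed-effects slope: OLS slope computed on group-demeaned (x, y) pairs,
    which sweeps out any time-invariant group effect."""
    xd, yd = [], []
    for obs in groups:                 # obs: list of (x, y) pairs in one group
        mx = sum(x for x, _ in obs) / len(obs)
        my = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            xd.append(x - mx)
            yd.append(y - my)
    sxx = sum(v * v for v in xd)
    sxy = sum(a * b for a, b in zip(xd, yd))
    return sxy / sxx

# Two groups with different intercepts (fixed effects 1 and 5) but common slope 2:
g1 = [(0, 1), (1, 3)]
g2 = [(0, 5), (1, 7)]
beta = within_estimator([g1, g2])
```

Pooled OLS on the same data would mix the group intercepts into the slope; demeaning recovers the common slope exactly here.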

12.
For reasons of time constraint and cost reduction, censoring is commonly employed in practice, especially in reliability engineering. Among various censoring schemes, progressive Type-I right censoring provides not only the practical advantage of known termination time but also greater flexibility to the experimenter in the design stage by allowing for the removal of test units at non-terminal time points. In this article, we consider a progressively Type-I censored life-test under the assumption that the lifetime of each test unit is exponentially distributed. For small to moderate sample sizes, a practical modification is proposed to the censoring scheme in order to guarantee a feasible life-test under progressive Type-I censoring. Under this setup, we obtain the maximum likelihood estimator (MLE) of the unknown mean parameter and derive the exact sampling distribution of the MLE under the condition that its existence is ensured. Using the exact distribution of the MLE as well as its asymptotic distribution and the parametric bootstrap method, we then discuss the construction of confidence intervals for the mean parameter and their performance is assessed through Monte Carlo simulations. Finally, an example is presented in order to illustrate all the methods of inference discussed here.
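Under exponential lifetimes the MLE of the mean has a simple closed form for any right-censoring pattern: total time on test divided by the number of observed failures. It exists only when at least one failure is observed, which is the existence condition the abstract alludes to. A minimal sketch with hypothetical data:

```python
def exp_mean_mle(failure_times, censoring_times):
    """MLE of the exponential mean under right censoring:
    total time on test divided by the number of observed failures.
    Raises if no failures are observed (the MLE then does not exist)."""
    d = len(failure_times)
    if d == 0:
        raise ValueError("MLE does not exist without observed failures")
    total_time_on_test = sum(failure_times) + sum(censoring_times)
    return total_time_on_test / d

# Three observed failures and two progressively removed (censored) units:
theta_hat = exp_mean_mle([2.0, 5.0, 3.0], [4.0, 6.0])
```

The paper's exact sampling distribution and bootstrap intervals are built on top of this estimator.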

13.
Quantile cointegrating regression
Quantile regression has important applications in risk management, portfolio optimization, and asset pricing. The current paper studies estimation, inference and financial applications of quantile regression with cointegrated time series. In addition, a new cointegration model with quantile-varying coefficients is proposed. In the proposed model, the value of cointegrating coefficients may be affected by the shocks and thus may vary over the innovation quantile. The proposed model may be viewed as a stochastic cointegration model which includes the conventional cointegration model as a special case. It also provides a useful complement to cointegration models with (G)ARCH effects. Asymptotic properties of the proposed model and the limiting distribution of the cointegrating regression quantiles are derived. In the presence of endogenous regressors, fully modified quantile regression estimators and augmented quantile cointegrating regression are proposed to remove the second order bias and nuisance parameters. Regression Wald tests are constructed based on the fully modified quantile regression estimators. An empirical application to stock index data highlights the potential of the proposed method.

14.
Contaminated or corrupted data typically require strong assumptions to identify parameters of interest. However, weaker assumptions often identify bounds on these parameters. This paper addresses when covariate data—variables in addition to the one of interest—tighten these bounds. First, we construct the identification region for the distribution of the variable of interest. This region demonstrates that covariate data are useless without knowledge about the distribution of erroneous data conditional on the covariates. Then, we develop bounds both on probabilities and on parameters of this distribution that respect stochastic dominance.

15.
Record linkage is the act of bringing together records from two files that are believed to belong to the same unit (e.g., a person or business). It is a low-cost way of increasing the set of variables available for analysis. Errors may arise in the linking process if an error-free unit identifier is not available. Two types of linking error can occur: an incorrect link (records belonging to two different units are linked) and a missed record (an unlinked record for which a correct link exists). Naively ignoring linkage errors may mean that analysis of the linked file is biased. This paper outlines a "weighting approach" to making correct inference about regression coefficients and population totals in the presence of such linkage errors. This approach is designed for analysts who do not have the expertise or time to use specialist software required by other approaches but who are comfortable using weights in inference. The performance of the estimator is demonstrated in a simulation study.
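One simple way to see how linkage errors bias naive analysis: under a stylized exchangeable-error model in which each link is correct with probability lam, and a wrong link behaves like a draw whose expectation is the population mean, the naive mean can be bias-corrected in closed form. This is a simplified illustration, not the paper's weighting estimator; all values below are hypothetical.

```python
def adjusted_mean(linked_values, lam, pop_mean):
    """Bias-adjusted mean under a stylized linkage-error model:
    E[naive mean] = lam * true_mean + (1 - lam) * pop_mean,
    so invert that relation to recover the true mean."""
    naive = sum(linked_values) / len(linked_values)
    return (naive - (1.0 - lam) * pop_mean) / lam

# Linked file values, assumed 80% correct-link probability, known population mean:
est = adjusted_mean([10.0, 12.0, 14.0], lam=0.8, pop_mean=9.0)
```

The correction pushes the estimate away from the population mean, undoing the attenuation that wrong links induce.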

16.
17.
For modelling the effect of crossed, fixed factors on the response variable in balanced designs with nested stratifications, a generalized linear mixed model is proposed. This model is based on a set of quasi-likelihood assumptions which imply quadratic variance functions. From these variance functions, deviances are obtained to quantify the variation per stratification. The effects of the fixed factors are tested, and dispersion components are estimated. The practical use of the model is illustrated by reanalysing a soldering-failures problem.

18.
In this paper, upon using the known expressions for the Best Linear Unbiased Estimators (BLUEs) of the location and scale parameters of the Laplace distribution based on a progressively Type-II right censored sample, we derive the exact moment generating function (MGF) of the linear combination of standard Laplace order statistics. By using this MGF, we obtain the exact density function of the linear combination. This density function is then utilized to develop exact marginal confidence intervals (CIs) for the location and scale parameters through some pivotal quantities. Next, we derive the exact density of the BLUEs-based quantile estimator and use it to develop exact CIs for the population quantile. A brief mention is made about the reliability and cumulative hazard functions and as to how exact CIs can be constructed for these functions based on BLUEs. A Monte Carlo simulation study is then carried out to evaluate the performance of the developed inferential results. Finally, an example is presented to illustrate the point and interval estimation methods developed here.

19.
Sample surveys are widely used to obtain information about totals, means, medians and other parameters of finite populations. In many applications, similar information is desired for subpopulations such as individuals in specific geographic areas and socio-demographic groups. When the surveys are conducted at national or similarly high levels, probability sampling can result in just a few sampling units from many unplanned subpopulations at the design stage. Cost considerations may also lead to low sample sizes from individual small areas. Estimating the parameters of these subpopulations with satisfactory precision and evaluating their accuracy are serious challenges for statisticians. To overcome the difficulties, statisticians resort to pooling information across the small areas via suitable model assumptions, administrative archives and census data. In this paper, we develop an array of small area quantile estimators. The novelty is the introduction of a semiparametric density ratio model for the error distribution in the unit-level nested error regression model. In contrast, the existing methods are usually most effective when the response values are jointly normal. We also propose a resampling procedure for estimating the mean square errors of these estimators. Simulation results indicate that the new methods have superior performance when the population distributions are skewed and remain competitive otherwise.

20.
A quantile-regression study of Chinese household consumption
Starting from economic growth theory and general-equilibrium analysis, this paper introduces household income and government expenditure into the utility function, examines the relationships among consumption, production, and government behaviour, and derives a dynamic equation for consumption. Quantile-regression estimates based on this equation show that the effect of each variable on consumption differs across consumption levels, and that the effects also differ between urban and rural households.

