Similar Literature
20 similar records found.
1.
Research objective: to estimate and select the fixed-effect and random-effect coefficients simultaneously in random effects quantile regression models. Methods: an adaptive Lasso penalty is imposed on the fixed-effect and random-effect coefficients simultaneously, and an alternating iterative algorithm is designed for parameter estimation. Findings: the new method is not only robust to the distribution of the random errors but also performs well across models of different sparsity levels, especially in high-dimensional settings. Innovation: the proposed method selects the important covariates while fully accounting for the influence of the random effects; the alternating iterative algorithm not only resolves the difficulty of choosing two penalty parameters but also converges quickly. Value: it offers practitioners an effective modeling approach for the analysis of panel and longitudinal data.
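The alternating idea is easy to prototype. Below is a minimal toy sketch, not the paper's implementation: the data, tuning constants, and function names are all invented, random intercepts stand in for general random-effect coefficients, and the check loss plus adaptive-Lasso penalties are minimized block-by-block with a generic optimizer.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy longitudinal data: n subjects, m observations each.
n, m, p = 30, 5, 6
X = rng.normal(size=(n * m, p))
groups = np.repeat(np.arange(n), m)
beta_true = np.array([1.5, -2.0, 0.0, 0.0, 1.0, 0.0])  # sparse fixed effects
b_true = rng.normal(scale=0.5, size=n)                  # random intercepts
y = X @ beta_true + b_true[groups] + rng.standard_t(df=3, size=n * m)

tau = 0.5  # quantile level

def check_loss(r):
    """Quantile (check) loss: rho_tau(r) = r * (tau - 1{r < 0})."""
    return np.sum(r * (tau - (r < 0)))

def objective_beta(beta, b, lam, w):
    r = y - X @ beta - b[groups]
    return check_loss(r) + lam * np.sum(w * np.abs(beta))

def objective_b(b, beta, lam2, w2):
    r = y - X @ beta - b[groups]
    return check_loss(r) + lam2 * np.sum(w2 * np.abs(b))

# Adaptive weights from an unpenalized pilot fit (here: least squares).
beta_pilot = np.linalg.lstsq(X, y, rcond=None)[0]
w = 1.0 / (np.abs(beta_pilot) + 1e-6)

beta, b = beta_pilot.copy(), np.zeros(n)
lam = lam2 = 5.0
for it in range(10):  # alternate between the two coefficient blocks
    beta = minimize(objective_beta, beta, args=(b, lam, w),
                    method="Powell").x
    w2 = 1.0 / (np.abs(b) + 1e-6)
    b = minimize(objective_b, b, args=(beta, lam2, w2),
                 method="Powell").x

print("estimated fixed effects:", np.round(beta, 2))
```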

2.
Much Ado About Nothing: the Mixed Models Controversy Revisited
We consider a well-known controversy that stems from the use of two mixed models for the analysis of balanced experimental data with a fixed and a random factor. It essentially originates in the different statistics developed from such models for testing that the variance parameter associated with the random factor is null. The corresponding hypotheses are interpreted as that of null random factor main effects in the presence of interaction. The controversy is further complicated by different opinions regarding the appropriateness of such a hypothesis. Assuming that it is a sensible option, we show that the standard test statistics obtained under the two models are really directed at different hypotheses and conclude that the problem lies in the definition of the main effects and interactions. We use expected values, as in the fixed effects case, to resolve the controversy, showing that under the most commonly used model, the test usually associated with the inexistence of the random factor main effects addresses a different hypothesis. We discuss the choice of models and some further problems that occur in the presence of unbalanced data.
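To make the two competing statistics concrete, here is a hedged numeric sketch (not from the paper; data and parameters invented) of a balanced two-way layout with a fixed factor A, a random factor B, and interaction, computing the two classical F ratios for H0: sigma_B^2 = 0 that the two model conventions lead to.

```python
import numpy as np

rng = np.random.default_rng(1)

a, b, n = 4, 5, 6                           # levels of A, levels of B, replicates
alpha = rng.normal(scale=1.0, size=a)       # fixed A effects
beta = rng.normal(scale=0.8, size=b)        # random B effects
gamma = rng.normal(scale=0.5, size=(a, b))  # random interaction
y = (alpha[:, None, None] + beta[None, :, None] + gamma[:, :, None]
     + rng.normal(size=(a, b, n)))

ybar = y.mean()
yA, yB, yAB = y.mean(axis=(1, 2)), y.mean(axis=(0, 2)), y.mean(axis=2)

ssb = a * n * np.sum((yB - ybar) ** 2)
ssab = n * np.sum((yAB - yA[:, None] - yB[None, :] + ybar) ** 2)
sse = np.sum((y - yAB[:, :, None]) ** 2)

msb = ssb / (b - 1)
msab = ssab / ((a - 1) * (b - 1))
mse = sse / (a * b * (n - 1))

# Unrestricted model: E[MSB] = sigma^2 + n*sigma_AB^2 + a*n*sigma_B^2,
#   so the test of sigma_B^2 = 0 uses F = MSB / MSAB.
# Restricted (sum-to-zero interaction) model: E[MSB] = sigma^2 + a*n*sigma_B^2,
#   so the test uses F = MSB / MSE.
print("F (MSB/MSAB):", msb / msab)
print("F (MSB/MSE): ", msb / mse)
```

The two ratios estimate different noncentralities, which is the point the paper makes precise: they address different hypotheses about the main effects of B in the presence of interaction.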

3.
The relevance-weighted likelihood function weights individual contributions to the likelihood according to their relevance for the inferential problem of interest. Consistency and asymptotic normality of the weighted maximum likelihood estimator were previously proved for independent sequences of random variables. We extend these results to apply to dependent sequences, and, in so doing, provide a unified approach to a number of diverse problems in dependent data. In particular, we provide a heretofore unknown approach for dealing with heterogeneity in adaptive designs, and unify the smoothing approach that appears in many foundational papers for independent data. Applications are given in clinical trials, psychophysics experiments, time series models, transition models, and nonparametric regression. Received: April 2000
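As a toy illustration of the relevance-weighted likelihood itself (independent of the dependence extensions the paper develops), observations can be down-weighted by their distance from the estimand of interest, which is exactly how the smoothing connection arises; all names and numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Observations whose mean drifts over time; we want the mean "now" (t = 1).
t = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * t) + rng.normal(scale=0.3, size=t.size)

# Relevance weights: observations closer to t = 1 matter more (kernel weights).
h = 0.1
w = np.exp(-0.5 * ((t - 1.0) / h) ** 2)

# The relevance-weighted normal log-likelihood in mu is maximized by the
# weighted mean, so the weighted MLE has a closed form here.
mu_wmle = np.sum(w * y) / np.sum(w)
print("weighted MLE of current mean:", mu_wmle)
print("true current mean:          ", np.sin(2 * np.pi * 1.0))
```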

4.
We characterize the class of dominant-strategy incentive-compatible (or strategy-proof) random social choice functions in the standard multi-dimensional voting model where voter preferences over the various dimensions (or components) are lexicographically separable. We show that these social choice functions (which we call generalized random dictatorships) are induced by probability distributions on voter sequences of length equal to the number of components. They induce a fixed probability distribution on the product set of voter peaks. The marginal probability distribution over every component is a random dictatorship. Our results generalize the classic random dictatorship result in Gibbard (1977) and the decomposability results for strategy-proof deterministic social choice functions for multi-dimensional models with separable preferences obtained in LeBreton and Sen (1999).
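A small sketch of the mechanism described (all names invented; the uniform distribution over voter sequences is just one admissible choice): draw a voter sequence of length equal to the number of components, and let voter seq[k]'s peak decide component k.

```python
import numpy as np

rng = np.random.default_rng(3)

n_voters, n_components, n_alternatives = 5, 3, 4

# peaks[i, k]: voter i's most-preferred alternative on component k
# (preferences are lexicographically separable, so peaks pin down outcomes).
peaks = rng.integers(0, n_alternatives, size=(n_voters, n_components))

def sample_outcome(peaks, rng):
    """One draw from a generalized random dictatorship: a random voter
    sequence of length n_components; component k is decided by seq[k]."""
    seq = rng.integers(0, peaks.shape[0], size=peaks.shape[1])  # uniform here
    return tuple(peaks[seq[k], k] for k in range(peaks.shape[1]))

draws = [sample_outcome(peaks, rng) for _ in range(10000)]

# The marginal over each component is a random dictatorship on that component.
comp0 = np.array([d[0] for d in draws])
print("empirical marginal of component 0:",
      np.bincount(comp0, minlength=n_alternatives) / len(draws))
```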

5.
Budgeting and planning processes require medium-term sales forecasts with marketing scenarios. The complexity in modern retailing necessitates consistent, automatic forecasting and insight generation. Remedies to the high dimensionality problem have drawbacks; black box machine learning methods require voluminous data and lack insights, while regularization may bias causal estimates in interpretable models. The proposed FAIR (Fully Automatic Interpretable Retail forecasting) method supports the retail planning process with multi-step-ahead category-store level forecasts, scenario evaluations, and insights. It considers category-store specific seasonality, focal- and cross-category marketing, and adaptive base sales while dealing with regularization-induced confounding. We show, with three chains from the IRI dataset involving 30 categories, that regularization-induced confounding decreases forecast accuracy. By including focal- and cross-category marketing, as well as random disturbances, forecast accuracy is increased. FAIR is more accurate than the black box machine learning method Boosted Trees and other benchmarks while also providing insights that are in line with the marketing literature.

6.
The calculation of likelihood functions of many econometric models requires the evaluation of integrals without analytical solutions. Approaches for extending Gaussian quadrature to multiple dimensions discussed in the literature are either very specific or suffer from exponentially rising computational costs in the number of dimensions. We propose an extension that is very general and easily implemented, and does not suffer from the curse of dimensionality. Monte Carlo experiments for the mixed logit model indicate the superior performance of the proposed method over simulation techniques.
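For context, a minimal one-dimensional sketch (not the paper's multidimensional extension; all parameters invented) comparing Gauss-Hermite quadrature with Monte Carlo simulation for a mixed logit choice probability with one random coefficient:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

rng = np.random.default_rng(4)

# Mixed logit with one random coefficient: beta ~ N(mu, sigma^2).
mu, sigma = 1.0, 0.5
x = np.array([0.0, 1.0, 2.0])   # attribute of 3 alternatives; prob. of alt 2

def choice_prob(beta):
    u = np.exp(beta[..., None] * x)   # utilities, broadcast over draws/nodes
    return u[..., 2] / u.sum(axis=-1)

# Gauss-Hermite: integral of f(b) * N(b; mu, sigma^2) db via the
# substitution b = mu + sqrt(2) * sigma * node.
nodes, weights = hermgauss(15)
b = mu + np.sqrt(2.0) * sigma * nodes
p_quad = np.sum(weights * choice_prob(b)) / np.sqrt(np.pi)

# Monte Carlo simulation with R pseudo-random draws.
R = 10000
p_sim = choice_prob(mu + sigma * rng.normal(size=R)).mean()

print(f"quadrature (15 nodes):  {p_quad:.6f}")
print(f"simulation ({R} draws): {p_sim:.6f}")
```

The quadrature answer is essentially exact with 15 nodes, while the simulated probability still carries Monte Carlo noise; the paper's contribution is keeping this accuracy advantage affordable as the number of integration dimensions grows.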

7.
The city size distribution in many countries is remarkably well described by a Pareto distribution. We derive conditions that standard urban models must satisfy in order to explain this regularity. We show that under general conditions urban models must have (i) a balanced growth path and (ii) a Pareto distribution for the underlying source of randomness. In particular, one of the following combinations can induce a Pareto distribution of city sizes: (i) preferences for different goods follow reflected random walks, and the elasticity of substitution between goods is 1; or (ii) total factor productivities of different goods follow reflected random walks, and increasing returns are equal across goods.
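The reflected-random-walk mechanism can be simulated directly. The following hedged sketch (parameters invented) shows a reflected random walk in log size generating an approximate Pareto upper tail, checked with a log rank-size regression; the stationary tail exponent is 2 * drift / variance, which equals 1 (Zipf's law) for the values below.

```python
import numpy as np

rng = np.random.default_rng(5)

n_cities, T = 2000, 4000
barrier = 0.0                    # reflecting lower barrier in log size
s = np.zeros(n_cities)           # log city sizes

for _ in range(T):
    s += rng.normal(loc=-0.005, scale=0.1, size=n_cities)  # downward drift
    s = np.maximum(s, barrier)                             # reflection

sizes = np.sort(np.exp(s))[::-1]
top = sizes[:200]                 # upper tail
rank = np.arange(1, top.size + 1)

# Pareto tail  <=>  log(rank) is linear in log(size) with slope -alpha.
slope, intercept = np.polyfit(np.log(top), np.log(rank), 1)
print("estimated Pareto tail exponent:", round(-slope, 2))  # ~1 here
```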

8.
We introduce a new forecasting methodology, referred to as adaptive learning forecasting, that allows for both forecast averaging and forecast error learning. We analyze its theoretical properties and demonstrate that it provides a priori MSE improvements under certain conditions. The learning rate based on past forecast errors is shown to be non-linear. This methodology is of wide applicability and can provide MSE improvements even for the simplest benchmark models. We illustrate the method's application using data on prices for several agricultural products, as well as on real GDP growth for several of the corresponding countries. The time series of agricultural prices are short and show an irregular cyclicality that can be linked to economic performance and productivity, and we consider a variety of forecasting models, both univariate and bivariate, that are linked to output and productivity. Our results support both the efficacy of the new method and the forecastability of agricultural prices.
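The abstract's description suggests a combination weight that adapts to past forecast errors. One hedged toy version of such a rule (the paper's actual, non-linear learning rate is more elaborate; everything below is invented for illustration) is a gradient step on the last squared combination error:

```python
import numpy as np

rng = np.random.default_rng(6)

# Two competing forecasts of an AR(1) series: the true model and a naive one.
T, phi = 300, 0.7
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + rng.normal()

f_model = phi * np.roll(y, 1)   # one-step AR(1) forecast
f_naive = np.roll(y, 1)         # random-walk forecast
f_model[0] = f_naive[0] = 0.0

# Adaptive learning: combination weight updated from the last forecast error.
w, gamma = 0.5, 0.05            # initial weight, learning rate
combo = np.zeros(T)
for t in range(1, T):
    combo[t] = w * f_model[t] + (1 - w) * f_naive[t]
    err = y[t] - combo[t]
    # gradient step on the squared error with respect to the weight
    w = np.clip(w + gamma * err * (f_model[t] - f_naive[t]), 0.0, 1.0)

mse = lambda f: np.mean((y[1:] - f[1:]) ** 2)
print("MSE model:", mse(f_model), "naive:", mse(f_naive), "combo:", mse(combo))
```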

9.
This paper presents estimation methods for dynamic nonlinear models with correlated random effects (CRE) when panels are unbalanced. Unbalancedness is often encountered in applied work, and ignoring it in dynamic nonlinear models produces inconsistent estimates even if the unbalancedness process is completely at random. We show that selecting a balanced panel from the sample can produce efficiency losses or even inconsistent estimates of the average marginal effects. We allow the process that determines the unbalancedness structure of the data to be correlated with the permanent unobserved heterogeneity. We discuss how to address the estimation by maximizing the likelihood function for the whole sample, and we also propose a Minimum Distance approach, which is computationally simpler and asymptotically equivalent to Maximum Likelihood estimation. Our Monte Carlo experiments and empirical illustration show that the issue is relevant. Our proposed solutions perform better, in terms of both bias and RMSE, than approaches that ignore the unbalancedness or that balance the sample.

10.
There has been considerable and controversial research over the past two decades into how successfully random effects misspecification in mixed models (i.e. assuming normality for the random effects when the true distribution is non-normal) can be diagnosed and what its impacts are on estimation and inference. However, much of this research has focused on fixed effects inference in generalised linear mixed models. In this article, motivated by the increasing number of applications of mixed models where interest is on the variance components, we study the effects of random effects misspecification on random effects inference in linear mixed models, for which there is considerably less literature. Our findings are surprising and contrary to general belief: for point estimation, maximum likelihood estimation of the variance components under misspecification is consistent, although in finite samples, both the bias and mean squared error can be substantial. For inference, we show through theory and simulation that under misspecification, standard likelihood ratio tests of truly non-zero variance components can suffer from severely inflated type I errors, and confidence intervals for the variance components can exhibit considerable undercoverage. Furthermore, neither of these problems vanishes asymptotically as the number of clusters or the cluster size increases. These results have major implications for random effects inference, especially if the true random effects distribution is heavier tailed than the normal. Fortunately, simple graphical and goodness-of-fit measures of the random effects predictions appear to have reasonable power at detecting misspecification. We apply linear mixed models to a survey of more than 4,000 high school students within 100 schools and analyse how mathematics achievement scores vary with student attributes and across different schools. The application demonstrates the sensitivity of mixed model inference to the true but unknown random effects distribution.
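The point-estimation claim can be checked with a tiny simulation. Here is a hedged sketch for a balanced one-way random intercept model under heavily skewed (exponential) random effects; for simplicity it uses the ANOVA moment estimator of the variance component, which coincides with REML in this balanced design whenever it is positive, rather than a full ML fit.

```python
import numpy as np

rng = np.random.default_rng(7)

def sigma_b2_hat(n_groups, n_per, sigma_b):
    # Centred exponential random effects: skewed, but variance sigma_b^2.
    b = sigma_b * (rng.exponential(size=n_groups) - 1.0)
    y = b[:, None] + rng.normal(size=(n_groups, n_per))
    msb = n_per * np.sum((y.mean(axis=1) - y.mean()) ** 2) / (n_groups - 1)
    msw = np.sum((y - y.mean(axis=1, keepdims=True)) ** 2) / (n_groups * (n_per - 1))
    return (msb - msw) / n_per      # ANOVA/REML estimator of sigma_b^2

sigma_b = 0.8                       # true variance component = 0.64
for n_groups in (20, 100, 1000):
    est = [sigma_b2_hat(n_groups, 5, sigma_b) for _ in range(500)]
    print(n_groups, "groups: mean estimate", np.round(np.mean(est), 3))
```

The point estimates concentrate on the true 0.64 as the number of clusters grows, consistent with the paper's finding; the inflated type I errors it documents concern the distribution of the test statistics, not the point estimates.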

11.
We study the warehouse layout problem under stochastic conditions, build a chance-constrained programming model for the stochastic warehouse layout problem, and design a tabu search algorithm based on stochastic simulation to solve the model. Finally, a numerical example is used to verify the effectiveness of the algorithm.
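A compact sketch of the algorithmic idea, with a toy problem and all names, costs, and constants invented (not the paper's model): a tabu search over item-to-slot assignments, where the chance constraint is checked by Monte Carlo simulation.

```python
import numpy as np

rng = np.random.default_rng(8)

n_items = 8
dist = np.arange(1, n_items + 1, dtype=float)   # slot distance from the door
mean_demand = rng.uniform(5, 20, size=n_items)  # per-item mean picking rate

def expected_cost(perm):
    """Travel cost: demand-weighted distance; perm[k] = item in slot k."""
    return float(np.sum(mean_demand[perm] * dist))

def chance_feasible(perm, cap=500.0, alpha=0.90, n_sim=400):
    """Chance constraint P(daily travel <= cap) >= alpha, checked by
    simulating daily demands as Poisson around their means."""
    demands = rng.poisson(mean_demand[perm], size=(n_sim, n_items))
    return np.mean(demands @ dist <= cap) >= alpha

perm = rng.permutation(n_items)
best, best_cost = None, np.inf
tabu = []                                        # recently swapped slot pairs

for it in range(100):
    candidates = []
    for i in range(n_items):
        for j in range(i + 1, n_items):
            if (i, j) in tabu:
                continue
            cand = perm.copy()
            cand[i], cand[j] = cand[j], cand[i]
            if chance_feasible(cand):
                candidates.append((expected_cost(cand), (i, j), cand))
    if not candidates:
        break
    cost, move, perm = min(candidates, key=lambda c: c[0])
    tabu = (tabu + [move])[-10:]                 # fixed-length tabu list
    if cost < best_cost:
        best, best_cost = perm.copy(), cost

print("best feasible layout:", best, "expected cost:", round(best_cost, 1))
```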

12.
In this paper, we suggest a blockwise bootstrap wavelet to estimate the regression function in nonparametric regression models with weakly dependent processes, for both fixed and random designs. We obtain the asymptotic orders of the biases and variances of the estimators and establish the asymptotic normality for a modified version of the estimators. We also introduce a principle to select the length of the data block. These results show that the blockwise bootstrap wavelet is valid for general weakly dependent processes such as α-mixing, φ-mixing and ρ-mixing random variables.
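A minimal sketch of the blockwise (moving block) bootstrap resampling step for dependent data, here applied to the mean of an AR(1) series rather than to the paper's wavelet regression estimator; the block length is a fixed choice below, whereas the paper proposes a selection principle.

```python
import numpy as np

rng = np.random.default_rng(9)

# Dependent data: AR(1), so an iid bootstrap would understate the variance.
T, phi = 400, 0.6
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + rng.normal()

def moving_block_bootstrap(x, block_len, rng):
    """Resample by concatenating randomly started contiguous blocks."""
    n = x.size
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    sample = np.concatenate([x[s:s + block_len] for s in starts])
    return sample[:n]

B, L = 2000, 20
boot_means = [moving_block_bootstrap(y, L, rng).mean() for _ in range(B)]
print("block bootstrap s.e. of the mean:", np.std(boot_means))
print("theoretical long-run s.e.:       ",
      np.sqrt(1.0 / ((1 - phi) ** 2 * T)))   # long-run variance of AR(1) mean
```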

13.
Ornstein–Uhlenbeck models are continuous-time processes which have broad applications in finance as, e.g., volatility processes in stochastic volatility models or spread models in spread options and pairs trading. The paper presents a least squares estimator for the model parameter in a multivariate Ornstein–Uhlenbeck model driven by a multivariate regularly varying Lévy process with infinite variance. We show that the estimator is consistent. Moreover, we derive its asymptotic behavior and test statistics. The results are compared to the finite variance case. For the proof we require some new results on multivariate regular variation of products of random vectors and central limit theorems. Furthermore, we embed this model in the setup of a co-integrated model in continuous time.
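The least squares idea can be sketched in a few lines: the discretized OU process satisfies X_{t+Δ} ≈ exp(AΔ) X_t + noise, so regressing X_{t+Δ} on X_t recovers exp(AΔ). The sketch below drives the process with Brownian motion as a stand-in for the paper's infinite-variance, regularly varying Lévy driver; all parameter values are invented.

```python
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(10)

# dX_t = A X_t dt + dL_t, simulated by Euler steps with a Gaussian driver.
A = np.array([[-1.0, 0.3],
              [0.0, -0.5]])
dt, T = 0.01, 50.0
n = int(T / dt)
X = np.zeros((n, 2))
for t in range(1, n):
    X[t] = X[t - 1] + A @ X[t - 1] * dt + np.sqrt(dt) * rng.normal(size=2)

# Least squares: regress X_{t+1} on X_t to estimate Phi = exp(A * dt),
# then invert the matrix exponential to recover A.
X0, X1 = X[:-1], X[1:]
Phi_hat = np.linalg.lstsq(X0, X1, rcond=None)[0].T
A_hat = np.real(logm(Phi_hat)) / dt

print("true A:\n", A)
print("estimated A:\n", A_hat.round(2))
```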

14.
We propose an alternative method for estimating the nonlinear component in semiparametric panel data models. Our method is based on marginal integration that allows us to recover the nonlinear component from an additive regression structure that results from the first differencing transformation. We characterize the asymptotic behavior of our estimator. We also extend the methodology to treat panel data models with two-way effects. Monte Carlo simulations show that our estimator behaves well in finite samples in both random effects and fixed effects settings.

15.
Random mechanisms have been used in real-life situations for reasons such as fairness. Voting and matching are two examples of such situations. We investigate whether the desirable properties of a random mechanism survive decomposition of the mechanism as a lottery over deterministic mechanisms that also hold such properties. To this end, we represent properties of mechanisms, such as ordinal strategy-proofness or individual rationality, using linear constraints. Using the theory of totally unimodular matrices from combinatorial integer programming, we show that total unimodularity is a sufficient condition for the decomposability of linear constraints on random mechanisms. As two illustrative examples, we show that individual rationality is totally unimodular in general, and that strategy-proofness is totally unimodular in some individual choice models. We also introduce a second, more constructive approach to decomposition problems, and prove that feasibility, strategy-proofness, and unanimity, with and without anonymity, are decomposable in non-dictatorial single-peaked voting domains. Just as importantly, we establish that strategy-proofness is not decomposable in some natural problems.
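Total unimodularity (every square submatrix has determinant in {-1, 0, 1}) can be verified by brute force for small constraint matrices; a hedged sketch with an invented example matrix follows (the paper of course works with the structural theory, not enumeration).

```python
import numpy as np
from itertools import combinations

def is_totally_unimodular(M, tol=1e-9):
    """Brute-force check: every square submatrix has det in {-1, 0, 1}."""
    m, n = M.shape
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                d = np.linalg.det(M[np.ix_(rows, cols)])
                if min(abs(d), abs(abs(d) - 1)) > tol:
                    return False
    return True

# An interval matrix (consecutive ones in each row): a classic TU family.
M_tu = np.array([[1, 1, 0, 0],
                 [0, 1, 1, 1],
                 [0, 0, 1, 1]])
# A "wrap-around" row breaks the interval structure and TU-ness
# (rows/cols {1,2,4} x {1,2,4} give determinant 2).
M_not = np.vstack([M_tu, [1, 0, 0, 1]])

print(is_totally_unimodular(M_tu))   # True
print(is_totally_unimodular(M_not))  # False
```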

16.
Many structural break and regime-switching models have been used with macroeconomic and financial data. In this paper, we develop an extremely flexible modeling approach which can accommodate virtually any of these specifications. We build on earlier work showing the relationship between flexible functional forms and random variation in parameters. Our contribution centres on priors on the time variation, developed by considering a hypothetical reordering of the data and the distance between neighboring (reordered) observations. The range of priors produced in this way can accommodate a wide variety of nonlinear time series models, including those with regime-switching and structural breaks. By allowing the amount of random variation in parameters to depend on the distance between (reordered) observations, the parameters can evolve in a wide variety of ways, allowing for everything from models exhibiting abrupt change (e.g. threshold autoregressive models or standard structural break models) to those which allow for a gradual evolution of parameters (e.g. smooth transition autoregressive models or time varying parameter models). Bayesian econometric methods are developed for estimating the distance function and the type of hypothetical reordering. Conditional on a hypothetical reordering and distance function, a simple reordering of the actual data allows us to estimate our models with standard state space methods by a simple adjustment to the measurement equation. We use artificial data to show the advantages of our approach, before providing two empirical illustrations involving the modeling of real GDP growth.

17.
In this paper, we provide an intensive review of the recent developments for semiparametric and fully nonparametric panel data models that are linearly separable in the innovation and the individual-specific term. We analyze these developments under two alternative model specifications: fixed and random effects panel data models. More precisely, in the random effects setting, we focus our attention on the analysis of some efficiency issues that have to do with the so-called working independence condition. This assumption is introduced when estimating the asymptotic variance–covariance matrix of nonparametric estimators. In the fixed effects setting, to cope with the so-called incidental parameters problem, we consider two different estimation approaches: profiling techniques and differencing methods. Furthermore, we are also interested in the endogeneity problem and how instrumental variables are used in this context. In addition, for practitioners, we also show different ways of avoiding the so-called curse of dimensionality problem in pure nonparametric models. In this way, semiparametric and additive models appear as a solution when the number of explanatory variables is large.

18.
Parameter uncertainty has fuelled criticisms of the robustness of results from computable general equilibrium models. This has led to the development of alternative sensitivity analysis approaches. Researchers have used Monte Carlo analysis for systematic sensitivity analysis because of its flexibility, but Monte Carlo analysis may yield biased simulation results. Gaussian quadratures have also been widely applied, although they can be difficult to implement in practice. This paper applies an alternative approach to systematic sensitivity analysis, Monte Carlo filtering, and examines how its results compare to both the Monte Carlo and Gaussian quadrature approaches. It does so via an application to rural development policies in Aberdeenshire, Scotland. We find that Monte Carlo filtering outperforms the conventional Monte Carlo approach and is a viable alternative when a Gaussian quadrature approach cannot be applied or is too complex to implement.
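The core of Monte Carlo filtering is easy to sketch (toy model and all names invented, standing in for a CGE run): sample the uncertain parameters, split runs into "behavioural" and "non-behavioural" according to an output criterion, and compare the two parameter samples with a Kolmogorov-Smirnov test.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(11)

def toy_model(elasticity, subsidy, noise):
    """Stand-in for a model run: regional output response to a policy."""
    return 100 * (1 + subsidy) ** elasticity + noise

N = 5000
elasticity = rng.uniform(0.1, 2.0, size=N)   # uncertain parameter 1
subsidy = rng.uniform(0.0, 0.3, size=N)      # uncertain parameter 2
noise = rng.normal(scale=2.0, size=N)
output = toy_model(elasticity, subsidy, noise)

# Filter: "behavioural" runs are those meeting a policy target.
behavioural = output > 115.0

# Parameters whose filtered and unfiltered distributions differ drive
# the target outcome; the KS test quantifies that difference.
for name, draws in [("elasticity", elasticity), ("subsidy", subsidy)]:
    stat, pval = ks_2samp(draws[behavioural], draws[~behavioural])
    print(f"{name}: KS statistic = {stat:.3f}, p = {pval:.1e}")
```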

19.
We demonstrate that many current approaches for marginal modelling of correlated binary outcomes produce likelihoods that are equivalent to the copula-based models herein. These general copula models of underlying latent threshold random variables yield likelihood-based models for marginal fixed effects estimation and interpretation in the analysis of correlated binary data with exchangeable correlation structures. Moreover, we propose a nomenclature and a set of model relationships that substantially elucidate the complex area of marginalised random-intercept models for binary data. A diverse collection of didactic mathematical and numerical examples is given to illustrate the concepts.
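One of the relationships the paper organizes, between a latent-threshold (probit) random-intercept model and the marginal probabilities it implies, can be sketched numerically; the marginalization over the intercept is done here with Gauss-Hermite quadrature, and the notation is illustrative rather than the paper's.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.stats import norm

# Latent threshold model: Y_ij = 1{x * beta + u_i + e_ij > 0}, u_i ~ N(0, s2),
# e_ij ~ N(0, 1), giving exchangeable correlation within cluster i.
beta_x, s2 = 0.5, 1.0

cond_p = lambda u: norm.cdf(beta_x + u)   # P(Y = 1 | random intercept u)

# Marginal P(Y = 1): integrate the conditional probability over u.
nodes, weights = hermgauss(30)
u = np.sqrt(2 * s2) * nodes
p_marg = np.sum(weights * cond_p(u)) / np.sqrt(np.pi)

# Equivalent closed form for the probit-normal mixture.
p_closed = norm.cdf(beta_x / np.sqrt(1 + s2))
print("marginal P(Y=1):", p_marg, "closed form:", p_closed)

# Pairwise joint probability within a cluster exceeds independence,
# which is exactly the exchangeable dependence the copula captures.
p11 = np.sum(weights * cond_p(u) ** 2) / np.sqrt(np.pi)
print("P(both = 1):", p11, "vs independence:", p_marg ** 2)
```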

20.
Given the growing number of available tools for modeling dynamic networks, the choice of a suitable model becomes central. The goal of this survey is to provide an overview of tie-oriented dynamic network models. The survey is focused on introducing binary network models with their corresponding assumptions, advantages, and shortcomings. The models are divided according to their generating processes, operating in discrete or continuous time. First, we introduce the temporal exponential random graph model (TERGM) and the separable TERGM (STERGM), both being time-discrete models. These models are then contrasted with continuous-time process models, focusing on the relational event model (REM). We additionally show how the REM can handle time-clustered observations, that is, continuous-time data observed at discrete time points. Besides the discussion of theoretical properties and fitting procedures, we specifically focus on the application of the models to two networks that represent international arms transfers and email exchange, respectively. The data allow us to demonstrate the applicability and interpretation of the network models.
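As a flavour of the continuous-time side, here is a hedged toy computation (invented event data and a single "inertia" statistic) of the relational event model log-likelihood, in which each dyad's event rate is log-linear in its past event count and the likelihood combines event terms with survival terms between events.

```python
import numpy as np

# Toy event stream: (time, sender, receiver) among 3 actors.
events = [(0.5, 0, 1), (1.2, 0, 1), (1.9, 1, 2), (2.4, 0, 1)]
n_actors = 3
dyads = [(s, r) for s in range(n_actors) for r in range(n_actors) if s != r]

def rem_loglik(beta):
    """REM log-likelihood with rate lambda(s, r) = exp(beta * inertia(s, r))."""
    counts = {d: 0 for d in dyads}      # inertia: past events on each dyad
    ll, t_prev = 0.0, 0.0
    for t, s, r in events:
        rates = {d: np.exp(beta * counts[d]) for d in dyads}
        # survival part: no event on any dyad during (t_prev, t)
        ll -= (t - t_prev) * sum(rates.values())
        # event part: the observed dyad fires at time t
        ll += np.log(rates[(s, r)])
        counts[(s, r)] += 1
        t_prev = t
    return ll

# Crude grid search for the MLE of the inertia effect.
grid = np.linspace(-1, 2, 301)
beta_hat = grid[np.argmax([rem_loglik(b) for b in grid])]
print("inertia effect estimate:", round(beta_hat, 2))
```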
