Similar Articles
20 similar articles were retrieved.
1.
In this paper, we apply the model selection approach based on likelihood ratio (LR) tests developed in Vuong (1986) to the problem of choosing between two normal linear regression models which are non-nested. We explicitly derive the procedure when the competing linear models are both misspecified. Some simplifications arise when the models are contained in a larger correctly specified linear regression model, or when one competing linear model is correctly specified.
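As an illustration of the kind of comparison involved, the sketch below computes a Vuong-type LR statistic for two non-nested normal linear regressions on simulated data. It is a minimal numpy rendering of the general idea, not the paper's exact procedure; the toy data-generating process and all names are hypothetical.

```python
# Minimal sketch of a Vuong-type test for two non-nested normal linear regressions.
import numpy as np

def normal_loglik(y, X):
    """Per-observation Gaussian log-likelihood evaluated at the OLS/ML fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / len(y)              # ML variance estimate
    return -0.5 * (np.log(2 * np.pi * sigma2) + resid**2 / sigma2)

def vuong_statistic(y, X1, X2):
    """Approximately N(0,1) when both models are equally close to the truth."""
    lr = normal_loglik(y, X1) - normal_loglik(y, X2)
    return np.sqrt(len(y)) * lr.mean() / lr.std(ddof=1)

rng = np.random.default_rng(0)
n = 500
z1, z2 = rng.normal(size=(2, n))
y = 1.0 + 0.8 * z1 + rng.normal(size=n)
X1 = np.column_stack([np.ones(n), z1])           # model 1: intercept + z1
X2 = np.column_stack([np.ones(n), z2])           # model 2: intercept + z2
print(vuong_statistic(y, X1, X2))                # a large positive value favours model 1
```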

2.
According to the law of likelihood, statistical evidence is represented by likelihood functions and its strength measured by likelihood ratios. This point of view has led to a likelihood paradigm for interpreting statistical evidence, which carefully distinguishes evidence about a parameter from error probabilities and personal belief. Like other paradigms of statistics, the likelihood paradigm faces challenges when data are observed incompletely, due to non-response or censoring, for instance. Standard methods to generate likelihood functions in such circumstances generally require assumptions about the mechanism that governs the incomplete observation of data, assumptions that usually rely on external information and cannot be validated with the observed data. Without reliable external information, the use of untestable assumptions driven by convenience could potentially compromise the interpretability of the resulting likelihood as an objective representation of the observed evidence. This paper proposes a profile likelihood approach for representing and interpreting statistical evidence with incomplete data without imposing untestable assumptions. The proposed approach is based on partial identification and is illustrated with several statistical problems involving missing data or censored data. Numerical examples based on real data are presented to demonstrate the feasibility of the approach.
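The partial-identification flavour of such a profile likelihood can be illustrated with a toy Bernoulli-with-nonresponse example: the success probability among nonrespondents is left completely free, so the profile log-likelihood is flat over an identification region rather than peaked at a point. This is a hedged sketch of the general idea, not the paper's method, and the counts are made up.

```python
# Profile log-likelihood of the overall success probability p with nonignorable nonresponse,
# profiling over the unidentified success rate p0 among nonrespondents.
import numpy as np

n, m, s = 100, 70, 42                      # total units, respondents, observed successes
grid = np.linspace(0.001, 0.999, 400)
P1, R = np.meshgrid(grid, grid)            # success rate among respondents, response rate
LL = (m * np.log(R) + (n - m) * np.log(1 - R)
      + s * np.log(P1) + (m - s) * np.log(1 - P1))   # observed-data log-likelihood

def profile_loglik(p):
    """Maximise LL over (P1, R) subject to p = R*P1 + (1-R)*p0 for some p0 in [0, 1]."""
    p0 = (p - R * P1) / (1 - R)
    feasible = (p0 >= 0) & (p0 <= 1)
    return LL[feasible].max() if feasible.any() else -np.inf

ps = np.linspace(0.05, 0.95, 19)
rel = np.array([profile_loglik(p) for p in ps])
print(np.round(rel - rel.max(), 2))        # roughly flat at 0 over the identification region, falling outside
```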

3.
Parametric mixture models are commonly used in applied work, especially empirical economics, where these models are often employed to learn, for example, about the proportions of various types in a given population. This paper examines the inference question on the proportions (mixing probability) in a simple mixture model in the presence of nuisance parameters when sample size is large. It is well known that likelihood inference in mixture models is complicated due to (1) lack of point identification, and (2) parameters (for example, mixing probabilities) whose true value may lie on the boundary of the parameter space. These issues cause the profiled likelihood ratio (PLR) statistic to admit asymptotic limits that differ discontinuously depending on how the true density of the data approaches the regions of singularities where there is lack of point identification. This lack of uniformity in the asymptotic distribution suggests that confidence intervals based on pointwise asymptotic approximations might lead to faulty inferences. This paper examines this problem in detail in a finite mixture model and provides possible fixes based on the parametric bootstrap. We examine the performance of this parametric bootstrap in Monte Carlo experiments and apply it to data from Beauty Contest experiments. We also examine small sample inferences and projection methods.
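A minimal sketch of the parametric-bootstrap fix, under simplifying assumptions (a two-component normal mixture with unit variances and a single nuisance mean), might look as follows; it is illustrative only and not the paper's implementation.

```python
# Parametric bootstrap for the profiled likelihood ratio (PLR) in the mixture
# pi*N(mu,1) + (1-pi)*N(0,1), testing H0: pi = pi0 with mu as a nuisance parameter.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def negloglik(theta, x):
    pi, mu = theta
    dens = pi * norm.pdf(x, mu, 1) + (1 - pi) * norm.pdf(x, 0, 1)
    return -np.sum(np.log(dens + 1e-300))

def plr(x, pi0):
    full = minimize(negloglik, [0.5, 1.0], args=(x,),
                    bounds=[(0.0, 1.0), (-5, 5)], method="L-BFGS-B")
    null = minimize(lambda m: negloglik([pi0, m[0]], x), [1.0],
                    bounds=[(-5, 5)], method="L-BFGS-B")
    return 2 * (null.fun - full.fun), null.x[0]   # PLR statistic and null-restricted mu

rng = np.random.default_rng(1)
n, pi0 = 300, 0.3
x = np.where(rng.random(n) < pi0, rng.normal(1.5, 1, n), rng.normal(0, 1, n))
stat, mu_null = plr(x, pi0)

# simulate the null distribution of the PLR instead of relying on a chi-square limit
boot = []
for _ in range(99):
    xb = np.where(rng.random(n) < pi0, rng.normal(mu_null, 1, n), rng.normal(0, 1, n))
    boot.append(plr(xb, pi0)[0])
print(stat, np.mean(np.array(boot) >= stat))      # PLR and its bootstrap p-value
```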

4.
Logistic Regression, a review
A review is given of the development of logistic regression as a multi-purpose statistical tool.
A historical introduction shows several lines of development culminating in the unifying paper of Cox (1966), in which theory developed in the field of bio-assay is shown to be applicable to designs such as discriminant analysis and the case-control study. A review is given of several designs, all leading to the same analysis. The link is made with the epidemiological literature.
Several optimization criteria that can be used when there is more than one observation per cell are discussed, namely maximum likelihood, minimum chi-square and weighted regression on the observed logits. Recent literature on the goodness-of-fit problem is reviewed and, finally, comments are made about the non-parametric approach to logistic regression, which is still developing rapidly.
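For concreteness, a small sketch of the maximum likelihood criterion via iteratively reweighted least squares (IRLS) is given below; it is a generic textbook-style implementation, not code from the review, and the simulated data are hypothetical.

```python
# Newton-Raphson / IRLS fit of a binary logistic regression by maximum likelihood.
import numpy as np

def fit_logistic(X, y, n_iter=25):
    """X includes an intercept column; returns the ML coefficient vector."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))            # fitted probabilities
        W = p * (1.0 - p)                              # IRLS weights
        z = X @ beta + (y - p) / W                     # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(2)
x = rng.normal(size=200)
X = np.column_stack([np.ones(200), x])
y = (rng.random(200) < 1 / (1 + np.exp(-(0.5 + 1.2 * x)))).astype(float)
print(fit_logistic(X, y))    # roughly recovers the true coefficients (0.5, 1.2)
```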

5.
Chi-Chung Wen, Metrika, 2010, 72(2): 199–217
This paper studies semiparametric maximum likelihood estimators in the Cox proportional hazards model with covariate error, assuming that the conditional distribution of the true covariate given the surrogate is known. We show that the estimator of the regression coefficient is asymptotically normal and efficient, its covariance matrix can be estimated consistently by differentiation of the profile likelihood, and the likelihood ratio test is asymptotically chi-squared. We also provide efficient algorithms for the computation of the semiparametric maximum likelihood estimate and the profile likelihood. The performance of this method is successfully demonstrated in simulation studies.
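The idea of estimating a covariance matrix by differentiating a profile likelihood can be illustrated in a deliberately simple setting (a normal mean with the variance profiled out); the sketch below is a generic illustration of that curvature argument, not the Cox-model-with-covariate-error computation studied in the paper.

```python
# Estimate a standard error from the numerical curvature of a profile log-likelihood.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(loc=2.0, scale=1.5, size=400)

def profile_loglik(mu):
    """Normal log-likelihood for mu with the variance profiled out."""
    s2 = np.mean((x - mu) ** 2)                 # MLE of the variance given mu
    return -0.5 * len(x) * (np.log(2 * np.pi * s2) + 1.0)

mu_hat, h = x.mean(), 1e-3
# second derivative by central differences; minus its inverse estimates Var(mu_hat)
curv = (profile_loglik(mu_hat + h) - 2 * profile_loglik(mu_hat) + profile_loglik(mu_hat - h)) / h**2
print(np.sqrt(-1.0 / curv), x.std(ddof=0) / np.sqrt(len(x)))   # the two should agree closely
```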

6.
We consider the dynamic factor model and show how smoothness restrictions can be imposed on factor loadings by using cubic spline functions. We develop statistical procedures based on Wald, Lagrange multiplier and likelihood ratio tests for this purpose. The methodology is illustrated by analyzing a newly updated monthly time series panel of the US term structure of interest rates. Dynamic factor models with and without smooth loadings are compared with dynamic models based on Nelson–Siegel and cubic spline yield curves. We conclude that smoothness restrictions on factor loadings are supported by the interest rate data and can lead to more accurate forecasts. Copyright © 2013 John Wiley & Sons, Ltd.
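As a point of reference for the comparison models mentioned above, the sketch below builds the smooth Nelson–Siegel loadings and fits the three factors cross-sectionally by least squares; the maturities, the decay parameter and the toy yields are illustrative assumptions, not the paper's data or estimation procedure.

```python
# Nelson-Siegel loadings: smooth functions of maturity restricting how yields load
# on level, slope and curvature factors.
import numpy as np

def nelson_siegel_loadings(maturities, lam=0.0609):   # lam: a common choice for monthly maturities
    tau = np.asarray(maturities, dtype=float)
    slope = (1 - np.exp(-lam * tau)) / (lam * tau)
    curvature = slope - np.exp(-lam * tau)
    return np.column_stack([np.ones_like(tau), slope, curvature])

maturities = np.array([3, 6, 12, 24, 36, 60, 84, 120])   # in months
B = nelson_siegel_loadings(maturities)

# hypothetical yields for one month; the cross-sectional OLS fit recovers the three factors
factors_true = np.array([5.0, -1.0, 0.5])
yields = B @ factors_true + 0.01 * np.random.default_rng(4).normal(size=len(maturities))
factors_hat, *_ = np.linalg.lstsq(B, yields, rcond=None)
print(factors_hat)
```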

7.
This comment assesses how age, period and cohort (APC) effects are modelled with panel data in the social sciences. It considers variations on a 2-level multilevel model which has been used to show apparent evidence for simultaneous APC effects. We show that such an interpretation is often misleading, and that the formulation and interpretation of these models require a better understanding of APC effects and the exact collinearity present between them. This interpretation must draw on theory to justify the claims that are made. By comparing two papers which over-interpret such a model, and another that in our view interprets it appropriately, we outline best practice for researchers aiming to use panel datasets to find APC effects, with an understanding that it is impossible for any statistical model to find and separate all three effects.

8.
Tsung-Shan Tsou, Metrika, 2010, 71(1): 101–115
This paper introduces a way of modifying the bivariate normal likelihood function. One can use the adjusted likelihood to generate valid likelihood inferences for the regression parameter of interest, even if the bivariate normal assumption is fallacious. The retained asymptotic legitimacy requires no knowledge of the true underlying joint distributions so long as their second moments exist. The extension to multivariate situations is straightforward in theory and yet appears to be arduous computationally. Nevertheless, it is illustrated that the implementation of this seemingly sophisticated procedure is almost effortless, needing only outputs from existing statistical software. The efficacy of the proposed parametric approach is demonstrated via simulations.

9.
The paper discusses the asymptotic validity of posterior inference of pseudo‐Bayesian quantile regression methods with complete or censored data when an asymmetric Laplace likelihood is used. The asymmetric Laplace likelihood has a special place in the Bayesian quantile regression framework because the usual quantile regression estimator can be derived as the maximum likelihood estimator under such a model, and this working likelihood enables highly efficient Markov chain Monte Carlo algorithms for posterior sampling. However, it seems to be under‐recognised that the stationary distribution for the resulting posterior does not provide valid posterior inference directly. We demonstrate that a simple adjustment to the covariance matrix of the posterior chain leads to asymptotically valid posterior inference. Our simulation results confirm that the posterior inference, when appropriately adjusted, is an attractive alternative to other asymptotic approximations in quantile regression, especially in the presence of censored data.
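The first point, that the usual quantile regression estimator is the MLE under an asymmetric Laplace working likelihood, can be sketched directly: minimising the check loss is equivalent to maximising that likelihood. The code below illustrates only this equivalence on simulated data; it does not implement the paper's posterior covariance adjustment.

```python
# Quantile regression as minimisation of the check loss, equivalently maximisation of
# an asymmetric Laplace working likelihood (up to constants).
import numpy as np
from scipy.optimize import minimize

def check_loss(beta, X, y, tau):
    u = y - X @ beta
    return np.sum(u * (tau - (u < 0)))        # rho_tau(u); minus the ALD log-likelihood up to constants

rng = np.random.default_rng(5)
n = 300
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.standard_t(df=3, size=n)
X = np.column_stack([np.ones(n), x])
fit = minimize(check_loss, x0=np.zeros(2), args=(X, y, 0.75), method="Nelder-Mead")
print(fit.x)     # estimated 0.75-quantile regression coefficients
```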

10.
This article provides both statistical analysis and empirical evidence that the dummy variable regression models extensively employed in the market seasonality literature may yield misleading results. We show that tests based on this model tend to reject the null hypothesis incorrectly once the stock returns exhibit higher volatility in the specified period under examination. Our empirical results suggest that the so-called “January effect” could be attributed to the application of an inappropriate statistical method.
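A quick Monte Carlo sketch of the mechanism can be written in a few lines: if January returns merely have a higher variance (with the same mean), the conventional homoskedastic t-test on a January dummy rejects far more often than the nominal 5%. The design below is a hypothetical illustration, not the article's empirical setup.

```python
# Over-rejection of the January-dummy t-test under heteroskedasticity with equal means.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n_years, reps, rejections = 30, 2000, 0
for _ in range(reps):
    jan = rng.normal(0, 3.0, n_years)              # January: higher volatility, same zero mean
    rest = rng.normal(0, 1.0, 11 * n_years)        # other months
    y = np.concatenate([jan, rest])
    d = np.concatenate([np.ones(n_years), np.zeros(11 * n_years)])
    X = np.column_stack([np.ones_like(d), d])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])   # homoskedastic OLS standard error
    rejections += abs(beta[1] / se) > stats.t.ppf(0.975, len(y) - 2)
print(rejections / reps)                              # noticeably above the nominal 0.05
```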

11.
We develop a general framework for analyzing the usefulness of imposing parameter restrictions on a forecasting model. We propose a measure of the usefulness of the restrictions that depends on the forecaster’s loss function and that could be time varying. We show how to conduct inference about this measure. The application of our methodology to analyzing the usefulness of no-arbitrage restrictions for forecasting the term structure of interest rates reveals that: (1) the restrictions have become less useful over time; (2) when using a statistical measure of accuracy, the restrictions are a useful way to reduce parameter estimation uncertainty, but are dominated by restrictions that do the same without using any theory; (3) when using an economic measure of accuracy, the no-arbitrage restrictions are no longer dominated by atheoretical restrictions, but for this to be true it is important that the restrictions incorporate a time-varying risk premium.

12.
A common exercise in empirical studies is a “robustness check”, where the researcher examines how certain “core” regression coefficient estimates behave when the regression specification is modified by adding or removing regressors. If the coefficients are plausible and robust, this is commonly interpreted as evidence of structural validity. Here, we study when and how one can infer structural validity from coefficient robustness and plausibility. As we show, there are numerous pitfalls, as commonly implemented robustness checks give neither necessary nor sufficient evidence for structural validity. Indeed, if not conducted properly, robustness checks can be completely uninformative or entirely misleading. We discuss how critical and non-critical core variables can be properly specified and how non-core variables for the comparison regression can be chosen to ensure that robustness checks are indeed structurally informative. We provide a straightforward new Hausman (1978) type test of robustness for the critical core coefficients, additional diagnostics that can help explain why robustness test rejection occurs, and a new estimator, the Feasible Optimally combined GLS (FOGLeSs) estimator, that makes relatively efficient use of the robustness check regressions. A new procedure for Matlab, testrob, embodies these methods.
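The mechanics of a robustness check on a core coefficient can be sketched as follows: fit a base and an augmented specification and assess the difference in the core coefficient against its bootstrap variability. This is a generic illustration on made-up data, not the paper's Hausman-type test or its testrob procedure.

```python
# Compare a "core" coefficient across a base and an augmented specification and
# bootstrap the distribution of the difference.
import numpy as np

def core_coef_diff(y, x_core, x_extra):
    n = len(y)
    X1 = np.column_stack([np.ones(n), x_core])
    X2 = np.column_stack([np.ones(n), x_core, x_extra])
    b1, *_ = np.linalg.lstsq(X1, y, rcond=None)
    b2, *_ = np.linalg.lstsq(X2, y, rcond=None)
    return b1[1] - b2[1]

rng = np.random.default_rng(7)
n = 400
x_core = rng.normal(size=n)
x_extra = 0.6 * x_core + rng.normal(size=n)       # control correlated with the core regressor
y = 1 + 1.5 * x_core + 0.8 * x_extra + rng.normal(size=n)

obs_diff = core_coef_diff(y, x_core, x_extra)
boot = []
for _ in range(500):
    idx = rng.integers(0, n, n)
    boot.append(core_coef_diff(y[idx], x_core[idx], x_extra[idx]))
boot = np.array(boot)
print(obs_diff, obs_diff / boot.std(ddof=1))      # a large ratio signals a non-robust core coefficient
```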

13.
Model selection criteria often arise by constructing unbiased or approximately unbiased estimators of measures known as expected overall discrepancies (Linhart & Zucchini, 1986, p. 19). Such measures quantify the disparity between the true model (i.e., the model which generated the observed data) and a fitted candidate model. For linear regression with normally distributed error terms, the "corrected" Akaike information criterion and the "modified" conceptual predictive statistic have been proposed as exactly unbiased estimators of their respective target discrepancies. We expand on previous work to additionally show that these criteria achieve minimum variance within the class of unbiased estimators.
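For reference, a sketch of the corrected criterion in its commonly cited form, AICc = AIC + 2k(k+1)/(n − k − 1) with k counting all estimated parameters, is given below for a normal linear regression; the toy comparison of a small and a large model is illustrative only.

```python
# Corrected AIC for a Gaussian linear regression, in the commonly cited form.
import numpy as np

def aicc_linear(y, X):
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = p + 1                                    # regression coefficients + error variance
    loglik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
    aic = 2 * k - 2 * loglik
    return aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction term

rng = np.random.default_rng(8)
n = 60
x1, x2 = rng.normal(size=(2, n))
y = 1 + 0.9 * x1 + rng.normal(size=n)
X_small = np.column_stack([np.ones(n), x1])
X_big = np.column_stack([np.ones(n), x1, x2])
print(aicc_linear(y, X_small), aicc_linear(y, X_big))   # the smaller (true) model should win on average
```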

14.
Dynamic model averaging (DMA) has become a very useful tool with regard to dealing with two important aspects of time-series analysis, namely, parameter instability and model uncertainty. An important component of DMA is the Kalman filter. It is used to filter out the latent time-varying regression coefficients of the predictive regression of interest, and produce the model predictive likelihood, which is needed to construct the probability of each model in the model set. To apply the Kalman filter, one must write the model of interest in linear state–space form. In this study, we demonstrate that the state–space representation has implications for out-of-sample prediction performance and the degree of shrinkage. Using Monte Carlo simulations as well as financial data at different sampling frequencies, we document that the way in which the current literature tends to formulate the candidate time-varying parameter predictive regression in linear state–space form ignores empirical features that are often present in the data at hand, namely, predictor persistence and predictor endogeneity. We suggest a straightforward way to account for these features in the DMA setting. Results using the widely applied Goyal and Welch (2008) dataset document that modifying the DMA framework as we suggest has a bearing on equity premium point prediction performance from a statistical as well as an economic viewpoint.
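The Kalman-filter ingredient can be sketched compactly for a single random-walk time-varying-parameter regression: the filter returns the filtered coefficients and the one-step predictive log-likelihood on which the DMA model probabilities are built. The variances H and q and the simulated data below are illustrative assumptions, not the study's specification.

```python
# Kalman filter for y_t = x_t' beta_t + e_t with random-walk coefficients beta_t.
import numpy as np

def tvp_kalman_filter(y, X, H=1.0, q=0.01):
    n, k = X.shape
    beta, P, Q = np.zeros(k), np.eye(k), q * np.eye(k)
    loglik, betas = 0.0, []
    for t in range(n):
        P_pred = P + Q                             # predict (random-walk transition)
        x = X[t]
        f = x @ P_pred @ x + H                     # predictive variance of y_t
        v = y[t] - x @ beta                        # one-step prediction error
        loglik += -0.5 * (np.log(2 * np.pi * f) + v**2 / f)
        K = P_pred @ x / f                         # Kalman gain
        beta = beta + K * v                        # update the filtered state
        P = P_pred - np.outer(K, x) @ P_pred
        betas.append(beta.copy())
    return np.array(betas), loglik

rng = np.random.default_rng(9)
n = 300
x = rng.normal(size=n)
beta_true = 1.0 + np.cumsum(0.05 * rng.normal(size=n))   # slowly drifting slope
y = beta_true * x + rng.normal(size=n)
betas, ll = tvp_kalman_filter(y, np.column_stack([np.ones(n), x]))
print(betas[-1], ll)    # filtered coefficients at the end of the sample and predictive log-likelihood
```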

15.
Both the Arbitrage Pricing Theory (APT) and the Capital Asset Pricing Model (CAPM) place restrictions on the cross-sectional variation of conditional expectations of asset returns and of macro indicators. We show that these restrictions imposed on the reference statistical models lead to special cases of the reduced rank regression model. The maximum likelihood problem is solved by canonical correlation analysis. Likelihood ratio tests about the number of factors underlying stock returns are straightforward to calculate, thus allowing discrimination between competing financial theories. Moreover, LR tests on the relevance of each macroeconomic indicator within a chosen model can be implemented. Some of the tests are illustrated by an application to Italian stock market data.
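A rough sketch of the canonical-correlation machinery is given below: compute the canonical correlations between returns and macro indicators and form a Bartlett-type LR statistic for the hypothesis of at most r factors. The simulated two-factor data and the statistic's exact form are illustrative assumptions rather than the paper's test.

```python
# Canonical correlations between two blocks of variables and a Bartlett-type LR statistic.
import numpy as np

def canonical_correlations(Y, X):
    Yc, Xc = Y - Y.mean(0), X - X.mean(0)
    qy, _ = np.linalg.qr(Yc)
    qx, _ = np.linalg.qr(Xc)
    return np.linalg.svd(qy.T @ qx, compute_uv=False)   # singular values = canonical correlations

rng = np.random.default_rng(10)
n, p, q = 500, 6, 4
F = rng.normal(size=(n, 2))                              # two common factors
X = F @ rng.normal(size=(2, q)) + 0.5 * rng.normal(size=(n, q))   # macro indicators
Y = F @ rng.normal(size=(2, p)) + 0.5 * rng.normal(size=(n, p))   # asset returns
rho = canonical_correlations(Y, X)
r = 2
lr = -n * np.sum(np.log(1 - rho[r:] ** 2))               # tests "at most r" nonzero canonical correlations
print(np.round(rho, 3), lr)                              # a small lr is consistent with r = 2 factors
```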

16.
On the analysis of multivariate growth curves
Growth curve data arise when repeated measurements are observed on a number of individuals with an ordered dimension for occasions. Such data appear frequently in almost all fields in which statistical models are used, for instance in medicine, agriculture and engineering. In medicine, for example, more than one variable is often measured on each occasion. However, analyses are usually based on exploration of repeated measurements of only one variable. The consequence is that the information contained in the between-variables correlation structure will be discarded. In this study we propose a multivariate model based on the random coefficient regression model for the analysis of growth curve data. Closed-form expressions for the model parameters are derived under the maximum likelihood (ML) and the restricted maximum likelihood (REML) frameworks. It is shown that in certain situations estimated variances of growth curve parameters are greater for REML. A method is also proposed for testing general linear hypotheses. One numerical example is provided to illustrate the methods discussed.
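As a univariate building block, a random coefficient growth curve can be fitted by ML and REML with standard mixed-model software; the sketch below (statsmodels' MixedLM on simulated data) illustrates only the ML/REML comparison and is not the multivariate closed-form estimator derived in the paper.

```python
# Random intercept and slope growth curve fitted by ML and by REML.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n_subj, n_occ = 40, 5
time = np.tile(np.arange(n_occ, dtype=float), n_subj)
subj = np.repeat(np.arange(n_subj), n_occ)
b0 = 10 + rng.normal(0, 1.0, n_subj)                     # subject-specific intercepts
b1 = 2 + rng.normal(0, 0.5, n_subj)                      # subject-specific slopes
y = b0[subj] + b1[subj] * time + rng.normal(0, 1.0, len(time))

exog = sm.add_constant(time)
model = sm.MixedLM(y, exog, groups=subj, exog_re=exog)   # random intercept and slope
res_ml = model.fit(reml=False)
res_reml = model.fit(reml=True)
print(res_ml.fe_params, res_reml.fe_params)              # fixed effects under ML and REML
print(np.diag(res_ml.cov_re), np.diag(res_reml.cov_re))  # REML variance components are typically larger
```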

17.
This paper contributes to the ongoing discussion about amenity-driven rural development strategies by examining the relationship between quality of life amenities and rural economic development in the Southeast USA. The premise is that what is true at a national level may provide a partial or misleading picture when we look at particular areas. Additionally, data available at the county level can often provide richer and more precise information than what is found at the national level. The paper estimates spatial regression models using county-level data. For the most part, the results suggest that the differences in quality of life and amenities factors can explain a large portion of the trend in per capita income, employment and population change across counties in the Southeast USA.

18.
A very well-known model in software reliability theory is that of Littlewood (1980). The (three) parameters in this model are usually estimated by means of the maximum likelihood method. The system of likelihood equations can have more than one solution. Only one of them will be consistent, however. In this paper we present a different, more analytical approach, exploiting the mathematical properties of the log-likelihood function itself. Our belief is that the ideas and methods developed in this paper could also be of interest to statisticians working on the estimation of the parameters of the generalised Pareto distribution. For those more generally interested in maximum likelihood, the paper provides a 'practical case', indicating how complex matters may become when only three parameters are involved. Moreover, readers not familiar with counting process theory and software reliability are given a first introduction.
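In the spirit of the remark about the generalised Pareto distribution, the sketch below fits a GPD by numerical maximum likelihood and evaluates its log-likelihood; it is a generic illustration on simulated data, not an implementation of the Littlewood model or of the paper's analytical approach.

```python
# Numerical maximum likelihood for the generalised Pareto distribution.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(12)
data = genpareto.rvs(c=0.3, scale=2.0, size=500, random_state=rng)

# fit with the location fixed at zero, as in the usual threshold-exceedance setup
c_hat, _, scale_hat = genpareto.fit(data, floc=0)
loglik = np.sum(genpareto.logpdf(data, c=c_hat, scale=scale_hat))
print(c_hat, scale_hat, loglik)   # the (c, scale) log-likelihood surface can have multiple stationary points
```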

19.
In forensics it is a classical problem to determine, when a suspect S shares a property Γ with a criminal C, the probability that S = C. In this article we give a detailed account of this problem in various degrees of generality. We start with the classical case where the probability of having Γ, as well as the a priori probability of being the criminal, is the same for all individuals. We then generalize the solution to deal with heterogeneous populations, biased search procedures for the suspect, Γ‐correlations, uncertainty about the subpopulation of the criminal and the suspect, and uncertainty about the Γ‐frequencies. We also consider the effect of the way the search for S is conducted, in particular when this is done by a database search. A recurring theme is that conditioning is important when one wants to quantify the ‘weight’ of the evidence by a likelihood ratio. Apart from these mathematical issues, we also discuss the practical problems in applying these issues to the legal process. The posterior probabilities of C = S are typically the same for all reasonable choices of the hypotheses, but this is not the whole story. The legal process might force one to dismiss certain hypotheses, for instance when the relevant likelihood ratio depends on prior probabilities. We discuss this and related issues as well. As such, the article is relevant both from a theoretical and from an applied point of view.
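The classical homogeneous case can be worked out in a few lines: with a uniform prior over N + 1 individuals and a trait of frequency p, the likelihood ratio 1/p updates the prior odds 1/N to a posterior probability 1/(1 + Np). The sketch below just evaluates this textbook formula for hypothetical numbers.

```python
# Classical homogeneous case: posterior probability that S = C given that S has the trait Gamma.
def posterior_identity(N, p):
    prior_odds = 1.0 / N                # uniform prior over the N + 1 individuals
    likelihood_ratio = 1.0 / p          # P(Gamma | S = C) / P(Gamma | S != C)
    post_odds = likelihood_ratio * prior_odds
    return post_odds / (1.0 + post_odds)

# e.g. a population of 1,000,001 individuals and a trait carried by 1 in 100,000 people
print(posterior_identity(N=1_000_000, p=1e-5))   # = 1 / (1 + N*p) ~ 0.091
```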

20.
Evidence to support the Gibson paradox is often given in the form of a simple correlation between the nominal interest rate and the log of the price level, or in the form of a simple linear regression between these two variables. Authors then show, using standard procedures of statistical inference, that the price level has a statistically significant coefficient. We argue that this class of evidence is spurious since the nominal interest rate and the price level (both integrated variables) do not form a cointegrated system.
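The kind of check the argument calls for can be sketched with an Engle–Granger cointegration test before any levels regression is interpreted; the series below are simulated independent random walks, standing in for the nominal interest rate and the log price level, purely for illustration.

```python
# Engle-Granger cointegration test between two simulated I(1) series.
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(13)
n = 500
log_price = np.cumsum(rng.normal(size=n))          # an I(1) series
interest = np.cumsum(rng.normal(size=n))           # an independent I(1) series

t_stat, p_value, _ = coint(interest, log_price)
print(p_value)    # typically large: no cointegration, so a levels regression between them is spurious
```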

