Similar Documents
20 similar documents found (search time: 26 ms)
1.
2.
This paper discusses the analysis of categorical data that have been misclassified when the misclassification probabilities are known. Fields where this kind of misclassification occurs include randomized response, statistical disclosure control, and classification with known sensitivity and specificity. Estimates of the true frequencies are given, and adjustments to the odds ratio are discussed. Moment estimates and maximum likelihood estimates are compared, and it is proved that they coincide in the interior of the parameter space. Since moment estimators regularly fall outside the parameter space, special attention is paid to the possibility of boundary solutions. An example is given.
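A minimal sketch of the moment correction described here, assuming a known misclassification matrix T (the matrix values below are illustrative, not from the paper):

```python
import numpy as np

# Moment estimator for true category frequencies under known
# misclassification probabilities. T[i, j] = P(observed category i |
# true category j); columns sum to one. T and the observed
# frequencies are hypothetical.
T = np.array([[0.9, 0.2],
              [0.1, 0.8]])
observed = np.array([0.55, 0.45])   # observed relative frequencies

# Moment estimate: solve T @ p_true = p_observed.
p_true = np.linalg.solve(T, observed)
print(p_true)  # may land outside [0, 1]; then a boundary ML solution is needed
```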

3.
Microeconometric treatments of discrete choice under risk are typically homoscedastic latent variable models. Specifically, choice probabilities are given by preference functional differences (expected utility, rank-dependent utility, etc.) embedded in cumulative distribution functions. This approach has a problem: estimated utility function parameters meant to represent agents' degree of risk aversion in the sense of Pratt (1964) do not imply a proposed "stochastically more risk averse" relation within such models. A new heteroscedastic model called "contextual utility" remedies this, and estimates in one data set suggest that it explains (and especially predicts) as well as or better than other stochastic models.
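A hedged sketch of the contextual-utility idea: the expected-utility difference is scaled by the utility range of the outcomes available in the specific choice context, making the latent noise heteroscedastic across contexts. The CRRA utility function and the lottery payoffs are illustrative assumptions, not the paper's specification:

```python
import numpy as np
from scipy.stats import norm

def crra(x, r):
    # Illustrative CRRA utility (assumes r != 1).
    return x ** (1.0 - r) / (1.0 - r)

def p_choose_a(payoffs_a, probs_a, payoffs_b, probs_b, r, noise):
    eu_a = np.dot(probs_a, crra(np.asarray(payoffs_a, float), r))
    eu_b = np.dot(probs_b, crra(np.asarray(payoffs_b, float), r))
    outcomes = np.concatenate([payoffs_a, payoffs_b]).astype(float)
    span = crra(outcomes.max(), r) - crra(outcomes.min(), r)  # context range
    return norm.cdf((eu_a - eu_b) / (noise * span))           # heteroscedastic scale

print(p_choose_a([10, 2], [0.5, 0.5], [5, 4], [0.5, 0.5], r=0.5, noise=0.3))
```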

4.
We develop a Bayesian semi-parametric approach to the instrumental variable problem. We assume linear structural and reduced form equations, but model the error distributions non-parametrically. A Dirichlet process prior is used for the joint distribution of structural and instrumental variable equations errors. Our implementation of the Dirichlet process prior uses a normal distribution as a base model. It can therefore be interpreted as modeling the unknown joint distribution with a mixture of normal distributions with a variable number of mixture components. We demonstrate that this procedure is both feasible and sensible using actual and simulated data. Sampling experiments compare inferences from the non-parametric Bayesian procedure with those based on procedures from the recent literature on weak instrument asymptotics. When errors are non-normal, our procedure is more efficient than standard Bayesian or classical methods.
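A sketch of the error model being described: a truncated stick-breaking draw from a Dirichlet-process mixture of bivariate normals feeding a linear IV design. The base measure, component scale, and truncation level K are illustrative assumptions, not the authors' exact prior:

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_mixture_errors(n, alpha=1.0, K=20):
    # Stick-breaking weights w_k = v_k * prod_{j<k} (1 - v_j), truncated at K.
    v = rng.beta(1.0, alpha, size=K)
    w = v * np.concatenate([[1.0], np.cumprod(1.0 - v)[:-1]])
    w /= w.sum()                                   # renormalize after truncation
    mu = rng.normal(0.0, 1.0, size=(K, 2))         # base-measure component means
    comp = rng.choice(K, size=n, p=w)
    return mu[comp] + rng.normal(0.0, 0.5, size=(n, 2))

eps = dp_mixture_errors(500)          # columns: structural, first-stage errors
z = rng.normal(size=500)              # instrument
x = 0.5 * z + eps[:, 1]               # reduced-form (first-stage) equation
y = 1.0 * x + eps[:, 0]               # structural equation with beta = 1
```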

5.
We introduce tests for finite-sample linear regressions with heteroskedastic errors. The tests are exact, i.e., they have guaranteed type I error probabilities when bounds are known on the range of the dependent variable, without any assumptions about the noise structure. We provide upper bounds on the probability of type II errors, and apply the tests to empirical data.
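A minimal sketch of how boundedness alone can deliver an exact type I guarantee, using Hoeffding's inequality; the specific statistic (correlation between a +/-1 regressor and y) is an illustrative choice, not the paper's construction:

```python
import numpy as np

def exact_test(z, y, lo, hi, alpha=0.05):
    # Test H0: E[z * y] = 0 when y is known to lie in [lo, hi] and z is +/-1.
    n = len(y)
    c = max(abs(lo), abs(hi))               # each term z_i * y_i lies in [-c, c]
    stat = abs(np.mean(z * y))
    # Hoeffding: P(|mean| >= t) <= 2 exp(-n t^2 / (2 c^2)) under H0.
    crit = c * np.sqrt(2.0 * np.log(2.0 / alpha) / n)
    return stat, crit, stat > crit          # reject if the statistic exceeds crit

rng = np.random.default_rng(1)
z = rng.choice([-1.0, 1.0], size=400)
y = 0.2 * z + rng.uniform(-1, 1, size=400)  # bounded dependent variable
print(exact_test(z, y, lo=-1.2, hi=1.2))
```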

6.
This paper proposes SupWald tests from a threshold autoregressive model computed with an adaptive set of thresholds. Simple examples of adaptive threshold sets are given. A second contribution of the paper is a general asymptotic null limit theory when the threshold variable is a level variable. We obtain a pivotal null limiting distribution under some simple conditions for bounded or asymptotically unbounded thresholds. Our general approach is flexible enough to allow a choice of the auxiliary threshold model, or of the threshold set involved in the test, specifically designed for nonlinear stationary alternatives relevant to macroeconomic and financial topics involving arbitrage in the presence of transaction costs. A Monte Carlo study and an application to the interest rate spreads of French, German, New Zealand, and US post-1980 monthly data illustrate the ability of the adaptive SupWald tests to reject a unit root when the ADF test does not.
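A bare-bones sketch of a SupWald statistic for a two-regime threshold AR(1) in differences; the "adaptive" threshold set here is simply empirical quantiles of the lagged level, a stand-in for the paper's constructions:

```python
import numpy as np

def sup_wald(y, quantiles=np.linspace(0.15, 0.85, 15)):
    dy, lag = np.diff(y), y[:-1]
    best = -np.inf
    for lam in np.quantile(lag, quantiles):
        # Two-regime specification: dy_t = r1*y_{t-1}*1(y_{t-1}<=lam)
        #                                 + r2*y_{t-1}*1(y_{t-1}>lam) + e_t
        X = np.column_stack([lag * (lag <= lam), lag * (lag > lam)])
        beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
        e = dy - X @ beta
        s2 = e @ e / (len(dy) - 2)
        cov = s2 * np.linalg.inv(X.T @ X)
        wald = beta @ np.linalg.solve(cov, beta)   # H0: r1 = r2 = 0 (unit root)
        best = max(best, wald)
    return best

rng = np.random.default_rng(2)
y = np.cumsum(rng.normal(size=300))                # random walk under the null
print(sup_wald(y))
```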

7.
We study the problem of building confidence sets for ratios of parameters from an identification-robust perspective. In particular, we address the simultaneous confidence set estimation of a finite number of ratios. Results apply to a wide class of models suitable for estimation by consistent asymptotically normal procedures. Conventional methods (e.g. the delta method), derived by excluding the parameter discontinuity regions entailed by the ratio functions and which typically yield bounded confidence limits, break down even if the sample size is large (Dufour, 1997). One solution to this problem, which we take in this paper, is to use variants of Fieller's (1940, 1954) method. By inverting a joint test that does not require identifying the ratios, Fieller-based confidence regions are formed for the full set of ratios. Simultaneous confidence sets for individual ratios are then derived by applying projection techniques, which allow for possibly unbounded outcomes. We provide simple explicit closed-form analytical solutions for projection-based simultaneous confidence sets in the case of linear transformations of ratios. Our solution further provides a formal proof for the expressions in Zerbe et al. (1982) pertaining to individual ratios. We apply the geometry of quadrics in a different although related context. The confidence sets so obtained are exact if the inverted test statistic admits a tractable exact distribution, for instance in the normal linear regression context. The proposed procedures are applied and assessed via illustrative Monte Carlo and empirical examples, with a focus on discrete choice models estimated by exact or simulation-based maximum likelihood. Our results underscore the superiority of Fieller-based methods.
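The classic single-ratio Fieller inversion, for reference: the confidence set for theta = a/b is the solution set of a quadratic in theta, which may be an interval, the complement of an interval, or the whole line. A minimal sketch with illustrative estimates and covariance:

```python
import numpy as np
from scipy.stats import norm

def fieller(ahat, bhat, V, level=0.95):
    # Invert {theta : (ahat - theta*bhat)^2 <= z^2 (v_aa - 2 theta v_ab + theta^2 v_bb)}.
    z2 = norm.ppf(0.5 + level / 2.0) ** 2
    A = bhat ** 2 - z2 * V[1, 1]
    B = ahat * bhat - z2 * V[0, 1]
    C = ahat ** 2 - z2 * V[0, 0]
    disc = B ** 2 - A * C
    if disc < 0:
        return "entire real line"            # unbounded, uninformative case
    r1, r2 = (B - np.sqrt(disc)) / A, (B + np.sqrt(disc)) / A
    lo, hi = min(r1, r2), max(r1, r2)
    return (lo, hi) if A > 0 else f"(-inf, {lo:.3f}] U [{hi:.3f}, +inf)"

print(fieller(2.0, 1.0, np.array([[0.04, 0.01], [0.01, 0.09]])))
```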

8.
In this study we focus attention on model selection in the presence of panel data. Our approach is eclectic in that it combines both classical and Bayesian techniques. It is also novel in that we address not only model selection, but also model occurrence, i.e., the process by which 'nature' chooses a statistical framework in which to generate the data of interest. For a given data subset, there exist competing models each of which has an ex ante positive probability of being the correct model, but for any one generated sample, ex post exactly one such model is the basis for the observed data set. Attention focuses on how the underlying model occurrence probabilities of the competing models depend on characteristics of the environments in which the data subsets are generated. Classical, Bayesian, and mixed estimation approaches are developed. Bayesian approaches are shown to be especially attractive whenever the models are nested.
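For orientation, the Bayesian building block behind such comparisons is just Bayes' rule over models; the log marginal likelihoods and uniform prior below are placeholders, not values from the study:

```python
import numpy as np

# Posterior model probabilities from marginal likelihoods and prior
# occurrence probabilities (illustrative numbers).
log_ml = np.array([-120.4, -118.9, -125.0])   # one entry per competing model
prior = np.array([1 / 3, 1 / 3, 1 / 3])
w = np.exp(log_ml - log_ml.max()) * prior     # subtract max for stability
posterior = w / w.sum()
print(posterior)
```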

9.
This study presents an alternative to direct questioning and randomized response approaches for obtaining survey information about sensitive issues. The approach used here is based on a logit model that can be used when survey data on the dependent variable are misclassified. The method is applied to a direct survey of undergraduate cheating behaviour. Student responses may not always be truthful; in particular, a student claiming to be a non-cheater may actually be a cheater. The results indicate that the incidence of cheating in our sample is approximately 70% rather than the self-reported value of 51%.
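A sketch of a misclassified-response logit likelihood in this spirit, where a1 is the probability that a true cheater reports non-cheating (the one-sided error the abstract describes); the data are simulated and the parameterization is an assumption for illustration:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(3)
n = 2000
x = rng.normal(size=n)
true_y = rng.random(n) < expit(0.5 + 1.0 * x)       # latent cheating status
a1_true = 0.3                                       # cheaters under-report
obs_y = true_y & (rng.random(n) > a1_true)          # non-cheaters assumed truthful

def negloglik(params):
    b0, b1, a1 = params[0], params[1], expit(params[2])  # keep a1 in (0, 1)
    p = (1.0 - a1) * expit(b0 + b1 * x)   # P(report = 1 | x) under misreporting
    p = np.clip(p, 1e-10, 1 - 1e-10)
    return -np.sum(obs_y * np.log(p) + (1 - obs_y) * np.log(1 - p))

fit = minimize(negloglik, x0=np.array([0.0, 0.5, 0.0]), method="BFGS")
print(fit.x[:2], expit(fit.x[2]))         # logit coefficients and estimated a1
```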

10.
We develop a non-parametric test of productive efficiency that accounts for errors-in-variables, following the approach of Varian [1985, Nonparametric analysis of optimizing behavior with measurement error, Journal of Econometrics 30(1/2), 445–458]. The test is based on the general Pareto–Koopmans notion of efficiency, and does not require price data. Statistical inference is based on the sampling distribution of the L norm of the errors. The test statistic can be computed using a simple enumeration algorithm. The finite sample properties of the test are analyzed by means of a Monte Carlo simulation using real-world data on large EU commercial banks.
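A sketch of the enumeration idea behind Pareto–Koopmans dominance (not the paper's test statistic): a unit is dominated if another unit uses no more of every input and produces no less of every output, strictly better somewhere. Inputs X and outputs Y are illustrative:

```python
import numpy as np

def dominated(X, Y, i):
    for j in range(len(X)):
        if j != i and np.all(X[j] <= X[i]) and np.all(Y[j] >= Y[i]) \
           and (np.any(X[j] < X[i]) or np.any(Y[j] > Y[i])):
            return True
    return False

X = np.array([[2.0, 3.0], [1.5, 2.5], [2.0, 4.0]])   # inputs per unit
Y = np.array([[10.0], [10.0], [9.0]])                # outputs per unit
print([dominated(X, Y, i) for i in range(3)])        # units 0 and 2 are dominated
```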

11.
Since the pioneering work by Granger (1969), many authors have proposed tests of causality between economic time series. Most of them are concerned only with "linear causality in mean", that is, whether a series linearly affects the (conditional) mean of the other series. This is no doubt of primary interest, but dependence between series may be nonlinear, and/or may operate not only through the conditional mean; indeed, conditionally heteroskedastic models have been widely studied recently. The purpose of this paper is to propose a nonparametric test for possibly nonlinear causality. Taking into account that dependence in higher-order moments is becoming an important issue, especially in financial time series, we also consider a test for causality up to the Kth conditional moment. Statistically, we can also view this test as a nonparametric omitted variable test in time series regression. A desirable property of the test is that it has nontrivial power against T^{1/2}-local alternatives, where T is the sample size. Also, we can form a test statistic accordingly if we have some knowledge of the alternative hypothesis. Furthermore, we show that the test statistic includes most of the omitted variable test statistics as special cases asymptotically. The null asymptotic distribution is not normal, but we can easily calculate the critical regions by simulation. Monte Carlo experiments show that the proposed test has good size and power properties.
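A toy illustration of "causality beyond the mean": after fitting an AR(1) to y, ask whether the lagged x predicts the squared residuals of y (second-moment causality). This crude parametric check is a stand-in for the paper's nonparametric statistic, for intuition only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
T = 500
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    # x causes y only through the conditional variance.
    y[t] = 0.3 * y[t - 1] + rng.normal() * np.sqrt(0.5 + 0.8 * x[t - 1] ** 2)

lag_y, lag_x, cur = y[:-1], x[:-1], y[1:]
b = np.polyfit(lag_y, cur, 1)                 # AR(1) conditional-mean fit
u2 = (cur - np.polyval(b, lag_y)) ** 2        # squared residuals
print(stats.pearsonr(lag_x ** 2, u2))         # strong correlation: variance causality
```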

12.
The problem of classifying coherent elliptic random field observations into one of two populations, specified by different regression mean models and a common stationary scale matrix, is considered under the further assumption that the observations to be classified are dependent on the training samples. In this statistical framework, the behaviour of the linear discriminant function is studied and an asymptotic expression for the distribution function of the probabilities of misclassification is derived.
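As a classical reference point for this analysis: with known parameters and independent observations, the linear discriminant rule misclassifies with probability Phi(-Delta/2), where Delta is the Mahalanobis distance between the population means. The numbers below are illustrative:

```python
import numpy as np
from scipy.stats import norm

mu1, mu2 = np.array([0.0, 0.0]), np.array([1.0, 0.5])
S = np.array([[1.0, 0.3], [0.3, 1.0]])                 # common scale matrix
d = mu1 - mu2
delta = np.sqrt(d @ np.linalg.solve(S, d))             # Mahalanobis distance
print(norm.cdf(-delta / 2.0))                          # baseline error probability
```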

13.
This paper derives the exact probability density function of the instrumental variable (IV) estimator of the exogenous variable coefficient vector in a structural equation containing n + 1 endogenous variables and N degrees of overidentification. The derivations make use of an operator calculus which simplifies the algebra of invariant polynomials with multiple matrix arguments. A leading case of the general distribution that is more amenable to analysis and computation is also presented. Conventional classical assumptions of normally distributed errors and non-random exogenous variables are employed.

14.
We propose a simple estimator for nonlinear method of moments models with measurement error of the classical type when no additional data, such as validation data or double measurements, are available. We assume that the marginal distributions of the measurement errors are Laplace (double exponential) with zero means and unknown variances, and that the measurement errors are independent of the latent variables and of each other. Under these assumptions, we derive simple revised moment conditions in terms of the observed variables. They are used to make inference about the model parameters and the variance of the measurement error. The results of this paper show that the distributional assumption on the measurement errors can be used to point identify the parameters of interest. Our estimator is a parametric method of moments estimator that uses the revised moment conditions and hence is simple to compute. Our estimation method is particularly useful in situations where no additional data are available, which is the case for many economic data sets. A simulation study demonstrates good finite-sample properties of our proposed estimator. We also examine the performance of the estimator in the case where the error distribution is misspecified.
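One way to see why the Laplace assumption buys revised moments, sketched under the assumption x* = x + e with e ~ Laplace(0, b): since the Laplace characteristic function is 1/(1 + b^2 t^2), E[g(x*) - b^2 g''(x*) | x] = g(x) for smooth g, so latent moments can be rewritten in observed variables. A quick numerical check with g(x) = x^2 (so g'' = 2):

```python
import numpy as np

rng = np.random.default_rng(5)
b = 0.7
x = rng.normal(1.0, 1.0, size=200_000)      # latent variable (illustrative)
xs = x + rng.laplace(0.0, b, size=x.size)   # observed, Laplace-contaminated

# Revised moment: E[x*^2] - 2 b^2 should equal E[x^2].
print(np.mean(xs ** 2) - 2 * b ** 2, np.mean(x ** 2))
```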

15.
I derive the exact distribution of the exactly identified (just-identified) instrumental variable estimator using a geometric approach. The approach provides a decomposition of the exact estimator. The results show that by geometric reasoning one may efficiently derive the distribution of the estimation error. The often striking non-normal shape of the instrumental variable estimator, in the case of weak instruments and small samples, follows intuitively from the geometry of the problem. The method allows for intuitive interpretations of how the shape of the distribution is determined by instrument quality and endogeneity. The approach can also be used when deriving the exact distribution of any ratio of stochastic variables.
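A quick Monte Carlo illustration of the non-normality being described: with a weak instrument and a small sample, the just-identified IV estimator beta_hat = (z'y)/(z'x) is heavy-tailed and far from normal. All parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
n, reps, pi, rho = 25, 20_000, 0.1, 0.8      # small pi => weak instrument
est = np.empty(reps)
for r in range(reps):
    z = rng.normal(size=n)
    u, v = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n).T
    x = pi * z + v                            # first stage
    y = 1.0 * x + u                           # structural equation, true beta = 1
    est[r] = (z @ y) / (z @ x)                # just-identified IV: a ratio
print(np.median(est), np.mean(np.abs(est - 1) > 5))  # median and far-tail mass
```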

16.
17.
A class of adaptive sampling methods is introduced for efficient posterior and predictive simulation. The proposed methods are robust in the sense that they can handle target distributions that exhibit non-elliptical shapes such as multimodality and skewness. The basic method makes use of sequences of importance weighted Expectation Maximization steps in order to efficiently construct a mixture of Student-t densities that accurately approximates the target distribution (typically a posterior distribution, of which we only require a kernel), in the sense that the Kullback–Leibler divergence between target and mixture is minimized. We label this approach Mixture of t by Importance Sampling weighted Expectation Maximization (MitISEM). The constructed mixture is used as a candidate density for quick and reliable application of either Importance Sampling (IS) or the Metropolis–Hastings (MH) method. We also introduce three extensions of the basic MitISEM approach. First, we propose a method for applying MitISEM in a sequential manner, so that the candidate distribution for posterior simulation is cleverly updated when new data become available. Our results show that the computational effort is reduced enormously, while the quality of the approximation remains almost unchanged. This sequential approach can be combined with a tempering approach, which facilitates simulation from densities with multiple modes that are far apart. Second, we introduce a permutation-augmented MitISEM approach. This is useful for importance or Metropolis–Hastings sampling from posterior distributions in mixture models without requiring identification restrictions on the parameters of the model's mixture regimes. Third, we propose a partial MitISEM approach, which aims at approximating the joint distribution by estimating a product of marginal and conditional distributions. This division can substantially reduce the dimension of the approximation problem, which facilitates the application of adaptive importance sampling for posterior simulation in more complex models with larger numbers of parameters. Our results indicate that the proposed methods can substantially reduce the computational burden in econometric models like DCC or mixture GARCH models and a mixture instrumental variables model.
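A bare-bones flavor of the adaptive step with a single Student-t candidate (the full method fits a multi-component mixture): draw from the candidate, compute importance weights against the target kernel, and update the candidate's location and scale from the weighted draws. The skew-normal target is an illustrative stand-in:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
log_target = lambda x: stats.skewnorm.logpdf(x, a=4.0)   # kernel to approximate

loc, scale, df = 0.0, 2.0, 5.0
for it in range(10):                                     # adaptation loop
    x = stats.t.rvs(df, loc=loc, scale=scale, size=5000, random_state=rng)
    logw = log_target(x) - stats.t.logpdf(x, df, loc=loc, scale=scale)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    loc = np.sum(w * x)                                  # weighted location update
    scale = np.sqrt(np.sum(w * (x - loc) ** 2))          # weighted scale update

# Fitted candidate plus effective-sample-size fraction of the final weights.
print(loc, scale, 1.0 / np.sum(w ** 2) / len(w))
```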

18.
We consider questions of efficiency and redundancy in the GMM estimation problem in which we have two sets of moment conditions, where two sets of parameters enter into one set of moment conditions, while only one set of parameters enters into the other. We then apply these results to a selectivity problem in which the first set of moment conditions is for the model of interest, and the second set of moment conditions is for the selection process. We use these results to explain the counterintuitive result in the literature that, under an ignorability assumption that justifies GMM with weighted moment conditions, weighting using estimated probabilities of selection is better than weighting using the true probabilities. We also consider estimation under an exogeneity of selection assumption such that both the unweighted and the weighted moment conditions are valid, and we show that when weighting is not needed for consistency, it is also not useful for efficiency.
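A small simulation sketch of the counterintuitive efficiency result for an inverse-probability-weighted mean under selection on an observed binary covariate; the design is illustrative, not the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(8)

def one_draw(n=2000):
    z = rng.random(n) < 0.5                  # covariate driving selection
    y = 1.0 + 0.5 * z + rng.normal(size=n)
    p_true = np.where(z, 0.8, 0.4)           # true selection probabilities
    s = rng.random(n) < p_true               # selection indicator
    p_hat = np.where(z, s[z].mean(), s[~z].mean())   # estimated within cells
    est_true = np.mean(s * y / p_true)       # weighted by true probabilities
    est_hat = np.mean(s * y / p_hat)         # weighted by estimated probabilities
    return est_true, est_hat

draws = np.array([one_draw() for _ in range(2000)])
print(draws.std(axis=0))   # estimated-propensity version typically has smaller sd
```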

19.
Most genetic studies recruit high-risk families, so discoveries are based on non-randomly selected groups. We must consider the consequences of this ascertainment process in order to apply the results of genetic research to the general population. In addition, in epidemiological studies, binary responses are often misclassified. We propose a binary logistic regression model that provides a novel and flexible way to correct for misclassification in binary responses while taking the ascertainment issues into account. A hierarchical Bayesian analysis using Markov chain Monte Carlo methods is carried out to investigate the effect of covariates on disease status. The focus of this paper is to study the effect of classification errors and non-random ascertainment on the estimates of the model parameters. An extensive simulation study indicated that the proposed model results in substantial improvement of the estimates. Two data sets are revisited to illustrate the methodology.

20.
A common problem in data analysis occurs when one has many models to compare to a single or just a few data sets. For example, a researcher may conduct an experiment in which subjects respond by choosing one category from a small set of categories. The data set then consists of the frequencies with which the categories occur. Many substantive models may yield predictions of these frequencies, so the researcher is faced with the problem of comparing the data to many a priori equally attractive theoretical predictions. This paper proposes a method for the simultaneous study of the predictions and data. The method improves on the standard approach to judging goodness-of-fit by treating the predictions as rows in a two-way (or higher-way) contingency table. Log-linear models for the probabilities that subjects respond in specific ways are used to determine how the predictions compare to the data and to rank the predictions in terms of their accuracy.
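As a simple baseline for the comparison described above (not the paper's log-linear procedure), likelihood-ratio G^2 statistics for several models' predicted category probabilities against one observed frequency vector; all numbers are illustrative:

```python
import numpy as np
from scipy.stats import chi2

obs = np.array([42, 31, 17, 10])                       # observed counts
preds = {"model A": [0.40, 0.30, 0.20, 0.10],
         "model B": [0.25, 0.25, 0.25, 0.25]}
n = obs.sum()
for name, p in preds.items():
    exp = n * np.asarray(p)
    g2 = 2.0 * np.sum(obs * np.log(obs / exp))         # likelihood-ratio statistic
    print(name, round(g2, 2), round(chi2.sf(g2, df=len(obs) - 1), 4))
```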
