Similar Articles
20 similar articles found (search time: 31 ms)
1.
We study the problem of building confidence sets for ratios of parameters from an identification-robust perspective. In particular, we address the simultaneous confidence set estimation of a finite number of ratios. Results apply to a wide class of models suitable for estimation by consistent asymptotically normal procedures. Conventional methods (e.g. the delta method), which are derived by excluding the parameter discontinuity regions entailed by the ratio functions and which typically yield bounded confidence limits, break down even when the sample size is large (Dufour, 1997). One solution to this problem, which we take in this paper, is to use variants of Fieller's (1940, 1954) method. By inverting a joint test that does not require identifying the ratios, Fieller-based confidence regions are formed for the full set of ratios. Simultaneous confidence sets for individual ratios are then derived by applying projection techniques, which allow for possibly unbounded outcomes. In this paper, we provide simple explicit closed-form analytical solutions for projection-based simultaneous confidence sets in the case of linear transformations of ratios. Our solution further provides a formal proof for the expressions in Zerbe et al. (1982) pertaining to individual ratios. We apply the geometry of quadrics, which has been introduced in a different although related context. The confidence sets so obtained are exact if the inverted test statistic admits a tractable exact distribution, for instance in the normal linear regression context. The proposed procedures are applied and assessed via illustrative Monte Carlo and empirical examples, with a focus on discrete choice models estimated by exact or simulation-based maximum likelihood. Our results underscore the superiority of Fieller-based methods.
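For intuition, the single-ratio case of Fieller's method can be made concrete. The sketch below, assuming jointly normal estimates of the numerator and denominator with known covariance entries (the function name and interface are illustrative, not the paper's multi-ratio projection method), inverts the test of a − θb = 0 and returns a bounded interval, an unbounded set, or the whole line:

```python
import numpy as np
from scipy.stats import norm

def fieller_ci(a_hat, b_hat, v_aa, v_ab, v_bb, level=0.95):
    """Fieller confidence set for theta = a/b given jointly normal
    estimates (a_hat, b_hat) with covariance entries v_aa, v_ab, v_bb.
    Inverts the test of H0: a - theta*b = 0."""
    c2 = norm.ppf(1 - (1 - level) / 2) ** 2
    # (a - theta*b)^2 <= c^2 * var(a - theta*b) rearranges to the
    # quadratic A*theta^2 - 2*B*theta + C <= 0:
    A = b_hat ** 2 - c2 * v_bb
    B = a_hat * b_hat - c2 * v_ab
    C = a_hat ** 2 - c2 * v_aa
    disc = B ** 2 - A * C
    if disc < 0:                       # no real roots
        return (-np.inf, np.inf) if A < 0 else None   # whole line / empty
    lo, hi = (B - np.sqrt(disc)) / A, (B + np.sqrt(disc)) / A
    if A > 0:                          # denominator well identified
        return (min(lo, hi), max(lo, hi))              # bounded interval
    # A < 0: unbounded set (-inf, min] U [max, inf)
    return ((-np.inf, min(lo, hi)), (max(lo, hi), np.inf))
```

When A = b̂² − c²·v_bb is negative, the denominator is too close to zero relative to its noise and the set is unbounded, which is exactly the failure mode of bounded delta-method intervals flagged by Dufour (1997).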

2.
This paper addresses the problem of data errors in discrete variables. When data errors occur, the observed variable is a misclassified version of the variable of interest, whose distribution is not identified. Inferential problems caused by data errors have been conceptualized through convolution and mixture models. This paper introduces the direct misclassification approach. The approach is based on the observation that, in the presence of classification errors, the relation between the distribution of the ‘true’ but unobservable variable and its misclassified representation is given by a linear system of simultaneous equations, in which the coefficient matrix is the matrix of misclassification probabilities. Formalizing the problem in these terms allows one to incorporate any prior information into the analysis through sets of restrictions on the matrix of misclassification probabilities. Such information can have strong identifying power; the direct misclassification approach fully exploits it to derive identification regions for any real functional of the distribution of interest. A method for estimating the identification regions and constructing their confidence sets is given, and illustrated with an empirical analysis of the distribution of pension plan types using data from the Health and Retirement Study.
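As a toy illustration of the identifying power of such restrictions, consider a binary variable whose two misclassification rates are assumed bounded by a common constant `lam` (the bound and all names here are hypothetical). A minimal sketch of the resulting identification region for P(true = 1):

```python
import numpy as np

def identification_region(p_obs1, lam):
    """Identification region for p = P(true=1) from P(obs=1) = p_obs1 when
    P(obs=1|true=0) and P(obs=0|true=1) are both restricted to [0, lam].
    For a fixed p, the implied P(obs=1) = (1-pi10)*p + pi01*(1-p) sweeps
    the interval [(1-lam)*p, p + lam*(1-p)] as the error rates vary, so p
    is in the region iff p_obs1 lies in that interval."""
    ps = np.linspace(0.0, 1.0, 100001)
    ok = ((1 - lam) * ps <= p_obs1) & (p_obs1 <= ps + lam * (1 - ps))
    return ps[ok].min(), ps[ok].max()

print(identification_region(p_obs1=0.30, lam=0.05))  # roughly (0.263, 0.316)
```

Tightening the prior bound `lam` shrinks the region toward the point p_obs1, which is the sense in which prior restrictions on the misclassification matrix carry identifying power.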

3.
This paper concerns the problem of allocating a binary treatment among a target population based on observed covariates. The goal is to (i) maximize the mean social welfare arising from the eventual outcome distribution when a budget constraint limits the fraction of the population that can be treated, and (ii) infer the dual value, i.e. the minimum resources needed to attain a specific level of mean welfare via efficient treatment assignment. We consider a treatment allocation procedure based on sample data from randomized treatment assignment and derive asymptotic frequentist confidence intervals for the welfare it generates. We propose choosing the conditioning covariates through cross-validation. The methodology is applied to the efficient provision of anti-malaria bed net subsidies, using data from a randomized experiment conducted in Western Kenya. We find that subsidy allocation based on wealth, presence of children, and possession of a bank account can raise subsidy use by about 9 percentage points relative to allocation based on wealth only, and by 17 percentage points relative to a purely random allocation.
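A stylized version of the allocation step can clarify the mechanics: estimate the treatment effect cell by cell from the experimental sample, then treat the highest-effect cells first until the budgeted population share is exhausted. This is only a sketch under simple random assignment; the column names are illustrative, and the paper's inference theory and cross-validated covariate choice are not reproduced here.

```python
import pandas as pd

def allocate(df, budget_share):
    """Greedy budget-constrained targeting on a DataFrame with illustrative
    columns 'cell' (covariate cell), 'treated' (0/1), and 'y' (outcome)."""
    stats = []
    for cell, g in df.groupby('cell'):
        # difference-in-means effect estimate within the cell
        effect = (g.loc[g.treated == 1, 'y'].mean()
                  - g.loc[g.treated == 0, 'y'].mean())
        stats.append((effect, len(g) / len(df), cell))
    treat, used = [], 0.0
    for effect, share, cell in sorted(stats, reverse=True):
        if used + share <= budget_share:   # treat best cells while budget lasts
            treat.append(cell)
            used += share
    return treat
```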

4.
We reanalyze data from the observational study by Connors et al. (1996) on the impact of Swan–Ganz catheterization on mortality outcomes. The study by Connors et al. (1996) assumes that there are no unobserved differences between patients who are catheterized and patients who are not, and finds that catheterization increases patient mortality. We instead allow for such differences between patients by implementing both the instrumental variable bounds of Manski (1990), which exploit only an instrumental variable, and the bounds of Shaikh and Vytlacil (2011), which exploit mild nonparametric, structural assumptions in addition to an instrumental variable. We propose and justify the use of indicators of weekday admission as an instrument for catheterization in this context. We find that in our application the Manski (1990) bounds do not indicate whether catheterization increases or decreases mortality, whereas the Shaikh and Vytlacil (2011) bounds reveal that, at least for some diagnoses, Swan–Ganz catheterization reduces mortality at 7 days after catheterization. We show that the bounds of Shaikh and Vytlacil (2011) remain valid under even weaker assumptions than those described therein. We also extend the analysis to exploit a further nonparametric, structural assumption, namely that doctors catheterize individuals with systematically worse latent health, and find that this assumption further narrows the bounds and strengthens our conclusions. In our analysis, we construct confidence regions using the methodology developed in Romano and Shaikh (2008). We show in particular that the confidence regions are uniformly consistent in level over a large class of possible distributions for the observed data, including distributions where the instrument is arbitrarily “weak”.
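The Manski (1990) construction itself is simple enough to sketch. Assuming a binary outcome and treatment, the no-assumption bounds on each potential-outcome mean are computed within each instrument cell and then intersected across cells; the implementation below is a minimal illustration only, not the Shaikh and Vytlacil (2011) bounds, which add structure on top of this.

```python
import numpy as np

def manski_iv_bounds(y, d, z):
    """Worst-case bounds on ATE = E[Y1] - E[Y0] for binary arrays y, d and
    instrument z. Within each z-cell: E[Y1] in [E[Y*1(D=1)], E[Y*1(D=1)] + P(D=0)]
    (and symmetrically for Y0); the IV assumption lets us intersect over z."""
    lo1, hi1, lo0, hi0 = -np.inf, np.inf, -np.inf, np.inf
    for zv in np.unique(z):
        m = z == zv
        p1 = d[m].mean()                                  # P(D=1 | Z=zv)
        ey1d1 = y[m][d[m] == 1].mean() if p1 > 0 else 0.0
        ey0d0 = y[m][d[m] == 0].mean() if p1 < 1 else 0.0
        # observed part plus worst case (0 or 1) for the unobserved arm
        lo1 = max(lo1, ey1d1 * p1); hi1 = min(hi1, ey1d1 * p1 + (1 - p1))
        lo0 = max(lo0, ey0d0 * (1 - p1)); hi0 = min(hi0, ey0d0 * (1 - p1) + p1)
    return lo1 - hi0, hi1 - lo0
```

With a single-valued instrument these collapse to the no-assumption bounds, which always have width one for the ATE; variation in z is what makes the intersection informative.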

5.
In this paper, we consider a regression model to study the distributional relationship between economic variables. Unlike classical regression, which deals exclusively with the mean relationship, our model can be used to analyze the entire dependence structure of the distribution. Technically, we treat density functions as random elements and represent the regression relationship as a compact linear operator in the Hilbert spaces of square integrable functions. We propose a consistent estimation procedure for our model and develop a test to investigate the dependence structure of moments. An empirical example is provided to illustrate how our methodology can be implemented in practical applications.

6.
Graph‐theoretic methods of causal search based on the ideas of Pearl (2000), Spirtes et al. (2000), and others have been applied by a number of researchers to economic data, particularly by Swanson and Granger (1997) to the problem of finding a data‐based contemporaneous causal order for the structural vector autoregression, rather than, as is typically done, assuming a weakly justified Choleski order. Demiralp and Hoover (2003) provided Monte Carlo evidence that such methods were effective, provided that signal strengths were sufficiently high. Unfortunately, in applications to actual data, such Monte Carlo simulations are of limited value, as the causal structure of the true data‐generating process is necessarily unknown. In this paper, we present a bootstrap procedure that can be applied to actual data (i.e. without knowledge of the true causal structure). We show with an applied example and a simulation study that the procedure is an effective tool for assessing our confidence in causal orders identified by graph‐theoretic search algorithms.
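The mechanics of such an assessment can be sketched generically: resample the data, rerun the search on each resample, and tally how often each directed edge is recovered. In the sketch below, `search_fn` is a hypothetical stand-in for any PC-style algorithm returning directed edges, and plain row resampling is shown for brevity; for time-series data a residual-based or block scheme in the spirit of the paper's procedure would be needed.

```python
import numpy as np

def bootstrap_edge_frequencies(data, search_fn, n_boot=500, rng=None):
    """Tally the fraction of bootstrap resamples in which each directed
    edge (i, j) is returned by the causal search `search_fn`."""
    rng = np.random.default_rng(rng)
    n = data.shape[0]
    counts = {}
    for _ in range(n_boot):
        boot = data[rng.integers(0, n, size=n)]   # i.i.d. row resample
        for edge in search_fn(boot):
            counts[edge] = counts.get(edge, 0) + 1
    return {e: c / n_boot for e, c in counts.items()}
```

Edges that survive in a large fraction of resamples are the ones in which the identified causal order inspires confidence.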

7.
We propose a general two-step estimator for a popular Markov discrete choice model that includes a class of Markovian games with continuous observable state space. Our estimation procedure generalizes the computationally attractive methodology of Pesendorfer and Schmidt-Dengler (2008), which assumed finite observable states. This extension is non-trivial, as the policy value functions are solutions to some type II integral equations. We show that the inverse problem is well-posed. We provide a set of primitive conditions to ensure root-T consistent estimation of the finite dimensional structural parameters, and the distribution theory for the value functions, in a time series framework.

8.
We propose a new diagnostic tool for time series called the quantilogram. The tool can be used formally, and we provide the inference tools to do so under general conditions; it can also be used as a simple graphical device. We apply our method to measure directional predictability and to test the hypothesis that a given time series has no directional predictability. The test is based on comparing the correlogram of quantile hits to a pointwise confidence interval, or on comparing the cumulated squared autocorrelations with the corresponding critical value. We provide the distribution theory needed to conduct inference, propose some model-free upper bound critical values, and apply our methods to S&P500 stock index return data. The empirical results suggest some directional predictability in returns. The evidence is strongest in mid-range quantiles, such as the 5–10% range, and for daily data. The evidence for predictability at the median is of comparable strength to the evidence around the mean, and is strongest at the daily frequency.
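The basic statistic is straightforward to compute: take the hit process 1{y_t ≤ q_α} − α at a chosen quantile and form its autocorrelations. A minimal sketch, ignoring the estimation error in the sample quantile that the paper's inference theory accounts for:

```python
import numpy as np

def quantilogram(y, alpha, max_lag):
    """Sample autocorrelations of the quantile-hit process
    1{y_t <= q_alpha} - alpha, for lags 1..max_lag."""
    q = np.quantile(y, alpha)
    hits = (y <= q).astype(float) - alpha
    hits -= hits.mean()                 # demean (already near zero by construction)
    denom = np.sum(hits ** 2)
    return np.array([np.sum(hits[k:] * hits[:-k]) / denom
                     for k in range(1, max_lag + 1)])

# Under no directional predictability each lag-k value is roughly within
# +/- 1.96 / sqrt(T) of zero (pointwise, as a rough graphical benchmark).
```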

9.
This paper considers the identification and estimation of an extension of Roy's (1951) model of sectoral choice, which includes a non-pecuniary component in the selection equation and allows for uncertainty about potential earnings. We focus on the identification of the non-pecuniary component, which is key to disentangling the relative importance of monetary incentives versus preferences in the context of sorting across sectors. By making the most of the structure of the selection equation, we show that this component is point identified from knowledge of the covariate effects on earnings, as soon as one covariate is continuous. Notably, and in contrast to most results on the identification of Roy models, this implies that identification can be achieved without any exclusion restriction or large support condition on the covariates. As a by-product, bounds are obtained on the distribution of the ex ante monetary returns. We propose a three-stage semiparametric estimation procedure for this model, which yields root-n consistent and asymptotically normal estimators. Finally, we apply our results to the educational context, providing new evidence from French data that non-pecuniary factors are a key determinant of higher education attendance decisions.

10.
A major aim in recent nonparametric frontier modeling is to estimate a partial frontier well inside the sample of production units but near the optimal boundary. Two concepts of partial boundaries of the production set have been proposed: an expected maximum output frontier of order m = 1, 2, … and a conditional quantile-type frontier of order α ∈ (0, 1]. In this paper, we answer the important question of how the two families are linked. For each m, we specify the order α for which both partial production frontiers can be compared. We show that even a single perturbation of the data is sufficient to break down the nonparametric order-m frontiers, whereas the global robustness of the order-α frontiers attains a higher breakdown value. Nevertheless, once the order-α frontiers break down, they become less resistant to outliers than the order-m frontiers. Moreover, the order-m frontiers have the advantage of being statistically more efficient. Based on these findings, we suggest a methodology for identifying outlying data points. We establish some asymptotic results, filling important gaps in the literature. The theoretical findings are illustrated via simulations and real data.
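The order-m frontier has a simple Monte Carlo interpretation that a short sketch can convey: at input level x0, it is the expected maximum output among m units drawn at random from those using no more input than x0. The function below is illustrative only and omits the bias corrections and asymptotics treated in the paper.

```python
import numpy as np

def order_m_frontier(x_obs, y_obs, x0, m, B=1000, rng=None):
    """Monte Carlo estimate of the order-m output frontier at input x0:
    average, over B replications, the maximum of m outputs resampled from
    units with x_i <= x0."""
    rng = np.random.default_rng(rng)
    pool = y_obs[x_obs <= x0]                       # units dominated in input
    draws = rng.choice(pool, size=(B, m), replace=True)
    return draws.max(axis=1).mean()
```

As m grows the estimate climbs toward the full-sample boundary, which is why a single extreme observation can drag the order-m frontier while leaving low-order frontiers nearly untouched.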

11.
We develop a non-parametric test of productive efficiency that accounts for errors-in-variables, following the approach of Varian [1985. Nonparametric analysis of optimizing behavior with measurement error. Journal of Econometrics 30(1/2), 445–458]. The test is based on the general Pareto–Koopmans notion of efficiency and does not require price data. Statistical inference is based on the sampling distribution of the L norm of errors. The test statistic can be computed using a simple enumeration algorithm. The finite sample properties of the test are analyzed by means of a Monte Carlo simulation using real-world data on large EU commercial banks.

12.
The economic theory of option pricing imposes constraints on the structure of call functions and state price densities. Except in a few polar cases, it does not prescribe functional forms. This paper proposes a nonparametric estimator of option pricing models which incorporates various restrictions (such as monotonicity and convexity) within a single least squares procedure. The bootstrap is used to produce confidence intervals for the call function and its first two derivatives and to calibrate a residual regression test of shape constraints. We apply the techniques to option pricing data on the DAX.
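The constrained least squares step can be sketched directly: fit call prices as a function of the strike subject to monotonicity (nonincreasing in the strike) and convexity, both expressible as linear inequality constraints on differences. The sketch below uses a generic SLSQP solver and illustrative names; the paper's bootstrap confidence intervals and residual regression test are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def constrained_call_fit(strikes, prices):
    """Shape-constrained least squares fit of a call function:
    nonincreasing and convex in the strike (arbitrary strike grid)."""
    dk = np.diff(strikes)
    slopes = lambda c: np.diff(c) / dk          # divided first differences
    cons = [
        {'type': 'ineq', 'fun': lambda c: -np.diff(c)},        # nonincreasing
        {'type': 'ineq', 'fun': lambda c: np.diff(slopes(c))}, # convex
    ]
    res = minimize(lambda c: np.sum((c - prices) ** 2), prices,
                   constraints=cons, method='SLSQP')
    return res.x
```

Starting the solver at the observed prices keeps the fit close to the data wherever the constraints already hold, so the correction is concentrated on the violating strikes.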

13.
In this paper, a nonparametric variance ratio testing approach is proposed for determining the cointegration rank in fractionally integrated systems. The test statistic is easily calculated without prior knowledge of the integration order of the data, the strength of the cointegrating relations, or the cointegration vector(s). The latter property makes it easier to implement than regression-based approaches, especially when examining relationships between several variables with possibly multiple cointegrating vectors. Since the test is nonparametric, it does not require the specification of a particular model and is invariant to short-run dynamics. Nor does it require the choice of any smoothing parameters that change the test statistic without being reflected in the asymptotic distribution. Furthermore, a consistent estimator of the cointegration space can be obtained from the procedure. The asymptotic distribution theory for the proposed test is non-standard but easily tabulated or simulated. Monte Carlo simulations demonstrate excellent finite sample properties, even rivaling those of well-specified parametric tests. The proposed methodology is applied to the term structure of interest rates, where, contrary to both fractional- and integer-based parametric approaches, evidence in favor of the expectations hypothesis is found using the nonparametric approach.

14.
In this paper, we consider testing distributional assumptions in multivariate GARCH models based on empirical processes. Using the fact that the joint distribution carries the same amount of information as the marginal together with the conditional distributions, we first transform the multivariate data into univariate independent data based on the marginal and conditional cumulative distribution functions. We then apply Khmaladze's martingale transformation (K-transformation) to the empirical process in the presence of estimated parameters. The K-transformation eliminates the effect of parameter estimation, allowing a distribution-free test statistic to be constructed. We show that the K-transformation takes a very simple form for testing multivariate normal and multivariate t-distributions. The procedure is applied to a multivariate financial time series data set.
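The transformation step is easiest to see in a bivariate example. Assuming, purely for illustration, a standard bivariate normal with correlation rho, the marginal CDF of the first coordinate and the conditional CDF of the second given the first map the data to two independent U(0, 1) series under the null:

```python
import numpy as np
from scipy.stats import norm

def rosenblatt_bivariate_normal(x, rho):
    """Map bivariate data x (n x 2) to independent U(0,1) coordinates via
    the marginal CDF of x1 and the conditional CDF of x2 | x1, for a
    standard bivariate normal with correlation rho."""
    x1, x2 = x[:, 0], x[:, 1]
    u1 = norm.cdf(x1)                                       # marginal CDF
    u2 = norm.cdf((x2 - rho * x1) / np.sqrt(1 - rho ** 2))  # conditional CDF
    return np.column_stack([u1, u2])                        # i.i.d. U(0,1) under H0
```

The empirical process of these transformed observations is then what the K-transformation acts on to remove the effect of parameter estimation.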

15.
We propose a finite sample approach to some of the most common limited dependent variables models. The method rests on the maximized Monte Carlo (MMC) test technique proposed by Dufour [1998. Monte Carlo tests with nuisance parameters: a general approach to finite-sample inference and nonstandard asymptotics. Journal of Econometrics, this issue]. We provide a general way of implementing tests and confidence regions. We show that the decision rule associated with an MMC test may be written as a mixed integer programming problem. The branch-and-bound algorithm yields a global maximum in finite time. An appropriate choice of the statistic yields a consistent test, while fulfilling the level constraint for any sample size. The technique is illustrated with numerical data for the logit model.
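The logic of the MMC p-value can be sketched with a plain grid search standing in for the paper's mixed integer programming and branch-and-bound formulation; `simulate_stat` is a hypothetical user-supplied simulator of the test statistic under the null for a given nuisance value.

```python
import numpy as np

def mmc_pvalue(s0, simulate_stat, nuisance_grid, n_rep=99, rng=None):
    """Maximized Monte Carlo p-value (sketch): for each nuisance value,
    simulate the statistic under H0 and form the MC p-value
    (1 + #{S_i >= s0}) / (n_rep + 1); then maximize over the nuisance grid."""
    rng = np.random.default_rng(rng)
    pvals = []
    for theta in nuisance_grid:
        sims = np.array([simulate_stat(theta, rng) for _ in range(n_rep)])
        pvals.append((1 + np.sum(sims >= s0)) / (n_rep + 1))
    return max(pvals)   # reject at level alpha iff this maximum <= alpha
```

Maximizing over the nuisance space is what guarantees the level constraint for any sample size: the test rejects only if it would reject under every admissible nuisance value.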

16.
A class of adaptive sampling methods is introduced for efficient posterior and predictive simulation. The proposed methods are robust in the sense that they can handle target distributions that exhibit non-elliptical shapes such as multimodality and skewness. The basic method makes use of sequences of importance weighted Expectation Maximization steps in order to efficiently construct a mixture of Student-t densities that accurately approximates the target distribution (typically a posterior distribution, of which we only require a kernel), in the sense that the Kullback–Leibler divergence between target and mixture is minimized. We label this approach Mixture of t by Importance Sampling weighted Expectation Maximization (MitISEM). The constructed mixture is used as a candidate density for quick and reliable application of either Importance Sampling (IS) or the Metropolis–Hastings (MH) method. We also introduce three extensions of the basic MitISEM approach. First, we propose a method for applying MitISEM in a sequential manner, so that the candidate distribution for posterior simulation is cleverly updated when new data become available. Our results show that the computational effort is reduced enormously, while the quality of the approximation remains almost unchanged. This sequential approach can be combined with a tempering approach, which facilitates simulation from densities with multiple modes that are far apart. Second, we introduce a permutation-augmented MitISEM approach. This is useful for importance or Metropolis–Hastings sampling from posterior distributions in mixture models without the requirement of imposing identification restrictions on the parameters of the model's mixture regimes. Third, we propose a partial MitISEM approach, which aims at approximating the joint distribution by estimating a product of marginal and conditional distributions. This division can substantially reduce the dimension of the approximation problem, which facilitates the application of adaptive importance sampling for posterior simulation in more complex models with larger numbers of parameters. Our results indicate that the proposed methods can substantially reduce the computational burden in econometric models such as DCC or mixture GARCH models and a mixture instrumental variables model.
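Once a candidate has been fitted, the importance-sampling step itself is short. The sketch below uses a single Student-t candidate instead of a fitted MitISEM mixture, with an illustrative posterior kernel; the effective sample size gives a quick check on the candidate's quality.

```python
import numpy as np
from scipy import stats

def importance_sample(log_target, candidate, n=10000, rng=None):
    """Self-normalized importance sampling with a frozen scipy candidate:
    draw, weight by target kernel / candidate density, and estimate the
    posterior mean along with the effective sample size."""
    rng = np.random.default_rng(rng)
    draws = candidate.rvs(size=n, random_state=rng)
    log_w = log_target(draws) - candidate.logpdf(draws)
    w = np.exp(log_w - log_w.max())       # stabilize before normalizing
    w /= w.sum()
    ess = 1.0 / np.sum(w ** 2)            # effective sample size
    return np.sum(w * draws), ess

# Example: a fat-tailed posterior kernel exp(-|x|^3) with a t(5) candidate.
mean, ess = importance_sample(lambda x: -np.abs(x) ** 3, stats.t(df=5))
```

A low effective sample size signals a poor candidate; MitISEM's EM iterations are precisely a device for driving the candidate close enough to the target that this does not happen.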

17.
In this paper we consider the issue of unit root testing in cross-sectionally dependent panels. We consider panels that may be characterized by various forms of cross-sectional dependence, including (but not limited to) the popular common factor framework. We consider block bootstrap versions of the group-mean (Im et al., 2003) and the pooled (Levin et al., 2002) unit root coefficient DF tests for panel data, originally proposed for a setting with no cross-sectional dependence beyond a common time effect. The tests, suited for testing for unit roots in the observed data, can be easily implemented, as no specification or estimation of the dependence structure is required. Asymptotic properties of the tests are derived for T going to infinity and N finite. Asymptotic validity of the bootstrap tests is established in very general settings, including the presence of common factors and cointegration across units. Properties under the alternative hypothesis are also considered. In a Monte Carlo simulation, the bootstrap tests are found to have rejection frequencies much closer to nominal size than those of the corresponding asymptotic tests. The power properties of the bootstrap tests appear to be similar to those of the asymptotic tests.
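The key device, resampling blocks of time periods with common indices across all units so that the unspecified cross-sectional dependence is carried along intact, is easy to sketch (names illustrative):

```python
import numpy as np

def block_bootstrap_panel(y, block_len, rng=None):
    """One moving-block bootstrap resample of a T x N panel: random blocks
    of consecutive time indices are concatenated and the SAME rows are
    taken for every unit, preserving cross-sectional dependence."""
    rng = np.random.default_rng(rng)
    T = y.shape[0]
    n_blocks = int(np.ceil(T / block_len))
    starts = rng.integers(0, T - block_len + 1, size=n_blocks)
    idx = np.concatenate([np.arange(s, s + block_len) for s in starts])[:T]
    return y[idx]   # rows resampled jointly across units
```

Because only time indices are resampled, no model for the cross-sectional dependence (factors, spatial correlation, cointegration across units) ever needs to be specified or estimated.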

18.
We consider classes of multivariate distributions which can model skewness and are closed under orthogonal transformations. We review two classes of such distributions proposed in the literature and focus our attention on a particular, yet quite flexible, subclass of one of these classes. Members of this subclass are defined by affine transformations of univariate (skewed) distributions that ensure the existence of a set of coordinate axes along which there is independence and the marginals are known analytically. The choice of an appropriate m-dimensional skewed distribution is then reduced to the simpler problem of choosing m univariate skewed distributions. We introduce a Bayesian model comparison setup for selecting these univariate skewed distributions. The analysis does not rely on the existence of moments (allowing for any tail behaviour) and uses equivalent priors on the common characteristics of the different models. Finally, we apply this framework to multi-output stochastic frontiers using data from Dutch dairy farms.
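Sampling from a member of such a subclass reduces to drawing the m independent univariate skewed coordinates and applying the affine map. A minimal sketch, with skew-normal marginals as an arbitrary illustrative choice:

```python
import numpy as np
from scipy import stats

def sample_affine_skew(n, mu, A, marginals, rng=None):
    """Draw n samples y = mu + A z, where the coordinates of z are
    independent draws from the given frozen univariate distributions."""
    rng = np.random.default_rng(rng)
    z = np.column_stack([m.rvs(size=n, random_state=rng) for m in marginals])
    return mu + z @ A.T

# Two-dimensional example with skew-normal coordinates; taking A orthogonal
# keeps the draw inside a class closed under orthogonal transformations.
y = sample_affine_skew(1000, np.zeros(2), np.eye(2),
                       [stats.skewnorm(4), stats.skewnorm(-2)])
```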

19.
Since the pioneering work of Granger (1969), many authors have proposed tests of causality between economic time series. Most of them are concerned only with “linear causality in mean”, that is, with whether one series linearly affects the (conditional) mean of another. This is no doubt of primary interest, but dependence between series may be nonlinear, and/or may operate through features other than the conditional mean. Indeed, conditionally heteroskedastic models have been widely studied in recent years. The purpose of this paper is to propose a nonparametric test for possibly nonlinear causality. Taking into account that dependence in higher-order moments is becoming an important issue, especially in financial time series, we also consider a test for causality up to the Kth conditional moment. Statistically, we can also view this test as a nonparametric omitted variable test in time series regression. A desirable property of the test is that it has nontrivial power against T^{1/2}-local alternatives, where T is the sample size. Also, we can form a test statistic accordingly if we have some knowledge of the alternative hypothesis. Furthermore, we show that the test statistic includes most omitted variable test statistics as special cases asymptotically. The null asymptotic distribution is not normal, but we can easily calculate the critical regions by simulation. Monte Carlo experiments show that the proposed test has good size and power properties.

20.
This exploratory study is designed to provide an in-depth understanding of the communication world that Twitter mediates in the context of global conversations centering on the topic of Korean pop music (Kpop). Drawing on the theoretical framework of the duality of media, this study proposes that the multifaceted communication world that Twitter mediates can be understood as a result of interactions between Twitter users and the structure of Twitter. We collected all Tweets including the hashtag #kpop from November 9, 2011 to February 15, 2012, and then applied the webometric technique to visualize the #kpop Twitter networks across various regions of the world. We examined the use of URLs and hashtags in #kpop Tweets and found that Twitter use varied across regions, forming various communication networks. The results suggest that Twitter’s technological design can shape communication patterns as well as structures.
