Similar Articles
20 similar articles found.
1.
This paper gives a test of overidentifying restrictions that is robust to many instruments and heteroskedasticity. It is based on a jackknife version of the overidentifying test statistic. Correct asymptotic critical values are derived for this statistic when the number of instruments grows large, at a rate up to the sample size. It is also shown that the test is valid when the number of instruments is fixed and there is homoskedasticity. This test improves on recently proposed tests by allowing for heteroskedasticity and by avoiding assumptions on the instrument projection matrix. This paper finds in Monte Carlo studies that the test is more accurate and less sensitive to the number of instruments than the Hausman–Sargan or GMM tests of overidentifying restrictions.

2.
This paper surveys the state of the art in the econometrics of regression models with many instruments or many regressors based on alternative – namely, dimension – asymptotics. We list critical results of dimension asymptotics that lead to better approximations of properties of familiar and alternative estimators and tests when the instruments and/or regressors are numerous. Then, we consider the problem of estimation and inference in the basic linear instrumental variables regression setup with many strong instruments. We describe the failures of conventional estimation and inference, as well as alternative tools that restore consistency and validity. We then add various other features to the basic model such as heteroskedasticity, instrument weakness, etc., in each case providing a review of the existing tools for proper estimation and inference. Subsequently, we consider a related but different problem of estimation and testing in a linear mean regression with many regressors. We also describe various extensions and connections to other settings, such as panel data models, spatial models, time series models, and so on. Finally, we provide practical guidance regarding which tools are most suitable to use in various situations when many instruments and/or regressors turn out to be an issue.

3.
We compare four estimation methods for the coefficients of a linear structural equation with instrumental variables. As the classical methods we consider the limited information maximum likelihood (LIML) estimator and the two-stage least squares (TSLS) estimator, and as the semi-parametric methods we consider the maximum empirical likelihood (MEL) estimator and the generalized method of moments (GMM) (or estimating equation) estimator. Tables and figures of the distribution functions of the four estimators are given for enough values of the parameters to cover most linear models of interest, including some heteroscedastic and nonlinear cases. We find that the LIML estimator performs well in terms of bounded loss functions and probabilities when the number of instruments is large, that is, in the micro-econometric models with "many instruments" in the terminology of the recent econometric literature.

4.
Einar Rasmussen, Technovation, 2008, 28(8): 506–517
Increased efforts are being made in most industrialized countries to promote the commercialization of university research, for instance through spin-off firm formation. Many studies have investigated initiatives set up within the university sector to support and facilitate the commercialization of research, such as technology transfer offices (TTOs). However, few studies have looked at the increasing number of instruments introduced by governments. This paper reviews the Canadian support structure at the federal level that aims to support the commercialization of publicly funded research. Two types of programs can be identified: first, programs designed to induce structural reforms within the university sector in order to improve institutional capabilities to facilitate commercialization projects; second, programs providing support to specific commercialization projects. This paper explores how these types of programs are operated at the government level. An example of implementation at the university level is also given. The lessons to be learned from the Canadian case relate to how the government initiatives encourage a bottom-up approach: by providing resources for direct use in commercialization projects or to develop professional expertise in technology transfer in the university sector, by experimenting with new initiatives, and by facilitating cooperation between commercializing organizations.

5.
Any minimization problem involves a computer algorithm. Many such algorithms have been developed for boolean minimization, in diverse areas from computer science to the social sciences (with the well-known QCA algorithm). For a small number of entries (causal conditions in QCA) any such algorithm will find a minimal solution, especially with the aid of modern computers. However, for a large number of conditions a quick and complete solution is not easy to find with an algorithmic approach, because the space of possible combinations to search is extremely large. In this article I demonstrate a simple alternative: a mathematical method to obtain all possible minimized prime implicants. This method is not only easier to understand than other, more complex algorithms, but it also proves to be a faster way to obtain an exact and complete boolean solution.
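For readers unfamiliar with prime implicants, the following is a minimal, illustrative Quine–McCluskey-style sketch in Python; it is not the mathematical method proposed in the article, and the function name and toy data are made up for the example. Terms of the truth table are repeatedly merged when they differ in exactly one condition, and terms that can never be merged are the prime implicants.

```python
from itertools import combinations

def prime_implicants(minterms, n_vars):
    """Quine-McCluskey style pass: repeatedly merge terms that differ in
    exactly one literal until no more merges are possible; the terms that
    never merge are the prime implicants. Terms are strings over {'0','1','-'}."""
    terms = {format(m, f"0{n_vars}b") for m in minterms}
    primes = set()
    while terms:
        merged, used = set(), set()
        for a, b in combinations(sorted(terms), 2):
            diff = [i for i in range(n_vars) if a[i] != b[i]]
            # merge only if the terms differ in a single, non-dash position
            if len(diff) == 1 and a[diff[0]] != '-' and b[diff[0]] != '-':
                merged.add(a[:diff[0]] + '-' + a[diff[0] + 1:])
                used.update({a, b})
        primes.update(terms - used)   # terms that merged with nothing are prime
        terms = merged
    return primes

# toy example: 3 causal conditions, outcome present whenever the third is 1
print(prime_implicants([1, 3, 5, 7], 3))   # expected: {'--1'}
```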

6.
We develop a Bayesian semi-parametric approach to the instrumental variable problem. We assume linear structural and reduced form equations, but model the error distributions non-parametrically. A Dirichlet process prior is used for the joint distribution of structural and instrumental variable equations errors. Our implementation of the Dirichlet process prior uses a normal distribution as a base model. It can therefore be interpreted as modeling the unknown joint distribution with a mixture of normal distributions with a variable number of mixture components. We demonstrate that this procedure is both feasible and sensible using actual and simulated data. Sampling experiments compare inferences from the non-parametric Bayesian procedure with those based on procedures from the recent literature on weak instrument asymptotics. When errors are non-normal, our procedure is more efficient than standard Bayesian or classical methods.
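As an illustration of the mixture-of-normals interpretation mentioned above, here is a small sketch that draws data from one realization of a Dirichlet process mixture of normals. It assumes a truncated stick-breaking construction, a normal base measure over component means, and a fixed within-component standard deviation; none of these choices are taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_dp_normal_mixture(alpha=1.0, n_draws=1000, truncation=50,
                           base_mean=0.0, base_sd=1.0, obs_sd=0.5):
    """Sample from one realization of a Dirichlet process mixture of normals
    via truncated stick-breaking. The base measure G0 is N(base_mean, base_sd^2)
    over component means; each component has fixed sd obs_sd (a simplification)."""
    # stick-breaking weights: v_k ~ Beta(1, alpha), w_k = v_k * prod_{j<k}(1 - v_j)
    v = rng.beta(1.0, alpha, size=truncation)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    w /= w.sum()                                            # renormalize after truncation
    mu = rng.normal(base_mean, base_sd, size=truncation)    # component means drawn from G0
    z = rng.choice(truncation, size=n_draws, p=w)           # component labels
    return rng.normal(mu[z], obs_sd)

samples = draw_dp_normal_mixture(alpha=2.0)
print(samples.mean(), samples.std())
```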

7.
8.
We study the effects of monetary policy on economic activity separately identifying the effects of a conventional change in the fed funds rate from the policy of forward guidance. We use a structural VAR identified using external instruments from futures market data. The response of output to a fed funds rate shock is found to be consistent with typical monetary VAR analyses. However, the effect of a forward guidance shock that increases long‐term interest rates has an expansionary effect on output. This counterintuitive response is shown to be tied to the asymmetric information between the Federal Reserve and the public.

9.
Consider the problem of estimating f(θ), where f is a given function and θ is the unknown parameter of a multinomial distribution. In order to describe the asymptotic behaviour of the frequency substitution estimator, conventional methods typically require differentiability of f, and the error representations depend on the unknown parameter θ. In this note, a parameter-free bound for the mean square error is derived which requires only continuity of f.
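To make the estimator concrete, here is a small simulation sketch (my own illustration, not the note's bound): the frequency substitution estimator plugs the observed multinomial frequencies into f, and its mean square error can be checked by Monte Carlo even when f is continuous but not differentiable. The function f and parameter values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def plugin_mse(theta, f, n=200, reps=5000):
    """Monte Carlo mean square error of the frequency-substitution (plug-in)
    estimator f(theta_hat), where theta_hat are the observed multinomial
    frequencies. f only needs to be continuous, not differentiable."""
    counts = rng.multinomial(n, theta, size=reps)
    theta_hat = counts / n
    estimates = np.apply_along_axis(f, 1, theta_hat)
    return np.mean((estimates - f(np.asarray(theta))) ** 2)

# f is continuous but not differentiable where theta_1 == theta_2
f = lambda p: abs(p[0] - p[1])
print(plugin_mse([0.3, 0.3, 0.4], f))   # MSE of the plug-in estimator near the kink of f
```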

10.
Environmental monitoring instruments and equipment have long been regarded as the eyes and yardsticks of environmental protection, and they form an essential material foundation of environmental monitoring stations. To give full play to environmental monitoring's roles of technical supervision, technical support, and service, strengthening the management of monitoring instruments and equipment is imperative. This article describes the problems that exist in instrument and equipment management, proposes measures to strengthen such management, and discusses ways to increase the value obtained from instruments and equipment.

11.
We aim to calibrate stochastic volatility models from option prices. We develop a Tikhonov regularization approach with an efficient numerical algorithm to recover the risk-neutral drift term of the volatility (or variance) process. In contrast to most of the existing literature, we do not assume that the drift term has any special structure, so our algorithm applies to the calibration of general stochastic volatility models. An extensive numerical analysis is presented to demonstrate the efficiency of our approach. Interestingly, our empirical study reveals that the risk-neutral variance processes recovered from market prices of options on the S&P 500 index and the EUR/USD exchange rate are indeed linearly mean-reverting.
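The abstract does not spell out the calibration scheme, so the following is only a generic Tikhonov regularization sketch on a linear ill-posed problem (a discretized smoothing kernel; all names and numbers are invented for illustration). It shows the basic idea of the approach: the penalty term stabilizes recovery where plain least squares is swamped by noise.

```python
import numpy as np

rng = np.random.default_rng(2)

def tikhonov(A, b, lam):
    """Tikhonov-regularized solution of the ill-posed system A x ~ b:
    minimize ||A x - b||^2 + lam * ||x||^2, i.e. x = (A'A + lam I)^{-1} A'b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# ill-conditioned forward operator (discretized smoothing kernel) plus noise
t = np.linspace(0, 1, 80)
A = np.exp(-30.0 * (t[:, None] - t[None, :]) ** 2)    # heavily smooths the input
x_true = np.sin(2 * np.pi * t)
b = A @ x_true + 0.01 * rng.standard_normal(t.size)

x_naive = np.linalg.lstsq(A, b, rcond=None)[0]        # least squares: very sensitive to noise
x_reg = tikhonov(A, b, lam=1e-3)                      # regularized solution
print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_reg - x_true))
```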

12.
We consider the estimation of the coefficients of a linear structural equation in a simultaneous equation system when there are many instrumental variables. We derive asymptotic properties of the limited information maximum likelihood (LIML) estimator when the number of instruments is large; some of these results are new and some are known, and we relate them to results in recent studies. We find that the variance of the limiting distribution of the LIML estimator and its modifications often attains the asymptotic lower bound when the number of instruments is large and the disturbance terms are not necessarily normally distributed, that is, in the micro-econometric settings recently called "many instruments" and "many weak instruments".
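As a reference point for the LIML estimator discussed here and in item 3, the following is a minimal numpy sketch. It assumes no included exogenous regressors and uses the standard k-class characterization of LIML; the simulation design is invented for illustration and is not taken from the paper.

```python
import numpy as np

def liml(y, X, Z):
    """LIML estimator for y = X beta + u with instruments Z (no included
    exogenous regressors). LIML is the k-class estimator with k equal to the
    smallest eigenvalue of (Ybar' M_Z Ybar)^{-1} (Ybar' Ybar), Ybar = [y X]."""
    n = len(y)
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    Mz = np.eye(n) - Pz
    Ybar = np.column_stack([y, X])
    kappa = np.min(np.real(np.linalg.eigvals(
        np.linalg.solve(Ybar.T @ Mz @ Ybar, Ybar.T @ Ybar))))
    W = np.eye(n) - kappa * Mz                       # k-class weighting matrix
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# toy simulation with many instruments and an endogenous regressor
rng = np.random.default_rng(3)
n, K, beta = 500, 30, 1.0
Z = rng.standard_normal((n, K))
v = rng.standard_normal(n)
u = 0.6 * v + rng.standard_normal(n)                 # endogeneity via the shared component v
X = Z @ np.full(K, 0.15) + v
y = beta * X + u
print(liml(y, X.reshape(-1, 1), Z))                  # estimate of beta (true value 1.0)
```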

13.
Record linkage is the act of bringing together records from two files that are believed to belong to the same unit (e.g., a person or business). It is a low-cost way of increasing the set of variables available for analysis. Errors may arise in the linking process if an error-free unit identifier is not available. Two types of linking error are an incorrect link (records belonging to two different units are linked) and a missed record (an unlinked record for which a correct link exists). Naively ignoring linkage errors may bias analysis of the linked file. This paper outlines a "weighting approach" to making correct inference about regression coefficients and population totals in the presence of such linkage errors. The approach is designed for analysts who do not have the expertise or time to use the specialist software required by other approaches but who are comfortable using weights in inference. The performance of the estimator is demonstrated in a simulation study.

14.
Pearn et al. (1999) considered a capability index C″pmk, a new generalization of Cpmk for processes with asymmetric tolerances. In this paper, we compare C″pmk with other existing generalizations of Cpmk on the accuracy of measuring process performance for processes with asymmetric tolerances, and show that the new generalization C″pmk is superior. Under the assumption of normality, we derive explicit forms of the cumulative distribution function and the probability density function of the estimated index Ĉ″pmk, and show that they can be expressed in terms of a mixture of the chi-square distribution and the normal distribution. These explicit forms considerably simplify the analysis of the statistical properties of the estimated index.
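For context, the index being generalized is the standard Cpmk; this definition is standard in the capability-index literature and is not quoted from the paper, and the asymmetric-tolerance form C″pmk itself is defined in Pearn et al. (1999) and not reproduced here:

$$ C_{pmk} = \frac{\min\{\,USL-\mu,\ \mu-LSL\,\}}{3\sqrt{\sigma^{2}+(\mu-T)^{2}}}, $$

where USL and LSL are the upper and lower specification limits, T is the target value, and μ and σ are the process mean and standard deviation.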

15.
Nonparametric methodologies are proposed to assess college students' performance, with emphasis on gender and high-school sector. The application concerns the University of Campinas, a research university in Southeast Brazil. In Brazil, college studies are based on a somewhat rigid set of subjects for each major, so a simple GPA comparison may hide true performance. We therefore define individual vectors of course grades, which are used in pairwise comparisons of common-subject grades for individuals who entered college in the same year. The relative college performances of any two students are compared with their relative performances on the entrance exam score. A procedure based on generalized U-statistics is developed to test whether there is selection bias in the entrance exam for some predefined groups; the test statistic has an asymptotically normal distribution under both the null and alternative hypotheses. Maximum power is attained by employing the union–intersection principle, and resampling techniques such as the nonparametric bootstrap are employed to generate the empirical distribution of the test statistics and obtain p-values.
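The U-statistic construction is specific to the paper, but the resampling step it mentions is generic. The sketch below is an invented example using a simple difference-in-means statistic rather than the authors' pairwise-comparison statistic; it only illustrates how a nonparametric bootstrap p-value can be generated.

```python
import numpy as np

rng = np.random.default_rng(4)

def bootstrap_pvalue(x, y, stat, n_boot=2000):
    """Approximate p-value for H0: the two groups come from the same
    distribution. The null distribution of the statistic is generated by
    resampling both groups (with replacement) from the pooled sample."""
    observed = stat(x, y)
    pooled = np.concatenate([x, y])
    null_stats = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(pooled, size=len(x), replace=True)
        yb = rng.choice(pooled, size=len(y), replace=True)
        null_stats[b] = stat(xb, yb)
    return np.mean(np.abs(null_stats) >= np.abs(observed))

# illustration: compare average grades of two hypothetical groups of students
group_a = rng.normal(7.2, 1.0, size=120)
group_b = rng.normal(6.9, 1.0, size=150)
print(bootstrap_pvalue(group_a, group_b, lambda a, b: a.mean() - b.mean()))
```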

16.
This study develops the case for formal conceptual definitions (sometimes called nominal definitions) and shows how to develop better measurement instruments for theory-building. It develops the underlying theory of ‘good’ formal conceptual definitions by defining terms; demonstrates that formal conceptual definitions are needed for all theory-building empirical research; explains how and why ‘good’ formal conceptual definitions are used to develop properties and their measures; and, finally, argues that good formal conceptual definitions are necessary conditions for construct validity (content validity, criterion validity, convergent validity, and discriminant validity), while statistical tests are sufficient conditions for validity. This theory development explains why formal conceptual definitions are necessary before any traditional statistical validity tests are performed: such tests are not meaningful if the concept is not formally defined. In short, the theory of formal conceptual definitions provides a structure for developing ‘good’ measures, which in turn leads to ‘good’ empirical theory-building.

17.
Instrumental variable quantile regression: A robust inference approach
In this paper, we develop robust inference procedures for an instrumental variables model defined by Y = Dα(U), where Dα(U) is strictly increasing in U and U is a uniform variable that may depend on D but is independent of a set of instrumental variables Z. The proposed inferential procedures are computationally convenient in typical applications and can be carried out using software available for ordinary quantile regression. Our inferential procedure arises naturally from an estimation algorithm and has the important feature of being robust to weak and partial identification and remains valid even in cases where identification fails completely. The use of the proposed procedures is illustrated through two empirical examples.
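To illustrate the remark that the procedure "can be carried out using software available for ordinary quantile regression", here is a minimal sketch of the grid-search idea for a single endogenous regressor: profile out the endogenous coefficient and keep the value at which the instrument's coefficient in an ordinary quantile regression is closest to zero. The variable names, grid, and simulation design are my own illustrative choices, not the paper's implementation.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)

def ivqr_grid(y, d, z, tau=0.5, grid=np.linspace(-2, 2, 81)):
    """Grid-search IV quantile regression for one endogenous regressor d and
    one instrument z: for each candidate alpha, run an ordinary quantile
    regression of y - alpha*d on (1, z) and keep the alpha whose fitted
    coefficient on z is closest to zero."""
    exog = sm.add_constant(z)
    best_alpha, best_gamma = None, np.inf
    for alpha in grid:
        res = sm.QuantReg(y - alpha * d, exog).fit(q=tau)
        gamma = abs(res.params[1])          # coefficient on the instrument
        if gamma < best_gamma:
            best_alpha, best_gamma = alpha, gamma
    return best_alpha

# toy design: d is endogenous, z is a valid instrument, true coefficient is 1.0
n = 2000
z = rng.standard_normal(n)
v = rng.standard_normal(n)
d = 0.8 * z + v                              # d shares the component v with the error
y = 1.0 * d + 0.5 * v + rng.standard_normal(n)
print(ivqr_grid(y, d, z, tau=0.5))           # should be close to 1.0
```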

18.
19.
A special class of combinatorial optimization problems is considered. We develop a compact nonconvex quadratic model for these problems that incorporates all inequality constraints in the objective function, and discuss two approximation algorithms for solving this model. One is inspired by Karmarkar's potential reduction algorithm for solving combinatorial optimization problems; the other is a variant of the reduced gradient method. The paper concludes with computational experience on both real-life and randomly generated instances of the frequency assignment problem. Large problems are satisfactorily solved in reasonable computation times.

20.
The demand for nurses in virtually all Western countries has been outpacing the supply for more than a decade. The situation is now at the point where the rules for good practice are being stretched to the limit and patient care is in jeopardy. The purpose of this paper is to present several ideas for maximizing the use of the available staff and to quantify the resultant benefits. Two approaches are investigated for substituting nurses with higher-level skills for those with lower-level skills when there is sufficient idle time to do so. Idle time is usually due to scheduling constraints and contractual agreements that prevent a hospital from arbitrarily assigning nurses to shifts over the week. When the substitution is skill-related, as it is here, it is often called downgrading. The models that we develop are for preference scheduling, which means that individual preferences are taken into account when constructing monthly rosters. There are several reasons for doing this in today's environment, the most important being the need to boost staff morale and increase retention. The problem is modeled as an integer program and solved with a column generation technique that relies on intelligent heuristics for identifying good candidate schedules. The computations show that high-quality solutions, as measured by the reduction in the need for non-unit nurses as well as the degree to which preferences are satisfied, can usually be obtained in a matter of minutes.

