Similar Documents
20 similar documents found; search time: 31 ms
1.
Much empirical research has been devoted to housing market segmentation and its implications for the application of the hedonic price model. Market segmentation is demonstrated empirically in (J. Urban Econ., 3:2, 146–166 (1976); J. Urban Econ., 7:1, 102–108 (1980); Rev. Econ. Statist., 66:3, 404–406 (1974)). There appears to be theoretical evidence (J. Pol. Econ., 82:1, 34–55 (1974)) that such empirical efforts may have been less than adequate. Unfortunately, the empirical applications of Rosen's model (J. Environ. Econ. Manag., 5, 81–102 (1978); J. Urban Econ., 5:3, 357–369 (1978)) do not seem to account effectively for the complexity of the factors that cause multiple equilibria (market segmentation). This paper demonstrates empirically that market segments can be defined along more dimensions than have hitherto been included in any analysis.

2.
This paper focuses on nonparametric efficiency analysis based on robust estimation of partial frontiers in a complete multivariate setup (multiple inputs and multiple outputs). It introduces α-quantile efficiency scores, and a nonparametric estimator is proposed that achieves strong consistency and asymptotic normality. If α increases to one as a function of the sample size, we recover the properties of the FDH estimator; our estimator, however, is more robust to perturbations in the data, since it attains a finite gross-error sensitivity. Environmental variables can be introduced to evaluate efficiencies, and a consistent estimator is proposed. Numerical examples illustrate the usefulness of the approach.
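The α-quantile idea can be illustrated with a deliberately simplified sketch in a single-input, single-output setting (the paper treats the full multivariate case); the function name and the input-oriented formulation are assumptions of this illustration, not the paper's notation:

```python
import numpy as np

def fdh_input_efficiency(x, y, alpha=1.0):
    """Input-oriented efficiency scores for single-input, single-output data.

    alpha = 1.0 reproduces the classical FDH estimator (full frontier);
    alpha < 1 gives a partial-frontier (quantile-type) score that is less
    sensitive to outlying observations.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    scores = np.empty(len(x))
    for i in range(len(x)):
        # input ratios of all units producing at least y_i
        dominating = x[y >= y[i]] / x[i]
        # alpha = 1 -> minimum over dominating units (full FDH frontier)
        scores[i] = np.quantile(dominating, 1.0 - alpha)
    return scores
```

With `alpha = 1.0` the score of each unit is the smallest input ratio among units producing at least as much output (the classical FDH score); lowering `alpha` discounts extreme observations, which is the robustness property the abstract emphasizes.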

3.
Subsampling and the m out of n bootstrap have been suggested in the literature as methods for carrying out inference based on post-model selection estimators and shrinkage estimators. In this paper we consider a subsampling confidence interval (CI) that is based on an estimator that can be viewed either as a post-model selection estimator that employs a consistent model selection procedure or as a super-efficient estimator. We show that the subsampling CI (of nominal level 1−α for any α ∈ (0,1)) has asymptotic confidence size (defined to be the limit of finite-sample size) equal to zero in a very simple regular model. The same result holds for the m out of n bootstrap provided m²/n → 0 and the observations are i.i.d. Similar zero-asymptotic-confidence-size results hold in more complicated models that are covered by the general results given in the paper and for super-efficient and shrinkage estimators that are not post-model selection estimators. Based on these results, subsampling and the m out of n bootstrap are not recommended for obtaining inference based on post-consistent model selection or shrinkage estimators.
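For readers unfamiliar with the construction being criticized, here is the generic subsampling confidence interval for a scalar parameter; this is a sketch of the standard recipe (function name and defaults are illustrative), and the paper's point is precisely that its coverage can collapse when `stat` is a post-model-selection or shrinkage estimator:

```python
import numpy as np

def subsample_ci(data, stat, b, level=0.95, n_sub=500, seed=0):
    """Subsampling confidence interval for a real-valued statistic.

    Approximates the law of sqrt(n)*(stat_n - theta) by the subsample
    distribution of sqrt(b)*(stat_b - stat_n), then inverts it.
    """
    rng = np.random.default_rng(seed)
    data = np.asarray(data, float)
    n = len(data)
    theta_n = stat(data)
    draws = np.empty(n_sub)
    for k in range(n_sub):
        idx = rng.choice(n, size=b, replace=False)  # subsample without replacement
        draws[k] = np.sqrt(b) * (stat(data[idx]) - theta_n)
    lo_q, hi_q = np.quantile(draws, [(1 - level) / 2, 1 - (1 - level) / 2])
    return theta_n - hi_q / np.sqrt(n), theta_n - lo_q / np.sqrt(n)
```

For a regular statistic like the sample mean this behaves as expected; substituting a hard-thresholding or post-selection `stat` is what produces the size distortions documented in the paper.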

4.
This paper examines the short-run effects of changes in exogenous variables (including several government policies) on the schedule relating market equilibrium rent to quality level. The basic model differs from Sweeney (Econometrica, 42, 147–167 (1974)) by use of a bid rent closed city approach rather than a supply and demand (partially) open city approach. The mathematics changes completely, the analytics simplify, and the results change somewhat. Housing is treated as a durable quality differentiated good, but frictional forces and the multidimensionality of the housing package are ignored. The exception is an extension to a monocentric city context, so that housing units vary in both quality and location.

5.
Censored regression quantiles with endogenous regressors
This paper develops a semiparametric method for estimation of the censored regression model when some of the regressors are endogenous (and continuously distributed) and instrumental variables are available for them. A “distributional exclusion” restriction is imposed on the unobservable errors, whose conditional distribution is assumed to depend on the regressors and instruments only through a lower-dimensional “control variable,” here assumed to be the difference between the endogenous regressors and their conditional expectations given the instruments. This assumption, which implies a similar exclusion restriction for the conditional quantiles of the censored dependent variable, is used to motivate a two-stage estimator of the censored regression coefficients. In the first stage, the conditional quantile of the dependent variable given the instruments and the regressors is nonparametrically estimated, as are the first-stage reduced-form residuals to be used as control variables. The second-stage estimator is a weighted least squares regression of pairwise differences in the estimated quantiles on the corresponding differences in regressors, using only pairs of observations for which both estimated quantiles are positive (i.e., in the uncensored region) and the corresponding difference in estimated control variables is small. The paper gives the form of the asymptotic distribution for the proposed estimator, and discusses how it compares to similar estimators for alternative models.
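The control variables of the first stage can be sketched with a linear reduced form in place of the paper's nonparametric estimator (a simplifying assumption; the function name is illustrative):

```python
import numpy as np

def control_variables(x_endog, Z):
    """First-stage reduced-form residuals used as control variables:
    v_i = x_i - E[x_i | z_i], with the conditional expectation approximated
    here by a linear regression on the instruments (the paper estimates it
    nonparametrically).
    """
    x_endog = np.asarray(x_endog, float)
    Z1 = np.column_stack([np.ones(len(x_endog)), Z])  # add an intercept
    gamma, *_ = np.linalg.lstsq(Z1, x_endog, rcond=None)
    return x_endog - Z1 @ gamma
```

In the paper's second stage, pairs of observations are kept only when the difference of these estimated control variables is small, which is what makes the exclusion restriction operational.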

6.
This paper examines the technical efficiency of US Federal Reserve check processing offices over 1980–2003. We extend results from Park et al. [Park, B., Simar, L., Weiner, C., 2000. FDH efficiency scores from a stochastic point of view. Econometric Theory 16, 855–877] and Daouia and Simar [Daouia, A., Simar, L., 2007. Nonparametric efficiency analysis: a multivariate conditional quantile approach. Journal of Econometrics 140, 375–400] to develop an unconditional, hyperbolic, α-quantile estimator of efficiency. Our new estimator is fully non-parametric and robust with respect to outliers; when used to estimate distance to quantiles lying close to the full frontier, it is strongly consistent and converges at rate root-n, thus avoiding the curse of dimensionality that plagues data envelopment analysis (DEA) estimators. Our methods could be used by policymakers to compare inefficiency levels across offices or by managers of individual offices to identify peer offices.

7.
We study the problem of testing hypotheses on the parameters of one- and two-factor stochastic volatility (SV) models, allowing for the possible presence of non-regularities such as singular moment conditions and unidentified parameters, which can lead to non-standard asymptotic distributions. We focus on the development of simulation-based exact procedures, whose level can be controlled in finite samples, as well as on large-sample procedures which remain valid under non-regular conditions. We consider Wald-type, score-type and likelihood-ratio-type tests based on a simple moment estimator, which can be easily simulated. We also propose a C(α)-type test which is very easy to implement and exhibits relatively good size and power properties. Besides the usual linear restrictions on the SV model coefficients, the problems studied include testing homoskedasticity against an SV alternative (which involves singular moment conditions under the null hypothesis) and testing the null hypothesis of one factor driving the dynamics of the volatility process against two factors (which raises identification difficulties). Three ways of implementing the tests based on alternative statistics are compared: asymptotic critical values (when available), a local Monte Carlo (or parametric bootstrap) test procedure, and a maximized Monte Carlo (MMC) procedure. The size and power properties of the proposed tests are examined in a simulation experiment. The results indicate that the C(α)-based tests (built upon the simple moment estimator available in closed form) have good size and power properties for regular hypotheses, while Monte Carlo tests are much more reliable than those based on asymptotic critical values. Further, in cases where the parametric bootstrap appears to fail (for example, in the presence of identification problems), the MMC procedure easily controls the level of the tests. Moreover, MMC-based tests exhibit relatively good power performance despite the conservative feature of the procedure. Finally, we present an application to a time series of returns on the Standard and Poor's Composite Price Index.
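The local Monte Carlo (parametric bootstrap) test logic is generic and easy to sketch; the statistic simulator below is a placeholder for the Wald/score/C(α) statistics of the paper:

```python
import numpy as np

def mc_pvalue(stat_obs, simulate_stat, n_rep=99, seed=0):
    """Monte Carlo p-value for a test that rejects for large statistic values.

    When (n_rep + 1) * alpha is an integer, the test has exact level alpha
    under a fully specified null. `simulate_stat(rng)` must draw one value
    of the test statistic under the null distribution.
    """
    rng = np.random.default_rng(seed)
    sims = np.array([simulate_stat(rng) for _ in range(n_rep)])
    # +1 in numerator and denominator: the observed statistic counts as one draw
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)
```

The maximized Monte Carlo (MMC) variant in the paper additionally maximizes this p-value over the nuisance parameters consistent with the null, which is what restores level control under identification failure.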

8.
Estimating dynamic panel data discrete choice models with fixed effects
This paper considers the estimation of dynamic binary choice panel data models with fixed effects. It is shown that the modified maximum likelihood estimator (MMLE) used in this paper reduces the order of the bias in the maximum likelihood estimator from O(T^{-1}) to O(T^{-2}), without increasing the asymptotic variance. No orthogonal reparametrization is needed. Monte Carlo simulations are used to evaluate its performance in finite samples where T is not large. In probit and logit models containing lags of the endogenous variable and exogenous variables, the estimator is found to have a small bias in a panel with eight periods. A distinctive advantage of the MMLE is its general applicability. Estimation and relevance of different policy parameters of interest in models of this kind are also addressed.

9.
Ruggiero (European Journal of Operational Research, 115, 555–563 (1999)) compared the two popular parametric frontier methods for cross-sectional data, the stochastic frontier and the corrected OLS, in a simulation study. He demonstrated that the inefficiency ranking accuracy of the established stochastic frontier is uniformly inferior to that of the misspecified corrected OLS (COLS), which lacks an error term. The reason for his result remains unclear, however. In this paper, a more extensive simulation study is therefore conducted to find out whether the superiority of COLS is simply due to small sample sizes or to poor performance of the inefficiency level estimator. JEL Classification: C1, C2, C5
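COLS itself is straightforward to sketch for a single-equation production frontier (the function name and the simple linear specification are assumptions of this illustration):

```python
import numpy as np

def cols_frontier(X, y):
    """Corrected OLS production frontier estimate.

    Fit OLS, then shift the intercept up by the largest residual so the
    frontier envelops the data; the inefficiency of firm i is the gap
    between the frontier and its observation. One-sided, with no noise
    term: exactly the misspecification examined in the simulation study.
    """
    X1 = np.column_stack([np.ones(len(y)), X])
    beta = np.linalg.lstsq(X1, y, rcond=None)[0]
    resid = y - X1 @ beta
    beta[0] += resid.max()            # shift intercept to the frontier
    inefficiency = X1 @ beta - y      # >= 0, zero for the best firm
    return beta, inefficiency
```

The stochastic frontier, by contrast, decomposes the error into a symmetric noise term and a one-sided inefficiency term; the surprise in Ruggiero's study is that the simpler COLS ranks firms better.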

10.
This paper considers estimation and inference in linear panel regression models with lagged dependent variables and/or other weakly exogenous regressors when N (the cross-section dimension) is large relative to T (the time series dimension). It allows for fixed and time effects (FE-TE) and derives a general formula for the bias of the FE-TE estimator which generalizes the well-known Nickell bias formula derived for pure autoregressive dynamic panel data models. It shows that in the presence of weakly exogenous regressors, inference based on the FE-TE estimator will result in size distortions unless N/T is sufficiently small. To deal with the bias and size distortion of the FE-TE estimator, the use of a half-panel jackknife FE-TE estimator is considered and its asymptotic distribution is derived. It is shown that the bias of the half-panel jackknife FE-TE estimator is of order T^{-2}, and for valid inference it is only required that N/T^3 → 0 as N, T → ∞ jointly. An extension to unbalanced panel data models is also provided. The theoretical results are illustrated with Monte Carlo evidence. It is shown that the FE-TE estimator can suffer from large size distortions when N > T, with the half-panel jackknife FE-TE estimator showing little size distortion. The use of the half-panel jackknife FE-TE estimator is illustrated with two empirical applications from the literature.
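The half-panel jackknife correction has a one-line form: the full-sample estimate is combined with estimates from the two half-panels so that the O(1/T) bias cancels. A minimal sketch, with the plug-in FE-TE estimator left abstract:

```python
import numpy as np

def half_panel_jackknife(panel_y, panel_x, fe_te_estimator):
    """Half-panel jackknife bias correction for an FE-TE estimator.

    theta_hpj = 2*theta_full - 0.5*(theta_first_half + theta_second_half):
    the O(1/T) bias terms cancel, leaving a bias of order T^{-2}.
    `fe_te_estimator(y, x)` must return the fixed/time-effects estimate
    from an (N x T) panel; any consistent FE-TE routine can be plugged in.
    """
    T = panel_y.shape[1]
    h = T // 2
    full = fe_te_estimator(panel_y, panel_x)
    first = fe_te_estimator(panel_y[:, :h], panel_x[:, :h])
    second = fe_te_estimator(panel_y[:, h:], panel_x[:, h:])
    return 2.0 * full - 0.5 * (first + second)
```

The test below uses a stylized estimator with a known 1/T bias to show the cancellation exactly.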

11.
This paper is singular in its use of the PSED dataset for deriving a better understanding of the nature of nascent entrepreneurs as compared to franchisee entrepreneurs. We used previous studies on the differences between the two groups and developed variables divided into three dimensions: (1) prior experience, (2) growth objectives, and (3) motivation and risk. Jonckheere–Terpstra (J–T) tests, Chi-Square tests, F-tests and logistic regression models detected differences in all three dimensions. The conclusion is that franchisee entrepreneurs in the United States of America are distinctive in their characteristics. As compared to nascent entrepreneurs, franchisee entrepreneurs have less experience, less confidence in their skills, less capital, more aspirations for larger organizations, less confidence in their abilities to make the business a success, and more belief that their first-year incomes will be stable.

12.
A simple econometric test for rational expectations in the case in which unobservable, rationally expected variables appear in a structural equation is presented. Using McCallum's instrumental variable estimator as a base, a test for rational expectations per se and a joint test of rational expectations and hypotheses about the structural equation are presented. The new test is shown to be a new interpretation of Basmann's test of overidentifying restrictions. As an illustration, the hypothesis that the forward exchange rate is the rationally expected future spot exchange rate is tested and rejected.
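Since the test is shown to be an interpretation of Basmann's overidentification test, a sketch of the closely related Sargan/Basmann-type statistic in a 2SLS setting may help fix ideas; this generic form, not the paper's exact statistic, is what is coded here:

```python
import numpy as np

def sargan_statistic(y, X, Z):
    """Sargan/Basmann-type test of overidentifying restrictions.

    Two-stage least squares with instruments Z, then a measure of how far
    the 2SLS residuals are from orthogonal to Z. Under the null (valid
    instruments; rational expectations in the paper's setting) the
    statistic is asymptotically chi-square with cols(Z) - cols(X) degrees
    of freedom.
    """
    PZ = Z @ np.linalg.solve(Z.T @ Z, Z.T)              # projection onto Z
    beta = np.linalg.solve(X.T @ PZ @ X, X.T @ PZ @ y)  # 2SLS coefficients
    u = y - X @ beta
    stat = len(y) * (u @ PZ @ u) / (u @ u)
    df = Z.shape[1] - X.shape[1]
    return stat, df
```

Rejection for large values of the statistic, compared against chi-square critical values with `df` degrees of freedom, corresponds to rejecting the overidentifying restrictions.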

13.
We propose two new semiparametric specification tests which test whether a vector of conditional moment conditions is satisfied for any vector of parameter values θ0. Unlike most existing tests, our tests are asymptotically valid under weak and/or partial identification and can accommodate discontinuities in the conditional moment functions. Our tests are moreover consistent provided that identification is not too weak. We do not require the availability of a consistent first step estimator. Like Robinson [Robinson, Peter M., 1987. Asymptotically efficient estimation in the presence of heteroskedasticity of unknown form. Econometrica 55, 875–891] and many others in similar problems subsequently, we use k-nearest neighbor (knn) weights instead of kernel weights. The advantage of using knn weights is that local power is invariant to transformations of the instruments and that under strong point identification computation of the test statistic yields an efficient estimator of θ0 as a byproduct.

14.
This paper considers semiparametric efficient estimation of conditional moment models with possibly nonsmooth residuals in unknown parametric components (θ) and unknown functions (h) of endogenous variables. We show that: (1) the penalized sieve minimum distance (PSMD) estimator can simultaneously achieve root-n asymptotic normality of the estimator of θ and the nonparametric optimal convergence rate for the estimator of h, allowing for noncompact function parameter spaces; (2) a simple weighted bootstrap procedure consistently estimates the limiting distribution of the PSMD estimator of θ; (3) the semiparametric efficiency bound formula of Ai and Chen [Ai, C., Chen, X., 2003. Efficient estimation of models with conditional moment restrictions containing unknown functions. Econometrica, 71, 1795–1843] remains valid for conditional models with nonsmooth residuals, and the optimally weighted PSMD estimator achieves the bound; (4) the centered, profiled optimally weighted PSMD criterion is asymptotically chi-square distributed. We illustrate our theories using a partially linear quantile instrumental variables (IV) regression, a Monte Carlo study, and an empirical estimation of shape-invariant quantile IV Engel curves.

15.
Ratio cum product method of estimation
M. P. Singh, Metrika 12(1), 34–42 (1967)
Summary: In this paper, methods of estimation which may be considered as combinations of the ratio and product methods are suggested. The mean square errors of these estimators, which utilize two supplementary variables, are compared with (i) the simple unbiased estimator (p = 0), (ii) the usual ratio and product methods of estimation (p = 1), and (iii) the multivariate ratio and multivariate product estimators (p = 2), where p is the number of supplementary variables utilized. Conditions for their efficient use are obtained in each case. Extension to the general case of p variables is briefly discussed. A new criterion for the efficient use of the product estimator is also obtained.
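The building blocks can be sketched directly; the ratio-cum-product form below, ratio in one supplementary variable and product in the other, follows the spirit of the paper, with function names as illustrative assumptions:

```python
import numpy as np

def ratio_estimator(y, x, X_bar):
    """Classical ratio estimator of the population mean of y, using one
    supplementary variable x with known population mean X_bar; efficient
    when y and x are strongly positively correlated."""
    return np.mean(y) * X_bar / np.mean(x)

def product_estimator(y, x, X_bar):
    """Classical product estimator; efficient when y and x are
    negatively correlated."""
    return np.mean(y) * np.mean(x) / X_bar

def ratio_cum_product(y, x1, x2, X1_bar, X2_bar):
    """Ratio-cum-product combination: ratio adjustment in x1, product
    adjustment in x2 (an illustrative two-supplementary-variable form)."""
    return np.mean(y) * (X1_bar / np.mean(x1)) * (np.mean(x2) / X2_bar)
```

When the sample means of the supplementary variables equal their population means, all three estimators reduce to the simple sample mean of y.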

16.
A fuzzy-QFD approach to supplier selection
This article suggests a new method that transfers the house of quality (HOQ) approach typical of quality function deployment (QFD) problems to the supplier selection process. To test its efficacy, the method is applied to a supplier selection process for a medium-to-large industry that manufactures complete clutch couplings. The study starts by identifying the features that the purchased product should have (internal variables “WHAT”) in order to satisfy the company's needs, then it seeks to establish the relevant supplier assessment criteria (external variables “HOW”) in order to come up with a final ranking based on the fuzzy suitability index (FSI). The whole procedure was implemented using fuzzy numbers; the application of a fuzzy algorithm allowed the company to define by means of linguistic variables the relative importance of the “WHAT”, the “HOW”–“WHAT” correlation scores, the resulting weights of the “HOW” and the impact of each potential supplier. Special attention is paid to the various subjective assessments in the HOQ process, and symmetrical triangular fuzzy numbers are suggested to capture the vagueness in people's verbal assessments.
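The fuzzy arithmetic underlying such a ranking can be sketched with triangular fuzzy numbers (TFNs); the class below uses the common approximate operations for positive TFNs, and the mapping from linguistic variables to TFNs is an assumption of this illustration, not the article's exact scale:

```python
class TFN:
    """Triangular fuzzy number (l, m, u), l <= m <= u, with the arithmetic
    typically used in fuzzy-QFD scoring."""
    def __init__(self, l, m, u):
        self.l, self.m, self.u = l, m, u
    def __add__(self, other):
        return TFN(self.l + other.l, self.m + other.m, self.u + other.u)
    def __mul__(self, other):
        # approximate product, valid for positive TFNs
        return TFN(self.l * other.l, self.m * other.m, self.u * other.u)
    def defuzzify(self):
        # centroid of the triangle: the usual crisp value used for ranking
        return (self.l + self.m + self.u) / 3.0

def supplier_index(weights, scores):
    """Weighted fuzzy score of one supplier: sum over criteria of
    weight_j * score_j. This plays the role of the fuzzy suitability
    index (FSI) in the article."""
    total = TFN(0.0, 0.0, 0.0)
    for w, s in zip(weights, scores):
        total = total + w * s
    return total
```

Suppliers are then ranked by the defuzzified value of their fuzzy index.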

17.
This paper helps to fill a gap in the public economics literature by providing empirical evidence on strategic interaction among local governments. Using the methodology of Case et al. (Journal of Public Economics, 52, 285–307 (1993)), the paper focuses on the adoption of growth-control measures by cities in California and looks for evidence of policy interdependence in these choices. The data are drawn from an elaborate survey of growth control practices in California cities, conducted by Glickfeld and Levine (“Regional Growth . . . Local Reaction,” Lincoln Institute of Land Policy, Cambridge, MA, 1992). The survey results are used to compute an index of the stringency of growth controls in each city, which serves as the dependent variable for the study.

18.
In credibility theory, an unobservable random vector Y is approximated by a random vector Ŷ in a pre-assigned set A of admitted estimators for Y. The credibility approximation Ŷ for Y is best in the sense that it is the vector V ∈ A minimizing the distance between V and Y. The credibility estimator depends on observable random variables but also on unknown structure parameters. In practical models, the latter can be estimated from realizations of the observable random variables.
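A concrete classical instance of this setup is Bühlmann credibility, where A is the set of affine functions of the individual means and the structure parameters are estimated from the observed claims; the sketch below uses the standard empirical estimators (an assumption of this illustration, since the abstract does not fix a particular model):

```python
import numpy as np

def buhlmann_credibility(claims):
    """Buhlmann credibility premiums from a (risks x years) claims matrix.

    Structure parameters (overall mean, expected process variance s2,
    variance of hypothetical means a) are estimated from the data, then
    each risk's premium is Z * own_mean + (1 - Z) * overall_mean.
    """
    claims = np.asarray(claims, float)
    k, n = claims.shape
    xbar_i = claims.mean(axis=1)                 # per-risk means
    xbar = claims.mean()                         # overall mean
    s2 = np.mean(claims.var(axis=1, ddof=1))     # expected process variance
    a = max(xbar_i.var(ddof=1) - s2 / n, 0.0)    # between-risk variance
    Z = n * a / (n * a + s2) if (n * a + s2) > 0 else 0.0
    return Z * xbar_i + (1 - Z) * xbar
```

When the risks look identical, Z collapses to 0 and every premium is the overall mean; when all variation is between risks, Z goes to 1 and each risk gets its own mean.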

19.
Let (T_n)_{n≥1} be a sequence of random variables (rv) of interest, distributed as T. In censorship models the rv T is subject to random censoring by another rv C. Let θ be the mode of T. In this paper we define a new smooth kernel estimator θ̂_n of θ and establish its almost sure convergence under an α-mixing condition.
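Ignoring censoring (the paper's estimator must additionally account for the censoring variable C), a smooth kernel mode estimator is just the argmax of a kernel density estimate; a minimal sketch with a Gaussian kernel:

```python
import numpy as np

def kernel_mode(sample, bandwidth, grid_size=512):
    """Smooth kernel estimator of the mode: argmax over a grid of a
    Gaussian kernel density estimate of the sample. Censoring is ignored
    in this sketch."""
    sample = np.asarray(sample, float)
    grid = np.linspace(sample.min(), sample.max(), grid_size)
    # Gaussian KDE evaluated at every grid point
    z = (grid[:, None] - sample[None, :]) / bandwidth
    dens = np.exp(-0.5 * z**2).sum(axis=1) / (len(sample) * bandwidth * np.sqrt(2 * np.pi))
    return grid[np.argmax(dens)]
```

Smoothness of the kernel is what makes the mode estimator well behaved; the bandwidth controls the trade-off between bias and variance, as usual in kernel estimation.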

20.
This paper studies the predictive ability of a variety of models in forecasting the yield curve for the Brazilian fixed income market. We compare affine term structure models with a variation of the Nelson–Siegel exponential framework developed by Diebold and Li [Diebold, F., & Li, C. (2006). Forecasting the term structure of government bond yields. Journal of Econometrics, 130, 337–364]. Empirical results suggest that forecasts made with the latter methodology are superior and appear to be more accurate at long horizons than the other benchmark forecasts. These results are important for policy-makers, as well as for portfolio and risk managers. Further research could study the predictive ability of such models in other emerging markets.
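The Diebold–Li variant of Nelson–Siegel reduces curve fitting to a cross-sectional least-squares problem at each date, with λ fixed at 0.0609 for maturities measured in months (the value that maximizes the curvature loading at 30 months); a sketch of the loadings and the per-date fit, with function names as illustrative assumptions:

```python
import numpy as np

def ns_loadings(tau, lam=0.0609):
    """Nelson-Siegel factor loadings (level, slope, curvature) at
    maturities tau, with the Diebold-Li fixed decay parameter."""
    tau = np.asarray(tau, float)
    slope = (1 - np.exp(-lam * tau)) / (lam * tau)
    return np.column_stack([np.ones_like(tau), slope, slope - np.exp(-lam * tau)])

def fit_ns(yields, tau, lam=0.0609):
    """Cross-sectional OLS of observed yields on the loadings, returning
    the level/slope/curvature factors for one date. Forecasting the curve
    then reduces to time-series models (e.g. AR(1)) on these factors."""
    L = ns_loadings(tau, lam)
    beta, *_ = np.linalg.lstsq(L, yields, rcond=None)
    return beta
```

The affine term structure models compared in the paper instead impose no-arbitrage restrictions; the empirical finding is that the simpler factor approach forecasts better at long horizons.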

