Similar Documents
Found 20 similar documents.
1.
Given a normal sample with means \({{\bf x}_{1}^{\prime} {\bf \varphi}, \ldots, {\bf x}_{n}^{\prime} {\bf \varphi}}\) and variance v, minimum variance unbiased estimates are given for the moments of L, where log L is normal with mean \({{\bf x}^{\prime} {\bf \varphi}}\) and variance v. These estimates converge to wrong values if the normality assumption is false. In the latter case, estimates with bias \({O\left(n^{-1}\right)}\) or \({O\left(n^{-2}\right)}\), based on any M-estimate of \({{\bf \varphi}}\), are available. More generally, such estimates are given for any smooth function of \({\left({\bf \varphi}, F\right)}\), where F is the unknown distribution of the residuals. The regression functions need not be linear.
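A minimal numerical sketch of the issue, assuming NumPy (the design point and all names are made up, and the correction shown is a first-order plug-in adjustment, not the paper's minimum variance unbiased estimator): the naive estimator exp(x'φ̂ + v̂/2) of E[L] carries an O(n⁻¹) bias that a leading-term correction can remove.

```python
# Sketch: estimating E[L] = exp(x'phi + v/2) from a fitted log-linear model.
# The plug-in estimator is biased O(1/n); subtracting the leading bias term
# reduces this to O(1/n^2). Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
phi_true, v_true = np.array([1.0, 0.5, -0.3]), 0.25
logL = X @ phi_true + rng.normal(scale=np.sqrt(v_true), size=n)

phi_hat, *_ = np.linalg.lstsq(X, logL, rcond=None)   # OLS estimate of phi
resid = logL - X @ phi_hat
v_hat = resid @ resid / (n - p)                      # unbiased estimate of v

x0 = np.array([1.0, 0.2, 0.1])                       # a new design point
naive = np.exp(x0 @ phi_hat + v_hat / 2)             # plug-in, bias O(1/n)

h = x0 @ np.linalg.solve(X.T @ X, x0)                # leverage of x0
bias1 = naive * (h * v_hat / 2 + v_hat**2 / (4 * (n - p)))
corrected = naive - bias1                            # bias reduced to O(1/n^2)
print(naive, corrected, np.exp(x0 @ phi_true + v_true / 2))
```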

2.
The literature on mixed methods and multimethods has burgeoned over the last 20 years, and researchers from a growing number and diversity of fields have progressively embraced these approaches. However, rapid growth in any movement inevitably gives rise to gaps or shortcomings, such as “identity crises” or divergent conceptual views. Although some authors draw a clear and sometimes opinionated distinction between mixed methods and multimethods, for others the terms are synonymous. The concepts underlying both terms have therefore become blurred and have generated much confusion. The aim of this article is to explore the origins of the confusion, describe our view of mixed methods and multimethod studies, and, by doing so, help to clearly delineate the two concepts. The authors present their view of how these terms and concepts should be distinguished and call for a constructive debate of the issues involved in the mixed methods and multimethod literature. This is a way to truly propel the field forward.

3.
This paper considers the provision of some important municipal services and applies the non-parametric double bootstrap model of Simar and Wilson (J Econ 136(1):31–64, 2007), based on a truncated regression, to estimate the effect of a group of relevant factors, including the political sign of the governing party and the type of management, on robust DEA (Data Envelopment Analysis) estimates. Certain conditions, such as separability, must hold for meaningful first-stage efficiency estimates and second-stage regression. After some confusion in the literature, Simar and Wilson (J Prod Anal 36(2):205–218, 2011b) clarified that their 2007 work actually defines a statistical model in which truncated regression (not censored/Tobit regression, and not ordinary least squares) yields consistent estimation of the model features. They demonstrate that conventional, likelihood-based approaches to inference are invalid, and they develop a bootstrap approach that yields valid inference in second-stage regressions when these are appropriate. The results reveal a significant relation between efficiency and all the variables analysed, and that municipalities governed by progressive parties are more efficient.
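A minimal sketch of the second-stage truncated regression (assuming NumPy/SciPy; data are simulated and all names are hypothetical). In the spirit of Simar and Wilson (2007), scores δ ≥ 1 are modeled as δ = z'β + ε with ε a left-truncated normal; the full Algorithm #2, which also bootstraps the first-stage DEA scores, is omitted.

```python
# Second-stage truncated-normal regression by maximum likelihood:
# delta = z'beta + eps, with eps truncated so that delta >= 1.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 300
Z = np.column_stack([np.ones(n), rng.uniform(size=n)])
beta_true, sigma_true = np.array([1.2, 0.8]), 0.3
# draw eps from N(0, sigma^2) truncated so that delta >= 1
lo = (1.0 - Z @ beta_true) / sigma_true
u = rng.uniform(norm.cdf(lo), 1.0)
delta = Z @ beta_true + sigma_true * norm.ppf(u)

def negloglik(theta):
    beta, sig = theta[:-1], np.exp(theta[-1])   # log-sigma keeps sigma > 0
    mu = Z @ beta
    z = (delta - mu) / sig
    a = (1.0 - mu) / sig                        # truncation point
    # truncated-normal density: phi(z) / (sig * (1 - Phi(a)))
    return -(norm.logpdf(z) - np.log(sig) - norm.logsf(a)).sum()

res = minimize(negloglik, x0=np.array([1.0, 0.5, np.log(0.5)]), method="BFGS")
beta_hat, sigma_hat = res.x[:-1], np.exp(res.x[-1])
print(beta_hat, sigma_hat)   # compare with beta_true, sigma_true
```

Maximizing this truncated likelihood, rather than running Tobit or OLS, is exactly the distinction the abstract stresses.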

4.
5.
The literature on neighbor designs as introduced by Rees (Biometrics 23:779–791, 1967) is mainly devoted to construction methods, providing few results on their statistical properties, such as efficiency and optimality. A review of the available literature, with special emphasis on the optimality of neighbor designs under various fixed effects interference models, is given in Filipiak and Markiewicz (Commun Stat Theory Methods 46:1127–1143, 2017). The aim of this paper is to verify whether the designs presented by Filipiak and Markiewicz (2017) as universally optimal under fixed interference models remain universally optimal under models with random interference effects. Moreover, it is shown that for a specified covariance matrix of random interference effects, a universally optimal design under mixed interference models with block effects is universally optimal over a wider class of designs. In this paper the method presented by Filipiak and Markiewicz (Metrika 65:369–386, 2007) is extended and then applied to mixed interference models with or without block effects.

6.
This article considers estimation of the regression function $f$ in the fixed design model $Y(x_i)=f(x_i)+\epsilon(x_i)$, $i=1,\ldots,n$, using the Gasser and Müller kernel estimator. The point set $\{x_i\}_{i=1}^{n}\subset [0,1]$ constitutes the sampling design points, and the $\epsilon(x_i)$ are correlated errors. The error process $\epsilon$ is assumed to satisfy certain regularity conditions: namely, it has exactly $k$ ($=0,1,2,\ldots$) quadratic mean derivatives (q.m.d.). The quality of the estimation is measured by the mean squared error (MSE), for which asymptotic results are established. We find that the optimal bandwidth depends on the $(2k+1)$th mixed partial derivative of the autocovariance function along the diagonal of the unit square. Simulation results for the model of $k$th order integrated Brownian motion error are given in order to assess the effect of the regularity of this error process on the performance of the kernel estimator.
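A minimal sketch of the Gasser–Müller estimator itself (assuming NumPy/SciPy; i.i.d. errors and a fixed bandwidth are simplifications, since the abstract's focus is correlated errors and the optimal bandwidth):

```python
# Gasser-Mueller estimator on a fixed design:
# fhat(x) = sum_i Y_i * integral_{s_{i-1}}^{s_i} K_h(x - u) du,
# with s_i the midpoints between design points. With a Gaussian kernel the
# integral is a difference of normal CDFs.
import numpy as np
from scipy.stats import norm

n, h = 100, 0.05
x = np.linspace(0, 1, n)                       # fixed design points
f = lambda t: np.sin(2 * np.pi * t)
rng = np.random.default_rng(2)
y = f(x) + 0.2 * rng.normal(size=n)            # i.i.d. errors for simplicity

s = np.concatenate(([0.0], (x[:-1] + x[1:]) / 2, [1.0]))  # cell boundaries

def gasser_mueller(x0):
    # weight_i = Phi((x0 - s_{i-1}) / h) - Phi((x0 - s_i) / h)
    w = norm.cdf((x0 - s[:-1]) / h) - norm.cdf((x0 - s[1:]) / h)
    return w @ y

grid = np.linspace(0, 1, 200)
fhat = np.array([gasser_mueller(g) for g in grid])
print(np.mean((fhat - f(grid))**2))            # crude MSE on the grid
```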

7.
The paper provides one of the first applications of the double bootstrap procedure (Simar and Wilson 2007) in a two-stage estimation of the effect of environmental variables on non-parametric estimates of technical efficiency. This procedure enables consistent inference within models explaining efficiency scores, while simultaneously producing standard errors and confidence intervals for these efficiency scores. The application is to 88 livestock and 256 crop farms in the Czech Republic, split into individual and corporate farms.
Laure Latruffe

8.
This paper investigates the existence of heterogeneous technologies in the US commercial banking industry through the nondynamic panel threshold effects estimation technique proposed by Hansen (Econometrica 64:413–430, 1999; Econometrica 68:575–603, 2000a). We employ total assets as the threshold variable, which is typically considered a proxy for a bank's size in the banking literature. We modify the threshold effects model to allow for time-varying effects, modeled by a time polynomial of degree two as in the model of Cornwell et al. (J Econom 46:185–200, 1990). Threshold effects estimation allows us to sort banks into discrete groups based on their size in a structural and consistent manner. We determine seven such distinct technology groups within which banks are allowed to share the same technology parameters. We provide estimates of individual and group efficiency scores, as well as estimates of returns to scale and measures of technological change for each group. The presence of the threshold(s) is tested via the bootstrap procedure outlined in Hansen (Econometrica 64:413–430, 1999), and the relationship between bank size and efficiency ratios is investigated.
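A minimal sketch of the single-threshold step (assuming NumPy; data are simulated and names hypothetical): grid-search the threshold value that minimizes the sum of squared residuals when slopes are allowed to differ by regime, as in Hansen (1999); the panel fixed effects and the bootstrap threshold test are omitted.

```python
# Threshold estimation by concentrated least squares over a trimmed grid.
import numpy as np

rng = np.random.default_rng(3)
n = 500
q = rng.uniform(0, 10, n)                       # threshold variable (e.g. size)
x = rng.normal(size=n)
y = np.where(q <= 5.0, 0.5 * x, 2.0 * x) + rng.normal(size=n)

def ssr_at(gamma):
    # regime-specific slopes: x * 1{q <= gamma} and x * 1{q > gamma}
    X = np.column_stack([np.ones(n), x * (q <= gamma), x * (q > gamma)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

grid = np.quantile(q, np.linspace(0.05, 0.95, 200))  # trimmed grid
gamma_hat = min(grid, key=ssr_at)
print(gamma_hat)                                # close to the true value 5.0
```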

9.
The productive efficiency of a firm can be seen as composed of two parts, one persistent and one transient. The received empirical literature on the measurement of productive efficiency has paid relatively little attention to the difference between these two components. Ahn and Sickles (Econ Rev 19(4):461–492, 2000) suggested some approaches that pointed in this direction. The possibility was also raised in Greene (Health Econ 13(10):959–980, 2004. doi:10.1002/hec.938), who expressed some pessimism over the possibility of distinguishing the two empirically. Recently, Colombi (A skew normal stochastic frontier model for panel data, 2010) and Kumbhakar and Tsionas (J Appl Econ 29(1):110–132, 2012), in a milestone extension of the stochastic frontier methodology, have proposed a tractable model based on panel data that promises to provide separate estimates of the two components of efficiency. The approach developed in the original presentation proved very cumbersome to implement in practice. Colombi (2010) notes that FIML estimation of the model is ‘complex and time consuming.’ In a sequence of papers, Colombi (2010), Colombi et al. (A stochastic frontier model with short-run and long-run inefficiency random effects, 2011; J Prod Anal, 2014), Kumbhakar et al. (J Prod Anal 41(2):321–337, 2012) and Kumbhakar and Tsionas (2012) have suggested other strategies, including a four-step least squares method. The main point of this paper is that full maximum likelihood estimation of the model is neither complex nor time consuming. The extreme complexity of the log likelihood noted in Colombi (2010) and Colombi et al. (2011, 2014) is reduced by using simulation and exploiting the Butler and Moffitt (Econometrica 50:761–764, 1982) formulation. In this paper, we develop a practical full information maximum simulated likelihood estimator for the model. The approach is very effective and strikingly simple to apply, and it uses all of the sample distributional information to obtain the estimates. We also implement the panel data counterpart of the Jondrow et al. (J Econ 19(2–3):233–238, 1982) estimator for technical or cost inefficiency. The technique is applied in a study of the cost efficiency of Swiss railways.
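A minimal sketch of the simulation idea (assuming NumPy/SciPy; a plain normal random effect stands in for the skew-normal persistent/transient structure of the actual model): the individual likelihood, an integral over the persistent component, is approximated by an average over R fixed draws and then maximized.

```python
# Maximum simulated likelihood for a panel model with a persistent random
# component w_i: L_i = E_w[ prod_t f(y_it | w_i) ], approximated by averaging
# over R simulation draws. Illustrative stand-in for the model in the paper.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)
N, T, R = 100, 5, 200
x = rng.normal(size=(N, T))
w = 0.8 * rng.normal(size=(N, 1))               # persistent component
y = 1.0 + 0.5 * x + w + 0.3 * rng.normal(size=(N, T))

draws = rng.normal(size=(N, R, 1))              # fixed draws across iterations

def neg_sim_loglik(theta):
    b0, b1, lsw, lsv = theta
    sw, sv = np.exp(lsw), np.exp(lsv)           # log-scales keep sigmas > 0
    mu = b0 + b1 * x                            # (N, T)
    e = y[:, None, :] - mu[:, None, :] - sw * draws        # (N, R, T)
    percomb = norm.logpdf(e / sv).sum(axis=2) - T * np.log(sv)  # (N, R)
    # log of the simulated mean over draws, numerically stabilized
    m = percomb.max(axis=1, keepdims=True)
    lse = m[:, 0] + np.log(np.exp(percomb - m).mean(axis=1))
    return -lse.sum()

res = minimize(neg_sim_loglik, np.array([0.5, 0.0, 0.0, 0.0]), method="BFGS")
print(res.x[:2], np.exp(res.x[2:]))             # ~ (1.0, 0.5) and (0.8, 0.3)
```

Keeping the draws fixed across optimizer iterations is what makes the simulated likelihood a smooth function of the parameters.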

10.
Zhaoping Hong, Yuao Hu & Heng Lian, Metrika, 2013, 76(7):887–908
In this paper, we consider the problem of simultaneous variable selection and estimation for varying-coefficient partially linear models in a “small $n$, large $p$” setting, where the number of coefficients in the linear part diverges with sample size while the number of varying coefficients is fixed. A similar problem was considered in Lam and Fan (Ann Stat 36(5):2232–2260, 2008) based on kernel estimates for the nonparametric part, in which no variable selection was investigated and $p$ was assumed to be smaller than $n$. Here we use polynomial splines to approximate the nonparametric coefficients, which is more computationally expedient; we demonstrate the convergence rates as well as asymptotic normality of the linear coefficients, and further establish the oracle property of the SCAD-penalized estimator, which holds for $p$ almost as large as $\exp \{n^{1/2}\}$ under mild assumptions. Monte Carlo studies and a real data analysis are presented to demonstrate the finite sample behavior of the proposed estimator. Our theoretical and empirical investigations are actually carried out for generalized varying-coefficient partially linear models, including both Gaussian data and binary data as special cases.
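A minimal sketch of the SCAD ingredient (assuming NumPy; the local linear approximation update shown is one common way to use the penalty, not necessarily the algorithm of this paper):

```python
# The SCAD penalty derivative (Fan & Li) and a one-step local linear
# approximation (LLA) update, illustrated on a vector of raw coefficients.
import numpy as np

def scad_deriv(beta, lam, a=3.7):
    """Derivative p'_lambda(|beta|) of the SCAD penalty."""
    b = np.abs(beta)
    inner = lam * (b <= lam)
    outer = np.maximum(a * lam - b, 0.0) / (a - 1) * (b > lam)
    return inner + outer

def lla_threshold(z, lam, a=3.7):
    """One LLA step: soft-threshold z with the coefficient-specific
    weight scad_deriv(|z|)."""
    w = scad_deriv(z, lam, a)
    return np.sign(z) * np.maximum(np.abs(z) - w, 0.0)

z = np.array([-3.0, -1.2, -0.3, 0.1, 0.8, 2.5])
print(lla_threshold(z, lam=0.5))   # small coefficients -> 0, large ones untouched
```

The defining feature visible here is that sufficiently large coefficients escape shrinkage entirely, which is what delivers the oracle property.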

11.
We focus on the minimum distance density estimators \({\widehat{f}}_n\) of the true probability density \(f_0\) on the real line. Consistency of order \(n^{-1/2}\) in the (expected) L\(_1\)-norm of the minimum Kolmogorov distance estimator (MKE) is known if the degree of variations of the nonparametric family \(\mathcal {D}\) is finite. Using this result for the MKE, we prove that the minimum Lévy and minimum discrepancy distance estimators are consistent of order \(n^{-1/2}\) in the (expected) L\(_1\)-norm under the same assumptions. Computer simulations for these minimum distance estimators, together with the Cramér estimator, are performed, and the function \(s(n)=a_0+a_1\sqrt{n}\) is fitted to the L\(_1\)-errors of \({\widehat{f}}_n\), yielding the proportionality constant \(a_1\). Further, the (expected) L\(_1\)-consistency rate of the Kolmogorov estimator is studied under generalized assumptions based on an asymptotic domination relation. No usual continuity or differentiability conditions are needed.
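A minimal sketch of the MKE in a toy normal location-scale family (assuming NumPy/SciPy; as a rough analogue of the paper's fit of \(s(n)\), the simulated L\(_1\)-errors are regressed on \(1/\sqrt{n}\) to read off a proportionality constant):

```python
# Minimum Kolmogorov distance estimation: pick (mu, sigma) minimizing the
# sup distance between the empirical CDF and the model CDF, then check the
# n^{-1/2} L1-rate crudely by regressing L1 errors on 1/sqrt(n).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)

def kolmogorov_distance(theta, x):
    mu, sig = theta[0], np.exp(theta[1])
    xs = np.sort(x)
    F = norm.cdf(xs, mu, sig)
    n = len(xs)
    # sup |F_n - F| attained at the jump points of the empirical CDF
    return np.max(np.maximum(np.arange(1, n + 1) / n - F,
                             F - np.arange(0, n) / n))

def mke_l1_error(n, reps=30):
    errs, grid = [], np.linspace(-5, 5, 400)
    for _ in range(reps):
        x = rng.normal(0.0, 1.0, n)
        res = minimize(kolmogorov_distance, np.array([0.0, 0.0]),
                       args=(x,), method="Nelder-Mead")
        mu, sig = res.x[0], np.exp(res.x[1])
        errs.append(np.trapz(np.abs(norm.pdf(grid, mu, sig)
                                    - norm.pdf(grid)), grid))
    return np.mean(errs)

ns = np.array([50, 100, 200, 400])
errs = np.array([mke_l1_error(n) for n in ns])
a1, a0 = np.polyfit(1 / np.sqrt(ns), errs, 1)   # errs ~ a0 + a1 / sqrt(n)
print(a0, a1)
```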

12.
We review one method for estimating the modulus of continuity of a Schramm–Loewner evolution (SLE) curve in terms of the inverse Loewner map. Then we prove estimates about the distribution of the inverse Loewner map, which underpin the difficulty in bounding the modulus of continuity of SLE for $\kappa=8$. The main idea in the proof of these estimates is applying the Girsanov theorem to reduce the problem to estimates about one-dimensional Brownian motion.

13.
In this paper we argue that the standard approach for measuring output and productivity in the trade sector has become obsolete. The key problem is that changes in the prices of goods purchased for resale are not accounted for. We outline a consistent accounting framework for measuring trade productivity and provide new estimates, taking into account the purchase prices of goods sold in a double deflation procedure. We find strong productivity improvements in the UK and US compared to France, Germany and the Netherlands since the mid-1990s. This finding is robust across various productivity measurement models.
Marcel P. Timmer
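A toy worked example of the double deflation point (all numbers invented): the real trade margin is deflated sales minus deflated purchases, each with its own price index, rather than the nominal margin deflated by a single index.

```python
# Toy double deflation: the real trade margin is deflated sales minus
# deflated purchases (cost of goods sold), each with its own price index.
sales_nom = {2000: 1000.0, 2005: 1400.0}    # nominal sales
purch_nom = {2000: 700.0, 2005: 900.0}      # nominal purchases for resale
p_sales = {2000: 1.00, 2005: 1.10}          # sales price index
p_purch = {2000: 1.00, 2005: 0.95}          # purchase price index

for year in (2000, 2005):
    real_margin = sales_nom[year] / p_sales[year] - purch_nom[year] / p_purch[year]
    single_defl = (sales_nom[year] - purch_nom[year]) / p_sales[year]
    print(year, round(real_margin, 1), round(single_defl, 1))
# 2005: double deflation gives 1400/1.10 - 900/0.95 = 325.4, while single
# deflation gives 500/1.10 = 454.5 -- the two diverge whenever sales and
# purchase prices move differently.
```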

14.
A method of Chow (1983) and a method of Dagli and Taylor (1982) for solving and estimating linear simultaneous equations under rational expectations are compared. The latter solution is shown to be a special case of the former, in the sense of imposing a set of restrictions on the parameters of the former solution. Statistical methods to test the restrictions implicit in the latter solution are suggested. An illustrative model is provided to demonstrate the two methods; the Dagli–Taylor method is found to give inconsistent estimates when the restrictions are not met.

15.
We present eight existing projection methods and test their relative performance in estimating Supply and Use tables (SUTs) of the Netherlands and Spain. Some of the methods presented have received little attention in the literature, and some have been slightly revised to better deal with negative elements and preserve the signs of original matrix entries. We find that (G)RAS and the methods proposed by Harthoorn and van Dalen (1987) and Kuroda (1988) produce the best estimates for the data in question. Their relative success also suggests the stability of ratios of larger transactions.
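A minimal sketch of the plain RAS (biproportional) update for a non-negative matrix (assuming NumPy; the sign-preserving GRAS variant discussed above is not shown):

```python
# RAS: alternately rescale rows and columns of a prior matrix A0 until its
# row and column sums match the new totals u and v.
import numpy as np

def ras(A0, u, v, iters=500, tol=1e-10):
    A = A0.astype(float).copy()
    for _ in range(iters):
        A *= (u / A.sum(axis=1))[:, None]     # row scaling
        A *= (v / A.sum(axis=0))[None, :]     # column scaling
        if np.abs(A.sum(axis=1) - u).max() < tol:
            break
    return A

A0 = np.array([[10.0, 5.0], [3.0, 7.0]])      # base-year matrix
u = np.array([18.0, 12.0])                    # target row sums
v = np.array([16.0, 14.0])                    # target column sums (same total)
A = ras(A0, u, v)
print(A.round(3), A.sum(axis=1), A.sum(axis=0))
```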

16.
S. Bagchi, Metrika, 1987, 34(1):95–105
The E-optimality of the following designs, within the class of all proper and connected designs with given b, k and v under the mixed effects model, is established:
  (i) a group divisible design with λ2 = λ1 + 1;
  (ii) a group divisible design with λ1 = λ2 + 1 and group size 2;
  (iii) a linked block design;
  (iv) the dual of design (i);
  (v) the dual of design (ii).
All these designs are known to satisfy the same optimality property under the fixed effects model when k < v, while design (i) is known to be E-optimal even when k > v. From the results proved here, the E-optimality of designs (ii), (iii), (iv) and (v) under the fixed effects model when k > v also follows.

17.
In this paper, we present iterative or successive approximation methods for solving the coupled Hamilton–Jacobi–Isaacs equations (HJIEs) arising in nonzero-sum differential games for affine nonlinear systems. We particularly consider the ones arising in mixed \({\mathcal H}_{2}/{\mathcal H}_{\infty }\) control. However, the approach is perfectly general and can be applied to any others, including those arising in the N-player case. The convergence of the method is established under fairly mild assumptions, and examples are solved to demonstrate the utility of the method. The results are also specialized to the coupled algebraic Riccati equations arising typically in mixed \({\mathcal H}_{2}/{\mathcal H}_{\infty }\) linear control. In this case, a bound within which the optimal solution lies is established. Finally, based on the iterative approach developed, a local existence result for the solution of the coupled HJIEs is also established.

18.
19.
20.
A central claim of strategic HRM is the notion that the way a firm manages its workforce affects its corporate performance. In particular, ‘high performance human resource management’, a systematic approach to HR management consisting of internally consistent HR dimensions that develop the skill and motivation of the workforce, is considered to contribute to the ‘bottom line’ of companies. The benefits are attributed generally to ‘complementarities’ among the constituent dimensions. In the theoretical part of this paper we distinguish between three different processes resulting in such complementarities: reinforcement, flanking and compensation. These processes are exemplified for five areas of high performance human resource management: incentive systems, training, sharing arrangements, guidance and selective recruitment. In the empirical part of this paper we examine whether the effect at the employee level can be traced to the complementary relationships among the five high performance HR dimensions. The core hypothesis to be tested is that the complementarity effect of the high performance HR management system enhances employee performance over and above the sum of the effects of the five practices. This complementarity hypothesis is tested using a methodology for the test of systems effects suggested by Ichniowski et al. (1997). The data come from a matched establishment survey in two European countries, Ireland and the Netherlands, comprising nearly 400 establishments. Key findings are that the complementarity hypothesis is fully supported by the Irish data but rejected by the Dutch data.
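A minimal sketch of such a systems test (assuming NumPy and statsmodels; data are simulated and all names hypothetical): compare an additive model of the five practices against one that adds a full-system indicator, in the spirit of the Ichniowski et al. (1997) test.

```python
# Systems (complementarity) test: does a full-system indicator add explanatory
# power for performance beyond the five practices entered additively?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 400
P = rng.integers(0, 2, size=(n, 5)).astype(float)   # five HR practice dummies
system = (P.sum(axis=1) == 5).astype(float)         # all five in place
perf = 0.3 * P.sum(axis=1) + 1.0 * system + rng.normal(size=n)

X_add = sm.add_constant(P)                          # additive model
X_sys = sm.add_constant(np.column_stack([P, system]))
m_add = sm.OLS(perf, X_add).fit()
m_sys = sm.OLS(perf, X_sys).fit()
# F-test of the restricted (additive) against the unrestricted (system) model:
print(m_sys.compare_f_test(m_add))   # (F, p, df): complementarity if significant
```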

