Similar documents
 20 similar documents found.
1.
The existing models of mixed public–private school systems usually capture only the decreasing average cost faced by public schools, whereas empirical studies find evidence of decreasing average cost for private schools as well. Motivated by this, this paper studies an equilibrium model of a mixed public–private school system in which private schools also face decreasing average cost over enrollment. In the model, households, heterogeneous with respect to exogenously specified income and child's ability, choose between a public and a private school. The private school charges tuition, whereas the public school is free. Public school spending is financed by income tax revenue collected from all households, and the tax rate is determined by majority voting. A child's achievement depends on its ability and on education spending. Under the model's parametric assumptions, a joint lognormal distribution of income and ability and a Cobb–Douglas utility, a majority voting equilibrium is shown numerically to exist. The model is calibrated to match selected statistics from 2013 Turkish data. Using the calibrated model, we compare the mixed public–private benchmark with a pure public school system to understand the impact of shutting down some of the private schools in Turkey following the July 15 coup attempt. We find that, in a pure public school system, mean achievement after high school is \(0.039\%\) higher and the variance of achievement is \(0.013\%\) lower.
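For illustration only, a minimal parameterization consistent with the ingredients listed in the abstract; the functional forms, the exponents \(\gamma\) and \(\delta\), and the spending variable \(e_s\) are assumptions introduced here for exposition and are not taken from the paper.

```latex
% Illustrative household problem (functional forms assumed, not the paper's).
\[
\max_{s \in \{\text{public},\,\text{private}\}} \; U = c^{1-\gamma} q^{\gamma},
\qquad
c =
\begin{cases}
(1-\tau)\,y       & \text{public school (free, tax financed)}\\[2pt]
(1-\tau)\,y - p   & \text{private school (tuition } p\text{)}
\end{cases}
\]
\[
q = a^{\delta} e_s^{\,1-\delta},
\qquad
(\ln y, \ln a) \sim \mathcal{N}(\mu, \Sigma),
\]
% where q is the child's achievement, a the child's ability, e_s per-student
% spending in the chosen school, y household income, and tau the income tax
% rate determined by majority voting.
```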

2.
We address the issue of equivalence of primal and dual measures of scale efficiency in a general production-theory framework. We find that particular types of homotheticity of technologies, which we refer to here as scale homotheticity, provide a necessary and sufficient condition for such equivalence. We also identify the case in which scale homotheticity is equivalent to the homothetic structures of Shephard (Theory of Cost and Production Functions, Princeton Studies in Mathematical Economics, Princeton University Press, Princeton, 1970).

3.
4.
A distribution function F is a generalized distorted distribution of the distribution functions \(F_1,\ldots ,F_n\) if \(F=Q(F_1,\ldots ,F_n)\) for an increasing continuous distortion function Q such that \(Q(0,\ldots ,0)=0\) and \(Q(1,\ldots ,1)=1\). In this paper, necessary and sufficient conditions for the stochastic (ST) and the hazard rate (HR) orderings of generalized distorted distributions are provided when the distributions \(F_1,\ldots ,F_n\) are ordered. These results are used to obtain distribution-free ordering properties for coherent systems with heterogeneous components. In particular, we determine all the ST and HR orderings for coherent systems with one to three independent components. We also compare systems with dependent components. The results on distorted distributions are also used to obtain comparisons of finite mixtures.
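For context, two textbook examples of distortion functions of the kind described above, for coherent systems with two independent components; these are standard illustrations, not results specific to the paper.

```latex
% Parallel system: T = max(T_1, T_2), so F_T = F_1 F_2 and
\[
Q(u_1, u_2) = u_1 u_2 .
\]
% Series system: T = min(T_1, T_2), so \bar F_T = \bar F_1 \bar F_2 and
\[
Q(u_1, u_2) = u_1 + u_2 - u_1 u_2 .
\]
% Both distortions are continuous, increasing, and satisfy
% Q(0,0) = 0 and Q(1,1) = 1.
```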

5.
Detecting and modeling structural changes in time series models has attracted great attention. However, relatively little effort has been devoted to testing for structural changes in panel data models, despite their increasing importance in economics and finance. In this paper, we propose a new approach to testing for structural changes in panel data models. Unlike the bulk of the literature on structural changes, which focuses on the detection of abrupt changes, we consider smooth structural changes, for which model parameters are unknown deterministic smooth functions of time except at a finite number of time points. We use a nonparametric local smoothing method to consistently estimate the smoothly changing parameters and develop two consistent tests for smooth structural changes in panel data models. The first test checks whether all model parameters are stable over time. The second test checks for a potential time-varying interaction while allowing for a common trend. Both tests have an asymptotic N(0,1) distribution under the null hypothesis of parameter constancy and are consistent against a vast class of smooth structural changes, as well as abrupt structural breaks at possibly unknown break points. Simulation studies show that the tests provide reliable inference in finite samples, and two empirical examples, a cross-country growth model and a capital structure model, are discussed.
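A minimal sketch of the kind of local smoothing the abstract alludes to, estimating a smoothly time-varying coefficient vector in a panel by kernel-weighted least squares; the Epanechnikov kernel, the bandwidth, and all names below are illustrative assumptions, not the paper's estimator or its test statistics.

```python
import numpy as np

def local_constant_beta(y, X, h=0.1):
    """Kernel-weighted (local-constant) estimates of time-varying coefficients
    beta(t/T) in a panel y[i, t] = X[i, t, :] @ beta(t/T) + error.

    Illustrative sketch only: the kernel, bandwidth, and pooling scheme are
    assumptions, not the paper's estimator or its test statistics.
    """
    N, T, p = X.shape
    grid = np.arange(T) / T
    betas = np.empty((T, p))
    Xs = X.transpose(1, 0, 2).reshape(T * N, p)   # stack observations time-major
    ys = y.T.reshape(T * N)
    for r, tau in enumerate(grid):
        u = (grid - tau) / h
        w = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)  # Epanechnikov
        W = np.repeat(w, N)                       # same weight for all units at t
        XtW = Xs.T * W
        betas[r] = np.linalg.solve(XtW @ Xs, XtW @ ys)
    return betas

# Example: N = 50 units, T = 100 periods, a single slope that drifts smoothly.
rng = np.random.default_rng(0)
N, T = 50, 100
X = rng.standard_normal((N, T, 1))
beta_true = 1.0 + np.sin(2 * np.pi * np.arange(T) / T)
y = X[:, :, 0] * beta_true + 0.5 * rng.standard_normal((N, T))
beta_hat = local_constant_beta(y, X, h=0.15)      # beta_hat[:, 0] tracks beta_true
```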

6.
We provide a new test for equality of two symmetric positive-definite matrices that leads to a convenient mechanism for testing specification using the information matrix equality or the sandwich asymptotic covariance matrix of the GMM estimator. The test relies on a new characterization of equality between two k-dimensional symmetric positive-definite matrices A and B: the traces of \(AB^{-1}\) and \(BA^{-1}\) are both equal to k if and only if A = B. Using this simple criterion, we introduce a class of omnibus test statistics for equality and examine their null and local alternative approximations under some mild regularity conditions. A preferred test in the class with good omni-directional power is recommended for practical work. Monte Carlo experiments are conducted to explore performance characteristics under the null and under local as well as fixed alternatives. The test is applicable in many settings, including GMM estimation, SVAR models, and high-dimensional variance matrix settings.
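A small numerical sketch of the trace characterization quoted above; the quantity below is only the raw criterion, not the paper's studentized omnibus tests, and the variable names are illustrative.

```python
import numpy as np

def trace_distance(A, B):
    """d(A, B) = tr(A B^{-1}) + tr(B A^{-1}) - 2k for k x k SPD matrices.

    By the characterization quoted above, d >= 0 with equality iff A == B,
    so d is a natural starting point for an equality statistic. Illustrative
    sketch; the paper's test statistics and their limit theory differ.
    """
    k = A.shape[0]
    return np.trace(A @ np.linalg.inv(B)) + np.trace(B @ np.linalg.inv(A)) - 2 * k

# Tiny check: the criterion vanishes when A == B and is positive otherwise.
A = np.array([[2.0, 0.3], [0.3, 1.0]])
print(trace_distance(A, A))           # ~0
print(trace_distance(A, np.eye(2)))   # > 0
```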

7.
In this paper, the robust game model proposed by Aghassi and Bertsimas (Math Program Ser B 107:231–273, 2006) for matrix games is extended to games with a broader class of payoff functions. This is a distribution-free model of incomplete information for finite games in which players adopt a robust-optimization approach to contend with payoff uncertainty. Such players, called robust players, seek the maximum guaranteed payoff given the strategies of the others. Consistent with this decision criterion, a set of strategies is an equilibrium, called a robust-optimization equilibrium, if each player's strategy is a best response to the other players' strategies under the worst-case scenario. The aim of the paper is twofold. In the first part, we provide an existence result for robust-optimization equilibria for a quite general class of games, and we prove that there exists a suitable value \(\epsilon\) such that the robust-optimization equilibria form a subset of the \(\epsilon\)-Nash equilibria of the nominal version of the robust game, i.e., the version without uncertainty. This provides a theoretical motivation for the robust approach, as it offers new insight and a rational-agent motivation for the \(\epsilon\)-Nash equilibrium. In the second part, we propose an application of the theory to a classical Cournot duopoly model, which shows significant differences between the robust game and its nominal version.
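Schematically, the equilibrium notion described above can be written as follows; the payoff functions \(f_i\), strategy sets \(S_i\), and uncertainty sets \(\mathcal{U}_i\) are generic placeholders rather than the paper's exact notation.

```latex
% Robust best response: player i evaluates each strategy by its worst-case
% payoff over the uncertainty set. A profile (x_1^*, ..., x_n^*) is a
% robust-optimization equilibrium when, for every player i,
\[
x_i^{*} \in \arg\max_{x_i \in S_i}\;
\min_{\theta_i \in \mathcal{U}_i} f_i\bigl(x_i,\, x_{-i}^{*};\, \theta_i\bigr).
\]
```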

8.
In this work, the ranked set sampling technique is applied to estimate the scale parameter $\alpha$ of a log-logistic distribution in a situation where the units in a sample can be ordered by a judgement method without error. We evaluate the Fisher information contained in the order statistics arising from this distribution and observe that the median of a random sample contains the maximum information about the parameter $\alpha$. Accordingly, we use median ranked set sampling to estimate $\alpha$. We further carry out multistage median ranked set sampling to estimate $\alpha$ with improved precision. Suppose instead that it is not possible to rank the units in a sample by the judgement method without error, but the units can be ordered based on an auxiliary variable $Z$ such that $(X, Z)$ has a Morgenstern type bivariate log-logistic distribution (MTBLLD). In this situation, we derive the Fisher information contained in the concomitant of the rth order statistic of a random sample of size $n$ from the MTBLLD, identify the concomitants that possess the largest amount of Fisher information, define an unbalanced ranked set sampling scheme utilizing those units, and thereby propose an estimator of $\alpha$ based on the measurements made on those units in the ranked set sample.
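A minimal simulation sketch of one-stage median ranked set sampling under the error-free judgement ranking assumed above; the sampler, the set size, and the log-logistic parameterization are illustrative assumptions, and the paper's estimator of $\alpha$ is not reproduced.

```python
import numpy as np

def median_rss(draw, n_cycles, set_size, rng):
    """One-stage median ranked set sample.

    In each cycle, `set_size` sets of `set_size` units are drawn and only the
    unit ranked as the median of each set is measured. Ranking is simulated
    here as error-free by sorting the actual draws (the perfect-judgement
    case). `draw(size, rng)` is a user-supplied sampler.
    """
    m = set_size // 2                      # median position for odd set_size
    sample = []
    for _ in range(n_cycles):
        for _ in range(set_size):
            units = draw(set_size, rng)
            sample.append(np.sort(units)[m])
    return np.array(sample)

# Example: log-logistic draws with scale alpha and shape beta (illustrative).
def loglogistic(size, rng, alpha=2.0, beta=3.0):
    u = rng.uniform(size=size)
    return alpha * (u / (1.0 - u)) ** (1.0 / beta)

rng = np.random.default_rng(0)
x = median_rss(loglogistic, n_cycles=50, set_size=5, rng=rng)
```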

9.
Manoj Chacko, Metrika, 2017, 80(3): 333–349
In this paper, we consider Bayes estimation based on a ranked set sample when ranking is imperfect, in which units are ranked based on measurements of an easily and exactly measurable auxiliary variable X that is correlated with the study variable Y. Bayes estimators under the squared error loss function and the LINEX loss function for the mean of the study variable Y, when (X, Y) follows a Morgenstern type bivariate exponential distribution, are obtained based on both the usual ranked set sample and an extreme ranked set sample. The estimation procedures developed in this paper are illustrated using simulation studies and a real data set.

10.
In this paper we study convolution residuals: if $X_1,X_2,\ldots ,X_n$ are independent random variables, we study the distributions, and the properties, of the sums $\sum_{i=1}^l X_i - t$ given that $\sum_{i=1}^k X_i > t$, where $t\in \mathbb{R}$ and $1\le k\le l\le n$. Various stochastic orders among convolution residuals, based on observations from either one or two samples, are derived. As a consequence, computable bounds on the survival functions and on the expected values of convolution residuals are obtained. Some applications in reliability theory and queueing theory are described.

11.
12.
Ya. Yu. Nikitin, Metrika, 2018, 81(6): 609–618
We consider two scale-free tests of normality based on the characterization of the symmetric normal law by Ahsanullah et al. (Normal and Student's t-Distributions and Their Applications, Springer, Berlin, 2014). Both tests have a U-empirical structure, but the first is of integral type, while the second is of Kolmogorov type. We discuss the limiting behavior of the test statistics and calculate their local exact Bahadur efficiency for location, skew, and contamination alternatives.

13.
14.
The BDS test is the best-known correlation-integral-based test, and it is now an important part of most standard econometric data analysis software packages. The test depends on the proximity parameter ($\varepsilon$) and the embedding dimension ($m$), both of which are chosen by the researcher. Although several studies (e.g., Kanzler, Very Fast and Correctly Sized Estimation of the BDS Statistic, Department of Economics, Oxford University, Oxford, 1999) have been carried out to provide an adequate selection of the proximity parameter, no relevant research has yet been done on $m$. In practice, researchers usually compute the BDS statistic for several values of $m$, but sometimes the results are contradictory, because some of them accept the null and others reject it. This paper aims to fill this gap. To that end, we propose a new, simple yet powerful aggregate test for independence, based on the BDS outputs from a given data set, that allows all of the information contained in several embedding dimensions to be considered without the ambiguity of the well-known BDS tests.
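A minimal sketch of the correlation integral on which BDS-type statistics are built, to make the roles of $\varepsilon$ and $m$ concrete; the aggregation across embedding dimensions proposed in the paper is not reproduced here, and the example values are illustrative.

```python
import numpy as np

def correlation_integral(x, m, eps):
    """C_m(eps): fraction of pairs of m-histories of the series x that lie
    within eps of each other in the sup norm. This is the building block of
    the BDS statistic; the paper's aggregate test over several m is not
    reproduced here (illustrative sketch only).
    """
    n = len(x) - m + 1
    hist = np.column_stack([x[i:i + n] for i in range(m)])       # m-histories
    d = np.max(np.abs(hist[:, None, :] - hist[None, :, :]), axis=2)
    iu = np.triu_indices(n, k=1)
    return np.mean(d[iu] <= eps)

# Under i.i.d. data, C_m(eps) is close to C_1(eps)**m; BDS-type statistics
# measure the (suitably scaled) discrepancy between the two.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
eps = 0.5 * np.std(x)
print(correlation_integral(x, 2, eps), correlation_integral(x, 1, eps) ** 2)
```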

15.
In a model à la Venables (1996) (Venables, A. J. (1996) Equilibrium locations of vertically linked industries, International Economic Review, 37, 341–359, doi: 10.2307/2527327), we distinguish two kinds of intermediate goods: complex goods that entail endogenous coordination costs, and simple goods that do not. Coordination costs depend on geographical distance and on the number of intermediate goods used in the production process. In the final stage of integration, there are two possible spatial configurations: (1) a symmetric configuration and (2) a partial core–periphery equilibrium, comprising a core region that produces the final and complex intermediate goods, and a periphery that produces simple intermediate goods. We discuss some policy implications of this multiple-equilibria outcome.

Coordination costs and the geography of production

16.
Motivated by the effect hierarchy principle, Zhang et al. (Stat Sinica 18:1689–1705, 2008) introduced the aliased effect number pattern (AENP) for regular fractional factorial designs and, based on the new pattern, proposed a general minimum lower-order confounding (GMC) criterion for choosing optimal $2^{n-m}$ designs. They also proved that most existing criteria can be obtained as functions of the AENP. In this paper we propose a simple method for calculating the AENP. The method is much easier than previous approaches, since the calculation uses only the design matrix. All 128-run GMC designs with the number of factors ranging from 8 to 32 are provided for practical use.

17.
Schwartz (Noûs, 7, 1972, Definition 3) introduces a generalization of the Condorcet criterion, the classical approach to rational choice in the presence of cycles, and defines the Schwartz set. Deb (J Econ Theory 16:103–110, 1977) shows that the Schwartz set consists of the maximal elements according to the transitive closure of the asymmetric part of a binary relation corresponding to a choice process or representing the decision maker's preferences. This note provides a short and simple proof of Deb's theorem on the characterization of the Schwartz set.
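A small sketch of Deb's characterization as stated above: compute the transitive closure of the asymmetric part of the relation and keep its maximal elements; the function names and the toy three-alternative cycle are illustrative.

```python
def schwartz_set(alternatives, beats):
    """Maximal elements of the transitive closure of the asymmetric part of a
    binary relation, following Deb's characterization quoted above.

    `beats(a, b)` is True when a is strictly preferred/chosen over b (the
    asymmetric part P of the relation). Illustrative sketch only.
    """
    alts = list(alternatives)
    n = len(alts)
    # Reachability under P, computed Floyd-Warshall style (transitive closure).
    reach = [[beats(alts[i], alts[j]) for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    # a is maximal if no b reaches a without a reaching b back.
    return [alts[i] for i in range(n)
            if not any(reach[j][i] and not reach[i][j] for j in range(n))]

# A three-cycle a > b > c > a: the Schwartz set is the whole cycle.
P = {("a", "b"), ("b", "c"), ("c", "a")}
print(schwartz_set("abc", lambda x, y: (x, y) in P))   # ['a', 'b', 'c']
```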

18.
Logistic models offer an alternative general solution for computing the point of subjective equality (Vidotto et al., I modelli simple logistic e rating scale nella determinazione del punto di eguaglianza soggettivo: una nuova prospettiva per il metodo degli stimoli costanti, 1996) when the Method of Constant Stimuli is used (Bock and Jones, The Measurement and Prediction of Judgment and Choice, 1968). The extended logistic models (Andrich, Appl. Psychol. Meas. II:581–594, 1978) provide the theoretical framework for computing individual and general thresholds when the method of constant stimuli is applied as a forced choice with more than two alternatives. As an example of the advantages of this procedure, we present a data set derived from an experiment on rhythm perception (Maestrini, La percezione del cambiamento nei ritmi uditivi, 2003), in which two groups of experimental and naïve subjects were asked to judge whether the rhythm they heard was constant, accelerated, or decelerated. We computed individual and general thresholds differentiating constancy from both acceleration and deceleration for all the experimental conditions. The main advantage of this solution over traditional psychophysical techniques is not only the better estimates of the individual point of subjective equality; a further improvement is the availability of fit tests to verify the agreement between the model and the data, for both stimuli and subjects.

19.
The behaviour of the goodness-of-fit procedure for normality based on weighted integrals of the empirical characteristic function, discussed for i.i.d. data, for instance, in Epps and Pulley (Biometrika 70:723–726, 1983), is considered here in the context of ranked set sampling (RSS) data. In the RSS context, we obtain the limiting distribution of the empirical characteristic process and perform a power study, against a broad set of alternatives, that enables an evaluation of the gain in power that occurs when a simple random sample is replaced by RSS data. The adaptation of the results obtained in the Gaussian RSS setting to other important location-scale families is also discussed.
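A minimal sketch of an Epps–Pulley-type statistic of the form described above, computed for a simple random sample by direct numerical integration of the weighted squared distance between the empirical characteristic function of the standardized data and the standard normal characteristic function; the Gaussian weight, the grid, and the function name are assumptions, and the RSS version and its limit theory are not reproduced.

```python
import numpy as np

def ecf_normality_statistic(x, beta=1.0, t_max=10.0, n_grid=2001):
    """n * integral of |phi_n(t) - exp(-t^2/2)|^2 w(t) dt with a Gaussian
    weight w(t) = exp(-beta^2 t^2 / 2), evaluated on the standardized sample
    by trapezoidal quadrature. Illustrative sketch of an Epps--Pulley-type
    statistic; the weight, grid, and any RSS-specific theory are assumptions.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    y = (x - x.mean()) / x.std(ddof=1)                  # standardize
    t = np.linspace(-t_max, t_max, n_grid)
    ecf = np.exp(1j * np.outer(t, y)).mean(axis=1)      # empirical CF phi_n(t)
    diff2 = np.abs(ecf - np.exp(-t**2 / 2)) ** 2
    w = np.exp(-beta**2 * t**2 / 2)
    return n * np.trapz(diff2 * w, t)

rng = np.random.default_rng(0)
print(ecf_normality_statistic(rng.standard_normal(200)))   # small under normality
print(ecf_normality_statistic(rng.exponential(size=200)))  # larger under skewness
```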

20.
The purpose of this note is twofold. First, we survey the study of the percolation phase transition on the Hamming hypercube $\{0,1\}^{m}$ obtained in a series of papers (Borgs et al. in Random Struct Algorithms 27:137–184, 2005; Borgs et al. in Ann Probab 33:1886–1944, 2005; Borgs et al. in Combinatorica 26:395–410, 2006; van der Hofstad and Nachmias in Hypercube Percolation, preprint, 2012). Second, we explain how this study can be carried out without the use of the so-called “lace expansion” technique. To that end, we provide a novel, simple proof that the triangle condition holds at the critical probability.
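For reference, the triangle diagram involved in the triangle condition mentioned above, stated in terms of the two-point function; this is a standard schematic form, and the precise finite-size version used on the hypercube in the cited papers may differ.

```latex
% Triangle diagram at the critical probability p_c (schematic form):
\[
\nabla_{p_c}
  \;=\; \sum_{x,\,y \in \{0,1\}^{m}}
        \tau_{p_c}(0,x)\,\tau_{p_c}(x,y)\,\tau_{p_c}(y,0),
\]
% where tau_p(u,v) is the probability that u and v lie in the same open
% cluster under bond percolation with parameter p. The triangle condition
% requires this diagram to remain suitably bounded, which drives the
% mean-field behaviour of the phase transition.
```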
