20 similar documents found; search took 15 ms
1.
Sizeable gender differences in employment rates are observed in many countries. Sample selection into the workforce might therefore be a relevant issue when estimating gender wage gaps. We propose a semi-parametric estimator of densities in the presence of covariates which incorporates sample selection. We describe a simulation algorithm to implement counterfactual comparisons of densities. The proposed methodology is used to investigate the gender wage gap in Italy. We find that, when sample selection is taken into account, the gender wage gap widens, especially at the bottom of the wage distribution.
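The density comparison at the heart of such an exercise can be sketched minimally with kernel density estimates. This toy construction is our own (simulated log wages, illustrative bandwidth) and ignores the selection correction that is the paper's actual contribution:

```python
import numpy as np

# Toy sketch: compare log-wage densities of two groups on a common grid.
# Simulated data and bandwidth are illustrative; no selection correction.
def kde(sample, grid, h):
    # Gaussian kernel density estimate evaluated on `grid`
    z = (grid[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * z**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
wages_m = rng.normal(2.5, 0.5, size=2000)   # simulated "male" log wages
wages_f = rng.normal(2.3, 0.5, size=2000)   # simulated "female" log wages
grid = np.linspace(0.5, 4.5, 200)
gap = kde(wages_m, grid, 0.15) - kde(wages_f, grid, 0.15)
```

The paper's counterfactual exercise goes further, comparing such densities after adjusting one group's distribution for selection into employment.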
2.
An implicit enumeration algorithm is defined to obtain solutions to the commercial bank check processing encoder scheduling problem. The specific application is of particular interest because a significant factor in determining optimality is the float costs associated with checks which are unprocessed and unavailable for presentation at check clearing deadlines, thereby making the timing of the activity of crucial importance. A one-day time horizon is employed to reflect those situations where banks have the scheduling flexibility afforded by part-time and/or temporary help in addition to a complement of full-time operators. Comparisons are made with other suggested approaches to daily encoder scheduling. Results indicate that dynamic programming can be an attractive methodology to attack this complex problem.
3.
This paper introduces a numerical method for solving concave continuous state dynamic programming problems which is based on a pair of polyhedral approximations of concave functions. The method is globally convergent and produces computable upper and lower bounds on the value function which can in theory be made arbitrarily tight. This is true regardless of the pattern of binding constraints, the smoothness of model primitives, and the dimensionality and rectangularity of the state space. We illustrate the method's performance using an optimal firm management problem subject to credit constraints and partial investment irreversibilities.
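The underlying sandwiching idea can be illustrated in one dimension (this is our own sketch, not the paper's algorithm): for a concave function, tangent lines bound it from above while chords through sampled points bound it from below, giving computable polyhedral envelopes.

```python
import numpy as np

# Polyhedral bounds on a concave f: min over tangents is an upper envelope,
# piecewise-linear interpolation of sampled points is a lower envelope.
def polyhedral_bounds(f, df, nodes, x):
    upper = np.min([f(g) + df(g) * (x - g) for g in nodes], axis=0)
    lower = np.interp(x, nodes, [f(g) for g in nodes])
    return lower, upper

f = lambda t: -t**2                  # concave test function
df = lambda t: -2.0 * t              # its derivative
nodes = np.linspace(-1.0, 1.0, 5)
x = np.linspace(-1.0, 1.0, 101)
lower, upper = polyhedral_bounds(f, df, nodes, x)
```

Refining the node set tightens both envelopes, which is the mechanism behind the "arbitrarily tight" bounds on the value function.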
4.
The dynamic programming approach for a family of optimal investment models with vintage capital is developed here. The problem falls into the class of infinite-horizon optimal control problems for PDEs with age structure that have been studied in various papers (12, 11, 33 and 35), either in cases when explicit solutions can be found or using Maximum Principle techniques.
5.
Minimum Kolmogorov distance estimates of arbitrary parameters are considered. They are shown to be strongly consistent if the parameter space metric is topologically weaker than the metric induced by the Kolmogorov distance of distributions from the statistical model. If the parameter space metric can be locally uniformly upper-bounded by the induced metric, then these estimates are shown to be consistent of order n^(-1/2). Similar results are proved for minimum Kolmogorov distance estimates of densities from parametrized families, where the consistency is considered in the L1-norm. The presented conditions for the existence, consistency, and consistency of order n^(-1/2) are much weaker than those established in the literature for estimates with similar properties. It is shown that these assumptions are satisfied, e.g., by all location and scale models with parent distributions different from Dirac, and by all standard exponential models.
Supported by the scientific exchange program between the Hungarian Academy of Sciences and the Royal Belgian Academy of Sciences, and by GACR grant 201/93/0232.
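A minimum Kolmogorov distance estimate can be sketched directly from its definition. The grid-search implementation below is our own illustration for a logistic location model (sample size and grid are arbitrary choices):

```python
import numpy as np

# Kolmogorov distance between the empirical CDF of `sample` and a model CDF.
def kolmogorov_dist(sample, cdf):
    x = np.sort(sample)
    n = x.size
    F = cdf(x)
    # check both sides of each jump of the empirical CDF
    return max(np.max(np.abs(F - np.arange(1, n + 1) / n)),
               np.max(np.abs(F - np.arange(0, n) / n)))

# Minimum-distance estimate of a location parameter by grid search.
def min_kd_location(sample, base_cdf, grid):
    dists = [kolmogorov_dist(sample, lambda v: base_cdf(v - t)) for t in grid]
    return grid[int(np.argmin(dists))]

rng = np.random.default_rng(0)
sample = rng.logistic(loc=2.0, size=400)
logistic_cdf = lambda v: 1.0 / (1.0 + np.exp(-v))
theta_hat = min_kd_location(sample, logistic_cdf, np.linspace(0.0, 4.0, 401))
```

The logistic model is one of the location families covered by the paper's consistency conditions.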
6.
Giri Kumar Tayi, Journal of Operations Management, 1985, 5(2): 237-246
The traditional quality control approach based on statistical tools has been very useful and effective when output and input qualities can be defined in terms of a single characteristic. However, in process industries such as paper, the output quality is defined in terms of two or more distinct characteristics; hence, reducing the deviation of one output characteristic from its permissible limits could result in forcing other output and/or input characteristics to deviate from their respective limits. Compounding this phenomenon is the fact that most of these industries produce substantial amounts of pollutants whose characteristics are a function of the input and output characteristics. Thus, with increasing costs of waste treatment and stringent pollution standards, there arises a notion of a trade-off between attaining market-specified output characteristics and meeting federally regulated pollution standards. In this article a general process quality control problem has been formulated that reflects the above trade-off in terms of both a linear and a polynomial goal programming problem. Major advantages and differences between the two formulations are highlighted and illustrated with a practical example drawn from the paper industry. Three separate cases, each with different priorities assigned to the output, pollutant and input characteristics, are developed and solved under both formulations. Based on the analysis it is observed that the different solutions that result are contingent on the assumptions concerning the priorities associated with each goal and the manner by which one chooses to incorporate trade-offs between goals in the objective function. Additionally, it is found that the solutions obtained under the polynomial goal programming formulation are more conducive to implementation in practical quality control contexts.
7.
Howard E. Thompson, Managerial and Decision Economics, 1985, 6(3): 132-140
The purpose of this paper is to illustrate a simple technique of estimating the cost of equity capital as well as a statistical measure of its reliability. The approach specifies a probability structure that is readily estimated from company dividend data, and from that structure both the estimated cost of capital and its standard error are calculated.
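The paper's probability structure is specific, but the generic dividend-based calculation can be sketched with the Gordon growth model, cost of equity k = D1/P + g, together with a simple standard error for the estimated growth rate. The dividend figures and price below are hypothetical:

```python
import numpy as np

# Toy dividend history (hypothetical figures) and current share price.
dividends = np.array([1.00, 1.06, 1.12, 1.19, 1.27, 1.34])
price = 30.0

growth = np.diff(np.log(dividends))                 # log dividend growth rates
g = growth.mean()
se_g = growth.std(ddof=1) / np.sqrt(len(growth))    # standard error of mean growth

# Gordon model: k = D1/P + g, with next dividend D1 = D0 * e^g
k = dividends[-1] * np.exp(g) / price + np.expm1(g)
```

Because k is a smooth function of g, the standard error of g translates directly into a reliability measure for the cost-of-equity estimate, which is the kind of quantity the paper formalizes.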
9.
A goal programming approach to public investment decisions: A case study of rural roads in Indonesia
The development planner must often face complex problems with multiple, conflicting objectives. Goal programming provides a general methodology for solving such problems. The tool is applied here to aid in the selection of rural road projects in the Indonesian Rural Works Program. Selection criteria are formalized into a set of nineteen goals which form the basis for a goal programming model. Changes in priority levels of goals and weights are used to analyze the respective effects upon the spatial distribution of investments. The approach is applicable to a wide range of problems and a variety of sensitivity analyses. Despite clear advantages, several drawbacks must be noted. First, the application of the methodology, given its degree of sophistication, is limited to a central decision making unit which has access to appropriate software. Second, the technique assumes that the planner has the ability to formulate alternative actions and consequences in a quantifiable expression.
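The mechanics of a weighted goal program can be sketched with a two-goal toy model (illustrative numbers, not the Indonesian road data): each goal gets a pair of deviation variables, and the objective minimizes a weighted sum of deviations.

```python
import numpy as np
from scipy.optimize import linprog

# Toy weighted goal program. Each goal a_i.x + d_i_minus - d_i_plus = target_i
# introduces deviation variables; goal 1 is weighted five times goal 2.
# Decision vector: [x1, x2, d1-, d1+, d2-, d2+]
A_eq = [[3, 2, 1, -1, 0, 0],    # goal 1: 3*x1 + 2*x2 -> target 12
        [1, 4, 0, 0, 1, -1]]    # goal 2:   x1 + 4*x2 -> target 10
b_eq = [12, 10]
cost = [0, 0, 5, 5, 1, 1]       # weights on the deviation variables
res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6)
```

In this toy both goals happen to be simultaneously attainable (x1 = 2.8, x2 = 1.8, all deviations zero); changing priorities or weights, as the paper does, determines which goal absorbs the deviation when they conflict.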
10.
This article investigates a two-way ANOVA model with interactions assuming that the vector of error variables possesses a general spherically symmetric distribution instead of a multivariate normal one. Via a geometric approach we study a test for the usual hypothesis of non-interaction under this general assumption. Moreover, based on a certain geometric representation formula, we establish exponential large deviation rates of the least squares estimators in the above model for a specific class of spherical distributions. This research was partially supported by a special research grant from the Alexander von Humboldt-Stiftung, Bonn, F.R. Germany.
11.
The technique of Monte Carlo (MC) tests [Dwass (1957, Annals of Mathematical Statistics 28, 181–187); Barnard (1963, Journal of the Royal Statistical Society, Series B 25, 294)] provides a simple method for building exact tests from statistics whose finite sample distribution is intractable but can be simulated (when no nuisance parameter is involved). We extend this method in two ways: first, by allowing for MC tests based on exchangeable possibly discrete test statistics; second, by generalizing it to statistics whose null distribution involves nuisance parameters [maximized MC (MMC) tests]. Simplified asymptotically justified versions of the MMC method are also proposed: these provide a simple way of improving standard asymptotics and dealing with nonstandard asymptotics.
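The basic MC test (the no-nuisance-parameter case the extensions build on) is short enough to sketch: the rank of the observed statistic among N simulated null statistics yields a p-value that is exact for finite N. The example statistic and sample size below are our own choices:

```python
import numpy as np

# Basic Monte Carlo test in the spirit of Dwass (1957) / Barnard (1963):
# p = (1 + #{simulated >= observed}) / (N + 1).
def mc_pvalue(stat_obs, simulate_stat, N=99, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    sims = np.array([simulate_stat(rng) for _ in range(N)])
    return (1 + np.sum(sims >= stat_obs)) / (N + 1)

# Example: test H0: mean = 0 for an N(0,1) sample, statistic |sample mean|.
rng = np.random.default_rng(42)
data = rng.normal(size=30)
p = mc_pvalue(abs(data.mean()), lambda r: abs(r.normal(size=30).mean()), N=99, rng=rng)
```

The MMC extension in the paper replaces this single simulation with a maximization of the p-value over the nuisance parameter space.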
13.
Statistica Neerlandica, 1961, 15(3): 239-242
Summary
The distribution of a sample quantile is, as is well known, asymptotically normal. The proofs in most standard textbooks focus on the probability density of this distribution. Here a transparent proof is given, starting from the [cumulative] distribution function and connecting to the De Moivre-Laplace theorem on the asymptotic normality of the binomial distribution. It also leads to a suggestion for better normal approximations to the distribution of a sample quantile.
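The distribution-function argument rests on an exact identity: for n odd, the sample median is the k-th order statistic with k = (n+1)/2, and P(median <= x) = P(Bin(n, F(x)) >= k), which De Moivre-Laplace then makes approximately normal. A quick simulation check of the identity for uniform(0,1) samples (n and x are our illustrative choices):

```python
import numpy as np
from math import comb

# Exact identity: P(sample median <= x) = P(Bin(n, F(x)) >= k), F(x) = x here.
n, x = 15, 0.6
k = (n + 1) // 2
p_exact = sum(comb(n, j) * x**j * (1 - x)**(n - j) for j in range(k, n + 1))

# Simulation check with 200,000 replications.
rng = np.random.default_rng(1)
medians = np.median(rng.uniform(size=(200_000, n)), axis=1)
p_sim = (medians <= x).mean()
```

Applying the normal approximation to the binomial tail on the right-hand side gives the asymptotic normality of the sample quantile directly.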
14.
Laurens Cherchye, Thomas Demuynck, Bram De Rock, Journal of Mathematical Economics, 2011, 47(4-5): 564-575
Focusing on the testable revealed preference restrictions on the equilibrium manifold, we show that the rationalizability problem is NP-complete. Subsequently, we present a mixed integer programming (MIP) approach to characterize the testable implications of general equilibrium models. Attractively, this MIP approach naturally applies to settings with any number of observations and any number of agents. This is in contrast with existing approaches in the literature. We also demonstrate the versatility of our MIP approach in terms of dealing with alternative types of assignable information. Finally, we illustrate our methodology on a data set drawn from the US economy. In this application, an important focus is on the discriminatory power of the rationalizability tests under study. 相似文献
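A far simpler relative of such rationalizability tests is the single-consumer GARP check, sketched below with toy price/quantity data (our own illustration; the paper's equilibrium-manifold test is a much harder, MIP-based problem):

```python
import numpy as np

# Check the Generalized Axiom of Revealed Preference for one consumer.
def satisfies_garp(prices, quantities):
    p = np.asarray(prices, dtype=float)
    q = np.asarray(quantities, dtype=float)
    E = p @ q.T                          # E[i, j]: cost of bundle j at prices i
    own = E.diagonal()[:, None]
    R = own >= E                         # directly revealed preferred (weak)
    n = len(p)
    for k in range(n):                   # Warshall transitive closure
        R = R | (R[:, [k]] & R[[k], :])
    P = own > E                          # strictly directly revealed preferred
    return not np.any(R & P.T)           # no cycle: i R j while j strictly prefers over i

consistent = satisfies_garp([[1, 1], [1, 4]], [[1, 1], [2, 0.5]])
violating = satisfies_garp([[1, 2], [1, 4]], [[2, 1], [0, 2]])
```

The NP-completeness result in the paper shows that, unlike this polynomial-time check, testing rationalizability on the equilibrium manifold cannot be expected to scale so gracefully.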
15.
Tests for the goodness-of-fit problem based on sample spacings, i.e., observed distances between successive order statistics, have been used in the literature. We propose a new test based on the number of "small" and "large" spacings. The asymptotic theory under close alternative sequences is also given, thus enabling one to calculate the asymptotic relative efficiencies of such tests. A comparison of the new test and other spacings tests is given.
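A statistic in this spirit is easy to sketch: count scaled spacings below and above fixed thresholds for a sample tested against uniform(0,1). The thresholds here are our illustrative choices, not the paper's:

```python
import numpy as np

# Count "small" and "large" scaled spacings of a sample against uniform(0,1).
def spacing_counts(u, lo=0.5, hi=2.0):
    x = np.sort(np.asarray(u))
    d = np.diff(np.concatenate([[0.0], x, [1.0]]))   # the n+1 spacings
    scaled = len(d) * d                              # mean ~1 under uniformity
    return int(np.sum(scaled < lo)), int(np.sum(scaled > hi))

rng = np.random.default_rng(7)
small, large = spacing_counts(rng.uniform(size=999))
```

Under uniformity, scaled spacings are approximately unit exponential, so the expected counts are known and large deviations from them signal lack of fit.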
16.
In this paper, we propose a flexible, parametric class of switching regime models allowing for both skewed and fat-tailed outcome and selection errors. Specifically, we model the joint distribution of each outcome error and the selection error via a newly constructed class of multivariate distributions which we call generalized normal mean–variance mixture distributions. We extend Heckman’s two-step estimation procedure for the Gaussian switching regime model to the new class of models. When the distributions of the outcome errors are asymmetric, we show that an additional correction term accounting for skewness in the outcome error distribution (besides the analogue of the well-known inverse Mills ratio) needs to be included in the second step regression. We use the two-step estimators of parameters in the model to construct simple estimators of average treatment effects and establish their asymptotic properties. Simulation results confirm the importance of accounting for skewness in the outcome errors in estimating model parameters as well as the average treatment effect and the treatment effect for the treated.
17.
A stochastic frontier model with correction for sample selection
William Greene, Journal of Productivity Analysis, 2010, 34(1): 15-24
Heckman’s (Ann Econ Soc Meas 4(5), 475–492, 1976; Econometrica 47, 153–161, 1979) sample selection model has been employed for three decades in applied linear regression studies. This paper builds on this framework to obtain a sample selection correction for the stochastic frontier model. We first show a surprisingly simple way to estimate the familiar normal-half normal stochastic frontier model using maximum simulated likelihood. We then extend the technique to a stochastic frontier model with sample selection. In an application that seems superficially obvious, the method is used to revisit the World Health Organization data (WHO in The World Health Report, WHO, Geneva 2000; Tandon et al. in Measuring the overall health system performance for 191 countries, World Health Organization, 2000), where the sample partitioning is based on OECD membership. The original study pooled all 191 countries. The OECD members appear to be discretely different from the rest of the sample. We examine the difference in a sample selection framework.
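The classic Heckman two-step correction that this paper builds on can be sketched on simulated data. This illustrates the correction term (the inverse Mills ratio), not the paper's stochastic-frontier extension; for brevity we use the true selection index instead of estimating a first-stage probit, and all parameter values are our own:

```python
import numpy as np
from math import erf, sqrt, pi

# Standard normal CDF and pdf (vectorized, stdlib only).
Phi = np.vectorize(lambda t: 0.5 * (1.0 + erf(t / sqrt(2.0))))
phi = lambda t: np.exp(-0.5 * t**2) / sqrt(2.0 * pi)

rng = np.random.default_rng(2)
n = 5000
z = rng.normal(size=n)                      # selection covariate
x = rng.normal(size=n)                      # outcome covariate
u = rng.normal(size=n)                      # selection error
eps = 0.6 * u + 0.8 * rng.normal(size=n)    # outcome error, correlated with u
observed = (0.5 + z + u) > 0                # selection rule
y = 1.0 + 2.0 * x + eps                     # outcome equation

idx = 0.5 + z                               # (true) selection index
imr = phi(idx) / Phi(idx)                   # inverse Mills ratio
X = np.column_stack([np.ones(observed.sum()), x[observed], imr[observed]])
beta, *_ = np.linalg.lstsq(X, y[observed], rcond=None)
```

Since E[eps | observed] = 0.6 * imr here, the second-step OLS recovers intercept 1, slope 2, and a Mills-ratio coefficient near 0.6, removing the selection bias that plain OLS on the observed subsample would suffer.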
18.
We describe a system for the automatic scheduling of employees in the particular setting in which: the number of employees wanted on duty throughout the week fluctuates; the availabilities of the employees vary and change from week to week; and a new schedule must be produced each week, by virtue of the changing demand for service. The problem which we address appears in a variety of settings, including airline reservation offices, telephone offices, supermarkets, fast food restaurants, banks and hotels. Previous approaches to the problem have relied chiefly on formal methods, generally involving one or another variation of linear or integer mathematical programming. We suggest that, except in cases involving very small problems (only a handful of employees), those approaches have not proven promising, especially where union rules and management requirements impose complex constraints on the problem, and that a heuristic approach has proven to be substantially superior. We set forth the general features of our heuristic approach, which we see as an application of artificial intelligence. We show how, in contrast to other approaches, which design shifts as if employees were always available and try to fit those shifts to employees who are not always available, our system designs shifts with deference to the employees' limited availabilities. We suggest that, for a given service level, our system produces schedules with a better "fit", the number of employees actually on duty comparing more favorably with the number wanted; and we state that while, for a given service level, a 'manual scheduler' may take up to 8 hours each week to prepare a good schedule, our system, on most microcomputers, routinely produces better schedules involving up to 100 employees in about 20 minutes. The scheduling of employees is generally considered to be a managerial function in the setting of the problem we address.
When a craft employee is replaced on an assembly line by a machine which performs the same function, we speak of the replacing mechanism as an industrial robot. We suggest that systems like the one we describe deserve a name, to distinguish them from comparable, computer-based systems which do not replace, but rather supplement, a manager; and we suggest the name 'managerial robot' for such systems. We set forth the characteristics which we feel would justify applying the term 'managerial robot' to a computer-based system, and suggest that classification is basic to understanding and communication, and that just as terms such as decision support systems and expert systems prove useful in our increasingly advanced, technological society, so also the term managerial robot has a place in our scheme of things. Decision support systems do not qualify as managerial robots for the reason that managerial robots do not simply support the decision-making process, but rather replace the manager in his performance of a function which, when performed by a human being, is considered a managerial function. Nor do we consider managerial robots to qualify as expert systems. While our scheduling system contains an inference mechanism, and could be enhanced to improve the quality of its schedules through 'experience' (and thus to 'learn'?), it lacks a knowledge base in the sense of expert systems; most of all, in replacing rather than supporting the decision maker, the managerial robot needs a term of its own. We elaborate, in this paper, a specific application of our system, and show how the design of shifts, and the placement of breaks, serve to yield a fit whose quality no human scheduler can duplicate.
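The availability-first idea, designing coverage around declared availabilities rather than fitting fixed shifts afterwards, can be sketched with a toy greedy cover of hourly demand. This is our own simplification, not the paper's system; the names and numbers are invented:

```python
# Toy greedy cover: assign available employees hour by hour until the hourly
# demand is met. (Illustration only; the real system also designs shift
# shapes and break placement under union and management constraints.)
demand = [2, 3, 3, 2, 1]                    # staff wanted in each hour
avail = {"ann": {0, 1, 2, 3}, "bob": {1, 2, 3, 4},
         "cat": {0, 1, 2},    "dan": {0, 2, 3, 4}}

schedule = {name: set() for name in avail}
for hour, need in enumerate(demand):
    on_duty = [name for name in avail if hour in avail[name]]
    for name in on_duty[:need]:             # take the first `need` available
        schedule[name].add(hour)
```

Measuring "fit" then amounts to comparing, hour by hour, the number actually scheduled against the number wanted.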
19.
The performance on small and medium-size samples of several techniques to solve the classification problem in discriminant analysis is investigated. The techniques considered are two widely used parametric statistical techniques (Fisher's linear discriminant function and Smith's quadratic function), and a class of recently proposed nonparametric estimation techniques based on mathematical programming (linear and mixed-integer programming). A simulation study is performed, analyzing the relative performance of the above techniques in the two-group case, for various small sample sizes, moderate group overlap and across six different data conditions. Training samples as well as validation samples are used to assess the classificatory performance of the techniques. The degree of group overlap and sample sizes selected for analysis in this paper are of interest in practice because they closely reflect conditions of many real data sets. The results of the experiment show that Smith's nonlinear quadratic function tends to be superior on the training samples and validation samples when the variances–covariances across groups are heterogeneous, while the mixed-integer technique performs best on the training samples when the variances–covariances are equal, and on validation samples with equal variances and discrete uniform independent variables. The mixed-integer technique and the quadratic discriminant function are also found to be more sensitive than the other techniques to the sample size, giving disproportionately inaccurate results on small samples.
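One of the parametric benchmarks, Fisher's linear discriminant for two groups, is compact enough to sketch from scratch: w = S_pooled^{-1}(m1 - m2), classifying by which side of the projected midpoint an observation falls on. The simulated group means and sample sizes are our own choices:

```python
import numpy as np

# Fisher's linear discriminant for two groups with a pooled covariance.
def fisher_lda(X1, X2):
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    S = ((X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)) / (len(X1) + len(X2) - 2)
    w = np.linalg.solve(S, m1 - m2)         # discriminant direction
    c = w @ (m1 + m2) / 2                   # midpoint cutoff on the projection
    return lambda X: np.where(X @ w > c, 1, 2)

rng = np.random.default_rng(3)
X1 = rng.normal([2, 0], 1.0, size=(100, 2))  # group 1 training sample
X2 = rng.normal([0, 2], 1.0, size=(100, 2))  # group 2 training sample
classify = fisher_lda(X1, X2)
acc = (np.mean(classify(X1) == 1) + np.mean(classify(X2) == 2)) / 2
```

The quadratic and mathematical-programming alternatives in the study replace this common-covariance rule with group-specific covariances or with direct misclassification-minimizing formulations.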
20.
We solve the stochastic neoclassical growth model, the workhorse of modern macroeconomics, using C++14, Fortran 2008, Java, Julia, Python, Matlab, Mathematica, and R. We implement the same algorithm, value function iteration, in each of the languages. We report the execution times of the codes on a Mac and on a Windows computer and briefly comment on the strengths and weaknesses of each language.
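The shared algorithm, value function iteration, can be sketched in one of the compared languages. This minimal Python version uses a deterministic special case of our choosing (log utility, full depreciation) in which the exact policy k' = alpha*beta*k^alpha is known, so the discretized solution can be checked:

```python
import numpy as np

alpha, beta = 0.3, 0.9
k_grid = np.linspace(0.05, 0.5, 200)
# consumption implied by each (k today, k' tomorrow) pair; full depreciation
c = k_grid[:, None] ** alpha - k_grid[None, :]
util = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -1e10)

V = np.zeros(k_grid.size)
for _ in range(1000):                        # value function iteration
    V_new = np.max(util + beta * V[None, :], axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

policy = k_grid[np.argmax(util + beta * V[None, :], axis=1)]
```

The paper's comparison ports exactly this kind of grid-based Bellman loop (with a stochastic productivity state) across the eight languages and times it.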