Similar Documents
20 similar documents found.
1.
Tolerance problems in mechanical engineering ought to be treated with statistical methods.
It is shown that the use of range (instead of standard deviation) for measuring variability in tolerances and fits leads to systematic errors.
The suggestion is made to measure variability in mechanical engineering by T = 4.65σ, this being the 98% range in the case of a normal distribution (the central 98% of a normal distribution lies between μ − 2.326σ and μ + 2.326σ, and 2 × 2.326 ≈ 4.65).
A simple statistical approach to selective assembling is mentioned.

2.
At present (fall 1960) the Netherlands Central Bureau of Statistics is revising its price index numbers of family living. Some features of this revision are summarized below.
The old series is based on an expenditure pattern of 1951, whereas the new series will be calculated according to an expenditure pattern of 1959/'60. The latter data will be derived from a budget survey held among 250 households of manual and clerical workers, each consisting of 4 persons and grossing between four and eight thousand guilders a year (para. 6). The period covered was April 1959 till April 1960.
The author indicates the way in which the varieties of the budget items to be covered by the monthly price surveys are chosen (para. 7). He discusses the principles and results of determining the number of price quotations (para. 8).
The choice of the municipalities in which price data will be collected is explained. An outline is given of the organisation of the new survey apparatus (para. 14).

3.
《Statistica Neerlandica》1946,1(2):102-107
If we want to check a hypothesis by numbers obtained from measuring or counting in a random sample, practically always a difference will be found between these numbers and those derived from that hypothesis. Such a difference may be caused by faults in the hypothesis, or by factors which we do not want to analyse, such as faults in the measuring, the influence of sampling, or influences acting together with the one regarded in our hypothesis.
The theory of statistics shows a way to estimate the chance that such another cause would result in a difference equal to or larger than the one actually found. If in a given case this "chance of surpassing" is found to be small, we have to reject the hypothesis concerned. The smaller this chance is, the larger is the likelihood that the rejection is justified; it provides the estimate of the likelihood of each statistical conclusion.
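As a worked illustration of this "chance of surpassing" (our example, not the article's), here is a minimal sketch in Python assuming a binomial model for coin tosses:

```python
import math

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance that random
    sampling alone produces a deviation at least this large."""
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# Hypothesis: a coin is fair.  In 100 tosses we count 60 heads.
# The "chance of surpassing" is the probability of a deviation
# equal to or larger than the one found (doubled, by symmetry,
# to cover deviations in either direction).
chance = 2 * binom_tail(100, 60)
print(f"chance of surpassing: {chance:.4f}")  # ~0.0569: not small enough to reject at 5%
```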

4.
Quality and Industrial Organisation: "To-morrow".
The influence of organised consumers, European integration and economic development on quality and organisation is described.
A special way of organising from the bottom to the top is explained. Emphasis is placed on the necessity of, and the means of, communication.

5.
This paper considers a two-plant production and distribution problem where the production cost of each plant depends on the amount produced. Demand is price inelastic and uniformly distributed on the plane. Transport costs are directly proportional to the straight-line travel distance from each plant. The constant of proportionality may differ between plants. Two cases are considered: (i) the two plants are owned by a single supplier, and (ii) the two plants are operated by competing firms. The profit-maximizing solution, when convexity assumptions are imposed on the production cost functions, determines the amount produced in each plant and defines the marketing region for each plant. It is shown that the solution for the single supplier case is identical to the solution in the competitor case and that the solution for each case is unique.
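As a hedged illustration of how the marketing regions arise (the symbols are ours, not the paper's: plants at locations a1 and a2, with delivered marginal costs m1 and m2 at the optimum and transport rates t1 and t2), a customer at location x is assigned to plant 1 when

```latex
m_1 + t_1\,\lVert x - a_1\rVert \;\le\; m_2 + t_2\,\lVert x - a_2\rVert ,
```

so the boundary between the two marketing regions is the locus

```latex
t_1\,\lVert x - a_1\rVert - t_2\,\lVert x - a_2\rVert \;=\; m_2 - m_1 ,
```

a branch of a hyperbola when t1 = t2 and a Cartesian oval when the transport rates differ.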

6.
《Statistica Neerlandica》1948,2(5-6):228-234
Summary  (Sample size for a single sampling scheme).
The operating characteristic of a sampling scheme may be specified by the producer's 1-in-20 risk point (p1), at which the probability of rejecting a batch is 0.05, and the consumer's 1-in-20 risk point (p2), at which the probability of accepting a batch of that quality is also 0.05.
A nomogram is given (fig. 2) to determine, for single sampling schemes and for given values of p1 and p2, the necessary sample size (n) and the allowable number of defectives in the sample (c).
The nomogram may also be used in reverse, to determine the producer's and consumer's 1-in-20 risk points for a given single sampling scheme.
The curves in this nomogram were computed from a table of percentage points of the χ² distribution. For ν > 30, Wilson and Hilferty's approximation to the χ² distribution was used.
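A minimal computational sketch of what the nomogram delivers (ours, not the paper's; it searches binomial acceptance probabilities directly rather than using χ² percentage points):

```python
import math

def accept_prob(n, c, p):
    """Probability of accepting a batch with fraction defective p
    under a single sampling plan (n, c): P(defectives in sample <= c)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

def single_sampling_plan(p1, p2, alpha=0.05, beta=0.05, n_max=5000):
    """Smallest sample size n (with its c) meeting both 1-in-20 risk points:
    acceptance probability >= 1-alpha at p1 (producer's risk point) and
    <= beta at p2 (consumer's risk point)."""
    for n in range(1, n_max + 1):
        for c in range(n + 1):
            if accept_prob(n, c, p2) > beta:
                break  # a larger c only raises the consumer's risk further
            if accept_prob(n, c, p1) >= 1 - alpha:
                return n, c
    return None

print(single_sampling_plan(0.01, 0.05))  # e.g. a plan for p1 = 1%, p2 = 5%
```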

7.
A sequence of logistic models is fitted to data from a Dutch follow-up study on preterm infants (POPS). To examine the adequacy of the model, a recently developed nonparametric method to check goodness of fit is applied (le Cessie and Van Houwelingen (1991)). This method uses a test statistic based upon kernel regression methods.
In this paper the problem of choosing a "best" bandwidth, corresponding to the greatest power of the test statistic, is avoided by computing the test statistic for a range of different bandwidths. Testing is then based upon the asymptotic distribution of the maximum of the test statistics.
The testing method is used as a goodness-of-fit criterion, and the contribution of each individual observation to the test statistic is used as a diagnostic tool to localize deviations from the model and to determine directions in which the model can be improved.
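As a hedged illustration of the bandwidth-maximized statistic, here is a schematic sketch in Python; the data and the "fitted" probabilities are simulated stand-ins, and the normalization is simplified relative to the authors' statistic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated stand-in for the study data: one covariate, logistic truth.
n = 300
x = rng.normal(size=n)
p_true = 1 / (1 + np.exp(-(0.5 + 1.2 * x)))
y = rng.binomial(1, p_true)

# p_hat stands in for fitted probabilities from the logistic model;
# here the true model is used in place of a fitted one.
p_hat = p_true
resid = y - p_hat

def smoothed_residual_stat(x, resid, h):
    """Sum of squared kernel-smoothed residuals: large values signal
    systematic lack of fit in some region of the covariate."""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)  # Gaussian kernel
    smoothed = (w * resid[None, :]).sum(axis=1) / w.sum(axis=1)
    return float((smoothed ** 2).sum())

# Avoid picking one "best" bandwidth: evaluate a whole range and
# base the test on the maximum, as the paper proposes.
bandwidths = [0.1, 0.2, 0.5, 1.0, 2.0]
stats_by_h = {h: smoothed_residual_stat(x, resid, h) for h in bandwidths}
print(max(stats_by_h.values()))
# In the paper, this maximum is referred to its asymptotic null distribution.
```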

8.
Summary  (Statistical investigation of the distribution of data for the solids of bread (in loaves analysed in the Food Inspection Laboratory in Amsterdam))
The distributions of the data on the solids of bread as analysed during the war years are investigated. The means and the standard deviations are calculated, as well as χ², kurtosis and skewness, supposing the distributions to be normal. An example of the calculation is given in table I. Actual numbers for different years are given in tables II and III. The distributions were tested for normality because former investigations showed that the distribution for loaves prepared under supervision did not deviate significantly from the normal.
It is found that generally the investigated distributions cannot be regarded as normal. Though symmetric, they show leptokurtosis, and the χ²-test for the goodness of fit of the normal equation gives values of P of 0.01 (or a little more). Similar distributions were found by Clancey 1) in his investigation of chemical analyses of industrial products (about 10% of the distributions showed this shape, and some 10% were truncated leptokurtic curves), and by us for the fat percentage of meals from the governmental eating-houses. The distributions are represented on probability paper. This way of representing results gives a clear view of the variations of the mean and the standard deviation in the course of the years (fig. 1). The deviation of the shape from the normal straight line on probability paper through special causes is investigated (figs. 3, 4, 5 and 6, to be compared with fig. 2). With this "spectrum" of possible deviations from the normal distribution in mind, the special cause of the leptokurtic shape in our particular case is discussed.
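A sketch of this kind of normality screening (simulated data standing in for the bread-solids determinations; the binning choices are ours):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Simulated stand-in for a year's solids determinations: a leptokurtic
# mixture (two normals with a common mean but different spreads).
data = np.concatenate([rng.normal(45, 1.0, 800), rng.normal(45, 2.5, 200)])

print(f"mean {data.mean():.2f}  sd {data.std(ddof=1):.2f}")
print(f"skewness {stats.skew(data):.2f}  excess kurtosis {stats.kurtosis(data):.2f}")

# Chi-square goodness of fit against the fitted normal, using
# equal-probability bins so the expected counts are uniform.
k = 20
edges = stats.norm.ppf(np.linspace(0, 1, k + 1), data.mean(), data.std(ddof=1))
edges[0], edges[-1] = data.min() - 1, data.max() + 1  # keep edges finite
observed, _ = np.histogram(data, bins=edges)
expected = np.full(k, len(data) / k)
chi2 = ((observed - expected) ** 2 / expected).sum()
dof = k - 3  # bins minus 1, minus 2 fitted parameters
print(f"chi2 {chi2:.1f} on {dof} df, P {stats.chi2.sf(chi2, dof):.4f}")
```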

9.
This article presents a statistical approach to assess the coherence of official results of referendum processes. The statistical analysis described is divided into four phases, according to the methodology used and the corresponding results: (1) initial study, (2) quantification of irregular certificates of election, (3) identification of irregular voting centers, and (4) estimation of recall referendum results.
The technique of cluster analysis is applied to address the issue of heterogeneity of the parishes with respect to their political preferences.
The Venezuelan recall referendum of 2004 is the case study used to apply the proposed methodology, based on the data published by the "Consejo Nacional Electoral" (CNE, National Electoral Council). Finally, we present the conclusions of the study, which we summarize as follows: the percentage of irregular certificates of election is between 22.2% and 26.5% of the total; 18% of the voting centers show an irregular voting pattern in their certificates of election, the votes corresponding to this irregularity being around 2,550,000; and the estimate of the result, using the unbiased votes as representative of the population, is 56.4% of YES votes against President Chávez, as opposed to the official result of 41%.

10.
Summary  Many books about probability and statistics mention the weak and the strong law of large numbers only for samples from distributions with finite expectation. However, these laws also hold for distributions with infinite expectation, and then the sample average has to go to infinity with increasing sample size.
Being curious about the way in which this would happen, we simulated increasing samples (up to n = 40000) from three distributions with infinite expectation. The results were somewhat surprising at first sight, but understandable after some thought. Most statisticians, when asked, seem to expect a gradual increase of the average with the size of the sample. So did we. In general, however, this proves to be wrong, and different parent distributions show different types of behaviour in this experiment.
The samples from the "absolute Cauchy" distribution are the most interesting from a practical point of view: the average takes a high jump from time to time and decreases in between. In practice it might well happen that the observations causing the jumps would be discarded as outliers.
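A small simulation in the spirit of the experiment (parameters and seed are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Absolute Cauchy": |X| for standard Cauchy X.  Its expectation is
# infinite, so the running average must drift to infinity -- but by
# occasional huge jumps, not gradually.
n = 40_000
sample = np.abs(rng.standard_cauchy(n))
running_mean = np.cumsum(sample) / np.arange(1, n + 1)

# Print the average at a few sample sizes: it decreases between
# jumps and leaps upward whenever an extreme observation arrives.
for m in (100, 1_000, 5_000, 10_000, 40_000):
    print(f"n = {m:>6}: running average = {running_mean[m - 1]:8.2f}")
```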

11.
Note on the generation of most probable frequency distributions
The most probable distribution of a stochastic variable is obtained by maximizing the entropy of its distribution under given constraints, by applying Lagrange's procedure.
The constraints then determine the type of frequency distribution. The above holds for continuous as well as for discrete distributions.
In this note we give a survey of various constraints and the corresponding frequency distributions.
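For instance, the standard derivation runs as follows (our notation, not necessarily the note's):

```latex
\max_{p}\; -\int p(x)\,\log p(x)\,dx
\quad \text{subject to} \quad
\int p(x)\,dx = 1, \qquad \int g_k(x)\,p(x)\,dx = c_k .
```

Lagrange's procedure gives

```latex
p(x) \;\propto\; \exp\!\Big(-\sum_{k} \lambda_k\, g_k(x)\Big),
```

so the constraints fix the family: no moment constraints on a finite support give the uniform distribution, a constraint on E[X] with x ≥ 0 gives the exponential distribution, and constraints on E[X] and E[X²] give the normal distribution.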

12.
Summary  Controlling is intervening in a situation on the basis of measurements. Each of the three elements occurring in this definition may contain an uncertainty that sets a limit to the control efficiency:
The measurements may be in error owing to both inaccuracies and sluggishness of the measurements. Proper and rapid data-processing is therefore essential.
The interventions may lose part of their effect through over-determinacy or dynamically unfavourable (sluggish) responses. In either case statistical calculations disclose the average control errors that are liable to be made, and also the way in which these can be minimized.
The situation may be unclear, i.e. the static and dynamic process characteristics are insufficiently known. In such cases regression and correlation techniques may aid in finding a solution.
Following a general review, we shall discuss in more detail those aspects that are bound up with the dynamics of the phenomena. Arguments from information theory reveal that the dynamic efficiency of control actions depends on correlation functions of time series (disturbances) and response curves of the systems (processes). The effects of disturbance correlation time, disturbance variance, sampling time, sample treatment, measuring errors, measuring time, rapidity of the intervention, etc., on the efficiency are elucidated by means of some formulae, graphs and examples.

13.
《Statistica Neerlandica》1948,2(1-2):74-94
Summary  (The principles of the Analysis of Regression with an application to dosage mortality data).
The principles of the analysis of regression are explained for research workers. For advanced statisticians a mathematical treatment is given in the appendix (in English), which follows closely the practical procedure and differs markedly from those given by Kendall or Wilks. The sequel to a previous article on dosage mortality data (Statistica 1: 257) is given by way of example.

14.
Consider an ordered sample x(1), x(2), …, x(2n+1) of size 2n + 1 from the normal distribution with parameters μ and σ. We then have with probability one

x(1) < x(2) < … < x(2n+1).

The random variable

hn = x(n+1) / (x(2n+1) − x(1)),

which can be described as the quotient of the sample median and the sample range, provides us with an estimate for μ/σ that is easy to calculate. To calculate the distribution of hn is quite a different matter. The distribution function of h1 and the density of h2 are given in section 1. Our results seem hardly promising for general hn. In section 2 it is shown that hn is asymptotically normal.
In the sequel we suppose μ = 0 and σ = 1, i.e. we consider only the "central" distribution. Note that hn can be used as a test statistic replacing Student's t. In that case the central hn is all that is needed.
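A quick simulation of the central case (sample size and seed are ours):

```python
import numpy as np

rng = np.random.default_rng(42)

def h_n(sample):
    """Quotient of sample median and sample range for an odd-sized sample."""
    s = np.sort(sample)
    return s[len(s) // 2] / (s[-1] - s[0])

# Central case (mu = 0, sigma = 1): simulate h_n repeatedly and inspect
# its distribution; for large n it should look approximately normal.
n = 25  # sample size 2n + 1 = 51
reps = np.array([h_n(rng.normal(0, 1, 2 * n + 1)) for _ in range(10_000)])
print(f"mean {reps.mean():.4f}, sd {reps.std(ddof=1):.4f}")
```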

15.
In this paper we show how the Kalman filter, which is a recursive estimation procedure, can be applied to the standard linear regression model. The resulting "Kalman estimator" is compared with the classical least-squares estimator.
The applicability and (dis)advantages of the filter are illustrated by means of a case study consisting of two parts. In the first part we apply the filter to a regression model with constant parameters, and in the second part the filter is applied to a regression model with time-varying stochastic parameters. The prediction powers of various "Kalman predictors" are compared with those of "least-squares predictors" using Theil's prediction-error coefficient U.
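A minimal sketch of the constant-parameter part (ours, not the paper's case study), assuming a diffuse prior and a known observation-noise variance; with an identity state transition and no state noise the Kalman recursion reproduces recursive least squares:

```python
import numpy as np

rng = np.random.default_rng(3)

# Constant-parameter regression y = x'beta + noise, estimated recursively.
n, k = 200, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

beta = np.zeros(k)      # state estimate
P = np.eye(k) * 1e6     # large prior variance ("diffuse" start)
r = 0.25                # observation-noise variance (0.5**2)
for t in range(n):
    x = X[t]
    K = P @ x / (x @ P @ x + r)         # Kalman gain
    beta = beta + K * (y[t] - x @ beta) # update with the prediction error
    P = P - np.outer(K, x) @ P          # posterior variance

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print("Kalman:", beta.round(4), " OLS:", beta_ols.round(4))  # near-identical
```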

16.
17.
Statistical Integration Through Metadata Management
Faster and more versatile technology is fuelling user demand for statistical agencies to produce an ever wider range of outputs, and to ensure those outputs are consistent and mutually related to the greatest extent possible. Statistical integration is an approach for enhancing the information content of separate statistical collections conducted by an agency, and is necessary for consistency. It has two aspects, conceptual and physical, the former being a prerequisite for the latter. This paper focuses on methods for achieving statistical integration through better management of metadata. It draws on experiences at the Australian Bureau of Statistics in the development and use of a central repository (the "Information Warehouse") to manage data and metadata. It also makes reference to comparable initiatives at other national statistical agencies.
The main conclusions are as follows. First, a prototyping approach is required in developing new functionality to support statistical integration, as it is not clear in advance what tools are needed. Second, metadata from separate collections cannot easily be rationalised until they have been loaded into a central repository and are visible alongside one another, so that their inconsistencies become evident. Third, to be effective, conceptual integration must be accompanied by physical integration. Fourth, there is great scope for partnerships and exchange of ideas between agencies. Finally, statistical integration must be built into the ongoing collection processes and viewed as a way of life.

18.
This study presents an optimal exponent for transforming exponentials for statistical process control (SPC) applications. The optimal exponent, 3.5454, is determined by minimizing the sum of the squared differences between two distinct cumulative probability functions. The normal distribution closely approximates the transformed distribution. The study investigates an interval of indifference for the exponent using two criteria: the square root of the sum of the squared differences and the Kullback-Leibler distance (K-L). This interval is [3.4, 3.77], which implies that exponents falling in this interval give very similar results in transforming the exponentials. The study explores an example involving flash-memory wafers. The individuals chart using the transformed data with the location parameter eliminated has better detection power than the one without the location parameter eliminated. Moreover, the control chart is also easier for practitioners to interpret and implement than probabilistic control limits.
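A hedged sketch of the transformation in use (the scale, sample size and 3-sigma limits are our illustrative choices, not the study's):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Exponential data (e.g. times between defects) transformed with the
# optimal exponent: X**(1/3.5454) is approximately normal, so ordinary
# individuals-chart limits can be used on the transformed scale.
x = rng.exponential(scale=2.0, size=2000)
z = x ** (1 / 3.5454)

print(f"skewness {stats.skew(z):.3f}, excess kurtosis {stats.kurtosis(z):.3f}")

# Shewhart-style individuals-chart limits on the transformed scale.
center, sd = z.mean(), z.std(ddof=1)
print(f"limits: {center - 3 * sd:.3f} .. {center + 3 * sd:.3f}")
```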

19.
In this paper we intend to establish relations between the way efficiency is measured in the literature on efficiency analysis and the notion of distance in topology. In particular we study the Hölder norms and their relationship to the shortage function (Luenberger (1995)) and the directional distance function (Chambers, Chung and Färe (1995-96)). Along this line, we provide mathematical programs to compute the Hölder distance function. However, it has a perverse property that undermines its attractiveness: it fails the commensurability condition suggested by Russell (1988). Thus, we introduce a commensurable Hölder distance function invariant with respect to a change in the units of measurement. Among other things we obtain some continuity results, and we prove that the well-known Debreu-Farrell measure is a special case of the Hölder distance function.
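In sketch form (our notation, not the authors' exact formulation), the Hölder distance function of order p measures the ℓp-distance from an input-output vector z to the efficient frontier ∂(T) of the technology T:

```latex
D_p(z) \;=\; \inf\bigl\{\, \lVert z - u \rVert_p \;:\; u \in \partial(T) \,\bigr\} .
```

A commensurable variant, in the spirit of the paper, scales each deviation by the corresponding coordinate of z before taking the norm, so that the resulting value is unaffected by a change in the units of measurement.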

20.
《Statistica Neerlandica》1948,2(5-6):206-227
Summary  (Superposition of two frequency distributions)
Notation:
n: number of observations
M: arithmetic mean
σ: standard deviation
μr: r-th moment coefficient
β1: coefficient of skewness
β2: coefficient of kurtosis.
The suffixes a and b apply to the component distributions. The suffix t applies to the resulting distribution.

The problem: given the first r moments of two frequency distributions (beginning with μ0), find the first r moments of the distribution resulting from superposition of the two components (r ≤ 5).
Formulae [1] … [5] (§ 3) give the results in their most general form up to μ4.
Some special cases are treated in § 4, and eight different cases of superposition of two normal distributions in § 5.
In § 6 some remarks are made about the reverse problem, i.e. the splitting of a combined frequency distribution into two normal components.
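In modern notation, superposition formulas consistent with this summary read as follows (our rendering, not the article's formulae [1] … [5]):

```latex
n_t = n_a + n_b, \qquad M_t = \frac{n_a M_a + n_b M_b}{n_t} ,
```

and, writing d_a = M_a - M_t and d_b = M_b - M_t, the central moments of the combined distribution follow by shifting each component's moments to the common mean:

```latex
\mu_{r,t} \;=\; \frac{1}{n_t}\left[\, n_a \sum_{j=0}^{r} \binom{r}{j}\,\mu_{j,a}\, d_a^{\,r-j}
\;+\; n_b \sum_{j=0}^{r} \binom{r}{j}\,\mu_{j,b}\, d_b^{\,r-j} \right] ,
```

from which the combined β1 = μ3,t²/μ2,t³ and β2 = μ4,t/μ2,t² are obtained.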
