Similar literature (20 results)
1.
Summary The infinite period stationary inventory model is considered. There is a constant lead time, a nonnegative set-up cost, a linear purchase cost, a holding and shortage cost function, a fixed discount factor β, 0 < β < 1, and total backlogging of unfilled demand. Both the total discounted cost (β < 1) and the average cost (β = 1) criteria are considered. Under the assumption that the negatives of the one-period holding and shortage costs are unimodal, a unified proof of the existence of an optimal (s, S) policy is given. As a by-product of the proof, upper and lower bounds on the optimal values of s and S are found. New results simplify the algorithm of Veinott and Wagner for finding an optimal (s, S) policy for the case β < 1. Further, it is shown that the conditions imposed on the one-period holding and shortage costs can be weakened slightly.
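As a rough illustration of the policy class considered here, the sketch below estimates by Monte Carlo the expected total discounted cost of a fixed (s, S) policy. It assumes zero lead time and Poisson demand, and the cost parameters (set-up cost K, unit cost c, holding cost h, shortage cost p) and their values are illustrative choices, not taken from the paper.

```python
import numpy as np

def discounted_cost_sS(s, S, beta=0.95, K=10.0, c=2.0, h=1.0, p=5.0,
                       demand_mean=4.0, n_periods=500, n_reps=200, seed=0):
    """Monte-Carlo estimate of the expected total discounted cost of an
    (s, S) policy with zero lead time, full backlogging and Poisson demand.
    K: set-up cost, c: unit purchase cost, h: holding cost per unit,
    p: shortage (backlog) cost per unit.  All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_reps):
        x = S                      # starting inventory position
        cost, disc = 0.0, 1.0
        for _ in range(n_periods):
            if x <= s:             # order up to S when the position drops to s or below
                q = S - x
                cost += disc * (K + c * q)
                x = S
            d = rng.poisson(demand_mean)
            x -= d                 # backlogging: x may go negative
            cost += disc * (h * max(x, 0) + p * max(-x, 0))
            disc *= beta
        total += cost
    return total / n_reps

# Example: compare two candidate policies
print(discounted_cost_sS(5, 20), discounted_cost_sS(8, 25))
```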

2.
3.
R. Göb, Metrika 39(1): 139–153, 1992
Investigations on acceptance sampling have paid rather little attention to defects inspection. As to sampling models and the corresponding operating characteristic (OC) function of single defects sampling plans, generally only type B (the process model) has been considered: sampling from a production process where the OC is conceived as a function of sample size n, acceptance number c, and process average number of defects per item λ. For modern quality control, both the steadily increasing complexity of products and the need for differentiated cost calculation create a clear demand for defects sampling in its practically most relevant form: lot-by-lot single sampling plans, where the OC (type A, the lot OC) is considered as a function of lot size N, sample size n, acceptance number c, and the number of defects in the lot K. The present paper develops two lot OC functions from suitable assumptions on the arrangements of the total number K of defects over the N elements of the lot. Limiting theorems on these OC functions are used to justify to some extent the customary assumption that the Poisson process OC can serve as an approximation for type A. The dependence of the OC functions on sample size n, acceptance number c, and total number of defects in the lot K is described by simple formulae.
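The sketch below contrasts a type-B (Poisson process) OC with one simple candidate for a type-A lot OC. The arrangement assumption used for type A (each of the K defects lands on an item drawn independently and uniformly from the N lot items, so the number of defects captured by a sample of n items is Binomial(K, n/N)) is an illustrative choice and not necessarily either of the two models derived in the paper; the final loop only illustrates the kind of limiting behaviour the abstract refers to.

```python
from scipy.stats import binom, poisson

def oc_type_B(n, c, lam):
    """Type-B (process) OC: P(accept) when defects per item follow a Poisson
    process with mean lam, so total defects in the sample ~ Poisson(n*lam)."""
    return poisson.cdf(c, n * lam)

def oc_type_A(N, n, c, K):
    """One possible type-A (lot) OC: assume each of the K defects lands on an
    item chosen independently and uniformly from the N lot items, so the number
    of defects captured by a random sample of n items is Binomial(K, n/N).
    This arrangement assumption is illustrative, not the paper's exact model."""
    return binom.cdf(c, K, n / N)

# The lot OC approaches the Poisson process OC as N grows with K/N held fixed
for N in (100, 1000, 10000):
    print(N, oc_type_A(N, n=50, c=2, K=int(0.02 * N)), oc_type_B(50, 2, 0.02))
```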

4.
The American Psychological Association Task Force recommended that researchers always report and interpret effect sizes for quantitative data. However, no such recommendation was made for qualitative data. Thus, the first objective of the present paper is to provide a rationale for reporting and interpreting effect sizes in qualitative research. Arguments are presented that effect sizes enhance the process of verstehen/hermeneutics advocated by interpretive researchers. The second objective of this paper is to provide a typology of effect sizes in qualitative research. Examples are given illustrating various applications of effect sizes. For instance, when conducting typological analyses, qualitative analysts typically only identify emergent themes; yet these themes can be quantitized to ascertain their hierarchical structure. The final objective is to illustrate how inferential statistics can be utilized in qualitative data analyses. This can be accomplished by treating words arising from individuals, or observations emerging from a particular setting, as sample units of data that represent the total number of words/observations existing for that sample member/context. Heuristic examples are provided to demonstrate how inferential statistics can be used to provide more complex levels of verstehen than are presently undertaken in qualitative research.
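As a toy illustration of "quantitizing" emergent themes, the snippet below turns a hypothetical participant-by-theme presence matrix into endorsement rates that can be used to order the themes hierarchically; the theme names and data are invented.

```python
import numpy as np

# Hypothetical "quantitized" data: rows = interview participants,
# columns = emergent themes (1 = theme present in that participant's transcript).
themes = ["access", "cost", "trust", "time"]
X = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 0],
    [0, 1, 0, 1],
    [1, 1, 0, 0],
])

# A simple "effect size" for each theme: the proportion of participants
# endorsing it, which can be used to rank the themes.
rates = X.mean(axis=0)
for theme, r in sorted(zip(themes, rates), key=lambda t: -t[1]):
    print(f"{theme:>6}: {r:.2f}")
```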

5.
We consider the uniformly most powerful unbiased (UMPU) one-sided test for the comparison of two proportions based on sample sizes m and n, i.e., the randomized version of Fisher's exact one-sided test. It will be shown that the power function of the one-sided UMPU-test based on sample sizes m and n can coincide with the power function of the UMPU-test based on sample sizes m+1 and n for certain levels on the entire parameter space. A characterization of all such cases with identical power functions is derived. Finally, this characterization is closely related to number theoretical problems concerning Fermat-like binomial equations. Some consequences for Fisher's original exact test will be discussed, too.

6.
This study investigated the effect of advertising on cigarette sales, particularly after 1967. Data were collected from the Statistical Abstract of the U.S., the Historical Statistics of the U.S. and Vital Statistics between 1955 and 1979. A multiple regression model was used to analyze the data. Cigarette consumption was the dependent variable. Disposable income, the death rate due to cancer of the respiratory system/total cancer deaths, advertising outlays for cigarettes (newspaper and television advertising/total advertising cost), cigarette production (including long and regular sizes), sales outlets, loyalty/total loyalty, the average price of cigarettes and a dummy variable were used as independent variables. The analysis revealed a significant negative relationship between cigarette consumption and both total cancer deaths and average price, and a significant positive association between loyalty and cigarette consumption. Although advertising expenditures are not statistically significant, increased spending on advertising has an increasing effect on cigarette consumption.
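A minimal sketch of the kind of multiple regression described, fitted by ordinary least squares; the column ordering mirrors the listed regressors, but all numbers are fabricated purely so the snippet runs and is not the study's data.

```python
import numpy as np

# Hypothetical design matrix: columns stand in for the study's regressors,
# in order: disposable income, respiratory-cancer death share, advertising share,
# cigarette production, sales outlets, loyalty share, average price, post-1967 dummy.
rng = np.random.default_rng(1)
n = 25
X = rng.normal(size=(n, 8))
y = 2.0 - 0.5 * X[:, 1] + 0.8 * X[:, 5] - 0.6 * X[:, 6] + rng.normal(scale=0.3, size=n)

# OLS with an intercept via least squares; coef[0] is the constant term.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coef, 3))
```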

7.
The best known achievement of the literature on resource-allocating mechanisms and their message spaces is the first rigorous proof of the competitive mechanism's informational efficiency. In an exchange economy with N persons and K+1 commodities (including a numeraire), that mechanism announces K prices as well as a K-component trade vector for each of N−1 persons, making a total of NK message variables. Trial messages are successively announced and after each announcement each person privately determines, using private information, whether she finds the proposed trades acceptable at the announced prices. When a message is reached with which all are content, then the trades specified in that message take place, and they satisfy Pareto optimality and individual rationality. The literature shows that no (suitably regular) mechanism can achieve the same thing with fewer than NK message variables. In the classic proof, all the candidate mechanisms have the privacy property, and the proof uses that property in a crucial way. 'Non-private' mechanisms are, however, well-defined. We present a proof that for N>K, NK remains a lower bound even when we permit 'non-private' mechanisms. Our new proof does not use privacy at all. But in a non-private mechanism, minimality of the number of message variables can hardly be defended as the hallmark of informational efficiency, since a non-private mechanism requires some persons to know something about the private information of others in addition to the information contained in the messages. The new proof of the lower bound NK invites a new interpretation of the competitive mechanism's informational efficiency. We provide a new concept of efficiency which the competitive mechanism exhibits and which does rest on privacy even when N>K. To do so, we first define a class of projection mechanisms, wherein some of the message variables are proposed values of the action to be taken, and the rest are auxiliary variables. The competitive mechanism has the projection property, with a trade vector as its action and prices as the auxiliary variables. A projection mechanism proposes an action; for each proposal, the agents then use the auxiliary variables, together with their private information, to verify that the proposed action meets the mechanism's goal (Pareto optimality and individual rationality for the competitive mechanism) if, indeed, it does meet that goal. For a given goal, we seek projection mechanisms for which the verification effort (suitably measured) is not greater than that of any other projection mechanism that achieves the goal. We show the competitive mechanism to be verification-minimal within the class of private projection mechanisms that achieve Pareto optimality and individual rationality; that proof does use the privacy of the candidate mechanisms. We also show, under certain conditions, that a verification-minimal projection mechanism achieving a given goal has the smallest 'total communication effort' (which is locally equivalent to the classic 'message-space size') among all private mechanisms that achieve the goal, whether or not they have the projection property.

8.
9.
Faraz and Parsian (Statistical Papers, 47: 569–593, 2006) have shown that the double warning lines (DWL) scheme detects process shifts more quickly than the other variable ratio sampling schemes such as variable sample sizes (VSS), variable sampling intervals (VSI) and variable sample sizes and sampling intervals (VSSVSI). In this paper, the DWL T2 control chart for monitoring the process mean vector is economically designed. The cost model proposed by Costa and Rahim (Journal of Applied Statistics, 28: 875–885, 2001) is used here and is minimized through a genetic algorithm (GA) approach. Then the effects of the model parameters on the chart parameters and the resulting operating loss are studied, and finally a comparison between all possible variable ratio sampling (VRS) schemes is made to choose the best option economically.
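A minimal sketch of the genetic-algorithm step, under stated assumptions: the Costa-Rahim cost model is replaced by an arbitrary smooth surrogate, and the design vector (sample size n, sampling interval h, warning limit w, control limit k) is only an assumed parameterization of the DWL T2 chart, not the paper's exact one.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(params):
    """Placeholder stand-in for the Costa-Rahim hourly cost function.
    In the paper this would be the expected cost per hour of the DWL T2
    scheme as a function of the design parameters; here it is an arbitrary
    smooth surrogate so the GA machinery can be demonstrated."""
    n, h, w, k = params
    return (n - 5) ** 2 + 4 * (h - 1) ** 2 + (w - 2) ** 2 + (k - 4) ** 2

bounds = np.array([[2, 25], [0.1, 8.0], [0.5, 10.0], [1.0, 15.0]])

def random_pop(size):
    return rng.uniform(bounds[:, 0], bounds[:, 1], size=(size, 4))

pop = random_pop(40)
for _ in range(100):
    fitness = np.array([loss(p) for p in pop])
    elite = pop[np.argsort(fitness)[:10]]                    # selection: keep the best 10
    parents = elite[rng.integers(0, 10, size=(30, 2))]
    children = parents.mean(axis=1)                          # crossover: parent average
    children += rng.normal(scale=0.2, size=children.shape)   # mutation
    children = np.clip(children, bounds[:, 0], bounds[:, 1])
    pop = np.vstack([elite, children])

best = pop[np.argmin([loss(p) for p in pop])]
print("approx. economically optimal design (n, h, w, k):", np.round(best, 2))
```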

10.
M. J. Ahsan, S. U. Khan, Metrika 29(1): 71–78, 1982
The problem of allocating the sample sizes to the strata in multivariate stratified surveys, where, apart from the cost of enumerating the selected individuals in the sample, there is an overhead cost associated with each stratum, is formulated as a non-linear programming problem. The variances of the posterior distributions of the means of the various characters are placed under constraints and the total cost is minimized. The main problem is broken into subproblems, for each of which the objective function turns out to be convex. When the number of subproblems happens to be large, an approach is indicated for obtaining an approximate solution by solving only a small number of subproblems.
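A continuous-relaxation sketch of an allocation problem of this type, using classical design-based stratum variances in the constraints rather than the paper's posterior variances; all strata data, costs and tolerances are invented, and integer allocations would still have to be recovered from the rounded solution.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative data for H = 3 strata and 2 survey characters (made up).
W  = np.array([0.5, 0.3, 0.2])                    # stratum weights
S2 = np.array([[4.0, 9.0, 1.0],                   # per-character stratum variances
               [2.0, 6.0, 3.0]])
c_per_unit = np.array([1.0, 2.0, 1.5])            # enumeration cost per sampled unit
c_overhead = np.array([5.0, 5.0, 5.0])            # per-stratum overhead cost
v_max = np.array([0.08, 0.06])                    # variance tolerance per character

def cost(n):
    # All strata are sampled here, so the overhead is a constant and only
    # the per-unit part varies with the allocation.
    return np.sum(c_per_unit * n) + np.sum(c_overhead)

def variance(n, j):
    # Classical stratified-sampling variance of the mean of character j
    # (not the posterior variance used in the paper).
    return np.sum(W**2 * S2[j] / n)

cons = [{"type": "ineq", "fun": (lambda n, j=j: v_max[j] - variance(n, j))}
        for j in range(2)]
res = minimize(cost, x0=np.array([20.0, 20.0, 20.0]), constraints=cons,
               bounds=[(2, None)] * 3, method="SLSQP")
print("continuous allocation:", np.round(res.x, 1), " cost:", round(res.fun, 1))
```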

11.
This paper analyses the relevance of accounting fundamentals for informing about equity risk as measured by the cost of equity capital. Assuming the latter is a summary measure of how investors make decisions regarding the allocation of resources, the strength of the association between the cost of capital and the accounting-based measures of risk indicates how important these measures are for market participants when making economic decisions. To infer the cost of equity capital, we use O'Hanlon and Steele's method, which is based on the residual income valuation model. Moreover, we use the insights from this model to provide a theoretical underpinning for the choice of the accounting variables related to risk. The sample refers to the non-financial firms listed on the Madrid Stock Exchange over the period 1987–2002. Our results support our initial expectations regarding the association between the cost of equity capital and the accounting-based risk variables, thereby supporting the usefulness of fundamental analysis for determining the risk inherent in a share's future payoffs. In particular, we highlight the role of investing risk, which has been ignored in previous research. Our results are also robust to measures of risk other than the cost of capital, such as the variability in total returns and the firm's systematic risk (β).

12.
The purpose of this paper is to emphasize the importance of sampling and sample size considerations in all qualitative research. Such considerations would help qualitative researchers to select sample sizes and sampling designs that are most compatible with their research purposes. First, we discuss the importance of sampling in qualitative research. Next, we outline 24 designs for selecting a sample in qualitative research. We then discuss the importance of selecting a sample size that yields data that have a realistic chance of reaching data saturation, theoretical saturation, or informational redundancy. Based on the literature, we then provide sample size guidelines for several qualitative research designs. As such, we provide a framework for making sampling and sample size considerations in interpretive research. An earlier version of this article received the 2004 Southwest Educational Research Association (SERA) Outstanding Paper Award.

13.
This paper introduces tests for residual serial correlation in cointegrating regressions. The tests are devised in the frequency domain by using spectral measure estimates. The asymptotic distributions of the tests are derived and test consistency is established. The asymptotic distributions are obtained by using assumptions and methods that are different from those used in Grenander and Rosenblatt (1957) and Durlauf (1991). Small-scale simulation results are reported to illustrate the finite sample performance of the tests under various distributional assumptions on the data generating process; the distributions considered are the normal and t-distributions. The tests are shown to have stable size at sample sizes as large as 50 or 100. Additionally, it is shown that the tests are reasonably powerful against ARMA residuals. An empirical application of the tests to investigate the 'weak-form' efficiency of the foreign exchange market is also reported.
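The sketch below computes a generic cumulative-periodogram (spectral-measure) statistic from the residuals of a static cointegrating regression. It is meant only to show the frequency-domain mechanics of such a check and is not necessarily the statistic derived in the paper.

```python
import numpy as np

def cumulative_periodogram_stat(u):
    """Kolmogorov-Smirnov-type statistic based on the standardized cumulative
    periodogram of a residual series u.  Under white-noise residuals the
    cumulative periodogram rises roughly linearly; large deviations point to
    residual serial correlation.  This is a generic spectral-measure sketch,
    not necessarily the exact statistic derived in the paper."""
    u = np.asarray(u) - np.mean(u)
    T = len(u)
    freqs = np.arange(1, (T - 1) // 2 + 1)
    dft = np.fft.fft(u)[freqs]
    I = np.abs(dft) ** 2 / (2 * np.pi * T)          # periodogram ordinates
    F = np.cumsum(I) / np.sum(I)                    # standardized cumulative periodogram
    grid = freqs / len(freqs)
    return np.sqrt(len(freqs)) * np.max(np.abs(F - grid))

# Example: residuals from a cointegrating regression y_t = b*x_t + u_t
rng = np.random.default_rng(2)
x = np.cumsum(rng.normal(size=200))                 # I(1) regressor
u = rng.normal(size=200)                            # serially uncorrelated errors
y = 2.0 * x + u
b = np.sum(x * y) / np.sum(x * x)                   # static OLS slope
print(round(cumulative_periodogram_stat(y - b * x), 3))
```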

14.
Faraz and Parsian (Stat Pap 47:569–593, 2006) have shown that the double warning lines (DWL) scheme detects process shifts more quickly than the other variable ratio sampling schemes such as variable sample sizes (VSS), variable sampling intervals (VSI) and variable sample sizes and sampling intervals (VSSVSI). In this paper, the DWL T2 control chart for monitoring the process mean vector is economically designed. The cost model proposed by Costa and Rahim (J Appl Stat 28:875–885, 2001) is used here and is minimized through a genetic algorithm (GA) approach. Then the effects of the model parameters on the chart parameters and the resulting operating loss are studied, and finally a comparison between all possible variable ratio sampling (VRS) schemes is made to choose the best option economically.

15.
Summary The problem considered in this paper is a generalization of the usual Rao, Hartley and Cochran (RHC) scheme. In the usual RHC scheme the population of N units is randomly divided into n groups, where n is the size of the sample. In this paper we propose to divide the population under consideration into (n+k) random groups, where k is some positive integer. Then a sample of n groups is selected by using simple random sampling without replacement (SRSWOR). The expressions for the unbiased estimator of the population total, its variance and the unbiased estimate of the variance have been obtained under the proposed scheme. The condition under which the proposed scheme is more efficient than the usual RHC scheme has also been investigated.
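A simulation sketch of the generalized scheme under stated assumptions: random groups of (nearly) equal size, a probability-proportional-to-size draw within each selected group, and an (n+k)/n scaling that follows from a standard unbiasedness argument; the paper's exact estimator and variance expressions may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical population: y values and normalized size measures p (sum to 1).
N = 40
y = rng.gamma(shape=2.0, scale=10.0, size=N)
p = rng.uniform(0.5, 2.0, size=N); p /= p.sum()
Y_true = y.sum()

def generalized_rhc_estimate(y, p, n, k):
    """One draw under the generalized RHC scheme: split the population at
    random into n+k groups, select n groups by SRSWOR, draw one unit from each
    selected group with probability proportional to p within the group, and
    scale by (n+k)/n for unbiasedness.  The scaling factor follows from a
    standard unbiasedness argument; the paper's exact expressions may differ."""
    groups = np.array_split(rng.permutation(len(y)), n + k)   # random grouping
    chosen = rng.choice(n + k, size=n, replace=False)          # SRSWOR of groups
    est = 0.0
    for g in chosen:
        idx = groups[g]
        P_g = p[idx].sum()
        i = rng.choice(idx, p=p[idx] / P_g)                    # PPS draw within group
        est += (P_g / p[i]) * y[i]
    return (n + k) / n * est

draws = [generalized_rhc_estimate(y, p, n=5, k=3) for _ in range(5000)]
print(round(np.mean(draws), 1), "vs true total", round(Y_true, 1))
```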

16.
We propose a unit root test for panels with cross-sectional dependency. We allow a general dependency structure among the innovations that generate the data for each of the cross-sectional units. Each unit may have a different sample size, and therefore unbalanced panels are also permitted in our framework. Yet the test is asymptotically normal and does not require any tabulation of critical values. Our test is based on nonlinear IV estimation of the usual augmented Dickey–Fuller type regression for each cross-sectional unit, using as instruments nonlinear transformations of the lagged levels. The actual test statistic is simply defined as a standardized sum of individual IV t-ratios. We show in the paper that such a standardized sum of individual IV t-ratios has a limiting normal distribution as long as the panels have large individual time series observations and are asymptotically balanced in a very weak sense. The number of cross-sectional units may be arbitrarily small or large. In particular, the usual sequential asymptotics, upon which most of the available asymptotic theories for panel unit root models heavily rely, are not required. Finite sample performance of our test is examined via a set of simulations and compared with that of other commonly used panel unit root tests. Our test generally performs better than the existing tests in terms of both finite sample size and power. We apply our nonlinear IV method to test the purchasing power parity hypothesis in panels.
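A simplified sketch of the test mechanics: for each unit the lagged level is instrumented by an integrable transform F(x) = x·exp(−c|x|), the IV t-ratios are computed without lag augmentation, and their standardized sum is returned. The choice of transform, the rule for c and the omission of augmentation lags are simplifying assumptions, not the paper's full recipe.

```python
import numpy as np

def nonlinear_iv_t(y, c=None):
    """IV t-ratio for the unit root null in a single series, instrumenting the
    lagged level with the integrable transform F(x) = x * exp(-c|x|).
    No lag augmentation is included; c is set by an ad-hoc sample-size rule.
    This is a simplified sketch of the approach, not the paper's full recipe."""
    y = np.asarray(y, dtype=float)
    dy, ylag = np.diff(y), y[:-1]
    T = len(dy)
    if c is None:
        c = 3.0 / (T ** 0.5 * ylag.std())           # ad-hoc scaling of the transform
    F = ylag * np.exp(-c * np.abs(ylag))
    rho = np.sum(F * dy) / np.sum(F * ylag)
    resid = dy - rho * ylag
    s2 = np.sum(resid ** 2) / (T - 1)
    se = np.sqrt(s2 * np.sum(F ** 2)) / np.abs(np.sum(F * ylag))
    return rho / se

def panel_nonlinear_iv_stat(panel):
    """Standardized sum of the individual IV t-ratios; the panel may be
    unbalanced (series of different lengths)."""
    t_ratios = np.array([nonlinear_iv_t(y) for y in panel])
    return t_ratios.sum() / np.sqrt(len(t_ratios))

# Example: 10 independent random walks of unequal length (the null is true)
rng = np.random.default_rng(4)
panel = [np.cumsum(rng.normal(size=rng.integers(80, 150))) for _ in range(10)]
print(round(panel_nonlinear_iv_stat(panel), 2))
```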

17.
During the period from 1998 to 2000, China implemented several new asset write-down regulations that mandate lower of cost or market accounting (LCM) for most non-cash assets. This is a study of the relevance and reliability of those regulations for investors in China. The study measures the association of net asset value with the market value of equity and the association of accounting income with stock return, on both a historical cost accounting (HCA) basis and an LCM basis. A fixed-effects model controlling for both year and firm effects is used in a balanced panel sample. The panel regressions show high levels of explanatory power. LCM values can be relevant but may be measured with sufficient error that they do not improve the prediction of firm values. Reliability is measured using non-nested, overlapping model comparison tests (J and Cox). The paper also considers whether discretionary motivations influence the amount of write-down. The study supports the relevance of the LCM reforms, but finds that reliability is not increased over HCA during the period under study. Reliability appears to be reduced by the voluntary nature of LCM provisions during part of the period and by the effects of opportunism for some firms in the sample.

18.
B. Heiligers, Metrika 38(1): 377–381, 1991
Summary We give a simple proof of the well-known result that a block design (for an additive, fixed-effects block model with v treatments in b blocks) is connected iff its ℰ-matrix has rank v−1.
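A small check of this criterion, assuming that the matrix in question is the usual information (C-) matrix C = diag(r) − N·diag(1/k)·N′ built from the v×b incidence matrix N; this identification with the abstract's ℰ-matrix is an assumption made for illustration.

```python
import numpy as np

def is_connected(N_inc):
    """Check connectedness of a block design from its v x b incidence matrix:
    build the information (C-) matrix C = diag(r) - N diag(1/k) N', where r are
    the treatment replications and k the block sizes, and test rank(C) == v-1."""
    N_inc = np.asarray(N_inc, dtype=float)
    r = N_inc.sum(axis=1)                   # replications of the v treatments
    k = N_inc.sum(axis=0)                   # sizes of the b blocks
    C = np.diag(r) - N_inc @ np.diag(1.0 / k) @ N_inc.T
    v = N_inc.shape[0]
    return np.linalg.matrix_rank(C) == v - 1

# A connected design (blocks {1,2}, {2,3}) vs a disconnected one ({1,2}, {3,4})
print(is_connected([[1, 0], [1, 1], [0, 1]]))            # True
print(is_connected([[1, 0], [1, 0], [0, 1], [0, 1]]))    # False
```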

19.
In dynamic panel regression, when the variance ratio of individual effects to disturbance is large, the system-GMM estimator will have a large asymptotic variance and poor finite sample performance. To deal with this variance ratio problem, we propose a residual-based instrumental variables (RIV) estimator, which uses the residual from regressing Δy_{i,t−1} on as the instrument for the level equation. The proposed RIV estimator is consistent and asymptotically normal under general assumptions. More importantly, its asymptotic variance is almost unaffected by the variance ratio of individual effects to disturbance. Monte Carlo simulations show that the RIV estimator has better finite sample performance than alternative estimators. The RIV estimator generates less finite sample bias than the difference-GMM, system-GMM, collapsing-GMM and Level-IV estimators in most cases. Under RIV estimation, the variance ratio problem is well controlled, and the empirical distribution of its t-statistic is similar to the standard normal distribution for moderate sample sizes.

20.
