Similar Articles (20 results)
1.
Classical optimal strategies are notorious for producing remarkably volatile portfolio weights over time when applied with parameters estimated from data. This is predominantly explained by the difficulty of estimating expected returns accurately. In Lindberg (Bernoulli 15:464–474, 2009), a new parameterization of the drift rates was proposed with the aim of circumventing this difficulty, and a continuous-time mean–variance optimal portfolio problem was solved. This approach was further developed in Alp and Korn (Decis Econ Finance 34:21–40, 2011a) to a jump-diffusion setting. In the present paper, we solve a different portfolio problem under the market parameterization in Lindberg (Bernoulli 15:464–474, 2009). Here, the admissible investment strategies are given as the amounts of money to be held in each stock and are allowed to be adapted stochastic processes. In the references above, the admissible strategies are the deterministic and bounded fractions of the total wealth. The optimal strategy we derive is not the same as in Lindberg (Bernoulli 15:464–474, 2009), but it can still be viewed as investing equally in each of the n Brownian motions in the model. As a consequence of the problem assumptions, the optimal final wealth can become negative. The present portfolio problem is also solved in Alp and Korn (Submitted, 2011b), using the L²-projection approach of Schweizer (Ann Probab 22:1536–1575, 1995). However, our method of proof is direct and much more easily accessible.

2.
Durbin (Biometrika 48:41–55, 1961) proposed a method called random substitution, by which a composite goodness-of-fit problem can be reduced to a simple one. In this paper we provide a method of finding the p-value of any test statistic for a composite goodness-of-fit problem, based on the simulation of a large number of conditional samples, using an analog of Durbin's proposal in a reverse-type application. We analyze a Bayesian chi-square test proposed in Johnson (Ann Stat 32:2361–2384, 2004), which relies on a single randomization, and relate it to Durbin's original method. We also review a related proposal for conditional Monte Carlo simulation in Lindqvist and Taraldsen (Biometrika 92:451–464, 2005) and compare it with our procedure. We illustrate our method with a non-group example introduced in Lindqvist and Taraldsen (Biometrika 90:489–490, 2003).
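As a hedged illustration of the general idea (a generic conditional Monte Carlo p-value, not the authors' random-substitution construction): when testing exponentiality one can condition on the sufficient statistic, since given the sum the normalized observations are uniform on the simplex, so exact conditional null samples can be drawn without knowing the nuisance parameter.

```python
import numpy as np

# Generic conditional Monte Carlo p-value sketch (not the paper's
# random-substitution method). H0: data are i.i.d. Exponential(theta),
# theta unknown. Given the sufficient statistic S = sum(x), the vector
# x/S is uniform on the simplex, so conditional null samples can be
# drawn exactly, with no dependence on theta.

def greenwood(x):
    """Greenwood-type statistic: sum of squared normalized observations."""
    p = x / x.sum()
    return np.sum(p ** 2)

def conditional_mc_pvalue(x, n_sim=10_000, seed=None):
    rng = np.random.default_rng(seed)
    n = len(x)
    t_obs = greenwood(x)
    # Normalized i.i.d. exponentials are uniform on the simplex, which is
    # exactly the conditional null law; the statistic is scale-free.
    sims = rng.exponential(size=(n_sim, n))
    t_sim = np.sum((sims / sims.sum(axis=1, keepdims=True)) ** 2, axis=1)
    p_hi = (1 + np.sum(t_sim >= t_obs)) / (n_sim + 1)
    p_lo = (1 + np.sum(t_sim <= t_obs)) / (n_sim + 1)
    return min(1.0, 2 * min(p_hi, p_lo))   # two-sided Monte Carlo p-value

x = np.random.default_rng(1).gamma(shape=3.0, size=50)  # non-exponential data
print(conditional_mc_pvalue(x, seed=0))   # small p-value: reject H0
```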

3.
A typical microarray experiment often involves comparisons of hundreds or thousands of genes. Since a large number of genes are compared, simple use of a significance test without adjustment for multiple comparisons could lead to a large chance of false positive findings. In this context, Tsai et al. (Biometrics 59:1071–1081, 2003) presented a model that studies the overall error rate when testing multiple hypotheses. This model involves the distribution of a sum of non-independent Bernoulli trials, which is approximated using a beta-binomial structure. Instead of using a beta-binomial model, in this paper we derive the exact distribution of a sum of non-independent and non-identically distributed Bernoulli random variables. The distribution obtained is used to compute conditional false discovery rates, and the results are compared to those reported in Table 3 of Tsai et al. (Biometrics 59:1071–1081, 2003).
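As a hedged illustration: the sketch below computes the exact distribution only in the simpler independent, non-identically distributed case (the Poisson–binomial distribution) by iterated convolution; the paper's contribution is the harder non-independent case.

```python
import numpy as np

# Illustrative sketch: exact distribution of a sum of independent but
# non-identically distributed Bernoulli variables (the Poisson-binomial
# distribution), computed by iterated convolution. The paper treats the
# harder non-independent case; this covers only the independent one.

def poisson_binomial_pmf(probs):
    """pmf[k] = P(sum of Bernoulli(p_i) = k) for independent trials."""
    pmf = np.array([1.0])
    for p in probs:
        # Convolve the current pmf with the two-point law {0: 1-p, 1: p}.
        pmf = np.append(pmf * (1 - p), 0.0) + np.append(0.0, pmf * p)
    return pmf

probs = [0.1, 0.4, 0.7]   # per-hypothesis rejection probabilities (toy values)
pmf = poisson_binomial_pmf(probs)
print(pmf, pmf.sum())     # P(0 rejections), ..., P(3 rejections); sums to 1
```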

4.
In this note we study the National Resident Matching Program (NRMP) algorithm in the US market for physicians. We report on two problems that concern the presence of couples, a feature explicitly incorporated in the new NRMP algorithm (cf. Roth and Peranson, Am Econ Rev 89:748–780, 1999). First, we show that the new NRMP algorithm may fail to find an existing stable matching, even when couples' preferences are 'responsive', i.e., when Gale and Shapley's (Am Math Monthly 69:9–15, 1962) deferred acceptance algorithm (on which the old NRMP algorithm is based) is applicable. Second, we demonstrate that the new NRMP algorithm may also be manipulated by couples acting as singles.
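For reference, a minimal sketch of the deferred acceptance algorithm cited above, for singles only; couples, the feature the note is about, are precisely what this simple version cannot handle. The names and preference lists are toy inputs.

```python
# Minimal sketch of Gale-Shapley deferred acceptance (doctor-proposing,
# one position per hospital, singles only).

def deferred_acceptance(doctor_prefs, hospital_prefs):
    """doctor_prefs[d] = list of hospitals, best first; similarly hospitals."""
    rank = {h: {d: i for i, d in enumerate(prefs)}
            for h, prefs in hospital_prefs.items()}
    match = {}                          # hospital -> doctor
    next_choice = {d: 0 for d in doctor_prefs}
    free = list(doctor_prefs)
    while free:
        d = free.pop()
        if next_choice[d] >= len(doctor_prefs[d]):
            continue                    # d has exhausted their list
        h = doctor_prefs[d][next_choice[d]]
        next_choice[d] += 1
        if d not in rank[h]:
            free.append(d)              # h finds d unacceptable
        elif h not in match:
            match[h] = d                # h tentatively accepts d
        elif rank[h][d] < rank[h][match[h]]:
            free.append(match[h])       # h trades up; old doctor re-proposes
            match[h] = d
        else:
            free.append(d)              # d rejected, proposes further down
    return match

doctors = {"d1": ["h1", "h2"], "d2": ["h1", "h2"]}
hospitals = {"h1": ["d2", "d1"], "h2": ["d1", "d2"]}
print(deferred_acceptance(doctors, hospitals))  # {'h1': 'd2', 'h2': 'd1'}
```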

5.
Warner's (J Am Stat Assoc 60:63–69, 1965) pioneering 'randomized response' (RR) technique (RRT), useful in unbiasedly estimating the proportion of people bearing a sensitive characteristic, is based exclusively on simple random sampling (SRS) with replacement (WR). Numerous developments that follow from it also use SRSWR and employ in estimation the mean of the RRs yielded by each person every time he/she is drawn into the sample. We examine how the accuracy of estimation alters when instead we use the mean of the RRs from each distinct person sampled, and alternatively when Horvitz and Thompson's (HT, J Am Stat Assoc 47:663–685, 1952) method is employed for an SRSWR eliciting only a single RR from each. Arnab's (1999) approach of using repeated RRs from each distinct person sampled is not taken up here, to avoid algebraic complexity. Pathak's (Sankhyā 24:287–302, 1962) results for direct response (DR) from SRSWR are modified with the use of such RRs. Through simulations we compare the relative accuracy of estimation of the above three alternative methods.
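Warner's original estimator is classical: with design probability p, each respondent answers the sensitive question truthfully with probability p and its complement otherwise, so P(yes) = pπ + (1 − p)(1 − π) and π̂ = (λ̂ − (1 − p))/(2p − 1) is unbiased. The sketch below simulates this baseline under SRSWR; it does not reproduce the paper's distinct-person or Horvitz–Thompson comparisons.

```python
import numpy as np

# Sketch of Warner's (1965) randomized response estimator under SRSWR.
# With probability p the respondent answers "Do you bear trait A?";
# otherwise "Do you NOT bear trait A?". Then P(yes) = p*pi + (1-p)*(1-pi),
# so pi_hat = (lambda_hat - (1-p)) / (2p - 1) is unbiased for pi.
# Classical setup only, not the paper's variants.

rng = np.random.default_rng(0)
N, pi_true, p, n = 100_000, 0.15, 0.7, 2_000

population = rng.random(N) < pi_true            # true sensitive statuses
sample = rng.integers(0, N, size=n)             # SRS with replacement
truthful = rng.random(n) < p                    # which question was drawn
answers = np.where(truthful, population[sample], ~population[sample])

lam_hat = answers.mean()
pi_hat = (lam_hat - (1 - p)) / (2 * p - 1)
print(f"estimate {pi_hat:.3f} vs truth {pi_true}")
```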

6.
Yan Liu, Min-Qian Liu. Metrika 2012, 75(1):33–53
Supersaturated designs (SSDs) have been highly valued in recent years for their ability to screen out important factors in the early stages of experiments. Recently, Liu and Lin (Stat Sinica 19:197–211, 2009) proposed a method to construct optimal mixed-level SSDs from smaller multi-level SSDs and transposed orthogonal arrays (OAs). This paper extends their method to construct more equidistant optimal SSDs by replacing the multi-level SSDs and transposed OAs with mixed-level SSDs and general transposed difference matrices, respectively, and then proposes two practical methods for constructing weak equidistant SSDs based on this extended method. A large number of new optimal SSDs can be constructed by these three methods. Some examples are provided, and more new designs are listed in the Appendix for practical use.

7.
We study the local turnpike property for two classes of infinite-horizon discrete-time deterministic maximization problems which have common applications, e.g., in optimal growth theory. We follow a functional-analytic approach and rely on an implicit function theorem for the space of sequences which converge to zero. We assume the existence of an optimal path which is not necessarily a steady state. Relying on material developed in Blot and Crettez ("On the smoothness of optimal paths", Decis Econ Finance 27:1–34, 2004), we provide conditions under which a variation in the initial conditions (i.e., capital stock and discount rate) yields an optimal solution which converges toward a reference solution as time becomes infinite. We also provide new results on bounded solutions of difference equations. We gratefully thank the editor, Silvano Holzer, and two anonymous referees for remarks and advice on a previous version of this paper.

8.
In this article, we have merged, or intersected, two typologies: Greene's (Res Sch 13(1):93–98, 2006) four-domain typology for developing a methodological or research paradigm in the social and behavioral sciences, and Onwuegbuzie and Johnson's (Res Sch 13(1):48–63, 2006) nine-component typology for assessing mixed research legitimation. We argue that merging or interconnecting these typologies presents a framework for assessing legitimation in mixed research. Specifically, we demonstrate how the nine types of legitimation map onto Greene's (Res Sch 13(1):93–98, 2006) four methodological domains, and illustrate how legitimation in mixed research, rather than being viewed as a procedure that occurs at a specific step of the mixed research process, is better conceptualized as a continuous, iterative, interactive, and dynamic process. Additionally, in presenting this framework, we hope to reduce misperceptions that some researchers have voiced about mixed research.

9.
This study examines issues of data quality in a survey conducted on a nationwide probability sample of 8985 students in the last four years of high school education in Greece. Respondents completed an extensive questionnaire whose main topic of investigation was the use of licit and illicit substances (tobacco, alcohol, cannabis, etc.). We examined the effect of sex, age and school performance on data quality. Specifically, we related these factors to the probabilities of correctly observing filters in the questionnaire, of certain inconsistencies in responses, of reporting difficulty in understanding the questions, and of being able to answer honestly. It was found that all these factors have strong effects. Boys' responses presented more problems than girls' (median odds ratio in a series of separate logistic regressions = 1.35), and younger respondents' more than older ones' (median odds ratio 1.34 for ages 13–14 and 1.06 for ages 15–16, compared to ages 17–18). The strongest effect was related to school performance: compared to the best students (school marks 18–20), median odds ratios were 1.31 for marks 15–17, 1.76 for 12–14 and 3.25 for 10–11. Implications for questionnaire design are discussed.
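A minimal sketch of the kind of analysis described (a logistic regression of a binary data-quality indicator on respondent characteristics, with effects reported as odds ratios); variable names and data are toy stand-ins, not the survey data.

```python
import numpy as np
import statsmodels.api as sm

# Toy sketch: logistic regression of a binary data-quality indicator on
# respondent characteristics, effects reported as odds ratios (exp of the
# coefficients). Data-generating process is invented for illustration.

rng = np.random.default_rng(0)
n = 2_000
male = rng.integers(0, 2, n)
low_marks = rng.integers(0, 2, n)
# Assumed toy process: boys and weaker students err more often.
logit = -2.0 + 0.3 * male + 1.2 * low_marks
error = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(np.column_stack([male, low_marks]))
fit = sm.Logit(error.astype(float), X).fit(disp=0)
print(np.exp(fit.params[1:]))  # odds ratios for 'male' and 'low_marks'
```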

10.
In this paper, we study the asymptotic properties of the simulation extrapolation (SIMEX) based variance estimation proposed by Wang et al. (J R Stat Soc Ser B 71:425–445, 2009). We first investigate the asymptotic normality of the parameter estimator for a general parametric variance function, and of the local linear estimator for a nonparametric variance function, when permutation SIMEX (PSIMEX) is used. The asymptotically optimal bandwidth selection with respect to approximate mean integrated squared error (AMISE) for the nonparametric estimator is also studied. We finally discuss constructing confidence intervals/bands for the parameter/function of interest. As an alternative to applying the asymptotic results with a normal approximation, we recommend a nonparametric Monte Carlo algorithm that avoids estimating the asymptotic variance of the estimator. Simulation studies are carried out for illustration.
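For orientation, a sketch of generic SIMEX (not the paper's permutation variant): add measurement error inflated by factors λ, re-estimate, and extrapolate the estimates back to λ = −1. All model settings below are illustrative.

```python
import numpy as np

# Generic SIMEX sketch (not the paper's permutation-SIMEX variant).
# Model: Y = b0 + b1*X + noise, but we observe W = X + U with known
# error variance su2. Naive OLS on W is attenuated toward zero; SIMEX
# adds extra error at levels lam, re-fits, and extrapolates to lam = -1.

rng = np.random.default_rng(0)
n, b0, b1, su2 = 5_000, 1.0, 2.0, 0.5
x = rng.normal(size=n)
y = b0 + b1 * x + rng.normal(scale=0.3, size=n)
w = x + rng.normal(scale=np.sqrt(su2), size=n)

def ols_slope(xx, yy):
    return np.polyfit(xx, yy, 1)[0]

lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for lam in lams:
    B = 50  # average over B remeasurements to tame simulation noise
    est = [ols_slope(w + rng.normal(scale=np.sqrt(lam * su2), size=n), y)
           for _ in range(B)]
    slopes.append(np.mean(est))

# Quadratic extrapolation in lam, evaluated at lam = -1.
coef = np.polyfit(lams, slopes, deg=2)
print("naive:", slopes[0], "SIMEX:", np.polyval(coef, -1.0), "truth:", b1)
```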

11.
This article studies the design of optimal mechanisms to regulate entry in natural oligopoly markets, assuming the regulator is unable to control the behavior of firms once they are in the market. We adapt the Clarke–Groves mechanism, characterize the optimal mechanism that maximizes the weighted sum of expected social surplus and expected tax revenue, and show that these mechanisms avoid budget deficits and prevent excessive entry. We would like to thank seminar participants at Bonn and Berlin, in particular Peter Bank, Wieland Müller, and Urs Schweizer, the two anonymous referees, and the associate editor for most useful and exceptionally detailed comments. Financial support was received from the Deutsche Forschungsgemeinschaft, SFB 373 ("Quantifikation und Simulation ökonomischer Prozesse"), Humboldt-Universität zu Berlin.
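For background, a textbook sketch of the Clarke–Groves (VCG) idea the authors adapt: admit the welfare-maximizing set of entrants and charge each entrant the externality its presence imposes on everyone else. The surplus schedule and costs are invented; note that plain VCG can run a budget deficit, one of the issues the paper's optimal mechanisms are designed to avoid.

```python
from itertools import chain, combinations

# Textbook Clarke-Groves (VCG) sketch for an entry problem: gross social
# surplus depends on how many firms enter; firms privately bear fixed
# entry costs. The regulator admits the welfare-maximizing set; each
# entrant's Clarke payment is the externality it imposes on everyone
# else. Generic mechanism only, not the authors' optimal variant.

GROSS = {0: 0.0, 1: 10.0, 2: 14.0, 3: 15.0}  # gross surplus by entrant count

def welfare(entrants, costs):
    return GROSS[len(entrants)] - sum(costs[f] for f in entrants)

def all_subsets(firms):
    return chain.from_iterable(combinations(firms, k)
                               for k in range(len(firms) + 1))

def vcg_entry(costs):
    firms = list(costs)
    best = max(all_subsets(firms), key=lambda s: welfare(s, costs))
    w_best = welfare(best, costs)
    payments = {}
    for f in best:
        others = [g for g in firms if g != f]
        w_without_f = max(welfare(s, costs) for s in all_subsets(others))
        # Others' welfare under the chosen outcome excludes f's own cost.
        payments[f] = w_without_f - (w_best + costs[f])
    return best, payments

costs = {"A": 2.0, "B": 3.0, "C": 6.0}       # reported fixed entry costs
print(vcg_entry(costs))  # A and B enter; negative payments are subsidies
```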

12.
Many techniques for planning manpower resources are found in the literature (see, for instance, Bartholomew and Forbes (Statistical Techniques for Manpower Planning, Wiley, New York, 1979); Gunz (Organiz Stud 9(4):529–554, 1988); Becker and Huselid (Human Resour Manage 38:287–301, 1999); Wagner et al. (J Manage Med 14(5/6):383–405, 2000); Harris and Ogbonna (J Business Res 51:157–166, 2001); Rogg et al. (J Manage 27:431–449, 2001), among others). However, we have not seen in the literature an empirical study of the proper application of optimal control, which is considered to be the most efficient method for multi-objective programming. With this in mind, we analyse in this paper how optimal control can be applied to manpower planning. For this purpose, and to facilitate the presentation, we first adopt a comparatively simple dynamic system (plant), with an analytical presentation of stocks and flows. We then formulate an optimal control problem aiming to achieve some preassigned targets in the most satisfactory way. These targets mainly refer to a desirable trajectory of the plant stocks over time, so as to fully satisfy the needs for human resources over the planning horizon. Finally, we present a method of solving the formulated control problem based on the generalized inverse (Lazaridis, Qual Quant 20:297–306, 1986). For successful management, it is very important that policy makers know the effects of their policies and can determine the optimal path of the state variables (i.e., the ones describing the system) before the plan is realized, so that they can reform their strategies, reallocate resources and arrange the infrastructure accordingly, where necessary, as can be deduced from the optimal control solution.
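A hedged sketch of this kind of computation: a linear stocks-and-flows system steered toward a preassigned target trajectory by solving for the controls with the Moore–Penrose generalized inverse. The transition matrices and targets below are illustrative, not taken from the paper.

```python
import numpy as np

# Illustrative sketch: linear manpower system x_{t+1} = A x_t + B u_t
# (x = staff stocks per grade, u = controls such as recruitment), steered
# toward a preassigned target trajectory using the Moore-Penrose
# generalized inverse. Matrices and targets are invented for illustration.

A = np.array([[0.80, 0.00],    # fraction staying in grade 1
              [0.15, 0.90]])   # promotion to grade 2 / staying in grade 2
B = np.array([[1.0, 0.0],      # recruitment into grade 1
              [0.0, 1.0]])     # recruitment into grade 2

x = np.array([100.0, 50.0])
targets = [np.array([105.0, 55.0]), np.array([110.0, 60.0]),
           np.array([115.0, 65.0])]

for t, x_target in enumerate(targets):
    # Minimum-norm control reaching the target: u = B^+ (x* - A x).
    u = np.linalg.pinv(B) @ (x_target - A @ x)
    x = A @ x + B @ u
    print(f"t={t + 1}: controls {np.round(u, 2)}, stocks {np.round(x, 2)}")
```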

13.
An important issue when conducting stochastic frontier analysis is how to choose a proper parametric model, which includes choices of the functional form of the frontier function, the distributions of the composite errors, and the exogenous variables. In this paper, we extend the likelihood ratio test of Vuong (Econometrica 57(2):307–333, 1989) and Takeuchi's (Suri-Kagaku (Math Sci) 153:12–18, 1976) model selection criterion to stochastic frontier models. The most attractive feature of this test is that it can be used not only for testing non-nested models but also remains applicable even when the general model is misspecified. Finally, we demonstrate how to apply this test to the Indian farm data used by Battese and Coelli (J Prod Anal 3:153–169, 1992; Empir Econ 20(2):325–332, 1995) and Alvarez et al. (J Prod Anal 25:201–212, 2006).
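The Vuong statistic itself is simple: with pointwise log-likelihood differences d_i between the two fitted models, √n · d̄/s_d is asymptotically standard normal under the null that the models are equally close to the truth. Below is a toy sketch on a non-frontier example (normal vs. Laplace), purely for illustration.

```python
import numpy as np
from scipy import stats

# Sketch of Vuong's (1989) non-nested likelihood ratio test on a toy
# example: normal vs. Laplace for the same data (not the paper's
# stochastic frontier setting). Under H0 (models equally close to the
# truth), sqrt(n) * mean(d) / sd(d) is asymptotically N(0, 1), where d_i
# are pointwise log-likelihood differences at the fitted parameters.

rng = np.random.default_rng(0)
y = rng.standard_t(df=5, size=500)   # true model is neither candidate

mu_n, sd_n = y.mean(), y.std()       # normal MLE (closed form)
loc_l, scale_l = stats.laplace.fit(y)  # Laplace MLE

d = stats.norm.logpdf(y, mu_n, sd_n) - stats.laplace.logpdf(y, loc_l, scale_l)
z = np.sqrt(len(y)) * d.mean() / d.std(ddof=1)
p = 2 * stats.norm.sf(abs(z))
print(f"z = {z:.2f}, p = {p:.3f}")   # sign of z says which model fits better
```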

14.
Dey and Midha (Biometrika 83(2):484–489, 1996) constructed optimal block designs for complete diallel cross experiments using triangular partially balanced incomplete block designs with two associate classes. They listed optimal block designs for numbers of lines v in the range 5 ≤ v ≤ 10. In this paper, we propose additional optimal block designs for complete diallel cross experiments using two-associate-class partially balanced block designs for some additional values of v. Our method yields designs in both proper and non-proper settings for complete diallel cross experiments. The proper and non-proper designs are optimal in the sense of Kempthorne (Genetics 41:451–459, 1956), and the non-proper designs are universally optimal in the sense of Kiefer (A survey of statistical design and linear models, North-Holland, Amsterdam, 1975). A list of practically important designs is also given.

15.
In this paper, we implement the conditional difference asymmetry model (CDAS) for square tables with nominal categories proposed by Tomizawa et al. (J Appl Stat 31(3):271–277, 2004), using the non-standard log-linear model formulation approach. The implementation is carried out by refitting the model to the 3 × 3 table in Tomizawa et al. (J Appl Stat 31(3):271–277, 2004). We extend this approach to a larger 4 × 4 table of religious affiliation. We further calculate the measure of asymmetry along with its asymptotic standard error and confidence bounds. The procedure is implemented with SAS PROC GENMOD but can also be implemented in SPSS following the discussion in Lawal (J Appl Stat 31(3):279–303, 2004; Qual Quant 38(3):259–289, 2004).

16.
In this paper, I use a unique proprietary dataset from the foreign exchange market to examine the existing hypotheses on price clustering. I find that market uncertainty plays an important role in price clustering. Moreover, since trading behavior changes under different market conditions, market timing also affects the likelihood of price clustering. The results support both the price resolution hypothesis (Ball et al., J Futures Mark 5:29–43, 1985) and the negotiation hypothesis (Harris, Rev Financ Stud 4:389–415, 1991). Since the data cover the interbank foreign exchange market, the market for professional bank dealers, the attraction hypothesis is a less plausible explanation for price clustering in the foreign exchange market.
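A minimal sketch of how price clustering is typically detected: tabulate the final quoted digit and test the counts against a uniform distribution. The quotes below are simulated, not the proprietary FX data.

```python
import numpy as np
from scipy import stats

# Illustrative sketch of detecting price clustering: tabulate the final
# quoted digit and run a chi-square goodness-of-fit test against a
# uniform distribution. Toy quotes, not the proprietary FX dataset.

rng = np.random.default_rng(0)
# Simulate quotes whose last digit favors 0 and 5 (classic clustering).
last_digits = rng.choice(10, size=5_000,
                         p=[0.22, 0.06, 0.06, 0.06, 0.06,
                            0.16, 0.06, 0.06, 0.06, 0.20])

counts = np.bincount(last_digits, minlength=10)
chi2, p = stats.chisquare(counts)   # H0: all ten final digits equally likely
print(counts, f"chi2 = {chi2:.1f}, p = {p:.2g}")
```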

17.
Since Solow (Q J Econ 70:65–94, 1956), the economic literature has widely accepted innovation and technological progress as the central drivers of long-term economic growth. From the microeconomic perspective, this has led to the idea that the growth effects on the macroeconomic level should be reflected in greater competitiveness of firms. Although innovation effort does not always translate into greater competitiveness, it is recognized that innovation is, in an appropriate sense, unique and differs from other inputs like labor or capital. Nonetheless, this uniqueness is often left unspecified. We analyze two arguments rendering innovation special, the first related to partly non-discretionary innovation input levels and the second to the induced increase in the firm's competitiveness on the global market. Methodologically, the analysis is based on restriction tests in non-parametric frontier models, where we use and extend tests proposed by Simar and Wilson (Commun Stat Simul Comput 30(1):159–184, 2001; J Prod Anal, forthcoming, 2010). The empirical data are taken from the German Community Innovation Survey 2007 (CIS 2007), where we focus on mechanical engineering firms. Our results are consistent with the explanation that firms are unable to freely choose the level of innovation inputs. However, we do not find significant evidence that increased innovation activities correspond to an increase in the ability to serve the global market.

18.
We assess market valuation of airline convertible preferred stocks using a contingent claims valuation model that was extensively tested by Ramanlal et al. (Rev Quant Financ Account 10:303–319, 1998). Our sample consists of 4,096 daily price observations of 11 convertible preferred stocks issued by U.S. airlines in 1980–1991. For each convertible we estimate daily model prices for two years after issuance and compare them with market prices by calculating pricing errors. While the entire sample's mean pricing error is −3.8%, the panel data analysis and the mean pricing errors of the sub-samples indicate that the undervaluation is much more severe in the first six months of trading. The results suggest that airlines leave about 10% on the table when they raise capital by issuing convertible securities.

19.
Tao C. J., Chen S. C., Chang L. Quality and Quantity 2009, 43(4):677–694
Internet service providers (ISPs) now have a great impact on everyday life and on the economy: almost everything is closely tied to internet services, and the rapid development of information technology has further lowered ISPs' costs while improving service speed and quality. This has broadened the customer base of ISPs and intensified competition in the industry. Because of the entry barriers of high investment and advanced technology, the ISP industry has become a highly concentrated market, in which firms must pay close attention to both competition and cooperation with their rivals. Moreover, most of the relevant literature focuses on measuring and analyzing the satisfaction of a company's own customers, and little of it considers how to include the satisfaction of competitors' customers in the measurement and analysis. As the strategic maxim of Sun Tzu states, "Know yourself and know your enemy, and you will win every battle": a firm must consider the strengths, weaknesses, opportunities and threats in the customer satisfaction of both itself and its competitors. Motivated by these observations, this article applies the DMAIC (Define, Measure, Analyze, Improve, Control) methodology of Six Sigma to customer satisfaction in the ISP industry. We first use a five-point scale to define the network service quality items of the ISP, covering both the company itself and its competitors, and measure the "satisfaction" and "importance" of these quality items. We then use a performance evaluation matrix and a strength–weakness matrix to analyze the strengths, weaknesses, opportunities and threats of the company and its competitors, focus on the quality items that fall outside the two-sigma limits, and use a strength–weakness strategic chart to establish an improvement strategy. Finally, the strength–weakness matrix chart serves as the control tool to verify and sustain the improved performance. The complete and easy-to-use measurement and improvement model provided in this article enables an enterprise to evaluate and analyze, effectively and quickly, the service quality of itself and of its competitors, and, at a reasonable cost and with the market's competitive opportunities and threats in view, to improve customer service quality, raise overall customer satisfaction, and create a strong, high value-added quality competitive advantage.
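A hedged sketch of the Measure/Analyze step described (invented items and ratings, not the paper's survey instrument): rate each quality item on a five-point scale and flag item means falling outside two-sigma limits for the Improve phase.

```python
import numpy as np

# Hedged sketch of the measurement/analysis step: each service quality
# item is rated on a five-point scale; item means falling outside
# two-sigma limits (computed here across the item means) are flagged for
# the Improve phase. Items and ratings are invented for illustration.

rng = np.random.default_rng(0)
items = ["speed", "uptime", "latency", "coverage",
         "support", "billing", "installation", "price"]
ratings = rng.integers(1, 6, size=(200, len(items)))   # 200 respondents, 1-5
ratings[:, items.index("support")] = rng.integers(1, 3, size=200)  # weak item

means = ratings.mean(axis=0)
center, sigma = means.mean(), means.std(ddof=1)
lo, hi = center - 2 * sigma, center + 2 * sigma        # 2-sigma limits

for name, m in zip(items, means):
    status = "FLAG for Improve phase" if not lo <= m <= hi else "ok"
    print(f"{name:12s} mean={m:.2f}  {status}")
```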

20.
We show that the Hotelling–Lau elasticity of substitution, an extension of the Allen–Uzawa elasticity to allow for optimal output-quantity (or utility) responses to changes in factor prices, inherits all of the failings of the Allen–Uzawa elasticity identified by Blackorby and Russell (Am Econ Rev 79:882–888, 1989). An analogous extension of the Morishima elasticity of substitution to allow for output quantity changes preserves the salient properties of the original Hicksian notion of elasticity of substitution. We thank Paolo Bertoletti for drawing our attention to the issue addressed in this paper and for his comments on an earlier draft.
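For reference, the textbook definitions behind the comparison, stated for a twice-differentiable cost function C(p, y) with input demands x_i = ∂C/∂p_i (these are the standard Allen–Uzawa and Morishima notions, not the paper's Hotelling–Lau extension):

```latex
% Standard definitions for a cost function C(p,y) with x_i = \partial C/\partial p_i;
% the textbook Allen-Uzawa and Morishima elasticities underlying the comparison,
% not the paper's Hotelling-Lau extension.
\[
  \sigma^{A}_{ij} \;=\; \frac{C \, C_{ij}}{C_i \, C_j}
  \qquad\text{(Allen--Uzawa)},
\]
\[
  \sigma^{M}_{ij} \;=\; \frac{p_j C_{ij}}{C_i} - \frac{p_j C_{jj}}{C_j}
  \;=\; \varepsilon_{ij} - \varepsilon_{jj}
  \qquad\text{(Morishima)},
\]
where $C_i = \partial C/\partial p_i$, $C_{ij} = \partial^2 C/\partial p_i \partial p_j$,
and $\varepsilon_{ij} = \partial \ln x_i / \partial \ln p_j$ is the constant-output
cross-price elasticity of demand for input $i$.
```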
