Similar Articles
 20 similar articles found.
1.
J. Ledolter. Metrika, 1979, 26(1): 43-56
Wold's decomposition theorem [Wold] states that every weakly stationary stochastic process can be written as a linear combination of orthogonal shocks. For practical reasons, however, it is desirable to employ models which use parameters parsimoniously. Box and Jenkins [1970] show how parsimony can be achieved by representing the linear process in terms of a small number of autoregressive and moving average terms (ARIMA models). The Gaussian hypothesis assumes that the shocks follow a normal distribution with fixed mean and variance; in this case the process is characterized by its first and second order moments. The normality assumption seems reasonable for many kinds of series. However, it was pointed out by Kendall [1953], Mandelbrot [1963, 1967], Fama [1965], and Mandelbrot and Taylor [1967] that, particularly for stock price data, the distribution of the shocks appears leptokurtic. In this paper we investigate the sensitivity of ARIMA models to non-normality of the distribution of the shocks. We suppose that the distribution function of the shocks is a member of the symmetric exponential power family, which includes the normal as well as leptokurtic and platykurtic distributions. A Bayesian approach is adopted, and the inference robustness of ARIMA models is discussed with respect to
  1. the estimation of parameters, and
  2. the forecasting of future observations.
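As a rough illustration of this setting (our own sketch, not code from the paper), the following Python snippet simulates an AR(1) recursion driven by shocks from the symmetric exponential power family, here via scipy's gennorm distribution; shape 2 gives Gaussian shocks, values below 2 give leptokurtic shocks, values above 2 give platykurtic ones. Names and parameter values are illustrative only.

import numpy as np
from scipy.stats import gennorm

def simulate_ar1_ep(phi, shape, n, scale=1.0, seed=0):
    # AR(1) process x_t = phi * x_{t-1} + e_t with exponential power shocks.
    # shape = 2: normal; shape < 2: heavy-tailed; shape > 2: light-tailed.
    shocks = gennorm.rvs(shape, scale=scale, size=n, random_state=seed)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + shocks[t]
    return x

gaussian_case = simulate_ar1_ep(phi=0.7, shape=2.0, n=500)
leptokurtic_case = simulate_ar1_ep(phi=0.7, shape=1.2, n=500)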

2.
We consider a bonus policy adopted by an insurance company and study the behaviour of the company's expected loss as the size of the bonus varies, in the cases in which:
  1. the function L(s), which expresses the minimum level of claim size that cannot be concealed at time s, is linear in s;
  2. the function L(s) maximizes the expected gain of each individual policyholder.

3.
S. Bagchi. Metrika, 1987, 34(1): 95-105
The E-optimality of the following designs, within the class of all proper and connected designs with given b, k and v under the mixed effects model, is established:
  1. A group divisible design with λ2 = λ1 + 1.
  2. A group divisible design with λ1 = λ2 + 1 and group size 2.
  3. A linked block design.
  4. The dual of design (1).
  5. The dual of design (2).
All these designs are known to satisfy the same optimality property under the fixed effects model when k < v, while design (1) is known to be E-optimal even when k > v. From the results proved here, the E-optimality of designs (2), (3), (4) and (5) under the fixed effects model when k > v also follows.

4.
P. C. Bagga. Metrika, 1973, 20(1): 36-40
Bagga [1967] and Johnson [1954] developed algorithms for finding an optimum sequence when n jobs require processing on two machines. The criteria chosen by them were minimum waiting time of jobs and minimum elapsed time, respectively. All the n jobs were assumed to be of the same type, i.e., all had to be processed on two machines A and B in the order AB. This paper presents a procedure for finding the optimal schedule of the jobs when the jobs may be of the following types (a sketch of the classical two-machine rule that such procedures build on is given after the list):
  1. Jobs which are to be processed on only one of the machines.
  2. Jobs which require processing on the two machines A and B in the order AB; and
  3. Jobs which require processing on the two machines A and B in the order BA.
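For reference, a minimal Python sketch of Johnson's [1954] rule for the pure AB case, which minimizes elapsed time when every job is processed on machine A and then machine B; the treatment of one-machine jobs and BA-type jobs in the paper is not reproduced here, and the job data below are invented for illustration.

def johnson_order(jobs):
    # jobs: list of (name, a_time, b_time); all jobs run on A first, then B.
    # Jobs with a_time <= b_time go first, in increasing order of a_time;
    # the remaining jobs go last, in decreasing order of b_time.
    front = sorted((j for j in jobs if j[1] <= j[2]), key=lambda j: j[1])
    back = sorted((j for j in jobs if j[1] > j[2]), key=lambda j: j[2], reverse=True)
    return front + back

def elapsed_time(sequence):
    # Total makespan of a sequence on the two-machine flow shop A -> B.
    a_done = b_done = 0
    for _name, a, b in sequence:
        a_done += a
        b_done = max(b_done, a_done) + b
    return b_done

jobs = [("J1", 3, 6), ("J2", 5, 2), ("J3", 1, 2), ("J4", 6, 6)]
order = johnson_order(jobs)
print([name for name, _, _ in order], elapsed_time(order))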

5.
We present a new type of Artificial Neural Network: the Self-Reflexive Networks. We state their theoretical presuppositions; their dynamics is analogous to that ascribed to autopoietic systems: self-referentiality, unsupervised learning, and unintentionally cooperative and contractual activity of their own units. We also hypothesize a new concept of perception. We present the basic equations of Self-Reflexive Networks and new concepts such as the dynamic target, Re-entry with dedicated and fixed connections, and Meta-Units. We then experiment with a specific type of Self-Reflexive Network, the Monodedicated, on the interpretation of a toy-DB, and we hint at other experiments already carried out, in progress, and planned. From the applicative work presented here, a few specific features and novelties of this type of neural network emerge:
  1. the capability of answering complex, strange, wrong or imprecise questions through the same algorithms by which the learning phase took place;
  2. the capability of spontaneously transforming its own learning inaccuracy into analogical capability and original self-organization capability;
  3. the capability of spontaneously integrating the models experienced at different moments into an achronic hyper-model;
  4. the capability of behaving as if it had explored a decision graph of large dimensions, both in depth and in breadth, and consequently of behaving as an Addressing Memory for self-dynamic Contents;
  5. the capability of always learning, rapidly and in any case, regardless of the complexity of the learning patterns;
  6. the capability of answering simultaneously from different points of view, behaving in this case as a network that builds several similarity models for each vector-stimulus it receives;
  7. the capability of adjusting, in a biunivocal way, each question to the database being consulted and each database to the questions submitted to it; the consequence is the continuous creation of new answering models;
  8. the capability of building, during the learning phase, a weights matrix that provides a subconceptual representation of the bidirectional relations between each pair of input variables;
  9. the capability, through the Meta-Units, of integrating into a unitary typology nodes with different saturation speeds and therefore with different memories: while the SR units are short-memory nodes, since each new stimulus resets the previous one, the Meta-Units memorize the different SR stimuli over time, functioning as a medium-term memory. This should confirm that medium-term memory is of a different level from immediate memory and is based only upon relations among perceptive stimuli distributed in parallel and in sequence. In this context the weights matrix constitutes the SR long-term memory, and in this sense it will be worth devising a method through which the Meta-Units can influence the weights matrix over time. In any case, the SR contains service or filter nodes and learning nodes that act as weights (the Meta-Units).

6.
Continuing the research whose first part was published in the previous issue of this journal, we deal with sequences of exhaustive summaries for predictive purposes (r.e.f.p.) and address the following problems:
  1. given a predictive distribution for which an r.e.f.p. exists, is it possible to identify a hypothetical model compatible with it?
  2. if so, what is the connection between such an r.e.f.p. and the possible exhaustive summaries in the classical sense relative to the hypothetical model thus identified?
Finally, we analyse, in a rather informal and therefore further developable way, the problem of the analytic expression of an r.e.f.p.

7.
In this note we study a functional equation which has interesting applications to probability theory. Two possible applications are illustrated:
  1. the choice of the kernel of integral transforms;
  2. the characterization of suitable classes of random variables.

8.
The classical inequality defining the concavity of a function, $$f(\alpha x + \bar \alpha y) \ge \alpha f(x) + \bar \alpha f(y), \qquad \bar \alpha = 1 - \alpha ,$$ can be generalized in two ways:
  1. by allowing different weights on the two sides;
  2. by allowing means other than the arithmetic one.
In this paper we show how these two extensions contain various types of generalized concavity proposed in the literature: quasi-concavity, pseudo-concavity, strong concavity, and "concavifiability". Moreover, the two extensions can be combined into a single one which allows both possibly different weights on the two sides and not necessarily arithmetic means. We also present some simple properties of these generalized concave functions.
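As one standard special case of the second extension (stated here from the general literature on quasi-concavity, not taken from the paper itself): replacing the arithmetic mean on the right-hand side by the minimum gives $$f(\alpha x + \bar \alpha y) \ge \min \{ f(x), f(y)\}, \qquad \alpha \in [0,1],$$ which every concave function satisfies, since $\alpha f(x) + \bar \alpha f(y) \ge \min \{ f(x), f(y)\}$.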

9.
We consider the mixed AR(1) time series model $$X_t=\left\{\begin{array}{ll}\alpha X_{t-1}+ \xi_t \quad {\rm w.p.} \qquad \frac{\alpha^p}{\alpha^p-\beta ^p},\\ \beta X_{t-1} + \xi_{t} \quad {\rm w.p.} \quad -\frac{\beta^p}{\alpha^p-\beta ^p} \end{array}\right.$$ for −1 < β^p ≤ 0 ≤ α^p < 1 and α^p − β^p > 0, when X_t has the two-parameter beta distribution B2(p, q) with parameters q > 1 and ${p \in \mathcal P(u,v)}$, where $$\mathcal P(u,v) = \left\{u/v : u < v,\,u,v\,{\rm odd\,positive\,integers} \right\}.$$ Special attention is given to the case p = 1. Using the Laplace transform and suitable approximation procedures, we prove that the distribution of the innovation sequence for p = 1 can be approximated by the uniform discrete distribution, and that for ${p \in \mathcal P(u,v)}$ it can be approximated by a continuous distribution. We also consider estimation issues of the model.
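A minimal Python sketch of the regime-switching recursion above for p = 1 (our own illustration; the innovation law of ξ_t is precisely what the paper characterizes, so a placeholder uniform innovation is used here, and the function name is ours):

import numpy as np

def simulate_mixed_ar1(alpha, beta, n, seed=0):
    # Mixed AR(1) recursion for p = 1: the coefficient alpha is used with
    # probability alpha/(alpha - beta), and beta with probability -beta/(alpha - beta).
    # The innovation below is a placeholder, not the distribution derived in the paper.
    assert -1 < beta <= 0 <= alpha < 1 and alpha - beta > 0
    rng = np.random.default_rng(seed)
    p_alpha = alpha / (alpha - beta)
    x = np.empty(n)
    x[0] = rng.uniform()
    for t in range(1, n):
        coeff = alpha if rng.uniform() < p_alpha else beta
        x[t] = coeff * x[t - 1] + rng.uniform(0.0, 1.0)   # placeholder innovation
    return x

path = simulate_mixed_ar1(alpha=0.5, beta=-0.2, n=1000)
print(path[:5])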

10.
The aim of this note is to give a simple characterization of the rationalizability of decision rules (or action profiles). The necessary and sufficient condition we obtain suggests interesting analogies between the Implementation Problem and Revealed Preference Theory. Two particular cases are examined:
  (a) The one-dimensional context, which shows that our condition is a generalization of the monotonicity condition of Spence-Mirrlees;
  (b) The linear set-up, which shows that rationalizability in multiple dimensions requires more than monotonicity: it also implies symmetry conditions, expressed by partial differential equations (the analogue, in this context, of the Slutsky equations of Revealed Preference Theory).

11.
Benoît and Ok (Games Econ Behav 64:51–67, 2008) show that in a society with at least three agents any weakly unanimous social choice correspondence (SCC) is Maskin monotonic if and only if it is Nash-implementable via a simple stochastic mechanism (Benoît-Ok's Theorem). This paper fully identifies the class of weakly unanimous SCCs that are Nash-implementable via a simple stochastic mechanism endowed with Saijo's message space specification (Saijo in Econometrica 56:693–700, 1988). It is shown that this class of SCCs is equivalent to the class of SCCs that are Nash-implementable via Benoît-Ok's Theorem.

12.
In this paper we study convolution residuals, that is, if $X_1, X_2, \ldots, X_n$ are independent random variables, we study the distributions, and the properties, of the sums $\sum_{i=1}^l X_i - t$ given that $\sum_{i=1}^k X_i > t$, where $t \in \mathbb{R}$ and $1 \le k \le l \le n$. Various stochastic orders among convolution residuals, based on observations from either one or two samples, are derived. As a consequence, computable bounds on the survival functions and on the expected values of convolution residuals are obtained. Some applications in reliability theory and queueing theory are described.
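A rough Monte Carlo sketch (our own illustration, not from the paper) of the conditional quantity studied here: estimating the survival function of $\sum_{i=1}^l X_i - t$ given $\sum_{i=1}^k X_i > t$, with i.i.d. exponential observations used as an arbitrary example.

import numpy as np

def convolution_residual_survival(sampler, k, l, t, x, n_sim=100_000, seed=0):
    # Estimate P( sum_{i=1}^l X_i - t > x | sum_{i=1}^k X_i > t ) by simulation.
    # sampler(rng, size) must return a (size, l) array of independent draws.
    rng = np.random.default_rng(seed)
    draws = sampler(rng, n_sim)
    conditioning = draws[:, :k].sum(axis=1) > t
    if not conditioning.any():
        return float("nan")
    residual = draws[conditioning].sum(axis=1) - t
    return float((residual > x).mean())

# Example: exponential(1) observations, k = 2, l = 3.
estimate = convolution_residual_survival(
    sampler=lambda rng, size: rng.exponential(1.0, size=(size, 3)),
    k=2, l=3, t=1.5, x=1.0)
print(estimate)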

13.
14.
Global methods that fit a single forecasting method to all time series in a set have recently shown surprising accuracy, even when forecasting large groups of heterogeneous time series. We provide the following contributions that help understand the potential and applicability of global methods and how they relate to traditional local methods that fit a separate forecasting method to each series:
  • Global and local methods can produce the same forecasts without any assumptions about similarity of the series in the set.
  • The complexity of local methods grows with the size of the set, while it remains constant for global methods. This result supports the recent evidence and provides principles for the design of new algorithms.
  • In an extensive empirical study, we show that purposely naïve algorithms derived from these principles show outstanding accuracy. In particular, global linear models provide competitive accuracy with far fewer parameters than the simplest of local methods (a toy illustration of the local/global contrast follows this list).
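A toy Python sketch (our construction, not the paper's code) of the contrast drawn above: a local approach fits one linear autoregression per series, while a global approach fits a single linear autoregression pooled across all series, so its parameter count does not grow with the number of series.

import numpy as np

def lagged_design(series, p):
    # Build the (lagged values, next value) regression pairs for one series.
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    y = series[p:]
    return X, y

def fit_local(series_set, p):
    # One least-squares AR(p) coefficient vector per series.
    return [np.linalg.lstsq(*lagged_design(s, p), rcond=None)[0] for s in series_set]

def fit_global(series_set, p):
    # A single least-squares AR(p) coefficient vector shared by every series.
    designs = [lagged_design(s, p) for s in series_set]
    X = np.vstack([d[0] for d in designs])
    y = np.concatenate([d[1] for d in designs])
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(0)
series_set = [np.cumsum(rng.normal(size=200)) for _ in range(50)]
local_models = fit_local(series_set, p=3)    # 50 models x 3 parameters
global_model = fit_global(series_set, p=3)   # 3 parameters in total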

15.
Technovation, 1987, 6(1): 57-68
This paper describes an investigation during which 225 small and medium-sized enterprises were asked about their experience of cooperation with universities in dealing with technical and scientific problems. About a quarter replied that they had had such experience in the last ten years. These were asked the following questions:
  • Were they satisfied with the cooperation?
  • What were the arguments against cooperation?
  • What especially important factors facilitated cooperation?
The results show that companies, in practice, ascribe the highest significance to factors quite different from those emphasized in the theoretical literature.

16.
This article focuses on a recent concept of covariation for processes taking values in a separable Banach space $B$ and a corresponding quadratic variation. The latter is more general than the classical one of Métivier and Pellaumail. Those notions are associated with some subspace $\chi$ of the dual of the projective tensor product of $B$ with itself. We also introduce the notion of a convolution type process, which is a natural generalization of the Itô process, and the concept of $\bar{\nu}_0$-semimartingale, which is a natural extension of the classical notion of semimartingale. The framework is the stochastic calculus via regularization in Banach spaces. Two main applications are mentioned: one related to the Clark–Ocone formula for finite quadratic variation processes; the second one concerns the probabilistic representation of a Hilbert-valued partial differential equation of Kolmogorov type.

17.
In this note we discuss the following problem. Let X and Y be two real-valued independent r.v.'s with d.f.'s F and φ. Consider the d.f. F*φ of the r.v. X o Y, o being a binary operation on real numbers. We deal with the following equation: $$\mathcal{G}^1 (F * \phi ,s) = \mathcal{G}^2 (F,s)\,\square\,\mathcal{G}^3 (\phi ,s) \quad \forall s \in S$$ where \(\mathcal{G}^1 ,\mathcal{G}^2 ,\mathcal{G}^3 \) are real or complex functionals, □ is another binary operation and s is a parameter. We give a solution of the problem which, under stronger assumptions (Aczél 1966), is the only one. Such a solution is obtained in two steps. First we give a solution in the very special case in which X and Y are degenerate r.v.'s. Secondly we extend the result to the general case under the following additional assumption: $$\mathcal{G}^i (\alpha F + (1 - \alpha )\phi ,s) = H[\mathcal{G}^i (F,s),\mathcal{G}^i (\phi ,s);\alpha ] \quad \forall \alpha \in [0,1],\; i = 1,2,3.$$
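One familiar instance of the displayed equation (recalled here for context, not taken from the note): if o is ordinary addition and each $\mathcal{G}^i$ is the characteristic function, then independence of X and Y gives $$\mathcal{G}(F,s) = \int e^{isx}\,dF(x), \qquad \mathcal{G}(F*\phi,s) = \mathcal{G}(F,s)\cdot\mathcal{G}(\phi,s),$$ so that □ is multiplication and $\mathcal{G}^1 = \mathcal{G}^2 = \mathcal{G}^3$.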

18.
KBP is an innovative compensation approach with some important advantages for both workers and management. It is growing in use, but there is still not much information available to guide managers who want to use it. Our research suggests that each KBP implementation is idiosyncratic. At CARCO, each of the several plants using KBP designed its own KBP plan. They had very little information upon which they could draw. Likewise, at CONCO the plant manager and other key executives set out the philosophy and structure of KBP with very little guidance, except some discussion with one manager who had some experience with it. At this time, each application of KBP must be individually worked out with respect to such issues as the method of evaluation for pay advancement, the levels at which pay increases ought to be granted, and ways to provide opportunities for skill advancement. There are other areas that we need to know more about:
  • Under what conditions should an organizationwide or job-circle KBP approach be implemented?
  • How does the relationship between the range of skills and the range of pay affect performance and satisfaction?
  • When job circles are used, what guides should be used for grouping jobs into KBP classes?
  • How does the size of an organization and the range of task variety relate to KBP plans?
  • How do organizational and pay system differences affect practices such as rotation and participation?
KBP is an interesting and potentially useful approach to compensation. We have suggested some approaches to these issues in this article, but we all need to know more about it.

19.
This paper attempts to model elections by incorporating voter judgments about candidate and leader competence. The proposed model can be linked to Madison's understanding of the nature of the choice of Chief Magistrate (Madison, James Madison: writings. The Library of America, New York, 1999 [1787]) and Condorcet's work on the so-called "Jury Theorem" (Condorcet 1994 [1785]). Electoral models use the notion of a Nash Equilibrium. This notion generally depends on a fixed point argument. For deterministic electoral models, there will typically be no equilibrium. Instead we introduce the idea of a preference field, $H,$ for the society. A condition called half-openness of $H$ is sufficient to guarantee existence of a local direction gradient, $d$. Even when $d$ is not well-defined we can use the idea of the heart for the society. This is an attractor of the set of social moves that can occur. As an application, a stochastic model of elections is considered, and applied to the 2008 presidential election in the United States. In such a stochastic model the electoral origin will satisfy the first order condition for a local Nash equilibrium. We then show how to compute the Hessian of each candidate's vote share function, and obtain necessary and sufficient conditions for convergence to the electoral origin, suggesting that there will be a social direction gradient. The origin maximizes aggregate voter utility and can be interpreted as a fit choice for the polity.

20.