Similar Literature
20 similar documents found
1.
Many social phenomena can be viewed as processes in which individuals in social groups develop agreement (e.g., public opinion, the spreading of rumor, the formation of social and linguistic conventions). Conceptual Agreement Theory (CAT) models social agreement as a simplified communicational event in which an Observer \(O\) and an Actor \(A\) exchange ideas about a concept \(C\), and where \(O\) uses that information to infer whether \(A\)'s conceptual state is the same as its own (i.e., to infer agreement). Agreement may be true (when \(O\) infers that \(A\) is thinking \(C\) and this is in fact the case, event \(a_1\)) or illusory (when \(O\) infers that \(A\) is thinking \(C\) and this is not the case, event \(a_2\)). In CAT, concepts that afford \(a_1\) or \(a_2\) become more salient in the minds of members of social groups. Results from an agent-based model (ABM) and a probabilistic model that implement CAT show that, as our conceptual analyses suggested, the simulated social system selects concepts according to their usefulness to agents in promoting agreement among them (Experiment 1). Furthermore, the ABM exhibits more complex dynamics in which similarly minded agents cluster and are able to retain useful concepts even when a different group of agents discards them (Experiment 2). We discuss the relevance of CAT and the current findings for analyzing different social communication events, and suggest ways in which CAT could be put to empirical test.
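A minimal agent-based sketch of this kind of dynamic, written here for illustration only (the population size, concept count, and salience-update rule are assumptions, not the authors' implementation): agents emit concepts with probability proportional to salience, an observer infers agreement when the actor emits the same concept, and concepts that afford inferred agreement gain salience.

```python
import random

# Illustrative CAT-style agent-based sketch (hypothetical parameters and update
# rule, not the authors' implementation).
N_AGENTS, N_CONCEPTS, N_STEPS, BOOST = 50, 10, 20_000, 0.05

# Each agent holds a salience weight for every concept.
salience = [[1.0] * N_CONCEPTS for _ in range(N_AGENTS)]

def emit(weights):
    """Sample a concept index with probability proportional to its salience."""
    return random.choices(range(N_CONCEPTS), weights=weights)[0]

random.seed(1)
for _ in range(N_STEPS):
    observer, actor = random.sample(range(N_AGENTS), 2)
    c_obs, c_act = emit(salience[observer]), emit(salience[actor])
    if c_obs == c_act:                       # observer infers agreement
        salience[observer][c_obs] += BOOST   # concepts affording agreement
        salience[actor][c_act] += BOOST      # become more salient

# Population-level salience concentrates on a few "useful" concepts over time.
totals = [round(sum(agent[c] for agent in salience), 1) for c in range(N_CONCEPTS)]
print(totals)
```

In this toy setting the total salience typically concentrates on a small subset of concepts, which is the qualitative selection effect the abstract describes for Experiment 1.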

2.
Let $\mathcal{M}_{\underline{i}}$ be an exponential family of densities on $[0,1]$ pertaining to a vector of orthonormal functions $b_{\underline{i}}=(b_{i_1}(x),\ldots ,b_{i_p}(x))^\mathbf{T}$, and consider the problem of estimating a density $f$ belonging to such a family for an unknown set ${\underline{i}}\subset \{1,2,\ldots ,m\}$, based on a random sample $X_1,\ldots ,X_n$. Pokarowski and Mielniczuk (2011) introduced model selection criteria in a general setting based on p-values of the likelihood ratio statistic for $H_0: f\in \mathcal{M}_0$ versus $H_1: f\in \mathcal{M}_{\underline{i}}\setminus \mathcal{M}_0$, where $\mathcal{M}_0$ is the minimal model. In this paper we study consistency of these model selection criteria when the number of models is allowed to increase with the sample size and $f$ ultimately belongs to one of them. The results are then generalized to the case when the logarithm of $f$ has an infinite expansion with respect to $(b_i(\cdot ))_1^\infty$. Moreover, it is shown how the results can be applied to study convergence rates of the ensuing post-model-selection estimators of the density with respect to the Kullback–Leibler distance. We also present results of a simulation study comparing the small-sample performance of the discussed selection criteria and the post-model-selection estimators with analogous entities based on Schwarz's rule, as well as their greedy counterparts.
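As a concrete, simplified illustration of the setting (not the authors' criteria), the sketch below fits a one-term exponential family density with an orthonormal cosine basis on $[0,1]$ by maximum likelihood and computes the likelihood ratio statistic against the minimal (uniform) model; the basis choice, the Beta-distributed test data and the single candidate term are assumptions made for the example.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize
from scipy.stats import chi2

def basis(x, k):
    """Orthonormal cosine basis on [0, 1]: b_k(x) = sqrt(2) cos(k pi x)."""
    return np.sqrt(2.0) * np.cos(k * np.pi * np.asarray(x, dtype=float))

def neg_loglik(theta, x, ks):
    """Negative log-likelihood of f(x) = exp(sum_j theta_j b_{k_j}(x) - psi(theta))."""
    s = lambda t: sum(th * basis(t, k) for th, k in zip(theta, ks))
    psi = np.log(quad(lambda t: np.exp(s(t)), 0.0, 1.0)[0])   # log-normalizer
    return -(np.sum(s(x)) - len(x) * psi)

rng = np.random.default_rng(2)
x = rng.beta(2.0, 2.0, size=300)           # illustrative sample on [0, 1]

# Likelihood ratio test of the minimal (uniform) model against the model {b_2}.
ll0 = 0.0                                   # uniform density has log-likelihood 0
fit = minimize(neg_loglik, x0=np.zeros(1), args=(x, [2]), method="BFGS")
lr = 2.0 * (-fit.fun - ll0)
print(round(lr, 2), round(chi2.sf(lr, df=1), 4))   # LR statistic and its p-value
```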

3.
The relevance of risk preference and forecasting accuracy for investor survival has recently been the focus of a series of theoretical and simulation studies. At one extreme, it has been proven that risk preference can be entirely irrelevant (Sandroni in Econometrica 68:1303–1341, 2000; Blume and Easley in Econometrica 74(4):929–966, 2006). However, the agent-based computational approach indicates that risk preference matters and can be more relevant for survivability than forecasting accuracy (Chen and Huang in Advances in natural computation, Springer, Berlin, 2005; J Econ Behav Organ 67(3):702–717, 2008; Huang in J Econ Interact Coord, 2015). Chen and Huang (Inf Sci 177(5):1222–1229, 2007, 2008) further explained that it is the saving behavior of traders that determines their survivability. However, institutional investors, the most influential investors in modern financial markets, do not have to consider saving decisions. Additionally, traders in the above series of theoretical and simulation studies have learned to forecast the stochastic process that determines which asset will pay dividends, not the market prices and dividends. To relate the research on survivability to issues concerning the efficient markets hypothesis, it is better to endow agents with the ability to forecast market prices and dividends. With the Santa Fe Artificial Stock Market, where traders do not have to consider saving decisions and can learn to forecast both asset prices and dividends, we revisit the issue of survivability and market efficiency. We find that the main finding of Chen and Huang (2008), that risk preference is much more relevant for survivability than forecasting accuracy, still holds for a wide range of market conditions but can fail when the baseline dividend becomes very small. Moreover, the advantage of traders who are less averse to risk is revealed in a market where saving decisions are not taken into account. Finally, Huang's (2015) argument regarding the degree of market inefficiency is confirmed.

4.
In this paper we study convolution residuals; that is, if $X_1,X_2,\ldots ,X_n$ are independent random variables, we study the distributions, and the properties, of the sums $\sum _{i=1}^{l}X_i-t$ given that $\sum _{i=1}^{k}X_i>t$, where $t\in \mathbb{R}$ and $1\le k\le l\le n$. Various stochastic orders among convolution residuals based on observations from either one or two samples are derived. As a consequence, computable bounds on the survival functions and on the expected values of convolution residuals are obtained. Some applications in reliability theory and queueing theory are described.
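A Monte Carlo sketch of the object under study may help fix ideas: it estimates the conditional survival function of a convolution residual for i.i.d. exponential variables. The exponential distribution, the rate, and the values of $k$, $l$ and $t$ are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_survival(s, t, k, l, n_sim=200_000, rate=1.0):
    """Monte Carlo estimate of P(sum_{i<=l} X_i - t > s | sum_{i<=k} X_i > t)
    for i.i.d. exponential(rate) variables (illustrative distributional choice)."""
    x = rng.exponential(1.0 / rate, size=(n_sim, l))
    cond = x[:, :k].sum(axis=1) > t            # conditioning event
    resid = x.sum(axis=1) - t                  # convolution residual
    return (resid[cond] > s).mean()

# Example with k = 2, l = 3, t = 1.5, on a small grid of s values.
print([round(residual_survival(s, t=1.5, k=2, l=3), 3) for s in (0.0, 0.5, 1.0, 2.0)])
```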

5.
This paper attempts to model elections by incorporating voter judgments about candidate and leader competence. The proposed model can be linked to Madison's understanding of the nature of the choice of Chief Magistrate (Madison, James Madison: writings. The Library of America, New York, 1999 [1787]) and Condorcet's work on the so-called "Jury Theorem" (Condorcet 1994 [1785]). Electoral models use the notion of a Nash equilibrium, which generally depends on a fixed point argument. For deterministic electoral models, there will typically be no equilibrium. Instead we introduce the idea of a preference field, $H$, for the society. A condition called half-openness of $H$ is sufficient to guarantee existence of a local direction gradient, $d$. Even when $d$ is not well-defined we can use the idea of the heart for the society. This is an attractor of the set of social moves that can occur. As an application, a stochastic model of elections is considered and applied to the 2008 presidential election in the United States. In such a stochastic model the electoral origin will satisfy the first order condition for a local Nash equilibrium. We then show how to compute the Hessian of each candidate's vote share function, and obtain necessary and sufficient conditions for convergence to the electoral origin, suggesting that there will be a social direction gradient. The origin maximizes aggregate voter utility and can be interpreted as a fit choice for the polity.

6.
In this paper, we consider the problem of estimating the individual weights of three objects. For the estimation we use a chemical balance weighing design and the criterion of D-optimality. We assume that the error terms $\varepsilon_{i},\ i=1,2,\dots,n$, follow a first-order autoregressive process. This assumption implies that the covariance matrix of the errors depends on the known parameter $\rho$. We present the chemical balance weighing design matrix $\widetilde{\bf X}$ and we prove that this design is D-optimal in certain classes of designs for $\rho\in[0,1)$, and that it is also D-optimal in the class of designs with design matrix ${\bf X} \in M_{n\times 3}(\pm 1)$ for some $\rho \ge 0$. We also prove necessary and sufficient conditions under which the design is D-optimal in the class of designs $M_{n\times 3}(\pm 1)$ if $\rho\in[0,1/(n-2))$. We also present the matrix of the D-optimal factorial design with 3 two-level factors.
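To make the optimality criterion concrete: with AR(1) errors the covariance matrix $V$ has entries proportional to $\rho^{|i-j|}$, and D-optimality amounts to maximizing $\det(\mathbf{X}^{\mathrm T}V^{-1}\mathbf{X})$ over design matrices $\mathbf{X}\in M_{n\times 3}(\pm 1)$. The random search below is a purely illustrative way to evaluate that criterion; it is not the paper's construction of the optimal design.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_cov(n, rho):
    """AR(1) error covariance with entries proportional to rho**|i-j|."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def d_criterion(X, rho):
    """D-optimality criterion det(X' V^{-1} X) under AR(1) errors."""
    V = ar1_cov(X.shape[0], rho)
    return np.linalg.det(X.T @ np.linalg.solve(V, X))

# Illustrative random search over +-1 designs (not the paper's construction).
n, rho, n_candidates = 8, 0.3, 20_000
best_val, best_X = -np.inf, None
for _ in range(n_candidates):
    X = rng.choice([-1.0, 1.0], size=(n, 3))
    val = d_criterion(X, rho)
    if val > best_val:
        best_val, best_X = val, X

print(round(best_val, 3))
print(best_X)
```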

7.
Random weighting estimation of stable exponent
This paper presents a new random weighting method for estimating the stable exponent. Assume that $X_1, X_2, \ldots ,X_n$ is a sequence of independent and identically distributed random variables with $\alpha$-stable distribution $G$, where $\alpha \in (0,2]$ is the stable exponent. Denote the empirical distribution function of $G$ by $G_n$ and the random weighting estimation of $G_n$ by $H_n$. An empirical distribution function $\widetilde{F}_n$ with U-statistic structure is defined based on the sum-preserving property of stable random variables. By minimizing the Cramér–von Mises distance between $H_n$ and $\widetilde{F}_n$, the random weighting estimation of $\alpha$ is constructed in the sense of the minimum distance. The strong consistency and asymptotic normality of the random weighting estimation are also rigorously proved. Experimental results demonstrate that the proposed random weighting method can effectively estimate the stable exponent, resulting in higher estimation accuracy than the Zolotarev, Press, Fan and maximum likelihood methods.
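A simplified sketch of the minimum-distance idea (omitting the random-weighting step, i.e., using the plain empirical distribution in place of $H_n$): by the sum-preserving property, $(X_i+X_j)/2^{1/\alpha}$ has the same distribution as $X_1$ for symmetric $\alpha$-stable data, so one can estimate $\alpha$ by minimizing the Cramér–von Mises distance between the sample ECDF and the ECDF of the rescaled pairwise sums. The symmetric-stable generator, sample size and search interval are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)

def sym_stable(alpha, size):
    """Symmetric alpha-stable sample via the Chambers-Mallows-Stuck method."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))

def ecdf(points, x):
    """Empirical distribution function of `points` evaluated at `x`."""
    return np.searchsorted(np.sort(points), x, side="right") / len(points)

def cvm_distance(alpha_hat, x):
    """Cramer-von Mises distance between the sample ECDF and the ECDF of the
    pairwise sums rescaled by 2**(1/alpha_hat) (the sum-preserving property)."""
    i, j = np.triu_indices(len(x), k=1)
    sums = (x[i] + x[j]) / 2.0 ** (1.0 / alpha_hat)   # U-statistic structure
    return np.mean((ecdf(x, x) - ecdf(sums, x)) ** 2)

x = sym_stable(alpha=1.5, size=400)
res = minimize_scalar(cvm_distance, bounds=(0.3, 2.0), args=(x,), method="bounded")
print(round(res.x, 3))   # minimum-distance estimate of the stable exponent
```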

8.
9.
We consider how information concentration affects a seller’s revenue in common value auctions. The common value is a function of $n$ random variables partitioned among $m \le n$ bidders. For each partition, the seller devises an optimal mechanism. We show that whenever the value function allows scalar sufficient statistics for each player’s signals, the mechanism design problem is well-defined. Additionally, whenever a common regularity condition is satisfied, a coarser partition always reduces revenues. In particular, any merger or collusion among bidders reduces revenue.

10.
Bing Guo, Qi Zhou, Runchu Zhang, Metrika 2014, 77(6):721–732
Zhang et al. (Stat Sinica 18:1689–1705, 2008) introduced an aliased effect-number pattern for two-level regular designs and proposed a general minimum lower-order confounding (GMC) criterion for choosing optimal designs. All the GMC \(2^{n-m}\) designs with \(N/4+1\le n\le N-1\) were constructed by Li et al. (Stat Sinica 21:1571–1589, 2011), Zhang and Cheng (J Stat Plan Inference 140:1719–1730, 2010) and Cheng and Zhang (J Stat Plan Inference 140:2384–2394, 2010), where \(N=2^{n-m}\) is the run number and \(n\) is the number of factors. In this paper, we first study some further properties of GMC designs, and then we construct all the GMC \(2^{n-m}\) designs with \(n\le N-1\) for each of the three parameter cases: (i) \(m\le 4\); (ii) \(m\ge 5\) and \(n=(2^m-1)u+r\) for \(u>0\) and \(r=0,1,2\); and (iii) \(m\ge 5\) and \(n=(2^m-1)u+r\) for \(u\ge 0\) and \(r=2^m-3,2^m-2\).

11.
Hitchcock (Synthese 97:335–364, 1993) argues that the ternary probabilistic theory of causality meets two problems due to the problem of disjunctive factors, while arguing that the unanimity probabilistic theory of causality, which is founded on the binary contrast, does not meet them. Hitchcock also argues that only the ternary theory conveys the information about complex relations of causal relevance. In this paper, I show that Eells’ solution (Probabilistic causality, Cambridge University Press, Cambridge, 1991), which is founded on the unanimity theory, meets the two problems. I also show that the unanimity theory too reveals complex relations of causal relevance. I conclude that the two probabilistic theories of causality carve up the same causal structure in two formally different and conceptually consistent ways. Hitchcock’s ternary theory has inspired several major philosophers (Maslen, Causation and counterfactuals, pp. 341–357, MIT Press, Cambridge, 2004; Schaffer, Philos Rev 114:297–328, 2005; Northcott, Philos Stud 139:111–123, 2007; Hausman, The place of probability in science: In honor of Ellery Eells (1953–2006), pp. 47–64, Springer, Dordrecht, 2010) who have recently developed the ternary theory or the quaternary theory. This paper leads them to reconsider the relation between the ternary theory and the binary theory.

12.
Let \((X_1,X_2,\ldots ,X_n)\) be a Gaussian random vector with a common correlation coefficient \(\rho_n\), \(0\le \rho_n<1\), and let \(M_n= \max (X_1,\ldots , X_n)\), \(n\ge 1\). For any given \(a>0\), define \(T_n(a)= \{ j:\,1\le j\le n,\,X_j\in (M_n-a,\,M_n]\}\), \(K_n(a)= \#T_n(a)\) and \(S_n(a)=\sum \nolimits _{j\in T_n(a)}X_j\), \(n\ge 1\). In this paper, we obtain the limit distributions of \((K_n(a))\) and \((S_n(a))\), under the assumption that \(\rho_n\rightarrow \rho\) as \(n\rightarrow \infty\), for some \(\rho \in [0,1)\).
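The statistics \(K_n(a)\) and \(S_n(a)\) are straightforward to simulate via the one-factor representation \(X_j=\sqrt{\rho}\,Z_0+\sqrt{1-\rho}\,Z_j\) of an equicorrelated Gaussian vector; the sketch below does so for illustrative values of \(n\), \(\rho\) and \(a\) (with the correlation held constant rather than varying with \(n\)).

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_statistics(n, rho, a, n_rep=10_000):
    """Simulate K_n(a) = #{j: X_j in (M_n - a, M_n]} and S_n(a) = sum of those X_j
    for an equicorrelated Gaussian vector X_j = sqrt(rho)*Z0 + sqrt(1-rho)*Z_j."""
    z0 = rng.standard_normal((n_rep, 1))
    z = rng.standard_normal((n_rep, n))
    x = np.sqrt(rho) * z0 + np.sqrt(1.0 - rho) * z
    m = x.max(axis=1, keepdims=True)
    near_max = x > m - a                        # indicator of membership in T_n(a)
    k = near_max.sum(axis=1)                    # K_n(a)
    s = np.where(near_max, x, 0.0).sum(axis=1)  # S_n(a)
    return k, s

k, s = sample_statistics(n=200, rho=0.5, a=0.5)
print(round(k.mean(), 3), round(s.mean(), 3))
```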

13.
Peng Zhao, Yiying Zhang, Metrika 2014, 77(6):811–836
In this article, we study the stochastic properties of the maxima of two independent heterogeneous gamma random variables that differ in both their shape and scale parameters. Our main purpose is to address how the heterogeneity of a random sample of size 2 affects the magnitude, skewness and dispersion of the maxima in the sense of various stochastic orderings. Let \(X_{1}\) and \(X_{2}\) be two independent gamma random variables with \(X_{i}\) having shape parameter \(r_{i}>0\) and scale parameter \(\lambda _{i}\), \(i=1,2\), and let \(X^{*}_{1}\) and \(X^{*}_{2}\) be another set of independent gamma random variables with \(X^{*}_{i}\) having shape parameter \(r_{i}^{*}>0\) and scale parameter \(\lambda _{i}^{*}\), \(i=1,2\). Denote by \(X_{2:2}\) and \(X^{*}_{2:2}\) the corresponding maxima, respectively. It is proved that, among other results, if \((r_{1},r_{2})\) majorizes \((r_{1}^{*},r_{2}^{*})\) and \((\lambda _{1},\lambda _{2})\) weakly majorizes \((\lambda _{1}^{*},\lambda _{2}^{*})\), then \(X_{2:2}\) is stochastically larger than \(X^{*}_{2:2}\) in the sense of the likelihood ratio order. We also study the skewness according to the star order, for which a very general sufficient condition is provided and from which some useful consequences can be obtained. The new results established here strengthen and generalize some of the results known in the literature.
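For readers who want to probe such ordering results numerically, the sketch below simulates the two maxima and compares their empirical survival functions on a grid of points; the parameter pairs (with \((3,1)\) majorizing \((2,2)\) and equal unit scales) and the use of numpy's shape/scale parameterization are illustrative assumptions, not the paper's conventions.

```python
import numpy as np

rng = np.random.default_rng(11)

def max_of_two_gammas(shapes, scales, n_sim=200_000):
    """Simulate X_{2:2} = max(X_1, X_2) with X_i ~ Gamma(shape_i, scale_i)."""
    x1 = rng.gamma(shapes[0], scales[0], n_sim)
    x2 = rng.gamma(shapes[1], scales[1], n_sim)
    return np.maximum(x1, x2)

# Illustrative parameter pairs: (r1, r2) = (3, 1) majorizes (r1*, r2*) = (2, 2),
# and the scale vectors are taken equal for simplicity.
m  = max_of_two_gammas(shapes=(3.0, 1.0), scales=(1.0, 1.0))
ms = max_of_two_gammas(shapes=(2.0, 2.0), scales=(1.0, 1.0))

# Compare empirical survival functions P(max > t) on a grid of t values.
for t in (1.0, 2.0, 4.0, 8.0):
    print(t, round((m > t).mean(), 4), round((ms > t).mean(), 4))
```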

14.
We consider the (possibly nonlinear) regression model in \(\mathbb{R}^q\) with shift parameter \(\alpha\) in \(\mathbb{R}^q\) and other parameters \(\beta\) in \(\mathbb{R}^p\). Residuals are assumed to be from an unknown distribution function (d.f.). Let \(\widehat{\phi}\) be a smooth \(M\)-estimator of \(\phi = \binom{\beta}{\alpha}\) and \(T(\phi)\) a smooth function. We obtain the asymptotic normality, covariance, bias and skewness of \(T(\widehat{\phi})\) and an estimator of \(T(\phi)\) with bias \(\sim n^{-2}\) requiring \(\sim n\) calculations. (In contrast, the jackknife and bootstrap estimators require \(\sim n^2\) calculations.) For a linear regression with random covariates of low skewness, if \(T(\phi) = \nu \beta\), then \(T(\widehat{\phi})\) has bias \(\sim n^{-2}\) (not \(n^{-1}\)) and skewness \(\sim n^{-3}\) (not \(n^{-2}\)), and the usual approximate one-sided confidence interval (CI) for \(T(\phi)\) has error \(\sim n^{-1}\) (not \(n^{-1/2}\)). These results extend to random covariates.

15.
The main result of the paper is the following characterization of the generalized arcsine density $p_\gamma(t) = t^{\gamma-1}(1-t)^{\gamma-1}/B(\gamma, \gamma)$ with $t \in (0, 1)$ and $\gamma \in(0,\frac12) \cup (\frac12,1)$: a random variable $\xi$ supported on $[0, 1]$ has the generalized arcsine density $p_\gamma(t)$ if and only if ${\mathbb E} |\xi- x|^{1-2 \gamma}$ has the same value for almost all $x \in (0,1)$. Moreover, the measure with density $p_\gamma(t)$ is the unique minimizer (in the space of all probability measures $\mu$ supported on $(0, 1)$) of the double expectation $(\gamma-\frac12) {\mathbb E} |\xi-\xi^{\prime}|^{1-2 \gamma}$, where $\xi$ and $\xi^{\prime}$ are independent random variables distributed according to the measure $\mu$. These results extend recent results characterizing the standard arcsine density (the case $\gamma=\frac12$).
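The characterizing property lends itself to a quick numerical check: for $\xi$ with density $p_\gamma$, the fractional moment ${\mathbb E}|\xi-x|^{1-2\gamma}$ should be (numerically) constant over $x\in(0,1)$. The sketch below verifies this by quadrature for the illustrative choice $\gamma=0.3$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta

g = 0.3   # illustrative value of gamma in (0, 1/2) ∪ (1/2, 1)

def p(t):
    """Generalized arcsine density p_gamma(t) = t^(g-1) (1-t)^(g-1) / B(g, g)."""
    return t ** (g - 1.0) * (1.0 - t) ** (g - 1.0) / beta(g, g)

def fractional_moment(x):
    """E|xi - x|^(1-2g) for xi with density p_gamma, computed by quadrature."""
    integrand = lambda t: abs(t - x) ** (1.0 - 2.0 * g) * p(t)
    left, _ = quad(integrand, 0.0, x, limit=200)    # split at x to aid quadrature
    right, _ = quad(integrand, x, 1.0, limit=200)
    return left + right

# The characterization says this value is the same for every x in (0, 1).
print([round(fractional_moment(x), 5) for x in (0.1, 0.25, 0.5, 0.75, 0.9)])
```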

16.
Classical optimal strategies are notorious for producing remarkably volatile portfolio weights over time when applied with parameters estimated from data. This is predominantly explained by the difficulty of estimating expected returns accurately. In Lindberg (Bernoulli 15:464–474, 2009), a new parameterization of the drift rates was proposed with the aim of circumventing this difficulty, and a continuous time mean–variance optimal portfolio problem was solved. This approach was further developed to a jump-diffusion setting in Alp and Korn (Decis Econ Finance 34:21–40, 2011a). In the present paper, we solve a different portfolio problem under the market parameterization in Lindberg (Bernoulli 15:464–474, 2009). Here, the admissible investment strategies are given as the amounts of money to be held in each stock and are allowed to be adapted stochastic processes. In the references above, the admissible strategies are the deterministic and bounded fractions of the total wealth. The optimal strategy we derive is not the same as in Lindberg (Bernoulli 15:464–474, 2009), but it can still be viewed as investing equally in each of the n Brownian motions in the model. As a consequence of the problem assumptions, the optimal final wealth can become negative. The present portfolio problem is also solved in Alp and Korn (Submitted, 2011b), using the L²-projection approach of Schweizer (Ann Probab 22:1536–1575, 1995). However, our method of proof is direct and much more accessible.

17.
18.
The BDS test is the best-known correlation integral–based test, and it is now an important part of most standard econometric data analysis software packages. This test depends on the proximity ($\varepsilon$) and the embedding dimension ($m$) parameters, both of which are chosen by the researcher. Although different studies (e.g., Kanzler in Very fast and correctly sized estimation of the BDS statistic. Department of Economics, Oxford University, Oxford, 1999) have been carried out to provide an adequate selection of the proximity parameter, no relevant research has yet been done on $m$. In practice, researchers usually compute the BDS statistic for several values of $m$, but sometimes these results are contradictory because some of them accept the null and others reject it. This paper aims to fill this gap. To that end, we propose a new simple, yet powerful, aggregate test for independence, based on the BDS outputs from a given data set, that allows all of the information contained in several embedding dimensions to be considered without the ambiguity of the well-known BDS tests.
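For context, the building block of the BDS statistic is the correlation integral $C_m(\varepsilon)$, the fraction of pairs of $m$-histories within distance $\varepsilon$ of each other; under the i.i.d. null $C_m(\varepsilon)\approx C_1(\varepsilon)^m$, and the BDS statistic standardizes the departure from that relation. The self-contained sketch below computes $C_m(\varepsilon)$ for several embedding dimensions on simulated i.i.d. data (it stops short of the full BDS standardization, and the choice of $\varepsilon$ as 1.5 sample standard deviations is a common rule of thumb, not a recommendation from the paper).

```python
import numpy as np

def correlation_integral(x, m, eps):
    """C_m(eps): fraction of pairs of m-histories within sup-norm distance eps."""
    hist = np.lib.stride_tricks.sliding_window_view(x, m)        # (n-m+1, m) histories
    d = np.abs(hist[:, None, :] - hist[None, :, :]).max(axis=2)  # pairwise sup-norm
    iu = np.triu_indices(len(hist), k=1)
    return (d[iu] <= eps).mean()

rng = np.random.default_rng(5)
x = rng.standard_normal(500)          # i.i.d. data: the BDS null hypothesis
eps = 1.5 * x.std()                   # common rule of thumb for the proximity

c1 = correlation_integral(x, 1, eps)
for m in (2, 3, 4):
    cm = correlation_integral(x, m, eps)
    # Under independence C_m(eps) should be close to C_1(eps)**m for every m.
    print(m, round(cm, 4), round(c1 ** m, 4))
```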

19.
The recent financial crisis highlighted the importance of better understanding the interaction between macroeconomic and financial conditions. In this paper, we provide a financial social accounting matrix for the Canadian economy and use it to assess the strength of real-financial linkages by calculating and comparing multipliers with and without endogenous financial flows. It is found that taking into account financial flows increases the impact of a final demand shock on output by 4–11%. Moreover, between 2008 and 2009H1, the investment decisions of financial institutions, together with the fact that non-financial institutions were unwilling or unable to increase their financial liabilities, led to estimated declines in all GDP multipliers. The impact of a final demand shock on GDP declined 3–5%, while the impact of an increase in the availability of investment funds fell 30% and 55% for financial and non-financial corporations, respectively. (The views expressed in this paper are those of the authors; no responsibility for them should be attributed to Statistics Canada.)
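The multiplier comparison described here rests on standard social accounting matrix algebra: with $A$ the matrix of column coefficients of the endogenous accounts, the multiplier matrix is $(I-A)^{-1}$, and endogenizing additional (financial) accounts enlarges $A$. The sketch below shows the mechanics on a deliberately tiny, entirely hypothetical coefficient matrix; the numbers are made up for illustration and bear no relation to the Canadian accounts used in the paper.

```python
import numpy as np

# Hypothetical column-coefficient matrix for three real accounts
# (activities, factors, households); all numbers are made up for illustration.
A_real = np.array([
    [0.20, 0.00, 0.60],   # activities' receipts per unit outlay of each account
    [0.50, 0.00, 0.00],   # factor income generated
    [0.00, 0.90, 0.10],   # household income received and re-spent
])

# Same accounts plus one endogenized financial account (again hypothetical).
A_fin = np.array([
    [0.20, 0.00, 0.60, 0.30],
    [0.50, 0.00, 0.00, 0.00],
    [0.00, 0.90, 0.10, 0.00],
    [0.10, 0.00, 0.20, 0.05],   # financial flows channelled back into spending
])

def multipliers(A):
    """Accounting multiplier matrix (I - A)^(-1)."""
    return np.linalg.inv(np.eye(A.shape[0]) - A)

M_real, M_fin = multipliers(A_real), multipliers(A_fin)

# Impact on activities of a unit final-demand injection into activities,
# without and with endogenous financial flows.
print(round(M_real[0, 0], 3), round(M_fin[0, 0], 3))
```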

20.