Similar Documents (20 results)
1.
Bentler and Raykov (2000, Journal of Applied Psychology 85: 125–131) and Jöreskog (1999a, http://www.ssicentral.com/lisrel/column3.htm; 1999b, http://www.ssicentral.com/lisrel/column5.htm) proposed procedures for calculating R² for dependent variables involved in loops or possessing correlated errors. This article demonstrates that Bentler and Raykov's procedure cannot routinely be interpreted as a "proportion" of explained variance, while Jöreskog's reduced-form calculation is unnecessarily restrictive. The new blocked-error R² (beR2) uses a minimal hypothetical causal intervention to resolve the variance-partitioning ambiguities created by loops and correlated errors. Hayduk (1996) discussed how stabilising feedback models – models capable of counteracting external perturbations – can result in an acceptable error variance which exceeds the variance of the dependent variable to which that error is attached. For variables included within loops, whether stabilising or not, beR2 provides the same value as Hayduk's (1996) loop-adjusted R². For variables not involved in loops and not displaying correlated residuals, beR2 reports the same value as the traditional regression R². Thus beR2 provides a conceptualisation of the proportion of explained variance that spans both recursive and nonrecursive structural equation models. A procedure for calculating beR2 in any SEM program is provided.
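The claim that beR2 reduces to the traditional regression R² for variables outside loops and without correlated residuals can be illustrated with a plain least-squares computation. The sketch below uses hypothetical simulated data and does not implement the blocked-error intervention itself:

```python
import numpy as np

# Illustrative sketch only: for a variable not involved in loops and without
# correlated errors, beR2 coincides with the ordinary regression R-squared,
# computed here from scratch. Data and coefficients are hypothetical.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=1.0, size=n)

# OLS fit via least squares
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# R^2 = 1 - (error variance) / (total variance of y)
r_squared = 1.0 - resid.var() / y.var()
```

With slope 2 and unit noise variance the population R² is 4/5, so the sample value lands near 0.8.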

3.
This paper develops a new methodology to measure and decompose a global DMU's efficiency into the efficiencies of its inputs (or outputs). The basic idea rests on the fact that a global DMU efficiency score can be misleading when managers proceed to reallocate inputs or redefine outputs. The literature provides a basic measure for a global DMU's efficiency score. A revised model was developed for measuring the efficiencies of global DMUs and their input (or output) efficiency components, based on a hypothesis of virtual DMUs. The present paper suggests a method for measuring global DMU efficiency simultaneously with the efficiencies of its input components, which we call the input-decomposition DEA model (ID-DEA), and of its output components, which we call the output-decomposition DEA model (OD-DEA). These twin models differ from the super-efficiency model (SE-DEA) and the common-set-of-weights model (CSW-DEA). The twin models (ID-DEA, OD-DEA) were applied to agricultural farms; the results gave different efficiency scores for inputs (or outputs), while the global DMU efficiency score was given by the CCR model of Charnes, Cooper and Rhodes (Charnes et al., 1978). The rationale for the new hypothesis and model is that managers do not have the same level of information about all inputs and outputs, which constrains them to manage resources by the (global) efficiency score. Each input or output therefore has a different reality depending on the manager's decision and the information available at the time of the decision. This paper decomposes a global DMU's efficiency into the efficiencies of its input (or output) components; each component receives its own score instead of a single global DMU score. These findings should improve management decision making about reallocating inputs and redefining outputs.
Concerning the policy implications of the twin DEA models, they help policy makers assess, ameliorate and reorient their strategies, and execute programs that enhance best practices and minimise losses.
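For context, here is a minimal sketch of the standard input-oriented CCR multiplier model that supplies the global efficiency score referred to above. The DMU data are hypothetical, and the ID-DEA/OD-DEA decompositions themselves are not reproduced:

```python
import numpy as np
from scipy.optimize import linprog

# Sketch of the CCR (Charnes-Cooper-Rhodes, 1978) multiplier model.
# Three hypothetical DMUs, two inputs, one output.
X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 3.0]])  # inputs, one row per DMU
Y = np.array([[1.0], [1.0], [1.0]])                  # outputs, one row per DMU

def ccr_efficiency(o):
    """Input-oriented CCR efficiency of DMU o via the multiplier LP."""
    n, m = X.shape
    s = Y.shape[1]
    # maximize u.y_o  ->  minimize -u.y_o ; decision variables are [u, v]
    c = np.concatenate([-Y[o], np.zeros(m)])
    # u.y_j - v.x_j <= 0 for every DMU j
    A_ub = np.hstack([Y, -X])
    b_ub = np.zeros(n)
    # normalisation constraint v.x_o = 1
    A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun

scores = [ccr_efficiency(o) for o in range(len(X))]
```

Here DMUs 0 and 1 lie on the efficient frontier (score 1), while DMU 2 is dominated by a convex combination of the other two and scores 5/6.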

4.
While entrepreneurial behavior involves starting and running a new venture (Gartner, 1988), academic entrepreneurial behavior (AEB) is somewhat unique in that it extends beyond a focus on startups to include both commercial and non-commercial activities (Abreu and Grinevich, 2013). Additionally, AEB is influenced by both financial and non-financial rewards (Lam, 2010). Despite these differences, studies of AEB have typically focused primarily on academics who have participated (or intend to participate) in a university spinout, as if all academic entrepreneurs were birds of a feather. Expanding the unit of analysis to also include academics not participating in commercial activities could provide insights for the development of AEB. An in-depth qualitative analysis of 30 life-science academics in Australia indicates the presence of four distinctive categories of AEB: non-entrepreneurial, semi-entrepreneurial, pre-entrepreneurial and entrepreneurial. More interestingly, the same academic can exhibit different AEB in relation to different research projects, depending on the available support mechanisms (particularly financial ones). Our findings suggest that AEB is not necessarily driven by opportunity recognition, and that research on the topic must consider factors beyond the individual academic, such as the project and the funding mechanisms.

5.
This article provides an exact non-cooperative foundation of the sequential Raiffa solution for two-person bargaining games. Based on an approximate foundation due to Myerson (1991), for any two-person bargaining game (S, d) an extensive-form game G_{S,d} is defined that has an infinity of weakly subgame perfect equilibria whose payoff vectors coincide with that of the sequential Raiffa solution of (S, d). Moreover, all those equilibria share the same equilibrium path, which consists of proposing the Raiffa solution and accepting it in the first stage of the game. By a modification of G_{S,d}, the analogous result is provided for subgame perfect equilibria. These results immediately extend to the implementation of a sequential Raiffa (solution-based) social choice rule in subgame perfect equilibrium.

6.
Even a cursory perusal of the social science literature suffices to show the prevalence of dichotomous thinking. Many of these dichotomies deal with some aspect of the "conceptual versus empirical" distinction. This paper shows that while dichotomies predominate, the actual research process they are designed to represent involves at minimum three separate and necessary levels. We term these the conceptual level (X), the empirical level (X′), and the operational or indicator level (X″). This minimal three-level model is applied to an analysis of the philosophical foundations of measurement, specifically the formulations of Northrop and Bridgman. It is shown that both of these formulations are essentially dichotomous, while the phenomena they deal with are trichotomous. For example, Northrop's "concepts by postulation" and "concepts by intuition" are purportedly separate levels connected by an epistemic correlation. Application of the three-level model reveals that both are true concepts, and thus belong on the same level of analysis (X). Similarly, application of the three-level model to Bridgman's formulation shows that both mental and physical concepts belong on the same level (X). Bridgman's formulation is valuable in pointing out that operations are not restricted to one level of analysis; indeed, we see them to be crucial on all three levels. The three-level model is not a panacea, but it does provide an efficacious framework for the difficult but important task of analyzing the philosophical underpinnings of measurement.

7.
Anna Lytova, Leonid Pastur. Metrika (2009) 69(2–3): 153–172
We consider n × n real symmetric random matrices n^{-1/2}W with independent entries (modulo the symmetry condition), and the (null) sample covariance matrices n^{-1}A^T A, where the m × n matrix A has independent entries. Assuming first that the fourth cumulant (excess) κ₄ of the entries of W and A is zero and that their fourth moments satisfy a Lindeberg-type condition, we prove that linear statistics of the eigenvalues of the above matrices satisfy the central limit theorem (CLT) as n → ∞, m → ∞, m/n → c ∈ [0, ∞), with the same variance as for Gaussian matrices, provided the test functions of the statistics are smooth enough (essentially of class C⁵). This is done by using a simple "interpolation trick". Then, using a more elaborate technique, we prove the CLT in the case of non-zero excess of the entries for essentially C⁴ test functions; here the variance contains an additional term proportional to κ₄. The proofs of all limit theorems follow essentially the same scheme.
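The flavour of the result can be checked numerically: for the smooth test function f(x) = x², the fluctuations of the linear eigenvalue statistic stay O(1) as n grows, rather than growing with n. This Monte Carlo sketch uses Gaussian entries and is an illustration, not the paper's argument:

```python
import numpy as np

# Monte Carlo illustration of the CLT without normalisation: the spread of the
# linear eigenvalue statistic sum_k f(lambda_k), f(x) = x^2, of a Wigner matrix
# n^{-1/2} W stays bounded as n grows. Sizes and repetition counts are chosen
# only to keep the run fast.
rng = np.random.default_rng(1)

def linear_statistic(n):
    G = rng.normal(size=(n, n))
    W = (G + G.T) / np.sqrt(2)          # real symmetric, off-diagonal variance 1
    eigs = np.linalg.eigvalsh(W / np.sqrt(n))
    return np.sum(eigs ** 2)            # sum_k f(lambda_k) with f(x) = x^2

stats_small = np.array([linear_statistic(40) for _ in range(300)])
stats_large = np.array([linear_statistic(80) for _ in range(300)])
spread_small, spread_large = stats_small.std(), stats_large.std()
```

Both spreads come out near 2, independent of the matrix size, as the O(1)-fluctuation result predicts.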

8.
We evaluate the performance of several volatility models in estimating one-day-ahead Value-at-Risk (VaR) of seven stock market indices using a number of distributional assumptions. Because all return series exhibit volatility clustering and long-range memory, we examine GARCH-type models, including fractionally integrated models, under normal, Student-t and skewed Student-t distributions. Consistent with the idea that the accuracy of VaR estimates is sensitive to the adequacy of the volatility model used, we find that the AR(1)-FIAPARCH(1,d,1) model under a skewed Student-t distribution outperforms all the models we considered, including widely used ones such as GARCH(1,1) or HYGARCH(1,d,1). The superior performance of the skewed Student-t FIAPARCH model holds for all stock market indices and for both long and short trading positions. Our findings can be explained by the fact that the skewed Student-t FIAPARCH model jointly accounts for the salient features of financial time series: fat tails, asymmetry, volatility clustering and long memory. In the same vein, because it fails to account for most of these stylized facts, the RiskMetrics model provides the least accurate VaR estimates. Our results corroborate the calls for the use of more realistic assumptions in financial modeling.
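The final VaR step can be sketched as follows, using assumed (not estimated) GARCH(1,1) parameters and symmetric Student-t innovations in place of the paper's skewed-t FIAPARCH specification; all numbers are illustrative:

```python
import numpy as np
from scipy import stats

# Sketch of one-day-ahead VaR under an assumed GARCH(1,1)-t model. Parameters
# are hypothetical, not estimated; the paper's FIAPARCH and skewed-t machinery
# is beyond this illustration.
rng = np.random.default_rng(2)
omega, alpha, beta, nu = 0.05, 0.08, 0.90, 6.0

# simulate a return series from the GARCH(1,1)-t process itself
T = 1000
sigma2 = np.empty(T)
r = np.empty(T)
z = rng.standard_t(nu, size=T) * np.sqrt((nu - 2) / nu)  # unit-variance t shocks
sigma2[0] = omega / (1 - alpha - beta)                    # unconditional variance
r[0] = np.sqrt(sigma2[0]) * z[0]
for t in range(1, T):
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * z[t]

# one-day-ahead conditional variance and 1% VaR for a long position
sigma2_next = omega + alpha * r[-1] ** 2 + beta * sigma2[-1]
t_quantile = stats.t.ppf(0.01, nu) * np.sqrt((nu - 2) / nu)
var_1pct = -np.sqrt(sigma2_next) * t_quantile   # reported as a positive loss
```

The same recipe applies to a short position using the upper quantile, which is where the skewed-t distribution in the paper makes the two tails differ.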

9.
Daily and weekly seasonalities are always taken into account in day-ahead electricity price forecasting, but the long-term seasonal component has long been believed to add unnecessary complexity, and hence most studies have ignored it. The recent introduction of the Seasonal Component AutoRegressive (SCAR) modeling framework has changed this viewpoint. However, this framework is based on linear models estimated using ordinary least squares. This paper shows that considering non-linear autoregressive (NARX) neural-network-type models with the same inputs as the corresponding SCAR-type models can lead to even better performance. While individual Seasonal Component Artificial Neural Network (SCANN) models are generally worse than the corresponding SCAR-type structures, we provide empirical evidence that committee machines of SCANN networks can significantly outperform the latter.
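The committee-machine idea can be illustrated in a few lines: averaging several imperfect forecasters reduces error when their mistakes are imperfectly correlated. The "models" below are just noisy copies of a known signal, not SCANN networks:

```python
import numpy as np

# Toy illustration of a committee machine: averaging imperfect forecasts.
# The individual "models" are hypothetical noisy forecasters of a known curve.
rng = np.random.default_rng(3)
truth = np.sin(np.linspace(0, 6, 200))
forecasts = np.array([truth + rng.normal(scale=0.3, size=truth.size)
                      for _ in range(10)])       # 10 individual forecasters

individual_rmse = np.sqrt(((forecasts - truth) ** 2).mean(axis=1))
committee = forecasts.mean(axis=0)               # equal-weight committee
committee_rmse = np.sqrt(((committee - truth) ** 2).mean())
```

With independent errors, averaging 10 forecasters shrinks the error by roughly a factor of √10, so the committee beats even the best individual model.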

10.
A typical question in MDS is whether two alternative configurations that are both acceptable in terms of data fit may be considered "practically the same". To answer such questions on the equivalency of MDS solutions, Lingoes and Borg (1983) recently proposed a quasi-statistical decision strategy that allows one to take various features of the situation into account. This paper adds another important piece of information to that approach: for the Lingoes–Borg decision criterion R, we compute what proportion of R-values is greater or less than the observed coefficient if one were to consider all possible alternative distance sets within certain bounds defined by the observed fit coefficients for two alternative MDS solutions, what the limits of acceptability for such fit coefficients are, and how the observed MDS configurations are interrelated.

12.
Jorge M. Arevalillo. Metrika (2012) 75(8): 1009–1024
In this paper we study the relation between the r* saddlepoint approximation and the Edgeworth expansion under quite general assumptions on the statistic under consideration. We show that the two-term Edgeworth expansion approximates the r* formula up to an O(n^{-3/2}) remainder; this provides a new way of looking at the order of the error of the r* approximation. This finding is used to inspect the close connection between the r* formula and the Edgeworth B adjustment introduced by Phillips (Biometrika 65: 91–98, 1978). We show that, whenever an Edgeworth expansion exists, this adjustment approximates both the distribution function of the statistic and the r* formula to the same order as the Edgeworth expansion. Some numerical examples for the sample mean and U-statistics are given in order to shed light on the theoretical discussion.
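As a concrete instance of the kind of numerical comparison the abstract mentions, the two-term Edgeworth expansion for the standardized mean of Exponential(1) variables can be checked against the exact Gamma distribution; the r* saddlepoint formula itself is not reproduced here:

```python
import numpy as np
from scipy import stats

# Two-term Edgeworth expansion for the standardized mean of n Exp(1) variables,
# compared with the exact CDF (the sum of n Exp(1) variables is Gamma(n, 1)).
# n and x are illustrative choices.
n = 10
gamma1, gamma2 = 2.0, 6.0            # skewness and excess kurtosis of Exp(1)
x = 1.5

def edgeworth_cdf(x, n):
    phi, Phi = stats.norm.pdf(x), stats.norm.cdf(x)
    he2 = x**2 - 1                    # probabilists' Hermite polynomials
    he3 = x**3 - 3*x
    he5 = x**5 - 10*x**3 + 15*x
    term1 = gamma1 / (6 * np.sqrt(n)) * he2
    term2 = (gamma2 / 24 * he3 + gamma1**2 / 72 * he5) / n
    return Phi - phi * (term1 + term2)

exact = stats.gamma.cdf(n + x * np.sqrt(n), a=n)
err_normal = abs(stats.norm.cdf(x) - exact)
err_edgeworth = abs(edgeworth_cdf(x, n) - exact)
```

Even at n = 10 the Edgeworth approximation cuts the error of the plain normal approximation by more than an order of magnitude at this point.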

13.
The present study examined two general research questions pertaining to the passage of a law designed to encourage whistle-blowing: (a) Has the incidence of perceived wrongdoing, whistle-blowing, anonymous whistle-blowing, or retaliation changed following the passage of the law? (b) What variables predict the comprehensiveness of the retaliation that identified whistle-blowers claim to have experienced? One questionnaire was mailed to members of 15 organizations affected by the law in 1980 (n = 8,500), and a second was administered in 1983 (n = 4,700). There was some evidence that the law had beneficial effects; specifically, the incidence of perceived wrongdoing declined and whistle-blowing increased. Unfortunately, identified whistle-blowers were just as likely to experience retaliation in 1983 as they were in 1980. The predictors of the comprehensiveness of the retaliation experienced were generally the same in both years. The results tentatively suggest that more legal and organizational encouragement of whistle-blowing is needed.

14.
This paper provides a formal justification for the existence of subjective random components intrinsic to the outcome evaluation process of decision makers, as explicitly assumed in the stochastic choice literature. We introduce the concepts of admissible error function and generalized certainty equivalent, which allow us to analyze two different criteria, a cardinal and an ordinal one, when defining suitable approximations to expected utility values. Contrary to the standard literature, which requires irrational preferences, adjustment errors arise in a natural way within our setting; their existence follows directly from the disconnectedness of the range of the utility functions. Conditions for the existence of minimal errors are also studied. Our results imply that neither the cardinal nor the ordinal criterion necessarily provides the same evaluation for two or more different prospects with the same expected utility value. As a consequence, a rational decision maker may define two different generalized certainty equivalents when presented with the same prospect on two different occasions.
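The exact certainty-equivalent computation that the generalized certainty equivalent perturbs can be sketched as CE = u⁻¹(E[u(x)]); the utility function and prospects below are hypothetical:

```python
import numpy as np

# Minimal sketch of the exact certainty equivalent CE = u^{-1}(E[u(x)]).
# The paper's admissible error functions add a bounded subjective error on top
# of this exact calculation; utility and prospects here are hypothetical.
u = np.log                      # a standard risk-averse utility
u_inv = np.exp

def certainty_equivalent(outcomes, probs):
    expected_utility = np.dot(probs, u(outcomes))
    return u_inv(expected_utility)

# two different prospects engineered to share the same expected utility (= 1)
ce_a = certainty_equivalent(np.array([1.0, np.e ** 2]), np.array([0.5, 0.5]))
ce_b = certainty_equivalent(np.array([np.e]), np.array([1.0]))
```

Under the exact computation both prospects receive the same certainty equivalent e; the paper's point is that once bounded evaluation errors are admitted, the two evaluations need no longer coincide.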

15.
Results from cointegration tests clearly suggest that TFP and the relative price of investment (RPI) are not cointegrated. Evidence on the alternative possibility that they may nonetheless contain a common I(1) component generating long-horizon co-variation between them depends crucially on (i) whether structural breaks are allowed for, and (ii) the precise nature and timing of such breaks. Not allowing for breaks, the evidence points towards the presence of a common component inducing positive long-horizon covariation, which is compatible with the notion that the technology transforming consumption goods into investment goods is non-linear and that the RPI is also impacted by neutral shocks. Allowing for breaks, the evidence suggests that long-horizon covariation is either nil or negative. Assuming, for illustrative purposes, that the two series contain a common component inducing negative long-horizon covariation, evidence based on structural VARs shows that this common shock (i) plays an important role in macroeconomic fluctuations, explaining sizeable fractions of the forecast error variance of the main macro series, and (ii) generates 'disinflationary booms', characterized by transitory increases in hours and decreases in inflation.

17.
A geometric interpretation is developed for so-called reconciliation methodologies used to forecast time series that adhere to known linear constraints. In particular, a general framework is established that nests many existing popular reconciliation methods within the class of projections. This interpretation facilitates the derivation of novel theoretical results. First, reconciliation via projection is guaranteed to improve forecast accuracy with respect to a class of loss functions based on a generalised distance metric. Second, the Minimum Trace (MinT) method minimises expected loss for this same class of loss functions. Third, the geometric interpretation provides a new proof that forecast reconciliation using projections results in unbiased forecasts, provided that the initial base forecasts are also unbiased. Approaches for dealing with biased base forecasts are proposed. An extensive empirical study of Australian tourism flows demonstrates the theoretical results of the paper and shows that bias correction prior to reconciliation outperforms alternatives that only bias-correct or only reconcile forecasts.
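The projection view can be sketched for the simplest hierarchy, Total = A + B. With an identity weight matrix this is OLS reconciliation; MinT would replace the identity by the base-forecast error covariance. The base forecasts below are made-up numbers that violate the aggregation constraint:

```python
import numpy as np

# Sketch of reconciliation as a projection onto the coherent subspace span(S)
# for the hierarchy Total = A + B. W = I gives OLS reconciliation; MinT would
# use the base-forecast error covariance instead. Numbers are hypothetical.
S = np.array([[1.0, 1.0],      # Total
              [1.0, 0.0],      # A
              [0.0, 1.0]])     # B
base = np.array([100.0, 55.0, 40.0])   # incoherent: 55 + 40 != 100

W_inv = np.eye(3)                                       # identity weights
P = S @ np.linalg.inv(S.T @ W_inv @ S) @ S.T @ W_inv    # projection matrix
reconciled = P @ base
```

The reconciled vector satisfies the aggregation constraint exactly, and P is idempotent, confirming that reconciliation is indeed a projection.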

18.
This paper analyzes the relation between nominal exchange rate volatility and several macroeconomic variables, namely real output growth, excess credit, foreign direct investment (FDI) and the current account balance, in the Central and Eastern European EU member states. Using panel estimations for the period between 1995 and 2008, we find that lower exchange rate volatility is associated with higher growth, higher stocks of FDI, higher current account deficits, and higher excess credit. At the same time, the recent evidence seems to suggest that following the global financial crisis, "hard peg" countries may have experienced a more severe adjustment process than "floaters". The results are economically and statistically significant and robust.

19.
This paper tests the behavioral equivalence of a class of strategically equivalent mechanisms that also do not differ in terms of their procedures. In a private-value setting, we introduce a family of mechanisms, called Mechanism (α), that generalizes the standard first-price sealed-bid auction. In Mechanism (α), buyers are asked to submit a value which is then multiplied by α to calculate the bids in the auction. When α = 1, Mechanism (α) is the standard first-price sealed-bid auction. We show that for any α, calculated bids should be identical across mechanisms. We conduct a laboratory experiment to test the behavioral equivalence of this class of mechanisms under different values of α. Even though the procedure and environment do not change across auctions, we do not observe the same bidding behavior across these strategically equivalent mechanisms. Our research can inform the mechanism design literature with respect to the design of optimal mechanisms.

20.
We provide a new central limit theorem (CLT) for spatial processes under weak conditions that are plausible for many economic applications in which location is endogenous. In particular, our CLT is designed for problems that have some, but not necessarily all, of the following features: (i) Agents choose the locations of observations to maximize profits, welfare, or some other objective. (ii) The objects that are chosen (e.g., stores or brands) interact with one another. For example, they can be substitutes or complements. (iii) Interaction can be complex. In particular, interaction between i and j need not depend only on the distance between the locations of i and j, but can also depend on distance to or location of other observations k, or possibly on the number of other such observations.
