Similar documents
A total of 20 similar documents were retrieved.
1.
Even a cursory perusal of the social science literature suffices to show the prevalence of dichotomous thinking. Many of these dichotomies deal with some aspect of the “conceptual versus empirical” distinction. This paper shows that while dichotomies predominate for some reason, the actual research process that they are designed to represent deals minimally with three separate and necessary levels. We term these the conceptual level (X), the empirical level (X′), and the operational or indicator level (X″). This minimal three level model is applied to an analysis of philosophical foundations of measurement, specifically the formulations of Northrop and Bridgman. It is shown that both of these formulations are essentially dichotomous, while the phenomena they deal with are trichotomous. For example, Northrop's “concepts by postulation” and “concepts by intuition” are purportedly separate levels connected by an epistemic correlation. Application of the three level model reveals that both are true concepts, and thus belong on the same level of analysis (X). Similarly, application of the three level model to Bridgman's formulation shows that both mental and physical concepts belong on the same level (X). Bridgman's formulation is valuable in pointing out that operations are not restricted to one level of analysis, and in fact we see them to be crucial on all three levels. The three level model is not a panacea, but does provide an efficacious framework for the difficult but important task of analyzing the philosophical underpinning of measurement.

2.
Single-equation instrumental variable estimators (e.g., the k-class) are frequently employed to estimate econometric equations. This paper employs Kadane's (1971) small-σ method and a squared-error matrix loss function to characterize a single-equation class of optimal instruments, A. A is optimal (asymptotically for a small scalar multiple, σ, of the model's disturbance) in that all of its members are preferred to all non-members. From this characterization it is shown all k-class estimators and certain iterative estimators belong to A. However, non-iterative principal component estimators [e.g., Kloek and Mennes (1960)] are unlikely to belong to A. These latter instrumental variable estimators have been advocated [see Amemiya (1966) and Kloek and Mennes (1960)] for estimating ‘large’ econometric models.

3.
Journal of Econometrics, 2002, 111(2): 223-249
Cointegration occurs when the long-run multiplier matrix of a vector autoregressive model exhibits rank reduction. Using a singular value decomposition of the unrestricted long-run multiplier matrix, we construct a parameter that reflects the presence of rank reduction. Priors and posteriors of the parameters of the cointegration model follow from conditional priors and posteriors of the unrestricted long-run multiplier matrix given that the parameter that reflects rank reduction is equal to zero. This idea leads to a complete Bayesian framework for cointegration analysis. It includes prior specification, simulation schemes for obtaining posterior distributions and determination of the cointegration rank via Bayes factors. We apply the proposed Bayesian cointegration analysis to the Danish data of Johansen and Juselius (Oxford Bull. Econom. Stat. 52 (1990) 169).

4.
This paper makes three principal contributions to the growing body of empirically oriented research on dynamic factor demand systems that are based on the adjustment cost model of the firm. First, a simplified procedure is described for deriving demand and supply functions which are amenable to empirical estimation and which are consistent with intertemporal expected profit maximization and a general expectations formation process for future prices. Second, it is pointed out that estimation of a complete system of demand and supply functions permits the empirical identification of both the firm's technology and its expectations formation process. Finally, the procedure is applied to aggregate annual U.S. manufacturing data for the 1947-1977 period and the consistency of the data with the theoretical framework is investigated.

5.
This paper shows consistency of a two-step estimation of the factors in a dynamic approximate factor model when the panel of time series is large (n large). In the first step, the parameters of the model are estimated from an OLS on principal components. In the second step, the factors are estimated via the Kalman smoother. The analysis develops the theory for the estimator considered in Giannone et al. (2004) and Giannone et al. (2008) and for the many empirical papers using this framework for nowcasting.
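The two-step procedure described above lends itself to a compact numerical illustration. The sketch below assumes a single common factor with AR(1) dynamics and i.i.d. idiosyncratic noise; the data-generating process and all parameter names are illustrative assumptions, not the setup of Giannone et al. It first extracts the factor and loadings by principal components and OLS, then re-estimates the factor with a scalar-state Kalman filter and Rauch-Tung-Striebel smoother using the Step-1 parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 200, 50                 # illustrative panel dimensions
a_true = 0.7                   # assumed AR(1) coefficient of the factor

# --- simulate a one-factor panel (illustrative DGP, not the paper's) ---
f = np.zeros(T)
for t in range(1, T):
    f[t] = a_true * f[t - 1] + rng.normal()
lam_true = rng.normal(size=n)
X = np.outer(f, lam_true) + rng.normal(size=(T, n))

# --- Step 1: OLS on principal components ---
Xc = X - X.mean(axis=0)
_, eigvec = np.linalg.eigh(Xc.T @ Xc / T)
f_pc = Xc @ eigvec[:, -1]                       # leading principal component
lam_hat = Xc.T @ f_pc / (f_pc @ f_pc)           # loadings by OLS
a_hat = (f_pc[1:] @ f_pc[:-1]) / (f_pc[:-1] @ f_pc[:-1])       # factor AR(1)
q_hat = np.var(f_pc[1:] - a_hat * f_pc[:-1])                   # state noise variance
R_hat = np.diag(np.var(Xc - np.outer(f_pc, lam_hat), axis=0))  # idiosyncratic variances

# --- Step 2: Kalman filter + Rauch-Tung-Striebel smoother with Step-1 parameters ---
f_filt, P_filt = np.zeros(T), np.zeros(T)
f_pred, P_pred = np.zeros(T), np.zeros(T)
f_prev, P_prev = 0.0, 1.0
for t in range(T):
    f_p = a_hat * f_prev
    P_p = a_hat ** 2 * P_prev + q_hat
    S = np.outer(lam_hat, lam_hat) * P_p + R_hat       # innovation covariance
    K = P_p * np.linalg.solve(S, lam_hat)              # Kalman gain (1 x n)
    f_u = f_p + K @ (Xc[t] - lam_hat * f_p)
    P_u = (1.0 - K @ lam_hat) * P_p
    f_pred[t], P_pred[t], f_filt[t], P_filt[t] = f_p, P_p, f_u, P_u
    f_prev, P_prev = f_u, P_u

f_smooth = f_filt.copy()
for t in range(T - 2, -1, -1):                         # backward smoothing pass
    J = P_filt[t] * a_hat / P_pred[t + 1]
    f_smooth[t] = f_filt[t] + J * (f_smooth[t + 1] - f_pred[t + 1])

# the PC factor is identified only up to sign, hence the absolute correlation
print("|corr(true factor, smoothed factor)| =",
      abs(np.corrcoef(f, f_smooth)[0, 1]).round(3))
```

The second step pools the cross-section through the estimated loadings and imposes the factor's dynamics, which is what the nowcasting applications mentioned in the abstract exploit.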

6.
This paper presents an algebraic analysis of the graphs of the k-class estimator, its asymptotic standard error and asymptotic t-ratio as functions of k for a single structural equation containing one or more endogenous explanatory variables. These results are illustrated by the corresponding graphs of the second and fifth equations of the Girshick-Haavelmo (1947) Demand for Food Model. Tests of the rank condition for identification are also developed. They are found to involve the values of k which explode the k-class estimator.
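For readers unfamiliar with the family: with included regressors W = [Y, X1], full instrument set Z and annihilator M_Z = I − Z(Z′Z)⁻¹Z′, the k-class estimator is δ̂(k) = [W′(I − kM_Z)W]⁻¹W′(I − kM_Z)y, which reduces to OLS at k = 0 and to 2SLS at k = 1; this is why plotting the estimate against k is informative. The numpy sketch below traces the estimate over a few k values on a simulated one-endogenous-regressor design; the design, sample size and coefficient values are illustrative assumptions, not the Girshick-Haavelmo equations.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500                                       # illustrative sample size

# illustrative simulated design: one endogenous regressor, two excluded instruments
z = rng.normal(size=(N, 2))
common = rng.normal(size=N)                   # source of endogeneity
Y = 0.5 * z[:, 0] + 0.5 * z[:, 1] + common + rng.normal(size=N)
y = 1.0 * Y + common + rng.normal(size=N)     # true structural coefficient = 1.0

const = np.ones((N, 1))
W = np.column_stack([Y, const])               # included regressors: [endogenous, constant]
Z = np.column_stack([const, z])               # full instrument set
M_Z = np.eye(N) - Z @ np.linalg.solve(Z.T @ Z, Z.T)   # annihilator of the instruments

def k_class(k):
    """k-class estimator: [W'(I - k*M_Z)W]^{-1} W'(I - k*M_Z)y."""
    A = np.eye(N) - k * M_Z
    return np.linalg.solve(W.T @ A @ W, W.T @ A @ y)

for k in (0.0, 0.5, 1.0, 1.2):                # k = 0 is OLS, k = 1 is 2SLS
    print(f"k = {k:.1f}   coefficient on Y = {k_class(k)[0]: .3f}")
```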

7.
Data sharing in today's information society poses a threat to individual privacy and organisational confidentiality. k-anonymity is a widely adopted model to prevent the owner of a record being re-identified. By generalising and/or suppressing certain portions of the released dataset, it guarantees that no record can be uniquely distinguished from at least k−1 other records. A key requirement for the k-anonymity problem is to minimise the information loss resulting from data modifications. This article proposes a top-down approach to solve this problem. It first considers each record as a vertex and the similarity between two records as the edge weight to construct a complete weighted graph. Then, an edge cutting algorithm is designed to divide the complete graph into multiple trees/components. The large components with size bigger than 2k−1 are subsequently split to guarantee that each resulting component has a vertex number between k and 2k−1. Finally, the generalisation operation is applied to the vertices in each component (i.e. equivalence class) to make sure all the records inside have identical quasi-identifier values. We prove that the proposed approach has polynomial running time and a theoretical performance guarantee of O(k). The empirical experiments show that our approach results in substantial improvements over the baseline heuristic algorithms, as well as over the bottom-up approach with the same approximation bound O(k). Compared to the baseline bottom-up O(log k)-approximation algorithm, when the required k is smaller than 50, the adopted top-down strategy makes our approach achieve similar performance in terms of information loss while spending much less computing time. This demonstrates that our approach is a good choice for the k-anonymity problem when both data utility and runtime need to be considered, especially when k is set to a value smaller than 50 and the record set is big enough that runtime has to be taken into account.
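The following toy sketch illustrates what generalisation-based k-anonymity does to a small table of numeric quasi-identifiers. It uses a simple greedy nearest-neighbour grouping that keeps every group size between k and 2k−1, not the paper's graph-construction and edge-cutting algorithm, and the column names and records are made up.

```python
import numpy as np

def greedy_groups(records, k):
    """Greedily form groups of size between k and 2k-1 by nearest neighbours.
    (A simplified stand-in for the paper's graph edge-cutting step.)"""
    records = np.asarray(records, dtype=float)
    assert len(records) >= k, "need at least k records"
    unassigned = list(range(len(records)))
    groups = []
    while len(unassigned) >= 2 * k:              # leftover always ends up in [k, 2k-1]
        seed = unassigned[0]
        d = np.linalg.norm(records[unassigned] - records[seed], axis=1)
        nearest = [unassigned[i] for i in np.argsort(d)[:k]]
        groups.append(nearest)
        unassigned = [i for i in unassigned if i not in nearest]
    groups.append(unassigned)
    return groups

# made-up quasi-identifiers: (age, postcode)
data = np.array([(25, 4710), (27, 4711), (43, 4810),
                 (45, 4812), (44, 4811), (26, 4712)], dtype=float)

for g in greedy_groups(data, k=3):
    lo, hi = data[g].min(axis=0), data[g].max(axis=0)
    # generalise each quasi-identifier to its range, so the records in the
    # group become indistinguishable on the released attributes
    print(f"records {g} -> generalised to", [f"[{l:g}, {h:g}]" for l, h in zip(lo, hi)])
```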

8.
The Frobenius eigenvector of a positive square matrix is obtained by iterating the multiplication of an arbitrary positive vector by the matrix. Bródy (1997) noticed that, when the entries of the matrix are independently and identically distributed, the speed of convergence increases statistically with the dimension of the matrix. As the speed depends on the ratio between the subdominant and the dominant eigenvalues, Bródy's conjecture amounts to stating that this ratio tends to zero when the dimension tends to infinity. The paper provides a simple proof of the result. Some mathematical and economic aspects of the problem are discussed.
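Bródy's observation is easy to reproduce numerically. The sketch below runs the power method on random matrices with i.i.d. uniform(0, 1) entries of increasing dimension and reports the subdominant-to-dominant eigenvalue ratio alongside the number of iterations needed to converge; the tolerance, dimensions and entry distribution are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)

def frobenius_vector(A, tol=1e-10, max_iter=10_000):
    """Power method: repeatedly multiply a positive vector by A and normalise."""
    x = np.ones(A.shape[0])
    for it in range(max_iter):
        y = A @ x
        y /= y.sum()
        if np.abs(y - x).max() < tol:
            return y, it + 1
        x = y
    return x, max_iter

for n in (5, 50, 500):                       # illustrative dimensions
    A = rng.uniform(size=(n, n))             # i.i.d. positive entries
    eig = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]
    _, iters = frobenius_vector(A)
    print(f"n = {n:4d}   |lambda_2/lambda_1| = {eig[1] / eig[0]:.3f}   iterations = {iters}")
```

As the dimension grows, the ratio shrinks and the iteration count drops, which is the statistical speed-up Bródy noticed.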

9.
Exponential smoothing procedures, in particular those recommended by Brown [1962], are used extensively in many areas of economics, business and engineering. It is shown in this paper that:
  1. Brown's forecasting procedures are optimal in terms of achieving minimum mean square error forecasts only if the underlying stochastic process is included in a limited subclass of ARIMA (p, d, q) processes. Hence, it is shown what assumptions are made when using these procedures; a concrete instance is sketched after this list.
  2. The implication of point (1) is that the users of Brown's procedures tacitly assume that the stochastic processes which occur in the real world are from the particular restricted subclass of ARIMA (p, d, q) processes. No reason can be found why these particular models should occur more frequently than others.
  3. It is further shown that even if a stochastic process which would lead to Brown's model occurred, the actual methods used for making the forecasts are clumsy and much simpler procedures can be employed.
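As a concrete instance of point 1: simple exponential smoothing with parameter α yields minimum mean square error one-step forecasts when the series follows an ARIMA(0, 1, 1) process with moving-average parameter θ = 1 − α. The sketch below simulates such a process and checks that the forecast MSE is smallest when the smoothing parameter matches; the sample size and θ value are arbitrary illustrative settings.

```python
import numpy as np

rng = np.random.default_rng(7)
T, theta = 2000, 0.6           # arbitrary illustrative settings
alpha = 1.0 - theta            # SES parameter matching the ARIMA(0,1,1) process

# simulate ARIMA(0,1,1): (1 - B) y_t = (1 - theta*B) eps_t
eps = rng.normal(size=T)
dy = eps - theta * np.concatenate(([0.0], eps[:-1]))
y = np.cumsum(dy)

def ses_forecasts(y, a):
    """One-step-ahead simple exponential smoothing forecasts."""
    f = np.empty(len(y))
    f[0] = y[0]
    for t in range(1, len(y)):
        f[t] = a * y[t - 1] + (1 - a) * f[t - 1]
    return f

for a in (0.2, alpha, 0.8):
    err = y[1:] - ses_forecasts(y, a)[1:]
    print(f"alpha = {a:.1f}  one-step forecast MSE = {np.mean(err ** 2):.3f}")
# the MSE is (close to) smallest at alpha = 1 - theta, i.e. when the SES
# parameter matches the underlying ARIMA(0,1,1) process
```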

10.
The purpose of the present investigation is to examine the influence of sample size (N) and model parsimony on a set of 22 goodness-of-fit indices, including those typically used in confirmatory factor analysis and some recently developed indices. For sample data simulated from two known population data structures, values for 6 of 22 fit indices were reasonably independent of N and were not significantly affected by estimating parameters known to have zero values in the population: two indices based on noncentrality described by McDonald (1989; McDonald and Marsh, 1990), a relative (incremental) index based on noncentrality (Bentler, 1990; McDonald & Marsh, 1990), unbiased estimates of LISREL's GFI and AGFI (Joreskog & Sorbom, 1981) presented by Steiger (1989, 1990) that are based on noncentrality, and the widely known relative index developed by Tucker and Lewis (1973). Penalties for model complexity designed to control for sampling fluctuations and to address the inevitable compromise between goodness of fit and model parsimony were evaluated.

11.
The paper considers n-dimensional VAR models for variables exhibiting cointegration and common cyclical features. Two specific reduced rank vector error correction models are discussed. In one, named the “strong form” and denoted by SF, the collection of all coefficient matrices of a VECM has rank less than n; in the other, named the “weak form” and denoted by WF, the collection of all coefficient matrices except the matrix of coefficients of the error correction terms has rank less than n. The paper explores the theoretical connections between these two forms, suggests asymptotic tests for each form and examines the small sample properties of these tests by Monte Carlo simulations.
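For reference, the standard VECM representation behind the SF/WF terminology can be written as follows (generic notation, not necessarily the paper's):

```latex
% n-dimensional VECM with the cointegration rank given by rank(\alpha\beta')
\Delta y_t = \alpha\beta' y_{t-1} + \sum_{i=1}^{p} \Gamma_i \,\Delta y_{t-i} + \varepsilon_t,
\qquad y_t \in \mathbb{R}^n .
% SF: the stacked matrix [\alpha\beta' \;\; \Gamma_1 \;\cdots\; \Gamma_p] has rank < n
% WF: the stacked matrix [\Gamma_1 \;\cdots\; \Gamma_p] has rank < n
```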

12.
One important but unrealistic assumption in the simplified Alonso–Mills–Muth (AMM(0)) model is that the composite good is ubiquitous and thus there is zero shopping cost for residents. This paper assumes that the composite good is only sold by a monopoly vendor inside the city and hence a shopping cost is inevitable for residents. It is shown that the vendor will locate at the city boundary in equilibrium. In contrast to the symmetric land rent pattern in the AMM(0) model, the current AMM(k) model offers an asymmetric land rent pattern in equilibrium. Moreover, this paper shows that a rent-maximizing government either regulates the vendor to locate at the central business district (CBD) (when income is high) or does not enact any regulation (when income is low).

13.
The Condorcet efficiency of single-stage election procedures is considered under the assumption of impartial culture for large electorates. The most efficient ranked voting rule is either Borda rule or a truncated scoring rule. A decision rule is established to determine the number of candidates, k, that individuals should be required to vote for, whether or not ranking should be required, and the scoring rule that should be used if ranking is required. This decision depends upon the number of candidates available and the probabilities that individuals will vote if they must rank k candidates or simply report k candidates.
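To make the notion of Condorcet efficiency concrete, the Monte Carlo sketch below draws uniformly random strict rankings (impartial culture), finds the Condorcet winner when one exists, and records how often the Borda winner agrees with it. The electorate size, number of candidates and replication count are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n_voters, n_cands, reps = 25, 4, 2000        # arbitrary illustration settings

def condorcet_winner(rankings):
    """Candidate beating every other in pairwise majority, or None."""
    pos = np.argsort(rankings, axis=1)       # pos[v, c] = rank of candidate c for voter v
    for c in range(n_cands):
        if all(np.sum(pos[:, c] < pos[:, d]) > n_voters / 2
               for d in range(n_cands) if d != c):
            return c
    return None

hits = total = 0
for _ in range(reps):
    # impartial culture: each voter draws a uniformly random strict ranking
    rankings = np.array([rng.permutation(n_cands) for _ in range(n_voters)])
    cw = condorcet_winner(rankings)
    if cw is None:
        continue
    pos = np.argsort(rankings, axis=1)
    borda = (n_cands - 1 - pos).sum(axis=0)  # candidate ranked j-th gets n_cands-1-j points
    total += 1
    hits += int(np.argmax(borda) == cw)

print(f"estimated Condorcet efficiency of the Borda rule: {hits / total:.3f}")
```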

14.
The negative relationship between stock market P/E ratios and government bond yields seems to have become conventional wisdom among practitioners. However, limited empirical evidence and a misleading suggestion that the model originated in the Fed are used to support the model's plausibility. This article argues that the Fed model is flawed from a theoretical standpoint and reports evidence from 20 countries that seriously questions its empirical merits. Despite its widespread use and acceptance, the Fed model is found to be a failure both as a normative and as a positive model of equity pricing.
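The "Fed model" being tested is usually stated as the forward earnings yield on equities equalling the nominal ten-year government bond yield, so that the "fair" P/E is the reciprocal of the bond yield (a generic statement of the relation, not the article's notation):

```latex
% Fed model: the equilibrium earnings yield equals the nominal long bond yield
\frac{E_{t+1}}{P_t} = Y^{10}_t
\quad\Longleftrightarrow\quad
\left(\frac{P}{E}\right)^{\text{fair}}_t = \frac{1}{Y^{10}_t}.
```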

15.
This paper aims to develop a new methodology to measure and decompose global DMU efficiency into efficiencies of inputs (or outputs). The basic idea rests on the fact that a global DMU efficiency score might be misleading when managers proceed to reallocate their inputs or redefine their outputs. The literature provides a basic measure for the global DMU efficiency score. A revised model was developed for measuring efficiencies of global DMUs and their input (or output) efficiency components, based on a hypothesis of virtual DMUs. The present paper suggests a method for measuring global DMU efficiency simultaneously with the efficiencies of its input components, which we call the input decomposition DEA model (ID-DEA), and the efficiencies of its output components, which we call the output decomposition DEA model (OD-DEA). These twin models differ from the super-efficiency model (SE-DEA) and the common set of weights model (CSW-DEA). The twin models (ID-DEA, OD-DEA) were applied to agricultural farms, and the results gave different efficiency scores for inputs (or outputs), while at the same time the global DMU efficiency score was given by the Charnes, Cooper and Rhodes (CCR) model (Charnes et al., 1978) [1]. The rationale for our new hypothesis and model is the fact that managers do not have the same level of information about all inputs and outputs, which constrains them to manage resources by the (global) efficiency scores. Each input or output therefore has a different reality depending on the manager's decision in relation to the information available at the time of decision. This paper decomposes global DMU efficiency into input (or output) components' efficiencies. Each component has its own score instead of a single global DMU score. These findings would improve management decision making about reallocating inputs and redefining outputs. Concerning the policy implications of the DEA twin models, they help policy makers to assess, improve and reorient their strategies and execute programs towards enhancing best practices and minimising losses.
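For context, the global (CCR) efficiency score referred to above is the optimum of the input-oriented envelopment linear programme solved once per DMU. The scipy sketch below computes that global score only (it does not implement the ID-DEA/OD-DEA decompositions proposed in the paper), and the input/output data are made up.

```python
import numpy as np
from scipy.optimize import linprog

# made-up data: columns are DMUs, rows are inputs / outputs
X = np.array([[4.0, 7.0, 8.0, 4.0, 2.0],     # input 1 per DMU
              [3.0, 3.0, 1.0, 2.0, 4.0]])    # input 2 per DMU
Y = np.array([[1.0, 1.0, 1.0, 1.0, 1.0]])    # single output per DMU

def ccr_score(o):
    """Input-oriented CCR envelopment LP for DMU o:
       min theta  s.t.  X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o],  lam >= 0."""
    m, J = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(J)]                       # decision variables: [theta, lam_1..lam_J]
    A_in = np.hstack([-X[:, [o]], X])                 # X lam - theta * x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])         # -Y lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    bounds = [(None, None)] + [(0.0, None)] * J
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

for o in range(X.shape[1]):
    print(f"DMU {o}: CCR efficiency = {ccr_score(o):.3f}")
```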

16.
The Bayesian estimation of the mean vector θ of a p-variate normal distribution under the linear exponential (LINEX) loss function is studied when, as a special restricted model, it is suspected that for a known p × r matrix Z the hypothesis θ = Zβ, β ∈ ℝ^r, may hold. In this area we show that the Bayes and empirical Bayes estimators dominate the unrestricted estimator (when nothing is known about the mean vector θ).
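For reference, the LINEX loss in its usual parameterisation and the corresponding Bayes estimator are given below; this is the standard textbook form, which may differ in notation from the paper's:

```latex
% LINEX loss for estimating \theta by \delta; a \neq 0 controls the asymmetry
L(\delta,\theta) = b\left[\, e^{a(\delta-\theta)} - a(\delta-\theta) - 1 \,\right], \qquad b>0,
% Bayes estimator under LINEX loss (assuming E[e^{-a\theta} \mid x] is finite)
\delta_{B}(x) = -\frac{1}{a}\,\ln \mathbb{E}\!\left[ e^{-a\theta} \mid x \right].
```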

17.
Because of the increased availability of large panel data sets, common factor models have become very popular. The workhorse of the literature is the principal components (PC) method, which is based on an eigen-analysis of the sample covariance matrix of the data. Some of its uses are to estimate the factors and their loadings, to determine the number of factors, and to conduct inference when estimated factors are used in panel regression models. The bulk of the underlying theory that justifies these uses is based on the assumption that both the number of time periods, T, and the number of cross-section units, N, tend to infinity. This is a drawback, because in practice T and N are always finite, which means that the asymptotic approximation can be poor, and there are plenty of simulation results that confirm this. In the current paper, we focus on the typical micro panel where only N is large and T is finite and potentially very small—a scenario that has not received much attention in the PC literature. A version of PC is proposed, henceforth referred to as cross-section average-based PC (CPC), whereby the eigen-analysis is performed on the covariance matrix of the cross-section averaged data as opposed to on the covariance matrix of the raw data as in original PC. The averaging attenuates the idiosyncratic noise, and this is the reason why in CPC T can be fixed. Mirroring the development in the PC literature, the new method is used to estimate the factors and their average loadings, to determine the number of factors, and to estimate factor-augmented regressions, leading to a complete CPC-based toolbox. The relevant theory is established, and is evaluated using Monte Carlo simulations.

18.
Some nonparametric latent trait models for dichotomous data are considered. We deal with n subjects, each answering the same set of k items, each item being scored dichotomously. We are interested in ordering the item difficulties α1, ..., αk. In Sec. 2 it is shown that in the considered nonparametric models the ordering is identifiable. Then an order estimator is defined and its quality is described by the probabilities of correct, wrong and deferred decisions. The asymptotic behaviour of these probabilities is considered for n→∞ and any k≥2. The hypothesis that the probability of a wrong decision diminishes when the model is “more distant” from the so-called random response model is proved for n≤3 and verified numerically for n≥3. In Sec. 4 we discuss critically some parameters of nonparametric models known in the literature as “coefficients of scalability”. In particular, for k=2 their connections with the evaluation of positive dependence are considered.

19.
In this paper, we argue that environmental economists who have dedicated their attention to problems of market and regulatory failure have been remiss in ignoring the potential for failure in the one institution that actually manages environmental resources - the business firm. Traditionally the firm has been modelled as a unitary, rational, optimising persona ficta. There is, however, abundant empirical and theoretical evidence to suggest that the business firm is an imperfect institution in that there are systematic deviations between the environmental objectives of the firm's leaders (principals) and the actions of the firm's employees (agents) which determine environmental performance. In the paper, we draw parallels between the causes of market failure and public policy tools to correct them on one hand and the causes of organisational failure and the management tools suited to their remedy on the other. Although much of the paper is concerned with the interrelationship between public policy that promotes sustainability and business policy to fashion a sustainable enterprise, our work is relevant irrespective of the reason why a firm's principal may want to improve environmental performance. No matter what the reason, the principal must concern him- or herself with operationalising objectives in management systems. It is consistent with the precautionary principle to assume that employees will do what the firm measures and rewards, not what its principal says is important. We build a verbal model, based on the language of principal-agent theory, to analyse how different management instruments might be employed to improve the firm's environmental performance. The model is one of three decision makers in a vertical hierarchy. Each of the first two has various instruments at its disposal to influence the behaviour of the agents subordinate to it. In the end, the goal is to ensure consistency between social, economic, and personal objectives. The specific management tools we analyse, with reference to the formal modelling which has appeared in the literature, include the compensation system, quantification and monitoring of non-financial objectives, internal pricing, horizontal task restructuring, centralisation vs. decentralisation of decision making, and corporate sanctions of agents for negligence. We conclude the paper by reiterating that the corporate policy statements to the effect that the firm should respect the environment are insufficient to ensure that result. In addition, firms' principals must operationalise that goal in the systems of measurement and control which govern the behaviour of those who really matter - the employees.

20.
This study analyzes mean probability distributions reported by ASA-NBER forecasters on two macroeconomic variables, GNP and the GNP implicit price deflator. In the derivation of expectations, a critical assertion has been that the aggregate average expectation can be regarded as coming from a normal distribution. We find that, in fact, this assumption should be rejected in favor of distributions which are more peaked and skewed. For the IPD, they are mostly positively skewed, and for nominal GNP the reverse is true. We then show that a non-central scaled t-distribution fits the empirical distributions remarkably well. The practice of using the degree of consensus across a group of predictions as a measure of a typical forecaster's uncertainty about the prediction is called into question.
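The kind of distributional comparison described above can be sketched with scipy: fit both a normal and a location-scale noncentral t to a skewed, heavy-tailed sample and compare log-likelihoods. The synthetic sample merely stands in for the empirical ASA-NBER distributions, and scipy's generic maximum-likelihood fit for the noncentral t can be slow or sensitive to starting values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
# synthetic stand-in for an empirical forecast distribution: skewed, heavy-tailed
sample = stats.nct(df=5, nc=1.5, loc=0.02, scale=0.01).rvs(size=500, random_state=rng)

# fit a normal and a (location-scale) noncentral t by maximum likelihood;
# the generic MLE for nct can be slow and sensitive to starting values
mu, sigma = stats.norm.fit(sample)
df, nc, loc, scale = stats.nct.fit(sample)

ll_norm = stats.norm.logpdf(sample, mu, sigma).sum()
ll_nct = stats.nct.logpdf(sample, df, nc, loc, scale).sum()
print(f"sample skewness:              {stats.skew(sample): .2f}")
print(f"normal       log-likelihood:  {ll_norm: .1f}")
print(f"noncentral t log-likelihood:  {ll_nct: .1f}   (higher = better fit)")
```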
