Similar Literature
20 similar documents found.
1.
From the perspective of information theory, accounting is essentially a process of aggregating and disaggregating data: if information users' preferences are held constant, aggregating data necessarily causes information loss, while disaggregating data necessarily provides information compensation. Accounting Standard for Business Enterprises No. 30, "Presentation of Financial Statements", issued in early 2006, made substantial changes to the presentation of the income statement. Drawing on information entropy theory, this paper constructs an evaluation model for statement preparation that differs from the traditional approach and, by restating the 2001-2005 income statements of a listed company, empirically examines the information loss and compensation induced by Standard No. 30; on this basis, a general procedure for statement preparation is proposed.
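The entropy idea in the abstract above can be sketched numerically: treating line items as shares of a total, collapsing several items into one always reduces Shannon entropy, and the size of the drop measures the information lost to aggregation. A minimal illustration (the line-item figures and grouping are hypothetical, not taken from the paper):

```python
import math

def entropy(values):
    """Shannon entropy (bits) of the share distribution implied by values."""
    total = sum(values)
    shares = [v / total for v in values if v > 0]
    return -sum(p * math.log2(p) for p in shares)

# Hypothetical income-statement line items (detailed presentation)
detailed = [400, 250, 150, 120, 80]

# Aggregated presentation: the last three items collapsed into one
aggregated = [400, 250, 150 + 120 + 80]

h_detailed = entropy(detailed)
h_aggregated = entropy(aggregated)
info_loss = h_detailed - h_aggregated  # always >= 0; > 0 here
```

Disaggregating restores the lost entropy, which is the "compensation" side of the argument.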

2.
The theoretical literature has established that the endogeneity of the disaggregated-data model and the endogeneity generated by the aggregation process are the fundamental sources of aggregation bias. However, because endogeneity concerns correlation between the error term and the regressors, it is difficult to examine the effect of these two types of endogeneity on aggregation bias by purely empirical methods; numerical simulation is a far better-suited tool. In our simulations, we control the distributional form of the random variables and the degree of correlation among them, which allows a comprehensive and detailed examination of the aggregation bias produced by the two types of endogeneity. The study provides strong evidence for the endogeneity explanation of aggregation bias.

3.
Analysts carrying out input–output analyses of environmental issues are often plagued by environmental and input–output data existing in different classifications, with environmentally sensitive sectors sometimes being aggregated in the economic input–output database. In principle there are two alternatives for dealing with such misalignment: either environmental data have to be aggregated into the input–output classification, which entails an undesirable loss of information, or input–output data have to be disaggregated based on fragmentary information. In this article, I show that disaggregation of input–output data, even if based on few real data points, is superior to aggregating environmental data in determining input–output multipliers. This is especially true if the disaggregated sectors are heterogeneous with respect to their economic and environmental characteristics. The results of this work may help analysts in understanding that disaggregation based on even a small amount of proxy information can improve the accuracy of input–output multipliers significantly. Perhaps, these results will also provide encouragement for preferring model disaggregation to aggregation in future work.
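The multiplier effect described above can be sketched with a toy flow table: compute Leontief output multipliers for a three-sector economy, then for the same economy with two heterogeneous sectors aggregated, and observe that the aggregated sector's multiplier differs from either of its components'. All figures below are hypothetical, for illustration only:

```python
import numpy as np

def multipliers(Z, x):
    """Leontief output multipliers: column sums of (I - A)^-1, with A = Z[i,j] / x[j]."""
    A = Z / x  # broadcasting divides column j by gross output x[j]
    L = np.linalg.inv(np.eye(len(x)) - A)
    return L.sum(axis=0)

# Hypothetical 3-sector inter-industry flow table; sectors 2 and 3 heterogeneous
Z3 = np.array([[10., 20.,  5.],
               [15.,  5., 30.],
               [ 5., 10., 10.]])
x3 = np.array([100., 80., 60.])  # gross outputs

# Aggregate sectors 2 and 3: sum their rows and columns
S = np.array([[1., 0., 0.],
              [0., 1., 1.]])
Z2 = S @ Z3 @ S.T
x2 = S @ x3

m3 = multipliers(Z3, x3)  # disaggregated multipliers
m2 = multipliers(Z2, x2)  # aggregated multipliers
```

With heterogeneous coefficient columns, the aggregated multiplier is a distinct value that matches neither underlying sector, which is why disaggregation (even crude) matters for multiplier accuracy.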

4.
Measurement of technical efficiency is carried out at many levels of aggregation—at the individual branch, plant, division, or district level; at the company- or organization-wide level; at the industry or sectoral level; or at the economy-wide level. In this paper, we examine the conditions under which these indexes constructed at various levels of aggregation can be consistent with one another—that is, the extent to which efficiency indexes can be consistently aggregated. Unfortunately, our results are discouraging, indicating that very strong restrictions on the technology and/or the efficiency index itself are required to enable consistent aggregation (or disaggregation).

5.
The S-curve summarizes the dynamic relationship between the terms of trade and the trade balance. This pattern has received only weak support in some developed and less developed countries when aggregate trade data are used. Empirical regularities based on aggregate trade data can be biased, since aggregation can suppress patterns observed in trade at the bilateral level. This paper overcomes the problem by employing bilateral trade data for Sweden and finds that the S-curve survives at this level of disaggregation: Sweden exhibits a bilateral S-curve in 12 of the 17 cases examined over the 1980Q1-2005Q1 period.

6.
Data envelopment analysis (DEA) has recently become popular with road safety experts, and various decision-making units (DMUs), such as EU countries, have been assessed in terms of road safety performance (RSP). The DEA has been criticized, however, because it evaluates DMUs only through self-assessment and therefore does not provide a unique ranking. The cross efficiency method (CEM) was developed to overcome this shortcoming: by adding peer-evaluation to self-evaluation, the CEM has come to be recognized as an effective method for ranking DMUs. The traditional CEM is based only on the standard CCR (Charnes, Cooper and Rhodes) model and evaluates DMUs by their position relative to the best-practice frontier while neglecting the worst-practice frontier, even though DMUs can also be assessed by their position relative to the latter. The present study therefore provides a double-frontier CEM for assessing RSP that takes the best and worst frontiers into account simultaneously, generating both cross efficiency and cross anti-efficiency matrices. Although a weighted average method (WAM) is most frequently used for cross efficiency aggregation, it may fail to reflect the decision maker's (DM) preference structure; for this reason, the present study focuses on the evidential reasoning approach (ERA), a nonlinear aggregation method, rather than the linear WAM. Because equal weights are often used for cross efficiency aggregation, the effect of the DM's subjective judgments on the overall efficiency is ignored; the minimax entropy approach (MEA) and the maximum disparity approach (MMDA) are therefore applied to determine the ordered weighted averaging (OWA) operator weights. The weighted cross efficiencies and cross anti-efficiencies are then aggregated using the ERA.
Finally, the proposed method, called DF-CEM-ERA, is used to evaluate the RSP of EU countries as well as Serbian police departments (PDs).

7.
In this paper we offer a method for deciding how to aggregate a set of elementary industries. The method is then applied to the problem of estimating a wage equation that allows for industry-specific effects. Our method explicitly formalizes the trade-off between goodness-of-fit and parsimony implicit in an aggregation problem. By varying the parameter of the assumed loss function, one obtains a whole sequence of aggregation levels. Further, the resulting sequence is consistent; that is, groupings formed at one level of aggregation will never be undone when one aggregates further.

8.
Most empirical studies for the post-Bretton Woods period fail to find evidence of a long-run Purchasing Power Parity (PPP) relationship. This study investigates the failure of PPP using disaggregated price data. The disaggregation is on two levels: location (prices from US and Canadian cities rather than national aggregates) and type of goods (e.g., fuel oil, a tradable commodity, and local public transportation, a non-tradable). This disaggregation allows testing of the importance of borders (implying an exchange rate), distances, and types of goods in the failure of PPP. The analysis suggests that both country borders and distances play a significant role, but the evidence on type of goods as a determinant of the failure of PPP is mixed.

9.
Preference aggregation is investigated here for a society defined as a measure space of individuals, called a measure society. Individual preferences are represented through continuous von Neumann-Morgenstern (vNM) utilities. It is shown that aggregating preferences in a utilitarian way is possible for any kind of measure society under suitably adapted Pareto conditions.

10.
Regulators such as the SEC and standard setting bodies such as the FASB and the IASB argue the case for the conceptual desirability of fair value measurement, notably on the relevance dimension. Recent standards on financial instruments and certain non-financial items adopt the new measurement paradigm. This paper takes issue with the notion of decision usefulness of a fair-value-based reporting system from a theoretical perspective. Emphasis is put on the evaluation of the theoretical soundness of the arguments put forward by regulators and standard setting bodies. The analysis is conducted as economic (a priori) analysis. Two approaches to decision usefulness are adopted, the measurement or valuation perspective and the information perspective. Findings indicate that the decision relevance of fair value measurement can be justified from both perspectives, yet the conceptual case is not strong. The information aggregation notion that underlies standard setters' endorsement of fair value measurement turns out to be theoretically restricted in its validity and applicability. Also, comparative analysis of fair value accounting vs. historical cost accounting yields mixed results. One immediate implication of the research – a condition for the further implementation of fair value accounting – is the need to clarify standard setters' notion of accounting income, its presumed contribution to decision relevance and its disaggregation.

11.
We study the Maximal Covering Location Problem with Accessibility Indicators and Mobile Units (MCLP-AIMU), which maximizes facility coverage, the accessibility of zones to the open facilities, and spatial disaggregation. The distinguishing characteristic of our problem is that mobile units can be deployed from open facilities to extend coverage, accessibility, and opportunities for the inhabitants of the different demand zones. We formulate the MCLP-AIMU as a mixed-integer linear programming model. To solve larger instances, we propose a matheuristic (a combination of exact and heuristic methods) composed of an Estimation of Distribution Algorithm and a parameterized MCLP-AIMU integer model. To test our methodology, we apply the MCLP-AIMU model to covering low-income zones with Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) patients. Using official databases, we built a set of instances that account for the poverty index, population size, hospital locations, and SARS-CoV-2 patients. The experimental results show the efficiency of our methodologies: compared to the case without mobile units, coverage and accessibility for the inhabitants of the demand zones improve drastically.

12.
By aggregating simple, possibly dependent, dynamic micro-relationships, it is shown that the aggregate series may follow univariate long-memory models and obey integrated, or infinite-length, transfer function relationships. A long-memory time series model is one whose spectrum is of order ω^(-2d) for small frequencies ω, with d > 0. These models have infinite variance for d ≥ 1/2 but finite variance for d < 1/2. For d = 1 one obtains series that must be differenced to achieve stationarity, but this case does not arise from aggregation. It is suggested that if series obeying such models occur in practice through aggregation, then the techniques presently being used for analysis are not appropriate.
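The mechanism in the abstract above can be illustrated by simulation: sum many independent AR(1) micro-series whose coefficients are drawn from a Beta distribution with mass near one, and the aggregate's autocorrelations decay far more slowly than a typical micro-series' (the long-memory signature). The parameter choices below are hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 2000, 600  # number of micro-series and series length (hypothetical)

# AR(1) coefficients from a Beta distribution concentrated near 1
phi = rng.beta(4.0, 1.0, size=N)

# Simulate each micro-relationship x_t = phi * x_{t-1} + eps_t
x = np.zeros((N, T))
eps = rng.standard_normal((N, T))
for t in range(1, T):
    x[:, t] = phi * x[:, t - 1] + eps[:, t]

agg = x.sum(axis=0)  # the aggregate series

def acf(series, lag):
    """Sample autocorrelation of a series at a given lag."""
    s = series - series.mean()
    return float(np.dot(s[:-lag], s[lag:]) / np.dot(s, s))

# Average lag-20 autocorrelation over micro-series of median persistence,
# versus the aggregate's lag-20 autocorrelation
mid = np.argsort(phi)[N // 2 - 25 : N // 2 + 25]
acf_micro_20 = float(np.mean([acf(x[i], 20) for i in mid]))
acf_agg_20 = acf(agg, 20)
```

The aggregate's lag-20 autocorrelation stays well above the typical micro-series', because the near-unit-root components dominate the sum: a simple instance of aggregation inducing long memory.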

13.
This article deals with several problems that arise when the Theil coefficient of income inequality is computed in practice.
Aggregation of income data into brackets leads to an underestimation of the true Theil inequality, which is defined as the value of the coefficient as computed from individual income data. The assumption that the individual incomes are distributed according to a linear density function within the income brackets is suggested as a method to estimate this aggregation error. Calculations show that this method approximates the true aggregation error reasonably well.
Several methods are discussed concerning the treatment of negative incomes. In particular, one can construct an income bracket that contains both negative and positive incomes and that is weighted with zero weight in the summation formula. Of all the methods, this procedure, using the assumption of a linear density function within brackets, yields the highest value of the Theil coefficient and is therefore preferred to the other alternatives.
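The underestimation from bracketing described above can be checked numerically: replacing each income by its bracket mean keeps only the between-bracket part of the Theil decomposition and discards the within-bracket part, so the grouped value always falls below the true one. A small sketch with hypothetical incomes and brackets (using bracket means rather than the paper's fitted linear density):

```python
import math

def theil(incomes):
    """Theil T coefficient: mean of (y/mu) * ln(y/mu) over individuals."""
    mu = sum(incomes) / len(incomes)
    return sum((y / mu) * math.log(y / mu) for y in incomes) / len(incomes)

# Hypothetical individual incomes (all positive)
individual = [12, 18, 25, 40, 55, 90, 150, 260]

# Group into three hypothetical brackets, replacing incomes by bracket means
brackets = [individual[0:3], individual[3:6], individual[6:8]]
bracketed = []
for b in brackets:
    mean_b = sum(b) / len(b)
    bracketed.extend([mean_b] * len(b))

t_true = theil(individual)       # Theil from individual data
t_bracketed = theil(bracketed)   # Theil from grouped data: underestimates t_true
```

The gap t_true - t_bracketed is exactly the within-bracket inequality lost to aggregation, which is the "aggregation error" the paper's linear-density assumption tries to estimate.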

14.
This paper gives necessary and sufficient conditions for the aggregation of preferences, extending an earlier treatment of aggregation by Stolper, Gorman, Samuelson and Chipman. Such aggregation procedures are intended to deal with the problem of aggregating demand functions in econometrics, where the aggregate is required to be independent of the income distribution. Thus, it is usually assumed in this form of aggregation that all consumers face the same prices, but that the distribution of income is unrestricted. To establish the characterisation result, we present a new approach to preference aggregation that involves summing certain subsets of the graphs of the preferences, viewed as subsets of a Euclidean space. This procedure has a clear geometrical interpretation and a number of useful applications. In particular, it enables us to analyse the possibility of aggregation when prices are not constrained to be the same for all consumers, a case of possible empirical significance. We also show that the Stolper-Gorman-Samuelson-Chipman construction of community indifference curves coincides with a special case of this procedure. Finally, this approach allows us to develop the relationship between these forms of aggregation and the preference aggregation problem as it occurs in social choice theory.

15.
Not all components of earnings are expected to provide similar information regarding future earnings. For example, basic financial statement analysis indicates that the persistence of ordinary income should be greater than the persistence of special, extraordinary, or discontinued operations. Because the market assigns higher multiples to earnings components that are more persistent, differentiating earnings components on the basis of relative persistence would appear to be useful. A focus on relative predictive value is consistent with research findings and user recommendations on separating earnings components that are persistent or permanent from those that are transitory or temporary. This paper examines the persistence and forecast accuracy of earnings components for retail and manufacturing companies listed in the world's two largest equity markets: the USA and Japan. We find that the forecast accuracy of earnings in both the USA and Japan increases with greater disaggregation of earnings components. The results further indicate that the improvements in forecast accuracy due to earnings disaggregation are greater in the USA than in Japan. The greater emphasis and more detailed guidelines for reporting earnings components in the USA produce better differentiation in the persistence of earnings components, resulting in greater forecast improvements from earnings disaggregation.

16.
This study analyses the value relevance of the different components of the earnings figure that appear in the Spanish profit and loss account, in order to determine the level of disaggregation preferred by investors. Disaggregation may help in evaluating earnings quality, that is, the predictive ability of earnings about future earnings. We use a valuation model based on Ohlson (1995), which models firm value as a function of the book value of equity and earnings, adding the earnings components to determine whether they provide incremental price-relevant information beyond aggregate earnings. In addition, we allow the parameters to vary under some firm-specific circumstances. Our results support the usefulness of the earnings decomposition for valuation purposes, resting primarily on the disclosure of the corporation tax, particularly for small companies, companies with a high-risk profile, or companies with low persistence of earnings. Neither financial profit nor extraordinary earnings appear to have additional information content over the bottom-line figure, which is consistent with the IASC's position on ordinary versus extraordinary items.

17.
Aggregate analysis has long been established as a standard method in the study of market response behavior. Aggregation has advanced our understanding of the linkages between social characteristics and aggregate response behavior. However, aggregate analysis has been hindered by fragmentary and unsystematic procedures for determining the most appropriate level of aggregation. The general objective of this paper is to provide a conceptual framework for determining the level of aggregation of variables in data analysis. In addition, statistical procedures are suggested within this framework to verify and determine the level of aggregation represented by a variable. The conceptual framework is useful for deciding whether variables are to be analyzed from a micro-analysis or a macro-analysis focus. The statistical procedures enable the researcher to systematically identify and verify the level(s) of aggregation of variables in an existing data set.

18.
In this paper we examine whether site-development competition can be used to facilitate land assembly in the absence of contingent contracts. In particular, we attempt to determine (1) whether competition can be induced among prospective sellers, (2) whether or not competition increases aggregation rates, and (3) what effects competition has on the distribution of surplus among the bargaining parties. We also study the incidence with which a buyer (endogenously) chooses to deal with a single “large parcel” owner vs. multiple “small parcel” owners. To do so, we make use of a laboratory experiment in which all the relevant information about the project is common knowledge and landowner valuations are private information. Our results show that competition more than doubles aggregation rates, with aggregation rates of approximately 40% in the baseline and at least 84% in the competitive treatments. We also find that developers have a strong preference for transacting with landowners who have consolidated land holdings, doing so in 24 of 27 successful aggregations, providing empirical evidence of a link between the transaction costs associated with land assembly and suburbanization, as suggested by Miceli and Sirmans (2007).

19.
The object of this paper is to demonstrate, in economic terms, the equivalence of the aggregation problem in input-output analysis with coalition and bargaining problems. Depending on the specific norm for aggregation, it is shown that the aggregation criterion and the coalition-forming criterion in an n-person game lead to a broadly similar situation in the market sense, given that the market operates according to that criterion. It is also shown that a mathematical analogue of this formulation may be obtained via the techniques of geometric programming.

20.
A large body of empirical work has established the significance of cash flow in explaining investment dynamics. This finding is further taken as evidence of capital market imperfections. We show, using a perfect capital markets model, that time-to-build for capital projects creates an investment-cash-flow sensitivity as found in empirical studies that may not be indicative of capital market frictions. The result is due to mis-specification present in empirical investment-q equations under time-to-build investment. In addition, time aggregation error can give rise to cash-flow effects independently of the time-to-build effect. Importantly, both errors arise independently of potential measurement error in q. Evidence from a large panel of U.K. manufacturing firms confirms the validity of the time-to-build investment channel.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号