Similar articles
20 similar articles found (search time: 500 ms)
1.
It is argued that if x_t ~ I(1) and y_t ~ I(1), then regressing x_t on y_t would produce spurious results, because the residual e_t would generally be I(1). However, there may exist a b such that e_t = x_t - b*y_t is I(0); in that case, regressing x_t on y_t does not produce spurious results. This special case of two integrated time series is known in the literature as cointegration, and x_t and y_t are said to be cointegrated. In our review of the development of the concept of cointegration, we identified that the underlying reason for this special case is the proposition that if x_t ~ I(d_x) and y_t ~ I(d_y), then z_t = b*x_t + c*y_t ~ I(max(d_x, d_y)). In this research, we offer evidence against this proposition.
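A minimal pure-Python simulation (not from the paper; the coefficient b = 2, the noise distributions, and the sample size are illustrative) shows the two cases the abstract contrasts: when x_t is built from y_t plus stationary noise, the cointegrating residual e_t = x_t - b*y_t stays bounded, while the "residual" against an independent random walk inherits the I(1) behaviour and its sample variance grows with the sample size.

```python
import random

random.seed(0)
T = 5000

# y_t: a pure random walk, hence I(1)
y = [0.0]
for _ in range(T):
    y.append(y[-1] + random.gauss(0, 1))

# x_t = b*y_t + u_t with stationary white noise u_t: x and y are cointegrated
b = 2.0
x = [b * yt + random.gauss(0, 1) for yt in y]

# cointegrating residual e_t = x_t - b*y_t is I(0): its sample variance stays O(1)
e = [xt - b * yt for xt, yt in zip(x, y)]
var_e = sum(v * v for v in e) / len(e)

# z_t: an independent random walk, NOT cointegrated with x_t;
# the "residual" x_t - z_t is still I(1) and its variance grows with T
z = [0.0]
for _ in range(T):
    z.append(z[-1] + random.gauss(0, 1))
s = [xt - zt for xt, zt in zip(x, z)]
var_s = sum(v * v for v in s) / len(s)

print(var_e, var_s)  # bounded vs. large and growing with T
```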

2.
This paper suggests a theory of choice among strategic situations when the rules of play are not properly specified. We take the view that a “strategic situation” is adequately described by a TU game, since it specifies what is feasible for each coalition but is silent on the procedures used to allocate the surplus. We model the choice problem facing a decision maker (DM) as having to choose from finitely many “actions”. The known “consequence” of the i-th action is a coalition-form game f_i over a fixed set of players \(N_i\cup\{d\}\) (where d stands for the DM). Axioms are imposed on her choice as the list of consequences (f_1, ..., f_m) from the m actions varies. We characterize choice rules that are based on marginal contributions of the DM in general and on the Shapley value in particular.

3.
In this paper we consider the standard voting model with a finite set of alternatives A and n voters and address the following question: what are the characteristics of domains \({\mathcal D}\) that induce the property that every strategy-proof social choice function \({f: {\mathcal D}^n \rightarrow A}\) satisfying unanimity has the tops-only property? We first impose a minimal richness condition which ensures that for every alternative a, there exists an admissible ordering where a is maximal. We identify conditions on \({\mathcal D}\) that are sufficient for strategy-proofness and unanimity to imply tops-onlyness in the general case of n voters and in the special case n = 2. We provide an algorithm for constructing tops-only domains from connected graphs with elements of A as nodes. We provide several applications of our results. Finally, we relax the minimal richness assumption and partially extend our results.

4.
5.
Nanyang Bu, Economic Theory, 2016, 61(1): 115-125
We study the problem of assigning objects to buyers. Monetary transfers are allowed. Each buyer’s preference space contains, but is not limited to, the linear additively separable preferences. A rule maps each preference profile to an allocation. We are concerned about the possibility that a group of buyers may engage in the following kind of manipulation: they make side payments internally and then carry out a joint misrepresentation. A rule is strongly group strategy-proof if no group can gain by engaging in such operations. We also consider several other appealing requirements. We find that the posted-price rules are the only ones that satisfy non-triviality, non-imposition, envy-freeness, and strong group strategy-proofness.

6.
This study analyzes the persistency of total and disaggregated Turkish exports for different shock magnitudes using the quantile autoregression (QAR) method in line with Koenker and Xiao (J Am Stat Assoc 99:775–787, 2004). The results suggest that the persistence of shocks is not similar across different quantiles of Total Exports and disaggregated export sectors, indicating an asymmetry between negative and positive shocks across different export sectors. The persistency behavior of Total Exports, as well as of Food and Beverages, Chemicals, Basic Metals, Raw Materials, Motor Vehicles and Radio & TV exports, is asymmetric to negative versus positive shocks, which cannot be captured by traditional unit root tests. Thus, sound interpretation of QAR results is necessary for policy makers to identify shock characteristics and thereby pursue appropriate policies for overcoming adverse impacts on the economy.

7.
We examine the impact of financial reforms on efficient reallocation of capital within and between sectors in South Africa using firm-level panel data for the period 1991–2008. The measure of efficient allocation of capital is based on Tobin’s Q. We find that financial reforms are associated with improvements in within-sector, but not between-sector, allocation of capital. These results imply that for South Africa to unleash the potential for take-off that is often associated with reallocation of resources from the primitive to modern sectors, reforms that focus beyond the financial sector are necessary. While more research is necessary to determine what would fully constitute such additional reforms, our analysis shows that reforms that improve the quality of economic institutions may be a step in the right direction.

8.
We experimentally test different rule-based contribution mechanisms in a repeated 4-player public goods game with endowment heterogeneity and compare them to a VCM, distinguishing between a random and an effort-based allocation of endowments. We find that endowment heterogeneities limit the efficiency gains from these rule-based contribution schemes under random allocation. Under effort-based allocations, substantial efficiency gains relative to a VCM occur. These are largely driven by significant reductions of contributions in the VCM, while the rule-based mechanisms generate stable efficiency levels, even though they fall short of realizing the maximal efficiency gains. Our results indicate that the procedure of endowment allocation impacts the perception of what constitutes a fair burden sharing.

9.
We discuss Monte Carlo methodology that can be used to explore alternative approaches to estimating spatial regression models. Our focus is on models that include spatial lags of the dependent variable, e.g., the SAR specification. A major point is that practitioners rely on scalar summary measures of direct and indirect effects estimates to interpret the impact of changes in explanatory variables on the dependent variable of interest. We argue that these should be the focus of Monte Carlo experiments. Since effects estimates reflect a nonlinear function of both \(\beta \) and \(\rho \), past studies' exclusive focus on \(\beta \) and \(\rho \) parameter estimates may not provide useful information regarding the statistical properties of effects estimates produced by alternative estimators. Since effects estimates have recently become the focus of inference regarding the significance of (scalar summary) direct and indirect impacts arising from changes in the explanatory variables, empirical measures of dispersion produced by simulating draws from the (estimated) variance–covariance matrix of the parameters \(\beta \) and \(\rho \) should be part of the Monte Carlo study. An implication is that differences in the quality of estimated variance–covariance matrices arising from alternative estimators also play a role in determining the accuracy of inference. An applied illustration demonstrates how these issues can impact conclusions regarding the performance of alternative estimators.
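The effects calculation behind these scalar summaries can be sketched in pure Python (a toy example, not the paper's code; the 3-region weight matrix and the values rho = 0.4 and beta = 2 are invented for illustration). In a SAR model y = rho*W*y + X*beta + eps, the effects matrix is S = (I - rho*W)^{-1} * beta; here the inverse is approximated by the Neumann series sum_k (rho*W)^k, the average diagonal of S*beta is the direct effect, and the average row sum is the total effect.

```python
# Toy 3-region row-stochastic weight matrix (illustrative values only).
W = [[0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0]]
rho, beta = 0.4, 2.0  # assumed spatial-dependence and slope parameters

def matmul(A, B):
    """Naive matrix product for small dense matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def neumann_inverse(rho, W, terms=200):
    """Approximate (I - rho*W)^{-1} by sum_k (rho*W)^k, valid for |rho| < 1."""
    n = len(W)
    S = [[float(i == j) for j in range(n)] for i in range(n)]  # running sum, starts at I
    term = [row[:] for row in S]                               # holds (rho*W)^k
    rhoW = [[rho * w for w in row] for row in W]
    for _ in range(terms):
        term = matmul(term, rhoW)
        S = [[S[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return S

S = neumann_inverse(rho, W)
n = len(W)
# Scalar summary measures of impacts from a unit change in the explanatory variable:
direct = beta * sum(S[i][i] for i in range(n)) / n  # average diagonal
total = beta * sum(sum(row) for row in S) / n       # average row sum
indirect = total - direct
print(direct, indirect, total)
```

With a row-stochastic W, the total effect reduces to beta / (1 - rho), which gives a quick sanity check on the series approximation.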

10.
This paper examines themes and concerns about my book, The Order of Public Reason, raised in the three essays in this symposium by Peter Boettke & Rosolino Candela, Michael Munger and Kevin Vallier. The three essays present variations on a common theme: I need to embrace deeper commitments than The Order of Public Reason acknowledges. In my estimation these proposals lead to places that I do not wish to go, nor should anyone devoted to core Hayekian insights. The goal of the book is to show how a diversity of moral views can lead to a cooperative social morality while abjuring as far as possible “external” moral claims, that is, claims that do not derive from the perspectives of cooperating individuals. The diverse individual moral perspectives, and what they understand as normative, must be the real engines of social normativity. In this essay I stress the primacy of the individual normative perspectives in generating social morality; this helps show why the urge to embrace deeper commitments should be resisted. Rather than going over the presentation in The Order of Public Reason to stress this point, I sketch a modest recasting of the analysis in terms of models of individual moral interaction.

11.
Assuming constant marginal cost, it is shown that a switch from specific to ad valorem taxation that results in the same collusive price has no effect on the critical discount factor required to sustain collusion. This result is shown to hold for Cournot oligopoly when collusion is sustained with Nash-reversion strategies or optimal-punishment strategies. In a Cournot duopoly model with linear demand and quadratic costs, it is shown that the critical discount factor is lower with an ad valorem tax than with a specific tax that results in the same collusive price. However, in contrast to Colombo and Labrecciosa (J Public Econ 97:196–205, 2013), it is shown that revenue is always higher with an ad valorem tax than with a specific tax.

12.
This paper contributes to a theoretical underpinning of the economic freedom–political freedom relationship. We use the theory of social orders (North et al. in Violence and social orders: a conceptual framework for understanding recorded human history, Cambridge University Press, Cambridge, 2009) to look at the Hayek–Friedman Hypothesis (HFH), which leads us to propose a novel interpretation. The core insight of our weak interpretation of the hypothesis is that economic freedom is a necessary condition for maintaining political freedom in open access order countries (countries with high levels of both freedoms), i.e., once achieved, political freedom needs economic freedom to be stable; but the HFH is not relevant for limited access orders (rent-seeking-dominated orders). We find empirical support for the weak interpretation with canonical correlations and conditional logit regressions, using a panel database for 122 countries for the period 1980–2011.

13.
In the time series context, estimation and testing issues with autoregressive and moving average (ARMA) models are well understood. Similar issues in the context of spatial ARMA models for the regression disturbances, however, remain largely unexplored. In this paper, we discuss the problem of testing for no spatial dependence in the disturbances against the alternative of a spatial ARMA process, incorporating the possible presence of spatial dependence in the dependent variable. The problems of conducting such a test are twofold. First, under the null hypothesis, the nuisance parameter is not identified, resulting in a singular information matrix (IM), which is a nonregular case in statistical inference. To take account of the singular IM, we follow Davies (Biometrika 64(2):247–254, 1977; Biometrika 74(1):33–43, 1987) and propose a test procedure based on the supremum of the Rao score test statistic. Second, the possible presence of spatial lag dependence will have an adverse effect on the performance of the test. Using the general test procedure of Bera and Yoon (Econom Theory 9:649–658, 1993) under local misspecification, we avoid the explicit estimation of the spatial autoregressive parameter. Thus our suggested tests are entirely based on ordinary least squares estimation. The tests suggested here can be viewed as a generalization of Anselin et al. (Reg Sci Urban Econ 26:77–104, 1996). We conduct Monte Carlo simulations to investigate the finite sample properties of the proposed tests. Simulation results show that our tests have good finite sample properties both in terms of size and power, compared to other tests in the literature. We also illustrate the applications of our tests through several data sets.

14.
Public reason is justified to the extent that it uses (only) arguments, assumptions, or goals that are allowable as “public” reasons. But this exclusion requires some prior agreement on domains, and a process that disallows new unacceptable reasons by unanimous consent. Surprisingly, this problem of reconciliation is nearly the same, mutatis mutandis, as that faced by micro-economists working on general equilibrium, where a conceit—tâtonnement, directed by an auctioneer—was proposed by Léon Walras. Gaus’s justification of public reason requires the “as if” solution of a Kantian Parliamentarian, who rules on whether a proposal is “in order.” Previous work on public reason, by Rousseau, Kant, and Rawls, has reduced decision-making and the process of “reasoning” to choice by a unitary actor, thereby begging the questions of disagreement, social choice, and reconciliation. Gaus, to his credit, solves that problem, but at the price of requiring that the process “knows” information that is in fact indiscernible to any of the participants. In fact, given the dispersed and radical situatedness of human aims and information, it is difficult for individuals, much less groups, to determine when norms are publicly justified or not. More work is required to fully take on Hayek’s insight that no person, much less all people, can have sufficient reasons to endorse the relevant norm, rule, or law.

15.
We investigate the impact of an uncertain number of false individual null hypotheses on commonly used p value combination methods. Under such uncertainty, these methods perform quite differently and often yield conflicting results. Consequently, we develop a combination of “combinations of p values” (CCP) test aimed at maintaining good power properties across such uncertainty. The CCP test is based on a simple union–intersection principle that exploits the weak correspondence between two underlying p value combination methods. Monte Carlo simulations show that the CCP test controls size and closely tracks the power of the best individual methods. We empirically apply the CCP test to explore stationarity in real exchange rates and information rigidity in inflation and output growth forecasts.
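Two standard underlying combination methods can be illustrated in a few lines of pure Python (a sketch under simplifying assumptions, not the paper's CCP test: here the union-intersection idea is rendered crudely as a Bonferroni-split rejection across Fisher's method and the min-p rule). Fisher's statistic -2*sum(ln p_i) is chi-square with 2k degrees of freedom under the joint null, and for even degrees of freedom the survival function has a closed form.

```python
import math

def fisher_p(pvals):
    """Fisher's combination: -2*sum(ln p) ~ chi-square with 2k df under H0.
    For even df = 2k: P(X > x) = exp(-x/2) * sum_{i<k} (x/2)^i / i!."""
    k = len(pvals)
    half = -sum(math.log(p) for p in pvals)  # (statistic)/2
    return math.exp(-half) * sum(half ** i / math.factorial(i) for i in range(k))

def bonferroni_p(pvals):
    """Min-p combination with a Bonferroni adjustment."""
    return min(1.0, len(pvals) * min(pvals))

def ccp_reject(pvals, alpha=0.05):
    """Hypothetical union-intersection sketch (not the paper's construction):
    reject if either underlying combination rejects at level alpha/2."""
    return fisher_p(pvals) < alpha / 2 or bonferroni_p(pvals) < alpha / 2

pvals = [0.01, 0.02, 0.30]
print(fisher_p(pvals), bonferroni_p(pvals), ccp_reject(pvals))
```

The example shows why combining helps: Fisher aggregates moderate evidence across all hypotheses, while min-p is sensitive to a single very small p value, and the two can disagree.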

16.
Should the powers of monitoring compliance and allocating tradeable emissions allowances be assigned to a single supranational regulator or decentralized to several local regulators? To answer this question, we develop a two-stage, two-country game where environmental regulators set the amount of emission allowances and the level of monitoring effort to achieve full compliance, while the regulated firms choose actual emissions and the number of permits to be held. Various, possibly conflicting, spillovers between countries arise in a decentralized setting. We show that decentralization is socially harmful if no asymmetry among institutional settings is introduced, and can be suboptimal even when decentralization features lower monitoring costs than a centralized setting. Lower monitoring costs are therefore necessary, but not sufficient, to justify decentralization. Also, our analysis reveals that welfare can be higher under decentralization even if the corresponding environmental quality is worse than under centralization. Indeed, better environmental quality is sufficient but not necessary for higher welfare under decentralization. Finally, we discuss how these results can provide a theoretical rationale for the recent evolution of the EU ETS design.

17.
In models of learning by experimentation that exhibit signal dependence, a benchmark using a passive learner has been proposed. The use of this benchmark is flawed. First, passive learning does not disentangle the effects of knowing that beliefs, as well as other state variables, might change; we address this issue directly by introducing a naïve learner. Second, and more tellingly, passive learning does not do what it is supposed to do, namely help measure the gains from active experimentation; the naïve learner enables us to illustrate this point in the context of a particular example.

18.
Does Benford’s Law hold in economic research and forecasting?
First and higher-order digits in data sets of natural and socio-economic processes often follow a distribution called Benford’s law. This phenomenon has been used in business and scientific applications, especially in fraud detection for financial data. In this paper, we analyse whether Benford’s law holds in economic research and forecasting. First, we examine the distribution of regression coefficients and standard errors in research papers published in Empirica and Applied Economics Letters. Second, we analyse forecasts of GDP growth and CPI inflation in Germany, published in Consensus Forecasts. There are two main findings: the relative frequencies of the first and second digits in economic research are broadly consistent with Benford’s law. In sharp contrast, the second digits of Consensus Forecasts exhibit a massive excess of zeros and fives, raising doubts about their information content.
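Benford's law predicts a leading-digit frequency of log10(1 + 1/d) for digit d, so digit 1 should appear about 30.1% of the time. A quick pure-Python check (illustrative data only, not the regression coefficients or forecasts analysed in the paper) tallies the leading digits of powers of 2, a standard Benford-conforming sequence, against that benchmark.

```python
import math

def benford_expected(d):
    """Benford's law: expected relative frequency of leading digit d."""
    return math.log10(1 + 1 / d)

# Powers of 2 are a classic Benford-conforming sequence (illustrative data only).
data = [2 ** n for n in range(1, 2001)]
counts = {d: 0 for d in range(1, 10)}
for x in data:
    counts[int(str(x)[0])] += 1  # leading digit of x (never 0 for a power of 2)
freq = {d: counts[d] / len(data) for d in counts}

for d in range(1, 10):
    print(d, round(freq[d], 3), round(benford_expected(d), 3))
```

The same tallying idea extends to second digits, which is where the paper finds the excess of zeros and fives in Consensus Forecasts.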

19.
It has been shown that, in the two-sector Benhabib–Farmer–Guo model with technologies of social increasing returns that exhibits indeterminacy, progressive income taxes destabilize the economy. This paper revisits the robustness of this tax implication in the two-sector Benhabib–Nishimura model with technologies of social constant returns that exhibits indeterminacy. We show that a progressive income tax stabilizes the economy against sunspot fluctuations, and thus the tax implication based on the two-sector Benhabib–Farmer–Guo model is not robust.

20.
We examine three tools that can enhance coordination success in a repeated multiple-choice coordination game. Gradualism means that the game starts as an easy coordination problem and moves gradually to a more difficult one. The Endogenous Ascending mechanism implies that a gradual increase in the upper bound of coordination occurs only if coordination with the Pareto superior equilibrium in a stage game is attained. The Endogenous Descending mechanism requires that when the game’s participants fail to coordinate, the level of the next coordination game be adjusted such that the game becomes simpler. We show that gradualism may not always work, but in such instances, its effect can be reinforced by endogeneity. Our laboratory experiment provides evidence that a mechanism that combines the three tools, herein termed the “Gradualism with Endogenous Ascending and Descending (GEAD)” mechanism, works well. We discuss how the GEAD mechanism can be applied to real-life situations that suffer from coordination failure.
