Similar articles
20 similar articles found (search time: 31 ms)
1.
From Mill and others' discussions of probability, through Jevons's pioneering work on probabilistic inductive logic, to the modern systems of probabilistic inductive logic represented by Carnap, this paper surveys the development of probabilistic inductive logic, reveals the reasons for its rise, and analyzes some new trends in the development of modern inductive logic.

2.
This paper presents a rational theory of categorization and similarity-based reasoning. I study a model of sequential learning in which the decision maker infers unknown properties of an object from information about other objects. The decision maker may use the following heuristics: divide objects into categories with similar properties and predict that a member of a category has a property if some other member of this category has this property. The environment is symmetric: the decision maker has no reason to believe that the objects and properties are a priori different. In symmetric environments, categorization is an optimal solution to an inductive inference problem. Any optimal solution looks as if the decision maker categorizes. Various experimental observations about similarity-based reasoning coincide with the optimal behavior in my model.
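The category-based prediction heuristic this abstract describes can be sketched in a few lines. The objects, categories, and properties below are invented for illustration; this is not the paper's formal model, only the heuristic it names:

```python
# Sketch of the heuristic: predict that an object has a property if
# some other member of its category is known to have it.
# All category labels and known properties here are hypothetical.

def predict(obj, prop, categories, known):
    """Return True if some other member of obj's category has prop."""
    category = categories[obj]
    return any(
        other != obj and prop in known.get(other, set())
        for other, cat in categories.items()
        if cat == category
    )

categories = {"sparrow": "bird", "robin": "bird", "trout": "fish"}
known = {"sparrow": {"flies"}}

print(predict("robin", "flies", categories, known))   # True: a fellow bird flies
print(predict("trout", "flies", categories, known))   # False: no fish is known to fly
```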

3.
A Bayesian network approach is used to conduct decision analysis of nutrient abatement measures in the Morsa catchment, South Eastern Norway. The paper demonstrates the use of Bayesian networks as a meta-modelling tool in integrated river basin management (IRBM) for structuring and combining the probabilistic information available in existing cost-effectiveness studies, eutrophication models and data, non-market valuation studies and expert opinion. The Bayesian belief network is used to evaluate eutrophication mitigation costs relative to benefits, as part of the economic analysis under the EU Water Framework Directive (WFD). Pros and cons of Bayesian networks as reported in the literature are reviewed in light of the results from our Morsa catchment model. The reported advantages of Bayesian networks in promoting integrated, inter-disciplinary evaluation of uncertainty in IRBM, as well as the apparent advantages for risk communication with stakeholders, are offset in our case by the cost of obtaining reliable probabilistic data and meta-model validation procedures.
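A toy sketch in the spirit of this kind of meta-model: a tiny chain network (abatement measure → nutrient load → eutrophication), with all conditional probabilities invented for illustration and inference done by direct enumeration, which is feasible only for very small networks:

```python
# Hypothetical two-node chain: measure -> load -> eutrophication.
# All probabilities below are invented, not the Morsa catchment model.

# P(load = high | abatement measure adopted?)
p_high_load = {True: 0.3, False: 0.7}
# P(eutrophication | load level)
p_eutroph = {"high": 0.8, "low": 0.2}

def p_eutrophication(measure_adopted):
    """Marginal P(eutrophication) given the abatement decision,
    summing over the latent load level (inference by enumeration)."""
    p_high = p_high_load[measure_adopted]
    return p_high * p_eutroph["high"] + (1 - p_high) * p_eutroph["low"]

print(round(p_eutrophication(True), 2))   # 0.38 with the measure
print(round(p_eutrophication(False), 2))  # 0.62 without it
```

Comparing the two marginals against the measure's cost is the shape of the cost–benefit evaluation the abstract describes.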

4.
We consider the dynamics of reasoning by general rules (theories) and by specific cases (analogies). When an agent faces an exogenous process, we show that, under mild conditions, if reality happens to be simple, the agent will converge to adopt a theory and discard analogical thinking. If, however, reality is complex, analogical reasoning is unlikely to disappear. By contrast, when the agent is a player in a large population coordination game, and the process is generated by all players' predictions, convergence to a theory is much more likely. This may explain how a large population of players selects an equilibrium in such a game, and how social norms emerge. Mixed cases, involving noisy endogenous processes, are likely to give rise to complex dynamics of reasoning, switching between theories and analogies.

5.
We develop a model of games with awareness that allows for differential levels of awareness. We show that, for the standard modal-logical interpretations of belief and awareness, a player cannot believe there exist propositions of which he is unaware. Nevertheless, we argue that a boundedly rational individual may regard the possibility that there exist propositions of which she is unaware as being supported by inductive reasoning, based on past experience and consideration of the limited awareness of others. In this paper, we provide a formal representation of inductive reasoning in the context of a dynamic game with differential awareness. We show that, given differential awareness over time and between players, individuals can derive inductive support for propositions expressing their own unawareness. We consider the ecological rationality of heuristics to guide decisions in problems involving differential awareness.

6.
In this paper I study the El Farol problem, a deterministic, boundedly rational, multi‐agent model of a resource subject to congestion externalities that was initially studied computationally by Arthur (1994). I represent the interaction as a game, compute the set of Nash equilibria in mixed strategies of this game, and show analytically how the method of inductive inference employed by the agents in Arthur's computer simulation leads the empirical distribution of aggregate attendance to be like those in the set of Nash equilibria of the game. This set contains only completely mixed strategy profiles, which explains why aggregate attendance appears random in the computer simulation even though its set‐up is completely deterministic.

7.
This essay examines the role of mechanisms and Bayesian inference in process tracing. With respect to mechanisms, I argue that the core of process tracing with causal inference is the identification of mechanisms understood as intervening events. Events are different from standard intervening variables when used with process tracing, because events are treated as sets in which cases can have membership. With respect to Bayesian analysis, I concur with recent writings that suggest Bayesian inference is at the heart of process tracing. The Bayesian nature of process tracing explains why it is inappropriate to view qualitative research as suffering from a small-N problem and certain standard causal identification problems. More generally, the paper shows how the power of process tracing as a qualitative methodology depends on and grows from its set-theoretic underpinnings.
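The Bayesian updating at the heart of process tracing is a single application of Bayes' rule per piece of within-case evidence. A minimal sketch, with the prior and likelihoods invented for illustration (a "smoking gun" style observation: unlikely unless the hypothesis is true):

```python
# One Bayes-rule update of the kind process tracers perform: observe a
# piece of within-case evidence E and revise belief in hypothesis H.
# The numbers below are hypothetical.

def update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) by Bayes' rule."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

posterior = update(prior=0.3, p_e_given_h=0.8, p_e_given_not_h=0.05)
print(round(posterior, 3))  # 0.873: one decisive observation, strong confirmation
```

This is why the number of cases matters less than the probative value of each observation: a single low-`p_e_given_not_h` piece of evidence can move the posterior far.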

8.
The paper introduces Bayesian inference into a demand model. This allows us to test for the negativity condition of the substitution matrix, which is difficult to handle directly in the traditional approach. To illustrate the Bayesian inference procedures, we estimate the Rotterdam model and test the demand properties using Japanese data. The empirical results show the importance of specifically considering negativity in demand analysis.

9.
In most situations, only small samples can be collected for specific economic forecasting problems, so accurate predictive models for small samples are vital. This research investigates a small-sample-oriented case-based kernel predictive method (SSOCBKPM) that integrates a support vector machine into the case-reuse stage of case-based reasoning, and applies SSOCBKPM to binary economic forecasting under the n-splits-k-times hold-out method. After business cases consisting of small samples are represented, the cases most similar to the current problem are retrieved from the small case base. The retrieved cases are then mapped into a higher-dimensional space by a kernel function to serve as candidate support vectors, and a support vector machine hyperplane is constructed in that space by reusing the most similar cases. Two datasets for firm-failure prediction and one dataset for loan-failure prediction were used to test the performance of SSOCBKPM. To simulate varying sample availability, 100 random selections of 20%, 35%, 50%, 65%, and 80% of the total samples were used in training. The results indicate that SSOCBKPM significantly improves the accuracy, stability, and sensitivity of classical CBR, and significantly improves the performance of SVM as the training sample becomes smaller. SSOCBKPM is therefore more useful in economic forecasting than case-based reasoning or support vector machines alone, since the proportion of available samples is commonly small.
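The retrieval step of this pipeline can be sketched simply: given a small case base, fetch the k cases nearest to the current problem by Euclidean distance. The features and cases below are invented, and the kernel/SVM reuse stage of SSOCBKPM is not reproduced here:

```python
import math

# Hypothetical case retrieval for a small case base.
# Each case is (feature_vector, label); label 1 = firm failed, 0 = survived.

def retrieve(case_base, query, k=3):
    """Return the k cases nearest to the query feature vector."""
    def dist(case):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(case[0], query)))
    return sorted(case_base, key=dist)[:k]

case_base = [((0.2, 1.5), 0), ((0.8, 0.3), 1), ((0.3, 1.4), 0),
             ((0.9, 0.2), 1), ((0.5, 0.9), 0)]
neighbours = retrieve(case_base, query=(0.85, 0.3), k=2)
print(neighbours)  # the two failed-firm cases closest to the query
```

In SSOCBKPM these retrieved neighbours would then become candidate support vectors for a kernel machine, rather than being voted on directly as in classical CBR.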

10.
This paper shows how particle filtering facilitates likelihood-based inference in dynamic macroeconomic models. The economies can be non-linear and/or non-normal. We describe how to use the output from the particle filter to estimate the structural parameters of the model, those characterizing preferences and technology, and to compare different economies. Both tasks can be implemented from either a classical or a Bayesian perspective. We illustrate the technique by estimating a business cycle model with investment-specific technological change, preference shocks, and stochastic volatility.
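A minimal bootstrap particle filter shows the mechanics the abstract relies on: propagate particles through the transition equation, weight them by the measurement density, accumulate the likelihood, and resample. The toy state-space model below (AR(1) state, Gaussian measurement) and its parameters are illustrative, not the business cycle model estimated in the paper:

```python
import math
import random

random.seed(0)

def particle_filter(ys, n_particles=1000, phi=0.9, sx=1.0, sy=0.5):
    """Bootstrap particle filter for x_t = phi*x_{t-1} + N(0, sx^2),
    y_t = x_t + N(0, sy^2). Returns filtered means and log-likelihood."""
    particles = [random.gauss(0.0, sx) for _ in range(n_particles)]
    loglik, means = 0.0, []
    for y in ys:
        # Propagate each particle through the transition equation.
        particles = [phi * x + random.gauss(0.0, sx) for x in particles]
        # Weight by the measurement density N(y; x, sy^2).
        weights = [math.exp(-0.5 * ((y - x) / sy) ** 2) for x in particles]
        total = sum(weights)
        # Average weight contributes the period's likelihood increment.
        loglik += math.log(total / n_particles / (sy * math.sqrt(2 * math.pi)))
        means.append(sum(w * x for w, x in zip(weights, particles)) / total)
        # Multinomial resampling to avoid weight degeneracy.
        particles = random.choices(particles, weights=weights, k=n_particles)
    return means, loglik

ys = [0.5, 1.2, 0.8, -0.3, 0.1]
means, loglik = particle_filter(ys)
print([round(m, 2) for m in means])
```

For structural estimation, the returned log-likelihood would be evaluated repeatedly inside a maximizer (classical) or an MCMC sampler (Bayesian), as the paper describes.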

11.
12.
This article discusses Bayesian inference in change‐point models. The main existing approaches treat all change‐points equally, a priori, using either a Uniform prior or an informative hierarchical prior. Both approaches assume a known number of change‐points. Some undesirable properties of these approaches are discussed. We develop a new Uniform prior that allows some of the change‐points to occur out of sample. This prior has desirable properties, can be interpreted as “noninformative,” and treats the number of change‐points as unknown. Artificial and real data exercises show how these different priors can have a substantial impact on estimation and prediction.
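The simplest version of the baseline setup the article starts from is Bayesian inference on a single in-sample change-point under a Uniform prior over its location. The sketch below assumes exactly one break, Gaussian data with known regime means and variance, and invented data; the paper's contribution (out-of-sample breaks, unknown number of breaks) is not reproduced:

```python
import math

def changepoint_posterior(ys, mu0, mu1, sigma=1.0):
    """P(break at t | data) under a Uniform prior over t, where t is the
    index of the first post-break observation (one break assumed)."""
    n = len(ys)
    def loglik(t):  # mean mu0 before index t, mu1 from t onward
        return sum(-0.5 * ((y - (mu1 if i >= t else mu0)) / sigma) ** 2
                   for i, y in enumerate(ys))
    logs = [loglik(t) for t in range(1, n)]   # Uniform prior: equal weight on each t
    m = max(logs)                             # subtract max for numerical stability
    ws = [math.exp(l - m) for l in logs]
    z = sum(ws)
    return [w / z for w in ws]

ys = [0.1, -0.2, 0.3, 2.1, 1.8, 2.2]          # invented: mean shifts at index 3
post = changepoint_posterior(ys, mu0=0.0, mu1=2.0)
print(post.index(max(post)) + 1)              # most probable break index: 3
```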

13.
The purpose of this paper is to analyze and compare the results of applying classical and Bayesian methods to testing for a unit root in time series with a single endogenous structural break. We utilize a data set of macroeconomic time series for the Mexican economy similar to the Nelson–Plosser one. Under both approaches, we make use of innovational outlier models allowing for an unknown break in the trend function. Classical inference relies on bootstrapped critical values, in order to make inference comparable to the finite sample Bayesian one. Results from both approaches are discussed and compared.

14.
We introduce a framework to study individuals’ behavior in environments that are deterministic, but too complex to permit tractable deterministic representations. An agent in these environments uses a probabilistic model to cope with his inability to think through all contingencies in advance. We interpret this probabilistic model as embodying all patterns the agent perceives, yet allowing for the possibility that there may be important details he had missed. Although the implied behavior is rational, it is consistent with an agent who believes his environment is too complex to warrant precise planning, foregoes finely detailed contingent rules in favor of vaguer plans, and expresses a preference for flexibility.

15.
16.

In this paper, we consider the signals approach as an early-warning system to detect crises. Crisis detection with the signals approach involves Type I and Type II errors, which are handled through a utility function. We provide a Bayesian model and test the effectiveness of the signals approach on three data sets: (1) currency and banking crises, covering 76 currency and 26 banking crises in 15 developing and 5 industrial countries between 1970 and 1995; (2) costly asset price booms, using quarterly data from 1970 to 2007; and (3) public debt crises in 11 countries of the European Monetary Union from the introduction of the Euro until November 2011. The Bayesian model relies on a vector autoregression for indicator variables, and incorporates dynamic factors, time-varying weights in the latent composite indicator, and special priors to avoid the proliferation of parameters. The Bayesian vector autoregressions are extended to a semi-parametric context to capture non-linearities. Our evidence reveals that the approach is successful as an early-warning mechanism after allowing for breaks and nonlinearities and, perhaps more importantly, that the composite indicator is better represented as a flexible nonlinear function of the underlying indicators.
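The classical signals approach that this paper builds on is easy to sketch: an indicator fires a crisis signal when it crosses a threshold, and the threshold is chosen to minimize a loss that trades off Type I errors (missed crises) against Type II errors (false alarms). The data and loss weights below are invented; the paper's Bayesian VAR machinery is not reproduced:

```python
# Hypothetical signals-approach threshold selection.
# crisis[i] = 1 if period i was a crisis, 0 otherwise.

def best_threshold(indicator, crisis, weight_miss=0.5):
    """Grid-search the threshold minimizing the weighted error rate."""
    best = None
    for thr in sorted(set(indicator)):
        signals = [x > thr for x in indicator]
        miss = sum(c and not s for c, s in zip(crisis, signals))   # Type I
        false = sum(s and not c for c, s in zip(crisis, signals))  # Type II
        n_crisis = sum(crisis) or 1
        n_calm = (len(crisis) - sum(crisis)) or 1
        loss = weight_miss * miss / n_crisis + (1 - weight_miss) * false / n_calm
        if best is None or loss < best[1]:
            best = (thr, loss)
    return best

indicator = [0.2, 0.9, 0.4, 1.3, 0.3, 1.1, 0.5, 1.4]
crisis    = [0,   1,   0,   1,   0,   1,   0,   1]
thr, loss = best_threshold(indicator, crisis)
print(thr, loss)  # 0.5 separates crises from calm periods perfectly here
```

Replacing the fixed `weight_miss` with a full utility function, and the single indicator with a latent composite one, gives the direction the paper takes.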


17.
We develop Bayesian inference for an unconditional quantile regression model. Our approach provides better estimates in the upper tail of the wage distribution as well as valid small sample confidence intervals for the Oaxaca–Blinder decomposition. We analyze the recent changes in the US wage structure using data from the CPS Outgoing Rotation Group from 1992 to 2009. We find that the largest part of the recent changes is explained mainly by differences in returns to education while the decline in the unionization rate has a small impact, and that earnings inequality is rising more at the top end of the wage distribution.

18.
We study a stylized theory of the volatility reduction in the U.S. after 1984—the Great Moderation—which attributes part of the stabilization to less volatile shocks and another part to more difficult inference on the part of Bayesian households attempting to learn the latent state of the economy. We use a standard equilibrium business cycle model with technology following an unobserved regime‐switching process. After 1984, according to Kim and Nelson (1999a), the variance of U.S. macroeconomic aggregates declined because boom and recession regimes moved closer together, keeping conditional variance unchanged. In our model this makes the signal extraction problem more difficult for Bayesian households, and in response they moderate their behavior, reinforcing the effect of the less volatile stochastic technology and contributing an extra measure of moderation to the economy. We construct example economies in which this learning effect accounts for about 30% of a volatility reduction of the magnitude observed in the postwar U.S. data.
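The households' signal-extraction problem can be sketched as a two-regime Bayesian filter: update the probability of the "boom" regime from noisy observations. All means, variances, and switching probabilities below are invented, but the sketch reproduces the qualitative mechanism: when the regime means move closer together (as after 1984 in the abstract), beliefs react more sluggishly to the same data:

```python
import math

def regime_filter(ys, mu_boom, mu_rec, sigma=1.0, stay=0.95, p0=0.5):
    """Forward filtering of P(boom_t | y_1..y_t) for a two-state
    Markov chain with Gaussian observations (hypothetical parameters)."""
    def density(y, mu):
        return math.exp(-0.5 * ((y - mu) / sigma) ** 2)
    p, path = p0, []
    for y in ys:
        prior = stay * p + (1 - stay) * (1 - p)   # predicted P(boom)
        num = density(y, mu_boom) * prior
        p = num / (num + density(y, mu_rec) * (1 - prior))
        path.append(p)
    return path

ys = [1.0, 1.2, -0.8, -1.1, -0.9]                 # invented observations
wide = regime_filter(ys, mu_boom=1.0, mu_rec=-1.0)    # pre-1984-style regimes
narrow = regime_filter(ys, mu_boom=0.3, mu_rec=-0.3)  # regimes moved closer
print(round(wide[-1], 3), round(narrow[-1], 3))   # narrow regimes: less decisive belief
```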

19.
Traditional specification testing does not always improve subsequent inference. We demonstrate by means of computer experiments under which circumstances, and how severely, data-driven model selection can destroy the size properties of subsequent parameter tests, if they are used without adjusting for the model-selection step. The investigated models are representative of macroeconometric and microeconometric workhorses. The model selection procedures include information criteria as well as sequences of significance tests (“general-to-specific”). We find that size distortions can be particularly large when competing models are close, with closeness defined relative to the sample size.

20.
We study the standard model of bilateral trade under incomplete information dropping the assumption that traders know on which side of the market they are. We consider two mechanisms that differ only in the number of offers that an agent can submit. These mechanisms are realistic and they are ex post individually rational (i.e. regret free), while the usual mechanisms proposed in the literature satisfy the weaker requirement of interim individual rationality. Properties of the Bayesian equilibria are described for the general case. For the case where valuations are uniformly distributed in the unit square, two types of equilibria are derived for each mechanism and their efficiency properties are analyzed. As expected, the equilibria under the double offer mechanism are less inefficient than those under the single offer mechanism.
