Similar Documents (20 results)
1.
Panel and life-course data are ideally suited to unravelling labour market dynamics, but their designs differ, with potential consequences for the estimated relationships. To gauge the extent to which these two data designs produce dissimilar transition rates, and the causes thereof, we use the German Life History Study and the German Socio-Economic Panel. Life-course data in particular suffer from recall effects due to memory bias, causing understated transition probabilities. Panel data suffer from seam effects due to spurious transitions between statuses recalled in activity calendars, which generate heaps at particular time points and cause overstated transition probabilities. We combine the two datasets and estimate multilevel (multistate) discrete-time models for event history data to model transitions between labour market states, taking these factors into account. Though we find much lower transition rates in the life-course study, confirming the results of Solga (Qual Quant 35:291–309, 2001) in this Journal for East Germany, part of the difference can be explained by recall bias for short spells. The models of exit, re-entry and job mobility estimated on the combined datasets do indeed show a negative retrospective design effect. Another specification that includes the length of the recall period shows no significant decrease in the transition probabilities with increasing length, suggesting that the negative design effect is due to other design differences.
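As a rough illustration of the discrete-time event-history approach described in this abstract, the sketch below fits a pooled logit hazard of a labour-market transition on elapsed duration and a retrospective-design dummy; the simulated person-period data, variable names and the simple pooled specification are illustrative assumptions, not the authors' multilevel multistate model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated person-month records: one row per person-period at risk of a
# labour-market transition (an illustrative stand-in for GSOEP/GLHS spell data).
n = 5_000
duration = rng.integers(1, 37, n)       # months already spent in the current state
retro = rng.integers(0, 2, n)           # 1 = retrospective (life-course) report
# Assumed data-generating process: the hazard falls with duration, and
# retrospective reports understate transitions (a negative design effect).
logit_p = -1.5 - 0.02 * duration - 0.4 * retro
transition = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

df = pd.DataFrame({"transition": transition, "duration": duration, "retro": retro})

# Pooled discrete-time hazard (logit) model of the transition probability.
model = smf.logit("transition ~ duration + retro", data=df).fit(disp=False)
print(model.summary())
print("Estimated retrospective design effect (log-odds):", model.params["retro"])
```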

2.
The state of the art in coherent structure theory is driven by two assertions, both of which are limiting: (1) all units of a system can exist in one of two states, failed or functioning; and (2) at any point in time, each unit can exist in only one of the above states. In actuality, units can exist in more than two states, and it is possible that a unit can simultaneously exist in more than one state. This latter feature is a consequence of the view that it may not be possible to precisely define the subsets of a set of states; such subsets are called vague . The first limitation has been addressed via work labeled 'multistate systems'; however, this work has not capitalized on the mathematics of many-valued propositions in logic. Here, we invoke its truth tables to define the structure function of multistate systems and then harness our results in the context of vagueness. A key contribution of this paper is to argue that many-valued logic is a common platform for studying both multistate and vague systems but, to do so, it is necessary to lean on several principles of statistical inference.  相似文献   
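As a minimal sketch of the many-valued-logic idea, the example below grades component states in {0, 1/2, 1} and uses min and max as the many-valued conjunction and disjunction to define a multistate structure function; the three-level grading and the particular system layout are illustrative assumptions, not the paper's construction.

```python
from itertools import product

STATES = (0.0, 0.5, 1.0)   # failed / degraded (or "vaguely functioning") / functioning

def conj(*x):
    """Many-valued conjunction: structure function of a series arrangement."""
    return min(x)

def disj(*x):
    """Many-valued disjunction: structure function of a parallel arrangement."""
    return max(x)

def phi(a, b, c):
    """Example structure function: component c in series with a parallel pair (a, b)."""
    return conj(disj(a, b), c)

# Truth table of the multistate structure function over all state combinations.
print(" a    b    c   phi")
for a, b, c in product(STATES, repeat=3):
    print(f"{a:.1f}  {b:.1f}  {c:.1f}  {phi(a, b, c):.1f}")
```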

3.
4.
In this paper we propose a downside risk measure, the expectile-based Value at Risk (EVaR), which is more sensitive to the magnitude of extreme losses than the conventional quantile-based VaR (QVaR). The index θ of an EVaR is the relative cost of the expected margin shortfall and hence reflects the level of prudentiality. It is also shown that a given expectile corresponds to the quantiles with distinct tail probabilities under different distributions. Thus, an EVaR may be interpreted as a flexible QVaR, in the sense that its tail probability is determined by the underlying distribution. We further consider conditional EVaR and propose various Conditional AutoRegressive Expectile models that can accommodate some stylized facts in financial time series. For model estimation, we employ the method of asymmetric least squares proposed by Newey and Powell [Newey, W.K., Powell, J.L., 1987. Asymmetric least squares estimation and testing. Econometrica 55, 819–847] and extend their asymptotic results to allow for stationary and weakly dependent data. We also derive an encompassing test for non-nested expectile models. As an illustration, we apply the proposed modeling approach to evaluate the EVaR of stock market indices.  相似文献   
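A minimal sketch of the asymmetric least squares idea behind the expectile-based VaR: the code computes a θ-expectile of a return sample by minimising the Newey-Powell asymmetrically weighted squared loss. The simulated returns and the unconditional (non-autoregressive) setting are simplifying assumptions; the paper's CARE models condition the expectile on past information.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def expectile(y, theta):
    """theta-expectile via asymmetric least squares (Newey-Powell loss)."""
    y = np.asarray(y, dtype=float)

    def loss(m):
        w = np.where(y < m, 1.0 - theta, theta)   # asymmetric weights
        return np.sum(w * (y - m) ** 2)

    return minimize_scalar(loss, bounds=(y.min(), y.max()), method="bounded").x

rng = np.random.default_rng(1)
returns = rng.standard_t(df=5, size=10_000) * 0.01     # fat-tailed daily returns (toy)

theta = 0.01                       # prudentiality level; small theta -> deep left tail
m = expectile(returns, theta)
evar = -m                          # EVaR reported as a positive loss figure
print(f"{theta:.0%}-expectile: {m:.4f}   EVaR: {evar:.4f}")
print(f"Tail probability implied in this sample: {np.mean(returns <= m):.4f}")
```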

5.
In areas from medicine to climate change to economics, we are faced with huge challenges and a need for accurate forecasts, yet our ability to predict the future has been found wanting. The basic problem is that complex systems such as the atmosphere or the economy can not be reduced to simple mathematical laws and modeled accordingly. The equations in numerical models are therefore only approximations to reality, and are often highly sensitive to external influences and small changes in parameterisation — they can be made to fit past data, but are less good at prediction. Since decisions are usually based on our best models of the future, how can we proceed? This paper draws a comparison between two apparently different fields: biology and economics. In biology, drug development is a highly inefficient and expensive process, which in the past has relied heavily on trial and error. Institutions such as pharmaceutical companies and universities are now radically changing their approach and adopting techniques from the new field of systems biology to integrate information from disparate sources and improve the development process. A similar revolution is required in economics if models are to reflect the nature of human economic activity and provide useful tools for policy makers. We outline the main foundations for a theory of systems economics.  相似文献   

6.
Game Analysis of Bidders' Reference-Point Effects and the Optimal Public Reserve Price   Cited by: 1 (self-citations: 0, citations by others: 1)
Starting from behavioural economics, this paper uses game theory to study bidders' bidding strategies in first-price and second-price sealed-bid tenders when reference-point effects are present, together with the optimal public reserve-price strategy of the party inviting tenders, and compares bidders' expected payoffs under the different tendering formats. We derive a pricing formula for the optimal public reserve price in the presence of reference-point effects, and analyse how the reference-point effect and the number of bidders affect the optimal public reserve price. The results show that when a public reserve price is used and the influence of bidders' reference-point effects on bidding strategies is taken into account, the expected payoffs that the different tendering formats bring to bidders are equal; that is, the payment-equivalence proposition for tendering still holds. Moreover, as bidders' reference-point effects strengthen and the number of bidders increases, the optimal public reserve price set by the party inviting tenders falls.
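For orientation only, the sketch below computes the textbook optimal reserve price in a standard independent-private-values selling auction without reference-point effects, where r* solves r − (1 − F(r))/f(r) = v0; the uniform value distribution and the seller's own valuation v0 are illustrative assumptions, and the paper's pricing formula additionally incorporates the reference-point effect and the tendering (procurement) orientation.

```python
from scipy.optimize import brentq
from scipy.stats import uniform

# Benchmark without reference-point effects: the optimal reserve r* solves
#   r - (1 - F(r)) / f(r) = v0,
# where F is the bidders' value distribution and v0 the seller's own valuation.
v_dist = uniform(loc=0.0, scale=1.0)   # illustrative: private values uniform on [0, 1]
v0 = 0.0                               # illustrative seller valuation

def virtual_value(r):
    return r - (1.0 - v_dist.cdf(r)) / v_dist.pdf(r)

r_star = brentq(lambda r: virtual_value(r) - v0, 1e-6, 1.0 - 1e-6)
print(f"Optimal reserve price (uniform values, v0 = {v0}): {r_star:.3f}")   # -> 0.500
```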

7.
Interval estimation is an important objective of most experimental and observational studies. Knowing at the design stage of the study how wide the confidence interval (CI) is expected to be and where its limits are expected to fall can be very informative. Asymptotic distribution of the confidence limits can also be used to answer complex questions of power analysis by computing power as probability that a CI will exclude a given parameter value. The CI‐based approach to power and methods of calculating the expected size and location of asymptotic CIs as a measure of expected precision of estimation are reviewed in the present paper. The theory is illustrated with commonly used estimators, including unadjusted risk differences, odds ratios and rate ratios, as well as more complex estimators based on multivariable linear, logistic and Cox regression models. It is noted that in applications with the non‐linear models, some care must be exercised when selecting the appropriate variance expression. In particular, the well‐known ‘short‐cut’ variance formula for the Cox model can be very inaccurate under unequal allocation of subjects to comparison groups. A more accurate expression is derived analytically and validated in simulations. Applications with ‘exact’ CIs are also considered.  相似文献   
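A minimal sketch of the CI-based view of power for an unadjusted risk difference: the expected location and width of the asymptotic 95% CI are computed at the design stage, and power is obtained as the probability that the CI excludes a risk difference of zero. The event probabilities, the unequal allocation and the use of the design-stage standard error in place of its estimate are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

# Illustrative design: two-arm study, unequal allocation, binary outcome.
p1, p0 = 0.30, 0.20          # assumed event probabilities (treatment, control)
n1, n0 = 400, 200            # unequal allocation of subjects
alpha = 0.05
z = norm.ppf(1 - alpha / 2)

delta = p1 - p0                                        # true risk difference
se = np.sqrt(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)  # asymptotic SE of the RD

# Expected 95% CI (its location and width) at the design stage.
print(f"Expected CI: ({delta - z * se:.3f}, {delta + z * se:.3f}), width {2 * z * se:.3f}")

# CI-based power: probability that the CI excludes RD = 0
# (approximating the estimated SE by its design-stage value).
power = norm.cdf(delta / se - z) + norm.cdf(-delta / se - z)
print(f"Power to exclude RD = 0: {power:.3f}")
```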

8.
The paper discusses the implications for economics of the notion of structural stability. Definitions are given to characterize dynamical economic models generating movements the qualitative properties of which are conserved under perturbation. Consequences of these definitions, in particular for time evolutions which are globally or locally stable, are then presented through three theorems. Last, two examples — one in micro, one in macroeconomics — illustrate the type of results that can be obtained from such theorems.  相似文献   

9.
Datasets examining periodontal disease records current (disease) status information of tooth‐sites, whose stochastic behavior can be attributed to a multistate system with state occupation determined at a single inspection time. In addition, the tooth‐sites remain clustered within a subject, and the number of available tooth‐sites may be representative of the true periodontal disease status of that subject, leading to an ‘informative cluster size’ scenario. To provide insulation against incorrect model assumptions, we propose a non‐parametric regression framework to estimate state occupation probabilities at a given time and state exit/entry distributions, utilizing weighted monotonic regression and smoothing techniques. We demonstrate the superior performance of our proposed weighted estimators over the unweighted counterparts via a simulation study and illustrate the methodology using a dataset on periodontal disease.  相似文献   
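A hedged sketch of the weighted monotone-regression idea: current-status indicators are regressed isotonically on inspection time, with each tooth-site weighted by the inverse of its cluster size to guard against informative cluster size. The simulated data, the inverse-cluster-size weights and the use of scikit-learn's isotonic regression (rather than the authors' estimators and smoothing) are assumptions made for illustration.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(2)

# Simulated current-status data: for each tooth-site we observe, at a single
# inspection time, whether it has already entered the diseased state.
rows = []
for _ in range(200):                               # subjects (clusters)
    m = int(rng.integers(4, 29))                   # cluster size (tooth-sites)
    t = rng.uniform(0, 10, m)                      # inspection times
    # sicker mouths tend to have fewer remaining sites -> informative cluster size
    onset = rng.exponential(scale=2 + 0.2 * m, size=m)
    rows.append((t, (onset <= t).astype(float), np.full(m, 1.0 / m)))

times = np.concatenate([r[0] for r in rows])
status = np.concatenate([r[1] for r in rows])
weights = np.concatenate([r[2] for r in rows])     # inverse-cluster-size weights

# The state entry distribution is non-decreasing in time, so estimate it with a
# weighted isotonic (monotone) regression of status on inspection time.
iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True)
iso.fit(times, status, sample_weight=weights)
grid = np.linspace(0, 10, 6)
print(np.round(iso.predict(grid), 3))              # estimated entry probabilities
```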

10.
Most rational expectations models involve equations in which the dependent variable is a function of its lags and its expected future value. We investigate the asymptotic bias of generalized method of moment (GMM) and maximum likelihood (ML) estimators in such models under misspecification. We consider several misspecifications, and focus more specifically on the case of omitted dynamics in the dependent variable. In a stylized DGP, we derive analytically the asymptotic biases of these estimators. We establish that in many cases of interest the two estimators of the degree of forward-lookingness are asymptotically biased in opposite direction with respect to the true value of the parameter. We also propose a quasi-Hausman test of misspecification based on the difference between the GMM and ML estimators. Using Monte-Carlo simulations, we show that the ordering and direction of the estimators still hold in a more realistic New Keynesian macroeconomic model. In this set-up, misspecification is in general found to be more harmful to GMM than to ML estimators.  相似文献   

11.
In all empirical-network studies, the observed properties of economic networks are informative only if compared with a well-defined null model that can quantitatively predict the behavior of such properties in constrained graphs. However, predictions of the available null-model methods can be derived analytically only under assumptions (e.g., sparseness of the network) that are unrealistic for most economic networks like the world trade web (WTW). In this paper we study the evolution of the WTW using a recently proposed family of null network models. The method makes it possible to obtain analytically the expected value of any network statistic across the ensemble of networks that preserve on average some local properties, and are otherwise fully random. We compare expected and observed properties of the WTW in the period 1950–2000, when either the expected number of trade partners or total country trade is kept fixed and equal to observed quantities. We show that, in the binary WTW, node-degree sequences are sufficient to explain higher-order network properties such as disassortativity and clustering-degree correlation, especially in the last part of the sample. Conversely, in the weighted WTW, the observed sequences of total country imports and exports are not sufficient to predict higher-order patterns of the WTW. We discuss some important implications of these findings for international-trade models.
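A compact sketch of the degree-preserving maximum-entropy null model underlying this family of methods: node "fitnesses" x_i are solved so that expected degrees match the observed degree sequence, after which any network statistic can be evaluated analytically as an ensemble average. The toy random graph and the simple fixed-point solver are illustrative assumptions standing in for the WTW data and the authors' estimation routine.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "observed" binary network standing in for one year of the WTW.
n = 60
A = (rng.random((n, n)) < 0.15).astype(float)
A = np.triu(A, 1); A = A + A.T                 # undirected, no self-loops
k_obs = A.sum(axis=1)                          # observed degree sequence

# Maximum-entropy ensemble preserving expected degrees:
#   p_ij = x_i x_j / (1 + x_i x_j),  with  sum_{j != i} p_ij = k_i.
x = np.maximum(k_obs, 0.1) / np.sqrt(k_obs.sum())
for _ in range(2000):                          # simple fixed-point iteration
    denom = x[None, :] / (1.0 + np.outer(x, x))
    np.fill_diagonal(denom, 0.0)
    x_new = k_obs / denom.sum(axis=1)
    if np.max(np.abs(x_new - x)) < 1e-10:
        x = x_new
        break
    x = x_new

P = np.outer(x, x) / (1.0 + np.outer(x, x))
np.fill_diagonal(P, 0.0)

# Expected degrees reproduce observed degrees; any other statistic (clustering,
# assortativity, ...) can now be evaluated as an average over the ensemble.
print("max |<k_i> - k_i| =", np.max(np.abs(P.sum(axis=1) - k_obs)))
```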

12.
Journal of Econometrics, 2005, 126(2): 411–444
This paper has two main purposes. Firstly, we develop various ways of defining efficiency in the case of multiple-output production. Our framework extends a previous model by allowing for nonseparability of inputs and outputs. We also specifically consider the case where some of the outputs are undesirable, such as pollutants. We investigate how these efficiency definitions relate to one another and to other approaches proposed in the literature. Secondly, we examine the behavior of these definitions in two examples of practically relevant size and complexity. One of these involves banking and the other agricultural data. Our main findings can be summarized as follows. For a given efficiency definition, efficiency rankings are found to be informative, despite the considerable uncertainty in the inference on efficiencies. It is, however, important for the researcher to select an efficiency concept appropriate to the particular issue under study, since different efficiency definitions can lead to quite different conclusions.  相似文献   

13.
Multiple regression analysis with grouped data is often used as a method for exploring environmental preferences; the preferred unit of measurement in such analyses is mean scores rather than individual scores. Although this procedure allows us to reduce the potential for error in measuring different variables and, as a consequence, improves the reliability of the technique, it also produces some additional, undesirable effects. The latter include artificial increases in R2 values, which give the impression that a high degree of fit has been achieved for the regression model. Indeed, this goodness of fit often appears to be better than that which could have been achieved by using individual scores. Further, given that different studies operate with differing numbers of subjects in their groups, the R2 values which result from the analyses of these groups are not directly comparable. In the following discussion, we demonstrate how any non-zero correlation between variables can be increased, at will, by simply expanding the number of subjects in each group. We present the specialised formulae used for quantifying this increase and offer a warning about the purely relative nature of any study which bases its conclusions on regression models using grouped data.
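A small simulation illustrating the aggregation effect described above: individual ratings are noisy versions of scene-level scores, and the correlation (hence R2) between scene means rises steadily as the number of subjects per group grows, even though the individual-level relationship is unchanged. All numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

n_scenes = 60                                          # environments rated by all subjects
true_x = rng.normal(size=n_scenes)                     # "true" scene attribute score
true_y = 0.6 * true_x + rng.normal(scale=0.8, size=n_scenes)   # true scene preference

def corr_of_means(n_subjects, noise_sd=1.5):
    """Correlation between scene-mean attribute and scene-mean preference ratings."""
    x = true_x[:, None] + rng.normal(scale=noise_sd, size=(n_scenes, n_subjects))
    y = true_y[:, None] + rng.normal(scale=noise_sd, size=(n_scenes, n_subjects))
    r_individual = np.corrcoef(x.ravel(), y.ravel())[0, 1]
    r_means = np.corrcoef(x.mean(axis=1), y.mean(axis=1))[0, 1]
    return r_individual, r_means

for n in (1, 5, 20, 100):
    r_ind, r_mean = corr_of_means(n)
    print(f"subjects per group = {n:3d}   individual r = {r_ind:.2f}   "
          f"r of means = {r_mean:.2f}   R^2 of means = {r_mean**2:.2f}")
```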

14.
In the last decade, a number of models for the dynamic facility location problem have been proposed. The various models contain differing assumptions regarding the revenues and costs realized in the opening, operation, and closure of a facility, as well as considering which of the facility sites are candidates for acquisition or disposal at the beginning of a time period. Since the problem becomes extremely large for practical applications, much of the research has been directed toward developing efficient solution techniques. Most of the models and solutions assume that the facilities will be disposed of at the end of the time horizon, since distant future conditions usually cannot be forecasted with any reasonable degree of accuracy. The problem with this approach is that the “optimal” solution is optimal for only one hypothesized post-horizon facility configuration and may become nonoptimal under a different configuration. Post-optimality analysis is needed to assure management that the “optimal” decision to open or close a facility at a given point in time will not prove to be “nonoptimal” when the planning horizon is extended or when design parameters in subsequent time periods change. If management has some guarantee that the decision to open or close a facility in a given time period will not change, it can safely direct attention to the accuracy of the design parameters within that time period. This paper proposes a mixed integer linear programming model to determine which of a finite set of warehouse sites will be operating in each time period of a finite planning horizon. The model is general in the sense that it can reflect a number of acquisition alternatives—purchase, lease or rent. The principal assumptions of the model are: a) warehouses are assumed to have infinite capacity in meeting customer demand; b) in each time period, any non-operating warehouse is a candidate for becoming operational, and likewise any operating warehouse is a candidate for disposal; c) during a given time period, the fixed costs of becoming operational at a site are greater than the disposal value at that site, to reflect the nonrecoverable costs involved in operating a warehouse (these costs are separate from the acquisition and liquidation values of the site); d) during a time period the operation of a warehouse incurs overhead and maintenance costs as well as a depreciation in the disposal value. To solve the model, it is first simplified and a partial optimal solution is obtained by iteratively examining both lower and upper bounds on the savings realized if a site is opened in a given time period. An attempt is made to fix each warehouse open or closed in each time period. The bounds are based on the delta and omega tests proposed by Efroymson and Ray (1966) and Khumawala (1972), with adjustment for changes in the value of the warehouse between the beginning and end of a time period. A complete optimal solution is obtained by solving the reduced model with Benders' decomposition procedure. The optimal solution is then tested to determine which time periods contain “tentative” decisions that may be affected by post-horizon data, by analyzing the relationships satisfied by the lower (or upper) bounds used in the model simplification for that time period. If the warehouse decisions made in a time period satisfy these relationships and are thus unaffected by data changes in subsequent time periods, then the decisions made in earlier time periods will also be unaffected by future changes.
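A much-reduced sketch of a multi-period warehouse location MILP, written with the PuLP modelling library: binary variables track whether each site operates in each period, customers are assigned to open sites, and a one-off opening charge proxies the non-recoverable cost of becoming operational. The data and the omission of acquisition/disposal values, depreciation and Benders' decomposition are simplifications relative to the model in the abstract.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

sites, customers, periods = range(3), range(4), range(3)

fixed = {(j, t): 100 + 10 * j for j in sites for t in periods}       # operating cost
ship = {(i, j, t): 5 * abs(i - j) + 1                                 # assignment cost
        for i in customers for j in sites for t in periods}
open_cost = 60   # non-recoverable cost of opening a site (exceeds its disposal value)

prob = LpProblem("dynamic_warehouse_location", LpMinimize)

y = LpVariable.dicts("open", (sites, periods), cat=LpBinary)          # site operating?
x = LpVariable.dicts("serve", (customers, sites, periods), lowBound=0, upBound=1)
z = LpVariable.dicts("start", (sites, periods), cat=LpBinary)         # newly opened?

prob += (
    lpSum(fixed[j, t] * y[j][t] for j in sites for t in periods)
    + lpSum(ship[i, j, t] * x[i][j][t] for i in customers for j in sites for t in periods)
    + lpSum(open_cost * z[j][t] for j in sites for t in periods)
)

for i in customers:
    for t in periods:
        prob += lpSum(x[i][j][t] for j in sites) == 1          # demand fully served
for i in customers:
    for j in sites:
        for t in periods:
            prob += x[i][j][t] <= y[j][t]                      # serve only from open sites
for j in sites:
    for t in periods:
        prev = y[j][t - 1] if t > 0 else 0                     # all sites closed before t=0
        prob += z[j][t] >= y[j][t] - prev                      # detect an opening

prob.solve()
for t in periods:
    print(f"period {t}: open sites =", [j for j in sites if value(y[j][t]) > 0.5])
```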

15.
During the past two decades, innovations protected by patents have played a key role in business strategies. This fact enhanced studies of the determinants of patents and the impact of patents on innovation and competitive advantage. Sustaining competitive advantages is as important as creating them. Patents help sustaining competitive advantages by increasing the production cost of competitors, by signaling a better quality of products and by serving as barriers to entry. If patents are rewards for innovation, more R&D should be reflected in more patent applications but this is not the end of the story. There is empirical evidence showing that patents through time are becoming easier to get and more valuable to the firm due to increasing damage awards from infringers. These facts question the constant and static nature of the relationship between R&D and patents. Furthermore, innovation creates important knowledge spillovers due to its imperfect appropriability. Our paper investigates these dynamic effects using US patent data from 1979 to 2000 with alternative model specifications for patent counts. We introduce a general dynamic count panel data model with dynamic observable and unobservable spillovers, which encompasses previous models, is able to control for the endogeneity of R&D and therefore can be consistently estimated by maximum likelihood. Apart from allowing for firm specific fixed and random effects, we introduce a common unobserved component, or secret stock of knowledge, that affects differently the propensity to patent of each firm across sectors due to their different absorptive capacity.  相似文献   

16.
While encouraging banks to adopt the internal ratings-based approach to assess credit risk and set aside capital reserves, the New Basel Capital Accord also strengthens regulators' requirements for performance testing and review of internal rating models. CreditMetrics and CreditRisk+ are the benchmark models for credit-risk assessment in the banking industry. In terms of the underlying mathematics, CreditRisk+ is based on default events, whereas CreditMetrics evaluates risk according to rating migrations. Using statistical data from the Jiangsu banking regulatory bureau, we review the parameter characteristics and test the performance of these credit-risk assessment models. The results show that both widely used models can reliably achieve, in the operating practice of Jiangsu's commercial banks, the goal of allocating internal capital according to the actual risk profile of the loan portfolio.
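A toy Monte-Carlo illustration of the default-mode (CreditRisk+-style) side of the comparison: default intensities are scaled by a common gamma-distributed factor, default counts are Poisson-approximated, and economic capital is read off the 99.9% quantile of the loss distribution. The portfolio, the single-factor structure and the simulation itself (CreditRisk+ proper uses an analytic recursion) are assumptions for illustration, and the rating-migration (CreditMetrics) side is not modelled.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy loan portfolio (exposures in millions, annual default probabilities).
n_loans = 500
exposure = rng.uniform(0.5, 5.0, n_loans)
pd_ = rng.uniform(0.002, 0.03, n_loans)

# CreditRisk+-style default-mode simulation: default intensities scaled by a
# common gamma-distributed sector factor (mean 1), defaults Poisson-approximated.
n_sims, sigma = 20_000, 0.8
factor = rng.gamma(shape=1 / sigma**2, scale=sigma**2, size=n_sims)   # E = 1, sd = sigma
lam = factor[:, None] * pd_[None, :]
defaults = rng.poisson(lam)
loss = (defaults * exposure[None, :]).sum(axis=1)

expected_loss = loss.mean()
var_999 = np.quantile(loss, 0.999)
print(f"Expected loss: {expected_loss:.1f}m   99.9% VaR: {var_999:.1f}m   "
      f"capital (VaR - EL): {var_999 - expected_loss:.1f}m")
```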

17.
Current economic theory typically assumes that all the macroeconomic variables belonging to a given economy are driven by a small number of structural shocks. As recently argued, apart from negligible cases, the structural shocks can be recovered if the information set contains current and past values of a large, potentially infinite, set of macroeconomic variables. However, the usual practice of estimating small size causal Vector AutoRegressions can be extremely misleading as in many cases such models could fully recover the structural shocks only if future values of the few variables considered were observable. In other words, the structural shocks may be non‐fundamental with respect to the small dimensional vector used in current macroeconomic practice. By reviewing a recent strand of econometric literature, we show that, as a solution, econometricians should enlarge the space of observations, and thus consider models able to handle very large panels of related time series. Among several alternatives, we review dynamic factor models together with their economic interpretation, and we show how non‐fundamentalness is non‐generic in this framework. Finally, using a factor model, we provide new empirical evidence on the effect of technology shocks on labour productivity and hours worked.  相似文献   

18.
He Jiali (何家理). 《价值工程》 (Value Engineering), 2011, 30(24): 10-11
Taking China's division of the country into four principal function zones as a frame of reference, this paper explores a target model for economic and ecological development in humid mountainous regions, using the Qinba mountain area as an example: in the process of reconciling the conflicting goals of ecological balance and economic balance, it seeks the optimal model of benign, mutually reinforcing development between the economy and the ecology. Starting from regional characteristics, the Qinba mountain area is divided by slope into three regional units: the high mountain zone, the low mountain zone, and the river valley zone. With reference to the state's existing support policies for functional zoning in this region (nature reserve protection, the conversion of farmland back to forest, and the South-to-North Water Diversion project), and by combining regional characteristics with these support policies, the paper explores paths for economic development (wildlife protection and tourism; tea, mulberry and selenium-rich food processing; hydropower and aquaculture) and distils the direction of ecological-economic development for the Qinba mountain area (characteristic industries, traditional industries, advantageous industries), ultimately establishing a structural model of interactive development circles between ecology and the economy.

19.
Summary: Investment and uncertainty
After a few introductory remarks about the necessity of uniform definitions of the plan horizon, returns, expenses, interest factor, etc. for all investment alternatives, and of an equally uniform method of prediction, several models of investment analysis known from the literature are briefly discussed. For simplicity there is only one decision variable (i.e. capacity); the variables influencing it are called exogenous variables. These exogenous variables may be deterministic, in which case one has to maximize the sum of the discounted cash flows from each project, where different levels of investment in a project are assumed to be mutually exclusive. The maximization may or may not be subject to restrictions. If the exogenous variables are stochastic, one has to maximize the expected utility, mostly defined by the expected value and the variance of the discounted cash flows. The way to carry this out is simulation. Portfolio analysis may also be used, but there are some objections against this method, which are discussed.
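A minimal sketch of the stochastic case described in the summary: exogenous variables are simulated, the discounted cash flows of one capacity alternative are computed, and the result is summarised by the expected value and variance (plus the probability of a negative net present value). The cash-flow model, the interest factor and all figures are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

horizon, rate = 10, 0.08            # plan horizon (years), interest factor
outlay = 1_000.0                    # initial outlay of this capacity alternative
discount = (1.0 + rate) ** -np.arange(1, horizon + 1)

# Stochastic exogenous variables: yearly net returns with uncertain growth.
n_sims = 50_000
base_return = 150.0
growth = rng.normal(loc=0.03, scale=0.05, size=(n_sims, horizon))
returns = base_return * np.cumprod(1.0 + growth, axis=1)

npv = (returns * discount).sum(axis=1) - outlay

# Expected utility summarised, as in the summary, by the mean and variance
# (here: standard deviation) of the discounted cash flows.
print(f"E[NPV] = {npv.mean():.1f}   sd[NPV] = {npv.std():.1f}   "
      f"P(NPV < 0) = {(npv < 0).mean():.3f}")
```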

20.
The successful introduction of new durable products plays an important part in helping companies to stay ahead of their competitors. Decisions relating to these products can be improved by the availability of reliable pre-launch forecasts of their adoption time series. However, producing such forecasts is a difficult, complex and challenging task, mainly because of the non-availability of past time series data relating to the product, and the multiple factors that can affect adoptions, such as customer heterogeneity, macroeconomic conditions following the product launch, and technological developments which may lead to the product’s premature obsolescence. This paper provides a critical review of the literature to examine what it can tell us about the relative effectiveness of three fundamental approaches to filling the data void : (i) management judgment, (ii) the analysis of judgments by potential customers, and (iii) formal models of the diffusion process. It then shows that the task of producing pre-launch time series forecasts of adoption levels involves a set of sub-tasks, which all involve either quantitative estimation or choice, and argues that the different natures of these tasks mean that the forecasts are unlikely to be accurate if a single method is employed. Nevertheless, formal models should be at the core of the forecasting process, rather than unstructured judgment. Gaps in the literature are identified, and the paper concludes by suggesting a research agenda so as to indicate where future research efforts might be employed most profitably.  相似文献   
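As an example of the formal diffusion models referred to above, a minimal Bass-model pre-launch forecast: the innovation and imitation coefficients p and q and the market potential m are judgmental, assumed inputs of the kind such forecasts rely on before any sales data exist.

```python
import numpy as np

# Bass diffusion model: cumulative adoption fraction
#   F(t) = (1 - exp(-(p+q) t)) / (1 + (q/p) exp(-(p+q) t))
# p: coefficient of innovation, q: coefficient of imitation, m: market potential.
p, q, m = 0.03, 0.38, 500_000      # illustrative, judgment/analogy-based inputs

t = np.arange(0, 11)               # years since launch
F = (1 - np.exp(-(p + q) * t)) / (1 + (q / p) * np.exp(-(p + q) * t))
adoptions_per_year = m * np.diff(F)            # new adopters in each year

for year, a in enumerate(adoptions_per_year, start=1):
    print(f"year {year:2d}: {a:10,.0f} new adopters (cumulative {m * F[year]:,.0f})")
```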
