Similar Literature
20 similar documents retrieved.
1.
This paper proposes a new method to empirically validate simulation models that generate artificial time series data comparable with real-world data. The approach is based on comparing the structures of vector autoregression models that are estimated from both artificial and real-world data by means of causal search algorithms. This relatively simple procedure is able to tackle both the problem of confronting theoretical simulation models with the data and the problem of comparing different models in terms of their empirical reliability. Moreover, the paper provides an application of the validation procedure to the agent-based macroeconomic model proposed by Dosi et al. (2015).
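
As a rough illustration of the comparison step only (not the authors' causal-search procedure), the sketch below fits a VAR(1) by OLS to a "real" and a "simulated" series and compares the estimated coefficient matrices; the data and the Frobenius-distance score are placeholders introduced here for illustration.

```python
import numpy as np

def fit_var1(y):
    """OLS estimate of a VAR(1): y_t = c + A y_{t-1} + e_t (y is T x k)."""
    Y, X = y[1:], np.hstack([np.ones((len(y) - 1, 1)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return beta[1:].T                      # k x k coefficient matrix A

rng = np.random.default_rng(0)
real_series = rng.standard_normal((200, 3))        # placeholder "real" data
simulated_series = rng.standard_normal((200, 3))   # placeholder model output

A_real, A_sim = fit_var1(real_series), fit_var1(simulated_series)
# A crude structural-distance score; the paper instead compares the causal
# graphs recovered from the estimated VARs by a causal search algorithm.
print("Frobenius distance between VAR(1) coefficient matrices:",
      np.linalg.norm(A_real - A_sim))
```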

2.
This paper concerns a class of model selection criteria based on cross-validation techniques and estimative predictive densities. Both the simple or leave-one-out and the multifold or leave-m-out cross-validation procedures are considered. These cross-validation criteria define suitable estimators for the expected Kullback–Leibler risk, which measures the expected discrepancy between the fitted candidate model and the true one. In particular, we investigate the potential bias of these estimators under alternative asymptotic regimes for m. The results are obtained within the general context of independent, but not necessarily identically distributed, observations and by assuming that the candidate model may not contain the true distribution. An application to the class of normal regression models is also presented, and simulation results are obtained to gain further understanding of the behavior of the estimators.
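
A minimal sketch of the leave-one-out criterion for an i.i.d. normal working model: the average negative log estimative predictive density of each left-out observation serves as an estimator of the expected Kullback–Leibler risk. The function name and the toy data are assumptions introduced here for illustration.

```python
import numpy as np
from scipy.stats import norm

def loo_cv_log_score(x):
    """Leave-one-out estimate of the expected negative log (estimative)
    predictive density for an i.i.d. normal candidate model."""
    x = np.asarray(x, dtype=float)
    scores = []
    for i in range(len(x)):
        rest = np.delete(x, i)
        mu_hat, sigma_hat = rest.mean(), rest.std(ddof=1)
        scores.append(-norm.logpdf(x[i], loc=mu_hat, scale=sigma_hat))
    return np.mean(scores)    # lower = better candidate model

rng = np.random.default_rng(1)
sample = rng.normal(loc=2.0, scale=1.5, size=50)
print("LOO-CV criterion (normal model):", loo_cv_log_score(sample))
```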

3.
The estimation of finite-horizon discrete-choice dynamic programming (DCDP) models is computationally expensive. This limits their realism and impedes verification and validation efforts. Keane and Wolpin (Review of Economics and Statistics, 1994, 76(4), 648–672) propose an interpolation method that ameliorates the computational burden but introduces approximation error. I describe their approach in detail, successfully recompute their original quality diagnostics, and provide some additional insights that underscore the trade-off between computation time and the accuracy of estimation results.
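
A schematic of the interpolation idea only (not Keane and Wolpin's actual regression specification or their model): evaluate an expensive value-function-like object on a random subset of state points, fit a cheap regression on that subset, and interpolate to the remaining states, trading accuracy for computation time. The state space, the stand-in Emax function and the polynomial interpolant are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
states = rng.uniform(0, 2, size=(5000, 3))             # hypothetical state space

def expensive_emax(s):
    """Stand-in for a costly Monte Carlo Emax evaluation at state s."""
    return np.log1p(s[0]) + s[1] ** 2 + np.sin(3 * s[2])

subset = rng.choice(len(states), size=200, replace=False)   # evaluate only 200 points
y_sub = np.array([expensive_emax(s) for s in states[subset]])

X_sub = np.column_stack([np.ones(len(subset)), states[subset],
                         states[subset] ** 2])          # simple polynomial interpolant
beta, *_ = np.linalg.lstsq(X_sub, y_sub, rcond=None)

X_all = np.column_stack([np.ones(len(states)), states, states ** 2])
approx = X_all @ beta                                   # interpolated values everywhere
exact = np.array([expensive_emax(s) for s in states])   # computed only to check the error
print("mean absolute interpolation error:", np.abs(approx - exact).mean())
```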

4.
The aim of this article is first to review how standard econometric methods for panel data may be adapted to the problem of estimating frontier models and (in)efficiencies, clarifying the difference between the fixed and random effects models and stressing the advantages of the latter. A semi-parametric method is then proposed (using a non-parametric method as a first step), the message being that it is an appealing method for estimating frontier models and (in)efficiencies with panel data. Since analytic sampling distributions of efficiencies are not available, a bootstrap method is presented in this framework. This provides a tool for assessing the statistical significance of the obtained estimators. All the methods are illustrated on the problem of estimating the inefficiencies of 19 railway companies observed over a period of 14 years (1970–1983). Article presented at the ORSA/TIMS joint national meeting, Productivity and Global Competition, Philadelphia, October 29–31, 1990. An earlier version of the paper was presented at the European Workshop on Efficiency and Productivity Measurement in the Service Industries held at CORE, October 20–21, 1989. Helpful comments of Jacques Mairesse, Benoît Mulkay, Sergio Perelman, Michel Mouchart, Shawna Grosskopf and Rolf Färe, at various stages of the paper, are gratefully acknowledged.
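
A bare-bones sketch of the bootstrap step under strong simplifying assumptions (a naive percentile bootstrap over firms with a stand-in estimator; the paper's procedure for frontier efficiencies is more involved):

```python
import numpy as np

def bootstrap_ci(data, estimator, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a scalar statistic,
    resampling observations (here, firms) with replacement."""
    rng = np.random.default_rng(seed)
    n = len(data)
    stats = np.array([estimator(data[rng.integers(0, n, n)]) for _ in range(n_boot)])
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

# Toy stand-in for firm-level efficiency scores; in the paper the estimator
# is the semi-parametric frontier procedure, not a simple mean.
toy_scores = np.random.default_rng(2).uniform(0.5, 1.0, size=19)   # 19 "companies"
print("95% bootstrap CI for the mean efficiency:", bootstrap_ci(toy_scores, np.mean))
```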

5.
The novel coronavirus disease (COVID-19) and the resulting lockdowns have caused major retail operational disturbances around the globe, forcing retail organizations to manage their operations effectively. The impact can be regarded as a black swan event (BSE). To understand its impact on retail operations and enhance operational performance, this study evaluates retail operations and develops a decision-making model for disruptive events in Morocco. The study develops a three-phase evaluation approach involving fuzzy logic (to measure the current performance of retail operations), graph theory (to develop an exit strategy for retail operations based on different scenarios), and an ANN- and random-forest-based prediction model with k-fold cross-validation (to predict customer retention for retail operations). This methodology is used to develop a decision-making model specific to BSEs. From the analysis, the current retail performance index was computed to be at the "Average" level, and the graph-theoretic approach highlighted the critical attributes of retail operations. Further, the study identified triggering attributes for customer retention using machine-learning-based prediction models (MLBPM) and develops a contactless payment system for customers' safety and hygiene. The framework can be used periodically to help retail managers improve operational performance during disruptive events.
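
A small sketch of the customer-retention prediction phase only: a random-forest classifier scored with k-fold cross-validation. The features, labels and data below are synthetic stand-ins; the fuzzy-logic and graph-theoretic phases of the framework are not shown.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
n = 800
# Hypothetical retail features: visit frequency, basket size, complaint count.
X = np.column_stack([rng.poisson(5, n), rng.gamma(2.0, 30.0, n), rng.poisson(1, n)])
# Hypothetical noisy "customer retained" label.
y = ((X[:, 0] > 4) ^ (rng.random(n) < 0.1)).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)          # 5-fold cross-validation
print("fold accuracies:", np.round(scores, 3), " mean:", round(scores.mean(), 3))
```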

6.
Traditional principal component analysis (PCA) is essentially a linear mapping algorithm and cannot effectively handle data with nonlinear relationships. Building on an analysis of the auto-associative neural network (AANN) and borrowing the concept of ordered principal components from traditional PCA, this paper proposes a nonlinear principal component analysis (NLPCA) method based on a sequential auto-associative neural network (SAANN). Furthermore, combining neural networks (NN) and the logistic model, credit evaluation models based on NLPCA-NN and NLPCA-Logistic are constructed for Chinese listed companies. Empirical results and ROC curve analysis show that, compared with traditional linear PCA, the proposed NLPCA effectively extracts nonlinear features and reduces dimensionality, improving the predictive performance of the models. In addition, the empirical results show that, with the data processed by the same PCA method, the neural network model performs better than the logistic model in credit evaluation.
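
A minimal sketch of nonlinear PCA with an auto-associative (bottleneck) network, using scikit-learn's MLPRegressor as the autoencoder rather than the paper's sequential SAANN; the data are synthetic and the bottleneck activations are recovered with a manual forward pass.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = rng.standard_normal((300, 6))                  # placeholder financial indicators
X[:, 3] = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(300)   # a nonlinear dependency

Xs = StandardScaler().fit_transform(X)
# Auto-associative network: inputs are reproduced through a 2-unit bottleneck.
aann = MLPRegressor(hidden_layer_sizes=(8, 2, 8), activation="tanh",
                    max_iter=5000, random_state=0).fit(Xs, Xs)

def bottleneck_scores(model, Z):
    """Forward pass through the first two hidden layers -> nonlinear 'components'."""
    h = Z
    for W, b in list(zip(model.coefs_, model.intercepts_))[:2]:
        h = np.tanh(h @ W + b)
    return h

components = bottleneck_scores(aann, Xs)           # n_samples x 2 nonlinear features
print(components.shape)
```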

7.
Credit identification is one of the core issues in the financing process. Enterprise credit involves many financial and non-financial measures, among which entrepreneurship is an important but rarely considered variable. Good entrepreneur credit often leads to good enterprise credit. A comprehensive analysis of enterprise credit identification is important for avoiding losses, fostering excellent enterprises and allocating resources optimally. The existing literature has mainly studied the impact of entrepreneurship on enterprise credit from the perspective of historical information, which captures averages and tendencies. Such models cannot account for the complexity of human nature, and linear models consequently cannot describe the relationship between enterprise credit and entrepreneur credit well. Given this deficiency of parametric models when discussing the impact of entrepreneur credit, a nonparametric approach is proposed to describe the impact path for individual entrepreneurs. This paper establishes a decision tree based on a nonparametric approach to verify the practicability of the model in enterprise credit recognition. Finally, we demonstrate the validity of the nonparametric model and of its validation method.
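
A minimal sketch of the nonparametric decision-tree idea with scikit-learn. The entrepreneur/enterprise features, labels and data are hypothetical stand-ins; the paper's actual data and validation procedure are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(4)
n = 500
# Hypothetical features: entrepreneur credit score, firm leverage, profitability.
X = np.column_stack([rng.uniform(300, 850, n),       # entrepreneur credit score
                     rng.uniform(0.0, 1.0, n),       # leverage ratio
                     rng.normal(0.05, 0.1, n)])      # return on assets
# Hypothetical label: "good enterprise credit" partly driven by entrepreneur credit.
y = ((X[:, 0] > 600) & (X[:, 1] < 0.7)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("hold-out accuracy:", tree.score(X_te, y_te))
print(export_text(tree, feature_names=["entrepreneur_score", "leverage", "roa"]))
```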

8.
Application of the fuzzy analytic hierarchy process to customer requirement evaluation in mass customization
The fuzzy analytic hierarchy process (fuzzy AHP) is applied to the evaluation of customer requirements in mass customization in order to determine the weights of customer requirements. Based on an analysis of the characteristics of customer requirements, an evaluation index system for customer requirements is established, and the algorithmic steps of the fuzzy AHP are described in detail. Finally, the method is verified with a case study of a fan-heater manufacturer; the results show that the method is both effective and feasible.
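
For orientation, a compact sketch of the crisp AHP weighting step (principal-eigenvector weights plus a consistency ratio) for a hypothetical 3×3 pairwise-comparison matrix; the paper's fuzzy extension replaces the crisp judgements with fuzzy numbers, which is not shown here.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix for three customer requirements.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                    # requirement weights

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)            # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]             # Saaty's random index
print("weights:", np.round(w, 3), " CR:", round(ci / ri, 3))
```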

9.
The credit risk problem is one of the most important issues of modern financial mathematics. Fundamentally it consists in computing the default probability of a company going into debt. The problem can be studied by means of Markov transition models. The generalization of the transition models by means of homogeneous semi-Markov models is presented in this paper. The idea is to consider the credit risk problem as a reliability problem. In a semi-Markov environment it is possible to consider transition probabilities that change as a function of waiting time inside a state. The paper also shows how to apply semi-Markov reliability models in a credit risk environment. In the last section an example of the model is provided. Mathematics Subject Classification (2000): 60K15, 60K20, 90B25, 91B28. Journal of Economic Literature Classification: G21, G33.
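
An illustrative sketch of the simpler discrete-time homogeneous Markov case (not the semi-Markov model of the paper): default is treated as an absorbing state and cumulative default probabilities are read off powers of a rating-transition matrix. The ratings and probabilities are made up.

```python
import numpy as np

# Hypothetical one-period transition matrix over ratings A, B, D (default, absorbing).
P = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.80, 0.10],
              [0.00, 0.00, 1.00]])

def cumulative_default_prob(P, start_state, horizon):
    """P(default by `horizon`) for a chain started in `start_state`;
    with an absorbing default state this is the last entry of P**horizon."""
    return np.linalg.matrix_power(P, horizon)[start_state, -1]

for t in (1, 5, 10):
    print(f"P(default within {t} periods | start in B) =",
          round(cumulative_default_prob(P, start_state=1, horizon=t), 4))
```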

10.
In this study, we addressed the problem of point and probabilistic forecasting by describing a blending methodology for machine learning models from the gradient boosted trees and neural networks families. These principles were successfully applied in the recent M5 Competition in both the Accuracy and Uncertainty tracks. The key points of our methodology are: (a) transforming the task into regression on sales for a single day; (b) information-rich feature engineering; (c) creating a diverse set of state-of-the-art machine learning models; and (d) carefully constructing validation sets for model tuning. We show that the diversity of the machine learning models and careful selection of validation examples are most important for the effectiveness of our approach. Forecasting data have an inherent hierarchical structure (12 levels) but none of our proposed solutions exploited the hierarchical scheme. Using the proposed methodology, we ranked within the gold medal range in the Accuracy track and within the prizes in the Uncertainty track. Inference code with pre-trained models is available on GitHub.
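
A toy sketch of the blending idea only: choose a convex combination of a gradient-boosted-tree model and a neural network by minimizing error on a held-out validation set. The feature engineering, day-level regression target and M5 hierarchy are omitted, and the data below are synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(5)
X = rng.standard_normal((1200, 8))
y = X[:, 0] ** 2 + np.sin(X[:, 1]) + 0.1 * rng.standard_normal(1200)

X_tr, X_val, y_tr, y_val = X[:800], X[800:], y[:800], y[800:]
gbt = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
nn = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                  random_state=0).fit(X_tr, y_tr)

# Tune the blend weight on the validation set only.
p_gbt, p_nn = gbt.predict(X_val), nn.predict(X_val)
weights = np.linspace(0, 1, 21)
errs = [mean_squared_error(y_val, w * p_gbt + (1 - w) * p_nn) for w in weights]
best = weights[int(np.argmin(errs))]
print(f"best GBT weight on validation: {best:.2f}, MSE: {min(errs):.4f}")
```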

11.
Information systems (IS) are strongly influenced by changes in new technology and should react swiftly to external conditions. Resilience engineering is a new method that can enable these systems to absorb changes. In this study, a new framework is presented for the performance evaluation of IS that includes DeLone and McLean's success factors in addition to resilience. The study evaluates the impact of resilience on IS using the proposed model at the Iranian Gas Engineering and Development Company, based on questionnaire data and a Fuzzy Data Envelopment Analysis (FDEA) approach. First, the FDEA model with α-cut = 0.05 was identified as the most suitable for this application by running all Banker, Charnes and Cooper (BCC) and Charnes, Cooper and Rhodes (CCR) variants of FDEA and selecting the model with the maximum mean efficiency. The factors were then ranked based on the results of a sensitivity analysis, which showed that resilience had a significantly higher impact in the proposed model than the other factors. The results were verified by conducting the related ANOVA test. This is the first study to examine the impact of resilience on IS using statistical and mathematical approaches.

12.
This paper develops the structure of a parsimonious Portfolio Index (PI) GARCH model. Unlike the conventional approach to Portfolio Index returns, which employs the univariate ARCH class, the PI-GARCH approach incorporates the effects on individual assets, leading to a better understanding of portfolio risk management, and achieves greater accuracy in forecasting Value-at-Risk (VaR) thresholds. For various asymmetric GARCH models, a Portfolio Index Composite News Impact Surface (PI-CNIS) is developed to measure the effects of news on the conditional variances. The paper also investigates the finite sample properties of the PI-GARCH model. The empirical example shows that the asymmetric PI-GARCH-t model outperforms the GJR-t model and the filtered historical simulation with a t distribution in forecasting VaR thresholds.
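
A simplified sketch of a one-step-ahead VaR threshold from a plain univariate GARCH(1,1) with Student-t innovations and fixed, assumed parameters; the paper's PI-GARCH model, which aggregates individual-asset effects, is not reproduced here, and the returns and parameter values below are placeholders.

```python
import numpy as np
from scipy.stats import t

def garch11_var(returns, omega=1e-6, alpha=0.08, beta=0.90, nu=8, level=0.01):
    """One-step-ahead Value-at-Risk threshold from a GARCH(1,1) variance filter
    with assumed (not estimated) parameters and standardized t innovations."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.var(r)                       # initialize at the sample variance
    for x in r:
        sigma2 = omega + alpha * x ** 2 + beta * sigma2
    scale = np.sqrt(sigma2 * (nu - 2) / nu)  # so the innovation has unit variance
    return t.ppf(level, df=nu) * scale       # negative number: the loss threshold

rng = np.random.default_rng(6)
toy_returns = 0.01 * rng.standard_normal(500)     # placeholder portfolio returns
print("1% one-day VaR threshold:", round(garch11_var(toy_returns), 5))
```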

13.
Analysis, model selection and forecasting in univariate time series models can be routinely carried out for models in which the model order is relatively small. Under an ARMA assumption, classical estimation, model selection and forecasting can be routinely implemented with the Box–Jenkins time domain representation. However, this approach becomes at best prohibitive and at worst impossible when the model order is high. In particular, the standard assumption of stationarity imposes constraints on the parameter space that are increasingly complex. One solution within the pure AR domain is the latent root factorization in which the characteristic polynomial of the AR model is factorized in the complex domain, and where inference questions of interest and their solution are expressed in terms of the implied (reciprocal) complex roots; by allowing for unit roots, this factorization can identify any sustained periodic components. In this paper, as an alternative to identifying periodic behaviour, we concentrate on frequency domain inference and parameterize the spectrum in terms of the reciprocal roots, and, in addition, incorporate Gegenbauer components. We discuss a Bayesian solution to the various inference problems associated with model selection involving a Markov chain Monte Carlo (MCMC) analysis. One key development presented is a new approach to forecasting that utilizes a Metropolis step to obtain predictions in the time domain even though inference is being carried out in the frequency domain. This approach provides a more complete Bayesian solution to forecasting for ARMA models than the traditional approach that truncates the infinite AR representation, and extends naturally to Gegenbauer ARMA and fractionally differenced models.
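
A small sketch of the latent-root factorization step only: factor the AR characteristic polynomial with numpy and report the moduli and implied periods of the (reciprocal) complex roots. The AR(2) coefficients are made up; the paper's Bayesian frequency-domain machinery is not attempted.

```python
import numpy as np

def reciprocal_roots(phi):
    """Reciprocal roots of the AR characteristic polynomial
    1 - phi_1 z - ... - phi_p z^p (modulus < 1 <=> stationarity)."""
    return np.roots(np.r_[1.0, -np.asarray(phi, dtype=float)])

phi = [1.4, -0.85]                       # hypothetical AR(2) coefficients
for r in reciprocal_roots(phi):
    modulus, angle = np.abs(r), np.angle(r)
    period = 2 * np.pi / abs(angle) if angle != 0 else np.inf
    print("root", np.round(r, 3), " modulus", round(modulus, 3),
          " implied period", round(period, 2))
```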

14.
A new approach to Markov chain Monte Carlo simulation was recently proposed by Propp and Wilson. This approach, unlike traditional ones, yields samples which have exactly the desired distribution. The Propp–Wilson algorithm requires this distribution to have a certain structure called monotonicity. In this paper an idea of Kendall is applied to show how the algorithm can be extended to the case where monotonicity is replaced by anti-monotonicity. As illustrating examples, simulations of the hard-core model and the random-cluster model are presented.
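
A minimal illustration of the monotone Propp–Wilson (coupling-from-the-past) algorithm for a toy monotone chain, a reflecting random walk on {0,...,n}, rather than the hard-core or random-cluster models or the anti-monotone extension discussed in the paper. The extreme chains started at 0 and n are driven by the same cached uniforms; when they coalesce at time 0 the common value is an exact draw from the stationary distribution.

```python
import random

def cftp_birth_death(n=10, p=0.5, seed=0):
    """Exact sampling from the stationary law of a monotone reflecting
    random walk on {0,...,n} via coupling from the past."""
    rng = random.Random(seed)
    U = {}                        # U[t]: uniform driving the step from time t to t+1
    T = 1
    while True:
        for t in range(-T, 0):    # draw (and cache) uniforms for new earlier steps
            if t not in U:
                U[t] = rng.random()
        lo, hi = 0, n             # start the two extreme chains at time -T
        for t in range(-T, 0):
            u = U[t]
            lo = min(lo + 1, n) if u < p else max(lo - 1, 0)
            hi = min(hi + 1, n) if u < p else max(hi - 1, 0)
        if lo == hi:              # coalescence: exact sample at time 0
            return lo
        T *= 2                    # otherwise restart from further back in the past

print([cftp_birth_death(seed=s) for s in range(10)])
```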

15.
Bayesian model selection with posterior probabilities and no subjective prior information is generally not possible because the Bayes factors are ill-defined. Using careful consideration of the parameter of interest in cointegration analysis and a re-specification of the triangular model of Phillips (Econometrica, Vol. 59, pp. 283–306, 1991), this paper presents an approach that allows for Bayesian comparison of models of cointegration with ‘ignorance’ priors. Using the concept of Stiefel and Grassmann manifolds, diffuse priors are specified on the dimension and direction of the cointegrating space. The approach is illustrated using a simple term structure of the interest rates model.

16.
Data Envelopment Analysis (DEA) assumes, in most cases, that all inputs and outputs are controlled by the Decision Making Unit (DMU). Inputs and/or outputs that do not conform to this assumption are denoted in DEA as non-discretionary (ND) factors. Banker and Morey [1986] formulated several variants of DEA models which incorporated ND with ordinary factors. This article extends the Banker and Morey approach for treating non-discretionary factors in two ways. First, the model is extended to allow for the simultaneous presence of ND factors in both the input and the output sets. Second, a generalization is offered which, for the first time, enables a quantitative evaluation of partially controlled factors. A numerical example is given to illustrate the different models. The editor for this paper was Wade D. Cook.
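
An envelopment-form sketch in the Banker–Morey spirit, where only the discretionary input is scaled by the efficiency score θ while the non-discretionary input enters its constraint at the observed level; solved with scipy.optimize.linprog on made-up data. This is a simplified constant-returns illustration, not the article's generalized models.

```python
import numpy as np
from scipy.optimize import linprog

# Made-up data: 5 DMUs, one discretionary input, one non-discretionary input, one output.
x_d = np.array([20.0, 30.0, 40.0, 35.0, 25.0])    # discretionary input
x_n = np.array([5.0, 7.0, 6.0, 9.0, 4.0])         # non-discretionary input
y   = np.array([10.0, 14.0, 15.0, 16.0, 11.0])    # output

def nd_dea_score(j0):
    """min theta s.t. sum(l*x_d) <= theta*x_d0, sum(l*x_n) <= x_n0,
    sum(l*y) >= y0, l >= 0   (variables: [theta, lambda_1..J])."""
    J = len(y)
    c = np.r_[1.0, np.zeros(J)]
    A_ub = np.vstack([np.r_[-x_d[j0], x_d],        # discretionary input constraint
                      np.r_[0.0, x_n],             # non-discretionary input constraint
                      np.r_[0.0, -y]])             # output constraint
    b_ub = np.array([0.0, x_n[j0], -y[j0]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * J, method="highs")
    return res.fun                                 # efficiency score theta*

for j in range(len(y)):
    print(f"DMU {j + 1}: theta = {nd_dea_score(j):.3f}")
```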

17.
In this paper we, like several studies in the recent literature, employ a Bayesian approach to estimation and inference in models with endogeneity concerns by imposing weaker prior assumptions than complete excludability. When allowing for instrument imperfection of this type, the model is only partially identified, and as a consequence standard estimates obtained from the Gibbs simulations can be unacceptably imprecise. We thus describe a substantially improved ‘semi-analytic’ method for calculating parameter marginal posteriors of interest that only requires use of the well-mixing simulations associated with the identifiable model parameters and the form of the conditional prior. Our methods are also applied in an illustrative application involving the impact of body mass index on earnings. Copyright © 2014 John Wiley & Sons, Ltd.

18.
Estimation of spatial autoregressive panel data models with fixed effects
This paper establishes asymptotic properties of quasi-maximum likelihood estimators for SAR panel data models with fixed effects and SAR disturbances. A direct approach is to estimate all the parameters including the fixed effects. Because of the incidental parameter problem, some parameter estimators may be inconsistent or their distributions are not properly centered. We propose an alternative estimation method based on transformation which yields consistent estimators with properly centered distributions. For the model with individual effects only, the direct approach does not yield a consistent estimator of the variance parameter unless T is large, but the estimators for other common parameters are the same as those of the transformation approach. We also consider the estimation of the model with both individual and time effects.

19.
We introduce the papers appearing in the special issue of this journal associated with WEHIA 2015. The papers in this issue deal with two growing fields in the literature inspired by the complexity-based approach to economic analysis. The first group of contributions develops network models of financial systems and shows how these models can shed light on relevant issues that emerged in the aftermath of the last financial crisis. The second group of contributions deals with the issue of validating agent-based models. Agent-based models have proven extremely useful for capturing key features of economic dynamics that are usually neglected by more standard models. At the same time, agent-based models have been criticized for the lack of adequate validation against empirical data. The works in this issue propose useful techniques to validate agent-based models, thus contributing to the wider diffusion of these models in the economic discipline.

20.
Since the 1990s, the Akaike Information Criterion (AIC) and its various modifications/extensions, including BIC, have found wide applicability in econometrics as objective procedures that can be used to select parsimonious statistical models. The aim of this paper is to argue that these model selection procedures invariably give rise to unreliable inferences, primarily because their choice within a prespecified family of models (a) assumes away the problem of model validation, and (b) ignores the relevant error probabilities. This paper argues for a return to the original statistical model specification problem, as envisaged by Fisher (1922), where the task is understood as one of selecting a statistical model in such a way as to render the particular data a truly typical realization of the stochastic process specified by the model in question. The key to addressing this problem is to replace trading goodness-of-fit against parsimony with statistical adequacy as the sole criterion for when a fitted model accounts for the regularities in the data.
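
For reference, a small sketch of how AIC/BIC would rank nested Gaussian regression models on synthetic data; the paper's point is precisely that such a ranking neither validates the statistical assumptions of the winning model nor controls the relevant error probabilities.

```python
import numpy as np

def aic_bic_gaussian(y, X):
    """AIC and BIC for a Gaussian linear regression fitted by OLS."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / n                        # ML variance estimate
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    p = k + 1                                         # coefficients + variance
    return 2 * p - 2 * loglik, p * np.log(n) - 2 * loglik

rng = np.random.default_rng(7)
x = rng.standard_normal(100)
y = 1.0 + 2.0 * x + rng.standard_normal(100)
X1 = np.column_stack([np.ones(100), x])               # correct specification
X2 = np.column_stack([np.ones(100), x, x ** 2])       # over-parameterized
for name, X in [("linear", X1), ("quadratic", X2)]:
    aic, bic = aic_bic_gaussian(y, X)
    print(f"{name}: AIC = {aic:.2f}, BIC = {bic:.2f}")
```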

