20 similar documents found (search time: 0 ms)
1.
Robust tests and estimators based on nonnormal quasi-likelihood functions are developed for autoregressive models with near unit root. Asymptotic power functions and power envelopes are derived for point-optimal tests of a unit root when the likelihood is correctly specified. The shapes of these power functions are found to be sensitive to the extent of nonnormality in the innovations. Power loss resulting from using least-squares unit-root tests in the presence of thick-tailed innovations appears to be greater than in stationary models.
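The setting above can be illustrated with a minimal sketch: simulate a near-unit-root AR(1) with heavy-tailed innovations and compute the least-squares (Dickey–Fuller) t-statistic whose power loss the abstract discusses. All function names and the t(3)-style innovation construction are illustrative choices, not the authors' code.

```python
import math
import random

def simulate_ar1(n, rho, heavy_tailed=False, seed=0):
    """Simulate y_t = rho * y_{t-1} + e_t, optionally with heavy-tailed
    (Student-t(3)-like) innovations built from Gaussians."""
    rng = random.Random(seed)
    y, prev = [], 0.0
    for _ in range(n):
        if heavy_tailed:
            # t(3) innovation: Z / sqrt(chi-square(3) / 3)
            z = rng.gauss(0, 1)
            chi2 = sum(rng.gauss(0, 1) ** 2 for _ in range(3))
            e = z / math.sqrt(chi2 / 3)
        else:
            e = rng.gauss(0, 1)
        prev = rho * prev + e
        y.append(prev)
    return y

def df_t_statistic(y):
    """Least-squares Dickey-Fuller t-statistic for H0: rho = 1 (no intercept)."""
    num = sum(y[t - 1] * (y[t] - y[t - 1]) for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    gamma_hat = num / den                      # estimate of rho - 1
    resid = [y[t] - (1 + gamma_hat) * y[t - 1] for t in range(1, len(y))]
    s2 = sum(r * r for r in resid) / (len(y) - 2)
    return gamma_hat / math.sqrt(s2 / den)

y = simulate_ar1(500, rho=0.98, heavy_tailed=True, seed=42)
t_stat = df_t_statistic(y)
```

Under a clearly stationary process the statistic is strongly negative, while near the unit root it hovers around the Dickey–Fuller critical region, which is where the choice of quasi-likelihood matters.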
2.
Law enforcement agencies need crime forecasts to support their tactical operations; namely, predicted crime locations for next week based on data from the previous week. Current practice simply assumes that spatial clusters of crimes or “hot spots” observed in the previous week will persist to the next week. This paper introduces a multivariate prediction model for hot spots that relates the features in an area to the predicted occurrence of crimes through the preference structure of criminals. We use a point-pattern-based transition density model for space–time event prediction that relies on criminal preference discovery as observed in the features chosen for past crimes. The resultant model outperforms the current practices, as demonstrated statistically by an application to breaking and entering incidents in Richmond, VA.
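The "current practice" baseline the paper benchmarks against can be sketched in a few lines: grid last week's incident coordinates into cells and predict that the busiest cells persist. The grid size, point data, and function name are illustrative assumptions.

```python
from collections import Counter

def persistence_hot_spots(incidents, cell_size=1.0, top_k=3):
    """Naive persistence baseline: the grid cells with the most incidents
    last week are predicted to be next week's hot spots."""
    counts = Counter(
        (int(x // cell_size), int(y // cell_size)) for x, y in incidents
    )
    return [cell for cell, _ in counts.most_common(top_k)]

# Hypothetical (x, y) incident locations from the previous week.
last_week = [(0.2, 0.3), (0.4, 0.1), (0.9, 0.8),
             (5.1, 5.2), (5.3, 5.4), (5.2, 5.9), (9.0, 1.0)]
hot = persistence_hot_spots(last_week, cell_size=1.0, top_k=2)
```

The paper's contribution is to replace this pure persistence rule with a transition density driven by area features and inferred criminal preferences.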
3.
International Journal of Forecasting, 2014, 30(2): 328-343
Predicting the risk of mortgage prepayments has been the focus of many studies over the past three decades. Most of these works have used single prediction models, such as logistic regressions and survival models, to seek the key influencing factors. From the point of view of customer relationship management (CRM), a two-stage model (i.e., the segment and prediction model) is proposed for analyzing the risk of mortgage prepayment in this research. In the first stage, random forests are used to segment mortgagors into different groups; then, a proportional hazard model is constructed to predict the prepayment time of the mortgagors in the second stage. The results indicate that the two-stage model predicts mortgage prepayment more accurately than the single-stage model (non-segmentation model).
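A toy sketch of the two-stage idea, with deliberately simplified stand-ins: a single loan-to-value split plays the role of the random-forest segmentation, and a constant-hazard (exponential) MLE plays the role of the proportional hazards model. All field names, thresholds, and data are hypothetical.

```python
def segment(mortgagors, ltv_cut=0.8):
    """Stage 1 stand-in: split mortgagors on loan-to-value ratio
    (the paper uses random forests for this step)."""
    low = [m for m in mortgagors if m["ltv"] <= ltv_cut]
    high = [m for m in mortgagors if m["ltv"] > ltv_cut]
    return {"low_ltv": low, "high_ltv": high}

def exponential_hazard(group):
    """Stage 2 stand-in: constant-hazard MLE, events / total exposure months
    (the paper fits a proportional hazards model per segment)."""
    events = sum(m["prepaid"] for m in group)
    exposure = sum(m["months"] for m in group)
    return events / exposure if exposure else 0.0

data = [
    {"ltv": 0.60, "months": 24, "prepaid": 1},
    {"ltv": 0.70, "months": 36, "prepaid": 0},
    {"ltv": 0.90, "months": 12, "prepaid": 1},
    {"ltv": 0.95, "months": 8,  "prepaid": 1},
]
segments = segment(data)
rates = {name: exponential_hazard(g) for name, g in segments.items()}
```

On this toy data the high-LTV segment prepays at a visibly higher rate, which is the kind of between-segment heterogeneity that motivates segmenting before fitting the hazard model.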
4.
In this paper, a new statistical method for dealing with quantum finance is proposed. Through analyzing stock data from the China Mobile Communication Corporation, we identify a quantum financial effect and develop a method for testing its existence. Furthermore, the classical normal process of the Glosten–Jagannathan–Runkle (GJR) model is replaced with a quantum wave-function distribution based on the ‘one-dimensional infinitely deep square potential well’. The research shows that the quantum GJR model can reveal the interior uncertainty of the financial market and offers better predictive performance.
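The wave-function ingredient named in the abstract is standard textbook physics and can be checked numerically: for the one-dimensional infinitely deep square well, psi_n(x) = sqrt(2/L) * sin(n*pi*x/L) on [0, L], so |psi_n|^2 integrates to 1. This sketch covers only that building block, not the full quantum GJR model.

```python
import math

def well_density(x, n=1, L=1.0):
    """Probability density |psi_n(x)|^2 for the one-dimensional infinitely
    deep square potential well on [0, L]."""
    return (2.0 / L) * math.sin(n * math.pi * x / L) ** 2

def integrate(f, a, b, steps=10000):
    """Midpoint-rule numerical integration."""
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

total = integrate(lambda x: well_density(x, n=2), 0.0, 1.0)
```

The normalization check (total close to 1) confirms the density is a valid probability distribution before it is substituted for the Gaussian in the GJR recursion.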
5.
This paper studies inference in a continuous time game where an agent’s decision to quit an activity depends on the participation of other players. In equilibrium, similar actions can be explained not only by direct influences but also by correlated factors. Our model can be seen as a simultaneous duration model with multiple decision makers and interdependent durations. We study the problem of determining the existence and uniqueness of equilibrium stopping strategies in this setting. This paper provides results and conditions for the detection of these endogenous effects. First, we show that the presence of such effects is a necessary and sufficient condition for simultaneous exits. This allows us to set up a nonparametric test for the presence of such influences, which is robust to multiple equilibria. Second, we provide conditions under which parameters in the game are identified. Finally, we apply the model to data on desertion in the Union Army during the American Civil War, and find evidence of endogenous influences.
6.
Attribute selection is an important step in developing a prediction model, yet determining an effective attribute selection algorithm is a difficult problem. Attribute selection can remove irrelevant and redundant attributes to increase prediction accuracy, and evaluating attribute selection methods usually requires several criteria, such as accuracy, type I error, and type II error. In this paper, the attribute selection process is modeled as a group multiple-attribute decision-making (GMADM) problem. Different GMADM methods usually produce consistent evaluations, but in some situations their rankings conflict. The GMADM framework is a useful tool for evaluating attribute selection algorithms, and TOPSIS is capable of identifying a compromise solution when different GMADM methods yield conflicting rankings. Therefore, this paper proposes an objective (persuasive) GMADM-based evaluation method for attribute selection that resolves such disagreements and helps decision makers pick the most suitable method. After verification, the proposed model proves more persuasive for evaluating attribute selection methods when developing prediction models.
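The TOPSIS step mentioned in the abstract can be sketched directly: normalize a decision matrix, weight it, and score each alternative by its relative closeness to the ideal point. The evaluation matrix, weights, and criterion directions below are hypothetical, matching the abstract's accuracy / type I error / type II error criteria.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS: vector-normalise, weight, measure
    distances to the ideal and anti-ideal points, score by relative closeness.
    benefit[j] is True when criterion j should be maximised (accuracy) and
    False when it should be minimised (type I / type II error)."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [(max if benefit[j] else min)(v[i][j] for i in range(m)) for j in range(n)]
    worst = [(min if benefit[j] else max)(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_pos = math.sqrt(sum((v[i][j] - ideal[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((v[i][j] - worst[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Hypothetical matrix: three attribute-selection methods scored on
# accuracy (maximise), type I error and type II error (both minimise).
methods = [[0.90, 0.05, 0.10],
           [0.85, 0.02, 0.08],
           [0.88, 0.10, 0.20]]
scores = topsis(methods, weights=[0.5, 0.25, 0.25], benefit=[True, False, False])
best = max(range(len(scores)), key=scores.__getitem__)
```

Here the second method wins despite its lower raw accuracy, because its much lower error rates dominate once all three criteria are weighed together, which is exactly the kind of compromise ranking TOPSIS is designed to produce.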
7.
Baba Thiam, Statistica Neerlandica, 2019, 73(1): 63-77
In this paper, we study an alternative estimator of the regression function when the covariates are observed with error. It is based on the minimization of the relative mean squared error. We obtain expressions for its asymptotic bias and variance, together with an asymptotic normality result. Our technique is illustrated through simulation studies. Numerical results suggest that the studied estimator can lead to tangible improvements in prediction over the usual kernel deconvolution regression estimator, particularly in the presence of several outliers in the dataset.
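The relative-MSE criterion alone can be sketched as follows: for positive responses, the minimiser of E[((Y - c)/Y)^2 | X = x] is E[1/Y | x] / E[1/Y^2 | x], which suggests a ratio of two kernel smooths. This is only the criterion's plug-in form under error-free covariates; the paper's estimator additionally corrects for measurement error in X, which this sketch does not.

```python
import math

def gaussian_kernel(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def relative_error_regression(x, xs, ys, h=0.5):
    """Kernel plug-in for argmin_c E[((Y - c)/Y)^2 | X = x], i.e.
    E[1/Y | x] / E[1/Y^2 | x], assuming positive responses ys."""
    w = [gaussian_kernel((x - xi) / h) for xi in xs]
    num = sum(wi / yi for wi, yi in zip(w, ys))
    den = sum(wi / (yi * yi) for wi, yi in zip(w, ys))
    return num / den

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [1.0, 1.5, 2.0, 2.5, 3.0]   # roughly y = 1 + x, all positive
fit = relative_error_regression(1.0, xs, ys, h=0.3)
```

Downweighting by 1/Y^2 is what gives relative-error estimators their robustness to large-response outliers, the behaviour the abstract's simulations highlight.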
8.
Bayesian averaging, prediction and nonnested model selection (total citations: 1; self-citations: 0; citations by others: 1)
This paper studies the asymptotic relationship between Bayesian model averaging and post-selection frequentist predictors in both nested and nonnested models. We derive conditions under which their difference is of a smaller order of magnitude than the inverse of the square root of the sample size in large samples. This result depends crucially on the relation between posterior odds and frequentist model selection criteria. Weak conditions are given under which consistent model selection is feasible, regardless of whether models are nested or nonnested and regardless of whether models are correctly specified or not, in the sense that they select the best model with the least number of parameters with probability converging to 1. Under these conditions, Bayesian posterior odds and BICs are consistent for selecting among nested models, but are not consistent for selecting among nonnested models and possibly overlapping models. These findings have an important bearing for applied researchers who are frequent users of model selection tools for empirical investigation of model predictions.
9.
Online auctions have become a popular mechanism for setting prices for internet users. However, auction price prediction, which involves modeling the uncertainty of the bidding process, is a challenging task, primarily because of the variety of factors that change across auction settings. Even if all of these factors were accounted for, uncertainties would remain in human bidding behavior. In this paper, three models (regression, neural networks, and neuro-fuzzy) are constructed to predict the final prices of English auctions, using real-world online auction data collected from Yahoo-Kimo Auction. The empirical results show that the neuro-fuzzy system captures the complicated relationships among the variables much more accurately than the other models, which helps buyers avoid overpaying and helps sellers facilitate the auction. In addition, the knowledge base obtained from the neuro-fuzzy model provides an elaborated account of the relationships among the variables, which can be further tested for theory building.
10.
Pooia Lalbakhsh, Enterprise Information Systems, 2017, 11(5): 758-785
This paper presents a transportable ant colony discrimination strategy (TACD) to predict corporate bankruptcy, a topic of vital importance that is attracting increasing interest in the field of economics. The proposed algorithm uses financial ratios to build a binary prediction model for companies with the two statuses of bankrupt and non-bankrupt. The algorithm takes advantage of an improved version of continuous ant colony optimisation (CACO) at the core, which is used to create an accurate, simple and understandable linear model for discrimination. This also enables the algorithm to work with continuous values, leading to more efficient learning and adaption by avoiding data discretisation. We conduct a comprehensive performance evaluation on three real-world data sets under a stratified cross-validation strategy. In three different scenarios, TACD is compared with 11 other bankruptcy prediction strategies. We also discuss the efficiency of the attribute selection methods used in the experiments. In addition to its simplicity and understandability, statistical significance tests prove the efficiency of TACD against the other prediction algorithms in both measures of AUC and accuracy.
11.
12.
International Journal of Forecasting, 2022, 38(3): 920-943
This paper introduces a novel meta-learning algorithm for time series forecast model performance prediction. We model the forecast error as a function of time series features calculated from historical time series with an efficient Bayesian multivariate surface regression approach. The minimum predicted forecast error is then used to identify an individual model or a combination of models to produce the final forecasts. It is well known that the performance of most meta-learning models depends on the representativeness of the reference dataset used for training. In such circumstances, we augment the reference dataset with a feature-based time series simulation approach, namely GRATIS, to generate a rich and representative time series collection. The proposed framework is tested using the M4 competition data and is compared against commonly used forecasting approaches. Our approach provides comparable performance to other model selection and combination approaches but at a lower computational cost and a higher degree of interpretability, which is important for supporting decisions. We also provide useful insights regarding which forecasting models are expected to work better for particular types of time series, the intrinsic mechanisms of the meta-learners, and how the forecasting performance is affected by various factors.
13.
This paper considers the identification and estimation of an extension of Roy’s model (1951) of sectoral choice, which includes a non-pecuniary component in the selection equation and allows for uncertainty on potential earnings. We focus on the identification of the non-pecuniary component, which is key to disentangling the relative importance of monetary incentives versus preferences in the context of sorting across sectors. By making the most of the structure of the selection equation, we show that this component is point identified from the knowledge of the covariate effects on earnings, as soon as one covariate is continuous. Notably, and in contrast to most results on the identification of Roy models, this implies that identification can be achieved without any exclusion restriction nor large support condition on the covariates. As a by-product, bounds are obtained on the distribution of the ex ante monetary returns. We propose a three-stage semiparametric estimation procedure for this model, which yields root-n consistent and asymptotically normal estimators. Finally, we apply our results to the educational context, by providing new evidence from French data that non-pecuniary factors are a key determinant of higher education attendance decisions.
14.
Fisher and Inference for Scores (total citations: 1; self-citations: 0; citations by others: 1)
This paper examines the work of Fisher and Bartlett on discriminant analysis, ordinal response regression and correspondence analysis. Placing these methods with canonical correlation analysis in the context of the singular value decomposition of particular matrices, we use explicit models and vector space notation to unify these methods, understand Fisher's approach, understand Bartlett's criticisms of Fisher and relate both to modern thinking. We consider in particular the formulation of certain hypotheses and Fisher's arguments to obtain approximate distributions for tests of these hypotheses (without assuming multivariate normality) and put these in modern notation. Using perturbation techniques pioneered by G.S. Watson, we give an asymptotic justification for Fisher's test for assigned scores and thereby resolve a long-standing conflict between Fisher and Bartlett.
15.
In this paper we propose Bayesian and frequentist approaches to ecological inference, based on R × C contingency tables, including a covariate. The proposed Bayesian model extends the binomial-beta hierarchical model developed by King, Rosen and Tanner (1999) from the 2×2 case to the R × C case. As in the 2×2 case, the inferential procedure employs Markov chain Monte Carlo (MCMC) methods. As such, the resulting MCMC analysis is rich but computationally intensive. The frequentist approach, based on first moments rather than on the entire likelihood, provides quick inference via nonlinear least-squares, while retaining good frequentist properties. The two approaches are illustrated with simulated data, as well as with real data on voting patterns in Weimar Germany. In the final section of the paper we provide an overview of a range of alternative inferential approaches which trade off computational intensity for statistical efficiency.
16.
Set against the alumina production process at the Henan branch of the Aluminum Corporation of China, this paper focuses on the construction and application of an intelligent integrated prediction model for the liquid-to-solid ratio. Taking the raw slurry blending process as the research object, a liquid-to-solid ratio prediction model is built using neural networks and grey theory. On-site operating results demonstrate the model's effectiveness and accuracy, achieving reliable online prediction of the liquid-to-solid ratio.
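The grey-theory half of such a hybrid model is typically the classic GM(1,1) forecaster, sketched below: accumulate the series (AGO), fit the whitened equation x0(k) = -a*z1(k) + b by least squares on the mean-generated sequence z1, then difference the fitted response back (IAGO). This is a generic GM(1,1), not the paper's specific hybrid, and the test series is hypothetical.

```python
import math

def gm11_forecast(series, steps=1):
    """GM(1,1) grey forecast. Assumes a non-constant series so the
    development coefficient a is nonzero."""
    x1, total = [], 0.0
    for v in series:                       # AGO: accumulated generating operation
        total += v
        x1.append(total)
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, len(x1))]  # mean sequence
    y = list(series[1:])
    # Two-parameter least squares for y[k] = -a*z[k] + b.
    n = len(z)
    sz, sy = sum(z), sum(y)
    szz = sum(v * v for v in z)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    det = n * szz - sz * sz
    a = (sz * sy - n * szy) / det
    b = (sy * szz - sz * szy) / det

    def x1_hat(k):                         # fitted accumulated response
        return (series[0] - b / a) * math.exp(-a * k) + b / a

    last = len(series) - 1
    return [x1_hat(last + m) - x1_hat(last + m - 1) for m in range(1, steps + 1)]

series = [2 * 1.1 ** i for i in range(6)]  # near-exponential growth, GM(1,1)'s sweet spot
pred = gm11_forecast(series, steps=1)[0]
```

On near-exponential data like this, GM(1,1) tracks the next value closely; in the hybrid scheme described by the abstract, a neural network would then model what the grey component misses.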
17.
This paper aims to demonstrate a possible aggregation gain in predicting future aggregates under a practical assumption of model misspecification. Empirical analysis of a number of economic time series suggests that the use of the disaggregate model is not always preferred over the aggregate model in predicting future aggregates, in terms of an out-of-sample prediction root-mean-square error criterion. One possible justification of this interesting phenomenon is model misspecification. In particular, if the model fitted to the disaggregate series is misspecified (i.e., not the true data generating mechanism), then the forecast made by a misspecified model is not always the most efficient. This opens up an opportunity for the aggregate model to perform better. It will be of interest to find out when the aggregate model helps. In this paper, we study a framework where the underlying disaggregate series has a periodic structure. We derive and compare the efficiency loss in linear prediction of future aggregates using the adapted disaggregate model and aggregate model. Some scenarios for aggregation gain to occur are identified. Numerical results show that the aggregate model helps over a fairly large region in the parameter space of the periodic model that we studied.
18.
In a simple multivariate normal prediction setting, derivation of a predictive distribution can flow from formal Bayes arguments as well as pivoting arguments. We look at two special cases and show that the classical invariant predictive distribution is based on a pivot whose sampling distribution depends on the parameter; that is, the pivot is not an ancillary statistic. In contrast, a predictive distribution derived by a structural argument is based on a pivot with a parameter-free distribution (an ancillary statistic). The classical procedure is formal Bayes for the Jeffreys prior. Our results show that this procedure does not have a structural or fiducial interpretation.
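What "parameter-free pivot" means can be checked by simulation in the scalar normal case: the prediction pivot T = (Y_new - ybar) / (s * sqrt(1 + 1/n)) should have the same distribution whatever (mu, sigma) generated the data. This is the textbook univariate pivot, used here only to illustrate ancillarity, not the paper's multivariate special cases.

```python
import math
import random
import statistics

def simulate_pivot(mu, sigma, n=10, reps=20000, seed=0):
    """Monte Carlo draws of T = (Y_new - ybar) / (s * sqrt(1 + 1/n)) under
    i.i.d. N(mu, sigma^2). If T is ancillary, its distribution should not
    move when (mu, sigma) changes."""
    rng = random.Random(seed)
    draws = []
    for _ in range(reps):
        sample = [rng.gauss(mu, sigma) for _ in range(n)]
        ybar = statistics.fmean(sample)
        s = statistics.stdev(sample)
        y_new = rng.gauss(mu, sigma)
        draws.append((y_new - ybar) / (s * math.sqrt(1 + 1 / n)))
    return sorted(draws)

a = simulate_pivot(0.0, 1.0)
b = simulate_pivot(50.0, 9.0, seed=1)
median_gap = abs(a[len(a) // 2] - b[len(b) // 2])
```

The two empirical distributions agree (both are Student-t with n-1 degrees of freedom), which is exactly the ancillarity property the structural pivot has and, per the paper, the classical invariant pivot lacks.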
19.
Patrick Minford, Michael Wickens & Yongdeng Xu, Oxford Bulletin of Economics and Statistics, 2019, 81(1): 178-194
We propose a way of testing a subset of equations of a DSGE model. The test draws on statistical inference for limited information models and the use of indirect inference to test DSGE models. Using the numerical small sample distribution of our test for two subsets of equations of the Smets–Wouters model we show that the test has accurate size and good power in small samples, and better power than using asymptotic distribution theory. In a test of the Smets–Wouters model on US Great Moderation data, we reject the specification of the wage‐price but not the expenditure sector. This points to the wage‐price sector as the source of overall model rejection.
20.
Efthymios G. Tsionas, Journal of Applied Econometrics, 2006, 21(5): 669-676
An important issue in models of technical efficiency measurement concerns the temporal behaviour of inefficiency. Consideration of dynamic models is necessary but inference in such models is complicated. In this paper we propose a stochastic frontier model that allows for technical inefficiency effects and dynamic technical inefficiency, and use Bayesian inference procedures organized around data augmentation techniques to provide inferences. Also provided are firm‐specific efficiency measures. The new methods are applied to a panel of large US commercial banks over the period 1989–2000. Copyright © 2006 John Wiley & Sons, Ltd.