Similar Literature
Found 20 similar documents (search time: 31 ms)
1.
This research sought to identify and group the external motivators that encourage individuals in emerging countries to donate money and/or goods. To that end, 46 external variables that motivate an individual to donate were identified in the literature. They were grouped by similarity into five external donation-motivating factors (environmental and/or political motivation, the cause or circumstances of the donation, the organisation's characteristics, influence from third parties, and personal rewards), which resulted in a proposed donation model. The model was supported by semi-structured interviews with 22 individuals who frequently donate money and/or goods. The results supported the existence of the five proposed factors, and three new variables were identified: a "lack of government support," "service for the donor," and "donation tuition with low value."

2.
Multinomial logit and nested logit models of mode choice in travel to work and housing location choice are estimated from 1970 U.S. census data aggregated to small zones of the Chicago SMSA. The estimated models are then used to derive the "housing rent," "travel time," and "travel cost" elasticities of location demand. The effects of sampling variation, sample size, attribute inclusion, model specification, and estimation method on the estimated elasticities are evaluated and found to be important. The elasticities are also compared and found to agree with those obtained from other discrete choice models and, in the case of "housing rent," with estimates obtained from models based on other theoretical structures.
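
As a hedged illustration of the elasticity calculation described above: for a multinomial logit, the own point elasticity of choice probability P_j with respect to attribute x_jk is beta_k * x_jk * (1 - P_j). The coefficients and attribute values below are invented for illustration and are not the study's estimates.

```python
import numpy as np

# Illustrative multinomial logit elasticities (not the study's data or estimates).
# Utilities: V_j = beta' x_j; choice probabilities via the logit formula.
beta = np.array([-0.05, -0.10])            # hypothetical coefficients: travel time, travel cost
X = np.array([[30.0, 2.5],                 # alternative 1: time (min), cost ($)
              [45.0, 1.0],                 # alternative 2
              [20.0, 4.0]])                # alternative 3

V = X @ beta
P = np.exp(V) / np.exp(V).sum()            # choice probabilities

# Own (direct) point elasticity of P_j w.r.t. attribute k of alternative j:
# E_jk = beta_k * x_jk * (1 - P_j)
elasticities = beta * X * (1 - P)[:, None]
print(P.round(3))
print(elasticities.round(3))               # "travel time" and "travel cost" elasticities
```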

3.
This paper considers factor estimation from heterogeneous data, where some of the variables—the relevant ones—are informative for estimating the factors, and others—the irrelevant ones—are not. We estimate the factor model within a Bayesian framework, specifying a sparse prior distribution for the factor loadings. Based on identified posterior factor loading estimates, we provide alternative methods to identify relevant and irrelevant variables. Simulations show that both types of variables are identified quite accurately. Empirical estimates for a large multi-country GDP dataset and a disaggregated inflation dataset for the USA show that a considerable share of variables is irrelevant for factor estimation.
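
The paper's estimator is Bayesian with a sparse prior on the loadings; as a rough frequentist analogue only, the sketch below uses l1-penalised sparse PCA and flags variables whose loadings are zero on every factor as irrelevant. All data are simulated.

```python
import numpy as np
from sklearn.decomposition import SparsePCA

# Rough frequentist analogue of the paper's idea (the paper itself is Bayesian):
# estimate factors with an l1-penalised loading matrix and flag variables whose
# loadings are shrunk to zero on every factor as "irrelevant".
rng = np.random.default_rng(0)
n_obs, n_rel, n_irr, n_fac = 200, 20, 10, 2

F = rng.standard_normal((n_obs, n_fac))                  # latent factors
L = rng.standard_normal((n_rel, n_fac))                  # loadings of relevant variables
X_rel = F @ L.T + 0.3 * rng.standard_normal((n_obs, n_rel))
X_irr = rng.standard_normal((n_obs, n_irr))              # pure noise: irrelevant variables
X = np.hstack([X_rel, X_irr])

spca = SparsePCA(n_components=n_fac, alpha=2.0, random_state=0).fit(X)
irrelevant = np.where(np.all(spca.components_ == 0, axis=0))[0]
print("flagged as irrelevant:", irrelevant)              # ideally indices 20..29
```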

4.
This paper discusses pooling versus model selection for nowcasting with large datasets in the presence of model uncertainty. In practice, nowcasting a low-frequency variable with a large number of high-frequency indicators should account for at least two data irregularities: (i) unbalanced data with missing observations at the end of the sample due to publication delays; and (ii) different sampling frequencies of the data. Two model classes suited in this context are factor models based on large datasets and mixed-data sampling (MIDAS) regressions with few predictors. The specification of these models requires several choices related to, amongst other things, the factor estimation method and the number of factors, lag length and indicator selection. Thus there are many sources of misspecification when selecting a particular model, and an alternative would be pooling over a large set of different model specifications. We evaluate the relative performance of pooling and model selection for nowcasting quarterly GDP for six large industrialized countries. We find that the nowcast performance of single models varies considerably over time, in line with the forecasting literature. Model selection based on sequential application of information criteria can outperform benchmarks. However, the results highly depend on the selection method chosen. In contrast, pooling of nowcast models provides an overall very stable nowcast performance over time.
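
A minimal sketch of the pooling-versus-selection comparison, assuming simulated stand-ins for the competing model specifications (no factor or MIDAS machinery is reproduced):

```python
import numpy as np

# "Nowcasts" below are simulated stand-ins for the many factor/MIDAS variants.
rng = np.random.default_rng(1)
T, n_models = 60, 25
truth = rng.standard_normal(T)

# Each model = truth + its own bias and noise; quality varies across models.
bias = rng.normal(0, 0.3, n_models)
nowcasts = truth[:, None] + bias + rng.normal(0, 0.5, (T, n_models))

pooled = nowcasts.mean(axis=1)                           # equal-weight pooling
rmse = lambda e: np.sqrt(np.mean(e ** 2))

# Recursive selection: at each t, pick the model with the best past RMSE.
selected = np.empty(T)
selected[0] = nowcasts[0, 0]
for t in range(1, T):
    past_err = nowcasts[:t] - truth[:t, None]
    best = np.argmin(np.sqrt(np.mean(past_err ** 2, axis=0)))
    selected[t] = nowcasts[t, best]

print("pooled RMSE:  ", rmse(pooled - truth))
print("selected RMSE:", rmse(selected - truth))
```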

5.
We study Ackerberg, Caves, and Frazer's (Econometrica, 2015, 83, 2411–2451; hereafter ACF) production function estimation method using Monte Carlo simulations. First, we replicate their results by following their procedure to confirm the existence of a spurious minimum in the estimation, as noted by ACF. In the population, or when sample sizes are sufficiently large, this "global" identification problem may not be a concern because the spurious minimum occurs only at extreme values of capital and labor coefficients. However, in finite samples, their estimator can produce estimates that may not be clearly distinguishable from the spurious ones. In our second experiment, we modify the ACF procedure and show that robust estimates can be obtained using additional lagged instruments or sequential search. We also provide some arguments for why such modifications help in the ACF setting.

6.
Sequential estimation problems for the mean parameter of an exponential distribution have received much attention over the years. Purely sequential and accelerated sequential estimators and their asymptotic second-order characteristics have been laid out in the existing literature, both for minimum risk point estimation and for bounded-length confidence interval estimation of the mean parameter. Having obtained a data set from such sequentially designed experiments, this paper investigates estimation problems for the associated reliability function. Second-order approximations are provided for the bias and mean squared error of the proposed estimator of the reliability function, first under a general setup. An ad hoc bias-corrected version is also introduced. The proposed estimator is then investigated further under some specific sequential sampling strategies already available in the literature. Finally, simulation results are presented comparing the proposed estimators of the reliability function for moderate sample sizes and various sequential sampling strategies.
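
For the plug-in estimator of the exponential reliability function R(t) = exp(-t/theta), a hedged sketch follows: a jackknife correction stands in for the paper's second-order analytic bias correction, and a fixed-size sample replaces the sequential designs the paper actually studies.

```python
import numpy as np

# Plug-in estimator of R(t) = exp(-t/theta) with a jackknife bias correction
# standing in for the paper's second-order correction (not reproduced here).
rng = np.random.default_rng(2)
theta, t0 = 2.0, 1.5
x = rng.exponential(theta, size=50)        # illustration only: fixed-size sample

R_hat = np.exp(-t0 / x.mean())             # plug-in (biased in finite samples)

# Jackknife: leave-one-out estimates give a first-order bias estimate.
loo = np.array([np.exp(-t0 / np.delete(x, i).mean()) for i in range(len(x))])
R_jack = len(x) * R_hat - (len(x) - 1) * loo.mean()

print("true R(t):", np.exp(-t0 / theta))
print("plug-in:  ", R_hat, " bias-corrected:", R_jack)
```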

7.
8.
In the behavioral sciences, response variables are often non-continuous, ordinal variables. Conventional structural equation models (SEMs) have been generalized to accommodate ordinal responses. In this study, three different estimation methods were applied to real data with ordinal variables. The empirical results obtained from these estimation methods on a large educational dataset were examined and compared with recent simulation results. The findings show that even when a very large sample is available, model estimates and fit indices for ordinal data are distorted by unsuitable estimation methods; we therefore conclude that an asymptotically distribution-free estimation method specialized for ordinal variables is the more appropriate way to model them.

9.
Research objective: To solve the problem of simultaneously estimating and selecting the fixed-effect and random-effect coefficients in random-effects quantile regression models. Research methods: An adaptive Lasso penalty is imposed on the fixed-effect and random-effect coefficients simultaneously, and an alternating iterative algorithm is designed for parameter estimation. Research findings: The new method is not only highly robust to the distribution of the random errors but also performs well under models of different sparsity levels, especially in high-dimensional settings. Research innovations: The proposed method fully accounts for the influence of the random effects while selecting the important covariates in the model; the alternating iterative algorithm not only effectively resolves the difficulty of having to choose two penalty parameters but also converges quickly. Research value: The method provides practitioners with an effective modelling approach for the analysis of panel and longitudinal data.
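
A minimal sketch of the adaptive-Lasso step for the fixed-effect coefficients only (the paper's alternating algorithm additionally penalises and updates the random-effect coefficients). It exploits the fact that sklearn's QuantileRegressor minimises pinball loss plus an l1 penalty, so adaptive weights can be absorbed by rescaling the columns:

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor, LinearRegression

# Adaptive Lasso for quantile regression, fixed-effects part only; simulated data.
rng = np.random.default_rng(3)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.array([1.5, -1.0, 0.8] + [0.0] * (p - 3))
y = X @ beta_true + rng.standard_normal(n)

# Step 1: initial (unpenalised) estimate supplies the adaptive weights.
beta_init = LinearRegression().fit(X, y).coef_
w = np.abs(beta_init) + 1e-8

# Step 2: adaptive Lasso = ordinary Lasso on rescaled columns X_j * |beta_init_j|;
# QuantileRegressor minimises pinball loss + alpha * ||coef||_1.
qr = QuantileRegressor(quantile=0.5, alpha=0.05, solver="highs").fit(X * w, y)
beta_hat = qr.coef_ * w                    # map back to the original scale

print(beta_hat.round(2))                   # zeros ideally recovered for indices 3..9
```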

10.
This paper studies the estimation of the distribution of non-sequential search costs. We show that the search cost distribution is identified by combining data from multiple markets with common search technology but varying consumer valuations, firms' costs, and numbers of competitors. To exploit such data optimally, we provide a new method based on semi-nonparametric estimation. We apply our method to a dataset of online prices for memory chips and find that the search cost density is essentially bimodal, such that a large fraction of consumers searches very little, whereas a smaller fraction searches a relatively large number of stores.

11.
This paper considers a panel data model with time-varying individual effects. The data are assumed to contain a large number of cross-sectional units repeatedly observed over a fixed number of time periods. The model has a feature of the fixed-effects model in that the effects are assumed to be correlated with the regressors. The unobservable individual effects are assumed to have a factor structure. For consistent estimation of the model, it is important to estimate the true number of individual effects. We propose a generalized method of moments procedure by which both the number of individual effects and the regression coefficients can be consistently estimated. Some important identification issues are also discussed. Our simulation results indicate that the proposed methods produce reliable estimates.

12.
To characterize heteroskedasticity, nonlinearity, and asymmetry in tail risk, this study investigates a class of conditional (dynamic) expectile models with partially varying coefficients, in which some coefficients are allowed to be constants while others are allowed to be unknown functions of random variables. A three-stage estimation procedure is proposed to estimate both the parametric constant coefficients and the nonparametric functional coefficients. Their asymptotic properties are investigated in a time series context, together with a new, simple, and easily implemented goodness-of-fit test and a bandwidth selector based on newly defined cross-validatory estimation of the expected forecasting expectile errors. The proposed methodology is data-analytic and flexible enough to analyze complex multivariate nonlinear structures without suffering from the curse of dimensionality. Finally, the proposed model is illustrated with simulated data and applied to the daily S&P500 return series.
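
A bare-bones version of the expectile building block, assuming a purely linear model: expectiles can be estimated by iteratively reweighted least squares with asymmetric weights tau and 1 - tau. The varying-coefficient extension in the paper is not attempted here.

```python
import numpy as np

# Linear expectile regression via iteratively reweighted (asymmetric) least squares.
def expectile_reg(X, y, tau=0.9, n_iter=50):
    X1 = np.column_stack([np.ones(len(y)), X])           # add intercept
    beta = np.linalg.lstsq(X1, y, rcond=None)[0]         # OLS start
    for _ in range(n_iter):
        w = np.where(y - X1 @ beta > 0, tau, 1 - tau)    # asymmetric weights
        WX = X1 * w[:, None]
        beta = np.linalg.solve(X1.T @ WX, WX.T @ y)      # weighted LS update
    return beta

rng = np.random.default_rng(4)
x = rng.standard_normal(500)
y = 0.5 * x + (1 + 0.5 * np.abs(x)) * rng.standard_normal(500)  # heteroskedastic
print(expectile_reg(x[:, None], y, tau=0.9))             # intercept, slope
```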

13.
We consider nonlinear heteroscedastic single-index models in which the mean function is a parametric nonlinear model and the variance function depends on a single-index structure. We develop an efficient weighted least squares estimation method for the parameters in the mean function, and we propose a "delete-one-component" estimator for the single index in the variance function based on absolute residuals. Asymptotic properties of the estimators are also investigated. Estimation methods for the error distribution based on the classical empirical distribution function and on an empirical likelihood method are discussed; the empirical likelihood method allows assumptions on the error distribution to be incorporated into the estimation. Simulations illustrate the results, and a real chemical data set is analyzed to demonstrate the performance of the proposed estimators.
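
A hedged two-step sketch of the weighted-least-squares idea, with an invented mean function and a linear proxy for the variance index; the paper's "delete-one-component" estimator and efficiency theory are not reproduced:

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-step WLS sketch: fit the parametric nonlinear mean, model absolute
# residuals through a linear index, then refit with weights. All names invented.
rng = np.random.default_rng(5)
n = 400
X = rng.uniform(0.5, 2.0, (n, 2))

mean_fn = lambda X, a, b: a * np.exp(b * X[:, 0])        # hypothetical mean model
index = X @ np.array([1.0, 0.5])                         # true variance index
y = mean_fn(X, 2.0, 0.8) + 0.3 * index * rng.standard_normal(n)

# Step 1: unweighted nonlinear least squares for the mean parameters.
theta0, _ = curve_fit(mean_fn, X, y, p0=[1.0, 0.5])
abs_res = np.abs(y - mean_fn(X, *theta0))

# Step 2: estimate the scale from absolute residuals (linear proxy), then
# weighted NLS; curve_fit's sigma implies weights proportional to 1/scale^2.
g = np.linalg.lstsq(X, abs_res, rcond=None)[0]
scale = np.clip(X @ g, 1e-3, None)
theta1, _ = curve_fit(mean_fn, X, y, p0=theta0, sigma=scale)
print("unweighted:", theta0.round(3), "weighted:", theta1.round(3))
```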

14.
A number of recent studies in the economics literature have focused on the usefulness of factor models in the context of prediction using "big data" (see Bai and Ng, 2008; Dufour and Stevanovic, 2010; Forni, Hallin, Lippi, and Reichlin, 2000; Forni et al., 2005; Kim and Swanson, 2014a; Stock and Watson, 2002b, 2006, 2012, and the references cited therein). We add to this literature by analyzing whether "big data" are useful for modelling low-frequency macroeconomic variables, such as unemployment, inflation and GDP. In particular, we analyze the predictive benefits associated with the use of principal component analysis (PCA), independent component analysis (ICA), and sparse principal component analysis (SPCA). We also evaluate machine learning, variable selection and shrinkage methods, including bagging, boosting, ridge regression, least angle regression, the elastic net, and the non-negative garrote. Our approach is to carry out a forecasting "horse-race" using prediction models constructed from a variety of model specification approaches, factor estimation methods, and data windowing methods, in the context of predicting 11 macroeconomic variables relevant to monetary policy assessment. In many instances, we find that many of our benchmark models, including autoregressive (AR) models, AR models with exogenous variables, and (Bayesian) model averaging, do not dominate specifications based on factor-type dimension reduction combined with various machine learning, variable selection, and shrinkage methods (called "combination" models). We find that forecast combination methods are mean square forecast error (MSFE) "best" for only three variables out of 11 for a forecast horizon of h=1, and for four variables when h=3 or 12. In addition, non-PCA type factor estimation methods yield MSFE-best predictions for nine variables out of 11 for h=1, although PCA dominates at longer horizons. Interestingly, we also find evidence of the usefulness of combination models for approximately half of our variables when h>1. Most importantly, we present strong new evidence of the usefulness of factor-based dimension reduction when utilizing "big data" for macroeconometric forecasting.
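
One cell of such a "horse-race", sketched with simulated data: PCA dimension reduction of a large predictor panel combined with ridge shrinkage in the forecast equation. Nothing below reflects the paper's dataset, tuning, or results.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

# PCA factors extracted from a big predictor panel, then a shrunken forecast
# equation for y_{t+h}; everything is simulated for illustration.
rng = np.random.default_rng(6)
T, N, h, n_factors = 240, 100, 1, 5

F = rng.standard_normal((T, n_factors))                  # latent macro factors
big_X = F @ rng.standard_normal((n_factors, N)) + rng.standard_normal((T, N))
y = F @ np.array([0.8, -0.5, 0.3, 0.0, 0.0]) + 0.5 * rng.standard_normal(T)

factors = PCA(n_components=n_factors).fit_transform(big_X)
model = Ridge(alpha=1.0).fit(factors[:-h], y[h:])        # y_{t+h} on factors_t

forecast = model.predict(factors[-1:])                   # h-step-ahead nowcast
print("h-step-ahead forecast:", forecast)
```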

15.
We analyze the impact of sentiment and attention variables on stock market volatility using a novel and extensive dataset that combines social media, news articles, information consumption, and search engine data. We apply a state-of-the-art sentiment classification technique to investigate whether sentiment and attention measures contain additional predictive power for realized volatility when controlling for a wide range of economic and financial predictors. Using a penalized regression framework, we identify the most relevant variables to be investors' attention, as measured by the number of Google searches on financial keywords (e.g. "financial market" and "stock market"), and the daily volume of company-specific short messages posted on StockTwits. In addition, our study shows that attention and sentiment variables can improve volatility forecasts significantly, although the magnitudes of the improvements are relatively small from an economic point of view.
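
A minimal sketch of the penalised-regression screening step, assuming simulated series and invented predictor names (the paper's sentiment classifier and data sources are not reproduced):

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Regress next-day realized volatility on lagged volatility plus sentiment and
# attention proxies; keep predictors with nonzero Lasso coefficients.
rng = np.random.default_rng(7)
T = 500
rv = np.abs(np.cumsum(rng.normal(0, 0.1, T))) + 0.5      # fake realized volatility
google = 0.6 * rv + rng.normal(0, 0.3, T)                # "searches: stock market"
stocktwits = 0.4 * rv + rng.normal(0, 0.3, T)            # message-volume proxy
noise = rng.normal(0, 1, (T, 5))                         # irrelevant controls

X = np.column_stack([rv, google, stocktwits, noise])[:-1]
y = rv[1:]                                               # next-day target

lasso = LassoCV(cv=5).fit(X, y)
names = ["lagged_rv", "google", "stocktwits"] + [f"noise{i}" for i in range(5)]
print({n: round(c, 3) for n, c in zip(names, lasso.coef_) if c != 0})
```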

16.
This study examines the role of financial ratios in predicting companies' default risk using the quantile hazard model (QHM) approach and compares its results to the discrete hazard model (DHM). We adopt the LASSO method to select essential predictors among the variables mentioned in the literature. We demonstrate the advantage of the proposed QHM by showing that the effects of the financial ratios differ across quantile levels. While the DHM confirms only the adverse effects of "stock return volatilities" and "total liabilities" and the positive effects of "stock price," "stock excess return," and "profitability" on businesses, at high quantile levels the QHM adds "cash and short-term investment to total assets," "market capitalization," and "current liabilities ratio" to the list of factors that influence default. More interestingly, "cash and short-term investment to total assets" and "market capitalization" switch signs at high quantile levels, showing that they influence companies with different risk levels differently. We also find evidence that default probability differs across industrial sectors. Lastly, the proposed QHM empirically demonstrates improved out-of-sample forecasting performance.
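
As a hedged sketch of the DHM-plus-LASSO part only (the quantile hazard extension is not attempted here): a discrete hazard model can be approximated by a logistic regression of a default indicator on financial ratios, with an l1 penalty performing the variable screening. Data are simulated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Discrete hazard approximated as logistic regression on firm-period records;
# l1 penalty mimics the LASSO screening step. All data are simulated.
rng = np.random.default_rng(8)
n = 2000
ratios = rng.standard_normal((n, 6))   # e.g. volatility, liabilities, price, ...
logit = -3.0 + 1.2 * ratios[:, 0] + 0.9 * ratios[:, 1] - 0.8 * ratios[:, 2]
default = rng.binomial(1, 1 / (1 + np.exp(-logit)))

dhm = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(ratios, default)
print("selected ratios:", np.nonzero(dhm.coef_[0])[0], dhm.coef_[0].round(2))
```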

17.
A spatial vector autoregressive model (SpVAR) is defined as a VAR which includes spatial as well as temporal lags among a vector of stationary state variables. SpVARs may contain disturbances that are spatially as well as temporally correlated. Although the structural parameters are not fully identified in SpVARs, contemporaneous spatial lag coefficients may be identified by weakly exogenous state variables. Dynamic spatial panel data econometrics is used to estimate SpVARs. The incidental parameter problem is handled by bias correction rather than more popular alternatives such as the generalised method of moments (GMM). The interaction between temporal and spatial stationarity is discussed. The impulse responses for SpVARs are derived, which naturally depend upon the temporal and spatial dynamics of the model. We provide an empirical illustration using annual spatial panel data for Israel. The estimated SpVAR is used to calculate impulse responses between variables, over time, and across space. Finally, weakly exogenous instrumental variables are used to identify contemporaneous spatial lag coefficients.
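
A bare-bones numerical sketch of the SpVAR(1) regressor construction, assuming a made-up row-normalised weight matrix W; bias correction, identification of the contemporaneous spatial lag, and fixed effects are all omitted:

```python
import numpy as np

# Stack the temporal lag y_{t-1} and the spatio-temporal lag W y_{t-1},
# then estimate one SpVAR(1) equation by least squares.
rng = np.random.default_rng(9)
T, n_regions = 40, 5

W = rng.random((n_regions, n_regions)); np.fill_diagonal(W, 0)
W /= W.sum(axis=1, keepdims=True)                        # row-normalise weights

Y = np.zeros((T, n_regions))
for t in range(1, T):                                    # simulate the process
    Y[t] = 0.5 * Y[t-1] + 0.3 * (W @ Y[t-1]) + rng.normal(0, 0.5, n_regions)

# Region-time observations: y_it on [y_{i,t-1}, (Wy)_{i,t-1}].
y = Y[1:].ravel()
X = np.column_stack([Y[:-1].ravel(), (Y[:-1] @ W.T).ravel()])
coef = np.linalg.lstsq(X, y, rcond=None)[0]
print("temporal lag, spatial lag:", coef.round(2))       # roughly (0.5, 0.3)
```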

18.
A.R. Banai-Kashani, Socio, 1984, 18(3): 159-166
Several theories, concepts, methods, or alternatively "paradigms" have been suggested for the explanation and prediction of the location behavior of urban households. Increasingly, however, "behavioral" approaches to explaining the dimensions of "choice" and/or exploring alternative hypotheses have proven cumbersome within the "mechanistic" paradigms of the social system and its related subsystems. An alternative paradigm, the Analytic Hierarchy Process (AHP), is proposed to explore alternative structural specification hypotheses (sequential versus simultaneous) on the grouping and changing relative importance of "instrumental" and "non-instrumental" factors affecting location decision-making. AHP estimates of the "locational shares" of an urban (corridor) zonal population provide a paradigmatic basis for behavioral, vis-à-vis environmental, explanation of location decision processes in a methodologically efficient, robust, and theoretically inclusive framework of hierarchy systems. This paradigm is proposed for locational analysis requiring an effective integration of multilevel, environmental (contextual) and behavioral measures of relative importance with limited data, and for multidimensional problems in planning and policy-making that require integrating positive with normative analysis of systems.
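
A short sketch of the standard AHP computation the abstract builds on: priorities are the normalised principal eigenvector of a pairwise comparison matrix, with a consistency ratio as a sanity check. The 3x3 matrix below is invented for illustration.

```python
import numpy as np

# AHP priorities from a pairwise comparison matrix (made-up 3-factor example).
A = np.array([[1.0, 3.0, 5.0],             # factor 1 vs 2 vs 3
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                   # normalised priorities

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)       # consistency index
cr = ci / 0.58                             # 0.58 = Saaty random index for n = 3
print("priorities:", weights.round(3), " consistency ratio:", round(cr, 3))
```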

19.
Journal of Econometrics, 2002, 109(2): 341-363
Despite the commonly held belief that aggregate data display short-run comovement, there has been little discussion about the econometric consequences of this feature of the data. We use exhaustive Monte Carlo simulations to investigate the importance of restrictions implied by common cyclical features for estimates and forecasts based on vector autoregressive models. First, we show that the "best" empirical model developed without common cycle restrictions need not nest the "best" model developed with those restrictions. This is due to possible differences in the lag lengths chosen by model selection criteria for the two alternative models. Second, we show that the costs of ignoring common cyclical features in vector autoregressive modelling can be high, both in terms of forecast accuracy and efficient estimation of variance decomposition coefficients. Third, we find that the Hannan–Quinn criterion performs best among model selection criteria in simultaneously selecting the lag length and rank of vector autoregressions.
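
A minimal example of lag selection by the Hannan–Quinn criterion, using statsmodels on a simulated bivariate VAR(2) (not the paper's Monte Carlo design):

```python
import numpy as np
from statsmodels.tsa.api import VAR

# Simulate a bivariate VAR(2), then compare information criteria for lag choice.
rng = np.random.default_rng(10)
T = 300
y = np.zeros((T, 2))
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[0.2, 0.0], [0.1, 0.2]])
for t in range(2, T):
    y[t] = A1 @ y[t-1] + A2 @ y[t-2] + rng.normal(0, 0.5, 2)

sel = VAR(y).select_order(maxlags=8)
print(sel.summary())                        # AIC, BIC, FPE and HQIC side by side
print("HQ-selected lag:", sel.selected_orders["hqic"])
```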

20.
Sequential methods have been used in many applications, especially when fixed-sample procedures are not possible and/or when "early stopping" of sampling is beneficial. At the same time, the issue of how to make correct inferences in the presence of measurement errors has drawn considerable attention from statisticians. This paper studies sequential estimation of generalized linear models with measurement errors, in both the adaptive and the fixed design cases. The proposed sequential procedure is proved to be asymptotically consistent and efficient in the sense of Chow and Robbins [Ann Math Stat 36(2):457–462, 1965] when the measurement errors decay gradually as the number of sequentially selected design points increases. This assumption is useful in sequentially designed experiments and can also be fulfilled when replicate measurements are available. Numerical studies based on a Rasch model and a logistic regression model are conducted to evaluate the performance of the proposed procedure.
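
A sketch of a Chow–Robbins-style stopping rule of the kind the consistency result refers to, assuming i.i.d. normal draws and no measurement error (the paper's GLM and measurement-error structure are omitted):

```python
import numpy as np

# Sequential sampling for a fixed-width confidence interval: keep drawing
# observations until n >= (z * s_n / d)^2, the Chow-Robbins stopping criterion.
rng = np.random.default_rng(11)
z, d = 1.96, 0.2                              # 95% CI of half-width d
sample = list(rng.normal(5.0, 2.0, size=10))  # pilot sample

while True:
    n, s = len(sample), np.std(sample, ddof=1)
    if n >= (z * s / d) ** 2:                 # stopping rule satisfied
        break
    sample.append(rng.normal(5.0, 2.0))       # draw one more observation

print(f"stopped at n = {n}, mean = {np.mean(sample):.3f} +/- {d}")
```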
