Similar Articles
A total of 20 similar articles were found (search took 515 ms).
1.
Statistical agencies often release a masked or perturbed version of survey data to protect the confidentiality of respondents' information. Ideally, a perturbation procedure should provide confidentiality protection without much loss of data quality, so that the released data may practically be treated as original data for making inferences. One major objective is to control the risk of correctly identifying any respondent's records in released data, by matching the values of some identifying or key variables. For categorical key variables, we propose a new approach to measuring identification risk and setting strict disclosure control goals. The general idea is to ensure that the probability of correctly identifying any respondent or surveyed unit is at most ξ, which is pre‐specified. Then, we develop an unbiased post‐randomisation procedure that achieves this goal for ξ>1/3. The procedure allows substantial control over possible changes to the original data, and the variance it induces is of a lower order of magnitude than sampling variance. We apply the procedure to a real data set, where it performs consistently with the theoretical results and, quite importantly, shows very little data quality loss.
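For intuition, here is a minimal sketch of a post-randomisation (PRAM-style) step for a categorical key variable. The transition matrix `P`, the data, and all settings are hypothetical; the paper's procedure additionally constrains the matrix so that released category frequencies are unbiased for the original ones, which is not imposed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-category key variable observed for 10 respondents.
x = rng.integers(0, 3, size=10)

# Hypothetical PRAM transition matrix: P[i, j] = Pr(released = j | original = i).
# Rows sum to one; large diagonal entries leave most records unchanged.
P = np.array([[0.90, 0.05, 0.05],
              [0.05, 0.90, 0.05],
              [0.05, 0.05, 0.90]])

# Independently re-draw each record's category from the row of P it selects.
# The paper's unbiased procedure also calibrates P to the data (not shown).
released = np.array([rng.choice(3, p=P[v]) for v in x])
print(np.c_[x, released])
```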

2.
Price indices for heterogeneous goods such as real estate or fine art constitute crucial information for institutional or private investors considering alternative investment decisions in times of financial market turmoil. Classical mean‐variance analysis of alternative investments has been hampered by the lack of a systematic treatment of volatility in these markets. In this paper we propose a hedonic regression framework which explicitly defines an underlying stochastic process for the price index, allowing us to treat the volatility parameter as the object of interest. The model can be estimated by maximum likelihood in combination with the Kalman filter. We derive theoretical properties of the volatility estimator and show that it outperforms the standard estimator. We show that extensions to allow for time‐varying volatility are straightforward using a local‐likelihood approach. In an application to a large data set of international blue chip artists, we show that volatility of the art market, although generally lower than that of financial markets, rose after the financial crisis of 2008–09 but decreased sharply during the recent debt crisis. Copyright © 2013 John Wiley & Sons, Ltd.

3.
This paper compares two methods for undertaking likelihood‐based inference in dynamic equilibrium economies: a sequential Monte Carlo filter and the Kalman filter. The sequential Monte Carlo filter exploits the nonlinear structure of the economy and evaluates the likelihood function of the model by simulation methods. The Kalman filter estimates a linearization of the economy around the steady state. We report two main results. First, both for simulated and for real data, the sequential Monte Carlo filter delivers a substantially better fit of the model to the data as measured by the marginal likelihood. This is true even for a nearly linear case. Second, the differences in terms of point estimates, although relatively small in absolute values, have important effects on the moments of the model. We conclude that the nonlinear filter is a superior procedure for taking models to the data. Copyright © 2005 John Wiley & Sons, Ltd.
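For context, here is a minimal sketch of a bootstrap-type sequential Monte Carlo (particle) filter that evaluates a state-space model's log-likelihood by simulation. The model callbacks and the toy example are placeholders, not the dynamic equilibrium economy of the paper.

```python
import numpy as np

def bootstrap_filter_loglik(y, n_particles, init, transition, loglik_obs, rng):
    """Estimate the log-likelihood of y under a state-space model with a
    bootstrap particle filter: simulate particles through the transition
    density and weight them by the observation density."""
    particles = init(n_particles, rng)
    loglik = 0.0
    for t in range(len(y)):
        particles = transition(particles, rng)        # draw x_t | x_{t-1}
        logw = loglik_obs(y[t], particles)            # log p(y_t | x_t) per particle
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())                # log p(y_t | y_{1:t-1}) estimate
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        particles = particles[idx]                    # multinomial resampling
    return loglik

# Toy example: Gaussian AR(1) state observed with noise (all settings made up).
rng = np.random.default_rng(1)
y = rng.normal(size=50)
ll = bootstrap_filter_loglik(
    y, 1000,
    init=lambda n, r: r.normal(size=n),
    transition=lambda x, r: 0.9 * x + r.normal(scale=0.5, size=x.size),
    loglik_obs=lambda yt, x: -0.5 * (np.log(2 * np.pi) + (yt - x) ** 2),
    rng=rng)
print(ll)
```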

4.
With cointegration tests often being oversized under time‐varying error variance, it is possible, if not likely, to confuse error variance non‐stationarity with cointegration. This paper takes an instrumental variable (IV) approach to establish individual‐unit test statistics for no cointegration that are robust to variance non‐stationarity. The sign of a fitted departure from long‐run equilibrium is used as an instrument when estimating an error‐correction model. The resulting IV‐based test is shown to follow a chi‐square limiting null distribution irrespective of the variance pattern of the data‐generating process. In spite of this, the test proposed here has, unlike previous work relying on instrumental variables, competitive local power against sequences of local alternatives in 1/T‐neighbourhoods of the null. The standard limiting null distribution motivates using the single‐unit tests in a multiple‐testing approach for cointegration in multi‐country data sets, combining P‐values from individual units. Simulations suggest good performance of the single‐unit and multiple testing procedures under various plausible designs of cross‐sectional correlation and cross‐unit cointegration in the data. An application to the equilibrium relationship between short‐ and long‐term interest rates illustrates the dramatic differences between results of robust and non‐robust tests.
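To make the construction concrete, here is a rough sketch of the sign-instrument idea under a no-cointegration data-generating process. The statistic below is a bare-bones just-identified IV t-ratio of my own arrangement; the paper's test adds lag augmentation, deterministic terms, and refined scaling not reproduced here, and all simulated quantities are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 200
x = np.cumsum(rng.normal(size=T))                 # I(1) regressor
y = x + np.cumsum(rng.normal(size=T))             # no cointegration in this DGP

# Fitted departure from long-run equilibrium: residual of y on (1, x).
Z = np.column_stack([np.ones(T), x])
uhat = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]

dy, ulag = np.diff(y), uhat[:-1]
s = np.sign(ulag)                                  # the sign instrument

# IV estimate of the error-correction coefficient with a
# heteroscedasticity-robust standard error.
rho_iv = (s @ dy) / (s @ ulag)
resid = dy - rho_iv * ulag
se = np.sqrt(resid**2 @ s**2) / abs(s @ ulag)
t_iv = rho_iv / se
print(t_iv**2)   # compare with a chi-square(1) critical value
```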

5.
This paper presents a model for the heterogeneity and dynamics of the conditional mean and conditional variance of individual wages. A bias‐corrected likelihood approach, which reduces the estimation bias to a term of order 1/T², is used for estimation and inference. The small‐sample performance of the proposed estimator is investigated in a Monte Carlo study. The simulation results show that the bias of the maximum likelihood estimator is substantially corrected for designs calibrated to the data used in the empirical analysis, drawn from the PSID. The empirical results show that it is important to account for individual unobserved heterogeneity and dynamics in the variance, and that the latter is driven by job mobility. The model also explains the non‐normality observed in log‐wage data. Copyright © 2010 John Wiley & Sons, Ltd.

6.
The construction of an importance density for partially non‐Gaussian state space models is crucial when simulation methods are used for likelihood evaluation, signal extraction, and forecasting. The method of efficient importance sampling is successful in this respect, but we show that it can be implemented in a computationally more efficient manner using standard Kalman filter and smoothing methods. Efficient importance sampling is generally applicable for a wide range of models, but it is typically a custom‐built procedure. For the class of partially non‐Gaussian state space models, we present a general method for efficient importance sampling. Our novel method makes the efficient importance sampling methodology more accessible because it does not require the computation of a (possibly) complicated density kernel that needs to be tracked for each time period. The new method is illustrated for a stochastic volatility model with a Student's t distribution.

7.
A semiparametric estimator for binary‐outcome sample‐selection models is proposed that imposes only single index assumptions on the selection and outcome equations without specifying the error term distribution. I adopt the idea in Lewbel (2000) of using a ‘special regressor’ to transform the binary response Y so that the transformed Y becomes linear in the latent index, which then makes it possible to remove the selection correction term by differencing the transformed Y equation. There are various versions of the estimator, which perform differently, trading off bias against variance. A simulation study is conducted, and the estimators are then applied to US presidential election data from 2008 and 2012 to assess the impact of racial prejudice on the elections, as a black candidate was involved for the first time in US history.

8.
Survey Estimates by Calibration on Complex Auxiliary Information
In the last decade, calibration estimation has developed into an important field of research in survey sampling. Calibration is now an important methodological instrument in the production of statistics. Several national statistical agencies have developed software designed to compute calibrated weights based on auxiliary information available in population registers and other sources. This paper reviews some recent progress and offers some new perspectives. Calibration estimation can be used to advantage in a range of different survey conditions. This paper examines several situations, including estimation for domains in one‐phase sampling, estimation for two‐phase sampling, and estimation for two‐stage sampling with integrated weighting. Typical of those situations is complex auxiliary information, a term that we use for information made up of several components. An example occurs when a two‐stage sample survey has information both for units and for clusters of units, or when estimation for domains relies on information from different parts of the population. Complex auxiliary information opens up more than one way of computing the final calibrated weights to be used in estimation. They may be computed in a single step or in two or more successive steps. Depending on the approach, the resulting estimates do differ to some degree. All significant parts of the total information should be reflected in the final weights. The effectiveness of the complex information is mirrored by the variance of the resulting calibration estimator. Its exact variance is not presentable in simple form. Close approximation is possible via the corresponding linearized statistic. We define and use automated linearization as a shortcut in finding the linearized statistic. Its variance is easy to state, to interpret and to estimate. The variance components are expressed in terms of residuals, similar to those of standard regression theory. Visual inspection of the residuals reveals how the different components of the complex auxiliary information interact and work together toward reducing the variance.
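As a concrete illustration of single-step calibration with simple (rather than complex) auxiliary information, here is a minimal sketch using the chi-square distance, for which the calibrated weights have the closed form w_k = d_k(1 + x_k'λ). The auxiliaries, design weights, and population totals are made up; production software additionally handles bounds, nonresponse, and the multi-step cases discussed above.

```python
import numpy as np

def linear_calibration(d, x, totals):
    """Chi-square-distance calibration: adjust design weights d so the
    weighted totals of the auxiliary variables x equal the known
    population totals; closed form w_k = d_k * (1 + x_k' lam)."""
    M = (d[:, None] * x).T @ x                 # sum_k d_k x_k x_k'
    lam = np.linalg.solve(M, totals - d @ x)   # solve the calibration equations
    return d * (1.0 + x @ lam)

rng = np.random.default_rng(3)
n, N = 100, 5000
x = np.column_stack([np.ones(n), rng.gamma(2.0, size=n)])  # made-up auxiliaries
d = np.full(n, N / n)                                      # design weights
totals = np.array([N, 2.0 * N])                            # assumed known totals
w = linear_calibration(d, x, totals)
print(w @ x)   # reproduces `totals` exactly
```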

9.
Today, the prime aim of central banking is to achieve price stability and, to a lesser extent, output stability. To this end, central banks use various monetary policy rules. This paper intends to provide a broad survey of the literature on Taylor-type monetary policy rules with a time-varying parameter (TVP) specification. To include the TVP feature, some modification is made in the monetary transmission mechanism of Taylor-type monetary policy models to account for the changing risk preference of individuals. In line with this approach, we introduce an interest rate pass-through specification of the monetary transmission process in a general equilibrium model to account for the varying perceptions of risk by individuals. We include an application for Turkey and estimate the time-variable parameters of the model by employing a structural extended Kalman filter (EKF). The results indicate that the EKF performs better than the standard Kalman filter in estimating the reaction function of the central bank.

10.
In this paper we show how the Kalman filter, which is a recursive estimation procedure, can be applied to the standard linear regression model. The resulting "Kalman estimator" is compared with the classical least-squares estimator.
The applicability and (dis)advantages of the filter are illustrated by means of a case study consisting of two parts. In the first part we apply the filter to a regression model with constant parameters; in the second part the filter is applied to a regression model with time-varying stochastic parameters. The predictive power of various "Kalman predictors" is compared with that of "least-squares predictors" using Theil's prediction-error coefficient U.
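A minimal sketch of the "Kalman estimator" idea for the regression model: with zero state noise the recursion reproduces least squares (the constant-parameter first part of the case study), while a positive state variance yields the time-varying stochastic parameters of the second part. All numbers and settings are illustrative.

```python
import numpy as np

def kalman_regression(X, y, sigma2=1.0, prior_var=1e6, state_var=0.0):
    """Recursive (Kalman) estimation of y_t = x_t' beta_t + eps_t with
    beta_t = beta_{t-1} + v_t.  state_var = 0 gives the constant-parameter
    case, where the filter converges to least squares; state_var > 0
    yields time-varying stochastic parameters."""
    k = X.shape[1]
    beta = np.zeros(k)                # diffuse prior mean
    P = prior_var * np.eye(k)         # diffuse prior variance
    for x_t, y_t in zip(X, y):
        P = P + state_var * np.eye(k)           # prediction step (random walk)
        f = x_t @ P @ x_t + sigma2              # innovation variance
        K = P @ x_t / f                         # Kalman gain
        beta = beta + K * (y_t - x_t @ beta)    # update with prediction error
        P = P - np.outer(K, x_t @ P)
    return beta, P

rng = np.random.default_rng(4)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([2.0, -1.0]) + rng.normal(size=200)
print(kalman_regression(X, y)[0])               # ~ the least-squares estimate
print(np.linalg.lstsq(X, y, rcond=None)[0])
```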

11.
A simulation-based non-linear filter is developed for prediction and smoothing in non-linear and/or non-normal structural time-series models. Recursive algorithms for the weighting functions are derived by applying Monte Carlo integration. Through Monte Carlo experiments, it is shown that (1) for a small number of random draws (or nodes) our simulation-based density estimator using Monte Carlo integration (SDE) performs better than Kitagawa's numerical integration procedure (KNI), and (2) SDE and KNI give less biased parameter estimates than the extended Kalman filter (EKF). Finally, estimation of per capita final consumption data is presented as an application of the non-linear filtering problem.

12.
This paper considers the problem of forecasting realized variance measures. These measures are highly persistent estimates of the underlying integrated variance, but are also noisy. Bollerslev, Patton and Quaedvlieg (2016, Journal of Econometrics 192(1), 1–18) exploited this to extend the commonly used heterogeneous autoregressive (HAR) model by letting the model parameters vary over time depending on the estimated measurement error variances. We propose an alternative specification that allows the autoregressive parameters of HAR models to be driven by a latent Gaussian autoregressive process that may also depend on the estimated measurement error variance. The model parameters are estimated by maximum likelihood using the Kalman filter. Our empirical analysis considers the realized variances of 40 stocks from the S&P 500. Our model based on log variances shows the best overall performance and generates superior forecasts both in terms of a range of different loss functions and for various subsamples of the forecasting period.
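For reference, here is the constant-parameter HAR benchmark that the paper generalises (its extension lets the coefficients follow a latent Gaussian autoregression filtered by the Kalman filter, not shown). The realized-variance series below is simulated, not the S&P 500 data.

```python
import numpy as np

def har_design(rv):
    """Daily, weekly (5-day) and monthly (22-day) average RV regressors,
    each aligned with the next day's realized variance as the target."""
    T = len(rv)
    rows = [[1.0, rv[t], rv[t-4:t+1].mean(), rv[t-21:t+1].mean()]
            for t in range(21, T - 1)]
    return np.array(rows), rv[22:]

rng = np.random.default_rng(5)
rv = np.exp(0.5 * rng.normal(size=500))         # simulated realized variances
X, y = har_design(rv)
beta = np.linalg.lstsq(X, y, rcond=None)[0]     # constant-parameter benchmark
print(beta)
print(X[-1] @ beta)                             # one-step-ahead point forecast
```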

13.
Extant evidence that the self‐employed overestimate their returns by a greater margin than employees is consistent with two mutually inclusive possibilities. Self‐employment may foster optimism or intrinsic optimists may be drawn to self‐employment. Previous research is generally unable to disentangle these effects because of reliance on cross‐sectional data. Using longitudinal data, this paper finds that employees who will be self‐employed in the future overestimate their short‐term financial wellbeing by more than those who never become self‐employed. Optimism is higher still when self‐employed. These results suggest that the greater optimism of the self‐employed reflects both psychological disposition and environmental factors. By providing greater scope for optimism, self‐employment entices the intrinsically optimistic.

14.
The nonnormal stable laws and Student t distributions are used to model the unconditional distribution of financial asset returns, as both models display heavy tails. The relevance of the two models is subject to debate because empirical estimates of the tail shape conditional on either model give conflicting signals. This stems from opposing bias terms. We exploit the biases to discriminate between the two distributions. A sign estimator for the second‐order scale parameter strengthens our results. Tail estimates based on asset return data match the bias induced by finite‐variance unconditional Student t data and the generalized autoregressive conditional heteroscedasticity process.
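Tail-shape estimates of the kind discussed here are typically Hill-type. A minimal sketch follows; the paper's bias decomposition and the sign estimator for the second-order scale parameter are not reproduced, and the sample is simulated.

```python
import numpy as np

def hill(x, k):
    """Hill estimator of the tail index from the k largest absolute values."""
    xs = np.sort(np.abs(x))[::-1]
    return 1.0 / np.mean(np.log(xs[:k]) - np.log(xs[k]))

rng = np.random.default_rng(6)
sample = rng.standard_t(df=3, size=5000)   # Student t with true tail index 3
print(hill(sample, k=200))                 # should be near 3, up to bias
```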

15.
Sun Qiuyun (孙秋云). 价值工程 (Value Engineering), 2011, 30(19): 51.
The Kalman filter and its refinements are widely applied to estimating the state of a moving vehicle and have produced good results. This paper provides a comprehensive overview of applications of the adaptive Kalman filter and the unscented Kalman filter algorithms.
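For background on the unscented Kalman filter covered in this survey, here is a minimal sketch of its core ingredient, the unscented transform, which propagates a Gaussian through a nonlinearity via sigma points. The toy motion model and all parameters are purely illustrative.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear map f using
    the 2n+1 sigma points of the unscented transform, the core of the UKF."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * cov)           # matrix square root
    sigma = np.vstack([mean, mean + L.T, mean - L.T]) # 2n+1 sigma points
    wm = np.full(2 * n + 1, 0.5 / (n + lam))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + 1.0 - alpha**2 + beta
    Y = np.array([f(s) for s in sigma])
    y_mean = wm @ Y
    dev = Y - y_mean
    y_cov = (wc[:, None] * dev).T @ dev
    return y_mean, y_cov

# Toy constant-velocity step for a 1-D vehicle state (position, speed).
m, C = np.array([1.0, 0.1]), np.diag([0.1, 0.01])
step = lambda s: np.array([s[0] + 0.1 * s[1], s[1]])
print(unscented_transform(m, C, step))
```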

16.
Project failure is likely to generate a negative emotional response for those involved in the project. But do all people feel the same way? And are some better able to regulate their emotions to learn from the failure experience? In this paper we develop an emotion framework of project failure that relies on self‐determination to explain variance in the intensity of the negative emotions triggered by project failure and self‐compassion to explain variance in learning from project failure. We discuss the implications of our model for research on entrepreneurial and innovative organizations, employees' psychological ownership, and personal engagement at work.

17.
Dynamic model averaging (DMA) has become a very useful tool for dealing with two important aspects of time-series analysis, namely parameter instability and model uncertainty. An important component of DMA is the Kalman filter. It is used to filter out the latent time-varying regression coefficients of the predictive regression of interest, and to produce the model predictive likelihood, which is needed to construct the probability of each model in the model set. To apply the Kalman filter, one must write the model of interest in linear state–space form. In this study, we demonstrate that the state–space representation has implications for out-of-sample prediction performance and the degree of shrinkage. Using Monte Carlo simulations as well as financial data at different sampling frequencies, we document that the way in which the current literature tends to formulate the candidate time-varying parameter predictive regression in linear state–space form ignores empirical features that are often present in the data at hand, namely predictor persistence and predictor endogeneity. We suggest a straightforward way to account for these features in the DMA setting. Results using the widely applied Goyal and Welch (2008) dataset document that modifying the DMA framework as we suggest has a bearing on equity premium point prediction performance from both a statistical and an economic viewpoint.
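Each candidate model contributes one-step predictive likelihoods from its own Kalman filter (much like the regression filter sketched after item 10); DMA then combines these recursively with a forgetting factor. A minimal sketch of that combination step, with made-up inputs; the persistence and endogeneity adjustments the paper proposes are not shown.

```python
import numpy as np

def dma_probabilities(loglik, delta=0.99):
    """Recursive DMA model probabilities from a (T x M) array of per-model
    one-step predictive log-likelihoods; delta is the forgetting factor
    that flattens yesterday's posterior before each Bayes update."""
    T, M = loglik.shape
    prob = np.full(M, 1.0 / M)                # equal initial model probabilities
    path = np.empty((T, M))
    for t in range(T):
        prior = prob**delta
        prior /= prior.sum()                  # forgetting-factor prediction step
        post = prior * np.exp(loglik[t] - loglik[t].max())
        prob = post / post.sum()              # update with predictive likelihoods
        path[t] = prob
    return path

# Made-up predictive log-likelihoods for two candidate models over 5 periods.
ll = np.array([[-1.2, -1.0], [-1.1, -1.3], [-0.9, -1.4], [-1.0, -1.0], [-1.3, -0.8]])
print(dma_probabilities(ll))
```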

18.
Drawing on the relational perspective and self‐consistency theory, we theorize how relationships involving work‐centric, off‐work‐centric, and/or personal components can affect an employee's organization‐based self‐esteem and job performance in Chinese organizational contexts. Matched data were collected from a multi‐source sample that included 219 employee–supervisor dyads from a Chinese bank. Results based on hierarchical regression analyses reveal that a high‐quality relationship with a supervisor through work and off‐work domains (leader–member exchange and guanxi) is positively related to organization‐based self‐esteem. Organization‐based self‐esteem plays a mediating role in the relationship between guanxi and job performance. Additionally, career mentoring from a supervisor (a work‐centric relationship involving personal components) moderates the relationship between organization‐based self‐esteem and job performance.

19.
We analyse the finite sample properties of maximum likelihood estimators for dynamic panel data models. In particular, we consider transformed maximum likelihood (TML) and random effects maximum likelihood (RML) estimation. We show that TML and RML estimators are solutions to a cubic first‐order condition in the autoregressive parameter. Furthermore, in finite samples both likelihood estimators might lead to a negative estimate of the variance of the individual‐specific effects. We consider different approaches taking into account the non‐negativity restriction for the variance. We show that these approaches may lead to a solution different from the unique global unconstrained maximum. In an extensive Monte Carlo study we find that this issue is non‐negligible for small values of T and that different approaches might lead to different finite sample properties. Furthermore, we find that the Likelihood Ratio statistic provides size control in small samples, albeit with low power due to the flatness of the log‐likelihood function. We illustrate these issues modelling US state level unemployment dynamics.

20.
In dynamic panel regression, when the variance ratio of individual effects to disturbance is large, the system‐GMM estimator will have large asymptotic variance and poor finite sample performance. To deal with this variance ratio problem, we propose a residual‐based instrumental variables (RIV) estimator, which uses the residual from regressing Δy_{i,t−1} on … as the instrument for the level equation. The proposed RIV estimator is consistent and asymptotically normal under general assumptions. More importantly, its asymptotic variance is almost unaffected by the variance ratio of individual effects to disturbance. Monte Carlo simulations show that the RIV estimator has better finite sample performance than alternative estimators. The RIV estimator generates less finite sample bias than the difference‐GMM, system‐GMM, collapsing‐GMM and Level‐IV estimators in most cases. Under RIV estimation, the variance ratio problem is well controlled, and the empirical distribution of its t‐statistic is close to the standard normal distribution for moderate sample sizes.

