Similar Articles
 20 similar articles found.
1.
The present paper introduces a methodology for the semiparametric or non‐parametric two‐sample equivalence problem when the effects are specified by statistical functionals. The mean relative risk functional of two populations is given by the average of the time‐dependent risk. This functional is a meaningful non‐parametric quantity, which is invariant under strictly monotone transformations of the data. In the case of proportional hazard models, the functional determines exactly the proportional hazard risk factor. It is shown that an equivalence test of the two‐sample Savage rank‐test type is appropriate for this functional. Under proportional hazards, this test can be carried out as an exact level α test. It also works quite well under other semiparametric models. Similar results are presented for a Wilcoxon rank‐sum test for equivalence based on the Mann–Whitney functional given by the relative treatment effect.
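As a concrete illustration of the Mann–Whitney functional, the sketch below estimates the relative treatment effect from two simulated samples and checks an equivalence margin with a normal-approximation confidence interval. The data, the margin δ = 0.2, and the use of the null variance for the interval are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(0.0, 1.0, 80)    # control sample (simulated)
y = rng.normal(0.1, 1.0, 80)    # treatment sample (simulated)

n, m = len(x), len(y)
# U/(n*m) estimates the relative treatment effect
# p = P(X < Y) + 0.5 * P(X = Y)
u, _ = stats.mannwhitneyu(y, x, alternative="two-sided")
p_hat = u / (n * m)

# rough normal-approximation CI (assumption: null variance of U)
se = np.sqrt((n + m + 1) / (12 * n * m))
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se

# equivalence is declared if the whole CI sits inside 0.5 +/- delta
delta = 0.2
equivalent = (lo > 0.5 - delta) and (hi < 0.5 + delta)
```

A value of `p_hat` near 0.5 indicates no treatment effect; the equivalence logic follows the usual confidence-interval-inclusion idea rather than the exact rank test of the paper.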

2.
Despite the solid theoretical foundation on which the gravity model of bilateral trade is based, empirical implementation requires several assumptions which do not follow directly from the underlying theory. First, unobserved trade costs are assumed to be a (log‐)linear function of observables. Second, the effects of trade costs on trade flows are assumed to be constant across country pairs. Maintaining consistency with the underlying theory, but relaxing these assumptions, we estimate gravity models—in levels and logs—using two data sets via nonparametric methods. The results are striking. Despite the added flexibility of the nonparametric models, parametric models based on these assumptions offer equally or more reliable in‐sample predictions and out‐of‐sample forecasts in the majority of cases, particularly in the levels model. Moreover, formal statistical tests fail to reject either parametric functional form. Thus, concerns in the gravity literature over functional form appear unwarranted, and estimation of the gravity model in levels is recommended. Copyright © 2008 John Wiley & Sons, Ltd.
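A minimal sketch of the conventional log-linear gravity specification that such studies take as the parametric benchmark, fitted by OLS on simulated data (all variable names and parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # simulated country pairs

# log GDPs of exporter/importer and log bilateral distance
log_gdp_i = rng.normal(10, 1, n)
log_gdp_j = rng.normal(10, 1, n)
log_dist = rng.normal(7, 0.5, n)
eps = rng.normal(0, 0.1, n)

# log-linear gravity equation with illustrative coefficients
log_trade = -5 + 1.0 * log_gdp_i + 1.0 * log_gdp_j - 1.5 * log_dist + eps

# OLS in logs: regress log trade on log GDPs and log distance
X = np.column_stack([np.ones(n), log_gdp_i, log_gdp_j, log_dist])
beta, *_ = np.linalg.lstsq(X, log_trade, rcond=None)
```

The nonparametric alternative discussed in the abstract would replace the fixed log-linear form with, for example, a kernel regression on the same covariates.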

3.
This paper presents empirical evidence on the effectiveness of eight different parametric ARCH models in describing daily stock returns. Twenty‐seven years of UK daily data on a broad‐based value weighted stock index are investigated for the period 1971–97. Several interesting results are documented. Overall, the results strongly demonstrate the utility of parametric ARCH models in describing time‐varying volatility in this market. The parameters proxying for asymmetry in models that recognize the asymmetric behaviour of volatility are highly significant in every case. However, the ‘performance’ of the various parameterizations is often fairly similar, with the exception of the multiplicative GARCH model, which performs qualitatively differently on several dimensions of performance. The outperformance of any model(s) is not consistent across different sub‐periods of the sample, suggesting that the optimal choice of a model is period‐specific. The outperformance is also not consistent as we change from in‐sample inferences to out‐of‐sample inferences within the same period. Copyright © 2000 John Wiley & Sons, Ltd.
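A bare-bones version of the simplest member of the ARCH class studied here, a GARCH(1,1), fitted by maximum likelihood on simulated returns. This is a generic sketch under Gaussian errors, not any of the paper's eight parameterizations:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# simulate a GARCH(1,1): sigma2_t = w + a*r_{t-1}^2 + b*sigma2_{t-1}
w_true, a_true, b_true = 0.05, 0.08, 0.90
T = 2000
r = np.empty(T)
s2 = np.empty(T)
s2[0] = w_true / (1 - a_true - b_true)       # unconditional variance
r[0] = np.sqrt(s2[0]) * rng.standard_normal()
for t in range(1, T):
    s2[t] = w_true + a_true * r[t - 1] ** 2 + b_true * s2[t - 1]
    r[t] = np.sqrt(s2[t]) * rng.standard_normal()

def neg_loglik(params, r):
    """Gaussian negative log-likelihood (constant dropped)."""
    w, a, b = params
    if w <= 0 or a < 0 or b < 0 or a + b >= 1:
        return np.inf                          # enforce stationarity
    s2 = np.empty(len(r))
    s2[0] = r.var()                            # simple initialisation
    for t in range(1, len(r)):
        s2[t] = w + a * r[t - 1] ** 2 + b * s2[t - 1]
    return 0.5 * np.sum(np.log(s2) + r ** 2 / s2)

res = minimize(neg_loglik, x0=[0.1, 0.1, 0.8], args=(r,),
               method="Nelder-Mead")
w_hat, a_hat, b_hat = res.x
```

Asymmetric variants such as GJR or EGARCH add a leverage term to the variance recursion; the estimation machinery is unchanged.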

4.
Statistical Decision Problems and Bayesian Nonparametric Methods
This paper considers parametric statistical decision problems conducted within a Bayesian nonparametric context. Our work was motivated by the realisation that typical parametric model selection procedures are essentially incoherent. We argue that one solution to this problem is to use a flexible enough model in the first place, a model that will not be checked no matter what data arrive. Ideally, one would use a nonparametric model to describe all the uncertainty about the density function generating the data. However, parametric models are the preferred choice for many statisticians, despite the incoherence involved in model checking, incoherence that is quite often ignored for pragmatic reasons. In this paper we show how coherent parametric inference can be carried out via decision theory and Bayesian nonparametrics. None of the ingredients discussed here are new, but our main point only becomes evident when one sees all priors—even parametric ones—as measures on sets of densities as opposed to measures on finite-dimensional parameter spaces.

5.
We propose a measure of predictability based on the ratio of the expected loss of a short‐run forecast to the expected loss of a long‐run forecast. This predictability measure can be tailored to the forecast horizons of interest, and it allows for general loss functions, univariate or multivariate information sets, and covariance stationary or difference stationary processes. We propose a simple estimator, and we suggest resampling methods for inference. We then provide several macroeconomic applications. First, we illustrate the implementation of predictability measures based on fitted parametric models for several US macroeconomic time series. Second, we analyze the internal propagation mechanism of a standard dynamic macroeconomic model by comparing the predictability of model inputs and model outputs. Third, we use predictability as a metric for assessing the similarity of data simulated from the model and actual data. Finally, we outline several non‐parametric extensions of our approach. Copyright © 2001 John Wiley & Sons, Ltd.
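For a concrete case, the loss-ratio predictability measure has a closed form for an AR(1) under squared loss: the h-step forecast-error variance is σ²(1 − φ^{2h})/(1 − φ²), so the measure reduces to one minus a ratio of such variances. The helper below computes this illustrative special case, not the paper's general estimator:

```python
import numpy as np

def ar1_predictability(phi, h, k):
    """Predictability of an AR(1) under squared loss: one minus the
    ratio of the h-step to the k-step forecast-error variance (h < k).
    As k -> infinity this converges to phi**(2*h)."""
    var_h = (1 - phi ** (2 * h)) / (1 - phi ** 2)
    var_k = (1 - phi ** (2 * k)) / (1 - phi ** 2)
    return 1 - var_h / var_k

# short-horizon predictability of a persistent series (phi = 0.9)
p_short = ar1_predictability(0.9, h=1, k=100)
```

Predictability decays toward zero as the short horizon h grows, matching the intuition that long-run forecasts gain little over the unconditional mean.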

6.
The flow of natural gas within a gas transmission network is studied with the aim of optimizing such networks. The analysis of real data provides a deeper insight into the behaviour of gas in‐ and outflow. Several models for describing dependence between the maximal daily gas flow and the temperature on network exits are proposed. A modified sigmoidal regression is chosen from the class of parametric models. As an alternative, a semi‐parametric regression model based on penalized splines is considered. The comparison of models and the forecast of gas loads for very low temperatures based on both approaches is included. The application of the obtained results is discussed.
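A sketch of fitting a sigmoidal load–temperature curve with `scipy.optimize.curve_fit` on simulated data. The four-parameter form and all numeric values are illustrative assumptions; the paper's modified sigmoid may differ:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid_load(t, base, amp, slope, t0):
    """Daily gas load vs. temperature: a high plateau for cold
    temperatures decaying to `base` as it warms (illustrative form)."""
    return base + amp / (1.0 + np.exp(slope * (t - t0)))

rng = np.random.default_rng(7)
temp = rng.uniform(-15, 25, 300)              # exit temperatures (C)
load = sigmoid_load(temp, 50, 200, 0.3, 8.0) + rng.normal(0, 5, 300)

# nonlinear least squares from a rough starting guess
popt, pcov = curve_fit(sigmoid_load, temp, load, p0=[40, 150, 0.2, 5.0])
```

Forecasting loads for very low temperatures then amounts to evaluating `sigmoid_load(t_cold, *popt)`, with the caveat that extrapolation beyond the observed temperature range leans entirely on the assumed parametric form.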

7.
Recent years have seen an explosion of activity in the field of functional data analysis (FDA), in which curves, spectra, images and so on are considered as basic functional data units. A central problem in FDA is how to fit regression models with scalar responses and functional data points as predictors. We review some of the main approaches to this problem, categorising the basic model types as linear, non‐linear and non‐parametric. We discuss publicly available software packages and illustrate some of the procedures by application to a functional magnetic resonance imaging data set.
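A minimal scalar-on-function linear regression in the spirit of the linear approach reviewed here: discretize the curves, expand the coefficient function on a small basis to tame the ill-posedness, and solve by least squares. The simulated curves and the polynomial basis are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 100
t = np.linspace(0, 1, p)

# rough functional predictors: scaled random-walk curves
X = np.cumsum(rng.normal(size=(n, p)), axis=1) / np.sqrt(p)

beta_true = np.sin(np.pi * t)                    # coefficient function
# y_i ~ integral of X_i(t) * beta(t) dt, plus noise
y = X @ beta_true / p + rng.normal(0, 0.01, n)

# expand beta(t) on a small polynomial basis (degree 4)
B = np.vander(t, 5, increasing=True)             # p x 5 basis matrix
Z = X @ B / p                                    # reduced design, n x 5
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
beta_hat = B @ coef                              # estimated beta(t)
mse = np.mean((Z @ coef - y) ** 2)
```

Production FDA packages replace the hand-rolled polynomial basis with splines or functional principal components and add an explicit roughness penalty.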

8.
This paper is concerned with the statistical inference on seemingly unrelated varying coefficient partially linear models. By combining the local polynomial and profile least squares techniques, and estimating the contemporaneous correlation, we propose a class of weighted profile least squares estimators (WPLSEs) for the parametric components. It is shown that the WPLSEs achieve the semiparametric efficiency bound and are asymptotically normal. For the non‐parametric components, by applying the undersmoothing technique, and taking the contemporaneous correlation into account, we propose an efficient local polynomial estimation. The resulting estimators are shown to have mean‐squared errors smaller than those of estimators that neglect the contemporaneous correlation. In addition, a class of variable selection procedures is developed for simultaneously selecting significant variables and estimating unknown parameters, based on the non‐concave penalized and weighted profile least squares techniques. With a proper choice of regularization parameters and penalty functions, the proposed variable selection procedures perform as efficiently as if one knew the true submodels. The proposed methods are evaluated through extensive simulation studies and applied to a set of real data.

9.
Two‐state models (working/failed or alive/dead) are widely used in reliability and survival analysis. In contrast, multi‐state stochastic processes provide a richer framework for modeling and analyzing the progression of a process from an initial to a terminal state, allowing incorporation of more details of the process mechanism. We review multi‐state models, focusing on time‐homogeneous semi‐Markov processes (SMPs), and then describe the statistical flowgraph framework, which comprises analysis methods and algorithms for computing quantities of interest such as the distribution of first passage times to a terminal state. These algorithms algebraically combine integral transforms of the waiting time distributions in each state and invert them to get the required results. The estimated transforms may be based on parametric distributions or on empirical distributions of sample transition data, which may be censored. The methods are illustrated with several applications.
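Where the flowgraph approach combines and inverts integral transforms analytically, the same first-passage distribution can be approximated by direct simulation. A sketch for a three-state semi-Markov process (the exponential holding times, rates, and branching probability are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(11)

def first_passage(n_sim=100_000, p_direct=0.3,
                  rate_02=0.5, rate_01=1.0, rate_12=0.25):
    """Simulate first-passage times from state 0 to terminal state 2.
    With probability p_direct the process jumps 0 -> 2 directly;
    otherwise it goes 0 -> 1 -> 2. Each holding time is exponential
    with the given rate (illustrative choice of waiting-time laws)."""
    direct = rng.random(n_sim) < p_direct
    t_direct = rng.exponential(1 / rate_02, n_sim)
    t_indirect = (rng.exponential(1 / rate_01, n_sim)
                  + rng.exponential(1 / rate_12, n_sim))
    return np.where(direct, t_direct, t_indirect)

t = first_passage()
# mixture mean: p/rate_02 + (1-p)*(1/rate_01 + 1/rate_12)
analytic_mean = 0.3 * (1 / 0.5) + 0.7 * (1 / 1.0 + 1 / 0.25)
```

The analytic mean drops out of exactly the transform algebra the flowgraph framework automates: the moment generating functions of the two path distributions, mixed by the branching probabilities.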

10.
This paper focuses on the dynamic misspecification that characterizes the class of small‐scale New Keynesian models currently used in monetary and business cycle analysis, and provides a remedy for the typical difficulties these models have in accounting for the rich contemporaneous and dynamic correlation structure of the data. We suggest using a statistical model for the data as a device through which it is possible to adapt the econometric specification of the New Keynesian model such that the risk of omitting important propagation mechanisms is kept under control. A pseudo‐structural form is built from the baseline system of Euler equations by forcing the state vector of the system to have the same dimension as the state vector characterizing the statistical model. The pseudo‐structural form gives rise to a set of cross‐equation restrictions that do not penalize the autocorrelation structure and persistence of the data. Standard estimation and evaluation methods can be used. We provide an empirical illustration based on US quarterly data and a small‐scale monetary New Keynesian model.

11.
We study parametric and non‐parametric approaches for assessing the accuracy and coverage of a population census based on dual system surveys. The two parametric approaches being considered are post‐stratification and logistic regression, which have been or will be implemented for the US Census dual system surveys. We show that the parametric model‐based approaches are generally biased unless the model is correctly specified. We then study a local post‐stratification approach based on a non‐parametric kernel estimate of the Census enumeration functions. We illustrate that the non‐parametric approach avoids the risk of model mis‐specification and is consistent under relatively weak conditions. The performances of these estimators are evaluated numerically via simulation studies and an empirical analysis based on the 2000 US Census post‐enumeration survey data.
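The dual-system building block behind these approaches is the Lincoln–Petersen capture–recapture estimator, applied within post-strata in the post-stratification variant. A minimal sketch (the counts are illustrative, and the formula assumes independent captures with homogeneous capture probabilities within a stratum):

```python
def dual_system_estimate(n_census, n_survey, n_matched):
    """Lincoln-Petersen dual-system estimate of the total population:
    N_hat = n_census * n_survey / n_matched. Assumes independence of
    the two systems and homogeneous capture within the group."""
    return n_census * n_survey / n_matched

def post_stratified_estimate(strata):
    """Apply the dual-system estimator within each post-stratum
    (tuples of census count, survey count, matched count) and sum."""
    return sum(dual_system_estimate(*s) for s in strata)

# illustrative counts: 900 census records, 800 survey records,
# 720 matched to both systems
N_hat = dual_system_estimate(900, 800, 720)
```

Heterogeneous capture probabilities bias this estimator, which is exactly the motivation for post-stratifying (or, as the abstract proposes, localizing the stratification with a kernel estimate).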

12.
This paper presents a design for compensation systems for green strategy implementation based on parametric and non‐parametric approaches. The purpose of the analysis is to use formal modeling to explain the issues that arise with the multi‐task problem of implementing an environmental strategy in addition to an already existing profit‐oriented strategy. For the first class of compensation systems (parametric), a multi‐task model is used as a basis. For the second class of compensation systems (non‐parametric), data envelopment analysis is applied. Copyright © 2003 John Wiley & Sons, Ltd. and ERP Environment
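For the non-parametric class, a standard input-oriented CCR data envelopment analysis score can be computed as a small linear program. A sketch with one input and one output per decision-making unit (the data and interpretation are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

# illustrative data: inputs X (m x n) and outputs Y (s x n) for n units
X = np.array([[2.0, 4.0, 3.0, 5.0]])   # e.g. resources consumed
Y = np.array([[1.0, 2.0, 3.0, 2.0]])   # e.g. environmental performance

def ccr_efficiency(o, X, Y):
    """Input-oriented CCR efficiency of unit o:
    min theta s.t. X@lam <= theta * x_o, Y@lam >= y_o, lam >= 0."""
    m, n = X.shape
    s, _ = Y.shape
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimise theta
    A1 = np.hstack([-X[:, [o]], X])              # X lam - theta x_o <= 0
    A2 = np.hstack([np.zeros((s, 1)), -Y])       # -Y lam <= -y_o
    A_ub = np.vstack([A1, A2])
    b_ub = np.concatenate([np.zeros(m), -Y[:, o]])
    bounds = [(None, None)] + [(0, None)] * n    # theta free, lam >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[0]
```

A score of 1 marks a unit on the efficient frontier; a score of 0.5 says the unit could, in principle, produce its output with half the input.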

13.
Survival analysis is one of the statistical methods deployed in the medical sciences to investigate time-to-event data. This study compares the efficiency of several parametric and semiparametric survival models while investigating the effect of demographic and socio-economic factors on the growth failure of children below 2 years of age in Iran. The parametric survival models, including exponential, Weibull, log-logistic and log-normal models, were compared to proportional hazards and extended Cox models using the Akaike information criterion and the variability of the estimated parameters. Based on the results, the log-normal model is recommended for analyzing the growth failure data of children in Iran. Furthermore, it is suggested that female children, children born to illiterate mothers and children born in larger households receive more attention in terms of growth failure.
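A sketch of the model-comparison step via AIC on simulated, uncensored failure ages. Real growth-failure data are right-censored, which this simple scipy-based comparison ignores; the distributions compared are three of the candidates named in the abstract:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# illustrative ages at growth failure (months), log-normally generated
t_obs = rng.lognormal(mean=2.5, sigma=0.5, size=400)

def aic(dist, data, **fit_kwargs):
    """Fit a scipy distribution by ML and return its AIC."""
    params = dist.fit(data, **fit_kwargs)
    loglik = np.sum(dist.logpdf(data, *params))
    return 2 * len(params) - 2 * loglik

# fix the location at zero: durations start at the time origin
aic_lognorm = aic(stats.lognorm, t_obs, floc=0)
aic_weibull = aic(stats.weibull_min, t_obs, floc=0)
aic_expon = aic(stats.expon, t_obs, floc=0)
```

The lowest AIC flags the preferred parametric family, mirroring the abstract's recommendation of the log-normal model for its data.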

14.
I examine the effects of insurance status and managed care on hospitalization spells, and develop a new approach for sample selection problems in parametric duration models. MLE of the Flexible Parametric Selection (FPS) model does not require numerical integration or simulation techniques. I discuss application to the exponential, Weibull, log‐logistic and gamma duration models. Applying the model to the hospitalization data indicates that the FPS model may be preferred even in cases in which other parametric approaches are available. Copyright © 2002 John Wiley & Sons, Ltd.

15.
In their advocacy of the rank‐transformation (RT) technique for analysis of data from factorial designs, Mendeş and Yiğit (Statistica Neerlandica, 67, 2013, 1–26) missed important analytical studies identifying the statistical shortcomings of the RT technique, the recommendation that the RT technique not be used, and important advances that have been made for properly analyzing data in a non‐parametric setting. Applied data analysts are at risk of being misled by Mendeş and Yiğit, when statistically sound techniques are available for the proper non‐parametric analysis of data from factorial designs. The appropriate methods express hypotheses in terms of normalized distribution functions, and the test statistics account for variance heterogeneity.

16.
Multiple event data are frequently encountered in medical follow‐up, engineering and other applications when the multiple events are considered as the major outcomes. They may be repetitions of the same event (recurrent events) or may be events of different nature. Times between successive events (gap times) are often of direct interest in these applications. The stochastic‐ordering structure and within‐subject dependence of multiple events generate statistical challenges for analysing such data, including induced dependent censoring and non‐identifiability of marginal distributions. This paper provides an overview of a class of existing non‐parametric estimation methods for gap time distributions for various types of multiple event data, where sampling bias from induced dependent censoring is effectively adjusted. We discuss the statistical issues in gap time analysis, describe the estimation procedures and illustrate the methods with a comparative simulation study and a real application to an AIDS clinical trial. A comprehensive understanding of the challenges and available methods for non‐parametric analysis is useful because there is no existing standard approach to identifying an appropriate gap time method for the research question of interest. The methods discussed in this review would allow practitioners to effectively handle a variety of real‐world multiple event data.
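The building block of most non-parametric gap-time methods is the Kaplan–Meier estimator, which is valid for the first gap time, before induced dependent censoring becomes an issue; later gaps need the bias adjustments the review describes. A minimal implementation with illustrative data:

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival curve. `events` is 1 for an observed
    event, 0 for right-censoring. Returns the distinct event times
    and the estimated survival probability S(t) at each of them."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events)
    uniq = np.unique(times[events == 1])
    surv = []
    s = 1.0
    for t in uniq:
        at_risk = np.sum(times >= t)                 # still under observation
        d = np.sum((times == t) & (events == 1))     # events at time t
        s *= 1 - d / at_risk
        surv.append(s)
    return uniq, np.array(surv)

# illustrative first gap times; 0 marks censored follow-up
t_grid, s_hat = kaplan_meier([2, 3, 3, 5, 8, 9], [1, 1, 0, 1, 0, 1])
```

For the censoring pattern above the estimator drops from 5/6 at t = 2 to 0 at t = 9, the last observed event.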

17.
In this paper we model Value‐at‐Risk (VaR) for daily asset returns using a collection of parametric univariate and multivariate models of the ARCH class based on the skewed Student distribution. We show that models that rely on a symmetric density distribution for the error term underperform with respect to skewed density models when the left and right tails of the distribution of returns must be modelled. Thus, VaR for traders having both long and short positions is not adequately modelled using usual normal or Student distributions. We suggest using an APARCH model based on the skewed Student distribution (combined with a time‐varying correlation in the multivariate case) to fully take into account the fat left and right tails of the returns distribution. This allows for an adequate modelling of large returns defined on long and short trading positions. The performances of the univariate models are assessed on daily data for three international stock indexes and three US stocks of the Dow Jones index. In a second application, we consider a portfolio of three US stocks and model its long and short VaR using a multivariate skewed Student density. Copyright © 2003 John Wiley & Sons, Ltd.
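A simplified version of the long/short VaR calculation using a symmetric Student t distribution: scipy has no built-in skewed-Student law, so the asymmetry parameter of the paper's density is omitted, and the returns are simulated rather than market data. The left-tail quantile drives the VaR of a long position, the right-tail quantile that of a short position:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
# illustrative fat-tailed daily returns (Student t, 5 df, 1% scale)
returns = stats.t.rvs(df=5, scale=0.01, size=2000, random_state=rng)

# fit a symmetric Student t by maximum likelihood; a skewed-Student
# fit would add one asymmetry parameter to this step
df_hat, loc_hat, scale_hat = stats.t.fit(returns)

alpha = 0.01  # 99% VaR
var_long = -(loc_hat + scale_hat * stats.t.ppf(alpha, df_hat))
var_short = loc_hat + scale_hat * stats.t.ppf(1 - alpha, df_hat)
```

With a symmetric fit the two VaRs nearly coincide; the paper's point is precisely that real return tails are asymmetric, so the skewed density separates them.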

18.
This paper proposes a set of formal tests to address the goodness‐of‐fit of Markov switching models. These formal tests are constructed as tests of model consistency and of both parametric and non‐parametric encompassing. The formal tests are then combined with informal tests using simulation in combination with non‐parametric density and conditional mean estimation. The informal tests are shown to be useful in shedding light on the failure (or success) of the encompassing tests. Several examples are provided.

19.
We develop a model of marketing efficiency based on a directional distance function that allows for marketing spillovers. A parametric model is used to test for spillovers from rival marketing and from a firm's marketing activity of its other related products. We then show how this information can be incorporated into a non‐parametric model and used to estimate marketing inefficiency. We apply brand level data from the US brewing industry to the non‐parametric model to determine the effectiveness of television, radio, and print advertising. We find that advertising spillovers are important in brewing and show that efficiency estimates are inaccurate when spillover effects are ignored. Our results also suggest that marketing efficiency may be an important component to firm success in brewing, a result that may apply to other consumer goods industries. Copyright © 2006 John Wiley & Sons, Ltd.

20.
Yield curve models within the popular Nelson–Siegel class are shown to arise from formal low‐order Taylor approximations of the generic Gaussian affine term structure model. Extensive empirical testing on government and bank‐risk yield curve datasets for the five largest industrial economies shows that the arbitrage‐free three‐factor (Level, Slope, Curvature) Nelson–Siegel model generally provides an acceptable representation of the data relative to the three‐factor Gaussian affine term structure model. The combined theoretical foundation and empirical evidence means that Nelson–Siegel models may be applied and interpreted from the perspective of Gaussian affine term structure models that already have firm statistical and theoretical foundations in the literature. Copyright © 2013 John Wiley & Sons, Ltd.
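The three-factor Nelson–Siegel representation can be sketched directly: for a fixed decay parameter λ, yields are linear in the Level, Slope and Curvature factors, so a cross-section of yields is fitted by ordinary least squares. The λ value (the Diebold–Li monthly choice rescaled to annual maturities) and the factor values below are illustrative:

```python
import numpy as np

def ns_loadings(tau, lam=0.7308):
    """Nelson-Siegel factor loadings at maturities tau (years):
    Level = 1, Slope = (1 - e^(-x))/x, Curvature = Slope - e^(-x),
    with x = lam * tau."""
    x = lam * np.asarray(tau, dtype=float)
    slope = (1 - np.exp(-x)) / x
    return np.column_stack([np.ones_like(x), slope, slope - np.exp(-x)])

# synthetic yield curve from known factors, then OLS recovery
tau = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 20, 30.0])
beta_true = np.array([4.0, -2.0, 1.5])       # level, slope, curvature (%)
y = ns_loadings(tau) @ beta_true

beta_hat, *_ = np.linalg.lstsq(ns_loadings(tau), y, rcond=None)
```

The arbitrage-free variant discussed in the abstract adds a maturity-dependent yield-adjustment term to this same linear structure.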

