Similar Articles
20 similar articles found (search time: 31 ms)
1.
A Stochastic Frontier Production Function with Flexible Risk Properties
This paper considers a stochastic frontier production function which has an additive, heteroscedastic error structure. The model allows for negative or positive marginal production risks of inputs, as originally proposed by Just and Pope (1978). The technical efficiencies of individual firms in the sample are a function of the levels of the input variables in the stochastic frontier, in addition to the technical inefficiency effects. These are two features of the model which are not exhibited by the commonly used stochastic frontiers with multiplicative error structures. An empirical application is presented using cross-sectional data on Ethiopian peasant farmers. The null hypothesis of no technical inefficiencies of production among these farmers is accepted. Further, the flexible risk models do not fit the data on peasant farmers as well as the traditional stochastic frontier model with multiplicative error structure.
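
A minimal two-step sketch of the Just-Pope structure in Python, assuming log-linear mean and risk functions and strictly positive inputs (the paper's frontier additionally contains a one-sided technical-inefficiency term, omitted here):

```python
import numpy as np

def just_pope_two_step(X, y):
    # Just-Pope production: y = f(x) + g(x)*e, with f and g log-linear (assumption)
    A = np.column_stack([np.ones(len(y)), np.log(X)])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)      # mean (output) function
    u = y - A @ beta
    # Risk function: regress log squared residuals on the same log inputs.
    # A positive coefficient marks a risk-increasing input, a negative one a
    # risk-decreasing input -- the flexibility Just and Pope argued for.
    gamma, *_ = np.linalg.lstsq(A, np.log(u**2 + 1e-12), rcond=None)
    return beta, gamma
```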

2.
The paper proposes a method for forecasting conditional quantiles. In practice, one often does not know the “true” structure of the underlying conditional quantile function, and in addition, we may have a large number of predictors. Focusing on such cases, we introduce a flexible and practical framework based on penalized high-dimensional quantile averaging. In addition to prediction, we show that the proposed method can also serve as a predictor selector. We conduct extensive simulation experiments to assess its prediction and variable selection performance for nonlinear and linear time series model designs. In terms of predictor selection, the approach tends to select the true set of predictors with minimal false positives. With respect to prediction accuracy, the method competes well even with the benchmark/oracle methods that know one or more aspects of the underlying quantile regression model. We further illustrate the merit of the proposed method with an application to the out-of-sample forecasting of U.S. core inflation using a large set of monthly macroeconomic variables from the FRED-MD database. The application offers several empirical findings.
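
A single L1-penalized quantile fit illustrates the selection mechanism; the paper's framework averages over such penalized fits, and the penalty level `alpha` here is a hypothetical choice:

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor

def penalized_quantile_forecast(X, y, x_next, tau=0.5, alpha=0.1):
    # L1-penalized quantile regression: the penalty shrinks coefficients of
    # irrelevant predictors to zero, so the fit doubles as a predictor selector.
    qr = QuantileRegressor(quantile=tau, alpha=alpha).fit(X, y)
    selected = np.flatnonzero(qr.coef_)            # indices of kept predictors
    forecast = qr.predict(x_next.reshape(1, -1))[0]
    return forecast, selected
```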

3.
Household projections are key components of analyses of several issues of social concern, including the welfare of the elderly, housing, and environmentally significant consumption patterns. Researchers or policy makers who use such projections need appropriate representations of uncertainty in order to inform their analyses. However, the weaknesses of the traditional approach of providing alternative variants around a single "best guess" projection are magnified in household projections, which have many output variables of interest, and many input variables beyond fertility, mortality, and migration. We review current methods of household projection and the potential for using them to produce probabilistic projections, which would address many of these weaknesses. We then propose a new framework for a household projection method of intermediate complexity that we believe is a good candidate for providing a basis for further development of probabilistic household projections. An extension of the traditional headship rate approach, this method is based on modelling changes in headship rates, decomposed by household size, as a function of variables describing demographic events such as parity-specific fertility, union formation and dissolution, and leaving home. It has moderate data requirements and manageable complexity, allows for direct specification of demographic events, and produces output that includes the most important household characteristics for many applications. An illustration of how such a model might be constructed, using data on the U.S. and China over the past several decades, demonstrates the viability of the approach.
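
A stylized sketch of the headship-rate accounting step, with hypothetical rates and population counts (the paper's contribution is modelling changes in these size-specific rates as functions of demographic events):

```python
import numpy as np

# Hypothetical inputs: headship[a, s] is the share of persons in age group a
# who head a household of size s; pop[a] is the projected population count.
def project_households(headship, pop):
    # Households by size = population-weighted sum of size-specific headship rates
    return pop @ headship

# Example: 3 age groups, household sizes 1-4 (illustrative numbers only)
headship = np.array([[0.05, 0.10, 0.08, 0.02],
                     [0.10, 0.20, 0.15, 0.10],
                     [0.30, 0.25, 0.05, 0.02]])
pop = np.array([2.0e6, 3.0e6, 1.5e6])
print(project_households(headship, pop))   # household counts by size
```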

4.
In this note, we will consider the problem of recovering an unknown input function when the output function is observed in its entirety, blurred with functional error. An estimator is constructed whose risk converges at an optimal rate. In this functional model, convergence rates of order 1/n (n is the sample size) are possible, provided that the error distribution is sufficiently concentrated so as to compensate for the ill-posedness of the inverse of the model operator.
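
A generic spectral-cutoff sketch of regularized inversion, assuming the operator's singular system is known; this illustrates the ill-posedness issue rather than the paper's specific estimator:

```python
import numpy as np

def spectral_cutoff_inverse(g_coefs, singvals, cutoff):
    # Recover the input function's coefficients f_j = g_j / b_j in the
    # operator's singular basis, truncating where the singular values b_j
    # are too small for a stable inversion (the ill-posed directions).
    f = np.zeros_like(g_coefs, dtype=float)
    f[:cutoff] = g_coefs[:cutoff] / singvals[:cutoff]
    return f
```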

5.
Cross-validation is a method used to estimate the expected prediction error of a model. Such estimates may be of interest in themselves, but their use for model selection is more common. Unfortunately, cross-validation is viewed as being computationally expensive in many situations. In this paper it is shown that the h-block cross-validation function for least-squares based estimators can be expressed in a form which enormously reduces the amount of calculation required. The standard approach is of O(T^2), where T denotes the sample size, while the proposed approach is of O(T) and yields identical numerical results. The proposed approach has widespread potential application, ranging from the estimation of expected prediction error to least-squares-based model specification to the selection of the series order for non-parametric series estimation. The technique is valid for general stationary observations. Simulation results and applications are considered.
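
A sketch of the naive O(T^2) version for least squares, useful as a correctness benchmark for the paper's algebraically identical O(T) form:

```python
import numpy as np

def h_block_cv(X, y, h):
    # Naive h-block cross-validation: for each t, delete the h observations
    # on either side of t (to break serial dependence), refit OLS on the
    # remainder, and score the prediction of y[t]. Cost is O(T^2).
    T, sse = len(y), 0.0
    for t in range(T):
        keep = np.abs(np.arange(T) - t) > h
        b, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        sse += (y[t] - X[t] @ b) ** 2
    return sse / T
```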

6.
We consider model selection facing uncertainty over the choice of variables and the occurrence and timing of multiple location shifts. General-to-simple selection is extended by adding an impulse indicator for every observation to the set of candidate regressors: see Johansen and Nielsen (2009). We apply that approach to a fat-tailed distribution, and to processes with breaks: Monte Carlo experiments show its capability of detecting up to 20 shifts in 100 observations, while jointly selecting variables. An illustration to US real interest rates compares impulse-indicator saturation with the procedure in Bai and Perron (1998).
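
A sketch of the saturation idea, with LassoCV standing in for the paper's general-to-simple selection algorithm:

```python
import numpy as np
from sklearn.linear_model import LassoCV

def impulse_saturated_fit(X, y):
    # Add an impulse indicator for every observation (an identity matrix),
    # then select jointly over regressors and indicators. Retained indicators
    # flag outliers/location shifts; LassoCV is a convenient stand-in here
    # for the general-to-simple selection used in the paper.
    T, k = len(y), X.shape[1]
    Z = np.hstack([X, np.eye(T)])
    fit = LassoCV(cv=5).fit(Z, y)
    breaks = np.flatnonzero(fit.coef_[k:])   # retained impulse indicators
    return fit.coef_[:k], breaks
```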

7.
The increasing penetration of intermittent renewable energy in power systems brings operational challenges. One way of addressing these challenges is to enhance the predictability of renewables through accurate forecasting. Convolutional neural networks (Convnets) provide a successful technique for processing space-structured multi-dimensional data. In our work, we propose the U-Convolutional model to predict hourly wind speeds for a single location using spatio-temporal data with multiple explanatory variables as input. The U-Convolutional model is composed of a U-Net part, which synthesizes the input information, and a Convnet part, which maps the synthesized data into a single-site wind prediction. We compare our approach with advanced Convnets, a fully connected neural network, and univariate models. We use time series from the Climate Forecast System Reanalysis as datasets and select temperature and the u- and v-components of wind as explanatory variables. The proposed models are evaluated at multiple locations (totaling 181 target series) and multiple forecasting horizons. The results indicate that our proposal is promising for spatio-temporal wind speed prediction, with competitive performance across the forecasting horizons for all datasets.
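
A toy PyTorch sketch of the architecture, with assumed channel counts and grid sizes; the paper's networks are deeper:

```python
import torch
import torch.nn as nn

class UConvWind(nn.Module):
    """Toy U-Net + Convnet head for a single-site wind forecast (shapes assumed)."""
    def __init__(self, c_in=3):              # e.g. temperature, u-wind, v-wind
        super().__init__()
        self.down = nn.Sequential(nn.Conv2d(c_in, 16, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.mid  = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up   = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.fuse = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        # Convnet part: map the synthesized field to one wind-speed value
        self.head = nn.Sequential(nn.Conv2d(16, 8, 3, padding=1), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(8, 1))

    def forward(self, x):                     # x: (batch, c_in, H, W), H and W even
        d = self.down(x)                      # encoder features
        u = self.up(self.mid(self.pool(d)))   # decoder features
        z = self.fuse(torch.cat([d, u], 1))   # skip connection, U-Net style
        return self.head(z)

model = UConvWind()
y_hat = model(torch.randn(8, 3, 16, 16))      # -> (8, 1) wind-speed forecasts
```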

8.
Standard bankruptcy prediction methods lead to models weighted by the types of failure firms included in the estimation sample. Such weighted models may lead to severe classification errors when they are applied to types of failing (and non-failing) firms that are in the minority in the estimation sample (the frequency effect). The purpose of this study is to present a bankruptcy prediction method that avoids the frequency effect by identifying two different failure types, i.e. solidity and liquidity bankruptcy firms. Each type is depicted by its own theoretical gambler's ruin model, yielding an approximation of the failure probability separately for each type. These models are applied to data on randomly selected Finnish bankrupt and non-bankrupt firms. A logistic regression model based on a set of financial variables is used as a benchmark. Empirical results show that the resulting heavily solidity-weighted logistic model may lead to severe errors in classifying non-bankrupt firms. The present approach avoids such errors by separately evaluating the probabilities of solidity and liquidity bankruptcy; the firm is not classified as bankrupt as long as neither probability exceeds the critical value. As a result, the present prediction method slightly outperforms the logistic model in overall classification accuracy.
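
A stylized sketch of the two-type classification rule; how financial statements map into the gambler's-ruin buffer and win probability is the paper's modelling step and is hypothetical here, as is the critical value:

```python
def ruin_probability(buffer_units, win_prob):
    # Classic gambler's ruin against an infinitely rich adversary with unit
    # stakes: P(ruin) = ((1-p)/p)**z when p > 1/2, and 1 otherwise, where z is
    # the firm's buffer (equity for the solidity type, liquid funds for the
    # liquidity type) measured in units of the periodic cash-flow "bet".
    q = 1.0 - win_prob
    return 1.0 if win_prob <= 0.5 else (q / win_prob) ** buffer_units

def classify(p_solidity, p_liquidity, critical=0.5):
    # Non-failing only if *neither* type-specific probability exceeds the
    # critical value (hypothetical threshold).
    return "bankrupt" if max(p_solidity, p_liquidity) > critical else "healthy"
```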

9.
This paper concerns a class of model selection criteria based on cross-validation techniques and estimative predictive densities. Both the simple or leave-one-out and the multifold or leave-m-out cross-validation procedures are considered. These cross-validation criteria define suitable estimators for the expected Kullback–Leibler risk, which measures the expected discrepancy between the fitted candidate model and the true one. In particular, we shall investigate the potential bias of these estimators, under alternative asymptotic regimes for m. The results are obtained within the general context of independent, but not necessarily identically distributed, observations and by assuming that the candidate model may not contain the true distribution. An application to the class of normal regression models is also presented, and simulation results are obtained in order to gain some further understanding of the behavior of the estimators.
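
A minimal sketch of the leave-one-out criterion for normal regression with estimative (plug-in) predictive densities:

```python
import numpy as np

def loo_log_predictive_risk(X, y):
    # Leave-one-out average negative log predictive density: a cross-validation
    # estimate of the expected Kullback-Leibler risk (up to the entropy of the
    # true model, which is constant across candidate models).
    n, score = len(y), 0.0
    for i in range(n):
        mask = np.arange(n) != i
        beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        resid = y[mask] - X[mask] @ beta
        sigma2 = resid @ resid / mask.sum()   # estimative (plug-in) variance
        mu = X[i] @ beta
        score += -0.5 * (np.log(2 * np.pi * sigma2) + (y[i] - mu) ** 2 / sigma2)
    return -score / n
```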

10.
With cointegration tests often being oversized under time-varying error variance, it is possible, if not likely, to confuse error variance non-stationarity with cointegration. This paper takes an instrumental variable (IV) approach to establish individual-unit test statistics for no cointegration that are robust to variance non-stationarity. The sign of a fitted departure from long-run equilibrium is used as an instrument when estimating an error-correction model. The resulting IV-based test is shown to follow a chi-square limiting null distribution irrespective of the variance pattern of the data-generating process. In spite of this, the test proposed here has, unlike previous work relying on instrumental variables, competitive local power against sequences of local alternatives in 1/T-neighbourhoods of the null. The standard limiting null distribution motivates using the single-unit tests in a multiple-testing approach for cointegration in multi-country data sets, combining P-values from individual units. Simulations suggest good performance of the single-unit and multiple testing procedures under various plausible designs of cross-sectional correlation and cross-unit cointegration in the data. An application to the equilibrium relationship between short- and long-term interest rates illustrates the dramatic differences between the results of robust and non-robust tests.
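
A stripped-down sketch of the sign-instrument test, omitting deterministic terms and lagged differences that an application would include:

```python
import numpy as np
from scipy import stats

def iv_no_cointegration_test(y, x):
    # Step 1: fitted departure from long-run equilibrium
    X = np.column_stack([np.ones_like(x), x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    z = y - X @ b
    # Step 2: error-correction regression dy_t = rho*z_{t-1} + u_t, with the
    # *sign* of the equilibrium error as instrument for z_{t-1}
    dy, z1 = np.diff(y), z[:-1]
    w = np.sign(z1)
    rho = (w @ dy) / (w @ z1)                 # IV estimate
    u = dy - rho * z1
    se = np.sqrt((u @ u / len(u)) * (w @ w)) / abs(w @ z1)
    wald = (rho / se) ** 2                    # chi-square(1) under the null
    return wald, stats.chi2.sf(wald, df=1)
```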

11.
Quantile regression for dynamic panel data with fixed effects
This paper studies a quantile regression dynamic panel model with fixed effects. Panel data fixed effects estimators are typically biased in the presence of lagged dependent variables as regressors. To reduce the dynamic bias, we suggest the use of the instrumental variables quantile regression method of Chernozhukov and Hansen (2006) along with lagged regressors as instruments. In addition, we describe how to employ the estimated models for prediction. Monte Carlo simulations show evidence that the instrumental variables approach sharply reduces the dynamic bias, and the empirical levels for prediction intervals are very close to nominal levels. Finally, we illustrate the procedures with an application to forecasting output growth rates for 18 OECD countries.
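
A sketch of the grid-search IVQR step with a further lag as instrument, omitting the fixed effects for brevity:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

def ivqr_rho(y, y_lag, y_lag2, tau=0.5, grid=np.linspace(-0.95, 0.95, 39)):
    # Chernozhukov-Hansen (2006) style IVQR for the autoregressive coefficient:
    # grid over rho, run a quantile regression of y - rho*y_lag on the
    # instrument (a further lag), and keep the rho that makes the instrument's
    # coefficient closest to zero.
    Z = sm.add_constant(y_lag2)
    best_rho, best_gap = None, np.inf
    for rho in grid:
        fit = QuantReg(y - rho * y_lag, Z).fit(q=tau)
        gap = abs(np.asarray(fit.params)[1])   # coefficient on the instrument
        if gap < best_gap:
            best_rho, best_gap = rho, gap
    return best_rho
```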

12.
This paper elaborates, using a functional analysis approach, the mathematical aspects of scale-dependent input-output models developed by Evans, Sandberg, Lahiri and Chander. All the basic results are established in abstract spaces of any dimension, and for the existence theorems we do not require continuity of the input function. A further generalization is made to a model with alternative processes. An application is also made to a model with indivisible commodities.
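
For the linear Leontief special case the balance equation has a closed form; a successive-approximation sketch covers a scale-dependent input function under a contraction-type assumption:

```python
import numpy as np

def solve_io(H, d, x0, tol=1e-10, max_iter=10_000):
    # Successive approximation for the balance equation x = H(x) + d, where H
    # is a (possibly nonlinear, scale-dependent) input function. Convergence
    # requires H to be contraction-like (an assumption of this sketch).
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = H(x) + d
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence")

# Linear Leontief special case H(x) = A @ x recovers x = (I - A)^{-1} d.
```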

13.
This paper introduces a novel meta-learning algorithm for time series forecast model performance prediction. We model the forecast error as a function of time series features calculated from historical time series with an efficient Bayesian multivariate surface regression approach. The minimum predicted forecast error is then used to identify an individual model or a combination of models to produce the final forecasts. It is well known that the performance of most meta-learning models depends on the representativeness of the reference dataset used for training. We therefore augment the reference dataset with a feature-based time series simulation approach, namely GRATIS, to generate a rich and representative time series collection. The proposed framework is tested using the M4 competition data and compared against commonly used forecasting approaches. Our approach provides comparable performance to other model selection and combination approaches, but at a lower computational cost and with a higher degree of interpretability, which is important for supporting decisions. We also provide useful insights regarding which forecasting models are expected to work better for particular types of time series, the intrinsic mechanisms of the meta-learners, and how the forecasting performance is affected by various factors.
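
A sketch of the pipeline with a deliberately small feature set and gradient boosting standing in for the Bayesian multivariate surface regression; the reference collection would be GRATIS-augmented:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def ts_features(y):
    # A few illustrative series features (the paper uses a richer feature set)
    d = np.diff(y)
    return [y.mean(), y.std(), d.std(), np.corrcoef(y[:-1], y[1:])[0, 1]]

def fit_meta_learners(reference_series, errors_by_model):
    # errors_by_model[name][i] = forecast error of that model on series i
    F = np.array([ts_features(y) for y in reference_series])
    return {name: GradientBoostingRegressor().fit(F, e)
            for name, e in errors_by_model.items()}

def select_model(meta, y_new):
    f = np.array(ts_features(y_new)).reshape(1, -1)
    predicted_error = {name: reg.predict(f)[0] for name, reg in meta.items()}
    return min(predicted_error, key=predicted_error.get)  # min predicted error
```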

14.
Efficiency measurement with multiple outputs and multiple inputs
This paper discusses modeling technical and allocative inefficiencies in both cost-minimizing and profit-maximizing frameworks, with special emphasis on multiple inputs and multiple outputs. Both primal and dual models are considered for this purpose. In the primal approach we use separable output and input functions (the constant elasticity of transformation output function and the Cobb-Douglas input function). The dual models assume translog cost or profit functions. Technical inefficiency is assumed to be random in the cross-sectional models, and a fixed firm-specific parameter in the panel data models. Allocative inefficiencies are always treated as input-specific parameters. We derive exact relations linking technical and allocative inefficiencies to cost and profit when the underlying technology is represented by a flexible functional form such as the translog. It is shown that appending a one-sided homoscedastic error term to model technical inefficiency, or neglecting technical inefficiency altogether, in a translog profit function results in model misspecification and inconsistent parameter estimates.
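
A sketch of the production-side counterpart, the normal-half-normal stochastic frontier likelihood (the paper's translog cost/profit systems with allocative inefficiency are considerably more involved):

```python
import numpy as np
from scipy import optimize, stats

def neg_loglik(theta, y, X):
    # Normal-half-normal stochastic frontier: y = X b + v - u,
    # v ~ N(0, s_v^2), u ~ |N(0, s_u^2)|; parameterized via
    # sigma^2 = s_v^2 + s_u^2 and lambda = s_u / s_v (logs for positivity).
    k = X.shape[1]
    b, sigma, lam = theta[:k], np.exp(theta[k]), np.exp(theta[k + 1])
    eps = y - X @ b
    ll = (np.log(2 / sigma) + stats.norm.logpdf(eps / sigma)
          + stats.norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

# Usage sketch: b0 from OLS as starting values, then
#   res = optimize.minimize(neg_loglik, np.r_[b0, 0.0, 0.0], args=(y, X))
```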

15.
By-elections, or special elections, play an important role in many democracies, but whilst there are multiple forecasting models for national elections, there are no such models for multiparty by-elections. Multiparty by-elections present particular analytic problems related to the compositional character of the data and the structural zeros that arise where parties fail to stand. I model party vote shares using Dirichlet regression, a technique suited for compositional data analysis. After identifying predictor variables from a broader set of candidate variables, I estimate a Dirichlet regression model using data from all post-war by-elections in the UK (n=468). The cross-validated error of the model is comparable to the error of costly and infrequent by-election polls (MAE: 4.0, compared to 3.6 for polls). The steps taken in the analysis are in principle applicable to any system that uses by-elections to fill legislative vacancies.
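
A minimal maximum-likelihood sketch of log-link Dirichlet regression; handling of structural zeros, as the paper requires, is not shown:

```python
import numpy as np
from scipy import optimize
from scipy.special import gammaln

def dirichlet_loglik(b, X, Y):
    # Log-link Dirichlet regression: alpha[i, j] = exp(X[i] @ B[:, j])
    B = b.reshape(X.shape[1], Y.shape[1])
    alpha = np.exp(X @ B)
    return np.sum(gammaln(alpha.sum(axis=1)) - gammaln(alpha).sum(axis=1)
                  + ((alpha - 1.0) * np.log(Y)).sum(axis=1))

def fit_dirichlet_regression(X, Y):
    # Y rows are vote-share compositions: strictly positive, summing to one.
    k, m = X.shape[1], Y.shape[1]
    res = optimize.minimize(lambda b: -dirichlet_loglik(b, X, Y),
                            np.zeros(k * m), method="BFGS")
    return res.x.reshape(k, m)
```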

16.
In this paper we propose an approach to both estimate and select unknown smooth functions in an additive model with potentially many functions. Each function is written as a linear combination of basis terms, with coefficients regularized by a proper linearly constrained Gaussian prior. Given any potentially rank deficient prior precision matrix, we show how to derive linear constraints so that the corresponding effect is identified in the additive model. This allows for the use of a wide range of bases and precision matrices in priors for regularization. By introducing indicator variables, each constrained Gaussian prior is augmented with a point mass at zero, thus allowing for function selection. Posterior inference is calculated using Markov chain Monte Carlo, and the smoothness in the functions results both from shrinkage through the constrained Gaussian prior and from model averaging. We show how using non-degenerate priors on the shrinkage parameters enables the application of substantially more computationally efficient sampling schemes than would otherwise be the case. We show the favourable performance of our approach when compared to two contemporary alternative Bayesian methods. To highlight the potential of our approach in high-dimensional settings we apply it to estimate two large seemingly unrelated regression models for intra-day electricity load. Both models feature a variety of different univariate and bivariate functions which require different levels of smoothing, and where component selection is meaningful. Priors for the error disturbance covariances are selected carefully and the empirical results provide a substantive contribution to the electricity load modelling literature in their own right.
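
A sketch of one way to derive identifying linear constraints from a rank-deficient prior precision matrix, via its null space:

```python
import numpy as np

def identification_constraints(P, tol=1e-10):
    # Given a (possibly rank-deficient) symmetric prior precision matrix P for
    # the basis coefficients, the unpenalized directions are the eigenvectors
    # with (near-)zero eigenvalue; constraining the coefficients to be
    # orthogonal to them identifies the effect in the additive model.
    w, V = np.linalg.eigh(P)
    C = V[:, w < tol]            # null-space basis
    return C.T                   # constraints: C.T @ beta = 0
```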

17.
This paper provides a control function estimator to adjust for endogeneity in the triangular simultaneous equations model when no exclusion restrictions are available to generate suitable instruments. Our approach is to exploit the dependence of the errors on exogenous variables (e.g. heteroscedasticity) to adjust the conventional control function estimator. The form of the error dependence on the exogenous variables is subject to restrictions, but is not parametrically specified. In addition to providing the estimator and deriving its large-sample properties, we present simulation evidence indicating that the estimator works well.
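
An illustrative two-step sketch only: a conventional control function with a heteroscedasticity-based standardization of the first-stage residual; the paper leaves the form of the error dependence unspecified and derives a different adjustment:

```python
import numpy as np

def control_function_fit(y, x, z):
    # First stage: endogenous x on exogenous z
    Z = np.column_stack([np.ones_like(z), z])
    pi, *_ = np.linalg.lstsq(Z, x, rcond=None)
    v = x - Z @ pi                                   # first-stage residuals
    # Exploit dependence of the error variance on z (here a log-linear
    # skedastic function -- an assumption of this sketch) to rescale v
    s, *_ = np.linalg.lstsq(Z, np.log(v**2 + 1e-12), rcond=None)
    ctrl = v / np.exp(0.5 * (Z @ s))                 # variance-adjusted control
    # Second stage: outcome on exogenous z, endogenous x, and the control
    W = np.column_stack([Z, x, ctrl])
    beta, *_ = np.linalg.lstsq(W, y, rcond=None)
    return beta                                      # beta[2]: effect of x
```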

18.
We propose two methods to choose the variables to be used in the estimation of the structural parameters of a singular DSGE model. The first selects the vector of observables that optimizes parameter identification; the second selects the vector that minimizes the informational discrepancy between the singular and non-singular models. An application to a standard model is discussed and the estimation properties of different setups are compared. Practical suggestions for applied researchers are provided.

19.
20.
The forecast of the real estate market is an important part of studying the Chinese economic market. Most existing methods impose strict requirements on input variables and involve complex parameter estimation. To obtain better prediction results, a modified Holt's exponential smoothing (MHES) method is proposed to predict housing prices using historical data. Unlike traditional exponential smoothing models, MHES sets different weights on historical data, and its smoothing parameters depend on the sample size. Meanwhile, the proposed MHES incorporates the whale optimization algorithm (WOA) to obtain the optimal parameters. Housing price data from Kunming, Changchun, Xuzhou and Handan were used to test the performance of the model. The results for the four cities indicate that the proposed method has a smaller prediction error and shorter computation time than traditional models. Therefore, WOA-MHES can be applied efficiently to housing price forecasting and can be a reliable tool for market investors and policy makers.
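
A sketch of Holt's linear-trend smoothing with parameters chosen by a global optimizer, using scipy's differential evolution as a stand-in for WOA; MHES's sample-size-dependent weighting is not reproduced:

```python
import numpy as np
from scipy.optimize import differential_evolution

def holt_sse(params, y):
    # One-step-ahead squared-error loss for Holt's linear-trend smoothing
    alpha, beta = params
    level, trend = y[0], y[1] - y[0]
    sse = 0.0
    for t in range(1, len(y)):
        sse += (y[t] - (level + trend)) ** 2
        new_level = alpha * y[t] + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return sse

y = 100 + np.cumsum(np.random.randn(60))   # stand-in for a housing price series
res = differential_evolution(holt_sse, bounds=[(0.01, 0.99)] * 2, args=(y,))
alpha_opt, beta_opt = res.x                # WOA would search the same bounds
```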
