Similar Documents
A total of 20 similar documents were found.
1.
An econometric methodology is proposed for reconciling inaccurate measures of latent data which are subject to accounting constraints. The method deals with the case in which the measurement errors are serially correlated, generalizing previous contributions. A class of efficient estimators is derived for the latent data. Consistent estimators, based on a linear regression procedure, are obtained for the weight matrices applied to the observed information, together with confidence interval estimators for these weight matrices. Approximate confidence intervals are suggested for the latent data themselves, together with specification tests for the assumptions underlying the procedure. An application of the proposed method is made to U.K. Gross Domestic Product in constant prices for 1958Q–1989Q4.
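As a rough illustration of the balancing step this abstract builds on, the sketch below performs a classic constrained least-squares reconciliation: noisy measures are adjusted so that the accounting identities hold exactly, with adjustments weighted by an assumed error covariance. The toy numbers, the diagonal covariance, and the two constraints are illustrative assumptions; the paper's contribution (serially correlated measurement errors and estimated weight matrices) is not reproduced here.

```python
import numpy as np

# Toy reconciliation: two noisy measures of the same aggregate plus an
# accounting identity.  The balanced estimate minimizes (x - m)' V^{-1} (x - m)
# subject to A x = b (standard least-squares balancing; the paper generalizes
# the choice of V to serially correlated measurement errors).

m = np.array([101.3, 98.9, 49.8, 50.2])   # observed: GDP(E), GDP(I), C, I (made-up values)
V = np.diag([4.0, 2.0, 1.0, 1.0])         # assumed measurement-error covariance

# Constraints: GDP(E) = GDP(I) and GDP(E) = C + I
A = np.array([[1.0, -1.0,  0.0,  0.0],
              [1.0,  0.0, -1.0, -1.0]])
b = np.zeros(2)

# x_hat = m + V A' (A V A')^{-1} (b - A m)
lam = np.linalg.solve(A @ V @ A.T, b - A @ m)
x_hat = m + V @ A.T @ lam

print("balanced estimates:", x_hat.round(3))
print("constraints hold:", np.allclose(A @ x_hat, b))
```

Measures with larger assumed error variance absorb more of the adjustment, which is the intuition behind the weight matrices discussed in the abstract.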

2.
A stochastic frontier production function is defined for panel data on firms, in which the non-negative technical inefficiency effects are assumed to be a function of firm-specific variables and time. The inefficiency effects are assumed to be independently distributed as truncations of normal distributions with constant variance, but with means which are a linear function of observable variables. This panel data model is an extension of recently proposed models for inefficiency effects in stochastic frontiers for cross-sectional data. An empirical application of the model is obtained using up to ten years of data on paddy farmers from an Indian village. The null hypotheses, that the inefficiency effects are not stochastic or do not depend on the farmer-specific variables and time of observation, are rejected for these data.
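To make the data-generating process concrete, the following sketch simulates a panel frontier of this form: symmetric noise plus an inefficiency term drawn from a normal distribution truncated at zero whose pre-truncation mean depends on a firm covariate and time. All parameter values and variable names are illustrative assumptions, not estimates from the paddy-farmer application.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)
n_firms, n_years = 50, 10

# Frontier: log y_it = b0 + b1*log x_it + v_it - u_it
b0, b1, sigma_v, sigma_u = 1.0, 0.6, 0.15, 0.3
# Inefficiency mean depends on a firm-specific variable z_i and time t
d0, d1, d2 = 0.2, 0.5, -0.02

logx = rng.normal(2.0, 0.5, (n_firms, n_years))
z = rng.uniform(0, 1, (n_firms, 1))
t = np.arange(1, n_years + 1)[None, :]

mu = d0 + d1 * z + d2 * t                       # mean of the pre-truncation normal
a = (0.0 - mu) / sigma_u                        # standardized truncation point at zero
u = truncnorm.rvs(a, np.inf, loc=mu, scale=sigma_u, random_state=0)
v = rng.normal(0.0, sigma_v, (n_firms, n_years))

logy = b0 + b1 * logx + v - u
tech_eff = np.exp(-u)                           # technical efficiency in (0, 1]
print("mean technical efficiency:", tech_eff.mean().round(3))
```

Estimation of such a model would maximize the corresponding likelihood jointly over the frontier and inefficiency-mean parameters, as described in the abstract.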

3.
Forecasting demand during the early stages of a product's life cycle is a difficult but essential task for the purposes of marketing and policymaking. This paper introduces a procedure to derive accurate forecasts for newly introduced products for which limited data are available. We begin with the assumption that the consumer reservation price is related to the timing with which the consumer adopts the product. The model is estimated using reservation price data derived through a consumer survey, and the forecast is updated with sales data as they become available using Bayes's rule. The proposed model's forecasting performance is compared with that of benchmark models (i.e., Bass model, logistic growth model, and a Bayesian model based on analogy) using 23 quarters' worth of data on South Korea's broadband Internet services market. The proposed model outperforms all benchmark models in both prelaunch and postlaunch forecasting tests, supporting the thesis that consumer reservation price can be used to forecast demand for a new product before or shortly after product launch.
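The updating mechanism described here — revise a diffusion forecast with Bayes's rule as sales arrive — can be sketched with a simple grid posterior over Bass-model parameters. The prior, market size, and noise model below are crude assumptions standing in for the paper's reservation-price-based prior; only the general Bayesian-updating idea is illustrated.

```python
import numpy as np

# Bass cumulative adoption share: F(t) = (1 - exp(-(p+q)t)) / (1 + (q/p) exp(-(p+q)t))
def bass_cdf(t, p, q):
    e = np.exp(-(p + q) * t)
    return (1.0 - e) / (1.0 + (q / p) * e)

M = 10_000_000                        # assumed market potential (subscribers)
t_obs = np.arange(1, 9) / 4.0         # 8 observed quarters, measured in years
sales_obs = M * bass_cdf(t_obs, 0.03, 0.45)                 # pretend "true" cumulative sales
sales_obs *= np.exp(np.random.default_rng(1).normal(0, 0.05, t_obs.size))

# Prior over (p, q) on a grid -- in the paper this prior comes from the
# reservation-price survey; here it is just a broad uniform assumption.
p_grid = np.linspace(0.005, 0.08, 60)
q_grid = np.linspace(0.10, 0.90, 60)
P, Q = np.meshgrid(p_grid, q_grid, indexing="ij")
log_prior = np.zeros_like(P)

# Gaussian likelihood on log cumulative sales
sigma = 0.05
pred = M * bass_cdf(t_obs[None, None, :], P[..., None], Q[..., None])
log_lik = -0.5 * (((np.log(sales_obs) - np.log(pred)) / sigma) ** 2).sum(axis=-1)

log_post = log_prior + log_lik
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Posterior-mean forecast of cumulative sales at quarter 12
t_new = 12 / 4.0
forecast = (post * M * bass_cdf(t_new, P, Q)).sum()
print(f"posterior-mean cumulative sales forecast at quarter 12: {forecast:,.0f}")
```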

4.
A system of regression equations for analyzing unbalanced panel data with random heterogeneity in intercepts and coefficients is considered. A maximum likelihood (ML) procedure for joint estimation of all parameters is described. Since its implementation for numerical computation is complicated, simplified procedures are presented. The simplifications essentially concern the estimation of the covariance matrices of the random coefficients. The application and ‘anatomy’ of the proposed algorithm for modified ML estimation are illustrated using panel data on output, inputs and costs for 111 manufacturing firms observed for up to 22 years.
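A single-equation version of the random-intercept, random-coefficient setup can be estimated off the shelf, which helps fix ideas even though the paper's own modified ML algorithm for a full system is not reproduced. The sketch below simulates an unbalanced firm panel and fits it with statsmodels' mixed linear model; all names and parameter values are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
rows = []
for firm in range(60):
    T = rng.integers(5, 23)                      # unbalanced: 5-22 years per firm
    a_i = 1.0 + rng.normal(0, 0.4)               # random intercept
    b_i = 0.7 + rng.normal(0, 0.15)              # random coefficient on log input
    logx = rng.normal(3.0, 0.6, T)
    logy = a_i + b_i * logx + rng.normal(0, 0.2, T)
    rows += [{"firm": firm, "logx": x, "logy": y} for x, y in zip(logx, logy)]
df = pd.DataFrame(rows)

# Random intercept and random slope on logx, estimated jointly by (RE)ML
model = sm.MixedLM.from_formula("logy ~ logx", groups="firm",
                                re_formula="~logx", data=df)
result = model.fit()
print(result.summary())
```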

5.
We are concerned with the problem of spot volatility estimation in the presence of microstructure noise. We introduce an estimator based on the technique of multi‐step regularization. A preliminary form for such an estimator was proposed in Ogawa (2008) and was shown to work in a real‐time manner. However, the main drawback of this scheme is that it needs a lot of observation data. The aim of the present paper is to introduce an improvement to this scheme, such that the modified estimator can work more efficiently and with a data set of smaller size. The technical aspects of implementation of the proposed scheme and its performance on simulated data are analysed. The scheme is tested against other spot volatility estimators, namely a realized volatility type estimator, the Fourier estimator and three kernel estimators.
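For context, the realized-volatility-type benchmark mentioned at the end of the abstract can be sketched as a rolling local realized variance computed on sparsely sampled returns, applied to a simulated noisy price path. The simulation settings (noise size, sampling step, window) are assumptions; the paper's multi-step regularization estimator is not shown.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 23_400                                    # one trading day of 1-second returns
dt = 1.0 / n

# Simulate a log-price with time-varying (spot) volatility plus i.i.d. noise
t = np.linspace(0, 1, n + 1)
spot_vol = 0.2 + 0.1 * np.sin(2 * np.pi * t)            # "true" spot volatility path
dW = rng.normal(0, np.sqrt(dt), n)
log_price = np.concatenate([[0.0], np.cumsum(spot_vol[:-1] * dW)])
noisy = log_price + rng.normal(0, 1e-4, n + 1)          # microstructure noise

# Naive spot estimate: local realized variance over a rolling window,
# computed on sparsely sampled returns to dampen the noise bias.
step, window = 60, 30                                   # sample every minute, 30-minute window
ret = np.diff(noisy[::step])
local_rv = np.array([np.sum(ret[i - window:i] ** 2)
                     for i in range(window, ret.size)])
spot_est = np.sqrt(local_rv / (window * step * dt))     # volatility per unit of [0, 1] time

print("mean estimated spot vol:", spot_est.mean().round(3),
      " vs true mean:", spot_vol.mean().round(3))
```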

6.
This paper addresses M-estimation of conditional mean functions when observations are missing at random. The usual approach of correcting for missing data, when the missing data mechanism is ignorable, is inverse probability weighting (IPW). An alternative semiparametric M-estimator which involves local polynomial matching techniques is proposed and its asymptotic distribution is derived. Like IPW, the proposed estimation approach has a double robustness property for the estimation of unconditional means. Monte Carlo evidence suggests slightly better finite sample properties of the semiparametric M-estimator relative to IPW. A version of the proposed estimator is applied to estimate the impact of noncognitive skills on wages in Germany for two different educational treatment regimes.
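The IPW benchmark referred to here is straightforward to sketch: estimate the response propensity, then reweight the observed outcomes by its inverse. The simulation below is a minimal illustration for an unconditional mean under missingness at random; the data-generating process and variable names are assumptions, and the paper's local-polynomial semiparametric M-estimator is not implemented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 5_000

# Covariate x drives both the outcome and the probability of being observed (MAR)
x = rng.normal(0, 1, n)
y = 2.0 + 1.5 * x + rng.normal(0, 1, n)
p_obs = 1 / (1 + np.exp(-(0.5 + 1.2 * x)))           # true response propensity
observed = rng.uniform(size=n) < p_obs

# Step 1: estimate the propensity with a logit of the observation indicator on x
logit = sm.Logit(observed.astype(float), sm.add_constant(x)).fit(disp=0)
p_hat = logit.predict(sm.add_constant(x))

# Step 2: inverse-probability-weighted mean of y over the observed cases
ipw_mean = np.sum(observed * y / p_hat) / np.sum(observed / p_hat)

print("complete-case mean:", y[observed].mean().round(3))   # biased under MAR
print("IPW mean:          ", ipw_mean.round(3))             # close to the true mean of 2.0
```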

7.
The inflation rate is a key economic indicator for which forecasters are constantly seeking to improve the accuracy of predictions, so as to enable better macroeconomic decision making. This paper presents a novel approach which exploits auxiliary information contained within inflation forecasts to develop a new and improved forecast for inflation by modeling with Multivariate Singular Spectrum Analysis (MSSA). Unlike other forecast combination techniques, the key feature of the proposed approach is its use of forecasts, i.e. data into the future, within the modeling process, from which auxiliary information is extracted to generate a new and improved forecast. We consider real data on consumer price inflation in the UK, obtained from the Office for National Statistics. A variety of parametric and nonparametric models are then used to generate univariate forecasts of inflation. Thereafter, the best univariate forecast is used as auxiliary information within the MSSA model alongside historical data for UK consumer price inflation, and a new multivariate forecast is generated. We find compelling evidence that the proposed approach generates more accurate medium- to long-term inflation forecasts for the UK relative to the competing models. Finally, in the discussion we also consider Google Trends forecasts for inflation within the proposed framework.

8.
Ecological Economics, 2007, 60(4): 406–418
Sustainability indicators based on local data provide a practical method to monitor progress towards sustainable development. However, since many conflicting frameworks have been proposed for developing indicators, it is unclear how best to collect these data. The purpose of this paper is to analyse the literature on developing and applying sustainability indicators at local scales in order to develop a methodological framework that summarises best practice. First, two ideological paradigms are outlined: one that is expert-led and top-down, and one that is community-based and bottom-up. Second, the paper assesses the methodological steps proposed in each paradigm to identify, select and measure indicators. Finally, the paper concludes by proposing a learning process that integrates best practice for stakeholder-led local sustainability assessments. By integrating approaches from different paradigms, the proposed process offers a holistic approach for measuring progress towards sustainable development. It emphasizes the importance of participatory approaches in setting the context for sustainability assessment at local scales, but stresses the role of expert-led methods in indicator evaluation and dissemination. Research findings from around the world are used to show how the proposed process can be used to develop quantitative and qualitative indicators that are both scientifically rigorous and objective while remaining easy for communities to collect and interpret.

9.
In this paper, bounds of the Gini index, based on grouped data, are proposed assuming sparse information on mean incomes in the sense that data on either the overall mean income or some of the group mean incomes are not reported. It turns out that the proposed bounds are identical to those proposed by other authors that have more stringent information requirements with respect to mean incomes.
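For orientation, the sketch below computes simple grouped-data Gini bounds under the usual, more demanding information set (non-overlapping groups with known population and income shares): the between-group Gini as a lower bound and a crude upper bound that adds the maximum possible within-group contribution. These are standard bounds used for illustration only, not the paper's bounds under sparse mean-income information; the quintile shares are made up.

```python
import numpy as np

# Grouped income data (groups sorted from poorest to richest):
# population share p_i and income share s_i of each group (illustrative values).
p = np.array([0.2, 0.2, 0.2, 0.2, 0.2])      # quintiles
s = np.array([0.05, 0.10, 0.15, 0.25, 0.45])

# Cumulative income shares define the grouped (piecewise-linear) Lorenz curve
L = np.concatenate([[0.0], np.cumsum(s)])

# Lower bound: Gini of the piecewise-linear Lorenz curve,
# equivalent to assuming complete equality within each group.
gini_lower = 1.0 - np.sum(p * (L[:-1] + L[1:]))

# Crude upper bound for non-overlapping groups: with G = G_between + sum_i p_i s_i G_i
# and within-group Ginis G_i < 1, the within-group part is at most sum_i p_i s_i.
gini_upper = gini_lower + np.sum(p * s)

print(f"Gini bounds from grouped data: [{gini_lower:.3f}, {gini_upper:.3f}]")
```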

10.
A class of smooth transition momentum-threshold autoregressive (ST–MTAR) tests is proposed to allow testing of the unit root hypothesis against an alternative of asymmetric adjustment about a smooth nonlinear trend. Monte Carlo simulation is employed to derive finite-sample critical values for the proposed test and illustrate its attractive power properties against a range of stationary alternatives. The empirical relevance of the ST–MTAR test is highlighted via an application to aggregate house price data for the UK. Interestingly, house prices are found to exhibit structural change characterized by a fitted logistic smooth transition process, with the newly proposed ST–MTAR test providing the most significant results among the alternative smooth transition unit root tests available.
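A two-step sketch of a test of this type is given below, under the assumption that it proceeds like related smooth transition unit root tests: fit a logistic smooth transition in the trend by nonlinear least squares, then run a momentum-threshold AR regression on the residuals and form the F-statistic for the joint unit-root null. The functional form, alignment of lags, and simulated series are assumptions, and the resulting F-statistic must be compared with simulated critical values, as the abstract notes.

```python
import numpy as np
from scipy.optimize import curve_fit
import statsmodels.api as sm

def fit_st_mtar(y):
    """Sketch: (1) fit a logistic smooth transition trend by NLS,
    (2) run an MTAR regression on the residuals.  The F-statistic for
    rho1 = rho2 = 0 must be compared with simulated critical values."""
    t = np.arange(len(y), dtype=float)

    # Step 1: y_t = a1 + a2 / (1 + exp(-g*(t - m*T))) + e_t
    def st_trend(t, a1, a2, g, m):
        return a1 + a2 / (1.0 + np.exp(-g * (t - m * len(t))))

    p0 = [y[0], y[-1] - y[0], 0.1, 0.5]
    params, _ = curve_fit(st_trend, t, y, p0=p0, maxfev=10_000)
    e = y - st_trend(t, *params)

    # Step 2: MTAR regression  de_t = rho1*I_t*e_{t-1} + rho2*(1-I_t)*e_{t-1} + u_t
    de = np.diff(e)
    e_lag = e[1:-1]
    de_lag = de[:-1]
    indicator = (de_lag >= 0).astype(float)          # momentum-threshold Heaviside
    X = np.column_stack([indicator * e_lag, (1 - indicator) * e_lag])
    ols = sm.OLS(de[1:], X).fit()
    f_stat = ols.f_test("x1 = 0, x2 = 0").fvalue
    return params, float(np.squeeze(f_stat))

# Illustration on a simulated series with a logistic shift in trend
rng = np.random.default_rng(5)
n = 200
trend = 10 + 5 / (1 + np.exp(-0.08 * (np.arange(n) - 100)))
y = trend + 0.1 * np.cumsum(rng.normal(0, 0.3, n)) + rng.normal(0, 0.3, n)
params, f_stat = fit_st_mtar(y)
print("ST trend parameters:", np.round(params, 3), " MTAR F-statistic:", round(f_stat, 2))
```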

11.
We derive tests for persistent effects in a general linear dynamic panel data context. Two sources of persistent behavior are considered: time-invariant unobserved factors (captured by an individual random effect) and dynamic persistence or “state dependence” (captured by autoregressive behavior). We use a maximum likelihood framework to derive a family of tests that help researchers learn whether persistence is due to individual heterogeneity, a dynamic effect, or both. The proposed tests have power only in the direction they are designed to detect; that is, they are locally robust to the presence of alternative sources of persistence and are consequently able to identify which source of persistence is active. A Monte Carlo experiment explores the finite sample performance of the proposed procedures. The tests are applied to a panel data series of real GDP growth for the period 1960–2005.

12.
A main drawback of existing fuzzy time series forecasting methods is that they lack a persuasive basis for determining the universe of discourse and the length of intervals. Two approaches are proposed to overcome this problem; they are more objective and reasonable, improving the determination of the universe of discourse, the length of intervals and the membership functions of the fuzzy time series. The first approach uses the Minimize Entropy Principle Approach (MEPA) to partition the universe of discourse and build membership functions, and the second uses the Trapezoid Fuzzification Approach (TFA). Monthly expenditure data from a company's IT project are used to evaluate the performance of the proposed approaches. The forecasting accuracies of the proposed approaches are better than those of previous methods.

13.
Latent Consideration Sets and Continuous Demand Systems
This paper develops a theoretically consistent continuous demand system model that incorporates latent, probabilistic consideration sets. In contrast to existing discrete choice consideration models, the proposed model is econometrically tractable with consumption data for many goods. The model’s empirical properties are illustrated with an 89-site recreation data set from the 1994 National Survey of Recreation and the Environment (NSRE). Parameter and welfare estimates suggest that the latent consideration set models fit the data better and may imply a bias-variance tradeoff relative to traditional models.

14.
Efficiency measurement using stochastic frontier models is well established in applied econometrics. However, no published work appears to be available on efficiency analysis with spatial data that allows for possible spatial dependence between regions. This article considers a stochastic frontier model with a decomposition of inefficiency into an idiosyncratic and a spatial, spillover component. Exact posterior distributions of parameters are derived, and computational schemes based on Gibbs sampling with data augmentation are proposed to conduct simulation-based inference and efficiency measurement. The new method is illustrated using production data for Italian regions (1970–1993). Clearly, further theoretical and empirical research on the subject would be of great interest.

15.
This paper investigates differences between the educational attainment of immigrants, children of immigrants and native-born individuals in Australia, using Australian Youth Survey (AYS) data combined with aggregate Australian Census data. Differences in educational attainment are decomposed into: (i) typical demographic and socio-economic sources common to all ethnic groups; (ii) unobserved region of residence and region of origin effects; and (iii) neighbourhood effects, such as the degree of concentration of particular ethnic groups in different neighbourhoods. A theoretical model incorporating these effects is proposed, but structural estimation is not possible for lack of appropriate data. Instead, a reduced form methodology is proposed and employed. The empirical results identify positive ethnic neighbourhood effects on high school completion and university enrolment for some immigrant groups in Australia, in particular first and second generation immigrants from Asia. The results indicate that it is not just the size of the ethnic network but the ‘quality’ of the network that is important.

16.
This article reviews and discusses the problem of choosing smoothing parameters and resampling schemes for specification tests in econometrics. While smoothing is used for the regularization of the non-specified parts of the null hypothesis and omnibus alternatives, resampling serves to determine the critical value. Several of the existing selection methods are discussed, implemented, and compared. This is done for cross-sectional data, using additivity testing as the example. Doubtless, all the problems considered here carry over to specification testing with dependent data. Intensive simulations illustrate that this is still an open problem that easily corrupts these tests in practice. Possible ways out of the dilemma are proposed.

17.
Cost data are a central aspect of eco-efficiency measures, either as a means to assess the value of production or, more directly, as one dimension of the efficiency ratio. Several aspects may affect the quality of cost data, among them definitions, time and space, and confidentiality issues. Somewhat surprisingly, cost data quality has so far received little attention in the field of sustainability and eco-efficiency. Even worse, perhaps, is the lack of tools suitable for cost data quality assessment and management. This paper discusses parameters that affect cost data quality and then proposes a pedigree matrix as a tool designed for managing cost data quality issues. The application of the matrix is described, also in combination with a previously proposed, and broadly used, pedigree matrix for environmental data quality management.

18.
Value Assessment of Agricultural Multifunctionality Based on a Multidimensional Evaluation Model
Lü Yao (吕耀). Economic Geography (《经济地理》), 2008, 28(4): 650–655
The concept of agricultural multifunctionality helps in comprehensively re-examining the many functions of agriculture and provides an analytical framework for their evaluation. This paper constructs a multidimensional evaluation model that treats the various agricultural functions as mutually independent yet conflicting. Taking into account data availability and the principles for selecting evaluation methods and indicators, a three-dimensional evaluation model is applied to quantitatively evaluate the food production, economic and ecological functions of Chinese agriculture. Factor analysis is used to extract the main factors influencing the three functions, and, according to the different combinations of the values of the three functions, hierarchical clustering divides Chinese agriculture into nine scenario types. The results show that the multidimensional evaluation model, making full use of existing statistical data, yields fairly accurate value assessments of the production, economic and ecological functions of agriculture, and that the simulation results and scenario clusters are broadly consistent with the current state of Chinese agricultural development. The multidimensional evaluation model is highly flexible and can be used for multi-objective value assessment at different scales in other fields.
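The extraction-then-clustering pipeline described above can be sketched with off-the-shelf tools: factor analysis on standardized function indicators followed by hierarchical clustering of regions into scenario types. The indicator names, the random data, and the choice of three factors and nine clusters below are illustrative assumptions, not the paper's data or results.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(6)
n_regions = 31

# Hypothetical indicators for the three agricultural functions
# (food production, economic, ecological) -- made-up data for illustration.
data = pd.DataFrame({
    "grain_output":      rng.normal(500, 150, n_regions),
    "sown_area":         rng.normal(4000, 1200, n_regions),
    "agri_gdp":          rng.normal(800, 300, n_regions),
    "rural_income":      rng.normal(6000, 2000, n_regions),
    "forest_cover":      rng.uniform(5, 60, n_regions),
    "fertilizer_per_ha": rng.normal(300, 90, n_regions),
})

# Step 1: extract the main factors behind the indicators
X = StandardScaler().fit_transform(data)
scores = FactorAnalysis(n_components=3, random_state=0).fit_transform(X)

# Step 2: hierarchical (agglomerative) clustering of regions into scenario types
labels = AgglomerativeClustering(n_clusters=9, linkage="ward").fit_predict(scores)
print(pd.Series(labels).value_counts().sort_index())
```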

19.
Because only limited historical data can be obtained in an emerging securities market, the future returns, risk and liquidity of securities cannot be forecasted precisely, and the investment environment is usually fuzzy and uncertain. To handle these imprecise data, this paper discusses a fuzzy multi-period portfolio optimization problem where the returns, risk, and liquidity of securities are represented by interval variables. Taking the return, risk, liquidity and diversification degree of the portfolio into consideration, an interval multi-period portfolio selection optimization model is proposed with the objective of maximizing the terminal wealth under constraints on the return, risk and diversification degree of the portfolio at each period. In the proposed model, a proportion entropy is employed to measure the diversification degree of the portfolio. Using fuzzy decision-making theory and a multi-objective programming approach, the proposed model is transformed into a crisp nonlinear program. An improved particle swarm optimization algorithm is then designed to solve it. Finally, a numerical example illustrates the application of the model and demonstrates the effectiveness of the designed algorithm.
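To show the kind of solver involved, the sketch below applies a plain particle swarm optimizer to a heavily simplified single-period stand-in: expected return is taken from interval midpoints and the proportion-entropy diversification requirement is enforced by a penalty. The interval data, penalty weight, entropy floor, and PSO settings are all assumptions; the paper's multi-period interval model and its improved PSO variant are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)
n_assets = 6

# Interval returns [lo, hi]; use midpoints as the "expected" return (crude simplification)
ret_lo = rng.uniform(0.00, 0.05, n_assets)
ret_hi = ret_lo + rng.uniform(0.02, 0.08, n_assets)
ret_mid = (ret_lo + ret_hi) / 2

def fitness(w):
    """Expected return minus a penalty if the proportion entropy (diversification)
    falls below a floor.  Weights are normalized to sum to one."""
    w = np.abs(w) + 1e-12
    w = w / w.sum()
    entropy = -np.sum(w * np.log(w))
    penalty = 100.0 * max(0.0, 1.0 - entropy)        # require entropy >= 1.0
    return w @ ret_mid - penalty

# Plain particle swarm optimization
n_particles, n_iter = 40, 200
pos = rng.uniform(0, 1, (n_particles, n_assets))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.uniform(size=pos.shape), rng.uniform(size=pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

w_opt = np.abs(gbest) / np.abs(gbest).sum()
print("weights:", w_opt.round(3), " expected return:", (w_opt @ ret_mid).round(4))
```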

20.
Risk Preferences, Production Risk and Firm Heterogeneity*
A new technique is proposed for deriving the risk preference function under production risk and expected utility of profit maximization. The derivation depends on neither a specific parametric form of the utility function nor any distribution of the error term representing production risk. The proposed risk preference function is flexible enough to test different types of risk behavior and symmetry of the output distribution. Furthermore, our production risk specification allows for inputs with positive and negative marginal risk. The econometric model accommodates production risk, risk preferences and firm heterogeneity simultaneously. Norwegian salmon farming data are used as an application.
