Similar Articles
20 similar articles found (search time: 437 ms)
1.
The article gives examples showing how musical notation can be used to register (complicated) developments, taking into account considerably more qualitative variables than we are used to; furthermore, the temporal element can be represented with a degree of precision that corresponds to our needs. For cross-sectional data, modifications of musical notation may be used. We must recognize, however, that there are also variables, perhaps important ones, that for several reasons cannot be registered with very high precision. Other complications are also hinted at. The method described is a working method for improving the basis for further quantitative analysis, rather than one suitable for presenting final research results.

2.
Graphic representation of complicated courses is often necessary to detect patterns that may be worth analysing. Examples show how musical notation, or modifications of it, may be used to register courses (or cross-sectional data) with more variables than usual. One can register courses whose components have known duration (and hence also simultaneities), with the time scale defined according to the data. One can also register sequences whose components have unknown duration. Finally, the method can be modified to suit cross-sectional data. It can be used to register a single case, but also a group of cases that are thereby rendered comparable. It is a method of registration, not of analysis, but one that may help prepare a refined analysis.

3.
Johanson, Eva. Quality and Quantity, 2001, 35(4): 429–443
Good graphic representations are a great help in seeing patterns in research results directly, and courses are no exception. A course, however, means that time must be a component of the representation. If the components are few, it is easy to draw conventional graphic representations over a time axis; if there are more than, say, 12–15 components under study, the result becomes unreadable. If we instead use musical notation, we can notate hundreds of components, with time given according to the scale that suits the data, and readability is preserved. The method is developed for two special cases: courses where the duration of components is known, and courses where the order of components is known but not their duration. Information is shown on one or several staves as something resembling a musical score. As a by-product, a modified musical notation is shown to be useful also for complicated cross-sections. These are working methods that facilitate final analysis by making directly readable what would otherwise exceed the range of vision.

4.
Factor analysis models are used in dimensionality reduction problems where the variability among observed variables can be described through a smaller number of unobserved latent variables. This approach is often used to estimate the multidimensionality of well-being. We employ factor analysis models and use the multivariate empirical best linear unbiased predictor (EBLUP) under a unit-level small area estimation approach to predict a vector of means of factor scores representing well-being for small areas. We compare this approach with the standard one, whereby small area estimation (univariate and multivariate) is used to estimate a dashboard of EBLUPs of the means of the original variables, which are then averaged. Our simulation study shows that the use of factor scores provides estimates with lower variability than weighted and simple averages of standardised multivariate EBLUPs and univariate EBLUPs. Moreover, we find that when the correlation in the observed data is taken into account before small area estimates are computed, multivariate modelling does not provide large improvements in the precision of the estimates over univariate modelling. We close with an application using the European Union Statistics on Income and Living Conditions data.
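As a rough illustration of the first step, a factor model can be fitted to observed well-being indicators and the resulting scores averaged by area. This is a minimal sketch on simulated data using plain area means; the paper's EBLUP machinery, survey structure, and weighting are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n, p, k = 300, 6, 2                     # units, indicators, latent factors
lam = rng.normal(size=(p, k))           # "true" loadings (unknown in practice)
f = rng.normal(size=(n, k))             # latent well-being factors
x = f @ lam.T + 0.3 * rng.normal(size=(n, p))   # observed indicators

fa = FactorAnalysis(n_components=k).fit(x)
scores = fa.transform(x)                # factor scores per unit

area = rng.integers(0, 10, size=n)      # hypothetical small-area labels
area_means = np.array([scores[area == a].mean(axis=0) for a in range(10)])
print(area_means.shape)                 # (10, 2): one score vector per area
```

Replacing the plain means with model-based EBLUPs is where the small area estimation machinery enters.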

5.
Regression analyses of cross-country economic growth data are complicated by two main forms of model uncertainty: the uncertainty in selecting explanatory variables and the uncertainty in specifying the functional form of the regression function. Most discussions in the literature address these problems independently, yet a joint treatment is essential. We present a new framework that makes such a joint treatment possible, using flexible nonlinear models specified by Gaussian process priors and addressing the variable selection problem by means of Bayesian model averaging. Using this framework, we extend the linear model to allow for parameter heterogeneity of the type suggested by new growth theory, while taking into account the uncertainty in selecting explanatory variables. Controlling for variable selection uncertainty, we confirm the evidence in favor of parameter heterogeneity presented in several earlier studies. However, controlling for functional form uncertainty, we find that the effects of many of the explanatory variables identified in the literature are not robust across countries and variable selections.
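The flexible nonlinear component can be sketched as a plain Gaussian process posterior mean under an RBF kernel, numpy only. The Bayesian model averaging over variable selections is omitted, and all data are simulated; this is not the authors' implementation.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential (RBF) kernel between two 1-D input arrays."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / ls**2)

rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 40)
y = np.sin(x) + 0.1 * rng.normal(size=40)   # a nonlinear "growth" relation

noise = 0.1**2
K = rbf(x, x) + noise * np.eye(40)          # prior covariance + noise
xs = np.linspace(-3, 3, 100)
mean = rbf(xs, x) @ np.linalg.solve(K, y)   # GP posterior mean at test points
print(mean.shape)                           # (100,)
```

The posterior mean recovers the nonlinear relation without committing to a parametric functional form, which is the point of placing a GP prior on the regression function.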

6.
The increasing richness of data encourages a comprehensive understanding of economic and financial activities, where variables of interest may include not only scalar (point-like) indicators but also functional (curve-like) and compositional (pie-like) ones. In many research topics these variables are also collected chronologically across individuals, which falls within the paradigm of longitudinal analysis. The complex nature of the data, however, makes it difficult to model these variables under the classic longitudinal framework. In this study we investigate the linear mixed-effects model (LMM) for such complex data. Different types of variables are first represented consistently using corresponding basis expansions, so that the classic LMM can be applied to them; this generalizes the theoretical framework of the LMM to complex data analysis. A number of simulation studies indicate the feasibility and effectiveness of the proposed model. We further illustrate its practical utility in a real data study on the Chinese stock market and show that the proposed method can enhance the performance and interpretability of regression for complex data with diversified characteristics.
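The representation step can be sketched as projecting a curve-like variable onto a small basis, so that its coefficients can enter a mixed model as ordinary covariates. The basis below is an arbitrary illustrative choice, not the one used in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)
grid = np.linspace(0, 1, 50)                      # observation grid of the curve
curve = np.sin(2 * np.pi * grid) + 0.05 * rng.normal(size=50)

# Six-function basis: cubic polynomial terms plus one sine/cosine pair.
B = np.column_stack([grid**d for d in range(4)] +
                    [np.sin(2 * np.pi * grid), np.cos(2 * np.pi * grid)])
coef, *_ = np.linalg.lstsq(B, curve, rcond=None)  # least-squares projection
print(coef.shape)                                 # (6,): the curve as 6 numbers
```

The 50-point curve is now summarized by six coefficients, which a standard LMM can treat like any other fixed or random covariates.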

7.
The survival pattern of Swedish commercial banks during the period 1830–1990 is studied by parametric and non-parametric event-history methods. In particular, we study how sensitive the conclusions are to the model used. The hazard is found to be inversely U-shaped, which means that models that cannot accommodate this type of hazard run into difficulties. Thus two of the most popular approaches in the analysis of event-history data, the Gompertz and Weibull models, produce misleading results regarding the development of banks' death risk over time. As regards the effect of explanatory variables on survival, on the other hand, most models are found to be robust: even with a misspecified baseline hazard, the estimated effects of the explanatory variables do not seem to be seriously wrong.
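The point about hazard shape can be checked directly: a Weibull hazard is monotone by construction, whereas, for example, a log-logistic hazard with shape parameter above one rises and then falls. The parameters below are illustrative, not estimates from the bank data.

```python
import numpy as np

t = np.linspace(0.05, 10, 400)

def weibull_hazard(t, k=1.5, lam=3.0):
    # h(t) = (k/lam) (t/lam)^(k-1): monotone in t for any k
    return (k / lam) * (t / lam) ** (k - 1)

def loglogistic_hazard(t, k=2.0, lam=3.0):
    # h(t) = (k/lam)(t/lam)^(k-1) / (1 + (t/lam)^k): inverse-U for k > 1
    z = (t / lam) ** k
    return (k / lam) * (t / lam) ** (k - 1) / (1 + z)

hw = weibull_hazard(t)
hl = loglogistic_hazard(t)
print(bool(np.all(np.diff(hw) > 0)))            # True: Weibull hazard is monotone
print(bool(0 < hl.argmax() < len(t) - 1))       # True: log-logistic peaks inside
```

A monotone family fitted to inverse-U-shaped data must misrepresent the risk at one end of the time range, which is exactly the failure mode described above.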

8.
Factor models have been applied extensively for forecasting when high-dimensional datasets are available, in which case the number of variables can be very large: typical dynamic factor models in central banks handle over 100 variables. However, a growing body of literature indicates that more variables do not necessarily lead to estimated factors with lower uncertainty or better forecasting results. This paper investigates the usefulness of partial least squares techniques, which take the variable to be forecast into account when reducing the dimension of the problem from a large number of variables to a smaller number of factors. We propose different approaches of dynamic sparse partial least squares as a means of improving forecast efficiency, simultaneously taking the target variable into account while forming an informative subset of predictors instead of extracting factors from all available ones. We use the well-known Stock and Watson database to check the forecasting performance of our approach. The proposed dynamic sparse models show good performance in improving efficiency compared with widely used factor methods in macroeconomic forecasting.

9.
New Nonlinear Approaches for the Adjustment and Updating of a SAM
Many structural relationships should be taken into account in any reasonable adjustment and updating process. These relationships are mainly represented by ratios of different types, such as technical coefficients or the proportion of a cell value relative to its row or column total. We believe that in many cases (either because of lack of information, or because the time elapsed since the estimation of a social accounting matrix is too short to allow for significant structural change) the updating process should try to minimize the relative deviation of the new coefficients from the initial ones in a homogeneous way. Homogeneity here means that the magnitude of this relative deviation is similar among the elements of each row or column, avoiding the concentration of changes in particular cells of the SAM. In this work we propose some new adjustment criteria designed to obtain a more homogeneous relative adjustment of the structural coefficients. These criteria combine the adjustment method proposed by Matuszewski et al. (1964) with other deviation functions. Each of the proposed criteria leads to a nonlinear optimization problem that is reformulated as a linear program. We test the usefulness of this proposal by comparing its results with those obtained by more standard approaches, and we show that under certain circumstances those approaches tend to produce a less homogeneous pattern of coefficient adjustment than the ones we put forward.
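For contrast, the classic RAS biproportional method, one of the standard updating approaches such proposals are compared against, can be sketched in a few lines. The 2×2 matrix and target totals are a toy example, not a real SAM.

```python
import numpy as np

def ras(a0, row_tot, col_tot, iters=200):
    """Classic RAS update: alternately rescale rows and columns of a base
    matrix a0 until its margins match the new row/column totals."""
    a = a0.astype(float).copy()
    for _ in range(iters):
        a *= (row_tot / a.sum(axis=1))[:, None]   # fix row sums
        a *= col_tot / a.sum(axis=0)              # fix column sums
    return a

a0 = np.array([[4.0, 1.0],
               [2.0, 3.0]])                        # base-year flows
new = ras(a0, row_tot=np.array([6.0, 4.0]), col_tot=np.array([7.0, 3.0]))
print(np.allclose(new.sum(axis=1), [6, 4]) and
      np.allclose(new.sum(axis=0), [7, 3]))        # True: margins match
```

RAS guarantees the new margins but says nothing about how evenly the relative coefficient changes are spread across cells, which is the gap the homogeneity criteria above address.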

10.
Dynamic programming is used to derive the optimal feedback solution to the minimization of a quadratic welfare loss-functional subject to a linear econometric model, when the value of some instrument variables cannot be optimized in every model period but only in certain ones. In this way the relative inertia of fiscal policy-making, compared with monetary policy-making, can be taken into account. Analytical expressions are derived for the optimal feedback rules and for the minimum expected losses, and iterative schemes are proposed for their numerical computation. It is suggested that a numerical analysis of the economic gain to be realized by adjusting fiscal policy variables more frequently than is actually done could yield valuable information for policy-makers.
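The backward recursion behind such optimal feedback rules can be sketched for a standard finite-horizon linear-quadratic problem. The matrices are illustrative, and the paper's restriction that some instruments move only in certain periods is not imposed here.

```python
import numpy as np

# Finite-horizon LQR: minimize sum of x'Qx + u'Ru subject to
# x_{t+1} = A x_t + B u_t, yielding feedback rules u_t = -K_t x_t.
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.5]])
Q = np.eye(2)            # state penalty in the welfare loss
R = np.array([[1.0]])    # instrument penalty

T = 20
P = Q.copy()             # terminal value-function matrix
gains = []
for _ in range(T):       # Riccati recursion, backward in time
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
print(gains[-1].shape)   # (1, 2): one instrument, two state variables
```

Restricting an instrument to move only in selected periods amounts to forcing its feedback gain to zero elsewhere, which changes the recursion in the way the paper derives analytically.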

11.
Multilevel growth curve models for repeated measures data have become increasingly popular and are a flexible tool for investigating longitudinal change in students' outcome variables. These models also allow the estimation of school effects on students' outcomes, though they make strong assumptions about the serial independence of level-1 residuals. This paper introduces a method that takes the serial correlation of level-1 residuals into account and also introduces serial correlation at level 2, in a complex double serial correlation (DSC) multilevel growth curve model. Results from both real and simulated data show a great improvement in estimates of school effects, for both students' status and growth criteria, compared with those previously obtained from multilevel growth curve models that do not correct for DSC.
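The violated assumption can be made concrete by simulating growth-curve data whose within-student (level-1) residuals follow an AR(1) process, the kind of serial correlation the DSC model corrects for. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)
students, occasions, phi = 200, 6, 0.5
u0 = rng.normal(0, 1.0, size=students)        # random intercepts (level 2)
u1 = rng.normal(0, 0.3, size=students)        # random slopes
t = np.arange(occasions)

e = np.zeros((students, occasions))           # AR(1) level-1 residuals
e[:, 0] = rng.normal(0, 1.0, size=students)
for j in range(1, occasions):
    e[:, j] = phi * e[:, j-1] + rng.normal(0, np.sqrt(1 - phi**2), size=students)

y = u0[:, None] + u1[:, None] * t + e         # simulated growth curves
lag_corr = np.corrcoef(e[:, :-1].ravel(), e[:, 1:].ravel())[0, 1]
print(bool(abs(lag_corr - phi) < 0.1))        # True: residuals are serially correlated
```

A model assuming independent level-1 residuals would treat this lag-1 correlation as zero, biasing the standard errors of the estimated school effects.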

12.
This study has two main purposes. The first is to show that minimum wages must be taken into account when analysing the relationship between wages and productivity in an economy with high unemployment and widespread informal employment. The second concerns the method for analysing this relationship, for which we choose TAR cointegration analysis. The first step of this analysis is testing the variables for stationarity. The low power of traditional unit root tests has been examined and demonstrated in many studies, yet it has not been taken into account in the TAR cointegration literature. This study shows that traditional unit root tests are ill-suited to variables with TAR structures; because of this shortcoming, their results should be supported with TAR unit root tests.
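The regression at the heart of a TAR unit root test can be sketched in the spirit of Enders and Granger: the adjustment coefficient on the lagged level is allowed to differ by regime. The data, threshold, and coefficients below are illustrative, and no test statistics or critical values are computed.

```python
import numpy as np

# TAR regression: dy_t = I_t * rho1 * y_{t-1} + (1 - I_t) * rho2 * y_{t-1} + e_t,
# with regime indicator I_t = 1 when y_{t-1} >= 0 (threshold assumed known).
rng = np.random.default_rng(3)
n = 500
y = np.zeros(n)
for t in range(1, n):                         # asymmetric mean reversion
    rho = -0.3 if y[t-1] >= 0 else -0.05
    y[t] = y[t-1] + rho * y[t-1] + rng.normal()

dy, ylag = np.diff(y), y[:-1]
ind = (ylag >= 0).astype(float)
X = np.column_stack([ind * ylag, (1 - ind) * ylag])
rho_hat, *_ = np.linalg.lstsq(X, dy, rcond=None)
print(rho_hat.shape)                          # two regime-specific coefficients
```

A standard ADF-type test would force a single adjustment coefficient on both regimes, which is the source of its low power against TAR alternatives.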

13.
14.
This paper considers factor estimation from heterogeneous data, where some of the variables—the relevant ones—are informative for estimating the factors, and others—the irrelevant ones—are not. We estimate the factor model within a Bayesian framework, specifying a sparse prior distribution for the factor loadings. Based on identified posterior factor loading estimates, we provide alternative methods to identify relevant and irrelevant variables. Simulations show that both types of variables are identified quite accurately. Empirical estimates for a large multi-country GDP dataset and a disaggregated inflation dataset for the USA show that a considerable share of variables is irrelevant for factor estimation.

15.
Very often, evaluators of regional policies emphasize only regional effects, but it is in fact also necessary to consider the national ones. The purpose of this paper is to demonstrate, by means of simulations made with the REGINA model of the French economy, that these national effects are quite important and depend on the national and regional economic situation. Hence regional policy may be used not only to reduce regional inequalities but also to improve national development. These two aspects, regional equity versus national efficiency, must both be taken into account when evaluating the usefulness of regional policies.

16.
AFT regression-adjusted monitoring of reliability data in cascade processes
Today's competitive market has witnessed a growing interest in improving the reliability of products in both service and industrial operations. A large number of monitoring schemes have been introduced to control reliability-related quality characteristics effectively. These methods have focused on single-stage processes or have treated the quality variables as independent. The main feature of multistage processes, however, is the cascade property, which must be accounted for if process monitoring is to be optimal. The problem becomes more complicated in the presence of censored observations, so both the effects of influential covariates and censoring must be taken into account in a monitoring scheme. In this paper, accelerated failure time (AFT) models are used and two regression-adjusted control schemes based on Cox-Snell residuals are devised, considering two scenarios with censored and non-censored data respectively. The competing control charts are compared in terms of zero-state and steady-state average run length criteria using a Markov chain approach. The comparison reveals that the cumulative sum based monitoring procedure is superior and more effective. The application of the proposed schemes is not restricted to manufacturing processes; service operations such as healthcare systems can also benefit from them.
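The regression adjustment can be illustrated with Cox-Snell residuals from a Weibull AFT model: if the model is correct, the residuals (the fitted cumulative hazards) behave like a unit-exponential sample. The model parameters below are assumed known rather than estimated, and censoring is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
x = rng.normal(size=n)
mu = 1.0 + 0.5 * x                      # AFT linear predictor on log-time
sigma = 0.7                             # Weibull scale on the log-time axis

e = rng.exponential(size=n)             # unit-exponential "error"
t = np.exp(mu) * e ** sigma             # Weibull AFT survival times

# Cox-Snell residuals = fitted cumulative hazard at each failure time.
r = np.exp((np.log(t) - mu) / sigma)
print(bool(abs(r.mean() - 1.0) < 0.1))  # True: unit exponential has mean 1
```

Monitoring these covariate-adjusted residuals, rather than the raw failure times, is what removes the cascade/covariate effects from the control chart.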

17.
Given that the use of Likert scales is increasingly common in social research, it is necessary to determine which methodology is most suitable for analysing the data they produce. Although the categorized nature of these scales means the results should be treated as ordinal data, they are often analysed with techniques designed for cardinal measures. One of the most widely used techniques for studying the construct validity of data is factor analysis, whether exploratory or confirmatory, and this method uses correlation matrices (generally Pearson) to obtain factor solutions. In this context, and by means of simulation studies, we illustrate the advantages of using polychoric rather than Pearson correlations, bearing in mind that the latter require quantitative variables measured on interval scales and a monotonic relationship between them. The results show that solutions obtained using polychoric correlations reproduce the measurement model used to generate the data more accurately.
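The problem with Pearson correlations on Likert items can be demonstrated directly: cutting a latent bivariate normal into five ordered categories attenuates the observed correlation relative to the latent one. Polychoric estimation (not shown here) instead targets the latent correlation. Thresholds and the latent correlation are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n, rho = 20000, 0.6
z1 = rng.normal(size=n)                          # latent continuous variables
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.normal(size=n)

cuts = [-1.5, -0.5, 0.5, 1.5]                    # thresholds -> 5-point scale
l1 = np.digitize(z1, cuts)                       # observed Likert responses
l2 = np.digitize(z2, cuts)

r_latent = np.corrcoef(z1, z2)[0, 1]
r_likert = np.corrcoef(l1, l2)[0, 1]
print(bool(r_likert < r_latent))                 # True: Pearson is attenuated
```

Factor solutions built on the attenuated Pearson matrix inherit this downward bias in the loadings, which is what the simulation studies above quantify.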

18.
In this article we merge two strands of the recent econometric literature: first, factor models based on large sets of macroeconomic variables, which have generally proven useful for forecasting, although there is some disagreement in the literature as to the appropriate estimation method; and second, forecast methods based on mixed-data sampling (MIDAS), a regression technique that can handle the unbalanced datasets that emerge from the publication lags of high- and low-frequency indicators, a problem practitioners have to cope with in real time. We introduce Factor MIDAS, an approach for nowcasting and forecasting low-frequency variables such as gross domestic product (GDP) that exploits the information in a large set of higher-frequency indicators. We consider three alternative MIDAS approaches (basic, smoothed and unrestricted) that provide harmonized projection methods, allowing the alternative factor estimation methods to be compared with respect to nowcasting and forecasting. Common to all the factor estimation methods employed here is that they can handle unbalanced datasets, as typically faced in real-time forecasting owing to publication lags; in particular, we focus on variants of static and dynamic principal components as well as Kalman filter estimates in state-space factor models. As an empirical illustration, we use a large monthly dataset of the German economy to nowcast and forecast quarterly GDP growth. We find that the factor estimation methods do not differ substantially, whereas the most parsimonious MIDAS projection performs best overall. Finally, quarterly models are in general outperformed by the Factor MIDAS models, which confirms the usefulness of mixed-frequency techniques that can exploit timely information from business cycle indicators.
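The unrestricted variant (U-MIDAS) reduces to an ordinary regression in which each quarterly observation gets its own coefficient on every monthly reading of an indicator. The data below are simulated, and only one monthly indicator with no publication lag is used.

```python
import numpy as np

rng = np.random.default_rng(6)
qtrs = 120
m = rng.normal(size=3 * qtrs)                    # a monthly indicator
M = m.reshape(qtrs, 3)                           # 3 monthly readings per quarter
y = M @ np.array([0.5, 0.3, 0.2]) + 0.1 * rng.normal(size=qtrs)  # quarterly target

X = np.column_stack([np.ones(qtrs), M])          # unrestricted: one coef per month
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta.shape)                                # (4,): intercept + 3 monthly weights
```

The basic and smoothed variants instead restrict the monthly weights to a low-dimensional lag polynomial, which is why the most parsimonious projection can win in small samples.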

19.
We empirically investigate the determinants of the payment form in mergers and acquisitions, introducing new variables on target and acquirer investment characteristics to evaluate whether the concerns of target and acquirer shareholders are taken into account. Our sample comprises mergers between publicly listed US companies from 1985 to 2004. We also consider the determinants of announcement returns using the same set of variables. We establish the relevance of a previously unreported variable for the payment form, the correlation of returns between target and acquirer, alongside the more established determinants of hostile takeovers and defence mechanisms; we find weak evidence for the significance of budget constraints and no evidence that asymmetric information or tax considerations are relevant factors. We do not find that announcement returns are explained by the variables considered.

20.
Many techniques in statistical analysis and operational research rely on the phenomenon under consideration having an approximately normal distribution. This distribution type has a number of very convenient properties and a constant pattern, so that "normal" phenomena can be predicted. With non-normal, skewed distributions, analysis and forecasting become much more complicated. This article describes a method by which some non-normal variables can be transformed into normal ones, so that techniques based on normal distributions may be used.
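One standard normalizing transformation of this kind is Box-Cox, which picks the power that makes the data most nearly normal. The article's own transformation is not specified here; this sketch uses simulated right-skewed data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.lognormal(mean=0.0, sigma=0.8, size=5000)   # right-skewed variable

xt, lam = stats.boxcox(x)           # transformed data and the chosen power
skew_before = stats.skew(x)
skew_after = stats.skew(xt)
print(bool(abs(skew_after) < abs(skew_before)))     # True: far less skewed
```

After transformation, normal-theory techniques (prediction intervals, control limits, standard tests) can be applied to `xt` and the results mapped back to the original scale.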

