Similar Documents
20 similar documents found (search time: 31 ms).
1.
This paper discusses the stylized facts, the theory, and the remaining problems of productivity dispersion, which is essentially related to the concept of equilibrium in neoclassical theory. Empirical study of data on Japanese firms shows that productivity distributions obey the Pareto law, and that the Pareto index decreases with the level of aggregation. To explain these two stylized facts we propose a theoretical framework built on the basic principle of statistical physics and on the concept of superstatistics, an approach that accommodates fluctuations of aggregate demand. We show that the allocation of production factors depends crucially on the level of aggregate demand, and that the higher the level of aggregate demand, the closer the economy is to the frontier of the production possibility set.
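The Pareto-tail finding above can be checked on any productivity sample with a standard Hill estimator. The sketch below is illustrative only, in plain Python on simulated Pareto draws, and is not the authors' actual estimation procedure:

```python
import math
import random

def hill_pareto_index(sample, k):
    """Hill estimator of the Pareto index from the k largest observations."""
    top = sorted(sample, reverse=True)[:k + 1]
    threshold = top[-1]                     # the (k+1)-th largest value
    return k / sum(math.log(x / threshold) for x in top[:k])

# Illustrative data only: exact Pareto draws with index mu = 2.0,
# generated by inverse-transform sampling, X = x_min * U ** (-1 / mu).
random.seed(0)
mu_true = 2.0
sample = [random.random() ** (-1.0 / mu_true) for _ in range(50_000)]
mu_hat = hill_pareto_index(sample, k=2_000)
```

With 50,000 draws and the 2,000 largest observations, `mu_hat` should land close to the true index of 2.0.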

2.
Chiu-Ming Luk, Socio, 1985, 19(6): 407-416
This paper examines macrovariations in China's development during the early eighties, using the core-periphery model as its conceptual framework. Sixteen variables were collected for all 29 provincial-level units of mainland China. After collapsing the data into two important dimensions by means of principal components analysis, maps of component scores were drawn to show the existence of a core and a periphery in China's development. To cross-check these results, several cluster analysis procedures were tried to arrive at the best grouping of provinces. A conceptual model was proposed to help understand China's development, and the results are discussed in relation to China's current open-door policy in order to speculate on future development.
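Component scores of the kind mapped above come from a principal components analysis. The following is a minimal illustrative sketch (power iteration on toy data standing in for the 29 provincial units; the variables are invented, not the study's):

```python
import random

def first_component_scores(data):
    """Scores on the first principal component, computed with power
    iteration on the sample covariance matrix of the centred data."""
    n, p = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(p)]
    c = [[row[j] - means[j] for j in range(p)] for row in data]
    cov = [[sum(c[i][a] * c[i][b] for i in range(n)) / (n - 1)
            for b in range(p)] for a in range(p)]
    v = [1.0] * p
    for _ in range(200):                     # power iteration
        w = [sum(cov[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return [sum(c[i][j] * v[j] for j in range(p)) for i in range(n)]

# Toy stand-in for 29 provincial units: two development indicators
# that move together, so one component captures most of the variation.
random.seed(1)
rows = []
for _ in range(29):
    base = random.gauss(0, 1)
    rows.append([base, 0.9 * base + random.gauss(0, 0.3)])
scores = first_component_scores(rows)
```

Units with strongly positive scores on this axis would map to the "core", strongly negative ones to the "periphery".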

3.
Family supportive supervision has emerged as an important prerequisite for effective work-family integration and employees' well-being. Scholars are addressing the need to develop family supportive managers and have introduced a new construct and measure, 'family supportive supervisor behavior'. So far, little attention has been paid to the underlying behavioral process and the managerial characteristics that trigger family supportive supervisor behavior. In response, a multilevel conceptual framework is developed that identifies individual-level and contextual-level factors predicting managers' overall tendency to engage in family supportive supervisor behavior. The consequences of family supportive supervisor behavior for organizational outcomes at the subordinate and team levels, together with its practical implications, are outlined. In presenting a multilevel conceptual framework for family supportive supervisor behavior, a research agenda is proposed that can guide future researchers in the field of family supportive supervision.

4.
Multilevel modeling is important for human resource management (HRM) research in that HRM research often analyzes and interprets hierarchical data residing at more than one level of analysis. However, HRM research in general lags behind other disciplines, such as education, health, marketing, and psychology, in the use of a multilevel analytical strategy. This article integrates the most recent literature into the theoretical and applied basics of multilevel modeling applicable to HRM research. A range of multilevel modeling issues is discussed, including the statistical logic underpinning multilevel modeling, level conceptualization of variables, data aggregation, hypothesis tests, reporting mediation paths, and cross-level interactions. An empirical example concerning complex cross-level mediated moderation illustrates the principles and procedures for implementing a multilevel analytical strategy in HRM research. © 2015 Wiley Periodicals, Inc.
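A basic first step in the level-conceptualization and data-aggregation issues discussed above is quantifying how much variance resides at the group level. A minimal sketch of the intraclass correlation ICC(1) from a one-way ANOVA decomposition, on made-up team scores rather than any data from the article:

```python
def icc1(groups):
    """ICC(1) from a one-way ANOVA decomposition.

    groups: list of lists, each inner list holding the individual scores
    observed within one higher-level unit (e.g. one team)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    msb = sum(len(g) * (sum(g) / len(g) - grand) ** 2
              for g in groups) / (k - 1)            # between-group mean square
    msw = sum((x - sum(g) / len(g)) ** 2
              for g in groups for x in g) / (n - k)  # within-group mean square
    n_bar = n / k    # average group size (assumes roughly balanced groups)
    return (msb - msw) / (msb + (n_bar - 1) * msw)

# Hypothetical team scores: most variation lies between teams,
# so a large share of variance is attributable to the group level.
icc = icc1([[5, 6, 5], [1, 2, 1], [9, 8, 9]])
```

A high ICC(1) signals that aggregation to the group level is meaningful and that a multilevel model is warranted.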

5.
Peter A. Rogerson, Socio, 1983, 17(5-6): 373-380
When forecasting aggregate variables, a choice must often be made either to add up individual forecasts made at a disaggregate level or simply to forecast at the aggregate level. The presence of heterogeneity introduces aggregation bias and makes the disaggregate approach preferable, while the presence of data and specification errors introduces relatively large variances in the disaggregate forecasts, making the aggregate approach preferable. It is suggested that the mean square error is useful in evaluating the combined effects of heterogeneity and of specification and data errors, and in facilitating comparisons between aggregate and disaggregate approaches to aggregate variable forecasting.
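The heterogeneity side of this trade-off can be illustrated with a toy simulation: two sub-series with different AR(1) persistence, forecast either by summing per-series fits or by fitting a single (misspecified) AR(1) to the total. This sketch deliberately abstracts from data and specification errors in the disaggregate models, so it shows only the case favoring disaggregation:

```python
import random

def ar1_coef(x):
    """OLS slope of x[t] on x[t-1] (no intercept): a plain AR(1) fit."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

random.seed(3)
T = 3000
a, b = [0.0], [0.0]
for _ in range(T):
    a.append(0.9 * a[-1] + random.gauss(0, 1))   # persistent sub-series
    b.append(0.2 * b[-1] + random.gauss(0, 1))   # transient sub-series
tot = [x + y for x, y in zip(a, b)]

train = T // 2
phi_a, phi_b = ar1_coef(a[:train]), ar1_coef(b[:train])
phi_tot = ar1_coef(tot[:train])  # misspecified: the true total is not AR(1)

# Out-of-sample mean square error of one-step forecasts of the aggregate.
dis_mse = sum((tot[t + 1] - (phi_a * a[t] + phi_b * b[t])) ** 2
              for t in range(train, T)) / (T - train)
agg_mse = sum((tot[t + 1] - phi_tot * tot[t]) ** 2
              for t in range(train, T)) / (T - train)
```

Here `dis_mse` comes in below `agg_mse`, because the summed disaggregate forecasts exploit the heterogeneous persistence that the single aggregate AR(1) averages away; adding measurement error to `a` and `b` would tilt the comparison back toward the aggregate model.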

6.
7.
We propose a method to solve models with heterogeneous agents and aggregate uncertainty. The law of motion describing aggregate behavior is obtained by explicitly aggregating the individual policy rule. The algorithm is simpler and faster than existing algorithms that rely on parameterization of the cross-sectional distribution and/or a computationally intensive simulation step. Explicit aggregation establishes a link between the individual policy rule and the set of necessary aggregate state variables, an insight that can be helpful in determining what state variables to include in other algorithms as well.

8.
Official statistics production based on a combination of data sources, including sample surveys, censuses and administrative registers, is becoming more and more common. Reduction of response burden and gains in production cost efficiency, as well as the potential for detailed spatial-demographic and longitudinal statistics, are some of the major advantages associated with the use of integrated statistical data. Data integration has always been an essential feature of the use of administrative register data. But survey and census data should also be integrated, so as to widen their scope and improve their quality. There are many new and difficult challenges here that are beyond the traditional topics of survey sampling and data integration. In this article we consider statistical theory for data integration on a conceptual level. In particular, we present a two-phase life-cycle model for integrated statistical microdata, which provides a framework for the various potential error sources, and outline some concepts and topics for quality assessment beyond the ideal of error-free data. A shared understanding of these issues will hopefully help us to collocate and coordinate efforts in future research and development.

9.
Measuring knowledge development is a new statistical activity that warrants urgent attention in the light of the current Internet explosion. The Internet creates virtual networks by connecting information nodes, knowledge nexuses, people and institutions. It has resulted in an unprecedented proliferation of Information, Communication, Knowledge and Entertainment (ICKE), which has in turn brought about structural changes in all aspects of social, economic and political governance. For public policy formulators, including the statistical community, it is imperative that the knowledge development aspect of ICKE be measured. Being abstract, knowledge is difficult to quantify; however, the manifestations of attributes and variables of any knowledge development activity are measurable. The paper outlines a conceptual framework for achieving this. The proposed framework adopts a socio-technological approach, premised on contemporary information and knowledge development as integral to the people and technology dimensions. To illustrate the workability of the proposed model, the paper identifies some parameters and variables in the current statistical system, and highlights some new data generated via the Internet Subscriber Study and ICT Exposition Visitor Study. All illustrations refer to Malaysian data. Finally, the paper outlines 'way forward' initiatives for establishing a full-fledged set of information and knowledge development indicators.

10.
In the last two decades, marketing databases have grown significantly in terms of size and richness of available information. The analysis of these databases raises several information-related and statistical issues. We aim to provide an overview of a selection of issues related to the analysis of large data sets, focusing on two important areas: single-source databases and customer transaction databases. We discuss models that have been used to describe customer behavior in these fields. Among the issues discussed are the development of parsimonious models, estimation methods, aggregation of data, data fusion and the optimization of customer-level profit functions. We conclude that problems related to the analysis of large databases are far from resolved, and will stimulate new research avenues in the near future.

11.
Numerous studies examining the linkage between corporate entrepreneurship and performance resort to the entrepreneurial orientation construct to assess a firm's degree of entrepreneurship. Little conceptual and empirical research has been devoted to understanding the factors and conditions that produce entrepreneurial orientation. Generic explanatory variables such as environment, organization, strategy and culture have been mentioned in past research but, although a number of hypotheses have been proposed, few have been thoroughly developed and tested. In this article, we focus on one explanatory variable - culture - which we develop along multiple axes. We propose a conceptual framework that aims to provide a better understanding of how three interdependent levels of culture - national, industry and corporate - influence entrepreneurial orientation.

12.
This paper aims to demonstrate a possible aggregation gain in predicting future aggregates under a practical assumption of model misspecification. Empirical analysis of a number of economic time series suggests that the use of the disaggregate model is not always preferred over the aggregate model in predicting future aggregates, in terms of an out-of-sample prediction root-mean-square error criterion. One possible justification of this interesting phenomenon is model misspecification. In particular, if the model fitted to the disaggregate series is misspecified (i.e., it is not the true data-generating mechanism), then the forecast made by the misspecified model is not always the most efficient, which opens up an opportunity for the aggregate model to perform better. It is therefore of interest to find out when the aggregate model helps. In this paper, we study a framework in which the underlying disaggregate series has a periodic structure. We derive and compare the efficiency loss in linear prediction of future aggregates using the adapted disaggregate model and the aggregate model. Some scenarios in which an aggregation gain occurs are identified. Numerical results show that the aggregate model helps over a fairly large region of the parameter space of the periodic model studied.

13.
This paper investigates whether there is time variation in the excess sensitivity of aggregate consumption growth to anticipated aggregate disposable income growth using quarterly US data over the period 1953–2014. Our empirical framework contains the possibility of stickiness in aggregate consumption growth and takes into account measurement error and time aggregation. Our empirical specification is cast into a Bayesian state-space model and estimated using Markov chain Monte Carlo (MCMC) methods. We use a Bayesian model selection approach to deal with the non-regular test for the null hypothesis of no time variation in the excess sensitivity parameter. Anticipated disposable income growth is calculated by incorporating an instrumental variables estimation approach into our MCMC algorithm. Our results suggest that the excess sensitivity parameter in the USA is stable at around 0.23 over the entire sample period. Copyright © 2016 John Wiley & Sons, Ltd.

14.
This paper proposes a template for modelling complex datasets that integrates traditional statistical modelling approaches with more recent advances in statistics and modelling through an exploratory framework. Our approach builds on the well-known and long-standing idea of 'good practice in statistics' by establishing a comprehensive framework for modelling that focuses on exploration, prediction, interpretation and reliability assessment, a relatively new idea that allows individual assessment of predictions.
The integrated framework we present comprises two stages. The first involves the use of exploratory methods to help visually understand the data and identify a parsimonious set of explanatory variables. The second encompasses a two-step modelling process, in which the use of non-parametric methods such as decision trees and generalized additive models is promoted to identify important variables and their modelling relationship with the response before a final predictive model is considered. We focus on fitting the predictive model using parametric, non-parametric and Bayesian approaches.
This paper is motivated by a medical problem in which interest focuses on developing a risk stratification system for morbidity among 1,710 cardiac patients, given a suite of demographic, clinical and preoperative variables. Although the methods are applied specifically to this case study, they can be applied in any field, irrespective of the type of response.
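The two-stage template described above can be caricatured in a few lines: an exploratory screening step to pick a parsimonious variable set, followed by a simple predictive model. The sketch below uses correlation screening and a univariate logistic fit on simulated data; the paper itself uses decision trees, generalized additive models and Bayesian approaches, so treat this as a schematic, not the authors' method:

```python
import math
import random

def correlation(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def fit_logistic(x, y, lr=0.1, steps=2000):
    """Univariate logistic regression via gradient ascent on the log-likelihood."""
    w = b = 0.0
    n = len(x)
    for _ in range(steps):
        gw = gb = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(w * xi + b)))
            gw += (yi - p) * xi
            gb += yi - p
        w += lr * gw / n
        b += lr * gb / n
    return w, b

# Hypothetical patient data: binary "morbidity" depends only on variable 0.
random.seed(4)
X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(400)]
y = [1 if 2.0 * row[0] + random.gauss(0, 1) > 0 else 0 for row in X]

# Stage 1: exploratory screening for a parsimonious variable set.
screen = [abs(correlation([row[j] for row in X], y)) for j in range(3)]
best = screen.index(max(screen))
# Stage 2: fit the predictive model on the selected variable.
w, b = fit_logistic([row[best] for row in X], y)
```

Screening correctly isolates variable 0, and the fitted slope is positive, matching the simulated relationship.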

15.
Short-Term Load Forecasting (STLF) is a fundamental instrument in the efficient operational management and planning of electric utilities. Emerging smart grid technologies pose new challenges and opportunities. Although load forecasting at the aggregate level has been extensively studied, electrical load forecasting at the fine-grained geographical scale of households is more challenging. Among existing approaches, semi-parametric generalized additive models (GAM) have become increasingly popular due to their accuracy, flexibility, and interpretability. Their applicability is justified when forecasting is addressed at higher levels of aggregation, since the aggregated load pattern contains relatively smooth additive components. High-resolution data, however, are highly volatile, and forecasting the average load using GAM models with smooth components alone does not provide meaningful information about the future demand. Instead, irregular and volatile effects need to be incorporated to enhance the forecast accuracy. We focus on the analysis of such hybrid additive models applied to smart meter data and show that they improve the forecasting performance of classical additive models at low aggregation levels.
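The gain from adding a volatile term to a smooth additive model can be illustrated on simulated household data: a smooth hour-of-day profile plus decaying appliance spikes. The 0.6 autoregressive correction below is hand-picked to match the simulated spike decay, so this is a sketch of the idea, not of the paper's hybrid model:

```python
import math
import random

# Simulated hourly household load: a smooth daily shape plus occasional
# appliance spikes that decay over a few hours, plus small noise.
random.seed(5)
n_days, per_day = 60, 24
load, spike = [], 0.0
for d in range(n_days):
    for h in range(per_day):
        smooth = 1.0 + 0.5 * math.sin(2 * math.pi * h / per_day)
        if random.random() < 0.05:     # an appliance switches on
            spike = 3.0
        spike *= 0.6                   # and its load decays
        load.append(smooth + spike + random.gauss(0, 0.1))

# "Smooth" model: hour-of-day means on the training half (a stand-in
# for a GAM's smooth additive component).
train = (n_days // 2) * per_day
hour_mean = [0.0] * per_day
for t in range(train):
    hour_mean[t % per_day] += load[t] / (train // per_day)

smooth_err, hybrid_err = [], []
for t in range(train, len(load) - 1):
    f = hour_mean[(t + 1) % per_day]            # smooth term only
    resid = load[t] - hour_mean[t % per_day]    # volatile leftover
    smooth_err.append(load[t + 1] - f)
    hybrid_err.append(load[t + 1] - (f + 0.6 * resid))  # + volatile term

mse_smooth = sum(e * e for e in smooth_err) / len(smooth_err)
mse_hybrid = sum(e * e for e in hybrid_err) / len(hybrid_err)
```

On this volatile single-household series the hybrid forecast beats the purely smooth one; at high aggregation the spikes average out and the gap shrinks, which is the paper's point about aggregation levels.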

16.
Exchange rate sensitivity of US bilateral trade flows (total citations: 1; self-citations: 0; citations by others: 1)
The traditional way of assessing the impact of currency depreciation on the trade balance has been to estimate the elasticity of trade volume with respect to relative prices. To this end, most previous studies used aggregate trade data. To avoid aggregation biases potentially hidden in aggregate data, recent studies have relied on bilateral trade data. Since import and export price data are not available at the bilateral level, this study proposes an alternative way of assessing the impact of currency depreciation on bilateral trade flows. The models are applied to trade between the US and its 19 industrial trading partners using recent advances in time-series modeling.

17.
Because independence bears on audit quality, on the efficiency of capital markets, and on the very survival of the certified public accountant (CPA) profession, the independence of CPAs has long received wide attention. Individual countries and the relevant international organizations have all tried to set out explicit rules on independence. In particular, the US Independence Standards Board (ISB) and the IFAC have each constructed a conceptual (principles-based) framework for independence that regulates the issue fairly comprehensively, whereas China's rules on independence are still far from systematic. Drawing on the relevant provisions of the ISB and the IFAC, this paper expounds the main content of a conceptual framework for independence, in the hope of providing a useful reference.

18.
Non-response is a common source of error in many surveys. Because surveys are often costly instruments, quality-cost trade-offs play a continuing role in the design and analysis of surveys. The advances of telephone, computers, and the Internet all had, and still have, considerable impact on the design of surveys. Recently, a strong focus on methods for monitoring and tailoring survey data collection has emerged as a new paradigm for efficiently reducing non-response error; paradata and adaptive survey designs are key words in these developments. Prerequisites to evaluating, comparing, monitoring, and improving the quality of survey response are a conceptual framework for representative survey response, indicators to measure deviations thereof, and indicators to identify subpopulations that need increased effort. In this paper, we present an overview of representativeness indicators, or R-indicators, that are fit for these purposes. We give several examples and provide guidelines for their use in practice.
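A commonly used R-indicator takes the form R(ρ) = 1 − 2·S(ρ), where S(ρ) is the standard deviation of the (estimated) response propensities, so that R = 1 when every unit is equally likely to respond. A minimal sketch with made-up propensities:

```python
def r_indicator(propensities):
    """Representativeness indicator R(rho) = 1 - 2 * S(rho), where S is the
    standard deviation of the response propensities across the sample."""
    n = len(propensities)
    mean = sum(propensities) / n
    s = (sum((p - mean) ** 2 for p in propensities) / (n - 1)) ** 0.5
    return 1.0 - 2.0 * s

# Hypothetical propensities: a fully representative response pattern
# versus one with a strong contrast between two subgroups.
even = r_indicator([0.5] * 100)
uneven = r_indicator([0.2] * 50 + [0.8] * 50)
```

The uneven pattern yields a markedly lower R, flagging the low-propensity subgroup as one that needs increased data-collection effort.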

19.
Hector Correa, Socio, 1985, 19(1): 63-79
The starting point of this paper is a conceptual model of bureaucratic corruption, based on the assumption that bureaucrats (a) maximize personal utilities, and (b) control a monopoly in the production of certain goods and services. This conceptual model leads to a system of three equations representing demand, supply and market clearance of the goods and services supplied by the bureaucracy. Using the market clearance equation, an equation for the level of corruption in a bureaucracy is obtained. The conceptual model helps to specify the signs of the partial derivatives of the level of corruption with respect to the explanatory variables. Two sets of data are used to test the model. The first refers to levels of corruption in the bureaucracies of 17 Latin American countries, while the second deals with levels of corruption of Federal officials working in individual states of the U.S.A., of state officials, and of local officials. The conclusions obtained thus far from the statistical analysis confirm the expectations derived from the conceptual model.

20.
In this research, we propose a disaster response model combining preparedness and responsiveness strategies. The selective response depends on the level of accuracy that our forecasting models can achieve. In order to decide the right geographical space and time window of response, forecasts are prepared and assessed through a spatial-temporal aggregation framework, until we find the optimum level of aggregation. The research considers major earthquake data for the period 1985-2014. Building on the produced forecasts, we develop a disaster response model accordingly. The model is dynamic in nature, as it is updated every time a new event is added to the database. Any forecasting model can be optimized through the proposed spatial-temporal forecasting framework, and as such our results can easily be generalized to other forecasting methods and other disaster response contexts.
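A spatial-temporal aggregation search of the kind described above can be sketched as a grid search over cell sizes and time windows, scoring each level with a naive persistence forecast on a synthetic event catalogue. Comparing raw errors across aggregation levels is scale-dependent (a real implementation would normalize them), so this is purely illustrative and not the paper's procedure:

```python
import random

def persistence_mae(events, cell, window, t_max, extent=10.0):
    """Mean absolute error of a naive persistence forecast of event counts
    on a grid with the given spatial cell size and time window."""
    n_cells = int(extent / cell)
    n_bins = t_max // window
    counts = {}
    for x, y, t in events:
        key = (min(int(x / cell), n_cells - 1),
               min(int(y / cell), n_cells - 1),
               min(t // window, n_bins - 1))
        counts[key] = counts.get(key, 0) + 1
    errs = [abs(counts.get((i, j, b), 0) - counts.get((i, j, b - 1), 0))
            for i in range(n_cells) for j in range(n_cells)
            for b in range(1, n_bins)]
    return sum(errs) / len(errs)

# Synthetic catalogue: events cluster around two active zones, uniform in time.
random.seed(6)
events = []
for _ in range(2000):
    cx, cy = random.choice([(2.0, 2.0), (7.0, 8.0)])
    events.append((min(max(random.gauss(cx, 0.8), 0.0), 9.99),
                   min(max(random.gauss(cy, 0.8), 0.0), 9.99),
                   random.randrange(120)))

# Score every (cell size, time window) combination; re-run after each new
# event to keep the chosen aggregation level up to date.
errors = {(c, w): persistence_mae(events, c, w, t_max=120)
          for c in (1.0, 2.0, 5.0) for w in (10, 30, 60)}
best = min(errors, key=errors.get)
```

Swapping `persistence_mae` for any other forecasting model gives the same search loop, which is the sense in which the framework generalizes.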
