  Subscription full text   111 articles
  Free   0 articles
Articles by subject:
  Finance & Banking   11
  Industrial Economics   2
  Planning & Management   45
  Economics   18
  Transport Economics   2
  Trade Economics   9
  Economic Overview   24
Articles by year:
  2020   1
  2019   2
  2018   1
  2017   2
  2016   2
  2015   1
  2014   6
  2013   15
  2012   4
  2011   5
  2010   3
  2009   7
  2008   5
  2007   3
  2006   3
  2005   2
  2004   6
  2003   2
  2002   6
  2001   3
  2000   1
  1999   2
  1997   2
  1994   3
  1992   1
  1991   1
  1989   2
  1988   1
  1987   1
  1986   5
  1985   1
  1984   2
  1983   1
  1982   1
  1981   1
  1979   2
  1978   2
  1977   1
  1976   2
Sort order: 111 results found in total; search took 15 ms
1.
In this paper the concept of a municipal welfare function is defined. It reflects the evaluation by local authorities of several levels of local expenditures. On the basis of an extensive survey among all Dutch municipal authorities, these functions are estimated for about 550 Dutch municipalities with respect to total expenditures and differentiated by portfolio, such as public works and education. The variation in the estimated municipal welfare parameters is explained by objectively measurable municipal characteristics such as the number of inhabitants, the age distribution of inhabitants and houses, the number of unemployed, and the regional situation.
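As an illustration of the final step only, the sketch below regresses an estimated welfare parameter on municipal characteristics by ordinary least squares. All variable names and data are hypothetical; the paper's survey-based estimation of the welfare functions themselves is not reproduced.

```python
import numpy as np

# Hypothetical illustration: regress an estimated municipal welfare parameter
# on observable municipal characteristics by OLS. Variable names and data are
# made up; the paper's actual survey-based estimation is not reproduced here.
rng = np.random.default_rng(0)
n = 550                                   # roughly the number of municipalities in the study

inhabitants = rng.lognormal(10, 1, n)     # population size
share_young = rng.uniform(0.15, 0.35, n)  # age-distribution proxy
unemployed = rng.poisson(500, n)          # number of unemployed
welfare_param = (0.3 * np.log(inhabitants) - 0.5 * share_young
                 + 0.0001 * unemployed + rng.normal(0, 0.1, n))

X = np.column_stack([np.ones(n), np.log(inhabitants), share_young, unemployed])
beta, *_ = np.linalg.lstsq(X, welfare_param, rcond=None)
print(dict(zip(["const", "log_inhabitants", "share_young", "unemployed"], beta.round(4))))
```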
2.
The construction of an importance density for partially non-Gaussian state space models is crucial when simulation methods are used for likelihood evaluation, signal extraction, and forecasting. The method of efficient importance sampling is successful in this respect, but we show that it can be implemented in a computationally more efficient manner using standard Kalman filtering and smoothing methods. Efficient importance sampling is generally applicable to a wide range of models, but it is typically a custom-built procedure. For the class of partially non-Gaussian state space models, we present a general method for efficient importance sampling. Our novel method makes the efficient importance sampling methodology more accessible because it does not require the computation of a (possibly) complicated density kernel that needs to be tracked for each time period. The new method is illustrated for a stochastic volatility model with a Student's t distribution.
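A minimal sketch of the linear Gaussian building block this approach relies on: a Kalman filter and smoother for a local-level model, written in plain NumPy. This is only the standard filtering and smoothing machinery, not the authors' efficient importance sampling procedure; the model and parameter values are assumptions.

```python
import numpy as np

def kalman_filter_smoother(y, sigma_eps2, sigma_eta2, a1=0.0, p1=1e6):
    """Kalman filter and RTS smoother for the local-level model
    y_t = alpha_t + eps_t,  alpha_{t+1} = alpha_t + eta_t.
    A sketch of the linear Gaussian machinery only, not the EIS procedure."""
    n = len(y)
    a = np.zeros(n); p = np.zeros(n)         # filtered state mean / variance
    ap = np.zeros(n); pp = np.zeros(n)       # one-step-ahead predictions
    a_pred, p_pred = a1, p1
    for t in range(n):
        ap[t], pp[t] = a_pred, p_pred
        f = p_pred + sigma_eps2              # prediction-error variance
        k = p_pred / f                       # Kalman gain
        a[t] = a_pred + k * (y[t] - a_pred)
        p[t] = p_pred * (1 - k)
        a_pred, p_pred = a[t], p[t] + sigma_eta2
    # Rauch-Tung-Striebel smoother (backward pass)
    a_s = a.copy(); p_s = p.copy()
    for t in range(n - 2, -1, -1):
        c = p[t] / pp[t + 1]
        a_s[t] = a[t] + c * (a_s[t + 1] - ap[t + 1])
        p_s[t] = p[t] + c**2 * (p_s[t + 1] - pp[t + 1])
    return a_s, p_s

rng = np.random.default_rng(1)
alpha = np.cumsum(rng.normal(0, 0.1, 200))            # simulated latent level
y = alpha + rng.normal(0, 0.5, 200)                    # noisy observations
smoothed_mean, smoothed_var = kalman_filter_smoother(y, 0.25, 0.01)
```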
3.
Within public administration and the policy sciences, the concept of policy networks is now well accepted. Little attention has so far been paid to strategies aimed at institutional design. In this article we therefore develop a conceptual framework for studying institutional design more thoroughly. We do this by specifying the nature and variety of the institutional rules that guide the behaviour of actors within networks. Given this categorization of rules, we identify possible strategies for changing network rules. Next, we focus on the strategic context of attempts to influence the nature of institutional rules: the process of institutional design. We conclude with suggestions for applying the conceptual framework in empirical research into the forms, impacts and implications of attempts to change the institutional features of policy networks.
4.
We consider forecasting the term structure of interest rates under the assumption that the factors driving the yield curve are stationary around a slowly time-varying mean, or 'shifting endpoint'. The shifting endpoints are captured using (i) time series methods (exponential smoothing), (ii) long-range survey forecasts of either interest rates or inflation and output growth, or (iii) exponentially smoothed realizations of these macro variables. Allowing for shifting endpoints in the yield curve factors provides substantial and significant gains in out-of-sample predictive accuracy relative to stationary and random walk benchmarks. Forecast improvements are largest for long-maturity interest rates and for long-horizon forecasts.
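A minimal sketch, assuming illustrative data and a hypothetical smoothing parameter, of variant (i): an exponentially smoothed shifting endpoint for a yield-curve factor and a forecast that reverts toward it.

```python
import numpy as np

def shifting_endpoint(factor, alpha=0.05):
    """Exponentially smoothed 'shifting endpoint' for a yield-curve factor:
    tau_t = alpha * f_t + (1 - alpha) * tau_{t-1}.
    The smoothing parameter alpha is an illustrative assumption."""
    tau = np.empty_like(factor, dtype=float)
    tau[0] = factor[0]
    for t in range(1, len(factor)):
        tau[t] = alpha * factor[t] + (1 - alpha) * tau[t - 1]
    return tau

# Forecast h steps ahead: deviations from the endpoint decay at an assumed
# AR(1) rate phi, so the forecast reverts toward the current endpoint.
rng = np.random.default_rng(2)
level_factor = 4 + np.cumsum(rng.normal(0, 0.1, 300))   # e.g. the yield-curve level factor
tau = shifting_endpoint(level_factor)
phi, h = 0.95, 12
forecast = tau[-1] + phi**h * (level_factor[-1] - tau[-1])
print(round(forecast, 3))
```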
5.
We use data from Germany, the Netherlands, Portugal and Spain to test for the effect of earnings variation on individual earnings. We replicate estimates for the USA and find that the variance of earnings in an occupation affects individual wages positively, while the skewness of earnings has a negative effect. Both results are consistent with wage compensation for risk-averse workers.
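A minimal sketch of the kind of regression described: occupation-level variance and skewness of earnings entering an individual wage equation. The data are simulated and the variable names hypothetical; the paper's actual specification and controls are not reproduced.

```python
import numpy as np
import pandas as pd

# Hypothetical sketch: compute the variance and skewness of log earnings within
# each occupation and regress individual log wages on them by OLS.
rng = np.random.default_rng(3)
df = pd.DataFrame({
    "occupation": rng.integers(0, 50, 5000),
    "log_wage": rng.normal(3.0, 0.4, 5000),
})
occ = df.groupby("occupation")["log_wage"]
df["occ_var"] = df["occupation"].map(occ.var())    # occupation-level earnings variance
df["occ_skew"] = df["occupation"].map(occ.skew())  # occupation-level earnings skewness

X = np.column_stack([np.ones(len(df)), df["occ_var"], df["occ_skew"]])
beta, *_ = np.linalg.lstsq(X, df["log_wage"].to_numpy(), rcond=None)
print(dict(zip(["const", "occ_var", "occ_skew"], beta.round(4))))
```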
6.
Robustness issues in multilevel regression analysis (cited 8 times: 0 self-citations, 8 citations by others)
A multilevel problem concerns a population with a hierarchical structure. A sample from such a population can be described as a multistage sample: first, a sample of higher-level units is drawn (e.g. schools or organizations), and then a sample of sub-units is drawn from the selected units (e.g. pupils within schools or employees within organizations). In such samples, the individual observations are in general not completely independent. Multilevel analysis software accounts for this dependence, and in recent years these programs have become widely accepted. Two problems that occur in the practice of multilevel modeling are discussed. The first is the choice of sample sizes at the different levels: what sample sizes are sufficient for accurate estimation? The second is the normality assumption on the level-2 error distribution: significance tests require normally distributed errors, so what happens when this assumption fails? In this paper, simulation studies are used to answer both questions. With respect to the first question, the results show that a small sample size at level two (a sample of 50 or fewer) leads to biased estimates of the second-level standard errors. The answer to the second question is that only the standard errors for the random effects at the second level are highly inaccurate when the distributional assumptions concerning the level-2 errors are not fulfilled. Robust standard errors turn out to be more reliable than the asymptotic standard errors based on maximum likelihood.
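A small simulation sketch of the second issue: a modest number of level-2 units combined with a skewed level-2 error distribution, fitted with a random-intercept model via statsmodels. This is not the paper's simulation design; sample sizes and parameter values are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative two-level simulation: few level-2 units (groups) and a skewed
# (non-normal) level-2 random effect. Not the paper's simulation design.
rng = np.random.default_rng(4)
n_groups, n_per_group = 30, 20                       # deliberately small level-2 sample
u = rng.exponential(1.0, n_groups) - 1.0             # skewed level-2 errors, mean zero

rows = []
for g in range(n_groups):
    x = rng.normal(size=n_per_group)
    y = 1.0 + 0.5 * x + u[g] + rng.normal(0, 1, n_per_group)
    rows.append(pd.DataFrame({"y": y, "x": x, "group": g}))
df = pd.concat(rows, ignore_index=True)

model = smf.mixedlm("y ~ x", df, groups=df["group"])  # random-intercept model
result = model.fit()
print(result.summary())                               # inspect estimates and standard errors
```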
7.
The goal of meta-analysis is to integrate the research results of a number of studies on a specific topic. A characteristic of meta-analysis is that, in general, only the summary statistics of the studies are used, not the original data. When the published research results to be integrated are longitudinal, multilevel analysis can be used for the meta-analysis. We demonstrate this with an example of longitudinal data on the mental development of infants. We distinguish four levels in the data. The highest level (4) is the publication, in which the results of one or more studies are published. The third level consists of the separate studies; at this level we have information about the degree of prematurity of the group of infants in the specific study. The second level consists of the repeated measures: data on test age, mental development, the corresponding standard deviations, and the sample sizes. The lowest level is needed for the specification of the meta-analysis model. Both the specification of the multilevel model (the MLn program is used) and the results are presented and interpreted.
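The paper itself fits a multilevel model (using the MLn program); as a simpler illustration of the underlying weighting idea, the sketch below pools study-level means by fixed-effect inverse-variance weighting. The summary statistics are made up.

```python
import numpy as np

# Fixed-effect inverse-variance pooling of study-level means: each study
# contributes only its reported mean, standard deviation, and sample size.
# Numbers are made up; the paper itself fits a multilevel model instead.
means = np.array([102.0, 98.5, 105.2, 100.1])   # reported mean developmental scores
sds = np.array([14.0, 12.5, 15.0, 13.2])        # reported standard deviations
ns = np.array([40, 55, 32, 61])                 # sample sizes

se2 = sds**2 / ns                 # squared standard errors of the study means
w = 1.0 / se2                     # inverse-variance weights
pooled = np.sum(w * means) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
print(f"pooled mean = {pooled:.2f} (SE {pooled_se:.2f})")
```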
8.
We extend the class of dynamic factor yield curve models in order to include macroeconomic factors. Our work benefits from recent developments in the dynamic factor literature related to the extraction of the common factors from a large panel of macroeconomic series and the estimation of the parameters in the model. We include these factors in a dynamic factor model for the yield curve, in which we model the salient structure of the yield curve by imposing smoothness restrictions on the yield factor loadings via cubic spline functions. We carry out a likelihood-based analysis in which we jointly consider a factor model for the yield curve, a factor model for the macroeconomic series, and their dynamic interactions with the latent dynamic factors. We illustrate the methodology by forecasting the U.S. term structure of interest rates. For this empirical study, we use a monthly time series panel of unsmoothed Fama–Bliss zero yields for treasuries of different maturities between 1970 and 2009, which we combine with a macro panel of 110 series over the same sample period. We show that the relationship between the macroeconomic factors and the yield curve data has an intuitive interpretation, and that there is interdependence between the yield and macroeconomic factors. Finally, we perform an extensive out-of-sample forecasting study. Our main conclusion is that macroeconomic variables can lead to more accurate yield curve forecasts.
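A minimal sketch of the smoothness restriction on the yield factor loadings: each factor's loading curve is pinned down at a few maturity knots and interpolated with a cubic spline. The knot placements and loading values are illustrative assumptions, not the paper's estimates.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Smooth factor loadings over maturity via cubic splines: the loading curve of
# each latent factor is determined by a few knot values, and the spline fills
# in the loadings for all maturities. Values are illustrative, not estimates.
maturities = np.array([3, 6, 12, 24, 36, 60, 84, 120])   # months
knots = np.array([3, 24, 60, 120])

knot_values = {                      # hypothetical loading values at the knots
    "level": [1.0, 1.0, 1.0, 1.0],
    "slope": [1.0, 0.6, 0.3, 0.1],
    "curvature": [0.1, 0.6, 0.4, 0.2],
}
Lambda = np.column_stack([
    CubicSpline(knots, vals)(maturities) for vals in knot_values.values()
])
# y_t = Lambda @ f_t + e_t : yields at all maturities driven by a few smooth factors
print(Lambda.round(3))
```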
9.
In the Netherlands, many Pre-Vocational Secondary Education schools are implementing elements of competence-based education. These learning environments are expected to elicit the use of deep information processing strategies and to positively influence learning outcomes. While questionnaires are often used to investigate students' preferences for particular types of information processing strategies in other educational contexts, these instruments cannot simply be adopted unaltered for use in Pre-Vocational Secondary Education, where several characteristics of the students must be taken into account. This study explores the psychometric properties of three instruments for measuring student preferences for deep or surface information processing strategies in competence-based Pre-Vocational Secondary Education. The utility of a semi-structured interview, a questionnaire, and the think-aloud method was investigated. The questionnaire appeared to be the most accurate instrument and allowed easy classification of students in terms of their information processing preferences. The think-aloud method provided profound insight into the information processing strategies that the students preferred for a learning task and the frequencies with which the strategies were used. The interview results largely corresponded to the results produced by the other measurement instruments, but the interview data lacked the expected richness and depth.
10.
European index funds and exchange-traded funds underperform their benchmarks by 50 to 150 basis points per annum. The explanatory power of dividend withholding taxes as a determinant of this underperformance is at least on par with fund expenses. Dividend taxes also explain performance differences between funds that track different benchmarks and time variation in fund performance. Our results imply that not only fund expenses but also dividend taxes can result in a substantial drag on mutual fund performance.
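A back-of-the-envelope worked example of how a dividend withholding tax translates into a performance drag comparable to fund expenses. All inputs are assumed round numbers, not estimates from the paper.

```python
# Back-of-the-envelope decomposition of an index fund's tracking difference into
# an expense drag and a dividend withholding-tax drag. All inputs are assumed
# round numbers for illustration, not estimates from the paper.
expense_ratio = 0.0030          # 30 bp annual fund expenses
dividend_yield = 0.030          # 3.0% index dividend yield
withholding_rate = 0.15         # 15% unrecoverable withholding tax on dividends

expense_drag = expense_ratio
tax_drag = dividend_yield * withholding_rate      # 45 bp in this example
total_drag_bp = (expense_drag + tax_drag) * 1e4

print(f"expense drag: {expense_drag * 1e4:.0f} bp, "
      f"tax drag: {tax_drag * 1e4:.0f} bp, total: {total_drag_bp:.0f} bp")
```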