Full-text access type
Paid full text | 256 articles
Free full text | 10 articles
Subject classification
Finance and banking | 35 articles
Industrial economics | 4 articles
Planning and management | 137 articles
Economics | 35 articles
General | 8 articles
Transport economics | 2 articles
Tourism economics | 3 articles
Trade economics | 30 articles
Agricultural economics | 6 articles
Economic overview | 6 articles
Publication year
2023 | 5 articles
2022 | 3 articles
2021 | 5 articles
2020 | 19 articles
2019 | 21 articles
2018 | 12 articles
2017 | 16 articles
2016 | 8 articles
2015 | 7 articles
2014 | 4 articles
2013 | 18 articles
2012 | 13 articles
2011 | 14 articles
2010 | 6 articles
2009 | 6 articles
2008 | 14 articles
2007 | 9 articles
2006 | 8 articles
2005 | 9 articles
2004 | 7 articles
2003 | 5 articles
2002 | 6 articles
2001 | 9 articles
2000 | 3 articles
1999 | 5 articles
1998 | 4 articles
1997 | 8 articles
1996 | 4 articles
1995 | 4 articles
1994 | 3 articles
1993 | 4 articles
1991 | 3 articles
1989 | 2 articles
1988 | 2 articles
Sort by: 266 results in total (search time: 15 ms)
211.
Akaike-type criteria and the reliability of inference: Model selection versus statistical model specification (Total citations: 1; self-citations: 0; other citations: 1)
Since the 1990s, the Akaike Information Criterion (AIC) and its various modifications and extensions, including the BIC, have found wide applicability in econometrics as objective procedures for selecting parsimonious statistical models. The aim of this paper is to argue that these model selection procedures invariably give rise to unreliable inferences, primarily because their choice within a prespecified family of models (a) assumes away the problem of model validation, and (b) ignores the relevant error probabilities. This paper argues for a return to the original statistical model specification problem, as envisaged by Fisher (1922), where the task is understood as one of selecting a statistical model in such a way as to render the particular data a truly typical realization of the stochastic process specified by the model in question. The key to addressing this problem is to replace the trade-off between goodness of fit and parsimony with statistical adequacy as the sole criterion for when a fitted model accounts for the regularities in the data.
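For reference, the standard definitions of the two criteria discussed, with $\hat{L}$ the maximized likelihood, $k$ the number of estimated parameters and $n$ the sample size (textbook formulas, not notation from the paper):

```latex
\mathrm{AIC} = 2k - 2\ln\hat{L},
\qquad
\mathrm{BIC} = k\ln n - 2\ln\hat{L}
```

Both penalize fit ($-2\ln\hat{L}$) by model size; the paper's point is that minimizing either within a prespecified family says nothing about whether any member of that family is statistically adequate for the data.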
212.
Estimation Techniques for Ordinal Data in Multiple Frame Surveys with Complex Sampling Designs
Maria del Mar Rueda, Antonio Arcos, David Molina, Maria Giovanna Ranalli. Revue internationale de statistique, 2018, 86(1): 51-67
Surveys usually include questions where individuals must select one of a set of possible options that can be ordered. Meanwhile, multiple frame surveys are becoming a widely used method to decrease bias due to undercoverage of the target population. In this work, we propose statistical techniques for handling ordinal data coming from a multiple frame survey using complex sampling designs and auxiliary information. Our aim is to estimate proportions when the variable of interest has ordinal outcomes. Two estimators are constructed following model-assisted generalised regression and model calibration techniques. Theoretical properties are investigated for these estimators. Simulation studies with different sampling procedures are considered to evaluate the performance of the proposed estimators in finite size samples. An application to a real survey on opinions towards immigration is also included.
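As background to the design-weighted estimation described above, a minimal single-frame sketch of a Hajek-type estimator of ordinal category proportions; the paper's GREG and model-calibration estimators extend this baseline with auxiliary information and multiple frames, and all names below are illustrative:

```python
import numpy as np

# Illustrative single-frame baseline: a Hajek-type estimator of the
# proportion in each ordinal category under unequal-probability sampling.
def hajek_proportions(y, pi, categories):
    """y: sampled ordinal responses; pi: first-order inclusion probabilities."""
    w = 1.0 / pi                                   # design weights
    return {c: w[y == c].sum() / w.sum() for c in categories}

rng = np.random.default_rng(0)
y = rng.integers(1, 5, size=200)                   # ordinal outcomes 1..4
pi = rng.uniform(0.05, 0.5, size=200)              # inclusion probabilities
print(hajek_proportions(y, pi, range(1, 5)))
```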
213.
International Journal of Forecasting, 2021, 37(4): 1355-1375
Estimation and prediction in high dimensional multivariate factor stochastic volatility models is an important and active research area, because such models allow a parsimonious representation of multivariate stochastic volatility. Bayesian inference for factor stochastic volatility models is usually carried out by Markov chain Monte Carlo methods (often particle Markov chain Monte Carlo methods), which are typically slow for high dimensional or long time series because of the large number of parameters and latent states involved. Our article makes two contributions. The first is to propose a fast and accurate variational Bayes method to approximate the posterior distribution of the states and parameters in factor stochastic volatility models. The second is to extend this batch methodology to develop fast sequential variational updates for prediction as new observations arrive. The methods are applied to simulated and real datasets, and are shown to produce good approximate inference and prediction compared to the latest particle Markov chain Monte Carlo approaches, while being much faster.
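For orientation, a common baseline formulation of a factor stochastic volatility model (standard in this literature; the article's exact specification may differ): observations $y_t \in \mathbb{R}^p$ load on a small number $k \ll p$ of latent factors whose log-variances follow AR(1) processes:

```latex
y_t = \beta f_t + \varepsilon_t, \qquad
f_{jt} \sim N(0,\, e^{h_{jt}}), \qquad
h_{j,t+1} = \mu_j + \phi_j (h_{jt} - \mu_j) + \eta_{jt}
```

The parsimony comes from modelling $p$ series through $k$ factor volatilities plus loadings, which is also why the latent-state count grows with both $k$ and the series length.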
214.
For hypotheses on the coefficient values of the lagged dependent variables in the ARX class of dynamic regression models, test procedures are developed which yield exact inference for a given (up to an unknown scale factor) distribution of the innovation errors. They include exact tests on the maximum lag length, for structural change, and on the presence of (seasonal or multiple) unit roots; i.e. they cover situations where asymptotic, non-exact t, F, AOC, ADF or HEGY tests are usually employed. The various procedures are demonstrated and compared in illustrative empirical models, and the approach is critically discussed.
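For orientation, a generic ARX($p$) specification with the unit-root hypothesis written on the lag coefficients (standard notation, assumed here rather than taken from the paper):

```latex
y_t = \sum_{i=1}^{p} \alpha_i\, y_{t-i} + \beta' x_t + \varepsilon_t,
\qquad
H_0:\ \sum_{i=1}^{p} \alpha_i = 1
```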
215.
Robust tests and estimators based on nonnormal quasi-likelihood functions are developed for autoregressive models with near unit root. Asymptotic power functions and power envelopes are derived for point-optimal tests of a unit root when the likelihood is correctly specified. The shapes of these power functions are found to be sensitive to the extent of nonnormality in the innovations. Power loss resulting from using least-squares unit-root tests in the presence of thick-tailed innovations appears to be greater than in stationary models.
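The near-unit-root setting is conventionally parameterized as local to unity; a standard formulation (assumed here) is:

```latex
y_t = \rho_T\, y_{t-1} + u_t, \qquad \rho_T = 1 + \frac{c}{T}
```

where $c$ is a fixed constant and $T$ the sample size, so $\rho_T \to 1$ as $T$ grows.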
216.
Revue internationale de statistique, 2017, 85(1): 84-107
With the rapid, ongoing expansions in the world of data, we need to devise ways of getting more students much further, much faster. One of the choke points affecting both accessibility to a broad spectrum of students and faster progress is classical statistical inference based on normal theory. In this paper, bootstrap-based confidence intervals and randomisation tests conveyed through dynamic visualisation are developed as a means of reducing cognitive demands and increasing the speed with which application areas can be opened up. We also discuss conceptual pathways and the design of software developed to enable this approach.
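As a concrete illustration of the first ingredient, a minimal percentile-bootstrap confidence interval of the kind such visualisations animate; the function name and defaults are ours, not from the paper's software:

```python
import numpy as np

# Percentile bootstrap: resample the data with replacement, recompute the
# statistic each time, and read the interval off the resampling quantiles.
def bootstrap_ci(data, stat=np.mean, n_boot=10_000, level=0.95, seed=0):
    rng = np.random.default_rng(seed)
    boot = np.array([stat(rng.choice(data, size=data.size, replace=True))
                     for _ in range(n_boot)])
    alpha = (1 - level) / 2
    return np.quantile(boot, [alpha, 1 - alpha])

sample = np.random.default_rng(1).normal(loc=10, scale=2, size=50)
print(bootstrap_ci(sample))   # two-sided 95% interval for the mean
```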
217.
How much nominal rigidity is there in the US economy? Testing a new Keynesian DSGE model using indirect inference (Total citations: 1; self-citations: 0; other citations: 1)
Vo Phuong Mai Le, David Meenagh, Michael Wickens. Journal of Economic Dynamics and Control, 2011, 35(12): 2078-2104
We evaluate the Smets-Wouters New Keynesian model of the US postwar period, using indirect inference, the bootstrap and a VAR representation of the data. We find that the model is strongly rejected. While an alternative (New Classical) version of the model fares no better, adding limited nominal rigidity to it produces a 'weighted' model version closest to the data. But on data from 1984 onwards (the 'great moderation') the best model version is one with a high degree of nominal rigidity, close to New Keynesian. Our results are robust to a variety of methodological and numerical issues.
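A schematic sketch of the indirect-inference logic described, using a simple AR(1) slope as the auxiliary statistic in place of the paper's VAR; `simulate_model` is a hypothetical stand-in for a DSGE simulator:

```python
import numpy as np
from math import erf, sqrt

# Indirect inference, schematically: estimate an auxiliary-model coefficient
# on the data, re-estimate it on many bootstrap simulations of the structural
# model, and ask whether the data value is plausible under that distribution.
def ar1_coef(x):
    slope, _ = np.polyfit(x[:-1], x[1:], 1)
    return slope

def indirect_inference_pvalue(data, simulate_model, n_sim=500, seed=0):
    rng = np.random.default_rng(seed)
    beta_data = ar1_coef(data)
    beta_sims = np.array([ar1_coef(simulate_model(rng, data.size))
                          for _ in range(n_sim)])
    z = (beta_data - beta_sims.mean()) / beta_sims.std(ddof=1)
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided p-value
```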
218.
International Journal of Forecasting, 2020, 36(4): 1407-1419
The purpose of this article is to propose a method to minimize the difference between electoral predictions and electoral results. It builds on findings that stem from established democracies, where most of the research has been carried out, but it focuses on filling the gap for developing nations, which have thus far been neglected by the literature. It proposes a two-stage model in which data are first collected, filtered and weighted according to biases, and then processed using Bayesian algorithms and Markov chains. It tests the specification using data from 11 Latin American countries. It shows that the model is remarkably accurate: in comparison to polls, not only does it produce more precise estimates for every election, but it also produces a more accurate forecast for nine out of every ten candidates. The article closes with a discussion of the limitations of the model and a proposal for future research.
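A toy sketch of the two-stage idea under loudly labeled assumptions: stage one weights polls (here simply by recency), stage two performs a conjugate Bayesian update of a candidate's vote share; none of the figures or weights come from the paper:

```python
# Stage 1: weight each poll (illustrative recency weights).
# Stage 2: update a Beta prior on the candidate's vote share with the
# weighted counts and report the posterior mean.
def posterior_share(polls, weights, prior=(1.0, 1.0)):
    """polls: list of (supporters, sample_size); weights: one per poll."""
    a, b = prior
    for (yes, n), w in zip(polls, weights):
        a += w * yes                # weighted "successes"
        b += w * (n - yes)          # weighted "failures"
    return a / (a + b)              # posterior mean vote share

polls = [(520, 1000), (480, 1000), (530, 1000)]
weights = [1.0, 0.7, 0.5]           # older polls downweighted
print(posterior_share(polls, weights))
```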
219.
In frequentist inference, we commonly use a single point (point estimator) or an interval (confidence interval/"interval estimator") to estimate a parameter of interest. A very simple question is: Can we also use a distribution function ("distribution estimator") to estimate a parameter of interest in frequentist inference in the style of a Bayesian posterior? The answer is affirmative, and confidence distribution is a natural choice of such a "distribution estimator". The concept of a confidence distribution has a long history, and its interpretation has long been fused with fiducial inference. Historically, it has been misconstrued as a fiducial concept, and has not been fully developed in the frequentist framework. In recent years, confidence distribution has attracted a surge of renewed attention, and several developments have highlighted its promising potential as an effective inferential tool. This article reviews recent developments of confidence distributions, along with a modern definition and interpretation of the concept. It includes distributional inference based on confidence distributions and its extensions, optimality issues and their applications. Based on the new developments, the concept of a confidence distribution subsumes and unifies a wide range of examples, from regular parametric (fiducial distribution) examples to bootstrap distributions, significance (p-value) functions, normalized likelihood functions, and, in some cases, Bayesian priors and posteriors. The discussion is entirely within the school of frequentist inference, with emphasis on applications providing useful statistical inference tools for problems where frequentist methods with good properties were previously unavailable or could not be easily obtained. Although it also draws attention to some of the differences and similarities among frequentist, fiducial and Bayesian approaches, the review is not intended to re-open the philosophical debate that has lasted more than two hundred years. On the contrary, it is hoped that the article will help bridge the gaps between these different statistical procedures.
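The canonical textbook example: for a normal mean $\mu$ with known $\sigma$ and sample mean $\bar{x}$, the function

```latex
H_n(\mu) = \Phi\!\left( \frac{\sqrt{n}\,(\mu - \bar{x})}{\sigma} \right)
```

viewed as a function of $\mu$ for fixed data, is a distribution function on the parameter space whose quantiles deliver confidence intervals of every level.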
220.
This paper considers the determinants of a binary indicator for the existence of functional limitations using seven waves (1991–1997) of the British Household Panel Survey (BHPS). The focal point of our analysis is the contributions of state dependence, heterogeneity and serial correlation in explaining the dynamics of health. To investigate these issues we apply static and dynamic panel probit models with flexible error structures. To estimate the models we use maximum simulated likelihood (MSL) with antithetic acceleration and implement a recently proposed test for the existence of asymptotic bias. The dynamic models show strong positive state dependence.
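For orientation, a generic dynamic panel probit of the kind applied (standard notation, not necessarily the authors'): $\gamma$ captures state dependence, $\alpha_i$ unobserved heterogeneity, and the paper's flexible error structures enter through the joint distribution of $\alpha_i$ and $\varepsilon_{it}$:

```latex
y_{it}^{*} = \gamma\, y_{i,t-1} + x_{it}'\beta + \alpha_i + \varepsilon_{it},
\qquad
y_{it} = \mathbf{1}\{\, y_{it}^{*} > 0 \,\}
```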
Data from the British Household Panel Survey (BHPS) were supplied by the ESRC Data Archive. Neither the original collectors of the data nor the Archive bear any responsibility for the analysis or interpretations presented here. Funding was provided by ESRC award no. R000238169 (Simulation-based econometric approaches to investigating the interaction of lifestyle and health). The authors would like to thank William Greene for valuable comments on an earlier draft of the paper, Roberto Leon Gonzalez for valuable programming advice, and participants at the iHEA Third International Conference, York, 22-25 July 2001, and the York Seminars in Health Econometrics (YSHE) for their comments.