Full-text access type
Paid full text | 256 articles |
Free | 10 articles |
Subject classification
Finance and banking | 35 articles |
Industrial economics | 4 articles |
Planning and management | 137 articles |
Economics | 35 articles |
General | 8 articles |
Transport economics | 2 articles |
Tourism economics | 3 articles |
Trade economics | 30 articles |
Agricultural economics | 6 articles |
Economic overview | 6 articles |
Publication year
2023 | 5 articles |
2022 | 3 articles |
2021 | 5 articles |
2020 | 19 articles |
2019 | 21 articles |
2018 | 12 articles |
2017 | 16 articles |
2016 | 8 articles |
2015 | 7 articles |
2014 | 4 articles |
2013 | 18 articles |
2012 | 13 articles |
2011 | 14 articles |
2010 | 6 articles |
2009 | 6 articles |
2008 | 14 articles |
2007 | 9 articles |
2006 | 8 articles |
2005 | 9 articles |
2004 | 7 articles |
2003 | 5 articles |
2002 | 6 articles |
2001 | 9 articles |
2000 | 3 articles |
1999 | 5 articles |
1998 | 4 articles |
1997 | 8 articles |
1996 | 4 articles |
1995 | 4 articles |
1994 | 3 articles |
1993 | 4 articles |
1991 | 3 articles |
1989 | 2 articles |
1988 | 2 articles |
A total of 266 results were found (search time: 15 ms).
221.
《International Journal of Forecasting》2021,37(4):1355-1375
Estimation and prediction in high dimensional multivariate factor stochastic volatility models is an important and active research area, because such models allow a parsimonious representation of multivariate stochastic volatility. Bayesian inference for factor stochastic volatility models is usually done by Markov chain Monte Carlo methods (often by particle Markov chain Monte Carlo methods), which are typically slow for high dimensional or long time series because of the large number of parameters and latent states involved. Our article makes two contributions. The first is to propose fast and accurate variational Bayes methods to approximate the posterior distribution of the states and parameters in factor stochastic volatility models. The second is to extend this batch methodology to develop fast sequential variational updates for prediction as new observations arrive. The methods are applied to simulated and real datasets, and shown to produce good approximate inference and prediction compared to the latest particle Markov chain Monte Carlo approaches, but are much faster.
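For readers less familiar with this model class, the sketch below simulates data from a one-factor stochastic volatility model of the kind the article targets; the dimensions, parameter values and variable names are illustrative assumptions, not taken from the paper, and the variational Bayes approximation itself is not shown.

```python
# Minimal sketch of a one-factor stochastic volatility data-generating process.
# Dimensions, parameter values and variable names are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
T, p = 500, 10                      # time points, observed series
lam = rng.normal(0.8, 0.2, size=p)  # factor loadings
mu, phi, sigma_eta = -1.0, 0.95, 0.2

h = np.empty(T)                     # log-volatility of the latent factor
h[0] = mu
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.normal()

f = np.exp(h / 2) * rng.normal(size=T)   # latent factor
eps = 0.3 * rng.normal(size=(T, p))      # idiosyncratic noise
y = f[:, None] * lam[None, :] + eps      # observed returns, T x p
print(y.shape)
```

A variational Bayes scheme would approximate the posterior over h, lam and the static parameters with a tractable family and optimise the evidence lower bound; a particle MCMC sampler would instead target the exact posterior at much higher computational cost.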
222.
Akaike-type criteria and the reliability of inference: Model selection versus statistical model specification (cited 1 time: 0 self-citations, 1 citation by others)
Since the 1990s, the Akaike Information Criterion (AIC) and its various modifications/extensions, including BIC, have found wide applicability in econometrics as objective procedures that can be used to select parsimonious statistical models. The aim of this paper is to argue that these model selection procedures invariably give rise to unreliable inferences, primarily because their choice within a prespecified family of models (a) assumes away the problem of model validation, and (b) ignores the relevant error probabilities. This paper argues for a return to the original statistical model specification problem, as envisaged by Fisher (1922), where the task is understood as one of selecting a statistical model in such a way as to render the particular data a truly typical realization of the stochastic process specified by the model in question. The key to addressing this problem is to replace trading goodness-of-fit against parsimony with statistical adequacy as the sole criterion for when a fitted model accounts for the regularities in the data.
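As a concrete reminder of what the criteria do, the toy sketch below compares nested polynomial regressions by AIC and BIC; the data and candidate family are invented for illustration and are not connected to the paper.

```python
# Illustrative AIC/BIC comparison among nested polynomial regressions.
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.uniform(-2, 2, n)
y = 1.0 + 0.5 * x - 0.8 * x**2 + rng.normal(0, 0.5, n)  # true model is quadratic

for degree in range(1, 6):
    X = np.vander(x, degree + 1)                 # polynomial design matrix
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / n                   # ML estimate of the error variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    k = degree + 2                               # regression coefficients + variance
    aic = 2 * k - 2 * loglik
    bic = k * np.log(n) - 2 * loglik
    print(f"degree {degree}: AIC={aic:.1f}  BIC={bic:.1f}")
```

Both criteria will normally favour the quadratic fit here; the paper's point is that such a ranking says nothing about whether any member of the prespecified family is statistically adequate for the data.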
223.
Estimation Techniques for Ordinal Data in Multiple Frame Surveys with Complex Sampling Designs
Maria del Mar Rueda, Antonio Arcos, David Molina, Maria Giovanna Ranalli 《Revue internationale de statistique》2018,86(1):51-67
Surveys usually include questions in which individuals must select one of a series of possible options that can be ordered. At the same time, multiple frame surveys are becoming a widely used method to decrease bias due to undercoverage of the target population. In this work, we propose statistical techniques for handling ordinal data coming from a multiple frame survey using complex sampling designs and auxiliary information. Our aim is to estimate proportions when the variable of interest has ordinal outcomes. Two estimators are constructed following model‐assisted generalised regression and model calibration techniques. Theoretical properties are investigated for these estimators. Simulation studies with different sampling procedures are considered to evaluate the performance of the proposed estimators in finite size samples. An application to a real survey on opinions towards immigration is also included.
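The sketch below illustrates the model-assisted idea in a deliberately simplified single-frame setting: a generalised-regression (GREG) estimate of one category proportion with a single auxiliary variable under simple random sampling. It is an assumption-laden toy, not the authors' multiple-frame ordinal estimators.

```python
# Simplified single-frame GREG sketch for a proportion with one auxiliary
# variable known for every population unit. Illustrative only.
import numpy as np

rng = np.random.default_rng(2)
N, n = 10_000, 400
x = rng.normal(50, 10, N)                               # auxiliary variable, known for all units
y = rng.binomial(1, 1 / (1 + np.exp(-(x - 50) / 10)))   # indicator of the category of interest

sample = rng.choice(N, n, replace=False)
w = np.full(n, N / n)                                   # design weights under SRS

# Working linear model fitted on the sample by weighted least squares
Xs = np.column_stack([np.ones(n), x[sample]])
beta = np.linalg.solve(Xs.T @ (w[:, None] * Xs), Xs.T @ (w * y[sample]))
fitted_pop = np.column_stack([np.ones(N), x]) @ beta

ht = np.sum(w * y[sample]) / N                          # Horvitz-Thompson estimate
greg = (fitted_pop.sum() + np.sum(w * (y[sample] - Xs @ beta))) / N
print(f"HT: {ht:.3f}  GREG: {greg:.3f}  true: {y.mean():.3f}")
```

Model calibration replaces the linear working model with fitted values from a model suited to the (ordinal) response and calibrates the weights to known population totals of the auxiliary information.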
224.
《Revue internationale de statistique》2017,85(1):84-107
With the rapid, ongoing expansions in the world of data, we need to devise ways of getting more students much further, much faster. One of the choke points affecting both accessibility to a broad spectrum of students and faster progress is classical statistical inference based on normal theory. In this paper, bootstrap‐based confidence intervals and randomisation tests conveyed through dynamic visualisation are developed as a means of reducing cognitive demands and increasing the speed with which application areas can be opened up. We also discuss conceptual pathways and the design of software developed to enable this approach.
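The two devices the paper builds its visual approach around can be written in a few lines; the example below uses invented two-group data and is only meant to show the resampling logic, not the dynamic visualisations themselves.

```python
# A percentile bootstrap interval and a randomisation test on synthetic data.
import numpy as np

rng = np.random.default_rng(3)
a = rng.normal(10.0, 2.0, 40)   # measurements for group A
b = rng.normal(11.0, 2.0, 35)   # measurements for group B

# Bootstrap: resample each group with replacement, recompute the difference
boot = np.array([
    rng.choice(b, b.size).mean() - rng.choice(a, a.size).mean()
    for _ in range(10_000)
])
print("95% bootstrap CI:", np.percentile(boot, [2.5, 97.5]))

# Randomisation test: reshuffle group labels under the null of no difference
observed = b.mean() - a.mean()
pooled = np.concatenate([a, b])
perm = np.empty(10_000)
for i in range(perm.size):
    shuffled = rng.permutation(pooled)
    perm[i] = shuffled[a.size:].mean() - shuffled[:a.size].mean()
print("randomisation p-value:", np.mean(np.abs(perm) >= np.abs(observed)))
```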
225.
Ruipeng Liu 《European Journal of Finance》2013,19(12):971-991
In this paper, we consider an extension of the recently proposed bivariate Markov-switching multifractal model of Calvet, Fisher, and Thompson [2006. “Volatility Comovement: A Multifrequency Approach.” Journal of Econometrics 131: 179–215]. In particular, we allow correlations between volatility components to be non-homogeneous with two different parameters governing the volatility correlations at high and low frequencies. Specification tests confirm the added explanatory value of this specification. In order to explore its practical performance, we apply the model to compute value-at-risk statistics for different classes of financial assets and compare the results with the baseline, homogeneous bivariate multifractal model and the bivariate DCC-GARCH of Engle [2002. “Dynamic Conditional Correlation: A Simple Class of Multivariate Generalized Autoregressive Conditional Heteroskedasticity Models.” Journal of Business & Economic Statistics 20 (3): 339–350]. As it turns out, the multifractal model with heterogeneous volatility correlations provides more reliable results than both the homogeneous benchmark and the DCC-GARCH model.
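To make the volatility mechanism concrete, the sketch below simulates a univariate Markov-switching multifractal process and reads off a one-day value-at-risk figure from the simulated returns; the parameter values are illustrative assumptions, and the paper's bivariate model with heterogeneous volatility correlations is not reproduced here.

```python
# Univariate MSM volatility simulation with an empirical 99% value-at-risk.
import numpy as np

rng = np.random.default_rng(4)
T, k = 5_000, 8
m0, b, gamma1, sigma_bar = 1.4, 3.0, 0.01, 0.01

gammas = 1 - (1 - gamma1) ** (b ** np.arange(k))   # per-component switching probabilities
M = rng.choice([m0, 2 - m0], size=k)               # initial volatility multipliers
returns = np.empty(T)
for t in range(T):
    switch = rng.random(k) < gammas                # components that renew this period
    M[switch] = rng.choice([m0, 2 - m0], size=switch.sum())
    sigma_t = sigma_bar * np.sqrt(M.prod())
    returns[t] = sigma_t * rng.normal()

var_99 = -np.quantile(returns, 0.01)               # 99% one-day value-at-risk
print(f"simulated 99% VaR: {var_99:.4f}")
```

In the paper, VaR forecasts from the heterogeneous bivariate multifractal model are evaluated on real asset returns against the homogeneous benchmark and DCC-GARCH; the quantile read-off above only shows how a VaR number is extracted from a return distribution.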
226.
In frequentist inference, we commonly use a single point (point estimator) or an interval (confidence interval/“interval estimator”) to estimate a parameter of interest. A very simple question is: Can we also use a distribution function (“distribution estimator”) to estimate a parameter of interest in frequentist inference in the style of a Bayesian posterior? The answer is affirmative, and confidence distribution is a natural choice of such a “distribution estimator”. The concept of a confidence distribution has a long history, and its interpretation has long been fused with fiducial inference. Historically, it has been misconstrued as a fiducial concept, and has not been fully developed in the frequentist framework. In recent years, confidence distribution has attracted a surge of renewed attention, and several developments have highlighted its promising potential as an effective inferential tool. This article reviews recent developments of confidence distributions, along with a modern definition and interpretation of the concept. It includes distributional inference based on confidence distributions and its extensions, optimality issues and their applications. Based on the new developments, the concept of a confidence distribution subsumes and unifies a wide range of examples, from regular parametric (fiducial distribution) examples to bootstrap distributions, significance (p‐value) functions, normalized likelihood functions, and, in some cases, Bayesian priors and posteriors. The discussion is entirely within the school of frequentist inference, with emphasis on applications providing useful statistical inference tools for problems where frequentist methods with good properties were previously unavailable or could not be easily obtained. Although it also draws attention to some of the differences and similarities among frequentist, fiducial and Bayesian approaches, the review is not intended to re‐open the philosophical debate that has lasted more than two hundred years. On the contrary, it is hoped that the article will help bridge the gaps between these different statistical procedures.
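A minimal worked case, assuming the textbook normal-mean problem: the confidence distribution below is a data-dependent distribution function on the parameter space whose quantiles reproduce the usual t-based confidence intervals and whose median serves as a point estimator. The data are simulated for illustration.

```python
# Confidence distribution for a normal mean with unknown variance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
x = rng.normal(5.0, 2.0, size=30)
n, xbar, s = x.size, x.mean(), x.std(ddof=1)

def confidence_distribution(theta):
    """Value of the CD at theta: a cdf on the parameter space given the data."""
    return stats.t.cdf((theta - xbar) / (s / np.sqrt(n)), df=n - 1)

# Its median is the point estimate; its 2.5% and 97.5% quantiles give a 95% CI.
lower = xbar + stats.t.ppf(0.025, n - 1) * s / np.sqrt(n)
upper = xbar + stats.t.ppf(0.975, n - 1) * s / np.sqrt(n)
print(confidence_distribution(5.0), xbar, (lower, upper))
```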
227.
Protected areas (PAs) are cornerstones for biodiversity conservation, yet they can be controversial because of their potential impact on the livelihoods of local people due to restrictions on agricultural land use and the extractive use of natural resources. This study evaluates the impact of PAs on households’ livelihoods, as measured by total household income (THI) and livestock income (LI). We use a survey and a quasi-experimental design to gather socioeconomic and biophysical data from households living within, adjacent to and outside three national parks (NPs) in Ethiopia and employ matching methods to isolate the impact of NPs. Our findings suggest that there is no evidence that the establishment of NPs adversely affects local livelihoods. Instead, we find that households within and in areas adjacent to NPs have higher incomes compared to those living outside. Understanding the heterogeneity of the effect of NPs on local livelihoods can help in designing well-targeted policy interventions that improve conservation goals while also addressing livelihood concerns of resource-dependent local communities.
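As an illustration of the matching idea behind this quasi-experimental design, the sketch below performs nearest-neighbour matching on an estimated propensity score with synthetic household data; the covariates, the treatment assignment and the income variable are invented and have no connection to the Ethiopian survey.

```python
# Nearest-neighbour propensity-score matching on synthetic household data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(6)
n = 1_000
X = rng.normal(size=(n, 3))                        # household covariates
p_treat = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
treated = rng.binomial(1, p_treat).astype(bool)    # e.g. lives inside or adjacent to a park
income = 2.0 + X @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 1, n)  # no true treatment effect

pscore = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
nn = NearestNeighbors(n_neighbors=1).fit(pscore[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(pscore[treated].reshape(-1, 1))
matched_controls = income[~treated][idx.ravel()]

att = income[treated].mean() - matched_controls.mean()
print(f"estimated effect on income (should be near 0): {att:.3f}")
```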
228.
229.
Edward C. Sewell 《Statistica Neerlandica》2013,67(2):211-226
To make causal inferences from observational data, researchers have often turned to matching methods. These methods are variably successful. We address issues with matching methods by redefining the matching problem as a subset selection problem. Given a set of covariates, we seek to find two subsets, a control group and a treatment group, so that we obtain optimal balance, or, in other words, the minimum discrepancy between the distributions of these covariates in the control and treatment groups. Our formulation captures the key elements of the Rubin causal model and translates nicely into a discrete optimization framework.
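A toy version of the subset-selection view, assuming made-up covariates: control units are chosen greedily so that the selected group's covariate means track the treatment means. This heuristic only illustrates the balance objective; it is not the discrete optimisation formulation developed in the paper.

```python
# Greedy control-subset selection that targets covariate-mean balance.
import numpy as np

rng = np.random.default_rng(7)
treat = rng.normal(0.5, 1.0, size=(50, 2))          # treated units' covariates
control_pool = rng.normal(0.0, 1.0, size=(500, 2))  # candidate controls
target = treat.mean(axis=0)

chosen, available = [], list(range(len(control_pool)))
for _ in range(len(treat)):                          # pick one control per treated unit
    best = min(
        available,
        key=lambda j: np.linalg.norm(control_pool[chosen + [j]].mean(axis=0) - target),
    )
    chosen.append(best)
    available.remove(best)

print("mean imbalance after selection:", control_pool[chosen].mean(axis=0) - target)
```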
230.
Ivette Luna, Rosangela Ballini 《International Journal of Forecasting》2011,27(3):708
This paper presents a data-driven approach applied to the long term prediction of daily time series in the Neural Forecasting Competition. The proposal comprises the use of adaptive fuzzy rule-based systems in a top-down modeling framework. Daily samples are aggregated to build weekly time series, and model optimization is performed in this top-down framework, reducing the forecast horizon from 56 to 8 steps ahead. Two different disaggregation procedures are evaluated: the historical and daily top-down approaches. Data pre-processing and input selection are carried out prior to the model adjustment. The prediction results are validated using multiple time series, as well as rolling origin evaluations with model re-calibration, and the results are compared with those obtained using daily models, allowing us to analyze the effectiveness of the top-down approach for longer forecast horizons.
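The top-down mechanics can be sketched in a few lines: aggregate the daily history to weekly totals, forecast 8 weeks ahead, then spread each weekly forecast back over 56 days with historical day-of-week shares. The series, the placeholder weekly forecast and the share scheme below are illustrative assumptions standing in for the paper's fuzzy rule-based models and its two disaggregation procedures.

```python
# Top-down forecasting sketch: weekly aggregation, forecast, daily disaggregation.
import numpy as np

rng = np.random.default_rng(8)
weeks_hist = 52
daily = rng.gamma(2.0, 10.0, size=weeks_hist * 7)          # synthetic daily history
daily_by_week = daily.reshape(weeks_hist, 7)

weekly = daily_by_week.sum(axis=1)
shares = (daily_by_week / weekly[:, None]).mean(axis=0)     # historical day-of-week shares

weekly_forecast = np.repeat(weekly[-8:].mean(), 8)          # placeholder 8-step weekly forecast
daily_forecast = np.outer(weekly_forecast, shares).ravel()  # 56 daily predictions
print(daily_forecast.shape)  # (56,)
```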