91.

Over the last two decades, work on the Post Keynesian theory of endogenous money has flourished and has prompted a rethinking of the complex nature of money in modern economies. At the heart of the debate between what are now labelled the accommodationist (or horizontalist) approach and the structuralist approach to endogenous money are the slopes of the supply curves of reserves and of credit money, respectively. Using the distinction between a single-period analysis and a continuation analysis, the similarities and differences between these approaches are explained, and it is suggested that both be retained and re-interpreted within a more general theory.
92.
93.
In this article, we investigate the behaviour of a number of methods for estimating the co-integration rank in VAR systems characterized by heteroskedastic innovation processes. In particular, we compare the efficacy of the most widely used information criteria, such as the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), with the commonly used sequential approach of Johansen [Likelihood-based Inference in Cointegrated Vector Autoregressive Models (1996)] based on either asymptotic or wild bootstrap-based likelihood ratio tests. Complementing recent work on the latter in Cavaliere, Rahbek and Taylor [Econometric Reviews (2014) forthcoming], we establish the asymptotic properties of the procedures based on information criteria in the presence of heteroskedasticity (conditional or unconditional) of a quite general and unknown form. The relative finite-sample properties of the different methods are investigated by means of a Monte Carlo simulation study. For the simulation DGPs considered in the analysis, we find that the BIC-based procedure and the bootstrap sequential test procedure deliver the best overall performance in terms of their frequency of selecting the correct co-integration rank across different values of the co-integration rank, sample size, stationary dynamics and models of heteroskedasticity. Of these, the wild bootstrap procedure is perhaps the more reliable overall, as it avoids the significant tendency of the BIC-based method to over-estimate the co-integration rank in relatively small samples.
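The DGPs and likelihood ratio statistics of the study are not reproduced here, but the wild bootstrap device it relies on can be sketched in a few lines: residuals are rescaled by i.i.d. Rademacher draws, so each bootstrap observation keeps that observation's own variance. Everything below (the toy constant-mean model, the variance break, the name `wild_bootstrap_sample`) is an illustrative assumption, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a series with unconditional heteroskedasticity:
# the innovation standard deviation doubles halfway through the sample.
n = 200
sigma = np.where(np.arange(n) < n // 2, 1.0, 2.0)
e = rng.normal(0.0, sigma)           # heteroskedastic "residuals"
y = 0.5 + e                          # toy model: constant mean plus noise

# Fit the toy model and recover residuals.
mu_hat = y.mean()
resid = y - mu_hat

def wild_bootstrap_sample(fitted, resid, rng):
    """One wild bootstrap replicate: residuals are multiplied by i.i.d.
    Rademacher draws, so each observation keeps its own variance."""
    v = rng.choice([-1.0, 1.0], size=resid.shape)
    return fitted + resid * v

y_star = wild_bootstrap_sample(mu_hat, resid, rng)
```

Because |e_t v_t| = |e_t|, each bootstrap sample reproduces the original per-observation magnitudes exactly, which is what makes the wild bootstrap robust to heteroskedasticity of unknown form.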
94.
In this paper we introduce the Random Recursive Partitioning (RRP) matching method. RRP generates a proximity matrix that can be useful in econometric applications such as average treatment effect estimation. RRP is a Monte Carlo method that randomly generates non-empty recursive partitions of the data and evaluates the proximity between two observations as the empirical frequency with which they fall in the same cell of these random partitions over all Monte Carlo replications. From the proximity matrix it is possible to derive both graphical and analytical tools to evaluate the extent of the common support between data sets. The RRP method is “honest” in that it does not match observations “at any cost”: if the data sets are separated, the method clearly says so. The match obtained with RRP is invariant under monotonic transformations of the data. Average treatment effect estimators derived from the proximity matrix appear competitive with more commonly used estimators. The RRP method does not require a particular structure of the data, and for this reason it can be applied when distances such as the Mahalanobis or Euclidean distance are not suitable, in the presence of missing data, or when the estimated propensity score is too sensitive to model specification. Copyright © 2008 John Wiley & Sons, Ltd.
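A minimal sketch of the partitioning idea described above, assuming only NumPy. The function name `rrp_proximity`, the cell-size stopping rule and the uniform random cuts are illustrative choices, not the authors' exact algorithm.

```python
import numpy as np

def rrp_proximity(X, n_replications=200, min_cell=5, seed=0):
    """Monte Carlo estimate of an RRP-style proximity matrix: the fraction
    of random recursive partitions in which two observations share a cell."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    prox = np.zeros((n, n))

    def split(idx, labels, next_id):
        # Stop splitting small or degenerate cells and assign a cell id.
        if idx.size <= min_cell:
            labels[idx] = next_id[0]
            next_id[0] += 1
            return
        j = rng.integers(X.shape[1])            # random coordinate
        lo, hi = X[idx, j].min(), X[idx, j].max()
        if lo == hi:                            # cannot split on this axis
            labels[idx] = next_id[0]
            next_id[0] += 1
            return
        cut = rng.uniform(lo, hi)               # random threshold
        left = idx[X[idx, j] <= cut]
        right = idx[X[idx, j] > cut]
        if left.size == 0 or right.size == 0:   # guard an empty side
            labels[idx] = next_id[0]
            next_id[0] += 1
            return
        split(left, labels, next_id)
        split(right, labels, next_id)

    for _ in range(n_replications):
        labels = np.empty(n, dtype=int)
        split(np.arange(n), labels, [0])
        # Count co-membership: 1 if two observations share a cell.
        prox += labels[:, None] == labels[None, :]
    return prox / n_replications

# Demo: proximity of 40 observations in R^3.
X_demo = np.random.default_rng(7).normal(size=(40, 3))
P = rrp_proximity(X_demo, n_replications=50)
```

Since cell membership depends only on the ordering of values along each coordinate, the resulting proximities are invariant under monotonic transformations of the data, consistent with the property stated in the abstract.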
95.
96.
We investigate the multivariate intraday structure of interest rates, focusing on implied forward rates from Eurofutures contracts. Since futures markets are the most liquid markets for interest rate instruments and they yield high-quality intraday data, it is somewhat surprising that their intraday behavior has not been thoroughly studied in the literature. We find interesting similarities with the foreign exchange market – a scaling law and intraday patterns – all of which point to the heterogeneity of market participants. Other properties, such as an asymmetric causal information flow between fine and coarse volatilities of the same time series, are also present in our data. There are also lead–lag correlations across the term structure of implied forward rates, but they tend to disappear as markets mature.

A principal component analysis of the short end of the yield curve allows us to determine the most important components and to reduce the number of time series needed to describe the term structure. We find the decomposition rather stable over time. The first component, which describes the curve level, shows an asymmetry in the information flow between volatilities of different time resolutions: the coarse-grained volatility predicts the fine-grained volatility better than the other way around, as observed in the foreign exchange market. The remaining components do not show such an effect, instead exhibiting significant negative autocorrelations in the time series themselves. A heterogeneous autoregressive conditional heteroskedasticity (HARCH) model is estimated for the first component and the impact of different market agents is discussed.
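As an illustration of the principal component step, the sketch below runs PCA on simulated forward-rate changes generated from level and slope factors; the Eurofutures data themselves are not reproduced here, and the factor DGP, maturities and noise level are all assumptions. Under such a DGP the first component should capture the curve level, as described above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate daily changes of implied forward rates at 8 maturities as a
# level factor plus a slope factor plus idiosyncratic noise (a stand-in
# for real intraday Eurofutures data).
n_obs, n_mat = 500, 8
maturities = np.linspace(0.25, 2.0, n_mat)
level = rng.normal(0.0, 1.0, n_obs)
slope = rng.normal(0.0, 0.5, n_obs)
dr = (level[:, None] * np.ones(n_mat)
      + slope[:, None] * (maturities - maturities.mean())
      + rng.normal(0.0, 0.1, (n_obs, n_mat)))

# Principal component analysis of the rate changes.
dc = dr - dr.mean(axis=0)
cov = np.cov(dc, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)        # eigh returns ascending order
order = eigval.argsort()[::-1]              # sort descending
eigval, eigvec = eigval[order], eigvec[:, order]
explained = eigval / eigval.sum()           # share of variance per component
```

With this DGP the first component loads roughly equally on all maturities (the "level") and dominates the variance, so a handful of components suffices to describe the short end of the curve.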
97.
This paper deals with the estimation of the mean of a spatial population. Under a design-based approach to inference, an estimator assisted by a penalized spline regression model is proposed and studied. The estimator is proved to be design-consistent and to have a normal limiting distribution. A simulation study investigates the performance of the new estimator and its variance estimator in terms of relative bias, efficiency, and confidence interval coverage rate. The results show that the gains in efficiency over standard estimators from classical sampling theory can be substantial.
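The paper's design-based estimator is not reproduced here, but the assisting model – a penalized spline – can be sketched with a truncated-line basis and a ridge penalty on the knot coefficients. The function names, the basis and the penalty structure are illustrative assumptions, not the authors' specification.

```python
import numpy as np

def pspline_fit(x, y, n_knots=10, lam=1.0):
    """Penalized spline with a truncated-line basis: intercept and slope
    are unpenalized, knot coefficients get a ridge penalty lam."""
    knots = np.quantile(x, np.linspace(0, 1, n_knots + 2)[1:-1])
    X = np.column_stack([np.ones_like(x), x]
                        + [np.maximum(x - k, 0.0) for k in knots])
    D = np.diag([0.0, 0.0] + [1.0] * n_knots)   # penalize knot terms only
    beta = np.linalg.solve(X.T @ X + lam * D, X.T @ y)
    return knots, beta

def pspline_predict(x, knots, beta):
    X = np.column_stack([np.ones_like(x), x]
                        + [np.maximum(x - k, 0.0) for k in knots])
    return X @ beta

# Demo: fit a noisy sine curve (a stand-in for a smooth spatial trend).
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + np.random.default_rng(3).normal(0, 0.2, x.size)
knots, beta = pspline_fit(x, y, n_knots=10, lam=0.1)
y_hat = pspline_predict(x, knots, beta)
```

In a model-assisted setting, fitted values like `y_hat` play the role of the auxiliary predictions that correct the design-based mean estimator; the closed-form ridge solve keeps the sketch short.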
98.
Abstract Equality of opportunity is a widely accepted principle of distributive justice and the leading idea of most political platforms in several countries. According to this principle, a society should institute policies that secure an equal distribution among its members of the means to reach a valuable outcome. Once the set of opportunities has been equalized, the particular opportunity the individual chooses from those open to her is outside the scope of justice: ex ante inequalities, and only those inequalities, should be eliminated or compensated for by public intervention. The recent literature on opportunity egalitarianism often merges two distinct economic issues: on one side, the design of a public policy intended to implement the equality-of-opportunity view; on the other, the problem of measuring the degree of opportunity inequality in a society. We describe the basic setting and assumptions of several approaches derived from Roemer’s algorithm for public policy, and then discuss theoretical and empirical studies that separate and test alternative paradigms for the measurement of inequality of opportunity. Finally, an extended critique of the causality issue in policies and measurements is provided.
99.
Over the last decade, worldwide attention has focused on the hazards arising from the interaction between extreme natural phenomena and critical infrastructures and/or the chemical and process industry (natural–technological, or Na-Tech, hazards). Following the recent occurrence of significant events, great attention has also been paid to Na-Tech hazards triggered by volcanic eruptions; in particular, the eruption of the Icelandic volcano alarmed the European community because of the ash fallout over the continent, which caused significant problems for the population, for road, rail and air traffic, and for production activities. This study defines a procedure for representing the vulnerability of industrial facilities to potential volcanic ash fallout. The procedure has been implemented in a Geographical Information System, and a semi-automatic vulnerability-mapping workflow has been constructed.
100.
Abstract After a short history of the concept of human capital (henceforth HC) in economic thought (Section 1), this study presents the two main methods for estimating the value of the stock of HC – the retrospective and the prospective – with a review of the models proposed (Section 2). These methods are linked to the theory of HC investment as a rational choice (Section 3), to the literature analysing the contribution of HC investment to economic growth, and to the method of estimating HC through educational attainment (Section 4). The more recent literature on HC as a latent variable is also assessed (Section 5), and a new estimation method is proposed in which HC is seen both as an unknown function of formative indicators and as a ‘latent effect’ underlying earned income (Section 6). Section 7 concludes.
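As an illustration of the prospective (lifetime-income) method mentioned above, the sketch below discounts an assumed future earnings profile weighted by survival probabilities. All figures, the growth and discount rates, and the name `prospective_hc` are illustrative assumptions, not estimates from the study.

```python
import numpy as np

def prospective_hc(earnings, survival, discount=0.04, growth=0.02):
    """Prospective value of human capital: expected present value of
    future earnings, with survival probabilities and a constant real
    earnings growth rate. All inputs are illustrative."""
    t = np.arange(len(earnings))
    factor = ((1 + growth) / (1 + discount)) ** t
    return float(np.sum(earnings * survival * factor))

# Illustrative profile: 40 remaining working years, flat base earnings,
# mildly declining survival probability.
years = 40
earnings = np.full(years, 30_000.0)
survival = np.linspace(1.0, 0.95, years)
hc = prospective_hc(earnings, survival)
```

With the discount rate above the growth rate, the valuation is finite and lies below the undiscounted expected earnings sum; the retrospective (cost-based) method would instead cumulate past investment outlays.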