Search results: 4,128 articles in total (3,984 subscription full-text, 128 free, 16 domestic free), with publication years ranging from 1979 to 2024; the largest single-year count is 492 articles in 2013.
By subject area: finance (751), industrial economics (169), planning and management (1,159), economics (810), general (155), transport economics (57), tourism economics (81), trade economics (489), agricultural economics (247), and economic overview (210).
91.
Pricing default swaps: Empirical evidence (Total citations: 1; self-citations: 0; citations by others: 1)
In this paper we compare market prices of credit default swaps with model prices. We show that a simple reduced-form model outperforms directly comparing bonds' credit spreads to default swap premiums. We find that the model yields unbiased premium estimates for default swaps on investment-grade issuers, but only if we use swap or repo rates as a proxy for default-free interest rates. This indicates that the government curve is no longer seen as the reference default-free curve. We also show that the model is relatively insensitive to the value of the assumed recovery rate.
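For orientation only, a textbook reduced-form relation (not necessarily the specification estimated in the paper) links the fair default swap premium s to a risk-neutral default intensity λ and recovery rate R, with r the default-free short rate taken from the swap or repo curve:

$$
s \;=\; \frac{(1-R)\displaystyle\int_0^T e^{-\int_0^t (r_u+\lambda_u)\,\mathrm{d}u}\,\lambda_t\,\mathrm{d}t}{\displaystyle\int_0^T e^{-\int_0^t (r_u+\lambda_u)\,\mathrm{d}u}\,\mathrm{d}t} \;\approx\; (1-R)\,\lambda ,
$$

so the premium is roughly the expected loss rate; the approximation on the right holds for a flat intensity and continuously paid premium.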
92.
We review developments in conducting inference for model parameters in the presence of intertemporal and cross-sectional dependence, with an emphasis on panel data applications. We review the use of heteroskedasticity and autocorrelation consistent (HAC) standard error estimators, which include the standard clustered and multiway clustered estimators, and discuss alternative sample-splitting inference procedures, such as the Fama–MacBeth procedure, within this context. We outline the pros and cons of the different procedures. We then illustrate the properties of the discussed procedures within a simulation experiment designed to mimic the type of firm-level panel data that might be encountered in accounting and finance applications. Our conclusion, based on theoretical properties and simulation performance, is that sample-splitting procedures with suitably chosen splits are the most likely to deliver robust inferential statements with approximately correct coverage properties in the types of large, heterogeneous panels many researchers are likely to face.
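As a minimal illustration of one sample-splitting procedure discussed here (the Fama–MacBeth two-pass estimator), the sketch below runs a cross-sectional OLS each period and forms standard errors from the time series of coefficient estimates. It is a generic textbook version run on simulated data, not the paper's simulation design; all names and the data-generating process are illustrative.

```python
# Minimal Fama-MacBeth sketch (illustrative, not the paper's code):
# stage 1: cross-sectional OLS each period; stage 2: t-statistics from the
# time series of period-by-period coefficient estimates.
import numpy as np

def fama_macbeth(y_by_period, X_by_period):
    """y_by_period: list of (n_t,) arrays; X_by_period: list of (n_t, k) arrays."""
    coefs = []
    for y, X in zip(y_by_period, X_by_period):
        b, *_ = np.linalg.lstsq(X, y, rcond=None)   # cross-sectional OLS for period t
        coefs.append(b)
    coefs = np.vstack(coefs)                        # shape (T, k)
    T = coefs.shape[0]
    mean = coefs.mean(axis=0)                       # Fama-MacBeth point estimate
    se = coefs.std(axis=0, ddof=1) / np.sqrt(T)     # SE from the time series of estimates
    return mean, se, mean / se

# Toy usage on simulated firm-level panel data (T periods, n firms, k regressors)
rng = np.random.default_rng(0)
T, n, k = 50, 200, 3
Xs = [np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))]) for _ in range(T)]
ys = [X @ np.array([0.5, 1.0, -0.3]) + rng.normal(size=n) for X in Xs]
print(fama_macbeth(ys, Xs))
```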
93.
Some financial stress events lead to macroeconomic downturns, while others appear to be isolated to financial markets. We identify financial stress regimes using a model that explicitly links financial variables to macroeconomic outcomes. The stress regimes are identified using an unbalanced panel of financial variables with an embedded method for variable selection. Our identified stress regimes are associated with corporate credit tightening and with NBER recessions. An exogenous deterioration in our financial conditions index has strong negative effects on economic activity and negative amplification effects on inflation in the stress regime. These results are obtained with a novel factor-augmented vector autoregressive model with smooth-transition regimes (FASTVAR).
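For readers unfamiliar with smooth-transition dynamics, a generic logistic smooth-transition autoregression (shown only as a stylized illustration; the paper's FASTVAR adds estimated factors and has its own parameterization) weights two regime-specific coefficient sets by a transition function of an observed state variable $s_t$, such as a financial conditions index:

$$
y_t = \bigl(1 - G(s_{t-1})\bigr)\,\Phi_1\, y_{t-1} + G(s_{t-1})\,\Phi_2\, y_{t-1} + \varepsilon_t,
\qquad
G(s;\gamma,c) = \frac{1}{1 + e^{-\gamma (s - c)}},
$$

where γ controls how abruptly the economy moves between the normal and stress regimes and c locates the transition.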
94.
This paper analyzes the dynamics of pairwise comovements between European stock market returns (Spain, France, Germany, Switzerland and the United Kingdom), asking whether a single source of risk drives those dynamics. Once the comovements are shown to be time-varying, the question becomes whether a global index such as the Euro Stoxx can be considered the main source of risk. To that end, we estimate and test for time-varying global pair covariances and for the time-varying pair covariances that remain once the effect of the Euro Stoxx is removed. The empirical results are obtained for locally stationary variables, a family that includes variables with time-varying first and second moments. Within this framework, time-varying means and covariances can be estimated with a spline-based procedure, and Wald-type statistics can be computed to test for time variation. A simulation study shows that accurate estimation of the mean is crucial to the good performance of the tests on second moments. The empirical results show that all global pair covariances for the European countries analyzed are time-varying, but also that the Euro Stoxx can be considered the driving source of risk behind these dynamics. This conclusion is useful for modeling purposes and financial strategies. Finally, we repeat the analysis with the Nasdaq as an alternative global index and find that it explains only a small part of the dynamics of the European pair comovements.
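To make the spline-based idea concrete, the following sketch is a simplified stand-in, not the authors' locally stationary estimator or their Wald tests; the simulated series and smoothing parameters are invented for illustration. It first smooths each series' time-varying mean with a spline and then smooths the cross-product of the de-meaned series to obtain a time-varying covariance path.

```python
# Sketch: spline-smoothed time-varying means, then a spline-smoothed
# time-varying covariance of the de-meaned series (illustration only).
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
T = 500
t = np.arange(T) / T
x = 0.5 * np.sin(2 * np.pi * t) + rng.normal(scale=0.3, size=T)   # simulated return series 1
y = 0.3 * np.cos(2 * np.pi * t) + rng.normal(scale=0.3, size=T)   # simulated return series 2

mx = UnivariateSpline(t, x, s=30.0)(t)                  # time-varying mean of x
my = UnivariateSpline(t, y, s=30.0)(t)                  # time-varying mean of y
cov_t = UnivariateSpline(t, (x - mx) * (y - my), s=30.0)(t)   # time-varying covariance path
```

The simulation point made in the abstract shows up here as well: if the mean splines are badly chosen, the de-meaned cross-products, and hence the covariance estimate, inherit the error.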
95.
The construction of an importance density for partially non-Gaussian state space models is crucial when simulation methods are used for likelihood evaluation, signal extraction, and forecasting. The method of efficient importance sampling is successful in this respect, but we show that it can be implemented in a computationally more efficient manner using standard Kalman filter and smoothing methods. Efficient importance sampling is generally applicable for a wide range of models, but it is typically a custom-built procedure. For the class of partially non-Gaussian state space models, we present a general method for efficient importance sampling. Our novel method makes the efficient importance sampling methodology more accessible because it does not require the computation of a (possibly) complicated density kernel that needs to be tracked for each time period. The new method is illustrated for a stochastic volatility model with a Student's t distribution.
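As context for the illustration mentioned in the abstract, a standard stochastic volatility model with Student's t measurement errors (generic notation; the paper's exact parameterization may differ) can be written as

$$
y_t = \exp(h_t/2)\,\varepsilon_t,\quad \varepsilon_t \sim t_\nu, \qquad
h_{t+1} = \mu + \phi\,(h_t - \mu) + \sigma_\eta\,\eta_t,\quad \eta_t \sim N(0,1),
$$

which fits the partially non-Gaussian class: the observation density is non-Gaussian while the state transition for the log-volatility $h_t$ remains linear and Gaussian.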
96.
The business models of banks are often seen as the result of a variety of simultaneously determined managerial choices, such as those regarding the types of activities, funding sources, level of diversification, and size. Moreover, owing to the fuzziness of the data and the possibility that some banks may combine features of different business models, the use of hard clustering methods has often led to poorly identified business models. In this paper we propose a framework to deal with these challenges based on an ensemble of three unsupervised clustering methods to identify banking business models: fuzzy c-means (which allows us to handle fuzzy clustering), self-organizing maps (which yield intuitive visual representations of the clusters), and partitioning around medoids (which circumvents the presence of data outliers). We set up our analysis in the context of the European banking sector, whose regulators have increasingly focused on examining the business models of supervised entities in the aftermath of the twin financial crises. In our empirical application, we find evidence of four distinct banking business models and further distinguish between banks with a clearly defined business model (core banks) and others (non-core banks), as well as banks with a stable business model over time (persistent banks) and others (non-persistent banks). Our proposed framework performs well under several robustness checks related to the sample, clustering methods, and variables used.
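To show what the fuzzy component of such an ensemble looks like, here is a minimal NumPy implementation of the standard fuzzy c-means updates (fuzzifier m = 2). It covers only one of the three methods named above, not the authors' full framework with self-organizing maps and partitioning around medoids, and the toy data are invented.

```python
# Minimal fuzzy c-means sketch: alternate between weighted centers and
# inverse-distance memberships until convergence (fixed iteration count here).
import numpy as np

def fuzzy_cmeans(X, k, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.dirichlet(np.ones(k), size=n)              # initial membership matrix (n, k), rows sum to 1
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]   # membership-weighted cluster centers (k, d)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))             # inverse-distance memberships
        U /= U.sum(axis=1, keepdims=True)              # renormalize each row
    return centers, U

# Toy usage on simulated "bank feature" data with two latent groups
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
centers, memberships = fuzzy_cmeans(X, k=2)
```

The soft membership matrix is what lets the framework flag banks that mix features of several business models, rather than forcing each bank into a single cluster.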
97.
This paper makes several contributions to the emerging literature on the post-entry behavior of international new ventures. Based on an extensive longitudinal data set, we investigate the dynamics of commitment, growth and survival of different types of newly internationalizing Belgian firms. Global start-ups have the highest initial and most rapidly rising export commitment per market and are also more likely to continue exporting over time than geographically focused start-ups and traditional staged exporters. However, global start-ups also display the highest failure rate. This high failure rate appears to result primarily from the 'liability of newness' and less from the added complexity associated with rapid and wide-scope internationalization.
98.
Journal of Retailing, 2014, 90(4): 493–510
Prior ingredient branding research has examined the influence of "stated" factors such as fit between partner brands on attitudes toward composite products (e.g., Tide with Downy fabric softener). This research focuses on the choice of composite products, and addresses three managerially relevant questions: Which consumer segments are more likely to adopt the composite product? Will the choice of the composite product have positive or negative reciprocal effects on partner brands? Will the introduction of the composite product benefit the primary or the secondary brand more? The authors use a brand choice model to investigate the "revealed" choice of complements-based composite products. Study results indicate that (i) despite high fit between the composite product and the primary brand, consumer segments may have different choice likelihoods for these products, whereas prior research suggests equal likelihood; (ii) the choice of a composite product may not provide a positive reciprocal effect to the secondary brand; and (iii) the introduction of a composite product may benefit the primary brand more than the secondary brand, whereas prior research suggests a symmetrical benefit for the partner brands. Finally, the finding that introducing a composite product may not cannibalize the sales of the primary brand extends the ingredient branding literature, which has been silent on this issue.
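As a generic illustration of how segment-level brand choice probabilities are typically modeled (a standard multinomial logit form, not necessarily the authors' exact specification), the probability that consumer i in segment s chooses alternative j from choice set C can be written as

$$
P_{is}(j) = \frac{\exp\!\bigl(\beta_s^{\top} x_{ij}\bigr)}{\sum_{j' \in C} \exp\!\bigl(\beta_s^{\top} x_{ij'}\bigr)},
$$

where the segment-specific coefficients $\beta_s$ allow the composite product to have different choice likelihoods across segments, and $x_{ij}$ stacks alternative attributes such as price, promotion, and brand intercepts.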
99.
Taking community patrol monitoring in the Liziba (李子坝) community of the Gansu Baishuijiang Nature Reserve (甘肃白水江自然保护区) as a case study, this paper analyzes community patrol monitoring and professional patrol monitoring from an economic perspective, using comparative advantage and a co-opetition model. The model analysis shows that each approach has its own advantages: supplementing professional patrol monitoring with community patrol monitoring promotes the conservation of the environment and biodiversity more effectively, raising the level of effective management while maximizing the community's conservation benefits.
100.
There is by now a long tradition of using the EM algorithm to find maximum-likelihood estimates (MLEs) when the data are incomplete in any of a wide range of ways, even when the observed-data likelihood can easily be evaluated and numerical maximisation of that likelihood is available as a conceptually simple route to the MLEs. It is rare in the literature to see numerical maximisation employed if EM is possible. But with excellent general-purpose numerical optimisers now available free, there is no longer any reason, as a matter of course, to avoid direct numerical maximisation of likelihood. In this tutorial, I present seven examples of models in which numerical maximisation of likelihood appears to have some advantages over the use of EM as a route to MLEs. The mathematical and coding effort is minimal, as there is no need to derive and code the E and M steps, only a likelihood evaluator. In all the examples, the unconstrained optimiser nlm available in R is used, and transformations are used to impose constraints on parameters. I suggest therefore that the following question be asked of proposed new applications of EM: Can the MLEs be found more simply and directly by using a general-purpose numerical optimiser?
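In the spirit of the tutorial's argument, the sketch below fits a two-component normal mixture (a classic EM showcase) by handing the observed-data log-likelihood directly to a general-purpose optimiser, with log and logit transformations imposing the constraints and no E or M steps to derive. The tutorial uses R's nlm; this is a Python/SciPy analogue written for illustration, not the author's code.

```python
# Direct numerical ML for a two-component normal mixture: only a likelihood
# evaluator is needed, plus transformations that keep parameters in range.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm
from scipy.special import expit   # inverse-logit keeps the mixing weight in (0, 1)

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-1, 0.5, 300), rng.normal(2, 1.0, 700)])

def neg_log_lik(theta, x):
    # theta holds unconstrained parameters: logit(pi), mu1, mu2, log(sd1), log(sd2)
    pi = expit(theta[0])
    mu1, mu2 = theta[1], theta[2]
    sd1, sd2 = np.exp(theta[3]), np.exp(theta[4])
    dens = pi * norm.pdf(x, mu1, sd1) + (1 - pi) * norm.pdf(x, mu2, sd2)
    return -np.sum(np.log(dens))

start = np.array([0.0, -0.5, 1.0, 0.0, 0.0])        # rough starting values on the transformed scale
fit = minimize(neg_log_lik, start, args=(data,), method="BFGS")
print(expit(fit.x[0]), fit.x[1:3], np.exp(fit.x[3:5]))   # back-transform to pi, means, sds
```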