Full-text access type
Paid full text | 669 articles |
Free | 13 articles |
Free (domestic) | 2 articles |
Subject classification
Finance and banking | 177 articles |
Industrial economics | 12 articles |
Planning and management | 240 articles |
Economics | 111 articles |
General | 23 articles |
Transport economics | 13 articles |
Tourism economics | 5 articles |
Trade economics | 56 articles |
Agricultural economics | 22 articles |
Economic overview | 25 articles |
Publication year
2024 | 2 articles |
2023 | 11 articles |
2022 | 11 articles |
2021 | 13 articles |
2020 | 23 articles |
2019 | 34 articles |
2018 | 19 articles |
2017 | 28 articles |
2016 | 28 articles |
2015 | 17 articles |
2014 | 37 articles |
2013 | 90 articles |
2012 | 24 articles |
2011 | 35 articles |
2010 | 25 articles |
2009 | 48 articles |
2008 | 33 articles |
2007 | 30 articles |
2006 | 29 articles |
2005 | 23 articles |
2004 | 17 articles |
2003 | 18 articles |
2002 | 14 articles |
2001 | 13 articles |
2000 | 14 articles |
1999 | 10 articles |
1998 | 8 articles |
1997 | 3 articles |
1996 | 5 articles |
1995 | 4 articles |
1994 | 2 articles |
1993 | 3 articles |
1992 | 1 article |
1991 | 3 articles |
1990 | 2 articles |
1989 | 1 article |
1988 | 1 article |
1987 | 1 article |
1986 | 2 articles |
1984 | 1 article |
1982 | 1 article |
Sort order: 684 results found, search took 15 ms
601.
To measure the credit risk of small and medium-sized enterprise (SME) financing effectively, 13 indicators covering four aspects of SMEs (operating and growth capacity, profit composition and profitability, assets and liabilities, and cash flow) are selected as explanatory variables. Because the explanatory variables exhibit high multicollinearity and the sample size is small, partial least squares (PLS) is applied to extract PLS components and filter out noise in the system, and a binary Logistic model of credit risk is then built on the extracted components. Prediction results on real data show that the model is not only stable and accurate but also has strong power to explain changes in the underlying process.
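As a rough illustration of the two-stage approach described above (PLS component extraction followed by a binary logistic model), here is a numpy-only sketch on synthetic data. The firm data, indicator definitions, and component count are hypothetical stand-ins, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the paper's data (illustrative only):
# 120 firms, 13 correlated financial indicators, binary distress label.
n, p = 120, 13
latent = rng.normal(size=(n, 3))
X = latent @ rng.normal(size=(3, p)) + 0.3 * rng.normal(size=(n, p))
y = (latent[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(float)

# Step 1: extract 3 PLS components to absorb the multicollinearity
# among the 13 indicators (NIPALS-style deflation).
Xc, yc = X - X.mean(0), y - y.mean()
T = np.empty((n, 3))
for k in range(3):
    w = Xc.T @ yc
    t = Xc @ (w / np.linalg.norm(w))
    T[:, k] = t
    Xc -= np.outer(t, Xc.T @ t) / (t @ t)    # deflate X
    yc -= t * (yc @ t) / (t @ t)             # deflate y
T /= T.std(0)                                # standardize the scores

# Step 2: binary logistic model on the PLS scores (gradient ascent).
Z = np.column_stack([np.ones(n), T])
beta = np.zeros(4)
for _ in range(2000):
    mu = 1 / (1 + np.exp(-Z @ beta))
    beta += 0.5 * Z.T @ (y - mu) / n
acc = ((Z @ beta > 0) == (y > 0.5)).mean()
```

Fitting the logistic model on a handful of orthogonal PLS scores, rather than on the 13 raw indicators, is what sidesteps the multicollinearity and small-sample problems the abstract mentions.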
602.
Love and Shumway (1994) developed a nonparametric deterministic test for monopsony market power using a normalized quadratic restricted cost function with one input for which the firm has potential market power. This research examines monopsony power using Lau's Hessian identity relationships, which rest on the empirical properties of duality theory. Lau's Hessian identity shows that, under pure competition, the Hessian matrices obtained from an unrestricted profit function, a restricted profit function, and a production function approach are equal. We examine how these relationships change in the monopsony case. Results show that the unrestricted and restricted profit function results differ under monopsony power. The important implication is that if an input or output is potentially traded in a market subject to market power, that input or output should be modelled as fixed in order to correctly recover the underlying technology.
603.
Takuya Hasebe, Applied Economics, 2016, 48(20), 1902-1913
We derive the asymptotic variance of the Blinder–Oaxaca decomposition effects. We show that the delta method approach, which builds on the assumption of fixed regressors, understates the true variability of the decomposition effects when regressors are stochastic. Our proposed variance estimator takes the randomness of regressors into consideration. Our approach is applicable to both linear and nonlinear decompositions; previously, only a bootstrap method had been a valid option for nonlinear decompositions. As our derivation follows the general framework of m-estimation, it is straightforward to extend our variance estimator to a cluster-robust variance estimator. We demonstrate the finite-sample performance of our variance estimator with a Monte Carlo study and present a real-data application.
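The linear twofold decomposition underlying this discussion can be sketched as follows; the data, group labels, and coefficients are invented for illustration. The identity gap = explained + unexplained holds exactly for OLS with an intercept:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented two-group outcome data with one stochastic regressor each.
nA, nB = 400, 400
XA = np.column_stack([np.ones(nA), rng.normal(1.0, 1.0, nA)])
XB = np.column_stack([np.ones(nB), rng.normal(0.5, 1.0, nB)])
yA = XA @ np.array([1.0, 0.8]) + rng.normal(0, 0.5, nA)
yB = XB @ np.array([0.7, 0.6]) + rng.normal(0, 0.5, nB)

# Group-specific OLS fits.
bA, *_ = np.linalg.lstsq(XA, yA, rcond=None)
bB, *_ = np.linalg.lstsq(XB, yB, rcond=None)

# Twofold decomposition evaluated at group-A coefficients.
gap = yA.mean() - yB.mean()
explained = (XA.mean(axis=0) - XB.mean(axis=0)) @ bA   # endowments part
unexplained = XB.mean(axis=0) @ (bA - bB)              # coefficients part
```

The paper's point is about the variance of `explained` and `unexplained`: treating `XA` and `XB` as fixed (the plain delta method) ignores the sampling variability of the group means, which this sketch makes visible since the regressors are drawn at random.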
604.
The paper analyses the behaviour of a battery of non-survey techniques for constructing regional input-output (I-O) tables and estimating impacts. To this end, a Monte Carlo simulation based on the generation of 'true' multiregional I-O tables was carried out. By aggregating the multiregional I-O tables, national I-O tables were obtained; from the latter, indirect regional tables were derived through various regionalisation methods, and the resulting multipliers were compared with the 'true' multipliers using a set of statistics. Three aspects of the methods' behaviour were analysed: ability to reproduce the 'true' multipliers, variability of the simulation error, and direction of bias. The results demonstrate that the Flegg et al. location quotient (FLQ) and its augmented version (AFLQ) represent an effective improvement over conventional location-quotient techniques, both in reproducing 'true' multipliers and in generating more stable simulation errors. In addition, the results confirm a tendency of the methods to over- or underestimate impacts; for the FLQ and the AFLQ, this tendency depends on the value of the parameter δ.
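A minimal sketch of the FLQ adjustment itself, assuming the standard formula FLQ_ij = CILQ_ij * [log2(1 + TRE/TNE)]^delta with regional coefficients capped so they never exceed the national ones; the employment figures and the national coefficient matrix below are made up:

```python
import numpy as np

def flq_matrix(reg_emp, nat_emp, delta=0.3):
    """Flegg location quotient adjustment matrix (sketch).

    reg_emp, nat_emp : sectoral employment vectors (region, nation).
    delta            : FLQ parameter controlling regional-size sensitivity.
    """
    slq = (reg_emp / reg_emp.sum()) / (nat_emp / nat_emp.sum())
    cilq = slq[:, None] / slq[None, :]          # CILQ_ij = SLQ_i / SLQ_j
    np.fill_diagonal(cilq, slq)                 # convention: diagonal uses SLQ
    lam = np.log2(1 + reg_emp.sum() / nat_emp.sum()) ** delta
    return np.minimum(cilq * lam, 1.0)          # scale down, never up

# Regionalize an invented national technical-coefficient matrix A.
A = np.array([[0.20, 0.10],
              [0.30, 0.25]])
reg = np.array([100.0, 50.0])
nat = np.array([1000.0, 200.0])
A_region = A * flq_matrix(reg, nat)
```

The δ-dependence the abstract mentions is visible here: a larger `delta` shrinks the size factor `lam`, scaling regional coefficients down more aggressively and pushing the derived multipliers toward underestimation.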
605.
Trends and cycles in economic time series: A Bayesian approach
Trends and cyclical components in economic time series are modeled in a Bayesian framework. This enables prior notions about the duration of cycles to be used, while the generalized class of stochastic cycles employed allows the possibility of relatively smooth cycles being extracted. The posterior distributions of such underlying cycles can be very informative for policy makers, particularly with regard to the size and direction of the output gap and potential turning points. From the technical point of view a contribution is made in investigating the most appropriate prior distributions for the parameters in the cyclical components and in developing Markov chain Monte Carlo methods for both univariate and multivariate models. Applications to US macroeconomic series are presented.
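The stochastic cycle component mentioned above can be simulated, in its basic first-order form, as a damped stochastic rotation; the parameter values here (period 40, damping 0.95) are illustrative, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(5)

def stochastic_cycle(n, rho=0.95, lam=2 * np.pi / 40, sigma=0.5):
    """First-order stochastic cycle: a damped rotation plus noise.

    rho   : damping factor (|rho| < 1 keeps the cycle stationary)
    lam   : frequency, here implying a period of 40 observations
    sigma : standard deviation of the disturbances
    """
    psi = np.zeros((n, 2))                     # (cycle, auxiliary) state
    R = rho * np.array([[np.cos(lam), np.sin(lam)],
                        [-np.sin(lam), np.cos(lam)]])
    for t in range(1, n):
        psi[t] = R @ psi[t - 1] + sigma * rng.normal(size=2)
    return psi[:, 0]                           # the observable cycle

cycle = stochastic_cycle(200)
```

In a Bayesian treatment of the kind the abstract describes, priors would be placed on `rho`, `lam`, and `sigma` (a prior on `lam` encodes beliefs about cycle duration), with the posterior explored by MCMC.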
606.
Jinhee Jo, Global Economic Review, 2016, 45(3), 294-309
This paper provides a practical guide to Bayesian estimation of simultaneous entry games of complete information with heterogeneous firms. Bayesian inference requires computation of the likelihood, which is carried out by simulating unobservables. To avoid errors from finite simulations, we apply the pseudo-marginal approach of Andrieu and Roberts (2009, "The pseudo-marginal approach for efficient Monte Carlo computations," Annals of Statistics, 37(2), 697-725). We also rely on adaptive Markov chain Monte Carlo algorithms that improve computational performance.
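The pseudo-marginal idea can be illustrated on a toy latent-variable model (not the entry game itself): an unbiased simulated likelihood is plugged into Metropolis-Hastings, and the stored noisy estimate for the current state is reused rather than recomputed, which is what keeps the chain's stationary distribution exact:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: y_i = theta + u_i + e_i with u_i ~ N(0,1) unobserved and
# e_i ~ N(0, 0.25).  The likelihood integrates out u; we replace it with
# a simulation estimate that is unbiased for each observation's density.
y = 1.5 + rng.normal(size=30) + 0.5 * rng.normal(size=30)

def log_lik_hat(theta, S=200):
    """Simulated log-likelihood based on S latent draws per observation."""
    u = rng.normal(size=(S, y.size))
    dens = np.exp(-0.5 * ((y - theta - u) / 0.5) ** 2) / (0.5 * np.sqrt(2 * np.pi))
    return np.log(dens.mean(axis=0)).sum()

# Pseudo-marginal Metropolis-Hastings with a flat prior on theta.
theta, ll = 0.0, log_lik_hat(0.0)
draws = []
for _ in range(3000):
    prop = theta + 0.3 * rng.normal()
    ll_prop = log_lik_hat(prop)
    if np.log(rng.uniform()) < ll_prop - ll:   # accept/reject on noisy estimates
        theta, ll = prop, ll_prop              # store the estimate with the state
    draws.append(theta)
post_mean = float(np.mean(draws[1000:]))
```

Note the asymmetry: the proposal's likelihood is freshly estimated every iteration, but the current state's estimate is never refreshed; refreshing it would break the exactness result of Andrieu and Roberts.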
607.
Judith A. Clarke, Marsha J. Courchane, The Journal of Real Estate Finance and Economics, 2005, 30(1), 5-31
We examine the effect of sample design on estimation and inference for disparate treatment in binary logistic models used in fair-lending assessment. Our Monte Carlo experiments provide information on how sample design affects the efficiency (in terms of mean squared error) of estimation of the disparate treatment parameter and the power of a test for statistical insignificance of this parameter. The sample design requires two decision levels: first, the degree of stratification of the loan applicants (Level I Decision) and, second, given a Level I Decision, how to allocate the sample across strata (Level II Decision). We examine four Level I stratification strategies: no stratification (simple random sampling), stratifying loan cases exogenously by race, stratifying cases endogenously by loan outcome (denied or approved), and stratifying exogenously by race and endogenously by outcome. We then consider five Level II methods: proportional, balanced, and three designs based on applied studies. Our results strongly support stratifying by both race and loan outcome, coupled with a balanced sample design, when interest is in estimating, or testing the statistical significance of, the disparate treatment parameter.
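The recommended design (Level I: stratify by race and loan outcome; Level II: balanced allocation) can be sketched as follows on an invented applicant population:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical applicant population: race group (0/1) and loan outcome
# (0 = approved, 1 = denied), with denial rarer for group 0.
N = 10_000
race = rng.integers(0, 2, N)
denied = (rng.uniform(size=N) < np.where(race == 1, 0.30, 0.15)).astype(int)

def balanced_sample(race, denied, n_total=400):
    """Level I: stratify by race x outcome; Level II: balanced allocation."""
    strata = [(r, d) for r in (0, 1) for d in (0, 1)]
    per_stratum = n_total // len(strata)       # equal n in every stratum
    idx = []
    for r, d in strata:
        members = np.flatnonzero((race == r) & (denied == d))
        idx.append(rng.choice(members, size=per_stratum, replace=False))
    return np.concatenate(idx)

sample = balanced_sample(race, denied)
```

Compared with simple random sampling, the balanced design oversamples the rare strata (here, denials), which is what drives the efficiency gains for the disparate treatment parameter that the study reports.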
608.
Given a multi-dimensional Markov diffusion X, the Malliavin integration by parts formula provides a family of representations of the conditional expectation E[g(X_2) | X_1]. The different representations are determined by localizing functions. We discuss the problem of variance reduction within this family. We characterize an exponential function as the unique integrated mean-square-error minimizer among the class of separable localizing functions. For general localizing functions, we prove existence and uniqueness of the optimal localizing function in a suitable Sobolev space. We also provide a PDE characterization of the optimal solution, which allows us to draw the following observation: the separable exponential function does not minimize the integrated mean square error, except in the trivial one-dimensional case. We provide an application to a portfolio allocation problem, by use of the dynamic programming principle.
Mathematics Subject Classification: 60H07, 65C05, 49-00
JEL Classification: G10, C10
The authors gratefully acknowledge the comments raised by an anonymous referee, which helped in understanding the existence result of Sect. 4.2 of this paper.
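A hedged sketch of the kind of representation at stake, in the one-dimensional case (notation illustrative, not the paper's): H denotes the Heaviside function and π a weight produced by the Malliavin integration by parts; subtracting a suitable localizing function from H leaves the representation valid while changing its variance, which is the degree of freedom being optimized.

```latex
% Conditional expectation as a ratio of unconditional expectations,
% obtained by integration by parts (one-dimensional sketch):
E\bigl[\,g(X_2)\,\big|\,X_1 = x\,\bigr]
  \;=\; \frac{E\bigl[\,g(X_2)\,H(X_1 - x)\,\pi\,\bigr]}
             {E\bigl[\,H(X_1 - x)\,\pi\,\bigr]}
```

Both expectations on the right can be estimated by plain Monte Carlo, which is why the variance of the representation, and hence the choice of localizing function, matters in practice.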
609.
Many empirical studies suggest that the distribution of risk factors has heavy tails. It is nonetheless common to assume that the underlying risk factors follow a multivariate normal distribution, an assumption in conflict with this empirical evidence. We consider a multivariate t distribution to capture the heavy tails, and a quadratic function of the changes in the risk factors is used for a non-linear asset. Although Monte Carlo analysis is by far the most powerful method to evaluate a portfolio's Value-at-Risk (VaR), a major drawback of this method is that it is computationally demanding. In this paper, we first transform the assets into the risk on the returns by using a quadratic approximation for the portfolio. Second, we model the returns' risk factors using a multivariate normal as well as a multivariate t distribution. Then we provide a bootstrap algorithm with importance resampling and develop the Laplace method to improve the efficiency of the simulation, in order to estimate the portfolio loss probability and evaluate the portfolio VaR. Importance sampling is a powerful tool here because it reduces the number of random number generations needed in the bootstrap setting. In the simulation study and sensitivity analysis of the bootstrap method, we observe that the estimates of the quantile and tail probability obtained with importance resampling are more efficient than those of the naive Monte Carlo method. We also note that these estimates are not sensitive to the estimated parameters of the multivariate normal and multivariate t distributions.
The research of Shih-Kuei Lin was partially supported by the National Science Council under grant NSC 93-2146-H-259-023.
The research of Cheng-Der Fuh was partially supported by the National Science Council under grant NSC 94-2118-M-001-028.
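As a baseline for the methods discussed above, a plain (naive) Monte Carlo VaR estimate for a quadratic (delta-gamma) portfolio with multivariate t risk factors can be sketched as follows; all portfolio parameters are invented, and this is the unimproved estimator that importance resampling is designed to beat:

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented delta-gamma (quadratic) portfolio in two risk factors:
# loss L = -(delta' dX + 0.5 * dX' Gamma dX).
delta = np.array([1.0, -0.5])
Gamma = np.array([[-0.20, 0.05],
                  [0.05, -0.10]])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])
nu = 5                                       # heavy tails: 5 degrees of freedom

def mvt(n):
    """Multivariate t draws: a normal scaled by sqrt(nu / chi-square)."""
    z = rng.multivariate_normal(np.zeros(2), Sigma, size=n)
    w = rng.chisquare(nu, size=n) / nu
    return z / np.sqrt(w)[:, None]

dX = mvt(200_000)
loss = -(dX @ delta + 0.5 * np.einsum("ni,ij,nj->n", dX, Gamma, dX))
var_99 = np.quantile(loss, 0.99)             # naive Monte Carlo 99% VaR
```

The computational burden the abstract refers to is visible here: an accurate tail quantile needs a very large number of draws, since only about 1% of them land beyond the 99% VaR; importance sampling concentrates draws in that tail instead.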
610.