Full-text availability: 1,487 subscription articles; 32 free; 7 domestically free.
By subject area: Finance & banking 175; Industrial economics 65; Planning & management 416; Economics 288; General 83; Transport economics 52; Tourism economics 29; Trade economics 250; Agricultural economics 56; Economic overview 110; Information industry economics 2.
By publication year: 2024: 3; 2023: 27; 2022: 37; 2021: 63; 2020: 86; 2019: 64; 2018: 38; 2017: 56; 2016: 38; 2015: 35; 2014: 69; 2013: 85; 2012: 112; 2011: 133; 2010: 69; 2009: 91; 2008: 82; 2007: 79; 2006: 89; 2005: 53; 2004: 36; 2003: 37; 2002: 25; 2001: 27; 2000: 13; 1999: 9; 1998: 13; 1997: 12; 1996: 13; 1995: 4; 1994: 3; 1993: 5; 1992: 6; 1991: 3; 1990: 3; 1989: 1; 1988: 2; 1986: 1; 1985: 3; 1982: 1.
Sort order: 1,526 results in total; search time: 10 ms.
21.
Selecting the most promising candidates to fill an open position can be a difficult task when there are many applicants. Each applicant achieves certain performance levels in various categories, and the resulting information can be overwhelming. We demonstrate how data envelopment analysis (DEA) can be used as a fair screening and sorting tool to support the candidate selection and decision-making process. Each applicant is viewed as an entity with multiple achievements. Without any a priori preference or information on the multiple achievements, DEA identifies the non-dominated solutions, which, in our case, represent the “best” candidates. A DEA-aided recruiting process was developed that (1) determines the performance levels of the “best” candidates relative to other applicants; (2) evaluates the degree of excellence of the “best” candidates’ performance; (3) forms consistent tradeoff information on multiple recruiting criteria among search committee members; and then (4) clusters the applicants.
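The screening step rests on the idea of non-domination: an applicant is screened out only if some other applicant is at least as good on every criterion and strictly better on at least one. The sketch below illustrates that filter on hypothetical achievement data (the scores are invented for illustration); it is not the authors' full DEA-aided process, which also scores and clusters candidates.

```python
import numpy as np

def non_dominated(scores: np.ndarray) -> np.ndarray:
    """Return a boolean mask of non-dominated rows.

    scores: (n_applicants, n_criteria) array where higher is better.
    An applicant is dominated if another applicant is at least as good
    on every criterion and strictly better on at least one.
    """
    n = scores.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(scores[j] >= scores[i]) and np.any(scores[j] > scores[i]):
                mask[i] = False
                break
    return mask

# Hypothetical achievement levels for five applicants on three criteria
applicants = np.array([
    [85, 80, 3],   # applicant A
    [90, 60, 5],   # applicant B
    [70, 95, 2],   # applicant C
    [70, 65, 1],   # applicant D (dominated by A)
    [88, 72, 4],   # applicant E
])
print(non_dominated(applicants))  # -> [ True  True  True False  True]
```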
22.
Data envelopment analysis (DEA) is a non-parametric approach for measuring the relative efficiencies of peer decision-making units (DMUs). In recent years, it has been widely used to evaluate two-stage systems under different organizational mechanisms. This study modifies the conventional leader–follower DEA models for two-stage systems by considering the uncertainty of the data. The dual deterministic linear models are first constructed from the stochastic CCR models under the assumption that all components of inputs, outputs, and intermediate products depend only on some basic stochastic factors, which follow continuous and symmetric distributions with nonnegative compact supports. The stochastic leader–follower DEA models are then developed for measuring the efficiencies of the two stages. The stochastic efficiency of the whole system can be uniquely decomposed into the product of the efficiencies of the two stages. Relationships between the stochastic efficiencies obtained from the stochastic CCR and stochastic leader–follower DEA models are also discussed. An example of commercial banks in China is analyzed using the proposed models under different risk levels.
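Both the leader and follower scores in such models build on the standard CCR linear program. As a point of reference only, the sketch below solves the deterministic, single-stage, input-oriented CCR envelopment model with scipy.optimize.linprog; it is not the paper's stochastic two-stage formulation, and the DMU data are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.

    X: (m_inputs, n_dmus) input matrix, Y: (s_outputs, n_dmus) output matrix.
    Solves  min theta  s.t.  X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o],  lam >= 0,
    with decision vector z = [theta, lam_1, ..., lam_n].
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                                    # minimize theta
    # Input rows:  -theta * x_io + sum_j lam_j * x_ij <= 0
    A_in = np.hstack([-X[:, [o]], X])
    b_in = np.zeros(m)
    # Output rows: -sum_j lam_j * y_rj <= -y_ro   (i.e. outputs at least y_ro)
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, o]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (n + 1),
                  method="highs")
    return res.x[0]

# Hypothetical data: 2 inputs and 1 output for 4 DMUs (e.g. bank branches)
X = np.array([[2.0, 4.0, 3.0, 5.0],
              [3.0, 2.0, 4.0, 6.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
for o in range(X.shape[1]):
    print(f"DMU {o}: efficiency = {ccr_input_efficiency(X, Y, o):.3f}")
```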
23.
The result shows that the null hypothesis cannot be rejected: there is no significant difference in the operating efficiency of universities across regions. In other words, although the central and western universities perform slightly better than the eastern universities in terms of average efficiency, the differences among the eastern, central, and western regions are not statistically significant. Therefore, the efficiency of universities in different regions shows a balanced development trend.
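The excerpt does not name the test that was applied. As an assumed illustration only, a Kruskal-Wallis test, a common choice for comparing DEA efficiency scores across groups, can be run on the regional score samples (the scores below are hypothetical):

```python
from scipy import stats

# Hypothetical DEA efficiency scores for universities in three regions
east    = [0.82, 0.75, 0.91, 0.68, 0.77, 0.80]
central = [0.85, 0.79, 0.88, 0.74, 0.81]
west    = [0.83, 0.78, 0.90, 0.72, 0.86]

# Kruskal-Wallis H-test: H0 = the three groups share the same distribution
stat, p = stats.kruskal(east, central, west)
print(f"H = {stat:.3f}, p = {p:.3f}")
# A large p-value (e.g. p > 0.05) means H0 cannot be rejected:
# no statistically significant efficiency difference among regions.
```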
24.
Data with large dimensions bring various problems to the application of data envelopment analysis (DEA). In this study, we focus on a “big data” problem related to the considerably large dimensions of the input–output data. The four most widely used approaches to dimension reduction in DEA are compared via Monte Carlo simulation, including principal component analysis (PCA-DEA), which is based on the idea of aggregating inputs and outputs, as well as efficiency contribution measurement (ECM), average efficiency measure (AEC), and regression-based detection (RB), which are based on the idea of variable selection. We compare the performance of these methods under different scenarios, using a new comparison benchmark for the simulation test. In addition, we discuss the effect of initial variable selection in RB for the first time. Based on the results, we offer more reliable guidelines on how to choose an appropriate method.
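To make the aggregation idea behind PCA-DEA concrete, the minimal sketch below compresses a set of correlated DEA variables into a few principal components and shifts them to be strictly positive (DEA models generally require non-negative data). It only illustrates the preprocessing step, not the paper's Monte Carlo comparison, and the data are simulated for the example.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def reduce_dims(data: np.ndarray, var_kept: float = 0.95) -> np.ndarray:
    """Aggregate correlated DEA variables into principal components.

    data: (n_dmus, n_variables). Components explaining `var_kept` of the
    variance are retained and shifted above zero before being fed to DEA.
    """
    z = StandardScaler().fit_transform(data)
    pcs = PCA(n_components=var_kept).fit_transform(z)
    return pcs - pcs.min(axis=0) + 1e-3           # shift each component above zero

# Hypothetical example: 20 DMUs described by 8 highly correlated input variables
rng = np.random.default_rng(0)
base = rng.uniform(1, 10, size=(20, 2))
inputs = np.hstack([base + 0.1 * rng.normal(size=(20, 2)) for _ in range(4)])
reduced_inputs = reduce_dims(inputs)
print(inputs.shape, "->", reduced_inputs.shape)   # e.g. (20, 8) -> (20, 2)
```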
25.
This work presents a robust approach to labor share analysis. Estimating the labor share involves various complexities related to the nature of the data sets to be analyzed. Typically, the labor share is evaluated using discriminant analysis and linear or generalized linear models, which do not take into account the possible presence of outliers. Moreover, the variables to be considered are often characterized by a high-dimensional structure. The proposed approach aims to improve the estimation of the model by using robust multivariate regression techniques and data transformation.
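The abstract does not specify which robust estimator or transformation is used. As one assumed combination, the sketch below fits a Huber M-estimator (statsmodels RLM) to log-transformed firm data and compares its slope with ordinary least squares, which is sensitive to the injected outliers; all figures are simulated for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Hypothetical firm-level data: value added and wage bill (labor share = wages / value added)
n = 200
value_added = rng.lognormal(mean=4.0, sigma=0.6, size=n)
wages = 0.6 * value_added * rng.lognormal(mean=0.0, sigma=0.15, size=n)
wages[:5] *= 8                                     # a few gross outliers

# Log transformation reduces skewness before fitting
X = sm.add_constant(np.log(value_added))
y = np.log(wages)

ols = sm.OLS(y, X).fit()                           # outlier-sensitive baseline
rlm = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()  # robust M-estimation
print("OLS slope:", round(ols.params[1], 3), " RLM slope:", round(rlm.params[1], 3))
```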
26.
Analyzing anomalies and volatility in macroeconomic statistical data has become one of the core topics in research on data quality. From the perspective of the economic system, this paper applies a stochastic variance-inflation model to conduct a comprehensive analysis of the data quality of 36 Chinese macroeconomic time series and identifies the characteristics and patterns of data anomalies and fluctuations. The findings show that most outliers appear, to a greater or lesser extent, in clusters and are deeply interrelated, and their occurrence is mostly associated with various historical factors and external shocks. Almost all of the original series exhibit significant skewness, and excess kurtosis is also evident, so the hypothesis that they follow a normal distribution is significantly rejected. The characteristics of the nominal series are affected by outliers to an even greater extent.
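The normality finding above rests on skewness and excess kurtosis. As a hedged illustration (not the paper's variance-inflation model), the sketch below computes both statistics and a Jarque-Bera test on a simulated series containing a cluster of outliers, mimicking the shock-driven clustering described in the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical monthly macro series with a few clustered outliers (external shocks)
series = rng.normal(loc=0.8, scale=0.5, size=240)
series[120:124] += 4.0                             # a cluster of anomalous observations

skew = stats.skew(series)
kurt = stats.kurtosis(series)                      # excess kurtosis (0 for a normal)
jb_stat, jb_p = stats.jarque_bera(series)
print(f"skewness={skew:.2f}, excess kurtosis={kurt:.2f}, JB p-value={jb_p:.4f}")
# A small p-value rejects normality, consistent with pronounced skewness/kurtosis.
```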
27.
Research examining macroeconomic data for developed economies suggests that an understanding of the nature of data revisions is important both for producing accurate macroeconomic forecasts and for forecast evaluation. This paper focuses on Chinese data, for which there has been substantial debate about data quality for some time. The key finding is that, while the Chinese macroeconomic data revisions are indeed not well behaved, they are not very different from similarly timed U.S. macroeconomic data revisions. The positive bias in Chinese real GDP revisions is a result of the fast-growing service sector, which is notably hard to measure in real time. A better understanding of the revision process is particularly helpful for studies of the forecast errors from surveys of forecasters, where the choice of the vintage used for outcomes may affect the estimated forecast errors.
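One simple way to quantify such a bias is to compute the mean revision between later-vintage and first-release figures and test whether it differs from zero. The sketch below does this with a one-sample t-test on invented growth figures; it is a generic illustration, not the paper's methodology or data.

```python
import numpy as np
from scipy import stats

# Hypothetical quarterly real GDP growth (%, year-on-year): first release vs. latest vintage
first_release = np.array([6.8, 6.9, 6.7, 6.5, 6.4, 6.6, 6.2, 6.0, 6.1, 5.9])
latest        = np.array([7.0, 7.1, 6.9, 6.8, 6.6, 6.8, 6.4, 6.1, 6.3, 6.0])

revisions = latest - first_release                 # positive values = upward revisions
t_stat, p_val = stats.ttest_1samp(revisions, 0.0)
print(f"mean revision = {revisions.mean():.2f} pp, t = {t_stat:.2f}, p = {p_val:.4f}")
# A significantly positive mean revision indicates that first releases are
# systematically revised upward in later vintages.
```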
28.
This article deals with modelling the static and dynamic technical efficiency of municipal libraries in municipalities with 1,000–5,000 inhabitants. The aim is to determine the level of technical efficiency, and the factors that influence the results of the static and dynamic efficiency modelling, for 34 selected municipal libraries in the years 2011 and 2015. The first model tests the technical efficiency of the conventional services of public libraries. The second model tests the technical efficiency of the municipal libraries’ operation. The third model tests the technical efficiency of the key revenues and expenditures. In the static models, the estimated average technical efficiency of the municipal libraries lies in the interval 0.691–0.759 for the input-oriented models and 1.413–2.005 for the output-oriented models. In the dynamic models, the majority of municipal libraries showed lower technical efficiency and productivity in 2015 than in 2011. The factors influencing the level of efficiency and its course include the inputs and the outputs, and their combinations within the individual models.
29.
Ratio-type financial indicators are the most popular explanatory variables in bankruptcy prediction models. These measures often exhibit heavily skewed distributions because of the presence of outliers. In the absence of a clear definition of outliers, ad hoc approaches can be found in the literature for identifying and handling extreme values; however, it is not clear how these different approaches affect the predictive power of models. There seems to be a consensus in the literature on the necessity of handling outliers; at the same time, it is not clear how the extreme values to be handled should be defined in order to maximize the predictive power of models. There are two possible ways to reduce the bias originating from outliers: omission and winsorization. Since the first approach has been examined previously in the literature, we turn our attention to the latter. We applied the most popular classification methodologies in this field: discriminant analysis, logistic regression, decision trees (CHAID and CART) and neural networks (multilayer perceptron). We assessed the predictive power of the models using tenfold stratified cross-validation and the area under the ROC curve. We analyzed the effect of winsorization at 1, 3 and 5% and at 2 and 3 standard deviations; furthermore, we discretized the range of each variable with the CHAID method and used the ordinal measures so obtained instead of the original financial ratios. We found that this latter data preprocessing approach is the most effective in the case of our dataset. To check the robustness of our results, we carried out the same empirical research on the publicly available Polish bankruptcy dataset from the UCI Machine Learning Repository. We obtained very similar results on both datasets, which indicates that the CHAID-based categorization of financial ratios is an effective way of handling outliers with respect to the predictive performance of bankruptcy prediction models.
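To make the evaluation setup concrete, the minimal sketch below winsorizes a skewed financial ratio at the 1% level and compares the cross-validated AUC of a logistic regression fitted to the raw and the winsorized variable, using tenfold stratified cross-validation as in the abstract. The data are simulated, the single-ratio logistic model stands in for the authors' broader set of classifiers, and the CHAID discretization step is not reproduced here.

```python
import numpy as np
from scipy.stats.mstats import winsorize
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(7)

# Hypothetical data: one heavily skewed financial ratio and a low default rate
n = 1000
ratio = rng.lognormal(mean=0.0, sigma=1.5, size=n)          # skewed, with extreme values
y = (rng.uniform(size=n) < 1 / (1 + np.exp(2.5 - 0.4 * np.log(ratio)))).astype(int)

X_raw = ratio.reshape(-1, 1)
X_win = np.asarray(winsorize(ratio, limits=(0.01, 0.01))).reshape(-1, 1)  # 1% winsorization

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
clf = LogisticRegression(max_iter=1000)
auc_raw = cross_val_score(clf, X_raw, y, cv=cv, scoring="roc_auc").mean()
auc_win = cross_val_score(clf, X_win, y, cv=cv, scoring="roc_auc").mean()
print(f"AUC raw = {auc_raw:.3f}, AUC winsorized at 1% = {auc_win:.3f}")
```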
30.
The latest development in the asset pricing literature is the emergence of empirical asset pricing models comprising q-factors (profitability and investment factors) in conjunction with other factors. However, as with the older empirical models, there is scepticism regarding the application of these newer factor models because of the debate surrounding the explanatory power of such empirically inspired asset pricing models. This review attempts to synthesize studies pertaining to four alternative explanations of the asset pricing models comprising the q-factors (profitability and investment): the data snooping hypothesis, the risk-based explanation, the irrational investor behaviour explanation, and the interpretation that the combination of the risk-free asset and the factors comprising the model spans the mean-variance efficient tangency portfolio that prices the universe of assets.