Similar Literature
20 similar documents found.
1.
The proportional odds model is the most widely used model when the response has ordered categories. In the case of a high-dimensional predictor structure, the common maximum likelihood approach typically fails when all predictors are included. A boosting technique, pomBoost, is proposed to fit the model by implicitly selecting the influential predictors. The approach distinguishes between metric and categorical predictors. In the case of categorical predictors, where each predictor relates to a set of parameters, the objective is to select all the associated parameters simultaneously. In addition, the approach distinguishes between nominal and ordinal predictors. For ordinal predictors, the proposed technique exploits their ordering by penalizing the difference between the parameters of adjacent categories. The technique also has a provision to include mandatory predictors (if any) that must be part of the final sparse model. The performance of the proposed boosting algorithm is evaluated in a simulation study and in applications with respect to mean squared error and prediction error. Hit rates and false alarm rates are used to judge the performance of pomBoost in selecting the relevant predictors.
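
As a small illustration of the penalty idea described above (not the authors' pomBoost implementation), the following Python sketch computes a ridge-type penalty on the differences between coefficients of adjacent categories of an ordinal predictor; the function name and example values are hypothetical.

```python
import numpy as np

def adjacent_difference_penalty(beta, lam=1.0):
    """Ridge-type penalty on differences of coefficients of adjacent
    categories of an ordinal predictor: lam * sum_k (beta_k - beta_{k-1})^2.

    beta : 1-D array of dummy coefficients ordered by category.
    """
    beta = np.asarray(beta, dtype=float)
    diffs = np.diff(beta)            # beta_k - beta_{k-1}
    return lam * np.sum(diffs ** 2)

# Example: coefficients of a 5-category ordinal predictor
print(adjacent_difference_penalty([0.0, 0.1, 0.15, 0.6, 0.62], lam=2.0))
```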

2.
刘天卓 (Liu Tianzhuo), 《价值工程》 (Value Engineering), 2010, 29(20): 126-127
Reasonably ranking research projects is a prerequisite for project review and screening and an important part of research management. Existing studies are all based on expert scoring; because expert scores are subjective, the resulting rankings are often inaccurate. Based on information entropy theory and an ordered expert voting model, this paper proposes a method for ranking multiple research projects with multiple experts. Because the data underlying the method (i.e., expert rankings) are more reliable, it yields more objective ranking results. A case study confirms the soundness of the method.
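
To make the general idea concrete, here is a minimal Python sketch of one way to aggregate expert rankings with entropy-based weights; it is an illustration under simplifying assumptions (Borda-type scores, Shannon-entropy weights for experts), not the paper's exact formulation.

```python
import numpy as np

def entropy_weighted_ranking(rank_matrix):
    """Aggregate expert rankings into a project ordering.

    rank_matrix[i, j] = rank that expert j gives project i (1 = best).
    Ranks are converted to Borda-type scores; each expert is then weighted
    by the entropy-weight method (experts whose scores discriminate more
    between projects receive larger weights).
    """
    ranks = np.asarray(rank_matrix, dtype=float)
    n_projects, n_experts = ranks.shape

    # Borda-type scores: best rank -> highest score
    scores = n_projects - ranks + 1.0

    # Normalise each expert's scores to a probability distribution
    p = scores / scores.sum(axis=0, keepdims=True)

    # Shannon entropy per expert, normalised to [0, 1]
    entropy = -(p * np.log(p)).sum(axis=0) / np.log(n_projects)

    # Entropy weights: lower entropy -> more discriminating -> larger weight
    d = 1.0 - entropy
    weights = d / d.sum() if d.sum() > 0 else np.full(n_experts, 1.0 / n_experts)

    total = scores @ weights
    order = np.argsort(-total)               # best project first
    return total, order

# Example: 4 projects ranked by 3 experts
ranks = [[1, 2, 1],
         [2, 1, 3],
         [3, 4, 2],
         [4, 3, 4]]
total, order = entropy_weighted_ranking(ranks)
print(total, order)
```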

3.
A technology only truly has value once it is applied in practice. Forestry visualization is a little-known emerging technology; although it has seen some applications in certain fields, their scale is still small and their depth shallow, so there remains considerable room for extending its adoption. Starting from the perspective of promotion models for forestry visualization technology, this paper studies its popularization in depth: drawing on the promotion models of other science and technology fields and considering the characteristics of forestry visualization itself, it summarizes and classifies the promotion models for the technology and proposes innovations.

4.
This paper proposes a cluster HAR-type model that adopts the hierarchical clustering technique to form the cascade of heterogeneous volatility components. In contrast to the conventional HAR-type models, the proposed cluster models are based on the relevant lagged volatilities selected by the cluster group Lasso. Our simulation evidence suggests that the cluster group Lasso dominates other alternatives in terms of variable screening and that the cluster HAR serves as the top performer in forecasting future realized volatility. The forecasting superiority of the cluster models is also demonstrated in an empirical application where the highest forecasting accuracy tends to be achieved by separating the jumps from the continuous sample path volatility process.
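
For reference, the sketch below fits the conventional HAR-RV baseline (daily, weekly and monthly volatility components) by least squares; it does not implement the cluster group Lasso selection step, and the toy data are synthetic.

```python
import numpy as np

def har_design(rv):
    """Build the standard HAR-RV design: daily, weekly (5-day) and
    monthly (22-day) average realized volatilities as predictors of
    next-day RV. This is the conventional HAR baseline, not the
    cluster/group-Lasso variant described in the abstract.
    """
    rv = np.asarray(rv, dtype=float)
    T = len(rv)
    rows, y = [], []
    for t in range(21, T - 1):
        daily = rv[t]
        weekly = rv[t - 4:t + 1].mean()
        monthly = rv[t - 21:t + 1].mean()
        rows.append([1.0, daily, weekly, monthly])
        y.append(rv[t + 1])
    return np.array(rows), np.array(y)

rng = np.random.default_rng(0)
rv = np.abs(rng.standard_normal(500)) * 0.05 + 0.1   # toy realized-volatility series
X, y = har_design(rv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("HAR coefficients (const, daily, weekly, monthly):", beta)
```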

5.
This paper is concerned with statistical inference on seemingly unrelated varying coefficient partially linear models. By combining the local polynomial and profile least squares techniques, and estimating the contemporaneous correlation, we propose a class of weighted profile least squares estimators (WPLSEs) for the parametric components. It is shown that the WPLSEs achieve the semiparametric efficiency bound and are asymptotically normal. For the non-parametric components, by applying the undersmoothing technique and taking the contemporaneous correlation into account, we propose an efficient local polynomial estimation. The resulting estimators are shown to have mean-squared errors smaller than those of estimators that neglect the contemporaneous correlation. In addition, a class of variable selection procedures is developed for simultaneously selecting significant variables and estimating unknown parameters, based on the non-concave penalized and weighted profile least squares techniques. With a proper choice of regularization parameters and penalty functions, the proposed variable selection procedures perform as efficiently as if one knew the true submodels. The proposed methods are evaluated in extensive simulation studies and applied to a set of real data.

6.
This paper proposes an integrated method combining the SERVQUAL model, the analytic hierarchy process, and the technique for order preference by similarity to ideal solution (AHP-TOPSIS) to evaluate service quality among employment-related government agencies. A case study covering five government agencies was conducted in the Philippines to establish the critical dimensions attributed to the modified SERVQUAL model. It is found that the responsiveness dimension needs the most improvement in delivering quality service, while promptness of service is considered the most important sub-dimension. Furthermore, the proposed approach can enable government administrators to direct their efforts and resources toward improving service quality along the critical dimensions and sub-dimensions.
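
The following Python sketch shows the classical TOPSIS ranking step that such an AHP-TOPSIS evaluation relies on; the agency scores and weights in the example are hypothetical, and the AHP weight-derivation step is assumed to have been done beforehand.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Rank alternatives with the classical TOPSIS procedure.

    decision_matrix : (alternatives x criteria) performance scores.
    weights         : criterion weights summing to 1 (e.g., from AHP).
    benefit         : boolean per criterion; True = larger is better.
    """
    X = np.asarray(decision_matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    benefit = np.asarray(benefit, dtype=bool)

    # Vector-normalise each criterion column, then apply weights
    V = w * X / np.linalg.norm(X, axis=0)

    # Ideal and anti-ideal solutions
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    closeness = d_minus / (d_plus + d_minus)
    return closeness  # larger = closer to the ideal solution

# Example: 3 agencies scored on 4 SERVQUAL-style criteria (all benefit-type)
scores = [[4.1, 3.8, 4.0, 3.5],
          [3.6, 4.2, 3.9, 4.0],
          [4.4, 3.5, 3.7, 3.8]]
print(topsis(scores, weights=[0.4, 0.3, 0.2, 0.1], benefit=[True] * 4))
```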

7.
The purpose of this paper is to investigate the role of the process approach in the implementation of an optimisation programme. It is proposed that applying a process model of a company overcomes functional boundaries and, consequently, the sub-optimisation of logistics system performance. A process model of the internal logistics system of a wholesale trading company, based on the Supply Chain Operations Reference (SCOR) model, has been developed. Relations between business functions, processes and performance indicators (metrics) have been analysed. An optimisation model has been developed, and a comparative analysis of the possible results of optimising processes versus functions has been conducted. The results demonstrate that optimisation of functions yields a sub-optimal solution, caused by functional boundaries, whereas optimisation of processes yields an optimal one. The research provides the rationale for implementing the process approach in order to make optimal decisions regarding logistics activities, together with a technique for the practical implementation of an optimisation programme.

8.
As the volume and complexity of data continue to grow, more attention is being focused on solving so-called big data problems. One field where this focus is pertinent is credit card fraud detection. Model selection approaches can identify key predictors for preventing fraud. Stagewise selection is a classic model selection technique that has experienced revitalized interest due to its computational simplicity and flexibility. Over a sequence of simple learning steps, stagewise techniques build a sequence of candidate models that is less greedy than the stepwise approach. This paper introduces a new stochastic stagewise technique that integrates a sub-sampling approach into the stagewise framework, yielding a simple tool for model selection when working with big data. Simulation studies demonstrate that the proposed technique offers a reasonable trade-off between computational cost and predictive performance. We apply the proposed approach to synthetic credit card fraud data to demonstrate the technique's application.
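
A minimal sketch of the idea, assuming a forward-stagewise linear model in which the most correlated predictor is chosen on a random subsample at every step; details such as step size and subsample fraction are illustrative, not the paper's exact algorithm.

```python
import numpy as np

def stochastic_stagewise(X, y, n_steps=500, eps=0.01, subsample=0.5, seed=0):
    """Forward-stagewise linear regression with a sub-sampling twist:
    at each step the predictor most correlated with the residual is chosen
    on a random subsample of the rows, and its coefficient is nudged by eps.
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    n, p = X.shape
    beta = np.zeros(p)
    residual = y.copy()

    for _ in range(n_steps):
        idx = rng.choice(n, size=max(1, int(subsample * n)), replace=False)
        corr = X[idx].T @ residual[idx]          # correlation with residual on the subsample
        j = int(np.argmax(np.abs(corr)))
        step = eps * np.sign(corr[j])
        beta[j] += step
        residual -= step * X[:, j]               # update full-sample residual
    return beta

# Toy example: only the first 3 of 10 predictors matter
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
y = X[:, 0] * 2 - X[:, 1] + 0.5 * X[:, 2] + 0.1 * rng.standard_normal(200)
print(np.round(stochastic_stagewise(X, y), 2))
```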

9.
Appropriate modelling of Likert-type items should account for the scale level and the specific role of the neutral middle category, which is present in most Likert-type items in common use. Powerful hierarchical models that account for both aspects are proposed. To avoid biased estimates, the models separate the neutral category when modelling the effects of explanatory variables on the outcome. The main model that is advocated uses binary response models as building blocks in a hierarchical way. It has the advantage that it can easily be extended to include response style effects and non-linear smooth effects of explanatory variables. By a simple transformation of the data, available software for binary response variables can be used to fit the model. The proposed hierarchical model can be used to investigate the effects of covariates on single Likert-type items and also for the analysis of a combination of items. For both cases, estimation tools are provided. The usefulness of the approach is illustrated by applying the methodology to a large data set.
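
As a rough illustration of the binary building-block idea for a 5-point item with a neutral middle category, the sketch below fits two logistic regressions (neutral vs. non-neutral, then direction among non-neutral responses); it is a simplified stand-in, not the paper's full hierarchical specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_hierarchical_likert(y, X):
    """Hierarchy of binary models for a 5-point Likert item with neutral
    middle category 3:
      stage 1: neutral (y == 3) vs. non-neutral,
      stage 2: among non-neutral responses, agree (y > 3) vs. disagree (y < 3).
    """
    y = np.asarray(y)
    X = np.asarray(X, dtype=float)

    stage1 = LogisticRegression().fit(X, (y == 3).astype(int))

    mask = y != 3
    stage2 = LogisticRegression().fit(X[mask], (y[mask] > 3).astype(int))
    return stage1, stage2

# Toy data: one covariate shifting responses away from neutral and upward
rng = np.random.default_rng(2)
X = rng.standard_normal((300, 1))
latent = 3 + 1.2 * X[:, 0] + rng.standard_normal(300)
y = np.clip(np.round(latent), 1, 5).astype(int)
m1, m2 = fit_hierarchical_likert(y, X)
print(m1.coef_, m2.coef_)
```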

10.
Microaggregation is a popular statistical disclosure control technique for continuous data. The basic principle of microaggregation is to group the observations in a data set and to replace them by their corresponding group means. However, while reducing the disclosure risk of data files, the technique also affects the results of statistical analyses. The paper deals with the impact of microaggregation on multiple linear regression in continuous variables. We show that parameter estimates are biased if the dependent variable is used to form the groups. Using this result, we develop a consistent estimator that removes the aggregation bias, and derive its asymptotic covariance matrix.
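
A minimal sketch of fixed-size univariate microaggregation (sort, group into blocks of k, replace by group means); real microaggregation procedures for multivariate data are more elaborate.

```python
import numpy as np

def univariate_microaggregation(x, k=3):
    """Simple fixed-size microaggregation of a single continuous variable:
    sort the values, form consecutive groups of (at least) k observations,
    and replace each value by its group mean.
    """
    x = np.asarray(x, dtype=float)
    order = np.argsort(x)
    out = np.empty_like(x)
    n = len(x)
    start = 0
    while start < n:
        end = n if n - start < 2 * k else start + k   # last group absorbs the remainder
        idx = order[start:end]
        out[idx] = x[idx].mean()
        start = end
    return out

x = np.array([1.0, 1.2, 9.5, 2.1, 8.8, 9.1, 2.4, 1.9, 8.2])
print(univariate_microaggregation(x, k=3))
```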

11.
This paper proposes a Granger causality test allowing for threshold effects. The proposed test can be conducted on the basis of the threshold autoregressive distributed lag model or the augmented logistic smooth transition autoregressive model. The proposed test is applied to the U.S. civilian unemployment rate, and it is shown that real investment, real GDP and the real interest rate are helpful for improving the in-sample fit of unemployment.
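
As an illustration of the threshold idea, the sketch below runs a regime-dependent Granger-causality F-test in which the lags of the candidate cause enter separately in the two regimes defined by a threshold variable; this simple two-regime specification is a stand-in for the TAR-DL and LSTAR models used in the paper.

```python
import numpy as np
from scipy import stats

def threshold_granger_test(y, x, q, gamma, p=2):
    """Regress y_t on its own lags plus lags of x interacted with the regime
    indicator 1{q_{t-1} > gamma}, and test the regime-specific x-lag
    coefficients jointly against zero with an F statistic.
    """
    y, x, q = (np.asarray(v, dtype=float) for v in (y, x, q))
    T = len(y)
    rows_u, rows_r, yy = [], [], []
    for t in range(p, T):
        ylags = y[t - p:t][::-1]
        xlags = x[t - p:t][::-1]
        regime = 1.0 if q[t - 1] > gamma else 0.0
        rows_r.append(np.r_[1.0, ylags])                                        # restricted model
        rows_u.append(np.r_[1.0, ylags, xlags * regime, xlags * (1 - regime)])  # unrestricted model
        yy.append(y[t])
    Xu, Xr, yy = np.array(rows_u), np.array(rows_r), np.array(yy)

    def ssr(X):
        beta, *_ = np.linalg.lstsq(X, yy, rcond=None)
        resid = yy - X @ beta
        return resid @ resid

    ssr_u, ssr_r = ssr(Xu), ssr(Xr)
    n_restr = 2 * p                               # x lags in both regimes
    dof = len(yy) - Xu.shape[1]
    F = ((ssr_r - ssr_u) / n_restr) / (ssr_u / dof)
    return F, stats.f.sf(F, n_restr, dof)

# Toy example: x Granger-causes y only when q is above the threshold
rng = np.random.default_rng(3)
x = rng.standard_normal(400)
q = rng.standard_normal(400)
y = np.zeros(400)
for t in range(2, 400):
    y[t] = 0.4 * y[t - 1] + (0.8 * x[t - 1] if q[t - 1] > 0 else 0.0) + 0.5 * rng.standard_normal()
print(threshold_granger_test(y, x, q, gamma=0.0, p=2))
```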

12.
In this paper, a new randomized response model is proposed, which is shown to have a Cramer–Rao lower bound of variance that is lower than the one suggested by Singh and Sedory at equal or greater protection of respondents. A new measure of protection of respondents in the setup of the efficient use of two decks of cards, due to Odumade and Singh, is also suggested. The developed Cramer–Rao lower bounds of variance are compared under different situations through exact numerical illustrations. Survey data for estimating the proportion of students who have sometimes driven a vehicle after drinking alcohol and feeling over the legal limit are collected using the proposed randomization device and then analyzed. The proposed randomized response technique is also compared with a black-box technique within the same survey. A method for determining the minimum sample size in randomized response sampling based on a small pilot survey is also given.

13.
In this article, we propose new Monte Carlo methods for computing a single marginal likelihood or several marginal likelihoods for the purpose of Bayesian model comparison. The methods are motivated by Bayesian variable selection, in which the marginal likelihoods of all subset variable models need to be computed. The proposed estimates use only a single Markov chain Monte Carlo (MCMC) output from the joint posterior distribution and do not require knowledge of the specific structure or form of the MCMC sampling algorithm used to generate the sample. The theoretical properties of the proposed method are examined in detail. The applicability and usefulness of the proposed method are demonstrated via ordinal data probit regression models. A real dataset involving ordinal outcomes is used to further illustrate the proposed methodology.

14.
This paper seeks to provide the services sector with a focus on the assessment of quality, and for this purpose a technique that enables a quantitative approach to evaluating quality is proposed. Fuzzy set theory is used to process the data, allowing a more flexible and suitable treatment of the characteristics of the service sector. An extension of the technique for order preference by similarity to the ideal solution is used. By means of an overall evaluation, it informs managers of the distance between the company's current level of quality and that of a company of perfect quality. The same technique is used to detect changes in the level of quality during the period surveyed by means of a stratified assessment. Finally, a practical application of the proposed approach is presented.

15.
This paper illustrates the variant of the PERT technique known as random PERT. The aim of this technique is to help plan the duration of activities, something which can be particularly difficult in psychosocial programs. Thus, this task is often carried out by experts, who know that there are many events which may modify the proposed calendar. The paper includes an empirical illustration of random PERT applied to a physical activity/sports program for elderly people.
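
A minimal Monte Carlo sketch of random PERT under simple assumptions (triangular activity durations, a small hypothetical activity network): each simulation draws durations, propagates finish times through the precedence structure, and the distribution of project durations is summarized.

```python
import numpy as np

# Hypothetical activities: name -> (optimistic, most likely, pessimistic, predecessors)
activities = {
    "A": (2, 4, 8, []),
    "B": (3, 5, 9, []),
    "C": (1, 2, 4, ["A"]),
    "D": (2, 3, 7, ["A", "B"]),
    "E": (1, 2, 3, ["C", "D"]),
}

rng = np.random.default_rng(4)
n_sim = 10_000
durations = np.zeros(n_sim)
for s in range(n_sim):
    finish = {}
    # Insertion order of the dict respects the precedence structure above
    for name, (opt, mode, pess, preds) in activities.items():
        start = max((finish[p] for p in preds), default=0.0)
        finish[name] = start + rng.triangular(opt, mode, pess)
    durations[s] = max(finish.values())

print("mean project duration:", durations.mean())
print("95th percentile:", np.percentile(durations, 95))
```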

16.
Quantitative methods normally neither fully exhaust nor sufficiently reveal the structural information contained in sets of data. The authors therefore introduce a new general technique for analyzing structures that is based on qualitative methods. The proposed technique can be divided into several steps. First, the given structural information is prepared with the help of graph-theoretical tools. The obtained results are then condensed in several steps into manageable vectors (key values), the most important step being the construction of graph-theoretical decay patterns. These strongly condensed data allow the use of statistical methods and offer the chance to compare even a large number of structures simultaneously. After introducing the necessary theoretical tools, the authors present the results of some empirical investigations that showed the usefulness of the proposed technique. Compared with the corresponding quantitative methods, the empirical investigations also showed that the technique is relatively robust with respect to shortcomings in the primary material. This result opens up opportunities for obtaining more up-to-date yet cost-saving structural information.

17.
This paper studies the price difference between the supply phase and the ordering phase in inventory models for perishable products, introduces the concept of two-sided pricing, and proposes a new two-sided pricing inventory model for perishable products. The new model takes profit maximization as its objective and seeks the price-equilibrium and time-equilibrium conditions for the supply and ordering phases. Through rigorous mathematical derivation, the existence and uniqueness of the optimal solution of the objective function are proved. On this basis, a numerical optimization algorithm based on profit maximization is proposed for finding the optimal prices and the optimal time-proportion coefficient. Experimental results show that the numerical algorithm converges quickly and demonstrate the reasonableness and effectiveness of the model.

18.
Factor modeling is a powerful statistical technique that permits common dynamics in a large panel of data to be captured with a few latent variables, or factors, thus alleviating the curse of dimensionality. Despite its popularity and widespread use for applications ranging from genomics to finance, this methodology has remained predominantly linear. This study estimates factors nonlinearly through the kernel method, which allows for flexible nonlinearities while still avoiding the curse of dimensionality. We focus on factor-augmented forecasting of a single time series in a high-dimensional setting, known as diffusion index forecasting in the macroeconomics literature. Our main contribution is twofold. First, we show that the proposed estimator is consistent and that it nests the linear principal component analysis estimator as well as some nonlinear estimators introduced in the literature as special cases. Second, our empirical application to a classical macroeconomic dataset demonstrates that this approach can offer substantial advantages over mainstream methods.
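
A compact sketch of kernel-based diffusion index forecasting using scikit-learn's KernelPCA, assuming an RBF kernel and a simple one-step-ahead regression; the data, kernel choice and tuning are illustrative rather than the paper's estimator.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LinearRegression

def kernel_diffusion_index_forecast(panel, target, n_factors=3, gamma=None):
    """Extract a few nonlinear factors from a large predictor panel with
    kernel PCA, then regress the next-period target on the current factors.
    """
    F = KernelPCA(n_components=n_factors, kernel="rbf", gamma=gamma).fit_transform(panel)
    X, y = F[:-1], target[1:]                 # one-step-ahead alignment
    model = LinearRegression().fit(X, y)
    return model.predict(F[-1:]), model       # forecast from the latest factors

# Toy panel: 120 periods, 40 series driven by two nonlinear common factors
rng = np.random.default_rng(5)
f = rng.standard_normal((120, 2))
loadings = rng.standard_normal((2, 40))
panel = np.tanh(f @ loadings) + 0.1 * rng.standard_normal((120, 40))
target = f[:, 0] ** 2 - f[:, 1] + 0.1 * rng.standard_normal(120)
forecast, _ = kernel_diffusion_index_forecast(panel, target)
print("one-step-ahead forecast:", forecast)
```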

19.
This article describes the application of just-noticeable-difference (JND) techniques to image digital watermarking algorithms. To improve error-correction capability, the watermark is preprocessed with Goppa coding before embedding; a JND-based classifier then classifies the blocks of the host image, and the watermark information is embedded into the host image with block-dependent embedding strengths. Experimental results show that watermarks produced with this algorithm exhibit good visual masking and robustness.

20.
Predictors of gerontechnology acceptance by older Hong Kong Chinese
The aim of this study was to examine the factors which influence the acceptance of gerontechnology by older Hong Kong Chinese. A research model based on the technology acceptance model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT) was proposed. It was empirically tested using survey data collected from 1012 seniors aged 55 and over in Hong Kong. A face-to-face interview technique with a preset questionnaire was employed to collect data, and structural equation modeling (SEM) was used for data analysis. The proposed model could explain 55.4% of the variance in actual usage of gerontechnology. However, in contrast to TAM and UTAUT, significant effects of perceived usefulness, perceived ease of use, and attitude towards use on usage behavior were not found in this study. Personal attributes such as technology self-efficacy and anxiety, together with facilitating conditions, were more decisive than perceived benefits in predicting the gerontechnology usage behavior of older Hong Kong Chinese. The managerial implications generated from the results are discussed.
