Details
The Convergence Rate for Kernel-Based Regularized Pair Learning Algorithm with a Quasiconvex Loss (Cited by: 2)
Document type: Journal article
Title (Chinese): 基于拟凸损失的核正则化成对学习算法的收敛速度
Title (English): The Convergence Rate for Kernel-Based Regularized Pair Learning Algorithm with a Quasiconvex Loss
Authors: 王淑华 [1,2]; 王英杰 [3]; 陈振龙 [1]; 盛宝怀 [2]
Affiliations: [1] School of Statistics and Mathematics, Zhejiang Gongshang University, Hangzhou 310018; [2] Department of Applied Statistics, Shaoxing University, Shaoxing 312000; [3] College of Informatics, Huazhong Agricultural University, Wuhan 430070
Year: 2020
Volume: 40
Issue: 3
Pages: 389
Journal (Chinese): 系统科学与数学
Journal (English): Journal of Systems Science and Mathematical Sciences
Indexed in: CSTPCD; PKU Core Journals (2017); CSCD (2019–2020)
Funding: Supported by the National Natural Science Foundation of China (61877039, 11971432) and the Humanities and Social Sciences Planning Fund of the Ministry of Education (18YJA910001).
Language: Chinese
Keywords (Chinese): 成对学习; 拟凸函数; 核正则化算法; 收敛速度
Keywords (English): pairwise learning; quasiconvex function; kernel-based regularized method; convergence rate
Abstract (Chinese): 核正则化排序算法是目前机器学习理论领域讨论的热点问题, 而成对学习算法是排序算法的推广. 文章给出一种基于拟凸损失的核正则化成对学习算法, 利用拟凸分析理论对该算法进行误差分析, 给出算法的收敛速度. 分析结果表明, 算法的样本误差与损失函数中的参数选择有关. 数值实验结果显示, 与基于最小二乘损失的排序算法相比较, 该算法有更稳健的学习性能.
Abstract (English): Kernel-based regularized ranking algorithms have recently gained much attention in machine learning theory, and pairwise learning is a generalization of the ranking problem. In this paper, a kernel-based regularized pairwise learning algorithm with a quasiconvex loss function is proposed. An error estimate is derived using quasiconvex analysis theory, and an explicit learning rate is obtained. The analysis shows that the sample error depends on the choice of parameters in the loss function. Numerical experiments show that the proposed method is more robust than the ranking algorithm with the least squares loss.
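The abstract describes a kernel-based regularized pairwise learning scheme with a quasiconvex loss. As an illustration only — not the paper's actual algorithm, loss, or experiments — the following is a minimal sketch assuming a Gaussian kernel, a representer-theorem expansion f = Σ_i α_i K(x_i, ·), and a bounded quasiconvex (Welsch-type) loss ℓ(t) = 1 − exp(−t²/(2σ²)) applied to pairwise residuals; all function names and parameter values here are hypothetical.

```python
import numpy as np

def gaussian_kernel(X, Y=None, width=1.0):
    """Gaussian (RBF) kernel matrix K[i, j] = exp(-||x_i - y_j||^2 / (2 width^2))."""
    Y = X if Y is None else Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def pairwise_quasiconvex_fit(X, y, lam=0.1, sigma=1.0, width=1.0, lr=0.1, iters=500):
    """Gradient descent on representer coefficients alpha, minimizing

        (1/n^2) * sum_{i,j} ell(r_ij) + lam * alpha^T K alpha,

    where r_ij = (y_i - y_j) - (f(x_i) - f(x_j)), f = K @ alpha, and
    ell(t) = 1 - exp(-t^2 / (2 sigma^2)) is a bounded quasiconvex loss
    (its sublevel sets in t are intervals, hence convex)."""
    n = len(y)
    K = gaussian_kernel(X, width=width)
    alpha = np.zeros(n)
    for _ in range(iters):
        f = K @ alpha
        # Pairwise residuals r_ij and the loss derivative ell'(r_ij).
        R = (y[:, None] - y[None, :]) - (f[:, None] - f[None, :])
        W = np.exp(-R ** 2 / (2 * sigma ** 2)) * R / sigma ** 2
        # dr_ij/df_i = -1 and dr_ij/df_j = +1, so the gradient w.r.t. f is:
        g_f = (-W.sum(axis=1) + W.sum(axis=0)) / n ** 2
        # Chain rule through f = K @ alpha, plus the RKHS-norm penalty.
        grad = K @ g_f + 2 * lam * (K @ alpha)
        alpha -= lr * grad
    return alpha, K
```

Because the loss is bounded, large pairwise residuals (e.g. from outliers) contribute a vanishing gradient, which is one intuition for the robustness advantage over the (unbounded) least squares loss that the abstract reports.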