[Academic Conference] Workshop on "Mathematical Methods and Optimization in Artificial Intelligence"

Published: 2025-12-24

To strengthen exchange and collaboration among researchers in artificial intelligence and mathematical optimization, and to promote the integration of AI with related disciplines, the Institute of Mathematics and Interdisciplinary Sciences at Zhejiang University of Science and Technology will hold the 2025 Workshop on "Mathematical Methods and Optimization in Artificial Intelligence" in Hangzhou from December 26 to 28, 2025. Experts in related fields have been invited to give a series of talks. All faculty and students are welcome to attend!

Conference contact: 喻高航, Institute of Mathematics and Interdisciplinary Sciences


Schedule

December 27, 2025 (Saturday), Room 闻理园 A4-216

8:50-9:00  Opening remarks

| No. | Time | Speaker | Title | Chair |
|-----|------|---------|-------|-------|
| 1 | 9:00-9:35 | 韩德仁 | Convergence rate of inexact augmented Lagrangian method with practical relative error criterion for composite convex programming | 凌晨 |
| 2 | 9:35-10:10 | 黄正海 | Levenberg-Marquardt Hard Thresholding Pursuit for Sparse Bilinear Inverse Problems | |
| 3 | 10:10-10:45 | 杨庆之 | The low-rank approximation of fourth-order partial-symmetric and conjugate partial-symmetric tensors | 刘新为 |
| 4 | 10:45-11:20 | 杨俊锋 | New Adaptive Gradient Methods for Convex and Nonconvex Optimization | |
| 5 | 11:20-11:55 | 蔡邢菊 | Understanding the Convergence of the Preconditioned PDHG Method: A View of Indefinite Proximal ADMM | 宋义生 |
| | 12:00-13:30 | Lunch break | | |
| 6 | 13:30-14:05 | 彭拯 | An accelerated variable metric proximal-perturbed Lagrangian method for nonlinearly constrained nonconvex optimization | 曾燎原 |
| 7 | 14:05-14:40 | 刘勇进 | qNBO: quasi-Newton Meets Bilevel Optimization | 陈中明 |
| 8 | 14:40-15:15 | 徐玲玲 | An ADMM-type algorithm for solving quadratic equilibrium problems | 王群 |
| 9 | 15:15-15:50 | 汪廷华 | HSIC Lasso with applications | 祝汉灿 |
| 10 | 15:50-16:25 | 肖运海 | Robust Estimation and Variables Selection in Spatial Autoregressive Model with Partly Varying Coefficients | 何洪津 |
| | 16:25-16:30 | Group photo | | |



Talk Title

Convergence rate of inexact augmented Lagrangian method with practical relative error criterion for composite convex programming


Speaker: 韩德仁 (Professor, Beihang University)

Abstract: We consider the composite convex optimization problem with a linear equality constraint. We propose a practical inexact augmented Lagrangian (IAL) framework that uses two relative error criteria. Under the first criterion, we establish convergence and sublinear ergodic convergence rates. By additionally imposing the second criterion, we obtain sublinear non-ergodic convergence rates. Numerical experiments on basis pursuit and constrained Lasso problems demonstrate the efficiency of the proposed IAL method. We also discuss some possible extensions.
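The IAL framework pairs an outer multiplier update with inner subproblem solves that are only accurate relative to the current constraint violation. A minimal sketch, assuming a toy smooth instance (min 0.5*||x||^2 s.t. Ax = b) and a single illustrative relative error test rather than the authors' two-criterion scheme:

```python
import numpy as np

def inexact_al(A, b, beta=5.0, sigma=0.5, tol=1e-8, max_outer=1000):
    """Sketch of an inexact augmented Lagrangian (IAL) loop for
    min 0.5*||x||^2  s.t.  A x = b.
    The x-subproblem is solved only until a relative error test
    against the current residual ||A x - b|| is satisfied."""
    m, n = A.shape
    x, lam = np.zeros(n), np.zeros(m)
    L = 1.0 + beta * np.linalg.norm(A, 2) ** 2   # Lipschitz bound for grad_x of the AL
    for _ in range(max_outer):
        if np.linalg.norm(A @ x - b) < tol:
            break
        # inner loop: gradient steps on the augmented Lagrangian until
        # ||grad_x AL|| <= sigma * ||A x - b||  (plus tol, to guarantee exit)
        while True:
            r = A @ x - b
            g = x + A.T @ (lam + beta * r)
            if np.linalg.norm(g) <= sigma * np.linalg.norm(r) + tol:
                break
            x = x - g / L
        lam = lam + beta * (A @ x - b)            # multiplier update
    return x, lam
```

For this toy objective the exact solution is the least-norm solution x* = A^T (A A^T)^{-1} b, which the loop approaches as the residual is driven to zero.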


Speaker Bio: 韩德仁 is a professor and doctoral supervisor, Dean of the School of Mathematical Sciences at Beihang University, and Secretary-General of the Ministry of Education's Teaching Guidance Committee for mathematics programs. His research concerns large-scale optimization, variational inequality problems, and their applications, on which he has published many papers. He has received the Youth Science and Technology Award of the Operations Research Society of China and the Jiangsu Science and Technology Award, among other honors, and has led a Key Program grant and a Distinguished Young Scholars grant of the National Natural Science Foundation of China, among other projects. He serves as Vice President of the Operations Research Society of China and President of its Branch on Algorithms, Software, and Applications, and is on the editorial boards of 《数值计算与计算机应用》, Journal of the Operations Research Society of China, Journal of Global Optimization, and Asia-Pacific Journal of Operational Research.


Talk Title

Levenberg-Marquardt Hard Thresholding Pursuit for Sparse Bilinear Inverse Problems


Speaker: 黄正海 (Professor, Tianjin University)

Abstract: In this talk, we propose two Levenberg-Marquardt methods merged with hard thresholding pursuits for solving sparse bilinear inverse problems. The second method improves on the first by incorporating a novel support-set refinement step. Under suitable assumptions, we show that the proposed methods are globally and locally quadratically convergent to an optimal solution of the underlying problem. Numerical experiments on randomly generated problems demonstrate the efficiency of the algorithms and indicate their superior performance compared to several existing methods.


Speaker Bio: 黄正海 is a professor and doctoral supervisor at the School of Mathematics, Tianjin University. He received his Ph.D. from Fudan University in 1999. He works on optimization theory, algorithms, and applications, with a series of notable results on complementarity and variational inequality problems, symmetric cone optimization and complementarity, sparse optimization, tensor optimization, magnetic resonance imaging, and face recognition. His current interests are sparse optimization, tensor optimization, and optimization theory and methods in machine learning. He has published more than 150 SCI-indexed papers, with representative work in leading journals in optimization (Mathematical Programming, SIAM Journal on Optimization), numerical algebra (SIAM Journal on Matrix Analysis and Applications), imaging (SIAM Journal on Imaging Sciences), information theory (IEEE Transactions on Information Theory), information forensics (IEEE Transactions on Information Forensics and Security), and signal processing (IEEE Transactions on Signal Processing). He has received consecutive grants from the National Natural Science Foundation of China, the Excellent Postdoctoral Award of the Chinese Academy of Sciences, and a Second Prize of the Ministry of Education's Natural Science Award for Higher Education Institutions. He is Vice President of the Mathematical Programming Branch of the Operations Research Society of China and serves on the editorial boards of Pacific Journal of Optimization, Applied Mathematics and Computation, Asia-Pacific Journal of Operational Research, and Statistics, Optimization & Information Computing.


Talk Title

The low-rank approximation of fourth-order partial-symmetric and conjugate partial-symmetric tensors


Speaker: 杨庆之 (Professor, Nankai University)

Abstract: In this talk, we present an orthogonal matrix outer product decomposition for fourth-order conjugate partial-symmetric (CPS) tensors and show that the greedy successive rank-one approximation (SROA) algorithm can recover this decomposition exactly. Based on this matrix decomposition, the CP rank of a CPS tensor is bounded by the matrix rank, which can be applied to low-rank tensor completion. Additionally, we give a rank-one equivalence property for CPS tensors based on the matrix SVD, which can be applied to rank-one approximation of CPS tensors. Finally, we demonstrate the efficiency of the presented model and methods with numerical experiments.


Speaker Bio: 杨庆之 is a professor at the School of Mathematical Sciences, Nankai University, working on optimization methods and tensor computation. He has published more than 70 papers and has led or participated in projects funded by the National Natural Science Foundation of China, the Ministry of Education's doctoral program fund, and the Tianjin Natural Science Foundation. His honors include a Second Prize of the Tianjin Natural Science Award (first contributor), a Second Prize of the Guangxi Natural Science Award (contributor), recognition as advisor of an outstanding doctoral dissertation in Tianjin, and a Tianshan Scholar distinguished professorship of the Xinjiang Uygur Autonomous Region. He is on the editorial boards of Journal of the Operations Research Society of China and 《高等学校计算数学学报》, and is supervisor-in-chief of the Tianjin Mathematical Society. He previously served on the editorial board of 《计算数学》, as chair of the Department of Scientific and Engineering Computing at Nankai University, president of the Tianjin Society of Computational Mathematics, executive council member of the China Society for Computational Mathematics, and executive council member of the Mathematical Programming Branch of the Operations Research Society of China.


Talk Title

New Adaptive Gradient Methods for Convex and Nonconvex Optimization


Speaker: 杨俊锋 (Professor, Nanjing University)

Abstract: Consider the unconstrained minimization of a continuously differentiable function by the vanilla gradient method. When the objective is convex and the gradient operator is locally Lipschitz continuous, we propose an adaptive strategy for choosing the step size based on the short Barzilai-Borwein step size formula. The resulting algorithm is line-search-free and parameter-free. We establish convergence of the iterates and ergodic convergence of the objective function value. Compared with existing works in this line of research, our algorithm provides the best lower bounds on the step size and on the average of the step sizes. Furthermore, we present extensions to the locally strongly convex case and to composite convex optimization. Our numerical results demonstrate the promising potential of the proposed algorithms on representative examples. We also present an adaptive strategy for choosing the step sizes when the objective is globally L-smooth but possibly nonconvex.

(Joint work with Shiqian Ma, Zilong Ye and Danqing Zhou)
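The short Barzilai-Borwein step the abstract builds on fits in a few lines. The sketch below is a generic safeguarded BB2 gradient loop, not the authors' adaptive rule; the safeguard and the initial step are illustrative assumptions:

```python
import numpy as np

def bb_gradient(grad, x0, tau0=1e-3, tol=1e-8, max_iter=1000):
    """Gradient descent with the short Barzilai-Borwein step
    tau_k = <s, y> / <y, y>, where s = x_k - x_{k-1} and y = g_k - g_{k-1}.
    Line-search-free; a sketch rather than the talk's exact strategy."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    tau = tau0
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - tau * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 0:                 # safeguard: keep the previous step otherwise
            tau = (s @ y) / (y @ y)   # short BB step (BB2)
        x, g = x_new, g_new
    return x
```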


Speaker Bio: 杨俊锋 is a professor, doctoral supervisor, and vice dean at the School of Mathematics, Nanjing University, where he has worked since July 2009. His research focuses on computational methods for optimization and their applications. He has published over 40 papers in the SIAM journals, Mathematics of Operations Research, Mathematics of Computation, and elsewhere, and has developed software packages including FTVd for image deblurring, YALL1 for l1-norm decoding in compressed sensing, and RecPF for MRI reconstruction. He has led six grants including an Excellent Young Scientists grant of the National Natural Science Foundation of China, received the Youth Science and Technology Award of the Operations Research Society of China, was selected for the Ministry of Education's New Century Excellent Talents program, and was named an Elsevier Highly Cited Chinese Researcher for five consecutive years (2020-2024). He is a council member of the Operations Research Society of China, serves on the editorial boards of 《计算数学》, ASVAO, NACO, and SOIC, and is a guest editor of Optimization in Engineering.


Talk Title

Understanding the Convergence of the Preconditioned PDHG Method: A View of Indefinite Proximal ADMM


Speaker: 蔡邢菊 (Professor, Nanjing Normal University)

Abstract: The primal-dual hybrid gradient (PDHG) algorithm is popular for solving min-max problems, which arise in a wide variety of areas. To improve the applicability and efficiency of PDHG in different application scenarios, we focus on the preconditioned PDHG (PrePDHG) algorithm, a framework that covers PDHG, the alternating direction method of multipliers (ADMM), and other methods. We give the optimal convergence condition of PrePDHG, in the sense that the key parameters in the condition cannot be further improved, which fills a theoretical gap in the state-of-the-art convergence results for PrePDHG, and we obtain ergodic and non-ergodic sublinear convergence rates. The analysis is achieved by establishing the equivalence between PrePDHG and indefinite proximal ADMM. In addition, we discuss various choices of the proximal matrices in PrePDHG and derive some interesting results. For example, the convergence condition of diagonal PrePDHG is improved to be tight, the dual step size of the balanced augmented Lagrangian method can be enlarged from 1 to 4/3, and a balanced augmented Lagrangian method with symmetric Gauss-Seidel iterations is also explored. Numerical results on the matrix game, projection onto the Birkhoff polytope, earth mover's distance, and CT reconstruction verify the effectiveness and superiority of PrePDHG.
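For readers unfamiliar with the unpreconditioned method, vanilla PDHG can be sketched on the toy saddle problem min_x max_{||y||_inf <= 1} <Kx, y> + 0.5*||x - c||^2, whose primal form is min_x 0.5*||x - c||^2 + ||Kx||_1; the instance and step sizes are illustrative assumptions:

```python
import numpy as np

def pdhg(K, c, tau=0.5, sigma=0.5, iters=5000):
    """Vanilla PDHG for min_x max_{||y||_inf <= 1} <Kx, y> + 0.5*||x - c||^2,
    i.e. the primal problem min_x 0.5*||x - c||^2 + ||Kx||_1.
    Requires tau * sigma * ||K||^2 <= 1."""
    m, n = K.shape
    x, y = np.zeros(n), np.zeros(m)
    x_bar = x.copy()
    for _ in range(iters):
        # dual step: gradient ascent plus projection onto the l_inf ball
        y = np.clip(y + sigma * (K @ x_bar), -1.0, 1.0)
        # primal step: prox of g(x) = 0.5*||x - c||^2
        x_old = x
        x = (x - tau * (K.T @ y) + tau * c) / (1.0 + tau)
        x_bar = 2.0 * x - x_old      # extrapolation
    return x
```

With K = I the primal problem reduces to soft thresholding of c at level 1, which gives a quick correctness check.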


Speaker Bio: 蔡邢菊 is a professor and doctoral supervisor at Nanjing Normal University. Her research covers optimization theory and algorithms, variational inequalities, and numerical optimization. She has led several national grants, received a First Prize of the Jiangsu Science and Technology Progress Award, and published over 70 SCI papers. She serves as Deputy Secretary-General of the Operations Research Society of China, executive council member and secretary-general of its Branch on Algorithms, Software, and Applications, executive council member of its Mathematical Programming Branch, and President of the Jiangsu Operations Research Society.


Talk Title

An accelerated variable metric proximal-perturbed Lagrangian method for nonlinearly constrained nonconvex optimization


Speaker: 彭拯 (Professor, Xiangtan University)

Abstract: Nonconvex constrained optimization problems, which arise in fields such as machine learning and signal processing, are often solved using the augmented Lagrangian (AL) method due to its simplicity and scalability. However, analyzing the convergence of AL-based methods in nonconvex settings remains challenging, especially when standard constraint qualifications such as the Linear Independence Constraint Qualification (LICQ) do not hold or the set of multipliers is unbounded. To address these limitations, we build on the proximal-perturbed Lagrangian framework and develop a new accelerated variable metric proximal Lagrangian method for nonconvex constrained composite optimization. We establish that every accumulation point of the generated sequence satisfies the Karush-Kuhn-Tucker (KKT) conditions. Moreover, we analyze the iteration complexity of our method under a well-defined stopping criterion. Finally, numerical results show that our algorithm is more effective than the original method.


Speaker Bio: 彭拯 is a professor and doctoral supervisor at the School of Mathematics and Computational Science, Xiangtan University. His research covers the theory, algorithms, and applications of mathematical optimization. His current interests include manifold optimization theory and algorithms, and algorithms for large-scale nonconvex nonsmooth optimization problems arising in engineering fields such as integrated chips and their EDA and next-generation communication networks, with particular attention to stochastic and nonmonotone optimization algorithms. He has led six major national research projects and currently serves as an executive council member of the Operations Research Society of China, vice president of the Hunan Operations Research Society, executive council member of the Branch on Algorithms, Software, and Applications, and council member of the Mathematical Programming Branch of the Operations Research Society of China.


Talk Title

qNBO: quasi-Newton Meets Bilevel Optimization


Speaker: 刘勇进 (Professor, Fuzhou University)

Abstract: Bilevel optimization, which addresses challenges in hierarchical learning tasks, has gained significant interest in machine learning. Practical implementations of gradient descent for bilevel optimization encounter computational hurdles, notably computing the exact lower-level solution and the inverse Hessian of the lower-level objective. Although these two aspects are inherently connected, existing methods typically handle them separately, by solving the lower-level problem and a linear system for the inverse Hessian-vector product. In this talk, we introduce a general framework that addresses these computational challenges in a coordinated manner. Specifically, we leverage quasi-Newton algorithms to accelerate the solution of the lower-level problem while efficiently approximating the inverse Hessian-vector product. Furthermore, by exploiting the superlinear convergence properties of BFGS, we establish a non-asymptotic convergence analysis of the BFGS adaptation within our framework. Numerical experiments demonstrate comparable or superior performance of the proposed algorithms on real-world learning tasks, including hyperparameter optimization, data hyper-cleaning, and few-shot meta-learning.
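To make the inverse Hessian-vector product concrete: for a quadratic lower level, the hypergradient reduces to a single linear solve, done below with plain conjugate gradients (the talk's framework replaces such solves with quasi-Newton approximations; the toy instance is an illustrative assumption):

```python
import numpy as np

def cg(A, b, tol=1e-10, max_iter=200):
    """Conjugate gradients for A x = b with symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs) < tol:
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def hypergradient(Q, c, x):
    """Toy bilevel problem:
      lower level: y*(x) = argmin_y 0.5*y'Qy - x'y   (so y*(x) = Q^{-1} x),
      upper level: F(x)  = 0.5*||y*(x) - c||^2.
    By the implicit function theorem, dF/dx = Q^{-1}(y*(x) - c):
    an inverse Hessian-vector product, computed here by CG."""
    y = np.linalg.solve(Q, x)    # exact lower-level solution
    return cg(Q, y - c)          # inverse Hessian-vector product
```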


Speaker Bio: 刘勇进 is a Jiaxi Distinguished Professor and doctoral supervisor at Fuzhou University, a Minjiang Distinguished Professor of Fujian Province, and director of the Fujian Center for Applied Mathematics (Fuzhou University). His research interests include optimization theory, methods, and applications, large-scale numerical computation, and statistical optimization, with results published in top journals in optimization and computation including Mathematical Programming (Series A), SIAM Journal on Optimization, and SIAM Journal on Scientific Computing. He has led one subproject of a National Key R&D Program, four National Natural Science Foundation of China grants (three General Program, one Young Scientists), and seven provincial- or ministerial-level projects including Ministry of Education and provincial key projects. He is a council member of the Chinese Mathematical Society, the Operations Research Society of China, and the China Statistical Society, an executive council member of the Mathematical Programming Branch and the Branch on Algorithms, Software, and Applications of the Operations Research Society of China, president of the Fujian Operations Research Society, and vice president of the Fujian Mathematical Society. He serves on the editorial board of Annals of Applied Mathematics.


Talk Title

An ADMM-type algorithm for solving quadratic equilibrium problems


Speaker: 徐玲玲 (Professor, Nanjing Normal University)

Abstract: Equilibrium problems encompass many mathematical models, such as optimization problems, variational inequality problems, saddle point problems, Nash equilibrium problems in noncooperative games, and fixed point problems; designing efficient numerical algorithms for them is of significant practical importance. In this work, we decompose a quadratic equilibrium function into the sum of two quadratic equilibrium functions and, building on the ADMM for variational inequality problems, design an ADMM-type algorithm for quadratic equilibrium problems. The algorithm transforms solving the equilibrium problem into solving strongly monotone variational inequality problems, and we prove its convergence. Furthermore, we extend the ADMM-type algorithm to an inexact variant that solves the VI subproblem at each iteration only approximately, and prove that the inexact ADMM-type algorithm remains convergent.


Speaker Bio: 徐玲玲 is a professor at the School of Mathematical Sciences, Nanjing Normal University, working on optimization theory and algorithms. She has led a Young Scientists grant and a General Program grant of the National Natural Science Foundation of China, a Jiangsu higher-education natural science grant, and a key open project of the State Key Laboratory of Scientific and Engineering Computing, and participated as a core member in a National Key R&D Program project. She currently serves as deputy director of the Publicity Committee of the Operations Research Society of China and executive deputy secretary-general of the Jiangsu Operations Research Society.


Talk Title

HSIC Lasso with applications


Speaker: 汪廷华 (Professor, Gannan Normal University)

Abstract: The Hilbert-Schmidt Independence Criterion least absolute shrinkage and selection operator (HSIC Lasso), first introduced by Yamada et al. in 2014, is an advanced feature selection method that integrates the nonlinear dependency measurement capabilities of HSIC with the sparsity-inducing regularization of Lasso. In recent years, HSIC Lasso has demonstrated remarkable versatility beyond its original scope of feature selection, emerging as a powerful tool for diverse applications including statistical inference, kernel learning, and deep learning. This talk briefly summarizes the basic concepts and theory of HSIC Lasso, along with its optimization, extended models, and task-oriented algorithms with applications. Some open problems and directions deserving future exploration will also be discussed.
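The dependence measure at the core of HSIC Lasso takes only a few lines. Below is a minimal sketch of the biased empirical HSIC estimate for two scalar samples with Gaussian kernels; the fixed bandwidth is an illustrative assumption:

```python
import numpy as np

def gaussian_gram(z, sigma=1.0):
    """Gaussian-kernel Gram matrix of a 1-D sample."""
    d = z[:, None] - z[None, :]
    return np.exp(-d ** 2 / (2.0 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC: trace(K H L H) / (n - 1)^2,
    with centering matrix H = I - (1/n) 11^T; larger means more dependence."""
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n
    K, L = gaussian_gram(x, sigma), gaussian_gram(y, sigma)
    return float(np.trace(K @ H @ L @ H)) / (n - 1) ** 2
```

In HSIC Lasso, per-feature dependence scores of this kind are combined with an l1 penalty so that only features carrying non-redundant dependence on the target are selected.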


Speaker Bio: Tinghua Wang received the Ph.D. degree in Computer Science from Beijing Jiaotong University, Beijing, China, in 2010. From October 2011 to October 2013, he was a Postdoctoral Research Fellow with the Institute of Computer Science and Technology, Peking University, Beijing, China. From March 2016 to March 2017, he was a Visiting Scholar with the Centre for Artificial Intelligence, Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, NSW, Australia. Prof. Wang is currently a full professor with the School of Mathematics and Computer Science, Gannan Normal University, Ganzhou, China. His research interests include artificial intelligence and machine learning. He has authored and coauthored more than 60 papers in IEEE Transactions on Fuzzy Systems, Information Fusion, other refereed journals, and conference proceedings.


Talk Title

Robust Estimation and Variables Selection in Spatial Autoregressive Model with Partly Varying Coefficients


Speaker: 肖运海 (Professor, Henan University)

Abstract: The spatial autoregressive (SAR) model plays a key role in regression analysis by capturing spatial dependence, a typical characteristic in spatial econometrics and regional studies. Nevertheless, statistical inference in SAR models generally relies on assumptions, such as linearity, to ensure the validity of the results. In this talk, we investigate settings where the coefficients are allowed to vary with specific index variables. To this end, we use B-splines to represent the varying coefficients through a series of spline coefficients, and apply an adaptive-lasso penalty to efficiently estimate and select the significant coefficients. On the theoretical side, we establish the selection consistency and asymptotic normality of the proposed estimator. Specifically, we adopt an exponential squared loss function, which generalizes least squares and offers greater robustness to outliers and heavy-tailed errors. For practical implementation, we leverage a DC (difference-of-convex) algorithm framework and use an inexact alternating direction method of multipliers (ADMM) to solve the resulting convex subproblems. Empirical studies on both simulated data and real-world examples show that the proposed method performs effectively in finite samples, particularly in terms of model accuracy and variable selection efficiency.


Speaker Bio: 肖运海 is a professor and doctoral supervisor at Henan University and a Distinguished Professor of Henan Province, working on mathematical optimization and statistical optimization. He received his Ph.D. from Hunan University in 2007 and completed postdoctoral research at Nanjing University (2010) and National Cheng Kung University in Taiwan (2011). He has been a visiting scholar at Simon Fraser University in Canada, the National University of Singapore, The Hong Kong Polytechnic University, and National Cheng Kung University. He has led four National Natural Science Foundation of China projects and a Distinguished Young Scholars grant of Henan Province, and participated in a National Key Basic Research Program project. He has published over 60 papers in journals such as MPC, JCGS, COAP, JSC, and CSDA. He is a council member of the Operations Research Society of China and the China Society for Industrial and Applied Mathematics, and a member of the academic committee of Henan University.