The VU-algorithm is a superlinearly convergent method for minimizing nonsmooth, convex functions. At each iteration, the algorithm works with a certain V-space and its orthogonal complement, the U-space, such that the nonsmoothness of the objective function is concentrated on its projection onto the V-space, while its projection onto the U-space is smooth. This structure allows the algorithm to alternate between a Newton-like step, taken where the function is smooth, and a proximal-point step, used to find iterates with promising VU-decompositions. We establish a derivative-free variant of the VU-algorithm for convex finite-max objective functions. We show global convergence and provide numerical results from a proof-of-concept implementation, which demonstrate the feasibility and practical value of the approach. We also carry out some tests on nonconvex functions and discuss the results.
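To make the decomposition concrete, the following is a small illustrative example of our own (not taken from the paper): the convex finite-max function f(x_1, x_2) = max{x_1 + x_2^2, -x_1 + x_2^2} = |x_1| + x_2^2, examined at the origin.

```latex
% Illustrative example (not from the paper):
%   f(x_1, x_2) = \max\{x_1 + x_2^2,\; -x_1 + x_2^2\} = |x_1| + x_2^2.
% At \bar{x} = (0,0), the subdifferential is a segment along the first axis,
% and the V-space is the span of that segment, with U its orthogonal complement:
\partial f(0,0) = \{(s,0) : s \in [-1,1]\},
\qquad
\mathcal{V} = \operatorname{span}\{(1,0)\},
\qquad
\mathcal{U} = \mathcal{V}^{\perp} = \operatorname{span}\{(0,1)\}.
```

Along the U-space the restriction f(0, t) = t^2 is smooth, so a Newton-like step is available there, while the kink of |x_1| lies entirely in the V-space and is handled by the proximal-point step.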
Citation
Hare, W., Planiden, C., & Sagastizábal, C. (2019). A derivative-free VU-algorithm for convex finite-max problems. Optimization Methods and Software, Online First, 1–39.