The VU-algorithm is a superlinearly convergent method for minimizing nonsmooth convex functions. At each iteration, the algorithm works with a certain V-space and its orthogonal complement, the U-space, such that the nonsmoothness of the objective function is concentrated on its projection onto the V-space, while on the U-space the projection behaves smoothly. This structure allows the algorithm to alternate between a Newton-like step, taken where the function is smooth, and a proximal-point step, used to find iterates with promising VU-decompositions. We develop a derivative-free variant of the VU-algorithm for convex finite-max objective functions. We show global convergence and report numerical results from a proof-of-concept implementation that demonstrate the feasibility and practical value of the approach. We also carry out some tests on nonconvex functions and discuss the results.
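As a brief illustration of the decomposition (this follows the standard construction in the VU-space literature and may differ in notation from the paper's own definitions), at a point $\bar{x}$ the two subspaces are built from the subdifferential $\partial f(\bar{x})$: for any subgradient $g$ in the relative interior of $\partial f(\bar{x})$,
\[
\mathcal{V} = \operatorname{span}\bigl(\partial f(\bar{x}) - g\bigr), \qquad \mathcal{U} = \mathcal{V}^{\perp}.
\]
For a finite-max objective $f(x) = \max_{i=1,\dots,m} f_i(x)$ with smooth components $f_i$, the subdifferential is the convex hull of the gradients of the active components, so $\mathcal{V}$ is spanned by their pairwise differences:
\[
\mathcal{V} = \operatorname{span}\bigl\{\nabla f_i(\bar{x}) - \nabla f_j(\bar{x}) : i, j \in A(\bar{x})\bigr\},
\qquad A(\bar{x}) = \{\, i : f_i(\bar{x}) = f(\bar{x}) \,\}.
\]
When only one component is active, $\mathcal{V} = \{0\}$ and $f$ is differentiable at $\bar{x}$; the larger the active set, the larger the subspace on which the nonsmoothness concentrates.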