Based on a manifold M, a starting point x, and an element from the (sub)gradient (or a descent direction) of a function F, this function performs a subgradient descent algorithm
with step sizes s_k, where we keep track of the minimal value reached so far. If F is not single-valued, this algorithm assumes that it is meant as a parallel subgradient algorithm with respect to the last dimension(s) of x.
The implementation is based on the paper [1].
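In formulas, one iteration of the method in [1] can be sketched as follows, using the exponential map of M (the symbols s_k for the step size and ∂F for the subdifferential are notation introduced here, not taken from the code):

```latex
x^{(k+1)} = \exp_{x^{(k)}}\bigl(-s_k\, g_k\bigr),
\qquad g_k \in \partial F\bigl(x^{(k)}\bigr),
```

where the returned result is the iterate with the smallest function value encountered, since the function value need not decrease monotonically along subgradient steps.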
All functions involved are provided as anonymous functions: the subgradient subgradF
is a function based on @(x); the step size is a
function @(x,descentDir,iter,initial)
based on the current
iterate x, the descent direction descentDir, the iteration number iter, and an
initial step size initial to start searching with. This value is set to the last step
size within the iterations. Finally, the stoppingCriterion
is a function
based on @(x,xold,iter)
that returns false
as long as the algorithm should run; hence the algorithm stops as soon as the stopping
criterion is fulfilled for the first time.
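As a sketch of how these anonymous functions fit together, a call might look as follows. The manifold constructor Sn(2) and the data f are hypothetical placeholders standing in for an MVIRT manifold object (assumed to provide dist and log) and manifold-valued data:

```matlab
% Hypothetical setup: a sphere S^2 and data f on it (placeholders).
M = Sn(2);                       % assumed MVIRT manifold class
x0 = f;                          % start at the data itself
F = @(x) sum(M.dist(f,x).^2);    % cost: sum of squared distances to f
subgradF = @(x) -2*M.log(x,f);   % a (sub)gradient of F at x
stepSize = @(x,eta,iter,initial) initial/iter;  % diminishing step sizes
stopCrit = @(x,xold,iter) iter >= 100;          % stop after 100 iterations
xOpt = subGradientDescent(M,x0,F,subgradF,stepSize,stopCrit);
```

The diminishing step size rule shown here is one common choice for subgradient methods; any rule matching the @(x,eta,iter,initial) signature can be substituted.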
Optional Parameters
Debug
 If set to an anonymous function depending on
@(x,xold,iter)
this is called in each cycle, for example to produce text output or save iteration data to a file.
Record
 If set to a function
@(x,iter)
returning a column vector, and calling subGradientDescent
with a second return value, this function records these column vectors in a matrix returned in the second return value recData
.
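For example, the two optional parameters could be combined as follows. This is a sketch, assuming M, x0, F, subgradF, stepSize, and stopCrit have been set up as described in the input documentation above:

```matlab
% Print the iteration number and record the cost in each cycle.
debugF = @(x,xold,iter) fprintf('Iteration %d\n',iter);
recordF = @(x,iter) [iter; F(x)];   % one column vector per iteration
[xOpt, recData] = subGradientDescent(M,x0,F,subgradF,stepSize,stopCrit, ...
    'Debug',debugF,'Record',recordF);
% recData now holds one column [iter; F(x)] per performed iteration.
```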
There are several (sub)gradients available.
Matlab Documentation
% subGradientDescent(M,x,F,subgradF,stepSizeRule,stoppingCriterion)
% compute a subgradient descent on F using x as initial data and subgradF
% as the (sub)gradient. If F returns a number, the classical subgradient
% descent is performed; if it returns a vector or matrix, a parallel
% subgradient descent is computed, where the (last) data dimensions of x
% are handled in parallel.
%
% Ferreira, O.P.; Oliveira, P.R.: Subgradient algorithm on Riemannian
% manifolds.
% Journal of Optimization Theory and Applications 97.1, 93-104, 1998.
%
% INPUT
% M : a manifold M
% x : initial data
% F : the function F, returning a vector or matrix of
% values for the parallel case or just a single
% value for standard subgradient descent
% subgradF : a function @(x) returning a (sub)gradient of F at x
% stepSizeRule : a function @(x,eta,iter,initial) returning the
% step size at x with direction eta corresponding to
% a certain rule (e.g. Armijo) based on iterate point
% x, descent direction eta, iteration iter and
% initial step size initial.
% stoppingCriterion : a function @(x,xold,iter) determining whether to
% stop or not
% OPTIONAL
% Debug : ([]) a function @ (x,xold,iter) producing debug
% output
% Record : (@(x,iter)) a function returning a column vector,
% if there's a second return value and this function
% is given, data is recorded in an array and returned
%
% MVIRT - R. Bergmann - 2018-03-15
See also
 The Cyclic Proximal Point Algorithm
 The gradient descent algorithm
 List of available (sub)gradients
 List of available functionals
 an easy stopping criterion creator
References

Ferreira, O. P. and Oliveira, P. R. (1998). Subgradient algorithm on Riemannian manifolds. Journal of Optimization Theory and Applications 97(1), 93–104.
The subgradient method is generalized to the context of Riemannian manifolds. The motivation can be seen in non-Euclidean metrics that occur in interior-point methods. In that frame, the natural curves for local steps are the geodesics relative to the specific Riemannian manifold. In this paper, the influence of the sectional curvature of the manifold on the convergence of the method is discussed, as well as the proof of convergence if the sectional curvature is nonnegative.

@article{FO98,
  author  = {Ferreira, O. P. and Oliveira, P. R.},
  title   = {Subgradient algorithm on {R}iemannian manifolds},
  journal = {Journal of Optimization Theory and Applications},
  year    = {1998},
  volume  = {97},
  number  = {1},
  pages   = {93--104},
  doi     = {10.1023/A:1022675100677}
}