PEPit.primitive_steps.inexact_gradient_step

PEPit.primitive_steps.inexact_gradient_step(x0, f, gamma, epsilon, notion='absolute')[source]

This routine performs a step \(x \leftarrow x_0 - \gamma d_{x_0}\), where \(d_{x_0}\) is close to the gradient of \(f\) at \(x_0\) in the following sense:

\[\|d_{x_0} - \nabla f(x_0)\|^2 \leqslant \left\{ \begin{array}{ll} \varepsilon^2 & \text{if notion is set to 'absolute'}, \\ \varepsilon^2 \|\nabla f(x_0)\|^2 & \text{if notion is set to 'relative'}. \end{array} \right.\]
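As a minimal sketch, the two notions correspond to the two call forms below (the values of gamma and epsilon are hypothetical, and x0 and f are assumed to come from an already declared PEP problem):

>>> from PEPit.primitive_steps import inexact_gradient_step
>>> # 'absolute': ||dx0 - grad f(x0)||^2 <= epsilon^2
>>> x, dx0, fx0 = inexact_gradient_step(x0, f, gamma=.1, epsilon=.5, notion='absolute')
>>> # 'relative': ||dx0 - grad f(x0)||^2 <= epsilon^2 * ||grad f(x0)||^2
>>> x, dx0, fx0 = inexact_gradient_step(x0, f, gamma=.1, epsilon=.5, notion='relative')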

This relative approximation is used in at least 3 PEPit examples, in particular in 2 unconstrained convex minimization examples: an inexact gradient descent and an inexact accelerated gradient method.
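The following is a sketch of such a worst-case analysis, in the spirit of the inexact gradient descent example; the function class, parameter values, and step size below are illustrative assumptions, not the exact settings of that example:

>>> from PEPit import PEP
>>> from PEPit.functions import SmoothStronglyConvexFunction
>>> from PEPit.primitive_steps import inexact_gradient_step
>>> problem = PEP()
>>> L, mu, epsilon = 1., .1, .1  # illustrative smoothness, strong convexity, and accuracy levels
>>> f = problem.declare_function(SmoothStronglyConvexFunction, mu=mu, L=L)
>>> xs = f.stationary_point()  # xs is a minimizer of f
>>> x0 = problem.set_initial_point()
>>> problem.set_initial_condition(f(x0) - f(xs) <= 1)
>>> gamma = 1 / L  # illustrative step size
>>> x1, dx0, fx0 = inexact_gradient_step(x0, f, gamma=gamma, epsilon=epsilon, notion='relative')
>>> problem.set_performance_metric(f(x1) - f(xs))
>>> worst_case = problem.solve()  # worst-case value of f(x1) - f(xs)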

Parameters
  • x0 (Point) – starting point x0.

  • f (Function) – a function.

  • gamma (float) – the step size parameter.

  • epsilon (float) – the required accuracy.

  • notion (string) – the inaccuracy model, either 'absolute' or 'relative'.

Returns
  • x (Point) – the output point.

  • dx0 (Point) – the approximate (sub)gradient of f at x0.

  • fx0 (Expression) – the value of the function f at x0.

Raises

ValueError – if notion is neither 'absolute' nor 'relative'.

Note

When \(\gamma\) is set to 0, this routine returns \(x_0\), \(d_{x_0}\), and \(f_{x_0}\). It is used in this way in the unconstrained convex minimization example called “inexact gradient exact line search”, only to access a direction \(d_{x_0}\) close to the gradient \(\nabla f(x_0)\).
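As a sketch of that pattern (assuming x0, f, and epsilon are defined as in the sketch above, and not reproducing the exact settings of that example), the direction returned with \(\gamma = 0\) can be passed to another primitive step such as PEPit.primitive_steps.exact_linesearch_step:

>>> from PEPit.primitive_steps import inexact_gradient_step, exact_linesearch_step
>>> # With gamma=0, the first output is x0 itself; only the direction dx0 is used.
>>> _, dx0, fx0 = inexact_gradient_step(x0, f, gamma=0, epsilon=epsilon, notion='relative')
>>> # Exact line search from x0 along the span of the inexact direction dx0.
>>> x1, gx1, fx1 = exact_linesearch_step(x0, f, [dx0])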