PEPit.primitive_steps.exact_linesearch_step
- PEPit.primitive_steps.exact_linesearch_step(x0, f, directions)[source]
This routine outputs some \(x\) by mimicking an exact line/span search in the specified directions. It is used, for instance, in
PEPit.examples.unconstrained_convex_minimization.wc_gradient_exact_line_search
and in PEPit.examples.unconstrained_convex_minimization.wc_conjugate_gradient.
The routine aims at mimicking the operation:
\begin{eqnarray} x & = & x_0 - \sum_{i=1}^{T} \gamma_i d_i,\\ \text{with } \overrightarrow{\gamma} & = & \arg\min_\overrightarrow{\gamma} f\left(x_0 - \sum_{i=1}^{T} \gamma_i d_i\right), \end{eqnarray}where \(T\) denotes the number of directions \(d_i\). This operation can equivalently be described in terms of the following conditions:
\begin{eqnarray} x - x_0 & \in & \text{span}\left\{d_1,\ldots,d_T\right\}, \\ \nabla f(x) & \perp & \text{span}\left\{d_1,\ldots,d_T\right\}. \end{eqnarray}In this routine, we instead constrain \(x\) and \(\nabla f(x)\) to satisfy
\begin{eqnarray} \forall i=1,\ldots,T: & \left< \nabla f(x);\, d_i \right> & = & 0,\\ \text{and } & \left< \nabla f(x);\, x - x_0 \right> & = & 0, \end{eqnarray}which is a relaxation of the true line/span search conditions.
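As a concrete illustration of these orthogonality conditions (a minimal numerical sketch, not PEPit code): on a quadratic \(f(x)=\tfrac{1}{2}x^\top A x\), the exact line-search step along a direction \(d\) has the closed form \(\gamma = \langle d, Ax_0\rangle / \langle d, Ad\rangle\), after which the gradient at the new point is orthogonal to \(d\) and to \(x - x_0\):

```python
# Exact line search on f(x) = 1/2 x^T A x along a direction d:
# the minimizing step size is gamma = <d, A x0> / <d, A d>, and
# the resulting gradient A x1 is orthogonal to d and to x1 - x0.

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = [[2.0, 0.5], [0.5, 1.0]]   # symmetric positive definite
x0 = [1.0, -2.0]
d = matvec(A, x0)              # search along the gradient direction at x0

gamma = dot(d, matvec(A, x0)) / dot(d, matvec(A, d))
x1 = [x0[i] - gamma * d[i] for i in range(2)]
g1 = matvec(A, x1)             # gradient of f at x1

# Both conditions hold up to floating-point rounding:
print(abs(dot(g1, d)) < 1e-12)                                    # True
print(abs(dot(g1, [x1[i] - x0[i] for i in range(2)])) < 1e-12)    # True
```

Here, since there is a single direction, \(x - x_0\) is automatically in its span, so the relaxed and exact conditions coincide.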
Note
The last condition is automatically implied by the two previous ones.
Warning
One can notice that this routine does not completely encode the fact that \(x - x_0\) must be a linear combination of the provided directions (i.e., this routine performs a relaxation). Therefore, if this routine is included in a PEP, the obtained value is in general only an upper bound on the true worst-case value.
Although not always tight, this relaxation is often observed to deliver accurate results; in particular, it automatically produces tight results under some specific conditions (see, e.g., [1]). Two such examples are provided in the conjugate gradient and gradient with exact line search example files.
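To see why this is only a relaxation, here is a small hand-built counterexample (illustrative only, not PEPit code): in three dimensions, with a single direction \(d_1\), one can choose \(x - x_0\) orthogonal to both \(d_1\) and the gradient, so that the relaxed conditions hold even though \(x - x_0 \notin \text{span}\{d_1\}\):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

d1 = [1.0, 0.0, 0.0]          # single search direction (T = 1)
dx = [0.0, 1.0, 0.0]          # candidate displacement x - x0
g  = [0.0, 0.0, 1.0]          # candidate (sub)gradient at x

# Both relaxed conditions are satisfied...
print(dot(g, d1) == 0.0)      # True: <grad f(x), d1> = 0
print(dot(g, dx) == 0.0)      # True: <grad f(x), x - x0> = 0

# ...yet x - x0 is not in span{d1}: it has a nonzero component
# orthogonal to d1, so the true span-search constraint is violated.
print(dot(dx, d1) == 0.0 and dot(dx, dx) > 0.0)   # True
```

A PEP built from the relaxed conditions therefore maximizes over a larger feasible set than the exact span search, which is why its value can only overestimate the true worst case.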
References
[1] E. de Klerk, F. Glineur, A. Taylor (2017). On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions. Optimization Letters, 11(7), 1185-1199.
- Parameters
x0 (Point) – the starting point.
f (Function) – the function on which the (sub)gradient is evaluated.
directions (list of Point) – the search directions, all of which are required to be orthogonal to the (sub)gradient of f at the output point.
- Returns
x (Point) – such that all vectors in directions are orthogonal to the (sub)gradient of f at x.
gx (Point) – a (sub)gradient of f at x.
fx (Expression) – the function f evaluated at x.