6. Fixed point

6.1. Halpern iteration

PEPit.examples.fixed_point_problems.wc_halpern_iteration(n, wrapper='cvxpy', solver=None, verbose=1)[source]

Consider the fixed point problem

\[\mathrm{Find}\, x:\, x = Ax,\]

where \(A\) is a non-expansive operator, that is, an \(L\)-Lipschitz operator with \(L=1\).

This code computes a worst-case guarantee for the Halpern Iteration. That is, it computes the smallest possible \(\tau(n)\) such that the guarantee

\[\|x_n - Ax_n\|^2 \leqslant \tau(n) \|x_0 - x_\star\|^2\]

is valid, where \(x_n\) is the output of the Halpern iteration, and \(x_\star\) the fixed point of \(A\).

In short, for a given value of \(n\), \(\tau(n)\) is computed as the worst-case value of \(\|x_n - Ax_n\|^2\) when \(\|x_0 - x_\star\|^2 \leqslant 1\).

Algorithm: The Halpern iteration can be written as

\[x_{t+1} = \frac{1}{t + 2} x_0 + \left(1 - \frac{1}{t + 2}\right) Ax_t.\]
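As an illustration (a toy sketch, not from the PEPit source; the helper `halpern` and the rotation operator are assumptions made for this example), the iteration can be run on a concrete non-expansive map. A plane rotation is 1-Lipschitz with the origin as its unique fixed point, so its residual can be checked against the worst-case bound of [2]:

```python
import numpy as np

def halpern(A, x0, n):
    """Run n steps of x_{t+1} = x0/(t+2) + (1 - 1/(t+2)) * A(x_t)."""
    x = x0.copy()
    for t in range(n):
        x = x0 / (t + 2) + (1 - 1 / (t + 2)) * A(x)
    return x

# A plane rotation is non-expansive (1-Lipschitz), with fixed point 0.
theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
A = lambda v: R @ v

x0 = np.array([1.0, 0.0])
n = 25
xn = halpern(A, x0, n)
residual = np.linalg.norm(xn - A(xn)) ** 2
bound = (2 / (n + 1)) ** 2 * np.linalg.norm(x0) ** 2  # worst-case guarantee of [2]
print(residual <= bound)  # True: this instance obeys the bound
```

Any non-expansive \(A\) with a fixed point obeys the same worst-case bound; the rotation is only a convenient instance.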

Theoretical guarantee: A tight worst-case guarantee for the Halpern iteration can be found in [2, Theorem 2.1]:

\[\|x_n - Ax_n\|^2 \leqslant \left(\frac{2}{n+1}\right)^2 \|x_0 - x_\star\|^2.\]
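For instance, at the \(n=25\) used in the example below, this expression evaluates to the theoretical value printed by the script:

```python
n = 25
theoretical_tau = (2 / (n + 1)) ** 2
print(round(theoretical_tau, 8))  # 0.00591716
```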

References: The method was first proposed in [1]. The detailed analysis and tight bound are available in [2].

[1] B. Halpern (1967). Fixed points of nonexpanding maps. Bulletin of the American Mathematical Society, 73(6), 957–961.

[2] F. Lieder (2021). On the convergence rate of the Halpern-iteration. Optimization Letters, 15(2), 405–418.

Parameters:
  • n (int) – number of iterations.

  • wrapper (str) – the name of the wrapper to be used.

  • solver (str) – the name of the solver the wrapper should use.

  • verbose (int) –

    level of information details to print.

    • -1: No verbose at all.

    • 0: This example’s output.

    • 1: This example’s output + PEPit information.

    • 2: This example’s output + PEPit information + solver details.

Returns:
  • pepit_tau (float) – worst-case value

  • theoretical_tau (float) – theoretical value

Example

>>> pepit_tau, theoretical_tau = wc_halpern_iteration(n=25, wrapper="cvxpy", solver=None, verbose=1)
(PEPit) Setting up the problem: size of the Gram matrix: 28x28
(PEPit) Setting up the problem: performance measure is the minimum of 1 element(s)
(PEPit) Setting up the problem: Adding initial conditions and general constraints ...
(PEPit) Setting up the problem: initial conditions and general constraints (1 constraint(s) added)
(PEPit) Setting up the problem: interpolation conditions for 1 function(s)
                        Function 1 : Adding 351 scalar constraint(s) ...
                        Function 1 : 351 scalar constraint(s) added
(PEPit) Setting up the problem: additional constraints for 0 function(s)
(PEPit) Compiling SDP
(PEPit) Calling SDP solver
(PEPit) Solver status: optimal (wrapper:cvxpy, solver: MOSEK); optimal value: 0.005917282090077699
(PEPit) Primal feasibility check:
                The solver found a Gram matrix that is positive semi-definite up to an error of 3.755174265259971e-09
                All the primal scalar constraints are verified up to an error of 1.3075734314749177e-08
(PEPit) Dual feasibility check:
                The solver found a residual matrix that is positive semi-definite
                All the dual scalar values associated with inequality constraints are nonnegative up to an error of 4.4389586421813856e-09
(PEPit) The worst-case guarantee proof is perfectly reconstituted up to an error of 2.425619306975869e-07
(PEPit) Final upper bound (dual): 0.005917288963138354 and lower bound (primal example): 0.005917282090077699
(PEPit) Duality gap: absolute: 6.873060655332441e-09 and relative: 1.1615232383221049e-06
*** Example file: worst-case performance of Halpern Iterations ***
        PEPit guarantee:         ||xN - AxN||^2 <= 0.00591729 ||x0 - x_*||^2
        Theoretical guarantee:   ||xN - AxN||^2 <= 0.00591716 ||x0 - x_*||^2

6.2. Optimal Contractive Halpern iteration

PEPit.examples.fixed_point_problems.wc_optimal_contractive_halpern_iteration(n, gamma, wrapper='cvxpy', solver=None, verbose=1)[source]

Consider the fixed point problem

\[\mathrm{Find}\, x:\, x = Ax,\]

where \(A\) is a \(1/\gamma\)-contractive operator, i.e., an \(L\)-Lipschitz operator with \(L=1/\gamma\).

This code computes a worst-case guarantee for the Optimal Contractive Halpern Iteration. That is, it computes the smallest possible \(\tau(n, \gamma)\) such that the guarantee

\[\|x_n - Ax_n\|^2 \leqslant \tau(n, \gamma) \|x_0 - x_\star\|^2\]

is valid, where \(x_n\) is the output of the Optimal Contractive Halpern iteration, and \(x_\star\) is the fixed point of \(A\). In short, for a given value of \(n, \gamma\), \(\tau(n, \gamma)\) is computed as the worst-case value of \(\|x_n - Ax_n\|^2\) when \(\|x_0 - x_\star\|^2 \leqslant 1\).

Algorithm: The Optimal Contractive Halpern iteration can be written as

\[x_{t+1} = \left(1 - \frac{1}{\varphi_{t+1}} \right) Ax_t + \frac{1}{\varphi_{t+1}} x_0,\]

where \(\varphi_k = \sum_{i=0}^k \gamma^{2i}\) and \(x_0\) is a starting point.
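A minimal sketch (the helper `contractive_halpern` and the scaled-rotation operator are assumptions for this illustration, not part of PEPit): a rotation divided by \(\gamma\) is \(1/\gamma\)-contractive with fixed point at the origin, so the residual can be compared against the bound of [1].

```python
import numpy as np

def contractive_halpern(A, x0, n, gamma):
    """Optimal contractive Halpern, with varphi_k = sum_{i=0}^k gamma^{2i}."""
    x = x0.copy()
    for t in range(n):
        phi = sum(gamma ** (2 * i) for i in range(t + 2))  # varphi_{t+1}
        x = (1 - 1 / phi) * A(x) + x0 / phi
    return x

gamma = 1.1
theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
A = lambda v: (R @ v) / gamma  # (1/gamma)-contractive, fixed point 0

x0 = np.array([1.0, 0.0])
n = 10
xn = contractive_halpern(A, x0, n, gamma)
residual = np.linalg.norm(xn - A(xn)) ** 2
bound = (1 + 1 / gamma) ** 2 / sum(gamma ** k for k in range(n + 1)) ** 2
print(residual <= bound)  # True: this instance obeys the bound
```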

Theoretical guarantee: A tight worst-case guarantee for the Optimal Contractive Halpern iteration can be found in [1, Corollary 3.3, Theorem 4.1]:

\[\|x_n - Ax_n\|^2 \leqslant \left(1 + \frac{1}{\gamma}\right)^2 \left( \frac{1}{\sum_{k=0}^n \gamma^k} \right)^2 \|x_0 - x_\star\|^2.\]
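Evaluated at the parameters of the example below (\(n=10\), \(\gamma=1.1\)), the bound reproduces the theoretical value printed by the script:

```python
n, gamma = 10, 1.1
tau = (1 + 1 / gamma) ** 2 * (1 / sum(gamma ** k for k in range(n + 1))) ** 2
print(round(tau, 7))  # 0.0106132
```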

References: The detailed approach and tight bound are available in [1].

[1] J. Park, E. Ryu (2022). Exact Optimal Accelerated Complexity for Fixed-Point Iterations. In 39th International Conference on Machine Learning (ICML).

Parameters:
  • n (int) – number of iterations.

  • gamma (float) – \(\gamma \ge 1\). \(A\) will be \(1/\gamma\)-contractive.

  • wrapper (str) – the name of the wrapper to be used.

  • solver (str) – the name of the solver the wrapper should use.

  • verbose (int) –

    level of information details to print.

    • -1: No verbose at all.

    • 0: This example’s output.

    • 1: This example’s output + PEPit information.

    • 2: This example’s output + PEPit information + solver details.

Returns:
  • pepit_tau (float) – worst-case value

  • theoretical_tau (float) – theoretical value

Example

>>> pepit_tau, theoretical_tau = wc_optimal_contractive_halpern_iteration(n=10, gamma=1.1, wrapper="cvxpy", solver=None, verbose=1)
(PEPit) Setting up the problem: size of the Gram matrix: 13x13
(PEPit) Setting up the problem: performance measure is the minimum of 1 element(s)
(PEPit) Setting up the problem: Adding initial conditions and general constraints ...
(PEPit) Setting up the problem: initial conditions and general constraints (1 constraint(s) added)
(PEPit) Setting up the problem: interpolation conditions for 1 function(s)
                        Function 1 : Adding 66 scalar constraint(s) ...
                        Function 1 : 66 scalar constraint(s) added
(PEPit) Setting up the problem: additional constraints for 0 function(s)
(PEPit) Compiling SDP
(PEPit) Calling SDP solver
(PEPit) Solver status: optimal (wrapper:cvxpy, solver: MOSEK); optimal value: 0.010613261462073679
(PEPit) Primal feasibility check:
                The solver found a Gram matrix that is positive semi-definite up to an error of 5.170073408600879e-09
                All the primal scalar constraints are verified up to an error of 1.5453453107439064e-08
(PEPit) Dual feasibility check:
                The solver found a residual matrix that is positive semi-definite
                All the dual scalar values associated with inequality constraints are nonnegative up to an error of 3.883430104655162e-09
(PEPit) The worst-case guarantee proof is perfectly reconstituted up to an error of 1.396268932559219e-07
(PEPit) Final upper bound (dual): 0.010613268001708536 and lower bound (primal example): 0.010613261462073679
(PEPit) Duality gap: absolute: 6.5396348579438435e-09 and relative: 6.161757986753765e-07
*** Example file: worst-case performance of Optimal Contractive Halpern Iterations ***
        PEPit guarantee:         ||xN - AxN||^2 <= 0.0106133 ||x0 - x_*||^2
        Theoretical guarantee:   ||xN - AxN||^2 <= 0.0106132 ||x0 - x_*||^2

6.3. Krasnoselskii-Mann with constant step-sizes

PEPit.examples.fixed_point_problems.wc_krasnoselskii_mann_constant_step_sizes(n, gamma, wrapper='cvxpy', solver=None, verbose=1)[source]

Consider the fixed point problem

\[\mathrm{Find}\, x:\, x = Ax,\]

where \(A\) is a non-expansive operator, that is, an \(L\)-Lipschitz operator with \(L=1\).

This code computes a worst-case guarantee for the Krasnoselskii-Mann (KM) method with constant step-size. That is, it computes the smallest possible \(\tau(n)\) such that the guarantee

\[\frac{1}{4}\|x_n - Ax_n\|^2 \leqslant \tau(n) \|x_0 - x_\star\|^2\]

is valid, where \(x_n\) is the output of the KM method, and \(x_\star\) is some fixed point of \(A\) (i.e., \(x_\star=Ax_\star\)).

Algorithm: The constant step-size KM method is described by

\[x_{t+1} = \left(1 - \gamma\right) x_{t} + \gamma Ax_{t}.\]
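A sketch of the update on a toy non-expansive operator (the helper `km_constant` and the rotation are illustrative assumptions, not from the PEPit source):

```python
import numpy as np

def km_constant(A, x0, gamma, n):
    """Constant step-size KM: x_{t+1} = (1 - gamma) * x_t + gamma * A(x_t)."""
    x = x0.copy()
    for _ in range(n):
        x = (1 - gamma) * x + gamma * A(x)
    return x

theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
A = lambda v: R @ v  # non-expansive, fixed point 0

x0 = np.array([1.0, 0.0])
xn = km_constant(A, x0, gamma=3 / 4, n=3)
residual = 0.25 * np.linalg.norm(xn - A(xn)) ** 2
# 0.140625 is the worst-case value for n=3, gamma=3/4 (see the example below).
print(residual <= 0.140625)  # True: this instance obeys the bound
```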

Theoretical guarantee: A theoretical upper bound is provided by [1, Theorem 4.9]

\[\begin{split}\tau(n) = \left\{ \begin{eqnarray} \frac{1}{n+1}\left(\frac{n}{n+1}\right)^n \frac{1}{4 \gamma (1 - \gamma)}\quad & \text{if } \frac{1}{2}\leqslant \gamma \leqslant \frac{1}{2}\left(1+\sqrt{\frac{n}{n+1}}\right) \\ (\gamma - 1)^{2n} \quad & \text{if } \frac{1}{2}\left(1+\sqrt{\frac{n}{n+1}}\right) < \gamma \leqslant 1. \end{eqnarray} \right.\end{split}\]
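The piecewise bound is straightforward to evaluate; a sketch (the function name `km_theoretical_tau` is an assumption for this illustration):

```python
import math

def km_theoretical_tau(n, gamma):
    """Upper bound tau(n) from [1, Theorem 4.9], for 1/2 <= gamma <= 1."""
    threshold = (1 + math.sqrt(n / (n + 1))) / 2
    if gamma <= threshold:
        return (n / (n + 1)) ** n / ((n + 1) * 4 * gamma * (1 - gamma))
    return (gamma - 1) ** (2 * n)

print(km_theoretical_tau(3, 3 / 4))  # 0.140625
```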

Reference:

[1] F. Lieder (2018). Projection Based Methods for Conic Linear Programming Optimal First Order Complexities and Norm Constrained Quasi Newton Methods. PhD thesis, HHU Düsseldorf.

Parameters:
  • n (int) – number of iterations.

  • gamma (float) – step-size between 1/2 and 1

  • wrapper (str) – the name of the wrapper to be used.

  • solver (str) – the name of the solver the wrapper should use.

  • verbose (int) –

    level of information details to print.

    • -1: No verbose at all.

    • 0: This example’s output.

    • 1: This example’s output + PEPit information.

    • 2: This example’s output + PEPit information + solver details.

Returns:
  • pepit_tau (float) – worst-case value

  • theoretical_tau (float) – theoretical value

Example

>>> pepit_tau, theoretical_tau = wc_krasnoselskii_mann_constant_step_sizes(n=3, gamma=3 / 4, wrapper="cvxpy", solver=None, verbose=1)
(PEPit) Setting up the problem: size of the Gram matrix: 6x6
(PEPit) Setting up the problem: performance measure is the minimum of 1 element(s)
(PEPit) Setting up the problem: Adding initial conditions and general constraints ...
(PEPit) Setting up the problem: initial conditions and general constraints (1 constraint(s) added)
(PEPit) Setting up the problem: interpolation conditions for 1 function(s)
                        Function 1 : Adding 10 scalar constraint(s) ...
                        Function 1 : 10 scalar constraint(s) added
(PEPit) Setting up the problem: additional constraints for 0 function(s)
(PEPit) Compiling SDP
(PEPit) Calling SDP solver
(PEPit) Solver status: optimal (wrapper:cvxpy, solver: MOSEK); optimal value: 0.1406249823498115
(PEPit) Primal feasibility check:
                The solver found a Gram matrix that is positive semi-definite up to an error of 2.6923151650319117e-09
                All the primal scalar constraints are verified up to an error of 1.7378567473969042e-09
(PEPit) Dual feasibility check:
                The solver found a residual matrix that is positive semi-definite
                All the dual scalar values associated with inequality constraints are nonnegative
(PEPit) The worst-case guarantee proof is perfectly reconstituted up to an error of 4.412159703651858e-08
(PEPit) Final upper bound (dual): 0.14062498615478927 and lower bound (primal example): 0.1406249823498115
(PEPit) Duality gap: absolute: 3.804977777299712e-09 and relative: 2.7057623145755453e-08
*** Example file: worst-case performance of Kranoselskii-Mann iterations ***
        PEPit guarantee:         1/4||xN - AxN||^2 <= 0.140625 ||x0 - x_*||^2
        Theoretical guarantee:   1/4||xN - AxN||^2 <= 0.140625 ||x0 - x_*||^2

6.4. Krasnoselskii-Mann with increasing step-sizes

PEPit.examples.fixed_point_problems.wc_krasnoselskii_mann_increasing_step_sizes(n, wrapper='cvxpy', solver=None, verbose=1)[source]

Consider the fixed point problem

\[\mathrm{Find}\, x:\, x = Ax,\]

where \(A\) is a non-expansive operator, that is, an \(L\)-Lipschitz operator with \(L=1\).

This code computes a worst-case guarantee for the Krasnoselskii-Mann method. That is, it computes the smallest possible \(\tau(n)\) such that the guarantee

\[\frac{1}{4}\|x_n - Ax_n\|^2 \leqslant \tau(n) \|x_0 - x_\star\|^2\]

is valid, where \(x_n\) is the output of the KM method, and \(x_\star\) is some fixed point of \(A\) (i.e., \(x_\star=Ax_\star\)).

Algorithm: The KM method is described by

\[x_{t+1} = \frac{1}{t + 2} x_{t} + \left(1 - \frac{1}{t + 2}\right) Ax_{t}.\]
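A sketch of the increasing step-size KM update (the helper `km_increasing` and the rotation operator are illustrative assumptions, not from the PEPit source):

```python
import numpy as np

def km_increasing(A, x0, n):
    """KM with increasing steps: x_{t+1} = x_t/(t+2) + (1 - 1/(t+2)) * A(x_t)."""
    x = x0.copy()
    for t in range(n):
        x = x / (t + 2) + (1 - 1 / (t + 2)) * A(x)
    return x

theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
A = lambda v: R @ v  # non-expansive, fixed point 0

x0 = np.array([1.0, 0.0])
xn = km_increasing(A, x0, n=3)
residual = 0.25 * np.linalg.norm(xn - A(xn)) ** 2
# The PEP worst-case value for n=3 is about 0.119634 (see the example below),
# so this particular instance must stay below it.
print(residual <= 0.119634 + 1e-6)  # True
```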

Reference: This scheme was first studied using PEPs in [1].

[1] F. Lieder (2018). Projection Based Methods for Conic Linear Programming Optimal First Order Complexities and Norm Constrained Quasi Newton Methods. PhD thesis, HHU Düsseldorf.

Parameters:
  • n (int) – number of iterations.

  • wrapper (str) – the name of the wrapper to be used.

  • solver (str) – the name of the solver the wrapper should use.

  • verbose (int) –

    level of information details to print.

    • -1: No verbose at all.

    • 0: This example’s output.

    • 1: This example’s output + PEPit information.

    • 2: This example’s output + PEPit information + solver details.

Returns:
  • pepit_tau (float) – worst-case value

  • theoretical_tau (None) – no theoretical value

Example

>>> pepit_tau, theoretical_tau = wc_krasnoselskii_mann_increasing_step_sizes(n=3, wrapper="cvxpy", solver=None, verbose=1)
(PEPit) Setting up the problem: size of the Gram matrix: 6x6
(PEPit) Setting up the problem: performance measure is the minimum of 1 element(s)
(PEPit) Setting up the problem: Adding initial conditions and general constraints ...
(PEPit) Setting up the problem: initial conditions and general constraints (1 constraint(s) added)
(PEPit) Setting up the problem: interpolation conditions for 1 function(s)
                        Function 1 : Adding 10 scalar constraint(s) ...
                        Function 1 : 10 scalar constraint(s) added
(PEPit) Setting up the problem: additional constraints for 0 function(s)
(PEPit) Compiling SDP
(PEPit) Calling SDP solver
(PEPit) Solver status: optimal (wrapper:cvxpy, solver: MOSEK); optimal value: 0.11963370896691235
(PEPit) Primal feasibility check:
                The solver found a Gram matrix that is positive semi-definite up to an error of 1.146590974387308e-09
                All the primal scalar constraints are verified up to an error of 6.733909263534343e-10
(PEPit) Dual feasibility check:
                The solver found a residual matrix that is positive semi-definite
                All the dual scalar values associated with inequality constraints are nonnegative
(PEPit) The worst-case guarantee proof is perfectly reconstituted up to an error of 1.650803029863701e-08
(PEPit) Final upper bound (dual): 0.11963371048428333 and lower bound (primal example): 0.11963370896691235
(PEPit) Duality gap: absolute: 1.5173709788651735e-09 and relative: 1.2683473512342912e-08
*** Example file: worst-case performance of Kranoselskii-Mann iterations ***
        PEPit guarantee:         1/4 ||xN - AxN||^2 <= 0.119634 ||x0 - x_*||^2

6.5. Inconsistent Halpern iteration

PEPit.examples.fixed_point_problems.wc_inconsistent_halpern_iteration(n, wrapper='cvxpy', solver=None, verbose=1)[source]

Consider the fixed point problem

\[\mathrm{Find}\, x:\, x = Ax,\]

where \(A\) is a non-expansive operator, that is, an \(L\)-Lipschitz operator with \(L=1\). When the above problem has no solution, i.e., when \(A\) has no fixed point, the behavior of the fixed-point iteration with \(A\) can be characterized via the infimal displacement vector \(v\).

This code computes a worst-case guarantee for the Halpern iteration when \(A\) is not necessarily consistent, i.e., does not necessarily have a fixed point. That is, it computes the smallest possible \(\tau(n)\) such that the guarantee

\[\|x_n - Ax_n - v\|^2 \leqslant \tau(n) \|x_0 - x_\star\|^2\]

is valid, where \(x_n\) is the output of the Halpern iteration and \(x_\star\) is the point where \(v\) is attained, i.e.,

\[v = x_\star - Ax_\star.\]

In short, for a given value of \(n\), \(\tau(n)\) is computed as the worst-case value of \(\|x_n - Ax_n - v\|^2\) when \(\|x_0 - x_\star\|^2 \leqslant 1\).

Algorithm: The Halpern iteration can be written as

\[x_{t+1} = \frac{1}{t + 2} x_0 + \left(1 - \frac{1}{t + 2}\right) Ax_t.\]

Theoretical guarantee: A worst-case guarantee for the Halpern iteration can be found in [1, Theorem 8]:

\[\|x_n - Ax_n - v\|^2 \leqslant \left(\frac{\sqrt{H_n + 12} + 1}{n + 1}\right)^2 \|x_0 - x_\star\|^2,\]

where \(H_n = \sum_{k=1}^n \frac{1}{k}\) denotes the \(n\)-th harmonic number.
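Reading \(H_n\) as the \(n\)-th harmonic number \(\sum_{k=1}^{n} 1/k\) (a reading confirmed by the numbers below), the bound evaluated at \(n=25\) reproduces the theoretical value printed in the example:

```python
n = 25
H_n = sum(1 / k for k in range(1, n + 1))  # n-th harmonic number
tau = ((H_n + 12) ** 0.5 + 1) ** 2 / (n + 1) ** 2
print(round(tau, 7))
```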

References: The detailed approach is available in [1].

[1] J. Park, E. Ryu (2023). Accelerated Infeasibility Detection of Constrained Optimization and Fixed-Point Iterations. International Conference on Machine Learning.

Parameters:
  • n (int) – number of iterations.

  • wrapper (str) – the name of the wrapper to be used.

  • solver (str) – the name of the solver the wrapper should use.

  • verbose (int) –

    Level of information details to print.

    • -1: No verbose at all.

    • 0: This example’s output.

    • 1: This example’s output + PEPit information.

    • 2: This example’s output + PEPit information + solver details.

Returns:
  • pepit_tau (float) – worst-case value

  • theoretical_tau (float) – theoretical value

Example

>>> pepit_tau, theoretical_tau = wc_inconsistent_halpern_iteration(n=25, verbose=1)
(PEPit) Setting up the problem: size of the Gram matrix: 29x29
(PEPit) Setting up the problem: performance measure is the minimum of 1 element(s)
(PEPit) Setting up the problem: Adding initial conditions and general constraints ...
(PEPit) Setting up the problem: initial conditions and general constraints (1 constraint(s) added)
(PEPit) Setting up the problem: interpolation conditions for 1 function(s)
                        Function 1 : Adding 378 scalar constraint(s) ...
                        Function 1 : 378 scalar constraint(s) added
(PEPit) Setting up the problem: additional constraints for 0 function(s)
(PEPit) Compiling SDP
(PEPit) Calling SDP solver
(PEPit) Solver status: optimal (wrapper:cvxpy, solver: MOSEK); optimal value: 0.02678884717170149
(PEPit) Primal feasibility check:
                The solver found a Gram matrix that is positive semi-definite
                All the primal scalar constraints are verified up to an error of 4.2928149923682213e-10
(PEPit) Dual feasibility check:
                The solver found a residual matrix that is positive semi-definite
                All the dual scalar values associated with inequality constraints are nonnegative
(PEPit) The worst-case guarantee proof is perfectly reconstituted up to an error of 3.9511359559460285e-06
(PEPit) Final upper bound (dual): 0.02678881575052497 and lower bound (primal example): 0.02678884717170149
(PEPit) Duality gap: absolute: -3.142117652177312e-08 and relative: -1.1729200708183147e-06
*** Example file: worst-case performance of (possibly inconsistent) Halpern Iterations ***
        PEPit guarantee:         ||xN - AxN - v||^2 <= 0.0267888 ||x0 - x_*||^2
        Theoretical guarantee:   ||xN - AxN - v||^2 <= 0.0366417 ||x0 - x_*||^2