5. Monotone inclusions and variational inequalities
5.1. Proximal point
- PEPit.examples.monotone_inclusions_variational_inequalities.wc_proximal_point(alpha, n, wrapper='cvxpy', solver=None, verbose=1)[source]
Consider the monotone inclusion problem
\[\mathrm{Find}\, x:\, 0\in Ax,\]where \(A\) is maximally monotone. We denote \(J_A = (I + A)^{-1}\) the resolvent of \(A\).
This code computes a worst-case guarantee for the proximal point method. That is, it computes the smallest possible \(\tau(n, \alpha)\) such that the guarantee
\[\|x_n - x_{n-1}\|^2 \leqslant \tau(n, \alpha) \|x_0 - x_\star\|^2,\]is valid, where \(x_\star\) is such that \(0 \in Ax_\star\).
Algorithm: The proximal point algorithm for monotone inclusions is described as follows, for \(t \in \{ 0, \dots, n-1\}\),
\[x_{t+1} = J_{\alpha A}(x_t),\]where \(\alpha\) is a step-size.
Theoretical guarantee: A tight theoretical guarantee can be found in [1, section 4].
\[\|x_n - x_{n-1}\|^2 \leqslant \frac{\left(1 - \frac{1}{n}\right)^{n - 1}}{n} \|x_0 - x_\star\|^2.\]
Reference:
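As an illustration (not part of PEPit), the iteration can be simulated on a toy instance: a linear maximally monotone operator \(Ax = Mx\) with \(M\) skew-symmetric, whose resolvent is a matrix inverse. The operator, starting point, and parameter choices below are ours, picked to match the example's \(\alpha=2\), \(n=10\); the unique zero is \(x_\star = 0\).

```python
import numpy as np

# Toy maximally monotone operator A x = M x with M skew-symmetric
# (so <Ax - Ay, x - y> = 0 >= 0); an illustrative choice, not from PEPit.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
alpha, n = 2.0, 10

# Resolvent of alpha*A: J_{alpha A} = (I + alpha*M)^{-1}
J = np.linalg.inv(np.eye(2) + alpha * M)

trajectory = [np.array([1.0, 0.0])]   # x_0; the unique zero of A is x_* = 0
for _ in range(n):
    trajectory.append(J @ trajectory[-1])   # x_{t+1} = J_{alpha A}(x_t)

residual = np.linalg.norm(trajectory[-1] - trajectory[-2]) ** 2
tau = (1 - 1 / n) ** (n - 1) / n   # theoretical worst-case factor from above
assert residual <= tau * np.linalg.norm(trajectory[0]) ** 2
```

The final assertion checks the guarantee above on this particular instance; the PEP computes the largest such residual over all maximally monotone operators.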
- Parameters:
alpha (float) – the step-size.
n (int) – number of iterations.
wrapper (str) – the name of the wrapper to be used.
solver (str) – the name of the solver the wrapper should use.
verbose (int) –
level of information details to print.
-1: No verbose at all.
0: This example’s output.
1: This example’s output + PEPit information.
2: This example’s output + PEPit information + solver details.
- Returns:
pepit_tau (float) – worst-case value.
theoretical_tau (float) – theoretical value.
Example
>>> pepit_tau, theoretical_tau = wc_proximal_point(alpha=2, n=10, wrapper="cvxpy", solver=None, verbose=1)
(PEPit) Setting up the problem: size of the Gram matrix: 12x12
(PEPit) Setting up the problem: performance measure is the minimum of 1 element(s)
(PEPit) Setting up the problem: Adding initial conditions and general constraints ...
(PEPit) Setting up the problem: initial conditions and general constraints (1 constraint(s) added)
(PEPit) Setting up the problem: interpolation conditions for 1 function(s)
        Function 1 : Adding 55 scalar constraint(s) ...
        Function 1 : 55 scalar constraint(s) added
(PEPit) Setting up the problem: additional constraints for 0 function(s)
(PEPit) Compiling SDP
(PEPit) Calling SDP solver
(PEPit) Solver status: optimal (wrapper:cvxpy, solver: MOSEK); optimal value: 0.038742046287953344
(PEPit) Primal feasibility check:
        The solver found a Gram matrix that is positive semi-definite up to an error of 2.905660281263931e-09
        All the primal scalar constraints are verified up to an error of 5.7862823421572784e-09
(PEPit) Dual feasibility check:
        The solver found a residual matrix that is positive semi-definite
        All the dual scalar values associated with inequality constraints are nonnegative
(PEPit) The worst-case guarantee proof is perfectly reconstituted up to an error of 5.069601027862802e-08
(PEPit) Final upper bound (dual): 0.03874204889860354 and lower bound (primal example): 0.038742046287953344
(PEPit) Duality gap: absolute: 2.610650196022135e-09 and relative: 6.73854493027619e-08
*** Example file: worst-case performance of the Proximal Point Method***
        PEPit guarantee:         ||x(n) - x(n-1)||^2 <= 0.038742 ||x0 - xs||^2
        Theoretical guarantee:   ||x(n) - x(n-1)||^2 <= 0.038742 ||x0 - xs||^2
5.2. Accelerated proximal point
- PEPit.examples.monotone_inclusions_variational_inequalities.wc_accelerated_proximal_point(alpha, n, wrapper='cvxpy', solver=None, verbose=1)[source]
Consider the monotone inclusion problem
\[\mathrm{Find}\, x:\, 0\in Ax,\]where \(A\) is maximally monotone. We denote \(J_A = (I + A)^{-1}\) the resolvent of \(A\).
This code computes a worst-case guarantee for the accelerated proximal point method proposed in [1]. That is, it computes the smallest possible \(\tau(n, \alpha)\) such that the guarantee
\[\|x_n - y_{n-1}\|^2 \leqslant \tau(n, \alpha) \|x_0 - x_\star\|^2,\]is valid, where \(x_\star\) is such that \(0 \in Ax_\star\).
Algorithm: Accelerated proximal point is described as follows, for \(t \in \{ 0, \dots, n-1\}\)
\[\begin{split}\begin{eqnarray} x_{t+1} & = & J_{\alpha A}(y_t), \\ y_{t+1} & = & x_{t+1} + \frac{t}{t+2}(x_{t+1} - x_{t}) - \frac{t}{t+1}(x_t - y_{t-1}), \end{eqnarray}\end{split}\]where \(x_0=y_0=y_{-1}\) is a starting point.
Theoretical guarantee: A tight theoretical worst-case guarantee can be found in [1, Theorem 4.1], for \(n \geqslant 1\),
\[\|x_n - y_{n-1}\|^2 \leqslant \frac{1}{n^2} \|x_0 - x_\star\|^2.\]
Reference:
- Parameters:
alpha (float) – the step-size
n (int) – number of iterations.
wrapper (str) – the name of the wrapper to be used.
solver (str) – the name of the solver the wrapper should use.
verbose (int) –
level of information details to print.
-1: No verbose at all.
0: This example’s output.
1: This example’s output + PEPit information.
2: This example’s output + PEPit information + solver details.
- Returns:
pepit_tau (float) – worst-case value
theoretical_tau (float) – theoretical value
Example
>>> pepit_tau, theoretical_tau = wc_accelerated_proximal_point(alpha=2, n=10, wrapper="cvxpy", solver=None, verbose=1)
(PEPit) Setting up the problem: size of the Gram matrix: 12x12
(PEPit) Setting up the problem: performance measure is the minimum of 1 element(s)
(PEPit) Setting up the problem: Adding initial conditions and general constraints ...
(PEPit) Setting up the problem: initial conditions and general constraints (1 constraint(s) added)
(PEPit) Setting up the problem: interpolation conditions for 1 function(s)
        Function 1 : Adding 55 scalar constraint(s) ...
        Function 1 : 55 scalar constraint(s) added
(PEPit) Setting up the problem: additional constraints for 0 function(s)
(PEPit) Compiling SDP
(PEPit) Calling SDP solver
(PEPit) Solver status: optimal (wrapper:cvxpy, solver: MOSEK); optimal value: 0.01000002559061373
(PEPit) Primal feasibility check:
        The solver found a Gram matrix that is positive semi-definite up to an error of 2.589511920935478e-09
        All the primal scalar constraints are verified up to an error of 7.459300880967301e-09
(PEPit) Dual feasibility check:
        The solver found a residual matrix that is positive semi-definite
        All the dual scalar values associated with inequality constraints are nonnegative
(PEPit) The worst-case guarantee proof is perfectly reconstituted up to an error of 4.7282086824171376e-08
(PEPit) Final upper bound (dual): 0.010000027890024786 and lower bound (primal example): 0.01000002559061373
(PEPit) Duality gap: absolute: 2.2994110556590064e-09 and relative: 2.2994051713400514e-07
*** Example file: worst-case performance of the Accelerated Proximal Point Method***
        PEPit guarantee:         ||x_n - y_n||^2 <= 0.01 ||x_0 - x_s||^2
        Theoretical guarantee:   ||x_n - y_n||^2 <= 0.01 ||x_0 - x_s||^2
5.3. Optimal Strongly-monotone Proximal Point
- PEPit.examples.monotone_inclusions_variational_inequalities.wc_optimal_strongly_monotone_proximal_point(n, mu, wrapper='cvxpy', solver=None, verbose=1)[source]
Consider the monotone inclusion problem
\[\mathrm{Find}\, x:\, 0\in Ax,\]where \(A\) is maximally \(\mu\)-strongly monotone. We denote by \(J_{A}\) the resolvent of \(A\).
For any \(x\) such that \(x = J_{A} y\) for some \(y\), define the resolvent residual \(\tilde{A}x = y - J_{A}y \in Ax\).
This code computes a worst-case guarantee for the Optimal Strongly-monotone Proximal Point Method (OS-PPM). That is, it computes the smallest possible \(\tau(n, \mu)\) such that the guarantee
\[\|\tilde{A}x_n\|^2 \leqslant \tau(n, \mu) \|x_0 - x_\star\|^2,\]is valid, where \(x_n\) is the output of the Optimal Strongly-monotone Proximal Point Method, and \(x_\star\) is a zero of \(A\). In short, for a given value of \(n, \mu\), \(\tau(n, \mu)\) is computed as the worst-case value of \(\|\tilde{A}x_n\|^2\) when \(\|x_0 - x_\star\|^2 \leqslant 1\).
Algorithm: The Optimal Strongly-monotone Proximal Point Method can be written as
\begin{eqnarray} x_{t+1} & = & J_{A} y_t,\\ y_{t+1} & = & x_{t+1} + \frac{\varphi_{t} - 1}{\varphi_{t+1}} (x_{t+1} - x_t) - \frac{2 \mu \varphi_{t}}{\varphi_{t+1}} (y_t - x_{t+1}) \\ & & + \frac{(1+2\mu) \varphi_{t-1}}{\varphi_{t+1}} (y_{t-1} - x_t), \end{eqnarray}
where \(\varphi_k = \sum_{i=0}^k (1+2\mu)^{2i}\) with \(\varphi_{-1}=0\), and \(x_0 = y_0 = y_{-1}\) is a starting point.
This method is equivalent to the Optimal Contractive Halpern iteration.
Theoretical guarantee: A tight worst-case guarantee for the Optimal Strongly-monotone Proximal Point Method can be found in [1, Theorem 3.2, Corollary 4.2]:
\[\|\tilde{A}x_n\|^2 \leqslant \left( \frac{1}{\sum_{k=0}^{n-1} (1+2\mu)^k} \right)^2 \|x_0 - x_\star\|^2.\]
References: The detailed approach and tight bound are available in [1].
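As an illustration (ours, not from [1]), the OS-PPM recursion can be simulated on a toy maximally \(\mu\)-strongly monotone operator \(Ax = \mu x + Mx\) with \(M\) skew-symmetric, using the example's \(\mu=0.05\), \(n=10\); the zero is \(x_\star = 0\) and \(\tilde{A}x_n = y_{n-1} - x_n\).

```python
import numpy as np

mu, n = 0.05, 10
# Toy maximally mu-strongly monotone operator A x = mu*x + M x, M skew-symmetric.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
J = np.linalg.inv((1 + mu) * np.eye(2) + M)    # resolvent J_A

def phi(k):
    # phi_k = sum_{i=0}^k (1 + 2*mu)^{2i}, with phi_{-1} = 0
    return sum((1 + 2 * mu) ** (2 * i) for i in range(k + 1))

x0 = np.array([1.0, 0.0])
xs = [x0]
ys = [x0, x0]                                  # ys[t + 1] is y_t; ys[0] is y_{-1}
for t in range(n):
    x_next = J @ ys[t + 1]                     # x_{t+1} = J_A(y_t)
    y_next = (x_next
              + (phi(t) - 1) / phi(t + 1) * (x_next - xs[t])
              - 2 * mu * phi(t) / phi(t + 1) * (ys[t + 1] - x_next)
              + (1 + 2 * mu) * phi(t - 1) / phi(t + 1) * (ys[t] - xs[t]))
    xs.append(x_next)
    ys.append(y_next)

residual = np.linalg.norm(ys[n] - xs[n]) ** 2  # ||A~ x_n||^2 = ||y_{n-1} - x_n||^2
tau = (1 / sum((1 + 2 * mu) ** k for k in range(n))) ** 2
assert residual <= tau * np.linalg.norm(x0) ** 2
```

The value of `tau` here reproduces the theoretical factor printed in the example below (about 0.003937).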
- Parameters:
n (int) – number of iterations.
mu (float) – \(\mu \ge 0\). \(A\) will be maximally \(\mu\)-strongly monotone.
wrapper (str) – the name of the wrapper to be used.
solver (str) – the name of the solver the wrapper should use.
verbose (int) –
level of information details to print.
-1: No verbose at all.
0: This example’s output.
1: This example’s output + PEPit information.
2: This example’s output + PEPit information + solver details.
- Returns:
pepit_tau (float) – worst-case value.
theoretical_tau (float) – theoretical value.
Example
>>> pepit_tau, theoretical_tau = wc_optimal_strongly_monotone_proximal_point(n=10, mu=0.05, wrapper="cvxpy", solver=None, verbose=1)
(PEPit) Setting up the problem: size of the Gram matrix: 12x12
(PEPit) Setting up the problem: performance measure is the minimum of 1 element(s)
(PEPit) Setting up the problem: Adding initial conditions and general constraints ...
(PEPit) Setting up the problem: initial conditions and general constraints (1 constraint(s) added)
(PEPit) Setting up the problem: interpolation conditions for 1 function(s)
        Function 1 : Adding 55 scalar constraint(s) ...
        Function 1 : 55 scalar constraint(s) added
(PEPit) Setting up the problem: additional constraints for 0 function(s)
(PEPit) Compiling SDP
(PEPit) Calling SDP solver
(PEPit) Solver status: optimal (wrapper:cvxpy, solver: MOSEK); optimal value: 0.003936989547244047
(PEPit) Primal feasibility check:
        The solver found a Gram matrix that is positive semi-definite up to an error of 1.0607807556608727e-09
        All the primal scalar constraints are verified up to an error of 3.5351675688243754e-09
(PEPit) Dual feasibility check:
        The solver found a residual matrix that is positive semi-definite
        All the dual scalar values associated with inequality constraints are nonnegative
(PEPit) The worst-case guarantee proof is perfectly reconstituted up to an error of 1.804099035739487e-08
(PEPit) Final upper bound (dual): 0.003936990621254958 and lower bound (primal example): 0.003936989547244047
(PEPit) Duality gap: absolute: 1.0740109105886186e-09 and relative: 2.7280004117370406e-07
*** Example file: worst-case performance of Optimal Strongly-monotone Proximal Point Method ***
        PEPit guarantee:         ||AxN||^2 <= 0.00393699 ||x0 - x_*||^2
        Theoretical guarantee:   ||AxN||^2 <= 0.00393698 ||x0 - x_*||^2
5.4. Douglas-Rachford splitting
- PEPit.examples.monotone_inclusions_variational_inequalities.wc_douglas_rachford_splitting(L, mu, alpha, theta, wrapper='cvxpy', solver=None, verbose=1)[source]
Consider the monotone inclusion problem
\[\mathrm{Find}\, x:\, 0\in Ax + Bx,\]where \(A\) is \(L\)-Lipschitz and maximally monotone and \(B\) is (maximally) \(\mu\)-strongly monotone. We denote by \(J_{\alpha A}\) and \(J_{\alpha B}\) the resolvents of respectively \(\alpha A\) and \(\alpha B\).
This code computes a worst-case guarantee for the Douglas-Rachford splitting (DRS). That is, given two initial points \(w^{(0)}_t\) and \(w^{(1)}_t\), this code computes the smallest possible \(\tau(L, \mu, \alpha, \theta)\) (a.k.a. “contraction factor”) such that the guarantee
\[\|w^{(0)}_{t+1} - w^{(1)}_{t+1}\|^2 \leqslant \tau(L, \mu, \alpha, \theta) \|w^{(0)}_{t} - w^{(1)}_{t}\|^2,\]is valid, where \(w^{(0)}_{t+1}\) and \(w^{(1)}_{t+1}\) are obtained after one iteration of DRS from respectively \(w^{(0)}_{t}\) and \(w^{(1)}_{t}\).
In short, for given values of \(L\), \(\mu\), \(\alpha\) and \(\theta\), the contraction factor \(\tau(L, \mu, \alpha, \theta)\) is computed as the worst-case value of \(\|w^{(0)}_{t+1} - w^{(1)}_{t+1}\|^2\) when \(\|w^{(0)}_{t} - w^{(1)}_{t}\|^2 \leqslant 1\).
Algorithm: One iteration of the Douglas-Rachford splitting is described as follows, for \(t \in \{ 0, \dots, n-1\}\),
\begin{eqnarray} x_{t+1} & = & J_{\alpha B} (w_t),\\ y_{t+1} & = & J_{\alpha A} (2x_{t+1}-w_t),\\ w_{t+1} & = & w_t - \theta (x_{t+1}-y_{t+1}). \end{eqnarray}
Theoretical guarantee: Theoretical worst-case guarantees can be found in [1, section 4, Theorem 4.3]. Since the results of [2] tighten those of [1], we compare with [2, Theorem 4.3] below. The theoretical results are complicated and we do not copy them here.
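As an illustration (ours, not from [1, 2]), one DRS iteration can be run on a toy instance: \(Ax = Mx\) with \(M\) skew-symmetric and \(\|M\| = L\) (Lipschitz and monotone), and \(Bx = \mu x\) (\(\mu\)-strongly monotone). The contraction ratio of any such instance must stay below the worst-case value PEPit computes for these parameters (about 0.928771 in the example below).

```python
import numpy as np

L, mu, alpha, theta = 1.0, 0.1, 1.3, 0.9
# Toy instance: A x = M x (L-Lipschitz, monotone, skew-symmetric), B x = mu*x.
M = L * np.array([[0.0, 1.0], [-1.0, 0.0]])
I = np.eye(2)

def drs_step(w):
    x = w / (1 + alpha * mu)                        # x = J_{alpha B}(w)
    y = np.linalg.solve(I + alpha * M, 2 * x - w)   # y = J_{alpha A}(2x - w)
    return w - theta * (x - y)

w0, w1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
ratio = (np.linalg.norm(drs_step(w0) - drs_step(w1)) ** 2
         / np.linalg.norm(w0 - w1) ** 2)
assert ratio <= 0.928771   # worst-case contraction factor for these parameters
```

On this instance the ratio is well below the worst case; the PEP finds the instance that attains it.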
References: The detailed PEP methodology for studying operator splitting is provided in [2].
- Parameters:
L (float) – the Lipschitz parameter.
mu (float) – the strong monotonicity parameter.
alpha (float) – the step-size in the resolvent.
theta (float) – algorithm parameter.
wrapper (str) – the name of the wrapper to be used.
solver (str) – the name of the solver the wrapper should use.
verbose (int) –
level of information details to print.
-1: No verbose at all.
0: This example’s output.
1: This example’s output + PEPit information.
2: This example’s output + PEPit information + solver details.
- Returns:
pepit_tau (float) – worst-case value.
theoretical_tau (float) – theoretical value.
Example
>>> pepit_tau, theoretical_tau = wc_douglas_rachford_splitting(L=1, mu=.1, alpha=1.3, theta=.9, wrapper="cvxpy", solver=None, verbose=1)
(PEPit) Setting up the problem: size of the Gram matrix: 6x6
(PEPit) Setting up the problem: performance measure is the minimum of 1 element(s)
(PEPit) Setting up the problem: Adding initial conditions and general constraints ...
(PEPit) Setting up the problem: initial conditions and general constraints (1 constraint(s) added)
(PEPit) Setting up the problem: interpolation conditions for 2 function(s)
        Function 1 : Adding 2 scalar constraint(s) ...
        Function 1 : 2 scalar constraint(s) added
        Function 2 : Adding 1 scalar constraint(s) ...
        Function 2 : 1 scalar constraint(s) added
(PEPit) Setting up the problem: additional constraints for 0 function(s)
(PEPit) Compiling SDP
(PEPit) Calling SDP solver
(PEPit) Solver status: optimal (wrapper:cvxpy, solver: MOSEK); optimal value: 0.928770707839351
(PEPit) Primal feasibility check:
        The solver found a Gram matrix that is positive semi-definite up to an error of 3.297473722026212e-09
        All the primal scalar constraints are verified up to an error of 1.64989273354621e-08
(PEPit) Dual feasibility check:
        The solver found a residual matrix that is positive semi-definite
        All the dual scalar values associated with inequality constraints are nonnegative
(PEPit) The worst-case guarantee proof is perfectly reconstituted up to an error of 3.4855088898444464e-07
(PEPit) Final upper bound (dual): 0.9287707057295752 and lower bound (primal example): 0.928770707839351
(PEPit) Duality gap: absolute: -2.109775798508906e-09 and relative: -2.2715787445719413e-09
*** Example file: worst-case performance of the Douglas Rachford Splitting***
        PEPit guarantee:         ||w_(t+1)^0 - w_(t+1)^1||^2 <= 0.928771 ||w_(t)^0 - w_(t)^1||^2
        Theoretical guarantee:   ||w_(t+1)^0 - w_(t+1)^1||^2 <= 0.928771 ||w_(t)^0 - w_(t)^1||^2
5.5. Three operator splitting
- PEPit.examples.monotone_inclusions_variational_inequalities.wc_three_operator_splitting(L, mu, beta, alpha, theta, wrapper='cvxpy', solver=None, verbose=1)[source]
Consider the monotone inclusion problem
\[\mathrm{Find}\, x:\, 0\in Ax + Bx + Cx,\]where \(A\) is maximally monotone, \(B\) is \(\beta\)-cocoercive and \(C\) is the gradient of some \(L\)-smooth \(\mu\)-strongly convex function. We denote by \(J_{\alpha A}\) and \(J_{\alpha B}\) the resolvents of respectively \(\alpha A\) and \(\alpha B\).
This code computes a worst-case guarantee for the three operator splitting (TOS). That is, given two initial points \(w^{(0)}_t\) and \(w^{(1)}_t\), this code computes the smallest possible \(\tau(L, \mu, \beta, \alpha, \theta)\) (a.k.a. “contraction factor”) such that the guarantee
\[\|w^{(0)}_{t+1} - w^{(1)}_{t+1}\|^2 \leqslant \tau(L, \mu, \beta, \alpha, \theta) \|w^{(0)}_{t} - w^{(1)}_{t}\|^2,\]is valid, where \(w^{(0)}_{t+1}\) and \(w^{(1)}_{t+1}\) are obtained after one iteration of TOS from respectively \(w^{(0)}_{t}\) and \(w^{(1)}_{t}\).
In short, for given values of \(L\), \(\mu\), \(\beta\), \(\alpha\) and \(\theta\), the contraction factor \(\tau(L, \mu, \beta, \alpha, \theta)\) is computed as the worst-case value of \(\|w^{(0)}_{t+1} - w^{(1)}_{t+1}\|^2\) when \(\|w^{(0)}_{t} - w^{(1)}_{t}\|^2 \leqslant 1\).
Algorithm: One iteration of the algorithm is described in [1]. For \(t \in \{ 0, \dots, n-1\}\),
\begin{eqnarray} x_{t+1} & = & J_{\alpha B} (w_t),\\ y_{t+1} & = & J_{\alpha A} (2x_{t+1} - w_t - \alpha C x_{t+1}),\\ w_{t+1} & = & w_t - \theta (x_{t+1} - y_{t+1}). \end{eqnarray}
References: The TOS was proposed in [1]; the analysis of such operator splitting methods using PEPs was proposed in [2].
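As an illustration (ours, not from [1, 2]), one TOS iteration can be run on a toy instance, with the step-size \(\alpha\) applied to \(C\) as in the Davis–Yin formulation: \(Ax = Mx\) with \(M\) skew-symmetric (maximally monotone), \(Bx = x/\beta\) (\(\beta\)-cocoercive), and \(C = \nabla f\) for a quadratic \(f\) with Hessian \(Q\) (\(\mu\)-strongly convex, \(L\)-smooth). The ratio must stay below the worst case PEPit computes for these parameters (about 0.779689 in the example below).

```python
import numpy as np

L, mu, beta, alpha, theta = 1.0, 0.1, 1.0, 0.9, 1.3
# Toy instance: A x = M x (skew-symmetric, maximally monotone),
# B x = x / beta (beta-cocoercive), C x = Q x with Q = diag(mu, L).
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
Q = np.diag([mu, L])
I = np.eye(2)

def tos_step(w):
    x = w / (1 + alpha / beta)   # x = J_{alpha B}(w) for B = I/beta
    y = np.linalg.solve(I + alpha * M, 2 * x - w - alpha * (Q @ x))
    return w - theta * (x - y)

w0, w1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
ratio = (np.linalg.norm(tos_step(w0) - tos_step(w1)) ** 2
         / np.linalg.norm(w0 - w1) ** 2)
assert ratio <= 0.779690   # PEPit worst case for these parameters
```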
- Parameters:
L (float) – smoothness constant of C.
mu (float) – strong convexity of C.
beta (float) – cocoercivity of B.
alpha (float) – step-size (in the resolvents).
theta (float) – overrelaxation parameter.
wrapper (str) – the name of the wrapper to be used.
solver (str) – the name of the solver the wrapper should use.
verbose (int) –
level of information details to print.
-1: No verbose at all.
0: This example’s output.
1: This example’s output + PEPit information.
2: This example’s output + PEPit information + solver details.
- Returns:
pepit_tau (float) – worst-case value.
theoretical_tau (None) – no theoretical value.
Example
>>> pepit_tau, theoretical_tau = wc_three_operator_splitting(L=1, mu=.1, beta=1, alpha=.9, theta=1.3, wrapper="cvxpy", solver=None, verbose=1)
(PEPit) Setting up the problem: size of the Gram matrix: 8x8
(PEPit) Setting up the problem: performance measure is the minimum of 1 element(s)
(PEPit) Setting up the problem: Adding initial conditions and general constraints ...
(PEPit) Setting up the problem: initial conditions and general constraints (1 constraint(s) added)
(PEPit) Setting up the problem: interpolation conditions for 3 function(s)
        Function 1 : Adding 1 scalar constraint(s) ...
        Function 1 : 1 scalar constraint(s) added
        Function 2 : Adding 1 scalar constraint(s) ...
        Function 2 : 1 scalar constraint(s) added
        Function 3 : Adding 2 scalar constraint(s) ...
        Function 3 : 2 scalar constraint(s) added
(PEPit) Setting up the problem: additional constraints for 0 function(s)
(PEPit) Compiling SDP
(PEPit) Calling SDP solver
(PEPit) Solver status: optimal (wrapper:cvxpy, solver: MOSEK); optimal value: 0.7796890707911295
(PEPit) Primal feasibility check:
        The solver found a Gram matrix that is positive semi-definite up to an error of 1.062458310263129e-08
        All the primal scalar constraints are verified up to an error of 4.036799094997434e-08
(PEPit) Dual feasibility check:
        The solver found a residual matrix that is positive semi-definite
        All the dual scalar values associated with inequality constraints are nonnegative
(PEPit) The worst-case guarantee proof is perfectly reconstituted up to an error of 1.0385673195889567e-06
(PEPit) Final upper bound (dual): 0.7796890635199223 and lower bound (primal example): 0.7796890707911295
(PEPit) Duality gap: absolute: -7.27120719190566e-09 and relative: -9.325778011134313e-09
*** Example file: worst-case contraction factor of the Three Operator Splitting ***
        PEPit guarantee:         ||w_(t+1)^0 - w_(t+1)^1||^2 <= 0.779689 ||w_(t)^0 - w_(t)^1||^2
5.6. Optimistic gradient
- PEPit.examples.monotone_inclusions_variational_inequalities.wc_optimistic_gradient(n, gamma, L, wrapper='cvxpy', solver=None, verbose=1)[source]
Consider the monotone variational inequality
\[\mathrm{Find}\, x_\star \in C\text{ such that } \left<F(x_\star);x-x_\star\right> \geqslant 0\,\,\forall x\in C,\]where \(C\) is a closed convex set and \(F\) is maximally monotone and Lipschitz.
This code computes a worst-case guarantee for the optimistic gradient method. That is, it computes the smallest possible \(\tau(n)\) such that the guarantee
\[\|\tilde{x}_n - \tilde{x}_{n-1}\|^2 \leqslant \tau(n) \|x_0 - x_\star\|^2,\]is valid, where \(\tilde{x}_n\) is the output of the optimistic gradient method and \(x_0\) its starting point.
Algorithm: The optimistic gradient method is described as follows, for \(t \in \{ 0, \dots, n-1\}\),
\begin{eqnarray} \tilde{x}_{t} & = & \mathrm{Proj}_{C} [x_t-\gamma F(\tilde{x}_{t-1})], \\ {x}_{t+1} & = & \tilde{x}_t + \gamma (F(\tilde{x}_{t-1}) - F(\tilde{x}_t)), \end{eqnarray}
where \(\gamma\) is some step-size.
Theoretical guarantee: The method and many variants of it are discussed in [1] and a PEP formulation suggesting a worst-case guarantee in \(O(1/n)\) can be found in [2, Appendix D].
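As an illustration (ours, not from [1, 2]), the method can be simulated on a toy instance with \(C = \mathbb{R}^2\) (so the projection is the identity) and \(F(x) = Mx\) with \(M\) skew-symmetric (monotone and \(L\)-Lipschitz); \(x_\star = 0\). We assume the convention \(\tilde{x}_{-1} = x_0\) for the first step, and measure the residual on the last two iterates \(x_n, x_{n-1}\), as in the example output below.

```python
import numpy as np

n, gamma, L = 5, 0.25, 1.0
# Toy instance: C = R^2 (projection is the identity) and F(x) = M x with
# M skew-symmetric, which is monotone and L-Lipschitz; x_* = 0.
M = L * np.array([[0.0, 1.0], [-1.0, 0.0]])

def F(v):
    return M @ v

def proj(v):
    return v   # Proj_C for C = R^2

xs = [np.array([1.0, 0.0])]   # x_0
x_tilde = xs[0]               # convention: x~_{-1} = x_0 (assumed)
for t in range(n):
    x_tilde_new = proj(xs[-1] - gamma * F(x_tilde))            # x~_t
    xs.append(x_tilde_new + gamma * (F(x_tilde) - F(x_tilde_new)))  # x_{t+1}
    x_tilde = x_tilde_new

# Any instance stays below the worst case computed by PEPit for these
# parameters (~0.0663141 in the example below).
residual = np.linalg.norm(xs[n] - xs[n - 1]) ** 2
assert residual <= 0.0663142 * np.linalg.norm(xs[0]) ** 2
```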
References:
- Parameters:
n (int) – number of iterations.
gamma (float) – the step-size.
L (float) – the Lipschitz constant.
wrapper (str) – the name of the wrapper to be used.
solver (str) – the name of the solver the wrapper should use.
verbose (int) –
level of information details to print.
-1: No verbose at all.
0: This example’s output.
1: This example’s output + PEPit information.
2: This example’s output + PEPit information + solver details.
- Returns:
pepit_tau (float) – worst-case value.
theoretical_tau (None) – no theoretical bound.
Example
>>> pepit_tau, theoretical_tau = wc_optimistic_gradient(n=5, gamma=1 / 4, L=1, wrapper="cvxpy", solver=None, verbose=1)
(PEPit) Setting up the problem: size of the Gram matrix: 15x15
(PEPit) Setting up the problem: performance measure is the minimum of 1 element(s)
(PEPit) Setting up the problem: Adding initial conditions and general constraints ...
(PEPit) Setting up the problem: initial conditions and general constraints (1 constraint(s) added)
(PEPit) Setting up the problem: interpolation conditions for 2 function(s)
        Function 1 : Adding 49 scalar constraint(s) ...
        Function 1 : 49 scalar constraint(s) added
        Function 2 : Adding 42 scalar constraint(s) ...
        Function 2 : 42 scalar constraint(s) added
(PEPit) Setting up the problem: additional constraints for 0 function(s)
(PEPit) Compiling SDP
(PEPit) Calling SDP solver
(PEPit) Solver status: optimal (wrapper:cvxpy, solver: MOSEK); optimal value: 0.06631412698565144
(PEPit) Primal feasibility check:
        The solver found a Gram matrix that is positive semi-definite
        All the primal scalar constraints are verified up to an error of 2.881525884221303e-09
(PEPit) Dual feasibility check:
        The solver found a residual matrix that is positive semi-definite
        All the dual scalar values associated with inequality constraints are nonnegative up to an error of 2.0104852598258245e-08
(PEPit) The worst-case guarantee proof is perfectly reconstituted up to an error of 9.484355451749491e-08
(PEPit) Final upper bound (dual): 0.06631413197861648 and lower bound (primal example): 0.06631412698565144
(PEPit) Duality gap: absolute: 4.992965041417108e-09 and relative: 7.529263021885893e-08
*** Example file: worst-case performance of the Optimistic Gradient Method***
        PEPit guarantee:         ||x(n) - x(n-1)||^2 <= 0.0663141 ||x0 - xs||^2
5.7. Past extragradient
- PEPit.examples.monotone_inclusions_variational_inequalities.wc_past_extragradient(n, gamma, L, wrapper='cvxpy', solver=None, verbose=1)[source]
Consider the monotone variational inequality
\[\mathrm{Find}\, x_\star \in C\text{ such that } \left<F(x_\star);x-x_\star\right> \geqslant 0\,\,\forall x\in C,\]where \(C\) is a closed convex set and \(F\) is maximally monotone and Lipschitz.
This code computes a worst-case guarantee for the past extragradient method. That is, it computes the smallest possible \(\tau(n)\) such that the guarantee
\[\|x_n - x_{n-1}\|^2 \leqslant \tau(n) \|x_0 - x_\star\|^2,\]is valid, where \(x_n\) is the output of the past extragradient method and \(x_0\) its starting point.
Algorithm: The past extragradient method is described as follows, for \(t \in \{ 0, \dots, n-1\}\),
\begin{eqnarray} \tilde{x}_{t} & = & \mathrm{Proj}_{C} [x_t-\gamma F(\tilde{x}_{t-1})], \\ {x}_{t+1} & = & \mathrm{Proj}_{C} [x_t-\gamma F(\tilde{x}_{t})], \end{eqnarray}
where \(\gamma\) is some step-size.
Theoretical guarantee: The method and many variants of it are discussed in [1]. A worst-case guarantee in \(O(1/n)\) can be found in [2, 3].
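As an illustration (ours, not from [1, 2, 3]), the method can be simulated on a toy constrained instance: \(C\) the closed unit ball and \(F(x) = Mx\) with \(M\) skew-symmetric (monotone, \(L\)-Lipschitz); \(x_\star = 0\) solves the VI since \(F(0) = 0\). We assume the convention \(\tilde{x}_{-1} = x_0\) for the first step, and measure the residual on the last two iterates \(x_n, x_{n-1}\), as in the example output below.

```python
import numpy as np

n, gamma, L = 5, 0.25, 1.0
# Toy instance: C is the closed unit ball and F(x) = M x with M skew-symmetric
# (monotone, L-Lipschitz); x_* = 0 solves the VI since F(0) = 0.
M = L * np.array([[0.0, 1.0], [-1.0, 0.0]])

def F(v):
    return M @ v

def proj(v):
    return v / max(1.0, np.linalg.norm(v))   # projection onto the unit ball

xs = [np.array([0.5, 0.0])]   # x_0
x_tilde = xs[0]               # convention: x~_{-1} = x_0 (assumed)
for t in range(n):
    x_tilde = proj(xs[-1] - gamma * F(x_tilde))   # x~_t (uses F(x~_{t-1}))
    xs.append(proj(xs[-1] - gamma * F(x_tilde)))  # x_{t+1} (uses F(x~_t))

# Any instance stays below the worst case computed by PEPit for these
# parameters (~0.0602664 in the example below).
residual = np.linalg.norm(xs[n] - xs[n - 1]) ** 2
assert residual <= 0.0602664 * np.linalg.norm(xs[0]) ** 2
```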
References:
- Parameters:
n (int) – number of iterations.
gamma (float) – the step-size.
L (float) – the Lipschitz constant.
wrapper (str) – the name of the wrapper to be used.
solver (str) – the name of the solver the wrapper should use.
verbose (int) –
level of information details to print.
-1: No verbose at all.
0: This example’s output.
1: This example’s output + PEPit information.
2: This example’s output + PEPit information + solver details.
- Returns:
pepit_tau (float) – worst-case value.
theoretical_tau (None) – no theoretical bound.
Example
>>> pepit_tau, theoretical_tau = wc_past_extragradient(n=5, gamma=1 / 4, L=1, wrapper="cvxpy", solver=None, verbose=1)
(PEPit) Setting up the problem: size of the Gram matrix: 20x20
(PEPit) Setting up the problem: performance measure is the minimum of 1 element(s)
(PEPit) Setting up the problem: Adding initial conditions and general constraints ...
(PEPit) Setting up the problem: initial conditions and general constraints (1 constraint(s) added)
(PEPit) Setting up the problem: interpolation conditions for 2 function(s)
        Function 1 : Adding 144 scalar constraint(s) ...
        Function 1 : 144 scalar constraint(s) added
        Function 2 : Adding 42 scalar constraint(s) ...
        Function 2 : 42 scalar constraint(s) added
(PEPit) Setting up the problem: additional constraints for 0 function(s)
(PEPit) Compiling SDP
(PEPit) Calling SDP solver
(PEPit) Solver status: optimal (wrapper:cvxpy, solver: MOSEK); optimal value: 0.06026638371348259
(PEPit) Primal feasibility check:
        The solver found a Gram matrix that is positive semi-definite up to an error of 1.0034351046829282e-09
        All the primal scalar constraints are verified up to an error of 8.834041276273297e-10
(PEPit) Dual feasibility check:
        The solver found a residual matrix that is positive semi-definite
        All the dual scalar values associated with inequality constraints are nonnegative up to an error of 6.653590985767508e-09
(PEPit) The worst-case guarantee proof is perfectly reconstituted up to an error of 3.799251158012433e-08
(PEPit) Final upper bound (dual): 0.06026638541530024 and lower bound (primal example): 0.06026638371348259
(PEPit) Duality gap: absolute: 1.7018176451388811e-09 and relative: 2.823825722196363e-08
*** Example file: worst-case performance of the Past Extragradient Method***
        PEPit guarantee:         ||x(n) - x(n-1)||^2 <= 0.0602664 ||x0 - xs||^2