Functions

Convex

class PEPit.functions.ConvexFunction(is_leaf=True, decomposition_dict=None, reuse_gradient=False)[source]

Bases: Function

The ConvexFunction class overwrites the add_class_constraints method of Function, implementing the interpolation constraints of the class of convex, closed and proper (CCP) functions (i.e., convex functions whose epigraphs are non-empty closed sets).

General CCP functions are not characterized by any parameter, hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import ConvexFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=ConvexFunction)

Parameters
  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as a linear combination of leaf Function objects.

  • decomposition_dict (dict) – Decomposition of self as a linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned whenever it is requested several times at the same Point. If False, a new subgradient is computed each time it is requested.

add_class_constraints()[source]

Formulates the list of interpolation constraints for self (CCP function).
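For reference, these are the classical convex interpolation conditions: a list of triplets \((x_i, g_i, f_i)\) (points, subgradients, and function values) is interpolable by a CCP function if and only if, for all pairs of indices \(i\) and \(j\),

\[f_i \geqslant f_j + \langle g_j, x_i - x_j \rangle.\]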

Strongly convex

class PEPit.functions.StronglyConvexFunction(mu, is_leaf=True, decomposition_dict=None, reuse_gradient=False)[source]

Bases: Function

The StronglyConvexFunction class overwrites the add_class_constraints method of Function, implementing the interpolation constraints of the class of strongly convex closed proper functions (strongly convex functions whose epigraphs are non-empty closed sets).

Attributes

mu (float) – strong convexity parameter

Strongly convex functions are characterized by the strong convexity parameter \(\mu\), hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import StronglyConvexFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=StronglyConvexFunction, mu=.1)

References

[1] A. Taylor, J. Hendrickx, F. Glineur (2017). Smooth strongly convex interpolation and exact worst-case performance of first-order methods. Mathematical Programming, 161(1-2), 307-345.

Parameters
  • mu (float) – The strong convexity parameter.

  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as a linear combination of leaf Function objects.

  • decomposition_dict (dict) – Decomposition of self as a linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned whenever it is requested several times at the same Point. If False, a new subgradient is computed each time it is requested.

add_class_constraints()[source]

Formulates the list of interpolation constraints for self (strongly convex closed proper function), see [1, Corollary 2].
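Explicitly, these constraints require, for all pairs of indices \(i\) and \(j\),

\[f_i \geqslant f_j + \langle g_j, x_i - x_j \rangle + \frac{\mu}{2} \|x_i - x_j\|^2,\]

that is, the convex interpolation conditions strengthened by a quadratic term.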

Smooth

class PEPit.functions.SmoothFunction(L=1.0, is_leaf=True, decomposition_dict=None, reuse_gradient=True)[source]

Bases: Function

The SmoothFunction class overwrites the add_class_constraints method of Function, implementing the interpolation constraints of the class of smooth (not necessarily convex) functions.

Attributes

L (float) – smoothness parameter

Smooth functions are characterized by the smoothness parameter L, hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import SmoothFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=SmoothFunction, L=1.)

References

[1] A. Taylor, J. Hendrickx, F. Glineur (2017). Exact worst-case performance of first-order methods for composite convex optimization. SIAM Journal on Optimization, 27(3):1283–1313.

Parameters
  • L (float) – The smoothness parameter.

  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as a linear combination of leaf Function objects.

  • decomposition_dict (dict) – Decomposition of self as a linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned whenever it is requested several times at the same Point. If False, a new subgradient is computed each time it is requested.

Note

Smooth functions are necessarily differentiable, hence reuse_gradient is set to True.

add_class_constraints()[source]

Formulates the list of interpolation constraints for self (smooth (not necessarily convex) function), see [1, Theorem 3.10].
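As an end-to-end illustration, the sketch below bounds the squared gradient norm after one gradient step on a smooth (possibly nonconvex) function. The step size \(\gamma = 1/L\) and the initial condition are illustrative choices, not part of this class; set_initial_point, set_initial_condition, set_performance_metric and solve are PEPit's standard workflow.

>>> from PEPit import PEP
>>> from PEPit.functions import SmoothFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=SmoothFunction, L=1.)
>>> xs = func.stationary_point()                                       # a stationary point of func
>>> x0 = problem.set_initial_point()
>>> problem.set_initial_condition(func.value(x0) - func.value(xs) <= 1)  # bounded initial functional gap
>>> x1 = x0 - 1. * func.gradient(x0)                                   # one gradient step, gamma = 1/L
>>> problem.set_performance_metric(func.gradient(x1) ** 2)             # worst-case squared gradient norm
>>> tau = problem.solve(verbose=0)                                     # requires an installed SDP solver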

Convex and smooth

class PEPit.functions.SmoothConvexFunction(L=1.0, is_leaf=True, decomposition_dict=None, reuse_gradient=True)[source]

Bases: SmoothStronglyConvexFunction

The SmoothConvexFunction class implements smooth convex functions as a particular case of SmoothStronglyConvexFunction, with strong convexity parameter \(\mu = 0\).

Attributes

L (float) – smoothness parameter

Smooth convex functions are characterized by the smoothness parameter L, hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import SmoothConvexFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=SmoothConvexFunction, L=1.)

Parameters
  • L (float) – The smoothness parameter.

  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as a linear combination of leaf Function objects.

  • decomposition_dict (dict) – Decomposition of self as a linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned whenever it is requested several times at the same Point. If False, a new subgradient is computed each time it is requested.

Note

Smooth convex functions are necessarily differentiable, hence reuse_gradient is set to True.
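As an end-to-end illustration, the sketch below bounds the functional gap after one gradient step. The step size \(\gamma = 1/L\) and the initial condition are illustrative choices, not part of this class.

>>> from PEPit import PEP
>>> from PEPit.functions import SmoothConvexFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=SmoothConvexFunction, L=1.)
>>> xs = func.stationary_point()                                   # a minimizer of func
>>> x0 = problem.set_initial_point()
>>> problem.set_initial_condition((x0 - xs) ** 2 <= 1)             # ||x0 - xs||^2 <= 1
>>> x1 = x0 - 1. * func.gradient(x0)                               # one gradient step, gamma = 1/L
>>> problem.set_performance_metric(func.value(x1) - func.value(xs))  # worst-case functional gap
>>> tau = problem.solve(verbose=0)                                 # requires an installed SDP solver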

Convex and quadratically upper bounded

class PEPit.functions.ConvexQGFunction(L=1, is_leaf=True, decomposition_dict=None, reuse_gradient=False)[source]

Bases: Function

The ConvexQGFunction class overwrites the add_class_constraints method of Function, implementing the interpolation constraints of the class of convex and quadratically upper bounded (\(\text{QG}^+\) [1]) functions, i.e., convex functions satisfying \(\forall x, f(x) - f_\star \leqslant \frac{L}{2} \|x-x_\star\|^2\).

Attributes

L (float) – The quadratic upper bound parameter

General quadratically upper bounded (\(\text{QG}^+\)) convex functions are characterized by the quadratic upper bound parameter L, hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import ConvexQGFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=ConvexQGFunction, L=1)

References

[1] B. Goujaud, A. Taylor, A. Dieuleveut (2022). Optimal first-order methods for convex functions with a quadratic upper bound.

Parameters
  • L (float) – The quadratic upper bound parameter.

  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as a linear combination of leaf Function objects.

  • decomposition_dict (dict) – Decomposition of self as a linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned whenever it is requested several times at the same Point. If False, a new subgradient is computed each time it is requested.

add_class_constraints()[source]

Formulates the list of interpolation constraints for self (quadratically upper bounded convex function); see [1, Theorem 2.6].
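Since \(\text{QG}^+\) convex functions need not be differentiable, the sketch below queries a subgradient through the oracle method; the unit step size and the initial condition are illustrative choices, not part of this class.

>>> from PEPit import PEP
>>> from PEPit.functions import ConvexQGFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=ConvexQGFunction, L=1)
>>> xs = func.stationary_point()                                   # a minimizer of func
>>> x0 = problem.set_initial_point()
>>> problem.set_initial_condition((x0 - xs) ** 2 <= 1)             # ||x0 - xs||^2 <= 1
>>> g0, f0 = func.oracle(x0)                                       # a subgradient and the value at x0
>>> x1 = x0 - 1. * g0                                              # one subgradient step, unit step size
>>> problem.set_performance_metric(func.value(x1) - func.value(xs))  # worst-case functional gap
>>> tau = problem.solve(verbose=0)                                 # requires an installed SDP solver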

Strongly convex and smooth

class PEPit.functions.SmoothStronglyConvexFunction(mu, L=1.0, is_leaf=True, decomposition_dict=None, reuse_gradient=True)[source]

Bases: Function

The SmoothStronglyConvexFunction class overwrites the add_class_constraints method of Function, implementing the interpolation constraints of the class of smooth strongly convex functions.

Attributes
  • mu (float) – strong convexity parameter

  • L (float) – smoothness parameter

Smooth strongly convex functions are characterized by parameters \(\mu\) and L, hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import SmoothStronglyConvexFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=SmoothStronglyConvexFunction, mu=.1, L=1.)

References

[1] A. Taylor, J. Hendrickx, F. Glineur (2017). Smooth strongly convex interpolation and exact worst-case performance of first-order methods. Mathematical Programming, 161(1-2), 307-345.

Parameters
  • mu (float) – The strong convexity parameter.

  • L (float) – The smoothness parameter.

  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as a linear combination of leaf Function objects.

  • decomposition_dict (dict) – Decomposition of self as a linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned whenever it is requested several times at the same Point. If False, a new subgradient is computed each time it is requested.

Note

Smooth strongly convex functions are necessarily differentiable, hence reuse_gradient is set to True.

add_class_constraints()[source]

Formulates the list of interpolation constraints for self (smooth strongly convex function); see [1, Theorem 4].
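Explicitly, the constraints of [1, Theorem 4] require, for all pairs of indices \(i\) and \(j\),

\[f_i \geqslant f_j + \langle g_j, x_i - x_j \rangle + \frac{1}{2L} \|g_i - g_j\|^2 + \frac{\mu}{2(1 - \mu/L)} \left\|x_i - x_j - \frac{1}{L}(g_i - g_j)\right\|^2.\]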

Convex and Lipschitz continuous

class PEPit.functions.ConvexLipschitzFunction(M=1.0, is_leaf=True, decomposition_dict=None, reuse_gradient=False)[source]

Bases: Function

The ConvexLipschitzFunction class overwrites the add_class_constraints method of Function, implementing the interpolation constraints of the class of convex closed proper (CCP) Lipschitz continuous functions.

Attributes

M (float) – Lipschitz parameter

CCP Lipschitz continuous functions are characterized by a parameter M, hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import ConvexLipschitzFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=ConvexLipschitzFunction, M=1.)

References

[1] A. Taylor, J. Hendrickx, F. Glineur (2017). Exact worst-case performance of first-order methods for composite convex optimization. SIAM Journal on Optimization, 27(3):1283–1313.

Parameters
  • M (float) – The Lipschitz continuity parameter of self.

  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as a linear combination of leaf Function objects.

  • decomposition_dict (dict) – Decomposition of self as a linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned whenever it is requested several times at the same Point. If False, a new subgradient is computed each time it is requested.

add_class_constraints()[source]

Formulates the list of interpolation constraints for self (CCP Lipschitz continuous function), see [1, Theorem 3.5].
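Explicitly, these constraints combine the convex interpolation conditions with a bound on the subgradient norms: for all pairs of indices \(i\) and \(j\),

\[f_i \geqslant f_j + \langle g_j, x_i - x_j \rangle, \qquad \|g_i\|^2 \leqslant M^2.\]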

Convex indicator

class PEPit.functions.ConvexIndicatorFunction(D=inf, is_leaf=True, decomposition_dict=None, reuse_gradient=False)[source]

Bases: Function

The ConvexIndicatorFunction class overwrites the add_class_constraints method of Function, implementing interpolation constraints for the class of closed convex indicator functions.

Attributes

D (float) – upper bound on the diameter of the feasible set, possibly set to np.inf

Convex indicator functions are characterized by a parameter D, hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import ConvexIndicatorFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=ConvexIndicatorFunction, D=1)

References

[1] A. Taylor, J. Hendrickx, F. Glineur (2017). Exact worst-case performance of first-order methods for composite convex optimization. SIAM Journal on Optimization, 27(3):1283–1313.

Parameters
  • D (float) – Upper bound on the diameter of the support of self, possibly np.inf.

  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as a linear combination of leaf Function objects.

  • decomposition_dict (dict) – Decomposition of self as a linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned whenever it is requested several times at the same Point. If False, a new subgradient is computed each time it is requested.

add_class_constraints()[source]

Formulates the list of interpolation constraints for self (closed convex indicator function), see [1, Theorem 3.6].
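As an illustration of typical use, the sketch below performs one projected gradient step: a proximal step on an indicator function amounts to a projection onto the underlying set. The smooth component f1 and the unit step size are illustrative assumptions of this sketch; proximal_step is PEPit's standard primitive.

>>> from PEPit import PEP
>>> from PEPit.functions import ConvexIndicatorFunction, SmoothConvexFunction
>>> from PEPit.primitive_steps import proximal_step
>>> problem = PEP()
>>> f1 = problem.declare_function(function_class=SmoothConvexFunction, L=1.)  # illustrative smooth part
>>> ind = problem.declare_function(function_class=ConvexIndicatorFunction, D=1)
>>> x0 = problem.set_initial_point()
>>> y0 = x0 - 1. * f1.gradient(x0)         # gradient step on the smooth part, step size 1/L
>>> x1, _, _ = proximal_step(y0, ind, 1.)  # proximal step on the indicator, i.e., a projection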

Convex support functions

class PEPit.functions.ConvexSupportFunction(M=inf, is_leaf=True, decomposition_dict=None, reuse_gradient=False)[source]

Bases: Function

The ConvexSupportFunction class overwrites the add_class_constraints method of Function, implementing interpolation constraints for the class of closed convex support functions.

Attributes

M (float) – upper bound on the Lipschitz constant

Convex support functions are characterized by a parameter M, hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import ConvexSupportFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=ConvexSupportFunction, M=1)

References

[1] A. Taylor, J. Hendrickx, F. Glineur (2017). Exact worst-case performance of first-order methods for composite convex optimization. SIAM Journal on Optimization, 27(3):1283–1313.

Parameters
  • M (float) – Upper bound on the Lipschitz constant of self.

  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as a linear combination of leaf Function objects.

  • decomposition_dict (dict) – Decomposition of self as a linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned whenever it is requested several times at the same Point. If False, a new subgradient is computed each time it is requested.

add_class_constraints()[source]

Formulates the list of interpolation constraints for self (closed convex support function), see [1, Corollary 3.7].
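For intuition: support functions are the Fenchel conjugates of convex indicator functions, and the constraints mirror this duality. As a sketch of their form (see [1, Corollary 3.7] for the exact statement), they enforce, for all pairs of indices \(i\) and \(j\),

\[f_i = \langle g_i, x_i \rangle, \qquad \langle g_j, x_i \rangle \leqslant f_i, \qquad \|g_i\|^2 \leqslant M^2.\]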

Restricted secant inequality and error bound

class PEPit.functions.RsiEbFunction(mu, L=1, is_leaf=True, decomposition_dict=None, reuse_gradient=False)[source]

Bases: Function

The RsiEbFunction class overwrites the add_class_constraints method of Function, implementing the interpolation constraints of the class of functions verifying the “lower” restricted secant inequality (\(\text{RSI}^-\)) and the “upper” error bound (\(\text{EB}^+\)).

Attributes
  • mu (float) – Restricted secant inequality parameter

  • L (float) – Error bound parameter

\(\text{RSI}^-\) and \(\text{EB}^+\) functions are characterized by parameters \(\mu\) and L, hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import RsiEbFunction
>>> problem = PEP()
>>> h = problem.declare_function(function_class=RsiEbFunction, mu=.1, L=1)

References

A definition of the class of \(\text{RSI}^-\) and \(\text{EB}^+\) functions can be found in [1].

[1] C. Guille-Escuret, B. Goujaud, A. Ibrahim, I. Mitliagkas (2022). Gradient Descent Is Optimal Under Lower Restricted Secant Inequality And Upper Error Bound. arXiv 2203.00342.

Parameters
  • mu (float) – The restricted secant inequality parameter.

  • L (float) – The upper error bound parameter.

  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as a linear combination of leaf Function objects.

  • decomposition_dict (dict) – Decomposition of self as a linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned whenever it is requested several times at the same Point. If False, a new subgradient is computed each time it is requested.

add_class_constraints()[source]

Formulates the list of necessary conditions for interpolation of self, see [1, Theorem 1].
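For reference, following [1], a function \(f\) with minimizer \(x_\star\) satisfies \(\text{RSI}^-\) with parameter \(\mu\) and \(\text{EB}^+\) with parameter L when, for all \(x\) and all subgradients \(g \in \partial f(x)\),

\[\langle g, x - x_\star \rangle \geqslant \mu \|x - x_\star\|^2, \qquad \|g\| \leqslant L \|x - x_\star\|.\]

Since these conditions are necessary but not known to be sufficient for interpolation, the worst-case bounds computed with this class are valid upper bounds, possibly non-tight.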