Functions

Convex

class PEPit.functions.ConvexFunction(is_leaf=True, decomposition_dict=None, reuse_gradient=False, name=None)[source]

Bases: Function

The ConvexFunction class overwrites the add_class_constraints method of Function, implementing the interpolation constraints of the class of convex, closed and proper (CCP) functions (i.e., convex functions whose epigraphs are non-empty closed sets).

General CCP functions are not characterized by any parameter, hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import ConvexFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=ConvexFunction)
Parameters:
  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as linear combination of leaf.

  • decomposition_dict (dict) – Decomposition of self as linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned when one requires it several times on the same Point. If False, a new subgradient is computed each time one is required.

  • name (str) – name of the object. None by default. Can be updated later through the method set_name.

add_class_constraints()[source]

Add the convexity constraints.

static set_convexity_constraint_i_j(xi, gi, fi, xj, gj, fj)[source]

Formulates the list of interpolation constraints for self (CCP function).
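Concretely, the pairwise constraint is the standard convex interpolation inequality \(f_i \geqslant f_j + \langle g_j, x_i - x_j \rangle\) between any two triplets \((x_i, g_i, f_i)\) and \((x_j, g_j, f_j)\). A minimal plain-Python sketch of this inequality, independent of PEPit (the library's internal sign and indexing conventions may differ), using \(f(x) = |x|\):

```python
# Convex interpolation inequality f_i >= f_j + <g_j, x_i - x_j>,
# checked numerically for f(x) = |x| with one valid subgradient per point.
def f(x):
    return abs(x)

def subgrad(x):
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)  # element of sign(x)

points = [-2.0, -0.5, 0.0, 1.0, 3.0]
triplets = [(x, subgrad(x), f(x)) for x in points]

for xi, gi, fi in triplets:
    for xj, gj, fj in triplets:
        assert fi >= fj + gj * (xi - xj) - 1e-12
```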

Strongly convex

class PEPit.functions.StronglyConvexFunction(mu, is_leaf=True, decomposition_dict=None, reuse_gradient=False, name=None)[source]

Bases: Function

The StronglyConvexFunction class overwrites the add_class_constraints method of Function, implementing the interpolation constraints of the class of strongly convex closed proper functions (strongly convex functions whose epigraphs are non-empty closed sets).

Attributes:

mu (float) – strong convexity parameter

Strongly convex functions are characterized by the strong convexity parameter \(\mu\), hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import StronglyConvexFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=StronglyConvexFunction, mu=.1)

References:

[1] A. Taylor, J. Hendrickx, F. Glineur (2017). Smooth strongly convex interpolation and exact worst-case performance of first-order methods. Mathematical Programming, 161(1-2), 307-345.

Parameters:
  • mu (float) – The strong convexity parameter.

  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as linear combination of leaf.

  • decomposition_dict (dict) – Decomposition of self as linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned when one requires it several times on the same Point. If False, a new subgradient is computed each time one is required.

  • name (str) – name of the object. None by default. Can be updated later through the method set_name.

add_class_constraints()[source]

Formulates the list of interpolation constraints for self (strongly convex closed proper function), see [1, Corollary 2].

set_strong_convexity_constraint_i_j(xi, gi, fi, xj, gj, fj)[source]

Set strong convexity interpolation constraints.
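The \(\mu\)-strongly convex interpolation inequality adds a quadratic term to the convex one: \(f_i \geqslant f_j + \langle g_j, x_i - x_j \rangle + \frac{\mu}{2}\|x_i - x_j\|^2\). A plain-Python sketch, independent of PEPit, using \(f(x) = x^2\) (strongly convex with \(\mu = 2\)); for this quadratic the inequality holds with equality:

```python
# Strong convexity interpolation:
# f_i >= f_j + <g_j, x_i - x_j> + mu/2 * |x_i - x_j|^2,
# checked for f(x) = x^2 with mu = 2 and gradient g(x) = 2x.
mu = 2.0
f = lambda x: x * x
g = lambda x: 2.0 * x

points = [-3.0, -1.0, 0.0, 0.5, 2.0]
for xi in points:
    for xj in points:
        rhs = f(xj) + g(xj) * (xi - xj) + 0.5 * mu * (xi - xj) ** 2
        assert f(xi) >= rhs - 1e-12
```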

Smooth

class PEPit.functions.SmoothFunction(L, is_leaf=True, decomposition_dict=None, reuse_gradient=True, name=None)[source]

Bases: Function

The SmoothFunction class overwrites the add_class_constraints method of Function, implementing the interpolation constraints of the class of smooth (not necessarily convex) functions.

Attributes:

L (float) – smoothness parameter

Smooth functions are characterized by the smoothness parameter L, hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import SmoothFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=SmoothFunction, L=1.)

References:

[1] A. Taylor, J. Hendrickx, F. Glineur (2017). Exact worst-case performance of first-order methods for composite convex optimization. SIAM Journal on Optimization, 27(3):1283–1313.

Parameters:
  • L (float) – The smoothness parameter.

  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as linear combination of leaf.

  • decomposition_dict (dict) – Decomposition of self as linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned when one requires it several times on the same Point. If False, a new subgradient is computed each time one is required.

  • name (str) – name of the object. None by default. Can be updated later through the method set_name.

Note

Smooth functions are necessarily differentiable, hence reuse_gradient is set to True.

add_class_constraints()[source]

Formulates the list of interpolation constraints for self (smooth (not necessarily convex) function), see [1, Theorem 3.10].

set_smoothness_i_j(xi, gi, fi, xj, gj, fj)[source]

Set smoothness interpolation constraints.
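A necessary consequence of \(L\)-smoothness is the two-sided quadratic bound ("descent lemma") \(|f(y) - f(x) - \langle \nabla f(x), y - x\rangle| \leqslant \frac{L}{2}\|y - x\|^2\); the full interpolation conditions of [1, Theorem 3.10] are tighter, but this weaker bound already illustrates the class. A plain-Python sketch on the nonconvex smooth function \(f(x) = \sin(x)\), which is 1-smooth:

```python
import math

# Descent-lemma bound implied by L-smoothness:
# |f(y) - f(x) - f'(x)(y - x)| <= L/2 * (y - x)^2,
# checked for the nonconvex 1-smooth function f(x) = sin(x).
L = 1.0
f, df = math.sin, math.cos

points = [-2.0, -0.7, 0.0, 1.3, 2.5]
for x in points:
    for y in points:
        assert abs(f(y) - f(x) - df(x) * (y - x)) <= 0.5 * L * (y - x) ** 2 + 1e-12
```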

Convex and smooth

class PEPit.functions.SmoothConvexFunction(L, is_leaf=True, decomposition_dict=None, reuse_gradient=True, name=None)[source]

Bases: Function

The SmoothConvexFunction class overwrites the add_class_constraints method of Function, by implementing interpolation constraints of the class of smooth convex functions.

Attributes:

L (float) – smoothness parameter

Smooth convex functions are characterized by the smoothness parameter L, hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import SmoothConvexFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=SmoothConvexFunction, L=1.)

References:

[1] A. Taylor, J. Hendrickx, F. Glineur (2017). Smooth strongly convex interpolation and exact worst-case performance of first-order methods. Mathematical Programming, 161(1-2), 307-345.

Parameters:
  • L (float) – The smoothness parameter.

  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as linear combination of leaf.

  • decomposition_dict (dict) – Decomposition of self as linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned when one requires it several times on the same Point. If False, a new subgradient is computed each time one is required.

  • name (str) – name of the object. None by default. Can be updated later through the method set_name.

Note

Smooth convex functions are necessarily differentiable, hence reuse_gradient is set to True.

add_class_constraints()[source]

Formulates the list of interpolation constraints for self (smooth convex function); see [1, Theorem 4].

set_smoothness_convexity_constraint_i_j(xi, gi, fi, xj, gj, fj)[source]

Formulates the list of interpolation constraints for self (smooth convex function).
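In the smooth convex case the interpolation inequality reads \(f_i \geqslant f_j + \langle g_j, x_i - x_j \rangle + \frac{1}{2L}\|g_i - g_j\|^2\). A plain-Python sketch, independent of PEPit, using \(f(x) = x^2/2\) (smooth convex with \(L = 1\)); a quadratic with curvature exactly \(L\) attains the inequality with equality:

```python
# Smooth convex interpolation:
# f_i >= f_j + <g_j, x_i - x_j> + 1/(2L) * |g_i - g_j|^2,
# checked for f(x) = x^2/2 with L = 1 and g(x) = x (equality case).
L = 1.0
f = lambda x: 0.5 * x * x
g = lambda x: x

points = [-2.0, -0.5, 0.0, 1.0, 3.0]
for xi in points:
    for xj in points:
        gap = f(xi) - f(xj) - g(xj) * (xi - xj) - (g(xi) - g(xj)) ** 2 / (2 * L)
        assert abs(gap) <= 1e-12  # equality for this quadratic
```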

Convex and quadratically upper bounded

class PEPit.functions.ConvexQGFunction(L, is_leaf=True, decomposition_dict=None, reuse_gradient=False, name=None)[source]

Bases: Function

The ConvexQGFunction class overwrites the add_class_constraints method of Function, implementing the interpolation constraints of the class of convex and quadratically upper bounded functions (\(\text{QG}^+\) [1]), i.e., convex functions satisfying \(\forall x, f(x) - f_\star \leqslant \frac{L}{2} \|x-x_\star\|^2\).

Attributes:

L (float) – The quadratic upper bound parameter

General quadratically upper bounded (\(\text{QG}^+\)) convex functions are characterized by the quadratic upper bound parameter L, hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import ConvexQGFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=ConvexQGFunction, L=1)

References:

[1] B. Goujaud, A. Taylor, A. Dieuleveut (2022). Optimal first-order methods for convex functions with a quadratic upper bound.

Parameters:
  • L (float) – The quadratic upper bound parameter.

  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as linear combination of leaf.

  • decomposition_dict (dict) – Decomposition of self as linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned when one requires it several times on the same Point. If False, a new subgradient is computed each time one is required.

  • name (str) – name of the object. None by default. Can be updated later through the method set_name.

add_class_constraints()[source]

Formulates the list of interpolation constraints for self (quadratically upper bounded convex function); see [1, Theorem 2.6].

static set_convexity_constraint_i_j(xi, gi, fi, xj, gj, fj)[source]

Formulates the list of interpolation constraints for self (CCP function).

set_qg_convexity_constraint_i_j(xi, gi, fi, xj, gj, fj)[source]

Formulates the list of interpolation constraints for self (qg convex function).
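The \(\text{QG}^+\) bound itself is easy to check numerically. A plain-Python sketch, independent of PEPit, using the Huber function, which is convex, satisfies \(\text{QG}^+\) with \(L = 1\) around \(x_\star = 0\), and is not strongly convex, so it is a genuine member of this class:

```python
# QG+ condition: f(x) - f(x_star) <= L/2 * |x - x_star|^2,
# checked for the Huber function (x_star = 0, f_star = 0, L = 1).
def huber(x):
    return 0.5 * x * x if abs(x) <= 1 else abs(x) - 0.5

L, x_star, f_star = 1.0, 0.0, 0.0
for x in [-5.0, -1.0, -0.3, 0.0, 0.7, 2.0, 10.0]:
    assert huber(x) - f_star <= 0.5 * L * (x - x_star) ** 2 + 1e-12
```

For \(|x| > 1\) the gap is \((|x|-1)^2/2 \geqslant 0\), and on \([-1, 1]\) the bound holds with equality.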

Strongly convex and smooth

class PEPit.functions.SmoothStronglyConvexFunction(mu, L, is_leaf=True, decomposition_dict=None, reuse_gradient=True, name=None)[source]

Bases: Function

The SmoothStronglyConvexFunction class overwrites the add_class_constraints method of Function, by implementing interpolation constraints of the class of smooth strongly convex functions.

Attributes:
  • mu (float) – strong convexity parameter

  • L (float) – smoothness parameter

Smooth strongly convex functions are characterized by parameters \(\mu\) and \(L\), hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import SmoothStronglyConvexFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=SmoothStronglyConvexFunction, mu=.1, L=1.)

References:

[1] A. Taylor, J. Hendrickx, F. Glineur (2017). Smooth strongly convex interpolation and exact worst-case performance of first-order methods. Mathematical Programming, 161(1-2), 307-345.

Parameters:
  • mu (float) – The strong convexity parameter.

  • L (float) – The smoothness parameter.

  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as linear combination of leaf.

  • decomposition_dict (dict) – Decomposition of self as linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned when one requires it several times on the same Point. If False, a new subgradient is computed each time one is required.

  • name (str) – name of the object. None by default. Can be updated later through the method set_name.

Note

Smooth strongly convex functions are necessarily differentiable, hence reuse_gradient is set to True.

add_class_constraints()[source]

Formulates the list of interpolation constraints for self (smooth strongly convex function); see [1, Theorem 4].

set_smoothness_strong_convexity_constraint_i_j(xi, gi, fi, xj, gj, fj)[source]

Formulates the list of interpolation constraints for self (smooth strongly convex function).

Convex and Lipschitz continuous

class PEPit.functions.ConvexLipschitzFunction(M, is_leaf=True, decomposition_dict=None, reuse_gradient=False, name=None)[source]

Bases: Function

The ConvexLipschitzFunction class overwrites the add_class_constraints method of Function, implementing the interpolation constraints of the class of convex closed proper (CCP) Lipschitz continuous functions.

Attributes:

M (float) – Lipschitz parameter

CCP Lipschitz continuous functions are characterized by a parameter M, hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import ConvexLipschitzFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=ConvexLipschitzFunction, M=1.)

References:

[1] A. Taylor, J. Hendrickx, F. Glineur (2017). Exact worst-case performance of first-order methods for composite convex optimization. SIAM Journal on Optimization, 27(3):1283–1313.

Parameters:
  • M (float) – The Lipschitz continuity parameter of self.

  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as linear combination of leaf.

  • decomposition_dict (dict) – Decomposition of self as linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned when one requires it several times on the same Point. If False, a new subgradient is computed each time one is required.

  • name (str) – name of the object. None by default. Can be updated later through the method set_name.

add_class_constraints()[source]

Formulates the list of interpolation constraints for self (CCP Lipschitz continuous function), see [1, Theorem 3.5].

static set_convexity_constraint_i_j(xi, gi, fi, xj, gj, fj)[source]

Formulates the list of interpolation constraints for self (CCP function).

set_lipschitz_continuity_constraint_i(xi, gi, fi)[source]

Formulates the Lipschitz continuity constraint by bounding the gradients.
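The class combines the convex interpolation inequality with the subgradient bound \(\|g_i\| \leqslant M\) implied by \(M\)-Lipschitz continuity. A plain-Python sketch, independent of PEPit, using \(f(x) = |x|\), which is CCP and 1-Lipschitz:

```python
# CCP M-Lipschitz function: convex interpolation plus |g_i| <= M,
# checked for f(x) = |x| (M = 1).
M = 1.0
f = lambda x: abs(x)
subgrad = lambda x: 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

points = [-2.0, -0.5, 0.0, 1.0, 3.0]
for x in points:
    assert abs(subgrad(x)) <= M  # Lipschitz continuity constraint
for xi in points:
    for xj in points:
        assert f(xi) >= f(xj) + subgrad(xj) * (xi - xj) - 1e-12  # convexity
```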

Smooth convex and Lipschitz continuous

class PEPit.functions.SmoothConvexLipschitzFunction(L, M, is_leaf=True, decomposition_dict=None, reuse_gradient=True, name=None)[source]

Bases: Function

The SmoothConvexLipschitzFunction class overwrites the add_class_constraints method of Function, by implementing interpolation constraints of the class of smooth convex Lipschitz continuous functions.

Attributes:
  • L (float) – smoothness parameter

  • M (float) – Lipschitz continuity parameter

Smooth convex Lipschitz continuous functions are characterized by the smoothness parameter L and the Lipschitz continuity parameter M, hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import SmoothConvexLipschitzFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=SmoothConvexLipschitzFunction, L=1., M=1.)

References:

[1] A. Taylor, J. Hendrickx, F. Glineur (2017). Exact worst-case performance of first-order methods for composite convex optimization. SIAM Journal on Optimization, 27(3):1283–1313.

Parameters:
  • L (float) – The smoothness parameter.

  • M (float) – The Lipschitz continuity parameter of self.

  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as linear combination of leaf.

  • decomposition_dict (dict) – Decomposition of self as linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned when one requires it several times on the same Point. If False, a new subgradient is computed each time one is required.

  • name (str) – name of the object. None by default. Can be updated later through the method set_name.

Note

Smooth convex Lipschitz continuous functions are necessarily differentiable, hence reuse_gradient is set to True.

add_class_constraints()[source]

Formulates the list of interpolation constraints for smooth convex functions; see [1, Theorem 4], and add the Lipschitz continuity interpolation constraints.

set_lipschitz_continuity_constraint_i(xi, gi, fi)[source]

Formulates the Lipschitz continuity constraint by bounding the gradients.

set_smoothness_convexity_constraint_i_j(xi, gi, fi, xj, gj, fj)[source]

Formulates the list of interpolation constraints for smooth convex functions.

Convex indicator

class PEPit.functions.ConvexIndicatorFunction(D=inf, is_leaf=True, decomposition_dict=None, reuse_gradient=False, name=None)[source]

Bases: Function

The ConvexIndicatorFunction class overwrites the add_class_constraints method of Function, implementing interpolation constraints for the class of closed convex indicator functions.

Attributes:

D (float) – upper bound on the diameter of the feasible set, possibly set to np.inf

Convex indicator functions are characterized by a parameter D, hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import ConvexIndicatorFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=ConvexIndicatorFunction, D=1)

References:

[1] A. Taylor, J. Hendrickx, F. Glineur (2017). Exact worst-case performance of first-order methods for composite convex optimization. SIAM Journal on Optimization, 27(3):1283–1313.

Parameters:
  • D (float) – Diameter of the support of self. Default value set to infinity.

  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as linear combination of leaf.

  • decomposition_dict (dict) – Decomposition of self as linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned when one requires it several times on the same Point. If False, a new subgradient is computed each time one is required.

  • name (str) – name of the object. None by default. Can be updated later through the method set_name.

add_class_constraints()[source]

Formulates the list of interpolation constraints for self (closed convex indicator function), see [1, Theorem 3.6].

static set_convexity_constraint_i_j(xi, gi, fi, xj, gj, fj)[source]

Formulates the list of interpolation constraints for self (CCP function).

set_diameter_constraint_i_j(xi, gi, fi, xj, gj, fj)[source]

Formulates the constraints bounding the diameter of the support of self.

static set_value_constraint_i(xi, gi, fi)[source]

Set the value of the function to 0 everywhere on the support.
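For the indicator of a convex set \(C\), interpolation requires \(f_i = 0\) at every point of \(C\), the normal-cone inequality \(\langle g_j, x_i - x_j \rangle \leqslant 0\) between pairs of points in \(C\), and the diameter bound \(\|x_i - x_j\| \leqslant D\). A plain-Python sketch, independent of PEPit, for \(C = [-1, 1]\) (so \(D = 2\)):

```python
# Indicator of C = [-1, 1]: f = 0 on C, subgradients lie in the normal cone,
# and all pairwise distances are bounded by the diameter D = 2.
D = 2.0

def normal_cone_element(x):
    # one valid subgradient of the indicator of [-1, 1] at x
    if x >= 1.0:
        return 3.0   # any nonnegative number works at the right endpoint
    if x <= -1.0:
        return -3.0  # any nonpositive number at the left endpoint
    return 0.0       # interior points: only 0

points = [-1.0, -0.25, 0.0, 0.5, 1.0]
for xi in points:
    for xj in points:
        gj = normal_cone_element(xj)
        assert gj * (xi - xj) <= 1e-12  # convexity with f = 0 on C
        assert abs(xi - xj) <= D        # diameter constraint
```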

Convex support functions

class PEPit.functions.ConvexSupportFunction(M=inf, is_leaf=True, decomposition_dict=None, reuse_gradient=False, name=None)[source]

Bases: Function

The ConvexSupportFunction class overwrites the add_class_constraints method of Function, implementing interpolation constraints for the class of closed convex support functions.

Attributes:

M (float) – upper bound on the Lipschitz constant

Convex support functions are characterized by a parameter M, hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import ConvexSupportFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=ConvexSupportFunction, M=1)

References:

[1] A. Taylor, J. Hendrickx, F. Glineur (2017). Exact worst-case performance of first-order methods for composite convex optimization. SIAM Journal on Optimization, 27(3):1283–1313.

Parameters:
  • M (float) – Lipschitz constant of self. Default value set to infinity.

  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as linear combination of leaf.

  • decomposition_dict (dict) – Decomposition of self as linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned when one requires it several times on the same Point. If False, a new subgradient is computed each time one is required.

  • name (str) – name of the object. None by default. Can be updated later through the method set_name.

add_class_constraints()[source]

Formulates the list of interpolation constraints for self (closed convex support function), see [1, Corollary 3.7].

static set_convexity_constraint_i_j(xi, gi, fi, xj, gj, fj)[source]

Formulates the list of interpolation constraints for self (CCP function).

static set_fenchel_value_constraint_i(xi, gi, fi)[source]

Set the value of the Fenchel transform to 0.

set_lipschitz_continuity_constraint_i(xi, gi, fi)[source]

Set Lipschitz continuity constraint so that its Fenchel transform has a bounded support.
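A concrete instance: the support function of \(C = [-M, M]\) is \(f(x) = M|x|\). Its subgradients lie in \(C\) (so its Fenchel conjugate, the indicator of \(C\), vanishes on them), and each subgradient attains the defining maximum, giving the Fenchel equality \(f(x) = \langle g, x \rangle\). A plain-Python sketch, independent of PEPit:

```python
# Support function of C = [-M, M]: f(x) = M * |x|.
# Subgradients are in C and satisfy f(x) = <g, x>.
M = 1.0
f = lambda x: M * abs(x)
subgrad = lambda x: M if x > 0 else (-M if x < 0 else 0.0)

for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    g = subgrad(x)
    assert abs(g) <= M                  # bounded support of the conjugate
    assert abs(f(x) - g * x) <= 1e-12   # Fenchel equality f(x) = <g, x>
```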

Restricted secant inequality and error bound

class PEPit.functions.RsiEbFunction(mu, L, is_leaf=True, decomposition_dict=None, reuse_gradient=False, name=None)[source]

Bases: Function

The RsiEbFunction class overwrites the add_class_constraints method of Function, implementing the interpolation constraints of the class of functions verifying the “lower” restricted secant inequality (\(\text{RSI}^-\)) and the “upper” error bound (\(\text{EB}^+\)).

Attributes:
  • mu (float) – Restricted secant inequality parameter

  • L (float) – Error bound parameter

\(\text{RSI}^-\) and \(\text{EB}^+\) functions are characterized by parameters \(\mu\) and \(L\), hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import RsiEbFunction
>>> problem = PEP()
>>> h = problem.declare_function(function_class=RsiEbFunction, mu=.1, L=1)

References:

A definition of the class of \(\text{RSI}^-\) and \(\text{EB}^+\) functions can be found in [1].

[1] C. Guille-Escuret, B. Goujaud, A. Ibrahim, I. Mitliagkas (2022). Gradient Descent Is Optimal Under Lower Restricted Secant Inequality And Upper Error Bound. arXiv 2203.00342.

Parameters:
  • mu (float) – The restricted secant inequality parameter.

  • L (float) – The upper error bound parameter.

  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as linear combination of leaf.

  • decomposition_dict (dict) – Decomposition of self as linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned when one requires it several times on the same Point. If False, a new subgradient is computed each time one is required.

  • name (str) – name of the object. None by default. Can be updated later through the method set_name.

add_class_constraints()[source]

Formulates the list of necessary conditions for interpolation of self, see [1, Theorem 1].

set_eb_constraints_i_j(xi, gi, fi, xj, gj, fj)[source]

Set EB constraints.

set_rsi_constraints_i_j(xi, gi, fi, xj, gj, fj)[source]

Set RSI constraints.
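The two defining conditions are \(\langle g(x), x - x_\star \rangle \geqslant \mu \|x - x_\star\|^2\) (\(\text{RSI}^-\)) and \(\|g(x)\| \leqslant L \|x - x_\star\|\) (\(\text{EB}^+\)). A plain-Python sketch, independent of PEPit, using \(f(x) = x^2/2\) (\(g(x) = x\), \(x_\star = 0\)), which satisfies both for any \(\mu \leqslant 1 \leqslant L\):

```python
# RSI^- : <g(x), x - x_star> >= mu * |x - x_star|^2
# EB^+  : |g(x)| <= L * |x - x_star|
# checked for f(x) = x^2/2 with g(x) = x, x_star = 0, mu = 0.5, L = 2.
mu, L, x_star = 0.5, 2.0, 0.0
g = lambda x: x

for x in [-3.0, -1.0, 0.0, 0.5, 2.0]:
    assert g(x) * (x - x_star) >= mu * (x - x_star) ** 2 - 1e-12  # RSI^-
    assert abs(g(x)) <= L * abs(x - x_star) + 1e-12               # EB^+
```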

Convex and smooth by block

class PEPit.functions.BlockSmoothConvexFunction(partition, L, is_leaf=True, decomposition_dict=None, reuse_gradient=True, name=None)[source]

Bases: Function

The BlockSmoothConvexFunction class overwrites the add_class_constraints method of Function, by implementing necessary constraints for interpolation of the class of smooth convex functions by blocks.

Attributes:
  • partition (BlockPartition) – partitioning of the variables (in blocks).

  • L (list) – smoothness parameters (one per block).

Smooth convex functions by blocks are characterized by a list of parameters \(L_i\) (one per block), hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import BlockSmoothConvexFunction
>>> problem = PEP()
>>> partition = problem.declare_block_partition(d=3)
>>> L = [1, 4, 10]
>>> func = problem.declare_function(function_class=BlockSmoothConvexFunction, partition=partition, L=L)

References:

[1] Z. Shi, R. Liu (2017). Better worst-case complexity analysis of the block coordinate descent method for large scale machine learning. In 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA).

Parameters:
  • partition (BlockPartition) – a BlockPartition.

  • L (list) – smoothness parameters (list of floats). The size of the list must be equal to the number of blocks of the partition.

  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as linear combination of leaf.

  • decomposition_dict (dict) – Decomposition of self as linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned when one requires it several times on the same Point. If False, a new subgradient is computed each time one is required.

  • name (str) – name of the object. None by default. Can be updated later through the method set_name.

Note

Smooth convex functions by blocks are necessarily differentiable, hence reuse_gradient is set to True.

add_class_constraints()[source]

Formulates the list of necessary constraints for interpolation for self (block smooth convex function); see [1, Lemma 1.1].
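Block smoothness means the partial gradient with respect to block \(k\) is \(L_k\)-Lipschitz when the other blocks are held fixed, so a per-block descent lemma holds with the per-block constant. A plain-Python sketch, independent of PEPit, on the separable quadratic \(f(x, y) = \frac{L_1}{2}x^2 + \frac{L_2}{2}y^2\) with blocks \(\{x\}\) and \(\{y\}\):

```python
# Per-block descent lemma for f(x, y) = L1/2 * x^2 + L2/2 * y^2:
# moving one block at a time, f(x + h, y) <= f(x, y) + dfx * h + L1/2 * h^2
# (and symmetrically for the second block with L2).
L1, L2 = 1.0, 4.0
f = lambda x, y: 0.5 * L1 * x * x + 0.5 * L2 * y * y
dfx = lambda x, y: L1 * x  # partial gradient, block 1
dfy = lambda x, y: L2 * y  # partial gradient, block 2

points = [-1.0, 0.0, 0.5, 2.0]
for x in points:
    for y in points:
        for h in points:  # perturb one block at a time
            assert f(x + h, y) <= f(x, y) + dfx(x, y) * h + 0.5 * L1 * h * h + 1e-12
            assert f(x, y + h) <= f(x, y) + dfy(x, y) * h + 0.5 * L2 * h * h + 1e-12
```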

Strongly convex and smooth quadratic

class PEPit.functions.SmoothStronglyConvexQuadraticFunction(mu, L, is_leaf=True, decomposition_dict=None, reuse_gradient=True, name=None)[source]

Bases: Function

The SmoothStronglyConvexQuadraticFunction class overwrites the add_class_constraints method of Function, by implementing interpolation constraints of the class of smooth strongly convex quadratic functions.

Attributes:
  • mu (float) – strong convexity parameter

  • L (float) – smoothness parameter

Smooth strongly convex quadratic functions are characterized by parameters \(\mu\) and \(L\), hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import SmoothStronglyConvexQuadraticFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=SmoothStronglyConvexQuadraticFunction, mu=.1, L=1.)

References:

[1] N. Bousselmi, J. Hendrickx, F. Glineur (2023). Interpolation Conditions for Linear Operators and applications to Performance Estimation Problems. arXiv preprint.

Parameters:
  • mu (float) – The strong convexity parameter.

  • L (float) – The smoothness parameter.

  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as linear combination of leaf.

  • decomposition_dict (dict) – Decomposition of self as linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned when one requires it several times on the same Point. If False, a new subgradient is computed each time one is required.

  • name (str) – name of the object. None by default. Can be updated later through the method set_name.

Note

Smooth strongly convex quadratic functions are necessarily differentiable, hence reuse_gradient is set to True.

add_class_constraints()[source]

Formulates the list of interpolation constraints for self (smooth strongly convex quadratic function); see [1, Theorem 3.9].

set_symmetry_constraint_i_j(xi, gi, fi, xj, gj, fj)[source]

Ensure the Hessian is symmetric.

set_value_constraint_i(xi, gi, fi)[source]

Set the value of the function.
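For a quadratic \(f(x) = \frac{1}{2}x^\top H x\) with symmetric \(H\), gradients \(g = Hx\) satisfy the symmetry condition \(\langle g_i, x_j \rangle = \langle g_j, x_i \rangle\), which is the constraint enforced pairwise here. A plain-Python sketch, independent of PEPit, with a hypothetical \(H\) whose eigenvalues lie in \([\mu, L] = [1, 4]\):

```python
# Symmetry condition <g_i, x_j> = <g_j, x_i> for g = H x with H symmetric.
# H = [[2, 1], [1, 3]] has eigenvalues (5 +/- sqrt(5))/2, within [1, 4].
H = [[2.0, 1.0], [1.0, 3.0]]

def grad(x):
    return [H[0][0] * x[0] + H[0][1] * x[1],
            H[1][0] * x[0] + H[1][1] * x[1]]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

points = [[1.0, 0.0], [0.0, 1.0], [-2.0, 3.0], [0.5, 0.5]]
for xi in points:
    for xj in points:
        assert abs(dot(grad(xi), xj) - dot(grad(xj), xi)) <= 1e-12
```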