Functions

Convex

class PEPit.functions.ConvexFunction(_, is_leaf=True, decomposition_dict=None, reuse_gradient=False)[source]

Bases: PEPit.function.Function

The ConvexFunction class overwrites the add_class_constraints method of Function, implementing the interpolation constraints of the class of convex, closed and proper (CCP) functions (i.e., convex functions whose epigraphs are non-empty closed sets).

General CCP functions are not characterized by any parameter, hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import ConvexFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=ConvexFunction, param=dict())
Parameters
  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as a linear combination of leaf Function objects.

  • decomposition_dict (dict) – decomposition of self as linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned when one requires it several times on the same Point. If False, a new subgradient is computed each time one is required.

add_class_constraints()[source]

Formulates the list of interpolation constraints for self (CCP function).
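Concretely, writing \((x_i, g_i, f_i)\) for the triplets of points, subgradients, and function values attached to self, the added constraints are the standard convex interpolation (subgradient) inequalities:

```latex
f_i \geq f_j + \langle g_j , x_i - x_j \rangle \quad \text{for all pairs } i, j,
```

where \(g_j\) denotes a subgradient of the function at \(x_j\).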

Strongly convex

class PEPit.functions.StronglyConvexFunction(param, is_leaf=True, decomposition_dict=None, reuse_gradient=False)[source]

Bases: PEPit.function.Function

The StronglyConvexFunction class overwrites the add_class_constraints method of Function, implementing the interpolation constraints of the class of strongly convex closed proper functions (strongly convex functions whose epigraphs are non-empty closed sets).

Attributes

mu (float) – strong convexity parameter

Strongly convex functions are characterized by the strong convexity parameter \(\mu\), hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import StronglyConvexFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=StronglyConvexFunction, param={'mu': .1})

References

[1] A. Taylor, J. Hendrickx, F. Glineur (2017). Smooth strongly convex interpolation and exact worst-case performance of first-order methods. Mathematical Programming, 161(1-2), 307-345.

Parameters
  • param (dict) – contains the value of mu

  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as a linear combination of leaf Function objects.

  • decomposition_dict (dict) – decomposition of self as linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned when one requires it several times on the same Point. If False, a new subgradient is computed each time one is required.

add_class_constraints()[source]

Formulates the list of interpolation constraints for self (strongly convex closed proper function), see [1, Corollary 2].
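Concretely, for triplets \((x_i, g_i, f_i)\) of points, subgradients, and function values, the constraints of [1, Corollary 2] for a \(\mu\)-strongly convex closed proper function read:

```latex
f_i \geq f_j + \langle g_j , x_i - x_j \rangle + \frac{\mu}{2} \|x_i - x_j\|^2 \quad \text{for all pairs } i, j.
```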

Smooth

class PEPit.functions.SmoothFunction(param, is_leaf=True, decomposition_dict=None, reuse_gradient=True)[source]

Bases: PEPit.function.Function

The SmoothFunction class overwrites the add_class_constraints method of Function, implementing the interpolation constraints of the class of smooth (not necessarily convex) functions.

Attributes

L (float) – smoothness parameter

Smooth functions are characterized by the smoothness parameter L, hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import SmoothFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=SmoothFunction, param={'L': 1})

References

[1] A. Taylor, J. Hendrickx, F. Glineur (2017). Exact worst-case performance of first-order methods for composite convex optimization. SIAM Journal on Optimization, 27(3):1283–1313.

Parameters
  • param (dict) – contains the value of L

  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as a linear combination of leaf Function objects.

  • decomposition_dict (dict) – decomposition of self as linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned when one requires it several times on the same Point. If False, a new subgradient is computed each time one is required.

Note

Smooth functions are necessarily differentiable, hence reuse_gradient is set to True.

add_class_constraints()[source]

Formulates the list of interpolation constraints for self (smooth (not necessarily convex) function), see [1, Theorem 3.10].
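As an independent sanity check (stdlib Python, not part of PEPit), one necessary consequence of L-smoothness is the descent-lemma bound \(|f(x) - f(y) - \langle \nabla f(y), x - y \rangle| \leq \frac{L}{2}\|x - y\|^2\); the full interpolation conditions of [1, Theorem 3.10] are stronger. The sketch below verifies this necessary condition on the 1-smooth but nonconvex function \(f(x) = \cos(x)\):

```python
import math

# f(x) = cos(x) is 1-smooth (|f''| <= 1) but not convex.
L = 1.0

def f(x):
    return math.cos(x)

def g(x):  # derivative of f
    return -math.sin(x)

# Check |f(x) - f(y) - f'(y)(x - y)| <= (L / 2) * (x - y)**2 on sample pairs.
points = [-3.0, -1.2, 0.0, 0.4, 2.5]
for x in points:
    for y in points:
        assert abs(f(x) - f(y) - g(y) * (x - y)) <= L / 2 * (x - y) ** 2 + 1e-12
```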

Convex and smooth

class PEPit.functions.SmoothConvexFunction(param, is_leaf=True, decomposition_dict=None, reuse_gradient=True)[source]

Bases: PEPit.functions.smooth_strongly_convex_function.SmoothStronglyConvexFunction

The SmoothConvexFunction class implements smooth convex functions as a particular case of SmoothStronglyConvexFunction (with strong convexity parameter \(\mu = 0\)).

Attributes

L (float) – smoothness parameter

Smooth convex functions are characterized by the smoothness parameter L, hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import SmoothConvexFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=SmoothConvexFunction, param={'L': 1})
Parameters
  • param (dict) – contains the value of L

  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as a linear combination of leaf Function objects.

  • decomposition_dict (dict) – decomposition of self as linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned when one requires it several times on the same Point. If False, a new subgradient is computed each time one is required.

Note

Smooth convex functions are necessarily differentiable, hence reuse_gradient is set to True.
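For reference, setting \(\mu = 0\) in the smooth strongly convex interpolation conditions yields the smooth convex interpolation inequalities enforced by the parent class in this case, for all pairs of triplets \((x_i, g_i, f_i)\):

```latex
f_i \geq f_j + \langle g_j , x_i - x_j \rangle + \frac{1}{2L} \|g_i - g_j\|^2 .
```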

Strongly convex and smooth

class PEPit.functions.SmoothStronglyConvexFunction(param, is_leaf=True, decomposition_dict=None, reuse_gradient=True)[source]

Bases: PEPit.function.Function

The SmoothStronglyConvexFunction class overwrites the add_class_constraints method of Function, by implementing interpolation constraints of the class of smooth strongly convex functions.

Attributes
  • mu (float) – strong convexity parameter

  • L (float) – smoothness parameter

Smooth strongly convex functions are characterized by parameters \(\mu\) and L, hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import SmoothStronglyConvexFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=SmoothStronglyConvexFunction, param={'mu': .1, 'L': 1})

References

[1] A. Taylor, J. Hendrickx, F. Glineur (2017). Smooth strongly convex interpolation and exact worst-case performance of first-order methods. Mathematical Programming, 161(1-2), 307-345.

Parameters
  • param (dict) – contains the values of mu and L

  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as a linear combination of leaf Function objects.

  • decomposition_dict (dict) – decomposition of self as linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned when one requires it several times on the same Point. If False, a new subgradient is computed each time one is required.

Note

Smooth strongly convex functions are necessarily differentiable, hence reuse_gradient is set to True.

add_class_constraints()[source]

Formulates the list of interpolation constraints for self (smooth strongly convex function); see [1, Theorem 4].
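Up to equivalent rearrangements, the inequality of [1, Theorem 4] can be written, for all pairs of triplets \((x_i, g_i, f_i)\), as \(f_i - f_j - \langle g_j, x_i - x_j \rangle \geq \frac{1}{2L}\|g_i - g_j\|^2 + \frac{\mu}{2(1 - \mu/L)}\|x_i - x_j - \frac{1}{L}(g_i - g_j)\|^2\). As an independent sanity check (stdlib Python, not PEPit code), this can be verified numerically on a quadratic whose curvature lies between mu and L:

```python
# f(x) = c * x**2 / 2 with mu <= c <= L is L-smooth and mu-strongly convex.
mu, L, c = 0.1, 1.0, 0.5

def f(x):
    return 0.5 * c * x ** 2

def g(x):  # gradient of f
    return c * x

# Check the interpolation inequality on sample pairs of points.
points = [-2.0, -0.3, 0.0, 0.7, 1.5]
for xi in points:
    for xj in points:
        lhs = f(xi) - f(xj) - g(xj) * (xi - xj)
        rhs = (g(xi) - g(xj)) ** 2 / (2 * L) \
            + mu / (2 * (1 - mu / L)) * (xi - xj - (g(xi) - g(xj)) / L) ** 2
        assert lhs >= rhs - 1e-12
```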

Convex and Lipschitz continuous

class PEPit.functions.ConvexLipschitzFunction(param, is_leaf=True, decomposition_dict=None, reuse_gradient=False)[source]

Bases: PEPit.function.Function

The ConvexLipschitzFunction class overwrites the add_class_constraints method of Function, implementing the interpolation constraints of the class of convex closed proper (CCP) Lipschitz continuous functions.

Attributes

M (float) – Lipschitz parameter

CCP Lipschitz continuous functions are characterized by a parameter M, hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import ConvexLipschitzFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=ConvexLipschitzFunction, param={'M': 1})

References

[1] A. Taylor, J. Hendrickx, F. Glineur (2017). Exact worst-case performance of first-order methods for composite convex optimization. SIAM Journal on Optimization, 27(3):1283–1313.

Parameters
  • param (dict) – contains the value of M

  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as a linear combination of leaf Function objects.

  • decomposition_dict (dict) – decomposition of self as linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned when one requires it several times on the same Point. If False, a new subgradient is computed each time one is required.

add_class_constraints()[source]

Formulates the list of interpolation constraints for self (CCP Lipschitz continuous function), see [1, Theorem 3.5].
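Concretely (cf. [1, Theorem 3.5]), the constraints combine the convex interpolation inequalities with a bound on the subgradient norms, for all \(i, j\):

```latex
f_i \geq f_j + \langle g_j , x_i - x_j \rangle , \qquad \|g_i\|^2 \leq M^2 .
```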

Convex indicator

class PEPit.functions.ConvexIndicatorFunction(param, is_leaf=True, decomposition_dict=None, reuse_gradient=False)[source]

Bases: PEPit.function.Function

The ConvexIndicatorFunction class overwrites the add_class_constraints method of Function, implementing interpolation constraints for the class of closed convex indicator functions.

Attributes

D (float) – upper bound on the diameter of the feasible set

Convex indicator functions are characterized by a parameter D, hence can be instantiated as

Example

>>> from PEPit import PEP
>>> from PEPit.functions import ConvexIndicatorFunction
>>> problem = PEP()
>>> func = problem.declare_function(function_class=ConvexIndicatorFunction, param={'D': 1})

References

[1] A. Taylor, J. Hendrickx, F. Glineur (2017). Exact worst-case performance of first-order methods for composite convex optimization. SIAM Journal on Optimization, 27(3):1283–1313.

Parameters
  • param (dict) – contains the value of D

  • is_leaf (bool) – True if self is defined from scratch. False if self is defined as a linear combination of leaf Function objects.

  • decomposition_dict (dict) – decomposition of self as linear combination of leaf Function objects. Keys are Function objects and values are their associated coefficients.

  • reuse_gradient (bool) – If True, the same subgradient is returned when one requires it several times on the same Point. If False, a new subgradient is computed each time one is required.

add_class_constraints()[source]

Formulates the list of interpolation constraints for self (closed convex indicator function), see [1, Theorem 3.6].
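Concretely (cf. [1, Theorem 3.6], up to notational details), the interpolated points lie in the set, so their function values coincide (and can be taken to be zero), the subgradients \(g_j\) belong to the normal cone of the set, and the diameter bound is enforced:

```latex
\langle g_j , x_i - x_j \rangle \leq 0 , \qquad \|x_i - x_j\|^2 \leq D^2 , \quad \text{for all } i, j.
```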