For all operators provided by DifferentiationInterface, there can be only one differentiated (or "active") argument, which we call x.
However, the release v0.6 introduced the possibility of additional "context" arguments, which are not differentiated but still passed to the function after x.
Contexts can be useful if you have a function y = f(x, a, b, c, ...) or f!(y, x, a, b, c, ...) and you want derivatives of y with respect to x only.
Another option would be creating a closure, but that is sometimes undesirable.
Every context argument must be wrapped in a subtype of Context and come after the differentiated input x.
Right now, there are two kinds of context: Constant and Cache.
!!! warning
    Not every backend supports every type of context. See the documentation on Backends for more details.
Semantically, both of these calls compute the partial gradient of f(x, c) with respect to x, but they consider c differently:
```julia
gradient(f, backend, x, Constant(c))
gradient(f, backend, x, Cache(c))
```

In the first call, c is kept unchanged throughout the function evaluation.
In the second call, c can be mutated with values computed during the function.
Importantly, one can prepare an operator with an arbitrary value c' of the Constant (subject to the usual restrictions on preparation).
The values stored in a provided Cache never matter, since it only serves as scratch space that gets overwritten during execution.
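As a concrete sketch of a context argument (the function f and the values below are illustrative, assuming ForwardDiff.jl as the dense backend):

```julia
using DifferentiationInterface
import ForwardDiff

# y depends on the active input x and on the context c
f(x, c) = sum(c .* x .^ 2)

backend = AutoForwardDiff()
x = [1.0, 2.0]
c = [3.0, 4.0]

# only x is differentiated; c is passed along unchanged
gradient(f, backend, x, Constant(c))  # equals 2 .* c .* x
```

Without the Constant wrapper, one would have to close over c with an anonymous function like x -> f(x, c), which preparation cannot always exploit as well.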
When faced with sparse Jacobian or Hessian matrices, one can take advantage of their sparsity pattern to speed up the computation.
DifferentiationInterface does this automatically if you pass a backend of type [AutoSparse](@extref ADTypes.AutoSparse).
!!! tip
    To know more about sparse AD, read the survey *What Color Is Your Jacobian? Graph Coloring for Computing Derivatives* (Gebremedhin et al., 2005).
AutoSparse backends only support jacobian and hessian (as well as their variants), because other operators do not output matrices.
An AutoSparse backend must be constructed from three ingredients:
- An underlying (dense) backend, which can be SecondOrder or anything from ADTypes.jl
- A sparsity pattern detector like:
    - [TracerSparsityDetector](@extref SparseConnectivityTracer.TracerSparsityDetector) from SparseConnectivityTracer.jl
    - [SymbolicsSparsityDetector](@extref Symbolics.SymbolicsSparsityDetector) from Symbolics.jl
    - DenseSparsityDetector from DifferentiationInterface.jl (beware that this detector only gives a locally valid pattern)
    - [KnownJacobianSparsityDetector](@extref ADTypes.KnownJacobianSparsityDetector) or [KnownHessianSparsityDetector](@extref ADTypes.KnownHessianSparsityDetector) from ADTypes.jl (if you already know the pattern)
- A coloring algorithm from SparseMatrixColorings.jl, such as:
    - [GreedyColoringAlgorithm](@extref SparseMatrixColorings.GreedyColoringAlgorithm) (our generic recommendation)
    - [ConstantColoringAlgorithm](@extref SparseMatrixColorings.ConstantColoringAlgorithm) (if you have already computed the optimal coloring and always want to return it)
!!! note
    Symbolic backends have built-in sparsity handling, so AutoSparse(AutoSymbolics()) and AutoSparse(AutoFastDifferentiation()) do not need additional configuration for pattern detection or coloring.
The preparation step of jacobian or hessian with an AutoSparse backend can be long, because it needs to detect the sparsity pattern and perform a matrix coloring.
But after preparation, the more zeros are present in the matrix, the greater the speedup will be compared to dense differentiation.
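Putting the three ingredients together, a minimal sketch might look like this (assuming ForwardDiff.jl as the dense backend; the function f is illustrative):

```julia
using DifferentiationInterface
using SparseConnectivityTracer: TracerSparsityDetector
using SparseMatrixColorings: GreedyColoringAlgorithm
import ForwardDiff

backend = AutoSparse(
    AutoForwardDiff();
    sparsity_detector=TracerSparsityDetector(),
    coloring_algorithm=GreedyColoringAlgorithm(),
)

f(x) = diff(x) .^ 2  # banded Jacobian: each output touches only two inputs
x = rand(10)

prep = prepare_jacobian(f, backend, x)  # sparsity detection + coloring happen here
J = jacobian(f, prep, backend, x)       # fast sparse evaluation, reusable for new x
```

The expensive detection and coloring steps run once during preparation, so prep should be reused across repeated Jacobian evaluations with the same sparsity pattern.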
!!! danger
    The result of preparation for an AutoSparse backend cannot be reused if the sparsity pattern changes.
The complexity of sparse Jacobians or Hessians grows with the number of distinct colors in a coloring of the sparsity pattern.
To reduce this number of colors, [GreedyColoringAlgorithm](@extref SparseMatrixColorings.GreedyColoringAlgorithm) has two main settings: the order used for vertices and the decompression method.
Depending on your use case, you may want to modify either of these options to increase performance.
See the documentation of SparseMatrixColorings.jl for details.
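For instance (a sketch; the exact keyword names follow SparseMatrixColorings.jl and should be checked against its documentation), both settings can be chosen when constructing the algorithm:

```julia
using SparseMatrixColorings

GreedyColoringAlgorithm()  # defaults: natural vertex order, direct decompression

# alternative vertex order and substitution-based decompression
GreedyColoringAlgorithm(LargestFirst(); decompression=:substitution)
```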
When a Jacobian matrix has both dense rows and dense columns, it can be more efficient to use "mixed-mode" differentiation, a mixture of forward and reverse.
The associated bidirectional coloring algorithm automatically decides how to cover the Jacobian using a set of columns (computed in forward mode) plus a set of rows (computed in reverse mode).
This behavior is triggered as soon as you put a MixedMode object inside AutoSparse, like so:
```julia
AutoSparse(
    MixedMode(forward_backend, reverse_backend);
    sparsity_detector,
    coloring_algorithm
)
```

At the moment, mixed mode tends to work best (output fewer colors) when the [GreedyColoringAlgorithm](@extref SparseMatrixColorings.GreedyColoringAlgorithm) is provided with a [RandomOrder](@extref SparseMatrixColorings.RandomOrder) instead of the usual [NaturalOrder](@extref SparseMatrixColorings.NaturalOrder), and when "post-processing" is activated after coloring.
For full reproducibility, you should use a random number generator from StableRNGs.jl.
Thus, the right setup looks like:
```julia
using StableRNGs
seed = 3
coloring_algorithm = GreedyColoringAlgorithm(RandomOrder(StableRNG(seed), seed); postprocessing=true)
```

The jacobian and hessian operators compute matrices by repeatedly applying lower-level operators (pushforward, pullback or hvp) to a set of tangents.
The tangents usually correspond to basis elements of the appropriate vector space.
We could call the lower-level operator on each tangent separately, but some packages (ForwardDiff.jl and Enzyme.jl) have optimized implementations to handle multiple tangents at once.
This behavior is often called "vector mode" AD, but we call it "batch mode" to avoid confusion with Julia's Vector type.
As a matter of fact, the tangents are grouped inside an NTuple and not a Vector, because the batch size must be known at compile time.
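For instance (a sketch, assuming ForwardDiff.jl as the backend; the function f is illustrative), a batch of two tangents can be pushed forward in a single call:

```julia
using DifferentiationInterface
import ForwardDiff

f(x) = x .^ 2
x = [1.0, 2.0, 3.0]

# an NTuple holding a batch of two tangents
tangents = ([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])

pushforward(f, AutoForwardDiff(), x, tangents)  # NTuple of two output tangents
```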
When the underlying vector space has dimension N, the jacobian and hessian operators process N tangents in total, splitting them into batches of a given size.
For every backend which does not support batch mode, the batch size is set to 1. For [AutoForwardDiff](@extref ADTypes.AutoForwardDiff) and [AutoEnzyme](@extref ADTypes.AutoEnzyme), more complicated rules apply.
If the backend object has a pre-determined batch size