Not only is it blazingly fast, you achieved this speedup without looking at the docs of either ForwardDiff.jl or Enzyme.jl!
In short, DifferentiationInterface.jl allows for easy testing and comparison of AD backends.
If you want to go further, check out the [DifferentiationInterfaceTest.jl tutorial](https://gdalle.github.io/DifferentiationInterface.jl/DifferentiationInterfaceTest/dev/tutorial/).
We present a typical workflow with DifferentiationInterfaceTest.jl, building on the [DifferentiationInterface.jl tutorial](https://gdalle.github.io/DifferentiationInterface.jl/DifferentiationInterface/dev/tutorial/) (which we encourage you to read first).
```@repl tuto
using DifferentiationInterface, DifferentiationInterfaceTest
import ForwardDiff, Enzyme
import DataFrames
```
## Introduction
The AD backends we want to compare are [ForwardDiff.jl](https://github.com/JuliaDiff/ForwardDiff.jl) and [Enzyme.jl](https://github.com/EnzymeAD/Enzyme.jl).
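As a minimal sketch, such a backend list might be defined with the ADTypes.jl constructors used throughout the DifferentiationInterface.jl ecosystem (the exact constructor arguments here are an assumption):

```@repl tuto
# Assumption: ADTypes-style backend objects, as used elsewhere in DifferentiationInterface.jl
backends = [AutoForwardDiff(), AutoEnzyme()]
```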
The main entry point for testing is the function [`test_differentiation`](@ref).
It has many options, but the main ingredients are the following:
```@repl tuto
test_differentiation(
    backends,             # the backends you want to compare
    scenarios,            # the scenarios you defined
    correctness=true,     # compares values against the reference
    type_stability=true,  # checks type stability with JET.jl
    detailed=true,        # prints a detailed test set
)
```
If you are too lazy to manually specify the reference, you can also provide an AD backend as the `correctness` keyword argument, which will serve as the ground truth for comparison.
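For instance, a hedged sketch of that usage (the choice of ForwardDiff.jl as ground truth is an assumption for illustration):

```@repl tuto
test_differentiation(
    backends,
    scenarios,
    correctness=AutoForwardDiff(),  # assumption: this backend supplies the reference values
)
```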
## Benchmarking
Once you are confident that your backends give the correct answers, you probably want to compare their performance.
This is made easy by the [`benchmark_differentiation`](@ref) function, whose syntax should feel familiar:
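A minimal sketch of such a call, reusing the `backends` and `scenarios` from before; storing the result as `benchmark_result` is an assumption matching the name used below:

```@repl tuto
# Sketch: mirrors the test_differentiation call signature described above
benchmark_result = benchmark_differentiation(backends, scenarios)
```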
The resulting object is a `Vector` of structs, which can easily be converted into a `DataFrame` from [DataFrames.jl](https://github.com/JuliaData/DataFrames.jl):
```@repl tuto
df = DataFrames.DataFrame(benchmark_result)
```
Here's what the resulting `DataFrame` looks like with all its columns.
Note that we only compare (possibly) in-place operators, because they are always more efficient.