For some reason the in-place version is not much better than our first attempt.
However, as you can see, it has one less allocation: it corresponds to the gradient vector we provided.
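In code, the in-place pattern looks roughly like this (a sketch using DifferentiationInterface's `gradient!`; the exact signature may vary between versions):

```julia
# Preallocate the output buffer once, then reuse it on every call:
grad = similar(x)
gradient!(f, grad, backend, x)  # writes the gradient into `grad` in place
```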
Don't worry, we're not done yet.
## Preparing for multiple gradients

It's blazingly fast.

And you know what's even better?
You didn't need to look at the docs of either ForwardDiff.jl or Enzyme.jl to achieve top performance with both, or to compare them.

## Testing

DifferentiationInterface.jl also provides some utilities for more involved comparisons between backends.
They are gathered in a submodule called `DifferentiationInterfaceTest`, located [here](https://github.com/gdalle/DifferentiationInterface.jl/tree/main/lib/DifferentiationInterfaceTest) in the repo.

```@repl tuto
using DifferentiationInterfaceTest
```

For testing, you can use [`test_differentiation`](@ref) as follows:

```@repl tuto
test_differentiation(
    [AutoForwardDiff(), AutoEnzyme(Enzyme.Reverse)],  # backends to compare
    [gradient, pullback],  # operators to try
    [Scenario(f; x=rand(3)), Scenario(f; x=rand(3, 3))];  # test scenarios
    correctness=AutoZygote(),  # compare results to a "ground truth" from Zygote
    detailed=true,  # print a detailed test set
);
```

## Benchmarking

Once you have ascertained correctness, performance will be your next concern.
The interface of [`benchmark_differentiation`](@ref) is very similar to the one we've just seen, but this time it returns a data object.
The resulting `BenchmarkData` object is just a struct of vectors, which you can easily convert to a `DataFrame` from [DataFrames.jl](https://github.com/JuliaData/DataFrames.jl):
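A benchmarking call might look like the following sketch, mirroring the `test_differentiation` call above (the exact set of arguments accepted by `benchmark_differentiation` is an assumption here):

```julia
# Benchmark one operator on one scenario across two backends (argument list is an assumption):
data = benchmark_differentiation(
    [AutoForwardDiff(), AutoEnzyme(Enzyme.Reverse)],  # backends to compare
    [gradient],  # operators to benchmark
    [Scenario(f; x=rand(3))];  # benchmark scenario
);
```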

```@repl tuto
df = DataFrames.DataFrame(pairs(data)...)
```

Here's what the resulting `DataFrame` looks like with all its columns.
Note that the results may vary from the ones presented above (we use [Chairmarks.jl](https://github.com/LilithHafner/Chairmarks.jl) internally instead of BenchmarkTools.jl, and measure slightly different operators).
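Since `df` is an ordinary `DataFrame`, the usual DataFrames.jl tooling applies. For instance, assuming the results include `:backend` and `:time` columns (hypothetical column names), you could summarize the fastest run per backend:

```julia
# Group by backend and keep the minimum measured time (column names are assumptions):
combine(groupby(df, :backend), :time => minimum => :min_time)
```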

Test a list of `backends` for a list of `operators` on a list of `scenarios`.

# Default arguments

Testing:

- `correctness=true`: whether to compare the differentiation results with the theoretical values specified in each scenario. If a backend object like `correctness=AutoForwardDiff()` is passed instead of a boolean, the results will be compared using that reference backend as the ground truth.
- `call_count=false`: whether to check that the function is called the right number of times
- `type_stability=false`: whether to check type stability with JET.jl (thanks to `@test_opt`)
- `detailed=false`: whether to print a detailed or condensed test log

Filtering:

Options:

- `logging=true`: whether to log progress
- `isapprox=isapprox`: function used to compare objects, only needs to be set for complicated cases beyond arrays / scalars
- `rtol=1e-3`: precision for correctness testing (when comparing to the reference outputs)
"""
@@ -93,8 +41,6 @@ function test_differentiation(
93
41
correctness::Union{Bool,AbstractADType}=true,
94
42
type_stability::Bool=false,
95
43
call_count::Bool=false,
96
-
benchmark::Bool=false,
97
-
allocations::Bool=false,
98
44
detailed=false,
99
45
# filtering
100
46
input_type::Type=Any,
@@ -105,64 +51,45 @@ function test_differentiation(