Commit 073e2b7

Patch test, update docs

1 parent 9b31222 commit 073e2b7

7 files changed, +143 -30 lines


src/ImageSharp.Drawing.WebGPU/WEBGPU_BACKEND.md

Lines changed: 46 additions & 7 deletions
@@ -8,11 +8,31 @@ The WebGPU backend and staged scene pipeline are based on ideas and implementati
 
 This document explains the backend as a newcomer would need to understand it:
 
+- where the public WebGPU entry points fit
 - what problem the WebGPU backend is solving
 - what `WebGPUDrawingBackend` actually owns
 - how one flush moves through the backend boundary
 - where fallback, layer composition, and runtime caching fit into the design
 
+## Where The Public WebGPU Types Fit
+
+The public WebGPU surface area around this backend is small and target-first:
+
+- `WebGPUEnvironment` exposes explicit support probes for the library-managed WebGPU environment
+- `WebGPUWindow<TPixel>` owns a native window and either runs a render loop or returns `WebGPUWindowFrame<TPixel>` instances through `TryAcquireFrame(...)`
+- `WebGPURenderTarget<TPixel>` owns an offscreen native target for GPU rendering, hybrid CPU plus GPU canvases, and readback
+- `WebGPUDeviceContext<TPixel>` wraps a shared or caller-owned device and queue and creates native-only or hybrid frames and canvases over external textures
+- `WebGPUNativeSurfaceFactory` is the low-level escape hatch for caller-owned native targets
+
+Those types all exist to get a `DrawingCanvas<TPixel>` over a native WebGPU target, sometimes paired with a CPU region through `HybridCanvasFrame<TPixel>`. Once the canvas flushes, `WebGPUDrawingBackend` becomes the execution boundary.
+
+The support probes also live outside the backend:
+
+- `WebGPUEnvironment.TryProbeAvailability(...)` checks whether the library-managed WebGPU device and queue can be acquired
+- `WebGPUEnvironment.TryProbeComputePipelineSupport(...)` runs the crash-isolated trivial compute-pipeline probe
+
+That split keeps support probing separate from flush execution. `WebGPUDrawingBackend` is the flush executor, not the public support API. The WebGPU constructors create their objects directly; callers use `WebGPUEnvironment` when they want explicit preflight checks.
+
 ## The Main Problem
 
 The canvas hands the backend a prepared composition scene. That is already a big simplification, but it does not mean the GPU can render that scene directly.
@@ -52,6 +72,13 @@ That means `WebGPUDrawingBackend` is responsible for entry-point orchestration,
 
 It is the policy boundary of the GPU path.
 
+It does not own:
+
+- public support probing
+- window creation
+- render-target allocation APIs
+- caller-device or caller-surface interop setup
+
 ### Flush Context
 
 `WebGPUFlushContext` is the flush-scoped execution context for one GPU flush.
@@ -116,6 +143,12 @@ The expensive staged work is delegated:
 - `WebGPUSceneResources` owns flush-scoped GPU resources
 - `WebGPUSceneDispatch` owns the staged compute pipeline
 
+The public object graph around those responsibilities is also separate:
+
+- `WebGPUEnvironment` handles explicit support probes
+- `WebGPUWindow<TPixel>`, `WebGPURenderTarget<TPixel>`, and `WebGPUDeviceContext<TPixel>` construct native targets, frames, and canvases
+- `DrawingCanvas<TPixel>` hands a prepared `CompositionScene` to the backend
+
 ## The Flush Boundary
 
 `FlushCompositions<TPixel>(...)` in `WebGPUDrawingBackend.cs` is the top-level scene flush entry point.
@@ -199,6 +232,8 @@ They cache things such as:
 - composite pipelines
 - a small amount of reusable device-scoped support state
 
+`WebGPURuntime` also backs the explicit support probes surfaced by `WebGPUEnvironment`. The probe and runtime layer is where the library-managed device/queue availability and crash-isolated compute-pipeline test are cached.
+
 Everything else in the staged scene path is intentionally flush-scoped.
 
 That split keeps flushes isolated while still allowing truly device-scoped state to be reused.
@@ -221,23 +256,27 @@ The staged scene pipeline itself is described in [`WEBGPU_RASTERIZER.md`](d:/Git
 
 If you want to understand the backend first, read the code in this order:
 
-1. `WebGPUDrawingBackend.cs`
-2. `WebGPUFlushContext.cs`
-3. `WebGPURuntime.cs`
-4. `WebGPURuntime.DeviceSharedState.cs`
-5. `WebGPUDrawingBackend.ComposeLayer.cs`
-6. `WEBGPU_RASTERIZER.md`
+1. `WebGPUEnvironment.cs`
+2. `WebGPUWindow{TPixel}.cs`, `WebGPUWindowFrame{TPixel}.cs`, `WebGPURenderTarget{TPixel}.cs`, and `WebGPUDeviceContext{TPixel}.cs`
+3. `WebGPUDrawingBackend.cs`
+4. `WebGPUFlushContext.cs`
+5. `WebGPURuntime.cs`
+6. `WebGPURuntime.DeviceSharedState.cs`
+7. `WebGPUDrawingBackend.ComposeLayer.cs`
+8. `WEBGPU_RASTERIZER.md`
 
 That order mirrors the newcomer view of the system:
 
-backend policy -> flush context -> runtime lifetime -> layer composition -> staged raster pipeline
+support and target setup -> backend policy -> flush context -> runtime lifetime -> layer composition -> staged raster pipeline
 
 ## The Mental Model To Keep
 
 The easiest way to keep this backend straight is to remember that it is not the rasterizer itself. It is the orchestration and policy layer around a staged GPU rasterizer. It decides whether a flush can stay on the GPU, runs that staged path as one flush-scoped unit of work, and falls back cleanly when it cannot.
 
 If that model is clear, the major types fall into place:
 
+- `WebGPUEnvironment` exposes explicit support probes
+- `WebGPUWindow<TPixel>`, `WebGPURenderTarget<TPixel>`, and `WebGPUDeviceContext<TPixel>` create canvases over native targets
 - `WebGPUDrawingBackend` orchestrates and decides policy
 - `WebGPUFlushContext` owns one flush's execution state
 - `WebGPURuntime` owns longer-lived device state
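Read as one story, the backend doc this hunk patches implies a probe-first calling pattern. The sketch below is illustrative only: the type names come from this patch, but every signature (`TryProbeAvailability`, `TryProbeComputePipelineSupport`, `TryAcquireFrame`, the window constructor, `frame.Canvas`, `canvas.Flush()`) is an assumption inferred from the documentation text, not the library's confirmed API:

```csharp
// Illustrative sketch only: explicit preflight probes, then a window-backed canvas.
// All signatures below are assumptions inferred from the documentation text.
using SixLabors.ImageSharp.PixelFormats;

if (!WebGPUEnvironment.TryProbeAvailability(out string? reason))
{
    // The library-managed device/queue could not be acquired; stay on the CPU path.
    Console.WriteLine($"WebGPU unavailable: {reason}");
    return;
}

// Optional second preflight: the crash-isolated trivial compute-pipeline probe.
if (!WebGPUEnvironment.TryProbeComputePipelineSupport(out reason))
{
    Console.WriteLine($"Compute pipelines unsupported: {reason}");
    return;
}

using var window = new WebGPUWindow<Rgba32>(width: 1280, height: 720);

// Hypothetical per-frame loop: each frame carries a canvas over the native target.
while (window.TryAcquireFrame(out WebGPUWindowFrame<Rgba32>? frame))
{
    using (frame)
    {
        DrawingCanvas<Rgba32> canvas = frame.Canvas; // assumed property
        // ... record drawing commands ...
        canvas.Flush(); // hands the prepared CompositionScene to WebGPUDrawingBackend
    }
}
```

Nothing here should be copied as-is; it only shows where the probe boundary (`WebGPUEnvironment`) ends and the flush boundary (`WebGPUDrawingBackend`) begins.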

src/ImageSharp.Drawing.WebGPU/WEBGPU_BACKEND_PROCESS.md

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
 The WebGPU documentation is split into two newcomer-first documents:
 
 - [`WEBGPU_BACKEND.md`](d:/GitHub/SixLabors/ImageSharp.Drawing/src/ImageSharp.Drawing.WebGPU/WEBGPU_BACKEND.md)
-  Explains what `WebGPUDrawingBackend` owns, how a flush reaches the GPU path, where fallback lives, how layer composition fits in, and how runtime/device-scoped state relates to flush-scoped work.
+  Explains how `WebGPUEnvironment`, the public target types, and `WebGPUDrawingBackend` fit together, how a flush reaches the GPU path, where explicit support probing fits, where fallback lives, how layer composition fits in, and how runtime/device-scoped state relates to flush-scoped work.
 
 - [`WEBGPU_RASTERIZER.md`](d:/GitHub/SixLabors/ImageSharp.Drawing/src/ImageSharp.Drawing.WebGPU/WEBGPU_RASTERIZER.md)
   Explains the staged scene pipeline itself: scene encoding, planning, resource creation, scheduling passes, fine rasterization, chunked oversized-scene execution, and submission.

src/ImageSharp.Drawing.WebGPU/WEBGPU_RASTERIZER.md

Lines changed: 13 additions & 0 deletions
@@ -12,6 +12,13 @@ In this codebase, the WebGPU rasterizer is not a single type with one scan-conve
 
 Together, these types turn one prepared flush into a staged GPU scene, schedule that scene into tile-relative work, run the fine raster pass, and write final pixels.
 
+This document starts after two earlier boundaries have already been crossed:
+
+- public WebGPU setup has already selected or created a native target through `WebGPUWindow<TPixel>`, `WebGPURenderTarget<TPixel>`, `WebGPUDeviceContext<TPixel>`, or `WebGPUNativeSurfaceFactory`
+- `WebGPUDrawingBackend` has already decided that the flush should stay on the GPU path
+
+Support probing through `WebGPUEnvironment` also sits outside this document. The rasterizer describes execution of one staged scene, not environment detection or object construction.
+
 The staged GPU rasterizer is based on ideas and implementation techniques from Vello:
 
 - https://github.com/linebender/vello
@@ -285,6 +292,12 @@ The backend decides:
 - how layer composition is handled
 - how flush-scoped work relates to runtime and device-scoped state
 
+The public setup layer decides:
+
+- how a caller acquires or owns the native target
+- whether support should be probed explicitly through `WebGPUEnvironment`
+- whether the caller is using a library-managed device or caller-owned native handles
+
 That separation is why it helps to document them separately.
 
 ## Reading Guide

src/ImageSharp.Drawing/Processing/Backends/DEFAULT_DRAWING_BACKEND.md

Lines changed: 31 additions & 6 deletions
@@ -4,11 +4,25 @@
 
 This document explains the backend as a system rather than as a list of methods. The goal is to help a newcomer understand:
 
+- where the CPU backend fits in the canvas/backend selection model
 - what problem the CPU backend is solving
 - why the backend is organized around a flush-scoped execution plan
 - what `FlushScene` means in this architecture
 - how rasterization, brush application, and layer composition fit together
 
+## Where The CPU Backend Fits
+
+`DefaultDrawingBackend` is the standard CPU execution path behind `DrawingCanvas<TPixel>`.
+
+The canvas architecture reaches this backend in two common ways:
+
+- ordinary `DrawingCanvas<TPixel>` construction resolves `IDrawingBackend` from `Configuration`
+- specialized infrastructure can construct a canvas with an explicit backend instance
+
+The CPU path usually uses the first route. The WebGPU helpers use the second route when they need a canvas that targets a native surface through `WebGPUDrawingBackend`.
+
+That means the CPU backend is one backend implementation within the shared canvas architecture, not a separate public drawing model. It executes against any frame that exposes a writable CPU region, whether that frame is pure memory or a hybrid frame that also carries a native surface.
+
 ## The Main Problem
 
 By the time work reaches `DefaultDrawingBackend`, the public drawing API has already been normalized into prepared commands. That is helpful, but it does not make CPU execution trivial.
@@ -57,6 +71,8 @@ If that idea is clear, most of the important types fall into place.
 
 It does not own every detail of geometry planning or scan conversion.
 
+It also does not own backend selection. By the time `FlushCompositions(...)` is called, `DrawingCanvas<TPixel>` has already chosen the backend instance that will receive the prepared scene.
+
 ### Scene
 
 In the canvas architecture, the backend receives a `CompositionScene`. That scene already contains prepared commands and explicit layer boundaries.
@@ -166,6 +182,12 @@ The expensive work is delegated:
 
 That split keeps each type focused on one class of problem.
 
+The canvas layer above that split is also important:
+
+- `DrawingCanvas<TPixel>` records public drawing intent
+- `DrawingCanvasBatcher<TPixel>` prepares commands and constructs `CompositionScene`
+- `DefaultDrawingBackend` executes the prepared scene on a CPU destination
+
 ## Building The Flush Scene
 
 `FlushScene.Create(...)` turns the prepared command stream into an execution plan in several phases. Each phase changes the data into a form that is cheaper for the next phase to consume.
@@ -324,22 +346,25 @@ That ownership model keeps allocation and disposal aligned with real work lifeti
 
 If you are new to this backend, read the code in this order:
 
-1. `DefaultDrawingBackend.cs`
-2. `FlushScene.cs`
-3. `FlushScene.RetainedTypes.cs`
-4. `DefaultDrawingBackend.Helpers.cs`
-5. `DefaultRasterizer.cs`
+1. `DrawingCanvas{TPixel}.cs`
+2. `DrawingCanvasBatcher{TPixel}.cs`
+3. `DefaultDrawingBackend.cs`
+4. `FlushScene.cs`
+5. `FlushScene.RetainedTypes.cs`
+6. `DefaultDrawingBackend.Helpers.cs`
+7. `DefaultRasterizer.cs`
 
 That order mirrors the runtime flow:
 
-backend orchestration -> flush planning -> row execution structures -> worker helpers -> scan conversion
+canvas and backend selection -> backend orchestration -> flush planning -> row execution structures -> worker helpers -> scan conversion
 
 ## The Mental Model To Keep
 
 The easiest way to keep this backend straight is to remember that it is not a command-at-a-time painter. It is a flush executor that converts visible commands into row-local retained raster work and then executes that work with reusable scratch.
 
 If that model is clear, the major types fall into place:
 
+- `DrawingCanvas<TPixel>` records intent and selects the backend
 - `DefaultDrawingBackend` orchestrates
 - `FlushScene` plans
 - `DefaultRasterizer` converts geometry to coverage
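The two selection routes this file describes can be contrasted in a short sketch. Only the type names come from the docs in this patch; the `CreateCanvas(...)` extension is mentioned in `DRAWING_CANVAS.md`, and the explicit-backend constructor shape is an assumption:

```csharp
// Illustrative sketch of the two backend-selection routes; constructor shapes assumed.
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;

using var image = new Image<Rgba32>(256, 256);

// Route 1: ordinary construction resolves IDrawingBackend from Configuration.
using DrawingCanvas<Rgba32> cpuCanvas = image.CreateCanvas();

// Route 2: specialized infrastructure supplies an explicit backend instance.
// This is the route the WebGPU helpers use to attach WebGPUDrawingBackend.
IDrawingBackend explicitBackend = new DefaultDrawingBackend();
using var explicitCanvas = new DrawingCanvas<Rgba32>(image.Frames.RootFrame, explicitBackend);
```

The point of the contrast is ownership: route 1 leaves backend lifetime to `Configuration`, while route 2 keeps it attached to whichever object constructed the canvas.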

src/ImageSharp.Drawing/Processing/Backends/DEFAULT_RASTERIZER.MD renamed to src/ImageSharp.Drawing/Processing/Backends/DEFAULT_RASTERIZER.md

Lines changed: 22 additions & 7 deletions
@@ -8,11 +8,20 @@ This rasterizer is based on ideas and implementation techniques from the Blaze p
 
 This document explains the rasterizer as a newcomer needs to understand it:
 
+- where the rasterizer fits relative to `DrawingCanvas<TPixel>` and `DefaultDrawingBackend`
 - what problem the rasterizer is solving inside the CPU backend
 - why the rasterizer is split into retained geometry building and band execution
 - what retained geometry, bands, and coverage mean in this architecture
 - how scan conversion stays separate from brush shading and frame ownership
 
+## Where The Rasterizer Fits
+
+`DefaultRasterizer` sits below `DrawingCanvas<TPixel>` and `DefaultDrawingBackend`.
+
+The canvas records commands, the batcher prepares them into a `CompositionScene`, and `DefaultDrawingBackend` chooses the row-oriented execution plan for the flush. `DefaultRasterizer` then handles the narrower geometry-to-coverage problem inside that CPU execution path.
+
+That means the rasterizer does not select the backend, own the destination frame, or interpret the public drawing API directly. It receives already-prepared geometry through the CPU backend pipeline, and the backend later routes its coverage into whichever frame exposes the CPU region for the flush.
+
 ## The Main Problem
 
 The CPU backend does not want to rediscover shape geometry every time it touches a destination row.
@@ -117,13 +126,14 @@ Coverage is the rasterizer's output.
 
 The rasterizer does not decide final pixel colors. It decides how much geometric coverage each pixel receives. The backend later passes that coverage to a `BrushRenderer<TPixel>`, which decides how the destination pixels should be shaded.
 
-## Where The Rasterizer Fits
+## Pipeline Placement
 
 The rasterizer sits in the middle of the CPU backend pipeline.
 
 Upstream:
 
 - `CompositionCommand` preparation produces prepared geometry
+- `DrawingCanvas<TPixel>` and `DrawingCanvasBatcher<TPixel>` have already selected and called the CPU backend
 - `FlushScene` decides which items are visible and when they execute
 
 Downstream:
@@ -411,15 +421,19 @@ That separation is one of the main architectural advantages of the current CPU p
 
 If you are new to this part of the library, read the rasterizer in this order:
 
-1. `CreateRasterizableGeometry(...)` in `DefaultRasterizer.cs`
-2. `Linearizer<TL>` and the concrete linearizers in `DefaultRasterizer.Linearizer.cs`
-3. retained line types in `DefaultRasterizer.RetainedTypes.cs`
-4. `ExecuteRasterizableBand(...)` in `DefaultRasterizer.cs`
-5. `Context` in `DefaultRasterizer.cs`
+1. `DrawingCanvas{TPixel}.cs`
+2. `DrawingCanvasBatcher{TPixel}.cs`
+3. `DefaultDrawingBackend.cs`
+4. `FlushScene.cs`
+5. `CreateRasterizableGeometry(...)` in `DefaultRasterizer.cs`
+6. `Linearizer<TL>` and the concrete linearizers in `DefaultRasterizer.Linearizer.cs`
+7. retained line types in `DefaultRasterizer.RetainedTypes.cs`
+8. `ExecuteRasterizableBand(...)` in `DefaultRasterizer.cs`
+9. `Context` in `DefaultRasterizer.cs`
 
 That order mirrors the data lifecycle:
 
-prepared geometry -> retained storage -> band execution -> coverage emission
+canvas intent -> prepared geometry -> retained storage -> band execution -> coverage emission
 
 ## The Mental Model To Keep
 

@@ -429,6 +443,7 @@ it is a retained fixed-point polygon scanner that transforms prepared geometry i
 
 If that model stays clear, the rest of the code becomes easier to read:
 
+- the canvas and backend docs explain how execution reaches the CPU path
 - the linearizer explains where retained line data comes from
 - `RasterizableGeometry` explains what is stored
 - the `Context` explains how retained data becomes coverage
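The coverage idea at the heart of this rasterizer doc can be shown without any library types. The sketch below is not the library's implementation (which is a retained fixed-point scanner working in bands); it only demonstrates the accumulate-then-prefix-sum pattern that turns edge crossings into per-pixel coverage under the nonzero fill rule:

```csharp
// Standalone illustration of scanline coverage accumulation (nonzero fill rule).
// NOT the library's algorithm: real code tracks fractional fixed-point coverage
// per cell. This only shows the accumulate-then-prefix-sum idea.
static float[] CoverageForRow(int width, (int X, int Direction)[] crossings)
{
    // Each edge crossing deposits a signed delta at its pixel column.
    float[] delta = new float[width + 1];
    foreach ((int x, int dir) in crossings)
    {
        delta[x] += dir; // +1 for an upward crossing, -1 for a downward crossing
    }

    // Prefix-summing the deltas yields the winding number per column;
    // nonzero winding means the column is inside the shape.
    float[] coverage = new float[width];
    float winding = 0;
    for (int x = 0; x < width; x++)
    {
        winding += delta[x];
        coverage[x] = winding != 0 ? 1f : 0f;
    }

    return coverage;
}

// One span entered at x = 2 and exited at x = 5:
float[] row = CoverageForRow(8, new[] { (2, 1), (5, -1) });
// row is 1 at x = 2, 3, 4 and 0 everywhere else
```

Fractional coverage at span edges works the same way; the deltas simply become partial values instead of whole crossings.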

src/ImageSharp.Drawing/Processing/DRAWING_CANVAS.md

Lines changed: 28 additions & 7 deletions
@@ -94,6 +94,15 @@ It is the backend handoff boundary.
 
 The backend receives a scene and a target frame. It decides how to execute that prepared work.
 
+There are two backend-selection paths in the architecture:
+
+- the ordinary public `DrawingCanvas<TPixel>` constructors resolve the backend from `Configuration`
+- specialized infrastructure can construct a canvas with an explicit backend
+
+The ordinary CPU entry points also include the `CreateCanvas(...)` extension methods on `Image<TPixel>` and `ImageFrame<TPixel>`, which route into those same constructors.
+
+That explicit-backend path matters for the WebGPU helpers. `WebGPUWindow<TPixel>`, `WebGPURenderTarget<TPixel>`, and `WebGPUDeviceContext<TPixel>` create canvases that point directly at their owned `WebGPUDrawingBackend` instance instead of storing that backend on the caller's `Configuration`.
+
 ### Frame
 
 `ICanvasFrame<TPixel>` is the target abstraction that the backend renders into.
@@ -110,6 +119,7 @@ That abstraction lets the same canvas target:
 
 - pure CPU memory with `MemoryCanvasFrame<TPixel>`
 - a native or GPU surface with `NativeCanvasFrame<TPixel>`
+- a combined CPU plus native target with `HybridCanvasFrame<TPixel>`
 - a clipped view over another frame with `CanvasRegionFrame<TPixel>`
 
 The point is not to hide all differences. The point is to express the minimum target contract the backends need.
@@ -311,6 +321,14 @@ It benefits from the same canvas-level decisions:
 - layers already exist as explicit boundaries
 - the frame already describes whether a native surface is available
 
+The WebGPU public helpers reach this point in a target-first way:
+
+- `WebGPUWindow<TPixel>` acquires a presentable native target per frame
+- `WebGPURenderTarget<TPixel>` owns an offscreen native target and can pair it with CPU memory through hybrid frames
+- `WebGPUDeviceContext<TPixel>` wraps shared or caller-owned device state and creates native-only or hybrid frames and canvases over native textures
+
+Those helpers all create `DrawingCanvas<TPixel>` instances with an explicit `WebGPUDrawingBackend`, so GPU execution stays attached to the WebGPU object that owns the native target and backend lifetime.
+
 The backend is free to choose a very different execution model because the canvas has already solved the shared semantics problem.
 
 ## The Practical Mental Model
@@ -338,15 +356,18 @@ Once those ideas are clear, the code stops looking like a random collection of t
 If you want to move from the architecture into the code, this is the best order.
 
 1. `DrawingCanvas{TPixel}.cs`
-2. `DrawingCanvasBatcher{TPixel}.cs`
-3. `CompositionCommand.cs`
-4. `CompositionCommandPreparer.cs`
-5. `DefaultDrawingBackend.cs`
-6. `FlushScene.cs`
-7. `WebGPUDrawingBackend` and its scene/dispatch types
+2. `DrawingCanvasExtensions.cs`
+3. `DrawingCanvasBatcher{TPixel}.cs`
+4. `CompositionCommand.cs`
+5. `CompositionCommandPreparer.cs`
+6. `DefaultDrawingBackend.cs`
+7. `FlushScene.cs`
+8. `WebGPUEnvironment.cs`
+9. `WebGPUWindow{TPixel}.cs`, `WebGPURenderTarget{TPixel}.cs`, and `WebGPUDeviceContext{TPixel}.cs`
+10. `WebGPUDrawingBackend` and its scene/dispatch types
 
 That path follows the real runtime flow:
 
-public API -> recorded command -> prepared scene -> backend execution
+public API -> recorded command -> prepared scene -> backend selection -> backend execution
 
 Following the code in that order is much easier than starting from the backend internals first.
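That runtime flow can be annotated with a hypothetical end-to-end snippet. The `CreateCanvas(...)` extension and the flush handoff come from this patch's docs; the drawing-method and `Flush()` names are assumptions about the recording API, and `Brushes.Solid` / `RectangularPolygon` are standard ImageSharp.Drawing types used for illustration:

```csharp
// Illustrative sketch of: public API -> recorded command -> prepared scene
//                         -> backend selection -> backend execution.
// Drawing-method names are assumptions, not the confirmed API surface.
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Drawing;
using SixLabors.ImageSharp.Drawing.Processing;
using SixLabors.ImageSharp.PixelFormats;

using var image = new Image<Rgba32>(320, 240);

// Backend selection: the ordinary constructor path resolves the CPU backend
// from the image's Configuration.
using DrawingCanvas<Rgba32> canvas = image.CreateCanvas();

canvas.FillPath(Brushes.Solid(Color.Red),           // recorded command; nothing executes yet
    new RectangularPolygon(10, 10, 100, 80));

canvas.Flush(); // the batcher prepares a CompositionScene; the selected backend executes it
```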

tests/ImageSharp.Drawing.Tests/Processing/ProcessWithDrawingCanvasTests.Text.cs

Lines changed: 2 additions & 2 deletions
@@ -346,8 +346,8 @@ public void FontShapesAreRenderedCorrectly_WithLineSpacing<TPixel>(
 
         Color color = Color.Black;
 
-        // NET472 is 0.0045 different.
-        ImageComparer comparer = ImageComparer.TolerantPercentage(0.0046F);
+        // Ubuntu on .NET 10 ARM reported a 0.0051% difference
+        ImageComparer comparer = ImageComparer.TolerantPercentage(0.0052F);
 
         provider.VerifyOperation(
             comparer,
