
DrawingCanvas API: Replace imperative extension methods with stateful canvas-based drawing model#377

Open
JimBobSquarePants wants to merge 230 commits into main from js/canvas-api

Conversation


@JimBobSquarePants JimBobSquarePants commented Mar 1, 2026

Prerequisites

  • I have written a descriptive pull-request title
  • I have verified that there are no overlapping pull-requests open
  • I have verified that my changes follow the existing coding patterns and practices as demonstrated in the repository. These follow strict StyleCop rules 👮.
  • I have provided test coverage for my change (where applicable)

Breaking Changes: DrawingCanvas API

Fix #106
Fix #244
Fix #344
Fix #367

This is a major breaking change. The library's public API has been completely redesigned around a canvas-based drawing model, replacing the previous collection of imperative extension methods.

What changed

The old API surface — dozens of IImageProcessingContext extension methods like DrawLine(), DrawPolygon(), FillPolygon(), DrawBeziers(), DrawImage(), DrawText(), etc. — has been removed entirely. These methods were individually simple but suffered from several architectural limitations:

  • Each call was an independent image processor that rasterized and composited in isolation, making it impossible to batch or reorder operations.
  • State (blending mode, clip paths, transforms) had to be passed to every single call.
  • There was no way for an alternate rendering backend to intercept or accelerate a sequence of draw calls.

The new model: DrawingCanvas

All drawing now goes through IDrawingCanvas / DrawingCanvas<TPixel>, a stateful canvas that queues draw commands and flushes them as a batch.

Via Image.Mutate() (most common)

using SixLabors.ImageSharp.Drawing;
using SixLabors.ImageSharp.Drawing.Processing;

image.Mutate(ctx => ctx.Paint(canvas =>
{
    // Fill a path
    canvas.Fill(Brushes.Solid(Color.Red), new EllipsePolygon(200, 200, 100));

    // Stroke a path
    canvas.Draw(Pens.Solid(Color.Blue, 3), new RectangularPolygon(50, 50, 200, 100));

    // Draw a polyline
    canvas.DrawLine(Pens.Solid(Color.Green, 2), new PointF(0, 0), new PointF(100, 100));

    // Draw text
    canvas.DrawText(
        new RichTextOptions(font) { Origin = new PointF(10, 10) },
        "Hello, World!",
        brush: Brushes.Solid(Color.Black),
        pen: null);

    // Draw an image
    canvas.DrawImage(sourceImage, sourceRect, destinationRect);

    // Save/Restore state (options, clip paths)
    canvas.Save(new DrawingOptions
    {
        GraphicsOptions = new GraphicsOptions { BlendPercentage = 0.5f }
    });
    canvas.Fill(brush, path);
    canvas.Restore();

    // Apply arbitrary image processing to a path region
    canvas.Apply(path, inner => inner.Brightness(0.5f));

    // Commands are flushed on Dispose (or call canvas.Flush() explicitly)
}));

Standalone usage (without Image.Mutate)

DrawingCanvas<TPixel> can be created directly from an image or frame using the CreateCanvas(...) extensions:

using var canvas = image.CreateCanvas(new DrawingOptions());

canvas.Fill(brush, path);
canvas.Draw(pen, path);
canvas.Flush();

using var canvas = image.CreateCanvas(frameIndex: 0, options: new DrawingOptions());
// ...

using var canvas = frame.CreateCanvas(new DrawingOptions());
// ...

Canvas state management

The canvas supports a save/restore stack (similar to HTML Canvas or SkCanvas):

int saveCount = canvas.Save();             // push current state
canvas.Save(options, clipPath1, clipPath2); // push and replace state

canvas.Restore();              // pop one level
canvas.RestoreTo(saveCount);   // pop to a specific level

State includes DrawingOptions (graphics options, shape options, transform) and clip paths. SaveLayer creates an offscreen layer that composites back on Restore.
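
As a sketch of how the layer stack composes, assuming SaveLayer accepts a DrawingOptions overload mirroring Save (that overload is not shown above, so treat it as an assumption):

```csharp
image.Mutate(ctx => ctx.Paint(canvas =>
{
    // Assumed overload: SaveLayer taking DrawingOptions, like Save() does.
    canvas.SaveLayer(new DrawingOptions
    {
        GraphicsOptions = new GraphicsOptions { BlendPercentage = 0.5f }
    });

    // Inside the layer the two fills blend with each other at full opacity...
    canvas.Fill(Brushes.Solid(Color.Red), new EllipsePolygon(100, 100, 60));
    canvas.Fill(Brushes.Solid(Color.Blue), new EllipsePolygon(140, 100, 60));

    // ...and the finished layer composites back onto the image at 50% here.
    canvas.Restore();
}));
```

The difference from a plain Save is that the individual fills do not blend with the backdrop one by one; only the flattened offscreen layer does.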

IDrawingBackend — bring your own renderer

The library's rasterization and composition pipeline is abstracted behind IDrawingBackend. This interface has the following methods:

| Method | Purpose |
| --- | --- |
| FlushCompositions&lt;TPixel&gt; | Flushes queued composition operations for the target. |
| TryReadRegion&lt;TPixel&gt; | Reads pixels back from the target (needed for Process() and DrawImage()). |

The library ships with DefaultDrawingBackend (CPU, tiled fixed-point rasterizer). An experimental WebGPU compute-shader backend (ImageSharp.Drawing.WebGPU) is also available, demonstrating how alternate backends plug in. Users can provide their own implementations — for example, GPU-accelerated backends, SVG emitters, or recording/replay layers.

Backends are registered on Configuration:

configuration.SetDrawingBackend(myCustomBackend);
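
For example, a host could opt a single image into a custom backend via a cloned configuration. This is a sketch: MyRecordingBackend is a hypothetical IDrawingBackend implementation, while Configuration.Clone() is the existing core-library API.

```csharp
Configuration configuration = Configuration.Default.Clone();
configuration.SetDrawingBackend(new MyRecordingBackend()); // hypothetical backend

// Images created with this configuration route their canvas flushes through
// the custom backend; images using Configuration.Default are unaffected.
using var image = new Image<Rgba32>(configuration, 512, 512);
image.Mutate(ctx => ctx.Paint(canvas =>
    canvas.Fill(Brushes.Solid(Color.Red), new EllipsePolygon(256, 256, 128))));
```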

Migration guide

| Old API | New API |
| --- | --- |
| ctx.Fill(color, path) | ctx.ProcessWithCanvas(c => c.Fill(Brushes.Solid(color), path)) |
| ctx.Fill(brush, path) | ctx.ProcessWithCanvas(c => c.Fill(brush, path)) |
| ctx.Draw(pen, path) | ctx.ProcessWithCanvas(c => c.Draw(pen, path)) |
| ctx.DrawLine(pen, points) | ctx.ProcessWithCanvas(c => c.DrawLine(pen, points)) |
| ctx.DrawPolygon(pen, points) | ctx.ProcessWithCanvas(c => c.Draw(pen, new Polygon(new LinearLineSegment(points)))) |
| ctx.FillPolygon(brush, points) | ctx.ProcessWithCanvas(c => c.Fill(brush, new Polygon(new LinearLineSegment(points)))) |
| ctx.DrawText(text, font, color, origin) | ctx.ProcessWithCanvas(c => c.DrawText(new RichTextOptions(font) { Origin = origin }, text, Brushes.Solid(color), null)) |
| ctx.DrawImage(overlay, opacity) | ctx.ProcessWithCanvas(c => c.DrawImage(overlay, sourceRect, destRect)) |
| Multiple independent draw calls | Single ProcessWithCanvas block — commands are batched and flushed together |

Other breaking changes in this PR

  • AntialiasSubpixelDepth removed — The rasterizer now uses a fixed 256-step (8-bit) subpixel depth. The old AntialiasSubpixelDepth property (default: 16) controlled how many vertical subpixel steps the rasterizer used per pixel row. The new fixed-point scanline rasterizer integrates area/cover analytically per cell rather than sampling at discrete subpixel rows, so the "depth" is a property of the coordinate precision (24.8 fixed-point), not a tunable sample count. 256 steps gives ~0.4% coverage granularity — more than sufficient for all practical use cases. The old default of 16 (~6.25% granularity) could produce visible banding on gentle slopes.
  • GraphicsOptions.Antialias — now controls RasterizationMode (antialiased vs aliased). When false, coverage is snapped to binary using AntialiasThreshold.
  • GraphicsOptions.AntialiasThreshold — new property (0–1, default 0.5) controlling the coverage cutoff in aliased mode. Pixels with coverage at or above this value become fully opaque; pixels below are discarded.
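
A minimal sketch of aliased-mode drawing using these two properties, applying GraphicsOptions through DrawingOptions as in the Save example above:

```csharp
image.Mutate(ctx => ctx.Paint(canvas =>
{
    canvas.Save(new DrawingOptions
    {
        GraphicsOptions = new GraphicsOptions
        {
            Antialias = false,        // aliased mode: coverage snapped to binary
            AntialiasThreshold = 0.5f // coverage >= 0.5 -> fully opaque; below -> discarded
        }
    });

    // Produces hard, stair-stepped edges - useful for masks or pixel-art output.
    canvas.Fill(Brushes.Solid(Color.Black), new EllipsePolygon(100, 100, 50));
    canvas.Restore();
}));
```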

Benchmarks

All benchmarks run under the following environment.

BenchmarkDotNet=v0.13.1, OS=Windows 10.0.26200
Unknown processor
.NET SDK=10.0.103
  [Host] : .NET 8.0.24 (8.0.2426.7010), X64 RyuJIT

Toolchain=InProcessEmitToolchain  InvocationCount=1  IterationCount=40
LaunchCount=3  UnrollFactor=1  WarmupCount=40

DrawPolygonAll - Renders a 7200x4800px path of the state of Mississippi with a 2px stroke.

| Method | Mean | Error | StdDev | Median | Ratio | RatioSD |
| --- | ---: | ---: | ---: | ---: | ---: | ---: |
| SkiaSharp | 42.20 ms | 2.197 ms | 6.976 ms | 38.18 ms | 1.00 | 0.00 |
| SystemDrawing | 44.10 ms | 0.172 ms | 0.538 ms | 44.05 ms | 1.07 | 0.16 |
| ImageSharp | 12.09 ms | 0.083 ms | 0.269 ms | 12.06 ms | 0.29 | 0.05 |
| ImageSharpWebGPU | 12.47 ms | 0.291 ms | 0.940 ms | 12.71 ms | 0.30 | 0.05 |

FillParis - Renders 1096x1060px scene containing 50K fill paths.

| Method | Mean | Error | StdDev | Ratio | RatioSD |
| --- | ---: | ---: | ---: | ---: | ---: |
| SkiaSharp | 104.46 ms | 0.356 ms | 1.145 ms | 1.00 | 0.00 |
| SystemDrawing | 148.53 ms | 0.327 ms | 1.033 ms | 1.42 | 0.02 |
| ImageSharp | 66.32 ms | 0.999 ms | 3.083 ms | 0.64 | 0.03 |
| ImageSharpWebGPU | 41.95 ms | 0.457 ms | 1.368 ms | 0.40 | 0.01 |

@JimBobSquarePants
Member Author

It's weird to see the CI failure appear, since my commit didn't touch product code or xunit tests. However, I keep seeing similar failures intermittently when running tests locally on my machine.

One explanation for these failures could be that the parallel logic is not stable: e.g. threads may be racing for shared pixels, and slightly different results may be rendered depending on which thread wins the race. I would consider such behavior a bug.

Fixed. Daft mistake: I was caching ToLinearGeometry() results in a non-thread-safe way.

@JimBobSquarePants
Member Author

API

I find the ProcessWithCanvas method oddly verbose and hard to discover. Since the user actually wants to draw things here, how about naming the method Draw? Other simple names may also do the job.

image.Mutate(ctx => ctx.Draw(canvas =>
{
    // Fill a path
    canvas.Fill(Brushes.Solid(Color.Red), new EllipsePolygon(200, 200, 100));

    // Stroke a path
    canvas.Draw(Pens.Solid(Color.Blue, 3), new RectangularPolygon(50, 50, 200, 100));

    // Draw a polyline
    canvas.DrawLine(Pens.Solid(Color.Green, 2), new PointF(0, 0), new PointF(100, 100));
}));

Also, the image.Mutate(ctx => ctx.ProcessWithCanvas(canvas => line, together with the closing parentheses, is somewhat of an ergonomics killer. I think in most cases, Drawing users only want to draw things in their Mutate calls without chaining in further processing calls. It would be nice to provide helpers that avoid the double delegates; for example, .MutateDraw() would look like this:

image.MutateDraw(canvas =>
{
    // Fill a path
    canvas.Fill(Brushes.Solid(Color.Red), new EllipsePolygon(200, 200, 100));

    // Stroke a path
    canvas.Draw(Pens.Solid(Color.Blue, 3), new RectangularPolygon(50, 50, 200, 100));

    // Draw a polyline
    canvas.DrawLine(Pens.Solid(Color.Green, 2), new PointF(0, 0), new PointF(100, 100));
});

I agree that ProcessWithCanvas was too implementation-shaped for the public API. I renamed the entry point to Paint, which fits better with the rest of the ImageSharp processing surface (Crop, Resize, GaussianBlur, etc.) while still covering the full canvas model rather than only stroke-style drawing.

I did not go with Draw because the callback can also Fill, Clear, Clip, draw text, and perform other canvas operations, and Draw already has a specific meaning on IDrawingCanvas itself.

I also considered your MutateDraw-style convenience API, but I do not think it works well as the primary shape here because it pulls drawing out of the normal IImageProcessingContext pipeline and makes composition with further chained operations more awkward. Keeping this as a context extension preserves the usual processing flow.

I’ve also updated the IntelliSense docs for the renamed extensions/processors and the related markdown/readme content to reflect the new API.

@JimBobSquarePants
Member Author

Parallel performance

In a6de45e I pushed an equivalent of SixLabors/ImageSharp#3111: extending benchmarks with a single-threaded mode and a ManualBenchmarks project which is hosting a throughput benchmark implemented in DrawingThroughputBenchmark.cs.

I ran these on my AMD Ryzen 7 5800H with Radeon Graphics, 1 CPU, 16 logical and 8 physical cores.

Single vs multi-threaded runs of FillTiger

|                    Method | Dimensions |        Mean |     Error |      StdDev |      Median | Ratio | RatioSD |
|-------------------------- |----------- |------------:|----------:|------------:|------------:|------:|--------:|
|                 SkiaSharp |        100 |    858.7 us |   3.06 us |     9.37 us |    856.0 us |  1.00 |    0.00 |
|             SystemDrawing |        100 |  2,075.6 us |  49.87 us |   157.60 us |  2,036.1 us |  2.43 |    0.18 |
|                ImageSharp |        100 |  4,650.1 us | 115.22 us |   369.13 us |  4,729.9 us |  5.46 |    0.43 |
| ImageSharp_SingleThreaded |        100 |  7,618.4 us | 983.83 us | 3,151.81 us |  6,174.6 us |  9.29 |    3.59 |
|          ImageSharpWebGPU |        100 |  2,425.6 us | 118.34 us |   356.26 us |  2,397.2 us |  2.83 |    0.42 |
|                           |            |             |           |             |             |       |         |
|                 SkiaSharp |       1000 |  7,399.4 us | 101.66 us |   321.24 us |  7,339.4 us |  1.00 |    0.00 |
|             SystemDrawing |       1000 | 17,367.9 us | 215.58 us |   681.25 us | 17,131.4 us |  2.35 |    0.15 |
|                ImageSharp |       1000 |  6,790.0 us | 768.78 us | 2,484.95 us |  6,386.2 us |  0.92 |    0.36 |
| ImageSharp_SingleThreaded |       1000 | 15,816.6 us | 191.37 us |   610.30 us | 15,862.8 us |  2.14 |    0.13 |
|          ImageSharpWebGPU |       1000 |  4,113.7 us | 285.05 us |   900.79 us |  3,796.4 us |  0.56 |    0.13 |

The algorithm does scale with threads, but (according to this benchmark) it performs worse for small images than competitors, unlike in the large image case.

DrawingThroughputBenchmark runs

I ran Tiger rendering for 10 seconds with

  • large (2000x2000) and small (200x200) images
  • sequential (-p 1) and parallel (-p 16) execution

I averaged out MegaPixelsPerSec values of 10 runs for each configuration.

| Size/Mode | Sequential (MP/s) | Parallel (MP/s) |
| --- | ---: | ---: |
| Large | 540 | 446 |
| Small | 67 | 55 |

Parallelization results in a 15-20% reduction in throughput. Moreover, small images yield significantly worse MP/s values, which is unexpected. For image processors in the core library, MP/s is typically higher on small images because of better memory locality.

I think these issues are worth investigating.

Thanks @antonfirsov for adding the benchmarks, they're VERY useful!

I dug into this a bit further and reran the throughput benchmark with the two concurrency dimensions separated:

  • internal drawing parallelism: -p
  • outer request concurrency: -c

That changes the picture quite a lot.

When I hold -c 1 and vary only -p, internal parallelism is clearly helping rather than hurting:

  • Large (2000x2000): p=1 -> 160 MP/s, p=8 -> 557 MP/s, p=16 -> 828 MP/s
  • Small (200x200): p=1 -> 18.6 MP/s, p=8 -> 56.1 MP/s, p=16 -> 56.5 MP/s

So I do not think the previous result supports the conclusion that our internal parallel execution is slower than serial.

For the small-image case, I also would not expect MP/s to behave like a typical core pixel processor. Tiger rendering has a substantial amount of per-scene work that does not scale with image area in the same way:

  • iterating the SVG elements
  • preparing paths and strokes
  • recording commands
  • batching and flush setup
  • brush/pen handling
  • thread coordination

That work still exists even when the target is small, so lower MP/s on small images is not by itself evidence of a regression. In absolute terms the small images are still much faster; they just amortize the fixed scene cost less effectively.

The earlier throughput result appears to have been confounded by mixing inner parallelism with outer request concurrency. If -p was varied while -c was left at its default, then the "parallel" case was not "one request, parallel" but rather "parallel work inside each request on top of already-high request concurrency", which is a different workload and can absolutely reduce overall MP/s through oversubscription.

I also tested the service-throughput shape explicitly on my machine:

  • c=8, p=1 -> 795 MP/s
  • c=4, p=2 -> 854 MP/s
  • c=2, p=4 -> 657 MP/s
  • c=1, p=8 -> 543 MP/s

That suggests the best throughput here comes from a balanced split between outer concurrency and inner parallelism, not from maximizing either one in isolation.

On the 8 vs 16 question: I agree that 8 is the sensible first comparison point on an 8-physical-core machine, but in this workload 16 did materially outperform 8 for a single large request, so simultaneous multithreading is helping here rather than hurting. For the small case, 16 buys almost nothing over 8, so the benefit is workload dependent.

Taken together, I think these results support keeping the current defaults for the ordinary non-concurrent case. For concurrent hosts, the optimal split between per-request parallelism and request-level concurrency is workload-dependent, so I think the right answer is exactly what we have now: sensible defaults, with MaxDegreeOfParallelism available for callers that need to tune for their own throughput profile.
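
As a sketch, a concurrent host tuning toward the balanced split described above might cap per-request drawing parallelism through the core library's Configuration. The values here are illustrative, not recommendations:

```csharp
// One cloned configuration shared by all request handlers.
Configuration configuration = Configuration.Default.Clone();
configuration.MaxDegreeOfParallelism = 2; // e.g. 4 concurrent requests x 2 inner threads

// Each request then renders with bounded inner parallelism.
using var image = new Image<Rgba32>(configuration, 2000, 2000);
image.Mutate(ctx => ctx.Paint(canvas => canvas.Fill(brush, path)));
```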

drawing-throughput-benchmark-results.md

@antonfirsov
Member

antonfirsov commented Apr 15, 2026

I renamed the entry point to Paint

Paint sounds awesome!


When I hold -c 1 and vary only -p, internal parallelism is clearly helping rather than hurting [...] So I do not think the previous result supports the conclusion that our internal parallel execution is slower than serial.

Single-user "slowness" wasn't the conclusion I wanted to imply. The data shows that, given high concurrency, parallelism hurts throughput, which is something I've only seen happen with Convolution2DProcessor in the core library. The reason this matters is that AFAIK server-side processing is still the primary use case of the CPU renderer, and server-side processing usually needs to scale well with concurrency. IMO good library design means delivering good defaults for the primary use case, i.e. not asking the user in the documentation to tune MaxDegreeOfParallelism for good throughput. Given you don't want to abandon parallelism, my suggestion is to root-cause and fix this before the release.

I will try helping by running some profiling.

@antonfirsov
Member

antonfirsov commented Apr 15, 2026

For the small-image case, I also would not expect MP/s to behave like a typical core pixel processor. Tiger rendering has a substantial amount of per-scene work that does not scale with image area in the same way:
[..]

  • preparing paths and strokes

Is there any more information that could be cached when the same path/stroke is being used?

[..]

  • thread coordination

In the core library, threading overhead is reduced or sometimes eliminated for small images via MinimumPixelsProcessedPerTask. Is there a way to introduce a similar mechanism?

@JimBobSquarePants
Member Author

JimBobSquarePants commented Apr 15, 2026

For the small-image case, I also would not expect MP/s to behave like a typical core pixel processor. Tiger rendering has a substantial amount of per-scene work that does not scale with image area in the same way:
[..]

  • preparing paths and strokes

Is there any more information that could be cached when the same path/stroke is being used?

[..]

  • thread coordination

In the core library, threading overhead is reduced or sometimes eliminated for small images via MinimumPixelsProcessedPerTask. Is there a way to introduce a similar mechanism?

We memoise the result of IPath.ToLinearGeometry() when the transform is the identity, but other than that there's nothing else we can cache.

I just ran some experiments using MinimumPixelsProcessedPerTask to determine partition counts during scene execution, which is the only place it made sense. It led to a noticeable degradation in performance.

Using

| Scenario | Size | Width | Height | Concurrent Requests | Drawing Parallelism | Total Seconds | MegaPixelsPerSec |
| --- | --- | ---: | ---: | ---: | ---: | ---: | ---: |
| SingleImage | Large | 2000 | 2000 | 1 | 1 | 10.03 | 129.65 |
| SingleImage | Large | 2000 | 2000 | 1 | 8 | 10.00 | 420.67 |
| SingleImage | Large | 2000 | 2000 | 1 | 16 | 10.00 | 625.07 |
| SingleImage | Small | 200 | 200 | 1 | 1 | 10.00 | 15.09 |
| SingleImage | Small | 200 | 200 | 1 | 8 | 10.00 | 36.57 |
| SingleImage | Small | 200 | 200 | 1 | 16 | 10.00 | 42.25 |
| ServiceThroughput | Large | 2000 | 2000 | 8 | 1 | 10.03 | 710.41 |
| ServiceThroughput | Large | 2000 | 2000 | 4 | 2 | 10.01 | 718.99 |
| ServiceThroughput | Large | 2000 | 2000 | 2 | 4 | 10.01 | 536.53 |
| ServiceThroughput | Large | 2000 | 2000 | 1 | 8 | 10.01 | 420.91 |

Not using

| Scenario | Size | Width | Height | Concurrent Requests | Drawing Parallelism | Total Seconds | MegaPixelsPerSec |
| --- | --- | ---: | ---: | ---: | ---: | ---: | ---: |
| SingleImage | Large | 2000 | 2000 | 1 | 1 | 10.01 | 159.47 |
| SingleImage | Large | 2000 | 2000 | 1 | 8 | 10.00 | 557.43 |
| SingleImage | Large | 2000 | 2000 | 1 | 16 | 10.00 | 803.04 |
| SingleImage | Small | 200 | 200 | 1 | 1 | 10.00 | 18.75 |
| SingleImage | Small | 200 | 200 | 1 | 8 | 10.00 | 65.62 |
| SingleImage | Small | 200 | 200 | 1 | 16 | 10.00 | 68.08 |
| ServiceThroughput | Large | 2000 | 2000 | 8 | 1 | 10.03 | 782.69 |
| ServiceThroughput | Large | 2000 | 2000 | 4 | 2 | 10.01 | 819.33 |
| ServiceThroughput | Large | 2000 | 2000 | 2 | 4 | 10.01 | 662.20 |
| ServiceThroughput | Large | 2000 | 2000 | 1 | 8 | 10.00 | 590.34 |

@JimBobSquarePants
Member Author

I renamed the entry point to Paint

Paint sounds awesome!

When I hold -c 1 and vary only -p, internal parallelism is clearly helping rather than hurting [...] So I do not think the previous result supports the conclusion that our internal parallel execution is slower than serial.

Single-user "slowness" wasn't the conclusion I wanted to imply. The data shows that, given high concurrency, parallelism hurts throughput, which is something I've only seen happen with Convolution2DProcessor in the core library. The reason this matters is that AFAIK server-side processing is still the primary use case of the CPU renderer, and server-side processing usually needs to scale well with concurrency. IMO good library design means delivering good defaults for the primary use case, i.e. not asking the user in the documentation to tune MaxDegreeOfParallelism for good throughput. Given you don't want to abandon parallelism, my suggestion is to root-cause and fix this before the release.

I will try helping by running some profiling.

I agree that high-concurrency service throughput is worth understanding, but I don't think it should automatically define the default optimization target for ImageSharp.Drawing.

I don't see this library primarily as a web-server drawing component in the same way some ImageSharp image-processing workloads are. The scenarios I have in mind are things like CAD-style rendering, charts and graphs, UI generation, tooling, and other programmatic rendering workloads where single-render performance and overall rendering capability matter more than maximizing throughput under heavily concurrent request load.

That makes the current single-request results directly relevant, and it also means I'm comfortable with MaxDegreeOfParallelism remaining configurable for hosts that do need to tune for saturated server throughput. The data you've gathered is exactly the tuning guidance those hosts need, and I'd rather surface that in the docs than bake in a default that trades away single-render performance for a server shape that isn't the primary use case.

I did explore whether the library could self-throttle under concurrent load without user tuning. The honest answer is that any in-library mechanism needs either a shared scheduler across requests or a runtime pool-pressure signal, neither of which exists in a form we can consume without significant upstream changes. The Parallel.For + MaxDegreeOfParallelism model has the same property everywhere it's used in the ecosystem, and I don't think ImageSharp.Drawing is the right place to invent a workaround for it.

@antonfirsov
Member

antonfirsov commented Apr 15, 2026

CPU rasterizer parallel performance

The scenarios I have in mind are things like CAD-style rendering, charts and graphs, UI generation, tooling, and other programmatic rendering workloads where single-render performance and overall rendering capability matter more than maximizing throughput under heavily concurrent request load.

Scenarios that involve immediate, on-screen rendering are better addressed by the WebGPU renderer. Whatever the current primary use case of ImageSharp.Drawing V2 is will be the primary use case of the CPU renderer going forward (which I assume is server-side, but I may be wrong).

That said, I just ran the same benchmarks with the V2 rasterizer, and this PR significantly improves every aspect perf-wise (🚀), regardless of whether parallelism is used, so I'm no longer pushing to figure this out in this PR.

I did explore whether the library could self-throttle under concurrent load without user tuning.

I don't know if self-throttling is the only possible answer here, I would recommend to open a tracking issue once this is merged.

I'll move on reviewing other aspects now.

Comment thread samples/DrawShapesWithImageSharp/Program.cs Outdated
Member

@antonfirsov antonfirsov left a comment


Plenty of notes although the review is still incomplete. Most importantly, there seems to be a bug in the GPU renderer, see the comment on the lines demo.

/// <summary>
/// Small offscreen WebGPU host used by the sample so the benchmark can drive the real backend without manual WebGPU bootstrap code.
/// </summary>
internal sealed class WebGpuBenchmarkBackend : IBenchmarkBackend
Member


With 100k lines the output differs from the CPU output and from skia outputs: lines seem to stack in a wrong order.

Member

@antonfirsov antonfirsov Apr 19, 2026


Actually, with a bigger render area it reproduces with fewer lines. Some lines seem to be missing or cut:

[screenshot: rendered output with missing/cut lines]

/// without carrying a separate success flag beside the handle. Disposing it releases the
/// reference acquired by <see cref="AcquireReference"/>.
/// </remarks>
internal sealed class HandleReference : IDisposable
Member


It is a more common pattern to make such types structs so it's cheap to instantiate them, see Memory<T>.Pin() for an example.

Member Author


Fixed

Comment on lines +528 to +529
this.deviceReference = this.deviceHandle.AcquireReference();
this.queueReference = this.queueHandle.AcquireReference();
Member

@antonfirsov antonfirsov Apr 19, 2026


We should be extremely careful persisting these. Forgetting a dispose means leaking a handle. I would only do it when P/Invoking happens on such a burning-hot path that AddRef/Release becomes measurably expensive.

Otherwise passing around the SafeHandle till the point when the native call happens should be way safer, and align with existing SafeHandle practices.

[OFF] I wish Silk.NET exposed interop APIs working with SafeHandles directly, but I guess they wanted to stay hardcore for gaming perf.

Member Author


I wish Silk.NET exposed interop APIs working with SafeHandles directly, but I guess they wanted to stay hardcore for gaming perf.

Now I know about them. Agreed.

Member Author


Fixed

texture = flushContext.Api.DeviceCreateTexture(flushContext.Device, in textureDescriptor);
if (texture is null)
{
error = "Failed to create WebGPU composition texture.";
Member

@antonfirsov antonfirsov Apr 19, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Nit: passing around these error strings is odd, it would be cleaner to emit these messages to some sort of IErrorLogger.

toolbar.Controls.Add(this.CreateRunButton("1k", 1_000));
toolbar.Controls.Add(this.CreateRunButton("10k", 10_000));
toolbar.Controls.Add(this.CreateRunButton("100k", 100_000));
toolbar.Controls.Add(this.CreateRunButton("200k", 200_000));
Member


I actually wanted to make this 1M, but that makes the implementation hit buffer limits:

WebGPU (Failed: The staged-scene path tiles buffer requires 141001808 bytes, exceeding the current WebGPU binding limit of 134217728 bytes.)

Definitely not a show stopper, but some users may be limited by this, so it's worth noting down.

Member Author


It might be possible to work around this. There are hard limits to what you can send though.

canvas.Flush();
}

stopwatch.Stop();
Member

@antonfirsov antonfirsov Apr 19, 2026


We are closing the perf gap to Skia, but it's still 6-7x on my machine. This is acceptable but worth noting down. According to the profile of rendering 200k lines, the buffer mapping seems to take most of the time:

[profiler screenshot]

Member Author

@JimBobSquarePants JimBobSquarePants Apr 19, 2026


Yeah, at 200K I'm seeing about 4x. That's annoying. I'll see what I can do.

Comment on lines +53 to +63
public WebGPUWindow(Configuration configuration, WebGPUWindowOptions options)
: this(CreateConstruction(configuration, options))
{
}

private WebGPUWindow(WindowConstruction construction)
: this(construction.Window, construction.Configuration, construction.Format, construction.PresentMode)
{
}

private WebGPUWindow(
Member

@antonfirsov antonfirsov Apr 19, 2026


Nit: merging these constructors and deleting the WindowConstruction class would simplify things.

Comment on lines +57 to +62
public WebGPURenderTarget(Configuration configuration, int width, int height)
: this(AllocateOwnedTarget(configuration, width, height))
{
}

private WebGPURenderTarget(OwnedTarget ownedTarget)
Member


Nit: merging constructors and deleting OwnedTarget would simplify things. The type name is especially confusing.

Member Author


Fixed

private readonly bool ownsGraphics;
private bool isDisposed;

private WebGPURenderTarget(
Member


What happened to the "public things go first" stylecop rule?

Member Author


I think that was dumped from StyleCop years ago.

Member Author


Fixed

/// Use <see cref="Run(Action{DrawingCanvas{TPixel}})"/> when you want the window to drive rendering for you, or <see cref="TryAcquireFrame(TimeSpan, out WebGPUWindowFrame{TPixel}?)"/> when you need to drive the frame loop yourself.
/// </summary>
/// <typeparam name="TPixel">The canvas pixel format.</typeparam>
public sealed unsafe class WebGPUWindow<TPixel> : IDisposable
Member

@antonfirsov antonfirsov Apr 19, 2026


This type has limited usability since it needs to create a brand-new popup window and doesn't allow rendering to a control hosted by popular UI frameworks. I hardly see anyone using it for serious things except gamedevs who want to experiment with our API.

In #377 (review) I suggested exploring ways to create a canvas around an existing window handle that could be an entry point for WinForms/WPF/WinUI integration (maybe even Avalonia or MAUI). IMO such a feature would radically help adoption.

I can explore this, but I can't promise to do it fast.

Member Author


WebGPUWindow<TPixel> is a managed presentation surface intended for tools where we own the window. The embedding entry point you're looking for is WebGPUDeviceContext<TPixel>:

  • WebGPUDeviceContext(nint deviceHandle, nint queueHandle) wraps externally-owned device/queue handles without taking ownership.
  • CreateCanvas(nint textureHandle, nint textureViewHandle, WebGPUTextureFormatId format, int width, int height, DrawingOptions options) wraps an externally-owned texture/view into a DrawingCanvas<TPixel> for one frame.

The integration shape is the same on every platform:

  1. The host owns the WebGPU surface, swap-chain, device, and queue.
  2. The render loop acquires the current texture and view from the surface.
  3. The four handles go into WebGPUDeviceContext.CreateCanvas(...), draw, dispose, present through the host's loop.

So the primitive for "render to an existing window/control" is already there. Per-framework wrappers (WinForms control, WPF host, Avalonia control, etc.) that hide steps 1 and 2 would just need framework-specific swap-chain plumbing utilizing that type.
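
Under those assumptions, the per-frame shape might be sketched like this (deviceHandle, queueHandle, textureHandle, textureViewHandle, format, width, and height are supplied by the host's own swap-chain plumbing and are illustrative here):

```csharp
// 1) Host owns the device/queue; wrap them without taking ownership.
using var context = new WebGPUDeviceContext<Rgba32>(deviceHandle, queueHandle);

// 2) Per frame: the host acquires the surface's current texture and view,
//    then wraps them in a one-frame canvas.
using (DrawingCanvas<Rgba32> canvas = context.CreateCanvas(
    textureHandle, textureViewHandle, format, width, height, new DrawingOptions()))
{
    canvas.Fill(Brushes.Solid(Color.CornflowerBlue), scenePath);
    canvas.Flush();
} // 3) dispose, then the host presents through its own loop
```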
