This guide shows you how to create a C# native addon that uses Windows Machine Learning (WinML) in your Electron app. WinML allows you to run machine learning models (ONNX format) locally on Windows devices for tasks like image classification, object detection, and more.
Before starting this guide, make sure you have:
- Completed the development environment setup
- A device running Windows 11 or Windows 10 (version 1809 or later)
Note
WinML runs on any Windows 10 (1809+) or Windows 11 device. For best performance, devices with GPUs or NPUs are recommended, but the API works on CPU as well.
Let's create a native addon that will use WinML APIs. We'll use a C# template that leverages node-api-dotnet to bridge JavaScript and C#.
```shell
npx winapp node create-addon --template cs --name winMlAddon
```

This creates a `winMlAddon/` folder with:

- `addon.cs` - Your C# code that will call WinML APIs
- `winMlAddon.csproj` - Project file with references to the Windows SDK and Windows App SDK
- `README.md` - Documentation on how to use the addon
The command also adds a build-winMlAddon script to your package.json for building the addon:
```json
{
  "scripts": {
    "build-winMlAddon": "dotnet publish ./winMlAddon/winMlAddon.csproj -c Release"
  }
}
```

The template automatically includes references to both SDKs, so you can immediately start calling Windows APIs!
Let's verify everything is set up correctly by building the addon:
```shell
# Build the C# addon
npm run build-winMlAddon
```

📝 Note: You can also create a C++ addon using `npx winapp node create-addon` (without the `--template` flag). C++ addons use node-addon-api and provide direct access to Windows APIs with maximum performance. See the C++ Notification Addon guide for a walkthrough or the full command documentation for more options.
We'll use the Classify Image sample from the AI Dev Gallery as our reference. This sample uses the SqueezeNet 1.1 model for image classification.
- Install the AI Dev Gallery
- Navigate to the Classify Image sample
- Download the SqueezeNet 1.1 model (it supports CPU, GPU, and NPU)
- Click Open Containing Folder to locate the `.onnx` file
- Copy the `.onnx` file to a `models/` folder in your project root, making sure the filename matches the one the addon loads (`squeezenet1.1-7.onnx` in the sample code)
Note
The model can also be downloaded directly from the ONNX Model Zoo GitHub repo
Before adding the WinML code, we need to add two additional NuGet packages that are required for image processing and ONNX Runtime extensions.
Add the following package versions to the `Directory.Packages.props` file in the root of your project (it was created automatically when you scaffolded the addon):
```xml
<Project>
  <PropertyGroup>
    <!-- Enable central package versioning -->
    <ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
  </PropertyGroup>
  <ItemGroup>
    <PackageVersion Include="Microsoft.JavaScript.NodeApi" Version="0.9.17" />
    <PackageVersion Include="Microsoft.JavaScript.NodeApi.Generator" Version="0.9.17" />
    <!-- Add these two packages for WinML -->
+   <PackageVersion Include="Microsoft.ML.OnnxRuntime.Extensions" Version="0.14.0" />
+   <PackageVersion Include="System.Drawing.Common" Version="9.0.9" />
    <!-- These versions may be updated automatically during restore to match yaml -->
    <PackageVersion Include="Microsoft.WindowsAppSDK" Version="2.0.0-experimental3" />
    <PackageVersion Include="Microsoft.Windows.SDK.BuildTools" Version="10.0.26100.7175" />
  </ItemGroup>
</Project>
```

Open `winMlAddon/winMlAddon.csproj` and add the package references to the `<ItemGroup>`:
```xml
<ItemGroup>
  <PackageReference Include="Microsoft.JavaScript.NodeApi" />
  <PackageReference Include="Microsoft.JavaScript.NodeApi.Generator" />
  <!-- Add these two packages for WinML -->
+ <PackageReference Include="Microsoft.ML.OnnxRuntime.Extensions" />
+ <PackageReference Include="System.Drawing.Common" />
  <PackageReference Include="Microsoft.Windows.SDK.BuildTools" />
  <PackageReference Include="Microsoft.WindowsAppSDK" />
</ItemGroup>
```

What these packages do:
- Microsoft.ML.OnnxRuntime.Extensions - Provides additional operators and utilities for ONNX Runtime
- System.Drawing.Common - Enables image loading and manipulation for preprocessing
The AI Dev Gallery shows the complete implementation for image classification with SqueezeNet:
We've adapted this code for Electron; you can find the complete implementation in the electron-winml sample. The winMlAddon/ folder contains the modified code from the AI Dev Gallery.
You can either:
Option A: Copy from the sample
Copy the entire winMlAddon/ folder from samples/electron-winml/winMlAddon/ to your project root, replacing the one created in Step 1.
Option B: Manually update your addon
Open winMlAddon/addon.cs and update it with the code from the sample. The complete source is available at samples/electron-winml/winMlAddon/addon.cs.
Let's highlight the important parts of the implementation and key differences from the AI Dev Gallery code:
Unlike the AI Dev Gallery code, our Electron addon requires the JavaScript code to pass the project root path. This is necessary because:
- The addon needs to locate the ONNX model file in the `models/` folder
- Native dependencies (DLLs) need to be loaded from specific directories
```csharp
[JSExport]
public static async Task<Addon> CreateAsync(string projectRoot)
{
    if (!Path.Exists(projectRoot))
    {
        throw new Exception("Project root is invalid.");
    }

    var addon = new Addon(projectRoot);
    addon.PreloadNativeDependencies();

    string modelPath = Path.Join(projectRoot, "models", @"squeezenet1.1-7.onnx");
    await addon.InitModel(modelPath, ExecutionProviderDevicePolicy.DEFAULT, null, false, null);
    return addon;
}
```

This automatically selects the best execution provider (CPU, GPU, or NPU) based on device capabilities.
The addon includes a PreloadNativeDependencies() method to load required DLLs. This approach works for both development and production scenarios without needing to copy DLLs to the project root:
```csharp
private void PreloadNativeDependencies()
{
    // Loads required DLLs from the winMlAddon build output
    // This ensures dependencies are available regardless of the execution context
}
```

This is called during initialization, before loading the model, ensuring all native libraries are available.
To ensure the addon works correctly in production builds, you need to configure your packager to:
- Unpack native files - DLLs, ONNX models, and .node files must be accessible outside the ASAR archive
- Exclude unnecessary files - Keep the package size small by excluding build artifacts and temporary files
For Electron Forge, update your forge.config.js:
```javascript
// From samples/electron-winml/forge.config.js
module.exports = {
  packagerConfig: {
    asar: {
      // Unpack native files so they can be accessed by the addon
      unpack: "**/*.{dll,exe,node,onnx}"
    },
    ignore: [
      // Exclude .winapp folder (SDK packages and headers)
      /^\/\.winapp\//,
      // Exclude MSIX packages
      /\.msix$/,
      // Exclude winMlAddon source files, but keep the dist folder
      /^\/winMlAddon\/(?!dist).+/
    ]
  },
  // ... rest of your config
};
```

What this does:
- `asar.unpack` - Extracts DLLs, executables, .node binaries, and ONNX models to `app.asar.unpacked/`, making them accessible at runtime via file system paths
- The JavaScript code adjusts paths automatically (see the `app.asar` → `app.asar.unpacked` replacement in the test code later in this guide)
- `ignore` - Excludes from the final package: the `.winapp/` folder (SDK packages and headers, not needed at runtime), `.msix` files (packaged outputs), and `winMlAddon/` source files (keeping only the `dist/` folder with compiled binaries)
📝 Note: If you're using a different packaging tool (electron-builder, etc.), you'll need to configure similar settings for unpacking native dependencies and excluding development files. Check your packager's documentation for ASAR unpacking options.
The ClassifyImage method processes an image and returns predictions:
```csharp
[JSExport]
public async Task<Prediction[]> ClassifyImage(string imagePath)
{
    // Loads the image, preprocesses it, and runs inference
    // Returns top predictions with labels and confidence scores
}
```

The complete implementation handles:
- Image loading and preprocessing (resizing, normalization)
- Running the model inference
- Post-processing results to get top predictions with labels and confidence scores
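On the JavaScript side, the post-processing step can stay very light. A hedged sketch, assuming each prediction object exposes `label` and `confidence` as in the test code later in this guide (the helper name `topPredictions` is ours):

```javascript
// Returns the k most confident predictions, optionally dropping
// anything below a minimum confidence threshold.
// Assumes predictions look like { label: string, confidence: number }.
function topPredictions(predictions, k = 5, minConfidence = 0) {
  return [...predictions]
    .filter((p) => p.confidence >= minConfidence)
    .sort((a, b) => b.confidence - a.confidence)
    .slice(0, k);
}
```

Copying the array before sorting keeps the addon's original result untouched, which matters if you display the raw output elsewhere.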
📝 Note: The full source code includes image preprocessing, tensor creation, and result parsing. Check the sample implementation for all the details.
The addon provides these main functions:
- CreateAsync - Initializes the addon and loads the SqueezeNet model
- ClassifyImage - Takes an image path and returns classification predictions
WinML automatically selects the best execution device (CPU, GPU, or NPU) based on availability.
Now build the addon:
```shell
npm run build-winMlAddon
```

This compiles your C# code using Native AOT (Ahead-of-Time compilation), which:

- Creates a `.node` binary (native addon format)
- Trims unused code for smaller bundle size
- Requires no .NET runtime on target machines
- Provides native performance
The compiled addon will be in winMlAddon/dist/winMlAddon.node.
Now let's test the addon works by calling it from the main process. Open src/index.js and follow these steps:
Add the require statements at the top:
```javascript
const winMlAddon = require('../winMlAddon/dist/winMlAddon.node');
```

Add this function to test image classification:
```javascript
const testWinML = async () => {
  console.log('Testing WinML addon...');
  try {
    let projectRoot = path.join(__dirname, '..');

    // Adjust path for packaged apps
    if (projectRoot.includes('app.asar')) {
      projectRoot = projectRoot.replace('app.asar', 'app.asar.unpacked');
    }

    const addon = await winMlAddon.Addon.createAsync(projectRoot);
    console.log('Model loaded successfully!');

    // Classify a sample image
    const imagePath = path.join(projectRoot, 'test-images', 'sample.jpg');
    const predictions = await addon.classifyImage(imagePath);

    console.log('Top predictions:');
    predictions.slice(0, 5).forEach((pred, i) => {
      console.log(`${i + 1}. ${pred.label}: ${(pred.confidence * 100).toFixed(2)}%`);
    });
  } catch (error) {
    console.error('Error testing WinML:', error.message);
  }
};
```

Key points:
- The path adjustment (`app.asar` → `app.asar.unpacked`) ensures the code works in both development and packaged apps
- This accesses the unpacked native files configured in `forge.config.js`
Add this line at the end of the createWindow() function:
```javascript
testWinML();
```

To test image classification:
- Create a `test-images/` folder in your project root
- Add some test images (e.g., `sample.jpg`, `cat.jpg`, `dog.jpg`)
- The SqueezeNet model recognizes 1000 different ImageNet classes
When you run the app, you'll see the classification results in the console!
💡 Tip: For a complete implementation with IPC handlers, file selection dialogs, and a UI, see the electron-winml sample.
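As a rough idea of what that IPC wiring looks like, here is an illustrative sketch, not the sample's exact code: the channel name `winml:classify` and the `getAddon` factory are our choices, and `ipcMain`/`dialog` are injected so the function is self-contained (in a real main process they come from `require('electron')`):

```javascript
// Registers an IPC handler that lets the renderer pick an image and
// classify it. Renderer side would call:
//   const preds = await ipcRenderer.invoke('winml:classify');
function registerClassifyHandler({ ipcMain, dialog }, getAddon) {
  ipcMain.handle('winml:classify', async () => {
    const { canceled, filePaths } = await dialog.showOpenDialog({
      filters: [{ name: 'Images', extensions: ['jpg', 'jpeg', 'png'] }],
      properties: ['openFile'],
    });
    if (canceled || filePaths.length === 0) return [];

    const addon = await getAddon(); // lazily created, shared Addon instance
    return addon.classifyImage(filePaths[0]);
  });
}
```

Keeping a single shared addon instance behind `getAddon` avoids reloading the model on every request.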
To make sure the Windows App SDK framework is loaded whenever your app runs, you need to set up a debug identity. Likewise, whenever you modify `appxmanifest.xml` or change assets referenced in the manifest (like app icons), you need to update your app's debug identity. Run:
```shell
npx winapp node add-electron-debug-identity
```

This command:

- Reads your `appxmanifest.xml` to get app details and capabilities
- Registers `electron.exe` in your `node_modules` with a temporary identity
- Enables you to test identity-required APIs without full MSIX packaging
📝 Note: This command is already part of the `postinstall` script we added in the setup guide, so it runs automatically after `npm install`. However, you need to run it manually whenever you:

- Modify `appxmanifest.xml` (change capabilities, identity, or properties)
- Update app assets (icons, logos, etc.)
Now run your app:
```shell
npm start
```

Check the console output - you should see the WinML test results!
⚠️ Known Issue: App Crashes or Blank Window (click to expand)
There is a known Windows bug with sparse packaging for Electron applications that causes the app to crash on start or fail to render web content. The issue has been fixed in Windows, but the fix has not yet propagated to all devices.
See the development environment setup guide for a workaround.
Congratulations! You've successfully created a native addon that can run machine learning models with WinML! 🎉
Now you're ready to:
- Package Your App for Distribution - Create an MSIX package that you can distribute
Or explore other guides:
- Creating a Phi Silica Addon - Learn how to use the Phi Silica AI API
- Getting Started Overview - Return to the main guide
To fully integrate your ONNX model, you'll need to:
- Understand your model's inputs - Images, tensors, sequences, etc.
- Create proper input bindings - Convert your data to the format WinML expects
- Process the outputs - Parse and interpret the model's predictions
- Handle errors gracefully - Model loading and inference can fail
- WinML Documentation - Official WinML documentation
- winapp CLI Documentation - Full CLI reference
- Sample Electron App - Complete working example
- AI Dev Gallery - Sample gallery of all AI APIs
- Windows App SDK Samples - Collection of Windows App SDK samples
- node-api-dotnet - C# ↔ JavaScript interop library
- Found a bug? File an issue
- WinML questions? Check the WinML documentation
Happy machine learning! 🤖

