New Inference Engines now available in Procyon
May 1, 2025
We’re excited to announce that we are further expanding the range of AI inference technologies supported by the Procyon AI Image Generation Benchmark with the addition of Qualcomm® AI Engine Direct (QNN) (INT8) and AMD-optimized ONNX (FP16).
Each implementation was developed in close collaboration with the hardware vendors to make measuring and comparing AI image generation performance reliable and accurate, while keeping it as easy as clicking a button.
New Inference Engine and Windows on Arm support
The Procyon AI Image Generation Benchmark can now be run on select Windows on Arm devices with the addition of the Qualcomm® AI Engine Direct (QNN) inference engine. This option runs a model quantized with INT8 weights and INT16 activations.
Using a higher-precision activation layer than the model weights achieves a better balance between image quality and performance on Snapdragon X Elite hardware.
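As an illustration of this kind of mixed-precision setup, the sketch below uses ONNX Runtime's post-training quantization to produce INT8 weights with INT16 activations. It is only a generic example with placeholder file names and calibration data; the actual QNN toolchain and the models shipped with the benchmark are prepared differently, and 16-bit activation types require a recent onnxruntime version.

```python
# Illustrative sketch only: generic ONNX Runtime post-training quantization with
# INT8 weights and INT16 activations. File names and calibration data are placeholders;
# this is not the toolchain used to prepare the benchmark's QNN models.
import numpy as np
from onnxruntime.quantization import CalibrationDataReader, QuantType, quantize_static

class RandomCalibrationReader(CalibrationDataReader):
    """Feeds a few random tensors as calibration samples (placeholder data)."""
    def __init__(self, input_name="latent_sample", shape=(1, 4, 64, 64), count=8):
        self._samples = iter(
            [{input_name: np.random.rand(*shape).astype(np.float32)} for _ in range(count)]
        )

    def get_next(self):
        return next(self._samples, None)

quantize_static(
    model_input="unet_fp32.onnx",       # hypothetical FP32 ONNX export
    model_output="unet_w8a16.onnx",
    calibration_data_reader=RandomCalibrationReader(),
    weight_type=QuantType.QInt8,        # INT8 weights
    activation_type=QuantType.QInt16,   # higher-precision INT16 activations
)
```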
When comparing results from different inference engines, it is also important to consider differences in the quality of the results. Quality can be judged by inspecting the generated images themselves, as well as by the Procyon AI quality metrics we publish on our site.
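For a rough do-it-yourself comparison, one quick check is to generate the same prompt and seed on two engines and compare the outputs with a standard image-similarity measure. The snippet below uses SSIM from scikit-image with placeholder file names; it is not the quality metric Procyon publishes, just a simple sanity check.

```python
# Illustrative sketch only: compare two generated images with SSIM.
# This is not the Procyon AI quality metric; file names are placeholders.
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity as ssim

def load_gray(path: str) -> np.ndarray:
    """Load an image and convert it to a grayscale float array."""
    return np.asarray(Image.open(path).convert("L"), dtype=np.float64)

reference = load_gray("sd15_tensorrt_seed42.png")  # hypothetical output from one engine
candidate = load_gray("sd15_qnn_seed42.png")       # hypothetical output from another engine

score = ssim(reference, candidate, data_range=255.0)
print(f"SSIM vs. reference: {score:.3f}")  # closer to 1.0 means structurally more similar
```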
Test Image Generation performance with AMD-optimized ONNX
With the addition of AMD-optimized ONNX FP16 models for Stable Diffusion 1.5 and Stable Diffusion XL, the Procyon AI Image Generation Benchmark on Windows now covers inference runtimes optimized for AI accelerators from NVIDIA, Intel, AMD and Qualcomm.
The AMD-optimized ONNX models can be used on systems with discrete and integrated GPUs. Please see the table below for further information on supported hardware and inference runtimes, and the short sketch after the table for one way to try a similar pipeline yourself.
| Vendor | Accelerator | Common Inference Engine | Optimized Inference Engine | Procyon Image Generation Benchmarks |
|---|---|---|---|---|
| Intel | GPUs (integrated & discrete) | ONNX runtime - Microsoft Olive | Intel OpenVINO | SD 1.5 (INT8, FP16), SD XL (INT8) |
| Intel | NPU | Not currently supported | Intel OpenVINO | SD 1.5 (INT8) |
| AMD | GPU (integrated & discrete) | ONNX runtime - Microsoft Olive | AMD-optimized ONNX | SD 1.5 (FP16), SD XL (FP16) |
| AMD | NPU | Not currently supported | | |
| NVIDIA | GPU | ONNX runtime - Microsoft Olive | TensorRT | SD 1.5 (INT8, FP16), SD XL (FP16) |
| Qualcomm | NPU | ONNX runtime - Microsoft Olive | QNN | SD 1.5 (INT8) |
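For readers who want to experiment outside the benchmark, the sketch below shows one way to run a Stable Diffusion 1.5 pipeline through ONNX Runtime's DirectML execution provider using Hugging Face Optimum. It is a generic example with an assumed public checkpoint, not the AMD-optimized models or the code path the benchmark itself uses.

```python
# Illustrative sketch only: a generic Stable Diffusion 1.5 ONNX pipeline running on a GPU
# via ONNX Runtime's DirectML execution provider. This is not the AMD-optimized model or
# the benchmark's own code path; the checkpoint name is a public example.
from optimum.onnxruntime import ORTStableDiffusionPipeline

pipeline = ORTStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # public checkpoint, exported to ONNX on load
    export=True,
    provider="DmlExecutionProvider",   # DirectML targets integrated and discrete GPUs
)

image = pipeline(
    "a lighthouse on a rocky coast at sunset",
    num_inference_steps=25,
).images[0]
image.save("sd15_onnx_dml.png")
```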
With Procyon AI benchmarks supporting such a wide range of inference technologies, it’s important to keep your Procyon installation up to date. AI inference software updates can bring significant performance benefits, so results from an older version of Procyon may not be the most accurate.