
New Inference Engines now available in Procyon

May 1, 2025

We’re excited to announce that we are further expanding the number of supported AI Inference technologies in the Procyon AI Image Generation Benchmark with the addition of Qualcomm® AI Engine Direct (QNN) (INT8) and AMD-optimized ONNX (FP16).


Each of these implementations was developed in close collaboration with the hardware vendors to make measuring and comparing AI image generation performance reliable and accurate, while keeping it as easy as clicking a button.

New Inference Engines and Windows on Arm support

The Procyon AI Image Generation Benchmark can now be run on select Windows on Arm devices with the addition of the Qualcomm® AI Engine Direct (QNN) inference engine. This option runs a model quantized with INT8 weights and an INT16 activation layer.

By choosing an activation layer with higher precision than the model weights, a better balance between image quality and performance can be achieved on Snapdragon X Elite hardware.
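To illustrate why higher-precision activations matter, here is a minimal NumPy sketch (not Procyon code) that compares the error introduced by symmetric INT8 and INT16 quantization of the same activation tensor. The quantization function and the random data are illustrative assumptions only; real inference engines use calibrated, per-layer schemes.

```python
import numpy as np

def quantize(x: np.ndarray, num_bits: int) -> np.ndarray:
    """Symmetric uniform quantize-then-dequantize at the given bit width."""
    qmax = 2 ** (num_bits - 1) - 1              # 127 for INT8, 32767 for INT16
    scale = np.max(np.abs(x)) / qmax            # simple per-tensor scale
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale                            # back to floating point

rng = np.random.default_rng(0)
activations = rng.normal(0, 1, size=100_000).astype(np.float32)

for bits in (8, 16):
    err = np.abs(activations - quantize(activations, bits)).mean()
    print(f"INT{bits} activations: mean abs quantization error = {err:.6f}")
```

Running this shows the INT16 error is orders of magnitude smaller, which is the intuition behind pairing INT8 weights with INT16 activations.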

When comparing results from different inference engines, it is also important to consider differences in output quality. Quality can be judged from the results each inference engine produces (such as the generated images themselves), as well as from the Procyon AI quality metrics we publish on our site.

Image: Qualcomm QNN vs PyTorch output comparison, Procyon AI Image Generation Benchmark
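As a rough illustration of one way to quantify differences between engines, the sketch below computes PSNR between two images generated from the same prompt and seed. The file names are hypothetical, and this is not the quality metric Procyon publishes; it is only a simple example of comparing outputs numerically.

```python
import numpy as np
from PIL import Image

def psnr(reference: np.ndarray, candidate: np.ndarray) -> float:
    """Peak signal-to-noise ratio between two 8-bit RGB images of equal size."""
    mse = np.mean((reference.astype(np.float64) - candidate.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 20 * np.log10(255.0) - 10 * np.log10(mse)

# Hypothetical file names: outputs from two engines for the same prompt and seed.
ref = np.asarray(Image.open("sd15_fp16_reference.png").convert("RGB"))
qnn = np.asarray(Image.open("sd15_qnn_int8.png").convert("RGB"))

print(f"PSNR vs FP16 reference: {psnr(ref, qnn):.2f} dB")
```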

Test AI Image Generation performance with AMD-optimized ONNX

With the addition of AMD-optimized ONNX FP16 models for Stable Diffusion 1.5 and Stable Diffusion XL, the Procyon AI Image Generation Benchmark on Windows now covers inference runtimes optimized for AI accelerators from NVIDIA, Intel, AMD and Qualcomm.
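For readers curious how an ONNX model is pointed at a particular accelerator, the sketch below shows ONNX Runtime's execution-provider selection, which is the general mechanism behind runtimes like these. The model path is a placeholder, and Procyon selects and configures its inference engines internally, so this is only a generic illustration, not the benchmark's implementation.

```python
import onnxruntime as ort

# Placeholder path to an ONNX model; not a file shipped with Procyon.
MODEL_PATH = "stable_diffusion_1_5/unet/model.onnx"

# Preference order: try the DirectML provider (commonly used for AMD and Intel
# GPUs on Windows) first, then fall back to the CPU provider if unavailable.
session = ort.InferenceSession(
    MODEL_PATH,
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)

print("Active providers:", session.get_providers())
```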

The AMD-optimized ONNX models can be used on systems with discrete and integrated GPUs. Please see the table below for further information on supported hardware and inference runtimes.


| Vendor   | Accelerator                   | Common Inference Engine        | Optimized Inference Engine | Procyon Image Generation Benchmarks |
|----------|-------------------------------|--------------------------------|----------------------------|-------------------------------------|
| Intel    | GPUs (integrated & discrete)  | ONNX runtime - Microsoft Olive | Intel OpenVINO             | SD 1.5 (INT8, FP16), SD XL (INT8)   |
| Intel    | NPU                           | Not currently supported        | Intel OpenVINO             | SD 1.5 (INT8)                       |
| AMD      | GPU (integrated & discrete)   | ONNX runtime - Microsoft Olive | AMD-optimized ONNX         | SD 1.5 (FP16), SD XL (FP16)         |
| AMD      | NPU                           | Not currently supported        |                            |                                     |
| NVIDIA   | GPU                           | ONNX runtime - Microsoft Olive | TensorRT                   | SD 1.5 (INT8, FP16), SD XL (FP16)   |
| Qualcomm | NPU                           | ONNX runtime - Microsoft Olive | QNN                        | SD 1.5 (INT8)                       |

With Procyon AI benchmarks supporting such a wide range of inference technologies, it's important to keep your Procyon benchmarks up to date. AI inference software updates can bring significant performance benefits, so an older version of Procyon may not give you the most accurate results.

Read more about the Procyon AI Image Generation Benchmark
