
Procyon® AI Text Generation Benchmark

Simplifying Local LLM AI Performance Testing

Testing AI LLM performance can be very complicated and time-consuming, with full AI models requiring large amounts of storage space and bandwidth to download. There are also many variables such as quantization, conversion, and variations in input tokens that can reduce a test’s reliability if not configured correctly.

The Procyon AI Text Generation Benchmark provides a more compact and easier way to repeatedly and consistently test AI performance with multiple LLM AI models. We worked closely with many AI software and hardware leaders to ensure our benchmark tests take full advantage of the local AI accelerator hardware in your systems.

Buy Now

Example prompt (Prompt 7, RAG query): "How can benchmarking save time and money for my organization? How to choose a reference benchmark score for RFPs? Summarize how to efficiently test the performance of PCs for Enterprise IT. Answer based on the context provided."

Results and Insights


Built with input from industry leaders

  • Built with input from leading AI vendors to take full advantage of next-generation local AI accelerator hardware.
  • Seven prompts simulating multiple real-world use cases, with RAG (Retrieval-Augmented Generation) and non-RAG queries.
  • Designed to run consistent, repeatable workloads, minimizing common AI LLM workload variables.

Detailed Results

  • Get in-depth reporting on how system resources are used during AI workloads.
  • Reduced install size compared with testing against full AI models.
  • Easily compare results between devices to identify the best systems for your use cases.

AI Testing Simplified

  • Easily and quickly test using four industry-standard AI models of varying parameter sizes.
  • Get a real-time view of responses being generated during the benchmark.
  • Test with all supported inference engines in one click, or configure them to your preference.

Developed with Industry Expertise


Procyon benchmarks are designed for industry, enterprise, and press use, with tests and features created specifically for professional users. The Procyon AI Text Generation Benchmark was designed and developed with industry partners through the UL Benchmark Development Program (BDP). The BDP is a UL Solutions initiative that works closely with program members to create relevant, impartial benchmarks.

Inference Engine Performance

With the Procyon AI Text Generation Benchmark, you can measure the performance of dedicated AI processing hardware and verify inference engine implementation quality with tests based on a heavy AI text generation workload.

Designed for Professionals

We created our Procyon AI Inference Benchmarks for engineering teams who need independent, standardized tools for assessing the general AI performance of inference engine implementations and dedicated hardware.

Fast and Easy to Use

The benchmark is easy to install and run, with no complicated configuration required. Run the benchmark from the Procyon application or via the command line. View benchmark scores and charts, or export detailed result files for further analysis.

Procyon AI Text Generation Benchmark

Free Trial

Request a Trial

Site License

Get a Quote | Press License
  • Annual site license for the Procyon AI Text Generation Benchmark.
  • Unlimited users.
  • Unlimited devices.
  • Priority support via email and phone.

BDP

Contact Us | Learn More


The Benchmark Development Program™ (BDP) is a UL Solutions initiative for building partnerships with technology companies.

OEMs, ODMs, component manufacturers, and their suppliers are invited to join us in developing new AI processing benchmarks. Contact us to learn more.

System Requirements

All ONNX models

Storage: 18.25GB

All OpenVINO models

Storage: 15.45GB

Phi-3.5-mini

ONNX with DirectML
  • 6GB VRAM (Discrete GPU)
  • 16GB System RAM (iGPU)
  • Storage: 2.15GB
Intel OpenVINO
  • 4GB VRAM (Discrete GPU)
  • 16GB System RAM (iGPU)
  • Storage: 1.84GB

Llama-3.1-8B

ONNX with DirectML
  • 8GB VRAM (Discrete GPU)
  • 32GB System RAM (iGPU)
  • Storage: 5.37GB
Intel OpenVINO
  • 8GB VRAM (Discrete GPU)
  • 32GB System RAM (iGPU)
  • Storage: 3.88GB

Mistral-7B

ONNX with DirectML
  • 8GB VRAM (Discrete GPU)
  • 32GB System RAM (iGPU)
  • Storage: 3.69GB
Intel OpenVINO
  • 8GB VRAM (Discrete GPU)
  • 32GB System RAM (iGPU)
  • Storage: 3.48GB

Llama-2-13B

ONNX with DirectML
  • 12GB VRAM (Discrete GPU)
  • 32GB System RAM (iGPU)
  • Storage: 7.04GB
Intel OpenVINO
  • 10GB VRAM (Discrete GPU)
  • 32GB System RAM (iGPU)
  • Storage: 6.25GB

Support

Latest version: 1.0.73.0 | December 9, 2024

Languages

  • English
  • German
  • Japanese
  • Portuguese (Brazilian)
  • Simplified Chinese
  • Spanish
