Private GenAI Copilot (RAG + Internal Knowledge)
Secure on-premises LLM infrastructure for internal copilots and document Q&A
Model No. 911261
GenAI Core (Single-Node | 2U)
CPU: Dual-socket server CPU class
RAM: 512GB (up to 1TB)
Storage: 2×1.92TB NVMe (OS/Cache) + 4×3.84TB NVMe (Data)
GPU: LLM Inference GPU class (Mid)
Network: 2×25GbE (or 2×10GbE)
Remote Mgmt: IPMI/iDRAC class
Rails: Included
Best for: Internal copilots, RAG over policies/SOPs, secure document Q&A, 50–200 users (depending on concurrency)
Model No. 912261
GenAI Pro (Single-Node | 2U/4U)
CPU: Dual-socket server CPU class
RAM: 1TB
Storage: 2×3.84TB NVMe (OS/Cache) + 6×3.84TB NVMe (Data)
GPU: LLM Inference GPU class (High / max VRAM option)
Network: 2×25/100GbE
Remote Mgmt: IPMI/iDRAC class
Rails: Included
Best for: Higher-concurrency copilots, multi-department usage, longer-context workflows, and always-on private AI services
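To make the copilot workload concrete, the sketch below shows the kind of RAG service these nodes typically host: retrieve the most relevant internal documents, then pass them as context to a locally served model. It is a minimal sketch only, assuming an on-prem OpenAI-compatible chat endpoint at http://localhost:8000 (e.g. a vLLM or llama.cpp server) and a placeholder embed() function; the endpoint URL, model name, and sample documents are illustrative assumptions, not part of any shipped software.

```python
"""Minimal RAG sketch: cosine retrieval over document embeddings,
then a chat completion against a local OpenAI-compatible endpoint."""
import numpy as np
import requests

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: in practice, use a local embedding model
    # served on the same node. Deterministic random vectors keep the
    # sketch self-contained.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

# Example internal documents (policies/SOPs) with precomputed embeddings.
docs = ["Expense policy: claims above $500 need manager approval.",
        "SOP-12: backups run nightly; restore tests run quarterly."]
doc_vecs = np.stack([embed(d) for d in docs])

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    scores = doc_vecs @ q  # cosine similarity (vectors are unit-norm)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

def ask(query: str) -> str:
    context = "\n".join(retrieve(query))
    resp = requests.post(
        "http://localhost:8000/v1/chat/completions",  # assumed local endpoint
        json={"model": "local-llm",                   # assumed model name
              "messages": [
                  {"role": "system",
                   "content": "Answer only from this context:\n" + context},
                  {"role": "user", "content": query}]},
        timeout=60)
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Who approves expense claims over $500?"))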
Computer Vision Inference Server (Production AI for Cameras)
Production-grade vision AI servers for multi-camera pipelines and real-time analytics
Model No. 921261
Vision Inference Core (Single-Node | 2U)
CPU: Dual-socket server CPU class
RAM: 256–512GB
Storage: 2×1.92TB NVMe (OS) + 4×3.84TB NVMe (Hot data)
GPU: Vision inference GPU class (Mid, higher VRAM preferred)
Network: 2×25GbE
Remote Mgmt: IPMI/iDRAC class
Rails: Included
Best for: Production vision inference, multi-camera pipelines, inspection/safety analytics, and stable low-latency deployments
Model No. 922261
Vision Inference Pro (Single-Node | 2U/4U)
CPU: Dual-socket server CPU class
RAM: 512GB–1TB
Storage: 2×3.84TB NVMe (OS) + 6×3.84TB NVMe (Hot data)
GPU: Vision inference GPU class (High / max VRAM option)
Network: 2×25/100GbE
Remote Mgmt: IPMI/iDRAC class
Rails: Included
Best for: High-throughput vision, more streams, higher-resolution workloads, and mission-critical inference services
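As a rough illustration of the multi-camera pipelines these servers are sized for, the sketch below polls several RTSP streams with OpenCV and hands each frame to a placeholder infer() function standing in for the actual GPU detection model (for example, a TensorRT or ONNX Runtime session). The stream URLs and detection format are assumptions made for the example only.

```python
"""Minimal multi-stream inference loop: capture frames from several
cameras and run a placeholder detector on each one."""
import cv2

STREAMS = ["rtsp://cam-01/stream", "rtsp://cam-02/stream"]  # example URLs

def infer(frame):
    # Placeholder: run the real GPU detection model here and return
    # its detections; this stub just reports the frame size.
    h, w = frame.shape[:2]
    return [{"label": "placeholder", "bbox": (0, 0, w, h), "score": 0.0}]

def run():
    caps = [cv2.VideoCapture(url) for url in STREAMS]
    try:
        while True:
            for i, cap in enumerate(caps):
                ok, frame = cap.read()
                if not ok:
                    continue  # dropped frame; a real pipeline would reconnect
                detections = infer(frame)
                # Downstream: push detections to an analytics/alerting service.
                print(f"cam {i}: {len(detections)} detections")
    finally:
        for cap in caps:
            cap.release()

if __name__ == "__main__":
    run()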
AI Dev & Shared Sandbox Server (Team Workspace)
Shared GPU infrastructure for team experiments, notebooks, and AI development
Model No. 931261
AI Dev Sandbox Core (Single-Node | 2U)
CPU: Dual-socket server CPU class
RAM: 512GB
Storage: 2×1.92TB NVMe (OS) + 4×3.84TB NVMe (Projects)
GPU: General AI GPU class (Mid)
Network: 2×25GbE
Remote Mgmt: IPMI/iDRAC class
Rails: Included
Best for: Shared GPU sandbox for teams, notebooks, experiments, evaluation pipelines, and controlled internal AI environments
Model No. 932261
AI Dev Sandbox Pro (Single-Node | 2U/4U)
CPU: Dual-socket server CPU class
RAM: 1TB
Storage: 2×3.84TB NVMe (OS) + 6×3.84TB NVMe (Projects/Data)
GPU: General AI GPU class (High)
Network: 2×25/100GbE
Remote Mgmt: IPMI/iDRAC class
Rails: Included
Best for: Larger teams, parallel experiments, heavier datasets, multi-project usage, and a high-headroom starting point for an internal AI platform
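A common pattern on a shared sandbox node is pinning each experiment to the least-loaded GPU so teams do not collide. The sketch below is one minimal way to do that, assuming the standard nvidia-smi tool is available on the host; the launched command is a placeholder for a notebook server or training script, not a bundled utility.

```python
"""Minimal GPU-sharing sketch: query GPU memory use via nvidia-smi,
then pin a job to the least-loaded device with CUDA_VISIBLE_DEVICES."""
import os
import subprocess

def gpu_memory_used() -> list[int]:
    # nvidia-smi CSV query: one line per GPU, used memory in MiB.
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used",
         "--format=csv,noheader,nounits"], text=True)
    return [int(line) for line in out.strip().splitlines()]

def launch_on_least_loaded(cmd: list[str]) -> subprocess.Popen:
    used = gpu_memory_used()
    gpu = min(range(len(used)), key=lambda i: used[i])
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))
    print(f"Pinning job to GPU {gpu} ({used[gpu]} MiB in use)")
    return subprocess.Popen(cmd, env=env)

if __name__ == "__main__":
    # Placeholder command: replace with a notebook server or training script.
    launch_on_least_loaded(["python", "-c", "print('experiment placeholder')"])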