node.AI

Efficient data delivery to power your AI/ML

SigmaX Apache Open Source Optimized AI Stack:

  • Optimized/configured for AI/ML algorithms with real-time data ingest needs
  • Apache Arrow inline data ingest and Durable Write support
  • Heterogeneous Accelerator support (GPU and FPGA)
  • 14 major query engines supported
  • Presto distributed SQL
  • File system features: HDFS, decentralized ledger, tiered storage with geo-replication

The node.AI appliance is an AI/ML accelerated platform featuring:

  • Intel Stratix 10 PAC FPGA for accelerated Inference execution and real-time data operations
  • Intel Optane Persistent Memory in Application Direct or Memory Mode, accelerating messaging performance and improving data recency
  • NVIDIA V100 AI and Inferencing Acceleration
  • Baseline 8x SSD drives for excellent storage bandwidth

Available as an OPEN Platform: Optimized for AI, integrated, tested and supported. Develop your own solution without boundaries

Why SigmaX node.AI?

More flexibility than RAPIDS alone

Apache Arrow in the SigmaX stack supports RAPIDS functionality for both GPU and FPGA. Each type of accelerator excels in a different computational space, and our stack supports efficient data flow to both technologies, making it easy to choose the accelerator matched to your algorithm.
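As a minimal sketch of the GPU side of that data flow, the snippet below hands an Arrow table to RAPIDS cuDF without a format conversion. It assumes a CUDA-capable GPU with cuDF installed; the column names are illustrative.

```python
import pyarrow as pa
import cudf  # RAPIDS GPU DataFrame library; requires an NVIDIA GPU

# Data lands once in Arrow's columnar in-memory format...
table = pa.table({"feature": [0.1, 0.7, 0.3], "label": [0, 1, 0]})

# ...and moves to the GPU without reserializing, because cuDF
# understands the Arrow columnar layout natively.
gdf = cudf.DataFrame.from_arrow(table)
print(gdf["feature"].mean())
```

The same Arrow table could instead feed an FPGA kernel; the point is that the in-memory representation does not change per accelerator.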

NOD

Not Only Distributed. Apache Pulsar is both distributed and decentralized. Built-in support for geo-replication and tiered storage widens your AI data reach.
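Publishing into that pipeline can be sketched with Pulsar's Python client. The broker address and topic below are placeholders; with geo-replication enabled on the namespace, messages sent this way are replicated to the other clusters automatically.

```python
import pulsar

# Connect to a broker (assumed local address; adjust for your cluster).
client = pulsar.Client("pulsar://localhost:6650")

# Create a producer on a persistent topic. If the namespace is
# configured for geo-replication, Pulsar replicates each message
# to the peer clusters without any change to this code.
producer = client.create_producer("persistent://public/default/ai-ingest")
producer.send(b"sensor-reading-1")

client.close()
```

Tiered storage is likewise transparent to producers and consumers: older ledger segments are offloaded to cheaper storage while the topic remains queryable.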

Based in and Supported from the USA

Based in Virginia, SigmaX has invested significantly in US-based development and support. We offer customization services, including options for government customers.

Features

I need more for less

Want more processing power for your money? Stop wasting CPU cycles. Apache Arrow is an in-memory columnar format that can also be written durably. It eliminates the ETL and serialization/deserialization (SerDes) operations that have been shown to consume up to 60% of CPU cycles in ML systems.

Real, Real-time processing

Our stack ships with built-in FPGA-assisted ingest that coerces your data into Apache Arrow at wire speed. Optionally introduce custom real-time analytics before your data even hits main memory. The on-board Stratix 10 FPGA has ample power; SigmaX offers design services, and Intel design tools come pre-installed.

Distributed SQL for Big Data

Presto was written from the ground up for interactive analytics. Its performance approaches that of commercial data warehouses while scaling to organizations the size of Facebook. Query data sources of all sizes, up to petabytes, with interactive performance.
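From Python, a Presto coordinator can be queried through the DBAPI interface of the presto-python-client package. The host, catalog, schema, and table below are placeholders for your deployment.

```python
import prestodb  # from the presto-python-client package

# Connection parameters are assumptions; point them at your coordinator.
conn = prestodb.dbapi.connect(
    host="presto-coordinator.example.com",
    port=8080,
    user="analyst",
    catalog="hive",
    schema="default",
)

# Presto plans and executes the query across the cluster;
# the client just streams back result rows.
cur = conn.cursor()
cur.execute("SELECT sensor_id, avg(reading) FROM readings GROUP BY sensor_id")
for row in cur.fetchall():
    print(row)
```

Because Presto federates across catalogs (Hive, Pulsar, JDBC sources, and others), the same query interface reaches data wherever it lives.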

Intel Stratix 10 Class FPGA Acceleration

The Intel D5005 programmable acceleration card features high speed interfaces (up to 100Gbps). It provides the performance and versatility of FPGA acceleration and is one of several platforms supported by the Acceleration Stack for Intel Xeon® CPUs with FPGAs. This acceleration stack provides a common interface for both application and accelerator function developers, and includes drivers, application programming interfaces (APIs), and an FPGA Interface Manager.

Engineered to work together

SigmaX builds a series of server appliances designed to work with each other and extend your distributed and decentralized compute infrastructure.

V100 Class GPU Acceleration

NVIDIA® V100 Tensor Core is the most advanced data center GPU ever built to accelerate AI and Data Science.

  • 32X Faster Training Throughput than a CPU
  • 24X Higher Inference Throughput than a CPU Server

node.AI Server

Acceleration

FPGA: Intel Stratix 10 PAC

  • QSFP+ 4x 10G network interfaces
  • 1,150K logic elements
  • Acceleration Stack for Intel Xeon CPU

GPU: NVIDIA V100 Tensor Core GPU

  • 16GB HBM2

Memory: Intel Optane 512GB standard

  • Application Direct Mode and Memory Mode supported
  • Quad 128GB Intel Optane 2666 SR DIMMs

Server Hardware

Chassis: 2U, 27.8″ deep
CPU: Dual Intel Xeon Cascade Lake 4215R (8C/16T, 3.2GHz, 11MB cache, 9.6GT/s)
Memory: 128GB DDR4-2933 2Rx8 LP ECC (8x 16GB DIMMs)
Storage: 8x Intel SSD, 480GB each, 6Gb/s, 3D TLC, 2.5″

  • RAID 0, 1, 5, 6, 10, 50, 60
  • Broadcom SuperCap cache protection

Ports:

  • FPGA: 1x QSFP 40G
  • 2x 25G SFP28 LAN ports
  • 1x RJ45 dedicated IPMI LAN port

RAPIDS grew out of the Apache Arrow and GoAI projects, which are based on a columnar, in-memory data structure that delivers efficient, fast data interchange with the flexibility to support complex data models.

Did you know?