October 20, 2021
Flex Logix Technologies, which develops artificial intelligence edge inference accelerators and eFPGA IP, has announced the production availability of its InferX X1P1 PCIe accelerator board. Designed to bring high-performance AI inference acceleration to edge servers and industrial vision systems, the new board provides customers with AI inference capabilities where high accuracy, high throughput and low power on complex models are needed.
Using a unique dynamic TPU array architecture, the InferX X1 is designed around low-latency processing of batch workloads, with a special focus on challenging edge vision applications. The X1 offers high performance while remaining flexible enough to let customers migrate to new AI models in the future and adapt to changing system requirements and protocols, Flex Logix said.
“The X1P1 has consistently demonstrated a superior value proposition for customers looking for efficient yet high-performance inference acceleration in edge applications,” said Dana McCarty, vice president of sales and marketing for Flex Logix’s inference products. “Not only are we delivering on our promise to bring high-end AI capabilities to volume mainstream markets, but we are also allowing our customers to future-proof their designs by enabling them to support evolving models, which is something many competitor products fail to provide.”
Flex Logix said the X1P1 board offers the most efficient AI inference acceleration for edge AI workloads such as YOLOv3. Many customers need high-performance, low-power object detection and other high-resolution image processing capabilities for robotic vision, security, retail analytics and other applications.
The X1P1 board is available in production quantities starting in November 2021, and is priced starting at $399 for single unit quantities. The company is also offering a software tool kit to support customer model porting to the X1P1 board.
For more details, visit the Flex Logix website.