April 20, 2021
Xilinx, which develops processing platforms for cloud, edge and endpoint applications, has announced the Kria portfolio of adaptive system-on-modules (SOMs): production-ready, small-form-factor embedded boards that enable rapid deployment in edge-based applications. With a complete software stack and pre-built, production-grade accelerated applications, Kria SOMs can help bring adaptive computing to AI and software developers, the company said.
The first product in the portfolio is the Kria K26 SOM, which specifically targets vision AI applications in smart cities and smart factories. It features a custom-built Zynq UltraScale+ MPSoC device in a small form factor, ideal for production deployment in smart camera, embedded vision, security, retail analytics, smart city and machine vision applications.
It features a quad-core Arm Cortex-A53 processor, more than 250,000 logic cells, and an H.264/H.265 video codec. The module has 4GB of DDR4 memory and 245 I/Os, which allow it to adapt to virtually any sensor or interface, Xilinx said. With 1.4 teraOPS of AI compute, the Kria K26 SOM enables developers to create vision AI applications offering more than 3x higher performance at lower latency and power than GPU-based SOMs, which can be critical for smart vision applications such as security, traffic and city cameras, retail analytics, machine vision, and vision-guided robotics, Xilinx added.
“Xilinx’s entrance into the burgeoning SOM market builds on our evolution beyond the chip-level business that began with our Alveo boards for the data center and continues with the introduction of complete board-level solutions for embedded systems,” said Kirk Saban, vice president of product and platform marketing at Xilinx. “The Kria SOM portfolio expands our market reach into more edge applications and will make the power of adaptable hardware accessible to millions of software and AI developers.”
The company said it has invested in its tool flows to make adaptive computing more accessible to AI and software developers without hardware expertise. The turnkey applications eliminate all the FPGA hardware design work, requiring software developers only to integrate their custom AI models and application code and, optionally, modify the vision pipeline, using design environments such as the TensorFlow, PyTorch or Caffe frameworks, as well as the C, C++, OpenCL, and Python languages. The company has also announced an embedded app store for edge applications, offering customers a wide selection of apps for Kria SOMs from Xilinx and its ecosystem partners. These open-source applications range from smart camera tracking and face detection to natural language processing with smart vision.
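As a rough illustration only (the class and function names below are hypothetical stand-ins, not Xilinx APIs), the workflow described above separates into a pre-built accelerated pipeline supplied by the vendor and the pieces a software developer actually writes, which might be structured in Python like this:

```python
# Hypothetical sketch of the Kria developer workflow: the accelerated
# vision pipeline is pre-built, and the developer supplies only a custom
# AI model and application-level logic.

class PrebuiltVisionPipeline:
    """Stand-in for a pre-built, FPGA-accelerated vision pipeline."""

    def capture_frame(self):
        # A real pipeline would return a camera frame from the
        # accelerated capture path; here we return dummy pixel rows.
        return [[0] * 4 for _ in range(4)]


def run_custom_model(frame):
    # Placeholder for the developer's own AI model (e.g. trained in
    # TensorFlow or PyTorch, then deployed to the SOM's accelerator).
    return {"detections": len(frame)}


def application_logic(result):
    # Developer-written post-processing / business logic.
    return f"{result['detections']} objects detected"


pipeline = PrebuiltVisionPipeline()
frame = pipeline.capture_frame()
print(application_logic(run_custom_model(frame)))  # prints "4 objects detected"
```

The point of the sketch is the division of labor: everything inside `PrebuiltVisionPipeline` corresponds to work the turnkey applications eliminate, while the two plain functions correspond to the model and application code the developer brings.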
To further assist customers, Xilinx announced the Kria KV260 Vision AI Starter Kit, which provides an affordable and easy-to-use development platform for designing vision applications out of the box. The kit is purpose-built to support the accelerated vision applications available in the Xilinx App Store, allowing developers to get up and running in less than an hour with no knowledge of FPGAs or FPGA tools. The starter kit costs $199; production K26 SOMs are available in commercial ($250) and industrial ($350) variants.