The highly configurable Ethos-U55 works in concert with the Cortex-M core to achieve a small footprint while delivering more than a 30x improvement in inference performance compared to Cortex-M alone, even in high-performing MCUs. The Ethos-U55 is specifically designed to accelerate ML inference in area-constrained embedded and IoT devices. Its advanced compression techniques save power and significantly reduce ML model sizes, enabling execution of neural networks that previously ran only on larger systems. In addition, a unified toolchain with Cortex-M gives developers a simplified, seamless path to develop ML applications within the familiar Cortex-M development environment. End-to-end enablement for the Ethos-U55, from training to run-time inference deployment, will be accessible through NXP’s eIQ machine learning development environment.
NXP’s comprehensive portfolio of ML compute elements (CPU, GPU, DSP and NPU) is enabled through its eIQ machine learning development environment, which provides a choice of popular open-source inference engines matched to the performance needs of each compute element. Using NXP’s edge processors and eIQ tools, customers can easily build a wide range of ML applications, including object detection, face and gesture recognition, natural language processing, and predictive maintenance.
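As an illustrative sketch rather than NXP documentation: in a typical Ethos-U55 workflow, a trained model is quantized and compiled offline (for example with Arm's Vela compiler) into a .tflite binary, which then runs on the Cortex-M using TensorFlow Lite for Microcontrollers, one of the open-source inference engines commonly bundled in environments like eIQ. The model symbol, arena size, and Ethos-U operator registration below are assumptions that vary by device and software release.

```cpp
// Minimal on-device inference sketch with TensorFlow Lite for Microcontrollers.
// g_model_data and kArenaSize are hypothetical placeholders; exact APIs and
// operator registration differ across TFLM and eIQ releases.
#include <cstdint>
#include <cstring>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_model_data[];   // quantized .tflite compiled for Ethos-U55

constexpr int kArenaSize = 64 * 1024;        // working memory; size depends on the model
alignas(16) static uint8_t tensor_arena[kArenaSize];

int run_inference(const int8_t* input_bytes, size_t input_len) {
  const tflite::Model* model = tflite::GetModel(g_model_data);

  // Register only the operators the model needs; the Ethos-U custom op
  // hands the offloaded subgraphs to the NPU driver.
  static tflite::MicroMutableOpResolver<1> resolver;
  resolver.AddEthosU();

  static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena, kArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

  // Copy quantized input data into the model's input tensor and run.
  TfLiteTensor* input = interpreter.input(0);
  std::memcpy(input->data.int8, input_bytes, input_len);
  if (interpreter.Invoke() != kTfLiteOk) return -1;

  // Read back the result, e.g. the top class score for object detection.
  TfLiteTensor* output = interpreter.output(0);
  return output->data.int8[0];
}
```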