Monday, September 26, 2022

Edge AI processor slashes inference latency


GrAI Matter Labs (GML) unveiled the GrAI VIP, a sparsity-driven AI SoC optimized for ultra-low-latency, low-power processing at the endpoint. According to the company, the vision inference processor drastically reduces application latency, cutting end-to-end latencies for deep learning networks such as ResNet-50 to the order of a millisecond.

A near-sensor AI solution, the GrAI VIP offers 16-bit floating-point capability to achieve best-in-class performance with a low-power envelope. The edge AI processor is based on GML’s NeuronFlow technology, which combines the dynamic dataflow paradigm with sparse computing to produce massively parallel in-network processing. Aimed at applications that rely on understanding and transforming signals produced by a multitude of sensors at the edge, GrAI VIP can be used in industrial automation, robotics, AR/VR, smart homes, and infotainment in automobiles.
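The sparse-computing idea behind NeuronFlow can be illustrated with a toy sketch (an assumption for illustration only, not GML's actual implementation): in an event-driven dataflow model, a layer recomputes an output only where its input changed between successive sensor frames, so a mostly static scene triggers very little work.

```python
# Illustrative sketch of event-driven sparse processing: skip computation
# wherever successive input frames are (nearly) unchanged. The function
# name, per-element weight, and threshold are hypothetical placeholders.

def sparse_update(prev_frame, new_frame, state, weight, threshold=0.05):
    """Recompute outputs only where the input changed beyond a threshold.

    prev_frame / new_frame: lists of input activations for two frames
    state: cached per-element outputs from the previous frame
    weight: toy per-element weight (a real network has full layers)
    Returns (updated state, number of elements actually recomputed).
    """
    recomputed = 0
    for i, (old, new) in enumerate(zip(prev_frame, new_frame)):
        if abs(new - old) > threshold:   # "event": this input changed
            state[i] = new * weight      # recompute this element only
            recomputed += 1
        # else: reuse the cached result -- no compute, no memory traffic
    return state, recomputed


frame0 = [0.0, 0.5, 1.0, 0.20]
frame1 = [0.0, 0.9, 1.0, 0.21]           # only index 1 changed meaningfully
state = [x * 2.0 for x in frame0]        # outputs cached from frame0
state, n = sparse_update(frame0, frame1, state, weight=2.0)
print(n)                                  # 1 of 4 elements recomputed
```

The fraction of elements recomputed, rather than the frame size, then drives latency and power, which is the property the announcement attributes to sparsity-driven designs.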

GML demonstrated its Life-Ready AI SoC at this month’s Global Industrie exhibition. AI application developers looking for high-fidelity and low-latency responses for their edge algorithms can now gain early access to the full-stack GrAI VIP platform, including hardware and software development kits.

GrAI VIP product page

GrAI Matter Labs

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.



