Chip design firm ARM announced Project Trillium, a new suite of machine learning-dedicated chip architectures that will enable mobile devices to host AI computations locally

March 2, 2018

Briefing

  • Machine Learning (ML)-Dedicated Chips – Chip design firm ARM unveiled Project Trillium, a new suite of machine learning-dedicated chip architectures that will enable mobile devices to run artificial intelligence algorithms locally instead of transmitting data to cloud servers
  • Faster and Safer Computing – Increases data security and computing speed by eliminating the need to transfer data from device to cloud servers and back, using less power, and moving data in and out of memory more efficiently
  • ML Intellectual Property Suite – Includes the Arm ML Processor, which performs up to 4.6 trillion operations per second; the Arm OD Processor, which offers real-time image recognition in full HD at 60 frames per second; and Arm NN software, which is optimized to support neural network frameworks (e.g., TensorFlow, Caffe, and Android NN)
  • Commercial Availability – Will be available to hardware manufacturers by mid-2018, with the technology expected to appear in mobile devices beginning in early 2019

Accelerator

Sector

Information Technology

Organization

ARM Holdings

Source

Original Publication Date

February 13, 2018