NVIDIA announced breakthroughs in training and inference of the open-source AI language model BERT using its proprietary AI platform

Briefing

August 14, 2019

  • Breakthrough AI Training – NVIDIA announced that its artificial intelligence (AI) platform trained one of the most advanced AI language models, Bidirectional Encoder Representations from Transformers (BERT), in 53 minutes, compared with several days using other techniques
  • BERT – A pre-training AI model open-sourced by Google in November 2018 that extracts textual information applicable to a range of language tasks
  • Faster Inference – Achieved an inference time (i.e., the AI's ability to infer meaning from data acquired through training) of only 2.2 milliseconds, compared with over 40 milliseconds using optimized central processing units (CPUs)
  • AI Platform – Used the NVIDIA DGX SuperPOD supercomputer (92 NVIDIA DGX-2H servers running 1,472 NVIDIA V100 graphics processing units, or GPUs) to train BERT, plus NVIDIA T4 GPUs running the deep learning platform NVIDIA TensorRT for inference
  • Largest Model – Built and trained the world's largest language model based on Transformers, the foundational technology behind BERT, with 8.3 billion parameters (i.e., the numbers, values, or weights plugged into functions), 24 times the size of BERT-Large
  • Application – Will advance AI language understanding, enabling real-time conversations with AIs such as virtual assistants, search engines, and AI-based services used in banking, automotive, retail, healthcare, hospitality, and more
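The "24 times the size of BERT-Large" figure in the bullets above can be sanity-checked with quick arithmetic. This minimal sketch assumes BERT-Large has roughly 345 million parameters (a commonly cited figure; the original BERT paper reports about 340 million), which is not stated in the briefing itself:

```python
# Sanity check: does 24x BERT-Large land near 8.3 billion parameters?
# Assumption (not from the briefing): BERT-Large has ~345 million parameters.
bert_large_params = 345_000_000
scale_factor = 24

largest_model_params = bert_large_params * scale_factor
print(f"{largest_model_params / 1e9:.2f} billion parameters")  # 8.28 billion
```

The product, about 8.28 billion, is consistent with the 8.3 billion parameters reported for NVIDIA's largest Transformer-based model.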

Accelerator

Business Model and Practices

Sector

Information Technology

Function

Customer Experience and Service, IT Infrastructure, Research and Development

Organization

Google Inc., Nvidia Corp.

Source

Original Publication Date

August 13, 2019
