Nvidia has released a new version of TensorRT, a runtime system for serving inferences from deep learning models on Nvidia's own GPUs. Inferences, or predictions made from a trained model, can be served from either CPUs or GPUs.
Serving inferences from GPUs is part of Nvidia's strategy to drive greater adoption of its processors, countering AMD's efforts to break Nvidia's stranglehold on the machine learning GPU market.

Nvidia claims the GPU-based TensorRT beats CPU-only approaches across the board for inferencing. One of Nvidia's proffered benchmarks, the AlexNet image classification test under the Caffe framework, shows TensorRT to be 42 times faster than a CPU-only version of the same test (16,041 images per second vs. 374) when run on Nvidia's Tesla P40 processor. (Always take industry benchmarks with a grain of salt.)
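As a quick sanity check on the arithmetic, the two throughput figures Nvidia quotes do imply roughly the claimed speedup; a minimal sketch:

```python
# Nvidia's quoted AlexNet/Caffe benchmark figures on the Tesla P40.
gpu_ips = 16_041  # images per second, GPU-based TensorRT (Nvidia's figure)
cpu_ips = 374     # images per second, CPU-only run of the same test (Nvidia's figure)

speedup = gpu_ips / cpu_ips
print(f"Implied speedup: {speedup:.1f}x")  # about 42.9x, consistent with the ~42x claim
```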