Supermicro Shows Industry’s First Scale-Up AI and Machine Learning Systems Based on the Latest Generation CPUs and NVIDIA Tesla V100 GPUs with NVLink for Superior Performance and Density
TAIPEI, Taiwan, May 30, 2018—Super Micro Computer, Inc. (NASDAQ: SMCI), a global leader in enterprise computing, storage, networking solutions and green computing technology, today announced that it is showcasing the industry’s broadest selection of GPU server platforms supporting NVIDIA® Tesla® V100 PCI-E and V100 SXM2 Tensor Core GPU accelerators in its Platinum Sponsor booth at the GPU Technology Conference (GTC) Taiwan 2018, held at the Taipei Marriott Hotel on May 30-31.
Supermicro’s new 4U system with next-generation NVIDIA NVLink™ interconnect technology is optimized for maximum acceleration of highly parallel applications such as artificial intelligence (AI), deep learning, self-driving cars, smart cities, healthcare, big data, high performance computing (HPC) and virtual reality. The SuperServer 4029GP-TVRT, part of the NVIDIA HGX-T1 class of GPU-Accelerated Server Platforms, supports eight NVIDIA Tesla V100 32GB SXM2 GPU accelerators with maximum GPU-to-GPU bandwidth for cluster and hyper-scale applications. Incorporating the latest NVIDIA NVLink technology, with over five times the bandwidth of PCI-E 3.0, the system features independent GPU and CPU thermal zones to ensure uncompromised performance and stability under the most demanding workloads.
“On initial internal benchmark tests, our 4029GP-TVRT system was able to achieve 5,188 images per second on ResNet-50 and 3,709 images per second on InceptionV3 workloads,” said Charles Liang, President and CEO of Supermicro. “We also see very impressive, almost linear performance increases when scaling to multiple systems using GPUDirect RDMA. With our latest innovations incorporating the new NVIDIA V100 32GB PCI-E and V100 32GB SXM2 GPUs with twice the memory in performance-optimized 1U and 4U systems with next-generation NVLink, our customers can accelerate their applications and innovations to help solve the world’s most complex and challenging problems.”
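For context, and assuming the quoted figures were measured across all eight V100 SXM2 GPUs in a single 4029GP-TVRT (the release does not state the GPU count used), the implied per-GPU throughput works out to roughly

\[
\frac{5{,}188\ \text{images/s}}{8\ \text{GPUs}} \approx 649\ \text{images/s per GPU (ResNet-50)}, \qquad
\frac{3{,}709\ \text{images/s}}{8\ \text{GPUs}} \approx 464\ \text{images/s per GPU (InceptionV3)}.
\]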
“Enterprise customers will benefit from a new level of computing efficiency with Supermicro’s high-density servers optimized for NVIDIA Tesla V100 32GB Tensor Core GPUs,” said Ian Buck, vice president and general manager of accelerated computing at NVIDIA. “Twice the memory with V100 drives up to 50 percent faster results on complex deep learning and scientific applications and improves developer productivity by reducing the need to optimize for memory.”
Supermicro GPU systems also support the ultra-efficient Tesla P4, which is designed to accelerate inference workloads in any scale-out server. The hardware-accelerated transcode engine in the Tesla P4 delivers up to 35 HD video streams in real time and allows deep learning to be integrated into the video transcoding pipeline, enabling a new class of smart video applications. As deep learning shapes our world like no other computing model in history, deeper and more complex neural networks are trained on exponentially larger volumes of data. To keep these models responsive, they are deployed on powerful Supermicro GPU servers that deliver maximum throughput for inference workloads.
Supermicro is also demonstrating the performance-optimized 4U SuperServer 4029GR-TRT2, part of the NVIDIA SCX-E3 class of GPU-Accelerated Server Platforms, which supports up to 10 PCI-E NVIDIA Tesla V100 accelerators using Supermicro’s innovative, GPU-optimized single-root-complex PCI-E design that dramatically improves GPU peer-to-peer communication performance. For even greater density, the SuperServer 1029GQ-TRT supports up to four NVIDIA Tesla V100 PCI-E GPU accelerators in only 1U of rack space, and the new SuperServer 1029GQ-TVRT supports four NVIDIA Tesla V100 SXM2 32GB GPU accelerators in 1U. Both 1029GQ servers are part of the NVIDIA SCX-E2 class of GPU-Accelerated Server Platforms.
With the convergence of big data analytics and machine learning, driven by the latest NVIDIA GPU architectures and improved algorithms, deep learning applications require the processing power of multiple GPUs that must communicate efficiently and effectively as the GPU network expands. Supermicro’s single-root GPU system allows multiple NVIDIA GPUs to communicate directly, minimizing latency and maximizing throughput as measured by the NCCL P2PBandwidthTest.
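To illustrate what GPU peer-to-peer communication means in practice, the following is a minimal sketch using the standard CUDA runtime API. It is not Supermicro’s benchmark or the NCCL test itself, and the device indices, buffer size, and iteration count are arbitrary assumptions; it simply checks whether two GPUs can address each other’s memory directly and times a device-to-device copy between them.

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const int srcDev = 1, dstDev = 0;       // arbitrary device indices (assumption)
    const size_t bytes = 256ULL << 20;      // 256 MiB test buffer (assumption)
    const int iters = 20;                   // number of timed copies (assumption)

    // Report whether the destination GPU can access the source GPU's memory directly.
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, dstDev, srcDev);
    printf("GPU %d can directly access GPU %d: %s\n", dstDev, srcDev, canAccess ? "yes" : "no");

    // Allocate one buffer on each device.
    void *srcBuf = nullptr, *dstBuf = nullptr;
    cudaSetDevice(srcDev);
    cudaMalloc(&srcBuf, bytes);
    cudaSetDevice(dstDev);
    cudaMalloc(&dstBuf, bytes);
    if (canAccess) cudaDeviceEnablePeerAccess(srcDev, 0);  // map the peer's memory when supported

    // Time repeated device-to-device copies with CUDA events.
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    for (int i = 0; i < iters; ++i)
        cudaMemcpyPeer(dstBuf, dstDev, srcBuf, srcDev, bytes);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("Average peer copy bandwidth: %.1f GB/s\n",
           (double)bytes * iters / (ms / 1000.0) / 1e9);

    cudaFree(srcBuf);
    cudaFree(dstBuf);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return 0;
}

Compiled with nvcc (for example, nvcc p2p_bandwidth.cu -o p2p_bandwidth, a hypothetical file name), a run on an NVLink-connected pair of GPUs should report substantially higher bandwidth than a pair connected only through the PCI-E root complex.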
For comprehensive information on Supermicro NVIDIA GPU system product lines, please go to www.supermicro.com/products/nfo/gpu.cfm.
For complete information on SuperServer® solutions from Supermicro, visit www.supermicro.com.
Follow Supermicro on Facebook and Twitter to receive its latest news and announcements.
About Super Micro Computer, Inc. (NASDAQ: SMCI)
Supermicro (NASDAQ: SMCI), the leading innovator in high-performance, high-efficiency server technology, is a premier provider of advanced server Building Block Solutions® for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro is committed to protecting the environment through its “We Keep IT Green®” initiative and provides customers with the most energy-efficient, environmentally friendly solutions available on the market.
Supermicro, Building Block Solutions and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc.
All other brands, names and trademarks are the property of their respective owners.
Media Contact:
Michael Kalodrich
Super Micro Computer, Inc.
pr@supermicro.com
SMCI-F