Major server manufacturers adopt Nvidia Tesla V100 accelerators for AI systems

Yesterday, Nvidia and its partners Dell EMC, Hewlett Packard Enterprise, IBM, and Supermicro announced a series of servers that use Tesla V100 accelerators based on the Nvidia Volta architecture. According to Nvidia, this is the most advanced GPU for artificial intelligence (AI) and other computationally intensive workloads.

With deep learning performance of over 120 teraflops, the Nvidia V100 allows scientists, researchers, and engineers to tackle tasks that were previously considered too complex or even impossible.

The list of systems from Nvidia's partners looks like this:

Dell EMC announced the PowerEdge R740, which supports up to three V100 GPUs for PCIe; the PowerEdge R740XD, which supports up to three V100 GPUs for PCIe; and the PowerEdge C4130, which supports up to four V100 GPUs for PCIe or four V100 GPUs in the SXM2 form factor with the Nvidia NVLink interface;

HPE introduced the HPE Apollo 6500 and HPE ProLiant DL380, with support for up to eight and three V100 GPUs for PCIe, respectively;

IBM announced the upcoming release of next-generation IBM Power Systems based on the Power9 processor, with multiple V100 GPUs and NVLink technology used for GPU-to-GPU and CPU-to-GPU connections (the latter is unique and available only on OpenPOWER systems);

Finally, Supermicro introduced a lineup that includes the 7048GR-TR workstation and the 4028GR-TXRT, 4028GR-TRT, 4028GR-TR2, and 1028GQ-TRT servers.

V100 GPUs are supported by software optimized for the Volta architecture: CUDA 9.0 and the deep learning SDK, including TensorRT 3, DeepStream, and cuDNN 7, as well as all the major AI application frameworks.
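
For readers who want to check that a system's GPUs expose the Volta feature set before installing the CUDA 9 stack, a minimal device-query sketch using the CUDA runtime API is shown below. The compute-capability check (7.0 for Volta-class parts such as the Tesla V100) is an illustrative assumption and is not taken from the announcement.

```c
// Minimal sketch: list CUDA devices and flag Volta-class GPUs.
// Assumes a CUDA toolkit is installed; compile with: nvcc query_volta.cu -o query_volta
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA-capable GPU found\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // Volta GPUs (e.g. Tesla V100) report compute capability 7.0.
        int is_volta = (prop.major == 7 && prop.minor == 0);
        printf("GPU %d: %s, compute capability %d.%d%s\n",
               i, prop.name, prop.major, prop.minor,
               is_volta ? " (Volta)" : "");
    }
    return 0;
}
```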

Tags:
Nvidia
