Nvidia on Tuesday announced technologies that could make its upcoming Tesla graphics processors more accessible to cloud deployments in enterprises, while also reversing a trend of relegating highly parallel chips to specialized math and scientific calculations.

The company announced new Tesla graphics processors with hardware and software hooks that allow the chips to be self-sufficient in deploying virtual machines and executing programs. Analysts said the new technologies could open the floodgates for cloud deployments through servers with virtualized graphics processors, which will rely less on components such as CPUs for task execution.

Graphics processors are generally considered faster than CPUs, which are more relevant for everyday tasks such as productivity applications. A growing number of servers now harness the parallel-processing capabilities of CPUs and GPUs together to scale performance, especially in supercomputing. The world's second-fastest supercomputer, the Tianhe-1A system at the National Supercomputer Center in Tianjin, China, uses Nvidia's Tesla GPUs and Intel's Xeon processors to deliver 2.5 petaflops of performance.

The new graphics processors include the Tesla K10, which has started shipping, and the Tesla K20, which will ship in the fourth quarter. The new chips are based on the Kepler architecture and are faster and more power-efficient than chips based on the older Fermi architecture, which was considered power-hungry by design. Chip prices were not available.

Nvidia has added to Kepler a new virtualization technology called VGX, which virtualizes the GPU and makes it a resource that can be shared by multiple CPUs and threads, said Jeff Brown, general manager of the professional solutions group at Nvidia. Nvidia has built a memory management unit into the GPU, ensuring a straightforward deployment of a virtual machine.

GPUs have been used for virtualization in the past.
For example, Nvidia and its rival Advanced Micro Devices have offered professional graphics cards for deploying Windows 7 virtual desktops from servers to client devices. But with VGX, the GPU can now skip CPU cycles and directly deploy and manage virtual machines.

The new virtualization technology has interesting implications for server designs and for the deployment of cloud services to thin clients and devices like tablets, said Dean McCarron, principal analyst at Mercury Research.

"We can see some shifting going on. For one, GPUs haven't shown up in the server environment outside the high-performance computing space," McCarron said. "It opens the door for playing very complex, visually detailed games on a thin client."

The new technology also makes it easier and less expensive to add GPUs to general server environments, McCarron said. For example, a server-side virtualized GPU will be able to render a high-definition game and deliver it over the cloud while taking advantage of GPU acceleration features.

"Now you can start doing some interesting things with your workload in terms of a client-server architecture," McCarron said.

The VGX architecture removes a couple of major bottlenecks that kept hybrid (CPU and GPU) systems from achieving maximum power and performance efficiency, said Dan Olds, principal analyst at Gabriel Consulting Group.

"After these changes are implemented fully, users will see much higher CPU, GPU, and thus overall system utilization, which makes the already compelling hybrid computing story even stronger," Olds said.

A VGX hypervisor connects to hypervisors from Citrix or VMware, and graphics boards based on VGX are being developed that can be plugged into PCI-Express 3.0 slots. Server makers including Dell, Hewlett-Packard, IBM and Cisco are adopting the new VGX technology, Nvidia's Brown said.

Multiple workloads require GPUs in servers to execute programs more efficiently.
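The kind of multi-workload GPU sharing described above maps onto CUDA streams: each independent task is queued on its own stream, and Kepler-class hardware scheduling can service the queues concurrently rather than round-tripping each task through the CPU. The following is a minimal, illustrative sketch only, assuming a CUDA-capable device and the CUDA runtime; the `scale` kernel, stream count, and buffer sizes are made up for illustration:

```cuda
// Illustrative sketch: several independent workloads, each on its own
// CUDA stream. On Kepler-class GPUs the hardware scheduler can run
// these queues concurrently instead of serializing them via the host.
#include <cuda_runtime.h>
#include <cstdio>

// Hypothetical per-workload kernel: scales a buffer in place.
__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int kStreams = 4;      // four independent "workloads"
    const int n = 1 << 20;       // elements per workload
    cudaStream_t streams[kStreams];
    float *buf[kStreams];

    for (int s = 0; s < kStreams; ++s) {
        cudaStreamCreate(&streams[s]);
        cudaMalloc(&buf[s], n * sizeof(float));
        // Queue work on this workload's own stream; no CPU round-trip
        // is needed between kernels launched on different streams.
        scale<<<(n + 255) / 256, 256, 0, streams[s]>>>(buf[s], n, 2.0f);
    }
    for (int s = 0; s < kStreams; ++s) {
        cudaStreamSynchronize(streams[s]);
        cudaFree(buf[s]);
        cudaStreamDestroy(streams[s]);
    }
    printf("queued %d independent workloads\n", kStreams);
    return 0;
}
```

On earlier hardware, streams like these could still end up serialized through a single work queue; the point of Kepler's scheduling improvements is to let such independent queues actually run concurrently.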
Nvidia has introduced a new technology called Hyper-Q to improve the parallelism and utilization of GPUs, said Sumit Gupta, senior director for the Tesla business unit.

The GPU helps execute multiple tasks simultaneously and more efficiently through a hardware scheduler, which ensures the GPU can prioritize and execute tasks instead of looping back to the CPU, Gupta said.

"As soon as you start going back to the CPU too much, you lose the benefit of the GPU," Gupta said.

Part of CPU design involves a scheduler that makes sure the workload and its branches are handled correctly. That kind of capability wasn't present in parallel processing units, and Nvidia's implementation of Hyper-Q will scale GPU performance, Mercury Research's McCarron said.

The GPU already has many of the capabilities of the CPU, and Hyper-Q will bring the two closer in features, McCarron said. Rather than requiring specially tuned software, Hyper-Q will scale the number of workloads that virtualized GPUs can handle, McCarron said.

The Hyper-Q technology can be applied to a number of tasks, including computational fluid dynamics, bioinformatics, genome sequencing and circuit simulation, Gupta said. Programmers don't have to change their code to benefit from Hyper-Q, and the technology is compatible with Nvidia's CUDA framework, which enables programmers to write highly parallel programs.

Hyper-Q will come with the Tesla K20, which is targeted at high-end servers and will become available in the fourth quarter.

Agam Shah covers PCs, tablets, servers, chips and semiconductors for IDG News Service. Follow Agam on Twitter at @agamsh. Agam's e-mail address is agam_shah@idg.com.