Monday 14 November 2011

The NVIDIA Blog

Taking the Suspense (and Waiting) Out of Simulation

Posted: 14 Nov 2011 11:00 AM PST

Running on the combined horsepower of Quadro and Tesla GPUs, NVIDIA's new Maximus-powered workstations are designed to remove what has always been an inevitable part of engineering simulation: long, anxious waits. Performing the dual functions of a traditional desktop machine and a mini-HPC (high performance computing) system, these workstations let you run your professional CAD software, visualize your design in photorealistic graphics, and perform finite element analysis to determine its structural integrity, all at the same time.

Let me explain. There's a good reason for engineers to postpone simulation jobs, especially those involving complex assemblies. The sheer volume of 3D geometric data can consume all the memory in a standard workstation. Similarly, the computation required may take up all the processing power available in the CPU, bringing the entire machine to a crawl. Therefore, engineers who cannot afford to put other operations on hold (operations such as CAD modeling, data management, and document processing) have learned to time their simulation jobs strategically, so they start running at lunchtime or after hours.

Engineers who have access to a high-performance computing (HPC) cluster fare a little better. They have the option to submit their compute-intensive jobs to the cluster, leaving their machine free to run CAD software or render a photorealistic scene. But a cluster is usually a resource shared by many engineers, often overburdened with more jobs than it can process in a timely fashion. Should you increase the thickness of your engine? Should you decrease the height of the mounting bracket? You can't make an informed decision until you see the results of your stress and thermal analyses. So even though your personal workstation is still running at full speed, your workflow may come to a halt, depending on the length of the HPC job queue.

In today's fast-paced manufacturing, getting to market faster than your competitors gives you, and your clients, a huge financial advantage. So any wait time that forces you to put critical decisions on hold can be detrimental to the project. Yet the suspenseful wait, those anxious hours and days before you find out whether your design can withstand the anticipated load and heat, has long been an inevitable part of simulation-driven design projects.

NVIDIA Maximus technology, which creates a new class of workstations powered by a combination of NVIDIA graphics processing units (GPUs), is designed to reduce the nail-biting suspense by giving you the option to run simulation jobs on your own machine, without depriving other applications of the computing power they need.

Engineers may not be accustomed to thinking of their personal workstation as an HPC system. However, the NVIDIA Tesla C2075 GPU inside a Maximus-powered workstation functions like a mini-HPC system, giving you the parallel-processing power to run many of the simulation exercises typically delegated to HPC clusters. For highly complex, system-wide simulations (for example, studying the mechanical behavior of an entire aircraft), you'll probably continue to rely on an HPC cluster. But for mid-level simulation tasks (for example, fluid flow inside a medical device's chamber), an NVIDIA Maximus-powered workstation should be more than adequate. In many cases, you may be able to determine the fitness of your design while you continue to refine its geometry in a CAD program and its aesthetics in a rendering program.

Previously, running a simulation job on a workstation gave you the perfect excuse to take a lunch break: the machine was grinding to a halt, so what else was there to do? With an NVIDIA Maximus workstation, you'll have to find a new excuse for a break.

For more on my thoughts on NVIDIA Maximus, visit my Desktop Engineering blog post.

GPU Supercomputers Show Exponential Growth in Top500 List

Posted: 14 Nov 2011 06:15 AM PST

When we launched Tesla GPUs for supercomputing back in 2007, we had a vision that they might just change the scientific computing world forever.

Four years later, compelling validation has arrived that we were right. The new Top500 list published today includes 35 supercomputers accelerated by Tesla GPUs, including three of the top five.

[Graph: exponential growth in the number of GPU-accelerated supercomputers in the Top500 list]

There are 14 new GPU supercomputers in the Top500 list in the oil and gas industry alone. Schlumberger (through WesternGeco), Petrobras, Hess, Total and Chevron are among the major oil explorers using Tesla GPUs to determine more accurately where to drill.

The exponential growth in the number of GPU supercomputers in the Top500 list is one of the fastest adoptions of a new processor in the history of high performance computing.

GPU supercomputers in universities, research labs, and government labs are already leading to breakthroughs in the study of viruses and bacteria, the analysis of dam breaks and floods, and the prediction of heart attacks.

And we're just getting started. Next year will mark another milestone when the new Titan GPU supercomputer is deployed at Oak Ridge National Laboratory. And it looks like more GPU supercomputers will soon be on the way.

World’s First ARM-based Supercomputer to Launch in Barcelona

Posted: 14 Nov 2011 06:00 AM PST

The Barcelona Supercomputing Center (BSC), Spain's national supercomputing facility, made big news in the supercomputing world today by announcing plans to build the world's first ARM-based supercomputer.

BSC is planning to build the first ARM supercomputer, accelerated by CUDA GPUs (PDF link), for scientific research. This prototype system will pair NVIDIA's quad-core ARM-based Tegra 3 system-on-a-chip with NVIDIA CUDA GPUs on a hardware board designed by SECO, to accelerate a variety of scientific research projects.

In its search for more energy-efficient supercomputer architectures, BSC concluded that the typical x86 CPUs in today's supercomputers consume up to 40 percent of a system's total power. It also found that ARM CPUs are much more energy-efficient than x86 CPUs from Intel and AMD.

ARM's superior energy efficiency can be traced back to the origins of the architecture. ARM was originally designed for extremely small, low-power embedded devices, whereas Intel and AMD designed x86 CPUs primarily to make the Windows operating system run faster, with little consideration for power consumption.

BSC is using NVIDIA GPUs to accelerate supercomputing applications on the ARM-based Tegra 3. This combination of GPUs and CPUs delivers greater performance with higher energy efficiency.

Supercomputers are becoming increasingly capped by power. Extreme-scale (petascale and exascale) systems are required for advancing science and technology, but their power consumption has already reached the 10 to 20 megawatt range. This means one of today's larger supercomputers uses as much power as a small town. This rate of power consumption is not sustainable.
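The small-town comparison is easy to sanity-check with back-of-the-envelope arithmetic. The average household load used below is an assumed illustrative figure, not a number from this post:

```python
# Rough scale check: how many average homes draw the same power as a
# 10-20 MW supercomputer? The household figure is an assumption.
AVG_HOME_KW = 1.2  # assumed average continuous household load, in kW

def homes_equivalent(supercomputer_mw: float) -> int:
    """Approximate number of average homes matching the given power draw."""
    return round(supercomputer_mw * 1000 / AVG_HOME_KW)  # MW -> kW, then divide

for mw in (10, 20):
    print(f"{mw} MW is roughly the draw of {homes_equivalent(mw):,} homes")
```

Under that assumption, 10 to 20 MW corresponds to roughly 8,000 to 17,000 homes, which is indeed the scale of a small town.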

To continue making advances in science and technology, we have to keep building higher-performance supercomputers without increasing power. The supercomputing and high-performance computing communities are investing in heterogeneous systems as the path forward for these large supercomputers.

Barcelona's research project represents a big step forward in the march to exascale.
