The NVIDIA Blog
Microsoft Going All-in on GPU Computing

Posted: 15 Jun 2011 10:39 AM PDT

Great news: Microsoft today made an announcement that will accelerate the adoption of GPU computing (that is, the use of GPUs as a companion processor to CPUs). The software maker is working on a new programming language extension, called C++ AMP, focused on accelerating applications with GPUs. With Microsoft now embracing GPUs in its future higher-level language and OS roadmap, the decision to go with GPU computing becomes even easier for those programmers still on the fence.

Microsoft's intent with C++ AMP is to expose GPU acceleration through the C++ language to millions of Windows developers. It promises to give C++ developers the option of using Microsoft Visual Studio-based development tools to accelerate applications with the parallel processing power of GPUs. CUDA C and CUDA C++ will continue to be the preferred platform for Linux applications and for demanding HPC (high performance computing) applications that need to maximize performance.

In the spring of 2007, there was just one language (CUDA C) supporting NVIDIA GPUs. Fast forward to today, and our customers have a much wider selection of languages and APIs for GPU computing: CUDA C, CUDA C++, CUDA Fortran, OpenCL, DirectCompute, and, in the future, Microsoft C++ AMP. There are even Java and Python wrappers, as well as .NET integration, that sit on top of CUDA C or CUDA C++.

If you are a Windows C++ developer looking at GPU computing for the first time, there is no need to wait. Visual C++ developers today use our high-performance CUDA C++ with the Thrust C++ template library to accelerate applications by parallelizing as little as 1 to 5 percent of their application code and mapping it to NVIDIA GPUs (a small illustrative sketch follows this post). CUDA C++ comes with a rich ecosystem of profilers, debuggers, and libraries such as cuFFT, cuBLAS, LAPACK, cuSPARSE, and cuRAND. NVIDIA's Parallel Nsight™ for Visual Studio 2010 gives these Windows developers a familiar development environment, combined with excellent GPU profiling and debugging tools.

The takeaway from Microsoft's announcement today is that the GPU computing space has reached maturity, with Microsoft, the company that produces the world's most widely used commercial C++ developer tools, fully embracing GPU computing in its core tools. Rest assured, NVIDIA continues to work closely with Microsoft to help make C++ AMP a success, and we will continue to deliver the best GPU developer tools and training. Stay tuned for more details.
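For readers who have not seen Thrust before, here is a minimal, purely illustrative sketch (not taken from the post) of the kind of small code change described above: a sum-of-squares reduction offloaded to the GPU with thrust::transform_reduce. It assumes a CUDA toolkit installation and is compiled with nvcc.

```cpp
// Illustrative Thrust sketch: sum of squares computed on the GPU.
// Build (assuming a CUDA toolkit is installed): nvcc sum_of_squares.cu -o sum_of_squares
#include <thrust/device_vector.h>
#include <thrust/transform_reduce.h>
#include <thrust/functional.h>
#include <iostream>

// Functor applied to each element on the device.
struct square
{
    __host__ __device__ float operator()(float x) const { return x * x; }
};

int main()
{
    // device_vector allocates GPU memory and handles the host-to-device copy.
    thrust::device_vector<float> d_vec(1 << 20, 2.0f);

    // Square every element and sum the results in a single fused call on the GPU.
    float sum_of_squares = thrust::transform_reduce(
        d_vec.begin(), d_vec.end(), square(), 0.0f, thrust::plus<float>());

    std::cout << "Sum of squares: " << sum_of_squares << std::endl;
    return 0;
}
```

The appeal of this style is that the parallel portion of the program stays small and STL-like: the container manages the data transfer, and the algorithm call replaces what would otherwise be a hand-written CUDA kernel plus launch code.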
Posted: 14 Jun 2011 04:50 PM PDT

GPU supercomputers and HPC systems delivering 100x speed-ups: are they for real? A handful of the 30+ attendees at the June 6th meeting of the HPC & GPU Supercomputing Group of Silicon Valley seemed to think so, while the rest remained skeptical. Throughout the meetup's featured talk, Jike Chong, adjunct professor at Carnegie Mellon, principal application architect at Parasians (a parallel-computing startup) and the organizer of this HPC/GPU meetup group, raised five key questions to shed light on the discussion.
Jike's talk focused on the critical role that application developers play in the changing semiconductor ecosystem. He distinguished the application developers' role from the roles of other important players in the industry, such as architecture researchers. He introduced the audience to the past and present practices of industry professionals and researchers as they work toward answering the question of how to attain speed-ups across different processors and platforms. He then explained how, in the field of computational finance, enormous speedups are possible using parallel computing. With this background in place, he discussed how startups use parallel application development, and the mistakes they sometimes make. Finally, Jike made recommendations for organizations seeking 100x speedups. Adopting such game-changing technology can deliver significant cost savings and open up new revenue opportunities. (See slides below for additional information.)

100x speedups, are they real? Take a look at the slides and let us know what you think!

Footnote: The "100x speedup, is it real?" talk builds on a recently published Berkeley paper on this topic, which the speaker co-authored.

Links:
- HPC & GPU Supercomputing Group of Silicon Valley
- Recently published Berkeley Paper on speedups

