Friday 18 May 2012

The NVIDIA Blog


Winners of Petaflop Supercomputer Contest Are…

Posted: 18 May 2012 09:25 AM PDT


Two weeks ago, we posed an open-ended question to the research community: What scientific problems would you tackle with a petaflop supercomputer? Today, we reveal the three proposals selected to win exclusive early access to an NVIDIA Tesla K20 GPU.

Contest entries came in from hundreds of researchers around the world. Their proposals tackled societal challenges ranging from advancing biofuels and easing greenhouse gas emissions to preventing the next financial meltdown – all with a common aspiration to change the world.

The NVIDIA Tesla K20

The buzz around this contest reminds us that the pace of innovation is limited not by human imagination, but by computing resources. Oak Ridge National Lab recently told us that computing access on the upcoming TITAN supercomputer, powered by Tesla GPUs, is heavily oversubscribed – and the system isn't even online yet.

This is where the Tesla K20 GPU can transform research. Using this GPU, based on our next-generation Kepler architecture, every research university could have access to petascale supercomputers, and speed up the pace of scientific discoveries.

After reviewing the flood of compelling proposals, our panel of academics handpicked three winning entries:


"Antiretroviral Therapies Against HIV-1"
Juan R. Perilla, Ph.D., University of Illinois at Urbana-Champaign
Theoretical and Computational Biophysics Group (Macromolecular Modeling and Bioinformatics)

Summary: HIV-1 is increasingly acquiring resistance to antiretroviral treatments. A petaflop supercomputer would help us see the complex dynamics that govern the phase space of the conical HIV capsid. The capsid protein plays critical roles in both the early and late stages of the infection process and is widely viewed as an important, unexploited therapeutic target that could offer the best hope of generating drugs active against all HIV-1 variants.


"Finding Biomarkers for Major Mental Disorders"
Stephen J. Glatt, Ph.D., SUNY Upstate Medical University
Director of the Psychiatric Genetic Epidemiology & Neurobiology Laboratory (PsychGENe Lab)

Summary: The PsychGENe Lab is working to find better ways to diagnose and prevent major mental disorders like autism, schizophrenia and many others. Discovery of additional risk genes and biomarkers for major mental disorders will, in turn, allow the development of personalized and more efficient treatments, as well as earlier identification and prevention. A petaflop supercomputer would let us continually model existing and emerging datasets with more complex models that more closely resemble the true biological complexity underlying these insidious disorders.


"Tracking Oil Spills at Real Time for Immediate Cleanup Efforts"
Brandon Snow Richardson, Jet Propulsion Laboratory and Stanford University

Summary: During an average flight to track the progress of the Deepwater Horizon oil spill in the Gulf of Mexico, AVIRIS, an airborne NASA imaging spectrometer, would generate more than 135 GB of data, which would take a week to process into abundance maps on a CPU cluster. My research has shown that GPUs significantly accelerate spectral decomposition, and a petascale computer with GPUs will produce abundance maps almost instantly, helping cleanup crews respond immediately.
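
For readers curious what turning hyperspectral data into abundance maps involves, here is a minimal sketch of linear spectral unmixing in Python: each pixel's measured spectrum is fit as a non-negative mixture of known "endmember" spectra (water, crude oil, vegetation and so on), and the fitted weights become the abundance maps. The data shapes, endmember library and function names below are illustrative assumptions, not the actual AVIRIS processing pipeline; the point is simply how naturally per-pixel the work is.

    # Minimal sketch of linear spectral unmixing (illustrative, not the AVIRIS code):
    # every pixel's spectrum is modeled as a non-negative mix of known endmember
    # spectra, and the fitted weights are the per-endmember abundance maps.
    import numpy as np
    from scipy.optimize import nnls

    def unmix_cube(cube, endmembers):
        """cube: (rows, cols, bands) radiances; endmembers: (n_members, bands)."""
        rows, cols, _ = cube.shape
        A = endmembers.T                          # (bands, n_members) design matrix
        abundances = np.zeros((rows, cols, endmembers.shape[0]))
        for r in range(rows):                     # every pixel is independent, which
            for c in range(cols):                 # is exactly what a GPU parallelizes
                abundances[r, c], _ = nnls(A, cube[r, c])
        return abundances

    # Toy data: 3 hypothetical endmembers across 224 bands (AVIRIS's band count).
    rng = np.random.default_rng(0)
    endmembers = np.abs(rng.normal(size=(3, 224)))
    truth = rng.dirichlet([1.0, 1.0, 1.0], size=(8, 8))   # true per-pixel fractions
    cube = truth @ endmembers + 0.01 * rng.normal(size=(8, 8, 224))
    maps = unmix_cube(cube, endmembers)
    print(maps.shape)                             # (8, 8, 3): one abundance map per endmember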


Congratulations to the winners and thanks to all who participated in the contest!

GTC Wraps Up: By The Numbers

Posted: 17 May 2012 06:10 PM PDT


The third GPU Technology Conference (GTC) ended Thursday much as it began – with a jam-packed keynote session, standing-room only break-out sessions and a small galaxy of schmooze-fests along the sidelines.

Indeed, the final sessions were as crowded as those on the morning of Day One.

NVIDIA used the event – which is designed to nurture the GPU ecosystem – to unveil some news of its own. It introduced two new Tesla processors based on the next-generation Kepler architecture, one of which has more than 7 billion transistors. And it announced that it's taking the GPU into the cloud with two initiatives: the VGX platform, which lets enterprises deliver virtualized desktops to any device across their network, and the GeForce Grid, for the delivery of flawless online gaming.

Another way of looking at GTC is by the numbers. The conference…

  • Drew nearly 3,000 attendees from 54 countries
  • Offered 340+ conference sessions in 34 disciplines
  • Displayed 120 academic posters about CUDA applications
  • Featured 100+ HPC-focused exhibitors
  • And was filled with enough hyphenated, multi-syllabic Latinate words to baffle all but the initiated…

If you missed this year's event, you can catch up on the GTC blog posts here: http://blogs.nvidia.com/tag/gtc12. And watch the video below for NVIDIA VP Ujesh Desai's wrap-up from the show.

Join us next year, March 19-22, for GTC 2013. We'll be right back here at the San Jose Convention Center.

Space: The Next, Though Not Final, Frontier for GPUs

Posted: 17 May 2012 04:52 PM PDT


Considering NASA completed multiple missions to the moon more than 40 years ago using the technological equivalent of chicken wire and duct tape, landing a robotic lunar rover with modern technology should be a pushover, right?

Not exactly, explained Robert Boehme and Wes Faler, of Part-Time Scientists, during their Day Three keynote address in front of a rapt audience at GTC. A modern mission still has to contend with temperature swings of 300 degrees Celsius and component-destroying lunar sand so fine it can penetrate "airtight" astronaut space suits. Not to mention 10 times the radiation from the sun that we experience on Earth.

Robert Boehme (blue) and Wes Faler (white) talk
about their quest to win the Google Lunar X Prize

The pair is part of a team of more than 100 volunteer scientists, engineers, researchers and students – even veterans of NASA's Apollo missions – vying to win the Google Lunar X PRIZE, and the $30 million that comes with it. To succeed, the team needs to be the first privately funded team to safely land a rover on the surface of the moon, drive the rover at least 500 meters, and transmit detailed video, images and data back to Earth.

To help ensure its success, the Part-Time Scientists are relying on NVIDIA GPUs to accelerate the mission's computationally intensive applications. This includes everything from simulating the landing craft's final orbit and approach, to modeling the rover's autonomous navigation of the lunar surface in one-sixth the Earth's gravity – tasks that require hundreds of millions of computing runs to determine possible parameters and their effects.
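
To give a feel for what one of those runs looks like, here is a deliberately tiny Python sketch: a one-dimensional lunar descent is integrated for a grid of initial sink rates and engine thrust settings, and each combination is scored on whether it touches down gently. The dynamics, masses and thresholds are invented placeholders rather than the team's actual models; on a GPU, each combination would simply map to its own thread.

    # Toy parameter sweep: integrate a 1-D lunar descent for a grid of
    # (initial sink rate, engine thrust) combinations and count soft landings.
    # All constants are illustrative assumptions, not Part-Time Scientists' data.
    import numpy as np

    G_MOON = 1.62        # lunar surface gravity, m/s^2
    LANDER_MASS = 100.0  # kg, an assumed placeholder mass
    SOFT_LANDING = 5.0   # m/s, an assumed survivable touchdown speed

    def lands_softly(v0, thrust, alt0=500.0, dt=0.1, t_max=300.0):
        """Integrate a simple vertical descent; True if it touches down gently."""
        alt, v = alt0, v0                          # v < 0 means sinking
        for _ in range(int(t_max / dt)):
            v += (thrust / LANDER_MASS - G_MOON) * dt
            alt += v * dt
            if alt <= 0.0:
                return abs(v) <= SOFT_LANDING      # reached the surface: was it gentle?
        return False                               # hovered or climbed away instead

    # Sweep initial sink rates and thrust settings bracketing the hover point (162 N).
    v0s = np.linspace(-20.0, -2.0, 19)
    thrusts = np.linspace(150.0, 180.0, 31)
    soft = sum(lands_softly(v0, t) for v0 in v0s for t in thrusts)
    print(f"{soft} of {v0s.size * thrusts.size} parameter combinations land softly")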

Once their rover, named Asimov, has reached its destination, the processing and broadcasting of high-resolution video and images back to Earth will require more than 34 trillion floating point operations for the square kilometer the team expects the rover to explore.

Robert and Wes left a surprise for the end of the keynote. They revealed, to the delight of the audience, that Asimov will be a self-driving rover using NVIDIA GPUs to autonomously roam the moon. They then instructed the crowd to check under their seats for an even bigger surprise: one lucky keynote attendee would walk away from GTC 2012 with the Asimov Junior rover prototype.

NASA's moon missions are popularly credited with helping bring new technologies, like Teflon and Tang, into everyday use. Four decades on, NVIDIA's GPU technology is helping volunteer rocket scientists every step of the way on a journey back into space.

GPUs Processing Images From the Red Planet

Posted: 17 May 2012 03:41 PM PDT


The 150 million-plus miles from Earth to Mars is the least of the challenges facing researchers who are processing images from a rover on the Red Planet.

Consider some others: The rover's processor, based on generations-old technology, was meant to last six months and is going on its seventh year. Mars provides a very "noisy" image environment. There are limited transfer windows when an orbiting satellite can relay images back to Earth. And the rover's pint-sized antenna looks like it was fashioned out of a paper clip. Worst of all, there's no onsite tech support.

But mathematician Brendan Babb, from the University of Alaska at Anchorage, is using GPUs to improve the compression of the images the rover sends back. He uses CUDA and NVIDIA Tesla GPUs to speed up what is called a genetic algorithm, which mimics natural evolution to derive clearer images from the data received at NASA's Jet Propulsion Laboratory.

A beautiful panorama of Mars Rover “Spirit” from “Troy” (false color)

The algorithm works by pairing neighboring pixels with a random one and then adjusting the random pixel based on whether it incrementally improves the original image. Babb described the algorithm as an "embarrassingly" parallel process, ideally suited to GPU acceleration. He estimates he has been able to achieve a 20 to 30 percent error reduction in image compression.
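
To see why the structure is so GPU-friendly, here is a toy Python sketch of that accept-only-if-better step: a pixel is paired with a randomly chosen partner, nudged toward it, and the nudge is kept only when it moves the pixel closer to a reference training image. Every name and constant below is an illustrative assumption, a simplification of the idea rather than Babb's actual genetic-algorithm code.

    # Toy sketch of the "nudge toward a random partner, keep it only if it helps"
    # idea. Each update touches a single pixel, so the work parallelizes trivially.
    import numpy as np

    def refine(noisy, reference, iterations=50_000, step=4.0, seed=0):
        """Nudge pixels toward random partners, keeping only error-reducing changes."""
        rng = np.random.default_rng(seed)
        img = noisy.astype(float).copy()
        h, w = img.shape
        for _ in range(iterations):
            # Pick a pixel and a random partner, then nudge the pixel toward the partner.
            y, x = rng.integers(h), rng.integers(w)
            py, px = rng.integers(h), rng.integers(w)
            candidate = img[y, x] + step * np.sign(img[py, px] - img[y, x])
            # Keep the change only if it lowers the error against the reference image.
            if abs(candidate - reference[y, x]) < abs(img[y, x] - reference[y, x]):
                img[y, x] = candidate
        return img

    # Usage: start from a noisy copy of a training image and measure the error drop.
    rng = np.random.default_rng(1)
    clean = rng.uniform(0, 255, size=(64, 64))
    noisy = clean + rng.normal(0, 20, size=clean.shape)
    better = refine(noisy, clean)
    print(np.abs(noisy - clean).mean(), np.abs(better - clean).mean())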

In fact, GPU technology has been so helpful to him that he said he would've been satisfied with a 20 percent quicker processing time three years ago. Now he describes himself as "jaded" to the 3-5X speedup he's achieving, and hopes to reach as much as 10X in the future.

Babb is also encouraging his colleagues at the University of Alaska to learn CUDA, because minimal code changes can deliver big speedups. In the future, he hopes to get access to Fish, a forthcoming GPU-based supercomputer, to further his work.

From BioDigital to Zoobe, Up-and-Coming Firms Tout Use of GPUs

Posted: 17 May 2012 12:53 PM PDT


GTC's Emerging Companies Summit (ECS) this week showcased nearly three dozen startups from around the world using GPUs to disrupt markets and delight customers. After a day of machine-gun style back-to-back mini-presentations, the event was capped by the announcement of five "One to Watch" awards. Winners raked in more than $20,000 in prizes each.

Presenting companies spanned a range of industries – from BioDigital, which is using 3D visualization of the human body to transform how medical information is communicated, to Zoobe, which lets people quickly share personalized, voice-animated video messages.

The ECS judging panel

The companies shared a resolve to solve hard problems with sophisticated offerings, in many cases built on NVIDIA GPUs. Cortexica Vision Systems' GPU-based platform may put an end to crossword puzzle-like QR codes with a new type of visual search. Fuzzy Logix aims to make analytics pervasive by embedding it directly into the business processes where data already resides.

Cloud computing emerged as a new theme of the event, now in its fourth year. Jeff Herbst, who runs the event for NVIDIA, said ECS's goal is to build a support network for promising companies, so they can learn from others and be inspired by a wider group of potential customers, partners and investors.

Among the presenting companies were: Rocketick, which is harnessing GPUs to help semiconductor companies accelerate the chip verification process; Unity Technologies, which makes it simple for anyone to construct their own games full of vivid 3D experiences; and MirriAd, which uses NVIDIA Tesla GPUs to place products in video content, such as TV shows and movies, tailored for local audiences and then analyzed for impact.

Winners of the "One to Watch" awards, in addition to BioDigital and Unity Technologies, were:

  • Elemental Technologies, which provides processing technology that uses GPUs to quickly optimize video and stream it over IP networks
  • Numira Biosciences, which is working to accelerate the drug development pipeline for pharmaceutical companies by shortening compound discovery and pre-clinical testing processes
  • Splashtop, which is a provider of a highly rated remote desktop app that streams a PC or Mac to a smartphone with smooth, high-resolution video and audio.
