Thursday 17 May 2012

The NVIDIA Blog

NVIDIA CEO Shakes Out Future Of Tech

Posted: 17 May 2012 11:27 AM PDT

In a fireside chat at GTC with industry analyst Tim Bajarin, NVIDIA CEO and co-founder Jen-Hsun Huang shared his vision of the future for everything from mobile devices to cloud computing to startup opportunities.

Jen-Hsun Huang speaking with Tim Bajarin at the GTC fireside chat. Attendees packed the room to see Jen-Hsun shake out the future of tech.

That vision centers, in part, on the notion that no single approach to technology will ever satisfy all users. Whether people should rely on the cloud or buy a certain device will continue to depend on their particular preferences, he said.

"Over time, what works for the mainstream isn't going to be desirable for the extremes," he said. "If everyone in this room has an iPhone, nobody's special."

That thought — that heterogeneity will reign more than ever — was sprinkled throughout the discussion. Among his other observations:

  • On the future of mobile platforms: "We're early in the development of mobile computing. All of the disparate elements need to be integrated. Everyone's got an opinion. Microsoft's got an opinion, Apple's got an opinion, Oracle's got an opinion. And the alignment of these interested parties isn't likely in the early stages of a new market. Give it a little bit of time, and I think the horizontal structure of the industry will become an advantage."
  • On Microsoft's evolving mobile strategy: "It was genius to separate Windows 8 and Windows RT (the recently announced touch-optimized OS). You can't reposition what a PC is anymore. If they want to create a new computing platform that has virtues in that you can see documents from the PC universe, yet it's exquisitely designed, you can't do this in a Windows x86 design."
  • On the future of laptops: "Computers are like cars. Some have two doors, some have four or five. Some have seven seats. It's different strokes for different folks. You'll have some that have keyboards, some that don't. Some will be gesture based, some won't. The one thing that's going to be really exciting is that everything is in the cloud."
  • On the future of packaged consumer apps: "The idea of buying an application in a box is weird to me. Tomorrow, it's just wrong."

After telling the audience where he believes the greatest opportunities for startups lie (the mobile cloud), Jen-Hsun said that legendary Silicon Valley venture capitalist Don Valentine once told him to look for a huge market, assemble great people and develop killer technology. But his advice to young entrepreneurs today took the form of a series of questions: Is this an important problem to solve? Are you the one to solve it? Are you more passionate about it than the competition? Are you more prepared?

"You ask these questions, and so long as the answers are all 'yes,' then I'm a big proponent of trying things," he said.

First Achievement Award Bestowed By CUDA Centers of Excellence

Posted: 17 May 2012 09:28 AM PDT

Researchers from Tokyo Institute of Technology snagged the first-ever Achievement Award for CUDA Centers of Excellence (CCOE) for their research with TSUBAME 2.0.

The team was one of four finalist groups from CCOE institutions, which include some of the world's top universities engaged in cutting-edge work with CUDA and GPU computing.

Each of the world's 18 CCOEs was asked to submit an abstract describing its top achievement in GPU computing over the past year and a half. A panel of experts, led by NVIDIA Chief Scientist Bill Dally, selected four CCOEs to present their achievements at a special event during GTC 2012 this week in San Jose. CCOE peers then voted for their favorite, which earned bragging rights as the inaugural recipient of the CUDA Achievement Award 2012.

The four finalists – each of which received an HP ProLiant SL250 Gen8 system configured with dual NVIDIA Tesla K10 GPU accelerators – are described below. Abstracts of their work are available on the CCOE Achievement Award website.


Barcelona Supercomputing Center, OmpSs: Leveraging CUDA for Productive Programming in Clusters of Multi-GPU Systems

OmpSs is a directive-based model through which a programmer defines tasks in an otherwise sequential program. Directionality annotations describe each task's data access pattern and convey to the runtime the information it needs to automatically detect potential parallelism, perform data transfers and optimize locality. Integrating this model with CUDA allows applications to leverage the dazzling performance of GPUs, enabling the same simple, clean code that would run on an SMP to run on multi-GPU nodes and clusters.
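
For flavor, here is a rough sketch of that directive style (the clause syntax is approximate and varies across OmpSs versions, so treat this as illustrative rather than authoritative): a CUDA kernel is declared as a task with directionality annotations, and the runtime infers dependencies, schedules the task on an available GPU and moves the data.

    // Illustrative OmpSs-style task annotations (syntax approximate; see
    // BSC's OmpSs documentation for the exact forms). The in()/out()
    // clauses are the directionality annotations: the runtime uses them to
    // infer dependencies, schedule tasks on GPUs and transfer data.
    #pragma omp target device(cuda) ndrange(1, n, 128) copy_deps
    #pragma omp task in([n] a, [n] b) out([n] c)
    __global__ void vec_add(const float *a, const float *b, float *c, int n);

    void sum_blocks(float *a, float *b, float *c, int n, int nblocks)
    {
        int chunk = n / nblocks;
        for (int i = 0; i < nblocks; i++)         // each call spawns a task;
            vec_add(&a[i * chunk], &b[i * chunk], // the runtime launches
                    &c[i * chunk], chunk);        // independent tasks per GPU
        #pragma omp taskwait                      // wait for all tasks
    }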


Harvard University, Massive Cross-Correlation in Radio Astronomy with Graphics Processing Units

The study of the universe is no easy task. Rather than struggle to build ever-larger telescopes in the quest to understand our vast universe, researchers at Harvard University are using GPU computing to help create telescope arrays composed of many smaller telescopes. They have developed the Harvard X-Engine code to integrate data from these arrays, with an emphasis on removing data-crunching bottlenecks.
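
At the heart of such an "X-engine" is a simple computation: for every pair of antennas, accumulate the product of one antenna's complex signal with the conjugate of the other's over an integration window. A hypothetical CUDA sketch of that inner loop might look like the following; the real Harvard X-Engine code is far more heavily optimized:

    #include <cuda_runtime.h>

    // Hypothetical sketch of an X-engine inner loop (not the actual Harvard
    // X-Engine code): for each antenna pair (i, j), accumulate the sum over
    // time of s_i(t) * conj(s_j(t)), producing one visibility per pair.
    __global__ void xcorr(const float2 *samples, // [nant * ntime] complex input
                          float2 *vis,           // [nant * nant] visibilities
                          int nant, int ntime)
    {
        int i = blockIdx.y;                            // antenna i
        int j = blockIdx.x * blockDim.x + threadIdx.x; // antenna j
        if (i >= nant || j >= nant) return;

        float2 acc = make_float2(0.0f, 0.0f);
        for (int t = 0; t < ntime; ++t) {
            float2 a = samples[i * ntime + t];
            float2 b = samples[j * ntime + t];
            acc.x += a.x * b.x + a.y * b.y;   // real part of a * conj(b)
            acc.y += a.y * b.x - a.x * b.y;   // imaginary part
        }
        vis[i * nant + j] = acc;              // one integrated visibility
    }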


Tokyo Tech, TSUBAME 2.0

Researchers at the Tokyo Institute of Technology have designed and constructed Japan's first petascale supercomputer, known as TSUBAME 2.0, along with a series of advanced software and research applications. This work has yielded numerous results presented at top academic venues, as well as global accolades and press coverage. Tokyo Tech highlighted three core achievements of the TSUBAME / CUDA CCOE, though its results are not limited to those.


University of Tennessee, MAGMA: A Breakthrough in Solvers for Eigenvalue Problems

Scientific computing applications – ranging from those that help analyze how earthquakes propagate through a medium and affect bridges, to those that simulate energy levels of electrons in nanostructure materials – require the solution of eigenvalue problems. The Matrix Algebra on GPU and Multicore Architectures (MAGMA) project aims to develop algorithms that will speed up computations on heterogeneous multicore-GPU systems by at least one order of magnitude.
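
In symbols, these are the standard and generalized eigenvalue problems, shown below in a minimal LaTeX rendering (the generalized form arises, for example, in electronic-structure calculations):

    % standard eigenproblem: find eigenpairs (lambda, x) of a matrix A
    A x = \lambda x, \qquad A \in \mathbb{R}^{n \times n},
    % generalized eigenproblem, e.g. with B symmetric positive definite
    A x = \lambda B x .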

Exascale Apps Pave Way To Supercomputing Greatness

Posted: 16 May 2012 07:47 PM PDT

Just over the horizon, exascale computing promises 1,000 times more processing power than today's petascale systems. But many questions remain about the challenges and opportunities on the path to exascale.

A panel of experts told GTC attendees Wednesday that developing applications capable of leveraging exascale systems will be key to realizing the benefits of next-generation supercomputers.

"It's time to get serious about what we're going to do to make sure we have applications ready for exascale systems," said panel moderator Mike Bernhardt, publisher of The Exacale Report. He suggested that the race to exascale is likely to be won or lost based on how well the software industry optimizes its applications for massive parallelism.

NVIDIA’s Steve Scott (right) talks exascale

Panelists wholeheartedly agreed with that premise.

"I'm not worried that we won't have applications that can run on these platforms," said Olav Lindtjorn, HPC advisor for oil-services giant Schlumberger. "I'm more concerned about being able to run them in parallel."

Steve Scott, CTO of NVIDIA's Tesla business, said he's skeptical of vendor predictions that apps optimized to run on exascale systems will be available by the end of this decade. "Will apps run on them? Yes. Will they run well? Absolutely not," Scott said.

Panelists were divided in their opinion about whether new programming models were needed to drive the "exascaling" of applications. Scott said that regardless of which coding tools developers use, the software industry has to find a way to express locality and expose parallelism to take full advantage of exascale systems.

Jeffrey Vetter, distinguished R&D staff member and leader of the future technologies group at Oak Ridge National Laboratory, opined that new programming models will be most important in building robust exascale apps that can contend with system failures, load balancing requirements and the like.

Schlumberger's Lindtjorn, meanwhile, said he's not convinced that vendors will have the necessary programming tools ready in time. But he believes existing tools can be used to achieve the kind of performance levels expected of exascale systems.

The panelists wrapped up the session on an encouraging note. They all agreed that, despite the remaining obstacles on the road to true exascale applications, the HPC community shouldn't let its enthusiasm for exascale wane.

"It's a great time to be a computer scientist," said Vetter. "There's a lot of exploration going on. The key is to remain optimistic that we're going to get there."

Satoshi Matsuoka, a computer scientist from Tokyo Institute of Technology, encouraged applications developers to seek out conversations with computer scientists for answers. "It's really enjoyable," Matsuoka said of getting such inquiries. "It gives me interesting problems to solve."

Scott left attendees with a word of caution: Think big if you have code that you'd like to see running on exascale systems several years from now. "Don't think about incrementally increasing your parallelism," he said. "You need to be thinking, 'Wow, how can I give myself 1,000 times as much parallelism as I have now?'"

Using GPUs to Decipher Animal (and Human) Crowd Behavior

Posted: 17 May 2012 09:58 AM PDT

You'd be hard-pressed to find an example of technology with the potential to change the course of humanity more than the one provided by behavioral ecologist Iain Couzin at Wednesday's GTC keynote address.

Couzin, a postdoctoral research fellow at Princeton's Department of Ecology and Evolutionary Biology, is conducting research that could help humans not only grasp the mysteries of collective animal behavior, but potentially apply that understanding to our own tendencies.

Thousands of attendees packed the keynote hall for Prof. Couzin’s presentation

Couzin focuses on how and why animals collectively behave the way they do. And he credits CUDA with enabling him to simulate group behavior in ways that were previously impossible.

"The whole way I do science has been transformed by GPU computing," Couzin told the audience of some 2,500 attendees. "We can spend $500 [for a GPU] and suddenly have more computational power than we could have dreamed of the previous year."

Not that he's settling for such an off-the-shelf approach; Couzin is so jazzed by the impact of GPUs on his work that he said he's seeking funding to establish a larger, more permanent GPU-based system. He has his sights set on upgrading the four PCs packed with GeForce and Tesla boards currently used in his lab.

Those little colored dots on the screen represent a school of simulated fish

One way he's using GPUs in his research is to simulate the movements of schooling fish – up to 32,000 of them. The GPUs allow him to simulate the impact of certain stimuli on collective behavior. "As a biologist, I want to get inside the heads of these individuals and understand how they communicate and coordinate," he said.
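
Couzin's published schooling models are typically zonal: each fish is repelled by very close neighbors, aligns with those at intermediate range and is attracted to those farther out. A toy CUDA kernel for one step of such a model (a simplified sketch, not Couzin's actual code) maps one thread to one fish:

    #include <cuda_runtime.h>

    // Toy zonal-schooling step, one thread per fish. A simplified sketch of
    // the kind of model described in the talk, not Couzin's actual code.
    struct Fish { float2 pos; float2 dir; };

    __global__ void school_step(const Fish *in, Fish *out, int n,
                                float r_rep, float r_ori, float r_att, float dt)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        float2 steer = make_float2(0.0f, 0.0f);
        for (int j = 0; j < n; ++j) {             // O(n^2); real codes bin space
            if (j == i) continue;
            float dx = in[j].pos.x - in[i].pos.x;
            float dy = in[j].pos.y - in[i].pos.y;
            float d  = sqrtf(dx * dx + dy * dy) + 1e-6f;
            if (d < r_rep) {                      // zone of repulsion: move away
                steer.x -= dx / d; steer.y -= dy / d;
            } else if (d < r_ori) {               // zone of orientation: align
                steer.x += in[j].dir.x; steer.y += in[j].dir.y;
            } else if (d < r_att) {               // zone of attraction: approach
                steer.x += dx / d; steer.y += dy / d;
            }
        }
        float len = sqrtf(steer.x * steer.x + steer.y * steer.y);
        float2 nd = (len > 1e-6f)
                  ? make_float2(steer.x / len, steer.y / len)
                  : in[i].dir;                    // no neighbors: keep heading
        out[i].dir = nd;
        out[i].pos = make_float2(in[i].pos.x + nd.x * dt,
                                 in[i].pos.y + nd.y * dt);
    }

At 32,000 fish, the all-pairs loop above works out to roughly a billion pairwise checks per step, exactly the kind of arithmetic-dense workload that maps well to GPUs; a production code would use spatial binning to cut that cost.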

To illustrate, he provided compelling examples of how he's accomplishing this. These include:

  • Applying mathematical equations to understand why fish, when stimulated, naturally form a swirl around an empty center.
  • Modeling the behavior of fish in at-risk environments, such as the Gulf of Mexico, to determine how a deleterious event, like the BP oil spill, can impact group decision-making.
  • Using robotic predators to study responses to attacks, with the goal of determining strategies for how best to stimulate, counter or otherwise contend with group behavior.
  • Studying how uninformed individuals affect group decision-making.

In this last example, he made a startling discovery. Counter to the conventional wisdom that uninformed humans are more easily influenced by extremists, his findings suggest that the presence of those without strong views increases the odds that a group will go with the majority opinion.

Watch a replay of this GTC 2012 keynote here.

LEGO Locks In On CUDA To Build A Better Business

Posted: 16 May 2012 11:54 AM PDT

A few short years ago, the LEGO Group—makers of the iconic interlocking building-block toys so many parents have stepped on in the middle of the night—was plagued by uncontrolled technology sprawl.

Various business units within LEGO were purchasing redundant licenses for the same technologies, with each team using them for different purposes. Many business units had even established their own computing platforms. The result was unnecessary cost and added complexity.

Michael Schøler, part of a team from Danish consultancy Hinnerup Net, was brought in by LEGO (also based in Denmark) to help sort through the confusion. During a GTC 2012 session Tuesday, he said it was clear at the time that the company needed a unified technology platform that could do everything: facilitate high-end and low-end games, support mobile applications, power the LEGO.com website — you name it.

Henrik Høj Madsen (left) and Michael Schøler (right) lead the GTC 2012 session

So LEGO turned to NVIDIA. Zeroing in on the CUDA computing platform, the company wanted not only fast rendering of 3D imagery but also to leverage CUDA for critical business functions. Now, three years later, LEGO is running much of its business on the platform.

"We have a proven system that's working well," Schøler said during an interview following his session.

CUDA also helped LEGO solve a very specific—and performance-draining—problem. Some 95 percent of the little circular knobs that enable LEGO pieces to interlock are invisible in a finished model, yet a massive amount of the company's compute power was being sucked up to render those polygons. With Hinnerup Net's help, LEGO tapped CUDA to purge the invisible polygons in its rendering systems, freeing up computing resources.
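
The culling idea is easy to sketch: a stud can be skipped whenever another brick sits directly on top of it. A hypothetical CUDA pass over a brick-occupancy grid (an illustration of the technique, not LEGO's actual code) could flag hidden studs in parallel:

    // Hypothetical stud-culling pass (not LEGO's actual code): a stud at
    // grid cell (x, y, z) is invisible when the cell directly above it is
    // occupied by another brick, so its geometry can be skipped at render.
    __global__ void cull_studs(const unsigned char *occ, // 1 if cell has brick
                               unsigned char *visible,   // 1 if stud is drawn
                               int nx, int ny, int nz)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        int z = blockIdx.z;
        if (x >= nx || y >= ny || z >= nz) return;

        int idx = (z * ny + y) * nx + x;
        if (!occ[idx]) { visible[idx] = 0; return; }     // no brick, no stud

        // The top layer is always exposed; otherwise check the cell above.
        int above = ((z + 1) * ny + y) * nx + x;
        visible[idx] = (z == nz - 1) ? 1 : !occ[above];
    }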

Interestingly, one asset LEGO has not yet ported to the CUDA platform is the company's high-end 3D rendering system, but Schøler said his team is working on that. They've developed a proof of concept, and it's performed well so far. All that's left is to convince the affected project groups at LEGO to give the green light to make a change.

"We're trying to convince the business that this is the way to go," said Schøler. "We're doing the marketing for NVIDIA."
