IDC HPC User Forum – San Diego

Platform just returned from attending the IDC HPC User Forum, held Sept. 6-8 in San Diego.


Compared with previous years, this year’s event seemed to draw fewer people from the second and third tiers of the HPC industry. Overall attendance also appeared to be about half that of the April event in Texas.


This time the IDC HPC User Forum was dominated by a focus on software and the need to recast programming models. There was also a renewed focus on getting ISVs and open source development teams to adopt programming models that can scale far beyond their current limits. Two factors are driving this emphasis:
  • Extremely parallel internals for compute nodes, from both a multi-core and an accelerator (CUDA, Intel, AMD) point of view (see the sketch after this list).
  • The focus on “exa” scale, which by all counts will be achieved by putting together ever-increasing numbers of commodity servers.
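
To make the first point a bit more concrete, here is a minimal hybrid MPI + OpenMP sketch of our own (not something presented at the forum) showing the two levels of parallelism a scalable programming model has to address: many cores inside each node, and many nodes in the overall system.

  /* Illustrative sketch only (our own, not from the forum): a hybrid
     MPI + OpenMP program showing two levels of parallelism -- many
     cores inside each node, and many nodes in the overall system.
     Build with something like: mpicc -fopenmp hybrid_hello.c -o hybrid_hello */
  #include <mpi.h>
  #include <omp.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);               /* scale-out: one rank per node (or per socket) */

      int rank, nranks;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &nranks);

      #pragma omp parallel                  /* scale-up: one thread per core within the node */
      {
          printf("rank %d of %d, thread %d of %d\n",
                 rank, nranks, omp_get_thread_num(), omp_get_num_threads());
      }

      MPI_Finalize();
      return 0;
  }

An accelerator model such as CUDA would add yet another level of parallelism inside each node, which is exactly why so much of the discussion centered on recasting existing ISV and open source codes.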

Typically there is a theme to the presentations at this multi-day event, and this forum was no different. Industry presentations focused heavily on the materials science being performed primarily by the US national labs and on future applications of the results being obtained. The product horizon for the technologies presented was estimated at approximately 10 years.


In contrast to the rest of the industry, which is very cloud-focused right now, cloud computing was presented or mentioned only three times by various vendors, and also by the Lawrence Berkeley National Laboratory (LBNL), at the forum. When it comes to cloud, there seems to be a split between what the vendors are focusing on and what the attendees believe. Specifically, attendees from national laboratories tend to be focused on “capability” computing (e.g., large massively parallel jobs running on thousands of processors). Jeff Broughton from LBNL presented data from a paper showing that, for the most part, cloud computing instances are not ready for the challenge of running HPC workloads.


Though we can’t refute any of the data or claims made by Mr. Broughton, the conclusions drawn from his data may extend beyond what the facts support. For instance, in our experience here at Platform, most HPC requirements in industry do not span more than 128 cores in a single parallel job, nor do they require more than 256 GB of memory. The requirements of most companies doing HPC are significantly more modest and are therefore far better suited to a cloud computing model.


We at Platform have long been fighting the “all or nothing” notion of HPC employing cloud technology. Rather, we believe that industry, especially in Tiers 2 and 3, will to a greater or lesser degree be able to make extremely beneficial use of cloud computing to address its more modest HPC requirements. Platform is focused on developing products that help these customers realize this benefit easily. Stay tuned for more on Platform’s cloud family of products for HPC in the coming months…
