Gates: Microsoft Ships Beta 2 of HPC Cluster Version of Windows

SEATTLE – Not satisfied with a place in the data center, Microsoft is making a play for the laboratory, too. Chairman and chief software architect Bill Gates told a standing-room-only audience at the Supercomputing 2005 conference held here this week that the company has released the second beta of its upcoming high-performance computing (HPC) edition of Windows.

Microsoft had originally hoped to ship Windows Compute Cluster Server 2003 by the end of the year, but the release is now targeted for the first half of 2006. When it does ship, the software will run only on systems that support Intel's and AMD's 64-bit memory addressing technologies.

The system is intended for use in clusters of inexpensive machines that work simultaneously on the same problem. The server's base code is Windows Server 2003 Service Pack 1. Windows Compute Cluster Server will consist of two CDs -- a Compute Cluster Edition based on the x64 release of Windows Server 2003, and a Compute Cluster Pack consisting of the Message Passing Interface (MPI) layer, job scheduler, administrative console and management tools. The components can be used together or separately.
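To give a sense of what those components run, the sketch below is a minimal MPI program in C -- standard MPI calls only, not code drawn from Microsoft's beta -- of the sort the Compute Cluster Pack's MPI layer executes and its job scheduler farms out across the nodes of a cluster.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI runtime for this process */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's ID within the job */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes in the job */

    /* Each process reports in; in a real workload, each would take a slice of the problem. */
    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut down the MPI runtime */
    return 0;
}

On a typical MPI installation, a program like this is compiled once and launched across the cluster with a command along the lines of mpiexec -n 8 hello, with the scheduler deciding which machines the eight copies land on; the exact tool names and options in Microsoft's product may differ.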

Microsoft has offered failover clustering to support high-availability computing for years. However, this is the company's first foray into the world of high-performance computing.

Gates also announced that the company is investing in ten HPC institutes worldwide.

The multiyear, multimillion-dollar plan will fund work at five U.S. universities -- Cornell University, the University of Tennessee, the University of Texas at Austin, the University of Utah and the University of Virginia -- and at five institutes overseas: Nizhni Novgorod State University in Russia, Shanghai Jiao Tong University in the People's Republic of China, the Tokyo Institute of Technology in Japan, the University of Southampton in England and the University of Stuttgart in Germany.

At the high end, the supercomputing world is increasingly dominated by massive clusters of inexpensive Linux servers. Microsoft's new investments may broaden its presence among those high-end systems, but the company's plan to make money from the technology involves much less grandiose applications.

"We see as a key trend here … that we'll have supercomputers of all sizes, including one that will cost less than $10,000 and be able to sit at your desk or in your department," Gates said Tuesday. Users will be able to employ such "personal supercomputers" for preliminary results or relatively simple problems. Architectural continuity between the personal or workgroup supercomputers and a much larger supercomputing cluster at, say, company headquarters could allow the same computation to be run with a finer level of detail. "We need an approach here that scales from the smallest supercomputer that will be inexpensive up to the very largest," Gates said.

John Borozan, a group product manager in the Windows Server Division, says Microsoft sees a huge opportunity at the low end of supercomputing. He points to IDC research showing that 8 percent of all x86 servers are purchased for high-performance computing clusters today. "In spite of amazing growth in this market, it's still very difficult to build an HPC cluster. You need to get the hardware, you need to find interconnects, you've got to put an OS on there, you've got to put an MPI on there, a job scheduler," Borozan said.

"We're going to focus on the departmental and workgroup level. One of our acknowledged strengths as a technology vendor is bringing things mainstream and making them easy to use," Borozan said.

Just because Microsoft aims to bring clusters down from government labs into corporate departments doesn't mean the company is settling for a small piece of the existing revenue pie. Microsoft is betting that high-performance clusters will become a major way data is processed in the near future.

"We think the next big revolution in science and industry is going to be driven by data -- flooding in from sensors as well as more traditional data from computational models," Borozan said.

During his keynote, Gates provided an example of sensors that might one day feed a supercomputing cluster. The University of Washington's NEPTUNE project involves dropping thousands of sensors along the undersea fault off the West Coast of the United States to study "black smokers," which scientists think may hold clues to the origin of life on Earth. "We can see this in many of the sciences, that low-cost sensors that give us overwhelming amounts of data and yet that we want to control in real time, will be feasible," Gates said.

Scott Bekker contributed to this report.

About the Author

Stuart J. Johnston has covered technology, especially Microsoft, since February 1988 for InfoWorld, Computerworld, Information Week, and PC World, as well as for Enterprise Developer, XML & Web Services, and .NET magazines.
