Windows 2000 on the Mainframe

Datacenter Server isn’t just about the software. Its adoption also encompasses hardware and partnerships, mindset and expectations. Here’s what you can look forward to when you enter the glass house.

Sept. 26, 2000 marks the day Microsoft announced the release of the last member of the Windows 2000 operating system family, Windows 2000 Datacenter Server. In the past, when Microsoft released a new OS, it usually meant new software features, new utilities, a new management GUI and, most likely, more certification exams.

However, with Win2K Datacenter Server, the emphasis is less on software and more on packaging and a new mindset. This is Microsoft’s first orchestrated attempt at becoming a player in the corporate data center environment — the tough-to-crack “glass house.”

Software Differences
This isn’t to say that the software is identical to Win2K Advanced Server. There are a few differences: two new components (Process Management and WinSock Direct); some raised software limits; removal of most non-Datacenter HCL-compliant drivers; and built-in Service Pack 1.

Process Management is a user-mode application that provides a graphical interface to the Job Object API set. The Job Object APIs are present on all flavors of Win2K and provide access to the NT kernel's class-scheduling features. Class scheduling, a feature previously found on high-end proprietary systems, allows the system administrator to carve out resources and assign them to tasks. It also allows resource limits to be placed on tasks, preventing a runaway application from taking over a system. Anyone could write a UI to the Job Object APIs that works on all Win2K installations; Microsoft, however, provides one that works only on Win2K Datacenter Server.
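Job Objects are a Win32-specific kernel facility, so they can't be shown portably here. As a rough analogy only (not the Datacenter tool itself), the sketch below uses Python's POSIX `resource` module to put a hypothetical 1G address-space cap on a deliberately runaway child process, illustrating the "runaway application can't take over the system" idea that Job Object limits provide.

```python
import resource
import subprocess
import sys

# The "runaway" workload: allocate memory without bound until a cap stops it.
RUNAWAY = r"""
data = []
try:
    while True:
        data.append(bytearray(10 * 1024 * 1024))  # 10 MB chunks
except MemoryError:
    print("capped after", len(data) * 10, "MB")
"""

def cap_address_space():
    # Runs in the child just before exec (POSIX only): ~1 GB address space,
    # an arbitrary figure chosen for illustration.
    resource.setrlimit(resource.RLIMIT_AS, (1 << 30, 1 << 30))

result = subprocess.run(
    [sys.executable, "-c", RUNAWAY],
    preexec_fn=cap_address_space,
    capture_output=True,
    text=True,
)
print(result.stdout.strip())
```

The parent process (and the rest of the system) is untouched; only the child hits the ceiling, which is the same containment effect a Job Object limit gives an administrator on Win2K.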

WinSock Direct (WSD) is a set of drivers and kernel routines that allows streamlined WinSock communication over a System Area Network (SAN; not to be confused with a Storage Area Network). Any application that uses WinSock automatically gets the performance boost from WSD.

Several limits differ between Win2K Advanced Server and Win2K Datacenter Server, but those differences aren't implemented as separate code. Rather, the same code enforces different limits based on which OS version it detects. Datacenter Server raises the physical memory limit from 8G to 64G, the SMP CPU limit from eight CPUs to 32 CPUs, and the server cluster node limit from two to four.

As I discuss shortly, Datacenter Server has a distinctly different Hardware Compatibility List (HCL). Microsoft removed most of the drivers for the hardware that didn’t make it on the HCL. When compared to Advanced Server, this reduced the software footprint by about 33M.

Finally, SP1 is built into Datacenter Server so the initial version of the product should be more stable than the initial versions of the other Win2K flavors.

Relationship and Process Differences
If the software is basically identical, what makes Datacenter Server different? For starters, you won’t be able to buy it! It’ll only be available when shipped with certain approved (HCL) original equipment manufacturer (OEM) systems. Furthermore, once you have a distribution kit, it only installs on the hardware it’s intended for.

But the real difference shows up in how you support the system. Instead of being left to fend for themselves, MCSEs administering the system will be part of a large team that includes the OEM and Microsoft. One Microsoft requirement is that the OEM offer a specialized support infrastructure. That includes a priority support phone number and a Joint Support Team (JST) staffed by employees of the OEM as well as Microsoft. This ensures that problems are speedily diagnosed and sent to the correct engineering team for resolution — without finger-pointing. To top it off, Datacenter calls are given the highest priorities by the OEM support teams and Microsoft’s critical problem resolution (CPR) and quick-fix engineering (QFE) teams.

Partners on the High End

The following companies have announced intentions to offer support — with hardware and services — for Windows 2000 Datacenter Server.

Amdahl
Bull
Compaq
Dell Computer Corp.
Fujitsu
Fujitsu Siemens Computers
Hitachi
Hewlett-Packard Company
IBM
ICL
Stratus Computer
Unisys

Appearing to be at the head of the release pack are those companies that have already publicly demonstrated multi-node failover clustering configurations: Compaq, HP, Unisys, and IBM. The list, with links, is available at

The Datacenter Server HCL is quite different from the usual Win2K HCL. It includes complete combinations of systems — called stacks or bundles: server, peripherals, firmware, system software, and utilities. To get on the HCL, a maxed-out configuration of the bundle needs to pass a rigorous Windows Hardware Quality Lab (WHQL) stress test.

Taking an HCL system and blindly adding a component, like an NE2000-compatible NIC, will cost the system its HCL status, and with it, access to the JST for support. On the other hand, once a system is on the HCL, the administrator won't be required to apply the latest hot fix or the latest firmware update before getting support. Rather, the opposite will happen: firmware updates and hot fixes won't be applied to correct a problem unless planned, tested, and approved by the JST.

Mainframe Management
For MCSEs who have managed mainframes in a prior life, the transition will be easy; but for the rest of you, it’s going to take a while to get up to speed. Since Datacenter Server specifically targets the 24/7 operation, mainframe management techniques have to be used. Today, the difference between a mainframe and non-mainframe system comes down to how it’s managed. I can have a multi-processor system on my desktop with just as much power as the average “glass house” mainframe, but I wouldn’t manage them in the same way. When the system is purchased, the OEM and Microsoft provide you the support infrastructure, but — ultimately — whether the system offers true data center mainframe services relies on the administrator in charge and how the system is managed.

The cornerstones of the Datacenter Server computing environment are application availability (uptime), data integrity, and security. Uptime and data integrity rely on procedure and infrastructure. The infrastructure should provide redundant power feeds, redundant air conditioning, and adequately trained staff. When sizing a Datacenter Server, it’s imperative to have the “fort under siege” mentality. You wouldn’t allow single points of failure on your client network — what good is an up server if the client can’t access it? You might need to upsize your system to ensure infrastructure service availability.

Uptime means that problems can be resolved quickly. So it's imperative to have application expertise, access to (and knowledge of) good troubleshooting tools, both OS and application, and the ability to fix problems without rebooting the server. Change control is also essential to good uptime. Changes shouldn't be applied until thoroughly tested and documented, a good backup should precede every change, and you need to keep a complete log of all changes (dates, who, what, how, backup location, and so on). If a change requires a reboot to take effect, don't apply the change and postpone the reboot; otherwise, problems can go undetected until the next reboot, by which time you may have forgotten about the change.
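The change log described above can be as simple as one structured record per change. Here's a minimal sketch; the field names are illustrative, not a Microsoft or OEM-prescribed format:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ChangeRecord:
    applied_by: str          # who made the change
    description: str         # what was changed
    procedure: str           # how it was applied
    backup_location: str     # where the pre-change backup lives
    approved_by_jst: bool    # planned, tested, and approved first?
    requires_reboot: bool    # if True, reboot now rather than postpone
    applied_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

change_log: list[ChangeRecord] = []
change_log.append(ChangeRecord(
    applied_by="jdoe",
    description="Apply approved NIC firmware update",
    procedure="Vendor flash utility, per JST-approved plan",
    backup_location="tape A113, offsite vault",
    approved_by_jst=True,
    requires_reboot=True,
))
print(len(change_log), "change(s) logged")
```

Whether it lives in a database or a binder, the point is that every field is filled in before the change is applied, not reconstructed from memory afterward.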

In Win2K, Microsoft Cluster Server (MSCS) is still basically the same as in Windows NT Server 4.0 Enterprise Edition. It only allows application failover between servers — no Distributed Lock Manager (DLM), no dynamic load balancing, no process failover. By increasing the number of nodes in the cluster, Microsoft allows you to decrease the cost of consolidating standalone Win2K servers into a Datacenter Server cluster. You only need to provide one spare system to back up three production servers. In Win2K Advanced Server, with a two-node cluster, you need one full spare server per production server. Datacenter Server clustering also allows you to increase availability by having multiple backup servers for a very critical server. One major improvement is that Microsoft is also releasing a generation of truly cluster-aware BackOffice applications: Exchange 2000 (you’ll have to wait for SP1 to support Datacenter Server) and SQL Server 2000.
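The spare-server arithmetic above is easy to work through. A sketch, using a hypothetical fleet of six production servers (the one-spare-per-cluster assumption matches the failover-only model described in the text):

```python
# Two-node clustering pairs one spare with each production server; a
# four-node Datacenter cluster lets one spare back up three production nodes.
import math

def spares_needed(production: int, nodes_per_cluster: int) -> int:
    # Each cluster dedicates one node as the spare; the rest run workloads.
    active_per_cluster = nodes_per_cluster - 1
    return math.ceil(production / active_per_cluster)

production = 6
print("two-node clusters: ", spares_needed(production, 2), "spares")
print("four-node clusters:", spares_needed(production, 4), "spares")
```

For six production servers, two-node clustering demands six spares where four-node clustering needs only two, which is where the consolidation savings come from.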

Data integrity means fault-tolerance in the storage sub-system (RAID and hot-spares) and a bulletproof backup procedure. The procedure must account for data consistency, backup media rotation, media re-use life expectancy, off-site media storage and retention, and frequent disaster recovery testing. The backup sub-system should offer adequate throughput to back up everything without having to postpone backing up some data.
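"Adequate throughput" can be estimated directly from the data volume and the backup window. A quick sizing sketch, with hypothetical figures (200G of data, a four-hour nightly window):

```python
# Sustained throughput the backup subsystem must deliver to fit the window.
def required_throughput_mb_s(data_gb: float, window_hours: float) -> float:
    return (data_gb * 1024) / (window_hours * 3600)

rate = required_throughput_mb_s(data_gb=200, window_hours=4)
print(f"need ~{rate:.1f} MB/s sustained")
```

If the hardware can't sustain that rate end to end (including the source disks and the network, not just the tape drives), something gets postponed, which is exactly the compromise the article warns against.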

Additional Information
The official Web area devoted to Windows 2000 Datacenter Server is at
To learn more about exam 70-226, Designing Highly Available Web Solutions with Microsoft Windows 2000 Server Technologies, which is expected to go into beta in April, visit

Finally, strive for and expect stability. The system should be sized to handle foreseeable growth and not require frequent hardware updates. Microsoft requires that Datacenter OEMs support the hardware for extended periods of time after they stop making it — a managed end-of-life (EOL) cycle. So even if you can’t buy additional CPUs or memory, if your server is sized correctly from the get-go, it will be supported and able to run your mission-critical application for much longer than the traditional life expectancy of a normal server.

Microsoft emphasizes that Datacenter Server can reduce TCO by allowing server consolidation. In theory, you could take four Win2K Servers, each with two CPUs and 4G of memory, and turn them into a Win2K Datacenter Server with eight CPUs and 16G of physical memory. Be careful that the savings on management cost and software licensing don’t translate into lower total uptime.
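The consolidation example above is simple addition, and the result stays well inside Datacenter Server's 32-CPU/64G ceilings:

```python
# Fold four standalone 2-CPU/4G servers into one Datacenter box by
# summing CPUs and memory, then check the result against the OS limits.
servers = [{"cpus": 2, "ram_gb": 4}] * 4

consolidated = {
    "cpus": sum(s["cpus"] for s in servers),
    "ram_gb": sum(s["ram_gb"] for s in servers),
}
assert consolidated["cpus"] <= 32 and consolidated["ram_gb"] <= 64
print(consolidated)  # {'cpus': 8, 'ram_gb': 16}
```

The sizing math is the easy part; the uptime caveat is that those four workloads now share one failure domain, so the clustering and change-control practices above matter even more after consolidation.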

Great Expectations
I hope I’ve managed to provide you with a sense of what Win2K Datacenter Server is and isn’t. Unlike other flavors of Win2K, Datacenter Server isn’t just software. It’s a combination of hardware, software, and partnership. It’s also about mindset and expectations. And, yes, we can expect to see a high-availability certification exam coming from Microsoft! See you at the testing center.
