Win 2k in Mainframeland

When Microsoft Corp. rolls out its Windows 2000 Datacenter Server on Sept. 16, it won't be the technology alone that separates the operating system from previous versions of Windows.

It's also going to be the manner in which the product is packaged and delivered through qualified hardware partners that will boost its chances in the enterprise, users and analysts say.

Datacenter Server is Microsoft's most serious attempt yet to scale its technology into the glass house - a territory long controlled by mainframe and Unix vendors.

The operating system is being targeted mainly at high-end database, server-consolidation and application service provider markets.

Datacenter will support as many as 32 processors in symmetric multiprocessing configurations and up to 64GB of memory. It will also include several availability and reliability features, such as four-node clustering, partitioning and dynamic load balancing.

Features such as these promise to bring new levels of reliability, availability and scalability to the Windows environment, says Bruss K. Bowman, founder of Quality Care Solutions Inc. (QCSI), a Datacenter beta tester and a provider of claims and benefits administration software for the health care industry.

"These attributes primarily allow us to have database servers capable of handling the largest health plans without having to split our database across multiple servers," adds Stephen Piazza, a senior software engineer at Phoenix-based QCSI.

Datacenter will offer QCSI "enterprise-class performance at both the database and business-process tiers," says Bowman.

Hardware Vendors Key

But a lot of its success will depend on Microsoft's hardware partners, who will preload, sell and support the operating system, say analysts.

In a significant break with tradition, customers won't be able to buy the software directly from stores or from Microsoft. Rather, they will only be able to purchase the operating system preloaded on qualified hardware sold by vendors that are part of Microsoft's Windows Datacenter Program.

Under the program, Microsoft and its server hardware partners will deliver Datacenter as a package of hardware, software and support services.

Only hardware vendors that have tested their products under a Microsoft program will be certified to license and support Datacenter Server.

The goal is to improve reliability and application uptime, says Paul Sinton-Hewitt, a U.K.-based manager at Blue Bell, Pa.-based Unisys Corp.

"It's a process that is equivalent to the kind of testing that IBM does on its mainframes," says Sinton-Hewitt. The idea is "to stress-test for a period of 14 days any element of the configuration that touches the kernel of the operating system."

As a result, Windows 2000 Datacenter Server "is not [about] Microsoft's technology per se ... it is the unique packaging," says John Enck, an analyst at Gartner Group Inc. in Stamford, Conn.

"There has been a lot of really intense work to test out Datacenter [on specific hardware configurations]," Enck adds. "We think that's a good thing for users."

Primary Contacts

Microsoft so far has certified only about a dozen hardware vendors, including IBM, Hewlett-Packard Co., Compaq Computer Corp., Dell Computer Corp. and Unisys, to package and sell Datacenter Server configurations.

The hardware vendors will be primary contacts for all Datacenter support issues. Most of them are setting up support teams combining their staff with Microsoft's support staff.

Compaq, for instance, has set up a permanent support facility at Microsoft's headquarters in Redmond, Wash., says Datacenter product manager Tim Golden.

"This is the second line of support for any Datacenter customer. They will be on the phone directly with engineers intimately involved" with Datacenter, he claims.

This kind of support is crucial, says Scott Newton, a senior technical architect at cookie maker Otis Spunkmeyer Inc. in San Leandro, Calif. "You are talking about a really heavy-hitting operating system here. You are talking about major changes and the need for a lot of expertise," he says.

"It's not something you are going to be able to peddle through Joe's Coffee Shop," says Newton, whose company recently deployed Microsoft's Windows 2000 across 63 of its offices.

Also, some of the reliability and stability woes that have dogged Windows for years have been the result of interoperability issues relating to third-party hardware and add-on software such as virus scanners.

For instance, "nearly 40 percent of all errors [in Windows NT] were coming from badly produced drivers from third-party device makers," claims Sinton-Hewitt.

Microsoft has been unable to do much because - unlike other major enterprise operating system vendors such as Sun Microsystems Inc., IBM or HP - it has had little control over the hardware on which its operating system runs.

The certification process is aimed at addressing that issue.

"The hardware qualifications Microsoft is using to create approved configurations will guarantee a rock-solid platform," says Piazza.

The basis for that stability, of course, will be some of the new technology in the core operating system itself, analysts say.

Here too, Microsoft appears to have taken steps to build in the features needed to tackle heavy-duty application loads.

Key Capabilities

Users and analysts say Datacenter's key capabilities include the following:

-- Symmetric multiprocessing support for up to 32 Intel processors. This provides for significantly more scalability than existing Windows versions, which top out at eight processors.

-- Four-node fail-over capability for increased application availability. When a server in a cluster fails, the remaining servers automatically pick up the work of the failed servers. Previous Windows versions allowed users to link two servers in such a configuration. Four-node support will allow for greater reliability and better distribution of workloads.

-- Support for up to 64GB of main memory. This will greatly boost the performance of applications that manipulate large amounts of data, such as databases and engineering applications. That's because increasing the amount of data that can be kept in memory makes for much faster processing.

-- An application memory-tuning capability called 4GB tuning that boosts application performance by maximizing the amount of memory available to a given application.

-- A process-control tool for managing tasks such as allocation of system resources and fault isolation.

-- An enhanced version of Microsoft's WinSock technology that boosts the speed at which applications will be able to communicate with one another in a network.

-- Network load-balancing services to optimize network utilization.
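The four-node fail-over capability described above can be illustrated in miniature. The following Python toy (all names and logic are hypothetical, a conceptual sketch rather than Microsoft's actual clustering implementation) shows the basic idea: when one node in a cluster fails, its workloads are redistributed among the surviving nodes.

```python
# Illustrative sketch only: a toy model of four-node cluster fail-over,
# where surviving nodes pick up the workloads of a failed node. Names
# and logic are hypothetical, not Microsoft's clustering API.

class Cluster:
    def __init__(self, nodes):
        # Map each node name to the list of workloads it currently runs.
        self.workloads = {node: [] for node in nodes}

    def assign(self, node, job):
        self.workloads[node].append(job)

    def fail(self, node):
        """Simulate a node failure: move each orphaned job to the
        surviving node with the lightest current load."""
        orphaned = self.workloads.pop(node)
        for job in orphaned:
            target = min(self.workloads, key=lambda n: len(self.workloads[n]))
            self.workloads[target].append(job)

cluster = Cluster(["node1", "node2", "node3", "node4"])
cluster.assign("node1", "database")
cluster.assign("node2", "web")
cluster.fail("node1")
# "database" now runs on one of the three surviving nodes.
```

With two-node clustering, a single failure leaves only one machine to absorb all the work; four nodes give the cluster more survivors to spread that load across, which is the reliability gain the article describes.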

Features such as these will make Datacenter especially suited as an application consolidation platform, says Tom Meile, director of infrastructure at Penn National Insurance in Harrisburg, Pa.

Penn National is in the process of consolidating applications from 60 servers to a Windows NT environment running on a 32-processor Intel server that it purchased from Unisys earlier this year.

Datacenter will allow Penn to eventually treat the server "more like a stable mainframe environment, where we will be able to run multiple jobs and balance the workload much better," says Meile.

Maybe so - but only eventually, caution analysts.

Don't expect Datacenter to immediately offer any significant price and performance benefits over the Unix and mainframe technologies that it will be competing against, says Al Gillen, an analyst at International Data Corp. in Framingham, Mass.

Features Already Available

Though the scalability and availability promised in Datacenter are new to Windows, they have been available for quite a while in the Unix and mainframe markets.

Take Sun's market-leading Solaris Unix operating system, for instance. The environment has supported 64 processors, dynamic partitioning and failure isolation for several years now.

Compaq's Tru64 Unix implements dynamic load balancing and TruCluster clustering technology for increasing application uptime. TruCluster software allows users to manage up to eight individual Web servers as a single system.

Although Microsoft is still trying to make a case for users to consider Datacenter as an enterprise system, Unix vendors will be playing from rich experience, Gillen says.

"Unix vendors have a head start on Microsoft. That experience will translate into some sort of a core competitive advantage," when Datacenter server starts shipping, he adds.

The fact that Datacenter implementations, at approximately US$250,000, will likely have a much higher starting price than most Windows deployments will also contribute to a cautious deployment strategy, predicts Enck.

"We expect fewer than 1,500 production servers to be running Windows 2000 Datacenter Server in the first 12 months after general availability," Enck says.

Meile says he doesn't foresee a quick migration to Datacenter despite its potential benefits.

Indeed, Penn National doesn't even plan to test it seriously until "maybe mid-2001 to late 2001," he says.

"We are going to have it," predicts Meile. "But we need time to spend on it. We need to play with it to make sure that we understand it fully."
