Windows NT Best Practices
From disk subsystems to file formats and permissions,
here are some best practices to follow when implementing
Windows NT.
As you read this, another year is already underway. The
installed base of Windows NT has grown dramatically over the
past couple of years, and there's a chance that this'll
be the year of the long-awaited Windows 2000 (Windows
NT 5.0). Given those facts, I thought I'd use this
column to present several best practices that
you should implement with any individual Windows NT machine
or, in some cases, with the Windows NT network services.
Hardware Check
The place to start with Windows NT is always the hardware.
Make sure that all of the components of your hardware
platform are on the Hardware Compatibility List (HCL).
As obvious as this is, it's often overlooked, and
the results are frustrating. This is particularly important
if you're building or buying custom-built machines.
In these cases, make sure to check that components such
as the motherboard and controllers are on the list. As
a second protection, write into your purchase order that
the hardware as configured will reliably support Windows
NT. The HCL changes over time, so be sure to review it
frequently. (Microsoft posts it online at www.microsoft.com/hcl/default.asp.)
Finally, remember that this list will change dramatically
for Windows 2000.
The next place to check is memory. The Windows NT marketing
literature states that the minimum RAM configuration should
be 16M. Although this number will help you get certified,
we all know it's ridiculous. Windows NT isn't the OS of
choice if your main goal is to use as few hardware resources
as possible. Windows NT loves memory. The more you add,
the more it'll allocate to processes and cache. The minimum
kernel and executive-services memory footprint of NT Workstation
is 24M, and this is before any applications are loaded.
If you have anything less, you're going to page code
that's providing OS services. Today, realistic minimum
RAM on an NT workstation is 32M, and I could build a strong
case to make it 64M. Don't be surprised when Windows
2000 comes out and you hear a recommendation of 128M of
memory for a workstation (and beta 2 of NT 5.0 requires
a minimum of 500M of free disk space).
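If you want to check a machine quickly, here's a minimal sketch in Python (the interpreter itself is my assumption; the same call is available from C) that reads physical memory through the Win32 GlobalMemoryStatus API and compares it with the figures above:

    import ctypes

    class MEMORYSTATUS(ctypes.Structure):
        # Field layout of the Win32 MEMORYSTATUS structure
        _fields_ = [
            ("dwLength", ctypes.c_ulong),
            ("dwMemoryLoad", ctypes.c_ulong),
            ("dwTotalPhys", ctypes.c_size_t),
            ("dwAvailPhys", ctypes.c_size_t),
            ("dwTotalPageFile", ctypes.c_size_t),
            ("dwAvailPageFile", ctypes.c_size_t),
            ("dwTotalVirtual", ctypes.c_size_t),
            ("dwAvailVirtual", ctypes.c_size_t),
        ]

    status = MEMORYSTATUS()
    status.dwLength = ctypes.sizeof(MEMORYSTATUS)
    ctypes.windll.kernel32.GlobalMemoryStatus(ctypes.byref(status))

    total_mb = status.dwTotalPhys // (1024 * 1024)
    print("Physical RAM: %dM" % total_mb)
    if total_mb < 32:
        print("Below the realistic 32M minimum for an NT workstation.")
    elif total_mb < 64:
        print("Workable, but 64M is the safer target.")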
A related issue is the paging file used for virtual memory
management. You want to make sure that the page file isn't
on the same drive as the system partition. Microsoft often
recommends that the file not be on the same partition,
but you really want it off the physical drive entirely.
One of the keys to disk access is that you're dealing
with spindles, not just partitions. A partition is just
a logical construct on the drive. When data resides on
different partitions, the controller and head must deal
with another layer of abstraction. I also recommend that
you avoid multiple partitions on the same disk and instead use
the entire disk for each partition. Furthermore, you should
spread the page file over several spindles to create a
striped environment for the file. My best practice recommendation
for page files, however, is not to page. Friends don't
let friends page memory to disk. If you follow the best
practices in terms of adding enough memory, you'll
avoid most paging in the first place.
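One way to double-check where your page files actually live is to read the PagingFiles value under the Memory Management key in the Registry. Here's a minimal Python sketch, assuming the winreg module is available on the box; each entry names the file along with its initial and maximum sizes:

    import os
    import winreg

    # The standard Memory Management key that holds the paging-file list
    key_path = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
        paging_files, _ = winreg.QueryValueEx(key, "PagingFiles")  # REG_MULTI_SZ

    system_drive = os.environ.get("SystemDrive", "C:").upper()
    for entry in paging_files:
        # Flag any page file that sits on the same volume as the system partition
        drive = entry[:2].upper()
        note = "  <-- shares the system drive" if drive == system_drive else ""
        print(entry + note)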
Another disk consideration is fault tolerance.
NT has some software-based disk management, including
extended partitions, simple striping, disk mirroring,
and striping with parity. First, don't use extended
partitions. They're useful in a pinch if, for some
reason, you haven't been monitoring disk consumption
and you need more space on the fly. But don't
use them on an ongoing basis. If any of the underlying
drives in the partition fails, you'll lose all your
data and will have to rely on a backup.
The best place to create disk fault-tolerant subsystems
is in hardware. For example, if you create a stripe set
with parity within NT, you have a couple of things working
against you.
First, the parity algorithm must be computed by the
CPU that's also trying to execute application code.
This work is better offloaded to the hardware controller
in a RAID box below the operating system.
Second, if one drive fails, the recovery process requires
that you reboot the server. With the proper hardware RAID
system, you can remove the bad drive and replace it with
the new drive without rebooting the system. You really
should monitor the health of the drives (usually this
software comes with the server hardware), because you
might not actually notice the performance decrease after
a drive fails. When a second drive fails in the RAID array,
you're sunk. Software RAID is an interesting
aspect of NT, but the best way to implement disk fault-tolerant
systems is with external hardware.
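To see why parity costs CPU cycles, here's a toy illustration in plain Python (nothing NT-specific): the parity block is just the exclusive-OR of the data blocks in a stripe, and a missing block can be rebuilt from the survivors. This is the arithmetic a hardware RAID controller takes off your processor.

    def xor_blocks(blocks):
        # XOR the corresponding bytes of each block together
        result = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                result[i] ^= byte
        return bytes(result)

    # Three data blocks in one stripe (hypothetical contents)
    stripe = [b"DATA-ON-DRIVE-1!", b"DATA-ON-DRIVE-2!", b"DATA-ON-DRIVE-3!"]
    parity = xor_blocks(stripe)  # what gets written to the parity block

    # Drive 2 fails: rebuild its block from the survivors plus the parity
    rebuilt = xor_blocks([stripe[0], stripe[2], parity])
    assert rebuilt == stripe[1]
    print("Rebuilt block:", rebuilt)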
Storing Files and Giving Access
Once you've worked out your drive subsystems, you
need to choose a format in which to store your files.
With Windows NT 4.x, you can choose between FAT and NTFS.
The only time to use FAT is if you need to build a dual-boot
system.
First of all, don't build dual-boot systems unless
you're using them for testing purposes. NTFS, on
the other hand, offers many features that just aren't
available under FAT, including hotfix capabilities and
smaller sector sizes on large disks. FAT gives you no way
to tune sector sizes to match particular record sizes,
and it also lacks NTFS's transaction-rollback capabilities
and local security. With NTFS's transaction functionality,
the file system can roll back to a stable state.
NTFS can also audit disk resources, a capability that
FAT lacks. Essentially, FAT in Windows NT is a migration
tool, and it should be treated as such.
Once you choose NTFS as the file format, you introduce
new problems. When you format a disk with NTFS, for example,
the default permission is Everyone: Full Control.
Several issues surrounding this option can cause problems
down the road. One is that with Full Control, every user
will have access to any files created or copied to this
directory, or to any directory created below the root that
hasn't had its permissions modified. They'll also
be able to change the permissions of the files and directories.
Change the Full Control permission to Change so that all directories
created afterward will inherit this permission.
This creates a better starting point for specifying more
granular permissions as you move forward.
The other issue is with the group Everyone. The SID S-1-1-0
behind this group exists on all NT systems, which leaves
a significant security hole in your system. SP3 added
a group to the system called Authenticated Users. This
group includes only users who have actually authenticated
with an account in your particular account database, rather
than anyone who can connect to the machine. The other groups
you want to make sure have access to each entire drive are
Administrators and System, both with Full Control. This allows
the operating system to have access to the drive for such things
as the paging file, if you happen to place one on that
drive.
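If you'd rather script the change than click through Explorer, a minimal sketch using the stock cacls.exe might look like the following. The D: drive letter is just an example, and Python is merely a convenient wrapper here; the four commands can be typed at a prompt just as easily.

    import subprocess

    drive = "D:\\"  # hypothetical data drive

    # Pull Everyone out of the ACL, then grant the groups discussed above.
    # /E edits the existing ACL rather than replacing it.
    subprocess.call(["cacls", drive, "/E", "/R", "Everyone"])
    subprocess.call(["cacls", drive, "/E", "/G", "Authenticated Users:C"])  # Change
    subprocess.call(["cacls", drive, "/E", "/G", "Administrators:F"])       # Full Control
    subprocess.call(["cacls", drive, "/E", "/G", "SYSTEM:F"])               # Full Control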
When setting up permissions on any resource, the accepted
practice is to use the following guidelines within domains.
First, create a local group on the machine where the resource,
such as a directory or printer share, resides, and give
that local group the permissions you want on the resource. Create
a Global Group and add the users to whom you want to allocate
the resource. Then add the Global Group into the Local
Group to assign permissions. This may sound like a lot
of work to give permissions that could easily be given
directly to individual accounts. However, this method
allows system administration to scale across a large domain
and, more important, across multiple domains. Always use
groups to manage accounts and permissions.
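As a sketch of what that looks like in practice, the stock net and cacls commands can do the whole dance. Every name below (the SalesDocs directory, the SalesDocs_Change local group, the SalesStaff global group, the MYDOMAIN domain, and the jsmith account) is a hypothetical example.

    import subprocess

    # On the server that holds the resource: a local group carries the permission.
    subprocess.call(["net", "localgroup", "SalesDocs_Change", "/add"])
    subprocess.call(["cacls", "D:\\SalesDocs", "/E", "/G", "SalesDocs_Change:C"])

    # In the domain: a global group holds the people.
    subprocess.call(["net", "group", "SalesStaff", "/add", "/domain"])
    subprocess.call(["net", "group", "SalesStaff", "jsmith", "/add", "/domain"])

    # Tie them together: the global group goes into the local group.
    subprocess.call(["net", "localgroup", "SalesDocs_Change", "MYDOMAIN\\SalesStaff", "/add"])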
Invaluable Extras
Every time you make a significant change to the system,
run RDISK /S and update the Emergency Repair Disk. This
simple but valuable tool allows the administrator to replace
corrupted files on the system. If you don't update
the repair disk frequently, a repair can set the system back
to an earlier configuration, and you also might lose important
information. The /S switch also saves part of the Security
Accounts Manager database from the Registry to the \WINNT\REPAIR
directory. You can use this directory as the source
for repairing the system instead of the ER disk that's
normally used, and the directory can also hold information
that won't fit on the ER disk.
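Because the repair information is only as fresh as the last time you ran the tool, it's worth automating the habit. Here's a minimal sketch, assuming Python is on the machine and that D:\ERD-backups is a location you've set aside (both are assumptions of mine):

    import os
    import shutil
    import subprocess
    import time

    repair_dir = os.path.join(os.environ.get("SystemRoot", r"C:\WINNT"), "repair")
    backup_dir = r"D:\ERD-backups\repair-" + time.strftime("%Y%m%d")  # hypothetical target

    subprocess.call(["rdisk", "/s"])         # refresh the repair information, as above
    shutil.copytree(repair_dir, backup_dir)  # keep a copy that isn't limited to a floppy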
Finally, use Performance Monitor. This tool has the distinction
of being the most talked-about but seemingly least-used
utility. Its ability to track performance over time with
the logging feature lets you create a baseline for your
system. You can then use Performance Monitor's alert
capabilities to notify the administrator when system performance
drops below your norm and before a component actually
fails.
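The underlying idea is simple enough to sketch in a few lines of Python. This is a toy model, not the Performance Monitor interface, and the counter names and the 150 percent threshold are just examples: log a baseline for the counters you care about, then flag any sample that drifts too far from it.

    # Baseline values you might have collected with the logging feature (examples)
    baseline = {"Pages/sec": 12.0, "Avg. Disk Queue Length": 0.8}
    threshold = 1.5  # alert when a counter runs at 150% of its baseline

    def check(samples):
        # Compare fresh samples against the baseline and report anything out of line
        for counter, value in samples.items():
            norm = baseline.get(counter)
            if norm is not None and value > norm * threshold:
                print("ALERT: %s is %.1f (baseline %.1f)" % (counter, value, norm))

    check({"Pages/sec": 48.0, "Avg. Disk Queue Length": 0.7})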
These are some of the best practices that I think are
important. Undoubtedly there are more good ideas out there.
I encourage you to send me your best practices. I'll
compile them and publish them for the benefit of others
rolling out new systems.
About the Author
Michael Chacon, MCSE, MCT, is a directory services architect, focusing on the business and technical issues surrounding identity management in the enterprise. He is the co-author of a new book coming from Sybex Publishing that covers MCSA exam 70-218.