In-Depth
Let iSCSI Do the Talking
Why iSCSI will doubtless lead the way in tomorrow's SAN/NAS installations
- By Bill Heldman
- 12/07/2005
Storage requirements seem to be growing exponentially these days as storage
gets cheaper and cheaper, so every server administrator must deal with
managing storage and archives, whether through server-attached disks, a
storage area network (SAN), network-attached storage (NAS) devices or tape
drives. Add to that the fact that the cost to migrate from an ordinary "I've got
a decent server with six 100 GB SCSI drives in it" setup to "I have
a decent server talking to a 2 TB SAN array" can be astronomical,
almost out of reach for small to medium-sized businesses. Acquiring
enterprise-class SAN gear can mean hundreds of thousands of dollars in
capital outlay and significant time to deploy, and even the costs for
SMB-class gear can be nearly as dramatic.
So, what's the most effective way to span all this storage, minimize
costs and still have a reliable and fast solution? SAN/NAS solutions for
the SMB market, coupled with the iSCSI protocol, are the ticket, because
you can span disparate vendor arrays with only moderate expense
and complexity. (Note I said moderate, not simple.)
Why Even Think of Fibre Channel?
With enterprise-class gear, Fibre Channel is the protocol of choice. But
it's a dedicated buy. With FC, you're faced with buying the storage devices
themselves, as well as Fibre Channel network cards (called Host Bus Adapters,
or HBAs), fiber-optic cabling and, with a lot of hosts connecting to the
array, FC switchgear (probably director-class FC switches). HBAs are in
the $1,000-$1,500 range and basic FC switchgear can cost $100,000. Even
in a small deployment, you could end up spending a couple million bucks
and taking a year just to get things set up and working, not to mention
paying ongoing maintenance costs in the tens of thousands of dollars per year.
On top of that, FC isn't the best choice for long-haul storage requirements.
The name of today's storage game is location-independence. Your server
in Minneapolis needs to talk to databases in Chicago, Toronto, Buffalo
and the UK. Setting up dedicated FC circuits for such operations, then
programming asynchronous copy operations, can be expensive, time-intensive
and complex.
All Roads Lead to iSCSI
Enter iSCSI. The IETF's IP Storage working group developed a standard
(RFC 3720) for encapsulating Small Computer Systems Interface (SCSI)
commands inside TCP/IP, and iSCSI is the result.
iSCSI allows ordinary servers to communicate with iSCSI-based storage
or archival devices (or through iSCSI to FC switchgear or routers to communicate
with FC storage or archival devices) using a reliable transport mechanism
and optional encryption capabilities. The standard also provides a Simple
Network Management Protocol (SNMP) Management Information Base (MIB) so
you can manage your storage and archives. Because the commands are TCP/IP-based,
they can, with little or no configuration, traverse typical Network
Address Translation (NAT) devices and firewalls.
iSCSI is fairly straightforward. The operating system sends SCSI commands
and data requests to the storage device; we don't care where that device
lives, because we're in a TCP/IP world and rely on address headers, so
location-independence is covered. The commands are encapsulated into TCP/IP
and (potentially) encrypted, with a special iSCSI header added to each packet
before it rides the IP transport. At the destination, the packets are decrypted
and de-encapsulated. Because iSCSI is bidirectional, response data can be sent
back to the originator at the same time.
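To make the encapsulation step concrete, here's a minimal Python sketch. The header layout and helper names are simplified illustrations rather than the real RFC 3720 PDU format, but the wrap-and-unwrap flow is the same idea:

```python
import struct

# A toy illustration of the encapsulation idea: a SCSI command descriptor
# block (CDB) is prefixed with a small header and carried as ordinary TCP
# payload. The header layout here is simplified; the real iSCSI PDU format
# is defined in RFC 3720.

SCSI_READ10 = bytes([0x28, 0, 0, 0, 0, 0x80, 0, 0, 0x08, 0])  # READ(10): 8 blocks at LBA 0x80

def encapsulate(cdb: bytes, task_tag: int) -> bytes:
    """Wrap a SCSI CDB in a simplified header: opcode, length, task tag."""
    header = struct.pack(">BHI", 0x01, len(cdb), task_tag)  # 7-byte big-endian header
    return header + cdb

def decapsulate(pdu: bytes) -> tuple:
    """Reverse the operation at the receiving (target) end."""
    opcode, length, task_tag = struct.unpack(">BHI", pdu[:7])
    return task_tag, pdu[7:7 + length]

pdu = encapsulate(SCSI_READ10, task_tag=42)
# A real initiator would now write this payload to a TCP socket; iSCSI
# targets conventionally listen on TCP port 3260.
tag, cdb = decapsulate(pdu)
assert cdb == SCSI_READ10
print(f"{len(pdu)}-byte PDU, task tag {tag}, CDB opcode 0x{cdb[0]:02x}")
```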
There are two scenarios in which you'll consider iSCSI:
- You're architecting a brand new (called "green field") storage
installation and you want to stay away from FC because it is cost-prohibitive
and adds yet another protocol to the mix. You'll use natively capable
iSCSI servers to talk to natively capable iSCSI storage devices. This
is a good approach for smaller scale operations, not necessarily the
best for large enterprises, in which a mix of FC SAN arrays talking
to downstream (so-called "stranded") servers and arrays is
the best option.
- You want to introduce non-FC storage gear or servers into an FC environment.
This will necessitate FC-to-iSCSI conversion gear, such as a router
or switch (see Figure 1). For example, suppose you have an iSCSI-capable
server that needs to connect to an FC-based storage array. You'll need
a router or switch that can translate from iSCSI to FC.
Figure 1. iSCSI knows no bounds, as it can be used to marry non-FC-based storage into an FC environment, with some translation help.
Making the Connection
Because iSCSI is TCP/IP-based, you can use a regular network interface
card in your servers to talk to the iSCSI-based storage system. However,
this could potentially add immense load to the server's processors, as
the server's CPUs would have to handle all of the encapsulation/de-encapsulation
and encryption/decryption activities. This, in turn, translates to decreased
throughput and could add up to sluggish response times in data retrieval
(which, of course, means unhappy users).
To counteract this problem, you can purchase TCP Offload Engine (TOE)
NICs. TOEs have built-in microprocessors that offload the server's CPUs
from all that nasty encapsulation and encryption activity. Companies such
as Alacritech are in the business of manufacturing TOE NICs for iSCSI
connectivity.
Note: TOEs are expensive! Plan on laying out a thousand dollars
or more for high-quality ones.
What's interesting about iSCSI is that Microsoft has long been involved
in the technology and has produced software — so-called iSCSI initiators
— that help enhance Microsoft-based system performance in iSCSI environments
(find out more at http://www.microsoft.com/WindowsServer2003/technologies/storage/iscsi/default.mspx).
Most TOE manufacturers work with the Microsoft initiators, but check first!
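If you want to see the initiator side in action, a rough sketch (assuming the Microsoft iSCSI initiator and its iscsicli.exe command-line tool are installed and on the path; the portal address and target IQN below are placeholders) looks something like this:

```python
import subprocess

# Placeholder values -- substitute your own storage portal and target name.
PORTAL = "192.168.1.50"
TARGET_IQN = "iqn.2005-12.com.example:storage.array01"

def iscsicli(*args: str) -> None:
    """Invoke an iscsicli.exe subcommand and echo its output."""
    result = subprocess.run(["iscsicli", *args], capture_output=True, text=True)
    print(result.stdout)

iscsicli("QAddTargetPortal", PORTAL)   # register the portal with the initiator
iscsicli("ListTargets")                # discover the targets it advertises
iscsicli("QLoginTarget", TARGET_IQN)   # log in; the LUNs then appear as local disks
```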
$1.2 Million Decision
I did a stint as a project manager on a moderately-sized
SAN/NAS installation. The requirements were fairly straightforward:
- The system had to be able to handle mainframe and
open systems (Windows and Linux) file systems.
- We needed as much space as possible, but had a limited
budget.
- We wanted one box at site A, and another at site
B, about 7.5 miles from one another.
- We'd SAN-connect (i.e., using HBAs) fewer than a dozen servers.
- Our Exchange Server 2003 environment had to utilize
the SAN for the databases, and Exchange had to be
set up to operate in a cluster, across the FC wire
from site A to site B. In other words, cluster node
0 would live at site A and replicate its SAN data
to cluster node 1 at site B.
- We wanted a NAS for our regular users to have a
place to store important data.
- We wanted high-capacity scalability and 24x7x365 reliability.
The mainframe element was the showstopper for all but
the Big Three (EMC, IBM and HDS). All companies play
well with Unix/Linux/Windows environments on one box.
Add the mainframe and most bow out.
We reviewed what the Big Three had to offer and settled
on the EMC Symmetrix, mostly because another department
already ran a Symm and being able to connect the two
for disaster recovery purposes was attractive. Had
the other Symm not been in place, we probably would
have opted for HDS, as it was a more powerful player
at the time. IBM had a relatively good product, but
company stakeholders got pretty nasty at RFP time and
refused to comment with regard to some of the questions
we asked.
We wound up purchasing two Symms, one with 1.5 TB and
the other with 5 TB. We also purchased two Celerra devices,
along with a McData director-class FC switch, several
HBAs, fiber-optic cabling, software, management workstations,
project management and contracting support.
The project cost $1.2 million, took nearly a year and a
half to get approved, and another six months to install
and run. EMC's sales, engineering and project management
support were top drawer. There was never a time when
they were not there for us.
We ran into a snag with our Exchange administrators.
In the time we (not EMC) took to get the project off
the ground, the Exchange admins had purchased a poorer-quality,
$40,000 NAS device and were trying to use it
for their databases. They ran into myriad difficulties
and wound up relegating the NAS to a PST backup site.
EMC brought some amazing Exchange Rangers from Avanade
to the table, we hammered out a design, convinced the
Exchange admins that the proposal would work and began
the Exchange leg of the project. Within another couple
of months, we had Exchange cluster nodes 0 and 1 sitting apart
from one another, separated by a 7.5-mile string of
FC fiber-optic cable. The Exchange databases were sitting
on two SAN arrays, simultaneously being written to,
thanks to SRDF. And EMC's clustering software, GeoSpan,
was used in conjunction with MSCS to create a fault-tolerant
Exchange cluster array. Testing took place, the systems
ran as advertised and our Exchange admins were happy
with the new setup.
Ongoing maintenance costs for the complete system are
somewhere in the neighborhood of $100K/year. This cost
provides automatic updates and 24x7x365 monitoring of
the gear by EMC Corporation.
Monday Morning Quarterback
Had I known more about the Celerra at the time,
I probably would've opted for a different NAS solution.
The Celerra NAS simply doesn't bring enough to the table
to justify the price and complexity of the box.
Note that this project did not take into consideration
any tape back-up connectivity. This mostly had to do
with the costs involved — we had to trim somewhere.
That being said, NetApp, LeftHand Networks and others
are doing some interesting things with nearline storage.
I'm not so sure the days of tape backups are here to
stay. Perhaps a low-cost ATA solution makes more sense
in the long run.
At any rate, it is an easy trick to HBA-connect your
favorite tape back-up system to the SAN and pull backups
from it. Shadow-copy operations allow you to take a
snapshot of the current environment and move it off
to another piece of SAN space. Point your back-up operations
to the snapshot and you can rapidly back up your operations
during the day without disrupting regular activity.
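The flow is easy to sketch in Python; the two helper functions below are hypothetical stand-ins for whatever snapshot and backup tooling your array and backup vendors actually provide:

```python
from datetime import datetime

def snapshot_volume(volume: str) -> str:
    """Hypothetical: ask the array for a point-in-time snapshot of a volume.
    In practice this would call your vendor's snapshot CLI or API."""
    snap_name = f"{volume}-snap-{datetime.now():%Y%m%d-%H%M}"
    print(f"created snapshot {snap_name} of {volume}")
    return snap_name

def backup_from_snapshot(snap_name: str, target: str) -> None:
    """Hypothetical: point the backup job at the read-only snapshot rather
    than the live volume, so production I/O is never disrupted."""
    print(f"backing up {snap_name} to {target}")

snap = snapshot_volume("exchange_dbs")
backup_from_snapshot(snap, "tape-library-01")
```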
iSCSI Not the Only Game in Town
So you buy an EMC Symmetrix with, say, 10TB of storage in it. The
array will undoubtedly utilize FC. Why? Well, for starters, FC gives you
2Gb/s (moving toward 10Gb/s) throughput on the bus. You can't make that happen
in the iSCSI world; the best you can hope for over Gigabit Ethernet is 1Gb/s.
But, you say, most
servers can't even run up to that kind of throughput, so you're safe,
right? Not so fast! FC storage devices themselves can hang with this kind
of throughput — indeed, must hang with it, especially in large
block database environments where you're moving vast quantities of data
across the wire. In localized scenarios like this, an FC SAN makes a lot
of sense.
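To put rough numbers on that gap, here's a quick back-of-the-envelope comparison, ignoring protocol overhead and assuming both links run at full line rate (which real deployments rarely do):

```python
# Idealized time to move 1 TB at raw line rate: 2Gb/s FC vs. 1Gb/s iSCSI/GigE.
# Real-world figures will be worse once protocol overhead, disk speeds and
# contention are factored in.
DATA_BITS = 1 * 8 * 10**12  # 1 TB (decimal) expressed in bits

for name, rate_bps in [("2Gb/s Fibre Channel", 2 * 10**9),
                       ("1Gb/s iSCSI over GigE", 1 * 10**9)]:
    hours = DATA_BITS / rate_bps / 3600
    print(f"{name}: about {hours:.1f} hours per terabyte")
```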
Suppose you have some of these devices and you want to long-haul the
data. Why can't you simply encapsulate FC inside TCP/IP? The answer is:
You can. So-called FC over TCP/IP (FCIP) technology is available. But
the larger question is this: Why would you want to do this when iSCSI
meets the need at lower cost, with equivalent reliability and less complexity?
In other words, given that you need to move a lot of data quickly
across a wire to and from a downstream source, the consideration is this:
Do you need that data to be there instantaneously, or do you have the
luxury of some lag time? With FC you can make the data transfer
fast (though not instantaneously fast across large geographies), but the high
FC costs are still there, not to mention the FCIP complexity. That makes
the case for iSCSI, which accomplishes the same goals at much lower cost,
is almost as fast, and is just as reliable a transport mechanism.
Consider, for example, a scenario in which you have an EMC Symmetrix
with several TB of data on it. You want to periodically move this data
from the SAN to an ATA-class Clariion array downstream. The iSCSI technology
allows you to make this happen. Rather than nailing up a high-cost Symmetrix
with FC connectivity in a remote site you'll only visit once a year, you
bring down the costs with SMB-class iSCSI devices and good old TCP/IP
technology that you understand.
In another example, you want to accomplish the same kind of activity
with an IBM DS8000, moving data across an Internet wire to a NetApp,
LeftHand or other device many miles away for disaster recovery purposes.
The same logic applies. The DS8000 requires FC, but the architecture
of the long-distance environment doesn't lend itself well to FCIP. iSCSI
is the better choice and there are plenty of devices to help you get the
job done.
NAS Technology
While we're on the subject of storage, you should understand
that there is a substantial difference between the way
EMC thinks about NAS and the way every other company does. The
EMC Celerra is a NAS gateway, not a true NAS in the
way that NetApp and LeftHand, among others, think about
NAS. The idea with the EMC Celerra is that you have
a SAN device (probably a Symmetrix) sitting somewhere
— one that has ample extra space on it. Since SAN
space requires FC connectivity (and hence FC HBAs on
the servers), not all hosts can connect to the SAN.
So you carve out some of your Symm's SAN space and dedicate
it to the Celerra. Then, you allocate the space as necessary.
Your users can connect using CIFS (Windows shares) or
NFS without requiring an HBA. When the users request
files from the NAS, the Celerra's Data Movers proxy
the request from the SAN on their behalf. The scalability
and speed are intense.
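Because the clients speak ordinary CIFS or NFS over TCP/IP, there's nothing special on their end; a quick sketch (the server and share names below are made up) is just everyday file access:

```python
from pathlib import Path

# Hypothetical CIFS (Windows share) path published by the NAS gateway --
# substitute your own server and share names. No HBA or FC driver is needed
# on the client; this is plain TCP/IP file access.
share = Path(r"\\celerra-nas01\projects")

for entry in share.iterdir():      # list whatever the Data Movers serve up
    print(entry.name)

data = (share / "quarterly_report.xls").read_bytes()
print(f"read {len(data)} bytes over CIFS")
```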
The Celerra is a completely different animal than a
simple NAS that is hung out on the network, has enough
of a miniaturized OS to handle file operations, backups
and management, and advertises NFS or CIFS-based shares.
And, of course, the dollars involved are vastly different.
When to use Celerra NAS over ordinary workgroup NAS?
Scalability, reliability, cost, distance and other factors
come into play when architecting such a solution. There
is a place for the Celerra, but I don't see it as the
option of choice except in enterprise-class deployments.
In most cases, it's probably better to go with conventional,
much less expensive NAS solutions and, if needed,
interconnect them with your FC SAN arrays through iSCSI
technology.
Keep in mind that architecting a SAN/NAS solution is
98 percent of the battle. With so many fine products
to choose from, and the breadth of variety that IBM,
EMC and others bring to the table, as well as the standardization
of the FC and iSCSI protocols, you should be able to
architect a solution that more than adequately meets
your needs.
Wrapped Up in iSCSI Goodness
There are a lot of religious wars going on in the SAN/NAS field. But the
good news is that there are many, many high-quality vendors who are manufacturing
equipment that uses standardized protocols, iSCSI among them, to allow
you to architect the storage solution you need. If you want or need an
iSCSI solution, you'll find plenty of people to help you achieve your
goal.
Is iSCSI for you? Undoubtedly so, especially in SMB environments. It's
an easy technology to grapple with, there are many players out there who
embrace it, and it has a multi-year foothold in the industry. You can
trust it.
The larger issue is getting your mind wrapped around all of the technologies
and vendors so you can sort out exactly what you need. To this
end, my opinion is that Microsoft has not done enough work in creating
an actual Storage Certification (MCSTOR, for example). If such a certification
existed, it would force technicians to drill into all of the various nuances
associated with this vast field, and would raise the bar in terms of understanding
the technology. Today, confusion among technologists reigns. Tomorrow,
who knows? Maybe MCPs can drive the storage market.