Dell EqualLogic’s newest Host Integration Tools for Linux (v1.1)


It was just last September that I wrote about Using the Dell EqualLogic HIT for Linux (HIT/LE) Version 1.0.  At the time, the HIT/LE was beginning to play an important role in how we housed large volumes of data, and I wanted to share with others what I learned in the process.  While it has been running well in our environment, it was definitely a 1.0 product when it came to features and configuration, so I was anxious to see what was in store for the next version.  Version 1.1 was released in April of 2012, and it addressed some of the observations I had about HIT/LE 1.0.  Here are a few highlights.

  • Better distribution support.  CentOS, the binary-compatible clone of RHEL, is now supported, in versions 5.7 through 6.2.  According to the documentation, RHEL 5.5 is no longer supported, which is a change from the previous edition.  SUSE Linux Enterprise is also supported.
  • Auto Snapshot Manager, Linux Edition (ASM/LE).  A new feature that allows you to create, manage, and schedule volume snapshots (Smart Copies), clones, and replicas from inside of the guest.  This is a huge improvement.
  • A new installer and configuration process. 
  • Better documentation.  This wasn't listed in the release notes, but was an immediately noticeable improvement.

Version 1.0 did a good job of applying the benefits of guest volumes to Linux-based operating systems.  The problem was that it lacked key abilities, which prevented an automated way to manage those snapshots for specific purposes.  The biggest challenge I had was finding an automated way to take snapshots of these Linux guest attached volumes and mount them to a Linux media server so that the data could be archived onto tape.  No amount of glue or duct tape helped in bridging the snapshot-manipulation functions needed inside the guests.

Configuring and Connecting
The configuration and connection of volumes has been greatly simplified.  The steps below demonstrate connecting an existing volume to a VM running the new HIT/LE 1.1.  Compare this to the instructions I provided in my post about HIT/LE 1.0, and you'll see quite a difference.

  1. Add access to a PS Series Group called "MYEQLGRP"
    rswcli -add-group-access -gn MYEQLGRP -gip 10.10.10.100

    VERIFICATION: List the group added above
    rswcli -l

  2. Discover iSCSI targets
    iscsiadm -m discoverydb

    VERIFICATION: Confirm by viewing current list of discovered targets
    iscsiadm -m node | sort -u

    (returns the iqn needed in the next step)

  3. Log in to the volume, and set it to connect automatically at boot:
    ehcmcli login --target iqn.2001-05.com.equallogic:0-8a0906-3a7da1609-e720013e5c54e679-nfs100 --login-at-boot

    (returns the new device bound to a subdirectory below /dev/eql)

    VERIFICATION: Confirm device connection:
    ehcmcli status

  4. Mount it (and add to fstab for automatic mounting if desired)
    mount /dev/eql/nfs100 /mnt/myexport
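
If you do opt for the fstab entry, a line like the following should work, mirroring the fstab format used later in this post (this assumes an ext3 filesystem on the volume; the _netdev option keeps the mount from being attempted before the network and iSCSI services are up):

    /dev/eql/nfs100 /mnt/myexport ext3 _netdev 0 0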

In-Guest Volume Snapshots (Smart Copies)
The old version of HIT/LE didn't offer any way of creating a snapshot from inside the guest.  One could create volume snapshots from the Group Manager GUI, and even schedule them.  However, when it came to manipulating that snapshot from a guest, such as setting it online or connecting to it, there was no way to do so.  Since each snapshot generates its own unique IQN, one needed a way to query for these values and pass them as parameters.

The new version offers a complete command set that fills the void.  At the root of the newfound intelligence is the "asmcli" command.  The asmcli help command will provide you with a complete listing of options.  I'm not going to dive into each option, but rather provide a simple example of how one can create a Smart Copy, and mount it if needed.  (A rough end-to-end sketch follows the numbered steps below.)

Before you get started, you may wish to choose or create a dedicated account on your PS Group that has volume administrator privileges.  Each system that has ASM/LE installed needs an account to interact with the volumes, and this offers the least privilege necessary to interact with the Smart Copies.  The example below uses an account named "asmleadmin".

  1. Create PS group access (one time configuration step)
    asmcli create group-access --name MYEQLGRP --ip-address 10.10.10.100 --user-name asmleadmin

    VERIFICATION:  Confirm group access is set the way you want it.
    asmcli list group-access

  2. Create Smart Copy of the guest attached volume mounted to /mnt/myexport
    asmcli create smart-copy --source /mnt/myexport

    VERIFICATION:  List all available Smart Copies
    asmcli list smart-copy --verbose 2

    (this will provide the object ID used in the next step)

  3. Mount a Smart Copy to a temporary location of /mnt/smartcopy
    asmcli mount smart-copy --source /mnt/myexport --object f-f6a7e0-234b7ce30-d9c3f81bedbb96ba --destination /mnt/smartcopy

  4. Unmount a Smart Copy mounted in the previous step
    asmcli unmount smart-copy --object f-f6a7e0-234b7ce30-d9c3f81bedbb96ba --source /mnt/smartcopy
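
And here is the rough end-to-end sketch mentioned above: one way these commands might be strung together to automate the snapshot-then-archive workflow I described earlier.  Treat it purely as an illustration; in particular, the grep/awk parsing of the object ID is my assumption about the verbose listing's output format, and would need adjusting to match what your version actually prints.

    #!/bin/bash
    # Hypothetical nightly job: create a Smart Copy of a guest attached
    # volume and mount it so a backup job can sweep it to tape.
    SRC=/mnt/myexport
    DEST=/mnt/smartcopy

    asmcli create smart-copy --source "$SRC" || exit 1

    # Grab the object ID of the newest Smart Copy. The pattern below is
    # an assumption about 'asmcli list' output and likely needs tweaking.
    OBJ=$(asmcli list smart-copy --verbose 2 | grep -i 'object' | tail -1 | awk '{print $NF}')

    asmcli mount smart-copy --source "$SRC" --object "$OBJ" --destination "$DEST"
    # ...run the tape/backup job against $DEST here...
    asmcli unmount smart-copy --object "$OBJ" --source "$DEST"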

When documentation becomes a feature
The combination of a refined product and improved documentation allowed for complete configuration and operation just by reading the manual.  It contained real examples of commands and actions, and even a few best practices.  No fumbling around due to an absence of detail or accuracy.  No need to search the net or call Technical Support this time.  The installation and configuration procedures reflected exactly what I experienced when testing the new version.  What a nice surprise.  I wish this were more common.

More Tips for using the HIT/LE
Since my initial deployment of the HIT/LE, I had to do a fair amount of testing with these Linux systems running guest attached volumes to make sure they were satisfying performance needs; in particular, file I/O.  From that testing, and observations of the systems in production, here are a few things worth noting.

  • Getting data that lives on guest attached volumes onto traditional backup media does require extra thought and consideration, as traditional backup solutions that use the vCenter API can’t see these volumes. Take this into consideration when deciding use cases for guest attached volumes.
  • Don't skimp on Linux VM memory.  Linux file I/O can be really impressive, but only if it has enough RAM.  If you have a lot of file I/O, Linux will need the RAM.  I found that going with anything less than 2 GB of RAM had a pretty big impact on performance.
  • Review the role of the Linux VM so that it can be right-sized.  I ran into a case where I was replacing a very important physical server with a Linux VM for our Development group, and unbeknownst to me, it was performing duties beyond its stated role.
  • Make sure there aren't traditional routines that unnecessarily manipulate the data on the guest volume.  This shows up as changed block data, and could dramatically reduce the number of snapshots or replicas you can retain at any given time.
  • Take a quick look at your vSwitch and port group configuration in vSphere for your guest attached volumes to make sure you are getting the most out of MPIO.  Will Urban has written a great post, Data Drives in VMware, which addresses this topic.

In summary, the newest edition of the HIT/LE is definitely new.  In fact, it feels like a complete rewrite, and leaves me baffled as to why it didn't warrant a 2.0 version designator.  Nevertheless, the specific features added allow for real protection workflows to be achieved.  I need to spend some more time with it to incorporate many of the new features into our environment.  If you were interested in guest attached volumes in Linux but were intimidated by the complexity of the old version, give HIT/LE 1.1 a try.

Using the Dell EqualLogic HIT for Linux


I've been a big fan of the Dell EqualLogic Host Integration Tools for Microsoft (HIT/ME), so I was looking forward to seeing how the newly released HIT for Linux (HIT/LE) would pan out.  The HIT/ME and HIT/LE offer unique features when using guest attached volumes in your VMs.  What's the big deal about guest attached volumes?  Well, here is why I like them.

  • It keeps the footprint of the VM really small.  The VM can easily fit in your standard VMFS volumes.
  • Portable/replaceable.  Oftentimes, systems serving up large volumes of unstructured data are hard to update.  Having the data guest attached means that you can easily prepare a new VM presenting the data (via NFS, Samba, etc.), and cut it over without anyone knowing – especially when you are using DNS aliasing.
  • Easy and fast data recovery.  My "in the trenches" experience with guest attached volumes in VMs running Microsoft OSs (and EqualLogic's HIT/ME) has proven that recovering data off of guest attached volumes is just easier – whether you recover it from a snapshot or replica, clone it for analysis, etc.
  • Better visibility of performance.  Thanks to the independent volume(s), one can easily see with SANHQ what the requirements of that data volume are.
  • More flexible protection.  With guest attached volumes, it’s easy to crank up the frequency of snapshot and replica protection on just the data, without interfering with the VM that is serving up the data.
  • Efficient, tunable MPIO. 
  • Better utilization of space.  If you wanted to serve up a 2TB volume of storage using a VMDK, more than likely you’d have a 2TB VMFS volume, and something like a 1.6TB VMDK file to accommodate hypervisor snapshots.  With a native volume, you would be able to use the entire 2TB of space. 

The one "gotcha" about guest attached volumes is that they aren't visible to the vCenter API, so commercial backup applications that rely on the visibility of these volumes via vCenter won't be able to back them up.  If you use these commercial applications for protection, you may want to determine whether guest attached volumes are a good fit, and if so, find alternate ways of protecting the volumes.  Others might contend that because the volumes aren't seen by vCenter, one is making things more complex, not less.  I understand the reason for thinking this way, but my experience with them has proven quite the contrary.

Motive
I wasn't trying out the HIT/LE because I ran out of things to do.  I needed it to solve a problem.  I had to serve up a large amount (several terabytes) of flat file storage for our Software Development Team.  In fact, this was just the first of several large pools of storage that I needed to serve up.  It would have been simple enough to deploy a typical VM with a second large VMDK, but managing such an arrangement would be more difficult.  If you are ever contemplating deployment decisions, remember that simplicity and flexibility of management should trump simplicity of deployment if it's a close call.  Guest attached volumes align well with the "design as if you know it's going to change" concept.  I knew from my experience working with guest attached volumes on Windows VMs that they were very agile, and offered a tremendous amount of flexibility.

But wait… you might be asking, "If I'm doing nothing but presenting large amounts of raw storage, why not skip all of this and use Dell's new EqualLogic FS7500 Multi-Protocol NAS solution?"  Great question!  I had the opportunity to see the FS7500 NAS head unit at this year's Dell Storage Forum.  The FS7500 turns the EqualLogic block-based storage accessible only on your SAN network into CIFS/NFS storage presentable to your LAN.  It is impressive.  It is also expensive.  Right now, using VMs to present storage data is the solution that fits within my budget.  There are some drawbacks (Samba not supporting SMB2), but for the most part, it falls in the "good enough" category.

I had visions of this post focusing on the performance tweaks and the unique abilities of the HIT/LE.  After implementing it, I was reminded that it was indeed a 1.0 product.  There were enough gaps in the deployment information that I felt it necessary to document exactly how I actually made the HIT for Linux work.  IT generalists, who I suspect make up a significant portion of the Dell EqualLogic customer base, have learned to appreciate Dell's philosophy of "if you can't make it easy, don't add the feature."  Not everything can be made intuitive the first time around, however.

Deployment Assumptions 
The scenario and instructions are for a single VM that will be used to serve up a single large volume for storage. It could serve up many guest attached volumes, but for the sake of simplicity, we’ll just be connecting to a single volume.

  • VM with 3 total vNICs.  One used for LAN traffic, and the other two used exclusively for SAN traffic.  The vNICs for the SAN will be assigned to the proper vSwitch and port group, and will have static IP addresses.  The VM name in this example is "testvm".
  • A single data volume in your EqualLogic PS group, with an ACL that allows the guest VM to connect to the volume using CHAP, IQN, or IP addresses.  (It may be easiest to first restrict it by IP address, as you won't be able to determine your IQN until the HIT is installed.)  The native volume name in this example is "nfs001" and the group IP address is 10.1.0.10.
  • Guest attached volume will be automatically connected at boot, and will be accessible via NFS export.  In this example I will be configuring the system so that the volume is available via the “/data1” directory.
  • OS used will be RedHat Enterprise Linux (RHEL) 5.5. 
  • EqualLogic’s HIT 1.0

Each step below that starts with the word "VERIFICATION" is not a necessary step, but it helps you understand the process and will validate your findings.  For brevity, I've omitted some of the output of these commands.

Deploying and configuring the HIT for Linux
Here we go…

Prepping for Installation

1.     Verify installation of the EqualLogic prerequisites (via rpm -q [pkgname]).  If a package is not installed, run yum install [pkgname]

openssl                    (0.9.8e for RHEL 5.5)
libpcap                    (0.9.4 for RHEL 5.5)
iscsi-initiator-utils      (6.2.0.871 for RHEL 5.5)
device-mapper-multipath    (0.4.7 for RHEL 5.5)
python                     (2.4 for RHEL 5.5)
dkms                       (1.9.5 for RHEL 5.5)

(dkms is not part of the RedHat repo.  Download it from http://linux.dell.com/dkms/ or via the "Extra Packages for Enterprise Linux" (EPEL) repository.  I chose the Dell website because it carried a newer version.  Simply download and execute the RPM.)
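
A quick way to check all of them in one shot (just a convenience sketch, using the package names listed above):

for pkg in openssl libpcap iscsi-initiator-utils device-mapper-multipath python dkms; do
    rpm -q $pkg > /dev/null || echo "$pkg is missing - run: yum install $pkg"
done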

 

2.     Snapshot the Linux machine so that if things go terribly wrong, it can be reversed

 

3.     Shut down the VM, and add the NICs for guest access

Make sure to choose the iSCSI network when adding them to the VM configuration

After startup, manually specify static IP addresses and subnet masks for both.  (No default gateway!)

Activate the NICs, and reboot

 

4.     Power up, then add the following lines to /etc/sysctl.conf  (for RHEL 5.5)

net.ipv4.conf.all.arp_ignore = 1

net.ipv4.conf.all.arp_announce = 2
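
These settings take effect at the next boot; to apply them immediately without rebooting:

sysctl -p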

 

5.     Set NFS and related daemons to start automatically at boot

chkconfig portmap on

chkconfig nfs on

chkconfig nfslock on

 

6.     Create the directory that will ultimately be exported for mounting.  In this example, the iSCSI device will mount to a directory called "eql2tbnfs001" in the /mnt directory.

mkdir /mnt/eql2tbnfs001

 

7.     Make a symbolic link called "data1" in the root of the file system.

ln -s /mnt/eql2tbnfs001 /data1 

 

Installation and configuration of the HIT

8.     Verify that the latest HIT Kit for Linux is being used for installation.  (V1.0.0 as of 9/2011)

 

9.     Import the public key

      Download the public key from the EqualLogic support site under HIT for Linux, and place it in /tmp/

Add the key:

rpm --import RPM-GPG-KEY-DELLEQL (docs show lower case, but the file is upper case)

 

10.  Run installation

yum localinstall equallogic-host-tools-1.0.0-1.el5.x86_64.rpm

 

Note:  After the HIT is installed, you may get the IQN (for use in restricting volume access in the EqualLogic Group Manager) by typing the following:

cat /etc/iscsi/initiatorname.iscsi

 

11.  Run eqltune (verbose).  (Tip:  you may want to capture the results to a file for future reference and analysis)

            eqltune -v

 

12.  Make adjustments based on the eqltune results.  (The items listed below were mine.  Yours may be different)

 

            NIC Settings

   Flow Control. 

ethtool -A eth1 autoneg off rx on tx on

ethtool -A eth2 autoneg off rx on tx on

 

(add the above lines to /etc/rc.d/rc.local to make them persistent)
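
For instance, appended with a heredoc (assuming the stock rc.local):

cat >> /etc/rc.d/rc.local << 'EOF'
# Disable flow control autonegotiation on the SAN-facing NICs (per eqltune)
ethtool -A eth1 autoneg off rx on tx on
ethtool -A eth2 autoneg off rx on tx on
EOF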

 

There may be a suggestion to use jumbo frames by increasing the MTU size from 1500 to 9000.  This has been omitted from the instructions, as it requires proper configuration of jumbo frames from end to end.  If you are uncertain, keep standard frames for the initial deployment.

 

   iSCSI Settings

   (make a backup of /etc/iscsi/iscsid.conf before making changes)

 

      Change node.startup to manual.

   node.startup = manual

 

      Change FastAbort to the following:

   node.session.iscsi.FastAbort = No

 

      Change initial_login_retry to the following:

   node.session.initial_login_retry_max = 12

 

      Change number of queued iSCSI commands per session

   node.session.cmds_max = 1024

 

      Change device queue depth

   node.session.queue_depth = 128
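
Since syntax and capitalization errors are easy to make here, one way to apply all five changes in a single pass (a convenience sketch; it assumes the stock RHEL 5.5 iscsid.conf, where each of these keys appears uncommented with single spaces around the equals sign):

cp /etc/iscsi/iscsid.conf /etc/iscsi/iscsid.conf.bak
sed -i \
  -e 's/^node\.startup = .*/node.startup = manual/' \
  -e 's/^node\.session\.iscsi\.FastAbort = .*/node.session.iscsi.FastAbort = No/' \
  -e 's/^node\.session\.initial_login_retry_max = .*/node.session.initial_login_retry_max = 12/' \
  -e 's/^node\.session\.cmds_max = .*/node.session.cmds_max = 1024/' \
  -e 's/^node\.session\.queue_depth = .*/node.session.queue_depth = 128/' \
  /etc/iscsi/iscsid.conf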

 

13.  Re-run eqltune -v to see if the changes took effect

All changes took effect except the NIC settings added to the rc.local file, which turned out to be a syntax error in the EqualLogic documentation provided.  It has been corrected in the instructions above.

 

14.  Run command to view and modify MPIO settings

rswcli -mpio-parameters

 

This returned the following results (which seem good for now):

Processing mpio-parameters command…

MPIO Parameters:

Max sessions per volume slice:: 2

Max sessions per entire volume:: 6

Minimum adapter speed:: 1000

Default load balancing policy configuration: Round Robin (RR)

IOs Per Path: 16

Use MPIO for snapshots: Yes

Internet Protocol: IPv4

The mpio-parameters command succeeded.

 

15.  Restrict MPIO to just the SAN interfaces

Exclude LAN traffic

            rswcli -E -network 192.168.0.0 -mask 255.255.255.0

 

VERIFICATION:  List status of includes/excludes to verify changes

            rswcli -L

 

VERIFICATION:  Verify that the Host Connection Manager is managing just the two SAN interfaces

      ehcmcli -d

 

16.  Discover targets

iscsiadm -m discovery -t st -p 10.1.0.10

(Make sure no unexpected volumes connect, and note the IQN presented.  You'll need it later.)

 

VERIFICATION:  Show the ifaces

[root@testvm ~]# iscsiadm -m iface | sort

default tcp,<empty>,<empty>,<empty>,<empty>

eql.eth1_0 tcp,00:50:56:8B:1F:71,<empty>,<empty>,<empty>

eql.eth1_1 tcp,00:50:56:8B:1F:71,<empty>,<empty>,<empty>

eql.eth2_0 tcp,00:50:56:8B:57:97,<empty>,<empty>,<empty>

eql.eth2_1 tcp,00:50:56:8B:57:97,<empty>,<empty>,<empty>

iser iser,<empty>,<empty>,<empty>,<empty>

 

VERIFICATION:  Check connection sessions via iscsiadm -m session to show that no connections exist

[root@testvm ~]# iscsiadm -m session

iscsiadm: No active sessions.

 

VERIFICATION:  Check connection sessions via /dev/mapper to show that no connections exist

[root@testvm ~]# ls -la /dev/mapper

total 0

drwxr-xr-x  2 root root     60 Aug 26 09:59 .

drwxr-xr-x 10 root root   3740 Aug 26 10:01 ..

crw-------  1 root root 10, 63 Aug 26 09:59 control

 

VERIFICATION:  Check connection sessions via ehcmcli -d to show that no connections exist

[root@testvm ~]# ehcmcli -d

 

17.  Log in to just one of the iface paths of your liking (eql.eth1_0 in the example below), and replace the IQN shown with yours.  The HIT will take care of the rest.

iscsiadm -m node -T iqn.2001-05.com.equallogic:0-8a0906-451da1609-2660013c7c34e45d-nfs001 -I eql.eth1_0 -l

 

This returned:

[root@testvm ~]# iscsiadm -m node -T iqn.2001-05.com.equallogic:0-8a0906-451da1609-2660013c7c34e45d-nfs001 -I eql.eth1_0 -l

Logging in to [iface: eql.eth1_0, target: iqn.2001-05.com.equallogic:0-8a0906-451da1609-2660013c7c34e45d-nfs001, portal: 10.1.0.10,3260]

Login to [iface: eql.eth1_0, target: iqn.2001-05.com.equallogic:0-8a0906-451da1609-2660013c7c34e45d-nfs001, portal: 10.1.0.10,3260] successful.

 

VERIFICATION:  Check connection sessions via iscsiadm -m session

[root@testvm ~]# iscsiadm -m session

tcp: [1] 10.1.0.10:3260,1 iqn.2001-05.com.equallogic:0-8a0906-451da1609-2660013c7c34e45d-nfs001

tcp: [2] 10.1.0.10:3260,1 iqn.2001-05.com.equallogic:0-8a0906-451da1609-2660013c7c34e45d-nfs001

 

VERIFICATION:  Check connection sessions via /dev/mapper.  This will give you the device string you will need for making and mounting the filesystem.

[root@testvm ~]# ls -la /dev/mapper

 

 

VERIFICATION:  Check connection sessions via ehcmcli -d

[root@testvm ~]# ehcmcli -d

 

18.  Make a new file system from the dm-switch name, replacing the IQN shown with yours.  If this is an existing volume that has been used before (from a snapshot, or on another machine), there is no need to perform this step.  The documentation shows this step without the "-j" switch, which would format it as a non-journaled ext2 file system.  The -j switch formats it as an ext3 file system.

mke2fs -j -v /dev/mapper/eql-0-8a0906-451da1609-2660013c7c34e45d-nfs001

 

19.  Mount the device to a directory

[root@testvm mnt]# mount /dev/mapper/eql-0-8a0906-451da1609-2660013c7c34e45d-nfs001 /mnt/eql2tbnfs001

 

20.  Establish iSCSI connection automatically

[root@testvm ~]# iscsiadm -m node -T iqn.2001-05.com.equallogic:0-8a0906-451da1609-2660013c7c34e45d-nfs001 -I eql.eth1_0 -o update -n node.startup -v automatic

 

21.  Mount volume automatically

Edit /etc/fstab, adding the following:

/dev/mapper/eql-0-8a0906-451da1609-2660013c7c34e45d-nfs001 /mnt/eql2tbnfs001 ext3 _netdev  0 0

Restart system to verify automatic connection and mounting.
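
One loose end the steps above leave implicit is the NFS export itself (the deployment assumptions called for the data to be accessible via /data1).  A minimal /etc/exports entry might look like the following; the client subnet and options are placeholders to tune for your environment, and since some NFS implementations are picky about exporting through a symlink, the real path /mnt/eql2tbnfs001 may be the safer entry:

/data1 10.0.0.0/24(rw,sync,no_root_squash)

Then re-export and verify:

exportfs -ra
exportfs -v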

 

Working with guest attached volumes
After you have things configured and operational, you’ll see how flexible guest iSCSI volumes are to work with.

  • Do you want to temporarily mount a snapshot to this same VM or another VM? Just set the snapshot online, and make a connection inside the VM (a concrete example follows this list).
  • Do you need to archive your data volume to tape, but do not want to interfere with your production system? Mount a recent snapshot of the volume to another system, and perform the backup there.
  • Do you want to do a major update to that front-end server presenting the data? Just build up a new VM, connect the new VM to that existing data volume, change your DNS aliasing (which you really should be using), and you're done.
  • Do you need to analyze the I/O of the guest attached volumes? Just use SANHQ. You can easily see if that data should be living on some super fast pool of SAS drives, or a pool of PS4000e arrays.  You’ll be able to make better purchasing decisions because of this.
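
To make the first bullet concrete: once a snapshot has been set online in Group Manager, attaching it from a VM is just another discovery and login.  The snapshot IQN and device name below are illustrative placeholders; snapshots carry their own unique IQNs, which discovery will reveal:

iscsiadm -m discovery -t st -p 10.1.0.10
iscsiadm -m node -T <snapshot-iqn> -I eql.eth1_0 -l
mount /dev/mapper/eql-<snapshot-id> /mnt/snaptest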

So, how did it measure up?

The good…
Right out of the gate, I noticed a few really great things about the HIT for Linux.

  • The prerequisites and installation.  No compiling or other unnecessary steps.  The installation package installed clean with no fuss.  That doesn’t happen every day.
  • Eqltune.  This little utility is magic.  Talk about reducing the overhead of preparing a system for MPIO and all things related to guest-based iSCSI volumes.  It gave me a complete set of adjustments to make, divided into three simple categories.  After I made the adjustments, I re-ran the utility, and everything checked out okay.  Actually, all of the command line tools were extremely helpful.  Bravo!
  • One really impressive trait of the HIT/LE is how it handles the iSCSI sessions for you. Session build up and teardown is all taken care of by the HIT for Linux.

The not so good…
Almost as fast as the good shows up, you'll notice a few limitations.

  • Version 1.0 is only officially supported on RedHat Enterprise Linux (RHEL) 5.5 and 6.0 (no 6.1 as of this writing).  This might be news to Dell, but Debian-based systems like Ubuntu are running in enterprises everywhere for their cost, solid behavior, and minimalist approach.  RedHat clones dominate much of the market; some commercial, and some free.  Personally, I find upstream distributions such as Fedora sketchy and prone to breakage with each release (note to Dell: I don't blame you for not supporting these.  I wouldn't either).  Other distributions are quirky for their own reasons of "improvement," and I can understand why these weren't initially supported either.  A safer approach for Dell (and the more flexible approach for the customer) would be to 1.) get out a version for Ubuntu as fast as possible, and 2.) extend the support of this version to RedHat's downstream, 100% binary-compatible, very conservative distribution, CentOS.  For you Linux newbies, think of CentOS as the RedHat installation with the proprietary components stripped out, and nothing else added.  While my first production Linux server running the HIT is RedHat 5.5, all of my testing and early deployment occurred on a CentOS 5.5 distribution, and it worked perfectly.
  • No Auto Snapshot Manager (ASM) or equivalent.  I rely on ASM/ME on my Windows VMs with guest attached volumes for a few key capabilities: 1.) a mechanism to protect the volumes via snapshots and replicas, and 2.) coordination of applications and I/O so that I/O is flushed properly.  Now, Linux does not have any built-in facility like Microsoft's Volume Shadow Copy Services (VSS), so Dell can't do much about that.  But perhaps some simple script templates might give users ideas on how to flush and pause I/O of the guest attached volumes for snapshots (see the sketch after this list).  Just having a utility to create Smart Copies or mount them would be pretty nice.

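To the script-template point, something like the sketch below is the general idea, though it is purely illustrative: fsfreeze ships with a newer util-linux than stock RHEL 5.5 carries (xfs_freeze fills the same role on XFS), and the snapshot itself would still need to be triggered from the Group Manager side:

#!/bin/bash
# Illustrative only: quiesce a guest attached volume before a snapshot.
MNT=/data1
sync                 # flush dirty pages to the volume
fsfreeze -f "$MNT"   # pause new I/O (requires util-linux 2.18+, newer than stock RHEL 5.5)
# ...trigger the volume snapshot here (Group Manager, or asmcli in HIT/LE 1.1)...
fsfreeze -u "$MNT"   # resume I/O
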
The forgotten…
A few things overlooked?  Yep.

  • I was initially encouraged by the looks of the documentation.  However, in order to come up with the above, I had to piece together information from a number of different resources.  Syntax and capitalization errors will kill you in a Linux shell environment, and some of those inconsistencies and omissions showed up.  With a little triangulation, I was able to get things running correctly, but it quickly became the kind of frustrating, time-consuming exercise I felt like I'd been through before.  Hopefully the information provided here will help.
  • Somewhat related to the documentation issue is something that has come up with a few of the other EqualLogic tools: customers often don't understand WHY one might want to use the tool.  The same goes for the HIT for Linux.  Nobody even gets to the "how" if they don't understand the "why."  But I'm encouraged by the great work the Dell TechCenter has been doing with their white papers and videos.  It has become a great source for current information, and is moving in the right direction on customer education.

Summary
I'm generally encouraged by what I see, and am hoping that Dell EqualLogic takes on the design cues of the HIT/ME to employ features like Auto Snapshot Manager, and an equivalent to eqlxcp (EqualLogic's offloaded file copy command in Windows).  The HIT for Linux helped me achieve exactly what I was trying to accomplish.  The foundation for another easy-to-use tool in the EqualLogic lineup is certainly there, and I'm looking forward to seeing how it can improve.

Helpful resources
Configuring and Deploying the Dell EqualLogic Host Integration Toolkit for Linux
http://en.community.dell.com/dell-groups/dtcmedia/m/mediagallery/19861419/download.aspx

Host Integration Tools for Linux – Installation and User Guide
https://www.equallogic.com/support/download_file.aspx?id=1046 (login required)

Getting more IOPS on workloads running RHEL and EQL HIT for Linux
http://en.community.dell.com/dell-blogs/enterprise/b/tech-center/archive/2011/08/17/getting-more-iops-on-your-oracle-workloads-running-on-red-hat-enterprise-linux-and-dell-equallogic-with-eql-hitkit.aspx 

RHEL5.x iSCSI configuration (Not originally authored by Dell, nor specific to EqualLogic)
http://www.equallogic.com/resourcecenter/assetview.aspx?id=8727 

User’s experience trying to use the HIT on RHEL 6.1, along with some other follies
http://www.linux.com/community/blogs/configuring-dell-equallogic-ps6500-array-to-work-with-redhat-linux-6-el.html 

Dell TechCenter website
http://DellTechCenter.com/ 

Dell TechCenter twitter handle
@DellTechCenter