HPE Storage Solutions Forum - Storage Area Networks (SAN) - Small and Medium Business



MSA 2040 Tiering very low IOPS and awful performance

Something is wrong with the MSA 2040 performance tiering: I can only get about 5K IOPS out of it, while another SAN, a P2000 G3, provides 16K for the same workload.

Both SANs are FC and connected through HP 8/24 SAN switches. Can you help me diagnose the issue? I am running out of ideas.

We are using HP DL360p Gen8 servers as Hyper-V hosts for the virtualised workload.
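
A starting point for the diagnosis could be the array's own performance counters. The MSA 2040 CLI has read-only statistics commands along these lines (command names per the MSA 2040 CLI reference; output details vary with the firmware bundle):

# show controller-statistics      (IOPS and throughput per controller)
# show host-port-statistics       (per host port - is the load spread across all FC ports?)
# show volume-statistics          (IOPS and read/write split per volume)
# show disk-statistics            (per disk - shows whether the SSD tier is actually servicing the IO or everything lands on the HDD groups)

Comparing these counters against the P2000 G3 under the same workload should show whether the 5K IOPS ceiling sits at the host ports, at the pool/volume, or at the disk tier.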

 

HBA Driver

Dear Friends,

I am looking for the following HBA driver (Linux) to work with an HPE MSA 2040 SAN:

HP P/N: AJ764-63002
SP P/N: 489191-001
S/N: 8C972112EC
MFR P/N: PX2810403-20 M

I would be thankful if you could direct me to the download page.
Regards,

Khalid

 

HDD for MSA2312sa

Hi All,

I have an HDD with P/N 627114-001 and GPN 507129-010. Is it compatible with my MSA2312sa?

thanks

 

MSA2324i - Degraded Out Port?

We have a degraded Out Port on one of our SAN controllers and are also experiencing sluggish performance. Could anyone shed some light on how we can fix this?

I have attached a screenshot so you know what I mean. Below are the results of the show system and versions console commands:

 

# show system
System Information
------------------
System Name: MSA2324i
System Contact: Admin
System Location: Server Room
System Information: Shelf1
Vendor Name: HP StorageWorks
Product ID: MSA2324i
Product Brand: MSA Storage
SCSI Vendor ID: HP
Enclosure Count: 3
Health: OK

 

 

 

# versions
Controller A Versions
---------------------
Storage Controller CPU Type: Athlon 2600+ 1600MHz
Storage Controller Code Version: M114P01
Memory Controller FPGA Code Version: F300R22
Storage Controller Loader Code Version: 19.009
Management Controller Code Version: W441R57
Management Controller Loader Code Version: 12.015
Expander Controller Code Version: 1118
CPLD Code Version: 8
Hardware Version: 53

Controller B Versions
---------------------
Storage Controller CPU Type: Athlon 2600+ 1600MHz
Storage Controller Code Version: M114P01
Memory Controller FPGA Code Version: F300R22
Storage Controller Loader Code Version: 19.009
Management Controller Code Version: W441R57
Management Controller Loader Code Version: 12.015
Expander Controller Code Version: 1118
CPLD Code Version: 8
Hardware Version: 53
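
For narrowing this down, a couple of read-only CLI checks may help, assuming these commands are available in this form on the firmware shown above (the exact names are in the CLI reference for that release):

# show enclosure-status      (per-enclosure component status, including the I/O module ports)
# show frus                  (health of the controller modules, expansion I/O modules and power supplies)

A degraded Out Port generally concerns the SAS link to the next enclosure, so the cable between this Out Port and the In Port of the following shelf (or the expansion I/O module itself) is the first thing to reseat or swap; a marginal link that keeps retrying would also fit the sluggish performance.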

 

MSA2324fc Unable to communicate with storage controller

Hello,
We use an MSA2324fc with two controllers. The disks are running fine, but we are no longer able to configure anything. Via the SMU we get "The system is currently unavailable", and the CLI can't execute any configuration commands; we get "Unable to communicate with Storage Controller. Please retry the command." We are also not able to do a restart SC, and rescan doesn't help either.
The attached show configuration output looks very strange.
Is there any way we can solve the issue without any downtime?
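
If the disks are still serving IO, it is often only the management controllers (MCs) that are hung rather than the storage controllers, and restarting an MC does not disrupt host IO. A minimal sketch, assuming the standard MSA2324fc CLI commands:

# show controllers      (confirm both storage controllers are still operational)
# restart mc a
# restart mc b

If the CLI refuses even the MC restart, the remaining options (such as reseating one controller module at a time) are disruptive enough that they are best attempted with HPE support involved.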

 

MSA 2040 Auto Tiering, Terrible performance ...

Hi all,

Hoping someone might be able to help. We have an HP MSA 2040 with auto tiering; the disk groups, pools, and volumes are configured like so:

4x 1TB SSD RAID5

9x 1.8TB 10k SAS RAID5

9x 1.8TB 10k SAS RAID5

All in a single virtual pool. In the virtual pool I have two volumes configured, Vol1 and Vol2, at 10TB (or thereabouts), assigned as Cluster Shared Volumes (CSVs); the volumes are set to the No Affinity tiering preference as per the best practices.

I am using Windows 2016 with Hyper-V and failover clustering.  We currently have two nodes.

Our hosts are directly connected via two 10Gb NICs (one to each controller) on the same IP subnet; for testing purposes I have disabled one NIC and changed round robin to failover only. Jumbo frames are not configured, but even when they are, the performance difference is negligible.

Performance-wise, from a Hyper-V VM I use IOMeter and load a 20GB disk with a 4KB 100% sequential write access profile, and get a pitiful 8K IOPS.

From the Hyper-V host I do the same and get better, but not by much: 18K IOPS. I know 4KB 100% sequential write is a lousy real-world test, but from what I read it should be one the SAN can easily fulfil at up to around 80,000 IOPS.
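
For a second opinion on those numbers, the same profile can be reproduced from the Hyper-V host or from a VM with Microsoft's DiskSpd; this is a sketch only, and the target path, file size, thread count and queue depth are assumptions to be adjusted to match the IOMeter settings:

diskspd.exe -c20G -d60 -b4K -w100 -s -Sh -t4 -o32 -L C:\ClusterStorage\Volume1\iotest.dat

(-c20G creates a 20GB test file, -b4K -w100 -s gives the 4KB 100% sequential write profile, -Sh disables software caching and uses write-through, -t4 -o32 runs 4 threads with 32 outstanding IOs each, -L reports latency.)

Queue depth matters a great deal here: at 1 outstanding IO per worker, IOPS is capped by round-trip latency rather than by the array, so it is worth confirming the "# of Outstanding I/Os" setting in the IOMeter profile before blaming the tiering.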

Can't readily see any errors on the SAN or the host.

My question is, what the hell have I done wrong :)

 

 

MSA 2040, snapshots problem

Hello!

We had one controller failure; it has been replaced, but since then we have a problem with snapshot creation:

 

 create snapshots volumes volume snap
 Error: The specified name is already in use. (2017-08-03  09:30:24)

The snapshot actually is created; I can see it using

show snapshots snap-pool pool-name

It can even be mapped to a host, but the host doesn't see it :-(

There are also frequent errors during snapshot deletion:

delete snapshot snap
Error: Command was aborted by user. - One or more snapshots were not deleted.

And sometimes I get:

Error: The MC is not ready. Wait a few seconds then retry the request.

 

How can I solve this problem?

Thank you!

 

Windows XP 64 - MSA 2040 Chassis Driver

I'm trying to find a Windows XP 64 driver for the MSA 2040 chassis to prevent Windows from prompting to install a driver each time the server is restarted.

We have a mix of Win7 x64, Win2012 R2, and several WinXP 64 servers, and only the XP servers are unable to identify the chassis. It's more of a cosmetic prompt, but it's annoying the customer.

Regards,

R3.

 

MSA 2040 iSCSI "All Other Initiators" working only

Hello,

I have an MSA 2040 and I have set up a volume that I am trying to present to three ESXi 5.5 hosts. I have other iSCSI storage technology in my environment, so I am familiar with mapping iSCSI initiators to volumes. However, with the MSA 2040 the only way the initiators seem to detect the volume is if I use the "All Other Initiators" option.

I have set up all my hosts with their respective initiators, and even grouped them. I did come across the following in the best practices document:

The HPE MSA 2040/2042 enclosure is presented to vSphere as a fibre channel enclosure using LUN 0. This means all volumes mapped to the vSphere hosts or clusters must not use LUN 0. The SMU software doesn’t automatically detect this and for each mapping created, it defaults to LUN 0.

So when I created the mapping I set the LUN to 1, and I am using a host group I created on the MSA. That did not make a difference. If I change the mapping to use "All Other Initiators", the hosts detect the volume during the rescan process.

I really hope someone can point me in the right direction on this.
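
Assuming the SMU mapping looks right, a few read-only CLI commands can confirm what the array itself thinks is mapped (command names per the MSA 2040 CLI reference):

# show initiators      (are all three ESXi IQNs discovered and nicknamed as expected?)
# show host-groups     (are those initiators really members of the host group used in the mapping?)
# show maps            (is the volume mapped to that group at LUN 1 on the intended ports, with no leftover default mapping at LUN 0?)

When the catch-all "All Other Initiators" mapping works but the explicit one does not, the usual cause is an IQN mismatch: the initiator ID recorded on the MSA differs slightly from the IQN the ESXi software iSCSI adapter actually presents (visible under Storage Adapters in the vSphere client), so the hosts fall through to the default mapping.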

 

MSA2040FC - IO traffic only on controller A

Hello,

I have a strange problem with an MSA 2040 FC at a customer site. Two Linux hosts are connected directly to the array. The array has two storage pools, with one volume configured on each pool. Multipathing is configured correctly, and I can see IO in the volume statistics for both volumes, but I see IO traffic only on ports A1 and A2, not B1 and B2.

Pool configuration is below. Volume configuration, IO statistics, and multipath output are in the attached files.

------------------------------------------------------------------------------------
Name  Size     Free   Own Pref RAID   Class    Disks Spr Chk  Status Jobs  Job% Serial Number                    Spin Down SD Delay Sec Fmt   Health     Reason Action
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
dg01  5995.1GB 25.1MB A   A    RAID10 Linear   12    0   512k FTOL              00c0ff265c150000a5fd665600000000 Disabled  0        512n      OK                      
dg02  4995.9GB 37.7MB B   B    RAID10 Linear   10    0   512k FTOL   VRSC  63%  00c0ff266b6900009efe665600000000 Disabled  0        512n      OK                      
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------

Do you have any idea why I don't observe IO on the B ports?
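
One host-side check that could explain it, assuming device-mapper-multipath with ALUA on the Linux hosts:

# multipath -ll

For the volume on dg02 (owned by controller B), the B1/B2 paths should appear in the active, highest-priority path group and the A1/A2 paths in a lower-priority group. If the A paths are the active group for both volumes, the hosts are not honouring the ALUA preference (typically path_grouping_policy group_by_prio and prio alua are missing from /etc/multipath.conf), so IO for the dg02 volume enters through the A ports and the array forwards it internally to controller B - which would match port statistics showing traffic only on A1/A2.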

 

 

Best way to setup an MSA 2042/2052

Hi there,

I am thinking about replacing my MSA with a brand new one, probably the new 2052. Does anyone have any negative experiences with auto-tiering or with virtual storage as a whole? Currently I am using a G4 model with just linear storage, and that has never failed on me. But I think a lot of the code from the bigger EVA platform has made it into the MSA now; the techniques look very familiar.

That being said, I can use the SSDs for read cache or for performance tiering. Somehow I feel some hesitation towards performance tiering. This is my primary storage system and it must be a set-it-and-forget-it system; I can't afford any downtime on this building block. Can anyone say something about the stability of the platform, especially comparing read cache vs. performance tiering? And can you even have two pools (one per controller) set up with a performance tier? I thought this was a limitation of the 2042, but I couldn't find anything for the 2052.

Then, supposing performance tiering is the way to go, how would you set up three enclosures if you do not want to spread disk groups over different enclosures? Or is it fine to do so? The first enclosure would have 4x SSD (performance tier setup; the SSD disk group needs at least RAID 1, since it contains data). That leaves 20 unused bays in the first enclosure. Following the power-of-2 rule, I can add at most 9 HDDs to a disk group (8 data and 1 parity, RAID 5). This is per pool, so 18 HDDs in total, which leaves me with 2 free bays. I can add two spares, one per pool?

The next enclosures would have a mix of SAS and MDL drives. Following the power of 2 here again leaves me with 6 free bays in the enclosure, because 18 (2 x (8+1)) will be used by disks. So now what? Again two spares? That leaves me with 4 bays unused.

To summarize, I find it rather difficult to find a setup that I can expand per enclosure, like I do now with linear storage.

Am I missing something? I will be hosting a lot of VMs served by three DL380s on this storage array.

Any advice is welcome.

Greetings,

Ronald

 

 

 

HP IP Distance Gateway MPX110

I am trying to connect an HP MPX110 to a new Brocade SAN switch. This switch does not support loop connections, only F_PORT.

How can I change the "Connection mode" on the MPX110? The GUI combo box is greyed out, and the CLI does not show this command.


 

MSA 2040

So, I am receiving emails every hour from our SAN. From what I found out about the message, it's not a big deal. I have downloaded all the logs and saved them, then tried to clear them out. Any thoughts on how to stop/fix this issue? See the event message below. This is for an HP MSA 2040.

Event: Managed logs: A log region has reached the level at which it should be archived. (region: SC debug, region code: 5)
EVENT ID: #A17  EVENT CODE: 400  EVENT SEVERITY: Informational  EVENT TIME: 2017-07-18 20:52:05
Bundle A Version: GL220P009  MC A Version: GLM220P008-01  SC A Version: GLS220P08-01
Bundle B Version: GL220P009  MC B Version: GLM220P008-01  SC B Version: GLS220P08-01
Additional Information: None.
Recommended Action: No action is required.
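
Event 400 is the managed-logs feature announcing that a log region is ready to be pulled, so if nobody is collecting those regions the hourly notifications can be stopped at the source. A sketch, with parameter names to be checked against the MSA 2040 (GL220) CLI reference:

# set advanced-settings managed-logs disabled      (stop generating the "log region should be archived" events)
# set email-parameters notification-level error    (alternatively, keep managed logs but only email error/critical events)

Both settings are also reachable in the SMU under the system/notification settings.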

 


 

Decommissioning HP MSA2324i

Hello all,

We are currently using an MSA2324i, with an MSA70 attached to it, for our storage needs. We will soon be transitioning away from it to another solution, but I'm really not sure what to do with it. Obviously, my first order of business would be to dispose of the disks in it, but once it is empty, is there any point in trying to sell technology that old? Is it worth trying to sell on eBay or somewhere else? Would it be of any use to run in a test environment, or somewhere that it wouldn't be relied upon?

I had hoped to possibly reuse it at a DR site, but one of the main reasons I'm replacing it is the antiquated web gui.

MSA Web GUI

Are there any suggestions on what to do with it?

 

Thanks

 

 

 

MSA shared storage MSA2312sa

Hello ALL 

 

When I use the command show vdisks I see the number of disks as 8, but using the command show disks only 7 are displayed. Any explanation?
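
A likely explanation is a failed or removed member disk: it still counts in the vdisk configuration but no longer answers as a physical disk. Two read-only commands narrow it down (names per the MSA2000-series CLI reference):

# show vdisks      (a vdisk with a missing member shows a degraded status such as FTDN or CRIT instead of FTOL)
# show disks       (compare the listed enclosure/slot positions against the populated bays to find the slot that is not reporting)

If one vdisk is indeed degraded, replacing the missing disk and letting the rebuild finish should bring the count back to 8.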

Thanks

 

 

HP MSA 2300i Unhealthy controller

Hi,

I have an HP MSA 2300i with controllers A and B. Currently there is a warning about an unhealthy controller and PSU. When I check from controller A, it shows controller B as unhealthy, and from controller B, it shows controller A as unhealthy. Could anybody help me with this?

Best Regards,

 

MSA2324i physical migration prerequisites

Hi experts,

There's a site consolidation where two MSA2324i arrays from two different sites are going to be lifted and shifted to a new third site. The host-to-storage connectivity is over iSCSI.

What are the prerequisites, precautions, prechecks, and procedures to be followed pre- and post-physical migration?

Note: the IP addresses are all going to change, but the number of hosts and the storage volumes aren't going to change in any way.
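
As part of the prechecks, it is worth capturing the array configuration before shutdown so it can be compared after the move. A minimal sketch using read-only MSA CLI commands (names per the MSA2000-series CLI reference), with the output saved somewhere off the array:

# show configuration
# show vdisks
# show volumes
# show network-parameters      (record the current management IPs before they are changed)

Saving the logs from the SMU before power-down also gives a baseline to compare against if anything reports degraded once the arrays come up at the new site.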

 

MSA2042 Unmap on ESX6.5

Hi,
I hope someone here is running a setup similar to mine, because I can't figure this out based on the documentation and I can't live-test it right now.

Setup:
*ESXi 6.5 (fully patched software & firmware) on HPE DL360p servers
*HPE MSA2042 with one-to-latest firmware (008)
*Fibre Channel over Brocade fabric switches, QLogic HBAs (HPE variant) in the hosts

*all thin-provisioned VMFS volumes on the MSA2042

*all datastores are VMFS6 (thin provisioned), and all datastores have auto-unmap turned on (by default Low priority).

Issue:
*deleting VMs doesn't free the space on the volume, and thus on the disk groups. The space shows as free in the VMFS. To get it back on the SAN, I need to empty and delete the volume.

*VAAI counters such as MBDEL in esxtop remain at 0 unless I manually run "esxcli storage vmfs unmap" on the datastore, which recovers some but not all of the space. More importantly, I cannot run this anymore, as it seems to have had such a performance impact that a few datastores went down (APD?!), so I won't test this again before VMware confirms I can run it safely.
*!!! I should not need to run it

*Note: "If you are using VMFS6 in ESXi 6.5, this article is not applicable."

*the MSA2042's best practices sheet (https://www.hpe.com/h20195/v2/GetPDF.aspx/4AA4-6892ENW.pdf) clearly states VAAI and T10 unmap compatibility

*the VMware HCL lists the 2042 as capable of: "VAAI-Block Thin Provisioning, HW Assisted Locking, Full Copy, Block Zero" (VMW HCL SAN)

*according to this article by Cormac Hogan, the HCL should list a "footnote" to support Automatic Unmap

Question: should I be expecting auto-unmap on this array?

VMware support pointed me to HPE. On the ESXi side, everything is OK (VAAI support detected, enabled, etc.).
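
For completeness, the ESXi-side state can be captured with a few read-only commands before going back to HPE (the naa ID and datastore name below are placeholders):

esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx      (Delete Status must show "supported" for UNMAP to reach the array)
esxcli storage vmfs reclaim config get -l Datastore01                   (shows the automatic unmap priority and granularity VMFS6 is using)
esxcli storage vmfs unmap -l Datastore01 -n 200                         (manual reclaim with a small reclaim unit, to limit the performance impact seen earlier)

If Delete Status is "unsupported" on the MSA LUNs, the array is not advertising UNMAP for that volume, which points back at the MSA firmware or volume type rather than at ESXi.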

 

ps: support case id 17508974807

thank you
Quentin

 

HPE D3600 Enclosure

The problem is that the enclosure displays an error message on the POST screen: 4 TB SATA 512e HDD at Port 1E : Box 1

could not be authenticated as a genuine HPE drive. The Smart RAID/HBA controller will not control the LEDs to this drive.

Even though I have replaced the drive to eliminate this message, it still appears on the screen.

 

Does anyone have a solution for this error?

 

 

HP 2012fc DC Modular Smart Array crashed controller and multiple drive failures

Hi, new here, so please be nice to me.

We have a customer who has two StorageWorks units with the attached SAN Smart Array.

Last year one failed and we had to rebuild the smart array and re-archive everything.

Now the other has failed; the logs appear to show 6 failed drives and a controller. This seems too much of a coincidence to happen all at once. Is anyone aware of something that may have caused this, and how we could recover without having to reinstall it again?

The attached screenshots show 3 disks in leftover mode and the current firmware versions (J200P46 / W420R58 / 3206). There is also a log file covering the period of the failure.

I know this is really a support question, but they are way out of warranty and the customer is reluctant to pay for a service agreement they may only use once.

Any ideas?

Many Thanks,

John
