HPE Storage Solutions Forum - Storage Area Networks (SAN) - Small and Medium Business

Best way to set up an MSA 2042/2052

Hi there,

I am thinking about replacing my MSA with a brand new one, probably the new 2052. Does anyone have any negative experiences with auto-tiering or virtual storage as a whole? Currently I am using a G4 model with just linear storage, which has never failed on me. But I think a lot of the code from the bigger EVA platform has made it into the MSA now; the techniques look very familiar.

That being said, I can use the SSDs for read cache or for performance tiering. Somehow I feel some hesitation towards performance tiering. This is my primary storage system and it must be a set-it-and-forget-it system; I can't afford any downtime on this building block. Can anyone say something about the stability of the platform, especially comparing read cache vs. performance tiering? And can you even have two pools (one per controller) set up as a performance tier? I thought this was a limitation in the 2042, but I couldn't find anything for the 2052.

Then, supposing performance tiering is the way to go, how would you set up three enclosures if you do not want to spread disk groups over different enclosures? Or is it fine to do so? The first enclosure would have 4 x SSD (performance tier setup; the SSD disk group needs at least RAID 1, since it contains data). That leaves 20 unused bays in the first enclosure. Following the power-of-2 rule, I can add at most 9 HDDs in a disk group (8 data and 1 parity, RAID 5). This is per pool, so 18 HDDs in total, leaving me with 2 free bays. I can add two spares, one per pool?

The next enclosures would have a mix of SAS and MDL. Following the power of 2 here again leaves me with 6 free bays in the enclosure, because 18 (2 x (8+1)) will be used by disks. So now what? Again 2 spares? That leaves me with 4 bays unused.
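For what it's worth, the bay arithmetic above can be sketched in a few lines. This is just a back-of-the-envelope sketch assuming 24-bay enclosures, RAID-5 groups of 8 data + 1 parity HDDs per pool, and one spare per pool (the layout proposed above), not an official MSA sizing rule:

```python
def enclosure_layout(bays=24, ssds=0, data_disks=8, parity=1, pools=2):
    # One RAID-5 disk group of (data_disks + parity) HDDs per pool,
    # following the power-of-2 rule (8 data disks per group),
    # plus one spare per pool.
    hdds = pools * (data_disks + parity)
    spares = pools
    return {"hdds": hdds, "spares": spares,
            "free_bays": bays - ssds - hdds - spares}

# First enclosure: 4 SSDs + 2 x (8+1) HDDs + 2 spares
print(enclosure_layout(ssds=4))   # {'hdds': 18, 'spares': 2, 'free_bays': 0}
# Expansion enclosure: 2 x (8+1) HDDs + 2 spares
print(enclosure_layout())         # {'hdds': 18, 'spares': 2, 'free_bays': 4}
```

This just confirms the numbers above: with spares, the first enclosure ends up fully populated and each expansion enclosure strands 4 bays.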

To summarize, I find it rather difficult to find a setup that I can expand per enclosure, like I do now with linear storage.

Am I missing something? I will be hosting a lot of VMs served by three DL380's on this storage array.

Any advice is welcome.






HP IP Distance Gateway MPX110

I am trying to connect an HP MPX110 to a new Brocade SAN switch. This switch does not support loop-back connections, only F_PORT.

How can I change the "Connection mode" on the MPX110? The GUI combo box is grayed out, and the CLI does not show this command.



MSA 2040

So, I am receiving emails every hour from our SAN. From what I found out about the message, it is not a big deal. I have downloaded all the logs and saved them, then tried to clear them out. Any thoughts on how to stop/fix this issue? See the error message below. This is for an HP MSA 2040.

Event: Managed logs: A log region has reached the level at which it should be archived. (region: SC debug, region code: 5)
EVENT ID: #A17
EVENT CODE: 400
EVENT SEVERITY: Informational
EVENT TIME: 2017-07-18 20:52:05
Bundle A Version: GL220P009  MC A Version: GLM220P008-01  SC A Version: GLS220P08-01
Bundle B Version: GL220P009  MC B Version: GLM220P008-01  SC B Version: GLS220P08-01
Additional Information: None.
Recommended Action: - No action is required




Decommissioning HP MSA2324i

Hello all,

We are currently using an MSA2324i, with an MSA70 attached to it, for our storage needs. We will soon be transitioning away from it to another solution, but I'm really not sure what to do with it. Obviously, my first order of business would be to dispose of the disks in it, but once it is empty, is there any point in trying to sell technology that old? Is it worth trying to sell on eBay or somewhere else? Would it be of any use to run in a test environment, or somewhere that it wouldn't be relied upon?

I had hoped to possibly reuse it at a DR site, but one of the main reasons I'm replacing it is the antiquated web gui.


Are there any suggestions on what to do with it?






MSA shared storage MSA2312sa

Hello ALL 


When I use the command show vdisks, I see the number of disks as 8, but using the command show disks, only 7 are displayed. Is there any explanation?




HP MSA 2300i Unhealthy controller


I have an HP MSA 2300i with controllers A and B. Currently there is a warning about an unhealthy controller and PSU. When I check from controller A, it shows B as unhealthy, and from controller B, it shows controller A as unhealthy. Could anybody help me with this?

Best Regards,


MSA2324i physical migration prerequisites

Hi experts,

There's a site consolidation where two MSA2324i arrays from two different sites are going to be lifted and shifted to a new third site. The host-to-storage connectivity is over iSCSI.

What are the pre-requisites, precautions, prechecks, and procedures to be followed pre- and post-physical-migration?

Note: The IP addresses are all going to change, but the number of hosts and the storage volumes isn't going to undergo any sort of change.


MSA2042 unmap on ESXi 6.5

I hope someone here is running a setup similar to mine, because I can't figure this out from the documentation and I can't live-test it right now.

*ESXi 6.5 (fully patched, software & firmware) on HPE DL360p's
*HPE MSA2042 w/ one-to-latest firmware (008)
*Fibre Channel over Brocade Fabric switches, QLogic HBA's (HPE variant) in the hosts

*all thin provisioned VMFS volumes on MSA2042

*all datastores in VMFS6 (thin prov), and all datastores have the auto-unmap turned on (by default Low prio).

*deleting VMs doesn't free the space on the volume and thus on the disk groups, even though it shows as free in VMFS. To get the space back on the SAN, I need to empty & delete the volume.

*VAAI counters such as MBDEL in ESXTOP remain at 0 unless I manually run "esxcli storage vmfs unmap" on the datastore, which recovers some but not all of the space. More importantly, I cannot run this anymore, as it seems to have had such a performance impact that a few datastores went down (APD?!), so no testing this before VMware confirms I can run it safely.
*!!! I should not need to run it

*a VMware KB article on manual unmap states: "Note: If you are using VMFS6 in ESXi 6.5, this article is not applicable."

*the MSA2042's best practices sheet (https://www.hpe.com/h20195/v2/GetPDF.aspx/4AA4-6892ENW.pdf) clearly states VAAI and T10 unmap compatibility

*the vmware HCL lists the 2042 as capable of: "VAAI-Block Thin Provisioning, HW Assisted Locking, Full Copy, Block Zero" VMW HCL SAN

*according to this article by Cormac Hogan, the HCL should list a "footnote" to support Automatic Unmap

Question: should I be expecting auto-unmap on this array?

VMware support pointed me to HPE. On the ESXi side, everything is OK (VAAI support detected, enabled, etc.).
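As a host-side sanity check, the per-device VAAI primitives (including Delete/UNMAP) can be read from "esxcli storage core device vaai status get". Below is a minimal parsing sketch; the device ID in the sample is a made-up placeholder, and the parser assumes the stock layout of a column-0 device name with indented attribute lines:

```python
def vaai_delete_status(esxcli_output):
    # Map each device name to its Delete (UNMAP) status from the output
    # of 'esxcli storage core device vaai status get'. Device names sit
    # at column 0; their attributes are indented beneath them.
    status, device = {}, None
    for line in esxcli_output.splitlines():
        if line and not line[0].isspace():
            device = line.strip()
        elif device and "Delete Status:" in line:
            status[device] = line.split(":", 1)[1].strip()
    return status

# Illustrative output only -- the device ID is a placeholder:
sample = """naa.0001122334455667
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: unsupported
"""
print(vaai_delete_status(sample))  # {'naa.0001122334455667': 'unsupported'}
```

If Delete Status shows "unsupported" for the MSA LUNs, that would explain why automatic unmap never fires regardless of the VMFS6 setting.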


ps: support case id 17508974807

thank you


HPE D3600 Enclosure

The problem is that the enclosure displays an error message on the POST screen: 4 TB SATA 512e HDD at Port 1E : Box 1

Could not be authenticated as a genuine HPE drive. The Smart RAID/HBA controller will not control the LEDs for this drive.

I have even replaced the drive to eliminate this message, but it still appears on the screen.

Could anyone offer a solution for this error?



HP 2012fc DC Modular Smart Array crashed controller and multiple drive failures

Hi, New here so be nice to me please..

We have a customer who has 2 Storage Works with the attached SAN Smart Array.

Last year one failed and we had to rebuild the smart array and re-archive everything.

Now the other has failed; the logs appear to show 6 failed drives and a controller. This seems too much of a coincidence to happen all at once. Is anyone aware of something that may have caused this, and how we could recover without having to re-install everything again?

The attached screenshots show 3 disks in leftover mode, the current firmware versions (J200P46 / W420R58 / 3206). There is also a log file covering the period of failure.

I know this is really a support question, but they are way out of warranty and the customer is reluctant to pay for a service agreement they may only use once.

Any ideas?

Many Thanks,



Scripting for faulty drive check

I have 10 HP MSA P2000 G3 arrays.

I monitor these arrays every day. Every day I have to log in to each array and check the enclosures, which takes a lot of time through the GUI.

So I want a single script that I can run whenever I want and get the results within a minute.

Can you give me an example like this so that I can make a script for these 10 arrays?

IPs are, like this 

user id : abcd

pw: abcdef

Can you please suggest a good example so that I can prepare a script?
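Not an official HPE tool, but here is a rough sketch of the kind of script you could build: loop over the arrays with ssh (assuming key-based logins are set up to the P2000 CLI; the IPs and user below are placeholders) and flag any "show disks" row whose health column is not OK. The row-matching heuristic is an assumption on my part, since the exact column layout varies by firmware:

```python
import subprocess

ARRAYS = ["192.0.2.11", "192.0.2.12"]  # placeholder IPs for your 10 arrays
USER = "abcd"                          # placeholder user ID from the post

def fetch_disk_report(ip):
    # Run the MSA CLI 'show disks' command over SSH.
    result = subprocess.run(["ssh", f"{USER}@{ip}", "show disks"],
                            capture_output=True, text=True, timeout=60)
    return result.stdout

def suspect_lines(report):
    # Flag rows that contain a disk location like '1.5' but no 'OK'
    # field -- a heuristic, since column layouts vary by firmware.
    def is_location(field):
        return field.count(".") == 1 and field.replace(".", "").isdigit()
    return [line for line in report.splitlines()
            if any(is_location(f) for f in line.split())
            and "OK" not in line.split()]

def main():
    for ip in ARRAYS:
        bad = suspect_lines(fetch_disk_report(ip))
        print(ip, "all disks OK" if not bad else f"{len(bad)} suspect disk(s)")
        for line in bad:
            print("  ", line)

# Call main() (e.g. from cron) to get a health summary of all arrays.
```

Running this on a schedule would replace the daily GUI round-trip with a one-minute summary.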


MSA2042: connecting a P2000 to it

Hi all


I need some help here.


We use VMware and we have 30 VMs.

1 x SBS 2008 (DC, File Server, Exchange is stopped since we moved to office365 and Wsus is off)

1 x SQL Server

1 x Application Server

1 x VoIp Server

1 x Firewall

25 x Win7 client workstations (simple: 3 GB RAM, 2 vCPU, 64 GB disks, thin provisioned)


We currently have an MSA P2000 G3 FC/iSCSI LFF that has on it:

8x600GB 15K SAS (Raid 10) for VM's

3x3TB 7.2K SAS (Raid5) for Data (no DB's or anything, just file serving)

3 HP servers (2 x DL80 G9, 1 x DL380 G6) are connected to it with 8Gb FC cards, and all are working fine. All 3 servers are basic with no special RAID controllers in them; they just have a system disk.


What we want to do is make the whole system a bit faster in terms of IOPS. After extensive reading, we found that the best value for money for our needs is SSD caching, and to achieve this we have 2 scenarios, since all-flash is still too expensive.

In both scenarios, we don't care about the current data, since when this happens we will do a backup and recreate all the RAIDs etc., and all the vCenter config as well. Basically, it will be a clean install on all servers and a restore of the VMs.


  1. Upgrade to an MSA2042 SFF that includes 2 x 400GB SSD; the MSA will connect the P2000 as a disk enclosure and will see and control the disks on it. Then we will set up the SSDs to do caching, and the overall speed will be improved. All the hosts will connect to the 2042 and disconnect from the P2000. Of course, we will also gain all the virtualization benefits of the new MSA generation.


  2. Upgrade the servers. Each server will get the latest RAID controller from HP (all 3 will get a P840/4GB) and 1 x 400GB SSD with HP SmartCache enabled. The 600GB disks of the MSA will go to the 3 servers (we will buy an additional 1 x 600GB so each server gets 3 x 600GB in RAID 5).
    The 3TB disks will stay in the MSA, and maybe we will add more 3TB disks to it if we need more file space.


So, the question is whether anyone can help... Between the 2 scenarios, in our minds the 1st solution is better, because we like the shared-storage side of things and the ability, in case of a host failure (unlike scenario 2), to run the failed host's VMs on the other 2. Also, we can add more hosts to it as we grow.
Of course, if the MSA fails then all 3 servers will go down, but since 2010, when we got the P2000, we never had any kind of trouble and it was rock solid, except that over all these years we changed 3 disks that failed. Also, we do daily backups of everything (offsite as well), so in a total loss we can do a full restore in under 6 hours.


Another thing that puzzles me in the technical document for the MSA2040/2042 is that it says it can have the P2000 as an enclosure (which is what we want to do in the 1st scenario), but apparently it cannot use the SSD drives for anything other than as simple disks; specifically, it says, "When using the P2000 G3 Storage Enclosure with MSA 2040 controllers, you will not be able to use SSD drives or have some of the performance benefits of the MSA 2040 Storage Enclosure."

What does that really mean? Does it mean that if I connect an SSD in the P2000, it cannot be used as cache (which is logical), or does it mean that in general, no matter whether the SSD is installed in the 2040/2042 or the P2000, it cannot be used as cache? If the second is true, then our whole plan in scenario 1 is pointless.

Some expert guidance or opinion is needed here :)


Thx in advance



Athens - Greece



Is the P2000 G3 FC only capable of SNMP v1?


We have an MSA P2000 G3 FC array, and I'd like to integrate it into HP SIM.

From what I can see, it only supports SNMP v1. Is that true?
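One way to check empirically what the array answers is to probe it with net-snmp from a management host. Here is a small sketch that builds the snmpget command for the sysDescr OID and tries v1 then v2c; the host and community string in the example are placeholders:

```python
import subprocess

SYS_DESCR_OID = "1.3.6.1.2.1.1.1.0"  # sysDescr.0

def build_snmpget_cmd(host, community="public", version="1"):
    # net-snmp invocation: snmpget -v1 -c <community> <host> <oid>
    return ["snmpget", f"-v{version}", "-c", community, host, SYS_DESCR_OID]

def probe(host, community="public"):
    # Return the first SNMP version ('1' or '2c') the target answers,
    # plus the sysDescr text; (None, "") if neither version responds.
    for version in ("1", "2c"):
        result = subprocess.run(build_snmpget_cmd(host, community, version),
                                capture_output=True, text=True, timeout=10)
        if result.returncode == 0:
            return version, result.stdout.strip()
    return None, ""

# Example (placeholder host): probe("192.0.2.20", "public")
```

If only the v1 probe succeeds, that would confirm the suspicion and tell you how to configure the HP SIM discovery credentials.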




Managed logs warning

Managed logs warning: A log region has reached its warning level and should be archived before log data is lost. (region: SC debug, region code: 5)



I am receiving this message continuously. I have cleared the events on both controllers and restarted the controllers, but I am still receiving this alert. Any suggestions to solve this problem?


Upgrade Model 4354R Enclosure with Single Bus I/O Modules

I purchased a used MSA1000 from a gov't surplus auction.  It came with an unused Model 4354R Enclosure.  I read in the manual that I can connect two enclosures so I purchased a second used enclosure.

I did not realize at the time that these came with dual-bus modules, requiring two SCSI cables to connect one enclosure. So I am not able to connect the second enclosure. It looks like I can purchase and install single-bus modules to allow one cable to access each enclosure.

Can I just purchase the I/O modules for the enclosure? Or would I need to change the EMU as well? Once I change the I/O modules to single-bus, I can then connect both enclosures, correct?

Thank you in advance for your time and assistance!


P2000 firmware URLs not working

I am trying to download firmware for a P2000 controller, but none of the download links work.

How can I get the correct URLs to download the update?




MSA 2312sa


The storage in question is equipped with 12 HDDs of two different sizes: 450 GB (x4) and 600 GB (x8).

There are two Vdisks:

- the first uses four 450GB HDDs

- the second uses seven 600 GB HDDs

The last 600 GB disk is used as a Global Spare.

One of the 450 GB HDDs failed, and the system replaced it automatically with the GS.

The customer replaced the failed 450 GB hard disk, which was then configured as a new GS.

I would like to restore this last disk as a member of the first vdisk and reconfigure the 600 GB disk as GS.

I don't know much about this storage, and I imagine that I should expand the vdisk by adding the 450 GB disk and then remove the 600 GB disk.

Is there a simpler solution?

Thank you









Error on MSA web interface


We have the following error on an MSA Storage P2000 G3:

In the web interface: A subcomponent of this component is unhealthy

PuTTY: show enclosure-status

FAN      01 OK        592267-002   CN8247T672           --
FAN      02 OK        592267-002   CN8247T673           --
PSU      01 OK        592267-002   CN8247T672           --
PSU      02 OK        592267-002   CN8247T673           --
Temp     01 OK        AW592B       CN8320M988           temp=39 C
Temp     02 OK        AW592B       CN8334N249           temp=36 C
Temp     03 OK        592267-002   CN8247T672           temp=33 C
Temp     04 OK        592267-002   CN8247T673           temp=37 C
Voltage  01 OK        AW592B       CN8320M988           voltage=11.86
Voltage  02 OK        AW592B       CN8320M988           voltage=5.03
Voltage  03 OK        AW592B       CN8334N249           voltage=11.86
Voltage  04 OK        AW592B       CN8334N249           voltage=5.03
Voltage  05 OK        592267-002   CN8247T672           voltage=11.98
Voltage  06 OK        592267-002   CN8247T672           voltage=5.06
Voltage  07 OK        592267-002   CN8247T672           voltage=3.38
Voltage  08 OK        592267-002   CN8247T673           voltage=12.01
Voltage  09 OK        592267-002   CN8247T673           voltage=5.05
Voltage  10 OK        592267-002   CN8247T673           voltage=3.38
Disk     01 OK        582938-002   2S6301C074           addr=0
Disk     02 OK        582938-002   2S6301C074           addr=1
Disk     03 OK        582938-002   2S6301C074           addr=2
Disk     04 OK        582938-002   2S6301C074           addr=3
Disk     05 OK        582938-002   2S6301C074           addr=4
Disk     06 OK        582938-002   2S6301C074           addr=5
Disk     07 OK        582938-002   2S6301C074           addr=6
Disk     08 OK        582938-002   2S6301C074           addr=7
Disk     09 OK        582938-002   2S6301C074           addr=8
Disk     10 OK        582938-002   2S6301C074           addr=9
Disk     11 OK        582938-002   2S6301C074           addr=10
Disk     12 Absent    582938-002   2S6301C074           addr=11

PuTTY: show system

Health: Degraded
Health Reason: A subcomponent of this component is unhealthy.
Supported Locales: English (English), Spanish (español), French (français), German (Deutsch), Italian (italiano), Japanese (日本語), Dutch (Nederlands), Chinese-Simplified (简体中文), Chinese-Traditional (繁體中文), Korean (한국어)

  Unhealthy Component
  Component ID: Enclosure 1, Controller A, CompactFlash
  Health: Fault
  Health Reason: The component is not present.
  Health Recommendation: - Replace the FRU containing this component.

  Unhealthy Component
  Component ID: Enclosure 1, Controller B, CompactFlash
  Health: Fault
  Health Reason: The component is not present.
  Health Recommendation: - Replace the FRU containing this component.

  Unhealthy Component
  Component ID: Enclosure 1, Controller B, management port
  Health: Degraded
  Health Reason: The network port's Ethernet cable is unplugged, or the network is not working.
  Health Recommendation: - Check whether the controller's network port is properly connected to the network.
  - If it is, check for network problems.

What can we do? I don't understand what's the problem.

Thank you!



Management controller failed


We have a shared storage array, SN 2S6938D039. Unfortunately we can't connect to it via web, SSH, or Telnet, so we will try to restart the management controller via the CLI port. On the controller, the CLI port interface is labeled RS232 micro-DB9.
Can anyone please share which cable reference I should use?



Reset disk status flag from failed to OK or Pred. failure

Just bad luck... 2 failing disks in one RAID 5 array. The system reported 2 failed disks during a reboot. After a power cycle and "F2" we were able to get the array back online, and the system started to rebuild 1 disk. I don't know how it selected that one, but during the rebuild the second bad disk gave a ton of read errors and the rebuild stopped... 2 disks marked failed.

Is there a way to reset that failure flag on the disk and trigger the rebuild of the 1st disk, accepting some read errors/failures?

Windows server with Smart Array P420i controller

=> ctrl slot=0 ld 2 modify reenable forced

Error: This operation is not supported with the current configuration. Use the
       "show" command on devices to show additional details about the  configuration.
Reason: Array status not ok

=> ctrl slot=0 show config

Smart Array P420i in Slot 0 (Embedded)    (sn: xxxxxxxxxxxx)

   Internal Drive Cage at Port 1I, Box 2, OK

   Internal Drive Cage at Port 2I, Box 2, OK
   array A (SAS, Unused Space: 0  MB)

      logicaldrive 1 (279.4 GB, RAID 1, OK)

      physicaldrive 1I:2:1 (port 1I:box 2:bay 1, SAS, 300 GB, OK)
      physicaldrive 1I:2:2 (port 1I:box 2:bay 2, SAS, 300 GB, OK)

   array B (SAS, Unused Space: 0  MB)

      logicaldrive 2 (2.2 TB, RAID 5, Failed)

      physicaldrive 1I:2:3 (port 1I:box 2:bay 3, SAS, 600 GB, Failed) {failed first}
      physicaldrive 1I:2:4 (port 1I:box 2:bay 4, SAS, 600 GB, OK)
      physicaldrive 2I:2:5 (port 2I:box 2:bay 5, SAS, 600 GB, OK)
      physicaldrive 2I:2:6 (port 2I:box 2:bay 6, SAS, 600 GB, OK)
      physicaldrive 2I:2:7 (port 2I:box 2:bay 7, SAS, 600 GB, Failed) {failed after 20 min }
      physicaldrive 2I:2:8 (port 2I:box 2:bay 8, SAS, 600 GB, OK, spare)  {was rebuilding bay3}

   SEP (Vendor ID PMCSIERA, Model SRCv8x6G) 380 

I know it's an older box and it will be replaced, but I just like to know if I can reset/force that.

And when can I use this command? It just doesn't seem to work as expected:

ctrl slot=0 ld 2 modify reenable forced
