HPE Storage Solutions Forum - Storage Area Networks (SAN) - Small and Medium Business
Something is wrong with the MSA 2040 performance tiering: I can only get about 5K IOPS out of it, while another SAN, a P2000 G3, provides 16K for the same workload.
Both SANs are FC, connected through HP 8/24 SAN switches. Can you help me diagnose the issue? I am running out of ideas.
We are using HP DL360p Gen8 servers as Hyper-V hosts for the virtualised workload.
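For what it's worth, a gap like 5K vs 16K on the same workload usually comes down to per-I/O latency, i.e. where the blocks actually land (SSD tier vs spinning tier, cache hit vs miss). Little's Law ties the numbers together; the figures below are illustrative arithmetic only, not measurements from either array:

```python
# Little's Law for storage: achievable IOPS = outstanding I/Os / average latency.
def iops(outstanding_ios, latency_s):
    return outstanding_ios / latency_s

# Illustrative only: at the same queue depth, per-I/O latency alone
# explains a 5K vs 16K gap.
print(f"{iops(16, 0.0032):,.0f} IOPS at 3.2 ms")
print(f"{iops(16, 0.0010):,.0f} IOPS at 1.0 ms")
```

So before digging into the fabric, I would compare average per-volume latency on both arrays under the same load; if the MSA's latency is several times higher, the question becomes why its tiering is placing (or keeping) the hot blocks on the slow tier.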
I am looking for the following HBA driver (Linux) to work with an HPE MSA 2040 SAN:
HP P/N: AJ764-63002
I would be thankful if you could direct me to the download page.
I have an HDD with P/N 627114-001 and GPN 507129-010. Is it compatible with my MSA2312sa?
We have a degraded Out Port on one of our SAN controllers and are also experiencing sluggish performance. Could anyone shed some light on how we can fix this?
I have attached a screenshot so you know what I mean. Below are the results of the show system and versions console commands:
# show system
Controller B Versions
We use an MSA2324fc with two controllers. The disks are running fine, but we are no longer able to configure anything. Via the SMU we get "The system is currently unavailable", and the CLI can't execute any config commands; we get "Unable to communicate with Storage Controller. Please retry the command." We are also unable to restart the SC, and a rescan doesn't help either.
The attached show configuration output looks very strange.
Is there any way we can solve the issue without downtime?
Hoping someone might be able to help. We have an HP MSA 2040 with auto-tiering; the disk groups, pools and volumes are configured like so:
4x 1TB SSD RAID5
9x 1.8TB 10k SAS RAID5
9x 1.8TB 10k SAS RAID5
All in a single virtual pool. In the virtual pool I have two volumes configured, Vol1 and Vol2, at 10TB each (or thereabouts), assigned as Cluster Shared Volumes (CSVs); the volumes are set to the No Affinity tiering preference, as per the best practices.
I am using Windows Server 2016 with Hyper-V and failover clustering. We currently have two nodes.
Our hosts are directly connected via two 10GbE NICs (one to each controller) on the same IP subnet. For testing purposes I have disabled one NIC and configured round robin as failover only. Jumbo frames are not configured, but even when they are, the performance difference is negligible.
Performance-wise, from a Hyper-V VM I use IOMeter to load a 20GB disk with a 4KB 100% sequential write access profile and get a pitiful 8K IOPS.
From the Hyper-V host I do the same and get better, but not by much: 18K IOPS. I know 4KB 100% sequential write is a lousy real-world test, but it should be one the SAN can easily fulfill up to around 80,000 IOPS from what I read.
I can't readily see any errors on the SAN or the host.
My question is: what the hell have I done wrong? :)
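Not an answer, but a back-of-envelope check may help frame the numbers. The per-device IOPS figures below are generic rules of thumb I'm assuming, not specs for these exact drives:

```python
# Back-of-envelope write-IOPS check. Per-device IOPS figures are
# generic rules of thumb (assumptions), not specs for these drives.

RAID5_WRITE_PENALTY = 4  # each host write costs ~2 reads + 2 writes on the back end

def raid5_write_iops(disks, per_disk_iops):
    """Approximate sustainable small-block write IOPS of one RAID5 group."""
    return disks * per_disk_iops / RAID5_WRITE_PENALTY

sas_tier = 2 * raid5_write_iops(9, 150)   # two 9-disk 10k SAS groups
ssd_tier = raid5_write_iops(4, 20_000)    # one 4-disk SSD group
link_iops = 10e9 / 8 / 4096               # one 10GbE link at 4 KiB per I/O

print(f"SAS tier   ~{sas_tier:,.0f} write IOPS")
print(f"SSD tier   ~{ssd_tier:,.0f} write IOPS")
print(f"10GbE link ~{link_iops:,.0f} IOPS at 4 KiB")
```

If the test file lands on the SSD tier, tens of thousands of write IOPS are plausible; if the writes spill to the SAS tier, a few hundred sustained back-end write IOPS is the real ceiling, with the controller's write-back cache masking it only in bursts. The 10GbE link is not the limiter either way.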
We had one controller failure; it has been replaced, but since then we have a problem with snapshot creation:
create snapshots volumes volume snap
The snapshot actually is created; I can see it using
show snapshots snap-pool pool-name
It can even be mapped to a host, but the host doesn't see it :-(
There are also frequent errors during snapshot deletion:
delete snapshot snap
And sometimes I get:
Error: The MC is not ready. Wait a few seconds then retry the request.
How can I solve this problem?
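The "MC is not ready" error is transient: the Management Controller can restart independently of the Storage Controller that serves I/O. If a script is driving the CLI, a simple retry loop papers over it while you chase the root cause. This is a hedged sketch, not MSA-specific code; `run_cli` stands for whatever transport you use to reach the array (e.g. an SSH call that runs `delete snapshot snap`):

```python
import time

def retry_mc(run_cli, attempts=5, delay=5.0):
    """Call run_cli() and retry while the Management Controller
    reports it is not ready (a transient condition)."""
    for attempt in range(1, attempts + 1):
        output = run_cli()
        if "The MC is not ready" not in output:
            return output
        if attempt < attempts:
            time.sleep(delay)  # give the MC time to come back
    raise RuntimeError(f"MC still not ready after {attempts} attempts")
```

That said, repeated MC errors after a controller swap often point to a firmware mismatch between the replacement controller and its partner, so comparing bundle versions on both controllers would be my first check.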
I'm trying to find a Windows XP 64-bit driver for the MSA 2040 chassis, to prevent Windows from prompting to install a driver each time the server is restarted.
We have a mix of Win7 x64, Win2012 R2 and several WinXP 64 servers, and only the XP servers are unable to identify the chassis. It's more of a cosmetic prompt, but it is annoying the customer.
I have an MSA 2040 and have set up a volume that I am trying to present to three ESXi 5.5 hosts. I have other iSCSI storage technology in my environment, so I am familiar with mapping the iSCSI initiator to the volume. However, with the MSA 2040, the only way the initiator seems to detect the volume is if I use the "All Other Initiators" option.
I have set up all my hosts with their respective initiators, and have even grouped them. I did come across the following in the best practices document:
The HPE MSA 2040/2042 enclosure is presented to vSphere as a fibre channel enclosure using LUN 0. This means all volumes mapped to the vSphere hosts or clusters must not use LUN 0. The SMU software doesn’t automatically detect this and for each mapping created, it defaults to LUN 0.
So when I created the mapping, I set the LUN to 1, and I am using a host group I created on the MSA. That did not make a difference. If I change the mapping to use "All Other Initiators", the hosts detect the volume during the rescan process.
I really hope someone can point me in the right direction on this.
I have a strange problem with an MSA 2040 FC at a customer site. Two Linux hosts are connected directly to this array. The array has two storage pools, with one volume configured on each pool. Multipathing is configured correctly, and I can see I/O in the volume statistics for both volumes, but I see I/O traffic only on ports A1 and A2, not B1 and B2.
Pool configuration is below. Volume configuration, I/O statistics and multipath output are in the attached files.
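Traffic only on A1/A2 usually means both volumes are actually being served by controller A: either both pools have A as owner, or the hosts are only using the A paths. It's worth checking the owner of each pool and what `multipath -ll` reports per path. A tiny parser like this makes it easy to count path states across hosts (the per-path line format assumed here is the usual dm-multipath one):

```python
import re
from collections import Counter

# Hypothetical sketch: count path states in `multipath -ll` output, whose
# per-path lines typically look like "  |- 3:0:0:1 sdb 8:16 active ready running".
def path_states(multipath_ll_output):
    states = Counter()
    for line in multipath_ll_output.splitlines():
        m = re.search(r"\bsd[a-z]+\b.*\b(active|failed)\s+(ready|faulty|ghost)", line)
        if m:
            states[(m.group(1), m.group(2))] += 1
    return states

sample = """\
mpatha (3600c0ff000...) dm-2 HP,MSA 2040 SAN
`-+- policy='service-time 0' prio=50 status=active
  |- 3:0:0:1 sdb 8:16 active ready running
  `- 4:0:0:1 sdd 8:48 active ghost running
"""
print(path_states(sample))
```

With ALUA, paths to the non-owning controller normally show as `active ghost`; if the volume on the B-owned pool has no `active ready` paths through B, I'd look at the mapping or the cabling of the B ports rather than at the array itself.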
I am thinking about replacing my MSA with a brand new one, probably the new 2052. Does anyone have any negative experiences with auto-tiering or virtual storage as a whole? Currently I am using a G4 model with just linear storage, which has never failed on me. But I think a lot of the code from the bigger EVA platform has made it into the MSA now; the techniques look very familiar.
That being said, I can use the SSDs for read cache or for performance tiering. Somehow I feel some hesitation towards performance tiering. This is my primary storage system, and it must be a set-it-and-forget-it system; I can't afford any downtime on this building block. Can anyone say something about the stability of the platform, especially comparing read cache vs. performance tiering? And can you even have two pools (one per controller) set up with a performance tier? I thought this was a limitation in the 2042, but I couldn't find anything for the 2052.
Then, supposing performance tiering is the way to go, how would you set up three enclosures if you do not want to spread disk groups over different enclosures? Or is it fine to do so? The first enclosure would have 4x SSD (performance-tier setup; the SSD disk group needs at least RAID 1, since it contains data). That leaves 20 unused bays in the first enclosure. Following the power-of-2 rule, I can add at most 9 HDDs to a disk group (8 data and 1 parity, RAID 5). This is per pool, so 18 HDDs in total, leaving me with 2 free bays. I can add two spares, one per pool?
The next enclosures would have a mix of SAS and MDL. Following the power of 2 here again leaves me with 6 free bays per enclosure, because 18 (2 x (8+1)) will be used by disks. So now what? Again 2 spares? That leaves me with 4 bays unused.
To summarize, I find it rather difficult to find a setup that I can expand per enclosure, like I do now with linear storage.
Am I missing something? I will be hosting a lot of VMs served by three DL380s on this storage array.
Any advice is welcome.
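The bay arithmetic above is easy to sanity-check, assuming the 24-bay SFF enclosures that the "4 SSD + 20 unused" count implies:

```python
# 24-bay enclosures assumed (as the "4 SSD + 20 unused" count above implies).
BAYS = 24

def free_bays(*group_sizes):
    """Bays left in one enclosure after placing the given disk groups."""
    used = sum(group_sizes)
    assert used <= BAYS, "layout does not fit in one enclosure"
    return BAYS - used

print(free_bays(4, 9, 9))  # enclosure 1: 4 SSDs + one 8+1 RAID5 group per pool -> 2
print(free_bays(9, 9))     # enclosures 2-3: two 8+1 groups each -> 6
```

That matches the counts in the post: 2 spare bays in the first enclosure and 6 free in each expansion shelf. One alternative people sometimes accept is a non-power-of-2 group width to eat the leftover bays, trading a little stripe-alignment efficiency for fuller shelves.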
I am trying to connect an HP MPX110 to a new Brocade SAN switch. This switch does not support loop connections, only F_PORT.
How can I change the "Connection mode" on the MPX110? The GUI combo box is grayed out, and the CLI does not show this command.
So, I am receiving emails every hour from our SAN (an HP MSA 2040). From what I found out about the message, it is not a big deal. I have downloaded all the logs and saved them, then tried to clear them out. Any thoughts on how to stop/fix this issue? See the error message below.
Event: Managed logs: A log region has reached the level at which it should be archived. (region: SC debug, region code: 5) EVENT ID: #A17 EVENT CODE: 400 EVENT SEVERITY: Informational EVENT TIME: 2017-07-18 20:52:05 Bundle A Version: GL220P009 MC A Version: GLM220P008-01 SC A Version: GLS220P08-01 Bundle B Version: GL220P009 MC B Version: GLM220P008-01 SC B Version: GLS220P08-01 Additional Information: None. Recommended Action: - No action is required
We are currently using an MSA2324i, with an MSA70 attached to it, for our storage needs. We will soon be transitioning away from it to another solution, but I'm really not sure what to do with it. Obviously, my first order of business would be to dispose of the disks in it, but once it is empty, is there any point in trying to sell technology that old? Is it worth trying to sell on eBay or somewhere else? Would it be of any use to run in a test environment, or somewhere it wouldn't be relied upon?
I had hoped to possibly reuse it at a DR site, but one of the main reasons I'm replacing it is the antiquated web gui.
Are there any suggestions on what to do with it?
When I use the command show vdisks, the number of disks shown is 8, but using the command show disks only 7 are displayed. Is there any explanation?
I have an HP MSA 2300i with controllers A and B. Currently there is a warning about an unhealthy controller and PSU. When I check from controller A, it shows B is unhealthy, and from controller B, it shows controller A is unhealthy. Could anybody help me with this?
There's a site consolidation where two MSA2423i arrays from two different sites are going to be lifted and shifted to a new third site. The host-to-storage connectivity is over iSCSI.
What are the prerequisites, precautions, prechecks, and procedures to be followed pre- and post-physical migration?
Note: The IP addresses are all going to change, but the number of hosts and the storage volumes isn't going to undergo any sort of change.
* all thin-provisioned VMFS volumes on an MSA2042
* all datastores are VMFS6 (thin provisioned), and all datastores have auto-unmap turned on (Low priority by default)
* VAAI counters such as MBDEL in ESXTOP remain at 0 unless I manually run "esxcli storage vmfs unmap" on the datastore, which recovers some, but not all, of the space. More importantly, I cannot run this anymore: it seems to have had such a performance impact that a few datastores went down (APD?!), so no more testing this until VMware confirms I can run it safely
* the MSA2042's best practices sheet (https://www.hpe.com/h20195/v2/GetPDF.aspx/4AA4-6892ENW.pdf) clearly states VAAI and T10 UNMAP compatibility
* the VMware HCL lists the 2042 as capable of "VAAI-Block Thin Provisioning, HW Assisted Locking, Full Copy, Block Zero" (VMW HCL SAN)
* according to this article by Cormac Hogan, the HCL should list a "footnote" to support Automatic Unmap
Question: should I be expecting auto-unmap on this array?
VMware support pointed me to HPE. On the ESXi side, everything is OK (VAAI support detected, enabled, etc.).
PS: support case ID 17508974807
The problem is that the enclosure displays an error message on the POST screen: 4 TB SATA 512e HDD at Port 1E : Box 1
could not be authenticated as a genuine HPE drive. The Smart RAID/HBA controller will not control the LEDs for this drive.
I have even replaced the drive to eliminate this message, but it still appears on the screen.
Does anyone have a solution for this error?
Hi, new here, so be nice to me please.
We have a customer who has two StorageWorks units with the attached SAN Smart Array.
Last year one failed and we had to rebuild the smart array and re-archive everything.
Now the other has failed; the logs appear to show 6 failed drives and a controller. This seems too much of a coincidence to happen all at once. Is anyone aware of something that may have caused this, and how we could recover without having to reinstall everything again?
The attached screenshots show 3 disks in leftover mode, the current firmware versions (J200P46 / W420R58 / 3206). There is also a log file covering the period of failure.
I know this is really a support question, but they are way out of warranty, and the customer is reluctant to pay for a service agreement they may only use the once.