HPE Storage Solutions Forum - HPE StoreVirtual Storage/LeftHand

Over-Provisioned and 100% Full Message When Adding Node

Hello all, I have a cluster with 5 P4500 G2 nodes using 86% of the space, so I'm trying to add another 4530 node to gain a little extra. I am able to add the node to the management group, but as soon as I add the node to the cluster I receive a warning stating the cluster is over-provisioned and 100% full. I would expect the space to be added to the cluster and the data to start striping, but this is not the case. I know that these are different models, but I have other clusters with the same mixture of models and they work fine together. All the nodes are running v12.6 of the OS. Any help is appreciated.


StoreVirtual: deleting a NIC bond pops up an error

Hi, we have an existing bond on a pair of StoreVirtual 4530s. The bond has 2 NICs, and I want to recreate it with 4 NICs. There is no option to add more NICs to an existing bond, so I guess I must delete the bond and recreate it.

The problem is that when I try to delete the bond, I get a popup with this message:

"Entry not deleted. Cannot delete bond. This Bond is set as the Lefthand OS interface and will not have active interfaces after deletion"

Any idea how I can delete this bond?



HP MEM module, stable now?

Wondering who is using the HP MEM driver (12.6) with ESXi 6.0 and later releases?

Reading through older posts, it sounded like the HP MEM module was a nightmare, so I'm wondering whether it is any more reliable at this point.

Are there real performance advantages to running it over the proven Round Robin method?


Thanks for the help
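
For anyone comparing MEM with Round Robin, a quick way to see what a host is actually using is the ESXi shell (a hedged sketch only; the naa device ID below is a placeholder, and plugin names can vary by MEM release):

# Confirm whether the HP/HPE MEM VIB is installed on this host
esxcli software vib list | grep -i mem
# Show which SATP and Path Selection Policy each device is claimed by
esxcli storage nmp device list
# Fall back to Round Robin for a single device if needed (device ID is hypothetical)
esxcli storage nmp device set --device naa.6000eb3xxxxxxxxxxxxxxxx --psp VMW_PSP_RR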


Reboot Single Node

Hi all,

I have a four-node P4500 G2 cluster in Network RAID-5, with LeftHand OS 12 on each node and ESXi 5.5 on the hosts. I discovered that one of the nodes has its RAID cache incorrectly set, and it is causing bad I/O latency.

Same issue as mentioned here:

My question is: can I reboot the single node via the CMC so I can get into the cache settings, without affecting the running VMs?



Add to an Existing Mgmt Group "stuck" on "Syncing configuration"


We moved our production storage over to 3PAR (yea!) and I was finally able to get around to repurposing four P4300 G2 nodes for our DR site last week. Since we were breaking up our existing cluster (discontinuing use of a pair of P4300 G1 nodes), I factory reset everything and updated the G2s to v12.5.x, etc. When I got them on-site I had some "goofiness", and I ended up deleting the cluster and management group and starting over. There were no volumes, so no biggie, right?

One of the nodes required us to remove it from the "old" management group from the console (the CMC and CLI weren't working). Once we did that, it finally showed up in the CMC under Available Systems (and not as the "old" management group). Meanwhile, I had already created a new management group with the three "good children" nodes, and all I had left was to add the fourth node.

It has now been sitting at "Syncing configuration" during the add-to-management-group process for about 4 hours. I have not created any volumes or even been able to attempt adding it to the cluster.

These units are currently not under a support contract.




StoreVirtual MultiPath extension module


After installing the latest StoreVirtual MEM, there are a lot of these warnings in the vmkernel log of our ESX servers (5.5 U3):

-> Warning: char_driver: ALL DONE (2016-10-17T17:41:24.007Z cpu1:34768)

Is this normal behavior? Does anyone have experience with StoreVirtual MEM on ESX?

Thanks for the help!
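
To gauge how often that message fires, the vmkernel log can be filtered directly on the host (a minimal sketch; /var/log/vmkernel.log is the default ESXi 5.5 location):

# Count the MEM char_driver warnings in the current log
grep -c "char_driver" /var/log/vmkernel.log
# Watch them arrive in real time
tail -f /var/log/vmkernel.log | grep -i char_driver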


Cannot get updates

I tried getting the latest updates this weekend and got the following errors:

File not found:-Patch 45019_00_release_note.html

File not found:-Patch 45020_00_release_note.html

File not found:-Patch 50016_00_release_note.html

File not found:-Patch 50017_00_release_note.html

File not found:-Patch 55016_00_release_note.html

File not found:-Patch 56002_00_release_note.html

Has anyone else got the same?

I do have the latest version of the software, which uses HTTPS for downloads.


StoreVirtual 4330 with SFP+ connection direct-connected to HP DL390 SFP+

Hi guys,

I have 2 StoreVirtual 4330s and 2 DL390s.

I have set them up with RAID 10 / ALB and LACP and am getting nice throughput (4 x 1 Gbit NICs) via iSCSI.

I have just purchased 4 SFP+ cards, installed them, and have had a go at configuring the SANs through the SFP+ connection, but have had no luck. All I can get to is the management console through one of the SFP+ NICs. I have the DAC cables connected, etc.

I set static IPs on all interfaces with no gateways.

I would like to achieve this without buying an SFP+ switch.

Can anyone please point me in the right direction?

Many thanks



VSA performance and diagnosis (Hyper-V)


I'm experiencing trouble with my new 3 x 4 TB VSA nodes in Hyper-V, in Network RAID-10. All three VSA nodes are connected by one 10GbE interface each, and the Hyper-V hosts are connected by two 1GbE interfaces using MPIO (HP DSM installed). Everything is on one physical switch (S5700), separated by VLAN from all other traffic; jumbo frames are not enabled yet. At first all seemed to be OK, but recently we discovered that the performance of the VSA cluster had dropped, and we moved all our VMs back to local storage. I do not have test results from before the drop, but I think speeds were OK. The thing is, I cannot tell what we did, because we did a lot of things and I'm not sure which could have caused the problem: we updated the VSAs from 12.5 to 12.6, made several updates to the Hyper-V hosts and to SCVMM, swapped virtual switches several times, and enabled and later disabled NPAR; there was also a power failure once. The VSAs were installed on the Hyper-V hosts before I added them to SCVMM.

At this point i get:

80-100 Mbit/s with an iperf test between a VSA node and a host.
250 Mbit/s with an iperf test between VSA nodes.
Up to 1 Gbit/s load on one of the 1GbE host interfaces with diskspd (it seems that MPIO does not aggregate several interfaces and uses them only if the primary one fails).
diskspd.exe -b256k -d60 -o8 -t16 -h -r -w25 -L -Z1G -c5G C:\ClusterStorage\Volume1\iotest.dat gives 60-160 MB/s and 1,000-1,300 IOPS, while local storage gives 370 MB/s and 3,000 IOPS, so I would expect the VSA cluster to deliver around 400 MB/s and 4,500 IOPS.
I also have free 3 x 1 TB VSAs for testing. What I noticed is that the free nodes have NICs named "Microsoft corporation Hyper-v VGA", while the 4 TB nodes' NICs are named "Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (AGP disabled)". (Sadly, the free nodes do not have the network diagnostic with iperf.)
The questions are:

Can I really expect 400 MB/s and 4,500 IOPS from the diskspd test, or do my current results correspond with the hardware?
What else should I use to diagnose these performance issues and find the bottleneck? (A quick MPIO check is sketched below.)
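
On the MPIO point, it may be worth confirming which DSM actually owns the StoreVirtual LUNs and what load-balance policy is in effect (a hedged check from an elevated PowerShell prompt on a Hyper-V host; with the HP DSM installed, the second cmdlet only reports the Microsoft DSM default, so treat it as a hint):

# List MPIO-claimed disks and the load-balance policy applied to each
mpclaim -s -d
# Default load-balance policy used by the Microsoft DSM
Get-MSDSMGlobalDefaultLoadBalancePolicy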


VSA High Latency

I'm new to StoreVirtual VSA and have some strange latency issues with a newly installed VSA environment.

Setup: 2x DL380 Gen9, 3x 480 GB SSD, 5x 900 GB SAS, ESXi 5.5 U3 with the latest patches, 10 Gbit/s iSCSI network.

Two VSAs with Adaptive Optimization (AO) enabled for all volumes, the latest version of MEM for 5.5, and Delayed ACK disabled.

My problem: the read and write latencies on ESX server 2 / VSA2 are always very high. The average is more than 10 ms, with spikes of more than 400 ms. If I migrate the virtual servers from ESX2 to ESX1, the latency goes down to an average of 1-2 ms and the spikes are at most 10-20 ms.

Any help would be appreciated. Thanks.
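
One way to narrow down where the extra latency on ESX2 is being added is esxtop, run in the ESXi shell while the problem is occurring (a diagnostic sketch only, not a fix):

esxtop        # press 'u' for the disk-device view
              # DAVG = latency at the device/array, KAVG = latency added in the
              # VMkernel, GAVG = total latency as seen by the guest
# If DAVG is high only on ESX2, suspect the iSCSI path/network towards VSA2;
# if KAVG is high, suspect queuing or multipathing on the host side.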


StoreVirtual updater


I'm attempting to update the StoreVirtual CMC and I'm getting a warning about an HTTPS error. This doesn't make sense, though, because these updates ran fine about a month ago.

The upgrades.xml file points to here:

There's enough drive space

The firewall hasn't been edited

Proxy hasn't been changed.

I deleted all the contents of the download folder

This has happened before and a workaround was posted at this link, but the link doesn't work now:




StoreVirtual VSA IP address

Can someone advise me: does the StoreVirtual VSA management IP address need to be in the same range as the iSCSI / virtual IP address? Also, do we need to assign two management IP addresses for the StoreVirtual VSA?



P4500 G2 Upgrade OS to 12.5

Hi all,

I have two P4500 G2 SANs and installed the latest CMC (12.6), but there is no option to upgrade?

Also, when I select staying with the current OS and look for patches, it asks to upgrade the 10Gb cards. How do I do this with two nodes? It looks like it requires a reboot?

This is an ISO file. I have also forgotten the iLO username and password.
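
On the forgotten iLO credentials: LeftHand OS itself does not expose the usual HP tools, so this assumes you can boot the node into a maintenance OS (or otherwise run something in-band) with hponcfg installed. hponcfg pushes an RIBCL script through the host interface and does not need the current iLO password. A hedged sketch; the file name and new password are placeholders:

<!-- reset_admin_pw.xml: set a new password for the built-in Administrator account -->
<RIBCL VERSION="2.0">
  <LOGIN USER_LOGIN="Administrator" PASSWORD="unused">
    <USER_INFO MODE="write">
      <MOD_USER USER_LOGIN="Administrator">
        <PASSWORD value="NewPassw0rd!"/>
      </MOD_USER>
    </USER_INFO>
  </LOGIN>
</RIBCL>

# Then, from the OS running on that node:
hponcfg -f reset_admin_pw.xml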




Best method to swap out old HP P4000 for new P4530 in a cluster with existing P4500 G2


I have an existing cluster with 4 nodes in it, 2 P4000 and 2 P4500 G2, and I have purchased 2 P4530 to replace the P4000. Because of the version the P4000 are running, I can't just add the P4530 to the management group and cluster and then remove the P4000 after the array rebuilds; the failover manager and the P4500 G2 are on different versions as well. I'm trying to determine what my options are for getting the new units into the cluster. I have 14 TB provisioned, so I don't think I will have enough space to use the cluster swap.

Any ideas?




Location to run the HP SV VSA 2014 appliance

We have got 4 HP servers, each with 16 x 900 GB disks and 4 x 2 TB disks for VSA storage. Each host runs ESXi 5.5 from an SD card. The servers have an HP Smart Array P440ar card. What is the best practice for installing or running the HP VSA appliance: on a USB stick or a separate RAID disk (e.g. RAID 1)?

We were trying to partition a volume via the HP Smart Array, but it's not allowing us; the only option is to use 2 x 900 GB disks in RAID 1, add that to VMware, and install the appliance there, which is a bit of a waste.



Bizarre VSA Upgrade Behavior - 12.5 to 12.6


Running the upgrade from CMC 12.6 on a 2-node 12.5 cluster with a FOM.

The platform is a 2-node vSphere 5.5 U3b setup on DL380 Gen9.

The upgrade proceeds normally until it comes time to reboot the first VSA. Instead of rebooting, the VSA just locks up. After waiting ages, I resorted to a manual reboot, which brings it back; however, a duplicate VSA now appears in the management group, and the storage services of the VSA that is still showing in the cluster no longer start.

The only way I have found to get this back without losing all the volumes is to set the cluster storage system to repair, remove the duplicate storage system of the same name from the management group, re-add it to the management group, and then exchange the storage systems. However, this triggers a very long and time-consuming restripe.

Has anyone ever seen behaviour like this before? It is repeatable!



Unrecoverable I/O error

I had one of two VMware hosts go down. Last night I rebuilt it, the datastores, and the VSA node. The shared datastore started restriping and I went to sleep. This morning I woke up, the restripe had finished, and all of the VMs are up and running, but I have an error showing in the CMC: Volume x in cluster y has unrecoverable I/O errors.

How do I fix this?

I have two VMware hosts running ESXi 6.0, two VSA nodes running 12.6, and a FOM also running 12.6.



Adding more disks to HP VSA nodes

Quick question, guys: I'm trying to add more disks to our HP VSA nodes, and I can't seem to find any good documentation on how to complete this.

The new disks were added to the physical server, the datastores were created, and the VMDKs were added to the HP VSA virtual machines. I can now see the disk in the CMC, but it shows as uninitialized. I don't have an option to just right-click and Add to RAID; my only option is Add Storage, and then I get a warning about adding this disk to RAID and that it may take a long time. I'm not sure if that is the correct option?

Also, these new disks are SSDs and I want to set them as Tier 0; I believe I can do that after finally getting the disks added in the CMC?

This all seems like pretty straightforward stuff, but with the lack of basic admin guides for this configuration, I wanted to see if I can verify my process. Attached is a screenshot of what I see in the CMC.


CMC won't log into VSA nodes, why?

Does anyone have any advice about how networking should be configured to enable the CMC to log into the StoreVirtual VSA nodes?

The VSA appliances have their own vSwitch in each vSphere host, and the networking is all correct for the hosts/VMs to use the VSA-based storage. The VSA appliances have two NICs each: one for the local subnet, one for the iSCSI subnet.

BUT: the CMC on the local subnet cannot log into the iSCSI nodes without *manually* having to *find* the local-subnet IPs every time. This is not right.

And HP tech support seems clueless about how to fix this.

I also need this fixed so I can set up and use a quorum witness instead of a FOM.

Is there any documentation that *specifically* explains how things are supposed to be networked (switch, firewall, vSphere, etc.) so the CMC will properly log into the VSA nodes from the local subnet?

What settings on the switch or elsewhere are required for communication from the CMC to the VSA nodes?

The local subnet is 172.31.x.x; the VSAs are in VLAN 20 on the switch/vSphere, in the 10.10.100.x subnet.

I looked at HP's network/vSphere documentation; it mostly focuses on hardware, not the software-defined StoreVirtual VSA.

Any examples would be helpful; we have a 2-node VSA setup with a FOM.

Note: the storage VSAs do work; communication from the 172 subnet to the 10 subnet is what's needed so I can set up a quorum witness.

Thank you, Tom
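
As a first step, it may help to prove basic reachability from the CMC workstation to a VSA's management IP across the routed boundary (a minimal sketch; the VSA address below is a placeholder on the 10.10.100.x subnet, and the port number is the one commonly listed for SAN/iQ CMC traffic, so verify it against HPE's published port list for your LeftHand OS version):

# From the CMC workstation on 172.31.x.x
ping 10.10.100.11
# TCP check through any routing/firewall between the subnets
Test-NetConnection 10.10.100.11 -Port 16022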


LeftHand P4300 G2 conflict between Update and Compatibility Matrix


On a P4300 G2 cluster, my CMC indicates that I need to patch my HP P410 storage controller firmware to version 6.64 (update 10166-00) and my iLO 2 firmware to version 2.27 (update 10184-00).

The problem is that the compatibility matrix indicates:

RAID controller recommended firmware version 5.70 and iLO 2 version 2.12.

So why is the compatibility matrix telling me not to update while my CMC says otherwise, and which one should I choose?

Thank you.
