Running the upgrade from CMC 12.6 on a 2-node 12.5 cluster with a FOM.
Platform is a 2-node vSphere 5.5 u3b on DL380 Gen9.
Upgrade proceeds normally until it comes time to reboot the first VSA. Instead of rebooting, the VSA just locks up. After waiting ages I resorted to a manual reboot, which brings it back; however, a duplicate VSA now appears in the management group, and the storage services of the VSA that is still showing in the cluster no longer start.
The only way I have found to get this back without losing all the volumes is to set the cluster storage system to repair, remove the duplicate storage system of the same name from the management group, re-add it to the management group, and then exchange the storage systems. However, this triggers a very long and time-consuming restripe.
Anyone ever seen behaviour like this before? It is repeatable!
I had one of two VMware hosts go down. Last night I rebuilt it, the datastores and the VSA node. The shared datastore started re-striping and I went to sleep. This morning I wake up and the drive has finished restriping and all of the VMs are up and running, but I have an error showing in the CMC: Volume x in cluster y has unrecoverable I/O errors.
How do I fix this?
I have two VMware hosts running ESXi 6.0 and two VSA nodes running 12.6 and a FOM also running 12.6.
Quick question, guys: I'm trying to add more disks to our HP VSA nodes, and I can't seem to find any good documentation on how to do this.
The new disks were added to the physical server, the datastores were created, and the VMDKs were added to the HP VSA virtual machines. I can now see the disk in the CMC, but it shows as uninitialized. I don't have an option to just right-click and Add to RAID. My option is Add Storage, and then I get a warning about adding this disk to RAID and that it may take a long time. Is that the correct option?
Also, these new disks are SSDs, and I want to set them as Tier 0. I believe I can do that after finally getting the disks added in the CMC?
This all seems like pretty straightforward stuff, but given the lack of basic admin guides for this configuration, I wanted to verify my process. I've attached a screenshot of what I see in the CMC.
Does anyone have any advice on how networking should be configured so the CMC can log into the StoreVirtual VSA nodes?
The VSA appliances have their own vSwitch in each vSphere host, the networking is all correct for the hosts/VMs to use the VSA-based storage. The VSA appliances have two NICs each, one for local subnet, one for iSCSI subnet.
BUT!! The CMC on the local subnet cannot log into the iSCSI nodes without my *manually* having to *find* the local subnet IPs every time. This is NOT right.
And HP tech support seems clueless about how to fix this.
AND I need this fixed so I can set up and use a quorum witness instead of a FOM.
Is there ANY documentation that *specifically* explains how things are supposed to be networked with regard to the switch, firewall, vSphere, etc., so the CMC will properly log into the VSA nodes from the local subnet?
What settings on switch or elsewhere are required for communication from the CMC to the VSA nodes??
Local subnet is 172.31.x.x, VSAs are in VLAN 20 on the switch/vSphere in 10.10.100.x subnet.
I looked at HP's network/vSphere documentation, but it mostly focuses on hardware, not the software-defined StoreVirtual VSA.
Any examples would be helpful, we have a 2-node VSA setup with a FOM.
Note: the storage VSAs do work; communication from the 172 subnet to the 10 subnet is what's needed so I can set up a quorum witness.
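One generic way to sanity-check the routing/firewall path before involving the CMC at all is a plain TCP reachability probe from the CMC machine toward the VSA node IPs. This is only a sketch: the IP addresses below are hypothetical stand-ins for your VSA management addresses, and the port number is a placeholder, not a confirmed StoreVirtual management port — substitute whichever port your version actually uses.

```python
import socket

def tcp_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Hypothetical VSA IPs on the iSCSI VLAN; replace with your own.
    for ip in ("10.10.100.11", "10.10.100.12"):
        # Placeholder port -- substitute the management port your
        # StoreVirtual version actually listens on.
        state = "reachable" if tcp_reachable(ip, 16022) else "unreachable"
        print(ip, state)
```

If the probe fails from the 172 subnet but works from a machine inside VLAN 20, the problem is inter-VLAN routing or a firewall rule rather than anything in the CMC itself.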
Thank you, Tom
On a P4300 G2 cluster, my CMC indicates that I need to patch my HP P410 storage controller firmware to version 6.64 (update 10166-00) and my iLO 2 firmware to version 2.27 (update 10184-00).
Problem is, the compatibility matrix indicates:
RAID controller recommended firmware version 5.70 and iLO 2 2.12.
So why is the compatibility matrix telling me not to update while my CMC says otherwise, and which one should I follow?
Just wanted to run something by you guys and girls. I have a 10-node stretched cluster between two datacentres. This management group has the correct number of managers and a FOM. I want to shut down the entire management group and the 3 clusters inside it so I can perform a fibre upgrade on the backend switches. Replacing this fibre will break the connection between the two datacentres, so best practice will be to shut down.
The environment only hosts volumes with virtual server data in them. I was speaking to an HP support engineer who said I should do the following:
1. Shut down all the VM guests.
2. Shut down hosts or place them in maintenance mode.
3. In the CMC, unassign servers from each volume in each cluster in the management group.
4. Shut down the SAN.
5. Shut down/perform maintenance on the network LAN/SAN switches.
6. Power on the network switches.
7. Power on the SAN.
8. Take the VM hosts out of maintenance mode or power them on.
9. Power on the VM guests.
My only questions about the above: for point 3, as long as all guests have been shut down, that should break the iSCSI connections, so do I still need to remove access to the volumes in the CMC? And for point 4, is there a required order for shutdown and startup, or, since we will have shut down all guests and no writes will be made to the volumes, can we shut down and start up in any order?
We have 2 x P4300 ( BK716A ), with 8 x 450GB SAS disks in each.
We are running out of space and I would like to replace the disks with 8 x 2 TB disks ( 507616-B21 ).
Does anybody know if this is possible? I have the SAN/iQ re-install disks, and I am aware that I will have to configure from scratch, but I wonder if there are limitations on what I can upgrade in these boxes.
Thank you for your help
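As a rough sanity check on the capacity jump being proposed — raw disk space only, ignoring the node's hardware RAID level and any Network RAID replication across the two nodes, both of which reduce usable space considerably:

```python
def raw_capacity_gb(disk_count, disk_size_gb):
    """Raw (pre-RAID) capacity of one node's disk set, in GB."""
    return disk_count * disk_size_gb

# 8 x 450 GB SAS today vs. 8 x 2 TB (2000 GB) proposed, per node:
before = raw_capacity_gb(8, 450)     # 3600 GB raw per node
after = raw_capacity_gb(8, 2000)     # 16000 GB raw per node
print(f"raw per node: {before} GB -> {after} GB")
# Usable space will land well below these figures once hardware RAID
# and Network RAID overheads are applied.
```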
I have a LeftHand infrastructure based on 2 HP P4xxx clusters and several storage nodes.
The host currently running the CMC (v12.5) should be converted to a VM using VMware Converter.
How can I ensure that the CMC and, even more importantly, the LeftHand clusters will keep working and will not be damaged in any way?
Is there a best practice to follow?
Should I migrate the CMC to a different host before virtualizing the current one?
I have been looking for a test server to run up a heap of VMs on, and we got offered an HP P4300 G2. I can only seem to find documents where the storage on this device is shared to other servers, plus lots of clustering information.
Am I able to install vSphere Hypervisor 6.0 on it or maybe just an older version?
My hope is it will work the same as a standalone DL380 server would with vSphere installed.
Any other suggestions or information would be great too.
Hi Folks -
Have an old 5-node DL320S Lefthand cluster running SANiQ 9.5. At the suggestion of our hardware maintenance provider, we attempted to deploy Patch set 05 (25050-02) Saturday night. Along with that, we deployed a couple other patches that were available.
Sadly, the upgrade failed towards the end and left one of our nodes in an 'Unknown' state in the CMC. Additionally, at the console, after you hit 'Login', we are presented with 'Cannot log in to the console because the storage system has not yet fully initialized. Wait a minute and try again.'
On a good note, the node boots enough so that all of the volumes (all Network RAID 10) re-sync without issue, so we are confident it's not totally hosed. We even get alerts saying the node is up!
Additionally, the node in the 'Unknown' state says it is on software version 9.5.00.1237, where the other four are all on 9.5.00.1215.0. Seems like one of the patches it was trying to install bumped the version number, and that's probably where it puked.
At this point, we are considering removing that node from the cluster, rebuilding it from scratch, and re-admitting it to the cluster. We have plenty of excess capacity, and the rebuild process looks pretty simple.
Does anyone have any suggestions on things we can try prior to the removal/rebuild?
I have a problem licensing LeftHand OS 11.5 on a StoreVirtual 4130. I have a feature key, but when I try to install it, it says the key is too long; sometimes it says it could not read the feature key from the USB device.
Can anyone please help me?
We recently purchased a Gen9 server to use for SAN storage. The plan is to install LeftHand OS 12.6 on the server. The instructions reference the installation of virtualization software such as Hyper-V or VMware in coordination with StoreVirtual to create a storage environment. Apparently I'm not quite understanding the architecture or concepts of HPE StoreVirtual. Is the installation of a virtual environment necessary for StoreVirtual, or can the LeftHand software be installed on the server without virtualization software? Any assistance would be greatly appreciated.
Hi to all
I have a problem logging in to 2x P4300 servers in the same management group.
One has RAID status OK; the other, after booting, has status unknown. But they both answer pings from the CMC server!
In the same CMC I have other P4300 and P4530 clusters, and they are working fine!
When I try to log in, I receive an error: Connection reset.
Or, on the one I rebooted: Could not find storage system with serial XX:XX:XX:XX:XX:XX.
Can someone help me?
We have just purchased two StoreVirtual 4530s (PN: B7E26B) and noticed they come shipped with a dual-port 10GbE SFP network card. According to the QuickSpecs sheet, the network card is the NC552SFP model. The original plan was to connect the 4530s using the integrated four-port 1GbE network card; however, as we have some of the new Netgear M4300 switches (which include both 10GbE SFP and 10GbE copper ports), we are now looking at connecting the 4530s over 10GbE.
We have spare Netgear 10GbE SFP+ single-mode transceivers (PN: AXM762). Would these work in the NC552SFP network card, or would we need to purchase HP-branded SFP modules?
The 4530s and the Netgear fibre switches are no more than 3 metres apart from one another. If the Netgear SFP transceivers aren't an option, is there an HP 10GbE DAC cable we can purchase (to connect to either the 10GbE SFP or 10GbE copper ports) on the Netgear switches?
I have a P4300 G2 on which I have configured a management group that connects to my ESXi host... the only trouble is that I have forgotten my account details. How would I go about password recovery? Would this be a case of contacting HP support (not ideal, as the product is out of warranty) or using the HPE StoreVirtual Quick Restore DVD?
I have tried the "SHIFT + LHN" method but this made no difference...
Any info anyone can give would be really appreciated...
Under the feature registration tab, it says the Instant-On license expires at a certain date. What will happen after that date? Will my storage lock up?
I recently built up a new three-node HP VSA (LeftHand OS 12.6) cluster, but I had to run it with only two nodes for a couple of weeks while transitioning. In that interim, I ran a FOM to maintain quorum until the third node was in place.
I made the mistake of not cleanly removing the FOM before the server was brought in, and now it's in a "Manager Offline" status, and I don't see any way to remove it from the CMC. Is this something I am going to be able to take care of, or will I need to involve HP support? I have a 3-year license that should cover software support for it, but I'm not sure what my serial number is or where to find it, since I never physically received software and the order e-mails don't show a serial; that has complicated my attempts at getting support (I was just hung up on and am pretty frustrated with HPE support right now).
After the most recent round of security updates for the StoreVirtual system and the attached SANs, the SNMP service is not reporting back to our monitoring systems. Has anyone else seen this?
I have a LeftHand 4330. I created one 5 TB volume with thin provisioning.
In VMware I added a new iSCSI adapter, but when I try to migrate a host to the new storage I get the error: VMname/VMname.vmdk is larger than the maximum size supported by datastore. I also have another storage array from QNAP, and it works normally.
What do I need to change to be able to migrate the machines?
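For reference, the usual cause of the "larger than the maximum size supported by datastore" error is the datastore's VMFS block size (or an older VMFS/ESXi version limit), not the array. The sketch below encodes the well-known VMFS-3 block-size limits; `vmkfstools -Ph` on the ESXi host will show an existing datastore's block size so you can compare.

```python
# VMFS-3 maximum file (VMDK) size by datastore block size, slightly
# simplified (the exact limits are 512 bytes less than shown).
# VMFS-5 uses a unified 1 MB block; its VMDK limit is 2 TB on ESXi 5.1
# and earlier, raised to 62 TB from ESXi 5.5.
VMFS3_MAX_FILE_GB = {1: 256, 2: 512, 4: 1024, 8: 2048}

def fits_on_vmfs3(vmdk_size_gb, block_size_mb):
    """Return True if a VMDK of the given size fits on a VMFS-3
    datastore formatted with the given block size (in MB)."""
    return vmdk_size_gb <= VMFS3_MAX_FILE_GB[block_size_mb]

# A VMDK sized to fill a 5 TB (5120 GB) volume exceeds every VMFS-3
# block size -- even 8 MB blocks cap files at 2 TB:
print(fits_on_vmfs3(5120, 8))
```

If the datastore is VMFS-3 (or VMFS-5 on a pre-5.5 host), the fix is to create a datastore with a larger limit (e.g. VMFS-5 on ESXi 5.5+) or use smaller VMDKs, rather than changing anything on the 4330 side.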
We have a 4-node P4300 G2 SAN with 3 of the nodes running software version 11.5.00.0673.0, but one of the nodes somehow failed during the upgrade process and is stuck at 10.5.00.0149.0. Every time I try to "Check for Upgrades", I get an error connecting to the FTP server. If I select Use Local Media, the whole process crashes with the following message: The storage system "10.1.3.100" running a manager has disconnected. The management group TCP connection timed out after 60 seconds. You will be logged out.
Aside from running a recovery on the one node, is there any other way of upgrading the software? Thanks!