HPE Storage Solutions Forum - HPE StoreVirtual Storage/LeftHand
I was having some trouble upgrading the VSA storage capacity.
So I wanted to reinstall the VSA virtual machine and build it from the ground up again (then restripe and do the same for host 2).
I put the machine in maintenance mode, deleted it, installed a new one, and gave it the old IP and MAC address.
But instead of replacing the old one in repair mode, a new host was created.
Now the extra host is useless and can be deleted, but I cannot figure out how to remove the one stuck in repair mode.
Can someone please give me some advice?
Has anyone gone through the process of upgrading an existing cluster to 10 Gb? We have installed the 10 Gb card and RAM, but the process of changing the IP is pretty vague in the 10 Gb upgrade documentation that I've been able to locate.
Our existing environment consists of 2 P4500 G2 nodes running LHOS 12.5, and a FOM running 12.6. The nodes have their two onboard NICs bonded as bond0. All communication occurs on the 192.168.160.x subnet.
What I'd like to do is bond the two new 10 Gb NICs together (bond1) and either use that for all communication on the 192.168.160.x subnet, or use bond1 for LHOS and iSCSI and bond0 for management on the 192.168.170.x subnet.
My first attempt was to assign a new 192.168.160.x IP to bond1 with no gateway specified. When I did so, I found that I couldn't ping or communicate with that IP. Moving the LHOS and management to bond1 didn't reestablish connectivity, so I stopped at that point because I was afraid that I wouldn't be able to manage the node via the CMC.
For my second attempt, I went through the same process, but I assigned a different subnet that the CMC could still communicate with. This prevented the CMC warning about the bonds being on the same subnet and allowed me to manage and ping the new IP. However, as I began moving services to bond1, I received a warning that all of my volumes would become inaccessible until the node became discoverable again.
At this point, I'm thinking that my safest option is to add a third node to the cluster and then remove one node at a time from the cluster and management group, work through the IP changes, and then add the node back. This sounded pretty safe to me, as long as we were OK with the restriping that would take place.
Is there another approach that someone could recommend? I would be willing to take a maintenance window and shut down the whole management group, but in the lab I never found a way to put the cluster into some type of maintenance mode and still be allowed to make IP changes.
Any comments would be welcomed.
Really hoping someone can give me a few ideas on what to do here... Background: We had a switch fail that two of our four nodes, and our FOM, were connected to.
Once this was fixed, one of the nodes couldn't be connected to: "Login failed. Read timed out". All nodes pinged fine except the one having the problem; it would not respond to jumbo packet pings. It did respond to default-size packet pings, so I changed the packet size on the CMC machine to default and successfully logged on. I checked the switch port and jumbo packets were enabled, so I presumed it must be an issue with the NIC on the node. I 'repaired the system' to take it out of the management group and resync the config. This got stuck for a couple of hours, so I cancelled it.
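(For anyone checking the same thing: a don't-fragment ping at near-jumbo size tests the whole 9000-byte path end to end; 8972 is 9000 minus the 28 bytes of IP/ICMP headers. The node IP below is a placeholder.)

ping -f -l 8972 <node-ip>      (Windows CMC machine: -f sets don't-fragment, -l the payload size)
ping -M do -s 8972 <node-ip>   (Linux equivalent)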
The SAN is still up but now I cannot connect to any of the four managers. They all just sit at "Waiting 60 seconds to connect and log in..." then bring up the same "Login failed. Read timed out" error.
I've tried rebooting the server with the CMC on it multiple times, I've tried switching jumbo frames on/off, and I've tried connecting to each manager in turn. I'm just not getting a response from any of them.
Any tips on how I can get these working? Reboot one or all of the nodes? I don't really want to switch them off/on in case I lose access to the data. These hold all our business-critical data, so I can't afford for it to be down. Fortunately I do have the option to migrate all VMs to a different SAN (it'll just take days), and then I could trash everything and start from scratch if need be.
We have an existing cluster of (2) P4500 G2 systems running LHOS 12.5, with a FOM running on an ESXi server with local storage. We recently acquired a 3rd node with matching specs.
The problem I've encountered is that when I attempted to add the new node to the existing management group, I received an error and it resulted in the attached screenshot. The status of the 3rd node is "Joining/leaving management group, Storage system state missing". The 3rd node isn't running a manager.
If I attempt to log into the 3rd node, the CMC is unable to do so. If I attempt to log into the 3rd node via the console, I'm prompted for a user and password, but it doesn't accept the mgmt group's username and password. I did, however, find that admin/admin works.
If I right-click on the node in the CMC and select 'Remove from Management Group' I initially received an error stating that I couldn't do so because the management group was running a FOM. I believe that I now receive an error indicating that it is unable to login.
I think my only option at this point is to use the admin/admin credentials to remove the node from the management group. However, I've never used that option, so I'm unclear on the potential impact. This issue never arose in the lab, so I'm a little gun-shy about trying it on a production system.
We have a 2-node single-site cluster on P4300, SAN/iQ 12.5. We want to migrate to a multi-site cluster. We are adding an additional node to the current cluster and have 3 more nodes to build the additional site with. Can we just change from a single-site to a multi-site cluster? Is there a migration guide that can be used for this?
Does anyone have any information on the end of life notice for the StoreVirtual 4000 line that HP recently posted?
I thought it may just be a current model refresh, but it looks like all models are shown in the EOL announcement, plus this line stood out to me:
"Customers can continue to purchase HPE StoreVirtual 4000 SKUs until they reach the obsolescence (OBSO) phase. Customers should be aware that any orders for EOL SKUs placed after the discontinuance (DISC) date will only be accepted and fulfilled based on material availability. Customers should plan their last buy activities BEFORE the DISC date of 31 July 2017 to ensure acceptance and availability. There are no new generations of HPE StoreVirtual 4000 appliance planned."
I'm hoping this is just another rebrand scenario (they reference the OS as LeftHand OS 12.7 rather than the most recent rebrand to StoreVirtual OS). I've not seen anyone post on this, so I'm hoping to have my fears assuaged, as I've used these since they were LeftHand Networks (since 2006, SAN/iQ 6).
Appreciate any input that you have to clarify the situation.
This may be better suited to a VMware forum, but I wanted to try here too just in case.
I'd like to be able to programmatically correlate VSA volumes to VMware datastores. Does anyone know if this can be done with PowerCLI and possibly the HP CLI?
Currently using Lefthand 12.6 and vSphere 5.5.
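(Not PowerCLI, but a sketch of the correlation itself: every VMFS datastore is backed by a device with an naa.* identifier, and for StoreVirtual/LeftHand volumes that identifier embeds the volume serial number shown in the CMC; on the PowerCLI side the same device name should be reachable via a datastore's ExtensionData.Info.Vmfs.Extent diskName property. Assuming shell access to an ESXi host, the device side can be listed with:)

esxcli storage vmfs extent list                  (datastore name -> backing naa.* device)
esxcli storage vmfs extent list | grep -i 6000eb (LeftHand devices typically carry a naa.6000eb... prefix)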
At our company we have deployed the VSA using an isolated network (via VLAN). I would like to minimize the attack surface, so I am trying to use a Linux server as the only link between the two networks (apart from a machine with the CMC that I can use via the VMware console).
So far I haven't been able to launch the program, but I'll describe what I did in case someone has an idea.
The Linux install I'm using is Ubuntu 16.04 64-bit.
I need some i386 dependencies because the software from HPE is 32-bit only:
sudo apt-get install libc6-i386 lib32z1 lib32stdc++6 libxtst6:i386 libxrender1:i386 libxi6:i386 libfreetype6:i386 libjpeg62:i386
I need to use SSH with X forwarding enabled:
ssh -X username@linuxserver
Finally, launching the CMC with the bundled Java:
cd /opt/HPE/StoreVirtual/UI/
./jre/bin/java -jar UI.jar
So far this is the part that's working; unfortunately, when I run the last command I get the following:
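(The actual error output didn't make it into the post. If it turns out to be display-related: X forwarding silently fails when xauth is missing on the server, so a quick sanity test with a trivial X client, independent of the CMC, is worth a try. Package names below are from stock Ubuntu 16.04.)

sudo apt-get install xauth x11-apps   (xauth is required for ssh -X; x11-apps provides xeyes)
echo $DISPLAY                         (inside the ssh -X session this should print something like localhost:10.0)
xeyes                                 (any window appearing means the forwarding itself works)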
Could you please provide details on whether ALM and Performance Center 12.53 will support Windows Server 2016 and MS SQL Server 2016?
I have 4 HP StoreVirtual 4730 storage nodes, and two of them show "Cache Enabled status Disabled", indicated with a red cross. I know that I have to change the cache adapter and cache battery. As this SAN is in a production environment: if I change the cache adapter, is there any risk of data loss, and what is the recommended procedure to replace it?
We currently run our VMware 5.5 virtual environment on a LeftHand SAN using OS 11.5, and our nodes are a mix of 4530s and P4500 G2s.
We are planning to upgrade VMware to 6.0 U3 and LeftHand OS to 12.0; can anyone see a problem with that?
Hoping you can help. Our LeftHand is currently running OS 11.5 and we are looking to upgrade to version 12.0 (some of our nodes are P4500 G2s, which I believe limits us to v12, as I think they are not compatible with later OS versions).
We install the CMC console on our workstations and connect that way, but we are thinking it may be better to start using the CMC virtual appliance.
I was wondering if anyone has gone through the same process? If so, is there any documentation that takes you through it step by step to make sure we get it right?
I have two 4730s connected by a 10 Gb link, with a FOM on separate hardware. The quorum is two.
When the FOM and one 4730 are lost, the remaining 4730 crashes.
Is it possible, in planned maintenance, to remove the quorum requirement so the remaining 4730 keeps running?
This would also be a requirement in a disaster recovery situation where one 4730 and the FOM are destroyed but the remaining 4730 is operational.
I have not been able to find an answer to this in the documentation or forum. If it is available please advise.
I've got an HPE LeftHand P4530 G2 array that I access using the CMC console.
Since I'm using storage array snapshots, I have to use thin-provisioned LUNs.
So how can I set up an alarm to alert me when the array is reaching 96% full?
So far I can only check manually, but that's getting cumbersome to keep doing after hours.
Any help would be greatly appreciated.
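(In case a scripted workaround helps in the meantime, here is a rough sketch of a cron-driven check. The cliq parameter style is from the StoreVirtual CLI; the cluster name, address, credentials, and especially the percentFull field parsed below are placeholders, so verify against the output your CLIQ version actually emits before trusting it.)

#!/bin/sh
THRESHOLD=96
OUT=$(cliq getClusterInfo clusterName=Cluster01 login=192.168.1.10 userName=admin passWord=secret)
PCT=$(echo "$OUT" | grep -o 'percentFull="[0-9]*"' | grep -o '[0-9]*')   (placeholder field name)
if [ "${PCT:-0}" -ge "$THRESHOLD" ]; then
  echo "Array is ${PCT}% full" | mail -s "StoreVirtual space alert" admin@example.com
fi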
I'm using an HPE LeftHand P4530 G2. My current free, unused array space is just 1620 GB; since I'm using RAID-10 (2-way), effectively just 810 GB is available.
My backup software uses storage array snapshots, as per this article: https://helpcenter.veeam.com/docs/backup/vsphere/backup_from_storage_snapshots_hiw_hp.html?ver=95
Based on this discussion: https://forums.veeam.com/veeam-backup-replication-f2/lefthand-snapshots-t20207.html
I have been told by the support team to use thin LUNs to utilize storage snapshots with less disk space usage.
I must install a 2-node VSA cluster. For each VSA node I have 3 NVMe SSDs, and for the VSA VM itself I have a single SSD on each host.
What is the better configuration: create 6 datastores, create 3 VMDKs per host on them (JBOD), and build the Network RAID over the 3 x 2 disks, or is it better to attach the individual disks via RDM to the VSA 2014 VM?
We have an SV3200 with StoreVirtual OS 13.5. We have set up email notification for all events (Critical, Warning, and Informational) to email us when anything happens on the storage array, and it all seems to work fine; however, we don't seem to get notifications when there are software updates available to be installed.
Is this by design, or do we need to configure something to be notified when new updates/patches arrive for our system?
Thank you in advance.
I’m looking for a recommendation regarding subnets for a multi-site stretched cluster with StoreVirtual 3200.
Is it best practice to have each node in its own VLAN and subnet? Would routing affect performance if the nodes are put into different subnets?
I've got two nodes and a FOM, and now I cannot log in to one node. The CMC is version 12.6, but the nodes are on 9.5.
VMs are running OK, but I noticed that the host has some performance issues.
I also noticed that I need to start the "hydra" service?
But is it OK to upgrade to 10.5 in this condition, or should I reboot this node?
This is running Network RAID-10, so can I reboot this node without any VM migration?
We are looking to purchase two HPE StoreVirtual 3200 SANs. Both nodes will be located in the same LAN/office, and we intend to set up Network RAID-10.
To ensure quorum we would like to deploy a FOM vApp or a file witness. Would we need to purchase additional licenses/support for the FOM/witness server? We are getting conflicting advice from our resellers.