|HPE Storage Solutions Forum - HPE StoreVirtual Storage/LeftHand|
We have a failed battery on our P4500G2. We've got the replacement, but I can't seem to find any documentation on how I go about replacing it. I don't even know whether we need to power down the unit to replace it. I'm hoping not, of course, as we have live production systems running on it and powering down isn't really an option!
Any help on this would be appreciated! :)
News: support for VMware ESX 6.5 and Windows Server Hyper-V hypervisors, as well as integration with HPE Recovery Manager Central.
Next new features? I don't know, because the link for the 12.7 Release Notes points to the old 12.6 version. Maybe it will be OK tomorrow...
I need urgent help,
All of my ESXi servers have just lost access to the iSCSI VMFS LUN presented from my HPE LeftHand P4300G2, as shown below:
As the above screenshot shows, it was full. I need to delete some old VMs to continue using it, but I cannot understand why, while the LUN is presented to the server group, the ESXi servers cannot be accessed through the vSphere console.
Even when the rescan is successful from the vSphere console, the LUN is listed as Inaccessible.
But when I unpresent it, the ESXi server becomes accessible and manageable from the vSphere console again.
Any help would be greatly appreciated
Thanks in advance.
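One low-level sanity check while a LUN shows as Inaccessible is whether each host can still reach the cluster's iSCSI portal at all. A minimal sketch, assuming Python is available somewhere on the storage network; the address in the usage comment is a placeholder you would replace with your P4300G2 cluster's virtual IP:

```python
import socket

def portal_reachable(host, port=3260, timeout=3.0):
    """Return True if a TCP connection to the iSCSI portal succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (10.0.0.50 is a placeholder for your cluster's virtual IP):
# print(portal_reachable("10.0.0.50"))
```

If the portal answers but the datastore still shows Inaccessible, the problem is more likely at the VMFS layer than the network layer.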
Hello, I have a customer that wants to move from a multi-site to a single-site configuration, physically moving the nodes into one site. Should I move the nodes into the new site first, and then change the cluster to single-site? Any guidance would be helpful. Thanks.
How do I reset the iLO on a P4500 SAN?
I can see the Patch_10118-00.iso file in the Download folder, but how do I install it to continue with patching?
Trying to upgrade the OS from 9.5 --> 11.5 --> 12.5.
A customer has a situation with what is probably a badly designed SAN network.
Basically they have two buildings. In each building they have 1x StoreVirtual 4330, 2x SAN switches, and 1x ESXi server. The buildings are connected to each other over the SAN switches with 4x fibre. The StoreVirtuals are set up in a Multi-Site configuration so that the ESXi servers have shared storage and high availability.
Building 1 room is setup as Multi-Site Primary running the majority of VMs. 4330 has 1x local manager.
Building 2's room is set up as Multi-Site "Secondary" running non-critical VMs, but also has a FOM on a separate physical computer in the same room. The 4330 has 1x local manager.
Now we have seen that this system is quite redundant. A full power failure of building 1 still leaves building 2 fully operational, and even VMware HA triggers successfully. When building 2 goes down, same scenario: every affected VM boots successfully in the other building. So far everything is perfect; this was verified multiple times.
But when I started reading the HP Multi-Site manuals, it turns out this solution is not on the 'recommended' list. The recommendation is basically to have the FOM inside the primary (1st) building. In that scenario, however, failure of the 1st building would require manual intervention in the secondary room to reactivate the 4330 there. So basically we have something better now, because no intervention was required. Does that make sense?
The doom scenario that comes to my mind now is: what if only the network between the buildings goes down and we end up in a split-brain scenario? Will both 4330s then become active? I assume the primary will remain active, and the secondary will also remain active because the FOM is in that building? What happens when the network comes online again? There is only 1x VSI for storage. Will we get total data corruption? Will building 1 become the 'primary' again and overwrite any changes that happened in the 2nd room? The chances of a network outage are low because all switches, UPSes and fibres are hot redundant. But what would happen if someone snapped all four fibres?
And more importantly: how do we go from this situation to a 'supported' situation that can withstand a network failure, with full active redundancy and no requirement for manual intervention? I cannot find this in the documentation.
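As I understand the Multi-Site documentation, the split-brain question comes down to manager quorum: the node managers and the FOM each hold a vote, and only a partition holding a strict majority of the votes stays online. A small sketch of that arithmetic for the layout described above; this models only the vote count, not SAN/iQ itself:

```python
def has_quorum(partition_votes, total_managers):
    """A partition stays online only with a strict majority of manager votes."""
    return partition_votes > total_managers // 2

TOTAL = 3  # building 1's node manager + building 2's node manager + the FOM in building 2

# An inter-site link failure splits the managers into two partitions:
building1 = 1  # only its own node manager
building2 = 2  # its node manager plus the FOM

print(has_quorum(building1, TOTAL))  # False: building 1 loses quorum and stops serving
print(has_quorum(building2, TOTAL))  # True: building 2 keeps serving I/O
```

On that model, snapping all four fibres should not yield two active copies: building 1 loses quorum and stops serving the volume, building 2 continues, and building 1 resynchronizes when the link returns. Please verify against the current HPE Multi-Site guide before relying on this.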
Hi, I was able to upgrade my nodes to 12.5 on my P4300G2 systems, but the Failover Manager is still at LeftHand OS 10.5. Is this the latest version, or is there an upgrade? I can't upgrade it myself, as it shows no upgrade available.
OK, so now that my NSM160 nightmare has been resolved, I'm trying to get some 4130s configured to move critical stuff off the NSM160s.
There wasn't enough space with just the 4x 600GB drives, so I purchased 4 additional drives for each unit. I had the unit fully updated with the 4 drives and made a management group. I shut down the system, added the drives, and used the ACU to add the 4 new drives and grow the 2nd logical array (the 1st is a little 36GB RAID 6 array that I guess holds the LeftHand stuff). Everything looked good until I got into the CMC afterwards: it shows RAID off/unconfigured and doesn't seem to be accepting the 4 new drives. Only 4 show up in Disk Setup. Trying to configure RAID just disconnects me from the unit.
Am I missing something?
Hi, I have a customer with 2x 4300 G2 nodes in a cluster. They want more space and performance; is it possible to replace the 500GB disks with 1TB SSDs?
I have 2 StoreVirtual 3200 SANs running 13.5, currently configured with 10K SAS drives. Can I add SSD drives to enable Disk Tiering and Adaptive Optimization on the fly, or would I need to reconfigure both SAN nodes?
We have a two-node HPE StoreVirtual VSA setup, basically just two LUNs.
One LUN has really high latency, the other one does not; the two servers are alike in terms of configuration.
How do I diagnose and fix the latency problem?
If I end up needing HPE support to fix this, where/how do I find out how much that costs?
We don't have any consultants in our area fluent with this product. Are there any in Albany, NY?
Or even better, Burlington, VT? Would also consider Central NY consultants.
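Not a substitute for support, but one cheap first step is to export latency samples per LUN (the CMC Performance Monitor can log counters to file) and compare their distributions: a consistently high 95th percentile suggests a sustained bottleneck, while a high maximum with a low p95 suggests occasional spikes. A minimal sketch, assuming you already have the samples as lists of milliseconds; the numbers below are made up:

```python
import statistics

def latency_summary(samples_ms):
    """Summarise a list of latency samples (ms) into mean, p95, and max."""
    ordered = sorted(samples_ms)
    p95_index = max(0, int(round(0.95 * len(ordered))) - 1)
    return {
        "mean": statistics.fmean(ordered),
        "p95": ordered[p95_index],
        "max": ordered[-1],
    }

fast_lun = [2, 3, 2, 4, 3, 2, 5, 3]       # made-up samples in ms
slow_lun = [2, 3, 40, 55, 3, 60, 48, 52]  # made-up samples in ms
print(latency_summary(fast_lun))
print(latency_summary(slow_lun))
```

Comparing the two summaries side by side at least tells you whether the slow LUN is slow all the time or only in bursts, which narrows where to look next.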
Thank you, Tom
Hello, the FTP site in the CMC still points to ftp.hp.com. Does someone have the new string to HPE.com that I can put into preferences.txt?
I just did a new download of the CMC from the HPE site, but it still has the wrong address.
We have an SV3200 returned from a customer. We are trying to log in to the management port by following the guide:
i.e., setting the laptop IP to 172.16.253.205 255.255.255.248 and trying to browse to https://172.16.253.201 with 3 different browsers. No ping either.
If I set up my mini-network with my router, the management port appears to get an IP and I can ping it, but still no luck logging in with any browser over HTTPS.
I also tried PuTTY on the micro-USB serial service port, but I didn't see anything in there for changing an IP or doing a factory reset.
So it seems there is no way to reset these back to the factory configuration without doing it through the management port with a browser.
It almost seems like they disabled web management and set the management port to DHCP.
What's the best way to get the management web page working, or to reset the IP configuration back to factory defaults so we can access it?
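If the unit really did come up on DHCP or some unknown static address, one low-tech step is to sweep a candidate subnet for anything answering on the HTTPS management port. A minimal sketch; the 172.16.253.200/29 range in the usage comment is an assumption based on the factory addresses quoted above, so substitute whatever subnet your router hands out:

```python
import ipaddress
import socket

def find_https_hosts(cidr, port=443, timeout=0.5):
    """Return hosts in the subnet that accept a TCP connection on the given port."""
    found = []
    for host in ipaddress.ip_network(cidr).hosts():
        try:
            with socket.create_connection((str(host), port), timeout=timeout):
                found.append(str(host))
        except OSError:
            pass
    return found

# Assumed factory-default management subnet, per the guide:
# print(find_https_hosts("172.16.253.200/29"))
```

Anything the sweep finds is worth trying in a browser; if nothing answers anywhere, the web service itself is probably down rather than just readdressed.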
Thanks for your time,
NIC in StoreVirtual VSA on Hyper-V is not running at wire-speed when attached to a 10 Gb Windows 2012 Hyper-V vSwitch
We use two HP StoreVirtual VSA 12.5 nodes installed on Windows Server 2012 R2 hosts with 10 Gb network adapters, but communication with the VSA is not running at 10 Gb.
I think the problem is in the drivers inside the Linux-based VSA virtual server. Virtual servers running Windows Server 2012 R2 with the SR-IOV option enabled run at 10 Gb without any problems.
I found a notice in the document HP LeftHand OS Version 12.0 Release Notes, in the Workarounds chapter on page 19 ( http://h20564.www2.hpe.com/hpsc/doc/public/display?docId=c04514442 ):
NIC in StoreVirtual VSA on Hyper-V is not running at wire-speed when attached to a 10 Gb Windows 2012 Hyper-V vSwitch
When a Hyper-V VSA NIC is connected to a 10 Gb Windows 2012 Hyper-V Switch, it does not run at the full potential of the physical NIC. Currently, there is no workaround. The issue is being investigated by Microsoft.
The problem is not solved in version 12.5, which we use. Is it solved in version 12.6?
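One way to narrow this down is to measure raw TCP throughput over the same vSwitch path independently of the VSA; iperf is the usual tool, but a rough self-contained sketch follows. Run any TCP sink on the receiving side first; the host, port, and transfer size in the usage comment are placeholders:

```python
import socket
import time

def measure_throughput(host, port, total_bytes=64 * 1024 * 1024):
    """Push total_bytes over TCP to (host, port) and return the rate in Gbit/s."""
    chunk = b"\x00" * (1024 * 1024)
    sent = 0
    start = time.perf_counter()
    with socket.create_connection((host, port)) as sock:
        while sent < total_bytes:
            sock.sendall(chunk)
            sent += len(chunk)
    elapsed = time.perf_counter() - start
    return (sent * 8) / elapsed / 1e9

# Example (placeholder receiver; start a TCP sink there first):
# print(measure_throughput("192.168.1.20", 5001))
```

If a plain VM-to-VM transfer over the vSwitch also falls well short of 10 Gb, the bottleneck is the vSwitch path rather than the VSA's Linux drivers.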
There is currently a vib that automatically rescans the iSCSI adaptors after the HP VSA VM has started but the latest release is for ESXi 6.0:
metadata-hpe-iscsi-rescan_6.0.0-184.108.40.206-offline_bundle.zip / hpe-iscsi-rescan-mem-6.0.0-220.127.116.11.vib
Is anyone able to advise either:
1. whether it will work with ESXi 6.5, or
2. when a release of the package suitable for ESXi 6.5 is likely?
With a StoreVirtual 4000 hardware system I can create a fault-tolerant iSCSI network.
With a software-based StoreVirtual VSA, I can't.
How can I make the StoreVirtual VSA iSCSI network fault-tolerant?
We have a couple of StoreVirtual VSA 2014 in production and every time we need support it's very hard to open a case.
StoreVirtual VSA Storage Software comes with a Part Number (such as D4U01A) and an Entitlement Order Number.
But when you're about to submit a case in the HPE Support Center, the Support Case Manager asks you for a Contract, Warranty ID, Service Agreement ID, Packaged Support ID, or Product Serial Number.