|HPE Storage Solutions Forum - HPE StoreVirtual Storage/LeftHand|
I have two StoreVirtual 4330s running as a multi-site setup with a FOM server.
My FOM server died, hardware and everything; I am not able to get the FOM back. The hardware, software, everything is gone. I created a new FOM server and want to attach it to my Management Group, but I get the following error: [You cannot add a Failover Manager into a management group that already has a Failover Manager.] See also the screenshot.
When I click on my FOM in the management group, I am not able to do anything; it says: [Could not find storage system with serial number xxx:xxx:xx:xx:xx:xx] See screenshot.
Please let me know how I can remove or delete the dead FOM server/VM.
Or do I really have to delete the whole management group and create a new one, which means 2-3 days of work?
I'm trying to set up a simple HPE StoreVirtual VSA infrastructure:
HV01 and HV02 got their 1GbE connected to a switch.
10GbE : 10.10.10.0/28
Both Hyper-V servers can communicate between each other and with VSA virtual machines through the 10GbE adapter.
CMC has been deployed on HV01, I created a Management Group which contains SRV-VSA01 and SRV-VSA02.
Once done, I just had to refresh the iSCSI targets and get the freshly created volume.
And here we are now. I'm faced with some performance issues, and I have to say I'm not quite confident about my setup/configuration.
When I run a CrystalDiskMark I have extremely low write speed.
Results when I run CrystalDiskMark (5/1GB) on the local disk directly on the Hyper-V host:
Results when I run CrystalDiskMark (5/1GB) on the volume presented by VSA (Network-Raid10):
Results when I run CrystalDiskMark (5/1GB) on the volume presented by VSA (Network-Raid0):
Results on SRV-HV02 are quite similar.
I've looked and looked and looked and come up empty....so here I am! :-)
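As a rough sanity check on the Network RAID-10 write numbers: every write has to be synchronously replicated to the second VSA node over the same 10GbE iSCSI fabric before it is acknowledged, so there is a bandwidth-bound ceiling well below raw line rate. A back-of-envelope sketch (the link-efficiency figure and replica count below are illustrative assumptions, not measured values):

```python
# Back-of-envelope ceiling for synchronous Network RAID-10 writes.
# Assumptions (illustrative, adjust for your setup):
# - 10 GbE link with ~70% usable throughput after TCP/iSCSI overhead
# - every host write is replicated once to the partner VSA node,
#   and replication shares the same 10GbE fabric as host traffic
def nraid10_write_ceiling_mbps(link_gbps=10.0, efficiency=0.7, replicas=2):
    usable_mbps = link_gbps * 1000 / 8 * efficiency  # usable MB/s on the wire
    # divide by the number of data copies in flight on the fabric
    return usable_mbps / replicas

print(round(nraid10_write_ceiling_mbps(), 1))  # ~437.5 MB/s best case
```

If CrystalDiskMark writes are far below even this kind of ceiling, the bottleneck is more likely in the VSA or NIC configuration than in the network itself.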
I'm going to be likely purchasing the 3-pack of 4TB StoreVirtual 2014 VSAs. Can anyone tell me specifically where that 4TB limitation is implemented?
I'm actually replacing two older VSA VMs with this and continuing to mirror between the two, so the aggregate output size of their volumes will be 4TB for me either way. I just don't want the new VSAs to gripe if they see for example 4.5TB of "Raw Space" from their hosts.
Is it possible on the new StoreVirtual 3200 to add four 400 GB SSD drives to present to servers as a separate LUN, apart from our second array of eight 900 GB SAS drives? It appears that it will only allow us to create volumes from a combined total of both arrays. How does that affect the performance of the 12 SSD drives?
I have a VSA 12.6 implementation that I just integrated with our AD. However, when I logged in using my AD credentials, whose password contains a period, I got an error message stating that passwords with a "." are not permitted.
I opened a ticket, and support advised that no passwords, even externally authenticated AD passwords, can have special characters, like a period, in them.
We try to use complex passwords, which may contain some of these characters that are not permitted, in our AD environment. I don't know if this is the correct place for feature requests, but I would like to request that AD integrated logins not be bound by those password restrictions in order to maintain password complexity.
Just wondering if anyone had come across an issue with LHOS 12.6 when you can't log into a node with the error: "Login failed. Socket is closed"?
I have a new environment I'm building with a single non-redundant VSA in a new MG. The volumes I've created are Network RAID-0; this design is intentional :-).
I had to take the VSA down for host maintenance and when I powered it back on I am unable to log into the MG. The VSA is running as the volumes are online and available to the vSphere hosts.
Any ideas? I've tried rebooting the VSA and have been looking in the CLIQ reference to see what options I have but no luck yet...
I'd rather not rebuild the MG unless I have to :-)
Any help would be much appreciated.
I have a LeftHand infrastructure managed by a CMC installed on an HPE ProLiant DL360 server running Windows Server 2008 R2.
On the ProLiant server I see a folder at C:\Program Files (x86)\HP\P4000\UI\downloads containing more than 13 GB of files.
Are they all necessary? I suspect that many of the files are no longer needed.
Can I safely delete them?
Which files should I keep in case I need to reinstall the CMC or install it on another server?
Is it possible to remote snapshot from a management group with SANs running 12.6 to a management group with SANs running 9.5? Or will I run into compatibility issues with the different LeftHand OS versions?
I have an HP P4500 running LeftHand OS 9.5, and I have an error stating that "The 'Log Partition' status is 'Read Only'".
How do I put the partition back into read/write status to clear the error?
I'm hoping someone can help me. I've configured a two-node Hyper-V failover cluster using HP VSA "converged" shared storage.
All HP kit end to end: two DL380 Gen9 servers with 4 x 10GbE ports each, P440 4GB RAID controllers, two HP Aruba 3800 10GbE switches stacked, and Server 2012 R2 as the OSE.
When I use IOmeter with the Open Performance profile to test the onboard server storage, which is RAID 1 for Tier 0 and RAID 5 for Tier 1, I get the expected results. But the minute I connect to an iSCSI volume through the VSA, which sits on those very logical disks, I get horrible performance, sometimes tenfold worse than the native storage results.
Why is this? I've read all the documentation and engaged with HP support, but still haven't got a satisfactory answer as to why the storage tests perform fine against native storage but performance drops so dramatically when run through the VSA.
One HP engineer did mention that customer-found issues have been reported, and that 12.7, which isn't released until next year, will apparently resolve these performance issues, which only seem to happen on Hyper-V, not VMware, due to the integration services used by the VSA Linux virtual machines. Has anyone else come across this?
I just need this to do what it says on the tin. I've tweaked every 10GbE NIC setting I can find and tuned the switch, to no avail. Please, someone save my sanity :)
I'm using HP StoreVirtual VSA 12.6.00.0155.0. I've set the advanced ESXi option /VMFS3/EnableBlockDelete to 1 to enable guest UNMAP passthrough, then tried to reclaim the disk space from the Windows 2012 guest. It did not work.
The regular VMFS unmap with "esxcli storage vmfs unmap -l unmap_test" works as expected and reclaimed the disk space on the LUN after I deleted the test VMDK.
Does StoreVirtual VSA work with guest OS UNMAP?
Details on guest OS UNMAP: http://www.codyhosterman.com/2015/04/direct-guest-os-unmap-in-vsphere-6-0/
I have an HPE LeftHand (p4xxx) infrastructure used to host the datastores managed by a VMware vSphere infrastructure.
LUNs are accessed via iSCSI from the ESXi hosts configured as a cluster.
I use Veeam Backup and replication to manage backup of VMs.
To speed up the backup I configured the physical Veeam server and the virtual Veeam proxies (all Windows Server 2008 R2) to access the same LUNs via iSCSI in Read only mode.
I have some doubts about the right configuration.
Is it correct to have the VMware cluster access the LUNs in read/write mode and the Veeam proxies in read-only mode?
What additional settings should I implement to avoid possible damage to the datastores hosted on the LUNs?
Should I configure the Veeam proxies as single nodes or as a cluster?
Is there any further setting I should consider and/or apply?
(As the StoreFront forum is quite dead, I'll post again here.)
Maybe someone has had the same issue.
I tried to integrate the StoreFront Analytics Pack (3.1.620) into my vRealize Operations appliance (188.8.131.5243153) and added my StoreVirtual (12.6). It looked good at the beginning, but after the collector ran for a few minutes I received "Objects are not receiving data from adapter instance" errors for my FailoverManager. Many objects in HP Storage have no data at all, performance data for example.
I also receive a lot of alarms for single volumes like "write latency is at critical level", but the alarms clear automatically after 1-2 minutes and I do not even see the volume in the environment.
Checking the logs of the StorageAdapter shows the same two lines every 5 minutes:
2016-11-02 14:36:48,373 ERROR [pool-6619-thread-1] (166) com.integrien.alive.common.adapter3.AdapterBase.getResourceCollectResult - Failed to retrieve collect result for resource of adapter 'HPStorageAdapter': Resource 'MANAGEMENTGROUP1', resId=null is not in monitoring state.
2016-11-02 14:36:48,373 ERROR [pool-6619-thread-1] (166) com.integrien.alive.common.adapter3.AdapterBase.addEvents - Could not add external event to collect result for adapter 'HPStorageAdapter', resource collect result is null.
Credentials seem to be OK and have full access to the StoreVirtual.
Anyone has an idea how to fix this? Having this monitoring integration in vROPS is too promising to give up :-)
Or is there another way to get some analytics out of the StoreVirtual with vROps?
Many thanks in advance.
I have an HP DL320s running LeftHand 9.5. I had an HDD fail; I replaced the HDD and the RAID array rebuilt itself, but the disk still shows as uninitialized under the Disk Setup tab. When I right-click on the disk, all I get is "view disk status" or "help".
I am using HP CMC 12.0 to manage the SAN.
Hello - I have a licensing question that I hope someone can help me with. We're looking at getting StoreVirtual VSA for three nodes with about 8 TB total disk space each. Does that mean we would need three 10 TB VSA licenses?
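For anyone sketching this out, the per-node arithmetic can be expressed as a small helper. Note the tier sizes below are assumptions based on the commonly quoted 4 TB / 10 TB / 50 TB per-VSA license tiers; verify them against your actual HPE quote before buying:

```python
# Hypothetical helper: pick the smallest per-node VSA license tier that
# covers a node's capacity. Tier sizes are ASSUMPTIONS (commonly quoted
# 4 / 10 / 50 TB tiers), not confirmed pricing -- check your HPE quote.
TIERS_TB = [4, 10, 50]

def license_tier_for(node_capacity_tb):
    for tier in TIERS_TB:
        if node_capacity_tb <= tier:
            return tier
    raise ValueError(f"{node_capacity_tb} TB exceeds the largest tier")

# Three nodes with ~8 TB each would each land in the 10 TB tier:
print([license_tier_for(8) for _ in range(3)])  # [10, 10, 10]
```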
I have an HP clustered storage infrastructure. My engineer used the wrong cable to attach to the Nexus switch and added it to the volume. Once the re-striping started I got an error message E0060100 EID: Latency_status_Excessive and another error E00060205: EID_S_Server_Status_Overloaded. After 3 days the restriping is at 5% on the volume. Is it possible to change the cables one by one to the right cable? I'm afraid of losing my storage. ALB is configured on the NICs.
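A linear extrapolation from the observed progress gives a rough sense of how long that restripe would take at the current pace (this assumes a constant rate; restripes often slow further under host load):

```python
# Rough linear extrapolation of restripe completion time from
# observed progress. Assumes a constant rate, which is optimistic:
# restripes often slow down further when the cluster is under load.
def remaining_days(percent_done, days_elapsed):
    rate_per_day = percent_done / days_elapsed  # percent completed per day
    return (100 - percent_done) / rate_per_day

print(round(remaining_days(5, 3), 1))  # ~57.0 more days at this rate
```

Numbers like that usually mean the restripe is throttled by something (here, plausibly the mis-cabled link), not that the operation is healthy.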
We had a cluster of two nodes and wanted to add a third node to be able to increase the size of the existing volumes. So we added a new LeftHand 12.6 node to the existing 11.5 cluster. My understanding is that you can add a node with a higher LeftHand OS version to a cluster with a lower version; HP support confirmed that.
So we added the third node to the cluster and the system started its restripe. The second we added the node, though, there was an error that the iSCSI IP of that node could not be reached by any of the VIPs (we have only one). The restripe continued nonetheless, and pings from that node to the VIP were successful. Pings with MTU set to 9000 to our three ESX hosts were successful too.
Now, after approx. 30 minutes, one of the five datastores lost its connection. As soon as we power off the newly added node, the datastore is online again.
I already have a support case open but the HP support cannot find any problems nor a solution.
We checked the settings on the switches, ESX hosts, the node and the cluster several times, but we cannot seem to find anything wrong.
The configuration looks something like this:
The nodes have a 10GbE bond and a 1GbE bond; 10GbE is iSCSI and 1GbE is management. We have two core switches (also HP) in HA, with one of the 10GbE cables going into one core and the other into the other core. The same goes for the management bond and for the 10GbE iSCSI of the ESX hosts. The iSCSI network and the management network are each in a separate VLAN. Jumbo frames are active in the iSCSI VLAN and on all iSCSI NICs of the ESX hosts and the vSwitch.
The ESX 5.5 hosts have a separate vSwitch for the two iSCSI NICs, each of which has an IP address in the iSCSI network. A standby adapter is only configured on the vSwitch.
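One thing worth double-checking in a jumbo-frame setup like this: a do-not-fragment test ping only validates MTU 9000 end to end if its payload exactly fills the frame, i.e. the MTU minus the IP and ICMP headers. A small sketch of that arithmetic:

```python
# Payload size for a do-not-fragment ping that exactly fills a jumbo frame:
# the IP header (20 bytes) and ICMP header (8 bytes) come out of the MTU.
def ping_payload_for_mtu(mtu=9000, ip_header=20, icmp_header=8):
    return mtu - ip_header - icmp_header

print(ping_payload_for_mtu())  # 8972
```

So the test would be, e.g., `ping -f -l 8972 <iscsi-ip>` on Windows or `ping -M do -s 8972 <iscsi-ip>` on Linux; a plain "ping with MTU 9000" that succeeds may have been silently fragmented somewhere in the path.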
Support Case is 5314437093
Has anyone maybe had a similar problem and can give any advice?
Thanks in advance!
Hello, I would like to know when I can expect an update with official Windows Server 2016 support, especially for the DSM for MPIO driver.
I have an HP StorageWorks P4300 G2 SAN configured with RAID 6.
Two 450 GB SAS hard disks are showing predictive failure. I tried to get new ones, but the problem is they need about three weeks to be delivered, as they are not available in my region.
I found a Seagate 600 GB SAS hard disk that is available now.
Is it compatible to be hot-swapped into my SAN storage and rebuilt, or will it affect my SAN storage?