HPE Storage Solutions Forum - HPE StoreVirtual Storage/LeftHand



Dead FOM server in a multi-site environment with StoreVirtual 4330 SAN

Hi all

I have two StoreVirtual 4330 nodes running in a multi-site configuration with a FOM server.

My FOM server died completely (hardware and all) and I am not able to get the FOM back; the hardware, software and everything are gone. I created a new FOM server and want to attach it to my management group, but I get the following error: [You cannot add a Failover Manager into a management group that already has a Failover Manager.] (see also the screenshot)

When I click on the old FOM in the management group I am not able to do anything with it; it says: [Could not find storage system with serial number xxx:xxx:xx:xx:xx:xx] (see screenshot).

Please let me know how I can remove or delete the dead FOM server/VM.

Or do I really have to delete the whole management group and create a new one, which would mean 2-3 days of work?
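
For reference, this is roughly how I have been checking what the group still thinks it has (CLIQ from a machine with the CLI installed; the IP and credentials below are placeholders for one of the surviving 4330 nodes):

# List the management group, its managers and the quorum state
cliq getGroupInfo login=10.1.1.11 userName=admin passWord=<password>

My understanding is that removal of a dead FOM is normally done from the CMC (right-click the FOM and remove it from the management group) while the two remaining managers still hold quorum, but that is exactly the part that fails for me.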

 

StoreVirtual VSA: Misconfiguration? Bad write performance

Hello,

I'm trying to set up a simple HPE StoreVirtual VSA infrastructure.
This is a lab, not production, just for training purposes.

HV01:
Dual Xeon L5630 - 32 GB RAM
HDD0: 1x 1TB - HDD1: 1x 2TB
NIC: 2x1 GbE + 2x 10GbE (1 used)

HV02:
Dual Xeon L5630 - 32 GB RAM
HDD0: 1x 1TB - HDD1: 1x 2TB
NIC: 2x1 GbE + 2x 10GbE (1 used)

HV01 and HV02 have their 1 GbE NICs connected to a switch.
HV01 and HV02 have their 10 GbE ports connected directly to each other with a DAC cable (1 port used).

10GbE : 10.10.10.0/28
1GbE: 192.168.1.0/24

HV01:
10GbE: 10.10.10.3
HPE VSA DSM has been installed.
  VM SRV-VSA01 (on HV01):
  IP: 10.10.10.1
  Jumbo frames configured at 9000.

HV02:
HPE VSA DSM has been installed.
10GbE: 10.10.10.4
  VM SRV-VSA02 (on HV02):
  IP: 10.10.10.2
  Jumbo frames configured at 9000.

Both Hyper-V servers can communicate with each other and with the VSA virtual machines through the 10 GbE adapters.
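
For what it's worth, this is roughly how I verified jumbo frames end to end from the hosts (run from a PowerShell prompt on HV01; the adapter name filter is a placeholder for my 10 GbE ports, and 8972 bytes of payload is what fits inside a 9000-byte MTU once the IP/ICMP headers are added):

# Ping the VSA on the other host with a non-fragmentable, jumbo-sized payload
ping 10.10.10.2 -f -l 8972

# Confirm the jumbo packet value actually applied on the physical 10 GbE NICs
Get-NetAdapterAdvancedProperty -Name "10GbE*" -RegistryKeyword "*JumboPacket"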

CMC has been deployed on HV01, I created a Management Group which contains SRV-VSA01 and SRV-VSA02.
I created a VSA cluster which contains SRV-VSA01 and SRV-VSA02.
I created a Network RAID-10 volume.
I added SRV-HV01 and SRV-HV02 as servers, created a server cluster.
I assigned the previous created volume to this server cluster.

Once that was done, I just had to refresh the iSCSI targets and connect the freshly created volume.
I then brought the disk online, initialized it, created a new simple volume, formatted it as ReFS and assigned a drive letter.
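
For reference, the iSCSI side on each Hyper-V host was done roughly like this in PowerShell (a sketch; 10.10.10.10 stands in for the cluster virtual IP, which is not listed above):

# Register the VSA cluster VIP as a target portal and refresh the target list
New-IscsiTargetPortal -TargetPortalAddress 10.10.10.10
Update-IscsiTarget

# Connect the presented volume persistently so it reconnects after a reboot
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true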

And here we are. I'm facing some performance issues and, I have to say, I'm not quite confident about my setup/configuration.

When I run CrystalDiskMark I get extremely low write speeds.

Results when I run CrystalDiskMark (5/1GB) on the hard drive directly on the Hyper-V host:

SRV-HV01
-----------------------------------------------------------------------
CrystalDiskMark 5.2.0 x64 (C) 2007-2016 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

Sequential Read (Q= 32,T= 1) : 179.562 MB/s
Sequential Write (Q= 32,T= 1) : 157.042 MB/s
Random Read 4KiB (Q= 32,T= 1) : 1.580 MB/s [ 385.7 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 1.553 MB/s [ 379.2 IOPS]
Sequential Read (T= 1) : 67.319 MB/s
Sequential Write (T= 1) : 65.847 MB/s
Random Read 4KiB (Q= 1,T= 1) : 0.235 MB/s [ 57.4 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 0.552 MB/s [ 134.8 IOPS]

Test : 500 MiB [G: 0.7% (12.9/1862.9 GiB)] (x5) [Interval=5 sec]
Date : 2016/12/05 1:27:24
OS : Windows Server 2012 R2 Datacenter (Full installation) [6.3 Build 9600] (x64)

Results when I run CrystalDiskMark (5/1GB) on the volume presented by the VSA (Network RAID-10):

SRV-HV01
-----------------------------------------------------------------------
CrystalDiskMark 5.2.0 x64 (C) 2007-2016 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

Sequential Read (Q= 32,T= 1) : 289.112 MB/s
Sequential Write (Q= 32,T= 1) : 23.408 MB/s
Random Read 4KiB (Q= 32,T= 1) : 4.776 MB/s [ 1166.0 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 0.440 MB/s [ 107.4 IOPS]
Sequential Read (T= 1) : 182.402 MB/s
Sequential Write (T= 1) : 11.324 MB/s
Random Read 4KiB (Q= 1,T= 1) : 0.946 MB/s [ 231.0 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 0.098 MB/s [ 23.9 IOPS]

Test : 500 MiB [E: 0.0% (0.1/299.9 GiB)] (x5) [Interval=5 sec]
Date : 2016/12/05 1:34:16
OS : Windows Server 2012 R2 Datacenter (Full installation) [6.3 Build 9600] (x64)

Results when I run CrystalDiskMark (5/1GB) on the volume presented by the VSA (Network RAID-0):

SRV-HV01
-----------------------------------------------------------------------
CrystalDiskMark 5.2.0 x64 (C) 2007-2016 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

Sequential Read (Q= 32,T= 1) : 353.499 MB/s
Sequential Write (Q= 32,T= 1) : 36.200 MB/s
Random Read 4KiB (Q= 32,T= 1) : 5.495 MB/s [ 1341.6 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 1.080 MB/s [ 263.7 IOPS]
Sequential Read (T= 1) : 199.443 MB/s
Sequential Write (T= 1) : 13.633 MB/s
Random Read 4KiB (Q= 1,T= 1) : 1.001 MB/s [ 244.4 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 0.116 MB/s [ 28.3 IOPS]

Test : 500 MiB [F: 0.2% (0.1/49.9 GiB)] (x5) [Interval=5 sec]
Date : 2016/12/05 1:40:31
OS : Windows Server 2012 R2 Datacenter (Full installation) [6.3 Build 9600] (x64)

Results on SRV-HV02 are quite similar.

 

 

How to set up a mirrored virtual disk in Hyper-V?

Hi there,
I've not used this product before, but I have a simple requirement.

I have a standalone Hyper-V 2016 Core server. This server has two Windows Server 2012 R2 VMs. I want to replicate a virtual disk from one VM to the other. Is this possible with HPE StoreVirtual without having to install anything directly on the Hyper-V server?

Which components do I need to install?

Many thanks

 

StoreVirtual 2014 VSA - Storage Limitation *SPECIFIC* Location

I've looked and looked and looked and come up empty....so here I am! :-)

I'm going to be likely purchasing the 3-pack of 4TB StoreVirtual 2014 VSAs. Can anyone tell me specifically where that 4TB limitation is implemented?

  • The VSA's "Raw Space" - host storage presented to VSA from ESXi? (each VSA has 4TB with which to work)
  • The VSA's "Usable Space" - the VSA caps its usable space at 4TB regardless of whether it has more listed in "Raw Space".
  • Total size of LUNs exported from VSA back to hosts? (a total of 4TB to use as datastores for other VMs)
  • Something else?

I'm actually replacing two older VSA VMs with this and continuing to mirror between the two, so the aggregate output size of their volumes will be 4TB for me either way. I just don't want the new VSAs to gripe if they see for example 4.5TB of "Raw Space" from their hosts.

Thanks!

 

StoreVirtual 3000 mixed hard drive arrays

Is it possible on the new StoreVirtual 3200 to add 4 x 400 GB SSD drives and present them to servers as a separate LUN from our second array of 8 x 900 GB SAS drives? It appears that it will only allow us to create volumes from a combined total of both arrays. How does that affect the performance of the 12 SSD drives?

 

Feature Request - External AD Authentication Complex Passwords

Hello,

I have a VSA 12.6 implementation that I just integrated into our AD. However, I noticed that when I logged in using my AD credentials, whose password contains a period, I get an error message stating that passwords with a "." are not permitted.

I opened a ticket, and support advised that no passwords, even externally authenticated AD passwords, can have special characters, like a period, in them.

We try to use complex passwords, which may contain some of these characters that are not permitted, in our AD environment. I don't know if this is the correct place for feature requests, but I would like to request that AD integrated logins not be bound by those password restrictions in order to maintain password complexity.

Thanks

 

StoreVirtual VSA - Unable to log in - Socket is closed

Hi all,

Just wondering if anyone has come across an issue with LHOS 12.6 where you can't log in to a node, with the error "Login failed. Socket is closed"?

I have a new environment I'm building with a single non-redundant VSA in a new MG. The volumes I've created are Network RAID-0 (no redundancy) - this design is intentional :-).

I had to take the VSA down for host maintenance and when I powered it back on I am unable to log into the MG. The VSA is running as the volumes are online and available to the vSphere hosts.

Any ideas? I've tried rebooting the VSA and have been looking in the CLIQ reference to see what options I have but no luck yet...

I'd rather not rebuild the MG unless I have to :-)

Any help would be much appreciated.

Cheers,

Ben

 

CMC software - Wasted space?

I have a LeftHand infrastructure managed by a CMC installed on an HPE ProLiant DL360 server running Windows Server 2008 R2.

On the ProLiant server I see a folder whose path is C:\Program Files (x86)\HP\P4000\UI\downloads, containing more than 13 GB of files.

Are they all necessary? I suspect that many of them are no longer needed.

Can I safely delete them?

Which files should I keep in case I need to reinstall the CMC or install it on another server?
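
Before deleting anything I took an inventory of the folder by size and date with something like this (PowerShell 2.0 compatible, since the server is 2008 R2):

# List everything under the downloads folder, largest files first
Get-ChildItem 'C:\Program Files (x86)\HP\P4000\UI\downloads' -Recurse |
    Where-Object { -not $_.PSIsContainer } |
    Sort-Object Length -Descending |
    Select-Object LastWriteTime, @{n='SizeMB';e={[math]::Round($_.Length/1MB,1)}}, FullName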

Regards

M.N.

 

Remote Snapshotting

Is it possible to take a remote snapshot from a management group with SANs running 12.6 to a management group with SANs running 9.5? Or will I run into compatibility issues with the different LeftHand OS versions?

 

Log Partition status is Read Only

I have an HP P4500 running LeftHand OS 9.5, and I have an error stating that "The 'Log Partition' status is 'Read Only'".

How do I put the partition back into a read/write status to clear the error?

 

Converged VSA on Hyper-V - Performance issues

Hi

I'm hoping someone can help me. I've configured a two-node Hyper-V failover cluster using HP VSA "converged" shared storage.

All HP kit end to end: two DL380 Gen9 servers, 4 x 10 GbE ports per server, P440 4 GB RAID controllers, two HP Aruba 3800 10 GbE switches (stacked) and Server 2012 R2 as the OSE.

When I use IOmeter with the Open Performance profile to test the onboard server storage, which is RAID 1 for Tier 0 and RAID 5 for Tier 1, I get the expected results. But the minute I connect to an iSCSI volume through the VSA, which sits on those very same logical disks, I get horrible performance, sometimes ten times worse than the native storage results.

Why is this? I've read all the documentation and engaged with HP support, but I still haven't received a satisfactory answer as to why the storage tests perform fine against native storage while the performance drops so dramatically when run through the VSA.

One HP engineer did mention that customer-reported issues exist and that 12.7, which isn't released until next year, will apparently resolve these performance issues, which only seem to happen on Hyper-V and not VMware, due to the integration services used by the VSA Linux virtual machines. Has anyone else come across this?

 

I just need this to do what it says on the tin. I've tweaked every 10 GbE NIC setting I can find and tuned the switch, to no avail. Please, someone save my sanity :)
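
For what it's worth, these are the sort of host-side checks I have been running on each node (PowerShell; the adapter name filter is a placeholder for my 10 GbE ports):

# Jumbo frame, RSS and VMQ state on the physical 10 GbE NICs
Get-NetAdapterAdvancedProperty -Name "10GbE*" -RegistryKeyword "*JumboPacket"
Get-NetAdapterRss -Name "10GbE*"
Get-NetAdapterVmq -Name "10GbE*"

# MPIO view of the disks presented by the VSA, to confirm the expected path count
mpclaim -s -d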

https://www.hpe.com/h20195/v2/GetPDF.aspx/4AA4-8440ENW.pdf

Bluesky

 

HP StoreVirtual 12.6 does not work with vSphere guest OS UNMAP

Hi all,

I'm using HP StoreVirtual VSA 12.6.00.0155.0. I've set the advanced ESXi option /VMFS3/EnableBlockDelete to 1 to enable guest UNMAP passthrough, then tried to reclaim the disk space from the Windows 2012 guest. It did not work.

The regular VMFS unmap with "esxcli storage vmfs unmap -l unmap_test" works as expected and reclaimed the disk space on the LUN after I deleted the test VMDK disk.

Does StoreVirtual VSA work with guest OS UNMAP?

Details on guest OS UNMAP: http://www.codyhosterman.com/2015/04/direct-guest-os-unmap-in-vsphere-6-0/
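
For completeness, these are the checks I have been running (the device ID and drive letter are placeholders for the StoreVirtual LUN and the guest volume):

# On the ESXi host: confirm the device reports Delete (UNMAP) support under VAAI
esxcli storage core device vaai status get -d <naa_id_of_the_storevirtual_lun>

# Inside the Windows 2012 guest: trigger a reclaim pass after deleting data
Optimize-Volume -DriveLetter E -ReTrim -Verbose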

Thanks!

 

 

Read-Only mode access via iSCSI from Veeam proxies

I have an HPE LeftHand (p4xxx) infrastructure used to host the datastores managed by a VMware vSphere infrastructure.

LUNs are accessed via iSCSI from the ESXi hosts configured as a cluster.

I use Veeam Backup and replication to manage backup of VMs.

To speed up the backup I configured the physical Veeam server and the virtual Veeam proxies (all Windows Server 2008 R2) to access the same LUNs via iSCSI in Read only mode.

I have some doubts about the right configuration.

Is it correct to set the VMware cluster to access the LUNs in read/write mode and the Veeam proxies in read-only mode?

What additional settings should I implement to avoid possible damage to the datastores hosted on the LUNs?

Should I configure the Veeam proxies as single nodes or as a cluster?

Is there any further setting I should consider and/or apply?
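
For what it's worth, the only proxy-side settings I have applied so far are the usual Windows disk safeguards for direct SAN access, along these lines (diskpart on each Veeam proxy, run before presenting the LUNs):

rem Keep newly discovered shared LUNs offline and stop Windows from auto-mounting
rem them, so a proxy can never initialize or write a signature to a VMFS datastore
diskpart
san policy=OfflineShared
automount disable
exit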

Regards

M.N.

 

StoreFront for vROps issue with StoreVirtual

(As the StoreFront forum is quite dead, I'll post again here.)

Hey guys,

 

maybe someone had the same issue.

I tried to integrate the StoreFront Analytics Pack (3.1.620) into my vRealize Operations appliance (6.3.0.4443153) and added my StoreVirtual (12.6). It looked good at the beginning, but after the collector had run for a few minutes I started receiving "Objects are not receiving data from adapter instance" errors for my Failover Manager. Many objects under HP Storage have no data at all, performance data for example.

I also receive a lot of alarms for individual volumes, like "write latency is at critical level", but the alarms are cleared automatically after 1-2 minutes and I do not even see the volume in the environment.

Checking the StorageAdapter logs shows the same two lines every 5 minutes:

[3995] 2016-11-02 14:36:48,373 ERROR [pool-6619-thread-1] (166) com.integrien.alive.common.adapter3.AdapterBase.getResourceCollectResult - Failed to retrieve collect result for resource of adapter 'HPStorageAdapter': Resource 'MANAGEMENTGROUP1', resId=null is not in monitoring state.
[3996] 2016-11-02 14:36:48,373 ERROR [pool-6619-thread-1] (166) com.integrien.alive.common.adapter3.AdapterBase.addEvents - Could not add external event to collect result for adapter 'HPStorageAdapter', resource collect result is null.

 Credentials seem to be OK and have full access to the StoreVirtual.

Does anyone have an idea how to fix this? Having this monitoring integration in vROps is too promising to give up :-)

Or is there another way to get some analytics out of the StoreVirtual with vROps?

Many thanks in advance.

M.

 

HP DL320s Disk Uninitialized

I have an HP DL320s running LeftHand 9.5. I had an HDD fail; I replaced the HDD and the RAID array rebuilt itself, but the disk is still showing as uninitialized under the Disk Setup tab. When I right-click on the disk, all I get is View Disk Status or Help.

I am using HP CMC 12.0 to manage the SAN. 

 

StoreVirtual VSA Licensing

Hello - I have a licensing question that I hope someone can help me with.  We're looking at getting StoreVirtual VSA for three nodes with about 8 TB total disk space each.  Does that mean we would need three 10 TB VSA licenses?

 

Wrong fiber cable used in a cluster

Hi,

I have an HP clustered storage infrastructure. My engineer used the wrong cable to attach a node to the Nexus switch and then added it to the volume. Once the restriping started I got an error message E0060100 EID: Latency_status_Excessive and another error E00060205: EID_S_Server_Status_Overloaded. After 3 days the restriping is at 5% on the volume. Is it possible to change the cables one by one for the right cable? I'm afraid of losing my storage. ALB is configured on the NICs.

 

 

 

Adding a third node to a cluster causes one of five datastores to fail in the VMware environment

Hi,

A short backstory:

We had a cluster of two nodes and wanted to add a third node to be able to increase the size of the existing volumes. So we added a new LeftHand 12.6 node to the existing 11.5 cluster. My understanding here is that you can add a node with a higher LeftHand OS version to a cluster with a lower version, and HP support confirmed that.

So we added the third node to the cluster and the system started its restripe. The second we added the node, though, there was an error that the iSCSI IP of that node could not be reached by any of the VIPs (we have only one). The restripe continued nevertheless, and pings from that node to the VIP were successful. Pings with the MTU set to 9000 to our three ESX hosts were successful too.

Now after approx. 30 minutes one of the five datastores lost its connection. As soon as we power off the newly added node the datastore is online again.

I already have a support case open, but HP support cannot find any problem or a solution.

We have checked the settings on the switches, the ESX hosts, the node and the cluster several times, but we cannot seem to find anything wrong.

 

The configuration looks something like this:

The nodes have a 10 GbE bond and a 1 GbE bond: 10 GbE is iSCSI and 1 GbE is management. We have two core switches (also HP) in HA, with one of the 10 GbE cables going into one core and the other going into the other core. The same applies to the management bond and to the 10 GbE iSCSI connections of the ESX hosts. The iSCSI network and the management network are each in a separate VLAN. Jumbo frames are active in the iSCSI VLAN and on all iSCSI NICs of the ESX hosts and the vSwitch.

The ESX 5.5 hosts have a separate vSwitch for the two iSCSI NICs, each of which has an IP address in the iSCSI network. A standby adapter is only configured on the vSwitch.
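
For reference, this is how we have been testing the jumbo frame path from the ESX hosts to the new node (ESXi shell; the vmkernel interface and the node's iSCSI IP are placeholders for our setup):

# 8972-byte payload with don't-fragment, sourced from an iSCSI vmkernel port,
# to prove the full 9000-byte MTU path to the new node's iSCSI bond
vmkping -d -s 8972 -I vmk1 <iSCSI_IP_of_new_node>

# List the active iSCSI sessions so we can see whether paths to the new node appear
esxcli iscsi session list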

Support Case is 5314437093

Has anyone maybe had a similar problem and can give any advice?

Thanks in advance !

 

Kind regards,

Eric

 

 

 

Windows Server 2016 support

Hello, I would like to know when I can expect an update providing official Windows Server 2016 support, especially for the DSM for MPIO driver.

 

HP StorageWorks P4300 G2

Good day,

I have an HP StorageWorks P4300 G2 SAN configured with RAID 6.

Two 450 GB SAS hard disks are showing predictive failure. I tried to order new ones, but the problem is that they need about three weeks to be delivered, as they are not available in my region.

I found a Seagate 600 GB SAS hard disk that is available now.

Is it compatible with my SAN storage for a hot swap and rebuild, or will it affect my SAN storage?

 

Thank you 
