HPE Storage Solutions Forum - HPE StoreVirtual Storage/LeftHand



Network RAID StoreVirtual 3200 - Disable

Hello,

Is it possible to disable Network RAID on a StoreVirtual 3200? I will use a multi-site stretch cluster (synchronous replication) with 2 nodes, and I don't want to use Network RAID, in order to save storage.

Thank you in advance.

Regards.

 

 

 

4 Node VSA on Synergy for VMW

Hi, I'm planning to build this using an HPE Synergy 12000 Frame:

2 X HPE Synergy D3940 CTO Storage Modules with site A & B config (Plus HPE Synergy D3940 IO Adapter, HPE Synergy 12Gb SAS Connection Module, HPE VC SE 40Gb F8 Module etc...)

4 X HPE SY 480 Gen9 w/o Drv Bys CTO Compute Modules with 0.5 TB RAM each 

8 X HP 1.92TB 6G SATA MU-3 SFF SC SSD

32 X HP 1.8TB 12G SAS 10K 2.5in SC 512e HDD

With 4 X HPE SV VSA 2014 50TB 

Which gives 

2x 1.92TB R1 = 3.84TB

8x 1.8TB R5(3+1) = 14.4TB 

18.24TB/node for a cluster total of approx 36TB 
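
For a rough cross-check of usable (rather than raw) capacity, here is a sketch only; it assumes 2 SSDs and 8 HDDs per node, the RAID levels listed above, and Network RAID 10 across the 4-node cluster:

# Hypothetical per-node layout: 2 x 1.92 TB SSD in RAID 1, 8 x 1.8 TB HDD in two RAID 5 (3+1) groups
$ssdUsablePerNode = 1 * 1.92          # RAID 1 keeps one copy usable
$hddUsablePerNode = 2 * 3 * 1.8       # two 3+1 RAID 5 groups -> 6 data spindles
$perNode = $ssdUsablePerNode + $hddUsablePerNode     # ~12.7 TB usable per node after local RAID
$cluster = $perNode * 4 / 2                          # Network RAID 10 halves it -> ~25.4 TB
"{0:N1} TB usable per node, ~{1:N1} TB usable in the cluster" -f $perNode, $cluster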

Please advise your thoughts on this. I would appreciate any help.

Cheers

 

How to configure a single node HPE StoreVirtual 4730? Client will buy a second node later.

Hi,

My client has purchased a single-node HPE StoreVirtual 4730. I have highlighted all the risks associated with it, such as no Network RAID and only one hot spare that kicks in when a disk failure occurs.

They are going to add a second node later. I would be grateful if someone could point me to a guide that shows how to configure a single-node HPE StoreVirtual 4730.

Do we still go through the normal steps, except that instead of two nodes in the cluster we have just one, with everything else remaining the same?

Do we still have to do NIC bonding between the two 10 Gb ports? In that case the node would be connected to only one network switch, so should one 10 Gb port be connected to one network switch and the other 10 Gb port to a second switch?

Please let me know. I have been through the thread where a single-node setup was considered; I'm just looking for more details.

 

TCP Status VSA 2014 (HV)

Hi everyone.

I have a cluster with 4 VSA 2014 nodes (Hyper-V).

The VSAs run on Hyper-V 2012 Core with 2 x 10G NICs in a team.

When I check the TCP options for speed and flow control in the CMC, they appear as unavailable.

I know we can't change these parameters in the VSA, so I configured the physical NICs with auto-negotiation and flow control enabled.

All switches have the same configuration.

Here is the configuration of the team:

Name : 10Gbit
Members : {Embedded FlexibleLOM 1 Port 2, Embedded FlexibleLOM 1 Port 1}
TeamNics : 10Gbit
TeamingMode : SwitchIndependent
LoadBalancingAlgorithm : Dynamic
Status : Up

The NIC Configuration:

PS C:\> Get-NetAdapterAdvancedProperty -Name * -DisplayName "Flow Control"

Name DisplayName DisplayValue RegistryKeyword RegistryValue
---- ----------- ------------ --------------- -------------
Embedded FlexibleLOM ...2 Flow Control Rx & Tx Enabled *FlowControl {3}
Embedded FlexibleLOM ...1 Flow Control Rx & Tx Enabled *FlowControl {3}


PS C:\> Get-NetAdapterAdvancedProperty -Name * -DisplayName "Speed & Duplex"

Name DisplayName DisplayValue RegistryKeyword RegistryValue
---- ----------- ------------ --------------- -------------
Embedded FlexibleLOM ...2 Speed & Duplex Auto Negotiation *SpeedDuplex {0}
Embedded FlexibleLOM ...1 Speed & Duplex Auto Negotiation *SpeedDuplex {0}
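
For reference, these values can be (re)applied on the Hyper-V host with Set-NetAdapterAdvancedProperty. This is a sketch only; the adapter names are taken from the team output above, and the exact DisplayValue strings depend on the NIC driver:

# Re-apply flow control and auto-negotiation on both team members (run in an elevated PowerShell session)
Set-NetAdapterAdvancedProperty -Name "Embedded FlexibleLOM 1 Port 1" -DisplayName "Flow Control" -DisplayValue "Rx & Tx Enabled"
Set-NetAdapterAdvancedProperty -Name "Embedded FlexibleLOM 1 Port 2" -DisplayName "Flow Control" -DisplayValue "Rx & Tx Enabled"
Set-NetAdapterAdvancedProperty -Name "Embedded FlexibleLOM 1 Port 1" -DisplayName "Speed & Duplex" -DisplayValue "Auto Negotiation"
Set-NetAdapterAdvancedProperty -Name "Embedded FlexibleLOM 1 Port 2" -DisplayName "Speed & Duplex" -DisplayValue "Auto Negotiation"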

But the CMC still shows these parameters as unavailable.

What am I doing wrong?

Thanks in advance.

 

 

HPVSA driver downgrade on VMware

A customer experienced major problems in VMware configured with VSA. We then downgraded the HPVSA driver to version x.x.x-88, which solved the problem. However, this document is old and provides only a workaround, not a solution. Is there a newer version of this driver available in which this bug is fixed?

http://h20564.www2.hpe.com/hpsc/doc/public/display?docId=c05026755

How did we receive the faulty driver in the first place? Was it part of some VMware upgrade bundle that was installed?

-Sto

 

P4300 - keeps re-syncing, massive performance drop

Hi all,

We have 2 x P4300 units in two buildings; the devices are mirrored.

Last week I had to cold-restart one of the devices to clear a warning from June last year. I powered it back on, the warning was gone, and the devices started resyncing - fine.

Once the sync was done, after an hour or more I received these e-mail notifications:

1. The storage system 'DR-1' status in cluster 'CLUSTER' is 'Down'.

2. The manager status on 'DR-1' is 'Down'.

3. Due to a system change the volume 'xxxxxx' status in cluster 'CLUSTER' is 'Unprotected'. Data protection level is insufficient to ensure high availability.

I keep getting message no. 3 for all the volumes.

I left the SAN running all weekend, but that didn't fix my issue.
I had to power it off because all the virtual servers were super slow.

Could you advise anything?
Thanks

 

 

How to get the Read Hard Errors (since reset) via SNMP

We were using the OID .1.3.6.1.4.1.232.3.2.5.1.1.16 to get the Read Hard Errors from our P4530 storage nodes, but it seems this OID only returns the "since factory" value and not the "since reset" value from the ADU report.

Smart Array P420i in Embedded Slot : Internal Drive Cage at Port 1I : Box 1 : Drive Cage on Port 1I : Physical Drive (2 TB SAS) 1I:1:9 : Monitor and Performance Statistics (Since Factory)


   Read Errors Hard                     0x00000048
 

Smart Array P420i in Embedded Slot : Internal Drive Cage at Port 1I : Box 1 : Drive Cage on Port 1I : Physical Drive (2 TB SAS) 1I:1:9 : Monitor and Performance Statistics (Since Reset)

 
   Read Errors Hard                     0x00000000

Is there any way to get the drive's Read Hard Errors "since reset" via SNMP, so we can have an accurate picture of its current status?
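
For reference, the OID above can be polled like this (a sketch using net-snmp from the command line; the node address and community string are placeholders):

# Walk the Read Hard Errors column (returns the since-factory counters per physical drive index)
snmpwalk -v 2c -c public 10.0.0.10 .1.3.6.1.4.1.232.3.2.5.1.1.16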

 

Thanks

 

 

How do I find the serial number for my HP VSA?

Support is asking for the serial number of my virtual SAN. I have looked on the hosts and there is no sign of a serial number, which I'm told will start with CZ3 or SGH.

Can anyone help with this, please?

 

Thanks

 

Unable to upgrade HP P4000 VSA (ESXi) from 12.5 to 12.6

I'm trying to upgrade HP P4000 VSA (ESX) and for some reason it is stuck on 12.5.00.0563.0 and is not getting the latest 12.6 upgrade. The CMC says that the software is all up to date. I have downloaded the upgrade manifest as well as all the upgrade files, but no luck. I restarted the VSAs one by one and synced with the CMC each time. Reinstalling the CMC didn't help either. Any suggestions?

 

 

Using Performance Monitor to check performance

Imagine a VMware vSphere infrastructure based on 3 ESXi hosts connected via 10 Gbit iSCSI to 2 HPE StoreVirtual P4xxx clusters.

Using Performance Monitor, I see an average throughput value ranging between 4,000,000 B/s and 8,000,000 B/s between each ESXi host and each StoreVirtual cluster.
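
For scale, the raw figure converts as follows (a unit conversion only; it does not by itself say whether the value is acceptable for the workload):

# Express 8,000,000 B/s in more familiar units
$bytesPerSec = 8000000
$bytesPerSec / 1MB          # ~7.6 MiB/s
$bytesPerSec * 8 / 1e6      # 64 Mbit/s, against a 10,000 Mbit/s (10 GbE) link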

How can I tell whether this value is acceptable?

Furthermore, how can I tell whether other values fall within an acceptable range, or whether there is a performance issue?

Regards

M.N.

 

EOL for the StoreVirtual VSA is 11/01/2018

The above statement was found in THIS thread:

https://community.hpe.com/t5/HPE-StoreVirtual-Storage/Support-Options/td-p/6940734

Can anyone verify this?

The person apparently received no explanation of why it is going EOL.

What realistic options exist?

 

 

StoreVirtual patching - error

Hello, I'm trying to apply patches 55022-00 and 56007-00 to my five-node SAN. However, I'm getting the following error for all the SAN nodes when the update gets to about 25% complete:

Failed to get the upgrade state information from SAN. Upgrade partition is not currently mounted.

A follow-on pop-up said to reboot the SAN, wait for the resync, then try again. I tried that with one of the SAN nodes, but the result was the same.

Any help would be appreciated.

 

 

Support Options

We've got some long-standing licenses for VSAs (the original purchase goes back to the LeftHand days); these have been kept under support and maintenance to date and are currently running version 12.5. The HPE support paperwork still shows the old P4000 SKU, and we are now being told we can't renew support past next year. Apart from purchasing new licenses (not going to happen), what options do we actually have? The implication, to me, is that the VSA is effectively a dead product.

 

 

What is a good formula to calculate HP StoreVirtual IOPS?

Two servers, each with 10 x 15k 600 GB SAS drives in RAID 5; the HP StoreVirtual then puts these into Network RAID 10.

What's the proper formula to calculate the theoretical IOPS available? (Assume 60% read / 40% write.)

Do I calculate the IOPS per RAID 5 server? And then how do I account for the Network RAID 10 part?
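
A common back-of-the-envelope method, shown as a sketch below: it assumes roughly 175 IOPS per 15k SAS spindle, the standard RAID 5 write penalty of 4, and Network RAID 10 writing to both nodes, so treat the result as an estimate only.

# Estimated host-visible IOPS for 2 nodes x 10 x 15k SAS in RAID 5, with Network RAID 10 on top
$spindleIops  = 175                                     # assumed per-drive figure for 15k SAS
$rawIops      = $spindleIops * 10 * 2                   # 3,500 raw IOPS across both nodes
$writePenalty = 4 * 2                                   # RAID 5 penalty x 2 copies (Network RAID 10)
$readPct = 0.6; $writePct = 0.4
$rawIops / ($readPct * 1 + $writePct * $writePenalty)   # ~920 estimated host-visible IOPS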

Thank you, Tom

 

How does one calculate how much flash storage is required?

We need to determine whether to augment our two-node HP VSA 12.6 cluster with flash/SSD drives in order to use Adaptive Optimization.

How do we calculate how much flash we need? I've seen the figure 10-20%, but 10-20% of what?

The two servers are both HP DL380 Gen8 2U LFF servers, which presently contain 15k 600 GB SAS drives.
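
One common reading of the 10-20% figure is 10-20% of the usable capacity of the AO-enabled volumes, i.e. enough SSD to hold the hot working set. A rough sketch under that assumption (and assuming, for illustration only, 10 x 600 GB drives per node in RAID 5 with Network RAID 10):

# Illustrative sizing - the percentages and drive counts are assumptions, not HPE guidance
$usablePerNode = 9 * 0.6            # 10 x 600 GB in RAID 5 -> ~5.4 TB usable per node
$clusterUsable = $usablePerNode     # Network RAID 10 across 2 nodes keeps ~5.4 TB usable
$clusterUsable * 0.10               # ~0.5 TB of SSD tier at 10%
$clusterUsable * 0.20               # ~1.1 TB of SSD tier at 20%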

Thank you, Tom

 

2012 R2 guests on ESXi 6.5 fail to boot when using EFI

I'm setting up a new vSphere 6.5 cluster and starting to provision my first machines. My backing storage is a StoreVirtual 3200 unit. I've set up a template for future guests, which uses EFI instead of BIOS. Upon rebooting the guests, they fail to start properly.

The issue is identical to this one I found on the KB: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2146167

I have followed the steps and set my configuration accordingly. Here is the output from esxcli storage core path list:

iqn.1998-01.com.vmware:esx-mk1-58d288d3-00023d000003,iqn.2015-11.com.hpe:storevirtual.rpl,t,3-naa.6000eb3b01323a960000000000001bf0

   UID: iqn.1998-01.com.vmware:esx-mk1-58d288d3-00023d000003,iqn.2015-11.com.hpe:storevirtual.rpl,t,3-naa.6000eb3b01323a960000000000001bf0

   Runtime Name: vmhba64:C0:T0:L10

   Device: naa.6000eb3b01323a960000000000001bf0

   Device Display Name: LEFTHAND iSCSI Disk (naa.6000eb3b01323a960000000000001bf0)

   Adapter: vmhba64

   Channel: 0

   Target: 0

   LUN: 10

   Plugin: NMP

   State: active

   Transport: iscsi

   Adapter Identifier: iqn.1998-01.com.vmware:esx-mk1-58d288d3

   Target Identifier: 00023d000003,iqn.2015-11.com.hpe:storevirtual.rpl,t,3

   Adapter Transport Details: iqn.1998-01.com.vmware:esx-mk1-58d288d3

   Target Transport Details: IQN=iqn.2015-11.com.hpe:storevirtual.rpl Alias= Session=00023d000003 PortalTag=3

   Maximum IO Size: 131072

 

iqn.1998-01.com.vmware:esx-mk1-58d288d3-00023d000004,iqn.2015-11.com.hpe:storevirtual.rpl,t,3-naa.6000eb3b01323a960000000000001bf0

   UID: iqn.1998-01.com.vmware:esx-mk1-58d288d3-00023d000004,iqn.2015-11.com.hpe:storevirtual.rpl,t,3-naa.6000eb3b01323a960000000000001bf0

   Runtime Name: vmhba64:C1:T0:L10

   Device: naa.6000eb3b01323a960000000000001bf0

   Device Display Name: LEFTHAND iSCSI Disk (naa.6000eb3b01323a960000000000001bf0)

   Adapter: vmhba64

   Channel: 1

   Target: 0

   LUN: 10

   Plugin: NMP

   State: active

   Transport: iscsi

   Adapter Identifier: iqn.1998-01.com.vmware:esx-mk1-58d288d3

   Target Identifier: 00023d000004,iqn.2015-11.com.hpe:storevirtual.rpl,t,3

   Adapter Transport Details: iqn.1998-01.com.vmware:esx-mk1-58d288d3

   Target Transport Details: IQN=iqn.2015-11.com.hpe:storevirtual.rpl Alias= Session=00023d000004 PortalTag=3

   Maximum IO Size: 131072

 

As per the article, I have taken the maximum IO size of 131072 (bytes) and set the 'Disk.DiskMaxIOSize' setting on each of my hosts to 128 (kilobytes). Here is the output of: esxcli system settings advanced list -o "/Disk/DiskMaxIOSize" | grep "Int Value" | grep -v Default

 

Int Value: 128
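
For reference, the same setting can also be checked or applied per host from VMware PowerCLI (a sketch; the host name is a placeholder):

# Hypothetical host name; requires the VMware PowerCLI module and a vCenter/host connection
$vmhost = Get-VMHost "esx-mk1.example.local"
Get-AdvancedSetting -Entity $vmhost -Name "Disk.DiskMaxIOSize"
Get-AdvancedSetting -Entity $vmhost -Name "Disk.DiskMaxIOSize" | Set-AdvancedSetting -Value 128 -Confirm:$false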

 

Here's a snippet of vmware.log for a guest that fails to start:

 

2017-02-14T15:15:16.849Z| vcpu-0| I125: Guest: About to do EFI boot: Windows Boot Manager

2017-02-14T15:15:16.886Z| vcpu-0| I125: HBACommon: First write on scsi0:0.fileName='/vmfs/volumes/58986c57-beebfb62-20dc-941882e61b50/dc-mk1/dc-mk1.vmdk'

2017-02-14T15:15:16.886Z| vcpu-0| I125: DDB: "longContentID" = "9183ad927c82a0f735b6ad050dce9686" (was "b8e64cc6686f60edd799a37495b96386")

2017-02-14T15:15:16.939Z| vcpu-0| I125: DISKLIB-CHAIN : DiskChainUpdateContentID: old=0x95b96386, new=0xdce9686 (9183ad927c82a0f735b6ad050dce9686)

2017-02-14T15:15:20.002Z| vmx| I130: Vigor_ClientRequestCb: dispatching Vigor command 'Ethernet.queryFields'

2017-02-14T15:15:38.459Z| vcpu-0| I125: Tools: Tools heartbeat timeout.

 

There is a consistent 22 seconds between "about to do EFI boot" and "tools heartbeat timeout" when the guest fails to start.

 

I'm keen to resolve this ASAP so I can commission my new hardware in earnest - any ideas?

 

Drive Predictive Failure

Hello everyone. I have an HP LeftHand node (P4500 G2) that is reporting a predictive failure on a drive that has just been replaced. The node started reporting the failure, so we replaced the drive with a new one. The rebuild took place and everything seemed good, until the system started reporting the predictive failure again on the new drive. Does anyone have experience with this situation, and if so, what is the solution?

 

VSA Cluster Nodes with SSD and 7200/MDL

We're currently running a few clusters of 4330s with 10k and 7200 MDL drives (no AO; completely separate volumes presented to the virtual hosts accessing the data). We have VMs whose compute layer lives on the 10k drives and data/file VHDs that live on the 7200s, and we are not seeing any issues. We're considering starting to use VSAs on new hardware with SSDs and 7200 MDL drives, similar to the hybrid shelves but with additional capacity instead of the 10k drives, and using AO.

I know the reference design configurations push for SSDs with 10k drives, but we need the additional capacity the MDL drives bring. Is anyone currently running a configuration with this "performance gap" in AO? If so, how does it perform? Is the performance difference more of a perceived difference when certain pages of a VHD live on tier 0 vs. tier 1, or can it cause the VM's operations (not just file access) to break or hang?

 

Up-to-date documentation

Where can I locate the latest manuals for HPE P4xxx storage systems?

Regards

M.N.

 

What are the options for upgrading a 2-node StoreVirtual VSA cluster to improve its disk storage?

We have a 2-node StoreVirtual VSA cluster running StoreVirtual VSA 12.6 with an NFS quorum witness: two HP DL380s with 10 x 15k 600 GB SAS drives in RAID 5 per server, which the VSAs then combine into Network RAID 10.

We're presently collecting data to determine whether the two servers' arrays supply enough IOPS for the VMs involved, all running on vSphere 5.5 Update 2, hopefully upgrading to 6.0 this spring. We would prefer vSphere 6.5, but HPE has not caught up to 6.5 and wants everyone to wait for 12.7.

The problem we're having is that with RAID 5 in each server and Network RAID 10 on top, we are seeing many more writes, higher IOPS, and more disk latency than originally anticipated.

Anyway, if we get a third server and add it to the cluster, can this third server be all or mostly flash/SSD while the other two servers remain all 15k 600 GB SAS, and can Adaptive Optimization then properly place the data being used by the SQL and Exchange servers?

Or must the overpriced, low-capacity HPE Gen8 flash/SSD disks be added to the two existing servers, replacing the SAS drives? That is, for Adaptive Optimization to work, must I have flash/SSD drives in all the servers, or can I have them in just one new additional Gen9 server and leave the other two servers alone?

I'm asking because it would really be easier to just add the third server than to buy overpriced, low-capacity Gen8 flash/SSD drives.

What other options do I have to increase IOPS and reduce disk latency with this 2-node cluster, besides adding a third server (and all the storage reconfiguration that will ensue no matter what)?

Also, when using Adaptive Optimization, how should the disks/LUNs be configured with respect to the physical disks? Must each server contain some flash/SSD disks, or can one server contain all the flash/SSD drives?

Thank you, Tom

 
