HPE Storage Solutions Forum - HPE StoreVirtual Storage/LeftHand

Pair of StoreVirtual 3200 or 4335?


Having trouble making our minds up what to go for: it is a choice between a StoreVirtual 4335 (and possibly a pair of 4335/4730 down the line, if budget allowed) or a pair of 3200s.

What we must achieve is data mirrored across two locations in an active/active scenario, so if one StoreVirtual goes down the other continues serving the data. Automatic tiering of data is also important.

The system is to run on VMware 6.0. What can one do that the other can't? Can all disks be accessed by each controller in the 3200, or is it half and half?
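For reference on the two-site behaviour: in both cases the active/active mirroring is Network RAID-10, which keeps one full copy of each volume on each system, so usable space is roughly raw space divided by two. A trivial sketch of that arithmetic (the capacities are hypothetical, not quotes for either model):

```python
def network_raid10_usable(node_raw_tb: float, nodes: int = 2, replicas: int = 2) -> float:
    """Rough usable TB for a volume with `replicas` copies across `nodes` systems.

    Simplified model: total raw capacity divided by the replica count;
    real figures also depend on local disk RAID and snapshot reserve.
    """
    return node_raw_tb * nodes / replicas

# Hypothetical example: two systems with 7.5 TB raw each, 2-way mirror
print(network_raid10_usable(7.5))  # -> 7.5
```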




Basic design question for 2 storage system HA configuration.


I'm planning to build an HA cluster on Windows Hyper-V with storage clustering: 2 server nodes + a dedicated DC, and 2 SAN nodes with data sync and automatic failover, which is why I'm considering the HPE SV3200 now.

I've never had any experience with HP storage, so for now I can only use manuals and Google to get a basic view.

Could I ask you to clarify some details?

1. As far as I understand, I need to use a quorum witness inside my Management Group. Is the "Cluster" entity created along with the MG, or vice versa?

2. As far as I understand, I can have only one quorum witness in the MG, and it cannot be placed on a LUN of my SV3200, so I cannot cluster the VM that would serve as the NFS share. Did I get it right that if my share fails along with the server cluster node where it is located, there would be no reaction from my Network RAID cluster until its nodes lose each other? And if my share disappears, can I just create a new one and configure the MG to use it with no downtime?

3. I am considering 8Gb FC storage with no FC switches, so basically I just need shared storage for my Hyper-V cluster. The majority of my LC cables from both server nodes will be connected to the SV3200_1 node controllers. Could I put the second node, SV3200_2, in standby mode awaiting failover, or will I/O activity occur simultaneously on both nodes? Also, will I see all my physical LUNs in the management console, or virtual LUNs with 2 physical LUNs behind each?
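On the quorum point, a two-node group needs the tie-breaker because the voters decide by strict majority; with only two managers, losing either one would otherwise stall the survivor. A minimal sketch of that rule (illustrative only, not the actual SV3200 implementation):

```python
def has_quorum(managers_up: int, total_voters: int, witness_up: bool = False) -> bool:
    """Strict-majority rule: the group stays online only while more than
    half of all configured voters (managers plus witness) are reachable."""
    votes = managers_up + (1 if witness_up else 0)
    return votes > total_voters / 2

# Two managers plus one quorum witness = 3 voters:
print(has_quorum(1, 3, witness_up=True))   # one node + witness survive -> True
print(has_quorum(1, 3, witness_up=False))  # lone node, witness gone -> False
```

This is why losing the witness share alone does nothing: the two managers still hold a majority, and you can point the MG at a new share while everything stays online.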



Adding disks to StoreVirtual VSA

I have an ESXi 6.0 HA cluster with 2 DL380 Gen9 (P440ar), each with 2x 1.6TB SSD in RAID-1 + 6x 1.8TB SAS HDD in RAID-6.
Storage is configured as a single 7.8TB N-RAID-10 volume with Adaptive Optimization (Lefthand OS 12.6).

I want to add 5x 1.8TB SAS drives to each host, so (besides drives) I have ordered a drive cage, a SAS expander card and a VSA license upgrade for each host.

I have made the following plan for the task, but I would really like to know if this is a viable plan (especially the marked parts), and if I have missed something:

Backup, backup, backup!
Disable High Availability

For each host:
. vMotion all VMs (except the VSA) off the host
. power off VSA (with CMC or ESXi?)
. set host maintenance mode
. power off host
. mount drive cage, SAS expander card and cables
. power on host
. hot-add disks
. confirm added disks with: /opt/hp/hpssacli/bin/hpssacli ctrl slot=0 pd all show
---> expand array (and logical drive) with: /opt/hp/hpssacli/bin/hpssacli ctrl slot=0 ld 2 add drives=au
---> refresh datastore/array controller and add "new" storage
---> add new disk SCSI(1:2) using "new" storage to the VSA
. increase memory to 12GB for the VSA
. power on VSA
. rescan storage
. wait for full synchronization with the other node in CMC

Rinse and repeat for the other host.

Next in CMC:
---> Cluster->Systems->VSA01: Add disk to RAID (Tier 1?)
---> Cluster->Systems->VSA02: Add disk to RAID (Tier 1?)
. check storage is available
. Cluster->Volumes->[Name]: Edit Volume: Adjust "Reported Size" (or should I create a new volume?)

Finally rescan storage on both hosts.
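As a capacity sanity check on the plan above (drive counts and sizes taken from the post; the helper is just illustrative): RAID-6 gives (n − 2) usable drives per host, and the Network RAID-10 volume then mirrors across the two hosts, so its usable size roughly tracks one host's capacity plus the SSD tier.

```python
def raid6_usable_tb(drives: int, drive_tb: float) -> float:
    """RAID-6 reserves two drives' worth of capacity for parity."""
    return (drives - 2) * drive_tb

per_host_before = raid6_usable_tb(6, 1.8)    # ~7.2 TB HDD per host today
per_host_after = raid6_usable_tb(11, 1.8)    # ~16.2 TB HDD per host with 5 more drives

print(per_host_before, per_host_after)
```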

Any comments are welcome! :)



Minimum HPE Product Requirements to Provide Storage Space on Servers

Our small company is interested in buying HPE hardware to provide our customers storage space for their enterprise data in our facility. As a start we would like to offer around 300-800 TB of storage space. Each customer needs a different amount of space, from 1 TB to 100 TB, and will access it over an internet connection. We therefore thought of buying a model from the HPE StoreVirtual 4000 series to sell software-defined storage.
Could you give me an idea of the minimum and optimum hardware and software required to do so? Are there licences needed? Is there a good paper about the hardware and software required to sell storage space? Thank you.
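As a rough sizing sketch (the per-node figures are purely hypothetical; real StoreVirtual sizing also depends on the disk RAID level, snapshot reserve, and per-TB licensing), the node count for a target usable capacity under 2-way Network RAID can be estimated like this:

```python
import math

def nodes_needed(target_usable_tb: float, node_raw_tb: float,
                 replicas: int = 2, disk_raid_efficiency: float = 0.8) -> int:
    """Estimate how many storage nodes reach a target usable capacity.

    replicas: Network RAID copies (2 = Network RAID-10).
    disk_raid_efficiency: fraction of raw space left after local disk RAID.
    """
    usable_per_node = node_raw_tb * disk_raid_efficiency / replicas
    return math.ceil(target_usable_tb / usable_per_node)

# Hypothetical 57.6 TB-raw nodes, aiming for 300 TB usable:
print(nodes_needed(300, 57.6))
```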


Reset HP LeftHand P4300

We've got two old HP LeftHand P4300 SANs which we would like to use for training purposes. Because we no longer know the passwords, we would like to reset the setup using the HP P4000 Storage System Quick Restore ISO.

I've downloaded versions 10.5, 11.5 and 12.6.

When using versions 11.5 and 12.6, the following error is displayed after inserting the USB stick with the valid licence key:

umount: /mnt/cdrom: not mounted
eject: tried to use '/mnt/cdrom' as device name but it is no block device
eject: unable to find or open device for: 'cdrom'
Auto Imaging done

When using version 10.5, the following error is displayed after inserting the USB stick with the valid licence key:

This is not a virtual platform
hwid = 24
DEBUG: Invalid hardware unit - unit type number is [24]
Failed to find new OS partition, cannot create blatz file
Auto Imaging Done

In both cases the passwords aren't reset after a reboot. 

Can somebody point me in the right direction? I'm lost.


Does the StoreVirtual 4730 have VSA integrated?

StoreVirtual noob here. After reading the StoreVirtual whitepaper, my understanding is:
1) StoreVirtual is the VSA appliance.
2) The StoreVirtual 4730 is basically a sort of blade server (Gen8 or Gen9) that has additional NIC ports, 25 disks, and the StoreVirtual VSA integrated into it.
3) One can also just buy the StoreVirtual VSA and deploy it on existing servers or hardware.
My questions:
1) How do we make storage visible to the servers? I come from a traditional SAN background where we have storage port WWNs that are zoned with server HBA WWNs; a storage group with the LUNs, the server WWNs and the storage array WWNs is created, and then the server sees the disks after a rescan.
How is it done with iSCSI and StoreVirtual?
Please let me know if there are guides that could make this clear. I've got to ramp up on StoreVirtual ASAP and will be very thankful if someone could provide pointers and/or guidance.
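For context on the discovery model: with StoreVirtual there is no zoning. Each host's iSCSI initiator logs in to the cluster's virtual IP on TCP 3260, and access is granted per server entry (initiator IQN and optional CHAP) defined in the CMC, which plays the role of the storage group. A minimal reachability check along those lines (the VIP address is hypothetical):

```python
import socket

def iscsi_portal_reachable(host: str, port: int = 3260, timeout: float = 3.0) -> bool:
    """Check that an iSCSI portal answers on its TCP port.

    This only tests TCP reachability of the cluster virtual IP;
    authorization still happens via the server entries in the CMC.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical cluster virtual IP:
# iscsi_portal_reachable("10.0.0.50")
```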


StoreVirtual 12.6 - Warning Event E010A0301:EID_NIC_ERRORS_EXCESSIVE

Hi folks,


I updated my StoreVirtual 4730 nodes to 12.6.

Since then I have had NIC warnings on the primary node telling me:

Network Interface 'NICSlot2:Port1' error status = 'Excessive'

Network Interface 'NICSlot2:Port2' error status = 'Excessive'

The Network Interface 'NICSlot2:Port1' error status is 'Excessive' at  2.00%.

The Network Interface 'NICSlot2:Port2' error status is 'Excessive' at  0.93%.


What's wrong, or can I disable these messages, since an error rate of two percent or less is acceptable to me?







Adding Storage to Cluster

We currently have 4x StoreVirtual 4530 SANs in a cluster, and some volumes use 4-way mirroring. All 4 nodes have MDL SAS 7.2K drives. We want to add 2x 4530s with 10K SAS drives for data tiering and Adaptive Optimization. I know there will be restriping across the cluster for this, but how will the 4-way mirrors be affected, if at all, now that there will be 6 nodes in the cluster?
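On the capacity side, a rough rule of thumb (node sizes below are hypothetical): usable space is total raw divided by the replica count, and a 4-way mirrored volume keeps four copies no matter how many nodes are in the cluster, so the two new nodes add capacity without changing the mirror count.

```python
def usable_tb(node_raw_tb: float, nodes: int, replicas: int) -> float:
    """Rough usable capacity: total raw divided by the Network RAID replica count."""
    return node_raw_tb * nodes / replicas

# Hypothetical 10 TB-raw nodes:
print(usable_tb(10, 4, 4))  # 4 nodes, 4-way mirror -> 10.0
print(usable_tb(10, 6, 4))  # 6 nodes, 4-way mirror -> 15.0
```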


StoreVirtual 12.5 - The 'Log Partition' status is 'Read Only'.

I have a StoreVirtual VSA 2014 running on vSphere hosts. We have two storage nodes and a separate FOM. Recently, we took one VSA offline to expand the underlying RAID storage. This was done, and the expanded space was added to the drive space allocated to the VSA virtual machine. No further changes within StoreVirtual were applied, as we wanted to resync first. Next, the storage system VM was turned on, and the VSA began to resync. A few hours in, a critical alarm popped up that says:

Severity: Critical
Date and Time: 1/5/17 9:40:40 PM CST
Component: Storage System
User: System
Object Type: Mount Point
Object Name: Log Partition
Management Group: MYGROUP
Cluster: MYCluster
IP/Hostname: VSA02
Message: The 'Log Partition' status is 'Read Only'.

I stopped the manager and attempted "Repair Storage System" from the storage system tasks, but it claimed the storage system was operating normally. Then I noticed that the resync appears to still be in progress: VSA02 says Resyncing (57%, 7 hours remaining), and VSA01, which has been operational for the last few days, says Resyncing (72%, 19 hours).

Is VSA02 broken?  Is this a normal error?  Do I leave it alone?

Any help would be appreciated.  Thanks.


P4300 LeftHand SAN/iQ 9.5 needs an upgrade


I am trying to upgrade my old LeftHand StorageWorks P4300 SAN to OS 12.5. I am currently at 9.5, but when I log into the CMC and run Upgrade it will not allow me. Since upgrading straight from 9.5 to 12.5 (the latest my SAN can go to) is not supported, I tried using Advanced mode to upgrade to 10.5, but it states that 9.5 to 10.5 is also not supported and the option is greyed out. I do not see any option to upgrade to 10.0, so how can I upgrade my SAN? I am looking to repurpose this for testing but would like it to be fully updated. Any help would be greatly appreciated.




HP StoreVirtual CMC - check for upgrades not working

Hi, I have a P4500G2 storage (software version 10.5) and CMC version 12.0.00 (build 0725.0) installed on Windows Server 2008R2.

Now when I want to check for upgrades in CMC, it doesn't work. I get the message "An error occurred trying to connect to the FTP server".

When I click the "View notifications" button, I get the error message "Notifications cannot be found at this time. Try again later."

There is no firewall blocking FTP communication on port 21. My perimeter firewall is MS TMG 2010, and when I turn on logging and click the "Test FTP connection . . ." button in CMC, I can see the communication in the TMG logs.

But no communication occurs on TMG when I click the "Check for Upgrades" button in CMC.

My last successful check for upgrades was on 27.09.2016. 

Can someone help me with this problem, please? 

Thank you


How to remove an offline, failed storage system from CMC


A vSphere VSA was reinstalled, and I neglected to remove the old storage system from the CMC.

The CMC shows the old, failed storage system as Unknown, Offline, and with a MAC address that is no longer valid, as the VSA instance no longer exists. All available options on the failed storage system lead nowhere: it cannot be powered on or off, the hostname cannot be edited, and it cannot be added to a new cluster, because the system cannot be found on the network.

The failed storage system does not belong to the working cluster though it is a member of the management group.

I cannot make changes to the working cluster, such as changing network bonding, because the CMC reports "There are one or more other storage systems in the (management group) that are not ready. Wait until the other storage systems are ready and try again."

What are the options for removing a failed storage system from the CMC?

Thanks for your help.


Prevent failure when coming from VSA versions before 10.5 (P4000)


Today I would like to share my latest nightmare with the VSA, to give you a chance to prevent it.

We have two two-site VSA clusters, with 2 and 4 nodes respectively. They were installed in 2011 with version 9.5 and afterwards upgraded up to 12.5. Two weeks ago we wanted to extend a LUN. After opening the CMC, all volumes were inaccessible; the VSAs seemed to have crashed. After a reboot it looked OK until we did an iSCSI rescan in ESXi, and all the VSAs crashed again. The CMC showed many errors like CIM down, RAID off, and so on. With the help of HP we were able to exchange the damaged VSAs and rebuild the data. This cost us 4 days and nights of work and several years of lifetime.

So what was the cause? Afterwards I analysed the file systems of the broken VSAs. The root volume was full, with no bytes free. This seems to have happened on all the VSAs at the same time.

VSA versions prior to 10.5 were deployed with a disk size of 8GB. From 10.5 upward the default disk size is 32GB. Versions prior to 10.5 are called model P4000. Support for this model ends in 2017!

Admins: if you've got this model, please have a look at the free space on the root volume of your VSAs, and exchange them ASAP! HP support knows the right procedure.

best regards Björn


Configure Windows NFS for quorum witness


I'm configuring a VSA cluster (2 VSA nodes). I want to use a quorum witness, but I have only Windows servers in the environment.

The settings for the NFS share are "Allow R/W" for everyone, no authentication. The network is reachable but, when I check NFS connectivity, it fails.

The SAN network is separated from the LAN; the NFS server has one interface in the SAN network.

The VSA doesn't have access to DNS; is that a problem?

Can someone post a screenshot of their Windows NFS share config?


Thanks !


StoreVirtual VSA vs. StoreVirtual 4330

Hello - We are currently using SV 4330 appliances, but are considering migrating to SV VSA. To start, I'm looking at VSA 10 TB licenses and three HPE ProLiant DL360 Gen9s with these QuickSpecs:

- Dual Xeon E5-2640V4 2.4 GHz. procs

- Smart Array P440ar with 2GB FBWC

- (8) 1.2 TB - 10K RPM SAS 12Gb/s disks

- 256 GB RAM

- Gb NICs (2 would be dedicated to iSCSI)

- 32 GB flash SD for ESXi boot partition

Any ideas on how these would compare performance-wise to the 4330s? I'm mostly asking whether performance is adequate using all spinning disks (all the examples I've seen mix in SSDs), and whether nesting VM virtual disks on top of the VSA's virtual disks hinders performance. Our current 4330 nodes each have (8) 900 GB 10K RPM 6Gb/s SAS disks.
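As a back-of-the-envelope comparison (the per-disk figure is an assumption, roughly typical for 10K SAS, not a measured number), spindle count and RAID write penalty dominate for all-HDD nodes, and both configurations have eight 10K spindles:

```python
def array_iops(disks: int, per_disk_iops: int, write_fraction: float,
               write_penalty: int) -> float:
    """Rough front-end IOPS for a RAID set.

    write_penalty: back-end I/Os per front-end write (RAID-10 = 2, RAID-5 = 4).
    """
    backend = disks * per_disk_iops
    return backend / (1 - write_fraction + write_fraction * write_penalty)

# Assumed ~140 IOPS per 10K SAS disk, 30% writes, local RAID-10:
print(round(array_iops(8, 140, 0.3, 2)))
```

Network RAID then sends every write to a second node as well, so the VSA and 4330 should land in the same ballpark for spindle-bound workloads; controller cache and the virtualization layer, not spindles, are where they differ.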


Thanks...  Mike


Remote Snapshots Hang at Copying

We have StoreVirtual hardware at two sites. Lately I have noticed that remote snapshots from Site B to Site A stay at Copying 0% with a current rate of 0KB/sec and never finish. Remote snapshots from Site A to Site B work normally.

The only thing that has changed recently is the data VLAN IP scheme at Site A (not the iSCSI VLAN). But I have connectivity between the iSCSI VLANs, and the CMC can see the storage nodes at both sites, from either site.

Both Sites are running Software, and we are running CMC 12.5


Thank you in advance


CIM error in StoreVirtual CMC



Has anyone received the error below in the Centralized Management Console (CMC)? If so, what was the resolution?

'CIM Server' server = 'Down'


Dead FOM server in a multi-site environment with StoreVirtual 4330 SAN

Hi all

I have two StoreVirtual 4330s running as a multi-site cluster with a FOM server.

My FOM server died; the hardware, the software, everything is gone, and I am not able to get the FOM back. I created a new FOM server and want to attach it to my Management Group, but I get the following error: [You cannot add a Failover Manager into a management group that already has a Failover Manager.] (see also screenshot)

When I click on my FOM in the management group, I am not able to do anything; it says: [Could not find storage system with serial number xxx:xxx:xx:xx:xx:xx] (see screenshot).

Please let me know how I can remove or delete the dead FOM server/VM.

Or do I really have to delete the whole management group and create a new one, which means 2-3 days of work?


StoreVirtual VSA: misconfiguration? Bad write performance


I'm trying to set up a simple HPE StoreVirtual VSA infrastructure.
This is a lab, not production, just for training.

HV01: Bi-Xeon L5630 - 32GB Mem.
HDD0: 1x 1TB - HDD1: 1x 2TB
NIC: 2x 1GbE + 2x 10GbE (1 used)

HV02: Bi-Xeon L5630 - 32GB Mem.
HDD0: 1x 1TB - HDD1: 1x 2TB
NIC: 2x 1GbE + 2x 10GbE (1 used)

HV01 and HV02 have their 1GbE NICs connected to a switch.
HV01 and HV02 have their 10GbE NICs connected to each other with a DAC (1 used).

On the 10GbE network:

VM SRV-VSA01 (on HV01): HPE VSA DSM installed, jumbo frames configured at 9000.

VM SRV-VSA02 (on HV02): HPE VSA DSM installed, jumbo frames configured at 9000.

Both Hyper-V servers can communicate between each other and with VSA virtual machines through the 10GbE adapter.

CMC has been deployed on HV01, I created a Management Group which contains SRV-VSA01 and SRV-VSA02.
I created a VSA cluster which contains SRV-VSA01 and SRV-VSA02.
I created a Network-Raid 10 volume.
I added SRV-HV01 and SRV-HV02 as servers, created a server cluster.
I assigned the previous created volume to this server cluster.

Once done, I just had to refresh the iSCSI targets to get the freshly created volume, put it online, initialize it, create a new volume on it, format it as ReFS, and assign a drive letter.

And here we are now. I'm facing some performance issues and, I have to say, I'm not that confident about my setup/configuration.

When I run a CrystalDiskMark I have extremely low write speed.

Results when I run CrystalDiskMark (5/1GB) on the hard drive directly on the Hyper-V host:

CrystalDiskMark 5.2.0 x64 (C) 2007-2016 hiyohiyo
Crystal Dew World : http://crystalmark.info/
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

Sequential Read (Q= 32,T= 1) : 179.562 MB/s
Sequential Write (Q= 32,T= 1) : 157.042 MB/s
Random Read 4KiB (Q= 32,T= 1) : 1.580 MB/s [ 385.7 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 1.553 MB/s [ 379.2 IOPS]
Sequential Read (T= 1) : 67.319 MB/s
Sequential Write (T= 1) : 65.847 MB/s
Random Read 4KiB (Q= 1,T= 1) : 0.235 MB/s [ 57.4 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 0.552 MB/s [ 134.8 IOPS]

Test : 500 MiB [G: 0.7% (12.9/1862.9 GiB)] (x5) [Interval=5 sec]
Date : 2016/12/05 1:27:24
OS : Windows Server 2012 R2 Datacenter (Full installation) [6.3 Build 9600] (x64)

Results when I run CrystalDiskMark (5/1GB) on the volume presented by VSA (Network-Raid10):

CrystalDiskMark 5.2.0 x64 (C) 2007-2016 hiyohiyo
Crystal Dew World : http://crystalmark.info/
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

Sequential Read (Q= 32,T= 1) : 289.112 MB/s
Sequential Write (Q= 32,T= 1) : 23.408 MB/s
Random Read 4KiB (Q= 32,T= 1) : 4.776 MB/s [ 1166.0 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 0.440 MB/s [ 107.4 IOPS]
Sequential Read (T= 1) : 182.402 MB/s
Sequential Write (T= 1) : 11.324 MB/s
Random Read 4KiB (Q= 1,T= 1) : 0.946 MB/s [ 231.0 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 0.098 MB/s [ 23.9 IOPS]

Test : 500 MiB [E: 0.0% (0.1/299.9 GiB)] (x5) [Interval=5 sec]
Date : 2016/12/05 1:34:16
OS : Windows Server 2012 R2 Datacenter (Full installation) [6.3 Build 9600] (x64)

Results when I run CrystalDiskMark (5/1GB) on the volume presented by VSA (Network-Raid0):

CrystalDiskMark 5.2.0 x64 (C) 2007-2016 hiyohiyo
Crystal Dew World : http://crystalmark.info/
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

Sequential Read (Q= 32,T= 1) : 353.499 MB/s
Sequential Write (Q= 32,T= 1) : 36.200 MB/s
Random Read 4KiB (Q= 32,T= 1) : 5.495 MB/s [ 1341.6 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 1.080 MB/s [ 263.7 IOPS]
Sequential Read (T= 1) : 199.443 MB/s
Sequential Write (T= 1) : 13.633 MB/s
Random Read 4KiB (Q= 1,T= 1) : 1.001 MB/s [ 244.4 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 0.116 MB/s [ 28.3 IOPS]

Test : 500 MiB [F: 0.2% (0.1/49.9 GiB)] (x5) [Interval=5 sec]
Date : 2016/12/05 1:40:31
OS : Windows Server 2012 R2 Datacenter (Full installation) [6.3 Build 9600] (x64)

Results on SRV-HV02 are quite similar.
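One consistency check on the numbers above: CrystalDiskMark's 4KiB MB/s and IOPS columns are the same measurement in different units (MB = 1,000,000 bytes, block = 4096 bytes), so you can convert between them and compare the Network RAID volume against the local disk directly:

```python
def mbps_to_iops(mb_per_s: float, block_bytes: int = 4096) -> float:
    """Convert CrystalDiskMark MB/s (MB = 1,000,000 bytes) to IOPS for a given block size."""
    return mb_per_s * 1_000_000 / block_bytes

# Q32 4KiB random-write figures from the runs above:
print(round(mbps_to_iops(1.553), 1))  # local disk -> 379.2 IOPS, matching the report
print(round(mbps_to_iops(0.440), 1))  # NR-10 volume -> 107.4 IOPS, matching the report
```

So random writes drop by roughly 3.5x on the NR-10 volume, on top of the ~7x sequential-write drop, which is what makes a replication or caching misconfiguration (rather than raw disk speed) the thing to look at.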



How to set up a mirrored virtual disk in Hyper-V?

Hi there,
I've not used this product before, but I have a simple requirement.

I have a standalone Hyper-V 2016 Core server. This server has two Windows Server 2012 R2 VMs. I want to replicate a virtual disk from one VM to the other. Is this possible with HPE StoreVirtual without having to install anything directly on the Hyper-V server?

Which components do I need to install?

Many thanks

Contact Us

Vivit Worldwide
P.O. Box 18510
Boulder, CO 80308

Email: info@vivit-worldwide.org


Vivit's mission is to serve the Hewlett Packard Enterprise User Community through Advocacy, Community, and Education.