HPE Storage Solutions Forum - HPE StoreVirtual Storage/LeftHand

Hard time configuring iSCSI on new HPE SV3200


We have a new storage array in our server room, an SV3200.

It has four Fibre Channel ports and four 1Gb Ethernet ports.

I want to configure the 1Gb ports so that some of my hosts can use them, but I am failing at it.

I configured the network and created a bond. I can ping the hosts, and the hosts can ping the 1Gb LAN port.

I cannot find where to configure the iSCSI initiators or anything related. All I see is server configuration using Fibre Channel.

I searched the internet for a guide, with no success finding any useful information.




Problem with SV3200 and Windows 2008 R2 cluster


We have a cluster with 2 x HP LeftHand P4300. Connected to this cluster are a VMware vSphere 5.5 infrastructure and a Windows Server 2008 R2 failover cluster. We want to migrate to a stretch cluster with 2 x HPE StoreVirtual 3200 with 10GbE controllers.

When we connect the Windows 2008 R2 cluster we have issues. The Windows iSCSI initiator gets stuck randomly and we lose both the P4300 volumes and the SV3200 volumes. We need to restart the servers to correct the problem, but then it suddenly happens again. If we disconnect the SV3200 from the Windows server, everything is OK. With VMware we have no problems.

Is there any incompatibility between Windows 2008 R2 and the SV3200? Or any problem connecting the Windows iSCSI initiator to both 1Gb iSCSI (P4300) and 10GbE iSCSI (SV3200)?

Thank you very much.


Swap from FC to iSCSI


We have a StoreVirtual 4730fc connected to Hyper-V via FC. Hyper-V is set up as a Microsoft failover cluster with MPIO, so four hosts connect to the StoreVirtual.

I want to switch to using iSCSI, so I planned on shutting down the VMs, disconnecting the storage, and rediscovering it via the Microsoft iSCSI initiator, then firing the VMs back up.

Stupid question, but that all sounds OK, right? No data loss or having to format drives when I reconnect via iSCSI? I'm just over-analysing it because it is live data.




NSM-160 nightmare

OK, let me begin by saying this is a long post about the issues I have had. Please provide any advice, insight, or words of wisdom that will help guide me through this.

We have a cluster of five NSM-160s with 2TB drives configured as RAID 0 (yes, I know they should have been RAID 5, but when we acquired them, before HP bought LeftHand, that was an accepted practice for maximizing performance when running VMs). We were running at about 57% usage, and early in the month unit 4's storage went offline and access to the LUNs started misbehaving.

A repair of the unit failed, and even though the RAID returned to normal after a reboot, the storage remained offline. Since we had plenty of space, I elected to remove the unit from the cluster. That left us with 200 active tasks that ran very slowly. In the meantime all LUNs became unavailable. They are configured with Network RAID 10.

After a few days we were finally down to 19 tasks, and then everything seemed to hit a roadblock. Several volumes show restriping/resyncing at 100% but are still unavailable due to a cluster edit.


So that is when the fun started. I contacted HPE only to be told the units were legacy and I couldn't even buy support; I was just out of luck as far as they were concerned. They couldn't even enter my serial numbers into their system, and went so far as to suggest I was reading them wrong because my serials were 15 digits and they only expected 10. Eventually I was told the only way they could support me and my legacy equipment would be as a migration case on new units with support.

So I went searching for some new units in hopes of getting a ticket number, and found some very decently priced units from an Amazon retailer. I just got them, and even though I unboxed them as new units, the paperwork on them is from March 2013, and it seems the support clock starts ticking automatically even if they sit on a shelf somewhere. I was able to activate my licenses, but no support.

That said, at least I now have units new enough to pay for support, which I am waiting on.

I thought I would go ahead and post in hopes that someone might have guidance on getting my restripe/resync unstuck, plus one other issue.

The new 4130 units all came up fine and show in the CMC as available, but one tried to boot from the network. I thought I might pop the cover and look for a loose cable or something, since at least on my NSM-160s they boot from flash modules (there are two of them); I'm just not sure what to look for on these 4130s.


TIA for any help/advice.


Parity Initialization Status is 'Failed'

Hi All,

We have this HP LeftHand storage:

[Model] P4500G2
[Storage System Software] Version

Last week I saw a warning. I am not an expert in LeftHand, but while doing a bundle analysis I saw the following:

[RAID] Normal
Rebuild Rate Low
Unused Devices (none)
Statistics 2 Arrays
Array 1 /dev/cciss/c0d1p2 : DATA Partition Raid 5 2741.08 GB Normal
Array 2 /dev/cciss/c0d2p2 : DATA Partition Raid 5 2794.40 GB Normal (parity initialization failed)

[RAID OS Partitions] Normal
Statistics 2 Arrays
Array 1 /dev/cciss/c0d0p5 : LOG Partition Raid 10 980.14 MB Normal
Array 2 /dev/cciss/c0d0p7 : LeftHand OS Partition Raid 10 2932.48 MB Normal

From the ADU report, I see there is a medium error.

How do we find out why parity initialization failed, and how can we fix it? I would appreciate any help.


[ Top ] → [ Smart Array P410 in slot 3 ] → [ Internal Drive Cage at Port 1I : Box 1 ] → [ Drive Cage on Port 1I ] → [ Physical Drive (600 GB SAS) 1I:1:5 ]
  Physical Drive Status: SCSI Bus 0 (0x00), SCSI ID 12 (0x0c)
  Block Size: 512 Bytes Per Block (0x0200)
  Total Blocks: 600 GB (0x45dd2fb0)
  Reserved Blocks: 0x00010000
  Drive Model: HP EF0600FARNA
  Drive Serial Number: 6SL8GFBH0000N4410N5P
  Drive Firmware Revision: HPD6
  SCSI Inquiry Bits: 0x02
  Compaq Drive Stamped: Stamped For Monitoring (0x01)
  Last Failure Reason: Medium Error 2 (0x0a)


StoreVirtual 3200 Latency Issue

Hi All,

I'm wondering if anyone can help or has experienced a similar problem. It's a long post, so please bear with me!

We have recently purchased a StoreVirtual 3200 SFF 10GbE unit.

It's currently configured with 7 x 10K SAS drives in 2 x 3-disk RAID 5 sets, plus a spare. We have exported a single volume (Network RAID 0) to a single host, which uses the Microsoft iSCSI initiator with multipathing enabled (4 x 1Gb connections).

On this volume we are running a single VM under Hyper-V 2012 R2. The VM has nothing running on it; it doesn't even have its network connected.

There is no other workload currently on the unit.

With this setup we are experiencing high write latency to the storage.

The VM host reports 40-50ms latency on average to the exported volume. Intermittently the latency will drop to what we consider normal, i.e. 1-2ms, and will remain at that level for several hours before jumping back to the 40-50ms level. Within the VM the latency is slightly higher, and although it doesn't seem to cause any issues, the VM does seem sluggish and would probably implode if any load were placed on it.

When we look at IOPS on the datastore we see 1-2 IOPS regardless of latency.

We have also tried connecting from other hosts, and the same issue persists.

At first we thought we had a networking issue, but we've realised that the latency is present in the performance charts on the StoreVirtual itself, so this seems unlikely. Also, when copying large files to the unit, throughput is good and easily saturates 1Gb Ethernet.

We have also noticed some strange behaviour when failing over between storage controllers. If we fail over to either storage controller the latency disappears; when we fail back, the latency returns.

Oddly, if we leave the controller failed over for a long period of time, at some point the latency returns.

Out of interest we have run Microsoft's Diskspd program to check the IOPS on the unit, and compared this to the drives on the host server.

The host server, which has 2 x 10K SAS SFF in RAID 1 with a 2GB FBWC (P440ar), shows very high IOPS and throughput. If we disable the FBWC using HP SSA, things look far more as we'd expect, and vaguely in line with the performance suggested by a RAID calculator for two 10K disks in RAID 1.

The StoreVirtual on the same test doesn't behave as if it has a write cache at all, and performs in line with what a RAID calculator suggests for two RAID 5 arrays in a stripe.

Even without the cache I wouldn't expect to see this latency when there is no load on the unit; in fact I wouldn't expect it on a single SATA drive!
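For anyone wanting to reproduce the RAID-calculator comparison, here is a back-of-envelope write IOPS estimate. This is a sketch only: the ~140 IOPS per 10K SAS spindle figure and the RAID write-penalty factors are common rules of thumb, not measured values for this hardware.

```python
# Back-of-envelope random write IOPS for common RAID levels.
# Assumes ~140 IOPS per 10K SAS spindle (rule of thumb, not measured).

def raid_write_iops(disks: int, penalty: int, per_disk_iops: int = 140) -> float:
    """Aggregate random write IOPS: raw spindle IOPS divided by the
    RAID write penalty (RAID 1 = 2, RAID 5 = 4, RAID 6 = 6)."""
    return disks * per_disk_iops / penalty

# Host: 2 x 10K SAS in RAID 1
host = raid_write_iops(disks=2, penalty=2)

# SV3200: two 3-disk RAID 5 sets striped together
san = raid_write_iops(disks=6, penalty=4)

print(f"host RAID 1 estimate: {host:.0f} IOPS")  # prints 140
print(f"SAN RAID 5 estimate: {san:.0f} IOPS")    # prints 210
```

Neither estimate comes close to being saturated by the observed 1-2 IOPS, which is consistent with the suspicion above that the latency is not a disk-load problem.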

Has anyone seen this before? Does anyone have a similar setup? Am I expecting too much? It just doesn't seem right to me.

I do have a case open with support, but it's slow and we keep going around in circles.

Thanks for looking!

PS: I have graphs and screenshots which I will upload if I can work out how to!


Design Question - Connect two DL360 server nodes and one SV3200 using 10GbE DAC without a SAN switch

I am designing a two-node load-balanced Hyper-V failover cluster with an SV3200 as SAN storage. Due to budgetary constraints, I want to omit the 10GbE switches. Can I direct-attach DAC cables from each of the servers to the SV3200?


LeftHand NSM150

Hello all. I have a question that maybe someone can help with. We have a set of NSM150s that have been offline for a few years. The server that had the management software (version 6.6 or 7.0) was retired long ago, and I am trying to come up with a copy of it. The HP site shows both versions, but when you try to download them it says "unable to redirect". Any idea where I can get this software?

Also, are there any updates to bring this thing more current if I do get it back online? Thanks.


VSA Node Server Crashed

Hello, I have a question.

We have a 2-node VSA cluster with a quorum disk on a NAS. Node 1 crashed with a defective controller. The server has been repaired and started; the network cables are not connected at this time. Can I bring this server online so that the nodes sync? We use Hyper-V.


HP StoreVirtual Storage (Network RAID 10)

Good day,

I have a small question: I cannot find a storage space calculator for the configuration below, so hopefully somebody can help me out.

Current configuration:

2x HP LeftHand P4000, 2x HP LeftHand P4200:

  1. Raw Space:  3.27 TB
  2. RAID (Hardware): 5
  3. Usable Space: 2.8 TB
  4. Provisioned Space: 2.59 TB


  • Network Raid10 2-way mirror
  • Total Space: 11.5 TB
  • Available Space 0.88 TB
  • Utilization: 92%
  • Disks: (Thick) Full provisioned

What will happen if I add a fifth SAN node (HP LeftHand 4300) to the same configuration, and what will the utilization be?
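For a rough answer from the numbers above, here is a back-of-envelope estimate. This is a sketch only: it assumes the new node contributes about the same 2.8 TB of usable space as the existing nodes, which may not hold for a 4300.

```python
# Rough cluster utilization estimate after adding a fifth node.
# Totals come from the post; the new node's usable space (2.8 TB)
# is an assumption, not a quoted spec.

total_tb = 11.5      # current cluster total space
available_tb = 0.88  # current available space
used_tb = total_tb - available_tb  # space currently consumed

new_node_tb = 2.8    # assumed usable space of the added node
new_total_tb = total_tb + new_node_tb

utilization = used_tb / new_total_tb
print(f"estimated utilization after adding node 5: {utilization:.0%}")  # prints 74%
```

Note this only estimates the utilization figure after the restripe completes; with Network RAID 10 (2-way mirror), the effective usable capacity is still half of the total shown.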

Best regards,

Henry van der Leer


SSD compatibility with HP VSA

I have the following setup: two HP DL360 Gen9 servers, each with two HPE 1.6TB 12G SAS Write Intensive SSDs. The two disks are in RAID 1 on both servers. I use VMware ESXi 6.0 U3, and on both servers I installed HPE StoreVirtual VSA 2014. There is one cluster with these SSD disks, with several VMs on it.

In this setup I see very low IOPS values, and that is a problem I can't resolve.

Has anyone had a similar problem?

I would be grateful for any recommendations on how to overcome this problem.

Thanks in advance.



How to reset the login password of HP StorageWorks P4300 G2

Dear All ,

We lost the login password of our SAN, an HP StorageWorks P4300 G2, and I have no idea how to recover the password. Kindly help.



StoreVirtual VSA and Workload accelerators


I'm looking to test some workload accelerators with a 2-node StoreVirtual VSA cluster. The idea is to put one accelerator card in each server and then run Network RAID 1 over 10GbE links. I know they are not built for this purpose, but I am interested in whether this is a supported configuration with VSA, without going through SPOCK. Is anybody doing it already?



Problem: SV3200 iSCSI link disconnects since OS 13.5 upgrade


I have two SV3200 systems in a stretch cluster. Since I upgraded to version 13.5 I have had a problem with the iSCSI ports on one of the SV3200s. The ports are continually resyncing: LOGSV3200.JPG

The iSCSI ports on the storage controllers are DOWN; iSCSI communication on this storage system is down.

Any idea about our problem?

Thank you.



Storage cluster with one SV3200 and one MSA - is it possible?


Can I make a cluster using one SV3200 and one MSA?


Multisite SV3200 Cluster with NR10 and NR0

Hi there,
At the moment I can't come to a conclusion about how the system reacts in an SV3200 multi-site scenario where I have NR10 and NR0 volumes mixed. Will the SV3200 keep an NR0 volume on one site, so that if that site goes down only that volume is affected?

The background is that I have some apps that mirror themselves, so they don't need NR10, and others that don't mirror, so I'd like to leverage both. But if NR0 means those volumes get striped across both sites, and every NR0 volume goes offline when one site fails, it is useless to me.



Two Node HPE StoreVirtual 3200 questions


Is it possible to have two HPE StoreVirtual 3200 nodes configured in one cluster? Or is each node supposed to be configured separately with up to 3 enclosures, so that I cannot have two nodes in one cluster?

The documentation does not state how to create a cluster for the 3200, hence the question.


Replacing P4500 with 4530

Dear all,

We plan to replace our old P4500 nodes with new 4530 nodes.

Our configuration is a multi-site SAN with 3 nodes per site.

I am now looking for a best-practice guide for replacing the old P4500 nodes with 4530 nodes, if possible without destroying the configured cluster.

Thank you for your support and your ideas or links.



P4500 G2 LeftHand Cluster Showing Incorrect Storage Values

Good day everyone,

I have a P4500 G2 cluster with 8 nodes running 12.5. When we added the 8th node to the cluster, it suddenly went to 100% utilization, whereas if you look at used space vs. total space we should be a little under 50% (see attached screenshot). Because it "thinks" the cluster is at 100%, we cannot delete volumes, shrink volumes, or perform any other management functions.

Any thoughts on how we might get this cluster to recalculate space?


P4300 No Quorum

We have a 2-node P4300 G2 setup. One of the nodes (Node 1) starts normally; the second node (Node 2) gives the error "Error opening a connection". We also lost the FOM. Now we can't connect to our storage. In the CMC we get the error "No Quorum".

What can we do to bring our storage back online?

Contact Us

Vivit Worldwide
P.O. Box 18510
Boulder, CO 80308

Email: info@vivit-worldwide.org


Vivit's mission is to serve
the Hewlett Packard
Enterprise User
Community through
Advocacy, Community,
and Education.