HPE Storage Solutions Forum - HPE StoreVirtual Storage/LeftHand
We have a cluster with 2 x HP LeftHand P4300. Connected to this cluster are a VMware vSphere 5.5 infrastructure and a Windows Server 2008 R2 failover cluster. We want to migrate to a stretch cluster with 2 x HPE StoreVirtual 3200 with 10GbE controllers.
When we connect the Windows 2008 R2 cluster we have issues. The Windows iSCSI initiator gets stuck at random and we lose both the P4300 volumes and the SV3200 volumes. We have to restart the servers to correct the problem, but soon the problem happens again. If we disconnect the SV3200 from the Windows servers, everything is OK. With VMware we have no problems.
Thank you very much.
Ok, let me begin by saying this is a long post about the issues I have had. Please provide any advice, insight, or words of wisdom that will help guide me through this.
We have a cluster with five NSM-160s with 2TB drives in RAID 0 (yes, I know they should have been RAID 5, but when we acquired them, before HP bought LeftHand, it was an accepted practice to maximize performance for running VMs). Anyway, we were running at about 57% usage, and early in the month unit 4's storage went offline and access to the LUNs started misbehaving.
It wouldn't repair the unit, and even though RAID came back normal after a reboot, the storage stayed offline. Since we had plenty of space, I elected to remove the unit from the cluster. That ended up generating 200 active tasks that went really slowly. In the meantime, all LUNs became unavailable. They are configured for Network RAID 10.
Anyway, after a few days we were finally down to 19 tasks, and then everything seemed to hit a roadblock. Several volumes show restriping/resyncing at 100% but still show unavailable due to a cluster edit.
So that is when the fun started. I contacted HPE, only to be told the units were legacy and I couldn't even buy support. Just screwed as far as they were concerned. They couldn't even enter my serial numbers into their system. They even went so far as to suggest I was reading them wrong, because my serials were 15 digits and they only expected 10. Anyway, I was eventually told the only way they could support me and my legacy units would be as a migration case on new units with support.
So I went searching for some new units in hopes of getting a ticket number. I found some very decently priced units from an Amazon retailer. I just got them, and even though I unboxed them and they were new units, the paperwork on them is from March 2013, and it seems the support clock starts ticking automatically even if they sit on a shelf somewhere. I was able to activate my licenses, but no support.
That said, at least I now have units new enough to pay for support, which I am currently waiting on.
I thought I would go ahead and post in hopes that someone might have guidance on getting my restripe/resync unstuck, plus one other issue.
The new 4130s all came up fine and show in the CMC as available, but one tried to boot from the network. I thought I might pop the cover and look for a loose cable or something, since on my 160s at least they boot from flash (and there are two of them); I'm just not sure what to look for with these 4130s.
TIA for any help/advice.
We have this HP LeftHand storage.
Last week I saw a warning. I am not an expert in LeftHand, but while doing a bundle analysis I saw the output below.
From the ADU report, I see there is a medium error.
How do we know why parity initialization failed? How do we fix it? I would appreciate any help.
[ Top ] → [ Smart Array P410 in slot 3 ] → [ Internal Drive Cage at Port 1I : Box 1 ] → [ Drive Cage on Port 1I ] → [ Physical Drive (600 GB SAS) 1I:1:5 ]
- Physical Drive Status: SCSI Bus 0 (0x00), SCSI ID 12 (0x0c)
- Block Size: 512 Bytes Per Block (0x0200)
- Total Blocks: 600 GB (0x45dd2fb0)
- Reserved Blocks: 0x00010000
- Drive Model: HP EF0600FARNA
- Drive Serial Number: 6SL8GFBH0000N4410N5P
- Drive Firmware Revision: HPD6
- SCSI Inquiry Bits: 0x02
- Compaq Drive Stamped: Stamped For Monitoring (0x01)
- Last Failure Reason: Medium Error 2 (0x0a)
I'm wondering if anyone can help or has experienced a similar problem. It's a long post, so please bear with me!
We have recently purchased a StoreVirtual 3200 SFF 10GbE unit.
It's currently configured with 7 x 10K SAS drives in 2 x 3-disk RAID 5 sets plus a spare. We have exported a single volume (Network RAID 0) to a single host, which uses the Microsoft iSCSI initiator with multipathing enabled (4 x 1Gb connections).
On this volume we are running a single VM under Hyper-V 2012 R2. The VM has nothing running on it; it doesn't even have its network connected.
There is no other workload currently on the unit.
With this setup we are experiencing high write latency to the storage.
The VM host reports 40-50 ms latency on average to the exported volume. Intermittently the latency drops to what we consider normal, i.e. 1-2 ms, and remains at this level for several hours before jumping back to 40-50 ms. Within the VM the latency is slightly higher, and although it doesn't seem to cause any issues, the VM feels sluggish and would probably implode if any load were placed on it.
When we look at IOPS on the datastore we see 1-2 IOPS regardless of latency.
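One way to make this intermittent pattern concrete is to bucket the latency samples rather than averaging them. A minimal Python sketch; the thresholds and sample values below are invented for illustration, not taken from the array:

```python
from collections import Counter

def bucket_latencies(samples_ms, edges=(5, 20)):
    """Classify latency samples (ms) as normal / elevated / high."""
    def label(ms):
        if ms < edges[0]:
            return "normal (<5 ms)"
        if ms < edges[1]:
            return "elevated (5-20 ms)"
        return "high (>=20 ms)"
    return Counter(label(s) for s in samples_ms)

# Invented samples mimicking the described behaviour: long stretches
# around 40-50 ms with occasional windows of 1-2 ms.
samples = [45, 48, 42, 1.5, 2.0, 50, 44, 1.8, 47, 43]
print(bucket_latencies(samples))
```

A clearly bimodal count (most samples high, a minority normal, almost nothing in between) points at a state change in the controller path rather than steady queueing.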
We have also tried connecting from other hosts, and the same issue persists.
At first we thought we had a networking issue, but we've realised the latency is present in the performance charts on the StoreVirtual itself, so this seems unlikely. Also, when copying large files to the unit, throughput is good and easily saturates 1Gb Ethernet.
We have also noticed some strange behaviour when failing over between storage controllers. If we fail over to either storage controller the latency disappears; when we fail back, the latency returns.
Oddly, if we leave the controller failed over for a long period, at some point the latency returns anyway.
Out of interest, we have run Microsoft's DiskSpd tool to check IOPS on the unit and compared this to the drives in the host server.
On the host server, which has 2 x 10K SAS SFF in RAID 1 with a 2GB FBWC (P440ar), we see very high IOPS and throughput. If we disable the FBWC using HPE SSA, things look far more as we'd expect and roughly in line with what a RAID calculator suggests for two 10K disks in RAID 1.
On the same test, the StoreVirtual doesn't behave as if it has a write cache at all and performs in line with what a RAID calculator suggests for two RAID 5 arrays in a stripe.
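For reference, the figures a RAID calculator produces follow from the per-level write penalty (RAID 5 costs four back-end I/Os per front-end write). A minimal sketch; the per-disk IOPS and read/write mix are illustrative assumptions, not measured values:

```python
# Rough front-end IOPS estimate from RAID write penalty (illustrative,
# not vendor-validated). Penalty: RAID 0 = 1, RAID 1/10 = 2, RAID 5 = 4,
# RAID 6 = 6 back-end I/Os per front-end write.
WRITE_PENALTY = {"raid0": 1, "raid1": 2, "raid10": 2, "raid5": 4, "raid6": 6}

def effective_iops(n_disks, iops_per_disk, level, read_fraction):
    """Front-end IOPS the disk group can sustain for a read/write mix."""
    raw = n_disks * iops_per_disk          # aggregate back-end IOPS
    penalty = WRITE_PENALTY[level]
    write_fraction = 1.0 - read_fraction
    return raw / (read_fraction + write_fraction * penalty)

# 2 x 10K SAS in RAID 1 on the host, ~140 IOPS per 10K disk, 70/30 mix
host = effective_iops(2, 140, "raid1", 0.70)
# 6 x 10K SAS as two 3-disk RAID 5 sets on the SV3200, same assumptions
sv = effective_iops(6, 140, "raid5", 0.70)
print(round(host), round(sv))
```

This only bounds throughput, not latency: even a cache-less RAID 5 set at 1-2 IOPS of load should complete individual writes in single-digit milliseconds, which is why the observed 40-50 ms looks wrong.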
Even without the cache, I wouldn't expect this latency when there is no load on the unit; in fact, I wouldn't expect it from a single SATA drive!
Has anyone seen this before? Does anyone have a similar setup? Am I expecting too much? It just doesn't seem right to me.
I do have a case open with support, but it's slow and we keep going around in circles.
PS: I have graphs and screenshots which I will upload if I can work out how to!
I am designing a two-node load-balanced Hyper-V failover cluster with an SV3200 as SAN storage. Due to budget constraints I want to omit the 10GbE switches; can I direct-attach a DAC from each server to the SV3200?
Hello all. I have a question that maybe someone can help with. We have a set of NSM150s that have been offline for a few years. The server that had the management software, either version 6.6 or 7.0, was retired long ago, and I am trying to come up with a copy of it. The HP site shows both versions, but when you try to download them it says it is unable to redirect. Any idea where I can get this software?
Also, are there any updates to get this thing more current if I do get it back online? Thanks.
Hello, I have a question.
We have a 2-node VSA cluster with a quorum disk on a NAS. Node1 crashed with a defective controller. The server has been repaired and started, but the network cables are not connected at this time. Can I bring this server online so the nodes sync? We use Hyper-V.
I have a small question: I cannot find the storage space calculator for the config below, so hopefully somebody can help me out.
2 x HP LeftHand P4000, 2 x HP LeftHand P4200:
What will happen if I add a fifth SAN node (HP LeftHand P4300) to the same config, and what will the utilization be?
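In the absence of the official calculator, here is a rough sketch of how LeftHand cluster capacity is usually reckoned: each node contributes only as much capacity as the smallest node in the cluster, and Network RAID 10 keeps two copies of every block. The per-node capacities below are assumptions for illustration, not your actual models' figures:

```python
def cluster_usable_tb(node_raw_tb, nraid_copies=2):
    """Usable space in a LeftHand/StoreVirtual cluster.

    Every node is treated as having the capacity of the smallest member
    (the cluster equalizes to it), and Network RAID divides the pool by
    the number of data copies (2 for Network RAID 10).
    """
    per_node = min(node_raw_tb)            # smallest node sets the limit
    pooled = per_node * len(node_raw_tb)   # equalized pool across nodes
    return pooled / nraid_copies           # divide by copy count

# Hypothetical raw capacities per node in TB (assumed, adjust to yours)
nodes = [7.2, 7.2, 9.6, 9.6]
print(cluster_usable_tb(nodes))
```

The consequence for the fifth node: if the new P4300 is at least as large as the smallest existing node, it adds one more "smallest-node" worth of raw space (half of that usable under Network RAID 10); any capacity above the smallest node is stranded.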
Henry van der Leer
I have the following setup: two HP DL360 Gen9 servers, each with two HPE 1.6TB 12G SAS Write Intensive SSDs. The two disks are in RAID 1 on both servers. I use VMware ESXi 6.0 U3, and on both servers I installed HPE StoreVirtual VSA 2014. There is one cluster built on these SSD disks, with several VMs on it.
In this setup I get very low IOPS, and that causes a problem I can't resolve.
Has anyone had a similar problem?
I would be grateful for any recommendations on how to overcome it.
Thanks in advance.
Dear All ,
We lost the login password of our SAN, an HP StorageWorks P4300 G2, and I have no idea how to recover it. Kindly help.
I'm looking to test some workload accelerators with a 2-node StoreVirtual VSA cluster. The idea is to put one accelerator card in each server and then build Network RAID 1 over 10GbE links. I know they are not built for this purpose, but I'm interested in whether this is a supported configuration with VSA, without going through SPOCK. Is anybody doing this already?
I have two SV3200 systems in a stretch cluster. Since I upgraded to version 13.5, I have a problem with the iSCSI ports on one of the SV3200s. The ports are continually resyncing:
"The iSCSI port on Storage Controllers are DOWN, iSCSI communication on this storage system is down."
Any idea about our problem?
Can I make a cluster using one SV3200 and one MSA?
Background is that I have some apps that mirror themselves, so they won't need NR10, and others that don't, so I'd like to leverage both. But if NR0 means those volumes get stretched across both sites and every NR0 volume goes offline when one site fails, it is useless to me.
Is it possible to have two HPE StoreVirtual 3200 nodes configured in a cluster? Or is each node supposed to be configured separately with up to 3 enclosures, so that I cannot have two nodes in one cluster?
The documentation does not state how to create a cluster for the 3200. Hence the question.
We plan to replace our old P4500 nodes with new 4530 nodes.
Our configuration is a multi-site SAN with 3 nodes per site.
I am now looking for a best-practice guide for replacing the old P4500 nodes with 4530 nodes,
if possible without destroying the configured cluster.
Thank you for your support and your ideas or links.
Good day everyone,
I have a P4500 G2 cluster with 8 nodes running 12.5. When we added the 8th node to the cluster, it suddenly went to 100% utilization, even though if you look at used space vs. total space we should be at a little less than 50% (see attached screenshot). Because it "thinks" the cluster is at 100%, we can't delete volumes, shrink volumes, or perform any other management functions.
Any thoughts on how we might get this cluster to recalculate its space?
We have a 2-node P4300 G2. One of the nodes (Node1) starts normally; the second node (Node2) gives the error "Error opening a connection". We also lost the FOM. Now we can't connect to our storage. In the CMC we get the error "No Quorum".
What can we do to bring our storage back online?