HPE Storage Solutions Forum - HPE StoreVirtual Storage/LeftHand



HP StoreVirtual Storage (Network RAID 10)

Good day,

I have a small question: I cannot find a storage space calculator for the configuration below, so hopefully somebody can help me out.

Current configuration:

2x HP LeftHand P4000 and 2x HP LeftHand P4200, each with:

  1. Raw Space:  3.27 TB
  2. RAID (Hardware): 5
  3. Usable Space: 2.8 TB
  4. Provisioned Space: 2.59 TB

Currently:

  • Network RAID 10 (2-way mirror)
  • Total Space: 11.5 TB
  • Available Space: 0.88 TB
  • Utilization: 92%
  • Disks: fully (thick) provisioned

What will happen if I add a fifth SAN node (HP LeftHand P4300) to the same configuration, and what will the utilization be?
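
A rough way to estimate this, as a sketch in Python (assuming the P4300 contributes about the same 2.8 TB usable as the existing nodes, the cluster restripes evenly, and consumed space stays constant):

  # Estimated utilization after adding a fifth node (all figures in TB).
  total_space = 11.5                     # current cluster total (4 nodes)
  available   = 0.88                     # currently free
  used        = total_space - available  # ~10.62 TB consumed (incl. NR10 copies)

  new_node  = 2.8                        # usable space the P4300 adds (assumption)
  new_total = total_space + new_node     # ~14.3 TB

  print(f"Estimated utilization: {used / new_total:.0%}")  # ~74%

So utilization should drop from 92% to roughly 74%, with the caveat that the restripe takes time and the exact figure depends on the new node's actual usable capacity.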

Best regards,

Henry van der Leer

 

SSD compatibility with HP VSA

I have the following setup: two HP DL360 Gen9 servers, each with two HPE 1.6 TB 12G SAS Write Intensive SSDs. The two disks are in RAID 1 on both servers. I use VMware ESXi 6.0 U3, and on both servers I have installed HPE StoreVirtual VSA 2014. There is one cluster with these SSD disks, with several VMs on it.

In this setup I get very low IOPS, and that causes a problem I can't resolve.

Has anyone had a similar problem?

I would be grateful for any recommendations on how to overcome it.

Thanks in advance.

 

 

How to reset the login password of an HP StorageWorks P4300 G2

Dear all,

We lost the login password of our SAN, an HP StorageWorks P4300 G2, and I have no idea how to recover it. Kindly help.

 

 

StoreVirtual VSA and workload accelerators

Hi,

I'm looking to test some workload accelerators with a two-node StoreVirtual VSA cluster. The idea is to put one accelerator card in each server and then run Network RAID-1 over 10GbE links. I know they are not built for this purpose, but I'd like to know whether this is a supported configuration with VSA without going through SPOCK. Is anybody doing it already?

Tnx

 

Problem: iSCSI links disconnect on SV3200 since upgrade to OS 13.5

Hello,

I have two SV3200 systems in a stretch cluster. Since I upgraded to version 13.5, I have a problem with the iSCSI ports on one of the SV3200s; the ports are continually resyncing (see LOGSV3200.JPG).

The iSCSI ports on the storage controllers are DOWN, and iSCSI communication on this storage system is down.

Any ideas about our problem?

Thank you.

 

 

Storage cluster with one SV3200 and one MSA: is it possible?

Hi,

Can I make a cluster using one SV3200 and one MSA?

 

Multisite SV3200 cluster with NR10 and NR0

Hi there,

At the moment I can't come to a conclusion on how the system behaves in an SV3200 multi-site scenario where NR10 and NR0 volumes are mixed. Will the SV3200 keep an NR0 volume on one site, so that if that site goes down only that volume is affected?

The background is that I have some apps that mirror themselves, so they don't need NR10, and others that don't, so I'd like to leverage both. But if NR0 means that these volumes get striped across both sites, so that every NR0 volume goes offline when one site fails, it is useless to me.

Regards

 

Two-node HPE StoreVirtual 3200 questions

Hi,

Is it possible to have two HPE StoreVirtual 3200 nodes configured in a cluster? Or is each node supposed to be configured separately with up to three enclosures, meaning I cannot have two nodes in one cluster?

The documentation does not state how to create a cluster for the 3200. Hence the question.

 

Replacing P4500 with 4530

Dear all,

We plan to replace our old P4500 nodes with new 4530 nodes.

Our configuration is a multisite SAN with 3 nodes per site.

I am now looking for a best-practice guide for replacing the old P4500 nodes with the 4530 nodes, if possible without destroying the configured cluster.

Thank you for your support and your ideas or links.

KR,

 

P4500 G2 LeftHand cluster showing incorrect storage values

Good day everyone,

I have a P4500 G2 cluster with 8 nodes running 12.5. When we added the 8th node to the cluster, utilization suddenly went to 100%, whereas if you look at used space vs. total space we should be at a little less than 50% (see attached screenshot). Because it "thinks" the cluster is at 100%, we can't delete volumes, shrink volumes, or perform any other management functions.

Any thoughts on how we might get this cluster to recalculate its space?

 

P4300 No Quorum

We have a two-node P4300 G2 cluster. One of the nodes (Node1) starts normally; the second node (Node2) gives the error "Error opening a connection". We have also lost the FOM. Now we can't connect to our storage. In the CMC we get the error "No Quorum".

What can we do to bring our storage back online?

 

Free 1TB StoreVirtual VSA license expiration shows 2040

Hi - I've downloaded and successfully installed (from OVF) the free StoreVirtual VSA on two ESXi 6.0 (free version) hosts, using an NFS share for quorum. It all seems to work fine. When I look at the reported info for each VSA instance, it shows the license as expiring in the year 2040, and AO is enabled. I thought the limitations of the free version were a three-year expiry and no AO?

Thanks,

Sera

 

Situation with two VSAs and a FOM

Hello,

I have a two-node cluster and a FOM. When I shut down both nodes, of course the volumes are offline.

Now, when the FOM remains running and I start one of the nodes, say node1, the storage comes online but in a degraded state, which is normal.

But when I do the same thing starting with node2 instead of node1, the storage remains unavailable until node1 is also started.

What could be the reason for this?

 

1TB license included with DL380

Hello,

My customer has bought two DL380 servers, and I have installed the VSA software on each. It is working OK. However, I'm puzzled by the licensing. I would like to make use of the free 1 TB offer; I understand I cannot use some features like Adaptive Optimization, >1 TB volumes, two-tiering, ...

When I go to the CMC, it states that I'm still in evaluation and that volumes which contain violations will go offline in 6 days.

Should I worry?

 

Provisioning LUNs for an ESXi server group on an HP P4500 virtualization SAN

Hi All,

Can anyone here please share step-by-step instructions on how to provision a new LUN from the newly built or upgraded P4500 array?

I need to commission 4x 1 TB LUNs so that I can present them to my VMware ESXi server group.

URL: http://h20566.www2.hpe.com/hpsc/doc/public/display?docLocale=en_US&docId=emr_na-c03738428&sp4ts.oid=89018

Thanks in advance.

 

StoreVirtual storage system connection methods and limitations

Hello!

I have started work on a system and have concerns about the storage setup; I would love someone to verify my thoughts. They have a StoreVirtual 4730FC with four shelves, 20 TB per shelf configured as RAID 1+0, so 10 TB usable per shelf. Out of the 40 TB total, it is set up with Network RAID-10, so shelves 1 + 3 present one 20 TB LUN which is mirrored to shelves 2 + 4. That gives 20 TB usable; you can lose two shelves (as long as it's either 1+3 or 2+4) and multiple disks within the same shelf (as long as no two are part of the same RAID 1 pair).
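
As a sketch, the capacity layering described above works out like this:

  # Hardware RAID 1+0 halves each shelf's raw capacity; Network RAID-10
  # then mirrors shelves 1+3 onto shelves 2+4, halving the total again.
  raw_per_shelf = 20                           # TB raw in each shelf
  shelves       = 4
  hw_usable     = raw_per_shelf / 2 * shelves  # 40.0 TB after hardware RAID 1+0
  net_usable    = hw_usable / 2                # 20.0 TB presented under NR10
  print(hw_usable, net_usable)                 # 40.0 20.0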

I'm happy with all that. However....

It's the way they are connected that worries me. All four have FC cards and are connected to HP StorageWorks switches, but those are only used to connect the VHOSTS to the storage! The storage shelves (or "storage systems") themselves are interconnected via iSCSI. OK, that will work, but it gets worse: shelves 1 + 3 are connected to a switch used for general traffic (a 2900), and from there the path hops through another switch, over a 1 Gb internet line to another site, through another switch to another 2900, and then to shelves 2 + 4.

Given that you will only ever get about 125 MB/s of throughput on a 1 Gbit line, that the line carries so much other traffic (150+ users sit at the remote site), and that HP recommends a minimum of 100 MB/s per shelf, I believe this is why they are seeing issues.
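
A back-of-the-envelope check of that claim, as a sketch (using the theoretical 1 Gbit/s ceiling and the ~100 MB/s-per-shelf guideline quoted above):

  # Rough bandwidth sanity check for the inter-site iSCSI path.
  link_mb_s      = 1000 / 8              # 1 Gbit/s link ceiling, ~125 MB/s
  remote_shelves = 2                     # shelves 2 + 4 sit across the link
  needed_mb_s    = remote_shelves * 100  # ~100 MB/s per shelf guideline
  print(f"available {link_mb_s:.0f} MB/s vs needed {needed_mb_s} MB/s: "
        f"short {needed_mb_s - link_mb_s:.0f} MB/s before any user traffic")

Even before user traffic is accounted for, the link falls about 75 MB/s short of the recommended minimum for the two remote shelves.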

The issues they see are that if a disk fails, or the network experiences even a slight interruption, the SAN stops presenting itself to the Hyper-V cluster and all VMs shut down. I assume this is the SAN protecting data integrity.

I have suggested they move all four shelves to the same site and connect them using dedicated switches, either iSCSI or the FC StorageWorks switches they already have, with the ports zoned off from the StoreVirtual -> Hyper-V cluster traffic.

 

Any thoughts on whether I'm making sense?

Thanks,

Mark

 

2 nodes + FOM - removed hosts - added new hosts - no quorum

Hello all,

It's a stupid thing, but I've done it...

ESXi01 hosting VSA01 (500 GB)

ESXi02 hosting VSA02 (500 GB)

One ESXi host hosting the FOM

One management group named VSAMG

I needed more space under the free 1 TB StoreVirtual licence, so I added additional HDDs and planned to reconfigure the RAID on each host, one by one.

I moved the VMs to ESXi02 and rebooted ESXi01, added the HDDs, and reconfigured the RAID (stupidly enough, without first removing VSA01 from the CMC). All good; I could install VSA01_1TB fine.

Stupid move #2: I added the new VSA01_1TB to the same management group, VSAMG.

Since ESXi02 with VSA02 was still there, I had quorum, and moving the VMs to the new, larger 1 TB storage worked.

When I took down ESXi02 and reconfigured its RAID (of course without removing VSA02 first), everything went south due to lack of quorum.

Now my CMC shows two clusters: one offline with two unreachable nodes, and one available with one host. I can't do anything, as I cannot regain quorum.

I've tried to run recoverQuorum on my surviving host, VSA01_1TB; the result was "cliqutilityfailed".

I've tried to set up two new VSAs with the same names and MAC addresses; I get an error in the CMC saying the VSAs should be in the management group, but they think they don't belong there. The managers are shown as offline. Stuck.

A picture is attached.

Q1: The coordinating manager is the surviving 1 TB appliance so far, but I can't regain quorum because the CMC sees four hosts (three regular, one failover), so I'm unable to add systems or storage, delete volumes, or do anything else.

Q2: I have no contract or subscription with HP. How am I supposed to open an SR with them? Their websites are insane.

I hope I can solve it with your help.

Many thanks

Andrei

 

Old P4000 G1 two-node cluster

Hi, I have this fully functional, with some spare HDDs (see attached). What's the best use for it, please? I know it's old and not supported anymore.

 

P4300 G2 cluster: reconfiguring to RAID 10

We have a two-node P4300 G2 cluster utilizing a FOM.

I want to reconfigure the nodes from the current RAID 5 to RAID 10. I've seen people discuss evicting a node, changing the RAID, and adding it back to the management group, but that was always with three nodes.

How can I do this with two nodes? When you try to evict a node with a FOM present, it tells you to remove the FOM from the management group first. But if I remove the FOM and then remove a node from the cluster, I'm going to lose quorum and the cluster will go down, no?
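
The worry checks out. As a sketch, manager quorum in LeftHand OS is a strict majority of the configured managers:

  # Quorum requires more than half of the configured managers to be running.
  def has_quorum(running, configured):
      return running > configured // 2

  print(has_quorum(2, 3))  # 2 nodes + FOM, one node down -> True (stays up)
  print(has_quorum(1, 2))  # FOM removed, then one node out -> False (goes down)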

 

 

Disabling Network RAID on a StoreVirtual 3200

Hello,

Is it possible to disable Network RAID on a StoreVirtual 3200? I will use a multi-site stretch cluster (synchronous replication) with two nodes, and I don't want to use Network RAID, in order to save storage.

Thank you in advance.

Regards.

 

 
