|HPE Storage Solutions Forum - HPE StoreVirtual Storage/LeftHand|
We're having trouble making up our minds about what to go for; it's a choice between a StoreVirtual 4335 (and possibly a pair of 4335/4730s down the line, if budget allows) or a pair of 3200s.
What we must achieve is data mirrored across two locations in an active/active scenario, so that if one StoreVirtual goes down the other continues serving the data. Auto-tiering of data is also important.
The system is to run on VMware 6.0. What can one do that the other can't? And can all disks be accessed by each controller in the 3200, or is it half and half?
I'm planning to build an HA cluster on Windows Hyper-V with storage clustering: 2 server nodes plus a dedicated DC, and 2 SAN nodes with data sync and automatic failover, which is why I'm now considering the HPE SV3200.
I've never had any experience with HP storage, so for now I can only use manuals and Google to get a basic picture.
Could I ask you to clarify some details?
1. AFAIU I need to use a quorum witness inside my Management Group. Is the "Cluster" entity created along with the MG, or vice versa?
2. AFAIU I can have only one quorum witness in the MG, and it cannot be placed inside a LUN of my SV3200, so I cannot cluster the VM that would serve as the NFS share. Did I get it right that if my share fails along with the server cluster node it lives on, my Network RAID cluster will not react until its nodes lose sight of each other? And if my share disappears, can I just create a new one and configure the MG to use it, with no downtime?
3. I'm considering FC8 storage with no FC switches, so basically I just need shared storage for my Hyper-V cluster. Most of my LC cables from both server nodes will be connected to the SV3200_1 node's controllers. Can I put the second node, SV3200_2, in standby mode awaiting failover, or will I/O occur simultaneously on both nodes? And will I see all my physical LUNs in the management console, or virtual LUNs with two physical LUNs behind each?
I have an ESXi 6.0 HA cluster with 2 DL380 Gen9 (P440ar), each with 2x 1.6TB SSD in RAID-1 + 6x 1.8TB SAS HDD in RAID-6.
I want to add 5x 1.8TB SAS drives to each host, so (besides drives) I have ordered a drive cage, a SAS expander card and a VSA license upgrade for each host.
I have made the following plan for the task, but I would really like to know if this is a viable plan (especially the marked parts), and if I have missed something:
Backup, backup, backup!
For each host:
Rinse and repeat for the other host.
Next in CMC:
Finally rescan storage on both hosts.
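For the final rescan step, this is roughly what I was planning to run over SSH on each host (standard esxcli/vmkfstools commands; please correct me if this is not the right approach):

```shell
# Rescan all storage adapters so the host picks up the changed devices
esxcli storage core adapter rescan --all

# Refresh VMFS volumes so any grown extents are re-read
vmkfstools -V
```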
Any comments are welcome! :)
We've got two old HP LeftHand P4300 SANs which we would like to use for training purposes. Because we no longer know the passwords, we would like to reset the setup using the HP P4000 Storage System Quick Restore ISO.
I've downloaded versions 10.5, 11.5 and 12.6.
When using versions 11.5 and 12.6 the following error will be displayed after inserting the USB with the valid license key:
umount: /mnt/cdrom: not mounted
In both cases the passwords aren't reset after a reboot.
Can somebody point me in the right direction? I'm lost.
StoreVirtual noob here. After reading the StoreVirtual whitepaper, my understanding is:
1) StoreVirtual is the VSA appliance.
2) The StoreVirtual 4730 is basically a sort of blade server (Gen8 or Gen9) with additional NIC ports, 25 disks, and the StoreVirtual VSA integrated into it.
3) One can also just buy the StoreVirtual VSA and deploy it on existing servers or hardware.
1) How do we make storage visible to the servers? I come from a traditional SAN background, where storage port WWNs are zoned with server HBA WWNs, a storage group is created with the LUNs and the server and storage array WWNs, and the server then sees the disks after scanning.
How's it done with iSCSI and storevirtual?
Please let me know if there are guides that could make this clear. I've got to ramp up on StoreVirtual ASAP and would be very thankful if someone could provide pointers and/or guidance.
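To make my question concrete: is the flow on a Linux initiator roughly like the following? The IP and IQN below are made-up examples (I'm assuming discovery is pointed at the cluster virtual IP, and that access is controlled by server entries in the CMC rather than zoning):

```shell
# Discover targets advertised by the StoreVirtual cluster virtual IP (example address)
iscsiadm -m discovery -t sendtargets -p 10.0.0.10:3260

# Log in to a discovered target (example IQN in the LeftHand naming style)
iscsiadm -m node \
  -T iqn.2003-10.com.lefthandnetworks:mygroup:25:vol01 \
  -p 10.0.0.10:3260 --login

# The LUN should then appear as a new block device
lsblk
```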
I updated my storevirtual 4730 nodes to 12.6.
Since then I have had NIC warnings on the primary node telling me:
Network Interface 'NICSlot2:Port1' error status = 'Excessive'
Network Interface 'NICSlot2:Port2' error status = 'Excessive'
The Network Interface 'NICSlot2:Port1' error status is 'Excessive' at 2.00%.
The Network Interface 'NICSlot2:Port2' error status is 'Excessive' at 0.93%.
What's wrong here, or can I disable these messages? An error rate of two percent or less is acceptable to me.
We currently have 4x 4530 StoreVirtual SANs in a cluster, and some volumes use 4-way mirroring. All 4 nodes have MDL SAS 7.2K drives. We want to add 2x 4530s with 10K SAS drives for data tiering and Adaptive Optimization. I know there will be restriping across the cluster for this, but how will the 4-way mirrored volumes be affected, if at all, now that there will be 6 nodes in the cluster?
I have a StoreVirtual VSA 2014 running on vSphere hosts. We have two storage nodes and a separate FOM. Recently, we took one VSA offline to expand the underlying RAID storage. This was done, and the expanded space was added to the drive space allocated to the VSA virtual machine. No further changes within StoreVirtual were applied, as we wanted to resync first. Next, the storage system VM was turned on and the VSA began to resync. A few hours in, a critical alarm popped up saying:
Event: EID_MOUNT_MOUNTED_RO E02030102
I stopped the manager and attempted "Repair Storage System" from the storage system tasks, but it claimed the storage system was operating normally. Then I noticed that the resync appears to still be in progress: VSA02 says Resyncing (57%, 7 hours remaining), and VSA01, which has been operational for the last few days, says Resyncing (72%, 19 hours).
Is VSA02 broken? Is this a normal error? Do I leave it alone?
Any help would be appreciated. Thanks.
I am trying to upgrade my old LeftHand StorageWorks P4300 SAN to the 12.5 OS. I am currently at 9.5, but when I log into the CMC and run Upgrade it will not allow me. Since a direct upgrade from 9.5 to 12.5 (the latest my SAN can go to) is not supported, I tried using Advanced mode to upgrade to 10.5, but it also states that 9.5 to 10.5 is not supported, and the option is greyed out. I do not see any option to upgrade to 10.0, so how can I upgrade my SAN? I am looking to repurpose it for testing but would like it to be fully updated. Any help would be greatly appreciated.
Hi, I have a P4500 G2 storage system (software version 10.5) and CMC version 12.0.00 (build 0725.0) installed on Windows Server 2008 R2.
Now, when I try to check for upgrades in the CMC, it doesn't work. I get the message "An error occurred trying to connect to the FTP server".
When I click the "View notifications" button I get the error message "Notifications cannot be found at this time. Try again later."
There is no firewall blocking FTP communication on port 21. My perimeter firewall is MS TMG 2010, and when I turn on logging and click the "Test FTP connection . . ." button in the CMC, I can see the communication in the TMG logs.
But no communication shows up in TMG when I click the "Check for Upgrades" button in the CMC.
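In case it helps anyone reproduce this, the endpoint can also be probed manually from the same machine (I'm assuming ftp.hp.com is the server the CMC talks to for upgrades; that assumption may be wrong):

```shell
# Manually list the assumed HP upgrade FTP server to separate
# CMC behaviour from plain FTP connectivity through the firewall
curl -v ftp://ftp.hp.com/
```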
My last successful check for upgrades was on 27.09.2016.
Can someone help me with this problem, please?
A vSphere VSA was reinstalled, and I neglected to remove the old storage system from the CMC.
The CMC shows the old, failed storage system as Unknown, Offline, and with a MAC address that is no longer valid since the VSA instance no longer exists. All available options on the failed storage system lead nowhere - it cannot be powered on or off, its hostname cannot be edited, it cannot be added to a new cluster - because the system cannot be found on the network.
The failed storage system does not belong to the working cluster, though it is a member of the management group.
I cannot make changes to the working cluster, such as changing network bonding, because the CMC reports "There are one or more other storage systems in the (management group) that are not ready. Wait until the other storage systems are ready and try again."
What are the options for removing a failed storage system from the CMC?
Thanks for your help.
Today I would like to share my latest nightmare with the VSA, to give you a chance to prevent it.
We have two two-site VSA clusters, with 2 and 4 nodes respectively. They were installed in 2011 with version 9.5 and afterwards upgraded step by step to 12.5. Two weeks ago we wanted to extend a LUN. After opening the CMC, all volumes were inaccessible; the VSAs seemed to have crashed. After a reboot everything looked OK until we did an iSCSI rescan in ESXi, and all VSAs crashed again. The CMC showed many errors like CIM down, RAID off... With the help of HP we were able to replace the damaged VSAs and rebuild the data. This cost us 4 days and nights of work and several years of lifetime.
So what was the cause? Afterwards I analysed the filesystems of the broken VSAs: the root volume was completely full, with no bytes free. This seems to have happened on all VSAs at the same time.
VSA versions prior to 10.5 were deployed with a disk size of 8 GB; from 10.5 upward the default disk size is 32 GB. Versions prior to 10.5 are called Model P4000, and support for this model ends in 2017!
Admins: if you've got this model, please have a look at the root volume free space in your VSAs, and exchange them ASAP! HP support knows the right procedure.
best regards Björn
I'm configuring a VSA cluster (2 VSA nodes). I want to use a Quorum Witness, but I have only Windows servers in the environment.
The NFS share is set to "Allow R/W" for everyone, with no authentication. The network is reachable but, when I check NFS connectivity (10.1.1.25:/quorum), it fails.
The SAN network is separated from the LAN; the NFS server has one interface in the SAN network.
The VSA doesn't have access to DNS - is that a problem?
Can someone post a screenshot of their Windows NFS share config?
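In case it's useful, the export could also be sanity-checked from any Linux box on the SAN network (IP and export path taken from above; I'm assuming the witness speaks NFSv3, which may be wrong):

```shell
# Ask the Windows NFS server which exports it advertises
showmount -e 10.1.1.25

# Try mounting the export the way I'd expect the quorum witness to (assuming NFSv3)
mount -t nfs -o vers=3 10.1.1.25:/quorum /mnt
```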
Hello - We are currently using SV 4330 appliances, but are considering migrating to SV VSA. To start out, I'm looking at VSA 10 TB licenses and three HPE Proliant DL360 Gen9's with these quick specs:
- Dual Xeon E5-2640V4 2.4 GHz. procs
- Smart Array P440ar with 2GB FBWC
- (8) 1.2 TB - 10K RPM SAS 12Gb/s disks
- 256 GB RAM
- Gb NICs (2 would be dedicated to iSCSI)
- 32 GB flash SD for ESXi boot partition
Any ideas on how these would compare performance-wise to the 4330s? I'm mostly asking whether performance would be adequate using all spinning disks (all the examples I've seen mix in SSDs), and whether nesting VM virtual disks on top of virtual disks for the VSA hinders performance. Our current 4330 nodes all have (8) 900 GB 10K RPM 6Gb/s SAS disks.
We have StoreVirtual hardware at two sites. Lately I have noticed that remote snapshots from Site B to Site A stay at Copying 0%, with a current rate of 0 KB/sec, and never finish. Remote snapshots from Site A to Site B are working normally.
The only thing that has changed recently is the Data VLAN IP scheme at Site A (not the iSCSI VLAN). But I have connectivity between the iSCSI VLANs, and the CMC can see the storage nodes at both sites, from either site.
Both sites are running software version 12.0.00.0725.0, and we are running CMC 12.5.
Thank you in advance
Has anyone received the below error in the Centralized Management Console (CMC)? If so, what was the resolution?
'CIM Server' server = 'Down'
I have two StoreVirtual 4330s running as a multi-site cluster with a FOM server.
My FOM server died - hardware and everything - and I am not able to get the FOM back; the hardware, software, everything is gone. I created a new FOM server and want to attach it to my Management Group, but I get the following error: [You cannot add a Failover Manager into a management group that already has a Failover Manager.] (see screenshot)
When I click on my FOM in the management group, I am not able to do anything; it says: [Could not find storage system with serial number xxx:xxx:xx:xx:xx:xx] (see screenshot).
Please let me know how I can remove or delete the dead FOM server/VM.
Or do I really have to delete the whole Management Group and create a new one, which would mean 2-3 days of work?
I'm trying to set up a simple HPE StoreVirtual VSA infrastructure:
HV01 and HV02 have their 1GbE adapters connected to a switch.
10GbE : 10.10.10.0/28
Both Hyper-V servers can communicate between each other and with VSA virtual machines through the 10GbE adapter.
CMC has been deployed on HV01, I created a Management Group which contains SRV-VSA01 and SRV-VSA02.
Once done, I just had to refresh iSCSI targets to get the freshly created volume.
And here we are now. I'm facing some performance issues and, I have to say, I'm not all that confident about my setup/configuration.
When I run CrystalDiskMark I get extremely low write speeds.
Results when I run CrystalDiskMark (5/1GB) on the hard-drive directly on the HyperV:
Results when I run CrystalDiskMark (5/1GB) on the volume presented by VSA (Network-Raid10):
Results when I run CrystalDiskMark (5/1GB) on the volume presented by VSA (Network-Raid0):
Results on SRV-HV02 are quite similar.