|HPE Storage Solutions Forum - HPE StoreVirtual Storage/LeftHand|
We have two different customer environments, each with two new StoreVirtual 3200 systems configured as a cluster. In both environments we see very high latency and even daily connection drops of 10-40 seconds.
We already opened two support cases regarding this issue, and as always, HPE tries to blame other components in the environment. Now they even say that it is unsupported to use Veeam in the same environment as the StoreVirtual 3200; and no, they don't mean using it as a backup location, but as primary storage from which I take backups. We tried to explain to them that Veeam uses the standard VMware VADP API and nothing special, but they don't understand and repeated multiple times that we have to remove Veeam from the customers' environments... really?!
I have a customer who wants to ensure their SQL database sits on the SSD disks of their VSA platform. I am struggling to find a mechanism for ensuring this, as I don't see a way of building a volume and assigning it to the SSD disks only.
I can only see the entire available cluster disk space, which of course includes the tiered SSD & SAS disks; I imagine that is by design.
Anyone been able to see a way around this?
I am testing an SV3200 with Linux servers.
Our SV3200 has 2 LACP bonds (4x 1Gbps each) with Cisco 3850 (2-stack).
The way we connected it is very similar to this (taken from SV3200 user guide).
LACP seems to be established properly; however, we're observing random MAC flaps a few times a day:
*Sep 5 13:28:31.891: %SW_MATM-4-MACFLAP_NOTIF: Host 0060.0869.97ef in vlan 99 is flapping between port Po12 and port Po11
Any ideas what could be causing this?
Or at least any clue how to troubleshoot it? The MAC address is strange and doesn't belong to any of our devices: 0060.0869.97ef
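Not a fix, but to narrow it down, these standard Cisco IOS show commands (Po11/Po12 and VLAN 99 taken from the log line above) should tell you which physical members the MAC is actually learned on and whether both port-channels are up with the member ports you expect:

```
! Run on the 3850 stack
show mac address-table address 0060.0869.97ef
show etherchannel summary
show lacp neighbor
show spanning-tree vlan 99
```

If the MAC genuinely appears on both Po11 and Po12, it is worth checking whether the SV3200 bonds are cabled and configured the way the switch expects (one bond per port-channel, no member port landed in the wrong channel-group).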
I am setting up my first StoreVirtual SAN: an SV3200 running StoreVirtual OS 13.5 with 6 x 900GB SAS drives. I accepted the defaults during setup and ended up with two RAID 5 volumes with a usable capacity of 3.18TB (out of 4.91TB raw). We have never set up one of these before, so we have a few questions.
Should I be deleting these RAID volumes and creating a single RAID 5 volume instead? Our idea was to use only one storage controller rather than both. Or should we be using both for maximum performance?
Secondly, the management IPs are still on the defaults, 172.16.253.201/172.16.253.203. Should I set the management IP addresses to the same network as our server network so that we can administer the SV3200 and apply software updates, or should we just add a gateway on that subnet?
The SV3200 will be used as storage for a single Hyper-V host running 2016 Datacenter. Should we install StoreVirtual DSM on the Hyper-V host for management and maintenance of the SV3200? Is this the recommended best practice?
I have just experienced a very annoying problem. After adding a repaired StoreVirtual 4730 node (OS version 11.5) to the cluster, it shows a storage license violation and has given me a 60-day evaluation period; after this period, all drives will become unavailable.
What I did: I have 4 nodes of 4730, and one of them needed repair. I put it in repair mode and a ghost node was created in its place. When adding the repaired node back into the cluster, I mistakenly did not exchange it with the ghost; instead, I added it to the cluster as a new node. Now both the ghost and the repaired node appear in the cluster. I assume the license violation is due to the same MAC address on the ghost and the repaired node. Any solution, please, for how I can get past this issue?
I saw in an old post (2011) that it's not possible to change the name of a management group.
Is this still true in 2017?
Thanks for your help
I am looking for some clarification for my HPE Storage study.
I got to the StoreVirtual and quorum configuration.
What will happen in a 2-node StoreVirtual VSA configuration if quorum is not available?
The other question is: in the real world, when would you stick with a quorum configuration, and when would you use the Failover Manager?
I'm using HP Performance Center 12.20 and trying to get detailed information about past runs through REST API.
Unfortunately, the information I get in the responses is very limited and doesn't contain critical fields like "execution-date".
<Runs xmlns="http://www.hp.com/PC/REST/API">
  <Run>
    <TestID>xxx</TestID>
    <TestInstanceID>xxx</TestInstanceID>
    <PostRunAction>Do Not Collate</PostRunAction>
    <TimeslotID>xxx</TimeslotID>
    <VudsMode>false</VudsMode>
    <ID>xxx</ID>
    <Duration>xxx</Duration>
    <RunState>Before Creating Analysis Data</RunState>
    <RunSLAStatus>Not Completed</RunSLAStatus>
  </Run>
</Runs>
I've tried to get additional fields with queries, but to no avail; the structure is always the same regardless.
Am I missing something? Is there any way to get additional information about past runs?
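Once you have the XML back, it helps to flatten every field the API actually returns into a dict so you can see exactly what is and isn't there. A minimal sketch using only the Python standard library; the sample below mirrors the response structure shown above with placeholder values:

```python
# Sketch: flatten each <Run> element of a Performance Center REST
# response into a dict, keyed by the tag's local name, so you can
# inspect which fields the API actually returned.
import xml.etree.ElementTree as ET

# Namespace taken from the response shown in the post above.
NS = {"pc": "http://www.hp.com/PC/REST/API"}

# Placeholder sample mirroring the real response structure.
SAMPLE = """\
<Runs xmlns="http://www.hp.com/PC/REST/API">
  <Run>
    <TestID>101</TestID>
    <ID>202</ID>
    <Duration>45</Duration>
    <RunState>Before Creating Analysis Data</RunState>
  </Run>
</Runs>"""

def parse_runs(xml_text):
    """Return one dict per <Run> element, stripping the namespace."""
    root = ET.fromstring(xml_text)
    return [
        {child.tag.split("}")[-1]: child.text for child in run}
        for run in root.findall("pc:Run", NS)
    ]

if __name__ == "__main__":
    for run in parse_runs(SAMPLE):
        print(run)
```

If a per-run endpoint (e.g. `.../Runs/{id}`) returns more detail than the collection resource, the same parsing applies; I can't confirm from this response alone whether execution-date is exposed there, so that is something to verify against the PC 12.20 REST API guide.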
StoreVirtual SAN OS is upgraded to 12.7 but hardware BIOS and iLO remains at older version. What are the best practices for upgrading the hardware? Can I use the latest SPP to upgrade the hardware? Looking for hardware/software/firmware/driver recipe for Lefthand OS.
Thanks in advance.
Hello, I'm looking at the compatibility matrix before upgrading my VSA 2014 from 12.5 to 12.7. Currently I have the CMC installed on a Windows 7 (x64) client, and it works without issues. On paper I read that the CMC is no longer supported on Microsoft client OSes since version 12.6 (the one I'm using now). Do I really have to create a VM on a server, with all the consequent configuration (for example, two NICs for the management and replication networks), before updating the CMC to 12.7?
Can the out-of-space warning threshold be adjusted by the customer?
From the 12.7 Release notes:
The out-of-space warning threshold for clusters was adjusted to a lower value so as to trigger earlier warnings.
The customer wants to raise the alarm level; it is currently at 82%.
The VSA units are connected at 10Gb and ping latency is <1ms; however, since the upgrade to 12.7 they are reported as having a slow network connection.
Anyone else seeing this?
The network is performing without issue and ping latency to all units is <1ms, so I cannot see a problem; it's strange that this is flagged as a warning.
I am working with a customer who has a SAN cluster with 1 x 4130 (4 x 600GB) and 1 x 4330 (8 x 900GB) (don't ask!).
I want to increase the number of disks in the 4130 to 8 x 600GB so that we can increase the available capacity of the cluster.
I have inserted a drive in slot 5, the lights come on, but nothing shows in the CMC.
Is the 4130 limited to 4 disks?
I cannot find information which confirms or denies this - several references to 4 free bays, pictures of units with 8 drives, but also diagrams showing only 4 drives.
Your help would be most appreciated.
The Failover Manager is running on local storage on one of the VMware hosts, and we need to perform maintenance on that host. Can we shut the Failover Manager down, and will the remaining two SANs keep quorum? Or will shutting down the Failover Manager cause an issue?
I have finished my initial setup of the SV3200, but I have failed to connect the storage to the server. The configuration is as follows:
Controller 1: bond 0: 10.1.1.153, 10.1.1.154 /24 on 10GbE ports
Controller 2: bond 0: 10.1.1.151, 10.1.1.152 /24 on 10GbE ports
VIP : 10.1.1.155 /24
The client server is connected to the same switch.
I cannot ping between the different bonds: 10.1.1.153 (on SC1) to 10.1.1.151 (SC2), or 10.1.1.153 to 10.1.1.152.
I cannot connect/ping to a server at 10.1.1.160 when using an iSCSI connection from the bonded interfaces.
Please help urgently.
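In case it helps from the server side: with a Linux initiator, the usual open-iscsi sequence points discovery and login at the cluster VIP rather than at the individual bond IPs (the VIP 10.1.1.155 below is taken from the configuration above; `iscsiadm` is from the standard open-iscsi package):

```shell
# Discover targets via the cluster VIP
iscsiadm -m discovery -t sendtargets -p 10.1.1.155:3260

# Log in to the discovered target(s)
iscsiadm -m node --login

# Verify the resulting session(s)
iscsiadm -m session -P 1
```

Also check that the server's initiator has been assigned to a volume on the SV3200; if no volume is presented to that initiator, discovery against the VIP may return nothing even when the network path is fine.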
If I don't care about having the VSA ESXi host visible in vCenter, are there any problems with using a free ESXi license?
A few months ago we had an incident caused by a firmware bug of the P4330 controller. HP gave us a patch for this issue.
Recently we had the same problem on a node without the controller patch. During the first incident, the failed node did not fail over, which HP attributed to an old LeftHand version. During the latest issue, the nodes were on 12.6.
Does somebody know how to retrieve the complete log from the CMC?
The cluster has a total capacity of 40TB, with 36TB provisioned (full) and 4TB free.
The previous updates were always successful, but the latest one, to 12.7, fails immediately with the message
Is there any way to perform the upgrade without shrinking or deleting volumes? Or do I have to change one volume to thin provisioning to free up space?
Since I'm not able to find any useful setup guide, I'm asking the community. Here is what I have in place:
2 x SV3200, each with 2 x 2-port 1GbE (adv. license installed)
1 x SV3200 Quorum appliance in 3rd site
all are on v 13.5 and most current patch level
I'd like to build a multi-site SAN.
Is there any step-by-step manual or self-made documentation?
Many thanks in advance