HPE Storage Solutions Forum - HPE StoreVirtual Storage/LeftHand



Veeam backup from StoreVirtual 3200 "unsupported" ???

Hello,

We have two different customer environments, each with two new StoreVirtual 3200 systems configured as a cluster. In both environments we see very high latency and even daily connection drops of 10-40 seconds.

We have already opened two support cases about this issue, and as usual HPE tries to blame other components in the environment. Now they even say that it is unsupported to use Veeam in the same environment as the StoreVirtual 3200, and no, they don't mean using it as a backup target but as the primary storage from which I take backups. We tried to explain to them that Veeam uses the standard VMware VADP API and nothing special, but they don't understand and repeated multiple times that we have to remove Veeam from the customers' environments... really?!

How is that possible? Who needs an enterprise storage system that cannot be backed up? Is it true that Veeam is unsupported? Has anyone faced the same problem with HPE support regarding the StoreVirtual 3200?

Regards,
radion1

 

Can I dedicate SSD to a specific volume?

I have a customer who wants to ensure their SQL database sits on the SSD disks of their VSA platform. I am struggling to see a mechanism for ensuring this, as I don't see a way of building a volume and assigning it to the SSD disks only.

I can only see the entire available cluster disk space, which of course includes both the tiered SSD and SAS disks; I would imagine that is by design.

Has anyone been able to find a way around this?

 

SV 3200 Multipath Linux

Hi,

I am testing an SV 3200 with Linux servers.
I would like to set up the multipath configuration on the initiator side.
What are good options in multipath.conf for the SV 3200? For example (a rough sketch of what I have in mind is below):
path_selector
path_grouping_policy
path_checker
.....
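For context, the kind of device section I am experimenting with looks roughly like this. These settings are my own guesses for an ALUA-capable iSCSI array, not values taken from HPE documentation, and the vendor/product strings are the ones older LeftHand/P4000 volumes report, so they may not match what the SV 3200 presents:

# /etc/multipath.conf -- illustrative guess, not validated for the SV 3200
devices {
    device {
        vendor                  "LEFTHAND"       # guess: string reported by P4000-era volumes
        product                 "iSCSIDisk"      # guess as well
        path_grouping_policy    group_by_prio
        prio                    alua
        hardware_handler        "1 alua"
        path_selector           "round-robin 0"
        path_checker            tur
        failback                immediate
        no_path_retry           18
    }
}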

Thx

 

SV3200 MAC flapping

Hello there,

Our SV3200 has 2 LACP bonds (4x 1Gbps each) to a Cisco 3850 (2-switch stack).

The way we connected it is very similar to this (taken from SV3200 user guide).

[cabling diagram from the SV3200 user guide]
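For reference, the switch side of each bond is configured roughly like this; the port-channel and interface numbers are illustrative, not our exact ones:

interface Port-channel11
 description SV3200 controller 1 bond0
 switchport access vlan 99
 switchport mode access
!
interface range GigabitEthernet1/0/1 - 2, GigabitEthernet2/0/1 - 2
 switchport access vlan 99
 switchport mode access
 channel-group 11 mode active
!
(Port-channel12 for controller 2 is configured the same way on four other ports.)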

LACP seems to be established properly; however, we're observing seemingly random MAC flaps a few times a day:

*Sep 5 13:28:31.891: %SW_MATM-4-MACFLAP_NOTIF: Host 0060.0869.97ef in vlan 99 is flapping between port Po12 and port Po11 
*Sep 5 13:33:46.353: %SW_MATM-4-MACFLAP_NOTIF: Host 0060.0869.97ef in vlan 99 is flapping between port Po11 and port Po12
*Sep 5 13:33:46.447: %SW_MATM-4-MACFLAP_NOTIF: Host 0060.0869.97ef in vlan 99 is flapping between port Po11 and port Po12 
*Sep 5 13:39:01.887: %SW_MATM-4-MACFLAP_NOTIF: Host 0060.0869.97ef in vlan 99 is flapping between port Po12 and port Po11
*Sep 5 13:39:01.978: %SW_MATM-4-MACFLAP_NOTIF: Host 0060.0869.97ef in vlan 99 is flapping between port Po12 and port Po11 
*Sep 5 13:44:16.548: %SW_MATM-4-MACFLAP_NOTIF: Host 0060.0869.97ef in vlan 99 is flapping between port Po11 and port Po12
*Sep 5 13:44:16.648: %SW_MATM-4-MACFLAP_NOTIF: Host 0060.0869.97ef in vlan 99 is flapping between port Po11 and port Po12 
*Sep 5 13:49:31.986: %SW_MATM-4-MACFLAP_NOTIF: Host 0060.0869.97ef in vlan 99 is flapping between port Po12 and port Po11
*Sep 5 13:49:32.078: %SW_MATM-4-MACFLAP_NOTIF: Host 0060.0869.97ef in vlan 99 is flapping between port Po12 and port Po11 

 

Any ideas what could be causing this?

Or at least any clue how to troubleshoot it? The MAC address is strange and doesn't belong to any device we know of - 0060.0869.97ef

 

Thanks!

 

 

StoreVirtual 3200 First Time Setup Questions

I am setting up my first StoreVirtual SAN: an SV3200 running StoreVirtual OS 13.5 with 6x 900GB SAS drives. I accepted the defaults during setup and ended up with two RAID 5 volumes with a usable capacity of 3.18TB (out of 4.91TB raw). We have never set one of these up before, so we have a few questions.

Should I delete these RAID volumes and just create a single RAID 5 volume? Our idea was to use only one storage controller and not both. Or should we be using both for maximum performance?

Secondly, the management IPs are still on the defaults of 172.16.253.201/172.16.253.203. Should I set the management IP addresses to the same network as our server network so that we can administer the SV3200 and apply software updates, or should we just add a gateway on the same subnet?

The SV3200 will be used as storage for a single Hyper-V host running Windows Server 2016 Datacenter. Should we install the StoreVirtual DSM on the Hyper-V host for management and maintenance of the SV3200? Is this the recommended best practice?

Thanks,

  

 

HP StoreVirtual 4730 LHOS 11.5 License Issue after re-adding repaired storage to the cluster

Hello all,

I have just experienced a very annoying problem after adding a repaired StoreVirtual 4730 node (OS version 11.5) to the cluster: it shows that the storage license is in violation and gave me 60 days of evaluation, after which all drives will become unavailable.

What I did: I have 4 nodes of 4730 and one of them needed repair. I put it in repair mode and a ghost node was created in its place. When I added the repaired node back into the cluster I mistakenly didn't exchange it; instead I added it to the cluster, and now both the ghost and the repaired node are present in the cluster. I assume the license violation is due to the same MAC address appearing on both the ghost and the repaired node. Is there any solution, please? How can I get past this issue?

Regards,

Parsa

 

Renaming Management Group

Hi,

I saw in an old post (2011) that it's not possible to change the name of a management group.

Is this still true in 2017?

Thanks for your help

 

StoreVirtual and Quorum Witness

Hi All, 

 

I am looking for some clarification for my HPE Storage study. 

I got to the StoreVirtual and quorum configuration.

What will happen in a 2-node StoreVirtual VSA configuration if the quorum is not available?

The other question is: in the real world, when would you stick with the quorum witness configuration and when would you use the Failover Manager?

 

Thanks, 

Piotr

 

Performance Center REST API help

 Hi.

I'm using HP Performance Center 12.20 and trying to get detailed information about past runs through the REST API.

Unfortunately, the information I get in the responses is very limited and doesn't contain critical fields like "execution-date".

 

<Runs xmlns="http://www.hp.com/PC/REST/API">
  <Run>
    <TestID>xxx</TestID>
    <TestInstanceID>xxx</TestInstanceID>
    <PostRunAction>Do Not Collate</PostRunAction>
    <TimeslotID>xxx</TimeslotID>
    <VudsMode>false</VudsMode>
    <ID>xxx</ID>
    <Duration>xxx</Duration>
    <RunState>Before Creating Analysis Data</RunState>
    <RunSLAStatus>Not Completed</RunSLAStatus>
  </Run>
</Runs>

 

I've tried to get additional fields with queries, but to no avail. It is always the same structure regardless.
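For reference, this is roughly how I am calling the API and parsing the response. It is a Python sketch: the host, domain, project and credentials are placeholders, and the authenticate and Runs paths are the ones I use against PC 12.20, so please adjust for your environment.

import requests
import xml.etree.ElementTree as ET

BASE = "http://pcserver/LoadTest/rest"       # placeholder host
DOMAIN, PROJECT = "MYDOMAIN", "MYPROJECT"    # placeholders

session = requests.Session()

# Authenticate once; the server returns an LWSSO cookie that the session keeps.
session.get(BASE + "/authentication-point/authenticate",
            auth=("user", "password"))       # placeholder credentials

# List the runs of the project.
resp = session.get("{0}/domains/{1}/projects/{2}/Runs".format(BASE, DOMAIN, PROJECT))
resp.raise_for_status()

ns = {"pc": "http://www.hp.com/PC/REST/API"}
for run in ET.fromstring(resp.content).findall("pc:Run", ns):
    run_id = run.findtext("pc:ID", namespaces=ns)
    state = run.findtext("pc:RunState", namespaces=ns)
    print(run_id, state)   # there is no execution-date element anywhere in the payload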

 

Am I missing something? Is there any way to get additional information about past runs?

 

How to upgrade StoreVirtual or P4000 firmware

The StoreVirtual SAN OS is upgraded to 12.7, but the hardware BIOS and iLO remain at older versions. What are the best practices for upgrading the hardware? Can I use the latest SPP? I'm looking for the hardware/software/firmware/driver recipe for LeftHand OS.

 

Thanks in advance.

 

StoreVirtual VSA 2014 CMC OS requirements

Hello, I'm looking at the compatibility matrix before upgrading my VSA 2014 from 12.5 to 12.7. Currently I have the CMC installed on a Windows 7 (x64) client and it works without issues. On paper I read that the CMC is no longer supported on Microsoft client OSes since version 12.6 (the version I'm using now). Do I really have to create a VM on a server OS, with all the consequent configuration (two NICs, for example, for the management and replication networks), before updating the CMC to 12.7?

Thank you,
Francesco B. B.

 


P4000 CMC 12.7 - Out of space warning

Hello

Can the out-of-space warning threshold be adjusted by the customer?

From the 12.7 Release notes:

The out-of-space warning threshold for clusters was adjusted to a lower value so as to trigger earlier warnings.

The customer wants to raise the alarm level; the cluster is currently at 82%.

Thanks

Jan

 

 

12.7 Upgrade - VSA networking now reporting slow connection

The VSA units are connected at 10Gb and ping latency is <1ms; however, since the upgrade to 12.7 they are reporting as having a slow network connection.

Anyone else seeing this?

The network is performing without issue and ping latency to all units is <1ms, so I am unable to see a problem, yet this is flagged as a warning, which is strange.

 

HPE 4130 upgrade to 8 disks

I am working with a customer who has a SAN cluster with 1 x 4130 (4x 600GB) and 1 x 4330 (8x 900GB) - don't ask!

I want to increase the number of disks in the 4130 to 8x 600GB so that we can increase the available capacity of the cluster.

I have inserted a drive in slot 5, the lights come on, but nothing shows in the CMC.

Is the 4130 limited to 4 disks?

I cannot find information which confirms or denies this - several references to 4 free bays, pictures of units with 8 drives, but also diagrams showing only 4 drives.

Your help would be most appreciated.

Thanks

Martin

 

Failover Manager running on VMware host

The Failover Manager is running on local storage on one of the VMware hosts, and we need to perform maintenance on that host. Can we shut the Failover Manager down, and will the remaining two SANs keep quorum? Or will shutting down the Failover Manager cause an issue?

 

Failed client server to SV3200 storage connection

I have finished my initial setup of the SV3200, but I have failed to connect the storage to the server. The configuration is as follows:

Controller 1:  bond 0:    10.1.1.153, 10.1.1.154 /24 on 10GbE ports

Controller 2:  bond 0:    10.1.1.151, 10.1.1.152 /24 on 10GbE ports

VIP : 10.1.1.155 /24

The client server is connected to the same switch.

I cannot ping between the different bonds: 10.1.1.153 (on SC1) to 10.1.1.151 (SC2), or 10.1.1.153 to 10.1.1.152.

I also cannot connect/ping to a server at 10.1.1.160 when I use an iSCSI connection from the bonded interfaces.

Please help urgently.

 

ESXi version for VSA

If I don't care about having the VSA ESXi host visible in vCenter, are there any problems with using a free ESXi license?

 

P4330 (B4E17A) controller error

Hello,

A few months ago we had an incident caused by a firmware bug in the P4330 controller. HP gave us a patch for this issue.
Has anybody else had the same problem?
I don't know why that patch is not included in the latest SPP version.

Recently we had the same problem on a node that did not have the controller patch. During the first incident the failed node did not fail over, which HP explained at the time by the old LeftHand version; with the latest issue the nodes were on 12.6.
I can see the controller error in the iLO log and in the platform SNMP log, but I don't get any entry in the CMC Event Log. When I try to export the MG support bundle, the logs for the nodes and the FOM are empty. The error message is: "Couldn't create nsm support info Failed to collect events.log., lhn/public/system/info/vloggen/"

Does somebody know how to recover the complete logs in the CMC?

 

Regards

 

 

 

Upgrade from 12.6 to 12.7 fails (cluster reaching its total capacity)

Hello,

The cluster has a total capacity of 40TB, with 36TB provisioned (full) and 4TB free.

The previous updates were always successful, but the latest one to 12.7 fails immediately with the message
"Cannot perform the upgrade because cluster xxx is reaching its total capacity and has exceeded the limit of 90% full"

Is there any way to perform the upgrade without shrinking or deleting volumes? Or do I have to change one volume to thin provisioning to free up space?

best regards,

 

 

SV3200: need good documentation to set up a multi-site SAN w/ two SV3200 and one quorum

Hi,

Since I'm not able to find any useful setup guide, I'm asking the community. Here is what I have in place:

2 x SV3200, each with 2 x 2-port 1GbE (advanced license installed)

1 x SV3200 Quorum appliance in 3rd site

All are on v13.5 and the most current patch level.

I'd like to build a multi-site SAN

Is there any step-by-step manual or self-made documentation?

Many thanks in advance

Albert
