HPE Storage Solutions Forum - HPE StoreVirtual Storage/LeftHand



How to handle the "Server Agents: ..." email notifications

Hello, we are using an HP StorageWorks X1600.

We receive email notifications that alternate between (1) and (2) below on an almost regular basis.

Looking at the contents, it appears that an abnormal temperature (245°C) is being reported for one of the storage bays.

(1) Server Agents: Temperature status degraded

(2) Server Agents: Temperature status OK

The server itself does not appear to have any problems, so we are unsure how to deal with this.

Please advise on how to address it.

 

12.7 Upgrade - VSA networking now reporting slow connection

The VSA units are connected at 10Gb and ping latency is <1 ms, yet since the upgrade to 12.7 they are being reported as having a slow network connection.

Anyone else seeing this?

The network is performing without issue and ping latency to all units is <1 ms, so I cannot see a problem; it is strange that this is being flagged as a warning.

 

HPE 4130 upgrade to 8 disks

I am working with a customer who has a SAN cluster with 1 x 4130 (4 x 600GB) and 1 x 4330 (8 x 900GB) - don't ask!

I want to increase the number of disks in the 4130 to 8 x 600GB so that we can increase the available capacity of the cluster.

I have inserted a drive in slot 5, the lights come on, but nothing shows in the CMC.

Is the 4130 limited to 4 disks?

I cannot find information which confirms or denies this - several references to 4 free bays, pictures of units with 8 drives, but also diagrams showing only 4 drives.

Your help would be most appreciated.

Thanks

Martin

 

Failover Manager running on VMware host

The Failover Manager is running on local storage on one of the VMware hosts, and we need to perform maintenance on that host. Can we shut the Failover Manager down, and will the remaining two SANs keep quorum? Or will shutting down the Failover Manager cause an issue?
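For what it's worth, here is a minimal sketch of the majority-quorum arithmetic, assuming the management group runs three managers in total (one on each of the two storage nodes plus the FOM); the counts are illustrative, not taken from your actual configuration.

```python
# Majority-quorum check for a StoreVirtual management group.
# Assumption: three managers total (two storage nodes + the FOM).

def has_quorum(total_managers: int, managers_up: int) -> bool:
    """Quorum holds while more than half of the managers are running."""
    majority = total_managers // 2 + 1
    return managers_up >= majority

total = 3                      # node1, node2, FOM
print(has_quorum(total, 3))    # True  - everything up
print(has_quorum(total, 2))    # True  - only the FOM shut down for maintenance
print(has_quorum(total, 1))    # False - a second outage during the window loses quorum
```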

 

FAILED CLIENT SERVER TO SV3200 STORAGE CONNECTION

I have finished my initial setup of the SV3200, but I have failed to connect the storage to the server. The configuration is as follows:

Controller 1:  bond 0:    10.1.1.153, 10.1.1.154 /24 on 10GbE ports

Controller 2:  bond 0:    10.1.1.151, 10.1.1.152 /24 on 10GbE ports

VIP : 10.1.1.155 /24

The client server is connected to the same switch.

I cannot ping between the different bonds: 10.1.1.153 (on SC1) to 10.1.1.151 (SC2), or 10.1.1.153 to 10.1.1.152.

I also cannot connect/ping to a server at 10.1.1.160 when I use an iSCSI connection from the bonded interfaces.

Please urgently help.
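For reference, here is a minimal reachability check that could be run from the client server, assuming Python is available there; 10.1.1.155 is the VIP and 10.1.1.153/10.1.1.151 are controller bond addresses from the configuration above, and 3260 is the standard iSCSI target port (the individual controller addresses may or may not answer on it, but the VIP should).

```python
# Quick TCP reachability check from the client server towards the SV3200.
import socket

def check_tcp(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in ["10.1.1.155", "10.1.1.153", "10.1.1.151"]:
    print(host, "iSCSI port open" if check_tcp(host, 3260) else "no iSCSI response")
```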

 

ESXi version for VSA

If I don't care about having the VSA ESXi host visible in vCenter, are there any problems with using a free ESXi license?

 

P4330 (B4E17A) controller error

Hello,

A few months ago we had an incident caused by a firmware bug in the P4330 controller. HP gave us a patch for this issue.
Has anybody else had the same problem?
I don't know why that patch is not included in the latest SPP version.

Recently we had the same problem on a node without the controller patch. During the first incident the failed node did not fail over, and HP explained the lack of failover by the old LeftHand version; with the latest issue, the nodes were on 12.6.
I can see the controller error in the iLO log and on the SNMP logging platform, but I don't get any entry in the CMC Event Log. When I try to export the management group support bundle, the logs for the nodes and the FOM are empty. The error message is: "Couldn't create nsm support info Failed to collect events.log., lhn/public/system/info/vloggen/"

Does anybody know how to recover the complete logs in the CMC?

 

Regards

 

 

 

Upgrade from 12.6 to 12.7 fails (cluster reaching its total capacity)

Hello,

The cluster has a total capacity of 40 TB, with 36 TB provisioned (full) and 4 TB free.

The previous updates were always successful, but the latest one to 12.7 fails immediately with the message:
"Cannot perform the upgrade because cluster xxx is reaching its total capacity and has exceeded the limit of 90% full"

Is there any way to perform the upgrade without shrinking or deleting volumes? Or do I have to change one volume to thin provisioning to free up space?

best regards,

 

 

SV3200: need good documentation to set up a multi-site SAN w/ two SV3200 and one quorum

Hi,

Since I'm not able to find any useful setup guide, I'm asking the community. Here is what I have in place:

2 x SV3200, each with 2 x 2-port 1GbE (advanced license installed)

1 x SV3200 Quorum appliance in 3rd site

All are on v13.5 with the most current patch level.

I'd like to build a multi-site SAN

Is there any step-by-step manual or self-made documentation?

Many thanks in advance

Albert

 

SV3200 to replace SV4300 with VMware SRM running

Hi, 

We have two sets of StoreVirtual, SV4330 and SV4300, running in our HQ and DR sites. These StoreVirtual systems are used for VMware vSphere and Site Recovery Manager. Volumes are replicated from the SV4330 in HQ to the SV4300 in DR.

Our SV4300 is out of space and we want to replace it with a new SV3200.

My questions are:

1) Does the SV4330 support volume replication to the SV3200 and vice versa? They are basically two different systems.

2) I can't find any Storage Replication Adapter for the SV3200 on the Internet. Does the SV3200 come with an SRA to support VMware SRM 6.0?

3) I suppose the SV3200 comes with OS v13.x, whereas the highest for the SV4330 is v12.7. Can they work together?

Thank you

 

VSA Cluster Power OFF One Node

I have a 2-node cluster and would like to power off one of the units, which appears to be the "standby" unit (no disk activity lights), to move it to a different location in the rack. Do you recommend powering the entire cluster down, or can I just power this standby unit down, move it, power it back up, and let it re-sync?

 

Adding disks to an existing 2-node VSA cluster

I currently have a configuration with 2 x DL380 Gen9 servers, each with 2 existing physical arrays:

8 x 1.8TB SAS

3 x 400GB SSD

This is presented within the VSA as 2 disks, Tier 0 and Tier 1.

I need to add 2 x 1.8TB disks to each host and want to be certain of the method for doing this and what downtime would be involved.

Do I create a new array at the controller level (i.e. a new RAID 1 array), or can I expand the existing 8 x 1.8TB array to a 10 x 1.8TB array on each host?
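Before deciding, it may help to confirm how the controller currently sees the arrays and whether the new drives show up as unassigned. A rough sketch of how that check could be scripted, assuming the Smart Array CLI (ssacli, or hpssacli on older releases) is installed on the host; controller slots and drive locations will differ on your systems:

```python
# List the Smart Array configuration so the new 1.8TB drives can be seen as
# unassigned before choosing between expanding the array or creating a new one.
import shutil
import subprocess

# Prefer ssacli; fall back to hpssacli on older hosts.
cli = shutil.which("ssacli") or shutil.which("hpssacli")
if cli is None:
    raise SystemExit("Smart Array CLI not found on this host")

# Show every controller with its arrays, logical drives and unassigned drives.
result = subprocess.run([cli, "ctrl", "all", "show", "config"],
                        capture_output=True, text=True, check=True)
print(result.stdout)
```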

Within the VSA, I understand I cannot expand the existing logical disk, so I will need to add the new storage as new disks and then expand the existing volume. Does this require downtime on the VSA?

Thanks

 

Volume resyncing status stuck

Good afternoon, everyone.

Greetings. I have a problem that I have been trying to resolve but have not been able to. The scenario is as follows: I have an HP P4500 with 4 nodes, each in RAID 5. From these nodes, 15 LUNs (volumes) were created. Some time ago one of the disks in the 2nd node failed and was replaced with a new one. The RAID took a while to rebuild, but there were no major issues with that. The problem is that once the RAID finished, the volumes began resyncing and settled into that state within a couple of hours. The point is that, to this day, one of my volumes has been stuck at 83% resyncing for several weeks now without that percentage changing, and it is also showing restriping at 100%.

I tried changing the size of that volume from 1.2 TB (previous state) to 1.5 TB (current state), and the resync dropped back to 82%, but there has been no change since then.

Please let me know if you can help me with this case.

 

Regards.

 

Upgrading a 2-node cluster to a 3-node cluster

Hello all,

I'm currently in a situation where I have three HP DL380s as hypervisors. At the time of installation, someone configured them in such a way that only one of them is used, with local storage attached.

According to the design, the machines should all share their storage using VSA, so this is the situation I'd like to get to. To minimize downtime, the idea is to configure VSA on the other two hypervisors and transfer the data to those machines. Once this is done, the first machine can be added to the 2-node cluster we created. A fourth machine will be configured as the HP VSA FOM.

Could you please tell me if the above scenario is possible? Can we first create a 2-node cluster and then later add a 3rd node to it?

Best regards,

Luc

 

HPE SV VSA support

Why is it so difficult to get the required support on your products? I just want to open a ticket for our StoreVirtual VSA product; however, getting it linked to my account has proven impossible. We purchased 6 HPE SV VSA 2014 3yr E-LTU licenses for our 6 Gen9 hosts and have those licenses and that agreement linked to my HPE account. We also had a VSA Care Pack and a OneView Care Pack purchased for us. When I go to open a ticket I only get the hardware option, and that team says they cannot support the SV VSA; they only support the physical hardware of the host. The SV team says they need a contract number in order to open a ticket for me, but they say the only contract number we were provided is not correct. Why is it so difficult to look up the contracts my company has purchased, or had purchased for them, and open a ticket that way?

 

VSA node not disappearing

I was having some trouble upgrading the VSA storage capacity.

So I wanted to reinstall the VSA virtual machine and build it from the ground up again (then restripe and do the same for host 2).

I put the machine in maintenance mode, deleted it, installed a new one, and gave it the old IP and MAC address.

But instead of replacing the old one in repair mode, a new host was created.

Now the old one is useless and can be deleted, but I cannot figure out how to handle the one that is stuck in repair mode.

 

Can someone please give me some advice?

(Screenshot attached: 2017-07-22 16_05_33-hmbvhv01 - Verbinding met extern bureaublad.png)

 

Existing Cluster - 10Gb upgrade

Hello,

Has anyone gone through the process of upgrading an existing cluster to 10Gb? We have installed the 10 Gb card and RAM, but the process of changing the IP is pretty vague in the 10 Gb upgrade documentation that I've been able to locate.

Our existing environment consists of 2 P4500 G2 nodes running LHOS 12.5, and a FOM running 12.6. The nodes have their two onboard NICs bonded as bond0. All communication occurs on the 192.168.160.x subnet.

What I'd like to do is bond the two new 10 Gb NICs together (bond1) and either use that for all communication on the 192.168.160.x subnet, or use bond1 for LHOS and iSCSI and bond0 for management on the 192.168.170.x subnet.

My first attempt was to assign a new 192.168.160.x IP to bond1 with no gateway specified. When I did so, I found that I couldn't ping or communicate with that IP. Moving the LHOS and management to bond1 didn't reestablish connectivity, so I stopped at that point because I was afraid that I wouldn't be able to manage the node via the CMC.

For my second attempt, I went through the same process, but I assigned a different subnet that the CMC could still communicate with. This prevented the CMC warning about the bonds being on the same subnet and allowed me to manage and ping the new IP. However, as I began moving services to bond1, I received a warning that all of my volumes would become inaccessible until the node became discoverable again.

At this point, I'm thinking that my safest option is to add a third node to the cluster and then remove one node at a time from the cluster and management group, work through the IP changes, and then add the node back. This sounded pretty safe to me, as long as we were OK with the restriping that would take place.

Is there another approach that someone could recommend? I would be willing to take a maintenance window and shut down the whole management group, but in the lab I never found a way to put the cluster into some type of maintenance mode and still be allowed to make IP changes.

Any comments would be welcomed.

 

Cannot log in to management group

Hi,

Really hoping someone can give me a few ideas on what to do here... Background: We had a switch fail that two of our four nodes, and our FOM, were connected to.

Once this was fixed, one of the nodes couldn't be connected to: "Login failed. Read timed out". All nodes pinged fine except the one having the problem; it would not respond to jumbo-packet pings. It did respond to default-size packet pings, so I changed the packet size on the CMC machine to default and successfully logged on. I checked the switch port and jumbo packets were enabled, so I presumed it must be an issue with the NIC on the node. I 'repaired the system' to take it out of the management group and resync the config. This got stuck for a couple of hours, so I cancelled it.
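In case it helps anyone hitting the same thing, here is a rough sketch of the jumbo-frame test described above, assuming the CMC machine is Windows (on Linux the equivalent would be `ping -M do -s 8972`); the node IP is a placeholder, not one of my real addresses:

```python
# Jumbo-frame reachability test from the CMC machine.
# Windows ping flags: -f sets "don't fragment", -l the payload size, -n the count.
# 8972 bytes of payload + 28 bytes of IP/ICMP headers = a 9000-byte frame.
import subprocess

NODE_IP = "192.168.1.10"   # placeholder - replace with the storage node's IP

def ping(ip: str, payload: int) -> bool:
    result = subprocess.run(["ping", "-n", "2", "-f", "-l", str(payload), ip],
                            capture_output=True, text=True)
    # A successful Windows reply line contains "TTL="; that is more reliable
    # than the exit code when fragmentation errors come back.
    return "TTL=" in result.stdout

print("default-size ping:", "ok" if ping(NODE_IP, 32) else "failed")
print("jumbo ping (8972):", "ok" if ping(NODE_IP, 8972) else "failed - check MTU end to end")
```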

The SAN is still up but now I cannot connect to any of the four managers. They all just sit at "Waiting 60 seconds to connect and log in..." then bring up the same "Login failed. Read timed out" error.

I've tried rebooting the server with the CMC on it multiple times, I've tried switching jumbo frames on/off, and I've tried connecting to each manager in turn. I'm just not getting a response from any of them.

Any tips as to how I can get these working? Reboot one or all of the nodes? I don't really want to switch them off and on in case I lose access to the data. These hold all our business-critical data, so I can't afford for them to be down. Fortunately I do have the option to migrate all VMs to a different SAN (it'll just take days), so I can trash everything and start from scratch if need be.

 

Add Node to Management Group Error

Hello,

We have an existing cluster of two P4500 G2 systems running LHOS 12.5, with a FOM running on an ESXi server with local storage. We recently acquired a 3rd node with matching specs.

The problem I've encountered is that when I attempted to add the new node to the existing management group, I received an error and it resulted in the attached screenshot.  The status of the 3rd node is "Joining/leaving management group, Storage system state missing".  The 3rd node isn't running a manager.

If I attempt to log into the 3rd node, the CMC is unable to do so.  If I attempt to log into the 3rd node via the console, I'm prompted for a user and password, but it doesn't accept the mgmt group's username and password.  I did, however, find that admin/admin works.

When I right-clicked the node in the CMC and selected 'Remove from Management Group', I initially received an error stating that I couldn't do so because the management group was running a FOM. I believe I now receive an error indicating that it is unable to log in.

I think my only option at this point is to use the admin/admin credentials to remove the node from the management group.  However, I've never used that option, so I'm unclear on the potential impact.  This issue never arose in the lab, so I'm a little gun-shy about trying it on a production system.

Any advice?

 

Migrate from P4300 single-site cluster to multi-site cluster

We have a 2-node single-site cluster on P4300, SAN/iQ 12.5. We want to migrate to a multi-site cluster. We are adding an additional node to the current cluster and have 3 more nodes to build the additional site with. Can we just change from a single-site to a multi-site cluster? Is there a migration guide that can be used for this?
