HPE Storage Solutions Forum - Storage Area Networks (SAN) - Small and Medium Business



Help connecting two new Brocade SAN switches to C7000 Enclosure. Port Mapping missing.

I have two c7000 blade enclosures that I have consolidated into one (we combined two datacenters into a single location). I have six HP StorageWorks P2000 SANs that I am now trying to connect to this single c7000, so I installed two new Brocade 8/12c SAN switches (bays 7 and 8) to support the FC connections to the added SANs. In the OA the cards appear healthy (see the attached Brocade1.PNG and Brocade2.PNG), but they aren't auto-mapping to any of the blades. I can't find anywhere in the OA to enable them so that all blades can see these switches and connect to their respective SANs. What am I missing?
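For reference, the OA CLI can show how the blade mezzanine ports map to the interconnect bays (command names from the OA CLI reference; double-check against your OA firmware version):

# show interconnect list
# show interconnect info 7
# show server port map all
(bay numbers are the ones from this post)

If memory serves, interconnect bays 7 and 8 in a c7000 are hard-wired to ports 3/4 of a quad-port mezzanine 2 card (or to mezzanine 3 on full-height blades), so blades with only dual-port FC mezzanines in slots 1 or 2 will never map to those bays; I would verify that first.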

 

How do I switch MSA 2042 ports from FC to iSCSI?

 

I have an MSA 2042 whose host ports are enabled for FC by default. I want to set ports 3 and 4 on controller 1 to iSCSI, but this does not appear to be possible from the graphical interface.
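In case it helps anyone searching: on the MSA 2040/2042 the converged host ports are switched from the CLI rather than the graphical interface. A minimal sketch, assuming firmware with the set host-port-mode command (verify the exact syntax against your CLI reference; the IPs and port list are illustrative):

# set host-port-mode FC-and-iSCSI
(changes the port personality; the controllers restart to apply it)
# set host-parameters ip 192.168.10.50 netmask 255.255.255.0 ports a3
# set host-parameters ip 192.168.10.51 netmask 255.255.255.0 ports a4

The ports also need the matching iSCSI SFP+ transceivers fitted.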

 

MSA 2040 SNMP uptime

Hello,

From the console, show controller-statistics both shows a Power On Time of several months (in seconds, of course).

But from SNMP OID 1.3.6.1.2.1.1.3.0 (sysUpTime) I get an uptime of only a few days.

I don't see anything in the logs and I am not aware of any storage restart. Has anyone else experienced this? That OID should be the standard one for system uptime. What else could have restarted on the MSA besides the controllers?
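For reference, this is how I query it (net-snmp; the community string and IP are illustrative):

# snmpget -v2c -c public 10.0.0.40 1.3.6.1.2.1.1.3.0

One guess: as far as I know the SNMP agent runs on the management controller (MC), which can restart on its own without touching the storage controllers or host I/O, so an MC restart would reset sysUpTime without appearing as a storage restart.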

Thanks for any ideas.

 

HP MSA P2000

We have a P2000 G3 iSCSI SAN. We have configured vdisks, and two terabytes are available and not yet configured.

When we tried to create a vdisk for the available 5 TB, I found no disks to select.
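For reference, the CLI can list how every disk is currently used (commands from the P2000 G3 CLI reference):

# show disks
# show vdisks
(disks already belonging to a vdisk or assigned as spares will not be offered when creating a new vdisk)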

Please advise.

Thanks for any help in advance.

 

Regards,

Mohammed 

 

New firmware for the MSA 2040/2042/1040 and MSA 2050/2052

Howdy All,

I wanted to make sure everyone saw that HPE just released new firmware versions for both the MSA 2040/2042/1040 and the new MSA 2050/2052.

It is highly recommended to update to the new firmware versions immediately.

For the MSA 2040/2042/1040 here is the link to the bulletin for the new GL220P010 firmware:  http://h20566.www2.hpe.com/hpsc/doc/public/display?docId=emr_na-a00025799en_us 

For the MSA 2050/2052 here is the link to the bulletin for the new VL100P001 firmware:  http://h20564.www2.hpe.com/portal/site/hpsc/public/kb/docDisplay/?docId=emr_na-a00025800en_us
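If the SMU route is awkward, the bundle can also be loaded over FTP to the management controller. A minimal sketch, assuming a manage-level account and using an illustrative IP and filename (the exact procedure is in the release notes):

# ftp 10.0.0.40
(log in with a manage-level user)
ftp> put GL220P010.bin flash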

Cheers!
/Kipp

 

MSA2040 SAN Controller failure

Hi everyone,

First of all, I'm not an expert when it comes to HP storage; I'm a networker myself. I have what appears to be a failed controller in an enclosure. Correct me if I'm wrong: there are two controllers (A and B) connecting the four enclosures we have. Controller B seems to have failed. The SAN is still functioning, but we have no redundancy. I have rebooted controller B, but it still did not come back up. The product is out of warranty, so I'm not quite sure where to go from here.
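If you can still reach the CLI on controller A, a few read-only commands will usually narrow it down, and a storage-controller restart can be tried remotely (command names from the MSA 2040 CLI reference):

# show controllers
# show events last 50
# restart sc b
(restart sc only restarts the storage controller in slot B; the surviving controller keeps serving I/O)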

 

MSA 2040 iSCSI 10gb speed option

I've recently set up an MSA 2040 with a 10Gb iSCSI connection, but in the SMU the host port speed option only offers Auto or 1Gb. Is this normal?
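You can cross-check what the ports actually negotiated from the CLI:

# show ports

If memory serves, with 10GbE SFP+ transceivers the link runs fixed at 10Gb and the SMU only exposes Auto; the explicit 1Gb setting applies to the 1Gb SFP/RJ-45 options, so seeing only Auto on a 10Gb port should be normal. Treat that as a guess to verify, though.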

 

MSA2312i controller firmware needed

Hello,

We have an old MSA2000i G2 which was used in a test lab. Its controllers (model AJ803A) died because of a power outage; they display the error "Warning The system is currently unavailable." when accessed via the web interface.

The office is being closed next month and we are scrapping most of the old equipment.  However, as I believe this old MSA still has some value, we would like to donate it to a charity, but this is impossible in its current, inoperable state.

As a result, I would like one last chance at resurrecting the device; otherwise it will have to be scrapped (which seems wasteful). As it appears to be a firmware issue, I am assuming it would still be possible to fix via the FTP upload method. However, I no longer have the firmware files, and the support contract has long since expired.
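For reference, the FTP method I have in mind looks like this (IP and filename are placeholders; it assumes the management controller still answers on the network):

# ftp 192.168.0.1
(log in with a manage-level account)
ftp> put <firmware-bundle>.bin flash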

Is this something that anyone can help with?

Thanks in advance,

 

Stephen.

 

How to handle the "Server Agents: ..." notifications emailed by the system

We are running an HP StorageWorks X1600. We receive email notifications that alternate between (1) and (2) below on an almost regular cycle.

Looking at the contents, an abnormal temperature (245°C) is apparently being reported from one of the storage bays.

(1) Server Agents: temperature status degraded

(2) Server Agents: temperature status OK

The server itself appears to have no faults, so we are unsure how to respond. Could someone advise on how to handle this?
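For reference, one way to cross-check what the agents are reporting is to walk the ProLiant health MIB temperature table over SNMP (the OID below is from memory, cpqHeTemperatureTable; verify it against the CPQHLTH MIB, and the IP is illustrative):

# snmpwalk -v2c -c public 10.0.0.60 1.3.6.1.4.1.232.6.2.6.8

A sensor that flaps between 245°C and OK on a regular cycle usually suggests a failed sensor or an agent misread rather than real heat, though that is a guess.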

 

MSA 2040 presenting one large volume

Hi all,

We have an MSA 2040 and a second shelf, 48 drives in total. We are using the Storage Management Utility to configure the disks.

We have created three 16-drive RAID-10 vdisks. Each vdisk has one volume. Right now each volume shows up as a separate extent (mount point/share) on the Fibre Channel network.

Is there any way to present these three volumes as one large virtual volume?

I could change the vdisks to a different RAID level so one volume would have more disks, but a single vdisk cannot contain all 48 drives, so I would have the same question with two volumes rather than three.
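One host-side approach, assuming a Linux host and the three volumes appearing as the multipath devices mpatha/mpathb/mpathc (device names illustrative), is to concatenate them with LVM, since as far as I know linear vdisk volumes cannot be joined into one volume on the array itself:

# pvcreate /dev/mapper/mpatha /dev/mapper/mpathb /dev/mapper/mpathc
# vgcreate msa_vg /dev/mapper/mpatha /dev/mapper/mpathb /dev/mapper/mpathc
# lvcreate -l 100%FREE -n msa_big msa_vg
# mkfs.xfs /dev/msa_vg/msa_big

(Windows hosts can do the equivalent with a spanned dynamic volume or Storage Spaces.) Alternatively, if your firmware supports virtual storage (GL200 or later), several disk groups can be added to one pool and a single large virtual volume created on top of it.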

any help or comments would be appreciated!

 

There is a problem with a FRU - MSA2324i

Hey guys,

One of the disks is amber, and I saw this error on one of the vdisks:

There is a problem with a FRU. (FRU type: disk, enclosure: 1, device ID: 5, vendor: HP , product ID: DG0300FAMWN , SN: 3SEXXFE200009013YXXX, version: HPDC, related event serial number: A22614, related event code: 55).

Can I just replace the hard drive with a new one, and will it rebuild?

I'm very new to the company and I'm still trying to understand the way things are set up here.
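Generally a failed disk can be hot-swapped and the vdisk rebuilds on its own, but it is worth confirming the state first (commands from the MSA2000 G2 CLI; this is my understanding, so verify):

# show disks
# show vdisks
(a vdisk in FTDN or CRIT state starts rebuilding automatically once a compatible spare or replacement disk is available; show vdisks reports the rebuild progress)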

 

Command(s) to cleanly shut down an MSA 2040 SAN array

Hello,

In the event of a prolonged power outage, I would like to automate the shutdown of my SAN array and its expansion enclosure, via commands triggered by my UPS units.

I contacted HP phone support about this and got an unsatisfactory answer: you have to be physically in front of the array to power it off. Yet, judging by HP's documentation for the array, it does seem to be possible from the command line.

As I am not a scripting expert, could you help me by providing a script, or the exact command line, that triggers a clean shutdown of the SAN array?

The same question applies to its expansion enclosure: does shutting down the main array (MSA 2040) automatically shut down the expansion enclosure (HP StorageWorks)? If not, in what order should they be shut down, and which command lines shut down the expansion?

Thanks for your help,

Cyril
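A sketch of the kind of command meant here, built on the MSA CLI shutdown command (IP and account are illustrative; whether your UPS software can trigger it depends on the UPS):

# ssh manage@192.168.0.1 shutdown both
(cleanly shuts down both storage controllers; the expansion enclosure has no storage controllers of its own, so to my knowledge it can simply lose power afterwards)

If your firmware does not accept a command appended to the ssh line, a small expect script that logs in and types shutdown both should do the same.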

 

MSA 2040 Tiering very low IOPS and awful performance

Something is wrong with the MSA 2040 performance tiering: I can only get about 5K IOPS out of it, while another SAN, a P2000 G3, provides 16K IOPS for the same workload.

Both SANs are FC, connected through HP 8/24 SAN switches. Can you help me diagnose the issue? I am running out of ideas.

We are using HP DL360p Gen8 servers as Hyper-V hosts for the virtualised workload.
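To compare the two arrays under the same load, the CLI statistics commands are a reasonable start (names from the MSA 2040 CLI reference):

# show controller-statistics
# show volume-statistics
# show disk-statistics
(look for one tier or disk group saturating while the rest sit idle; with auto tiering, new data lands on the standard tier and is only promoted to SSD over time)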

 

HBA Driver

Dear Friends,

I am looking for the following HBA driver (Linux) to work with an HPE MSA 2040 SAN:

HP P/N: AJ764-63002
SP P/N: 489191-001
S/N: 8C972112EC
MFR P/N: PX2810403-20 M

I will be thankful if you can direct me to the download page.
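For what it is worth, this part number appears to correspond to a QLogic-based 8Gb FC HBA, which the in-kernel qla2xxx driver should cover (my assumption; please correct me). A quick way to check on the Linux side:

# lspci -nn | grep -i fibre
# modinfo qla2xxx | head
# ls /sys/class/fc_host/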
Regards,

Khalid

 

HDD for MSA2312sa

Hi All,

I have an HDD with P/N 627114-001 and GPN 507129-010; is it compatible with my MSA2312sa?

thanks

 

MSA2324i - Degraded Out Port?

We have a degraded Out port on one of our SAN controllers and are also experiencing sluggish performance. Could anyone shed some light on how we can fix this?

I have attached a screenshot so you know what I mean. Below are the results of the show system and versions console commands:

 

# show system
System Information
------------------
System Name: MSA2324i
System Contact: Admin
System Location: Server Room
System Information: Shelf1
Vendor Name: HP StorageWorks
Product ID: MSA2324i
Product Brand: MSA Storage
SCSI Vendor ID: HP
Enclosure Count: 3
Health: OK


# versions
Controller A Versions
---------------------
Storage Controller CPU Type: Athlon 2600+ 1600MHz
Storage Controller Code Version: M114P01
Memory Controller FPGA Code Version: F300R22
Storage Controller Loader Code Version: 19.009
Management Controller Code Version: W441R57
Management Controller Loader Code Version: 12.015
Expander Controller Code Version: 1118
CPLD Code Version: 8
Hardware Version: 53

Controller B Versions
---------------------
Storage Controller CPU Type: Athlon 2600+ 1600MHz
Storage Controller Code Version: M114P01
Memory Controller FPGA Code Version: F300R22
Storage Controller Loader Code Version: 19.009
Management Controller Code Version: W441R57
Management Controller Loader Code Version: 12.015
Expander Controller Code Version: 1118
CPLD Code Version: 8
Hardware Version: 53
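If it helps, these read-only commands report FRU and enclosure health (from the MSA2000 G2 CLI):

# show enclosure-status
# show frus
# show events last 50

A degraded Out port on an expansion path often points at a cable or expander on that link rather than the controller itself, though that is a guess on my part.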

 

MSA2324fc Unable to communicate with storage controller

Hello,
We use an MSA2324fc with two controllers. The disks are running fine, but we can no longer configure anything. Via the SMU we get "The system is currently unavailable", and the CLI cannot execute any config commands; we get "Unable to communicate with Storage Controller. Please retry the command." We are also unable to restart the SC, and a rescan doesn't help.
The attached show configuration output looks very strange.
Is there any way to solve this without downtime?
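One thing that may be possible without downtime, assuming the fault sits in the management controllers rather than the storage controllers: to my knowledge restarting the MCs does not interrupt host I/O (unlike restart sc):

# restart mc a
# restart mc b
(or restart mc both; command from the MSA2000 CLI reference)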

 

MSA 2040 Auto Tiering, Terrible performance ...

Hi all,

Hoping someone might be able to help. We have an HP MSA 2040 with auto tiering; the disk groups, pools and volumes are configured like so:

4x 1TB SSD RAID5

9x 1.8TB 10k SAS RAID5

9x 1.8TB 10k SAS RAID5

All of these sit in a single virtual pool. In the pool I have two volumes configured, Vol1 and Vol2, at 10TB (or thereabouts), assigned as Cluster Shared Volumes (CSVs); the volumes are set to the No Affinity tier preference, as per the best practices.

I am using Windows 2016 with Hyper-V and failover clustering.  We currently have two nodes.

Our hosts are directly connected via two 10Gb NICs (one to each controller) on the same IP subnet; for testing purposes I have disabled one NIC and configured round-robin as failover only. Jumbo frames are not configured, but even when they are, the performance difference is negligible.

Performance-wise, from a Hyper-V VM I use IOMETER and load a 20GB disk with a 4KB 100% sequential write access profile, and I get a pitiful 8K IOPS.

From the Hyper-V host I do the same and get better, but not by much: 18K IOPS. I know 4KB 100% sequential write is a lousy real-world test, but it should be one the SAN can easily fulfil at up to around 80,000 IOPS from what I have read.
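For a second opinion on the numbers, diskspd gives a comparable 4KB sequential-write run from the host (flags from memory, so check the diskspd documentation; the file path is illustrative):

# diskspd.exe -c20G -d60 -b4K -w100 -t8 -o32 -Sh C:\ClusterStorage\Volume1\iotest.dat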

I can't readily see any errors on the SAN or the host.

My question is: what the hell have I done wrong? :)

 

MSA 2040, snapshots problem

Hello!

We had a controller failure; it has been replaced, but since then we have had a problem with snapshot creation:

 

 create snapshots volumes volume snap
 Error: The specified name is already in use. (2017-08-03  09:30:24)

The snapshot actually is created; I can see it using

show snapshots snap-pool pool-name

It can be even mapped to host, but host doesn't see it :-(

There are also frequent errors during snapshot deletion:

delete snapshot snap
Error: Command was aborted by user. - One or more snapshots were not deleted.

And sometimes I get:

Error: The MC is not ready. Wait a few seconds then retry the request.
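Given the "MC is not ready" errors, I am wondering whether restarting the management controllers would clear it; to my knowledge this does not affect host I/O:

# restart mc both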

 

How can I solve this problem?

Thank you!

 

Windows XP 64 - MSA 2040 Chassis Driver

I'm trying to find a Windows XP 64 driver for the MSA 2040 chassis to prevent Windows from prompting to install a driver each time the server is restarted.

We have a mix of Win7 x64, Win2012 R2 and several WinXP 64 servers, and only the XP servers are unable to identify the chassis. It's more of a cosmetic prompt, but it is annoying the customer.

Regards,

R3.
