HPE Storage Solutions Forum - Storage Area Networks (SAN) - Small and Medium Business
I have 2 C7000 Blade Enclosures that I have pieced into one large one (combined two datacenters into one location). I have 6 HP P2000 StorageWorks SANs that I am now trying to connect to this single C7000, and I connected two new Brocade 8/12c SAN switches (bays 7 & 8) to support the FC connections to the added SANs. When I go to the OA, the cards seem fine, as pictured below, but they aren't auto-mapping to any of the blades. I can't find anywhere to enable them in the OA so that all blades can see these cards and connect to their respective SANs. What am I missing?
I have an MSA 2042, whose host ports are enabled for FC by default. I want to enable iSCSI on ports 3 and 4 of controller 1, but it can't be done in the graphical interface.
From the console, show controller-statistics both shows a Power On Time of several months (in seconds, of course).
But from the standard SNMP system-uptime OID (sysUpTime, 1.3.6.1.2.1.1.3.0) I get an uptime of only a few days.
I don't see anything in the logs, and I am not aware of a storage restart. Has anyone experienced this? The OID should be the standard one for system uptime. What else could have been restarted but the controllers on the MSA?
Thanks for any idea.
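One thing worth checking before suspecting a restart: the two counters use different units. The controller statistics report plain seconds, while SNMPv2 sysUpTime is a TimeTicks value in hundredths of a second, held in a 32-bit counter that wraps roughly every 497 days. A minimal sketch of the unit arithmetic (generic SNMP behaviour, not specific to the MSA):

```python
# SNMPv2 sysUpTime is a 32-bit TimeTicks counter in hundredths of a
# second, so it wraps roughly every 497 days even if nothing restarted.
def ticks_to_days(ticks: int) -> float:
    """Convert SNMP TimeTicks (hundredths of a second) to days."""
    return ticks / 100 / 86400

WRAP_DAYS = ticks_to_days(2**32)
print(f"sysUpTime wraps every {WRAP_DAYS:.1f} days")

# A controller up 150 days corresponds to this many ticks:
print(ticks_to_days(150 * 86400 * 100))
```

Misreading ticks as seconds (or vice versa) introduces a factor-of-100 discrepancy, which can easily turn "months" into what looks like "days".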
We have a P2000 G3 iSCSI SAN. We have configured vdisks, and two terabytes are available and not configured.
When we tried to create a vdisk for the available 5 TB, I found no disks to select.
Thanks for any help in advance.
I wanted to make sure everyone saw that HPE just released a new firmware version for both the MSA 2040/2042/1040 and the new MSA 2050/2052.
It is highly recommended to immediately update to the new firmware versions.
For the MSA 2040/2042/1040 here is the link to the bulletin for the new GL220P010 firmware: http://h20566.www2.hpe.com/hpsc/doc/public/display?docId=emr_na-a00025799en_us
For the MSA 2050/2052 here is the link to the bulletin for the new VL100P001 firmware: http://h20564.www2.hpe.com/portal/site/hpsc/public/kb/docDisplay/?docId=emr_na-a00025800en_us
First of all, I'm not an expert when it comes to HP Storage; I'm a networker myself. I have what appears to be a failed controller on an enclosure. Correct me if I'm wrong: there are 2 controllers (A and B) connecting the 4 enclosures we have. Controller B seems to have failed. The SAN is still functioning, but we have no redundancy. I have rebooted controller B, but it still did not come back up. The product is out of warranty, so I'm not quite sure where to go from here. Where should I go from here?
I've recently set up an MSA 2040 with a 10Gb iSCSI connection, but in the SMU the host port speed option only offers Auto or 1Gb. Is this normal?
We have an old MSA2000i G2 which was used in a test lab. Its controllers (model AJ803A) died because of a power outage: they display the error "Warning The system is currently unavailable." when accessed via the web interface.
The office is being closed next month and we are scrapping most of the old equipment. However, as I believe this old MSA still has some value, we would like to donate it to a charity, but this is impossible in its current, inoperable state.
As a result, I would like one last chance at resurrecting the device; otherwise it will have to be scrapped (which seems wasteful). As it appears to be a firmware issue, I am assuming it would still be possible to fix via the FTP upload method. However, I no longer have the firmware files, and the support contract has long since expired.
Is this something that anyone can help with?
Thanks in advance,
Thank you for your continued support. We are using an HP StorageWorks X1600.
(1) Server Agents: temperature status Degraded
(2) Server Agents: temperature status OK
We have an MSA 2040 and a second shelf, 48 drives in total. We are using the Storage Management Utility to configure the disks.
We have created three 16-drive RAID-10 vdisks. Each vdisk has one volume. Right now each volume shows up as an extent (mount point/share) on the Fibre Channel network.
Is there any way to present these 3 volumes as 1 large virtual volume?
I could change the vdisks to a different RAID level so one volume would have more disks, but a single vdisk is not able to have all 48 drives in it, so I would have the same question with 2 volumes rather than 3.
Any help or comments would be appreciated!
One of the disks is amber, and I saw this error on one of the vdisks:
There is a problem with a FRU. (FRU type: disk, enclosure: 1, device ID: 5, vendor: HP , product ID: DG0300FAMWN , SN: 3SEXXFE200009013YXXX, version: HPDC, related event serial number: A22614, related event code: 55).
Can I just replace the hard drive with a new one, and will it rebuild?
I'm very new to the company and I'm still trying to understand the way things are set up here.
In the event of a prolonged power outage, I would like to be able to automate the shutdown of my SAN array and its expansion enclosure, via commands triggered by my UPS units.
For this, I contacted HP phone support, who gave me an unsatisfactory answer: you have to be physically in front of the array to power it off. Yet, according to the HP documentation for the array, it does seem to be possible from the command line.
Since I'm not an expert in scripting, could you help me by providing a script, or the complete command line, to trigger this "clean" shutdown of the SAN array, please?
The same goes for its expansion enclosure: does shutting down the main array (MSA 2040) automatically shut down the expansion enclosure (HP StorageWorks)? If not, in what order should they be shut down, and which command lines shut down the expansion?
Thanks for your help,
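Not a definitive answer, but the MSA/P2000 CLI does document a "shutdown" command (shutdown a|b|both) for cleanly stopping the storage controllers, which can be invoked over SSH from the UPS management host. A minimal sketch, where the array address and the "manage" account are placeholders to adapt to your environment:

```python
# Hedged sketch: build the SSH invocation that runs "shutdown both" on the
# array's CLI. Host address and user below are placeholder assumptions.
import subprocess

def msa_shutdown_cmd(host: str, user: str = "manage") -> list[str]:
    """Build the ssh command that issues 'shutdown both' on the MSA CLI."""
    return ["ssh", f"{user}@{host}", "shutdown both"]

cmd = msa_shutdown_cmd("10.0.0.1")
print(" ".join(cmd))
# To actually trigger the shutdown from a UPS event script:
# subprocess.run(cmd, check=True)
```

The expansion enclosures themselves have no software off switch; once the controllers have stopped, they are powered off at the PDU/UPS.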
Something is wrong with MSA 2040 performance tiering: I can only get about 5K IOPS out of it, while I have another SAN, a P2000 G3, which is providing 16K for the same workload.
Both SANs are FC and are connected through HP 8/24 SAN switches. Can you help me diagnose the issue, as I am running out of ideas?
We are using HP DL360p Gen8 servers as Hyper-V hosts for the virtualised workload.
I am looking for the following HBA driver (Linux) to work with an HPE MSA 2040 SAN:
HP P/N: AJ764-63002
I would be thankful if you could direct me to the download page.
I have an HDD with P/N 627114-001 and GPN 507129-010; is it compatible with my MSA2312sa?
We have a degraded Out Port on one of our SAN controllers and are also experiencing sluggish performance. Could anyone shed some light on how we can fix this?
I have attached a screenshot so you know what I mean. Below are the results of the show system and versions console commands:
# show system
Controller B Versions
We use an MSA2324fc with 2 controllers. The disks are running fine, but we are no longer able to configure anything. Via the SMU we get "The system is currently unavailable", and the CLI can't execute any config commands; we get "Unable to communicate with Storage Controller. Please retry the command." We are also not able to do a restart SC, and rescan doesn't help either.
The attached show configuration output looks very strange.
Is there any way we can solve this issue without downtime?
Hoping someone might be able to help. We have an HP 2040 with auto-tiering; the disk groups, pools and volumes are configured like so:
4x 1TB SSD RAID5
9x 1.8TB 10k SAS RAID5
9x 1.8TB 10k SAS RAID5
All in a single virtual pool. In the virtual pool I have two volumes configured, Vol1 and Vol2, at 10TB (or thereabouts), assigned as Cluster Shared Volumes (CSVs); the volumes are set to no-affinity re-tiering as per the best practices.
I am using Windows 2016 with Hyper-V and failover clustering. We currently have two nodes.
Our hosts are directly connected via two 10Gb NICs (one to each controller) on the same IP subnet; for testing purposes I have disabled one NIC and configured round robin as failover only. Jumbo frames are not configured, but even when they are, the performance difference is negligible.
Performance-wise, from a Hyper-V VM I use IOMETER and load a 20GB disk with a 4KB 100% sequential write access profile, and get a pitiful 8K IOPS.
From the Hyper-V host I do the same and get better, but not by much: 18K IOPS. I know the 4KB 100% seq/write profile is a lousy real-world test, but it should be one the SAN can easily fulfill, up to around 80,000 IOPS from what I read.
Can't readily see any errors on the SAN or the host.
My question is, what the hell have I done wrong :)
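As a back-of-envelope sanity check of the numbers above, converting an IOPS figure at a given block size into throughput shows how far below the link's capacity these results sit (10GbE line rate is roughly 1192 MiB/s):

```python
# Convert an IOPS figure at a given block size to throughput in MiB/s,
# to compare against the ~1192 MiB/s line rate of a 10GbE link.
def mib_per_s(iops: float, block_bytes: int = 4096) -> float:
    """Throughput in MiB/s for a given IOPS count and block size."""
    return iops * block_bytes / 2**20

for iops in (8_000, 18_000, 80_000):
    print(f"{iops:>6} IOPS @ 4KiB = {mib_per_s(iops):6.1f} MiB/s")
```

Even the hoped-for 80K IOPS at 4KiB is only about 312 MiB/s, so the bottleneck in the results above is not raw network bandwidth.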
We had one controller failure; it has been replaced, but since then we have a problem with snapshot creation:
create snapshots volumes volume snap
The snapshot actually gets created; I can see it using
show snapshots snap-pool pool-name
It can even be mapped to a host, but the host doesn't see it. :-(
There are also frequent errors during snapshot deletion:
delete snapshot snap
And sometimes I get:
Error: The MC is not ready. Wait a few seconds then retry the request.
How can I solve this problem?
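For the intermittent "MC is not ready" error, the message itself suggests the workaround: wait a few seconds and retry. If the snapshot commands are scripted, a generic retry wrapper keeps that transient failure from aborting the whole job. A sketch with an illustrative stub in place of the real CLI call (all names here are hypothetical):

```python
# Generic retry wrapper for scripted CLI calls that can hit transient
# "MC is not ready" failures: wait, then retry, as the error suggests.
import time

def with_retries(fn, attempts: int = 5, delay_s: float = 5.0):
    """Call fn(), retrying up to `attempts` times on RuntimeError."""
    last_err = None
    for _ in range(attempts):
        try:
            return fn()
        except RuntimeError as err:   # transient failure from the CLI
            last_err = err
            time.sleep(delay_s)
    raise last_err

# Illustrative stub standing in for the real CLI invocation: it fails
# twice, then succeeds, mimicking an MC that needs time to come up.
calls = {"n": 0}
def run_cli():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("The MC is not ready. Wait a few seconds then retry the request.")
    return "Success"

print(with_retries(run_cli, delay_s=0.01))
```

This only papers over the symptom, of course; an MC that is frequently "not ready" after a controller replacement is still worth raising with support.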
I'm trying to find a Windows XP 64-bit driver for the MSA 2040 chassis, to prevent Windows from prompting to install a driver each time the server is restarted.
We have a mix of Win7 x64, Win2012 R2, and several WinXP 64 servers, and only the XP servers are unable to identify the chassis. It's more of a cosmetic prompt, but it's annoying the customer.