|HPE Storage Solutions Forum - Storage Area Networks (SAN) - Enterprise|
Is there any configurable parameter in scli that can avoid a rescan of devices when there is a connection failure between the server and the storage array? The storage array is an XP array and the server has a QLogic FC card.
I am having trouble finding an upgrade path or files for our old LeftHand arrays. They are P4500s and are still running 8.1.00.0047.0; the upgrade using FTP doesn't work and I really need to get these upgraded. Any ideas?
Does anyone know how to enable multiple connections to a volume on an MSA 2012i? The array has one volume set up this way and I can't for the life of me figure out how it was done. I'm setting up a Hyper-V cluster, which is why I need this capability for a second volume on the stack. I have searched Google for hours and cannot find any information on this.
Thanks for your help!
Last week one of our ESXi machines died. We got a replacement and successfully managed to roam the drives over.
Now we're at the point of powering the VSA back up, but we don't know what to expect or what the right procedure is.
The VSA runs in a cluster with network RAID 10.
Hope this is sufficient information; if you need anything more, please let me know.
So I got these Brocade 16Gb/28c Power Pack+ SAN switches. The question is: how do I monitor the Power Pack, where do I download the software to monitor it, and how do I use it?
These features come with the Power Pack:
Advanced Performance Monitoring
I look forward to your comments. Regards!
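For what it's worth, a minimal sketch of how this is often checked from the switch itself, assuming SSH access to the Fabric OS CLI; command availability and exact syntax vary by FOS release, so treat this as an assumption rather than the documented procedure:

```shell
licenseshow                    # confirm which Power Pack licenses (e.g. Advanced
                               # Performance Monitoring) are installed on the switch
perfmonitorshow --class EE 0   # show end-to-end performance monitors on port 0
                               # (legacy APM; syntax differs between FOS releases)
```

Power Pack features are license keys on the switch itself, so there is typically no separate download for the features; graphical monitoring is usually done through a management application such as Brocade Network Advisor.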
Does anyone have EOS/EOL support information for AG638A-StorageWorks M6412?
At one site I have two DL380 G9 servers, each with a dual-port 8Gb Emulex HBA, cross-connected over multimode fiber to the two FC controllers of an MSA 2040 (each controller with 2 FC ports + 2 iSCSI Ethernet ports + web management), using LC-LC 50/125 multimode cable. Now I have another data center connected with single-mode fiber; the fiber termination at Center A is LC single-mode, and the termination at Center B is also LC single-mode.
Putting an FC SAN switch at each center, I would need several 8 Gbps multimode LC-LC SFPs to connect all the HBA and MSA FC ports on each side, but to connect Center A to Center B I would need one single-mode LC-LC SFP in each SAN switch, using 9/125 cable.
Center A and Center B are on the same LAN, as they have a single-mode link of nearly 500 m.
With Brocade FC SAN switches, what would you recommend for connecting Center A and Center B, given this mix of multimode and single-mode fiber?
Any small help will be appreciated.
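Once the switches are in place, a mixed-media setup like this can be sanity-checked from the Brocade FOS CLI. A sketch, assuming the inter-switch link sits in port 23 (a placeholder):

```shell
sfpshow 23     # verify the transceiver in the ISL port is a long-wavelength
               # (single-mode) SFP, while host/array ports carry SWL SFPs
switchshow     # confirm the ISL port negotiates as an E-Port
islshow        # verify the inter-switch link between Center A and Center B is up
```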
A customer recently acquired a used HP storage system. He purchased all the hardware, but not the management software.
Thanks for your attention!
I have an HP P6350 SAN array and I want to reset the password for the controller. Can you tell me how to do this? I don't want to lose any data on the SAN.
I am not sure if there is a better group for this question or not. Please forgive my limited knowledge on this subject; I am trying to get a better understanding. We have a couple of HP-UX virtual machines that can move between three BL860 i4s. The virtual machines have virtual HBAs defined, each with a unique WWN. When creating a zone, the WWN is presented as a hostname (not the VM name) and WWN combination, which is perfect until the VM migrates to a different host. Then the original hostname is replaced with the new hostname, which I believe breaks the connection between the SAN and the server.
I would like to find out whether it is possible to define a zone that will allow the virtual machine to connect to the SAN after the VM has moved to a different host.
Any suggestions, documentation, or help will be appreciated.
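One common approach is to zone purely by the vHBA's WWPN (pWWN zoning), so that zone membership follows the virtual machine rather than the blade it happens to run on. A sketch using the Brocade FOS CLI; all names and WWNs below are hypothetical placeholders:

```shell
# Alias the vHBA's own WWPN, not the physical blade's port
alicreate "vm1_vhba", "50:01:43:80:aa:bb:cc:dd"
alicreate "array_ctrl1", "50:00:1f:e1:11:22:33:44"
zonecreate "z_vm1_array", "vm1_vhba; array_ctrl1"
cfgadd "prod_cfg", "z_vm1_array"   # add to the existing zone configuration
cfgsave
cfgenable "prod_cfg"
```

Because the zone references the WWPN that migrates with the VM, the hostname label the switch displays is cosmetic, and the zone should keep working after a migration; the array must of course also present its LUNs to that same WWPN.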
A little clarification, please. Based upon an earlier support session I had with HP, I was led to believe that the disk group allocation level should not be above a certain percentage, based upon the performance level of the disks in the group. So much so that, when using near-line 1TB 7200 RPM FATA drives, the allocation level shouldn't go much higher than 50%-60% allocated (the DG is at single-drive protection level, 84 drives). This seemed a bit excessive; however, when we scaled back the allocation levels on our DGs (two other DGs with 15K or 10K drives) to less than 80%, we did notice a marked improvement in performance.
I realize that fewer presentations means less I/O, but I'm looking for a best-practice guide that spells this practice out for me. In doing so, I'm finding many community postings with people stating that they are at 95% allocated and that the allocation level is considered only when things such as disk failure and leveling are taken into account. I should also point out that the attached hosts are either VMware ESXi, MSSQL clustered servers, or Oracle ASM RAC 11.2 or 12.
My question is this: given that I don't have the benefit of Performance Advisor to check LUN and host-port utilization, etc., how should I go about calculating the required allocation levels in order to maintain low latencies? I know that without the tools this will be an inexact science; however, any guidance is appreciated!
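As a rough illustration of the arithmetic involved (not official HP guidance): assuming the EVA "single" protection level reserves capacity equivalent to about two drives per disk group, and using the 60% ceiling quoted in the support session above, the allocatable capacity works out as:

```python
def max_allocatable_tb(drives, drive_tb, protection_drives, ceiling):
    """Usable TB that can be allocated while staying under the ceiling.

    protection_drives: capacity reserved for the protection level,
    expressed as an equivalent number of drives (assumed ~2 for 'single').
    """
    usable_tb = (drives - protection_drives) * drive_tb
    return usable_tb * ceiling

# 84 x 1TB FATA drives, single protection, 60% allocation ceiling
print(round(max_allocatable_tb(84, 1.0, 2, 0.60), 1))  # → 49.2 TB allocatable
```

The same function with an 80% ceiling gives about 65.6 TB, which shows how much usable space the conservative FATA ceiling costs.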
I am trying to replace two failed cache batteries in a production environment. After several replacement attempts with new battery parts, both new parts go directly into a fault state after the replacement procedure. I have also tried replacing the controller enclosure, but the problem persists.
Does anyone have any idea / document to solve this problem?
Thanks a lot,
I have an HP-UX 11.11 server with TimeFinder; EMC Symmetrix storage is used for BCV sync/split, and the BCVs are mounted on a dev box to take backups. It has 3TB of data on 96 LUNs, and all were functioning well until last month. The BCV incremental job has normally taken hardly 5 minutes to complete all these years; now it is taking almost 2 hours, mainly the split operation. There was an I/O error on one of the LUNs on the storage, and because of this we also had an outage at the time. The storage has since been corrected and there are no I/O errors on the LUN, but the BCV split still runs for around 2 hours. Help please.
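A first diagnostic step, sketched with SYMCLI (Solutions Enabler); "backup_dg" is a placeholder for the actual device group name:

```shell
symdg show backup_dg        # list the standard and BCV devices in the group
symmir -g backup_dg query   # check BCV pair states and the count of invalid
                            # tracks still owed to/from each BCV before the split
```

A split that has slowed from minutes to hours often shows up here as a large invalid-track count, meaning each incremental establish is copying far more data than expected.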
I asked this question in a disk-based system sub-category, but maybe I should ask in this sub-category. So:
On the primary side is an EVA P6300, in working order.
The EVA4400 has two HSV300 controllers, firmware 11300000.
After powering down the EVA4400 (using Command View) and switching it back on, controller 1 does not turn on.
But the problem is that replication has stopped working.
What should I do to restart replication on the storage where only the second controller is working?
Can write cache be force-enabled on blades with the battery in a failed state? If so, what is the risk of doing this?
Hello. My EVA 5000 had a battery failure, and when replacing the battery, battery corrosion was found on the controller.
I have a client who has an HP EVA4400 storage array with two shelves fully populated with 450GB SAS disks. After a power failure at the datacenter, one shelf has all its disks showing solid amber while the other shelf is fine. No LUNs are visible in Command View, and hosts have lost access to the storage as well.
Numerous power cycles and component reseats haven't helped. Wondering if anyone has survived such an event. Would appreciate any help. (PS: The equipment isn't covered under warranty or support.)
Can someone help me with download links for the latest version of SAN Loader?
The chassis and controller part numbers appear to be the same, so what is different between the two? Can I put the D2600 firmware on an enclosure that was originally an M6612? Attempting the update reports that no supported hardware was found.