HPE Storage Solutions Forum - Storage Area Networks (SAN) - Enterprise
I am not sure if there is a better group for this question. Please forgive my limited knowledge on this subject; I am trying to get a better understanding. We have a couple of HP-UX virtual machines that can move between three BL860 i4s. The virtual machines have virtual HBAs defined, each with a unique WWN. When creating a zone, the WWN is presented as a hostname (not the VM name) and WWN combination, which is perfect until the VM migrates to a different host. At that point the original hostname is replaced with the new hostname, which I believe breaks the connection between the SAN and the server.
I would like to find out whether it is possible to define a zone that will allow the virtual machine to connect to the SAN after the VM has moved to a different host.
Any suggestions, documentation, or help would be appreciated.
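As a sketch of one approach: zone on the virtual HBA's WWPN rather than on an alias tied to the physical host, since the vHBA WWN travels with the VM during migration. The snippet below only generates Brocade-style FOS zoning commands for review; the alias names, WWPNs, and config name are hypothetical, and your switch's syntax may differ.

```python
# Sketch: generate Brocade-style single-initiator zoning commands keyed to
# the vHBA's WWPN (which stays with the VM), so the zone survives migration
# between physical blades. All names and WWPNs below are hypothetical.

def zone_commands(vm_alias: str, vhba_wwpn: str, target_wwpn: str,
                  cfg: str = "PROD_CFG") -> list[str]:
    """Return the switch CLI lines for one single-initiator zone."""
    zone = f"z_{vm_alias}"
    return [
        f'alicreate "{vm_alias}", "{vhba_wwpn}"',          # alias for the virtual HBA
        f'alicreate "tgt_array", "{target_wwpn}"',         # alias for the array port
        f'zonecreate "{zone}", "{vm_alias}; tgt_array"',   # vHBA zoned with the target
        f'cfgadd "{cfg}", "{zone}"',
        'cfgsave',
        f'cfgenable "{cfg}"',
    ]

cmds = zone_commands("hpvm1_vhba0", "50:01:43:80:11:22:33:44",
                     "50:00:1f:e1:aa:bb:cc:dd")
print("\n".join(cmds))
```

Because the zone member is the vHBA's WWPN rather than a switch-port or host-bound alias, nothing in the zone changes when the guest moves to another blade.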
A little clarification, please. Based on an earlier support session I had with HP, I was led to believe that the disk group allocation level should not go above a certain percentage, depending on the performance level of the disks in the group; so much so that, when using near-line 1 TB 7,200 RPM FATA drives, the allocation level shouldn't go much higher than 50%-60% allocated (the DG is at single-drive protection level, 84 drives). This seemed a bit excessive; however, when we scaled the allocation levels on our DGs (two other DGs with 15K or 10K drives) back to less than 80%, we did notice a marked improvement in performance.
I realize that fewer presentations mean less I/O, but I'm looking for a best-practice guide that spells this out for me. In searching, I'm finding many community postings from people stating that they are at 95% allocated and that the allocation level matters only when things such as disk failure and leveling are taken into account. I should also point out that the attached hosts are VMware ESXi, clustered MS SQL servers, or Oracle ASM RAC 11.2 or 12.
My question is this: given that I don't have the benefit of Performance Advisor to check LUN and host-port utilization, etc., how should I go about calculating the required allocation levels in order to maintain low latencies? I know that without the tools this will be an inexact science; however, any guidance is appreciated!
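For rough planning, the arithmetic behind an allocation ceiling can be sketched as below. This is a simplification and an assumption: the EVA's real sparing and leveling overhead is more involved than "two disks' worth per protection level", so treat the result as a planning aid, not a vendor figure.

```python
# Rough sketch: how much of an EVA disk group can be allocated while staying
# under a chosen ceiling. The sparing reserve model here is an assumption
# (EVA reserves capacity equivalent to ~2 disks per protection level, spread
# across the group); actual overhead differs by firmware and configuration.

def max_allocatable_gb(disks: int, disk_gb: float, protection_level: int,
                       ceiling: float) -> float:
    """protection_level: 1 = single (~2 disks reserved), 2 = double (~4 disks)."""
    raw = disks * disk_gb
    sparing_reserve = 2 * protection_level * disk_gb  # capacity, not whole disks
    usable = raw - sparing_reserve
    return usable * ceiling

# 84 x 1 TB FATA, single protection, 60% ceiling (the figure quoted by support)
print(round(max_allocatable_gb(84, 1000, 1, 0.60)))  # -> 49200
```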
I am trying to replace two failed cache batteries in a production environment. After several replacement attempts with new battery parts, both new parts go directly into a fault state after the replacement procedure. I have tried replacing the controller enclosure, but the problem persists.
Does anyone have any idea / document to solve this problem?
Thanks a lot,
I have an HP-UX 11.11 server with TimeFinder; EMC Symmetrix storage is used for BCV sync/split, mounted on a dev box to take backups. It has 3 TB of data on 96 LUNs, and everything worked well until last month. The incremental BCV job normally took barely 5 minutes to complete all these years; now it is taking almost 2 hours, mainly the split operation. There was an I/O error on one of the LUNs on the storage, and at that time we also had an outage. The storage has since been corrected and there are no I/O errors on the LUN, but the BCV split still runs for around 2 hours. Help, please.
I asked this question in a disk-based systems sub-category, but maybe I should ask it in this sub-category. So:
On the primary side is an EVA P6300, in working order.
The EVA4400 has two HSV300 controllers, firmware 11300000.
After powering down the EVA4400 (using Command View) and switching it back on, controller 1 does not turn on.
But, the problem is that the replications stopped working.
What should I do to restart replication on the storage array where only the second controller is working?
Is it possible to force-enable write cache on the blades while the battery is in a failed state? If yes, what is the risk of doing this?
Hello. My EVA 5000 had a battery failure; when replacing the battery, we found corrosion where the battery sits in the controller.
I have a client who has an HP EVA4400 storage array with two shelves fully populated with 450 GB SAS disks. After a power failure at the datacenter, one shelf has all its disks on solid amber while the other shelf is good. No LUNs are visible in Command View, and hosts have lost access to the storage as well.
Numerous power cycles and component reseats haven't helped. Wondering if anyone has survived such an event; I would appreciate any help. (PS: the equipment isn't covered under warranty or support.)
Can someone help me with the download links for the latest version of SAN Loader?
The chassis and controller part numbers appear to be the same, so what is different between the two? Can I put the D2600 firmware on an enclosure that was originally an M6612? Attempting the update reports that no supported hardware was found.
I have a problem with an HSV450 (XCS 11300000): controller A reports faults on batteries 2 and 3. I have already changed the batteries and the controller, but it continues to exhibit the same problem. Any tips or help?
I have an HP P6350 controller that is connected to a StorageWorks D2600 disk enclosure. The enclosure appears in my Command View software, but when I try to initialize the disks, it says "Error, drive is unsupported" and all of the drives light up amber.
The controllers are both HSV340s connected via mini-SAS to the disk enclosure, running firmware 11001100. The drives are all MB2000FBZPN running the latest firmware. They all appear OK and operational before initializing, but show a failed state when I attempt to initialize. These drives have been tested and are working in other systems.
There are no further errors that the system gives me, so I am a little stuck here. Any help will be much appreciated.
We recently moved our EVA4100 unit from one rack to another, including cabling, and since the reboot ALL disks are showing "One loop connection lost" (see attached image).
Is this an issue with the disks themselves or the cabling?
I have read that it can be due to one of the following:
1. EVA not being restarted in the correct order
2. Issue with connection on back of EVA
3. Disks needing to be reseated.
Does anyone know of any steps we can follow to resolve this issue?
I have some vdisks on an EVA P6350; in Command View we don't see the host presentations.
With the SSSU utility we do see these hosts. The storage firmware is XCS1300000 and the Command View version is 10.3.3.
On other vdisks we do see the hosts. We rebooted the management appliance, but the problem remains.
Any suggestions, please?
I need to do some firmware upgrades from v7.2.x to v8.0.x on a couple of SN3000B switches. Reading through the release notes, I can either do this disruptively in fewer steps or non-disruptively in more steps. To minimise risk I would rather do more steps that are non-disruptive, but I want to know whether this approach is truly non-disruptive. The environment is very simple: 2 x FC switches, a small number of hosts, and a single storage array. While some hosts are multipathed, there are a couple of hosts with only a single HBA port, and therefore connected to just one switch. My interpretation of 'non-disruptive' is that a single-pathed host will not suffer any outage when the switch is upgraded. Is this what 'non-disruptive' means in the HPE/Brocade world, or do I need to shut down any single-pathed hosts first?
Thanks in advance
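For what it's worth: non-disruptive (hot) code loads on Brocade FOS are generally supported only between adjacent feature releases, which is why the non-disruptive route takes more steps. A hot code load is designed to keep the FC data path up while the control plane restarts, so in principle even single-pathed hosts stay online, but confirm this for your exact platform in the release notes. The release sequence below is an assumption used only to illustrate the stepwise-path idea.

```python
# Sketch of the stepwise-path idea: hot code loads supported only between
# adjacent feature releases mean 7.2.x reaches 8.0.x via intermediate loads.
# FOS_RELEASES is an assumed sequence; verify against the v8.0.x release notes.

FOS_RELEASES = ["7.2", "7.3", "7.4", "8.0"]

def upgrade_path(current: str, target: str) -> list[str]:
    """Return the ordered releases to load, one hot upgrade per hop."""
    i, j = FOS_RELEASES.index(current), FOS_RELEASES.index(target)
    return FOS_RELEASES[i + 1:j + 1]

print(upgrade_path("7.2", "8.0"))  # -> ['7.3', '7.4', '8.0']
```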
We have an EVA P6500 array with two controllers. This array was bought in October 2011.
Please provide any clues based on your experience.
I have a basic understanding of what a block of data is, in the sense that blocks are the units of data stored physically on a hard disk. Is my understanding correct? (I am referring to the context of SAN storage technology.) How is a block written to the disk? What is the relation between the file system and blocks?
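A toy model may make the split concrete: the disk (or SAN LUN) is simply an array of fixed-size blocks addressed by number (the logical block address, LBA), and the filesystem is the layer that maps file names and offsets onto those block numbers. The Python sketch below is purely illustrative, not how any real filesystem is implemented:

```python
# Toy model of block storage: a device is an array of fixed-size blocks
# addressed by LBA; a filesystem maps file names/offsets onto block numbers.

BLOCK_SIZE = 512  # bytes; a common logical block size (modern disks also use 4096)

class ToyBlockDevice:
    def __init__(self, num_blocks: int):
        self.store = bytearray(num_blocks * BLOCK_SIZE)

    def write_block(self, lba: int, data: bytes) -> None:
        assert len(data) <= BLOCK_SIZE          # a write never spans a block here
        off = lba * BLOCK_SIZE
        self.store[off:off + len(data)] = data

    def read_block(self, lba: int) -> bytes:
        off = lba * BLOCK_SIZE
        return bytes(self.store[off:off + BLOCK_SIZE])

# A filesystem keeps metadata like this table: file name -> list of LBAs.
dev = ToyBlockDevice(num_blocks=8)
file_table = {"hello.txt": [3]}           # simplified filesystem metadata
dev.write_block(3, b"hello, SAN")         # the file's contents land in block 3
print(dev.read_block(3)[:10])             # -> b'hello, SAN'
```

In a SAN, the array exports exactly this block interface (SCSI reads/writes by LBA) over Fibre Channel; the filesystem living on top of it runs on the host, which is why the array itself has no notion of files.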
What types of fibre optic cables are available (single-mode and multi-mode)? I would like to understand the distance and speed limitations. I am very new to SAN technology, hence I want to understand how it works. Also, how can we achieve fibre optic connectivity between two data centers separated by 100-500 km?
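Roughly: multimode fibre (OM3/OM4) uses short-wavelength optics and reaches tens to a few hundred metres, with reach shrinking as link speed rises; single-mode (OS1/OS2) uses longwave lasers and reaches kilometres. The figures in the sketch below are commonly quoted maxima and should be treated as assumptions; the real limit depends on the transceiver, so check the SFP data sheets. For 100-500 km you would not run native FC unaided: that distance typically needs DWDM equipment with optical amplification, or FC-over-IP (FCIP) between the sites, plus enough buffer credits to cover the round-trip latency.

```python
# Rough reference table of fibre reach (metres). Figures are commonly quoted
# maxima and vary by transceiver and speed -- verify against SFP data sheets.

CABLE_REACH_M = {
    # multimode: short-wavelength optics, short reach; drops as speed rises
    ("OM3 multimode", "8G FC"): 150,
    ("OM4 multimode", "8G FC"): 190,
    ("OM3 multimode", "16G FC"): 100,
    # single-mode: longwave laser optics, long reach
    ("OS2 single-mode", "8G FC longwave"): 10_000,
}

def reach(cable: str, speed: str) -> int:
    """Look up the approximate maximum link length in metres."""
    return CABLE_REACH_M[(cable, speed)]

print(reach("OM3 multimode", "8G FC"))  # -> 150
```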