|HPE Storage Solutions Forum - 3PAR StoreServ Storage|
Is this a valid license for the 3PAR 8000, and can you order it and install the license in the 3PAR OS?
(1) I understand how to convert a full VV to thin on the 3PAR side of the fence. I am wondering, for existing full/live VVs, whether I need to do anything on the Windows side to notify the Hyper-V hosts that the volumes have been converted to thin. Namely, if I do an online conversion to thin for my VHDX volumes, do I then need to reboot all the hosts in the Hyper-V cluster for them to recognize that the storage has become thin, or just rescan? And...
(2) Does one need to run 'optimize volumes' after the conversion? From just one cluster host? Also, does using the 'optimize volumes' feature from within Windows cause performance problems for the guest VMs? And...
(3) Should guest Windows 2012 instances within the Hyper-V cluster also be running 'optimize volumes'?
Basically, I am interested in the host-side steps surrounding _CONVERSION_ of VVs from thick to thin. I have scoured the documents and can't find a good 3PAR cookbook for what needs to be done when you are starting out with thick/full VVs; just little chunks of information, ha!
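For context, here is a hedged sketch of the sequence I have pieced together so far (command names from the 3PAR CLI and Windows PowerShell; the CPG and volume names are placeholders, so please verify against your InForm OS and Windows versions):

```shell
# 3PAR side: online conversion of a full VV to thin by tuning it into a
# CPG as a TPVV (the Dynamic Optimization path; license required):
tunevv usr_cpg FC_r5_CPG -tpvv MyVHDXVolume

# Hyper-V side: the volume's size and geometry do not change, so a
# storage rescan is generally considered sufficient (no reboot):
#   PS> Update-HostStorageCache
#
# To give deleted blocks back to the now-thin VV, a retrim pass can be
# run from the host that owns the CSV:
#   PS> Optimize-Volume -DriveLetter D -ReTrim
```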
We are currently updating our 3PAR OS to 3.2.1 and have finished installing SSMC 3.0, but we're not able to connect to our arrays. Searching some random blogs, it says that we first need to create a certificate for the browser. Can you please help me with the steps for this?
The SQL Server instances in Data Center A are being migrated to Data Center B, using the HP StorageWorks MPX200 appliance to transfer the data between the 3PAR SAN storages.
Let me know what needs to be tested after this data replication.
To ensure the health and readiness of the SQL Server instances, what needs to be validated?
For example: SQL Server services, database status, connectivity.
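In case it helps frame answers, a minimal sketch of the kind of post-migration checks I mean (sqlcmd from a management host; server and database names are placeholders):

```shell
# Connectivity + instance identity after the cutover
sqlcmd -S SQLNODE1 -Q "SELECT @@SERVERNAME, @@VERSION"

# Database status: every migrated database should show ONLINE
sqlcmd -S SQLNODE1 -Q "SELECT name, state_desc FROM sys.databases"

# Physical consistency of a migrated database
sqlcmd -S SQLNODE1 -Q "DBCC CHECKDB('AppDB') WITH NO_INFOMSGS"
```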
I have a 3PAR management console (physical) with the following configuration:
Service Processor Onsite Customer Care - SPOCC:
Local Notification: Enabled
Service Processor - Customer Controlled Access:
This console monitors a 7400c array.
I don't understand why it says "yet it is not functional because the CCA is not set to OFF."
Why must CCA be OFF?
I've received e-mails ("Customer notification from HP ... Realtime Alert Process"), and I know HPE support can also receive notifications.
Please give me any clues based on your experience.
I posted about this in the ProLiant forum, but it looks to be gone now, so I'll ask here.
Trying to get my head wrapped around the CPG HA availability and the 3PAR chunklet concept.
Let's say we have 4 enclosures and a CPG of RAID 5 3+1 and choose cage availability. How exactly do the chunklets get striped if "no two members of the RAID set can be in the same drive enclosure"? One chunklet on one drive, then the next chunklet on a drive in a different enclosure, then the next drive in the next enclosure, and so on?
Let's say we have the same setup as above, 4 enclosures and a CPG of RAID 5 3+1. Does the data get striped one chunklet to one drive at a time through the enclosure, then vertically to the next drive enclosure?
ALSO: If we have a 4-node system and 2 additional drive enclosures, does that mean the total number of enclosures is 4? (Each node pair with a row of drives + the 2 additional enclosures.)
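To make my mental model concrete, here is a toy sketch (this is not 3PAR's real allocator, just an illustration of the cage-availability constraint): each RAID 5 (3+1) set takes its 4 chunklets from 4 different enclosures, and rotating the starting cage spreads sets across drives:

```shell
CAGES=4       # drive enclosures
SET_SIZE=4    # 3 data + 1 parity chunklets per RAID set
for set in 0 1 2 3 4 5 6 7; do
  members=""
  for i in 0 1 2 3; do
    # the next chunklet of the set goes to a drive in the next cage over
    members="$members cage$(( (set + i) % CAGES ))"
  done
  echo "set$set:$members"
done
```

Every printed set touches all four cages, which is exactly the "no two members of the RAID set in the same drive enclosure" rule; with more cages than set members, the allocator just has more placement choices.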
Is there a way to reset the admin password on the RMC (3.0) appliance?
I've set up a 3-node Windows cluster with 3PAR storage.
Two of the nodes are in a DC called A, and the 3rd node is in another DC called B.
I am able to fail over between the nodes within DC A, while failover to the 3rd node (DC B) fails.
The error I am getting is "A device attached to the system is not functioning. For more data, see 'Information Details'."
We're trying to migrate LUNs from a CLARiiON CX4 to a 3PAR 7200. We're getting the following error when creating the migration.
MIGRATIONID   TYPE    SOURCE_NAME             DESTINATION_NAME START_TIME                   END_TIME STAT
1488915650847 offline CLARiiON+CKM00114400633 BRSCSST03PAR01   Tue Mar 07 14:40:50 EST 2017 -NA-     preparationfailed(-NA-)(:OIUERRDST0008:Admit has failed. Volume 6006016027902E00D16D8902ACB0E111 was not admitted;0 VVs admitted.;)
The error code calls out the possibility of LUNs being too small (< 256 MB), but all of our LUNs are 500 MB or larger.
Has anyone run into this?
I want to implement a storage area network solution based on iSCSI and FC. I have an HP 3PAR StoreServ 7200 2-node Storage Base and 24 rack servers with Ethernet ports only (Gigabit Ethernet). On the server side I want to use iSCSI initiators (running Windows Server), and on the storage array side I use FC.
The problem is that I don't know whether such a solution can be implemented. I have read some articles about iSCSI gateways, and I want to know if the HP 3PAR StoreServ 7200 can be integrated with one.
I have a question about the space calculation on 3PAR. When I check the used space of the entire storage, I find varying displays of the used space.
1) showcpg: the user used space is 582656MB; this should be the sum of all data VVs' used space.
2) showsys -space: the used space for user, snapshot, and admin is higher than in "showcpg".
3) showvv: the used space is consistent with showcpg.
So, how is the used space calculated in showcpg and showsys? Why do they differ from each other?
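My working theory, with made-up numbers (the per-VV, snapshot, and admin figures are assumptions; only the 582656MB total comes from my showcpg output): showcpg's user space is the sum of the VVs' used space, which is why it matches showvv, while showsys -space adds snapshot and admin space on top, which is why it reads higher:

```shell
vv1=204800; vv2=204800; vv3=173056       # showvv per-VV user space in MB (assumed)
cpg_user=$(( vv1 + vv2 + vv3 ))          # what showcpg reports as user space
snap=51200                               # snapshot copy space (assumed)
admin=8192                               # admin/metadata space (assumed)
sys_used=$(( cpg_user + snap + admin ))  # roughly the showsys -space view
echo "showcpg user: $cpg_user MB, showsys used: $sys_used MB"
```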
I have a basic Remote Copy task configured. As you know, the ports must be configured in replication mode. Is it possible to use those ports in another mode at the same time?
I'm looking for any documentation or website that I can read and follow to know more about 3PAR StoreServ array.
Can anyone please share the resources here?
I also need to know how to provision a new LUN and present it to an ESXi server group, for a new VMFS datastore or Raw Device Mapping, from the storage array perspective. I know how to do it at the VMware level, but I'm not sure how to do it at the HP storage level.
Thanks in advance.
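From what I have gathered so far, the array-side flow looks roughly like this (3PAR CLI; the CPG, VV, host names, and WWNs below are placeholders; please correct me if the flags differ on your InForm OS version):

```shell
# 1) Create a thin VV in an existing CPG (2 TB here)
createvv -tpvv FC_r5_CPG VMFS_DS01 2T

# 2) Define each ESXi host by its HBA WWNs; persona 11 = VMware
createhost -persona 11 esx01 10000000C9876543 10000000C9876544
createhost -persona 11 esx02 10000000C9771122 10000000C9771123

# 3) Export the VV to every host with the same LUN id, then rescan the
#    storage adapters in vSphere (VMFS datastore or RDM from there)
createvlun VMFS_DS01 10 esx01
createvlun VMFS_DS01 10 esx02
```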
I have a problem with the configuration of File Persona (3PAR 8200, two controllers). All nodes have proper IP addresses, VLAN IDs, etc., but after starting FP one node stays in starting mode. Output of showfs:
Node FSNode State Active InCluster ----Version----- ---N:S:P--- BondMode MTU
Could someone help me start FP on both nodes? Thanks.
The company owns a 3PAR StoreServ 7200 Storage Array and a physical Service Processor. The 3PAR serves as a RAID array connected to two rx2800 Integrity Servers running HP-UX11i v3. These devices are on a dedicated LAN with no external connections (due to the sensitive nature of our data). I have been asked to provide some files from our system to a colleague in Linux (or Windows) format. The files I need to provide are under my home directory, and our home directories are NFS-mounted in a Veritas (vxfs) format. I have a USB external hard drive that I can format any way needed (ext3, ext4, xfs, etc). I can, of course, plug the external drive into the Integrity Server, or the SP.
It seems this should be trivial to do, but I cannot figure out how to transfer a file from my home directory on the 3PAR (on a vxfs filesystem) to the external hard drive (with an ext3, ext4, or xfs filesystem). Any help would be appreciated.
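One thing worth noting: file contents do not depend on the filesystem; vxfs vs. ext3 only matters to whichever kernel mounts the drive (check which filesystems your HP-UX host can actually mount before formatting). A runnable sketch of the copy itself, with temp directories standing in for the NFS home and the mounted external drive (GNU tar shown; HP-UX tar lacks -C, so cd into the directories first there):

```shell
SRC=$(mktemp -d)   # stands in for the vxfs-backed home directory
DST=$(mktemp -d)   # stands in for the external drive's mount point
echo "sensitive report" > "$SRC/report.txt"

# tar preserves permissions and timestamps across filesystem types
tar -C "$SRC" -cf - . | tar -C "$DST" -xf -
cat "$DST/report.txt"
```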
We are seeing authentication failures on all of our arrays (30+). I ran the CLI commands to find the source, and it is always our SSMC server. This is annoying because the alerts are "degraded"-level alerts, so our arrays are constantly in a degraded state, and we have missed other alerts because of this. In our configuration I have created a local user on the array with the browse role; this user is what I use to connect the array to SSMC. Has anybody else seen this type of issue, and how did you solve it?
Good afternoon all.
I have a situation here where one of our engineers asked me if I could move a VV from one of our 3PAR arrays to another. I'm guessing the only way I would be able to accomplish this is with a Remote Copy license. I'm not seeing much anywhere else on the Internet with regard to this situation. Does anyone have any information that they could provide?
Does anyone know if there is a way to "reimage" the node SSD? Long story, but I lost both nodes in a power glitch. HPE replaced the nodes, and when repaired, they put in the wrong serial number. I didn't realize for a couple of months, as I had other projects going on and this wasn't in production.
So, how do I COMPLETELY start over and input the original serial number? I hear there may be something called "bitb" (back in the box)? Or, just the same, what if I pop out the SSDs in the nodes and reimage them?
I need your suggestions on how I can reclaim free space on CPGs [FC, SSD].
3PAR storage 8440, vCenter 5.5, 6.0.
Do we have a newer document on reclaiming free space? I've read many posts on this community, but most of them are old, as the technology has changed [by version :P]. Are we still using sdelete on Windows and UNMAP on VMware in order to reclaim free space?
Please share your thoughts.
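For discussion, here is my current understanding of the moving parts (commands are illustrative; the datastore, drive, and CPG names are placeholders):

```shell
# ESXi 5.5/6.0: VMFS-level UNMAP is still a manual esxcli run
esxcli storage vmfs unmap -l Datastore01

# Windows: sdelete (Sysinternals) zeroes free space so the array's
# zero-detect can reclaim it on thin VVs
sdelete.exe -z D:

# Array side: compact the CPG to return freed space to the free pool
compactcpg FC_r5_CPG
```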