HPE Storage Solutions Forum - Disk Arrays
I have an MDS-600 direct-attached to a P812 controller on a DL380p G8 running CentOS. After a power cut the MDS-600 will not power on and flashes the blue LEDs on the front and rear of both drawers of disks. I have unplugged it and reseated all fans and power supplies, as well as the I/O modules. From the documentation I have read, the blue lights indicate it is in "discovery mode" or thinks a firmware upgrade is taking place. Any ideas?
Is it possible to connect a D2600 to an ML310 Gen8 v2?
I have updated to Windows Server 2016 and used the new HP ProLiant SPP to update my firmware. This updated my D2700 with the new firmware 0150. After this I get an error saying the temperature is overheated:
External Storage Enclosure Overheating (Temperature Sensor 1, Location Storage, Box 1, Port 1E, Slot 2)
I have a single-domain config and have updated both controllers; every controller gives me this error or warning after the firmware update.
Is this a known issue, and can I downgrade to the older firmware?
Hello, I have the same enclosure. However, I am unable to get the ACU to see the physical drives; it sees the controllers, but that's it.
How did you get it configured? Upon install, did you simply run the ACU and create a volume, and your OS recognized it?
I can't get Windows to access it from Disk Management, nor can I get any of the utilities to see the physical disks.
It's pretty bad. I need help.
Mod Message: This post was moved from a closed thread:
I seem to have several threads on this issue, but I have yet to get an answer.
I have one D2700 direct-connected via SAS to a P411 controller inside a DL580 G8. ORCA sees the controller and attempts to initialize the disks through the P411, but it doesn't detect the disks. Going through POST, my DL580 sees the controller, and I get message 289-Important. So the ACU sees the controller, but again not the disks. I have updated the firmware and changed the slot and cables, but still nothing.
What am I missing?
HP has no answers either.
I have an HP D2700 disk enclosure with twelve (12) HP 3TB SAS HDDs installed, configured in RAID 6 as a single 30TB volume formatted XFS.
The array has been working fine but has not been used very much until recently. Now that I want to actually read and write to it, it periodically fails with I/O errors to the OS (RHEL 7.2).
When I inspect the enclosure, I see ALL green lights everywhere except ALL the drives are showing steady amber. If only ONE drive showed amber or red, I would know to replace that drive and let the rebuild process do its thing. However, clearly all the drives have not failed simultaneously.
If I cycle power on the array, all the drives glow happily green again, but the server will not access the drive unless I reboot the (production) server to which it is attached. If I do reboot it, all is well until I try to do any substantial disk activity on the array; then it happens again.
If I try to get the OS to recognize the drive without rebooting the server, the RAID controller marks it as bad, and I have to manually tell the RAID controller at the console during boot to put the array back online, since it has now taken it offline.
The manual for the D2700 does not mention this error condition. Can anyone tell me what it means, if anything else can be done to troubleshoot it (perhaps one drive really is bad, but which do I swap out?), and how to address the issue?
The drives and the enclosure all have been updated to the latest firmware.
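As a side note, the 30TB usable figure quoted above is consistent with plain RAID 6 arithmetic (two drives' worth of capacity is reserved for parity). A quick sanity check, purely illustrative:

```python
# Sanity check of the usable-capacity figure above: RAID 6 reserves
# two drives' worth of space for parity, so usable = (N - 2) * size.
def raid6_usable_tb(num_drives: int, drive_tb: float) -> float:
    if num_drives < 4:
        raise ValueError("RAID 6 requires at least 4 drives")
    return (num_drives - 2) * drive_tb

print(raid6_usable_tb(12, 3.0))  # 30.0 -> matches the 30TB volume above
```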
Currently I have three HP D2600s filled with 600GB SAS drives in RAID 6, used for an imaging services bureau (18 million images / 18TB / accessed by 8 people). I'm in the process of adding a fourth D2600 and 12 more 600GB SAS drives.
My primary role is as a developer, but I also handle all of the networking / infrastructure. I'm hoping someone with more knowledge can help me.
Question 1 - These D2600s are rated for daisy-chaining up to 4 enclosures per port x 4 external SAS ports on the P822 controller = 192 drives. I'm getting nervous having 48 drives on one array.
At what point do I hit "too many drives" for a RAID 6 (or RAID 60) array? I have the array controller, 3 enclosures, and 6 external SAS connections as failure points. Continuing to grow extends my points of failure - can someone help me gauge whether these things were really intended to have 100+ drives on a single array?
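The drive-count arithmetic in Question 1 can be sketched like this (assuming 12 LFF bays per D2600 and the daisy-chain and port limits quoted in the post; these helper names are illustrative, not an HP tool):

```python
# Hypothetical helpers illustrating the topology math quoted above:
# 4 enclosures per daisy chain x 4 external SAS ports x 12 bays each.
def max_drives(enclosures_per_chain=4, ext_ports=4, bays_per_enclosure=12):
    return enclosures_per_chain * ext_ports * bays_per_enclosure

def failure_points(controllers=1, enclosures=3, sas_cables=6):
    """Count the serial failure points named in the post."""
    return controllers + enclosures + sas_cables

print(max_drives())      # 192 drives, the P822 maximum quoted above
print(failure_points())  # 10 components in the current 3-enclosure setup
```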
Question 2 - Tentatively, my next build-out needs to support up to 100TB, 100 million files, and 50 employees. The build-out needs to start small (immediate needs) and grow over time. I see three options (keeping things in the HP fold):
A - Keep adding to the existing array with 600GB 3.5" SAS drives.
B - Use D2700 enclosures with 1.2TB 2.5" 10K SAS drives, which would cut down on my enclosures / external SAS connectors (less likely to fail?).
C - Get two MDS600s - two boxes to put everything in (max 74TB high-speed SAS with RAID 60).
But for whatever reason NONE of those options feel safe to me. Maybe it’s because I don’t have experience running 100+ drives in a single array? Or it’s actually a horrible idea?
Question 3 - Prior to an upgrade, the imaging department ran a RAID 6 array using 25 7,200 RPM hard drives in an EMD Netstore (the company no longer exists). It supported up to 15 users doing imaging work.
Can I go back to 7,200 RPM HP SATA drives and support 50 people hitting the array?
P.S. Budget is important.
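One way to reason about Question 3 is a back-of-envelope IOPS estimate. This assumes roughly 75 random IOPS per 7,200 RPM drive, a commonly used rule of thumb rather than a vendor-published figure, and scales the old Netstore's per-user load to 50 users:

```python
# Back-of-envelope only: ~75 random IOPS per 7,200 RPM drive is a
# common rule of thumb, not a measured or vendor-published figure.
IOPS_PER_7200_DRIVE = 75

def array_read_iops(num_drives: int) -> int:
    """RAID 6 reads can be served from all member drives."""
    return num_drives * IOPS_PER_7200_DRIVE

old_netstore = array_read_iops(25)   # the 25-drive EMD Netstore array
per_user_old = old_netstore / 15     # it supported ~15 imaging users
needed_for_50 = per_user_old * 50    # same per-user load, 50 users
print(old_netstore, round(needed_for_50))  # 1875 6250
```

Under those assumptions, 50 users would need several times the old array's throughput, so a larger spindle count (or faster drives) would be needed to hold the same per-user performance.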
I purchased a used storage array to provide additional storage for an HP ProLiant DL980 equipped with a Smart Array P812 SAS controller. The array is an HP StorageWorks D2700 configured with 12 3TB SAS hard drives.
The array starts up fine, green lights all around, no errors.
However, when I try to configure it using the ROM utility included with the P812 controller, it sees all 12 drives but says their capacity is "0.0GB SAS HDD" and won't configure the RAID array using any options I can find. Meanwhile, the solid green lights on all the drives switch over to a blinking blue.
Can anyone give me some steps I can take to overcome this? If the array is faulty, I can send it back for a refund but I need to determine that soon.
Kindly help me find the HP Dynamic Smart Array B120i controller driver for Windows 7.
A driver is required for Windows 7.
Kindly send me the download link.
I used the HP Smart Storage Administrator to add my 3 new 8TB disks and extend the array. The logical drive shows that the space has been added, but in Disk Management the space is not allocated. When I go to extend the disk I get the message "The volume cannot be extended because the number of clusters will exceed the maximum number of clusters supported by the file system." Therefore I have around 20TB of space that I cannot add to the logical drive.
So it appears it's a limitation to the cluster size. The current cluster size is set to 16K which will only allow 64TB partitions. I've allocated as much as I could to the existing partition, leaving about 1.4TB unallocated.
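The 64TB ceiling mentioned above follows from NTFS addressing clusters with a 32-bit count, so the commonly quoted maximum volume size is cluster size x 2^32. A quick check of what each cluster size allows:

```python
# NTFS addresses clusters with a 32-bit count, so the commonly quoted
# maximum volume size is roughly cluster_size * 2**32.
def ntfs_max_volume_tib(cluster_kib: int) -> int:
    return cluster_kib * 1024 * 2**32 // 2**40

for kib in (4, 16, 64):
    print(f"{kib}K clusters -> {ntfs_max_volume_tib(kib)} TiB max volume")
# 4K -> 16 TiB, 16K -> 64 TiB, 64K -> 256 TiB
```

Since the cluster size is fixed at format time, going past the 64TB limit with 16K clusters would mean reformatting the volume with a larger cluster size (and restoring the data), not just extending it.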
I've noticed that in my storage controller settings in the Smart Storage Administrator, the cache is not enabled.
If I just click on Enable, is there any outage or data loss?
The reason I want to enable it is that I need to improve the performance of my RAID 5 LUNs.
Thanks in advance.
I have attached a few D2600 and D2700 DAS enclosures to my HP DL 385 G7 using a P411, but somehow all of the file system info shows:
command: fsutil fsinfo ntfsinfo D:
Bytes Per Physical Sector <Not supported>
So I wonder what type of disks are in those DAEs?
Thanks in advance.
I'm trying to reconfigure the strip size / full stripe size of some existing LUNs, but I've found something confusing.
For the two logical drives I marked in red in the screenshot below, the Delete Logical Drive option is missing. Why?
But when you click on the Logical Drive 3 as shown above, the delete button is there.
Does this also mean that to change the strip size / full stripe size of Logical Drives 1, 2, and 3, I will need to delete all of them, destroying all of the data?
Any help would be greatly appreciated.
Today a technician added a new disk to a RAID 5 array and migrated the RAID 5 to RAID 6 (ADG).
He made a mistake, because we needed to ADD a disk to the existing RAID 5 to have more space available.
The server is currently in production, and I want to know if it is possible to migrate back to RAID 5 without data loss.
The option is selectable in Smart Storage Administrator
Thank you in advance for a quick response.
I've just installed HP Storage D2600 Disk Enclosure that is filled with 12x 3 TB disks, to be used as backup server.
Mostly the files will be very large: greater than 1 GB, and up to 4-6 TB.
The file system will be NTFS, so I wonder if anyone can assist me with best practice for configuring the LUN's RAID stripe size as per below:
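One relationship worth keeping in mind when picking a strip size for large sequential files: the full stripe size is the per-drive strip size multiplied by the number of data drives. A sketch, assuming RAID 6 across the 12 bays (10 data drives, 2 parity):

```python
# Full stripe = strip size x data drives; RAID 6 uses 2 parity drives.
def full_stripe_kib(strip_kib: int, num_drives: int, parity_drives: int = 2) -> int:
    return strip_kib * (num_drives - parity_drives)

# 12 x 3TB drives in RAID 6 -> 10 data drives per stripe
for strip in (128, 256, 512):
    print(f"{strip}K strip -> {full_stripe_kib(strip, 12)}K full stripe")
# 128K -> 1280K, 256K -> 2560K, 512K -> 5120K
```

For multi-GB backup files, larger strips generally favor sequential throughput, but the right value depends on the controller and workload, so benchmarking with the actual backup job is the safest guide.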
Thanks in advance.
I'm looking for a way to check when a specific device would become end of sale or reach end of life.
I want to check HPE Disk Enclosure D2600 end of life and show the report to customer.
I'd be really grateful if you could kindly help me.
Thanks in advance for your kind help and support.
Can anyone here please share some documentation or a configurator for stacking StorageWorks D2600 and D2700 enclosures?
I am currently using Smart Array P411 as the SAS controller on my HP DL 385 G7 with the following configuration:
Smart Array P410i In Slot 0 (Embedded Slot) [Front disk on the DL 385 G7 server]
Smart Array P411 In Slot 2 [D2600]
Any help would be greatly appreciated.
Can anyone please help me to identify what is this unused SAS/SCSI slot in the screenshot below:
I'm trying to add a fifth D2700 enclosure, but I wonder whether I should attach it to the empty slot 1 above or daisy-chain it to the last (fourth) D2700 enclosure in the screenshot below.
Here are some additional details from the Storage view of the System Management Homepage:
Smart Array P410i In Slot 0 (Embedded Slot)
Smart Array P411 In Slot 2
On my existing HP DL 385 G7 server, I'm trying to add one more rack with one more HP Storage D2700 Disk Enclosure (AJ941A); the server already has 4 connected via the P411.
So can I just buy one more HP Storage D2700 Disk Enclosure (AJ941A) populated with 1 TB HDDs, and then connect 2x SAS cables to the empty ports on the 4th rack as in the picture above?