Hard drive failure in a RAID 0 array

Okay, so I have a friend’s NAS in my possession, as he can’t figure out what is going on with it. He has an 8 TB array across four 2 TB drives, set up using RAID 0.

I believe one of the drives went bad, but I’m not sure which one. The NAS is complaining about Drive 2; however, when I extracted it and connected it to my main PC, gdisk reports no issues (for any of the drives). So I’m confused about how to resolve this.

Anyone have any experience with resolving bad sectors on a GPT-partitioned disk?

Some more information:
Running a testdisk analysis went fine on Drive 4 but produces weird output on Drives 3, 2, and 1. So I think that’s more an issue with the software than with the drives… but heck if I know.
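
For what it’s worth, one thing I can still try is asking SMART directly which drive is actually unhealthy, since gdisk only looks at the partition table, not the drive itself. A rough sketch of what I have in mind, assuming smartmontools is installed and the four drives show up as /dev/sdb through /dev/sde on my PC (those paths are placeholders):

```python
#!/usr/bin/env python3
"""Rough sketch: ask SMART for each drive's overall health verdict.

Assumes smartmontools is installed and the four NAS drives appear as
/dev/sdb../dev/sde when attached to the PC -- adjust the paths to match.
Needs to run as root (or via sudo) to talk to the drives.
"""
import subprocess

DRIVES = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # placeholder paths

for dev in DRIVES:
    # 'smartctl -H' prints the drive's overall SMART self-assessment.
    result = subprocess.run(
        ["smartctl", "-H", dev],
        capture_output=True, text=True,
    )
    verdict = [line for line in result.stdout.splitlines()
               if "overall-health" in line or "SMART Health Status" in line]
    print(dev, "->", verdict[0].strip() if verdict else "no verdict (check full output)")
```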

Hopefully someone else has dealt with this before, as I’m clueless on this.

RAID 0 has no redundancy whatsoever – it actually features negative redundancy – and the data is likely hosed. If you need to recover it, I would stop doing anything at all to the disks and call a professional data recovery outfit. Be prepared to pay dearly.

Okay, that was what I was figuring, so that’s good to know.

Using photorec, I’ve been able to recover data, but of course the files come back with jumbled names, no directory structure for organization, etc., and it obviously won’t be able to recover everything. I’ll see what he wants to do about it.


Another question, maybe someone can give me an idea on this: if I were to replace Drive 2 and stick it into the NAS, would it then try to rebuild the RAID 0 array, wiping out all data on all drives?

You cannot rebuild a RAID 0. The data is gone. RAID 0 turns multiple hard drives into one hard drive, so if one of those drives crashes or is corrupted, all data is lost, just as if it were a single disk. The only difference here is that the drives that are still good can still be wiped and reused.
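
To see why: a RAID 0 array just deals fixed-size chunks out across the member disks in round-robin order, so any file bigger than a chunk or two ends up partly on every drive. A toy Python sketch of 4-way striping (nothing NAS-specific, just the idea):

```python
# Toy illustration of RAID 0 striping: data is split into fixed-size chunks
# and dealt out across the member disks in round-robin order.
CHUNK = 4          # chunk size in bytes (real arrays use e.g. 64-512 KiB)
N_DISKS = 4

data = b"This file is spread across every disk in the array."
disks = [bytearray() for _ in range(N_DISKS)]

# Write: chunk i goes to disk i % N_DISKS.
for i in range(0, len(data), CHUNK):
    disks[(i // CHUNK) % N_DISKS].extend(data[i:i + CHUNK])

# Simulate losing disk 1 ("Drive 2"): its chunks are simply gone.
disks[1] = None

# Read back: every fourth chunk is missing, so the file is garbage.
recovered = bytearray()
offsets = [0] * N_DISKS
n_chunks = (len(data) + CHUNK - 1) // CHUNK
for i in range(n_chunks):
    d = i % N_DISKS
    if disks[d] is None:
        recovered.extend(b"????")
    else:
        recovered.extend(disks[d][offsets[d]:offsets[d] + CHUNK])
        offsets[d] += CHUNK
print(recovered.decode(errors="replace"))
```

Lose one member and every fourth chunk of every file is a hole, which is why there is nothing left to “rebuild” from.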

You do not store important data in a RAID 0, and I’ve never even heard of anyone storing that much in one. Usually you only hear about RAID 0 in high-performance gaming machines, where losing the data isn’t the end of the world. But that’s kind of dying out because of SSDs. It’s very risky, even for a gaming machine, as all hard drives will eventually fail; they’re even rated with a Mean Time To Failure (MTTF) figure when you buy one.

If this person cared about their information, they should have gone with a RAID 5 or a RAID 0+1. Both are designed for high performance with redundancy. A failed drive in a RAID 5 can even be hot-swapped without a hiccup.
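
For the curious, the reason RAID 5 tolerates a single drive failure is that each stripe carries a parity chunk equal to the XOR of the data chunks, so any one lost chunk can be recomputed from the survivors. A minimal sketch of the idea:

```python
# Minimal illustration of RAID 5 parity: parity = XOR of the data chunks,
# so any single lost chunk can be rebuilt from the survivors.
a = b"AAAA"                      # chunk on disk 1
b = b"BBBB"                      # chunk on disk 2
c = b"CCCC"                      # chunk on disk 3
parity = bytes(x ^ y ^ z for x, y, z in zip(a, b, c))   # stored on disk 4

# Disk 2 dies and its chunk is lost; rebuild it from the rest.
rebuilt = bytes(x ^ z ^ p for x, z, p in zip(a, c, parity))
assert rebuilt == b"BBBB"
print("rebuilt chunk:", rebuilt)
```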

Like wwb_99 said, you can shell out a few grand and possibly get the data recovered by a professional data recovery company, or you can chalk it up to an “oops” and a learning experience.

Thank you. I was figuring that would be the case, and thankfully it isn’t my data but someone else’s :slight_smile: But I don’t think they’ll care too much if it is considered “destroyed”. I’ve proven I can use photorec to recover quite a bit of data, but you lose a lot (the file names, the organization, etc.). So I’ve given my friend the two options I see: 1) buy a new drive, put it in, tell the system to reformat, and lose all the data; or 2) buy 4 new drives, put them in, format them into the setup he wants, and continue the long, painful process of using photorec and the even longer process of opening every file and renaming it.
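
One thing that might take some of the pain out of option 2: a small script to at least group photorec’s output by file extension, so the recovered files can be triaged in bulk rather than one at a time. A rough sketch, assuming photorec’s usual recup_dir.* output folders (both paths below are placeholders):

```python
#!/usr/bin/env python3
"""Rough sketch: sort photorec's recovered files into per-extension folders.

Assumes photorec wrote its usual recup_dir.* directories under RECOVERY_ROOT;
both paths below are placeholders -- adjust before running.
"""
from pathlib import Path
import shutil

RECOVERY_ROOT = Path("/mnt/recovery")        # where photorec dumped recup_dir.*
SORTED_ROOT = Path("/mnt/recovery_sorted")   # where the grouped copies go

for f in RECOVERY_ROOT.glob("recup_dir.*/*"):
    if not f.is_file():
        continue
    ext = f.suffix.lower().lstrip(".") or "no_extension"
    dest = SORTED_ROOT / ext
    dest.mkdir(parents=True, exist_ok=True)
    # copy2 keeps timestamps, which is sometimes the only clue left
    # about what a file originally was.
    shutil.copy2(f, dest / f.name)
```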

The way I’ve handled my personal NASes is to set them up as RAID 0 (as they are 2-drive NASes), buy two of them, and mirror the first one to the second. When a hardware failure occurs, the task that mirrors the devices won’t run, so I can point all connections to the backup NAS (which then becomes the primary). I then repair the first one and make it the mirror destination once it’s repaired.
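
The mirror task itself is nothing fancy; the important part is that it refuses to run, loudly, when the primary isn’t reachable, rather than syncing a broken source over the good copy. Roughly, assuming both NAS shares are mounted locally and rsync is available (the mount points and the sentinel file are placeholders of my own):

```python
#!/usr/bin/env python3
"""Rough sketch of the nightly mirror job: primary NAS -> backup NAS.

Assumes both NAS shares are already mounted at the paths below (placeholders)
and rsync is installed. If the primary isn't there, the job bails out loudly
instead of mirroring an empty or broken source over the good copy.
"""
import subprocess
import sys
from pathlib import Path

PRIMARY = Path("/mnt/nas-primary")   # placeholder mount point
BACKUP = Path("/mnt/nas-backup")     # placeholder mount point

# A sentinel file on the primary share is a cheap "is it really mounted
# and healthy?" check before letting --delete anywhere near the backup.
if not (PRIMARY / ".mirror-sentinel").exists():
    sys.exit("primary NAS not reachable -- skipping mirror run")

# -a preserves permissions/timestamps, --delete keeps the mirror exact.
subprocess.run(
    ["rsync", "-a", "--delete", f"{PRIMARY}/", str(BACKUP)],
    check=True,
)
```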

Of course, all data I put on either NAS is also backed up, so if data corruption were to occur and propagate, I could restore without any issues. Thanks for the help, guys; it really has been beneficial :slight_smile:

Yeah, that’s a RAID 0+1 setup. It basically means a RAID 0 plus a RAID 1 mirror of that RAID 0.