Does anyone have any advice (other than sending it to a professional data recovery service) for how to access data on an HDD whose controller card stopped working, and which refuses to accept a spare controller card taken from a brand-new identical HDD?
For many years now (15 or more) the PCB of a hard disk has contained a chip (essentially an EEPROM or flash device) holding so-called "adaptive data", a set of values written at the factory that are specific to the drive (heads/platters) the PCB is mounted on.
There is specialized hardware (and software) to extract this data and write it to another board's memory, but the poor man's way is to transfer the actual chip from the old board to the new (identical) one.
This, commonly referred to as a "ROM swap", is not particularly difficult[1] as the chip is usually a rather simple 8-pin one; if you are not into this kind of thing, a hardware repair shop (like a phone repair one) will normally do this work for you.
However, newer hard disks may not have this separate chip at all, so it remains to be seen which model yours is.
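If it is a separate chip and you'd rather not desolder it, another option is reading and rewriting it in place with a cheap SPI programmer and flashrom, assuming it turns out to be a standard 8-pin SPI flash (the CH341A programmer and file names below are just an example setup, not something verified against your exact model):

    # dump the ROM from the dead board (board unpowered, test clip on the 8-pin chip)
    flashrom -p ch341a_spi -r dead_board_rom.bin
    # dump it a second time and compare, to be sure the clip made good contact
    flashrom -p ch341a_spi -r dead_board_rom_check.bin
    cmp dead_board_rom.bin dead_board_rom_check.bin
    # back up the donor board's own ROM before touching it
    flashrom -p ch341a_spi -r donor_board_backup.bin
    # write the dead board's ROM (with its adaptive data) onto the donor board
    flashrom -p ch341a_spi -w dead_board_rom.bin

Obviously this only helps if the adaptive data actually lives in that external chip rather than in the controller's internal flash.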
Cool it down to freezer temperatures and see if it works. Heat it up to ~60 °C and see if it works. This applies to both hard drives and SSDs.
A good half of "failed" drives can be made to work for at least a few hours longer with that method.
Unlike OP, if you see signs of life, don't mess with clamps or reflowing. Just leave it in the freezer/oven while you take data off it, with longish power/SATA cables to a machine just outside the freezer/oven door.
I recommend GNU ddrescue for getting data off - when you only have a few hours of service life left till it is dead-dead, it maximizes data recovery in a given time. There are various ways to generate a mapfile to skip recovering free blocks, which are worth using if you suspect the drive is mostly empty.
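For reference, a typical ddrescue run looks something like this (device and file names are placeholders, and the image should of course go to a different, healthy disk):

    # first pass: copy everything that reads cleanly, skip the slow/bad areas
    ddrescue -d -n /dev/sdX recovered.img rescue.map
    # later passes: go back and retry the skipped and bad sectors a few times
    ddrescue -d -r3 /dev/sdX recovered.img rescue.map

The mapfile is also what lets you stop, power-cycle or re-chill the drive, and resume later without losing progress.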
And for the freezer method, be careful with humidity when you take it out. Worth putting it in a bag with some desiccant and just running the cables out of the bag.
Freezing / heating? Other than the humidity risk already mentioned, not really. Unless the head crashes into the platter, it's unlikely to do anything that would permanently kill the drive.
That said, if you ever have a drive with absolutely critical, must-have data, don't bother with any of this and just ship it to professionals. You'll pay dearly, but they'll get your data out.
There are BIOS chips on the controller board that need to be transferred to the new one. In a reasonably modern drive that would be pretty challenging and require a hot-air station.
It's my suspicion, at least. In fact I have two of those drives, bought at the same time, and both died in the same sudden way no more than two weeks apart.
Yes, there was a well-publicized Seagate issue [1] that I think was related to uptime, and you could fix it after the fact by grabbing a shell through the TTL serial on the jumper pins. (Edit: thanks jaclaz for providing a link with more details!) It was claimed Seagate would fix it for you under warranty as well, although shipping a drive always has risks.
HPE had two rounds of enterprise SSDs that failed because the uptime counter overflowed, but I never saw anything about fixing those after the fact. And I think I had seen a different SSD uptime-based failure a year or so before.
IMHO, it's best to avoid same-batch storage, and if that's not possible, stagger the online time to try to give yourself enough time to notice a failure, obtain replacement storage, install the replacement storage, and migrate data. Backups are important too, but it's nicer to have a path towards mostly online recovery. And some mostly replaceable data is hard to justify backups for (do I need three copies of format-shifted media? probably not; if my online storage fails, I can re-rip).
I don't recall hearing about this for Western Digital drives, but there's some Xbox 360 stuff that I thought involved the TTL serial on WD drives... It's certainly worth exploring. WD Green drives do also have a very short default timeout to park the heads, and as a result can experience a large number of parking cycles in some applications, and the parking ramp can wear out; I don't think this is really recoverable, the heads are likely to get damaged and debris may damage the platters.
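If anyone wants to poke at that TTL serial port: on the Seagate drives of that era it was a 3.3V UART on the jumper block at 38400 8N1, so a USB-TTL adapter plus any terminal program gets you to the diagnostic console (device path below is just an example; the actual recovery command sequence is firmware-specific, so follow a writeup for your exact model rather than improvising):

    # 3.3V adapter only: GND to GND, adapter RX to drive TX, adapter TX to drive RX
    screen /dev/ttyUSB0 38400
    # or: minicom -D /dev/ttyUSB0 -b 38400
    # Ctrl+Z was reported to drop you to the diagnostic prompt on the affected Seagates

Whether the WD serial console speaks anything similar I don't know, so treat that part as unexplored.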
Just for the record, the Seagate issue at the time was due to a bug in the firmware: when the drive was powered on and found a counter at certain values, it went into a sort of loop and failed to "boot" (the internal OS) any further.
This happened only on some disk drives because it was initially triggered by defective test equipment on only some production lines; see "Root cause" here:
Doesn't ring a bell for me, but given the highly questionable tiering/marketing methods from HDD manufacturers these days and the fact that they have been reducing warranty durations by some 50%, that type of planned obsolescence wouldn't surprise me.
The drives are 3.5" 2TB WD Green purchased around 2012, and had been in use for about 1 year when they both died.
If that's not a typo, then it seems like those drives have been powered off for about 10 years.
I think powered off hard drives are commonly said to retain data for um... maybe 3 years (from rough memory). So, your drives have probably lost their magnetism (and thus the data). :(
Naah, that may apply to SSDs, not to good ol' (rotating-platter) hard disks; the magnetism does not evaporate.
The only issue that may happen to a hard disk left unpowered for several years is so-called bearing seizing: the (fluid) bearing of the motor/platters may become stuck. It is relatively rare, though some particular makes/models are more prone to it, and it is usually fixable, but you need a specialized service, as the disk needs to be opened: in some cases the bearing can be made to rotate freely again, in other cases it has to be replaced, and some specialized tools are needed:
That reminds me of a work colleague a few years ago. He got an ancient drive working again by tapping it on the side with a screwdriver while powering it on, to get it "unstuck".
Not a typo, they've been powered off for around a decade. But HDDs (just like floppies) don't lose their magnetization in just 3 years, or even 10 years, barring the very rare case of an extreme fluke or environmental exposure. Flash memory (memory cards, USB sticks, SSDs) loses electrical charge relatively fast, however. That might be what you're thinking of.
Double-check the numbers on the controller boards; HDDs are complicated little computers and the manufacturers change things during production of a particular model of drive fairly often.
If you've got the controller board your drive wants and still nothing, then it's time for professional help or considering the data lost, imo.
The PCBs are of the exact same model. My fear is a DRM type scheme connected to serial numbers etc. stored on the platters having to match those of the controller.
It doesn't even have to be DRM, just "media mapping" where it's adjusting to the individual platters and such, like factory low-level-format stuff.
I don't know how complex HDDs have got, but I recall giggling at someone installing Linux on an HDD controller board several years ago. So I bet it's much worse now.
Why the snark? I will survive without the data, but I'm curious about alternatives to throwing $4000 at the problem and forfeiting my privacy and my data's integrity.
There are many people out there who thought they could recover the data by themselves, only to swallow a bitter pill later; that's why.
Because if you really need the data, then you go to people who make a living recovering data.
But if you are okay with losing the data if unsuccessful, then it's okay to try; you should know/say that beforehand, though.
Reading through the other comments - you have a very low chance of succeeding, because if you want to swap controller boards then you need to move the adaptive data too, as others have said.
But I'm curious what exactly happened; WD Greens from 2012 are not the worst drives out there. How exactly did they fail, and what happens now when you power them on, with SATA connected and without? Did you try external USB-to-SATA converters/boxes?
>"Reading through the other comments - you have a very low chance to succeed, because if you want to swap controller boards then you need to move adaptive data too, as other had said."
I'm pretty decent with a soldering iron and a hot-air gun. Migrating the SMD flash/EEPROM chip shouldn't be too hard.
>"But I'm curios what exactly happened, WD Green from 2012 are not the worst drives out there. How exactly they failed, what happens now when you power them on, with SATA connected, without? Did you try external USB2SATA converters/boxes?"
The discs spin up but the controller no longer communicates with the host. The computer doesn't see that the drives are attached to the SATA bus. There were no signs of problems coming, they just suddenly stopped working from one power-up to the next. I tried with different motherboards and a couple of SATA-USB bridges, all with the same result.
> Migrating the SMD flash/EEPROM chip shouldn't be too hard.
Well, good luck then.
> The discs spin up but the controller no longer communicates with the host
Now this is strange: if the controller were dead, there would be no spin-up. If you hear the heads working, then the controller is definitely not dead.
Have you tried searching forums dedicated to data recovery for your exact P/N?
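Also worth checking whether the SATA link itself negotiates when the drive powers up, or whether the host sees nothing at all; on Linux something like this makes the difference visible (device naming illustrative):

    # follow the kernel log while powering / hot-plugging the drive
    dmesg -w | grep -i -E 'ata|sd[a-z]'
    # list whatever block devices actually got registered
    lsblk

Roughly speaking, "SATA link up" with the drive never answering IDENTIFY points at a controller that is alive but hung, while no link at all points at a dead board or PHY.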
No discernible sound of the heads moving. The drive just spins up and that's it. The last time I searched was back in 2012-2013 and I didn't find any advice other than trying a PCB swap.
Just a double check... you said elsewhere in the thread that you have two of these drives that died at 1 year old. Do you have a third, working, drive you're transplanting boards from?
I have a third drive, working and unused, with the same PCB model number. Others in this thread have said that my chances are good if I bring the 8-pin EEPROM from the broken drives over to the working PCB.
Historically they did contain a map of physically unusable sectors particular to the physical platters in the drive, and I'd be surprised if they didn't now. So the consequence of a controller swap was making the assembly unstable because you were using a foreign sector map; usually it was still enough to recover as long as you weren't writing new data.
But nowadays a drive controller is far more complex, e.g. it might implement transparent hardware AES encryption, in which case swapping the board loses the key. And I've no doubt there are many modern manufacturing-related tricks for yield that go into them as well, any of which might make a different set of platters than what was shipped unreadable.