Problems with transferring large amounts of data

I've been running into problems transferring large amounts of data. At first my drives kept crashing, bringing the system to a standstill and freezing it. After some investigation I was convinced it was down to one particular NVMe drive and the enclosure it was in. I swapped the drive into a different make of enclosure and copying was fine. I still wasn't convinced the system should have frozen the way it did, so I did a fresh install of 24.10.

Things appeared to be going well until today. This time I was using GParted to copy one drive to another of the same make and model number - basically identical drives. Both are SSDs from 2017 that have not been used extensively; both are ADATA 250GB and have given no trouble until today. For some reason one of the drives could not be read. It simply would not work with the system, and I had to use GParted to sort out the formatting and partitioning. At least that's what I hoped. About one third of the way into copying 130GB or so, the process seemed to stall. No progress was made for over two hours, so I had to kill the program. I am putting this latest incident down to the drive going faulty, and it remains in the dust pile by itself for the moment.

So here's my dilemma. Do I persist with Ubuntu MATE 24.10, which possibly has a bug when it comes to copying large amounts of data, or do I move away from it altogether? I am veering toward the latter, mainly because I don't trust Ubuntu MATE 24.10. I am used to copying 40 to 50GB at a time, but now that I want to copy whole drives of 100GB and more I am having hiccups. I can copy a few folders of around 40 to 50GB, but I doubt anything bigger will go through without problems and wasted time. I have also tried dd to copy whole drives, but after one stupid mistake of copying the wrong way round and actually wiping the drive that held the data, I don't trust myself not to do that again.
So, I may be stepping away from Linux after so many years...

Sorry to hear about the troubles with the large transfer between SSDs.

Eight years is pretty old for an SSD. Powered or unpowered, those cells definitely degrade over time. During an intensive 130 GB copy on a 250 GB drive, one of the drives could have overheated or run into bad blocks. S.M.A.R.T. data should have more details on their health.
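As a sketch of how to read that S.M.A.R.T. data, `smartctl` from the smartmontools package works for both SATA and NVMe drives. `/dev/sda` below is a placeholder - substitute the device that `lsblk` shows for the suspect SSD:

```shell
# Install smartmontools if it isn't present (Ubuntu package name):
sudo apt install smartmontools

# Overall verdict only (replace /dev/sda with the drive in question):
sudo smartctl -H /dev/sda | grep -i "overall-health"

# Full report: check reallocated sectors, media/CRC errors, and temperature:
sudo smartctl -a /dev/sda
```

A drive can report "PASSED" and still be marginal, so the attribute table in the full report is worth a look too.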

The kernel logs should have answers from the time of the failure. They can be checked from a terminal:

sudo dmesg

The drive likely encountered an I/O error, usually indicating a hardware fault or bad cable. The kernel usually knows what happened.
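Since a long uptime produces a lot of log noise, it can help to filter `dmesg` for the usual storage-error keywords. The pattern below is just a heuristic starting point, not an exhaustive list:

```shell
# Narrow the kernel log to lines that typically accompany a failing drive:
sudo dmesg --ctime | grep -iE 'i/o error|blk_update_request|nvme|ata[0-9]'
```

Lines mentioning `I/O error` alongside a device name like `sdb` or `nvme0n1` point at the physical drive (or its cable/enclosure) rather than the copying program.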

I can't see how it's Ubuntu's fault, but if you want to rule it out, there are tools like GParted Live (based on Debian). There might be non-Linux tools that can do full disk clones, but I'd bet they'd equally freeze/crash if the drives are nearing the end of their life. SSDs have a finite number of cycles before they start acting up like this.

dd and copying via GParted are really inefficient and more suited to hard disks. Because both methods do bit-by-bit copies, they wear the drive out faster, since they read and write every block - even empty ones. That's just as bad as defragmenting SSDs.

If it were me, I'd use a method to "NAND clear the memory cells" (trim/sanitise) to essentially factory reset the drive to a fresh state; create the partitions (and attributes) identical to the original; and then either copy files or sudo rsync -av the lot from A to B if it needs to preserve file attributes/permissions.

However, some of these SSDs don't sound in good health, so I would recommend backing up the files onto a reliable spinning metal plate (HDD).

4 Likes

Thanks very much for your insight Luke. I am aware of the chips fragility. I used to be electronics supervisor in a university department and worked a lot with computers. I never quite got to grips with programming but that's by the by. The SSDs have never been heavily used and I've been doing the copying over and between various drives including mechanical. To be honest I can't think if any problems were purely SSD. The attempted gparted copy was SSD to SSD. I must try a couple of HDDs when I get a moment and check my notes from previous. You've given me food for thought. Thanks.

3 Likes

Just in case, I'd like to mention that entire drives are better copied unmounted. For example, backing up the system drive means booting the computer from a live CD/USB.
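A quick way to check that nothing on the target disk is still mounted before a whole-drive copy (device name is a placeholder):

```shell
# Show every block device and where (if anywhere) it is mounted:
lsblk -o NAME,MOUNTPOINT

# Unmount any partition of the target that still shows a mountpoint:
sudo umount /dev/sdY1
```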

P.S. Of course, file-level copying of data drives is done with the drives mounted.

3 Likes

If copying goes well except for large files, it might be a good idea to check your RAM.
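The most thorough RAM check is memtest86+ from the boot menu, but as a quick userspace alternative the `memtester` package can exercise a chunk of RAM from a running system. The 512M size below is an arbitrary example - pick something comfortably below your free RAM:

```shell
# See how much RAM is free before choosing a test size:
free -m

# Lock and pattern-test 512 MB of RAM for one pass (needs root to lock memory):
sudo apt install memtester
sudo memtester 512M 1
```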

3 Likes

We need more information on your setup.
What applications are you using to transfer these files?
How many GB per file, and how much in total?
Can you give us the output of:

lsblk -o name,fstype,size,fsused,fsuse%,fsavail | grep -Ev "loop"

On my setup, if I download a very large file with BitTorrent to an NTFS partition, it freezes and there's no way to finish - but it works fine on an ext4 partition. Check whether a similar issue applies on your side.
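An easy way to check which filesystem a destination actually uses (the path below is a placeholder for wherever you are copying to):

```shell
# Print the filesystem type backing a given path:
df -T /path/to/destination
```

If the Type column says `ntfs`/`ntfs3` rather than `ext4`, that alone could explain stalls on very large writes.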

1 Like