Copying UM16.04 to smaller drive

I was successful copying SDA to SDD. Everything looks as expected from GParted. I can still boot to SDA. GRUB still shows 12.04 on SDD2, so no booting options there. I tried booting to SDD from my BIOS, and all that did was return me to the GRUB menu. Before the dd copy command, my BIOS showed an SDD entry and a separate entry for Ubuntu 12.04. I could boot by selecting 12.04, but if I selected SDD I was returned to the GRUB menu, just like it does now.

So it seems to me I need to repair GRUB. Can this be done while logged in, or do I need to do this from a live disk? I would like both disks to be bootable before I install 18.04. I see instructions here, but those are for a single-disk boot.

An update: I am not able to mount SDD. Also, although I can mount all other drives, looking at my computer under Places, I see this.


The OCZ-VERTEX3 is my SDD.
I also noticed both SDA and SDD have the same UUID. Should I have saved the previous UUID from the SSD? Or can I apply a new random UUID and random PARTUUID?

$ blkid
/dev/sda1: UUID="fa83ad0f-b87c-4640-82a8-786f18f91ba5" TYPE="ext4" PARTUUID="2ef6e257-01"
/dev/sr0: UUID="2016-04-20-23-18-37-00" LABEL="Ubuntu-MATE 16.04 LTS amd64" TYPE="iso9660" PTUUID="6d9d647d" PTTYPE="dos"
/dev/sdb: LABEL="DataFiles" UUID="7cdbf0c0-d4df-471e-a1db-07fbfdb61ef2" TYPE="ext4"
/dev/sdc1: LABEL="MultiMedia" UUID="1986ba60-63d8-49aa-a4c2-47317d18ed79" TYPE="xfs" PARTUUID="000ba044-01"
/dev/sdd1: UUID="fa83ad0f-b87c-4640-82a8-786f18f91ba5" TYPE="ext4" PARTUUID="2ef6e257-01"
/dev/sde1: LABEL="ElementsBackUp" UUID="46144a6a-c62d-4d8a-acb0-4a689736136a" TYPE="ext4" PARTUUID="4407701c-01"
$

Your observation about the UUIDs being the same is right on the money: Linux wants UUIDs to be UNIQUE, so the duplication can indeed keep the backup from mounting while an identical-UUID MATE 16.04 installation is present, as you were finding.

That said, you should still be able to reverse the boot order in your BIOS, in which case the situation should reverse. That would mean that, after booting from the 110GB drive, you would find that MATE then refuses to mount the 73GB MATE partition on the 220GB drive.

I did not suggest trying to change the UUIDs on the copied 16.04 drive, since this creates other complications, and any conflict will go away later on its own. Once you verify that your copied 16.04 on the 110GB drive can boot on its own correctly, you can blow away the original 73GB partition on your 220GB drive, because when you re-partition that drive and install 18.04 WITH A NEW PARTITION TABLE AND NEW PARTITIONS (which will have NEW UUIDs), the duplicate UUID conflict will be eliminated.

Does this sound right to you? (I assumed that you wanted to recover the space on your 220GB drive and re-partition and install 18.04 there; is that right?)

SO THE FIRST ORDER OF BUSINESS IS TO MAKE SURE THAT YOUR COPIED 16.04 MATE INSTALLATION DID BACK UP CORRECTLY, AND WILL BOOT PROPERLY FROM THE 110GB DRIVE.

As I noted above, the duplicate UUIDs should not keep you from reversing the boot order in your BIOS and booting from the 110GB drive (which should now have an identical GRUB boot sector, partition table, and first 73GB partition). But just in case your BIOS or MATE is still getting confused in some way, have you tried temporarily disabling the 220GB drive completely in the BIOS, so that Linux will think it has a single 110GB drive, which is sda?

If you can't get the 73GB MATE partition on the 110GB drive to boot up properly no matter what, even after trying this, then something obviously went wrong with the copy operation. So here are a few simple questions:

What was the actual dd copy command that you used? Most importantly, did you copy the full base device to base device (i.e. sda, sdb, sdc, sdd, etc.), or did you only copy partitions (i.e. sda1, sdb1, sdc1, etc.)?

To preserve the boot sector in the copy, we need to copy base device to base device (sda, sdb, sdc, sdd), with the upper bound of the dd copy set high enough to guarantee that we get EVERYTHING in your current MATE 16.04. When calculating the minimum upper bound, don't forget the difference between GB (1,000,000,000 bytes) and GiB (1024 × 1024 × 1024 bytes).
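As a quick sanity check on that upper bound (just a sketch, using the 73GB partition size you described), you can do the unit conversion right in the shell:

echo $(( 73 * 1024 ))    # 73 GiB expressed in 1 MiB blocks: prints 74752

… so with bs=1M, a count=80000 comfortably covers the 73GiB partition plus the boot sector and partition table, with margin to spare.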

Also, although I mentioned that you should copy using a live session, one thing I assumed, but should probably have stated explicitly, is to also make sure that whatever devices your 220GB and 110GB drives are mapped to (sda, sdb, etc.) are all UNMOUNTED in your LIVE SESSION while you are doing the copy. You can check this with the mount command from a terminal, or more easily by using the "disks" GUI application from the Preferences/Hardware/Disks menu; see the quick terminal check below.

This ability to have everything UNMOUNTED is the whole reason we use a LIVE SESSION in the first place.
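For a minimal terminal check (the partition name here is an example; substitute whatever lsblk reports):

lsblk                     # the MOUNTPOINT column shows anything still mounted
sudo umount /dev/sdX1     # unmount any mounted partition on the source or destination drive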

Sorry, I should have noted this, but I have done this so many times that it's tricky to remember all the little details that snag you up the first time through.

EDIT:
If you get everything sorted out, so your 16.04 is booting correctly from the 110GB drive, you can eliminate the unneeded Ubuntu 12.04 entry in the 16.04 GRUB boot menu by running:

sudo update-grub

When you install 18.04, it will install its own new copy of GRUB on the 220GB drive's boot sector, and as part of that process, it should automatically detect your 16.04 MATE install on the 110GB drive.

This should happen even if you don't bother to sort out the GRUB bootloader issues on the 110GB 16.04 MATE installation. Getting all the boot issues sorted out for the copied 16.04 MATE installation is still worthwhile, though, since it will give you confidence that the copy operation completed without errors, and it will allow the 110GB drive to boot on its own at some future date if your 220GB drive has a failure.

Personally, when I back up or restore a disk image from a backup file, I like to work in "paranoia mode", where I double-check and byte-by-byte verify EVERYTHING.

This is a simple 3-step process. It is a bit more time consuming (quite a bit more for large backups), but it does give you a 'warm fuzzy' feeling of confidence that you have a solid, clean copy, so you don't have to worry about blowing away the original.

To have any chance of the MD5 check below verifying correctly, you MUST boot from a live session and be very careful to make sure that the source and destination disk partitions are UNMOUNTED, and STAY UNMOUNTED, while you complete ALL backup and verification steps (do the full copy and verification before you try to re-mount or re-boot either the original or the copied disk image).

Here are the 3 simple steps to copy and verify a disk image in “paranoia mode”:

1) First we boot into a live session and make sure that no partitions are mounted on either our source or destination drive, and then do an MD5 crypto-hash of the original:

sudo dd if=/dev/sdX bs=1M count=80000 status=progress | md5sum -b

… with sdX being replaced by our actual source drive's device (sda, sdb, sdc, sdd, etc.)

2) Leave this terminal window open as a reference, open another terminal window on the live session desktop, and do the actual copy operation:

sudo dd if=/dev/sdX of=/dev/sdY bs=1M count=80000 status=progress

… with sdX being replaced by our actual source drive device (sda, sdb, sdc, sdd, etc)
… and sdY being replaced by our actual destination drive device (sda, sdb, sdc, sdd, etc)

3) Now we can verify that our copy is byte-for-byte identical by checking the MD5 of the copied region on the destination drive with:

sudo dd if=/dev/sdY bs=1M count=80000 status=progress | md5sum -b

… where /dev/sdY is our destination drive (note that this is the exact same syntax as the first command above, but with the drive replaced with the destination so we are checking the MD5 hash of the copy)

If the MD5 sums match, we can assume that the disk image we just copied is byte for byte identical to the original.

Then you can of course also double-check by disabling the original drive and booting from the copy, but note that you should only do this after the MD5 check verifies the copy, since the very act of booting a drive changes the MD5 signature: there will always be small changes to the drive on every boot cycle (like log file updates).

So here is where I am and how I got there.

As mentioned earlier, everything appeared to copy over properly. When I selected sdd to boot in my BIOS, the GRUB menu loaded, but GRUB only provides options for 16.04 on sda and 12.04 on sdd2. Against your advice, I did try changing the UUID. This did make the drive mountable, but again no luck booting. Keep in mind, I have never been able to boot directly by manually choosing sdd. My BIOS showed sdd, but also had an "Ubuntu 12.04" entry (which was on sdd2) listed, which did boot when manually selected. So I don't see how to force a boot without GRUB.

I also noticed the GPT error still existed on sdd. So I ran "sudo gdisk /dev/sdd" and removed both the GPT (since sda did not have a GPT) and the MBR, then added a partition. I booted to the live disk and copied SDA to SDD again with "sudo dd if=/dev/sda of=/dev/sdd bs=1M count=80000 status=progress". Yes, I did verify the drive identifications beforehand, and did an md5sum check, which came out exact. The corrupt GPT is now gone and the two drives look identical. See below:

$ sudo gdisk -l /dev/sda | grep -A4 '^Partition table scan:'
Partition table scan:
  MBR: MBR only
  BSD: not present
  APM: not present
  GPT: not present
$ sudo gdisk -l /dev/sdd | grep -A4 '^Partition table scan:'
Partition table scan:
  MBR: MBR only
  BSD: not present
  APM: not present
  GPT: not present


Now, I did verify the two drives have the same UUID using GParted. What is odd now is that "blkid" returns nothing.

$ blkid
$

So where I was hoping to go was to have a GRUB supporting booting of both drives with 16.04. But based on your input above, I understand that may be more difficult than just installing 18.04 on sda and letting the installation fix the rest. I have not tried running Boot-Repair, which I see has a live disk on SourceForge.

Just curious if you have any other recommendations before I try Boot-Repair first, then the 18.04 installation? Just a reminder, I have image backups of all drives, although I do not know if 12.04 is restorable since I deleted the GPT on sdd, but 12.04 is really no longer needed. Thanks again for your patience and help. I have learned a lot.

If the MD5 was OK, I don't see any reason that both drives should not be bootable, since they are now effectively identical. Which, by the way, means that the UUIDs are once again identical, unless you somehow broke the UUID on your original 220GB source drive, in which case they would now BOTH be broken.

So, this would be a good time to enable the drives one at a time, if possible, and check to see if you can still boot at all from either drive.

While you are doing this, I hope you understand that when you boot GRUB from the 110GB drive, you should NOT expect to be able to boot it as "sdd" using some newly found GRUB entry, or by using the old 12.04 entry. After your BIOS reverses the boot order, GRUB (and the subsequently booted MATE 16.04) will think they are booting sda, not sdd. So if you do succeed in booting 16.04 from your main GRUB menu on the 110GB drive, it should boot as "sda" and think the 110GB drive is now "sda".

At least that's how it has always worked for me… (although, as I mentioned, you might have to temporarily disable the 220GB drive in your BIOS to avoid the system becoming confused by the duplicate UUIDs).

Be careful at this point, because with the duplicate UUIDs, and the confusion about which drive is really booting as sda, it would be pretty easy to corrupt your working 16.04 on the 220GB drive before you ensure that the 110GB drive is bootable.

At this point, the safest way to proceed is to use the BIOS to temporarily disable the 110 GB drive, make sure that the 220GB drive can still boot, and then reverse the process, and disable the 220GB drive and check to see if you can boot from the 110GB drive.

If you can't do it in your BIOS, there is a way to do it manually, but it's a little dangerous, so it would be better if there is a simple BIOS setting that you can use.

It is technically possible to preserve both 16.04 installations, but you would have to change the UUIDs of one of them (both the disk UUID and the partition UUID).

I am fairly certain that other measures like Boot-Repair will not be able to fix things unless you change the disk and partition UUIDs of at least one disk.

After that, Ubuntu will not boot from the partition where you changed the UUIDs until you edit the /etc/fstab on that partition, so that Ubuntu knows how to mount the root partition (look at /etc/fstab and you will see that the partition UUID appears in the line that mounts /).

You mentioned that 16.04 would not boot on the partition where you changed the UUID, and this is most likely why that happened: Ubuntu uses the UUID to mount the partition, rather than the /dev/sda1 device ID.

If you change the UUID of one of the drives and its partitions, and edit the fstab on that partition to point at the new UUID for the root partition, then everything should work.
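For reference, here is a rough sketch of what the filesystem-UUID part of that might look like for an ext4 partition (device names and the mount point are assumptions; adjust to your actual layout, and make sure the partition is unmounted before changing its UUID):

sudo tune2fs -U random /dev/sdd1    # assign a new random filesystem UUID (partition unmounted)
sudo blkid /dev/sdd1                # note the new UUID it reports
sudo mount /dev/sdd1 /mnt           # now mount the changed partition
sudo nano /mnt/etc/fstab            # update the / entry to the new UUID

The MBR "disk identifier" (which the PARTUUID is derived from) is separate, and can be changed from fdisk's expert menu; after updating fstab you would also need to regenerate the GRUB configuration so it searches for the new UUID.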

Then you would be free to install 18.04 in the free space on either drive, and everything should work.

Sorry if this seems like a royal pain, but this is what I was referring to when I said that things would be fairly simple if the only goal was to move 16.04 from drive to drive, but would be “more complex” if you need to preserve BOTH partitions.

There is probably a way, but I do not see how to disable a drive in the BIOS. The only way I know to disable a drive is to physically disconnect it. I can try that later when I have time to relocate the PC to a better location for that type of work.

For some reason, the blkid command is working again. I did confirm both UUID and PARTUUID are still identical. In the meantime, I will think more about changing the UUID & PARTUUID and updating fstab. None of those tasks seem difficult.

If you can select boot devices in the BIOS, but not disable devices, there is a simple way to do it in software, but like the backup operation itself, you have to be VERY careful about what you are doing.

The procedure is a straightforward 2-step process:

  1. back up your boot sector (first 512 bytes of the drive)

  2. write zeros over the boot sector to ‘disable’ the drive.

Later when you want to ‘re-enable’ the drive you just restore the boot sector.

Theoretically you already have one backup, created when you copied sda to sdd.
You can easily check that the boot sectors on these drives are still identical with:

sudo dd if=/dev/sda bs=512 count=1 | md5sum -b

sudo dd if=/dev/sdd bs=512 count=1 | md5sum -b

Note: these commands only read the first 512 bytes from each drive, so they should complete within seconds (as soon as the drive spins up). And because we are only reading (not writing) data, these commands should be completely harmless (although, as always, you should be extremely cautious when invoking ANY dd command on a hard drive, since seemingly minor typos can sometimes have disastrous consequences).

If they are identical, then you already have ONE backup, but just to be safe you can easily create a second backup by booting into a live session and writing an additional boot sector backup to any writable drive (other than sda or sdd, of course).

To do this, simply open a folder on the writable drive where you want to store the backup, use the right-click menu to select "Open in Terminal", and execute the command:

sudo dd if=/dev/sda of=my-sda-mbr.dat bs=512 count=1

This should create a file named “my-sda-mbr.dat” in the folder where you opened the terminal above.

As a final check, you can verify the file's MD5 from the terminal with:

md5sum -b my-sda-mbr.dat

… which should give the same md5sum as you got above when using dd and md5sum with sda and sdd.

Now that we have multiple backups for the MBR sector of sda and sdd, we can temporarily disable either drive by simply using dd to write ZEROs over the MBR.

For example, to disable sda so we can positively verify that the system can boot from sdd, we would use:

sudo dd if=/dev/zero of=/dev/sda bs=512 count=1

This will temporarily disable sda and make it look like an un-partitioned drive.

If you then set your BIOS to boot from the 110G drive, it should boot up thinking that drive is sda, and that your 220G drive is an un-partitioned drive.

The purpose of this exercise is to let you positively verify that the 110G drive does boot and run Linux normally.

When you are confident that the 110G drive is working well, you have the option of either leaving the 220G drive un-partitioned and just running the MATE 18.04 install, or, if you want to try to fix the UUIDs and preserve 2 copies of 16.04, restoring the boot sector on the 220G drive.

To do this, use dd to restore the MBR from the backup file: change to the same folder where you stored your backup MBR file my-sda-mbr.dat and execute the command:

sudo dd if=my-sda-mbr.dat of=/dev/sdX bs=512 count=1

… where /dev/sdX is the device ID for the 220G drive (it may be sdd now, or some other device, since Linux will likely change your device ordering after the switch to booting from the 110G drive).

This should restore the first 512 bytes of the drive from the backup, restoring the MBR, and then you are free to experiment with changing the partition and drive UUIDs if that’s the way you want to proceed.

I finally had some time to come back to this challenge.

History: So I copied sda to sdd. I could not get sdd to boot because of the duplicate UUID. Even if I selected sdd in the BIOS, the system went to GRUB, which only allowed me to select the sda drive. Just a reminder: selecting sda did the same thing, brought me to GRUB.

So, I changed sdd's UUID, PARTUUID, fstab, and GRUB to show the new UUID. Although the drive was now mountable, I still could not boot it. So I finally opened the PC and disconnected sda. No luck booting, but I did get the GRUB terminal. At this point I started over and re-copied sda to sdd.

Update: I copied sda to sdd, shut down the PC, and disconnected sda. The system boots just like before using GRUB.

Questions: My understanding now is that I should be able to install 18.04 to sda, and the installation process will fix the UUID/PARTUUID conflict and fix GRUB so that I can boot to 18.04 on sda and 16.04 on sdd. Is this a correct understanding?

Also, as mentioned in the 4th post, sdd apparently had a damaged GPT when 12.04 was installed. I don't understand why it is damaged again, as I did fix it earlier when I made sdd mountable. The damage returned after my recent copy of sda to sdd. The system does boot with this damage. Do you recommend attempting to fix it prior to the 18.04 install?

Sorry you encountered boot issues trying to get both 16.04s to run at the same time, but so long as you are satisfied that the 110GB drive is booting and working on its own, you should be safe to proceed with your 18.04 install.

The only thing that seems out of place is that you are seeing a damaged GPT. This should not be the case, since a simple MBR partition table with up to 4 partitions only occupies the first 512-byte sector of the HD, so it should have copied over identically, leaving both the larger 220GB and the smaller 110GB drive with identical, healthy MBR partition tables.

But, as I mentioned before, GPT disks keep a backup table at the end of the drive, and since you previously had the 110GB drive partitioned as GPT, and since the copy operation only copied the first 80GB from sda to sdd, it's likely that the partitioning tools are still seeing that left-over backup GPT table at the end of the smaller 110GB drive. Because this GPT table doesn't agree with the MBR partitioning at the start of the drive, it's shown as a broken GPT.

This has no real relevance, since the drive is now using the simple old-style MBR partition table at the beginning of the drive. You could clear out this extra unneeded GPT table at the end of the drive manually, by writing zeros to the last few megabytes of the 110GB drive using dd with a "seek=" type parameter, but I wouldn't bother unless the problem persists after you re-create a swap partition for your 16.04 installation on sdd (as I described above in a previous post) using Gparted, because Gparted should correct this type of partition-table mismatch error when it updates the partition table to add the new swap partition.
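(As an aside, the gdisk companion tool sgdisk has a "zap" option intended for exactly this kind of cleanup. This is a suggestion on my part rather than something we've tested here, so double-check the device name before running it:

sudo sgdisk --zap /dev/sdd    # destroys the GPT data structures but attempts to keep the MBR

Note the distinction between --zap, which tries to preserve the existing MBR, and --zap-all, which wipes the MBR as well; for this drive you would want the former.)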

So far as installing 18.04 goes, to fully avoid the dreaded UUID issue cropping up again, you should zero out the 220GB drive's UUIDs by erasing the drive before installing 18.04, which will allow 18.04 to create NEW drive and partition UUIDs as part of the install. To save time, you could just zero the first 512 bytes, as I described previously, but to avoid future issues (if the drive should ever become corrupted and you have to use recovery tools), I would wipe the whole drive, so you are starting with a clean slate.

As I said before, it is VERY important to double-check your device mapping in the Live-Session and make sure that the 220GB drive's device is still /dev/sda.

Once you are sure of your device mapping, how you should go about erasing the 220GB drive depends on what type of drive you are dealing with.

If it’s an old-school spinning platter drive - and assuming that it is indeed still mapped to /dev/sda - you can boot from the Live-Session and write zeros over the full 220GB drive with:

sudo dd if=/dev/zero of=/dev/sda bs=1M

Since we are not specifying an end point or limiting block count, this will ZERO THE FULL DRIVE.

If the 220GB drive is a Solid State Drive (SSD), then a MUCH better alternative is to simply boot from the Live-Session and do a “block discard” operation on the full drive using:

sudo blkdiscard /dev/sda

This will send a command to the SSD that ALL the sectors on the drive are UNUSED, which will effectively ERASE THE FULL DRIVE in only a few seconds.

Aside from the speed advantage, the difference between blkdiscard and just writing zeros to the drive is that blkdiscard tells the drive that the sectors are not used, which allows the SSD to put them back into the wear-leveling pool; this is very important for maintaining SSD speed and reliability.
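If you want to confirm that the drive actually supports discard before relying on this (a quick informational check; the device name is an example):

lsblk -D /dev/sda    # non-zero DISC-GRAN / DISC-MAX values mean the drive supports discard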

Either of the above methods (dd'ing zeros or blkdiscard) will give you a pristine unpartitioned 220GB drive to install to, which is the best possible scenario for the 18.04 installer, since you are then free to create any partition arrangement you want (boot, root, and home all together, or as separate partitions).

Since I was able to get both to boot (even though I had to physically remove the one drive), I went ahead and installed 18.04 on sda. Installation went fine; it took a couple of boots for the system to fix GRUB. Now, the first thing I did when logging into 18.04 was go into MATE Tweak, which destroyed my panels, so I will start a new thread on that.

BTW… the bad GPT was on sdd (the 110GB SSD). Like I said, it is still bootable; that is why I went ahead with the 18.04 install.

I forgot to add, I had no issue restoring the swap partition on the 16.04 sdd (110GB) installation.

@Crotchety Thanks again for all your help and instructions. I know it takes patience on your part to help someone at my level, but it is help like this, coming from this forum, that makes Ubuntu MATE so great. Thanks again!

The GPT glitch is related to what I was trying to say above: there is just a bogus GPT table at the end of the 110GB drive, which should straighten itself out when you recreate your 16.04 swap partition on sdd.

If everything is booting and mounting ok now under 18.04, you should be able to check to see that all your drives now have unique UUIDs by simply running:

sudo blkid

If so, then hopefully GRUB is now working correctly and showing your old 16.04 partition on the 110GB drive as a valid boot option, and you should now be able to boot up from either 18.04 or your old 16.04 install.

EDIT:
It looks like we had crossing posts…
Glad you got everything worked out.

Everything has a unique UUID, but the GPT glitch is still there. Keep in mind it was there prior to the copy command when 12.04 was still installed. See below:

$ sudo gdisk -l /dev/sdd | grep -A4 '^Partition table scan:'
Caution: invalid main GPT header, but valid backup; regenerating main header
from backup!

Warning! Main partition table CRC mismatch! Loaded backup partition table
instead of main partition table!

Warning! One or more CRCs don't match. You should repair the disk!

Partition table scan:
  MBR: MBR only
  BSD: not present
  APM: not present
  GPT: damaged

Also, I assume there are two grubs, one on sda and one on sdd. Occasionally I get the one on sdd that still shows the 12.04 installation. Just a minor fix I will need to search for.

First, about the GPT error that you are seeing, notice the text that says:

Caution: invalid main GPT header, but valid backup; regenerating main header
from backup!

This is an indication that gdisk starts with the assumption that your disk probably has a GPT table (unlike fdisk and other older utilities, which look for an MBR-type table first).

When you run gdisk on the 110G drive, it’s not finding the main GPT table at the beginning of the drive (because that has been replaced with the 16.04 MBR table), but it is finding the old no-longer-used backup table at the end of the drive.

If you post a Gparted screen capture of your current 110G drive partitioning, we can look at how to fix this manually, by zeroing out this table with a special dd command that uses the "seek=" option to skip over the 16.04 O.S. and delete the unwanted GPT backup table.
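In rough outline it would look something like the sketch below. Do NOT run this until we confirm the partition layout, since it is only safe if no partition extends into the last megabyte of the drive (the device name is assumed to be sdd):

SECTORS=$(sudo blockdev --getsz /dev/sdd)    # total size in 512-byte sectors
sudo dd if=/dev/zero of=/dev/sdd bs=512 seek=$(( SECTORS - 2048 )) count=2048

… which zeros the final 1MiB of the drive, where the backup GPT header and table live.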

You may want to do this in conjunction with resizing your 16.04 working partition back to the maximum size possible. Let me know; it's not that difficult.

Now moving on to the strange grub boot loader behavior . . .

If you boot up into your old 16.04 installation and from your 16.04 mate terminal run:

sudo update-grub

… it should see the 110GB drive as sda, and update-grub should fix your older 16.04 GRUB boot loader so that it drops the now-nonexistent 12.04 entry and correctly shows both the 18.04 and 16.04 boot options (although the order may be reversed: 16.04 first, 18.04 second).

The issue of your system sometimes booting from the wrong GRUB is a little strange, because which boot loader loads should depend purely on the boot drive order specified in your BIOS.

What could potentially be happening is that, for some reason, your 220GB drive is a little slow to spin up, and sometimes the system moves on to boot from the other drive, and so loads the other GRUB…

You might be able to fix that by removing the 110GB drive from the boot device list in your BIOS so the system will be forced to wait for the 220GB drive and boot from that.

With MBR partitions we are booting in legacy non-UEFI mode, and sometimes your PC BIOS will also have some additional setting related to “legacy compatibility” for the SATA ports that might help.

Also, you mentioned that all your partitions now have unique UUIDs, which should be the case if you reinstalled using the MATE 18.04 installer.

BUT: disks also have an overall "disk identifier" that is stored in the MBR, and this value may not be unique if you did not go through the additional step of zeroing out the MBR on the 220G drive before you installed MATE 18.04. This is because the MATE installer will try to preserve the existing partition table, and only update and change the partitions where it's actually installing something.

So, even if you told the installer to delete everything and install MATE from scratch, it may not have deleted and recreated the partition table or disk identifier.

Not sure if this will cause weird behavior with grub or not, but it’s a possibility, since this “disk identifier” is ALSO supposed to be unique.

You can easily check the “disk identifier” for all your installed storage devices using the terminal command:

sudo fdisk -l
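Each disk in that listing includes a line like "Disk identifier: 0x2ef6e257" (that example value is taken from the PARTUUID prefix in your earlier blkid output). If you ever need to change a duplicated identifier, fdisk can do it interactively from its expert menu; a sketch, substituting your real device:

sudo fdisk /dev/sdd
# x    -> enter the expert menu
# i    -> change the disk identifier
# r    -> return to the main menu
# w    -> write the change and exit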

Sorry I can't be more definite, but what you are describing really shouldn't be happening, so I'm just tossing out a few ideas here…

I already installed 18.04 prior to your suggestion of manually zeroing out the table. I also went into the BIOS and adjusted the boot order, so the default boot now is sda. Somehow the boot order changed during this process. I have already added a swap partition and extended the boot partition. Below is a GParted screenshot. Interestingly, I see the UM 18.04 installation did not create a swap partition. Is this normal, or did this happen because no swap was on sda at the time of installation?

At this point, since you have already resized the working partition and recreated swap, and it seems to be working, I would not worry about that erroneous GPT error, so long as everything seems to be ok otherwise.

From the way you described things working (with the old 12.04 phantom info sometimes showing up when you boot), it sounds like you are correct in assuming that you now have TWO GRUB boot-loaders installed: one for the new 18.04, and one for your older 16.04 installation.

In general this is a good thing, since if one grub install gets corrupted somehow, you should be able to just boot from the other drive in your BIOS.

While you were tweaking your BIOS boot settings, did you get a chance to boot up into 16.04 from the 110G drive version of grub and clear out the bogus 12.04 entries by running “sudo update-grub”?

If so, did the old grub installation then properly detect your 18.04 install on the other drive?

Not sure what to make of the missing swap partition on the 18.04 install, unless you had already restored the 16.04 swap partition on the other drive before you installed 18.04, and the 18.04 installer decided to use the existing swap rather than create a new one.

To find out for sure, just boot up into your new MATE 18.04 desktop, open a terminal, and run:

swapon -s

EDIT:
Just a quick added note to say that, in general, having two Linux installs share a swap partition is not an issue, so long as you don't try to use "hibernate" mode standby.

In hibernate mode, the system writes out the RAM contents to the swap area before shutting down, then restores from swap when waking, which would obviously get boogered up if two installations were sharing a swap area. With large amounts of RAM, hibernate is MUCH slower than suspend-to-RAM sleep mode, so it's normally disabled by default. This can be confusing in recent versions of Ubuntu, where the power manager will say it's set to 'hibernate' for some settings but just use 'sleep' anyway.
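If you are curious what your kernel actually supports, you can peek at it directly (purely informational; it changes nothing):

cat /sys/power/state    # e.g. "freeze mem disk"; "mem" is suspend-to-RAM, "disk" is hibernate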

Yes, I did restore SWAP on 16.04 before installing 18.04.

From 18.04
$ swapon -s
Filename     Type    Size     Used    Priority
/swapfile    file    2097148  0       -2

So if the SWAP partition on sdd is being used by 18.04 on sda, I don't understand the size indicated, as the swap partition on sdd is 8GB. Looking at the 18.04 fstab, it indicates no swap partition is being used, just this swap file:
/swapfile none swap sw 0 0

Wouldn't it be a good idea to add one? And if so, I think it makes sense for each distribution to have its own swap on the same drive as its boot partition.

BTW… I just logged into 16.04 via the BIOS, and it seems GRUB somehow was fixed without me doing anything except logging back and forth between distributions as I build up 18.04. The sdd GRUB now indicates 16.04 first, then 18.04, as expected. The 12.04 entry is no longer listed.

Great!

It sounds like you have things on track.

The reason I suggested that you check the swap status with swapon -s is that Ubuntu 18.04 can install with multiple swap options.

If you assign a dedicated swap partition, or install with LVM/LUKS encryption, then 18.04 will use a separate swap partition, but since version 17 Ubuntu can also use a simple swap file.

From the swapon -s response that you posted, it looks like your MATE 18.04 installation is using a simple 2 Gig swap file (2,097,148 1K blocks).

There are several ways to enable swap using a swap file in Debian-based releases, but I suspect they handle it like the earlier Ubuntu 17 releases did, using a simple /etc/fstab entry.
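For reference, a swap file like that is typically created with something along these lines (a sketch of the usual recipe, not necessarily exactly what the installer ran):

sudo fallocate -l 2G /swapfile    # reserve the space
sudo chmod 600 /swapfile          # swap files must not be world-readable
sudo mkswap /swapfile             # write the swap signature
sudo swapon /swapfile             # enable it immediately

… plus the /swapfile line in /etc/fstab so it comes back after a reboot.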

If you want to shift back to using the traditional disk partition method, look in your /etc/fstab for an entry like:

/swapfile none swap sw 0 0

… and change it to something like:

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx none swap sw 0 0

… where the UUID matches your new swap partition (just like you did with 16.04).
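To find that UUID, and to sanity-check the change before rebooting, something like this should do it (a sketch; the swap file path matches your fstab above):

sudo blkid | grep swap    # shows the swap partition's UUID
sudo swapoff /swapfile    # stop using the swap file
sudo swapon -a            # activate everything listed in fstab
swapon -s                 # confirm the partition is now in use

Once the partition is active, you can delete the old /swapfile to reclaim the 2GB.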