Installer wants to put extra partitions on a new SSD

I'm trying to install Ubuntu-MATE 18.04.4 on a new, empty, external standalone 1 terabyte SSD.

I have 2 other drives in my desktop workstation, so in the installer I selected "do not use" for the swap spaces on those drives.

I then selected to create an 8 GB swap space for the SSD as a "logical" or extended partition at the end of the SSD's empty space. (Since my desktop box has 8 GB of RAM.)

Then I set up one large primary partition for both "/" (root) and /home directories from "Beginning of space" using the ext4 filesystem.

I selected for the bootloader to be on "sde" which is the SSD.

But when I click on "Install now," the informational box that pops up says that the large primary partition will be partition 2, and that the 8 GB swap partition will be partition 5.

The installer wants partitions 1, 3 and 4 to be "unallocated space" partitions. Is this normal? I defined just 2 partitions. I don't understand why the installer wants to create five partitions on my SSD.

Since I want to understand what's going on with this, I did not go ahead with the install.

Can anyone shed light on what's going on here?

What would be "standard"? It seems like a lot of space is getting wasted with three "unallocated space" partitions. How would I correct this so as to limit the number of partitions and avoid unused or wasted space? Or do I want some small bit of free or unallocated space as, say, one partition?

Could I avoid all this if the first partition I create is the large / and /home primary partition, from the "Beginning of space" (leaving only 10 GB at the end), and then I create the 8 GB swap space from the "end of space" (or even from the "beginning of space", which would now be right after the primary partition, leaving 2 GB at the end, though maybe that would create a third partition)?

Will I still end up with more than 2 partitions? Or rather, how can I do this so I end up with only 2? (And is it OK to want just 2, or is there some need for a third partition of "free space"?) Is there generally a need for free space with an SSD?

My hardware:
Metal: System76 Wild Dog, 64-bit, 4-core Q9650 3GHz, 8 GB ram.
Graphics: PNY Nvidia GeForce GTS 450, driver v. 367.44.
OSs: Ubuntu Mate 16.04 & 14.04.
Drives: 2 960-GB Sandisk SSDs.

You aren't ending up with extra partitions; the partitions are being spaced out for alignment. There's a good, simple explanation of why this happens, posted here. The extra chunks of free space are usually only 1 MB each, so you're not losing much space.
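If you want to confirm that the gaps really are just small alignment padding before committing to the install, you can inspect the layout from a live session. A sketch, assuming the SSD is /dev/sde as in the question:

```shell
# Print the partition table in MiB, including free-space gaps,
# so alignment padding shows up as explicit "Free Space" rows.
# /dev/sde is the SSD from the question; adjust to your device.
sudo parted /dev/sde unit MiB print free

# Show only the gaps:
sudo parted /dev/sde unit MiB print free | grep 'Free Space'
```

If each "Free Space" row is around 1 MiB, nothing meaningful is being wasted.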

You can sort of work around this by changing the order in which you create the partitions, but you shouldn't really mess with it; it is fine as is. I've had little trouble with partitions butting up directly against each other, but that's only because GParted judged the layout acceptable while I was creating it.

About SSDs: if it's an especially old one, you need to consider partition alignment. Leaving 1 MB of free space at the beginning can help speed up an especially slow system if alignment is an issue, but with modern SSDs the firmware compensates for user error, so this alignment weirdness isn't usually necessary with them.
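As a sketch of how to check alignment yourself, assuming 512-byte logical sectors and that the first partition shows up as sde1 (both assumptions, adjust for your drive):

```shell
# A partition aligned to a 1 MiB boundary starts on a sector that is
# a multiple of 2048 (2048 sectors * 512 bytes = 1 MiB).
start=$(cat /sys/block/sde/sde1/start)   # starting sector of sde1
if [ $((start % 2048)) -eq 0 ]; then
    echo "sde1 is 1 MiB-aligned"
else
    echo "sde1 is NOT 1 MiB-aligned"
fi
```

parted can also report this directly with `sudo parted /dev/sde align-check optimal 1`.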

If you are dual-booting and the disk uses the GUID Partition Table (GPT) format, don't be afraid to sacrifice a handful of megabytes to duplicate the EFI partition, adjusting the boot flags so that only one of the original or the duplicate is used. The details vary depending on whether you want Secure Boot and on what the motherboard firmware does pre-boot, but I am sure you can figure it out well enough on your own.

And lastly: don't be afraid to fail. As long as you have something to fall back on and you are smart about backups, failure is always an option with your own hardware.

You aren't ending up with extra partitions

Well, I tried this yesterday on a different SSD, and I definitely did end up with what at least appeared to be, and were labeled as, extra partitions: partitions 1, 3 and 4 were "unallocated space" totaling about 40 or 50 GB of wasted space. I don't know why or how it happened like that.

Failure for this install is not a good idea for me right now: my 16.04 drive broke and gives me a blank screen, so I'm working from an older internal 14.04 spinner HDD that crashes on me every 10 minutes and reboots slowly. Hence I'm in a very painful place with this.

Also, I'm supposed to be working from home right now, and they're starting to wonder what's up with me.

So I'd like to get 18.04 up and running asap, and I'll need to be able to boot the external 18.04 SSD.

My workstation uses BIOS, not UEFI.

Over the weekend or as soon as I can, I'll replace the internal 14.04 spinner with the new 18.04 SSD. Right now I need a working system.

I'd like to limit the number of partitions on the new install. Five seems unnecessary. I'm fine with 1 or 2 MB between the 2 partitions, but I fear that's not what's going on. I will probably try it again, setting it up the way I described above, putting the large partition first, leaving 10 or so GB at the end, and putting swap at the end.

Meanwhile please don't think I don't appreciate your response. I do.

Have you tried smartmontools to check the health of your hard drives?
Install smartmontools:
sudo apt-get install smartmontools --no-install-recommends

and then

check the drive:
sudo smartctl -a /dev/sda

Check that the raw values of these attributes are as close to 0 as possible:

1 Raw_Read_Error_Rate
5 Reallocated_Sector_Ct
7 Seek_Error_Rate
10 Spin_Retry_Count
188 Command_Timeout
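To pull just those attributes out of the full report, something like this should work (`-A` is smartctl's flag for printing only the vendor attribute table; the attribute names are as smartctl prints them):

```shell
# Print only the vendor attribute table, then keep the five
# attributes worth watching.
sudo smartctl -A /dev/sda \
  | grep -E 'Raw_Read_Error_Rate|Reallocated_Sector_Ct|Seek_Error_Rate|Spin_Retry_Count|Command_Timeout'
```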

Thanks. I'll install this on 14.04, which I'm using now only as an emergency backup soon to be replaced with 18.04. The problems on 14.04 are because support has ended. However, I can use smartmontools to see what might be wrong with my 16.04 drive; I can sort of tell that everything is "behind" the 16.04 blank screen, which flashes off for a second at startup.

A post about a black screen on startup, from a Greek forum (auto-translated by Google):

Many users mistakenly think that a black screen on computer startup is the equivalent of the blue screen on Windows. This is not the case.
This is the terminal, the original interface without graphics. Linux is 100% functional in this form; consider that Linux servers run with only a terminal.

But if you had a graphical interface and now it does not load, something has happened.
If it does not give us a username and password prompt, press Ctrl+Alt+F1 and run everything from there.
When it asks for a password, nothing appears as we type, for security reasons, but the input still registers.

  1. If it happened during an upgrade, the upgrade may not have completed. This is sometimes seen when upgrading over wireless.
    It is a good idea to plug in a cable and run the following commands from a terminal.

    sudo apt-get update
    sudo apt-get upgrade
    sudo apt-get dist-upgrade
    sudo reboot

  2. If the upgrade did not remove proprietary graphics-card drivers, the new operating system cannot work with the old driver, and it must be removed. Reboot to load the open-source driver, and then reinstall the proprietary one if we wish.

    sudo apt-get purge 'nvidia*'
    sudo reboot

  3. If it happened suddenly and our disk is mechanical, it is best to run a disk diagnostic.

  4. If it throws us out of the graphical login, then we ran some graphics application with sudo that we should not have. We run the following to correct it.

    rm ~/.Xauthority
    sudo reboot

  5. If the available disk space is running out. (This sometimes happens during an upgrade, when many new packages are downloaded and the disk is small.)
    We run

    df -h

and see if any mount is 100% full.
We try the commands

sudo apt-get autoremove
sudo apt-get clean

and see if space is freed. Otherwise we will have to delete some of our own files with the rm command to free up space.
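One follow-up the list above doesn't show: after the purge in step 2 and the reboot, you can confirm which graphics driver actually loaded. The module names below are the usual ones (nouveau for the open driver, nvidia for the proprietary one):

```shell
# Show whichever of the two kernel modules is currently loaded.
lsmod | grep -E '^(nvidia|nouveau)'
```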

Those are the smartmontools results for my HDD, the one that crashed!

1 Raw_Read_Error_Rate 0x000f 116 099 006 Pre-fail Always - 105858248
3 Spin_Up_Time 0x0003 097 097 000 Pre-fail Always - 0
4 Start_Stop_Count 0x0032 099 099 020 Old_age Always - 1821
5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0
7 Seek_Error_Rate 0x000f 076 060 030 Pre-fail Always - 42154596
9 Power_On_Hours 0x0032 086 086 000 Old_age Always - 12510
10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0
12 Power_Cycle_Count 0x0032 098 098 020 Old_age Always - 2083
183 Runtime_Bad_Block 0x0032 100 100 000 Old_age Always - 0
184 End-to-End_Error 0x0032 100 100 099 Old_age Always - 0
187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0
188 Command_Timeout 0x0032 100 099 000 Old_age Always - 0 0 10
189 High_Fly_Writes 0x003a 100 100 000 Old_age Always - 0
190 Airflow_Temperature_Cel 0x0022 072 056 045 Old_age Always - 28 (Min/Max 16/28)
191 G-Sense_Error_Rate 0x0032 100 100 000 Old_age Always - 0
192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 65
193 Load_Cycle_Count 0x0032 099 099 000 Old_age Always - 2114
194 Temperature_Celsius 0x0022 028 044 000 Old_age Always - 28 (0 4 0 0 0)
197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0
240 Head_Flying_Hours 0x0000 100 253 000 Old_age Offline - 13259h+24m+03.425s
241 Total_LBAs_Written 0x0000 100 253 000 Old_age Offline - 8329305295
242 Total_LBAs_Read 0x0000 100 253 000 Old_age Offline - 566990505568
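For what it's worth, the attributes in a dump like this that most directly indicate a dying disk are Reallocated_Sector_Ct, Current_Pending_Sector and Offline_Uncorrectable, and all three are 0 here; the huge raw values for Raw_Read_Error_Rate and Seek_Error_Rate are often normal on some drives (Seagate in particular encodes counters in them). A sketch for extracting just those three raw values from a saved report:

```shell
# Save the full report, then print the attribute name ($2) and
# raw value (last column, $NF) for the failure-indicating counters.
sudo smartctl -a /dev/sda > smart.txt
awk '/Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable/ {print $2, $NF}' smart.txt
```

Nonzero and growing values in any of the three are a sign to replace the drive.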