Long boot time - same as a solved case

@Norbert_X, you solved the issue "A job taking 1:30 minutes from boot time" (from Dec 2020).

Since that thread is marked solved, I wanted to ask you to compare my /etc/fstab and see if there is a mount that doesn't exist and should be commented out:

mickee@mickeymouse:~$ cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda2 during installation
UUID=fd109e46-893c-46fc-b6f5-4b8a820e539c /               ext4    errors=remount-ro 0       1
# /boot/efi was on /dev/sda1 during installation
UUID=8CD9-AF8D  /boot/efi       vfat    umask=0077      0       1
/swapfile                                 none            swap    sw 

I do see a reference to an error, but I am inexperienced with modifying the fstab and could use a hand to see if there is a bad mount point. I could take a picture of my boot screen where it is taking a long time to load, if needed. If you don't think it's related, I will close this thread. Thanks so much in advance!

Hi, @mickee (Brian Bogdan).

Let me try to help you here:

1 - Run the following command to see if the UUID of the "EFI" partition matches the UUID that you have in your "/etc/fstab" for the "/boot/efi" mount point:

lsblk --fs | grep -i 'efi'

2 - Run the following command to see if the UUID for the "/" mount point matches the one you have in your "/etc/fstab" for that same mount point (in all likelihood it will match, or else the system almost certainly would NOT boot at all!):

blkid $(df -h | grep --regexp '/$' | awk '{ print $1 }')

3 - Run the following command to see what swap devices are being used:

swapon --show
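To tie steps 1 and 2 together, a rough sketch like the one below lists each UUID-based entry in an fstab next to its mount point, so both can be eyeballed against the `lsblk --fs` / `blkid` output. The here-doc simply reproduces the fstab posted above; on a live system you would feed the awk filter /etc/fstab instead:

```shell
# List "mount-point UUID" pairs for every non-comment, UUID-based fstab entry.
# The here-doc is a copy of the fstab from the post, used here so the sketch
# is self-contained; swap it for /etc/fstab on a real machine.
mapping=$(awk '!/^#/ && $1 ~ /^UUID=/ { sub(/^UUID=/, "", $1); print $2, $1 }' <<'EOF'
UUID=fd109e46-893c-46fc-b6f5-4b8a820e539c /               ext4    errors=remount-ro 0 1
UUID=8CD9-AF8D  /boot/efi       vfat    umask=0077      0       1
/swapfile                                 none            swap    sw
EOF
)
echo "$mapping"
```

The swap-file line has no UUID, so it is skipped; that entry is checked separately with `swapon --show` as in step 3.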

For other people reading this: when @mickee mentions the issue "A job taking 1:30 minutes from boot time" (from Dec 2020), he is referring to the following Topic here in the "Ubuntu MATE Community":


The UUIDs are the same in both examples. The results for swap are:

NAME      TYPE SIZE USED PRIO
/swapfile file   2G   0B   -2

Thanks for the feedback, @mickee. In that case, all three entries in your "/etc/fstab" file (the entry for the EFI partition mounted on "/boot/efi", the entry for the "/" mount point, and the entry for the swap file) seem to be correct. So, I think that your "slow boot time" is not related to any mistakes in your "fstab".

By the way: what do you call a "long boot time"? In my case, on a 5-year-old HP laptop (a dual-boot system with Windows 10 and Ubuntu MATE 22.04.1) that has a conventional HDD (Hard Disk Drive) of 5400 RPM (Revolutions Per Minute) - so, not an SSD - the "boot time" is about 1 minute and 8 seconds:

$ systemd-analyze time
Startup finished in 6.781s (kernel) + 1min 1.308s (userspace) = 1min 8.089s
graphical.target reached after 1min 1.297s in userspace

In my case, it seems that about 30 seconds of the time is spent in things related to VirtualBox services:

$ systemd-analyze critical-chain
The time when unit became active or started is printed after the "@" character.
The time the unit took to start is printed after the "+" character.

graphical.target @1min 1.297s
└─multi-user.target @1min 1.297s
  └─vboxballoonctrl-service.service @1min 1.262s +33ms
    └─vboxdrv.service @30.978s +30.279s
      └─basic.target @29.776s
        └─sockets.target @29.775s
          └─snapd.socket @29.767s +5ms
            └─sysinit.target @29.558s
              └─snapd.apparmor.service @27.608s +1.948s
                └─apparmor.service @25.790s +1.816s
                  └─local-fs.target @25.786s
                    └─boot-efi.mount @25.668s +116ms
                      └─systemd-fsck@dev-disk-by\x2duuid-AECE\x2d6B1C.service @22.092s +3.552s
                        └─dev-disk-by\x2duuid-AECE\x2d6B1C.device @22.088s

Having said that, there are usually some "caveats" in these kinds of analysis using "systemd-analyze" commands:

https://www.freedesktop.org/software/systemd/man/systemd-analyze.html

From that "systemd-analyze" page:

" (...)
systemd-analyze critical-chain [UNIT...]

This command prints a tree of the time-critical chain of units (for each of the specified *UNIT*s or for the default target otherwise). The time after the unit is active or started is printed after the "@" character. The time the unit takes to start is printed after the "+" character. Note that the output might be misleading as the initialization of services might depend on socket activation and because of the parallel execution of units. Also, similarly to the blame command, this only takes into account the time units spent in "activating" state, and hence does not cover units that never went through an "activating" state (such as device units that transition directly from "inactive" to "active"). Moreover it does not show information on jobs (and in particular not jobs that timed out). (...)"


1 min 11 seconds! I guess that's not bad for an old-fashioned hard disk. I should be satisfied with it, for sure.

$ systemd-analyze critical-chain
The time when unit became active or started is printed after the "@" character.
The time the unit took to start is printed after the "+" character.

graphical.target @1min 11.212s
└─multi-user.target @1min 11.212s
  └─blueman-mechanism.service @47.888s +23.323s
    └─basic.target @47.847s
      └─sockets.target @47.847s
        └─snapd.socket @47.845s +1ms
          └─sysinit.target @47.691s
            └─systemd-timesyncd.service @47.528s +162ms
              └─systemd-tmpfiles-setup.service @46.072s +1.450s
                └─systemd-journal-flush.service @12.179s +33.891s
                  └─systemd-remount-fs.service @11.769s +407ms
                    └─systemd-journald.socket @11.472s
                      └─-.mount @11.468s
                        └─-.slice @11.468s

Thanks for your help
