Notes from my server upgrades to Ubuntu 22.04 LTS

Almost two years after I wrote my notes from my server upgrades to Ubuntu 20.04 LTS I am here again to write about my upgrades to the current LTS which is 22.04 (Jammy Jellyfish).

As is good practice for such upgrades, I waited until the .1 release of the LTS was out to rule out any initial release issues. This time around I staggered the upgrades over multiple weekends and left the most difficult servers for last. I also kept notes in my Obsidian vault of the various issues and quirks I encountered along the way.

So let’s get into it.


Notes from my server upgrades to Ubuntu 20.04 LTS

This past weekend I learned that Ubuntu 20.04.1 LTS was out. This meant I could start upgrading my servers from Ubuntu 18.04 LTS to Ubuntu 20.04 LTS, as the .1 point release is when do-release-upgrade makes the next LTS available for upgrade.

I started with my web server, i.e. the server this blog is hosted on, mostly because I've been wanting PHP 7.4 for my WordPress sites and Ubuntu 20.04 would give me that. The upgrade went without issue, and after a small configuration change to nginx to make sure it used the new Unix socket for PHP 7.4 FPM, we were good to go.
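That nginx change amounts to pointing fastcgi_pass at the new PHP 7.4 FPM socket. A minimal sketch of the edit, demonstrated here on a throwaway copy of a typical snippet (the snippet contents and socket paths are the stock Debian/Ubuntu ones, not my actual config):

```shell
# Demonstrate the edit on a throwaway copy of a typical PHP location block.
# On the real server the file would be something like
# /etc/nginx/sites-available/example.com (path assumed).
tmpconf=$(mktemp)
cat > "$tmpconf" <<'EOF'
location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/run/php/php7.2-fpm.sock;
}
EOF

# Point fastcgi_pass at the PHP 7.4 FPM socket instead of the 7.2 one.
sed -i 's|php7.2-fpm.sock|php7.4-fpm.sock|' "$tmpconf"
grep fastcgi_pass "$tmpconf"
```

On the real server you would follow this with `nginx -t` and a reload.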

From then on, I went ahead and started doing more of the servers, in the order of the servers that were least likely to break with the upgrade.

Breakages and Manual Interventions

Some of my servers failed to upgrade the grub-pc package during the dist-upgrade process; dpkg simply returned an error code. I fixed this by purging grub-pc, installing the grub2 package, and then configuring it according to Linode's documentation on GRUB2. I'm still not entirely sure why the grub-pc package failed to upgrade.
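For reference, the fix boiled down to something like the following (a sketch only; the GRUB configuration details, serial console settings and so on, are in Linode's GRUB2 guide):

```
sudo apt purge grub-pc
sudo apt install grub2
# then adjust /etc/default/grub per Linode's GRUB2 documentation and run:
sudo update-grub
```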

All my servers also needed the apt sources list entry for the Icinga repository manually updated to use focal. My Mastodon servers needed their NodeSource repository entries manually updated as well.
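The edit itself is a one-liner per file. A sketch, demonstrated on a throwaway copy (the real path would be something like /etc/apt/sources.list.d/icinga.list, and the exact repository line here is an assumption):

```shell
# Demonstrate the suite-name edit on a throwaway copy of the sources entry.
# The real file would be e.g. /etc/apt/sources.list.d/icinga.list (path assumed).
tmplist=$(mktemp)
echo 'deb https://packages.icinga.com/ubuntu icinga-bionic main' > "$tmplist"

# Switch the suite from bionic to focal for the 20.04 upgrade.
sed -i 's/bionic/focal/g' "$tmplist"
cat "$tmplist"
```

Followed by an `apt update` on the real server to confirm the repository resolves.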

Ubuntu 20.04 brought with it PostgreSQL 12 and that means manually upgrading the installs and databases on my PostgreSQL 10 servers. This involves dropping the newly created empty 12 ‘main’ cluster and then running pg_upgradecluster on the 10 ‘main’ cluster. After that completed, the 10 ‘main’ cluster can be dropped. I didn’t run into any issues here. I did take a manual backup of database(s) before I started the process and I would recommend everyone do that if possible.
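The cluster shuffle described above looks like this as a sketch (the 'main' cluster names come from the default Debian/Ubuntu packaging; run on the server with sudo):

```
sudo pg_dropcluster 12 main --stop   # drop the empty cluster 20.04 created
sudo pg_upgradecluster 10 main       # upgrade the 10 'main' cluster to 12
sudo pg_dropcluster 10 main          # once verified, drop the old cluster
```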

I will note here, however, that since the process copies the database into the 12 cluster, your disk usage will nearly double during the upgrade. This can be a problem if you have large PostgreSQL databases and not enough spare disk space on your servers to complete the process. Something to keep in mind if you are planning one of these.

Since PostgreSQL was upgraded to 12, the pgbackrest configuration and stanza needed to be updated. Their documentation goes through the process, though I didn't need it as the change is fairly straightforward.

Bits, Bobs and Final Thoughts

Overall, the process of upgrading my servers to Ubuntu 20.04 was fairly smooth. I didn't encounter any catastrophic failures or data loss; if I had, I could have reverted to the manual snapshots I took of the servers before I started.

I’m glad to finally have PHP 7.4 on my web server and WordPress no longer complaining about having to use PHP 7.2.

Another neat thing to note is that one of my servers was initially provisioned on 14.04 and has over the years been upgraded through LTS Ubuntu releases. So 14.04 -> 16.04 -> 18.04 -> 20.04.

That’s all from me!

Ubuntu 14.04 Server and IPv6 Temporary Addresses

So, as we all know, Ubuntu 14.04 was released today. I downloaded the server ISO to test it in VirtualBox.

Let us see what we have here:

ss@trusty-testing:~$ cat /etc/lsb-release

ss@trusty-testing:~$ ip -6 addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
 inet6 2001:470:1d:96b:70bb:7393:2071:faa2/64 scope global temporary dynamic
 valid_lft 597675sec preferred_lft 78675sec

Wait, what? Am I going blind, or is that an IPv6 temporary address [0] on a supposed server image?

Investigating further:

ss@trusty-testing:~$ sudo sysctl -a | grep tempaddr
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
net.ipv6.conf.eth0.use_tempaddr = 2
net.ipv6.conf.lo.use_tempaddr = 2

What the hell? Not only did they leave temporary addresses turned on, they set the sysctl value to 2, which means the system will prefer temporary addresses over standard ones when making outbound connections. [1]
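Turning this off on a server is a small sysctl change; something like the following, persisted across reboots (the file name is my own choice):

```
# /etc/sysctl.d/10-no-ipv6-privacy.conf
net.ipv6.conf.all.use_tempaddr = 0
net.ipv6.conf.default.use_tempaddr = 0
net.ipv6.conf.eth0.use_tempaddr = 0
```

Applied with `sudo sysctl -p /etc/sysctl.d/10-no-ipv6-privacy.conf` or a reboot.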

I asked around and apparently this is the case on Ubuntu 12.04 server as well.

ss@ubuntu-testing:~$ cat /etc/lsb-release

ss@ubuntu-testing:~$ sudo sysctl -a | grep tempaddr
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
net.ipv6.conf.eth0.use_tempaddr = 2
net.ipv6.conf.lo.use_tempaddr = 2

So that is two LTS server releases with IPv6 temporary addresses turned on and set to 2.

Why are temporary addresses bad on a server?

Unpredictability: anything that depends on source address validation (firewall rules, DNS entries, access control lists) breaks when the source address keeps changing. Even SLAAC addresses are more predictable, because they can be calculated from the MAC address of the NIC.

Ideally, you should be configuring your server's addresses statically. Leaving temporary addresses turned on in a server image is just a bad default.
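On 14.04 that means ifupdown; a minimal static IPv6 stanza would look something like this (the addresses below are documentation-prefix placeholders, not real ones):

```
# /etc/network/interfaces (addresses are placeholders)
iface eth0 inet6 static
    address 2001:db8:1::10
    netmask 64
    gateway 2001:db8:1::1
```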

[0] –
[1] –