Quick summary since last post

The site you see, up until last week's post, is restored from a backup found on an external hard drive, taken in March 2021.  There may have been newer posts, and there may have been a newer backup that wasn't saved correctly and is now lost to the sands of time.  I was impressed this backup could be restored at all; I think I had tried to restore it a few months back.

The whole site was taken down at the start of an experiment about 10 months ago, shortly after I got my new consumer fibre-to-the-premises broadband, courtesy of brsk (happy customer, not an endorsement: 500Mbps up and down, for much less than I was forking out to Virgin Media).

With brsk, I got something called Carrier Grade NAT, or CGNAT, on IPv4. NAT (Network Address Translation) lets one public IP address serve many devices in one building, and it was one of the early fixes for the shortage of IPv4 addresses. It meant that devices could, by default, initiate a connection to the outside world, but the outside world could not initiate a connection to a device.  Port forwarding resolved this by forwarding specific ports to specific devices at the router, allowing routing from the public internet into the private network.  A public IP address, in this context, is the one issued by your ISP (your external address on the actual internet), while a private one is of the 192.168 variety, on the inside of your router.

And so, because websites are served on ports 80 and 443 by default, if you have a computer in your network configured to serve a website, you can forward requests arriving from the internet on ports 80 and 443 to that one computer.  Forward more ports to the same device, or to other devices, and more services inside your network become available outside.  Do that, and your website or NAS drive can be reached from the outside world.
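Under the bonnet, port forwarding is just a destination-NAT rule on the router.  If the router happened to be a Linux box, the rules would look something like the sketch below; the eth0 interface name and the 192.168.0.10 address are made up for the example.

# Sketch of port forwarding on a Linux box acting as the router.
# eth0 is assumed to be the WAN interface; 192.168.0.10 is a made-up web server.
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.168.0.10:80
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 192.168.0.10:443
sudo iptables -A FORWARD -p tcp -d 192.168.0.10 -m multiport --dports 80,443 -j ACCEPT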

Due to the IPv4 shortage, ISPs no longer let you host your own services on IPv4 with port forwarding: there are now two layers of Network Address Translation, you and maybe a few dozen (or a few thousand) others share one public IP address, and you can't have (or your ISP won't allow) port forwarding at the upper layer.

So what's the solution if I want my NAS drive to be reachable?  IPv6 works, but only between two ISPs that both support IPv6.  Friends couldn't access our shared folders on my NAS, and I couldn't from my mobile network.

I didn't know much about IPv6 before this.  I also didn't know that choosing to install my own wireless router on my old ISP (Virgin Media) had the side-effect of disabling CGNAT. That got me a semi-static, non-shared public IP address, which I registered with a DDNS service, allowing me to host my NAS drive.

So what was the workaround for being behind CGNAT at home? I tore down my website and started tinkering with my virtual server in the cloud, turning it into an IPv4-to-IPv6 forwarder pointing at my NAS drive, but it didn't work well.  And by then I had ripped my website up, and when I got round to rebuilding it three months later, I had mislaid a backup.
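For the curious, the forwarder amounted to something like the lines below on the VPS, using socat: listen on the server's public IPv4 ports and relay to the NAS over IPv6.  This is a sketch rather than my exact setup, and the IPv6 address is a documentation placeholder, not my real one.

# Relay the VPS's public IPv4 ports to the NAS's IPv6 address.
# 2001:db8::10 is a placeholder from the documentation range.
sudo socat TCP4-LISTEN:443,fork,reuseaddr TCP6:[2001:db8::10]:443 &
sudo socat TCP4-LISTEN:80,fork,reuseaddr TCP6:[2001:db8::10]:80 &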

The workaround, in the end, was asking brsk nicely if they'd take me off CGNAT.  The support chat I got back by email was with a genuinely techie-minded person who opened my eyes to CGNAT in the first place (and how rubbish it is).  He explained that the only way off CGNAT on their system was a static public IP address; it wasn't yet a service for home users, though they planned to offer it soon, so in the meantime he gave me a free static IPv4 address.  Maybe I shouldn't mention this, as I've had it for eight or nine months now, free of charge.

So I was without a website for a while, and now it is back, restored from one backup or another, with this 'ere new content.  I've also changed job; I now provide in-house IT support for a factory-and-office building.

I've moved my server from OVH to Ionos, who happen to be my name server (DNS registrar).  When I restored my website, I was aiming for maximum likelihood of recovery, so I started with a blank Ubuntu of the version I was using at the time of the backup, 20.04, and likewise with the matching version of WordPress. I had to fiddle and faff to get the backup-and-recovery tool working properly.  I then updated WordPress to the latest version and, happy with my work, took a snapshot backup using the function on OVH's interface (one that charges a few quid a month).

I then thought, why not prod and poke a bit further and upgrade from Ubuntu 20.04 to 22.04?  When I attempted that, WordPress fell apart (WordPress doesn't guarantee compatibility with PHP 8, and the jump from PHP 7.2 to 8 as part of the Ubuntu upgrade was the culprit).  I went to restore from the OVH snapshot (as I had done a few times before) and the snapshot would not restore.  I checked in with their customer support and was advised that a snapshot is only a first tier of backup: you can only have one snapshot at a time, once restored it can't be re-done, and if something goes wrong it can't be undone. They said, "We do not suggest that it is a backup." I wrote back saying that you do suggest it is: by offering it as a paid service, you suggest it is supposed to be used and that it works as intended.

And so I decided to restart the whole process on Ionos: a blank Ubuntu 20.04, that specific version of WordPress from March 2021, that difficult restore from backup (made easier by having done it once already), then updates, and I'm back up and running, sticking to Ubuntu 20.04 until the move in Ubuntu from PHP 7.2 to 8 is supported by WordPress.  I don't think my virtual server has IPv6, but it has a static IPv4, as expected.

And here I am.

I’m back! Again!

More soon.

By the way, it is my personal opinion, but backed up with some good experience, that OVH virtual private server blows. Specifically, blows raspberries up its own @$$.

Changing the desktop environment (fudging the final step, breaking the system and fixing it)

I am a fan of KDE most of the time.  It is elegant and polished.  My first introduction to Linux left me decidedly a Gnome 2 kinda guy, and I waltzed over to KDE since I didn't like either Unity or Gnome 3.  But sometimes I experience bugs in KDE, and for that reason I've been running my laptop on Cinnamon.  Yesterday, I decided to convert my desktop to Cinnamon too, without reinstalling the OS.  But let me first waffle on about the bugs…

So, the two bugs in KDE that have been particularly oining.  Firstly, there's something weird in X11 or the Nvidia drivers that means I can't always double-click a desktop icon.  I suspected it was Nvidia because when I had two monitors, one on Intel and the other on Nvidia, the bug only appeared on the screen connected to Nvidia.  After getting a DVI-to-VGA adapter to connect both screens to Nvidia (allowing me to disable the Intel graphics entirely), I wondered if the bug would show up on both screens; instead it went away.  Until one day the bug came back!  That was around the same time I installed the gamemode package to tune the system for gaming.  I think it is brilliant, by the way, that a few of the big-name games are coming out on Linux, like the last three games in both the Hitman and Tomb Raider series.

The second bug that oins me about KDE is how the file manager (Dolphin) handles file transfers over the WebDAV protocol.  The file transfer progress bar is completely off.  For example, suppose you have four files of equal size and you copy them to your WebDAV server.  It'll race up to 25% in a matter of seconds (faster than the network connection to my WebDAV server could manage), then pause for ages, then race up to 50%, 75% and 100%, pausing at each stage.  I think it is showing me how fast it loads each file into RAM, not how fast it moves along the network. Very oining.  Cinnamon shows you actual network transfer rates, moving smoothly from nought to one hundred percent.

So, I decided: bye bye KDE, hello Cinnamon.  Thing is, if I'm removing KDE, then I'm removing its display manager.  If you don't know, a display manager handles your log-in, among other things.  KDE's default display manager is SDDM, while Cinnamon is quite happy with either LightDM or GDM3, among others.  So the plan was: install Cinnamon with LightDM, then remove KDE and SDDM.  The first couple of commands were something like:

sudo apt install cinnamon-desktop-environment lightdm 
sudo dpkg-reconfigure lightdm

Rebooted OK, logged in with LightDM and Cinnamon, logged out, switched to tty2 so I was working in a pure command-line environment, typed sudo apt remove kded5 lightdm and hit enter.  It then suggested I had a lot of packages I didn't need any more, so I ran sudo apt autoremove.

Hang on.

Did I just remove LightDM?  That should have been SDDM.  And I don’t think SDDM works without KDE being present.  Now what?  Without a desktop, I have many tools missing, so I can’t get a network connection easily, can I?  How can I apt?

Oh.

expletive.

So I tried using netplan, the command-line way of configuring the network, but that was unsuccessful, since netplan is really geared towards Ubuntu Server rather than Ubuntu Desktop.  I may have eventually got it online, but I was struggling.  So, how do I get a package installed on a PC without a network connection?  The answer is to install something called apt-offline.  That was a bit of an arse.

Using my laptop and Ubuntu's online index of their full software repository, I downloaded the apt-offline package onto a pendrive, plugged that into my offline desktop, mounted the pendrive by hand (no GUI to do it automatically for me), typed sudo apt install /media/pendrive/apt-offline.version-blahblahblah and was told two dependencies were missing. So I unmounted the pendrive, got back on my laptop, fetched the dependencies and installed them.  I had to go back again, installing five packages in total, before I got apt-offline working.  I suppose I could have done the same to reinstall LightDM, but I suspect waaaaaaaaaaaay more dependencies to fix.  So, apt-offline it is.  How do I use it?

Running man apt-offline, I read the manual but stumbled over it a bit.  In summary: with sudo apt-offline set, you save a list of your desired package or packages, plus the current state of your offline system, into a single signature file. Transfer that file to an online computer, then run sudo apt-offline get to download all the files necessary to install your new packages, their dependencies and any updates.  Back on your offline computer, sudo apt-offline install pulls all those files in but, crucially, only stores them in apt's cache; it does not actually install anything.  To do that, run your normal command, in my case sudo apt install lightdm.
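In command form, the round trip went roughly like this; the signature and bundle file names are just my own choices, so treat this as a sketch.

# On the offline desktop: record the wanted package and the system state.
sudo apt-offline set lightdm.sig --install-packages lightdm
# On the online laptop: download the packages and dependencies into one bundle.
sudo apt-offline get lightdm.sig --bundle lightdm-bundle.zip
# Back on the offline desktop: load the bundle into apt's cache...
sudo apt-offline install lightdm-bundle.zip
# ...then install as normal, now satisfied from the local cache.
sudo apt install lightdm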

And my system was repaired, with Cinnamon instead of KDE.  I learnt something new, but I would prefer to have my Sunday afternoon back!

Recent geekage

I've had a mess around with stuff recently.  Here's a summary:

On a server formerly running Microsoft SBS 2015 (or summat), I installed a command-line instance of Ubuntu 20.04 Server on software RAID, installed three concurrent versions of Ubuntu desktop as virtual machines on top of it, and configured remote access to them from the internet through a web interface called Apache Guacamole.  It is off at the moment; it has no SSL on it, so I'm not inclined to leave it turned on.  It was the combination of:

  1. Reading about Apache Guacamole on a random page, so knowing it was possible
  2. Wanting to have a way of showing potential Linux converts what it is like to use, without them coming to my place of work during Covid.
  3. Wanting to learn how to install virtual PCs within Ubuntu Server from the command line.

It was probably two months ago, so I’m not entirely sure, but I probably used these instructions for the Guacamole side of things, and this I’m sure is the very detailed and useful guide to installing, configuring and deleting virtual machines with remote desktop displays from the command-line.
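From memory, spinning up one of the virtual desktops went something like the sketch below with virt-install; the VM name, sizes and ISO path are illustrative rather than my exact values.  Guacamole then just points a VNC connection at that display.

# Install the KVM/libvirt tooling, then create one virtual desktop.
# Name, memory, disk size and ISO path are illustrative.
sudo apt install libvirt-daemon-system virtinst
sudo virt-install \
  --name ubuntu-demo-1 \
  --memory 4096 --vcpus 2 \
  --disk size=25 \
  --cdrom /var/lib/libvirt/images/ubuntu-20.04-desktop-amd64.iso \
  --os-variant ubuntu20.04 \
  --graphics vnc,listen=127.0.0.1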

I migrated my Nextcloud instance off my server and onto a new Raspberry Pi 4B with 4GB of RAM.  The reason was that I had Nextcloud installed as a Snap.  A Snap is a fully packaged app with all its dependencies built in; since Nextcloud is a web app, that includes PHP, the Apache web server, MySQL (or equivalent) and even the Let's Encrypt certbot, for self-renewing, short-term SSL certificates.  Same as most websites.  Now, if Apache and a Snap containing Apache are running on the same server, they can't be on the same port number.  Changing the Snap's port to 44300 (instead of the default 443) means you can't use its built-in Let's Encrypt certbot, only the self-signed option.  So any visitors via the app or web interface get warning messages or compatibility issues, what with non-default ports and self-signed certificate warnings like YOU HAVE JUST HAD ALL YOUR PASSWORDS HACKED BY VISITING A WEBSITE WITH A SELF-SIGNED CERTIFICATE, YOU IDIOT!!! QUICK, PHONE YOUR BANK BEFORE YOUR ACCOUNT IS EMPTIED!!! Or words almost as ominous.  I mean, it's only a self-signed certificate.  I do declare that I am myself, and need no other authority to say that I am who I am, or, as may be, that I am not who I am, and, or, that I am who I am not, and, or, or maybe not, that I am not who I am not.

Got it?

So, Nextcloud is on a Pi with a 2TB HD attached.  Not one of my five or six users would know, without me telling them, that it lives under the stairs, on top of the Guacamole server.
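By the way, if memory serves, the port shuffle and the certificate downgrade on the Snap boiled down to a couple of commands; treat these as a sketch from memory rather than gospel.

# Move the Nextcloud snap off 80/443 so the system Apache can keep them.
sudo snap set nextcloud ports.http=8080 ports.https=44300
# On the default ports, a proper certificate would have been:
#   sudo nextcloud.enable-https lets-encrypt
# On a non-standard port, self-signed is the only option left.
sudo nextcloud.enable-https self-signed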

My third geek was to install Ubuntu Server 20.10 on a laptop and then install only the packages I wanted, to end up with a custom build.  I installed cinnamon then lightdm, expecting to have a desktop on the next boot, but one didn't link to the other until I did sudo apt reinstall cinnamon.  Once booted, I did sudo apt install firefox thunderbird libreoffice terminator, then tweaked around with a couple of themes. One other tweak was to suppress its tendency to pause for a minute at boot while checking whether the LAN port was going live, which is useless since the machine has wireless capability once the Cinnamon desktop is installed. It is really tidy, and so I'm inclined to do a similar set-up on my Guacamole server.
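For anyone chasing the same boot pause: on Ubuntu Server the usual culprit is the systemd-networkd-wait-online service, and something along these lines sorted it for me (your mileage may vary).

# Stop boot from waiting on a LAN port that is never plugged in.
sudo systemctl disable systemd-networkd-wait-online.service
sudo systemctl mask systemd-networkd-wait-online.service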

Fourth geek – I have wrapped my head around using PGP public and private keys to encrypt emails and attachments. You can now send me an encrypted email that only I can read, by using my public key, found at https://digitaltinker.co.uk/digitaltinker.asc
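If you fancy having a go, the GnuPG incantations are roughly as follows; the recipient is whatever user ID shows up when you list the imported key, so the one below is a placeholder.

# Fetch and import my public key, then encrypt a file to it.
wget https://digitaltinker.co.uk/digitaltinker.asc
gpg --import digitaltinker.asc
gpg --list-keys                          # note the user ID on the imported key
gpg --encrypt --armor --recipient "user-id-from-above" secret-attachment.pdf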

And you’re up to date.

Slightest tweak

The site has received its first change in a long time. Only the slightest tweak, and only visible to those in the know. A clue: A man is not dead while his name is still spoken.

Aaargh, drivers!

So, the OTRS server is going to be a hardware box in the corner of the room at work, rather than a virtual server online (mine or a dedicated one).  The only problem is getting the drivers to work; namely, for the Realtek 8111C LAN chip.

The driver, I gather, is a closed-source kernel module and needs to be added to the kernel to work.  So there's a package for that: sudo apt install r8168-dkms is supposed to add the driver for many Realtek chips to the kernel.

I built the system on a little laptop, perfectly adequate for the job, and it had a LAN chip that was either Intel or Broadcom; it doesn't matter which, because it worked (I checked: it's Broadcom).  I unplugged the HD and stuck it into the faster system I wanted to use going forward (after installing r8168-dkms) and it was having none of it.  There's a manual way of adding the driver to the kernel, but since the r8168-dkms package said everything installed fine, thank you, I suspect a manual install still won't work.  And it is less desirable to manually install the kernel module when an apt package can do it for you, since the package is written to re-add the module whenever a new kernel is installed.
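For what it's worth, the sanity checks on the stubborn box go something like this: see which driver the chip is actually bound to, and whether the DKMS module built and loaded at all.

# Which kernel driver is the Ethernet chip actually using?
lspci -k | grep -A 3 -i ethernet
# Did the DKMS module build, and is it loaded?
dkms status
lsmod | grep r816
# The r8168-dkms package should also drop a blacklist for the in-tree r8169 driver in here:
ls /etc/modprobe.d/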

Getting hold of a consumer-class motherboard without a Realtek chip is unusual, and all the spare second-user socket 1155 and 1156 boards we've acquired at work have Realtek.

An Intel PCI-Express LAN board may be necessary.  VIA looked promising until I realised that virtually nobody puts a VIA chip on a PCI-Express board.  I've heard that some Broadcom chips aren't a bad choice for Linux either, but not all of them.

Intel seem a bit pricey at around double the Realtek ones, but that may be a price you have to pay for quality with proper Linux support.  Still, £25 isn’t that much.  Broadcom are just as pricey.

By the way, whatever happened to VIA?  They were as ubiquitous 10-12 years ago as Realtek.

The Details part 3

So there seems to be this thing about Snap and snappy apps and suchlike.  I've only dipped my toes into it, but it seems that Snappy is the framework on which snaps sit, interacting with each other like little virtual instances on your main virtual instance (or dedicated piece of hardware), providing modularised services to each other to create a website. I think WordPress can be a snap, and so too can Nextcloud.  I tried installing the Nextcloud site as a snap as per this Digital Ocean guide but came up with nothing good; it came apart during the Let's Encrypt SSL stage.  So I installed it just as I used to, following these instructions from TechRepublic, plus a bit of this to allow for the utf8mb4 thingy.

With everything in place, OTRS was a doddle, though ironically (I think; I'd have to check with Alanis Morissette), OTRS insisted on having its SQL database configured in utf8.

So, it would seem that my master database is set up with the best version of utf8 character encoding (utf8mb4), which allows 4 bytes per character and is compatible with all the world's alphabets (maybe Tolkien and Trek languages too!) and emoticons. 🏆  Which is great for WordPress.  Regular utf8 encoding in MySQL is 3 bytes per character and lacks some alphabets and the emoticons; a bit of a botch job that MySQL fixed when utf8mb4 came along, but never removed.  This is all from various forums I've read in the last few days.

Within your MySQL installation, each database can have its own character encoding, so regardless of the master database being utf8mb4, it was no issue creating a database just for OTRS in utf8.  OTRS offers to create the database for you, or to use an existing one, but it can only make the new one when the master DB is utf8. That being so, I just created its database with utf8 as required, within phpMyAdmin.  So it is nice that WordPress is on utf8mb4, but for OTRS the whole rip-it-up-and-start-again needn't have happened.  All good practice, though.  The standard OTRS install instructions on their website were easy enough, with some slight tweakage in how I set up my sites-available/enabled and conf-available/enabled, creating the links with the a2ensite and a2enconf commands.  It was the second or third time I'd played through the OTRS install anyway.
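Done at the command line instead of phpMyAdmin, creating that per-application database would look roughly like this; the database name, user and password are illustrative.

# Create a utf8 database and user just for OTRS (names and password illustrative).
sudo mysql -e "CREATE DATABASE otrs CHARACTER SET utf8 COLLATE utf8_general_ci;"
sudo mysql -e "CREATE USER 'otrs'@'localhost' IDENTIFIED BY 'changeme';"
sudo mysql -e "GRANT ALL PRIVILEGES ON otrs.* TO 'otrs'@'localhost'; FLUSH PRIVILEGES;"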

(In Apache, the configuration files are named *.conf and are stored in /etc/apache2 under mods-available, conf-available and sites-available.  They're activated by linking those files into the matching folders, mods-enabled, conf-enabled and sites-enabled.  In the instructions for the WordPress and Nextcloud installs, the right way to make that link is with the Apache commands a2ensite site.conf, a2enconf config.conf and a2enmod modulename, and similarly a2dissite, a2disconf and a2dismod to break the link, taking a site offline or disabling a configuration.  Here endeth the parenthesis.)
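In practice the dance is just enabling what you need and reloading Apache; the file and module names here are illustrative.

# Enable a site, a config snippet and a module, then reload Apache to pick them up.
sudo a2ensite mysite.conf
sudo a2enconf myconfig.conf
sudo a2enmod rewrite
sudo systemctl reload apache2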

More recently, I’ve been trying to wrap my head around the OTRS system itself and how it might best suit my place of work.

The Details Part 2

Continued…

Then I recreated my LAMP stack to support all my usual pages: WordPress, Nextcloud, phpMyAdmin and, ultimately, OTRS.  I built the bog-standard info.php file and named it index.php so it was served up by default by Apache (and so detected by Let's Encrypt, which got me my certificate for all my sites at once).
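Neither step was anything exotic; roughly speaking (the second hostname below is illustrative):

# A bare phpinfo page as the default index, in the stock Apache docroot.
echo '<?php phpinfo(); ?>' | sudo tee /var/www/html/index.php
# One certbot run to cover the hostnames in one go.
sudo certbot --apache -d digitaltinker.co.uk -d nextcloud.digitaltinker.co.uk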

As soon as I got onto the MySQL database, I corrected it to fully support the utf8mb4 character set, to incorporate multi-lingual posts and comments, and emojis 🤞.  I found these instructions for MySQL 5.7 on Ubuntu 16.04, which worked fine for the same MySQL on 18.04.  This is also important for OTRS when I get that installed later this afternoon or tomorrow.  OTRS won't work without utf8mb4 support.

As an aside, figuring out where to apply MySQL settings was a bit arsey, since various instructions I've read reference config files in /etc/mysql, but the master control file is /etc/mysql/mysql.conf.d/mysqld.cnf, and I suppose this may be a feature of Ubuntu 18.04.  Still, I found it in the end.  The file suggested in some posts exists but has barely any config in it, and if config applied there contradicts the config in /etc/mysql/mysql.conf.d/mysqld.cnf, then MySQL just doesn't run. Best to apply everything in the main file.
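For the record, the settings that ended up in the main file were along these lines; my exact collation choice may have differed, so treat this as a sketch.

# Append the utf8mb4 defaults to the main MySQL config, then restart.
sudo tee -a /etc/mysql/mysql.conf.d/mysqld.cnf > /dev/null <<'EOF'

[mysqld]
character-set-server = utf8mb4
collation-server = utf8mb4_unicode_ci
EOF
sudo systemctl restart mysql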

I created phpMyAdmin first, configuring it the way I always have, with two layers of password authentication: one for access to the login page, and one for the SQL user login that phpMyAdmin uses by default.  I then installed WordPress and imported my backup.  What worked nicely this time, for some unknown reason, was that after importing the site from the backup I'd taken first thing, I could change the permalink settings.  When I did my export-import last week, from AWS to this server, that didn't work so well, and my blogs had to be posted with the page ID in the URL, rather than the date and page title as you see here, /2018/09/11/the-details-part-2/ for this page.  It's nice to have that back.
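For reference, that first layer of password protection is nothing fancier than Apache basic authentication in front of phpMyAdmin; a rough sketch follows (the username is illustrative, and it assumes overrides are permitted for the phpMyAdmin directory in its Apache conf).

# First layer: HTTP basic auth in front of the phpMyAdmin pages.
sudo htpasswd -c /etc/phpmyadmin/.htpasswd adminuser
sudo tee /usr/share/phpmyadmin/.htaccess > /dev/null <<'EOF'
AuthType Basic
AuthName "Restricted"
AuthUserFile /etc/phpmyadmin/.htpasswd
Require valid-user
EOF
sudo systemctl reload apache2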

Time now to reinstall nextcloud, paying attention to utf8mb4 nuances.  I’ll write that up in details part 3.

The Details

Today, I ripped it up and started again, doing so in a particular order to make sure that I got almost everything right, first time, every time, nearly.  So I backed up WordPress using All-in-One WP Migration and pressed the button on the OVH server control panel to reset back to a vanilla Ubuntu Server 18.04.

After powering back up, it comes with a root account, whose password you're emailed, and an ubuntu user whose password I don't know, which is weird.

I deleted the ubuntu user and recreated it with a nice 20-character password, and logged in as such.  But I'm used to being logged in automatically through SSH, as user ubuntu, since that's how Amazon Web Services comes preconfigured.  This involves a public and private key pair, with the public key being submitted automatically on connection.  I found these steps on good ol' Digital Ocean, which I had to amend since they're written assuming your SSH app is the Linux command-line ssh and not PuTTY for Windows.  That being the case, I swapped step 1 for the PuTTYgen method, copied the public key it created, then pasted it into the file ~/.ssh/authorized_keys (being logged in as ubuntu, that's /home/ubuntu/.ssh/authorized_keys).  I saved PuTTYgen's private key to My Documents in my Windows home directory and connected with it in PuTTY.  Bang, I'm in!

I continued with the steps recommended on Digital Ocean, so password authentication is disabled and only SSH key pairs work.  I have a key for root, and a key for ubuntu.   The purpose of the exercise was to use an account that requires sudo for any system-altering commands (which most are when installing the various platforms).
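The end state of that exercise, in command form, is roughly: drop the public key into authorized_keys for the user, lock the permissions down, then turn password logins off and restart SSH.  The key material below is obviously a placeholder.

# As the ubuntu user: add the public key from PuTTYgen and lock down permissions.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo 'ssh-rsa AAAA...placeholder-public-key... me@desktop' >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# Then disable password logins entirely and restart the SSH service.
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl restart ssh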

More later, I’m going out to buy some green hair dye.

It’s been a while…

Geeking has been minimal.  My efforts and concentration have been directed at my place of work, where we've hired a new technician to replace one who left earlier, gone through a staff buy-out, and relocated.

But of late, I've found my Amazon charges creeping up, especially since some of my first-year discounts have expired.  So I've bought a new Virtual Private Server for 12 months: 40GB SSD, 1 core, 4GB RAM (yup, 4GB) to host my site, my Nextcloud and my experiments.  And it's on Ubuntu 18.04, none of this 16.04 of yesteryear (or the year before yesteryear).

I'm pleased with my 4GB of RAM.  On Amazon, I launched another instance with 2GB for a few days for an experiment with a web app, and it struggled to run the app under certain circumstances: it ground to a halt and free RAM evaporated.  The cost of running that extra instance is quite high compared to my new VPS.

Cost £60 + VAT.

So far, this blog is the only thing I’ve moved.  Though I did a dry run by moving it to xfer.digitaltinker.co.uk (now no longer a site).