Quick summary since last post

The site you see, up until last week's post, is restored from a backup found on an external hard drive, taken in March 2021.  There may have been newer posts, and there may have been a newer backup that wasn't saved correctly and is lost to the sands of time.  I was impressed this backup could be restored at all; I think I had tried to restore it a few months earlier.

The whole site was taken down at the start of an experiment about 10 months ago, shortly after I got my new consumer fibre-to-the-premises broadband, courtesy of brsk (happy customer, not a paid endorsement; 500Mbps up and down for much less than I was forking out to Virgin Media).

With brsk, I got something called Carrier-Grade NAT (CGNAT) on IPv4. NAT (Network Address Translation) lets one public IP address serve many devices in one building, and was one of the early fixes for the shortage of IPv4 addresses. It means that devices can, by default, initiate a connection to the outside world, but the outside world cannot initiate a connection to a device.  Port forwarding resolves this by forwarding specific ports to specific devices at the router, allowing traffic to be routed from the public internet into the private network.  (A public IP address is the one issued by your ISP, your external address on the actual internet so to speak; a private one is of the 192.168 variety, on the inside of a router.)

And so, because websites are served by default on ports 80 and 443, if a computer on your network is configured to serve a website, you can forward requests from the internet on ports 80 and 443 to that one computer.  Forward more ports to the same device, or to other devices, and more services inside your network become available outside.  Do that, and your website or NAS drive can be accessed from the outside world.
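A router's port-forwarding page is doing something like the following under the hood. This is a sketch for a Linux-based router; the interface name `eth0` and the internal address `192.168.0.10` are hypothetical placeholders, not my setup.

```shell
# Rewrite the destination of incoming web traffic so it reaches the
# internal machine serving the website (DNAT = destination NAT):
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80  -j DNAT --to-destination 192.168.0.10:80
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 192.168.0.10:443

# Let the rewritten packets pass through the router's forwarding chain:
iptables -A FORWARD -p tcp -d 192.168.0.10 -m multiport --dports 80,443 -j ACCEPT
```

The same idea applies to any other service: forward the NAS's ports to the NAS's private address and it becomes reachable from outside.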

Due to the IPv4 shortage, ISPs are no longer letting you host your own services on IPv4 with port forwarding: with CGNAT there are two layers of Network Address Translation, you and maybe a few dozen (or a few thousand) others share one public IP address, and you can't have (or your ISP won't allow) port forwarding at the upper layer.

So what's the solution, if I want my NAS drive to be available?  IPv6 works, but only between two ISPs that both support IPv6.  Friends couldn't access our shared folders on my NAS; I couldn't reach them from my mobile network either.

I didn't know much about IPv6 before this.  Nor did I know that choosing to install my own wireless router on my old ISP (Virgin Media) had the side-effect of taking me off CGNAT, which got me a semi-static, not-shared public IP address that I registered with a DDNS service, allowing me to host my NAS drive.

So what was the workaround for being behind CGNAT at home? I tore down my website and started tinkering with my virtual server in the cloud, making it an IPv4-to-IPv6 forwarder to my NAS drive, but it didn't work well.  And I had ripped my website up, and by the time I got round to rebuilding it three months later, I had mislaid the backup.
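For the curious, a minimal sketch of that kind of relay, using socat (I won't swear that's the exact tool I used at the time). The cloud server, which has a public IPv4 address, listens on a port and relays each connection to the NAS over IPv6; the address below is from the documentation prefix, a placeholder for the NAS's real global IPv6 address.

```shell
# Accept IPv4 connections on port 443 and relay each one, in both
# directions, to the NAS's IPv6 address (fork = one process per client):
socat TCP4-LISTEN:443,fork,reuseaddr TCP6:[2001:db8::10]:443
```

The catch is that every byte of traffic now transits the cloud server, and things like the client's real IP address are lost, which is partly why it "didn't work well".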

The workaround, in the end, was asking brsk nicely if they'd take me off CGNAT.  The support chat I got back by email was with a genuinely techie-minded person, who opened my eyes to CGNAT in the first place (and how rubbish it is).  He explained that the only way off CGNAT on their system was a static public IP address; it wasn't yet a service for home users, though they planned to offer one soon, so in the meantime he gave me a free static IPv4 address.  Maybe I shouldn't mention this, as I've had it for eight or nine months now, free of charge.

So I was without a website for a while, and now it is back, restored from one backup or another, with this 'ere new content.  I've changed job, and now provide in-house IT support for a factory-and-office building.

I've moved my server from OVH to Ionos, who happen to be my DNS registrar.  When I restored my website, I was aiming for maximum likelihood of recovery, so I started with a blank Ubuntu of the version I was using at the time of the backup (20.04), and likewise with the version of WordPress.  I had to fiddle and faff to get the backup-and-recovery tool to work properly.  I then updated WordPress to the latest version and, happy with my work, took a snapshot backup using the function on OVH's interface (one that charges a few quid a month).

I then thought, why not have a further prod and poke at upgrading from Ubuntu 20.04 to 22.04?  When I attempted that, WordPress fell apart: WordPress doesn't guarantee compatibility with PHP 8, and the jump from PHP 7.2 to 8 as part of the Ubuntu upgrade was the culprit.  I went to restore from the OVH snapshot (as I had done a few times before) and the snapshot would not restore.  I checked in with their customer support and was advised that a snapshot is like a first tier of backup: you can only have one snapshot at a time, once restored it can't be restored again, and if something goes wrong it can't be undone.  They said, "We do not suggest that it is a backup."  I wrote back and said that they do suggest it is: by offering it as a paid service, they suggest it is supposed to be used, and that it works as intended.

And so I decided to restart the whole process on Ionos: a blank Ubuntu 20.04, that specific version of WordPress from March 2021, that difficult restore from backup (made easier by having done it once already), then updates.  I'm back up and running, sticking with Ubuntu 20.04 until WordPress supports the PHP upgrade that comes with newer Ubuntu.  I don't think my virtual server has IPv6, but it has a static IPv4, as expected.

And here I am.

I’m back! Again!

More soon.

By the way, it is my personal opinion, but backed up with some good experience, that OVH virtual private server blows. Specifically, blows raspberries up its own @$$.

Recent geekage

I've had a mess around with stuff recently.  Here's a summary:

On a server formerly running Microsoft SBS 2015 (or summat), I installed a command-line instance of Ubuntu 20.04 Server on software RAID, installed three concurrent virtual machines running Ubuntu desktop, and configured remote access to them from the internet using a web interface called Apache Guacamole.  It is off at the moment; it has no SSL on it, so I'm not inclined to leave it turned on.  It was the combination of:

  1. Reading about Apache Guacamole in a random page, so knowing it was possible
  2. Wanting to have a way of showing potential Linux converts what it is like to use, without them coming to my place of work during Covid.
  3. Wanting to learn how to install virtual PCs within Ubuntu Server by command-line.

It was probably two months ago, so I’m not entirely sure, but I probably used these instructions for the Guacamole side of things, and this I’m sure is the very detailed and useful guide to installing, configuring and deleting virtual machines with remote desktop displays from the command-line.

I migrated my Nextcloud instance off my server and on to a new Raspberry Pi 4B (4GB).  The reason was that I had Nextcloud in a Snap.  A Snap is a fully packaged app with all its dependencies built in; since Nextcloud is a web app, that includes PHP, the Apache web server, MySQL (or equivalent) and even the Let's Encrypt certbot, for self-renewing, short-term SSL certificates.  Same as most websites.  Now, if Apache and a Snap containing Apache are running on the same server, they can't be on the same port number.  Changing the Snap's port to 44300 (instead of the default 443) means you can't use the Snap's built-in Let's Encrypt certbot, only the self-signed one.  So any visitors via app or web interface get warning messages or compatibility issues, what with non-default ports and self-signed certificate warnings like YOU HAVE JUST HAD ALL YOUR PASSWORDS HACKED BY VISITING A WEBSITE WITH A SELF-SIGNED CERTIFICATE, YOU IDIOT!!! QUICK, PHONE YOUR BANK BEFORE YOUR ACCOUNT IS EMPTIED!!! Or words almost as ominous.  I mean, it's only a self-signed certificate.  I do declare that I am myself, and need no other authority to say that I am who I am, or, as may be, that I am not who I am, and, or, that I am who I am not, and, or, or maybe not, that I am not who I am not.
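The port juggling above can be done through the Nextcloud snap's own settings, if memory serves, something like:

```shell
# Move the snap off ports 80/443 so the system Apache can keep them:
sudo snap set nextcloud ports.http=8080 ports.https=44300

# On a non-standard port, Let's Encrypt's HTTP validation can't reach you,
# so the only built-in HTTPS option left is a self-signed certificate:
sudo nextcloud.enable-https self-signed

# (On the default ports, you could instead have run:
#   sudo nextcloud.enable-https lets-encrypt )
```

Hence the move to a dedicated Pi: with nothing else wanting 443, the snap keeps its default ports and its proper certificate.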

Got it?

So, Nextcloud is on a Pi with a 2TB HD attached.  Not one of my five or six users would know, without me telling them, that it sits under the stairs, on top of the Guacamole server.

My third geek was to install Ubuntu Server 20.10 on a laptop and then install just the packages I wanted, to get a custom build.  I installed Cinnamon, then LightDM, expecting to have a desktop on next boot, but one didn't link to the other until I did sudo apt reinstall cinnamon.  Once booted, I did sudo apt install firefox thunderbird libreoffice terminator, then tweaked around with a couple of themes.  One other tweak was to suppress its tendency to pause for a minute at boot to see if the LAN port was going live, which was useless since it gained wireless capability when the Cinnamon desktop was installed.  It is really tidy, and I'm inclined to do a similar set-up on my Guacamole server.
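Roughly the sequence, for anyone wanting to repeat it. The last command is my best guess at the boot-pause fix: on Ubuntu Server the usual culprit is the systemd wait-online service holding boot until the wired network is up.

```shell
# Desktop environment plus display manager on a server base:
sudo apt install cinnamon lightdm
sudo apt reinstall cinnamon     # the step that finally wired the two together

# The everyday apps:
sudo apt install firefox thunderbird libreoffice terminator

# Stop the minute-long pause at boot waiting for the LAN port to come up:
sudo systemctl disable systemd-networkd-wait-online.service
```

An alternative, if the machine is configured with netplan, is marking the wired interface `optional: true` so boot doesn't wait for it.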

Fourth geek – I have wrapped my head around using PGP public and private keys to encrypt email and attachments. You can now send me an encrypted email that only I can read, by using my public key, found at https://digitaltinker.co.uk/digitaltinker.asc
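The whole workflow fits in a few GnuPG commands. This is a self-contained demo with a throwaway key for a made-up address; to mail me, you'd import the real public key from the URL above and encrypt to its owner instead.

```shell
export GNUPGHOME="$(mktemp -d)"   # throwaway keyring, so we don't touch yours

# 1. The recipient generates a key pair and publishes the public half:
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key demo@example.invalid default default never
gpg --armor --export demo@example.invalid > demo-public.asc

# 2. The sender imports the public key and encrypts with it:
gpg --import demo-public.asc
echo 'meet me at midnight' > note.txt
gpg --batch --yes --trust-model always --armor --encrypt \
    --recipient demo@example.invalid --output note.txt.asc note.txt

# 3. Only the holder of the matching private key can decrypt:
gpg --batch --pinentry-mode loopback --passphrase '' --decrypt note.txt.asc
```

The armored `note.txt.asc` is plain text, so it travels happily as an email body or attachment.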

And you’re up to date.

Slightest tweak

The site has received its first change in a long time. Only the slightest tweak, and only visible to those in the know. A clue; A man is not dead while his name is still spoken.

Aaargh, drivers!

So, the OTRS server is going to be a hardware box in the corner of the room at work, rather than a virtual server online (mine or a dedicated one).  The only problem is getting the drivers to work; namely, for the Realtek 8111C LAN chip.

The driver, I gather, is a closed-source kernel module, and needs to be added to the kernel to work.  So, there's an app for that: if you sudo apt install r8168-dkms, it is supposed to add the driver for many Realtek chips to the kernel.

I built the system on a little laptop, perfectly adequate for the job, and it had a LAN chip that was either Intel or Broadcom; it doesn't matter which, because it worked (I checked: it's Broadcom).  I unplugged the HD and stuck it into the faster system I wanted to use going forward (after installing r8168-dkms) and it was having none of it.  There's a manual way of adding the driver to the kernel, but since the r8168-dkms package said everything installed fine, thank you, I suspect a manual install would fare no better.  And manually installing the kernel module is less desirable anyway, when an apt package can do it for you: the package is written to re-add the module whenever a new kernel is installed.
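One avenue I should rule out, as a sketch: the in-kernel r8169 driver often claims these chips before the DKMS-built r8168 module can, so the usual advice is to blacklist it (the r8168-dkms package is supposed to do this itself, but it's worth checking).

```shell
sudo apt install r8168-dkms          # builds the module for each installed kernel

# Make sure the in-kernel r8169 driver can't grab the chip first:
echo 'blacklist r8169' | sudo tee /etc/modprobe.d/blacklist-r8169.conf
sudo update-initramfs -u
sudo reboot

# After rebooting, see which driver actually bound to the NIC:
lspci -k | grep -A3 -i ethernet
```

If `lspci -k` still shows no "Kernel driver in use" line for the Realtek chip, the module built but isn't binding, which points back at the hardware.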

Getting hold of a consumer-class motherboard without a Realtek chip is unusual, and all the spare second-hand socket 1155 and 1156 boards we've acquired at work have Realtek.

An Intel PCI-Express LAN card may be necessary.  VIA looked promising until I realised that virtually nobody has put a VIA chip on a PCI-Express card.  I've heard some Broadcom chips aren't a bad choice for Linux either, but not all.

Intel seem a bit pricey at around double the cost of the Realtek ones, but that may be the price you pay for quality and proper Linux support.  Still, £25 isn't that much.  Broadcom are just as pricey.

By the way, whatever happened to VIA?  They were as ubiquitous 10-12 years ago as Realtek.

The Details

Today, I ripped it up and started again, doing so in a particular order to make sure that I got almost everything right, first time, every time, nearly.  So I backed up WordPress using All-in-One WP Migration and pressed the button on the OVH server control panel to reset back to vanilla Ubuntu Server 18.04.

After powering back up, it comes with a root account that you're emailed the password for, and an ubuntu user that I don't know the password for, which is weird.

I deleted the ubuntu user and recreated it with a nice 20-character password, and logged in as such.  But I'm used to being logged in automatically through SSH, as user ubuntu, since that's how Amazon Web Services comes preconfigured.  This involves a public and private key pair, with the public key being presented automatically on connection.  I found these steps on good ol' Digital Ocean, which I had to amend since they're written assuming your SSH app is the Linux command-line ssh and not PuTTY for Windows.  That being the case, I swapped step 1 for the PuTTYgen method, copied the public key it created, then pasted it into the file ~/.ssh/authorized_keys (being logged in as ubuntu, it saved to /home/ubuntu/.ssh/authorized_keys).  I saved PuTTYgen's private key to My Documents in my Windows home directory and connected with it in PuTTY.  Bang, I'm in!
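The server side of that amounts to a few commands. The `ssh-rsa AAAA...` line stands in for the real public key shown in PuTTYgen's window; sshd is fussy about the file permissions, hence the chmods.

```shell
# As the user you want to log in as (ubuntu, here):
mkdir -p ~/.ssh
chmod 700 ~/.ssh

# Paste the public key PuTTYgen displayed, one key per line:
echo 'ssh-rsa AAAA... putty-key-comment' >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

On the Windows side, PuTTY then needs the matching private key (the .ppk file) set under Connection, SSH, Auth.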

I continued with the steps recommended on Digital Ocean so that password authentication is disabled and only SSH key pairs work.  I have a key for root, and a key for ubuntu.  The purpose of the exercise was to use an account that requires sudo for any system-altering commands (which most are when installing various platforms).
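Disabling password logins boils down to one sshd_config setting, flipped only after confirming that key-based login works (so you don't lock yourself out):

```shell
# Set "PasswordAuthentication no" in /etc/ssh/sshd_config,
# whether the line is currently commented out or not:
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config

# Apply it:
sudo systemctl restart ssh
```

From then on, anyone connecting without a matching private key is refused before a password prompt ever appears.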

More later, I’m going out to buy some green hair dye.

LLL and Updates!

(Didn't keep this up, I'm afraid.  Old news.  Nothing to see here.)  Blogging about geeking is going to be less frequent, as I devote my extra-curricular geeking to Linux Learners NorthWest or Linux Learners Lancs, whichever name I settle on.  The first meet will be two weeks on Wednesday, 11th October 2017.  I'm working on the website and first-meet content this weekend.

I like how Linux Learners Lancs rolls along the lips, à la alliteration of ells.  And it abbreviates to LLL too.  NorthWest is a little more inclusive for Cheshire, Merseyside and Greater Manchester, but people should only feel restricted by how far they will travel.  I always thought of Yorkshire as an eastern collection of counties, but did you know that the most westerly part of North Yorkshire is as far west as the most westerly part of Blackburn with Darwen?  And only ten miles from Morecambe Bay?

If I go for Linux Learners NorthWest, I’d need to register another domain, for the cost of a take-away pizza.

So anyway, open to all 😃.  Come from Windermere, Sheffield or Wrexham if you can be bothered!

Last night I upgraded WordPress.  It tried to upgrade automatically from 4.5.1 to 4.5.2 (I think) but failed because my permission settings were too tight.  I like how it tells you which files it couldn't change during the failed upgrade, so you can correct them.  One thing that flummoxed me briefly was WordPress asking for FTP settings on one of my sites and not on the others (I run three presently, but only this one with any real content as yet).  The reason was that this site's WordPress config file didn't have a particular setting, so WordPress assumed it couldn't write to its own files directly and fell back to being updated over FTP, or some such nonsense.  I have FTP disabled on my server anyway, by having never opened the ports.  So far, it has only ever needed to fetch files by HTTP, and if necessary, I can SSH files up to it, or even go via Nextcloud.
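If I've identified the right setting, it's the standard WordPress `FS_METHOD` constant, which tells WordPress to write files directly instead of asking for FTP credentials. The install path below is an assumption; adjust to wherever wp-config.php actually lives.

```shell
# Insert  define('FS_METHOD', 'direct');  above the
# "That's all, stop editing!" line in wp-config.php:
sudo sed -i "/stop editing/i define('FS_METHOD', 'direct');" /var/www/html/wp-config.php
```

Or just open wp-config.php in an editor and add the `define` line by hand, which is probably less nerve-wracking than sed on a live site.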

Back-ups were made by means of Amazon's virtual server system: a single snapshot was taken of the OS partition (I have three partitions: the OS with home folders and websites, NextCloud data, and swap).