Changing the desktop environment (fudging the final step, breaking the system and fixing it)

I am a fan of KDE most of the time.  It is elegant and polished.  My first introduction to Linux left me decidedly a Gnome 2 kinda guy and I waltzed over to KDE since I didn’t like either Unity or Gnome 3.  But sometimes I experience bugs in KDE and for that reason, I’ve been running my laptop on Cinnamon.  Yesterday, I decided to convert my desktop to Cinnamon, without reloading.  But let me first waffle on about the bugs…

So, the two bugs in KDE that have been particularly oining; firstly, there’s something weird in X11 or the Nvidia drivers that means I can’t always double-click a desktop icon.  I suspected it was Nvidia because when I had two monitors, one on Intel and the other on Nvidia, the bug would only appear on the screen connected to Nvidia.  After getting a DVI to VGA adapter to connect both screens to Nvidia (allowing me to entirely disable the Intel graphics), I wondered if the bug would show up on both screens; instead it went away.  Until one day the bug came back!  That was around the same time I installed the gamemode package to tune the system for gaming.  I think it is brilliant, by the way, that a few of the big name games are coming out on Linux, like the last three games in both the Hitman and Tomb Raider series.

The second bug that oins me about KDE is how the file manager (Dolphin) handles file transfers over the WebDAV protocol.  The file transfer progress bar is completely off.  For example, suppose you have four files of equal size and you copy them to your WebDAV server.  It’ll race up to 25% in a matter of seconds (faster than the network connection to my WebDAV server could manage), then pause for ages, then race up to 50%, 75% and 100%, pausing at each stage.  I think it is showing me how fast it loads each file into RAM, not how fast it moves along the network.  Very oining.  Cinnamon shows you actual network transfer rates, moving smoothly from nought to one hundred percent.

So, I decided, bye bye KDE, hello Cinnamon.  Thing is, if I’m removing KDE, then I’m removing its display manager.  If you don’t know, a display manager handles your log-in, among other things.  KDE’s default display manager is SDDM, and Cinnamon is quite happy with either LightDM or GDM3, among others.  So the plan was: install Cinnamon with LightDM, then remove KDE and SDDM.  The first couple of commands were something like:

sudo apt install cinnamon-desktop-environment lightdm 
sudo dpkg-reconfigure lightdm

Rebooted OK, logged in to the system with LightDM and Cinnamon, logged out, switched to tty2 so I was working in a pure command-line environment, typed sudo apt remove kded5 lightdm and hit enter.  It told me I had a lot of packages I didn’t need anymore, so I ran sudo apt autoremove.

Hang on.

Did I just remove LightDM?  That should have been SDDM.  And I don’t think SDDM works without KDE being present.  Now what?  Without a desktop, I have many tools missing, so I can’t get a network connection easily, can I?  How can I apt?

Oh.

expletive.

So I tried using netplan, the command-line way of configuring the network, but that was unsuccessful since netplan works with Ubuntu Server, not Ubuntu Desktop.  I may have eventually got it online, but I was struggling.  So, how do I get a package installed on a PC without network?  The answer is to install something called apt-offline.  That was a bit of an arse.
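For the record, the sort of netplan file I was fumbling with looks roughly like this — a sketch, not what actually got me online, and the interface name enp3s0 is a placeholder for whatever ip link reports on your machine:

```yaml
# /etc/netplan/99-recovery.yaml — hypothetical recovery config
# enp3s0 is a placeholder interface name; check yours with `ip link`
network:
  version: 2
  renderer: networkd
  ethernets:
    enp3s0:
      dhcp4: true
```

Applied with sudo netplan apply.  On Ubuntu Desktop the interfaces are normally owned by NetworkManager rather than networkd, which I suspect is why this route fought me.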

Using my laptop and Ubuntu’s online index of their full software repository, I downloaded apt-offline on to a pendrive, plugged that into my offline desktop, mounted the pendrive (since I had no GUI to do it automatically for me) and typed the command sudo apt install /media/pendrive/apt-offline.version-blahblahblah, only to be told two dependencies were missing.  So I unmounted the pendrive, got back on my laptop, got the dependencies and installed them.  I had to go back again, installing five packages in total, before I got apt-offline working.  I suppose I could have done the same to reinstall LightDM, but I suspected waaaaaaaaaaaay more dependencies to fix.  So, apt-offline it is.  How do I use it?

Running the command man apt-offline, I read the manual but stumbled over it a bit.  In summary: using sudo apt-offline set, you save a list of your desired package or packages, plus the current state of your offline system, into a single signature file.  Transfer that file to an online computer, then run sudo apt-offline get to download all the files necessary to install your new packages, dependencies and updates.  Back on your offline computer, sudo apt-offline install pulls all the files from your source but, crucially, stores them in apt’s cache and does not actually install anything.  To do that, run your normal command, in my case sudo apt install lightdm.
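Pieced together from the man page, the round trip looked something like this (the .sig and .zip filenames are just my own choices):

```shell
# On the offline desktop: describe the wanted package and current system state
sudo apt-offline set /media/pendrive/lightdm.sig --install-packages lightdm

# On the online laptop: fetch everything the signature file asks for
sudo apt-offline get /media/pendrive/lightdm.sig --bundle /media/pendrive/lightdm.zip

# Back on the offline desktop: load the bundle into apt's cache...
sudo apt-offline install /media/pendrive/lightdm.zip

# ...then install as normal, now satisfied from the local cache
sudo apt install lightdm
```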

And my system was repaired, with Cinnamon instead of KDE.  I learnt something new, but I would prefer to have my Sunday afternoon back!

The Details Part 3

So there seems to be this thing about snap and snappy apps and suchlike.  I have no idea really, I’ve only dipped my toes into it, but it seems like Snappy is the structure on which snaps are placed, interacting with each other like little virtual instances on your main virtual instance (or dedicated piece of hardware), providing modularised services to each other to create a website.  I think WordPress can be a snap, and so too can Nextcloud.  I tried installing the Nextcloud site as a snap as per this Digital Ocean guide but came up with nothing good; it came apart during the Let’s Encrypt SSL stage.  So I installed it just as I used to, following these instructions here from TechRepublic and a bit of this to allow for the utf8mb4 thingy.

With everything in place, OTRS was a doddle, though ironically (I think, I’d have to check with Alanis Morissette), OTRS insisted on having its SQL database configured in utf8.

So, it would seem that my master database is set up with the best version of utf8 character encoding (utf8mb4), which allows 4 bytes per character and is compatible with all the world’s alphabets (maybe Tolkien and Trek languages too!) and emoticons. 🏆  Which is great for WordPress.  Regular utf8 encoding in MySQL is 3 bytes per character and doesn’t cover all alphabets or emoticons, a bit of a botch job fixed when utf8mb4 came along, but never removed.  This is all from various forums I read up on in the last few days.

Within your MySQL installation, each database can have its own character encoding, so regardless of the SQL master database being utf8mb4, it was no issue creating a database just for OTRS that was utf8.  OTRS offers to create the database for you, or to use an existing DB, but it can only make the new one when the master DB is utf8.  That being so, I just created its database with utf8 as required, within phpMyAdmin.  So it is nice that WordPress is on utf8mb4, but for OTRS, the whole rip-it-up-and-start-again needn’t have happened.  All good practice though.  The standard OTRS install instructions on their website were easy enough, with some slight tweakages in how I set up my sites-available/enabled and conf-available/enabled, creating the links with the a2ensite and a2enconf commands.  It was the second or third time through playing with the OTRS install anyway.

(In Apache, the configuration files are named *.conf and are stored in /etc/apache2/mods-available, conf-available and sites-available.  They’re activated by linking the conf files within to similar folders named mods-enabled, conf-enabled and sites-enabled.  In the instructions for the WordPress and Nextcloud installs, the right way to make that link is with the Apache commands a2ensite site, a2enconf config and a2enmod mod, and similarly a2dissite, a2disconf and a2dismod to break the link, to take a site offline or disable a configuration.  Here endeth the parenthesis.)
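As a worked example, enabling a hypothetical site config and reloading Apache goes like this (mysite and mysettings are made-up names):

```shell
# Symlinks sites-available/mysite.conf into sites-enabled/
sudo a2ensite mysite
# Modules and confs are enabled the same way, by name
sudo a2enmod rewrite
sudo a2enconf mysettings
# Nothing takes effect until Apache reloads
sudo systemctl reload apache2
```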

More recently, I’ve been trying to wrap my head around the OTRS system itself and how it might best suit my place of work.

The Details Part 2

Continued…

Then I recreated my LAMP stack to support all my usual pages: WordPress, NextCloud, phpMyAdmin and, ultimately, OTRS.  I built the bog standard info.php file and named it index.php so it was served up by default by Apache (and so detected by Let’s Encrypt), and got my certificate for all my sites at once.

As soon as I got on the MySQL database, I corrected it to fully support the utf8mb4 character set, to incorporate multi-lingual posts and comments, and emojis 🤞.  I found these instructions for MySQL 5.7 on Ubuntu 16.04, which worked fine for the same MySQL on 18.04.  This is also important for OTRS when I get that installed later this afternoon or tomorrow.  OTRS won’t work without utf8mb4 support.

As an aside, figuring out where to apply MySQL instructions in files was a bit arsey, since various instructions I’ve read reference config files in /etc/mysql, but the master control file is /etc/mysql/mysql.conf.d/mysqld.cnf, and I suppose this may be a feature of Ubuntu 18.04.  Still, found it in the end.  The file suggested in some posts exists but has barely any config in it.  If config is applied there and it contradicts the config in /etc/mysql/mysql.conf.d/mysqld.cnf, then MySQL just doesn’t run.  Best to apply everything in the main file.
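For reference, this is roughly the shape of the utf8mb4 settings that ended up in the config — a sketch from memory of the guide I followed, so check it against your own MySQL version:

```ini
[client]
default-character-set = utf8mb4

[mysql]
default-character-set = utf8mb4

[mysqld]
character-set-client-handshake = FALSE
character-set-server = utf8mb4
collation-server = utf8mb4_unicode_ci
```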

I created phpMyAdmin first, configuring it the way I always have, with two-layer password authentication: one for access to the login page, one for the SQL user’s login that phpMyAdmin uses by default.  I then installed WordPress and imported my back-up.  What worked nicely this time, for some unknown reason, was that after importing the site from the back-up I’d done first thing, I could change the permalink settings.  When I did my export-import last week, from my AWS to this server, that didn’t work so well, and I had to have my blogs posted with the page ID in the URL, rather than date and page title as you see here, /2018/09/11/the-details-part-2/, for this page.  It’s nice to have that back.

Time now to reinstall Nextcloud, paying attention to the utf8mb4 nuances.  I’ll write that up in Details Part 3.

It’s been a while…

Geeking has been minimal.  My efforts and concentration have been directed at my place of work, where we’ve hired a new technician to replace one who left earlier, gone through a staff buy-out, and relocated.

But as of late, I’ve found my amazon charges creeping up, especially since some of my first-year discounts have expired.  So I’ve bought a new Virtual Private Server for 12 months, got 40GB SSD, 1 core, 4GB RAM (yup, 4GB) to host my site, my nextcloud, my experiments.  And it’s on 18.04 Ubuntu, none of this 16.04 of yesteryear (or the year before yesteryear).

I’m pleased with my 4GB of RAM.  On Amazon, I launched another instance with 2GB for an experiment with a web-app for a few days, and it struggled to run the web app under certain circumstances; it ground to a halt and free RAM evaporated.  The cost of running that extra instance is quite high compared to my new VPS.

Cost £60 + VAT.

So far, this blog is the only thing I’ve moved.  Though I did a dry run by moving it to xfer.digitaltinker.co.uk (now no longer a site).

Greek Beefly and Goldy Blow

When I’m excusing myself from Netflix (or Amazon) and chill, I’ll usually make a spoonerism and say “I’m going to greek beefly”.  It implies that a bit of Linux stuff is going on.  The other one I say is beak griefly.

There you go, an insight.

So, all that I’ve done of late is kept my server, WordPress and plug-ins up to date, just maintenance stuff.  I usually go on once a week or so.

I notice that when I’m applying a WordPress update, the security settings that I used in the first place to install WordPress were too strict for WordPress to update easily.  In other words, I ask WP to update, and it says it can’t; it hasn’t the permission.  It lists all the files it couldn’t write to and rolls back the changes.

But, it only does that for new versions of old files.  I’ve noticed that when trying to add new files to directories without the right permissions, WP reports the error, fails to roll back and often breaks.

Thankfully, I use Amazon to take a snapshot of the volume that holds the websites.  By clicking about 20 clicks over 2 minutes, I perform the following steps on my virtual server:

  • I shut it down at the command line with sudo shutdown now, then switch to the Amazon console
  • I virtually disconnect the virtual SATA lead from the virtual SSD
  • Virtually clone the snapshot to a new virtual SSD
  • Virtually plug the virtual SATA cable in to the new virtual SSD
  • Virtually press the virtual power button on the virtual front of my virtual server
  • Virtually toss the old virtual SSD in the virtual incinerator

Then I log back in to the server, change the suggested directory permissions and try the WordPress upgrade again.  Given that I’ve installed three independent copies of WordPress on my server, I can create a command to fix the permissions on one, then repeat it in the other two WordPress sites.

cd /var/www/site_1
sudo chmod g+w wp-includes/long_list_of_many_files
cd ../site_2
# [keys: up, up, enter]
cd ../site_3
# [up, up, enter]
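Since the three sites only differ by directory, the up-up-enter routine could equally be a loop.  Prefixing the command with echo makes it a dry run that just prints what would happen; drop the echo to actually apply it (paths as in the example above, and long_list_of_many_files standing in for whatever WP reports):

```shell
# Dry run: print the chmod that would run against each site's wp-includes
for site in site_1 site_2 site_3; do
  echo sudo chmod g+w "/var/www/$site"/wp-includes/long_list_of_many_files
done
```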

I think the last time I did all this, I reversed the permission changes after the upgrade was successful.  This time, I didn’t; I figure that they should have been writeable in the first place.

One other thing I’ve had a play with is Docker – I launched a new instance on Amazon just to try it out.  I don’t think I get it.  If you get the WordPress docker and install it, does it take care of MySQL, Apache, PHP, configured all neatly and ready to go?  And what if I want Nextcloud to run on the same instance?  How does that mesh with the WordPress MySQL?  Is Docker more secure?  Does it impact performance?  Do you get less customisability or control with Docker apps?  I need to read up some more.

Lastly, there was no second Linux meet the month after the first one; I forgot to follow up the meet with a recurring entry on the meetup.com site.  And there were rail strikes that meant transport was limited.  The second will go ahead on 13th December.  It’ll mostly be a presentation, I guess, a crash course in how I made this site.

And I’ve gone off on a tangent and decided to use a wget script to download sequentially named htm files of Star Trek: The Original Series episode transcripts, and scan them with grep for the phrase “I can’t do it Cap’n, I don’t have the power”.  The nearest I got was a question, not from Kirk but from McCoy to Scotty, in Episode 77, The Savage Curtain:

MCCOY: Can we beam the captain and Spock back up?
SCOTT: We don’t have the power. They’ll come aboard a mass of dying flesh.

There you have it.  A search online for the full phrase invariably brings up an Ace Ventura: Pet Detective moment of zaniness from Jim Carrey.  I downloaded the animated Star Trek transcripts and the movies and still came up with nothing closer than the above.
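The mechanics of the hunt, in miniature.  Stub files stand in for the downloaded transcripts here so the search step is reproducible; the real run fetched numbered .htm pages with wget first (the transcript site’s actual URL is not shown):

```shell
# Real fetch step was roughly: for n in $(seq 1 80); do wget -q "<site>/${n}.htm"; done
# Stub transcripts so the search can be demonstrated offline:
mkdir -p transcripts && cd transcripts
printf 'KIRK: Energise.\n' > 01.htm
printf "SCOTT: We don't have the power.\n" > 77.htm
# -i: ignore case, -l: print only the names of matching files
grep -il "have the power" *.htm
```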


I’ve re-read some of my ‘blog today, and realised that I did the “virtual chucking out of the hard drive” shtick before, in one of my earliest posts.  Ho-hum. (Sept 2018)

Security

Yesterday, I hardened my security. File ownership and permissions tweaked in all my WordPress installs.  Prior to this, my site might have been hackable.  Not that it was hacked, because I’ve also installed some security plug-ins for WordPress that confirm that all WordPress files are identical to the original download.

This plug-in also firewalls known hacker attempts somehow.  I don’t quite get how it does that, since a hacker need only change their IP through a VPN.  I wonder if the hacks are coming from bots, so the bot builder doesn’t bother hiding the IP behind a VPN.  But it has been a fascinating insight into what goes on as soon as a domain is registered and a website is active on it.  The hackers descend.

Rip it up and start again

So, whilst trying to get my server configured for email using the two similar recipes laid out here and here, I decided to do it over, again.  This time…

  • I discovered a bug in Amazon Web Services, or at least an easy-to-misconfigure option: I had firewall-blocked ports 80 and 443, the two ports necessary to host a website.
  • I found a way to introduce files into Nextcloud without having to upload them again (put them in place, set the owner and permissions, run a command).
  • I opened my mail ports to the world early in my set-up of the mail server and, halfway through, when I was encouraged to go through log files, I saw page upon page of failed log-in attempts from hackers.  I closed the ports and I believe they continued, which is why I reloaded again.
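That “put them in place, set the owner and permissions, run a command” trick, spelt out — paths assume a /var/www/nextcloud install, and username is a placeholder:

```shell
# Copy the files straight into the user's data directory, then fix ownership
sudo chown -R www-data:www-data /var/www/nextcloud/data/username/files/
# Tell Nextcloud to index whatever it now finds on disk
sudo -u www-data php /var/www/nextcloud/occ files:scan --all
```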

Bring on the paranoia!

Oh well.

Second site is up

It has no content yet.

I’ve found a band I’d like to audition to drum for, on joinmyband.com, and so I’m putting together my second WordPress site (taken down now – Sep ’18) for a couple of drumming videos.  I’ve rearranged my drum kit to make it easier to capture on camera, recorded myself playing my electronic kit, and then it all went piriform.

First of all, my electronic kit can record performances, but it doesn’t store more than five at a time and doesn’t keep them after powering off.  And it powers off if it has no input for 30 minutes.  I didn’t know this.  So I lost the audio recordings that I was going to dub over the video recordings before posting online.

And Windows 10 isn’t detecting line-in, so when I do make a recording, I’ll be doing it in Linux!  Or fixing the Windows problem, because…

At least installing a second WordPress site on a subdomain was a breeze.  There is apparently a way to run multiple WordPress sites per installation of WordPress, but that method has restrictions, I believe.  I have it configured so it’s an independent WordPress install, on its own subdomain, on the same server, and as such it takes up an extra 24MB of my server HD space.

Meh.

Minimal geeking

Just updated NextCloud to 12.0.1.

Fiddly, but that was because I followed one site’s recommendation of making the nextcloud directory owned by user root and group www-data, when the rest of the recommendations I’ve read say both user and group should be www-data.

So, there you go.

Compound Fiasco

Eeh, I made a mess of my server yesterday.

There was something still not quite right about how it was or wasn’t forwarding from digitaltinker.co.uk to https://digitaltinker.co.uk, or something of that kidney.  I decided that it was to be my geek project for the Saturday.

So I played around with my settings, and started to paint myself into a corner.

I tried uninstalling the SSL from my websites, but something I couldn’t find was still redirecting my www to a disabled https, even though my other subdomains were without SSL.  Thinking back, it might have been phpMyAdmin, though I think I removed that too.  So I started to peel away more layers, unfortunately with the accuracy of a chainsaw serial killer attempting delicate brain surgery.  In trying to remove the settings set by Let’s Encrypt, I removed the keys for all SSL connections, including remote terminal SSH.  When I tried opening a second terminal and got a message about no keys, I realised that as soon as I closed my current terminal, I could kiss my server goodbye forever.

Oh f…………

I had no recent back-up of my WordPress site.  I searched my browser’s cache and saved a copy of my ‘blog to the desktop.  Then I logged in to Amazon to delete my server’s main HD, launch another base installation of Ubuntu and rebuild my system.

This time, I’ve done it right!  (I think.)

All my WordPress pages redirect correctly; where https and www were absent, they’re added automatically.

I’ve made a change to phpMyAdmin by moving it to its own subdomain so it doesn’t cohabit www with WordPress.  Also, all passwords are upgraded to 16-character randomly generated ones.

phpMyAdmin turned out to be a pain though, because after installation it said some of its code was no longer supported and tumpty tumpty doodah, hang on, I’ll find the proper message…

    Deprecation Notice in ./../php/php-gettext/streams.php#48  
    Methods with the same name as their class will not be constructors in a future version of PHP; StringReader has a deprecated constructor
    
    Backtrace  
    ./../php/php-gettext/gettext.inc#41: require()  
    ./libraries/select_lang.lib.php#477: require_once(./../php/php-gettext/gettext.inc)  
    ./libraries/common.inc.php#569: require(./libraries/select_lang.lib.php)  
    ./index.php#12: require_once(./libraries/common.inc.php)

And many other messages, all Deprecation Notices in ./../php/php-gettext/streams.php.  It turns out that the Ubuntu-maintained copy of phpMyAdmin was a bit obsolete, so the volunteers had their own repository on Launchpad that I had to add.

They recommended also updating to a more up-to-date version of PHP with another Launchpad repository.  And that was when I found PHP 7.1 was available.  Jees, Linux is hard work!!  I recognise the importance of having back-ups of my working system, so I am taking snapshots of sda1 when I make big changes; I don’t care if it costs me an extra 27p on Amazon or whatever.  PHP 7.1 didn’t affect WordPress but borked NextCloud (maybe I should have put it in maintenance mode before upgrading to PHP 7.1*), so my snapshot saved my NextCloud install.

This has been the geekiest weekend in a while, and mostly recovering from the clusterf compound fiasco that was yesterday.  Phew.

*Update – did the PHP 7.1 upgrade again, this time with sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --on first, and NextCloud, in fact everything, survived the PHP upgrade 😄.  Then Ubuntu recommended removing PHP 7.0, but NextCloud isn’t installed from apt repositories, so apt didn’t know NextCloud was dependent on PHP 7.0; when 7.0 was removed, NextCloud broke, and when 7.0 was reinstalled, it all came back!
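For next time, the upgrade order that worked, roughly: freeze Nextcloud, do the PHP work, then bring it back (paths assume the /var/www/nextcloud install):

```shell
# Freeze Nextcloud before touching PHP
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --on
# ... upgrade PHP packages here ...
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --off
```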