System Maintenance for Linux PCs

Posted by JD 06/24/2011 at 19:00

May 2021 Update


  • Added a command to purge already-removed kernel, header, and module packages from APT.

  • Clarified /forcefsck options, slightly.

Jan 2020 Update
A little cleanup.

June 2018 Update
The big ideas below haven't changed. Really, the main change is using apt instead of aptitude or apt-get for package management. apt is a newer, simpler front-end to apt-get that does some housekeeping things automatically. I've been using apt for about 2 years.

Nov 2015 Update
If you want 5 years of support for your Ubuntu system, then it is important to check the Ubuntu Release Support webpage to verify the official support dates. For example,

  • 14.04.1 support ends April 2019
  • 14.04.2 support ends August 2016
  • 14.04.3 support ends August 2016
  • 15.10 support ends July 2016

What does this mean? Use aptitude safe-upgrade on 14.04.1 systems to maintain the full LTS support window. If aptitude full-upgrade (dist-upgrade) is used, then the support time is significantly reduced. For a desktop that will be upgraded to 16.04 LTS, it probably doesn't matter. For a server that will not be upgraded before August 2016, this is very important.

2014 Update
After years of using apt-get, I've finally seen the aptitude light. Aptitude has solved a few dependency problems that apt-get puked over. It is smarter. Now I'm recommending aptitude over apt-get. For almost every common use, swapping aptitude in for apt-get is the only change needed, and that is the only change made below. I did not update any comments to reflect this change. Learn more about aptitude from the Debian Wiki.

2013 Update
With newer Linux installs, there has been a huge problem with old kernels not being cleaned up automatically. For some people, this has caused their package manager to get stuck with an out-of-storage error. Until they clear out the old kernels, their system is stuck in APT-Hell. Not good at all. This article has been updated to add cleaning up kernels to the list.

Original Article Continues

I decided to write this entry after reading an article over at Lifehacker by Whitson Gordon titled What Kind of Maintenance Do I Need to Do on My Windows PC.

What kind of maintenance do I need to do on my Ubuntu/Debian/APT-based PC? Good question. It is pretty simple … for desktops. This article is for APT-based desktop system maintenance, NOT for Linux servers. Linux servers need just a little more love to stay happy. I haven’t used RPM-based distros in many years, so I’m not comfortable providing commands to accomplish the things you need to do, but the methods will be similar.

Let’s get started.

Install System and Application Patches/Updates

This will patch the OS and all your applications.

$ sudo apt update; sudo apt full-upgrade

Done.

Don't worry. This only updates the current release to new packages and new kernels. It will not install a new release. If you need to stay on the current kernel, use the more conservative upgrade instead (note: safe-upgrade is an aptitude command; plain apt does not have it):

$ sudo apt-get upgrade

I've needed this only a few times in 15+ yrs of being a Linux administrator.

The apt and aptitude manpages are pretty good and explain the subtle differences between the upgrade, safe-upgrade, and full-upgrade options. man apt will show the first.

Read on for more tips below.

Backup Your Hard Disks

Backup, backup, backup. Eventually, you will thank me. Often, you need a phased solution for backups since pushing 2TB of data into the Cloud is a bad idea and will take months to complete.

  • Local – Everything needs a local backup. Everything. The key is to make it automatic, versioned and recoverable. The backup needs to be on a different physical disk too. I like some simple tools for this.
    • Back-In-Time
    • rdiff-backup (Been using this since 2011-ish)
  • Remote – Critical files like the KeePassXC password database and other highly critical data (wedding photos, births, Quicken data, etc.) need to be encrypted, then pushed to a remote server.
    • Crashplan – a good option
    • Work out a deal with a friend to exchange VeraCrypt'd backup volumes. Any backup that leaves your primary location must be encrypted.
    • duplicity (see the sketch after this list)
    • aptik – a simple Ubuntu-centric user, theme, and package-list backup tool
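For the duplicity option above, a minimal sketch looks like this. duplicity encrypts with GnuPG by default; the host and paths here are placeholders, not recommendations:

# encrypted, incremental push of HOME to a remote host over SSH
$ duplicity $HOME sftp://backupuser@remote.example.com//backups/$LOGNAME
# restore later into a scratch directory for inspection
$ duplicity restore sftp://backupuser@remote.example.com//backups/$LOGNAME /tmp/restore-test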

Before I back up my systems or HOME directory, I make certain to place some really important files in HOME to make life easier later, during recovery. Files like my personal crontab and a list of all software installed on the system. Here's how:


# Capture some important information
# installed packages
$ sudo apt-mark showmanual | tee $HOME/apt-mark.manual

# my crontab
$ crontab -l > ${HOME}/crontab.${LOGNAME}

Here's a link to a working HOME backup script using rdiff-backup, which has usage very similar to rsync. With the list of software, restoring all those tools to a different system becomes trivial:

$ sudo apt-mark manual $(cat apt-mark.manual)
$ sudo apt-get -u dselect-upgrade

Previously, we suggested using dpkg --get-selections and dpkg --set-selections. apt-mark is better.
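Since the linked backup script is rdiff-backup based, here is a minimal nightly-run sketch; the destination path is a placeholder and should live on a different physical disk:

# versioned backup of HOME; each run stores a reverse increment
$ rdiff-backup $HOME /mnt/backup/rdiff/$LOGNAME
# prune increments older than 90 days to cap storage growth
$ rdiff-backup --remove-older-than 90D /mnt/backup/rdiff/$LOGNAME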

Clean Up Old Kernels

Update May 2021

dpkg -l 'linux*' | awk '/^rc/{print $2}' | xargs sudo apt purge -y

That will completely purge packages that have already been removed. They show up with status 'rc' in the dpkg -l output, so the files are already gone; the leftover configuration is just wasting some space inside APT, which can drive people nuts.
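If you want to see the list before anything is purged, run the pipeline without the xargs stage first:

# preview only; prints the 'rc' packages that would be purged
dpkg -l 'linux*' | awk '/^rc/{print $2}'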

Original Section:

Sometime in the last few years, the process that removed old kernels was broken or removed from Debian and/or Ubuntu for some reason. That means old kernels just keep piling up and never get cleaned up. On systems with limited storage, eventually, bad things happen when there isn't any more storage. The bad things usually happen when trying to patch the system, and they leave APT in a bad state. By then, it is too late. We need to be proactive about cleaning up old kernels.
This script will generate a command for your specific system to clean up the old kernels. Read more about the script. I haven't needed to use that script in a few years.

In theory, apt will maintain 2-3 kernels and automatically remove any that are older. This is one of the main reasons I like apt over the other options. YMMV. It has been working for at least 4 years now.

For systems with limited storage, it is a good idea to clean up old APT package files too.

sudo apt autoclean
sudo apt autoremove

will do that.

Clean Up Temporary Files

On UNIX/Linux systems, people use the /tmp directory for temporary files. Just open any file for temporary needs there in your editor of choice; vim /tmp/t is a good example. If you came from Windows, nobody told you to do this, so start now. The /tmp area gets cleaned up automatically at reboot. If it does become filled over months of uptime (and that is typical for Linux), then you can just delete the files under there. Sometimes special files will be placed there that you don't want to remove, but I've never seen any real harm come from removing anything in /tmp.

GUI Linux systems often have a ~/.Trash/ directory where stuff deleted using a file manager might be moved. In theory, the empty Trash option will do that, but in early 2020, some reports on the Ubuntu Forums have shown the Empty Trash task doesn’t always work. Check ~/.local/share/Trash for files.
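A quick manual check and cleanup; per the freedesktop.org trash layout, files/ holds the data and info/ holds the metadata:

$ du -sh ~/.local/share/Trash     # space still held by 'deleted' files
$ rm -rf ~/.local/share/Trash/files/* ~/.local/share/Trash/info/*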

There is no registry on Linux, so you don’t need a registry cleaner.

The cleanup for most other temporary files is handled automatically, but some editors (vim, nano, emacs, etc.) may leave files ending with a '~' character lying around. Cleaning these files up is a pretty simple find command. You can clean them up under your HOME as a normal user or, if you are root, for the entire system. Doing this can be extremely dangerous, so running it without the -delete option first is a really good idea.

$ find $HOME -type f -name "*~" -print

After that appears to do what you want, add the -delete option. Be extremely careful or you'll be using those backups for recovery. You've been warned. I speak from experience.
$ find $HOME -type f -name "*~" -print -delete

When a system is out of storage, finding a few huge files that can be deleted or moved is very handy. Find all the files over 3GB in size.
$ find / -type f -size +3G -print 

Just change the size as needed. +500M will find files over 500MB in size. Best to start really large, then slowly reduce the size.
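Another handy approach is summarizing usage per directory rather than hunting individual files. This sketch assumes GNU du and sort, which are standard on most distros:

# biggest directories list last; -x stays on one file system
$ sudo du -xh --max-depth=2 / 2>/dev/null | sort -h | tail -20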

Years ago, kernel crashes happened more often and wrote core files under /var. You must be root to clean those files up, assuming you aren't saving them for debugging or don't have the necessary skills for that.

$ sudo find /var -type f -name "core*" -print

The default settings on most Linux distros do not create core files any more, so this is almost never an issue.

For other files that are temporary, but that I don't want placed into /tmp, I'll schedule their removal in the future with at. For example, I often place files on a web server that are temporary and there for a specific person, but not password protected. Looking now, I see 3 at jobs scheduled for later this year. These will survive reboots and, once run, never show up again. Learn more about at scheduling.
A quick example:

$ echo "rm /var/www/html/Pete-file.odf" | at now + 38 days

The trick is specifying the time in a way that at understands.

  • noon Sun # Sunday
  • 6:42 fri # Friday in the morning (24-hr clock)
  • 20:12 thu # 8:12 pm Thursday
  • 11:59 pm
  • 11:59 pm Dec 24

As long as the system is running when the time arrives, at will run the job. Reboots or shutdowns between now and then don't matter.
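at ships companion commands for checking on pending jobs:

$ atq        # list pending jobs and their job numbers
$ at -c 12   # show exactly what job 12 will run
$ atrm 12    # cancel job 12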

Honestly, I spend more effort cleaning up Flash/Macromedia permanent objects than temporary files. I want them removed BEFORE a backup runs. Here's how:

$ rm -rf ${HOME}/.macromedia/* ${HOME}/.adobe/*

Simple. I run that command before my nightly automatic backups. With every OS release, it is worth checking that the location has not moved.

Uninstalling Programs

If you use the package manager to install software, then you should use the package manager to remove software. For APT-based systems, here’s how:

$ sudo apt purge [package]

Or if you don't want to remove all your custom settings, but still want to remove the program, use:
$ sudo apt remove [package]

On a new system, I immediately remove nano (I hate that editor):

$ sudo apt purge nano

Defragment? No, but Run FSCK Occasionally

Defrag – Linux file systems do not need to be defragmented. fsck should run automatically based on the number of reboots; I think 30 or 60 is the common count. If you reboot daily, it will happen often enough. If, like me, you reboot about 12 times a year, it would be multiple years before an automatic fsck was run.
Plus, extra partitions don't generally have the settings to force an fsck periodically. To run fsck, a partition cannot be mounted. The same applies to logical volumes (LVs) if you use LVM2. Often the easiest way to accomplish this is by booting from alternate boot media (CD/DVD/flash) and using that OS to run fsck on the internal disks connected to your Linux system. Booting from alternate media is a common step in solving many boot issues with Linux, so it is something worth practising.

Full Hard Disks
However, if you let the core OS partitions get really full, like above 95% full, you will see some serious system slowdowns. If you let the really important file systems, like /var or /, get full, you may crash the system. Being full comes in two ways on Linux.

  1. Out of storage space – just like under Windows
    $ df -hT
  2. Out of inodes – which is just as bad, but not as quick for a new-to-Linux user to see. Check your inodes with:
    $ df -i

Usually, there will be plenty of inodes available, but if you are seeing out of disk space errors, check the inodes. Running out of inodes doesn’t happen too often anymore, but it can still happen. Just a few months ago, the file system on a virtual machine here ran out of inodes while still having over 30% of the storage free. It turned out there were many, many, many tiny files being created by a process due to a configuration change that I’d made. Manually removing all those files brought the inode use back to 60% and the machine started behaving again.
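When hunting inode hogs like that, GNU du can count inodes instead of bytes; a sketch, assuming a reasonably recent coreutils (8.22+):

# directories holding the most files sort to the bottom
$ sudo du --inodes -x / 2>/dev/null | sort -n | tail -15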

fsck is a logical file system checker. There is a different version for each Linux/UNIX file system type, usually named fsck.ext3, fsck.jfs, or fsck.xfs, for example. If the base fsck program can't determine the type of file system, you can either tell it which type with the -t option or manually call the correct program yourself. If you call the wrong program, hopefully it will refuse to run, but since this is Linux/UNIX, you can force it and completely destroy the underlying file system if you choose the incorrect type. You need to unmount (the command is umount, missing the first 'n') the file system before you can run fsck and make any corrections.

First, you need to determine the mounted device – usually something like /dev/sda8. Use df -hT to see the mounted file systems.

$ df
Filesystem     1K-blocks    Used  Available Use% Mounted on
/dev/sda2        4161216 2660112    1291392  68% /
varrun            524396      60     524336   1% /var/run
varlock           524396       0     524396   0% /var/lock
udev              524396      16     524380   1% /dev
devshm            524396       0     524396   0% /dev/shm

That raises an issue. / is the only file system mounted on this machine. I can’t umount (yes, that is the correct spelling) it while the system is running, but I can force an fsck at the next reboot. How?
$ sudo touch /forcefsck
$ sudo shutdown -r now

Since systemd took over around 2016, the /forcefsck trick doesn’t work anymore. Unfortunately, with systemd, the simple method has been replaced by a convoluted answer that I always have to look up. It involves adding a boot parameter on the grub screen, pre-boot.

Let’s suppose that /dev/sdb8 was mounted on /backups in the example above. Umounting /backups and running fsck can be done by

$ sudo umount /backups
$ sudo fsck -y /dev/sdb8

You’ll want to do this on all the different mounted file systems.

I should mention that fsck will automatically be run every X reboots. The actual count between automatic fsck runs is a tunable file system parameter, set at creation time or changed later with tune2fs. tune2fs is an advanced tool for ext2/3/4 file systems, not for casual Linux users. If you leave your system running 24/7, you may find that no fsck has been run in over a year. This isn't necessarily bad, but neither is forcing a check. I force one about every 6 months, immediately after a kernel reboot has been required. Just for extreme clarity, that's 2 different reboots.

# reboot #1: new kernel installed
$ sudo reboot
# reboot #2: force the file system check on the next boot
$ sudo touch /forcefsck
$ sudo reboot

Don't do both at the same time, please. If something bad happens, it will be easier to troubleshoot when only 1 big thing changed.
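Since tune2fs came up above, here is a short sketch for the curious. These are real tune2fs options for ext2/3/4 file systems, but the device name is only an example; find yours with df -hT first:

# show the current automatic-check settings
$ sudo tune2fs -l /dev/sda2 | grep -i 'mount count'
# force a check every 30 mounts or every 6 months, whichever comes first
$ sudo tune2fs -c 30 -i 6m /dev/sda2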

Update May 2021

The sudo touch /forcefsck trick stopped working when systemd took over. What we have to do now is ugly, and all of the required methods need console access, so forcing an fsck from a remote location over ssh isn't possible anymore. Systemd requires one of these options:


  • Modify grub boot option, update-grub and reboot

  • Edit the grub boot options during boot and add the fsck option (whatever that is)

  • Use the Advanced Boot choice from the Grub menu and hope there is a “Check all Filesystems” option displayed.
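For the record, the boot parameters the first two options refer to are documented in systemd-fsck(8): fsck.mode=force plus, optionally, fsck.repair=yes. A sketch of the permanent grub route follows; remove the options again afterwards, or every boot will run a full fsck:

# in /etc/default/grub, add the fsck parameters to the default command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash fsck.mode=force fsck.repair=yes"
# then apply the change and reboot
$ sudo update-grub
$ sudo reboot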

I miss the old method.

End Update

If using LVM, the device used for the fsck command must point to a logical volume with a file system. I’ll assume if you are using LVM, then you will be able to figure this out. A handy alias:

alias lsblk='lsblk -o name,size,type,fstype,mountpoint'

More advanced file systems like ZFS validate both the file system structure and the data written to and read from the drive hardware. Some day, EXT4 and later file systems may get these capabilities, but for now, we have fsck.

Once you get a non-root file system unmounted (/home, /export, /backups), you still want to run fsck with:

$ sudo fsck -y /dev/sda8

where sda8 is the device that gets mounted. I suppose you could do this with the UUID, but I never have and don’t know if that works. You can check /etc/fstab for the mount point to UUID/device mapping or look in /dev/disk/by-uuid or simply use df -Th to find the device.
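For what it's worth, the util-linux fsck manpage says a UUID= specifier is accepted, so a sketch like this should work; test it on a non-critical file system first:

$ sudo blkid /dev/sda8                 # prints the UUID for the device
$ sudo fsck -y UUID=<uuid-from-blkid>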

On some systems, you'll find ntfsfix and fsck.vfat. Those could be helpful if you have issues with your Windows hard disks, when Windows can't solve the issue. Why isn't ntfsfix named fsck.ntfs? I don't know, but there's probably a good reason.

However, these tools were created by reverse engineering the file systems and cannot be 100% capable of everything that a native, Windows tool can do. Always start fixing file system issues using the native OS, so use Windows to fix ntfs or fat-whatever issues and Linux to fix ext2/3/4, ZFS, btrfs, xfs, jfs, and the other 20+ native-to-Linux file systems.

Clean Your Registry

Linux often uses dot files for settings. They are named that way because any file that begins with a period will not be displayed in normal directory listings. A .vimrc is common in your HOME directory, but there are lots of others, like .bashrc, and settings for almost every program that you use on Linux. Directories that begin with a . are also hidden. Some examples are .adobe, .cache, .cpan, and .freemind, but almost any name is possible.

Regularly Reinstall to Clean up Cruft?

This is not needed. If you use the package manager to install and remove software, you won't have any leftover cruft like in other operating systems. If you install using some other method, there is probably a de-install tool included. If not, you can 99.9% of the time just delete the files that were installed. Be careful not to simply delete files that were installed by the package manager; doing that can and will cause problems later.

Update Antivirus?

Sure, you can run an antivirus tool, but it will look for MS-Windows virus signatures. By doing this, you are being a better netizen, but not really helping your Linux PC much. If you have any MS-Windows PCs on your network, this is still a really good idea. ClamAV is the standard AV for Linux systems. Personally, I don’t run AV on my systems, but most people will have more risks because they run more Windows computers and use those computers for high-risk activities on the internet. My Windows computers aren’t used on the internet except with extremely specific tools. No web browser, email, social networks on Windows.

The best thing you can do to deal with Linux viruses is to stay patched and not use the root account all the time. Using a non-privileged account is a security technique.

Reboots Needed After All Patches?

The only time you need to reboot is after a kernel update and maybe after a libc update. Most people using other operating systems have been trained to reboot to clean up system memory or reset things. It seems to work on those other operating systems, but it is almost completely unnecessary under Linux. Any program or system patch should automatically restart the affected program or daemon for you. If that doesn't happen, you can usually run a restart command manually, like

$ sudo /etc/init.d/mariadb restart

or
$ sudo /etc/init.d/apache2 restart

There are newer methods, service and systemctl, which replace the old scripted init commands. You can find how-to guides for those methods.
$ sudo service mariadb restart

or
$ sudo service apache2 restart
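On systemd-based releases, the systemctl equivalents look like:

$ sudo systemctl restart mariadb
$ sudo systemctl status apache2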

Rebooting your Linux system without a good reason is just a waste of time. Certainly, if you are installing new hardware internally, a reboot will be necessary, but most external hardware will be discovered on a running system. This applies to USB devices, and others, like eSATA, may be hot-plugged too.

Newer Linux versions are migrating away from the init.d scripting that has worked well for 30+ yrs to a program called upstart (and then systemd) which is supposed to have advanced features, be quicker and make life easier. Call me an optimistic skeptic. Unlearning 20 yrs of habit isn’t going to be easy for old-timers like me. A little history about starting/stopping different system daemons on Linux:

  • init.d/ scripts
  • upstart (Ubuntu only)
  • service (most distros)
  • systemctl (most distros using systemd for init)

systemd also introduced journalctl to access and search log files. So many monitoring tools have been watching plain-text log files for decades that systemd's binary journal still gets dumped out to the same text log files that have been used for 40+ years.
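A couple of everyday journalctl invocations; note the unit name varies by distro (ssh on Ubuntu/Debian, sshd elsewhere):

$ journalctl -u ssh --since "1 hour ago"   # one unit, recent entries only
$ journalctl -b -p err                     # only errors since the last boot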

Firewall Checkups

If you have a computer, any computer, you need to be running a firewall on it. Linux has a firewall built into the kernel, and different interface programs are used by humans to set up the rules. The iptables CLI can be daunting for newer Linux users. ufw is a CLI front-end to iptables that is much easier to use for simple configurations. If you want to block all inbound requests except ssh, here's what you need to type.


$ sudo -s
# ufw reset
# ufw default deny incoming
# ufw allow ssh
# ufw enable

Don't type the leading # characters; they just indicate that the commands run from the root shell started by sudo -s.

That should result in

# ufw status
Status: active

To                         Action      From
--                         ------      ----
22                         ALLOW       Anywhere


If you telnet to a filtered port on the system from another machine (e.g. telnet yourhost 80), you should see a [UFW BLOCK] message in the syslog (tail -f /var/log/syslog). The connection is blocked before any listener has a chance to respond.

If you aren’t running ufw, you can always check iptables directly with

$ sudo iptables -L

If you are running fail2ban to protect your ssh connection, which is a really good idea, ufw doesn’t appear to harm that tool in any way. It still works. fail2ban rocks. Highly recommended.
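To verify fail2ban is actually watching, fail2ban-client will report the jail status; this sketch assumes the default sshd jail name:

$ sudo fail2ban-client status sshd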

Graphics Driver Updates

If your graphics drivers are working for you, then it is probably a good idea to leave them alone unless there is a real reason to update. Notice that I didn’t say upgrade. My experience is with nVidia proprietary drivers and calling some of their released drivers stable would be a lie. Still, the non-proprietary drivers may be slower and just as buggy. So, if you do decide to update your graphics drivers, be prepared to do some maintenance afterwards.

  • Rebuild the kernel
  • Relink the graphics drivers into the kernel
  • Re-setup your dual or multi-monitor setup

Don’t forget that every time there is a new kernel, you may need to re-install the proprietary graphics drivers to re-link or rebuild the modules for the new kernel. In theory, DKMS should automate those processes, but sometimes, seldom, things break.
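A quick check that DKMS did rebuild your modules for the running kernel:

$ dkms status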

A few other articles here cover graphics drivers and/or dual monitors.

Since Intel CPUs started including iGPUs, I haven't needed an external GPU. Current Intel drivers have been provided through the package management system.
On Ubuntu LTS systems, nvidia proprietary drivers should be really easy to install for 16.04 and later through a GUI.

Summary

That just about covers it. If you just perform the first three, you’ll be pretty safe. Those were

  1. Install patches and update your apps
  2. Backups
  3. Cleanup Old Kernels and junk/trash

Simple. Now go and do at least these 3 things on your Linux PCs.

You may notice that I didn't tell you where to point and click inside any GUI programs. Why not? To me, GUI instructions are full of error opportunities, while giving you a command-line example lets you take those commands, modify them for your specific needs, and place them into a script to be run periodically, as needed. With a GUI, you'd have to start and stop 15 different programs and spend much more time pointing and clicking every time you wanted to clean up your system. To me, that's inefficient. I like computers to work FOR ME, not the other way around.

  1. bzzzwa 06/24/2011 at 15:11

    It should better be named “System Maintenance for Debian based Linux PCs”… But thanks anyway.

  2. alex 06/24/2011 at 19:04

    Thank you :)

  3. SamD 06/24/2011 at 20:49

    In the backup section, shouldn’t “truetype” be “truecrypt”?

    Also, I hope you will agree to let Lifehacker repost this; lots of Linux users need to know this stuff.

  4. fireshadow 06/24/2011 at 21:03

    “truetype ’d backup volumes” should be “truecrypt ’d backup volumes”.

  5. Robin Turner 06/25/2011 at 02:12

    Nice stuff. People pasting into the command line should beware of quotation marks and dashes coming out wrong.

  6. JD 06/25/2011 at 10:59

    Wow! I didn’t realize the copy/paste was broken like that. I promise, I copied and pasted those commands directly from an xterm. Thank you blogging software.

You can see where some \# escaping was needed to prevent the software from automatically inserting numbers when I needed a comment or wanted to show a root shell. I'm using <pre> blocks for the code areas that you see in green/black. Other code marking methods have failed in much worse ways.

  7. Tyler 06/25/2011 at 11:49

    I thought Upstart was getting pushed out by systemd now?

  8. PistolPete 06/25/2011 at 17:19

    Thanks for providing this; although I keep an XP system so I can help my friends / family / clients solve their Windoze problems, I currently have three Linux systems (LinuxMint KDE, PCLOS, and LinuxXP 10.10), and I’ve learned a lot from reading this. I’ve experimented with many of the KDE-based distros (not a fan of Gnome), along with Puppy, Kubuntu, and SimplyMEPIS, and I greatly enjoy learning more about Linux whenever possible.

    I’m completely self-taught with Linux (I’m a compulsive auto-didact), so although I already knew some of this material, a good portion of it is new to me, and I appreciate your efforts to help educate others. I can see that I’m going to have to devote some time to reading this entire blog, which should keep me off the streets and out of trouble for a while.

    Thanks again.

  9. JD 07/07/2011 at 12:17

    Came across an article called What’s an inode?, but it was behind a pay-wall. Instead, check this out to learn about i-nodes.