Another Seagate HDD Bites It

Posted by JD 08/24/2016 at 19:32

The poor quality of Seagate disks is a well-known issue for people using spinning-disk storage at home. I hear their enterprise HDDs aren’t bad, but that isn’t what we purchase.

My sample size is very small. From 1990 – 2005 I went out of my way to purchase Seagate HDDs, and they lasted, at every size I bought. I used some 320G Seagate disks in an array for 7+ yrs and NONE of those failed. They made quality HDDs.

Updated Article – System Maintenance for Linux PCs

Posted by JD 08/23/2013 at 13:02

Recently updated the System Maintenance for APT-based Linux PCs article here. Seems that some things that used to be handled automatically are NOT handled automatically anymore.

BTW, the article was published on Lifehacker a few years ago. I based it on their Maintenance for MS-Windows PC article. As you know, maintaining Linux systems is 100x easier than maintaining MS-Windows.

Best Practices for Home Desktop Computer Backups 2

Posted by JD 11/12/2011 at 03:00

The Checklist

  1. Stable / Works Every Time
  2. Automatic
  3. Different Storage Media
  4. Fast
  5. Efficient
  6. Secure
  7. Versioned
  8. Offsite / Remote
  9. Restore Tested

When you are looking for a total backup solution, those are the things you want from it.
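To make the checklist concrete, here is a minimal sketch of a nightly job that hits several of these items (automatic, versioned, different storage media, offsite). The paths, host name, and 90-day retention are my assumptions, not recommended values:

#!/bin/sh
# Nightly: versioned backup of /home to a second physical disk,
# then mirror the backup set to a remote machine.
rdiff-backup /home /mnt/backup2/home
rdiff-backup --remove-older-than 90D /mnt/backup2/home     # prune old increments
rsync -a --delete /mnt/backup2/home/ backuphost:/srv/backups/home/

Drop it in /etc/cron.daily (or a crontab entry) and the automatic item takes care of itself.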

Optimized Backups for Physical and Virtual Machines 4

Posted by JD 10/08/2011 at 15:00

My old backup method was a little cumbersome. To ensure a good backup set, I’d take down the virtual machine, mount the VM storage on the host (Xen), perform an rdiff-backup of the entire file system, then bring the VM back up again. This happened daily, automatically, around 3:30am. It worked for over 3 years with very few hiccups. I’ve had to restore entire VMs, and that worked too. One day I needed to restore the Zimbra system ASAP. From the time I decided to do the restore until end-users could make use of the system was 20 minutes. That’s pretty sweet in my book.
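In outline, the nightly job looked something like this sketch. The xm commands are Xen’s old toolstack; the VM name, volume, and mount point are illustrative, not my actual script:

#!/bin/sh
# Quiesce the VM so the filesystem is consistent, back it up, restart it.
xm shutdown -w zimbra                       # -w waits for the clean shutdown to finish
mount /dev/vg0/zimbra-root /mnt/vmroot      # mount the VM's storage on the Xen host
rdiff-backup /mnt/vmroot /backups/zimbra    # versioned backup of the entire file system
umount /mnt/vmroot
xm create /etc/xen/zimbra.cfg               # bring the VM back up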

There are some issues with the current setup.

  • Backups are performed locally, to a different physical disk, before being rsync’ed to the backup server. This is necessary because the backup tool versions on the Ubuntu 8.04 and 10.04 LTS servers are different and incompatible.
  • Each system is completely shut down for some period of time during the backup process. It is usually 1-4 minutes, but that is still downtime.
  • Most of the systems are still 8.04 paravirtual machines under Xen. A migration of some type to newer OSes is needed. I should use this opportunity to make things better.
  • Some of the systems are running old versions of software which are not at current patch levels. I guess this happens in all IT shops. None of it is available outside the VPN, so the risks are pretty low.

I think I can do better.

New Blog Software and OS 2

Posted by JD 08/31/2011 at 20:00

Since this is a technology blog, I figure some of you may be interested in a major change that happened out of necessity here today.

This is the very first blog article on our new physical server, running in a completely different virtual machine. For the next week, everything here is a test.

Due to some sort of outage issue earlier today, I was forced to upgrade everything involved with this blog. I had attempted to perform this upgrade previously and failed. As you can see, this time, there was success. Nobody was shocked more than I.

Readers Ask About ... Using Virtualization with Media Storage 1

Posted by JD 08/14/2011 at 05:00

Below is the 3rd of 6 questions from a reader. I definitely don’t have all the answers, but I’m not short on opinions. ;)

Previous articles:
Part 1 – LVM+JFS+RAID | Part 2 – Service Virtualization | Part 3 – Virtualizing Media Storage | Part 4 – Hosting Email

duijf asks:

Q3: I intent (sic) to provide quite a lot of media to my internal network, if I choose for virtualisation, will the VMs be able to access the disk space outside of the container? I do not want to create TB size containers (or should I?). I will probably use the SMB protocol here.
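For a taste of the SMB route the question mentions: the usual pattern is to keep the terabytes on the host (or one storage VM) and have the other VMs mount the share, instead of growing TB-sized disk images. A minimal guest-side sketch, where the server name, share, mount point, and credentials file are all hypothetical:

$ sudo apt-get install cifs-utils
$ sudo mkdir -p /srv/media
# /etc/fstab entry so the share mounts at boot:
//mediahost/media  /srv/media  cifs  credentials=/etc/samba/media.cred,ro,iocharset=utf8  0  0
$ sudo mount /srv/media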

System Maintenance for Linux PCs 9

Posted by JD 06/24/2011 at 19:00

May 2021 Update
  • Added kernel, header, and module removal commands to purge them via APT.
  • Clarified /forcefsck options, slightly.

Jan 2020 Update
A little cleanup.

June 2018 Update
The big ideas below haven’t changed. Really, the main change is using apt instead of aptitude or apt-get for package management. apt is a newer, simpler front-end to apt-get that does some housekeeping things automatically. I’ve been using apt for about 2 yrs.

Nov 2015 Update
If you want 5 years of support for your Ubuntu system, then it is important to check the Ubuntu Release Support webpage to verify the official support dates. For example,

  • 14.04.1 support ends April 2019
  • 14.04.2 support ends August 2016
  • 14.04.3 support ends August 2016
  • 15.10 support ends July 2016
What does this mean?
Use aptitude safe-upgrade on 14.04.1 systems to maintain the LTS support. If aptitude dist-upgrade is used, then support time is significantly reduced. For a desktop that will be upgraded to 16.04 LTS, it probably doesn’t matter. For a server that will not be updated before August 2016, this is very important.
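To check which release and kernel a given box is actually on before deciding, something like this works; hwe-support-status ships with Ubuntu’s update-manager tooling on 14.04, if installed:

$ lsb_release -d                 # shows the release, including the point release
$ uname -r                       # shows the running kernel version
$ hwe-support-status --verbose   # on 14.04, reports hardware-enablement support status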

2014 Update
After years of using apt-get, I’ve finally seen the aptitude light. Aptitude has solved a few dependency problems that apt-get puked over. It is smarter. Now I’m recommending aptitude over apt-get. For almost every common use, swapping aptitude in for apt-get is the only change below, and that is the situation in this article. I did not update any comments to reflect this change. Learn more about aptitude from the Debian Wiki.

2013 Update
With newer Linux installs, there has been a huge problem with old kernels not being cleaned up automatically. For some people, this has caused their package manager to get stuck with an out-of-storage error. Until they remove the old kernels, their system is stuck in APT-Hell. Not good at all. This article has been updated to add cleaning up kernels to the list.
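A minimal sketch of the cleanup on a current Ubuntu, where autoremove knows how to drop old kernels; check the list before purging anything:

$ dpkg -l 'linux-image-*' | grep ^ii   # list installed kernels
$ uname -r                             # the running kernel – keep this one
$ sudo apt autoremove --purge          # remove old kernels, headers and modules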

Original Article Continues

I decided to write this entry after reading an article over at Lifehacker by Whitson Gordon titled What Kind of Maintenance Do I Need to Do on My Windows PC.

What kind of maintenance do I need to do on my Ubuntu/Debian/APT-based PC? Good question. It is pretty simple … for desktops. This article is for APT-based desktop system maintenance, NOT for Linux servers. Linux servers need just a little more love to stay happy. I haven’t used RPM-based distros in many years, so I’m not comfortable providing commands to accomplish the things you need to do, but the methods will be similar.

Let’s get started.

Install System and Application Patches/Updates

This will patch the OS and all your applications.

$ sudo apt update; sudo apt full-upgrade

Done.

Don’t worry. This only updates the current release to new package versions and new kernels. It will not install a new release. If you need to stay on the current kernel, use sudo apt-get upgrade instead; it never installs new packages, and new kernels arrive as new packages. I’ve needed this only a few times in 15+ yrs of being a Linux administrator.

The apt manpage is pretty good and explains the subtle differences between the upgrade and full-upgrade options. man apt will show it.

Read about more tips below.

New Multi-Boot Loader for USB Drives 1

Posted by JD 05/31/2011 at 22:00

The folks over at PenDriveLinux have been busy. They have a new version of their multi-boot creation tool for flash drives, YUMI (Your Universal Multiboot Installer). YUMI-0.0.1.7.exe is the current released version, replacing MultibootISO.

The MultibootISO tool never worked for me. I was using unetbootin to load a single ISO onto a single flash drive, but often I’ve needed gparted, then DBAN, then PARTIMG, then a full Linux like Ubuntu 10.04 or Puppy or TinyCore. With YUMI, you can have all of those on a single flash drive and select which to use at boot time. It seems to work fine.

They finally added an Unknown ISO option so ANY ISO you have with a distro can be added to the boot menus. The boot-up screens are automatically organized nicely by type of tool.

I just placed about 5 ISO files onto a single 2GB flash drive. As I write this, Android-x86 is booting on a netbook. SWEET! I can’t wait to try it out for an hour or so before trying out the new MeeGo x86 release. As long-time readers know, I run Maemo today, so MeeGo would be the next update for that device.

Well, I’ve attempted to boot 3 different OSes.

  1. MeeGo failed almost immediately.
  2. Lubuntu displayed the boot screen, asked for a language and eventually failed.
  3. Android x86 was left to boot for over 30 minutes – the ……………. just kept coming.

The gparted ISO that I specified didn’t show up in the boot menu – my ISO differed from the supported one at the 3rd decimal point of the version; mine was newer. I probably should have put it into the Unknown ISO group.

Some Good News

SpinRite did work perfectly. It is running now across all the partitions to refresh any lazy bits.
I moved the gparted ISO into the Unknown ISO group. Hopefully, it will work better there.

Optical Data Recovery Technique with ddrescue and par2

Posted by JD 06/12/2011 at 07:00

Many of us back up important data to optical disks like CDROM or DVD media. Over time, that media is known to fail. This means that every 5-10 years, a plan to migrate all the critical data to newer media is needed. It also means that when data is stored on this type of media, steps should be taken to protect the data. Recently, I needed to pull some data, old family movies, from a DVD. The movies were stored as xvid/mp3 data inside an AVI container. Anyway, after I copied the disk to a network drive, the movie began playing, then abruptly stopped about 2 minutes into the hour-long movie. I have other copies on other media … somewhere, but this was a good opportunity to try a contingency plan that I’ve been using for at least 10 years.
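A minimal sketch of the kind of plan the title names, assuming the drive shows up as /dev/sr0 and that par2 parity files were created when the disk was burned; the file names here are illustrative:

$ par2 create -r10 movies.par2 movie.avi                  # at burn time: 10% recovery data, stored with the file
$ ddrescue -n -b 2048 /dev/sr0 movie.iso movie.map        # recovery pass 1: grab everything easily readable
$ ddrescue -d -r 3 -b 2048 /dev/sr0 movie.iso movie.map   # pass 2: retry the bad sectors directly
$ par2 repair movies.par2                                 # rebuild any damaged blocks from the parity data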

Read more below.

Gparted Empty Partition Table 1

Posted by JD 06/07/2011 at 04:02

Today I wanted to add another OS to a netbook, an Asus Eee. My common practice is to boot a gparted ISO from a USB flash drive, move some data and partitions around and add a new logical partition to the end of the extended partition space. Write everything back out to disk. Then I’d boot the install disk/ISO and install to that newly created partition. Life was good, usually.

Today, I was greeted with gparted showing unallocated for the entire drive, all 160GB – unallocated. Ouch. This is the first time I’ve had partition table issues, ever, in over 20 yrs.
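One cheap insurance step worth taking before any repartitioning session is to dump the partition table to a text file first; sfdisk can restore it later if the table ever vanishes like this again (device name illustrative):

$ sudo sfdisk -d /dev/sda > sda-parts.txt   # dump the partition table as text
$ sudo sfdisk /dev/sda < sda-parts.txt      # restore it if the table is ever lost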