Software RAID - Migration 3

Posted by JD 02/12/2010 at 19:34

Today I migrated a RAID5 external array from 1 server to my newly built Core i5 server. I specifically chose to use software RAID when I installed it in 2007 just so this migration would be possible and easy.

Basically, everything went as planned with only one small issue.

The Plan

My plan was to swap all disks (OS, mirror, and the external array) from a Core 2 Duo E6600 machine to a Core i5 machine. In theory, this would retain everything about my settings, OS, and data with no data loss. I planned to swap the video cards and completely remove a Promise FastTrak TX4310 RAID controller from both machines. It may be easier to think of this as a motherboard, RAM, and CPU swap. Of course, I backed up all array data, just in case I had to rebuild the software array with mdadm --create. That was about 1TB of data.
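
Had the array not come back on its own, the fallback would have looked something like this (a sketch only; /dev/md0, the 4-disk layout, and the member drive names are assumptions, since the new board enumerates the disks differently):

# First, try a non-destructive re-assembly from the existing metadata
sudo mdadm --assemble --scan
# Worst case: recreate the array (wipes the old metadata) and restore from backup.
# Assuming a 4-disk RAID5 array; device names are hypothetical.
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde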

The Result

So the disk swap worked. All 6 disks were swapped, the machine was booted, and the OS came up. ZERO, yes, ZERO issues. The boot drive was found. The backup drive and mount points were found and mounted. The software array controlled by mdadm was found, assembled, mounted, and is running. I didn’t have to do a thing. No change to any configuration. No special SATA cable placement was needed; the first cable locations I tried worked perfectly. I wasn’t careless about the SATA cables, but the drives are definitely not on the same ports or device names in the new machine either.
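
For the record, a couple of commands confirm what happened automatically (the /dev/md0 name is an assumption; check /proc/mdstat for yours):

cat /proc/mdstat                 # shows the assembled array and its member disks
sudo mdadm --detail /dev/md0     # detailed array status; md0 is an assumed name
df -h                            # confirms the filesystem is mounted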

Since I swapped the video cards, I removed the nVidia-specific graphics driver from the X/Windows config file. The default driver will have to do.
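
For anyone doing the same, the edit amounts to changing the Driver line in the Device section of /etc/X11/xorg.conf, roughly like this (a sketch only; the identifier and driver names vary per system):

Section "Device"
    Identifier "Card0"
    # Driver   "nvidia"   (removed along with the nVidia card)
    # Either specify a generic driver or omit the line and let X autodetect:
    Driver     "vesa"
EndSection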

However, the graphics card swap was less than perfect. I took the nVidia 7600 GS PCIx out and put the S3-ViRGE PCI card from 2002 into the machine. That worked as well as should be expected. Then I tried to run X/Windows and it came up, but with 640×480 resolution. I know the S3 card supports 1280×1024 so I was off to find a driver/method to make that happen. My first attempt prevented X from running again. Normally, that wouldn’t be a problem, I’d just go to another machine and google away. This week, my laptop became mostly dead and the machine I’m working on is my backup desktop, er … with no graphical environment.

Another issue was that the network cards were seen as new by the OS. At boot, it created eth2 and eth3 rather than reusing eth0 and eth1. Just to get up and on the network, I edited /etc/network/interfaces and replaced eth0 with eth2 (and eth1 with eth3) everywhere. Ran
sudo /etc/init.d/networking restart
and the network was working perfectly again. I’ll want to come back and convince the OS to forget eth0 and eth1 in the future. I believe that happens in the /etc/udev directory.
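
Note to self: on Ubuntu releases of this era, the remembered NIC names live in a udev rules file; cleaning it out and rebooting should bring the new cards back as eth0/eth1 (verify the file name on your release before editing):

# Stale MAC-to-name mappings are kept here
cat /etc/udev/rules.d/70-persistent-net.rules
# Delete the old eth0/eth1 lines (or the whole file), then reboot;
# udev regenerates the file with the new cards' MAC addresses
sudo rm /etc/udev/rules.d/70-persistent-net.rules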

Solved

In order to research solutions, I needed a web browser. To run a web browser, I needed a graphical environment, X/Windows. X wasn’t starting. The solution: swap the video cards back. Start X; life is good. Web research, email, and work can begin again. It worked well enough that I’ve decided to retire the S3-ViRGE card and purchase as cheap a PCIx video card as can be had, when it stops snowing here.

Summary

Software RAID migration worked better than expected. No data was lost and no special setup or other configuration modifications were needed. That UUID stuff for disk drives is why this all worked with no issues. I’m certain. The small hassle it adds to /etc/fstab is well worth it.
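
For anyone curious what the UUID stuff looks like in practice, blkid prints each filesystem's UUID and /etc/fstab refers to that instead of a device name, so it doesn't matter which SATA port a drive lands on (the UUID and mount point below are made up):

sudo blkid /dev/md0    # prints something like UUID="3e6be9de-..." TYPE="jfs"
# /etc/fstab entry using the UUID instead of /dev/sdX
UUID=3e6be9de-0000-4c1c-9f27-abcdef012345  /mnt/array  jfs  defaults  0  2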

Next step, load VirtualBox so I can have my WinXP and normal xubuntu desktops available.

Comments


  1. JD 02/13/2010 at 09:23

    As I begin daily use of this new computer while it still runs all the old software (migrated over 3+ systems since 1998), I noticed it has 32-bit Ubuntu. See, it has 8GB of RAM and I’d like to use all of it. The easiest way to use all 8GB is a 64-bit OS. I run 64-bit Linux on 2 other machines and 10+ VMs, so that isn’t really the issue holding me back.

    I’d really like to convert the physical system into a VM, then wipe it and load Ubuntu 10.04 x64 (perhaps 9.10 x64 server while I await the next LTS release) with virtualization. I’m concerned that I’ll lose important processes and settings that I’ve forgotten. Sure, I’ll back up everything (about 2TB) before moving forward, but what about those things that just work and have been working quietly for years? Things like ddclient, no-ip2, and parts of my internal Apache web sites. I can recall those three, but I’m worried about losing the things that I don’t recall. Some of them may be keeping my network on the internet. There have been so many tweaks over the years that I simply don’t recall them all anymore.

    I did create an Infra VM last year, but I haven’t migrated any of these long working services into it out of fear and storage complexity. See, some of the services use almost 1TB of storage. It doesn’t make sense to place that into a VM. Rather, I need to allow access to the storage from the VM using iSCSI or NFS or some other method.
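
    A minimal sketch of the NFS approach, assuming the host exports the big storage to the Infra VM (the path and addresses are made up, and it needs nfs-kernel-server on the host plus nfs-common in the VM):

    # On the host, add an /etc/exports line for the VM's network, then reload exports:
    # /srv/array  192.168.1.0/24(rw,sync,no_subtree_check)
    sudo exportfs -ra
    # Inside the VM, mount the share (host address is hypothetical)
    sudo mount -t nfs 192.168.1.10:/srv/array /mnt/array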

    The main fear is that I’ll get to the point of shutting down the old system and discover I’ve forgotten something. Anything. That would be bad.

    There is much to accomplish.

  2. JD 04/22/2010 at 19:25

    Just to complete this migration post, I need to mention that I migrated the infrastructure services to a VM on a different server, then installed Ubuntu 9.10 x64, and everything is working. Over the following week or so, I installed other apps as necessary.

    That all happened on 3/6/2010, about 6 weeks ago. It was a non-event. The fresh installation has been extremely stable and has become my daily desktop machine while also allowing me to test KVM and a number of different VM settings. I loaded just the plain server version with an ssh server initially. Then I added the xubuntu-desktop package.
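
    For reference, the desktop piece was roughly just this (package name as it appears in the Ubuntu repositories):

    sudo apt-get update
    sudo apt-get install xubuntu-desktop   # adds the Xfce desktop on top of the server install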

    All is good here.

  3. JD 10/22/2010 at 10:30

    So it is 8 months later and this box has been a workhorse with ZERO issues. The array is still going – no problems.

    The new video card (nVidia 240) was put into a Win7 box and an old nVidia 7600GS was installed into this machine. It drives dual monitors just fine. Ubuntu Server 10.04 LTS with LXDE was installed about a week after the April release; it has been rock solid except when VirtualBox or the X/Windows drivers crashed. Since I stopped using vbox on this machine, it has been up except when I need to reboot for a kernel update.

    The disk array was purchased in Aug 2007 and has been running all that time as software RAID under Linux. That’s over 3 yrs, and the only problems were from cable vibration inside the array during the first few months. Since placing some old carpet remnants under all the equipment on my rack (bread rack), vibration has been minimized and the noise is much lower.

    Migrations from Ubuntu 8.04 to 9.10 to 10.04 have been non-events. mdadm just works. Even switching between x64 and x32 and back to x64 didn’t matter.

    I’m very pleased. Software RAID with mdadm rocks. BTW, I’m still using JFS on the array.

    Disk Upgrades

    I’ve been thinking about swapping the 320GB drives out for 1.5TB drives. Everyone needs more disk storage, right? That raises the question: how will I back up all this storage? We all know that RAID is not a backup. Well, if I partition the 1.5TB physical drives into 330GB slices, then create multiple RAID5 arrays spread across them, each RAID array would be about 1TB, easily backed up onto a single 1 or 1.5TB disk. Further, I wouldn’t need to expand the physical backup storage immediately. Filling the other 3.5+TB of storage would take some time (years?), and 2TB disks are already out, perfect for backups (not proven enough to be used as primary storage, IMHO). Unless I can solve the backup problem, I don’t want the extra primary storage, period. Storage without backups is crazy.

    I really should draw a diagram to help explain this; for now, a rough sketch of the commands is below.
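
    Something like this is what I have in mind (a sketch only; the drive count, device names, and array numbers are all assumptions):

    # Split each 1.5TB drive into ~330GB partitions (sdb..sde are example names),
    # then build one RAID5 array per partition "row" across the drives:
    sudo mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
    sudo mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2
    # ...repeat for the remaining partitions; each md array ends up around 1TB usable,
    # small enough to back up onto a single 1 or 1.5TB disk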