Ubuntu Hardy Depots Missing?

Posted by JD 12/29/2009 at 10:09

Err http://ppa.launchpad.net hardy/main Packages
  404 Not Found
Err http://ppa.launchpad.net hardy/universe Packages
  404 Not Found
W: Failed to fetch http://ppa.launchpad.net/madman2k/ubuntu/dists/hardy/main/binary-i386/Packages.gz  404 Not Found

W: Failed to fetch http://ppa.launchpad.net/madman2k/ubuntu/dists/hardy/universe/binary-i386/Packages.gz  404 Not Found

E: Some index files failed to download, they have been ignored, or old ones used instead.

Ouch.

I knew hardy support would eventually go away, but not before the next LTS release, which isn’t scheduled for another 4 months.

Fixed, 3 days later…
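
If you hit the same thing, one way to quiet apt while a PPA is dead is to comment out its entries. A minimal sketch, assuming the PPA lines were added to /etc/apt/sources.list by hand (adjust the path if yours live under /etc/apt/sources.list.d/):

# Comment out the dead madman2k PPA lines, then refresh the package lists.
sudo sed -i 's|^deb.*ppa\.launchpad\.net/madman2k.*|# &|' /etc/apt/sources.list
sudo apt-get update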

Virtualization Survey, an Overview

Posted by JD 12/22/2009 at 20:40

Sadly, the question of which virtualization platform is best for Linux isn’t an easy one to answer. Many different factors go into the decision. While I cannot answer it for you, since your needs and mine are different, I can provide a little background on what I chose and why. We won’t discuss why you should be running virtualization or which specific OSes to run. You already know why.

Key things that go into my answer

  1. I’m not new to UNIX. I’ve been using UNIX since 1992.
  2. I don’t need a GUI. Actually, I don’t want a GUI and the overhead that it demands.
  3. I would prefer to pay for support, when I need it, but not be forced to pay to do things we all need to accomplish – backups for example.
  4. My client OSes won’t be Windows. They will probably be the same OS as the hypervisor hosting them. There are some efficiencies in doing this like reduced virtualization overhead.
  5. I try to avoid Microsoft solutions. They often come with additional requirements that, in turn, come with more requirements. Soon, you’re running MS-ActiveDirectory, MS-Sharepoint, MS-SQL, and lots of MS-Windows Servers. With that come the MS-CALs. No thanks.
  6. We’re running servers, not desktops. Virtualization for desktops implies some other needs (sound, graphics acceleration, USB).
  7. Finally, we’ll be using Intel Core 2 Duo or better CPUs with VT-x support enabled and 8GB+ of RAM (a quick check follows this list). AMD makes fine CPUs too, but during our recent upgrade cycle, Intel had the better price/performance ratio.
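
For item 7, a quick way to confirm the virtualization extensions are visible to the OS is to count the CPU flags; a non-zero count means VT-x (vmx) or AMD-V (svm) is exposed, though the BIOS toggle still matters:

egrep -c '(vmx|svm)' /proc/cpuinfo   # 0 means no hardware virtualization flags are visible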

Major Virtualization Choices

  1. VMware ESXi 4 (don’t bother with 3.x at this point)
  2. Sun VirtualBox
  3. KVM as provided by RedHat or Ubuntu
  4. Xen as provided by Ubuntu

I currently run all of these except KVM, so I think I can say which I prefer and which is proven.

ESXi 4.x

I run this on a test server just to gain knowledge. I’ve considered becoming VMware Certified and may still get certified, which is odd, since I don’t believe many mainstream certifications mean much, except CISSP, VMware, Oracle DBA and Cisco. I dislike that VMware has disabled things that used to work in prior versions to encourage full ESX deployments over the free ESXi. Backups at the hypervisor level, for example. I’ve been using some version of VMware for about 5 years.

One negative: VMware can be picky about which hardware it will support, so always check the approved hardware list. Almost no desktop motherboard has a supported network card, and VMware may not like the disk controller either, so plan on spending another $30-$200 on networking.

ESXi is rock solid. No crashes, ever. There are many very large customers running thousands of VMware ESX server hosts.

Sun VirtualBox

I run this on my laptop because it is the easiest hypervisor to use. Also, since it targets desktops, it includes USB pass-thru capabilities. That’s a good thing; unfortunately, it is also the least stable hypervisor that I use. That system locks up about once a month for no apparent reason, which is unacceptable for a server under any conditions. The host OS is Windows7 x64, so that could be the stability issue. I do not play around on this Windows7 machine; the host OS is used almost exclusively as a platform for running VirtualBox and very little else.

Until VirtualBox gains stability, it isn’t suitable for use on servers, IMHO.

Xen (Ubuntu patches)

I run this on 2 servers, each hosting about 6 Linux client systems. During system updates, another 6 systems can be spawned as part of the backout plan or for testing new versions of things. I built the systems over the last few years using carefully selected name-brand parts. I don’t use HVM mode, so by running the same paravirtualized kernel each VM gets about 97% of native hardware performance.

There are downsides to Xen.

  1. Whenever the Xen kernel gets updated, it is a big deal, requiring the hypervisor to be rebooted. In fact, I’ve had to reboot the hypervisor 3 times after a single kernel update before all the clients came back up cleanly. Now I plan for that.
  2. Kernel modules have to be manually copied into each VM, which isn’t a big deal, but does have to be done (see the sketch after this list).
  3. I don’t use a GUI; that’s my preference. If you aren’t experienced with UNIX, you’ll want to find a GUI to help create, configure and manage Xen infrastructure. I have a few scripts – vm_create, kernel_update, and lots of chained backup scripts to get the work done.
  4. You’ll need to roll your own backup method. There are many, many, many, many options. If you’re having trouble determining which hypervisor to use, you don’t have a chance of determining the best backup method. I’ve discussed backup options extensively on this blog.
  5. No USB pass-thru that I’m aware of. Do you know differently?
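
For item 2, here is a minimal sketch of what copying modules into a guest looks like on the dom0 after a kernel update; the kernel version, volume name and mount point are examples, not the actual values from my servers:

KVER=2.6.24-28-xen          # example: the newly installed Xen kernel version
DOMU_MNT=/mnt/domu1         # example mount point for the guest's root filesystem

# Make the matching modules available inside the guest before booting it on the new kernel.
sudo mount /dev/vg0/domu1-root "$DOMU_MNT"
sudo rsync -a /lib/modules/"$KVER"/ "$DOMU_MNT/lib/modules/$KVER/"
sudo umount "$DOMU_MNT"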

I’ve only had 1 crash after a kernel update with Xen and that was over 8 months ago. I can’t rule out cockpit error.
Xen is what Amazon EC2 uses. They have millions of VMs. Now, that’s what I call scalability. This knowledge weighed heavily on my decision.

KVM

I don’t know much about KVM. I do know that both RedHat and Ubuntu are migrating to KVM as the default virtualization hypervisor on their servers now that the KVM code has been added to the Linux kernel. Canonical’s 10.04 LTS release will also include an API 100% compatible with Amazon’s EC2 API, binary compatible VM images, and VM cluster management. If I were deploying new servers today, I’d at least try the 9.10 Server release and these capabilities. Since we run production servers on Xen, I don’t see us migrating until KVM and the specific version of Ubuntu required are supported by those apps.
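
If you just want to kick the tires on KVM under Ubuntu, something like the following is roughly what I’d try first; package names vary a bit between releases, so treat this as a sketch rather than exact instructions for any particular version:

sudo apt-get install qemu-kvm libvirt-bin virtinst   # KVM, libvirt and the CLI installer
sudo virsh -c qemu:///system list --all              # confirm libvirt can talk to KVM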

Did I miss any important concerns?

It is unlikely that your key things match mine. Let me know in the comments.

Solved - Change Windows7 Window Border Thickness and X-Mouse

Posted by JD 12/21/2009 at 21:42

Windows7 is an improvement over other versions in many ways, except they decided to waste too much screen space with pretty, thick borders by default. Additionally, the window title bar seems to be 2x larger than under XP. No thanks.

It has bothered me for a few months, but not enough to search and try a few things until today. Even with the changes, the window borders and title bar are still too thick for my tastes, but at least it is a little better.

The settings can be found in the Window Color and Appearance settings of Windows7. The Items to change are:

  1. Active Title Bar – size 17 is the smallest it will accept
  2. Border Padding – size 0 is the smallest setting
  3. Active Window Border – size 1 is the smallest

You can also make scroll bars thinner, if you like. Initially, I went too thin and had to make them a little larger so usability wasn’t completely lost. Here’s the resulting border. Sadly, there is still way too much wasted space in my opinion.
Windows 7 Border Example
I have no use for the part of the window with the Organize or Include in Library stuff. That entire menu is worthless to me. Let me know if you know how to remove that section. Please.

If you miss X-Mouse from Tweak-UI in the PowerToys, here’s a solution to have the focus follow the mouse. I like option 3 and have pulled the regedit file down. Unix people will appreciate this. It is nice to have the active window not necessarily pulled to the foreground just because it is active.

Ubuntu 10.04 Photo Management - Looking Ahead

Posted by JD 12/21/2009 at 10:33

You may not have heard that the Ubuntu guys are planning to remove The Gimp from the default desktop installs in the next LTS release of Ubuntu. Good. The Gimp is very capable, but I’ve never found a use for it. Never. It is too complex for my rotate, crop, remove red-eye needs.

There are a few excellent options, but it seems most of them have issues for me. I like a lite desktop – no Gnome, no KDE, so anything that requires those libraries is to be avoided. The only thing worse is to include Mono. Mono is an FOSS implementation of Microsoft’s .NET libraries.

I generally deal with photos very little, unless I’m using scripts to attach GPS lat/lon to the EXIF data in the files or rotate them. Recently, I installed digiKam, a KDE app, but only because it made attaching GPS EXIF data easier. I avoid using it.
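
For the curious, that sort of batch work can be scripted with plain command line tools; exiftool and jpegtran are what I’d reach for, and the coordinates and filenames here are just examples:

# Stamp GPS coordinates into the EXIF data (example coordinates):
exiftool -GPSLatitude=33.7490 -GPSLatitudeRef=N \
         -GPSLongitude=84.3880 -GPSLongitudeRef=W photo.jpg

# Lossless 90-degree rotation, keeping all metadata:
jpegtran -rotate 90 -copy all photo.jpg > photo_rot.jpg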

Well, I came across an article concerning 3 Gimp replacements that got me thinking. That link was really, really slow for me too. Of the choices, only 1, f-spot, is in the default repositories for my LTS distribution. For fun, I did an install; here’s the dependency data.


$ sudo apt-get install f-spot
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following extra packages will be installed:
cli-common libart2.0-cil libflickrnet2.1.5-cil libgconf2.0-cil
libglade2.0-cil libglib2.0-cil libgnome-vfs2.0-cil libgnome2.0-cil
libgtk2.0-cil libgtkhtml3.14-19 libgtkhtml3.16-cil libmono-addins-gui0.2-cil
libmono-addins0.2-cil libmono-cairo1.0-cil libmono-cairo2.0-cil
libmono-corlib1.0-cil libmono-corlib2.0-cil libmono-data-tds1.0-cil
libmono-data-tds2.0-cil libmono-security1.0-cil libmono-security2.0-cil
libmono-sharpzip0.84-cil libmono-sharpzip2.84-cil libmono-sqlite2.0-cil
libmono-system-data1.0-cil libmono-system-data2.0-cil
libmono-system-web1.0-cil libmono-system-web2.0-cil libmono-system1.0-cil
libmono-system2.0-cil libmono0 libmono1.0-cil libmono2.0-cil
libndesk-dbus-glib1.0-cil libndesk-dbus1.0-cil mono-common mono-gac mono-jit
mono-runtime sqlite
Suggested packages:
monodoc-gtk2.0-manual libgtkhtml3.14-dbg libmono-winforms2.0-cil libgdiplus
libmono-winforms1.0-cil sqlite-doc
Recommended packages:
dcraw libmono-i18n1.0-cil libmono-i18n2.0-cil

WOW! That’s a bunch of crap to be forced to load for 1 app that I’ll use perhaps once a month. No thanks. Further, the last package, sqlite, is concerning, since I use the sqlite3 package all the time. Forcing an older package – boo.
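
If you want to see what an install would drag in before committing, apt can do a dry run; this is how I’d check the next candidate:

apt-get -s install f-spot | grep ^Inst   # -s simulates; nothing is downloaded or installed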

I hope the Ubuntu guys keep bloat in mind, since that is what they are trying to get away from by dropping The Gimp in the first place. Some people like iTunes and others like the original WinAMP. I’m in the latter group. Keep it simple, please.

SysUsage 3.0 Installation Steps

Posted by JD 12/16/2009 at 15:34

We’ve been using SysUsage to monitor general performance of our Linux servers for a few years. Version 3 was released recently with a new web GUI and simpler installation, but not quite the trivial apt-get install that we’d all love. View a demo.

Anyway, go grab a copy of the source tgz and follow along.

 tar zxvf SysUsage-Sar-3.0.tar.gz
 cd Sys*0
 sudo apt-get install sysstat rrdtool librrds-perl
 perl Makefile.PL
 make
 sudo make install
 sudo crontab -e

Drop these lines into the root crontab.


*/1 * * * * /usr/local/sysusage/bin/sysusage > /dev/null 2>&1
*/5 * * * * /usr/local/sysusage/bin/sysusagegraph > /dev/null 2>&1

I performed these steps using Cluster SSH on almost all our Ubuntu 8.04.x servers; each installation worked.

If you have Apache running in the normal place, browse over to http://localhost/sysusage/ to see the results.
If you don’t run a web server, try firefox /var/www/htdocs/sysusage/index.html instead.

Further, our simple rsync-over-ssh scripts that pull the SysUsage output back to a central performance server are still working. Some of the old data from v2.12 of the program is still inside the RRD files; it isn’t clear at this point whether that data will be used in the new graphs or not. It takes about a day for the graphs to become useful.
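
The pull itself is nothing fancy; a minimal sketch of an rsync-over-ssh pull like ours, with the hostname and paths as placeholders rather than our real ones:

# Run from cron on the central performance server, once per monitored host.
rsync -az -e ssh backup@web01.example.com:/var/www/sysusage/ /srv/perf/web01/sysusage/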

Trivial Lifehacker Profile Monitoring Script

Posted by JD 12/16/2009 at 09:25

Lifehacker is a site that many of us watch daily for tips. If you join the community, you may find that the LH site can sometimes become … er … slow. The cause of this can be many things, but personally, I think they’ve gone overboard with all the javascript.

Someone mentioned he had written a real-time notification script to tell him about replies to his posts, and he asked others what they would like the script to do. I started thinking and determined it would be a fairly trivial script to do what I wanted – email a list of replies in simple HTML.

As with all scripts, they are never really done and prone to tweaks for the next year. I think the next tweak will be to try the feed.xml that LH provides. Perhaps it will be smaller, faster and easier to parse?

So I attempted to attach the script to this article using tools built into the blogging system. Both attempts failed.

  1. Upload – so RSS feed readers see an attachment link – no joy
  2. Excerpt – I’ve used this in previous versions of the blog successfully. No more. The issue is probably due to my theme.

Anyway, here’s the code


#!/usr/bin/perl
# #####################################################
# Display Lifehacker Profile data
#  - Recent Replies (first pg only)
#  - Followers
#  - Friends
#
# Known Linux Dependencies:
#  - perl and LWP and Getopt modules
#  - sudo cpan -i LWP::Simple
#
# Installation
#  1) Change the Your-LH-Profile-name-Here below to yours
#  2) chmod +x lh_profile_monitor.cgi
#  3a) Either run as is:  $0 > output.html
#      Or
#  3b) Setup as a CGI on your web server
#      Or
#  3c) Setup as a crontab entry which will automatically email the results
#
# $Id: lh_profile_monitor.cgi,v 1.5 2009/12/16 14:32:55 jdfsdp Exp jdfsdp $
#
# #####################################################
use strict;
use LWP::Simple;
use Getopt::Long;

# #####################################################
sub life_hacker_data();

# #####################################################
my $profile_name="Your-LH-Profile-name-Here";
my $doHelp   = 0;
my $download = 1;
my $profile_page="http://lifehacker.com/people/$profile_name/";
# my $profile_page="http://lifehacker.com/people/$profile_name/feed.xml";
my $date=`date +%r`;
my $ret=GetOptions ("help|?"     => \$doHelp,
                    "profile=s"  => \$profile_name,
                    "download=i" => \$download);

# NOTE: the HTML literals below were mangled when this script was pasted into
# the blog; the tags shown are a best-effort reconstruction.
my $html_header = "Content-type: text/html\n\n
<html>
<head>
<meta http-equiv=\"refresh\" CONTENT=\"600\">
<title>$profile_name-Recent LifeHacker</title>
</head>
<body>
";
my $html_footer = "</body></html>";

# ########################################################
sub Usage()
{
    print "\nUsage:
    $0 [-download 0/1] -profile LifeHacker_Profile\n";
    exit 1;
}

# #####################################################
# main()
# The output is a filtered html file to stdout
#
# Grab the new page
Usage() if ( $doHelp );

# `/usr/local/bin/curl $profile_page > $profile_tmp_file` if ($download);
my $content = "";
$content = get( $profile_page ) if ($download);
my @lh_content = split(/\n/, $content);

print "$html_header\n";
print "<h3>List of Recent Replied to Messages</h3>
Pulled from <a href=\"$profile_page\">$profile_page</a> at $date
<ol>
";
my $ret=life_hacker_data();
print "$ret\n";
print "</ol>\n$html_footer\n\n";

# #####################################################
sub life_hacker_data()
{
    # Display the total count of
    #  1. Friends
    #  2. Followers
    #  3. Msgs with replies
    my $ret="";
    foreach (@lh_content){
        if (m#/friends/$profile_name|/followers/$profile_name|replied# ){
            $ret .= $_;
        }
    }
    # The HTML in these substitutions was also mangled in the original post;
    # the list-item markup is an approximation.
    $ret =~ s#<br\s*/?>#<li>#g;
    $ret =~ s#href="/friends#href="http://lifehacker.com/friends#g;
    $ret =~ s#href="/followers#href="http://lifehacker.com/followers#g;
    $ret =~ s#^[.]Click here to view all[.]$##ig; # Remove excess lines
    $ret =~ s#view all##ig;                       # Remove excess lines
    $ret =~ s#»##ig;                              # Remove excess lines
    return $ret;
}

WPA Passphrase Cracking for Sale $40

Posted by JD 12/16/2009 at 09:01

Saw an article today that someone has decided to sell a WPA passphrase cracking service for about $40. It can take about 40 minutes, but on average only 20. Seems he has a Beowulf compute cluster with idle time.

If this is what some guy can do, imagine what different governments can do.

Once again, consumer-grade WiFi has to be considered insecure. Go wired if you care at all. There is no secure wifi/radio networking, none.

Big Server OS Installs Are a Problem

Posted by JD 12/15/2009 at 08:27

Many companies don’t really consider the bloating of server operating systems a real problem to be addressed. This is wrong, because as soon as you write any data to disk, you’ve just signed up your company to safeguard that data multiple times (3-90) for the next 3-5 years, if not longer.

How did I come up with this?

Assumptions – hopefully realistic for your situation

  • Windows 2008 Server – 20GB installation for the OS only (MS says 32GB of disk is the minimum)
  • Data is stored on a SAN, so we will ignore it. The size of data isn’t the issue in this article.
  • Compressed and incremental backups are performed with 30 days retained.
  • At least 1 copy is maintained off site for DR

Breakdown of backup disk use

  • Install image – 20GB of storage
  • OS Backup – 20GB of storage
  • Off site Backup – 20GB of storage
  • 2 extra copies of backup – 40GB of storage

Total is 100GB of storage media for a single Windows 2008 Server install. Not all that bad, really. Then consider that even a small business probably has 5 servers, and that becomes 500GB of storage. Still not so bad. Heck, your DR plan is just to copy the last backup to an external drive and take it home every Friday. Good enough.

Now imagine you have 50, 100, 1,000 or 20,000 of those 5-server installations. Now it gets tougher to deal with. Those simple backups become 25TB, 50TB, 500TB and 10PB of storage, and you haven’t got anything but the OS backed up – no data.

Alternatives?

  1. Data deduplication on archive storage frames
  2. Fixed OS images – if they are all the same, you only need 1 backup
  3. Use a smaller OS image

Data Deduplication

Data deduplication has been an expensive option that small companies with normal data requirements wouldn’t deploy due to cost, complexity and a lack of in-house skills. This is about to change with the newest Sun ZFS that should be out early 2010. It is already available in OpenSolaris, if you want to get started with trials. I’ve seen demonstrations with 90% OS deduplication. That means for every added server OS install, you only add 10% more to be backed up. Obviously, this will go up whenever a new OS or patch deployment rolls out over weeks and months, but this solution is compelling and will easily pay for itself with any non-trivial server infrastructure.
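
Turning it on in ZFS is a one-liner once you are on a build that supports it; a minimal sketch, assuming a pool named tank with a filesystem holding OS images:

zfs set dedup=on tank/os-images   # new writes are deduplicated from this point on
zpool get dedupratio tank         # check how much you are actually saving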

Fixed OS Images

This is always a good idea, but with the way that MS-Windows performs installations, files are written all over the place and registry entries are best applied only by installation tools. Configuration on Windows tends to be point and click, which can’t be scripted effectively.

On UNIX-like operating systems, a base image can be installed, application installation scripted, and the overall configuration settings scripted too. There are a number of tools that make this easy, like Puppet, which is FOSS.
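
As a trivial illustration of what scripted configuration means here, this is the shape of a post-install script you might run on every server cloned from a base image; the packages and files are made-up examples, not anything from our environment:

#!/bin/sh
# Hypothetical post-clone setup script; run once on each new server.
set -e
export DEBIAN_FRONTEND=noninteractive

apt-get update
apt-get install -y ntp rsync                  # packages for this server role
cp /srv/config/ntp.conf /etc/ntp.conf         # settings pulled from version control
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
/etc/init.d/ssh restart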

Use a Smaller OS

A Xen Ubuntu Linux 8.04.x VM running a complete enterprise messaging system with over a year’s worth of data is under 8GB, including 30 days of incremental backups. Other single-purpose server disk requirements are smaller, much smaller. This blog server is 2.6GB with 30 days of incremental backups. That’s almost 10x smaller than an MS-Windows server. Virtualization helps too. JeOS is a smaller Ubuntu OS install meant for virtual servers.

No Single Answer

There is no single answer to this problem. I doubt any company can run on Linux systems alone. Data deduplication is becoming more and more practical for backups, but it isn’t ready for transactional, live systems. Using fixed OS images is a best practice, but many software systems demand specialized installation and settings, which makes this solution exponentially complex.

A hybrid solution will likely be the best for the next few years, but as customers, we need to voice our concerns over this issue with every operating system provider.

Cold Backup for Alfresco

Posted by JD 12/13/2009 at 20:16

The script below was created as part of an Alfresco upgrade process and is meant to be run manually. It is a fairly trivial cold backup script for Alfresco 2.9b, which is a dead release tree from our friends at Alfresco. It hasn’t been tested with any other version and only backs up locally, but it could easily back up to a remote machine with sshfs or nfs mounts, or even with rdiff-backup commands swapped in.

For nightly backup of our production servers, we actually perform rdiff-backups of shut-down virtual machines, which take about 3 minutes each. That little amount of downtime to have a differential backup of the entire VM is worth it to us.
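
For context, that nightly VM job is roughly the following shape; this is separate from the Alfresco cold backup script that follows, and the guest name, paths and retention here are placeholders rather than the production values:

VM=blog01                                            # example Xen guest name
xm shutdown -w "$VM"                                 # cold: stop the guest and wait
rdiff-backup /xen/domains/"$VM"/ /backup/vm/"$VM"/   # differential backup of the VM files
rdiff-backup --remove-older-than 30D /backup/vm/"$VM"/
xm create /etc/xen/"$VM".cfg                         # bring the guest back up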

    #!/bin/sh
    # ###############################################################
    # This script should not be run from cron. It will wait for the mysql
    # DB password to be entered.
    # 
    #  Created by JDPFU 10/2009
    # 
    # ###############################################################
    # Alfresco Backup Script - tested with Alfresco v2.9b
    #   Gets the following files
    #    - alf_data/
    #    - Alfresco MySQL DB
    #    - Alf - Extensions
    #    - Alf - Global Settings
    # ###############################################################
    export TOP_DIR=/opt/Alfresco2.9b
    DB_NAME=alfresco_2010_8392
    export EXT_DIR=$TOP_DIR/tomcat/shared/classes/alfresco/extension
    export BACK_DIR=/backup/ALFRESCO
    export BACKX_DIR=$BACK_DIR/extension
    
    # Shutdown Alfresco
    /etc/init.d/alfresco.sh stop
    
    # Backup the DB and important files.
    # dir.root setting will change in the next version
    /usr/bin/mkdir  -p $BACK_DIR
    cd  $BACK_DIR/; 
    /usr/bin/rsync  -vv -u -a --delete --recursive --stats --progress $TOP_DIR/alf_data $BACK_DIR/
    
    echo "
      Reading root MySQL password from file
    "
    /usr/bin/mysqldump -u root \
        -p`cat ~root/bin/$DB_NAME.passwd.root` $DB_NAME | \
        /bin/gzip > $BACK_DIR/${DB_NAME}_`date +%Y%m%d`.gz
    /usr/bin/find  $BACK_DIR -type f -name "${DB_NAME}_*" -atime +60 -delete   # prune DB dumps older than 60 days
    
    /usr/bin/cp  $TOP_DIR/*sh $BACK_DIR
    /usr/bin/mkdir  -p $BACKX_DIR
    /usr/bin/rsync  -vv -u -a --delete --recursive --stats --progress  $EXT_DIR/* $BACKX_DIR/
    
    # Start Alfresco
    /etc/init.d/alfresco.sh start
    

Why a cold backup? Unless you have a really large DB, being down a few minutes isn’t really a big deal. If you can’t afford to be down, you would already be mirroring databases and failing over automatically anyway. Right?

We use a few extensions for Alfresco; that’s why we bother with the extensions/ directory.
There are many ways to make this script better. It was meant as a trivial example or starting point to show simple scripting methods while still being useful.

Customer Loyalty Communications

Posted by JD 12/13/2009 at 09:13

Over the last few years, companies have added customer loyalty programs to their marketing. Most of these fail for a number of reasons.

Which companies have the highest customer loyalty and why? Which have failed, at least for me?

Successes

Coke – People like to drink Coke everywhere in the world. When Coke changed their flavoring based on taste testing, the world cried out to bring back the old flavor, almost like an addict would. Flavored sugar water doesn’t mean much to me.

Apple – Apple fans go crazy about their products and will tell EVERYONE how great each one is. Apple products cost between 20% and 100% more than similar products that aren’t as easy to use. People are willing to pay more for that. I’m not a fan of Apple – mostly because they charge more and their fans are obnoxious.
I did get a phone call from Apple last year because someone was trying to use a credit card with my name on it to buy an iPhone and iTunes stuff. This call was from Apple, not my credit card company. I became hostile toward the nice man on the phone immediately, before I gave him a chance to explain the issue. He never wavered and was always polite and professional – without any accent in his speech. While this hasn’t changed my negative opinion of Apple product pricing, it hasn’t added any more negative thoughts either.
Apple, when will your customers be able to multi-task on an iPhone? When will they be allowed to change the battery? When will they be allowed to select from any application that can run on the device?

Google – Google does most things they do VERY WELL and doesn’t ask me directly for anything in return. They make their money by correlating all my web data, building a profile about me and selling ads around that data. Most of us don’t really know what this means and we don’t care. I avoid using Google without filtering the personal, usage and computer data they can connect to me. Further, I avoid sending email to gmail addresses.

Airlines – Delta and United FF programs. They aren’t really that useful to me anymore. I’ve used Continental and AA FF programs in the past, but never used an award ticket from them. Which FF program works best for you depends on where you live and where you travel. I have turned in some Delta points for a $1400 international ticket, which made it completely worthwhile. My United miles expired before I could use them, so I transferred them to a charity.

McDonald’s – Kids, advertising, convenience. I don’t get it at all. I haven’t eaten at McD’s in perhaps 2.5 years. The last time was an emergency because I needed something to eat, quick, on the way to a once-in-a-lifetime event. The closest restaurant to my home is a McDonald’s. I could walk there. I have never been to that store.

Twitter – You love it or you don’t care. I don’t care. Why didn’t AIM or gTalk or MSN set up interfaces with SMS texts? Maybe they did, but I just didn’t know about it?

What’s missing?

Customer loyalty needs to feel like a friend telling another friend about something great that they know is likely to be relevant to them, not just something good. My friends know the types of things I’m interested in based on prior communications. They contact me when they see something really interesting to me. When was the last time you got any great insight from a customer loyalty communication? Seriously. Most of these communications are a list of 50 things on sale and none are of interest. None. The same old marketing, like newspaper inserts. It needs to be targeted and on point for my needs.

Acura – I’ve owned two Acura vehicles and I’m mostly pleased. My interactions with most Acura dealers have been pleasant enough too. When I purchased my last Acura, my last name was misspelled on all the documents and on the title. Boo. A single attempt to correct that through Acura failed, so I gave up. When my annual registration came due, I initially tried to correct it, but that failed too. My name gets misspelled a lot, so this isn’t a big deal. At least the Acura misspelling result isn’t offensive. Every quarter, an Acura magazine arrives with stories, lifestyle articles, travel hints and offers – free Augusta National golf tickets and the like. I don’t golf, but the offer is appreciated. Some of the other deals are interesting and generally leave a favorable impression of Acura.

My next vehicle will probably be another Acura in a few years. The last purchase occurred without visiting the dealership. The papers were signed on my kitchen table on the day the vehicle was delivered to my home. That impression is hard to beat, even with the misspelled name.

TiVo – These guys are similar to Apple, except I like them. Their product works better than any alternative, but it costs more than any alternative. I dislike that a monthly plan is even offered and I wish the lifetime plans weren’t so expensive. I’ve been a TiVo owner since 2003. That same device is still working. I swapped the disk drive a few years ago to get more storage. It is about time to swap the drive again to further extend its lifetime. I don’t use any of the paid add-on options, but I do have it download free internet content like the Tekzilla and Hak5 weekly shows. Convenience rules.

Failures

Hilton Hotels – I signed up for a Hilton awards program a few years ago due to conference attendance. I tied my room reservation to it, then attended. After my visit, I checked that it was recorded to my HH program; it wasn’t, so I sent the information about my stay to the feedback link on the program site. A few days later, I started receiving emails from the hotel manager asking how my stay was. I provided good feedback and explained that the program hadn’t connected my stay with the frequent stay program ID. I attempted to connect it once more. No joy. It has been a year and it still isn’t connected. I get monthly emails from Hilton, which reminds me that they don’t follow through. Attempts to leave their email marketing list have failed too, which frustrates me even more, every month. I’m at the point where I avoid staying at Hilton Hotels or any of their 10 other brands. FAIL.

Microsoft – The two most common communications I get from Microsoft are "patch your PC" and "your antivirus is out of date." Is that really the message they want to send weekly? Microsoft has lost my trust. Every time they create something new, I immediately wonder how it will prevent me from using anyone else’s stuff or how much it will cost me. The exFAT file system is their latest push for memory cards to support large media files. I don’t understand why all the memory manufacturers don’t just use the FOSS ext2 file system instead. Oh – because Microsoft doesn’t (and won’t) support ext2. OTOH, WinXP and earlier OSes don’t support exFAT either.

Linux / Ubuntu – This isn’t really fair. Linux isn’t a company and has no advertising budget. Ubuntu doesn’t seem to have much of an advertising budget either, at least for the masses. What can Linux do better? Well, they can show 30 second clips of people using the software to solve a real problem with FOSS. It would be best if the problem highlighted something that Windows or Macs don’t do well at all. #1 – every clip should show the price, followed by the system maintenance and upgrade process (click the red triangle in the corner). Currently, failing. Yes, I know that Linux is just the kernel and that no users actually use it directly. We all use some higher level tool created by GNU or Ubuntu or Red Hat or SuSE or Mandrake or some developer in his basement.

Amazon – I shop on Amazon for price and convenience. I maintain a wish list of things to make gifts easier and as reminders for things to purchase later. I don’t think I’ve ever purchased anything recommended for me by Amazon. They know the types of things I buy, with over 200 purchases. If I bought a router, I probably don’t need another. 3 months later, I don’t need to see CAT5e cables or a switch either. I’ve had a few issues with Amazon product shipments over the years, but Amazon has always made me whole again, always. Their customer service does a good job. Their product suggestions, not so much.

Travelocity – They know where I’ve traveled, how long I’ve stayed and when I tend to go. They also know my searches for destinations. Yet they don’t send deals for those destinations, or worse, keep sending them when I’m already back home. I want international travel deals. I doubt I’ll ever take a vacation to Las Vegas or fly to Asheville, NC. STOP OFFERING THOSE DEALS, Travelocity. Offering a flight from Atlanta to Savannah is a waste of your time too. I’d end up spending more time dealing with airport garbage than a simple drive there. I’m not going to fly commercially to Savannah, ever. I’ve routinely searched for flights to Bali, Singapore, New Zealand, Australia, London, Europe, Chile, and Peru. Get the hint and target those deals, please?

My Senators – About once a year, I get an email from my senators claiming to have stopped some bill that is bad for the country. I wrote to them a few years ago about some of my concerns, which they responded to with a carefully copy/pasted paragraph about each of my concerns. Most recently, it was about the health care bill, which I’ve never written to them about. Nice. Fail.

Grocery Stores – They give small discounts for the cost of letting them see what you purchase. I’ve never had a grocery store loyalty card. My privacy is worth more than $100/yr. When my local Kroger started pushing them, I spoke with the store manager about my displeasure. He wasn’t helpful, so I stopped shopping at Kroger. Publix is a local competitor where I started shopping. They also had a discount card, but if I didn’t have one, the cashier always scanned hers so I got the discount. Kroger – FAIL, Publix – Success. I suppose manufacturers would be snail-mailing coupons to me if I had a card? That local Kroger went out of business. I doubt I had anything to do with that, but the store manager definitely did. Goodbye.

Customer Loyalty Programs

Which programs work for you and which have failed? Why?