A problem with high speed rail in the USA
There’s at least one major issue with high speed rail in the USA: once you get wherever you are going, you either need to rent a car or take a taxi to get around. That isn’t true everywhere, but even in Washington D.C. and Philadelphia, which have great subway systems, you probably still need a car to get around. I can’t speak for other locations, since I’ve never been.
If the local mass transit systems don’t serve the local people well enough to be used, why are we spending money on high speed rail lines to connect cities when airlines already perform this task cheaply?
Joe Biden’s love of rail doesn’t make it a good way to spend my tax dollars. Sorry, Joe.
I could use Amtrak to get between my hometown and relatives 7 hours away by car. I used to fly, before air travel became such a hassle.
| Mode | Duration | Cost | Issues |
|------|----------|------|--------|
| Fly | 2 hrs | $183 | Security checks |
| Amtrak | 15 hrs | $87 | Overnight train change, late-night layover |
| Greyhound | 14 hrs | $180 | All-day travel |
Is Amtrak competitive? Flying is the quickest and still cost-competitive, IMHO.
I live in a metro area with a subway and many bus lines. I’ve worked where a subway stop was below the building, yet it didn’t make any sense for me to use mass transit for my commute. I even paid about $50/month for parking. Many of the people working in and around that location did use the subway and bus systems; it worked for them. However, I know a number of people who lived and worked on a subway line yet still didn’t use the subway, because of the perceived extra cost and the inconvenience of reaching areas that aren’t served. In this town, you really have to want to use mass transit.
Why didn’t I use mass transit to commute?
- Driving commute was 35 min each way.
- Drive to bus (20 min), take bus into city (40 min) – almost double the commute time, each way.
- Occasionally, my job required me to travel to different locations in the metro area that were not served by mass transit. Perhaps 4 times a month, I’d need to drive into work and pay for daily parking at $6/day.
- Couldn’t group work and errand drives together – yes, this is a weak excuse.
- Mass transit cost – $75/month. I figured my commute costs were $80/month with the car.
- Type-A personality. I like to be in control. As I drove to/from work, I’d routinely pass the express bus going to my part of town.
- As telecommuting became more and more possible, commute costs dropped due to fewer trips, and parking costs went down as parking availability increased. I parked at 5 different parking lots over the years, following the cheapest rates. Paying for parking is just wrong.
Somali Pirates - What to do about them?
Depending on your personal beliefs and where you live, how to deal with the Somali pirates may range from killing them all, to taking away their tools, to giving them money so they don’t need/want to be pirates.
Critical Items to any Solution
- Deterrence. Any solution that doesn’t deter other Somalis from starting or continuing piracy is unacceptable.
- The risks must outweigh the rewards. Break a leg or an arm at every encounter with every pirate. Some have suggested that cutting off fingers or tongues would be a better deterrent. That would be permanent. A broken leg would cause 6 weeks of healing.
- Remove the tools of the trade every time there’s an encounter. They leave every encounter without communications, guns, GPS devices, with just enough food and fuel to get back to the mainland. Sadly, we can’t take their small boats, since they will need them to fish and return home. Unless we sink those boats and parachute drop them back home. Hummmm.
- No rewards. No payments. Period. We shouldn’t pay them for not being pirates or jail them outside Somalia or pay any ransoms. Let them keep the cargo. It appears that most of the ships taken recently carried food aid. What can they do with grain when at sea? Send in the SEALS, SAS, Dutch, French, German commandos for target practice.
- I read here that Somalia was 100% Islamic. I’d be curious to know what Islamic law says should happen to the pirates after they are inhospitable towards travelers of the sea.
Follow the pirates back home or to the mother ship and take all the guns, GPS, radios, and as many of the boats as possible away. Place all the pirates on board the smallest craft possible 20 miles from shore.
Certainly a private security firm or ten could work out a business model to provide arms and/or soldiers for the legs of a trip that pass through pirate waters. These security firms would need to be registered with the countries of origin and destination. That registration would help with any laws preventing firearms on ships in ports. Obviously, this is possible, since military ships dock in ports around the world today and they carry trained marines with guns AND rifles.
Combat pirates with multiple countries working together. There can’t be more than a few thousand actual pirates. After a while, most of them will have broken legs, no guns, no GPS, no radios and no boats with which they can continue. Every year, a few new, young pirates will take up the effort. After a few of them get broken legs or “don’t come back”, the risk will clearly outweigh the rewards and it will cease to be a problem like it is today.
Obviously, there are some issues to be resolved with this plan.
Computer Information Lawyers Need to Know
IANAL, but as an enterprise architect, here are a few key things I can think of that law offices need to add to their existing network and computer security practices.
- Truecrypt – encryption is critical. Use it on all laptops and any data transferred off site.
- par2 – parity files to ensure the data isn’t corrupted. If it’s important enough to write to a CD-R or DVD, it’s important enough to include 10% parity files (see the sketch after this list).
- Encrypt all backups at the time of backup.
- Don’t store data off-site unless it has strong encryption and the service provider doesn’t have access to the keys.
- RAID-10 – For critical data, it isn’t worth anything less. That includes backups.
- Physical security for your data and backups. Lock the server room and lock the rack access to servers and storage.
- Consider partnering with another law office to hold each others backup data securely, assuming you don’t have multiple locations 50+ miles apart.
- ssh with keys (not passwords) for file transfer of all data between the two locations (see the sketch after this list).
- VPN for all remote access. No exceptions.
- No Wifi in the office. No exceptions. Use a cable. If wifi is mandatory, so is using the VPN when you are on it.
- Set up and use HTTPS-protected web access for legal document transfers with clients. Don’t email documents unless you and the client set up GPG or PGP encryption. Your clients will appreciate this level of paranoia. Also, if all documents have to be uploaded, it’s the only way to ensure nobody accidentally sends sensitive data in an open, unencrypted email.
- Only use Blackberry remote email devices due to security concerns and require a complex password and auto lockout. Avoid iPhone, WM6x, Google Android, and smart phones as the security of those devices is harder to enforce and maintain. If the lawyers are serious about security, deploy a BES, Blackberry Enterprise Server, so there’s no question about policy control and enforcement. Avoid Android like it is THE PLAGUE if you care about privacy.
- Keep all systems that access your network patched. Be aggressive about anti-virus use. There are routers/switches that verify compliance every time a device is connected. These may be a good option in offices with 100 or fewer devices. They also VLAN off unapproved devices from the rest of your network.
- Thinking about using Cloud Computing? See what the Seyfarth Shaw law firm says about the risks.
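For the par2 and ssh-key items above, here’s a minimal sketch of what that can look like on a Linux box. The paths, the passphrase handling, and the partner-office host name are made up, so treat it as an illustration rather than a recipe.

```bash
#!/bin/bash
# Sketch only -- paths, host names, and passphrase handling are hypothetical.

# 1. Add ~10% parity before burning documents to CD-R/DVD (par2cmdline).
cd /data/case-files/2009-03
par2 create -r10 case-files.par2 *.pdf
# Later: 'par2 verify case-files.par2' and, if needed, 'par2 repair case-files.par2'

# 2. Encrypt anything that leaves the building (GPG symmetric shown here;
#    a Truecrypt container works as well).
tar czf case-files.tar.gz *.pdf case-files*.par2
gpg --symmetric --cipher-algo AES256 case-files.tar.gz    # prompts for a passphrase

# 3. Transfer to the partner office with ssh keys, never passwords.
#    One-time setup:  ssh-keygen -t rsa -b 4096   then   ssh-copy-id backup@partner-office
scp case-files.tar.gz.gpg backup@partner-office:/secure-backups/
```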
We’ve all worked with lawyers and have seen some items that perhaps could be improved. Wouldn’t you rather have a paranoid lawyer than one who is uninformed about security?
Peek Pronto Handheld Email
There’s a new competitor to Blackberry, Windows Mobile, and other smartphones available: the Peek Pronto.
It looks like the RIM 957 with color. No phone or web browser, just email and texting. This is great for companies that want their people connected, but not with a cell phone ripe for abuse.
I’m concerned when an email-only device doesn’t clearly state its security features. A lack of network and data encryption and remote wiping is discouraging. At a minimum, HTTPS, IMAPS and POP3S need to be clearly supported. A device password lock with an encrypted file system would be easy to add, IME. That way, even if the device were lost, the data on it would be protected provided the password wasn’t hacked. Of course, real security goes beyond a “password”: complex passwords, auto-locking, mandatory change periods, no password reuse, etc. are needed too.
But keeping it simple is a good thing. The Pronto seems to do this.
- Email (5 accounts)
- Texting
- View images
- View DOCs and PDFs
- No web browser
- No cell phone
Pricing as of April 2009:
- $80 for the device.
- $20/month for nationwide GSM service
There is an older device that is cheaper and has the same monthly plan cost, but it doesn’t support texting or anything other than email.
Blackberry Still Wins
Blackberry security still beats all the other handheld devices; that hasn’t changed. Windows Mobile devices win on flexibility. Both cost significantly more than the Peek Pronto.
Netbooks are becoming more and more viable to replace all these devices for those who need to get work done while on the road, not just check email.
Nokia Internet Tablets
Anyone who knows me knows that I love the Nokia N800/N810 Internet Tablets. These devices should be on any list that includes a Peek Pronto, and on any list that includes an iTouch, WM6 device, Blackberry or Netbook too. Both the N800 and the iTouch use WiFi and Bluetooth for connectivity, so no monthly data plan is required. This is a major plus.
Summary
The Peek Pronto is a low end email device that requires a monthly data plan to be useful. Security may or may not meet your requirements. We can’t tell based on the advertising.
This page was written without actually touching or seeing the device ourselves. It is based on what the getpeek website says (and doesn’t say). Without touching the device, it is impossible to determine whether the keyboard feel is good or not. That can be a critical decision factor for hand held devices.
Verify Your Backups, Please.
Step 1 – back up your data.
Step 2 – as a test, recover your data at a friend’s home or business.
The stuff you learn in step 2 is critical. We don’t back up data just to see the job complete. We intend to get that data back at some point.
- Do you have access to the encryption keys used during backup? No encryption? – WHAT!? ARE YOU CRAZY?
- Do you have enough of the backup software (or can you download it) to recover your data from bare metal, if needed?
Testing is critical to know what does and what doesn’t work. Don’t forget to fix any restore issues you uncover (a minimal restore-test sketch follows).
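As a rough illustration, here’s what a restore test might look like when the backup is a tar archive encrypted with GPG. The file names and checksum list are made up; substitute whatever your backup tool actually produces.

```bash
#!/bin/bash
# Restore-test sketch -- run it somewhere OTHER than the machine you backed up.
# File names and the checksum list are hypothetical placeholders.

mkdir -p /tmp/restore-test && cd /tmp/restore-test

# Can you even decrypt it? If you can't find the key/passphrase today,
# you won't find it at 3am after a disaster.
gpg --decrypt /media/usb/office-backup-2009-04-01.tar.gz.gpg > backup.tar.gz

# Can you unpack it and find a file you actually care about?
tar xzf backup.tar.gz
ls documents/contracts/ | head

# Spot-check integrity against checksums made at backup time (if you kept them).
md5sum -c checksums.md5 | grep -v ': OK$'
```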
Online Backups for Home and Small Business Servers
Recently, I’ve been running IT for a small business. Backups and Disaster Recovery are critical for us AND our customers. Our background is in enterprise solutions – high-end products from EMC, Sun/StorageTek, IBM, and HP are our expertise – so our knowledge of low-end solutions that don’t cost an arm AND a leg to implement was limited. OTOH, since technical architecture is our business, we know we need to do lower-cost solutions better than anyone else. Having an outage due to a system failure is unacceptable. If a disaster occurs, we need to be up and running with acceptable data loss the next day. Period. Unplanned downtime for a trivial reason simply isn’t allowed. It can’t happen.
Requirements
- Trivial Backup of data – Backups need to be easy to automate. If they aren’t completely automatic, then they won’t happen.
- Even easier restoration of data – Backup is 10% of the problem. Recovery is 90%. Recovery at 3am after a bad day and little sleep is the goal.
- Encrypted transfer – No peeking at our data, please. Strong, industry-standard encryption. Claiming FIPS compliance without saying which cipher is actually used is just scary.
- Encrypted disk storage – No peeking at our data, Mr. Service Provider. Strong, industry-standard encryption.
- Differential / incremental backups – We only want to transmit data over the internet that has changed since yesterday. This keeps bandwidth costs low after the initial backup seed.
- Selectable recovery points – 30 days’ worth of backups, so we can recover data from last night or from 3 weeks ago just as easily.
- Compression – 60% compression of data is fairly standard.
- Pay by use – Pay by the GB or TB, not by the amount of packets transferred (Sorry Amazon). 1TB needs to be in the $150/month range at most.
- Windows / Linux support – Desktop, laptop, and Server OS supported.
- Open file backup – Windows is known for keeping files open and preventing a good backup from completing. Linux will keep a few files open all the time too, but that doesn’t mean we don’t want them backed up.
- Recovery by the file, not the entire backup set.
- Full Virtual Machine backup from outside a VM. VMware and Xen and VirtualBox supported.
- Near and far backup support – Off-site backups are great, until your network connection is down. If a user just noticed they lost a file to corruption that happened last week, it needs to be easy to recover that single file from 8 days ago.
Possible Solutions
There are too many to list, but in our search, we found:
- Home made script; cron job, rdiff-backup, gzip, mcrypt, and rsync to remote location.
- Numerous backup solutions, but no low-cost solutions appear to run on both Linux and Windows-whatever. If a backup server platform can be dictated, then an optimal solution may be possible. AMANDA, Bacula, BackupPC, rdiff-backup and many others may be suitable.
- Mozy.com – part of EMC now and appears to have everything we need, except Linux support.
- Rotate USB drives connected nightly/weekly for a mirror or incremental backup, then take them home or off-site daily/weekly with the rotation.
My Answer and Why
I’ve selected the home-made script with rdiff-backup at the core. Most of our production infrastructure runs on Linux inside Xen-based virtual machines. We automatically shut down each VM nightly for a few minutes, run the rdiff-backup, and bring the machine back up. Each run requires less than 3 minutes of downtime. Very acceptable for a known-clean file system, IMHO. The output is then packaged into a single file per server, compressed, encrypted and transmitted to another local machine with protected disks. The amount of daily change data is relatively small – 10MB per server for a complete VM (OS, applications and all data). Retaining 30 days of incremental backups adds about 1GB above the compressed initial full backup size. Most of our Xen VM backups are under 2GB in size. 5 servers, 10GB, not bad? There are exceptions to these sizes. One of our server VM backups is 14GB compressed.
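Here is roughly what one night’s run looks like for a single VM. It’s a sketch, not our exact script: the domain name, paths and passphrase handling are placeholders, and it assumes the older Xen `xm` toolstack.

```bash
#!/bin/bash
# Nightly backup of one Xen VM -- illustrative sketch only.
# Domain name, paths, and passphrase handling are placeholders.

VM=webserver01
SRC=/xen/domains/$VM            # domU disk image and config live here
STORE=/backups/rdiff/$VM        # local rdiff-backup repository
STAGE=/backups/outgoing

# 1. Clean shutdown so the file system inside the VM is consistent (~3 min total).
xm shutdown -w $VM

# 2. Incremental backup of the VM's files; keep 30 days of increments.
rdiff-backup "$SRC" "$STORE"
rdiff-backup --remove-older-than 30D "$STORE"

# 3. Bring the VM right back up.
xm create /etc/xen/$VM.cfg

# 4. Package, compress, encrypt, and ship last night's state to the backup host.
TODAY=$(date +%F)
tar czf "$STAGE/$VM-$TODAY.tar.gz" -C "$STORE" .
gpg --batch --symmetric --cipher-algo AES256 \
    --passphrase-file /root/.backup-pass "$STAGE/$VM-$TODAY.tar.gz"
rsync -av -e ssh "$STAGE/$VM-$TODAY.tar.gz.gpg" backup@backuphost:/backups/$VM/
```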
Mozy would be a viable solution for Windows provided you don’t have too much data to back up. The cost really explodes since 100GB is over $100/month. Also, it doesn’t support local backups. The cost and security are well within reason for your most critical data, and Mozy is part of EMC – who else would you trust with important data? If critical data on your CxO laptops isn’t being backed up nightly, what are you thinking? Be certain the data is encrypted on the laptop as well, using Truecrypt.
Google is an Advertising Agency
Someone on /. wrote an interesting blog entry about Google blocking tethered laptop access to their G1 cell phone.
Be careful out there and keep those tinfoil hats extra shiny!
Understanding Software As A Service
Background for SaaS
SaaS, Software as a Service, is all the rage in the computing world today. Every big company is getting into it, using Web 2.0 as the initial buzzword. I read a few IT news aggregators every day and each of them constantly links to SaaS websites. You know them too. A short list:
- Google, Google Mail, Google Docs, Google Voice, Google Maps, Google-Checkout, Google-whatever
- Yahoo Mail, Yahoo Small Biz
- eBay, Paypal, RIM/Blackberry
- online stock trading
- Twitter, 30boxes, iTunes, Don’t forget the milk, Wordpress, Blogger, salesforce
- Microsoft-Live, Live Meeting, Microsoft SaaS, Small business packages
- GotoMyPC.com, Pandora, Hulu,
- 37signals – project management, contacts, collaboration, chat, project organization for small businesses
Basically, any website that provides a useful interface and requires your internet connection is SaaS. Email is the simplest SaaS even if you use Outlook or Thunderbird as clients. Without the server, there isn’t any communication happening.
If you are a SaaS provider, you probably love cloud computing, but that’s a different article.
The Good about SaaS
- Quick deployment. Usually, you’re up and running in a few hours.
- Low initial costs – $20/month per user per service is common. It really is impressive what you can get for this small price and you didn’t have to buy a server, tapes, backup, IT guru, anything. Of course, $99+/month for higher end solutions exists too.
- You don’t run any network or server infrastructure for the application
- In theory, THE experts run the software
- Upgrades are handled by the experts – usually the people who created the software. If something bad happens, it isn’t just your 10 users affected, so they work really hard to get it fixed, now.
- Backups aren’t your problem
- Disaster Recovery isn’t your problem
- System Security isn’t your problem
- You don’t have to have people to manage this system, send them to training, keep them up to date or worry about their career goals.
The Bad about SaaS
- Internet down? So are your applications
- Privacy? What’s that? Google applications are mostly free. How can they do that? By indexing everything and building a profile of you based on it. Any data that goes through the SaaS provider can be misappropriated for uses that you didn’t approve, didn’t know about and can’t stop. It may not even be the service provider’s fault, since the server may be hacked for months before they realize it (see the Monster.com hack). It is unlikely you’ll ever know that a service has been hacked. Current laws in most locations do not require service providers to tell anyone.
- Run by experts? Perhaps not. SaaS providers can be Fortune 50 companies or some guy in their basement. You can’t tell. The guy in the basement isn’t necessarily bad, but he probably doesn’t have the resources or skill to build fault tolerance, redundancy, and disaster recovery into his systems.
- System redundancy? In how many physical locations is your data stored? If the WAN link goes down, what happens? Google provides 3 addresses, but 37signals only provides 1. That doesn’t mean there aren’t 20 systems behind that single address, but it usually means a single location for all the servers.
- Secure customer data? Maybe not. Most of these SaaS solutions store your data in the same database with all the other customers’ data. If your main competitor uses the same SaaS provider, your data is just a software glitch away from being exposed to any of their users.
- Security and Access Controls? Can you configure the SaaS access to be connected to your internal enterprise authentication? What happens to these accounts when someone leaves your company? Are they automatically disabled?
- Backups performed – maybe not
- Disaster Recovery
- All that data you enter – how do you get it out? Suppose you go with Salesforce.com, a leader in SaaS CRM applications. Your sales team enters thousands of leads and contacts into this system. The more you use it, the more useful it is, so the more they can charge you. Makes sense. For 100 users, the enterprise version is a $150K/yr subscription. Don’t get trapped.
- You want to switch providers – how do you migrate from SaaS-A to SaaS-B or bring the solution internal?
- Larger WAN connectivity needs – you’ll probably want a redundant link, and you’d better hope the provider has redundant WAN connectivity as well.
- Even the best providers fail (gmail down 1 | gmail down 2 | gmail down 3). If Google can’t keep email up, what hope do the other SaaS providers have?
- Who owns the data? You or the service provider? Be certain that you know.
SaaS isn’t all good and it isn’t all bad. It definitely has a place, but if you go with this model for your business, be certain you understand the limitations and get a contract executed that addresses your downtime concerns sufficiently.
Backup Schedules and Retention
Backup Schedule
Backups are boring, when done properly. They’re boring when done wrong too, but you can’t tell the difference until the day comes – and it will – that you need to recover data. There’s nothing new below, but it may be new to you.
Over the last 40 years, a standard, minimal backup schedule has been developed to address many of the shortcomings that non-standard backup schedules experience. There are too many issues to describe them all here; just know that straying too far (or at all) from this schedule places your systems and data at risk.
Below is THE STANDARD monthly backup schedule:
| Sun | Mon | Tue | Wed | Thu | Fri | Sat |
|-----|-----|-----|-----|-----|-----|-----|
| D | D | D | D | D | D | M |
| D | D | D | D | D | D | F |
| D | D | D | D | D | D | F |
| D | D | D | D | D | D | F |
- D = Daily differential backup – changed data only, unless you can perform full backups within your backup window without impacting users
- F = Full (weekly) backup – this limits the number of daily backups that need to be restored after an issue that week to 6 or 7
- M = Monthly – mark the first full backup of each month as the monthly and store it
- If you have limited backup infrastructure, you may need to split the weekly/full backups across multiple days like Friday, Saturday and Sunday. Since this complicates your overall solution, it should be avoided. (A cron sketch of the basic schedule follows this list.)
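As a rough illustration, here’s how that calendar could be wired up in cron on a Linux box. The script names are placeholders for whatever your real backup commands are, and the first-Saturday test is just one common way to catch the monthly.

```bash
# /etc/crontab sketch -- daily-diff.sh, full-backup.sh and tag-monthly.sh are
# placeholder scripts standing in for your real backup commands.

# Sunday through Friday, 1:30am: differential backup (the D days)
30 1 * * 0-5   root  /usr/local/sbin/daily-diff.sh

# Saturday, 1:30am: full backup (the F and M days)
30 1 * * 6     root  /usr/local/sbin/full-backup.sh

# Mark the first full backup of the month as the monthly: run on days 1-7 and
# let the date test pick out Saturday (cron ORs day-of-month with day-of-week,
# so the test avoids that trap).
45 6 1-7 * *   root  [ "$(date +\%u)" -eq 6 ] && /usr/local/sbin/tag-monthly.sh
```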
Backup Retention
Doing the backup is just a small part of this solution. You also need to retain the backup as long as it is useful, plus 1 extra copy.
- Dailies – keep at least 2 weeks
- Weeklies – keep at least 4 weeks
- Monthlies – keep at least 2 months, but consider retaining 6 months
- Legal requirements may demand longer or shorter retention periods. If you can, keep all backups on the same retention schedule. Label the backup media extremely clearly with its purpose and retention period.
- Test the backups. If you don’t test it, you’ll never know whether that hard work is actually working.
- So we’re keeping a bunch of copies following this method: between the weekly and monthly versions, you’ll retain 5 full backups. If you have issues or get hacked, you’ll have ample recovery options. With all these copies, you can see why the lowest-cost media is usually deployed. (A simple retention-pruning sketch follows this list.)
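If your backups land on disk as dated files, the retention rules above can be enforced with something as simple as find. The directory layout and file suffix below are made up; adjust them to whatever your backups actually look like.

```bash
#!/bin/bash
# Retention-pruning sketch -- assumes backups are dated files in per-tier
# directories (paths and suffix are hypothetical).

find /backups/daily   -name '*.tar.gz.gpg' -mtime +14  -delete   # dailies: 2 weeks
find /backups/weekly  -name '*.tar.gz.gpg' -mtime +28  -delete   # weeklies: 4 weeks
find /backups/monthly -name '*.tar.gz.gpg' -mtime +180 -delete   # monthlies: ~6 months
```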
Ruby Gems Suck?
A few days ago, a friend asked me what I’d suggest to get a website up and running, and he was headed towards a big, complex CMS (Content Management System). He wants to maintain a few websites, but most of it would be static webpages that need the same look-and-feel.
I don’t have anything against CMSs; I’ve tried and deployed more than a few of them. But for someone new to hands-on web site development who works alone, that isn’t the way to go. Static HTML is what you need to start. A good templating engine is the second step. There’s a ruby program called webgen that does this. So, I decided to install the newest webgen on a virtual machine.
I already have ruby and rubygems installed in the normal Ubuntu way. This should be quick …. NOT!
The instructions for installing webgen say the easy way is: `sudo gem install webgen` – I get dependency errors, so I run it again, and again, and again and again until they are resolved. Nice.
I go to the directory where I’d like to create my website and run `webgen create MySite` – webgen not found. Huh? I’ve become spoiled with Ubuntu installations. Programs that get installed just work, like you’d expect. There’s no mention of manual configuration required on the webgen documentation page. I search and discover there’s a different version of gem installed that isn’t compatible with some other dependency – fine. I try `sudo gem update` to update all gems on the system. More dependency issues – so I run it 5+ more times until they are resolved.
`webgen` still doesn’t work. I read a little more – it’s been over 2 hours screwing around with this stuff now. I create a softlink to ~/bin/ for it – no go, but at least the program is found. I export GEM_HOME and RUBYLIB environment variables. No joy. I try matching exact dependencies for webgen – and there are over 15 of them. How much more is needed to get this program working?
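For the record, this is roughly the kind of fiddling I was doing. The gem paths are from memory (Ubuntu’s ruby 1.8 packaging at the time) and will vary by distro and gem version.

```bash
# Roughly what I tried -- gem paths are from memory and vary by distro/version.
ln -s /var/lib/gems/1.8/bin/webgen ~/bin/webgen
export GEM_HOME=/var/lib/gems/1.8
export RUBYLIB=/var/lib/gems/1.8/lib
export PATH="$PATH:/var/lib/gems/1.8/bin"
webgen create MySite    # still no joy
```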
I’ll never know – after 3 hours of trying to set up webgen, I give up. Someone will call me an idiot on the blog in a few hours; that’s fine. Clearly, someone with my minimal development experience and complete lack of computer skills isn’t fit to run a trivial ruby program.
Oh, BTW, this blog has been running on a Ruby on Rails system, Typo, for over a year, and I’ve been a programmer for over 20 years. Yep, I must just be stupid, since gems was able to prevent me from using webgen.