End Open WiFi Access Points Now!

Posted by JD 11/06/2010 at 22:00

Open WiFi is convenient, but it is not secure. All of us need to help the people and businesses that provide open WiFi understand the risks so they will stop offering it.

The real problem is that most people do not understand how insecure open WiFi is. There’s a new Firefox extension that grabs the session cookies of other people on the same open WiFi network and uses them to take over their social network accounts. The extension works on Windows and Mac OS X, with Linux support coming soon. It is named FireSheep, anyone can get the FireSheep extension here, AND it is trivial to use. This extension lets the nearby cracker act as if they are you on those social websites. They can post to Twitter as you, they can update photos on Facebook. For all important uses, they ARE you with just a click of a button.

The Fix – Easy

What is the fix? It is simple: just enable a trivial WPA passphrase on the WiFi access point. That’s it. This is enough for all those small businesses to stop most of these session-hijacking attacks without really causing issues for their clients. For a simple example passphrase, Starbucks could use … er … “starbucks.” That would be enough to foil the FireSheep extension.
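On a consumer router, that is just one setting in the admin web page. If the access point happens to be a Linux box running hostapd, it is only a few lines in hostapd.conf. The sketch below is minimal and uses placeholder interface name, SSID, and passphrase, not a drop-in config:

```bash
# Minimal sketch: WPA2 with a simple, publicly posted passphrase under hostapd.
# The interface, SSID, and passphrase are placeholders.
sudo tee /etc/hostapd/hostapd.conf > /dev/null <<'EOF'
interface=wlan0
driver=nl80211
ssid=CoffeeShop-WiFi
hw_mode=g
channel=6
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=coffeeshop
EOF
sudo service hostapd restart
```

Post the passphrase on the wall or the receipt; the point is not secrecy, just forcing the wireless traffic to be encrypted for each client.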

Not Secure Enough for Home or Business WiFi Networks

OK, so this fix is just for places that provide an open WiFi hotspot for clients. It definitely should not be used by any business for their private network, or by any of us in our homes. For small businesses and homes, you really want to follow my WiFi Security Checklist.

The Best Fix

Another way to solve this issue, and a better way, is for all websites with a login to use SSL encryption for everything, all data, no exceptions. Ten years ago, that would have been computationally unreasonable. These days, encrypting everything with SSL adds about 3% overhead to bandwidth and compute requirements. That isn’t a big deal for almost any website to handle. The newest CPUs from Intel include special AES-NI instructions that make AES encryption/decryption even less computationally intensive, so the cost is becoming a non-issue.

If your website supports encryption, please force SSL connections. There are some very easy ways to do this without touching the website code. Simply use a reverse proxy like pound to handle the SSL connections, then forward the requests to the back-end web servers. This website, jdpfu, uses pound both to terminate SSL and to load balance traffic across 3 server instances. Connections with logins stay on the same server instance, so there’s no session confusion between the different servers. All the web servers read and write to the same DB instance. SSL connections are all handled in pound, and the application doesn’t know anything about it.
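A minimal pound.cfg along these lines looks something like the sketch below. The addresses, ports, certificate path, and cookie name are placeholders, not the actual configuration for this site:

```bash
# Sketch of an /etc/pound/pound.cfg for SSL termination plus load balancing.
# Addresses, ports, certificate path, and cookie name are placeholders.
sudo tee /etc/pound/pound.cfg > /dev/null <<'EOF'
# Plain HTTP listener: bounce everyone to HTTPS
ListenHTTP
    Address 0.0.0.0
    Port    80
    Service
        Redirect "https://www.example.com"
    End
End

# Pound terminates SSL here
ListenHTTPS
    Address 0.0.0.0
    Port    443
    Cert    "/etc/pound/example.pem"
End

# Forward plain HTTP to the back-end web servers
Service
    BackEnd
        Address 10.0.0.11
        Port    8080
    End
    BackEnd
        Address 10.0.0.12
        Port    8080
    End
    BackEnd
        Address 10.0.0.13
        Port    8080
    End
    # Keep logged-in clients pinned to one back end
    Session
        Type    COOKIE
        ID      "sessionid"
        TTL     600
    End
End
EOF
sudo service pound restart
```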

If you need help setting up pound, let me know below.

What You Need To Have A Web Site

Posted by JD 11/05/2010 at 10:55

To have a web site on the internet, you need just 4 things.

  1. Registrar – these guys sell you the .com, .net, .org, .co.country, etc. They maintain the ‘whois’ record. That’s it. The registrar also holds the records that point to your DNS provider: the name server and backup name server records.
  2. DNS – Domain Name System. This connects the domain name that you bought to the IP address(es) of the computers where the web site runs.
  3. Public IP Address – any public IP address that is not behind a private network and is not filtered for the service you want to make available. The service is usually HTTP on port 80 and/or HTTPS on port 443; those are the default ports. Most people and companies will pay a hosting provider for both an IP and a server.
  4. Web Server – this is the computer program that listens on port 80 or 443 and responds with the content you specify. While any ports can be used, end users are used to ports 80 and 443, so it is unusual to see other ports used. I’ve used other ports and seen how that lowers traffic, but it also breaks many content-spamming programs.

Optionally, you may also need an SSL Certificate for encrypted web connections. These days, many websites have decided that only allowing SSL-based connections is worthwhile.

That’s all you need. Do you see how each of these things fits together so that my-neat-domain.com becomes an IP address and then serves a web page from a web server? Simple, and it works billions of times every hour.
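You can watch each of these pieces do its job from any Linux box. Using the made-up example domain from above, the commands look roughly like this:

```bash
#!/bin/bash
# Trace the pieces using the made-up example domain from above.

whois my-neat-domain.com | grep -i 'name server'   # 1. the registrar record points to the DNS provider
dig +short my-neat-domain.com                      # 2. DNS answers with... 3. the public IP address
curl -I http://my-neat-domain.com/                 # 4. the web server responds on port 80
curl -I https://my-neat-domain.com/                #    or on 443, if an SSL certificate is installed
```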

Simple Linux Firewall Tricks

Posted by JD 11/04/2010 at 12:38

The 7 Uncommon Uses of iptables article over at linuxaria shows fairly easy-to-use solutions for the following:

  1. Block known dirty hosts from reaching your machine; block spammers and other known bad networks
  2. Unlock a pre-determined port once someone “knocks”, i.e. port knocking
  3. Use a restricted port externally, but a high port on the server – port forwarding
  4. Use your proxy only for external access, not on the local LAN – I’ve done this with PAC files
  5. Limit the number of SSH connections to 10
  6. Limit SSH to just 1 new session every 15 seconds
  7. Give multiple directives with a single command

Fail2ban can address some of these concerns, but you may still need to limit the connection count and rate from IP addresses that abuse your systems.
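Items 5 and 6 in the list above, for example, can be handled directly with iptables. Here is a minimal sketch; adjust the port, counts, and time window for your own systems:

```bash
#!/bin/bash
# Minimal sketch of items 5 and 6 above; adjust the port, counts, and window.

# 5. Limit any single source IP to 10 concurrent ssh connections.
iptables -A INPUT -p tcp --syn --dport 22 -m connlimit --connlimit-above 10 -j REJECT

# 6. Allow only 1 new ssh session per source IP every 15 seconds.
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --set --name SSH
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --update \
         --seconds 15 --hitcount 2 --name SSH -j DROP
```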

Your Computer is Impacting Foreign Elections

Posted by JD 11/04/2010 at 10:45

The BBC is reporting that internet connectivity with Burma (Myanmar) has been effectively shut down in advance of the first elections held there in 20 years.

Only 200 PCs Needed

If the BBC report is true, it would take only 200 relatively low-speed, internet-connected PCs to take the country of Burma offline. Let me explain. The BBC story about Burma states that the entire country is connected to the internet over a 45 Mbps link; that’s a DS3 to the network and telecom people. It isn’t much bandwidth for an entire country.

To take any network or server offline, all an attacker needs to do is keep your network too busy for user connections to get through, just like a busy signal on your telephone. Doing that isn’t very hard.

Only about 15 PCs with common home upstream bandwidth (around 3 Mbps each) could take down the country of Burma. That isn’t many PCs, is it? Even the slowest broadband connections have 256 Kbps upstream, which means only about 200 PCs at that rate are needed to take Burma effectively offline. If a botnet controller with 100,000 PCs wanted to attack an IP, that translates to roughly 25 Gbps; most companies, even Fortune 100 companies with large pipes, would be taken offline. Of course, 200 PCs is a small enough number to be quickly blocked, which is why botnet owners control 100,000 to 5 million PCs.
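The back-of-the-envelope math, for anyone who wants to check it:

```bash
# Back-of-the-envelope math, with all rates in Kbps; integer math is close enough.
echo $(( 45000 / 3000 ))            # ~15 PCs at 3 Mbps upstream each fill a 45 Mbps link
echo $(( 45000 / 256 ))             # ~175 PCs at 256 Kbps upstream -- call it 200
echo $(( 100000 * 256 / 1000000 ))  # 100,000 bots at 256 Kbps is about 25 Gbps
```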

MKV Containers - Why Use Them + Scripts

Posted by JD 11/02/2010 at 10:30

So the HD-Nation video-cast (available online or on your TiVo) did a few episodes about what you can do with MKV containers for your media.

Below are a few other links about MKV containers and a few shell scripts to get MKVs to play back correctly.
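As a quick taste of what these tools can do, remuxing an existing video into an MKV container is a one-liner with mkvmerge from the mkvtoolnix package. The file names are placeholders, and this is not one of the scripts linked below:

```bash
#!/bin/bash
# Placeholder file names; a minimal remux example, not one of the linked scripts.
IN=movie.avi
OUT=movie.mkv

mkvmerge -o "$OUT" "$IN"   # copy the existing streams into an MKV container, no re-encode
mkvinfo "$OUT"             # sanity-check the tracks inside the new container
```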

Solved-Increase KVM VM Image File Size

Posted by JD 10/31/2010 at 13:00

Seems that 2GB isn’t enough for some specialized PBX Linux solutions to build, so I found myself needing to increase the size of a KVM virtual machine image running Ubuntu Server 10.04 Lucid Lynx in the VM. This technique probably will not work for sparse or VMDK-based VM images. It should work for Xen and KVM IMG-based VM files, however. Anyway, below is how I did it.
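In outline, the usual approach for a raw .img file looks something like the sketch below. The VM and image names are placeholders, and the details (partition layout, filesystem) vary, so treat this as the general shape rather than exact commands:

```bash
#!/bin/bash
# Placeholder VM and image names; a rough sketch of growing a raw KVM image.
VM=lucid-pbx
IMG=/var/lib/libvirt/images/lucid-pbx.img

sudo virsh shutdown "$VM"         # the VM must be stopped before touching the image
sudo qemu-img info "$IMG"         # confirm the current size and that the format is raw
sudo qemu-img resize "$IMG" +8G   # grow the image file by 8 GB
# Older qemu-img without 'resize' can append space instead:
#   dd if=/dev/zero bs=1M count=8192 >> "$IMG"
sudo virsh start "$VM"

# Inside the guest: grow the partition (fdisk/parted), then the filesystem,
# for example: sudo resize2fs /dev/vda1
```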

Attempt 1 - OpenQRM on Ubuntu Lucid

Posted by JD 10/30/2010 at 10:54

This morning, I decided to install KVM and openQRM on a spare machine here. The machine is suitable to be a VM host, with plenty of CPU and 8GB of RAM. It is not a blank machine; rather, I wanted to add openQRM to it and leave the existing services running there … untouched. The existing services are a storage server and a DLNA/media server. Nothing too fancy, but there are some non-default settings that proved to be small issues when attempting the openQRM install.

Following the sparsely written guide Setup_your_own_openQRM_Cloud_on_Ubuntu_Lucid_Lynx from the openQRM team, I was hopeful that this complex system wouldn’t be so complex that I couldn’t get it running quickly and easily.

Linux Training and Documentation Resources

Posted by JD 10/29/2010 at 11:27

If you want to learn something about Linux, there is a wide range of learning materials available out there. Much of it is for beginners, but there are some intermediate and advanced course materials available too.

The best place to begin is with the documentation from your distribution.

Internet search engines will find lots of documentation for other distros too, but knowing that Distro-Z is based on Distro-Y means that the documentation for Distro-Y probably works for Distro-Z too. A concrete example – Ubuntu is based on Debian, so if you use Ubuntu and can’t find the document under Ubuntu, look for it under Debian.

Eventually, you will want to know something that isn’t in those documents. To address this, each major distro also has forums and mailing lists.

Be certain to spend at least 45 minutes searching the forums for your question and answer before you post. Read the Acceptable Use Policies for each forum too. Basically, if you are on-topic, respectful and cordial, then you won’t have any issues.

Some general information about Linux and some HowTo guides also exist.

Because Linux is very much like UNIX, much of the information and techniques used and documented for UNIX systems over the last 30+ years will work on Linux. Don’t be afraid to read UNIX How-To Guides that you find out there.

Books – I find that anything written in a book is out of date by the time it gets published. That doesn’t mean you don’t want a classic like UNIX System Security in your collection, just that the details of an implementation covered in the book are probably out of date. The architecture coverage is probably just fine.

Just because you can do something doesn’t mean it is a good idea or that it won’t impact your security. When you read any online information that tells you how to do something, ask yourself how it impacts your privacy and system security.

Linux Backups via Back-In-Time

Posted by JD 10/28/2010 at 08:55

One of the main reasons people give for not performing backups is that it is too difficult. The Back In Time program solves that issue for anyone using Linux: Ubuntu, Red Hat, Slackware, etc. Both Gnome and KDE versions are available.

Back-In-Time uses file system hardlinks to manage snapshots efficiently. This trick has been used for 20+ years on UNIX operating systems to provide backups. That means it has been well proven, but it also means this technique doesn’t work on Windows because hardlinks in Windows work differently. After the first complete copy is made to the backup area, any snapshots made after that point use hardlinks for each file that doesn’t change. Basically, it costs ZERO storage to make additional hardlinks. Neato.
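Back In Time hides all of this behind a GUI, but the underlying hardlink idea is easy to see with plain rsync and its --link-dest option. The paths below are placeholders, and this is a sketch of the concept, not how Back In Time is implemented:

```bash
#!/bin/bash
# Placeholder paths; a sketch of hardlink-based snapshots with rsync --link-dest,
# not Back In Time's actual implementation.
SRC=/home/jd/
SNAPDIR=/backups/home
TODAY=$(date +%Y-%m-%d)
LAST=$(ls -1d "$SNAPDIR"/????-??-?? 2>/dev/null | tail -n 1)

# Files unchanged since the last snapshot become hardlinks (almost zero extra space);
# only new or changed files are actually copied.
rsync -a --delete ${LAST:+--link-dest="$LAST"} "$SRC" "$SNAPDIR/$TODAY/"
```

Each dated directory looks like a complete copy, but only the changed files consume new disk space.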

File Copy Performance for Large Files

Posted by JD 10/27/2010 at 18:36

The last few days, I’ve been trying to improve the way I copy large (2+ GB) files around, both locally and between systems.

I looked at 4 different programs and captured very simple stats using the time command. The programs tested were:

  1. cp
  2. scp
  3. rsync
  4. bigsync

I’d considered trying other programs like BigSync, but really wanted something that supported incremental backups to the same file and handled it without too much complexity. I would have liked to use zsync, but my understanding is that it is an HTTP-based protocol and can’t be used for local copies. I wasn’t interested in setting up Apache with SSL between internal servers.
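The timings themselves are easy to capture; wrapping each copy in the time command is all it takes. The file and host names below are placeholders, and this is a sketch of the method rather than the exact benchmark runs:

```bash
#!/bin/bash
# Placeholder file and host names; a sketch of timing each copy method.
BIG=/tmp/big-file.img            # a 2+ GB test file
DEST=backup-server:/data/copies  # remote destination reachable over ssh

time cp "$BIG" /mnt/local-backup/        # local copy
time scp "$BIG" "$DEST"                  # remote copy over ssh
time rsync -a --inplace "$BIG" "$DEST"   # remote copy; incremental on repeat runs
# bigsync was timed the same way, using its own command line
```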