Running Windows7 Media Center Inside a KVM VM

Posted by JD 02/06/2012 at 04:00

It has been many months, so I figured an update could be helpful.
I ended up not following the original plan for media center deployment.

Here are the highlights:

  • Already had a KVM virtual server running on Ubuntu Server 10.04 LTS x64
  • Created a virtual machine to hold Windows7 x32 Ultimate (thanks, MS, for the free Ultimate license)
    • Created a 40GB OS/apps HDD (virtual); it barely fit
    • Created a 50GB Data/TV HDD (virtual); this is a 2nd HDD container; easily resized
  • Bought and installed the networked HD-Homerun HDHR3 (dual ClearQAM tuner); actually had this about 6 months earlier
  • Switched to Limited Basic CableTV ($26/month); No CNN, TVGuide, FoxNews, ESPN, etc. That happened in 2010.

Update 5/2014: This exact same VM and 7MC install has been running on the same VM server all this time. One hiccup happened last year related to schedule data inside Windows. No other issues.
The only issue has been that all attempts to migrate the VM to a newer VM host have failed. Seems that only a fresh install will work for Windows on newer KVM emulated hardware. If there was a clean way to migrate the recorded TV history, channel configs, all the other crap-ola that Windows media center keeps, I would have migrated last year to 12.04. Now that 14.04 is out, there is a real urge to move somehow.

Commercials, Transcoding, Closed Captions

WTV files are a hassle in a house that prefers open data formats. Here is an outline of the method to get the files converted to something useful and efficient.

  • Automatically convert WTV files to mpeg2 and cut commercials using a scripted VideoRedo TV Suite app ($50 for the upgrade to TV Suite)
  • Transfer mpeg2 files to a NAS device – this could be automatic, but I haven’t bothered; a script does it ad-hoc.
  • Automatically transcode mpeg2 files into mpeg4/h.264 on Linux with HandbrakeCLI
  • Automatically extract closed caption data for CC1 using ccextractor
  • Automatically run MKVmerge to merge video and CC1 into an .mkv file.

These scripts are written in Perl for both Windows AND Linux.
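
The Linux half of that pipeline can be sketched in shell (the real scripts are Perl; the input filename, quality setting, and output names here are illustrative only):

```shell
#!/bin/sh
# Sketch of the Linux-side steps: transcode an MPEG2 capture to h.264,
# pull the CC1 captions, then mux video + captions into one MKV.
src="show.mpg"
base="${src%.mpg}"

# MPEG2 -> mpeg4/h.264 via HandBrakeCLI
HandBrakeCLI -i "$src" -o "${base}.mp4" -e x264 -q 20

# Extract CC1 closed captions into an SRT file
ccextractor "$src" -o "${base}.srt"

# Merge the transcoded video and captions into a single MKV
mkvmerge -o "${base}.mkv" "${base}.mp4" "${base}.srt"
```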

Playback Devices, Methods and Limitations

There is a downside to having 7MC inside a VM running on a remote server. There is no way to view TV inside the VM or play back video over RDP or VNC. Setting up the channels meant recording each one long enough to determine what it was, then manually setting the guide data source. The VM setup has been working perfectly since July 2011.

  • Playback-A is thru a WD TV Live HD ; Easily handles 1080p, subtitles, multiple languages. This device is completely silent, HDMI output, gorgeous picture and 5.1 DD sound.
  • Playback-B is thru a netbook with XBMC/Linux SD only due to processing power and Stereo-only sound. The XBMC interface is beautiful.
  • Playback-C is thru a powerful laptop with XBMC/Linux thru HDMI connection with more than enough processing power for 1080p, but not always available due to other uses and travel.
  • Playback-D is thru an old MediaGate MG35 – AVI/xvid/divx player ; SD only; This device isn’t used very often anymore. It does not support MKV, h.264 or HiDef content.

Update 5/2014: The playback machines have been completely swapped out.

  • $99 E-350 APU box running XBMC/Ubuntu + Plex Media Server with an internal 4TB HDD. This thing is AMAZING!
  • WDTV Live HD – handles newer video file formats, but not some older versions.
  • Chromebook C720 running Ubuntu 14.04 (ChromeOS was wiped day 1); it has a wired GigE ethernet USB3 dongle – wired blows away wifi-whatever.
  • ChromeCast device – not used much – it is picky about video and audio formats – h.264 + AAC only. Only useful for recordings transcoded since getting the ChromeCast – it does NOT work with the MPEG2 video or AC3 audio commonly found in OTA broadcasts. Mandatory use of Plex Media Server too … since they really want only internet-streamed content.
  • Roku3 – not used much. Mandatory use of Plex Media Server. Streaming Amazon Prime is 98% of its use.

All playback is streamed from a central server either over GigE or WiFi-G. WiFi is avoided whenever possible. No files are stored on the playback devices … except the E350-APU. More and more recordings are ending up on that device.

If the video is transcoded into MKV, wifi-G works. If not, wired ethernet is required to avoid stuttering and major buffer delays. I prefer wired GigE connections even if a device is only 10/100 base-tx. All servers and the powerful laptop are GigE.

Recorded TV Storage

I started with a 50GB 2nd HDD container, but 7MC complained constantly that it would run out of storage – CONSTANTLY. Increased it to 100GB and it complained a little less, but still too much. Went on a vacation and needed a little more storage. Created a 400GB virtual disk to prevent any missed/deleted recordings over that time. It only used 150GB for a week of recordings. This is for temporary storage only.

Stability

KVM has been rock solid stable. Win7 has been rock solid stable under KVM. Disk performance is slower than all the other VMs by a factor of 2, but those are Linux. I’m using virtio drivers for both disk and networking. Tried SATA and IDE disk drivers and didn’t see any throughput improvements, but did see higher CPU loads. Also tried to force 7MC to write the TV recordings to network drives. I never got that working – even to a CIFS share on the same physical box. It wasn’t going to be any slower than a V-HDD, but 7MC freaked out and refused. I tried to hack the registry to make it work … but couldn’t figure out the needed permissions and gave up. I won’t use 777 permissions, period. The virtual-HDD is on the same physical disk as the CIFS share was.
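
For reference, the relevant virtio bits of the libvirt domain XML look roughly like this (the image path and bridge name are examples; the Windows guest also needs the signed virtio drivers installed before it can see a virtio disk):

```xml
<!-- virtual disk on the paravirtual virtio bus -->
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/var/lib/libvirt/images/win7mc.img'/>
  <target dev='vda' bus='virtio'/>
</disk>

<!-- bridged NIC using the virtio model -->
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
```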

Arbitrary limitations are one of the reasons I dislike proprietary software. I get that this might not be optimal, but I have fast storage and my NAS is pretty fast too (70+Mbps writes thanks to great caching).

  1. JD 07/30/2012 at 13:21

    Be certain to check out the new article about Improving KVM Performance. It was written specifically for this virtual machine.

    Those tips really make running a 7MC VM inside KVM much more efficient.

  2. Mike 10/29/2012 at 15:41

    I’d like some advice if you have the time.

    I have a similar setup that is giving me trouble. Windows 7 Media Center in KVM running on Debian 6 (Squeeze) using an HDHomeRun Prime. Prior to this weekend it was running flawlessly for a year in a Hyper-V VM on the same machine, so I know the hardware is more than capable.

    I only watch recordings from my Xbox360 extenders. What’s happening is I’m getting occasional glitches in the recording, showing up as pauses and jerkiness in playback. On top of that I believe I have a network issue as I sometimes lose control of the playback for seconds at a time. The strangest thing, however, is if I record an hour show, when I play it back on the Xbox, the timeline at the bottom shows over double that (almost 1hr50min). However it actually is only one hour of recorded video. I have no idea what that is about.

    I need to fix this fast or I will be forced back to my hyper-v solution. :(

  3. JD 10/30/2012 at 19:40

    @Mike: I cannot recommend anything without any facts.

    What sort of performance does your KVM VM achieve?

    • disk throughput (real and theory)
    • network throughput (real and theory)
    • CPU utilization
    • Which CPU do you have, how many did you allocate to the VM?
    • How much RAM did you allocate to the VM?
    • Is the real hardware reasonably capable? Trying to run a VM off a USB2 disk is a bad idea, for example.
    • WiFi is a no-no for streaming. Perhaps wifi-N-300 could work; I know that wifi-G is not sufficient even under ideal conditions for HD streaming.

    Did you follow the tuning recommendations outlined?

    I see between 90 and 95% of native performance from the VM. That means you need to know what native performance numbers are for your equipment first. Real world numbers, not from the side of a box.
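
    A quick way to get those baseline numbers (sizes and hostnames are examples; use a test size bigger than your RAM so caching doesn’t inflate the disk result):

```shell
# Rough sequential disk write throughput; fdatasync forces data to disk
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync
rm -f /tmp/ddtest

# Network throughput between two hosts with iperf:
#   on the server:  iperf -s
#   on the client:  iperf -c <server-ip>
```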

    I don’t have an xbox360.
    I’ve never tried “hyper-v”, whatever that is. I guess it is commercial and costs $100+?

    Specifics are required if you want any help.

  4. Mike 10/31/2012 at 01:35

    Thanks.

    I’ve followed some of the performance tuning, but not yet all.

    I’m pretty new to linux in general, so I’m still struggling to find things. I’m not sure how to test the hardware performance under linux, but I can give you some more info:

    It’s a Core2 quad-core running at 2.8GHz. I have 6GB RAM, and two SATA hard drives. The first HD is for Debian and the Media Center HD img file used by KVM. The second HD is just for recorded TV and is dedicated to the VM. I allocated 2 CPUs and 2GB of RAM to the media center VM. I have two gigabit NICs in the machine and one is used by the host and the other is bridged and used by the VM.

    To test the HD for recording I copied a recorded show to another folder using explorer and it averaged 15-20MB/s with peaks over 30MB/s.

    I’ve tried changing the VM’s video (vga, cirrus), nic (virtio, e1000), and storage (ATA, virtio) but nothing seems to affect the end result.

    The most bizarre thing is the displayed recording lengths not matching the actual content. Today I discovered the playback issue exists even on the sample video that came with Media Center, so I no longer think it’s a recording issue. I suspect the playback issues are due to DRM side-effects, but I’m not sure. I will try to play back through XBMC/Linux and see what happens.

    Note: Hyper-V is the virtualization solution developed by Microsoft and available in Windows Server. A stand-alone version is available as a free download, but the stand-alone only allows managing of VM’s from console (no GUI) or remote. It is very powerful and rock-solid. Until recently I was running the Media Center VM on hyper-v in Windows Server 2008 R2.

  5. Mike 11/03/2012 at 03:56

    I tried everything I could to get it working in KVM, but the weird playback issues persisted. I was going to try virtualbox to compare but couldn’t get it to run because it was missing some kernel module which was supposed to be automagically compiled in but apparently wasn’t…

    Today I spent some time figuring out Xen. So far it’s working great, much better than in KVM. Playback shows the correct timeline length. Video no longer cuts out after watching more than 10-15 minutes (this was happening regularly in KVM, both on recorded and live TV), requiring me to change channels or start another recording to fix. Xen was a bit harder to set up for a linux newb like me, but not too bad.

  6. JD 11/07/2012 at 14:37

    @Mike: I’m happy that you found a solution with Xen. I’ve run Xen ParaVs here for years and about 2 yrs ago decided to switch to KVM-based VMs to make maintenance easier and clearer. I’ve migrated most of the Xen VMs to KVM and a few VirtualBox VMs to KVM too. For me, KVM rocks completely. It is easier to manage and easier to tune for great performance than Xen was.

    Xen has certainly improved over the years, but the few times that the Xen-based VMs refused to boot after kernel updates left a bad taste in my mouth. I hope that doesn’t happen to you. Managing Xen, kernels, and kernel modules was not fun.

    With that said, I’m happy that some other F/LOSS virtualization solutions are working for other people. The proprietary versions just are not necessary any more for non-Microsoft installations. Where MS is the primary deployed OS, I still think the proprietary solutions ARE the best overall solutions. Whether that is VMware or Hyper-V, I don’t know.

    If you still want help, then details, facts and data are required. The main ideas for setting up high performance VMs will be the same across all virtualization systems – Microsoft included. Performance is split into these subcategories:

    • Disk I/O
    • Network I/O
    • CPU
    • RAM

    It really is that simple. Just tune each for the specific workload. CPU and RAM are about having just enough, but no more than required. I/O is about being as efficient as possible – using the best available drivers and most efficient storage. Knowing where you start on performance is important. Without that, any changes can’t be measured against the baseline. Making changes and hoping for the best outcome is crazy to me. Pros want to know that a change improved the results, not guess that it did.

    I was joking about not knowing Hyper-V – I’d heard of it, but never knew anyone that actually used it. The businesses where I worked all tested it inside labs, but decided to deploy VMware ESX/i instead. That was the safer choice. It has been a few years, so perhaps Microsoft has matured their solution to the point of being competitive with VMware’s suite of options? I dunno. None of the recent clients have been deploying much Microsoft, rather electing to use Linux-based solutions instead. There are fewer and fewer 100% Microsoft shops that I see. OTOH, a 100% MS shop probably wouldn’t hire me.

    As Microsoft has matured their solution, the F/LOSS tools, LXC, KVM, Xen, and others have matured theirs too. Long term I can’t imagine Microsoft or VMware being competitive with VM solutions from all the other companies backing KVM. Having options is good, having completely free, rock solid options that perform well is already possible. Obviously, Microsoft will be the early winner for 100% Microsoft solutions. Having easy access to code and internal experts provides that leg-up. Over the long term, does it really matter if Windows8 desktops run a little slower for 6 months until the F/LOSS teams figure out the settings? It doesn’t for me or most companies. We aren’t on the bleeding edge. None that I know have any plans to use Win8 for the next 5 yrs.

    Travel

    Sorry there haven’t been many useful replies recently. I’ve been traveling overseas the last few weeks.

  7. Mike 11/09/2012 at 16:56

    @John: Thanks. I initially chose KVM because what research I did made it appear to be easier to manage, just as you said. I will likely return to KVM at some point when I have time to experiment.

    If my changes appeared random, it’s due to a combination of a) my inexperience with Linux, b) time pressure to get it working, and c) the bizarre symptoms. I don’t believe what I was seeing was a result of performance issues; rather, something was fundamentally broken. I record primarily copy-once digital cable, which is heavily DRM laden, and stream it to Xbox360’s, so maybe that is why I am seeing something you are not.
    The fact that the playback timeline didn’t show the correct video length was beyond strange. If I watched 5 minutes of video, the position on the timeline would show I had watched around 11 minutes. Skipping backwards was problematic, sometimes going very far back and sometimes acting like it was skipping only to remain in the exact same location (probably related to the timeline issue). Playback would cut out and go black on both recorded and live TV every 5-15 minutes. Starting a different video or changing the channel would usually correct it, after which you could return to what you were watching. Whether these things were caused by video, network, or cpu issues I have no clue. There were zero useful log entries on the Win7 box to help pin it down. I suspect a bug somewhere in the emulated hardware in KVM, but my use case is so far outside the norm I doubt anyone would go to the trouble of trying to recreate it.

    I was surprised you hadn’t at least heard of Hyper-V. Glad to hear you were joking. :)

    I am a long time Windows user, admin, and developer (since the days of 3.0). I currently work as a consultant doing primarily C# development which I’ve been doing for about 7 years. Before that I was a network administrator for Windows and Citrix networks. I have a great deal of experience with Microsoft products and technologies and they have always been my first choice when implementing something new. That said, I have always respected the ideas behind the free software movement and every few years would install a Linux box just to see what it looked like. But I never used Linux for everyday work and so my knowledge of it has remained limited. Windows has been my default because it is familiar, I have amassed a large amount of background knowledge, and it easily supports the things I do (besides using it for work, I am also an avid gamer).

    Despite Microsoft’s increasing limitations on users with each new edition (increasingly draconian activation technologies, non-configurable behavior, etc.) it has never been enough to push me over the edge to using Linux as my primary OS until Windows 8. I won’t go into all the reasons Windows 8 and Microsoft’s new direction is bad; suffice to say it is not a superficial dislike of the new UI or touch paradigm but rather the ridiculous number of restrictions on what you can do with it. I have always looked forward to each new release of Windows because it meant learning new things. Windows 8 was no exception but excitement gave way to disillusionment within weeks. Only two versions of Windows have ever left me with an overall negative impression: Windows ME, and Windows 8 (yes, it is far worse than Vista).

    So I’ve finally given up on Windows and am making the switch to free software and it feels pretty good. So far I’m fairly impressed with Debian.

  8. JD 11/09/2012 at 18:23

    @Mike: A thought about the time issues you are seeing. Have you noticed whether this happens with 1080p, 1080i, 720p, or SD recordings?

    I’ve seen issues like this when recording multiple HD streams, but only if I leave the resulting files in a WTV container. It never happens when recording only 1 stream unless the system is really busy already.

    If I pull the content out and put it into a .MPEG2 container, then transcode it to h.264, all is well. I’ve never been happy with skipping around or FFWD/RWND of WTV container media. I think that container sucks and lacks simple time codes that players can use to get exactly where you like with skips and fast fwd/rwd.

    I’m not really as clueless about Windows as I seem, though my exposure to it since 2007 has been fairly limited. In late 2007, I switched to Linux for almost everything – this applies to home and work. Prior to that, my exposure was daily, deep and complex. I was a cross platform C/C++ dev for many different platforms, including Windows clients and servers. I haven’t bothered to learn anything about Win8. Similarly, I don’t usually bother with the latest Linux releases either.

    Stability trumps “new” every time for me.

  9. Mike 11/12/2012 at 16:21

    I typically only record HD, mostly 720p but some 1080i/p. I can record three simultaneous streams with no trouble (the Prime is a triple tuner), but the timeline and skipping problems even manifested on the short sample video that came with Media Center, so it can’t be recording related. That rules out a bunch of possibilities.

    I’d love to transcode the videos to another format, but I’m not sure that’s possible with copy-once flagged recordings. It is my understanding the wtv files are encrypted in that case. They definitely don’t play back on another media center PC (or even the same one if you screw up the settings!) As my cable provider Time Warner flags practically EVERYTHING as copy-once, it leaves very few options as apparently only media center is capable of recording copy-once broadcasts.

  10. JD 11/12/2012 at 18:12

    I was worried that Comcast would do that copy once setting too. I hate DRM that cannot be trivially removed and avoid it. I almost purchased a Prime a few months ago to get a 3rd OTA tuner, but alas, it is only QAM and CableCARD – no ATSC. I have an HDHR3 – dual QAM/ATSC tuner.

    In the non-encrypted WTV files, it is just an MPEG2 stream. Lots of tools will dump the WTV into MPEG2. Have you tried any?

    • VideoRedo Plus – what I use (My needs were beyond the trivial)
    • http://www.hack7mc.com/ – a nice 7MC tips website; their WTV to Mpeg2 search: http://www.hack7mc.com/?s=wtv+to+mpeg
    • The Green Button – another nice 7MC tips website
    • Right click on the WTV file in Exploder, select Convert to … dvr-ms

    You probably are familiar with all those sites and tools.

  11. john-alt 11/18/2012 at 01:43

    I’ve converted non-encrypted WTV’s before, but none of the tools I’ve tried will touch the encrypted ones. :(

    I have a question that is off-topic but I’m hoping you can help, or direct me to an appropriate forum: When I ran my hyper-v server, I had an Active Directory domain with a file server providing files to my local LAN. Now I’d like my Debian server to serve up those same files, but I’d like to do it “the linux way”, whatever that may be. My research has suggested nfs or samba. I don’t need windows compatibility, so nfs seems the way to go? I installed the nfs-kernel-server package, but I see no way to secure based on username/password, only hostname or IP. Insecure sharing is not the route I wish to take, so how do I restrict folders based on username and ensure secure transmission of username/password authentication? I’ve seen mention of using kerberos, which is what Active Directory uses, but no information on how to integrate this with nfs, so I don’t know if that is a valid approach. Also a great deal of the information I find seems to be outdated. Thanks.

  12. JD 11/18/2012 at 11:27

    @john: In Linux, file ownership, groups and permissions are how data security is performed. This works for NFS as well. Unlike with MS-Windows, file permissions work the same regardless of whether it is accessed locally or from a remote file system. If you setup the file permissions on a Linux/UNIX file system to only allow 1 user to have read access, then no other users (except root) can read those files.

    The real key for the best file sharing available is to store the files on a UNIX file system. This provides the greatest flexibility since it is easy to share files with both NFS AND samba at the same time.

    There are at least 20 different ways to share files the linux way, but for home users, NFS should be sufficient between the two Linux/UNIX machines. Sharing over NFS between Linux and MS-Windows will just bring problems unless you pay for a good NFS client on Windows.

    If you need higher security than NFS provides alone, which can be locked down by client IP, export permissions (no root, no exec, etc) in addition to userid file permissions, then you’d need to look at adding Kerberos. I’ve never had a need myself.

    The trick to doing it the linux way is centralized management of all userids. The uids and gids need to match between the systems. How you do that is completely up to you – it can be manually making the /etc/passwd and /etc/group files match or running an LDAP server with users or connecting to an AD server, if you prefer – LDAP integration with AD sorta just works these days. I use openLDAP myself – it does credentials for email, logins, and multiple 3rd party server logins.
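
    As a minimal sketch (paths, IPs, and the user name are all examples), an export locked to one client plus the matching-uid check looks like:

```shell
# /etc/exports on the server - share read-only to a single client IP
# (root_squash is the default; shown here for clarity)
/srv/media  192.168.1.20(ro,sync,root_squash)

# apply the export table, then mount from the client
exportfs -ra                                   # on the server
mount -t nfs server:/srv/media /mnt/media      # on the client

# the uid must match on both machines for permissions to line up
id -u jd    # should print the same number on server and client
```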

    If you do trust file permissions over NFS and you are worried that unwelcome clients might spoof their uid/gids to match, definitely restrict the IPs that can mount NFS shares/exports. NFS traffic is not encrypted. On a LAN, that shouldn’t be an issue. Heck, Windows share traffic isn’t encrypted either.

    If NFS isn’t enough, you can use sshfs, but it is slow – really slow in comparison, however, as the name implies, it uses ssh for the transport, so both user authentication AND all transfers happen over encrypted ssh channels.
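
    The sshfs side really is a one-liner (host and paths are examples):

```shell
# mount a remote directory over ssh - auth and transport are both encrypted
sshfs jd@server:/srv/media /mnt/media

# unmount when finished
fusermount -u /mnt/media
```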

    I hope that was clear enough.

  13. Mike 11/18/2012 at 17:20

    Before I saw your reply I did some more research.

    I feel NFS is too insecure for my needs. The idea that a UID is the only thing stopping an attacker from wiping all my important shared data is too much to accept.

    I tried SAMBA but couldn’t get it to work. The smbd wouldn’t start. It kept complaining about the address being in use despite netstat not showing anything using any of the typical ports needed by smb file sharing. All the help I could find for that error seemed unrelated to my issue. At this point I was pretty frustrated and gave up for the night.

    This morning I read your reply and immediately looked up sshfs. This is exactly what I wanted and it was dead simple to get working! The encryption overhead is completely acceptable.
    My Windows Server was running fully encrypted smb traffic as well (supported as of Server 2012) so it is nice to keep that level of security.

    The only thing I’d like to add would be the ability to allow users to connect via sshfs but deny a shell (both remote and at the actual console). I’m not sure if that is workable but I will look into it.

    Thanks again.

  14. JD 11/19/2012 at 01:37

    @Mike: “The idea that a UID is the only thing stopping an attacker from wiping all my important shared data is too much to accept.”

    Really? What do you think secures Windows shares for an average user? NFS is as secure as the servers it runs on. There is no way for an end-user to change their uid.

    sshfs is a FUSE file system. All FUSE file systems run in user-land, not kernel mode. They are slow. Really slow. sshfs does not support all the normal features of a real file system like NFS does. Just be aware you are giving up some things when you use it.

    Samba is pretty easy to setup too. The only trick is to set smbpasswds for the samba users. It has always worked this way, not that it makes any sense. There are many how-to guides available. Find the one for your distro and situation. I’ve been running samba shares since around 1996 without any issues.
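
    The smbpasswd step looks like this (user name is an example; samba keeps its own password database separate from /etc/shadow, which is why the extra step exists):

```shell
# the user must already exist as a system account first
smbpasswd -a mike     # add mike to samba's password database and set a password
smbpasswd -e mike     # make sure the account is enabled
service smbd restart  # reload samba so it sees the change (service name varies by distro)
```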

    Is encryption of file shares on a LAN really necessary? It isn’t likely there will be a MITM attack and since everyone uses switches these days, there really isn’t much concern unless your entire network has been hacked. Ah – you do run Windows – perhaps being paranoid is useful. ;)

    I’m pretty paranoid, but mainly about external stuff, not things on my internal LAN. I suppose if you treat your LAN like a DMZ, then being paranoid is a good idea and it makes perfect sense. OTOH, if you have iptables running on every box and blocking all traffic by default, except the traffic you want, to/from the machines you want it, then there isn’t much concern over NFS traffic or uids being spoofed.

    Am I missing something?

  15. Mike 11/19/2012 at 18:32

    You could consider me paranoid. :)
    But it’s probably more my long exposure to security issues. I’ve always run my home network much as I would a corporate network with 1000+ users. I find it is good practice. My general rule is treat all networks as hostile and assume there are always compromised machines present. While that may be extremely unlikely on my home lan, defense in depth is a proven strategy.

    I don’t want to defend Windows’ security, so let’s phrase this in terms of the security models of SAMBA and NFS. If I set up NFS on a company server and someone comes along with their private laptop and plugs into the network, they can create any UID they wish, correct? I would think with a little scripting one could rapidly try multiple UIDs while attempting to write to the server’s file system. Once it hits a valid UID, bang… your server is compromised. If you don’t prevent root access to NFS, it’s even easier. IP restrictions add little value to that scenario. What that boils down to is virtually no security if you can’t keep unknown machines off the network. In a corporate environment with consultants and others controlling their own machines and needing to connect to company-wide shares, basing filesystem security on nobody being able to do this seems somewhat naïve. With SAMBA/Windows I cannot simply set up a user on my local machine, ignore the password, and connect to a protected share. It won’t work.

    I don’t know why SAMBA wouldn’t work properly for me. I followed some tutorials I found for setting it up on Debian. The only thing I have on the server beyond the default install is Xen, plus I configured bridged networking for my VM. I haven’t yet locked down the server’s iptables, preferring to get everything working first.

    Anyway, it’s moot since sshfs seems to do what I need. I didn’t notice any functional difference in accessing data via sshfs vs locally; What filesystem features is it missing? Performance seems decent. I copied about 200GB of data via a wireless sshfs connection yesterday and it only took around two hours. That’s fast enough for my needs. The last piece I need is preventing users from logging in locally to the server. I read about changing the user’s shell to prevent this, but not sure yet how that affects ssh and sshfs. I will experiment tonight and figure it out.

  16. JD 11/19/2012 at 20:21

    @Mike: NFS has both client-side permissions AND server-side permissions. The server doesn’t share everything with just anyone. I think the scenario outlined is bogus. Here’s why:

    • The server will have a fixed IP. The client will have a fixed IP. The server will share only to the specific IPs that you setup – not to an entire subnet (unless you are lazy and allow that).
    • When you share, you can share read-only. No way for the clients to write anything. Your files are safe, regardless. This is a great way to share media files.
    • When you share, you share to specific IPs only. Not the entire subnet. If you have setup your home network like a company with 1,000 users, then you’ll have separate subnets for desktops and servers, right? If a non-server tries to use a server subnet address, their networking doesn’t work at all.
    • Do you allow just any clients on your corporate network? Don’t you put any unknown desktops into a DMZ area with only internet access? Unknown devices should never get desktop LAN and definitely not server LAN access. My house guests dislike that I put them on a different subnet from all the servers – security protocols demand that.
    • I never trust any wifi connected devices. If it uses wifi and wants access to my internal LAN, then it will use a VPN (IPSec or OpenVPN).
    • If somehow an unknown client gets on your server LAN, selects an IP that is shared, knows the names of the NFS shares, gets the uids/gids correct, knows how to mount it, and does, then you have bigger issues. If you think all that can happen – and it could – then just use Kerberos with NFSv4. It doesn’t appear to be that hard.

    I haven’t used NFS in many years, but I did use it extensively at one job. It is common to have HOME directories located on NFS shares and mount them across all the UNIX workstations on a LAN. When we’d login to any workstation, our same HOME would be mounted. Sometimes that made initialization settings complicated, since we’d have many mixed systems with different OSes and CPUs. Still, knowing how to create cross-UNIX .profile was very useful. Sometimes I’d be on HP-UX or OSF or Solaris or SunOS or Linux or AIX – my settings would need to handle those slight differences. We also used it for cross-platform software development.

    Clearly we had multiple exports and tied the share/export to IPs and controlled all user accounts through NIS+. We never had any issues, but that network was not a hot target either. Inside a government LAN that was air-gap to any other networking, we ran Kerberos, so I do understand a certain level of paranoia. Still, most Microsoft installations do not use Kerberos when we know all the attacks that exist for non-Kerberos systems.

    I should point out that I haven’t used NFSv4, which includes lots and lots of security additions. Is there a 1-click NFS installer that does all the appropriate security things for you automatically? No. That isn’t the UNIX-way.

    Xen does things to the hostOS. That’s a nice way to put it. It messes with the kernel in unexpected ways. I wouldn’t recommend running anything but Xen on the hostOS – no NFS, no samba, no print server, just Xen. Those other items should be run inside a VM, under Xen, if you need them or on another system. Definitely never run any GUI on a Xen VM-host, especially if you care about stability. GUIs are the buggiest code I’ve seen – I say this as an X/Windows developer.

    As to how slow sshfs is, I’ve had a GigE network for almost a decade and can’t stand anything less than about 300Mbps for moving files around. I avoid using USB2 devices for the same reason. sshfs is slow, really slow.

    As to locking down user accounts to prevent unwanted access methods – there are many, many ways to accomplish this. The O’Reilly books on UNIX security, or Essential System Administration, describe these techniques. Because the controls are so flexible, they are also fairly complex; with a thorough understanding you can do things neither of us will ever need. You’ll want to do some reading. The simple approaches are to change the login shell to something non-existent or to find a setting in the sshd_config file that specifically denies what you don’t want. I don’t know off-hand whether you can allow sshfs while denying an interactive ssh login – sshfs is a FUSE file system after all. Does the Xbox360 have an sshfs client?
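
    One avenue worth checking: sshfs actually speaks the SFTP protocol, and sshd can pin an account to the SFTP subsystem with no shell at all. A sketch of the sshd_config side (the "mediabackup" account name is hypothetical, and I haven’t tested this against sshfs specifically):

    ```
    # sshd_config: restrict one account to SFTP only – no shell, no TTY,
    # no forwarding. sshfs should still work, since it uses SFTP underneath.
    Match User mediabackup
        ForceCommand internal-sftp
        PermitTTY no
        AllowTcpForwarding no
        X11Forwarding no
    ```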

    What is sshfs missing? It doesn’t support hard or soft links. I’m sure there are other things too, but those are the main things – especially when using the mount for backup purposes. Hardlinks often make backups highly efficient on storage. I should point out that if you use softlinks as much as I do, there is a special samba setting to allow those to work, but it does open a slight security issue if enabled.
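
    To see why hardlink support matters for backups, here is a minimal sketch of the snapshot trick (done here with `cp -al`; `rsync --link-dest` is the usual tool for real backups). The second snapshot costs almost no storage because unchanged files are hardlinks – exactly what an sshfs-mounted target cannot do:

    ```shell
    # Hard-link snapshot sketch: two full-looking snapshots, one copy on disk.
    set -e
    cd "$(mktemp -d)"
    mkdir src
    echo "big recorded-TV file contents" > src/movie.mpg

    cp -a  src    snap.1   # first snapshot: a real copy
    cp -al snap.1 snap.2   # second snapshot: hard links, not copies

    # Both snapshots list the file, but it is stored once (link count = 2)
    stat -c %h snap.1/movie.mpg   # prints 2
    ```
    
    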

    BTW, have you had a chance to read my article on the Linux Thought Shift?

  17. Mike 11/19/2012 at 22:02

    Thanks. I just read the article you mentioned. I agree most people switching have to do a bit of a metal shift. That said, Windows does allow this approach too, and the few really, really good Windows admins and developers I’ve known take advantage of it. Most don’t.

    I agree that running anything else on the Xen host is probably a bad idea and it is my intention to separate things eventually. On Hyper-V I always kept my server roles separated on different VM’s. Right now I am still in the process of learning, so I fully intend to rebuild the entire server probably several times over the next year as I learn. I just have to do it when it is convenient to take the TV off-line. :)

    Good to know about the lack of support for links in sshfs. I hadn’t tested that yet. That may become an issue at some point and I will have to re-evaluate.

    Perhaps my criticism of NFS isn’t fair. Who knows. On many (most?) of the networks I’ve seen as a consultant, outside machines connect as workstations all the time with little to no control. Often they need access to shares and rely on temporary username/passwords to connect. Yes, those networks have serious issues. In that sort of situation, I think SAMBA would provide more security than NFS.

  18. Mike 11/19/2012 at 22:02

    Oops, ‘metal’ shift should’ve been ‘mental’ shift. :)

  19. Mike 11/20/2012 at 16:33

    (My browser timed-out, so this may be a duplicate post!)

    Looks like NFSv4 does not deserve my criticism. I just found this in RFC 3530:

    NFS has historically used a model where, from an authentication
    perspective, the client was the entire machine, or at least the
    source IP address of the machine. The NFS server relied on the NFS
    client to make the proper authentication of the end-user. The NFS
    server in turn shared its files only to specific clients, as
    identified by the client’s source IP address. Given this model, the
    AUTH_SYS RPC security flavor simply identified the end-user using the
    client to the NFS server. When processing NFS responses, the client
    ensured that the responses came from the same IP address and port
    number that the request was sent to. While such a model is easy to
    implement and simple to deploy and use, it is certainly not a safe
    model. Thus, NFSv4 mandates that implementations support a security
    model that uses end to end authentication, where an end-user on a
    client mutually authenticates (via cryptographic schemes that do not
    expose passwords or keys in the clear on the network) to a principal
    on an NFS server.