Saturday, November 1, 2014

CentOS 6.5, Custom Linux Kernel 2.6.36.4 with UnionFS

CentOS 6.5 currently ships with Linux kernel 2.6.32.   Plenty of blogs, posts, and snippets will lecture you about the 'standards' of Enterprise Linux and insist that if you change the CentOS kernel, you are wrong and your operating system will be flawed, unsupported, and so on...

In reality, you may have an environment where you can only install and use CentOS 6.5 instances.  However, you may also be in a position where you can integrate custom code.  In my situation, I am looking to upgrade the kernel so I can use UnionFS.  UnionFS is used in LTSP to create a virtual filesystem in which tmpfs is the writeable layer on top of the read-only root filesystem that provides the OS via NFS/NBD.

This script should work with CentOS 6.5 from a 'Minimal' ISO install.  It should be run as root, and should be the first thing you do with a fresh install.  Once you have verified the kernel is working, you can then install updates and go from there.

One thing to note: this is a CUSTOM solution for a specific situation.  The script also adds an exclusion to the YUM configuration so that kernel updates are no longer downloaded.  Use it at your own risk, and be sure to test thoroughly in a VM before running it on any systems used by others.
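The original script isn't reproduced here, but a minimal sketch of the approach could look like the following (the kernel URL, unionfs patch file name, and package list are assumptions; run as root and adjust for your environment):

#!/bin/bash
# Hypothetical sketch: build a custom 2.6.36.4 kernel with unionfs on CentOS 6.5
yum install -y gcc make ncurses-devel patch wget

# Fetch the kernel sources (URL/version assumed)
cd /usr/src
wget https://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.36.4.tar.bz2
tar xjf linux-2.6.36.4.tar.bz2
cd linux-2.6.36.4

# Apply the separately distributed unionfs patch for this kernel series (file name assumed)
patch -p1 < /root/unionfs-2.5.10_for_2.6.36.4.diff

# Start from the running kernel's config and accept/answer prompts for new options
# (enable UNION_FS when asked)
cp /boot/config-$(uname -r) .config
make oldconfig

make -j"$(nproc)" && make modules_install && make install

# Keep yum from pulling the stock 2.6.32 kernel back in
grep -q '^exclude=kernel' /etc/yum.conf || echo "exclude=kernel*" >> /etc/yum.conf

Once you have confirmed the new kernel boots and the unionfs module loads, proceed with the rest of your updates.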

Friday, September 19, 2014

Just Install Docker on Ubuntu 14.04 64-bit...

It has been a while since I have posted, so I'm making this one short.  Ever want a clean and concise Bash script that will install the latest version of Docker (and stay up to date)?  Well, here it is. Note that Docker wants you to pick the DNS server, so I've used 8.8.8.8 (Google DNS) in this example.

Note: This will also install the kernel extras to enable AUFS support.
#!/bin/bash
# Update our local package index
sudo apt-get update
# Make sure APT supports HTTPS
[ -e /usr/lib/apt/methods/https ] || {
  sudo apt-get install -y apt-transport-https
}
# Get the signing key for the Docker repo
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
# Replace any existing Docker repo source list
[ -e /etc/apt/sources.list.d/docker.list ] && {
  sudo rm /etc/apt/sources.list.d/docker.list
}
echo "deb https://get.docker.io/ubuntu docker main" | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-get update
# Ensure the Linux kernel extra modules are installed (AUFS, cpu cgroups, etc.)
sudo apt-get install -y linux-image-extra-$(uname -r)
# Install Docker
sudo apt-get install -y lxc-docker
# Set the Docker DNS and have the daemon listen on a localhost TCP socket as well as the unix socket
echo 'DOCKER_OPTS="--dns 8.8.8.8 -H unix:///var/run/docker.sock -H tcp://127.0.0.1:5555"' | sudo tee /etc/default/docker
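Once the script finishes, restart the daemon so it picks up /etc/default/docker and run a quick sanity check (the ubuntu:14.04 image pull is just an example):

sudo service docker restart
sudo docker version
sudo docker run --rm ubuntu:14.04 echo "docker is working"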

Tuesday, September 2, 2014

Netflix Finally works on Linux

After years of waiting, Netflix finally works with a baseline install (plus apt-get dist-upgrade) of Ubuntu 14.04 LTS.  Additionally, the performance seems to be much better than the Silverlight plugin on Windows 7.

I also installed a few packages from the blog article posted here: http://www.omgubuntu.co.uk/2014/08/netflix-linux-html5-support-plugins

The post at the link above is by far the most thorough and well-documented guide to configuring Netflix to work with Ubuntu.  I did have to reboot my system after performing the steps; otherwise Netflix threw an error that I can no longer replicate to provide further information.

Thanks to the perseverance of the Ubuntu community, Netflix is finally supported and the Ubuntu distribution is going to get a lot more footing in the war of operating systems for desktop PCs.  Since it's still free, I'll be sticking with Ubuntu before spending any money on a Windows operating system.

Wednesday, August 20, 2014

Using USB Devices on Solaris (ZFS)

UPDATE:
Solaris 11 cannot import pools created by 'native' ZFS on Linux, and vice versa.  Thus, you should export your data to NTFS so that everyone is happy, as NTFS is the most widely supported filesystem for read-only use.  Otherwise it's Solaris-to-Solaris only.  I have not tested import/export between native-ZFS Linux hosts, but I suspect the zpool version (5000 or something like that) will work fine between Fedora-type distros and Ubuntu.


Recently I needed to export a large amount of data from our Solaris NFS server.  However, getting this information straight off this robust server is not as intuitive or straightforward as you might first think.  Many filesystems are not natively supported by Solaris, which can cause a lot of headaches trying to figure out how to use fdisk and format.  Additionally, a drive with a 4096-byte block size (like a Seagate 3TB hard drive) may not even be compatible with UFS.

Before you begin, take a look at this chart on Wikipedia which breaks down the various versions of zpool and zfs.  This will only work if you are using compatible zpool/zfs versions on both hosts.  Currently the native Linux ZFS project uses a zpool version not supported by Solaris 11 and vice versa, so those pools cannot be imported between the two: http://en.wikipedia.org/wiki/ZFS#List_of_operating_systems_supporting_ZFS

ZFS is a good solution and pools can be imported on other Linux distributions such as CentOS or Ubuntu.  Here is a synopsis in which I exported a large amount of data with the label "TwitterFeeds".

List available drives
$ format -e
...    other disks likely shown here...
54. c9t0d0 <Seagate-Expansion Desk-0604 cyl 45597 alt 2 hd 255 sec 63>
          /pci@0,0/pci108e,cb84@2,1/hub@6/storage@2/disk@0,0
I created a ZFS pool on the USB drive, added a ZFS file system, chowned it to my default user, and started rsync under nohup, logging to rsync.out:
$ sudo zpool create TwitterFeeds c9t0d0
$ sudo zfs create TwitterFeeds/export 
$ sudo chown -R user:group /TwitterFeeds/export
$ nohup rsync -r --progress /sasdata/TwitterFeeds /TwitterFeeds/export > rsync.out 2>&1&
Example entry of output in rsync log:
bytesize Percent% xferRate Time(file#, to-check=filesremain/estimatedtotal)Absolute/File/Path/Filename.ext
805313211 100%   29.23MB/s    0:00:26 (xfer#364, to-check=1008/1417)TwitterFeeds/08/19/2014/05/17/22/10FEB25021318-S3DM_R5C4-053771096010_01_P001.DAT
You can use tail -f on the rsync log to periodically view progress.
$ tail -f /TwitterFeeds/export/rsync.out
Once rsync has completed, exporting the pool unmounts the ZFS file system and removes it from your list of available zpools.
$ sudo zpool export TwitterFeeds
The USB drive can now be physically removed and plugged into another computer.  ZFS detects any moved or renamed devices and adjusts the configuration appropriately.  To discover available pools on the target system, run the zpool import command with no options.  To import a pool, specify its name as an argument to the import command (TwitterFeeds here).  By default, zpool import only searches devices within the /dev/dsk directory.  If devices exist in another directory, or you are using pools backed by files, you must use the -d option to search alternate directories.  This may be required when using ZFS on CentOS/Ubuntu.
$ zpool import TwitterFeeds
$ zpool import -d /dev/rdsk/c9t0d0 TwitterFeeds #specifying disk device path manually
Once imported, "$ sudo zfs get all" should show a mountpoint property for the pool, which is where you access the data. From this point you should be able to use native OS filesystem utilities like cp, rm, chmod, and others.
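For example, something like the following (the mountpoint value depends on the pool; the subfolder is just illustrative):

$ sudo zfs get mountpoint TwitterFeeds/export
NAME                 PROPERTY    VALUE                 SOURCE
TwitterFeeds/export  mountpoint  /TwitterFeeds/export  default
$ cp -r /TwitterFeeds/export/somefolder /local/destination/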

Reference Links:
Managing ZFS Storage Pools
Managing ZFS File Systems (Not the Physical Devices!)
Using ZFS on Linux

Friday, August 8, 2014

Fetching Artifactory Maven Dependencies via Python

This clever Python script will fetch the dependencies identified in a YAML file.  Don't forget that Python uses whitespace for scope, so if you're going to copy and paste, get it right.  The script uses Artifactory's GAVC search API, the same API used by the Maven plugin for Artifactory.  A nice feature is that you can run it multiple times; by comparing MD5 hashes, it will only download JAR files that have changed.  Also note that javadoc, sources, and POM artifacts are skipped by the condition that filters the search results.
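For reference, the same GAVC search the script performs can be exercised directly with curl (host taken from the script below, coordinates from the example YAML):

$ curl "http://art.mydomain.com:8081/artifactory/api/search/gavc?g=org.apache.accumulo&a=accumulo-core&v=1.5.1"

The response is a JSON document whose 'results' array lists URIs; the script follows each URI to read the artifact's downloadUri and MD5 checksum.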

Python also has a great API for Docker, so I will be using this code to implement something like fig to automatically deploy containers in my environment that pull the latest dependencies from Artifactory for installation.

First, an example of the YAML:

artifacts:
- artifact:
     artifactid:     accumulo-core
     groupid:     org.apache.accumulo
     version:     1.5.1

- artifact:
     artifactid:     accumulo-fate
     groupid:     org.apache.accumulo
     version:     1.5.1

- artifact:
     artifactid:     accumulo-trace
     groupid:     org.apache.accumulo
     version:     1.5.1



The Script:

#!/usr/bin/env python
import yaml
import hashlib
import os
import sys
import httplib
import json
import urllib2
from urlparse import urlparse

__author__ = 'champion'
artifactory_url = "art.mydomain.com:8081"
#local download folder
local_folder = "./deps"
conn = httplib.HTTPConnection(artifactory_url)


def download(filename, remote):
    print "\nDownloading: " + remote
    req = urllib2.urlopen(remote)
    blocksize = 16 * 1024
    with open(local_folder + "/" + filename, 'wb') as fp:
        while True:
            chunk = req.read(blocksize)
            if not chunk:
                break
            fp.write(chunk)


def main():
    if not os.path.exists(local_folder):
        os.mkdir(local_folder)

    # Take last arg as filename
    filename = sys.argv[-1]

    if os.path.isfile(filename):
        stream = open(filename, 'r')
        yaml_instance = yaml.safe_load(stream)
        stream.close()

        artifacts = yaml_instance["artifacts"]

        print "\nFetching Artifacts in '" + filename + "' from Artifactory... "

        #for each element in YAML...
        for artifact in artifacts:
            entry = artifact["artifact"]
            artifact_version = str(entry["version"])
            artifact_groupid = str(entry["groupid"])
            artifact_artifactid = str(entry["artifactid"])

            # Create API call
            api_call = "/artifactory/api/search/gavc?g=" + artifact_groupid + "&a=" + artifact_artifactid + "&v=" + artifact_version

            # GET the results
            conn.request("GET", api_call)
            r1 = conn.getresponse()

            # If GET was Successful
            if r1.status == 200 and r1.reason == "OK":
                uris = json.loads(r1.read())["results"]
                # Omit Javadoc, Sources, POMs...
                for uri in uris:
                    link = uri["uri"]
                    if not link.endswith("pom") and not link.endswith("sources.jar") and not link.endswith("javadoc.jar"):
                        #Request the Artifact information
                        conn.request("GET", link)
                        artifact_json = conn.getresponse().read()
                        artifact_props = json.loads(artifact_json)

                        downloaduri = artifact_props["downloadUri"]
                        md5 = artifact_props["checksums"]["md5"]
                        fname = urlparse(downloaduri).path.split('/')[-1]

                        #Always Download Dep, unless conditions change.
                        omit_dl = False
                        if os.path.exists(local_folder + "/" + fname):
                            print "\nLocal Copy of '" + fname + "' Exists, checking md5..."
                            print "Remote MD5: " + md5
                            curr_md5 = hashlib.md5(open(local_folder + "/" + fname, 'rb').read()).hexdigest()
                            print " Local MD5: " + curr_md5
                            if curr_md5 == md5:
                                omit_dl = True  # hashes match, skip the download

                        if not omit_dl:
                            download(fname, downloaduri)
                        else:
                            print "Hashes match, omitting download..."
                    else:
                        #artifact is not the binary jar
                        continue

            else:
                print "Artifact was not found in Artifactory."

        conn.close()
        print "Done."
    else:
        print "YAML file: '" + sys.argv[-1] + "' not found."



if __name__ == '__main__':
    main()
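Assuming the script is saved as fetch_artifacts.py and the YAML above as artifacts.yaml (both names are arbitrary), running it looks like:

$ python fetch_artifacts.py artifacts.yaml

Downloaded JARs land in ./deps, and a second run only fetches artifacts whose remote MD5 differs from the local copy.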

Saturday, August 2, 2014

Ubuntu 14.04 LTSP Docker Container

Just finished creating a redundant LTSP server as a Docker container.  Using a Dockerfile, we can now build portable LTSP servers to host both thick and thin client PCs. This of course depends on the LAN having at least DHCP and DNS already provided.  DHCP is used to support the PXE boot process, while DNS is used by the Dockerfile to resolve Ubuntu APT Repository package host IPs.  Thus, an Internet connection is also needed.

In this scenario, the container is running on the "docker01.devops.mydomain.com" CentOS 6.5 server.  LTSP ports were forwarded through the "devops.mydomain.com" VLAN Gateway (Zentyal server with two ethernet cards) to the docker01 host.  This allows clients on the normal domain LAN to hit the docker01 host behind the gateway on the VLAN.  The container running LTSP then has the gateway's forwarded ports mapped to the docker01 host so the clients can access the LTSP services.

Port 69/udp - TFTPD, serves the PXE boot configuration to clients referred by DHCPd.
Port 10809/tcp - Network Block Device daemon (NBDd), serves the base chroot up as a block device for the client. The external port is 10809 as well.
Port 22/tcp - SSH, serves the authentication and home folder via "SSHFS".  The external port is 2222 so that it does not conflict with the default SSH port 22 on the docker01 host.

I initially had some trouble with SSHFS due to custom security configurations in the Docker Hub-provided Ubuntu 14.04 image.  Essentially, my SSH connections were being closed by the server as soon as authentication completed.  I suspected a PAM configuration issue, but didn't want to dig through all the undocumented changes.  So I performed a fresh install of Ubuntu 14.04 Server edition (~900MB) on a VM and exported the file system as a gzipped tarball, excluding /sys and /proc.  I then imported that file system as a new Docker image.

Dockerfile:
# custom ubuntu 14.04 base image (imported from the VM tarball)
FROM champion:ubuntu-base
MAINTAINER championofcyrodiil@blogspot.com

RUN apt-get update

RUN apt-get install -y ltsp-server
RUN ltsp-build-client --arch="amd64" --fat-client
RUN ltsp-config lts.conf

Manually update lts.conf:
Port 22 would likely conflict with the Docker host's SSH port, so manually add 'SSH_OVERRIDE_PORT=2222' below the [Default] tag inside /var/lib/tftpboot/ltsp/amd64/lts.conf.  Also add 'SERVER=' so that the client hits the Docker host with its mapped ports, since it can't reach the container's internally resolved hostname that is used by default.  The value should be the IP of your Docker container's host.
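The relevant part of lts.conf ends up looking something like this (the IP address is only an example; use your docker01 host's address):

[Default]
SERVER=192.168.1.10
SSH_OVERRIDE_PORT=2222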

Run Command:
$ docker run -h ltsp -it -p 2222:22 -p 10809:10809 -p 69:69/udp -v /data/ltsp-home/:/home/ -v /data/ltsp-opt/:/opt/ --privileged=true champion:ltsp /bin/bash

-h Sets the hostname to 'ltsp'; although this does not really matter, it helps to know which container you're attached to.
-p Maps specific ports. (Uppercase P will use the docker range, check docker run usage)
-v The first mount point specified is the docker host path, the second mount point (after the colon) is the mount point inside the container.
--privileged=true Gives the container permission to access kernel parameters and other information from /proc; this is definitely needed when you build the LTSP fat client.  I left it on at run time as well, but it may not be needed there.

Once the container has started and you're at the bash prompt, you just need to fire up the daemons:

$ service ssh start
$ service tftpd-hpa start
$ service nbd-server start

Press Ctrl+P, Q to drop out of the container's bash prompt without killing the container's root bash process.  Ultimately you would want these service calls in a custom bash script, with the 'wait' command at the end, and specify it in your Dockerfile as the default command; a sketch is shown below.  For sanity's sake, though, the manual process above gives the gist of what needs to happen.
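Such a start script (call it start-ltsp.sh; the name and the way it stays in the foreground are just one way to do it) might look like:

#!/bin/bash
# Start the LTSP-related daemons
service ssh start
service tftpd-hpa start
service nbd-server start
# Keep a foreground process alive so docker does not consider the container finished
tail -f /dev/null &
wait

It would then be added in the Dockerfile with ADD and a RUN chmod +x, and referenced via CMD, the same pattern used in the apt-cacher post further down.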

[user@docker01 ~]$ docker ps -a
CONTAINER ID        IMAGE                                      COMMAND                CREATED             STATUS              PORTS                                                                                                NAMES
2c09964e1882        champion:ltsp                                  /bin/bash              2 hours ago         Up 2 hours          0.0.0.0:69->69/udp, 0.0.0.0:2222->22/tcp, 0.0.0.0:10809->10809/tcp                                   ltsp_server

LTSP Client Login Screen




Ultimately, the configuration needs to be adjusted so that the fat/thin client connects to the Docker container's host.  That means mapping ALL of the required LTSP ports (69, 10809, 22) to the container using the -p options of the docker 'run' command, and modifying the client's tftpd-provided LTSP configuration (/var/lib/tftpboot/ltsp/amd64/lts.conf) to use the changed SSHFS port, which is 2222 in this post.

Geoserver 2.5.2 Dockerfile

This Geoserver Dockerfile requires that the Oracle JDK tar.gz reside in the Dockerfile's build context.

#
# example Dockerfile for Geoserver 2.5.2
#

FROM ubuntu:14.04
MAINTAINER championofcyrodiil@blogger.com

# wget and unzip are not in the base image but are needed below
RUN apt-get update && apt-get install -y wget unzip

#JDK
RUN mkdir /opt/java
ADD jdk-7u60-linux-x64.tar.gz /opt/java
#Line below commented out because docker auto extracts a tar.gz when ADDed above
#RUN tar -C /opt/java -xf /opt/java/jdk-7u60-linux-x64.tar.gz
ENV JAVA_HOME /opt/java/jdk1.7.0_60
ENV PATH $PATH:$JAVA_HOME/bin

#Geoserver, wget or put the zip in the Dockerfile folder.
#ADD geoserver-2.5.2-bin.zip /opt/
RUN wget -P /opt/ http://sourceforge.net/projects/geoserver/files/GeoServer/2.5.2/geoserver-2.5.2-bin.zip
RUN unzip -d /opt/ /opt/geoserver-2.5.2-bin.zip
ENV GEOSERVER_HOME /opt/geoserver-2.5.2/
RUN chmod -R 755 /opt/geoserver-2.5.2

EXPOSE 8080
CMD ["/opt/geoserver-2.5.2/bin/startup.sh"]


Thursday, July 10, 2014

Apt-cacher with Docker

I have been working with Docker from www.docker.io for a couple of weeks and I love it.  I have made containers for Postgres with PostGIS extensions, Accumulo/HDFS/Zookeeper running pseudo-distributed, Geoserver, and as of yesterday, an apt-cacher for our Ubuntu 14.04 workstations at the office.  Here is all you need!

You can get the /etc/apt-cacher/apt-cacher.conf from an ubuntu system after installing the apt-cacher package.

#
# example ubuntu 14.04 apt-cacher
#

FROM ubuntu:14.04
MAINTAINER championofcyrodiil.blogspot.com

USER root
RUN apt-get update
RUN apt-get -y -q install apt-cacher
ADD apt-cacher.conf /etc/apt-cacher/apt-cacher.conf
ADD start-cacher.sh /root/start-cacher.sh
RUN chmod +x /root/start-cacher.sh
EXPOSE 3142


#Default Docker run command
CMD ["/root/start-cacher.sh"]


And of course, the 'start-cacher.sh' bash script.  Make sure it has been chmod +x so it's executable.

#!/bin/bash
/usr/sbin/apt-cacher -R 3 -d -p /var/run/apt-cacher.pid
tail -f /var/log/apt-cacher/access.log
wait


Once that is done, place all three files into a single folder, cd to that folder, and build the Docker image; here is my example:


 $ sudo docker build -t local:apt-cacher .

Note the period at the end, which specifies the current directory (containing the scripts) as the build context.  Next it is time to run the container; make sure the host (-h) matches the daemon listening host specified in /etc/apt-cacher/apt-cacher.conf.

$ sudo docker run -d -p 3142:3142 -h apt-cacher local:apt-cacher

$ sudo docker ps -sl
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS                    NAMES               SIZE
7b6a7ff25691        local:apt-cacher    /root/start-cacher.s   18 hours ago        Up 18 hours         0.0.0.0:3142->3142/tcp   clever_hoover       361.4 MB

For client systems to use the proxy, add the file /etc/apt/apt.conf.d/01proxy with the contents:

Acquire::http::Proxy "http://$DOCKERHOST:3142";

Like so, substituting your Docker host's address for $DOCKERHOST:
$ echo 'Acquire::http::Proxy "http://$DOCKERHOST:3142";' | sudo tee -a /etc/apt/apt.conf.d/01proxy
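To verify the cache is being used, run an update on a client and watch the cacher's access log (the container name below comes from the docker ps output above; yours will differ):

$ sudo apt-get update                # on the client
$ sudo docker logs -f clever_hoover  # on the docker host; the start script tails /var/log/apt-cacher/access.log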

Wednesday, June 11, 2014

Screen Locking with X and C

I had an issue recently with screen locking not working properly in LTSP.  The shadow file is not available on the file system of the thick client, so when the screen lock runs as a local program, the user cannot unlock it to get back to the desktop.

The workaround was to install xscreensaver, which authenticates through a PAM module when the screen is locked.  However, the 'hook' that tells the system to lock after a set idle time is disabled, to prevent GNOME from locking out the user.

So, I created my own C application which runs in the background and monitors for user mouse movement or keystrokes.  There are four threads:


  • Timer Thread (Track user idle time, locks screen if threshold is reached.)
  • Screen Watch Thread (Start timer if screen is ever Unlocked)
  • Mouse Thread (Restart timer if mouse is moved)
  • Keyboard Thread (Restart timer if keystroke is made)

I used several sources, and although this is not the most elegant solution, it does work.  I would appreciate any feedback, as this is the first C application I have ever written, except for "Hello, world."




#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <time.h>
#include <pthread.h>

typedef struct Timer
{
double elapsed;
struct timespec start,finish;
}Timer;


//Predefine global vars.
double timeLimit = 300.0; //seconds
int verboseClock = 0; //1-True,0-False
int verboseMouse = 0; //1-True,0-False
int verboseKeyboard = 0; //1-True,0-False

//do not modify these
volatile int movement = 0;
volatile int screenlocked = 0;
pthread_t threads[5]; //index 0 unused; main() creates threads 1-4
int rc_one;
int rc_two;
int rc_three;
int rc_four;


void *MouseWatch(void *threadid) {
    //This thread will watch the mouse and reset the timer if an event fires.
    FILE *fp;
    char buffer[3];
    /* Open the command for reading from xinput. */
    fp = popen("xinput --test 9", "r");
    if (fp == NULL) {
        printf("Failed to watch mouse\n");
        pthread_exit(NULL);
    }
    printf("Watching Mouse...\n");
    /* Read the output a line at a time. */
    while (fgets(buffer, sizeof(buffer)-1, fp) != NULL) {
        if(verboseMouse) {
            printf("%c", buffer[0]);
        }
        if(buffer[0]=='m') {
            //motion detected, flag it for the timer thread
            movement = 1;
        }
    }
    pclose(fp);
    return NULL;
}



void *LockTimer(void *threadid) {
    //This thread acts as a countdown to locking, provided no movement is seen the whole time.
    Timer stopwatch;
    stopwatch.elapsed = 0.0;
    //start timer.
    clock_gettime(CLOCK_MONOTONIC, &stopwatch.start);
    //loop until the idle time exceeds timeLimit seconds.
    while(stopwatch.elapsed < timeLimit) {
        if(movement == 1) {
            movement = 0;
            //mouse moved or key pressed, restart the clock
            clock_gettime(CLOCK_MONOTONIC, &stopwatch.start);
        }
        //get elapsed time; if the clock has exceeded the threshold, lock the screen.
        sleep(1);
        clock_gettime(CLOCK_MONOTONIC, &stopwatch.finish);
        stopwatch.elapsed = (stopwatch.finish.tv_sec - stopwatch.start.tv_sec);
        stopwatch.elapsed += (stopwatch.finish.tv_nsec - stopwatch.start.tv_nsec) / 1000000000.0;
        if(verboseClock) {
            printf("elapsed time: %f\n", stopwatch.elapsed);
        }
        if(screenlocked) { break; }
    }
    system("xscreensaver-command -lock");
    screenlocked = 1;

    //wait here until the ScreenWatch thread reports the screen was unlocked
    while(screenlocked) {
        sleep(2);
    }

    printf("restarting the timer\n");
    rc_two = pthread_create(&threads[2], NULL, LockTimer, (void *)2);
    if(rc_two) {
        printf("ERROR; return code from pthread_create() is %d\n", rc_two);
        exit(-1);
    }

    pthread_exit(NULL);
}

void *KeyboardWatch(void *threadid) {
    //This thread will watch the keyboard and reset the timer if a key is pressed.
    FILE *fp;
    char buffer[3];
    /* Open the command for reading from xinput. */
    fp = popen("xinput --test 10", "r");
    if (fp == NULL) {
        printf("Failed to watch keyboard\n");
        pthread_exit(NULL);
    }
    printf("Watching Keyboard...\n");
    /* Read the output a line at a time. */
    while (fgets(buffer, sizeof(buffer)-1, fp) != NULL) {
        if(verboseKeyboard) { printf("%s", buffer); }
        if(buffer[0]=='k') {
            //key press detected, flag it for the timer thread
            movement = 1;
        }
    }
    pclose(fp);
    return NULL;
}

void *ScreenWatch(void *threadid) {
    //This thread watches xscreensaver and tracks whether the screen is 'L'ocked or 'U'nblanked.
    FILE *fp;
    char buffer[50];
    /* Open the command for reading. */
    fp = popen("xscreensaver-command -watch", "r");
    if (fp == NULL) {
        printf("Failed to run command\n");
        pthread_exit(NULL);
    }
    printf("Watching Screen...\n");
    /* Read the output a line at a time - output it. */
    while (fgets(buffer, sizeof(buffer)-1, fp) != NULL) {
        printf("%s", buffer);
        if(buffer[0]=='U') {
            //UNBLANK: screen was unlocked, the LockTimer thread restarts its countdown
            screenlocked = 0;
        }
        if(buffer[0]=='L') {
            //LOCK: screen is locked
            screenlocked = 1;
        }
    }
    pclose(fp);
    return NULL;
}



int main()
{
    printf("In main: creating threads. \n");
    //start mouse watch thread
    rc_one = pthread_create(&threads[1], NULL, MouseWatch, (void *)1);
    if(rc_one) {
        printf("ERROR; return code from mouse thread is %d\n", rc_one);
        exit(-1);
    }
    //start timer thread
    rc_two = pthread_create(&threads[2], NULL, LockTimer, (void *)2);
    if(rc_two) {
        printf("ERROR; return code from timer thread is %d\n", rc_two);
        exit(-1);
    }
    //start screen watch thread
    rc_three = pthread_create(&threads[3], NULL, ScreenWatch, (void *)3);
    if(rc_three) {
        printf("ERROR; return code from screen watch thread is %d\n", rc_three);
        exit(-1);
    }
    //start keyboard watch thread
    rc_four = pthread_create(&threads[4], NULL, KeyboardWatch, (void *)4);
    if(rc_four) {
        printf("ERROR; return code from keyboard thread is %d\n", rc_four);
        exit(-1);
    }
    //Loop forever until the user kills the main process. (Ctrl+C)
    while(1) {
        sleep(10);
    }
}
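To build and run it (the source file name is arbitrary; -lrt is needed for clock_gettime on older glibc):

$ gcc -o idle-locker idle-locker.c -lpthread -lrt
$ ./idle-locker &

Note that the xinput device IDs (9 for the mouse, 10 for the keyboard) are hard-coded in the popen calls; check 'xinput list' on your client image and adjust them as needed.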

Thursday, May 8, 2014

nomachine (!M) NX client on ubuntu LTSP thick/fat clients

The NX client/server software released by the company NoMachine is fantastic.  I did discover that their Linux x86 installer does not properly handle missing dependencies on a CentOS 6.5 Minimal installation, but it was my fault for mistakenly choosing the wrong binary from their download site.  Nevertheless, a few yum installs later, even their 32-bit client is fantastic!  Of course, I'm running their x64 client in our development environment now.

There is a lot to take in when you first run the client tool.  Its classy, simple icon, installer, and website pitch lead you to believe this magic tool will just work without a lot of features.  And for me, it did.  Every time.  On different platforms.  The streaming of multimedia through the NX client over a separate UDP port is genius, and there are tons of small features that make this worth the effort.


There isn't a custom view you can't get with their client.  All of your devices are integrated as well.  I haven't even started to use collaboration tools.  But they are there if you need them, and even include recording video sessions.


I am running their software on the newest Ubuntu 14.04 LTSP "fat" client desktop.  I initially ran into an issue with the thick client LDM sessions not properly locking the screen, and filed a bug report on Launchpad here: https://bugs.launchpad.net/ubuntu/+source/unity/+bug/1316320.  I was able to work around it by installing the classic xscreensaver package with the GL extras.  Not only is this batch of screen savers really cool, it also lets you create a desktop and Unity bar shortcut that runs the xscreensaver-command lock.  Just make sure you enable the authentication dependencies on the client image as well; otherwise your screen cannot be unlocked if your client session is logged in with a terminal server account that isn't also on the image.


The hardware I chose was the Intel NUC.  There are several variations of this model, but model #D34010WYKA worked great for me, with a modern monitor on the DisplayPort as the primary output.  Running this PC without a hard disk or wireless networking is very snappy.  With the NoMachine client on top of that, I can stream YouTube video with audio, flawlessly, from a headless KVM running CentOS 6 and a "nohup" GDM session.  It was quite impressive.


My next investigation will be deeper into virus scanning solutions for the LTSP environment.  Although the client image is read-only, a user can still execute downloaded code from their home folder or the temporary write space on the RAM filesystem (/tmp).  They could exploit vulnerabilities on your network systems and create back-door entry points, causing information leakage and more.  Often users who aren't intentionally malicious will pick up these Trojans and viruses from various websites.  Having a modern virus scanning engine will stop a lot of this junk.  It may not stop someone writing custom code and targeting your network specifically, but it will help ensure avoidable accidents don't happen.


ClamAV is looking like a good bet.  It is an open source (GPL) antivirus engine.  But McAfee has a lot of years under its belt and is already on the approved list for a lot of organizations.  Corporate solutions tend to have a smaller footprint than the "first year free" edition you often get when buying a PC from a partnered vendor, and paid products usually include personal support.

Wednesday, April 23, 2014

Thin Client Computing with Ubuntu Linux

I used the Ubuntu 12.04 Alternate install ISO and chose the F4 boot mode "LTSP Installation" during ISO boot.

LTSP is a Thin Client solution for Linux operating systems.  This was chosen because of the preferred use of Ubuntu Linux for development.  Benefits of LTSP are as follows:

• Reduced Costs – Thin clients require fewer resources than traditional thick clients and therefore have a lower procurement cost.
• No Licensing Fees – LTSP is open source software released under the GPLv2 license.
• Less Maintenance – The single point of control is the operating system image used by the thin clients.
• Security – LTSP clients are secured via SSH and are restricted to their own LAN.
• LTSP Display Manager (LDM) – A Python application for remote desktop SSH sessions; KDM/GDM do not support remote SSH sessions.

Typical LTSP Layout


LTSP is typically run from a single server with two network cards that piggyback the LTSP isolated LAN and the larger network.  The LTSP Server uses NAT to provide connections between thin clients and the rest of the resources on the larger network.  This allows more control over the connections between developers and network systems and services.  Developers still have access to web services and network bound APIs they need, without necessarily having access to sensitive management protocols and systems.


LTSP supports a concept called 'screen scripts'.  Multiple screen scripts can be run at the same time on different virtual consoles (Ctrl + Alt + F[1-9]).  Users can toggle between screens, while screen six (or seven?) is reserved for the LDM.  Screen scripts can also be used to enable rdesktop for connecting to a Windows server.

The LTSP configuration file (lts.conf) allows many custom settings to be applied per client machine.  Here are some examples:
[AA:BB:CC:DD:EE:FF]
# Use nvidia driver for this thin client, overriding auto-detected driver
XSERVER = nvidia

[FF:EE:DD:CC:BB:AA]
# Set Screen 7 of this client to an RDP session rather than LDM
SCREEN_07 = "rdesktop 192.168.0.253"

A new feature of LTSPv5 is the ability to run Linux applications installed in the chroot ("change root", the image used by the thin clients) environment from within the LDM session.  This means reduced server load, and it enables the use of graphics-intensive multimedia applications and applications that require direct hardware access.  Drawbacks include increased chroot maintenance and increased hardware requirements on thin clients.

Local devices can also be supported with thin clients; so removable media such as CDROM and USB Flash drives can still be used on the thin clients.

Printers are supported and spooling is done on the server. No client-side print driver management required.

Sound is redirected from the server to the client using PulseAudio.  This network-aware client-server sound system can easily go through NAT firewalls.

Although not yet tested, LTSP also supports use of “Thick” clients.  Also known as Fat Clients.  These client machines would have a larger network block device root file system containing a complete OS with all desired additional programs (i.e. Chrome). Since processes are running on the client rather than the server, an admin cannot kill them from a central location.  Internet connectivity is provided directly to the client, so the client needs to be directed to an Internet gateway.