Friday, March 28, 2014

Parrot AR Drone 2.0

So I am starting a new hobby.  The Parrot AR Drone 2.0 is an awesome piece of equipment.

Here are some interesting points.
  • 720p camera mounted to the front
  • GPS supported
  • Runs Linux 2.6
  • MAVLink compatible for use with QGroundControl
  • Uses 2.4 GHz Wi-Fi (802.11b/g/n) for control and configuration
  • And more...
I have only embarked on a single outdoor test flight, and it was very successful.  Since then I have ordered the GPS module and the 2000 mAh batteries with a balance charger for quick charges.  Once everything arrives, I should be able to support near-continuous flight.

There are also a lot of mods that can be performed.  Camera mods, hull mods, transceiver mods, and more.

I have also started looking into an open-source solution on GitHub called ardrone_autonomy.  So far I have been successful at connecting to the drone and reading settings, but I have not yet been able to issue commands.  Ultimately I would like to be able to map my joystick input to flight controls.
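For anyone trying the same thing, the workflow I have been attempting looks roughly like this (a hedged sketch: it assumes a working ROS install with the ardrone_autonomy package built, and that your machine is connected to the drone's Wi-Fi access point):

```shell
# Start the driver; it connects to the drone at its default IP (192.168.1.1).
rosrun ardrone_autonomy ardrone_driver

# From a second terminal, basic commands are plain ROS topics:
rostopic pub -1 /ardrone/takeoff std_msgs/Empty    # take off
rostopic pub -1 /ardrone/land std_msgs/Empty       # land

# Joystick mapping would ultimately mean publishing geometry_msgs/Twist
# messages on /cmd_vel from a joystick node.
```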

Do you work with these types of devices?  Let me know your thoughts.

I was able to configure both a joystick and a gamepad to control the AR Drone using the SDK from Parrot.  Once you get the code to compile and run in Qt, it is quite easy to use with a little testing.

Also got the GPS working with QGroundControl.  This was a really cool experience, and I have had many flights via GPS.  WARNING: pay attention to altitude settings and ensure the MAX altitude is configured on the drone before using the MAVLink GPS.  If your drone's max altitude is set to 3 m, then it does not matter that you may have 10 m in the GPS flight plan; the onboard limit wins.
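For what it is worth, the onboard limit can also be set straight from a shell (a hedged sketch based on my reading of Parrot's SDK documentation: AT commands go to UDP port 5556, the leading sequence number here is illustrative, and control:altitude_max is specified in millimetres, so 10 m = 10000):

```shell
# Bash-specific /dev/udp redirection; raises the onboard max altitude to 10 m.
printf 'AT*CONFIG=1,"control:altitude_max","10000"\r' > /dev/udp/192.168.1.1/5556
```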

Tuesday, March 18, 2014

Installing Accumulo 1.5.0 on Cloudera Managed CDH4 Cluster

The default Hadoop prefix will not work with a normal distribution of CDH4 under the Cloudera Manager 4 suite.  Here are my settings for the Hadoop prefix and conf directory:

if [ -z "$HADOOP_HOME" ]
then
   test -z "$HADOOP_PREFIX"      && export HADOOP_PREFIX=/opt/cloudera/parcels/CDH/lib/hadoop
else
   HADOOP_PREFIX="$HADOOP_HOME"
   unset HADOOP_HOME
fi
test -z "$HADOOP_CONF_DIR"       && export HADOOP_CONF_DIR="$HADOOP_PREFIX/etc/hadoop"

# cdh4
export HADOOP_HDFS_HOME=/opt/cloudera/parcels/CDH/lib/hadoop-hdfs
export HADOOP_MAPREDUCE_HOME=/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce
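The "test -z ... && export" pattern above assigns a default only when the variable is not already set.  A minimal, runnable illustration of the idiom (the path is just the CDH parcel location from above; nothing actually has to exist there):

```shell
#!/bin/sh
# HADOOP_PREFIX starts unset, so the default is applied.
unset HADOOP_PREFIX
test -z "$HADOOP_PREFIX" && export HADOOP_PREFIX=/opt/cloudera/parcels/CDH/lib/hadoop
echo "$HADOOP_PREFIX"    # prints /opt/cloudera/parcels/CDH/lib/hadoop

# HADOOP_CONF_DIR is already set, so the default is skipped.
export HADOOP_CONF_DIR=/etc/hadoop/conf
test -z "$HADOOP_CONF_DIR" && export HADOOP_CONF_DIR="$HADOOP_PREFIX/etc/hadoop"
echo "$HADOOP_CONF_DIR"   # still /etc/hadoop/conf
```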

Then, modify the classpath inside accumulo-site.xml by adding the CDH parcel paths above to the general.classpaths property (value entries omitted):

    <property>
      <name>general.classpaths</name>
      <value>
        ...
      </value>
      <description>Classpaths that accumulo checks for updates and class files.
        When using the Security Manager, please remove the ".../target/classes/" values.
      </description>
    </property>
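Once the classpath entries are in place, Accumulo can print the classpath it will actually use, which makes a quick sanity check (run from the Accumulo home folder):

```shell
# Prints the resolved classpath so you can confirm the CDH parcel jars appear.
bin/accumulo classpath
```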

Increasing Read Speeds with Accumulo

A few weeks ago I wanted to increase the performance of Accumulo 1.5 so that scanning through large tables for information would happen at a faster rate.  I was getting around 200,000 entries per second prior to this performance modification, and I was able to increase the speed to what is now almost 3,000,000 entries per second with the following steps.

Stop Accumulo
I am able to stop the entire cluster by running the stop-all.sh script inside the Accumulo home folder's "bin" directory.
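As a sketch, assuming the shell is already sitting in the Accumulo home folder:

```shell
# Stops the master, tablet servers, and supporting processes across the cluster.
bin/stop-all.sh
```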

Increase JVM Heap Space to accommodate larger Index Cache 
The tablet server heap space is defined in the environment settings file located in the Accumulo home folder's "conf" directory.  Inside this file you can see the settings for tablet server Xmx and Xms near the bottom, defined in the environment variable "$ACCUMULO_TSERVER_OPTS".  Depending on how much memory is available, you will want to increase this value to accommodate the larger index cache we will configure in the next step.
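As a hedged sketch of that line (the 4 GB figures are illustrative, not necessarily the values from my cluster, and ${POLICY} is the security-policy variable already present in that file):

```shell
# Illustrative sizes only; pick values that fit your tablet servers' RAM.
test -z "$ACCUMULO_TSERVER_OPTS" && export ACCUMULO_TSERVER_OPTS="${POLICY} -Xmx4g -Xms4g"
```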

Increase Index Cache
In the Accumulo home folder's "conf" directory, you should also see a file called "accumulo-site.xml".  Here you can define properties for the Accumulo cluster.  I have set the tserver.cache.index.size property to 512M.
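A sketch of the accumulo-site.xml entry (tserver.cache.index.size is the full property name in Accumulo 1.5):

```xml
<!-- Larger index cache so tablet servers keep more index blocks in memory. -->
<property>
  <name>tserver.cache.index.size</name>
  <value>512M</value>
</property>
```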


I have not had any issues with tablet server memory yet, so I believe this is a good fix.  Please provide feedback and comments below.

There are various other performance tweaks as well, such as not using LVM with CentOS/RHEL and ensuring any virtual machines in the cluster run in "Independent Disk Mode" so writes are flushed straight to disk.