
Thursday, August 3, 2017

Setting VirtualBox SMBIOS Settings to match your PC

In some cases you may want your VM to emulate your physical machine.  For example, if you want to set the Serial Number/Service Tag to match your laptop, you can do this with the following steps:

  1. sudo dmidecode -t system
  2. VBoxManage setextradata "VM name" "VBoxInternal/Devices/pcbios/0/Config/DmiSystemSerial" "#######"
In step 2, do not literally use #######; replace it with the serial number displayed when running dmidecode in step 1.

Essentially, we are configuring the SMBIOS (System Management BIOS) of the virtual machine to present the same strings/settings as the physical PC.  This can be useful when you are running software inside a virtual machine that expects the hardware to be from a specific vendor or to have a specific serial number.
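For example, several of the host's strings can be copied over in one shot.  Here's a sketch using placeholder values; pull the real ones from `sudo dmidecode -t system` on the host, and note the VM must be powered off when setting extradata:

```shell
# VM name and DMI values below are placeholders; substitute your own.
VM="My VM"
KEY="VBoxInternal/Devices/pcbios/0/Config"

VBoxManage setextradata "$VM" "$KEY/DmiSystemVendor"  "Dell Inc."
VBoxManage setextradata "$VM" "$KEY/DmiSystemProduct" "Precision 7710"
VBoxManage setextradata "$VM" "$KEY/DmiSystemSerial"  "ABC1234"

# Verify what was stored:
VBoxManage getextradata "$VM" enumerate | grep Dmi
```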

More information about the SMBIOS can be found here:

More information about setting VirtualBox VM SMBIOS data can be found here:

And last, more information about how to use dmidecode on your Linux PC can be found here:

This technique can be used to "spoof" the VM's Product ID, Serial Number, BIOS Version, Manufacturer and really any other SMBIOS value.

DISCLAIMER:
I AM NOT RESPONSIBLE FOR ANY MISUSE OF SOFTWARE COPYRIGHT LAWS RESULTING FROM THE USE OF THIS INFORMATION

Saturday, August 6, 2016

Linux, Newest Kernel, Latest Hardware, Windows 10



About eight years ago I installed Ubuntu Intrepid Ibex, which I believe was Ubuntu 8.10.  I had a Dell desktop at work, and the company had just rolled out "Trusted Desktop".  The original intention of Trusted Desktop is simply to ensure an end-user's workstation is safe, free of viruses, and has the most recent security updates.  This helps to keep malware and attackers out of the corporate network when users plug in to it daily.

The problem for me is that because the corporation was so large, they did not ask or care about developers' needs.  The first version of Trusted Desktop they rolled out over top of Windows XP completely removed Administration capability, and installed several incompatible driver and encryption-software combinations.  Eventually these systems began showing the "Blue Screen of Death", crawling to a halt under a poorly designed implementation of trusted platform computing.

At this point I saw where Windows was going with my company and discovered they would "allow" employees to use Linux if they signed a waiver and implemented a compliant disk-encryption policy.  So I was off, installing Ubuntu, VirtualBox, and a Windows XP VM to support the Windows-specific work software I needed to use.  In this case it was mostly Microsoft Office and the ActiveID client drivers for my smartcard to work with the VPN software.

I didn't need wireless because this was a desktop plugged in at work.  However I did have two outputs on my video card and hoped I would be able to run a similar dual head/monitor display, the way I had been doing on XP for a couple of years.  I quickly had flashbacks of trying to get wireless drivers working on Mandrake Linux, circa 2004.  It was only a bit painful getting Ibex to use my Radeon correctly, but after all of this I was content for about a year.

Unfortunately, a year later support for 8.10 died out, and it was time to upgrade to 9.04, Jaunty.  In that release, support for my old ATI Radeon GPU was dropped in favor of the newest version of X shipping with Jaunty.  I felt betrayed by the Linux community for leaving my crappy old GPU behind.  I had no idea how to write my own driver then, had my own work to do, and so I gave up and went back to Windows XP, and then 7, for a long time.  Of course, later I would provide systems support for about fifty developers running Ubuntu 14.04 LTS, but that topic would cause me to digress, as I usually do.

Well, this past week I have gone back to Linux, this time choosing Debian 8, Jessie.  I chose Debian because I am feeling unsure about where Canonical will take Ubuntu in the future, and I wanted something I was already familiar with in regard to package management.  For example, I recently used a Slackware 10 distribution on an old laptop I have, but it leaves much to be desired in regard to community support for the latest hardware.

After choosing Debian for my OS, I got a new Dell Precision 7710 laptop.  This thing really packs a punch with:
  • Intel® Core™ i7-6820HQ CPU @ 2.70GHz
  • Intel Skylake GPU w/ CPU
  • Nvidia® Quadro® M3000M w/4GB GDDR5
  • 32GB (4x8GB) 2133MHz DDR4 SDRAM, Non-ECC
  • Hynix 512GB M.2 PCIe NVMe Class 40 Solid State Drive

You can imagine my disappointment when, eight years after I last ran Linux on a Dell for my personal system, my desktop booted with a notification that my Cinnamon desktop was being rendered in software mode.  Furthermore, my wireless interface was nowhere to be found.

The first issue was with a feature in the BIOS being enabled called Optimus.  I don't want to make this entire post about Optimus, so I'll use a reference from Wikipedia. "Nvidia Optimus is a computer GPU switching technology created by Nvidia which, depending on the resource load generated by client software applications, will seamlessly switch between two graphics adapters within a computer system in order to provide either maximum performance or minimum power draw from the system's graphics rendering hardware."

Apparently the Dell-supported version of Ubuntu knows how to handle this configuration correctly.  However, Debian Jessie out of the box could not.  I chose not to attempt to enable Optimus (Optimux would have been a better name) support on Debian, but found some interesting work here, https://nouveau.freedesktop.org/wiki/Optimus/, that may assist if you choose to do so.  For me, I simply disabled the setting in the BIOS and opted for the Nvidia GPU to be my primary display adapter.  Perhaps later I will investigate Optimus further.

However this led me to my next issue, in that my GPU is not yet supported by Nouveau.  The Quadro M3000M is part of the Maxwell-2 series.

NVIDIA Quadro Mobile Specification Comparison (High-End)
                      Quadro M5000M   Quadro M4000M   Quadro M3000M
    CUDA Cores        1536            1280            1024
    Memory Clock      5GHz GDDR5      5GHz GDDR5      5GHz GDDR5
    Memory Bus Width  256-bit         256-bit         256-bit
    VRAM              8GB             4GB             4GB
    FP64              1/32            1/32            1/32
    TDP               100W            100W            75W
    GPU               GM204           GM204           GM204
    Architecture      Maxwell 2       Maxwell 2       Maxwell 2

A quick review of the list at https://nouveau.freedesktop.org/wiki/CodeNames/ will show you that this GPU is not supported.  So, I had to install the bundle available from Nvidia.  The important thing to remember when using the manufacturer's driver is to ensure DKMS (Dynamic Kernel Module Support) is installed, so that the Nvidia installer can register its kernel module with DKMS and have it rebuilt automatically across kernel updates.
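The rough sequence is sketched below; the installer file name is just an example (use whichever bundle you downloaded from Nvidia):

```shell
# Install DKMS and the headers for the running kernel so the Nvidia
# module can be rebuilt automatically whenever the kernel is updated.
sudo apt-get install dkms linux-headers-$(uname -r)

# Run the Nvidia bundle with DKMS registration enabled
# (file name is an example; use the one you downloaded).
sudo sh ./NVIDIA-Linux-x86_64-367.35.run --dkms

# Confirm the module is registered:
dkms status | grep nvidia
```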

So with my GPU now working, it was time to figure out the issue with my wireless.  A quick `lspci | grep Wireless` returned:

02:00.0 Network controller: Intel Corporation Wireless 8260 (rev 3a)

The driver for Intel wireless adapters is the 'iwlwifi' kernel module, with firmware shipped in a separate package (in Debian, 'firmware-iwlwifi').  Once installed, most systems will match the vendor ID of your hardware during modprobe and load the correct module.  As luck would have it, support for this card was not added until November 2015.

This means my driver doesn't really show up until the November 2015 timeframe, which is kernel 4.3.  It turns out Debian has backports up to kernel 4.6!  So, upgrading with the backports should include support for my wifi card.  Backports warns you to only update a package if you NEED it, and encourages you not to upgrade everything.  However, so much depends on the Linux kernel that upgrading that one package pulls in many other backported packages as dependencies.  As a result, I ended up using backports for pretty much all of my software, namely VirtualBox, X, and GNOME 3.

To perform this upgrade, first look over https://backports.debian.org/Instructions/.  Then, after you have added the backports repo, run the following:

$ sudo apt-get -t jessie-backports install linux-image-amd64

Once this is complete, rebooting your PC should offer the latest kernel at the Debian GRUB screen, in my case 4.6+74~bpo8+1.  If I understand this naming convention correctly, it means "Linux Kernel 4.6, Build number 74, Backport for 8.0 Build 1".  Which is quite the mouthful.
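With the backports repo already enabled, the wifi firmware can come from the same place.  A sketch (the package name is Debian's firmware-iwlwifi; the dmesg grep is just a sanity check):

```shell
# Pull the Intel wireless firmware from backports, matching the new kernel.
sudo apt-get -t jessie-backports install firmware-iwlwifi

# After a reboot, confirm the iwlwifi module found and loaded its firmware:
dmesg | grep -i iwlwifi
```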

As a bonus, the above kernel will also include support for the latest bluetooth adapter!  The end result of all this work was a bleeding-edge open-source operating system running VirtualBox 5.0 (from backports), with USB 3.0 support and 3D acceleration!


Thursday, July 28, 2016

Parsing tcpdump/pcap files in bash, without Wireshark.

Sometimes I work in an environment in which software access is restricted.  Things like Wireshark are DEFINITELY not allowed.

However, I'm still expected to troubleshoot network connectivity issues.  Luckily, tcpdump exists in the EPEL Repo, but leaves much to be desired when trying to read the PCAP files.  I decided to read up on the PCAP spec, and create my own parser in bash.  Crazy? Maybe.  But, it does work.

Another nice capability here is that using something like vim, you can easily tweak this script to dump application data for further debugging.  For example, switch your HTTPd to run non-SSL, run a capture, and dump the application data.  Of course, you may have to work out the correct byte offset for your packets, as this is a bare-minimal implementation of a pcap parser and does not nearly support the full scope of capability provided by libpcap.

The real issue here is that I have not taken the time to parse some flags which consist of single bits.  There is also much work to be done in regard to writing the conditionals for various Ethernet, IP Routing and TCP frames.  But this is a good baseline to start.
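For instance, the IPv4 header length hides in the low nibble of the header's first byte, which is exactly the kind of bit-level parsing still missing.  A sketch in bash arithmetic (the byte value is a typical example):

```shell
# First byte of a typical IPv4 header is 0x45:
# high nibble = version (4), low nibble = IHL in 32-bit words (5).
byte=0x45
version=$(( (byte >> 4) & 0x0f ))
header_len=$(( (byte & 0x0f) * 4 ))   # words -> bytes
echo "IPv4 version=$version, header=$header_len bytes"
# Prints: IPv4 version=4, header=20 bytes
```

A header with IP options set would have an IHL greater than 5, which is why a fixed byte offset eventually walks off the rails.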

To use this script, just run 'tcpdump -w file.pcap' to run a capture.  Once done, pass the file as an argument to the pcap-analyzer bash script, and you will be able to read your frames on the CLI!  This should work out of the box with CentOS/RHEL 6 Minimal, with the addition of the tcpdump RPM.

Here is a screenshot.



As you can see, frame 2 starts to break because of the parsing.  Some flags actually mean the header is going to be 28 bytes instead of 26.  As a result of the byte variations, the byte iterator ends up on some obscure part of the stream I don't handle, and things start to break.  Git forks are welcome if you like this little utility and are interested in making it more robust.

https://github.com/charlescva/common/blob/master/pcap-analyzer.sh

One final important characteristic of Ethernet wire data is that it is ALWAYS big endian (network byte order).  This is confusing because most CPUs are little endian.  To work around this, I use both 'hexdump' and 'od', because one enforces big endian while the other is CPU dependent.
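For example, the 4-byte magic number at the start of every pcap savefile tells you which byte order the capturing machine used.  A sketch using a stand-in file rather than a real capture:

```shell
# Write the byte-swapped pcap magic (d4 c3 b2 a1; octal escapes for
# printf portability) as a stand-in for a real capture's first 4 bytes.
printf '\324\303\262\241' > magic.bin

# 'od -t x1' dumps bytes in file order, independent of CPU endianness.
magic=$(od -A n -t x1 -N 4 magic.bin | tr -d ' ')
if [ "$magic" = "d4c3b2a1" ]; then
    echo "capture written on a little-endian machine"
fi
```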

Other resources used:
http://www.tcpdump.org/manpages/pcap-savefile.5.html
http://www.tcpdump.org/linktypes.html
https://en.wikipedia.org/wiki/Ethernet_frame#Ethernet_II
https://en.wikipedia.org/wiki/IPv4#Packet_structure
https://en.wikipedia.org/wiki/Transmission_Control_Protocol#TCP_segment_structure

Saturday, June 11, 2016

Apache Nifi Behind Apache (httpd) (SSL to Non-SSL)

ProxyPreserveHost On

<Location "/nifi">
    ProxyPass "http://localhost:8080/nifi" max=20 ttl=120 retry=300
    ProxyPassReverse "http://localhost:8080/nifi"
    RequestHeader add X-ProxyScheme "https"
    RequestHeader add X-ProxyHost "proxyserver.acme.com"
    RequestHeader add X-ProxyPort "443"
</Location>

<Location "/nifi-api">
    ProxyPass "http://localhost:8080/nifi-api" max=20 ttl=120 retry=300
    ProxyPassReverse "http://localhost:8080/nifi-api"
    RequestHeader add X-ProxyScheme "https"
    RequestHeader add X-ProxyHost "proxyserver.acme.com"
    RequestHeader add X-ProxyPort "443"
</Location>

<Location "/nifi-docs">
    ProxyPass "http://localhost:8080/nifi-docs" max=20 ttl=120 retry=300
    ProxyPassReverse "http://localhost:8080/nifi-docs"
    RequestHeader add X-ProxyScheme "https"
    RequestHeader add X-ProxyHost "proxyserver.acme.com"
    RequestHeader add X-ProxyPort "443"
</Location>


Thursday, April 21, 2016

NAB Show 2016

NABShow 2016 was a great experience.  I have never taken interest in the production of movies, radio, or any form of media in my life, but this event has changed my perspective dramatically.  This will be my first in a series of posts, in which I will share photos and information I learned throughout this event.

A recent gig has required me to take an interest in video production.  It is not common to have an employer pay for conference attendance, but I think it is important.  If you take away anything from this post, it should be that self-enrichment is required to succeed in any field, and you should push your employer to give you an opportunity to pursue it.


The show was in Las Vegas, Nevada.  Being from Virginia, it was a long flight.  However, after meeting so many people from Europe and China, I cannot complain.  I stayed at a cheaper Hotel/Casino called "Circus Circus".  

The "Hotel and Casino" was dated and odd with the whole circus theme, but the room was clean and the last night I was there, I had fun losing a few dollars at the slot machines and roulette.  They do however have a good Italian place with the best meatball sub, and a great steakhouse that has class and has been around for years. ProTip: The steakhouse is reservation only, however if you are alone, you can eat at the classy bar. Also, I can say that I appreciate not seeing one single clown the whole time I was there.  I think acrobats and dancers have mostly replaced them.

The event is Monday through Thursday every year.  I attended Tues & Wed, and had enough for my first visit.  However, if I get the chance to go again, I'll plan it better.  The hotel is only a 15 minute walk from the convention center.  The Las Vegas Convention Center is HUGE.  To give you an idea of what I mean, the "Westgate" is 200,000 sq ft (19,000 m2) and has its own station on the Vegas monorail.


As you can see, the convention center is much larger than the Westgate, which is located to the left of the blue "North Hall" in the picture above.  So be prepared to walk several MILES at this show.

There were so many exhibitors I could not count them all.  But I learned a lot from some of the free sessions included with the Exhibits-only pass.  Make sure you go to those while you are there.  They are usually only 45 minutes, but worth the time.


I realized I was at a whole new type of tech event when the first thing I came across was something called "IP Hybrid Routing".  My first thought was, "What do they mean by 'hybrid'?  Is there some other kind of digital routing that is more efficient than TCP/IP that I don't know about?"

Turns out there is.  Video and audio signals have been around a lot longer than the internet.  These original analog signals gave birth to the broadcasting industry.  As a result, the industry vendors behind analog equipment went straight to using hardware components.  Custom hardware for digital signal processing was faster than computer processors (or GPUs), and could perform the digital operations as quickly as they were needed to encode and decode the signals being recorded live.

As a result, in 2016 there are thousands of vendors of this equipment worldwide.  And they are all chomping at the bit to crack into the cloud.



I attended two sessions by Josh Kolden regarding his C4 ID system.  His system essentially uses natural language semantics and byte hashes to create a metadata tagging system for all of the artifacts used in video production.  There is a whitepaper that goes into more detail available at his site, http://cccc.io.  He also stresses the importance of using a RESTful API with JSON as the glue between your software components.  He named several tools, but Avid and Nuke are two that come to mind.  If you are into software engineering, it shouldn't take much to wrap your mind around how quickly the cloud is going to play (or already plays) a major role in the future of media production.  The C4 ID system was used to produce "The Suitcase", a movie currently in post-production that will be released later this year.  It was very interesting to hear about the challenges of getting this metadata collection process integrated with the movie production staff.

I spent the rest of my first day looking at Drones, LED displays and other gadgets.  There was a TON of them.

Dual-Prop Airplane Drone

Stabilized Sensor Suite

Octo-copter with gimbals and professional video camera

With an adjustable carbon-fiber chassis

These enormous LED displays are high resolution at a distance, and can be scaled to any size

The LED screens are an array of squares with an input, output and LCD display showing about 14 volts and 37 degrees Celsius.

You can even make cubes with live video for your nightclub.

In my next post, I will go over more equipment and talk a bit more about the current VR industry, the future of VR, and post more photos of the event.

Wednesday, March 23, 2016

QuickTip: Tree Alias

Sometimes directory structures are a bit of a nest.  You often want to see the full structure, but may not have access to packages like 'tree'.  I found a great expression and decided to wrap it into an aliased command in ~/.bash_profile

alias tree="ls -R | grep \":$\" | sed -e 's/:$//' -e 's/[^-][^\/]*\//--/g' -e 's/^/   /' -e 's/-/|/'"

Now, when you want to see a complex directory structure in the shell, just change to the working directory, and type `tree`
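For example, with a small nested directory (the names are just for the demo), the pipeline the alias wraps renders:

```shell
# Build a throwaway nested directory for the demo.
mkdir -p /tmp/treedemo/a/b
cd /tmp/treedemo

# The same pipeline the alias wraps:
ls -R | grep ":$" | sed -e 's/:$//' -e 's/[^-][^\/]*\//--/g' -e 's/^/   /' -e 's/-/|/'
# Prints:
#    .
#    |-a
#    |---b
```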

Custom `tree` command

Sunday, November 15, 2015

Dynamic Adaptive Streaming over HTTP (DASH) on WildFly

I've been away for a while.  October 10th, 2015 I started a new project at work, and that has kept me busy.

As demand has increased on internet media, the core web infrastructure has evolved to support new use cases for some pretty old standards.

HTTP has been used since the inception of the web around 1989.  The core concept being: a client's request receives a server's response.  Originally the Content-Length header was used to box the request and response body into some static size.  This was used for a variety of reasons, and was also manipulated to perform denial-of-service (DoS) attacks.  Today, HTTP range requests (the Range header and 206 Partial Content responses) are used to stream partial offsets of a video file, allowing the player to fetch segments at a lower bit rate.  Hence, Dynamic Adaptive Streaming over HTTP.
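Conceptually, the server answers a byte-range request by seeking to the requested offset and returning just that slice, with Content-Length set to the slice size.  Locally, you can mimic the slicing with dd (the file and its contents are stand-ins for a real MP4):

```shell
# Create a stand-in "video" file.
printf 'ABCDEFGHIJKLMNOP' > sample.mp4

# What a server does for "Range: bytes=4-11": skip 4 bytes, return 8.
dd if=sample.mp4 bs=1 skip=4 count=8 2>/dev/null
# Prints: EFGHIJKL  (and the 206 response's Content-Length would be 8)
```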

You can check out the wildfly and castlab's code at my github public repository: https://github.com/charlescva/mobile-dashjs

Notice the Content-Length is determined by the offset provided by the MPD and the initial MP4 containing the metadata about each stream.


You can review the source, but the steps are as follows:

  1. Obtain a standard MP4 example video.
  2. Configure Apache to host the files in the directory dashencrypt is using. This is currently hard-coded in the VideoRegistration.java.
  3. Add Video using the Add a Video tab.  The JAX-RS enabled VideoService.java will handle the request, and dash the file for you.
  4. Upon success, you will see the entry for the video appear on the "Video Player" tab.
Observing the console, you can see the logger outputting each step as it processes the request.
I am still getting my feet wet as well, and came across a great article on the following website, https://arashafiei.wordpress.com/2012/11/13/quick-dash/.  

I'll be working on integrating a "live" stream in which a imaging device like /dev/video0 (webcam) will be used to generate the video segment data, while the MPD (Manifest) and initial MP4 file containing the Movie Box (moov) and/or Fragment Box (moof) are updated on the fly.  Essentially, the goal is to enable "DASHing" of a live video feed.