Raspberry Pi + ESXi Backups

I received my Raspberry Pi 2 in the mail months ago. Of course, I was so excited that I set it up the same night it arrived and proceeded to tinker with the OS and learn about its little nuances. Here’s what I got “done” over the past few months with it:

  1. ARM OSes are not as “flavorful” as I imagined… Obviously the first place I went to find an OS for my Pi was the Raspberry Pi website, but there are plenty of others. However, I wasn’t too impressed, as they all seemed to be either a) built for a specific purpose or b) not fully built out and missing a lot of packages (not everything has been ported to the ARM architecture). While I do love compiling and building some things from scratch, this wasn’t one of those projects. So Raspbian is the way to go…for now.
  2. Raspbian is Debian and I am a Red Hat person… Two things drive me crazy when using Linux: 1) typing a command, hitting tab, and having bash auto-complete fail to find the command, and 2) realizing I am using Debian while attempting #1. At this point in time, Debian has a good, stable train of OSes for the Pi and a nicely filled-out repository of ARM packages. There is a CentOS flavor for the Raspberry Pi available; however, the base repositories don’t have enough packages to make it useful. Usually EPEL alleviates that problem, but it appears there is no ARM-compatible EPEL repo just yet.
  3. Borking the OS is part of the learning process…I had to restore my Pi three times before I got it right.
    1. Setting a static IP in /etc/network/interfaces is apparently the incorrect way to set the IP these days. So wrong, in fact, that it completely borks the OS; I had only been running the Pi for about 15 minutes. The correct way is to modify the /etc/dhcpcd.conf file, and following the instructions in the file should get you there (see the sketch just after this list).
    2. Trying on new OSes besides Raspbian and jumping ship back to Raspbian. As soon as I found out CentOS had an ARM-compatible build, I loaded it up. As I mentioned before, a lot of the functionality I was looking for (usually installed from EPEL packages) was not available.
  4. CUPS + Brother printers don’t get along so well…This is sort of related to the Pi itself, but more to my aging hardware, specifically my Brother printer. This wasn’t one of the intended uses I had lined up for the Pi, but since the networking card in the Brother printer bit the dust, this seemed like a logical next step. The only problem is that the installer provided by Brother (a .deb file) is not ARM-compatible. This print server project has been put on hold for the moment.
  5. Dedicated DHCP server…Dynamic DNS is convenient functionality, and I realized I could take the load of managing DHCP off of my home router at the same time. DNSMasq is a very useful application that accomplishes this task and is incredibly easy to set up. The only issue is that DNSMasq doesn’t dynamically update my home DNS server (which is managed by FreeIPA). As of recently, though, FreeIPA does have a way of dynamically receiving updates from the ISC DHCP server package, as long as it is configured correctly (a rough sketch follows this list). However, performing DNS/DHCP changes during the day creates some unique networking issues for others on the network (specifically web-surfing girlfriends). This project is also on hold until I can find a time when I can cause network disruptions without bothering anyone but myself.
  6. ESXi NFS backup server…My original post explained that this was my primary goal in purchasing this device. I finally got around to setting it up. Using ghettoVCB and some cron jobs, I have been able to automate weekly backups of all my guest virtual machines.
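
For reference, here is roughly what the static-IP stanza in /etc/dhcpcd.conf looks like on Raspbian (the interface name and addresses below are just examples, not my actual network):

# /etc/dhcpcd.conf -- static address for the wired interface
# (illustrative values; adjust to your own network)
interface eth0
static ip_address=192.168.1.50/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1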
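
As for the DHCP/DNS project, the ISC dhcpd side would look something like the sketch below. This is only a rough outline: the domain, server address, and TSIG key are placeholders, and the FreeIPA server still has to be configured to accept updates signed with the same key.

# /etc/dhcp/dhcpd.conf -- push dynamic DNS updates to the FreeIPA-managed zone
# (all names, addresses, and the secret below are placeholders)
ddns-update-style interim;
ddns-domainname "home.example.";
update-static-leases on;

key dhcp-updater {
	algorithm hmac-md5;
	secret "base64-secret-goes-here==";
};

zone home.example. {
	primary 192.168.1.5;	# the FreeIPA DNS server
	key dhcp-updater;
}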

Overall, I think I have only accomplished one task with the Pi, and that’s my original project of performing automated ESXi backups. Thankfully, it came at just the right time, as I just purchased some upgrades for my server: 2x 3TB hard drives and a 16GB RAM kit. I plan to wipe the system, set up a RAID-5 array (3 active, 1 spare), and re-image the system to the latest ESXi, so restoring from these backups will come in handy.
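
For the curious, the moving parts of the backup setup are small: the Pi exports a directory over NFS, ESXi mounts it as a datastore, and a cron entry on the ESXi host runs ghettoVCB against it. A sketch of the two pieces (the paths, network range, and schedule are my choices, not requirements):

# /etc/exports on the Pi -- share the backup disk with the ESXi host
/mnt/backups 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)

# root crontab entry on the ESXi host -- back up all VMs Sundays at 2am
0 2 * * 0 /vmfs/volumes/datastore1/ghettoVCB/ghettoVCB.sh -a -g /vmfs/volumes/datastore1/ghettoVCB/ghettoVCB.conf > /tmp/ghettoVCB.log 2>&1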

getopt(s) and how I have been (sort of) wasting my time

It isn’t a rare occasion that I decide to automate (or at least contemplate automating) a task in some workflow at work (or even at home). In fact, I think it’s a great way to exercise my programming skills and save time on future tasks (if only this automation worked with chores). However, I still have my doubts about that last part, since I tend to spend many hours, and in some cases days, tweaking, perfecting, and debugging some script I hobbled together just to fit one task. Four hours in, knee-deep in the debug process and three cups of tea later, the thought pops into my head: “Is this really worth it? Is writing this little script going to save me that much time?” Usually this thought gets pushed aside; I root out the bugs, find the usefulness in the new tool I built, and generally just forget that I blew hours and a sleepless night or two. The one consolation of these mini projects is that I always learn something new, which in the end is worth it…except maybe when I realize I have been wasting a lot of time.

Today the urge came over me to automate something, specifically the REST API calls I make from my local machine. Lately I have been using the curl command line tool to perform REST API calls to all sorts of services at my company. It has become so useful to pull data for a service this way, as opposed to visiting the page, logging in with my credentials, and digging around through the menus and submenus. Since I spend most of my time in my terminal, these calls are a huge convenience, but they come at the cost of typing a lot of arguments into each call. Specifying the hostname, the call type, and all the headers, then having to rewrite the path every time I want to make a new request, just adds up.
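
To give a flavor of it, a single call ends up looking something like this (the host, path, and token are stand-ins, not a real service):

curl -s -X GET "https://inventory.example.com/api/v2/hosts/web01" \
	-H "Accept: application/json" \
	-H "Authorization: Bearer $API_TOKEN"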

I am not going to go into the details of what I did, since this post isn’t about that (that post may come later), but instead about getopt and getopts. These are tools commonly used to parse command flags passed to a shell script (getopt is an external program, while getopts is a shell builtin). I only stumbled onto them today while, ironically, debugging a script written by someone else. It had never really dawned on me that perhaps someone had already written functions for parsing command flags and that rolling my own might not be necessary.

Trawling the web for some decent examples, I came across plenty of conflicting opinions about whether I should be using “getopt” or “getopts”. Below is a list of what I gathered:

getopt

  • Older version (developed in the early 1980s)
  • Handles short and long arguments
  • Reportedly mishandles whitespace and empty arguments
  • Runs in compatibility mode on most *nix environments (not sure what kind of hindrance this is yet)
  • Some would like to imagine it never existed: “Never use getopt(1). getopt cannot handle empty arguments strings, or arguments with embedded whitespace. Please forget that it ever existed.”

getopts

  • Newer version (developed/introduced in 1986)
  • Only handles short arguments
  • More portable (compatibility across more *nix environments)
  • Standardized by POSIX

Digging up all these facts and attempting to find some good examples, I realized that the getopt vs. getopts question is probably one of those long-standing battles, similar to Vi(m) vs. Emacs or Apple vs. Microsoft. With that in mind, there is probably no single right answer; it depends on the needs of each case (and maybe comfort). This of course didn’t help me, since I tend to lean towards the newer tool for its compatibility, but I did want the long argument support…in the end, compatibility won (coupled with the abundance of good getopts examples).
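
For anyone landing on the same question, here is the minimal getopts pattern I settled on (the option letters and variable names are just illustrative):

#!/bin/bash
# -h host, -p path, -v verbose; getopts handles short options only
while getopts "h:p:v" opt; do
	case $opt in
		h) HOST="$OPTARG" ;;
		p) URLPATH="$OPTARG" ;;
		v) VERBOSE=1 ;;
		\?) echo "Usage: $0 -h host -p path [-v]" >&2; exit 1 ;;
	esac
done
shift $((OPTIND - 1))	# anything left over is a positional argument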

After slaying some bugs, the script was finished and nicely tucked into my “tool belt” with all the others I have developed over time. Coming out of the fray made me realize how much time I could have been saving by using one of these functions, as opposed to all the hours and days I usually spent rolling my own parser…in a few short hours I was able to roll out a new tool, focusing on the core functionality instead of fixing the parsing of inputs, commands, flags, and whitespace. Now the question is: should I go back and update all those scripts I spent so much time and lost so much sleep over?

Dayone/Tumblr to WordPress Migration

I have used the Dayone journaling app on and off since 2012, but ever since I got my home server running, I have been trying to consolidate my data under my own control (Dropbox to ownCloud, iPhoto/Flickr to Plex, Evernote to MediaWiki, etc.).

Tumblr is another story: I created a Tumblr for the JOH2013 trip to post my progress publicly, but I now realize I don’t want it out there that much anymore. Not to mention a lot of links to my data are breaking, as I have moved away from Flickr to iPhoto and now to Plex Photos (photo curation is just a whole other beast).

WordPress seemed like a good option for consolidating journal entries and blogs in one convenient (and actually secure) location. Not only do I control where the data is, but also how it is accessed and posted.

The failings of security behind Dayone are just too good not to mention…the whole password/passcode setup across all of their applications is just a UI facade for security. The Dayone journal files are actually just flat text files. These files are then just shuffled around (depending on the storage medium: iCloud, Dropbox, local) and manipulated by the application. Also, if you forget your passcode/password, Dayone support (used to) basically show you how to wipe the password from the journal by deleting the “security.plist” file. Really basic, and no encryption whatsoever.

Tumblr, on the other hand…I have simply decided to move my personal blogging “home”, to a platform I can fine-tune and have complete control over.

The point of this post was to “admire” my nifty script that scrapes the exported Dayone journal (which was supposed to be in some markdown form, but turned out to simply be a text file with a lot of spaces and returns), organizes the posts into sections, and then utilizes the WordPress command line tool (WP-CLI) to automatically create posts from that content.

#!/bin/bash

DATE=""
TITLE=""
LOCATION=""
WEATHER=""
TAGS=""
CONTENT=("")
TEMPFILE="TEMP.txt"
parseTags()
{
	# Turn a comma-separated tag list into "#tag1 #tag2 ..." hashtags
	for word in $1
	do
		TEMP="$(echo $word | sed 's/,//')"
		TAGS="$TAGS #$TEMP"
	done
}
parseDate()
{
	# Convert a Dayone date line like "January 5, 2015" into "2015-01-05"
	MONTH=""
	DAY=""
	YEAR=""
	for word in $1
	do
		case $word in
			"January")   MONTH="01" ;;
			"February")  MONTH="02" ;;
			"March")     MONTH="03" ;;
			"April")     MONTH="04" ;;
			"May")       MONTH="05" ;;
			"June")      MONTH="06" ;;
			"July")      MONTH="07" ;;
			"August")    MONTH="08" ;;
			"September") MONTH="09" ;;
			"October")   MONTH="10" ;;
			"November")  MONTH="11" ;;
			"December")  MONTH="12" ;;
			*",")
				# Day of the month: strip the comma and zero-pad
				DAY="$(printf '%02d' "$((10#${word%,}))")"
				;;
			"20"*)
				YEAR="$word"
				;;
			*)
				;;
		esac
	done
	DATE="$YEAR-$MONTH-$DAY"
}
parseFile()
{
	CONTROL=0
	# Read the file directly; piping cat into the loop would run it in a
	# subshell and the parsed variables would never make it back out
	while read line
	do
		case $line in
			"Date:"*)
				if [ $CONTROL -eq 1 ]; then
					createPosts # A new date marks the end of the previous entry
				else
					CONTROL=1 # Indicate start of first entry
				fi
				TEMP="$(echo $line | sed 's/Date://')"
				parseDate "$TEMP"
				;;
			"Tags:"*)
				TEMP="$(echo $line | sed 's/Tags://')"
				parseTags "$TEMP"
				;;
			"Location:"*)
				LOCATION="$line"
				;;
			"Weather:"*)
				WEATHER="$line"
				;;
			"Photo:"*)
				;;
			*)
				CONTENT=("${CONTENT[@]}" "$line")
				;;
		esac
	done < "$1"
	# Post the final entry; there is no trailing "Date:" line to trigger it
	if [ $CONTROL -eq 1 ]; then
		createPosts
	fi
}

createPosts()
{
	touch "$TEMPFILE"
	# Use the first real content line as the title; fall back to a
	# placeholder when it is too long to be a sensible title
	TITLE="${CONTENT[2]}"
	TITLESIZE=${#TITLE}
	if [ $TITLESIZE -gt 60 ]; then
		TITLE="RENAME ME - $DATE"
	fi
	for LINE in "${CONTENT[@]}"
	do
		echo "$LINE" >> "$TEMPFILE"
	done
	if [ -n "$LOCATION" ]; then
		echo "$LOCATION" >> "$TEMPFILE"
	fi
	if [ -n "$WEATHER" ]; then
		echo "$WEATHER" >> "$TEMPFILE"
	fi
	if [ -n "$TAGS" ]; then
		echo "$TAGS" >> "$TEMPFILE"
	fi
	# Quoting the arguments properly means no eval gymnastics are needed
	sudo -u apache /usr/local/bin/wp post create "$TEMPFILE" --user=cgomez --post_title="$TITLE" --post_date="$DATE" --post_category=116 --post_status=private
	rm -f "$TEMPFILE"
	# Reset state for the next entry
	DATE=""
	LOCATION=""
	WEATHER=""
	TAGS=""
	CONTENT=("")
}

if [ -z "$1" ]; then
	echo "No file detected. Exiting."
	exit 1
fi

if [ ! -f "$1" ]; then
	echo "File doesn't exist. Exiting."
	exit 1
fi

parseFile "$1"

exit 0
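
Usage is dead simple. Assuming the script above is saved as dayone2wp.sh (a name I am using here just for illustration), point it at the exported journal file:

./dayone2wp.sh Journal.txt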

Raspberry Pi Adventure Begins

Finally took the plunge into the “micro-computing” world and purchased a Raspberry Pi 2.

The primary reason for buying one is to automate my ESXi backups to an external disk. At the moment I am sort of a sitting duck, in that I have no (easy) external backup solution in place for all my virtual machines. My ESXi host is on RAID 1, but that could go sideways really fast, so for months (or at least since last year when I set up my server) I have been mulling over possible backup solutions…you know, in the event of “catastrophic disk failure”. Can you tell I am paranoid?

My original plan was to use my Netgear C3700 router’s NAS functionality, but it turns out the disk options are limited to a select few Western Digital and Seagate models. I suspect the reasoning behind this is that some of the NAS “work” is passed to the disk, as opposed to the router doing all the heavy lifting. The downside is that I bought the wrong model of disk, but the upside is I got it really cheap on Black Friday (no, I didn’t stand in any lines; I ordered online from my bed). So this left me with a backup disk, but no way to get the backups onto it…Raspberry Pi to the rescue!

Now, months later, I am just getting around to buying a Pi. It’s one of those purchases that seemed simple, but once I started down the rabbit hole of parts required to run a Pi, I was soon inundated with too many browser tabs just to compare things like cases. This eventually overwhelmed me and put the whole purchase on hold, until last night, when I just jumped in.

I settled on Amazon as the vendor because of Prime free shipping, and on a kit (which was definitely more economical than the piecemeal route). The only extra thing I needed to buy was the MicroSD card, but that was very reasonable at $10.99 for 32GB.

Now it’s a waiting game with the post office…