WordPress Pages vs Posts

I use WordPress to run this blog.  I got started by following Pat Flynn’s tutorial on How to start a Podcast.   Aside from that I haven’t done much research on the features of WordPress or best practices.

I have noticed that when I’m ready to publish some content I have the option of creating a Page or a Post.  Sometimes I’ve used Pages, but I’ve mostly used Posts when publishing a podcast episode.  Eventually I wondered: what is the difference between the two, and which should I use on a regular basis?

A few Google searches provided the answer: use Posts for the content you publish!

Pages are meant for static content such as your About page or Home page.  But that explanation didn’t help me much.  What did make sense to me is that Posts get sent out via an RSS feed, and readers can comment on Posts.  You don’t expect readers to comment on your Home page.  So if you’re using Pages for content, you’re missing out on that interaction with the reader.

This makes sense to me.  So from now on I’ll be using Posts, and I’ll probably convert some of my old Pages to Posts.

This article helped me understand this.



Entropy on a Linux Box


“In computing, entropy is the randomness collected by an operating system or application for use in cryptography or other uses that require random data. This randomness is often collected from hardware sources (variance in fan noise or HDD), either pre-existing ones such as mouse movements or specially provided randomness generators. A lack of entropy can have a negative impact on performance and security.” — Wikipedia


On RHEL 6, you can read the available entropy estimate from /proc/sys/kernel/random/entropy_avail.  The kernel gathers this entropy from sources such as physical mouse and keyboard input, so the value you read in this file changes constantly.  A system may therefore run low on entropy, depending on its hardware and on what is consuming the random data.

If you require more entropy than the system is generating, you can install a hardware entropy generator.  This is a device that generates randomness from physical sensors, such as thermal noise, rather than from a computer program.

I have seen GoCD agents fail to produce enough entropy, and this affected the app.

Typing over an SSH session makes no difference to the entropy pool (you can test this yourself): the kernel collects randomness from physically attached keyboards and mice, not from pseudo-terminal input.

/dev/random is the device file that serves this randomness.  Reads from it can block and return nothing when the entropy estimate has been used up.  /dev/urandom, on the other hand, will return a value even if no entropy is available.  That can be a concern for security-sensitive uses such as certificates, SSL, and other cryptographic functions.
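If you want to keep an eye on the pool from a script, reading entropy_avail is a one-liner.  Here’s a minimal sketch; the 200-bit “low” threshold is an arbitrary number I picked for illustration, not a kernel-defined limit:

```python
import os

ENTROPY_PATH = "/proc/sys/kernel/random/entropy_avail"

def available_entropy(path=ENTROPY_PATH):
    """Return the kernel's entropy-pool estimate in bits, or None if absent."""
    if not os.path.exists(path):
        return None  # not a Linux box, or /proc not mounted
    with open(path) as f:
        return int(f.read().strip())

pool = available_entropy()
if pool is None:
    print("entropy_avail not present (not a Linux box?)")
elif pool < 200:  # illustrative threshold, not an official one
    print("warning: entropy pool is low (%d bits)" % pool)
else:
    print("entropy pool looks OK (%d bits)" % pool)
```

Run it in a cron job or a monitoring check and you’ll spot entropy-starved boxes before they start blocking on /dev/random.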




Learning Python

Know that there are two major Python versions: 2.7 and 3.  Version 2.7 has a huge following that doesn’t seem to be going away.  I’ve read many sources that say to learn on 3, but you might be in an environment that’s 2.7-heavy.  So I say: use the one that’s required at work, but learn both.



How to exit a program

'sys.exit(s)' is just shorthand for 'raise SystemExit(s)'.


>>> raise SystemExit('RafError')
RafError
# echo $?
1

>>> Ctrl+D
# echo $?
0

Note that a non-integer argument (like the string above) gets printed to stderr and the exit status becomes 1, while exiting the interpreter cleanly with Ctrl+D gives a status of 0.
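You can verify those exit statuses without leaving Python by launching a child interpreter; a quick sketch (the status 3 is just an example value):

```python
import subprocess
import sys

# sys.exit(3) raises SystemExit(3); the interpreter exits with status 3.
r = subprocess.run([sys.executable, "-c", "import sys; sys.exit(3)"])
print(r.returncode)  # 3

# A string argument is printed to stderr and the exit status becomes 1.
r = subprocess.run(
    [sys.executable, "-c", "raise SystemExit('RafError')"],
    capture_output=True, text=True,
)
print(r.returncode)       # 1
print(r.stderr.strip())   # RafError
```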


Find out when a system was built

This is a tricky question.  You can see how long a server has been running via the ‘uptime‘ command.  But there isn’t a definitive way to know when a server was built.

The closest answer comes from the command below, which tells you when a file system was created:

$ tune2fs -l /dev/sda1

So if you run this on the root file system, you can get a good idea of when the box was built, unless your system was created from some kind of image backup.
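If you want to pull out just the creation date programmatically, you can parse the tune2fs output for its “Filesystem created” line.  A sketch follows; the sample output below is a trimmed, made-up excerpt, not from a real run, and in practice you’d feed in the real output of 'tune2fs -l' (which requires root):

```python
# Extract the "Filesystem created" value from `tune2fs -l` style output.
sample = """\
tune2fs 1.42.9 (28-Dec-2013)
Filesystem volume name:   <none>
Filesystem created:       Tue Mar  3 10:15:42 2015
Mount count:              12
"""

def fs_created(tune2fs_output):
    """Return the creation timestamp string from tune2fs -l output."""
    for line in tune2fs_output.splitlines():
        if line.startswith("Filesystem created:"):
            # split on the first colon only; the date itself contains colons
            return line.split(":", 1)[1].strip()
    raise ValueError("no 'Filesystem created' line found")

print(fs_created(sample))  # Tue Mar  3 10:15:42 2015
```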

How to share private information

There are times you have to pass security-related information among your peers.  There are very professional ways to do this for automated, high-attention processes, such as the movement of secrets in a pipeline.  Tools such as HashiCorp Vault, or a KMS/HSM in AWS, can serve these highly automated purposes.  But we can’t forget about ordinary communication: passing a credential from a source group to the person who will secure that credential, or sharing a group password.

We should also have a method to encrypt this data rather than just sending it in plain text, even if it’s a private email to a co-worker in your group using the company’s mail utility.  This document will help you secure this data transfer.

We will leverage GPG (GNU Privacy Guard) to carry out our encryption; it’s a common tool bundled with most Linux distributions.  We will generate an asymmetric key, meaning a public/private keypair.  The public key can be shared with the world (your buddies); that’s why it’s public.  Your buddies then add your public key to their keychain and encrypt the file with it.  Now only the person with the private key (that’s you) will be able to decrypt the file.

This is similar to what we do with SSH keys.  The public key is shared and entered into a server’s authorized_keys file.  Then the client with the private key (you don’t share this key; that’s why it’s private) can start a secure, encrypted connection with the server.

So that is the process.  Here are the commands to achieve it.  Remember that this was done on a Linux server (RHEL, to be specific).


Generate your public/private “asymmetric” keypair on your Linux workstation.

$ gpg --gen-key


Export the public key to a file, then share this file with your buddies.

$ gpg --armor --export <yourname@email> > <pub_key_file>


Now the receiver (your buddy) has to import the key into their keychain.

$ gpg --import <pub_key_file>


Now your buddy can encrypt a file as follows (the recipient email should be your email).

$ gpg --encrypt --sign --armor -r <recipient_email> <file_to_encrypt>


Now your buddy can email you the file, since it’s encrypted.  You take the file, copy it over to your workstation (where you created the keypair), and decrypt it.

$ gpg -d <file_to_decrypt>


That’s it.  You’ve successfully shared a sensitive file in an encrypted way to throw off any eavesdroppers.


Two more commands for your bag of tricks.

$ gpg --list-keys

$ gpg --list-keys <your_email>



VMware: Creating a VM from an OVA/OVF

Did you know that you can create a virtual machine in VMware from a pre-created file?  Well, you can.  Using this procedure you can boot an OVF into a VM.  The result is a system identical to the original system the OVF was created from.


First let’s discuss the components we need, and then I’ll define the steps to build the VM.  Here are the components in my environment:

  • vCenter 6
  • ESXi host with a local datastore
  • OVA file, which contains:
    • OVF file
    • VMDK file
    • MF file

The OVA file is nothing but a tar file that contains the ovf, vmdk, and mf files.
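You can prove the “it’s just a tar” point to yourself with the tarfile module.  The sketch below builds a tiny stand-in OVA (the file names are illustrative, not from a real appliance) and lists its members, exactly as you would with a real one:

```python
import io
import os
import tarfile
import tempfile

def make_demo_ova(path):
    """Create a stand-in OVA: a plain tar with placeholder members."""
    with tarfile.open(path, "w") as tar:
        for name in ("appliance.ovf", "appliance-disk1.vmdk", "appliance.mf"):
            data = b"placeholder"
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))

def list_ova(path):
    """List member file names inside an OVA, since an OVA is just a tar."""
    with tarfile.open(path, "r") as tar:
        return tar.getnames()

with tempfile.TemporaryDirectory() as d:
    ova = os.path.join(d, "demo.ova")
    make_demo_ova(ova)
    print(list_ova(ova))  # ['appliance.ovf', 'appliance-disk1.vmdk', 'appliance.mf']
```

On a real OVA, `tar -tf appliance.ova` from the shell does the same job.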

The vmdk is the actual disk file that VMware will boot.

The ovf file is a metadata file in XML format that contains the instructions for VMware.  It includes directives such as the name of the vmdk file, the number of processors, the amount of memory, etc.  OVF is a standard format used by various virtualization technologies to distribute virtual machines.

The mf file contains a hash of the ovf and vmdk files to check their validity.

A scenario where you may need to boot an OVF: you already have an image that you use in a cloud, say OpenStack, and you want to boot that image in VMware.  You can use OS tools to convert the image into an OVF and boot it up, so you don’t need to go through the process of hardening the OS again.

With these details out of the way, let’s now see how to boot the OVF.  It’s so simple.

  1. Run your vSphere client
  2. Select your ESXi host
  3. Select File -> Deploy OVF Template
  4. Select your template file
  5. Enter a name for the VM
  6. Select the datastore
  7. Hit Next and deploy, and voila!

You should now see your new VM listed in the machine inventory.

How many processors are in your server

Physical servers are a thing of the past, some would say.  Things in IT are moving in the direction of the cloud.  First it was offsite hosting.  Then virtualization with a hypervisor (VMware took the torch and ran with it).  Then private cloud was a thing; think OpenStack.

Now we’re supposed to dream of a future where we don’t manage physical components.  We just control ones and zeros through infrastructure as code.

But not all enterprises have adopted public cloud yet.  And with regulations and all, you can bet that we’ll be managing physical servers for a while to come.

So we still need to be able to answer questions such as “how many processors are in your box?”.  I was asked this question by a senior team member.  Two of my buddies had said that the box had 32 CPUs, because they saw that /proc/cpuinfo showed 32 processor entries, the last one being “processor : 31”.  They were wrong.  At least partly wrong.

I thought I’d be smart and said the server had 8 processors, because /proc/cpuinfo also showed “cpu cores : 8”.  So, somewhat nebulously, I said “there is some multi-threading being done”.  I got some credit, but that was not a clear answer.

So here is how it works.  We all looked at /proc/cpuinfo, and that is the correct source.  But you have to read all the details this file provides to understand the full picture of how processors are presented to a system.

First, understand that /proc/cpuinfo is sort of a database.  Each record in this file is delimited by a blank line.  The line items we want to consider in each record are:

  • processor
  • model name (mostly background info)
  • physical ID
  • siblings
  • core id
  • cpu cores
  • flags

Let’s start by giving the final solution to the question and we’ll break it down.

Physical processors – The server actually has 2 physical processors.  That means two physical sockets with a processor on each.  You can see this in the fact that there are only two “physical id” values in the file (0 and 1).  Check with: grep "physical id" /proc/cpuinfo.

Number of cores – So if there are only two physical processors, why does the system show 32 “processors”?  Part of the answer is that each of these two physical processors has 8 cores.  Think of it like an 8-layer Oreo cookie: it’s a single cookie with 8 wafers.  So in terms of raw physical processing cores, the system has 16; that is, 2 processors * 8 cores each = 16.  You can see this by running grep "cpu cores" /proc/cpuinfo.  You can also determine it by reading the model name (grep "model name" /proc/cpuinfo) and doing a Google search; the specs will confirm the number of cores.  In my case the processor was an Intel Xeon E5-2660.

Number of threads – The server can handle 32 simultaneous threads; that is, 32 simultaneous executions.  You can determine this in two ways.  One is grep "siblings" /proc/cpuinfo; on my Xeon server that command shows “siblings : 16”, meaning each physical processor presents 16 siblings, aka threads.  This is consistent with the file showing 32 processors.  And again, if you Google the specs you’ll see that each processor supports 16 threads.  So a single core can run two threads (2 processors * 8 cores * 2 threads = 32 “processors”).

So what we have is a play on words between the BIOS and the OS.  The BIOS tells the OS that the hardware can handle 32 threads, so the OS represents them as 32 processors.

As I’ve learned, some apps are really good at maximizing their threads, so in those cases it can be better to disable Hyper-Threading in the BIOS.  The OS would then report only 16 processors, all of them real cores.
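The whole sockets/cores/threads breakdown can be computed straight from the record structure of /proc/cpuinfo.  Here’s a sketch that parses a synthesized sample (a made-up 2-socket, 2-cores-per-socket, Hyper-Threaded box, i.e. my server scaled down), but the same function works on the real file:

```python
# A synthesized /proc/cpuinfo excerpt: 8 logical CPUs across 2 sockets,
# 2 cores per socket, 2 threads per core (siblings = 4 per socket).
sample = "\n\n".join(
    "processor : {0}\nphysical id : {1}\nsiblings : 4\n"
    "core id : {2}\ncpu cores : 2".format(p, p // 4, (p % 4) // 2)
    for p in range(8)
)

def cpu_topology(text):
    """Return (sockets, cores_per_socket, logical_cpus) from cpuinfo text."""
    # Records are delimited by blank lines; each record is one logical CPU.
    records = [r for r in text.strip().split("\n\n") if r.strip()]
    parsed = []
    for rec in records:
        fields = {}
        for line in rec.splitlines():
            key, _, val = line.partition(":")
            fields[key.strip()] = val.strip()
        parsed.append(fields)
    sockets = len({f["physical id"] for f in parsed})
    cores_per_socket = int(parsed[0]["cpu cores"])
    return sockets, cores_per_socket, len(parsed)

sockets, cores, logical = cpu_topology(sample)
print(sockets, sockets * cores, logical)  # 2 4 8

# On a real Linux box you could instead do:
# with open("/proc/cpuinfo") as f:
#     print(cpu_topology(f.read()))
```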

This was a good review of the basics for me.  Hopefully you too will know the answer when someone asks, “how many processors are in your box?”





Troubleshooting GOCD/Chef deployment issues

GoCD is a Continuous Delivery automation server.  With it you can create pipelines for your common deployment tasks.  I’m used to seeing GoCD create pipelines that converge servers managed by Chef.

In this case GoCD uses one of its agents to do any prep work in the infrastructure, such as spinning up cloud instances.  Lastly, the agent kicks off a run of the Chef client.

But what do you do if there’s an error in the pipeline execution?  Where do you start?

Below is the process that I’ve used to troubleshoot errors on the execution of a Pipeline.

  1. Identify the agent that ran the pipeline.  You can see this in the pipeline view page.
  2. Log in to the agent and find the execution logs for the pipeline.  You always need to go to the logs.
  3. Search for the string ‘ERR’ to identify the problem.
  4. Once you find the error line, determine where in the code it’s happening.  Most times I narrow it down to a specific cookbook in our Chef software repository.
  5. List the change history on the related cookbook file.  With SVN on Linux you can run ‘svn blame <file>’; on Windows you can use TortoiseSVN to view the history.
  6. If you can’t make out the error from the recent changes, you can always contact the last person to commit a change to the file.
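Step 3 (grepping the log for ‘ERR’) is easy to script if you do it often.  A minimal sketch follows; the sample log lines are made up for illustration, not real GoCD output:

```python
def find_errors(log_text, needle="ERR"):
    """Return (line_number, line) pairs whose text contains the needle."""
    return [(i + 1, line)
            for i, line in enumerate(log_text.splitlines())
            if needle in line]

# Illustrative, made-up log excerpt.
sample_log = """\
[go] Start to execute task: chef-client run
Starting Chef Client
ERROR: 412 Precondition Failed
Chef run failed, 0 resources updated
"""

for lineno, line in find_errors(sample_log):
    print(lineno, line)  # 3 ERROR: 412 Precondition Failed
```

From the shell, `grep -n ERR <logfile>` gets you the same line numbers.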

Hopefully this process can help you the next time you get an error on your pipeline.





Have your Mac read to you

Did you know you can have your Mac read to you?  Well, it can.  You can be looking at a website, a PDF, or some other doc, and have the system read it to you.

This is a great way to help you speed read or just to not fall asleep while studying a long document.

So how do you set this up?  Go to the Apple menu, then System Preferences.  Choose the Accessibility icon and select Speech.  Here you can configure the options to enable speech.

Then all you have to do is highlight the text and press Option+Esc, and voila, the computer is reading to you.

Happy reading.