Find out when a system was built

This is a tricky question.  You can see how long a server has been running via the ‘uptime’ command, but there isn’t a definitive way to know when a server was built.

The closest answer comes from running the command below, which tells you when a file system was created (this works on ext2/3/4 file systems):

$ tune2fs -l /dev/sda1

So if you run this against the root file system you can get a good idea of when the box was built, unless your system was created from some kind of image backup.
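For example, on an ext2/3/4 file system you can filter the output down to just the creation timestamp (assuming the root file system lives on /dev/sda1):

$ tune2fs -l /dev/sda1 | grep -i 'filesystem created'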

How to share private information

There are times you have to pass security-related information among your peers.  There are very professional ways to do this for automated, high-attention processes, such as the movement of secrets in a pipeline; tools such as HashiCorp Vault or KMS in AWS can serve these highly automated purposes.  But we can’t forget about normal communication, when we need to pass a credential from a source group to the person who will secure that credential, or when we need to share a group password.

We should also have a method to encrypt this data rather than just sending it in plain text, even if it’s a private email to a co-worker in your group using the company’s mail utility.  This document will help you secure this data transfer.

We will leverage GPG (GNU Privacy Guard) to carry out our encryption.  It is a common tool bundled with most Linux distributions.  We will generate an asymmetric key, meaning a public/private keypair.  The public key can be shared with the world (your buddies), which is why it’s called public.  Your buddies then add your public key to their keychain and use it to encrypt the file.  Now only the person with the private key (that is, you) will be able to decrypt the file.

This is similar to what we do with SSH keys.  The public key is shared and entered into the remote host’s authorized_keys file.  Then the machine holding the private key (you don’t share this key, that’s why it’s private) can start a secure, encrypted connection to that host.
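As a quick refresher, the SSH side of that analogy looks something like this (the user and host names are placeholders):

$ ssh-keygen -t ed25519          # create your SSH public/private keypair
$ ssh-copy-id user@remote-host   # add your public key to the remote authorized_keys file
$ ssh user@remote-host           # connect using your private key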

So that is the process.  Here are the commands to achieve this.  Remember that this was done on a Linux server (RHEL to be specific).

 

Generate your public/private “asymmetric” keypair on your Linux workstation.

$ gpg --gen-key

 

Export the public key to a file, then share this file with your buddies.

$ gpg --armor --export <yourname@email> > <pub_key_file>

 

Now the receiver (your buddy) has to import the key into their keychain.

$ gpg --import <pub_key_file>

 

Now your buddy can encrypt a file as follows.  The recipient email should be your email address.

$ gpg --encrypt --sign --armor -r <recipient_email> <file_to_encrypt>

 

Now your buddy can email you the file, since it’s encrypted.  You take the file, copy it over to your workstation where you created the keypair, and decrypt it.

$ gpg -d <file_to_decrypt>
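If you’d rather write the decrypted contents straight to a file instead of the terminal, you can add gpg’s output option:

$ gpg -d -o <decrypted_output_file> <file_to_decrypt>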

 

That’s it.  You’ve successfully shared a sensitive file in an encrypted way to throw off any eavesdroppers.

 

Two more commands for your bag of tricks.

$ gpg --list-keys

$ gpg --list-keys <your_email>

 

-Raf

VMware: Creating a VM from an OVA/OVF

Did you know that you can create a virtual machine in VMware from a pre-created file?  Well, you can.  By using this procedure you can use an OVF to boot a VM, and the result is a system identical to the original system from which the OVF was created.

 

First let’s discuss the components we need, and then I’ll define the steps to build the VM.  Here are the components in my environment:

  • vCenter 6
  • ESXi host with a local datastore
  • OVA file
    • OVF file
    • VMDK file
    • MF file

The OVA file is nothing but a tar archive that contains the OVF, VMDK, and MF files.
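You can see this for yourself by listing or extracting the archive with tar (the file name is just an example):

$ tar -tf myvm.ova        # list the ovf, vmdk and mf files inside
$ tar -xf myvm.ova        # extract them into the current directory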

The VMDK is the actual disk file that VMware boots from.

The OVF file is a metadata file in XML format that holds the instructions for VMware.  It includes such directives as the name of the VMDK file, the number of processors, the amount of memory, etc.  OVF is a standard format used by various virtualization technologies to distribute virtual machines.

The MF file contains hashes of the OVF and VMDK files so their integrity can be verified.
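If you ever want to check those hashes by hand, compare the digests listed in the manifest against ones you compute yourself.  Whether it’s SHA1 or SHA256 depends on the tool that created the OVA, and the file names here are just examples:

$ cat myvm.mf                              # shows lines like SHA1(myvm.ovf)= <hash>
$ sha1sum myvm.ovf myvm-disk1.vmdk         # or sha256sum, depending on the manifest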

A scenario where you may need to boot an OVF is if you already have an image that you use in a cloud, say OpenStack, and you want to boot that image in VMware.  You can use OS tools to convert the image into an OVF and boot it up so you don’t have to go through the process of hardening the OS again.
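As a rough sketch of that conversion, assuming the source image is a qcow2 file and you have qemu-img available, you can turn the disk into a VMDK first (the file names are just examples):

$ qemu-img convert -f qcow2 -O vmdk cloud-image.qcow2 cloud-image.vmdk

From there you can attach the VMDK to a VM and export it as an OVF, for example with vSphere’s export function or VMware’s ovftool.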

With these details out of the way, let’s now see how to boot the OVF.  It’s so simple.

  1. Run your vSphere client
  2. Select your ESXi host
  3. Select File -> Deploy OVF Template
  4. Select your template file
  5. Enter a name for the VM
  6. Select the datastore
  7. Hit Next and deploy.  Voila!

You should now see your new VM listed in the machine inventory.
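If you prefer the command line, VMware’s ovftool can perform the same deployment.  This is only a rough sketch from memory, so check ovftool --help for the exact syntax; the VM name, datastore, file, and vCenter address are placeholders, and the vi:// locator usually needs to be extended with your datacenter and host path:

$ ovftool --name=myvm --datastore=datastore1 template.ova vi://administrator@vcenter.example.com/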

How many processors are in your server

Physical servers are a thing of the past, some would say.  Things in IT are moving in the direction of the cloud.  First it was offsite hosting.  Then virtualization with a hypervisor (VMware took the torch and ran with it).  Then private cloud became a thing, think OpenStack.

Now we’re supposed to dream of a future where we don’t manage physical components.  We just control ones and zeros through infrastructure as code.

But not all enterprises have adopted public cloud yet.  And with regulations and all, you can bet that we’ll be managing physical servers for a while to come.

So we still need to be able to answer questions such as “how many processors are in your box?”.  I was asked this question by a senior team member.  Two of my buddies said that the box had 32 CPUs because they saw that /proc/cpuinfo listed 32 processor entries, the last one being “processor : 31”.  They were wrong.  At least partly wrong.

I thought I’d be smart and said the server had 8 processors because /proc/cpuinfo also showed “cpu cores : 8”.  So, somewhat nebulously, I said “there is some multi-threading being done”.  I got some credit, but that was not a clear answer.

So here is how it works.  We all looked at /proc/cpuinfo, and that is the correct source.  But we have to read all the details this file provides to understand the full picture of how processors are presented to the system.

First, understand that /proc/cpuinfo is sort of a database.  Each record in this file is delimited by a blank line (see the quick example after the list below).  The line items that we want to consider in each record are:

  • processor
  • model name (mostly background info)
  • physical id
  • siblings
  • core id
  • cpu cores
  • flags
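Since records are separated by blank lines, you can peek at just the first record with awk’s paragraph mode (a quick illustration, not required for what follows):

$ awk 'BEGIN { RS = "" } NR == 1 { print; exit }' /proc/cpuinfo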

Let’s start by giving the final solution to the question and we’ll break it down.

Physical Processors – The server actually has 2 physical processors.  That means two physical sockets with a processor package in each.  You can tell because there are only two “physical id” values in the file (0 and 1), which you can check by running grep “physical id” /proc/cpuinfo.

Number of Cores – So if there are only two physical processors, why does the system show 32 “processors”?  Part of the answer is that each of these two physical processors has 8 cores.  Think of it like an 8-layer Oreo cookie: it’s a single cookie with 8 wafers.  So when it comes to raw physical processing cores, the system has 16.  That is 2 processors * 8 cores each = 16.  You can see this by running grep “cpu cores” /proc/cpuinfo.  You can also determine this by reading the model name with grep “model name” /proc/cpuinfo and doing a Google search; the specs will confirm the number of cores.  In my case the processor was an Intel Xeon E5-2660.

Number of Threads – The server can handle 32 simultaneous threads, meaning 32 threads executing at the same time.  You can determine this in two ways.  One is grep “siblings” /proc/cpuinfo.  On my Xeon server that command shows “siblings : 16”, which means each physical processor presents 16 siblings, aka threads.  This is consistent with the file showing 32 processors.  And again, if you Google the specs you’ll see that each processor supports 16 threads.  So a single core can process two threads (2 processors * 8 cores * 2 threads = 32 logical processors).
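Putting it all together, these one-liners pull the three numbers straight out of /proc/cpuinfo (and on most modern distributions, lscpu summarizes the same information):

$ grep "physical id" /proc/cpuinfo | sort -u | wc -l     # physical sockets
$ grep "cpu cores" /proc/cpuinfo | sort -u               # cores per physical processor
$ grep -c "^processor" /proc/cpuinfo                     # logical processors (threads)
$ lscpu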

So what we have is a play on words between the BIOS and the OS.  The BIOS wants to tell the OS that it can handle 32 threads, so the OS represents them as processors.

As I’ve learned, some apps are really good at maximizing their threads, so in those cases it may be better to disable Hyper-Threading via the BIOS.  The OS would then report only 16 processors, which would be 16 real cores.

This was a good review of the basics for me.  Hopefully you too can now answer when someone asks “how many processors are in your box?”

-Raf

Troubleshooting GOCD/Chef deployment issues

GOCD is a continuous delivery automation server.  With it you can create pipelines for your common deployment tasks.  I’m used to seeing GOCD used to create pipelines that converge servers managed by Chef.

In this case GOCD will use one of its agents to do any prep work in the infrastructure, such as spinning up cloud instances.  Lastly, the agent will kick off a run of the chef-client.

But what do you do if there’s an error in the pipeline execution?  Where do you start?

Below is the process that I’ve used to troubleshoot errors on the execution of a Pipeline.

  1. Identify the GOCD agent that ran the pipeline.  You can see this agent in the Pipeline view page.
  2. Log in to the agent and find the execution logs for the pipeline.  You always need to go to the logs.
  3. Search for the string ‘ERR’ to identify the problem.
  4. Once you find the error line, you need to determine where in the code it is happening.  Most times I narrow it down to a specific cookbook in our Chef software repository.
  5. List the change history on the related cookbook file.  With SVN in Linux you can run ‘svn blame file’, or in Windows you can use TortoiseSVN to view the history (see the example commands after this list).
  6. If you can’t make out the error from viewing the recent changes, you can always contact the last person to commit a change to this file.
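Here is what steps 3 and 5 can look like on the command line.  The log location depends on how your agents are set up, so the path below is just a placeholder, and the svn commands assume the cookbook is checked out locally (the cookbook path is an example):

$ grep -n 'ERR' /path/to/go-agent-logs/pipeline-run.log    # step 3: find the error lines
$ svn blame cookbooks/myapp/recipes/default.rb             # step 5: who last touched each line
$ svn log -l 5 cookbooks/myapp/recipes/default.rb          # recent commit history for the same file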

Hopefully this process can help you the next time you get an error on your pipeline.

 

Cheers.

 

-Raf