David Lane


Windows 10 Security Issues are not Overblown

As a technology professional, I have been reading ComputerWorld for most of my career. Most of the time the information in it is useful, and occasionally it is biased. But the bias is easy to pick out; people generally roll their eyes and move on. However, today, while reading a different article, I came across an August 25, 2015 article by Preston Gralla on 4 overblown Windows 10 worries that made my jaw hit the floor and made me genuinely question whether Preston is working directly for Microsoft, because I cannot imagine an objective journalist, at least one with any technical skill whatsoever, writing some of the things he says.

Now, I will start by saying that Windows 8, as an operating system, had a number of problems that really made me wonder what Microsoft was thinking. But the more I hear about Windows 10, the more I am convinced that Microsoft knows exactly what it is doing, and its thinking runs diametrically against what most technicians and other IT professionals (especially security professionals) feel an operating system should be doing. The article tackles four key features of Windows 10 that have security people (and others concerned about digital privacy and security) pretty much wrapped around the axle.

First: Wi-Fi Sense will share all your passwords

Preston says this is not true, then goes on to explain why it is. He also says it is a good and necessary thing.

The concept behind Wi-Fi Sense is a solid one: To make it easier for visitors to find and connect to Wi-Fi networks. Wi-Fi Sense lets you share your network with others without seeing the actual network passwords – the passwords are encrypted and stored on Microsoft’s servers so they aren’t visible to outside users.

Let me explain. Wi-Fi Sense shares your passwords with other users, and they are stored on Microsoft’s servers. Oh, sure, they are encrypted, but are they encrypted with your keys? Do you control the revocation of the passwords? If you answered yes, please box up your PC and return it to where you bought it. The fact that this feature is enabled by default is a massive security hole. He tries to justify it by saying it was inspired by a similar idea from the Open Wireless Movement, but you can be sure the OWM had much less specific user information in mind for its implementation than what Microsoft has implemented. He goes on to say you have to take another step to actually share the key. Again, the fact that it is enabled by default is a bad idea; the second step is merely a feel-good panacea. And since most home users do not have good network security, the claim that users on your network will not be able to get to other resources is just that: a myth. This feature should not be part of any implementation of any operating system. If I want someone to have access to my Wi-Fi, I will provide them that access in a way that does not jeopardize my network or provide critical infrastructure information to an unknown third-party system.

Second: Windows 10 updates are automatically installed on your system, and that is a bad thing.

Says Preston:

The concern here is that, unlike previous versions of Windows, Windows 10 doesn’t give you a choice about when (or which) Windows updates will be installed on your computer. What Microsoft sends to you will be installed, whether you like it or not, and as a result, an update could break something on your PC – for example, a driver for a peripheral like a printer.

The truth is much more sinister.

It’s true that if you have the Windows 10 Home edition, you don’t have a choice about installing Windows 10 updates – Microsoft sends them and your system installs them.

And the fact is that most people will be running Windows 10 Home. And you really should have a choice about what you install, because while most of Microsoft’s core patches are necessary, I have spent hours helping my less technologically savvy friends recover from a bad patch, or roll back a peripheral patch that caused a once-working device to fail. And it happens more often than anyone would like to admit.

I am all for installing patches and keeping your systems as current as possible, but not all patches should be blindly installed, and certainly not on the day they are released. Let other people be the guinea pigs. This is especially true given some of the less-than-successful browser updates in Microsoft’s past.

Third: Microsoft’s use of peer-to-peer networking for Windows updates will slow down your network connection.

Says Preston:

With Windows 10, Microsoft uses a trick borrowed from peer-to-peer networking apps like BitTorrent in order to distribute updates more efficiently. Rather than have everyone get updates from a central server, the updates are also delivered from PC to PC.

Microsoft’s BITS (Background Intelligent Transfer Service) has been around for a long time. Systems Management Server and the updated System Center Configuration Manager have used BITS for distributing files across low-bandwidth links. Preston likens the model to the way BitTorrent works. But unless you are on a low-bandwidth connection (and some people are), this is actually not an effective way to deliver packets for an update. Further, there is a risk that the peer-to-peer network can be infiltrated. I fully expect that there will be a viable penetration before year-end if there is not one already. Again, you can turn it off, but it should not be enabled by default to begin with.

Fourth: Windows 10 is a privacy nightmare.

Well, honestly, it is. Preston even admits it by saying:

Most of the fears have to do with Windows 10’s default privacy settings, created during the installation if you use the express install option. With those default options, Windows 10 will send your calendar and contact details to Microsoft; assign you an advertising ID that can track you on the Internet and, when using Windows apps, track your location; and send your keystrokes and voice input to Microsoft.

He goes on to say that you can turn them all off. There are two things wrong with this. First, opt-in, not opt-out, should be the default setting for anything being sent anywhere. Period, end of sentence. Second, there are still a number of things that security professionals are finding being sent to Microsoft even after you turn them off. Compound that with even more errors when you actively block the transmission of data to Microsoft. This is not a secure operating system. This is an information sieve.

What really upsets me is this:

Let’s face it – every time you use a computer, you’re living with tradeoffs between your privacy and getting things done more easily.

No. Privacy should never be a trade-off. Deciding what information I send to unknown third parties, and when, should always be my decision, not the decision of an organization that knows better than me. Most home users do not know any better, which means that Microsoft should be actively helping them protect themselves rather than exposing them to harm.

He concludes his article with this statement:

But other concerns have been overblown – in many cases you can change the defaults to make the operating system work more to your liking. And other concerns – for example, that Wi-Fi Sense automatically shares your Wi-Fi passwords with your friends and friends of friends – are myths.

No, they are not myths. They are facts, enabled by default, and while some of them can be turned off, doing so demands a much larger skill set from the average user than past versions of Windows did. Microsoft is not interested in its customers’ privacy or security, or these and other features would not be enabled by default, and that is not a myth.

A Month of Letter Writing

In January, we have National Handwriting Day, an excuse for those of us who like the art of writing to celebrate what is becoming a dying skill in this age of digitally processed information sharing. Following on that, a new challenge has popped up in February, called A Month of Letters. The goal of A Month of Letters is to send a letter, postcard, or response to any handwritten message, by mail, every day of the month except for Sundays and Presidents’ Day (since it is a US challenge).

What I love about this challenge is that it exercises two things: one, handwriting, and two, sending letters. I grew up in a time when instant communication meant picking up the telephone and dialing someone’s house, and if they were not there, you either left a message with someone else or it just had to wait. If you wanted to communicate with someone in another city, you could call, but most times the cost was prohibitive for anything other than critical messages. For everything else, we wrote letters.

I spent most of my teenage years away at boarding school, and at summer camp. The idea of using a phone was just not viable. I wrote letters. Lots of letters. And I liked getting letters. If you write letters, you get letters. At least most of the time. I did have a few friends that were not good letter writers, but most wrote regularly.

Flash forward to 2016. The number of people writing letters has dropped so much that finding writing paper is a challenge. There has been a resurgence of writing, especially writing with fountain pens, but there has yet to be a similar uptick in the physical act of sending letters. Hopefully, challenges like A Month of Letters and Postcrossing are two ways to stimulate the love of mail. If, like me, you are fascinated by letters, there is a wonderful book called To the Letter: A Celebration of the Lost Art of Letter Writing that you might enjoy as well.

Now, if you will excuse me, I need to go and write a letter. I just do not know who I am going to send it to. It might be you.

Using A New Tool

Every now and then, I find a new tool to make my life easier; at least, that is the theory. My first new tool was ditching Microsoft Windows for the Mac OS, at least as my primary day-to-day OS. Yes, I spend a large portion of my work day in Linux. At the moment the distribution is Ubuntu, but I spend most of my day staring at a terminal emulator. When I am not doing that, the OS should be something I do not have to think about, and Windows, especially Windows 8, was causing me too much thought. Then, with the release of Windows 10 and all the things that are talking back to Microsoft, I decided it was time to try something else. So the Mac won, despite the costs.

As many of you know, I have a certain loathing for the Mac. My primary argument has been (and still is) cost. It is just too bloody expensive. It has the advantage of being Unix-like under the covers, though, and it has a couple of other advantages in terms of photo work that Windows, even with all the RAM I could throw at it, just could not measure up to. So I bit the bullet and went Mac.

With the conversion came a couple of new tools, of which this is one: a piece of blogging software called Byword, a text editor with Markdown support. It connects seamlessly with my blogging platform (which is good) and supports Markdown, which is also good, because more of my documents are written in Markdown than in anything else these days. It is more portable and just a better way of doing things. This is the first post written with Byword, and I am doing it more to test out the software than anything else. So here we go.

Hello world!

Test Kitchen to support Amazon Web Services (AWS) AMIs

I will keep this document updated as I move along.

Summary


Security Considerations

Following the instructions on the Amazon Security Blog, you need to do a few things to get started.

First, you need to create a new file called credentials in ~/.aws and set its permissions to 600.
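
A minimal sketch of that step from the shell (assuming a standard home directory; adjust the paths if your setup differs):

$ mkdir -p ~/.aws
$ touch ~/.aws/credentials
$ chmod 600 ~/.aws/credentials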

The credentials file needs to look like this:

[default]
aws_access_key_id = "value here" <-- "This is the Access Key ID from IAM for the core user"
aws_secret_access_key = "value here" <-- "This is the secret ID from the CSV file that matches the access key"
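
If you also happen to have the AWS command line tools installed (they are not required for any of the Test Kitchen work below, so treat this purely as an optional sanity check), a simple read-only call is one way to confirm the credentials file is being picked up, assuming your IAM user is allowed to make it:

$ aws iam get-user

An authentication error here usually means the file is in the wrong place or the keys were pasted incorrectly.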

Some values also need to be set as environment variables, it seems. These lines go in your default .bash_profile:

export AWS_ACCESS_KEY_ID="value here"
export AWS_SECRET_ACCESS_KEY="value here"
export AWS_SSH_KEY_ID="PEM key name without the .pem"
export AWS_SSH_KEY="$HOME/.ssh/pem key with the .pem"

This is a bit belt-and-suspenders, but it works and doesn’t throw irrational errors that keep you chasing your tail. Ideally you should not need AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in your .bash_profile, but some functions seem to need them.
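
A quick way to confirm the variables are actually set in your current session (listing only the variable names so the secrets do not end up in your scrollback):

$ source ~/.bash_profile
$ env | grep AWS_ | cut -d= -f1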

You may want to set up a config file in ~/.ssh similar to:

# contents of $HOME/.ssh/config
Host chef
    User ubuntu
    HostName 52.91.89.20  <-- public IP address of instance
    IdentityFile ~/.ssh/awskey.pem <-- aws key
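
With that in place you can reach the instance with the short alias instead of spelling out the key and address every time (chef here is just the Host alias from the config above):

$ ssh chef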

Drivers

You will need the EC2 driver (kitchen-ec2) from GitHub. You will also need to install the AWS SDK for Ruby v2 gem.

To install the gems:

 $ gem install aws-sdk
 $ gem install kitchen-ec2
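
To confirm the gems landed (the names match the install commands above; your versions will vary):

 $ gem list | grep -E 'aws-sdk|kitchen-ec2'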

Instantiate the kitchen:

$ kitchen init --driver=kitchen-ec2 --create-gemfile
  create  .kitchen.yml
  create  test/integration/default
  create  Gemfile
  append  Gemfile
  append  Gemfile
You must run `bundle install' to fetch any new gems.
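
As the output says, fetch the gems the generated Gemfile references before going any further:

$ bundle install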

The .kitchen.yml file

Modify/tweak your .kitchen.yml file to look like either of these or use the baseline sample:

Ubuntu Sample

---
driver:
  name: ec2 <-- Driver name
  security_group_ids: ["security group"]
  require_chef_omnibus: true
  region: us-east-1 <-- Verify
  availability_zone: d <-- Verify
  subnet_id: "subnet-x"
  associate_public_ip: true <-- If you want to connect from outside.
  interface: private <-- To connect from in AWS

transport:
  ssh_key: "/home/ubuntu/.ssh/AWSKEY.pem" <-- set to your key name
  username: ["ubuntu"] <-- Connect user name (needs quotes and brackets)

provisioner:
  name: chef_solo

platforms:
  - name: ubuntu-14.04 <-- Descriptive name
    driver:
      image_id: ami-d05e75b8 <-- Verify
      instance_type: t2.micro <-- Verify
      block_device_mappings: <-- Optional
        - ebs_device_name: /dev/sdb
          ebs_volume_type: gp2
          ebs_virtual_name: test
          ebs_volume_size: 8
          ebs_delete_on_termination: true

suites:
  - name: default
    run_list:
    attributes:

CentOS/RHEL Sample

---
driver:
  name: ec2
  security_group_ids: ["security group"]
  require_chef_omnibus: true
  region: us-east-1 <-- zone may need verification
  availability_zone: e <-- may need verification
  subnet_id: "subnet-yoursubnet"
  associate_public_ip: true
  interface: private <-- when building from inside AWS

transport:
  ssh_key: ~/.ssh/AWS.pem <-- set to your key name
  username: ["ec2-user"] <-- may need to be root for CentOS, ubuntu for ubuntu

provisioner:
  name: chef_solo

platforms:
  - name: centos-6.4
    driver:
      image_id: ami-26cc934e <-- Verify
      instance_type: t1.micro <-- Verify
      block_device_mappings:
        - ebs_device_name: /dev/sdb
          ebs_volume_type: gp2
          ebs_virtual_name: test
          ebs_volume_size: 8
          ebs_delete_on_termination: true

suites:
  - name: default
    run_list:
    attributes:

Baseline file sample for both Ubuntu and CentOS/RHEL

---
driver: 
  name: ec2
  require_chef_omnibus: true
  aws_ssh_key_id: AWSKEY <-- AWS Key name (no .pem)
  security_group_ids: ["sg-...f"] <-- security group
  region: us-east-1 <-- verify your region
  associate_public_ip: true <-- if you need to access the node outside AWS
  interface: private <-- set to _private_ if you are inside AWS

provisioner:
   name: chef_solo
transport:
   ssh_key: "/location/.ssh/key.pem" <-- don't know why, but this has to be here and not in the individual sections. 

platforms:
   - name: rhel-7.1 <-- RHEL is not officially supported but will work
     driver:
       image_id: ami-12663b7a <-- verify the image 
       instance_type: t2.micro <-- verify the instance type and size
       availability_zone: e <-- verify the zone it can run in
       transport.username: ["ec2-user"] <-- user will vary _ec2-user_ is the default for RHEL, but may need _root_
       subnet_id: "subnet-...2" <-- verify the subnet with the zone
       block_device_mappings:
         - ebs_device_name: /dev/sdb
           ebs_volume_type: gp2
           ebs_virtual_name: test
           ebs_volume_size: 8
           ebs_delete_on_termination: true

   - name: ubuntu-14.04
     driver:
       image_id: ami-d05e75b8 <-- verify the image
       instance_type: t2.micro <-- verify the instance type and size
       availability_zone: d <-- verify the zone it can run in
       subnet_id: subnet-...c <-- verify the subnet with the zone
       transport.username: ["ubuntu"] <-- default name for Ubuntu
       block_device_mappings:
         - ebs_device_name: /dev/sdb
           ebs_volume_type: gp2
           ebs_virtual_name: test
           ebs_volume_size: 8
           ebs_delete_on_termination: true

suites:
  - name: default
    run_list:
    attributes:

If you want to assign a static address to the host, you have to do it at the kitchen create stage. In the platforms section, add:

network:
   - ["private_network", {ip: "172.31.47.69"}]

Using Kitchen

Kitchen List: Check your Instances and Actions

$ kitchen list
Instance             Driver  Provisioner  Verifier  Transport   Last Action
default-rhel-71      Ec2     ChefSolo     Busser    Ssh         <Not Created>
default-ubuntu-1404  Ec2     ChefSolo     Busser    Ssh         <Not Created>

Kitchen Create: Create an instance

$ kitchen create default-ubuntu-1404
-----> Starting Kitchen (v1.4.2)
-----> Creating <default-ubuntu-1404>...
    If you are not using an account that qualifies under the AWS free-tier, you may be charged to run these suites. 
    The charge should be minimal, but neither Test Kitchen nor its maintainers are responsible for your incurred costs.

   Instance <i-d4f71865> requested.
   EC2 instance <i-d4f71865> created.
   Waited 0/300s for instance <i-d4f71865> to become ready.
   Waited 5/300s for instance <i-d4f71865> to become ready.
   Waited 10/300s for instance <i-d4f71865> to become ready.
   Waited 15/300s for instance <i-d4f71865> to become ready.
   Waited 20/300s for instance <i-d4f71865> to become ready.
   Waited 25/300s for instance <i-d4f71865> to become ready.
   Waited 30/300s for instance <i-d4f71865> to become ready.
   Waited 35/300s for instance <i-d4f71865> to become ready.
   EC2 instance <i-d4f71865> ready.
   Waiting for SSH service on 172.31.63.224:22, retrying in 3 seconds
   Waiting for SSH service on 172.31.63.224:22, retrying in 3 seconds
   Waiting for SSH service on 172.31.63.224:22, retrying in 3 seconds
       [SSH] Established
       Finished creating <default-ubuntu-1404> (1m9.39s).
-----> Kitchen is finished. (1m9.46s)

$ kitchen list
Instance             Driver  Provisioner  Verifier  Transport   Last Action
default-rhel-71      Ec2     ChefSolo     Busser    Ssh         <Not Created>
default-ubuntu-1404  Ec2     ChefSolo     Busser    Ssh         Created

Kitchen Destroy: Destroy an Instance

$ kitchen destroy default-ubuntu-1404
-----> Starting Kitchen (v1.4.2)
-----> Destroying <default-ubuntu-1404>...
       EC2 instance <i-d4f71865> destroyed.
       Finished destroying <default-ubuntu-1404> (0m0.82s).
-----> Kitchen is finished. (0m0.87s)

Kitchen Setup: Install Chef on a node

$ kitchen setup default-rhel-71
-----> Starting Kitchen (v1.4.2)
-----> Creating <default-rhel-71>...
If you are not using an account that qualifies under the AWS free-tier, you may be charged to run these suites. 
The charge should be minimal, but neither Test Kitchen nor its maintainers are responsible for your incurred costs.

   Instance <i-387a1fc1> requested.
   EC2 instance <i-387a1fc1> created.
   Waited 0/300s for instance <i-387a1fc1> to become ready.
   Waited 5/300s for instance <i-387a1fc1> to become ready.
   Waited 10/300s for instance <i-387a1fc1> to become ready.
   Waited 15/300s for instance <i-387a1fc1> to become ready.
   Waited 20/300s for instance <i-387a1fc1> to become ready.
   Waited 25/300s for instance <i-387a1fc1> to become ready.
   Waited 30/300s for instance <i-387a1fc1> to become ready.
   Waited 35/300s for instance <i-387a1fc1> to become ready.
   EC2 instance <i-387a1fc1> ready.
   Waiting for SSH service on 172.31.41.13:22, retrying in 3 seconds
   Waiting for SSH service on 172.31.41.13:22, retrying in 3 seconds
   Waiting for SSH service on 172.31.41.13:22, retrying in 3 seconds
   Waiting for SSH service on 172.31.41.13:22, retrying in 3 seconds
   Please login as the user "ec2-user" rather than the user "root".

   Please login as the user "ec2-user" rather than the user "root".

   Finished creating <default-rhel-71> (1m47.75s).
-----> Converging <default-rhel-71>...
   Preparing files for transfer
   Preparing dna.json
   Preparing current project directory as a cookbook
   Removing non-cookbook files before transfer
   Preparing solo.rb
   Please login as the user "ec2-user" rather than the user "root".

   Please login as the user "ec2-user" rather than the user "root".

-----> Starting Kitchen (v1.4.2)
-----> Converging <default-rhel-71>...
   Preparing files for transfer
   Preparing dna.json
   Preparing current project directory as a cookbook
   Removing non-cookbook files before transfer
   Preparing solo.rb
-----> Installing Chef Omnibus (install only if missing)
   Downloading https://www.chef.io/chef/install.sh to file /tmp/install.sh
   Trying curl...
   Download complete.
   Downloading Chef  for el...
   downloading https://www.chef.io/chef/metadata?v=&prerelease=false&nightlies=false&p=el&pv=7&m=x86_64
     to file /tmp/install.sh.10715/metadata.txt
   trying curl...
   url  https://opscode-omnibus-packages.s3.amazonaws.com/el/7/x86_64/chef-12.5.1-1.el7.x86_64.rpm
   md5  9333136ba8a11bd6cad6d28fcd26a2c7
   sha256   7a937d8c0ab68a1f342aba4ad33417fc4ba8cb1a71f46e4a18b5e76c363e4075
   downloaded metadata file looks valid...
   downloading https://opscode-omnibus-packages.s3.amazonaws.com/el/7/x86_64/chef-12.5.1-1.el7.x86_64.rpm
     to file /tmp/install.sh.10715/chef-12.5.1-1.el7.x86_64.rpm
   trying curl...
   Comparing checksum with sha256sum...

   WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING

   You are installing an omnibus package without a version pin.  If you are installing
   on production servers via an automated process this is DANGEROUS and you will
   be upgraded without warning on new releases, even to new major releases.
   Letting the version float is only appropriate in desktop, test, development or
   CI/CD environments.

   WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING

   Installing Chef 
   installing with rpm...
   warning: /tmp/install.sh.10715/chef-12.5.1-1.el7.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 83ef826a: NOKEY
   Preparing...             ################################# [100%]
   Updating / installing... ################################# [100%]
   Thank you for installing Chef!
   Transferring files to <default-rhel-71>
   Starting Chef Client, version 12.5.1
   Compiling Cookbooks...
   Converging 0 resources

   Running handlers:
   Running handlers complete
   Chef Client finished, 0/0 resources updated in 00 seconds
   Finished converging <default-rhel-71> (0m39.27s).
-----> Setting up <default-rhel-71>...
   Finished setting up <default-rhel-71> (0m0.00s).
-----> Kitchen is finished. (0m39.32s)

$ kitchen list
Instance             Driver  Provisioner  Verifier  Transport   Last Action
default-rhel-71      Ec2     ChefSolo     Busser    Ssh         Set Up
default-ubuntu-1404  Ec2     ChefSolo     Busser    Ssh         <Not Created>

Kitchen Converge: Deploying a file to a node

Modify your .kitchen.yml file, and update the suites section with the recipe:

suites:
  - name: default
    run_list:
      - recipe[motd::default]
    attributes:

Then run the kitchen converge command:

$ kitchen converge default-rhel-71
-----> Starting Kitchen (v1.4.2)
-----> Creating <default-rhel-71>...
   If you are not using an account that qualifies under the AWS free-tier, you may be charged to run these suites. 
   The charge should be minimal, but neither Test Kitchen nor its maintainers are responsible for your incurred costs.

   Instance <i-af402556> requested.
   EC2 instance <i-af402556> created.
   Waited 0/300s for instance <i-af402556> to become ready.
   Waited 5/300s for instance <i-af402556> to become ready.
   Waited 10/300s for instance <i-af402556> to become ready.
   Waited 15/300s for instance <i-af402556> to become ready.
   Waited 20/300s for instance <i-af402556> to become ready.
   Waited 25/300s for instance <i-af402556> to become ready.
   EC2 instance <i-af402556> ready.
   Waiting for SSH service on 172.31.45.65:22, retrying in 3 seconds
   Waiting for SSH service on 172.31.45.65:22, retrying in 3 seconds
   [SSH] Established
   Finished creating <default-rhel-71> (1m4.66s).
-----> Converging <default-rhel-71>...
   Preparing files for transfer
   Preparing dna.json
   Preparing current project directory as a cookbook
   Removing non-cookbook files before transfer
   Preparing solo.rb
-----> Installing Chef Omnibus (install only if missing)
   Downloading https://www.chef.io/chef/install.sh to file /tmp/install.sh
   Trying curl...
   Download complete.
   Downloading Chef  for el...
   downloading https://www.chef.io/chef/metadata?v=&prerelease=false&nightlies=false&p=el&pv=7&m=x86_64
     to file /tmp/install.sh.5483/metadata.txt
   trying curl...
   url  https://opscode-omnibus-packages.s3.amazonaws.com/el/7/x86_64/chef-12.5.1-1.el7.x86_64.rpm
   md5  9333136ba8a11bd6cad6d28fcd26a2c7
   sha256   7a937d8c0ab68a1f342aba4ad33417fc4ba8cb1a71f46e4a18b5e76c363e4075
   downloaded metadata file looks valid...
   downloading https://opscode-omnibus-packages.s3.amazonaws.com/el/7/x86_64/chef-12.5.1-1.el7.x86_64.rpm
     to file /tmp/install.sh.5483/chef-12.5.1-1.el7.x86_64.rpm
   trying curl...
   Comparing checksum with sha256sum...

   WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING

   You are installing an omnibus package without a version pin.  If you are installing
    on production servers via an automated process this is DANGEROUS and you will be upgraded without warning on new releases, even to new major releases.
   Letting the version float is only appropriate in desktop, test, development or CI/CD environments.

   WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING

   Installing Chef 
   installing with rpm...
   warning: /tmp/install.sh.5483/chef-12.5.1-1.el7.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 83ef826a: NOKEY
   Preparing...             ################################# [100%]
   Updating / installing... ################################# [100%]
   Thank you for installing Chef!
   Transferring files to <default-rhel-71>
   Starting Chef Client, version 12.5.1
   Compiling Cookbooks...
   Converging 1 resources
   Recipe: motd::default
     * cookbook_file[/etc/motd] action create
       - update content in file /etc/motd from e3b0c4 to 295b84
       --- /etc/motd    2013-06-07 10:31:32.000000000 -0400
       +++ /etc/.motd20151210-10819-18peqj2 2015-12-10 14:02:01.757471882 -0500
       @@ -1 +1,10 @@
       + __________________________________
       +/ You are on a simulated Chef node \
       +\ environment                      /
       + ----------------------------------
       +        \   ^__^
       +         \  (oo)\_______
       +            (__)\       )\/\
       +                ||----w |

       - restore selinux security context

   Running handlers:
   Running handlers complete
   Chef Client finished, 1/1 resources updated in 00 seconds
   Finished converging <default-rhel-71> (0m32.21s).
-----> Kitchen is finished. (1m36.95s)

$ kitchen list
Instance             Driver  Provisioner  Verifier  Transport   Last Action
default-rhel-71      Ec2     ChefSolo     Busser    Ssh         Converged
default-ubuntu-1404  Ec2     ChefSolo     Busser    Ssh         <Not Created>

$ ssh -i ~/.ssh/awskey.pem ec2-user@52.91.126.45
Last login: Thu Dec 10 14:02:00 2015 from ip-172-31-60-114.ec2.internal
 __________________________________
/ You are on a simulated Chef node \
\ environment                      /
----------------------------------
        \       ^__^
         \      (oo)\_______
                (__)\       )\/\
                    ||----w |
                    ||     ||
[ec2-user@ip-172-31-45-65 ~]$ exit
logout
Connection to 52.91.126.45 closed.

Metadata.rb modifications

When you are creating a new recipe, you need to edit the metadata.rb file. For example, in the apache cookbook example, the file will look like:

name             'apache'
maintainer       'David A. Lane'
maintainer_email 'david.lane@gmx.com'
license          'All rights reserved'
description      'Installs/Configures apache'
long_description IO.read(File.join(File.dirname(__FILE__), 'README.md'))
version          '0.1.0'

Writing a recipe: Modifying recipe/default.rb

When you want to install a package, you will need to modify the default.rb file in the recipe subdirectory. An example for installing apache is as follows:

#
# Cookbook Name:: apache
# Recipe:: default
#
# Copyright 2015, YOUR_COMPANY_NAME
#
# All rights reserved - Do Not Redistribute
#

package "httpd" do
  action :install
end

Once you make that modification, run a kitchen converge [node] and it will install apache.
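
For example, against the RHEL suite used earlier (substitute your own instance name):

$ kitchen converge default-rhel-71

Once the converge completes, you can confirm the package is on the node: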

[ec2-user@ip-172-31-47-69 ~]$ rpm -qa httpd
httpd-2.4.6-40.el7.x86_64

Service Resource

You can take it a step further to install, and activate the package once it is installed by modifying the default.rb like this:

package "httpd" 

service "httpd" do
  action [ :enable, :start ]
end

Which should result in output like this:

$ kitchen converge default-rhel-71
-----> Starting Kitchen (v1.4.2)
-----> Converging <default-rhel-71>...
   Preparing files for transfer
   Preparing dna.json
   Preparing current project directory as a cookbook
   Removing non-cookbook files before transfer
   Preparing solo.rb
-----> Chef Omnibus installation detected (install only if missing)
   Transferring files to <default-rhel-71>
   Starting Chef Client, version 12.5.1
   Compiling Cookbooks...
   Converging 2 resources
   Recipe: apache::default
    (up to date)

       - enable service service[httpd]

       - start service service[httpd]

   Running handlers:
   Running handlers complete
   Chef Client finished, 2/3 resources updated in 03 seconds
   Finished converging <default-rhel-71> (0m5.05s).
-----> Kitchen is finished. (0m5.11s)

And on the server, you get:

[ec2-user@ip-172-31-47-69 ~]$ systemctl list-unit-files | grep httpd
httpd.service                               enabled 

Template Resource

Modify the default.rb to add the template line as shown:

package "httpd"

service "httpd" do
  action [ :enable, :start ]
end

template "/var/www/html/index.html" do
  source 'index.html.erb'
  mode '0644'
end

And then you need to create the index.html.erb file. Start by running chef generate template <file>:

$ chef generate template index.html

and then change into templates/default and edit the index.html.erb file with what you want to include, such as:

This site was set up by <%= node['hostname'] %>

and run another kitchen converge.

Check the output:

$ kitchen converge default-rhel-71
-----> Starting Kitchen (v1.4.2)
-----> Converging <default-rhel-71>...
   Preparing files for transfer
   Preparing dna.json
   Preparing current project directory as a cookbook
   Removing non-cookbook files before transfer
   Preparing solo.rb
-----> Chef Omnibus installation detected (install only if missing)
   Transferring files to <default-rhel-71>
   Starting Chef Client, version 12.5.1
   Compiling Cookbooks...
   Converging 3 resources
   Recipe: apache::default
    (up to date)
    (up to date)
    (up to date)

       - create new file /var/www/html/index.html
       - update content in file /var/www/html/index.html from none to b2f6ae
       --- /var/www/html/index.html 2015-12-11 12:49:17.376524243 -0500
       +++ /var/www/html/.index.html20151211-19185-1lfz25z  2015-12-11 12:49:17.376524243 -0500
       @@ -1 +1,2 @@
       +This site was set up by 

       - restore selinux security context

   Running handlers:
   Running handlers complete
   Chef Client finished, 1/4 resources updated in 03 seconds
   Finished converging <default-rhel-71> (0m5.03s).
-----> Kitchen is finished. (0m5.09s)

And then on the host, you can verify the installation:

[ec2-user@ip-172-31-47-69 ~]$ curl localhost
This site was set up by ip-172-31-47-69 

Using Knife

Creating a Knife file

$ knife cookbook create motd --cookbook-path .
WARNING: No knife configuration file found
** Creating cookbook motd in /home/ubuntu/git/motd
** Creating README for cookbook: motd
** Creating CHANGELOG for cookbook: motd
** Creating metadata for cookbook: motd

$ kitchen init --create-gemfile
conflict  .kitchen.yml
Overwrite /home/ubuntu/git/motd/.kitchen.yml? (enter "h" for help) [Ynaqdh] n
    skip  .kitchen.yml
conflict  chefignore
Overwrite /home/ubuntu/git/motd/chefignore? (enter "h" for help) [Ynaqdh] y
   force  chefignore
  create  Gemfile
  append  Gemfile
  append  Gemfile
You must run `bundle install' to fetch any new gems.

$ bundle install
Fetching gem metadata from https://rubygems.org/..........
Fetching version metadata from https://rubygems.org/...
Fetching dependency metadata from https://rubygems.org/..
Resolving dependencies...
Using mixlib-shellout 2.2.5
Using net-ssh 2.9.2
Using net-scp 1.2.1
Using safe_yaml 1.0.4
Using thor 0.19.1
Using test-kitchen 1.4.2
Using kitchen-vagrant 0.19.0
Using bundler 1.10.6
Bundle complete! 2 Gemfile dependencies, 8 gems now installed.
Use `bundle show [gemname]` to see where a bundled gem is installed.

To Dos



We Keep Shaving the English Language

On the way to work this morning, I heard an advertisement for unlimited apps. For a moment, I thought it was for a new pricing scheme on the Apple store, or something similar, but then they started talking about buffalo wings. It took me a moment to make the shift. And that confused me even more. And it got me thinking: when did we start shaving the English language to the point that a commercial about appetizers has me thinking about software?

Once upon a time, there was a little piece of software called an applet, usually prefaced with Java, as in Java applet. This was code that was downloaded from the server to a client, the first step along the way to the browser-based world of today. You could argue that this is where today’s app comes from. You could also argue that it comes from an abbreviation of application, which makes some limited sense, because they are small applications.

But when did we start shaving words that do not need to be truncated and make no sense truncated? My least favorite is convo, short for conversation. When did the word conversation get to be so bothersome that we needed to shorten it? Similarly with appetizer being truncated to app. In Politics and the English Language, George Orwell comments on how powerful words are, and not only that, but how badly, for political reasons, words are warped and changed to no longer mean what they did, but what the body politic wants them to mean. A perfect modern example is pro-life. This does not mean the individual is actually in favor of life, just opposed to abortion. Most who claim to be pro-life also support the death penalty and, in the United States, the Second Amendment, both of which are completely in opposition to life.

Why would we shave the language this way? Why would we let ad agencies and others get away with this sort of thing? Why? Because we are too lazy to prevent it from happening. I, for one, am not going to let it happen.

President 2016 – My dog is declaring his candidacy

From this morning’s WTOP:

COLUMBUS, Ohio (AP) — Ohio’s John Kasich, a blunt governor who embraces conservative ideals but disdains the political sport of bashing Hillary Rodham Clinton, is to become the 16th notable Republican to enter the 2016 presidential race.

Teddy Lane
No, seriously, I think Teddy announced his candidacy today, somewhere between his morning stretch and that bowl of kibble. And, I think, he has all the right qualifications. He is friendly to babies, outgoing, and photogenic. He has a strong platform related to ensuring his people are held together as a unit. And he loves the feel of the wind in his ears. I have never heard him say a bad word about anyone, and the fertilizer he spreads around is more robust than what I have heard coming out of the mouths of most of the presidential candidates so far. The only downside to his personality is his tendency to run as far and as fast as he can when he gets off the leash or out of the yard. But with all the new improvements around the White House, I am pretty sure that getting out is not something he is going to do very often. So when you are considering the options for 2016, I want you to consider Teddy for President. After all, he is just as qualified as any of the other candidates who have declared so far, on both sides of the aisle.

 

Congress is Upset?

This morning, the Washington Post reported:

Some lawmakers, including top Democrats, express frustration that the U.N. Security Council gets the chance to vote on the deal this week, signaling the international community’s intention to dismantle the sanctions against Iran before Congress votes on it. (NYTimes)

I read this once, then I read it again. Then I forced myself to read it yet a third time. At some point, I thought the author was kidding. But no, if you read the New York Times article, you begin to understand something very important: that the United Nations Security Council, part of a huge, multi-national organization, is more responsive and flexible than the United States Congress.

The document in question is the Iran nuclear deal, about 180 pages in length. The members of the United Nations Security Council, apparently, have had sufficient time to read the deal and have decided they know enough to schedule a vote. The United States Congress, on the other hand, is getting ready to leave Washington for its summer recess, and therefore will take the next sixty days to review the agreement, and then ponder whether it will vote. As a voting constituent, I ask two questions:

  • If this is such an important agreement, shouldn’t Congress delay their vacation to deal with the work in front of them?
  • If this is not so important, then why are they upset that the United Nations Security Council is voting before them?

Methinks Congress doth protest too much. Either that, or they really are less interested in doing their job than in keeping their job.

Rand Paul and the Patriot Act

Passed in the wake of September 11, 2001, the Patriot Act was a rush to grant law enforcement sweeping powers that they had not had prior to its passage. Most of the act is classified, and it is rumored that just talking about it is a felony. Over the weekend, the Patriot Act was on the chopping block, with numerous politicians scrambling to save it and the authorizations that it grants, the most sweeping of those being the bulk collection of metadata by the NSA. Senator Rand Paul (R-Ky.) stood alone against its renewal. In fact, Senator John McCain (R-Az.) said:

“He obviously has a higher priority for his fundraising and political ambitions than for the security of the nation.” (as heard on CBS World News Roundup – 1Jn2015).

Despite Senator McCain’s opinion, many people would disagree, both in the United States and abroad.

That being said, it is clear that Rand Paul is not naive, admitting that the bill will eventually pass and the wiretapping will go on.

What surprises me is that Senator McCain even thinks something like a filibuster could or would have any effect on the bulk collection of data. As if the expiration of a law could stop it? And before you get on your soapbox and rant that “It is a law, it is no longer in force, therefore it is illegal,” allow me to point out a few facts.

The federal bureaucracy moves with glacial inertia. It is very hard to get things moving, but once you do, it is almost impossible to make them stop. This is even more so in the intelligence community, which is not subject to any sort of real oversight. The bulk collection of data is a huge industry. There are buildings springing up like mushrooms to support the effort. Contracts worth billions of dollars have been let by the government, and the companies that hold those contracts will do everything in their power to keep those contracts active.

Short of an international delegation overseeing the complete shutdown of the collection process (much like under the SALT agreements for nuclear disarmament), the bulk collection of data is here to stay. Legally, or not.

The Corner Pharmacy

Is the corner pharmacy a relic of the past? Oh, sure, if you want a quart of milk and some baby wipes at 3 AM, it might be a convenient place to drop in. But if you are in need of medications, specifically acute medications, they have to order them, and they will be available in two to four weeks. Maybe we should let Amazon know there is an untapped market here.

I am not talking about maintenance medicines. Those medicines you order 30 at a time to keep your blood pressure or your diabetes under control. Not the medicine that you know you need and that you can plan on when you pick them up. I am talking about those medicines that are meant to stave off something and you need them now. Pain medications, antibiotics. Those medications that, if ordered, are valueless by the time they arrive two to four weeks later. At best, the infection has been fought off. At worst, you will be dead (or in hospital).

Now, I am not saying that they need to stock all combinations of the medications that are on the market today. But one would think that basic pain medications, antibiotics, and other acute requirement medications would be on the shelf.  You would also expect that, if you were a regular customer, they would have your needs on file and since their automated systems can call you and tell you when your prescriptions are due for a refill, they could at least have those medicines on the shelf and ready for you to pick up. Even this seems to be too much of a challenge for most local pharmacies.

I do not understand why they are taking on supermarkets. Or rather, maybe I do understand better. Since they do not seem to stock medicines, as is their primary function, they have to make their money somehow.

November is almost over

I did not tout National Novel Writing Month this year because, frankly, I did not think I had the time to participate, and if I did, I was pretty sure I would not have the time to finish successfully. After several years of not making it, I just did not want to get my hopes up. And if I had reported a week ago, I would have told you that it was not looking good. I had lost almost a week of writing, which, when you have to crank out more than 1,600 words a day to be successful, means a loss of almost 10,000 words. That is a huge margin to make up when you are shooting for 50,000. So I was not hopeful that my story would make the word count. But a couple of lucky breaks and a burst of imagination later, I have not only passed the 50K word count to be successful, but I still have some story to write, and should be able to actually complete the story too.

So if you think you cannot write a novel, I am here to assure you that you can. And if you are still struggling to get to the end, keep pushing. It is not over, until it is over!