Why not JavaScript *Without* a Framework?

For months now I have been thinking about the design of a web front end for a collection of services. There are *SO MANY CHOICES* out there that it’s sometimes difficult to turn off the incoming streams and make a choice. The exercise of thinking about a design often starts with thinking about frameworks. Code frameworks, primarily written for JavaScript with a few for Python, do a bunch of work for you. But there’s a price. Frameworks are often large, and they regularly force you to implement things in one and only one way. Frameworks are not compatible with one another, so a choice locks you into a heavy investment. Should we use NodeJS? React? Flask? The list is seemingly endless.

Yesterday, I stumbled upon a podcast that takes an alternate approach. The guest on the show, Chris Ferdinandi, asserts that the conversation need not start with a framework. He acknowledges that it can be overwhelming to think about frameworks. In some cases, it’s better to just use JavaScript directly. The code is smaller. There’s no framework lock-in. And JavaScript has come a long way from the state it was in several years ago that led to the proliferation of frameworks for it. To help people figure out what to do, Ferdinandi created a website, Go Make Things, where we can find small guides to help us learn to do things *without* resorting to frameworks.

The podcast uses quite a bit of jargon. I didn’t understand it all. But I did find the exposure to another way of thinking to be very helpful. I plan to check out Ferdinandi’s website.

Go Make Things:


Hanselminutes podcast:



AnsibleFest 2017, Network Modules in 2.3, and Ansible connection type

I had the opportunity to attend AnsibleFest 2017 in San Francisco. The sessions I attended were of high quality. It was well worth the cost of attendance.

One of the cool things the Ansible folks set up was “Ask an Expert” sessions. Attendees could sign up for a 15-minute appointment to discuss whatever issues had been troubling them. I signed up for a session with a network-module author to talk about my recent struggles running network modules using Ansible 2.3.

In Ansible 2.3, network modules require connection: local. When I ran my existing playbooks using Ansible 2.3, I saw errors of the form:

invalid connection specified, expected connection=local, got smart

I spent a bunch of time online before AnsibleFest trying to find out why this was happening. I didn’t understand connection: local, and the documentation was vague. The expert at AnsibleFest set me straight. The explanation was so simple that I had a solid “duh!” moment.

connection: local simply means that the module being invoked will be run on the Ansible control node. That’s it. The default connection type (smart, which normally resolves to ssh — note the “got smart” in the error above) pushes the module’s Python code to the inventory host and executes it there. Starting in Ansible version 2.3, network modules enforce connection: local because they operate against inventory hosts that usually don’t have Python installed, e.g. SROS devices.

I’ve now adopted the following best practice:

  • Configure my inventory to set connection type for localhost to ‘local’
    • [local_host]
      localhost ansible_connection=local
  • Delegate tasks that I want to run on the Ansible host to localhost
    • delegate_to: localhost
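Putting those two practices together, a minimal sketch might look like the following. The inventory group names and the staging task are hypothetical stand-ins, not code from my actual playbooks:

```yaml
# inventory (INI format):
#   [local_host]
#   localhost ansible_connection=local
#
#   [routers]
#   router1

# The play runs against the network devices with connection: local;
# the one task that must run on the Ansible host is delegated there.
- hosts: routers
  connection: local
  tasks:
    - name: Stage a config file on the Ansible host (hypothetical task)
      copy:
        src: files/router.cfg
        dest: /tmp/router.cfg
      delegate_to: localhost
```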


Ansible map() not available on el6

If you are running Ansible playbooks on an el6 machine and you run across an error like this:

2017-05-24 10:21:47,585 p=8525 u=root |  fatal: [localhost]: FAILED! =>
{"failed": true, "msg": "The conditional check '( myvsds is defined and 
( myvsds | map(attribute='target_server_type') | list | issuperset([\"kvm\"]) 
or myvsds | map(attribute='target_server_type') | list | issuperset([\"heat\"])
) ) or ( myvcins is defined and ( myvcins | map(attribute='target_server_type')
| list | issuperset([\"kvm\"]) or myvcins | map(attribute='target_server_type')
| list | issuperset([\"heat\"]) ) )' failed. The error was: template
error while templating string: no filter named 'map'. String: {% if (
myvsds is defined and ( myvsds | map(attribute='target_server_type') | list |
issuperset([\"kvm\"]) or myvsds | map(attribute='target_server_type') | list |
issuperset([\"heat\"]) ) ) or ( myvcins is defined and ( myvcins |
map(attribute='target_server_type') | list | issuperset([\"kvm\"]) or myvcins |
map(attribute='target_server_type') | list | issuperset([\"heat\"]) ) ) %} True
{% else %} False {% endif %}\n\nThe error appears to have been in 
'/metro-2.1.1/roles/build/tasks/get_paths.yml': line 8, column 7, but may\nbe
elsewhere in the file depending on the exact syntax problem.\n\nThe offending
line appears to be:\n\n  - block: # QCOW2\n    - name: Find name of VSD QCOW2
File\n      ^ here\n"}

Note the text no filter named 'map'. The problem is caused by the fact that Ansible, as of version 2.1, depends on the map() filter implementation from the python-jinja2 package. map() was introduced in python-jinja2 version 2.7, but the base python-jinja2 version for el6 is 2.2, hence the error above.

This means that playbooks using map() must run on an el7 Ansible host, or at least on a host with python-jinja2 2.7 or later installed.
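For reference, a condition like the one in the error can be sketched as follows. The variable and attribute names come from the error message above; the task itself is a hypothetical reduction, not the original playbook code:

```yaml
# On el6 (python-jinja2 2.2), templating this "when:" fails with
# "no filter named 'map'"; on el7 (python-jinja2 >= 2.7) it works.
- name: Check target server types (hypothetical task)
  debug:
    msg: "at least one VSD targets kvm"
  when:
    - myvsds is defined
    - "'kvm' in (myvsds | map(attribute='target_server_type') | list)"
```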

Ansible [WARNING]: The loop variable ‘item’ is already in use.

I made a simple change to an existing Ansible playbook. I used the include_role action to invoke another role. Since I was calling the role on a list of hosts that I had discovered dynamically at runtime, I used with_items to make the call iterate over the list.

Not good. I saw the following warning and error:

[WARNING]: The loop variable 'item' is already in use. You should set
the `loop_var` value in the `loop_control` option for the task to 
something else to avoid variable collisions and unexpected behavior.

fatal: [localhost]: FAILED! => {
 "failed": true,
 "msg": "The conditional check ''u16' not in item|json_query('Name')
failed. The error was: error while evaluating conditional ('u16' not 
in item|json_query('Name')): 'item' is undefined... }

After a bit of searching and reading docs, I figured out how to fix it. But the docs and examples were not straightforward. I hope you will find a better explanation herein.

First, Ansible (I’m using 2.2.1) doesn’t handle nested with_items loops properly. There’s something special about the way item is handled such that using item in nested loops causes one of the expected values to be overwritten.

My outer loop:

- name: Use ci-destroy to clean unused VMs
  include_role:
    name: ci-destroy
  with_items:
    - "{{ my_vm_list }}"

ci-destroy is a role that we use to garbage collect VMs from test failures in our CI environment. Before this task, the code gathers a list of the orphan VMs in the environment. The ci-destroy role is called on each one.

The ci-destroy role is my inner loop. It contains, among other things:

- name: Remove several entries from /etc/hosts file
  lineinfile:
    dest: /etc/hosts
    line: "{{ item }}"
    state: absent
  with_items: "{{ line_list }}"

With both the outer loop and the inner loop using {{ item }}, Ansible had a problem: the WARNING and the ERROR shown above.

The fix? Use loop_control to specify a non-default variable name for the inner item variable name. In my case:

- name: Remove several entries from /etc/hosts file
  lineinfile:
    dest: /etc/hosts
    line: "{{ line_item }}"
    state: absent
  with_items: "{{ line_list }}"
  loop_control:
    loop_var: line_item

Basically, I changed the inner loop to use line_item instead of item and declared the new name via loop_control. Worked like a charm.

Ansible dependencies via meta

I ran across an interesting feature of Ansible this week. A co-worker said that an upstream change to an open-source project he had been working on broke our installation code. The upstream author had moved the invocation of our role from the main playbook into roles/role_name/meta/main.yml. This broke the installation. Here’s why.

According to http://docs.ansible.com/ansible/playbooks_roles.html, dependencies listed in meta/main.yml are loaded and executed *before* the rest of the role. This is perfect for executing roles that are, in fact, dependencies. Dependencies get taken care of first. In my friend’s case, the upstream contributor didn’t understand that our role is *not* a dependency. When he moved the role invocation to meta/main.yml, he caused it to execute before its own dependencies had been satisfied. The fix was simple: Move our role back to the main playbook.

By the way, here’s what the dependencies look like in meta/main.yml:

dependencies:
  - { role: common, some_parameter: 3 }
  - { role: apache, apache_port: 80 }
  - { role: postgres, dbname: blarg, other_parameter: 12 }

The take-away is that using meta dependencies is yet another interesting way Ansible can be used to create clean playbooks that aren’t cluttered with dependencies.

Running Ansible as remote_user Requires Inventory

Maybe I just missed it. I was running a Jenkins job that triggered an Ansible role that pulled tar.gz files for several versions of my company’s software from a build server and deposited them in an NFS-shared directory on my Jenkins slave. The Jenkins slave was pulling double duty as my local NFS server for nightly builds. Before the nightly builds ran, this Jenkins job would ensure that my NFS server had the proper files staged and ready to go. Sounds easy, right?

Nope. We kept having permissions issues. The Ansible role we created had two tasks:

  1. Clean out unnecessary directories, i.e. those for versions we were no longer supporting
  2. Create and populate directories for the versions we were still supporting

We were experiencing permissions errors doing both tasks. I’ll save you the gory details, but we tried everything. We deleted everything. We used chmod, chown, and chgrp to set directory and file modes and ownership. We changed the Jenkins user. I tried running the playbook with become: yes. I tried sprinkling the tasks with remote_user: root. Nothing worked. I ran the job dozens of times, tweaking one thing at a time. Yuk.

Then I noticed something in the 4x verbose output of the job:


I had set remote_user: root, yet the tasks were not running as root. Hmm. I checked another job that wasn’t having this problem. Sure enough, its tasks were running as user root.

Here’s the difference: Playbook A, the one that was failing, didn’t use an inventory file because it was always executing on localhost. Playbook B, by contrast, used an inventory file. When I switched Playbook A to use an inventory file, everything worked. Bottom line: use an inventory file when you want to run as remote_user.
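For illustration, here is roughly what the working combination looked like. The group name, paths, and task are hypothetical stand-ins for my actual job:

```yaml
# inventory file "hosts" (INI format):
#   [nfs_server]
#   build-slave.example.com

# Invoked as: ansible-playbook -i hosts stage-builds.yml
- hosts: nfs_server
  remote_user: root
  tasks:
    - name: Ensure the staging directory exists (hypothetical task)
      file:
        path: /exports/builds
        state: directory
        owner: root
        group: root
```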

I suspect there may be a more elegant way to fix this, but in the fast-paced environment in which I work I am happy to have this solution.

Time-stamped Directory Name

One of my co-workers wrote an Ansible playbook that gathered and processed data from a number of nodes in our lab. There could be as many as 250 nodes in play. Here’s a high-level overview of the steps the playbook took:

  • Created a local temporary directory via local_action
  • Wrote intermediate files for each node to the local temp directory
  • Read collected intermediate files from the local temp directory
  • Deleted the local temp directory

Do you see the mistake? By default, Ansible will attempt to parallelize the operation across as many nodes as possible. The first one that finishes will–you guessed it–delete the temporary directory. Oops.

Initial testing was done against a single node. When I added a second, BOOM. After looking at it, I decided that I had the following viable options to fix:

  1. Use serial: 1 in the playbook to prevent concurrent execution. This is undesirable because it would make runs against 250 nodes take *much* longer.
  2. Restructure the playbooks such that temp directory creation and deletion took place outside of the data gathering. This would have been a lot of work *and* introduced dependencies between playbooks that I don’t like. Using the same temp directory name in more than one playbook is one example of such coupling.
  3. Use a unique temp directory for each node.

Not very elegant, but the last option listed, above, was simple and practical. A quick search yielded a code snippet similar to the following:

- name: Create a temporary directory name using timestamp
  set_fact:
    tmp_scripts_dir: "{{ playbook_dir }}/scripts/{{ lookup('pipe', 'date +%Y%m%d%H%M%S.%5N') }}/tmp"

This creates a temp directory name that includes a timestamp with sub-second precision (date’s %5N prints the first five digits of the nanoseconds field), fine enough to differentiate between nodes that are kicked off within the same second. I then used tmp_scripts_dir to satisfy the process steps.
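The surrounding steps might then be sketched like this, assuming tmp_scripts_dir has been set per host as above. The task names are hypothetical stand-ins for the real playbook:

```yaml
- name: Create this host's temp directory on the Ansible node
  file:
    path: "{{ tmp_scripts_dir }}"
    state: directory
  delegate_to: localhost

# ... write and read each node's intermediate files under tmp_scripts_dir ...

- name: Delete this host's temp directory
  file:
    path: "{{ tmp_scripts_dir }}"
    state: absent
  delegate_to: localhost
```

Because each host computes its own tmp_scripts_dir, a host that finishes first deletes only its own directory, not one that another host is still using.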