Maybe I just missed it. I was running a Jenkins job that triggered an Ansible role that pulled
tar.gz files for several versions of my company’s software from a build server and deposited them in an NFS-shared directory on my Jenkins slave. The Jenkins slave was pulling dual duty as my local NFS server for nightly builds. Before the nightly builds ran, this Jenkins job would ensure that my NFS server had the proper files staged and ready to go. Sounds easy, right?
Nope. We kept having permissions issues. The Ansible role we created had two tasks:
- Clean out any unnecessary directories (for versions we were no longer supporting)
- Create and populate directories for the versions we were supporting
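For context, the role's two tasks might look something like this. This is a minimal sketch, not our actual role; `retired_versions`, `supported_versions`, `build_server_url`, and the `/exports/builds` path are all hypothetical names:

```yaml
# tasks/main.yml -- illustrative sketch of the two tasks described above
- name: Remove directories for versions we no longer support
  file:
    path: "/exports/builds/{{ item }}"
    state: absent
  loop: "{{ retired_versions }}"

- name: Create a directory for each supported version
  file:
    path: "/exports/builds/{{ item }}"
    state: directory
    mode: "0755"
  loop: "{{ supported_versions }}"

- name: Download and unpack each version's tarball from the build server
  unarchive:
    src: "{{ build_server_url }}/{{ item }}.tar.gz"
    dest: "/exports/builds/{{ item }}"
    remote_src: yes
  loop: "{{ supported_versions }}"
```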
We were experiencing permissions errors on both tasks. I'll save you the gory details, but we tried everything. We deleted everything and started over. We used `chmod` and `chgrp` to set directory and file modes and group ownership. We changed the Jenkins user. I tried running the playbook with `become: yes`. I tried sprinkling the tasks with `remote_user: root`. Nothing worked. I ran the job dozens of times, tweaking one thing at a time. Yuk.
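For reference, the knobs we were twiddling sit at the play level, roughly like this (a hypothetical playbook, not our actual one; `stage_builds` is an illustrative role name):

```yaml
# stage-builds.yml -- illustrative
- hosts: localhost
  remote_user: root   # which user Ansible should connect as -- this was being ignored
  become: yes         # escalate privileges after connecting
  roles:
    - stage_builds
```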
Then I noticed something in the 4x verbose (`-vvvv`) output of the job:
ESTABLISH SSH CONNECTION FOR USER: fred
I had set `remote_user: root`. Hmm. I checked another job that wasn't having this problem. Sure enough, it was connecting as user root.
Here's the difference: Playbook A, the one that was failing, didn't use an inventory file because it always executed on localhost. Playbook B, by contrast, used an inventory file. When I switched Playbook A to use an inventory file, everything worked. Bottom line: use an inventory file when you want `remote_user` to actually take effect.
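The fix amounted to pointing the playbook at an explicit inventory instead of relying on the implicit localhost. A minimal sketch, with illustrative hostnames and filenames rather than our real ones:

```ini
# inventory.ini -- illustrative; the real file pointed at our Jenkins slave
[nfs_servers]
jenkins-slave01 ansible_user=root
```

Then run the playbook against it with `ansible-playbook -i inventory.ini stage-builds.yml`, and the connection user matches what you asked for.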
I suspect there may be a more elegant way to fix this, but in the fast-paced environment in which I work, I am happy to have this solution.