In Ansible Automation Platform 2.0 (the new version of Ansible Tower), there are changes in how you work with custom environments. Instead of creating Python virtual environments, Automation Platform works with “Ansible execution environments”.

Here’s an overview of what those are:

https://www.ansible.com/blog/whats-new-in-ansible-automation-platform-2-automation-execution-environments.

A very useful blog post that really helped me along: https://blog.networktocode.com/post/ansible-builder-runner-ee/.

I share that author’s frustration with the crappy documentation on Ansible Runner and Ansible Builder.

An Ansible execution environment is a container image (Podman is used by default) that you configure to contain all the dependencies necessary to run your playbooks. The idea is that a developer can run playbooks in the execution environment on their local machine using Ansible Runner, and be certain that everything will work exactly the same when running on Ansible Automation Platform. This also changes the way Tower works with collections: instead of running ansible-galaxy every time a playbook starts, you now pre-define the Galaxy collections (and roles) in the execution environment.

So this is an example execution environment to use with ISVA (IBM Security Verify Access):

  • it contains the Python prerequisites

  • it contains the ibm.isam collection

  • it contains operating-system-level dependencies to add to the base container image

You will likely have to create an execution environment that is specific to your own setup:

  • connect to your local Automation hub instance for Galaxy

  • include any other collections

  • include additional Python libraries

So the goal is to create a custom execution environment (as a container image) and publish that image (for instance to a public registry or to your local Automation hub instance), so it’s available both to your developers and to Automation Platform (Tower).

You can skip ahead to the Ansible Runner section if you want to use an existing image I created earlier.

Ansible Builder

Install ansible-builder

Install Ansible Builder, the tool to build a new execution environment. This example installs ansible-builder (and ansible) in a separate Python virtual environment (using a separate virtual environment is optional).

python3 -m virtualenv ~/venvansiblebuilder
source ~/venvansiblebuilder/bin/activate

pip install ansible ansible-builder
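
To check that the installation worked, you can ask both tools for their version (as far as I know both support a --version flag):

# both commands should print a version number
ansible-builder --version
ansible --version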

Configure the image

Create a directory to contain your build information.

mkdir ~/builder
cd ~/builder

This directory needs to contain the configuration files for Builder. In general most of these files are optional, but for this ISVA image, only ansible.cfg is optional.
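
Once you have created the files described below, the builder directory will look something like this:

~/builder
 |____execution-environment.yml
 |____requirements.txt
 |____requirements.yml
 |____bindep.txt
 |____ansible.cfg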

execution-environment.yml

This is the main configuration file. It contains the references to the configuration files for Python, the system and Galaxy, and allows you to include additional (Docker build) commands.

version: 1

build_arg_defaults:
  EE_BASE_IMAGE: 'quay.io/ansible/ansible-runner:stable-2.12-devel'
  EE_BUILDER_IMAGE: 'quay.io/ansible/ansible-builder:latest'

ansible_config: 'ansible.cfg'

dependencies:
  python: requirements.txt
  galaxy: requirements.yml
  system: bindep.txt

additional_build_steps:
  prepend: |
    RUN pip3 install --upgrade pip setuptools
  append:
    - RUN mkdir /ansible
    - RUN echo Successfully built the IBM ISAM Ansible Execution environment

EE_BASE_IMAGE is the base image to start from; the image tags correspond to specific Ansible versions. You can find the available base ansible-runner images on quay.io:

https://quay.io/repository/ansible/ansible-runner?tab=tags
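
If you want to target a different Ansible version, you can point EE_BASE_IMAGE at another tag from that page. For example (the exact tag name below is only an illustration, check the tags page for what is actually published):

build_arg_defaults:
  # illustrative tag name, verify on quay.io before using
  EE_BASE_IMAGE: 'quay.io/ansible/ansible-runner:stable-2.11-devel'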

requirements.txt (Python)

The required Python libraries. You can pin versions here if you need to (it’s a standard Python requirements file, after all).


NOTE: If you already have a working Python virtual environment for the IBM ISAM collection, you can start from pip freeze > requirements.txt.

The contents of the file:

requests
jmespath
ibmsecurity==2021.9.22.0

Custom Python libraries cannot be added here, because ansible-builder does not support private repositories. So if you want to pull from a custom Python repository (like test.pypi.org), you can add the install command to the prepend section of additional_build_steps:

RUN pip install --upgrade --index-url https://test.pypi.org/simple/ --no-deps tbosmans-isam-demo
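
For reference, this is roughly how that line fits into the additional_build_steps section of execution-environment.yml (the index URL and package name are the ones from the command above):

additional_build_steps:
  prepend: |
    RUN pip3 install --upgrade pip setuptools
    RUN pip install --upgrade --index-url https://test.pypi.org/simple/ --no-deps tbosmans-isam-demo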

requirements.yml (Galaxy)

The required collections and roles that will be downloaded from Ansible Galaxy (or from whatever you define in ansible.cfg). The configuration, with a reference to the custom collection, looks like this:

---
collections:
  - community.general
  - ibm.isam
  - name: https://github.com/tombosmansibm/custom_isam_collection.git
    type: git
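
If you need a particular branch, tag or commit of the git-based collection, you can add a version key to that entry (the branch name below is just an example):

  - name: https://github.com/tombosmansibm/custom_isam_collection.git
    type: git
    # branch, tag or commit; "main" is only an example
    version: main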

bindep.txt (Linux)

The required Linux system packages. What’s in here is passed to dnf (the default package manager on CentOS).

# necessary for ibmsecurity:
python38-devel
openldap-devel
gcc
# support for git based repositories
git
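
bindep also lets you limit a package to a platform profile, which can be handy if you reuse the same bindep.txt across different base images (a small sketch, not required for this image):

# only needed on RPM-based platforms (example)
gcc [platform:rpm]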

ansible.cfg

The main purpose of the ansible.cfg file is to configure Ansible Galaxy.

[galaxy]
#server_list = automation_hub, my_org_hub, release_galaxy
server_list = release_galaxy

[galaxy_server.automation_hub]
url=https://cloud.redhat.com/api/automation-hub/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
token=my_ah_token

[galaxy_server.my_org_hub]
url=https://automation.my_org/
username=my_user
password=my_pass

[galaxy_server.release_galaxy]
url=https://galaxy.ansible.com/

Build the image

Once you have prepared the configuration files, you can build the image. You can tag it immediately (-t) and ask for verbose output (-v 3; the default verbosity is 2, which is not very informative).

ansible-builder build -v 3 -t isva-runner-demo:0.0.5
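
When the build finishes successfully, the new image shows up in your local Podman storage:

podman images localhost/isva-runner-demo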

Problems during build

If you run into problems during the build, it’s a good idea to pull the latest base images from Quay.io. In particular, the ansible-builder image may need to be refreshed:

podman pull quay.io/ansible/ansible-builder:latest
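
It can also help to refresh the base runner image that execution-environment.yml refers to:

podman pull quay.io/ansible/ansible-runner:stable-2.12-devel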

And then try again.

Push the image to a container registry (optional step)

This step makes more sense if you use a local registry for your container images.

Log in to the registry

podman login quay.io

Tag the image (optional if you already tagged it during the build step)

podman tag localhost/isva-runner-demo:0.0.5 isva-runner-demo:0.1.1

Push to the registry. Replace my username in the path with your own.

podman push localhost/isva-runner-demo:0.1.1 quay.io/tombosmansibm/isva-runner-demo:0.1.1

For quay.io, you’ll now have to make the image public using the web interface unless you have at least a ‘Developer’ plan with them.

Ansible Runner

Now you can run playbooks using Ansible Runner with the image you just created yourself, or with the image I created earlier.

There are two ways to use the execution environment:

Ansible Runner on the host

Install Ansible Runner locally

Install ansible-runner on your host system, optionally in a virtual environment.

pip install ansible-runner

Prepare file structure

There’s a structure that Ansible Runner expects: a directory containing your playbooks, with a subdirectory named inventory that contains your inventories.

directory structure

<playbook_dir>
 |____playbook.yml
 |____inventory
        |____group_vars
        |____host_vars
        |____hosts
 |____env
        |____settings
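
A minimal way to create that structure from an empty playbook directory (the actual file contents are up to you):

mkdir -p inventory/group_vars inventory/host_vars env
touch inventory/hosts env/settings playbook.yml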

env/settings

The settings file is a YAML file that can contain some of the command-line arguments as defaults.

This is an example, based on the image that I pushed to Quay.io earlier.

---
container_image: quay.io/tombosmansibm/isva-runner-demo:0.1.1
process_isolation_executable: podman # or docker
process_isolation: true
container_volume_mounts:
  - ".:/runner/project"

Note the container_volume_mounts entry, which should not actually be necessary according to the documentation, but is at the moment. The default container runtime is Podman, but you can also use Docker.

Run playbooks

To run the ‘playbook.yml’ in the current directory on the execution environment, you’d do this:

ansible-runner run . -p playbook.yml

If you do not use an env/settings file, the complete command would look like this:

ansible-runner run --process-isolation --container-volume-mount .:/runner/project --container-image quay.io/tombosmansibm/isva-runner-demo:0.1.1 . -p isva_playbooks/0_configure_system.yml

Run playbooks from the collection

You can also run playbooks from your custom collection.

The custom collection that we added in requirements.yml (Galaxy) is located at /usr/share/ansible/collections/ansible_collections/custom/isam/ in the container.

We can run the playbooks like this:

ansible-runner run . -p /usr/share/ansible/collections/ansible_collections/custom/isam/playbooks/dev-base-setup-module-demo.yml

The inventory will come out of the current directory on the host, just like in the previous example.
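
If you are not sure which playbooks the custom collection ships, you can list them straight from the image (here using the locally built image from the build step above):

# image name and tag assumed from the local build earlier in this post
podman run --rm localhost/isva-runner-demo:0.0.5 ls /usr/share/ansible/collections/ansible_collections/custom/isam/playbooks/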

Ansible Runner in the Execution environment

You can also run playbooks from within the execution environment itself, which is handy when you want to debug the container image or look in detail at what is happening.

Start the container in interactive mode:

podman run -it -v ".:/runner/project:Z" --rm localhost/isva-runner-demo:0.0.5 /bin/bash

In my experience the volume mount needs the Z option (a private, unshared SELinux label); on SELinux-enforcing hosts this relabels the mounted directory so that processes in the container are allowed to access it.

You will get a command prompt in the /runner directory inside the container.

From the command prompt in the container, now start the playbook, for instance:

ansible-runner run . --inventory inventory -p isva_playbooks/0_configure_system.yml

The ansible-runner command actually works from /runner/project, so the paths look a bit odd. And even though the inventory directory should also be picked up from /runner/project/inventory, it is not, so the --inventory argument seems necessary.

Next steps

The next step would be to run this on Tower. I’ll come back to this in a next post.