Kubernetes is a widely used container platform, but it is definitely not the only one in use. The ability to run XLRelease Runners on plain Docker/Podman, or even HashiCorp's Nomad, would be a massive benefit. My organization would love to run plugins like Terraform and Vault from external Runners on these simpler setups. This could also empower smaller teams to leverage XLRelease for Infrastructure as Code deployments.

This would pair well with https://ideas.digital.ai/devops/Idea/Detail/4421

Comments

  • Hi Kiyotoshi,

    Thank you for the request.

    For Docker environments we offer the Cloud Connector option. This installs a lightweight Kubernetes environment with the Release runner on your Docker machine.

    To install it, go to Settings (gear menu) > Runners > Add Runner and choose "Digital.ai Release runner install with Digital.ai Cloud Connector". This will walk you through the installation process.
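
    If you want a quick sanity check after the workflow finishes, something along these lines can confirm that the embedded runtime's containers are up on the Docker host. This is only a rough sketch, not part of the product: the "cloud-connector" name fragment is an assumption, so adjust it to whatever containers the installer actually creates on your machine.

    ```python
    # Rough post-install check; not an official Digital.ai tool.
    # Assumption (hypothetical): the Cloud Connector's embedded Kubernetes
    # runtime shows up as containers whose names contain "cloud-connector".
    import subprocess


    def running_container_names() -> list[str]:
        """Return the names of all running Docker containers."""
        result = subprocess.run(
            ["docker", "ps", "--format", "{{.Names}}"],
            capture_output=True, text=True, check=True,
        )
        return [name for name in result.stdout.splitlines() if name]


    if __name__ == "__main__":
        names = running_container_names()
        hits = [n for n in names if "cloud-connector" in n.lower()]
        if hits:
            print("Candidate Cloud Connector containers:", ", ".join(hits))
        else:
            print("No matching containers found; check the installer output.")
    ```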

    The first page of the installation workflow lists the requirements:

    ---
    This workflow will help you set up a Release runner by way of the Digital.ai Platform Cloud Connector.

    Use this option if you don't have access to an existing Kubernetes cluster.

    The Cloud Connector provides an embedded Kubernetes runtime environment that will host the Release runner and the container-based tasks coming from Release.

    Prerequisites

    For a successful runner installation with Digital.ai Platform, you will need:

    - Admin permissions in Digital.ai Release for the user running the workflow
    - Network access to Digital.ai Platform service on the internet
    - Username and password of the admin user of your customer account on Digital.ai Platform
    - A target machine for the installation running Linux or macOS
    - A Docker runtime on the target machine
    ---
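
    Before starting, those prerequisites can be checked on the target machine along these lines. Again, just a sketch under assumptions: PLATFORM_URL is a placeholder, since the actual Digital.ai Platform endpoint for your account is not listed here, and the Release-side permissions and credentials are not covered.

    ```python
    # Rough pre-flight check for the prerequisites above; not an official script.
    # PLATFORM_URL is a placeholder -- substitute the Digital.ai Platform
    # endpoint that applies to your customer account.
    import platform
    import shutil
    import subprocess
    import urllib.error
    import urllib.request

    PLATFORM_URL = "https://platform.example.com"  # placeholder


    def os_supported() -> bool:
        """The installer targets Linux or macOS."""
        return platform.system() in ("Linux", "Darwin")


    def docker_available() -> bool:
        """A Docker runtime must be present and responding."""
        if shutil.which("docker") is None:
            return False
        return subprocess.run(["docker", "info"], capture_output=True).returncode == 0


    def platform_reachable(url: str = PLATFORM_URL) -> bool:
        """Outbound network access to the Digital.ai Platform service is required."""
        try:
            urllib.request.urlopen(url, timeout=10)
            return True
        except urllib.error.HTTPError:
            return True   # got an HTTP status back, so the host is reachable
        except OSError:
            return False  # DNS failure, timeout, connection refused, ...


    if __name__ == "__main__":
        checks = [
            ("Linux/macOS", os_supported()),
            ("Docker runtime", docker_available()),
            ("Platform reachable", platform_reachable()),
        ]
        for label, ok in checks:
            print(f"{label}: {'OK' if ok else 'MISSING'}")
    ```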

    Let me know if this option could work for you.

    Note: We can help you with access to the Digital.ai Platform account through Support. Please reference the link to this Idea in the support request.

    Kind regards,

    Hes Siemelink