How to leverage private modules in your YAML Pipelines

I’ve made no secret about my love of DevOps, and to be honest, over the past few months it’s been more apparent to me than ever that these practices are what make developers more productive. Taking the time to set up these processes correctly is extremely valuable and will pay significant dividends over the life of the project.

Now that being said, I’ve also been doing a lot of work with Python, and honestly I’m really enjoying it. It’s one of those languages that is fairly easy to pick up, but the options and opportunities its flexibility creates take longer to master. One of the things I’m thankful we started doing early was leveraging Python modules to empower our code re-use.

The ability to pip install those modules into containers makes it possible to cleanly separate the business logic from the compute implementation.

To that end, there’s a pretty common problem that I’m surprised isn’t better documented: if you’ve built Python modules and deployed them to a private artifact feed, how do you pull those same modules into a docker container?

Step 1 – Create a Personal Access Token

The first part of this is creating a personal access token in ADO, which you can find instructions for here. The key, though, is that the PAT must have access to the Packaging scope, and I recommend granting Read access only.
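
Once created, keep the token out of source control. As a minimal sketch, assuming you use the Azure DevOps CLI extension (the pipeline name and the variable name ado_pat here are hypothetical), you can store it as a secret pipeline variable:

# Store the PAT as a secret pipeline variable (names below are hypothetical)
az pipelines variable create --name ado_pat --pipeline-name my-container-build --secret true --value "<your PAT>"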

Step 2 – Update Dockerfile to accept an argument

Next we need to update our Dockerfile to accept an argument so that we can pass in that URL. You’ll need to build the URL you’re going to use in the following format:

https://{PAT}@pkgs.dev.azure.com/{organization}/{project id}/_packaging/{feed name}/pypi/simple/
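
Since this URL embeds the PAT, you don’t want it committed in plain text. As a minimal sketch, assuming the PAT is stored in a secret pipeline variable named ado_pat and using hypothetical organization, project, and feed names, you could compose it as a pipeline variable:

variables:
  # Composes the secret PAT into the private feed URL (all names are hypothetical)
  feed_url: 'https://$(ado_pat)@pkgs.dev.azure.com/contoso/MyProject/_packaging/my-feed/pypi/simple/'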

Accepting that argument is done by adding the following to your Dockerfile:

# Accept the private feed URL (PAT included) as a build argument
ARG feed_url=""
RUN pip install --upgrade pip
# Install dependencies, resolving packages from the private feed
RUN pip install -r requirements.txt --index-url="${feed_url}"
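
Before wiring this into the pipeline, it can help to verify the Dockerfile locally by passing the build argument by hand; the image name below is hypothetical and the URL follows the format above:

# Hypothetical local test of the build argument
docker build -t myimage:latest --build-arg feed_url="https://{PAT}@pkgs.dev.azure.com/{organization}/{project id}/_packaging/{feed name}/pypi/simple/" .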

Step 3 – Update YAML file to pass argument

The above provides the ability to pass the URL required for accessing the private feed into the docker image build. This can be done in the YAML pipeline as follows:

- task: Bash@3
  inputs:
    targetType: 'inline'
    script: 'docker build -t="$(container_registry_name)/$(image_name):latest" -f="./Dockerfile" . --build-arg feed_url="$(feed_url)"'
    workingDirectory: '$(Agent.BuildDirectory)'
  displayName: "Build Docker Image"

At this point, you can create your requirements.txt with all the appropriate packages, and they will be installed when your automated container image build runs.
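
For reference, private packages are listed in requirements.txt exactly like public ones; the module names and versions below are hypothetical:

# requirements.txt (module names and versions are hypothetical)
requests==2.31.0
my-private-module==1.0.0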

How to leverage templates in YAML Pipelines

It’s no secret that I’m a big fan of leveraging DevOps to extend your productivity. I’ve had the privilege of working on smaller teams that have accomplished far more than anyone could have predicted, and honestly the key principle at the center of those efforts is to treat everything as a repeatable activity.

Now, if you look at the idea of a microservice application, at its core it’s several different services that are independently deployable, and that statement alone can create a lot of excessive technical debt from a DevOps perspective.

For example, if I encapsulate all my logic into separate Python modules, I need a pipeline for each module, and those pipelines look almost identical.

Or if I’m deploying docker containers, my pipelines for each service likely look almost identical. See the pattern here?

Now imagine you do this and build a robust application with 20 to 30 services running in containers. If I then have to change their deployment pipeline, say by adding a new environment, I have to make the same changes to 20 to 30 pipelines.

Thankfully, ADO has an answer to this in the form of templates. The idea here is that we create a repo within ADO for our deployment templates, which contains the majority of the logic to deploy our services, and then call those templates from each service’s pipeline.

For this example, I’ve built a template that I use to deploy a docker container and push it to a container registry, which is a pretty common practice.

The logic to implement it is fairly simple and looks like the following:

resources:
  repositories:
    - repository: templates
      type: git
      name: "TestProject/templates"

Using the above code will enable your pipeline to pull from a separate git repo, and then you can use the following code to create a sample template:

parameters:
  - name: imageName
    type: string
  
  - name: containerRegistryName
    type: string

  - name: repositoryName
    type: string

  - name: containerRegistryConnection
    type: string

  - name: tag
    type: string

steps:
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: 'docker build -t="${{ parameters.containerRegistryName }}/${{ parameters.imageName }}:${{ parameters.tag }}" -t="${{ parameters.containerRegistryName }}/${{ parameters.imageName }}:latest" -f="./Dockerfile" .'
    workingDirectory: '$(Agent.BuildDirectory)/container'
  displayName: "Building docker container"

- task: Docker@2
  inputs:
    containerRegistry: '${{ parameters.containerRegistryConnection }}'
    repository: '${{ parameters.imageName }}'
    command: 'push'
    tags: |
      ${{ parameters.tag }}
      latest
  displayName: "Pushing container to registry"

Finally, you can go to any YAML pipeline in your project and use the following to reference the template:

steps:
- template: /containers/container.yml@templates
  parameters:
    imageName: $(imageName)
    containerRegistryName: $(containerRegistry)
    repositoryName: $(repositoryName)
    containerRegistryConnection: 'AG-ASCII-GSMP-boxaimarketopsacr'
    tag: $(tag)
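
For completeness, a minimal sketch of the variables that calling pipeline references; every value below is hypothetical:

variables:
  imageName: 'orders-api'                   # hypothetical image name
  containerRegistry: 'contoso.azurecr.io'   # hypothetical registry login server
  repositoryName: 'orders-api'              # hypothetical repository name
  tag: '$(Build.BuildId)'                   # tag each image with the build ID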