A lot of Tangram Vision’s software is [written in Rust](https://www.tangramvision.com/blog/why-were-choosing-rust). Internally, we always try to ensure that our code runs on the latest stable Rust, because that’s what we expect most of our users to be targeting. In particular, we’ve found that with every new Rust release comes new compiler lints (using `cargo clippy`), as well as all sorts of updates that make writing Rust easier.
Like any modern software company, we test and lint our code in CI (we use [GitLab](https://gitlab.com)). This is often done through a CI image based off of the [official Rust image](https://hub.docker.com/_/rust/), `rust:latest`; however, we cannot always rely on `rust:latest` on its own — sometimes we link to external C libraries through Rust to support sensors or data formats (e.g. our [realsense-rust](https://gitlab.com/tangram-vision/oss/realsense-rust) crate relies on [librealsense](https://github.com/IntelRealSense/librealsense/)). To make these external libraries readily available in CI, we build our own Docker image on top of `rust:latest` that contains all the libraries we need.
However, a major challenge with maintaining our custom Docker image is that every time `rust:latest` updates, we have to regenerate our custom image on top of the new base. This creates a lot of recurring work for our engineers, even though each individual change is small.
In this article, we detail a common recipe we use to automatically regenerate our internal Docker image whenever `rust:latest` is updated to point to a new version. This involves committing to the repo from a CI-runner, as well as scheduling and organizing our CI runs so that we get the new Rust image shortly after release.
Sample code shown in the article below is available at [https://gitlab.com/tangram-vision/oss/tangram-visions-blog/-/tree/main/2022.09.13_GitLabRecipesCommitFromCI](https://gitlab.com/tangram-vision/oss/tangram-visions-blog/-/tree/main/2022.09.13_GitLabRecipesCommitFromCI)
# The Problem
## What we're trying to accomplish
In short, we're just trying to schedule a job in CI that does the following:
1. Check if a new `rust:latest` exists on DockerHub. If it does:
   1. Bump the version PATCH number
   2. Commit this change
   3. Push to our default branch (referenced via `$CI_DEFAULT_BRANCH`)
2. Whenever a commit is pushed to main, we rebuild the Docker image and publish it to our internal [GitLab container registry](https://docs.gitlab.com/ee/user/packages/container_registry/).
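The steps above can be sketched as a short shell script. This is a rough sketch only: the registry query is stubbed out with a hard-coded digest (the one shown later in this article), and the commit/push step is replaced with an `echo`.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for a real DockerHub query; in CI this would fetch the current
# content digest of rust:latest (see latest-rust-manifest-digest.sh below).
fetch_latest_digest() {
    echo "sha256:df962ca4f985f8f6b81b7c02f66348d294974d8cafc2d0f15104c472623d433e"
}

# Work in a throwaway directory so the sketch is runnable anywhere.
workdir=$(mktemp -d)
cd "${workdir}"
echo "1.0.0" > VERSION

stored=$(cat rust-latest.digest 2>/dev/null || echo "none")
latest=$(fetch_latest_digest)

if [[ "${stored}" != "${latest}" ]]; then
    # Record the new digest and bump the PATCH component of MAJOR.MINOR.PATCH.
    echo "${latest}" > rust-latest.digest
    awk -F. '{ $NF=$NF+1; print }' OFS=. VERSION > VERSION.new && mv VERSION.new VERSION
    echo "Bumped to $(cat VERSION); would commit and push here."
fi
```

The rest of the article fills in the two stubbed pieces: querying DockerHub for the real digest, and committing the change from CI.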
And that's it. It doesn't sound complicated, but pushing a commit *from* CI turns out to be where most of the complexity lies.
## Setup
First and foremost, let’s set up the problem. Our repo will have a handful of files that we primarily care about:
1. `.gitlab-ci.yml`: The CI file for our repository.
2. `Dockerfile`: The file used to build Docker images.
3. `VERSION`: A file containing the version triplet (MAJOR.MINOR.PATCH, e.g. `1.2.3`) of our Docker image.
We'll be developing our `.gitlab-ci.yml` file throughout this article; however our basic structure will roughly be:
```yaml
stages:
  - build

build-and-publish:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - export BUILD_VERSION=$(cat ${CI_PROJECT_DIR}/VERSION | tr -d '\n')
    - /kaniko/executor
      --context $CI_PROJECT_DIR
      --dockerfile $CI_PROJECT_DIR/Dockerfile
      --destination $CI_REGISTRY_IMAGE:latest
      --destination $CI_REGISTRY_IMAGE:$BUILD_VERSION
      --cache
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```
💡 We use kaniko, a proven and secure way to build Docker images while running inside a container (as our CI jobs do). The GitLab docs highlight some of the issues with alternative approaches, such as Docker-in-Docker. Buildah is another good option.
Effectively, this will build our CI image whenever we push to our default branch (in this case `main`).
This `Dockerfile` is admittedly pretty basic, but will suit our purposes for the sake of this experiment. Most importantly, this Dockerfile demonstrates, in part, how we might add on top of the `rust:latest` layer over time.
```dockerfile
FROM rust:latest AS builder

RUN set -eux; \
    cargo install ripgrep;

FROM rust:latest

# Copy binaries, tools, libraries, etc. into final image.
COPY --from=builder /usr/local/cargo/bin/rg /usr/local/bin/

RUN set -eux; \
    # Add clippy
    rustup component add clippy; \
    #
    # INSTALL EXTERNAL LIBRARIES AND TOOLS HERE
    #
    # Clean up (if you installed anything with apt or cargo)
    rm -rf /var/lib/apt/lists/*; \
    rm -rf /usr/local/cargo/registry;

CMD ["bash"]
```
Last but certainly not least, our `VERSION` file is quite simple:
```
1.0.0
```
# Common Approaches
What we are doing here is not actually all that uncommon. Many projects want to build custom Docker images on top of existing Docker images from DockerHub or elsewhere. The tricky bit is that we want to bump the version number we use to label our Docker image via the `VERSION` file, which means CI needs to modify the repo, commit the change, and push that commit. While solving this problem internally, we came across a number of articles and guides that suggest one of two approaches:
- Use your own account’s credentials
- Create a dedicated bot user account
In both of these approaches, the user/bot account likely has wide-ranging permissions and its credentials (user+pass or SSH private key) must be stored in the CI variables of the GitLab organization or repo. These approaches have some downsides:
- The account’s credentials are exposed to CI and anything running inside CI (including compromised or malicious dependencies and developers or open source contributors with sufficient privileges to trigger CI pipelines to run)
- Any leak or security breach compromises everything the account has access to (potentially all repos in your organization or subgroup) and requires costly followup investigation and secret revocation/rotation and redeployment
- If the user leaves the organization, then CI will fail until new credentials are provided
- A dedicated bot user takes up a paid seat if you’re using GitLab SaaS
Considering the above downsides, we kept looking for a better solution.
# Recipe: Using API Tokens as a Bot
This takes us to our actual solution for the problem. We leverage [project access tokens](https://docs.gitlab.com/ee/user/project/settings/project_access_tokens.html) so that our "bots" have as limited access as possible to each repo, which also gives us a great way to surgically rotate tokens as necessary.
First, we need a bit of scaffolding in order for CI to check if a new `rust:latest` even exists. Our first step is to set up the schedule. For this, we can go to CI/CD → Schedules on the left sidebar of our repository and set up a weekly schedule to run CI for our check.
But what will CI run once a week? Well, we can start by adding a new stage to our `.gitlab-ci.yml` file as follows:
```yaml
scheduled-docker-build:
  stage: build
  image: debian:latest
  script:
    - 'apt update && apt install curl jq git -y'
    - # to be continued
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule" && $CI_COMMIT_REF_PROTECTED
```
Notice here that all we've done is install `curl`, `jq`, and `git`. We'll need all three of these in order to check if a new `rust:latest` image is available and to push a commit to `main` to trigger rebuilding our custom image on the new `rust:latest` image.
## What is the latest `:latest`?
Every Docker image has a [manifest](https://docs.docker.com/registry/spec/api/#manifest) describing a given tag (`latest`, in this example). The registry API serves this manifest along with a [content digest](https://docs.docker.com/registry/spec/api/#content-digests), which is a unique identifier for any given image. Since this value changes any time a new image is pushed to the `:latest` tag, we can compare it against the digest from the last time we built against `rust:latest`.
To get this digest, we use a small script we call `latest-rust-manifest-digest.sh`:
```bash
#!/usr/bin/env bash

token=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/rust:pull" \
    | jq -r '.token')

curl -H "Authorization: Bearer $token" \
    -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
    -s "https://registry-1.docker.io/v2/library/rust/manifests/latest" \
    -I \
    | grep "digest" \
    | cut -d " " -f2
```
At the time of writing, this gives us a string that looks something like:
```bash
$ ./latest-rust-manifest-digest.sh
sha256:df962ca4f985f8f6b81b7c02f66348d294974d8cafc2d0f15104c472623d433e
```
By storing this value in a file each time we build, we can later compare against it to see whether the `rust:latest` tag has changed.
💡 Generally speaking, the semantics of <code>:latest</code> mean that we're getting the most recent (i.e. latest) image, but theoretically the Rust project could push to <code>:latest</code> with an image that isn't what we want. For the purposes of this example, we're going to assume that won't happen.
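If that possibility is a concern, Docker also lets a `FROM` line pin a base image to an exact content digest rather than a floating tag, so a surprise push to `:latest` can't change what you build against. A quick illustration (the digest shown is the example value from above, not a digest to copy):

```dockerfile
# Pin the base image to an exact content digest; the :latest tag is then
# informational only, and the build always resolves to this specific image.
FROM rust:latest@sha256:df962ca4f985f8f6b81b7c02f66348d294974d8cafc2d0f15104c472623d433e
```

The trade-off is that a pinned digest never updates on its own, which is exactly what our automation works around by tracking the digest in a file instead.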
We'll add the following command to save that digest off to a file:
```bash
latest-rust-manifest-digest.sh > rust-latest.digest
```
Committing this script and the `rust-latest.digest` file doesn't do a lot on its own, but this does give us a way to track if the upstream Docker image has changed.
## Checking What's Changed
To check what's changed and push to main, we write one more script. It reads our stored digest file, checks whether the current latest digest matches it, and, if not, bumps the PATCH version of our Docker image and pushes that change to `main`.
The script that does this is called `check-rust-latest.sh`:
```bash
#!/usr/bin/env bash

set -euo pipefail

THIS_DIR=$(dirname "$0")

function on_rust_latest() {
    local current_manifest=$(cat "${THIS_DIR}/rust-latest.digest")
    local latest_manifest=$("${THIS_DIR}/latest-rust-manifest-digest.sh")
    [[ ${current_manifest} = ${latest_manifest} ]]
}

function bump_image_version() {
    local version_file="${THIS_DIR}/VERSION"
    local new_version=$(mawk -F. '{ $NF=$NF+1; print }' OFS=. "${version_file}")
    tee <<<"${new_version}" "${version_file}"
}

function check() {
    if on_rust_latest; then
        echo "Already on Rust latest; nothing to do."
    else
        "${THIS_DIR}/latest-rust-manifest-digest.sh" > "${THIS_DIR}/rust-latest.digest"
        local version=$(bump_image_version)
        git config user.name "CI Bot"
        git config user.email "gitlab-bots@example.com"
        REPO_PATH=$(git remote get-url origin | sed -E 's/^.*gitlab\.com.//')
        git remote set-url origin "https://CI-Bot:${PROJECT_ACCESS_TOKEN}@gitlab.com/${REPO_PATH}"
        git switch "${CI_COMMIT_REF_NAME}"
        git add --update
        git commit -m "Apply patch bump (v${version}) because of new rust:latest image"
        git push
    fi
}

check
```
The script has a few things going on, but the essential flow is:
1. Check if we’re already on the latest `rust:latest`. If yes, exit. Otherwise, continue.
2. Update the digest
3. Update the version file to the next PATCH version
4. Set up our git remote to use our `PROJECT_ACCESS_TOKEN` for authentication
5. Commit and push to whatever branch the scheduled job ran on (`CI_COMMIT_REF_NAME`)
With this, we now can fill out the rest of our `.gitlab-ci.yml` file from before:
```yaml
scheduled-docker-build:
  stage: build
  image: debian:latest
  script:
    - 'apt update && apt install curl jq git -y'
    - bash check-rust-latest.sh
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule" && $CI_COMMIT_REF_PROTECTED
```
💡 Tip: If you wanted to push a commit from CI as above, but tell CI not to run on that commit, just add <code>[skip ci]</code> anywhere in the commit message. GitLab CI will then skip all pipelines for that commit.
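As a quick illustration in a throwaway repository (the marker just goes anywhere in the message; the name, email, and version here are placeholders):

```shell
# Demonstrate the [skip ci] marker in a scratch repository.
workdir=$(mktemp -d)
cd "${workdir}"
git init -q
git config user.name "CI Bot"
git config user.email "gitlab-bots@example.com"

echo "1.0.1" > VERSION
git add VERSION

# GitLab looks for "[skip ci]" anywhere in the commit message and, when
# present, skips creating a pipeline for this commit after it's pushed.
git commit -q -m "Apply patch bump (v1.0.1) because of new rust:latest image [skip ci]"
git log -1 --format=%s
```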
## Creating and Sharing the Project Access Token
In order for the previous step to work, we need to make `PROJECT_ACCESS_TOKEN` available. First, we need to create the token. Go to Settings → Access Tokens on your project and create an access token with a role and scope that allow pushing to the repository (the `write_repository` scope).
Like other credentials we discussed earlier, we're going to share this via the project's CI/CD variables. However, unlike other credentials, this token represents a bot user on its own that only has permission to the project. This means that if we need to rotate it, it's easy (we can just do it the same way we created one, and the old token can be revoked). Anyone with Maintainer role on the project can rotate the token, which is an improvement compared to if we were using a user account or a dedicated bot account, where the owner of that account would be the only one able to rotate tokens.
This answers some of the security questions we had earlier; in particular, using tokens for the project allows us to revoke the token, and limit its scope. A time limit can also be placed on it if your team wants to rotate it every month / year / whatever.
One last detail we glossed over: our `main` branch is protected. Normally, pushing directly to `main` would be considered pretty poor form; however, we'll make an exception and allow this bot to push to the protected branch. So let's do just that!
Every project access token denotes a bot user. This user does not count towards your user total if you're paying for a premium version of GitLab, but it does get its own name in the listing of your project. This means that whatever permissions can be applied to users, can also be applied to the bot. This is a huge advantage if you're using this bot for multiple different job rules!
# Conclusion
With some scripting and the powerful functionality provided by GitLab’s CI tools, we have automated rebuilding of a custom Docker image on top of the latest official Rust image, whenever it changes. This automation has already yielded some tangible benefits:
- Our CI testing across all Rust repos stays up-to-date with recent Rust releases
- As a follow-on effect: we are quickly notified of any new `clippy` lints and any issues caused by new versions of Rust
- We save developers some time and focus — they no longer need to follow a manual process to periodically update our custom CI image or check what version is currently being used in CI to see if they can use newly-stabilized features or options of Rust and Cargo
Having a secure and usable pattern for automatic git and API actions using [project access tokens](https://docs.gitlab.com/ee/user/project/settings/project_access_tokens.html) also unlocks other time-saving automation, which we’ll post more about in the future! Thanks for reading, and please let us know if you have any corrections, suggestions, or questions!
<hr>
If you’re curious to learn more about how we improve perception sensors, visit us at [Tangram Vision](https://www.tangramvision.com/), and if saving perception engineers years of effort sounds like your calling, then I have even better news: we’re hiring! Check out our [careers page for open positions](https://www.tangramvision.com/careers).