
Rework CI pipeline to use Docker containers? #521

Open
NexAdn opened this issue Feb 7, 2023 · 0 comments

NexAdn commented Feb 7, 2023

This is intended as a continuation of the discussion in PR #512 (closed unmerged).
Discussing such decisions is IMHO not something that should be done in a PR review, but in a dedicated issue.

In the original PR I replaced the deprecated Repoman with pkgcheck and added a few more checks. To simplify the process, that PR migrates away from starting Docker containers on a machine runner and instead uses Docker runners with the prebuilt images from ::fem-overlay (fem-overlay, Dockerfile), which already provide a Portage tree and all the required tools (including fem-overlay-ci-tools, which the new manifest check needs).
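For illustration, the Docker-runner approach boils down to a job definition along these lines. This is only a sketch: the job name, registry, and image path are assumptions, not the actual CI configuration.

```yaml
# Sketch of a GitLab CI job using the prebuilt ::fem-overlay image.
# The image path below is hypothetical; substitute the real registry path.
pkgcheck:
  image: registry.gitlab.com/fem-overlay/fem-overlay-ci-image:latest
  script:
    # The Portage tree and CI tools are already baked into the image,
    # so no sync or emerge step is needed before running the checks.
    - pkgcheck scan
```

With everything preinstalled in the image, the job's script section reduces to the actual check commands, which is what keeps the pipeline runtime low.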

@simonvanderveldt was opposed to my changes, stating the following:

  • The steps run by the Docker runner can't be run in any easy way locally anymore
  • Simon is unsure whether we should depend on a third party Docker image, since (from his point of view) it's a random/unvetted image. Instead, the official Gentoo stage3 containers with a volume container was suggested.

I replied with these arguments (not in the exact same order):

  • The steps in the Docker runner don't need to be run in the exact same way locally, since the standard development tools pkgdev and pkgcheck already do the required checks automatically during development.
  • As pkgdev and pkgcheck are required for development, they are already installed on developers' machines, eliminating the need for Docker to run the tools.
  • The image is built using GitLab CI in a public repo and can thus be verified by anyone. The same cannot be said about images on Docker Hub (at least not for all of them).
  • The Dockerfile for the image is written by me.
  • Sticking to the current approach (stage3 plus portage tree) requires installing software in each CI run, which can cause problems if the stage3 is older than the mounted portage tree, causing large amounts of package updates/rebuilds.
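The "run the tools locally without Docker" alternative mentioned above amounts to invoking the standard tools in the overlay checkout, roughly like this (the checkout path is just an example):

```shell
# Assumes pkgdev and pkgcheck are installed, which they are on any
# developer machine since they are required for development anyway.
cd ~/src/fem-overlay   # hypothetical checkout location
pkgcheck scan          # run the QA checks across the repository
pkgdev manifest        # regenerate/verify the Manifest files
```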

So, continuing the discussion here: I am still in favor of using Docker runners with a prebuilt image (be it from a Dockerfile in the gentoo-audio GitHub org or the fem-overlay-ci-image).
I have used the approach of a plain stage3 with emerge --sync before (GitLab CI didn't support volume containers without touching the runner configuration). While it does work as long as no big updates need to be installed, it is still very slow (even with binary packages). A pipeline with everything preinstalled can run in less than 10 seconds, while the current approach of installing everything in the pipeline (which #520 also uses) usually blows up the run time to over a minute, so roughly six times slower.
This is not that annoying for CI pipelines, since waiting for that one minute to pass is not a big deal. However, we need to consider that these tests are also intended to be run locally, and I honestly don't want to wait a minute each time I make a commit. Especially when doing a bunch of QA fixes or version bumps at once, this can easily add several minutes to development time, causing unnecessary annoyance for developers. Thus, I consider this approach unusable for local development, since there are better alternatives (running the tools locally without Docker, or using a prebuilt Docker image).

What I can agree to is that running the pipeline commands locally with Docker runners is problematic. The script which is executed in the CI pipeline can't be run locally since it is not a shell script lying around somewhere in the repo. Using a machine runner that spins up a Docker container to do the work is an acceptable solution. But again, in this situation, I'd still favor using a prebuilt image for the reasons mentioned above, since it is basically the same situation as using a Docker runner and doing everything in the Docker container there.
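As a sketch, such a machine-runner (or local) invocation could look like the following; the image path and mount target are assumptions on my part, not the actual setup:

```shell
# Spin up the prebuilt container and run the checks against the local checkout.
# Image path and mount point are hypothetical examples.
docker run --rm \
  -v "$PWD":/overlay \
  -w /overlay \
  registry.gitlab.com/fem-overlay/fem-overlay-ci-image:latest \
  pkgcheck scan
```

Wrapping this in a small shell script committed to the repo would make the CI steps reproducible locally, addressing the concern about the pipeline script not existing anywhere in the repo.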

Let me know what everyone thinks about the matter. In any case, I'd like to keep this discussion open for a while so Simon has a chance to respond.
