Downloading the Internet
Can you remember the last time you started with a clean development environment and ran a build of some software that uses Maven or Gradle for dependency management? It takes ages to download all of the necessary third-party libraries from one or more remote repositories, leading to expressions like, "Just waiting for Maven to download the Internet".
Once your development environment has been used to build a few projects, the set of dependencies that needs to be downloaded for subsequent builds shrinks, as previously referenced artifacts will already be cached on your computer's hard drive.
What happens in the Continuous Integration environment?
Now consider what goes on when Jenkins, or your other preferred Continuous Integration server, comes to build your software. If it doesn't have a local copy of the referenced libraries then it is going to pay the cost of that slow "download the Internet" process every single time it checks out your latest changes and runs a build.
What are the main costs involved here?
- Developer time waiting on the build to complete before moving on to the next change
- Data transfer charges for sourcing from external repositories
Cutting down costs - saving time
What options do we have available for reducing these costs?
- Run a local artifact repository manager that acts as a pass-through cache
- Pre-download the most common artifacts into a build container image
Option 1 would involve the selection and setup of an appropriate artifact repository manager such as Nexus or Artifactory. There's a reasonable chance that if your organisation writes its own reusable libraries then one of these will already be in place to support the distribution of those artifacts, so it may just be a matter of re-configuring it to mirror third-party libraries from external repositories.
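As a rough sketch of the client-side part of that re-configuration, a Maven settings.xml can route all repository requests through the internal proxy (the hostname and repository path here are hypothetical placeholders):

```xml
<!-- ~/.m2/settings.xml on developer machines and CI agents -->
<settings>
  <mirrors>
    <mirror>
      <id>internal-proxy</id>
      <!-- A hypothetical Nexus/Artifactory instance acting as a
           pass-through cache in front of Maven Central and friends -->
      <url>https://nexus.internal.example.com/repository/maven-public/</url>
      <!-- Send every repository request via the proxy -->
      <mirrorOf>*</mirrorOf>
    </mirror>
  </mirrors>
</settings>
```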
Option 2 may seem a bit counter-intuitive, as it goes against the current trend of minimising container image sizes, and to be generally useful the image would need to contain a broader range of artifacts than any one project's build requires.
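To illustrate, here's a minimal sketch of how such an image could be warmed up, assuming a representative warmup-pom.xml that lists the organisation's commonly used dependencies (the base image tag and file names are illustrative):

```dockerfile
# Standard Maven build image as the starting point (tag is illustrative)
FROM maven:3.9-eclipse-temurin-17

# A dummy pom.xml declaring the commonly used dependencies and build
# plugins - not a real application module
COPY warmup-pom.xml /tmp/warmup/pom.xml

# Resolve plugins, direct dependencies, and transitive dependencies into
# the image's local repository so later project builds can find them
# without going out to a remote repository
RUN mvn -f /tmp/warmup/pom.xml dependency:go-offline
```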
Keep it local
For both options the performance improvement comes down to locality of reference. Builds should be able to obtain most, if not all, dependencies without having to go beyond the network of the organisation's private build environment - whether that be a Virtual Private Cloud or a data centre.
With this type of setup in place, builds should spend less time on initial dependency resolution and more time on compilation, running tests, and ultimately making the new known-good version of the code available for use.
If you want to understand the potential time savings on offer here, just try temporarily moving your local development environment's build cache aside and see how long a build takes. For a typical Java microservice I would not be at all surprised if the build time doubled or even tripled, given it has to obtain the build plugin libraries, the application's direct dependencies, and all of the transitive dependencies.
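For a Maven-based project that experiment can be as simple as the following (Gradle users would do the same with ~/.gradle/caches):

```shell
# Park the local Maven cache somewhere safe
mv ~/.m2/repository ~/.m2/repository.parked

# Time a full build against the now-empty cache
time mvn clean verify

# Restore the original cache afterwards
rm -rf ~/.m2/repository
mv ~/.m2/repository.parked ~/.m2/repository
```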
Comments

With the container option, is there an impact on the number of images that get generated to support changing dependency versions? i.e. you regularly update your dependencies within a project for needed features, or more importantly for any required security patches. With a repo proxy (e.g. Nexus) the impact should be localised to the size of the new dependency. With an image, the layer containing the dependencies would change constantly - so its size could vary depending on how you perform the update?
That's a fair point.
The value of the container option would degrade over time if the container isn't regularly updated to pick up the latest versions of dependencies, including transitive dependencies.
The build container image itself could be built on a regular schedule from one or more dummy projects specifying the artifacts, so that something like Renovate can be applied to automatically detect available upgrades.
(There's probably a less hacky way to achieve this, but this was the first thing that popped into my head).
The image size could balloon out if it picked up dependencies that get frequent updates, so something like an AWS client library might result in multiple versions per week.
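A rough sketch of that scheduled rebuild, as a hypothetical Jenkins declarative pipeline (the schedule, image name, and registry are placeholders):

```groovy
pipeline {
    agent any
    // Rebuild the warm-cache image weekly so newly released dependency
    // versions get baked in
    triggers { cron('H 3 * * 1') }
    stages {
        stage('Build and push warm-cache image') {
            steps {
                // The dummy warm-up project's pom.xml is kept current by
                // Renovate raising automated upgrade pull requests
                sh 'docker build -t registry.internal.example.com/build-cache:latest .'
                sh 'docker push registry.internal.example.com/build-cache:latest'
            }
        }
    }
}
```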
If this got out of hand then I'd want to apply a policy to limit the range of older versions that are kept around.