Automation won't pick up some version upgrades

Introduction

This post is about situations where software components, commonly imported as part of assembling production systems, can slip outside of the normal expected path for detecting the availability of version upgrades and applying them.

A couple of examples of systems that can be set up to detect when new versions of dependencies are available are:

Renovate

Dependabot
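
As an illustration of the kind of automation these tools provide, here is a minimal sketch of a Dependabot configuration that watches both Docker base images and Maven dependencies. The ecosystems, directories and schedule shown are illustrative choices, not a recommendation:

```yaml
# .github/dependabot.yml - minimal sketch covering the two ecosystems
# discussed in this post (Docker images and Maven artifacts).
version: 2
updates:
  - package-ecosystem: "docker"
    directory: "/"          # where the Dockerfile lives
    schedule:
      interval: "weekly"
  - package-ecosystem: "maven"
    directory: "/"          # where the pom.xml lives
    schedule:
      interval: "weekly"
```

The catch, as the examples below show, is that this style of automation only tracks new versions of the *same* artifact or tag pattern - it has no way to notice that releases have moved somewhere else.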

Examples of dependency changes

When a base Docker image went distroless

When new versions stopped being released for the alpine variant of the envoyproxy Docker image, the automation had nothing in place to detect that and raise it as a potential issue.

I came across this when a production issue came up in another team's core infrastructure service. Since my team was going to be blocked until the incident was resolved, I followed the online chat discussion, checked some logs, did some Googling, and established that the error being seen should have been fixed by a version of envoy that had been available for several months.

It took an hour or so to join the dots and establish that the "latest" version being deployed was several releases behind envoy, because the deployment had not been updated to align with the project's decision to stop supporting a particular Linux distribution in favour of a distroless approach.
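
The underlying gap is easy to state: the automation tracked "the newest tag matching our pattern", not "is our pattern still receiving releases at all?". A sketch of the missing check, using a hypothetical tag list (the tag names here are illustrative, not real envoy tags - in practice you would fetch the list from the registry API):

```shell
# Hypothetical published tags for an image whose -alpine variant was dropped.
tags="v1.20.0
v1.20.0-alpine
v1.21.0
v1.22.0"

# Newest release overall, versus the newest release that still has an
# -alpine variant published alongside it.
latest=$(printf '%s\n' "$tags" | grep -v -e '-alpine' | sort -V | tail -n 1)
latest_alpine=$(printf '%s\n' "$tags" | grep -e '-alpine' | sort -V | tail -n 1 | sed 's/-alpine$//')

# If the variant has fallen behind the main release line, something has
# changed upstream and a human should look at it.
if [ "$latest" != "$latest_alpine" ]; then
  echo "WARNING: -alpine variant stuck at $latest_alpine, newest release is $latest"
fi
```

Running this against the sample list prints the warning, which is exactly the signal that was missing in the incident above: the -alpine line had quietly stopped at an old version while mainline releases carried on.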

Change of maven artifact name

For Java applications, Maven packaging has become a de facto standard for managing the libraries that are brought in to support functionality within a service or application.

An example of an artifact that changed its name as part of a major version upgrade is Apache commons-lang, which moved over to commons-lang3.

I can't recall any particular problem arising from running with commons-lang, but I wouldn't like to see it as a dependency in my codebase, given that its most recent release was back in 2011, more than 14 years ago.
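
The rename is why automation goes quiet here: the old coordinates still resolve from Maven Central, they just never see another release, and a version-bump tool watching them has nothing to report. The new major version lives under entirely different coordinates (the 3.x version number shown is illustrative; check for the current release):

```xml
<!-- Old coordinates: still resolves, but the line ended at 2.6 (2011). -->
<dependency>
  <groupId>commons-lang</groupId>
  <artifactId>commons-lang</artifactId>
  <version>2.6</version>
</dependency>

<!-- New coordinates: releases continue here, under a different
     groupId AND artifactId, so tooling treats it as a different artifact. -->
<dependency>
  <groupId>org.apache.commons</groupId>
  <artifactId>commons-lang3</artifactId>
  <version>3.14.0</version>
</dependency>
```

Because the group and artifact both changed (and the Java package moved from org.apache.commons.lang to org.apache.commons.lang3), migrating is a deliberate code change, not something a dependency-update bot will do for you.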

So how can we stay up to date?

In my view, the best way to reduce dependency management overhead is to minimise dependencies in the first place. Carefully weigh up the value that is being added when you bring in any dependency:

  • Does it bring along a bunch of transitive dependencies? Is it worth it?
  • Could the same be achieved with a couple of extra classes directly in our codebase?

As software bills of materials become commonplace and greater attention is focussed on the software supply chain, I believe it will become more common for organisations to have centralised tooling in place to surface the use of out-of-date artifacts.
