
Automation won't pick up some version upgrades

Introduction

This post is about situations where software components that are commonly pulled in when assembling production systems can slip outside the normal, expected path for detecting available version upgrades and applying them.

A couple of examples of tools that can be set up to detect when new versions of dependencies become available:

Renovate

Dependabot
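For illustration, a minimal Dependabot configuration (a dependabot.yml checked in under .github/) that watches both Docker base images and Maven dependencies might look something like this - the ecosystems and schedule are example choices, not a recommendation:

    version: 2
    updates:
      # Watch base images referenced in Dockerfiles
      - package-ecosystem: "docker"
        directory: "/"
        schedule:
          interval: "weekly"
      # Watch Maven dependencies declared in pom.xml
      - package-ecosystem: "maven"
        directory: "/"
        schedule:
          interval: "weekly"

Even with something like this in place, the scenarios below show how an upgrade can still slip past unnoticed.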

Examples of dependency changes

When a base Docker image went distroless

When new versions stopped being released for the alpine distribution of the envoyproxy Docker image, automation had nothing in place to detect that and raise it as a potential issue.

I came across this when a production issue came up in another team's core infrastructure service. Since my team was going to be blocked until the incident was resolved, I followed the online chat discussion, checked some logs, did some Googling and established that the error being seen should have been resolved by a version of envoy that had been available for several months.

It took an hour or so to join the dots and establish that the "latest" version being deployed was several versions behind envoy's releases, because the image reference had not been updated to reflect the project's decision to stop supporting a particular Linux distribution in favor of a distroless approach.
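To make the failure mode concrete, here is a hypothetical Dockerfile fragment - the image names and tags are illustrative rather than the exact ones involved in the incident:

    # "latest" silently froze once the alpine variant stopped receiving
    # new tags - no build failure, no update PR, just growing staleness
    FROM envoyproxy/envoy-alpine:latest

    # Releases continued under a differently-named distroless variant, so
    # automation keyed on the old image name had nothing to report
    # FROM envoyproxy/envoy-distroless:v1.20.0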

Change of Maven artifact name

For Java applications, Maven packaging has become a de facto standard for managing the libraries that need to be brought in to support functionality within a service or application.

An example of an artifact that changed its name as part of a major version upgrade is Apache commons-lang, which moved over to commons-lang3 (under the org.apache.commons groupId).
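In pom.xml terms, the old and new artifacts have entirely different coordinates, so update automation watching the old coordinates will never see a 3.x release. The versions below are purely illustrative:

    <!-- Old coordinates: the 2.x line, where no new releases appear -->
    <dependency>
      <groupId>commons-lang</groupId>
      <artifactId>commons-lang</artifactId>
      <version>2.6</version>
    </dependency>

    <!-- New coordinates after the rename -->
    <dependency>
      <groupId>org.apache.commons</groupId>
      <artifactId>commons-lang3</artifactId>
      <version>3.14.0</version>
    </dependency>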

I can't recall any particular problem arising from running with commons-lang, but I wouldn't like to see commons-lang as a dependency in my codebase - given that its most recent release was back in 2011, more than 14 years ago.

So how can we stay up to date?

In my view, the best way to reduce dependency management overhead is to minimise dependencies in the first place. Carefully weigh up the value that is being added when you bring in any dependency:

  • Does it bring along a bunch of transitive dependencies? Is it worth it?
  • Could the same be achieved with a couple of extra classes directly in our codebase? (See the sketch below.)
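For instance, if the only thing a library would be providing is a blank-string check, a couple of lines of plain Java do the same job on Java 11 and later - this helper class is hypothetical, just to show the shape of the trade-off:

    public final class Strings {

        private Strings() {
        }

        // Roughly equivalent to commons-lang's StringUtils.isBlank
        public static boolean isBlank(String s) {
            return s == null || s.isBlank();
        }
    }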

As software bills of materials become more widespread and greater attention is focussed on the software supply chain, I believe it will become more common for organisations to have centralised tooling in place to surface the use of out-of-date artifacts.
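As one possible starting point, assuming a Maven build and the CycloneDX Maven plugin, a software bill of materials can be generated with a single command - the plugin version shown is illustrative:

    mvn org.cyclonedx:cyclonedx-maven-plugin:2.8.0:makeBom

The resulting BOM file can then be fed into whatever centralised tooling an organisation uses to track artifact freshness.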
