Looking outside the service boundary

Help others to help yourself

This post is about how it sometimes pays to take a look beyond the services your team owns, so that you have a deeper understanding of the operating context and can have confidence in the performance and robustness of the implementation.

I wouldn't claim to be an expert in anything, but sometimes my extra pair of eyes picks up on an opportunity to make a small change and get a significant benefit.

Database queries 

Back when I was operating in an environment where teams had access to the logs and metrics of other teams' services, I could dip into what was going on when my login service was hitting timeouts from a dependency.

Based on the details in the logs, the culprit seemed to be delays from a database query. Sure enough, the database was missing an index for the most common query pattern, so as we scaled up from a few hundred users to a few thousand, the query times grew until they started exceeding our timeout.
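The effect of a missing index is easy to reproduce. Here is a minimal sketch using SQLite; the actual schema and query from that system aren't shown in this post, so the table and column names below are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, name TEXT)")
conn.executemany(
    "INSERT INTO users (email, name) VALUES (?, ?)",
    ((f"user{i}@example.com", f"user {i}") for i in range(5000)),
)

# Without an index, the common lookup-by-email pattern scans the whole table.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?",
    ("user42@example.com",),
).fetchall()
print(plan_before)  # detail column reports a full scan, e.g. "SCAN users"

# Adding an index on the queried column turns the scan into an index lookup.
conn.execute("CREATE INDEX idx_users_email ON users (email)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?",
    ("user42@example.com",),
).fetchall()
print(plan_after)  # detail column now mentions idx_users_email
```

On a few hundred rows the scan is invisible; at a few thousand (and growing) it is the difference between a lookup and a timeout.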

Default configuration options don't always match what has been in place in previous setups. A migration from a hand-configured setup to one using infrastructure as code can silently drop settings that were only ever applied by hand.

Logging of ElasticSearch slow queries

In AWS at least, there is an option to have ElasticSearch log the slowest queries that it encounters. This can feed into an evaluation of whether the data or the query needs to be adjusted to reach acceptable, or even optimal, performance.

I was involved in a project where an existing ElasticSearch setup needed to be migrated to a new cluster with a less click-ops approach to the configuration. When I took a look into the new setup I noticed that slow query logging was not enabled, so I alerted the other team and they were able to adjust the config before we needed it.
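For context, enabling this involves two parts: in AWS, the domain has to be configured to publish slow logs to CloudWatch, and then ElasticSearch itself needs index-level thresholds set, since they default to off. A sketch of the kind of settings involved, sent as the body of a `PUT <index>/_settings` request (the threshold values here are illustrative, not the ones from the actual project):

```json
{
  "index.search.slowlog.threshold.query.warn": "10s",
  "index.search.slowlog.threshold.query.info": "5s",
  "index.search.slowlog.threshold.fetch.warn": "1s"
}
```

It's exactly this second, per-index part that is easy to lose in a migration, because it lives outside the cluster-provisioning code.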

Finding a root cause during a production incident 

On one occasion I followed the incident call and chat while an incident was underway in a system that impacted the workflow of every developer in the company, and no root cause had been established.

There were a few pages of logs to look at, so it took a while to isolate what was relevant to the situation.

Without going into too much detail, it turned out that a Docker container that was part of the deployment process was not up to date. This was not a simple case of a team not keeping up with the latest available updates, but actually a situation where the third party developers had switched away from the particular distribution that was involved.

Automatically picking up and applying updated versions would not have helped in that situation: the distribution switch was effectively a fork, so it would not have been easy to detect automatically.
