
The morning of the redundancy announcement email

A morning in the life of a developer

Code reviews

Blocking for required changes

I checked Bitbucket for fresh pull requests on my team's repositories that required approval before they could be merged and included in a deploy.

One of the changes involved a private function for calculating some dates that included logic based on the current date. The documentation comment appeared to be incomplete so I couldn't quite tell what it was intended to do.

I added a couple of comments, mainly proposing that the date calculation logic be extracted into its own component, with a Clock provided as a dependency, so that we could cover it with tests and control what the value of "now" would be for the Clock component.
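A minimal sketch of the idea, in Python rather than the codebase's own language, with a hypothetical `billing_cutoff` function standing in for the actual date logic under review: instead of the function reaching out for the current date itself, the caller injects a provider for "now", so tests can pin it to a fixed date.

```python
from datetime import date
from typing import Callable

def billing_cutoff(today_provider: Callable[[], date]) -> date:
    """Return a cutoff date relative to an injected notion of 'now'.

    The cutoff rule here (first day of the current month) is purely
    illustrative -- the point is that 'now' comes from the caller.
    """
    today = today_provider()
    return today.replace(day=1)

# Production code passes the real clock...
cutoff = billing_cutoff(date.today)

# ...while a test pins "now" to a fixed date for deterministic assertions.
fixed = billing_cutoff(lambda: date(2024, 2, 29))
assert fixed == date(2024, 2, 1)
```

In a JVM codebase the same shape falls out of injecting `java.time.Clock` and calling `Clock.fixed(...)` in tests, but the principle is identical.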

On this particular day I decided to be a little stricter in my feedback, so I clicked the "Changes required" option, meaning I would have to come back and re-review the pull request before it could progress.

Checking out a branch to see the code in context

For another code review I decided to check out the branch to give it a proper inspection on my work laptop.

This particular codebase was a bit older, so it had the potential to include some legacy pieces. Specifically, I noticed that the dependency configuration mentioned a metrics client that we hadn't been using during either of my stints at Atlassian.

There were no references to the dependency in the code, but seeing it in the gradle file meant it was probably still being bundled into the service and deployed.

I didn't even get around to making a little "note to self" for that, so maybe someone on the team will pick up on that some time later.

Post incident reviews

My department has a regularly scheduled session for developers to build up experience and awareness of how to address incidents that arise across our services. It is a chance to learn from others' mistakes and to give input so that appropriate mitigations are applied, ensuring the specific system(s) involved in a recent incident aren't impacted the same way again.

As I had a meeting clash on this particular day, I decided to have an early read-through of the incident summaries and provide my comments in advance of the normal meeting schedule.

My main contribution on this occasion was to query whether a feature flag lookup had included an id value, as that should have ensured that when multiple feature flag lookups were involved in the processing, they would all have returned the same result.
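The point about the id value can be sketched as deterministic bucketing. This is not the actual feature flag system involved in the incident, just an illustration of the general technique: hashing a stable id means every lookup for that id during one piece of processing lands in the same bucket, so repeated evaluations agree.

```python
import hashlib

def flag_enabled(flag_name: str, entity_id: str, rollout_percent: int) -> bool:
    """Deterministically decide a percentage rollout for a given id.

    Hashing flag name + id gives a stable bucket in [0, 100), so every
    lookup with the same id returns the same answer -- unlike a plain
    random roll, which could flip between lookups mid-request.
    """
    digest = hashlib.sha256(f"{flag_name}:{entity_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent

# The same id always yields the same answer, however many lookups occur.
first = flag_enabled("new-pipeline", "tenant-42", 50)
assert all(flag_enabled("new-pipeline", "tenant-42", 50) == first for _ in range(10))
```

Omitting the id (or passing a fresh random value each time) is exactly the failure mode that lets two lookups in one request disagree.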

Preparing to upload synthetic data to S3

The story that I was currently working on involved assessing the performance of Amazon Athena for different data formats when a large volume of data is involved.

The previous day I had guided an AI agent to create some Python scripts that would generate files to resemble an S3 inventory report as CSV or Parquet.

The functionality was quite impressive, even if I do say so myself:
- Random generation of the object key (file path)
- Control of the seed for the random generator, so CSV and Parquet would have like-for-like data
- Parallelism to support generating multiple files at the same time
- Generation of a manifest file with a representative path
- A script for uploading to S3, also supporting parallelism
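The seeding trick is the piece worth calling out. A minimal sketch, with hypothetical field names rather than the real inventory schema: seeding a dedicated `random.Random` instance means two independent runs (one feeding a CSV writer, one a Parquet writer) emit identical rows, so the Athena comparison measures the format, not the data.

```python
import random

def inventory_rows(seed: int, count: int):
    """Yield synthetic S3-inventory-style rows.

    A dedicated Random instance (rather than the module-level one)
    keeps the stream reproducible even under parallel generation:
    the same seed always produces the same rows.
    """
    rng = random.Random(seed)
    for i in range(count):
        key = f"data/{rng.randrange(16**8):08x}/object-{i}.bin"
        size = rng.randrange(1, 10**9)
        yield {"bucket": "synthetic-bucket", "key": key, "size": size}

# Two separately constructed generators with the same seed agree row-for-row,
# so CSV and Parquet outputs built from them contain like-for-like data.
assert list(inventory_rows(7, 3)) == list(inventory_rows(7, 3))
```

Per-file seeds (e.g. base seed plus file index) extend the same guarantee to the parallel case, since each worker owns its own generator.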

This particular experiment was deemed necessary because our AI agents were not able to give us solid data for their estimates of the performance differences between the data formats. Moving from CSV to Parquet would be a blocker on an existing implementation being usable.

Then things got interesting

A team mate posted a link to Mike's blog post in one of our team Slack channels.
I started reading the blog post, then checked my email...

Ruh roh...

The initial sentences mentioned "may be impacted" in bold, then the content mentioned that access to various systems would be cut off, so it dawned on me that this time my employment with Atlassian would be over.

I didn't get a chance to check in those Python scripts because the company had made the understandable decision to cut off access to Bitbucket.

I had access to the Atlassian Slack from my personal device for most of the rest of the day, so I could let my team know about my situation.

I got several nice mentions in the team channel and in direct messages from my teammates and managers. It came as much of a shock to them as it did to me.

When I went on LinkedIn I started to see a few familiar names showing up mentioning how they too had been caught up in this round of redundancies.

It was kind of reassuring to see the names of people who had been significant contributors - so it's not as though I was identified as dead wood and cut based on performance. Apparently there are about 1,600 of us in this situation.
