
When to avoid, or allow upserts

Introduction 

A few recent posts on this blog have outlined how we could achieve version-aware upserting of data into various databases. In this post, let's consider situations where that might be an unsuitable approach.

An assumption about ID uniqueness

When we attempt to write an entity into a database, we have an expectation that the attribute or attributes used to uniquely identify that entity can be trusted within the business domain. Let's consider a situation where that assumption has been known to fall down in real production systems.

Generating a value to use as the primary key in a relational database can seem like a solved problem, given that we now have UUIDs that can be generated and passed around for use in our applications and services.

Some earlier implementations of UUID generation combined the MAC address of the machine's network device with the current time to produce a value that should not clash with values generated on other machines.
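That MAC-plus-timestamp scheme survives as version-1 UUIDs, and Python's standard `uuid` module still exposes it. The sketch below - illustrative only, not taken from the original systems discussed here - shows how the node (MAC) and timestamp fields make up the value, and why machines sharing a MAC address (the `node=0x001122334455` value is a made-up example) are left relying on the timestamp and clock-sequence fields alone to stay unique.

```python
import uuid

# uuid1() builds a version-1 UUID from the host's MAC address and a
# 100-nanosecond-resolution timestamp, much like the early schemes above.
value = uuid.uuid1()
print(value.version)    # 1
print(hex(value.node))  # the 48-bit node field, normally the MAC address
print(value.time)       # 60-bit count of 100 ns intervals since 1582-10-15

# Two version-1 UUIDs generated with the same node value (for example, on
# cloned virtual machines that were given identical MAC addresses) differ
# only in their timestamp and clock-sequence fields.
a = uuid.uuid1(node=0x001122334455)
b = uuid.uuid1(node=0x001122334455)
print(a.node == b.node)  # True: the node field no longer differentiates them
```

With the node field fixed, uniqueness rests entirely on the clock - which is exactly the part that breaks down when values are generated at effectively the same instant.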

That uniqueness guarantee turned out to have limitations once the processing speed and concurrency of a machine allowed multiple "unique" values to be generated at effectively the same time, within the resolution of the timestamp.

Another unfortunate way of producing colliding UUID values could occur when virtual machines happened to have been set up with the same MAC address, further increasing the risk of collisions.

For situations where the identifier generator cannot be trusted, we should focus our efforts on recognizing inserts and updates as clearly differentiated operations. 
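As a minimal sketch of what "clearly differentiated operations" can look like, the snippet below uses SQLite in memory as a stand-in for any relational database; the `accounts` table and helper names are hypothetical. A plain INSERT fails loudly on a duplicate key instead of silently overwriting, and a plain UPDATE reports when the target row does not exist - an upsert would mask both signals.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, name TEXT)")

def create_account(conn, account_id, name):
    # A plain INSERT: a duplicate id raises IntegrityError rather than
    # silently replacing a row, surfacing an untrustworthy id generator.
    conn.execute("INSERT INTO accounts (id, name) VALUES (?, ?)",
                 (account_id, name))

def rename_account(conn, account_id, name):
    # A plain UPDATE: touching a row that does not exist is also an error.
    cursor = conn.execute("UPDATE accounts SET name = ? WHERE id = ?",
                          (name, account_id))
    if cursor.rowcount == 0:
        raise LookupError(f"no account with id {account_id}")

create_account(conn, "a-1", "Alice")
try:
    create_account(conn, "a-1", "Mallory")  # second insert, same id
except sqlite3.IntegrityError:
    print("duplicate id rejected")          # prints: duplicate id rejected
```

The point is not the error handling itself, but that a clash becomes an observable event the application can act on, rather than a silent overwrite.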

Trusting a source of truth

It is common for alternative representations of data to exist downstream from an originating system. These may asynchronously apply some transformations or aggregations to produce an entity that is intended for reporting or any manner of follow-on business processing.

In this situation the system is far enough removed from the original data creation that there is little point in expecting inserts and updates to arrive in order.
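When ordering cannot be relied upon, a version-guarded upsert - the kind of version-aware approach covered in the earlier posts - lets the downstream store accept events in whatever order they arrive. A sketch under assumed names (the `reports` table and its columns are made up for illustration), again using SQLite in memory:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE reports (id TEXT PRIMARY KEY, total INTEGER, version INTEGER)"
)

def apply_event(conn, entity_id, total, version):
    # Insert the row if it is new; otherwise only accept the update when the
    # incoming version is higher than the stored one, so a late-arriving
    # older event is quietly ignored.
    conn.execute(
        """
        INSERT INTO reports (id, total, version) VALUES (?, ?, ?)
        ON CONFLICT (id) DO UPDATE
            SET total = excluded.total, version = excluded.version
            WHERE excluded.version > reports.version
        """,
        (entity_id, total, version),
    )

apply_event(conn, "r-1", 10, version=2)
apply_event(conn, "r-1", 99, version=1)  # stale event arriving late
row = conn.execute(
    "SELECT total, version FROM reports WHERE id = 'r-1'"
).fetchone()
print(row)  # (10, 2): the stale update was ignored
```

Here the upsert is doing exactly the job it is suited to: the store converges on the latest version regardless of arrival order, with no need to distinguish creates from updates.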

One size does not fit all

I have made some broad generalizations, but the universal consideration still applies: "It depends". You may find yourself needing a pipeline that can and must differentiate between creating and updating, in which case you can also expect to need firm control over the ordering of those events - perhaps involving Kafka with suitable partitioning and concurrency controls. That's a topic for another post.
