
Vendor lock-in and relational databases

Avoiding features to enable portability

In the early 2000s, the company I was working for got a fright when the database vendor we were tied to changed its approach to charging for software licenses. Having applications on "the Internet" had caught the vendor's attention, and they wanted a way of deriving more revenue from websites.

From then on, all developers on our relatively small team were directed to avoid any non-standard features of that particular database engine, as they would make it more difficult for us to transition to an alternative vendor. For example, stored procedures were only allowed for edge cases such as an advertising engine (yes, we rolled our own advert engine for a customer back in the day).

Keep in mind that this was back when self-hosted physical servers were pretty much the only way to operate, well before cloud computing such as AWS even existed, so it was a big deal to set up a database server, involving direct license agreements with vendors.

Much later in my career, guidance came down from senior management: there was frustration that the cloud vendor could see the broad range of their services we were using, and so felt confident that we were not a flight risk of going to another provider. On that basis we were instructed to either not make use of vendor-specific services, or wrap them in such a way that our services could be ported across to another cloud provider. That tied up quite a bit of developer time, and left us restricting our implementations to the lowest common denominator level of functionality when it came to considerations like messaging systems.

"Standard" functionality doesn't mean it's the same

Looking back after a couple of decades, my recent dabbling with transaction isolation levels has been a real eye opener as to how much relational database engines can differ in how they comply with the standard, even before getting into obvious vendor extensions.
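A well-known example of this divergence is the REPEATABLE READ level. The interleaving below is a sketch of two concurrent sessions (the table and values are hypothetical), where two major engines give different outcomes for the same standard-named setting:

```sql
-- Both sessions run at the "standard" REPEATABLE READ isolation level.

-- Session A
BEGIN;
SELECT balance FROM accounts WHERE id = 1;   -- reads 100

-- Session B, running concurrently
BEGIN;
UPDATE accounts SET balance = 50 WHERE id = 1;
COMMIT;

-- Session A again
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
-- PostgreSQL: fails with "could not serialize access due to
--             concurrent update"; the application must retry.
-- MySQL (InnoDB): waits for B's lock, then applies the update to
--                 the newly committed row, leaving balance = 40.
COMMIT;
```

Both behaviours are defensible readings of the standard, which is exactly why "we only use standard SQL" buys less portability than it promises.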

This has reinforced an opinion that some former colleagues of mine expressed while we were working together on a large cloud transformation project during my time in London: "You'll never end up changing database, so just use whatever it offers."

Don't get me wrong, vendor lock-in is still a risk to keep in mind, but it needs to be weighed up against the opportunity cost of not making use of the differentiating features that could make the product worth paying extra for in the first place.
