Introduction
These days I consider myself mainly a developer, but one with a solid background in getting services and apps operational.
This post is intended to focus on some of the work that I have done so far in my career that I would consider as being more focussed on the operations side of technology-driven projects. It can be read as background to my appreciation of the DevOps approach to developing and deploying applications.
If you came to read about my medical history then that is also covered here: I've never had an operation.
University days
Personal hardware
Back in the 90s, PCs were still the dominant home computer, and as my brother was into electronics and electrical engineering I went down the path of assembling my first proper computers from components. The level of component that I'm referring to here is consumer-accessible ready-made modules rather than individual chips and circuit boards. I never took up a soldering iron in anger.
So, when I had enough of my own money (being a poor student, money was not abundant), I selected a case, hard drive, motherboard, CPU, memory, video card, and sound card and went about combining them to form the basis of a working computer.
Back then the Internet wasn't common in every home, so I didn't need a modem or network card to start with. The display was a CRT monitor that I had received as a hand-me-down when a member of the family upgraded their system.
Unix, shell scripting and cgi-bin (server side "dynamic" web)
Around the same time I was learning some C programming and various useful utilities on Solaris at university, both as part of my coursework and out of general curiosity about how to get things done.
I was very interested in how the web worked, and managed to set up some cgi scripts that would run under my home directory on one of the computer science department's servers, though I think this didn't get much beyond calling a random number generator to determine which image to show from a sub-directory on the server.
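The mechanism behind cgi-bin was pleasingly simple: the web server executed the script and sent whatever it printed back to the browser. Here is a minimal sketch in Python of that random-image trick - the directory and image names are invented for illustration, and my originals on the department's Solaris box would not have looked exactly like this:

```python
#!/usr/bin/env python3
# Minimal CGI sketch: pick a random image and emit an HTML page showing it.
# IMAGE_DIR and the filenames are hypothetical, not the originals.
import random

IMAGE_DIR = "images"  # sub-directory on the server holding the candidates
IMAGES = ["sunrise.gif", "campus.gif", "lab.gif"]

def render_page() -> str:
    """Return a complete CGI response: header, blank line, then HTML body."""
    choice = random.choice(IMAGES)
    body = f'<html><body><img src="{IMAGE_DIR}/{choice}"></body></html>'
    return "Content-Type: text/html\r\n\r\n" + body

if __name__ == "__main__":
    print(render_page())
```

The web server would map a request for the script's URL to executing this file under my home directory, which is all the "dynamic" behaviour amounted to.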
Startup Systems Administration
Setting up Workstations
Being one of the first employees at a growing company meant that I was able to get involved in configuring the workstations of the new employees. Back then that involved installing the OS (Windows NT 4, later Windows 2000) and a few core applications: virus scanner, current version of Java SDK, Perl, IIS, MS Office. We were quite a small-scale operation of fewer than a dozen employees for most of the time, so there was never any great need to automate this work.
Configuring Linux Servers
During the day to day work I would pair up with a colleague when he needed to update some of our critical infrastructure software for the systems that we hosted on our servers. Sendmail and BIND seemed to need updates relatively often - probably every month or every other month. Back then we had a few customisations in our installation so an upgrade involved downloading the latest source code and building it from configure scripts and makefiles with specific options and configuration files applied.
Later on I needed to apply the same experience to build a PostgreSQL server with the PostGIS extension compiled in for use on a property auction website - enabling the site to present an up-to-date visual representation of the sold / available status of each property on the map. The build of the database server was a bit last minute, as we realised that a flat file representation of the data wasn't going to work properly on Linux (and I already had my reservations about that approach being suitable for multiple concurrent updates).
Another significant piece of work involved upgrading and migrating our internally hosted IMAP email server to a more powerful server, so I went ahead and locked down access rights to sensitive company folders at around the same time - not because anyone in the company wasn't trustworthy, but as general best practice for responsible business data handling. There were a few gotchas when setting up new email folder permissions after that, but the data migration and version upgrade went smoothly.
Configuring Windows Servers
Historically we had hosted several client websites using Microsoft's Internet Information Services - which acted as the runtime environment for Active Server Pages (commonly referred to as "ASP pages"). I took care of setting up IIS and locking down various optional features based on security recommendations.
I used to regularly monitor Slashdot, securityfocus.com and various other sites to keep up to date with the latest vulnerabilities and emerging issues, so that they could be addressed as proactively as possible. I also routinely reviewed the content being uploaded by our clients, for example ensuring that they weren't slipping into outdated practices for making database credentials available to ASP pages that happened to involve database queries.
Database Access Rights
To minimise the potential risk of compromised database credentials being exploited, all of our databases had table-level permissions locked down to the absolute minimum rights required by each client application. We didn't quite split responsibilities up to have reads handled separately from writes or deletes, as we were still developing monoliths - which was fine for the scale that we were operating at.
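To illustrate the idea - with hypothetical role and table names, not our actual schema - each application account would receive one narrowly-scoped grant per table. A small Python sketch that generates such statements:

```python
# Illustrative sketch of table-level least-privilege grants.
# "web_app", "products" and "orders" are hypothetical names; the principle
# is that each application account is granted only the statements it
# actually issues, per table.

def grants_for(role: str, table_rights: dict[str, list[str]]) -> list[str]:
    """Build one GRANT statement per table, limited to the listed rights."""
    return [
        f"GRANT {', '.join(rights)} ON {table} TO {role};"
        for table, rights in sorted(table_rights.items())
    ]

# The public website account can read the catalogue but only insert orders:
statements = grants_for("web_app", {
    "products": ["SELECT"],
    "orders": ["SELECT", "INSERT"],
})
```

With grants like these, even stolen credentials can't be used to drop tables or rewrite reference data - the database itself refuses.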
Continuous Integration - GoCD plugin
While I was working at Springer (later known as Springer Nature) I gained some hands-on experience with two technologies that worked well together: ThoughtWorks' GoCD and Pivotal's Cloud Foundry.
When ThoughtWorks announced a competition to bolster the range of plugins available for GoCD, I decided to have a go at producing some open source software that would give teams using the combination of GoCD and Cloud Foundry a small additional capability: automated detection of updates in Cloud Foundry.
My plugin won first prize in the competition, but soon afterwards the somewhat clunky GoCD plugins API was given a significant overhaul, meaning that my plugin would not work in later versions of GoCD.
Microservices - Infrastructure As Code
Deploying Microservices
Microservices often need some supporting infrastructure beyond their unit of deployment, such as an object store (e.g. S3), a database, or a persistent shared cache. All of the major cloud providers offer these types of services and expose interfaces that allow us to create them programmatically. My more recent operations experience has been strongly oriented towards automating the provisioning and configuration of such infrastructure components.
The more mature organisations that I have worked in have had systems such as Ansible and Terraform in place to allow developers to specify the expectations of the available supporting services and have any updates be provisioned and applied automatically, sometimes as part of the release process for a new version of a service.
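The core of that declarative model can be sketched as a desired-versus-actual comparison. This toy Python example - resource names invented, no real cloud APIs involved - computes the kind of change set that a tool such as Terraform would plan before applying:

```python
# Toy sketch of the declarative model behind infrastructure-as-code tools:
# compare the desired state against the actual state and derive the
# changes to apply. Resource names below are hypothetical.

def plan(desired: dict[str, dict], actual: dict[str, dict]) -> dict[str, list[str]]:
    """Return which resources to create, update, or delete."""
    return {
        "create": [name for name in desired if name not in actual],
        "update": [name for name in desired
                   if name in actual and desired[name] != actual[name]],
        "delete": [name for name in actual if name not in desired],
    }

desired = {"orders-bucket": {"versioning": True}, "cache": {"size": "1gb"}}
actual = {"cache": {"size": "512mb"}, "old-queue": {}}
changes = plan(desired, actual)
# -> create "orders-bucket", update "cache", delete "old-queue"
```

The appeal for developers is that the specification lives in version control next to the service code, so infrastructure changes go through the same review and release process as everything else.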
Feature Toggles, Not Feature Branches
When it comes to significant changes to infrastructure as part of a release I've found that it is better to aim for a "roll forward" strategy in the event of any unexpected side-effects rather than expecting to be able to roll back. This can involve something as simple as toggling the new feature off to give the team enough time to properly diagnose what has gone wrong. The alternative might involve removal of the newly created infrastructure, which could hide the issue and delay resolution.
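A sketch of what that looks like in application code - the toggle store, feature name, and storage paths here are all hypothetical:

```python
# Minimal feature-toggle sketch for a "roll forward" response: instead of
# tearing down new infrastructure, disable the code path that uses it while
# the team diagnoses the problem. All names below are illustrative.

TOGGLES = {"use_new_object_store": True}  # e.g. loaded from config at startup

def upload_to_new_store(data: bytes) -> str:
    return "s3://reports/latest"  # placeholder for the new infrastructure path

def write_to_legacy_disk(data: bytes) -> str:
    return "/var/reports/latest"  # placeholder for the known-good fallback

def save_report(data: bytes) -> str:
    if TOGGLES.get("use_new_object_store"):
        return upload_to_new_store(data)
    return write_to_legacy_disk(data)

# Incident response: flip the toggle off; no infrastructure teardown needed,
# and the new components stay in place for diagnosis.
TOGGLES["use_new_object_store"] = False
```

Because the new infrastructure is left standing, the team can inspect it in its failed state rather than reasoning about something that has already been deleted.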
Where To From Here?
At this point in time (October 2021) I'm at a potential fork in the road career-wise: I could continue to be a developer of services for end users and service integrations, switch over to join a team more focused on enabling developers - sometimes referred to as "Platform engineering" - or move even further towards the metal and get involved in developing the tools and services that underpin platforms.