Technical leadership during large refactors

May 9, 2022

A Long Time Ago™, we started a project at work to move our API serialisers from ActiveModelSerializers to Blueprinter. This kind of thing is hard. Hard to migrate a bunch of existing serialisers in prod on high-traffic endpoints and be confident nothing will blow up in your face. Hard to keep people motivated to migrate individual files and get things done. Hard to be sure that you've communicated the change correctly and that no one is left introducing more technical debt while you migrate.

These are all issues we had to deal with during our migration. It languished for a long time in a kind of limbo: we had started using the new library, but an extensive collection of legacy serialisers sat gathering dust, unmigrated because they were in critical or commonly used paths. There was a general trepidation around getting in there and changing things over.

I chatted to team members who had already contributed to the migration, reflected on my own experiences, and identified some common concerns. In this article, I'll go through how we addressed the main hurdles to getting the project completed:

  • It wasn't easy to maintain momentum because it wasn't clear what the end goal was.
  • Even with the existing tests, people weren't confident that their changes would go out to production and everything would be fine.
  • It was unclear where to start or how to get involved.

Setting goal posts and building momentum

There were two immediate issues I wanted to deal with before approaching the problem of actually migrating any files over.

  • How do we know when we've finished the work?
  • How do we know people aren't adding more technical debt with the old library while we're migrating?

Both of these were important problems to solve. It's hard to motivate folks to finish a job when they don't know how much work is left, what the end state looks like, or, in this case, when it's safe to remove the old library. We also wanted to remove as much friction as possible for other developers writing serialisers during this period. No one wants to be confused about which bit of tech to use when starting something, or to have it pointed out in a code review after they've just finished their work.

I was reminded of some work I'd done recently in another project where I used a custom RuboCop cop to solve similar issues. Cops are handy because you can enforce a certain direction in the codebase on an ongoing basis, preventing anyone from doing the thing you're trying to avoid, while also providing more context in the error messages they'll receive. On top of that, the list of offences RuboCop generates for the existing files reflects the body of work we need to get through before we can consider the project completed.

Here is what we ended up with:

# frozen_string_literal: true

require 'rubocop'

module RuboCop
  module Cop
    module Serializers
      class PreferBlueprinter < Base
        BASE_PATTERN = '(const (const nil? :ActiveModel) :Serializer)'
        MSG = 'Use Blueprinter, please!'

        def_node_matcher :class_definition, <<~PATTERN
          (class (const _ _) #{BASE_PATTERN} ...)
        PATTERN

        def on_class(node)
          class_definition(node) do
            # node.children[1] is the superclass constant, so the offence
            # points at the `ActiveModel::Serializer` reference itself.
            add_offense(node.children[1])
          end
        end
      end
    end
  end
end
I'm still getting used to writing these, but this article from Evil Martians has been a big help. The rubocop-rails codebase also had some cops similar to what I wanted to put together. The cop we've written checks whether a class inherits from ActiveModel::Serializer and adds an offence on that line.

Now to get our todo list of classes to migrate. Running bundle exec rubocop --format files --only Serializers/PreferBlueprinter will generate a list of the files that cause offences for our new cop. We can add that list to our .rubocop.yml as an Exclude list for the cop.

Serializers/PreferBlueprinter:
  Exclude:
    - 'app/serializers/old_serializer_a.rb'
    - 'app/serializers/old_serializer_b.rb'
    - 'app/serializers/old_serializer_c.rb'

That list is now how we're going to measure how close we are to being done. Each contribution to the migration will whittle that file down until we don't have anything left on the ignore list. After that, we can do a final PR to remove the cop and old library from our codebase.
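Because the exclude list doubles as the todo list, reporting on progress is just a matter of counting it. A minimal sketch, assuming the cop's entry lives in .rubocop.yml (the config is inlined here; in the app you'd read the file from disk):

```ruby
require 'yaml'

# Count how many serialisers are still waiting to be migrated by reading
# the cop's exclude list out of the RuboCop config.
config = YAML.safe_load(<<~YAML)
  Serializers/PreferBlueprinter:
    Exclude:
      - 'app/serializers/old_serializer_a.rb'
      - 'app/serializers/old_serializer_b.rb'
      - 'app/serializers/old_serializer_c.rb'
YAML

remaining = config.dig('Serializers/PreferBlueprinter', 'Exclude')&.length || 0
puts "#{remaining} serialisers left to migrate"
```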

Raising Confidence

For each desired change, make the change easy (warning: this may be hard), then make the easy change.

Kent Beck

Okay, but we still haven't dealt with the root issue of people's low confidence and fear around changing these load-bearing serialisers. From chatting with the team, I learned two things:

  • People were less concerned about completely breaking things. There were enough tests to catch issues, and while annoying, we'd get alerts about that kind of thing and could roll back.
  • They were more concerned about subtler serialisation issues. How can we be sure that we're not introducing some small difference into the new serialiser that might cause issues?

So what can we do? I thought we could add some form of safety net so that if things did go awry with our changes, we'd a) know about it so we could fix it, and b) not adversely affect production. I landed on a version of dark reading, except here we're comparing the results of two different serialisers rather than reads from two databases. If we detect any differences, we throw away the new output and fall back to the old JSON.

The below helper class is a generalised version of what we ended up with (with extra comments around some of the decisions):

class BlueprinterMigrationFallback
  class BlueprinterValidationMismatch < StandardError
    def initialize(message, diff)
      super(message)
      # Keep only the operator and field name from each diff tuple so we
      # don't log potentially sensitive serialised values.
      @diff = { |operator, field, _| "#{operator} #{field}" }
    end

    attr_reader :diff
  end

  # For the original serialisation, we can just pass in the JSON output as is,
  # but I chose to use a proc for the new serialisation so that we don't go
  # through the work of serialising it for requests where we don't need to.
  def self.validate(ams_json:, blueprinter_proc:, ignore: [])
    # Only compare the two serialisations for a sample of production
    # requests; everywhere else, compare every time.
    sample_rate = Rails.env.production? ? 0.1 : 1.0
    sampled = Random.rand < sample_rate
    return ams_json unless sampled

    blueprinter_json =

    # We found that some serialised fields _could_ be non-deterministic for
    # whatever reason, so we added support to ignore fields from a diff as
    # they would be different with each serialisation.
    diff = Hashdiff.diff(ams_json, blueprinter_json).reject { |_, field, _| ignore.include?(field) }

    # Raise our custom exception if there's a diff between the two versions.
    raise'Resource failed to serialise with Blueprinter', diff) unless diff.empty?

    # Everything's all good, use our new serialisation!
  # Rescuing from StandardError here in case the proc we pass in throws some
  # error we're not expecting.
  rescue StandardError => exception
    # Log the exception + the keys and operators involved in the diff. In our
    # app, this is where we capture the exception in Sentry so we can follow
    # up later.
    raise exception if Rails.env.development? || Rails.env.test?

    # Return the original JSON so production requests are unaffected.
    ams_json
  end
end
It's a pretty simple class, but the safety net of not using the new serialiser unless it matches the old one should raise confidence in making the changes. The tradeoff is that you're doing the work twice for a subset of requests, and that's a potential gotcha depending on what you're serialising and how heavy it is.
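To make the ignore-list behaviour concrete, here's a plain-Ruby sketch of the diff-and-reject step. It mimics Hashdiff's `[operator, field, value]` tuple shape rather than using the gem itself, and the field names are made up:

```ruby
# Two serialisations that agree except for a non-deterministic timestamp.
ams_json         = { 'id' => 1, 'name' => 'Ada', 'generated_at' => '2022-05-09T10:00:00Z' }
blueprinter_json = { 'id' => 1, 'name' => 'Ada', 'generated_at' => '2022-05-09T10:00:01Z' }

# Build ['~', field, new_value] tuples for every field whose value changed,
# mimicking Hashdiff's output shape for modified keys.
diff = (ams_json.keys | blueprinter_json.keys).filter_map do |field|
  ['~', field, blueprinter_json[field]] if ams_json[field] != blueprinter_json[field]
end

# Ignoring the known non-deterministic field leaves an empty diff, so the
# new serialisation would be considered safe to return.
ignore = ['generated_at']
filtered = diff.reject { |_, field, _| ignore.include?(field) }
puts filtered.empty?
```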

The other gotcha is the diffing itself, which can be a long-running step depending on how big the payload is. We got bitten by that in one case where each record on the index action was also serialising thousands of associated records.

Getting people on the same page

Having added the helper class to the codebase and gone through a couple of migrations myself, I had a body of work I could point to as examples for others to follow.

I compiled that info into a document that outlined the steps one would take to migrate one of the old classes, why we approached the problem this way, and some additional use cases other people might run into. The document also reiterated the problem statement: why we were changing serialisers in the first place and what we were hoping to get out of it. The project had been in limbo for long enough that it was worth restating info that might have been lost to the depths of Slack.

How did we go?

None of these changes meant that the work was instantly completed overnight, but they provided a framework for getting the job done. The project is moving along again, and we've made our way through a third of the files we need to migrate (50 of a set of 150).

I'm really pleased with how this has turned out so far, and the process has given me more tools to approach managing a large cross-team change. I'll be sure to report back with anything new I learn as we make our way to the finish line!

Just to wrap up, these were some of the key points we looked at as ways to better manage a large cross-team change:

  • Ensure you can measure and report on progress. You need an end goal in sight and reporting on how you're moving can help keep the momentum up.
  • Minimise risk. Make it safe for people to fail; they'll be more inclined to help out and get involved if they have fewer things to worry about.
  • Document the change. Make sure you're able to articulate what you're doing and why, and what the benefits are.

Let me know if you've had your own experiences or challenges going through change management in a way that gets people on board or makes it easy for others to get involved.