How to complete a website shutdown with a split migration
Shutting down a website is easy. But having to do it along with redirects can get complicated. Here’s what you can do about it.
Ludwig Makhyan on October 11, 2022 at 6:00 am | Reading time: 7 minutes
Recently, I was asked by a major global brand to support a website shutdown project. The site wasn’t generating enough revenue to sustain itself and turn a profit. The same global brand also owns multiple similar companies in the niche.
Shutting down a website is easy, but when you have to do it along with redirects, what do you do and how?
One of the requirements was to make the shutdown smooth and redirect half of the traffic to one website, and the other half to another. The determining factor was the type of product.
Why is it called a ‘split migration’?
A website migration is a detailed, procedural process that often involves changing the domain and URLs, redesigning or replatforming. The process is tedious even on a small site, and the issues you can run into only multiply with each additional page.
Now, imagine a site with thousands of pages or products that you need to include in the migration. Add in the complexity of having to migrate portions of a single site between two other sites.
How do you handle thousands of pages and products when you have to move them to two different and existing websites?
I had never done a split migration before this point, but I had a general idea of where to start and how to begin the process.
Unfortunately, migrations are more than a simple redirect. Every migration comes with its difficulties, but this particular one was a little easier because it included only URL redirects from the source to the destination page(s).
However, the process would be more painstaking for a normal migration or replatform that includes:
- 404 errors
- Existing redirects
In terms of quality assurance (QA), a basic URL redirect is easier to test and verify. Still, we went through multiple phases to make the process go smoothly, starting with “discovery.”
I knew that my biggest challenge was to identify the existing pages on all three websites. For privacy, intellectual property and other legal reasons, let’s call them Source, Destination1 and Destination2.
I started off with a Screaming Frog crawl of the source website to identify all indexable pages. To ease decision-making, I also pulled Google Analytics and Google Search Console data via their APIs.
But I was still missing another piece of data. I didn’t have Semrush API access, so I had to pull the backlink data separately. Once you have the list of URLs you need to work with, make sure to pull backlink data as well.
You can change this to Ahrefs or another source you prefer. The point here is to know how important a URL is in terms of backlinks.
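With the crawl, Search Console and backlink exports in hand, the goal is a single prioritized list. Here is a minimal sketch of that merge step; the `gsc_clicks` and `backlink_counts` dictionaries stand in for whatever your GSC and Semrush/Ahrefs exports contain, and the weighting is an arbitrary assumption you should tune to your own site:

```python
def build_priority_list(crawl_urls, gsc_clicks, backlink_counts):
    """Rank crawled URLs so pages with traffic or backlinks migrate first.

    crawl_urls: list of URLs from the Screaming Frog export.
    gsc_clicks: {url: clicks} from a Search Console export.
    backlink_counts: {url: referring links} from Semrush/Ahrefs.
    """
    rows = []
    for url in crawl_urls:
        clicks = gsc_clicks.get(url, 0)
        links = backlink_counts.get(url, 0)
        # Hypothetical weighting: a backlink counts 10x a click. Tune to taste.
        score = clicks + 10 * links
        rows.append({"url": url, "clicks": clicks, "backlinks": links, "score": score})
    return sorted(rows, key=lambda r: r["score"], reverse=True)
```

URLs with a score of zero are the natural candidates to filter out before the owners review the list.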
Now that I had collected all the source site URLs I had to work with, I needed a few things to make the process smoother:
- Clean up the list to exclude URL parameters or other duplicates.
- Identify the owner of each page (which site it should go to: Destination1 or Destination2).
- What is the priority of the page? (Does it have direct traffic? Is it ranking in Google? Does it have backlinks?)
- What is the status code of the page? (Is it a 200 or a 301 or a 404?)
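The first cleanup step above can be sketched as a small normalization pass. This is an illustrative snippet, not the exact method used in the project; it assumes parameter variants of a URL should collapse into one canonical entry:

```python
from urllib.parse import urlsplit, urlunsplit

def normalize(url):
    """Drop the query string and fragment so parameter variants collapse."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

def dedupe_urls(urls):
    """Return the list with parameterized duplicates removed, order preserved."""
    seen = set()
    out = []
    for url in urls:
        clean = normalize(url)
        if clean not in seen:
            seen.add(clean)
            out.append(clean)
    return out
```

Status codes and traffic data can then be joined onto the deduplicated list rather than the raw crawl, which keeps the owners’ review sheet manageable.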
I put my Screaming Frog data and Semrush data into separate sheets and created a Google Sheets file.
Initially, I shared the file with all involved parties and owners, which allowed them to select the URLs that belong to each destination. Remember that each URL may have a counterpart on each of those websites.
For example, a specific product existed on both the Source and a Destination site, so the final redirect should point to the equivalent page at the Destination URL.
You may also choose to pass URL parameters, so make sure the Destination is properly filled in.
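A redirect lookup that optionally carries the query string over can be sketched like this. The `mapping` dictionary is a stand-in for the Destination column in the sheet, and the `pass_params` flag is a hypothetical name for the "pass parameters" choice described above:

```python
from urllib.parse import urlsplit

def redirect_target(source_url, mapping, pass_params=False):
    """Look up the destination for a source URL; optionally keep the query string.

    mapping: {source_url_without_params: destination_url}, i.e. the
    Source -> Destination columns from the migration sheet.
    """
    parts = urlsplit(source_url)
    base = f"{parts.scheme}://{parts.netloc}{parts.path}"
    dest = mapping.get(base)
    if dest and pass_params and parts.query:
        return f"{dest}?{parts.query}"
    return dest  # None means the URL has no mapped destination yet
```

Any URL that returns `None` here is a gap in the sheet: either an owner has not claimed it, or the Destination cell was left blank.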
In my task, I was working with 18,000 URLs, and after filtering out those that weren’t important, I ended up with 11,000.
In most cases, you’d have to test this on a temporary server, which may or may not use your actual Source URLs – for example, it may be dev.source.com instead of the live domain. You can easily replace your source URLs temporarily, or use a duplicate of the file.
When testing, make sure you use the same configuration you used during discovery.
Updating the Screaming Frog export in the SF-Internal ALL sheet will automatically adjust the data in the URLs sheet and show you the ones that have a failed destination URL.
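Rewriting production URLs to their staging equivalents for pre-launch testing can be done with a one-line host swap. This is a sketch under the assumption that the temporary server mirrors the production paths on a different hostname (the `dev.source.com` example above):

```python
from urllib.parse import urlsplit, urlunsplit

def to_staging(url, staging_host="dev.source.com"):
    """Rewrite a production URL to its staging equivalent for testing.

    Assumes the staging server serves the same paths under a different
    hostname; staging_host is an example, not a real server.
    """
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, staging_host, parts.path, parts.query, parts.fragment))
```

Feeding the rewritten list into a Screaming Frog list crawl (or any bulk HTTP checker) lets you verify the redirect rules before they ever touch the live site.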
But what went wrong?
Every migration is a learning experience. If you’ve ever performed a migration, you know that a single mistake can lead to 404 errors, images not loading or other issues. Thankfully, the biggest issue we had was with a global server-side cache impacting the testing process.
This migration’s QA was focused primarily on checking the 301 redirects.
We spent more time on the discovery phase than the migration itself.
One takeaway from this particular migration is that you really need to learn the ins and outs of a site. Working with large teams takes a lot of time: you have to learn who owns each page and how to manage both the pages and their owners.
Nothing particularly concerning impeded the migration, but we attribute this to the immense amount of time spent in the discovery phase. If we hadn’t dedicated so much time to this phase, it could have ended badly.
Discovery allowed us to be ready if there were issues during migration so that we could call in the right people and have the error fixed on the spot.
Using the template helped us a lot.
This split migration template will be updated as new and similar projects come my way. Please make your own copy rather than requesting access, and feel free to suggest improvements as you work with it.
Our migration was a success: it took us five minutes to go live (including clearing all global caches) and just 15 minutes to run a list crawl confirming all the redirects were functioning properly.
Defining “success” is something that every migration needs. In most migrations, there always seems to be something that goes wrong. Even when you spend extra time in the setup phase, there’s always a small detail that is overlooked.
This migration’s success was admittedly easier than others performed in the past because the only real QA required was ensuring the redirects worked. Automation helped here to verify that all redirects were successful.
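The automated redirect check boils down to comparing what the server actually did against the migration sheet. A minimal sketch, assuming you have already fetched each source URL and recorded its status code and final location (for example with a list crawl export):

```python
def audit_redirects(results, expected):
    """Find source URLs whose redirect missed the mapped destination.

    results:  {source_url: (status_code, final_url)} as observed after launch.
    expected: {source_url: destination_url} from the migration sheet.
    Returns the list of failing source URLs.
    """
    failures = []
    for src, dest in expected.items():
        status, final = results.get(src, (None, None))
        # A success is a 301 that lands exactly on the mapped destination.
        if status != 301 or final != dest:
            failures.append(src)
    return failures
```

An empty failure list is the signal that the redirect portion of the migration can be signed off; anything else goes straight back to the page owner identified during discovery.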
However, that’s only part of the story. We also waited for the following before deeming the project a win:
- Source site pages started to slowly be de-indexed.
- Destination page rankings began to increase.
We did run into a problem with a server-side cache that impacted the verification process and needed to be cleared a few times. Global caches always seem to pose issues during migrations, so this is definitely something to consider when working on a regular or split migration in the future.
Within one day we started seeing updates in Google’s search results and many of the destination pages spiked because of targeted redirects.
Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.