The final stage in the S.C.A.L.E. framework is to evaluate the campaign’s process and measure what worked and, more crucially, what didn’t. You want to be able to replicate successful results and learn from unsuccessful ones. Here are the steps you can use to evaluate, measure, and clarify results.
A framework that helps — S.C.A.L.E.
We developed the S.C.A.L.E. framework to help companies get a fundamental understanding of the strategic ABM process. S.C.A.L.E. stands for:
- Secure ABM program goals
- Choose strategic accounts
- Advance targeting tactics
- Lead a seamless campaign
- Evaluate progress, measure results
This blog post focuses on the “E” of S.C.A.L.E.: Evaluate progress, measure results. It is the fifth and final post in a series explaining our framework; previous posts are linked in the list above.
In the preparation phase we explained how to set your goals. While they should be top of mind throughout the project, now is the time to bring them front and center.
The postmortem meeting
Once the campaign wraps, hold a postmortem meeting, which should cover:
- A review of the whole program
- A performance assessment against those goals
If you exceeded your goals by a wide margin, were the goals ambitious enough? If you missed them by a wide margin, were they too ambitious? What went wrong? Were they the right goals in the first place?
Also cover questions that arose during the program:
- Could anything have been better? (e.g. communications with sales)
- Could the process have been smoother?
- Anything else that should be considered in future programs
The results from this meeting should dictate process and playbook changes going forward, both major and minor. Even small updates compound into big improvements when made continually over time.
If a tactic didn’t work, was the tactic itself the issue, or was the implementation to blame? One way to find out is to run the same tactic three different ways: that isolates the tactic itself, so you don’t discard a sound tactic just because it was implemented badly.
Reporting here is critical to spread what you’ve learned across your teams. Even if you don’t want to do a full presentation documenting your wins and losses, a deck with that information helps keep knowledge in-house even through turnover (“What did we do on the XYZ campaign? Go look at the postmortem deck.”).
Track and report metrics
The simplest way to determine a campaign’s success is to track and report metrics. The most obvious area to analyze is revenue growth. If your closed wins exceeded your goal, review the points mentioned above, then start looking at expanding your campaign. If not, then ask yourself:
- Was your campaign personalized enough?
- Was your targeting too general?
- Did you communicate your services clearly enough?
- Were the tone and content of your messaging appropriate for your target audience?
The second way to test your campaign’s success is to track pipeline growth. The pipeline provides a detailed overview of where leads are within the sales funnel and shows you whether your sales-marketing cadence was effective.
A third indicator of a campaign’s performance is engagement. For example, with an email campaign, you can see how the target interacted with the email.
- Did they open it?
- What was the click-through rate?
- Did they respond to the email?
- Was the response positive or negative?
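The email metrics above reduce to a few simple ratios. As a minimal sketch (the function and field names are illustrative, not from any particular email platform):

```python
def email_engagement(sent, opened, clicked, replied):
    """Compute basic engagement ratios for an email campaign.

    All arguments are raw counts. Open and reply rates are fractions
    of emails sent; click-through rate is a fraction of emails opened.
    """
    return {
        "open_rate": opened / sent,
        "click_through_rate": clicked / opened if opened else 0.0,
        "reply_rate": replied / sent,
    }

# Illustrative numbers, not benchmarks:
stats = email_engagement(sent=500, opened=150, clicked=45, replied=10)
print(stats)  # open_rate 0.3, click_through_rate 0.3, reply_rate 0.02
```

Sentiment (was the response positive or negative?) still needs a human read or a classifier; the ratios just tell you where to look.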
With Folloze boards, you can see how many views your page received, where the traffic came from, how long visitors stayed on a particular page, the number of opt-ins and downloads, and so on.
One more way to evaluate progress is the ROI on ABM investment. The total revenue tied to ABM initiatives, the win rate/closed-wins, and the deal velocity are all key areas to measure when tracking success. As ABM is highly targeted and specialized, you should see results in the number of closes. If your ROI shows that your ABM strategy has been successful, you can use that data to design future campaigns.
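A minimal way to put numbers on ROI and win rate looks like this (the figures below are made up for illustration, not benchmarks):

```python
def abm_roi(revenue, cost):
    """ROI as a percentage: net return divided by program cost."""
    return (revenue - cost) / cost * 100

def win_rate(closed_won, total_opportunities):
    """Share of targeted opportunities that closed as wins."""
    return closed_won / total_opportunities

# Illustrative numbers only:
print(abm_roi(revenue=300_000, cost=100_000))          # 200.0 (% ROI)
print(win_rate(closed_won=12, total_opportunities=40))  # 0.3
```

Deal velocity works the same way: average days from first touch to close, compared across campaigns rather than in isolation.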
Improve by iterations
Marketing performance is iterative by nature, and what you learn from one campaign is transferable to future campaigns. By noting the things that did and didn’t work, you can pinpoint the areas that need more testing.
Take email, for example. B2B buyers receive upward of 60 emails per week from suppliers, so they’re much likelier to engage with messaging that speaks directly to their needs. Language and tone matter. You’ll know from the open rate whether the first email’s subject line works, and from the responses you can infer whether the email created a connection and hit the mark with your prospects. If the open rate is lower than expected, change the subject line of your second email. If there are no responses, ask whether the subject line connected with the copy, or whether you baited and switched (don’t bait-and-switch, no one likes that).
You’ve finished your cadence. If it worked out well, celebrate!
If it didn’t work out well, still celebrate. It’s not that you haven’t won, you just haven’t won YET. Now you know what needs to be fixed, and you can fix it on your next campaign.