If you’re paying more and getting the same (or worse) results, something’s off. Maybe it’s the market. Maybe it’s your offer. Maybe your provider is coasting. The point is: you don’t fix this with another “strategy refresh” doc and a few new ad creatives.
You fix it by getting brutally specific about what’s working, what isn’t, and how long you’re willing to wait for proof.
One line that’s saved a lot of teams I’ve worked with: marketing isn’t a relationship—you’re buying performance.
Hot take: if they can’t prove impact, they’re not a partner
I don’t care how nice the account manager is. If reporting is vague, attribution is hand-wavy, and every hard question gets answered with “branding takes time,” you’re not getting a service; you’re funding someone’s learning curve. When the work is actually measurable, the results speak for themselves.
Now, this won’t apply to everyone, but… most “stuck” accounts aren’t stuck because the channel died. They’re stuck because the provider stopped testing aggressively and started defending the status quo.
Are you hitting diminishing returns… or just feeling impatient?
Plateaus happen. Campaigns fatigue. Audiences get saturated. CPMs spike. That’s normal.
What isn’t normal: investing more month after month while the provider can’t explain (in plain language) why the numbers moved and what they changed because of it.
Here’s the disciplined way to separate noise from a real decline:
– Market reality check: Are benchmarks shifting in your vertical? (Example: paid social efficiency often compresses when auction competition rises.)
– Initiative mapping: Tie performance changes to specific moves: new landing page, new targeting, new creative, budget shifts.
– Competitor pressure scan: Who’s taking share? What are they promising? Where are they showing up that you aren’t?
– Experiment velocity: How many meaningful tests ran last month? Not “we tweaked copy,” but actual hypothesis-driven tests.
A provider who’s still valuable will welcome this level of scrutiny. The defensive ones get weird fast.
Performance audit: revenue, pipeline, CAC (not vibes)
Vanity metrics are comforting. They’re also the quickest path to paying for marketing that looks busy but doesn’t build the business.
A real audit connects activity to outcomes. Not perfectly. Not magically. But credibly.
What I’d demand to see (and yes, weekly at first)
You want a view that answers:
– What generated pipeline?
– What closed?
– What did it cost to acquire customers (CAC), by channel and segment?
– Where are leads dropping out—and why?
Use an attribution model that matches your sales cycle. If you’re selling enterprise with a 90-day cycle, last-click is basically fiction. If you’re eCommerce, you can often get away with simpler models, but you still need discipline.
And look, attribution will never be perfect. But “we can’t track it” is often code for “we didn’t set it up.”
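The CAC piece of that audit view is simple enough to compute yourself from two exports. A minimal sketch, assuming you can pull monthly spend and closed-won customer counts per channel (channel names and numbers here are made up for illustration):

```python
# Hypothetical monthly exports: spend and closed-won customers per channel.
spend = {"paid_search": 24_000, "paid_social": 18_000, "outbound": 9_000}
new_customers = {"paid_search": 30, "paid_social": 12, "outbound": 9}

def cac_by_channel(spend, customers):
    """Cost to acquire a customer, per channel; None when nothing closed."""
    return {
        ch: (spend[ch] / customers[ch]) if customers.get(ch) else None
        for ch in spend
    }

print(cac_by_channel(spend, new_customers))
# paid_search: 800.0, paid_social: 1500.0, outbound: 1000.0
```

A channel that shows `None` (spend but zero closed customers) is exactly the kind of line item a provider should be explaining, not burying in a blended average.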
One hard data point to calibrate expectations: Gartner has reported that marketing analytics and attribution efforts are frequently hindered by fragmented data and tooling across teams (Gartner, Marketing Data and Analytics research). That’s common. It’s also solvable enough that “shrug” isn’t an acceptable answer.
Pipeline is a business metric, not a marketing metric.
Communication and project management: the silent performance killer
Some provider relationships fail because performance drops. Others fail because the machine around performance is chaos.
You’ll feel it when:
You’re chasing updates.
Not reviewing results. Not making decisions. Just… hunting down basic status info.
Clear communication channels (the “don’t make me guess” test)
If you don’t know who owns what, when you’ll hear from them, and how decisions get documented, you’re already bleeding time. I’ve seen decent marketers look incompetent because their agency had no cadence and everything lived in scattered email threads (a personal pet peeve).
Healthy looks like: defined owners, predictable check-ins, written recaps, and an escalation path that isn’t emotionally loaded.
Efficient project tracking (it’s not optional)
Ask what they use. Ask to see it.
A functioning system should show milestones, blockers, and dependencies in a way you can understand in 30 seconds. If it takes a 45-minute call to figure out why the landing page isn’t live, the system is broken—or they are.
Timely issue resolution
Here’s the thing: responsiveness isn’t replying fast. It’s resolving fast.
The pattern you want is:
1) identify issue
2) assign owner
3) propose fix + timeline
4) confirm outcome
If you keep getting “we’re looking into it” with no owner and no deadline, that’s not service. That’s delay.
Price vs value (because “cheaper” can get expensive)
I’ve watched companies fire a $12k/month agency, hire a $5k/month one, and end up spending more because performance dropped and internal teams had to patch the gap.
Price is what you pay. Value is what you keep.
So compare providers on:
– Outcome accountability: Do they tie work to pipeline/revenue or just “engagement”?
– Scope clarity: What’s included, what isn’t, and what triggers extra fees?
– Risk-sharing: Any performance incentives? Any commitments? Any SLAs?
– Operational maturity: Do they have a repeatable process—or just talented people improvising?
If a proposal is basically “we’ll run ads and send reports,” you’re not buying expertise. You’re renting labor.
Before you switch: define the criteria (or you’ll repeat the same mess)
Switching providers without a scoreboard is how teams end up doing this every 12–18 months.
Set transition criteria that are measurable and time-bound. A simple scorecard beats a complex deck.
Metrics that actually help you decide
Mix leading and lagging indicators, but don’t let the leading ones become an excuse.
Examples:
– Lead-to-SQL rate (quality signal)
– CAC by channel (efficiency signal)
– Pipeline velocity (sales-cycle signal)
– Close rate by source (alignment signal)
– Revenue influenced or sourced (business signal)
Baselines matter. Targets without baselines are just wishful thinking.
Exit criteria (yes, write it down)
This part feels “serious,” but it saves you later.
Exit triggers might include: repeated missed deliverables, unexplained budget creep, no testing cadence, persistent tracking gaps, or failure to hit agreed performance floors over a defined period.
Also include a practical handover requirement: access, assets, documentation, tagging, dashboards, ad accounts, creative files, landing pages, and historical reports. If they can’t hand that over cleanly, that tells you a lot about how they operate.
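The “performance floors over a defined period” trigger is the easiest one to make objective. A minimal sketch, assuming a monthly series for one metric and an agreed floor (the metric, floor, and numbers are illustrative, not benchmarks):

```python
def consecutive_misses(series, floor):
    """Longest run of consecutive periods strictly below the agreed floor."""
    longest = current = 0
    for value in series:
        current = current + 1 if value < floor else 0
        longest = max(longest, current)
    return longest

# Hypothetical: lead-to-SQL rate by month, with a floor agreed at 0.20
# and an exit trigger of three consecutive months below it.
monthly_rates = [0.24, 0.21, 0.18, 0.17, 0.16, 0.22]
if consecutive_misses(monthly_rates, floor=0.20) >= 3:
    print("Exit trigger met: three straight months below the floor.")
```

The point isn’t the code; it’s that “failure to hit agreed performance floors” only works as an exit trigger if the floor, the metric, and the window are written down before the relationship goes sideways.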
How to evaluate new partners without getting dazzled
Charismatic agencies win pitches. Structured agencies win outcomes.
I prefer a scoring approach that forces evidence over vibes:
– Require case studies with numbers, timeframes, and constraints
– Ask what they’d do in the first 30 days (not the first year)
– Make them explain a time they were wrong—and what changed because of it
– Request sample reporting (real format, not a mockup)
– Check references with pointed questions: “What did they improve specifically?” “What broke?” “How did they handle conflict?”
If you can anonymize certain materials during review, do it. Halo effect is real. Big-name logos don’t guarantee your account gets the A-team.
(Also, listen for overconfidence. It’s usually a tell.)
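That evidence-over-vibes scoring can be forced into a simple weighted rubric so a charismatic pitch can’t dominate the decision. A sketch with made-up criteria, weights, and a 1–5 scale; adjust all of it to your context:

```python
# Hypothetical rubric: weights sum to 1.0 and mirror the checklist above.
WEIGHTS = {
    "case_study_evidence": 0.30,
    "first_30_day_plan": 0.25,
    "reporting_quality": 0.20,
    "reference_checks": 0.15,
    "handling_of_failure": 0.10,
}

def weighted_score(scores, weights=WEIGHTS):
    """Weighted average of 1-5 scores; missing criteria count as zero."""
    return sum(weights[c] * scores.get(c, 0) for c in weights)

agency_a = {"case_study_evidence": 4, "first_30_day_plan": 5,
            "reporting_quality": 3, "reference_checks": 4,
            "handling_of_failure": 2}
print(round(weighted_score(agency_a), 2))  # 3.85
```

Scoring missing criteria as zero is deliberate: if an agency can’t produce sample reporting or a reference, that absence should hurt their total, not get waved through.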
Red flags that justify an immediate switch
Some issues are fixable. Some are deal-breakers.
Immediate-switch territory looks like:
– Opaque reporting or refusal to share raw data access
– Unapproved budget changes or “surprise” invoices
– No documented testing roadmap, just perpetual tweaking
– Compliance carelessness (privacy, regulated claims, brand safety)
– Campaigns that stall after launch and never regain momentum
– Blame-shifting instead of diagnosis (platforms, seasonality, “the algorithm”)
When you see consistent avoidance of accountability, don’t negotiate with it. Replace it.
The practical switch checklist (keeps campaigns from face-planting)
Switching doesn’t have to mean starting over. It should feel like a controlled handoff.
A clean changeover plan covers:
Access + ownership
– Admin access to ad accounts, analytics, tag manager, CRM, CMS
– Domain/DNS and pixel ownership clarified
– Naming conventions documented (campaigns, UTMs, audiences)
Measurement
– Attribution model agreed (and realistic)
– Conversion events audited (duplicates and broken events happen constantly)
– Dashboards rebuilt with trendlines that match your funnel
Continuity
– Content calendar reviewed (what must ship no matter what)
– Creative refresh plan for quick wins (ads fatigue faster than people admit)
– Audience/segmentation rules migrated intact
Governance
– Weekly cadence for the first 30 days
– Written recaps with decisions and owners
– Escalation path that doesn’t rely on “who you know”
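One item in that checklist, documented UTM naming, is worth enforcing in code during a handover rather than trusting to a wiki page. A minimal sketch of a convention validator, assuming a lowercase letters-digits-hyphens convention (the pattern and the example values are illustrative, not a standard):

```python
import re
from urllib.parse import urlencode

# Hypothetical convention: lowercase letters, digits, and hyphens only.
TOKEN = re.compile(r"^[a-z0-9-]+$")
REQUIRED = ("utm_source", "utm_medium", "utm_campaign")

def build_utm_url(base_url, **params):
    """Check required UTM params against the convention, then append them."""
    missing = [k for k in REQUIRED if k not in params]
    if missing:
        raise ValueError(f"missing UTM params: {missing}")
    bad = [k for k, v in params.items() if not TOKEN.match(v)]
    if bad:
        raise ValueError(f"params violate naming convention: {bad}")
    return f"{base_url}?{urlencode(params)}"

print(build_utm_url("https://example.com/landing",
                    utm_source="newsletter",
                    utm_medium="email",
                    utm_campaign="q3-launch"))
```

A validator like this is cheap insurance: inconsistent UTMs are one of the quietest ways historical reporting becomes unreconstructable after a provider switch.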
Run a post-mortem after the first major campaign under the new provider. Not to play blame games—just to tighten the system so you don’t repeat old mistakes with new people.
If you’re on the fence, use a simple litmus test: can your current provider tell you what they tried, what they learned, and what they’ll do next—using your business metrics, not theirs?
If not, you already have your answer.