Most agencies don’t fear hard work. They fear scrutiny.
Not because they’re all shady, but because scrutiny forces specifics: what you’ll do, what it will cost, what “working” means, and when you’ll admit it’s not working. If your agency gets twitchy when you ask those questions, that’s not “protecting the strategy.” That’s protecting the invoice.
Start here: “What are you optimizing for, exactly?”
Bold take: if they can’t answer this in one minute, you’re buying activity, not outcomes.
“Awareness” isn’t an objective unless you define it. “Growth” is a mood. You need a measurable north star that maps to the business. Revenue is the obvious one, but not always the right immediate target. In some cases it’s qualified pipeline, activation, retention, or expansion. The trick is agreeing on the one primary outcome and a small set of supporting indicators that actually predict it (not the stuff that looks good in a deck).
One-line clarity beats a 30-slide report.
A practical framework (that doesn’t require a PhD)
Here’s the skeleton I use when I’m trying to force transparency into a marketing relationship:
– Objective: the business result (ex: “$400k in new ARR from mid-market by Q3”)
– Decision metrics: 3–5 numbers that will change what you do next week
– Instrumentation: where the data comes from, who owns it, and how it’s audited
– Cadence: what you’ll review weekly vs. monthly (and what triggers an emergency review)
– Guardrails: spend caps, brand constraints, compliance rules, no-go tactics
It’s not fancy. That’s the point.
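Pinned down in writing, that skeleton fits on half a page. Here's a sketch of what the shared version might look like; every field name and value below is invented for illustration, not a standard:

```python
# The transparency skeleton as a shared, reviewable artifact.
# All values are illustrative -- fill in your own.
scorecard = {
    "objective": "$400k in new ARR from mid-market by Q3",
    "decision_metrics": ["SQLs/week", "CAC by channel",
                         "stage conversion", "payback months"],
    "instrumentation": {"sources": "CRM + ad platforms",
                        "owner": "client RevOps", "audit": "monthly"},
    "cadence": {"weekly": "decision metrics",
                "monthly": "full funnel + budget variance",
                "emergency_trigger": "CAC up 30% week over week"},
    "guardrails": ["$50k/mo spend cap", "no incentivized reviews",
                   "claims need legal sign-off"],
}
```

If an agency balks at filling in a one-pager like this, that tells you something too.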

The uncomfortable part: Which metrics actually drive the business?
Some metrics are performance theater. Others are levers.
If your agency celebrates a CTR increase but can’t tell you whether those clicks turned into activated users or SQLs, you’re watching the wrong scoreboard. In my experience, the most useful metrics have two traits: they’re close to cash, and they tell you what to fix.
A few that usually earn their keep:
– Conversion rate by funnel stage (where leakage actually happens)
– CAC by channel + cohort (not blended, and not “estimated”; see the sketch after this list)
– Payback period (because cash flow is real life)
– LTV by segment (because “average customer” is a myth)
– Activation speed (time-to-value, especially in product-led models)
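To make the CAC and payback math concrete, here's a minimal sketch. All numbers are invented; plug in your own spend, customer counts, and margins:

```python
# Per-channel CAC and payback period -- illustrative numbers only.
channels = {
    # channel: (spend $, new customers won, monthly gross margin per customer $)
    "paid_search": (30_000, 60, 250),
    "paid_social": (30_000, 40, 180),
}

for name, (spend, customers, monthly_margin) in channels.items():
    cac = spend / customers                # per channel, not blended
    payback_months = cac / monthly_margin  # months of margin to recoup CAC
    print(f"{name}: CAC ${cac:,.0f}, payback {payback_months:.1f} months")
```

Same total spend, very different payback. That's exactly the difference blended CAC hides.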
Now, this won’t apply to everyone, but if you’re running longer sales cycles, you need proxy metrics that correlate with revenue and are hard to game: meeting-to-opportunity rate, pipeline velocity, stage conversion, win rate by source. Otherwise you’ll be sold “growth” while the CRM stays quiet.
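Of those, pipeline velocity is a good example of a proxy that's both close to revenue and hard to game. A rough sketch, with invented inputs:

```python
# Pipeline velocity: (opportunities x win rate x avg deal size) / cycle length.
# All inputs below are illustrative.
opportunities = 40       # qualified opportunities in the period
win_rate = 0.22          # historical win rate by source
avg_deal_size = 18_000   # dollars
cycle_days = 90          # average days from opportunity to close

velocity = opportunities * win_rate * avg_deal_size / cycle_days
print(f"Pipeline velocity: ${velocity:,.0f}/day")  # -> $1,760/day
```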
A reality check to keep everyone honest
Multi-touch attribution is messy, but the “we can’t measure anything” excuse is getting old.
Google’s internal research has long pointed to the messy middle of decision-making (exploration and evaluation loops) as a core reason simplistic attribution fails. That doesn’t mean measurement is pointless; it means you need better experimentation and cleaner definitions of incrementality.
Source: Google/Think with Google, Decoding Decision Making: The Messy Middle (2020).
Translation: don’t demand perfect attribution. Demand useful attribution, plus tests that prove lift.
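What does “useful attribution plus tests that prove lift” look like in practice? One hedged example: a simple holdout comparison. The numbers below are made up; the point is the shape of the math:

```python
# Incrementality from a holdout test -- illustrative numbers only.
treated_conv, treated_n = 420, 20_000   # exposed group
control_conv, control_n = 300, 20_000   # holdout group

treated_rate = treated_conv / treated_n                        # 2.1%
control_rate = control_conv / control_n                        # 1.5%
relative_lift = (treated_rate - control_rate) / control_rate   # 40%
incremental = (treated_rate - control_rate) * treated_n        # ~120 conversions

print(f"lift {relative_lift:.0%}, incremental conversions {incremental:.0f}")
```

Incremental conversions, not platform-reported conversions, are what you should pay CAC against.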
Reading a proposal: where transparency shows up (and where it hides)
Some proposals are written to inform. Most are written to avoid accountability.
Look for alignment that’s almost boring in its specificity. A transparent plan doesn’t just list tactics; it maps:
activity → leading indicator → business outcome → decision threshold
And it should say, in plain language, what happens when results are below target. Not “we’ll optimize.” Optimize what, using which lever, by when?
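For instance, one row of that mapping might look like the sketch below. Every value is hypothetical; the point is that each tactic arrives with its indicator, outcome, and threshold attached:

```python
# One activity mapped end to end -- all values hypothetical.
plan_row = {
    "activity": "non-branded search ads on pricing-intent keywords",
    "leading_indicator": "demo requests per week",
    "business_outcome": "qualified pipeline ($)",
    "decision_threshold": "pause keyword group if cost per demo > $180 for 14 days",
}
```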
Red flags I don’t ignore anymore:
– Vague milestones (“phase 1: strategy” is not a milestone)
– “Custom dashboards” with no sample view or metric dictionary
– A timeline with deliverables but no owners (who’s writing, building, approving?)
– Benchmarks presented as promises (benchmarks are context, not contracts)
– Case studies that sound impressive but don’t match your funnel or sales cycle
Here’s the thing: a good agency will tell you what they can’t guarantee. If they pretend everything is controllable, they’re either inexperienced or performing.
Real costs vs. hidden fees (the part that blows budgets up)
You’re not crazy if you feel like marketing invoices “grow legs.” That happens when the commercial model is loose and the scope is written in fog.
Hidden fees usually come from four places:
– Tooling: analytics, reporting, CRO platforms, heatmaps, enrichment, call tracking
– Creative production: “one ad concept” turns into six formats, three aspect ratios, and five rounds of revisions
– Media management add-ons: separate fees for each channel, each market, each product line
– Emergency work: launches, PR fires, “we need this by Friday” (you will need this by Friday)
Ask for a budget table that separates:
– Agency labor
– Paid media
– Third-party tools
– One-time setup
– Ongoing maintenance
– Contingency (and the rules for using it)
And yes, ask what happens if you don’t spend the full media budget. Do fees drop proportionally, or are you paying for “management” regardless of spend? You’d be shocked how often that’s unclear.
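Here's why that question matters, in plain arithmetic. Fee structures and numbers below are hypothetical:

```python
# Underspend scenario under two common fee models -- all numbers hypothetical.
planned_media, actual_media = 50_000, 35_000   # you spent 70% of plan

flat_retainer = 8_000                   # fixed fee, owed regardless of spend
pct_fee_planned = 0.15 * planned_media  # 15%-of-spend model at full budget
pct_fee_actual = 0.15 * actual_media    # same model at actual spend

print(f"flat retainer: ${flat_retainer:,}")
print(f"% of spend: ${pct_fee_planned:,.0f} planned -> ${pct_fee_actual:,.0f} actual")
```

Neither model is wrong. What's wrong is discovering which one you signed only after the underspend happens.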
Which channels deliver real ROI (and why buzzwords mislead)
If someone tells you “TikTok is the future” or “SEO is dead,” they’re selling a narrative, not a plan.
Channels aren’t good or bad. They’re compatible or incompatible with your offer, margins, sales cycle, and audience behavior. That’s it. That’s the whole secret.
A more useful way to think about channels:
– Capture (high intent): paid search, SEO, marketplaces
– Nurture (mid intent): email, webinars, retargeting, community
– Create demand (low intent): paid social, creators, PR, YouTube, events
If your agency can’t explain which bucket each channel sits in for your business, you’re going to get “full-funnel” spending with bottom-funnel expectations. That’s where ROI goes to die.
Buzzword blindspots I’ve seen derail budgets
One quick opinionated list, because it helps:
– “AI-driven” targeting (often means basic automation with nicer branding)
– “Brand awareness” without lift tests or brand studies
– “Viral” as a strategy (it’s not a strategy; it’s an outcome)
– “Engagement” when the business needs pipeline
Look, sometimes awareness campaigns are exactly right. But if you can’t measure downstream impact with holdouts, geo tests, or at least directional cohorts, you’re funding vibes.
From idea to action: build a campaign plan that survives contact with reality
A practical plan is not a brainstorm doc. It’s a sequence of decisions.
Start with one campaign objective. Not three. Then choose one primary channel and one support channel. You can expand later, after you have signal.
A campaign plan I trust usually includes:
1) Audience segmentation that’s actually usable
Not “SMBs” and “enterprises.” Give me segments by pain, urgency, and buying constraints. Tell me which one you’ll prioritize and why.
2) A hypothesis you can lose
If the hypothesis can’t be wrong, it’s not a hypothesis. “We believe pricing-page traffic from non-branded search will convert to demo requests at 2.0%+ if we align landing pages to job-to-be-done messaging.” (A quick way to check a threshold like that is sketched after this list.)
3) Creative rules, not just creative ideas
What claims are allowed? What proof is required? Which offers are banned because they attract bad-fit leads? (Yes, that happens.)
4) A dashboard built for action
If a dashboard doesn’t tell you what to do next, it’s just a prettier spreadsheet.
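On that 2.0% threshold: you don't need a stats department to know whether you've cleared it, but you do need more than eyeballing. A minimal check, assuming made-up traffic numbers and a simple normal approximation:

```python
# Is the observed demo-request rate really above the 2.0% bar?
# One-sided check with a normal approximation; all numbers invented.
from math import sqrt

visitors, demos = 4_000, 96    # pricing-page sessions, demo requests
target = 0.02                  # the 2.0% threshold from the hypothesis
observed = demos / visitors    # 2.4%

se = sqrt(target * (1 - target) / visitors)   # std. error if the true rate were 2%
z = (observed - target) / se                  # ~1.8: promising, not yet conclusive
print(f"observed {observed:.1%}, z = {z:.2f}")
```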
One-line paragraph, because it matters:
Speed beats elegance.
When benchmarks fail: the response playbook nobody writes down
Most agencies have an optimization process. Fewer have a failure protocol.
When performance misses benchmarks, your next move shouldn’t be panic or denial. It should be pre-decided. I like decision thresholds that trigger specific actions. For example:
– If CAC is 20% above target for 2 consecutive weeks, pause the worst-performing ad set and reallocate 30% of spend to proven segments
– If lead quality drops (SQL rate falls below X), tighten targeting and change the offer before you change the channel
– If conversion rate falls after a landing page update, roll back within 24 hours (no ego, just rollback)
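Those triggers are specific enough to write as code, which is a decent test of whether a protocol is real. A sketch, with thresholds mirroring the examples above (everything is tunable):

```python
# Pre-decided failure protocol -- thresholds mirror the examples above.
def next_action(cac, cac_target, weeks_over, sql_rate, sql_floor,
                cr_dropped_after_update):
    if cr_dropped_after_update:
        return "roll back the landing page within 24 hours"
    if cac > 1.2 * cac_target and weeks_over >= 2:
        return "pause worst ad set; move 30% of spend to proven segments"
    if sql_rate < sql_floor:
        return "tighten targeting and change the offer, not the channel"
    return "hold course"

print(next_action(cac=540, cac_target=400, weeks_over=2,
                  sql_rate=0.30, sql_floor=0.25,
                  cr_dropped_after_update=False))
# -> pause worst ad set; move 30% of spend to proven segments
```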
And you need ownership. One person decides. One person implements. One person validates. Committees are how mediocrity protects itself.
Updates, access, ownership: how communication should actually work
If you’re chasing updates, the relationship is already broken.
You want a cadence that’s predictable and light:
– Weekly: what changed, what we learned, what we’re doing next week
– Monthly: channel performance, budget variance, experiment results, roadmap updates
– Quarterly: strategy review, segmentation refresh, creative performance trends, next bets
Access matters too. You should have role-based access to ad accounts, analytics, tag managers, CRM reporting, and the project board. If you don’t, you’re trusting a middleman to tell you what the data says (and that’s where reality gets edited).
Partner or one-way accountability system?
Ask this directly: “When results miss, what do you take responsibility for—and what do you expect us to own?”
A real partner has answers ready. They’ll tell you the dependencies: sales follow-up speed, offer approval timelines, technical constraints, margin limits, product readiness. They won’t use those dependencies as excuses, but they won’t pretend they don’t exist either.
If you’re the only one watching the numbers, you’re not in a partnership.
You’re supervising a vendor.
And that’s fine, if that’s what you want. But don’t confuse it with growth strategy.