The PACT framework for PPC: How to move beyond ‘it depends’
There’s a phrase PPC experts reach for whenever they get a tough question. At conferences, online, and on client calls. Two words, a smug smile, and absolutely zero useful information: “It depends.”
This has been bugging me for as long as I can remember. Turns out it’s not just a PPC thing, either. Aleyda Solis gave an excellent presentation calling out the exact same pattern in SEO. So we’re dealing with an industry-wide epidemic here. Two disciplines, same cop-out.
Not every question is equally hard to answer.
- “What’s the maximum number of RSAs per ad group?” Just look it up.
- “Why did my CPA spike last week?” That takes data plus interpretation.
- “What will my ROAS look like if I increase budget by 30%?” Now you need context, too.
- “What bid strategy should I use?” That requires data, interpretation, context, and an understanding of someone’s priorities.
It makes sense that “It depends” clusters around the hardest questions. More variables, more context needed, more ways to be wrong. I get it. But since when is “This is hard” a reason to give up on being useful?
So I built a framework for giving useful answers instead. I call it PACT, which stands for Process, Anchors, Conditions, and Trade-offs.
The PACT framework assumes a broader audience context where you don’t have the asker’s data in front of you. If you do, great — crunching the numbers and statistical models become additional answer options.
Not all questions are created equal
If we borrow from the world of analytics, questions come in four flavors, each progressively harder to answer.

Descriptive questions: Asking what happened or how something works
“What’s my impression share?” or “How does broad match work?”
These are answered with data and facts. You either know the answer or you look it up. Nobody says "It depends" here because nobody needs to. I'll ignore this category for the rest of this article.
Diagnostic questions: Asking why something happened
“Why did my conversion rate drop?”
These need data plus your interpretation of that data. “It depends” already starts creeping in here because something clearly changed, and pinpointing the cause is rarely straightforward.
Predictive questions: Asking what will happen or what good looks like
“What if I decrease my target ROAS by 30%?” or “What’s a good CTR for my industry?”
These are harder. You need interpretation, but you also need context about the specific business and market. This is where “It depends” starts to feel earned.
Prescriptive questions: Asking ‘What should I do?’ or ‘What’s the best solution?’
“What bid strategy should I use?” or “Should I consolidate my campaigns?”
These need everything: data, interpretation, context, and an understanding of someone’s priorities. If “It depends” has a permanent home, it’s here.
The PACT framework
There are many useful answers you could offer your audience instead of “It depends,” such as explaining how it depends, outlining the trade-offs, or sharing benchmarks and flowcharts.
I tried to categorize the answers into four concrete response types. (Whether the category names were chosen for clarity or reverse-engineered from a four-letter word is between me and my thesaurus.)
The diagram below shows which response types fit which question types. (There’s overlap, and that’s fine.)

Process: Give a structured path
For many diagnostic questions and for some prescriptive questions, a process is the best answer. Show your audience which steps to take, in which order, to reach their answer (and, increasingly, steps you can hand to an AI agent with a skill).
If you work at an agency, you need good processes anyway. As David Rodnitzky would say:
- “An agency without process is just a bunch of people running around doing things.”
Suggested formats
Flow charts: The first time I fell in love with a flow chart was in 2012, when the Rimm-Kaufman Group (now Merkle) shared a performance troubleshooting flowchart in their Dossier 3.2. It’s an excellent example of a helpful answer to the question, “Why did my CPA increase (or ROAS decrease)?”

Decision trees: Prescriptive “Should I?” questions can also be helped with a decision tree. They can be simple, funny-but-true ones like this one from Tom Orbach:

Or more professional ones, like Aleyda Solis’ SEO Flowcharts for SEO Decision Making.

Anchors: Ground it with data and examples
Anchors are the “quick and easy” evidence-based answers that are still better than “It depends.”
Suggested formats
Benchmarks: Everybody loves a good benchmark. If you have enough data from comparable businesses, you can use it to answer “What does good look like?” questions.
When someone asks, “What’s the average ecommerce conversion rate?” don’t say “It depends.” Say:
- “For health and beauty, it’s 3.3%. For electronics, it’s 1.9%.” The more specific the benchmark, the better.
Usual suspects: Think of the usual suspects as a "light version" of a process for diagnostic questions, built on the 80/20 Pareto principle: 80% of outcomes result from 20% of causes.
Instead of a 25-step flowchart, you can share a ranked list of the most likely causes ordered by frequency. Basically saying:
- “Check these five things first, because 80% of the time it’s one of them.”
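The ranked-list idea is easy to operationalize. Here's a minimal sketch: rank candidate causes by how often they turned out to be the culprit in past audits, then surface the shortest list that covers roughly 80% of cases. The cause names and frequencies below are illustrative placeholders, not real benchmarks.

```python
# Illustrative frequencies: how often each cause explained a CPA spike
# in past audits (hypothetical numbers, not industry data).
CAUSE_FREQUENCIES = {
    "budget cap hit / lost impression share": 0.30,
    "competitor entered the auction": 0.20,
    "tracking or conversion tag broke": 0.15,
    "seasonality / demand shift": 0.10,
    "recent bid or targeting change": 0.05,
    "landing page change": 0.04,
    "ad disapproval": 0.03,
}

def usual_suspects(frequencies, coverage=0.80):
    """Return the shortest list of causes covering `coverage` of past cases."""
    ranked = sorted(frequencies.items(), key=lambda kv: kv[1], reverse=True)
    shortlist, total = [], 0.0
    for cause, freq in ranked:
        shortlist.append(cause)
        total += freq
        if total >= coverage:
            break
    return shortlist

print(usual_suspects(CAUSE_FREQUENCIES))
```

With these made-up numbers, the top five causes cross the 80% threshold, which is exactly the "check these five things first" answer.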
Case study: When someone asks, “What will happen if I do X?”, telling them what actually happened when a similar account did X is worth more than any theoretical answer.
- “We consolidated 12 campaigns into four for an ecommerce account spending $50,000/month. CPA improved 20% after the learning period, but we lost visibility into product category performance.”
The key is specificity: industry, budget range, what changed, and the trade-off. Vague case studies (“We saw great results”) are just “It depends” wearing a suit.
Conditions: Name the hidden variables
This is the most direct replacement for “It depends,” as you’ll say, “It depends on these specific things” instead.
Suggested formats
Checklist: For diagnostic questions, this could be a segmentation drill-down. Slice the data by device, geo, time of day, campaign, match type, audience, etc., until the anomaly isolates to one segment. This converts "Why did it happen?" into "Where did it happen?", which can be just as useful.
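The drill-down can be sketched in a few lines: compare the metric before vs. after the anomaly across one dimension at a time, and flag the segment with the largest swing. The data here is synthetic, and "device" stands in for whichever dimension you're slicing by.

```python
from collections import defaultdict

# Synthetic data: (segment, period, clicks, conversions)
rows = [
    ("mobile",  "before", 1000, 50),
    ("mobile",  "after",  1000, 20),   # mobile conversion rate collapsed
    ("desktop", "before",  800, 40),
    ("desktop", "after",   800, 38),
]

def drill_down(rows):
    """Return the segment with the largest conversion-rate drop, plus all deltas."""
    stats = defaultdict(lambda: {"before": [0, 0], "after": [0, 0]})
    for seg, period, clicks, conv in rows:
        stats[seg][period][0] += clicks
        stats[seg][period][1] += conv
    deltas = {}
    for seg, p in stats.items():
        cvr_before = p["before"][1] / p["before"][0]
        cvr_after = p["after"][1] / p["after"][0]
        deltas[seg] = cvr_after - cvr_before
    return min(deltas, key=deltas.get), deltas

worst, deltas = drill_down(rows)
print(worst)  # the segment where the anomaly concentrates
```

Run the same comparison across each dimension in your checklist; whichever slice shows the anomaly concentrated in one segment is where the diagnosis continues.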
If [x] then [y]: For example, “What will happen if I double my budget?” Then you follow up with questions like:
- “What’s your current impression share?”
- “Are you budget-constrained or bid-constrained?”
- “How steep is the diminishing returns curve in your auction?”
If you’re at 60% impression share and purely budget-limited, doubling your budget could get you close to 80% more conversions. If you’re already at 95% impression share, that extra budget is going to buy you mostly junk.
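The arithmetic behind those two answers can be sketched as an upper bound. This naive model assumes conversions scale linearly with captured impression share, so real auctions (with diminishing returns and rising CPCs) will land below it; treat the output as a ceiling, not a forecast.

```python
def max_uplift_from_doubling(impression_share):
    """Upper bound on extra conversions from doubling budget, assuming
    conversions scale linearly with impression share (naive model;
    diminishing returns in real auctions will lower this)."""
    new_share = min(impression_share * 2, 1.0)  # share can't exceed 100%
    return new_share / impression_share - 1

print(f"{max_uplift_from_doubling(0.60):.0%}")  # plenty of headroom at 60% IS
print(f"{max_uplift_from_doubling(0.95):.0%}")  # almost no headroom at 95% IS
```

At 95% impression share the ceiling is only about 5% more conversions, which is why the extra budget mostly buys junk: the model caps out, and everything past the cap is wasted or increasingly expensive inventory.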
Reversibility test: For a quick filter on prescriptive “Should I?” questions, use one condition: reversibility. Categorize decisions by how easy they are to undo. Low-stakes reversible decisions (e.g., testing a new ad copy) get a “Just try it” answer.
High-stakes irreversible decisions (such as restructuring your entire account) get the full trade-off analysis (and move to the next category). This helps your audience judge how much thought a decision actually deserves.
Jeff Bezos famously calls these irreversible Type 1 (one-way door) and reversible Type 2 (two-way door) decisions. He also warns us not to treat Type 2 decisions as Type 1 decisions.
Trade-offs: Surface the choices
Some questions don’t have a right answer. Instead, they involve choosing between competing priorities.
When someone asks “What’s the best approach?”, they often don’t realize they’re asking “Which trade-off am I most comfortable with?” The fix is to make the trade-offs visible.
Suggested formats
Trade-off explanation: Replace “What’s the right answer?” with “Here’s what each option gains and sacrifices.”
For example, “Should I consolidate my campaigns into fewer, bigger ones?” Instead of “It depends on your goals,” surface the actual trade-off:
- “Consolidation gives you more data per campaign, which helps Smart Bidding learn faster. But it reduces your control over budget allocation and makes it harder to optimize for different segments.”
- “So the real question is: Do you value algorithmic learning speed more than granular control right now? That depends on whether your current structure is data-starved or if you’re already getting strong results and just want more precision.”
Now the person isn’t stuck. They have a choice to make, and they understand what’s at stake on both sides.
Calculators: If the calculator presents the trade-off as an input field, it can yield a useful answer. One of my all-time favorites is the Build vs. Buy calculator from Baremetrics, which helps you decide whether to buy a tool or build it internally.
Closer to the daily life of a PPC practitioner, we created two free calculators to determine your target CPA or target ROAS. When you enter “% of margin willing to invest in acquisition,” you’re resolving the subjective part of the trade-off yourself. The calculator just runs the math on your decision.
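The math such a calculator runs is simple once the subjective input is fixed. This is a hedged sketch of one common way to derive both targets from margin, not the exact implementation of the calculators above: target CPA is the slice of gross profit you're willing to give up per order, and target ROAS is the revenue-per-dollar that same decision implies.

```python
def target_cpa(avg_order_value, gross_margin, pct_of_margin_to_invest):
    """Max you can pay per conversion while keeping the rest of the margin."""
    return avg_order_value * gross_margin * pct_of_margin_to_invest

def target_roas(gross_margin, pct_of_margin_to_invest):
    """Revenue / ad spend implied by the same decision."""
    return 1 / (gross_margin * pct_of_margin_to_invest)

# $100 AOV, 40% gross margin, willing to invest half the margin:
print(target_cpa(100, 0.40, 0.50))   # 20.0 -> target CPA of $20
print(target_roas(0.40, 0.50))       # 5.0  -> target ROAS of 500%
```

The two outputs are consistent by construction: spending $20 to earn $100 in revenue is a 5x ROAS. The only judgment call, the share of margin to invest, is the trade-off the person answers themselves.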
The ‘it depends’ cheat sheet
Next time your gut says, “It depends,” check which type of question you’re dealing with and pick the format that fits.

I’m not naive enough to think we’ll eradicate “It depends” overnight. But I do think we can hold ourselves to a higher standard. If you’re speaking at a conference, writing a blog post, or answering a client question, try replacing your next “It depends” with one of these four response types.
And if you find a question that genuinely can’t be answered with a process, anchor, condition, or trade-off, I’d love to hear it. I haven’t found one yet. But I’m probably not done looking.
