The Basics
Analytics tells you what happened. Attribution tells you why — which marketing activities caused which outcomes. Both are essential. Both are harder than they look. Most companies are measuring the wrong things, crediting the wrong channels, and making budget decisions based on a version of reality that their own data is actively misrepresenting.
The definitions
Marketing analytics is the measurement and analysis of marketing data — traffic, engagement, conversions, leads, pipeline, revenue. It answers descriptive questions: how many people visited, how many clicked, how many converted. The tools are well-established. Google Analytics, your CRM, your email platform. The data is plentiful. The challenge is knowing which data matters.
Attribution is the specific practice of assigning credit for outcomes — a lead, an opportunity, a closed deal — to the marketing activities that influenced it. It answers causal questions: which touchpoint, channel, or piece of content actually caused this conversion? Attribution is where the real difficulty lives. In B2B, where buying journeys are long, nonlinear, and involve multiple people across multiple channels over many months, answering that question accurately is genuinely hard.
The gap between what attribution models report and what actually drove the sale is where most marketing budget decisions go wrong. Channels that look like they are producing nothing are often doing essential work earlier in the journey. Channels that appear to produce everything are often just the last thing the buyer clicked before converting.
The B2B reality check
Research from Dreamdata found the average B2B buyer takes around 211 days from first touch to purchase. A 2024 Dentsu study puts it even longer — around 379 days for complex deals. Most attribution windows are 7 to 30 days. Most B2B journeys are 7 to 12 months. You are measuring a marathon with a stopwatch set to time a sprint.
The attribution problem
Here is a real B2B buying journey, compressed. A prospect reads a LinkedIn article on Day 1. They search for a related term on Day 45 and find a blog post. They download a report on Day 90 via a Google ad. They attend a webinar on Day 150. They search directly for the company name on Day 200 and request a demo. Three weeks later they sign.
Under last-touch attribution — the most common model — Google organic search for the brand name gets 100% of the credit. The LinkedIn article that started the whole journey gets nothing. The blog post, the report, the webinar — all invisible in the data. LinkedIn, which initiated the relationship, looks like it produces no revenue. Brand search, which just captured existing intent, looks like it produces all of it.
The budget decision that follows from that data: cut LinkedIn, increase investment in branded search. Which is exactly backwards from what the evidence supports.
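The window mismatch from the reality check above can be made concrete with the same journey. This is an illustrative sketch, not any platform's actual logic; the touchpoints and day numbers come from the example, and the 30-day window is a typical default lookback.

```python
# Sample journey from the text: five touchpoints before a Day-221 conversion.
journey = [
    (1, "LinkedIn article"),
    (45, "Organic search -> blog post"),
    (90, "Google ad -> report download"),
    (150, "Webinar"),
    (200, "Branded search -> demo request"),
]
conversion_day = 221
window_days = 30  # a common default attribution lookback window

# Keep only touchpoints that fall inside the lookback window before conversion.
visible = [t for t in journey if conversion_day - t[0] <= window_days]

print(f"{len(visible)} of {len(journey)} touchpoints visible "
      f"inside a {window_days}-day window")
```

Only the Day-200 branded search survives the filter. Everything that built the intent is outside the window before the model even assigns credit.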
The same journey — six touchpoints, one deal
Day 1: reads a LinkedIn article (first touch)
Day 45: searches a related term, finds a blog post
Day 90: downloads a report via a Google ad
Day 150: attends a webinar
Day 200: searches the company name directly, requests a demo
Day 221: signs the deal
 — reads a LinkedIn article (first touch)
Last-touch attribution gives 100% of the credit to the final touchpoint before conversion — in this case, the branded search that led to a demo request. Every earlier touchpoint gets zero credit. Most CRMs and ad platforms use this model by default.
The models explained
Last-touch attribution credits the final touchpoint entirely. Simple, widely used, consistently misleading. It systematically undervalues awareness and mid-funnel activity — precisely the content that creates the conditions for the final conversion to happen.
First-touch attribution credits the first touchpoint entirely. Better for understanding what creates demand, but equally misleading — it ignores everything that happened between initial awareness and conversion, including the touchpoints that actually moved the deal forward.
Linear attribution distributes credit equally across all touchpoints. Fairer than single-touch models, but treats a webinar on Day 150 as equally important as the demo request on Day 200 — which is probably not accurate either. It is an improvement, not a solution.
Position-based attribution gives more credit to first and last touches, distributing the remainder across middle touchpoints. In B2B this is often implemented as 40% to first touch, 40% to last touch, 20% spread across everything in between. More nuanced, still imperfect. The honest conclusion is that no attribution model is fully accurate — each is a simplification of a reality that is genuinely complex.
What to actually measure
The most commonly tracked marketing metrics — page views, social media followers, email open rates, impressions — are output metrics. They measure activity, not impact. A page that gets 50,000 views and converts no one is less valuable than a page that gets 500 views and converts 20 prospects into conversations. Volume without conversion is noise. The question is never "how many" — it is always "how many of the right people, doing the right things."
The metrics that actually connect marketing to business outcomes are: pipeline generated, pipeline influenced, conversion rate from lead to opportunity, sales cycle length, average deal size by channel, and customer acquisition cost by source. These are harder to measure — they require your CRM to be properly set up and for marketing and sales to agree on definitions — but they are the metrics that enable good decisions.
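The metrics above can be computed from flat CRM-style records once definitions are agreed. This sketch is illustrative: field names, channels, and figures are hypothetical, and the cost figure here is cost per opportunity (true customer acquisition cost divides spend by closed customers).

```python
# Hypothetical CRM export: one record per lead, with source and outcome.
leads = [
    {"source": "linkedin",    "became_opportunity": True,  "deal_value": 40_000},
    {"source": "linkedin",    "became_opportunity": False, "deal_value": 0},
    {"source": "paid_search", "became_opportunity": True,  "deal_value": 12_000},
    {"source": "paid_search", "became_opportunity": False, "deal_value": 0},
    {"source": "paid_search", "became_opportunity": False, "deal_value": 0},
]
spend = {"linkedin": 8_000, "paid_search": 6_000}  # hypothetical channel spend

metrics = {}
for source in spend:
    src_leads = [l for l in leads if l["source"] == source]
    opps = [l for l in src_leads if l["became_opportunity"]]
    metrics[source] = {
        "lead_to_opp_rate":   len(opps) / len(src_leads),
        "pipeline_generated": sum(l["deal_value"] for l in opps),
        # Cost per opportunity, not true CAC (which uses closed customers).
        "cost_per_opp":       spend[source] / len(opps) if opps else None,
    }

for source, m in metrics.items():
    print(source, m)
```

None of this requires sophisticated tooling; it requires the source field to be populated and the definitions to be shared between marketing and sales.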
For content specifically, the most useful metric is often the simplest: did reading this piece lead to a conversation? Track what content prospects have consumed before they book a call. Interview customers about what they read or watched before they reached out. That qualitative signal — "I found your article on X and it made me want to talk to you" — tells you more about what is working than most dashboards will.
The vanity metric trap
LinkedIn impressions. Website traffic. Email open rates. These feel like evidence of marketing working because they are large numbers that go up and to the right. They are not evidence of marketing working — they are evidence of marketing activity. The question to ask about every metric you track: if this number went to zero tomorrow, would revenue be affected? If the honest answer is "probably not," you are measuring the wrong thing.
What good looks like
Most B2B companies do not have the resources to implement sophisticated multi-touch attribution modelling. That is fine. The practical alternative is a layered approach that combines imperfect quantitative data with deliberate qualitative research.
Use your CRM as the source of truth. Every deal should have a source attached — where did this lead originally come from? This will not be perfectly accurate, but it gives you a directional view of which channels are opening relationships. Review it quarterly. Look for patterns over time rather than individual data points.
Ask every customer how they found you. Not in a form — in the first conversation. "Before we go any further — how did you first hear about us?" The answer will surprise you regularly. You will discover that the channel you thought was producing nothing is where half your best customers first encountered you.
Track pipeline contribution, not just lead volume. A channel that generates 100 leads that never become opportunities is less valuable than a channel that generates 10 leads that become 8 deals. Lead quality, measured by conversion rate to pipeline, is the metric that connects marketing activity to sales reality.
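The comparison above, put in numbers: ranking channels by raw lead volume versus by conversion to pipeline produces opposite winners. Channel names are placeholders; the counts are from the example.

```python
# Hypothetical channel data matching the example: volume vs quality.
channels = {
    "channel_a": {"leads": 100, "opportunities": 0},
    "channel_b": {"leads": 10,  "opportunities": 8},
}

# Ranked by volume, channel_a wins; ranked by conversion rate, channel_b wins.
by_volume = max(channels, key=lambda c: channels[c]["leads"])
by_quality = max(
    channels,
    key=lambda c: channels[c]["opportunities"] / channels[c]["leads"],
)

print("most leads:", by_volume)
print("best conversion to pipeline:", by_quality)
```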
Where to start
This week, add one question to your sales process: "Before we start — how did you first hear about us?" Record the answers in your CRM. Do this for 90 days. Then look at what you collected. That simple data point — gathered consistently over time — will tell you more about what is actually driving your business than any analytics dashboard you currently have access to.