The concept is simple enough:
Show different offers to different people based on what they're doing on your site.
But what does that actually look like when you're running a real eCommerce business with real customers?
Here are three examples from brands that stopped treating every visitor the same way.
Example 1: Only showing offers to people who need them (it Luggage)
it Luggage sells direct and through retailers. Like most brands in that position, they needed their direct channel to perform without constant discounting.
The problem was resources. They had no time to A/B test every offer. No capacity to build complex segmentation rules. Just the usual challenge: too many people leaving, not enough converting.
What they did
They ran exit campaigns, but with one key condition: the offer appeared only to visitors with a buying intent score below 15%.
In practice, this meant that visitors who demonstrated intent signals would not see an offer. Those signals included:
- Returning to the site
- Viewing product categories and brands
- Adding items to their basket
Only visitors who visited once and showed little engagement would receive one.
The reasoning is straightforward: visitors with high intent are likely to purchase regardless of an offer, so showing them discounts unnecessarily erodes profit. Visitors with low intent, conversely, are less likely to buy unless given an incentive; therefore, offers are targeted to them.
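As a rough sketch, that gating logic is just a threshold check. The 15% cutoff comes from the campaign described above; the scoring weights, field names, and function names here are illustrative assumptions, not the vendor's actual model:

```python
def intent_score(visitor: dict) -> float:
    """Crude buying-intent score in [0, 1] from simple behaviour signals.

    Weights are illustrative; a real model would be fitted to data.
    """
    score = 0.0
    if visitor.get("is_returning"):
        score += 0.4  # returning to the site is a strong intent signal
    score += 0.1 * min(visitor.get("categories_viewed", 0), 3)
    if visitor.get("items_in_basket", 0) > 0:
        score += 0.3  # basket adds signal intent too
    return min(score, 1.0)


def should_show_offer(visitor: dict, threshold: float = 0.15) -> bool:
    """Show an exit offer only when the intent score is below the threshold."""
    return intent_score(visitor) < threshold


new_visitor = {"is_returning": False, "categories_viewed": 0}
engaged_visitor = {"is_returning": True, "items_in_basket": 2}

print(should_show_offer(new_visitor))      # low intent: offer shown
print(should_show_offer(engaged_visitor))  # high intent: offer suppressed
```

The point of the sketch is the shape of the decision, not the weights: high-intent visitors fall above the threshold and never see the discount.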
On top of that, they tested three discount levels (5%, 10%, 15%) against a control group to find the sweet spot.
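Splitting traffic for a test like that can be done with stateless, deterministic bucketing, so a returning visitor always lands in the same variant. A minimal sketch (function and variant names are hypothetical):

```python
import hashlib

VARIANTS = ("control", "5%", "10%", "15%")


def assign_variant(visitor_id: str) -> str:
    # Hash the visitor ID so assignment is stable across sessions
    # without storing any state server-side.
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]


print(assign_variant("visitor-42"))  # same ID always yields the same variant
```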
The results
- +16% conversion rate
- +4% revenue
- +5% revenue vs sitewide offers
- +5% revenue per user
Revenue increased even though orders with offers had a 12% lower AOV; targeting made up the difference. More people converted, fewer discounts went to waste, and the overall result was stronger than a blanket promotion.
(Also worth noting: the 15% discount won the test. Sometimes the smallest discount isn't the most profitable one.)
The takeaway
Show offers only to those who need them. Protect margin by suppressing discounts for high-intent visitors and convert fence-sitters without reducing profits.
Example 2: Using multiple offer types across the journey (Radley)
Radley is a premium accessories brand. Not the kind of business that can afford to train customers to expect constant discounts.
But also not immune to the reality that people abandon baskets, browse without buying, and copy product names to hunt for better deals elsewhere.
What they did
Rather than picking one tactic, they deployed a range of offers targeting different behaviours:
- Exit campaigns for people showing signs of leaving
- Ctrl+C campaigns for deal-seekers copying product names
- Invalid code offers for customers trying expired or third-party codes
- Stretch & Save offers to encourage larger baskets
- Cross-sell offers based on browsing and cart contents
- An offer wallet, a centralised hub for all available incentives
- Intent-based suppression to avoid discounting high-intent shoppers
Each offer type responds to a specific signal. Someone copying a product name gets a different experience from someone hovering over the back button. Someone with £80 in their basket sees a different offer from someone with £20.
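In code, that kind of signal-to-offer matching amounts to a dispatch over behaviour events. A hedged sketch, with illustrative event fields, basket thresholds, and offer names (none of these are Radley's actual rules):

```python
from typing import Optional


def pick_offer(event: dict) -> Optional[str]:
    """Map one behaviour signal to one offer type (all names illustrative)."""
    if event.get("copied_product_name"):
        return "ctrl_c_offer"        # deal-seeker copying a product name
    if event.get("entered_invalid_code"):
        return "invalid_code_offer"  # rescue an expired or third-party code
    if event.get("exit_intent"):
        return "exit_offer"          # visitor heading for the back button
    basket = event.get("basket_value", 0)
    if basket >= 80:
        return "cross_sell_offer"    # big basket: suggest the matching item
    if basket >= 20:
        return "stretch_and_save"    # nudge toward the next basket tier
    return None                      # no signal, no interruption
```

The ordering matters: explicit signals (copying, invalid codes, exit intent) take priority, and the absence of any signal returns nothing, which is what keeps the journey interruption-free.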
The offer wallet sits quietly in the corner of the site, holding any offers the customer has unlocked. No intrusive pop-ups, no interruptions to the journey. Just a persistent spot where deals live if you need them. It can be a channel for promoting new products or content.
The results
15% conversion rate uplift. The incremental revenue from the campaigns was delivered at 8% lower cost than with sitewide offers.
Revenue went up, and promotion costs went down, closing the gap that targeted offers are designed to address.
The takeaway
One offer type won't cover every scenario. Exit intent works for some visitors, but not all. Stretch & Save works for others. Cross-selling works when someone's already bought into the brand but hasn't added the matching item yet.
Match offers to behaviours, and you can deliver them without overwhelming the site or cheapening your brand.
(This works for premium and luxury brands. Radley's brand equity didn't suffer. If anything, targeted offers feel more premium than blanket codes floating around on affiliate sites. It’s all about maintaining perceived value.)
Example 3: Testing offer structure, not just size (US Polo Assn)
US Polo Assn already had a strong Stretch & Save campaign running. Multiple tiers. Something for every basket size. Decent performance. But they wanted to know: could it be better?
The question wasn't whether to discount. It was how to structure the discount to drive greater profit.
What they did
They ran a multi-variant test comparing two offer formats:
- Group A: percentage discount (e.g. "15% off when you spend £50")
- Group B: fixed amount discount (e.g. "£10 off when you spend £50")
Half the qualifying visitors saw one. Half saw the other. Same traffic. Same products. Different messaging.
The hypothesis? Fixed discounts might yield similar conversion rates while sacrificing less margin: a fixed £10 caps the cost, whereas 15% off grows with the basket (on a £100 order, £10 off costs less than the £15 a percentage discount would). Even if behaviour didn't change much, the bottom line would improve.
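The arithmetic behind that hypothesis is easy to check. A quick sketch comparing what each structure costs the business at different basket sizes (the 15% and £10 figures are from the example offers above; the loop values are arbitrary):

```python
def discount_cost(basket: float, pct: float = 0.15, fixed: float = 10.0):
    """Return (percentage cost, fixed cost) for a given basket value."""
    return basket * pct, fixed


for basket in (60.0, 80.0, 100.0):
    pct_cost, fixed_cost = discount_cost(basket)
    cheaper = "fixed" if fixed_cost < pct_cost else "percentage"
    print(f"£{basket:.0f} basket: 15% costs £{pct_cost:.2f}, "
          f"£10 off costs £{fixed_cost:.2f} -> {cheaper} wins")

# Above roughly £66.67 (where 0.15 * basket = £10), the fixed
# discount becomes the cheaper of the two and stays capped.
```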
The results
Percentage discounts won by a slim margin on conversion rate and AOV. But the fixed discount was more profitable overall.
So what did they do? Switched to the fixed discount. Slightly lower conversion rate. Slightly lower AOV. But a higher profit on every order.
That's a trade-off most teams don't measure. Redemption shows an offer was used; revenue shows money came in; profit shows whether it was worth it.
The takeaway
Offer structure matters as much as offer size. Percentage discounts scale with the basket, eating more margin as orders grow; fixed discounts cap the cost. Choose the structure that aligns with your goals.
Testing this stuff is simple. The results aren't always what you expect. But once you know, you can make an informed decision about what you're optimising for: conversions, revenue, or profit.
(And if you're running Stretch & Save offers, this test is worth running. The difference compounds quickly.)
What connects these three examples
None of these brands gave bigger discounts; instead, they gave smarter ones.
- it Luggage only showed offers to low-intent visitors
- Radley matched the offer type to the behaviour
- US Polo Assn tested the offer structure to protect margin
The common thread? They stopped assuming every visitor needed the same thing. And they measured whether the offer actually changed behaviour, not just whether it got used.
Intelligent offers ask: Did this change behaviour, or just give away profit? Always measure impact, not just usage.
If you're running promotions today and you can't answer that question, you're probably doing the second thing more often than you think.