Case study

How a developer tool increased ad revenue with zero ad ops changes

A popular web-based developer tool running Prebid.js with multiple demand partners connected BidTune to see whether automated optimization could outperform its existing, manually tuned configuration. Over 12 days, AutoResearch ran 20+ controlled experiments and found actionable improvements across bidder management, timeout tuning, and price floor optimization.

20+ experiments run
6 winners deployed
7 bad ideas caught
+31% best single win
12 days

Metric: net revenue per session (gross revenue minus serving cost). All experiments ran as 50/50 traffic splits with Bayesian evaluation and auto-terminated once confidence exceeded 95%. Lifts shown are marginal per-experiment gains vs. control.
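For intuition, here's a minimal sketch of that evaluation loop, assuming a normal approximation to each arm's mean revenue per session and a Monte Carlo estimate of the posterior win probability. The arm statistics are hypothetical, and this is not BidTune's actual implementation.

```js
// Box-Muller: one draw from a standard normal.
function gaussian() {
  const u = 1 - Math.random(), v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

// arm: { n: sessions, mean: avg net revenue/session, sd: per-session std dev }
function probTreatmentWins(control, treatment, draws = 100000) {
  let wins = 0;
  for (let i = 0; i < draws; i++) {
    // Sample each arm's mean from its approximate posterior N(mean, sd/sqrt(n)).
    const c = control.mean + (control.sd / Math.sqrt(control.n)) * gaussian();
    const t = treatment.mean + (treatment.sd / Math.sqrt(treatment.n)) * gaussian();
    if (t > c) wins++;
  }
  return wins / draws;
}

// Hypothetical numbers, purely for illustration:
const p = probTreatmentWins(
  { n: 52000, mean: 0.0041, sd: 0.012 },
  { n: 51800, mean: 0.0046, sd: 0.013 }
);
if (p > 0.95) console.log('promote treatment');
else if (p < 0.05) console.log('terminate: treatment is losing');
else console.log('keep collecting data, p =', p.toFixed(3));
```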

The situation

The site was running a standard Prebid.js setup with 8+ demand partners through a managed wrapper. Configuration hadn't been actively optimized in months. Timeouts were set to defaults across all bidders. Floor prices were minimal. Nobody was testing whether the current setup was actually leaving money on the table.
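For readers unfamiliar with what "defaults everywhere" looks like, here is a sketch of that kind of set-and-forget Prebid.js setup. Bidder names, params, and sizes are illustrative placeholders, not the publisher's actual partners; it assumes Prebid.js is loaded on the page.

```js
var pbjs = pbjs || {};
pbjs.que = pbjs.que || [];

pbjs.que.push(function () {
  pbjs.addAdUnits([{
    code: 'header-banner',
    mediaTypes: { banner: { sizes: [[728, 90]] } },
    bids: [
      { bidder: 'appnexus', params: { placementId: 123456 } },
      { bidder: 'rubicon', params: { accountId: '1001', siteId: '2002', zoneId: '3003' } },
      // ...six or more other partners, all on default settings
    ],
  }]);
  // One global timeout, no floors module, nothing tuned per bidder or geo:
  pbjs.setConfig({ bidderTimeout: 2500 });
  pbjs.requestBids({
    bidsBackHandler: function () { /* set targeting, call the ad server */ },
  });
});
```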

This is typical. Most publishers configure Prebid once and move on. But bidder performance shifts constantly—demand partners adjust algorithms, new inventory demand comes online, traffic patterns change by season and geography. A setup that was optimal months ago is almost certainly suboptimal today.

What happened

After connecting BidTune (one script URL change), the system spent 48 hours in observation mode—collecting bid-level data across every auction, bidder, geography, and ad unit to build a baseline. AutoResearch then began proposing and running experiments automatically.

Over the next 12 days, AutoResearch ran 20+ experiments across non-overlapping traffic segments. Multiple experiments ran simultaneously—a floor price test in one geography alongside a bidder timeout test in another—maximizing the rate of learning without statistical interference.
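One common way to get non-overlapping splits is deterministic hashing of a session identifier. The sketch below is illustrative, not BidTune's actual assignment scheme; the hash function and bucket layout are hypothetical.

```js
// Map a session id to a stable value in [0, 1).
function hashToUnit(sessionId) {
  let h = 0;
  for (const ch of sessionId) h = (Math.imul(h, 31) + ch.charCodeAt(0)) >>> 0;
  return (h % 10000) / 10000;
}

function assignExperiment(sessionId, countryCode) {
  const u = hashToUnit(sessionId);
  // Segments are disjoint, so concurrent tests never share traffic:
  if (countryCode === 'US') {
    return u < 0.5 ? 'floor-test:control' : 'floor-test:treatment';
  }
  return u < 0.5 ? 'timeout-test:control' : 'timeout-test:treatment';
}
```

Because the hash is deterministic, a session lands in the same arm on every pageview, which keeps the 50/50 split stable across the whole experiment.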

What worked

Winner: Bidder exclusion in premium markets (+31.2%)

Two demand partners were consistently winning auctions at below-market CPMs in high-value geographies, suppressing competition. Removing them forced the remaining bidders to compete harder.
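In plain Prebid.js terms, a change like this amounts to filtering each ad unit's bids array by geography before the auction. The bidder names and geo list below are placeholders, and adUnits and countryCode are assumed to exist; the geo-targeting refinement later in this list uses the same pattern, just inverted (include a bidder only where it wins).

```js
// Hypothetical names; assumes `adUnits` and `countryCode` are defined elsewhere.
const EXCLUDED_IN_PREMIUM = ['bidderA', 'bidderB'];
const PREMIUM_GEOS = ['US', 'GB', 'CA'];

function bidsFor(adUnit, countryCode) {
  if (!PREMIUM_GEOS.includes(countryCode)) return adUnit.bids;
  // In premium markets, drop the two partners that suppress competition.
  return adUnit.bids.filter(b => !EXCLUDED_IN_PREMIUM.includes(b.bidder));
}

pbjs.que.push(function () {
  pbjs.addAdUnits(adUnits.map(u => ({ ...u, bids: bidsFor(u, countryCode) })));
  pbjs.requestBids({ /* ... */ });
});
```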
Winner: Country-specific floor pricing (+128.7%)

Sessions from certain geographies had extremely low average CPMs. Setting minimum price floors eliminated sub-penny bids and lifted revenue significantly in those regions.
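Prebid's Floors module doesn't key floors on visitor geography out of the box, so one plausible sketch is a wrapper that picks a floor table by detected country before applying it. The countries and values here are hypothetical, and countryCode is assumed to be resolved elsewhere.

```js
// Hypothetical floor tables per geography (USD CPM).
const FLOORS_BY_GEO = {
  default: 0.05,
  IN: 0.20, // e.g. lift floors where sub-penny bids dominated
  BR: 0.15,
};

const floor = FLOORS_BY_GEO[countryCode] ?? FLOORS_BY_GEO.default;
pbjs.setConfig({
  floors: {
    data: {
      currency: 'USD',
      schema: { fields: ['mediaType'] },
      values: { banner: floor },
    },
  },
});
```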
Winner: Premium ad unit floor optimization (+13.0%)

The highest-revenue ad unit had no floor price. Adding one filtered out junk bids and increased average winning CPMs without reducing fill rate.
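Per-ad-unit floors, by contrast, are directly expressible with the Floors module's adUnitCode schema field. Unit names and values below are illustrative, not the site's real configuration.

```js
pbjs.setConfig({
  floors: {
    data: {
      currency: 'USD',
      schema: { fields: ['adUnitCode'] },
      values: {
        'header-banner': 0.50, // the premium unit that previously had no floor
        'footer-banner': 0.10,
        'sidebar': 0.05,
      },
    },
  },
});
```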
Winner: Per-bidder timeout tuning (+12.0%)

One high-value bidder was timing out frequently at the default timeout. Extending that bidder's timeout captured high-CPM bids that were previously missed, without affecting overall page latency.
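Prebid core exposes a single global bidderTimeout rather than true per-bidder timeouts, so a wrapper has to implement the per-bidder half itself. The diagnostic side, though, is straightforward with Prebid's bidTimeout event; this sketch counts timeouts per bidder and then widens the global timeout as a rough stand-in for the per-bidder change.

```js
// Tally which bidders are timing out (assumes Prebid.js is loaded).
const timeoutCounts = {};
pbjs.onEvent('bidTimeout', function (timedOutBids) {
  for (const bid of timedOutBids) {
    timeoutCounts[bid.bidder] = (timeoutCounts[bid.bidder] || 0) + 1;
  }
});

// If one high-CPM bidder dominates timeoutCounts, extending its budget is
// the experiment; approximated here with the global setting:
pbjs.setConfig({ bidderTimeout: 3000 }); // up from the 2500ms default above
```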
Winner: Secondary ad unit floor pricing (+12.0%)

Applied the same floor strategy that worked on the header unit to the footer unit, which generates the most total revenue.
Winner: Bidder geo-targeting refinement (+11.9%)

A high-CPM bidder performed well in premium markets but poorly elsewhere. Restricting it to the markets where it wins improved overall auction efficiency.

What didn't work

Rejected: Aggressive sidebar floor pricing (-41.8%)

Raising the sidebar floor to match the header floor was too aggressive; it priced out most bidders on a lower-value unit.
Rejected: Emerging market floor increase (-27.9%)

Raising floors in high-traffic emerging markets filtered out too many bids. The traffic volume didn't support the higher price point.
Rejected: Timeout reduction on premium units (-24.8%)

Reducing the header unit's timeout to improve page speed lost high-value late bids; the revenue impact outweighed the latency savings.
Rejected: Global timeout reduction (-19.1%)

Lowering the global timeout from 2500ms to 2000ms seemed safe, but it missed bids from the highest-CPM bidder, which needed the extra time.
The experiments that didn't ship saved money too. The rejected experiments would have collectively destroyed significant revenue if deployed without testing. AutoResearch's Bayesian auto-termination caught every one of them within hours to days—before they could do real damage. Most optimization approaches can't show you this because they don't run controlled experiments.

Timeline

Day 0: Script connected. One URL change. No other modifications to the site.
Days 1-2: Observation mode. Collected millions of bid events across all partners. Built a performance baseline.
Day 3: First experiments. AutoResearch identified high-impact hypotheses and began running experiments across non-overlapping segments.
Days 3-6: Rapid iteration. Multiple experiments running simultaneously. First winner detected and promoted within 6 hours.
Days 6-10: Refinement. Follow-up experiments tested extensions of winning strategies. Bad ideas auto-terminated.
Days 10-12: Expansion. Winning strategies expanded to additional geographies. More experiments queued and evaluated.
Ongoing: Continuous optimization. AutoResearch keeps running. As bidder performance shifts, new experiments adapt the configuration automatically.
The key insight: no single optimization was transformative for the whole site on its own. The value came from the system, continuously testing, measuring, and compounding improvements. Six winners, each contributing marginal gains of roughly 12% to 129% within its own segment, ran simultaneously across non-overlapping slices of traffic. That's the difference between setting a Prebid config once and letting it drift, versus having an engine that adapts as the market moves.
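To make that concrete: because each per-experiment lift applies only within its own traffic segment, the site-wide effect is a traffic-weighted blend, not a sum. The lifts below echo the case study's numbers; the traffic shares are entirely hypothetical and exist only to show the arithmetic.

```js
// Per-segment lifts (from the case study) with made-up traffic shares.
const wins = [
  { lift: 0.312, share: 0.15 }, // bidder exclusion, premium geos
  { lift: 1.287, share: 0.05 }, // country floors, low-CPM geos
  { lift: 0.130, share: 0.20 }, // premium unit floor
  { lift: 0.120, share: 0.25 }, // per-bidder timeout
  { lift: 0.120, share: 0.20 }, // secondary unit floor
  { lift: 0.119, share: 0.15 }, // geo-targeting refinement
];

// Traffic-weighted site-wide lift if each win touches only its segment:
const blended = wins.reduce((acc, w) => acc + w.lift * w.share, 0);
console.log(`~${(blended * 100).toFixed(1)}% blended lift`); // ~20.9% with these shares
```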

Get started — connect your site in 5 minutes