A/B Testing for Digital Signage

Most digital signage content is created based on assumptions and opinions. A/B testing replaces guesswork with data, allowing you to scientifically determine what content drives the best results. This guide covers methodology, metrics, and practical implementation for signage optimization.

Why A/B Test Digital Signage?

The Impact of Optimization

| Metric | Unoptimized | After A/B Testing |
|---|---|---|
| Viewer attention rate | 25% | 40-60% |
| Call-to-action response | 2% | 5-8% |
| Promotional lift | 5% | 15-25% |
| Message recall | 15% | 35-50% |

Common Assumptions That Are Wrong

| Assumption | Reality (from testing) |
|---|---|
| "Bigger text is always better" | Depends on viewing context |
| "Video outperforms static" | Not always; complexity matters |
| "Red calls attention" | Can signal danger and reduce action |
| "More info is better" | Often decreases comprehension |
| "Our brand colors work best" | Sometimes neutral performs better |

A/B Testing Fundamentals

What is A/B Testing?

┌─────────────────────────────────────────────────────────────────────────┐
│ A/B TEST STRUCTURE │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ CONTROL (A) VARIANT (B) │
│ ┌─────────────────────┐ ┌─────────────────────┐ │
│ │ │ │ │ │
│ │ Current Design │ │ New Design │ │
│ │ │ │ (One Change) │ │
│ │ │ │ │ │
│ └─────────────────────┘ └─────────────────────┘ │
│ │ │ │
│ ▼ ▼ │
│ ┌─────────────────────┐ ┌─────────────────────┐ │
│ │ 50% of displays │ │ 50% of displays │ │
│ │ or time │ │ or time │ │
│ └─────────────────────┘ └─────────────────────┘ │
│ │ │ │
│ └──────────────┬───────────────┘ │
│ ▼ │
│ ┌─────────────────┐ │
│ │ COMPARE RESULTS │ │
│ │ Statistical │ │
│ │ Significance │ │
│ └─────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────┐ │
│ │ IMPLEMENT │ │
│ │ WINNER │ │
│ └─────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘

Key Testing Principles

  1. Test one variable at a time - Otherwise you can't attribute results
  2. Adequate sample size - Enough exposure for statistical validity
  3. Run simultaneously - Eliminates time-based variables
  4. Define metrics beforehand - Know what success looks like
  5. Achieve statistical significance - Don't stop early

What to Test

High-Impact Test Variables

| Variable | Impact Potential | Test Difficulty |
|---|---|---|
| Headlines/Copy | Very High | Easy |
| Call-to-Action | Very High | Easy |
| Hero Image | High | Easy |
| Price Display | High | Easy |
| Layout | High | Medium |
| Color Scheme | Medium-High | Easy |
| Animation | Medium | Medium |
| Content Duration | Medium | Easy |
| Font Size | Medium | Easy |
| Background | Low-Medium | Easy |

Headline/Copy Tests

Test variations in:

| Element | Variant A | Variant B |
|---|---|---|
| Benefit focus | "Save 50%" | "Half Price" |
| Urgency | "Today Only" | "Limited Time" |
| Question vs. statement | "Hungry?" | "Satisfy Your Craving" |
| Length | "Get 50% Off All Items" | "50% Off" |
| Specificity | "Save Money" | "Save $10 Today" |
| Tone | "Amazing Deal" | "Smart Choice" |

Call-to-Action Tests

| Element | Variant A | Variant B |
|---|---|---|
| Action verb | "Buy Now" | "Shop Now" |
| Urgency | "Order Today" | "Don't Miss Out" |
| Benefit | "Get Yours" | "Start Saving" |
| Question | "Ready to Save?" | "Save Now" |
| Position | Top of screen | Bottom of screen |
| Size | Standard | 20% larger |
| Color | Brand color | Contrasting color |

Image Tests

| Element | Test Options |
|---|---|
| Subject | Product alone vs. product in use |
| People | With people vs. without |
| Angle | Close-up vs. wide shot |
| Mood | Bright/energetic vs. calm/sophisticated |
| Quantity | Single item vs. collection |
| Background | Plain vs. lifestyle context |

Layout Tests

┌─────────────────────────────────────────────────────────────────┐
│ LAYOUT TEST EXAMPLE │
├─────────────────────────────────────────────────────────────────┤
│ │
│ VARIANT A: Image Left VARIANT B: Image Right │
│ ┌─────────┬─────────────┐ ┌─────────────┬─────────┐ │
│ │ │ Headline │ │ Headline │ │ │
│ │ IMAGE │ Body text │ │ Body text │ IMAGE │ │
│ │ │ CTA Button │ │ CTA Button │ │ │
│ └─────────┴─────────────┘ └─────────────┴─────────┘ │
│ │
│ VARIANT C: Image Top VARIANT D: Full Bleed │
│ ┌─────────────────────┐ ┌─────────────────────┐ │
│ │ IMAGE │ │█████████████████████│ │
│ ├─────────────────────┤ │██ Headline ██████████│ │
│ │ Headline │ │██ Body text █████████│ │
│ │ Body text CTA │ │██ CTA Button ████████│ │
│ └─────────────────────┘ └─────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘

Metrics for Digital Signage A/B Tests

Primary Metrics

| Metric | How to Measure | Best For |
|---|---|---|
| Attention Rate | Camera/sensor: viewers ÷ passers | Engagement optimization |
| Dwell Time | Camera/sensor: seconds looking | Content interest |
| QR Code Scans | Scan count per impression | Direct response |
| Promotional Lift | POS: promo sales vs. baseline | Sales content |
| Conversion Rate | Actions ÷ impressions | Call-to-action |

Secondary Metrics

| Metric | How to Measure | Use Case |
|---|---|---|
| Traffic Flow | Sensors: direction toward display | Wayfinding |
| Interaction Rate | Touch/gesture count | Interactive content |
| Session Duration | Touch: time in session | Kiosk engagement |
| Survey Response | Post-exposure surveys | Message recall |
| Social Mentions | Hashtag/mention tracking | Brand campaigns |

Calculating Key Metrics

Attention Rate:

Attention Rate = (Viewers Looking at Screen ÷ Total Passers) × 100

Example: 250 viewers ÷ 1000 passers = 25% attention rate

Promotional Lift:

Lift = ((Test Period Sales - Baseline Sales) ÷ Baseline Sales) × 100

Example: ($15,000 - $10,000) ÷ $10,000 = 50% lift
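As a quick sketch, the two formulas above can be expressed in Python (function names are illustrative, not from any specific CMS):

```python
def attention_rate(viewers: int, passers: int) -> float:
    """Share of passers who looked at the screen, as a percentage."""
    return viewers / passers * 100


def promotional_lift(test_sales: float, baseline_sales: float) -> float:
    """Percentage change in sales versus the pre-test baseline."""
    return (test_sales - baseline_sales) / baseline_sales * 100


print(attention_rate(250, 1_000))          # 25.0 (the worked example above)
print(promotional_lift(15_000, 10_000))    # 50.0
```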

Statistical Significance:

For 95% confidence (p < 0.05):
- Need sufficient sample size
- Difference must exceed margin of error
- Use chi-square or t-test
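For two attention-rate variants, the chi-square test reduces to a 2×2 table of viewers vs. non-viewers. A minimal standard-library sketch (the counts are hypothetical; in practice a stats library such as SciPy does this in one call):

```python
import math


def chi_square_2x2(viewers_a, passers_a, viewers_b, passers_b):
    """Pearson chi-square test on a 2x2 table (viewers vs. non-viewers).

    Returns (chi2, p_value). With 1 degree of freedom,
    P(chi2 > x) = 1 - erf(sqrt(x / 2)).
    """
    obs = [
        [viewers_a, passers_a - viewers_a],
        [viewers_b, passers_b - viewers_b],
    ]
    row_totals = [passers_a, passers_b]
    col_totals = [obs[0][0] + obs[1][0], obs[0][1] + obs[1][1]]
    total = passers_a + passers_b

    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / total
            chi2 += (obs[i][j] - expected) ** 2 / expected

    p_value = 1 - math.erf(math.sqrt(chi2 / 2))
    return chi2, p_value


# Hypothetical counts: 2,500 of 10,000 vs. 3,200 of 10,000 viewers
chi2, p = chi_square_2x2(2_500, 10_000, 3_200, 10_000)
print(f"chi2={chi2:.1f}, p={p:.3g}")  # well past the p < 0.05 threshold
```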

Test Design & Methodology

Sample Size Calculator

| Baseline Rate | Minimum Lift to Detect | Sample Size Needed (per variant) |
|---|---|---|
| 5% | 20% relative (5%→6%) | 15,000 |
| 5% | 50% relative (5%→7.5%) | 2,500 |
| 10% | 20% relative (10%→12%) | 4,000 |
| 10% | 50% relative (10%→15%) | 700 |
| 25% | 20% relative (25%→30%) | 1,100 |
| 25% | 50% relative (25%→37.5%) | 200 |

Rule of thumb: Aim for at least 1,000 observations per variant.
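The underlying math is the standard normal-approximation formula for comparing two proportions. A sketch follows; the exact figures depend on the power and confidence assumed, so results may differ from the table above:

```python
import math


def sample_size_per_variant(p1: float, p2: float,
                            z_alpha: float = 1.96,   # 95% confidence
                            z_beta: float = 0.84) -> int:
    """Observations per variant needed to detect a shift from p1 to p2
    (normal approximation, default 95% confidence / 80% power)."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)


# Smaller lifts need much larger samples:
print(sample_size_per_variant(0.05, 0.06))    # detect 5% -> 6%
print(sample_size_per_variant(0.05, 0.075))   # detect 5% -> 7.5%
```

Note how the required sample grows roughly with the inverse square of the lift: halving the detectable difference quadruples the sample you need.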

Test Duration Guidelines

| Factor | Consideration |
|---|---|
| Traffic volume | Low traffic = longer test |
| Day-of-week effects | Run full weeks to capture patterns |
| Seasonality | Avoid holidays unless testing for them |
| Promotional cycles | Test outside major promotions |
| Minimum duration | At least 1-2 weeks |
| Maximum duration | 4-6 weeks before fatigue |

Splitting Strategy

Option 1: Time-Based Split

Mon-Wed: Variant A
Thu-Sat: Variant B
(Rotate next week)
  • ✅ Simple to implement
  • ❌ Day-of-week bias possible

Option 2: Location Split

Stores 1, 3, 5: Variant A
Stores 2, 4, 6: Variant B
  • ✅ Simultaneous testing
  • ❌ Location differences may confound

Option 3: Display Split

Screen 1: Variant A
Screen 2: Variant B
(Same location)
  • ✅ Controls for location
  • ❌ Needs multiple screens

Option 4: Random Rotation

Each play: Random A or B
(50/50 probability)
  • ✅ Best for statistical validity
  • ❌ Requires CMS support

Running a Test: Step by Step

Phase 1: Planning (Week 1)

┌─────────────────────────────────────────────────────────────────┐
│ TEST PLANNING CHECKLIST │
├─────────────────────────────────────────────────────────────────┤
│ │
│ □ Define hypothesis │
│ "Changing X will improve Y by Z%" │
│ │
│ □ Select single variable to test │
│ │
│ □ Define primary success metric │
│ │
│ □ Calculate required sample size │
│ │
│ □ Determine test duration │
│ │
│ □ Create both variants (A and B) │
│ │
│ □ Set up measurement/tracking │
│ │
│ □ Document current baseline performance │
│ │
│ □ Get stakeholder buy-in │
│ │
└─────────────────────────────────────────────────────────────────┘

Phase 2: Execution (Weeks 2-3)

  1. Launch simultaneously: Both variants start at same time
  2. Monitor for errors: Check displays, tracking, data collection
  3. Don't peek: Avoid making decisions on early data
  4. Document issues: Record any anomalies
  5. Maintain consistency: Don't change other variables

Phase 3: Analysis (Week 4)

Analysis Template:

TEST: [Name]
HYPOTHESIS: [Statement]
DATES: [Start] - [End]
DISPLAYS: [List]

RESULTS:
                 VARIANT A    VARIANT B
Impressions:     10,000       10,000
Viewers:         2,500        3,200
Attention Rate:  25%          32%
Difference: +7 percentage points (+28% relative)

STATISTICAL TEST:
Chi-square value: [X]
p-value: [Y]
Confidence: [Z]%

WINNER: Variant B
RECOMMENDATION: Implement Variant B across all locations
NEXT TEST: [Idea]

Phase 4: Implementation

  1. Roll out winner to all displays
  2. Document learnings for future reference
  3. Monitor post-implementation for consistency
  4. Plan next test based on learnings

Test Examples & Results

Example 1: Headline Test

Hypothesis: Urgency-focused headline will increase attention rate

| | Variant A (Control) | Variant B (Urgency) |
|---|---|---|
| Headline | "Summer Collection" | "Last Days of Summer Sale" |
| Impressions | 5,000 | 5,000 |
| Attention Rate | 22% | 31% |
| Result | +41% improvement | |

Example 2: Image Test

Hypothesis: People in images increase engagement

| | Variant A (Product Only) | Variant B (With People) |
|---|---|---|
| Image | Shoes on white | Model wearing shoes |
| Dwell Time | 2.1 seconds | 3.8 seconds |
| Result | +81% improvement | |

Example 3: CTA Color Test

Hypothesis: Contrasting CTA button improves scans

| | Variant A (Brand Blue) | Variant B (Orange) |
|---|---|---|
| CTA Color | #0066CC | #FF6600 |
| QR Scans | 45 | 78 |
| Scan Rate | 0.9% | 1.6% |
| Result | +78% improvement | |

Example 4: Animation Test

Hypothesis: Subtle animation increases attention

| | Variant A (Static) | Variant B (Animated) |
|---|---|---|
| Treatment | Static image | Gentle zoom effect |
| Attention Rate | 28% | 34% |
| Dwell Time | 2.5 sec | 2.2 sec |
| Result | More attention, less dwell | |

Learning: Animation grabbed attention but didn't hold it. Static may be better for conveying information.


Common Mistakes to Avoid

| Mistake | Problem | Solution |
|---|---|---|
| Testing too many variables | Can't attribute results | One change at a time |
| Stopping early | False positives | Wait for significance |
| Ignoring external factors | Confounded results | Control for variables |
| No baseline | Can't measure improvement | Document current state |
| Small sample size | Unreliable results | Calculate needs upfront |
| Confirmation bias | See what you want | Pre-define metrics |
| Not documenting | Lost learnings | Keep detailed records |
| Testing trivial changes | Wasted effort | Focus on high-impact |

Building a Testing Culture

Testing Roadmap

┌─────────────────────────────────────────────────────────────────┐
│ ANNUAL TESTING ROADMAP │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Q1: Foundation Tests │
│ ├── Headline formulation tests │
│ ├── Primary CTA optimization │
│ └── Core layout testing │
│ │
│ Q2: Content Type Tests │
│ ├── Static vs. video │
│ ├── Animation effectiveness │
│ └── Information density │
│ │
│ Q3: Optimization Tests │
│ ├── Winning element combinations │
│ ├── Timing and dayparting │
│ └── Seasonal content │
│ │
│ Q4: Advanced Tests │
│ ├── Personalization approaches │
│ ├── Interactive elements │
│ └── Multi-screen coordination │
│ │
│ Target: 12-24 tests per year │
│ Goal: 10-15% annual performance improvement │
│ │
└─────────────────────────────────────────────────────────────────┘

Learning Library

Document all tests:

  • Test name and date
  • Hypothesis
  • Variables tested
  • Results and winner
  • Statistical confidence
  • Key learnings
  • Recommendations

Sharing Results

| Audience | Focus |
|---|---|
| Executives | ROI impact, strategic learnings |
| Marketing | Creative insights, best practices |
| Operations | Implementation requirements |
| Design team | Visual guidelines that work |

This guide is maintained by MediaSignage, pioneers of digital signage technology since 2008.