Cox Automotive, the world’s leader in automotive remarketing services, and parent company to such brands as Autotrader, Kelley Blue Book, Manheim, and Dealer.com, has more than 40,000 auto dealer clients across five continents.
Cox Automotive focuses on continually improving its products to create faster vehicle transactions and to give consumers a seamless online-to-offline experience. Testing plays a natural role here, as Cox Automotive's businesses have learned to scale experimentation to optimize the design of their digital experiences.
In this webinar, Frances Reyes, Seth Stuck, and Sabrina Ho will discuss how Cox Automotive is building a culture of experimentation and testing across their digital properties.
You’ll learn:
- The impetus for testing at Cox Automotive
- How they leverage and share information across their business units, creating shared goals despite different business priorities
- How they created a framework for data-driven decisions across the company
6. Revenue, share of wallet, funnel conversion, risk mitigation, ops efficiency

What We Do: a next-gen "Test and Learn" platform for enterprise-wide digital experience optimization
- Digital experimentation SaaS platform serving over 1B impressions daily
- Replaces digital guesswork with evidence-based optimization
- Applies the scientific method to "at-scale" business decision making
- 20X increase in yield
8. The anatomy of an experience
- Frontend UI (i.e., navigation, search location, and visual treatment): copy, images and colors, layout
- Backend business logic and data: search algorithms, recommendations, personalized content based on previous behavior, more personal headlines
9. Today's Digital Leaders Win by Using Experimentation At-Scale
- "Our success is a function of how many experiments we do per year, per month, per week, per day." (Jeff Bezos)
- "Our aim is to create the best product for our customers, and we do that through constant innovation and testing." (Gillan Tans, CEO)
- "Our company culture encourages experimentation and free flow of ideas." (Larry Page)
- "We use experimentation and testing to inform as much of the business as we possibly can." (Gregory Peters, CPO)
10. Your customer has fundamentally changed how they engage

| | 2008: Digital-Convenient Customer | 2018: Digital-Centric Customer |
|---|---|---|
| Time spent with digital media | 19 hr/week | 41 hr/week |
| Mobile commerce | $396m | $1.8t |
28. Workflow and RACI Overview

RACI roles:
- Responsible: who will do the work for this step?
- Accountable: who validates this work and pushes the test into the next stage of the workflow?
- Consulted: who will likely need to help the Responsible person(s)?
- Informed: who needs to be informed about the progress of this work?

| Test Stage | Description | Responsible | Accountable | Consulted | Informed |
|---|---|---|---|---|---|
| Requirements | Idea has all relevant test details to move forward and has been scored by the necessary stakeholders (identified in the "Consulted" column) | Whoever is ideating the test (can be anyone) | RTEs (to accept for scoring), Testing Analytics (to validate test details) | UX, Product, Engineering, Product Analytics | RTEs, Leadership |
| Creative | Assets for experiment variants are received and attached to the idea | UX | UX | Product Analytics, UX, Product, Engineering, Testing Analytics | RTEs |
| Development | Variants for the experiment have been built | UX/UI/Front-end Engineering, Engineering, and/or Architecture | UX | UX/UI/Front-end Engineering, Engineering, and/or Architecture, Testing Analytics | RTEs, Product Analytics |
| Setup and QA | Experiment has been configured in Optimizely, QA'ed, and accepted by the necessary stakeholders | UX, Engineering, Testing Analytics, Product Analytics | Testing Analytics | UX/UI/Front-end Engineering, Product, Engineering, and/or Architecture | Product, RTEs |
| Testing | Experiment has been deployed and is actively running | Product Analytics, Testing Analytics | Product Analytics | UX, Product, RTEs, Testing Analytics | Leadership |
| Analysis | Experiment has concluded and success is being determined (usually requires some off-site validation) | Product Analytics, Testing Analytics | Product Analytics | UX, Product, RTEs, Testing Analytics | Leadership, Stakeholders |
| Completed | Experiment learnings and next steps are distributed to the rest of the organization | Product, Product Analytics, Testing Analytics | RTEs | UX, Product, RTEs, Testing Analytics | Leadership |
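A stage-by-stage RACI like this lends itself to simple tooling, e.g. looking up who gates a test's promotion. A minimal sketch using the stage and role names from the table above; this is illustrative, not Cox Automotive's actual workflow system:

```python
# Illustrative encoding of a RACI testing workflow as data.
# Stage and role names follow the table above; the tooling itself is hypothetical.
from dataclasses import dataclass, field

@dataclass
class Stage:
    name: str
    responsible: list          # who does the work for this step
    accountable: list          # who validates it and promotes the test
    consulted: list = field(default_factory=list)
    informed: list = field(default_factory=list)

WORKFLOW = [
    Stage("Requirements", ["Ideator (anyone)"], ["RTEs", "Testing Analytics"],
          ["UX", "Product", "Engineering", "Product Analytics"], ["RTEs", "Leadership"]),
    Stage("Creative", ["UX"], ["UX"]),
    Stage("Development", ["Engineering"], ["UX"]),
    Stage("Setup and QA",
          ["UX", "Engineering", "Testing Analytics", "Product Analytics"],
          ["Testing Analytics"]),
    Stage("Testing", ["Product Analytics", "Testing Analytics"], ["Product Analytics"]),
    Stage("Analysis", ["Product Analytics", "Testing Analytics"], ["Product Analytics"]),
    Stage("Completed", ["Product", "Product Analytics", "Testing Analytics"], ["RTEs"]),
]

def accountable_for(stage_name: str) -> list:
    """Return who validates this stage and pushes the test forward."""
    for stage in WORKFLOW:
        if stage.name == stage_name:
            return stage.accountable
    raise KeyError(stage_name)
```

Encoding the workflow as data rather than prose makes it easy to surface the right approver at each hand-off, e.g. `accountable_for("Setup and QA")` returns `["Testing Analytics"]`.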
32. Date Range: 11/30/18 through 4/30/19

| | Velocity (tests/week) | Conclusive Rate | Win Rate |
|---|---|---|---|
| Cox Automotive | 2.2 | 25% | 18% |
| All Customers (Web + Full Stack) | 0.4 | 27% | 17% |
| All Web | 0.5 | 25% | 15% |
| Top 90% All Web | 1.7 | 31% | 24% |
| Top 90% Retail Web | 3.0 | 33% | 20% |
| Top 90% Media Web | 2.4 | 36% | 26% |
| Top 5 Marketplace Web | 2.9 | 34% | 21% |
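The three program metrics in the table above are straightforward to compute from an experiment log. A minimal sketch with hypothetical field names (`conclusive`, `won`) and illustrative numbers; this is not Optimizely's actual reporting API:

```python
# Illustrative computation of velocity, conclusive rate, and win rate
# from a flat log of experiments. Field names are hypothetical.
def program_metrics(experiments, weeks):
    """experiments: list of dicts with boolean 'conclusive' and 'won' flags."""
    n = len(experiments)
    conclusive = sum(1 for e in experiments if e["conclusive"])
    wins = sum(1 for e in experiments if e["won"])
    return {
        "velocity_per_week": n / weeks,     # tests launched per week
        "conclusive_rate": conclusive / n,  # share reaching a conclusive result
        "win_rate": wins / n,               # share where a challenger won
    }

# Example log: 20 tests over 10 weeks -> velocity of 2.0 tests/week.
log = (
    [{"conclusive": True, "won": True}] * 4      # clear winners
    + [{"conclusive": True, "won": False}] * 2   # conclusive losers
    + [{"conclusive": False, "won": False}] * 14 # inconclusive
)
m = program_metrics(log, weeks=10)
```

With these numbers, the sketch yields a velocity of 2.0 tests/week, a 30% conclusive rate, and a 20% win rate, i.e. roughly the shape of the Cox Automotive row above.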
39. Knowing When to Test: when the clarity of a pre/post just won't be enough

[Charts: page conversion over days 1–14, comparing a pre/post read (week 1 vs. week 2) against an experiment with concurrent control and test groups]
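The advantage of concurrent control and test groups is that both experience the same week-over-week drift, so the comparison reduces to a standard two-proportion z-test rather than an ambiguous pre/post read. A stdlib-only sketch with illustrative traffic numbers:

```python
# Two-proportion z-test for a concurrent control/test split.
# Traffic numbers below are illustrative, not from the Cox Automotive program.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) comparing conversion rates of groups a and b."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Concurrent 50/50 split: both groups see the same seasonality,
# so the difference is attributable to the treatment.
z, p = two_proportion_z(conv_a=480, n_a=10_000, conv_b=552, n_b=10_000)
significant = p < 0.05
```

Here a 4.80% control rate against a 5.52% test rate on 10,000 visitors each is significant at the 5% level; the same absolute movement in a pre/post would be confounded with whatever else changed between week 1 and week 2.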
42. Knowing When to Test: Types of Tests

Discovery and Light-Weight Prototyping
- Used to very quickly answer whether an idea has legs, or to help size up the opportunity
- Common types of tests include: fake door, A/B, usability, focus group

Optimization
- Used to compare multiple design variants to determine which has optimal performance towards a given goal
- Common types of tests include: A/B or A/B/n tests

De-risking or Validation
- Used to determine how functionality that has already been developed will perform, generally as a means of ensuring that the new product or design performs as expected
- Also helpful for retrospectives (looking back over a given time period to assess what has had the most impact)
- Common types of tests include: A/B or multivariate tests

Blue Sky
- Used to answer heavy business questions; often, the thing being tested would never be deployed
- Intended to help shape up hypotheticals and limit speculation in strategy
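For the Optimization category, an A/B/n test compares several variants against a shared control on one goal metric. A minimal sketch with hypothetical conversion counts; in practice the observed leader should still be confirmed with a significance test before shipping:

```python
# Illustrative A/B/n comparison: score each variant's conversion rate with a
# rough 95% margin of error (normal approximation). Counts are hypothetical.
from math import sqrt

def best_variant(results):
    """results: {variant_name: (conversions, visitors)} -> (name, (rate, margin))."""
    scored = {}
    for name, (conv, n) in results.items():
        rate = conv / n
        # ~95% margin of error on the observed rate
        margin = 1.96 * sqrt(rate * (1 - rate) / n)
        scored[name] = (rate, margin)
    winner = max(scored, key=lambda k: scored[k][0])
    return winner, scored[winner]

winner, (rate, margin) = best_variant({
    "control":   (500, 10_000),
    "variant_a": (540, 10_000),
    "variant_b": (610, 10_000),
})
```

In this example `variant_b` leads at a 6.1% conversion rate; the margin of error indicates whether its lead over the control is larger than the noise in the estimate.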
45. Example of a Product Validation Test: vehicles priced below the Fair Market Range saw a 7% increase in overall value
47. De-risking to Blue Sky
- A/B Test #1 (HP redesign, Control vs. Challenger A): tested creative based on user testing; live site results differed from user testing
- A/B Test #2 (HP redesign, Control vs. Challenger A): redesigned the hero section; clicks to the main KPI increased +7%
49. Example of a Blue Sky Test: tested removing sponsored search links, sponsorships, the sponsored hero image and search links, and advertisements; removing ads from the site created a 15.1% lift in both value events and VDPs
50. Knowing When to Test: when the (un)certainty of a pre/post just won't be enough

[Chart: testing approaches plotted by uncertainty vs. certainty and by level of effort: "just do" launch with no pre/post analysis, "just do" launch with pre/post analysis, optimization A/B tests, de-risking and validation testing, discovery and light-weight prototyping, and Blue Sky testing with an iterative learning plan]
52. Creating an Enterprise Testing Program: High-Level Learnings

What HAS worked:
- Leveraging larger, more experienced testing programs and personnel
- Funding
- Workflows and RACI
- Pro-bono support
- Comparing the depth and breadth of test results and analysis to what's possible in a pre/post
- Transparency of test plans and results
- Quarterly summit for enterprise-wide insights and partnership
- Partnership with Optimizely

What HAS NOT worked:
- Federating access to individual teams who don't embrace basic best practices
- Over-reliance on Analytics to support all aspects of the test
- Jumping straight to cross-brand testing
- Skipping testing in favor of a "just do" mentality