WFoA and the Advertising Research Foundation convened a small group of forward-thinking researchers on December 12, 2012 in New York City to identify the high-impact, high-priority areas in ad effectiveness measurement. The goal of the session was to identify what is lacking and determine what it will take to create game-changing breakthroughs in the field. Each participant presented their view on the following four questions:
1. What are the most important gaps in advertising effectiveness measurement and methodology? Why?
2. What efforts are underway to address these gaps?
3. What else could/should we do to fill these gaps?
4. What should be our litmus test of success?
The output of the session will be an assessment of the current state of affairs and a game plan/call to action for moving to a future we would all consider worth achieving.
Rajeev Batra,* Professor of Marketing, University of Michigan
Jane Clarke, Managing Director, CIMM
Gian Fulgoni, Chairman, comScore
Don Gloeckler, EVP, Chief Research Officer, The ARF
Catharine Findiesen Hays, Executive Director, WFoA
Denise Larson, Principal and Co-Founder, NewMediaMetrics
Paul Lavrakas,* Research Psychologist and Research Methodologist Consultant
Elissa Moses, EVP, Neuroscience and Emotion, Ipsos
Graham Mudd, Head of Measurement Market Development, Facebook
Peter Orban, EVP, Online, Social and Mobile Media and Marketing, The ARF
Steve Rappaport, Knowledge Solutions Director, The ARF
Eric Schwartz, Doctoral Candidate in Marketing, The Wharton School
Richard Silberstein,* Professor, Brain Sciences Institute, Swinburne University of Technology, Australia
Horst Stipp, EVP, Global Business Strategy, The ARF
Gerard Tellis,* Professor of Marketing, Marshall School of Business, USC
Jim Thompson,* Senior Advisor, ARF Neuro 2.0
Jerry Wind, Lauder Professor, Professor of Marketing, The Wharton School
Leslie Wood, Chief Research Officer, Nielsen Catalina Solutions
Chuck Young, CEO, Ameritest
Some challenges highlighted by participants included:
Now our situation is much more complicated. We also have lots of intermediate metrics: likes, tweets, retweets, and tons of brand engagement measures.
We have learned that traditional legacy methods built on small panels don't cut it in the fragmented world we are in.
Audiences are being bought without any attention being paid to where the ad impression is delivered.
If measurement doesn’t keep up with the way media has innovated, we will lose the core benefit of digital.
Some key takeaways highlighted by participants included:
There is a big toolbox for researchers. However, that is not the point. We need to start thinking about layering tools.
Online and mobile: I think all media will be digital and will be addressable. One of the things that digital makes possible is micro targeting.
As an industry, we need to catch up to where advertising is. We are still operating in a broadcast model. We need to scale up from panel-sized or medium data to big data. Doing big data correctly requires real measurement science. How you clean and use data makes a big difference.
The aim should not be "test and learn." The aim should be "earn while learning."
Brand! Brands are why we advertise. Let’s talk about brands. Brand is a memory. It is a change in the neural system. We have a simplistic view of how complicated memory really is.
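The "earn while learning" idea above is usually operationalized with adaptive allocation rather than a fixed 50/50 test. A minimal sketch, using hypothetical click-through rates and Thompson sampling (one common bandit method, assumed here for illustration; the session did not prescribe a specific algorithm):

```python
import random

# Hypothetical scenario: two ad creatives with unknown click-through rates.
# Thompson sampling shifts impressions toward the better creative as evidence
# accumulates, so the campaign earns while it learns instead of spending a
# fixed test budget on an even split.

TRUE_CTR = [0.04, 0.06]   # unknown to the algorithm; used only to simulate clicks
successes = [1, 1]        # Beta(1, 1) uniform priors for each creative
failures = [1, 1]

random.seed(42)
for _ in range(10_000):   # 10,000 simulated impressions
    # Draw a plausible CTR for each creative from its current posterior
    draws = [random.betavariate(successes[i], failures[i]) for i in range(2)]
    arm = draws.index(max(draws))            # serve the creative with the higher draw
    if random.random() < TRUE_CTR[arm]:      # simulate whether the impression clicked
        successes[arm] += 1
    else:
        failures[arm] += 1

shown = [successes[i] + failures[i] - 2 for i in range(2)]
print(shown)  # the stronger creative (index 1) should receive most impressions
```

The design choice is the trade-off the quote points at: an even-split test maximizes learning speed, while adaptive allocation sacrifices a little statistical cleanliness to capture revenue during the test.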
Participant Responses: What are the most important gaps in advertising effectiveness measurements and methodology?
Rajeev Batra – Professor of Marketing, University of Michigan
- Metrics lack clear link to ultimate business goal and to each other
- Current situation is much more complicated than the purchase funnel, and we lack a clear sense of it
- Focus on measuring sales from one campaign, instead of over a few campaigns
Jane Clarke – Managing Director, CIMM
- Scalable cross platform measurements
- We need better planning tools
- Labs create the shiny new toy effect
- No single source
- Time lag of results in MMM
- Ad identification issues
Gian Fulgoni – Chairman, comScore
- Common and scalable multi-platform measurement metrics
- Continued focus on click-through rates when better measurement solutions exist
- Programmatic planning, buying and delivery and the need for more transparency around the bid
- Copy testing is decreasing
- Scanning data and reading it too quickly
Denise Larson – Principal and Co-Founder, NewMediaMetrics
- Definition of advertising effectiveness is inconsistent across the industry
- Without a good standard, measurement becomes difficult
- Creativity and quality of the creative are extremely difficult to measure
- Advertisers have different goals of what they want their ads to accomplish
Elissa Moses – EVP, Neuroscience and Emotion, Ipsos
- Despite 95% of brain processing occurring below consciousness, 99% of ad testing is above conscious awareness
- People can’t always tell the truth
- Lack of data sharing
Graham Mudd – Head of Measurement Market Development, Facebook
- Industry is still operating in a broadcast model and needs to catch up to where advertising is
- We use pseudo-experimental design as opposed to real experimental design
- Methodologies to match big data with other sources
- Data is proprietary
- If all we have are time-series models, MMM is a problem
- Short-term focus
- Budgets to experiment are small or non-existent
Peter Orban – EVP, Online, Social & Mobile Media Marketing, The ARF
- Clear definition of ‘Context’
- Understanding of how context impacts effectiveness
Eric Schwartz – Doctoral Candidate in Marketing, The Wharton School
- Gaps in A/B and Multivariate testing
- View of testing as a cost that impedes what advertisers actually want to do
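For context on the A/B testing gaps named above, the basic building block is a comparison of conversion rates between a control and a test cell. A minimal sketch with hypothetical numbers (the cell sizes and conversion counts below are invented for illustration, not from the session):

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for H0: the two cells convert at the same rate.

    Returns the z statistic and a two-sided p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, via erf
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: 200 conversions in a 10,000-person control cell vs.
# 260 conversions in a 10,000-person exposed cell.
z, p = two_proportion_z(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
print(round(z, 2), round(p, 4))
```

Even this simple test presumes a clean randomized split; the "pseudo-experimental design" gap noted earlier is precisely that many ad studies lack such a split, so the arithmetic above cannot be applied at face value.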
Richard Silberstein – Professor, Brain Sciences Institute, Swinburne University of Technology, Australia
- Methods of recall do not tap into the unconscious
- Little attention is paid to incidental advertising
Horst Stipp – EVP, Global Business Strategy, The ARF
- The enormous amount of data that exists is proprietary
- Advertisers not focused on accurate measurement
- Engagement not defined in terms of outcome
- The ARF model from the 1960s doesn’t work anymore because the purchase process is more complicated. However, there may not be a purpose to coming up with a new model because models should be simple and not complicated.
Jerry Wind – Lauder Professor, Professor of Marketing, The Wharton School
- Improper use and presentation of data
- Focus on advertising while ignoring other critical touch points like package design, store design, etc.
- How to look at and compare combinations of advertisements
- Need for adaptive/inductive experimentation
Leslie Wood – Chief Research Officer, Nielsen Catalina Solutions
- Noise. No standard for how to properly clean out noise without losing real data
- Difference in calendars and media clocks
- TV GRPs are different than Radio GRPs
- Common definitions don’t exist
- Time between data availability and data reporting (data is not reported for 2 weeks; 92% of their data is identical after cleaning). Can better controls be put in place to increase this accuracy?
- Industry forum to test new methods
- Modeler bias
- Ad effects change
Chuck Young – CEO, Ameritest
- We have a simplistic view of how complicated memory really is
- Poor understanding of the difference as well as the link between experienced and remembered self
- Recall testing not addressing individualized and episodic memories
- Advertising’s impact on pricing and its contribution to ROI in models
Call to Action (Don Gloeckler)
- Pay attention to what we already know
- Collaborate to learn together