

Removing The Jokers From The Pack – Aug 2015

I recently came across Allan Fromen’s article ‘When Will Market Research Get Serious About Sample Quality’ and found myself wholeheartedly empathising. As a quantitative executive who specialises in online methodologies, I have come across my fair share of suspicious-looking data when we’ve used panel sample – the ‘jokers’ who could compromise the results we deliver to our clients. In fact, in a recent study conducted by Allto Consulting via a UK-based panel provider, a shocking 352 completed interviews out of a total base of 2,000 had to be removed and replaced due to various quality-control issues, which included the following (a rough code sketch of these checks follows the list):

  • ‘speedsters’ – those people who complete the survey far too quickly for their answers to have been given any real thought
  • ‘flatliners’ – those who repeatedly give the same answer
  • nonsense verbatims – random letters, or responses that don’t answer the question
  • contradictions in responses – respondent A says he has a son, but then later in the survey, the son magically disappears
  • offensive language – I’m all for passionate responses, but when the respondent has simply filled the space with swear words, they have to go!
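
For anyone who likes to see these checks written down, here is a minimal screening sketch in Python/pandas. The column names (‘duration_secs’, the grid columns, ‘verbatim’) and the thresholds are purely illustrative assumptions rather than the actual rules we apply – every survey needs its own cut-offs, and anything flagged would still be reviewed by a human before removal.

```python
# A minimal screening sketch (pandas assumed). Column names such as
# 'duration_secs', 'verbatim' and the grid columns are hypothetical
# stand-ins for whatever a survey export actually contains, and the
# thresholds are illustrative rather than recommended values.
import pandas as pd

def flag_jokers(df: pd.DataFrame, grid_cols: list) -> pd.DataFrame:
    flags = pd.DataFrame(index=df.index)

    # 'Speedsters': completes far quicker than is plausible, here taken
    # (arbitrarily) as under a third of the median interview length.
    flags["speedster"] = df["duration_secs"] < df["duration_secs"].median() / 3

    # 'Flatliners': the same answer given across an entire grid of ratings.
    flags["flatliner"] = df[grid_cols].nunique(axis=1) == 1

    # Nonsense verbatims: very short answers, or answers containing no
    # vowels at all (a crude proxy for random keyboard-mashing).
    verbatim = df["verbatim"].fillna("").str.strip().str.lower()
    flags["nonsense"] = (verbatim.str.len() < 3) | ~verbatim.str.contains(
        r"[aeiou]", regex=True
    )

    flags["any_flag"] = flags.any(axis=1)
    return flags
```

Contradiction checks (the disappearing son) are harder to generalise, as they depend on the routing of each individual questionnaire, so those tend to stay bespoke.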

Bearing this in mind, we really owe it to our respondents to provide them with engaging and stimulating surveys to make sure they don’t get bored. But when the average panellist sits on five or six panels and receives many invites per week, it’s difficult to make our surveys truly stand out.

Most issues come from real-life respondents, but one of the most worrying trends for me is the growing sophistication of automated programs designed to ‘cheat’ our carefully constructed questionnaires. Whilst checking the data on a different survey, we found 30 completes that seemed to draw on a standard set of around 8 verbatim responses – the phrasing, punctuation, spacing and spelling mistakes were identical, and couldn’t have come from unrelated ‘real-life’ respondents. More worryingly, these verbatims all referenced the topic of the questionnaire, so they wouldn’t necessarily be detectable to the untrained eye. When we approached the panel company to report this, they said the IDs in question came from 30 completely different IP addresses, and that they simply couldn’t have uncovered these fraudulent responses using their own initial checks. Once some retrospective digging was done, the perpetrators were found, but the panel provider wouldn’t have been aware if we hadn’t flagged it.
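
One way to surface this kind of duplication is to normalise the open-ended answers and count how many different respondent IDs share identical text. The sketch below is a rough illustration of that idea, not our actual production check; the column names (‘respondent_id’, ‘verbatim’) and the threshold of five are assumptions.

```python
# A rough sketch of a duplicated-verbatim check: normalise open-ended
# answers and look for identical text shared across several respondent
# IDs. Column names and the default threshold are assumptions made for
# illustration.
import pandas as pd

def find_shared_verbatims(df: pd.DataFrame, min_ids: int = 5) -> pd.DataFrame:
    norm = (
        df["verbatim"]
        .fillna("")
        .str.lower()
        .str.replace(r"\s+", " ", regex=True)
        .str.strip()
    )
    counts = (
        df.assign(norm=norm)
        .loc[norm != ""]  # ignore empty answers
        .groupby("norm")["respondent_id"]
        .nunique()
        .sort_values(ascending=False)
    )
    # Keep only verbatims reused by several supposedly unrelated respondents.
    return counts[counts >= min_ids].reset_index(name="n_respondents")
```

In the case above the tell was that even the spelling mistakes matched, so exact matching would have been enough; a light normalisation simply casts the net a little wider.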

Interestingly, when the same survey was relaunched over a year later, we spotted the same bank of 8 verbatims being called upon again. Having just completed the 4th wave of the research, we can see it’s still an issue, and despite changing panel provider we have to remain vigilant to this kind of activity.

So I think it falls to us – the researchers and analysts – to give detailed feedback to our panel partners to root out the people who are consistently providing us with unreliable data. Speaking to others in the industry, I’m not sure the process of checking data quality is treated as being as important as the analysis and reporting stages. If everyone contributes to this effort, we can help to drive sample quality to the top of the agenda. And if these fraudsters are proving elusive, we need to (at the very least) replace these interviews so our clients always get the best possible quality of data.

 

Laura Finnemore: Senior Research Executive