Sample Quality and DIY Are Not Mutually Exclusive

Sample is often taken for granted, and the pre-project discussion is limited to: what’s the incidence? How many do we need? How many can we get? How fast? Beyond that, most clients don’t care. But that’s ok; that’s what we’re here for. “Sample quality” is not just a buzzword here at Insight Rabbit. Rather, quality is a foundational component of our approach to satisfying client needs.

This commitment to quality is embedded in the comprehensive sample procurement and management approach we apply to every research project. By systematically applying multiple techniques and technologies, we provide respondents who are 100% real, unique, engaged, demographically representative of their geography, and GDPR-compliant. This is verifiable through automated sample transparency reports and assured through independent audits.

This post explains in more detail the techniques and technologies Insight Rabbit uses to deliver quality online sample.

Sample Procurement

At Insight Rabbit we take sample seriously. Very seriously. We don’t push our own sample or panel, or that of any particular panel company, on our clients in the interest of maximizing our own profits; we always use the sample most appropriate for our clients’ needs and objectives.

We have preferred partnerships with the top-tier panel companies in the industry and the broadest reach possible: more than 100 million respondents. And we have gone to great lengths to vet each panel we partner with. To determine which panels qualify as top tier, we conduct annual parallel research projects among all the panels we use (and those we don’t) to understand how each performs in terms of response rates, response profiles across a variety of question types, and other quality assessments. This lets us assign a score to each panel and then cluster the panels into tiers based on those scores, as sketched below.
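
As a rough illustration of what this scoring can look like, here is a minimal sketch in Python; the metrics, weights, and tier cut points are hypothetical, not our production model:

```python
# Illustrative sketch of panel tier scoring (hypothetical metrics and weights).
from dataclasses import dataclass

@dataclass
class PanelMetrics:
    response_rate: float        # 0..1, from the annual parallel test
    profile_consistency: float  # 0..1, agreement with benchmark response profiles
    quality_pass_rate: float    # 0..1, share of completes passing quality checks

WEIGHTS = {"response_rate": 0.3, "profile_consistency": 0.4, "quality_pass_rate": 0.3}

def panel_score(m: PanelMetrics) -> float:
    """Weighted composite score on a 0-100 scale."""
    raw = (WEIGHTS["response_rate"] * m.response_rate
           + WEIGHTS["profile_consistency"] * m.profile_consistency
           + WEIGHTS["quality_pass_rate"] * m.quality_pass_rate)
    return round(100 * raw, 1)

def tier(score: float) -> str:
    """Cluster scores into tiers (cut points are illustrative)."""
    if score >= 85:
        return "Tier 1"
    if score >= 70:
        return "Tier 2"
    return "Tier 3"

print(tier(panel_score(PanelMetrics(0.42, 0.91, 0.88))))  # -> "Tier 2"
```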

We make every effort to hold sampling consistent within a project and across projects for a client, but when low incidence or high demand requires multiple panel sources, we rely on our tier scoring to blend sample consistently. This ability to blend sample from our panel partners to increase capacity and sample-pool randomness, especially for low-incidence groups, is a huge benefit for our clients. By consistently drawing sample from these partners within the tiered clusters we have identified, we can confidently manage the blend while maintaining flexibility, scale, and quality; a simple sketch of the idea follows.
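
To make blending concrete, here is a minimal sketch of allocating a complete target across panels in fixed proportions. The panel names and blend percentages are hypothetical:

```python
# Illustrative sketch: hold blend proportions constant when drawing from multiple panels.
def allocate_completes(target: int, blend: dict[str, float]) -> dict[str, int]:
    """Split a target number of completes across panels by fixed blend percentages."""
    alloc = {panel: int(target * share) for panel, share in blend.items()}
    # Give any rounding remainder to the largest-share panel.
    remainder = target - sum(alloc.values())
    alloc[max(blend, key=blend.get)] += remainder
    return alloc

# Hypothetical blend held consistent across waves of a tracker.
blend = {"Panel A (Tier 1)": 0.5, "Panel B (Tier 1)": 0.3, "Panel C (Tier 2)": 0.2}
print(allocate_completes(1000, blend))
# {'Panel A (Tier 1)': 500, 'Panel B (Tier 1)': 300, 'Panel C (Tier 2)': 200}
```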

Real, Unique and Engaged

To validate that a panelist is, in fact, a real person, we leverage proprietary technology that observes our panelists’ online behavior and flags activity patterns only a genuine human would produce.

We use a two-pronged approach to uniqueness that pairs digital fingerprint technology with a dual cookie match. The digital fingerprint technology employed is consistent with GDPR privacy and data protection laws. When a respondent enters our survey platform, we identify the computer and collect a large number of data points. The information gathered is put through deterministic algorithms to create a unique digital fingerprint of each computer. The process is invisible to the user and does not interfere with the user experience. This fingerprint ensures that the same person cannot participate in a study more than once, even across multiple panels.
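
The fingerprinting technology itself is proprietary, but the deduplication logic can be sketched. The device attributes and hashing scheme below are hypothetical stand-ins for the data points actually collected:

```python
# Illustrative sketch of deterministic fingerprint deduplication.
# The attributes and hashing scheme are hypothetical, not our production system.
import hashlib

def fingerprint(attributes: dict[str, str]) -> str:
    """Deterministically hash device attributes into a fingerprint."""
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

seen: set[str] = set()  # fingerprints already admitted to this study

def admit(attributes: dict[str, str]) -> bool:
    """Admit a respondent only if this device has not entered the study before."""
    fp = fingerprint(attributes)
    if fp in seen:
        return False  # duplicate entrant, possibly arriving via a different panel
    seen.add(fp)
    return True

device = {"user_agent": "Mozilla/5.0", "screen": "1920x1080", "timezone": "America/New_York"}
print(admit(device))  # True on first entry
print(admit(device))  # False on a repeat attempt
```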

At Insight Rabbit we believe that engagement needs to be defined at the study level. For example, what may be an acceptable level of engagement for an omnibus study may not be acceptable for a copy test. For this reason, the patterns of behavior that classify a respondent as engaged or not are customized to the research technique and survey design.

For copy testing, respondents who straight-line, speed, or leave open-ended questions largely incomplete are removed from that test. If a respondent exhibits such behavior across multiple studies, they are permanently removed from the panel pool. Bot screeners, red herrings, reverse-order questions, and IP address verification are also employed to make sure the respondent is not using fraudulent survey-taking aids. Time metrics (e.g., time in survey, media playtime, time on page) and mouse movement are used to verify quality exposure to stimuli and deliberate question answering. Finally, questions on worthwhileness and perceived length are asked with each test to gauge respondent satisfaction and catch systemic issues with survey design. The sketch below shows how such rules might be parameterized per study.
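
Here is a minimal sketch of study-level engagement rules; the thresholds are illustrative, with a copy test using stricter values than an omnibus:

```python
# Illustrative sketch of study-level engagement rules (thresholds are made up).
from dataclasses import dataclass

@dataclass
class EngagementRules:
    min_seconds: int           # flag speeders below this total survey time
    max_straightline_run: int  # flag grids answered with one long identical run
    min_open_end_chars: int    # flag near-empty open-ended responses

COPY_TEST = EngagementRules(min_seconds=240, max_straightline_run=8, min_open_end_chars=15)
OMNIBUS = EngagementRules(min_seconds=120, max_straightline_run=15, min_open_end_chars=5)

def is_engaged(rules, seconds, grid_answers, open_end):
    """Apply the study's rules to one respondent's behavior."""
    if seconds < rules.min_seconds:
        return False  # speeder
    # Longest run of identical consecutive grid answers (straight-lining).
    run = best = 1
    for prev, cur in zip(grid_answers, grid_answers[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    if best > rules.max_straightline_run:
        return False
    return len(open_end.strip()) >= rules.min_open_end_chars

print(is_engaged(COPY_TEST, 300, [3, 3, 3, 4, 2, 3], "The ad felt rushed but memorable."))  # True
```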

To guarantee valid results, each project is balanced to ensure that the sample is representative of the population under investigation. Insight Rabbit has deployed geography-specific standards for using online sample. To ensure that the consumer population is broadly accessible, online testing on its own is restricted to markets with sufficient internet penetration. All tests are conducted in the native language and idiom of the geography. Insight Rabbit also tightly controls the demographic composition of the sample at the session level through quotas.
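
Session-level quota control can be sketched as a simple counter check at survey entry; the cells and targets below are hypothetical:

```python
# Illustrative sketch of session-level demographic quota control.
quotas = {  # hypothetical cell targets for a gen-pop study
    ("female", "18-34"): 125, ("female", "35-54"): 125,
    ("male", "18-34"): 125, ("male", "35-54"): 125,
}
fills: dict[tuple[str, str], int] = {cell: 0 for cell in quotas}

def accept(gender: str, age_band: str) -> bool:
    """Admit a respondent only while their demographic cell is still open."""
    cell = (gender, age_band)
    if cell not in quotas or fills[cell] >= quotas[cell]:
        return False  # cell full or out of scope: terminate with a quota-full message
    fills[cell] += 1
    return True

print(accept("female", "18-34"))  # True until the 125th complete in that cell
```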

Sample Transparency and Auditing

Finally, our internal Research and Sample Quality Hygiene Team has numerous automated and manual audits in place to monitor every project that runs through our shop. These audits cover sample source, blending percentages, demographic balancing, response rates, and response levels, as well as quality-assurance checks on test designs and statistical reliability from automatic retests. This keeps the testing system free from extraneous variance so that all reported metrics are as reliable as the laws of random sampling allow.
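
To give a flavor of the automated checks, here is a minimal sketch of one such audit: comparing a project’s realized panel blend against its plan. The tolerance and panel names are illustrative:

```python
# Illustrative sketch of one automated audit: realized blend vs. planned blend.
def audit_blend(completes_by_panel: dict[str, int],
                planned: dict[str, float], tolerance: float = 0.03) -> list[str]:
    """Return audit flags wherever the realized share drifts beyond tolerance."""
    total = sum(completes_by_panel.values())
    flags = []
    for panel, plan_share in planned.items():
        actual = completes_by_panel.get(panel, 0) / total
        if abs(actual - plan_share) > tolerance:
            flags.append(f"{panel}: planned {plan_share:.0%}, actual {actual:.0%}")
    return flags

print(audit_blend({"Panel A": 520, "Panel B": 290, "Panel C": 190},
                  {"Panel A": 0.5, "Panel B": 0.3, "Panel C": 0.2}))
# [] -> all panels within tolerance, no flags raised
```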

And one last item regarding sample: how fast is too fast?

A number of DIY tools (and other research providers) promote results in as little as 6 hours. To be honest, this isn’t difficult, especially in the automated DIY world, and we could promote the same here at Insight Rabbit. We don’t, because in most research situations it is irresponsible and bad practice. The fact is that conducting online research this fast introduces a tremendous amount of sample bias into your project. The simplest example: if I launch a survey at 9am on the East Coast with no throttle on speed and no control on geography, that study will complete in a few hours with zero West Coast representation. Fast is great, but let’s be responsible.
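
One simple guardrail is a pacing cap: hold hourly completes low enough that fieldwork spans a full day, and therefore every US time zone, with geography quotas like those sketched earlier closing the loop. A minimal sketch with a hypothetical cap:

```python
# Illustrative sketch: cap hourly completes so a fast-filling study
# stays in field long enough for the West Coast to wake up.
from datetime import datetime, timezone

MAX_PER_HOUR = 60  # hypothetical pacing cap for a 1,000-complete study
hourly_fills: dict[str, int] = {}

def accept(now=None) -> bool:
    """Admit a complete only while the current hour is under the pacing cap."""
    now = now or datetime.now(timezone.utc)
    bucket = now.strftime("%Y-%m-%d %H")  # one counter per clock hour
    if hourly_fills.get(bucket, 0) >= MAX_PER_HOUR:
        return False  # defer; the respondent can be re-invited later
    hourly_fills[bucket] = hourly_fills.get(bucket, 0) + 1
    return True
```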
