Blog

So let's just accept that non-probability sampling is the norm and that, like it or not, the bulk of sample for research comes from large global access panels.

A value add from this article is the attention it brings to the variations between these panels, which the authors nicely address. Yes, it is bias, but it is a known & consistent bias, which can be accounted for.

I've had several clients over the years notice this, and for this reason they tend to stick with the same panel for repeat studies even if that panel no longer offers the best price or coverage (or even close to it).

The possibility that changes between studies may be aberrations due to sample composition rather than true respondent opinion change scares the hell out of researchers.

So how do we deal with this?

Here is one suggestion: global panel providers ought to be able, in this day & age of powerful databases, to maintain a dynamic weighting matrix for the panel as a whole, creating weights based on how the panel's composition differs from the most current census data for the region surveyed.

It should be pretty simple to filter to the region sampled, count by age/gender/regional sub-group/income/etc., compare to the census numbers, & grind out a weight on the fly.
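To make that concrete, here is a minimal sketch of the kind of on-the-fly post-stratification weighting described above. All the cell labels and proportions are made up for illustration; a real panel system would pull the cell counts and census targets from its databases, and would likely use raking across many more variables.

```python
# Hypothetical sketch: weight each respondent by census share / panel share
# of their demographic cell. Data and proportions are illustrative only.
from collections import Counter

def poststrat_weights(panel_cells, census_props):
    """Return one weight per respondent: census proportion / panel proportion."""
    n = len(panel_cells)
    panel_props = {cell: count / n for cell, count in Counter(panel_cells).items()}
    return [census_props[cell] / panel_props[cell] for cell in panel_cells]

# Each respondent tagged with a (gender, age band) cell.
panel = [("F", "18-34"), ("F", "18-34"), ("F", "35-54"),
         ("M", "18-34"), ("M", "35-54"), ("M", "35-54")]

# Census proportions for the region surveyed (made-up numbers summing to 1).
census = {("F", "18-34"): 0.20, ("F", "35-54"): 0.30,
          ("M", "18-34"): 0.25, ("M", "35-54"): 0.25}

weights = poststrat_weights(panel, census)
# Over-represented cells get weights below 1, under-represented cells above 1,
# and the weights sum back to the sample size.
```

The same calculation re-run against each region's census table is all the "dynamic matrix" needs to be, at least for the simple demographic cells; the harder part is what follows below.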

I don't know how to adjust for recruitment methodologies or weight for differences in incentive practices (which may attract different kinds of people: gamblers may be drawn to prize draws, risk-averse respondents to fixed incentives, etc.), but I'm sure people more experienced in the fine art of sampling can come up with something.

There have been many research-on-research projects recently (like those conducted by the ARF) that may already have answers to questions like this.

The important thing is that, as an industry, we stop sticking our heads in the sand and stop spouting rhetoric about probability sampling when it no longer applies.

Time to smell the 21st century

Cheers