We now formulate ten concrete suggestions that, given the currently available
evidence, appear to be good advice for survey designers or users who want to deal
effectively with the problem of minority bias in their own research. None of these
recommendations will be entirely new to readers of the international literature on
survey methodology (see e.g., Feskens, Hox, Lensvelt-Mulders & Schmeets, 2006;
Groves, 2006; or Peytchev, Baxter & Carley-Baxter, 2009), but none of them is trivial
to raise in the Swiss context: a fully fledged implementation of any of these proposals
would mean departing from some currently established routines. Each is based on a
collective interpretation of the correlational findings reported by Lipps et al. and
Laganà et al., in the context of the wider theoretical and empirical literature. These
empirically informed initial recommendations carry a twofold invitation to survey
practitioners and researchers: first, to creatively try out promising practices and,
second, to assess their impact, ideally by way of randomised survey experiments.
Outcomes from such evaluation studies could then contribute to building the wider
and more systematic knowledge-base that is still required to solidify and refine the
recommendations, in an iterative fashion.
Recommendation 1: Samples should be based on reliable population registers
whenever available and stratified by the main cleavages that are likely to organise the
distribution of relevant indicators in the target population.
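As an illustrative sketch only (it does not appear in the original text, and all field names and figures are invented), proportional allocation over one stratifying cleavage drawn from a register could look like this:

```python
import random
from collections import defaultdict

def stratified_sample(register, stratum_key, total_n, seed=0):
    """Draw a proportionally allocated stratified sample from a
    population register (here, a list of dicts). Illustrative only."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for person in register:
        strata[stratum_key(person)].append(person)
    sample = []
    for members in strata.values():
        # allocate the stratum's sample size proportional to its share
        n_h = round(total_n * len(members) / len(register))
        sample.extend(rng.sample(members, min(n_h, len(members))))
    return sample

# toy register with nationality as the stratifying cleavage
register = [{"id": i, "nat": "CH" if i % 4 else "FR"} for i in range(400)]
sample = stratified_sample(register, lambda p: p["nat"], 40)
```

In practice the stratifying variables should of course be the cleavages that actually organise the indicators of interest, not whatever happens to be available.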
Recommendation 2: It is important to invest in the right survey languages and
to be clear about the part of the population that will be lost as a consequence of the survey's actual language policy.
Recommendation 3: As the language and mode of first contact will always be
critical, these need to be planned particularly carefully.
Recommendation 4: Assumptions about daily routines among respondents
(which will affect the chances of establishing contact at all, as well as the quality of actual contact) should not be taken for granted or transposed mechanically from one survey to the next. Instead, they should always be critically assessed for specific target populations, drawing whenever possible on relevant knowledge, such as that provided by community members serving as key informants.
Recommendation 5: Overall survey experience of interviewers should not be
taken as a guarantee of optimal implementation of contact procedures when it
comes to minority members. Specific socio-cultural competences of interviewers
should be assessed and possibly prioritised when composing a field team; linguistic
skills or knowledge about relevant cultural and social norms required to interact
appropriately with members from the main target communities can be critical assets.
Recommendation 6: The impact of interviewer reward schemes should be
critically reflected on when designing a survey. Whenever rewards are based on
the mere number of completed interviews, rather than being proportional to
actual interviewer effort, interviewers are likely to be encouraged to concentrate
their energy on potentially “easy” respondents and discouraged from developing
effective strategies for recruiting rare or “difficult” respondents. Rewards based on actual
working hours, for example, should be considered as a potentially fairer and
methodologically more efficient alternative.
Recommendation 7: Individual and collective learning processes regarding
appropriate communication codes and strategies should be actively promoted. This
implies that contact and interview debriefings should be conceived as a systematic
tool to allow interviewers to learn from their own experiences and researchers to get
relevant real-time feedback on the implementation of fieldwork procedures.
Recommendation 8: Coverage and non-response bias should always be
assessed and monitored using all available register data and paradata, to inform data
producers about the efficiency of the design strategies, and to inform data users about actual selection processes that need to be considered when interpreting findings.
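Such monitoring can begin very simply, for instance by comparing response rates across register subgroups as fieldwork progresses. The following sketch (not from the original text; field names and figures are invented) illustrates the idea:

```python
from collections import Counter

def response_rates(cases, group_key):
    """Response rate per subgroup, computed from case-level fieldwork
    records. `cases` is a list of dicts with a boolean 'responded'
    flag; all field names are illustrative."""
    issued = Counter(group_key(c) for c in cases)
    completed = Counter(group_key(c) for c in cases if c["responded"])
    return {g: completed[g] / issued[g] for g in issued}

# toy fieldwork records: a clear minority/majority gap
cases = (
    [{"grp": "majority", "responded": i < 60} for i in range(100)]
    + [{"grp": "minority", "responded": i < 30} for i in range(100)]
)
rates = response_rates(cases, lambda c: c["grp"])
```

A gap of this kind, tracked in real time, tells data producers where contact strategies are failing and tells data users which selection processes to keep in mind.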
Recommendation 9: The main benchmark against which the quality of the
survey design should ultimately be assessed is the set of specific biases that matter
for the research goals, rather than arbitrarily defined overall response rates.
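The point can be made concrete with the standard deterministic decomposition of nonresponse bias in a sample mean, bias = (1 − response rate) × (mean among respondents − mean among nonrespondents): two surveys with identical response rates can carry very different biases. A minimal sketch (not from the original text; the figures are invented):

```python
def nonresponse_bias(resp_mean, nonresp_mean, response_rate):
    """Deterministic approximation of nonresponse bias in a mean:
    (1 - R) * (mean of respondents - mean of nonrespondents).
    All inputs here are illustrative."""
    return (1 - response_rate) * (resp_mean - nonresp_mean)

# Same 60% response rate, very different bias:
survey_a = nonresponse_bias(0.50, 0.48, 0.60)  # nonrespondents resemble respondents
survey_b = nonresponse_bias(0.50, 0.20, 0.60)  # a distinctive minority is missing
```

The response rate alone is therefore an unreliable proxy for the quality criterion that actually matters.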
Recommendation 10: Possible post-stratification weights should be developed
empirically by way of testing, instead of assuming homogeneity within the categories
that are used to attribute different weights to individual respondents.
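The basic mechanics of post-stratification are simple, which is precisely why the homogeneity assumption behind each weighting cell deserves empirical testing rather than faith. As a minimal sketch (not from the original text; cell names and figures are invented):

```python
def post_stratification_weights(sample_counts, population_shares):
    """Weight for each cell = population share / sample share.
    The cell definitions themselves should be validated empirically
    rather than assumed homogeneous; all figures are invented."""
    n = sum(sample_counts.values())
    return {
        cell: population_shares[cell] / (sample_counts[cell] / n)
        for cell in sample_counts
    }

weights = post_stratification_weights(
    {"national": 800, "foreign": 200},    # respondents per cell
    {"national": 0.75, "foreign": 0.25},  # known register shares
)
```

Here the under-represented cell is up-weighted; whether its respondents actually resemble its nonrespondents is exactly the question that empirical testing must answer.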
We are aware that, in the field, limited resources rather than lack of knowledge or
good will constitute the critical obstacles to implementing methodological
recommendations. In practice, the question will typically come down to how to define
priorities rationally and how to balance different requirements, which cannot all be
met simultaneously. We might therefore complement the ten recommendations with
five much more general suggestions, which aim to help survey practitioners find their
own way when negotiating difficult compromises, in order to come as close as
feasible to high methodological ideals:
Be critical: The fact that most of the established measures usually used to
improve data quality failed to effectively handle minority bias should encourage
critical reflection on such procedures, their concrete objectives, and their capacity to
actually achieve those objectives.
Be specific: There are no universally valid criteria for making decisions about
sampling procedures, survey modes and languages, field team composition, or
contact strategies. Any good design strategy needs to be target-population-centred. In particular, survey researchers should be clear about which minority groups have to be
represented accurately in their sample in order to address the main research goals,
and then define the priorities of the survey design accordingly.
Be consistent: The design strategy needs to be in line with the research questions,
and the interpretation of findings should refer to the strategy used. For example, if an
accurate representation of vulnerable minority groups has not been defined as a
priority in the survey design process, then the resulting data should not be used to
make statistical inferences regarding levels of vulnerability in the overall population
(as this will inevitably lead to statistics that embellish social reality rather than
describe it accurately).
Be holistic: Specific measures to handle minority bias should be considered within
an integrated perspective rather than in isolation. This is important because
interaction effects of separate survey design parameters can be as important as their
simple effects. For example, costly implementations of survey interviews in additional
languages might prove inefficient as long as the mode and language of the first
contact are not optimal.
Be creative: The fact that no perfect solution exists and that no satisfactory set of
solutions to minority bias has been implemented so far compels us to try out new
methodological avenues, to empirically assess their impact, and to openly debate
failures and successes on the road to truly representative surveys.