• CASP Subquestions
Note. The CASP questions are adapted from “10 questions to help you make sense of qualitative research,” by the Critical Appraisal Skills Programme, 2013, retrieved from http://media.wix.com/ugd/dded87_29c5b002d99342f788c6ac670e49f274.pdf. Its license can be found at http://creativecommons.org/licenses/by-nc-sa/3.0/
Once the articles had been assessed independently by two authors, all three authors discussed and reconciled the assessments. No articles were excluded based on CASP results; rather, the results were used to depict the general adequacy (or rigor) of all 55 articles meeting the inclusion criteria for our systematic review. In addition, the CASP was included to enhance our examination of the relationship between the methods and the usefulness of the findings documented in each of the QD articles included in this review.
To further assess each of the 55 articles, data were extracted on: (a) research objectives, (b) design justification, (c) theoretical or philosophical framework, (d) sampling and sample size, (e) data collection and data sources, (f) data analysis, and (g) presentation of findings (see Table 2 ). We discussed extracted data and identified common and unique features in the articles included in our systematic review. Findings are described in detail below and in Table 3 .
Elements for Data Extraction
Elements | Data Extraction |
---|---|
Research objectives | • Verbs used in objectives or aims • Focuses of study |
Design justification | • Whether the article cited references for qualitative description • Whether the article offered a rationale for choosing qualitative description • References cited • Rationale reported |
Theoretical or philosophical frameworks | • Whether the article had theoretical or philosophical frameworks for the study • Theoretical or philosophical frameworks reported • How the frameworks were used in data collection and analysis |
Sampling and sample sizes | • Sampling strategies (e.g., purposeful sampling, maximum variation) • Sample size |
Data collection and sources | • Data collection techniques (e.g., individual or focus-group interviews, interview guide, surveys, field notes) |
Data analysis | • Data analysis techniques (e.g., qualitative content analysis, thematic analysis, constant comparison) • Whether data saturation was achieved |
Presentation of findings | • Statement of findings • Consistency with research objectives |
Data Extraction and Analysis Results
Authors Country | Research Objectives | Design justification | Theoretical/ philosophical frameworks | Sampling/ sample size | Data collection and data sources | Data analysis | Findings |
---|---|---|---|---|---|---|---|
• USA | • Explore • Responses to communication strategies | • (-) Reference • (-) Rationale | Not reported (NR) | • Purposive sampling/ maximum variation • 32 family members | • Interviews • Observations • Review of daily flow sheet • Demographics | • Inductive and deductive qualitative content analysis • (-) Data saturation | Five themes about family members’ perceptions of nursing communication approaches |
• Sweden | • Describe • Experiences of using guidelines in daily practice | • (-) Reference • (+) Rationale • Part of a research program | NR | • Unspecified • 8 care providers | • Semistructured, individual interviews • Interview guide | • Qualitative content analysis • (-) Data saturation | One theme and seven subthemes about care providers’ experiences of using guidelines in daily practice |
• USA | • Examine • Culturally specific views of processes and causes of midlife weight gain | • (-) Reference • (-) Rationale | Health belief model and Kleinman’s explanatory model | • Unspecified • 19 adults | • Semistructured, individual interview | • Conventional content analysis • (-) Data saturation | Three main categories (from the model) and eight subthemes about causes of weight gain in midlife |
• Iran | • Explore • Factors initiating responsibility among medical trainees | • (-) Reference • (+) Rationale | NR | • Convenience, snowball, and maximum variation sampling • 15 trainees and other professionals | • Semistructured, individual interview • Interview guide | • Conventional content analysis • Constant comparison • (+) Data saturation | Two themes and individual and non-individual-based factors per theme |
• Iran | • Explore • Factors related to job satisfaction and dissatisfaction | • (-) Reference • (-) Rationale | NR | • Convenience sampling • 85 nurses | • Semistructured focus group interviews • Interview guide | • Thematic analysis • (+) Data saturation | Three main themes and associated factors regarding job satisfaction and dissatisfaction |
• Norway | • Describe • Perceptions on simulation-based team training | • (-) Reference • (-) Rationale | NR | • Strategic sampling • 18 registered nurses | • Semistructured individual interviews | • Inductive content analysis • (-) Data saturation | One main category, three categories, and six subcategories regarding nurses’ perceptions on simulation-based team training |
• USA | • Determine • Barriers and supports for attending college and nursing school | • (-) Reference • (-) Rationale | NR | • Unspecified • 45 students | • Focus-group interviews • Using Photovoice and SHOWeD | • Constant comparison • (-) Data saturation | Five themes about facilitators and barriers |
• USA | • Explore • Reasons for choosing home birth and birth experiences | • (-) Reference • (-) Rationale | NR | • Purposeful sampling • 20 women | • Semistructured focus-group interviews • Interview guide • Field notes | • Qualitative content analysis • (+) Data saturation | Five common themes and concepts about reasons for choosing home birth based on their birth experiences |
• New Zealand | • Explore • Normal fetal activity related to hunger and satiation | • (+) Reference • (+) Rationale • Denzin & Lincoln (2011) | NR | • Purposive sampling • 19 pregnant women | • Semistructured individual interviews • Open-ended questions | • Inductive qualitative content analysis • Descriptive statistical analysis • (+) Data saturation | Four patterns regarding fetal activities in relation to meal anticipation, maternal hunger, maternal meal consumption, and maternal satiety |
• Italy | • Explore, describe, and compare • Perceptions of nursing caring | • (+) Reference • (-) Rationale | NR | • Purposive sampling • 20 nurses and 20 patients | • Semistructured individual interviews • Interview guide • Field notes during interviews | • Unspecified various analytic strategies including constant comparison • (-) Data saturation | Nursing caring from both patients’ and nurses’ perspectives – a summary of data in visible caring and invisible caring |
• Hong Kong | • Address • How to reduce coronary heart disease risks | • (+) Reference • (+) Rationale • Secondary analysis | NR | • Convenience and snowball sampling • 105 patients | • Focus-group interviews • Interview guide | • Content analysis • (+) Data saturation | Four categories about patients’ abilities to reduce coronary heart disease |
• Taiwan | • Explore • Reasons for young–old people not killing themselves | • (-) Reference • (-) Rationale | NR | • Convenience sampling • 31 older adults | • Semistructured individual interviews • Interview guide • Observation with memos/reflective journal | • Content analysis • (+) Data saturation | Six themes regarding reasons for not committing suicide |
• USA | • Explore • Neonatal intensive care unit experiences | • (+) Reference • (+) Rationale | NR | • Purposive sampling and convenience sample • 15 mothers | • Semistructured individual interviews • Interview guide | • Qualitative content analysis • (+) Data saturation | Four themes about participants’ experiences of the neonatal intensive care unit |
• Colombia | • Investigate • Barriers/facilitators to implementing evidence-based nursing | • (+) Reference • (-) Rationale | Ottawa model for research use: knowledge translation framework | • Convenience sampling • 13 nursing professionals | • Semistructured individual interviews • Interview guide | • Inductive qualitative content analysis • Constant comparison • (-) Data saturation | Four main barriers and potential facilitators to evidence-based nursing |
• Australia | • Explore • Perceptions and utilization of diaries | • (+) Reference • (-) Rationale | NR | • Unspecified • 19 patients and families | • Responses to open-ended questions on survey | • Unspecified analysis strategy • (-) Data saturation | Five themes regarding perceptions on use of diaries and descriptive statistics using frequencies of utilization |
• USA | • Explore • Knowledge, attitudes, and beliefs about sexual consent | • (-) Reference • (-) Rationale • Part of a larger mixed-method study | Theory of planned behavior | • Purposive sampling • snowball sampling • 26 women | • Semistructured focus-group interviews • Interview guide | • Content analysis • (+) Data saturation | Three main categories and subthemes regarding sexual consent |
• Sweden | • Describe • Experiences of knowledge development in wound management | • (+) Reference • (+) Rationale: weak | NR | • Purposive sampling • 16 district nurses | • Individual interviews • Interview guide | • Qualitative content analysis • (-) Data saturation | Three categories and eleven subcategories about knowledge development experiences in wound management |
• USA | • Describe • Parental-pain journey, beliefs about pain, and attitudes/behaviors related to children’s responses | • (+) Reference • (+) Rationale • Part of a larger mixed methods study | NR | • Purposive sampling • 9 parents | • Individual interviews • One open-ended question | • Qualitative content analysis • (+) Data saturation | Two main themes, categories, and subcategories about parents’ experiences of observing children’s pain |
• USA | • Describe • Challenges and barriers in providing culturally competent care | • (+) Reference • (+) Rationale • Secondary analysis | NR | • Stratified sampling • 253 nurses | • Written responses to 2 open-ended questions on survey | • Thematic analysis • (-) Data saturation | Three themes regarding challenges/barriers |
• Denmark | • Describe • Experiences of childbirth | • (-) Reference • (-) Rationale • A substudy | NR | • Purposive sampling with maximum variation • Partners of 10 women | • Semistructured, individual interviews • Interview guide | • Thematic analysis • (+) Data saturation | Three themes and four subthemes about partners’ experiences of women’s childbirth |
• Australia | • Explore • Perceptions about medical nutrition and hydration at the end of life | • (+) Reference • (+) Rationale | NR | • Purposeful sampling • 10 nurses | • Focus-group interviews | • “analyzed thematically” • (-) Data saturation | One main theme and four subthemes regarding nurses’ perceptions on EOL-related medical nutrition and hydration |
• USA | • Describe • Reasons for leaving a home visiting program early | • (-) Reference • (-) Rationale | NR | • Convenience sample • 32 mothers, nurses, and nurse supervisors | • Semistructured, individual interviews • Focus-group interviews • Interview guide | • Inductive content analysis • Constant comparison approach • (+) Data saturation | Three sets of reasons for leaving a home visiting program |
• Sweden | • Explore and describe • Beliefs and attitudes around the decision for a caesarean section | • (+) Reference • (+) Rationale | NR | • Unspecified • 21 males | • Individual telephone interviews | • Thematic analysis • Constant comparison approach • (-) Data saturation | Two themes and subthemes in relation to the research objective |
• Taiwan | • Explore • Illness experiences of early-onset knee osteoarthritis | • (+) Reference • (+) Rationale • Part of a large research series | NR | • Purposive sampling • 17 adults | • Semistructured, individual interviews • Interview guide • Memo/field notes (observations) | • Inductive content analysis • (+) Data saturation | Three major themes and nine subthemes regarding experiences of early-onset knee osteoarthritis |
• Australia | • Explore • Perceptions about bedside handover (new model) by nurses | • (+) Reference • (+) Rationale | NR | • Purposive sampling • 30 patients | • Semistructured, individual interviews • Interview guide | • Thematic content analysis • (-) Data saturation | Two dominant themes and related subthemes regarding patients’ thoughts about nurses’ bedside handover |
• Sweden | • Identify • Patterns in learning when living with diabetes | • (-) Reference • (-) Rationale | NR | • Purposive sampling with variations in age and sex • 13 participants | • Semistructured, individual interviews (3 times over 3 years) | • Inductive qualitative content analysis • (-) Data saturation | Five main patterns of learning when living with diabetes for three years following diagnosis |
• Canada | • Evaluate • Book chat intervention based on a novel | • (-) Reference • (-) Rationale • Part of a larger research project | NR | • Unspecified • 11 long-term-care staff | • Questionnaire with two open-ended questions | • Thematic content analysis • (-) Data saturation | Five themes (positive comments) about the book chat with brief description |
• Taiwan | • Explore • Facilitators and barriers to implementing smoking-cessation counseling services | • (-) Reference • (-) Rationale | NR | • Unspecified • 16 nurse-counselors | • Semistructured individual interviews • Interview guide | • Inductive content analysis • Constant comparison • (-) Data saturation | Two themes and eight subthemes about facilitators and barriers described using 2–4 quotations per subtheme |
• USA | • Identify • Educational strategies to manage disruptive behavior | • (-) Reference • (-) Rationale • Part of a larger study | NR | • Unspecified • 9 nurses | • Semistructured, individual interviews • Interview guide | • Content analysis procedures • (-) Data saturation | Two main themes regarding education strategies for nurse educators |
• USA | • Explore • Experiences of difficulty resolving patient-related concerns | • (-) Reference • (-) Rationale • Secondary analysis | NR | • Unspecified • 1,932 physician, nursing, and midwifery professionals | • E-mail survey with multiple-choice and free-text responses | • Inductive thematic analysis • Descriptive statistics • (-) Data saturation | One overarching theme and four subthemes about professionals’ experiences of difficulty resolving patient-related concerns |
• Singapore | • Explicate • Experience of quality of life for older adults | • (+) Reference • (+) Rationale | Parse’s human becoming paradigm | • Unspecified • 10 elderly residents | • Individual interviews • Interview questions presented (Parse) | • Unspecified analysis techniques • (-) Data saturation | Three themes presented using both participants’ language and the researcher’s language |
• China | • Explore • Perspectives on learning about caring | • (-) Reference • (-) Rationale | NR | • Purposeful sampling • 20 nursing students | • Focus-group interviews • Interview guide | • Conventional content analysis • (-) Data saturation | Four categories and associated subcategories about facilitators and challenges to learning about caring |
• Poland | • Describe and assess • Components of the patient–nurse relationship and pediatric-ward amenities | • (+) Reference • (-) Rationale • | NR | • Purposeful, maximum variation sampling • 26 parents or caregivers and 22 children | • Individual interviews | • Qualitative content analysis • (-) Data saturation | Five main topics described from the perspectives of children and parents |
• Canada | • Evaluate • Acceptability and feasibility of hand-massage therapy | • (-) Reference • (-) Rationale • Secondary to an RCT | Focused on feasibility and acceptability | • Unspecified • 40 patients | • Semistructured, individual interviews • Field notes • Video recording | • Thematic analysis for acceptability • Quantitative ratings of video items for feasibility • (-) Data saturation | Summary of data focusing on predetermined indicators of acceptability and descriptive statistics to present feasibility |
• USA | • Understand • Challenges occurring during transitions of care | • (+) Reference • (+) Rationale • Part of a larger study | NR | • Convenience sample • 22 nurses | • Focus groups • Interview guide | • Qualitative content analysis methods • (+) Data saturation | Three themes about challenges regarding transitions of care |
• Canada | • Understand • Factors that influence nurses’ retention in their current job | • (-) Reference • (-) Rationale | NR | • Purposeful sampling • 41 nurses | • Focus-group interviews • Interview guide | • Directed content analysis • (+) Data saturation | Nurses’ reasons for staying in or leaving their current job |
• Australia | • Extend • Understanding of caregivers’ views on advance care planning | • (+) Reference • (+) Rationale • Grounded theory overtone | NR | • Theoretical sampling • 18 caregivers | • Semistructured focus group and individual interviews • Interview guide • Vignette technique | • Inductive, cyclic, and constant comparative analysis • (-) Data saturation | Three themes regarding caregivers’ perceptions on advance care planning |
• USA | • Describe • Outcomes older adults with epilepsy hope to achieve in management | • (-) Reference • (-) Rationale | NR | • Unspecified • 20 patients | • Individual interview | • Conventional content analysis • (-) Data saturation | Six main themes and associated subthemes regarding what older adults hoped to achieve in management of their epilepsy |
• The Netherlands | • Gain • Experience of personal dignity and factors influencing it | • (+) Reference • (-) Rationale • | Model of dignity in illness | • Maximum variation sampling • 30 nursing home residents | • Individual interviews • Interview guide | • Thematic analysis • Constant comparison • (+) Data saturation | The threatening effect of illness and three domains being threatened by illness in relation to participants’ experiences of personal dignity |
• USA | • Identify and describe • Needs in mental health services and “ideal” program | • (+) Reference • (+) Rationale • Based on a primary study | NR | • Unspecified • 52 family members | • Semistructured, individual and focus-group interviews | • “Standard content analytic procedures” with case-ordered meta-matrix • (-) Data saturation | Two main topics – (a) intervention modalities that would fit family members’ needs in mental health services and (b) topics that programs should address |
• USA | • “What are the perceptions of staff nurses regarding palliative care…?” | • (-) Reference • (-) Rationale | NR | • Purposive, convenience sampling • 18 nurses | • Semistructured and focus-group interviews • Interview guide | • Ritchie and Spencer’s framework for data analysis • (-) Data saturation | Five thematic categories and associated subcategories about nurses’ perceptions of palliative care |
• Canada | • Describe • Experience of caring for a relative with dementia | • (+) Reference • (+) Rationale • Sandelowski ( ; ) • Secondary analysis • Phenomenological overtone | NR | • Purposive sampling • 11 bereaved family members | • Individual interviews • 27 transcripts from the primary study | • Unspecified • (-) Data saturation | Five major themes regarding the journey with dementia from the time prior to diagnosis and into bereavement |
• Canada | • Describe • Experience of fetal fibronectin testing | • (+) Reference • (+) Rationale | NR | • Unspecified • 17 women | • Semistructured individual interviews • Interview guide | • Conventional content analysis • (+) Data saturation | One overarching theme, three themes, and six subthemes about women’s experiences of fetal fibronectin testing |
• New Zealand | • Explore • Role of nurses in providing palliative and end-of-life care | • (+) Reference • (+) Rationale • Part of a larger study | NR | • Purposeful sampling • 21 nurses | • Semistructured individual interviews | • Thematic analysis • (-) Data saturation | Three themes about practice nurses’ experiences in providing palliative and end-of-life care |
• Brazil | • Understand • Experience with postnatal depression | • (+) Reference • (-) Rationale | NR | • Purposeful, criterion sampling • 15 women with postnatal depression | • Minimally structured, individual interviews | • Thematic analysis • (+) Data saturation | Two themes – women’s “bad thoughts” and their four types of responses to fear of harm (with frequencies) |
• Australia | • Understand • Experience of peripherally inserted central catheter insertion | • (+) Reference • (+) Rationale | NR | • Purposeful sampling • 10 patients | • Semistructured, individual interviews • Interview guide | • Thematic analysis • (+) Data saturation | Four themes regarding patients’ experiences of peripherally inserted central catheter insertion |
• USA | • Discover • Context, values, and background meaning of cultural competency | • (+) Reference • (+) Rationale | Focused on cultural competence | • Purposive, maximum variation, and network sampling • 20 experts | • Semistructured, individual interviews | • Within-case and across-case analysis • (-) Data saturation | Three themes regarding cultural competency |
• USA | • Explore and describe • Cancer experience | • (+) Reference • (+) Rationale | NR | • Unspecified • 15 patients | • Longitudinal individual interviews (4 time points) • 40 interviews | • Inductive content analysis • (-) Data saturation | Processes and themes about adolescent identity work and cancer identity work across the illness trajectory |
• Sweden | • Explore • Experiences of giving support to patients during the transition | • (-) Reference • (-) Rationale | Focused on support and transition | • Unspecified (but likely purposeful sampling) • 8 nurses | • Semistructured, individual interviews • Interview guide | • Content analysis • (-) Data saturation | One theme, three main categories, and eight associated categories |
• Taiwan | • Describe • Process of women’s recovery from stillbirth | • (+) Reference • (+) Rationale | NR | • Purposeful sampling • 21 women | • Individual interview techniques | • Inductive analytic approaches • (+) Data saturation | Three stages (themes) regarding the recovery process of Taiwanese women with stillbirth |
• Iran | • Describe • Perspectives of causes of medication errors | • (+) Reference • (+) Rationale | NR | • Purposeful sampling • 24 nursing students | • Focus-group interviews • Observations with notes | • Content analysis • (-) Data saturation | Two main themes about nursing students’ perceptions on causes of medication errors |
• Iran | • Explore • Image of nursing | • (-) Reference • (-) Rationale | NR | • Purposeful sampling • 18 male nurses | • Semistructured, individual interviews • Field notes | • Content analysis • (-) Data saturation | Two main views (themes) on nursing presented with subthemes per view |
• Spain | • Ascertain • Barriers to sexual expression | • (-) Reference • (-) Rationale | NR | • Maximum variation • 100 staff and residents | • Semistructured, individual interview | • Content analysis • (-) Data saturation | 40% of participants without identification of barriers and 60% with seven most cited barriers to sexual expression in the long-term care setting |
• Canada | • Explore • Perceptions of empowerment in academic nursing environments | • (+) Reference • (+) Rationale • Sandelowski ( , ) | Theories of structural power in organizations and psychological empowerment | • Unspecified • 8 clinical instructors | • Semistructured, individual • interview guide | • Unspecified (but used pre-determined concepts) • (+) Data saturation | Structural empowerment and psychological empowerment described using predetermined concepts |
• China | • Investigate • Meaning of life and health experience with chronic illness | • (+) Reference • (+) Rationale • Sandelowski ( , ) | Positive health philosophy | • Purposive, convenience sampling • 11 patients | • Individual interviews • Observations of daily behavior with field notes | • Thematic analysis • (-) Data saturation | Four themes regarding the meaning of life and health when living with chronic illnesses |
Note. NR = not reported.
Justification for use of a QD design was evident in close to half (47.3%) of the 55 publications. While most researchers clearly described recruitment strategies (80%) and data collection methods (100%), justification for the selection of the study setting was identified in only 38.2% of the articles, and almost 75% of the articles did not include any reason for the choice of data collection methods (e.g., focus-group interviews). In the vast majority (90.9%) of the articles, researchers did not explain their involvement and positionality during recruitment and data collection; 63.6% did not do so for data analysis. Ethical standards were reported in more than 89% of all articles, and most articles included an in-depth description of data analysis (83.6%) and of the development of categories or themes (92.7%). Finally, all researchers clearly stated their findings in relation to their research questions/objectives, and researchers of 83.3% of the articles discussed the credibility of their findings (see Table 1).
In statements of study objectives and/or questions, the most frequently used verbs were “explore” ( n = 22) and “describe” ( n = 17). Researchers also used “identify” ( n = 3), “understand” ( n = 4), or “investigate” ( n = 2). Most articles focused on participants’ experiences related to certain phenomena ( n = 18), facilitators/challenges/factors/reasons ( n = 14), perceptions about specific care/nursing practice/interventions ( n = 11), and knowledge/attitudes/beliefs ( n = 3).
A total of 30 articles included references for QD. The most frequently cited references ( n = 23) were “Whatever happened to qualitative description?” ( Sandelowski, 2000 ) and “What’s in a name? Qualitative description revisited” ( Sandelowski, 2010 ). Other references cited included “Qualitative description – the poor cousin of health research?” ( Neergaard et al., 2009 ), “Reaching the parts other methods cannot reach: an introduction to qualitative methods in health and health services research” ( Pope & Mays, 1995 ), and general research textbooks ( Polit & Beck, 2004 , 2012 ).
In 26 articles (and not necessarily the same as those citing specific references to QD), researchers provided a rationale for selecting QD. Most researchers chose QD because this approach aims to produce a straight description and comprehensive summary of the phenomenon of interest using participants’ language and staying close to the data (or using low inference).
Authors of two articles distinctly stated a QD design, yet also acknowledged grounded-theory or phenomenological overtones by adopting some techniques from these qualitative traditions ( Michael, O'Callaghan, Baird, Hiscock, & Clayton, 2014 ; Peacock, Hammond-Collins, & Forbes, 2014 ). For example, Michael et al. (2014 , p. 1066) reported:
The research used a qualitative descriptive design with grounded theory overtones ( Sandelowski, 2000 ). We sought to provide a comprehensive summary of participants’ views through theoretical sampling; multiple data sources (focus groups [FGs] and interviews); inductive, cyclic, and constant comparative analysis; and condensation of data into thematic representations ( Corbin & Strauss, 1990 , 2008 ).
Authors of four additional articles included language suggestive of a grounded-theory or phenomenological tradition, e.g., by employing a constant comparison technique or translating themes stated in participants’ language into the primary language of the researchers during data analysis ( Asemani et al., 2014 ; Li, Lee, Chen, Jeng, & Chen, 2014 ; Ma, 2014 ; Soule, 2014 ). Additionally, Li et al. (2014) specifically reported use of a grounded-theory approach.
In most (n = 48) articles, researchers did not specify any theoretical or philosophical framework. Of those articles in which a framework or philosophical stance was included, the authors of five articles described the framework as guiding the development of an interview guide ( Al-Zadjali, Keller, Larkey, & Evans, 2014 ; DeBruyn, Ochoa-Marin, & Semenic, 2014 ; Fantasia, Sutherland, Fontenot, & Ierardi, 2014 ; Ma, 2014 ; Wiens, Babenko-Mould, & Iwasiw, 2014 ). In two articles, data analysis was described as including key concepts of a framework being used as pre-determined codes or categories ( Al-Zadjali et al., 2014 ; Wiens et al., 2014 ). Oosterveld-Vlug et al. (2014) and Zhang, Shan, and Jiang (2014) discussed a conceptual model and underlying philosophy in detail in the background or discussion section, although the model and philosophy were not described as being used in developing interview questions or analyzing data.
In 38 of the 55 articles, researchers reported ‘purposeful sampling’ or some derivation of purposeful sampling such as convenience ( n = 10), maximum variation ( n = 8), snowball ( n = 3), and theoretical sampling ( n = 1). In three instances ( Asemani et al., 2014 ; Chan & Lopez, 2014 ; Soule, 2014 ), multiple sampling strategies were described, for example, a combination of snowball, convenience, and maximum variation sampling. In articles where maximum variation sampling was employed, “variation” referred to seeking diversity in participants’ demographics ( n = 7; e.g., age, gender, and education level), while one article did not include details regarding how their maximum variation sampling strategy was operationalized ( Marcinowicz, Abramowicz, Zarzycka, Abramowicz, & Konstantynowicz, 2014 ). Authors of 17 articles did not specify their sampling techniques.
Sample sizes ranged from 8 to 1,932, with nine studies in the 8–10 participant range and 24 studies in the 11–20 range; eight articles each reported samples of 21–30 and 31–50 participants, and six studies included more than 50 participants. Two of these articles reported quite large samples (N = 253, Hart & Mareno, 2014; N = 1,932, Lyndon et al., 2014), and the authors of both described the use of survey instruments and analysis of responses to open-ended questions. This was in contrast to studies with smaller sample sizes, where individual interviews and focus groups were more commonly employed.
In a majority of studies, researchers collected data through individual ( n = 39) and/or focus-group ( n = 14) interviews that were semistructured. Most researchers reported that interviews were audiotaped ( n = 51) and interview guides were described as the primary data collection tool in 29 of the 51 studies. In some cases, researchers also described additional data sources, for example, taking memos or field notes during participant observation sessions or as a way to reflect their thoughts about interviews ( n = 10). Written responses to open-ended questions in survey questionnaires were another type of data source in a small number of studies ( n = 4).
The analysis strategy most commonly used in the QD studies included in this review was qualitative content analysis (n = 30). Among the studies using this technique, most researchers described an inductive approach; researchers of two studies analyzed data both inductively and deductively. Thematic analysis was adopted in 14 studies and the constant comparison technique in 10 studies. In nine studies, researchers employed multiple techniques to analyze data, including qualitative content analysis with constant comparison (Asemani et al., 2014; DeBruyn et al., 2014; Holland, Christensen, Shone, Kearney, & Kitzman, 2014; Li et al., 2014) and thematic analysis with constant comparison (Johansson, Hildingsson, & Fenwick, 2014; Oosterveld-Vlug et al., 2014). In addition, five teams conducted descriptive statistical analysis using both quantitative and qualitative data, counting the frequencies of codes/themes (Ewens, Chapman, Tulloch, & Hendricks, 2014; Miller, 2014; Santos, Sandelowski, & Gualda, 2014; Villar, Celdran, Faba, & Serrat, 2014) or of targeted events captured through video monitoring (Martorella, Boitor, Michaud, & Gelinas, 2014). Tseng, Chen, and Wang (2014) cited the interpretive description of Thorne, Reimer Kirkham, and O’Flynn-Magee (2004) as their inductive analytic approach. In five of the 55 articles, researchers did not specifically name their analysis strategies, despite including descriptions of procedural aspects of data analysis. Researchers of 20 studies reported that data saturation for their themes was achieved.
Researchers described participants’ experiences of health care, interventions, or illnesses in 18 articles and presented straightforward, focused, detailed descriptions of facilitators, challenges, factors, reasons, and causes in 15 articles. Participants’ perceptions of specific care, interventions, or programs were described in detail in 11 articles. All researchers presented their findings with extensive descriptions including themes or categories. In 25 of 55 articles, figures or tables were also presented to illustrate or summarize the findings. In addition, the authors of three articles summarized, organized, and described their data using key concepts of conceptual models ( Al-Zadjali et al., 2014 ; Oosterveld-Vlug et al., 2014 ; Wiens et al., 2014 ). Martorella et al. (2014) assessed acceptability and feasibility of hand massage therapy and arranged their findings in relation to pre-determined indicators of acceptability and feasibility. In one longitudinal QD study ( Kneck, Fagerberg, Eriksson, & Lundman, 2014 ), the researchers presented the findings as several key patterns of learning for persons living with diabetes; in another longitudinal QD study ( Stegenga & Macpherson, 2014 ), findings were presented as processes and themes regarding patients’ identity work across the cancer trajectory. In another two studies, the researchers described and compared themes or categories from two different perspectives, such as patients and nurses ( Canzan, Heilemann, Saiani, Mortari, & Ambrosi, 2014 ) or parents and children ( Marcinowicz et al., 2014 ). Additionally, Ma (2014) reported themes using both participants’ language and the researcher’s language.
In this systematic review, we examined and reported specific characteristics of methods and findings reported in journal articles self-identified as QD and published during one calendar year. To accomplish this, we identified 55 articles that met inclusion criteria, performed a quality appraisal following CASP guidelines, and extracted and analyzed data focusing on QD features. In general, three primary findings emerged. First, despite inconsistencies, most QD publications exhibited the characteristics originally observed by Sandelowski (2000) and summarized in the limited QD literature available. Second, there were no clear boundaries in the methods used in the QD studies included in this review; in a number of studies, researchers adopted and combined techniques originating from other qualitative traditions to obtain rich data and increase their understanding of the phenomenon under investigation. Finally, justification for how QD was chosen and why it would be an appropriate fit for a particular study is an area in need of increased attention.
In general, the overall characteristics were consistent with design features of QD studies described in the literature ( Neergaard et al., 2009 ; Sandelowski, 2000 , 2010 ; Vaismoradi et al., 2013 ). For example, many authors reported that study objectives were to describe or explore participants’ experiences and factors related to certain phenomena, events, or interventions. In most cases, these authors cited Sandelowski (2000) as a reference for this particular characteristic. It was rare that theoretical or philosophical frameworks were identified, which also is consistent with descriptions of QD. In most studies, researchers used purposeful sampling and its derivative sampling techniques, collected data through interviews, and analyzed data using qualitative content analysis or thematic analysis. Moreover, all researchers presented focused or comprehensive, descriptive summaries of data including themes or categories answering their research questions. These characteristics do not indicate that there are correct ways to do QD studies; rather, they demonstrate how others designed and produced QD studies.
In several studies, researchers combined techniques that originated from other qualitative traditions for sampling, data collection, and analysis. This flexibility or variability, a key feature of recently published QD studies, may indicate that there are no clear boundaries in designing QD studies. Sandelowski (2010) articulated: “in the actual world of research practice, methods bleed into each other; they are so much messier than textbook depictions” (p. 81). Hammersley (2007) also observed:
“We are not so much faced with a set of clearly differentiated qualitative approaches as with a complex landscape of variable practice in which the inhabitants use a range of labels (‘ethnography’, ‘discourse analysis’, ‘life history work’, ‘narrative study’, …, and so on) in diverse and open-ended ways in order to characterize their orientation, and probably do this somewhat differently across audiences and occasions” (p. 293).
This concept of having no clear boundaries in methods when designing a QD study should enable researchers to obtain rich data and produce a comprehensive summary of data through various data collection and analysis approaches to answer their research questions. For example, using an ethnographical approach (e.g., participant observation) in data collection for a QD study may facilitate an in-depth description of participants’ nonverbal expressions and interactions with others and their environment as well as situations or events in which researchers are interested ( Kawulich, 2005 ). One example found in our review is that Adams et al. (2014) explored family members’ responses to nursing communication strategies for patients in intensive care units (ICUs). In this study, researchers conducted interviews with family members, observed interactions between healthcare providers, patients, and family members in ICUs, attended ICU rounds and family meetings, and took field notes about their observations and reflections. Accordingly, the variability in methods provided Adams and colleagues (2014) with many different aspects of data that were then used to complement participants’ interviews (i.e., data triangulation). Moreover, by using a constant comparison technique in addition to qualitative content analysis or thematic analysis in QD studies, researchers compare each case with others looking for similarities and differences as well as reasoning why differences exist, to generate more general understanding of phenomena of interest ( Thorne, 2000 ). In fact, this constant comparison analysis is compatible with qualitative content analysis and thematic analysis and we found several examples of using this approach in studies we reviewed ( Asemani et al., 2014 ; DeBruyn et al., 2014 ; Holland et al., 2014 ; Johansson et al., 2014 ; Li et al., 2014 ; Oosterveld-Vlug et al., 2014 ).
However, this flexibility or variability in the methods of QD studies may confuse readers, as well as researchers, when designing and labeling qualitative studies ( Neergaard et al., 2009 ). In particular, it could be difficult for scholars unfamiliar with qualitative studies to differentiate QD studies with “hues, tones, and textures” of qualitative traditions ( Sandelowski, 2000 , p. 337) from grounded theory, phenomenological, and ethnographical research. In fact, the major difference is in the presentation of the findings (or outcomes of qualitative research) ( Neergaard et al., 2009 ; Sandelowski, 2000 ). The final products of grounded theory, phenomenological, and ethnographical research are, respectively, a theory, a description of the meaning or essence of people’s lived experience, and an in-depth, narrative description of a certain culture, each generated through researchers’ intensive interpretations, reflections, and/or transformations of data ( Streubert & Carpenter, 2011 ). In contrast, QD studies result in “a rich, straight description” of experiences, perceptions, or events using language from the collected data ( Neergaard et al., 2009 ) through low-inference (or data-near) interpretations during data analysis ( Sandelowski, 2000 , 2010 ). This feature is consistent with our finding regarding presentation of findings: in all QD articles included in this systematic review, the researchers presented focused or comprehensive, descriptive summaries in answer to their research questions.
Finally, an explanation or justification of why a QD approach was chosen or appropriate for the study aims was not found in more than half of studies in the sample. While other qualitative approaches, including grounded theory, phenomenology, ethnography, and narrative analysis, are used to better understand people’s thoughts, behaviors, and situations regarding certain phenomena ( Sullivan-Bolyai et al., 2005 ), as noted above, the results will likely read differently than those for a QD study ( Carter & Little, 2007 ). Therefore, it is important that researchers accurately label and justify their choices of approach, particularly for studies focused on participants’ experiences, which could be addressed with other qualitative traditions. Justifying one’s research epistemology, methodology, and methods allows readers to evaluate these choices for internal consistency, provides context to assist in understanding the findings, and contributes to the transparency of choices, all of which enhance the rigor of the study ( Carter & Little, 2007 ; Wu, Thompson, Aroian, McQuaid, & Deatrick, 2016 ).
Use of the CASP tool drew our attention to the credibility and usefulness of the findings of the QD studies included in this review. Although justification for study design and methods was lacking in many articles, most authors reported techniques of recruitment, data collection, and analysis that appeared adequate. Internal consistency among study objectives, methods, and findings was achieved in most studies, increasing readers’ confidence that the findings of these studies are credible and useful in understanding under-explored phenomena of interest.
In summary, our findings support the notion that many scholars employ QD and include a variety of commonly observed characteristics in their study design and subsequent publications. Based on our review, we found that QD as a scholarly approach allows flexibility as research questions and study findings emerge. We encourage authors to provide as many details as possible regarding how QD was chosen for a particular study, as well as details regarding methods, to facilitate readers’ understanding and evaluation of the study design and rigor. We acknowledge the challenge of strict word limits for submissions to print journals; potential solutions include collaborating with journal editors and staff on creative use of charts or tables, or using more citations and less text in background sections so that methods sections can be robust.
Several limitations of this review deserve mention. First, only articles where researchers explicitly stated in the main body of the article that a QD design was employed were included. In contrast, articles labeled as QD only in the title or abstract, or articles that did not name their research design, were not examined because of the lack of certainty that the researchers actually carried out a QD study. As a result, we may have excluded some studies where a QD design was followed. Second, only one database was searched; therefore, we did not identify or describe potential studies following a QD approach that were published in journals indexed outside PubMed. Third, our review is limited by reliance on what was included in the published version of a study. In some cases, word limits or specific styles imposed by journals, or inconsistent reporting preferences of authors, may have limited our ability to appraise the general adequacy with the CASP tool and examine specific characteristics of these studies.
A systematic review was conducted by examining QD research articles focused on nursing-related phenomena and published in one calendar year. Current patterns include characteristics of QD studies consistent with previous observations described in the literature, flexibility or variability of methods in QD studies, and a need for clearer explanations of why QD was an appropriate label for a particular study. Based on these findings, we encourage authors to provide as many details as possible regarding the methods of their QD studies. In this way, readers can thoroughly consider and examine whether the methods used were effective and reasonable in producing credible and useful findings.
This work was supported in part by the John A. Hartford Foundation’s National Hartford Centers of Gerontological Nursing Excellence Award Program.
Hyejin Kim is a Ruth L. Kirschstein NRSA Predoctoral Fellow (F31NR015702) and 2013–2015 National Hartford Centers of Gerontological Nursing Excellence Patricia G. Archbold Scholar. Justine Sefcik is a Ruth L. Kirschstein Predoctoral Fellow (F31NR015693) through the National Institutes of Health, National Institute of Nursing Research.
Conflict of Interest Statement
The authors declare that there is no conflict of interest.
Hyejin Kim, MSN, CRNP, Doctoral Candidate, University of Pennsylvania School of Nursing.
Justine S. Sefcik, MS, RN, Doctoral Candidate, University of Pennsylvania School of Nursing.
Christine Bradway, PhD, CRNP, FAAN, Associate Professor of Gerontological Nursing, University of Pennsylvania School of Nursing.
Samantha L Thomas, Hannah Pitt, Simone McCarthy, Grace Arnot, Marita Hennessy, Methodological and practical guidance for designing and conducting online qualitative surveys in public health, Health Promotion International , Volume 39, Issue 3, June 2024, daae061, https://doi.org/10.1093/heapro/daae061
Online qualitative surveys—those surveys that prioritise qualitative questions and interpretivist values—have rich potential for researchers, particularly in new or emerging areas of public health. However, there is limited discussion about the practical development and methodological implications of such surveys, particularly for public health researchers. This poses challenges for researchers, funders, ethics committees, and peer reviewers in assessing the rigour and robustness of such research, and in deciding the appropriateness of the method for answering different research questions. Drawing and extending on the work of other researchers, as well as our own experiences of conducting online qualitative surveys with young people and adults, we describe the processes associated with developing and implementing online qualitative surveys and writing up online qualitative survey data. We provide practical examples and lessons learned about question development, the importance of rigorous piloting strategies, use of novel techniques to prompt detailed responses from participants, and decisions that are made about data preparation and interpretation. We consider reviewer comments, and some ethical considerations of this type of qualitative research for both participants and researchers. We provide a range of practical strategies to improve trustworthiness in decision-making and data interpretation—including the importance of using theory. Rigorous online qualitative surveys that are grounded in qualitative interpretivist values offer a range of unique benefits for public health researchers, knowledge users, and research participants.
Public health researchers are increasingly using online qualitative surveys.
There is still limited practical and methodological information about the design and implementation of these studies.
Building on Braun and Clarke (2013) , Terry and Braun (2017) and Braun et al . (2021) , we reflect on the methodological and practical lessons we have learnt from our own experience with conducting online qualitative surveys.
We provide guidance and practical examples about the design, implementation and analysis processes.
We argue that online qualitative surveys have rich potential for public health researchers and can be an empowering and engaging way to include diverse populations in qualitative research.
Public health researchers mostly engage in experiential (interpretive) qualitative approaches ( Braun and Clarke, 2013 ). These approaches are ‘centred on the exploration of participants’ subjective experiences and sense-making’ [( Braun and Clarke, 2021c ), p. 39]. Given the strong focus in public health on social justice, power and inequality, researchers proactively use the findings from these qualitative studies—often in collaboration with lived experience experts and others who are impacted by key decisions ( Reed et al ., 2024 )—to advocate for changes to public health policy and practice. There is also an important level of theoretical, methodological and empirical reflection that is part of the public health researcher’s role. For example, as qualitative researchers actively construct and interpret meaning from data, they constantly challenge their assumptions, their way of knowing and their way of ‘doing’ research ( Braun and Clarke, 2024 ). This reflexive practice also includes considering how to develop more inclusive opportunities for people to participate in research and to share their opinions and experiences about the issues that matter to them.
While in-depth interviews and focus groups provide rich and detailed narratives that are central to understanding people’s lives, these forms of data collection may sometimes create practical barriers for both researchers and participants. For example, they can be time consuming, and the power dynamics associated with face-to-face interviews (even in online settings) may make them less accessible for groups that are marginalized or stigmatized ( Edwards and Holland, 2020 ). While some population subgroups (and contexts) may suit (or require) face-to-face qualitative data collection approaches, others may lend themselves to different forms of data collection. Young people, for example, may be keen to be civically involved in research about the issues that matter to them, such as the climate crisis, but they may find it more convenient and comfortable using anonymized digital technologies to do so ( Arnot et al ., 2024b ). As such, part of our reflexive practice as public health researchers must be to explore, and be open to, a range of qualitative methodological approaches that could be more convenient, less intimidating and more engaging for a diverse range of population subgroups. This includes thinking about pragmatic ways of operationalizing qualitative data collection methods. How can we develop methods and engagement strategies that enable us to gain insights from a diverse range of participants about new issues or phenomena that may pose threats to public health, or look at existing issues in new ways?
Advancements in online data collection methods have also created new options for researchers and participants about how they can be involved in qualitative studies ( Hensen et al ., 2021 ; Chen, 2023 ; Fan et al ., 2024 ). Online qualitative surveys—those surveys that prioritize qualitative values and questions—have rich potential for qualitative researchers. Braun and Clarke (2013 , p. 135) state that qualitative surveys:
…consist of a series of open-ended questions about a topic, and participants type or hand-write their responses to each question. They are self-administered; a researcher-administered qualitative survey would basically be an interview.
While these types of studies are increasingly utilized in public health, researchers have highlighted that there is still relatively limited discussion about the methodological and practical implications of these surveys ( Braun and Clarke, 2013 ; Terry and Braun, 2017 ; Braun et al ., 2021 ). This poses challenges for qualitative public health researchers, funders, ethics committees and peer reviewers in assessing the purpose, rigour and contribution of such research, and in deciding the appropriateness of the method for answering different research questions.
Using examples from online qualitative surveys that we have been involved in, this article discusses a range of methodological and practical lessons learnt from developing, implementing and analysing data from these types of surveys. While we do not claim to have all the answers, we aim to develop and extend on the methodological and practical guidance from Braun and Clarke (2013) , Terry and Braun (2017) and Braun et al . (2021) about the potential for online qualitative surveys. This includes how they can provide a rigorous ‘wide-angle picture’ [( Toerien and Wilkinson, 2004 ), p. 70] from a diverse range of participants about contemporary public health phenomena.
Figure 1 aims to develop and extend on the key points made by Braun and Clarke (2013) , Terry and Braun (2017) and Braun et al . (2021) , which provide the methodological and empirical foundation for our article.
Figure 1: Methodological considerations in conducting online qualitative surveys.
Online qualitative surveys take many forms. They may be fully qualitative or qualitative dominant—mostly qualitative with some quantitative questions ( Terry and Braun, 2017 ). There are also many different ways of conducting these studies—from using a smaller number of questions that engage specific population groups or knowledge users in understanding detailed experiences ( Hennessy and O’Donoghue, 2024 ), to a larger number of questions (which may use market research panel providers to recruit participants), that seek broader opinions and attitudes about public health issues ( Marko et al ., 2022a ; McCarthy et al ., 2023 ; Arnot et al ., 2024a ). However, based on our experiences of applying for grant funding and conducting, publishing and presenting these studies, there are still clear misconceptions and uncertainties about these types of surveys.
One of the concerns raised about online qualitative surveys is how they are situated within broader qualitative values and approaches. This includes whether they can provide empirically innovative, rigorous, rich and theoretically grounded qualitative contributions to knowledge. Our experience is that online qualitative surveys have the most potential when they harness the values of interpretivist ‘Big Q’ approaches to collect information from a diverse range of participants about their experiences, opinions and practices ( Braun et al ., 2021 ). The distinction between positivist (small q) and interpretivist (Big Q) approaches to online qualitative surveys is an important one that requires some initial methodological reflection, particularly in considering the (largely unhelpful) critiques that are made about the rigour and usefulness of these surveys. These critiques often overlook the theoretical underpinnings and qualitative values inherent in such surveys. For example, while there may be a tendency to think of surveys and survey data as atheoretical and descriptive, the use of theory is central in informing online qualitative surveys. Varpio and Ellaway (2021 , p. 343), for instance, explain that theory can ‘offer explanations and detailed premises that we can wrestle with, agree with, disagree with, reject and/or accept’. Theory informs the research design, the approach to data collection and analysis, the interpretation of findings and the conclusions that are drawn. Theory is also important in helping researchers to engage in reflexive practice. The use of theory is essential in progressing online qualitative surveys beyond description and towards in-depth interpretation and explanation, thus facilitating a deeper understanding of the studied phenomenon ( Collins and Stockton, 2018 ; Jamie and Rathbone, 2022 ).
The main assumptions about online qualitative surveys are that they can only collect ‘thin’ textual data, and that they are not flexible enough as a data collection tool for researchers to prompt or ask follow-up questions or to co-create detailed and rich data with participants ( Braun and Clarke, 2013 ; Terry and Braun, 2017 ; Braun et al ., 2021 ). While we acknowledge that the type of data collected in these types of studies differs from that of in-depth interview studies, these surveys may be a more accessible and engaging way to collect rich insights from a diverse range of participants who may otherwise not participate in qualitative research ( Braun and Clarke, 2013 ; Terry and Braun, 2017 ; Braun et al ., 2021 ). Despite this, peer reviewers can question the depth of information that may be collected in these studies. Assumptions about large but ‘thin’ datasets may also mean that researchers, funders and reviewers take (and perhaps expect) a more positivist approach to the design and analytical processes associated with these surveys. For example, the multiple topics and questions, larger sample sizes, and the generally shorter textual responses that online qualitative surveys generate may lead researchers to approach these surveys using more descriptive and atheoretical paradigms. This approach may focus on ‘measuring’ phenomena, using variables, developing thinner analytical description and adding numerical values to the number of responses for different categories or themes.
We have found that these assumptions can also affect the review processes associated with these types of studies; we have received critiques from reviewers holding both positivist and interpretivist positions. Positivist critiques focus on matters associated with whether the samples are ‘representative’, and the flaws associated with ‘self-selecting convenience’ samples. Critiques from interpretivist colleagues question why such large sample sizes are needed for qualitative studies, seeing surveys as a less rigorous method for gaining rich and meaningful data. For example, we have had reviewers query the scope and depth of the analysis of the data that we present from these studies because they are concerned that the type of data collected lacks depth and does not fully contextualize and explain how participants think about issues. We have also had reviewers request that we return to the study to collect quantitative data to supplement the qualitative findings of the survey. They also question how ‘representative’ the samples are of population groups. These comments, of course, are not unique to online qualitative surveys but do highlight the difficulty that reviewers may have in placing and situating these types of studies within broader qualitative approaches. With this in mind, we have also found that some reviewers ask for additional information to justify both the use of online qualitative surveys and why we have chosen them over other qualitative approaches. For example, reviewers have asked us to justify why we chose an online qualitative survey and to explain what we may have missed out on by not conducting in-depth interviews or quantitative or mixed methods surveys instead.
While there is now a general understanding that attributing ‘numbers’ to qualitative data is largely unhelpful and inappropriate ( Chowdhury, 2015 ), there may be expectations that the larger sample sizes associated with online qualitative surveys enable researchers to provide numerical indicators of data. Rather than focusing on the ‘artfully interpretive’ techniques used to analyse and construct themes from the data ( Finlay, 2021 ), we have found that reviewers often ask us to provide numerical information about how many people provided different responses to different questions (or constructed themes), and the number at which ‘saturation’ was determined. Reviewer feedback that we have received about analytical processes has asked for detailed explanations about why attempts to ‘minimize bias’ (including calculations of inter-rater reliability and replicability of data quality) were not used. This demonstrates that peer reviewers may misinterpret the interpretivist values that guide online qualitative surveys, asking for information that is essentially ‘meaningless’ in qualitative paradigms in which researchers’ subjectivity ‘sculpts’ the knowledge that is produced ( Braun and Clarke, 2021a ).
As well as a ‘wide-angle picture’ [( Toerien and Wilkinson, 2004 ), p. 70] on phenomena, online qualitative surveys can also: (i) generate both rich and focused data about perceptions and practices, and (ii) have multiple participatory and practical advantages—including helping to overcome barriers to research participation ( Braun and Clarke, 2013 ; Terry and Braun, 2017 ; Braun et al ., 2021 ). For researchers , online qualitative surveys can be a more cost-effective alternative ( Braun and Clarke, 2013 ; Terry and Braun, 2017 )—they are generally more time-efficient and less labour-intensive (particularly if working with market research companies to recruit panels). They are also able to reach a broad range of participants—such as those who are geographically dispersed ( Braun and Clarke, 2013 ; Terry and Braun, 2017 ), and those who may not have internet connectivity that is reliable enough to complete online interviews (a common issue for individuals living in regional or rural settings) ( de Villiers et al ., 2022 ). We are also more able to engage young people in qualitative research through online surveys, perhaps partly due to extensive panel company databases but also because they may be a more accessible and familiar way for young people to participate in research. The ability to quickly investigate new public health threats from the perspective of lived experience can also provide important information for researchers, providing justification for new areas of research focus, including setting agendas and advocating for the need for funding (or policy attention). Collecting data from a diverse range of participants—including from those who hold views that we may see as less ‘politically acceptable’, or inconsistent with our own public health reasoning about health and equity—is important in situating and contextualizing community attitudes towards particular issues.
For participants , benefits include having a degree of autonomy and control over their participation, including completing the survey at a time and place that suits them, and the anonymous nature of participation (that may be helpful for people from highly stigmatized groups). Participants can take time to reflect on their responses or complete the survey, and may feel more able to ‘talk back’ to the researcher about the framing of questions or the purpose of the research ( Braun et al ., 2021 ). We would also add that a benefit of these types of studies is that participants can also drop out of the study easily if the survey does not interest them or meet their expectations—something that we think might be more onerous or uncomfortable for participants in an interview or focus group.
For knowledge users, including advocates, service providers and decision-makers, qualitative research provides an important form of evidence, and the ‘wide-angle picture’ [( Toerien and Wilkinson, 2004 ), p. 70] on issues from a diverse range of individuals in a community or population can be a powerful advocacy tool. Online qualitative surveys can also provide rapid insights into how changes to policy and practice may impact population subgroups in different ways.
There are, of course, some limitations associated with online qualitative surveys ( Braun et al ., 2021 ; Marko et al ., 2022b ). For example, there is no ability to engage individuals in a ‘traditional’ conversation or to prompt or probe meaning in the interactive ways that we are familiar with in interview studies. There is less ability to refine the questions that we ask participants in an iterative way throughout a study based on participant responses (particularly when working with market research panel companies). There may also be barriers associated with written literacy, access to digital technologies and stable internet connections ( Braun et al ., 2021 ). They may also not be the most suitable for individuals who have different ways of ‘knowing, being and doing’ qualitative research—including Indigenous populations [( Kennedy et al ., 2022 ), p. 1]. All of these factors should be taken into consideration when deciding whether online qualitative surveys are an appropriate way of collecting data. Finally, while these types of surveys can collect data quickly ( Marko et al ., 2022b ), there can also be additional decision-making processes related to data preparation and inclusion that can be time-consuming.
There are a range of practical considerations that can improve the rigour, trustworthiness and quality of online qualitative survey data. Again, developing and expanding on previous work (Braun and Clarke, 2013; Terry and Braun, 2017; Braun et al., 2021), Figure 2 gives an overview of some key practical considerations associated with the design, implementation and analysis of these surveys. We would also note that before starting your survey design, you should be aware that people may use different types of technology to complete the survey, and in different spaces. For example, we cannot assume that people will be sitting in front of a computer or laptop at home or in the office; people are more likely to complete surveys on a mobile phone, perhaps on a train or bus on the way to work or school.
Figure 2: Top ten practical tips for conducting online qualitative surveys.
Creating an appropriate and accessible structure
The first step in designing an online qualitative survey is to plan the structure of your survey. This step is important because the structure influences the way that participants interact with and participate in the survey. The survey structure creates an ‘environment’ that helps participants to share their perspectives, prompts their thinking and allows them to develop their ideas (Braun and Clarke, 2013; Terry and Braun, 2017). Similar to an interview study, the structure of the survey guides participants from one set of questions (and topics) to the next. It is important to consider the ordering of topics so that the survey has a logical flow, introduces participants to concepts and allows them to develop the depth of their responses.
Before participants start the survey, we provide a clear and simple lay language summary of the survey. Because many individuals will be familiar with completing quantitative surveys, we include a welcoming statement and reiterate the qualitative nature of the survey, stating that their answers can be about their own experiences:
Thank you for agreeing to take part in this survey about [topic] . This survey involves writing responses to questions rather than checking boxes.
We then clearly reiterate the purpose of the survey, providing a short description of the topic that we are investigating. We state that we do not seek to collect any identifiable data, that we are interested in participants’ perspectives, that there are no right or wrong answers, and that participants can withdraw from the survey at any time without giving a reason.
Similar to Braun et al . (2021) , we start our surveys with questions about demographic and related characteristics (which we often call ‘ participant/general characteristics ’). These can be discrete choice questions, but can also utilize open text—for example, in relation to gender identity. We have found that there is always a temptation with surveys to ask many questions about the demographic characteristics of participants. However, we caution that too many questions can be intrusive for participants and can take away valuable time from open-text questions, which are the core focus of the survey. We recommend asking participant characteristic and demographic questions that situate and contextualize the sample ( Elliott et al ., 1999 ).
We generally start the open-text sections of these surveys by asking broad introductory questions about the topic. This might include questions such as: ‘Please describe the main reasons you drink alcohol’, and ‘What do you think are the main impacts of climate change on the world?’ We have found that these types of questions get participants used to responding to open-text questions relevant to the study’s research questions and aims. For each new topic of investigation (which are based on our theoretical concepts and overall study aims and research questions), we provide a short explanation about what we will ask participants. We also use tools and text to signpost participant progress through the survey. This can be a valuable way to avoid high attrition rates, where participants exit the survey because they are fatigued and unclear about when the survey will end:
Great! We are just over half-way through the survey.
We ask more detailed questions that are more aligned with our theoretical concepts in the middle of the survey. For example, we may start with broad questions about a harmful industry and their products (such as gambling, vaping or alcohol) and then in the middle of the survey ask more detailed questions about the commercial determinants of health and the specific tactics that these industries use (for example, about product design, political tactics, public relations strategies or how these practices may influence health and equity). In relation to these more complex questions, it is particularly important that we reiterate that there are no wrong answers and try to include encouraging text throughout the survey:
There are no right or wrong answers—we are curious to hear your opinions .
We always try to end the survey on a positive. While these types of questions depend on the study, we try to ask questions which enable participants to reflect on what could be done to address or improve an issue. This might include their attitudes about policy, or what they would say to those in positions of power:
What do you think should be done to protect young people from sports betting advertising on social media? If there was one thing that could be done to prevent young people from being exposed to the risks associated with alcohol, cigarettes, vaping, or gambling, what would it be? If you could say one thing to politicians about climate change, what would it be?
Finally, we ask participants if there is anything we have missed or if they have anything else to add, sometimes referred to as a ‘clean-up’ question ( Braun and Clarke, 2013 ). The following provides a few examples of how we have framed these questions in some of our studies:
Is there anything you would like to say about alcohol, cigarettes, vaping, and gambling products that we have not covered? Is there anything we haven’t asked you about the advertising of alcohol to women that you would like us to know?
Considering the impact of the length of the survey on responses
The length of the survey (both the number of questions and the time it takes an individual to complete the survey) is guided by a range of methodological and practical considerations and will vary between studies (Braun and Clarke, 2013). Many factors will influence completion times. We try to give individuals a guide at the start of the survey about how long we think it will take to complete (for example, between 20 and 30 minutes). We highlight that it may take people a little more or less time, and that people are able to leave their browser open or save the survey and come back to finish it later. For our first few online qualitative surveys, we found that we asked lots of questions because we felt less in control of being able to prompt or ask follow-up questions of participants. However, we have learned that less is more! Asking too many questions may lead to more survey dropouts, and may significantly reduce the textual quality of the information that you receive from participants (Braun and Clarke, 2013; Terry and Braun, 2017). This includes considering how the survey questions might lead to repetition, which may be annoying for participants, leading to responses such as ‘like I’ve already said’, ‘I’ve already answered that’ or ‘see above’.
Providing clear and simple guidance
When designing an online qualitative survey, we try to think of ways to make participation in the survey engaging. We do not want individuals to feel that we are ‘mining’ them for data; rather, we want to demonstrate that we are genuinely interested in their perspectives and views. We use a range of mechanisms to do this. Because there is no opportunity to verbally explain or clarify concepts to participants, there is a particular need to ensure that the language used is clear and accessible (Braun and Clarke, 2013; Terry and Braun, 2017). If language or concepts are complex, you are more likely to receive ‘I don’t know’ responses to your questions. We need to remember that participants have a range of written and comprehension skills, and inclusive and accessible language is important. We also try never to assume a level of knowledge about an issue (unless we have specifically asked for participants who are aware of and engaged in an issue—such as women who drink alcohol) (Pitt et al., 2023). This includes avoiding highly technical or academic language and not assuming that the individuals completing the survey will understand concepts in the same way that researchers do (Braun and Clarke, 2013). Clearly explaining concepts, or using text or images to prompt memories, can help to overcome this:
Some big corporations (such as the tobacco, vaping, alcohol, junk food, or gambling industries) sponsor women's sporting teams or clubs, or other events. You might see sponsor logos on sporting uniforms, or at sporting grounds, or sponsoring a concert or arts event.
At all times, we try to centre the language that we use with the population from which we are seeking responses. Advisory groups can be particularly helpful in framing language for different population subgroups. We often use colloquial language, even if it might not be seen as the ‘correct’ academic language or terminology. Where possible, we also try to define theoretical concepts in a clear and easy to understand way. For example, in our study investigating parent perceptions of the impact of harmful products on young people, we tried to clearly define ‘normalization’:
In this section we ask you about some of the perceived health impacts of the above products on young people. We also ask you about the normalisation of these products for young people. When we talk about normalisation, we are thinking about the range of factors that might make these products more acceptable for young people to use. These factors might include individual factors, such as young people being attracted to risk, the influence of family or peers, the accessibility and availability of these products, or the way the industry advertises and promotes these products.
Using innovative approaches to improve accessibility and prompt responses
Online qualitative surveys can include features beyond traditional question-and-answer formats ( Braun and Clarke, 2013 ; Terry and Braun, 2017 ). For example, we often use a range of photo elicitation techniques (using images or videos) to make surveys more accessible to participate in, address different levels of literacy, and overcome the assumption that we are not able to ‘prompt’ responses. These types of visual methodologies enable a collaborative and creative research experience by asking the participant to reflect on aspects of the visual materials, such as symbolic representations, and discuss these in relation to the research objectives ( Glaw et al ., 2017 ). The combination of visual images and clear descriptions helps to provide a focus for responses about different issues, as well as prompting nuanced information such as participant memories and emotions ( Glaw et al ., 2017 ). We use different types of visuals in our studies, such as photographs (including of the public health issues we’re investigating); screenshots from websites and social media posts (including newspaper headlines) and videos (including short videos from social media sites such as TikTok) ( Arnot et al ., 2024b ). For example, when talking about government responses to the climate crisis, we used a photograph of former Australian Prime Minister Scott Morrison holding a piece of coal in the Australian parliament to prompt participants’ thinking about the government’s relationship with fossil fuels and to provide a focal point for their answer. However, we would caution against using any images that may be confronting for participants or deliberately provocative. The purpose of using visuals must always be in the interests of the participants—to clarify, prompt and reflect on concepts. Ethics committees should carefully review the images used in surveys to ensure that they have a clear purpose and are unlikely to cause any discomfort.
Thinking carefully about your criteria for recruitment
Determining the sample size of online qualitative studies is not an exact science. The sample sizes for recent studies have ranged from n = 46 in a study about pregnancy loss (Hennessy and O’Donoghue, 2024), to n = 511 in a study with young people about the climate crisis (Arnot et al., 2023b). We follow ‘rules of thumb’ [(Braun and Clarke, 2021b), p. 211] which try to balance the needs of the research and data richness with key practical considerations (such as funding and time constraints), funder expectations, discipline-specific norms and our knowledge and experience of designing and implementing online qualitative surveys. However, we have found that peer reviewers expect much more justification of sample sizes than they do for other types of qualitative research. Robust justification of sample sizes is often needed to prevent any ‘concerns’ that reviewers may raise. Our response to these reviews often reiterates that our focus (as with all qualitative research) is not to produce a ‘generalisable’ or ‘representative’ sample but to recruit participants who will help to provide ‘rich, complex and textured data’ [(Terry and Braun, 2017), p. 15] about an issue. Instead of focusing on data saturation, a contested concept which is incongruent with reflexive thematic analysis in particular (Braun and Clarke, 2021b), we find it useful to consider information power to determine the sample size for these surveys (Malterud et al., 2016). Information power prioritizes the adequacy, quality and variability of the data collected over the number of participants.
Recruitment for online qualitative surveys can be influenced by a range of factors. Monetary and time constraints will impact the size and, if using market research company panels, the specificity of participant quotas. Recruitment strategies must be developed to ensure that the data provides enough information to answer the research questions of the study. For our research purposes, we often try to ensure that participants with a range of socio-demographic characteristics are invited to participate in the sample. We set soft quotas for age, gender and geographic location to ensure some diversity. We have found that some population subgroups may also be recruited more easily than others—although this may depend on the topic of the survey. For example, we have found that quotas for women and those living in metropolitan areas may fill more quickly. In these scenarios, the research team must weigh up the timelines associated with recruitment and data collection (e.g. How long do we want to run data collection for? How much of our budget can be spent on achieving a more equally split sample? Are quotas necessary?) versus the purpose and goals of the research (i.e. to generate ideas rather than data representativeness), and the study-specific aims and research questions.
There are, of course, concerns about not being able to ‘see’ the people who are completing these surveys. There is an increasing focus in the academic literature on ‘false’ respondents, particularly in quantitative online surveys (Levi et al., 2021; Wang et al., 2023). This will be an important ongoing discussion for qualitative researchers, and we do not claim to have the answers for how to overcome these issues. For example, some individuals may claim to meet the inclusion criteria to access the survey, while others may not understand or may misinterpret the inclusion criteria. There is also a level of discomfort about who judges, and how we judge, who may be a ‘legitimate’ participant. However, we can talk practically about some of the strategies that we use to ensure the rigour of data. For example, we find that screening questions can provide a ‘double-check’ in relation to inclusion criteria and can also help to ensure that there is consistency between the information an individual provides about how they meet the inclusion criteria and their subsequent responses. For example, in a recent survey of parents of young people, a participant stated that they were 18 years old and were a parent to a 16-year-old and 15-year-old; their overall responses were inconsistent with being a parent of children these ages. Similarly, in our gambling studies, people may tick that they have gambled in the last year but then, in subsequent questions, say they have not gambled at all. This highlights the importance of checking data across all questions, although it should be noted that comprehensively scanning the data for such responses is not always feasible given time and cost constraints, and some of these participants may be overlooked.
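Part of this kind of cross-question consistency check can be automated to surface candidates for closer reading before manual review. The following is a minimal sketch only, based on a hypothetical gambling-survey export; the field names, values and keyword match are our own illustration, not an actual survey instrument or a validated screen:

```python
# Hypothetical survey export: field names and values are illustrative only.
responses = [
    {"id": 101, "screener_gambled": "yes", "open_text": "sports betting and pokies"},
    {"id": 102, "screener_gambled": "yes", "open_text": "none, I don't gamble"},
    {"id": 103, "screener_gambled": "yes", "open_text": "lotto tickets"},
]

def contradicts_screener(r):
    """Flag a participant whose open-text answer contradicts the screening question."""
    return r["screener_gambled"] == "yes" and "don't gamble" in r["open_text"].lower()

flagged = [r["id"] for r in responses if contradicts_screener(r)]
print(flagged)  # [102]
```

A keyword match like this only surfaces candidates; the judgement about who is a ‘legitimate’ participant, as discussed above, remains a human decision made by the research team.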
Ensuring that there are strategies to create agency and engage participants in the research
One of the benefits of online qualitative surveys compared to traditional quantitative surveys is the scope for participants to explain their answers and to disagree with the research team’s position. One indication that participants feel able to do this comes in their responses when asked for any additional comments at the end of the survey. For example, in a survey about women’s attitudes towards alcohol marketing, one participant concluded the survey by writing: ‘I think you have covered everything. I think that you need to stop shaming women for having fun’. Other participants demonstrate their engagement and interest in the survey by reaffirming the perspectives they have shared throughout the survey. For example, in a study with young people on climate, one participant responded at the end that ‘it’s one of the few things I actually care about’, while another commented on the quality of the survey questions, stating, ‘I think this survey did a great job with probing questions to prompt all the thoughts I have on it’.
We also think that online qualitative surveys may lead to less social desirability bias in participants’ responses. Participants seem less wary about communicating less politically correct opinions than they might in a face-to-face interview. For example, at times, participants communicate attitudes that may not align with public health values (e.g. supporting personal responsibility, anti-nanny-state, and neoliberal ideologies of health and wellbeing) that we rarely see communicated to us in in-depth interview or focus group studies. We would argue that these perspectives are valuable for public health researchers because they capture a different community voice that may not otherwise be represented in research. This may show where there is a lack of support for health interventions and policy reforms, and may indicate where further awareness-raising needs to occur. These types of responses also contribute to reflexive practice by challenging our assumptions and positions about how we think people should think or feel about responses to particular public health issues. Examples of such responses from our surveys include:
"Like I have already said, if you try to hide it you will only make it more attractive. This nanny-state attitude of the elite drives me crazy. People must be allowed to decide for themselves."
Ethical issues for participants and researchers
Researchers should also be aware that some of the ethical issues associated with online qualitative surveys may be different from those in in-depth interviews—and it is important that these are explained in any ethical consideration of the study. Providing a clear and simply worded Plain Language Statement (in written or video form) is important in establishing informed consent and willingness to participate. While participants are given information about who to contact if they have further questions about the study, this may be an extra step for participants, and they may not feel as able to ask for clarification about the study. Because of this, we try to provide multiple examples of the types of questions that we will ask, as well as providing downloadable support details (for example, for mental health support lines). A positive aspect of surveys is that participants are able to easily ignore recruitment notices to participate in the study. They are also able to stop the survey at any time by exiting out of the browser if they feel discomfort without having to give a reason in person to a researcher.
While the anonymous nature of the survey may be empowering for some participants (Braun and Clarke, 2013; Terry and Braun, 2017; Braun et al., 2021), it can also make it difficult for researchers to ascertain whether people need any further support after completing the survey. Participants may also fill in surveys with someone else present and may be influenced in how they respond to questions (with the exception of some studies in which people may require assistance from someone to type their responses). Because of the above, some researchers, ethics committees and funders may be more cautious about using these studies for highly sensitive subjects. However, we would argue that the important point is that studies follow ethical principles and take the lack of direct contact with participants into account in the study’s ethical considerations. It is also important to ensure that the platforms used to collect survey data are trusted and secure. Here, we would argue that universities have an obligation to investigate and, where possible, approve survey providers to ensure that researchers are using platforms that meet rigorous standards for data and privacy.
It is also important to note that there may be responses from participants that may be challenging ( Terry and Braun, 2017 ; Braun and Clarke, 2021 ). Online spaces are rife with trolling due to their anonymous nature, and online surveys are not immune to this behaviour. Naturally, this leads to some silly responses—‘ Deakin University is responsible for all of this ’, but researchers should also be aware that the anonymity of surveys can (although in our experience not often) lead to responses that may cause discomfort for the researchers. For example, when asked if participants had anything else to add to a climate survey ( Arnot et al ., 2024c ), one responded ‘ nope, but you sure asked a lot of dumbass questions’ . Just as with interview-based studies, there must be processes built into the research for debriefing—particularly for students and early career researchers—as well as clear decisions about whether to include or exclude these types of responses when preparing the dataset for analysis and in writing up the results from the survey.
The importance of piloting the survey
Because of the inability to explain and clarify concepts, piloting is particularly important (Braun and Clarke, 2013; Terry and Braun, 2017; Braun et al., 2021) to ensure that: (i) the technical aspects of the survey work as intended; (ii) the survey is eliciting quality responses (with limited ‘nonsensical’ responses such as random characters); (iii) the survey responses indicate comprehension of the survey questions; and (iv) there is not a substantial number of people who ‘drop out’ of the study. Typically, we pilot our survey with 10% of the intended sample size. After piloting, we often change question wording (particularly to address questions that elicit very short text responses), adjust the length of the survey, and sometimes refine definitions or language to improve comprehension. Researchers should remember that changes to the survey questions may need to be reviewed by ethics committees before launching the full survey. It is important to build in time for piloting and the revision of the survey to ensure you get this right, as once you launch the full survey, there is no going back!
Preparing the dataset
Once the full survey is launched, the quality of data and the types of responses you receive can vary. There is very limited transparency in published papers around how the dataset was prepared (more familiar to some as ‘data cleaning’), including the decisions about which (if any) participants (or indeed responses) were excluded from the dataset and why. Nonsensical responses can be common—and can take a range of forms (Figure 3). These can include random numbers or letters, a chunk of text that has been copied and pasted from elsewhere, predictive text or even repeated emojis. In one study, we had a participant quote the script of The Bee Movie in response to questions.
Figure 3: Visual examples of nonsensical responses in online qualitative surveys.
Part of our familiarization with the dataset [Phase One in Braun and Clarke’s reflexive approach to thematic analysis ( Braun and Clarke, 2013 ; Braun et al ., 2021 )] includes preparing the dataset for analysis. We use this phase to help make decisions about what to include and exclude from the final dataset. While a row of emojis in the data file can easily be spotted and removed from the dataset, sometimes responses can look robust until you read, become familiar and engage with the data. For example, when asked about what they thought about collective climate action ( Arnot et al ., 2023a , 2024c ), some participants entered random yet related terms such as ‘ plastic ’, or repeated similar phrases across multiple questions:
“ why do we need paper straws ”, “ paper straws are terrible ”, “ papers straws are bad for you ”, “ paper straws are gross .”
Participants can also provide comprehensive answers for the first few questions and then nonsensical responses for the rest, which may be due to question fatigue [(Braun and Clarke, 2013), p. 138]. Therefore, it is important to go through each participant’s responses closely to ensure they have attempted to provide bona fide responses. For example, in one of our young people and climate surveys (Arnot et al., 2023a, 2024c), one participant responded genuinely to the first half of the survey before the quality of their responses dropped dramatically:
“I can’t even be bothered to read that question ”, “ why so many questions ”, “ bro too many sections. ”
Some market research panel providers may complete an initial quality screen of data. However, this does not replace the need for the research team’s own data preparation processes. Researchers should ensure they are checking that responses are coherent—for example, not providing information that is contradictory or not credible. In our more recent studies, we have increasingly seen responses cut and pasted from ChatGPT and other AI tools—providing a new challenge in assessing the quality of responses. If you are seeing these types of responses, it might be an opportunity to think about the style and suitability of the questions being asked. For example, the use of AI tools might suggest that people are finding it difficult to answer questions or may feel that they have to present a ‘correct’ answer. We would also note that, because of the volume of data in these surveys, the preparation of data involves multiple members of the team. In many cases, decisions need to be made about participants who may not have provided authentic responses across the survey. The research team should make clear in any paper their decisions about whether to include or exclude participants from the study. There is a careful balancing act that can require assessing the quality of a participant’s responses across the whole dataset to determine whether the overall quality of responses contributes to the research.
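Some of the screens described in this section (strings of random characters or emojis, near-identical answers repeated across questions, uniformly minimal responses) can be sketched as simple heuristics that flag participants for closer reading. This is an illustrative sketch only: the thresholds are our assumptions, not validated cut-offs, and a flag should prompt human judgement rather than automatic exclusion:

```python
import re
from collections import Counter

def quality_flags(answers):
    """Heuristic screens over one participant's open-text answers, in survey order.

    Thresholds (0.5 letter ratio, 3 repeats, 2-word answers) are illustrative assumptions.
    """
    flags = []
    joined = " ".join(answers)
    # Mostly non-letter content (random characters, strings of digits or emojis).
    letters = sum(ch.isalpha() for ch in joined)
    if joined and letters / len(joined) < 0.5:
        flags.append("mostly_non_alphabetic")
    # Near-identical answers repeated across multiple questions.
    normalised = [re.sub(r"\W+", " ", a).strip().lower() for a in answers]
    if normalised and Counter(normalised).most_common(1)[0][1] >= 3:
        flags.append("repeated_answer")
    # Very short answers throughout the survey.
    if all(len(a.split()) <= 2 for a in answers):
        flags.append("uniformly_minimal")
    return flags

print(quality_flags(["idk", "idk", "idk", "asdfgh"]))  # ['repeated_answer', 'uniformly_minimal']
```

Heuristics like these would not have caught the Bee Movie script or pasted AI text on their own, which is why they complement, rather than replace, the familiarization and close reading described above.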
Navigating the volume of data and writing up results
Finally, discussions about how to navigate the volume of data that these types of studies produce could be a standalone paper. In general, principles of reflexive practices apply to the analysis of data from these studies. However, as a starting point, here are a few considerations when approaching these datasets.
We would argue that online qualitative surveys lend themselves to some types of analytical approaches over others—for example, reflexive thematic analysis, as compared to grounded theory or interpretive phenomenological analysis (though it can be used with these) ( Braun and Clarke, 2013 ; Terry and Braun, 2017 ).
While initial familiarization, coding and analysis can focus on specific questions and associated responses, it is important to analyse the dataset as a whole (or as clusters associated with particular topics) as participants may provide relevant data to a topic under multiple questions ( Terry and Braun, 2017 ). We initially focus our coding on specific questions or a group of survey questions under a topic of investigation. Once we have developed and constructed preliminary themes from the data associated with these clusters of questions, we then move to looking at responses across the dataset as we review themes further.
Researchers should think carefully about how to manage the data—which may not be available as ‘individual participant transcripts’ but rather as a ‘whole’ dataset in an Excel spreadsheet. Some may prefer qualitative data analysis software (QDAS) to manage and navigate data. However, many of us find that Excel (and particularly the use of labelled Tabs) is useful in grouping data and moving from codes to constructing themes.
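For teams who prefer a scripted complement to labelled Excel tabs, grouping coded extracts by code so they can be reviewed together is straightforward. The sheet below is hypothetical; the column names are our own illustration of what a coded-extract tab might contain, not a prescribed format:

```python
import csv
import io
from collections import defaultdict

# Hypothetical coded-extract sheet, as it might be exported to CSV from an Excel tab.
sheet = (
    "participant,question,code,extract\n"
    'P01,Q3,industry tactics,"Ads make vaping look harmless"\n'
    'P02,Q3,industry tactics,"They sponsor the footy"\n'
    'P01,Q5,policy support,"Ban ads near schools"\n'
)

# Group extracts by code so related data can be read together
# when moving from codes towards candidate themes.
by_code = defaultdict(list)
for row in csv.DictReader(io.StringIO(sheet)):
    by_code[row["code"]].append((row["participant"], row["extract"]))

for code, extracts in by_code.items():
    print(code, len(extracts))
```

Because participants may speak to a topic under several different questions, grouping by code (rather than by question) supports the whole-of-dataset review described above.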
As with all rigorous qualitative research, coding and theme development should be guided by the research questions. A clear record of decision-making about analytical choices (and being reflexive about these) should be kept. In any write-up, we would recommend that researchers are clear about which survey questions they used in the analysis [researchers could consider providing a supplementary file of some or all of the survey questions—see, for example Hennessy and O’Donoghue (2024) ].
In writing up the results, researchers should still seek to present a rich description of the data, as demonstrated in the presentation of results in the following papers ( Marko et al ., 2022a , 2022b ; McCarthy et al ., 2023 ; Pitt et al ., 2023 ; Hennessy and O’Donoghue, 2024 ). We have found the use of tables with additional examples of quotes as they relate to themes and subthemes can be a practical way of providing the reader with further examples of the data, particularly when constrained by journal word count limits [see, for example, Table 2 in Arnot et al ., (2024c) ]. However, these tables do not replace a full and complete presentation of the interpretation of the data.
This article offers methodological reflections and practical guidance around online qualitative survey design, implementation and analysis. While online qualitative surveys engage participants in a different type of conversation, they have design features that enable the collection of rich data. We recognize that we have much to learn and that while no survey of ours has been perfect, each new experience with developing and conducting online qualitative surveys has brought new understandings and lessons for future studies. In recognizing that we are learning, we also feel that our experience to date could be valuable for progressing the conversation about the rigour of online qualitative surveys and maximizing this method for public health gains.
H.P. is funded through a VicHealth Postdoctoral Research Fellowship. S.M. is funded through a Deakin University Faculty of Health Deans Postdoctoral Fellowship. G.A. is funded by an Australian Government Research Training Program Scholarship. M.H. is funded through an Irish Research Council Government of Ireland Postdoctoral Fellowship Award [GOIPD/2023/1168].
The pregnancy loss study was funded by the Irish Research Council through its New Foundations Awards and in partnership with the Irish Hospice Foundation as civil society partner [NF/2021/27123063].
S.T. is Editor in Chief of Health Promotion International, H.P. is a member of the Editorial Board of Health Promotion International, S.M. and G.A. are Social Media Coordinators for Health Promotion International, M.H. is an Associate Editor for Health Promotion International. They were not involved in the review process or in any decision-making on the manuscript.
The data used in this study are not available.
Ethical approval for the studies conducted by Deakin University includes: the climate crisis (HEAG-H 55_2020, HEAG-H 162_2021); parents' perceptions of harmful industries on young people (HEAG-H 158_2022); women and alcohol marketing (HEAG-H 123_2022); and gambling (HEAG 227_2020).
Arnot, G., Pitt, H., McCarthy, S., Cordedda, C., Marko, S. and Thomas, S. L. (2024a) Australian youth perspectives on the role of social media in climate action. Australian and New Zealand Journal of Public Health, 48, 100111.
Arnot, G., Pitt, H., McCarthy, S., Cordedda, C., Marko, S. and Thomas, S. L. (2024b) Australian youth perspectives on the role of social media in climate action. Australian and New Zealand Journal of Public Health, 48, 100111.
Arnot, G., Thomas, S., Pitt, H. and Warner, E. (2023a) Australian young people's perceptions of the commercial determinants of the climate crisis. Health Promotion International, 38, daad058.
Arnot, G., Thomas, S., Pitt, H. and Warner, E. (2023b) 'It shows we are serious': young people in Australia discuss climate justice protests as a mechanism for climate change advocacy and action. Australian and New Zealand Journal of Public Health, 47, 100048.
Arnot, G., Thomas, S., Pitt, H. and Warner, E. (2024c) Australian young people's perspectives about the political determinants of the climate crisis. Health Promotion Journal of Australia, 35, 196–206.
Braun, V. and Clarke, V. (2013) Successful Qualitative Research: A Practical Guide for Beginners. Sage, London.
Braun, V. and Clarke, V. (2021a) One size fits all? What counts as quality practice in (reflexive) thematic analysis? Qualitative Research in Psychology, 18, 328–352.
Braun, V. and Clarke, V. (2021b) To saturate or not to saturate? Questioning data saturation as a useful concept for thematic analysis and sample-size rationales. Qualitative Research in Sport, Exercise and Health, 13, 201–216.
Braun, V. and Clarke, V. (2021c) Can I use TA? Should I use TA? Should I not use TA? Comparing reflexive thematic analysis and other pattern-based qualitative analytic approaches. Counselling and Psychotherapy Research, 21, 37–47.
Braun, V. and Clarke, V. (2024) A critical review of the reporting of reflexive thematic analysis in Health Promotion International. Health Promotion International, 39, daae049.
Braun, V., Clarke, V., Boulton, E., Davey, L. and McEvoy, C. (2021) The online survey as a qualitative research tool. International Journal of Social Research Methodology, 24, 641–654.
Chen, J. (2023) Digitally dispersed, remotely engaged: interrogating participation in virtual photovoice. Qualitative Research, 23, 1535–1555.
Chowdhury, M. F. (2015) Coding, sorting and sifting of qualitative data analysis: debates and discussion. Quality & Quantity, 49, 1135–1143.
Collins, C. S. and Stockton, C. M. (2018) The central role of theory in qualitative research. International Journal of Qualitative Methods, 17, 160940691879747.
de Villiers, C., Farooq, M. B. and Molinari, M. (2022) Qualitative research interviews using online video technology—challenges and opportunities. Meditari Accountancy Research, 30, 1764–1782.
Edwards, R. and Holland, J. (2020) Reviewing challenges and the future for qualitative interviewing. International Journal of Social Research Methodology, 23, 581–592.
Elliott, R., Fischer, C. T. and Rennie, D. L. (1999) Evolving guidelines for publication of qualitative research studies in psychology and related fields. British Journal of Clinical Psychology, 38, 215–229.
Fan, H., Li, B., Pasaribu, T. and Chowdhury, R. (2024) Online interviews as new methodological normalcy and a space of ethics: an autoethnographic investigation into Covid-19 educational research. Qualitative Inquiry, 30, 333–344.
Finlay, L. (2021) Thematic analysis: the 'good', the 'bad' and the 'ugly'. European Journal for Qualitative Research in Psychotherapy, 11, 103–116.
Glaw, X., Inder, K., Kable, A. and Hazelton, M. (2017) Visual methodologies in qualitative research: autophotography and photo elicitation applied to mental health research. International Journal of Qualitative Methods, 16, 160940691774821.
Hennessy, M. and O'Donoghue, K. (2024) Bridging the gap between pregnancy loss research and policy and practice: insights from a qualitative survey with knowledge users. Health Research Policy and Systems, 22, 15.
Hensen, B., Mackworth-Young, C. R. S., Simwinga, M., Abdelmagid, N., Banda, J., Mavodza, C. et al. (2021) Remote data collection for public health research in a COVID-19 era: ethical implications, challenges and opportunities. Health Policy and Planning, 36, 360–368.
Jamie, K. and Rathbone, A. P. (2022) Using theory and reflexivity to preserve methodological rigour of data collection in qualitative research. Research Methods in Medicine & Health Sciences, 3, 11–21.
Kennedy, M., Maddox, R., Booth, K., Maidment, S., Chamberlain, C. and Bessarab, D. (2022) Decolonising qualitative research with respectful, reciprocal, and responsible research practice: a narrative review of the application of Yarning method in qualitative Aboriginal and Torres Strait Islander health research. International Journal for Equity in Health, 21, 134.
Levi, R., Ridberg, R., Akers, M. and Seligman, H. (2021) Survey fraud and the integrity of web-based survey research. American Journal of Health Promotion, 36, 18–20.
Malterud, K., Siersma, V. D. and Guassora, A. D. (2016) Sample size in qualitative interview studies: guided by information power. Qualitative Health Research, 26, 1753–1760.
Marko, S., Thomas, S., Pitt, H. and Daube, M. (2022a) 'Aussies love a bet': gamblers discuss the social acceptance and cultural accommodation of gambling in Australia. Australian and New Zealand Journal of Public Health, 46, 829–834.
Marko, S., Thomas, S. L., Robinson, K. and Daube, M. (2022b) Gamblers' perceptions of responsibility for gambling harm: a critical qualitative inquiry. BMC Public Health, 22, 725.
McCarthy, S., Thomas, S. L., Pitt, H., Warner, E., Roderique-Davies, G., Rintoul, A. et al. (2023) 'They loved gambling more than me'. Women's experiences of gambling-related harm as an affected other. Health Promotion Journal of Australia, 34, 284–293.
Pitt, H., McCarthy, S., Keric, D., Arnot, G., Marko, S., Martino, F. et al. (2023) The symbolic consumption processes associated with 'low-calorie' and 'low-sugar' alcohol products and Australian women. Health Promotion International, 38, 1–13.
Reed, M. S., Merkle, B. G., Cook, E. J., Hafferty, C., Hejnowicz, A. P., Holliman, R. et al. (2024) Reimagining the language of engagement in a post-stakeholder world. Sustainability Science.
Terry, G. and Braun, V. (2017) Short but often sweet: the surprising potential of qualitative survey methods. In Braun, V., Clarke, V. and Gray, D. (eds), Collecting Qualitative Data: A Practical Guide to Textual, Media and Virtual Techniques. Cambridge University Press, Cambridge.
Toerien, M. and Wilkinson, S. (2004) Exploring the depilation norm: a qualitative questionnaire study of women's body hair removal. Qualitative Research in Psychology, 1, 69–92.
Varpio, L. and Ellaway, R. H. (2021) Shaping our worldviews: a conversation about and of theory. Advances in Health Sciences Education: Theory and Practice, 26, 339–345.
Wang, J., Calderon, G., Hager, E. R., Edwards, L. V., Berry, A. A., Liu, Y. et al. (2023) Identifying and preventing fraudulent responses in online public health surveys: lessons learned during the COVID-19 pandemic. PLOS Global Public Health, 3, e0001452.