Design and Methods.
Table 8.1 is derived from ‘Three Approaches to Case Study Methods in Education: Yin, Merriam, and Stake’ by Bedrettin Yazan, licensed under CC BY-NC-SA 4.0. 5
There are several forms of qualitative case studies. 1,2 These include discovery-led case studies, theory-led case studies, and single and collective case studies. 2,9
In intrinsic, instrumental and illustrative case studies, the exploration takes place within a single case. In contrast, a collective case study includes multiple individual cases, and the exploration occurs both within and between cases. Collective case studies may include comparative cases, whereby cases are sampled to provide points of comparison for either the context or the phenomenon. Embedded case studies are increasingly common within multi-site randomised controlled trials, where each of the study sites is considered a case.
Multiple forms of data collection and methods of analysis (e.g. thematic, content, framework and constant comparative analyses) can be employed, since case studies are characterised by the depth of knowledge they provide and their nuanced approaches to understanding phenomena within context. 2,5 This approach enables triangulation between data sources (interviews, focus groups, participant observations), researchers and theory. Refer to Chapter 19 for information about triangulation.
Advantages of using a case study approach include the ability to explore the subtleties and intricacies of complex social situations, and the use of multiple data collection methods and data from multiple sources within the case, which enables rigour through triangulation. Collective case studies enable comparison and contrasting within and across cases.
However, it can be challenging to define the boundaries of the case and to gain the access needed for a ‘deep dive’ form of analysis. Participant observation, a common form of data collection, can introduce observer bias. Data collection can take a long time, and considerable resources and funding may be required to conduct the study. 9
Table 8.2 provides an example of a single case study and of a collective case study.
 | Single case study: Nayback-Beebe, 2012 | Collective case study: Clack, 2018 |
---|---|---|
Aim | ‘The purpose of this phenomenological qualitative case study… was to gain a holistic understanding of the lived-experience of a male victim of intimate partner violence and the real-life context in which the violence emerged.’ | ‘in-depth investigation of the main barriers, facilitators and contextual factors relevant to successfully implementing these strategies in European acute care hospitals’ |
Research question(s) | ‘What is the lived experience of living in and leaving an abusive intimate relationship for a white middle class male?’ | ‘(1) what are the main barriers and facilitators to successfully implementing CRBSI prevention procedures?; and (2) what role do contextual factors play?’ |
Design | A single, intrinsic qualitative case study. Following Yin’s case study approach, the authors wished to uncover the contextual conditions relevant to the phenomenon under study – living in and leaving an abusive intimate relationship as a white, middle-class male – and to understand and explore the contextual conditions related to female-to-male perpetrated intimate partner violence. | A qualitative comparative case study of 6 of the 14 hospitals participating in the Prevention of Hospital Infections by Intervention and Training (PROHIBIT) randomised controlled study on the prevention of catheter-related bloodstream infection (CRBSI). The case study examined contextual factors that affect the implementation of an intervention, particularly across culturally, politically and economically diverse hospital settings in Europe. |
Setting | United States of America. Insights from the case study provide nurses with an understanding that intimate partner violence occurs in the lives of both men and women, and should be considered in inpatient and outpatient settings. | European acute-care hospitals participating in the PROHIBIT randomised controlled trial. |
Data collection | Three in-depth interviews conducted over one month. The participant was a 44-year-old man who met the following inclusion criteria: a self-reported survivor of physical, emotional or verbal abuse, harassment and/or humiliation by a current or former partner; the violence occurred in the context of a heterosexual relationship; and he was in the process of leaving, or had left, the relationship. | Data were collected before and after the implementation of an intervention and included 129 interviews (133 hours) with hospital administration, IPC and ICU leadership and staff, and telephone interviews with onsite investigators, alongside 41 hours of direct observations. |
Analysis | Existential phenomenology following Colaizzi’s method for data analysis. | Thematic analysis was inductive (first site visit) and deductive (second site visit), with cross-case analysis using a stacking technique; cases were grouped according to common characteristics, and differences and similarities were examined. |
Findings | Theme 1: Living in the relationship – confrontation from within. Theme 2: Living in the relationship – confrontation from without. Theme 3: Leaving the relationship – realisation and relinquishment. Overarching theme: Living with a knot in your stomach. | Three meta-themes were identified: implementation agendas; resourcing; and boundary spanning. |
Qualitative case studies provide a study design with diverse methods to examine the contextual factors relevant to understanding the why and how of a phenomenon within a case. The design incorporates single case studies and collective cases, which can also be embedded within randomised controlled trials as a form of process evaluation.
Qualitative Research – a practical guide for health and social care researchers and practitioners. Copyright © 2023 by Darshini Ayton, licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.
Published on June 19, 2020 by Pritha Bhandari. Revised on June 22, 2023.
Qualitative research involves collecting and analyzing non-numerical data (e.g., text, video, or audio) to understand concepts, opinions, or experiences. It can be used to gather in-depth insights into a problem or generate new ideas for research.
Qualitative research is the opposite of quantitative research, which involves collecting and analyzing numerical data for statistical analysis.
Qualitative research is commonly used in the humanities and social sciences, in subjects such as anthropology, sociology, education, health sciences, history, etc.
Qualitative research is used to understand how people experience the world. While there are many approaches to qualitative research, they tend to be flexible and focus on retaining rich meaning when interpreting data.
Common approaches include grounded theory, ethnography, action research, phenomenological research, and narrative research. They share some similarities, but emphasize different aims and perspectives.
Approach | What does it involve? |
---|---|
Grounded theory | Researchers collect rich data on a topic of interest and develop theories. |
Ethnography | Researchers immerse themselves in groups or organizations to understand their cultures. |
Action research | Researchers and participants collaboratively link theory to practice to drive social change. |
Phenomenological research | Researchers investigate a phenomenon or event by describing and interpreting participants’ lived experiences. |
Narrative research | Researchers examine how stories are told to understand how participants perceive and make sense of their experiences. |
Note that qualitative research is at risk for certain research biases, including the Hawthorne effect, observer bias, recall bias, and social desirability bias. While not always totally avoidable, awareness of potential biases as you collect and analyze your data can prevent them from impacting your work too much.
Each of the research approaches involves using one or more data collection methods, such as interviews, focus groups, observations, or the collection of existing texts and other secondary data.
Qualitative researchers often consider themselves “instruments” in research because all observations, interpretations and analyses are filtered through their own personal lens.
For this reason, when writing up your methodology for qualitative research, it’s important to reflect on your approach and to thoroughly explain the choices you made in collecting and analyzing the data.
Qualitative data can take the form of texts, photos, videos and audio. For example, you might be working with interview transcripts, survey responses, fieldnotes, or recordings from natural settings.
Most types of qualitative data analysis share the same five steps:
1. Prepare and organize the data (e.g., transcribe interviews).
2. Review and explore the data.
3. Develop a data coding system.
4. Assign codes to the data.
5. Identify recurring themes.
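As a deliberately simplified illustration of the coding and theme-identification steps, here is a Python sketch that tags invented interview excerpts against an invented keyword codebook. Real qualitative coding is interpretive and cannot be reduced to keyword matching; this only shows the mechanical shape of assigning codes and tallying them.

```python
from collections import defaultdict

# Hypothetical codebook: each code is triggered by a few keywords.
# Both the codes and the keywords are invented for illustration.
codebook = {
    "access_barriers": ["waiting list", "cost", "travel"],
    "trust_in_staff": ["trusted", "listened", "respect"],
}

# Invented excerpts standing in for transcript segments.
excerpts = [
    "The waiting list was so long that I gave up, and the travel was hard.",
    "I felt the nurse really listened to me and treated me with respect.",
    "The cost of parking alone put me off attending.",
]

# Assign codes to each excerpt by simple keyword matching.
coded = defaultdict(list)
for i, text in enumerate(excerpts):
    for code, keywords in codebook.items():
        if any(kw in text.lower() for kw in keywords):
            coded[code].append(i)

# Recurring codes (candidate themes), ordered by how many excerpts they cover.
for code, segments in sorted(coded.items(), key=lambda kv: -len(kv[1])):
    print(f"{code}: {len(segments)} excerpt(s) -> {segments}")
# access_barriers: 2 excerpt(s) -> [0, 2]
# trust_in_staff: 1 excerpt(s) -> [1]
```

In practice, software such as NVivo or ATLAS.ti supports this kind of bookkeeping, while the researcher supplies the interpretive judgement.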
There are several specific approaches to analyzing qualitative data. Although these methods share similar processes, they emphasize different concepts.
Approach | When to use | Example |
---|---|---|
Content analysis | To describe and categorize common words, phrases, and ideas in qualitative data. | A market researcher could perform content analysis to find out what kind of language is used in descriptions of therapeutic apps. |
Thematic analysis | To identify and interpret patterns and themes in qualitative data. | A psychologist could apply thematic analysis to travel blogs to explore how tourism shapes self-identity. |
Textual analysis | To examine the content, structure, and design of texts. | A media researcher could use textual analysis to understand how news coverage of celebrities has changed in the past decade. |
Discourse analysis | To study communication and how language is used to achieve effects in specific contexts. | A political scientist could use discourse analysis to study how politicians generate trust in election campaigns. |
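As a concrete, hypothetical instance of content analysis, the sketch below counts how often candidate terms appear across a few invented descriptions of therapeutic apps. A frequency table like this is only a starting point; categorizing and interpreting the language is the substantive work.

```python
import re
from collections import Counter

# Invented app descriptions standing in for real data.
descriptions = [
    "Find calm and reduce stress with guided meditation.",
    "Track your mood and build healthy habits for a calm mind.",
    "Reduce stress and sleep better with evidence-based exercises.",
]

# Candidate terms chosen by the (hypothetical) researcher.
terms = ["calm", "stress", "mood", "sleep"]

counts = Counter()
for text in descriptions:
    words = re.findall(r"[a-z]+", text.lower())  # tokenize into lowercase words
    for term in terms:
        counts[term] += words.count(term)

print(counts.most_common())
# [('calm', 2), ('stress', 2), ('mood', 1), ('sleep', 1)]
```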
Qualitative research often tries to preserve the voice and perspective of participants and can be adjusted as new research questions arise. Qualitative research is good for:
• Flexibility: the data collection and analysis process can be adapted as new ideas or patterns emerge. They are not rigidly decided beforehand.
• Natural settings: data collection occurs in real-world contexts or in naturalistic ways.
• Meaningful insights: detailed descriptions of people’s experiences, feelings and perceptions can be used in designing, testing or improving systems or products.
• Generation of new ideas: open-ended responses mean that researchers can uncover novel problems or opportunities that they wouldn’t have thought of otherwise.
Researchers must consider practical and theoretical limitations in analyzing and interpreting their data. Qualitative research suffers from:
• Unreliability: the real-world setting often makes qualitative research unreliable because of uncontrolled factors that affect the data.
• Subjectivity: due to the researcher’s primary role in analyzing and interpreting data, qualitative research cannot be replicated. The researcher decides what is important and what is irrelevant in data analysis, so interpretations of the same data can vary greatly.
• Limited generalizability: small samples are often used to gather detailed data about specific contexts. Despite rigorous analysis procedures, it is difficult to draw generalizable conclusions because the data may be biased and unrepresentative of the wider population.
• Labor-intensive analysis: although software can be used to manage and record large amounts of text, data analysis often has to be checked or performed manually.
Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.
Quantitative methods allow you to systematically measure variables and test hypotheses. Qualitative methods allow you to explore concepts and experiences in more detail.
There are five common approaches to qualitative research: grounded theory, ethnography, action research, phenomenological research, and narrative research.
Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.
There are various approaches to qualitative data analysis, but they all share five steps in common: preparing and organizing the data, reviewing and exploring it, developing a coding system, assigning codes, and identifying recurring themes.
The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis, thematic analysis, and discourse analysis.
Bhandari, P. (2023, June 22). What Is Qualitative Research? | Methods & Examples. Scribbr. Retrieved July 2, 2024, from https://www.scribbr.com/methodology/qualitative-research/
Despite on-going debate about credibility, and reported limitations in comparison to other approaches, case study is an increasingly popular approach among qualitative researchers. We critically analysed the methodological descriptions of published case studies. Three high-impact qualitative methods journals were searched to locate case studies published in the past 5 years; 34 were selected for analysis. Articles were categorized as health and health services (n = 12), social sciences and anthropology (n = 7), or methods (n = 15) case studies. The articles were reviewed using an adapted version of established criteria to determine whether adequate methodological justification was present, and whether study aims, methods, and reported findings were consistent with a qualitative case study approach. Findings were grouped into five themes outlining key methodological issues: case study methodology or method; case of something particular and case selection; contextually bound case study; researcher and case interactions and triangulation; and study design inconsistent with methodology reported. Improved reporting of case studies by qualitative researchers will advance the methodology for the benefit of researchers and practitioners.
Case study research is an increasingly popular approach among qualitative researchers (Thomas, 2011 ). Several prominent authors have contributed to methodological developments, which has increased the popularity of case study approaches across disciplines (Creswell, 2013b ; Denzin & Lincoln, 2011b ; Merriam, 2009 ; Ragin & Becker, 1992 ; Stake, 1995 ; Yin, 2009 ). Current qualitative case study approaches are shaped by paradigm, study design, and selection of methods, and, as a result, case studies in the published literature vary. Differences between published case studies can make it difficult for researchers to define and understand case study as a methodology.
Experienced qualitative researchers have identified case study research as a stand-alone qualitative approach (Denzin & Lincoln, 2011b ). Case study research has a level of flexibility that is not readily offered by other qualitative approaches such as grounded theory or phenomenology. Case studies are designed to suit the case and research question and published case studies demonstrate wide diversity in study design. There are two popular case study approaches in qualitative research. The first, proposed by Stake ( 1995 ) and Merriam ( 2009 ), is situated in a social constructivist paradigm, whereas the second, by Yin ( 2012 ), Flyvbjerg ( 2011 ), and Eisenhardt ( 1989 ), approaches case study from a post-positivist viewpoint. Scholarship from both schools of inquiry has contributed to the popularity of case study and development of theoretical frameworks and principles that characterize the methodology.
The diversity of case studies reported in the published literature, and on-going debates about credibility and the use of case study in qualitative research practice, suggest that differences in perspectives on case study methodology may prevent researchers from developing a mutual understanding of practice and rigour. In addition, discussion about case study limitations has led some authors to query whether case study is indeed a methodology (Luck, Jackson, & Usher, 2006; Meyer, 2001; Thomas, 2010; Tight, 2010). Methodological discussion of qualitative case study research is timely, and a review is required to analyse and understand how this methodology is applied in the qualitative research literature. The aims of this study were to review methodological descriptions of published qualitative case studies, to review how the case study methodological approach was applied, and to identify issues that need to be addressed by researchers, editors, and reviewers. An outline of the current definitions of case study and an overview of the issues raised in the qualitative methodological literature are provided to set the scene for the review.
Case study research is an investigation and analysis of a single or collective case, intended to capture the complexity of the object of study (Stake, 1995 ). Qualitative case study research, as described by Stake ( 1995 ), draws together “naturalistic, holistic, ethnographic, phenomenological, and biographic research methods” in a bricoleur design, or in his words, “a palette of methods” (Stake, 1995 , pp. xi–xii). Case study methodology maintains deep connections to core values and intentions and is “particularistic, descriptive and heuristic” (Merriam, 2009 , p. 46).
As a study design, case study is defined by interest in individual cases rather than the methods of inquiry used. The selection of methods is informed by researcher and case intuition and makes use of naturally occurring sources of knowledge, such as people or observations of interactions that occur in the physical space (Stake, 1998 ). Thomas ( 2011 ) suggested that “analytical eclecticism” is a defining factor (p. 512). Multiple data collection and analysis methods are adopted to further develop and understand the case, shaped by context and emergent data (Stake, 1995 ). This qualitative approach “explores a real-life, contemporary bounded system (a case ) or multiple bounded systems (cases) over time, through detailed, in-depth data collection involving multiple sources of information … and reports a case description and case themes ” (Creswell, 2013b , p. 97). Case study research has been defined by the unit of analysis, the process of study, and the outcome or end product, all essentially the case (Merriam, 2009 ).
The case is an object to be studied for an identified reason that is peculiar or particular. Classification of the case and case selection procedures informs development of the study design and clarifies the research question. Stake ( 1995 ) proposed three types of cases and study design frameworks. These include the intrinsic case, the instrumental case, and the collective instrumental case. The intrinsic case is used to understand the particulars of a single case, rather than what it represents. An instrumental case study provides insight on an issue or is used to refine theory. The case is selected to advance understanding of the object of interest. A collective refers to an instrumental case which is studied as multiple, nested cases, observed in unison, parallel, or sequential order. More than one case can be simultaneously studied; however, each case study is a concentrated, single inquiry, studied holistically in its own entirety (Stake, 1995 , 1998 ).
Researchers who use case study are urged to seek out what is common and what is particular about the case. This involves careful and in-depth consideration of the nature of the case, historical background, physical setting, and other institutional and political contextual factors (Stake, 1998 ). An interpretive or social constructivist approach to qualitative case study research supports a transactional method of inquiry, where the researcher has a personal interaction with the case. The case is developed in a relationship between the researcher and informants, and presented to engage the reader, inviting them to join in this interaction and in case discovery (Stake, 1995 ). A postpositivist approach to case study involves developing a clear case study protocol with careful consideration of validity and potential bias, which might involve an exploratory or pilot phase, and ensures that all elements of the case are measured and adequately described (Yin, 2009 , 2012 ).
The future of qualitative research will be influenced and constructed by the way research is conducted, and by what is reviewed and published in academic journals (Morse, 2011 ). If case study research is to further develop as a principal qualitative methodological approach, and make a valued contribution to the field of qualitative inquiry, issues related to methodological credibility must be considered. Researchers are required to demonstrate rigour through adequate descriptions of methodological foundations. Case studies published without sufficient detail for the reader to understand the study design, and without rationale for key methodological decisions, may lead to research being interpreted as lacking in quality or credibility (Hallberg, 2013 ; Morse, 2011 ).
There is a level of artistic license that is embraced by qualitative researchers and distinguishes practice, which nurtures creativity, innovation, and reflexivity (Denzin & Lincoln, 2011b ; Morse, 2009 ). Qualitative research is “inherently multimethod” (Denzin & Lincoln, 2011a , p. 5); however, with this creative freedom, it is important for researchers to provide adequate description for methodological justification (Meyer, 2001 ). This includes paradigm and theoretical perspectives that have influenced study design. Without adequate description, study design might not be understood by the reader, and can appear to be dishonest or inaccurate. Reviewers and readers might be confused by the inconsistent or inappropriate terms used to describe case study research approach and methods, and be distracted from important study findings (Sandelowski, 2000 ). This issue extends beyond case study research, and others have noted inconsistencies in reporting of methodology and method by qualitative researchers. Sandelowski ( 2000 , 2010 ) argued for accurate identification of qualitative description as a research approach. She recommended that the selected methodology should be harmonious with the study design, and be reflected in methods and analysis techniques. Similarly, Webb and Kevern ( 2000 ) uncovered inconsistencies in qualitative nursing research with focus group methods, recommending that methodological procedures must cite seminal authors and be applied with respect to the selected theoretical framework. Incorrect labelling using case study might stem from the flexibility in case study design and non-directional character relative to other approaches (Rosenberg & Yates, 2007 ). Methodological integrity is required in design of qualitative studies, including case study, to ensure study rigour and to enhance credibility of the field (Morse, 2011 ).
Case study has been unnecessarily devalued by comparisons with statistical methods (Eisenhardt, 1989; Flyvbjerg, 2006, 2011; Jensen & Rodgers, 2001; Piekkari, Welch, & Paavilainen, 2009; Tight, 2010; Yin, 1999). It is reputed to be “the weak sibling” in comparison to other, more rigorous, approaches (Yin, 2009, p. xiii). Case study is not an inherently comparative approach to research. The objective is not statistical research, and the aim is not to produce outcomes that are generalizable to all populations (Thomas, 2011). Comparisons between case study and statistical research do little to advance this qualitative approach, and fail to recognize its inherent value, which can be better understood from the interpretive or social constructionist viewpoint of other authors (Merriam, 2009; Stake, 1995). Building on discussions relating to “fuzzy” (Bassey, 2001) or naturalistic generalizations (Stake, 1978), or to the transference of concepts and theories (Ayres, Kavanaugh, & Knafl, 2003; Morse et al., 2011), would have more relevance.
Case study research has been used as a catch-all design to justify or add weight to fundamental qualitative descriptive studies that do not fit with other traditional frameworks (Merriam, 2009). A case study has been a “convenient label for our research—when we ‘can't think of anything better’—in an attempt to give it [qualitative methodology] some added respectability” (Tight, 2010, p. 337). Qualitative case study research is a pliable approach (Merriam, 2009; Meyer, 2001; Stake, 1995), and has been likened to a “curious methodological limbo” (Gerring, 2004, p. 341) or “paradigmatic bridge” (Luck et al., 2006, p. 104) that is on the borderline between postpositivist and constructionist interpretations. This has resulted in inconsistency in application, which indicates that flexibility comes with limitations (Meyer, 2001), and the open nature of case study research might be off-putting to novice researchers (Thomas, 2011). The development of a well-(in)formed theoretical framework to guide a case study should improve consistency, rigour, and trust in studies published in qualitative research journals (Meyer, 2001).
The purpose of this study was to analyse the methodological descriptions of case studies published in qualitative methods journals. To do this we needed to develop a suitable framework, which used existing, established criteria for appraising the rigour of qualitative case study research (Creswell, 2013b; Merriam, 2009; Stake, 1995). A number of qualitative authors have developed concepts and criteria that are used to determine whether a study is rigorous (Denzin & Lincoln, 2011b; Lincoln, 1995; Sandelowski & Barroso, 2002). The criteria proposed by Stake (1995) provide a framework for readers and reviewers to make judgements regarding case study quality, and identify key characteristics essential for good methodological rigour. Although each of the factors listed in Stake's criteria could enhance the quality of a qualitative research report, in Table I we present the adapted criteria used in this study, which integrate more recent work by Merriam (2009) and Creswell (2013b). Stake's (1995) original criteria were separated into two categories. The first list of general criteria is “relevant for all qualitative research.” The second list, “high relevance to qualitative case study research,” contains the criteria that we decided had higher relevance to case study research. This second list was the main criteria used to assess the methodological descriptions of the case studies reviewed. The complete table has been preserved so that the reader can determine how the original criteria were adapted.
Framework for assessing quality in qualitative case study research.
Checklist for assessing the quality of a case study report |
---|
Relevant for all qualitative research |
1. Is this report easy to read? |
2. Does it fit together, each sentence contributing to the whole? |
3. Does this report have a conceptual structure (i.e., themes or issues)? |
4. Are its issues developed in a serious and scholarly way? |
5. Have quotations been used effectively? |
6. Has the writer made sound assertions, neither over- nor under-interpreting? |
7. Are headings, figures, artefacts, appendices, indexes effectively used? |
8. Was it edited well, then again with a last minute polish? |
9. Were sufficient raw data presented? |
10. Is the nature of the intended audience apparent? |
11. Does it appear that individuals were put at risk? |
High relevance to qualitative case study research |
12. Is the case adequately defined? |
13. Is there a sense of story to the presentation? |
14. Is the reader provided some vicarious experience? |
15. Has adequate attention been paid to various contexts? |
16. Were data sources well-chosen and in sufficient number? |
17. Do observations and interpretations appear to have been triangulated? |
18. Is the role and point of view of the researcher nicely apparent? |
19. Is empathy shown for all sides? |
20. Are personal intentions examined? |
Added from Merriam (2009) |
21. Is the case study particular? |
22. Is the case study descriptive? |
23. Is the case study heuristic? |
Added from Creswell (2013b) |
24. Was study design appropriate to methodology? |
Adapted from Stake ( 1995 , p. 131).
The critical review method described by Grant and Booth ( 2009 ) was used, which is appropriate for the assessment of research quality, and is used for literature analysis to inform research and practice. This type of review goes beyond the mapping and description of scoping or rapid reviews, to include “analysis and conceptual innovation” (Grant & Booth, 2009 , p. 93). A critical review is used to develop existing, or produce new, hypotheses or models. This is different to systematic reviews that answer clinical questions. It is used to evaluate existing research and competing ideas, to provide a “launch pad” for conceptual development and “subsequent testing” (Grant & Booth, 2009 , p. 93).
Qualitative methods journals were located by a search of the 2011 ISI Journal Citation Reports in Social Science, via the database Web of Knowledge (see m.webofknowledge.com). No “qualitative research methods” category existed in the citation reports; therefore, a search of all categories was performed using the term “qualitative.” In Table II , we present the qualitative methods journals located, ranked by impact factor. The highest ranked journals were selected for searching. We acknowledge that the impact factor ranking system might not be the best measure of journal quality (Cheek, Garnham, & Quan, 2006 ); however, this was the most appropriate and accessible method available.
International Journal of Qualitative Studies on Health and Well-being.
Journal title | 2011 impact factor | 5-year impact factor |
---|---|---|
 | 2.188 | 2.432 |
 | 1.426 | N/A |
 | 0.839 | 1.850 |
 | 0.780 | N/A |
 | 0.612 | N/A |
In March 2013, searches of the journals Qualitative Health Research, Qualitative Research, and Qualitative Inquiry were completed to retrieve studies with “case study” in the abstract field. The search was limited to the past 5 years (1 January 2008 to 1 March 2013). The objective was to locate published qualitative case studies suitable for assessment using the adapted criteria. Viewpoints, commentaries, and other article types were excluded from review. Titles and abstracts of the 45 retrieved articles were read by the first author, who identified 34 empirical case studies for review. All authors reviewed the 34 studies to confirm selection and categorization. In Table III, we present the 34 case studies grouped by journal and categorized by research topic: health sciences, social sciences and anthropology, and methods research. There was a discrepancy in the categorization of one article on pedagogy and a new teaching method published in Qualitative Inquiry (Jorrín-Abellán, Rubia-Avi, Anguita-Martínez, Gómez-Sánchez, & Martínez-Mones, 2008); the consensus was to allocate it to the methods category.
Outcomes of search of qualitative methods journals.
| Journal title | Date of search | Number of studies located | Number of full-text studies extracted | Health sciences | Social sciences and anthropology | Methods |
|---|---|---|---|---|---|---|
| Qualitative Health Research | 4 Mar 2013 | 18 | 16 | Barone; Bronken et al.; Colón-Emeric et al.; Fourie and Theron; Gallagher et al.; Gillard et al.; Hooghe et al.; Jackson et al.; Ledderer; Mawn et al.; Roscigno et al.; Rytterström et al. | Nil | Austin, Park, and Goble; Broyles, Rodriguez, Price, Bayliss, and Sevick; De Haene et al.; Fincham et al. |
| Qualitative Research | 7 Mar 2013 | 11 | 7 | Nil | Adamson and Holloway; Coltart and Henwood | Buckley and Waring; Cunsolo Willox et al.; Edwards and Weller; Gratton and O'Donnell; Sumsion |
| Qualitative Inquiry | 4 Mar 2013 | 16 | 11 | Nil | Buzzanell and D'Enbeau; D'Enbeau et al.; Nagar-Ron and Motzafi-Haller; Snyder-Young; Yeh | Ajodhia-Andrews and Berman; Alexander et al.; Jorrín-Abellán et al.; Nairn and Panelli; Nespor; Wimpenny and Savin-Baden |
| Total | | 45 | 34 | 12 | 7 | 15 |
In Table III, we report the number of studies located and the final numbers selected for review. Qualitative Health Research published the most empirical case studies (n = 16). In the health category, there were 12 case studies of health conditions, health services, and health policy issues, all published in Qualitative Health Research. Seven case studies were categorized as social sciences and anthropology research, which combined case study with biography and ethnography methodologies. All three journals published case studies on methods research to illustrate a data collection or analysis technique, methodological procedure, or related issue.
The methodological descriptions of the 34 case studies were critically reviewed using the adapted criteria. All articles reviewed contained a description of study methods; however, the length, amount of detail, and position of the description in the article varied. Few studies provided an accurate description of, and rationale for, using a qualitative case study approach. Of the 34 case studies reviewed, three described a theoretical framework informed by Stake (1995), two by Yin (2009), and three provided a mixed framework informed by various authors, which might have included both Yin and Stake. Few studies described their case study design or included a rationale that explained why they excluded or added further procedures, and whether this was to enhance the study design or to better suit the research question. In 26 of the studies, no reference was provided to principal case study authors. From reviewing the descriptions of methods, few authors provided a description or justification of case study methodology that demonstrated how their study was informed by the existing methodological literature on this approach.
The methodological descriptions of each study were reviewed using the adapted criteria, and the following issues were identified: case study methodology or method; case of something particular and case selection; contextually bound case study; researcher and case interactions and triangulation; and study design inconsistent with methodology. An outline of how these issues were developed from the critical review is provided, followed by a discussion of how they relate to the current methodological literature.
A third of the case studies reviewed appeared to use a case report method, not case study methodology as described by principal authors (Creswell, 2013b; Merriam, 2009; Stake, 1995; Yin, 2009). Case studies were identified as case reports because of missing methodological detail and by review of the study aims and purpose. These reports presented data for small samples of no more than three people, places, or phenomena. Four studies, or “case reports,” were single cases selected retrospectively from larger studies (Bronken, Kirkevold, Martinsen, & Kvigne, 2012; Coltart & Henwood, 2012; Hooghe, Neimeyer, & Rober, 2012; Roscigno et al., 2012). Case reports were not a case of something; instead, they were a case demonstration or an example presented in a report. These reports presented outcomes and reported on how the case could be generalized. Descriptions focussed on the phenomena rather than the case itself, and did not appear to study the case in its entirety.
Case reports had minimal in-text references to case study methodology, and were informed by other qualitative traditions or secondary sources (Adamson & Holloway, 2012; Buzzanell & D'Enbeau, 2009; Nagar-Ron & Motzafi-Haller, 2011). This does not suggest that case study methodology cannot be multimethod; however, the methodology should be consistent in design and clearly described (Meyer, 2001; Stake, 1995), and should maintain focus on the case (Creswell, 2013b).
To demonstrate how case reports were identified, three examples are provided. In the first, Yeh (2013) described the study as follows: “the examination of the emergence of vegetarianism in Victorian England serves as a case study to reveal the relationships between boundaries and entities” (p. 306). The findings were a historical case report, which resulted from an ethnographic study of vegetarianism. Cunsolo Willox, Harper, Edge, ‘My Word’: Storytelling and Digital Media Lab, and Rigolet Inuit Community Government (2013) used “a case study that illustrates the usage of digital storytelling within an Inuit community” (p. 130). This case study reported how digital storytelling can be used with indigenous communities as a participatory method, to illuminate the benefits of this method for other studies. The “case study was conducted in the Inuit community” but did not include the Inuit community in case analysis (Cunsolo Willox et al., 2013, p. 130). Bronken et al. (2012) provided a single case report to demonstrate issues observed in a larger clinical study of aphasia and stroke, without adequate case description or analysis.
Case selection is a precursor to case analysis, which needs to be presented as a convincing argument (Merriam, 2009). Descriptions of the case were often not adequate to ascertain why the case was selected, or whether it was a particular exemplar or outlier (Thomas, 2011). In a number of case studies in the health and social science categories, it was not explicit whether the case was of something particular, or peculiar to their discipline or field (Adamson & Holloway, 2012; Bronken et al., 2012; Colón-Emeric et al., 2010; Jackson, Botelho, Welch, Joseph, & Tennstedt, 2012; Mawn et al., 2010; Snyder-Young, 2011). There were exceptions in the methods category (Table III), where cases were selected by researchers to report on a new or innovative method. The cases emerged through heuristic study, and were reported to be particular, relative to the existing methods literature (Ajodhia-Andrews & Berman, 2009; Buckley & Waring, 2013; Cunsolo Willox et al., 2013; De Haene, Grietens, & Verschueren, 2010; Gratton & O'Donnell, 2011; Sumsion, 2013; Wimpenny & Savin-Baden, 2012).
Case selection processes were sometimes insufficient to understand why the case was selected from the global population of cases, or what study of this case would contribute to knowledge compared with other possible cases (Adamson & Holloway, 2012; Bronken et al., 2012; Colón-Emeric et al., 2010; Jackson et al., 2012; Mawn et al., 2010). In two studies, local cases were selected (Barone, 2010; Fourie & Theron, 2012) because the researcher was familiar with and had access to the case; the possible limitations of a convenience sample were not acknowledged. In one study, purposeful sampling was used to recruit participants within the case, but not to select the case itself (Gallagher et al., 2013). Random sampling was used for case selection in two studies (Colón-Emeric et al., 2010; Jackson et al., 2012), which has limited meaning in interpretive qualitative research.
To demonstrate how researchers provided good justification for their selection of case study approaches, four examples are given. In the first, cases of residential care homes were selected because of reported occurrences of mistreatment, which included residents being locked in rooms at night (Rytterström, Unosson, & Arman, 2013). Roscigno et al. (2012) selected cases of parents who were admitted for early hospitalization in neonatal intensive care with a threatened preterm delivery before 26 weeks. Hooghe et al. (2012) used random sampling to select 20 couples who had experienced the death of a child; however, the case study was of one couple and a particular metaphor described only by them. In the final example, Coltart and Henwood (2012) provided a detailed account of how they selected two cases from a sample of 46 fathers based on personal characteristics and beliefs, and described how the analysis of the two cases would contribute to their larger study on first-time fathers and parenting.
The limits or boundaries of the case are a defining factor of case study methodology (Merriam, 2009; Ragin & Becker, 1992; Stake, 1995; Yin, 2009). Adequate contextual description is required to understand the setting or context in which the case is revealed. In the health category, case studies were used to illustrate a clinical phenomenon or issue such as compliance and health behaviour (Colón-Emeric et al., 2010; D'Enbeau, Buzzanell, & Duckworth, 2010; Gallagher et al., 2013; Hooghe et al., 2012; Jackson et al., 2012; Roscigno et al., 2012). In these case studies, contextual boundaries, such as physical and institutional descriptions, were not sufficient to understand the case as a holistic system, for example, the general practitioner (GP) clinic in Gallagher et al. (2013), or the nursing home in Colón-Emeric et al. (2010). Similarly, in the social science and methods categories, attention was paid to some components of the case context, but not others, missing important information required to understand the case as a holistic system (Alexander, Moreira, & Kumar, 2012; Buzzanell & D'Enbeau, 2009; Nairn & Panelli, 2009; Wimpenny & Savin-Baden, 2012).
In two studies, vicarious experience or vignettes (Nairn & Panelli, 2009) and images (Jorrín-Abellán et al., 2008) were effective in supporting description of context, and might have been a useful addition for other case studies. Missing contextual boundaries suggest that the case might not be adequately defined. Additional information, such as the physical, institutional, political, and community context, would improve understanding of the case (Stake, 1998). In Boxes 1 and 2, we present brief synopses of two reviewed studies that demonstrated a well-bounded case. In Box 1, Ledderer (2011) used a qualitative case study design informed by Stake's tradition; in Box 2, Gillard, Witt, and Watts (2011) were informed by Yin's tradition. These outlines demonstrate how effective case boundaries can be constructed and reported, which may be of particular interest to prospective case study researchers.
Ledderer (2011) used a qualitative case study research design, informed by modern ethnography. The study was bounded to 10 general practice clinics in Denmark, which had received federal funding to implement preventative care services based on a motivational interviewing intervention. The research question focussed on “why is it so difficult to create change in medical practice?” (Ledderer, 2011, p. 27). The study context was adequately described, providing detail on the general practitioner (GP) clinics and relevant political and economic influences. Methodological decisions are described in a first-person narrative, providing insight into researcher perspectives and interaction with the case. Forty-four interviews were conducted, which focussed on how GPs conducted consultations, and on their form, nature, and content, rather than asking for opinion or experience (Ledderer, 2011, p. 30). The duration and intensity of researcher immersion in the case enhanced the depth of description and the trustworthiness of study findings. Analysis was consistent with Stake's tradition, and the researcher provided examples of inquiry techniques used to challenge assumptions about emerging themes. Several other seminal qualitative works were cited. The themes and typology constructed are rich in narrative data and storytelling by clinic staff, demonstrating individual clinic experiences as well as shared meanings and understandings about changing from a biomedical to a psychological approach to preventative health intervention. Conclusions make note of social and cultural meanings and lessons learned, which might not have been uncovered using a different methodology.
Gillard et al.'s (2011) study of camps for adolescents living with HIV/AIDS provided a good example of Yin's interpretive case study approach. The context of the case is bounded by three summer camps with which the researchers had prior professional involvement. A case study protocol was developed that used multiple methods to gather information at three data collection points coinciding with the three youth camps (Teen Forum, Discover Camp, and Camp Strong). Gillard and colleagues followed Yin's (2009) principles, using a consistent data protocol that enhanced cross-case analysis. Data described the young people, the camp physical environment, the camp schedule, objectives and outcomes, and the staff of the three youth camps. The findings provided a detailed description of the context, with less detail on individual participants, including insight into the researchers' interpretations and methodological decisions throughout the data collection and analysis process. The findings give the reader a sense of “being there,” and are discovered through constant comparison of the case with the research issues; the case is the unit of analysis. There is evidence of researcher immersion in the case, and Gillard reports spending significant time in the field in a naturalistic and integrated youth mentor role.
This case study is not intended to have a significant impact on broader health policy, although it does have implications for health professionals working with adolescents. Study conclusions will inform future camps for young people with chronic disease, and practitioners are able to compare similarities between this case and their own practice (for knowledge translation). No limitations of the study were reported in the article. Limitations related to publication of this case study were that it was 20 pages long and used three tables to provide sufficient description of the camp and program components and their relationships with the research issue.
Researcher and case interactions and transactions are a defining feature of case study methodology (Stake, 1995 ). Narrative stories, vignettes, and thick description are used to provoke vicarious experience and a sense of being there with the researcher in their interaction with the case. Few of the case studies reviewed provided details of the researcher's relationship with the case, researcher–case interactions, and how these influenced the development of the case study (Buzzanell & D'Enbeau, 2009 ; D'Enbeau et al., 2010 ; Gallagher et al., 2013 ; Gillard et al., 2011 ; Ledderer, 2011 ; Nagar-Ron & Motzafi-Haller, 2011 ). The role and position of the researcher needed to be self-examined and understood by readers, to understand how this influenced interactions with participants, and to determine what triangulation is needed (Merriam, 2009 ; Stake, 1995 ).
Gillard et al. ( 2011 ) provided a good example of triangulation, comparing data sources in a table (p. 1513). Triangulation of sources was used to reveal as much depth as possible in the study by Nagar-Ron and Motzafi-Haller ( 2011 ), while also enhancing confirmation validity. There were several case studies that would have benefited from improved range and use of data sources, and descriptions of researcher–case interactions (Ajodhia-Andrews & Berman, 2009 ; Bronken et al., 2012 ; Fincham, Scourfield, & Langer, 2008 ; Fourie & Theron, 2012 ; Hooghe et al., 2012 ; Snyder-Young, 2011 ; Yeh, 2013 ).
Good, rigorous case studies require a strong methodological justification (Meyer, 2001 ) and a logical and coherent argument that defines paradigm, methodological position, and selection of study methods (Denzin & Lincoln, 2011b ). Methodological justification was insufficient in several of the studies reviewed (Barone, 2010 ; Bronken et al., 2012 ; Hooghe et al., 2012 ; Mawn et al., 2010 ; Roscigno et al., 2012 ; Yeh, 2013 ). This was judged by the absence, or inadequate or inconsistent reference to case study methodology in-text.
In six studies, the methodological justification provided did not relate to case study. Common issues were identified. Secondary sources were used as primary methodological references, indicating that study design might not have been theoretically sound (Colón-Emeric et al., 2010; Coltart & Henwood, 2012; Roscigno et al., 2012; Snyder-Young, 2011). Authors and sources cited in methodological descriptions were inconsistent with the actual study design and practices used (Fourie & Theron, 2012; Hooghe et al., 2012; Jorrín-Abellán et al., 2008; Mawn et al., 2010; Rytterström et al., 2013; Wimpenny & Savin-Baden, 2012). This occurred when researchers cited Stake or Yin, or both (Mawn et al., 2010; Rytterström et al., 2013), but did not follow their paradigmatic or methodological approach. In 26 studies there were no citations for a case study methodological approach.
The findings of this study have highlighted a number of issues for researchers. A considerable number of the case studies reviewed were missing key elements that define qualitative case study methodology and the tradition cited. A significant number of studies did not provide a clear methodological description or justification relevant to case study. Case studies in health and social sciences did not provide sufficient information for the reader to understand case selection, and why each case was chosen above others. The contexts of the cases were not described in adequate detail to understand all relevant elements of the case context, which indicated that cases may not have been contextually bounded. There were inconsistencies between reported methodology, study design, and paradigmatic approach in the case studies reviewed, which made it difficult to understand the study methodology and theoretical foundations. These issues have implications for methodological integrity and honesty when reporting study design, which are values of the qualitative research tradition and are ethical requirements (Wager & Kleinert, 2010a). Poor methodological descriptions may lead the reader to misinterpret or discredit study findings, which limits the impact of the study and, collectively, hinders advancement of the broader qualitative research field.
The issues highlighted in our review build on current debates in the case study literature, and on queries about the value of this methodology. Case study research can be situated within different paradigms or designed with an array of methods. In order to maintain the creativity and flexibility that are valued in this methodology, clearer descriptions of paradigm, theoretical position, and methods should be provided so that study findings are not undervalued or discredited. Case study research is an interdisciplinary practice, which means that clear methodological descriptions might be more important for this approach than for methodologies that are driven predominantly by fewer disciplines (Creswell, 2013b).
Authors frequently omit elements of methodologies and include others to strengthen study design, and we do not propose a rigid or purist ideology in this paper. On the contrary, we encourage new ideas about using case study, together with adequate reporting, which will advance the value and practice of case study. The implications of unclear methodological descriptions in the studies reviewed were that study design appeared to be inconsistent with reported methodology, and key elements required for making judgements of rigour were missing. It was not clear whether the deviations from methodological tradition were made by researchers to strengthen the study design, or because of misinterpretations. Morse ( 2011 ) recommended that innovations and deviations from practice are best made by experienced researchers, and that a novice might be unaware of the issues involved with making these changes. To perpetuate the tradition of case study research, applications in the published literature should have consistencies with traditional methodological constructions, and deviations should be described with a rationale that is inherent in study conduct and findings. Providing methodological descriptions that demonstrate a strong theoretical foundation and coherent study design will add credibility to the study, while ensuring the intrinsic meaning of case study is maintained.
The value of this review is that it contributes to discussion of whether case study is a methodology or method. We propose possible reasons why researchers might make this misinterpretation. Researchers may interchange the terms methods and methodology, and conduct research without adequate attention to epistemology and historical tradition (Carter & Little, 2007 ; Sandelowski, 2010 ). If the rich meaning that naming a qualitative methodology brings to the study is not recognized, a case study might appear to be inconsistent with the traditional approaches described by principal authors (Creswell, 2013a ; Merriam, 2009 ; Stake, 1995 ; Yin, 2009 ). If case studies are not methodologically and theoretically situated, then they might appear to be a case report.
Case reports are promoted by universities and medical journals as a method of reporting on medical or scientific cases; guidelines for case reports are publicly available on websites ( http://www.hopkinsmedicine.org/institutional_review_board/guidelines_policies/guidelines/case_report.html ). The various case report guidelines provide general criteria for case reports, describing that this form of report does not meet the criteria of research, is used for retrospective analysis of up to three clinical cases, and is primarily illustrative and for educational purposes. Case reports can be published in academic journals, but do not require approval from a human research ethics committee. Traditionally, case reports describe a single case, to explain how and what occurred in a selected setting, for example, to illustrate a new phenomenon that has emerged from a larger study. A case report is not necessarily particular, nor the study of a case in its entirety, and the larger study would usually be guided by a different research methodology.
This description of a case report is similar to what was provided in some of the studies reviewed. This form of report lacks methodological grounding and the qualities of research rigour. The case report has publication value in demonstrating an example and in disseminating knowledge (Flanagan, 1999). However, case reports have a different meaning and purpose from case studies, and the two need to be distinguished. Findings of our review suggest that the medical understanding of a case report has been confused with qualitative case study approaches.
In this review, a number of case studies did not have methodological descriptions that included the key characteristics of case study listed in the adapted criteria, and several issues have been discussed. There have been calls for improvements in the publication quality of qualitative research (Morse, 2011), and for improvements in peer review of submitted manuscripts (Carter & Little, 2007; Jasper, Vaismoradi, Bondas, & Turunen, 2013). The challenging nature of editors' and reviewers' responsibilities is acknowledged in the literature (Hames, 2013; Wager & Kleinert, 2010b); however, review of case study methodology should be prioritized because of disputes over its methodological value.
Authors using case study approaches are recommended to describe their theoretical framework and methods clearly, and to seek and follow specialist methodological advice when needed (Wager & Kleinert, 2010a ). Adequate page space for case study description would contribute to better publications (Gillard et al., 2011 ). Capitalizing on the ability to publish complementary resources should be considered.
There is a level of subjectivity involved in this type of review, and this should be considered when interpreting the study findings. Qualitative methods journals were selected because the aims and scope of these journals are to publish studies that contribute to methodological discussion and the development of qualitative research. Generalist health and social science journals, which might have contained good-quality case studies, were excluded. Journals in business or education were also excluded, although a review of case studies in international business journals has been published elsewhere (Piekkari et al., 2009).
The criteria used to assess the quality of the case studies were a set of qualitative indicators. A numerical or ranking system might have produced different results. Stake's (1995) criteria have been referenced elsewhere and were deemed the best available (Creswell, 2013b; Crowe et al., 2011). Not all qualitative studies are reported in a consistent way, and some authors choose to report findings in narrative form rather than in a typical biomedical report style (Sandelowski & Barroso, 2002); if misinterpretations were made, this may have affected the review.
Case study research is an increasingly popular approach among qualitative researchers, which provides methodological flexibility through the incorporation of different paradigmatic positions, study designs, and methods. However, whereas flexibility can be an advantage, a myriad of different interpretations has resulted in critics questioning the use of case study as a methodology. Using an adaptation of established criteria, we aimed to identify and assess the methodological descriptions of case studies in high-impact, qualitative methods journals. Few articles were identified that applied qualitative case study approaches as described by experts in case study design. There were inconsistencies in methodology and study design, which indicated that researchers were confused about whether case study is a methodology or a method. Commonly, there appeared to be confusion between case studies and case reports. Without clear understanding and application of the principles and key elements of case study methodology, there is a risk that the flexibility of the approach will result in haphazard reporting, and will limit its global application as a valuable, theoretically supported methodology that can be rigorously applied across disciplines and fields.
The authors have not received any funding or benefits from industry or elsewhere to conduct this study.
https://doi.org/10.1136/eb-2017-102845
Case study is a research methodology, typically seen in social and life sciences. There is no one definition of case study research. 1 However, very simply… ‘a case study can be defined as an intensive study about a person, a group of people or a unit, which is aimed to generalize over several units’. 1 A case study has also been described as an intensive, systematic investigation of a single individual, group, community or some other unit in which the researcher examines in-depth data relating to several variables. 2
Often there are several similar cases to consider such as educational or social service programmes that are delivered from a number of locations. Although similar, they are complex and have unique features. In these circumstances, the evaluation of several, similar cases will provide a better answer to a research question than if only one case is examined, hence the multiple-case study. Stake asserts that the cases are grouped and viewed as one entity, called the quintain . 6 ‘We study what is similar and different about the cases to understand the quintain better’. 6
The steps when using case study methodology are the same as for other types of research. 6 The first step is defining the single case or identifying a group of similar cases that can then be incorporated into a multiple-case study. A search to determine what is known about the case(s) is typically conducted. This may include a review of the literature, grey literature, media, reports and more, which serves to establish a basic understanding of the cases and informs the development of research questions. Data in case studies are often, but not exclusively, qualitative in nature. In multiple-case studies, analysis within cases and across cases is conducted. Themes arise from the analyses and assertions about the cases as a whole, or the quintain, emerge. 6
If a researcher wants to study a specific phenomenon arising from a particular entity, then a single-case study is warranted and will allow for an in-depth understanding of the single phenomenon; as discussed above, this would involve collecting several different types of data. This is illustrated in example 1 below.
Using a multiple-case research study allows for a more in-depth understanding of the cases as a unit, through comparison of similarities and differences of the individual cases embedded within the quintain. Evidence arising from multiple-case studies is often stronger and more reliable than from single-case research. Multiple-case studies allow for more comprehensive exploration of research questions and theory development. 6
Despite the advantages of case studies, there are limitations. The sheer volume of data is difficult to organise and data analysis and integration strategies need to be carefully thought through. There is also sometimes a temptation to veer away from the research focus. 2 Reporting of findings from multiple-case research studies is also challenging at times, 1 particularly in relation to the word limits for some journal papers.
Example 1: nurses’ paediatric pain management practices.
One of the authors of this paper (AT) has used a case study approach to explore nurses’ paediatric pain management practices. This involved collecting several datasets:
Observational data to gain a picture about actual pain management practices.
Questionnaire data about nurses’ knowledge about paediatric pain management practices and how well they felt they managed pain in children.
Questionnaire data about how critical nurses perceived pain management tasks to be.
These datasets were analysed separately and then compared 7–9 and demonstrated that nurses’ level of theoretical knowledge did not impact on the quality of their pain management practices. 7 Nor did individual nurses’ perceptions of how critical a task was affect the likelihood of their carrying out this task in practice. 8 There was also a difference between self-reported and observed practices 9 ; actual (observed) practices did not conform to best practice guidelines, whereas self-reported practices tended to.
The other author of this paper (RH) has conducted a multiple-case study to determine the quality of care for patients with complex clinical presentations in nurse practitioner-led clinics (NPLCs) in Ontario, Canada. 10 Five NPLCs served as individual cases that, together, represented the quintain. Three types of data were collected:
Review of documentation related to the NPLC model (media, annual reports, research articles, grey literature and regulatory legislation).
Interviews with nurse practitioners (NPs) practising at the five NPLCs to determine their perceptions of the impact of the NPLC model on the quality of care provided to patients with multimorbidity.
Chart audits conducted at the five NPLCs to determine the extent to which evidence-based guidelines were followed for patients with diabetes and at least one other chronic condition.
The three sources of data collected from the five NPLCs were analysed, and themes emerged related to the quality of care for complex patients at NPLCs. The multiple-case study confirmed that nurse practitioners are the primary care providers at the NPLCs, and this positively impacts the quality of care for patients with multimorbidity. Healthcare policy, such as the lack of an increase in salary for NPs for 10 years, has resulted in issues with the recruitment and retention of NPs at NPLCs. This, along with insufficient resources in the communities where NPLCs are located and high patient vulnerability at NPLCs, has a negative impact on the quality of care. 10
These examples illustrate how collecting data about a single case or multiple cases helps us to better understand the phenomenon in question. Case study methodology serves to provide a framework for evaluation and analysis of complex issues. It shines a light on the holistic nature of nursing practice and offers a perspective that informs improved patient care.
Competing interests None declared.
Provenance and peer review Commissioned; internally peer reviewed.
Case study evaluations, using one or more qualitative methods, have been used to investigate important practical and policy questions in health care. This paper describes the features of a well designed case study and gives examples showing how qualitative methods are used in evaluations of health services and health policy.
This is the last in a series of seven articles describing non-quantitative techniques and showing their value in health research.
The medical approach to understanding disease has traditionally drawn heavily on qualitative data, and in particular on case studies to illustrate important or interesting phenomena. The tradition continues today, not least in regular case reports in this and other medical journals. Moreover, much of the everyday work of doctors and other health professionals still involves decisions that are qualitative rather than quantitative in nature.
This paper discusses the use of qualitative research methods, not in clinical care but in case study evaluations of health service interventions. It is useful for doctors to understand the principles guiding the design and conduct of these evaluations, because they are frequently used by both researchers and inspectorial agencies (such as the Audit Commission in the United Kingdom and the Office of Technology Assessment in the United States) to investigate the work of doctors and other health professionals.
We briefly discuss the circumstances in which case study research can usefully be undertaken in health service settings and the ways in which qualitative methods are used within case studies. Examples show how qualitative methods are applied, both in purely qualitative studies and alongside quantitative methods.
Doctors often find themselves asking important practical questions, such as: should we be involved in the management of hospitals and, if so, how? How will new government policies affect the lives of our patients? And how can we cope with changes …
Samantha L Thomas, Hannah Pitt, Simone McCarthy, Grace Arnot, Marita Hennessy, Methodological and practical guidance for designing and conducting online qualitative surveys in public health, Health Promotion International , Volume 39, Issue 3, June 2024, daae061, https://doi.org/10.1093/heapro/daae061
Online qualitative surveys—those surveys that prioritise qualitative questions and interpretivist values—have rich potential for researchers, particularly in new or emerging areas of public health. However, there is limited discussion about the practical development and methodological implications of such surveys, particularly for public health researchers. This poses challenges for researchers, funders, ethics committees, and peer reviewers in assessing the rigour and robustness of such research, and in deciding the appropriateness of the method for answering different research questions. Drawing and extending on the work of other researchers, as well as our own experiences of conducting online qualitative surveys with young people and adults, we describe the processes associated with developing and implementing online qualitative surveys and writing up online qualitative survey data. We provide practical examples and lessons learned about question development, the importance of rigorous piloting strategies, use of novel techniques to prompt detailed responses from participants, and decisions that are made about data preparation and interpretation. We consider reviewer comments, and some ethical considerations of this type of qualitative research for both participants and researchers. We provide a range of practical strategies to improve trustworthiness in decision-making and data interpretation—including the importance of using theory. Rigorous online qualitative surveys that are grounded in qualitative interpretivist values offer a range of unique benefits for public health researchers, knowledge users, and research participants.
Public health researchers are increasingly using online qualitative surveys.
There is still limited practical and methodological information about the design and implementation of these studies.
Building on Braun and Clarke (2013) , Terry and Braun (2017) and Braun et al . (2021) , we reflect on the methodological and practical lessons we have learnt from our own experience with conducting online qualitative surveys.
We provide guidance and practical examples about the design, implementation and analysis processes.
We argue that online qualitative surveys have rich potential for public health researchers and can be an empowering and engaging way to include diverse populations in qualitative research.
Public health researchers mostly engage in experiential (interpretive) qualitative approaches ( Braun and Clarke, 2013 ). These approaches are ‘centred on the exploration of participants’ subjective experiences and sense-making’ [( Braun and Clarke, 2021c ), p. 39]. Given the strong focus in public health on social justice, power and inequality, researchers proactively use the findings from these qualitative studies—often in collaboration with lived experience experts and others who are impacted by key decisions ( Reed et al ., 2024 )—to advocate for changes to public health policy and practice. There is also an important level of theoretical, methodological and empirical reflection that is part of the public health researcher’s role. For example, as qualitative researchers actively construct and interpret meaning from data, they constantly challenge their assumptions, their way of knowing and their way of ‘doing’ research ( Braun and Clarke, 2024 ). This reflexive practice also includes considering how to develop more inclusive opportunities for people to participate in research and to share their opinions and experiences about the issues that matter to them.
While in-depth interviews and focus groups provide rich and detailed narratives that are central to understanding people’s lives, these forms of data collection may sometimes create practical barriers for both researchers and participants. For example, they can be time-consuming, and the power dynamics associated with face-to-face interviews (even in online settings) may make them less accessible for groups that are marginalized or stigmatized ( Edwards and Holland, 2020 ). While some population subgroups (and contexts) may suit (or require) face-to-face qualitative data collection approaches, others may lend themselves to different forms of data collection. Young people, for example, may be keen to be civically involved in research about the issues that matter to them, such as the climate crisis, but they may find it more convenient and comfortable using anonymized digital technologies to do so ( Arnot et al ., 2024b ). As such, part of our reflexive practice as public health researchers must be to explore, and be open to, a range of qualitative methodological approaches that could be more convenient, less intimidating and more engaging for a diverse range of population subgroups. This includes thinking about pragmatic ways of operationalizing qualitative data collection methods. How can we develop methods and engagement strategies that enable us to gain insights from a diverse range of participants about new issues or phenomena that may pose threats to public health, or look at existing issues in new ways?
Advancements in online data collection methods have also created new options for researchers and participants about how they can be involved in qualitative studies ( Hensen et al ., 2021 ; Chen, 2023 ; Fan et al ., 2024 ). Online qualitative surveys—those surveys that prioritize qualitative values and questions—have rich potential for qualitative researchers. Braun and Clarke (2013 , p. 135) state that qualitative surveys:
…consist of a series of open-ended questions about a topic, and participants type or hand-write their responses to each question. They are self-administered; a researcher-administered qualitative survey would basically be an interview.
While these types of studies are increasingly utilized in public health, researchers have highlighted that there is still relatively limited discussion about the methodological and practical implications of these surveys ( Braun and Clarke, 2013 ; Terry and Braun, 2017 ; Braun et al ., 2021 ). This poses challenges for qualitative public health researchers, funders, ethics committees and peer reviewers in assessing the purpose, rigour and contribution of such research, and in deciding the appropriateness of the method for answering different research questions.
Using examples from online qualitative surveys that we have been involved in, this article discusses a range of methodological and practical lessons learnt from developing, implementing and analysing data from these types of surveys. While we do not claim to have all the answers, we aim to develop and extend on the methodological and practical guidance from Braun and Clarke (2013) , Terry and Braun (2017) and Braun et al . (2021) about the potential for online qualitative surveys. This includes how they can provide a rigorous ‘wide-angle picture’ [( Toerien and Wilkinson, 2004 ), p. 70] from a diverse range of participants about contemporary public health phenomena.
Figure 1 aims to develop and extend on the key points made by Braun and Clarke (2013) , Terry and Braun (2017) and Braun et al . (2021) , which provide the methodological and empirical foundation for our article.
Figure 1: Methodological considerations in conducting online qualitative surveys.
Online qualitative surveys take many forms. They may be fully qualitative or qualitative dominant—mostly qualitative with some quantitative questions ( Terry and Braun, 2017 ). There are also many different ways of conducting these studies—from using a smaller number of questions that engage specific population groups or knowledge users in understanding detailed experiences ( Hennessy and O’Donoghue, 2024 ), to a larger number of questions (which may use market research panel providers to recruit participants) that seek broader opinions and attitudes about public health issues ( Marko et al ., 2022a ; McCarthy et al ., 2023 ; Arnot et al ., 2024a ). However, based on our experiences of applying for grant funding and conducting, publishing and presenting these studies, there are still clear misconceptions and uncertainties about these types of surveys.
One of the concerns raised about online qualitative surveys is how they are situated within broader qualitative values and approaches. This includes whether they can provide empirically innovative, rigorous, rich and theoretically grounded qualitative contributions to knowledge. Our experience is that online qualitative surveys have the most potential when they harness the values of interpretivist ‘Big Q’ approaches to collect information from a diverse range of participants about their experiences, opinions and practices ( Braun et al ., 2021 ). The distinction between positivist (small q) and interpretivist (Big Q) approaches to online qualitative surveys is an important one that requires some initial methodological reflection, particularly in considering the (largely unhelpful) critiques that are made about the rigour and usefulness of these surveys. These critiques often overlook the theoretical underpinnings and qualitative values inherent in such surveys. For example, while there may be a tendency to think of surveys and survey data as atheoretical and descriptive, the use of theory is central in informing online qualitative surveys. For example, Varpio and Ellaway (2021 , p. 343) explain that theory can ‘offer explanations and detailed premises that we can wrestle with, agree with, disagree with, reject and/or accept’. This includes the research design, the approach to data collection and analysis, the interpretation of findings and the conclusions that are drawn. Theory is also important in helping researchers to engage in reflexive practice. The use of theory is essential in progressing online qualitative surveys beyond description and towards in-depth interpretation and explanations—thus facilitating a deeper understanding of the studied phenomenon ( Collins and Stockton, 2018 ; Jamie and Rathbone, 2022 ).
The main assumptions about online qualitative surveys are that they can only collect ‘thin’ textual data, and that they are not flexible enough as a data collection tool for researchers to prompt or ask follow-up questions or to co-create detailed and rich data with participants ( Braun and Clarke, 2013 ; Terry and Braun, 2017 ; Braun et al ., 2021 ). While we acknowledge that the type of data that is collected in these types of studies is different from those in in-depth interview studies, these surveys may be a more accessible and engaging way to collect rich insights from a diverse range of participants who may otherwise not participate in qualitative research ( Braun and Clarke, 2013 ; Terry and Braun, 2017 ; Braun et al ., 2021 ). Despite this, peer reviewers can question the depth of information that may be collected in these studies. Assumptions about large but ‘thin’ datasets may also mean that researchers, funders and reviewers take (and perhaps expect) a more positivist approach to the design and analytical processes associated with these surveys. For example, the multiple topics and questions, larger sample sizes, and the generally smaller textual responses that online qualitative surveys generate may lead researchers to approach these surveys using more descriptive and atheoretical paradigms. This approach may focus on ‘measuring’ phenomena, using variables, developing thinner analytical description and adding numerical values to the number of responses for different categories or themes.
We have found that assumptions can also impact the review processes associated with these types of studies, receiving critiques from those with both positivist and interpretivist positions. Positivist critiques focus on matters associated with whether the samples are ‘representative’, and the flaws associated with ‘self-selecting convenience’ samples. Critiques from interpretivist colleagues question why such large sample sizes are needed for qualitative studies, seeing surveys as a less rigorous method for gaining rich and meaningful data. For example, we have had reviewers query the scope and depth of the analysis of the data that we present from these studies because they are concerned that the type of data collected lacks depth and does not fully contextualize and explain how participants think about issues. We have also had reviewers request that we should return to the study to collect quantitative data to supplement the qualitative findings of the survey. They also question how ‘representative’ the samples are of population groups. These comments, of course, are not unique to online qualitative surveys but do highlight the difficulty that reviewers may have in placing and situating these types of studies in broader qualitative approaches. With this in mind, we have also found that some reviewers can ask for additional information to justify both the use of online qualitative surveys and why we have chosen these over other qualitative approaches. For example, reviewers have asked us to justify why we have chosen an online qualitative survey and also to explain what we may have missed out on by not conducting in-depth interviews or quantitative or mixed methods surveys instead.
While there is now a general understanding that attributing ‘numbers’ to qualitative data is largely unhelpful and inappropriate ( Chowdhury, 2015 ), there may be expectations that the larger sample sizes associated with online qualitative surveys enable researchers to provide numerical indicators of data. Rather than focusing on the ‘artfully interpretive’ techniques used to analyse and construct themes from the data ( Finlay, 2021 ), we have found that reviewers often ask us to provide numerical information about how many people provided different responses to different questions (or constructed themes), and the number at which ‘saturation’ was determined. Reviewer feedback that we have received about analytical processes has asked for detailed explanations about why attempts to ‘minimize bias’ (including calculations of inter-rater reliability and replicability of data quality) were not used. This demonstrates that peer reviewers may misinterpret the interpretivist values that guide online qualitative surveys, asking for information that is essentially ‘meaningless’ in qualitative paradigms in which researchers’ subjectivity ‘sculpts’ the knowledge that is produced ( Braun and Clarke, 2021a ).
As well as a ‘wide-angle picture’ [( Toerien and Wilkinson, 2004 ), p. 70] on phenomena, online qualitative surveys can also: (i) generate both rich and focused data about perceptions and practices, and (ii) have multiple participatory and practical advantages—including helping to overcome barriers to research participation ( Braun and Clarke, 2013 ; Terry and Braun, 2017 ; Braun et al ., 2021 ). For researchers, online qualitative surveys can be a more cost-effective alternative ( Braun and Clarke, 2013 ; Terry and Braun, 2017 )—they are generally more time-efficient and less labour-intensive (particularly if working with market research companies to recruit panels). They are also able to reach a broad range of participants—such as those who are geographically dispersed ( Braun and Clarke, 2013 ; Terry and Braun, 2017 ), and those who may not have internet connectivity that is reliable enough to complete online interviews (a common issue for individuals living in regional or rural settings) ( de Villiers et al ., 2022 ). We are also more able to engage young people in qualitative research through online surveys, perhaps partly due to extensive panel company databases but also because they may be a more accessible and familiar way for young people to participate in research. The ability to quickly investigate new public health threats from the perspective of lived experience can also provide important information for researchers, providing justification for new areas of research focus, including setting agendas and advocating for the need for funding (or policy attention). Collecting data from a diverse range of participants—including from those who hold views that we may see as less ‘politically acceptable’, or inconsistent with our own public health reasoning about health and equity—is important in situating and contextualizing community attitudes towards particular issues.
For participants , benefits include having a degree of autonomy and control over their participation, including completing the survey at a time and place that suits them, and the anonymous nature of participation (that may be helpful for people from highly stigmatized groups). Participants can take time to reflect on their responses or complete the survey, and may feel more able to ‘talk back’ to the researcher about the framing of questions or the purpose of the research ( Braun et al ., 2021 ). We would also add that a benefit of these types of studies is that participants can also drop out of the study easily if the survey does not interest them or meet their expectations—something that we think might be more onerous or uncomfortable for participants in an interview or focus group.
For knowledge users, including advocates, service providers and decision-makers, qualitative research provides an important form of evidence, and the ‘wide-angle picture’ [( Toerien and Wilkinson, 2004 ), p. 70] on issues from a diverse range of individuals in a community or population can be a powerful advocacy tool. Online qualitative surveys can also provide rapid insights into how changes to policy and practice may impact population subgroups in different ways.
There are, of course, some limitations associated with online qualitative surveys ( Braun et al ., 2021 ; Marko et al ., 2022b ). For example, there is no ability to engage individuals in a ‘traditional’ conversation or to prompt or probe meaning in the interactive ways that we are familiar with in interview studies. There is less ability to refine the questions that we ask participants in an iterative way throughout a study based on participant responses (particularly when working with market research panel companies). There may also be barriers associated with written literacy, access to digital technologies and stable internet connections ( Braun et al ., 2021 ). They may also not be the most suitable for individuals who have different ways of ‘knowing, being and doing’ qualitative research—including Indigenous populations [( Kennedy et al ., 2022 ), p. 1]. All of these factors should be taken into consideration when deciding whether online qualitative surveys are an appropriate way of collecting data. Finally, while these types of surveys can collect data quickly ( Marko et al ., 2022b ), there can also be additional decision-making processes related to data preparation and inclusion that can be time-consuming.
There are a range of practical considerations that can improve the rigour, trustworthiness and quality of online qualitative survey data. Again, developing and expanding on Braun and Clarke (2013) , Terry and Braun (2017) and Braun et al . (2021) , Figure 2 gives an overview of some key practical considerations associated with the design, implementation and analysis of these surveys. We would also note that before starting your survey design, you should be aware that people may use different types of technology to complete the survey, and in different spaces. For example, we cannot assume that people will be sitting in front of a computer or laptop at home or in the office; people are more likely to complete surveys on a mobile phone, perhaps on a train or bus on the way to work or school.
Figure 2: Top ten practical tips for conducting online qualitative surveys.
Creating an appropriate and accessible structure
The first step in designing an online qualitative survey is to plan the structure of your survey. This step is important because the structure influences the way that participants interact with and participate through the survey. The survey structure helps to create an ‘environment’ that helps participants to share their perspectives, prompt their views and develop their ideas ( Braun and Clarke, 2013 ; Terry and Braun, 2017 ). Similar to an interview study, the structure of the survey guides participants from one set of questions (and topics) to the next. It is important to consider the ordering of topics to enable participants to complete a survey that has a logical flow, introduces participants to concepts and allows them to develop their depth of responses.
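As a purely illustrative sketch (not part of the authors' method), the ordered structure described above can be written down as a simple data structure before the survey is built in any platform, making the topic flow explicit and easy to review with colleagues or an advisory group. All section names, purposes and question counts here are hypothetical:

```python
# Hypothetical sketch: an online qualitative survey's structure as an
# ordered list of sections, so the topic flow can be reviewed before
# the survey is implemented in a survey platform.

SURVEY_SECTIONS = [
    {"name": "welcome", "purpose": "lay summary and consent reminder", "questions": 0},
    {"name": "participant_characteristics", "purpose": "situate the sample", "questions": 5},
    {"name": "broad_introductory", "purpose": "ease into open-text responses", "questions": 3},
    {"name": "theoretical_concepts", "purpose": "detailed topic questions", "questions": 6},
    {"name": "reflection_and_policy", "purpose": "end on a positive", "questions": 2},
    {"name": "clean_up", "purpose": "anything we missed", "questions": 1},
]

def outline(sections):
    """Return a human-readable outline of the survey flow."""
    total = sum(s["questions"] for s in sections)
    lines = [f"{i + 1}. {s['name']} - {s['purpose']} ({s['questions']} questions)"
             for i, s in enumerate(sections)]
    lines.append(f"Total questions: {total}")
    return "\n".join(lines)

print(outline(SURVEY_SECTIONS))
```

Laying the sections out this way makes it easy to spot an illogical ordering, or a demographics block that has grown large enough to crowd out the open-text questions.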
Before participants start the survey, we provide a clear and simple lay language summary of the survey. Because many individuals will be familiar with completing quantitative surveys, we include a welcoming statement and reiterate the qualitative nature of the survey, stating that their answers can be about their own experiences:
Thank you for agreeing to take part in this survey about [topic] . This survey involves writing responses to questions rather than checking boxes.
We then clearly reiterate the purpose of the survey, providing a short description of the topic that we are investigating. We state that we do not seek to collect any data that is identifiable, that we are interested in participants’ perspectives, that there are no right or wrong answers, and that participants can withdraw from the survey at any time without giving a reason.
Similar to Braun et al . (2021) , we start our surveys with questions about demographic and related characteristics (which we often call ‘ participant/general characteristics ’). These can be discrete choice questions, but can also utilize open text—for example, in relation to gender identity. We have found that there is always a temptation with surveys to ask many questions about the demographic characteristics of participants. However, we caution that too many questions can be intrusive for participants and can take away valuable time from open-text questions, which are the core focus of the survey. We recommend asking participant characteristic and demographic questions that situate and contextualize the sample ( Elliott et al ., 1999 ).
We generally start the open-text sections of these surveys by asking broad introductory questions about the topic. This might include questions such as: ‘Please describe the main reasons you drink alcohol’ and ‘What do you think are the main impacts of climate change on the world?’ We have found that these types of questions get participants used to responding to open-text questions relevant to the study’s research questions and aims. For each new topic of investigation (which are based on our theoretical concepts and overall study aims and research questions), we provide a short explanation about what we will ask participants. We also use tools and text to signpost participant progress through the survey. This can be a valuable way to avoid high attrition rates where participants exit the survey because they are getting fatigued and are unclear when the survey will end:
Great! We are just over half-way through the survey.
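Signposting of this kind can be sketched as a small helper that turns a participant's position in the survey into an encouraging message. The thresholds and wording below are illustrative assumptions, not the authors' implementation; a real survey platform would surface these messages between question pages:

```python
def progress_message(answered, total):
    """Return an encouraging signpost based on survey progress.

    Thresholds and wording are illustrative assumptions only.
    """
    if total <= 0:
        raise ValueError("total must be positive")
    fraction = answered / total
    if fraction >= 0.9:
        return "Nearly there, just a couple of questions to go."
    if fraction >= 0.5:
        return "Great! We are just over half-way through the survey."
    if fraction >= 0.25:
        return "Thanks for your answers so far, keep going!"
    return "Welcome, there are no right or wrong answers."

# e.g. a participant who has answered 11 of 20 questions sees the
# half-way message.
print(progress_message(11, 20))
```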
We ask more detailed questions that are more aligned with our theoretical concepts in the middle of the survey. For example, we may start with broad questions about a harmful industry and their products (such as gambling, vaping or alcohol) and then in the middle of the survey ask more detailed questions about the commercial determinants of health and the specific tactics that these industries use (for example, about product design, political tactics, public relations strategies or how these practices may influence health and equity). In relation to these more complex questions, it is particularly important that we reiterate that there are no wrong answers and try to include encouraging text throughout the survey:
There are no right or wrong answers—we are curious to hear your opinions .
We always try to end the survey on a positive. While these types of questions depend on the study, we try to ask questions which enable participants to reflect on what could be done to address or improve an issue. This might include their attitudes about policy, or what they would say to those in positions of power:
What do you think should be done to protect young people from sports betting advertising on social media? If there was one thing that could be done to prevent young people from being exposed to the risks associated with alcohol, cigarettes, vaping, or gambling, what would it be? If you could say one thing to politicians about climate change, what would it be?
Finally, we ask participants if there is anything we have missed or if they have anything else to add, sometimes referred to as a ‘clean-up’ question ( Braun and Clarke, 2013 ). The following provides a few examples of how we have framed these questions in some of our studies:
Is there anything you would like to say about alcohol, cigarettes, vaping, and gambling products that we have not covered? Is there anything we haven’t asked you about the advertising of alcohol to women that you would like us to know?
Considering the impact of the length of the survey on responses
The length of the survey (both the number of questions and the time it takes an individual to complete the survey) is guided by a range of methodological and practical considerations and will vary between studies ( Braun and Clarke, 2013 ). Many factors will influence completion times. We try to give individuals a guide at the start of the survey about how long we think it will take to complete the survey (for example, between 20 and 30 minutes). We highlight that it may take people a little more or less time and that people are able to leave their browser open or save the survey and come back to finish it later. For our first few online qualitative surveys, we found that we asked lots of questions because we felt less in control of being able to prompt or ask follow-up questions from participants. However, we have learned that less is more! Asking too many questions may lead to more survey dropouts, and may significantly reduce the textual quality of the information that you receive from participants ( Braun and Clarke, 2013 ; Terry and Braun, 2017 ). This includes considering how the survey questions might lead to repetition, which may be annoying for participants, leading to responses such as ‘like I’ve already said’, ‘I’ve already answered that’ or ‘see above’.
Providing clear and simple guidance
When designing an online qualitative survey, we try to think of ways to make participation in the survey engaging. We do not want individuals to feel that we are ‘mining’ them for data. Rather, we want to demonstrate that we are genuinely interested in their perspectives and views. We use a range of mechanisms to do this. Because there is no opportunity to verbally explain or clarify concepts to participants, there is a particular need to ensure that the language used is clear and accessible ( Braun and Clarke, 2013 ; Terry and Braun, 2017 ). If language or concepts are complex, you are more likely to receive ‘I don’t know’ responses to your questions. We need to remember that participants have a range of written and comprehension skills, and inclusive and accessible language is important. We also try never to assume a level of knowledge about an issue (unless we have specifically asked for participants who are aware of and engaged in an issue—such as women who drink alcohol) ( Pitt et al ., 2023 ). This includes avoiding highly technical or academic language and not making assumptions that the individuals completing the survey will understand concepts in the same way that researchers do ( Braun and Clarke, 2013 ). Clearly explaining concepts or using text or images to prompt memories can help to overcome this:
Some big corporations (such as the tobacco, vaping, alcohol, junk food, or gambling industries) sponsor women's sporting teams or clubs, or other events. You might see sponsor logos on sporting uniforms, or at sporting grounds, or sponsoring a concert or arts event.
At all times, we try to centre the language that we use on the population from which we are seeking responses. Advisory groups can be particularly helpful in framing language for different population subgroups. We often use colloquial language, even if it might not be seen as the ‘correct’ academic language or terminology. Where possible, we also try to define theoretical concepts in a clear and easy-to-understand way. For example, in our study investigating parents’ perceptions of the impact of harmful products on young people, we tried to clearly define ‘normalisation’:
In this section we ask you about some of the perceived health impacts of the above products on young people. We also ask you about the normalisation of these products for young people. When we talk about normalisation, we are thinking about the range of factors that might make these products more acceptable for young people to use. These factors might include individual factors, such as young people being attracted to risk, the influence of family or peers, the accessibility and availability of these products, or the way the industry advertises and promotes these products.
Using innovative approaches to improve accessibility and prompt responses
Online qualitative surveys can include features beyond traditional question-and-answer formats (Braun and Clarke, 2013; Terry and Braun, 2017). For example, we often use a range of photo elicitation techniques (using images or videos) to make surveys more engaging and accessible, to address different levels of literacy, and to overcome the assumption that we are not able to ‘prompt’ responses. These types of visual methodologies enable a collaborative and creative research experience by asking the participant to reflect on aspects of the visual materials, such as symbolic representations, and to discuss these in relation to the research objectives (Glaw et al., 2017). The combination of visual images and clear descriptions helps to provide a focus for responses about different issues, as well as prompting nuanced information such as participant memories and emotions (Glaw et al., 2017). We use different types of visuals in our studies, such as photographs (including of the public health issues we’re investigating); screenshots from websites and social media posts (including newspaper headlines); and videos (including short videos from social media sites such as TikTok) (Arnot et al., 2024b). For example, when talking about government responses to the climate crisis, we used a photograph of former Australian Prime Minister Scott Morrison holding a piece of coal in the Australian parliament to prompt participants’ thinking about the government’s relationship with fossil fuels and to provide a focal point for their answer. However, we would caution against using any images that may be confronting for participants or deliberately provocative. The purpose of using visuals must always be in the interests of the participants—to clarify, prompt and reflect on concepts. Ethics committees should carefully review the images used in surveys to ensure that they have a clear purpose and are unlikely to cause any discomfort.
Thinking carefully about your criteria for recruitment
Determining the sample size of online qualitative studies is not an exact science. The sample sizes for our recent studies have ranged from n = 46 in a study about pregnancy loss (Hennessy and O’Donoghue, 2024) to n = 511 in a study with young people about the climate crisis (Arnot et al., 2023b). We follow ‘rules of thumb’ [(Braun and Clarke, 2021b), p. 211] that try to balance the needs of the research and data richness with key practical considerations (such as funding and time constraints), funder expectations, discipline-specific norms and our knowledge and experience of designing and implementing online qualitative surveys. However, we have found that peer reviewers expect much more justification of sample sizes than they do for other types of qualitative research, and robust justification is often needed to pre-empt the ‘concerns’ that reviewers may raise. Our response to these reviews often reiterates that our focus (as with all qualitative research) is not to produce a ‘generalisable’ or ‘representative’ sample but to recruit participants who will help to provide ‘rich, complex and textured data’ [(Terry and Braun, 2017), p. 15] about an issue. Instead of focusing on data saturation, a contested concept which is incongruent with reflexive thematic analysis in particular (Braun and Clarke, 2021b), we find it useful to consider information power to determine the sample size for these surveys (Malterud et al., 2016). Information power prioritizes the adequacy, quality and variability of the data collected over the number of participants.
Recruitment for online qualitative surveys can be influenced by a range of factors. Monetary and time constraints will affect the size of the sample and, if using market research company panels, the specificity of participant quotas. Recruitment strategies must be developed to ensure that the data provide enough information to answer the research questions of the study. For our research purposes, we often try to ensure that participants with a range of socio-demographic characteristics are included in the sample. We set soft quotas for age, gender and geographic location to ensure some diversity. We have found that some population subgroups may be recruited more easily than others, although this may depend on the topic of the survey. For example, quotas for women and those living in metropolitan areas may fill more quickly. In these scenarios, the research team must weigh the timelines associated with recruitment and data collection (e.g. How long do we want to run data collection for? How much of our budget can be spent on achieving a more equally split sample? Are quotas necessary?) against the purpose and goals of the research (i.e. to generate ideas rather than a representative dataset) and the study-specific aims and research questions.
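To illustrate how soft quotas can be monitored during recruitment, the following Python sketch tallies recruited participants against minimum subgroup targets. The subgroups and target numbers are hypothetical and do not come from our studies; it is a minimal sketch of the idea only.

```python
from collections import Counter

# Hypothetical soft quotas: minimum counts we would like to reach for
# each subgroup before closing recruitment (illustrative figures only).
SOFT_QUOTAS = {
    ("gender", "women"): 150,
    ("gender", "men"): 150,
    ("location", "metropolitan"): 150,
    ("location", "regional"): 120,
}

def quota_status(participants):
    """Compare current recruitment against the soft quotas.

    `participants` is a list of dicts such as
    {"gender": "women", "location": "regional"}.
    Returns a dict mapping each quota to (current count, target).
    """
    counts = Counter()
    for p in participants:
        for field, value in p.items():
            counts[(field, value)] += 1
    return {q: (counts[q], target) for q, target in SOFT_QUOTAS.items()}

# Hypothetical recruitment snapshot: women in metropolitan areas have
# filled quickly, while men in regional areas lag behind.
sample = [{"gender": "women", "location": "metropolitan"}] * 180 + \
         [{"gender": "men", "location": "regional"}] * 90
status = quota_status(sample)
under_filled = [q for q, (n, target) in status.items() if n < target]
```

A daily check of `under_filled` against the recruitment timeline can inform the trade-offs described above, such as whether a lagging quota is worth extra budget.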
There are, of course, concerns about not being able to ‘see’ the people who are completing these surveys. There is an increasing focus in the academic literature on ‘false’ respondents, particularly in quantitative online surveys (Levi et al., 2021; Wang et al., 2023). This will be an important ongoing discussion for qualitative researchers, and we do not claim to have the answers for how to overcome these issues. For example, some individuals may say that they meet the inclusion criteria to access the survey, while others may not understand or may misinterpret the inclusion criteria. There is also a level of discomfort about who judges whether a participant is ‘legitimate’, and how. However, we can talk practically about some of the strategies that we use to ensure the rigour of data. For example, we find that screening questions can provide a ‘double-check’ in relation to inclusion criteria and can help to ensure consistency between the information an individual provides about how they meet the inclusion criteria and their subsequent responses. For example, in a recent survey of parents of young people, a participant stated that they were 18 years old and were a parent to a 16-year-old and a 15-year-old. Their overall responses were inconsistent with being a parent of children these ages. Similarly, in our gambling studies, people may tick that they have gambled in the last year but then in subsequent questions say they have not gambled at all. This highlights the importance of checking data across all questions, although comprehensively scanning the data for such responses is not always feasible given time and cost constraints, and these participants can be overlooked.
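A first-pass consistency check of this kind can be partially automated before manual review. The sketch below is illustrative only: the field names (`screener_gambled_last_year`, `q12_gambling_frequency`) and the matching rule are hypothetical assumptions, not taken from our actual surveys, and any flagged participant would still be reviewed by a researcher.

```python
# Illustrative check for contradictions between a screening question and
# a later survey response (e.g. the gambling example above).
def flag_inconsistent(responses):
    """Return IDs of participants whose screener and later answers
    contradict each other.

    `responses` maps a participant ID to their answers, e.g.
    {"screener_gambled_last_year": "yes",
     "q12_gambling_frequency": "I have not gambled at all"}.
    """
    flagged = []
    for pid, answers in responses.items():
        screener = answers.get("screener_gambled_last_year", "").lower()
        later = answers.get("q12_gambling_frequency", "").lower()
        # Hypothetical rule: screened in as a gambler, but later denies
        # gambling altogether.
        if screener == "yes" and "not gambled" in later:
            flagged.append(pid)
    return flagged

data = {
    "p01": {"screener_gambled_last_year": "yes",
            "q12_gambling_frequency": "weekly, mostly sports betting"},
    "p02": {"screener_gambled_last_year": "yes",
            "q12_gambling_frequency": "I have not gambled at all"},
}
flag_inconsistent(data)  # → ["p02"]
```

Rules like this only surface candidates for review; they cannot decide on their own who is a ‘legitimate’ participant.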
Ensuring that there are strategies to create agency and engage participants in the research
One of the benefits of online qualitative surveys compared to traditional quantitative surveys is the scope for participants to explain their answers and to disagree with the research team’s position. An indication that participants are feeling able to do this is when they are asked for any additional comments at the end of the survey. For example, in a survey about women’s attitudes towards alcohol marketing, the following participant concluded the survey by writing: ‘I think you have covered everything. I think that you need to stop shaming women for having fun’. Other participants demonstrate their engagement and interest in the survey by reaffirming the perspectives they have shared throughout the survey. For example, in a study with young people on climate, participants responded at the end that ‘it’s one of the few things I actually care about’ , while another commented on the quality of the survey questions, stating, ‘I think this survey did a great job with probing questions to prompt all the thoughts I have on it’ .
We also think that online qualitative surveys may lead to less social desirability bias in participants’ responses. Participants seem less wary about communicating politically incorrect opinions than they might be in a face-to-face interview. For example, at times participants communicate attitudes that may not align with public health values (e.g. supporting personal responsibility, anti-nanny state, and neoliberal ideologies of health and wellbeing) that we rarely see communicated to us in in-depth interview or focus group studies. We would argue that these perspectives are valuable for public health researchers because they capture a different community voice that may not otherwise be represented in research. This may show where there is a lack of support for health interventions and policy reforms and may indicate where further awareness-raising needs to occur. These types of responses also contribute to reflexive practice by challenging our assumptions about how we think people should feel about responses to particular public health issues. Examples of such responses from our surveys include:
"Like I have already said, if you try to hide it you will only make it more attractive. This nanny-state attitude of the elite drives me crazy. People must be allowed to decide for themselves."
Ethical issues for participants and researchers
Researchers should also be aware that some of the ethical issues associated with online qualitative surveys may differ from those in in-depth interviews, and it is important that these are addressed in any ethical consideration of the study. Providing a clear and simply worded Plain Language Statement (in written or video form) is important in establishing informed consent and willingness to participate. While participants are given information about who to contact if they have further questions about the study, this is an extra step, and they may not feel as able to ask for clarification. Because of this, we try to provide multiple examples of the types of questions that we will ask, as well as providing downloadable support details (for example, for mental health support lines). A positive aspect of surveys is that individuals can easily ignore recruitment notices inviting them to participate. Participants are also able to stop the survey at any time by exiting the browser if they feel discomfort, without having to give a reason in person to a researcher.
While the anonymous nature of the survey may be empowering for some participants (Braun and Clarke, 2013; Terry and Braun, 2017; Braun et al., 2021), it can also make it difficult for researchers to ascertain whether people need any further support after completing the survey. Participants may also fill in surveys with someone else present and may be influenced in how they respond to questions (with the exception of some studies in which people may require assistance from someone to type their responses). Because of this, some researchers, ethics committees and funders may be more cautious about using these studies for highly sensitive subjects. However, we would argue that the important point is that studies follow ethical principles and take the lack of direct contact with participants into account in their ethical considerations. It is also important to ensure that the platforms used to collect survey data are trusted and secure. Here, we would argue that universities have an obligation to investigate and, where possible, approve survey providers to ensure that researchers are using platforms that meet rigorous standards for data protection and privacy.
It is also important to note that there may be responses from participants that may be challenging ( Terry and Braun, 2017 ; Braun and Clarke, 2021 ). Online spaces are rife with trolling due to their anonymous nature, and online surveys are not immune to this behaviour. Naturally, this leads to some silly responses—‘ Deakin University is responsible for all of this ’, but researchers should also be aware that the anonymity of surveys can (although in our experience not often) lead to responses that may cause discomfort for the researchers. For example, when asked if participants had anything else to add to a climate survey ( Arnot et al ., 2024c ), one responded ‘ nope, but you sure asked a lot of dumbass questions’ . Just as with interview-based studies, there must be processes built into the research for debriefing—particularly for students and early career researchers—as well as clear decisions about whether to include or exclude these types of responses when preparing the dataset for analysis and in writing up the results from the survey.
The importance of piloting the survey
Because there is no opportunity to explain and clarify concepts, piloting is particularly important (Braun and Clarke, 2013; Terry and Braun, 2017; Braun et al., 2021) to ensure that: (i) the technical aspects of the survey work as intended; (ii) the survey is eliciting quality responses (with limited ‘nonsensical’ responses such as random characters); (iii) the survey responses indicate comprehension of the survey questions; and (iv) there is not a substantial number of people who drop out of the study. Typically, we pilot our survey with 10% of the intended sample size. After piloting, we often change question wording, particularly to address questions that elicit very short text responses, adjust the length of the survey, and sometimes refine definitions or language to improve comprehension. Researchers should remember that changes to the survey questions may need to be reviewed by ethics committees before launching the full survey. It is important to build in time for piloting and revising the survey to ensure you get this right, as once you launch the full survey, there is no going back!
Preparing the dataset
Once the full survey is launched, the quality of data and the types of responses you receive can vary. There is very limited transparency in published papers around how the dataset was prepared (more familiar to some as ‘data cleaning’), including the decisions about which (if any) participants (or indeed responses) were excluded from the dataset and why. Nonsensical responses can be common and can take a range of forms (Figure 3). These can include random numbers or letters, a chunk of text that has been copied and pasted from elsewhere, predictive text or even repeated emojis. In one study, a participant quoted the script of The Bee Movie in response to questions.
Figure 3: Visual examples of nonsensical responses in online qualitative surveys.
Part of our familiarization with the dataset [Phase One in Braun and Clarke’s reflexive approach to thematic analysis ( Braun and Clarke, 2013 ; Braun et al ., 2021 )] includes preparing the dataset for analysis. We use this phase to help make decisions about what to include and exclude from the final dataset. While a row of emojis in the data file can easily be spotted and removed from the dataset, sometimes responses can look robust until you read, become familiar and engage with the data. For example, when asked about what they thought about collective climate action ( Arnot et al ., 2023a , 2024c ), some participants entered random yet related terms such as ‘ plastic ’, or repeated similar phrases across multiple questions:
“ why do we need paper straws ”, “ paper straws are terrible ”, “ papers straws are bad for you ”, “ paper straws are gross .”
Participants can also provide comprehensive answers for the first few questions and then nonsensical responses for the rest, which may be due to question fatigue [(Braun and Clarke, 2013), p. 138]. Therefore, it is important to go closely through each participant’s responses to ensure they have attempted to provide bona fide answers. For example, in one of our young people and climate surveys (Arnot et al., 2023a, 2024c), one participant responded genuinely to the first half of the survey before the quality of their responses dropped dramatically:
“I can’t even be bothered to read that question ”, “ why so many questions ”, “ bro too many sections. ”
Some market research panel providers may complete an initial quality screen of the data. However, this does not replace the need for the research team’s own data preparation processes. Researchers should check that responses are coherent—for example, that they do not contradict each other and are credible. In our more recent studies, we have increasingly seen responses cut and pasted from ChatGPT and other AI tools, providing a new challenge in assessing the quality of responses. If you are seeing these types of responses, it might be an opportunity to think about the style and suitability of the questions being asked. For example, the use of AI tools might suggest that people are finding it difficult to answer questions or feel that they have to present a ‘correct’ answer. We would also note that because of the volume of data in these surveys, the preparation of data involves multiple members of the team. In many cases, decisions need to be made about participants who may not have provided authentic responses across the survey, and the research team should make clear in any paper which participants were included or excluded, and why. This is a careful balancing act that can require assessing the quality of a participant’s responses across the whole dataset to determine whether, overall, they contribute to the research.
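Some of the screening described above can be partially automated before the human read-through. The following Python sketch illustrates three simple heuristics for the kinds of nonsensical responses we describe (very short answers, strings of symbols or emojis, and a single token repeated over and over). The thresholds are hypothetical, and any flagged answer should still be reviewed by a researcher rather than excluded automatically.

```python
import re
from collections import Counter

def looks_nonsensical(text, min_words=3):
    """Heuristic screen for low-quality free-text answers.

    Flags answers that are very short, contain almost no letters
    (e.g. random characters or rows of emojis), or repeat a single
    token most of the time. Thresholds are illustrative only.
    """
    words = text.split()
    if len(words) < min_words:
        return True
    letters = len(re.findall(r"[A-Za-z]", text))
    if letters / max(len(text), 1) < 0.5:   # mostly symbols/emojis
        return True
    most_common = Counter(w.lower() for w in words).most_common(1)[0][1]
    if most_common / len(words) > 0.6:      # one token repeated
        return True
    return False

looks_nonsensical("asdf123")       # → True: too short to be an answer
looks_nonsensical("🐝🐝🐝🐝🐝🐝")  # → True: a single run of emojis
```

Heuristics like these catch rows of emojis or random characters, but not superficially fluent copy-paste or AI-generated text, which is why familiarization with the whole dataset remains essential.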
Navigating the volume of data and writing up results
Finally, discussions about how to navigate the volume of data that these types of studies produce could be a standalone paper. In general, principles of reflexive practices apply to the analysis of data from these studies. However, as a starting point, here are a few considerations when approaching these datasets.
We would argue that online qualitative surveys lend themselves to some types of analytical approaches over others—for example, reflexive thematic analysis, as compared to grounded theory or interpretive phenomenological analysis (though it can be used with these) ( Braun and Clarke, 2013 ; Terry and Braun, 2017 ).
While initial familiarization, coding and analysis can focus on specific questions and associated responses, it is important to analyse the dataset as a whole (or as clusters associated with particular topics) as participants may provide relevant data to a topic under multiple questions ( Terry and Braun, 2017 ). We initially focus our coding on specific questions or a group of survey questions under a topic of investigation. Once we have developed and constructed preliminary themes from the data associated with these clusters of questions, we then move to looking at responses across the dataset as we review themes further.
Researchers should think carefully about how to manage the data, which may not be available as individual participant transcripts but rather as a ‘whole’ dataset in an Excel spreadsheet. Some may prefer qualitative data analysis software (QDAS) to manage and navigate the data. However, many of us find that Excel (and particularly the use of labelled tabs) is useful for grouping data and moving from codes to constructing themes.
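As a sketch of this grouping step, the following Python code splits a wide-format export (one row per participant, one column per question) into one table per topic, mirroring the ‘one labelled tab per topic’ workflow. The column-naming convention (question columns prefixed by topic, e.g. `climate_q1`) and the example answers are hypothetical assumptions for illustration.

```python
import csv
import io

# Hypothetical wide-format survey export; in practice this would be
# read from the spreadsheet downloaded from the survey platform.
raw = io.StringIO(
    "id,climate_q1,climate_q2,alcohol_q1\n"
    "p01,Worried about floods,Protest works,Ads target women\n"
    "p02,Not my problem,Too disruptive,Never noticed them\n"
)
rows = list(csv.DictReader(raw))

def split_by_topic(rows):
    """Group question columns into per-topic tables of
    (participant id, question, answer) tuples."""
    topics = {}
    for row in rows:
        for col, answer in row.items():
            if col == "id":
                continue
            topic = col.split("_")[0]   # assumes 'topic_question' names
            topics.setdefault(topic, []).append((row["id"], col, answer))
    return topics

tables = split_by_topic(rows)
# tables["climate"] now holds all climate responses, ready to paste
# into a 'climate' tab (or to load into QDAS) for coding.
```

Keeping the participant ID alongside each answer makes it possible to move from a topic-level tab back to that participant’s responses across the whole dataset when reviewing themes.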
As with all rigorous qualitative research, coding and theme development should be guided by the research questions. A clear record of decision-making about analytical choices (and being reflexive about these) should be kept. In any write-up, we would recommend that researchers are clear about which survey questions they used in the analysis [researchers could consider providing a supplementary file of some or all of the survey questions—see, for example Hennessy and O’Donoghue (2024) ].
In writing up the results, researchers should still seek to present a rich description of the data, as demonstrated in the presentation of results in the following papers ( Marko et al ., 2022a , 2022b ; McCarthy et al ., 2023 ; Pitt et al ., 2023 ; Hennessy and O’Donoghue, 2024 ). We have found the use of tables with additional examples of quotes as they relate to themes and subthemes can be a practical way of providing the reader with further examples of the data, particularly when constrained by journal word count limits [see, for example, Table 2 in Arnot et al ., (2024c) ]. However, these tables do not replace a full and complete presentation of the interpretation of the data.
This article offers methodological reflections and practical guidance around online qualitative survey design, implementation and analysis. While online qualitative surveys engage participants in a different type of conversation, they have design features that enable the collection of rich data. We recognize that we have much to learn and that while no survey of ours has been perfect, each new experience with developing and conducting online qualitative surveys has brought new understandings and lessons for future studies. In recognizing that we are learning, we also feel that our experience to date could be valuable for progressing the conversation about the rigour of online qualitative surveys and maximizing this method for public health gains.
H.P. is funded through a VicHealth Postdoctoral Research Fellowship. S.M. is funded through a Deakin University Faculty of Health Deans Postdoctoral Fellowship. G.A. is funded by an Australian Government Research Training Program Scholarship. M.H. is funded through an Irish Research Council Government of Ireland Postdoctoral Fellowship Award [GOIPD/2023/1168].
The pregnancy loss study was funded by the Irish Research Council through its New Foundations Awards and in partnership with the Irish Hospice Foundation as civil society partner [NF/2021/27123063].
S.T. is Editor in Chief of Health Promotion International, H.P. is a member of the Editorial Board of Health Promotion International, S.M. and G.A. are Social Media Coordinators for Health Promotion International, M.H. is an Associate Editor for Health Promotion International. They were not involved in the review process or in any decision-making on the manuscript.
The data used in this study are not available.
Ethical approvals for studies conducted by Deakin University include the climate crisis (HEAG-H 55_2020, HEAG-H 162_2021); parents’ perceptions of harmful industries on young people (HEAG-H 158_2022); women and alcohol marketing (HEAG-H 123_2022) and gambling (HEAG 227_2020).
Arnot, G., Pitt, H., McCarthy, S., Cordedda, C., Marko, S. and Thomas, S. L. (2024a) Australian youth perspectives on the role of social media in climate action. Australian and New Zealand Journal of Public Health, 48, 100111.
Arnot, G., Pitt, H., McCarthy, S., Cordedda, C., Marko, S. and Thomas, S. L. (2024b) Australian youth perspectives on the role of social media in climate action. Australian and New Zealand Journal of Public Health, 48, 100111.
Arnot, G., Thomas, S., Pitt, H. and Warner, E. (2023a) Australian young people’s perceptions of the commercial determinants of the climate crisis. Health Promotion International, 38, daad058.
Arnot, G., Thomas, S., Pitt, H. and Warner, E. (2023b) ‘It shows we are serious’: young people in Australia discuss climate justice protests as a mechanism for climate change advocacy and action. Australian and New Zealand Journal of Public Health, 47, 100048.
Arnot, G., Thomas, S., Pitt, H. and Warner, E. (2024c) Australian young people’s perspectives about the political determinants of the climate crisis. Health Promotion Journal of Australia, 35, 196–206.
Braun, V. and Clarke, V. (2013) Successful Qualitative Research: A Practical Guide for Beginners. Sage, London.
Braun, V. and Clarke, V. (2021a) One size fits all? What counts as quality practice in (reflexive) thematic analysis? Qualitative Research in Psychology, 18, 328–352.
Braun, V. and Clarke, V. (2021b) To saturate or not to saturate? Questioning data saturation as a useful concept for thematic analysis and sample-size rationales. Qualitative Research in Sport, Exercise and Health, 13, 201–216.
Braun, V. and Clarke, V. (2021c) Can I use TA? Should I use TA? Should I not use TA? Comparing reflexive thematic analysis and other pattern-based qualitative analytic approaches. Counselling and Psychotherapy Research, 21, 37–47.
Braun, V. and Clarke, V. (2024) A critical review of the reporting of reflexive thematic analysis in Health Promotion International. Health Promotion International, 39, daae049.
Braun, V., Clarke, V., Boulton, E., Davey, L. and McEvoy, C. (2021) The online survey as a qualitative research tool. International Journal of Social Research Methodology, 24, 641–654.
Chen, J. (2023) Digitally dispersed, remotely engaged: interrogating participation in virtual photovoice. Qualitative Research, 23, 1535–1555.
Chowdhury, M. F. (2015) Coding, sorting and sifting of qualitative data analysis: debates and discussion. Quality & Quantity, 49, 1135–1143.
Collins, C. S. and Stockton, C. M. (2018) The central role of theory in qualitative research. International Journal of Qualitative Methods, 17, 160940691879747.
de Villiers, C., Farooq, M. B. and Molinari, M. (2022) Qualitative research interviews using online video technology—challenges and opportunities. Meditari Accountancy Research, 30, 1764–1782.
Edwards, R. and Holland, J. (2020) Reviewing challenges and the future for qualitative interviewing. International Journal of Social Research Methodology, 23, 581–592.
Elliott, R., Fischer, C. T. and Rennie, D. L. (1999) Evolving guidelines for publication of qualitative research studies in psychology and related fields. British Journal of Clinical Psychology, 38, 215–229.
Fan, H., Li, B., Pasaribu, T. and Chowdhury, R. (2024) Online interviews as new methodological normalcy and a space of ethics: an autoethnographic investigation into Covid-19 educational research. Qualitative Inquiry, 30, 333–344.
Finlay, L. (2021) Thematic analysis: the ‘good’, the ‘bad’ and the ‘ugly’. European Journal for Qualitative Research in Psychotherapy, 11, 103–116.
Glaw, X., Inder, K., Kable, A. and Hazelton, M. (2017) Visual methodologies in qualitative research: autophotography and photo elicitation applied to mental health research. International Journal of Qualitative Methods, 16, 160940691774821.
Hennessy, M. and O’Donoghue, K. (2024) Bridging the gap between pregnancy loss research and policy and practice: insights from a qualitative survey with knowledge users. Health Research Policy and Systems, 22, 15.
Hensen, B., Mackworth-Young, C. R. S., Simwinga, M., Abdelmagid, N., Banda, J., Mavodza, C. et al. (2021) Remote data collection for public health research in a COVID-19 era: ethical implications, challenges and opportunities. Health Policy and Planning, 36, 360–368.
Jamie, K. and Rathbone, A. P. (2022) Using theory and reflexivity to preserve methodological rigour of data collection in qualitative research. Research Methods in Medicine & Health Sciences, 3, 11–21.
Kennedy, M., Maddox, R., Booth, K., Maidment, S., Chamberlain, C. and Bessarab, D. (2022) Decolonising qualitative research with respectful, reciprocal, and responsible research practice: a narrative review of the application of Yarning method in qualitative Aboriginal and Torres Strait Islander health research. International Journal for Equity in Health, 21, 134.
Levi, R., Ridberg, R., Akers, M. and Seligman, H. (2021) Survey fraud and the integrity of web-based survey research. American Journal of Health Promotion, 36, 18–20.
Malterud, K., Siersma, V. D. and Guassora, A. D. (2016) Sample size in qualitative interview studies: guided by information power. Qualitative Health Research, 26, 1753–1760.
Marko, S., Thomas, S., Pitt, H. and Daube, M. (2022a) ‘Aussies love a bet’: gamblers discuss the social acceptance and cultural accommodation of gambling in Australia. Australian and New Zealand Journal of Public Health, 46, 829–834.
Marko, S., Thomas, S. L., Robinson, K. and Daube, M. (2022b) Gamblers’ perceptions of responsibility for gambling harm: a critical qualitative inquiry. BMC Public Health, 22, 725.
McCarthy, S., Thomas, S. L., Pitt, H., Warner, E., Roderique-Davies, G., Rintoul, A. et al. (2023) ‘They loved gambling more than me’. Women’s experiences of gambling-related harm as an affected other. Health Promotion Journal of Australia, 34, 284–293.
Pitt, H., McCarthy, S., Keric, D., Arnot, G., Marko, S., Martino, F. et al. (2023) The symbolic consumption processes associated with ‘low-calorie’ and ‘low-sugar’ alcohol products and Australian women. Health Promotion International, 38, 1–13.
Reed, M. S., Merkle, B. G., Cook, E. J., Hafferty, C., Hejnowicz, A. P., Holliman, R. et al. (2024) Reimagining the language of engagement in a post-stakeholder world. Sustainability Science.
Terry, G. and Braun, V. (2017) Short but often sweet: the surprising potential of qualitative survey methods. In Braun, V., Clarke, V. and Gray, D. (eds), Collecting Qualitative Data: A Practical Guide to Textual, Media and Virtual Techniques. Cambridge University Press, Cambridge.
Toerien, M. and Wilkinson, S. (2004) Exploring the depilation norm: a qualitative questionnaire study of women’s body hair removal. Qualitative Research in Psychology, 1, 69–92.
Varpio, L. and Ellaway, R. H. (2021) Shaping our worldviews: a conversation about and of theory. Advances in Health Sciences Education: Theory and Practice, 26, 339–345.
Wang, J., Calderon, G., Hager, E. R., Edwards, L. V., Berry, A. A., Liu, Y. et al. (2023) Identifying and preventing fraudulent responses in online public health surveys: lessons learned during the COVID-19 pandemic. PLOS Global Public Health, 3, e0001452.