Shapiro Library

Psychology Research Guide

Conducting psychological research.

Conducting your own psychological research means designing and executing surveys or experiments, analyzing the data, and drawing conclusions from your findings. Conducting original, or primary, research is how scholars and students contribute to the body of scholarly knowledge. Below are links to funding resources for financing your research projects and to the SAGE Research Methods database, which has comprehensive information on designing and executing a study across a wide range of research methods. You may also find the psychological tests & assessments and the databases that include datasets on this page helpful during your research process.

  • Research Funding: This page includes new funding announcements issued by federal agencies and federally affiliated organizations supporting research and training in areas including psychological science. Also included is information on grants, scholarships, and awards from the APA and other psychology-related organizations.


Institutional Review Board

Institutional review boards are committees formed to review and monitor biomedical and behavioral research with human subjects. All research involving human subjects must be approved by the IRB before research begins. Visit the SNHU IRB site to learn more, review the research submission process, and download the forms you'll need to get started.

  • SNHU Institutional Review Board

Psychological Tests and Assessments

Researchers use various tools to measure particular psychological and social phenomena. These tools or tests are stringently reviewed for validity and reliability. When searching for tests, be sure to locate the accompanying reviews to substantiate your choice of instrument prior to submitting your research proposal to the Institutional Review Board (IRB).

You may find tests listed in these databases that are available only in print format or from the creator. Check the availability and permissions to see how you might access a copy of the instrument.


  • Measurement Instrument Database for the Social Sciences (MIDSS): A repository for instruments used to collect data across the social sciences. Use the site to discover instruments you can use in your own research.
  • PSI CHI Research Measures Database: Psi Chi, the International Honor Society in Psychology, maintains a database of websites linking to research measures, tools, and instruments. All of the resources are tagged by category and by keyword, so you can retrieve lists of resources related to various topical domains (e.g., affective, social, cognitive) or search by keyword.
  • Science of Behavior Change (SOBC) Research Network: The Measures. The SOBC Research Network has identified specific potential targets for behavior change interventions in three broad domains: self-regulation, stress reactivity and stress resilience, and interpersonal and social processes. As of January 2022, there were 186 measures available, with more added monthly. The SOBC Repository is the source for behavioral science measures that have been validated (or are in the process of being validated) in accordance with the SOBC Experimental Medicine Approach. The Open Science Framework (OSF) hosts the full details of the validation process for each measure posted in the SOBC Repository, to increase the openness and transparency of this science.
  • National Institutes of Health: NIH Toolbox. The NIH Toolbox® is a comprehensive set of neuro-behavioral measurements that quickly assess cognitive, emotional, sensory, and motor functions from the convenience of an iPad.

You may also find information about psychological measures in books. Below are some examples:

  • How to Conduct Research in Psychology

Data Sets 

Research may be conducted using existing data sets. Demographic data sets may be used to frame further research. The library subscribes to some databases containing data sets. Others are available freely online.


Online Data Sets

What is Open Data? In simple terms, Open Data is data that is open for anyone and everyone to access, modify, reuse, and share. Open Data derives its base from various “open movements” such as open source, open hardware, open government, and open science. Governments, independent organizations, and agencies have come forward to open the floodgates of data and create more and more open data for free and easy access. (Definition from freeCodeCamp.org)

  • Data.gov
  • Google Dataset Search
  • Google Public Data Explorer
  • Harvard Dataverse
  • Kaggle
  • Open Data Monitor
  • World Health Organization (WHO) Open Data Repository
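
If you plan to work with one of these open datasets programmatically, a few lines of Python are enough to pull it into a data frame for exploration. The sketch below is a minimal, hypothetical example: the URL is a placeholder for whichever CSV file you locate through the portals listed above, and the pandas library is assumed to be installed.

```python
# Minimal sketch: load an open dataset for initial exploration.
# The URL is a placeholder -- substitute the direct CSV link for a dataset
# you find on Data.gov, Harvard Dataverse, the WHO repository, etc.
import pandas as pd

DATA_URL = "https://example.org/open-dataset.csv"  # hypothetical link

df = pd.read_csv(DATA_URL)

# Get oriented before framing a research question around the data.
print(df.shape)               # number of rows and columns
print(df.columns.tolist())    # variable names
print(df.describe())          # summary statistics for numeric variables
```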

SNHU Undergraduate Research

SNHU hosts an annual Undergraduate Research Day on campus to showcase research done by undergraduates during the year. Students select a mentor, submit a proposal by the deadline, and, if accepted, conduct their research and present it on the first Wednesday of April at Undergraduate Research Day. Currently, only students who attend SNHU on campus are eligible to participate. Students conducting research using human subjects are required to submit a proposal to the IRB (see box above) prior to submitting their Undergraduate Research proposal.

  • SNHU Undergraduate Research Site

The Use of Research Methods in Psychological Research: A Systematised Review

Salomé Elizabeth Scholtz

1 Community Psychosocial Research (COMPRES), School of Psychosocial Health, North-West University, Potchefstroom, South Africa

Werner de Klerk

Leon T. de Beer

2 WorkWell Research Institute, North-West University, Potchefstroom, South Africa

Research methods play an imperative role in research quality as well as in educating young researchers; however, how they are applied is unclear, which can be detrimental to the field of psychology. This systematised review therefore aimed to determine which research methods are being used, how they are being used, and for what topics in the field. Our review of 999 articles from five journals over a period of 5 years indicated that psychology research is conducted on 10 topics, predominantly via quantitative research methods. Of these 10 topics, social psychology was the most popular. The remainder of the reported methodology is described. It was also found that articles lacked rigour and transparency in the methodology used, which has implications for replicability. In conclusion, this article provides an overview of all reported methodologies used in a sample of psychology journals. It highlights the popularity and application of methods and designs throughout the article sample, as well as an unexpected lack of rigour with regard to most aspects of methodology. Possible sample bias should be considered when interpreting the results of this study. It is recommended that future research utilise the results of this study to determine the possible impact on the field of psychology as a science and to further investigate the use of research methods. The results should prompt future research into the lack of rigour and its implications for replication, the use of certain methods above others, publication bias, and the choice of sampling method.

Introduction

Psychology is an ever-growing and popular field (Gough and Lyons, 2016; Clay, 2017). Due to this growth and the need for science-based research on which to base health decisions (Perestelo-Pérez, 2013), the use of research methods in the broad field of psychology is an essential point of investigation (Stangor, 2011; Aanstoos, 2014). Research methods are therefore viewed as important tools used by researchers to collect data (Nieuwenhuis, 2016) and include the following: quantitative, qualitative, mixed methods, and multi-method (Maree, 2016). Additionally, researchers also employ various types of literature reviews to address research questions (Grant and Booth, 2009). According to the literature, which research method is used, and why, is complex, as it depends on various factors that may include paradigm (O'Neil and Koekemoer, 2016), research question (Grix, 2002), or the skill and exposure of the researcher (Nind et al., 2015). How these research methods are employed is also difficult to discern, as research methods are often depicted as having fixed boundaries that are continuously crossed in research (Johnson et al., 2001; Sandelowski, 2011). Examples of this crossing include adding quantitative aspects to qualitative studies (Sandelowski et al., 2009), or stating that a study used a mixed methods design without the study having any characteristics of this design (Truscott et al., 2010).

The inappropriate use of research methods affects how students and researchers improve and utilise their research skills (Scott Jones and Goldring, 2015), how theories are developed (Ngulube, 2013), and the credibility of research results (Levitt et al., 2017). This, in turn, can be detrimental to the field (Nind et al., 2015), journal publication (Ketchen et al., 2008; Ezeh et al., 2010), and attempts to address public social issues through psychological research (Dweck, 2017). This is especially important given the now well-known replication crisis the field is facing (Earp and Trafimow, 2015; Hengartner, 2018).

Due to this lack of clarity on method use and the potential impact of inept use of research methods, the aim of this study was to explore the use of research methods in the field of psychology through a review of journal publications. Chaichanasakul et al. (2011) identify reviewing articles as an opportunity to examine the development, growth, and progress of a research area and the overall quality of a journal. Studies such as Lee et al. (1999) and the Bluhm et al. (2011) review of qualitative methods have attempted to synthesise the use of research methods and indicated the growth of qualitative research in American and European journals. Research has also focused on the use of research methods in specific sub-disciplines of psychology; for example, in the field of industrial and organisational psychology, Coetzee and Van Zyl (2014) found that South African publications tend to consist of cross-sectional quantitative research methods, with longitudinal studies underrepresented. Qualitative studies were found to make up 21% of the articles published from 1995 to 2015 in a similar study by O'Neil and Koekemoer (2016). Other methods, such as mixed methods research in health psychology, have also reportedly been growing in popularity (O'Cathain, 2009).

A broad overview of the use of research methods in the field of psychology as a whole is, however, not available in the literature. Therefore, our research focused on answering what research methods are being used, how these methods are being used, and for what topics in practice (i.e., journal publications), in order to provide a general perspective of method use in psychology publications. We synthesised the collected data into the following format: research topic [areas of scientific discourse in a field or the current needs of a population (Bittermann and Fischer, 2018)], method [data-gathering tools (Nieuwenhuis, 2016)], sampling [elements chosen from a population to partake in research (Ritchie et al., 2009)], data collection [techniques and research strategy (Maree, 2016)], and data analysis [discovering information by examining bodies of data (Ktepi, 2016)]. A systematised review of recent articles (2013 to 2017) collected from five different journals in the field of psychological research was conducted.

Methods

Grant and Booth (2009) describe the systematised review as the review of choice for postgraduate studies; it employs some elements of a systematic review and seldom more than one or two databases to catalogue studies after a comprehensive literature search. The aspects of a systematic review used in this systematised review were a full search within the chosen database and data produced in tabular form (Grant and Booth, 2009).

Sample sizes and timelines vary in systematised reviews (see Lowe and Moore, 2014; Pericall and Taylor, 2014; Barr-Walker, 2017). With no clear parameters identified in the literature (see Grant and Booth, 2009), the sample size of this study was determined by the purpose of the sample (Strydom, 2011) and by time and cost constraints (Maree and Pietersen, 2016). Thus, a non-probability purposive sample (Ritchie et al., 2009) of the top five psychology journals from 2013 to 2017 was included in this research study. According to Lee (2015), the American Psychological Association (APA) recommends the use of the most up-to-date sources for data collection, with consideration of the context of the research study. As this research study focused on the most recent trends in research methods used in the broad field of psychology, the identified time frame was deemed appropriate.

Psychology journals were only included if they formed part of the top five English journals in the miscellaneous psychology domain of the Scimago Journal and Country Rank (Scimago Journal & Country Rank, 2017). The Scimago Journal and Country Rank provides a yearly updated list of publicly accessible journal and country-specific indicators derived from the Scopus® database (Scopus, 2017b) by means of the Scimago Journal Rank (SJR) indicator, developed by Scimago from the Google PageRank™ algorithm (Scimago Journal & Country Rank, 2017). Scopus is the largest global database of abstracts and citations from peer-reviewed journals (Scopus, 2017a). The Scimago Journal and Country Rank list was developed to allow researchers to assess scientific domains, compare country rankings, and compare and analyse journals (Scimago Journal & Country Rank, 2017), which supported the aim of this research study. Additionally, the journals had to focus on topics in psychology in general, with no preference for specific research methods, and provide full-text access to articles.

The following top five journals in 2018 fell within the abovementioned inclusion criteria: (1) Australian Journal of Psychology, (2) British Journal of Psychology, (3) Europe's Journal of Psychology, (4) International Journal of Psychology, and (5) the Journal of Psychology: Interdisciplinary and Applied.

Journals were excluded from this systematised review if no full-text versions of their articles were available, if journals explicitly stated a publication preference for certain research methods, or if the journal only published articles in a specific discipline of psychological research (for example, industrial psychology, clinical psychology etc.).

The researchers followed a procedure (see Figure 1) adapted from that of Ferreira et al. (2016) for systematised reviews. Data collection and categorisation commenced on 4 December 2017 and continued until 30 June 2019. All the data were systematically collected and coded manually (Grant and Booth, 2009), with an independent person acting as co-coder. Codes of interest included the research topic, method used, design used, sampling method, and methodology (the methods used for data collection and data analysis). These codes were derived from the wording in each article. Themes were created based on the derived codes and checked by the co-coder. Lastly, these themes were catalogued into a table as per the systematised review design.

Figure 1. Systematised review procedure.
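
To make the coding step above concrete, the sketch below shows one way such a manual workflow could be mirrored programmatically: wording-derived codes are recorded per article and then rolled up into themes. The codes, themes, and articles are invented for illustration; they are not the authors' actual codebook.

```python
# Hypothetical illustration of the coding workflow described above:
# each article carries codes derived from its wording, and codes are
# grouped into broader themes (research topics). All values are invented.
from collections import Counter

CODE_TO_THEME = {
    "attitudes": "social psychology",
    "group behaviour": "social psychology",
    "memory": "cognitive psychology",
    "scale validation": "psychometrics",
}

articles = [
    {"id": "A001", "codes": ["attitudes", "group behaviour"]},
    {"id": "A002", "codes": ["memory"]},
    {"id": "A003", "codes": ["scale validation", "memory"]},
]

# Tally how often each theme occurs across the article sample.
theme_counts = Counter(
    CODE_TO_THEME[code] for article in articles for code in article["codes"]
)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```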

According to Johnston et al. (2019), “literature screening, selection, and data extraction/analyses” (p. 7) are specifically tailored to the aim of a review. Therefore, the steps followed in a systematic review must be reported in a comprehensive and transparent manner. The chosen systematised design adhered to the rigour expected of systematic reviews with regard to a full search and data produced in tabular form (Grant and Booth, 2009). The rigorous application of the systematic review is therefore discussed in relation to these two elements.

Firstly, to ensure a comprehensive search, this research study promoted review transparency by following a clear protocol, outlined according to each review stage, before collecting data (Johnston et al., 2019). This protocol was similar to that of Ferreira et al. (2016) and was approved by three research committees/stakeholders and the researchers (Johnston et al., 2019). The eligibility criteria for article inclusion were based on the research question and clearly stated, and the process of inclusion was recorded on an electronic spreadsheet to create an evidence trail (Bandara et al., 2015; Johnston et al., 2019). Microsoft Excel spreadsheets are a popular tool for review studies and can increase the rigour of the review process (Bandara et al., 2015). Screening for appropriate articles forms an integral part of a systematic review process (Johnston et al., 2019). This step was applied to two aspects of this research study: the choice of eligible journals and the choice of articles to be included. Suitable journals were selected by the first author and reviewed by the second and third authors. Initially, all articles from the chosen journals were included. Then, by process of elimination, those irrelevant to the research aim (e.g., interview articles or discussions) were excluded.

To ensure rigorous data extraction, data were first extracted by one reviewer, and an independent person verified the results for completeness and accuracy (Johnston et al., 2019). The research question served as a guide for efficient, organised data extraction (Johnston et al., 2019). Data were categorised according to the codes of interest, along with article identifiers for audit trails, such as the authors, title, and aims of the articles. The categorised data were based on the aim of the review (Johnston et al., 2019) and synthesised in tabular form under the methods used, how these methods were used, and for what topics in the field of psychology.
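
The authors report co-coder verification without naming an agreement statistic. One conventional way to quantify agreement between two coders assigning categorical labels is Cohen's kappa; the sketch below, with invented labels and using scikit-learn, shows how such a check could be run. It illustrates a common practice, not the authors' actual procedure.

```python
# Illustrative only: quantify coder/co-coder agreement with Cohen's kappa.
# The labels are invented; the article itself reports verification without
# stating an agreement statistic.
from sklearn.metrics import cohen_kappa_score

coder = ["quantitative", "qualitative", "quantitative", "review", "quantitative"]
co_coder = ["quantitative", "qualitative", "quantitative", "quantitative", "quantitative"]

kappa = cohen_kappa_score(coder, co_coder)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement; 0 = chance level
```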

Results

The initial search produced a total of 1,145 articles from the five journals identified. Inclusion and exclusion criteria resulted in a final sample of 999 articles (Figure 2). Articles were co-coded into 84 codes, from which 10 themes were derived (Table 1).

Figure 2. Journal article frequency.

Table 1. Codes used to form themes (research topics).

These 10 themes represent the topic section of our research question (Figure 3). All these topics, except for the final one, psychological practice, were found to concur with the research areas in psychology identified by Weiten (2010). These research areas were chosen to represent the derived codes as they provided broad definitions that allowed for clear, concise categorisation of the vast amount of data. Article codes were categorised under particular themes/topics if they adhered to the research area definitions created by Weiten (2010). It is important to note that these areas of research do not refer to specific disciplines in psychology, such as industrial psychology, but to broader fields that may encompass sub-interests of these disciplines.

Figure 3. Topic frequency (international sample).

In the case of developmental psychology, researchers conduct research into human development from childhood to old age. Social psychology includes research on behaviour governed by social drivers. Researchers in the field of educational psychology study how people learn and the best ways to teach them. Health psychology aims to determine the effect of psychological factors on physiological health. Physiological psychology, on the other hand, looks at the influence of physiological aspects on behaviour. Experimental psychology, although not the only theme in which experimental research is used, focuses on the traditional core topics of psychology (for example, sensation). Cognitive psychology studies the higher mental processes. Psychometrics is concerned with measuring capacity or behaviour. Personality research aims to assess and describe consistency in human behaviour (Weiten, 2010). The final theme, psychological practice, refers to the experiences, techniques, and interventions employed by practitioners, researchers, and academics in the field of psychology.

Articles under these themes were further subdivided into methodologies: method, sampling, design, data collection, and data analysis. The categorisation was based on information stated in the articles and was not inferred by the researchers. Data were compiled into two sets of results presented in this article. The first set addresses the aim of this study from the perspective of the topics identified. The second set represents a broad overview of the results from the perspective of the methodology employed. The second set of results is discussed in this article, while the first set is presented in table format. The discussion thus provides a broad overview of method use in psychology (across all themes), while the table format provides readers with in-depth insight into the methods used in the individual themes identified. We believe that presenting the data from both perspectives allows readers a broad understanding of the results. Due to the large amount of information that made up our results, we followed Cichocka and Jost (2014) in simplifying our results. Please note that the numbers indicated in the tables in terms of methodology differ from the total number of articles: some articles employed more than one method/sampling technique/design/data collection method/data analysis in their studies.

What follows are the results for what methods are used, how these methods are used, and which topics in psychology they are applied to. Percentages are reported to the second decimal in order to highlight small differences in the occurrence of methodology.

Firstly, with regard to the research methods used, our results show that researchers are more likely to use quantitative research methods (90.22%) than any other research method. Qualitative research was the second most common research method but made up only 4.79% of general method usage. Reviews occurred almost as often as qualitative studies (3.91%), as the third most popular method. Mixed methods research studies (0.98%) occurred across most themes, whereas multi-method research was indicated in only one study and amounted to 0.10% of the methods identified. The specific use of each method in the topics identified is shown in Table 2 and Figure 4.

Table 2. Research methods in psychology.

Figure 4. Research method frequency in topics.
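
A note on the arithmetic behind these figures: because a single article can report more than one method (as explained above), the percentages are shares of method occurrences, not of the 999 articles. A minimal sketch of that calculation follows; the counts are invented for illustration, and only the rounding convention (two decimals) is taken from the text.

```python
# Percentages as shares of method *occurrences*: an article may contribute
# more than one occurrence. The counts below are invented for illustration.
occurrences = {
    "quantitative": 461,
    "qualitative": 24,
    "review": 20,
    "mixed methods": 5,
    "multi-method": 1,
}

total = sum(occurrences.values())
for method, count in occurrences.items():
    share = round(100 * count / total, 2)  # reported to the second decimal
    print(f"{method}: {share:.2f}%")
```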

Secondly, in the case of how these research methods are employed, our study indicated the following.

Sampling: 78.34% of the studies in the collected articles did not specify a sampling method. From the remainder of the studies, 13 types of sampling methods were identified. These sampling methods included the broad categorisation of a sample as, for example, a probability or non-probability sample. General samples of convenience were the methods most likely to be applied (10.34%), followed by random sampling (3.51%), snowball sampling (2.73%), and purposive (1.37%) and cluster sampling (1.27%). The remainder of the sampling methods occurred to a more limited extent (0–1.0%). See Table 3 and Figure 5 for the sampling methods employed in each topic.

Table 3. Sampling use in the field of psychology.

Figure 5. Sampling method frequency in topics.

Designs were categorised based on the articles' statements thereof. It is therefore important to note that, in the case of quantitative studies, non-experimental designs (25.55%) were often indicated due to the absence of experiments and of any other indication of design, which, according to Laher (2016), is a reasonable categorisation. Non-experimental designs should thus be compared with experimental designs only in the description of the data, as they could include the use of correlational/cross-sectional designs that were not overtly stated by the authors. For the remainder of the research methods, “not stated” (7.12%) was assigned to articles without design types indicated.

Of the 36 identified designs, the most popular were experimental (25.64%) and cross-sectional (23.17%) designs, which concurred with the high number of quantitative studies. Longitudinal studies (3.80%), the third most popular design, were used in both quantitative and qualitative studies. Qualitative designs consisted of ethnography (0.38%), interpretative phenomenological designs/phenomenology (0.28%), and narrative designs (0.28%). Studies that employed the review method were mostly categorised as “not stated,” with the most often stated review design being the systematic review (0.57%). The few mixed methods studies employed exploratory, explanatory (0.09%), and concurrent designs (0.19%), with some studies referring to separate designs for the qualitative and quantitative methods. The one study that identified itself as a multi-method study used a longitudinal design. See Table 4 and Figure 6 for how these designs were employed in each specific topic.

Table 4. Design use in the field of psychology.

Figure 6. Design frequency in topics.

Data collection and analysis: data collection included 30 methods, with the method most often employed being questionnaires (57.84%). The experimental task (16.56%) was the second most preferred collection method and included established or unique tasks designed by the researchers. Cognitive ability tests (6.84%) were also regularly used, along with various forms of interviewing (7.66%). Table 5 and Figure 7 represent data collection use in the various topics. Data analysis consisted of 3,857 occurrences of data analysis, categorised into approximately 188 data analysis techniques, shown in Table 6 and Figures 1–7. Descriptive statistics were the most commonly used (23.49%), along with correlational analysis (17.19%). When using a qualitative method, researchers generally employed thematic analysis (0.52%) or other forms of analysis that led to coding and the creation of themes. Review studies presented few data analysis methods, with most studies categorising their results. Mixed methods and multi-method studies followed the analysis methods identified for the qualitative and quantitative studies included.

Table 5. Data collection in the field of psychology.

Figure 7. Data collection frequency in topics.

Table 6. Data analysis in the field of psychology.

Results for the topics researched in psychology can be seen in the tables, as previously stated in this article. It is noteworthy that, of the 10 topics, social psychology accounted for 43.54% of the studies, with cognitive psychology the second most popular research topic at 16.92%. Each of the remaining topics occurred in only 4.0–7.0% of the articles considered. A list of the 999 included articles is available under the section “View Articles” on the following website: https://methodgarden.xtrapolate.io/. This website was created by Scholtz et al. (2019) to visually present a research framework based on this article's results.

Discussion

This systematised review categorised full-length articles from five international journals across a span of 5 years to provide insight into the use of research methods in the field of psychology. The results indicated what methods are used, how these methods are being used, and for what topics (why) in the included sample of articles. The results should be seen as providing insight into method use and are by no means a comprehensive representation of the aforementioned aim, due to the limited sample. To our knowledge, this is the first research study to address this topic in this manner. Our discussion attempts to promote a productive way forward in terms of the key results for method use in psychology, especially in the field of academia (Holloway, 2008).

With regard to the methods used, our data stayed true to the literature, finding only common research methods (Grant and Booth, 2009; Maree, 2016) that varied in the degree to which they were employed. Quantitative research was found to be the most popular method, as indicated by the literature (Breen and Darlaston-Jones, 2010; Counsell and Harlow, 2017) and by previous studies in specific areas of psychology (see Coetzee and Van Zyl, 2014). Its long history as the first research method (Leech et al., 2007) in the field of psychology, as well as researchers' current application of mathematical approaches in their studies (Toomela, 2010), might contribute to its popularity today. Whatever the case may be, our results show that, despite the growth in qualitative research (Demuth, 2015; Smith and McGannon, 2018), quantitative research remains the first choice for article publication in these journals, despite the included journals indicating openness to articles that apply any research method. This finding may be due to qualitative research still being seen as a new method (Burman and Whelan, 2011) or to reviewers' standards being higher for qualitative studies (Bluhm et al., 2011). Future research into possible bias in the publication of research methods is encouraged; additionally, further investigation with a different sample into the proclaimed growth of qualitative research may provide different results.

Review studies were found to outnumber multi-method and mixed methods studies. To this effect, Grant and Booth (2009) state that increased awareness, journal calls for contributions, and the efficiency of reviews in procuring research funds all promote their popularity. The low frequency of mixed methods studies contradicts the view in the literature that it is the third most utilised research method (Tashakkori and Teddlie, 2003). Its low occurrence in this sample could be due to opposing views on mixing methods (Gunasekare, 2015), to authors preferring to publish in mixed methods journals when using this method, or to its relative novelty (Ivankova et al., 2016). Despite its low occurrence, the application of the mixed methods design was methodologically clear in all cases, which was not the case for the remainder of the research methods.

Additionally, a substantial number of studies used a combination of methodologies without being mixed or multi-method studies. According to the literature, perceived fixed boundaries are often set aside, as confirmed by this result, in order to investigate the aim of a study, which could create a new and helpful way of understanding the world (Gunasekare, 2015). According to Toomela (2010), this is not unheard of and could be considered a form of “structural systemic science,” as in the case of qualitative methodology (observation) applied in quantitative studies (experimental design), for example. Based on this result, further research into this phenomenon, as well as its implications for research methods such as multi-method and mixed methods research, is recommended.

Discerning how these research methods were applied presented some difficulty. In the case of sampling, most studies, regardless of method, did mention some form of inclusion and exclusion criteria, but no definite sampling method. This result, along with the fact that samples often consisted of students from the researchers' own academic institutions, can contribute to the literature and to debates among academics (Peterson and Merunka, 2014; Laher, 2016). Samples of convenience, and students as participants especially, raise questions about the generalisability and applicability of results (Peterson and Merunka, 2014). Attention to sampling is important, as inappropriate sampling can undermine the legitimacy of interpretations (Onwuegbuzie and Collins, 2017). Future investigation into the possible implications of this reported popular use of convenience samples for the field of psychology, as well as the reasons for this use, could provide interesting insight, and is encouraged by this study.

Additionally, as indicated in Table 6, articles seldom report the research designs used, which highlights a pressing lack of rigour in the included sample. Rigour with regard to the applied empirical method is imperative in promoting psychology as a science (American Psychological Association, 2020). Omitting parts of the research process in publication, when they could have been used to inform others' research skills, should be questioned, and the influence on the process of replicating results should be considered. Publications are often rejected due to a lack of rigour in the applied method and designs (Fonseca, 2013; Laher, 2016), calling for increased clarity and knowledge of method application. Replication is a critical part of any field of scientific research and requires the “complete articulation” of the study methods used (Drotar, 2010, p. 804). The lack of thorough description could be explained by the requirements of certain journals to report only on certain aspects of a research process, especially with regard to the applied design (Laher, 2016). However, naming aspects such as sampling and design is a requirement according to the APA's Journal Article Reporting Standards (JARS-Quant) (Appelbaum et al., 2018). With very little information on how a study was conducted, authors lose a valuable opportunity to enhance research validity, enrich the knowledge of others, and contribute to the growth of psychology and methodology as a whole. In the case of this research study, it also restricted our results to reported samples and designs only, which indicated a preference for certain designs, such as cross-sectional designs for quantitative studies.

Data collection and analysis were for the most part clearly stated. A key result was the versatile use of questionnaires: researchers applied questionnaires in various ways across most research methods, for example in questionnaire interviews, online surveys, and written questionnaires. This may highlight a trend for future research.

With regard to the topics these methods were employed for, our research study found a new field, named “psychological practice.” This result may reflect the growing consciousness of researchers as part of the research process (Denzin and Lincoln, 2003), psychological practice, and knowledge generation. The most popular of these topics was social psychology, which is generously covered in journals and by learned societies, a testament to the institutional support and richness social psychology has in the field of psychology (Chryssochoou, 2015). The APA's perspective on 2018 trends in psychology also identifies an increased focus on how social determinants influence people's health (Deangelis, 2017).

This study was not without limitations, and the following should be taken into account. Firstly, this study used a sample of five specific journals to address the aim of the research study; despite the general aims of these journals (as stated on their websites), this selection signified a bias towards the research methods published in these specific journals only and limited generalisability. A broader sample of journals over a different period of time, or a single journal over a longer period of time, might provide different results. A second limitation is the use of Excel spreadsheets and an electronic system to log articles, which was a manual process and therefore left room for error (Bandara et al., 2015). To address this potential issue, co-coding was performed to reduce error. Lastly, this article categorised data based on the information presented in the article sample; there was no interpretation of what methodology could have been applied or of whether the stated methods adhered to the criteria for those methods. Thus, the large number of articles that did not clearly indicate a research method or design could influence the results of this review; however, this in itself was also a noteworthy result. Future research could review the research methods of a broader sample of journals with an interpretive review tool that increases rigour. Additionally, the authors encourage the future use of systematised review designs as a way to promote a concise procedure in applying this design.

Conclusion

Our research study presented the use of research methods in published articles in the field of psychology, as well as recommendations for future research based on these results. Insight was gained into the complex questions identified in the literature regarding what methods are used, how these methods are being used, and for what topics (why). This sample preferred quantitative methods, used convenience sampling, and presented a lack of rigorous accounts of the remaining methodologies. All methodologies that were clearly indicated in the sample were tabulated to allow researchers insight into the general use of methods, and not only the most frequently used methods. The lack of a rigorous account of research methods in articles was represented in depth for each step in the research process and can be of vital importance in addressing the current replication crisis within the field of psychology. Recommendations for future research aim to motivate research into the practical implications of these results for psychology, for example, publication bias and the use of convenience samples.

Ethics Statement

This study was cleared by the North-West University Health Research Ethics Committee: NWU-00115-17-S1.

Author Contributions

All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  • Aanstoos C. M. (2014). Psychology . Available online at: http://eds.a.ebscohost.com.nwulib.nwu.ac.za/eds/detail/detail?sid=18de6c5c-2b03-4eac-94890145eb01bc70%40sessionmgr4006&vid$=$1&hid$=$4113&bdata$=$JnNpdGU9ZWRzL~WxpdmU%3d#AN$=$93871882&db$=$ers
  • American Psychological Association (2020). Science of Psychology . Available online at: https://www.apa.org/action/science/
  • Appelbaum M., Cooper H., Kline R. B., Mayo-Wilson E., Nezu A. M., Rao S. M. (2018). Journal article reporting standards for quantitative research in psychology: the APA Publications and Communications Board task force report . Am. Psychol. 73 :3. 10.1037/amp0000191 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Bandara W., Furtmueller E., Gorbacheva E., Miskon S., Beekhuyzen J. (2015). Achieving rigor in literature reviews: insights from qualitative data analysis and tool-support . Commun. Ass. Inform. Syst. 37 , 154–204. 10.17705/1CAIS.03708 [ CrossRef ] [ Google Scholar ]
  • Barr-Walker J. (2017). Evidence-based information needs of public health workers: a systematized review . J. Med. Libr. Assoc. 105 , 69–79. 10.5195/JMLA.2017.109 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Bittermann A., Fischer A. (2018). How to identify hot topics in psychology using topic modeling . Z. Psychol. 226 , 3–13. 10.1027/2151-2604/a000318 [ CrossRef ] [ Google Scholar ]
  • Bluhm D. J., Harman W., Lee T. W., Mitchell T. R. (2011). Qualitative research in management: a decade of progress . J. Manage. Stud. 48 , 1866–1891. 10.1111/j.1467-6486.2010.00972.x [ CrossRef ] [ Google Scholar ]
  • Breen L. J., Darlaston-Jones D. (2010). Moving beyond the enduring dominance of positivism in psychological research: implications for psychology in Australia . Aust. Psychol. 45 , 67–76. 10.1080/00050060903127481 [ CrossRef ] [ Google Scholar ]
  • Burman E., Whelan P. (2011). Problems in / of Qualitative Research . Maidenhead: Open University Press/McGraw Hill. [ Google Scholar ]
  • Chaichanasakul A., He Y., Chen H., Allen G. E. K., Khairallah T. S., Ramos K. (2011). Journal of Career Development: a 36-year content analysis (1972–2007) . J. Career. Dev. 38 , 440–455. 10.1177/0894845310380223 [ CrossRef ] [ Google Scholar ]
  • Chryssochoou X. (2015). Social Psychology . Inter. Encycl. Soc. Behav. Sci. 22 , 532–537. 10.1016/B978-0-08-097086-8.24095-6 [ CrossRef ] [ Google Scholar ]
  • Cichocka A., Jost J. T. (2014). Stripped of illusions? Exploring system justification processes in capitalist and post-Communist societies . Inter. J. Psychol. 49 , 6–29. 10.1002/ijop.12011 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Clay R. A. (2017). Psychology is More Popular Than Ever. Monitor on Psychology: Trends Report . Available online at: https://www.apa.org/monitor/2017/11/trends-popular
  • Coetzee M., Van Zyl L. E. (2014). A review of a decade's scholarly publications (2004–2013) in the South African Journal of Industrial Psychology . SA. J. Psychol . 40 , 1–16. 10.4102/sajip.v40i1.1227 [ CrossRef ] [ Google Scholar ]
  • Counsell A., Harlow L. (2017). Reporting practices and use of quantitative methods in Canadian journal articles in psychology . Can. Psychol. 58 , 140–147. 10.1037/cap0000074 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Deangelis T. (2017). Targeting Social Factors That Undermine Health. Monitor on Psychology: Trends Report . Available online at: https://www.apa.org/monitor/2017/11/trend-social-factors
  • Demuth C. (2015). New directions in qualitative research in psychology . Integr. Psychol. Behav. Sci. 49 , 125–133. 10.1007/s12124-015-9303-9 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Denzin N. K., Lincoln Y. (2003). The Landscape of Qualitative Research: Theories and Issues , 2nd Edn. London: Sage. [ Google Scholar ]
  • Drotar D. (2010). A call for replications of research in pediatric psychology and guidance for authors . J. Pediatr. Psychol. 35 , 801–805. 10.1093/jpepsy/jsq049 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Dweck C. S. (2017). Is psychology headed in the right direction? Yes, no, and maybe . Perspect. Psychol. Sci. 12 , 656–659. 10.1177/1745691616687747 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Earp B. D., Trafimow D. (2015). Replication, falsification, and the crisis of confidence in social psychology . Front. Psychol. 6 :621. 10.3389/fpsyg.2015.00621 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ezeh A. C., Izugbara C. O., Kabiru C. W., Fonn S., Kahn K., Manderson L., et al.. (2010). Building capacity for public and population health research in Africa: the consortium for advanced research training in Africa (CARTA) model . Glob. Health Action 3 :5693. 10.3402/gha.v3i0.5693 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ferreira A. L. L., Bessa M. M. M., Drezett J., De Abreu L. C. (2016). Quality of life of the woman carrier of endometriosis: systematized review . Reprod. Clim. 31 , 48–54. 10.1016/j.recli.2015.12.002 [ CrossRef ] [ Google Scholar ]
  • Fonseca M. (2013). Most Common Reasons for Journal Rejections . Available online at: http://www.editage.com/insights/most-common-reasons-for-journal-rejections
  • Gough B., Lyons A. (2016). The future of qualitative research in psychology: accentuating the positive . Integr. Psychol. Behav. Sci. 50 , 234–243. 10.1007/s12124-015-9320-8 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Grant M. J., Booth A. (2009). A typology of reviews: an analysis of 14 review types and associated methodologies . Health Info. Libr. J. 26 , 91–108. 10.1111/j.1471-1842.2009.00848.x [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Grix J. (2002). Introducing students to the generic terminology of social research . Politics 22 , 175–186. 10.1111/1467-9256.00173 [ CrossRef ] [ Google Scholar ]
  • Gunasekare U. L. T. P. (2015). Mixed research method as the third research paradigm: a literature review . Int. J. Sci. Res. 4 , 361–368. Available online at: https://ssrn.com/abstract=2735996 [ Google Scholar ]
  • Hengartner M. P. (2018). Raising awareness for the replication crisis in clinical psychology by focusing on inconsistencies in psychotherapy Research: how much can we rely on published findings from efficacy trials? Front. Psychol. 9 :256. 10.3389/fpsyg.2018.00256 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Holloway W. (2008). Doing intellectual disagreement differently . Psychoanal. Cult. Soc. 13 , 385–396. 10.1057/pcs.2008.29 [ CrossRef ] [ Google Scholar ]
  • Ivankova N. V., Creswell J. W., Plano Clark V. L. (2016). Foundations and Approaches to mixed methods research , in First Steps in Research , 2nd Edn. K. Maree (Pretoria: Van Schaick Publishers; ), 306–335. [ Google Scholar ]
  • Johnson M., Long T., White A. (2001). Arguments for British pluralism in qualitative health research . J. Adv. Nurs. 33 , 243–249. 10.1046/j.1365-2648.2001.01659.x [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Johnston A., Kelly S. E., Hsieh S. C., Skidmore B., Wells G. A. (2019). Systematic reviews of clinical practice guidelines: a methodological guide . J. Clin. Epidemiol. 108 , 64–72. 10.1016/j.jclinepi.2018.11.030 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ketchen D. J., Jr., Boyd B. K., Bergh D. D. (2008). Research methodology in strategic management: past accomplishments and future challenges . Organ. Res. Methods 11 , 643–658. 10.1177/1094428108319843 [ CrossRef ] [ Google Scholar ]
  • Ktepi B. (2016). Data Analytics (DA) . Available online at: https://eds-b-ebscohost-com.nwulib.nwu.ac.za/eds/detail/detail?vid=2&sid=24c978f0-6685-4ed8-ad85-fa5bb04669b9%40sessionmgr101&bdata=JnNpdGU9ZWRzLWxpdmU%3d#AN=113931286&db=ers
  • Laher S. (2016). Ostinato rigore: establishing methodological rigour in quantitative research . S. Afr. J. Psychol. 46 , 316–327. 10.1177/0081246316649121 [ CrossRef ] [ Google Scholar ]
  • Lee C. (2015). The Myth of the Off-Limits Source . Available online at: http://blog.apastyle.org/apastyle/research/
  • Lee T. W., Mitchell T. R., Sablynski C. J. (1999). Qualitative research in organizational and vocational psychology, 1979–1999 . J. Vocat. Behav. 55 , 161–187. 10.1006/jvbe.1999.1707 [ CrossRef ] [ Google Scholar ]
  • Leech N. L., Onwuegbuzie A. J. (2007). A typology of mixed methods research designs . Qual. Quant. 43 , 265–275. 10.1007/s11135-007-9105-3 [ CrossRef ] [ Google Scholar ]
  • Levitt H. M., Motulsky S. L., Wertz F. J., Morrow S. L., Ponterotto J. G. (2017). Recommendations for designing and reviewing qualitative research in psychology: promoting methodological integrity . Qual. Psychol. 4 , 2–22. 10.1037/qup0000082 [ CrossRef ] [ Google Scholar ]
  • Lowe S. M., Moore S. (2014). Social networks and female reproductive choices in the developing world: a systematized review . Rep. Health 11 :85. 10.1186/1742-4755-11-85 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Maree K. (2016). Planning a research proposal , in First Steps in Research , 2nd Edn, ed Maree K. (Pretoria: Van Schaik Publishers; ), 49–70. [ Google Scholar ]
  • Maree K., Pietersen J. (2016). Sampling , in First Steps in Research, 2nd Edn , ed Maree K. (Pretoria: Van Schaik Publishers; ), 191–202. [ Google Scholar ]
  • Ngulube P. (2013). Blending qualitative and quantitative research methods in library and information science in sub-Saharan Africa . ESARBICA J. 32 , 10–23. Available online at: http://hdl.handle.net/10500/22397 . [ Google Scholar ]
  • Nieuwenhuis J. (2016). Qualitative research designs and data-gathering techniques , in First Steps in Research , 2nd Edn, ed Maree K. (Pretoria: Van Schaik Publishers; ), 71–102. [ Google Scholar ]
  • Nind M., Kilburn D., Wiles R. (2015). Using video and dialogue to generate pedagogic knowledge: teachers, learners and researchers reflecting together on the pedagogy of social research methods . Int. J. Soc. Res. Methodol. 18 , 561–576. 10.1080/13645579.2015.1062628 [ CrossRef ] [ Google Scholar ]
  • O'Cathain A. (2009). Editorial: mixed methods research in the health sciences—a quiet revolution . J. Mix. Methods 3 , 1–6. 10.1177/1558689808326272 [ CrossRef ] [ Google Scholar ]
  • O'Neil S., Koekemoer E. (2016). Two decades of qualitative research in psychology, industrial and organisational psychology and human resource management within South Africa: a critical review . SA J. Indust. Psychol. 42 , 1–16. 10.4102/sajip.v42i1.1350 [ CrossRef ] [ Google Scholar ]
  • Onwuegbuzie A. J., Collins K. M. (2017). The role of sampling in mixed methods research enhancing inference quality . Köln Z Soziol. 2 , 133–156. 10.1007/s11577-017-0455-0 [ CrossRef ] [ Google Scholar ]
  • Perestelo-Pérez L. (2013). Standards on how to develop and report systematic reviews in psychology and health . Int. J. Clin. Health Psychol. 13 , 49–57. 10.1016/S1697-2600(13)70007-3 [ CrossRef ] [ Google Scholar ]
  • Pericall L. M. T., Taylor E. (2014). Family function and its relationship to injury severity and psychiatric outcome in children with acquired brain injury: a systematized review . Dev. Med. Child Neurol. 56 , 19–30. 10.1111/dmcn.12237 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Peterson R. A., Merunka D. R. (2014). Convenience samples of college students and research reproducibility . J. Bus. Res. 67 , 1035–1041. 10.1016/j.jbusres.2013.08.010 [ CrossRef ] [ Google Scholar ]
  • Ritchie J., Lewis J., Elam G. (2009). Designing and selecting samples , in Qualitative Research Practice: A Guide for Social Science Students and Researchers , 2nd Edn, ed Ritchie J., Lewis J. (London: Sage; ), 1–23. [ Google Scholar ]
  • Sandelowski M. (2011). When a cigar is not just a cigar: alternative perspectives on data and data analysis . Res. Nurs. Health 34 , 342–352. 10.1002/nur.20437 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Sandelowski M., Voils C. I., Knafl G. (2009). On quantitizing . J. Mix. Methods Res. 3 , 208–222. 10.1177/1558689809334210 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Scholtz S. E., De Klerk W., De Beer L. T. (2019). A data generated research framework for conducting research methods in psychological research .
  • Scimago Journal & Country Rank (2017). Available online at: http://www.scimagojr.com/journalrank.php?category=3201&year=2015
  • Scopus (2017a). About Scopus . Available online at: https://www.scopus.com/home.uri (accessed February 01, 2017).
  • Scopus (2017b). Document Search . Available online at: https://www.scopus.com/home.uri (accessed February 01, 2017).
  • Scott Jones J., Goldring J. E. (2015). 'I'm not a quants person': key strategies in building competence and confidence in staff who teach quantitative research methods . Int. J. Soc. Res. Methodol. 18 , 479–494. 10.1080/13645579.2015.1062623 [ CrossRef ] [ Google Scholar ]
  • Smith B., McGannon K. R. (2018). Developing rigor in quantitative research: problems and opportunities within sport and exercise psychology . Int. Rev. Sport Exerc. Psychol. 11 , 101–121. 10.1080/1750984X.2017.1317357 [ CrossRef ] [ Google Scholar ]
  • Stangor C. (2011). Introduction to Psychology . Available online at: http://www.saylor.org/books/
  • Strydom H. (2011). Sampling in the quantitative paradigm , in Research at Grass Roots; For the Social Sciences and Human Service Professions , 4th Edn, eds de Vos A. S., Strydom H., Fouché C. B., Delport C. S. L. (Pretoria: Van Schaik Publishers; ), 221–234. [ Google Scholar ]
  • Tashakkori A., Teddlie C. (2003). Handbook of Mixed Methods in Social & Behavioural Research . Thousand Oaks, CA: SAGE publications. [ Google Scholar ]
  • Toomela A. (2010). Quantitative methods in psychology: inevitable and useless . Front. Psychol. 1 :29. 10.3389/fpsyg.2010.00029 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Truscott D. M., Swars S., Smith S., Thornton-Reid F., Zhao Y., Dooley C., et al.. (2010). A cross-disciplinary examination of the prevalence of mixed methods in educational research: 1995–2005 . Int. J. Soc. Res. Methodol. 13 , 317–328. 10.1080/13645570903097950 [ CrossRef ] [ Google Scholar ]
  • Weiten W. (2010). Psychology Themes and Variations . Belmont, CA: Wadsworth. [ Google Scholar ]


Chapter 1: The Science of Psychology

Scientific Research in Psychology

Learning Objectives

  • Describe a general model of scientific research in psychology and give specific examples that fit the model.
  • Explain who conducts scientific research in psychology and why they do it.
  • Distinguish between basic research and applied research.

A Model of Scientific Research in Psychology

Figure 1.1 presents a model of scientific research in psychology. The researcher (who more often than not is really a small group of researchers) formulates a research question, conducts a study designed to answer the question, analyzes the resulting data, draws conclusions about the answer to the question, and publishes the results so that they become part of the research literature. Because the research literature is one of the primary sources of new research questions, this process can be thought of as a cycle: new research leads to new questions, which lead to new research, and so on. Figure 1.1 also indicates that research questions can originate outside of this cycle, either with informal observations or with practical problems that need to be solved. But even in these cases, the researcher would start by checking the research literature to see if the question had already been answered and to refine it based on what previous research had already found.

""

The research by Mehl and his colleagues is described nicely by this model. Their question—whether women are more talkative than men—was suggested to them both by people’s stereotypes and by published claims about the relative talkativeness of women and men. When they checked the research literature, however, they found that this question had not been adequately addressed in scientific studies. They then conducted a careful empirical study, analyzed the results (finding very little difference between women and men), and published their work so that it became part of the research literature. The publication of their article is not the end of the story, however, because their work suggests many new questions (about the reliability of the result, about potential cultural differences, etc.) that will likely be taken up by them and by other researchers inspired by their work.


As another example, consider that as cell phones became more widespread during the 1990s, people began to wonder whether, and to what extent, cell phone use had a negative effect on driving. Many psychologists decided to tackle this question scientifically (Collet, Guillot, & Petit, 2010). It was clear from previously published research that engaging in a simple verbal task impairs performance on a perceptual or motor task carried out at the same time, but no one had studied the effect specifically of cell phone use on driving. Under carefully controlled conditions, these researchers compared people’s driving performance while using a cell phone with their performance while not using a cell phone, both in the lab and on the road. They found that people’s ability to detect road hazards, reaction time, and control of the vehicle were all impaired by cell phone use. Each new study was published and became part of the growing research literature on this topic.

Who Conducts Scientific Research in Psychology?

Scientific research in psychology is generally conducted by people with doctoral degrees (usually the doctor of philosophy [PhD]) and master’s degrees in psychology and related fields, often supported by research assistants with bachelor’s degrees or other relevant training. Some of them work for government agencies (e.g., the Mental Health Commission of Canada), national associations (e.g., the Canadian Psychological Association), nonprofit organizations (e.g., the Canadian Mental Health Association), or in the private sector (e.g., in product development). However, the majority of them are college and university faculty, who often collaborate with their graduate and undergraduate students. Although some researchers are trained and licensed as clinicians—especially those who conduct research in clinical psychology—the majority are not. Instead, they have expertise in one or more of the many other subfields of psychology: behavioural neuroscience, cognitive psychology, developmental psychology, personality psychology, social psychology, and so on. Doctoral-level researchers might be employed to conduct research full-time or, like many college and university faculty members, to conduct research in addition to teaching classes and serving their institution and community in other ways.

Of course, people also conduct research in psychology because they enjoy the intellectual and technical challenges involved and the satisfaction of contributing to scientific knowledge of human behaviour. You might find that you enjoy the process too. If so, your college or university might offer opportunities to get involved in ongoing research as either a research assistant or a participant. Of course, you might find that you do not enjoy the process of conducting scientific research in psychology. But at least you will have a better understanding of where scientific knowledge in psychology comes from, an appreciation of its strengths and limitations, and an awareness of how it can be applied to solve practical problems in psychology and everyday life.

Scientific Psychology Blogs

A fun and easy way to follow current scientific research in psychology is to read any of the many excellent blogs devoted to summarizing and commenting on new findings.

Among them are the following:

  • Brain Blogger
  • Research Digest
  • Social Psychology Eye
  • We’re Only Human

You can also browse through Research Blogging, select psychology as your topic, and read entries from a wide variety of blogs.

The Broader Purposes of Scientific Research in Psychology

People have always been curious about the natural world, including themselves and their behaviour (in fact, this is probably why you are studying psychology in the first place). Science grew out of this natural curiosity and has become the best way to achieve detailed and accurate knowledge. Keep in mind that most of the phenomena and theories that fill psychology textbooks are the products of scientific research. In a typical introductory psychology textbook, for example, one can learn about specific cortical areas for language and perception, principles of classical and operant conditioning, biases in reasoning and judgment, and people’s surprising tendency to obey those in positions of authority. And scientific research continues because what we know right now only scratches the surface of what we can know.

Scientific research is often classified as being either basic or applied. Basic research in psychology is conducted primarily for the sake of achieving a more detailed and accurate understanding of human behaviour, without necessarily trying to address any particular practical problem. The research of Mehl and his colleagues falls into this category. Applied research is conducted primarily to address some practical problem. Research on the effects of cell phone use on driving, for example, was prompted by safety concerns and has led to the enactment of laws to limit this practice. Although the distinction between basic and applied research is convenient, it is not always clear-cut. For example, basic research on sex differences in talkativeness could eventually have an effect on how marriage therapy is practiced, and applied research on the effect of cell phone use on driving could produce new insights into basic processes of perception, attention, and action.

Key Takeaways

  • Research in psychology can be described by a simple cyclical model. A research question based on the research literature leads to an empirical study, the results of which are published and become part of the research literature.
  • Scientific research in psychology is conducted mainly by people with doctoral degrees in psychology and related fields, most of whom are college and university faculty members. They do so for professional and for personal reasons, as well as to contribute to scientific knowledge about human behaviour.
  • Basic research is conducted to learn about human behaviour for its own sake, and applied research is conducted to solve some practical problem. Both are valuable, and the distinction between the two is not always clear-cut.

Exercises

  • Practice: Find a description of an empirical study in a professional journal or in one of the scientific psychology blogs. Then write a brief description of the research in terms of the cyclical model presented here. One or two sentences for each part of the cycle should suffice.
  • Practice: Based on your own experience or on things you have already learned about psychology, list three basic research questions and three applied research questions of interest to you.
  • Watch the following TED-Ed video, in which David H. Schwartz provides an introduction to two types of empirical studies along with some methods that scientists use to increase the reliability of their results:

Video Attributions

  • “Understanding driver distraction” by American Psychological Association. Standard YouTube Licence.
  • “Not all scientific studies are created equal – David H. Schwartz” by TED-Ed. Standard YouTube Licence.

Notes

  1. Collet, C., Guillot, A., & Petit, C. (2010). Phoning while driving I: A review of epidemiological, psychological, behavioural and physiological studies. Ergonomics, 53, 589–601.

Glossary

PhD (doctor of philosophy): A doctoral degree generally held by people who conduct scientific research in psychology.

Basic research: In psychology, research conducted for the sake of achieving a more detailed and accurate understanding of human behaviour, without necessarily trying to address any particular problem.

Applied research: Research conducted primarily to address some practical problem.

Research Methods in Psychology - 2nd Canadian Edition Copyright © 2015 by Paul C. Price, Rajiv Jhangiani, & I-Chant A. Chiang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.

2.5: Conducting Psychology Research in the Real World

  • Matthias R. Mehl, University of Arizona
  • https://nobaproject.com/ via The Noba Project

Because of its ability to determine cause-and-effect relationships, the laboratory experiment is traditionally considered the method of choice for psychological science. One downside, however, is that as it carefully controls conditions and their effects, it can yield findings that are out of touch with reality and have limited use when trying to understand real-world behavior. This module highlights the importance of also conducting research outside the psychology laboratory, within participants’ natural, everyday environments, and reviews existing methodologies for studying daily life.

Learning Objectives

  • Identify limitations of the traditional laboratory experiment.
  • Explain ways in which daily life research can further psychological science.
  • Know what methods exist for conducting psychological research in the real world.

Introduction

The laboratory experiment is traditionally considered the “gold standard” in psychology research. This is because only laboratory experiments can clearly separate cause from effect and therefore establish causality. Despite this unique strength, it is also clear that a scientific field that is mainly based on controlled laboratory studies ends up lopsided. Specifically, it accumulates a lot of knowledge on what can happen—under carefully isolated and controlled circumstances—but it has little to say about what actually does happen under the circumstances that people actually encounter in their daily lives.

An experimenter sits at a table across from a young girl who is a participant in a laboratory experiment.

For example, imagine you are a participant in an experiment that looks at the effect of being in a good mood on generosity, a topic that may have a good deal of practical application. Researchers create an internally valid, carefully controlled experiment where they randomly assign you to watch either a happy movie or a neutral movie, and then you are given the opportunity to help the researcher out by staying longer and participating in another study. If people in a good mood are more willing to stay and help out, the researchers can feel confident that – since everything else was held constant – your positive mood led you to be more helpful. However, what does this tell us about helping behaviors in the real world? Does it generalize to other kinds of helping, such as donating money to a charitable cause? Would all kinds of happy movies produce this behavior, or only this one? What about other positive experiences that might boost mood, like receiving a compliment or a good grade? And what if you were watching the movie with friends, in a crowded theatre, rather than in a sterile research lab? Taking research out into the real world can help answer some of these sorts of important questions.

As one of the founding fathers of social psychology remarked, “Experimentation in the laboratory occurs, socially speaking, on an island quite isolated from the life of society” (Lewin, 1944, p. 286). This module highlights the importance of going beyond experimentation and also conducting research outside the laboratory (Reis & Gosling, 2010), directly within participants’ natural environments, and reviews existing methodologies for studying daily life.

Rationale for Conducting Psychology Research in the Real World

One important challenge researchers face when designing a study is to find the right balance between ensuring internal validity, or the degree to which a study allows unambiguous causal inferences, and external validity, or the degree to which a study ensures that potential findings apply to settings and samples other than the ones being studied (Brewer, 2000). Unfortunately, these two kinds of validity tend to be difficult to achieve at the same time, in one study. This is because creating a controlled setting, in which all potentially influential factors (other than the experimentally manipulated variable) are controlled, is bound to create an environment that is quite different from what people naturally encounter (e.g., using a happy movie clip to promote helpful behavior). However, it is the degree to which an experimental situation is comparable to the corresponding real-world situation of interest that determines how generalizable potential findings will be. In other words, if an experiment is very far off from what a person might normally experience in everyday life, you might reasonably question just how useful its findings are.

Because of the incompatibility of the two types of validity, one is often—by design—prioritized over the other. Due to the importance of identifying true causal relationships, psychology has traditionally emphasized internal over external validity. However, in order to make claims about human behavior that apply across populations and environments, researchers complement traditional laboratory research, where participants are brought into the lab, with field research where, in essence, the psychological laboratory is brought to participants. Field studies allow for the important test of how psychological variables and processes of interest “behave” under real-world circumstances (i.e., what actually does happen rather than what can happen). They can also facilitate “downstream” operationalizations of constructs that measure life outcomes of interest directly rather than indirectly.

Take, for example, the fascinating field of psychoneuroimmunology, where the goal is to understand the interplay of psychological factors, such as personality traits or one’s stress level, and the immune system. Highly sophisticated and carefully controlled experiments offer ways to isolate the variety of neural, hormonal, and cellular mechanisms that link psychological variables such as chronic stress to biological outcomes such as immunosuppression (a state of impaired immune functioning; Sapolsky, 2004). Although these studies demonstrate impressively how psychological factors can affect health-relevant biological processes, they—because of their research design—remain mute about the degree to which these factors actually do undermine people’s everyday health in real life. It is certainly important to show that laboratory stress can alter the number of natural killer cells in the blood. But it is equally important to test to what extent the levels of stress that people experience on a day-to-day basis result in them catching a cold more often or taking longer to recover from one. The goal for researchers, therefore, must be to complement traditional laboratory experiments with less controlled studies under real-world circumstances. The term ecological validity is used to refer to the degree to which an effect has been obtained under conditions that are typical for what happens in everyday life (Brewer, 2000). In this example, then, people might keep a careful daily log of how much stress they are under as well as noting physical symptoms such as headaches or nausea. Although many factors beyond stress level may be responsible for these symptoms, this more correlational approach can shed light on how the relationship between stress and health plays out outside of the laboratory.
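
To make the correlational logic concrete, here is a minimal sketch in Python (the daily ratings are invented for illustration, and a simple Pearson correlation stands in for the more elaborate multilevel analyses such studies typically use):

    # Toy daily-log data: one participant's self-rated stress (0-10) and
    # count of physical symptoms (headaches, nausea, ...) for ten days.
    # All values are invented for illustration.
    stress   = [2, 5, 3, 7, 4, 6, 1, 8, 5, 3]
    symptoms = [0, 2, 1, 3, 1, 2, 0, 4, 2, 1]

    def pearson_r(x, y):
        """Pearson correlation between two equal-length lists."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    print(f"daily stress-symptom correlation: r = {pearson_r(stress, symptoms):.2f}")

A positive r here would be consistent with, though never proof of, everyday stress undermining health, precisely because the design is correlational rather than experimental.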

An Overview of Research Methods for Studying Daily Life

Capturing “life as it is lived” has been a strong goal for some researchers for a long time. Wilhelm and his colleagues recently published a comprehensive review of early attempts to systematically document daily life (Wilhelm, Perrez, & Pawlik, 2012). Building onto these original methods, researchers have, over the past decades, developed a broad toolbox for measuring experiences, behavior, and physiology directly in participants’ daily lives (Mehl & Conner, 2012). Figure 1 provides a schematic overview of the methodologies described below.

A diagram showing five research methods for studying daily life - sampling daily behavior, sampling daily experiences, sampling daily psychology, collecting usage data via smartphones, and sampling online behavior.

Studying Daily Experiences

Starting in the mid-1970s, motivated by a growing skepticism toward highly-controlled laboratory studies, a few groups of researchers developed a set of new methods that are now commonly known as the experience-sampling method (Hektner, Schmidt, & Csikszentmihalyi, 2007), ecological momentary assessment (Stone & Shiffman, 1994), or the diary method (Bolger, Davis, & Rafaeli, 2003). Although variations within this set of methods exist, the basic idea behind all of them is to collect in-the-moment (or, close-to-the-moment) self-report data directly from people as they go about their daily lives. This is typically accomplished by asking participants repeatedly (e.g., five times per day) over a period of time (e.g., a week) to report on their current thoughts and feelings. The momentary questionnaires often ask about their location (e.g., “Where are you now?”), social environment (e.g., “With whom are you now?”), activity (e.g., “What are you currently doing?”), and experiences (e.g., “How are you feeling?”). That way, researchers get a snapshot of what was going on in participants’ lives at the time at which they were asked to report.
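
The signaling logic behind these protocols is straightforward to sketch. The following Python toy (the five-prompts-per-day figure is borrowed from the example above; the 9:00–21:00 waking window is an added assumption, not a prescription from the literature) draws random prompt times for a one-week study:

    import random
    from datetime import datetime, timedelta

    PROMPTS_PER_DAY = 5            # e.g., five signals per day
    WAKE_HOUR, SLEEP_HOUR = 9, 21  # assumed signaling window

    def daily_prompts(day_start):
        """Return sorted, seemingly random prompt times for one day."""
        window_min = (SLEEP_HOUR - WAKE_HOUR) * 60
        offsets = random.sample(range(window_min), PROMPTS_PER_DAY)
        return sorted(day_start + timedelta(hours=WAKE_HOUR, minutes=m)
                      for m in offsets)

    study_start = datetime(2024, 5, 6)
    for d in range(7):  # a one-week protocol
        for t in daily_prompts(study_start + timedelta(days=d)):
            print(t.strftime("%a %H:%M"),
                  "-> Where are you? With whom? What are you doing? How are you feeling?")

Because the prompts are unpredictable from the participant's point of view, responses sample moments of daily life rather than moments the participant chooses to report.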

Technology has made this sort of research possible, and recent technological advances have expanded the tools researchers are able to use. Initially, participants wore electronic wristwatches that beeped at preprogrammed but seemingly random times, at which they completed one of a stack of provided paper questionnaires. With the mobile computing revolution, both the prompting and the questionnaire completion were gradually replaced by handheld devices such as smartphones. Being able to collect the momentary questionnaires digitally and time-stamped (i.e., having a record of exactly when participants responded) had major methodological and practical advantages and contributed to experience sampling going mainstream (Conner, Tennen, Fleeson, & Barrett, 2009).

A woman sits at the counter of a coffee shop while using her smartphone.

Over time, experience sampling and related momentary self-report methods have become very popular, and, by now, they are effectively the gold standard for studying daily life. They have helped make progress in almost all areas of psychology (Mehl & Conner, 2012). These methods yield many measurements from many participants, which has further inspired the development of novel statistical methods (Bolger & Laurenceau, 2013). Finally, and maybe most importantly, they accomplished what they set out to accomplish: to bring attention to what psychology ultimately wants and needs to know about, namely “what people actually do, think, and feel in the various contexts of their lives” (Funder, 2001, p. 213). In short, these approaches have allowed researchers to do research that is more externally valid, or more generalizable to real life, than the traditional laboratory experiment.

To illustrate these techniques, consider a classic study by Stone, Reed, and Neale (1987), who tracked positive and negative experiences surrounding a respiratory infection using daily experience sampling. They found that undesirable experiences peaked and desirable ones dipped about four to five days prior to participants coming down with the cold. More recently, Killingsworth and Gilbert (2010) collected momentary self-reports from more than 2,000 participants via a smartphone app. They found that participants were less happy when their mind was in an idling, mind-wandering state, such as surfing the Internet or multitasking at work, than when it was in an engaged, task-focused one, such as working diligently on a paper. These are just two examples that illustrate how experience-sampling studies have yielded findings that could not be obtained with traditional laboratory methods.

Recently, the day reconstruction method (DRM) (Kahneman, Krueger, Schkade, Schwarz, & Stone, 2004) has been developed to obtain information about a person’s daily experiences without the burden of collecting momentary experience-sampling data. In the DRM, participants report their experiences of a given day retrospectively on the following day, after engaging in a systematic, experiential reconstruction of that day. As a participant in this type of study, you might look back on yesterday, divide it up into a series of episodes such as “made breakfast,” “drove to work,” “had a meeting,” etc. You might then report who you were with in each episode and how you felt in each. This approach has shed light on what situations lead to moments of positive and negative mood throughout the course of a normal day.
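
In data terms, a DRM day is essentially a list of episode records. A minimal sketch (the episodes and ratings below are hypothetical) of how such reports might be summarized, here by who the person was with:

    from collections import defaultdict

    # Hypothetical reconstructed day: (episode, companions, positive affect 1-7)
    episodes = [
        ("made breakfast", "alone",     4),
        ("drove to work",  "alone",     2),
        ("had a meeting",  "coworkers", 3),
        ("had lunch",      "friend",    6),
        ("worked at desk", "alone",     4),
        ("watched TV",     "partner",   5),
    ]

    affect_by_company = defaultdict(list)
    for _, company, affect in episodes:
        affect_by_company[company].append(affect)

    for company, ratings in sorted(affect_by_company.items()):
        print(f"with {company}: mean positive affect = {sum(ratings)/len(ratings):.1f}")

Aggregating episode-level reports like this is what lets DRM studies say which situations of a normal day tend to carry positive or negative mood.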

Studying Daily Behavior

Experience sampling is often used to study everyday behavior (i.e., daily social interactions and activities). In the laboratory, behavior is best studied using direct behavioral observation (e.g., video recordings). In the real world, this is, of course, much more difficult. As Funder put it, it seems it would require a “detective’s report [that] would specify in exact detail everything the participant said and did, and with whom, in all of the contexts of the participant’s life” (Funder, 2007, p. 41).

As difficult as this may seem, Mehl and colleagues have developed a naturalistic observation methodology that is similar in spirit. Rather than following participants—like a detective—with a video camera (see Craik, 2000), they equip participants with a portable audio recorder that is programmed to periodically record brief snippets of ambient sounds (e.g., 30 seconds every 12 minutes). Participants carry the recorder (originally a microcassette recorder, now a smartphone app) on them as they go about their days and return it at the end of the study. The recorder provides researchers with a series of sound bites that, together, amount to an acoustic diary of participants’ days as they naturally unfold—and that constitute a representative sample of their daily activities and social encounters. Because it is somewhat similar to having the researcher’s ear at the participant’s lapel, they called their method the electronically activated recorder, or EAR (Mehl, Pennebaker, Crow, Dabbs, & Price, 2001). The ambient sound recordings can be coded for many things, including participants’ locations (e.g., at school, in a coffee shop), activities (e.g., watching TV, eating), interactions (e.g., in a group, on the phone), and emotional expressions (e.g., laughing, sighing). As unnatural or intrusive as it might seem, participants report that they quickly grow accustomed to the EAR and say they soon find themselves behaving as they normally would.
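
The EAR's sampling scheme is simple duty cycling, and its efficiency is easy to quantify. A back-of-the-envelope sketch using the 30-seconds-every-12-minutes figures mentioned above (the 16-hour waking day is an added assumption):

    # Duty-cycle arithmetic for an EAR-style protocol.
    SNIPPET_SEC  = 30    # length of each ambient recording
    INTERVAL_MIN = 12    # one snippet every 12 minutes
    WAKING_HOURS = 16    # assumed waking day

    snippets_per_day = WAKING_HOURS * 60 // INTERVAL_MIN
    audio_minutes    = snippets_per_day * SNIPPET_SEC / 60
    coverage         = SNIPPET_SEC / (INTERVAL_MIN * 60)

    print(f"{snippets_per_day} snippets/day, {audio_minutes:.0f} min of audio, "
          f"{coverage:.1%} of the waking day sampled")

Sampling only about 4% of the day keeps coding manageable and the method unobtrusive, while still yielding a representative acoustic diary.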

In a cross-cultural study, Ramírez-Esparza and her colleagues used the EAR method to study sociability in the United States and Mexico. Interestingly, they found that although American participants rated themselves significantly higher than Mexicans on the question, “I see myself as a person who is talkative,” they actually spent almost 10 percent less time talking than Mexicans did (Ramírez-Esparza, Mehl, Álvarez Bermúdez, & Pennebaker, 2009). In a similar way, Mehl and his colleagues used the EAR method to debunk the long-standing myth that women are considerably more talkative than men. Using data from six different studies, they showed that both sexes use on average about 16,000 words per day. The estimated sex difference of 546 words was trivial compared to the immense range of more than 46,000 words between the least and most talkative individual (695 versus 47,016 words; Mehl, Vazire, Ramírez-Esparza, Slatcher, & Pennebaker, 2007). Together, these studies demonstrate how naturalistic observation can be used to study objective aspects of daily behavior and how it can yield findings quite different from what other methods yield (Mehl, Robbins, & Deters, 2012).

A series of other methods and creative ways for assessing behavior directly and unobtrusively in the real world are described in a seminal book on real-world, subtle measures (Webb, Campbell, Schwartz, Sechrest, & Grove, 1981). For example, researchers have used time-lapse photography to study the flow of people and the use of space in urban public places (Whyte, 1980). More recently, they have observed people’s personal (e.g., dorm rooms) and professional (e.g., offices) spaces to understand how personality is expressed and detected in everyday environments (Gosling, Ko, Mannarelli, & Morris, 2002). They have even systematically collected and analyzed people’s garbage to measure what people actually consume (e.g., empty alcohol bottles or cigarette boxes) rather than what they say they consume (Rathje & Murphy, 2001). Because people often cannot and sometimes may not want to accurately report what they do, the direct—and ideally nonreactive—assessment of real-world behavior is of high importance for psychological research (Baumeister, Vohs, & Funder, 2007).

Studying Daily Physiology

In addition to studying how people think, feel, and behave in the real world, researchers are also interested in how our bodies respond to the fluctuating demands of our lives. What are the daily experiences that make our “blood boil”? How do our neurotransmitters and hormones respond to the stressors we encounter in our lives? What physiological reactions do we show to being loved—or getting ostracized? You can see how studying these powerful experiences in real life, as they actually happen, may provide richer and more informative data than one might obtain in an artificial laboratory setting that merely mimics these experiences.

A woman shouts and makes an aggressive hand gesture as she drives her car.

Also, in pursuing these questions, it is important to keep in mind that what is stressful, engaging, or boring for one person might not be so for another. It is, in part, for this reason that researchers have found only limited correspondence between how people respond physiologically to a standardized laboratory stressor (e.g., giving a speech) and how they respond to stressful experiences in their lives. To give an example, Wilhelm and Grossman (2010) describe a participant who showed rather minimal heart rate increases in response to a laboratory stressor (about five to 10 beats per minute) but quite dramatic increases (almost 50 beats per minute) later in the afternoon while watching a soccer game. Of course, the reverse pattern can happen as well, such as when patients have high blood pressure in the doctor’s office but not in their home environment—the so-called white coat hypertension (White, Schulman, McCabe, & Dey, 1989).

Ambulatory physiological monitoring – that is, monitoring physiological reactions as people go about their daily lives – has a long history in biomedical research, and an array of monitoring devices exists (Fahrenberg & Myrtek, 1996). Among the biological signals that can now be measured in daily life with portable signal recording devices are the electrocardiogram (ECG), blood pressure, electrodermal activity (or “sweat response”), body temperature, and even the electroencephalogram (EEG) (Wilhelm & Grossman, 2010). Most recently, researchers have added ambulatory assessment of hormones (e.g., cortisol) and other biomarkers (e.g., immune markers) to the list (Schlotz, 2012). The development of ever more sophisticated ways to track what goes on underneath our skins as we go about our lives is a fascinating and rapidly advancing field.

In a recent study, Lane, Zareba, Reis, Peterson, and Moss (2011) used experience sampling combined with ambulatory electrocardiography (a so-called Holter monitor) to study how emotional experiences can alter cardiac function in patients with a congenital heart abnormality (e.g., long QT syndrome). Consistent with the idea that emotions may, in some cases, be able to trigger a cardiac event, they found that typical—in most cases even relatively low intensity— daily emotions had a measurable effect on ventricular repolarization, an important cardiac indicator that, in these patients, is linked to risk of a cardiac event. In another study, Smyth and colleagues (1998) combined experience sampling with momentary assessment of cortisol, a stress hormone. They found that momentary reports of current or even anticipated stress predicted increased cortisol secretion 20 minutes later. Further, and independent of that, the experience of other kinds of negative affect (e.g., anger, frustration) also predicted higher levels of cortisol and the experience of positive affect (e.g., happy, joyful) predicted lower levels of this important stress hormone. Taken together, these studies illustrate how researchers can use ambulatory physiological monitoring to study how the little—and seemingly trivial or inconsequential—experiences in our lives leave objective, measurable traces in our bodily systems.
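
Analytically, designs like Smyth and colleagues' pair each momentary report with a biomarker sampled a fixed lag later. A minimal sketch (all values fabricated for illustration; a simple regression slope stands in for the multilevel models these studies actually fit):

    # Each pair: (momentary stress rating, salivary cortisol ~20 min later).
    # Numbers are fabricated for illustration only.
    pairs = [(1, 5.2), (4, 7.9), (2, 5.8), (5, 9.1), (3, 6.5), (6, 10.2)]

    stress   = [s for s, _ in pairs]
    cortisol = [c for _, c in pairs]

    n = len(pairs)
    ms, mc = sum(stress) / n, sum(cortisol) / n
    slope = (sum((s - ms) * (c - mc) for s, c in pairs)
             / sum((s - ms) ** 2 for s in stress))

    print(f"lagged effect: {slope:.2f} cortisol units per stress point")

The fixed lag between report and sample is the crucial design choice: it is what lets researchers speak of momentary stress predicting, rather than merely accompanying, the hormonal response.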

Studying Online Behavior

Another domain of daily life that has only recently emerged is virtual daily behavior or how people act and interact with others on the Internet. Irrespective of whether social media will turn out to be humanity’s blessing or curse (both scientists and laypeople are currently divided over this question), the fact is that people are spending an ever increasing amount of time online. In light of that, researchers are beginning to think of virtual behavior as being as serious as “actual” behavior and seek to make it a legitimate target of their investigations (Gosling & Johnson, 2010).

A computer screen displays a series of emotional social media posts with subject lines such as "Rage!!!", "I HATE ANNA!!!!!", and "It's all right : )".

One way to study virtual behavior is to make use of the fact that most of what people do on the Web—emailing, chatting, tweeting, blogging, posting— leaves direct (and permanent) verbal traces. For example, differences in the ways in which people use words (e.g., subtle preferences in word choice) have been found to carry a lot of psychological information (Pennebaker, Mehl, & Niederhoffer, 2003). Therefore, a good way to study virtual social behavior is to study virtual language behavior. Researchers can download people’s—often public—verbal expressions and communications and analyze them using modern text analysis programs (e.g., Pennebaker, Booth, & Francis, 2007).
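
At their core, such text analysis programs count how many words fall into psychologically meaningful categories. A toy version in Python (the three word lists below are placeholders; real dictionaries such as LIWC's are validated and far larger):

    import re

    # Placeholder category dictionaries; real tools use validated word lists.
    CATEGORIES = {
        "negative emotion": {"sad", "angry", "hate", "hurt", "afraid"},
        "cognitive":        {"think", "know", "question", "because", "reason"},
        "social":           {"friend", "we", "talk", "together", "support"},
    }

    def category_shares(text):
        """Fraction of words in `text` belonging to each category."""
        words = re.findall(r"[a-z']+", text.lower())
        return {cat: sum(w in vocab for w in words) / len(words)
                for cat, vocab in CATEGORIES.items()}

    post = "I think we are all sad and afraid, but together we talk it through."
    for cat, share in category_shares(post).items():
        print(f"{cat}: {share:.1%} of words")

Tracking such category shares across thousands of posts over time is what allows studies like the one described next to chart psychological change around an event.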

For example, Cohn, Mehl, and Pennebaker (2004) downloaded blogs of more than a thousand users of livejournal.com, one of the first Internet blogging sites, to study how people responded socially and emotionally to the attacks of September 11, 2001. In going “the online route,” they could bypass a critical limitation of coping research, the inability to obtain baseline information; that is, how people were doing before the traumatic event occurred. Through access to the database of public blogs, they downloaded entries from two months prior to two months after the attacks. Their linguistic analyses revealed that in the first days after the attacks, participants, as expected, expressed more negative emotions and were more cognitively and socially engaged, asking questions and sending messages of support. Within two weeks, though, their moods and social engagement returned to baseline, and, interestingly, their use of cognitive-analytic words (e.g., “think,” “question”) even dropped below their normal level. Over the next six weeks, their mood hovered around their pre-9/11 baseline, but both their social engagement and cognitive-analytic processing stayed remarkably low. This suggests a social and cognitive weariness in the aftermath of the attacks. In using virtual verbal behavior as a marker of psychological functioning, this study was able to draw a fine timeline of how humans cope with disasters.

Reflecting their rapidly growing real-world importance, researchers are now beginning to investigate behavior on social networking sites such as Facebook (Wilson, Gosling, & Graham, 2012). Most research looks at psychological correlates of online behavior such as personality traits and the quality of one’s social life but, importantly, there are also first attempts to export traditional experimental research designs into an online setting. In a pioneering study of online social influence, Bond and colleagues (2012) experimentally tested the effects that peer feedback has on voting behavior. Remarkably, their sample consisted of 61 million (!) Facebook users. They found that online political-mobilization messages (e.g., “I voted” accompanied by selected pictures of their Facebook friends) influenced real-world voting behavior. This was true not just for users who saw the messages but also for their friends and friends of their friends. Although the intervention effect on a single user was very small, through the enormous number of users and indirect social contagion effects, it resulted cumulatively in an estimated 340,000 additional votes—enough to tilt a close election. In short, although still in its infancy, research on virtual daily behavior is bound to change social science, and it has already helped us better understand both virtual and “actual” behavior.
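
The arithmetic of “tiny effect, enormous N” is worth spelling out. A back-of-the-envelope sketch (the per-user figure is simply the reported totals divided out, not an estimate taken from the paper itself):

    # How a tiny per-user effect cumulates at Facebook scale.
    users       = 61_000_000   # users in the Bond et al. (2012) experiment
    extra_votes = 340_000      # estimated additional votes (direct + contagion)

    print(f"implied average effect: {extra_votes / users:.2%} extra votes per user")
    # ~0.56%: invisible at the individual level, decisive in a close election.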

“Smartphone Psychology”?

A review of research methods for studying daily life would not be complete without a vision of “what’s next.” Given how common they have become, it is safe to predict that smartphones will not just remain devices for everyday online communication but will also become devices for scientific data collection and intervention (Kaplan & Stone, 2013; Yarkoni, 2012). These devices automatically store vast amounts of real-world user interaction data, and, in addition, they are equipped with sensors to track the physical (e.g., location, position) and social (e.g., wireless connections around the phone) context of these interactions. Miller (2012, p. 234) states, “The question is not whether smartphones will revolutionize psychology but how, when, and where the revolution will happen.” Obviously, their immense potential for data collection also brings with it big new challenges for researchers (e.g., privacy protection, data analysis, and synthesis). Yet it is clear that many of the methods described in this module—and many still-to-be-developed ways of collecting real-world data—will, in the future, become integrated into the devices that people naturally and happily carry with them from the moment they get up in the morning to the moment they go to bed.

This module sought to make a case for psychology research conducted outside the lab. If the ultimate goal of the social and behavioral sciences is to explain human behavior, then researchers must also—in addition to conducting carefully controlled lab studies—deal with the “messy” real world and find ways to capture life as it naturally happens.

Mortensen and Cialdini (2010) refer to the dynamic give-and-take between laboratory and field research as “full-cycle psychology”. Going full cycle, they suggest, means that “researchers use naturalistic observation to determine an effect’s presence in the real world, theory to determine what processes underlie the effect, experimentation to verify the effect and its underlying processes, and a return to the natural environment to corroborate the experimental findings” (Mortensen & Cialdini, 2010, p. 53). To accomplish this, researchers have access to a toolbox of research methods for studying daily life that is now more diverse and more versatile than it has ever been before. So, all it takes is to go ahead and—literally—bring science to life.

Discussion Questions

  • What do you think about the tradeoff between unambiguously establishing cause and effect (internal validity) and ensuring that research findings apply to people’s everyday lives (external validity)? Which one of these would you prioritize as a researcher? Why?
  • What challenges do you see that daily-life researchers may face in their studies? How can they be overcome?
  • What ethical issues can come up in daily-life studies? How can (or should) they be addressed?
  • How do you think smartphones and other mobile electronic devices will change psychological research? What are their promises for the field? And what are their pitfalls?

References

  • Baumeister, R. F., Vohs, K. D., & Funder, D. C. (2007). Psychology as the science of self-reports and finger movements: Whatever happened to actual behavior? Perspectives on Psychological Science, 2, 396–403.
  • Bolger, N., & Laurenceau, J-P. (2013). Intensive longitudinal methods: An introduction to diary and experience sampling research. New York, NY: Guilford Press.
  • Bolger, N., Davis, A., & Rafaeli, E. (2003). Diary methods: Capturing life as it is lived. Annual Review of Psychology, 54, 579–616.
  • Bond, R. M., Jones, J. J., Kramer, A. D., Marlow, C., Settle, J. E., & Fowler, J. H. (2012). A 61 million-person experiment in social influence and political mobilization. Nature, 489, 295–298.
  • Brewer, M. B. (2000). Research design and issues of validity. In H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in social psychology (pp. 3–16). New York, NY: Cambridge University Press.
  • Cohn, M. A., Mehl, M. R., & Pennebaker, J. W. (2004). Linguistic indicators of psychological change after September 11, 2001. Psychological Science, 15, 687–693.
  • Conner, T. S., Tennen, H., Fleeson, W., & Barrett, L. F. (2009). Experience sampling methods: A modern idiographic approach to personality research. Social and Personality Psychology Compass, 3, 292–313.
  • Craik, K. H. (2000). The lived day of an individual: A person-environment perspective. In W. B. Walsh, K. H. Craik, & R. H. Price (Eds.), Person-environment psychology: New directions and perspectives (pp. 233–266). Mahwah, NJ: Lawrence Erlbaum Associates.
  • Fahrenberg, J., & Myrtek, M. (Eds.) (1996). Ambulatory assessment: Computer-assisted psychological and psychophysiological methods in monitoring and field studies. Seattle, WA: Hogrefe & Huber.
  • Funder, D. C. (2007). The personality puzzle. New York, NY: W. W. Norton & Co.
  • Funder, D. C. (2001). Personality. Annual Review of Psychology, 52, 197–221.
  • Gosling, S. D., & Johnson, J. A. (2010). Advanced methods for conducting online behavioral research. Washington, DC: American Psychological Association.
  • Gosling, S. D., Ko, S. J., Mannarelli, T., & Morris, M. E. (2002). A room with a cue: Personality judgments based on offices and bedrooms. Journal of Personality and Social Psychology, 82, 379–398.
  • Hektner, J. M., Schmidt, J. A., & Csikszentmihalyi, M. (2007). Experience sampling method: Measuring the quality of everyday life. Thousand Oaks, CA: Sage.
  • Kahneman, D., Krueger, A., Schkade, D., Schwarz, N., & Stone, A. (2004). A survey method for characterizing daily life experience: The Day Reconstruction Method. Science, 306, 1776–1780.
  • Kaplan, R. M., & Stone, A. A. (2013). Bringing the laboratory and clinic to the community: Mobile technologies for health promotion and disease prevention. Annual Review of Psychology, 64, 471–498.
  • Killingsworth, M. A., & Gilbert, D. T. (2010). A wandering mind is an unhappy mind. Science, 330, 932.
  • Lane, R. D., Zareba, W., Reis, H., Peterson, D., & Moss, A. (2011). Changes in ventricular repolarization duration during typical daily emotion in patients with Long QT Syndrome. Psychosomatic Medicine, 73, 98–105.
  • Lewin, K. (1944). Constructs in psychology and psychological ecology. University of Iowa Studies in Child Welfare, 20, 23–27.
  • Mehl, M. R., & Conner, T. S. (Eds.) (2012). Handbook of research methods for studying daily life. New York, NY: Guilford Press.
  • Mehl, M. R., Pennebaker, J. W., Crow, M., Dabbs, J., & Price, J. (2001). The electronically activated recorder (EAR): A device for sampling naturalistic daily activities and conversations. Behavior Research Methods, Instruments, and Computers, 33, 517–523.
  • Mehl, M. R., Robbins, M. L., & Deters, G. F. (2012). Naturalistic observation of health-relevant social processes: The electronically activated recorder (EAR) methodology in psychosomatics. Psychosomatic Medicine, 74, 410–417.
  • Mehl, M. R., Vazire, S., Ramírez-Esparza, N., Slatcher, R. B., & Pennebaker, J. W. (2007). Are women really more talkative than men? Science, 317, 82.
  • Miller, G. (2012). The smartphone psychology manifesto. Perspectives on Psychological Science, 7, 221–237.
  • Mortensen, C. R., & Cialdini, R. B. (2010). Full-cycle social psychology for theory and application. Social and Personality Psychology Compass, 4, 53–63.
  • Pennebaker, J. W., Mehl, M. R., & Niederhoffer, K. (2003). Psychological aspects of natural language use: Our words, our selves. Annual Review of Psychology, 54, 547–577.
  • Ramírez-Esparza, N., Mehl, M. R., Álvarez Bermúdez, J., & Pennebaker, J. W. (2009). Are Mexicans more or less sociable than Americans? Insights from a naturalistic observation study. Journal of Research in Personality, 43, 1–7.
  • Rathje, W., & Murphy, C. (2001). Rubbish! The archaeology of garbage. New York, NY: Harper Collins.
  • Reis, H. T., & Gosling, S. D. (2010). Social psychological methods outside the laboratory. In S. T. Fiske, D. T. Gilbert, & G. Lindzey (Eds.), Handbook of social psychology (5th ed., Vol. 1, pp. 82–114). New York, NY: Wiley.
  • Sapolsky, R. (2004). Why zebras don’t get ulcers: A guide to stress, stress-related diseases and coping. New York, NY: Henry Holt and Co.
  • Schlotz, W. (2012). Ambulatory psychoneuroendocrinology: Assessing salivary cortisol and other hormones in daily life. In M. R. Mehl & T. S. Conner (Eds.), Handbook of research methods for studying daily life (pp. 193–209). New York, NY: Guilford Press.
  • Smyth, J., Ockenfels, M. C., Porter, L., Kirschbaum, C., Hellhammer, D. H., & Stone, A. A. (1998). Stressors and mood measured on a momentary basis are associated with salivary cortisol secretion. Psychoneuroendocrinology, 23, 353–370.
  • Stone, A. A., & Shiffman, S. (1994). Ecological momentary assessment (EMA) in behavioral medicine. Annals of Behavioral Medicine, 16, 199–202.
  • Stone, A. A., Reed, B. R., & Neale, J. M. (1987). Changes in daily event frequency precede episodes of physical symptoms. Journal of Human Stress, 13, 70–74.
  • Webb, E. J., Campbell, D. T., Schwartz, R. D., Sechrest, L., & Grove, J. B. (1981). Nonreactive measures in the social sciences. Boston, MA: Houghton Mifflin Co.
  • White, W. B., Schulman, P., McCabe, E. J., & Dey, H. M. (1989). Average daily blood pressure, not office blood pressure, determines cardiac function in patients with hypertension. Journal of the American Medical Association, 261, 873–877.
  • Whyte, W. H. (1980). The social life of small urban spaces. Washington, DC: The Conservation Foundation.
  • Wilhelm, F. H., & Grossman, P. (2010). Emotions beyond the laboratory: Theoretical fundaments, study design, and analytic strategies for advanced ambulatory assessment. Biological Psychology, 84, 552–569.
  • Wilhelm, P., Perrez, M., & Pawlik, K. (2012). Conducting research in daily life: A historical review. In M. R. Mehl & T. S. Conner (Eds.), Handbook of research methods for studying daily life. New York, NY: Guilford Press.
  • Wilson, R., Gosling, S. D., & Graham, L. (2012). A review of Facebook research in the social sciences. Perspectives on Psychological Science, 7, 203–220.
  • Yarkoni, T. (2012). Psychoinformatics: New horizons at the interface of the psychological and computing sciences. Current Directions in Psychological Science, 21, 391–397.

2.1 Why Is Research Important?

Learning Objectives

By the end of this section, you will be able to:

  • Explain how scientific research addresses questions about behavior
  • Discuss how scientific research guides public policy
  • Appreciate how scientific research can be important in making personal decisions

   Scientific research is a critical tool for successfully navigating our complex world. Without it, we would be forced to rely solely on intuition, other people’s authority, and blind luck. While many of us feel confident in our abilities to decipher and interact with the world around us, history is filled with examples of how very wrong we can be when we fail to recognize the need for evidence in supporting claims. At various times in history, we would have been certain that the sun revolved around a flat earth, that the earth’s continents did not move, and that mental illness was caused by possession (figure below). It is through systematic scientific research that we divest ourselves of our preconceived notions and superstitions and gain an objective understanding of ourselves and our world.

A skull has a large hole bored through the forehead.

Some of our ancestors, across the world and over the centuries, believed that trephination – the practice of making a hole in the skull, as shown here – allowed evil spirits to leave the body, thus curing mental illness and other diseases (credit: “taiproject”/Flickr).

   The goal of all scientists is to better understand the world around them. Psychologists focus their attention on understanding behavior, as well as the cognitive (mental) and physiological (body) processes that underlie behavior. In contrast to other methods that people use to understand the behavior of others, such as intuition and personal experience, the hallmark of scientific research is that there is evidence to support a claim. Scientific knowledge is empirical: It is grounded in objective, tangible evidence that can be observed time and time again, regardless of who is observing.

We can easily observe the behavior of others around us. For example, if someone is crying, we can observe that behavior. However, the reason for the behavior is more difficult to determine. Is the person crying due to being sad, in pain, or happy? Sometimes, asking about the underlying cognitions is as easy as asking the subject directly: “Why are you crying?” However, there are situations in which an individual is either uncomfortable or unwilling to answer the question honestly, or is incapable of answering. For example, infants would not be able to explain why they are crying. In other situations, it may be hard to identify exactly why you feel the way you do. Think about times when you suddenly feel annoyed after a long day. There may be a specific trigger for your annoyance (a loud noise), or you may be tired, hungry, stressed, or all of the above. Human behavior is often a complicated mix of a variety of factors. In such circumstances, the psychologist must be creative in finding ways to better understand behavior. This chapter explores how scientific knowledge is generated, and how important that knowledge is in forming decisions in our personal lives and in the public domain.

USE OF RESEARCH INFORMATION

   Trying to determine which theories are and are not accepted by the scientific community can be difficult, especially in an area of research as broad as psychology. More than ever before, we have an incredible amount of information at our fingertips, and a simple internet search on any given research topic might result in a number of contradictory studies. In these cases, we are witnessing the scientific community going through the process of coming to an agreement, and it could be quite some time before a consensus emerges. In other cases, rapidly developing technology is improving our ability to measure things, and changing our earlier understanding of how the mind works.

In the meantime, we should strive to think critically about the information we encounter by exercising a degree of healthy skepticism. When someone makes a claim, we should examine the claim from a number of different perspectives: what is the expertise of the person making the claim, what might they gain if the claim is valid, does the claim seem justified given the evidence, and what do other researchers think of the claim? Science is always changing and new evidence is always coming to light; thus, this dash of skepticism should be applied to all research you interact with from now on. Yes, that includes the research presented in this textbook.

Evaluation of research findings can have widespread impact. Imagine that you have been elected as the governor of your state. One of your responsibilities is to manage the state budget and determine how to best spend your constituents’ tax dollars. As the new governor, you need to decide whether to continue funding the D.A.R.E. (Drug Abuse Resistance Education) program in public schools (figure below). This program typically involves police officers coming into the classroom to educate students about the dangers of becoming involved with alcohol and other drugs. According to the D.A.R.E. website (www.dare.org), this program has been very popular since its inception in 1983, and it is currently operating in 75% of school districts in the United States and in more than 40 countries worldwide. Sounds like an easy decision, right? However, on closer review, you discover that the vast majority of research into this program consistently suggests that participation has little, if any, effect on whether or not someone uses alcohol or other drugs (Clayton, Cattarello, & Johnstone, 1996; Ennett, Tobler, Ringwalt, & Flewelling, 1994; Lynam et al., 1999; Ringwalt, Ennett, & Holt, 1991). If you are committed to being a good steward of taxpayer money, will you fund this particular program, or will you try to find other programs that research has consistently demonstrated to be effective?

A D.A.R.E. poster reads “D.A.R.E. to resist drugs and violence.”

The D.A.R.E. program continues to be popular in schools around the world despite research suggesting that it is ineffective.

It is not just politicians who can benefit from using research in guiding their decisions. We all might look to research from time to time when making decisions in our lives. Imagine you just found out that a close friend has breast cancer or that one of your young relatives has recently been diagnosed with autism. In either case, you want to know which treatment options are most successful with the fewest side effects. How would you find that out? You would probably talk with a doctor or psychologist and personally review the research that has been done on various treatment options—always with a critical eye to ensure that you are as informed as possible.

In the end, research is what makes the difference between facts and opinions. Facts are observable realities, and opinions are personal judgments, conclusions, or attitudes that may or may not be accurate. In the scientific community, facts can be established only using evidence collected through empirical research.

THE PROCESS OF SCIENTIFIC RESEARCH

   Scientific knowledge is advanced through a process known as the scientific method. Basically, ideas (in the form of theories and hypotheses) are tested against the real world (in the form of empirical observations), and those observations lead to more ideas that are tested against the real world, and so on. In this sense, the scientific process is circular. We continually test and revise theories based on new evidence.

Two types of reasoning are used to make decisions within this model: deductive and inductive. In deductive reasoning, ideas are tested against the empirical world. Think about a detective looking for clues and evidence to test their “hunch” about whodunit. In contrast, in inductive reasoning, empirical observations lead to new ideas. In other words, inductive reasoning involves gathering facts to create or refine a theory, rather than testing the theory by gathering facts (figure below). These processes are inseparable, like inhaling and exhaling, but different research approaches place different emphasis on the deductive and inductive aspects.

A diagram has a box at the top labeled “hypothesis or general premise” and a box at the bottom labeled “empirical observations.” On the left, an arrow labeled “inductive reasoning” goes from the bottom to top box. On the right, an arrow labeled “deductive reasoning” goes from the top to the bottom box.

Psychological research relies on both inductive and deductive reasoning.

   In the scientific context, deductive reasoning begins with a generalization—one hypothesis—that is then used to reach logical conclusions about the real world. If the hypothesis is correct, then the logical conclusions reached through deductive reasoning should also be correct. A deductive reasoning argument might go something like this: All living things require energy to survive (this would be your hypothesis). Ducks are living things. Therefore, ducks require energy to survive (logical conclusion). In this example, the hypothesis is correct; therefore, the conclusion is correct as well. Sometimes, however, an incorrect hypothesis may lead to a logical but incorrect conclusion. Consider the famous example from Greek philosophy. A philosopher decided that human beings were “featherless bipeds”. Using deductive reasoning, all two-legged creatures without feathers must be human, right? Diogenes the Cynic (named because he was, well, a cynic) burst into the room with a freshly plucked chicken from the market and held it up exclaiming “Behold! I have brought you a man!”
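
In standard predicate-logic notation (added here purely for illustration), the duck argument is universal instantiation followed by modus ponens:

\[
\forall x\,\bigl(\mathrm{Living}(x) \rightarrow \mathrm{NeedsEnergy}(x)\bigr),\quad \mathrm{Living}(\mathrm{duck}) \;\vdash\; \mathrm{NeedsEnergy}(\mathrm{duck})
\]

The inference is valid either way; swap in the false premise \( \forall x\,(\mathrm{FeatherlessBiped}(x) \rightarrow \mathrm{Human}(x)) \) and the same machinery dutifully delivers Diogenes’ plucked chicken.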

Deductive reasoning starts with a generalization that is tested against real-world observations; however, inductive reasoning moves in the opposite direction. Inductive reasoning uses empirical observations to construct broad generalizations. Unlike deductive reasoning, conclusions drawn from inductive reasoning may or may not be correct, regardless of the observations on which they are based. For example, you might be a biologist attempting to classify animals into groups. You notice that quite a large portion of animals are furry and produce milk for their young (cats, dogs, squirrels, horses, hippos, etc.). Therefore, you might conclude that all mammals (the name you have chosen for this grouping) have hair and produce milk. This seems like a pretty great hypothesis that you could test with deductive reasoning. You go out and look at a whole bunch of things and stumble on an exception: the coconut. Coconuts have hair and produce milk, but they don’t “fit” your idea of what a mammal is. So, using inductive reasoning given the new evidence, you adjust your theory again for another round of data collection. Inductive and deductive reasoning work in tandem to help build and improve scientific theories over time.

We’ve stated that theories and hypotheses are ideas, but what sort of ideas are they, exactly? A theory is a well-developed set of ideas that propose an explanation for observed phenomena. Theories are repeatedly checked against the world, but they tend to be too complex to be tested all at once. Instead, researchers create hypotheses to test specific aspects of a theory.

A hypothesis is a testable prediction about how the world will behave if our theory is correct, and it is often worded as an if-then statement (e.g., if I study all night, I will get a passing grade on the test). The hypothesis is extremely important because it bridges the gap between the realm of ideas and the real world. As specific hypotheses are tested, theories are modified and refined to reflect and incorporate the result of these tests (figure below).

A diagram has four boxes: the top is labeled “theory,” the right is labeled “hypothesis,” the bottom is labeled “research,” and the left is labeled “observation.” Arrows flow in the direction from top to right to bottom to left and back to the top, clockwise. The top right arrow is labeled “use the theory to form a hypothesis,” the bottom right arrow is labeled “design a study to test the hypothesis,” the bottom left arrow is labeled “perform the research,” and the top left arrow is labeled “create or modify the theory.”

The scientific method of research includes proposing hypotheses, conducting research, and creating or modifying theories based on results.

   To see how this process works, let’s consider a specific theory and a hypothesis that might be generated from that theory. As you’ll learn in a later chapter, the James-Lange theory of emotion asserts that emotional experience relies on the physiological arousal associated with the emotional state. If you walked out of your home and discovered a very aggressive snake waiting on your doorstep, your heart would begin to race and your stomach churn. According to the James-Lange theory, these physiological changes would result in your feeling of fear. A hypothesis that could be derived from this theory might be that a person who is unaware of the physiological arousal that the sight of the snake elicits will not feel fear.

A scientific hypothesis is also falsifiable, or capable of being shown to be incorrect. Recall from the introductory chapter that Sigmund Freud had lots of interesting ideas to explain various human behaviors (figure below). However, a major criticism of Freud’s theories is that many of his ideas are not falsifiable. The essential characteristic of Freud’s building blocks of personality, the id, ego, and superego, is that they are unconscious, and therefore people can’t observe them. Because they cannot be observed or tested in any way, it is impossible to say that they don’t exist, so they cannot be considered scientific theories. Despite this, Freud’s theories are widely taught in introductory psychology texts because of their historical significance for personality psychology and psychotherapy, and these remain the root of all modern forms of therapy.

(a) A photograph shows Freud holding a cigar. (b) The mind’s conscious and unconscious states are illustrated as an iceberg floating in water. Beneath the water’s surface in the “unconscious” area are the id, ego, and superego. The area just below the water’s surface is labeled “preconscious.” The area above the water’s surface is labeled “conscious.”

Many of the specifics of (a) Freud’s theories, such as (b) his division of the mind into the id, ego, and superego, have fallen out of favor in recent decades because they are not falsifiable (i.e., cannot be verified through scientific investigation). In broader strokes, his views set the stage for much psychological thinking today, such as the idea that some psychological processes occur at the level of the unconscious.

In contrast, the James-Lange theory does generate falsifiable hypotheses, such as the one described above. Some individuals who suffer significant injuries to their spinal columns are unable to feel the bodily changes that often accompany emotional experiences. Therefore, we could test the hypothesis by determining how emotional experiences differ between individuals who have the ability to detect these changes in their physiological arousal and those who do not. In fact, this research has been conducted and while the emotional experiences of people deprived of an awareness of their physiological arousal may be less intense, they still experience emotion (Chwalisz, Diener, & Gallagher, 1988).

Scientific research’s dependence on falsifiability allows for great confidence in the information that it produces. Typically, by the time information is accepted by the scientific community, it has been tested repeatedly.

Scientists are engaged in explaining and understanding how the world around them works, and they are able to do so by coming up with theories that generate hypotheses that are testable and falsifiable. Theories that stand up to their tests are retained and refined, while those that do not are discarded or modified. Having good information generated from research aids in making wise decisions both in public policy and in our personal lives.

Review Questions:

1. Scientific hypotheses are ________ and falsifiable.

a. observable

b. original

c. provable

d. testable

2. ________ are defined as observable realities.

a. behaviors

b. facts

c. opinions

d. theories

3. Scientific knowledge is ________.

a. intuitive

b. empirical

c. permanent

d. subjective

4. A major criticism of Freud’s early theories involves the fact that his theories ________.

a. were too limited in scope

b. were too outrageous

c. were too broad

d. were not testable

Critical Thinking Questions:

1. In this section, the D.A.R.E. program was described as an incredibly popular program in schools across the United States despite the fact that research consistently suggests that this program is largely ineffective. How might one explain this discrepancy?

2. The scientific method is often described as self-correcting and cyclical. Briefly describe your understanding of the scientific method with regard to these concepts.

Personal Application Questions:

1. Healthcare professionals cite an enormous number of health problems related to obesity, and many people have an understandable desire to attain a healthy weight. There are many diet programs, services, and products on the market to aid those who wish to lose weight. If a close friend was considering purchasing or participating in one of these products, programs, or services, how would you make sure your friend was fully aware of the potential consequences of this decision? What sort of information would you want to review before making such an investment or lifestyle change yourself?


Answers to Exercises

Review Questions:

1. d

2. b

3. b

4. d

Critical Thinking Questions:

1. There is probably tremendous political pressure to appear to be hard on drugs. Therefore, even though D.A.R.E. might be ineffective, it is a well-known program with which voters are familiar.

2. This cyclical, self-correcting process is primarily a function of the empirical nature of science. Theories are generated as explanations of real-world phenomena. From theories, specific hypotheses are developed and tested. As a function of this testing, theories will be revisited and modified or refined to generate new hypotheses that are again tested. This cyclical process ultimately allows for more and more precise (and presumably accurate) information to be collected.

deductive reasoning:  results are predicted based on a general premise

empirical:  grounded in objective, tangible evidence that can be observed time and time again, regardless of who is observing

fact:  objective and verifiable observation, established using evidence collected through empirical research

falsifiable:  able to be disproven by experimental results

hypothesis:  (plural: hypotheses) tentative and testable statement about the relationship between two or more variables

inductive reasoning:  conclusions are drawn from observations

opinion:  personal judgments, conclusions, or attitudes that may or may not be accurate

theory:  well-developed set of ideas that propose an explanation for observed phenomena

How to Conduct a Psychology Experiment


Conducting your first psychology experiment can be a long, complicated, and sometimes intimidating process. It can be especially confusing if you are not quite sure where to begin or which steps to take.

Like other sciences, psychology utilizes the scientific method and bases conclusions upon empirical evidence. When conducting an experiment, it is important to follow the eight basic steps of the scientific method:

  • Ask a testable question
  • Define your variables
  • Conduct background research
  • Design your experiment
  • Perform the experiment
  • Collect and analyze the data
  • Draw conclusions
  • Share the results with the scientific community

At a Glance

It's important to know the steps of the scientific method if you are conducting an experiment in psychology or other fields. The process encompasses finding a problem you want to explore, learning what has already been discovered about the topic, determining your variables, and finally designing and performing your experiment. But the process doesn't end there! Once you've collected your data, it's time to analyze the numbers, determine what they mean, and share what you've found.

Find a Research Problem or Question

Picking a research problem can be one of the most challenging steps when you are conducting an experiment. After all, there are so many different topics you might choose to investigate.

Are you stuck for an idea? Consider some of the following:

Investigate a Commonly Held Belief

Folk knowledge is a good source of questions that can serve as the basis for psychological research. For example, many people believe that staying up all night to cram for a big exam can actually hurt test performance.

You could conduct a study to compare the test scores of students who stayed up all night with the scores of students who got a full night's sleep before the exam.

Review Psychology Literature

Published studies are a great source of unanswered research questions. In many cases, the authors will even note the need for further research. Find a published study that you find intriguing, and then come up with some questions that require further exploration.

Think About Everyday Problems

There are many practical applications for psychology research. Explore various problems that you or others face each day, and then consider how you could research potential solutions. For example, you might investigate different memorization strategies to determine which methods are most effective.

Define Your Variables

Variables are anything that might impact the outcome of your study. An operational definition describes exactly what the variables are and how they are measured within the context of your study.

For example, if you were doing a study on the impact of sleep deprivation on driving performance, you would need to operationally define sleep deprivation and driving performance.

An operational definition refers to a precise way that an abstract concept will be measured. For example, you cannot directly observe and measure something like test anxiety. You can, however, use an anxiety scale and assign values based on how many anxiety symptoms a person is experiencing.

In this example, you might define sleep deprivation as getting less than seven hours of sleep at night. You might define driving performance as how well a participant does on a driving test.

What is the purpose of operationally defining variables? The main purpose is control. By understanding what you are measuring, you can control for it by holding the variable constant between all groups or manipulating it as an independent variable.
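
To make this concrete, here is a minimal sketch, in Python, of how the two operational definitions from the sleep study example might be written down. The seven-hour threshold, the 0-100 test scale, and the function names are illustrative assumptions, not details from the original example.

    # Hypothetical operational definitions for the sleep-deprivation example.
    def is_sleep_deprived(hours_slept: float) -> bool:
        # Assumed operational definition: fewer than seven hours of sleep.
        return hours_slept < 7.0

    def driving_performance(test_score: float) -> float:
        # Assumed operational definition: score on a standardized driving test (0-100).
        if not 0 <= test_score <= 100:
            raise ValueError("score must fall on the 0-100 test scale")
        return test_score

    print(is_sleep_deprived(6.5))  # True: this participant counts as sleep-deprived

Writing the definitions down this explicitly is what makes it possible to hold a variable constant across groups or to manipulate it deliberately.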

Develop a Hypothesis

The next step is to develop a testable hypothesis that predicts how the operationally defined variables are related. In the sleep deprivation example, the hypothesis might be: "Students who are sleep-deprived will perform worse than students who are not sleep-deprived on a test of driving performance."

Null Hypothesis

In order to determine if the results of the study are significant, it is essential to also have a null hypothesis. The null hypothesis is the prediction that one variable will have no association with the other variable.

In other words, the null hypothesis assumes that there will be no difference in the effects of the two treatments in our experimental and control groups.

The null hypothesis is assumed to be valid unless contradicted by the results. The experimenters can either reject the null hypothesis in favor of the alternative hypothesis or not reject the null hypothesis.

It is important to remember that not rejecting the null hypothesis does not mean that you are accepting the null hypothesis. To say that you are accepting the null hypothesis is to suggest that something is true simply because you did not find any evidence against it. This represents a logical fallacy that should be avoided in scientific research.  
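
To see the reject-or-fail-to-reject logic in action, here is a minimal sketch using SciPy's independent-samples t-test. The scores, group labels, and alpha level are invented for illustration and are not data from any real study.

    from scipy import stats

    # Hypothetical driving-test scores (higher = better performance).
    sleep_deprived = [62, 58, 70, 55, 64, 61, 59, 66]
    well_rested = [75, 80, 72, 78, 69, 74, 81, 77]

    # Null hypothesis: no difference between the two group means.
    t_statistic, p_value = stats.ttest_ind(sleep_deprived, well_rested)

    alpha = 0.05  # conventional significance threshold
    if p_value < alpha:
        print(f"p = {p_value:.4f}: reject the null hypothesis")
    else:
        # We "fail to reject" the null hypothesis; we do not "accept" it.
        print(f"p = {p_value:.4f}: fail to reject the null hypothesis")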

Conduct Background Research

Once you have developed a testable hypothesis, it is important to spend some time doing some background research. What do researchers already know about your topic? What questions remain unanswered?

You can learn about previous research on your topic by exploring books, journal articles, online databases, newspapers, and websites devoted to your subject.

Reading previous research helps you gain a better understanding of what you will encounter when conducting an experiment. Understanding the background of your topic provides a better basis for your own hypothesis.

After conducting a thorough review of the literature, you might choose to alter your own hypothesis. Background research also allows you to explain why you chose to investigate your particular hypothesis and articulate why the topic merits further exploration.

As you research the history of your topic, take careful notes and create a working bibliography of your sources. This information will be valuable when you begin to write up your experiment results.

Select an Experimental Design

After conducting background research and finalizing your hypothesis, your next step is to develop an experimental design. There are three basic types of designs that you might utilize. Each has its own strengths and weaknesses:

Pre-Experimental Design

A single group of participants is studied, and there is no comparison between a treatment group and a control group. Examples of pre-experimental designs include case studies (one group is given a treatment and the results are measured) and pre-test/post-test studies (one group is tested, given a treatment, and then retested).

Quasi-Experimental Design

This type of experimental design does include a control group but does not include randomization. This type of design is often used if it is not feasible or ethical to perform a randomized controlled trial.

True Experimental Design

A true experimental design, also known as a randomized controlled trial, includes both of the elements that pre-experimental designs and quasi-experimental designs lack—control groups and random assignment to groups.

Standardize Your Procedures

In order to arrive at legitimate conclusions, it is essential to compare apples to apples.

Each participant in each group must receive the same treatment under the same conditions.

For example, in our hypothetical study on the effects of sleep deprivation on driving performance, the driving test must be administered to each participant in the same way. The driving course must be the same, the obstacles faced must be the same, and the time given must be the same.

Choose Your Participants

In addition to making sure that the testing conditions are standardized, it is also essential to ensure that your pool of participants is the same.

If the individuals in your control group (those who are not sleep deprived) all happen to be amateur race car drivers while your experimental group (those who are sleep deprived) are all people who just recently earned their driver's licenses, your experiment will lack standardization.

When choosing subjects, there are some different techniques you can use.

Simple Random Sample

In a simple random sample, the participants are randomly selected from a group, so every member of the population has an equal chance of being chosen. This is why a simple random sample can be used to represent the entire population from which it is drawn.

Drawing a simple random sample can be helpful when you don't know a lot about the characteristics of the population.

Stratified Random Sample

In a stratified random sample, participants are randomly selected from different subsets, or strata, of the population. These subsets might include characteristics such as geographic location, age, sex, race, or socioeconomic status.

Stratified random samples are more complex to carry out. However, you might opt for this method if there are key characteristics about the population that you want to explore in your research.
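
Both sampling techniques can be sketched in a few lines of Python; the participant pool and the age-group strata below are invented purely for illustration.

    import random

    random.seed(42)  # reproducible example

    # Hypothetical participant pool: (id, age_group) pairs.
    population = [(i, random.choice(["18-29", "30-49", "50+"])) for i in range(1000)]

    # Simple random sample: every member has an equal chance of selection.
    simple_sample = random.sample(population, k=50)

    # Stratified random sample: sample randomly within each age-group stratum.
    strata = {}
    for person in population:
        strata.setdefault(person[1], []).append(person)
    stratified_sample = []
    for group, members in strata.items():
        stratified_sample.extend(random.sample(members, k=15))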

Conduct Tests and Collect Data

After you have selected participants, the next steps are to conduct your tests and collect the data. Before doing any testing, however, there are a few important concerns that need to be addressed.

Address Ethical Concerns

First, you need to be sure that your testing procedures are ethical. Generally, you will need to gain permission to conduct any type of testing with human participants by submitting the details of your experiment to your school's Institutional Review Board (IRB), sometimes referred to as the Human Subjects Committee.

Obtain Informed Consent

After you have gained approval from your institution's IRB, you will need to present informed consent forms to each participant. This form offers information on the study, the data that will be gathered, and how the results will be used. The form also gives participants the option to withdraw from the study at any point in time.

Once this step has been completed, you can begin administering your testing procedures and collecting the data.

Analyze the Results

After collecting your data, it is time to analyze the results of your experiment. Researchers use statistics to determine if the results of the study support the original hypothesis and if the results are statistically significant.

Statistical significance means that the study's results are unlikely to have occurred simply by chance.

The types of statistical methods you use to analyze your data depend largely on the type of data that you collected. If you are using a random sample of a larger population, you will need to utilize inferential statistics.

These statistical methods make inferences about how the results relate to the population at large.

Because you are making inferences based on a sample, it has to be assumed that there will be a certain margin of error. This refers to the amount of error in your results. A large margin of error means that there will be less confidence in your results, while a small margin of error means that you are more confident that your results are an accurate reflection of what exists in that population.
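
As a rough illustration of how the margin of error shrinks as the sample grows, here is a sketch of the standard formula for a 95% confidence interval around a sample mean; the standard deviation and sample sizes are invented.

    import math

    def margin_of_error_95(sample_sd: float, n: int) -> float:
        # 1.96 is the critical z-value for a 95% confidence level.
        return 1.96 * sample_sd / math.sqrt(n)

    # Hypothetical: driving scores with a standard deviation of 10 points.
    print(margin_of_error_95(10, 25))   # about 3.9 points with 25 participants
    print(margin_of_error_95(10, 400))  # about 1.0 point with 400 participants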

Share Your Results After Conducting an Experiment

Your final task in conducting an experiment is to communicate your results. By sharing your experiment with the scientific community, you are contributing to the knowledge base on that particular topic.

One of the most common ways to share research results is to publish the study in a peer-reviewed professional journal. Other methods include sharing results at conferences, in book chapters, or in academic presentations.

In your case, it is likely that your class instructor will expect a formal write-up of your experiment in the same format required in a professional journal article or lab report:

  • Title page
  • Abstract
  • Introduction
  • Method
  • Results
  • Discussion
  • References
  • Tables and figures

What This Means For You

Designing and conducting a psychology experiment can be quite intimidating, but breaking the process down step-by-step can help. No matter what type of experiment you decide to perform, always check with your instructor and your school's institutional review board for permission before you begin.


By Kendra Cherry, MSEd


1.3 Conducting Research in Social Psychology

Learning Objectives

  • Explain why social psychologists rely on empirical methods to study social behavior.
  • Provide examples of how social psychologists measure the variables they are interested in.
  • Review the three types of research designs, and evaluate the strengths and limitations of each type.
  • Consider the role of validity in research, and describe how research programs should be evaluated.

Social psychologists are not the only people interested in understanding and predicting social behavior or the only people who study it. Social behavior is also considered by religious leaders, philosophers, politicians, novelists, and others, and it is a common topic on TV shows. But the social psychological approach to understanding social behavior goes beyond the mere observation of human actions. Social psychologists believe that a true understanding of the causes of social behavior can only be obtained through a systematic scientific approach, and that is why they conduct scientific research. Social psychologists believe that the study of social behavior should be empirical—that is, based on the collection and systematic analysis of observable data.

The Importance of Scientific Research

Because social psychology concerns the relationships among people, and because we can frequently find answers to questions about human behavior by using our own common sense or intuition, many people think that it is not necessary to study it empirically (Lilienfeld, 2011). But although we do learn about people by observing others and therefore social psychology is in fact partly common sense, social psychology is not entirely common sense.

In case you are not convinced about this, perhaps you would be willing to test whether or not social psychology is just common sense by taking a short true-or-false quiz. If so, please have a look at Table 1.1 “Is Social Psychology Just Common Sense?” and respond with either “True” or “False.” Based on your past observations of people’s behavior, along with your own common sense, you will likely have answers to each of the questions on the quiz. But how sure are you? Would you be willing to bet that all, or even most, of your answers have been shown to be correct by scientific research? Would you be willing to accept your score on this quiz for your final grade in this class? If you are like most of the students in my classes, you will get at least some of these answers wrong. (To see the answers and a brief description of the scientific research supporting each of these topics, please go to the Chapter Summary at the end of this chapter.)

Table 1.1 Is Social Psychology Just Common Sense?

One of the reasons we might think that social psychology is common sense is that once we learn about the outcome of a given event (e.g., when we read about the results of a research project), we frequently believe that we would have been able to predict the outcome ahead of time. For instance, if half of a class of students is told that research concerning attraction between people has demonstrated that “opposites attract,” and if the other half is told that research has demonstrated that “birds of a feather flock together,” most of the students in both groups will report believing that the outcome is true and that they would have predicted the outcome before they had heard about it. Of course, both of these contradictory outcomes cannot be true. The problem is that just reading a description of research findings leads us to think of the many cases that we know that support the findings and thus makes them seem believable. The tendency to think that we could have predicted something that we probably would not have been able to predict is called the hindsight bias.

Our common sense also leads us to believe that we know why we engage in the behaviors that we engage in, when in fact we may not. Social psychologist Daniel Wegner and his colleagues have conducted a variety of studies showing that we do not always understand the causes of our own actions. When we think about a behavior before we engage in it, we believe that the thinking guided our behavior, even when it did not (Morewedge, Gray, & Wegner, 2010). People also report that they contribute more to solving a problem when they are led to believe that they have been working harder on it, even though the effort did not increase their contribution to the outcome (Preston & Wegner, 2007). These findings, and many others like them, demonstrate that our beliefs about the causes of social events, and even of our own actions, do not always match the true causes of those events.

Social psychologists conduct research because it often uncovers results that could not have been predicted ahead of time. Putting our hunches to the test exposes our ideas to scrutiny. The scientific approach brings a lot of surprises, but it also helps us test our explanations about behavior in a rigorous manner. It is important for you to understand the research methods used in psychology so that you can evaluate the validity of the research that you read about here, in other courses, and in your everyday life.

Social psychologists publish their research in scientific journals, and your instructor may require you to read some of these research articles. The most important social psychology journals are listed in Table 1.2 “Social Psychology Journals”. If you are asked to do a literature search on research in social psychology, you should look for articles from these journals.

Table 1.2 Social Psychology Journals

We’ll discuss the empirical approach and review the findings of many research projects throughout this book, but for now let’s take a look at the basics of how scientists use research to draw overall conclusions about social behavior. Keep in mind as you read this book, however, that although social psychologists are pretty good at understanding the causes of behavior, our predictions are a long way from perfect. We are not able to control the minds or the behaviors of others or to predict exactly what they will do in any given situation. Human behavior is complicated because people are complicated and because the social situations that they find themselves in every day are also complex. It is this complexity—at least for me—that makes studying people so interesting and fun.

Measuring Affect, Behavior, and Cognition

One important aspect of using an empirical approach to understand social behavior is that the concepts of interest must be measured (Figure 1.4 “The Operational Definition”). If we are interested in learning how much Sarah likes Robert, then we need to have a measure of her liking for him. But how, exactly, should we measure the broad idea of “liking”? In scientific terms, the characteristics that we are trying to measure are known as conceptual variables, and the particular method that we use to measure a variable of interest is called an operational definition.

For anything that we might wish to measure, there are many different operational definitions, and which one we use depends on the goal of the research and the type of situation we are studying. To better understand this, let’s look at an example of how we might operationally define “Sarah likes Robert.”

Figure 1.4 The Operational Definition


An idea or conceptual variable (such as “how much Sarah likes Robert”) is turned into a measure through an operational definition.

One approach to measurement involves directly asking people about their perceptions using self-report measures. Self-report measures are measures in which individuals are asked to respond to questions posed by an interviewer or on a questionnaire. Generally, because any one question might be misunderstood or answered incorrectly, in order to provide a better measure, more than one question is asked and the responses to the questions are averaged together. For example, an operational definition of Sarah’s liking for Robert might involve asking her to complete the following measure:

I enjoy being around Robert.

Strongly disagree 1 2 3 4 5 6 Strongly agree

I get along well with Robert.

I like Robert.

The operational definition would be the average of her responses across the three questions. Because each question assesses the attitude differently, and yet each question should nevertheless measure Sarah’s attitude toward Robert in some way, the average of the three questions will generally be a better measure than would any one question on its own.
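
In code, this operational definition amounts to nothing more than an average; Sarah's responses below are invented for illustration.

    # Sarah's hypothetical responses on the 1-6 scale for the three items:
    # enjoying being around Robert, getting along with Robert, liking Robert.
    responses = [5, 4, 6]

    # The operational definition of "liking" is the mean of the three items.
    liking_score = sum(responses) / len(responses)
    print(liking_score)  # 5.0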

Although it is easy to ask many questions on self-report measures, these measures have a potential disadvantage. As we have seen, people’s insights into their own opinions and their own behaviors may not be perfect, and they might also not want to tell the truth—perhaps Sarah really likes Robert, but she is unwilling or unable to tell us so. Therefore, an alternative to self-report that can sometimes provide a more valid measure is to measure behavior itself. Behavioral measures are measures designed to directly assess what people do. Instead of asking Sarah how much she likes Robert, we might instead measure her liking by assessing how much time she spends with Robert or by coding how much she smiles at him when she talks to him. Some examples of behavioral measures that have been used in social psychological research are shown in Table 1.3 “Examples of Operational Definitions of Conceptual Variables That Have Been Used in Social Psychological Research”.

Table 1.3 Examples of Operational Definitions of Conceptual Variables That Have Been Used in Social Psychological Research

Social Neuroscience: Measuring Social Responses in the Brain

Still another approach to measuring our thoughts and feelings is to measure brain activity, and recent advances in brain science have created a wide variety of new techniques for doing so. One approach, known as electroencephalography (EEG), is a technique that records the electrical activity produced by the brain’s neurons through the use of electrodes that are placed around the research participant’s head. An electroencephalogram (EEG) can show if a person is asleep, awake, or anesthetized because the brain wave patterns are known to differ during each state. An EEG can also track the waves that are produced when a person is reading, writing, and speaking with others. A particular advantage of the technique is that the participant can move around while the recordings are being taken, which is useful when measuring brain activity in children, who often have difficulty keeping still. Furthermore, by following electrical impulses across the surface of the brain, researchers can observe changes over very fast time periods.


This woman is wearing an EEG cap.

goocy – Research – CC BY-NC 2.0.

Although EEGs can provide information about the general patterns of electrical activity within the brain, and although they allow the researcher to see these changes quickly as they occur in real time, the electrodes must be placed on the surface of the skull, and each electrode measures brain waves from large areas of the brain. As a result, EEGs do not provide a very clear picture of the structure of the brain.

But techniques exist to provide more specific brain images. Functional magnetic resonance imaging (fMRI) is a neuroimaging technique that uses a magnetic field to create images of brain structure and function. In research studies that use the fMRI, the research participant lies on a bed within a large cylindrical structure containing a very strong magnet. Nerve cells in the brain that are active use more oxygen, and the need for oxygen increases blood flow to the area. The fMRI detects the amount of blood flow in each brain region and thus is an indicator of which parts of the brain are active.

Very clear and detailed pictures of brain structures (see Figure 1.5 “Functional Magnetic Resonance Imaging (fMRI)”) can be produced via fMRI. Often, the images take the form of cross-sectional “slices” that are obtained as the magnetic field is passed across the brain. The images of these slices are taken repeatedly and are superimposed on images of the brain structure itself to show how activity changes in different brain structures over time. Normally, the research participant is asked to engage in tasks while in the scanner, for instance, to make judgments about pictures of people, to solve problems, or to make decisions about appropriate behaviors. The fMRI images show which parts of the brain are associated with which types of tasks. Another advantage of the fMRI is that it is noninvasive. The research participant simply enters the machine and the scans begin.

Figure 1.5 Functional Magnetic Resonance Imaging (fMRI)


The fMRI creates images of brain structure and activity. In this image, the red and yellow areas represent increased blood flow and thus increased activity.

Reigh LeBlanc – Reigh’s Brain rlwat – CC BY-NC 2.0; Wikimedia Commons – public domain.

Although the scanners themselves are expensive, the advantages of fMRIs are substantial, and scanners are now available in many university and hospital settings. The fMRI is now the most commonly used method of learning about brain structure, and it has been employed by social psychologists to study social cognition, attitudes, morality, emotions, responses to being rejected by others, and racial prejudice, to name just a few topics (Eisenberger, Lieberman, & Williams, 2003; Greene, Sommerville, Nystrom, Darley, & Cohen, 2001; Lieberman, Hariri, Jarcho, Eisenberger, & Bookheimer, 2005; Ochsner, Bunge, Gross, & Gabrieli, 2002; Richeson et al., 2003).

Observational Research

Once we have decided how to measure our variables, we can begin the process of research itself. As you can see in Table 1.4 “Three Major Research Designs Used by Social Psychologists”, there are three major approaches to conducting research that are used by social psychologists—the observational approach, the correlational approach, and the experimental approach. Each approach has some advantages and disadvantages.

Table 1.4 Three Major Research Designs Used by Social Psychologists

The most basic research design, observational research, is research that involves making observations of behavior and recording those observations in an objective manner. Although it is possible in some cases to use observational data to draw conclusions about the relationships between variables (e.g., by comparing the behaviors of older versus younger children on a playground), in many cases the observational approach is used only to get a picture of what is happening to a given set of people at a given time and how they are responding to the social situation. In these cases, the observational approach involves creating a type of “snapshot” of the current state of affairs.

One advantage of observational research is that in many cases it is the only possible approach to collecting data about the topic of interest. A researcher who is interested in studying the impact of a hurricane on the residents of New Orleans, the reactions of New Yorkers to a terrorist attack, or the activities of the members of a religious cult cannot create such situations in a laboratory but must be ready to make observations in a systematic way when such events occur on their own. Thus observational research allows the study of unique situations that could not be created by the researcher. Another advantage of observational research is that the people whose behavior is being measured are doing the things they do every day, and in some cases they may not even know that their behavior is being recorded.

One early observational study that made an important contribution to understanding human behavior was reported in a book by Leon Festinger and his colleagues (Festinger, Riecken, & Schachter, 1956). The book, called When Prophecy Fails , reported an observational study of the members of a “doomsday” cult. The cult members believed that they had received information, supposedly sent through “automatic writing” from a planet called “Clarion,” that the world was going to end. More specifically, the group members were convinced that the earth would be destroyed, as the result of a gigantic flood, sometime before dawn on December 21, 1954.

When Festinger learned about the cult, he thought that it would be an interesting way to study how individuals in groups communicate with each other to reinforce their extreme beliefs. He and his colleagues observed the members of the cult over a period of several months, beginning in July of the year in which the flood was expected. The researchers collected a variety of behavioral and self-report measures by observing the cult, recording the conversations among the group members, and conducting detailed interviews with them. Festinger and his colleagues also recorded the reactions of the cult members, beginning on December 21, when the world did not end as they had predicted. This observational research provided a wealth of information about the indoctrination patterns of cult members and their reactions to disconfirmed predictions. This research also helped Festinger develop his important theory of cognitive dissonance.

Despite their advantages, observational research designs also have some limitations. Most important, because the data that are collected in observational studies are only a description of the events that are occurring, they do not tell us anything about the relationship between different variables. However, it is exactly this question that correlational research and experimental research are designed to answer.

The Research Hypothesis

Because social psychologists are generally interested in looking at relationships among variables, they begin by stating their predictions in the form of a precise statement known as a research hypothesis. A research hypothesis is a statement about the relationship between the variables of interest and about the specific direction of that relationship. For instance, the research hypothesis “People who are more similar to each other will be more attracted to each other” predicts that there is a relationship between a variable called similarity and another variable called attraction. In the research hypothesis “The attitudes of cult members become more extreme when their beliefs are challenged,” the variables that are expected to be related are extremity of beliefs and the degree to which the cult’s beliefs are challenged.

Because the research hypothesis states both that there is a relationship between the variables and the direction of that relationship, it is said to be falsifiable. Being falsifiable means that the outcome of the research can demonstrate empirically either that there is support for the hypothesis (i.e., the relationship between the variables was correctly specified) or that there is actually no relationship between the variables or that the actual relationship is not in the direction that was predicted. Thus the research hypothesis that “people will be more attracted to others who are similar to them” is falsifiable because the research could show either that there was no relationship between similarity and attraction or that people we see as similar to us are seen as less attractive than those who are dissimilar.

Correlational Research

The goal of correlational research is to search for and test hypotheses about the relationships between two or more variables. In the simplest case, the correlation is between only two variables, such as that between similarity and liking, or between gender (male versus female) and helping.

In a correlational design, the research hypothesis is that there is an association (i.e., a correlation) between the variables that are being measured. For instance, many researchers have tested the research hypothesis that a positive correlation exists between the use of violent video games and the incidence of aggressive behavior, such that people who play violent video games more frequently would also display more aggressive behavior.


A statistic known as the Pearson correlation coefficient (symbolized by the letter r) is normally used to summarize the association, or correlation, between two variables. The correlation coefficient can range from −1 (indicating a very strong negative relationship between the variables) to +1 (indicating a very strong positive relationship between the variables). Research has found that there is a positive correlation between the use of violent video games and the incidence of aggressive behavior and that the size of the correlation is about r = .30 (Bushman & Huesmann, 2010).
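
As a minimal sketch of what computing r looks like in practice, here is the calculation in Python with NumPy; the ten data points are invented and stand in for real measured scores.

    import numpy as np

    # Hypothetical measures for ten participants.
    hours_violent_games = [0, 1, 1, 2, 3, 4, 5, 6, 8, 10]
    aggression_score = [2, 1, 3, 3, 4, 3, 5, 6, 5, 7]

    # np.corrcoef returns the 2x2 correlation matrix; entry [0, 1] is r.
    r = np.corrcoef(hours_violent_games, aggression_score)[0, 1]
    print(f"r = {r:.2f}")  # a positive value between -1 and +1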

One advantage of correlational research designs is that, like observational research (and in comparison with experimental research designs in which the researcher frequently creates relatively artificial situations in a laboratory setting), they are often used to study people doing the things that they do every day. And correlational research designs also have the advantage of allowing prediction. When two or more variables are correlated, we can use our knowledge of a person’s score on one of the variables to predict his or her likely score on another variable. Because high-school grade point averages are correlated with college grade point averages, if we know a person’s high-school grade point average, we can predict his or her likely college grade point average. Similarly, if we know how many violent video games a child plays, we can predict how aggressively he or she will behave. These predictions will not be perfect, but they will allow us to make a better guess than we would have been able to if we had not known the person’s score on the first variable ahead of time.

Despite their advantages, correlational designs have a very important limitation. This limitation is that they cannot be used to draw conclusions about the causal relationships among the variables that have been measured. An observed correlation between two variables does not necessarily indicate that either one of the variables caused the other. Although many studies have found a correlation between the number of violent video games that people play and the amount of aggressive behaviors they engage in, this does not mean that viewing the video games necessarily caused the aggression. Although one possibility is that playing violent games increases aggression,

playing violent video games → aggressive behavior

another possibility is that the causal direction is exactly opposite to what has been hypothesized. Perhaps increased aggressiveness causes more interest in, and thus increased viewing of, violent games. Although this causal relationship might not seem as logical to you, there is no way to rule out the possibility of such reverse causation on the basis of the observed correlation.

aggressive behavior → playing violent video games

Still another possible explanation for the observed correlation is that it has been produced by the presence of another variable that was not measured in the research. Common-causal variables (also known as third variables) are variables that are not part of the research hypothesis but that cause both the predictor and the outcome variable and thus produce the observed correlation between them (Figure 1.6 “Correlation and Causality”). It has been observed that students who sit in the front of a large class get better grades than those who sit in the back of the class. Although this could be because sitting in the front causes the student to take better notes or to understand the material better, the relationship could also be due to a common-causal variable, such as the interest or motivation of the students to do well in the class. Because a student’s interest in the class leads him or her to both get better grades and sit nearer to the teacher, seating position and class grade are correlated, even though neither one caused the other.

Figure 1.6 Correlation and Causality


The correlation between where we sit in a large class and our grade in the class is likely caused by the influence of one or more common-causal variables.

The possibility of common-causal variables must always be taken into account when considering correlational research designs. For instance, in a study that finds a correlation between playing violent video games and aggression, it is possible that a common-causal variable is producing the relationship. Some possibilities include the family background, diet, and hormone levels of the children. Any or all of these potential common-causal variables might be creating the observed correlation between playing violent video games and aggression. Higher levels of the male sex hormone testosterone, for instance, may cause children to both play more violent video games and behave more aggressively.

I like to think of common-causal variables in correlational research designs as “mystery” variables, since their presence and identity are usually unknown to the researcher because they have not been measured. Because it is not possible to measure every variable that could possibly cause both variables, it is always possible that there is an unknown common-causal variable. For this reason, we are left with the basic limitation of correlational research: correlation does not imply causation.

Experimental Research

The goal of much research in social psychology is to understand the causal relationships among variables, and for this we use experiments. Experimental research designs are research designs that include the manipulation of a given situation or experience for two or more groups of individuals who are initially created to be equivalent, followed by a measurement of the effect of that experience.

In an experimental research design, the variables of interest are called the independent variables and the dependent variables. The independent variable refers to the situation that is created by the experimenter through the experimental manipulations, and the dependent variable refers to the variable that is measured after the manipulations have occurred. In an experimental research design, the research hypothesis is that the manipulated independent variable (or variables) causes changes in the measured dependent variable (or variables). We can diagram the prediction like this, using an arrow that points in one direction to demonstrate the expected direction of causality:

viewing violence (independent variable) → aggressive behavior (dependent variable)

Consider an experiment conducted by Anderson and Dill (2000), which was designed to directly test the hypothesis that viewing violent video games would cause increased aggressive behavior. In this research, male and female undergraduates from Iowa State University were given a chance to play either a violent video game (Wolfenstein 3D) or a nonviolent video game (Myst). During the experimental session, the participants played the video game that they had been given for 15 minutes. Then, after playing, they participated in a competitive task with another student in which they had a chance to deliver blasts of white noise through the earphones of their opponent. The operational definition of the dependent variable (aggressive behavior) was the level and duration of noise delivered to the opponent. The design and the results of the experiment are shown in Figure 1.7 “An Experimental Research Design (After Anderson & Dill, 2000)”.

Figure 1.7 An Experimental Research Design (After Anderson & Dill, 2000)


Two advantages of the experimental research design are (a) an assurance that the independent variable (also known as the experimental manipulation) occurs prior to the measured dependent variable and (b) the creation of initial equivalence between the conditions of the experiment (in this case, by using random assignment to conditions).

Experimental designs have two very nice features. For one, they guarantee that the independent variable occurs prior to measuring the dependent variable. This eliminates the possibility of reverse causation. Second, the experimental manipulation allows ruling out the possibility of common-causal variables that cause both the independent variable and the dependent variable. In experimental designs, the influence of common-causal variables is controlled, and thus eliminated, by creating equivalence among the participants in each of the experimental conditions before the manipulation occurs.

The most common method of creating equivalence among the experimental conditions is through random assignment to conditions, which involves determining separately for each participant which condition he or she will experience through a random process, such as drawing numbers out of an envelope or using a website such as http://randomizer.org. Anderson and Dill first randomly assigned about 100 participants to each of their two groups. Let’s call them Group A and Group B. Because they used random assignment to conditions, they could be confident that before the experimental manipulation occurred, the students in Group A were, on average, equivalent to the students in Group B on every possible variable, including variables that are likely to be related to aggression, such as family, peers, hormone levels, and diet—and, in fact, everything else.
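
Random assignment is simple enough to sketch in a few lines of Python; the participant IDs and group sizes below are placeholders rather than details from Anderson and Dill's actual procedure.

    import random

    random.seed(7)  # reproducible example

    participants = list(range(200))  # hypothetical participant IDs
    random.shuffle(participants)

    # Each participant's condition is determined by chance alone,
    # so the two groups are equivalent on average before the manipulation.
    group_a = participants[:100]  # e.g., violent video game condition
    group_b = participants[100:]  # e.g., nonviolent video game condition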

Then, after they had created initial equivalence, Anderson and Dill created the experimental manipulation—they had the participants in Group A play the violent video game and the participants in Group B the nonviolent video game. Then they compared the dependent variable (the white noise blasts) between the two groups and found that the students who had viewed the violent video game gave significantly longer noise blasts than did the students who had played the nonviolent game. Because they had created initial equivalence between the groups, when the researchers observed differences in the duration of white noise blasts between the two groups after the experimental manipulation, they could draw the conclusion that it was the independent variable (and not some other variable) that caused these differences. The idea is that the only thing that was different between the students in the two groups was which video game they had played.

When we create a situation in which the groups of participants are expected to be equivalent before the experiment begins, when we manipulate the independent variable before we measure the dependent variable, and when we change only the nature of the independent variable between the conditions, then we can be confident that it is the independent variable that caused the differences in the dependent variable. Such experiments are said to have high internal validity, where internal validity refers to the confidence with which we can draw conclusions about the causal relationship between the variables.

Despite the advantage of determining causation, experimental research designs do have limitations. One is that the experiments are usually conducted in laboratory situations rather than in the everyday lives of people. Therefore, we do not know whether results that we find in a laboratory setting will necessarily hold up in everyday life. To counter this, in some cases experiments are conducted in everyday settings—for instance, in schools or other organizations. Such field experiments are difficult to conduct because they require a means of creating random assignment to conditions, and this is frequently not possible in natural settings.

A second and perhaps more important limitation of experimental research designs is that some of the most interesting and important social variables cannot be experimentally manipulated. If we want to study the influence of the size of a mob on the destructiveness of its behavior, or to compare the personality characteristics of people who join suicide cults with those of people who do not join suicide cults, these relationships must be assessed using correlational designs because it is simply not possible to manipulate mob size or cult membership.

Factorial Research Designs

Social psychological experiments are frequently designed to simultaneously study the effects of more than one independent variable on a dependent variable. Factorial research designs are experimental designs that have two or more independent variables. By using a factorial design, the scientist can study the influence of each variable on the dependent variable (known as the main effects of the variables) as well as how the variables work together to influence the dependent variable (known as the interaction between the variables). Factorial designs sometimes demonstrate a person-by-situation interaction.

In one such study, Brian Meier and his colleagues (Meier, Robinson, & Wilkowski, 2006) tested the hypothesis that exposure to aggression-related words would increase aggressive responses toward others. Although they did not directly manipulate the social context, they used a technique common in social psychology in which they primed (i.e., activated) thoughts relating to social settings. In their research, half of their participants were randomly assigned to see words relating to aggression and the other half were assigned to view neutral words that did not relate to aggression. The participants in the study also completed a measure of individual differences in agreeableness—a personality variable that assesses the extent to which people see themselves as compassionate, cooperative, and high on other-concern.

Then the research participants completed a task in which they thought they were competing with another student. Participants were told that they should press the space bar on the computer as soon as they heard a tone over their headphones, and the person who pressed the button the fastest would be the winner of the trial. Before the first trial, participants set the intensity of a blast of white noise that would be delivered to the loser of the trial. The participants could choose an intensity ranging from 0 (no noise) to the most aggressive response (10, or 105 decibels). In essence, participants controlled a “weapon” that could be used to blast the opponent with aversive noise, and this setting became the dependent variable. At this point, the experiment ended.

Figure 1.8 A Person-Situation Interaction

In this experiment by Meier, Robinson, and Wilkowski (2006) the independent variables are type of priming (aggression or neutral) and participant agreeableness (high or low). The dependent variable is the white noise level selected (a measure of aggression). The participants who were low in agreeableness became significantly more aggressive after seeing aggressive words, but those high in agreeableness did not.

As you can see in Figure 1.8 “A Person-Situation Interaction”, there was a person by situation interaction. Priming with aggression-related words (the situational variable) increased the noise levels selected by participants who were low on agreeableness, but priming did not increase aggression (in fact, it decreased it a bit) for students who were high on agreeableness. In this study, the social situation was important in creating aggression, but it had different effects for different people.

Deception in Social Psychology Experiments

You may have wondered whether the participants in the video game study and the task performance study that we just discussed were told about the research hypothesis ahead of time. In fact, these experiments both used a cover story—a false statement of what the research was really about. The students in the video game study were not told that the study was about the effects of violent video games on aggression, but rather that it was an investigation of how people learn and develop skills at motor tasks like video games and how these skills affect other tasks, such as competitive games. The participants in the task performance study were not told that the research was about task performance. In some experiments, the researcher also makes use of an experimental confederate—a person who is actually part of the experimental team but who pretends to be another participant in the study. The confederate helps create the right “feel” of the study, making the cover story seem more real.

In many cases, it is not possible in social psychology experiments to tell the research participants about the real hypotheses in the study, and so cover stories or other types of deception may be used. You can imagine, for instance, that if a researcher wanted to study racial prejudice, he or she could not simply tell the participants that this was the topic of the research because people may not want to admit that they are prejudiced, even if they really are. Although the participants are always told—through the process of informed consent—as much as is possible about the study before the study begins, they may nevertheless sometimes be deceived to some extent. At the end of every research project, however, participants should always receive a complete debriefing in which all relevant information is given, including the real hypothesis, the nature of any deception used, and how the data are going to be used.

Interpreting Research

No matter how carefully it is conducted or what type of design is used, all research has limitations. Any given research project is conducted in only one setting and assesses only one or a few dependent variables. And any one study uses only one set of research participants. Social psychology research is sometimes criticized because it frequently uses college students from Western cultures as participants (Henrich, Heine, & Norenzayan, 2010). But relationships between variables are only really important if they can be expected to be found again when tested using other research designs, other operational definitions of the variables, other participants, and other experimenters, and in other times and settings.

External validity refers to the extent to which relationships can be expected to hold up when they are tested again in different ways and for different people. Science relies primarily upon replication—that is, the repeating of research—to study the external validity of research findings. Sometimes the original research is replicated exactly, but more often, replications involve using new operational definitions of the independent or dependent variables, or designs in which new conditions or variables are added to the original design. And to test whether a finding is limited to the particular participants used in a given research project, scientists may test the same hypotheses using people of different ages, backgrounds, or cultures. Replication allows scientists to test the external validity as well as the limitations of research findings.

In some cases, researchers may test their hypotheses, not by conducting their own study, but rather by looking at the results of many existing studies, using a meta-analysis—a statistical procedure in which the results of existing studies are combined to determine what conclusions can be drawn on the basis of all the studies considered together. For instance, in one meta-analysis, Anderson and Bushman (2001) found that across all the studies they could locate that included both children and adults, college students and people who were not in college, and people from a variety of different cultures, there was a clear positive correlation (about r = .30) between playing violent video games and acting aggressively. The summary information gained through a meta-analysis allows researchers to draw even clearer conclusions about the external validity of a research finding.
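
For readers curious about the arithmetic, here is a minimal sketch of one common way to combine correlations across studies: the Fisher r-to-z approach with sample-size weights. The study values here are hypothetical, and this is not necessarily the specific procedure Anderson and Bushman (2001) used.

```python
import math

# Hypothetical (correlation, sample size) pairs -- not data from real studies.
studies = [(0.25, 120), (0.35, 80), (0.28, 200), (0.33, 60)]

# Fisher's r-to-z transformation makes correlations roughly normally
# distributed, so they can be averaged with weights based on sample size.
weighted_sum = sum((n - 3) * math.atanh(r) for r, n in studies)
total_weight = sum(n - 3 for _, n in studies)
combined_z = weighted_sum / total_weight

combined_r = math.tanh(combined_z)  # transform back to the correlation scale
print(f"combined r = {combined_r:.2f}")  # about .30 for these invented values
```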

Figure 1.9 Some Important Aspects of the Scientific Approach

Scientists generate research hypotheses, which are tested using an observational, correlational, or experimental research design. The variables of interest are measured using self-report or behavioral measures. Data is interpreted according to its validity (including internal validity and external validity). The results of many studies may be combined and summarized using meta-analysis.

It is important to realize that the understanding of social behavior that we gain by conducting research is a slow, gradual, and cumulative process. The research findings of one scientist or one experiment do not stand alone—no one study “proves” a theory or a research hypothesis. Rather, research is designed to build on, add to, and expand the existing research that has been conducted by other scientists. That is why whenever a scientist decides to conduct research, he or she first reads journal articles and book chapters describing existing research in the domain and then designs his or her research on the basis of the prior findings. The result of this cumulative process is that over time, research findings are used to create a systematic set of knowledge about social psychology ( Figure 1.9 “Some Important Aspects of the Scientific Approach” ).

Key Takeaways

  • Social psychologists study social behavior using an empirical approach. This allows them to discover results that could not have been reliably predicted ahead of time and that may violate our common sense and intuition.
  • The variables that form the research hypothesis, known as conceptual variables, are assessed using measured variables by using, for instance, self-report, behavioral, or neuroimaging measures.
  • Observational research is research that involves making observations of behavior and recording those observations in an objective manner. In some cases, it may be the only approach to studying behavior.
  • Correlational and experimental research designs are based on developing falsifiable research hypotheses.
  • Correlational research designs allow prediction but cannot be used to make statements about causality. Experimental research designs in which the independent variable is manipulated can be used to make statements about causality.
  • Social psychological experiments are frequently factorial research designs in which the effects of more than one independent variable on a dependent variable are studied.
  • All research has limitations, which is why scientists attempt to replicate their results using different measures, populations, and settings and to summarize those results using meta-analyses.

Exercises and Critical Thinking

1. Find journal articles that report observational, correlational, and experimental research designs. Specify the research design, the research hypothesis, and the conceptual and measured variables in each design.

2. Consider each of the following variables. For each one, (a) propose a research hypothesis in which the variable serves as an independent variable and (b) propose a research hypothesis in which the variable serves as a dependent variable.

  • Liking another person
  • Life satisfaction

Anderson, C. A., & Bushman, B. J. (2001). Effects of violent video games on aggressive behavior, aggressive cognition, aggressive affect, physiological arousal, and prosocial behavior: A meta-analytic review of the scientific literature. Psychological Science, 12 (5), 353–359.

Anderson, C. A., & Dill, K. E. (2000). Video games and aggressive thoughts, feelings, and behavior in the laboratory and in life. Journal of Personality and Social Psychology, 78 (4), 772–790.

Bushman, B. J., & Huesmann, L. R. (2010). Aggression. In S. T. Fiske, D. T. Gilbert, & G. Lindzey (Eds.), Handbook of social psychology (5th ed., Vol. 2, pp. 833–863). Hoboken, NJ: John Wiley & Sons.

Eisenberger, N. I., Lieberman, M. D., & Williams, K. D. (2003). Does rejection hurt? An fMRI study of social exclusion. Science, 302 (5643), 290–292.

Festinger, L., Riecken, H. W., & Schachter, S. (1956). When prophecy fails: A social and psychological study of a modern group that predicted the destruction of the world . Minneapolis, MN: University of Minnesota Press.

Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293 (5537), 2105–2108.

Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33 (2–3), 61–83.

Lieberman, M. D., Hariri, A., Jarcho, J. M., Eisenberger, N. I., & Bookheimer, S. Y. (2005). An fMRI investigation of race-related amygdala activity in African-American and Caucasian-American individuals. Nature Neuroscience, 8 (6), 720–722.

Lilienfeld, S. O. (2011, June 13). Public skepticism of psychology: Why many people perceive the study of human behavior as unscientific. American Psychologist. doi: 10.1037/a0023963

Meier, B. P., Robinson, M. D., & Wilkowski, B. M. (2006). Turning the other cheek: Agreeableness and the regulation of aggression-related primes. Psychological Science, 17 (2), 136–142.

Morewedge, C. K., Gray, K., & Wegner, D. M. (2010). Perish the forethought: Premeditation engenders misperceptions of personal control. In R. R. Hassin, K. N. Ochsner, & Y. Trope (Eds.), Self-control in society, mind, and brain (pp. 260–278). New York, NY: Oxford University Press.

Ochsner, K. N., Bunge, S. A., Gross, J. J., & Gabrieli, J. D. E. (2002). Rethinking feelings: An fMRI study of the cognitive regulation of emotion. Journal of Cognitive Neuroscience, 14 (8), 1215–1229.

Preston, J., & Wegner, D. M. (2007). The eureka error: Inadvertent plagiarism by misattributions of effort. Journal of Personality and Social Psychology, 92 (4), 575–584.

Richeson, J. A., Baird, A. A., Gordon, H. L., Heatherton, T. F., Wyland, C. L., Trawalter, S., & Shelton, J. N. (2003). An fMRI investigation of the impact of interracial contact on executive function. Nature Neuroscience, 6 (12), 1323–1328.

Principles of Social Psychology Copyright © 2015 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Ethical Considerations In Psychology Research

Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD, is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


Ethics refers to the correct rules of conduct necessary when carrying out research. We have a moral responsibility to protect research participants from harm.

However important the issue under investigation, psychologists must remember that they have a duty to respect the rights and dignity of research participants. This means that they must abide by certain moral principles and rules of conduct.

What are Ethical Guidelines?

In Britain, ethical guidelines for research are published by the British Psychological Society, and in America, by the American Psychological Association. The purpose of these codes of conduct is to protect research participants, the reputation of psychology, and psychologists themselves.

Moral issues rarely yield a simple, unambiguous, right or wrong answer. It is, therefore, often a matter of judgment whether the research is justified or not.

For example, it might be that a study causes psychological or physical discomfort to participants; maybe they suffer pain or perhaps even come to serious harm.

On the other hand, the investigation could lead to discoveries that benefit the participants themselves or even have the potential to increase the sum of human happiness.

Rosenthal and Rosnow (1984) also discuss the potential costs of failing to carry out certain research. Who is to weigh up these costs and benefits? Who is to judge whether the ends justify the means?

Finally, if you are ever in doubt as to whether research is ethical or not, it is worthwhile remembering that if there is a conflict of interest between the participants and the researcher, it is the interests of the subjects that should take priority.

Studies must now undergo an extensive review by an institutional review board (US) or ethics committee (UK) before they are implemented. All UK research requires ethical approval by one or more of the following:

  • Department Ethics Committee (DEC): for most routine research.
  • Institutional Ethics Committee (IEC): for non-routine research.
  • External Ethics Committee (EEC): for research that is externally regulated (e.g., NHS research).

Committees review proposals to assess if the potential benefits of the research are justifiable in light of the possible risk of physical or psychological harm.

These committees may request researchers make changes to the study’s design or procedure or, in extreme cases, deny approval of the study altogether.

The British Psychological Society (BPS) and American Psychological Association (APA) have issued a code of ethics in psychology that provides guidelines for conducting research.  Some of the more important ethical issues are as follows:

Informed Consent

Before the study begins, the researcher must outline to the participants what the research is about and then ask for their consent (i.e., permission) to participate.

An adult (18 years or older) who is capable of making the decision can provide consent to participate in a study. Parents/legal guardians of minors can also provide consent to allow their children to participate in a study.

Whenever possible, investigators should obtain the consent of participants. In practice, this means it is not sufficient to get potential participants to say “Yes.”

They also need to know what it is that they agree to. In other words, the psychologist should, so far as is practicable, explain what is involved in advance and obtain the informed consent of participants.

Informed consent must be informed, voluntary, and rational. Participants must be given relevant details to make an informed decision, including the purpose, procedures, risks, and benefits. Consent must be given voluntarily without undue coercion. And participants must have the capacity to rationally weigh the decision.

Components of informed consent include clearly explaining the risks and expected benefits, addressing potential therapeutic misconceptions about experimental treatments, allowing participants to ask questions, and describing methods to minimize risks like emotional distress.

Investigators should tailor the consent language and process appropriately for the study population. Obtaining meaningful informed consent is an ethical imperative for human subjects research.

The voluntary nature of participation should not be compromised through coercion or undue influence. Inducements should be fair and not excessive/inappropriate.

However, it is not always possible to gain informed consent.  Where the researcher can’t ask the actual participants, a similar group of people can be asked how they would feel about participating.

If they think it would be OK, then it can be assumed that the real participants will also find it acceptable. This is known as presumptive consent.

However, a problem with this method is that there might be a mismatch between how people think they would feel/behave and how they actually feel and behave during a study.

In order for consent to be ‘informed,’ consent forms may need to be accompanied by an information sheet for participants, setting out information about the proposed study (in lay terms), along with details about the investigators and how they can be contacted.

Special considerations exist when obtaining consent from vulnerable populations with decisional impairments, such as psychiatric patients, intellectually disabled persons, and children/adolescents. Capacity can vary widely so should be assessed individually, but interventions to improve comprehension may help. Legally authorized representatives usually must provide consent for children.

Participants must be given information relating to the following:

  • A statement that participation is voluntary and that refusal to participate will not result in any consequences or any loss of benefits that the person is otherwise entitled to receive.
  • Purpose of the research.
  • All foreseeable risks and discomforts to the participant (if there are any). These include not only physical injury but also possible psychological harm.
  • Procedures involved in the research.
  • Benefits of the research to society and possibly to the individual human subject.
  • Length of time the subject is expected to participate.
  • Person to contact for answers to questions or in the event of injury or emergency.
  • Subjects’ right to confidentiality and the right to withdraw from the study at any time without any consequences.

Debriefing

Debriefing after a study involves informing participants about the purpose, providing an opportunity to ask questions, and addressing any harm from participation. Debriefing serves an educational function and allows researchers to correct misconceptions. It is an ethical imperative.

After the research is over, the participant should be able to discuss the procedure and the findings with the psychologist. They must be given a general idea of what the researcher was investigating and why, and their part in the research should be explained.

Participants must be told if they have been deceived and given reasons why. They must be asked if they have any questions, which should be answered honestly and as fully as possible.

Debriefing should occur as soon as possible and be as full as possible; experimenters should take reasonable steps to ensure that participants understand debriefing.

“The purpose of debriefing is to remove any misconceptions and anxieties that the participants have about the research and to leave them with a sense of dignity, knowledge, and a perception of time not wasted” (Harris, 1988).

The debriefing aims to provide information and help the participant leave the experimental situation in a similar frame of mind as when he/she entered it (Aronson, 1988).

Exceptions may exist if debriefing seriously compromises study validity or causes harm itself, like negative emotions in children. Consultation with an institutional review board guides exceptions.

Debriefing indicates investigators’ commitment to participant welfare. Harms may not be raised in the debriefing itself, so responsibility continues after data collection. Following up demonstrates respect and protects persons in human subjects research.

Protection of Participants

Researchers must ensure that those participating in research will not be caused distress. They must be protected from physical and mental harm. This means you must not embarrass, frighten, offend or harm participants.

Normally, the risk of harm must be no greater than in ordinary life, i.e., participants should not be exposed to risks greater than or additional to those encountered in their normal lifestyles.

The researcher must also ensure that if vulnerable groups are to be used (elderly, disabled, children, etc.), they must receive special care. For example, if studying children, ensure their participation is brief as they get tired easily and have a limited attention span.

Researchers are not always accurately able to predict the risks of taking part in a study, and in some cases, a therapeutic debriefing may be necessary if participants have become disturbed during the research (as happened to some participants in Zimbardo’s prisoners/guards study).

Deception

Deception research involves purposely misleading participants or withholding information that could influence their participation decision. This method is controversial because it limits informed consent and autonomy, but it can provide valuable knowledge that is otherwise unobtainable.

Types of deception include (i) deliberate misleading, e.g. using confederates, staged manipulations in field settings, deceptive instructions; (ii) deception by omission, e.g., failure to disclose full information about the study, or creating ambiguity.

The researcher should avoid deceiving participants about the nature of the research unless there is no alternative – and even then, this would need to be judged acceptable by an independent expert. However, some types of research cannot be carried out without at least some element of deception.

For example, in Milgram’s study of obedience , the participants thought they were giving electric shocks to a learner when they answered a question wrongly. In reality, no shocks were given, and the learners were confederates of Milgram.

This is sometimes necessary to avoid demand characteristics (i.e., the clues in an experiment that lead participants to think they know what the researcher is looking for).

Another common example is when a stooge or confederate of the experimenter is used (this was the case in both the experiments carried out by Asch ).

According to ethics codes, deception must have strong scientific justification, and non-deceptive alternatives should not be feasible. Deception that causes significant harm is prohibited. Investigators should carefully weigh whether deception is necessary and ethical for their research.

However, participants must be deceived as little as possible, and any deception must not cause distress. Researchers can determine whether participants are likely to be distressed when deception is disclosed by consulting culturally relevant groups.

Participants should immediately be informed of the deception without compromising the study’s integrity. Reactions to learning of deception can range from understanding to anger. Debriefing should explain the scientific rationale and social benefits to minimize negative reactions.

If the participant is likely to object or be distressed once they discover the true nature of the research at debriefing, then the study is unacceptable.

If you have gained participants’ informed consent by deception, then they will have agreed to take part without actually knowing what they were consenting to.  The true nature of the research should be revealed at the earliest possible opportunity or at least during debriefing.

Some researchers argue that deception can never be justified and object to this practice as it (i) violates an individual’s right to choose to participate; (ii) is a questionable basis on which to build a discipline; and (iii) leads to distrust of psychology in the community.

Confidentiality

Protecting participant confidentiality is an ethical imperative that demonstrates respect, ensures honest participation, and prevents harms like embarrassment or legal issues. Methods like data encryption, coding systems, and secure storage should match the research methodology.

Participants and the data gained from them must be kept anonymous unless they give their full consent. No names must be used in a lab report.

Researchers must clearly describe to participants the limits of confidentiality and methods to protect privacy. With internet research, threats exist like third-party data access; security measures like encryption should be explained. For non-internet research, other protections should be noted too, like coding systems and restricted data access.

High-profile data breaches have eroded public trust. Methods that minimize identifiable information can further guard confidentiality. For example, researchers can consider whether birthdates are necessary or just ages.

Generally, reducing personal details collected and limiting accessibility safeguards participants. Following strong confidentiality protections demonstrates respect for persons in human subjects research.
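
As one illustration of what a coding system can look like in practice, here is a minimal sketch (all names are invented): each participant receives a random code, the data file stores only codes and minimized details such as ages, and the name-to-code key is kept in a separate, access-restricted location.

```python
import secrets

def assign_codes(names):
    """Map each participant's name to a random code carrying no personal information."""
    return {name: "ID-" + secrets.token_hex(4) for name in names}

# Hypothetical participants; the resulting key must be stored separately
# from the research data, with access restricted to the research team.
key = assign_codes(["Alice Smith", "Bob Jones"])

# The data file itself holds only the codes and minimized details
# (e.g., ages rather than full birthdates).
data = [{"participant": code, "age": age}
        for code, age in zip(key.values(), [24, 31])]
print(data)
```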

What do we do if we discover something that should be disclosed (e.g., a criminal act)? Researchers have no legal obligation to disclose criminal acts and must determine the most important consideration: their duty to the participant vs. their duty to the wider community.

Ultimately, decisions to disclose information must be set in the context of the research aims.

Withdrawal from an Investigation

Participants should be able to leave a study anytime if they feel uncomfortable. They should also be allowed to withdraw their data. They should be told at the start of the study that they have the right to withdraw.

They should not have pressure placed upon them to continue if they do not want to (a guideline flouted in Milgram’s research).

Participants may feel they shouldn’t withdraw as this may ‘spoil’ the study. Many participants are paid or receive course credits; they may worry they won’t get this if they withdraw.

Even at the end of the study, the participant has a final opportunity to withdraw the data they have provided for the research.

Ethical Issues in Psychology & Socially Sensitive Research

There has been an assumption over the years by many psychologists that there are no ethical concerns with their research provided they follow the BPS or APA guidelines when using human participants: that is, provided participants leave in a similar state of mind to the one in which they arrived, have not been deceived or humiliated, have been given a debrief, and have not had their confidentiality breached.

But consider the following examples:

a) Caughy et al. (1994) found that middle-class children placed in daycare at an early age generally score lower on cognitive tests than children from similar families reared in the home.

Assuming all guidelines were followed, neither the parents nor the children participating would have been unduly affected by this research. Nobody would have been deceived, consent would have been obtained, and no harm would have been caused.

However, consider the wider implications of this study when the results are published, particularly for parents of middle-class infants who are considering placing their young children in daycare or those who recently have!

b) IQ tests administered to black Americans show that they typically score 15 points below the average white score.

When black Americans are given these tests, they presumably complete them willingly and are not harmed as individuals. However, when published, findings of this sort can serve to reinforce racial stereotypes and be used to discriminate against the black population in the job market, etc.

Sieber and Stanley (1988), the main names associated with socially sensitive research (SSR), outline four groups that may be affected by psychological research (it is the first group of people that we are most concerned with):
  • Members of the social group being studied, such as a racial or ethnic group. For example, early research on IQ was used to discriminate against US Blacks.
  • Friends and relatives of those participating in the study, particularly in case studies, where individuals may become famous or infamous. Cases that spring to mind would include Genie’s mother.
  • The research team. There are examples of researchers being intimidated because of the line of research they are in.
  • The institution in which the research is conducted.
Sieber and Stanley also suggest there are four main ethical concerns when conducting SSR:
  • The research question or hypothesis.
  • The treatment of individual participants.
  • The institutional context.
  • How the findings of the research are interpreted and applied.

Ethical Guidelines For Carrying Out SSR

Sieber and Stanley suggest the following ethical guidelines for carrying out SSR. There is some overlap between these and research on human participants in general.

Privacy: This refers to people rather than data. Asking people questions of a personal nature (e.g., about sexuality) could offend.

Confidentiality: This refers to data. Information (e.g., about H.I.V. status) leaked to others may affect the participant’s life.

Sound & valid methodology: This is even more vital when the research topic is socially sensitive. Academics can detect flaws in methods, but the lay public and the media often don’t.

When research findings are publicized, people are likely to consider them fact, and policies may be based on them. Examples are Bowlby’s maternal deprivation studies and intelligence testing.

Deception: Causing the wider public to believe something that isn’t true through the findings you report (e.g., that parents are responsible for how their children turn out).

Informed consent: Participants should be made aware of how participating in the research may affect them.

Justice & equitable treatment: Examples of unjust treatment are (i) publicizing an idea that creates prejudice against a group, and (ii) withholding a treatment you believe is beneficial from some participants so that you can use them as controls.

Scientific freedom: Science should not be censored, but there should be some monitoring of sensitive research. The researcher should weigh their responsibilities against their rights to do the research.

Ownership of data: When research findings could be used to make social policies that affect people’s lives, should they be publicly accessible? Sometimes, a party commissions research with their interests in mind (e.g., an industry, an advertising agency, a political party, or the military).

Some people argue that scientists should be compelled to disclose their results so that other scientists can re-analyze them. If this had happened in Burt’s day, there might not have been such widespread belief in the genetic transmission of intelligence. George Miller (Miller’s Magic 7) famously argued that we should give psychology away.

The values of social scientists: Psychologists can be divided into two main groups: those who advocate a humanistic approach (individuals are important and worthy of study, quality of life is important, intuition is useful) and those advocating a scientific approach (rigorous methodology, objective data).

The researcher’s values may conflict with those of the participant/institution. For example, if someone with a scientific approach was evaluating a counseling technique based on a humanistic approach, they would judge it on criteria that those giving & receiving the therapy may not consider important.

Cost/benefit analysis: It is unethical if the costs outweigh the potential/actual benefits. However, it isn’t easy to assess costs and benefits accurately, and the participants themselves rarely benefit from research.

Sieber & Stanley advise that researchers should not avoid researching socially sensitive issues. Scientists have a responsibility to society to find useful knowledge.

  • They need to take more care over consent, debriefing, etc. when the issue is sensitive.
  • They should be aware of how their findings may be interpreted & used by others.
  • They should make explicit the assumptions underlying their research so that the public can consider whether they agree with these.
  • They should make the limitations of their research explicit (e.g., ‘the study was only carried out on white middle-class American male students,’ ‘the study is based on questionnaire data, which may be inaccurate,’ etc.).
  • They should be careful how they communicate with the media and policymakers.
  • They should be aware of the balance between their obligations to participants and those to society (e.g. if the participant tells them something which they feel they should tell the police/social services).
  • They should be aware of their own values and biases and those of the participants.

Arguments for SSR

  • Psychologists have devised methods to resolve the issues raised.
  • SSR is the most scrutinized research in psychology. Ethical committees reject more SSR than any other form of research.
  • By gaining a better understanding of issues such as gender, race, and sexuality, we are able to gain greater acceptance and reduce prejudice.
  • SSR has been of benefit to society, for example, research on eyewitness testimony (EWT). This has made us aware that EWT can be flawed and should not be used without corroboration. It has also made us aware that the EWT of children is every bit as reliable as that of adults.
  • Most research is still carried out on white middle-class Americans (about 90% of the research quoted in texts!). SSR is helping to redress the balance and make us more aware of other cultures and outlooks.

Arguments against SSR

  • Flawed research has been used to dictate social policy and put certain groups at a disadvantage.
  • Research has been used to discriminate against groups in society, such as the sterilization of people in the USA between 1910 and 1920 because they were of low intelligence, criminal, or suffered from psychological illness.
  • The guidelines used by psychologists to control SSR lack power and, as a result, are unable to prevent indefensible research from being carried out.

American Psychological Association. (2002). American Psychological Association ethical principles of psychologists and code of conduct. www.apa.org/ethics/code2002.html

Baumrind, D. (1964). Some thoughts on ethics of research: After reading Milgram’s “Behavioral Study of Obedience.” American Psychologist, 19 (6), 421.

Caughy, M. O. B., DiPietro, J. A., & Strobino, D. M. (1994). Day-care participation as a protective factor in the cognitive development of low-income children. Child Development, 65 (2), 457–471.

Harris, B. (1988). Key words: A history of debriefing in social psychology. In J. Morawski (Ed.), The rise of experimentation in American psychology (pp. 188-212). New York: Oxford University Press.

Rosenthal, R., & Rosnow, R. L. (1984). Applying Hamlet’s question to the ethical conduct of research: A conceptual addendum. American Psychologist, 39 (5), 561.

Sieber, J. E., & Stanley, B. (1988). Ethical and professional dimensions of socially sensitive research. American Psychologist, 43 (1), 49.

The British Psychological Society. (2010). Code of Human Research Ethics. www.bps.org.uk/sites/default/files/documents/code_of_human_research_ethics.pdf

Further Information

  • MIT Psychology Ethics Lecture Slides

BPS Documents

  • Code of Ethics and Conduct (2018)
  • Good Practice Guidelines for the Conduct of Psychological Research within the NHS
  • Guidelines for Psychologists Working with Animals
  • Guidelines for ethical practice in psychological research online

APA Documents

APA Ethical Principles of Psychologists and Code of Conduct




What is a Literature Review?

Description

A literature review, also called a review article or review of literature, surveys the existing research on a topic. The term "literature" in this context refers to published research or scholarship in a particular discipline, rather than "fiction" (like American Literature) or an individual work of literature. In general, literature reviews are most common in the sciences and social sciences.

Literature reviews may be written as standalone works, or as part of a scholarly article or research paper. In either case, the purpose of the review is to summarize and synthesize the key scholarly work that has already been done on the topic at hand. The literature review may also include some analysis and interpretation. A literature review is not a summary of every piece of scholarly research on a topic.

Why are literature reviews useful?

Literature reviews can be very helpful for newer researchers or those unfamiliar with a field by synthesizing the existing research on a given topic, providing the reader with connections and relationships among previous scholarship. Reviews can also be useful to veteran researchers by identifying potential gaps in the research or steering future research questions toward unexplored areas. If a literature review is part of a scholarly article, it should include an explanation of how the current article adds to the conversation. (From: https://researchguides.drake.edu/englit/criticism)

How is a literature review different from a research article?

Research articles: "are empirical articles that describe one or several related studies on a specific, quantitative, testable research question....they are typically organized into four text sections: Introduction, Methods, Results, Discussion." Source: https://psych.uw.edu/storage/writing_center/litrev.pdf)

Steps for Writing a Literature Review

1. Identify and define the topic that you will be reviewing.

The topic, which is commonly a research question (or problem) of some kind, needs to be identified and defined as clearly as possible.  You need to have an idea of what you will be reviewing in order to effectively search for references and to write a coherent summary of the research on it.  At this stage it can be helpful to write down a description of the research question, area, or topic that you will be reviewing, as well as to identify any keywords that you will be using to search for relevant research.

2. Conduct a Literature Search

Use a range of keywords to search databases such as PsycINFO and any others that may contain relevant articles. You should focus on peer-reviewed, scholarly articles. In SuperSearch and most databases, you may find it helpful to select the Advanced Search mode and include "literature review" or "review of the literature" in addition to your other search terms; for example: ("video games" AND aggression) AND ("literature review" OR "review of the literature"). Published books may also be helpful, but keep in mind that peer-reviewed articles are widely considered to be the “gold standard” of scientific research. Read through titles and abstracts, select and obtain articles (that is, download, copy, or print them out), and save your searches as needed. Most of the databases you will need are linked to from the Cowles Library Psychology Research guide.

3. Read through the research that you have found and take notes.

Absorb as much information as you can.  Read through the articles and books that you have found, and as you do, take notes.  The notes should include anything that will be helpful in advancing your own thinking about the topic and in helping you write the literature review (such as key points, ideas, or even page numbers that index key information).  Some references may turn out to be more helpful than others; you may notice patterns or striking contrasts between different sources; and some sources may refer to yet other sources of potential interest.  This is often the most time-consuming part of the review process.  However, it is also where you get to learn about the topic in great detail. You may want to use a Citation Manager to help you keep track of the citations you have found. 

4. Organize your notes and thoughts; create an outline.

At this stage, you are close to writing the review itself.  However, it is often helpful to first reflect on all the reading that you have done.  What patterns stand out?  Do the different sources converge on a consensus?  Or not?  What unresolved questions still remain?  You should look over your notes (it may also be helpful to reorganize them), and as you do, to think about how you will present this research in your literature review.  Are you going to summarize or critically evaluate?  Are you going to use a chronological or other type of organizational structure?  It can also be helpful to create an outline of how your literature review will be structured.

5. Write the literature review itself and edit and revise as needed.

The final stage involves writing.  When writing, keep in mind that literature reviews are generally characterized by a  summary style  in which prior research is described sufficiently to explain critical findings but does not include a high level of detail (if readers want to learn about all the specific details of a study, then they can look up the references that you cite and read the original articles themselves).  However, the degree of emphasis that is given to individual studies may vary (more or less detail may be warranted depending on how critical or unique a given study was).   After you have written a first draft, you should read it carefully and then edit and revise as needed.  You may need to repeat this process more than once.  It may be helpful to have another person read through your draft(s) and provide feedback.

6. Incorporate the literature review into your research paper draft. (Note: this step applies only if you are using the literature review to write a research paper. Many times the literature review is an end unto itself.)

After the literature review is complete, you should incorporate it into your research paper (if you are writing the review as one component of a larger paper).  Depending on the stage at which your paper is at, this may involve merging your literature review into a partially complete Introduction section, writing the rest of the paper around the literature review, or other processes.

These steps were taken from: https://psychology.ucsd.edu/undergraduate-program/undergraduate-resources/academic-writing-resources/writing-research-papers/writing-lit-review.html#6.-Incorporate-the-literature-r



Conducting Research in Psychology: Measuring the Weight of Smoke

  • Brett W. Pelham - Georgetown University, USA, Montgomery College, USA
  • Hart Blanton - Texas A&M University, USA

Conducting Research in Psychology: Measuring the Weight of Smoke provides students an engaging introduction to psychological research by employing humor, stories, and hands-on activities. Through its methodology exercises, learners are encouraged to use their intuition to understand research methods and apply basic research principles to novel problems. Authors Brett W. Pelham and Hart Blanton integrate cutting-edge topics, including implicit biases, measurement controversies, online data collection, and new tools for determining the replicability of a set of research findings. The Fifth Edition broadens its coverage of methodologies to reflect the types of research now conducted by psychologists. Two new chapters accommodate the needs of instructors who incorporate student research projects into their courses.


Supplements

The open-access Student Study Site includes author-created, mobile-friendly web quizzes that allow students to independently assess their progress in learning course material.

The password-protected Instructor Resource Site offers an author-created Instructor’s Manual that includes the following for each chapter in the book.

  • An introductory Preface with teaching tips from the authors
  • Multiple-choice test questions with pre-written options as well as the opportunity to edit any question and/or insert personalized questions to assess students’ progress and understanding
  • Chapter summaries to help prepare for lectures and class discussions
  • Suggested answers to Study Questions that appear at the end of each chapter in the textbook
  • Sample course syllabi  for semester and quarter courses
  • Answers to the Hands-On Activities from Appendix 1 of the textbook
  • Answers to Methodology Exercises in Appendix 2 of the textbook
  • Answers to Methodology Problems in Appendix 3 of the textbook

Features of the book and this edition include:

  • Chapter 3 offers a conceptual overview of psychological research methods by spelling out the essential ingredients of good research and providing two simple rubrics for evaluating research.
  • Chapter 4 provides a careful summary of the process of designing and carrying out research.
  • Extensively updated online resources include over 250 new student self-test questions and over 250 new instructor test bank questions written and validated by the authors.
  • Over 100 new references to classic and contemporary papers illuminate the secrets to good research design.
  • A humorous writing style helps make the book highly accessible and enjoyable for readers.
  • A hands-on, commonsense approach to research induces the excitement of actually conducting research through lively examples and stories supplemented with engaging exercises.
  • Bolded and italicized terms throughout the text and in the glossary help readers identify important theoretical and technical terms.
  • Major and minor subheadings help readers organize knowledge and identify themes in each chapter.


Investigator Manual: 5. Conducting Human Participant Research

In consideration of Respect for Persons, as detailed in the Belmont Report, investigators should ensure individuals have the necessary information to make a fully informed decision to participate in a study. Respect also means honoring an individual’s privacy and maintaining confidentiality when appropriate. When determining an appropriate consent process, certain considerations must be taken into account, such as the age of participants, their competency, and their understanding of the English language.

Consent for participation in research is a process that involves an exchange of information and ongoing communication that takes place between an investigator and a potential research participant. An effective informed consent process involves the following elements:

  • Conducting the process in a manner and location that ensures participant privacy.
  • Providing adequate information about the study in a language understandable to the potential participant.
  • Providing adequate opportunities for the potential participant to consider all options.
  • Responding to the potential participant’s questions and/or concerns.
  • Ensuring the potential participant comprehends the information provided.
  • Obtaining the prospective participant’s voluntary agreement to participate.
  • Documenting the consent appropriately.
  • Providing copies of the consent documents to the participants and continuing to provide information as the participant or research requires.

There are some basic elements of informed consent that must be provided so a participant can make a fully informed decision to participate:

  • A statement that the study involves research.
  • An explanation of the purposes of the research.
  • The expected duration of the subject’s participation.
  • A description of the procedures and duration for each.
  • A description of any reasonably foreseeable risks or discomforts to the subject.
  • A description of any benefits to the participant or to others that may reasonably be expected from the research.
  • A disclosure of alternative procedures or courses of treatment, if any, that might be advantageous to the subject.
  • A statement describing the extent, if any, to which confidentiality of records identifying the subject will be maintained.
  • A statement of who, outside the research team, may have access to research data (e.g., University auditors; Institutional Review Board and OHRSP; OHRP; federal agencies; and other parties funding the research, if applicable).
  • Details regarding remuneration made to participants, how payments will be prorated, and when they will be provided to participants.
  • A statement that identifiers might be removed from the identifiable private information or identifiable biospecimens and that, after such removal, the information or biospecimens could be used for future research studies or distributed to another investigator for future research studies without additional informed consent from the subject or the legally authorized representative, if this might be a possibility; or
  • A statement that the subject’s information or biospecimens collected as part of the research, even if identifiers are removed, will not be used or distributed for future research studies.
  • An explanation of whom to contact: for answers to pertinent questions about the research (researcher’s name and phone/address, and that of the faculty advisor if the investigator is a student); regarding research subjects’ rights (OHRSP); and in the event of a research-related injury to the subject.
  • A statement that: participation is voluntary; refusal to participate will involve no penalty or loss of benefits to which the individual is otherwise entitled; and the individual may discontinue participation at any time without penalty or loss of benefits to which the subject is otherwise entitled.
  • An indication that the subject may keep a copy of the consent form.

Timing: Informed consent from the subject and/or their legally authorized representative must be obtained before initiating any research activities or data collection. Consent requirements for screening vary based on different regulations a given research study may fall under.

Ongoing communication: While the initial verbal explanation and dialogue with the subject are critical so that subjects know what they are agreeing to before they consent, ideally the consent process should be an ongoing conversation throughout the study. Throughout the study, make yourself available to answer questions and encourage subjects to ask questions or voice concerns, tell subjects about changes in the study procedures or risks or alternatives, and allow subjects to withdraw from the study for any reason at any time.

Additional approaches: With prior IRB approval, other methods of communicating information about a study may be used to supplement the consent process, or in rarer cases, substitute for a consent document. These approaches include the use of audio-visual materials, brochures, drawings, and information posted on a specific website.

Qualifications of the person obtaining consent: Principal investigators are responsible for assuring that all investigators obtaining consent are qualified and appropriately trained to explain the research and assess participant comprehension as described below. Any person who may obtain consent in a study should be listed in the IRB application as key personnel, though the person need not be listed as an investigator in the consent document itself.

Decision-making capacity: Participants should be able to understand the nature and consequences of the study. If they cannot, a surrogate consent may be required.

Voluntariness: Participants should be free from coercion when deciding to participate. This requires that researchers carefully evaluate, plan, and implement the recruitment, consent documents, and consent process. If the participant indicates “No” through words or body language, further attempts at obtaining consent should not be pursued.

Assent is defined as “a child’s affirmative agreement to participate in research.” Passive resignation to submit to an intervention or procedure is not considered assent. Federal regulations do not specify any of the elements of informed assent and do not provide an age at which assent ought to be possible.

In determining whether children are capable of assenting, the IRB takes into account:

  • The ages, maturity, and psychological state of the children involved;
  • Whether all or some of the children are capable of assenting;
  • If written documentation of assent is required; and,
  • When parental permission is required, whether by one or both parents.

In general, all adults, regardless of their diagnosis or condition, are presumed competent to consent to participate in research unless there is evidence to the contrary. When investigators propose to include individuals with questionable capacity, they must provide a plan for assessing the participants’ decision-making capacity. Assessment is done on an individual basis and should determine the potential participants’ ability to understand and express a reasoned choice based on:

  • The voluntary nature of research participation and the information relevant to their participation (research procedures);
  • Consequences of participation for the participant’s situation, especially about the participant’s health condition;
  • Consequences of the alternatives to participation;
  • Potential risks and benefits involved in the study; and
  • Procedures to follow if the participant experiences discomfort or wishes to withdraw.

If the assessment shows evidence that the participant is competent to consent, you must obtain valid informed consent directly from the participant. If the assessment determines that the potential participant does not have sufficient capacity to consent, you must do the following:

  • Document, in the participant’s research record, that the participant is incapable of understanding the information presented regarding the research;
  • Document how the legally authorized representative was determined in a manner consistent with state law;
  • Document, in the research record, the information provided to the participant’s legally authorized representative regarding the participant’s cognitive and health status, the risks and benefits of the research, and the role of the legally authorized representative;
  • Obtain the consent and signature of the participant’s legally authorized representative;
  • If it is expected that participants will regain their ability to consent, a plan for re-consenting the participants must be implemented.

Any method of obtaining informed consent other than face-to-face consent must allow for an adequate exchange of information and documentation. This method must also ensure that the signer of the consent form is the person who plans to enroll as a subject in the study or is the legally authorized representative of the subject. Research records should document what method was used to conduct the consent process and document that informed consent was obtained before beginning study procedures.

“Digital signatures” may be acceptable forms of documentation of written informed consent. Electronic, computer, or tablet-based consent documents may facilitate record keeping even when an individual is present and could sign a paper form. Digital signatures may be considered for face-to-face and remote consent, but the technologies and processes used must be described in the protocol or application.

For FDA-regulated research, the digital signature platform and process must be 21 CFR Part 11 compliant. In addition, the research team must verify the participant’s identity.

An investigator may request that the IRB waive the requirement to obtain a signed informed consent form for some or all subjects. For the IRB to waive this requirement, at least one of the following criteria must be met (a sketch of this decision rule follows the list):

  • The only record linking the subject and the research would be the informed consent form and the principal risk would be potential harm resulting from a breach of confidentiality. Each subject (or legally authorized representative) will be asked whether the subject wants documentation linking the subject with the research, and the subject’s wishes will govern;
  • The research presents no more than minimal risk of harm to subjects and involves no procedures for which written consent is normally required outside of the research context; or
  • If the subjects or legally authorized representatives are members of a distinct cultural group or community in which signing forms is not the norm, the research presents no more than minimal risk of harm to subjects provided there is an appropriate alternative mechanism for documenting that informed consent was obtained.
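These three criteria are disjunctive: any one of them, on its own, can support a waiver. The minimal Python sketch below mirrors that logical structure; the parameter names are illustrative labels for the regulatory conditions, not official terminology, and the IRB always makes the actual determination.

```python
# Hypothetical sketch of the waiver-of-documentation criteria above.
# Any single criterion being satisfied is enough for the IRB to
# *consider* a waiver; this code only mirrors the logical structure.

def documentation_waiver_possible(
    consent_form_is_only_link_to_subject: bool,
    principal_risk_is_confidentiality_breach: bool,
    minimal_risk: bool,
    written_consent_unusual_outside_research: bool,
    signing_forms_not_cultural_norm: bool,
    alternative_consent_documentation_exists: bool,
) -> bool:
    criterion_1 = (consent_form_is_only_link_to_subject
                   and principal_risk_is_confidentiality_breach)
    criterion_2 = (minimal_risk
                   and written_consent_unusual_outside_research)
    criterion_3 = (signing_forms_not_cultural_norm
                   and minimal_risk
                   and alternative_consent_documentation_exists)
    return criterion_1 or criterion_2 or criterion_3

# Example: a hypothetical anonymous minimal-risk online survey.
print(documentation_waiver_possible(
    consent_form_is_only_link_to_subject=False,
    principal_risk_is_confidentiality_breach=False,
    minimal_risk=True,
    written_consent_unusual_outside_research=True,
    signing_forms_not_cultural_norm=False,
    alternative_consent_documentation_exists=False,
))  # True, via the minimal-risk criterion
```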

Persons with limited English proficiency are individuals who do not speak English as their primary language and/or who have a limited ability to read, speak, write, or understand English. Subjects who have limited English proficiency should be presented with informed consent information in a language understandable to them that includes all the required and additional elements for disclosure.

For consent document translations, the investigator may wish to delay translating until IRB approval is granted for the English version, to avoid extra translation costs. The IRB must have all versions of the research materials (e.g., recruitment materials, informed consent form(s), instruments) in both English and non-English on file. When submitting non-English consent documents and other research material, please complete and submit a Certificate of Translation form.

If the subject/representative has limited English proficiency, you must obtain the services of an interpreter fluent in both English and the language understood by the subject/representative. The interpreter may be a member of the research team, a family member, or a friend of the subject/representative. If the research involves medical care and/or is more than minimal risk, the use of family or friends to interpret is discouraged.

The IRB reviews study recruitment methods (including advertisements and payments) to evaluate whether they will affect the equitable selection of participants and to ensure that the proposed methods adequately protect the rights and welfare of participants. All recruitment material requires IRB approval before implementation.

The protocol application must include a description of the following:

  • the source of subjects for all study groups (intervention/case and control);
  • when, where, how, and by whom these potential subjects will be recruited;
  • the methods employed to identify potential subjects;
  • the materials used to recruit subjects, including the use of email and text messaging; and,
  • the number of times the study team will attempt to contact potential participants (this is study-dependent, and the IRB will assess it accordingly).

All communication methods, and the protections in place to minimize the privacy and confidentiality risks associated with them, must be described to the IRB as part of the review process.

The IRB is responsible for ensuring that any payment or remuneration offered to participants in human subject research is fair and not an undue inducement to participate. Remuneration for participation in research should be reasonable and the amount paid should be comparable to other research projects involving similar time, effort, and inconvenience. Payment amounts should not be large enough to constitute an undue inducement to participate in a risky or uncomfortable procedure. Additional guidelines for specific situations:

Short research studies involving one visit:  Participants may be provided payment contingent upon completion of the study. Participants who are disqualified through no fault of their own must be paid for the time and effort they expended before their termination from the study.

Research studies involving multiple visits or lengthy or repeated participation:  Partial payment should be provided to participants who withdraw, are discharged early from the study by the investigator, or otherwise fail to complete the study as agreed. The amount of partial payment should relate to the amount of time, effort, or discomfort involved. Payment schedules may be designed on a per-day, per-visit, or per-procedure rate, or some combination thereof. The terms for partial payment must be described in the application and the consent form.

Completion bonuses:  Such remuneration may be acceptable to encourage the completion of all study procedures/visits. The amount of such incentives depends on the risk and duration of the study interventions.
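To make the prorating arithmetic concrete, here is a small sketch of a per-visit payment schedule with a completion bonus. All dollar amounts and visit counts are hypothetical examples; actual schedules must be described in the application and consent form.

```python
# Illustrative per-visit payment schedule with a completion bonus.
# Amounts are made up; they are not IRB-set or recommended rates.

PER_VISIT_PAYMENT = 25.00   # paid for each completed visit
COMPLETION_BONUS = 50.00    # paid only if all visits are completed
TOTAL_VISITS = 6

def payment_owed(visits_completed: int) -> float:
    """Total owed to a participant, prorated by visits completed."""
    base = PER_VISIT_PAYMENT * min(visits_completed, TOTAL_VISITS)
    bonus = COMPLETION_BONUS if visits_completed >= TOTAL_VISITS else 0.0
    return base + bonus

print(payment_owed(4))  # 100.0 -- withdrew after 4 of 6 visits, no bonus
print(payment_owed(6))  # 200.0 -- completed the study, bonus included
```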

The regulations identify prisoners and minors as vulnerable populations. Other groups of individuals may be vulnerable based on the nature of the research and data collection methods.

Prisoners: Department of Health & Human Services (DHHS) regulations, Subpart C of Part 46, provide additional protections for biomedical and behavioral research involving prisoners as subjects. These additional provisions of the federal regulations are intended to assure that 1) prisoners provide voluntary consent to participate in research; 2) prisoners’ confidentiality is rigorously protected; and 3) prisoners are not used as subjects in studies for which non-incarcerated subjects are suitable. These provisions apply whether the research involves individuals who are prisoners at the time of enrollment or who become prisoners after they enroll in the research.

DHHS also requires that the IRB have among its members:

  • One or more individuals knowledgeable about and experienced in working with prisoners, when research involving prisoners is to be reviewed;
  • A majority of the Board, exclusive of the prisoner member(s), can have no association with the prison(s) involved apart from their membership on the IRB.

Children: If your research involves children under the age of 18, include sufficient information in your protocol or application to justify the enrollment of children. Consider whether parental permission and assent should be obtained and incorporate this information in your application.

Participants with impaired decision-making: Your protocol or application should specify the method that will be used for assessing capacity to consent. For more information, refer to the Can I obtain consent from a decisionally impaired individual? section of this document.

Other special populations: Examples of vulnerable populations include undocumented individuals; those who struggle with substance use and abuse; non-English speakers; students; employees of the researcher(s) and others with a status relationship with the researcher(s); and active-duty military members.

Additional protections may be required when recruiting vulnerable populations; examples include having someone outside the study team observe the informed consent process, excluding the population if it is not required to achieve study objectives, and obtaining an independent assessment of consent capacity.

Snowball sampling (or chain sampling, chain-referral sampling, referral sampling) is a non-probability sampling technique where existing study subjects recruit or refer future subjects from among their acquaintances.

This recruitment approach may be approved by the IRB with justification for its use and how it relates to the study and subject population. The protocol should address how the risk of violating an individual’s privacy will be minimized and how snowball sampling may impact other study risks. Investigators are to provide this justification in the recruitment section of the protocol application form.
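The privacy concern is easier to see with the chain structure in front of you: each enrollment is reachable only through another subject’s referral, so the sample itself encodes who knows whom. Below is a minimal, purely illustrative simulation of how a snowball sample grows from a few seed subjects; all IDs and numbers are hypothetical.

```python
# Toy simulation of chain-referral (snowball) recruitment. Each enrolled
# subject refers 0-3 acquaintances until the target sample size is reached.
import random

def snowball_sample(seeds, max_n, referrals=(0, 3)):
    """Grow a sample from seed subject IDs via simulated referrals."""
    enrolled = list(seeds)
    wave = list(seeds)          # subjects who may still refer others
    next_id = max(seeds) + 1
    while wave and len(enrolled) < max_n:
        next_wave = []
        for _referrer in wave:
            for _ in range(random.randint(*referrals)):
                if len(enrolled) >= max_n:
                    break
                enrolled.append(next_id)  # referral chain links subjects
                next_wave.append(next_id)
                next_id += 1
        wave = next_wave
    return enrolled

random.seed(0)
sample = snowball_sample(seeds=[1, 2, 3], max_n=30)
print(f"{len(sample)} subjects enrolled from 3 seeds")
```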

You are required to ensure human research includes adequate provisions to protect the privacy of participants and the confidentiality of data, as required by federal regulations.

  • Privacy  refers to a person’s desire to control the access of others to themselves. For example, research participants may not want to be seen entering a place that might stigmatize them, such as a pregnancy counseling center that is identified as such by signs on the front of the building.
  • Confidentiality  refers to the researcher’s agreement with the participant about how the research participant’s identifiable private information will be handled, managed, and disseminated.

For the IRB to assess privacy and confidentiality protections, the protocol should describe how participant privacy will be protected and how data will be kept confidential. The IRB will assess whether the participants’ privacy interests and the confidentiality of data are protected in ways commensurate with the benefits to participants and the risks of everyday life.

Issued by NIH, a Certificate of Confidentiality (CoC) protects the privacy of research participants by prohibiting forced disclosure of their individually identifiable, sensitive research information to anyone not associated with the research, except when the participant consents to such disclosures or in other limited specific situations.

All ongoing or new research that is funded by NIH and that collects or uses identifiable, sensitive information is automatically issued a CoC as a term and condition of the NIH grant award. The Notice of Award and the NIH Grants Policy Statement serve as documentation of the Certificate protection (a separate certificate document is no longer issued). This automatic issuance of CoC protections also applies to research that receives re-distributed NIH funds. Studies that have been issued a CoC are required to inform participants about the CoC protections, and any exceptions to those protections, as part of the informed consent process.

Federally funded studies that meet the NIH definition of a clinical trial and any study (regardless of funding) that meets the FDA definition of an applicable clinical trial must be registered and study consent forms must include the required language.

NIH definition : A research study in which one or more human subjects are prospectively assigned to one or more interventions (which may include placebo or other control) to evaluate the effects of those interventions on health-related biomedical or behavioral outcomes.

FDA definition : Any experiment that involves a test article [1] and one or more human subjects and is subject to requirements for submission to the Food and Drug Administration. Clinical investigations must not be initiated unless that investigation has been reviewed and approved by an IRB.

Information on ClinicalTrials.gov is provided and updated by the sponsor or principal investigator of the clinical trial or a designee. Studies are registered on the website before commencing and updated throughout the study. On occasion, results of the study are submitted after the study ends. This website and database of clinical studies is commonly referred to as a “registry and results database.”

NU-RES can assist PIs with account setup and registration, but it is the PI’s responsibility to ensure their applicable study is registered on ClinicalTrials.gov.

Federal law requires the following exact statement to be included in the informed consent documents of applicable clinical trials:

“A description of this clinical trial will be available on http://www.ClinicalTrials.gov, as required by U.S. Law. This Web site will not include information that can identify you. At most, the Web site will include a summary of the results. You can search this Web site at any time.”

[1] FDA defines a test article as: any food additive, color additive, drug, biological product, electronic product, medical device for human use, or any other article subject to regulation under the act or under sections 351 and 354-360F of the Public Health Service Act.

The European Union has additional requirements regarding data privacy, referred to as the General Data Protection Regulation (GDPR). Where Northeastern is working with personal data collected in, or transferred from, any European Economic Area country, GDPR will be relevant. This includes data collected, obtained, or used for research projects. Failure to follow GDPR when it applies puts the University at risk of noncompliance, monetary fines, and reputational harm, so it is critical to understand and assess whether GDPR applies to your study.

GDPR requires a legal basis to collect and process (e.g., analyze) personal data. To use personal data for research, the legal basis that generally will apply is consent from the data subject. Consent must be  freely given, specific, informed,  and  unambiguous  as to the data subject’s wishes by a  statement  or by  clear affirmative action.
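As a loose illustration only, a study team might log each GDPR consent with the attributes just listed: who consented, to which specific purpose, on the basis of which disclosure, and by what affirmative action. The schema below is entirely hypothetical; GDPR prescribes the properties of valid consent, not a record format.

```python
# Hypothetical consent-record sketch; all field names are illustrative.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ConsentRecord:
    subject_id: str                 # pseudonymous study ID, not a name
    purpose: str                    # the specific processing consented to
    information_sheet_version: str  # the disclosure that made consent "informed"
    affirmative_action: str         # the clear act signaling consent
    timestamp: datetime

record = ConsentRecord(
    subject_id="S-0042",
    purpose="analysis of survey responses for a hypothetical Study X",
    information_sheet_version="v2.1",
    affirmative_action="checked an initially unticked consent box",
    timestamp=datetime.now(timezone.utc),
)
print(record)
```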

  Note that other privacy laws may exist that need to be considered.

When conducting research at international sites, protocols are to comply not only with the regulations and laws of the United States but with those of the international site as well. The two sets of standards may differ as a result of differences in language, culture, social history, and societal norms. Where they conflict, the research must meet the higher standard. While we do not impose our standards for written documentation on other cultures, we do not relax our standards for the ethical conduct of research or for a meaningful consent and/or assent process, including ensuring additional protections for vulnerable participant populations (e.g., children, prisoners).

Researchers are to be cognizant of relevant national policies and take special consideration in cases such as the availability of national health insurance, different philosophical legal systems, and social policies that may make U.S. research forms and procedures inappropriate. In addition to obtaining approval from the Northeastern IRB, investigators are to obtain approval from international IRBs or local ethics committees. The Office for Human Research Protections maintains a current International Compilation of Research Standards. This document provides contact information for biomedical and social/behavioral science IRBs or equivalent ethics committees.

Documentation of permission or approval must be submitted with the IRB application materials. In the absence of these laws or guidance, the researcher is required to obtain approval from the local government or community leaders or provide information as to the absence of the local review. Examples of local reviews may include the following:

  • Ethics committees
  • Drug approval agencies
  • Local ministries
  • Local governance

Understanding the identifiability of data requires knowing what it means for data to be coded, de-identified, or anonymous. Identifiability is defined differently under the Common Rule (45 CFR 46) and the Health Insurance Portability and Accountability Act (HIPAA).

Coded data refers to data that have been stripped of all direct subject identifiers, where each record carries a study ID or code that is linked to identifiable information such as name or medical record number. The linking file must be kept separate from the coded data set. This linking file may be held by someone on the study team (e.g., the PI) or by someone outside of the study team (e.g., a researcher at another institution). A coded data set may include limited identifiers under HIPAA. Of note, the code itself may not contain identifiers such as subject initials or medical record numbers.

De-identified data refers to data that have been stripped of all subject identifiers, including all 18 HIPAA identifiers. This means that there can be no data points that are considered limited identifiers under HIPAA, i.e., geographic area smaller than a state, elements of dates (date of birth, date of death, dates of clinical service), and age over 89. If the data set contains any limited identifiers, it is considered a limited data set under HIPAA. If the data include an indirect link to subject identifiers (e.g., via coded ID numbers), then the data are considered by the IRB to be coded, not de-identified.

Anonymous data are essentially the same as de-identified data: data that have been stripped of all subject identifiers and that have no indirect links to subject identifiers. There should be no limited identifiers in an anonymous data set.
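A small sketch (with fabricated fields) may help fix the distinction: a coded data set keeps a study code whose key lives in a separate linking file, while a de-identified or anonymous set drops the link entirely.

```python
# Coded vs. de-identified data, illustrated with fabricated records.

raw_records = [
    {"name": "Jane Doe", "mrn": "12345", "score": 17},
    {"name": "John Roe", "mrn": "67890", "score": 22},
]

# Coded: direct identifiers replaced by an arbitrary study code; the key
# (linking file) must be stored separately from the coded data set.
linking_file = {}
coded_data = []
for i, rec in enumerate(raw_records, start=1):
    code = f"P{i:03d}"   # contains no initials or MRN digits
    linking_file[code] = {"name": rec["name"], "mrn": rec["mrn"]}
    coded_data.append({"study_id": code, "score": rec["score"]})

# De-identified/anonymous: no study_id, so no indirect link remains.
deidentified_data = [{"score": rec["score"]} for rec in raw_records]

print(coded_data)         # [{'study_id': 'P001', 'score': 17}, ...]
print(deidentified_data)  # [{'score': 17}, {'score': 22}]
```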

Identifiability under the Common Rule

  • The Common Rule defines “individually identifiable” to mean that the identity of the subject is, or may be, readily ascertained by the investigator or associated with the information.
  • A data set may be identifiable under the Common Rule if it contains: initials, address, zip code, phone number, gender, age, birth date, occupation, employer, racial or ethnic group, dates of events, names of individuals related to the participant such as teacher or physician names, and genealogy.
  • Age, ethnicity/race, and gender may be identifiers under the Common Rule if fewer than 5 individuals possess a particular cluster of traits.
  • Data may be identifiable if any combination of variables could potentially identify a subject.
  • Some of the identifiers listed above become less problematic if the sample size is large enough that the potential identifiers could describe several individuals and thus cannot be linked to only one person. Conversely, if the sample size is small, the potential to identify an individual may increase, even in the absence of direct identifiers (a small-cell check along these lines is sketched below).
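One way to operationalize the “fewer than 5” point above is a small-cell check: count how many records share each combination of quasi-identifiers and flag the rare clusters. The sketch below uses age, gender, and ethnicity purely as example variables.

```python
# Flag combinations of quasi-identifiers shared by fewer than 5 records,
# since such small clusters may make individuals identifiable.
from collections import Counter

records = [
    {"age": 34, "gender": "F", "ethnicity": "Hispanic"},
    {"age": 34, "gender": "F", "ethnicity": "Hispanic"},
    {"age": 71, "gender": "M", "ethnicity": "White"},
]

quasi_identifiers = ("age", "gender", "ethnicity")
clusters = Counter(tuple(r[k] for k in quasi_identifiers) for r in records)

for cluster, count in clusters.items():
    if count < 5:
        print("Potentially identifying cluster:",
              dict(zip(quasi_identifiers, cluster)), f"(n={count})")
```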

Identifiability under HIPAA

The HIPAA Privacy Rule specifies 18 identifiers, most of which are demographic. The inclusion of even one of these identifiers makes a data set identifiable. However, there are levels of identifiability. The following are considered limited identifiers under HIPAA: geographic area smaller than a state, elements of dates (date of birth, date of death, dates of clinical service), and age over 89. The remaining HIPAA identifiers are considered direct identifiers. If the data set contains limited identifiers but none of the direct identifiers, it is considered a limited data set under HIPAA.
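The tiering can be summarized as a simple classification rule: any direct identifier makes the set identifiable; otherwise, any limited identifier makes it a limited data set; otherwise it is de-identified. The identifier lists in this sketch are abbreviated examples, not the full set of 18 HIPAA identifiers.

```python
# Classify a data set's columns into HIPAA identifiability tiers.
# Both identifier lists below are abbreviated illustrations.

DIRECT_IDENTIFIERS = {"name", "medical_record_number", "ssn", "email"}
LIMITED_IDENTIFIERS = {"zip_code", "date_of_birth", "date_of_service",
                       "age_over_89"}

def classify_dataset(columns: set) -> str:
    if columns & DIRECT_IDENTIFIERS:
        return "identifiable"
    if columns & LIMITED_IDENTIFIERS:
        return "limited data set"
    return "de-identified"

print(classify_dataset({"name", "score"}))           # identifiable
print(classify_dataset({"date_of_birth", "score"}))  # limited data set
print(classify_dataset({"score"}))                   # de-identified
```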

Data Safety Monitoring Plan (DSMP) or Data Safety Monitoring Board (DSMB)

Any study that presents more than minimal risk to subjects must describe a data and safety monitoring plan. Studies utilizing IRB protocol templates will be prompted to provide this information. Most studies can utilize a DSMP, but some clinical trials or multi-site research may opt to utilize a DSMB instead. Please see NIH Guidance on Data and Safety Monitoring.

In the case of minimal risk research, the IRB may require a DSMP or DSMB if the committee deems that monitoring of the study is needed to ensure the protection of the rights and welfare of subjects or data integrity, which could affect the welfare of future subjects.

Deception: The IRB accepts the need for certain types of studies to employ strategies that include deception. However, the employment of such strategies must be justified. In general, deception is not acceptable if, in the judgment of the IRB, the participant may have declined to participate had they been fully informed of the true purpose of the research.

Radiation: If your study interventions involve the use/administration of radiation, additional state law requirements apply that are outside IRB purview. For information and assistance with these requirements, please refer to the NU Radiation Safety website.

If you are planning on conducting research in K-12 schools, additional requirements will apply to your research. These schools are autonomous institutions that retain the right to approve/reject any human research to be conducted on their site, in their facilities, or with their teachers, staff, or students. The IRB therefore requires documentation from an appropriate authority at each school or district granting permission to conduct human research.

Researchers are responsible for contacting each school or district to obtain this permission and meet the requirements each site may have for conducting human research (e.g., review by its research review committee). If review by a separate committee is required, you will need to plan additional time for this approval process as well as IRB review.

Other important items to keep in mind:

  • Often K-12 school sites will require proof of IRB review before their approval.
  • If teachers/school staff are members of the research team for non-exempt studies, they are to comply with training requirements.
  • Some schools require research personnel to undergo background checks.
  • Many school districts will not allow research activities to take place during normal class time.
  • Many schools place limitations on the use of video or audio recordings in classrooms.
  • Parental consent is required for minors to be included as research subjects.
  • Minor assent is also required before including minors as research subjects.

Additional steps and protections are often needed when conducting research in collaboration with American Indian or Alaska Native (AI/AN) communities and populations. The Common Rule affirms that each tribe may have its own definition of research and its own set of research protections and laws, which may be more restrictive than the Common Rule. Research teams are responsible for ensuring any research on tribal land, using tribal resources, or focusing on individuals from a tribe is conducted in a manner that is respectful of tribal autonomy, sovereignty, and rights. While the IRB does not require approval/permission from the tribe to grant final approval, a researcher must receive appropriate review and approval of their research from the tribe.

The IRB uses the terms recruitment registry, data repository, and biospecimen repository for these various research tools/resources, although terminology can vary widely across institutions. Generally, the creation and maintenance of a research registry or repository requires IRB review and approval.

Recruitment Registry: a tool used to identify and track a group of individuals who have similar characteristics. The characteristics can vary widely (e.g., disease, genetic make-up, health behaviors, surgical procedures), but the registry is intended to track and classify these groups of individuals. The IRB prefers the word “registry” for lists of people along with limited personal and, when applicable, medical information. The primary use of these lists is to provide investigators with pools of potential study volunteers, as in recruitment registries. Recruitment registries generally require IRB approval, both for the creation and maintenance of the registry itself and for future projects that wish to use a registry as a recruitment method.

Data Repository: a tool used to compile a set of individual subject/patient data that will be used for analysis purposes. A data repository generally has data added to it in an ongoing manner and stored long-term. Data in the repository are intended to be distributed to multiple users and subsequently used for ongoing analysis purposes. The IRB prefers the term “data repository” over terms such as “database” and “registry.” If the primary intent of the repository is for use in future research projects, IRB review and approval are required for the creation of the repository and may also be required for the subsequent use of data from it.

Biospecimen/tissue repositories: Also known as a tissue bank, a biospecimen repository is a mechanism for maintaining tissue, blood, and other biological specimens for unspecified future use. These repositories typically involve the collection and long-term storage of tissue, and often corresponding data, to be used primarily for future research projects. Tissue to be stored in the repository can be collected retrospectively, prospectively, or both. Tissue repositories can include tissue collected from other research protocols or clinical procedures. IRB review and approval are required for the banking of biospecimens and may be required for the subsequent use of the specimens from the repository.

The IRB review requirements for research involving decedents vary depending on whether the study involves ONLY decedents or decedents and living individuals.

The Common Rule (45 CFR 46.102(e)(1)) defines a human subject as “a living individual about whom an investigator (whether professional or student) conducting research (1) obtains information or biospecimens through intervention or interaction with the individual, and uses, studies, or analyzes the information or biospecimens; or (2) obtains, uses, studies, analyzes, or generates identifiable private information or identifiable biospecimens.”

The FDA does not explicitly state that its regulations apply only to “living individuals.” The regulations, however, imply that the subject is alive. FDA regulations (21 CFR 56.102(e)) define a “human subject” as “an individual who is or becomes a participant in research, either as a recipient of a test article or as a control.” That section goes on to state that a subject may be “either a healthy individual or a patient.” And, at 21 CFR 812.3(p), “subject” means “a human who participates in an investigation, either as an individual on whom or on whose specimen an investigational device is used or as a control. A subject may be in normal health or may have a medical condition or disease.”

Based on the above, the IRB Office has determined that Common Rule and FDA Regulations do NOT apply to studies ONLY using information or biospecimens from deceased individuals. In these cases, researchers do not need to submit an IRB application for review of decedent ONLY research.

Online research sources such as Facebook, Twitter, blogs, chat rooms, discussion forums, and other social networking sites are treated by the IRB as publicly available data only in a broad sense and with several limitations.

Privacy Statements & Terms of Use: You are responsible for checking the privacy statement and terms of use of any site being used for research purposes. You must adhere to the written policies of any site used for research. The IRB expects that researchers will either: 1) obtain consent to use data from an individual’s social media page or 2) make an appropriate argument as part of the application process as to why the IRB should waive consent for the project.

Publicly available: The IRB does not consider sites that require the user to create an account, and then provide a login and password, to be publicly available data. Therefore, participants must be consented before an investigator can observe or interact with them in these online environments. Members of sites that require a login expect privacy and do not expect that anything they post will be used for research purposes. In some circumstances, researchers can petition the IRB for a waiver of consent; in these situations, researchers will need to provide an appropriate argument/justification as to why a waiver is appropriate.

Data mining: Facebook, Twitter, and others may provide data mining services where their developers will mine data from the site, for a fee, at the researcher’s request. Depending on the scope, the IRB may treat the data differently because data collection would be done by the media site and (likely) provided to the researcher without direct identifiers. The IRB deals with this type of research activity on a case-by-case basis.


Research Specialist

  • Madison, Wisconsin
  • COLLEGE OF LETTERS AND SCIENCE/PSYCHOLOGY-GEN
  • Staff-Full Time
  • Opening at: Jun 3 2024 at 08:30 CDT
  • Closing at: Jun 17 2024 at 23:55 CDT

Job Summary:

Dr. Ashley Jordan has an exciting opportunity for an enthusiastic research specialist in the Development of Intergroup Social Cognition (DISC) Lab at the University of Wisconsin-Madison. This position will entail the following: contributing to the development of research studies; recruiting for, conducting, and managing ongoing research projects; establishing, maintaining, and revising (as needed) lab policies, procedures, and logistics; coordinating with undergraduate research assistants; working with children, adults, and community partners to further research objectives; contributing to the development of research output; and engaging with the broader psychology community at UW-Madison. The candidate must also be well organized and self-motivated, with excellent communication and interpersonal skills.

Responsibilities:

  • 20% Conducts research experiments according to established research protocols with moderate impact to the project(s). Collects data and monitors test results
  • 10% Operates, cleans, and maintains organization of research equipment and research area. Tracks inventory levels and places replenishment orders
  • 10% Reviews, analyzes, and interprets data and/or documents results for presentations and/or reporting to internal and external audiences
  • 10% Participates in the development, interpretation, and implementation of research methodology and materials
  • 20% Provides operational guidance on day-to-day activities of unit or program staff and/or student workers
  • 10% Performs literature reviews and writes reports
  • 10% Recruits participants for in-lab, offsite (e.g., at preschools and museums), and online (e.g., via Zoom) studies
  • 10% Onboards and trains undergraduate students and serves as central point of contact for lab members

Institutional Statement on Diversity:

Diversity is a source of strength, creativity, and innovation for UW-Madison. We value the contributions of each person and respect the profound ways their identity, culture, background, experience, status, abilities, and opinion enrich the university community. We commit ourselves to the pursuit of excellence in teaching, research, outreach, and diversity as inextricably linked goals. The University of Wisconsin-Madison fulfills its public mission by creating a welcoming and inclusive community for people from every background - people who as students, faculty, and staff serve Wisconsin and the world. For more information on diversity and inclusion on campus, please visit: Diversity and Inclusion

Required: Bachelor's Degree in Psychology, Cognitive Science, or a related field

Qualifications:

Required:

  • 1 year of experience conducting psychological or closely related research
  • Experience working with youth, parents, and staff from various backgrounds
  • Demonstrated verbal and written communication and public relations skills

Preferred:

  • Experience programming experiments using study design software (e.g., Qualtrics; PsychoPy) and/or using statistical software (e.g., R)
  • Experience effectively recruiting and working with members of communities who are marginalized or underrepresented in academic psychology (e.g., communities of color; LGBTQ+ communities)
  • Experience presenting research at undergraduate or professional meetings

Full Time: 100%. It is anticipated that this position requires work to be performed in person, onsite, at a designated campus work location.

Appointment Type, Duration:

Ongoing/Renewable

Minimum $44,550 ANNUAL (12 months), depending on qualifications. The typical starting range for this position is $44,550 to $63,619. Employees in this position can expect to receive benefits such as generous vacation, holidays, and paid time off; competitive insurances and savings accounts; and retirement benefits. Learn more: https://hr.wisc.edu/benefits/new-employee-benefits-enrollment/

Additional Information:

The selected applicant will be responsible for ensuring their continuous eligibility for employment in the United States on or before the effective date of the appointment. University sponsorship is not available for this position.

How to Apply:

Please click on the "Apply Now" button to start the application process. For questions on the position, contact Ashley Jordan at [email protected]. To apply for this position, you will need to upload a cover letter, resume, and contact information for at least three professional references, including your current supervisor. References will not be contacted without advance notice. Your cover letter should address your qualifications as they pertain to the qualifications listed above.

Cassie Wheeler, [email protected], 608-262-3739. Relay Access (WTRS): 7-1-1.

Official Title:

Research Specialist (RE047)

Department(s):

A48-COL OF LETTERS & SCIENCE/PSYCHOLOGY/PSYCHOLOGY

Employment Class:

Academic Staff-Renewable

Job Number:

The University of Wisconsin-Madison is an equal opportunity and affirmative action employer.
