Research Methods In Psychology

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD, is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Learn about our Editorial Process

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

Research methods in psychology are systematic procedures used to observe, describe, predict, and explain behavior and mental processes. They include experiments, surveys, case studies, and naturalistic observations, ensuring data collection is objective and reliable to understand and explain psychological phenomena.


Hypotheses are statements predicting the outcome of a study that can be supported or refuted by investigation.

There are four types of hypotheses:
  • Null hypotheses (H0) – these predict that no difference will be found in the results between the conditions. They are typically written ‘There will be no difference…’
  • Alternative hypotheses (Ha or H1) – these predict that a significant difference will be found in the results between the two conditions. This is also known as the experimental hypothesis.
  • One-tailed (directional) hypotheses – these state the specific direction the researcher expects the results to move in, e.g., higher, lower, more, less. In a correlational study, the predicted direction of the correlation can be either positive or negative.
  • Two-tailed (non-directional) hypotheses – these state that a difference will be found between the conditions of the independent variable but do not state the direction of the difference or relationship. They are typically written ‘There will be a difference…’

All research has an alternative hypothesis (either one-tailed or two-tailed) and a corresponding null hypothesis.

Once the research is conducted and results are found, psychologists must accept one hypothesis and reject the other. 

So, if a difference is found, the psychologist accepts the alternative hypothesis and rejects the null; the opposite applies if no difference is found.

Sampling techniques

Sampling is the process of selecting a representative group from the population under study.


A sample is the participants you select from a target population (the group you are interested in) to make generalizations about.

Representative means the extent to which a sample mirrors a researcher’s target population and reflects its characteristics.

Generalisability means the extent to which research findings can be applied to the larger population from which the sample was drawn.

  • Volunteer sampling: participants select themselves, e.g., through newspaper adverts, noticeboards, or online.
  • Opportunity sampling: also known as convenience sampling, uses people who are available at the time the study is carried out and willing to take part. It is based on convenience.
  • Random sampling: every person in the target population has an equal chance of being selected. An example of random sampling would be picking names out of a hat.
  • Systematic sampling: a system is used to select participants, such as picking every Nth person from a list of all possible participants, where N = the number of people in the research population ÷ the number of people needed for the sample.
  • Stratified sampling: you identify the subgroups within the population and select participants from each in proportion to its occurrence.
  • Snowball sampling: researchers find a few participants and then ask them to recruit further participants, and so on.
  • Quota sampling: researchers are told to ensure the sample fits certain quotas; for example, they might be told to find 90 participants, 30 of them unemployed.
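Several of these techniques can be sketched in a few lines of Python. The population, stratum split, and sample sizes below are all hypothetical:

```python
import random

# Hypothetical target population of 20 named people.
population = [f"Person {i}" for i in range(1, 21)]
sample_size = 5

# Random sampling: every member has an equal chance of selection.
random_sample = random.sample(population, sample_size)

# Systematic sampling: pick every Nth person,
# where N = population size / required sample size.
n = len(population) // sample_size            # N = 20 / 5 = 4
systematic_sample = population[::n][:sample_size]

# Stratified sampling: sample each subgroup in proportion to its size
# (a hypothetical 12/8 split into two strata).
strata = {"group_a": population[:12], "group_b": population[12:]}
stratified_sample = []
for members in strata.values():
    k = round(sample_size * len(members) / len(population))
    stratified_sample += random.sample(members, k)

print(systematic_sample)
# ['Person 1', 'Person 5', 'Person 9', 'Person 13', 'Person 17']
```

Note how the stratified sample ends up with three people from the larger stratum and two from the smaller, matching their 60/40 proportions in the population.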

Experiments always have an independent and a dependent variable.

  • The independent variable is the one the experimenter manipulates (the thing that changes between the conditions the participants are placed into). It is assumed to have a direct effect on the dependent variable.
  • The dependent variable is the thing being measured, or the results of the experiment.


Operationalization of variables means making them measurable/quantifiable. We must use operationalization to ensure that variables are in a form that can be easily tested.

For instance, we can’t really measure ‘happiness’, but we can measure how many times a person smiles within a two-hour period. 

By operationalizing variables, we make it easy for someone else to replicate our research. Remember, this is important because we can check if our findings are reliable.

Extraneous variables are all variables other than the independent variable that could affect the results of the experiment.

It can be a natural characteristic of the participant, such as intelligence, gender, or age, or it could be a situational feature of the environment, such as lighting or noise.

Demand characteristics are a type of extraneous variable that occurs when participants work out the aims of the research study and begin to behave in a way they think is expected.

For example, in Milgram’s research , critics argued that participants worked out that the shocks were not real and they administered them as they thought this was what was required of them. 

Extraneous variables must be controlled so that they do not affect (confound) the results.

Randomly allocating participants to their conditions or using a matched pairs experimental design can help to reduce participant variables. 

Situational variables are controlled by using standardized procedures, ensuring every participant in a given condition is treated in the same way.

Experimental Design

Experimental design refers to how participants are allocated to each condition of the independent variable, such as a control or experimental group.
  • Independent design (between-groups design): each participant is selected for only one group. With the independent design, the most common way of deciding which participants go into which group is by means of randomization.
  • Matched participants design: each participant is selected for only one group, but the participants in the two groups are matched for some relevant factor or factors (e.g., ability, sex, age).
  • Repeated measures design (within-groups design): each participant appears in both groups, so that exactly the same participants are in each group.
  • The main problem with the repeated measures design is that there may well be order effects. Their experiences during the experiment may change the participants in various ways.
  • They may perform better when they appear in the second group because they have gained useful information about the experiment or about the task. On the other hand, they may perform less well on the second occasion because of tiredness or boredom.
  • Counterbalancing is the best way of preventing order effects from disrupting the findings of an experiment, and involves ensuring that each condition is equally likely to be used first and second by the participants.
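With two conditions, counterbalancing amounts to alternating the orders AB and BA across participants so each order is used equally often. A minimal sketch with hypothetical condition and participant labels:

```python
from itertools import permutations

# Hypothetical two-condition experiment: every possible order of the
# conditions (AB, BA) is assigned to equal numbers of participants.
conditions = ["A", "B"]
orders = list(permutations(conditions))       # [('A', 'B'), ('B', 'A')]

participants = [f"P{i}" for i in range(1, 9)]
# Cycle through the orders so each is used by half the participants.
schedule = {p: orders[i % len(orders)] for i, p in enumerate(participants)}

for participant, order in schedule.items():
    print(participant, "->", " then ".join(order))
```

With eight participants, four receive A then B and four receive B then A, so any order effect is spread equally across the two conditions.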

If we wish to compare two groups with respect to a given independent variable, it is essential to make sure that the two groups do not differ in any other important way. 

Experimental Methods

All experimental methods involve an IV (independent variable) and a DV (dependent variable).

  • Field experiments are conducted in the everyday (natural) environment of the participants. The experimenter still manipulates the IV, but in a real-life setting. It may be possible to control extraneous variables, though such control is more difficult than in a lab experiment.
  • Natural experiments are when a naturally occurring IV is investigated rather than deliberately manipulated; it exists anyway. Participants are not randomly allocated, and the natural event may occur only rarely.

Case studies are in-depth investigations of a person, group, event, or community. They use information from a range of sources, such as the person concerned and also their family and friends.

Many techniques may be used such as interviews, psychological tests, observations and experiments. Case studies are generally longitudinal: in other words, they follow the individual or group over an extended period of time. 

Case studies are widely used in psychology and among the best-known ones carried out were by Sigmund Freud . He conducted very detailed investigations into the private lives of his patients in an attempt to both understand and help them overcome their illnesses.

Case studies provide rich qualitative data and have high levels of ecological validity. However, it is difficult to generalize from individual cases as each one has unique characteristics.

Correlational Studies

Correlation means association; it is a measure of the extent to which two variables are related. One of the variables can be regarded as the predictor variable with the other one as the outcome variable.

Correlational studies typically involve obtaining two different measures from a group of participants, and then assessing the degree of association between the measures. 

The predictor variable can be seen as occurring before the outcome variable in some sense. It is called the predictor variable, because it forms the basis for predicting the value of the outcome variable.

Relationships between variables can be displayed on a graph or as a numerical score called a correlation coefficient.


  • If an increase in one variable tends to be associated with an increase in the other, then this is known as a positive correlation .
  • If an increase in one variable tends to be associated with a decrease in the other, then this is known as a negative correlation .
  • A zero correlation occurs when there is no relationship between variables.

After looking at the scattergraph, if we want to be sure that a significant relationship does exist between the two variables, a statistical test of correlation can be conducted, such as Spearman’s rho.

The test will give us a score called a correlation coefficient. This is a value between −1 and +1, and the closer the score is to −1 or +1, the stronger the relationship between the variables. The value can be positive, e.g. +0.63, or negative, e.g. −0.63.
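As a sketch, Spearman's rho can be computed by ranking each variable and applying the rank-difference formula, rho = 1 − 6Σd²/(n(n² − 1)). The data below are hypothetical, and this simple version assumes no tied ranks:

```python
def spearman_rho(x, y):
    """Spearman's rho via the rank-difference formula (assumes no tied ranks)."""
    n = len(x)
    rx = [sorted(x).index(v) + 1 for v in x]   # rank of each x value
    ry = [sorted(y).index(v) + 1 for v in y]   # rank of each y value
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

# Hypothetical data: hours of revision vs. test score for five students.
hours = [1, 3, 5, 7, 9]
scores = [52, 60, 58, 71, 83]
print(spearman_rho(hours, scores))  # 0.9 – a strong positive correlation
```

A coefficient of 0.9 would indicate a strong positive correlation; the same function returns exactly 1.0 when the rankings agree perfectly.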


A correlation between variables, however, does not automatically mean that the change in one variable is the cause of the change in the values of the other variable. A correlation only shows if there is a relationship between variables.

Correlation does not prove causation, as a third variable may be involved.


Interview Methods

Interviews are commonly divided into two types: structured and unstructured.

In a structured interview, a fixed, predetermined set of questions is put to every participant in the same order and in the same way.

Responses are recorded on a questionnaire, and the researcher presets the order and wording of questions, and sometimes the range of alternative answers.

The interviewer stays within their role and maintains social distance from the interviewee.

In an unstructured interview, there are no set questions; the participant can raise whatever topics he or she feels are relevant, and the interviewer follows up on the participant’s answers in their own way.

Unstructured interviews are most useful in qualitative research to analyze attitudes and values.

Though they rarely provide a valid basis for generalization, their main advantage is that they enable the researcher to probe social actors’ subjective point of view. 

Questionnaire Method

Questionnaires can be thought of as a kind of written interview. They can be carried out face to face, by telephone, or post.

The choice of questions is important because of the need to avoid bias or ambiguity in the questions, ‘leading’ the respondent or causing offense.

  • Open questions are designed to encourage a full, meaningful answer using the subject’s own knowledge and feelings. They provide insights into feelings, opinions, and understanding. Example: “How do you feel about that situation?”
  • Closed questions can be answered with a simple “yes” or “no” or specific information, limiting the depth of response. They are useful for gathering specific facts or confirming details. Example: “Do you feel anxious in crowds?”

Other practical advantages of questionnaires are that they are cheaper than face-to-face interviews and can be used to contact many respondents scattered over a wide area relatively quickly.

Observations

There are different types of observation methods:
  • Covert observation is where the researcher doesn’t tell the participants they are being observed until after the study is complete. There could be ethical problems of deception and consent with this particular observation method.
  • Overt observation is where a researcher tells the participants they are being observed and what they are being observed for.
  • Controlled : behavior is observed under controlled laboratory conditions (e.g., Bandura’s Bobo doll study).
  • Natural : Here, spontaneous behavior is recorded in a natural setting.
  • Participant : Here, the observer has direct contact with the group of people they are observing. The researcher becomes a member of the group they are researching.  
  • Non-participant (aka “fly on the wall”): The researcher does not have direct contact with the people being observed; participants’ behavior is observed from a distance.

Pilot Study

A pilot study is a small-scale preliminary study conducted in order to evaluate the feasibility of the key steps in a future, full-scale project.

A pilot study is an initial run-through of the procedures to be used in an investigation; it involves selecting a few people and trying out the study on them. It is possible to save time, and in some cases, money, by identifying any flaws in the procedures designed by the researcher.

A pilot study can help the researcher spot any ambiguities or confusion in the information given to participants, or problems with the task devised.

Sometimes the task is too hard, and the researcher may get a floor effect because none of the participants can score well or complete the task – all performances are low.

The opposite effect is a ceiling effect, when the task is so easy that all achieve virtually full marks or top performances and are “hitting the ceiling”.

Research Design

In cross-sectional research, a researcher compares multiple segments of the population at the same time.

Sometimes, we want to see how people change over time, as in studies of human development and lifespan. Longitudinal research is a research design in which data-gathering is administered repeatedly over an extended period of time.

In cohort studies , the participants must share a common factor or characteristic such as age, demographic, or occupation. A cohort study is a type of longitudinal study in which researchers monitor and observe a chosen population over an extended period.

Triangulation means using more than one research method to improve the study’s validity.

Reliability

Reliability is a measure of consistency: if a particular measurement is repeated and the same result is obtained, it is described as reliable.

  • Test-retest reliability: assessing the same person on two different occasions, which shows the extent to which the test produces the same answers.
  • Inter-observer reliability: the extent to which there is agreement between two or more observers.
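A simple way to quantify inter-observer reliability is the proportion of observation intervals on which two observers record the same behavior. The behavior codes below are hypothetical:

```python
# Hypothetical behavior codes recorded by two observers over ten intervals.
observer_a = ["play", "play", "rest", "talk", "play", "rest", "talk", "talk", "play", "rest"]
observer_b = ["play", "rest", "rest", "talk", "play", "rest", "talk", "play", "play", "rest"]

# Count the intervals where both observers recorded the same code.
agreements = sum(a == b for a, b in zip(observer_a, observer_b))
reliability = agreements / len(observer_a)

print(reliability)  # 0.8 – the observers agreed on 8 of 10 intervals
```

A proportion of 0.8 or higher is often taken as acceptable agreement, though more sophisticated statistics (such as Cohen's kappa) also correct for chance agreement.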

Meta-Analysis

A meta-analysis is a systematic review that involves identifying an aim and then searching for research studies that have addressed similar aims/hypotheses.

This is done by looking through various databases, and then decisions are made about what studies are to be included/excluded.

Strengths: increases the validity of the conclusions, as they are based on a wider range of studies.

Weaknesses: Research designs in studies can vary, so they are not truly comparable.

Peer Review

A researcher submits an article to a journal. The choice of the journal may be determined by the journal’s audience or prestige.

The journal selects two or more appropriate experts (psychologists working in a similar field) to peer review the article without payment. The peer reviewers assess the methods and design used, the originality and validity of the findings, and the article’s content, structure, and language.

Feedback from the reviewers determines whether the article is accepted. The article may be accepted as it is, accepted with revisions, sent back to the author to revise and resubmit, or rejected without the possibility of resubmission.

The editor makes the final decision on whether to accept or reject the research report based on the reviewers’ comments and recommendations.

Peer review is important because it prevents faulty data from entering the public domain, provides a way of checking the validity of findings and the quality of the methodology, and is used to assess the research rating of university departments.

Peer review may be an ideal; in practice, there are many problems. For example, it slows publication down and may prevent unusual new work from being published. Some reviewers might use it as an opportunity to prevent competing researchers from publishing work.

Some people doubt whether peer review can really prevent the publication of fraudulent research.

The advent of the internet means that more research and academic comment is being published without official peer review than before, though systems are evolving online where everyone has a chance to offer their opinions and police the quality of research.

Types of Data

  • Quantitative data is numerical data e.g. reaction time or number of mistakes. It represents how much or how long, how many there are of something. A tally of behavioral categories and closed questions in a questionnaire collect quantitative data.
  • Qualitative data is virtually any type of information that can be observed and recorded that is not numerical in nature and can be in the form of written or verbal communication. Open questions in questionnaires and accounts from observational studies collect qualitative data.
  • Primary data is first-hand data collected for the purpose of the investigation.
  • Secondary data is information that has been collected by someone other than the person who is conducting the research e.g. taken from journals, books or articles.

Validity means how well a piece of research actually measures what it sets out to, or how well it reflects the reality it claims to represent.

Validity is whether the observed effect is genuine and represents what is actually out there in the world.

  • Concurrent validity is the extent to which a psychological measure relates to an existing similar measure and obtains close results. For example, a new intelligence test compared to an established test.
  • Face validity: does the test measure what it’s supposed to measure ‘on the face of it’? This is assessed by ‘eyeballing’ the measure or by passing it to an expert to check.
  • Ecological validity is the extent to which findings from a research study can be generalized to other settings / real life.
  • Temporal validity is the extent to which findings from a research study can be generalized to other historical times.

Features of Science

  • Paradigm – A set of shared assumptions and agreed methods within a scientific discipline.
  • Paradigm shift – The result of the scientific revolution: a significant change in the dominant unifying theory within a scientific discipline.
  • Objectivity – When all sources of personal bias are minimised so as not to distort or influence the research process.
  • Empirical method – Scientific approaches that are based on the gathering of evidence through direct observation and experience.
  • Replicability – The extent to which scientific procedures and findings can be repeated by other researchers.
  • Falsifiability – The principle that a theory cannot be considered scientific unless it admits the possibility of being proved untrue.

Statistical Testing

A significant result is one where there is a low probability that chance factors were responsible for any observed difference, correlation, or association in the variables tested.

If our test is significant, we can reject our null hypothesis and accept our alternative hypothesis.

If our test is not significant, we can accept our null hypothesis and reject our alternative hypothesis. A null hypothesis is a statement of no effect.

In psychology, we use p < 0.05 (as it strikes a balance between making a Type I and a Type II error), but p < 0.01 is used in research where an error could cause harm, such as trials of a new drug.

A type I error is when the null hypothesis is rejected when it should have been accepted (happens when a lenient significance level is used, an error of optimism).

A type II error is when the null hypothesis is accepted when it should have been rejected (happens when a stringent significance level is used, an error of pessimism).
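The meaning of the Type I error rate can be illustrated with a small simulation: when the null hypothesis is true, a two-tailed test at p < .05 should falsely reject it on roughly 5% of experiments. This sketch uses a hypothetical fair coin and a normal approximation to a binomial test:

```python
import random

random.seed(42)                     # reproducible illustration

CRITICAL_Z = 1.96                   # two-tailed cutoff for p < .05
N_FLIPS = 100
N_EXPERIMENTS = 10_000

false_positives = 0
for _ in range(N_EXPERIMENTS):
    # The null hypothesis is true here: the coin is fair,
    # so any apparent "effect" is due to chance alone.
    heads = sum(random.random() < 0.5 for _ in range(N_FLIPS))
    z = (heads - 50) / 5            # mean 50, sd 5 under H0
    if abs(z) > CRITICAL_Z:
        false_positives += 1        # a Type I error

type_i_rate = false_positives / N_EXPERIMENTS
print(type_i_rate)                  # close to 0.05
```

Lowering the cutoff to p < 0.01 (a critical z of about 2.58) would shrink this false-positive rate, at the cost of more Type II errors.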

Ethical Issues

  • Informed consent means participants are able to make an informed judgment about whether to take part. However, revealing the study’s aims may cause participants to guess them and change their behavior.
  • To deal with this, we can gain presumptive consent or ask participants to formally indicate their agreement to participate, but this may invalidate the purpose of the study, and it is not guaranteed that participants fully understand what they are agreeing to.
  • Deception should only be used when it is approved by an ethics committee, as it involves deliberately misleading or withholding information. Participants should be fully debriefed after the study but debriefing can’t turn the clock back.
  • All participants should be informed at the beginning that they have the right to withdraw if they ever feel distressed or uncomfortable.
  • This right can still cause bias, as those who stay may be the more obedient participants, and some may not withdraw because they have been given incentives or feel they would be spoiling the study. Researchers can also offer the right to withdraw data after participation.
  • Participants should all have protection from harm . The researcher should avoid risks greater than those experienced in everyday life and they should stop the study if any harm is suspected. However, the harm may not be apparent at the time of the study.
  • Confidentiality concerns the communication of personal information. Researchers should not record any names but use numbers or false names instead, though full anonymity may not be possible, as it is sometimes possible to work out who the participants were.


Learning Goals

  • Learn key issues related to sampling and data collection.

Research Methods in Psychology

7.1  Probability Versus Non-Probability Sampling

Essentially all psychological research involves sampling—selecting a sample to study from the population of interest. Sampling falls into two broad categories. Probability sampling occurs when the researcher can specify the probability that each member of the population will be selected for the sample. Nonprobability sampling occurs when the researcher cannot specify these probabilities. Most psychological research involves nonprobability sampling. Convenience sampling—studying individuals who happen to be nearby and willing to participate—is a very common form of nonprobability sampling used in psychological research.

Serious researchers, however, are much more likely to use some form of probability sampling. This is because the goal of most research is to make accurate estimates about what is true in a particular population, and these estimates are most accurate when based on a probability sample. For example, it is important for researchers to base their estimates of election outcomes—which are often decided by only a few percentage points—on probability samples of likely registered voters.

Compared with nonprobability sampling, probability sampling requires a very clear specification of the population, which of course depends on the research questions to be answered. The population might be all registered voters in the state of Arkansas, all American consumers who have purchased a car in the past year, women in the United States over 40 years old who have received a mammogram in the past decade, or all the alumni of a particular university. Once the population has been specified, probability sampling requires a sampling frame. This is essentially a list of all the members of the population from which to select the respondents. Sampling frames can come from a variety of sources, including telephone directories, lists of registered voters, and hospital or insurance records. In some cases, a map can serve as a sampling frame, allowing for the selection of cities, streets, or households.

There are a variety of different probability sampling methods. Simple random sampling is done in such a way that each individual in the population has an equal probability of being selected for the sample. This could involve putting the names of all individuals in the sampling frame into a hat, mixing them up, and then drawing out the number needed for the sample. Given that most sampling frames take the form of computer files, random sampling is more likely to involve computerized sorting or selection of respondents. A common approach in telephone surveys is random-digit dialing, in which a computer randomly generates phone numbers from among the possible phone numbers within a given geographic area.

A common alternative to simple random sampling is stratified random sampling, in which the population is divided into different subgroups or “strata” (usually based on demographic characteristics) and then a random sample is taken from each “stratum.” Stratified random sampling can be used to select a sample in which the proportion of respondents in each of various subgroups matches the proportion in the population. For example, because about 12.5% of the US population is Black, stratified random sampling can be used to ensure that a survey of 1,000 American adults includes about 125 Black respondents. Stratified random sampling can also be used to sample extra respondents from particularly small subgroups—allowing valid conclusions to be drawn about those subgroups. For example, because Asian Americans make up a fairly small percentage of the US population (about 4.5%), a simple random sample of 1,000 American adults might include too few Asian Americans to draw any conclusions about them as distinct from any other subgroup. If this is important to the research question, however, then stratified random sampling could be used to ensure that enough Asian American respondents are included in the sample to draw valid conclusions about Asian Americans as a whole.
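The proportional allocation described above is simple arithmetic. A sketch using the percentages quoted in the text, with the remainder grouped into a hypothetical "Other" stratum:

```python
# Demographic proportions quoted in the text; "Other" is a hypothetical
# catch-all stratum so the proportions sum to 1.
strata_proportions = {"Black": 0.125, "Asian": 0.045, "Other": 0.83}
sample_size = 1000

# Quota for each stratum = its share of the population x the sample size.
quotas = {group: round(sample_size * p) for group, p in strata_proportions.items()}
print(quotas)  # {'Black': 125, 'Asian': 45, 'Other': 830}
```

To oversample a small subgroup, a researcher would simply raise that stratum's quota above its proportional value (e.g., 100 Asian American respondents instead of 45) and then weight the results back down when estimating population-wide figures.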

Yet another type of probability sampling is cluster sampling, in which larger clusters of individuals are randomly sampled and then individuals within each cluster are randomly sampled. For example, to select a sample of small-town residents in the United States, a researcher might randomly select several small towns and then randomly select several individuals within each town. Cluster sampling is especially useful for surveys that involve face-to-face interviewing because it minimizes the amount of traveling that the interviewers must do. For example, instead of traveling to 200 small towns to interview 200 residents, a research team could travel to 10 small towns and interview 20 residents of each. The National Comorbidity Survey was done using a form of cluster sampling.
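The two-stage logic of cluster sampling can be sketched as follows; the towns, resident counts, and sample sizes are hypothetical, mirroring the 10-towns-by-20-residents example:

```python
import random

random.seed(1)  # reproducible illustration

# Hypothetical sampling frame: 200 small towns of 500 residents each.
towns = {f"Town {t}": [f"Town {t}, resident {r}" for r in range(1, 501)]
         for t in range(1, 201)}

# Stage 1: randomly sample 10 towns (the clusters).
chosen_towns = random.sample(list(towns), 10)

# Stage 2: randomly sample 20 residents within each chosen town.
interviewees = [resident
                for town in chosen_towns
                for resident in random.sample(towns[town], 20)]

print(len(interviewees))  # 200 interviews, but only 10 towns to visit
```

The payoff is logistical: the interviewers obtain the same 200 interviews while traveling to 10 towns instead of 200.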

How large does a survey sample need to be? In general, this depends on two factors. One is the level of confidence in the result that the researcher wants. The larger the sample, the closer any statistic based on that sample will tend to be to the corresponding value in the population. The other factor is the budget of the study. Larger samples provide greater confidence, but they take more time, effort, and money to obtain. Taking these two factors into account, most survey research uses sample sizes that range from about 100 to about 1,000.

7.2  Sample Size and Population Size

Why is a sample of 1,000 considered to be adequate for most survey research—even when the population is much larger than that? Consider, for example, that a sample of only 1,000 registered voters is generally considered a good sample of the roughly 120 million registered voters in the US population—even though it includes only about 0.0008% of the population! The answer is a bit surprising.

One part of the answer is that a statistic based on a larger sample will tend to be closer to the population value and that this can be characterized mathematically. Imagine, for example, that in a sample of registered voters, exactly 50% say they intend to vote for the incumbent. If there are 100 voters in this sample, then there is a 95% chance that the true percentage in the population is between 40 and 60. But if there are 1,000 voters in the sample, then there is a 95% chance that the true percentage in the population is between 47 and 53. Although this “95% confidence interval” continues to shrink as the sample size increases, it does so at a slower rate. For example, if there are 2,000 voters in the sample, then this only reduces the 95% confidence interval to 48 to 52. In many situations, the small increase in confidence beyond a sample size of 1,000 is not considered to be worth the additional time, effort, and money.
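The intervals quoted above follow from the normal approximation for a sample proportion, where the margin of error is 1.96 × √(p(1 − p)/n). A sketch that reproduces them:

```python
from math import sqrt

def ci_95(p, n):
    """95% confidence interval (in whole percentage points) for a sample
    proportion, using the normal approximation."""
    margin = 1.96 * sqrt(p * (1 - p) / n)
    return round(100 * (p - margin)), round(100 * (p + margin))

# Reproduce the intervals from the text for a 50% sample result.
for n in (100, 1000, 2000):
    print(n, ci_95(0.5, n))
# 100 (40, 60)
# 1000 (47, 53)
# 2000 (48, 52)
```

Because n appears only under the square root, quadrupling the sample size merely halves the margin of error, which is why gains beyond n = 1,000 are so modest.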

Another part of the answer—and perhaps the more surprising part—is that confidence intervals depend only on the size of the sample and not on the size of the population. So a sample of 1,000 would produce a 95% confidence interval of 47 to 53 regardless of whether the population size was a hundred thousand, a million, or a hundred million.

7.3  Sampling Bias


9.3 Conducting Surveys

Learning Objectives

  • Explain the difference between probability and nonprobability sampling, and describe the major types of probability sampling.
  • Define sampling bias in general and nonresponse bias in particular. List some techniques that can be used to increase the response rate and reduce nonresponse bias.
  • List the four major ways to conduct a survey along with some pros and cons of each.

In this section, we consider how to go about conducting a survey. We first consider the issue of sampling, followed by some different methods of actually collecting survey data.

Essentially all psychological research involves sampling—selecting a sample to study from the population of interest. Sampling falls into two broad categories. Probability sampling occurs when the researcher can specify the probability that each member of the population will be selected for the sample. Nonprobability sampling occurs when the researcher cannot specify these probabilities. Most psychological research involves nonprobability sampling. Convenience sampling—studying individuals who happen to be nearby and willing to participate—is a very common form of nonprobability sampling used in psychological research.

Survey researchers, however, are much more likely to use some form of probability sampling. This is because the goal of most survey research is to make accurate estimates about what is true in a particular population, and these estimates are most accurate when based on a probability sample. For example, it is important for survey researchers to base their estimates of election outcomes—which are often decided by only a few percentage points—on probability samples of likely registered voters.

Compared with nonprobability sampling, probability sampling requires a very clear specification of the population, which of course depends on the research questions to be answered. The population might be all registered voters in the state of Arkansas, all American consumers who have purchased a car in the past year, women in the United States over 40 years old who have received a mammogram in the past decade, or all the alumni of a particular university. Once the population has been specified, probability sampling requires a sampling frame. This is essentially a list of all the members of the population from which to select the respondents. Sampling frames can come from a variety of sources, including telephone directories, lists of registered voters, and hospital or insurance records. In some cases, a map can serve as a sampling frame, allowing for the selection of cities, streets, or households.

There are a variety of different probability sampling methods. Simple random sampling is done in such a way that each individual in the population has an equal probability of being selected for the sample. This could involve putting the names of all individuals in the sampling frame into a hat, mixing them up, and then drawing out the number needed for the sample. Given that most sampling frames take the form of computer files, random sampling is more likely to involve computerized sorting or selection of respondents. A common approach in telephone surveys is random-digit dialing, in which a computer randomly generates phone numbers from among the possible phone numbers within a given geographic area.
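
The hat-drawing procedure described above maps directly onto code. The sketch below uses a made-up sampling frame of 10,000 names (real frames would come from a voter file or directory, as noted above); Python's `random.sample` draws without replacement, so every member of the frame has the same chance of selection.

```python
import random

# Hypothetical sampling frame: a list of population members.
sampling_frame = [f"resident_{i}" for i in range(10_000)]

random.seed(42)  # fixed seed so the draw is reproducible

# Simple random sample: each member has an equal probability
# (200 / 10,000 = 2%) of being selected, without replacement.
sample = random.sample(sampling_frame, k=200)

print(len(sample))       # 200
print(len(set(sample)))  # 200: no one is selected twice
```

Random-digit dialing follows the same principle, except that the "frame" is the set of all possible phone numbers in an area rather than an explicit list.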

A common alternative to simple random sampling is stratified random sampling, in which the population is divided into different subgroups or “strata” (usually based on demographic characteristics) and then a random sample is taken from each “stratum.” Stratified random sampling can be used to select a sample in which the proportion of respondents in each of various subgroups matches the proportion in the population. For example, because about 12.5% of the US population is Black, stratified random sampling can be used to ensure that a survey of 1,000 American adults includes about 125 Black respondents. Stratified random sampling can also be used to sample extra respondents from particularly small subgroups—allowing valid conclusions to be drawn about those subgroups. For example, because Asian Americans make up a fairly small percentage of the US population (about 4.5%), a simple random sample of 1,000 American adults might include too few Asian Americans to draw any conclusions about them as distinct from any other subgroup. If this is important to the research question, however, then stratified random sampling could be used to ensure that enough Asian American respondents are included in the sample to draw valid conclusions about Asian Americans as a whole.
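
Proportional allocation can be sketched as follows. The strata and their sizes are hypothetical, chosen only to mirror the percentages quoted above; each stratum contributes respondents in proportion to its share of the population.

```python
import random

random.seed(1)

# Hypothetical population of 10,000 grouped into strata
# (illustrative proportions matching the text).
strata = {
    "Black": [f"B{i}" for i in range(1250)],  # ~12.5%
    "Asian": [f"A{i}" for i in range(450)],   # ~4.5%
    "Other": [f"O{i}" for i in range(8300)],
}

population_size = sum(len(members) for members in strata.values())
sample_size = 1000

# Proportional allocation: each stratum contributes a random
# sample sized according to its share of the population.
sample = []
for name, members in strata.items():
    n = round(sample_size * len(members) / population_size)
    sample.extend(random.sample(members, n))

print(len(sample))  # 1000: about 125 Black, 45 Asian, 830 Other
```

To oversample a small subgroup, one would simply replace the proportional `n` for that stratum with a larger fixed number (and weight the results back down when estimating population values).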

Yet another type of probability sampling is cluster sampling, in which larger clusters of individuals are randomly sampled and then individuals within each cluster are randomly sampled. For example, to select a sample of small-town residents in the United States, a researcher might randomly select several small towns and then randomly select several individuals within each town. Cluster sampling is especially useful for surveys that involve face-to-face interviewing because it minimizes the amount of traveling that the interviewers must do. For example, instead of traveling to 200 small towns to interview 200 residents, a research team could travel to 10 small towns and interview 20 residents of each. The National Comorbidity Survey was done using a form of cluster sampling.
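
The two-stage logic, sampling clusters first and then individuals within each cluster, can be sketched like this (the towns and their resident lists are invented for illustration):

```python
import random

random.seed(7)

# Hypothetical frame: 200 small towns, each with 500 residents.
towns = {f"town_{t}": [f"town_{t}_res_{i}" for i in range(500)]
         for t in range(200)}

# Stage 1: randomly sample 10 of the 200 towns (the clusters).
chosen_towns = random.sample(list(towns), k=10)

# Stage 2: randomly sample 20 residents within each chosen town.
sample = []
for town in chosen_towns:
    sample.extend(random.sample(towns[town], k=20))

print(len(sample))  # 200 interviews, but travel to only 10 towns
```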

How large does a survey sample need to be? In general, this depends on two factors. One is the level of confidence in the result that the researcher wants. The larger the sample, the closer any statistic based on that sample will tend to be to the corresponding value in the population. The other factor is the budget of the study. Larger samples provide greater confidence, but they take more time, effort, and money to obtain. Taking these two factors into account, most survey research uses sample sizes that range from about 100 to about 1,000.

Sample Size and Population Size

Why is a sample of 1,000 considered to be adequate for most survey research—even when the population is much larger than that? Consider, for example, that a sample of only 1,000 registered voters is generally considered a good sample of the roughly 120 million registered voters in the US population—even though it includes only about 0.0008% of the population! The answer is a bit surprising.

One part of the answer is that a statistic based on a larger sample will tend to be closer to the population value and that this can be characterized mathematically. Imagine, for example, that in a sample of registered voters, exactly 50% say they intend to vote for the incumbent. If there are 100 voters in this sample, then there is a 95% chance that the true percentage in the population is between 40 and 60. But if there are 1,000 voters in the sample, then there is a 95% chance that the true percentage in the population is between 47 and 53. Although this “95% confidence interval” continues to shrink as the sample size increases, it does so at a slower rate. For example, if there are 2,000 voters in the sample, then this only reduces the 95% confidence interval to 48 to 52. In many situations, the small increase in confidence beyond a sample size of 1,000 is not considered to be worth the additional time, effort, and money.

Another part of the answer—and perhaps the more surprising part—is that confidence intervals depend only on the size of the sample and not on the size of the population. So a sample of 1,000 would produce a 95% confidence interval of 47 to 53 regardless of whether the population size was a hundred thousand, a million, or a hundred million.
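
Both claims can be checked with the standard margin-of-error formula for a proportion, z * sqrt(p(1 − p) / n). Note that the population size appears nowhere in the formula; only the sample size n matters.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of the 95% confidence interval for a proportion.

    The population size does not appear in this formula,
    only the sample size n.
    """
    return z * math.sqrt(p * (1 - p) / n)

p = 0.50  # 50% of the sample say they will vote for the incumbent
for n in (100, 1000, 2000):
    m = margin_of_error(p, n)
    low, high = 100 * (p - m), 100 * (p + m)
    print(f"n={n}: 95% CI roughly {low:.0f} to {high:.0f}")
    # n=100:  roughly 40 to 60
    # n=1000: roughly 47 to 53
    # n=2000: roughly 48 to 52
```

These match the intervals quoted in the text and show why the gain from doubling a sample of 1,000 is so small.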

Sampling Bias

Probability sampling was developed in large part to address the issue of sampling bias. Sampling bias occurs when a sample is selected in such a way that it is not representative of the entire population and therefore produces inaccurate results. This was the reason that the Literary Digest straw poll was so far off in its prediction of the 1936 presidential election. The mailing lists used came largely from telephone directories and lists of registered automobile owners, which overrepresented wealthier people, who were more likely to vote for Landon. Gallup was successful because he knew about this bias and found ways to sample less wealthy people as well.

There is one form of sampling bias that even careful random sampling is subject to. It is almost never the case that everyone selected for the sample actually responds to the survey. Some may have died or moved away, and others may decline to participate because they are too busy, are not interested in the survey topic, or do not participate in surveys on principle. If these survey nonresponders differ from survey responders in systematic ways, then this can produce nonresponse bias. For example, in a mail survey on alcohol consumption, researcher Vivienne Lahaut and colleagues found that only about half the sample responded after the initial contact and two follow-up reminders (Lahaut, Jansen, van de Mheen, & Garretsen, 2002). The danger here is that the half who responded might have different patterns of alcohol consumption than the half who did not, which could lead to inaccurate conclusions on the part of the researchers. So to test for nonresponse bias, the researchers later made unannounced visits to the homes of a subset of the nonresponders—coming back up to five times if they did not find them at home. They found that the original nonresponders included an especially high proportion of abstainers (nondrinkers), which meant that their estimates of alcohol consumption based only on the original responders were too high.
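
The direction of this bias is easy to see with a small numeric sketch. The drink counts below are invented for illustration and are not the study's actual data; the point is only that when nonresponders include many abstainers, a responder-only estimate overstates consumption.

```python
# Hypothetical weekly drink counts (illustrative only).
responders = [4, 6, 8, 5, 7]      # people who answered the mail survey
nonresponders = [0, 0, 2, 0, 1]   # includes many abstainers

def mean(xs):
    return sum(xs) / len(xs)

# Estimate based only on responders overstates consumption...
responder_only = mean(responders)               # 6.0 drinks/week

# ...compared with the full intended sample.
full_sample = mean(responders + nonresponders)  # 3.3 drinks/week

print(responder_only, full_sample)
```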

Although there are methods for statistically correcting for nonresponse bias, they are based on assumptions about the nonresponders—for example, that they are more similar to late responders than to early responders—which may not be correct. For this reason, the best approach to minimizing nonresponse bias is to minimize the number of nonresponders—that is, to maximize the response rate. There is a large research literature on the factors that affect survey response rates (Groves et al., 2004). In general, in-person interviews have the highest response rates, followed by telephone surveys, and then mail and Internet surveys. Among the other factors that increase response rates are sending potential respondents a short prenotification message informing them that they will be asked to participate in a survey in the near future and sending simple follow-up reminders to nonresponders after a few weeks. The perceived length and complexity of the survey also make a difference, which is why it is important to keep survey questionnaires as short, simple, and on topic as possible. Finally, offering an incentive—especially cash—is a reliable way to increase response rates.

Conducting the Survey

The four main ways to conduct surveys are through in-person interviews, by telephone, through the mail, and over the Internet. As with other aspects of survey design, the choice depends on both the researcher’s goals and the budget. In-person interviews have the highest response rates and provide the closest personal contact with respondents. Personal contact can be important, for example, when the interviewer must see and make judgments about respondents, as is the case with some mental health interviews. But in-person interviewing is by far the most costly approach. Telephone surveys have lower response rates and still provide some personal contact with respondents. They can also be costly but are generally less so than in-person interviews. Traditionally, telephone directories have provided fairly comprehensive sampling frames. Mail surveys are less costly still but generally have even lower response rates—making them most susceptible to nonresponse bias.

Not surprisingly, Internet surveys are becoming more common. They are increasingly easy to construct and use (see “Online Survey Creation”). Although initial contact can be made by mail with a link provided to the survey, this approach does not necessarily produce higher response rates than an ordinary mail survey. A better approach is to make initial contact by e-mail with a link directly to the survey. This can work well when the population consists of the members of an organization who have known e-mail addresses and regularly use them (e.g., a university community). For other populations, it can be difficult or impossible to find a comprehensive list of e-mail addresses to serve as a sampling frame. Alternatively, a request to participate in the survey with a link to it can be posted on websites known to be visited by members of the population. But again it is very difficult to get anything approaching a random sample this way because the members of the population who visit the websites are likely to be different from the population as a whole. However, Internet survey methods are in rapid development. Because of their low cost, and because more people are online than ever before, Internet surveys are likely to become the dominant approach to survey data collection in the near future.

Online Survey Creation

There are now several online tools for creating online questionnaires. After a questionnaire is created, a link to it can then be e-mailed to potential respondents or embedded in a web page. The following websites are among those that offer free accounts. Although the free accounts limit the number of questionnaire items and the number of respondents, they can be useful for doing small-scale surveys and for practicing the principles of good questionnaire construction.

  • Polldaddy— http://www.polldaddy.com
  • QuestionPro— http://www.questionpro.com
  • SurveyGizmo— http://www.surveygizmo.com
  • SurveyMonkey— http://www.surveymonkey.com
  • Zoomerang— http://www.zoomerang.com

Key Takeaways

  • Survey research usually involves probability sampling, in which each member of the population has a known probability of being selected for the sample. Types of probability sampling include simple random sampling, stratified random sampling, and cluster sampling.
  • Sampling bias occurs when a sample is selected in such a way that it is not representative of the population and therefore produces inaccurate results. The most pervasive form of sampling bias is nonresponse bias, which occurs when people who do not respond to the survey differ in important ways from people who do respond. The best way to minimize nonresponse bias is to maximize the response rate by prenotifying respondents, sending them reminders, constructing questionnaires that are short and easy to complete, and offering incentives.
  • Surveys can be conducted in person, by telephone, through the mail, and on the Internet. In-person interviewing has the highest response rates but is the most expensive. Mail and Internet surveys are less expensive but have much lower response rates. Internet surveys are likely to become the dominant approach because of their low cost.

Discussion: If possible, identify an appropriate sampling frame for each of the following populations. If there is no appropriate sampling frame, explain why.

  • students at a particular college or university
  • adults living in the state of Nevada
  • households in Little Rock, Arkansas
  • people with low self-esteem

Practice: Use one of the online survey creation tools to create a 10-item survey questionnaire on a topic of your choice.

Groves, R. M., Fowler, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2004). Survey methodology. Hoboken, NJ: Wiley.

Lahaut, V. M. H. C. J., Jansen, H. A. M., van de Mheen, D., & Garretsen, H. F. L. (2002). Non-response bias in a sample survey on alcohol consumption. Alcohol and Alcoholism, 37, 256–260.

Research Methods in Psychology Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Study Population

Hu, S. (2014). Study Population. In A. C. Michalos (Ed.), Encyclopedia of Quality of Life and Well-Being Research (pp. 6412–6414). Springer, Dordrecht. https://doi.org/10.1007/978-94-007-0753-5_2893


Survey Research

34 Overview of Survey Research

Learning Objectives

  • Define what survey research is, including its two important characteristics.
  • Describe several different ways that survey research can be used and give some examples.

What Is Survey Research?

Survey research is a quantitative and qualitative method with two important characteristics. First, the variables of interest are measured using self-reports (using questionnaires or interviews). In essence, survey researchers ask their participants (who are often called respondents in survey research) to report directly on their own thoughts, feelings, and behaviors. Second, considerable attention is paid to the issue of sampling. In particular, survey researchers have a strong preference for large random samples because they provide the most accurate estimates of what is true in the population. In fact, survey research may be the only approach in psychology in which random sampling is routinely used. Beyond these two characteristics, almost anything goes in survey research. Surveys can be long or short. They can be conducted in person, by telephone, through the mail, or over the Internet. They can be about voting intentions, consumer preferences, social attitudes, health, or anything else that it is possible to ask people about and receive meaningful answers. Although survey data are often analyzed using statistics, there are many questions that lend themselves to more qualitative analysis.

Most survey research is non-experimental. It is used to describe single variables (e.g., the percentage of voters who prefer one presidential candidate or another, the prevalence of schizophrenia in the general population, etc.) and also to assess statistical relationships between variables (e.g., the relationship between income and health). But surveys can also be used within experimental research. The study by Lerner and her colleagues is a good example. Their use of self-report measures and a large national sample identifies their work as survey research. But their manipulation of an independent variable (anger vs. fear) to assess its effect on a dependent variable (risk judgments) also identifies their work as experimental.

History and Uses of Survey Research

Survey research may have its roots in English and American “social surveys” conducted around the turn of the 20th century by researchers and reformers who wanted to document the extent of social problems such as poverty (Converse, 1987)[1]. By the 1930s, the US government was conducting surveys to document economic and social conditions in the country. The need to draw conclusions about the entire population helped spur advances in sampling procedures. At about the same time, several researchers who had already made a name for themselves in market research, studying consumer preferences for American businesses, turned their attention to election polling. A watershed event was the presidential election of 1936 between Alf Landon and Franklin Roosevelt. A magazine called Literary Digest conducted a survey by sending ballots (which were also subscription requests) to millions of Americans. Based on this “straw poll,” the editors predicted that Landon would win in a landslide. At the same time, the new pollsters were using scientific methods with much smaller samples to predict just the opposite—that Roosevelt would win in a landslide. In fact, one of them, George Gallup, publicly criticized the methods of Literary Digest before the election and all but guaranteed that his prediction would be correct. And of course, it was, demonstrating the effectiveness of careful survey methodology. (We will consider the reasons that Gallup was right later in this chapter.) Gallup’s demonstration of the power of careful survey methods led later researchers to conduct local election surveys and, in 1948, the first national election survey by the Survey Research Center at the University of Michigan. This work eventually became the American National Election Studies ( https://electionstudies.org/ ), a collaboration of Stanford University and the University of Michigan, and these studies continue today.

From market research and election polling, survey research made its way into several academic fields, including political science, sociology, and public health—where it continues to be one of the primary approaches to collecting new data. Beginning in the 1930s, psychologists made important advances in questionnaire design, including techniques that are still used today, such as the Likert scale. (See “What Is a Likert Scale?” in Section 7.2 “Constructing Survey Questionnaires”.) Survey research has a strong historical association with the social psychological study of attitudes, stereotypes, and prejudice. Early attitude researchers were also among the first psychologists to seek larger and more diverse samples than the convenience samples of university students that were routinely used in psychology (and still are).

Survey research continues to be important in psychology today. For example, survey data have been instrumental in estimating the prevalence of various mental disorders and identifying statistical relationships among those disorders and with various other factors. The National Comorbidity Survey is a large-scale mental health survey conducted in the United States (see http://www.hcp.med.harvard.edu/ncs). In just one part of this survey, nearly 10,000 adults were given a structured mental health interview in their homes in 2002 and 2003. Table 7.1 presents results on the lifetime prevalence of some anxiety, mood, and substance use disorders. (Lifetime prevalence is the percentage of the population that develops the problem sometime in their lifetime.) Obviously, this kind of information can be of great use both to basic researchers seeking to understand the causes and correlates of mental disorders and to clinicians and policymakers who need to understand exactly how common these disorders are.

And as the opening example makes clear, survey research can even be used as a data collection method within experimental research to test specific hypotheses about causal relationships between variables. Such studies, when conducted on large and diverse samples, can be a useful supplement to laboratory studies conducted on university students. Survey research is thus a flexible approach that can be used to study a variety of basic and applied research questions.

[1] Converse, J. M. (1987). Survey research in the United States: Roots and emergence, 1890–1960. Berkeley, CA: University of California Press.


Research Methods in Psychology Copyright © 2019 by Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, & Dana C. Leighton is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.



Key Takeaways

  • Survey research is a quantitative approach that features the use of self-report measures on carefully selected samples. It is a flexible approach that can be used to study a wide variety of basic and applied research questions.
  • Survey research has its roots in applied social research, market research, and election polling. It has since become an important approach in many academic disciplines, including political science, sociology, public health, and, of course, psychology.

Discussion: Think of a question that each of the following professionals might try to answer using survey research.

  • a social psychologist
  • an educational researcher
  • a market researcher who works for a supermarket chain
  • the mayor of a large city
  • the head of a university police force

Converse, J. M. (1987). Survey research in the United States: Roots and emergence, 1890–1960 . Berkeley, CA: University of California Press.

  • Research Methods in Psychology. Provided by : University of Minnesota Libraries Publishing. Located at : http://open.lib.umn.edu/psychologyresearchmethods . License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike


Indian J Psychol Med. 2020 Jan-Feb; 42(1)

Sample Size and its Importance in Research

Chittaranjan Andrade

Clinical Psychopharmacology Unit, Department of Clinical Psychopharmacology and Neurotoxicology, National Institute of Mental Health and Neurosciences, Bengaluru, Karnataka, India

The sample size for a study needs to be estimated at the time the study is proposed; too large a sample is unnecessary and unethical, and too small a sample is unscientific and also unethical. The necessary sample size can be calculated, using statistical software, based on certain assumptions. If no assumptions can be made, then an arbitrary sample size is set for a pilot study. This article discusses sample size and how it relates to matters such as ethics, statistical power, the primary and secondary hypotheses in a study, and findings from larger vs. smaller samples.

Studies are conducted on samples because it is usually impossible to study the entire population. Conclusions drawn from samples are intended to be generalized to the population, and sometimes to the future as well. The sample must therefore be representative of the population. This is best ensured by the use of proper methods of sampling. The sample must also be adequate in size – in fact, no more and no less.

SAMPLE SIZE AND ETHICS

A sample that is larger than necessary will better represent the population and will hence provide more accurate results. However, beyond a certain point, the increase in accuracy will be small and hence not worth the effort and expense involved in recruiting the extra patients. Furthermore, an overly large sample would inconvenience more patients than might be necessary for the study objectives; this is unethical. In contrast, a sample that is smaller than necessary would have insufficient statistical power to answer the primary research question, and a statistically nonsignificant result could merely be because of inadequate sample size (Type 2 or false negative error). Thus, a small sample could result in the patients in the study being inconvenienced with no benefit to future patients or to science. This is also unethical.

In this regard, inconvenience to patients refers to the time that they spend in clinical assessments and to the psychological and physical discomfort that they experience in assessments such as interviews, blood sampling, and other procedures.

ESTIMATING SAMPLE SIZE

So how large should a sample be? In hypothesis testing studies, this is mathematically calculated, conventionally, as the sample size necessary to be 80% certain of identifying a statistically significant outcome should the hypothesis be true for the population, with P for statistical significance set at 0.05. Some investigators power their studies for 90% instead of 80%, and some set the threshold for significance at 0.01 rather than 0.05. Both choices are uncommon because the necessary sample size becomes large, and the study becomes more expensive and more difficult to conduct. Many investigators increase the sample size by 10%, or by whatever proportion they can justify, to compensate for expected dropout, incomplete records, biological specimens that do not meet laboratory requirements for testing, and other study-related problems.

Sample size calculations require assumptions about expected means and standard deviations, or event risks, in different groups, or about expected effect sizes. For example, a study may be powered to detect an effect size of 0.5, or a response rate of 60% with drug vs. 40% with placebo.[ 1 ] When no guesstimates or expectations are possible, pilot studies are conducted on a sample that is arbitrary in size but considered reasonable for the field.
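As a rough illustration of the conventions described above (80% power, two-sided significance at 0.05), the normal-approximation sample-size formula for comparing two group means can be computed with nothing but the Python standard library. This is a sketch, not a substitute for dedicated software such as G*Power, and the t-based calculation those tools use gives a slightly larger answer:

```python
# Sketch: per-group sample size for a two-sample comparison of means,
# via the normal approximation. Standard library only; illustrative.
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sided test with a
    standardized effect size (Cohen's d)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # 0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# The article's example: powered to detect an effect size of 0.5.
print(n_per_group(0.5))  # 63 per group (the t-test formula gives ~64)
```

Raising power to 90% with `n_per_group(0.5, power=0.90)` pushes the requirement to 85 per group, which illustrates why the more stringent choices described above make studies larger and more expensive.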

The sample size may need to be larger in multicenter studies because of statistical noise (due to variations in patient characteristics, nonspecific treatment characteristics, rating practices, environments, etc. between study centers).[ 2 ] Sample size calculations can be performed manually or using statistical software; online calculators that provide free service can easily be identified by search engines. G*Power is an example of a free, downloadable program for sample size estimation. The manual and tutorial for G*Power can also be downloaded.

PRIMARY AND SECONDARY ANALYSES

The sample size is calculated for the primary hypothesis of the study. What is the difference between the primary hypothesis, primary outcome and primary outcome measure? As an example, the primary outcome may be a reduction in the severity of depression, the primary outcome measure may be the Montgomery-Asberg Depression Rating Scale (MADRS) and the primary hypothesis may be that reduction in MADRS scores is greater with the drug than with placebo. The primary hypothesis is tested in the primary analysis.

Studies almost always have many hypotheses; for example, that the study drug will outperform placebo on measures of depression, suicidality, anxiety, disability and quality of life. The sample size necessary for adequate statistical power to test each of these hypotheses will be different. Because a study can have only one sample size, it can be powered for only one outcome, the primary outcome. Therefore, the study would be either overpowered or underpowered for the other outcomes. These outcomes are therefore called secondary outcomes, and are associated with secondary hypotheses, and are tested in secondary analyses. Secondary analyses are generally considered exploratory because when many hypotheses in a study are each tested at a P < 0.05 level for significance, some may emerge statistically significant by chance (Type 1 or false positive errors).[ 3 ]
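The inflation of Type 1 errors across many secondary analyses can be made concrete: if k independent hypotheses are each tested at P < 0.05, the probability of at least one false positive (the familywise error rate) is 1 - (1 - 0.05)^k. A minimal sketch:

```python
# Sketch: familywise error rate across k independent tests,
# each at alpha = 0.05. Illustrates why secondary analyses
# are generally considered exploratory.
alpha = 0.05
for k in (1, 5, 10, 20):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:2d} tests -> P(at least one Type 1 error) = {fwer:.2f}")
# 1 test  -> 0.05; 5 tests -> 0.23; 10 tests -> 0.40; 20 tests -> 0.64
```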

INTERPRETING RESULTS

Here is an interesting question. A test of the primary hypothesis yielded a P value of 0.07. Might we conclude that our study was underpowered and that, had our sample been larger, we would have identified a significant result? No! The reason is that larger samples more accurately represent the population value, whereas smaller samples could be off the mark in either direction – towards or away from the population value. In this context, readers should also note that no matter how small the P value for an estimate is, the population value of that estimate remains the same.[ 4 ]

On a parting note, it is unlikely that population values will be null. That is, for example, that the response rate to the drug will be exactly the same as that to placebo, or that the correlation between height and age at onset of schizophrenia will be zero. If the sample size is large enough, even such small differences between groups, or trivial correlations, would be detected as being statistically significant. This does not mean that the findings are clinically significant.
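This distinction between statistical and clinical significance can be shown numerically. The sketch below, using only the standard library and Fisher's z-transformation to approximate a test of a correlation against zero, shows that a trivial correlation of r = 0.02 is nonsignificant in a modest sample but highly significant in a very large one (the r and n values here are illustrative):

```python
# Sketch: a tiny correlation becomes "statistically significant"
# once the sample is large enough. Approximate two-sided p-value
# for H0: rho = 0 via Fisher's z-transform; illustrative numbers.
import math
from statistics import NormalDist

def two_sided_p(r, n):
    z = 0.5 * math.log((1 + r) / (1 - r)) * math.sqrt(n - 3)
    return 2 * (1 - NormalDist().cdf(abs(z)))

for n in (100, 10_000, 1_000_000):
    print(f"r = 0.02, n = {n:>9}: p = {two_sided_p(0.02, n):.4f}")
```

The correlation itself, and hence its clinical meaning, is identical in all three cases; only the sample size changes.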

FINANCIAL SUPPORT AND SPONSORSHIP

CONFLICTS OF INTEREST

There are no conflicts of interest.

How Snowball Sampling Is Used in Psychology Research

An effective method for recruiting study participants

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."


Snowball sampling is a recruitment technique in which current research participants are enlisted to help recruit other potential study participants. This involves tapping into each participant's social network to find more subjects for a study. It allows researchers to find subjects who belong to a specific population who might not otherwise volunteer or seek out study participation.

As the name suggests, snowball sampling starts small and slowly "snowballs" into a larger sample. It is sometimes referred to as chain sampling, referral sampling, respondent-driven sampling, or chain-referral sampling.

At a Glance

Snowball sampling is a non-probability method allowing researchers to tap into hard-to-reach populations. Often used in qualitative designs, it allows researchers to recruit participants through referrals. This can be beneficial because it helps connect researchers with individuals they might not otherwise reach, but it can also contribute to sample bias and make it difficult to generalize the results to a larger population.

When to Use Snowball Sampling in Psychology Research

In most cases, researchers want to draw a sample that is both random and representative. Random selection ensures that each member of a group has an equal chance of being chosen, while representativeness ensures that the sample is an accurate reflection of the population as a whole.

While ideal, getting a random, representative sample isn't always possible. In such cases, researchers might turn to another method such as snowball sampling.

There are a number of situations where snowball sampling might be appropriate. These include:

  • When researchers are working with populations that are difficult to reach, including marginalized or hidden groups, such as drug users or sex workers
  • When research is in the exploratory stage, and scientists are still trying to learn more about an emerging phenomenon
  • When researchers are working to generate a hypothesis before they conduct more comprehensive studies
  • When recruiting through social networks makes the most sense in terms of cost and available resources
  • When researchers are studying communities that are highly connected via shared characteristics of interest

Snowball sampling is commonly used in qualitative research. It is a non-probability sampling method, often used in studies where researchers are exploring psychological phenomena and seeking insight. Sample sizes may be smaller in this type of research, but the approach often yields contextually rich data. This can help researchers understand the nuances of what they are studying in a specific population.

How Snowball Sampling Works

Snowball sampling starts by finding a few individuals who meet the necessary criteria for a research sample. These individuals are sometimes known as the "seeds." The researcher then asks each participant to provide the names of additional people who meet those criteria.

The seed participants are interviewed and provided with a reward for their participation. They may then be given "coupons" that they can give to other eligible individuals. Each coupon contains information that allows recruiters to trace its origins. Potential participants can then redeem these coupons by enrolling in the study.

Each individual approached for participation is also asked to provide information on potential candidates. This process is continued until enough subjects have been located.
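The referral chain described above can be sketched as a simple simulation. The network, names, and referral limit below are all invented for illustration; real chain-referral studies track coupons and recruitment waves much more carefully:

```python
# Sketch of chain-referral recruitment over a hypothetical social
# network. Seeds recruit contacts, who recruit their contacts, until
# the target sample size is reached. All details here are invented.
import random

def snowball_sample(network, seeds, target_n, referrals_per_person=2, rng=None):
    """Recruit until target_n participants, asking each recruit to
    refer up to referrals_per_person of their not-yet-recruited contacts."""
    rng = rng or random.Random(0)
    recruited = list(seeds)
    queue = list(seeds)
    while queue and len(recruited) < target_n:
        person = queue.pop(0)
        contacts = [c for c in network.get(person, []) if c not in recruited]
        for contact in rng.sample(contacts, min(referrals_per_person, len(contacts))):
            if len(recruited) >= target_n:
                break
            recruited.append(contact)
            queue.append(contact)
    return recruited

# A toy six-person network; each key lists that person's contacts.
network = {
    "A": ["B", "C", "D"], "B": ["A", "E"], "C": ["A", "F"],
    "D": ["A"], "E": ["B", "F"], "F": ["C", "E"],
}
print(snowball_sample(network, seeds=["A"], target_n=5))
```

Because recruitment spreads along existing ties, whoever the seeds are shapes who can ever be reached, which is exactly the source of bias discussed in the limitations section below.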

Pros and Cons of Snowball Sampling

Snowball sampling can have some pros and cons. Before using this approach, researchers should carefully weigh the potential advantages against the possible disadvantages and be transparent about any resulting limitations of the findings.

Advantages of Snowball Sampling

Snowball sampling can be particularly important when researchers are dealing with an uncommon or rare phenomenon. Traditional recruitment methods might simply not be able to locate a sufficient sample size.

It can also be helpful when participants are difficult to locate. This can include situations where people might be reticent about volunteering information about themselves or identifying themselves publicly. Because snowball sampling relies on recruiting people via trusted individuals, people may be more willing to participate.

Because snowball sampling provides essential information about the structure of social networks and connections, it can also be a helpful way of looking at the dynamics of the group itself.

Limitations of Snowball Sampling

The problem with snowball sampling is that it can contribute to bias. The opinions and characteristics of the initial members of the sample influence all of the subsequent subjects who are chosen to become part of the study.

This can make it more difficult for researchers to determine who might be missing from their sample and the factors contributing to that exclusion. Some variables might make it less likely for certain people to be referred, which can bias the study outcomes.

Another problem with snowball sampling is that it is difficult to know the size of the total overall population. It's also challenging to determine whether the sample accurately represents the population. If the sample only reflects a few people in the group, it might not be indicative of what is actually going on within the larger group.

Research suggests this sampling method can be a cost-effective way to collect data. However, researchers also caution that it can introduce bias, which means that caution must be used when interpreting the results of studies relying on snowball sampling.

Examples of Snowball Sampling

To understand how snowball sampling can be used in psychology research, looking at a few different examples can be helpful.

LGBTQIA+ Youth

Imagine a study where researchers want to investigate the experiences of LGBTQIA+ youth who live in rural areas. Because this population might be more difficult to reach due to discrimination, researchers might start by recruiting participants through local LGBTQIA+ organizations. Once they have an initial sample, the researchers can ask the current participants to introduce them to other people who are also LGBTQIA+.

Mental Health of Specific Populations

Consider a situation where researchers want to study the mental health of people in a particular profession, such as first responders who work in high-stress settings. The researchers might start by recruiting participants through professional organizations and then ask participants to refer them to colleagues who might also be interested in taking part.

Online Communities

Researchers might be interested in learning more about phenomena that affect people who belong to specific online communities. They might reach initial participants by contacting them through online forums or websites and then ask whether these participants are willing to share contact information for other members of the community.

Steps to Conduct Snowball Sampling

To conduct a snowball sample, researchers often use the following steps:

  • Create a research question and define the objectives of the study.
  • Identify the initial participants based on specific pre-determined criteria.
  • Obtain informed consent that clearly explains the purpose, benefits, and potential risks of participating in the research.
  • Collect data from the initial participants using surveys, interviews, observations, or other techniques.
  • Ask participants to refer you to other potential participants and obtain contact information if possible.
  • Contact the potential participants who have been referred to you. Explain the study and invite them to participate.
  • Repeat the same process with each subsequent participant. 
  • Continue the process until a sufficient sample has been obtained.

The Role of Snowball Sampling in Modern Research

While snowball sampling has its limitations, it plays an important role in modern psychology research. In particular, it can help researchers make contact with vulnerable or marginalized populations who are often overlooked and left out of more traditional sampling methods.

This technique can help researchers connect with the members of communities who may be hesitant to participate due to discrimination or the stigma associated with their condition.

It can also be a way for researchers to investigate phenomena that may be newly emerging and that might not yet be detectable using other sampling techniques.

Given the importance of social networks in today's highly connected world, snowball sampling also gives researchers a unique opportunity to examine how individuals connect to their communities. Researchers can use the information they collect to re-trace connections, providing valuable insights into how relationships and social dynamics affect the phenomena they study.

Snowball sampling is one method that psychology researchers may use to recruit study participants. While it has a greater risk of bias than drawing a random, representative sample, it does have some essential benefits. In particular, it can be a cost-effective way for researchers to find participants who belong to hidden or hard-to-reach populations. Despite the limitations of snowball sampling, it can play an important role in helping scientists learn more about emerging phenomena and populations that face stigma and marginalization.




Mental health and the pandemic: What U.S. surveys have found


The coronavirus pandemic has been associated with worsening mental health among people in the United States and around the world. In the U.S., the COVID-19 outbreak in early 2020 caused widespread lockdowns and disruptions in daily life while triggering a short but severe economic recession that resulted in widespread unemployment. Three years later, Americans have largely returned to normal activities, but challenges with mental health remain.

Here’s a look at what surveys by Pew Research Center and other organizations have found about Americans’ mental health during the pandemic. These findings reflect a snapshot in time, and it’s possible that attitudes and experiences may have changed since these surveys were fielded. It’s also important to note that concerns about mental health were common in the U.S. long before the arrival of COVID-19.

Three years into the COVID-19 outbreak in the United States, Pew Research Center published this collection of survey findings about Americans’ challenges with mental health during the pandemic. All findings are previously published. Methodological information about each survey cited here, including the sample sizes and field dates, can be found by following the links in the text.

The research behind the first item in this analysis, examining Americans’ experiences with psychological distress, benefited from the advice and counsel of the COVID-19 and mental health measurement group at Johns Hopkins Bloomberg School of Public Health.

At least four-in-ten U.S. adults (41%) have experienced high levels of psychological distress at some point during the pandemic, according to four Pew Research Center surveys conducted between March 2020 and September 2022.

[Bar chart: Young adults are especially likely to have experienced high psychological distress since March 2020.]

Young adults are especially likely to have faced high levels of psychological distress since the COVID-19 outbreak began: 58% of Americans ages 18 to 29 fall into this category, based on their answers in at least one of these four surveys.

Women are much more likely than men to have experienced high psychological distress (48% vs. 32%), as are people in lower-income households (53%) when compared with those in middle-income (38%) or upper-income (30%) households.

In addition, roughly two-thirds (66%) of adults who have a disability or health condition that prevents them from participating fully in work, school, housework or other activities have experienced a high level of distress during the pandemic.

The Center measured Americans’ psychological distress by asking them a series of five questions on subjects including loneliness, anxiety and trouble sleeping in the past week. The questions are not a clinical measure, nor a diagnostic tool. Instead, they describe people’s emotional experiences during the week before being surveyed.

While these questions did not ask specifically about the pandemic, a sixth question did, inquiring whether respondents had “had physical reactions, such as sweating, trouble breathing, nausea, or a pounding heart” when thinking about their experience with the coronavirus outbreak. In September 2022, the most recent time this question was asked, 14% of Americans said they’d experienced this at least some or a little of the time in the past seven days.
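As a purely hypothetical sketch of how a short item battery like this might be scored: the item labels, response scale, and "high distress" cutoff below are invented for illustration and do not reproduce Pew Research Center's actual index construction.

```python
# Hypothetical scoring sketch for a five-item distress battery.
# Items, past-week frequency codes, and the cutoff are all invented.
ITEMS = ["nervous", "depressed", "lonely", "hopeless", "trouble_sleeping"]
SCALE = {"rarely": 0, "some": 1, "occasionally": 2, "most": 3}

def distress_score(responses):
    """Sum the frequency codes across the five items (0-15 here)."""
    return sum(SCALE[responses[item]] for item in ITEMS)

def high_distress(responses, cutoff=10):
    """Flag respondents at or above an (invented) cutoff."""
    return distress_score(responses) >= cutoff

example = {"nervous": "most", "depressed": "some", "lonely": "occasionally",
           "hopeless": "rarely", "trouble_sleeping": "most"}
print(distress_score(example))  # 3 + 1 + 2 + 0 + 3 = 9
```

The point of the sketch is the general pattern (summed frequency codes with a threshold), not the specific numbers, which, as the text notes, describe recent emotional experience rather than a clinical diagnosis.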

More than a third of high school students have reported mental health challenges during the pandemic. In a survey conducted by the Centers for Disease Control and Prevention from January to June 2021, 37% of students at public and private high schools said their mental health was not good most or all of the time during the pandemic. That included roughly half of girls (49%) and about a quarter of boys (24%).

In the same survey, an even larger share of high school students (44%) said that at some point during the previous 12 months, they had felt sad or hopeless almost every day for two or more weeks in a row – to the point where they had stopped doing some usual activities. Roughly six-in-ten high school girls (57%) said this, as did 31% of boys.

[Bar chart: Among U.S. high schoolers in 2021, girls and LGB students were most likely to report feeling sad or hopeless in the past year.]

On both questions, high school students who identify as lesbian, gay, bisexual, other or questioning were far more likely than heterosexual students to report negative experiences related to their mental health.

[Bar chart: Mental health tops the list of parental concerns, ahead of worries such as kids being bullied, kidnapped or abducted, or attacked.]

Mental health tops the list of worries that U.S. parents express about their kids’ well-being, according to a fall 2022 Pew Research Center survey of parents with children younger than 18. In that survey, four-in-ten U.S. parents said they’re extremely or very worried about their children struggling with anxiety or depression. That was greater than the share of parents who expressed high levels of concern over seven other dangers asked about.

While the fall 2022 survey was fielded amid the coronavirus outbreak, it did not ask about parental worries in the specific context of the pandemic. It’s also important to note that parental concerns about their kids struggling with anxiety and depression were common long before the pandemic, too. (Due to changes in question wording, the results from the fall 2022 survey of parents are not directly comparable with those from an earlier Center survey of parents, conducted in 2015.)

Among parents of teenagers, roughly three-in-ten (28%) are extremely or very worried that their teen’s use of social media could lead to problems with anxiety or depression, according to a spring 2022 survey of parents with children ages 13 to 17. Parents of teen girls were more likely than parents of teen boys to be extremely or very worried on this front (32% vs. 24%). And Hispanic parents (37%) were more likely than those who are Black or White (26% each) to express a great deal of concern about this. (There were not enough Asian American parents in the sample to analyze separately. This survey also did not ask about parental concerns specifically in the context of the pandemic.)

[Bar chart: On balance, K-12 parents say the first year of COVID had a negative impact on their kids’ education and emotional well-being.]

Looking back, many K-12 parents say the first year of the coronavirus pandemic had a negative effect on their children’s emotional health. In a fall 2022 survey of parents with K-12 children , 48% said the first year of the pandemic had a very or somewhat negative impact on their children’s emotional well-being, while 39% said it had neither a positive nor negative effect. A small share of parents (7%) said the first year of the pandemic had a very or somewhat positive effect in this regard.

White parents and those from upper-income households were especially likely to say the first year of the pandemic had a negative emotional impact on their K-12 children.

While around half of K-12 parents said the first year of the pandemic had a negative emotional impact on their kids, a larger share (61%) said it had a negative effect on their children’s education.




  • Open access
  • Published: 15 April 2024

Complex PTSD symptom clusters and executive function in UK Armed Forces veterans: a cross-sectional study

  • Natasha Biscoe   ORCID: orcid.org/0000-0003-3471-6472 1 ,
  • Emma New 2 &
  • Dominic Murphy   ORCID: orcid.org/0000-0002-9530-2743 1 , 3  

BMC Psychology, volume 12, Article number: 209 (2024)


Background

Less is known about complex posttraumatic stress disorder (CPTSD) than posttraumatic stress disorder (PTSD) in military veterans, yet this population may be at greater risk of the former diagnosis. Executive function impairment has been linked to PTSD treatment outcomes. The current study therefore aimed to explore possible associations between each CPTSD symptom cluster and executive function to understand if similar treatment trajectories might be observed with the disorder.

Methods

A total of 428 veterans from a national charity responded to a self-report questionnaire which measured CPTSD symptom clusters using the International Trauma Questionnaire, and executive function using the Adult Executive Function Inventory. Single and multiple linear regression models were used to analyse the relationship between CPTSD symptom clusters and executive function, including working memory and inhibition.

Results

Each CPTSD symptom cluster was significantly associated with higher executive function impairment, even after controlling for possible mental health confounding variables. Emotion dysregulation was the CPTSD symptom cluster most strongly associated with executive function impairment.

Conclusions

This is the first study to explore the relationship between executive function and CPTSD symptom clusters. The study builds on previous findings and suggests that executive function could be relevant to CPTSD treatment trajectories, as is the case with PTSD alone. Future research should further explore such clinical implications.


Background

Military veterans face a greater risk of experiencing PTSD than the general UK population [ 1 ] and are more likely to meet criteria for Complex PTSD (CPTSD) than PTSD [ 2 ]. PTSD encompasses a set of symptoms which may be experienced following a traumatic event, including hyperarousal, re-experiencing (nightmares, intrusions), cognitive and behavioural avoidance, and negative alterations in mood (DSM-5; [ 3 ]). CPTSD was added to the 11th revision of the International Classification of Diseases (ICD-11) [ 4 ] as a distinct disorder. A diagnosis of CPTSD includes experiencing clusters of symptoms that encompass PTSD, as well as symptom clusters referred to as Disturbances in Self-Organisation (DSO), which are: emotion dysregulation, interpersonal difficulties, and negative self-concept, as well as functional impairment connected to both PTSD and DSO symptoms.

CPTSD has been linked with sustained and multiple traumas [ 5 ] as well as interpersonal trauma [ 6 ]. Military veterans appear to be at greater risk of CPTSD than PTSD [ 7 ]. Indeed, CPTSD appears to be more prevalent in UK treatment-seeking veterans than PTSD (with 80% meeting criteria for CPTSD compared to 20% for PTSD; [ 2 , 8 ]). Additionally, proportionally higher treatment dropout rates are reported for veterans with CPTSD [ 9 ]. It is therefore clinically important to understand factors which may be relevant to both PTSD and CPTSD, as interventions may need to be tailored to each disorder respectively.

PTSD and executive function

An association between impairments in executive function (EF) and posttraumatic stress disorder (PTSD) is well established in the literature (for reviews see: [ 10 , 11 , 12 ]). EFs are a collection of abilities grouped together for their relevance to planning and executing complex, goal-directed behaviour [ 13 , 14 , 15 ]. There is significant variation in both definitions of the concept and how the construct is operationalised, although the current study follows Miyake and colleagues [ 16 ], as this conceptualisation aligns well with the self-report measure of executive function used in this study. These authors identify cognitive flexibility, working memory and inhibition as core EFs, deficits in all of which may be relevant to PTSD [ 17 , 18 , 19 , 20 , 21 ]. Furthermore, one study has reported that greater inhibitory control is associated with greater improvement in PTSD symptoms following psychological treatment, indicating the possible relevance of EF to PTSD recovery trajectories [ 22 ]. Less is known about whether similar trajectories would be observed in those with CPTSD. However, insight may be drawn from neurocognitive explanations of the observed associations between EF and PTSD.

Neurocognitive models of PTSD and EF

Several meta-analyses of lesion and neuroimaging studies implicate the prefrontal cortex (PFC) as key in supporting EF [ 23 , 24 , 25 ]. The PFC has been theorised as a control centre, mediating between sensory inputs and behavioural outputs via regulation of brain systems central to emotion processing such as the amygdala [ 26 ]. The PFC is also structurally associated with PTSD, as well as the amygdala, hippocampus, and cingulate cortex [ 27 ], with this system key to attaching emotional valence to memories relevant to the fear-based experiences that lead to PTSD [ 19 ].

The shared relevance of these brain systems to both EF and PTSD suggests a neurocognitive explanation for the overlap observed between the two constructs. For example, one neurocognitive model of PTSD posits that the PFC (and associated deficits in EF) may ineffectively regulate hyperarousal of the amygdala in individuals with PTSD when a perceived threat is observed in a safe environment [ 28 , 29 , 30 ]. Furthermore, elevated arousal, a symptom of PTSD, may deplete cognitive resources, leading to deficits in EF as attention is focused instead on regulating hyperarousal [ 20 , 31 , 32 , 33 ].

EFs and CPTSD

Neuroimaging studies reinforce this theory and suggest that functional connectivity between the PFC and brain regions relevant to emotion regulation is key to supporting EF [ 34 , 35 ]. Emotion dysregulation may therefore be pertinent to the observed overlap between PTSD and EF. Given that emotion dysregulation is a DSO symptom of CPTSD, exploring associations between CPTSD and EFs could inform understanding of the disorder and how existing PTSD interventions could be tailored to improve treatment response in veterans seeking treatment for CPTSD. In a study using an adolescent sample, deficits in EFs were associated with greater CPTSD severity [ 36 ]. However, less is known about the relationship between CPTSD and EFs in veteran populations.

The current study

Given the potential relevance of EF to PTSD treatment outcomes in veterans, and the need to further understand CPTSD in this population, the current study explores the relationship between both PTSD and CPTSD and a self-report measure of EF (inhibition and working memory) in a clinical sample of UK veterans. Associations between each PTSD symptom cluster and EFs are separately investigated, including the DSO clusters that encompass CPTSD. In line with previous studies [ 36 ], it is hypothesised that lower executive functioning scores (both working memory and inhibition) will be associated with greater severity of CPTSD symptoms.

Methods

This study was approved by [blinded for review].

Participants

Of the veterans seeking treatment at a national UK charity, a 20% random sample was selected to assess whether they met the study inclusion criteria: (1) having a valid email address; (2) having provided consent to be contacted by the research team about studies; (3) having attended one or more appointments (classed as treatment-seeking). In total, 989 veterans were emailed the study link, to which 428 responded (43.3% response rate; mean age = 50.4 years, SD = 10.9). Participation was voluntary. No differences were found between those who returned completed questionnaires and non-responders [ 2 ]. We determined this by analysing predictors of returning a completed survey, including age, sex and service branch.

Eligible and consenting veterans were emailed the link to a self-report questionnaire hosted on Survey Monkey, which included demographic questions and the measures described below. Responses were collected between August and October 2020 and participants were emailed not more than five times. The questionnaire took approximately 20 min to complete. Full study procedure has been described previously [ 2 ].

The Adult Executive Function Inventory (ADEXI; [ 37 ]) measures EF on a 14-item self-report scale, with responses on a five-point Likert scale ranging from zero (definitely not true) to four (definitely true). Items 1, 2, 5, 7, 8, 9, 11, 12 and 13 comprise the working memory subscale, e.g.: “I have difficulty remembering lengthy instructions” and “when someone asks me to do several things, I sometimes only remember the first or last”. The remaining items make up the inhibition subscale, e.g.: “I have a tendency to do things without first thinking about what could happen” and “I sometimes have difficulty stopping myself from doing something that I like even though someone tells me that it is not allowed”. A higher score on the scale or each of the subscales indicates greater impairment. The ADEXI has good internal consistency (α = 0.68–0.72) and test-retest reliability, but poor convergent validity with neuropsychological tests of EF [ 37 ].
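
The scoring scheme above can be sketched as follows. This is a minimal illustration, not the ADEXI authors' code: the item numbering follows the text, and subscale scores are averaged so that scores computed from different numbers of items remain comparable.

```python
# Illustrative ADEXI subscale scoring; item groupings follow the text above.
WORKING_MEMORY_ITEMS = {1, 2, 5, 7, 8, 9, 11, 12, 13}  # 9 working memory items
ALL_ITEMS = set(range(1, 15))                           # 14 items in total
INHIBITION_ITEMS = ALL_ITEMS - WORKING_MEMORY_ITEMS     # remaining 5 items

def adexi_subscales(responses):
    """responses: dict mapping item number (1-14) to a 0-4 Likert rating.
    Returns mean-based scores; higher scores indicate greater impairment."""
    wm = sum(responses[i] for i in WORKING_MEMORY_ITEMS) / len(WORKING_MEMORY_ITEMS)
    inhibition = sum(responses[i] for i in INHIBITION_ITEMS) / len(INHIBITION_ITEMS)
    total = sum(responses.values()) / len(responses)
    return {"working_memory": wm, "inhibition": inhibition, "total": total}
```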

Symptoms of PTSD and CPTSD were measured using the International Trauma Questionnaire (ITQ; [ 38 ]), an 18-item scale with responses on a five-point Likert scale ranging from zero (not at all) to four (extremely). Two items measure each of the three PTSD symptom clusters: hyperarousal, re-experiencing and avoidance. Two items measure each of the three disturbances in self-organisation (DSO) symptom clusters that comprise CPTSD: negative self-concept, interpersonal difficulties and affect dysregulation. Three identical items then measure functional impairment related to the PTSD and DSO symptom clusters respectively. Possible caseness for PTSD is indicated by a score of two or higher on at least one of the two items measuring each PTSD symptom cluster, as well as a score of two or higher on one of the three functional impairment items relating to the PTSD symptom clusters. Possible caseness for CPTSD is indicated by meeting the criteria for PTSD, as well as scoring two or higher on at least one of the two items for each DSO symptom cluster, and at least a two on one of the functional impairment items relating to DSO symptoms. The ITQ has strong psychometric properties, including good internal consistency (α = 0.90; [ 39 ]).
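
The ITQ caseness rules described above can be expressed compactly as follows. The groupings of items into pairs and the function names are illustrative; the ITQ's actual item ordering is not reproduced here.

```python
def meets_cluster(item_pair):
    """A symptom cluster is endorsed if either of its two items scores 2 or higher."""
    return any(score >= 2 for score in item_pair)

def itq_caseness(ptsd_clusters, ptsd_impairment, dso_clusters, dso_impairment):
    """ptsd_clusters / dso_clusters: three (item1, item2) pairs each.
    ptsd_impairment / dso_impairment: three functional impairment scores each.
    Returns 'CPTSD', 'PTSD', or 'none' following the rules in the text."""
    ptsd = (all(meets_cluster(pair) for pair in ptsd_clusters)
            and any(score >= 2 for score in ptsd_impairment))
    dso = (all(meets_cluster(pair) for pair in dso_clusters)
           and any(score >= 2 for score in dso_impairment))
    if ptsd and dso:       # CPTSD requires meeting the PTSD criteria as well
        return "CPTSD"
    if ptsd:
        return "PTSD"
    return "none"
```

Note that, as in the text, DSO symptoms alone do not indicate CPTSD caseness: the PTSD criteria must also be met.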

Symptoms of generalised anxiety and depression were measured with the General Health Questionnaire (GHQ-12; [ 40 ]), a 12-item scale where a score of four or higher is indicative of potential caseness for common mental health difficulties (CMDs). The GHQ-12 has good internal consistency (α = 0.72; [ 41 ]).

Somatic symptoms were measured using the Patient Health Questionnaire (PHQ-15; [ 41 ]), a 15-item scale where a score above 15 indicates higher severity of somatic symptoms. The PHQ-15 has good internal consistency (α = 0.80; [ 42 ]).

Symptoms of poor sleep quality were measured using the Sleep Condition Indicator (SCI; [ 43 ]), an eight-item scale where a score below 16 is indicative of a potential insomnia disorder. The SCI has good internal consistency (α = 0.86; [ 44 ]).

Symptoms of difficulties with anger were measured using the Dimensions of Anger Reactions (DAR-5; [ 45 ]), a five-item scale where a score higher than 12 is indicative of possible anger difficulties. The DAR-5 has good internal consistency (α = 0.89–0.90; [ 46 ]).

Symptoms of alcohol misuse were measured using the Alcohol Use Disorders Identification Test (AUDIT; [ 47 ]), a 10-item scale where scores higher than eight and 16 respectively are classified as possible hazardous and harmful alcohol use. The AUDIT has good internal consistency (α = 0.60–0.80; [ 48 ]).
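
For illustration, the screening cut-offs quoted for these five measures can be collected into a single helper. The thresholds follow the text above; the function and field names are our own and not part of any instrument.

```python
# Illustrative screening helper; thresholds are taken from the measure
# descriptions above, field names are invented for this sketch.
def screen(ghq12, phq15, sci, dar5, audit):
    """Map total scores on each measure to the caseness flags described above."""
    return {
        "cmd_caseness": ghq12 >= 4,        # GHQ-12: score of four or higher
        "high_somatic": phq15 > 15,        # PHQ-15: score above 15
        "possible_insomnia": sci < 16,     # SCI: score below 16
        "anger_difficulties": dar5 > 12,   # DAR-5: score higher than 12
        "hazardous_alcohol": audit > 8,    # AUDIT: score higher than eight
        "harmful_alcohol": audit > 16,     # AUDIT: score higher than 16
    }
```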

Data analysis

Data were prepared in STATA 13.0 and analysed in SPSS v.26. Continuous variables were ADEXI scores and subscale scores. These were averaged so that comparisons could be made across scores calculated from different numbers of items. All other variables were categorical, divided into case and no case or high severity and lower severity for each health outcome, and no PTSD, PTSD, and CPTSD for the ITQ variable. To understand the relationship between mental health variables, including PTSD and EF, single linear regression models were used with demographic and mental health caseness variables as predictors, and ADEXI and inhibition and working memory subscale scores as outcome variables in separate analyses. This was to understand possible confounding variables for any relationship between PTSD and CPTSD with EF. Multiple linear regression models were then used with PTSD and CPTSD caseness as predictor variables, and ADEXI score and subscale scores as outcome variables. Those variables which were significant in the single linear regression models were included in the multiple regression models to adjust for possible confounding factors. Single linear regression models explored the relationships between individual PTSD and DSO symptom clusters with EF. ‘Caseness’ for each symptom cluster was calculated as a score of two or higher on at least one of the two items measuring each cluster. The sample met assumptions for multiple linear regression: the data were normally distributed (W = 0.96, p  = 0.23), there was low multicollinearity and there was a linear relationship between the variables used in the regression models. As described in [ 2 ], analyses were restricted to responders only and missing data were not included in the models due to the assumption that data were missing at random. A power analysis was not conducted for the present study as the analysis was exploratory and data were collected through convenience sampling [ 49 ].

In regression analysis, B values below 0.1, between 0.1 and 0.5, and above 0.5 are broadly considered small, medium and large respectively [ 50 ].
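
As a rough illustration of this modelling strategy, the sketch below fits a single (unadjusted) and a multiple (adjusted) linear regression on simulated data, with ordinary least squares standing in for the SPSS models. The variable names, effect sizes and noise level are invented for the example; only the sample size matches the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 428                                  # sample size reported above
cptsd = rng.integers(0, 2, n)            # caseness indicator (0/1), simulated
cmd = rng.integers(0, 2, n)              # possible confounder (0/1), simulated
# Simulated EF impairment score with assumed effects of 0.5 and 0.3
ef = 1.0 + 0.5 * cptsd + 0.3 * cmd + rng.normal(0, 0.2, n)

def fit(predictors, y):
    """Ordinary least squares with an intercept column; returns coefficients."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_single = fit([cptsd], ef)        # single regression: unadjusted association
b_multi = fit([cptsd, cmd], ef)    # multiple regression: adjusted for confounder
```

With this design, the adjusted coefficient for caseness recovers the simulated effect, mirroring how the paper retains PTSD/CPTSD as predictors after adding significant confounders to the model.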

Results

Demographic characteristics, as well as descriptive statistics for the variables included in the regression models, are described in Table 1.

Single regression models

Single linear regression models for demographic and mental health factors are presented in Table  2 . Being unemployed and having an ethnicity other than white were significantly associated with higher overall EF, inhibition and working memory impairment. Having high somatic symptoms and meeting caseness for probable common mental health difficulties were also associated with higher overall EF, inhibition and working memory impairment. In addition, scores indicating hazardous alcohol use were associated with working memory and inhibition impairment, and sleep disturbances were associated with a higher working memory impairment.

Multiple regression models

The multiple regression models for PTSD adjusted for all other variables that were significant in the single regression models, besides CPTSD caseness. The same models were then analysed with CPTSD caseness, rather than PTSD caseness, as a predictor. These models are displayed in Table 3. Across all adjusted models, both PTSD and CPTSD remained significant predictors of EF, inhibition and working memory impairment.

PTSD and DSO symptom clusters

Linear regression models for each of the PTSD and DSO symptom clusters and EF, inhibition and working memory are displayed in Table  4 . In line with our hypothesis, each symptom cluster was significantly associated with EF, as well as inhibition and working memory subscales.

Discussion

The aim of the current study was to explore the associations between CPTSD symptom clusters and EF in a clinical sample of UK veterans. Both PTSD and CPTSD caseness were significantly associated with greater impairment in inhibition and working memory, in line with our hypothesis. All PTSD symptom clusters, and the DSO symptom clusters which encompass CPTSD, were associated with inhibition and working memory. In particular, the DSO symptom cluster of emotion dysregulation was most strongly associated with EF impairment. PTSD encompasses the symptoms of hyperarousal, re-experiencing and avoidance. CPTSD is a relatively new separate diagnosis which includes PTSD symptoms as well as the DSO symptoms of emotion dysregulation, negative self-concept and interpersonal difficulties, together with functional impairment relating to these domains [ 4 ].

These associations remained after controlling for the following possible confounders, which were also found to be associated with greater EF impairment: employment status, ethnicity, somatisation severity, common mental health disorders, alcohol misuse and for working memory, sleep function. The finding that EF impairment is associated with worse health coheres with previous research, which has observed relationships between EF deficits and both depression [ 51 ] and somatisation disorder [ 52 ]. Additionally, sleep deprivation is consistently associated with impairments in working memory [ 53 , 54 ].

Emotion dysregulation and EF impairment

Our finding that emotion dysregulation was the CPTSD symptom cluster most strongly associated with EF coheres with and builds on neurocognitive models in the literature. Previous research has suggested that functional connectivity between the PFC and the limbic system is key to the overlap observed between PTSD chronicity, severity, and EF impairment [ 10 , 55 ]. In one study, those with greater functional connectivity in this system, termed the frontoparietal control and limbic network (FPCN), were observed to have less chronic PTSD and greater reduction in PTSD symptoms [ 56 ]. The FPCN underlies emotion processing [ 57 ] and mind wandering [ 58 ], and is neurally connected with the default mode network (DMN; [ 59 ]), all of which are associated with PTSD [ 60 ]. Moreover, the development of the DMN is particularly sensitive during childhood, and research suggests its development could be affected by early and prolonged trauma [ 61 , 62 ]. Given that these factors are more strongly associated with CPTSD than PTSD [ 5 ], the finding that the DSO symptom cluster of emotion dysregulation was most related to EF suggests that similar neurobiological mechanisms may be involved in CPTSD as those proposed for the overlap between EF and PTSD.

Limitations

A number of limitations to the present study should be noted. Firstly, whilst the self-report measure of EF facilitated the collection of data from a larger sample, it has limited convergent validity with neuropsychological measures of EF [ 37 ]. However, as a self-report measure, the scale has strong psychometric properties [ 37 ], and self-report EF measures are strongly related to functional impairment [ 63 ]. Secondly, the scale does not include items measuring cognitive flexibility, although this would be difficult to capture on a self-report measure. Thirdly, data were collected during the Covid-19 pandemic, and environmental factors related to restrictive measures at the time could have affected participants’ responses. However, our research suggests veterans’ mental health difficulties remained relatively stable throughout the pandemic. Finally, no causal relationships can be interpreted from the current findings due to the cross-sectional design of the study. However, the observed association between DSO symptom clusters and EF impairment builds on previous findings of similar associations with PTSD clusters, and this can inform future research and clinical studies.

Implications for treatment

Taken together, the findings of the present study suggest that CPTSD interventions may – as observed with PTSD treatment outcomes [ 22 ] – result in better symptom improvement in patients who display greater inhibitory control in neuropsychological tests. By separately analysing both PTSD and DSO symptom clusters, the current study has highlighted the potential role of emotion dysregulation in the overlap between EF impairment and PTSD observed in previous studies [ 10 , 11 , 12 ]. Future research might explore whether veterans with better inhibitory control and working memory respond better to CPTSD interventions. For example, Enhanced Skills Training in Affective and Interpersonal Regulation (ESTAIR; [ 64 ]) is a modular CPTSD treatment which sequentially targets each DSO symptom – including emotion dysregulation. Future studies might explore whether building skills in emotion regulation reduces impairment in EF and subsequently improves recovery trajectories.

Conclusion

This was the first study to explore the relationship between EF and CPTSD symptom clusters in a clinical sample of UK Armed Forces veterans. That DSO symptom clusters, in addition to PTSD clusters, were associated with EF builds on previous findings and suggests that CPTSD treatment outcomes could similarly be affected by levels of EF impairment in veteran patients. Future research should explore the clinical implications of these findings further.

Data availability

The datasets analysed during the current study are not publicly available due to patient confidentiality.

Abbreviations

CPTSD: Complex posttraumatic stress disorder

DMN: Default mode network

DSO: Disturbances in self-organisation

EF: Executive function

PTSD: Posttraumatic stress disorder

Stevelink SAM, Jones M, Hull L, Pernet D, MacCrimmon S, Goodwin L, et al. Mental health outcomes at the end of the British involvement in the Iraq and Afghanistan conflicts: a cohort study. Br J Psychiatry. 2018;213(6):690–7.


Williamson C, Baumann J, Murphy D. Exploring the health and well-being of a national sample of U.K. treatment-seeking veterans. Psychological Trauma: Theory, Research, Practice, and Policy [Internet]. 2022 Oct 10 [cited 2022 Nov 1]; http://doi.apa.org/getdoi.cfm?doi=10.1037/tra0001356 .

American Psychiatric Association. Diagnostic and statistical manual of mental disorders. 5th ed. Arlington, VA: Author; 2013.

International Classification of Diseases, Eleventh Revision (ICD-11), World Health Organization (WHO) 2019/2021. https://icd.who.int/browse11 .

Murphy D, Karatzias T, Busuttil W, Greenberg N, Shevlin M. ICD-11 posttraumatic stress disorder (PTSD) and complex PTSD (CPTSD) in treatment seeking veterans: risk factors and comorbidity. Soc Psychiatry Psychiatr Epidemiol. 2021;56(7):1289–98.


Cloitre M, Garvert DW, Brewin CR, Bryant RA, Maercker A. Evidence for proposed ICD-11 PTSD and complex PTSD: a latent profile analysis. Eur J Psychotraumatology. 2013;4(1):20706.


Maercker A, Brewin CR, Bryant RA, Cloitre M, Van Ommeren M, Jones LM, et al. Diagnosis and classification of disorders specifically associated with stress: proposals for ICD-11. World Psychiatry. 2013;12(3):198–206.

Murphy D, Busuttil W. Understanding the needs of veterans seeking support for mental health difficulties. BMJ Mil Health. 2020;166(4):211–3.

Karatzias T, Murphy P, Cloitre M, Bisson J, Roberts N, Shevlin M, et al. Psychological interventions for ICD-11 complex PTSD symptoms: systematic review and meta-analysis. Psychol Med. 2019;49(11):1761–75.

Aupperle RL, Melrose AJ, Stein MB, Paulus MP. Executive function and PTSD: disengaging from trauma. Neuropharmacology. 2012;62(2):686–94.

Vasterling JJ, Brewin CR, editors. Neuropsychology of PTSD: Biological, cognitive, and clinical perspectives. The Guilford; 2005.

Scott JC, Matt GE, Wrocklage KM, Crnich C, Jordan J, Southwick SM, et al. A quantitative meta-analysis of neurocognitive functioning in posttraumatic stress disorder. Psychol Bull. 2015;141(1):105–40.

RepovŠ G, Baddeley A. The multi-component model of working memory: explorations in experimental cognitive psychology. Neuroscience. 2006;139(1):5–21.

Stuss DT, Alexander MP. Executive functions and the frontal lobes: a conceptual view. Psychol Res. 2000;63(3–4):289–98.

Diamond A. Executive functions. Annu Rev Psychol. 2013;64(1):135–68.

Miyake A, Friedman NP, Emerson MJ, Witzki AH, Howerter A, Wager TD. The Unity and Diversity of Executive Functions and their contributions to Complex Frontal Lobe tasks: a latent variable analysis. Cogn Psychol. 2000;41(1):49–100.

Ben-Zion Z, Fine NB, Keynan NJ, Admon R, Green N, Halevi M, et al. Cognitive flexibility predicts PTSD symptoms: observational and interventional studies. Front Psychiatry. 2018;9:477.

Polak AR, Witteveen AB, Reitsma JB, Olff M. The role of executive function in posttraumatic stress disorder: a systematic review. J Affect Disord. 2012;141(1):11–21.

Bremner JD, Southwick SM, Johnson DR, Yehuda R, Charney DS. Childhood physical abuse and combat-related posttraumatic stress disorder in Vietnam veterans. Am J Psychiatry. 1993;150(2):235–9.

Vasterling JJ, Duke LM, Brailey K, Constans JI, Allain AN, Sutker PB. Attention, learning, and memory performances and intellectual resources in Vietnam veterans: PTSD and no disorder comparisons. Neuropsychology. 2002;16(1):5–14.

Vyas K, Murphy D, Greenberg N. Cognitive biases in military personnel with and without PTSD: a systematic review. J Mental Health. 2020;1–12.

Wild J, Gur RC. Verbal memory and treatment response in post-traumatic stress disorder. Br J Psychiatry. 2008;193(3):254–5.

Yuan P, Raz N. Prefrontal cortex and executive functions in healthy adults: a meta-analysis of structural neuroimaging studies. Neurosci Biobehavioral Reviews. 2014;42:180–92.

Buchsbaum BR, Greer S, Chang W, Berman KF. Meta-analysis of neuroimaging studies of the Wisconsin Card‐sorting task and component processes. Hum Brain Mapp. 2005;25(1):35–45.

Rottschy C, Langner R, Dogan I, Reetz K, Laird AR, Schulz JB, et al. Modelling neural correlates of working memory: a coordinate-based meta-analysis. NeuroImage. 2012;60(1):830–46.

Norman DA, Shallice T. Attention to Action: Willed and Automatic Control of Behavior. In: Davidson RJ, Schwartz GE, Shapiro D, editors. Consciousness and Self-Regulation [Internet]. Boston, MA: Springer US; 1986 [cited 2023 Oct 20]. pp. 1–18. http://link.springer.com/ https://doi.org/10.1007/978-1-4757-0629-1_1 .

Morey RA, Haswell CC, Hooper SR, De Bellis MD. Amygdala, hippocampus, and ventral medial prefrontal cortex volumes differ in maltreated youth with and without chronic posttraumatic stress disorder. Neuropsychopharmacol. 2016;41(3):791–801.

Koenigs M, Grafman J. The functional neuroanatomy of depression: distinct roles for ventromedial and dorsolateral prefrontal cortex. Behav Brain Res. 2009;201(2):239–43.

Bremner JD, Bolus R, Mayer EA. Psychometric properties of the early trauma inventory–self report. J Nerv Mental Disease. 2007;195(3):211–8.

Pitman RK, Rasmusson AM, Koenen KC, Shin LM, Orr SP, Gilbertson MW, et al. Biological studies of post-traumatic stress disorder. Nat Rev Neurosci. 2012;13(11):769–87.

Eysenck MW, Derakshan N, Santos R, Calvo MG. Anxiety and cognitive performance: attentional control theory. Emotion. 2007;7(2):336–53.

Falconer E, Bryant R, Felmingham KL, Kemp AH, Gordon E, Peduto A, Olivieri G, Williams LM. The neural networks of inhibitory control in posttraumatic stress disorder. J Psychiatry Neurosci. 2008;33(5):413–22. PMID: 18787658; PMCID: PMC2527717.


Etkin A, Gyurak A, O’Hara R. A neurobiological approach to the cognitive deficits of psychiatric disorders. Dialog Clin Neurosci. 2013;15(4):419–29.

Bressler SL, Menon V. Large-scale brain networks in cognition: emerging methods and principles. Trends Cogn Sci. 2010;14(6):277–90.

Gold AL, Morey RA, McCarthy G. Amygdala–Prefrontal Cortex Functional Connectivity during threat-Induced anxiety and goal distraction. Biol Psychiatry. 2015;77(4):394–403.

Shin YJ, Kim SM, Hong JS, Han DH. Correlations between cognitive functions and clinical symptoms in adolescents with Complex post-traumatic stress disorder. Front Public Health. 2021;9:586389.

Holst Y, Thorell LB. Adult executive functioning inventory (ADEXI): validity, reliability, and relations to ADHD. Int J Methods Psych Res. 2018;27(1):e1567.

Cloitre M, Shevlin M, Brewin CR, Bisson JI, Roberts NP, Maercker A, et al. The International Trauma Questionnaire: development of a self-report measure of ICD-11 PTSD and complex PTSD. Acta Psychiatr Scand. 2018;138(6):536–46.

Camden AA, Petri JM, Jackson BN, Jeffirs SM, Weathers FW. A psychometric evaluation of the International Trauma Questionnaire (ITQ) in a trauma-exposed college sample. Eur J Trauma Dissociation. 2023;7(1):100305.

Goldberg DP. General Health Questionnaire-12 [Internet]. American Psychological Association; 2011 [cited 2023 Jan 18]. http://doi.apa.org/getdoi.cfm?doi=10.1037/t00297-000 .

Kim YJ, Cho MJ, Park S, Hong JP, Sohn JH, Bae JN, et al. The 12-Item General Health Questionnaire as an effective Mental Health Screening Tool for General Korean Adult Population. Psychiatry Investig. 2013;10(4):352.

Kroenke K, Spitzer RL, Williams JBW. The PHQ-15: validity of a new measure for evaluating the severity of somatic symptoms. Psychosom Med. 2002;64(2):258–66.

Spitzer RL, Kroenke K, Williams JBW, Löwe B. A brief measure for assessing generalized anxiety disorder: the GAD-7. Arch Intern Med. 2006;166(10):1092.

Espie CA, Kyle SD, Hames P, Gardani M, Fleming L, Cape J. The Sleep Condition Indicator: a clinical screening tool to evaluate insomnia disorder. BMJ Open. 2014;4(3):e004183.

Forbes D, Alkemade N, Mitchell D, Elhai JD, McHugh T, Bates G, et al. Utility of the Dimensions of Anger Reactions-5 (DAR-5) scale as a brief anger measure. Depress Anxiety. 2014;31(2):166–73.

Kim HJ, Lee DH, Kim JH, Kang SE. Validation of the dimensions of anger reactions Scale (the DAR-5) in non-clinical South Korean adults. BMC Psychol. 2023;11(1):74.

Saunders JB, Aasland OG, Babor TF, De La Fuente JR, Grant M. Development of the Alcohol Use disorders Identification Test (AUDIT): WHO Collaborative Project on early detection of persons with harmful alcohol Consumption-II. Addiction. 1993;88(6):791–804.


Acknowledgements

The authors have no acknowledgements to declare.

Funding

This research was unfunded.

Author information

Authors and Affiliations

Combat Stress, Leatherhead, Surrey, KT22 0BX, UK

Natasha Biscoe & Dominic Murphy

Birmingham and Solihull Mental Health NHS Foundation Trust, Birmingham, UK

King’s Centre for Military Health Research, King’s College London, London, SE5 9PR, UK

Dominic Murphy


Contributions

DM conceptualised the study and prepared the data. NB analysed the data and drafted the manuscript. EN drafted the manuscript. All authors contributed to manuscript revision.

Corresponding author

Correspondence to Natasha Biscoe.

Ethics declarations

Ethics approval and consent to participate

Approval for the study was granted by the Combat Stress Research Ethics Committee (ref. pn2020). When providing consent, participants agreed that anonymised survey responses could be used for research. The study was performed in accordance with relevant guidelines and regulations, including the Declaration of Helsinki for research with human participants. Informed consent was obtained from all participants.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Biscoe, N., New, E. & Murphy, D. Complex PTSD symptom clusters and executive function in UK Armed Forces veterans: a cross-sectional study. BMC Psychol 12, 209 (2024). https://doi.org/10.1186/s40359-024-01713-w


Received: 23 October 2023

Accepted: 05 April 2024

Published: 15 April 2024

DOI: https://doi.org/10.1186/s40359-024-01713-w


Keywords

  • Mental health
  • Complex PTSD
  • Emotion dysregulation

BMC Psychology

ISSN: 2050-7283

