
Organizing Your Social Sciences Research Paper

Types of Research Designs

Introduction

Before beginning your paper, you need to decide how you plan to design the study.

The research design refers to the overall strategy and analytical approach that you have chosen in order to integrate, in a coherent and logical way, the different components of the study, thus ensuring that the research problem will be thoroughly investigated. It constitutes the blueprint for the collection, measurement, and interpretation of information and data. Note that the research problem determines the type of design you choose, not the other way around!

De Vaus, D. A. Research Design in Social Research. London: SAGE, 2001; Trochim, William M.K. Research Methods Knowledge Base. 2006.

General Structure and Writing Style

The function of a research design is to ensure that the evidence obtained enables you to effectively address the research problem logically and as unambiguously as possible. In social sciences research, obtaining information relevant to the research problem generally entails specifying the type of evidence needed to test the underlying assumptions of a theory, to evaluate a program, or to accurately describe and assess meaning related to an observable phenomenon.

With this in mind, a common mistake researchers make is to begin an investigation before thinking critically about what information is required to address the research problem. Without attending to these design issues beforehand, the overall research problem will not be adequately addressed and any conclusions drawn will run the risk of being weak and unconvincing. As a consequence, the overall validity of the study will be undermined.

The length and complexity of describing the research design in your paper can vary considerably, but any well-developed description will achieve the following:

  • Identify the research problem clearly and justify its selection, particularly in relation to any valid alternative designs that could have been used,
  • Review and synthesize previously published literature associated with the research problem,
  • Clearly and explicitly specify hypotheses [i.e., research questions] central to the problem,
  • Effectively describe the information and/or data which will be necessary for an adequate testing of the hypotheses and explain how such information and/or data will be obtained, and
  • Describe the methods of analysis to be applied to the data in determining whether or not the hypotheses are true or false.

The research design is usually incorporated into the introduction of your paper. You can obtain an overall sense of what to do by reviewing studies that have utilized the same research design [e.g., using a case study approach]. This can help you develop an outline to follow for your own paper.

NOTE: Use the SAGE Research Methods Online and Cases and the SAGE Research Methods Videos databases to search for scholarly resources on how to apply specific research designs and methods. The Research Methods Online database contains links to more than 175,000 pages of SAGE book, journal, and reference content on quantitative, qualitative, and mixed research methodologies. Also included is a collection of case studies of social research projects that can be used to help you better understand abstract or complex methodological concepts. The Research Methods Videos database contains hours of tutorials, interviews, video case studies, and mini-documentaries covering the entire research process.

Creswell, John W. and J. David Creswell. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. 5th edition. Thousand Oaks, CA: Sage, 2018; De Vaus, D. A. Research Design in Social Research. London: SAGE, 2001; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences. Thousand Oaks, CA: Sage, 2013; Leedy, Paul D. and Jeanne Ellis Ormrod. Practical Research: Planning and Design. Tenth edition. Boston, MA: Pearson, 2013; Vogt, W. Paul, Dianna C. Gardner, and Lynne M. Haeffele. When to Use What Research Design. New York: Guilford, 2012.

Action Research Design

Definition and Purpose

The essentials of action research design follow a characteristic cycle: an exploratory stance is adopted at first, in which an understanding of the problem is developed and plans are made for some form of intervention strategy. The intervention is then carried out [the "action" in action research], during which time pertinent observations are collected in various forms. New intervention strategies are carried out, and this cyclic process repeats, continuing until a sufficient understanding of [or a valid implementation solution for] the problem is achieved. The protocol is iterative or cyclical in nature and is intended to foster deeper understanding of a given situation, starting with conceptualizing and particularizing the problem and moving through several interventions and evaluations.

What do these studies tell you?

  • This is a collaborative and adaptive research design that lends itself to use in work or community situations.
  • Design focuses on pragmatic and solution-driven research outcomes rather than testing theories.
  • When practitioners use action research, it has the potential to increase the amount they learn consciously from their experience; the action research cycle can be regarded as a learning cycle.
  • Action research studies often have direct and obvious relevance to improving practice and advocating for change.
  • There are no hidden controls or preemption of direction by the researcher.

What these studies don't tell you?

  • It is harder to do than conducting conventional research because the researcher takes on responsibilities of advocating for change as well as for researching the topic.
  • Action research is much harder to write up because it is less likely that you can use a standard format to report your findings effectively [i.e., data is often in the form of stories or observation].
  • Personal over-involvement of the researcher may bias research results.
  • The cyclic nature of action research to achieve its twin outcomes of action [e.g. change] and research [e.g. understanding] is time-consuming and complex to conduct.
  • Advocating for change usually requires buy-in from study participants.

Coghlan, David and Mary Brydon-Miller. The Sage Encyclopedia of Action Research. Thousand Oaks, CA: Sage, 2014; Efron, Sara Efrat and Ruth Ravid. Action Research in Education: A Practical Guide. New York: Guilford, 2013; Gall, Meredith. Educational Research: An Introduction. Chapter 18, Action Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences. Thousand Oaks, CA: Sage, 2013; Kemmis, Stephen and Robin McTaggart. “Participatory Action Research.” In Handbook of Qualitative Research. Norman Denzin and Yvonna S. Lincoln, eds. 2nd ed. (Thousand Oaks, CA: SAGE, 2000), pp. 567-605; McNiff, Jean. Writing and Doing Action Research. London: Sage, 2014; Reason, Peter and Hilary Bradbury. Handbook of Action Research: Participative Inquiry and Practice. Thousand Oaks, CA: SAGE, 2001.

Case Study Design

A case study is an in-depth study of a particular research problem rather than a sweeping statistical survey or comprehensive comparative inquiry. It is often used to narrow down a very broad field of research into one or a few easily researchable examples. The case study research design is also useful for testing whether a specific theory and model actually applies to phenomena in the real world. It is a useful design when not much is known about an issue or phenomenon.

What do these studies tell you?

  • Approach excels at bringing us to an understanding of a complex issue through detailed contextual analysis of a limited number of events or conditions and their relationships.
  • A researcher using a case study design can apply a variety of methodologies and rely on a variety of sources to investigate a research problem.
  • Design can extend experience or add strength to what is already known through previous research.
  • Social scientists, in particular, make wide use of this research design to examine contemporary real-life situations and provide the basis for the application of concepts and theories and the extension of methodologies.
  • The design can provide detailed descriptions of specific and rare cases.
What these studies don't tell you?

  • A single or small number of cases offers little basis for establishing reliability or for generalizing the findings to a wider population of people, places, or things.
  • Intense exposure to the study of a case may bias a researcher's interpretation of the findings.
  • Design does not facilitate assessment of cause and effect relationships.
  • Vital information may be missing, making the case hard to interpret.
  • The case may not be representative or typical of the larger problem being investigated.
  • If the criterion for selecting a case is that it represents a very unusual or unique phenomenon or problem, then your interpretation of the findings can only apply to that particular case.

Case Studies. Writing@CSU. Colorado State University; Anastas, Jeane W. Research Design for Social Work and the Human Services. Chapter 4, Flexible Methods: Case Study Design. 2nd ed. New York: Columbia University Press, 1999; Gerring, John. “What Is a Case Study and What Is It Good for?” American Political Science Review 98 (May 2004): 341-354; Greenhalgh, Trisha, editor. Case Study Evaluation: Past, Present and Future Challenges. Bingley, UK: Emerald Group Publishing, 2015; Mills, Albert J., Gabrielle Durepos, and Eiden Wiebe, editors. Encyclopedia of Case Study Research. Thousand Oaks, CA: SAGE Publications, 2010; Stake, Robert E. The Art of Case Study Research. Thousand Oaks, CA: SAGE, 1995; Yin, Robert K. Case Study Research: Design and Methods. Applied Social Research Methods Series, no. 5. 3rd ed. Thousand Oaks, CA: SAGE, 2003.

Causal Design

Causality studies may be thought of as understanding a phenomenon in terms of conditional statements in the form, “If X, then Y.” This type of research is used to measure what impact a specific change will have on existing norms and assumptions. Most social scientists seek causal explanations that reflect tests of hypotheses. Causal effect (nomothetic perspective) occurs when variation in one phenomenon, an independent variable, leads to or results, on average, in variation in another phenomenon, the dependent variable.

Conditions necessary for determining causality:

  • Empirical association -- a valid conclusion is based on finding an association between the independent variable and the dependent variable.
  • Appropriate time order -- to conclude that causation was involved, one must see that cases were exposed to variation in the independent variable before variation in the dependent variable.
  • Nonspuriousness -- a relationship between two variables that is not due to variation in a third variable [see the brief simulation after this list].
What do these studies tell you?

  • Causality research designs assist researchers in understanding why the world works the way it does through the process of demonstrating a causal link between variables and by the process of eliminating other possibilities.
  • Replication is possible.
  • There is greater confidence the study has internal validity due to the systematic subject selection and equity of groups being compared.
What these studies don't tell you?

  • Not all relationships are causal! The possibility always exists that, by sheer coincidence, two unrelated events appear to be related [e.g., Punxsutawney Phil could accurately predict the duration of winter for five consecutive years but, the fact remains, he's just a big, furry rodent].
  • Conclusions about causal relationships are difficult to determine due to a variety of extraneous and confounding variables that exist in a social environment. This means causality can only be inferred, never proven.
  • Correlation alone does not establish causation; for a causal relationship, the cause must come before the effect. However, even though two variables might be causally related, it can sometimes be difficult to determine which variable changes first and, therefore, to establish which variable is the actual cause and which is the actual effect.
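
To make the nonspuriousness condition concrete, here is a minimal simulation sketch. The variables and numbers are hypothetical and are not part of the guide: a third variable Z drives both X and Y, so X and Y display a strong empirical association even though neither causes the other, and adjusting for Z removes it.

```python
# Hypothetical illustration of a spurious association: Z causes both X and Y,
# but X has no causal effect on Y.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
z = rng.normal(size=n)           # third (confounding) variable
x = 2 * z + rng.normal(size=n)   # X depends only on Z
y = 3 * z + rng.normal(size=n)   # Y depends only on Z

# Strong raw correlation between X and Y (about 0.85) despite no causal link.
print(np.corrcoef(x, y)[0, 1])

# Residualizing both variables on Z removes the association (near zero),
# showing the X-Y relationship was due to variation in Z.
x_resid = x - np.polyval(np.polyfit(z, x, 1), z)
y_resid = y - np.polyval(np.polyfit(z, y, 1), z)
print(np.corrcoef(x_resid, y_resid)[0, 1])
```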

Beach, Derek and Rasmus Brun Pedersen. Causal Case Study Methods: Foundations and Guidelines for Comparing, Matching, and Tracing. Ann Arbor, MI: University of Michigan Press, 2016; Bachman, Ronet. The Practice of Research in Criminology and Criminal Justice. Chapter 5, Causation and Research Designs. 3rd ed. Thousand Oaks, CA: Pine Forge Press, 2007; Brewer, Ernest W. and Jennifer Kuhn. “Causal-Comparative Design.” In Encyclopedia of Research Design. Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 125-132; Causal Research Design: Experimentation. Anonymous SlideShare Presentation; Gall, Meredith. Educational Research: An Introduction. Chapter 11, Nonexperimental Research: Correlational Designs. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Trochim, William M.K. Research Methods Knowledge Base. 2006.

Cohort Design

Often used in the medical sciences, but also found in the applied social sciences, a cohort study generally refers to a study conducted over a period of time involving members of a population who are united by some commonality or similarity. Using a quantitative framework, a cohort study makes note of statistical occurrence within a specialized subgroup, united by same or similar characteristics that are relevant to the research problem being investigated, rather than studying statistical occurrence within the general population. Using a qualitative framework, cohort studies generally gather data using methods of observation. Cohorts can be either "open" or "closed."

  • Open Cohort Studies [dynamic populations, such as the population of Los Angeles] involve a population that is defined simply by the state of being a part of the study in question (and being monitored for the outcome). Dates of entry and exit from the study are individually defined; therefore, the size of the study population is not constant. In open cohort studies, researchers can only calculate rate-based data, such as incidence rates and variants thereof [a brief worked example follows this list].
  • Closed Cohort Studies [static populations, such as patients entered into a clinical trial] involve participants who enter into the study at one defining point in time and where it is presumed that no new participants can enter the cohort. Given this, the number of study participants remains constant (or can only decrease).
What do these studies tell you?

  • The use of cohorts is often mandatory because a randomized controlled study may be unethical. For example, you cannot deliberately expose people to asbestos, you can only study its effects on those who have already been exposed. Research that measures risk factors often relies upon cohort designs.
  • Because cohort studies measure potential causes before the outcome has occurred, they can demonstrate that these “causes” preceded the outcome, thereby avoiding the debate as to which is the cause and which is the effect.
  • Cohort analysis is highly flexible and can provide insight into effects over time and related to a variety of different types of changes [e.g., social, cultural, political, economic, etc.].
  • Either original data or secondary data can be used in this design.
What these studies don't tell you?

  • In cases where a comparative analysis of two cohorts is made [e.g., studying the effects of one group exposed to asbestos and one that has not], a researcher cannot control for all other factors that might differ between the two groups. These factors are known as confounding variables.
  • Cohort studies can end up taking a long time to complete if the researcher must wait for the conditions of interest to develop within the group. This also increases the chance that key variables change during the course of the study, potentially impacting the validity of the findings.
  • Due to the lack of randomization in the cohort design, its external validity is lower than that of study designs where the researcher randomly assigns participants.
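
As a worked example of the rate-based data mentioned in the list above, the sketch below computes an incidence rate from person-time at risk. All numbers are invented for illustration; in an open cohort, each participant contributes follow-up time from their individual date of entry to their date of exit.

```python
# Hypothetical open-cohort data: (years of follow-up, developed the outcome?)
follow_up = [(5.0, False), (2.5, True), (4.0, False), (1.0, True), (3.5, False)]

cases = sum(1 for years, outcome in follow_up if outcome)
person_years = sum(years for years, _ in follow_up)

# Incidence rate = new cases / total person-time at risk.
incidence_rate = cases / person_years
print(f"{cases} cases / {person_years} person-years "
      f"= {incidence_rate:.3f} cases per person-year")  # 2 / 16.0 = 0.125
```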

Healy, P. and D. Devane. “Methodological Considerations in Cohort Study Designs.” Nurse Researcher 18 (2011): 32-36; Glenn, Norval D., editor. Cohort Analysis. 2nd edition. Thousand Oaks, CA: Sage, 2005; Levin, Kate Ann. “Study Design IV: Cohort Studies.” Evidence-Based Dentistry 7 (2003): 51-52; Payne, Geoff. “Cohort Study.” In The SAGE Dictionary of Social Research Methods. Victor Jupp, editor. (Thousand Oaks, CA: Sage, 2006), pp. 31-33; Study Design 101. Himmelfarb Health Sciences Library. George Washington University, November 2011; Cohort Study. Wikipedia.

Cross-Sectional Design

Cross-sectional research designs have three distinctive features: no time dimension; a reliance on existing differences rather than change following intervention; and, groups are selected based on existing differences rather than random allocation. The cross-sectional design can only measure differences between or from among a variety of people, subjects, or phenomena rather than a process of change. As such, researchers using this design can only employ a relatively passive approach to making causal inferences based on findings.

What do these studies tell you?

  • Cross-sectional studies provide a clear 'snapshot' of the outcome and the characteristics associated with it, at a specific point in time.
  • Unlike an experimental design, where there is an active intervention by the researcher to produce and measure change or to create differences, cross-sectional designs focus on studying and drawing inferences from existing differences between people, subjects, or phenomena.
  • Entails collecting data at and concerning one point in time. While longitudinal studies involve taking multiple measures over an extended period of time, cross-sectional research is focused on finding relationships between variables at one moment in time.
  • Groups identified for study are purposely selected based upon existing differences in the sample rather than seeking random sampling.
  • Cross-sectional studies are capable of using data from a large number of subjects and, unlike observational studies, are not geographically bound.
  • Can estimate prevalence of an outcome of interest because the sample is usually taken from the whole population [see the brief sketch after this list].
  • Because cross-sectional designs generally use survey techniques to gather data, they are relatively inexpensive and take up little time to conduct.
What these studies don't tell you?

  • Finding people, subjects, or phenomena to study that are very similar except in one specific variable can be difficult.
  • Results are static and time bound and, therefore, give no indication of a sequence of events or reveal historical or temporal contexts.
  • Studies cannot be utilized to establish cause and effect relationships.
  • This design only provides a snapshot of analysis so there is always the possibility that a study could have differing results if another time-frame had been chosen.
  • There is no follow up to the findings.
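
To illustrate the prevalence estimate noted in the list above, here is a minimal sketch with made-up numbers, assuming a simple random sample taken at a single point in time:

```python
import math

n = 1200      # hypothetical survey respondents at one point in time
cases = 180   # respondents who have the outcome of interest

prevalence = cases / n
# Normal-approximation 95% confidence interval for a proportion.
se = math.sqrt(prevalence * (1 - prevalence) / n)
low, high = prevalence - 1.96 * se, prevalence + 1.96 * se
print(f"prevalence = {prevalence:.1%} (95% CI {low:.1%} to {high:.1%})")
```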

Bethlehem, Jelke. "7: Cross-sectional Research." In Research Methodology in the Social, Behavioural and Life Sciences. Herman J. Adèr and Gideon J. Mellenbergh, editors. (London, England: Sage, 1999), pp. 110-143; Bourque, Linda B. “Cross-Sectional Design.” In The SAGE Encyclopedia of Social Science Research Methods. Michael S. Lewis-Beck, Alan Bryman, and Tim Futing Liao, editors. (Thousand Oaks, CA: Sage, 2004), pp. 230-231; Hall, John. “Cross-Sectional Survey Design.” In Encyclopedia of Survey Research Methods. Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 173-174; Barratt, Helen and Maria Kirwan. Cross-Sectional Studies: Design Application, Strengths and Weaknesses of Cross-Sectional Studies. Healthknowledge, 2009; Cross-Sectional Study. Wikipedia.

Descriptive Design

Descriptive research designs help provide answers to the questions of who, what, when, where, and how associated with a particular research problem; a descriptive study cannot conclusively ascertain answers to why. Descriptive research is used to obtain information concerning the current status of the phenomena and to describe "what exists" with respect to variables or conditions in a situation.

What do these studies tell you?

  • The subject is being observed in a completely natural and unchanged environment. True experiments, whilst giving analyzable data, often adversely influence the normal behavior of the subject [a.k.a., the Heisenberg effect whereby measurements of certain systems cannot be made without affecting the systems].
  • Descriptive research is often used as a precursor to more quantitative research designs, with the general overview giving some valuable pointers as to what variables are worth testing quantitatively.
  • If the limitations are understood, they can be a useful tool in developing a more focused study.
  • Descriptive studies can yield rich data that lead to important recommendations in practice.
  • Approach collects a large amount of data for detailed analysis.
What these studies don't tell you?

  • The results of descriptive research cannot be used to discover a definitive answer or to disprove a hypothesis.
  • Because descriptive designs often utilize observational methods [as opposed to quantitative methods], the results cannot be replicated.
  • The descriptive function of research is heavily dependent on instrumentation for measurement and observation.

Anastas, Jeane W. Research Design for Social Work and the Human Services. Chapter 5, Flexible Methods: Descriptive Research. 2nd ed. New York: Columbia University Press, 1999; Given, Lisa M. "Descriptive Research." In Encyclopedia of Measurement and Statistics. Neil J. Salkind and Kristin Rasmussen, editors. (Thousand Oaks, CA: Sage, 2007), pp. 251-254; McNabb, Connie. Descriptive Research Methodologies. PowerPoint Presentation; Shuttleworth, Martyn. Descriptive Research Design, September 26, 2008; Erickson, G. Scott. "Descriptive Research Design." In New Methods of Market Research and Analysis. (Northampton, MA: Edward Elgar Publishing, 2017), pp. 51-77; Sahin, Sagufta and Jayanta Mete. "A Brief Study on Descriptive Research: Its Nature and Application in Social Science." International Journal of Research and Analysis in Humanities 1 (2021): 11; Swatzell, K. and P. Jennings. “Descriptive Research: The Nuts and Bolts.” Journal of the American Academy of Physician Assistants 20 (2007): 55-56; Kane, E. Doing Your Own Research: Basic Descriptive Research in the Social Sciences and Humanities. London: Marion Boyars, 1985.

Experimental Design

An experimental design is a blueprint of the procedure that enables the researcher to maintain control over all factors that may affect the result of an experiment. In doing this, the researcher attempts to determine or predict what may occur. Experimental research is often used where there is time priority in a causal relationship (cause precedes effect), there is consistency in a causal relationship (a cause will always lead to the same effect), and the magnitude of the correlation is great. The classic experimental design specifies an experimental group and a control group. The independent variable is administered to the experimental group and not to the control group, and both groups are measured on the same dependent variable. Subsequent experimental designs have used more groups and more measurements over longer periods. True experiments must have control, randomization, and manipulation.
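
The logic of the classic two-group design can be sketched in a short simulation. This is an illustration with invented values, not a prescribed procedure: subjects are randomly assigned, the independent variable is administered to the experimental group only, and both groups are then measured on the same dependent variable.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
# Randomization: each subject is assigned to the experimental or control group.
treatment = rng.permutation(n) < n // 2

# Both groups are measured on the same dependent variable; the manipulation
# (a hypothetical true effect of +5) is applied to the experimental group only.
outcome = rng.normal(loc=50, scale=10, size=n)
outcome[treatment] += 5

effect = outcome[treatment].mean() - outcome[~treatment].mean()
print(f"estimated treatment effect: {effect:.2f}")  # close to the true +5
```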

What do these studies tell you?

  • Experimental research allows the researcher to control the situation. In so doing, it allows researchers to answer the question, “What causes something to occur?”
  • Permits the researcher to identify cause and effect relationships between variables and to distinguish placebo effects from treatment effects.
  • Experimental research designs support the ability to limit alternative explanations and to infer direct causal relationships in the study.
  • Approach provides the highest level of evidence for single studies.
What these studies don't tell you?

  • The design is artificial, and results may not generalize well to the real world.
  • The artificial settings of experiments may alter the behaviors or responses of participants.
  • Experimental designs can be costly if special equipment or facilities are needed.
  • Some research problems cannot be studied using an experiment because of ethical or technical reasons.
  • Difficult to apply ethnographic and other qualitative methods to experimentally designed studies.

Anastas, Jeane W. Research Design for Social Work and the Human Services. Chapter 7, Flexible Methods: Experimental Research. 2nd ed. New York: Columbia University Press, 1999; Chapter 2: Research Design, Experimental Designs. School of Psychology, University of New England, 2000; Chow, Siu L. "Experimental Design." In Encyclopedia of Research Design. Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 448-453; "Experimental Design." In Social Research Methods. Nicholas Walliman, editor. (London, England: Sage, 2006), pp. 101-110; Experimental Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Kirk, Roger E. Experimental Design: Procedures for the Behavioral Sciences. 4th edition. Thousand Oaks, CA: Sage, 2013; Trochim, William M.K. Experimental Design. Research Methods Knowledge Base. 2006; Rasool, Shafqat. Experimental Research. SlideShare presentation.

Exploratory Design

An exploratory design is conducted about a research problem when there are few or no earlier studies to refer to or rely upon to predict an outcome. The focus is on gaining insights and familiarity for later investigation, or such studies are undertaken when research problems are in a preliminary stage of investigation. Exploratory designs are often used to establish an understanding of how best to proceed in studying an issue or what methodology would effectively apply to gathering information about the issue.

Exploratory research is intended to produce the following possible insights:

  • Familiarity with basic details, settings, and concerns.
  • A well-grounded picture of the situation being developed.
  • Generation of new ideas and assumptions.
  • Development of tentative theories or hypotheses.
  • Determination about whether a study is feasible in the future.
  • Issues get refined for more systematic investigation and formulation of new research questions.
  • Direction for future research and techniques get developed.
What do these studies tell you?

  • Design is a useful approach for gaining background information on a particular topic.
  • Exploratory research is flexible and can address research questions of all types (what, why, how).
  • Provides an opportunity to define new terms and clarify existing concepts.
  • Exploratory research is often used to generate formal hypotheses and develop more precise research problems.
  • In the policy arena or applied to practice, exploratory studies help establish research priorities and where resources should be allocated.
What these studies don't tell you?

  • Exploratory research generally utilizes small sample sizes and, thus, findings are typically not generalizable to the population at large.
  • The exploratory nature of the research inhibits the ability to make definitive conclusions about the findings; it provides insight, but not definitive answers.
  • The research process underpinning exploratory studies is flexible but often unstructured, leading to only tentative results that have limited value to decision-makers.
  • Design lacks rigorous standards applied to methods of data gathering and analysis because one of the areas for exploration could be to determine what method or methodologies could best fit the research problem.

Cuthill, Michael. “Exploratory Research: Citizen Participation, Local Government, and Sustainable Development in Australia.” Sustainable Development 10 (2002): 79-89; Streb, Christoph K. "Exploratory Case Study." In Encyclopedia of Case Study Research. Albert J. Mills, Gabrielle Durepos and Eiden Wiebe, editors. (Thousand Oaks, CA: Sage, 2010), pp. 372-374; Taylor, P. J., G. Catalano, and D.R.F. Walker. “Exploratory Analysis of the World City Network.” Urban Studies 39 (December 2002): 2377-2394; Exploratory Research. Wikipedia.

Field Research Design

Sometimes referred to as ethnography or participant observation, designs around field research encompass a variety of interpretative procedures [e.g., observation and interviews] rooted in qualitative approaches to studying people individually or in groups while they inhabit their natural environment, as opposed to using survey instruments or other forms of impersonal methods of data gathering. Information acquired from observational research takes the form of "field notes" that involve documenting what the researcher actually sees and hears while in the field. Findings do not consist of conclusive statements derived from numbers and statistics because field research involves analysis of words and observations of behavior. Conclusions, therefore, are developed from an interpretation of findings that reveal overriding themes, concepts, and ideas.

What do these studies tell you?

  • Field research is often necessary to fill gaps in understanding the research problem applied to local conditions or to specific groups of people that cannot be ascertained from existing data.
  • The research helps contextualize already known information about a research problem, thereby facilitating ways to assess the origins, scope, and scale of a problem and to gauge the causes, consequences, and means to resolve an issue based on deliberate interaction with people in their natural inhabited spaces.
  • Enables the researcher to corroborate or confirm data by gathering additional information that supports or refutes findings reported in prior studies of the topic.
  • Because the researcher is embedded in the field, they are better able to make observations or ask questions that reflect the specific cultural context of the setting being investigated.
  • Observing the local reality offers the opportunity to gain new perspectives or obtain unique data that challenges existing theoretical propositions or long-standing assumptions found in the literature.

What these studies don't tell you?

  • A field research study requires extensive time and resources to carry out the multiple steps involved with preparing for the gathering of information, including for example, examining background information about the study site, obtaining permission to access the study site, and building trust and rapport with subjects.
  • Requires a commitment to staying engaged in the field to ensure that you can adequately document events and behaviors as they unfold.
  • The unpredictable nature of fieldwork means that researchers can never fully control the process of data gathering. They must maintain a flexible approach to studying the setting because events and circumstances can change quickly or unexpectedly.
  • Findings can be difficult to interpret and verify without access to documents and other source materials that help to enhance the credibility of information obtained from the field [i.e., the act of triangulating the data].
  • Linking the research problem to the selection of study participants inhabiting their natural environment is critical. However, this specificity limits the ability to generalize findings to different situations or in other contexts or to infer courses of action applied to other settings or groups of people.
  • The reporting of findings must take into account how the researcher themselves may have inadvertently affected respondents and their behaviors.

Historical Design

The purpose of a historical research design is to collect, verify, and synthesize evidence from the past to establish facts that defend or refute a hypothesis. It uses secondary sources and a variety of primary documentary evidence, such as diaries, official records, reports, archives, and non-textual information [maps, pictures, audio and visual recordings]. The limitation is that the sources must be both authentic and valid.

What do these studies tell you?

  • The historical research design is unobtrusive; the act of research does not affect the results of the study.
  • The historical approach is well suited for trend analysis.
  • Historical records can add important contextual background required to more fully understand and interpret a research problem.
  • There is often no possibility of researcher-subject interaction that could affect the findings.
  • Historical sources can be used over and over to study different research problems or to replicate a previous study.
What these studies don't tell you?

  • The ability to fulfill the aims of your research is directly related to the amount and quality of documentation available to understand the research problem.
  • Since historical research relies on data from the past, there is no way to manipulate it to control for contemporary contexts.
  • Interpreting historical sources can be very time consuming.
  • The sources of historical materials must be archived consistently to ensure access. This may be especially challenging for digital or online-only sources.
  • Original authors bring their own perspectives and biases to the interpretation of past events and these biases are more difficult to ascertain in historical resources.
  • Due to the lack of control over external variables, historical research is very weak with regard to the demands of internal validity.
  • It is rare that the entirety of historical documentation needed to fully address a research problem is available for interpretation, therefore, gaps need to be acknowledged.

Howell, Martha C. and Walter Prevenier. From Reliable Sources: An Introduction to Historical Methods. Ithaca, NY: Cornell University Press, 2001; Lundy, Karen Saucier. "Historical Research." In The Sage Encyclopedia of Qualitative Research Methods. Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 396-400; Marius, Richard and Melvin E. Page. A Short Guide to Writing about History. 9th edition. Boston, MA: Pearson, 2015; Savitt, Ronald. “Historical Research in Marketing.” Journal of Marketing 44 (Autumn 1980): 52-58; Gall, Meredith. Educational Research: An Introduction. Chapter 16, Historical Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007.

Longitudinal Design

A longitudinal study follows the same sample over time and makes repeated observations. For example, with longitudinal surveys, the same group of people is interviewed at regular intervals, enabling researchers to track changes over time and to relate them to variables that might explain why the changes occur. Longitudinal research designs describe patterns of change and help establish the direction and magnitude of causal relationships. Measurements are taken on each variable over two or more distinct time periods. This allows the researcher to measure change in variables over time. It is a type of observational study sometimes referred to as a panel study.
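
A minimal sketch of this repeated-measures logic, using hypothetical names and scores: because the same subjects are measured at each wave, change is computed within subjects rather than by comparing two different samples.

```python
# The same panel of subjects measured at two distinct time periods.
wave_1 = {"anna": 12, "ben": 15, "carla": 9}   # scores at time 1
wave_2 = {"anna": 14, "ben": 15, "carla": 13}  # same subjects at time 2

# Within-subject change between waves.
changes = {who: wave_2[who] - wave_1[who] for who in wave_1}
print(changes)                               # {'anna': 2, 'ben': 0, 'carla': 4}
print(sum(changes.values()) / len(changes))  # average change over time: 2.0
```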

What do these studies tell you?

  • Longitudinal data facilitate the analysis of the duration of a particular phenomenon.
  • Enables survey researchers to get close to the kinds of causal explanations usually attainable only with experiments.
  • The design permits the measurement of differences or change in a variable from one period to another [i.e., the description of patterns of change over time].
  • Longitudinal studies facilitate the prediction of future outcomes based upon earlier factors.
What these studies don't tell you?

  • The data collection method may change over time.
  • Maintaining the integrity of the original sample can be difficult over an extended period of time.
  • It can be difficult to show more than one variable at a time.
  • This design often needs qualitative research data to explain fluctuations in the results.
  • A longitudinal research design assumes present trends will continue unchanged.
  • It can take a long period of time to gather results.
  • There is a need to have a large sample size and accurate sampling to reach representativeness.

Anastas, Jeane W. Research Design for Social Work and the Human Services. Chapter 6, Flexible Methods: Relational and Longitudinal Research. 2nd ed. New York: Columbia University Press, 1999; Forgues, Bernard and Isabelle Vandangeon-Derumez. "Longitudinal Analyses." In Doing Management Research. Raymond-Alain Thiétart and Samantha Wauchope, editors. (London, England: Sage, 2001), pp. 332-351; Kalaian, Sema A. and Rafa M. Kasim. "Longitudinal Studies." In Encyclopedia of Survey Research Methods. Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 440-441; Menard, Scott, editor. Longitudinal Research. Thousand Oaks, CA: Sage, 2002; Ployhart, Robert E. and Robert J. Vandenberg. "Longitudinal Research: The Theory, Design, and Analysis of Change.” Journal of Management 36 (January 2010): 94-120; Longitudinal Study. Wikipedia.

Meta-Analysis Design

Meta-analysis is an analytical methodology designed to systematically evaluate and summarize the results from a number of individual studies, thereby increasing the overall sample size and the ability of the researcher to study effects of interest. The purpose is not simply to summarize existing knowledge, but to develop a new understanding of a research problem using synoptic reasoning. The main objectives of meta-analysis include analyzing differences in the results among studies and increasing the precision by which effects are estimated [a brief pooling sketch appears after the lists below]. A well-designed meta-analysis depends upon strict adherence to the criteria used for selecting studies and the availability of information in each study to properly analyze their findings. Lack of information can severely limit the types of analyses and conclusions that can be reached. In addition, the more dissimilarity there is in the results among individual studies [heterogeneity], the more difficult it is to justify interpretations that govern a valid synopsis of results. A meta-analysis needs to fulfill the following requirements to ensure the validity of your findings:

  • Clearly defined description of objectives, including precise definitions of the variables and outcomes that are being evaluated;
  • A well-reasoned and well-documented justification for identification and selection of the studies;
  • Assessment and explicit acknowledgment of any researcher bias in the identification and selection of those studies;
  • Description and evaluation of the degree of heterogeneity among the sample size of studies reviewed; and,
  • Justification of the techniques used to evaluate the studies.
What do these studies tell you?

  • Can be an effective strategy for determining gaps in the literature.
  • Provides a means of reviewing research published about a particular topic over an extended period of time and from a variety of sources.
  • Is useful in clarifying what policy or programmatic actions can be justified on the basis of analyzing research results from multiple studies.
  • Provides a method for overcoming small sample sizes in individual studies that previously may have had little relationship to each other.
  • Can be used to generate new hypotheses or highlight research problems for future studies.
What these studies don't tell you?

  • Small violations in defining the criteria used for content analysis can lead to difficult-to-interpret and/or meaningless findings.
  • A large sample size can yield reliable, but not necessarily valid, results.
  • A lack of uniformity regarding, for example, the type of literature reviewed, how methods are applied, and how findings are measured within the sample of studies you are analyzing, can make the process of synthesis difficult to perform.
  • Depending on the sample size, the process of reviewing and synthesizing multiple studies can be very time consuming.
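
The pooling sketch mentioned above: a minimal fixed-effect, inverse-variance meta-analysis with invented study results. Each study's effect estimate is weighted by the inverse of its variance, so larger and more precise studies count for more; the pooled estimate then has a smaller standard error than any single study, which is the gain in precision described earlier.

```python
# Hypothetical study results: (effect estimate, standard error) per study.
studies = [(0.30, 0.15), (0.45, 0.20), (0.25, 0.10), (0.40, 0.25)]

# Fixed-effect, inverse-variance weights: more precise studies weigh more.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

# The pooled SE (~0.073) is smaller than the smallest single-study SE (0.10).
print(f"pooled effect = {pooled:.3f} (SE {pooled_se:.3f})")
```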

Beck, Lewis W. "The Synoptic Method." The Journal of Philosophy 36 (1939): 337-345; Cooper, Harris, Larry V. Hedges, and Jeffrey C. Valentine, eds. The Handbook of Research Synthesis and Meta-Analysis. 2nd edition. New York: Russell Sage Foundation, 2009; Guzzo, Richard A., Susan E. Jackson, and Raymond A. Katzell. “Meta-Analysis Analysis.” In Research in Organizational Behavior, Volume 9. (Greenwich, CT: JAI Press, 1987), pp. 407-442; Lipsey, Mark W. and David B. Wilson. Practical Meta-Analysis. Thousand Oaks, CA: Sage Publications, 2001; Study Design 101. Meta-Analysis. The Himmelfarb Health Sciences Library, George Washington University; Timulak, Ladislav. “Qualitative Meta-Analysis.” In The SAGE Handbook of Qualitative Data Analysis. Uwe Flick, editor. (Los Angeles, CA: Sage, 2013), pp. 481-495; Walker, Esteban, Adrian V. Hernandez, and Michael W. Kattan. "Meta-Analysis: Its Strengths and Limitations." Cleveland Clinic Journal of Medicine 75 (June 2008): 431-439.

Mixed-Method Design

What do these studies tell you?

  • Narrative and non-textual information can add meaning to numeric data, while numeric data can add precision to narrative and non-textual information.
  • Can utilize existing data while at the same time generating and testing a grounded theory approach to describe and explain the phenomenon under study.
  • A broader, more complex research problem can be investigated because the researcher is not constrained by using only one method.
  • The strengths of one method can be used to overcome the inherent weaknesses of another method.
  • Can provide stronger, more robust evidence to support a conclusion or set of recommendations.
  • May generate new knowledge or uncover hidden insights, patterns, or relationships that a single methodological approach might not reveal.
  • Produces more complete knowledge and understanding of the research problem that can be used to increase the generalizability of findings applied to theory or practice.
What these studies don't tell you?

  • A researcher must be proficient in understanding how to apply multiple methods to investigating a research problem as well as be proficient in optimizing how to design a study that coherently melds them together.
  • Can increase the likelihood of conflicting results or ambiguous findings that inhibit drawing a valid conclusion or setting forth a recommended course of action [e.g., sample interview responses do not support existing statistical data].
  • Because the research design can be very complex, reporting the findings requires a well-organized narrative, clear writing style, and precise word choice.
  • Design invites collaboration among experts. However, merging different investigative approaches and writing styles requires more attention to the overall research process than studies conducted using only one methodological paradigm.
  • Concurrent merging of quantitative and qualitative research requires greater attention to having adequate sample sizes, using comparable samples, and applying a consistent unit of analysis. For sequential designs where one phase of qualitative research builds on the quantitative phase or vice versa, decisions about what results from the first phase to use in the next phase, the choice of samples and estimating reasonable sample sizes for both phases, and the interpretation of results from both phases can be difficult.
  • Due to multiple forms of data being collected and analyzed, this design requires extensive time and resources to carry out the multiple steps involved in data gathering and interpretation.

Burch, Patricia and Carolyn J. Heinrich. Mixed Methods for Policy Research and Program Evaluation. Thousand Oaks, CA: Sage, 2016; Creswell, John W. et al. Best Practices for Mixed Methods Research in the Health Sciences. Bethesda, MD: Office of Behavioral and Social Sciences Research, National Institutes of Health, 2010; Creswell, John W. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. 4th edition. Thousand Oaks, CA: Sage Publications, 2014; Domínguez, Silvia, editor. Mixed Methods Social Networks Research. Cambridge, UK: Cambridge University Press, 2014; Hesse-Biber, Sharlene Nagy. Mixed Methods Research: Merging Theory with Practice. New York: Guilford Press, 2010; Niglas, Katrin. “How the Novice Researcher Can Make Sense of Mixed Methods Designs.” International Journal of Multiple Research Approaches 3 (2009): 34-46; Onwuegbuzie, Anthony J. and Nancy L. Leech. “Linking Research Questions to Mixed Methods Data Analysis Procedures.” The Qualitative Report 11 (September 2006): 474-498; Tashakkori, Abbas and John W. Creswell. “The New Era of Mixed Methods.” Journal of Mixed Methods Research 1 (January 2007): 3-7; Zhang, Wanqing. “Mixed Methods Application in Health Intervention Research: A Multiple Case Study.” International Journal of Multiple Research Approaches 8 (2014): 24-35.

Observational Design

This type of research design draws a conclusion by comparing subjects against a control group, in cases where the researcher has no control over the experiment. There are two general types of observational designs. In direct observations, people know that you are watching them. Unobtrusive measures involve any method for studying behavior where individuals do not know they are being observed. An observational study allows a useful insight into a phenomenon and avoids the ethical and practical difficulties of setting up a large and cumbersome research project.

What do these studies tell you?

  • Observational studies are usually flexible and do not necessarily need to be structured around a hypothesis about what you expect to observe [data is emergent rather than pre-existing].
  • The researcher is able to collect in-depth information about a particular behavior.
  • Can reveal interrelationships among multifaceted dimensions of group interactions.
  • You can generalize your results to real life situations.
  • Observational research is useful for discovering what variables may be important before applying other methods like experiments.
  • Observation research designs account for the complexity of group behaviors.
What these studies don't tell you?

  • Reliability of data is low because observing behaviors as they occur over and over again is a time-consuming task, and observed behaviors can be difficult to replicate.
  • In observational research, findings may only reflect a unique sample population and, thus, cannot be generalized to other groups.
  • There can be problems with bias as the researcher may only "see what they want to see."
  • There is no possibility to determine "cause and effect" relationships since nothing is manipulated.
  • Sources or subjects may not all be equally credible.
  • Any group that is knowingly studied is altered to some degree by the presence of the researcher, therefore, potentially skewing any data collected.

Atkinson, Paul and Martyn Hammersley. “Ethnography and Participant Observation.” In Handbook of Qualitative Research. Norman K. Denzin and Yvonna S. Lincoln, eds. (Thousand Oaks, CA: Sage, 1994), pp. 248-261; Observational Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Patton, Michael Quinn. Qualitative Research and Evaluation Methods. Chapter 6, Fieldwork Strategies and Observational Methods. 3rd ed. Thousand Oaks, CA: Sage, 2002; Payne, Geoff and Judy Payne. "Observation." In Key Concepts in Social Research. The SAGE Key Concepts series. (London, England: Sage, 2004), pp. 158-162; Rosenbaum, Paul R. Design of Observational Studies. New York: Springer, 2010; Williams, J. Patrick. "Nonparticipant Observation." In The Sage Encyclopedia of Qualitative Research Methods. Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 562-563.

Philosophical Design

Understood more as a broad approach to examining a research problem than as a methodological design, philosophical analysis and argumentation is intended to challenge deeply embedded, often intractable, assumptions underpinning an area of study. This approach uses the tools of argumentation derived from philosophical traditions, concepts, models, and theories to critically explore and challenge, for example, the relevance of logic and evidence in academic debates, to analyze arguments about fundamental issues, or to discuss the root of existing discourse about a research problem. These overarching tools of analysis can be framed in three ways:

  • Ontology -- the study that describes the nature of reality; for example, what is real and what is not, what is fundamental and what is derivative?
  • Epistemology -- the study that explores the nature of knowledge; for example, what does knowledge and understanding depend upon, and how can we be certain of what we know?
  • Axiology -- the study of values; for example, what values does an individual or group hold and why? How are values related to interest, desire, will, experience, and means-to-end? And, what is the difference between a matter of fact and a matter of value?
What do these studies tell you?

  • Can provide a basis for applying ethical decision-making to practice.
  • Functions as a means of gaining greater self-understanding and self-knowledge about the purposes of research.
  • Brings clarity to general guiding practices and principles of an individual or group.
  • Philosophy informs methodology.
  • Refines concepts and theories that are invoked in relatively unreflective modes of thought and discourse.
  • Beyond methodology, philosophy also informs critical thinking about epistemology and the structure of reality (metaphysics).
  • Offers clarity and definition to the practical and theoretical uses of terms, concepts, and ideas.
What these studies don't tell you?

  • Limited application to specific research problems [answering the "So What?" question in social science research].
  • Analysis can be abstract, argumentative, and limited in its practical application to real-life issues.
  • While a philosophical analysis may render problematic that which was once simple or taken-for-granted, the writing can be dense and subject to unnecessary jargon, overstatement, and/or excessive quotation and documentation.
  • There are limitations in the use of metaphor as a vehicle of philosophical analysis.
  • There can be analytical difficulties in moving from philosophy to advocacy and between abstract thought and application to the phenomenal world.

Burton, Dawn. "Part I, Philosophy of the Social Sciences." In Research Training for Social Scientists. (London, England: Sage, 2000), pp. 1-5; Chapter 4, Research Methodology and Design. Unisa Institutional Repository (UnisaIR), University of South Africa; Jarvie, Ian C. and Jesús Zamora-Bonilla, editors. The SAGE Handbook of the Philosophy of Social Sciences. London: Sage, 2011; Labaree, Robert V. and Ross Scimeca. “The Philosophical Problem of Truth in Librarianship.” The Library Quarterly 78 (January 2008): 43-70; Maykut, Pamela S. Beginning Qualitative Research: A Philosophic and Practical Guide. Washington, DC: Falmer Press, 1994; McLaughlin, Hugh. "The Philosophy of Social Research." In Understanding Social Work Research. 2nd edition. (London: SAGE Publications Ltd., 2012), pp. 24-47; Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, CSLI, Stanford University, 2013.

Sequential Design

What do these studies tell you?

  • The researcher has limitless options when it comes to sample size and the sampling schedule.
  • Due to the repetitive nature of this research design, minor changes and adjustments can be done during the initial parts of the study to correct and hone the research method.
  • This is a useful design for exploratory studies.
  • There is very little effort on the part of the researcher when performing this technique. It is generally not expensive, time consuming, or workforce intensive.
  • Because the study is conducted serially, the results of one sample are known before the next sample is taken and analyzed. This provides opportunities for continuous improvement of sampling and methods of analysis.
What these studies don't tell you?

  • The sampling method is not representative of the entire population. The only possibility of approaching representativeness is when the researcher chooses a sample size large enough to represent a significant portion of the entire population. In this case, moving on to study a second or more specific sample can be difficult.
  • The design cannot be used to create conclusions and interpretations that pertain to an entire population because the sampling technique is not randomized. Generalizability from findings is, therefore, limited.
  • Difficult to account for and interpret variation from one sample to another over time, particularly when using qualitative methods of data collection.

Betensky, Rebecca. Harvard University, Course Lecture Note slides; Bovaird, James A. and Kevin A. Kupzyk. "Sequential Design." In Encyclopedia of Research Design. Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 1347-1352; Creswell, John W. et al. “Advanced Mixed-Methods Research Designs.” In Handbook of Mixed Methods in Social and Behavioral Research. Abbas Tashakkori and Charles Teddlie, eds. (Thousand Oaks, CA: Sage, 2003), pp. 209-240; Henry, Gary T. "Sequential Sampling." In The SAGE Encyclopedia of Social Science Research Methods. Michael S. Lewis-Beck, Alan Bryman and Tim Futing Liao, editors. (Thousand Oaks, CA: Sage, 2004), pp. 1027-1028; Ivankova, Nataliya V. “Using Mixed-Methods Sequential Explanatory Design: From Theory to Practice.” Field Methods 18 (February 2006): 3-20; Sequential Analysis. Wikipedia.

Systematic Review

  • A systematic review synthesizes the findings of multiple studies related to each other by incorporating strategies of analysis and interpretation intended to reduce biases and random errors.
  • The application of critical exploration, evaluation, and synthesis methods separates insignificant, unsound, or redundant research from the most salient and relevant studies worthy of reflection.
  • They can be used to identify, justify, and refine hypotheses; recognize and avoid hidden problems in prior studies; and explain inconsistencies and conflicts in the data.
  • Systematic reviews can be used to help policy makers formulate evidence-based guidelines and regulations.
  • The use of strict, explicit, and pre-determined methods of synthesis, when applied appropriately, provides reliable estimates about the effects of interventions, evaluations, and effects related to the overarching research problem investigated by each study under review.
  • Systematic reviews illuminate where knowledge or thorough understanding of a research problem is lacking and, therefore, can then be used to guide future research.
  • The accepted inclusion of unpublished studies [i.e., grey literature] ensures the broadest possible way to analyze and interpret research on a topic.
  • Results of the synthesis can be generalized and the findings extrapolated into the general population with more validity than most other types of studies.
What these studies don't tell you?

  • Systematic reviews do not create new knowledge per se; they are a method for synthesizing existing studies about a research problem in order to gain new insights and determine gaps in the literature.
  • The way researchers have carried out their investigations [e.g., the period of time covered, number of participants, sources of data analyzed, etc.] can make it difficult to effectively synthesize studies.
  • The inclusion of unpublished studies can introduce bias into the review because they may not have undergone a rigorous peer-review process prior to publication. Examples may include conference presentations or proceedings, publications from government agencies, white papers, working papers, and internal documents from organizations, and doctoral dissertations and Master's theses.

Denyer, David and David Tranfield. "Producing a Systematic Review." In The Sage Handbook of Organizational Research Methods. David A. Buchanan and Alan Bryman, editors. (Thousand Oaks, CA: Sage Publications, 2009), pp. 671-689; Foster, Margaret J. and Sarah T. Jewell, editors. Assembling the Pieces of a Systematic Review: A Guide for Librarians. Lanham, MD: Rowman and Littlefield, 2017; Gough, David, Sandy Oliver, and James Thomas, editors. Introduction to Systematic Reviews. 2nd edition. Los Angeles, CA: Sage Publications, 2017; Gopalakrishnan, S. and P. Ganeshkumar. "Systematic Reviews and Meta-analysis: Understanding the Best Evidence in Primary Healthcare." Journal of Family Medicine and Primary Care 2 (2013): 9-14; Gough, David, James Thomas, and Sandy Oliver. "Clarifying Differences between Review Designs and Methods." Systematic Reviews 1 (2012): 1-9; Khan, Khalid S., Regina Kunz, Jos Kleijnen, and Gerd Antes. "Five Steps to Conducting a Systematic Review." Journal of the Royal Society of Medicine 96 (2003): 118-121; Mulrow, C. D. "Systematic Reviews: Rationale for Systematic Reviews." BMJ 309 (September 1994): 597; O'Dwyer, Linda C., and Q. Eileen Wafford. "Addressing Challenges with Systematic Review Teams through Effective Communication: A Case Report." Journal of the Medical Library Association 109 (October 2021): 643-647; Okoli, Chitu, and Kira Schabram. "A Guide to Conducting a Systematic Literature Review of Information Systems Research." Sprouts: Working Papers on Information Systems 10 (2010); Siddaway, Andy P., Alex M. Wood, and Larry V. Hedges. "How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-analyses, and Meta-syntheses." Annual Review of Psychology 70 (2019): 747-770; Torgerson, Carole J. "Publication Bias: The Achilles' Heel of Systematic Reviews?" British Journal of Educational Studies 54 (March 2006): 89-102; Torgerson, Carole. Systematic Reviews. New York: Continuum, 2003.




Research Design | Step-by-Step Guide with Examples

Published on 5 May 2022 by Shona McCombes. Revised on 20 March 2023.

A research design is a strategy for answering your research question  using empirical data. Creating a research design means making decisions about:

  • Your overall aims and approach
  • The type of research design you’ll use
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research aims and that you use the right kind of analysis for your data.

Table of contents

  • Step 1: Consider your aims and approach
  • Step 2: Choose a type of research design
  • Step 3: Identify your population and sampling method
  • Step 4: Choose your data collection methods
  • Step 5: Plan your data collection procedures
  • Step 6: Decide on your data analysis strategies
  • Frequently asked questions

Step 1: Consider your aims and approach

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities – start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

Qualitative research designs tend to be more flexible and inductive, allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive, with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics.

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval?

At each stage of the research design process, make sure that your choices are practically feasible.


Step 2: Choose a type of research design

Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types. Experimental and quasi-experimental designs allow you to test cause-and-effect relationships, while descriptive and correlational designs allow you to measure variables and describe relationships between them.

With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can't draw conclusions about cause and effect (because correlation doesn't imply causation).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

Common types of qualitative design include case studies, ethnography, grounded theory, and phenomenological research. They often have similar approaches in terms of data collection, but focus on different aspects when analysing the data.

Step 3: Identify your population and sampling method

Your research design should clearly define who or what your research will focus on, and how you'll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study – plants, animals, organisations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region, or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling. The sampling method you use affects how confidently you can generalise your results to the population as a whole.

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.
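To make the distinction concrete, here is a minimal Python sketch contrasting a simple random sample with a convenience sample. The population of student IDs and the sample size are hypothetical, invented purely for illustration:

```python
import random

# Hypothetical sampling frame: a list of 2,000 student IDs.
population = [f"student_{i:04d}" for i in range(1, 2001)]

random.seed(42)  # fix the seed so the draw is reproducible

# Simple random sampling: every individual has an equal chance
# of selection, which supports statistical generalisation.
probability_sample = random.sample(population, k=100)

# Convenience sampling (non-probability): taking whoever is easiest
# to reach, e.g., the first 100 on the list. Cheaper, but
# systematically biased toward the start of the frame.
convenience_sample = population[:100]

print(probability_sample[:5])
print(convenience_sample[:5])
```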

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalise to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Step 4: Choose your data collection methods

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviours, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.

Observation methods

Observations allow you to collect data unobtrusively, observing characteristics, behaviours, or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.

Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected – for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.

Step 5: Plan your data collection procedures

As well as deciding on your methods, you need to plan exactly how you'll use these methods to collect data that's consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are reliable and valid.

Operationalisation

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalisation means turning these fuzzy ideas into measurable indicators.

If you’re using observations , which events or actions will you count?

If you’re using surveys , which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you're interested in – for example, questionnaires or inventories whose reliability and validity have already been established.
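As a simple illustration, here is a sketch of operationalising a fuzzy concept as a composite score. The three-item "satisfaction" scale and the responses are invented for the example, not a validated instrument:

```python
# Operationalisation sketch: "satisfaction" measured as the mean
# of three hypothetical 5-point Likert items (1 = disagree, 5 = agree).
responses = {
    "respondent_1": [4, 5, 3],
    "respondent_2": [2, 2, 3],
}

def satisfaction_score(items):
    """Composite indicator: the mean of the item scores."""
    return sum(items) / len(items)

for person, items in responses.items():
    print(person, round(satisfaction_score(items), 2))
```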

Reliability and validity

Reliability means your results can be consistently reproduced, while validity means that you're actually measuring the concept you're interested in.

For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.

Sampling procedures

As well as choosing an appropriate sampling method, you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample – by mail, online, by phone, or in person?

If you’re using a probability sampling method, it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method, how will you avoid bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organising and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymise and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well organised will save time when it comes to analysing them. It can also help other researchers validate and add to your findings.

Step 6: Decide on your data analysis strategies

On their own, raw data can't answer your research question. The last step of designing your research is planning how you'll analyse the data.

Quantitative data analysis

In quantitative research, you'll most likely use some form of statistical analysis. With statistics, you can summarise your sample data, make estimates, and test hypotheses.

Using descriptive statistics, you can summarise your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.
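For example, a few lines of Python's standard library are enough to compute all three summaries for a small, hypothetical set of test scores:

```python
import statistics
from collections import Counter

# Hypothetical test scores from a sample of 12 students.
scores = [72, 85, 85, 90, 64, 78, 85, 91, 70, 78, 88, 95]

# Distribution: the frequency of each score
print(Counter(scores))

# Central tendency: the mean describes the average score
print("mean:", statistics.mean(scores))

# Variability: the standard deviation describes how spread out scores are
print("sample sd:", round(statistics.stdev(scores), 2))
```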

Using inferential statistics, you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.
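As a rough illustration, the sketch below runs one comparison test and one association test with SciPy. The groups, scores, and study-hours values are all hypothetical:

```python
from scipy import stats

# Hypothetical outcome scores for two independent groups.
group_a = [78, 85, 90, 72, 88, 95, 81, 79]
group_b = [70, 74, 68, 80, 65, 72, 77, 69]

# Comparison test: do the group means differ?
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Association test: correlation between paired variables
# (hypothetical hours studied vs. group A's scores).
hours = [2, 5, 1, 4, 6, 3, 5, 2]
r, p = stats.pearsonr(hours, group_a)
print(f"r = {r:.2f}, p = {p:.4f}")
```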

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis.

There are many other ways of analysing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

Frequently asked questions

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data, it's important to consider how you will operationalise the variables that you want to measure.

The research methods you use depend on the type of data you need to answer your research question.

  • If you want to measure something or test a hypothesis, use quantitative methods. If you want to explore ideas, thoughts, and meanings, use qualitative methods.
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables, use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.


Grad Coach

Research Design 101

Everything You Need To Get Started (With Examples)

By: Derek Jansen (MBA) | Reviewers: Eunice Rautenbach (DTech) & Kerryn Warren (PhD) | April 2023

Research design for qualitative and quantitative studies

Navigating the world of research can be daunting, especially if you’re a first-time researcher. One concept you’re bound to run into fairly early in your research journey is that of “research design”. Here, we’ll guide you through the basics using practical examples, so that you can approach your research with confidence.

Overview: Research Design 101

  • What is research design?
  • Research design types for quantitative studies
  • Video explainer: quantitative research design
  • Research design types for qualitative studies
  • Video explainer: qualitative research design
  • How to choose a research design
  • Key takeaways

Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final data analysis. A good research design serves as the blueprint for how you, as the researcher, will collect and analyse data while ensuring consistency, reliability and validity throughout your study.

Understanding different types of research designs is essential, as it helps ensure that your approach is suitable given your research aims, objectives and questions, as well as the resources you have available to you. Without a clear big-picture view of how you’ll design your research, you run the risk of making misaligned choices in terms of your methodology – especially your sampling, data collection and data analysis decisions.

The problem with defining research design…

One of the reasons students struggle with a clear definition of research design is because the term is used very loosely across the internet, and even within academia.

Some sources claim that the three research design types are qualitative, quantitative and mixed methods, which isn’t quite accurate (these just refer to the type of data that you’ll collect and analyse). Other sources state that research design refers to the sum of all your design choices, suggesting it’s more like a research methodology. Others run off on other less common tangents. No wonder there’s confusion!

In this article, we’ll clear up the confusion. We’ll explain the most common research design types for both qualitative and quantitative research projects, whether that is for a full dissertation or thesis, or a smaller research paper or article.


Research Design: Quantitative Studies

Quantitative research involves collecting and analysing data in a numerical form. Broadly speaking, there are four types of quantitative research designs: descriptive, correlational, experimental, and quasi-experimental.

Descriptive Research Design

As the name suggests, descriptive research design focuses on describing existing conditions, behaviours, or characteristics by systematically gathering information without manipulating any variables. In other words, there is no intervention on the researcher’s part – only data collection.

For example, if you’re studying smartphone addiction among adolescents in your community, you could deploy a survey to a sample of teens asking them to rate their agreement with certain statements that relate to smartphone addiction. The collected data would then provide insight regarding how widespread the issue may be – in other words, it would describe the situation.

The key defining attribute of this type of research design is that it purely describes the situation. In other words, descriptive research design does not explore potential relationships between different variables or the causes that may underlie those relationships. Therefore, descriptive research is useful for generating insight into a research problem by describing its characteristics. By doing so, it can provide valuable insights and is often used as a precursor to other research design types.

Correlational Research Design

Correlational design is a popular choice for researchers aiming to identify and measure the relationship between two or more variables without manipulating them. In other words, this type of research design is useful when you want to know whether a change in one thing tends to be accompanied by a change in another thing.

For example, if you wanted to explore the relationship between exercise frequency and overall health, you could use a correlational design to help you achieve this. In this case, you might gather data on participants’ exercise habits, as well as records of their health indicators like blood pressure, heart rate, or body mass index. Thereafter, you’d use a statistical test to assess whether there’s a relationship between the two variables (exercise frequency and health).

As you can see, correlational research design is useful when you want to explore potential relationships between variables that cannot be manipulated or controlled for ethical, practical, or logistical reasons. It is particularly helpful in terms of developing predictions, and given that it doesn’t involve the manipulation of variables, it can be implemented at a large scale more easily than experimental designs (which we’ll look at next).

That said, it’s important to keep in mind that correlational research design has limitations – most notably that it cannot be used to establish causality. In other words, correlation does not equal causation. To establish causality, you’ll need to move into the realm of experimental design, coming up next…
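To make this concrete, here is a minimal sketch of the exercise-and-health example using NumPy. The participant data are invented for illustration:

```python
import numpy as np

# Hypothetical data for 8 participants (illustrative only):
# weekly exercise sessions and resting heart rate (bpm).
exercise_per_week = np.array([0, 1, 2, 3, 4, 5, 6, 7])
resting_heart_rate = np.array([82, 80, 76, 74, 73, 68, 66, 63])

# Pearson correlation coefficient: values near -1 or +1 indicate a
# strong linear relationship; this says nothing about causation.
r = np.corrcoef(exercise_per_week, resting_heart_rate)[0, 1]
print(f"r = {r:.2f}")
```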


Experimental Research Design

Experimental research design is used to determine if there is a causal relationship between two or more variables. With this type of research design, you, as the researcher, manipulate one variable (the independent variable) while controlling for extraneous variables, and then measure the outcome (the dependent variable). Doing so allows you to observe the effect of the former on the latter and draw conclusions about potential causality.

For example, if you wanted to measure if/how different types of fertiliser affect plant growth, you could set up several groups of plants, with each group receiving a different type of fertiliser, as well as one with no fertiliser at all. You could then measure how much each plant group grew (on average) over time and compare the results from the different groups to see which fertiliser was most effective.
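If you then wanted to test whether the group differences are larger than chance would allow, a one-way ANOVA is a common choice. The sketch below uses SciPy on invented growth measurements:

```python
from scipy import stats

# Hypothetical plant growth (cm) after 4 weeks for three groups.
fertiliser_a = [12.1, 13.4, 11.8, 12.9, 13.0]
fertiliser_b = [14.2, 15.1, 14.8, 13.9, 15.3]
no_fertiliser = [9.8, 10.4, 9.9, 10.1, 10.6]

# One-way ANOVA: do the group means differ beyond chance variation?
f_stat, p_value = stats.f_oneway(fertiliser_a, fertiliser_b, no_fertiliser)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```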

Overall, experimental research design provides researchers with a powerful way to identify and measure causal relationships (and the direction of causality) between variables. However, developing a rigorous experimental design can be challenging as it’s not always easy to control all the variables in a study. This often results in smaller sample sizes, which can reduce the statistical power and generalisability of the results.

Moreover, experimental research design requires random assignment. This means that the researcher needs to assign participants to different groups or conditions in a way that each participant has an equal chance of being assigned to any group (note that this is not the same as random sampling). Doing so helps reduce the potential for bias and confounding variables. This need for random assignment can lead to ethics-related issues. For example, withholding a potentially beneficial medical treatment from a control group may be considered unethical in certain situations.
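Random assignment itself is straightforward to implement. Here is a minimal sketch using Python's standard library, with hypothetical participant IDs and a fixed seed for reproducibility:

```python
import random

# Random assignment (not random sampling): each recruited
# participant gets an equal chance of either condition.
participants = [f"P{i:02d}" for i in range(1, 21)]  # hypothetical IDs
random.seed(7)
random.shuffle(participants)

treatment = participants[:10]
control = participants[10:]
print("treatment:", treatment)
print("control:", control)
```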

Quasi-Experimental Research Design

Quasi-experimental research design is used when the research aims involve identifying causal relations, but one cannot (or doesn’t want to) randomly assign participants to different groups (for practical or ethical reasons). Instead, with a quasi-experimental research design, the researcher relies on existing groups or pre-existing conditions to form groups for comparison.

For example, if you were studying the effects of a new teaching method on student achievement in a particular school district, you may be unable to randomly assign students to either group and instead have to choose classes or schools that already use different teaching methods. This way, you still achieve separate groups, without having to assign participants to specific groups yourself.

Naturally, quasi-experimental research designs have limitations when compared to experimental designs. Given that participant assignment is not random, it’s more difficult to confidently establish causality between variables, and, as a researcher, you have less control over other variables that may impact findings.

All that said, quasi-experimental designs can still be valuable in research contexts where random assignment is not possible and can often be undertaken on a much larger scale than experimental research, thus increasing the statistical power of the results. What’s important is that you, as the researcher, understand the limitations of the design and conduct your quasi-experiment as rigorously as possible, paying careful attention to any potential confounding variables.

The four most common quantitative research design types are descriptive, correlational, experimental and quasi-experimental.

Research Design: Qualitative Studies

There are many different research design types when it comes to qualitative studies, but here we’ll narrow our focus to explore the “Big 4”. Specifically, we’ll look at phenomenological design, grounded theory design, ethnographic design, and case study design.

Phenomenological Research Design

Phenomenological design involves exploring the meaning of lived experiences and how they are perceived by individuals. This type of research design seeks to understand people’s perspectives, emotions, and behaviours in specific situations. Here, the aim for researchers is to uncover the essence of human experience without making any assumptions or imposing preconceived ideas on their subjects.

For example, you could adopt a phenomenological design to study why cancer survivors have such varied perceptions of their lives after overcoming their disease. This could be achieved by interviewing survivors and then analysing the data using a qualitative analysis method such as thematic analysis to identify commonalities and differences.

Phenomenological research design typically involves in-depth interviews or open-ended questionnaires to collect rich, detailed data about participants’ subjective experiences. This richness is one of the key strengths of phenomenological research design but, naturally, it also has limitations. These include potential biases in data collection and interpretation and the lack of generalisability of findings to broader populations.

Grounded Theory Research Design

Grounded theory (also referred to as “GT”) aims to develop theories by continuously and iteratively analysing and comparing data collected from a relatively large number of participants in a study. It takes an inductive (bottom-up) approach, with a focus on letting the data “speak for itself”, without being influenced by preexisting theories or the researcher’s preconceptions.

As an example, let’s assume your research aims involved understanding how people cope with chronic pain from a specific medical condition, with a view to developing a theory around this. In this case, grounded theory design would allow you to explore this concept thoroughly without preconceptions about what coping mechanisms might exist. You may find that some patients prefer cognitive-behavioural therapy (CBT) while others prefer to rely on herbal remedies. Based on multiple, iterative rounds of analysis, you could then develop a theory in this regard, derived directly from the data (as opposed to other preexisting theories and models).

Grounded theory typically involves collecting data through interviews or observations and then analysing it to identify patterns and themes that emerge from the data. These emerging ideas are then validated by collecting more data until a saturation point is reached (i.e., no new information can be squeezed from the data). From that base, a theory can then be developed.

As you can see, grounded theory is ideally suited to studies where the research aims involve theory generation, especially in under-researched areas. Keep in mind though that this type of research design can be quite time-intensive, given the need for multiple rounds of data collection and analysis.


Ethnographic Research Design

Ethnographic design involves observing and studying a culture-sharing group of people in their natural setting to gain insight into their behaviours, beliefs, and values. The focus here is on observing participants in their natural environment (as opposed to a controlled environment). This typically involves the researcher spending an extended period of time with the participants in their environment, carefully observing and taking field notes.

All of this is not to say that ethnographic research design relies purely on observation. On the contrary, this design typically also involves in-depth interviews to explore participants’ views, beliefs, etc. However, unobtrusive observation is a core component of the ethnographic approach.

As an example, an ethnographer may study how different communities celebrate traditional festivals or how individuals from different generations interact with technology differently. This may involve a lengthy period of observation, combined with in-depth interviews to further explore specific areas of interest that emerge as a result of the observations that the researcher has made.

As you can probably imagine, ethnographic research design has the ability to provide rich, contextually embedded insights into the socio-cultural dynamics of human behaviour within a natural, uncontrived setting. Naturally, however, it does come with its own set of challenges, including researcher bias (since the researcher can become quite immersed in the group), participant confidentiality and, predictably, ethical complexities. All of these need to be carefully managed if you choose to adopt this type of research design.

Case Study Design

With case study research design, you, as the researcher, investigate a single individual (or a single group of individuals) to gain an in-depth understanding of their experiences, behaviours or outcomes. Unlike other research designs that are aimed at larger sample sizes, case studies offer a deep dive into the specific circumstances surrounding a person, group of people, event or phenomenon, generally within a bounded setting or context.

As an example, a case study design could be used to explore the factors influencing the success of a specific small business. This would involve diving deeply into the organisation to explore and understand what makes it tick – from marketing to HR to finance. In terms of data collection, this could include interviews with staff and management, review of policy documents and financial statements, surveying customers, etc.

While the above example is focused squarely on one organisation, it’s worth noting that case study research designs can have different variations, including single-case, multiple-case and longitudinal designs. As you can see in the example, a single-case design involves intensely examining a single entity to understand its unique characteristics and complexities. Conversely, in a multiple-case design, multiple cases are compared and contrasted to identify patterns and commonalities. Lastly, in a longitudinal case design, a single case or multiple cases are studied over an extended period of time to understand how factors develop over time.

As you can see, a case study research design is particularly useful where a deep and contextualised understanding of a specific phenomenon or issue is desired. However, this strength is also its weakness. In other words, you can’t generalise the findings from a case study to the broader population. So, keep this in mind if you’re considering going the case study route.

Case study design often involves investigating an individual to gain an in-depth understanding of their experiences, behaviours or outcomes.

How To Choose A Research Design

Having worked through all of these potential research designs, you’d be forgiven for feeling a little overwhelmed and wondering, “ But how do I decide which research design to use? ”. While we could write an entire post covering that alone, here are a few factors to consider that will help you choose a suitable research design for your study.

Data type: The first determining factor is naturally the type of data you plan to be collecting – i.e., qualitative or quantitative. This may sound obvious, but we have to be clear about this – don’t try to use a quantitative research design on qualitative data (or vice versa)!

Research aim(s) and question(s): As with all methodological decisions, your research aim and research questions will heavily influence your research design. For example, if your research aims involve developing a theory from qualitative data, grounded theory would be a strong option. Similarly, if your research aims involve identifying and measuring relationships between variables, one of the experimental designs would likely be a better option.

Time: It’s essential that you consider any time constraints you have, as this will impact the type of research design you can choose. For example, if you’ve only got a month to complete your project, a lengthy design such as ethnography wouldn’t be a good fit.

Resources: Take into account the resources realistically available to you, as these need to factor into your research design choice. For example, if you require highly specialised lab equipment to execute an experimental design, you need to be sure that you’ll have access to that before you make a decision.

Keep in mind that when it comes to research, it’s important to manage your risks and play as conservatively as possible. If your entire project relies on you achieving a huge sample, having access to niche equipment or holding interviews with very difficult-to-reach participants, you’re creating risks that could kill your project. So, be sure to think through your choices carefully and make sure that you have backup plans for any existential risks. Remember that a relatively simple methodology executed well will typically earn better marks than a highly complex methodology executed poorly.


Recap: Key Takeaways

We’ve covered a lot of ground here. Let’s recap by looking at the key takeaways:

  • Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final analysis of data.
  • Research designs for quantitative studies include descriptive, correlational, experimental and quasi-experimental designs.
  • Research designs for qualitative studies include phenomenological, grounded theory, ethnographic and case study designs.
  • When choosing a research design, you need to consider a variety of factors, including the type of data you’ll be working with, your research aims and questions, your time and the resources available to you.


Academic Evaluations

In our daily lives, we are continually evaluating objects, people, and ideas in our immediate environments. We pass judgments in conversation, while reading, while shopping, while eating, and while watching television or movies, often being unaware that we are doing so. Evaluation is an equally fundamental writing process, and writing assignments frequently ask us to make and defend value judgments.

Evaluation is an important step in almost any writing process, since we are constantly making value judgments as we write. When we write an "academic evaluation," however, this type of value judgment is the focus of our writing.

A Definition of Evaluation

Kate Kiefer, English Professor:

"Like most specific assignments that teachers give, writing evaluations mirrors what happens so often in our day-to-day lives. Every day we decide whether the temperature is cold enough to need a light or heavy jacket; whether we're willing to spend money on a good book or a good movie; whether the prices at the grocery store tell us to keep shopping at the same place or somewhere else for a better value. Academic tasks rely on evaluation just as often. Is a source reliable? Does an argument convince? Is the article worth reading? So writing evaluation helps students make this often unconscious daily task more overt and prepares them to examine ideas, facts, arguments, and so on more critically."

To evaluate is to assess or appraise. Evaluation is the process of examining a subject and rating it based on its important features. We determine how much or how little we value something, arriving at our judgment on the basis of criteria that we can define.

We evaluate when we write primarily because it is almost impossible to avoid doing so. If right now you were asked to write for five minutes on any subject and were asked to keep your writing completely value-free, you would probably find such an assignment difficult. Readers come to evaluative writing in part because they seek the opinions of other people for one reason or another.

Uses for Evaluation

Consider a time recently when you decided to watch a movie. There were at least two kinds of evaluation available to you through the media: the rating system and critical reviews.

Newspapers and magazines, radio and TV programs all provide critical evaluations for their readers and viewers. Many movie-goers consult more than one media reviewer to adjust for bias. Most movie-goers also consider the rating system, especially if they are deciding to take children to a movie. In addition, most people will also ask for recommendations from friends who have already seen the movie.

Whether professional or personal, judgments like these are based on the process of evaluation. The terminology associated with the elements of this process--criteria, evidence, and judgment--might seem alien to you, but you have undoubtedly used these elements almost every time you have expressed an opinion on something.

Types of Written Evaluation

Quite a few of the assignments writers are given at the university and in the workplace involve the process of evaluation.

One type of written evaluation that most people are familiar with is the review. Reviewers will attend performances, events, or places (like restaurants, movies, or concerts), basing their evaluations on their observations. Reviewers typically use a particular set of criteria they establish for themselves, and their reviews most often appear in newspapers and magazines.

Critical Writing

Reviews are a type of critical writing, but there are other types of critical writing which focus on objects (like works of art or literature) rather than on events and performances. Literary criticism, for instance, is a way of establishing the worth or literary merit of a text on the basis of certain established criteria. When we write about literary texts, we do so using one of many critical "lenses," viewing the text as it addresses matters like form, culture, historical context, gender, and class (to name a few). Deciding whether a text is "good" or "bad" is a matter of establishing which "lens" you are viewing that text through, and using the appropriate set of criteria to do so. For example, we might say that a poem by an obscure nineteenth-century African American poet is not "good" or "useful" in terms of formal characteristics like rhyme, meter, or diction, but we might judge that same text as "good" or "useful" in terms of the way it addresses cultural and political issues historically.

Response Essays

One very common type of academic writing is the response essay. In many different disciplines, we are asked to respond to something that we read or observe. Some types of response, like the interpretive response, simply ask us to explain a text. However, there are other types of response (like agree/disagree and analytical response) which demand that we make some sort of judgment based on careful consideration of the text, object, or event in question.

Problem Solving Essays

In writing assignments which focus on issues, policies, or phenomena, we are often asked to propose possible solutions for identifiable problems. This type of essay requires evaluation on two levels. First of all, it demands that we use evaluation in order to determine that there is a legitimate problem. And secondly, it demands that we take more than one policy or solution into consideration to determine which will be the most feasible, viable, or effective one, given that problem.

Arguing Essays

Written argument is a type of evaluative writing, particularly when it focuses on a claim of value (like "The death penalty is cruel and ineffective") or policy claim (like "Oakland's Ebonics program is an effective way of addressing standard English deficiencies among African American students in public schools"). In written argument, we advance a claim like one of the above, then support this claim with solid reasons and evidence.

Process Analysis

In scientific or investigative writing, in which experiments are conducted and processes or phenomena are observed or studied, evaluation plays a part in the writer's discussion of findings. Often, these findings need to be both interpreted and analyzed by way of criteria established by the writer.

Source Evaluation

Although not a form of written evaluation in and of itself, source evaluation is a process that is involved in many other types of academic writing, like argument, investigative and scientific writing, and research papers. When we conduct research, we quickly learn that not every source is a good source and that we need to be selective about the quality of the evidence we transplant into our own writing.

Relevance to the Topic

When you conduct research, you naturally look for sources that are relevant to your topic. However, writers also often fall prey to the tendency to accept sources that are just relevant enough. For example, if you were writing an essay on Internet censorship, you might find that your research yielded quite a few sources on music censorship, art censorship, or censorship in general. Though these sources might be marginally useful in an essay on Internet censorship, you will probably want to find more directly relevant sources to serve a more central role in your essay.

Perspective on the Topic

Another point to consider is that even though you want sources relevant to your topic, you might not necessarily want an exclusive collection of sources which agree with your own perspective on that topic. For example, if you are writing an essay on Internet censorship from an anti-censorship perspective, you will want to include in your research sources which also address the pro-censorship side. In this way, your essay will be able to fully address perspectives other than (and sometimes in opposition to) your own.

Credibility

One of the questions you want to ask yourself when you consider using a source is "How credible will my audience consider this source to be?" You will want to ask this question not only of the source itself (the book, journal, magazine, newspaper, home page, etc.) but also of the author. To use an extreme example, for most academic writing assignments you would probably want to steer clear of using a source like the National Enquirer or like your eight-year-old brother, even though we could imagine certain writing situations in which such sources would be entirely appropriate. The key to determining the credibility of a source/author is to decide not only whether you think the source is reliable, but also whether your audience will find it so, given the purpose of your writing.

Currency of Publication

Unless you are doing research with a historical emphasis, you will generally want to choose sources that have been published recently. Some research and statistics maintain their authority for a very long time, but in most fields, the more recent a study is, the more likely it is to reflect the current state of knowledge.

Accessibility

When sorting through research, it is best to select sources that are readable and accessible both for you and for your intended audience. If a piece of writing is laden with incomprehensible jargon and incoherent structure or style, you will want to think twice about directing it toward an audience unfamiliar with that type of jargon, structure, or style. In short, it is a good rule of thumb to avoid using any source which you yourself do not understand and are not able to interpret for your audience.

Quality of Writing

When choosing sources, consider the quality of writing in the texts themselves. It is possible to paraphrase from sources that are sloppily written, but quoting from such a source would serve only to diminish your own credibility in the eyes of your audience.

Understanding of Biases

Few sources are truly objective or unbiased. Trying to eliminate bias from your sources will be nearly impossible, but all writers can try to understand and recognize the biases of their sources. For instance, if you were doing a comparative study of 1/2-ton pickup trucks on the market, you might consult the Ford home page. However, you would also need to be aware that this source would have some very definite biases. Likewise, it would not be unreasonable to use an article from Catholic World in an anti-abortion argument, but you would want to understand how your audience would be likely to view that source. Although there is no foolproof way to determine the bias of a particular journal or newspaper, you can normally sleuth this out by looking at the language in the article itself or in the surrounding articles.

Use of Research

In evaluating a source, you will need to examine the sources that it in turn uses. Looking at the research used by the author of your source, what biases can you recognize? What are the quantity and quality of evidence and statistics included? How reliable and readable do the excerpts cited seem to be?

Considering Purpose and Audience

We typically think of "values" as being personal matters. But in our writing, as in other areas of our lives, values often become matters of public and political concern. Therefore, it is important when we evaluate to consider why we are making judgments on a subject (purpose) and who we hope to affect with our judgments (audience).

Purposes of Evaluation

Your purpose in written evaluation is not only to express your opinion or judgment about a subject, but also to convince, persuade, or otherwise influence an audience by way of that judgment. In this way, evaluation is a type of argument, in which you as a writer are attempting consciously to have an effect on your readers' ways of thinking or acting. If, for example, you are writing an evaluation in which you make a judgment that Mountain Bike A is a better buy than Mountain Bike B, you are doing more than expressing your approval of the merits of Bike A; you are attempting to convince your audience that Bike A is the better buy and, ultimately, to persuade them to buy Bike A rather than Bike B.

Effects of Audience

Kate Kiefer, English Professor:

"When we evaluate for ourselves, we don't usually take the time to articulate criteria and detail evidence. Our thought processes work fast enough that we often seem to make split-second decisions. Even when we spend time thinking over a decision--like which expensive toy (car, stereo, skis) to buy--we don't often lay out the criteria explicitly. We can't take that shortcut when we write to other folks, though. If we want readers to accept our judgment, then we need to be clear about the criteria we use and the evidence that helps us determine value for each criterion. After all, why should I agree with you to eat at the Outback Steak House if you care only about cost but I care about taste and safe food handling? To write an effective evaluation, you need to figure out what your readers care about and then match your criteria to their concerns. Similarly, you can overwhelm readers with too much detail when they don't have the background knowledge to care about that level of detail. Or you can ignore the expertise of your readers (at your peril) and not give enough detail. Then, as a writer, you come across as condescending, or worse. So targeting an audience is really key to successful evaluation."

In written evaluation, it is important to keep in mind not only your own system of value, but also that of your audience. Writers do not evaluate in a vacuum. Giving some thought to the audience you are attempting to influence will help you to determine what criteria are important to them and what evidence they will require in order to be convinced or persuaded by your evaluative argument. In order to evaluate effectively, it is important that you consider what motivates and concerns your audience.

Criteria and Audience Considerations

The first step in deciding which criteria will be effective in your evaluation is determining which criteria your audience considers important. For example, if you are writing a review of a Mexican restaurant for an audience composed mainly of senior citizens from the Midwest, it is unlikely that "large portions" and "fiery green chile" will be the criteria most important to them. They might be more concerned, rather, with "quality of service" or "availability of heart-smart menu items." Trying to anticipate and address your audience's values is an indispensable step in writing a persuasive evaluative argument. Your next step in suiting your criteria to your audience is to determine how you will explain and/or defend not only your judgments, but the criteria supporting them as well. For example, if you are arguing that a Mexican restaurant is excellent because, among other reasons, the texture of the food is appealing, you might need to explain to your audience why texture is a significant criterion in evaluating Mexican food.

Evidence and Audience Considerations

The amount and type of evidence you use to support your judgments will depend largely on the demands of your audience. Common sense tells us that the more oppositional an audience is, the more evidence will be needed to convince them of the validity of a judgment. For instance, if you were writing a favorable review of La Cocina on the basis of their fiery green chile, you might not need to use a great deal of evidence for an audience of people who like spicy food but have not tried any of the Mexican restaurants in town. However, if you are addressing an audience that is deeply devoted to the green chile at Manuel's, you will need to provide a fair amount of solid evidence in order to persuade them to try another restaurant.

Parts of an Evaluation

When we evaluate, we make an overall value claim about a subject, using criteria to make judgments based on evidence. Often, we also make use of comparison and contrast as strategies for determining the relative worth of the subject we are considering. This section examines these parts of an evaluation and shows how each functions in a successful evaluation.

Overall Claim

An overall claim or judgment is an evaluator's final decision about worth. When we evaluate, we make a general statement about the worth of objects, goods, services, or solutions to problems.

An overall claim or judgment in an evaluation can be as simple as "See this movie!" or "Brand X is a better buy than the name brand." It can also be complex, particularly when the evaluator recognizes certain conditions that affect the judgment: If citizens of our community want to improve air and water quality and are willing to forego 300 additional jobs, then we should not approve the new plant Acme is hoping to build here.

Qualifications

An overall claim or judgment usually requires qualification so that it seems balanced. If judgments are weighted too much to one side, they will sometimes mar the credibility of your argument. If your overall judgment is wholly positive, your evaluation will wind up sounding like propaganda or an advertisement. If it is wholly negative, you might present yourself as overly critical, unfair, or undiplomatic. An example of a qualified claim or judgment might be the following: Although La Cocina is not without its faults, it is the best Mexican restaurant in town. Qualifications are almost always positive additions to evaluative arguments, but writers must learn not to overuse them. If you make too many qualifications, your audience will be unable to determine your final position on your subject, and you will appear to be "waffling."

Example Text

Creating more parking lots is a possible solution to the horrendous traffic congestion in Taiwan's major cities. When a new building permit is issued, each building must include a certain number of spaces for parking. However, new construction takes time, and results will be seen only as new buildings are erected. This solution alone is inadequate for most of Taiwan's problem areas, which need a solution whose results will be noticed immediately.

Comment: Notice how the sentence at the end of the paragraph seems to be a formal "thesis" or "claim" that might drive the rest of the essay. Based on this claim, we would assume that the remainder of the essay will deal with the reasons why the proposed policy alone is "inadequate," and will address other possible solutions.

Supporting Judgments

In academic evaluations, the overall claim or judgment is backed up by smaller, more detailed judgments about aspects of a subject being evaluated. Supporting judgments function in the same way that "reasons" function in most arguments. They provide structure and justification for a more general claim. For example, if your overall claim or judgment in your evaluation is

"Although La Cocina is not without its faults, it is the best Mexican restaurant in town,"

one supporting judgment might be

"La Cocina's green chile is superb."

This judgment would be based on criteria you have established, and it would be supported by evidence.

Example Text

Providing more parking spaces near buildings is not the only act necessary to solve Taiwan's parking problems. A combination of more parking spaces, increased fines, and lowered traffic volume may be necessary to eliminate the nightmare of driving in the cities. In fact, until laws are enforced and fines increased, no number of new parking spaces will impact the congestion seen in downtown areas.

Comment: There are arguably three supporting judgments being made here, as three possible solutions are being suggested to rectify the parking problem in Taiwan. If we were reading these supporting judgments at the beginning of an essay, we would expect the essay to discuss them in depth, pointing out evidence that these proposed solutions would be effective.

Criteria

When we write evaluations, we consciously adopt certain standards of measurement, or criteria.

Criteria can be concrete standards, like size or speed, or can be abstract, like practicality. When we write evaluations in an academic context, we typically avoid using criteria that are wholly personal, and rely instead on those that are less "subjective" and more likely to be shared by the majority of the audience we are addressing. Choosing appropriate criteria often involves careful consideration of audience demands, values, and concerns.

As an evaluator, you will sometimes discover that you will need to explain and/or defend not only your judgments, but also the criteria informing those judgments. For example, if you are arguing that a Mexican restaurant is excellent because (among other reasons) the texture of the food is appealing, you might need to explain to your audience why texture is a significant criterion in evaluating Mexican food.

Types of Criteria

If you are evaluating a concrete canoe for an engineering class, you will use concrete criteria such as float time, cost of materials, hydrodynamic design, and so on. If you are evaluating the suitability of a textbook for a history class, you will probably rely on more abstract criteria such as readability, length, and controversial vs. mainstream interpretation of history.

In evaluation, we often rely on concrete, measurable standards according to which subjects (usually objects) may be evaluated. For example, cars may be evaluated according to the criteria of size, speed, or cost.

Many academic evaluations, however, don't focus on objects that we can measure in terms of size, speed, or cost. Rather, they look at somewhat more abstract concepts (problems and solutions often), which we might measure in terms of "effectiveness," "feasibility," or other abstract criteria. When writing this kind of evaluation, it is vital to be as clear as possible when articulating, defining, and using your criteria, since not all readers are likely to understand and agree with these criteria as readily as they would understand and agree with concrete criteria.

Related Information: Abstract Criteria

Abstract criteria are not easily measurable, and they are usually less self-evident and more in need of definition than concrete criteria. Even though criteria may be abstract, they should not be imprecise. Always state your criteria as clearly and precisely as possible. "Feasibility" is one example of an abstract criterion that a writer might use to evaluate a solution to a problem. Feasibility is the likelihood that something like a plan of action or a solution to a problem will succeed. One way to look at the feasibility of a solution is its capability of being implemented; another is the relative ease with which it would be adopted. The following example directly mentions the criteria it is using: affordability, suitability for all climates, and reduction of deforestation.

Fire prevention should be the major consideration of a family building a home. Using concrete significantly decreases the risk of fire. But that is not all that concrete provides. It is affordable, suitable for all climates, and helps reduce deforestation. Since all of these factors are important, concrete should be in greater demand than it is, and it should certainly be used more than wood for homebuilding.

Related Information: Concrete Criteria

Concrete criteria are measurable standards which most people are likely to understand and (usually) to agree with. For example, a person might make use of criteria like "size," "speed," and "cost" when buying a car.

If size is your main criterion, something with a larger size will receive a more favorable evaluation.

Perhaps the only quality that you desire in a car is low initial cost, and you don't need to take anything else into account. In this case, you can pass judgment on the three cars in the local used car lot simply by comparing their price tags.

Because the Nissan has the lowest initial price, it receives the most favorable judgment. The evidence is found on the price tag. Each car is compared by way of a single criterion: cost.

Using Clear and Well-defined Criteria

When we evaluate informally (passing judgments during the course of conversation, for instance), we typically assume that our criteria are self-evident and require no explanation. However, in written evaluation, it is often necessary that we clarify and define our criteria in order to make a persuasive evaluative argument.

Criteria That Are Too Vague or Personal

Although we frequently find ourselves needing to use abstract criteria like "feasibility" or "effectiveness," we also must avoid using criteria that are overly vague or personal and difficult to support with evidence. As evaluators, we must steer clear of criteria that are matters of taste, belief, or personal preference. For example, the "best" lamp might simply be the one that you think looks prettiest in your home. If you depend on a criterion like "pretty in my home," and neglect to use more common, shared criteria like "brightness," "cost," and "weight," you are probably relying on a criterion that is too specific to your own personal preferences. To make "pretty in my home" an effective criterion, you would need to explain what "pretty in my home" means and how it might relate to other people's value systems. (For example: "Lamp A is attractive because it is an inoffensive style and color that would be appropriate for many people's decorating tastes.")

Using Criteria Based on the Appropriate "Class" of Subjects

When you make judgments, it is important that you use criteria that are appropriate to the type of object, person, policy, etc. that you are examining. If you are evaluating Steven Spielberg's film Schindler's List, for instance, it is unfair to criticize it because it isn't a knee-slapper. Because Schindler's List is a drama and not a comedy, using the criterion of "humor" is inappropriate.

Weighing Criteria

Once you have established criteria for your evaluation of a subject, it is necessary to decide which of these criteria are most important. For example, if you are evaluating a Mexican restaurant and you have arrived at several criteria (variety of items on the menu, spiciness of the food, size of the portions, decor, and service), you need to decide which of these criteria are most critical to your evaluation. If the size of the portions is good, but the service is bad, can you give the restaurant a good rating? What about if the decor is attractive, but the food is bland? Once you have placed your criteria in a hierarchy of importance, it is much easier to make decisions like these.
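
For example, a reviewer who cares most about the food itself might rank the criteria above in a hierarchy like this (one possible ordering, offered only as an illustration): 1) spiciness of the food, 2) variety of items on the menu, 3) size of the portions, 4) quality of service, and 5) decor. With such a ranking in place, bland food would weigh more heavily against the restaurant than unattractive decor would.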

Criteria and Audience Considerations

When we evaluate, we must consider the audience we hope to influence with our judgments. This is particularly true when we decide which criteria are informing (and should inform) these judgments.

After establishing some criteria for your evaluation, it is important to ask yourself whether or not your audience is likely to accept those criteria. It is crucial that they do accept the criteria if, in turn, you expect them to accept the supporting judgments and overall claim or judgment built on them.

Related Information: Understanding Audience Criteria

How Background Experience Influences Criteria

Laura Thomas, Composition Lecturer: Your background experience influences the criteria that you use in evaluation. If you know a lot about something, you will have a good idea of what criteria should govern your judgments. On the other hand, it's hard to judge if you don't know enough about what you're judging. Sometimes you have to do research first in order to come up with useful criteria. For example, I recently went shopping for a new pair of skis for the first time in fifteen years. When I began shopping, I realized that I didn't even know what questions to ask anymore. The last time I had bought skis, you judged them according to whether they had a foam core or a wood core. But I had no idea what the important considerations were anymore.

Evidence

Evidence consists of the specifics you use to reach your conclusion or judgment. For example, if you judge that "La Cocina's green chile is superb" on the basis of the criterion, "Good green chile is so fiery that you can barely eat it," you might offer evidence like the following:

"I drank an entire pitcher of water on my own during the course of the meal."
"Though my friend wouldn't admit that the chile was challenging for him, I saw beads of sweat form on his brow."

Related Information: Example Text

In the following paragraph, the evidence consists of the details about Keiko in the second and third sentences. Note that the reference to the New York Times backs up the evidence offered in the previous sentence:

Since killer whales have small lymphatic systems, they catch infections more easily when held captive (Obee 23). The orca from the movie "Free Willy," Keiko, developed a skin disorder because the water he was living in was not cold enough. This infection was a result of the combination of tank conditions and the animal's immune system, according to a New York Times article.

Types of Evidence

Evidence for academic evaluations is usually of two types: concrete detail and analytic detail. Concrete detail comes from sense perceptions and measurements: facts about color, speed, size, texture, smell, taste, and so on. Concrete details are more likely to support concrete criteria (as opposed to abstract criteria) used in judging objects. Analytic detail comes from critical thinking about abstract elements of the thing being evaluated and can also include quotations from experts. Analytic detail will more often support abstract criteria (as opposed to concrete criteria), like the criterion "feasibility," discussed in the section on criteria. Analytic detail also appears most often in academic evaluations of solutions to problems, although such solutions can also sometimes be evaluated according to concrete criteria.

What Kinds of Evidence Work

Good evidence ranges from personal experience to interviews with experts to published sources. The kind of evidence that works best for you will depend on your audience and often on the writing assignment you have been given.

Evidence and the Writing Assignment

When you choose evidence to support the judgments you are making in an evaluation, it will be important to consider what type of evaluation you are being asked to do. If, for instance, you are being asked to review a play you have attended, your evidence will most likely consist primarily of your own observations. However, if your assignment asks you to compare and contrast two potential national health care policies (toward deciding which is the better one), your evidence will need to be more statistical, more dependent on reputable sources, and more directed toward possible effects or outcomes of your judgment.

Comparison and Contrast

Comparison and contrast is the process of positioning an item or concept being evaluated among other like items or concepts. We are all familiar with this technique as it's used in the marketing of products: soft drink "taste tests," comparisons of laundry detergent effectiveness, and the like. It is a way of determining the value of something in relation to comparable things. For example, if you have made the judgment that "La Cocina's green chile is superb" and you have offered evidence of the spiciness and the flavor of the chile, you might also use comparison by giving your audience a scale on which to base judgment: "La Cocina's chile is even more fiery and flavorful than Manuel's, which is by no means a walk in the park."

In the following example, the writer compares limestone with wood to show that limestone is a better building material. Although this comparison could be developed much more, it begins to point out the relative merits of limestone:

Concrete is a feasible substitute for wood as a building material. Concrete comes from a rock called limestone, which is found all over the United States. Using limestone instead of wood would decrease the dependence on dwindling forest reserves. There are more sedimentary rocks than there are forests left in this country, and they are more evenly distributed. For this reason, it is quite possible to switch from wood to concrete as the primary building material for residential construction.

Determining Relative Worth

Comparing and contrasting rarely means placing the item or concept being evaluated in relation to another item or concept that is obviously grossly inferior. For instance, if you are attempting to demonstrate the value of a Cannondale mountain bike, it would be foolish to compare it with a Huffy. However, it would be useful to compare it with a Klein, arguably a similar bicycle. In this type of maneuver, you are not comparing good with bad; rather, you are deciding which bike is better and which bike is worse. In order to determine relative worth in this way, you will need to be very careful in defining the criteria you are using to make the comparison.

Using Comparison and Contrast Effectively

In order to make comparison and contrast function well in evaluation, it is necessary to attend to two things: 1) keeping the focus on the item or concept under consideration and 2) supporting comparative judgments with evidence. When using comparison and contrast, writers must remember that they are using comparable items or concepts only as a way of demonstrating the worth of the main item or concept under consideration. It is easy to lose focus when using this technique, because of the temptation to evaluate two (or more) items or concepts rather than just the one under consideration. It is also important to remember that judgments made on the basis of comparison and contrast need to be supported with evidence. It is not enough to assert that "La Cocina's chile is even more fiery and flavorful than Manuel's." It will be necessary to support this judgment with evidence, showing in what ways La Cocina's chile is more flavorful: "Manuel's chile relies heavily on a tomato base, giving it an Italian flavor. La Cocina follows a more traditional recipe which uses little tomato and instead flavors the chile with shredded pork, a dash of vinegar, and a bit of red chile to give it a piquant taste."

The Process of Writing an Evaluation

A variety of writing assignments call for evaluation. Bearing in mind the various approaches that might be demanded by those particular assignments, this section offers some general strategies for formulating a written evaluation.

Choosing a Topic for Evaluation

Sometimes your topic for evaluation will be dictated by the writing assignment you have been given. Other times, though, you will be required to choose your own topic. Common sense tells you that it is best to choose something about which you already have a base knowledge. For instance, if you are a skier, you might want to evaluate a particular model of skis. In addition, it is best to choose something that is tangible, observable, and/or researchable. For example, if you chose a topic like "methods of sustainable management of forests," you would know that there would be research to support your evaluation. Likewise, if you chose to evaluate a film like Pulp Fiction , you could rent the video and watch it several times in order to get the evidence you needed. However, you would have fewer options if you were to choose an abstract concept like "loyalty" or "faith." When evaluating, it is usually best to steer clear of abstractions like these as much as possible.

Brainstorming Possible Judgments

Once you have chosen a topic, you might begin your evaluation by thinking about what you already know about the topic. In doing this, you will be coming up with possible judgments to include in your evaluation. Begin with a tentative overall judgment or claim. Then decide what supporting judgments you might make to back that claim. Keep in mind that your judgments will likely change as you collect evidence for your evaluation.

Determining a Tentative Overall Judgment

Start by making an overall judgment on the topic in question, based on what you already know. For instance, if you were writing an evaluation of sustainable management practices in forestry, your tentative overall judgment might be: "Sustainable management is a viable way of dealing with deforestation in old growth forests."

Brainstorming Possible Supporting Judgments

With a tentative overall judgment in mind, you can begin to brainstorm judgments (or reasons) that could support your overall judgment by asking the question, "Why?" For example, asking "Why?" of the tentative overall judgment "Sustainable management is a viable way of dealing with deforestation in old growth forests" might yield the following supporting judgments:

  • Sustainable management allows for continued support of the logging industry.
  • It eliminates much unnecessary waste.
  • It is much better for the environment than unrestricted, traditional forestry methods.
  • It is less expensive than these traditional methods.

Anticipating Changes to Your Judgments After Collecting Evidence

When brainstorming possible judgments this early in the writing process, it is necessary to keep an open mind as you enter into the stage in which you collect evidence. Once you have done observations, analysis, or research, you might find that you are unable to advance your tentative overall judgment. Or you might find that some of the supporting judgments you came up with are not true or are not supportable. Your findings might also point you toward other judgments you can make in addition to the ones you are already making.

Defining Criteria

To prepare to organize and write your evaluation, it is important to clearly define the criteria you are using to make your judgments. These criteria govern the direction of the evaluation and provide structure and justification for the judgments you make.

Looking at the Criteria Informing Your Judgments (Working Backwards)

We often work backwards from the judgments we make, discovering what criteria we are using on the basis of what our judgments look like. For instance, our tentative judgments about sustainable management practices are the four brainstormed above: continued support of the logging industry, elimination of waste, benefit to the environment, and lower cost.

If we were to analyze these judgments, asking ourselves why we made them, we would see that we used the following criteria: wellbeing of the logging industry, conservation of resources, wellbeing of the environment, and cost.

Thinking of Additional Criteria

Once you have identified the criteria informing your initial judgments, you will want to determine what other criteria should be included in your evaluation. For example, in addition to the criteria you've already come up with (wellbeing of the logging industry, conservation of resources, wellbeing of the environment, and cost), you might include the criterion of preservation of the old growth forests.

Comparing Your Criteria with Those of Your Audience

In deciding which criteria are most important to include in your evaluation, it is necessary to consider the criteria your audience is likely to find important. Let's say we are directing our evaluation of sustainable management methods toward an audience of loggers. If we look at our list of criteria--wellbeing of the logging industry, conservation of resources, wellbeing of the environment, cost, and preservation of the old growth forests--we might decide that wellbeing of the logging industry and cost are the criteria most important to loggers. At this point, we would also want to identify additional criteria the audience might expect us to address: perhaps feasibility, labor requirements, and efficiency.

Deciding Which Criteria Are Most Important

Once you have developed a long list of possible criteria for judging your subject (in this case, sustainable management methods), you will need to narrow the list, since it is impractical and ineffective to use all possible criteria in your essay. To decide which criteria to address, determine which are least dispensable, both to you and to your audience. Your own criteria were: wellbeing of the logging industry, conservation of resources, wellbeing of the environment, cost, and preservation of the old growth forests. Those you anticipated for your audience were: feasibility, labor requirements, and efficiency. In the written evaluation, you might choose to address those criteria most important to your audience, with a couple of your own included. For example, your list of indispensable criteria might look like this: wellbeing of the logging industry, cost, labor requirements, efficiency, conservation of resources, and preservation of the old growth forests.

Criteria and Assumptions

Stephen Reid, English Professor: Warrants (to use a term from argumentation) come on the scene when we ask why a given criterion should be used or should be acceptable in evaluating the particular text, product, or performance in question. When we ask WHY a particular criterion should be important (let's say, strong performance in an automobile engine, quickly moving plot in a murder mystery, outgoing personality in a teacher), we are getting at the assumptions (i.e., the warrant) behind why the data is relevant to the claim of value we are about to make. Strong performance in an automobile engine might be a positive criterion in an urban, industrialized environment, where traveling at highway speeds on American interstates is important. But we might disagree about whether strong performance (accompanied by lower mileage) might be important in a rural European environment where gas costs several dollars a litre. Similarly, an outgoing personality for a teacher might be an important standard of judgment or criterion in a teacher-centered classroom, but we could imagine another kind of decentered class where interpersonal skills are more important than teacher personality. By QUESTIONING the validity and appropriateness of a given criterion in a particular situation, we are probing for the ASSUMPTIONS or WARRANTS we are making in using that criterion in that particular situation. Thus, criteria are important, but it is often equally important for writers to discuss the assumptions that they are making in choosing the major criteria in their evaluations.

Collecting Evidence

Once you have established the central criteria you will use in your evaluation, you will investigate your subject in terms of these criteria. In order to investigate the subject of sustainable management methods, you would more than likely have to research whether these methods stand up to the criteria you have established: wellbeing of the logging industry, cost, labor requirements, efficiency, conservation of resources, and preservation of the old growth forests. However, library research is only one of the techniques evaluators use. Depending on the type of evaluation being made, the evaluator might use such methods as observation, field research, and analysis.

Thinking About What You Already Know

The best place to start looking for evidence is with the knowledge you already possess. To do this, you might try brainstorming, clustering, or freewriting ideas.

Library Research

When you are evaluating policies, issues, or products, you will usually need to conduct library research to find the evidence your evaluation requires. It is always a good idea to check journals, databases, and bibliographies relevant to your subject when you begin research. It is also helpful to speak with a reference librarian about how to get started.

Observation

When you are asked to evaluate a performance, event, place, object, or person, one of the best methods available is simple observation. What makes observation not so simple is the need to focus on criteria you have developed ahead of time. If, for instance, you are reviewing a student production of Hamlet , you will want to review your list of criteria (perhaps quality of acting, costumes, faithfulness to the text, set design, lighting, and length of time before intermission) before attending the play. During or after the play, you will want to take as many notes as possible, keeping these criteria in mind.

Field Research

To expand your evaluation beyond your personal perspective or the perspective of your sources, you might conduct your own field research. Typical field research techniques include interviewing, taking a survey, administering a questionnaire, and conducting an experiment. These methods can help you support your judgment and can sometimes help you determine whether or not your judgment is valid.

Analysis

When you are asked to evaluate a text, analysis is often the technique you will use in collecting evidence. If you are analyzing an argument, you might use the Toulmin Method. Other texts might not require such a structured analysis but might be better addressed by more general critical reading strategies.

Applying Criteria

After developing a list of indispensable criteria, you will need to "test" the subject according to these criteria. At this point, it will probably be necessary to collect evidence (through research, analysis, or observation) to determine, for example, whether sustainable management methods would hold up to the criteria you have established: wellbeing of the logging industry, cost, labor requirements, efficiency, conservation of resources, and preservation of the old growth forests. One way of recording the results of this "test" is by putting your notes in a three-column log.
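
For example, a three-column log for the sustainable management evaluation might pair each criterion with the evidence you find and the judgment that evidence supports. The rows below are offered only as a hypothetical sketch; the entries are illustrations, not findings from the guide:

    Criterion                      | Evidence collected                               | Judgment supported
    cost                           | cost figures gathered through library research   | less expensive than traditional methods
    wellbeing of the environment   | findings on soil erosion and habitat loss        | better for the environment than unrestricted methods

Filling in a row for each criterion makes it easy to see which judgments are well supported and which still need more evidence.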

Organizing the Evaluation

One of the best ways to organize your information in preparation for writing is to construct an informal outline of sorts. Outlines might be arranged according to criteria, comparison and contrast, chronological order, or causal analysis. They also might follow what Robert K. Miller and Suzanne S. Webb refer to in their book, Motives for Writing (2nd ed.) as "the pattern of classical oration for evaluations" (286). In addition to deciding on a general structure for your evaluation, it will be necessary to determine the most appropriate placement for your overall claim or judgment.

Placement of the Overall Claim or Judgment

Writers can state their final position at the beginning or the end of an essay. The same is true of the overall claim or judgment in a written evaluation.

When you place your overall claim or judgment at the end of your written evaluation, you are able to build up to it and to demonstrate how your evaluative argument (evidence, explanation of criteria, etc.) has led to that judgment.

Writers of academic evaluations normally don't need to keep readers in suspense about their judgments. By stating the overall claim or judgment early in the paper, writers help readers both to see the structure of the essay and to accept the evidence as convincing proof of the judgment. (Writers of evaluations should remember, of course, that there is no rule against stating the overall claim or judgment at both the beginning and the end of the essay.)

Organization by Criteria

The following is an example from Stephen Reid's The Prentice Hall Guide for College Writers (4th ed.), showing how a writer might arrange an evaluation according to criteria:

Introductory paragraphs: information about the restaurant (location, hours, prices), general description of Chinese restaurants today, and overall claim: The Hunan Dynasty is reliable, a good value, and versatile.
Criterion #1/Judgment: Good restaurants should have an attractive setting and atmosphere / Hunan Dynasty is attractive.
Criterion #2/Judgment: Good restaurants should give strong priority to service / Hunan Dynasty has, despite an occasional glitch, expert service.
Criterion #3/Judgment: Restaurants that serve modestly priced food should have quality main dishes / Main dishes at Hunan Dynasty are generally good but not often memorable. (Note: The most important criterion--the quality of the main dishes--is saved for last.)
Concluding paragraphs: Hunan Dynasty is a top-flight neighborhood restaurant (338).

Organization by Comparison and Contrast

Sometimes comparison and contrast is not merely a strategy used in part of an evaluation, but is the strategy governing the organization of the entire essay. The following are examples from Stephen Reid's The Prentice Hall Guide for College Writers (4th ed.), showing two ways that a writer might organize an evaluation according to comparison and contrast.

Introductory paragraph(s)

Thesis [or overall claim/judgment]: Although several friends recommended the Yakitori, we preferred the Unicorn for its more authentic atmosphere, courteous service, and well-prepared food. [Notice that the criteria are stated in this thesis.]

Authentic atmosphere: Yakitori vs. Unicorn

Courteous service: Yakitori vs. Unicorn

Well-prepared food: Yakitori vs. Unicorn

Concluding paragraph(s) (Reid 339)

A second arrangement presents each subject as a block:

The Yakitori: atmosphere, service, and food

The Unicorn: atmosphere, service, and food as compared to the Yakitori

Concluding paragraph(s) (Reid 339).

Organization by Chronological Order

Writers often follow chronological order when evaluating or reviewing events or performances. This method of organization allows the writer to evaluate portions of the event or performance in the order in which they happen.

Organization by Causal Analysis

When using analysis to evaluate places, objects, events, or policies, writers often focus on causes or effects. The following is an example from Stephen Reid's The Prentice Hall Guide for College Writers (4th ed.), showing how one writer organizes an evaluation of a Goya painting by discussing its effects on the viewer.

Criterion #1/Judgment: The iconography, or use of symbols, contributes to the powerful effect of this picture on the viewer.

Evidence: The church as a symbol of hopefulness contrasts with the cruelty of the execution. The spire on the church emphasizes for the viewer how powerless the Church is to save the victims.

Criterion #2/Judgment: The use of light contributes to the powerful effect of the picture on the viewer.

Evidence: The light casts an intense glow on the scene, and its glaring, lurid, and artificial qualities create the same effect on the viewer that modern art sometimes does.

Criterion #3/Judgment: The composition or use of formal devices contributes to the powerful effect of the picture on the viewer.

Evidence: The diagonal lines scissor the picture into spaces that give the viewer a claustrophobic feeling. The corpse is foreshortened, so that it looks as though the dead man is bidding the viewer welcome (Reid 340).

Pattern of Classical Oration for Evaluations

Robert K. Miller and Suzanne S. Webb, in their book, Motives for Writing (2nd ed.) discuss what they call "the pattern of classical oration for evaluations," which incorporates opposing evaluations as well as supporting reasons and judgments. This pattern is as follows:

Present your subject. (This discussion includes any background information, description, acknowledgement of weaknesses, and so forth.)

State your criteria. (If your criteria are controversial, be sure to justify them.)

Make your judgment. (State it as clearly and emphatically as possible.)

Give your reasons. (Be sure to present good evidence for each reason.)

Refute opposing evaluations. (Let your reader know you have given thoughtful consideration to opposing views, if such views exist.)

State your conclusion. (You may restate or summarize your judgment.) (Miller and Webb 286-7)

Example: Part of an Outline for an Evaluation

The following is a portion of an outline for an evaluation, organized by way of supporting judgments or reasons. Notice that this pattern would need to be repeated (using criteria other than the fieriness of the green chile) in order to constitute a complete evaluation proving that "Although La Cocina is not without its faults, it is the best Mexican restaurant in town."

Evaluation of La Cocina, a Mexican Restaurant

Intro Paragraph Leading to Overall Judgment: "Although La Cocina is not without its faults, it is the best Mexican restaurant in town."

Supporting Judgment: "La Cocina's green chile is superb."

Criterion used to make this judgment: "Good green chile is so fiery that you can barely eat it."

Evidence in support of this judgment: "I drank an entire pitcher of water on my own during the course of the meal" or "Though my friend wouldn't admit that the chile was challenging for him, I saw beads of sweat form on his brow."

Supporting Judgment made by way of Comparison and Contrast: "La Cocina's chile is even more fiery and flavorful than Manuel's, which is by no means a walk in the park itself."

Evidence in support of this judgment: "Manuel's chile relies heavily on a tomato base, giving it an Italian flavor. La Cocina follows a more traditional recipe which uses little tomato, and instead flavors the chile with shredded pork, a dash of vinegar, and a bit of red chile to give it a piquant taste."

Writing the Draft

If you have an outline to follow, writing a draft of a written evaluation is simple. Stephen Reid, in his Prentice Hall Guide for College Writers , recommends that writers maintain focus on both the audience they are addressing and the central criteria they want to include. Such a focus will help writers remember what their audience expects and values and what is most important in constructing an effective and persuasive evaluation.

Guidelines for Revision

In his Prentice Hall Guide for College Writers , 4th ed., Stephen Reid offers some helpful tips for revising written evaluations. These guidelines are reproduced here and grouped as follows:

Examining Criteria

Criteria are standards of value. They contain categories and judgments, as in "good fuel economy," "good reliability," or "powerful use of light and shade in painting." Some categories, such as "price," have clearly implied judgments ("low price"), but make sure that your criteria refer implicitly or explicitly to a standard of value.

Examine your criteria from your audience's point of view. Which criteria are most important in evaluating your subject? Will your readers agree that the criteria you select are indeed the most important ones? Will changing the order in which you present your criteria make your evaluation more convincing? (Reid 342)

Balancing the Evaluation

Include both positive and negative evaluations of your subject. If all of your judgments are positive, your evaluation will sound like an advertisement. If all of your judgments are negative, your readers may think you are too critical (Reid 342).

Using Evidence

Be sure to include supporting evidence for each criterion. Without any data or support, your evaluation will be just an opinion that will not persuade your reader.

If you need additional evidence to persuade your readers, [go back to the "Collecting" stage of this process] (Reid 343).

Avoiding Overgeneralization

Avoid overgeneralizing your claims. If you are evaluating only three software programs, you cannot say that Lotus 1-2-3 is the best business program around. You can say only that it is the best among the group or the best in the particular class that you measured (Reid 343).

Making Appropriate Comparisons

Unless your goal is humor or irony, compare subjects that belong in the same class. Comparing a Yugo to a BMW is absurd because they are not similar cars in terms of cost, design, or purpose (Reid 343).

Checking for Accuracy

If you are citing other people's data or quoting sources, check to make sure your summaries and data are accurate (Reid 343).

Working on Transitions, Clarity, and Style

Signal the major divisions in your evaluation to your reader using clear transitions, key words, and paragraph hooks. At the beginning of new paragraphs or sections of your essay, let your reader know where you are going.

Revise sentences for directness and clarity.

Edit your evaluation for correct spelling, appropriate word choice, punctuation, usage, and grammar (Reid 343).

Nesbitt, Laurel, Kathy Northcut, & Kate Kiefer. (1997). Academic Evaluations. Writing@CSU. Colorado State University. https://writing.colostate.edu/guides/guide.cfm?guideid=47

University of Texas

Information Literacy Toolkit

Assessment Resource Description

Undergraduates learn best from assignments that provide concrete and specific guidance on research methods. Librarians can help you design assignments that will guide your students toward effective research, and this rubric is one tool we use to do that.

Apply the assignment design rubric to your assignment to ensure that it has:

  • Clear expectations about source requirements
  • A clear rationale and context for resource requirements
  • Focus on the research process
  • Library engagement

University of Texas Libraries. Information Literacy Toolkit. University of Texas at Austin. https://guides.lib.utexas.edu/toolkit

Designing Writing Assignments

As you think about creating writing assignments, use these five principles:

  • Tie the writing task to specific pedagogical goals.
  • Note rhetorical aspects of the task, i.e., audience, purpose, writing situation.
  • Make all elements of the task clear.
  • Include grading criteria on the assignment sheet.
  • Break down the task into manageable steps.

You'll find discussions of these principles in the following sections of this guide.

Writing Should Meet Teaching Goals

To guarantee that writing tasks tie directly to the teaching goals for your class, ask yourself questions such as the following:

  • What specific course objectives will the writing assignment meet?
  • Will informal or formal writing better meet my teaching goals?
  • Will students be writing to learn course material, to master writing conventions in this discipline, or both?
  • Does the assignment make sense?

Working Backwards from Goals

Although it might seem awkward at first, working backwards from what you hope the final papers will look like often produces the best assignment sheets. We recommend jotting down several points that will help you with this step in writing your assignments:

  • Why should students write in your class? State your goals for the final product as clearly and concretely as possible.
  • Determine what writing products will meet these goals and fit your teaching style/preferences.
  • Note specific skills that will contribute to the final product.
  • Sequence activities (reading, researching, writing) to build toward the final product.

Guidelines for Writing Assignments

Successful writing assignments depend on preparation, careful and thorough instructions, and explicit criteria for evaluation. Although your experience with a given assignment will suggest ways of improving a specific paper in your class, the following guidelines should help you anticipate many potential problems and considerably reduce your grading time.

I. Purpose of the assignment

  • Explain the purpose of the writing assignment.
  • Make the format of the writing assignment fit the purpose (format: research paper, position paper, brief or abstract, lab report, problem-solving paper, etc.).

II. The assignment

  • Provide complete written instructions.
  • Provide format models where possible.
  • Discuss sample strong, average, and weak papers.

III. Revision of written drafts

Where appropriate, peer group workshops on rough drafts of papers may improve the overall quality of papers. For example, have students critique each other's papers one week before the due date for format, organization, or mechanics. For these workshops, outline specific and limited tasks on a checksheet. These workshops also give you an opportunity to make sure that all the students are progressing satisfactorily on the project.

IV. Evaluation

On a grading sheet, indicate the percentage of the grade devoted to content and the percentage devoted to writing skills (expression, punctuation, spelling, mechanics). The grading sheet should indicate the important content features as well as the writing skills you consider significant.

Resource: Checksheets

Checksheet 1 (thanks to Kate Kiefer and Donna Lecourt). As you prepare an assignment sheet, ask yourself: have I . . .

  • written out the assignment so that students can take away a copy of the precise task?
  • made clear which course goals this writing task helps students meet?
  • specified the audience and purpose of the assignment?
  • outlined clearly all required sub-parts of the assignment (if any)?
  • included my grading criteria on the assignment sheet?
  • pointed students toward appropriate prewriting activities or sources of information?
  • specified the format of the final paper (including documentation, headings or sections, page layout)?
  • given students models or appropriate samples?
  • set a schedule that will encourage students to review each other's drafts and revise their papers?

Checksheet 2 (thanks to Jean Wyrick):

  • Is the assignment written clearly on the board or on a handout?
  • Do the instructions explain the purpose(s) of the assignment?
  • Does the assignment fit the purpose?
  • Is the assignment stated in precise language that cannot be misunderstood?
  • If choices are possible, are these options clearly marked?
  • Are there instructions for the appropriate format? (examples: length? typed? cover sheet? type of paper?)
  • Are there any special instructions, such as use of a particular citation format or kinds of headings? If so, are these clearly stated?
  • Is the due date clearly visible? (Are late assignments accepted? If so, any penalty?)
  • Are any potential problems anticipated and explained?
  • Are the grading criteria spelled out as specifically as possible? How much does content count? Organization? Writing skills? One grade or separate grades on form and content? Etc.
  • Does the grading criteria section specifically indicate which writing skills the teacher considers important as well as the various aspects of content?
  • What part of the course grade is this assignment?
  • Does the assignment include use of models (strong, average, weak) or sample outlines?

Resources: Sample Assignments

Sample Full-Semester Assignment from Ag Econ 4XX

Good analytical writing is a rigorous and difficult task. It involves a process of editing and rewriting, and it is common to do a half dozen or more drafts. Because of the difficulty of analytical writing and the need for drafting, we will be completing the assignment in four stages. A draft of each of the sections described below is due when we finish the class unit related to that topic (see due dates on syllabus). I will read the drafts of each section and provide comments; these drafts will not be graded, but failure to turn in a complete version of a section will result in a deduction from your final paper grade. Because of the time both you and I are investing in the project, it will constitute one-half of your semester grade.

Content, Concepts and Substance

Papers will focus on the peoples and policies related to population, food, and the environment of your chosen country. As well as exploring each of these topics, papers need to highlight the interrelations among them. These interrelations should form part of your revision focus for the final draft. Important concepts relevant to the papers will be covered in class; therefore, your research should be focused on the collection of information on your chosen country or region to substantiate your themes. Specifically, the paper needs to address the following questions.

  • Population - Developing countries have undergone large changes in population. Explain the dynamic nature of this continuing change in your country or region and the forces underlying the changes. Better papers will go beyond description and analyze the situation at hand. That is, go behind the numbers to explain what is happening in your country with respect to the underlying population dynamics: structure of growth, population momentum, rural/urban migration, age structure of population, unanticipated population shocks, etc. DUE: WEEK 4.
  • Food - What is the nature of food consumption in your country or region? Is the average daily consumption below recommended levels? Is food consumption increasing with economic growth? What is the income elasticity of demand? Use Engel's law to discuss this behavior. Is production able to keep pace with demand given these trends? What is the nature of agricultural production: traditional agriculture or green revolution technology? Is the trend in food production towards self-sufficiency? If not, can comparative advantage explain this? Does the country import or export food? Is the politico-economic regime supportive of a progressive agricultural sector? DUE: WEEK 8.
  • Environment - This is the third issue to be covered in class. It is crucial to show in your paper the environmental impact of agricultural production techniques as well as any direct impacts from population changes. This is especially true in countries that have evolved from traditional agriculture to green revolution techniques in the wake of population pressures. While there are private benefits to increased production, the use of petroleum-based inputs leads to environmental and human health related social costs which are exacerbated by poorly defined property rights. Use the concepts of technological externalities, assimilative capacity, property rights, etc. to explain the nature of this situation in your country or region. What other environmental problems are evident? Discuss the problems and methods for economically measuring environmental degradation. DUE: WEEK 12.
  • Final Draft - The final draft of the project should consider the economic situation of agriculture in your specified country or region from the three perspectives outlined above. Key to such an analysis are the interrelationships of the three perspectives. How does each factor contribute to an overall analysis of the successes and problems in agricultural policy and production of your chosen country or region? The paper may conclude with recommendations, but, at the very least, it should provide a clear summary statement about the challenges facing your country or region. DUE: WEEK 15.

Landscape Architecture 3XX: Design Critique

Critical yet often overlooked components of the landscape architect's professional skills are the ability to critically evaluate existing designs and the ability to express himself or herself eloquently in writing. To develop your skills in these fundamental areas, you are to professionally critique a built project with which you are personally and directly familiar. The critique is intended for the "informed public," the kind of reader one might expect for such features in The New York Times or Columbus Monthly; therefore, it should be insightful and professionally valid, yet also entertaining and eloquent. It should reflect a sophisticated knowledge of the subject without being burdened with professional jargon.

As in most critiques or reviews, you are attempting not only to identify the project's good and bad features but also to interpret the project's significance and meaning. As such, the critique should have a clear "point of view" or thesis that is then supported by evidence (your description of the place) that persuades the reader that your thesis is valid. Note, however, that your primary goal is not to force the reader to agree with your point of view but rather to present a valid discussion that enriches and broadens the reader's understanding of the project.

To assist in the development of the best possible paper, you are to submit a typed draft by 1:00 pm, Monday, February 10th. The drafts will be reviewed as a set and will then serve as the basis for an in-class writing improvement seminar on Friday, February 14th. The seminar will focus on problems identified in the set of drafts, so individual papers will not have been commented on or marked. You may also submit a typed draft of your paper to the course instructor for review and comment at any time prior to the final submission.

Final papers are due at 2:00 pm, Friday, February 23rd.

Animal/Dairy/Poultry Science 2XX: Comparative Animal Nutrition

Purpose: Students should be able to integrate lecture and laboratory material, relate class material to industry situations, and improve their problem-solving abilities.

Assignment 1: Weekly laboratory reports (50 points)

For the first laboratory, students will be expected to provide depth and breadth of knowledge, creativity, and proper writing format in a one-page, typed, double-spaced report. Thus, conciseness will be stressed. Five points total will be possible for the first draft, another five points possible will be given to a student peer-reviewer of the draft, and five final points will be available for a second draft. This assignment, in its entirety, will be due before the first midterm (class 20). Any major writing flaws will be addressed early so that students can grasp concepts stressed by the instructors without major impact on their grades. Additional objectives are to provide students with skills in critically reviewing papers and to acquaint writers and reviewers with the instructors' expectations for assignments 2 and 3, which are weighted much more heavily.

Students will submit seven one-page handwritten reports, each covering the previous week's laboratory. These reports will cover laboratory classes 2-9; note that one report can be dropped and that week 10 has no laboratory. Reports will be graded (5 points each) by the instructors for integration of relevant lecture material or prior experience with the current laboratory.

Assignment 2: Group problem-solving approach to a nutritional problem in the animal industry (50 points)

Students will be divided into groups of four. Several problems will be offered by the instructors, but a group can choose an alternative, approved topic. Students should propose a solution to the problem. Because most real-life problems are solved by groups of employees and (or) consultants, this exercise should provide students an opportunity to practice skills they will need after graduation. Groups will divide the assignment as they see fit. However, 25 points will be based on an individual's separate assignment (1-2 typed pages), and 25 points will be based on the group's total document. Thus, it is assumed that papers will be peer-reviewed. The intended audience will be marketing directors, who will need suitable background, illustrations, etc., to help their salespersons sell more products. This assignment will be started in about the second week of class and will be due by class 28.

Assignment 3: Students will develop a topic of their own choosing (approved by instructors) to be written for two audiences (100 points).

The first assignment (25 points) will be written in "common language," e.g., to farmers or salespersons. High clarity of presentation will be expected. It also will be graded for content to assure that the student has developed the topic adequately. This assignment will be due by class 38.

Concomitant with this assignment will be a first draft of a scientific term paper on the same subject. Ten scientific articles and five typed, double-spaced pages are minimum requirements. Basic knowledge of scientific principles will be incorporated into this term paper written to an audience of alumni of this course working in a nutrition-related field. This draft (25 points) will be due by class 38. It will be reviewed by a peer who will receive up to 25 points for his/her critique. It will be returned to the student and instructor by class 43. The final draft, worth an additional 25 points, will be due before class 50 and will be returned to the student during the final exam period.

Integration Papers - HD 3XX

Two papers will be assigned for the semester, each to be no more than three typewritten pages in length. Each paper will be worth 50 points.

Purpose:   The purpose of this assignment is to help the student learn the skills necessary for making policy decisions and to encourage the student to consider the integral relationship between theory, research, and social policy.

Format:   The student may choose any issue of interest that is appropriate to the socialization focus of the course, but the issue must be clearly stated and the student is advised to carefully limit the scope of the issue question.

There are three sections to the paper:

First:   One page will summarize two conflicting theoretical approaches to the chosen issue. Summarize only what the selected theories may or would say about the particular question you've posed; do not try to summarize the entire theory. Make clear to a reader in what way the two theories disagree or contrast. Your text should provide you with the basic information to do this section.

Second:   On the second page, summarize (abstract) one relevant piece of current research. The research article must be chosen from a professional journal (not a secondary source) written within the last five years. The article should be abstracted and then the student should clearly show how the research relates to the theoretical position(s) stated earlier, in particular, and to the socialization issue chosen in general. Be sure the subjects used, methodology, and assumptions can be reasonably extended to your concern.

Third:   On the third page, the student will present a policy guideline (for example, the Colorado courts should be required to include, on the child's behalf, a child development specialist's testimony at all custody hearings) that can be supported by the information gained and presented in the first two pages. My advice is that you picture a specific audience and the final purpose or use of such a policy guideline. For example, perhaps as a child development specialist you have been requested to present an informed opinion to a federal or state committee whose charge is to develop a particular type of human development program or service. Be specific about your hypothetical situation and this will help you write a realistic policy guideline.

Sample papers will be available in the department reading room.

SP3XX Short Essay Grading Criteria

A (90-100): Thesis is clearly presented in first paragraph. Every subsequent paragraph contributes significantly to the development of the thesis. Final paragraph "pulls together" the body of the essay and demonstrates how the essay as a whole has supported the thesis. In terms of both style and content, the essay is a pleasure to read; ideas are brought forth with clarity and follow each other logically and effortlessly. Essay is virtually free of misspellings, sentence fragments, fused sentences, comma splices, semicolon errors, wrong word choices, and paragraphing errors.

B (80-89): Thesis is clearly presented in first paragraph. Every subsequent paragraph contributes significantly to the development of the thesis. Final paragraph "pulls together" the body of the essay and demonstrates how the essay as a whole has supported the thesis. In terms of style and content, the essay is still clear and progresses logically, but the essay is somewhat weaker due to awkward word choice, sentence structure, or organization. Essay may have a few (approximately 3) instances of misspellings, sentence fragments, fused sentences, comma splices, semicolon errors, wrong word choices, and paragraphing errors.

C (70-79): There is a thesis, but the reader may have to hunt for it a bit. All the paragraphs contribute to the thesis, but the organization of these paragraphs is less than clear. Final paragraph simply summarizes essay without successfully integrating the ideas presented into a unified support for thesis. In terms of style and content, the reader is able to discern the intent of the essay and the support for the thesis, but some amount of mental gymnastics and "reading between the lines" is necessary; the essay is not easy to read, but it still has said some important things. Essay may have instances (approximately 6) of misspellings, sentence fragments, fused sentences, comma splices, semicolon errors, wrong word choices, and paragraphing errors.

D (60-69): Thesis is not clear. Individual paragraphs may have interesting insights, but the paragraphs do not work together well in support of the thesis. In terms of style and content, the essay is difficult to read and to understand, but the reader can see there was a (less than successful) effort to engage a meaningful subject. Essay may have several instances (approximately 6) of misspellings, sentence fragments, fused sentences, comma splices, semicolon errors, wrong word choices, and paragraphing errors.

Teacher Comments

Patrick Fitzhorn, Mechanical Engineering: My expectations for freshmen are relatively high. I'm jaded with the seniors, who keep disappointing me. Often, we don't agree on the grading criteria.

There are three parts to our writing in engineering. The first part is the assignment itself.

There are four types: lab reports, technical papers, design reports, and proposals. The second part is expectations about growth in writing style at each level of our curriculum, and students' understanding that high school writing is not acceptable for a senior in college. The third is how we transform our expectations into justifiable grades that give real feedback to the students.

For the freshmen, I might give a page to a page and a half on how I want the design report written. For the seniors, it was three pages long. We try to capture how our expectations change from freshman to senior. I bet the structure is almost identical...

We always give them pretty rigorous outlines. Oftentimes, the way students write is to take the outline we give them and write chunk by chunk. For virtually every writing assignment we give, we provide an outline of the writing style we want. These patterns are then used in industry. One organization style works for each of the writing styles. Between faculty, some minute details of organization may change, but there is a standard for writers to follow.

Interviewer: How do students determine purpose?

Ken Reardon, Chemical Engineering: Students usually respond to an assignment. That tells them what the purpose is. . . . I think it's something they infer from the assignment sheet.

Interviewer: What types of purposes are there?

Ken Reardon: Persuading is the case with proposals, and informing with progress reports and final results. Informing is just, "Here are the results of the analysis; here's the answer to the question." It's presenting information. Persuasion is analyzing some information and coming to a conclusion. More of the writing I've seen engineers do is a soft version of persuasion, where they're not trying to sell: "Here's my analysis, here's how I interpreted those results, and so here's what I think is worthwhile." Justifying.

Interviewer: Why do students need to be aware of this concept?

Ken Reardon: It helps to tell the reader what they're reading. Without it, readers don't know how to read.

Kate Kiefer. (2018). Designing Writing Assignments. The WAC Clearinghouse. Retrieved from https://wac.colostate.edu/repository/teaching/guides/designing-assignments/. Originally developed for Writing@CSU (https://writing.colostate.edu).


Evaluating Research – Process, Examples and Methods

Evaluating Research

Definition:

Evaluating Research refers to the process of assessing the quality, credibility, and relevance of a research study or project. This involves examining the methods, data, and results of the research in order to determine its validity, reliability, and usefulness. Evaluating research can be done by both experts and non-experts in the field, and involves critical thinking, analysis, and interpretation of the research findings.

Research Evaluation Process

The process of evaluating research typically involves the following steps:

Identify the Research Question

The first step in evaluating research is to identify the research question or problem that the study is addressing. This will help you to determine whether the study is relevant to your needs.

Assess the Study Design

The study design refers to the methodology used to conduct the research. You should assess whether the study design is appropriate for the research question and whether it is likely to produce reliable and valid results.

Evaluate the Sample

The sample refers to the group of participants or subjects who are included in the study. You should evaluate whether the sample size is adequate and whether the participants are representative of the population under study.
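
One way to judge whether a sample size is adequate is a statistical power analysis. The sketch below is a minimal example in Python using the statsmodels library; the effect size (0.5, a conventional "medium" effect), the alpha of 0.05, and the target power of 0.80 are illustrative assumptions rather than fixed standards.

```python
# A minimal power-analysis sketch for a two-group comparison
# (assumed inputs: effect size 0.5, alpha 0.05, desired power 0.80).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Solve for the sample size per group needed to detect the effect.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Minimum sample size per group: {n_per_group:.0f}")

# Conversely, estimate the power achieved by an existing sample size.
achieved = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=250)
print(f"Power with 250 participants per group: {achieved:.2f}")
```

This assumes a simple two-group design; one-sample, paired, or more complex designs would call for different power calculations.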

Review the Data Collection Methods

You should review the data collection methods used in the study to ensure that they are valid and reliable. This includes assessing both the measures and the procedures used to collect the data.

Examine the Statistical Analysis

Statistical analysis refers to the methods used to analyze the data. You should examine whether the statistical analysis is appropriate for the research question and whether it is likely to produce valid and reliable results.
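
A quick way to probe whether an analysis fits the data is to test the assumptions behind it. The sketch below, with fabricated scores, uses SciPy to check the normality assumption of a t-test and falls back to a non-parametric test when that assumption fails; the 0.05 threshold and the data are illustrative assumptions only.

```python
# A minimal sketch of checking a test's assumptions before choosing it.
# Both groups of scores below are fabricated for illustration.
from scipy import stats

group_a = [72, 85, 90, 66, 78, 88, 95, 70, 82, 76]
group_b = [60, 75, 80, 58, 69, 77, 83, 62, 71, 65]

# The Shapiro-Wilk test checks the normality assumption behind the t-test.
normal_a = stats.shapiro(group_a).pvalue > 0.05
normal_b = stats.shapiro(group_b).pvalue > 0.05

if normal_a and normal_b:
    result = stats.ttest_ind(group_a, group_b)     # parametric test
else:
    result = stats.mannwhitneyu(group_a, group_b)  # non-parametric fallback
print(result)
```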

Assess the Conclusions

You should evaluate whether the data support the conclusions drawn from the study and whether they are relevant to the research question.

Consider the Limitations

Finally, you should consider the limitations of the study, including any potential biases or confounding factors that may have influenced the results.

Evaluating Research Methods

Methods for evaluating research are as follows:

  • Peer review: Peer review is a process where experts in the field review a study before it is published. This helps ensure that the study is accurate, valid, and relevant to the field.
  • Critical appraisal: Critical appraisal involves systematically evaluating a study based on specific criteria. This helps assess the quality of the study and the reliability of the findings.
  • Replication: Replication involves repeating a study to test the validity and reliability of the findings. This can help identify any errors or biases in the original study.
  • Meta-analysis: Meta-analysis is a statistical method that combines the results of multiple studies to provide a more comprehensive understanding of a particular topic. This can help identify patterns or inconsistencies across studies (a minimal worked sketch follows this list).
  • Consultation with experts: Consulting with experts in the field can provide valuable insights into the quality and relevance of a study. Experts can also help identify potential limitations or biases in the study.
  • Review of funding sources: Examining the funding sources of a study can help identify any potential conflicts of interest or biases that may have influenced the study design or interpretation of results.
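
To make the meta-analysis idea above concrete, here is a minimal sketch of fixed-effect, inverse-variance pooling in Python. The study effects and standard errors are fabricated for illustration, and a real meta-analysis would add steps such as heterogeneity checks.

```python
# A minimal fixed-effect meta-analysis sketch using inverse-variance
# weighting. All study effects and standard errors are fabricated.
import math

studies = [
    # (effect estimate, standard error)
    (0.30, 0.10),
    (0.45, 0.15),
    (0.20, 0.08),
]

weights = [1 / se**2 for _, se in studies]  # inverse-variance weights
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```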

Example of Evaluating Research

An example of evaluating research, provided as a sample for students:

Title of the Study: The Effects of Social Media Use on Mental Health among College Students

Sample Size: 500 college students

Sampling Technique: Convenience sampling

  • Sample Size: The sample size of 500 college students is a moderate sample size, which could be considered representative of the college student population. However, it would be more representative if the sample size were larger or if a random sampling technique were used.
  • Sampling Technique: Convenience sampling is a non-probability sampling technique, which means that the sample may not be representative of the population. This technique may introduce bias into the study, since the participants are self-selected and may not be representative of the entire college student population. Therefore, the results of this study may not be generalizable to other populations.
  • Participant Characteristics: The study does not provide any information about the demographic characteristics of the participants, such as age, gender, race, or socioeconomic status. This information is important because social media use and mental health may vary among different demographic groups.
  • Data Collection Method: The study used a self-administered survey to collect data. Self-administered surveys may be subject to response bias and may not accurately reflect participants' actual behaviors and experiences.
  • Data Analysis: The study used descriptive statistics and regression analysis to analyze the data. Descriptive statistics provide a summary of the data, while regression analysis is used to examine the relationship between two or more variables. However, the study did not provide information about the statistical significance of the results or the effect sizes.

Overall, while the study provides some insights into the relationship between social media use and mental health among college students, the use of a convenience sampling technique and the lack of information about participant characteristics limit the generalizability of the findings. In addition, the use of self-administered surveys may introduce bias into the study, and the lack of information about the statistical significance of the results limits the interpretation of the findings.
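
To show what reporting the missing statistics could look like, here is a minimal sketch that regresses a simulated well-being score on daily social-media hours and reports both a p-value and an effect size. All variables and numbers below are fabricated for illustration; they are not from the study being evaluated.

```python
# A minimal sketch of reporting statistical significance and effect
# size for a simulated version of the hypothetical study.
# All data below are fabricated for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500                                          # matches the example's sample size
hours = rng.uniform(0, 8, n)                     # daily social-media hours
score = 70 - 2.0 * hours + rng.normal(0, 10, n)  # simulated well-being score

X = sm.add_constant(hours)   # add an intercept term
model = sm.OLS(score, X).fit()

slope, pval = model.params[1], model.pvalues[1]
print(f"Slope: {slope:.2f} points per hour, p = {pval:.4f}")
print(f"R-squared (a simple effect-size measure): {model.rsquared:.3f}")
```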

Note: The example above is only a sample for students. Do not copy and paste it directly into your assignment; do your own research for academic purposes.

Applications of Evaluating Research

Here are some of the applications of evaluating research:

  • Identifying reliable sources: By evaluating research, researchers, students, and other professionals can identify the most reliable sources of information to use in their work. They can determine the quality of research studies, including the methodology, sample size, data analysis, and conclusions.
  • Validating findings: Evaluating research can help to validate findings from previous studies. By examining the methodology and results of a study, researchers can determine if the findings are reliable and if they can be used to inform future research.
  • Identifying knowledge gaps: Evaluating research can also help to identify gaps in current knowledge. By examining the existing literature on a topic, researchers can determine areas where more research is needed, and they can design studies to address these gaps.
  • Improving research quality: Evaluating research can help to improve the quality of future research. By examining the strengths and weaknesses of previous studies, researchers can design better studies and avoid common pitfalls.
  • Informing policy and decision-making: Evaluating research is crucial in informing policy and decision-making in many fields. By examining the evidence base for a particular issue, policymakers can make informed decisions that are supported by the best available evidence.
  • Enhancing education: Evaluating research is essential in enhancing education. Educators can use research findings to improve teaching methods, curriculum development, and student outcomes.

Purpose of Evaluating Research

Here are some of the key purposes of evaluating research:

  • Determine the reliability and validity of research findings: By evaluating research, researchers can determine the quality of the study design, data collection, and analysis. They can determine whether the findings are reliable, valid, and generalizable to other populations.
  • Identify the strengths and weaknesses of research studies: Evaluating research helps to identify the strengths and weaknesses of research studies, including potential biases, confounding factors, and limitations. This information can help researchers to design better studies in the future.
  • Inform evidence-based decision-making: Evaluating research is crucial in informing evidence-based decision-making in many fields, including healthcare, education, and public policy. Policymakers, educators, and clinicians rely on research evidence to make informed decisions.
  • Identify research gaps: By evaluating research, researchers can identify gaps in the existing literature and design studies to address these gaps. This process can help to advance knowledge and improve the quality of research in a particular field.
  • Ensure research ethics and integrity: Evaluating research helps to ensure that research studies are conducted ethically and with integrity. Researchers must adhere to ethical guidelines to protect the welfare and rights of study participants and to maintain the trust of the public.

Characteristics to Evaluate in Research

Key characteristics to assess when evaluating research are as follows:

  • Research question/hypothesis: A good research question or hypothesis should be clear, concise, and well-defined. It should address a significant problem or issue in the field and be grounded in relevant theory or prior research.
  • Study design: The research design should be appropriate for answering the research question and be clearly described in the study. The study design should also minimize bias and confounding variables.
  • Sampling: The sample should be representative of the population of interest, and the sampling method should be appropriate for the research question and study design.
  • Data collection: The data collection methods should be reliable and valid, and the data should be accurately recorded and analyzed.
  • Results: The results should be presented clearly and accurately, and the statistical analysis should be appropriate for the research question and study design.
  • Interpretation of results: The interpretation of the results should be based on the data and not influenced by personal biases or preconceptions.
  • Generalizability: The study findings should be generalizable to the population of interest and relevant to other settings or contexts.
  • Contribution to the field: The study should make a significant contribution to the field and advance our understanding of the research question or issue.

Advantages of Evaluating Research

Evaluating research has several advantages, including:

  • Ensuring accuracy and validity: By evaluating research, we can ensure that the research is accurate, valid, and reliable. This ensures that the findings are trustworthy and can be used to inform decision-making.
  • Identifying gaps in knowledge: Evaluating research can help identify gaps in knowledge and areas where further research is needed. This can guide future research and help build a stronger evidence base.
  • Promoting critical thinking: Evaluating research requires critical thinking skills, which can be applied in other areas of life. By evaluating research, individuals can develop their critical thinking skills and become more discerning consumers of information.
  • Improving the quality of research: Evaluating research can help improve the quality of research by identifying areas where improvements can be made. This can lead to more rigorous research methods and better-quality research.
  • Informing decision-making: By evaluating research, we can make informed decisions based on the evidence. This is particularly important in fields such as medicine and public health, where decisions can have significant consequences.
  • Advancing the field: Evaluating research can help advance the field by identifying new research questions and areas of inquiry. This can lead to the development of new theories and the refinement of existing ones.

Limitations of Evaluating Research

Limitations of Evaluating Research are as follows:

  • Time-consuming: Evaluating research can be time-consuming, particularly if the study is complex or requires specialized knowledge. This can be a barrier for individuals who are not experts in the field or who have limited time.
  • Subjectivity: Evaluating research can be subjective, as different individuals may have different interpretations of the same study. This can lead to inconsistencies in the evaluation process and make it difficult to compare studies.
  • Limited generalizability: The findings of a study may not be generalizable to other populations or contexts. This limits the usefulness of the study and may make it difficult to apply the findings to other settings.
  • Publication bias: Research that does not find significant results may be less likely to be published, which can create a bias in the published literature. This can limit the amount of information available for evaluation.
  • Lack of transparency: Some studies may not provide enough detail about their methods or results, making it difficult to evaluate their quality or validity.
  • Funding bias: Research funded by particular organizations or industries may be biased towards the interests of the funder. This can influence the study design, methods, and interpretation of results.

Muhammad Hassan (Researcher, Academic Writer, Web Developer). "Evaluating Research – Process, Examples and Methods." Research Method.



Coursera Quantitative Methods Assignments (2016)

rkiyengar/coursera-quant-methods


This repo contains files related to the Quantitative Methods course on Coursera.

IMAGES

  1. Evaluative Research: Definition, Methods & Types

  2. How to Write an Evaluation Essay: Examples and Format

  3. Evaluative thesis. Expert Advice on How to Write a Successful

  4. Research (Assignment/Report) Template

  5. 🌱 Evaluative writing examples. How to Write an Evaluation Report?. 2022

  6. Evaluation Essay

VIDEO

  1. Quantitative Research Designs 📊🔍: Know Your Options #shorts #research

  2. WRITING THE CHAPTER 3|| Research Methodology (Research Design and Method)

  3. How to Write an Evaluation Essay

  4. Research Designs: Part 2 of 3: Qualitative Research Designs (ሪሰርች ዲዛይን

  5. Formulating Evaluative Statements || Reading and Writing Skills || SHS Quarter 2/4 Week 3

  6. QUANTITATIVE METHODOLOGY (Part 2 of 3):

COMMENTS

  1. What Is a Research Design

    A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about your overall research objectives and approach; whether you'll rely on primary research or secondary research; your sampling methods or criteria for selecting subjects; and your data collection methods.

  2. Types of Research Designs


  3. Research Design

    Step 1: Consider your aims and approach. Step 2: Choose a type of research design. Step 3: Identify your population and sampling method. Step 4: Choose your data collection methods. Step 5: Plan your data collection procedures. Step 6: Decide on your data analysis strategies. Frequently asked questions.

  4. What Is Research Design? 8 Types + Examples

    Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final analysis of data. Research designs for quantitative studies include descriptive, correlational, experimental and quasi-experimental designs. Research designs for qualitative studies include phenomenological ...

  5. Types of Research Designs Compared

    Types of Research Designs Compared | Guide & Examples. Published on June 20, 2019 by Shona McCombes. Revised on June 22, 2023. When you start planning a research project, developing research questions and creating a research design, you will have to make various decisions about the type of research you want to do. There are many ways to categorize different types of research.

  6. Evaluating Research: Research Designs in Evidence-Based Medicine

    A research design is the orienting plan that shapes and organizes a research project. Researchers use different research designs for projects with distinct goals and purposes. Sometimes this is a researcher-determined choice, and other times practical and ethical issues force the use of specific research designs.

  7. Step 3 of EBP: Part 1—Evaluating Research Designs

    Step 3 of the EBP process involves evaluating the quality and client relevance of research results you have located to inform treatment planning. While some useful clinical resources include careful appraisals of research quality, clinicians must critically evaluate the content both included in these summaries and what is excluded or omitted ...

  8. PDF RESEARCH DESIGNS FOR PROGRAM EVALUATIONS

    research designs in an evaluation, and test different parts of the program logic with each one. These designs are often referred to as patched-up research designs (Poister, 1978), and usually, they do not test all the causal linkages in a logic model. Research designs that fully test the causal links in logic models often

  9. PDF Keys to Designing Effective Writing and Research Assignments

    student-developed research project that includes the research proposal and/or original student research is a widely used constructivist assignment. Projects like these provide students with experiences beyond those usually found in a potentially lecture-heavy course that relies on students memorizing research terms and definitions. I have

  10. Research design

    Updated December 14, 2021. A research design is a meticulously calculated and organized blueprint that lays out the methods of a research investigation. A study design is necessary to obtain reliable and well-founded results. It helps ensure that the research process becomes smoother, effortless, well structured, and well defined.

  11. Guide: Academic Evaluations

    Academic Evaluations. In our daily lives, we are continually evaluating objects, people, and ideas in our immediate environments. We pass judgments in conversation, while reading, while shopping, while eating, and while watching television or movies, often being unaware that we are doing so. Evaluation is an equally fundamental writing process ...

  12. Guide to Experimental Design

    Table of contents. Step 1: Define your variables. Step 2: Write your hypothesis. Step 3: Design your experimental treatments. Step 4: Assign your subjects to treatment groups. Step 5: Measure your dependent variable. Other interesting articles. Frequently asked questions about experiments.

  13. Quantitative Methods Course by University of Amsterdam

    Research Designs - Writing Assignment (Evaluative) ... In the previous two modules we discussed research designs and methods to measure and manipulate our variables of interest and disinterest. Before a researcher can move on to the testing phase and can actually collect data, there is just one more procedure that needs to be decided on ...

  14. Planning Qualitative Research: Design and Decision Making for New

    While many books and articles guide various qualitative research methods and analyses, there is currently no concise resource that explains and differentiates among the most common qualitative approaches. We believe novice qualitative researchers, students planning the design of a qualitative study or taking an introductory qualitative research course, and faculty teaching such courses can ...

  15. Assignment design rubric for research assignments

    Undergraduates learn best from assignments that provide concrete and specific guidance on research methods. Librarians can help you design assignments that will guide your students toward effective research, and this rubric is one tool we use to do that. Apply the assignment design rubric to your assignment to ensure that it has:

  16. (PDF) Basics of Research Design: A Guide to selecting appropriate

    for validity and reliability. Design is basically concerned with the aims, uses, purposes, intentions and plans within the practical constraint of location, time, money and the researcher's ...

  17. Designing Writing Assignments

    Designing Writing Assignments. As you think about creating writing assignments, use these five principles: Tie the writing task to specific pedagogical goals. Note rhetorical aspects of the task, i.e., audience, purpose, writing situation. Make all elements of the task clear. Include grading criteria on the assignment ...

  18. Evaluating Research


  19. Building A Research Design Assignment

    Building Research Design Assignment. Jadda Yambo, Department of Clinical Mental Health Counseling, Liberty University. From the course Research and Program Evaluation (COUC 515), Liberty University ...

  20. PDF Writing Assessment and Evaluation Rubrics

    Analytic scoring is usually based on a scale of 0-100 with each aspect receiving a portion of the total points. The General Rubric for Analytic Evaluation on page 14 can be used to score a piece of writing in this way, as can the rubrics for specific writing types on pages 17, 26, 31, 36-38, and 43.

  21. PDF Designing Better Assignments

    Exercise 1: Improve an assignment. Brainstorm in your breakout group and choose one or more ways to improve the assignment: Make the hidden skills or knowledge explicit by creating learning outcomes or objectives. Devise an activity that gives students practice with required skills. Clarify the instructions.

  22. Full article: Developing a framework to re-design writing assignment

    Purpose - course context and learning goals (LGs) Course context analysis is an essential step in the design of any assessment. While the integration of LLMs is widely advocated, teachers should carefully consider how the use of digital tools (i.e., cognitive offloading) aligns with program-level learning outcomes and whether it may hinder students' acquisition of foundational knowledge ...

  23. Coursera Quantitative Methods Assignments (2016)

    Coursera Quantitative Methods Assignments (2016). Contribute to rkiyengar/coursera-quant-methods development by creating an account on GitHub.