

Critical Thinking Testing and Assessment

The purpose of assessment in instruction is improvement. The purpose of assessing instruction for critical thinking is to improve the teaching of discipline-based thinking (historical, biological, sociological, mathematical, etc.), that is, to improve students’ abilities to think their way through content using disciplined skill in reasoning. The more particular we can be about what we want students to learn about critical thinking, the better we can devise instruction with that particular end in view.


The Foundation for Critical Thinking offers assessment instruments that share the same general goal: to enable educators to gather evidence relevant to determining the extent to which instruction teaches students to think critically (in the process of learning content). To this end, the Fellows of the Foundation recommend:

that academic institutions and units establish an oversight committee for critical thinking, and

that this oversight committee utilize a combination of assessment instruments (the more the better) to generate incentives for faculty by providing them with as much evidence as feasible of the actual state of instruction for critical thinking.

The following instruments are available to generate evidence relevant to critical thinking teaching and learning:

Course Evaluation Form : Provides evidence of whether, and to what extent, students perceive faculty as fostering critical thinking in instruction (course by course). Machine-scoreable.

Online Critical Thinking Basic Concepts Test : Provides evidence of whether, and to what extent, students understand the fundamental concepts embedded in critical thinking (and hence tests student readiness to think critically). Machine-scoreable.

Critical Thinking Reading and Writing Test : Provides evidence of whether, and to what extent, students can read closely and write substantively (and hence tests students' abilities to read and write critically). Short-answer.

International Critical Thinking Essay Test : Provides evidence of whether, and to what extent, students are able to analyze and assess excerpts from textbooks or professional writing. Short-answer.

Commission Study Protocol for Interviewing Faculty Regarding Critical Thinking : Provides evidence of whether, and to what extent, critical thinking is being taught at a college or university. Can be adapted for high school. Based on the California Commission Study . Short-answer.

Protocol for Interviewing Faculty Regarding Critical Thinking : Provides evidence of whether, and to what extent, critical thinking is being taught at a college or university. Can be adapted for high school. Short-answer.

Protocol for Interviewing Students Regarding Critical Thinking : Provides evidence of whether, and to what extent, students are learning to think critically at a college or university. Can be adapted for high school. Short-answer.

Criteria for Critical Thinking Assignments : Can be used by faculty in designing classroom assignments, or by administrators in assessing the extent to which faculty are fostering critical thinking.

Rubrics for Assessing Student Reasoning Abilities : A useful tool in assessing the extent to which students are reasoning well through course content.  

All of the above assessment instruments can be used as part of pre- and post-assessment strategies to gauge development over various time periods.

Consequential Validity

All of the above assessment instruments, when used appropriately and graded accurately, should lead to a high degree of consequential validity. In other words, use of the instruments should cause teachers to teach in ways that foster critical thinking in their various subjects: for students to perform well on the instruments, teachers will need to design instruction that develops the abilities the instruments measure. Students cannot become skilled in critical thinking without first learning the concepts and principles that underlie critical thinking and then applying them in a variety of forms of thinking: historical thinking, sociological thinking, biological thinking, etc. Students cannot become skilled in analyzing and assessing reasoning without practicing it. With routine practice in paraphrasing, summarizing, analyzing, and assessing, however, they will develop the skills of mind requisite to the art of thinking well within any subject or discipline, not to mention thinking well within the various domains of human life.



What Is Critical Thinking? | Meaning & Examples


Critical thinking is the process of analyzing information logically and overcoming assumptions, biases, and logical fallacies. Developing critical thinking skills allows us to evaluate information as objectively as possible and reach well-founded conclusions.


Thinking critically is a crucial part of academic success, professional development, civic engagement, and personal decision-making.


Critical thinking is the process of evaluating information and arguments in a disciplined and systematic way. It involves questioning assumptions, assessing evidence, and using logical reasoning to form well-reasoned judgments.

Key critical thinking skills:

  • Avoiding unfounded assumptions
  • Identifying and countering biases
  • Recognizing and refuting logical fallacies

These practices enable us to make informed decisions, analyze evidence objectively, consider multiple perspectives, reflect on our own biases, and seek reliable sources.

Critical thinking is enhanced by the deliberate study of biases, logical fallacies, and the different forms of reasoning:

  • Deductive reasoning: Drawing specific conclusions from general premises
  • Inductive reasoning: Generalizing from specific observations
  • Analogical reasoning: Drawing parallels between similar situations
  • Abductive reasoning: Inferring the most likely explanation from incomplete evidence
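
The last of these can be made concrete computationally. Below is a minimal sketch of abductive reasoning: among candidate explanations for an observation, pick the one that best accounts for the evidence. The wet-grass scenario and all the priors and likelihoods are invented for illustration:

```python
# Hypothetical candidate explanations for observing wet grass,
# with made-up prior probabilities and likelihoods.
priors = {"rain": 0.3, "sprinkler": 0.1, "hose": 0.05}
likelihood_wet_grass = {"rain": 0.9, "sprinkler": 0.8, "hose": 0.99}

def best_explanation(priors, likelihoods):
    """Abduction as inference to the best explanation: choose the
    hypothesis with the highest prior-weighted likelihood."""
    return max(priors, key=lambda h: priors[h] * likelihoods[h])

print(best_explanation(priors, likelihood_wet_grass))  # "rain"
```

Here "rain" wins because 0.3 × 0.9 exceeds the scores of the other hypotheses; abduction does not guarantee the explanation is true, only that it is the most plausible of those considered.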

When assessing sources, critical thinking requires evaluating several factors:

  • Credibility: Check the author’s qualifications and the publication’s reputation.
  • Evidence: Verify that the information is supported by data and references.
  • Bias: Identify any potential biases or conflicts of interest.
  • Currency: Ensure the information is up-to-date and relevant.
  • Purpose: Understand the motivation behind the source and whether it aims to inform, persuade, or sell.

Critical thinking is crucial to decision-making and problem-solving in many domains of life. Social media disinformation and irresponsible uses of AI make it more important than ever to be able to distinguish between credible information and misleading or false content.

Developing critical thinking skills is an essential part of fostering independent thinking, allowing us to:

  • Make informed decisions
  • Solve complex problems
  • Evaluate the logic of arguments

In the process of developing these skills, we become less susceptible to biases, fallacies, and propaganda.

Examples of critical thinking

Critical thinking is an essential part of consuming any form of media, including news, marketing, entertainment, and social media. Media platforms are commonly used to promote biased or manipulative messages, often in a subtle way.

Critical thinking in media example

A news segment claims eating chocolate daily improves cognitive function. After reading more about the research, you find the study had a small sample size and was funded by a chocolate company, indicating bias. This leads you to conclude the claim is unreliable.

Critical thinking is fundamental in logic, math, law, science, and other academic and professional domains. The scientific method is a quintessential example of systematized critical thinking.

Critical thinking in science example

  • Formulate a hypothesis.
  • Design experiments.
  • Analyze data.
  • Draw conclusions.
  • Revise the hypothesis if necessary.
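
The loop above can be sketched as a simple simulation. This is only an illustration, assuming an invented coin-fairness experiment; the function names and thresholds are not from any standard source:

```python
import random

random.seed(0)  # reproducible simulated data

def run_experiment(true_p, n=10_000):
    """Design and run an experiment: observe n coin flips from the real process."""
    return sum(random.random() < true_p for _ in range(n))

def scientific_method(true_p, n=10_000):
    hypothesis = 0.5                       # 1. Formulate a hypothesis: "the coin is fair"
    for _ in range(5):
        heads = run_experiment(true_p, n)  # 2. Design and run an experiment
        observed = heads / n               # 3. Analyze the data
        if abs(observed - hypothesis) < 0.02:
            return hypothesis              # 4. Conclude: the hypothesis survives the test
        hypothesis = observed              # 5. Revise the hypothesis and repeat
    return hypothesis

print(round(scientific_method(0.7), 2))
```

With a coin that actually lands heads 70% of the time, the loop abandons the "fair coin" hypothesis after the first experiment and converges on an estimate near 0.7.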

Academic research requires advanced critical thinking skills.

Critical thinking academic example

  • Evaluating the methodology of each study to determine its reliability and validity
  • Checking for potential biases, such as funding sources or conflicts of interest
  • Comparing the sample sizes and demographics of the studies to understand the context of their findings
  • Synthesizing the results, highlighting common trends and discrepancies, and considering the limitations of each study

Critical thinking enhances informed decision-making by equipping us to recognize biases, identify logical fallacies, evaluate evidence, consider alternative perspectives, and learn to identify credible sources.

Key strategies:

  • Recognize biases.
  • Identify logical fallacies.
  • Evaluate sources and evidence.
  • Consider alternative perspectives.

Magedah Shabo


An Introduction to Critical Thinking and Creativity: Think More, Think Better




Validity is one of the most important concepts in critical thinking. A valid argument is one whose conclusion follows logically from its premises. But what exactly does that mean? Here is the official definition:

An argument is valid if and only if there is no logically possible situation in which the premises are true and the conclusion is false.

To put it differently, whenever we have a valid argument, if the premises are all true, then the conclusion must also be true. What this implies is that if you use only valid arguments in your reasoning, as long as you start with true premises, you will never end up with a false conclusion. Here is an example of a valid argument:

Marilyn is over 90 years old.
Therefore, Marilyn is over 20 years old.

This simple argument is obviously valid since it is impossible for the conclusion to be false when the premise is true. However, notice that the validity of the argument can be determined without knowing whether the premise and the conclusion are actually true or not. Validity is about the logical connection between the premises and the conclusion. We might not know how old Marilyn actually is, but it is clear the conclusion follows logically from the premise. The simple argument above will remain valid even if Marilyn is just a baby, in which case the premise and the conclusion are both false. Consider this argument also:
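
For truth-functional arguments, the official definition can be checked mechanically by enumerating every logically possible situation (every row of a truth table). A minimal sketch; the two argument forms used as examples are standard textbook cases, not taken from the excerpt:

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """An argument is valid iff there is no assignment of truth values
    under which every premise is true and the conclusion is false."""
    for values in product([True, False], repeat=len(variables)):
        world = dict(zip(variables, values))
        if all(p(world) for p in premises) and not conclusion(world):
            return False  # counterexample found: premises true, conclusion false
    return True

# Modus ponens (valid): P; if P then Q; therefore Q.
mp_premises = [lambda w: w["P"], lambda w: (not w["P"]) or w["Q"]]
print(is_valid(mp_premises, lambda w: w["Q"], ["P", "Q"]))  # True

# Affirming the consequent (invalid): if P then Q; Q; therefore P.
ac_premises = [lambda w: (not w["P"]) or w["Q"], lambda w: w["Q"]]
print(is_valid(ac_premises, lambda w: w["P"], ["P", "Q"]))  # False
```

Note that the checker never asks whether the premises are actually true, only whether any possible situation separates true premises from a false conclusion, which mirrors the point that validity concerns the logical connection, not the facts.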

Again the argument is valid—if the ...




What Is Critical Thinking? | Definition & Examples

Published on May 30, 2022 by Eoghan Ryan . Revised on May 31, 2023.

Critical thinking is the ability to effectively analyze information and form a judgment .

To think critically, you must be aware of your own biases and assumptions when encountering information, and apply consistent standards when evaluating sources .

Critical thinking skills help you to:

  • Identify credible sources
  • Evaluate and respond to arguments
  • Assess alternative viewpoints
  • Test hypotheses against relevant criteria


Critical thinking is important for making judgments about sources of information and forming your own arguments. It emphasizes a rational, objective, and self-aware approach that can help you to identify credible sources and strengthen your conclusions.

Critical thinking is important in all disciplines and throughout all stages of the research process . The types of evidence used in the sciences and in the humanities may differ, but critical thinking skills are relevant to both.

In academic writing , critical thinking can help you to determine whether a source:

  • Is free from research bias
  • Provides evidence to support its research findings
  • Considers alternative viewpoints

Outside of academia, critical thinking goes hand in hand with information literacy to help you form opinions rationally and engage independently and critically with popular media.

Critical thinking can help you to identify reliable sources of information that you can cite in your research paper . It can also guide your own research methods and inform your own arguments.

Outside of academia, critical thinking can help you to be aware of both your own and others’ biases and assumptions.

Academic examples

However, when you compare the findings of the study with other current research, you determine that the results seem improbable. You analyze the paper again, consulting the sources it cites.

You notice that the research was funded by the pharmaceutical company that created the treatment. Because of this, you view its results skeptically and determine that more independent research is necessary to confirm or refute them.

Example: Poor critical thinking in an academic context

You’re researching a paper on the impact wireless technology has had on developing countries that previously did not have large-scale communications infrastructure. You read an article that seems to confirm your hypothesis: the impact is mainly positive. Rather than evaluating the research methodology, you accept the findings uncritically.

Nonacademic examples

However, you decide to compare this review article with consumer reviews on a different site. You find that these reviews are not as positive. Some customers have had problems installing the alarm, and some have noted that it activates for no apparent reason.

You revisit the original review article. You notice that the words “sponsored content” appear in small print under the article title. Based on this, you conclude that the review is advertising and is therefore not an unbiased source.

Example: Poor critical thinking in a nonacademic context

You support a candidate in an upcoming election. You visit an online news site affiliated with their political party and read an article that criticizes their opponent. The article claims that the opponent is inexperienced in politics. You accept this without evidence, because it fits your preconceptions about the opponent.

There is no single way to think critically. How you engage with information will depend on the type of source you’re using and the information you need.

However, you can engage with sources in a systematic and critical way by asking certain questions when you encounter information. Like the CRAAP test , these questions focus on the currency , relevance , authority , accuracy , and purpose of a source of information.

When encountering information, ask:

  • Who is the author? Are they an expert in their field?
  • What do they say? Is their argument clear? Can you summarize it?
  • When did they say this? Is the source current?
  • Where is the information published? Is it an academic article? Is it peer-reviewed ?
  • Why did the author publish it? What is their motivation?
  • How do they make their argument? Is it backed up by evidence? Does it rely on opinion, speculation, or appeals to emotion ? Do they address alternative arguments?

Critical thinking also involves being aware of your own biases, not only those of others. When you make an argument or draw your own conclusions, you can ask similar questions about your own writing:

  • Am I only considering evidence that supports my preconceptions?
  • Is my argument expressed clearly and backed up with credible sources?
  • Would I be convinced by this argument coming from someone else?

Critical thinking refers to the ability to evaluate information and to be aware of biases or assumptions, including your own.

Like information literacy , it involves evaluating arguments, identifying and solving problems in an objective and systematic way, and clearly communicating your ideas.

Critical thinking skills include the ability to:

  • Identify credible sources
  • Evaluate and respond to arguments
  • Assess alternative viewpoints
  • Test hypotheses against relevant criteria

You can assess information and arguments critically by asking certain questions about the source. You can use the CRAAP test , focusing on the currency , relevance , authority , accuracy , and purpose of a source of information.

Ask questions such as:

  • Who is the author? Are they an expert?
  • How do they make their argument? Is it backed up by evidence?

A credible source should pass the CRAAP test  and follow these guidelines:

  • The information should be up to date and current.
  • The author and publication should be a trusted authority on the subject you are researching.
  • The sources the author cited should be easy to find, clear, and unbiased.
  • For a web source, the URL and layout should signify that it is trustworthy.

Information literacy refers to a broad range of skills, including the ability to find, evaluate, and use sources of information effectively.

Being information literate means that you:

  • Know how to find credible sources
  • Use relevant sources to inform your research
  • Understand what constitutes plagiarism
  • Know how to cite your sources correctly

Confirmation bias is the tendency to search for, interpret, and recall information in a way that aligns with our pre-existing values, opinions, or beliefs: we recall information best when it amplifies what we already believe, and we tend to forget information that contradicts our opinions.

Although selective recall is a component of confirmation bias, it should not be confused with recall bias.

On the other hand, recall bias refers to the differences in the ability between study participants to recall past events when self-reporting is used. This difference in accuracy or completeness of recollection is not related to beliefs or opinions. Rather, recall bias relates to other factors, such as the length of the recall period, age, and the characteristics of the disease under investigation.


Supplement to Critical Thinking

How can one assess, for purposes of instruction or research, the degree to which a person possesses the dispositions, skills and knowledge of a critical thinker?

In psychometrics, assessment instruments are judged according to their validity and reliability.

Roughly speaking, an instrument is valid if it measures accurately what it purports to measure, given standard conditions. More precisely, the degree of validity is “the degree to which evidence and theory support the interpretations of test scores for proposed uses of tests” (American Educational Research Association 2014: 11). In other words, a test is not valid or invalid in itself. Rather, validity is a property of an interpretation of a given score on a given test for a specified use. Determining the degree of validity of such an interpretation requires collection and integration of the relevant evidence, which may be based on test content, test takers’ response processes, a test’s internal structure, relationship of test scores to other variables, and consequences of the interpretation (American Educational Research Association 2014: 13–21). Criterion-related evidence consists of correlations between scores on the test and performance on another test of the same construct; its weight depends on how well supported is the assumption that the other test can be used as a criterion. Content-related evidence is evidence that the test covers the full range of abilities that it claims to test. Construct-related evidence is evidence that a correct answer reflects good performance of the kind being measured and an incorrect answer reflects poor performance.

An instrument is reliable if it consistently produces the same result, whether across different forms of the same test (parallel-forms reliability), across different items (internal consistency), across different administrations to the same person (test-retest reliability), or across ratings of the same answer by different people (inter-rater reliability). Internal consistency should be expected only if the instrument purports to measure a single undifferentiated construct, and thus should not be expected of a test that measures a suite of critical thinking dispositions or critical thinking abilities, assuming that some people are better in some of the respects measured than in others (for example, very willing to inquire but rather closed-minded). Otherwise, reliability is a necessary but not a sufficient condition of validity; a standard example of a reliable instrument that is not valid is a bathroom scale that consistently under-reports a person’s weight.
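
Test-retest reliability, for example, is typically estimated as the correlation between two administrations of the same instrument to the same people. A minimal sketch with made-up scores, which also illustrates why the under-reporting bathroom scale is reliable but not valid:

```python
def pearson_r(xs, ys):
    """Pearson correlation between paired scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Two administrations of the same test to five people (illustrative scores).
first = [62, 71, 55, 80, 68]
second = [64, 69, 57, 78, 70]
print(round(pearson_r(first, second), 2))  # high r -> good test-retest reliability

# A scale that always under-reports weight by 5 kg tracks true weight
# perfectly (r = 1), yet every reading is wrong: reliable but not valid.
true_weight = [60, 75, 90]
reported = [w - 5 for w in true_weight]
print(round(pearson_r(true_weight, reported), 2))
```

The second correlation comes out as 1.0 because a constant offset preserves all the variation in the scores, which is exactly why consistency alone cannot establish validity.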

Assessing dispositions is difficult if one uses a multiple-choice format with known adverse consequences of a low score. It is pretty easy to tell what answer to the question “How open-minded are you?” will get the highest score and to give that answer, even if one knows that the answer is incorrect. If an item probes less directly for a critical thinking disposition, for example by asking how often the test taker pays close attention to views with which the test taker disagrees, the answer may differ from reality because of self-deception or simple lack of awareness of one’s personal thinking style, and its interpretation is problematic, even if factor analysis enables one to identify a distinct factor measured by a group of questions that includes this one (Ennis 1996). Nevertheless, Facione, Sánchez, and Facione (1994) used this approach to develop the California Critical Thinking Dispositions Inventory (CCTDI). They began with 225 statements expressive of a disposition towards or away from critical thinking (using the long list of dispositions in Facione 1990a), validated the statements with talk-aloud and conversational strategies in focus groups to determine whether people in the target population understood the items in the way intended, administered a pilot version of the test with 150 items, and eliminated items that failed to discriminate among test takers or were inversely correlated with overall results or added little refinement to overall scores (Facione 2000). They used item analysis and factor analysis to group the measured dispositions into seven broad constructs: open-mindedness, analyticity, cognitive maturity, truth-seeking, systematicity, inquisitiveness, and self-confidence (Facione, Sánchez, and Facione 1994). The resulting test consists of 75 agree-disagree statements and takes 20 minutes to administer. 
A repeated disturbing finding is that North American students taking the test tend to score low on the truth-seeking sub-scale (on which a low score results from agreeing to such statements as the following: “To get people to agree with me I would give any reason that worked”. “Everyone always argues from their own self-interest, including me”. “If there are four reasons in favor and one against, I’ll go with the four”.) Development of the CCTDI made it possible to test whether good critical thinking abilities and good critical thinking dispositions go together, in which case it might be enough to teach one without the other. Facione (2000) reports that administration of the CCTDI and the California Critical Thinking Skills Test (CCTST) to almost 8,000 post-secondary students in the United States revealed a statistically significant but weak correlation between total scores on the two tests, and also between paired sub-scores from the two tests. The implication is that both abilities and dispositions need to be taught, that one cannot expect improvement in one to bring with it improvement in the other.
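
The item-elimination step described above is standard item analysis: an item that correlates weakly, or negatively, with the rest of the test adds noise rather than refinement. A minimal sketch with invented agree-disagree responses (the CCTDI's actual procedures were more elaborate, as the text notes):

```python
def pearson_r(xs, ys):
    """Pearson correlation between paired scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def item_rest_correlations(responses):
    """For each item, correlate its scores with the rest-of-test total.
    responses: one list of item scores (e.g. 1-6 agree/disagree) per person."""
    n_items = len(responses[0])
    corrs = []
    for i in range(n_items):
        item = [person[i] for person in responses]
        rest = [sum(person) - person[i] for person in responses]
        corrs.append(pearson_r(item, rest))
    return corrs

# Four illustrative respondents; the first three items track the overall
# disposition, while the fourth runs against it.
responses = [
    [6, 5, 6, 1],
    [5, 6, 5, 2],
    [2, 3, 2, 5],
    [1, 2, 1, 6],
]
flagged = [i for i, r in enumerate(item_rest_correlations(responses)) if r < 0.2]
print(flagged)  # the item at index 3 is a candidate for elimination
```

An item flagged this way might still be kept if it is reverse-scored, which is one reason item analysis is combined with factor analysis and human review rather than applied blindly.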

A more direct way of assessing critical thinking dispositions would be to see what people do when put in a situation where the dispositions would reveal themselves. Ennis (1996) reports promising initial work with guided open-ended opportunities to give evidence of dispositions, but no standardized test seems to have emerged from this work. There are however standardized aspect-specific tests of critical thinking dispositions. The Critical Problem Solving Scale (Berman et al. 2001: 518) takes as a measure of the disposition to suspend judgment the number of distinct good aspects attributed to an option judged to be the worst among those generated by the test taker. Stanovich, West and Toplak (2011: 800–810) list tests developed by cognitive psychologists of the following dispositions: resistance to miserly information processing, resistance to myside thinking, absence of irrelevant context effects in decision-making, actively open-minded thinking, valuing reason and truth, tendency to seek information, objective reasoning style, tendency to seek consistency, sense of self-efficacy, prudent discounting of the future, self-control skills, and emotional regulation.

It is easier to measure critical thinking skills or abilities than to measure dispositions. The following eight currently available standardized tests purport to measure them: the Watson-Glaser Critical Thinking Appraisal (Watson & Glaser 1980a, 1980b, 1994), the Cornell Critical Thinking Tests Level X and Level Z (Ennis & Millman 1971; Ennis, Millman, & Tomko 1985, 2005), the Ennis-Weir Critical Thinking Essay Test (Ennis & Weir 1985), the California Critical Thinking Skills Test (Facione 1990b, 1992), the Halpern Critical Thinking Assessment (Halpern 2016), the Critical Thinking Assessment Test (Center for Assessment & Improvement of Learning 2017), the Collegiate Learning Assessment (Council for Aid to Education 2017), the HEIghten Critical Thinking Assessment, and a suite of critical thinking assessments for different groups and purposes offered by Insight Assessment. The Critical Thinking Assessment Test (CAT) is unique among them in being designed for use by college faculty to help them improve their development of students’ critical thinking skills (Haynes et al. 2015; Haynes & Stein 2021). Also, for some years the United Kingdom body OCR (Oxford Cambridge and RSA Examinations) awarded AS and A Level certificates in critical thinking on the basis of an examination (OCR 2011). Many of these standardized tests have received scholarly evaluations at the hands of, among others, Ennis (1958), McPeck (1981), Norris and Ennis (1989), Fisher and Scriven (1997), Possin (2008, 2013a, 2013b, 2013c, 2014, 2020) and Hatcher and Possin (2021).
Their evaluations provide a useful set of criteria that such tests ideally should meet, as does the description by Ennis (1984) of problems in testing for competence in critical thinking: the soundness of multiple-choice items, the clarity and soundness of instructions to test takers, the information and mental processing used in selecting an answer to a multiple-choice item, the role of background beliefs and ideological commitments in selecting an answer to a multiple-choice item, the tenability of a test’s underlying conception of critical thinking and its component abilities, the set of abilities that the test manual claims are covered by the test, the extent to which the test actually covers these abilities, the appropriateness of the weighting given to various abilities in the scoring system, the accuracy and intellectual honesty of the test manual, the interest of the test to the target population of test takers, the scope for guessing, the scope for choosing a keyed answer by being test-wise, precautions against cheating in the administration of the test, clarity and soundness of materials for training essay graders, inter-rater reliability in grading essays, and clarity and soundness of advance guidance to test takers on what is required in an essay. Rear (2019) has challenged the use of standardized tests of critical thinking as a way to measure educational outcomes, on the grounds that  they (1) fail to take into account disputes about conceptions of critical thinking, (2) are not completely valid or reliable, and (3) fail to evaluate skills used in real academic tasks. He proposes instead assessments based on discipline-specific content.

There are also aspect-specific standardized tests of critical thinking abilities. Stanovich, West and Toplak (2011: 800–810) list tests of probabilistic reasoning, insights into qualitative decision theory, knowledge of scientific reasoning, knowledge of rules of logical consistency and validity, and economic thinking. They also list instruments that probe for irrational thinking, such as superstitious thinking, belief in the superiority of intuition, over-reliance on folk wisdom and folk psychology, belief in “special” expertise, financial misconceptions, overestimation of one’s introspective powers, dysfunctional beliefs, and a notion of self that encourages egocentric processing. They regard these tests along with the previously mentioned tests of critical thinking dispositions as the building blocks for a comprehensive test of rationality, whose development (they write) may be logistically difficult and would require millions of dollars.

A superb example of assessment of an aspect of critical thinking ability is the Test on Appraising Observations (Norris & King 1983, 1985, 1990a, 1990b), which was designed for classroom administration to senior high school students. The test focuses entirely on the ability to appraise observation statements and in particular on the ability to determine in a specified context which of two statements there is more reason to believe. According to the test manual (Norris & King 1985, 1990b), a person’s score on the multiple-choice version of the test, which is the number of items that are answered correctly, can justifiably be given either a criterion-referenced or a norm-referenced interpretation.

On a criterion-referenced interpretation, those who do well on the test have a firm grasp of the principles for appraising observation statements, and those who do poorly have a weak grasp of them. This interpretation can be justified by the content of the test and the way it was developed, which incorporated a method of controlling for background beliefs articulated and defended by Norris (1985). Norris and King synthesized from judicial practice, psychological research and common-sense psychology 31 principles for appraising observation statements, in the form of empirical generalizations about tendencies, such as the principle that observation statements tend to be more believable than inferences based on them (Norris & King 1984). They constructed items in which exactly one of the 31 principles determined which of two statements was more believable. Using a carefully constructed protocol, they interviewed about 100 students who responded to these items in order to determine the thinking that led them to choose the answers they did (Norris & King 1984). In several iterations of the test, they adjusted items so that selection of the correct answer generally reflected good thinking and selection of an incorrect answer reflected poor thinking. Thus they have good evidence that good performance on the test is due to good thinking about observation statements and that poor performance is due to poor thinking about observation statements. Collectively, the 50 items on the final version of the test require application of 29 of the 31 principles for appraising observation statements, with 13 principles tested by one item, 12 by two items, three by three items, and one by four items. Thus there is comprehensive coverage of the principles for appraising observation statements. Fisher and Scriven (1997: 135–136) judge the items to be well worked and sound, with one exception. 
The test is clearly written at a grade 6 reading level, meaning that poor performance cannot be attributed to difficulties in reading comprehension by the intended adolescent test takers. The stories that frame the items are realistic, and are engaging enough to stimulate test takers’ interest. Thus the most plausible explanation of a given score on the test is that it reflects roughly the degree to which the test taker can apply principles for appraising observations in real situations. In other words, there is good justification of the proposed interpretation that those who do well on the test have a firm grasp of the principles for appraising observation statements and those who do poorly have a weak grasp of them.
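The coverage figures reported above can be verified with a few lines of arithmetic. The counts below come straight from the text (13 principles tested by one item, 12 by two, three by three, and one by four); the code merely checks that they add up to the stated 29 principles and 50 items:

```python
# Item-coverage arithmetic for the Test on Appraising Observations,
# using the counts reported in the text above.
coverage = {1: 13, 2: 12, 3: 3, 4: 1}  # items-per-principle -> number of principles

principles_covered = sum(coverage.values())
total_items = sum(items * count for items, count in coverage.items())

print(principles_covered)  # 29 of the 31 principles
print(total_items)         # 50 items on the final version
```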

To get norms for performance on the test, Norris and King arranged for seven groups of high school students in different types of communities and with different levels of academic ability to take the test. The test manual includes percentiles, means, and standard deviations for each of these seven groups. These norms allow teachers to compare the performance of their class on the test to that of a similar group of students.
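A norm-referenced interpretation of a raw score amounts to a lookup in a comparison group's norms table. The sketch below illustrates the idea; the norms values and the `percentile` helper are hypothetical placeholders, not the actual figures from the Norris and King test manual:

```python
from bisect import bisect_right

# Hypothetical norms table for one comparison group:
# raw score -> percentile rank.
norms = {30: 25, 35: 50, 40: 75, 45: 95}

def percentile(raw_score, norms_table):
    """Return the percentile of the highest tabled score <= raw_score."""
    scores = sorted(norms_table)
    idx = bisect_right(scores, raw_score) - 1
    if idx < 0:
        return 0  # below the lowest tabled score
    return norms_table[scores[idx]]

print(percentile(37, norms))  # 50: a raw 37 falls between the tabled 35 and 40
```

A teacher comparing a class to a similar norm group would, in effect, perform this lookup for each student's raw score.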

Copyright © 2022 by David Hitchcock < hitchckd @ mcmaster . ca >


The Stanford Encyclopedia of Philosophy is copyright © 2024 by The Metaphysics Research Lab , Department of Philosophy, Stanford University

Library of Congress Catalog Data: ISSN 1095-5054

Pamukkale critical thinking skill scale: a validity and reliability study


  • Erdinç Duru Pamukkale University
  • Ozen Yildirim Pamukkale University
  • Sevgi Özgüngör Pamukkale University
  • Asuman Duatepe Paksu Pamukkale University
  • Sibel Duru Pamukkale University

The aim of this study is to develop a valid and reliable measurement tool that measures the critical thinking skills of university students. In this context, a number of studies were conducted on different participant groups. The Pamukkale Critical Thinking Skills Scale was developed in two separate forms: multiple-choice and open-ended. The validity and reliability studies of the multiple-choice form were built on two theoretical frameworks, classical test theory and item response theory. Under classical test theory, exploratory and confirmatory factor analyses were performed; under item response theory, the Generalized Partial Credit Model (GPCM) for unidimensional, multi-category scales was tested for the construct validity of the multiple-choice form. The results supported the unidimensional structure of the scale. The reliability analyses showed that the internal consistency coefficient of the scale and the item-total correlation values were sufficiently high. The test-retest results indicated that the scale is stable over time in the domain it measures. The item response theory-based analyses also showed that the scale met the item-model fit assumptions. The open-ended form of the scale was evaluated with a rubric. Several studies were conducted on the validity and reliability of the open-ended form, and the results provided psychometric support for both. In conclusion, the Pamukkale Critical Thinking Skills Scale, developed in two forms, is a valid and reliable measurement tool for measuring the critical thinking skills of university students. The findings are discussed in light of the literature, and some suggestions are offered.
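The abstract reports an internal consistency coefficient. As a small illustration of what such a coefficient computes, here is a minimal sketch of Cronbach's alpha over fabricated 0/1 item responses; the data and the resulting value are invented for illustration and bear no relation to the study's actual figures:

```python
# Minimal Cronbach's alpha (internal consistency) over fabricated data.
def cronbach_alpha(item_scores):
    """item_scores: one inner list per item, aligned across respondents."""
    k = len(item_scores)
    n = len(item_scores[0])

    def variance(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var = sum(variance(item) for item in item_scores)
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var / variance(totals))

items = [  # 3 items x 5 respondents, scored 0/1 (fabricated)
    [1, 1, 0, 1, 0],
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
]
alpha = cronbach_alpha(items)
print(round(alpha, 2))  # 0.79
```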

Abrami, P.C., Bernard, R.M., Borokhovski, E., Wade, A., Surkes, M., Tamim, R., & Zhang, D. (2008). Instructional interventions affecting critical thinking skills and dispositions: A stage 1 meta-analysis. Review of Educational Research, 78(4), 1102-1134.

Anderson, L.W., & Krathwohl, D.R. (2001). A taxonomy for learning, teaching and assessing: A revision of Bloom’s taxonomy of educational objectives: Complete Edition. Longman.

Atay, S., Ekim, E., Gökkaya, S., & Sağım, E. (2009). Sağlık Yüksekokulu öğrencilerinin eleştirel düşünme düzeyleri. [Critical thinking tendencies of Health School students] Sağlık Bilimleri Fakültesi Hemşirelik Dergisi, 39-46.

Ayberk, B., & Çelik, M. (2007). Watson-Glaser Eleştirel, Akıl Yürütme Gücü Ölçeği'nin (W-GEAYGÖ) üniversite ikinci, üçüncü ve dördüncü sınıf İngilizce bölümü öğretmen adayları üzerindeki güvenlik çalışması. [Reliability study of the Watson-Glaser critical thinking appraisal scale on university second, third and fourth-grade English department teacher candidates] Çukurova Üniversitesi Sosyal Bilimler Enstitüsü Dergisi, 16(1), 101-112.

Aybek, E.C. (2021). CatIRT tools: A “Shiny” application for item response theory calibration and computerized adaptive testing simulation. Journal of Applied Testing Technology, 22(1), 23-27.

Baker, F.B. (2001). The basics of item response theory. College Park, MD: ERIC Clearinghouse on Assessment and Evaluation.

Batur, Z., & Özcan, H.Z. (2020). Eleştirel düşünme üzerine yazılan lisansüstü tezlerinin bibliyometrik analizi. [Bibliometric analysis of graduate theses written on critical thinking] Uluslararası Türkçe Edebiyat Kültür Eğitim Dergisi, 9(2), 834-854.

Bailey, R., & Mentz, E. (2015). IT teachers’ experience of teaching–learning strategies to promote critical thinking. Issues in Informing Science and Information Technology, 12(1), 141–152.

Bensley, D.A., Crowe, D.S., Bernhardt, P., Buckner, C., & Allman, A.L. (2010). Teaching and assessing critical thinking skills for argument analysis in psychology. Teaching of Psychology, 37(2), 91–96.

Bennett, D.A. (2001). How can I deal with missing data in my study? Australian and New Zealand Journal of Public Health, 25(5), 464–469.

Bentler, P.M. (1990). Comparative fit indexes in structural models. Psychological Bulletin, 107(2), 238–246.

Bilican, S.D. (2021). Başarı testlerinin geliştirilmesi ve madde yazımı [Development of achievement tests and item writing] Yıldırım, Ö. ve Kartal, S.K. (Ed.), Eğitimde Ölçme ve Değerlendirme [Measurement and Evaluation in Education] (125-163) 1. Baskı, Lisans Yayıncılık.

Browne, N., & Freeman, K. (2000). Distinguishing features of critical thinking classrooms. Teaching in Higher Education, 5(3), 301-309.

Boyd, E.M., & Fales, A.W. (1983). Reflective learning: Key to learning from experience. Journal of Humanistic Psychology, 23(2), 99-117.

Browne, M.W., & Cudeck, R. (1993). Alternative ways of assessing model fit. K. A. Bollen and J.S. Long (Ed.), Testing structural equation models (pp. 136-162). Sage.

Carpendale, J.I., & Lewis, C. (2006). How children develop social understanding, Blackwell.

Chalmers, R.P. (2012). mirt: A multidimensional item response theory package for the R environment. Journal of Statistical Software, 48(6), 1-29.

Cisneros, R.M. (2009). Assessment of critical thinking in pharmacy students. American Journal of Pharmaceutical Education, 73(4), 66.

Cohen, R.J., Swerdlik, M.E., Smith, D.K., & Cohen, R.J. (1992). Psychological testing and assessment: An introduction to tests and measurement. Mayfield Pub. Co.

Comfort, L.K. (2007). Crisis management in hindsight: cognition, communication, coordination, and control. Public Administration Review, 67(1), 189–197.

Crockett, L. (2019). Future-focused Learning: 10 essential shifts of everyday practice. Solution Tree Press.

Dewey, J. (1933). How we think. DC Herman.

Doğan, N. (2013). Eleştirel düşünmenin ölçülmesi [Measuring the Critical Thinking]. Cito Eğitim: Kuram ve Uygulama, 22(1), 29-42.

Doğanay, A., Akbulut-Taş, M., & Erden, Ş. (2007). Üniversite öğrencilerinin bir güncel tartışmalı konu bağlamında eleştirel düşünme becerilerinin değerlendirilmesi. [Assessing university students’ critical thinking skills in the context of a current controversial issues]. Kuram ve Uygulamada Eğitim Yönetimi, 52(1), 511-546.

Dumitru, D., Bîgu, D., Elen, J., Jiang, L., Railiene, A., Penkauskiene, D., Papathanasiou, I.V., Tsaras, K., Fradelos, E.C., Ahern, A.K., McNally, C., O'Sullivan, J., Verburgh, A.P., Jarošová, E., Lorencová, H., Poce, A., Agrusti, F., Re, M.R., Puig, B., Blanco, P., Mosquera, I., Crujeiras-Pérez, B., Dominguez, C., Cruz, G., Silva, H., & Morais, M.D., Nascimento, M.M., & Payan-Carreira, R. (2018). A European review on Critical Thinking educational practices in Higher Education Institutions.

Duru, E., Duatepe-Paksu, A., Balkıs, M., Duru, S., & Bakay, E. (2020). Examination of 21st century competencies and skills of graduates from the perspective of sector representatives and academicians. Journal of Qualitative Research in Education, 8(4), 1059-1079.

Dwyer, C.P., Hogan, M.J., & Stewart, I. (2014). An integrated critical thinking framework for the 21st century. Thinking Skills and Creativity, 12(1), 43-52.

Ebel, R.L. (1972). Essentials of educational measurement. Prentice-Hall.

Eğmir, E., & Ocak, G. (2016). Eleştirel düşünme becerisini ölçmeye yönelik bir başarı testi geliştirme. [Developing an achievement test towards evaluating critical thinking skill]. Turkish Studies, 11(19), 337-360.

Ennis, R. (1991). Critical thinking: A streamlined conception. Teaching Philosophy, 14(1), 15-24.

Ennis, R. H., & Millman, J. (1985). Cornell Critical Thinking Test (Level X). Critical Thinking Press & Software.

Facione, P.A. (1990a). Critical thinking: A statement of expert consensus for purposes of educational assessment and instruction. Executive summary: “The Delphi Report”. The California Academic Press.

Facione, P.A. (1990b). The California Critical Thinking Skills Test-College Level Technical Report 1: Experimental Validation and Content Validity. The California Academic Press.

Facione, P.A. (1990c). The California Critical Thinking Skills Test-College Level Technical Report 2: Factors Predictive of Critical Thinking Skills. The California Academic Press.

Facione, N.C. (1997). Critical thinking assessment in nursing education programs: An aggregate data analysis. The California Academic Press.

Facione, P.A. (2015). Critical thinking: What it is and why it counts.

Facione, P.A., & Facione, N.C. (1992). The California Critical Thinking Dispositions Inventory. The California Academic Press.

Fayers, P., & Machin, D. (1995). Factor analysis for assessing validity. Quality of Life Research, 4(5), 424.

Flores, K., Matkin, G.S., Burbach, M.E., Quinn, C.E., & Harding, H. (2012). Deficient critical thinking skills among college graduates: Implications for leadership. Educational Philosophy and Theory, 44(2), 212-230.

Güçlü, G., & Evcili, F. (2021). Sağlık hizmetleri meslek yüksekokulu öğrencilerinin eleştirel düşünme yetileri ve boyun eğici davranış eğilimlerinin incelenmesi. [Investigation of critical thinking qualifications and submissive behavior tendency of health services vocational school students]. Turkish Journal of Science and Health. 2(1), 31-39.

Haladyna, T.M. (1997). Writing test items to evaluate higher order thinking. Allyn & Bacon.

Hambleton, R.K., & Swaminathan, H. (1985). Item response theory. Kluwer-Nijhoff Publishing.

Huber, C.R., & Kuncel, N.R. (2016). Does college teach critical thinking? A meta-analysis. Review of Educational Research, 86(2), 431-468.

Jacobs, S.S. (1995). Technical characteristics and some correlates of the California Critical Thinking Skills Test, Forms A and B. Research in Higher Education, 36(1), 89-108.

Irani, T., Rudd, R., Gallo, M., Ricke, J., Friedel, C., & Rhoades, E. (2007). Critical thinking instrumentation manual. University of Florida.

Lipman, M. (1988). Critical thinking—What can it be? Educational Leadership, 46(1), 38–43.

Marais, I. (2009). Response dependence and the measurement of change. Journal of Applied Measurement, 10(1), 17-29.

Marin, L., & Halpern, D. (2011). Pedagogy for developing critical thinking in adolescents: Explicit instruction produces greatest gains. Thinking Skills and Creativity, 6(1), 1–13.

Mazer, J.P., Hunt, S.K., & Kuznekoff, J.H. (2007). Revising general education: Assessing a critical thinking instructional model in the basic communication course. The Journal of General Education, 56(3), 173-199.

Mpofu, N., & Maphalala, M.C. (2017). Fostering critical thinking in initial teacher education curriculums: A comprehensive literature review. Gender and Behaviour, 15(2), 9342–9351.

Msila, V. (2014). Critical Thinking in open and distance learning programmes: Lessons from the University of South Africa’s NPDE Programme. Journal of Social Sciences, 38(1), 33–42

Nalçacı, A., Meral, E., & Şahin, İ.F. (2016). Sosyal bilgiler öğretmen adaylarının eleştirel düşünme ile medya okuryazarlıkları arasındaki ilişki [Correlation between critical thinking and media literacy of social sciences pre-service teachers]. Doğu Coğrafya Dergisi, 21(36), 1-12.

Norris, S.P., & Ennis, R.H. (1990). The practitioners' guide to teaching thinking series. Evaluating Critical Thinking. Hawker Bronlow Education.

Özmen, K.S. (2008). İngilizce öğretmeni eğitiminde eleştirel düşünce: Bir vaka çalışması. [Critical Thinking in English teacher education: A case study]. Ekev Akademi Dergisi, 12(36), 253-266.

Orhan, A., & Çeviker-Ay, Ş. (2022). Developing the critical thinking skill test for high school students: A validity and reliability study. International Journal of Psychology and Educational Studies, 9(1), 132-144.

Parkhurst, H.B. (1999). Confusion, lack of consensus, and the definition of creativity as a construct. Journal of Creative Behavior, 33(1), 1–21.

Paul, R. (2005) The state of critical thinking today, New Directions for Community Colleges, 130(1), 27–38.

Paul, R., & Elder, L. (2001) Critical thinking: Inert information, activated ignorance, and activated knowledge, Journal of Developmental Education, 25(2), 36–37.

Paul, R.W., & Elder, L. (2002). Critical thinking: Tools for taking charge of your professional and personal life. Pearson Education Inc.

Paul, R., & Nosich, G. (1991). A proposal for the national assessment of higher-order thinking. Paper commissioned by the U.S. Department of Education, Office of Educational Research and Improvement, National Center for Education Statistics.

Pascarella, E.T., & Terenzini, P.T. (1991). How college affects students: Findings and insights from twenty years of research. Jossey-Bass.

Pascarella, E.T., & Terenzini, P.T. (2005). How college affects students: A third decade of research. Jossey-Bass.

Portney, L.G., & Watkins, M.P. (2000) Foundations of clinical research: Applications to practice. 2nd Edition, Prentice Hall.

Puig, B., Blanco-Anaya, P., Bargiela, I.M., & Crujeiras-Pérez, B. (2019). A systematic review on critical thinking intervention studies in higher education across professional fields. Studies in Higher Education, 44(5), 860-869.

R Core Team (2021). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria.

Rezaee, M., Farahian, M., & Ahmadi, A. (2012). Critical thinking in higher education: Unfulfilled expectations. BRAIN. Broad Research in Artificial Intelligence and Neuroscience, 3(2), 64-73.

Ruggiero V.R. (1990) Beyond feelings. A guide to critical thinking, (3rd ed.) Mayfield Publishing.

Røe, C., Damsgård, E., Fors, T., & Anke, A. (2014). Psychometric properties of the pain stages of change questionnaire as evaluated by Rasch analysis in patients with chronic musculoskeletal pain. BMC Musculoskeletal Disorders, 15(1), 95. PubMed: 24646065.

Sahool, S., & Mohammed, C.A. (2018). Fostering critical thinking and collaborative learning skills among medical students through a research protocol writing activity in the curriculum. Korean J Med Educ, 30(2), 109-118.

Schafer, J.L. (1999). Multiple imputation: A primer. Statistical Methods in Medical Research, 8(1), 3–15.

Shipman, V. (1983). New Jersey test of reasoning skills. IAPC, Test Division, Montclair State College.

Siegel, H. (1988). Educating for reason: Rationality, critical thinking, and education. Routledge.

Snyder, L.G., & Snyder, M.J. (2008). Teaching critical thinking and problem solving skills. The Journal of Research in Business Education, 50 (2), 90–99.

Stevens, J. (1992). Applied multivariate statistics for the social sciences. Second Edition, Lawrence Erlbaum Associates.

Tabachnick, B.G., & Fidell, L.S. (2013). Using multivariate statistics (6th ed.), Allyn and Bacon.

Tolman, E.C. (1932). Purposive behavior in animals and men. Century/Random House UK.

Trilling, B., & Fadel, C. (2009). 21st-century skills: Learning for life in our times. John Wiley & Sons.

Uzuntiryaki-Kondakçı, E., & Çapa-Aydın, Y. (2013). Predicting critical thinking skills of university students through metacognitive self-regulation skills and chemistry self-efficacy. Educational Sciences: Theory & Practice, 13(1), 666-670.

Van Laar, E., van Deursen, A.J.A.M., van Dijk, J.A.G.M., & de Haan, J. (2019). Determinants of 21st-century digital skills: A large-scale survey among working professionals. Computers in Human Behavior, 100, 93–104.

Voogt, J., & Roblin, N.P. (2012). A comparative analysis of international frameworks for 21st-century competencies: implications for national curriculum policies. Journal of Curriculum Studies, 44(3), 299–321.

Watson, G., & Glaser, E.M. (1980). Watson-Glaser critical thinking appraisal. Psychological Corporation

Williams, R.L., Oliver, R., Allin, J.L., Winn, B., & Booher, C.S. (2003). Psychological critical thinking as a course predictor and outcome variable. Teaching of Psychology, 30(3), 220–223.

Yen, W.M. (1993). Scaling performance assessments: Strategies for managing local item dependence. Journal of Educational Measurement, 30(1), 187-213.


Copyright (c) 2022 International Journal of Assessment Tools in Education


This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.



How Can Critical Thinking Be Used to Assess the Credibility of Online Information?

Albie van Zyl

Department of Informatics, University of Pretoria, Pretoria, 0001 South Africa

Marita Turpin

Machdel Matthee

The prevalence of unverified information on the internet, and its potential adverse effect on society, led to the development of a number of models and theories to assess the credibility of online information. Existing research consists of two distinct approaches: the first comprises checklist approaches or normative guidelines on how to assess the information, whereas the second provides descriptive models and theories of how users actually go about assessing credibility. The above-mentioned approaches consider aspects related to the presentation and content of the information. However, the quality of the reasoning in the content is not a concern that these approaches cover. Critical thinking is considered an increasingly important 21st-century workplace skill. This paper investigates the potential value of using critical thinking in assessing the credibility of online information. The paper commences with an overview of existing approaches for assessing the credibility of online information. It then argues that the presence of a well-developed argument in online information is an indication of credibility. Critical thinking also helps to evaluate the credibility of evidence. These thinking skills can be developed through training. It is shown how a group of first-year Information Systems students were able to engage more critically with the content of online news after a course on critical thinking. This paper contributes to the literature on the assessment of the credibility of online information.


The internet has become indispensable as a source of information and news. Given the vast amount of information available as well as the large number of information sites, it has become increasingly difficult to judge the credibility of online information [ 1 ]. Metzger argues that in the past, traditional publishing houses used to act as gatekeepers of the information published. There was a cost barrier to printing, and the print process allowed for quality control. In the digital age, anyone can be an author of online content. Digital information and content can be published anonymously, and easily plagiarized and altered [ 1 , 2 ]. Online news platforms are in a continual race against time to be the first to publish, and in the process they sacrifice quality control. As a result, the gatekeeping function of evaluating the credibility of online information has shifted to individual users.

To date, scholars in information literacy have developed checklists to assist users in assessing the credibility of online information, as well as various theories and models to describe how users evaluate information in practice [ 2 – 5 ]. These models highlight aspects such as the influence of the subjectivity of the user in evaluating content, the process of evaluation as well as the cognitive heuristics that users typically apply during evaluation. The models also recognize that in the era of social computing and social media, evaluation has a strong social component [ 6 ].

In an overview of studies on the assessment of the credibility of online information, it was found that neither the established normative guidelines for evaluating credibility nor the descriptive models for evaluating credibility consider the quality of reasoning and argumentation contained in the information that is evaluated. This is surprising, since critical thinking is generally regarded as an important information literacy skill and, in addition, is viewed as an important 21st-century workplace skill [ 7 , 8 ].

In this paper, we present a case for the use of critical thinking as a means to assess the quality and credibility of online content. We suggest how critical thinking could be used to enhance current credibility assessment practices. The known assessment processes share with critical thinking a concern for the credibility of the evidence presented to substantiate the findings of an online article. Whereas credibility models mainly focus on presentation and content, critical thinking extends the evaluation of content by evaluating the quality of the argument presented. Admittedly, many fake (and other) news stories contain few if any arguments to evaluate. While the absence of an argument is not enough to discredit an online article, its presence can be used as a quality indicator. A weak argument will reduce the perceived credibility of the claim or finding of an article, while a strong argument will enhance its credibility.

This paper commences with a short overview of existing guidelines and descriptive models for evaluating the credibility of online information. The common themes among these models are summarized. Next, the paper introduces the building blocks of critical thinking and proceeds to indicate how critical thinking is used for argument evaluation. A means to assess the credibility of online information is proposed that uses critical thinking in a way that recognizes and builds on previous work related to credibility assessment.

Existing Research on Assessing the Credibility of Online Information

Credibility refers to the believability of information [ 4 ]. Credibility is regarded to be subjective: it is not an objective attribute of an information source, but the subjective perception of believability by the information receiver [ 4 , 9 ]. As such, two different information receivers can have different assessments of the credibility of the same piece of information.

Research on assessing the credibility of online information can be categorized into research on normative guidelines (in other words, what should people be looking at when they assess credibility) and research related to descriptive models or theories (how people are assessing credibility in practice).

A Checklist for Information Credibility Assessment

The normative approach to the assessment of information credibility is promoted by the proponents of digital literacy, who aim to assist internet users in developing the skills required for evaluating online information. Their assumption is that online information can be evaluated in the same manner as information found elsewhere [ 1 ]. A checklist approach is usually followed, where the list covers the following five components: accuracy, authority, objectivity, currency, and coverage or scope [ 1 ]. Accuracy refers to the degree to which the content is free from errors and whether the information can be verified elsewhere. It is an indication of the reliability of the information on the website. Authority refers to the author of the website, and whether the website provides contact details of the author and the organisation. It is also concerned with whether the website is recommended or endorsed by a trusted source. Objectivity considers whether the content is opinion or fact, and whether there is commercial interest, indicated for example by a sponsored link. Currency refers to the frequency of updates, and whether the date is visible. Coverage refers to the depth and comprehensiveness of the information [ 1 ]. In a checklist approach, a user is given a list of questions about what to look out for. For example, in terms of currency, the user has to look for evidence of when the page was last updated.
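The five checklist components can be sketched as a simple scoring structure. The questions and the equal weighting below are illustrative assumptions, not Metzger's actual instrument:

```python
# Illustrative checklist-style credibility assessment over the five
# components described above. Questions and equal weights are assumptions.
CHECKLIST = {
    "accuracy":    "Is the content free of errors and verifiable elsewhere?",
    "authority":   "Are the author's credentials and contact details given?",
    "objectivity": "Is the content fact rather than opinion, with no commercial interest?",
    "currency":    "Is the page dated and updated regularly?",
    "coverage":    "Is the treatment of the topic deep and comprehensive?",
}

def checklist_score(answers):
    """answers: dict mapping component -> bool (did the page pass the check?).
    Returns the fraction of components satisfied."""
    return sum(bool(answers.get(c)) for c in CHECKLIST) / len(CHECKLIST)

answers = {"accuracy": True, "authority": False,
           "objectivity": True, "currency": True, "coverage": False}
print(checklist_score(answers))  # 0.6
```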

In a series of studies conducted by Metzger and her colleagues [ 1 ], it was found that even when supplied with a checklist, users rarely used it as intended. Currency, comprehensiveness and objectivity were checked occasionally, whilst checking an author’s credentials was the check least preferred by users. This correlates with findings by Eysenbach and Köhler [ 10 ], who indicate that the users in their study did not search for the sources behind their website information, or for how the information was compiled. This lack of thoroughness is ascribed to users’ lack of willingness to expend cognitive effort [ 6 ]. The apparent attempt by users to minimise cognitive effort has given rise to studies on how users apply cognitive heuristics as well as other means to assess credibility more quickly and with less effort. This research led to the development of a number of descriptive models and theories on how users assess credibility in practice.

Descriptive Models and Theories Related to Information Credibility Assessment

The Use of Cognitive Heuristics.

A number of studies indicate that internet users avoid laborious methods of information evaluation, and that they prefer to use more superficial cues, such as using the look and feel of a website as a proxy for credibility rather than analyzing the content [ 5 , 6 , 11 ]. When evaluating credibility, people tend to apply cognitive heuristics, which are mental shortcuts or rules of thumb. Based on their previous experience, people respond to cues and act on these subconsciously, without the need to spend mental effort [ 6 , 12 , 13 ]. Five heuristics are identified that users commonly apply to decide on the credibility of online content [ 6 ]. The reputation heuristic is applied when users recognize the source of the information as one they believe to be reputable, possibly because of brand familiarity or authority. The endorsement heuristic means that a source is believed to be credible if other people believe so too; either people they know or people that have given it a good rating. The consistency heuristic means that if similar information about something appears on multiple websites, the information is deemed to be credible. The expectancy violation heuristic is a strong negative heuristic. Information that is contrary to the user’s own beliefs is not deemed to be credible. Lastly, when using the persuasive intent heuristic, users assess whether there is an attempt to persuade them or sell something to them. In this case, the information is perceived as not credible because there is a perceived ulterior motive or an attempt to manipulate the user.

The Prominence-Interpretation Theory.

The Prominence-Interpretation theory comprises two interlinked components that describe what happens when a user assesses the credibility of a website [ 14 ]. First, a user notices something (prominence), and then they interpret what they see (interpretation). If either of the two components is missing, no credibility assessment takes place. A user notices existing and new elements of a website and interprets them for credibility in an iterative fashion until they are satisfied that a credibility assessment can be made; alternatively, the user may stop on reaching a constraint, such as running out of time [ 14 ]. A visual representation of the Prominence-Interpretation theory is provided in Fig.  1 .


Fig. 1. Prominence-Interpretation theory [ 14 ]

Prominence refers to the likelihood that certain elements will be noticed or perceived by the user. The user must first notice an element in order to factor it into a judgement of the credibility of the information; if the user does not notice the element, it plays no role. Five factors are identified that influence prominence, namely involvement, topic, task, experience, and individual differences. The most dominant influence is user involvement, referring to the user’s motivation and ability to engage with content. Topic refers to the type of website the user visits. Task is the reason the user is visiting the website. Experience refers to the user’s experience in relation to the subject or topic of the website. Individual differences refer to the user’s learning style, literacy level, or need for cognition. When a user’s involvement is high and the user has expert-level experience, the user will cognitively notice more elements [ 14 ].

Interpretation refers to the user’s judgement of the element under review. For example, a broken link on a website will be interpreted as bad and lead to a lower credibility assessment of the website. Interpretation of elements is affected by a user’s assumptions, skills, knowledge and context.
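The theory itself specifies no algorithm, but the notice-then-interpret cycle can be sketched as a loop under assumed inputs, where each element carries a hypothetical probability of being noticed (prominence) and a signed impact once interpreted (the element names and numbers below are invented for illustration):

```python
# Minimal sketch of the Prominence-Interpretation cycle [14]: elements below a
# notice threshold never enter the judgement; iteration stops when nothing new
# is noticed or a constraint (here, a round limit standing in for time) is hit.
def assess_credibility(elements, notice_threshold=0.5, max_rounds=3):
    judgement = 0.0
    seen = set()
    for _ in range(max_rounds):                 # constraint, e.g. running out of time
        new = [e for e in elements
               if e["id"] not in seen and e["prominence"] >= notice_threshold]
        if not new:                             # nothing further noticed: stop
            break
        for e in new:
            seen.add(e["id"])
            judgement += e["interpretation"]    # interpret what was noticed
    return judgement

elements = [
    {"id": "broken_link",   "prominence": 0.9, "interpretation": -1.0},
    {"id": "author_bio",    "prominence": 0.6, "interpretation": +0.5},
    {"id": "tiny_footnote", "prominence": 0.1, "interpretation": +1.0},  # never noticed
]
print(assess_credibility(elements))  # -0.5: the unnoticed element plays no role
```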


When comparing the research on the use of heuristics [ 6 ] with the Prominence-Interpretation theory [ 14 ], one can see that the use of heuristics fits well into the “interpretation” component of Prominence-Interpretation theory.

A Web Credibility Framework.

Fogg’s web credibility framework [ 15 ] contains the categories of operator , content and design . Operator refers to the source of the website: the person who runs and maintains it. A user makes a credibility judgement based on the person or organisation operating the website. Content refers to what the site provides in terms of content and functionality; of importance are the currency, accuracy and relevance of the content and endorsements from respected outside organisations. Design refers to the structure and layout of the website. Design has four elements, namely information design (the structure of the information), technical design (how the site functions at a technical level, including its search function), aesthetic design (the look, feel and professionalism of the design) and interaction design (user experience, user interaction and navigation) [ 15 ].

The web credibility framework was extended by Choi and Stvilia [ 3 ] who divided each of the three categories (operator, content and design) into the two dimensions of trustworthiness and expertise, thereby forming what is called the Measures of Web Credibility Assessment Framework.
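The resulting three-by-two structure can be made concrete with a small sketch; the example cues attached to each cell are illustrative assumptions, not taken verbatim from [ 15 ] or [ 3 ]:

```python
# Sketch of the Measures of Web Credibility Assessment Framework: Fogg's three
# categories [15], each split by Choi and Stvilia [3] into two dimensions.
framework = {
    category: {"trustworthiness": [], "expertise": []}
    for category in ("operator", "content", "design")
}

# Hypothetical example cues, one per cell, for illustration only.
framework["operator"]["trustworthiness"].append("identifiable organisation")
framework["operator"]["expertise"].append("author credentials stated")
framework["content"]["trustworthiness"].append("claims cite sources")
framework["content"]["expertise"].append("content is current and accurate")
framework["design"]["trustworthiness"].append("professional look and feel")
framework["design"]["expertise"].append("working search and navigation")

for category, dims in framework.items():
    print(category, "->", list(dims))
```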

When consolidating the web credibility framework [ 15 ] and its extension [ 3 ] with the work on credibility assessment presented in the prior sections, one can say that the web credibility frameworks contribute to both prominence and interpretation . The web design contributes to the prominence, or noticeability, of the information. Further, the level of professionalism of the design can be interpreted by means of a heuristic such as the reputation heuristic. The website operator and content, when noticed, are interpreted by means of evaluation heuristics. Hence, the work presented in 2.2.1 – 2.2.3 can be reconciled into different aspects of online information that, when noticed, are interpreted by means of heuristics.

Iterative Models on the Evaluation of Online Information.

According to the Prominence-Interpretation theory [ 14 ], the interpretation of information occurs in an iterative fashion until a credibility assessment can be made. Two other models also recognize the iterative nature of credibility assessment: the cognitive authority model [ 2 ] and Wathen and Burkell’s model [ 16 ].

With the cognitive authority model, the information seeker iteratively assesses the authority and credibility of online content by considering the author, document, institution and affiliations [ 2 ]. These are integrated into a credibility judgement. The model is similar to the checklist [ 1 ], but proposes that users employ the technology available to them to make the judgement. Like the checklist, the cognitive authority model is normative.

Wathen and Burkell [ 16 ] also propose an iterative mode of assessment. According to their research, users first do a surface credibility check based on the appearance and presentation of the website. Secondly, the user looks for message credibility by assessing the source and the content of the message. Lastly, the content itself is evaluated. During this final stage, sense-making of the content occurs, depending on factors such as the user’s previous level of knowledge of the topic. If, at any stage, the user becomes aware of a reason to doubt the credibility of the information, the iterative process stops. Wathen and Burkell’s model [ 16 ] is normative but also incorporates descriptive research on information evaluation.

A Synthesised Summary of Existing Work on the Credibility Assessment Process

To synthesise the joint findings from previous work on credibility assessment of online information:

  • Credibility cues need to be noticed before they are processed [ 14 ].
  • The evaluation process is iterative and moves from surface level checks (such as look and feel of a website) through to engagement with the content [ 14 , 16 ].
  • From the outset of the evaluation process, cognitive or judgmental heuristics are applied to assess credibility. This is especially true during the interpretation phase, when a user evaluates the content itself [ 1 , 4 – 6 ]. Judgmental heuristics are used to reduce cognitive effort, as the user is inundated with information.
  • The evaluation process takes place in a social context, and some of the evaluation cues are socially generated, such as the number of website visitors, user recommendations or social rankings [ 6 ].

In the section that follows, the principles of critical thinking are introduced, in order to assess how critical thinking might be used to evaluate online content in light of what is already known about credibility evaluation.

Critical Thinking

The Foundation for Critical Thinking describes critical thinking as “that mode of thinking - about any subject, content, or problem - in which the thinker improves the quality of his or her thinking by skillfully analyzing, assessing, and reconstructing it” [ 17 ]. Some authors consider it an indispensable skill in problem solving. Halpern suggests a taxonomy of critical thinking skills covering a broad range of skills: (1) verbal reasoning skills, (2) argument analysis skills, (3) skills in thinking as hypothesis testing, (4) dealing with likelihood and uncertainty, and (5) decision making and problem solving skills [ 18 ]. The aspect of critical thinking of interest in this paper relates to the analysis of arguments. A useful definition of critical thinking is therefore the one suggested by Tiruneh and his co-authors [ 19 ]: critical thinking is the ability to analyse and evaluate arguments according to their soundness and credibility, respond to arguments, and reach conclusions through deduction from given information. Booth et al. [ 20 ], basing their work on ideas of Toulmin et al. [ 21 ], consider a basic argument to consist of a claim (or conclusion) backed by reasons, which are supported by evidence. An argument is stronger if it acknowledges and responds to other views and, if necessary, shows how a reason is relevant to a claim by drawing on a general principle (referred to as a warrant).

The following argument, adopted from [ 20 : 112] illustrates these components: “TV violence can have harmful psychological effects on children” (CLAIM), “because their constant exposure to violent images makes them unable to distinguish fantasy from reality” (REASON). “Smith (1997) found that children ages 5–7 who watched more than 3 h of violent television a day were 25% more likely to say that what they saw on television was ‘really happening’” (EVIDENCE). “Of course, some children who watch more violent entertainment might already be attracted to violence” (ACKNOWLEDGEMENT). “But Jones (1999) found that children with no predisposition to violence were as attracted to violent images as those with a violent history” (RESPONSE).

Booth and his co-authors [ 20 : 114] use the following argument to illustrate the use of a warrant in an argument: “We are facing significantly higher health care costs in Europe and North America (CLAIM) because global warming is moving the line of extended hard freezes steadily northward.” (REASON). In this case the relevance of the reason to the claim should be stated by a general principle: “When an area has fewer hard freezes, it must pay more to combat new diseases carried by subtropical insects no longer killed by those freezes” (WARRANT).

Of course, good arguments often need more than one reason in support of their conclusions, and complex arguments contain sub-arguments. However, the main components remain the same. Figure  2 summarizes the main components of a basic argument.
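To make the components concrete, the argument structure described by Booth et al. [ 20 ] can be captured in a small data structure; the completeness check is an illustrative addition of this sketch, not a method proposed by those authors, and the example is abbreviated from the TV-violence argument above:

```python
# Sketch of the core components of a basic argument (claim, reasons, evidence,
# warrant, acknowledgement, response) per [20], plus a toy completeness check.
from dataclasses import dataclass

@dataclass
class Argument:
    claim: str
    reasons: list              # each reason should be supported by evidence
    evidence: list
    warrant: str = ""          # general principle linking reason to claim
    acknowledgement: str = ""  # other views considered
    response: str = ""         # reply to those views

def missing_components(arg: Argument) -> list:
    """List core components that a stronger argument would still supply."""
    gaps = []
    if not arg.reasons:
        gaps.append("reasons")
    if not arg.evidence:
        gaps.append("evidence")
    if arg.acknowledgement and not arg.response:
        gaps.append("response")  # an acknowledged objection deserves a reply
    return gaps

tv = Argument(
    claim="TV violence can have harmful psychological effects on children",
    reasons=["constant exposure blurs fantasy and reality"],
    evidence=["Smith (1997): 25% more likely to say TV was 'really happening'"],
    acknowledgement="some children may already be attracted to violence",
    response="Jones (1999): non-predisposed children were equally attracted",
)
print(missing_components(tv))  # []
```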


Fig. 2. The core components of an argument [ 20 : 116]

Critical thinking entails the identification of the core components of an argument (analysis) in order to judge its credibility and quality and to formulate a response to it. According to Butterworth and Thwaites [ 7 ], a good quality argument is one where the reasons are true or justified and where the conclusion follows from the reasons. By using these criteria in the evaluation of arguments, classical fallacies such as the post hoc fallacy or circular reasoning can be identified. In addition, the evaluation of an argument entails asking questions and finding counterexamples. A good quality argument will pre-empt objections or counterexamples and respond to them. Butterworth and Thwaites [ 7 ] consider a credible argument to be one that is plausible or believable (acknowledging that some highly improbable claims can be true) and that comes from a trusted source. Credibility is enhanced if the claim is corroborated by different sources with different kinds of evidence.

The Use of Critical Thinking in the Context of Existing Credibility Assessment Models

It is suggested that critical thinking be included in the credibility assessment process as follows. With reference to the Prominence-Interpretation theory [ 14 ], critical thinking can be applied during the interpretation phase, to assess the quality of evidence as well as to evaluate the argument itself. It would only be used at a later stage in the iterative process of credibility assessment, possibly in the third phase of Wathen and Burkell’s iterative model [ 16 ].

Discussion: Potential Challenges to Using Critical Thinking to Assess the Credibility of Online Information

When considering the use of critical thinking to evaluate the credibility of online information, some challenges are apparent.

First, as indicated earlier, users who are flooded with information apply judgmental heuristics as a coping mechanism to reduce cognitive effort. They therefore prefer cues that give them immediate reason to believe or disbelieve the information presented to them. Argument evaluation requires cognitive effort, especially when a complex claim is presented. Users will therefore not go to the effort of thoroughly evaluating an argument if they can help it, unless there is high motivation to do so, for example when university-level students are looking for material to support the arguments in their essays.

A second challenge to the use of critical thinking in this context is that online news or other online content does not always contain an argument. A piece of news on social media may just consist of evidence. In that case, critical thinking would require the evaluation of the credibility of the evidence.

A third possible challenge is that, in an effort to mislead, the author of fake news may present a credible-looking argument on the basis of fake evidence that cannot readily be verified. Hence, while good argumentation is often associated with good quality content, this may not always be the case. However, the cognitive effort of trying to second-guess the veracity of a well-presented argument is so high that this is not a feasible task in everyday credibility assessment situations.

Addressing the Challenges

The above-mentioned challenges could be addressed as follows.

The challenge of the cognitive effort of critical thinking may be addressed by means of training. As motivated earlier in the paper, critical thinking forms part of information literacy and is an important 21st-century user skill. Training and regular exercise in argument evaluation will make it a habit that can be applied with less effort. A number of universities have compulsory first-year information literacy courses, and this is where critical thinking can be introduced. The authors are involved in teaching critical thinking and problem solving to first-year IS students. A study with pre- and post-assessment exercises to determine the effect of the course was conducted during the first half of 2019. A total of 154 students participated in the pre-assessment and 166 students in the post-assessment. The objective of the course was not to train students to identify fake news, but to analyse and evaluate arguments and to cultivate a critical attitude towards reading and interpreting texts.

Findings from a Course on Critical Thinking.

Pre-assessment: During the pre-assessment, students were asked several questions to test their critical thinking skills and one question to determine the credibility of a piece of information found online. The information presented to them [ 22 ] was a piece of fake news presenting an argument against the use of prison inmates to provide laughter in CBS sitcoms. In the pre-assessment, only 16% of students could identify it as fake news. Students who identified it as fake news applied most of the cognitive heuristics listed in Sect.  2.2 . For example, a few students knew that The Onion is a website known for its satirical articles (the reputation heuristic). A handful of students applied the expectancy violation heuristic (“It just doesn’t make sense to me honestly”; “In today’s day and age, such practice would never be accepted seeing as people get offended by even the most futile things”; “In today’s age, laughter can be produced on computers or a group of laughs taken once and then played back whenever the producers feel”). The consistency heuristic was also used (“This is my first time hearing about it”). Quite a number of students pointed out the lack of credible evidence.

Post-assessment: In the post-assessment, questions were asked to assess critical thinking skills in general, and the last question focused on fake news. Two different pieces of information were provided, one fake news and the other not (see Table  1 ).

Table 1.

Article 1 and Article 2

 | Title | Main idea | Source
Article 1 (fake) | Psychologists warn parents, don’t allow your children to watch Peppa Pig | Argument that watching Peppa Pig can lead to copying the bad behavior of Peppa Pig (envious, arrogant, proud, disrespectful etc.) | Adapted from Healthy Fit website [ 23 ]
Article 2 (real) | Sheep registered as pupils in bid to save classes at French Alps primary school | A report of French parents registering sheep as pupils to increase pupil numbers since “National education is only about numbers” | SkyNews [ 24 ]

Both articles contained far-fetched claims. Article 1 [ 23 ] is an argument containing unsubstantiated claims, sweeping statements and emotional language. Article 2 [ 24 ] was sourced from a ‘strange but true’ SkyNews site. Article 2 is a report based on claims backed by credible evidence. Students were asked to determine which one is fake news and to provide an argument for their choice. The results are given in Table  2 .

Table 2.

Responses on question to identify articles as fake news

Question options | Response
Article 1 only is an example of fake news | 36%
Article 2 only is an example of fake news | 36%
Both articles are examples of fake news | 20%
Neither of the articles are fake news | 8%

Students who correctly identified Article 1 as fake news (56% of students) typically mentioned the relative obscurity of the website and the absence of the names of experts (“there is said that experts were used in the article but none of the so called ‘experts’ names or institutions were called to show the research”). In other words, they applied the reputation heuristic. The following student applied the expectancy violation heuristic to (incorrectly) identify Article 1 as real news: “Article 1 can be seen as real news because the facts are not absurd”. What was clearly noticeable was that, in their assessment, most students used the critical thinking skills taught during the semester: they pointed out that the claims are not supported by evidence (“they state that there are parents who burned?? the film but no numbers are provided it could be 2 out of 1000 but nothing is stated to prove this reason”). They further mentioned the subjective nature of the article (“The use of adjectives such as ‘arrogant’, ‘disrespectful’, ‘envious’ makes the article sound extremely biased”) as well as its harsh language (“The article is also very opinionated and the language used is quite harsh”). They also found the reasoning to be faulty (“And the argument is unstructured – the ‘reasoning’ doesn’t lead up to a suitable conclusion.”).

Students who incorrectly identified Article 2 as fake news (56% of students) generally used the expectancy violation heuristic. They could not imagine sheep being school pupils (“Although article 2 comes from a reliable source the facts are absurd. [However] Article 1 can be seen as real news because the facts are not absurd”).


Article 1 is an argument, whereas Article 2 is a report. This explains why students were able to use critical thinking skills to evaluate Article 1. In Article 2, where no clear argument was present, critical thinking could only be applied to evaluate the evidence. Students found the evidence specific and traceable, which contributed to its credibility. Only 36% of students were able to classify both articles correctly. However, the fact that only 8% said that neither article was fake news was encouraging, compared to the pre-assessment, where 84% of students were not able to recognize the supplied article as fake news. The post-assessment results indicate that most students had developed a critical attitude towards the supplied texts.

Recommendations on Combining Critical Thinking and Cognitive Heuristics.

The use of critical thinking skills in identifying fake news can be complemented by applying the consistency heuristic [ 6 ] to search for other online sources that carry similar evidence.

Lastly, since the assessment of the credibility of online information has been found to be a socially interactive activity [ 6 ], the endorsement heuristic could be used to ask on a social platform whether information is credible. For example, a hoax-debunking website can be consulted to see whether the information has been exposed by other users as a hoax.

This paper considered the work that has been done to date on the assessment of the credibility of online information. A concise overview was presented of some of the major contributions in this domain. These contributions were synthesized into a list of common attributes that represent the key characteristics of credibility assessment models. Following this, the elements of critical thinking were introduced, and suggestions were made as to how critical thinking could be used for credibility assessment. The challenges related to the use of critical thinking in practice were also considered, and suggestions were made to overcome them. The effect of teaching critical thinking skills on IS students’ ability to identify fake news was discussed. Preliminary findings show that where fake news is presented as an argument, students use their skills in analysing and evaluating arguments to identify it. Where fake news takes the form of a report, students look for quality evidence.

This paper contributes to the literature on the assessment of the credibility of online information. It argues that, and suggests how, the important 21st-century skill of critical thinking can be applied to assess the credibility of online information. In doing so, this paper makes a contribution in terms of the responsible design, implementation and use of present-day information and communications technology.

Contributor Information

Marié Hattingh
Machdel Matthee
Hanlie Smuts
Ilias Pappas
Yogesh K. Dwivedi
Matti Mäntymäki

How to Evaluate Sources Using Critical Thinking: A Concise Guide to Informed Research

In today’s world, the internet provides us with a wealth of information, but not all of it is trustworthy. Knowing how to evaluate sources is an essential skill, especially when conducting research or seeking reliable information. Critical thinking plays a vital role in this process, as it allows individuals to assess the credibility, relevance, and quality of sources.


Identifying Reliable Sources

When conducting research, it is essential to use credible sources to ensure the accuracy and validity of your work. To identify reliable sources, it’s crucial to employ critical thinking skills and assess each source’s authority, publisher, credentials, and affiliations.

A credible source is typically authored by a person with relevant expertise in the field. Assessing the author’s credentials, such as their degrees, certifications, or professional experience, can provide insight into their authority on the subject. Additionally, considering the author’s affiliations and potential biases can help determine the reliability of the information presented.

When analyzing the content itself, the presence of citations and references adds credibility, as it demonstrates that the author’s claims are based on existing research. Well-reasoned and balanced arguments, supported by evidence, are also indicators of a reputable source.

In some cases, it may be useful to compare multiple sources to validate the information and identify potential discrepancies. This is particularly helpful in situations where not all sources may be equally trustworthy.

Assessing Source Credibility

Evaluating sources using critical thinking is essential for determining their credibility. Assessing source credibility involves examining various aspects of the information and the source itself to ensure its accuracy, relevance, and trustworthiness.

Determining the trustworthiness of a source can be achieved by considering the accuracy of the information and data provided. Cross-referencing the mentioned facts with other reputable sources may reveal inconsistencies that suggest unreliable information. Trustworthy sources often cite their sources and provide enough detail to verify their claims.

The Role of Critical Thinking Skills

Critical thinking skills play a crucial role in evaluating sources for their credibility, relevance, and accuracy. These skills involve the use of logic, analysis, and reflection to ensure that the information gathered from various sources is trustworthy and reliable. By applying critical thinking, individuals can make informed decisions and develop well-founded arguments, both in academic and professional settings.

Experience also plays a significant role in developing and applying critical thinking skills. As individuals encounter various sources and engage in different research projects, they can become more adept at identifying trustworthy information. Experience helps refine their ability to discern between credible and unreliable sources, ensuring that the evidence used in their work is accurate and well-founded.

In conclusion, critical thinking skills are vital for evaluating sources and determining their credibility, relevance, and accuracy. By utilizing logic, analysis, questioning, and experience, individuals can ensure that the information they gather is reliable, unbiased, and valuable for their purposes.

Understanding the Purpose and Audience

One of the first steps in evaluating a source is to identify its purpose. The purpose may be to inform, persuade, entertain, or express an opinion. Knowing the purpose behind the content helps readers determine whether the information provided aligns with their own goals and research interests. For example, an academic article seeking to inform would differ in tone and depth from a blog post expressing a personal opinion.

Tone is an important aspect of a source that can reveal the author’s perspective and potential biases. A neutral tone indicates an objective approach, while passionate or persuasive tones could indicate bias or an attempt to sway readers’ opinions. Recognizing the tone helps readers better understand the author’s intent and the reliability of the information provided.

Evaluating the Quality of Information

The first step in evaluating the quality of information is to examine its source. Consider factors such as the author’s qualifications, the publication date, and the publisher’s reputation. Sources should be both recent and reputable to ensure that the data and facts presented are accurate and up-to-date.

Another useful approach to evaluating information quality is the application of information literacy. This involves understanding the purpose of the information, its intended audience, and any potential consequences of using the information. By keeping these factors in mind, you can better assess the suitability of the information for your specific research needs.

In summary, when evaluating the quality of information, be sure to consider factors such as the source’s credibility, the content’s relevance and accuracy, and the application of information literacy. By maintaining a confident, knowledgeable, and neutral approach in your assessment, you can ensure the information you use is of high quality and supports your research effectively.

Determining the Relevance and Value of Sources

To determine the relevance of a source, first consider whether its content directly addresses your research question or provides information that is applicable to your topic. Analyze the main arguments and conclusions of the source to see if they align with your research goals. Also, take note of any biases or opinions the author may have that could affect the source’s relevance.

Investigating the Currency and Timing of Sources

Firstly, consider the publication or posting date of the information. Determine if it’s recent and whether the content has been revised or updated since it was initially published. A source that is current and up-to-date indicates that the author is actively maintaining the information, which could lead to more reliable conclusions in the research. Keep in mind that the importance of currency may vary depending on the topic and the discipline. In some fields, like technology and medicine, current sources are crucial, while in others, like history or literature, older sources may still be relevant.

In sum, investigating the currency and timing of sources is an essential step in evaluating their validity for a research project. By examining factors such as publication dates, revisions, references, and expert agreement, researchers can ensure that the sources they use contribute to a well-informed, relevant, and current understanding of their topic.

Recognizing Bias and Objectivity

Objectivity in a source can be identified by the presence of a balanced presentation of information, which acknowledges various perspectives and counterarguments. Objective sources will provide evidence and cite credible sources to support their claims. They maintain a neutral tone and avoid using emotionally charged language.

Identifying Potential Errors and Limitations

When evaluating sources using critical thinking, it is crucial to identify potential errors and limitations present in the information provided. This helps ensure the accuracy and reliability of the data. Errors can occur at any point in the research process, while limitations are inherent weaknesses or constraints that affect the findings’ overall validity.

Furthermore, take note of the methodology used in the research. A study with rigorous methodology, which meticulously controls various factors and variables, has a higher degree of reliability. Conversely, poor methodology can introduce errors and weaken the research’s reliability. When examining the methodology, look for a clear statement of the research question, an explanation of the research design and data collection procedures, and a transparent presentation of results.

In summary, when using critical thinking to evaluate sources, it is essential to identify potential errors and limitations. This will enable you to assess the credibility, reliability, and overall quality of the information, ensuring that you base your conclusions on sound evidence.

The CRAAP Test for Source Evaluation

The CRAAP test evaluates a source against five criteria: Currency, Relevance, Authority, Accuracy, and Purpose.

Relevance is the degree to which the information relates to the topic being researched. To assess relevance, consider the scope, depth, and target audience of the source. A relevant source should align with the research question and offer insight or evidence to support the argument being made.

Purpose is the goal or objective behind the information in the source. Analyzing the purpose helps to identify any potential biases or underlying motives. To evaluate purpose, consider the author’s intent, the target audience, and whether the information is presented objectively or with an ulterior agenda.
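As a rough sketch, the five criteria can be turned into a yes/no checklist. The question phrasings below are common renderings of the criteria, and the pass threshold is an arbitrary assumption, not part of the CRAAP test itself:

```python
# Toy CRAAP-style checklist: count 'yes' answers across the five criteria
# (Currency, Relevance, Authority, Accuracy, Purpose) and flag weak sources.
CRAAP_QUESTIONS = {
    "currency":  "Is the information recent or updated enough for the topic?",
    "relevance": "Does it address your research question and audience?",
    "authority": "Is the author or publisher qualified on this subject?",
    "accuracy":  "Are claims supported by evidence you can verify?",
    "purpose":   "Is the intent to inform rather than to sell or persuade?",
}

def craap_score(answers: dict) -> tuple:
    """Return (number of criteria passed, whether the source meets the bar)."""
    passed = sum(1 for criterion in CRAAP_QUESTIONS if answers.get(criterion, False))
    return passed, passed >= 4  # assumed bar: at least 4 of the 5 criteria

score, ok = craap_score({"currency": True, "relevance": True,
                         "authority": True, "accuracy": True, "purpose": False})
print(score, ok)  # 4 True
```

In practice the criteria are weighed qualitatively rather than counted, but a checklist like this makes the evaluation explicit and repeatable.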

Applying Source Evaluation in Different Contexts

In personal life , individuals often encounter various sources of information, such as news articles, social media, and online resources. Evaluating these sources helps in distinguishing reliable information from misinformation or biased perspectives. For instance, when making major life decisions, such as choosing a career or purchasing a house, individuals must critically assess the credibility of financial advice, job market trends, and real estate listings to make informed choices.

Research methods play a crucial role in evaluating sources, especially in academic and professional settings. A systematic approach to source evaluation, such as following established criteria like the CRAAP test (Currency, Relevance, Authority, Accuracy, and Purpose), ensures that the chosen sources align with research objectives and contribute to robust arguments. Additionally, researchers must evaluate the methodologies used in the sources to determine the quality and reliability of the research findings.

Consequences of Misleading Information

Misleading information can have significant consequences in various aspects of society. When people encounter false information or propaganda, they may unknowingly make decisions based on inaccurate or incomplete data.

In some cases, misleading information can serve as a tool for political manipulation. When political actors use false or distorted information to shape public opinion, they can gain power or maintain control. This can threaten democratic processes, making it more challenging for citizens to hold their representatives accountable.

Various methods can assist in evaluating the credibility of sources, such as the CRAAP test , whose acronym stands for Currency, Relevance, Authority, Accuracy, and Purpose. By considering each of these factors, individuals can effectively distinguish between trustworthy and unreliable sources, thereby improving their knowledge and experience when conducting research.

Critical thinking definition

Critical thinking, as described by Oxford Languages, is the objective analysis and evaluation of an issue in order to form a judgement.

The critical thinking process requires the active and skillful analysis, assessment, synthesis, and evaluation of information obtained from observation, knowledge, reflection, or conversation, as a guide to belief and action, which is why it is so often emphasized in education and academia.

Some even may view it as a backbone of modern thought.

However, it's a skill, and skills must be trained and exercised to reach their full potential.

People turn to various approaches to improve their critical thinking, such as:

  • Developing technical and problem-solving skills
  • Engaging in more active listening
  • Actively questioning their assumptions and beliefs
  • Seeking out more diversity of thought
  • Cultivating intellectual curiosity

Is critical thinking useful in writing?

Critical thinking can help you plan your paper and make it more concise, though its role isn't obvious at first. Here are some of the questions you should ask yourself to bring more critical thinking into your writing:

  • What information should be included?
  • Which information resources should the author look to?
  • What degree of technical knowledge should the report assume its audience has?
  • What is the most effective way to show information?
  • How should the report be organized?
  • How should it be designed?
  • What tone and level of language difficulty should the document have?

Critical thinking isn't limited to the outline of your paper; it also raises the question: how can we use critical thinking to solve the problems our writing addresses?

Say you have a PowerPoint on how critical thinking can reduce poverty in the United States. You'll first have to define critical thinking for your audience, then use critical thinking questions and related terms to familiarize them with your methods and set the thinking process in motion.

Are there any services that can help me use more critical thinking?

We understand that it's difficult to learn how to use critical thinking more effectively in just one article, but our service is here to help.

We are a team specializing in writing essays and other assignments for college students and anyone else who needs a helping hand. We cover a wide range of topics, offer high-quality work, always deliver on time, and aim to leave our customers completely satisfied with what they ordered.

The ordering process is fully online, and it goes as follows:

  • Select the topic and the deadline of your essay.
  • Provide us with any details, requirements, statements that should be emphasized or particular parts of the essay writing process you struggle with.
  • Leave the email address, where your completed order will be sent to.
  • Select your preferred payment type, sit back, and relax!

With lots of experience on the market, professionally degreed essay writers , online 24/7 customer support and incredibly low prices, you won't find a service offering a better deal than ours.


Financial Literacy to Develop Complex Thinking Skills: Quantitative Measurement in Mexican Women Entrepreneurs

Karla Bayly-Castaneda

  • 1 School of Business, Tecnológico de Monterrey, Monterrey, Mexico
  • 2 EGADE Business School, and Institute for the Future of Education, Tecnologico de Monterrey, Monterrey, Mexico
  • 3 Actuary Program, FES Acatlán, National Autonomous University of Mexico, Estado de Mexico, Mexico
  • 4 Hub Europe (Spain), Tecnológico de Monterrey, Monterrey, Mexico

The objective of this study was to validate the construction of a financial literacy measurement instrument aligned with complex thinking competencies in Mexican women entrepreneurs. Using the construct validation method, content was validated through expert judgment, construct validity through exploratory and confirmatory factor analysis, and internal consistency through a pilot test applied to a sample of 189 participants. The result is a highly valid and reliable instrument organized in four dimensions with a total of 23 items. This study examines and estimates the determinants of financial literacy for the first time in the light of complex reasoning.


Entrepreneurship, as a mainstay of the economy in a complex and constantly changing environment, deserves to be studied not only in terms of the threats it faces or the identification and viability of opportunities, but also in terms of the characteristics and decision-making skills of those who undertake it, especially women, one of the economically vulnerable sectors of the population. Financial literacy is an essential skill in modern life in both developed and emerging economies ( García-Mata, 2021b ): with the increasing complexity of the financial world, the ability to understand and properly manage money has become a pressing need directly related to the success of female entrepreneurship ( Baporikar and Akino, 2020 ). Through the acquisition of knowledge, skills and positive attitudes regarding money management, the female entrepreneur will be better prepared to face the recurrent economic crises that threaten the survival of her business ( Niwaha et al., 2016 ; Rachapaettayakom et al., 2020 ). However, few efforts are directed at addressing the financial literacy needs of adult women on the path of entrepreneurship.

Beyond theoretical knowledge about financial instruments and services, women entrepreneurs require support to develop competencies that put this knowledge into practice in contexts such as scenario analysis, risk assessment, the search for solutions, and informed decision making. Promoting financial literacy initiatives under the scheme of complex thinking can give the entrepreneur the ability to improve her decision-making process for the effective, innovative and efficient management of her business resources ( Dzwigol et al., 2020 ; Ramos et al., 2021 ). The objective of this research is the design and construct validation of an instrument to measure the complex thinking sub-competencies of Mexican women entrepreneurs in relation to their level of financial literacy, an approach that is original in being the first of its kind. The results demonstrate that it is possible to evaluate the level of financial literacy of women entrepreneurs in the context of complex thinking, so that future initiatives in this field can be developed.

Financial literacy

Financial literacy, as a discipline of study, attracted interest only at the beginning of the 21st century, as a result of the financial crises that have characterized it, raising questions about the influence of attitudes, behaviors and knowledge on individuals' decisions about managing their money and the impact of these decisions on their general wellbeing. Lusardi (2001) pioneered the study of the factors underlying the lack of a savings culture and its impact on long-term financial planning, and later posed a first set of questions aimed at measuring financial literacy ( Lusardi, 2008 , 2011 ; Lusardi and Mitchell, 2009 ). Governments and institutions followed, notably the Organization for Economic Cooperation and Development (OECD) with measurement instruments such as the PISA test (Programme for International Student Assessment), which includes the measurement of basic knowledge of money management among young people ( OECD, 2018 ). In 2012, the Mexican Government launched its first measurement initiative, the National Survey on Financial Inclusion (ENIF) ( CNBV, 2012 ). The results of these measurements consistently point to a significant gender gap: women show lower rates not only of financial literacy but also of access to useful and affordable financial products and services ( UNESCO, 2020 ; UN Women, 2021 ). Beyond the impact of financial literacy on women's individual wellbeing, it is time to focus on its influence on the success of female entrepreneurship, especially in micro and small businesses where planning, management and financing decisions are usually made by a single person.

A first step is to establish what is understood by financial literacy, both individually and in the management of a business. Although there is a diverse literature on the subject, the definition of the construct is heterogeneous and there is no consensus ( Anshika and Mallik, 2021 ), which represents an important challenge when proposing a model for its measurement. According to the OECD, financial literacy is the process by which financial consumers improve their understanding of financial products, related concepts, and the underlying risks and, through information, instruction and/or objective advice, develop the skills and confidence to become more aware of financial risks and opportunities, make informed decisions, know where to go for help, and take effective action to improve their economic wellbeing ( Raccanello and Guzmán, 2014 ). A more technical approach defines the financial literacy of the entrepreneur as the ability to understand the functioning of the business: possessing knowledge about recording income and expenses, separating personal and business money, and financial planning ( Ali et al., 2018 ). In a business context, this definition can be complemented with the execution of actions aimed at the stable growth and financial viability of the business over time. Equally important is the ability to make timely and informed decisions according to individual circumstances.

Women and entrepreneurship

Globally, there is an unequal race for business survival. In Latin America, and especially in Mexico, this inequality is starkly reflected in the distribution between large and small companies: according to the 2019 Economic Censuses, 99.8% of the country's establishments are micro, small or medium-sized ( INEGI, 2020 ). These micro and small enterprises (MSMEs) are a pillar of economic growth for developing countries and enable employment and self-employment for women, one of the vulnerable groups recognized by both the OECD and UNESCO, which, in the framework of the 2030 Agenda, points out the need to integrate women not only into productive processes but also into social and economic ones ( UNESCO, 2017 ). Meanwhile, inequality in financial inclusion in Mexico prevents women from accessing the savings and investment mechanisms needed to ensure a decent standard of living in retirement ( INEGI, 2021 ; INMUJERES, 2022 ). Entrepreneurship allows women not only to contribute to the economic wellbeing of the country, but also to generate income for their personal and family wellbeing while remaining compatible with care and household tasks.

Nevertheless, this sector faces enormous barriers in accessing funding and financing sources, as well as financial products and services through formal institutions, compounded by entrepreneurs' limited capacity for financial management and long-term business planning. Without an understanding of basic financial concepts, people lack the tools to make decisions about managing their resources ( Vanegas et al., 2020 ; Anshika and Mallik, 2021 ), which is reflected in low performance rates and a high mortality rate among MSMEs in Mexico: before the COVID-19 pandemic, the average lifespan of businesses in this sector was 7.8 years ( INEGI, 2020 ). Empowering women entrepreneurs by developing competencies that improve financial decision-making through innovative financial literacy initiatives can, over time, become a determining factor in improving the population's wellbeing indexes.

Complex thinking

Decision making is a reflective process that involves analyzing and discerning between different alternatives; a decision implies knowledge, understanding and analysis of a problem in order to arrive at the right choice. Some authors suggest that correct decision making is linked to greater cognitive capacity, which is a requirement for financial literacy, and that the greater the cognitive capacity, the greater the gender gap in financial inclusion ( Muñoz-Murillo et al., 2020 ). In addition, studies have shown that financial literacy is neither stable nor constant, nor can it be assumed that once acquired it will shape people's decision making, since it changes with time and circumstances, being related to the experiences, contexts and skills of the individual and the social situations that surround him or her ( Bay and Catasús, 2014 ). It becomes, then, a personal process in which individuals must take responsibility for their own self-training in order to prepare for changing economic environments ( Vanegas et al., 2020 ). Technological evolution and the speed with which large volumes of information are transferred have transformed the business world, demanding from its managers the ability to make decisions quickly, to be self-taught, and to be capable of interpreting reality, systematizing their decisions and learning from their own mistakes ( Pacheco-Velázquez et al., 2023 ). In other words, the entrepreneur must not only acquire the knowledge traditionally associated with financial literacy, but simultaneously develop complex thinking skills.

Unlike simple reasoning, which focuses on solving problems with a single correct answer, complex reasoning involves considering multiple perspectives and integrating multiple sources of information to find effective solutions. Complex thinking is defined as the competence acquired by individuals that allows them to develop a holistic view of the world, enabling them to carry out cognitive activities, analysis and synthesis to face challenges and solve problems ( Vázquez-Parra et al., 2022a , b ). The complex thinking competency integrates four sub-competencies:

a) Systemic: allows the identification of the elements and interconnections that make up an environmental problem or situation.

b) Critical: analyzes and evaluates the reasoning considering its validity and its differentiation with existing knowledge and structures.

c) Scientific: provides methodologies that allow the construction of objective reasoning to make decisions.

d) Innovative: provides creative and novel elements to evaluate the environment and the problems faced by the individual.

These sub-competencies are related to financial literacy in that they frame the knowledge of theoretical concepts and the skills to generate behaviors and attitudes for their application based on the experience and understanding of the environment as a consequence of business activities ( Manzanera-Román and Brändle, 2016 ; García-González et al., 2020 ; Oggero et al., 2020 ; Cruz-Sandoval et al., 2023 ). From the literature review it is clear that, although there are measurement instruments regarding the application of financial literacy aimed at micro and small business management, there is no evidence of the existence of measurement instruments that relate the field of financial literacy in the female sector to the subcompetencies of complex thinking. Table 1 summarizes some studies on financial literacy in entrepreneurs that have addressed the operationalization of financial literacy related to entrepreneurship, which have served as the basis for the development of the instrument that we present in the framework of this research.

Table 1 . Previous studies.

The method of the study was construct validation through the convergent and discriminant validity of a measurement instrument, in a three-stage process. In the first stage, a literature review was carried out to construct the operationalization of variables based on previous research on financial literacy and its relationship with complex thinking competencies. Given the absence of instruments addressing both constructs, it was decided to adapt the instrument “OECD/INFE Survey Instrument to Measure the Financial Literacy of MSMEs” ( Escobar-Pérez and Martínez, 2008 ; OECD, 2019 ) by reformulating items according to the four sub-competencies of complex thinking: critical thinking, systems thinking, innovative thinking and scientific thinking, following the framework proposed by Vázquez-Parra et al. (2023) , together with translation and cultural adaptation to ensure language oriented toward women's financial inclusion ( Agnew et al., 2018 ; Whitehouse, 2018 ; Osei-Tutu and Weill, 2021 ; Bertrand and Osei-Tutu, 2022 ). Next, a content validation process was carried out by expert judgment, followed by Aiken's V concordance test and a qualitative review of the experts' observations. Finally, once the corrections derived from this process were addressed, the instrument was piloted and validated by means of exploratory factor analysis, confirmatory factor analysis and, lastly, an internal consistency test based on Cronbach's Alpha coefficient.

Content validity through expert judgment

A group of seven expert judges was formed with proven experience in the field of financial literacy, as well as professional experience in direct contact with women entrepreneurs. As part of the experience of this group of experts we can highlight that three of them are authors of two or more books of international circulation on the subject of personal finance, three occupy management positions in financial institutions where they actively participate in the development of financial products and training aimed at women, and finally, one expert directs, at a national level, an initiative dedicated to the area of support for entrepreneurs in the post-acceleration stage of their business. The average number of years of experience in the area of personal finance and/or female entrepreneurship in this group is 11.3 years.

This group received the first version of the questionnaire with 40 items grouped into five domains of interest: Attitude, Behavior, Knowledge, Correct financial management decision and Training, and four subdomains corresponding to the sub-competencies of complex thinking: systemic, innovative, critical and scientific. The experts received the questionnaire via the Google Forms tool and were asked to evaluate each item for clarity (the item is understood and correctly worded), coherence (the item is related to the dimension it measures), relevance (the item is important and should be included) and sufficiency (the items belonging to the same dimension suffice to measure it) ( Matsumoto-Royo et al., 2021 ; Ramírez-Montoya et al., 2022 ; OECD, (n/d)) using a Likert-type polytomous scale from 1 to 5, and to add qualitative comments on the items. Items were accepted as valid when they reached a concordance index >0.71 as estimated with Aiken's V coefficient test ( Escurra, 1988 ; Penfield and Giacobbi, 2004 ), complemented by content analysis of the comments made by the group of experts.
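Aiken's V for a single item can be computed directly from the judges' ratings using the standard formula V = Σ(rᵢ − l) / (n·(c − 1)), where l is the lowest scale value, c the number of scale categories, and n the number of judges. A minimal sketch, assuming the 1–5 scale and seven judges described above (the function name is illustrative):

```python
def aikens_v(ratings, low=1, high=5):
    """Aiken's V for one item: V = sum(r_i - low) / (n * (high - low)).

    ratings: one rating per judge on a polytomous scale [low, high].
    Returns a value in [0, 1]; values above 0.71 are accepted as valid
    under the criterion used in this study.
    """
    n = len(ratings)
    s = sum(r - low for r in ratings)
    return s / (n * (high - low))

# Example: seven judges rating one item.
v = aikens_v([4, 5, 4, 5, 4, 5, 4])   # -> 24/28, about 0.857, above 0.71
```

An item rated 5 by every judge yields V = 1, while unanimous ratings of 1 yield V = 0, so the 0.71 cutoff sits well inside the upper range of agreement.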

Construct validity


The questionnaire received responses during February and March 2023 using convenience sampling of women entrepreneurs between 18 and 69 years of age ( n = 189). Observations in which the participant stated that she was not directly involved in the financial decisions of the business or did not own the business (88) were eliminated, reducing the sample to 101 women entrepreneurs distributed as follows: 18–19 years ( n = 9), 20–29 years ( n = 37), 30–39 years ( n = 13), 40–49 years ( n = 20), 50–59 years ( n = 14), and 60–69 years ( n = 8), with businesses employing one person ( n = 42) and employing between 2 and 50 people ( n = 59). All participants gave their consent for data collection ( Valenzuela and Flores Fahara, 2013 ) and declared that they were involved in making both investment and financing decisions for their business.

Once the results of the expert-judgment validation were incorporated, a new version of the questionnaire was assembled with 23 items grouped into four dimensions associated with the sub-competencies of complex thinking: (1) Scientific Thinking (SCT) with items R-01, R-02, R-05 and R-10, (2) Critical Thinking (CRT) with items R-06, R-12, R-13 and R-15, (3) Innovative Thinking (INT) with items R-09, R-14, R-16, R-17, R-19 and R-23, and (4) Systematic Thinking (SYT) with items R-03, R-04, R-07, R-08, R-11, R-18, R-20, R-21 and R-22.

The questionnaire was administered online using the Google Forms tool. The validation process consisted of an exploratory factor analysis to identify relationships between the variables of interest and group them into latent factors or underlying constructs, followed by a confirmatory factor analysis to confirm the structure identified in the exploratory analysis ( García and Caro, 2009 ; García-González et al., 2020 ). This statistical technique was performed using the SPSS tool in order to validate the dimensionality of the data set and its latent factors. The Kaiser–Meyer–Olkin (KMO) test was also performed as a measure of sampling adequacy, a prior evaluation of whether the data are suitable for factor analysis, with values ranging between 0 and 1. The reliability of the instrument was calculated using Cronbach's alpha as a measure of the internal consistency of the set of items included in the questionnaire; Cronbach's alpha ranges from 0 to 1, with values >0.7 considered acceptable for most research purposes ( González and Pazmiño, 2015 ). The data collection and analysis process observed ethical safeguards such as the confidentiality of the participants, their individual responses and the protection of the personal data provided. Methodological quality, data management and the interpretation of results were attended to throughout the research process.
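Cronbach's alpha can be reproduced from the raw item-response matrix with a few lines of NumPy, using the standard formula α = k/(k − 1) · (1 − Σσᵢ² / σ_total²), where k is the number of items, σᵢ² the sample variance of item i, and σ_total² the variance of each respondent's total score. A sketch (the function name is illustrative):

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score),
    using sample variances (ddof=1)."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)          # variance of each item
    total_var = X.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```

When all items are perfectly correlated (e.g., identical columns), the formula yields α = 1; uncorrelated items drive α toward 0, which is why values above 0.7 are taken to indicate acceptable internal consistency.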

Content validity

Through the calculation of Aiken's V coefficient ( Aiken, 1985 ), the relevance of the items with respect to the content domain was quantified based on the evaluations of the seven expert judges, with total agreement among the judges for 39 of the 40 items evaluated; that is, these items scored above 0.71 for clarity, relevance, coherence and correspondence. A content analysis of the qualitative observations made by the expert judges was also carried out, finding that all observations concerned the Clarity dimension of nine of the items; these observations were grouped into suggestions for modification or elimination. As a result of this analysis, a second version of the questionnaire was constructed with the 23 items best evaluated by the group of expert judges. Table 2 summarizes the value of Aiken's V coefficient calculated for each item; the process then moved to the reliability validation stage by piloting the instrument.

Table 2 . Result of the calculation of the Aiken V coefficient.

The suitability of the data for factor analysis was verified by a KMO value of 0.928, which indicates an excellent sample for factor analysis. Bartlett's test of sphericity yielded a chi-square statistic of 2,129.689 with 253 degrees of freedom and a significance level of <0.001, which allows rejection of the null hypothesis that the observed correlation matrix equals the correlation matrix expected for independent, uncorrelated variables; there is thus sufficient evidence to conclude that the variables are correlated. These statistical indicators reinforce that the sample size is adequate for the methodological objective of the study, in line with MacCallum et al. (1999) , who consider sample sizes of between 100 and 200 participants adequate for factor analyses in instrument validation studies.
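The reported degrees of freedom are consistent with the 23-item instrument, since Bartlett's test has df = p(p − 1)/2 = 23·22/2 = 253. The test statistic itself follows the standard formula χ² = −(n − 1 − (2p + 5)/6) · ln|R|, where R is the item correlation matrix; a minimal sketch (the function name is illustrative, and the p-value would then come from the χ² distribution with df degrees of freedom):

```python
import numpy as np

def bartlett_sphericity(R, n):
    """Bartlett's test of sphericity on a p x p correlation matrix R
    for n observations:
        chi2 = -(n - 1 - (2p + 5)/6) * ln|R|,  df = p(p - 1)/2.
    A large chi2 rejects the null that R is an identity matrix
    (i.e., that the variables are uncorrelated)."""
    R = np.asarray(R, dtype=float)
    p = R.shape[0]
    stat = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return stat, df
```

For an identity correlation matrix the determinant is 1 and the statistic is exactly 0 (no evidence of correlation); the large value reported in the study therefore supports proceeding with factor analysis.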

In the next stage of the factor analysis, the principal components solution was analyzed using Varimax rotation with Kaiser normalization, which seeks to maximize the variance explained by each component and minimize the variance shared among components. Table 3 shows that four common factors can be grouped, explaining 74.076% of the accumulated variance.

Table 3 . Total variance explained.

Table 4 shows the variables significantly associated with each of the four extracted principal components (loadings ≥0.3).

Table 4 . Rotated component matrix.

With respect to the data obtained during the piloting of the instrument, descriptive statistics were calculated for each item. Table 5 shows that the highest mean was 3.754 for item R-02, while the lowest mean was 2.257 for R-11.

Table 5 . Descriptive statistics of the pilot test.

The confirmatory factor analysis establishes a model of relationships that empirically confirms the modeled structure of the instrument after the reclassification of the items as a result of the rotated component matrix analysis. The model fit indices are as follows:

• Comparative Fit Index (CFI): The CFI value is 0.839. A CFI >0.90 generally indicates a good model fit, while values >0.80 can be considered acceptable.

• Tucker-Lewis Index (TLI): The TLI value is 0.818. Similar to CFI, a TLI >0.90 is indicative of a good model fit, while values >0.80 may be considered acceptable.

• Root Mean Square Error of Approximation (RMSEA): The RMSEA value is 0.122. An RMSEA ≤ 0.05 indicates a good model fit, while values between 0.05 and 0.08 indicate an acceptable fit. In this case, the RMSEA value exceeds these thresholds, so this index alone does not indicate an acceptable fit.

• Chi-square by degrees of freedom ( X 2 /df): The X 2 value is 558.472 and there are 224 degrees of freedom, resulting in an X 2 /df of ~2.49. An X 2 /df value close to 2 or less indicates a good model fit.
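The RMSEA and X²/df values above can be reproduced from the reported chi-square statistic, degrees of freedom, and sample size (n = 101), using the standard formula RMSEA = √(max(X² − df, 0) / (df · (n − 1))). A minimal check (the function name is illustrative):

```python
import math

def rmsea(chi2_stat, df, n):
    """Root Mean Square Error of Approximation:
    sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2_stat - df, 0.0) / (df * (n - 1)))

# Values reported for the confirmatory factor analysis.
chi2_stat, df, n = 558.472, 224, 101
r = rmsea(chi2_stat, df, n)        # -> about 0.122, matching the report
ratio = chi2_stat / df             # -> about 2.49, matching the report
```

Both derived values agree with the figures quoted in the fit-index list, which is a useful internal consistency check on reported CFA results.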

In summary, the CFI, TLI and X 2 /df indices suggest an acceptable overall model fit, although the RMSEA is above conventional cutoffs. Figure 1 presents the model generated, while Table 6 shows the reclassification of the items for the development of the final version of the measurement instrument.

Figure 1 . Results of the confirmatory factor analysis after item reclassification. Source: Own elaboration using SPSS Amos.

Table 6 . Reclassified reagents.

Finally, to measure the reliability of the instrument, the internal consistency of the instrument was estimated through the calculation of Cronbach's Alpha coefficient, obtaining a measurement precision of 0.966. According to the scale established for this measurement, the result is considered excellent.

Discussion and conclusions

The final version of the instrument measures the level of financial literacy of women entrepreneurs in terms of attitudes, knowledge, behaviors and decision making, aligned with the sub-competencies of complex thinking, according to the results of the confirmatory factor analysis shown in Figure 1 . This matters given the need to promote female entrepreneurship not only as a form of self-support but also as a source of employment generation and economic growth ( UNESCO, 2017 , 2020 ; INEGI, 2020 ; UN Women, 2021 ). Financial inclusion initiatives that raise financial literacy levels can help women entrepreneurs face and weather the ups and downs of the economy ( Raccanello and Guzmán, 2014 ; Ali et al., 2018 ; Oggero et al., 2020 ; García-Mata, 2021a ) while also enabling the complex reasoning competencies needed in a constantly evolving business environment ( Bay and Catasús, 2014 ; Dzwigol et al., 2020 ; Vanegas et al., 2020 ; Ramos et al., 2021 ). Knowing the level of financial literacy aligned with complex thinking competencies through the designed instrument allows the design and generation of effective financial literacy programs.

Content validity through expert judgment, as shown in Table 2 , allowed establishing the relevance of the items designed ( Escurra, 1988 ; Padilla García and Benítez, 2014 ; Sireci and Padilla, 2014 ; Matsumoto-Royo et al., 2021 ), while the exploratory and confirmatory factor analysis established the construct validity and its high relationship with the dimension assigned to them, as well as the reliability and internal consistency of the data validated by calculating Cronbach's Alpha ( García and Caro, 2009 ; González and Pazmiño, 2015 ; García-González et al., 2020 ). The information collected through the instrument can be used in financial literacy programs with emphasis on complex thinking sub-competencies for financial decision making in business.

The analyses carried out show that the instrument designed is a valid and reliable measure of financial literacy aligned with complex thinking competencies in Mexican women entrepreneurs. In conclusion, this study has generated an innovative instrument that can be used to measure the mastery not only of attitudes, knowledge, behaviors and financial decision-making of women entrepreneurs in Mexico, but also their interrelation with the sub-competencies of complex thinking that are necessary in the daily practice of entrepreneurship in the country.

A possible limitation of this study concerns the participants: the instrument was applied to women connected to an entrepreneurship support initiative who also have internet access through a mobile device, which leaves out those with lower socioeconomic status, limited use of electronic devices or digital media, or no affiliation with an entrepreneurship network. To address this limitation, validation of the instrument with a population sample that includes women entrepreneurs without the support of business development initiatives and in areas with limited access to the internet or mobile devices is recommended. Another future line of research is longitudinal study to measure the effects over time of a personalized financial literacy initiative and its impact not only on business longevity but also on growth and performance levels. A further possibility is to study in depth the impact of strengthening complex thinking skills on the empowerment of female entrepreneurship.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

Ethical approval was not required for the study involving human participants in accordance with the local legislation and institutional requirements. Written informed consent to participate in this study was not required from the participants in accordance with the national legislation and the institutional requirements.

Author contributions

KB-C: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Validation, Writing – original draft, Writing – review & editing. MR-M: Funding acquisition, Methodology, Supervision, Writing – review & editing. AE-R: Writing – review & editing. MM-B: Writing – review & editing.

Funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. The authors acknowledge the financial support of Tecnologico de Monterrey through the "Challenge-Based Research Funding Program 2022" (Project ID # I005 - IFE001 - C2-T3 - T), as well as academic support from the Writing Lab, Institute for the Future of Education.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The author(s) declared that they were an editorial board member of Frontiers at the time of submission. This had no impact on the peer review process and the final decision.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Agnew, S., Maras, P., and Moon, A. (2018). Gender differences in financial socialization in the home-an exploratory study. Int. J. Consum. Stud. 42, 275–282. doi: 10.1111/ijcs.12415

Aiken, L. R. (1985). Three coefficients for analyzing the reliability and validity of ratings. Educ. Psychol. Meas. 45, 131–142. doi: 10.1177/0013164485451012

Ali, H., Omar, E. N., and Nasir, H. A. (2018). “Financial literacy of entrepreneurs in the small and medium enterprises,” in Proceedings of the 2nd Advances in Business Research International Conference , eds. F. Noordin, A. K. Othman, and E. S. Kassim (New York, NY: Springer), 31–38. doi: 10.1007/978-981-10-6053-3_4

Anshika and Singla, A. (2022). Financial literacy of entrepreneurs: a systematic review. Manag. Financ. 48, 1352–1371. doi: 10.1108/MF-06-2021-0260

Anshika, Singla, A., and Mallik, G. (2021). Determinants of financial literacy: empirical evidence from micro and small enterprises in India. Asia Pac. Manag. Rev. 26, 248–255. doi: 10.1016/j.apmrv.2021.03.001

Baporikar, N., and Akino, S. (2020). Financial literacy imperative for success of women entrepreneurship. Int. J. Innov. Digit. Econ. 11, 1–21. doi: 10.4018/IJIDE.2020070101

Barte, R. (2012). Financial Literacy in Micro-Enterprises: The Case of Cebu Fish Vendors. Philippine Management Review. Available online at: (accessed November 4, 2023).

Bay, C., and Catasús, B. (2014). Situating financial literacy. Crit. Perspect. Account. 25, 36–45. doi: 10.1016/

Bertrand, J., and Osei-Tutu, F. (2022). Language gender-marking and borrower discouragement. Econ. Lett. 212:110298. doi: 10.1016/j.econlet.2022.110298

CNBV, I. (2012). National Financial Inclusion Survey (ENIF). Comisión Nacional Bancaria y de Valores, 85. Available online at: (accessed November 4, 2023).

Cruz-Sandoval, M., Vázquez-Parra, J. C., and Carlos-Arroyo, M. (2023). Complex thinking and social entrepreneurship. An approach from the methodology of compositional data analysis. Heliyon 9:e13415. doi: 10.1016/j.heliyon.2023.e13415

Dzwigol, H., Dzwigol-Barosz, M., Miśkiewicz, R., and Kwilinski, A. (2020). Manager competency assessment model in the conditions of industry 4.0. Entrep. Sustain. Iss. 7, 2630–2644. doi: 10.9770/jesi.2020.7.4(5)

Eniola, A. A., and Entebang, H. (2017). SME managers and financial literacy. Glob. Bus. Rev. 18, 559–576. doi: 10.1177/0972150917692063

Escobar-Pérez, J., and Martínez, A. (2008). Content validity and expert judgment: an approach to their use. Av. Medición 6, 27–36. doi: 10.32870/ap.v9n2.993

Escurra, L. M. (1988). Quantification of content validity by criterion judges. J. Psychol. 6, 103–111.

García, J. A. M., and Caro, L. M. (2009). Confirmatory factor analysis and the validity of scales in causal models. Ann. Psychol. 25:2. doi: 10.6018/analesps

García-González, A., Ramírez-Montoya, M. S., and León, G. D. (2020). Social entrepreneurship as a transversal competence: construction and validation of an assessment instrument in the university context. REVESCO J. Coop. Stud. 136:e71862. doi: 10.5209/reve.71862

García-Mata, O. (2021a). The effect of financial literacy and gender on retirement planning among young adults. Int. J. Bank Market. 39, 1068–1090. doi: 10.1108/IJBM-10-2020-0518

García-Mata, O. (2021b). Financial literacy among millennials in Ciudad Victoria, Tamaulipas, Mexico. Estud. Gerenciales 37, 399–412. doi: 10.18046/j.estger.2021.160.4021

González, J. A., and Pazmiño, M. R. (2015). Calculation and interpretation of Cronbach's Alpha for the case of validation of the internal consistency of a questionnaire, with two possible Likert-type scales. Rev. Publ. 2, 62–77. Available online at:

Guliman, S. D. (2015). An evaluation of financial literacy of micro and small enterprise owners in Iligan city: knowledge and skills. J. Glob. Bus. 4, 17–23.

INEGI (2020). Economic Censuses 2019. Available online at: (accessed November 4, 2023).

INEGI (2021). Encuesta Nacional de Inclusión Financiera (ENIF) 2021 (p. 30). Available online at: (accessed November 4, 2023).

INMUJERES (2022). Inequality in figures (Bulletin No. 5). National Institute of Women, 2. Available online at: (accessed November 4, 2023).

Lusardi, A. (2001). Explaining Why so Many People Do Not Save (SSRN Scholarly Paper No. 285978). Rochester, NY. doi: 10.2139/ssrn.285978

Lusardi, A. (2008). Planning and financial literacy: how do women fare? Am. Econ. Rev. 98, 413–417. doi: 10.1257/aer.98.2.413

Lusardi, A. (2011). Financial literacy around the world: an overview. J. Pension Econ. Finan. 10, 497–508. doi: 10.1017/S1474747211000448

Lusardi, A., and Mitchell, O. (2009). How Ordinary Consumers Make Complex Economic Decisions: Financial Literacy and Retirement Readiness (NBER Working Paper No. w15350). Cambridge, MA: National Bureau of Economic Research. doi: 10.3386/w15350

MacCallum, R., Widaman, K., Zhang, S., and Hong, S. (1999). Sample size in factor analysis. Psychol. Methods 4, 84–99. doi: 10.1037/1082-989X.4.1.84

Manzanera-Román, S., and Brändle, G. (2016). Abilities and skills as factors explaining the differences in women entrepreneurship. Suma Neg. 7, 38–46. doi: 10.1016/j.sumneg.2016.02.001

Matsumoto-Royo, K., Ramírez-Montoya, M.-S., and Conget, P. (2021). Design and validation of a questionnaire to assess opportunities for pedagogical practice, metacognition and lifelong learning provided by initial teacher education programs. Stud. Educ. 41, 131–161.

Muñoz-Murillo, M., Álvarez-Franco, P. B., and Restrepo-Tobón, D. A. (2020). The role of cognitive abilities on financial literacy: new experimental evidence. J. Behav. Exp. Econ. 84:101482. doi: 10.1016/j.socec.2019.101482

Niwaha, M., Tibihikirra, P., and Tumuramye, P. (2016). Entrepreneurship and financial literacy: an insight into financial practices of rural small and micro business owners in the Rwenzori region.

OECD (2018). PISA Results (Volume IV): Are Students Smart About Money? Available online at: (accessed November 4, 2023).

OECD (2019). OECD/INFE survey instrument to measure the financial literacy of MSMEs, 42. Available online at: (accessed November 4, 2023).

OECD (n.d.). Financial Literacy Test - PISA. Available online at: (accessed March 12, 2023).

Oggero, N., Rossi, M. C., and Ughetto, E. (2020). Entrepreneurial spirits in women and men. The role of financial literacy and digital skills. Small Bus. Econ. 55, 313–327. doi: 10.1007/s11187-019-00299-7

Osei-Tutu, F., and Weill, L. (2021). Sex, language and financial inclusion. Econ. Trans. Inst. Change 29, 369–403. doi: 10.1111/ecot.12262

Pacheco-Velázquez, E. A., Vázquez-Parra, J. C., Cruz-Sandoval, M., Salinas-Navarro, D. E., and Carlos-Arroyo, M. (2023). Business decision-making and complex thinking: a bibliometric study. MDPI 13:80. doi: 10.3390/admsci13030080

Padilla García, J. L., and Benítez, I. (2014). Validity Evidence Based on Response Processes. Granada.

Penfield, R. D., and Giacobbi, P. R. Jr. (2004). Applying a score confidence interval to aiken's item content-relevance index. Meas. Phys. Educ. Exerc. Sci. 8, 213–225. doi: 10.1207/s15327841mpee0804_3

Raccanello, K., and Guzmán, E. H. (2014). Education and financial inclusion. Rev. Latinoam. Estud. Educativos 44:2. doi: 10.48102/rlee.2014.44.2.250

Rachapaettayakom, P., Wiriyapinit, M., Cooharojananone, N., Tanthanongsakkun, S., and Charoenruk, N. (2020). The need for financial knowledge acquisition tools and technology by small business entrepreneurs. J. Innov. Entrep. 9. doi: 10.1186/s13731-020-00136-2

Rahim, S., and Binod, R. B. (2021). Financial literacy: the impact on the profitability of the SMEs in kuching. Int. J. Bus. Soc. 21, 1172–1191. doi: 10.33736/ijbs.3333.2020

Ramírez-Montoya, M. S., Castillo-Martínez, I. M., Sanabria-Zepeda, J., and Miranda, J. (2022). Complex thinking in the framework of education 4.0 and open innovation: a systematic literature review. J. Open Innov. Technol. Mark. Complex. 8. doi: 10.3390/joitmc8010004

Ramos, E. V. R., Otero, C. A., and Heredia, F. D. (2021). Competency-based training of the management professional: from a contingency approach. Rev. Cienc. Soc. 451–466. doi: 10.31876/rcs.v27i2.35933

Sireci, S., and Padilla, J. L. (2014). Validating assessments: introduction to the special section. Psicothema 26, 97–99. doi: 10.7334/psicothema2013.255

UN Women (2021). Win-Win: Gender Equality is Good Business. UN Women. Available online at: (accessed November 4, 2023).

UNESCO (2017). UNESCO Moves Forward. The 2030 Agenda for Sustainable Development. United Nations Educational, Scientific and Cultural Organization. Available online at: (accessed November 4, 2023).

UNESCO (2020). Summary of the Global Education Monitoring Report, 2020: Inclusion and Education: All without Exception. Available online at: (accessed November 4, 2023).

Valenzuela, J. R., and Flores Fahara, J. R. (2013). Fundamentos de Investigación Educativa , Vol. 2, 1st Edn. Monterrey: Editorial Digital Tecnológico de Monterrey.

Vanegas, J. G., Mesa, M. A. A., and Gómez-Betancur, L. (2020). Financial education in women: a study in the López de Mesa neighborhood of Medellín. Rev. Fac. Cienc. Econ. 28, 121–141. doi: 10.18359/rfce.4929

Vázquez-Parra, J. C., Castillo-Martínez, I. M., and Ramírez-Montoya, M. S. (2022a). Development of the perception of achievement of complex thinking: a disciplinary approach in a Latin American student population. Educ. Sci. 12:5. doi: 10.3390/educsci12050289

Vázquez-Parra, J. C., Castillo-Martínez, I. M., Ramírez-Montoya, M. S., and Amézquita-Zamora, J. A. (2023). Gender gap in the perceived mastery of reasoning-for-complexity competency: an approach in Latin America. J. Appl. Res. High. Educ. 12. doi: 10.1108/JARHE-11-2022-0355

Vázquez-Parra, J. C., Cruz-Sandoval, M., and Carlos-Arroyo, M. (2022b). Social Entrepreneurship and complex thinking: a bibliometric study. Sustainability 14:20. doi: 10.3390/su142013187

Whitehouse, M. (2018). The language of numbers. transdisciplinary action research and financial communication. AILA Rev. 31, 81–112. doi: 10.1075/aila.00014.whi

Keywords: financial literacy, complex thinking, measurement instrument, higher education, educational innovation

Citation: Bayly-Castaneda K, Ramírez-Montoya MS, Erdély-Ruiz A and Montoya-Bayardo MA (2024) Financial literacy to develop complex thinking skills: quantitative measurement in Mexican women entrepreneurs. Front. Educ. 9:1331866. doi: 10.3389/feduc.2024.1331866

Received: 01 November 2023; Accepted: 29 May 2024; Published: 03 July 2024.

Copyright © 2024 Bayly-Castaneda, Ramírez-Montoya, Erdély-Ruiz and Montoya-Bayardo. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: María Soledad Ramírez-Montoya,

† ORCID: Karla Bayly-Castaneda; María Soledad Ramírez-Montoya; Arturo Erdély-Ruiz; Miguel Angel Montoya-Bayardo


