This is an important question to ask yourself. As well as helping you to write a good literature review, fully understanding the need for such work is what allows you to know that you are on track, that what you are doing is worthwhile, and that you have a contribution to make. In other words, the literature review is integral to the whole thesis; it is not just a routine step taken to fulfil formal requirements.
You need a good literature review because it:
The literature review becomes your springboard for the whole thesis.
Be sure to select a topic you can manage in the time frame you have to complete the project. Narrow down the topic if it is too broad. If you need help with this, ask your professor, ask a librarian, or use subject suggestions in GALILEO.
Use a variety of sources: books, articles, conference proceedings, government reports, theses and dissertations, etc. Do NOT rely solely on electronic full-text material (which is readily available). Reference resources, such as dictionaries, may be useful in defining key terminology, and encyclopedic sources may provide a good introduction to specific areas of the topic.
The most important part of this step is to review and analyze the literature you collect! The review process is ongoing: as you identify new ideas, you may need to go back and locate additional materials to see whether others have written on similar topics.
During the review, you can begin to notice patterns in the literature and to separate your findings into different categories.
Remember, a literature review is NOT simply a list of the resources with a summary of each one!
You can organize the review in many ways; for example, you can center the review historically (how this topic has been dealt with over time); or center it on the theoretical positions surrounding your topic (those for a position vs. those against, for example); or you can focus on how each of your sources contributes to your understanding of your project.
Your literature review should include
Asa H. Gordon Library
Savannah State University 2200 Tompkins Rd Savannah, GA 31404 Phone: (912) 358-4324 Reference Text Line: (912) 226-2479
by Antony W
July 11, 2022
If you’re a student currently studying for a PhD in forensic science, you’ll need to write a dissertation in your area of study to graduate and earn your degree. That’s why it’s important to look at some forensic science dissertation topics to help you find an area to investigate further in your research.
Forensic science is an area of study that focuses on the application of science to civil and criminal law during criminal investigation. As a forensics student, you’ll learn how to examine traces of material evidence to determine what exactly occurred. Also, the study involves the presentation of impartial scientific evidence that the authorities can use in court.
Our guide to choosing dissertation topics, even in the field of forensic science, remains unchanged. Choose an interesting topic, but one that you can explore within the scope or research constraints of the project.
With that said, let’s look at some interesting topics that you can start to explore right away.
Here are some interesting forensic dissertation topics that are likely to catch your professor’s attention. Pick any of the topics based on the selection criteria we’ve shared with you and present it to your supervisor for review.
We can define forensic science as the application of scientific procedures such as data gathering, testing, and observation to discover how historical events occurred with the goal of generating unbiased evidence in a court of law.
Originally, the term forensic referred to public gathering spaces where individuals assembled to discuss criminal matters. Defendants would use these venues to testify in front of a court about their innocence. The term has since evolved: it now refers to the collection of legal evidence that the people involved in a case can produce in a court.
Notably, forensic science also involves the application of scientific and empirical methodologies to falsify or verify evidence to determine the trustworthiness of a case.
Some dissertation topics that you can research further in this category are as follows:
Also Read: How to Reference a Dissertation Project
Unlike trade-based occupations, forensic science is governed by a self-imposed ethical code of conduct that all practitioners must follow.
The following are examples of great topics that you can explore in your dissertation project, if you decide to do a forensic project on ethical issues:
Get custom dissertation writing help from a team of professional writers who have experience in writing the best dissertation topics in forensic science. Get up to 30% discount on your order and enjoy the flexibility of assignment writing help.
The ways in which crimes are committed and investigated are evolving because of technological advancements and society’s growing reliance on technology. The world is on the verge of a slew of new technologies that will open up new avenues for criminals while also posing new obstacles for law enforcement.
The most prominent example is the threat posed by cybercrime. Other technologies, such as artificial intelligence and blockchain, are examples of completely new fields that will bring dramatic change in forensic investigation.
Some topics you can explore in this category are as follows:
You May Also Like: How to Create an Outline for a Dissertation
Brexit has had, and continues to have, a significant influence on the British economy and society. The United Kingdom benefited from the growth of police and judicial cooperation in the EU, including participation in Europol, the EU’s arrest warrant, and the exchange of forensic data. These advantages extended to forensic science, notably in terms of scientific funding and collaboration with EU research initiatives.
One of the most pressing questions is how the UK’s criminal justice system and forensic science, in particular, can cope now that the UK is no longer a member of the EU.
Here are some of the best dissertation topics to consider following the present Brexit issue:
About the author
Antony W is a professional writer and coach at Help for Assessment. He spends countless hours every day researching and writing great content filled with expert advice on how to write engaging essays, research papers, and assignments.
Forensics Digest
All about Forensics
A review article, also known as literature review, is an evaluation of previously published literature or data on a topic. It gives an overview of what has been done and found and generally does not present new data from the author’s own experiments.
The objectives of a literature review are to lay a comprehensive foundation covering the existing literature and current trends, highlight the main methodologies and research techniques, provide a critical and constructive analysis, and, where possible, identify potential areas for future studies. Review articles thus help other researchers by laying out the current knowledge, existing gaps and future research directions.
Given below are a few guidelines on how to write a review paper.
For Immediate Release: ACS News Service Weekly PressPac, April 20, 2022
Forensic scientists collect and analyze evidence during a criminal investigation to identify victims, determine the cause of death and figure out “who done it.” Below are some recent papers published in ACS journals reporting on new advances that could help forensic scientists solve crimes. Reporters can request free access to these papers by emailing newsroom@acs.org .
“Insights into the Differential Preservation of Bone Proteomes in Inhumed and Entombed Cadavers from Italian Forensic Caseworks” Journal of Proteome Research March 22, 2022 Bone proteins can help determine how long ago a person died (post-mortem interval, PMI) and how old they were at the time of their death (age at death, AAD), but the levels of these proteins could vary with burial conditions. By comparing bone proteomes of exhumed individuals who had been entombed in mausoleums or buried in the ground, the researchers found several proteins whose levels were not affected by the burial environment, which they say could help with AAD or PMI estimation.
“Carbon Dot Powders with Cross-Linking-Based Long-Wavelength Emission for Multicolor Imaging of Latent Fingerprints” ACS Applied Nanomaterials Jan. 21, 2022 For decades, criminal investigators have recognized the importance of analyzing latent fingerprints left at crime scenes to help identify a perpetrator, but current methods to make these prints visible have limitations, including low contrast, low sensitivity and high toxicity. These researchers devised a simple way to make fluorescent carbon dot powders that can be applied to latent fingerprints, making them fluoresce under UV light with red, orange and yellow colors.
“Proteomics Offers New Clues for Forensic Investigations” ACS Central Science Oct. 18, 2021 This review article describes how forensic scientists are now turning their attention to proteins in bone, blood or other biological samples, which can sometimes answer questions that DNA can’t. For example, unlike DNA, a person’s complement of proteins (or proteome) changes over time, providing important clues about when a person died and their age at death.
“Integrating the MasSpec Pen with Sub-Atmospheric Pressure Chemical Ionization for Rapid Chemical Analysis and Forensic Applications” Analytical Chemistry May 19, 2021 These researchers previously developed a “MasSpec Pen,” a handheld device integrated with a mass spectrometer for direct analysis and molecular profiling of biological samples. In this article, they develop a new version that can quickly and easily detect and measure compounds, including cocaine, oxycodone and explosives, which can be important in forensics investigations.
The American Chemical Society (ACS) is a nonprofit organization chartered by the U.S. Congress. ACS’ mission is to advance the broader chemistry enterprise and its practitioners for the benefit of Earth and all its people. The Society is a global leader in promoting excellence in science education and providing access to chemistry-related information and research through its multiple research solutions, peer-reviewed journals, scientific conferences, eBooks and weekly news periodical Chemical & Engineering News . ACS journals are among the most cited, most trusted and most read within the scientific literature; however, ACS itself does not conduct chemical research. As a leader in scientific information solutions, its CAS division partners with global innovators to accelerate breakthroughs by curating, connecting and analyzing the world’s scientific knowledge. ACS’ main offices are in Washington, D.C., and Columbus, Ohio.
Forensic Science: Getting Started with Research. Steps to Creating Search Strategies.
Not sure where to start your research? Below are three common types of sources used in research. Read about what they can contribute to your research and then explore the rest of the guide to learn how to find the sources. Remember the library staff is always here to help you! Contact our Ask Us service or subject librarians if you have any questions.
Articles in both print and electronic format provide:
Books in both print and electronic format provide:
Websites must be evaluated for credibility, authority and accuracy before using and provide:
Identify the keywords in your research question.
Keywords are words that carry content and meaning. The keywords in the research question "What is the feeding range of the blue whale in the Pacific Ocean?" are feeding range, blue whale and Pacific Ocean.
Think of words similar to your keywords in case a database doesn't use your original keywords. Synonyms for blue whale are baleen whale and Balaenoptera musculus.
A Boolean search is a search using the words AND, OR and NOT between the keywords. These words have a special function when used in a database.
You can avoid doing multiple searches for variations on word endings using the truncation symbol * (the asterisk) in most databases. Entering the keyword "blue whale*" will look for both blue whale and blue whales.
If you want a literature review, add "AND review" to your keywords. To find a research study, add "AND study" to your keywords.
Always go to the Advanced Search in a database to enter your Boolean searches because it gives you multiple boxes with the Boolean operators between them. If you are using a search with multiple search strings, enter OR within the search boxes and AND between the search boxes, e.g., [blue whale OR Balaenoptera musculus] AND [feeding range OR feeding grounds] AND [Pacific Ocean].
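The bracketed search above can also be built programmatically. The sketch below is an illustration only, assuming a database that accepts parenthesized Boolean strings; the helper name `build_boolean_query` is hypothetical:

```python
# Build a nested Boolean search string from groups of synonyms:
# OR joins synonyms within a concept group, AND joins the groups.
def build_boolean_query(concept_groups):
    clauses = []
    for group in concept_groups:
        # Quote multi-word phrases so the database treats them as units.
        terms = [f'"{t}"' if " " in t else t for t in group]
        clauses.append("(" + " OR ".join(terms) + ")")
    return " AND ".join(clauses)

query = build_boolean_query([
    ["blue whale*", "Balaenoptera musculus"],  # truncation covers plurals
    ["feeding range", "feeding grounds"],
    ["Pacific Ocean"],
])
print(query)
# → ("blue whale*" OR "Balaenoptera musculus") AND ("feeding range" OR "feeding grounds") AND ("Pacific Ocean")
```

The structure mirrors the Advanced Search boxes: each group is one search box (OR inside), and the groups are joined with AND.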
Need help? Then use the library's Ask Us service. Get help from real people face-to-face, by phone, by email, or by live chat.
https://www.nist.gov/forensic-science/interdisciplinary-topics/scientific-foundation-reviews
Scientific Foundation Reviews
Forensic science plays a crucial role in our criminal justice system. If the right evidence is present, forensic science can help investigators solve crimes, including cases that have long been cold. It can help exclude innocent people from an investigation or exonerate them in cases of wrongful conviction. And it can help juries as they make decisions that have enormous consequences on people’s lives.
But how do we know if we can trust the results of forensic analysis when making important decisions? NIST is helping to answer this question with a series of studies, called "scientific foundation reviews."
Our approach to conducting these studies, also known as technical merit evaluations, is described in NIST Interagency Report NISTIR 8225: NIST Scientific Foundation Reviews and generally follows these steps:
These scientific foundation reviews will be useful in a number of ways. First, they can help establish trust in methods that, when properly applied, rest on solid scientific foundations. Second, they can help forensic practitioners, investigators, courts and other stakeholders understand the capabilities and limitations of forensic methods and help ensure that those methods are used appropriately. Third, by identifying knowledge gaps, they can help provide strategic direction for future research.
These studies will also fulfill a critical need identified in a landmark 2009 report by the National Academy of Sciences. Titled Strengthening Forensic Science in the United States: A Path Forward , that report called for “studies establishing the scientific bases demonstrating the validity of forensic methods.” In addition, in 2016, the National Commission on Forensic Science recommended that NIST “conduct independent scientific evaluations of the technical merit of test methods and practices used in forensic science disciplines.” The U.S. Congress appropriated funds for NIST to conduct these reviews starting in 2018.
Each scientific foundation review will result in a report that will be freely available on this website. If you’d like to receive an alert when the reports are published or when related information becomes available, please sign up for our email list .
Digital Evidence
The field of digital forensics is constantly changing as new devices and applications become available. This review documents and evaluates the scientific foundations of digital evidence examination and recommends steps to advance the field.
Bitemark Analysis
Bitemark analysis is a forensic technique in which marks on the skin of a biting victim are compared with the teeth of a suspected biter. In addition to a review of the scientific literature, this review includes a report from an October 2019 workshop, hosted by the Center for Statistics and Applications in Forensic Science ( CSAFE ), where odontologists, researchers, statisticians, lawyers and other experts addressed scientific questions around bitemark analysis.
Related resources: October 2019 CSAFE Bitemark Thinkshop Report; Standards and Guidelines in Forensic Odontology; Published Criticisms of Bitemark Foundations; Bitemark Analysis Reference List.
DNA Mixture Interpretation
DNA evidence that contains very small quantities of DNA or a mixture of DNA from several people can be difficult to interpret reliably. This review focuses on the methods that forensic labs use when interpreting these challenging types of DNA evidence. More information about the NIST Interagency Report 8351-DRAFT “DNA Mixture Interpretation: A NIST Scientific Foundation Review” is available on the report's home page .
Forensic firearm experts can assess whether a specific gun was used in a crime by examining bullets and cartridge cases under a comparison microscope. This study will document the scientific foundations of that method and assess its reliability by evaluating the scientific literature on error rates.
Performing forensic footwear examinations involves photographing and collecting footwear impressions from a crime scene, analyzing the evidence in the lab and comparing the marks to known footwear impressions on the same or similar substrate. The NIST scientific review on footwear impressions will aim to identify what established scientific laws and principles underpin the forensic science methods and what publicly available empirical data exist to support the methods practitioners use to analyze the evidence.
Communicating Forensic Findings (June 25-26, 2024) - Workshop complete and team in development
The Scientific Reinvention of Forensic Science
Jonathan J. Koehler
a Northwestern Pritzker School of Law, Chicago, IL 60611
b Office of the Chancellor, University of Wisconsin-Madison, Madison, WI 53706
c Sandra Day O’Connor College of Law, Arizona State University, Phoenix, AZ 85004
There are no data underlying this work.
Forensic science is undergoing an evolution in which a long-standing “trust the examiner” focus is being replaced by a “trust the scientific method” focus. This shift, which is in progress and still partial, is critical to ensure that the legal system uses forensic information in an accurate and valid way. In this Perspective, we discuss the ways in which the move to a more empirically grounded scientific culture for the forensic sciences impacts testing, error rate analyses, procedural safeguards, and the reporting of forensic results. However, we caution that the ultimate success of this scientific reinvention likely depends on whether the courts begin to engage with forensic science claims in a more rigorous way.
It would be hard to overstate the importance of the transformation that is underway throughout most of the forensic sciences. For much of the 20th century, evidence from a variety of forensic sciences was routinely admitted in state and federal courts with very little scrutiny of whether it had either substantial validity or a genuine scientific foundation. Experts, usually associated with law enforcement and often without any formal scientific training, testified in court to the validity and outsized accuracy of the techniques and their conclusions. Courts admitted their testimony, generally without limitation or careful scrutiny, based on assurances from the forensic science community that the techniques were accurate, effective, and broadly accepted as valid. Assertions unsupported by empirical validation sufficed. The scientific authority of forensic science testimony rarely faced significant challenge from the opposing party, and the occasional challenges that were offered were nearly always unsuccessful.
The story began to change when DNA evidence emerged in the late 1980s and early 1990s. After initial breathless enthusiasm by courts about this transformative new identification technique, highly credentialed scientists identified meaningful concerns regarding how to “translate” laboratory DNA assessments for courtroom use. Several judges excluded DNA evidence to ensure adequate vetting by the scientific community. In the 1990s, scientists from various core disciplines including genetics, statistics, and psychology engaged in lively and sometimes contentious debates in peer-reviewed, scientific journals about the forensic use of DNA profiling, including such matters as population genetics, error rates, standards for defining a DNA match, and communicating the evidentiary meaning of a match. Those debates, and two DNA reports issued by the National Academy of Sciences (NAS), impacted the way DNA evidence was treated in court, creating a greater focus on scientific validity than existed for prior forensic techniques. Also in the 1990s, the Supreme Court decided a trio of critical cases on the use of scientific and other expert evidence in the courts. These cases emphasized that the Federal Rules of Evidence gave judges the responsibility to engage in judicial “gatekeeping” to determine whether that scientific and expert evidence was sufficiently reliable and valid to be admitted in court ( 1 – 3 ).
By the early part of the 21st century, a shift to a more scientific paradigm for the forensic sciences was observable, though still in its infancy ( 4 ). This shift represented a move from a framework of “trusting the examiner” to “trusting the method.” Rather than relying on untested foundational assumptions, and assurances from witnesses that their training and experience makes their confident conclusions accurate and trustworthy, legal scholars, scientists, and some forensic practitioners began endorsing a more scientific model that prioritizes common and detailed protocols, empirical testing, and more moderate, data-driven knowledge claims. Some have hinted that a scientific paradigm shift has already occurred ( 5 , 6 ); others see little evidence of a shift ( 7 ). Most likely, the transformation remains a work in progress: Notable progress has been made on some fronts, but significant concerns remain ( 8 ).
In some areas, when scientific reviews established that available empirical science did not support experts’ claims, entire subfields of forensic science that had contributed to criminal convictions for decades ceased (e.g., bullet lead analysis) or ceased using discredited principles (e.g., fire and arson analysis). In other areas, scrutiny led to reduced credibility and a shift away from exaggerated claims (e.g., microscopic hair analysis). However, other fields, such as bitemark identification, continued despite adverse scientific reviews ( 9 ).
Some forensic subfields, such as single-source DNA identification, survived scientific scrutiny quite well. Latent fingerprint identification, which has been scrutinized more than most other forms of pattern identification evidence, has survived as well, although it has scaled back on its claims in recognition of the role that human factors and subjectivity play in reaching conclusions ( 10 ). Firearms evidence is gaining attention from the scientific community, and weaknesses in its scientific foundation and reporting traditions have been identified ( 11 ).
In what follows, we discuss how the move to a more empirically grounded scientific culture in the forensic sciences impacts testing, error rate analyses, procedural safeguards, and the reporting of results. Whereas there can be no debate that forensic science claims must be grounded in both relevant testing and data, legitimate open questions remain about how best to make the forensic sciences “scientific.” How should errors and mistakes by forensic practitioners be defined and counted? How should conclusions be reported? These questions are currently being discussed and debated by the scientific community. Responsibility for implementing recommendations from the scientific community ultimately rests with the courts. Unfortunately, few courts have undertaken serious gatekeeping of forensic science evidence. We discuss this problem and conclude by examining how to build on institutional and structural opportunities to assure that this vital reinvention of forensic science proceeds.
The shift to a truly scientific framework in the forensic sciences requires attention to empirical testing of the techniques and methods employed under realistic conditions. As PCAST ( 12 ) notes, “Scientific validity and reliability require that a method has been subjected to empirical testing, under conditions appropriate to its intended use, that provides valid estimates of how often the method reaches an incorrect conclusion” (p. 27 and p. 118). Empirical testing is a sine qua non for moving from a “trust the examiner” to a “trust the methods” ethos.
Although scientifically minded people understand the importance of empirical testing in any scientific endeavor, calls to test the accuracy of forensic science claims are relatively recent. For most of the 20th century, few asked forensic scientists to provide empirical proof that they could do what they claimed. The training, knowledge, and experience of the examiner, coupled with assurances that the method used was generally accepted in the forensic community, were deemed sufficient to admit nearly every forensic science that was proffered in court in the 20th century. Once admitted, forensic scientists commonly offered conclusions with 100% confidence and claimed, with little evidence, a 0% error rate ( 13 ). Although some optional forms of certification existed, little attention was paid to whether, or how, forensic examiners should be required to pass proficiency tests or what those tests should include. Nor did judges require any form of testing or certification as a prerequisite to allowing forensic testimony.
Most forensic sciences were raised, if not always born, in the world of law enforcement for the purpose of helping police identify criminals. The granddaddy of forensic identification, anthropometry, was invented by Alphonse Bertillon in the Paris Prefecture of Police in the 1880s. This technique involved making systematic measurements of the bodies of prisoners to assist with their identification at a later date if they were using aliases ( 14 ). Fingerprints soon proved to be a more useful means of identifying criminals, and courts eagerly admitted this evidence without serious inquiry into the scientific underpinnings of the claim that experts could accurately identify the source of partial prints recovered from crime scenes. At no point did the fingerprinting method face the rough-and-tumble questioning of a scientific discipline where everything is questioned and tested, progress is incremental, and cautious, tentative claims are the norm. Over time, other forensic science techniques were invented and introduced on the basis of assurances from practitioners rather than persuasive evidence from rigorous scientific tests.
When DNA technology burst onto the legal landscape in the late 1980s—a technology that, unlike most forensic disciplines that came before it, derived from basic scientific disciplines—the broader scientific community took notice. Initially, this impressive technology was received with great enthusiasm. But questions about its courtroom use soon emerged. In People v. Castro ( 15 ), through the involvement of talented defense counsel and distinguished scientists as defense experts, substantial concerns about how laboratory DNA science was being “translated” for courtroom use gained prominence ( 16 ). In the wake of Castro and several cases that followed, the National Research Council of the National Academy of Sciences convened a blue-ribbon committee to examine DNA evidence, and a flurry of additional scientific activity ensued. Geneticists, statisticians, evolutionary biologists, psychologists, and others debated, tested, and wrote about various aspects of this new technique in prestigious scientific journals. It was not forensic science business as usual; this time there would be no deference to authority or to the say-so of a narrowly defined forensic community.
The National Research Council (NRC) ended up writing two reports, four years apart, about DNA evidence ( 17 [NRC I] and 18 [NRC II]). We do not focus on the reports as a whole but limit our attention to their respective treatments of testing in the forensic sciences.
Two types of proficiency tests were needed to legitimate the use of DNA profiling in court. One type of test would address issues that were internal to the forensic sciences. These tests address matters such as whether examiners can follow the protocols for a particular technique and whether different examiners and different laboratories obtain identical (or nearly identical) results on identical samples. A second type of test focused more on matters external to the day-to-day workings of forensic science analyses, such as helping triers of fact assign appropriate weight to DNA evidence. This goal is best accomplished through another type of proficiency test designed specifically to identify accuracy and error rates under various casework-like conditions ( 19 ). As NRC I noted, “Interpretation of DNA typing results depends not only on population genetics, but also on laboratory error” ( 17 , p. 88). This report referenced the results of a DNA proficiency test conducted a few years earlier that identified a false positive error rate of 2%. Noting that some of the early proficiency tests were “less than ideal,” NRC I stressed that for DNA typing, “laboratory error rates must be continually estimated in blind proficiency testing and must be disclosed to juries” ( 17 , p. 89).
This testing recommendation was largely ignored by the forensic science community and the courts. Moreover, some influential forensic science voices actively counseled against error rate testing on the specious grounds that error rates are irrelevant to individual cases because they change over time (testimony from a leading FBI scientist in United States v. Llera Plaza ( 20 ), p. 510). At trial, prosecutors argued that the source opinions of DNA examiners were reliable. With few exceptions, trial judges gave little weight to defense arguments that DNA evidence should be limited or excluded when error rate tests had not been performed.
NRC II offered a different perspective on tests designed to measure laboratory error rates than that taken by NRC I. NRC II offered four arguments against performing such tests: 1) error rates are unknowable because they are always in flux, 2) error rates never translate directly into an estimate of error for a given case because each “particular case depends on many variables,” 3) general error rate estimates “penalize the better laboratories,” and 4) an “unrealistically large number of proficiency trials” would be required to obtain reliable error rate estimates ( 18 , p. 85–86). Although these arguments were widely rebutted ( 21 – 23 ), this report stifled calls for empirical testing and made it difficult for defense attorneys to argue that the reliability of any proffered forensic science method is unknowable without such data.
Fourteen years later, yet another National Research Council report was issued ( 24 [NAS]). This report examined a variety of non-DNA forensic science disciplines (latent prints, shoeprints, toolmarks, hair, etc.) and concluded that nearly all had failed to test their fundamental premises and claims. According to NAS, testing requires an “assessment of the accuracy of the conclusions from forensic analyses and the estimation of relevant error rates” ( 24 , p. 122). A follow-up report by the President’s Council of Advisors on Science and Technology (PCAST) argued even more forcefully for empirical error rate testing programs: “Without appropriate estimates of accuracy, an examiner’s statement that two samples are similar—or even indistinguishable—is scientifically meaningless: it has no probative value, and considerable potential for prejudicial impact” ( 12 , p. 6).
These blue-ribbon analyses thus took a variety of approaches to proficiency testing in the forensic sciences. Three of the four reports noted above emphasized the importance of proficiency testing and the development of empirically grounded error rates. Although there are challenges to developing meaningful error rates, the program of proficiency testing called for in the PCAST and various NAS reports is an indispensable part of the evolving scientific framework in the forensic sciences. Error rate proficiency tests have now been conducted with forensic examiners in various subfields including latent prints ( 25 , 26 ), firearms and toolmarks ( 27 , 28 ), and footwear ( 29 ). These studies are important steps forward and have prompted interest in how error rates should be computed and reported. A consensus has not yet emerged. Far from signaling a discipline in disarray, ongoing research and sophisticated debates depict a field that is undergoing a scientific transformation.
In the late 20th century, proficiency testing in the forensic sciences focused mainly on the issue of examiner competence. Could the examiner conduct a proper analysis using simple exemplars, and did the conclusions reached by different examiners agree? To the extent error rates were computed from these proficiency tests, it was clear that those rates should be taken with a grain of salt. The study participants were usually volunteers who knew that they were being tested and who may have collaborated with others or otherwise examined the test samples differently than they would casework samples. The test providers often were not disinterested parties, and the samples used were less challenging than many that appear in actual cases. Although some of these testing problems remain, efforts have been made in recent years to employ realistic samples and to blind examiners to the fact that they are working with test samples rather than casework samples ( 30 , 31 ).
A focus on testing and accuracy raises important correlative questions: Precisely what counts as an error and how should error rates be computed? There is no single “correct” error rate ( 32 , 33 ). False-positive error rates, false-negative error rates, and false discovery rates are all different, legitimate error rates. But even when there is agreement about which error rate is of interest, scientists might not agree about what “counts” as an attempt (or trial) and what “counts” as an error. If examiners always reached either an identification conclusion (i.e., that two patterns derive from the same source) or an exclusion (i.e., they come from different sources) for all sample pairs in a test situation, it would be a simple matter to compute, say, a false-positive error rate. It would be the number of times the examiner reached a “same source” conclusion divided by the number of sample pairs that were known to have been produced by a different source.
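In the simple all-conclusive case described above, the arithmetic can be sketched in a few lines (the trial data below are invented purely for illustration):

```python
# Illustrative sketch: computing error rates from hypothetical proficiency
# test data in which every comparison yields a firm "same" or "different"
# conclusion (no inconclusives). Each trial pairs the ground truth with the
# examiner's conclusion.
trials = [
    ("different", "same"),       # a false positive
    ("different", "different"),
    ("different", "different"),
    ("same", "same"),
    ("same", "different"),       # a false negative
    ("same", "same"),
]

# False-positive rate: "same source" conclusions divided by the number of
# pairs known to come from different sources.
diff_source = [c for truth, c in trials if truth == "different"]
fp_rate = diff_source.count("same") / len(diff_source)

# False-negative rate is defined symmetrically over same-source pairs.
same_source = [c for truth, c in trials if truth == "same"]
fn_rate = same_source.count("different") / len(same_source)

print(fp_rate, fn_rate)  # 1 of 3 in each case, i.e., about 0.33 each
```

With no inconclusive responses, the denominator is unambiguous; the next paragraph shows why real tests are rarely this tidy.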
But forensic examiners do not always reach a firm binary source decision. Depending on the subfield, they might reach more limited judgments, such as leaning toward identification, high degree of association, association of class characteristics, limited association of class characteristics, inconclusive, indications of nonassociation, and leaning toward exclusion. * We discuss the wisdom of categorical conclusions later. For now, we simply note that error rate computations are not straightforward when an examiner reaches a conclusion other than identification or exclusion for a given paired comparison. Because all pairwise samples are, as a matter of ground truth, either produced by a common source (corresponding to a conclusion of identification) or by different sources (corresponding to a conclusion of exclusion), any conclusion other than identification or exclusion cannot be factually correct. This raises the question: Should conclusions other than identification or exclusion be classified as errors? If not, should these comparisons be included in the error rate denominator?
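A small sketch with invented data shows how much these scoring choices matter. The three rules below correspond to excluding inconclusives entirely, keeping them only in the denominator, and scoring them as errors:

```python
# Hypothetical test data: five comparisons whose ground truth is
# "different source" in every case.
conclusions = ["same", "different", "inconclusive", "inconclusive", "different"]

n_fp = conclusions.count("same")            # 1 clear false positive
n_inc = conclusions.count("inconclusive")   # 2 inconclusives

# Rule A: inconclusives dropped from the denominator entirely.
rate_a = n_fp / (len(conclusions) - n_inc)          # 1/3

# Rule B: inconclusives kept in the denominator but not scored as errors.
rate_b = n_fp / len(conclusions)                     # 1/5

# Rule C: inconclusives scored as errors (e.g., when experts judge the
# comparisons decidable), so they join the numerator.
rate_c = (n_fp + n_inc) / len(conclusions)           # 3/5

print(rate_a, rate_b, rate_c)
```

The same examiner performance yields a reported "false-positive rate" of roughly 33%, 20%, or 60% depending solely on the scoring convention, which is precisely why the debate described below matters.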
Some scholars have argued that under particular circumstances, uncertain conclusions (e.g., “inconclusive”) should be scored as correct or incorrect and should be included in error rate computations ( 34 ). According to this argument, inconclusives should be scored as errors when the available information—as judged by qualified experts or by the set of tested examiners themselves in aggregate—suggests that one of the two conclusive decisions could in fact be reached by a competent examiner. Dror ( 35 , pp. 1036–1037) goes so far as to say that, even when an examiner correctly concludes that two samples came from the same source, that decision should be scored as a false-positive error when a panel of experts or group of other examinees regard the comparison to be inconclusive.
Others have argued that inconclusives should not be scored as errors or counted in error rate computations on grounds that when examiners fail to offer a conclusive decision, they are neither wrong nor right because they have not made a claim about the underlying state of nature ( 36 , 37 ). According to this view, neither a panel of independent experts nor a wisdom-of-the-crowd approach provides a dependable gold standard for ascertaining when a pairwise comparison should be deemed inconclusive ( 38 ). Indeed, experts are most likely to disagree with one another on hard cases which, of course, are also the cases where examiners will be tempted to offer an inconclusive decision.
Resolution of this debate is complicated by the practical reality that forensic scientists might be motivated to minimize their reported error rates. If inconclusives are not treated as errors, then examiners might be incentivized to minimize their reported error rates in known test situations by deeming all but the most obvious comparisons inconclusive, even if they might reach a definitive conclusion about many or even most of those same stimuli in real-world casework. Conversely, if inconclusives are treated as errors, examiners might be incentivized to reach conclusions on even the most difficult cases and thereby increase the risk that innocent people are convicted based on faulty forensic science. Misuse of the inconclusive category is likely to be reduced when blind testing is broadly implemented and when examiners provide weight-of-evidence testimony rather than source conclusion testimony. This very debate, and the sophistication of the engagement with this set of questions about measuring error, is a welcome development.
For more than a century, the forensic science enterprise in the United States has been controlled and often staffed by law enforcement agencies. This may not be surprising given that police are responsible for investigating crimes, and forensic scientists have the ability to collect and examine evidence in a wide range of cases. But forensic science should not be the exclusive tool of law enforcement for several reasons. First, for the adversary system to work as intended, all parties—including criminal defendants—need to have equal access to forensic science resources. Second, the scientific status of the forensic sciences is compromised by its close association with one side. If crime laboratories are beholden to the needs of law enforcement, they might be discouraged from pursuing scientific investigations that are not aligned with the interests of law enforcement ( 24 , pp. 78–79; 39 , p. 775). Relatedly, if forensic scientists see themselves as working in partnership with police and prosecutors, subtle contextual and cognitive biases might creep into their work at various stages.
There has long been concern that expert witnesses who are retained by one side or the other in legal cases will, intentionally or unintentionally, slant their conclusions and testimony in favor of the party retaining them ( 40 ). Psychologists theorize that experts see themselves as part of a team and often develop a so-called “myside bias” ( 41 ) or “adversarial allegiance” to their team and teammates ( 42 ). In one controlled experiment, 108 forensic psychologists evaluated the risk posed by certain sex offenders at the request of either the prosecution or the defense. After reviewing and scoring four case files using standard risk-assessment instruments, the psychologists who thought that they had been hired by the prosecution viewed the offenders as posing greater risks than did the psychologists who thought that they had been hired by the defense ( 43 ).
The tendency to favor one’s own side in an adversarial setting is one of many demonstrated psychological influences (or biases) on human judgment and decision making. These biases may be perceptual, cognitive, or motivational in nature. Perceptual biases commonly refer to situations in which a person’s expectations, beliefs, or preferences affect their processing of visual stimuli ( 44 ). For example, a latent print examiner might “see” a point of similarity between two prints after having noted several other points of similarity between the prints, whereas another examiner—or even the same examiner—might not see the similarity absent an expectation that the two prints share a common source. Cognitive biases refer to systematic distortions in thinking that occur when people are processing information. Confirmation bias is a well-known cognitive bias in which people seek, interpret, and recall information in ways that tend to confirm their prior beliefs ( 45 ). Motivational biases, such as motivated reasoning, refer to the phenomenon in which our wishes distort our interpretations of events ( 46 ). The significance of these overlapping biases for forensic science work is that they might affect what examiners choose to look at, what they see when they look, and the conclusions that they reach about what they have seen.
Research shows that irrelevant contextual, cognitive, and motivational factors can alter the judgments and decisions of forensic scientists in many areas, including fingerprint ( 47 ), handwriting ( 48 ), firearms ( 49 ), DNA ( 50 ), pathology ( 51 ), forensic anthropology ( 52 ), digital forensics ( 53 ), bloodstain pattern ( 54 ), and forensic odontology ( 55 ). The takeaway point of these studies is not that forensic science evidence is fatally flawed. The point is that forensic scientists, like other scientists ( 56 , 57 ), are subject to potentially significant biases that should be examined empirically and minimized where possible.
Despite the ubiquity of subtle biases in human judgments ( 58 ), people do not readily recognize that their own judgments and decisions could be biased ( 59 ). Unsurprisingly, this reluctance has been observed in the forensic science community. When a small group of psychologists and forensic scientists debated the risk of bias in forensic judgment in a scientific journal in the late 1990s, some forensic scientists argued that their disciplines were objective (hence unbiased) and that potentially biasing information therefore need not be withheld from examiners ( 60 ). Two decades later, a survey of 403 forensic scientists suggested that this view may still be common. Most of the survey respondents did not think that their own judgments were influenced by cognitive bias, and most did not agree that examiners in their domain “should be shielded from irrelevant contextual information” ( 61 , p. 455). Regardless of whether practicing forensic scientists support efforts to guard against unwanted influences, it is incumbent on the broader scientific community to continue researching potential sources of bias and to continue proposing reforms designed to blunt the impact of bias on forensic judgments.
Perhaps the most important reform is blind testing and blind review. Training in most scientific fields includes learning how scientific judgments and choices might be tainted by subtle psychological forces. This problem is best addressed in human research by blinding investigators and participants alike to the participants’ condition (e.g., placebo or treatment). Similarly, in fields that rely heavily on subjective judgments—as many pattern-matching forensic sciences do—it would seem important to prevent analysts from receiving extraneous information that could affect their judgments about the patterns they analyze. In forensic science, blind analysis requires an administrator or case manager to provide examiners with case information on a need-to-know basis. Trace samples recovered from crime scenes (i.e., unknown samples) should be examined thoroughly prior to the introduction of reference samples (i.e., known samples). Knowledge about features of known samples, like knowledge about other aspects of the case, could inadvertently cause an examiner to see features in the unknown sample that are not there or fail to see features that are there ( 17 ).
Similar precautions should be taken for verifiers, i.e., examiners who are called on to provide a second opinion. These examiners should be unaware of their role as verifier of the conclusions offered by another examiner. Such knowledge could create a confirmation bias that affects the verifier’s forensic perceptions and judgments.
Scientists have recommended various blinding procedures for the forensic sciences. These include sequential unmasking ( 62 ), case manager models ( 63 ), and evidence line-ups ( 64 ). Sequential unmasking minimizes bias by blinding examiners to information about known samples until after the examiners have completed an initial review of the unknown samples. Information related to the known samples that is required for the examiner to draw additional conclusions is “unmasked” as needed. Whereas separate analyses of unknown and known samples will generally work well for DNA and fingerprint analysis, a modified version of this procedure is needed for fields such as firearms and handwriting where the known sample provides information needed for a proper examination of the unknown sample. Sequential unmasking has been implemented on occasion in the United States ( 65 ) and is employed as a working standard for fingerprint and DNA evidence at the Netherlands Forensic Institute and at the Dutch National Police for DNA ( 66 ). Recently, extensions of this technique have been proposed ( 67 , 68 ).
The case manager method minimizes bias by assigning a forensic “manager” to interact with investigators and to participate in decisions related to what is tested and how a “blind” examiner conducts those tests. The manager then tells an examiner what to do without revealing other case-relevant (or potentially biasing) information. In evidence line-ups, known reference samples that are not the source of the unknown sample are provided to the examiner at the comparison stage along with a reference sample from the suspected source of the unknown. In the context of an eyewitness lineup, this “filler-control procedure” ( 69 ) purportedly reduces errors that incriminate innocent suspects by spreading the errors among a set of fillers as well as the innocent suspects ( 70 ). This technique, which could be costly to implement broadly ( 69 ), may reduce false positive errors in forensic contexts as well ( 71 ).
Growing attention to bias-reducing reforms, though implemented only to a limited degree thus far, suggests that the forensic sciences are beginning to recognize that examiners may be influenced by irrelevant contextual knowledge. Behavioral science research holds the key to identifying procedural guardrails that should be erected to reduce unintentional bias.
4.1. Categorical Reporting.
Forensic scientists in many subfields offer one of three categorical conclusions when comparing an unknown (questioned) sample to a known (reference) sample: exclusion (the paired samples come from different sources), individualization (the paired samples come from the same source), or inconclusive (insufficient basis for excluding or individualizing). Exclusions arise when an examiner determines that there are important identifiable features in one of the samples that are not present in the other sample. That determination is left to the judgment of the individual examiner ( 72 ). When examiners feel that they lack sufficient evidence that two samples come from different sources, they must decide whether there is enough evidence to conclude that the pair come from the same source. An individualization—sometimes referred to as an identification—is a conclusion that a particular item or person is the one and only possible source of an unknown item of forensic evidence. † Despite the long history of reaching individualization conclusions in most forensic sciences, it is an unscientific practice that should be abandoned.
Individualization has long been central to the forensic science enterprise. ‡ Examiners make individualizations in most of their casework ( 73 ). Until recently, such testimony was routinely offered with “100% certainty” § and assurances of a 0% error rate. ¶ Although vestiges of this type of hyperbole remain, several forensic professional associations now warn their members not to engage in these practices.
However, the individualization claims themselves are nearly as problematic from a scientific standpoint as the exaggerated ways in which those claims are sometimes made. Individualization claims exaggerate what the underlying science can reveal ( 7 , 74 – 76 ). A scientist cannot determine that there is no chance that any object other than a particular known sample could be the source of an unknown sample simply because the known and unknown samples share many features ( 77 ). When forensic scientists offer individualization conclusions, they are merely offering personal speculation that markings on one of the samples that are not shared by the other sample are unimportant for source determination purposes and that they believe that the samples show sufficient similarity to conclude that they share a common source.
The individualization problem cannot be solved by adding a caveat that an individualization is a personal opinion rather than a scientific statement or that it is made to “a reasonable degree of scientific certainty,” as had become common in recent years ( 78 ). An examiner who offers such an opinion would still be engaged in an unwarranted “leap of faith” ( 76 ). Moreover, empirical research shows that such caveats have little impact on the weight that people assign to the forensic testimony ( 79 , 80 ).
Furthermore, if individualization testimony is abandoned, it should not be replaced by a statement that provides an estimate of the probability that the samples in question were produced by a common source. First, most forensic disciplines do not have extensive data on the frequency with which the various markings appear in various populations or statistical models that reveal the frequency with which particular markings appear in particular combinations. Therefore, no scientific basis exists for estimating the chance that observed similarities between items were merely coincidental. Second, even in disciplines where such data have been collected (e.g., DNA) or are being collected (e.g., fingerprints), it would still be inappropriate to use those data to provide source probability estimates. According to Bayesian logic, these estimates require the examiner to take account of the prior probability that the known source is the actual source of the unknown sample before reaching a conclusion about the source probability in question. The prior probability is informed by a variety of nonforensic considerations, including the existence and strength of other evidence in the case that the forensic scientist should not and likely would not know. Even when the forensic scientist does know the nonforensic facts of a case, that knowledge and its corresponding impact on the forensic scientist’s beliefs are not relevant at trial. Instead, jurors’ own prior beliefs about the source of the forensic evidence, based on other evidence in the case, should inform their source probability estimates.
How then should forensic examiners provide information to a factfinder? There is broad agreement in the scientific community that forensic scientists can and should confine their testimony to providing information pertinent to the weight of the forensic evidence ( 81 , 82 ). The question to be addressed is how much support the results of the forensic analysis provide for the proposition that the unknown and known samples share a common source. Note that this is a different question from how likely it is that the two samples share a common source. Triers of fact should make the latter judgment for themselves by updating their initial beliefs about the common source hypothesis with the additional weight provided by the results of the forensic analysis.
There is also an emerging consensus in the scientific and statistical communities that likelihood ratios (LRs) are the most appropriate tool for identifying the strength of forensic evidence ( 10 , 83 – 85 ). # In its most common form, the LR measures the strength of support that the forensic findings provide for the hypothesis that two samples share a common source relative to the alternative hypothesis that the two samples do not share a common source. If E denotes the evidence from the forensic analysis and CS denotes the hypothesis that the two samples share a common source, then the LR is P(E|CS)/P(E|-CS). In words, the LR is the probability of obtaining this forensic evidence if the two samples came from a common source divided by the probability of obtaining this evidence if the two samples did not come from a common source.
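A minimal numerical sketch, using made-up probabilities, shows the mechanics of the ratio just defined:

```python
# Hypothetical values for illustration only. Suppose the observed
# correspondence between the two samples would be seen with probability
# 0.99 if they share a source (allowing for some distortion), and with
# probability 0.01 if they do not (i.e., roughly 1% of the relevant
# population would show the same correspondence by coincidence).
p_e_given_cs = 0.99      # P(E | common source) -- assumed
p_e_given_not_cs = 0.01  # P(E | different sources) -- assumed

lr = p_e_given_cs / p_e_given_not_cs
print(lr)  # about 99: the findings are ~99 times more probable under the
           # common-source hypothesis than under the alternative
```

Note that the LR says nothing, by itself, about how probable the common-source hypothesis is; it only quantifies how strongly the findings should shift a factfinder's prior belief.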
At an abstract level, the LR is an appealing way to report forensic science evidence. In practice, however, it raises a set of challenges. Aside from a relative dearth of data, a significant obstacle to employing LRs to assess evidentiary weight is that it often is not obvious what values to use for the LR numerator and denominator. Even when LRs are computed using reliable data, human judgment usually plays a significant role. For example, reasonable people might disagree about the size and composition of the reference population used to inform the denominator of the LR. Consequently, the size of the LR may vary, sometimes by orders of magnitude.
Choices related to how to handle the risk of human error can also affect the magnitude of the LR. When the risk of such errors is ignored, LRs may become astronomically large. But when estimates of the rates at which recording errors, mislabeling errors, and sample mix-ups are incorporated into LR computations, the resultant LRs will typically be smaller ( 86 ). Whether the risk of error is expressly included in the LR computation or provided to jurors in some other way, this risk is always present, and it should place an upper limit on the weight assigned to the forensic evidence.
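Under one simplifying assumption (that the observed findings could arise either from a true correspondence or from a handling error such as a sample mix-up, with both rates hypothetical), the ceiling effect described above can be sketched as:

```python
# Hypothetical rates for illustration. If evidence E can be produced
# either by a genuine correspondence (random match probability, rmp) or
# by a laboratory error (rate e), then approximately:
#   P(E | not common source) ~= rmp + e
rmp = 1e-9   # assumed random match probability
e = 1e-3     # assumed combined rate of mix-ups, mislabeling, etc.

lr_ignoring_error = 1 / rmp        # about one billion
lr_with_error = 1 / (rmp + e)      # just under 1,000

print(lr_ignoring_error, lr_with_error)
```

However small the random match probability becomes, the LR under this model cannot exceed roughly 1/e, which is the sense in which the risk of human error places an upper limit on evidentiary weight.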
Misinterpretation poses another obstacle to employing LRs to describe the strength of forensic evidence ( 87 ). Studies show that people commonly transpose conditional probabilities and thereby end up treating LRs as posterior odds ratios ( 88 ). That is, rather than using LRs as a measure of the weight of evidence, people mistakenly treat LRs as if they directly answer the question, “What are the odds that these two samples come from a common source?” The error of confusing LRs with posterior odds ratios is committed by laypeople, judges, attorneys, and even the experts who present this evidence at trial.
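The transposition error can be made concrete with hypothetical numbers. In odds form, Bayes' rule gives posterior odds = prior odds × LR, so the same LR supports very different posterior odds depending on the prior:

```python
# Hypothetical values for illustration.
lr = 1000.0

prior_odds_low = 1 / 100_000   # source hypothesis initially very implausible
prior_odds_high = 1 / 10       # other evidence already implicates the source

post_low = prior_odds_low * lr    # 0.01 -> still 100:1 AGAINST common source
post_high = prior_odds_high * lr  # 100  -> 100:1 in favor of common source

# Reading the LR itself as "1000:1 odds that the samples share a source"
# transposes the conditional and silently assumes even (1:1) prior odds.
print(post_low, post_high)
```

The mistake, in other words, is not a small rounding issue: with a skeptical prior, an LR of 1,000 is compatible with the common-source hypothesis remaining improbable.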
Some scholars have proposed using verbal scales and qualitative expressions to convey forensic conclusions. For example, a popular scale in Europe describes LRs < 10 as providing slight support/limited support for the source proposition, LRs between 10 and 100 as providing moderate support, LRs between 100 and 1,000 as providing moderately strong support, etc. ( 83 , p. 64). This well-intentioned idea should not be implemented absent empirical evidence that people give appropriate weight to the evidence that is described using those qualitative terms. For example, if studies show that people treat, say, a 10,000:1 LR as if it were a 100:1 LR when the term “more likely” is used, then a different qualitative phrase is needed. It is not appropriate to simply assign verbal labels to LRs without knowing how people interpret those labels. Preliminary research suggests that some verbal scale expressions are treated roughly in accordance with their corresponding LRs, but some are not ( 89 ).
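The banded scale described above can be encoded mechanically (the bands below are only those listed in the text; higher bands exist but are not enumerated here). The open empirical question is whether readers actually weight these labels in line with the underlying LRs:

```python
def verbal_label(lr: float) -> str:
    """Map an LR >= 1 to a verbal support label per the European-style
    scale described in the text (higher bands not listed in this excerpt)."""
    if lr < 1:
        raise ValueError("this sketch covers LR >= 1 only")
    if lr < 10:
        return "slight/limited support"
    if lr < 100:
        return "moderate support"
    if lr < 1000:
        return "moderately strong support"
    return "(stronger bands not enumerated in this excerpt)"

print(verbal_label(500))  # moderately strong support
```

Encoding the mapping is trivial; validating that, say, "moderately strong support" is heard as something closer to 500:1 than to 5:1 is the research program the text calls for.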
Even as the forensic sciences continue to evolve, it will likely take years before conclusory individualizations are replaced by more scientifically justifiable weight-of-evidence measures such as LRs, verbal scales, or some other probabilistic indicator. A recent survey of 301 fingerprint examiners found that 98% of respondents report categorically rather than probabilistically and that a large majority regard probabilistic reporting to be inappropriate ( 90 ). To the extent that examiners in other forensic fields hold similar beliefs—and that prosecutors persuade judges that categorical reporting serves the interests of justice—change may be slow in coming. Further research on how factfinders hear and receive evidence must continue to be a priority.
What role have the courts played in improving the scientific quality of forensic science? How can the courts do better? For centuries, courts have appreciated both the value and risk of inviting expert witnesses to help factfinders find their way to the truth of disputed facts. Where specialized knowledge can cast useful light, it would be foolish to disregard it. On the other hand, parties in our adversarial legal system are motivated to present experts only when their testimony will advance the advocate’s case, regardless of whether their words illuminate underlying truths.
Courts and other rulemaking bodies have developed various legal tests calculated to facilitate the screening of expert evidence. One hundred years ago, in Frye v. United States ( 91 ), a court turned to the intellectual market for guidance. Only those propositions and techniques that had “gained general acceptance in the particular field in which it belongs” would be admissible ( 91 , p. 1014). The Frye test, which has its merits, also exposed the courts to the substantial risk that those who stood to benefit most from the admission of certain types of expert evidence might be called upon to vouch for questionable evidence if the “particular field” was defined too narrowly. Over subsequent decades, judges variously employed the Frye test, related tests, and, often, no test at all to screen experts, including forensic science experts. As noted earlier, many different types of forensic science were admitted based simply on the say-so of the few who practiced the technique at issue.
In 1993, in Daubert v. Merrell Dow Pharmaceuticals, Inc., the US Supreme Court held that the Federal Rules of Evidence (promulgated in 1975) did not incorporate Frye’s general acceptance test. Instead, judges must determine whether the methods used by proffered experts were reliable and valid, although the Court held that “general acceptance” could be one element of that inquiry. According to the Court, the “overarching subject” of “[t]he inquiry envisioned by Rule 702 … is the scientific validity and thus the evidentiary relevance and reliability—of the principles that underlie a proposed submission” ( 1 , pp. 594–595). Daubert’s focus on scientific validity is consistent with efforts to bring a more scientific approach to the forensic sciences. However, judges may not have the scientific training necessary to know whether “the principles that underlie a proposed submission” have been adequately tested and validated.
Whether or not this point can serve as explanation or excuse, the fact is that when called on to evaluate the proffers of forensic science, courts have not done well. As NAS observed, “Forensic science professionals have yet to establish either the validity of their approach or the accuracy of their conclusions, and the courts have been utterly ineffective in addressing this problem” ( 24 , p. 53). Rather than engage with the underlying science, most trial judges simply opted to follow past practice and allow proffered forensic science evidence to reach the jury. In the wake of this NAS report, numerous courts made modest gestures toward a more engaged assessment of forensic pattern evidence, limiting it around the edges (i.e., prohibiting claims of zero error rate or 100% certainty) or noting the lack of empirical support with surprise. But nearly all forensic science pattern evidence continued to be admitted.
PCAST sought to help the courts fix this problem by providing specific guidance to the courts for assessing the validity of feature-matching forensic science evidence (e.g., DNA, hair, fingerprints, firearms, toolmarks, and tire tracks). Not surprisingly, the guidance focused on rigorous empirical testing and the estimation of accuracy and error rates for the different methods.
Earlier we noted that several fields of forensic science—including bullet lead comparison, microscopic hair identification, and arson indicators—have been transformed or abolished following serious scientific reviews. Notably, the judicial system did not initiate, and barely even contributed to, these transformations. The courts have not led. Indeed, the courts have often not even followed, as some of these unvalidated techniques continue to be admitted.
Whether the courts will ultimately choose to a) follow the mandates of Daubert and the guidance provided by PCAST, or b) remain “utterly ineffective” at holding the forensic sciences scientifically accountable for their claims, is not yet clear. Although it has been business as usual in most post-PCAST cases, there are some signs of more full-throated, robust engagement, and even occasional exclusions [see, e.g., People of Illinois v. Winfield ( 92 ), excluding firearms evidence].
Thanks to Daubert, Federal Rule of Evidence 702, the 2009 NAS report and the 2016 PCAST report, judges indisputably have both the authority and the tools to insist that forensic evidence has an adequate scientific foundation. But they have only rarely availed themselves of this power. As the primary consumers of forensic science evidence, the courts can hold the forensic science community’s feet to the fire by requiring that expert testimony is backed by “sufficient facts or data” ( 93 ), accompanied by relevant error rates from methodologically sound studies, and presented without exaggeration ( 94 ).
The scientific reinvention of forensic science is not an all-or-nothing concept. Rather, it is a process of gradual and continuing change. The most important element of change currently under way in forensic science is a recognition that a framework of trusting the examiner must give way to one that trusts the empirical science. Although the training, knowledge, and experience of the examiner are important, they will not be enough to sustain the forensic enterprise going forward. Forensic science is becoming an actual science: “The debate and rigor of academic science is now influencing much of forensic science and that is the most significant change from the past” ( 95 ).
Empirical testing has proceeded rapidly in some disciplines, and efforts are under way to measure sample difficulty and to identify statistical models that capture the probative value of forensic evidence. Extreme and unsupportable claims (e.g., 0% error rate and 100% certainty), once widespread, have been rejected by numerous scientific authorities and forensic science associations. Techniques that relied on false assumptions have exited the stage, and others whose validity appears doubtful seem to be headed toward the graveyard of unsupported science as well.
Perhaps the most important institutional step forward thus far has been the creation of national scientific bodies whose purpose is to increase the scientific rigor of the various forensic fields. The Organization of Scientific Area Committees (OSACs)—a complex of interconnected, multispecialty entities operating mainly under the auspices of the National Institute of Standards and Technology—were established in 2014 to do the heavy lifting. These committees, which are composed of more than 800 crime lab examiners, administrators, conventional scientists, and legal experts, create standards which, when fully developed, approved, and published, are available for adoption by individual crime labs. “OSAC-approved standards must have strong scientific foundations so that the methods that practitioners employ are scientifically valid, and the resulting claims are trustworthy” ( 96 ). As of March 2023, there are 97 published standards and 37 proposed for an array of different forensic disciplines. These developments count as successes. Institutions have been built and staffed, and a process is underway.
On the other hand, it is not obvious that the emerging OSAC standards go far enough in terms of ensuring that examiners’ methods are valid and that their claims are trustworthy. Rather than squarely addressing major challenges such as the individualization problem discussed above, many of the standards merely nibble around the less controversial edges. Even if the OSACs do decide to take on the most important forensic challenges, it is crucial that the standards they create be supported by an empirical foundation. But many accepted forensic techniques remain underresearched. The scientific evolution that we have described would benefit greatly from an overarching research agenda that coordinates both the needs of standards development and the research that gets funded. For example, a gap analysis would reveal the distance between what is believed (assumed) and what has been empirically validated. Research should be aimed at filling the discovered gaps. Unfortunately, as of 2015, a report on the funding of forensic science research found that “such a research agenda has not yet been developed” ( 97 , p. 14). To be sure, such assessments and gap analyses have begun, but they are incomplete and have yet to receive much attention from practitioners or courts.
Even if the OSACs can address these issues, a practical problem remains: The OSACs lack enforcement power. Individual crime labs are free to adopt OSAC standards as they please. Even those labs that do endorse OSAC guidelines may decide to do so only nominally and then fail to incorporate them into day-to-day work.
The solution to this practical problem lies with the courts: If judges refused to admit evidence produced by laboratories that could not demonstrate how, exactly, they have incorporated OSAC guidelines and other scientific recommendations into their work, compliance would be guaranteed. More generally, if judges took seriously their duties under the Daubert line of cases (and state equivalents) and refused to admit insufficiently validated claims, the forensic sciences would adopt scientific practices more quickly and completely. Unfortunately, few courts have been so bold. The scientific advances that have been made are largely due to initiatives by the forensic fields themselves or by the wider scientific community. However, given that most forensic disciplines have ignored calls from the broader scientific community to replace individualizations with a more appropriate weight-of-evidence measure, a push from outside the fields themselves is needed.
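For context, the weight-of-evidence measure most often advocated in the forensic statistics literature is the likelihood ratio, which compares the probability of the evidence under the two competing source hypotheses rather than asserting a categorical identification:

```latex
\mathrm{LR} = \frac{\Pr(E \mid H_p)}{\Pr(E \mid H_d)},
\qquad
\text{log-LR} = \log_{10} \frac{\Pr(E \mid H_p)}{\Pr(E \mid H_d)}
```

An LR of 1,000 says the evidence is 1,000 times more probable if the samples share a source than if they do not; unlike an individualization, it stops short of claiming the source has been uniquely identified.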
In short, although a scientific reinvention of the forensic sciences is underway, its ultimate success is not assured. Its success depends on consistent attention to empirical validation of methods and conclusions, and that in turn requires institutional structures that can help make that focus meaningful in courts of law. One such institutional structure was proposed by the NAS report, which called for the creation of a new federal agency focused on forensic science. Among other things, this agency, which would operate independently of law enforcement or any other potentially interested party, would be responsible for establishing and enforcing scientific practices in the forensic sciences. Ultimately, however, such an independent agency was not created.
Courts of law provide an alternative institutional structure for advancing the forensic sciences. Although the courts may not seem like an obvious force for advancing a scientific agenda, the expert evidence gate-keeping duties imposed on trial judges by Daubert and the relevant Federal Rule of Evidence, if faithfully followed, will promote a scientific focus and culture within the forensic sciences. To be sure, the courts’ record on this front does not warrant much optimism. But the scientific paradigm is young and there are signs of hope and progress. The future of forensic science is ours to choose.
J.J.K., J.L.M., and M.J.S. wrote the paper.
The authors declare no competing interest.
This article is a PNAS Direct Submission.
* Similarly, examiners are often permitted to conclude that samples are “unsuitable” or “insufficient” for reaching any conclusion.
† For shoeprint evidence, “An identification means the shoe positively made the questioned impression and no other shoe in the world could have made that particular impression” ( 98 , p. 347).
‡ “The concept of individualization is clearly central to the consideration of physical evidence. Our belief that uniqueness is both attainable and existent is central to our work as forensic scientists” ( 99 , p. 123).
§ “Latent fingerprint identifications are subject to a standard of 100% certainty” ( 100 , p. 8).
¶ Responding to a question by 60 Minutes interviewer Lesley Stahl, Stephen Meagher, the former head of the FBI’s latent print unit, said that the chance that a reported fingerprint match is in error is “zero” ( 101 ).
# Log-LRs provide equally rigorous measures of probative value.