Greater Good Science Center • Magazine • In Action • In Education


Right and wrong in the real world: From our friendships to our jobs to our conduct in public, seemingly small decisions often pose tough ethical dilemmas, says Joshua Halberstam. He offers guidance for navigating the ethical dimension of everyday life.

Some years ago, a student asked to see me during office hours to talk about a personal problem that, she assured me, related to our recent ethics class. It seemed she was having difficulties with a new friend from the Dominican Republic. She explained that in normal circumstances she would have ended the relationship, but she was reluctant to do so now because of affirmative action.

“I’m convinced by the arguments and decided it would be wrong to demand the same standards from this girl as I do from my other friends,” she said. I, of course, immediately commented on how this was condescending and then pointed out that governmental and institutional policies don’t readily apply to our personal relationships.

“But why not?” she pressed. “If it’s a good moral argument, shouldn’t it apply to my own life?”


My student’s sensitivities were surely misplaced, but explaining why isn’t quite so easy. In fact, they reflect the complex relationship between communal and personal ethics, between moral theory and our everyday ethical decisions. These aren’t idle ruminations: How we understand these connections is critical to understanding the moral quality of our lives. This is the realm of everyday ethics.

Now would certainly seem to be the time to care more about everyday ethics. We regularly complain about the moral decay of our age, and we have good reason to do so. Ethical misconduct is a mainstay of the news: CEOs raiding corporate coffers, widespread auditing fraud, unbridled cheating in school, scientists doctoring data, reporters lying about sources, politicians still acting like politicians—the incidence and variety of transgressions seem interminable. No wonder that in a recent Gallup Poll, nearly 80 percent of Americans rated the overall state of morality in the United States as fair or poor. Even more troubling is the widely held opinion that people are becoming more selfish and dishonest. According to that same Gallup Poll, 77 percent of Americans believe that the state of moral values is getting worse. This perception of decaying values—accurate or not—has its own adverse consequences: It lowers our expectations for other people’s behavior and leads us to tolerate unethical actions. For example, in a National Business Survey conducted in October of 2005, a majority of workers claimed to have observed ethical misconduct in the workplace, roughly the same number as reported misconduct in the 2003 survey, but the number of employees who bothered reporting those transgressions fell by 10 percentage points.

But should these findings surprise us? Isn’t wrongdoing just part of “the human condition”? Can we really teach our children to be more ethical? Or improve ourselves when we are adults? Moreover, when it comes to our personal interactions, who decides—and how—what is or isn’t moral?

These are difficult but not rhetorical questions. To address them, we need to get a better sense of what we mean by “everyday ethics” and where it fits into the larger picture of morality.

What is everyday ethics?

• The ATM spits out an extra $100 in your favor. Keep the money and your mouth shut?

• At a restaurant you notice your friend’s wife engaged in some serious flirting with another man. Tell your friend—and possibly ruin his marriage—or mind your own business?

• You can avail yourself of a free wireless connection by accessing the account of your next-door neighbor. Silly not to? 

• Your colleague is forever taking credit for your and other people’s work. Is it okay to exact a little revenge and for once take credit for her labors?

• Your friend is on her way out the door for a significant date and asks whether you like her blouse. Do you tell her the truth: It’s hideous?

• Is it all right to laugh at a sexist joke?

We face choices like these daily: morally laden quandaries that demand direct and immediate decisions. Unlike moral issues that dominate our dinner conversations—legalizing abortion, preemptive war, raising the minimum wage—about which we do little more than pontificate, the problems of everyday ethics call for our own resolutions. But how do we arrive at our judgments? For example, in answering the questions above, do you have a quick, intuitive response about what is proper, or do you consider broader moral principles and then derive a solution? 

The history of philosophy is filled with competing theories that offer such moral principles—for example, there’s theological ethics, which looks to religious sources for moral guidance; consequentialist theories, which judge the moral value of an act by its results; rational, rule-based theories, such as those proposed by Immanuel Kant, which argue that proper intentions are essential to moral value; and virtue-based theories, which focus more on character than on behavior.

But when your teenager asks if you ever did drugs, it’s unlikely that you’ll undertake a complex utilitarian calculus or work out the details of how a categorical imperative would apply in this case. In fact, in dealing with so many of our everyday moral challenges, it is difficult to see just how one would implement the principles of a moral theory. No wonder that many moral philosophers insist they have no more to say about these specific situations than a theoretical physicist does when confronting a faulty spark plug. Nonetheless, your response to your curious teenager, as with all cases in the domain of everyday ethics, presents a practical, immediate moral challenge that you cannot avoid.

Embracing the moral importance of these ordinary dilemmas, some ethicists have posited a bottom-up perspective of ethical decision making that places these “mundane,” ordinary human interactions at the very heart of moral philosophy.

According to this view, because traditional moral theories can’t reach down to our routine lives, we should question their practical value. Take, for example, the “demand for impartiality,” the notion, common to many moral theories, that we treat everyone the same. But of course we don’t—nor should we. Suppose you spend three hours at the bedside of your sick spouse and then declare, “Hey, you know I would do the same for anyone. It’s my moral duty.” Don’t expect your spouse to be delighted with your righteousness. Caring for a loved one because of a moral principle is, as the philosopher Bernard Williams said, “one thought too many.”

Other philosophers are uneasy with the moral ideal posited in mainstream theories; not only is the theoretical idea of moral perfection unattainable, it’s not even desirable. After all, who wants to hang out and grab a beer with a moral saint? Indeed, who wants to be the kind of person who never hangs out and has a beer because of more pressing moral tasks? Still other critics note that typical academic moral arguments ignore the complexity and texture of our ordinary lives. As philosopher Martha Nussbaum and others suggest, an observant novel will often be more instructive about our moral lives than an academic treatise. 

Well, if we don’t appeal to moral theories when deciding problems of everyday ethics, how then do we make these decisions? 

At the outset, we need to recognize—and take seriously—the difficulties inherent in these judgments. The interesting ethical questions aren’t those that offer a choice between good and evil—that’s easy—but pit good versus good, or bad versus even worse. Take, for example, the case of our friend walking out the door wearing that unappealing blouse on her way to a crucial date. She asks for your opinion on her attire. Honesty demands that you tell her the truth, but compassion urges you to give her the thumbs up. It’s worth noticing that other values, say friendship, surely should count here… but how? Perhaps one ought to be more truthful to a friend than a stranger, but then, too, one ought to be especially encouraging to a friend. Appealing to clear-cut moral principles such as “Do unto others as you would have them do unto you” isn’t decisive here, either: Do you want to be told the truth in this case?

Presumably, different people might offer different answers.

We can, nonetheless, draw a few lessons from even this hasty consideration of everyday moral dilemmas.

One: We need to be clear about which values are at play. While we often don’t have the luxury of a long, careful weighing of competing principles, our actions will be moral only if they are the firm result of our intention to act morally and not, say, to fulfill a selfish interest.

Two: Intellectual honesty is always a challenge. With regard to lying, for example, we need to acknowledge how easy it is to justify dishonesty by claiming compassion or some other good when, in fact, we merely want to avoid unpleasant confrontations. Our capacity for rationalization is remarkable: “Everyone does it,” “I’ll do it just this one time,” “It’s for her own good,” “It’s none of my business,” and on and on.

Three: We need to give slack to people with whom we disagree. Inasmuch as the problems posed by everyday ethics are genuine dilemmas but do not allow the luxury of lengthy, careful analysis, decent people for decent reasons can reach opposing conclusions. 

But how then do we make our quick judgments about what to do in these everyday moral situations? What’s going on in our minds?

The science of everyday ethics

Over the past few years, evolutionary biologists, neuroscientists, and cognitive psychologists have been exploring these very questions. And they are making some startling discoveries. 

For example, using functional MRI (fMRI) scans of the brain, neuropsychologist Joshua Greene has found that different types of moral choices stimulate different areas of the brain. His findings present an astonishing challenge to the way we usually approach moral decisions.

Consider, for example, a popular thought experiment posed by moral philosophers: the “trolley-car” cases. Suppose you are the driver of a runaway trolley car that is approaching five men working on the track. As you speed down toward this tragedy, you realize you can divert the train to a side track and thereby kill only one person who is working on that other track. What do you do?

Now consider an alternative case: Suppose you aren’t the train conductor but are standing on a cliff watching the train careen toward the endangered five people. Next to you is a fat person whose sheer bulk could stop the oncoming trolley. Should you give him a shove so that he’ll fall onto the track and be killed by the train—but in the process, you’d save five other lives?

Most people say they would save the five lives in case one, but not in case two—and offer complicated reasons for their choices. What Greene found in his research was that different parts of our brains are at work when we consider these two different scenarios. In the first case, the area associated with the emotions remains quiet—we are just calculating—but in the second case, which asks us to imagine actually killing someone up close and personally, albeit to save five other people, the emotional area of the brain lights up. In Greene’s view, this suggests that we bring to our moral judgments predilections that are hard-wired in our brains, and emotions might play a more significant role in our decision making than we realize, particularly in the case of everyday ethical dilemmas that affect us personally.

Brain research of this kind underscores the claims of evolutionary psychologists who maintain that many of our moral attitudes are grounded in our genetic history. They suggest, as does Greene, that because we evolved in small groups, unaware of people living halfway around the world, we have stronger instinctive moral reactions to problems that affect us directly than to those that are more abstract. In this view, for example, evolutionary strategy dictates our preferences for kin over strangers, and makes us more likely to display altruism toward people we can see first-hand.

Cognitive psychologists, for their part, are examining how moral decisions are formed—demonstrating, for example, how selective images, such as pictures of starving children, can alter and enlarge our sphere of empathy, and how social environments can either stultify or nurture compassion. 

Many warn against seeing a “science of ethics” as the ultimate arena for the study of moral decision making. They remind us that our pre-set inclinations—how we are—do not prescribe or justify how we ought to be.

But this ongoing research is of vital importance to our understanding of ethics, and in particular, everyday ethics. In the first place, we will better acknowledge the constraints we battle in acting “against our natures.” For example, if evolutionary psychologists are right and our ethical decisions are informed by an evolutionary preference for those in our immediate group, we can better understand why it takes such an effort to get people to spend their money on the poor of Africa rather than on another pair of ice skates for their kids, or to respect members of other cultures as they do their own. Moreover, this research can be extremely helpful as we determine how best to teach ethics to our children. Indeed, studies of the brain and our genome might shed light on how it is that some individuals turn out decent and caring and others cold and obnoxious.

The challenges of everyday ethics

All this data cannot, however, answer our fundamental challenge: How should we act and what kind of people should we strive to be? As we’ve seen, we cannot rely on rarefied moral theories to help us deal with the pressing demands of everyday ethics. Nor can we rely on our biological dispositions to point us toward the best ethical judgments. Rather, we have to confront the integrity of our character, our honed intuitions, our developed sense of fairness and honesty. And to see how these traits are exhibited, we need to see how they work in action.

The articles in the rest of this issue do just that. This is how ethics gets played in the classroom, at work, at the supermarket, over the dinner table. While the usual moral evaluations of societies tend to focus on such broad issues as crime, economic equity, and foreign policy, just as important to consider is the moral health of our everyday interactions. For after all, this is how our lives are lived: day by day, one “small” moral judgment after another. 

About the Author

Joshua Halberstam

Joshua Halberstam, Ph.D., is the author of Everyday Ethics: Inspired Solutions to Moral Dilemmas (Viking) and is currently an adjunct professor at Teachers College, Columbia University. In addition to his professional writings in philosophy, he has written several books for the general reader on the subjects of ethics and culture.


What Is Ethical Judgment?

Ethics is concerned with the kind of people we are, but also with the things we do or fail to do. This could be called the “ethics of doing.” Some people, however, don’t take the time to consider the ethical dimensions of a situation before they act. This may happen because they have not gathered all of the necessary information, while others rationalize excuses, employ defense mechanisms, or misjudge the intensity of the situation.

A well-known joke many of us heard growing up asks, “How do you clean Dracula’s teeth?” The answer is simple: “Very carefully.” When we ask, “How do we make ethical decisions in our modern world?” the response to that childhood joke seems apt here as well. Unfortunately, we live in a time when many important situations are not thought through carefully and, too often, are responded to impulsively.

We need to help students realize that in order to know what to do in a given situation, they should explore issues carefully: gathering all the relevant facts, considering the actions involved, and evaluating the potential consequences. Once they have clarified these points, their personal values can guide them to a final decision. This process is the basis for what we can call “ethical judgment.”

Judgment on an ethical issue will usually depend on two things: values and priorities.

Values are the things that we hold important for our sense of who we are. They are expressed in statements such as “human life and dignity should be protected,” or “cheating is wrong.” They develop over time and are influenced by family, religion, education, peers and a whole range of experiences, both good and bad, that have helped shape us.

In some situations, even people who agree on the same values will disagree on the decision, because a particular situation brings different values into conflict. This requires people to prioritize their values. Such a case is sometimes referred to as an “ethical dilemma”: a situation in which there does not seem to be any solution that avoids compromising one’s values, or in which any decision may have negative consequences.

This was famously demonstrated by social psychologist Stanley Milgram, whose research experiment exposed how external social forces, even the most subtle, have surprisingly powerful effects on our behavior and our ethical judgment.

Milgram created an electric “shock generator” with 30 switches, clearly marked in 15-volt increments ranging from 15 to 450 volts. The “shock generator” was in fact phony and only produced sound when the switches were pressed. Forty male subjects were recruited via mail and a newspaper ad; they thought they were going to participate in an experiment about memory and learning. Each subject was clearly informed that the payment was for showing up, and that they could keep it regardless of what happened after they arrived.

Next, the subject met an “experimenter,” the person leading the experiment, and another person introduced as a fellow subject. This second “subject” was in fact a confederate, only acting the part. The two drew slips of paper to determine who would be the “teacher” and who the “learner.” The drawing was rigged so that the real subject always got the role of teacher.

The teacher watched as the learner was strapped to a chair and electrodes were attached. The teacher was then seated in another room in front of the shock generator, unable to see the learner, and instructed to “teach” word-pairs to him. Whenever the learner made a mistake, the teacher was to punish him with a shock, 15 volts higher for each successive mistake. The learner, keep in mind, never actually received the shocks; pre-taped audio played whenever a shock-switch was pressed.

If the subject turned to the experimenter, who was seated in the same room, the experimenter would answer with predefined prods such as “Please continue,” “Please go on,” “The experiment requires that you go on,” “It is absolutely essential that you continue,” and “You have no other choice, you must go on.” If the subject asked who was responsible should anything happen to the learner, the experimenter answered, “I am responsible.” This reassured many subjects, who then continued administering shocks.

Although most subjects were uncomfortable doing it, all 40 obeyed up to 300 volts, and 25 of the 40 continued to give shocks until the maximum level of 450 volts was reached.

So what happened to each participant’s ethical judgment?

While we would like to believe that when confronted with ethical dilemmas we will all act in the best possible way, Milgram’s experiment revealed that in a concrete situation with powerful social constraints, ethical systems can be compromised. The experiment also shows why people need to cultivate and strengthen their ethical judgment.

Introduction to Ethics Copyright © by Lumen Learning is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.



Thinking Ethically

  • Markkula Center for Applied Ethics

Moral issues greet us each morning in the newspaper, confront us in the memos on our desks, nag us from our children's soccer fields, and bid us good night on the evening news. We are bombarded daily with questions about the justice of our foreign policy, the morality of medical technologies that can prolong our lives, the rights of the homeless, the fairness of our children's teachers to the diverse students in their classrooms.

Dealing with these moral issues is often perplexing. How, exactly, should we think through an ethical issue? What questions should we ask? What factors should we consider?

The first step in analyzing moral issues is obvious but not always easy: Get the facts. Some moral issues create controversies simply because we do not bother to check the facts. This first step, although obvious, is also among the most important and the most frequently overlooked.

But having the facts is not enough. Facts by themselves only tell us what is; they do not tell us what ought to be. In addition to getting the facts, resolving an ethical issue also requires an appeal to values. Philosophers have developed five different approaches to values to deal with moral issues.

The Utilitarian Approach Utilitarianism was conceived in the 19th century by Jeremy Bentham and John Stuart Mill to help legislators determine which laws were morally best. Both Bentham and Mill suggested that ethical actions are those that provide the greatest balance of good over evil.

To analyze an issue using the utilitarian approach, we first identify the various courses of action available to us. Second, we ask who will be affected by each action and what benefits or harms will be derived from each. And third, we choose the action that will produce the greatest benefits and the least harm. The ethical action is the one that provides the greatest good for the greatest number.
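As a purely illustrative sketch, the three steps above can be caricatured as a toy calculation in Python. The action names and the numeric benefit/harm scores below are invented for illustration, and reducing moral weighing to a single sum is exactly the simplification that critics of utilitarianism object to:

```python
# Toy utilitarian calculus: score each candidate action by summing
# hypothetical benefit (+) and harm (-) estimates for every affected
# party, then pick the action with the greatest net good.
# All names and numbers here are invented for illustration.

def net_good(effects):
    """Sum benefits (positive) and harms (negative) across affected parties."""
    return sum(effects.values())

def best_action(options):
    """Return the action whose effects yield the greatest net benefit."""
    return max(options, key=lambda name: net_good(options[name]))

# Hypothetical dilemma: report a colleague's misconduct or stay silent.
options = {
    "report":      {"colleague": -3, "coworkers": +2, "company": +4},
    "stay_silent": {"colleague": +1, "coworkers": -2, "company": -3},
}

choice = best_action(options)  # picks the greatest good for the greatest number
```

Even this toy makes one design choice visible: everything hangs on the numbers assigned to each affected party, and assigning those numbers is where the real ethical work lies.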

The Rights Approach The second important approach to ethics has its roots in the philosophy of the 18th-century thinker Immanuel Kant and others like him, who focused on the individual's right to choose for herself or himself. According to these philosophers, what makes human beings different from mere things is that people have dignity based on their ability to choose freely what they will do with their lives, and they have a fundamental moral right to have these choices respected. People are not objects to be manipulated; it is a violation of human dignity to use people in ways they do not freely choose.

Of course, many different, but related, rights exist besides this basic one. These other rights (an incomplete list below) can be thought of as different aspects of the basic right to be treated as we choose.

The right to the truth: We have a right to be told the truth and to be informed about matters that significantly affect our choices.

The right of privacy: We have the right to do, believe, and say whatever we choose in our personal lives so long as we do not violate the rights of others.

The right not to be injured: We have the right not to be harmed or injured unless we freely and knowingly do something to deserve punishment or we freely and knowingly choose to risk such injuries.

The right to what is agreed: We have a right to what has been promised by those with whom we have freely entered into a contract or agreement.

In deciding whether an action is moral or immoral using this second approach, then, we must ask, Does the action respect the moral rights of everyone? Actions are wrong to the extent that they violate the rights of individuals; the more serious the violation, the more wrongful the action.

The Fairness or Justice Approach The fairness or justice approach to ethics has its roots in the teachings of the ancient Greek philosopher Aristotle, who said that "equals should be treated equally and unequals unequally." The basic moral question in this approach is: How fair is an action? Does it treat everyone in the same way, or does it show favoritism and discrimination?

Favoritism gives benefits to some people without a justifiable reason for singling them out; discrimination imposes burdens on people who are no different from those on whom burdens are not imposed. Both favoritism and discrimination are unjust and wrong.

The Common-Good Approach This approach to ethics assumes a society comprising individuals whose own good is inextricably linked to the good of the community. Community members are bound by the pursuit of common values and goals.

The common good is a notion that originated more than 2,000 years ago in the writings of Plato, Aristotle, and Cicero. More recently, contemporary ethicist John Rawls defined the common good as "certain general conditions that are...equally to everyone's advantage."

In this approach, we focus on ensuring that the social policies, social systems, institutions, and environments on which we depend are beneficial to all. Examples of goods common to all include affordable health care, effective public safety, peace among nations, a just legal system, and an unpolluted environment.

Appeals to the common good urge us to view ourselves as members of the same community, reflecting on broad questions concerning the kind of society we want to become and how we are to achieve that society. While respecting and valuing the freedom of individuals to pursue their own goals, the common-good approach challenges us also to recognize and further those goals we share in common.

The Virtue Approach The virtue approach to ethics assumes that there are certain ideals toward which we should strive, which provide for the full development of our humanity. These ideals are discovered through thoughtful reflection on what kind of people we have the potential to become.

Virtues are attitudes or character traits that enable us to be and to act in ways that develop our highest potential. They enable us to pursue the ideals we have adopted. Honesty, courage, compassion, generosity, fidelity, integrity, fairness, self-control, and prudence are all examples of virtues.

Virtues are like habits; that is, once acquired, they become characteristic of a person. Moreover, a person who has developed virtues will be naturally disposed to act in ways consistent with moral principles. The virtuous person is the ethical person.

In dealing with an ethical problem using the virtue approach, we might ask, What kind of person should I be? What will promote the development of character within myself and my community?

Ethical Problem Solving These five approaches suggest that once we have ascertained the facts, we should ask ourselves five questions when trying to resolve a moral issue:

What benefits and what harms will each course of action produce, and which alternative will lead to the best overall consequences?

What moral rights do the affected parties have, and which course of action best respects those rights?

Which course of action treats everyone the same, except where there is a morally justifiable reason not to, and does not show favoritism or discrimination?

Which course of action advances the common good?

Which course of action develops moral virtues?

This method, of course, does not provide an automatic solution to moral problems. It is not meant to. The method is merely meant to help identify most of the important ethical considerations. In the end, we must deliberate on moral issues for ourselves, keeping a careful eye on both the facts and on the ethical considerations involved.

This article updates several previous pieces from Issues in Ethics by Manuel Velasquez - Dirksen Professor of Business Ethics at Santa Clara University and former Center director - and Claire Andre, associate Center director. "Thinking Ethically" is based on a framework developed by the authors in collaboration with Center Director Thomas Shanks, S.J., Presidential Professor of Ethics and the Common Good Michael J. Meyer, and others. The framework is used as the basis for many programs and presentations at the Markkula Center for Applied Ethics.


Moral Reasoning

While moral reasoning can be undertaken on another’s behalf, it is paradigmatically an agent’s first-personal (individual or collective) practical reasoning about what, morally, they ought to do. Philosophical examination of moral reasoning faces both distinctive puzzles – about how we recognize moral considerations and cope with conflicts among them and about how they move us to act – and distinctive opportunities for gleaning insight about what we ought to do from how we reason about what we ought to do.

Part I of this article characterizes moral reasoning more fully, situates it in relation both to first-order accounts of what morality requires of us and to philosophical accounts of the metaphysics of morality, and explains the interest of the topic. Part II then takes up a series of philosophical questions about moral reasoning, so understood and so situated.

1. The Philosophical Importance of Moral Reasoning

1.1 Defining “Moral Reasoning”

This article takes up moral reasoning as a species of practical reasoning – that is, as a type of reasoning directed towards deciding what to do and, when successful, issuing in an intention (see entry on practical reason). Of course, we also reason theoretically about what morality requires of us; but the nature of purely theoretical reasoning about ethics is adequately addressed in the various articles on ethics. It is also true that, on some understandings, moral reasoning directed towards deciding what to do involves forming judgments about what one ought, morally, to do. On these understandings, asking what one ought (morally) to do can be a practical question, a certain way of asking about what to do. (See section 1.5 on the question of whether this is a distinctive practical question.) In order to do justice to the full range of philosophical views about moral reasoning, we will need to have a capacious understanding of what counts as a moral question. For instance, since a prominent position about moral reasoning is that the relevant considerations are not codifiable, we would beg a central question if we here defined “morality” as involving codifiable principles or rules. For present purposes, we may understand issues about what is right or wrong, or virtuous or vicious, as raising moral questions.

Even when moral questions explicitly arise in daily life, just as when we face child-rearing, agricultural, or business questions, we sometimes act impulsively or instinctively rather than pausing to reason, not just about what to do, but about what we ought to do. Jean-Paul Sartre described a case of one of his students who came to him in occupied Paris during World War II, asking advice about whether to stay by his mother, who otherwise would have been left alone, or rather to go join the forces of the Free French, then massing in England (Sartre 1975). In the capacious sense just described, this is probably a moral question; and the young man paused long enough to ask Sartre’s advice. Does that mean that this young man was reasoning about his practical question? Not necessarily. Indeed, Sartre used the case to expound his skepticism about the possibility of addressing such a practical question by reasoning. But what is reasoning?

Reasoning, of the sort discussed here, is active or explicit thinking, in which the reasoner, responsibly guided by her assessments of her reasons (Kolodny 2005) and of any applicable requirements of rationality (Broome 2009, 2013), attempts to reach a well-supported answer to a well-defined question (Hieronymi 2013). For Sartre’s student, at least such a question had arisen. Indeed, the question was relatively definite, implying that the student had already engaged in some reflection about the various alternatives available to him – a process that has well been described as an important phase of practical reasoning, one that aptly precedes the effort to make up one’s mind (Harman 1986, 2).

Characterizing reasoning as responsibly conducted thinking of course does not suffice to analyze the notion. For one thing, it fails to address the fraught question of reasoning’s relation to inference (Harman 1986, Broome 2009). In addition, it does not settle whether formulating an intention about what to do suffices to conclude practical reasoning or whether such intentions cannot be adequately worked out except by starting to act. Perhaps one cannot adequately reason about how to repair a stone wall or how to make an omelet with the available ingredients without actually starting to repair or to cook (cf. Fernandez 2016). Still, it will do for present purposes. It suffices to make clear that the idea of reasoning involves norms of thinking. These norms of aptness or correctness in practical thinking surely do not require us to think along a single prescribed pathway, but rather permit only certain pathways and not others (Broome 2013, 219). Even so, we doubtless often fail to live up to them.

Our thinking, including our moral thinking, is often not explicit. We could say that we also reason tacitly, thinking in much the same way as during explicit reasoning, but without any explicit attempt to reach well-supported answers. In some situations, even moral ones, we might be ill-advised to attempt to answer our practical questions by explicit reasoning. In others, it might even be a mistake to reason tacitly – because, say, we face a pressing emergency. “Sometimes we should not deliberate about what to do, and just drive” (Arpaly and Schroeder 2014, 50). Yet even if we are not called upon to think through our options in all situations, and even if sometimes it would be positively better if we did not, still, if we are called upon to do so, then we should conduct our thinking responsibly: we should reason.

Recent work in empirical ethics has indicated that even when we are called upon to reason morally, we often do so badly. When asked to give reasons for our moral intuitions, we are often “dumbfounded,” finding nothing to say in their defense (Haidt 2001). Our thinking about hypothetical moral scenarios has been shown to be highly sensitive to arbitrary variations, such as in the order of presentation. Even professional philosophers have been found to be prone to such lapses of clear thinking (e.g., Schwitzgebel & Cushman 2012). Some of our dumbfounding and confusion has been laid at the feet of our having both a fast, more emotional way of processing moral stimuli and a slow, more cognitive way (e.g., Greene 2014). An alternative explanation of moral dumbfounding looks to social norms of moral reasoning (Sneddon 2007). And a more optimistic reaction to our confusion sees our established patterns of “moral consistency reasoning” as being well-suited to cope with the clashing input generated by our fast and slow systems (Campbell & Kumar 2012) or as constituting “a flexible learning system that generates and updates a multidimensional evaluative landscape to guide decision and action” (Railton, 2014, 813).

Eventually, such empirical work on our moral reasoning may yield revisions in our norms of moral reasoning. This has not yet happened. This article is principally concerned with philosophical issues posed by our current norms of moral reasoning. For example, given those norms and assuming that they are more or less followed, how do moral considerations enter into moral reasoning, get sorted out by it when they clash, and lead to action? And what do those norms indicate about what we ought to do?

The topic of moral reasoning lies in between two other commonly addressed topics in moral philosophy. On the one side, there is the first-order question of what moral truths there are, if any. For instance, are there any true general principles of morality, and if so, what are they? At this level utilitarianism competes with Kantianism, for instance, and both compete with anti-theorists of various stripes, who recognize only particular truths about morality (Clarke & Simpson 1989). On the other side, a quite different sort of question arises from seeking to give a metaphysical grounding for moral truths or for the claim that there are none. Supposing there are some moral truths, what makes them true? What account can be given of the truth-conditions of moral statements? Here arise familiar questions of moral skepticism and moral relativism; here, the idea of “a reason” is wielded by many hoping to defend a non-skeptical moral metaphysics (e.g., Smith 2013). The topic of moral reasoning lies in between these two other familiar topics in the following simple sense: moral reasoners operate with what they take to be morally true but, instead of asking what makes their moral beliefs true, they proceed responsibly to attempt to figure out what to do in light of those considerations. The philosophical study of moral reasoning concerns itself with the nature of these attempts.

These three topics clearly interrelate. Conceivably, the relations between them would be so tight as to rule out any independent interest in the topic of moral reasoning. For instance, if all that could usefully be said about moral reasoning were that it is a matter of attending to the moral facts, then all interest would devolve upon the question of what those facts are – with some residual focus on the idea of moral attention (McNaughton 1988). Alternatively, it might be thought that moral reasoning is simply a matter of applying the correct moral theory via ordinary modes of deductive and empirical reasoning. Again, if that were true, one’s sufficient goal would be to find that theory and get the non-moral facts right. Neither of these reductive extremes seems plausible, however. Take the potential reduction to getting the facts right, first.

Contemporary advocates of the importance of correctly perceiving the morally relevant facts tend to focus on facts that we can perceive using our ordinary sense faculties and our ordinary capacities of recognition, such as that this person has an infection or that this person needs my medical help. On such a footing, it is possible to launch powerful arguments against the claim that moral principles undergird every moral truth (Dancy 1993) and for the claim that we can sometimes perfectly well decide what to do by acting on the reasons we perceive instinctively – or as we have been trained – without engaging in any moral reasoning. Yet this is not a sound footing for arguing that moral reasoning, beyond simply attending to the moral facts, is always unnecessary. On the contrary, we often find ourselves facing novel perplexities and moral conflicts in which our moral perception is an inadequate guide. In addressing the moral questions surrounding whether society ought to enforce surrogate-motherhood contracts, for instance, the scientific and technological novelties involved make our moral perceptions unreliable and shaky guides. When a medical researcher who has noted an individual’s illness also notes the fact that diverting resources to caring, clinically, for this individual would inhibit the progress of my research, thus harming the long-term health chances of future sufferers of this illness, he or she comes face to face with conflicting moral considerations. At this juncture, it is far less plausible or satisfying simply to say that, employing one’s ordinary sensory and recognitional capacities, one sees what is to be done, both things considered. To posit a special faculty of moral intuition that generates such overall judgments in the face of conflicting considerations is to wheel in a deus ex machina. It cuts inquiry short in a way that serves the purposes of fiction better than it serves the purposes of understanding.
It is plausible instead to suppose that moral reasoning comes in at this point (Campbell & Kumar 2012).

For present purposes, it is worth noting, David Hume and the moral sense theorists do not count as short-circuiting our understanding of moral reasoning in this way. It is true that Hume presents himself, especially in the Treatise of Human Nature, as a disbeliever in any specifically practical or moral reasoning. In doing so, however, he employs an exceedingly narrow definition of “reasoning” (Hume 2000, Book I, Part iii, sect. ii). For present purposes, by contrast, we are using a broader working gloss of “reasoning,” one not controlled by an ambition to parse out the relative contributions of (the faculty of) reason and of the passions. And about moral reasoning in this broader sense, as responsible thinking about what one ought to do, Hume has many interesting things to say, starting with the thought that moral reasoning must involve a double correction of perspective (see section 2.4) adequately to account for the claims of other people and of the farther future, a double correction that is accomplished with the aid of the so-called “calm passions.”

If we turn from the possibility that perceiving the facts aright will displace moral reasoning to the possibility that applying the correct moral theory will displace – or exhaust – moral reasoning, there are again reasons to be skeptical. One reason is that moral theories do not arise in a vacuum; instead, they develop against a broad backdrop of moral convictions. Insofar as the first potentially reductive strand, emphasizing the importance of perceiving moral facts, has force – and it does have some – it also tends to show that moral theories need to gain support by systematizing or accounting for a wide range of moral facts (Sidgwick 1981). As in most other arenas in which theoretical explanation is called for, the degree of explanatory success will remain partial and open to improvement via revisions in the theory (see section 2.6 ). Unlike the natural sciences, however, moral theory is an endeavor that, as John Rawls once put it, is “Socratic” in that it is a subject pertaining to actions “shaped by self-examination” (Rawls 1971, 48f.). If this observation is correct, it suggests that the moral questions we set out to answer arise from our reflections about what matters. By the same token – and this is the present point – a moral theory is subject to being overturned because it generates concrete implications that do not sit well with us on due reflection. This being so, and granting the great complexity of the moral terrain, it seems highly unlikely that we will ever generate a moral theory on the basis of which we can serenely and confidently proceed in a deductive way to generate answers to what we ought to do in all concrete cases. This conclusion is reinforced by a second consideration, namely that insofar as a moral theory is faithful to the complexity of the moral phenomena, it will contain within it many possibilities for conflicts among its own elements. 
Even if it does deploy some priority rules, these are unlikely to be able to cover all contingencies. Hence, some moral reasoning that goes beyond the deductive application of the correct theory is bound to be needed.

In short, a sound understanding of moral reasoning will not take the form of reducing it to one of the other two levels of moral philosophy identified above. Neither the demand to attend to the moral facts nor the directive to apply the correct moral theory exhausts or sufficiently describes moral reasoning.

In addition to posing philosophical problems in its own right, moral reasoning is of interest on account of its implications for moral facts and moral theories. Accordingly, attending to moral reasoning will often be useful to those whose real interest is in determining the right answer to some concrete moral problem or in arguing for or against some moral theory. The characteristic ways we attempt to work through a given sort of moral quandary can be just as revealing about our considered approaches to these matters as are any bottom-line judgments we may characteristically come to. Further, we may have firm, reflective convictions about how a given class of problems is best tackled, deliberatively, even when we remain in doubt about what should be done. In such cases, attending to the modes of moral reasoning that we characteristically accept can usefully expand the set of moral information from which we start, suggesting ways to structure the competing considerations.

Facts about the nature of moral inference and moral reasoning may have important direct implications for moral theory. For instance, it might be taken to be a condition of adequacy of any moral theory that it play a practically useful role in our efforts at self-understanding and deliberation. It should be deliberation-guiding (Richardson 2018, §1.2). If this condition is accepted, then any moral theory that would require agents to engage in abstruse or difficult reasoning may be inadequate for that reason, as would be any theory that assumes that ordinary individuals are generally unable to reason in the ways that the theory calls for. J.S. Mill (1979) conceded that we are generally unable to do the calculations called for by utilitarianism, as he understood it, and argued that we should be consoled by the fact that, over the course of history, experience has generated secondary principles that guide us well enough. Rather more dramatically, R. M. Hare defended utilitarianism as well capturing the reasoning of ideally informed and rational “archangels” (1981). Taking seriously a deliberation-guidance desideratum for moral theory would favor, instead, theories that more directly inform efforts at moral reasoning by us “proletarians,” to use Hare’s contrasting term.

Accordingly, the close relations between moral reasoning, the moral facts, and moral theory do not eliminate moral reasoning as a topic of interest. To the contrary, because moral reasoning has important implications for moral facts and moral theories, these close relations lend additional interest to the topic of moral reasoning.

The final threshold question is whether moral reasoning is truly distinct from practical reasoning more generally understood. (The question of whether moral reasoning, even if practical, is structurally distinct from theoretical reasoning that simply proceeds from a proper recognition of the moral facts has already been implicitly addressed and answered, for the purposes of the present discussion, in the affirmative.) In addressing this final question, it is difficult to overlook the way different moral theories project quite different models of moral reasoning – again a link that might be pursued by the moral philosopher seeking leverage in either direction. For instance, Aristotle’s views might be as follows: a quite general account can be given of practical reasoning, which includes selecting means to ends and determining the constituents of a desired activity. The reasoning of a vicious person differs not at all in its structure from that of a virtuous person, but only in its content, for the virtuous person pursues true goods, whereas the vicious person simply gets side-tracked by apparent ones. To be sure, the virtuous person may be able to achieve a greater integration of his or her ends via practical reasoning (because of the way the various virtues cohere), but this is a difference in the result of practical reasoning and not in its structure. At an opposite extreme, Kant’s categorical imperative has been taken to generate an approach to practical reasoning (via a “typic of practical judgment”) that is distinctive from other practical reasoning both in the range of considerations it addresses and its structure (Nell 1975). Whereas prudential practical reasoning, on Kant’s view, aims to maximize one’s happiness, moral reasoning addresses the potential universalizability of the maxims – roughly, the intentions – on which one acts.
Views intermediate between Aristotle’s and Kant’s in this respect include Hare’s utilitarian view and Aquinas’ natural-law view. On Hare’s view, just as an ideal prudential agent applies maximizing rationality to his or her own preferences, an ideal moral agent’s reasoning applies maximizing rationality to the set of everyone’s preferences that its archangelic capacity for sympathy has enabled it to internalize (Hare 1981). Thomistic, natural-law views share the Aristotelian view about the general unity of practical reasoning in pursuit of the good, rightly or wrongly conceived, but add that practical reason, in addition to demanding that we pursue the fundamental human goods, also, and distinctly, demands that we not attack these goods. In this way, natural-law views incorporate some distinctively moral structuring – such as the distinctions between doing and allowing and the so-called doctrine of double effect’s distinction between intending as a means and accepting as a by-product – within a unified account of practical reasoning (see entry on the natural law tradition in ethics). In light of this diversity of views about the relation between moral reasoning and practical or prudential reasoning, a general account of moral reasoning that does not want to presume the correctness of a definite moral theory will do well to remain agnostic on the question of how moral reasoning relates to non-moral practical reasoning.

2. General Philosophical Questions about Moral Reasoning

To be sure, most great philosophers who have addressed the nature of moral reasoning were far from agnostic about the content of the correct moral theory, and developed their reflections about moral reasoning in support of or in derivation from their moral theory. Nonetheless, contemporary discussions that are somewhat agnostic about the content of moral theory have arisen around important and controversial aspects of moral reasoning. We may group these around the following seven questions:

  • How do relevant considerations get taken up in moral reasoning?
  • Is it essential to moral reasoning for the considerations it takes up to be crystallized into, or ranged under, principles?
  • How do we sort out which moral considerations are most relevant?
  • In what ways do motivational elements shape moral reasoning?
  • What is the best way to model the kinds of conflicts among considerations that arise in moral reasoning?
  • Does moral reasoning include learning from experience and changing one’s mind?
  • How can we reason, morally, with one another?

The remainder of this article takes up these seven questions in turn.

One advantage to defining “reasoning” capaciously, as here, is that it helps one recognize that the processes whereby we come to be concretely aware of moral issues are integral to moral reasoning as it might more narrowly be understood. Recognizing moral issues when they arise requires a highly trained set of capacities and a broad range of emotional attunements. Philosophers of the moral sense school of the 17th and 18th centuries stressed innate emotional propensities, such as sympathy with other humans. Classically influenced virtue theorists, by contrast, give more importance to the training of perception and the emotional growth that must accompany it. Among contemporary philosophers working in empirical ethics there is a similar divide, with some arguing that we process situations using an innate moral grammar (Mikhail 2011) and some emphasizing the role of emotions in that processing (Haidt 2001, Prinz 2007, Greene 2014). For the moral reasoner, a crucial task for our capacities of moral recognition is to mark out certain features of a situation as being morally salient. Sartre’s student, for instance, focused on the competing claims of his mother and the Free French, giving them each an importance to his situation that he did not give to eating French cheese or wearing a uniform. To say that certain features are marked out as morally salient is not to imply that the features thus singled out answer to the terms of some general principle or other: we will come to the question of particularism, below. Rather, it is simply to say that recognitional attention must have a selective focus.

What will be counted as a moral issue or difficulty, in the sense requiring moral agents’ recognition, will again vary by moral theory. Not all moral theories would count filial loyalty and patriotism as moral duties. It is only at great cost, however, that any moral theory could claim to do without a layer of moral thinking involving situation-recognition. A calculative sort of utilitarianism, perhaps, might be imagined according to which there is no need to spot a moral issue or difficulty, as every choice node in life presents the agent with the same, utility-maximizing task. Perhaps Jeremy Bentham held a utilitarianism of this sort. For the more plausible utilitarianisms mentioned above, however, such as Mill’s and Hare’s, agents need not always calculate afresh, but must instead be alive to the possibility that because the ordinary “landmarks and direction posts” lead one astray in the situation at hand, they must make recourse to a more direct and critical mode of moral reasoning. Recognizing whether one is in one of those situations thus becomes the principal recognitional task for the utilitarian agent. (Whether this task can be suitably confined, of course, has long been one of the crucial questions about whether such indirect forms of utilitarianism, attractive on other grounds, can prevent themselves from collapsing into a more Benthamite, direct form: cf. Brandt 1979.)

Note that, as we have been describing moral uptake, we have not implied that what is perceived is ever a moral fact. Rather, it might be that what is perceived is some ordinary, descriptive feature of a situation that is, for whatever reason, morally relevant. An account of moral uptake will interestingly impinge upon the metaphysics of moral facts, however, if it holds that moral facts can be perceived. Importantly intermediate, in this respect, is the set of judgments involving so-called “thick” evaluative concepts – for example, that someone is callous, boorish, just, or brave (see the entry on thick ethical concepts). These do not invoke the supposedly “thinner” terms of overall moral assessment, “good,” or “right.” Yet they are not innocent of normative content, either. Plainly, we do recognize callousness when we see clear cases of it. Plainly, too – whatever the metaphysical implications of the last fact – our ability to describe our situations in these thick normative terms is crucial to our ability to reason morally.

It is debated how closely our abilities of moral discernment are tied to our moral motivations. For Aristotle and many of his ancient successors, the two are closely linked, in that someone not brought up into virtuous motivations will not see things correctly. For instance, cowards will overestimate dangers, the rash will underestimate them, and the virtuous will perceive them correctly (Eudemian Ethics 1229b23–27). By the Stoics, too, having the right motivations was regarded as intimately tied to perceiving the world correctly; but whereas Aristotle saw the emotions as allies to enlist in support of sound moral discernment, the Stoics saw them as inimical to clear perception of the truth (cf. Nussbaum 2001).

That one discerns features and qualities of some situation that are relevant to sizing it up morally does not yet imply that one explicitly or even implicitly employs any general claims in describing it. Perhaps all that one perceives are particularly embedded features and qualities, without saliently perceiving them as instantiations of any types. Sartre’s student may be focused on his mother and on the particular plights of several of his fellow Frenchmen under Nazi occupation, rather than on any purported requirements of filial duty or patriotism. Having become aware of some moral issue in such relatively particular terms, he might proceed directly to sorting out the conflict between them. Another possibility, however, and one that we frequently seem to exploit, is to formulate the issue in general terms: “An only child should stick by an otherwise isolated parent,” for instance, or “one should help those in dire need if one can do so without significant personal sacrifice.” Such general statements would be examples of “moral principles,” in a broad sense. (We do not here distinguish between principles and rules. Those who do include Dworkin 1978 and Gert 1998.)

We must be careful, here, to distinguish the issue of whether principles commonly play an implicit or explicit role in moral reasoning, including well-conducted moral reasoning, from the issue of whether principles necessarily figure as part of the basis of moral truth. The latter issue is best understood as a metaphysical question about the nature and basis of moral facts. What is currently known as moral particularism is the view that there are no defensible moral principles and that moral reasons, or well-grounded moral facts, can exist independently of any basis in a general principle. A contrary view holds that moral reasons are necessarily general, whether because the sources of their justification are all general or because a moral claim is ill-formed if it contains particularities. But whether principles play a useful role in moral reasoning is certainly a different question from whether principles play a necessary role in accounting for the ultimate truth-conditions of moral statements. Moral particularism, as just defined, denies this latter role. Some moral particularists seem also to believe that moral particularism implies that moral principles cannot soundly play a useful role in reasoning. This claim is disputable, as it seems a contingent matter whether the relevant particular facts arrange themselves in ways susceptible to general summary and whether our cognitive apparatus can cope with them at all without employing general principles. Although the metaphysical controversy about moral particularism lies largely outside our topic, we will revisit it in section 2.5, in connection with the weighing of conflicting reasons.

With regard to moral reasoning, while there are some self-styled “anti-theorists” who deny that abstract structures of linked generalities are important to moral reasoning (Clarke, et al. 1989), it is more common to find philosophers who recognize both some role for particular judgment and some role for moral principles. Thus, neo-Aristotelians like Nussbaum who emphasize the importance of “finely tuned and richly aware” particular discernment also regard that discernment as being guided by a set of generally describable virtues whose general descriptions will come into play in at least some kinds of cases (Nussbaum 1990). “Situation ethicists” of an earlier generation (e.g. Fletcher 1997) emphasized the importance of taking into account a wide range of circumstantial differentiae, but against the background of some general principles whose application the differentiae help sort out. Feminist ethicists influenced by Carol Gilligan’s path-breaking work on moral development have stressed the moral centrality of the kind of care and discernment that are salient and well-developed by people immersed in particular relationships (Held 1995); but this emphasis is consistent with such general principles as “one ought to be sensitive to the wishes of one’s friends” (see the entry on feminist moral psychology). Again, if we distinguish the question of whether principles are useful in responsibly conducted moral thinking from the question of whether moral reasons ultimately all derive from general principles, and concentrate our attention solely on the former, we will see that some of the opposition to general moral principles melts away.

It should be noted that we have been using a weak notion of generality, here. It is contrasted only with the kind of strict particularity that comes with indexicals and proper names. General statements or claims – ones that contain no such particular references – are not necessarily universal generalizations, making an assertion about all cases of the mentioned type. Thus, “one should normally help those in dire need” is a general principle, in this weak sense. Possibly, such logically loose principles would be obfuscatory in the context of an attempt to reconstruct the ultimate truth-conditions of moral statements. Such logically loose principles would clearly be useless in any attempt to generate a deductively tight “practical syllogism.” In our day-to-day, non-deductive reasoning, however, such logically loose principles appear to be quite useful. (Recall that we are understanding “reasoning” quite broadly, as responsibly conducted thinking: nothing in this understanding of reasoning suggests any uniquely privileged place for deductive inference: cf. Harman 1986. For more on defeasible or “default” principles, see section 2.5.)

In this terminology, establishing that general principles are essential to moral reasoning leaves open the further question whether logically tight, or exceptionless, principles are also essential to moral reasoning. Certainly, much of our actual moral reasoning seems to be driven by attempts to recast or reinterpret principles so that they can be taken to be exceptionless. Adherents and inheritors of the natural-law tradition in ethics (e.g. Donagan 1977) are particularly supple defenders of exceptionless moral principles, as they are able to avail themselves not only of a refined tradition of casuistry but also of a wide array of subtle – some would say overly subtle – distinctions, such as those mentioned above between doing and allowing and between intending as a means and accepting as a byproduct.

A related role for a strong form of generality in moral reasoning comes from the Kantian thought that one’s moral reasoning must counter one’s tendency to make exceptions for oneself. Accordingly, Kant holds, as we have noted, that we must ask whether the maxims of our actions can serve as universal laws. As most contemporary readers understand this demand, it requires that we engage in a kind of hypothetical generalization across agents, and ask about the implications of everybody acting that way in those circumstances. The grounds for developing Kant’s thought in this direction have been well explored (e.g., Nell 1975, Korsgaard 1996, Engstrom 2009). The importance and the difficulties of such a hypothetical generalization test in ethics were discussed in the influential works of Gibbard 1965 and Goldman 1974.

Whether or not moral considerations need the backing of general principles, we must expect situations of action to present us with multiple moral considerations. In addition, of course, these situations will also present us with a lot of information that is not morally relevant. On any realistic account, a central task of moral reasoning is to sort out relevant considerations from irrelevant ones, as well as to determine which are especially relevant and which only slightly so. That a certain woman is Sartre’s student’s mother is arguably a morally relevant fact; what about the fact (supposing it is one) that she has no other children to take care of her? Addressing the task of sorting what is morally relevant from what is not, some philosophers have offered general accounts of morally relevant features. Others have given accounts of how we sort out which of the relevant features are most relevant, a process of thinking that sometimes goes by the name of “casuistry.”

Before we look at ways of sorting out which features are morally relevant or most morally relevant, it may be useful to note a prior step taken by some casuists, which was to attempt to set out a schema that would capture all of the features of an action or proposed action. The Roman Catholic casuists of the Middle Ages did so by drawing on Aristotle’s categories. Accordingly, they asked, where, when, why, how, by what means, to whom, or by whom the action in question is to be done or avoided (see Jonsen and Toulmin 1988). The idea was that complete answers to these questions would contain all of the features of the action, of which the morally relevant ones would be a subset. Although metaphysically uninteresting, the idea of attempting to list all of an action’s features in this way represents a distinctive – and extreme – heuristic for moral reasoning.

Turning to the morally relevant features, one of the most developed accounts is Bernard Gert’s. He develops a list of features relevant to whether the violation of a moral rule should be generally allowed. Given the designed function of Gert’s list, it is natural that most of his morally relevant features make reference to the set of moral rules he defends. Accordingly, some of Gert’s distinctions between dimensions of relevant features reflect controversial stances in moral theory. For example, one of the dimensions is whether “the violation [is] done intentionally or only knowingly” (Gert 1998, 234) – a distinction that those who reject the doctrine of double effect would not find relevant.

In deliberating about what we ought, morally, to do, we also often attempt to figure out which considerations are most relevant. To take an issue mentioned above: Are surrogate motherhood contracts more akin to agreements with babysitters (clearly acceptable) or to agreements with prostitutes (not clearly so)? That is, which feature of surrogate motherhood is more relevant: that it involves a contract for child-care services or that it involves payment for the intimate use of the body? Both in such relatively novel cases and in more familiar ones, reasoning by analogy plays a large role in ordinary moral thinking. When this reasoning by analogy starts to become systematic – a social achievement that requires some historical stability and reflectiveness about what are taken to be moral norms – it begins to exploit comparison to cases that are “paradigmatic,” in the sense of being taken as settled. Within such a stable background, a system of casuistry can develop that lends some order to the appeal to analogous cases. To use an analogy: the availability of a widely accepted and systematic set of analogies and the availability of what are taken to be moral norms may stand to one another as chicken does to egg: each may be an indispensable moment in the genesis of the other.

Casuistry, thus understood, is an indispensable aid to moral reasoning. At least, this would follow from conjoining two features of the human moral situation mentioned above: the multifariousness of moral considerations that arise in particular cases and the need and possibility for employing moral principles in sound moral reasoning. We require moral judgment, not simply a deductive application of principles or a particularist bottom-line intuition about what we should do. This judgment must be responsible to moral principles yet cannot be straightforwardly derived from them. Accordingly, our moral judgment is greatly aided if it is able to rest on the sort of heuristic support that casuistry offers. Thinking through which of two analogous cases provides a better key to understanding the case at hand is a useful way of organizing our moral reasoning, and one on which we must continue to depend. If we lack the kind of broad consensus on a set of paradigm cases on which the Renaissance Catholic or Talmudic casuists could draw, our casuistic efforts will necessarily be more controversial and tentative than theirs; but we are not wholly without settled cases from which to work. Indeed, as Jonsen and Toulmin suggest at the outset of their thorough explanation and defense of casuistry, the depth of disagreement about moral theories that characterizes a pluralist society may leave us having to rest comparatively more weight on the cases about which we can find agreement than did the classic casuists (Jonsen and Toulmin 1988).

Despite the long history of casuistry, there is little that can usefully be said about how one ought to reason about competing analogies. In the law, where previous cases have precedential importance, more can be said. As Sunstein notes (Sunstein 1996, chap. 3), the law deals with particular cases, which are always “potentially distinguishable” (72); yet the law also imposes “a requirement of practical consistency” (67). This combination of features makes reasoning by analogy particularly influential in the law, for one must decide whether a given case is more like one set of precedents or more like another. Since the law must proceed even within a pluralist society such as ours, Sunstein argues, we see that analogical reasoning can go forward on the basis of “incompletely theorized judgments” or of what Rawls calls an “overlapping consensus” (Rawls 1996). That is, although a robust use of analogous cases depends, as we have noted, on some shared background agreement, this agreement need not extend to all matters or all levels of individuals’ moral thinking. Accordingly, although in a pluralist society we may lack the kind of comprehensive normative agreement that made the high casuistry of Renaissance Christianity possible, the path of the law suggests that normatively forceful, case-based, analogical reasoning can still go on. A modern, competing approach to case-based or precedent-respecting reasoning has been developed by John F. Horty (2016). On Horty’s approach, which builds on the default logic developed in Horty 2012, the body of precedent systematically shifts the weights of the reasons arising in a new case.

Reasoning by appeal to cases is also a favorite mode of some recent moral philosophers. Since our focus here is not on the methods of moral theory, we do not need to go into any detail in comparing different ways in which philosophers wield cases for and against alternative moral theories. There is, however, an important and broadly applicable point worth making about ordinary reasoning by reference to cases that emerges most clearly from the philosophical use of such reasoning. Philosophers often feel free to imagine cases, often quite unlikely ones, in order to attempt to isolate relevant differences. A famous example is a pair of cases offered by James Rachels to cast doubt on the moral significance of the distinction between killing and letting die, here slightly redescribed. In both cases, there is at the outset a boy in a bathtub and a greedy older cousin downstairs who will inherit the family manse if and only if the boy predeceases him (Rachels 1975). In Case A, the cousin hears a thump, runs up to find the boy unconscious in the bath, and reaches out to turn on the tap so that the water will rise up to drown the boy. In Case B, the cousin hears a thump, runs up to find the boy unconscious in the bath with the water running, and decides to sit back and do nothing until the boy drowns. Since there is surely no moral difference between these cases, Rachels argued, the general distinction between killing and letting die is undercut. “Not so fast!” is the well-justified reaction (cf. Beauchamp 1979). Just because a factor is morally relevant in a certain way in comparing one pair of cases does not mean that it either is or must be relevant in the same way or to the same degree when comparing other cases. Shelly Kagan has dubbed the failure to take account of this fact of contextual interaction when wielding comparison cases the “additive fallacy” (1988). 
Kagan concludes from this that the reasoning of moral theorists must depend upon some theory that helps us anticipate and account for ways in which factors will interact in various contexts. A parallel lesson, reinforcing what we have already observed in connection with casuistry proper, would apply for moral reasoning in general: reasoning from cases must at least implicitly rely upon a set of organizing judgments or beliefs, of a kind that would, on some understandings, count as a moral “theory.” If this is correct, it provides another kind of reason to think that moral considerations could be crystallized into principles that make manifest the organizing structure involved.

We are concerned here with moral reasoning as a species of practical reasoning – reasoning directed to deciding what to do and, if successful, issuing in an intention. But how can such practical reasoning succeed? How can moral reasoning hook up with motivationally effective psychological states so as to have this kind of causal effect? “Moral psychology” – the traditional name for the philosophical study of intention and action – has a lot to say to such questions, both in its traditional, a priori form and its newly popular empirical form. In addition, the conclusions of moral psychology can have substantive moral implications, for it may be reasonable to assume that if there are deep reasons that a given type of moral reasoning cannot be practical, then any principles that demand such reasoning are unsound. In this spirit, Samuel Scheffler has explored “the importance for moral philosophy of some tolerably realistic understanding of human motivational psychology” (Scheffler 1992, 8) and Peter Railton has developed the idea that certain moral principles might generate a kind of “alienation” (Railton 1984). In short, we may be interested in what makes practical reasoning of a certain sort psychologically possible both for its own sake and as a way of working out some of the content of moral theory.

The issue of psychological possibility is an important one for all kinds of practical reasoning (cf. Audi 1989). In morality, it is especially pressing, as morality often asks individuals to depart from satisfying their own interests. As a result, it may appear that moral reasoning’s practical effect could not be explained by a simple appeal to the initial motivations that shape or constitute someone’s interests, in combination with a requirement, like that mentioned above, to will the necessary means to one’s ends. Morality, it may seem, instead requires individuals to act on ends that may not be part of their “motivational set,” in the terminology of Williams 1981. How can moral reasoning lead people to do that? The question is a traditional one. Plato’s Republic answered that the appearances are deceiving, and that acting morally is, in fact, in the enlightened self-interest of the agent. Kant, in stark contrast, held that our transcendent capacity to act on our conception of a practical law enables us to set ends and to follow morality even when doing so sharply conflicts with our interests. Many other answers have been given. In recent times, philosophers have defended what has been called “internalism” about morality, which claims that there is a necessary conceptual link between agents’ moral judgment and their motivation. Michael Smith, for instance, puts the claim as follows (Smith 1994, 61):

If an agent judges that it is right for her to Φ in circumstances C , then either she is motivated to Φ in C or she is practically irrational.

Even this defeasible version of moral judgment internalism may be too strong; but instead of pursuing this issue further, let us turn to a question more internal to moral reasoning. (For more on the issue of moral judgment internalism, see moral motivation .)

The traditional question we were just glancing at picks up when moral reasoning is done. Supposing that we have some moral conclusion, it asks how agents can be motivated to go along with it. A different question about the intersection of moral reasoning and moral psychology, one more immanent to the former, concerns how motivational elements shape the reasoning process itself.

A powerful philosophical picture of human psychology, stemming from Hume, insists that beliefs and desires are distinct existences (Hume 2000, Book II, part iii, sect. iii; cf. Smith 1994, 7). This means that there is always a potential problem about how reasoning, which seems to work by concatenating beliefs, links up to the motivations that desire provides. The paradigmatic link is that of instrumental action: the desire to Ψ links with the belief that by Φing in circumstances C one will Ψ. Accordingly, philosophers who have examined moral reasoning within an essentially Humean, belief-desire psychology have sometimes accepted a constrained account of moral reasoning. Hume’s own account exemplifies the sort of constraint that is involved. As Hume has it, the calm passions support the dual correction of perspective constitutive of morality, alluded to above. Since these calm passions are seen as competing with our other passions in essentially the same motivational coinage, as it were, our passions limit the reach of moral reasoning.

An important step away from a narrow understanding of Humean moral psychology is taken if one recognizes the existence of what Rawls has called “principle-dependent desires” (Rawls 1996, 82–83; Rawls 2000, 46–47). These are desires whose objects cannot be characterized without reference to some rational or moral principle. An important special case of these is that of “conception-dependent desires,” in which the principle-dependent desire in question is seen by the agent as belonging to a broader conception, and as important on that account (Rawls 1996, 83–84; Rawls 2000, 148–152). For instance, conceiving of oneself as a citizen, one may desire to bear one’s fair share of society’s burdens. Although it may look like any content, including this, may substitute for Ψ in the Humean conception of desire, and although Hume set out to show how moral sentiments such as pride could be explained in terms of simple psychological mechanisms, his influential empiricism actually tends to restrict the possible content of desires. Introducing principle-dependent desires thus seems to mark a departure from a Humean psychology. As Rawls remarks, if “we may find ourselves drawn to the conceptions and ideals that both the right and the good express … , [h]ow is one to fix limits on what people might be moved by in thought and deliberation and hence may act from?” (1996, 85). While Rawls developed this point by contrasting Hume’s moral psychology with Kant’s, the same basic point is also made by neo-Aristotelians (e.g., McDowell 1998).

The introduction of principle-dependent desires bursts any would-be naturalist limit on their content; nonetheless, some philosophers hold that this notion remains too beholden to an essentially Humean picture to be able to capture the idea of a moral commitment. Desires, it may seem, remain motivational items that compete on the basis of strength. Saying that one’s desire to be just may be outweighed by one’s desire for advancement may seem to fail to capture the thought that one has a commitment – even a non-absolute one – to justice. Sartre designed his example of the student torn between staying with his mother and going to fight with the Free French so as to make it seem implausible that he ought to decide simply by determining which he more strongly wanted to do.

One way to get at the idea of commitment is to emphasize our capacity to reflect about what we want. By this route, one might distinguish, in the fashion of Harry Frankfurt, between the strength of our desires and “the importance of what we care about” (Frankfurt 1988). Although this idea is evocative, it provides relatively little insight into how it is that we thus reflect. Another way to model commitment is to take it that our intentions operate at a level distinct from our desires, structuring what we are willing to reconsider at any point in our deliberations (e.g. Bratman 1999). While this two-level approach offers some advantages, it is limited by its concession of a kind of normative primacy to the unreconstructed desires at the unreflective level. A more integrated approach might model the psychology of commitment in a way that reconceives the nature of desire from the ground up. One attractive possibility is to return to the Aristotelian conception of desire as being for the sake of some good or apparent good (cf. Richardson 2004). On this conception, the end for the sake of which an action is done plays an important regulating role, indicating, in part, what one will not do (Richardson 2018, §§8.3–8.4). Reasoning about final ends accordingly has a distinctive character (see Richardson 1994, Schmidtz 1995). Whatever the best philosophical account of the notion of a commitment – for another alternative, see Tiberius 2000 – much of our moral reasoning does seem to involve expressions of and challenges to our commitments (Anderson and Pildes 2000).

Recent experimental work, employing both survey instruments and brain imaging technologies, has allowed philosophers to approach questions about the psychological basis of moral reasoning from novel angles. The initial brain data seems to show that individuals with damage to the pre-frontal lobes tend to reason in more straightforwardly consequentialist fashion than those without such damage (Koenigs et al. 2007). Some theorists take this finding as tending to confirm that fully competent human moral reasoning goes beyond a simple weighing of pros and cons to include assessment of moral constraints (e.g., Wellman & Miller 2008, Young & Saxe 2008). Others, however, have argued that the emotional responses of the prefrontal lobes interfere with the more sober and sound, consequentialist-style reasoning of the other parts of the brain (e.g. Greene 2014). The survey data reveals or confirms, among other things, interesting, normatively loaded asymmetries in our attribution of such concepts as responsibility and causality (Knobe 2006). It also reveals that many of moral theory’s most subtle distinctions, such as the distinction between an intended means and a foreseen side-effect, are deeply built into our psychologies, being present cross-culturally and in young children, in a way that suggests to some the possibility of an innate “moral grammar” (Mikhail 2011).

A final question about the connection between moral motivation and moral reasoning is whether someone without the right motivational commitments can reason well, morally. On Hume’s official, narrow conception of reasoning, which essentially limits it to tracing empirical and logical connections, the answer would be yes. The vicious person could trace the causal and logical implications of acting in a certain way just as a virtuous person could. The only difference would be practical, not rational: the two would not act in the same way. Note, however, that the Humean’s affirmative answer depends on departing from the working definition of “moral reasoning” used in this article, which casts it as a species of practical reasoning. Interestingly, Kant can answer “yes” while still casting moral reasoning as practical. On his view in the Groundwork and the Critique of Practical Reason , reasoning well, morally, does not depend on any prior motivational commitment, yet remains practical reasoning. That is because he thinks the moral law can itself generate motivation. (Kant’s Metaphysics of Morals and Religion offer a more complex psychology.) For Aristotle, by contrast, an agent whose motivations are not virtuously constituted will systematically misperceive what is good and what is bad, and hence will be unable to reason excellently. The best reasoning that a vicious person is capable of, according to Aristotle, is a defective simulacrum of practical wisdom that he calls “cleverness” ( Nicomachean Ethics 1144a25).

Moral considerations often conflict with one another. So do moral principles and moral commitments. Assuming that filial loyalty and patriotism are moral considerations, then Sartre’s student faces a moral conflict. Recall that it is one thing to model the metaphysics of morality or the truth conditions of moral statements and another to give an account of moral reasoning. In now looking at conflicting considerations, our interest here remains with the latter and not the former. Our principal interest is in ways that we need to structure or think about conflicting considerations in order to negotiate well our reasoning involving them.

One influential building-block for thinking about moral conflicts is W. D. Ross’s notion of a “prima facie duty”. Although this term misleadingly suggests mere appearance – the way things seem at first glance – it has stuck. Some moral philosophers prefer the term “pro tanto duty” (e.g., Hurley 1989). Ross explained that his term provides “a brief way of referring to the characteristic (quite distinct from that of being a duty proper) which an act has, in virtue of being of a certain kind (e.g., the keeping of a promise), of being an act which would be a duty proper if it were not at the same time of another kind which is morally significant.” Illustrating the point, he noted that a prima facie duty to keep a promise can be overridden by a prima facie duty to avert a serious accident, resulting in a proper, or unqualified, duty to do the latter (Ross 1988, 18–19). Ross described each prima facie duty as a “parti-resultant” attribute, grounded or explained by one aspect of an act, whereas “being one’s [actual] duty” is a “toti-resultant” attribute resulting from all such aspects of an act, taken together (28; see Pietroski 1993). This suggests that in each case there is, in principle, some function that generally maps from the partial contributions of each prima facie duty to some actual duty. What might that function be? To Ross’s credit, he writes that “for the estimation of the comparative stringency of these prima facie obligations no general rules can, so far as I can see, be laid down” (41). Accordingly, a second strand in Ross simply emphasizes, following Aristotle, the need for practical judgment by those who have been brought up into virtue (42).

How might considerations of the sort constituted by prima facie duties enter our moral reasoning? They might do so explicitly, or only implicitly. There is also a third, still weaker possibility (Scheffler 1992, 32): it might simply be the case that if the agent had recognized a prima facie duty, he would have acted on it unless he considered it to be overridden. This is a fact about how he would have reasoned.

Despite Ross’s denial that there is any general method for estimating the comparative stringency of prima facie duties, there is a further strand in his exposition that many find irresistible and that tends to undercut this denial. In the very same paragraph in which he states that he sees no general rules for dealing with conflicts, he speaks in terms of “the greatest balance of prima facie rightness.” This language, together with the idea of “comparative stringency,” ineluctably suggests the idea that the mapping function might be the same in each case of conflict and that it might be a quantitative one. On this conception, if there is a conflict between two prima facie duties, the one that is strongest in the circumstances should be taken to win. Duly cautioned about the additive fallacy (see section 2.3 ), we might recognize that the strength of a moral consideration in one set of circumstances cannot be inferred from its strength in other circumstances. Hence, this approach will need still to rely on intuitive judgments in many cases. But this intuitive judgment will be about which prima facie consideration is stronger in the circumstances, not simply about what ought to be done.
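The quantitative reading just described can be made vivid with a toy sketch. To be clear about what is assumed: the function, the duty names, and the stringency numbers below are all invented for illustration; Ross himself denies that any such general rule can be laid down, and the additive fallacy warns that such numbers could not be carried unchanged from one set of circumstances to another.

```python
# Toy model of the quantitative reading of Ross: each act is backed by
# prima facie duties with context-dependent "stringencies", and the
# actual duty is the act with the greatest balance of prima facie
# rightness. Illustrative only; not Ross's considered view.

def actual_duty(options):
    """options: {act: [(duty_name, stringency), ...]} -> act with the greatest balance."""
    def balance(duties):
        return sum(strength for _, strength in duties)
    return max(options, key=lambda act: balance(options[act]))

# Ross's own illustration: keeping a promise vs. averting a serious
# accident. The numbers are hypothetical and hold, at best, only for
# these circumstances.
options = {
    "keep_promise":   [("fidelity", 5)],
    "avert_accident": [("non-maleficence", 9)],
}
print(actual_duty(options))  # avert_accident
```

The sketch also exposes what the dispute is about: whether any single, context-independent `balance` function exists at all, or whether, as Ross holds, only practical judgment can settle each case.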

The thought that our moral reasoning either requires or is benefited by a virtual quantitative crutch of this kind has a long pedigree. Can we really reason well morally in a way that boils down to assessing the weights of the competing considerations? Addressing this question will require an excursus on the nature of moral reasons. Philosophical support for this possibility involves an idea of practical commensurability. We need to distinguish, here, two kinds of practical commensurability or incommensurability, one defined in metaphysical terms and one in deliberative terms. Each of these forms might be stated evaluatively or deontically. The first, metaphysical sort of value incommensurability is defined directly in terms of what is the case. Thus, to state an evaluative version: two values are metaphysically incommensurable just in case neither is better than the other nor are they equally good (see Chang 1998). Now, the metaphysical incommensurability of values, or its absence, is only loosely linked to how it would be reasonable to deliberate. If all values or moral considerations are metaphysically (that is, in fact) commensurable, still it might well be the case that our access to the ultimate commensurating function is so limited that we would fare ill by proceeding in our deliberations to try to think about which outcomes are “better” or which considerations are “stronger.” We might have no clue about how to measure the relevant “strength.” Conversely, even if metaphysical value incommensurability is common, we might do well, deliberatively, to proceed as if this were not the case, just as we proceed in thermodynamics as if the gas laws obtained in their idealized form. Hence, in thinking about the deliberative implications of incommensurable values, we would do well to think in terms of a definition tailored to the deliberative context. Start with a local, pairwise form. 
We may say that two options, A and B, are deliberatively commensurable just in case there is some one dimension of value in terms of which, prior to – or logically independently of – choosing between them, it is possible adequately to represent the force of the considerations bearing on the choice.

Philosophers as diverse as Immanuel Kant and John Stuart Mill have argued that unless two options are deliberatively commensurable, in this sense, it is impossible to choose rationally between them. Interestingly, Kant limited this claim to the domain of prudential considerations, recognizing moral reasoning as invoking considerations incommensurable with those of prudence. For Mill, this claim formed an important part of his argument that there must be some one, ultimate “umpire” principle – namely, on his view, the principle of utility. Henry Sidgwick elaborated Mill’s argument and helpfully made explicit its crucial assumption, which he called the “principle of superior validity” (Sidgwick 1981; cf. Schneewind 1977). This is the principle that conflict between distinct moral or practical considerations can be rationally resolved only on the basis of some third principle or consideration that is both more general and more firmly warranted than the two initial competitors. From this assumption, one can readily build an argument for the rational necessity not merely of local deliberative commensurability, but of a global deliberative commensurability that, like Mill and Sidgwick, accepts just one ultimate umpire principle (cf. Richardson 1994, chap. 6).

Sidgwick’s explicitness, here, is valuable also in helping one see how to resist the demand for deliberative commensurability. Deliberative commensurability is not necessary for proceeding rationally if conflicting considerations can be rationally dealt with in a holistic way that does not involve the appeal to a principle of “superior validity.” That our moral reasoning can proceed holistically is strongly affirmed by Rawls. Rawls’s characterizations of the influential ideal of reflective equilibrium and his related ideas about the nature of justification imply that we can deal with conflicting considerations in less hierarchical ways than imagined by Mill or Sidgwick. Instead of proceeding up a ladder of appeal to some highest court or supreme umpire, Rawls suggests, when we face conflicting considerations “we work from both ends” (Rawls 1999, 18). Sometimes indeed we revise our more particular judgments in light of some general principle to which we adhere; but we are also free to revise more general principles in light of some relatively concrete considered judgment. On this picture, there is no necessary correlation between degree of generality and strength of authority or warrant. That this holistic way of proceeding (whether in building moral theory or in deliberating: cf. Hurley 1989) can be rational is confirmed by the possibility of a form of justification that is similarly holistic: “justification is a matter of the mutual support of many considerations, of everything fitting together into one coherent view” (Rawls 1999, 19, 507). (Note that this statement, which expresses a necessary aspect of moral or practical justification, should not be taken as a definition or analysis thereof.) 
So there is an alternative to depending, deliberatively, on finding a dimension in terms of which considerations can be ranked as “stronger” or “better” or “more stringent”: one can instead “prune and adjust” with an eye to building more mutual support among the considerations that one endorses on due reflection. If even the desideratum of practical coherence is subject to such re-specification, then this holistic possibility really does represent an alternative to commensuration, as the deliberator, and not some coherence standard, retains reflective sovereignty (Richardson 1994, sec. 26). The result can be one in which the originally competing considerations are not so much compared as transformed (Richardson 2018, chap. 1).

Suppose that we start with a set of first-order moral considerations that are all commensurable as a matter of ultimate, metaphysical fact, but that our grasp of the actual strength of these considerations is quite poor and subject to systematic distortions. Perhaps some people are much better placed than others to appreciate certain considerations, and perhaps our strategic interactions would cause us to reach suboptimal outcomes if we each pursued our own unfettered judgment of how the overall set of considerations plays out. In such circumstances, there is a strong case for departing from maximizing reasoning without swinging all the way to the holist alternative. This case has been influentially articulated by Joseph Raz, who develops the notion of an “exclusionary reason” to occupy this middle position (Raz 1990).

“An exclusionary reason,” in Raz’s terminology, “is a second order reason to refrain from acting for some reason” (39). A simple example is that of Ann, who is tired after a long and stressful day, and hence has reason not to act on her best assessment of the reasons bearing on a particularly important investment decision that she immediately faces (37). This notion of an exclusionary reason allowed Raz to capture many of the complexities of our moral reasoning, especially as it involves principled commitments, while conceding that, at the first order, all practical reasons might be commensurable. Raz’s early strategy for reconciling commensurability with complexity of structure was to limit the claim that reasons are comparable with regard to strength to reasons of a given order. First-order reasons compete on the basis of strength; but conflicts between first- and second-order reasons “are resolved not by the strength of the competing reasons but by a general principle of practical reasoning which determines that exclusionary reasons always prevail” (40).

If we take for granted this “general principle of practical reasoning,” why should we recognize the existence of any exclusionary reasons, which by definition prevail independently of any contest of strength? Raz’s principal answer to this question shifts from the metaphysical domain of the strengths that various reasons “have” to the epistemically limited viewpoint of the deliberator. As in Ann’s case, we can see in certain contexts that a deliberator is likely to get things wrong if he or she acts on his or her perception of the first-order reasons. Second-order reasons indicate, with respect to a certain range of first-order reasons, that the agent “must not act for those reasons” (185). The broader justification of an exclusionary reason, then, can consistently be put in terms of the commensurable first-order reasons. Such a justification can have the following form: “Given this agent’s deliberative limitations, the balance of first-order reasons will likely be better conformed with if he or she refrains from acting for certain of those reasons.”

Raz’s account of exclusionary reasons might be used to reconcile ultimate commensurability with the structured complexity of our moral reasoning. Whether such an attempt could succeed would depend, in part, on the extent to which we have an actual grasp of first-order reasons, conflict among which can be settled solely on the basis of their comparative strength. Our consideration, above, of casuistry, the additive fallacy, and deliberative incommensurability may combine to make it seem that only in rare pockets of our practice do we have a good grasp of first-order reasons, if these are defined, à la Raz, as competing only in terms of strength. If that is right, then we will almost always have good exclusionary reasons to reason on some other basis than in terms of the relative strength of first-order reasons. Under those assumptions, the middle way that Raz’s idea of exclusionary reasons seems to open up would more closely approach the holist’s.

The notion of a moral consideration’s “strength,” whether put forward as part of a metaphysical picture of how first-order considerations interact in fact or as a suggestion about how to go about resolving a moral conflict, should not be confused with the bottom-line determination of whether one consideration, and specifically one duty, overrides another. In Ross’s example of conflicting prima facie duties, someone must choose between averting a serious accident and keeping a promise to meet someone. (Ross chose the case to illustrate that an “imperfect” duty, or a duty of commission, can override a strict, prohibitive duty.) Ross’s assumption is that all well brought-up people would agree, in this case, that the duty to avert serious harm to someone overrides the duty to keep such a promise. We may take it, if we like, that this judgment implies that we consider the duty to save a life, here, to be stronger than the duty to keep the promise; but we do not reach our practical conclusion in this case by determining that the duty to save the life is stronger, and the claim about relative strength adds nothing to our understanding of the situation. The statement that this duty is here stronger is simply a way to embellish the conclusion that of the two prima facie duties that here conflict, it is the one that states the all-things-considered duty. To be “overridden” is just to be a prima facie duty that fails to generate an actual duty because another prima facie duty that conflicts with it – or several of them that do – does generate an actual duty. Hence, the judgment that some duties override others can be understood just in terms of their deontic upshots and without reference to considerations of strength. To confirm this, note that we can say, “As a matter of fidelity, we ought to keep the promise; as a matter of beneficence, we ought to save the life; we cannot do both; and both categories considered we ought to save the life.”
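The definitional point can be put compactly. In an informal deontic notation added here for illustration (not in the original text), with $O_{pf}$ for a prima facie duty, $O$ for an actual, all-things-considered duty, and $\Diamond$ for “can”:

```latex
% "B overrides A": both are prima facie duties, they conflict, and it is
% B, not A, that generates the actual duty. No notion of "strength" appears.
\[
B \text{ overrides } A \;\equiv\;
O_{pf}(A) \,\wedge\, O_{pf}(B) \,\wedge\, \neg\Diamond(A \wedge B)
\,\wedge\, O(B) \,\wedge\, \neg O(A)
\]
```

The right-hand side mentions only deontic upshots, which is the point: talk of one duty being “stronger” does no work in the definition.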

Understanding the notion of one duty overriding another in this way puts us in a position to take up the topic of moral dilemmas. Since this topic is covered in a separate article, here we may simply note one attractive definition of a moral dilemma. Sinnott-Armstrong (1988) suggested that a moral dilemma is a situation in which the following are true of a single agent:

  1. He ought to do A.
  2. He ought to do B.
  3. He cannot do both A and B.
  4. (1) does not override (2), and (2) does not override (1).

This way of defining moral dilemmas distinguishes them from the kind of moral conflict, such as Ross’s promise-keeping/accident-prevention case, in which one of the duties is overridden by the other. Arguably, Sartre’s student faces a moral dilemma. Making sense of a situation in which neither of two duties overrides the other is easier if deliberative commensurability is denied. Whether moral dilemmas are possible will depend crucially on whether “ought” implies “can” and whether any pair of duties such as those comprised by (1) and (2) implies a single, “agglomerated” duty that the agent do both A and B . If either of these purported principles of the logic of duties is false, then moral dilemmas are possible.
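The dependence on these two principles can be made explicit in a short derivation (a sketch in standard deontic notation, not part of the original text). Given agglomeration and “ought implies can,” conditions (1)–(3) are jointly inconsistent, so no situation could also satisfy (4):

```latex
\[
\begin{aligned}
&O(A),\; O(B)             && \text{conditions (1) and (2)}\\
&O(A \wedge B)            && \text{by agglomeration}\\
&\Diamond(A \wedge B)     && \text{by ``ought implies can''}\\
&\neg\Diamond(A \wedge B) && \text{condition (3)} \quad\Rightarrow\;\bot
\end{aligned}
\]
```

Rejecting either principle blocks the contradiction, which is why the possibility of moral dilemmas turns on their status.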

Jonathan Dancy has highlighted a kind of contextual variability in moral reasons that has come to be known as “reasons holism”: “a feature that is a reason in one case may be no reason at all, or an opposite reason, in another” (Dancy 2004). To adapt one of his examples: while there is often moral reason not to lie, when playing liar’s poker one generally ought to lie; otherwise, one will spoil the game (cf. Dancy 1993, 61). Dancy argues that reasons holism supports moral particularism of the kind discussed in section 2.2, according to which there are no defensible moral principles. Taking this conclusion seriously would radically affect how we conduct our moral reasoning. The argument’s premise of holism has been challenged (e.g., Audi 2004, McKeever & Ridge 2006). Philosophers have also challenged the inference from reasons holism to particularism in various ways. Mark Lance and Margaret Olivia Little (2007) have done so by exhibiting how defeasible generalizations, in ethics and elsewhere, depend systematically on context. We can work with them, they suggest, by utilizing a skill similar to that of discerning morally salient considerations, namely the skill of discerning relevant similarities among possible worlds. More generally, John F. Horty has developed a logical and semantic account according to which reasons are defaults and so behave holistically, yet general principles nonetheless explain how they behave (Horty 2012). And Mark Schroeder has argued that our holistic views about reasons are actually better explained by supposing that there are general principles (Schroeder 2011).
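The structural idea behind reasons holism can be shown in a toy model (the feature names and the crude summation are invented for illustration; this is a simplification in the spirit of Horty’s treatment of reasons as defaults, not his formal system):

```python
# Toy illustration of "reasons holism": the same feature can count for,
# against, or not at all, depending on context. All names here are
# invented for illustration.

# Default polarity of a feature as a reason: -1 (against), +1 (for), 0 (none).
DEFAULTS = {"lying": -1, "causing_pleasure": +1}

# Context-specific overrides: in liar's poker, lying is part of the game,
# so the usual reason against it is reversed (Dancy's example).
OVERRIDES = {("lying", "liars_poker"): +1}

def reason_polarity(feature, context=None):
    """Return the polarity of `feature` as a reason in `context`."""
    return OVERRIDES.get((feature, context), DEFAULTS.get(feature, 0))

def verdict(features, context=None):
    """Crude overall verdict: sum the polarities of the present features."""
    return sum(reason_polarity(f, context) for f in features)
```

Here `reason_polarity("lying")` is `-1` but `reason_polarity("lying", "liars_poker")` is `+1`, mirroring the liar’s-poker case. The dispute between particularists and generalists is then over whether such context-dependence can itself be captured by general principles, as Horty and Schroeder argue it can.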

This excursus on moral reasons suggests that there are a number of good reasons why reasoning about moral matters might not simply reduce to assessing the weights of competing considerations.

If we have any moral knowledge, whether concerning general moral principles or concrete moral conclusions, it is surely very imperfect. What moral knowledge we are capable of will depend, in part, on what sorts of moral reasoning we are capable of. Although some moral learning may result from the theoretical work of moral philosophers and theorists, much of what we learn with regard to morality surely arises in the practical context of deliberation about new and difficult cases. This deliberation might be merely instrumental, concerned only with settling on means to moral ends, or it might be concerned with settling those ends. There is no special problem about learning what conduces to morally obligatory ends: that is an ordinary matter of empirical learning. But by what sorts of process can we learn which ends are morally obligatory, or which norms are morally required? And, more specifically, is strictly moral learning possible via moral reasoning?

Much of what was said above with regard to moral uptake applies again in this context, with approximately the same degree of dubiousness or persuasiveness. If there is a role for moral perception or for emotions in agents’ becoming aware of moral considerations, these may function also to guide agents to new conclusions. For instance, it is conceivable that our capacity for outrage is a relatively reliable detector of wrong actions, even novel ones, or that our capacity for pleasure is a reliable detector of actions worth doing, even novel ones. (For a thorough defense of the latter possibility, which intriguingly interprets pleasure as a judgment of value, see Millgram 1997.) Perhaps these capacities for emotional judgment enable strictly moral learning in roughly the same way that chess-players’ trained sensibilities enable them to recognize the threat in a previously unencountered situation on the chessboard (Lance and Tanesini 2004). That is to say, perhaps our moral emotions play a crucial role in the exercise of a skill whereby we come to be able to articulate moral insights that we have never before attained. Perhaps competing moral considerations interact in contextually specific and complex ways much as competing chess considerations do. If so, it would make sense to rely on our emotionally-guided capacities of judgment to cope with complexities that we cannot model explicitly, but also to hope that, once having been so guided, we might in retrospect be able to articulate something about the lesson of a well-navigated situation.

A different model of strictly moral learning puts the emphasis on our after-the-fact reactions rather than on any prior, tacit emotional or judgmental guidance: the model of “experiments in living,” to use John Stuart Mill’s phrase (see Anderson 1991). Here, the basic thought is that we can try something and see if “it works.” For this to be an alternative to empirical learning about what causally conduces to what, it must be the case that we remain open as to what we mean by things “working.” In Mill’s terminology, for instance, we need to remain open as to what are the important “parts” of happiness. If we are, then perhaps we can learn by experience what some of them are – that is, what are some of the constitutive means of happiness. These paired thoughts, that our practical life is experimental and that we have no firmly fixed conception of what it is for something to “work,” come to the fore in Dewey’s pragmatist ethics (see esp. Dewey 1967 [1922]). This experimentalist conception of strictly moral learning is brought to bear on moral reasoning in Dewey’s eloquent characterizations of “practical intelligence” as involving a creative and flexible approach to figuring out “what works” in a way that is thoroughly open to rethinking our ultimate aims.

Once we recognize that moral learning is a possibility for us, we can recognize a broader range of ways of coping with moral conflicts than was canvassed in the last section. There, moral conflicts were described in a way that assumed that the set of moral considerations, among which conflicts were arising, was to be taken as fixed. If we can learn, morally, however, then we probably can and should revise the set of moral considerations that we recognize. Often, we do this by re-interpreting some moral principle that we had started with, whether by making it more specific, making it more abstract, or in some other way (cf. Richardson 2000 and 2018).

So far, we have mainly been discussing moral reasoning as if it were a solitary endeavor. This is, at best, a convenient simplification. At worst, it is, as Jürgen Habermas has long argued, deeply distorting of reasoning’s essentially dialogical or conversational character (e.g., Habermas 1984; cf. Laden 2012). In any case, it is clear that we often do need to reason morally with one another.

Here, we are interested in how people may actually reason with one another – not in how imagined participants in an original position or ideal speech situation may be said to reason with one another, which is a concern for moral theory, proper. There are two salient and distinct ways of thinking about people morally reasoning with one another: as members of an organized or corporate body that is capable of reaching practical decisions of its own; and as autonomous individuals working outside any such structure to figure out with each other what they ought, morally, to do.

The nature and possibility of collective reasoning within an organized collective body has recently been the subject of some discussion. Collectives can reason if they are structured as an agent. This structure might or might not be institutionalized. In line with the gloss of reasoning offered above, which presupposes being guided by an assessment of one’s reasons, it is plausible to hold that a group agent “counts as reasoning, not just rational, only if it is able to form not only beliefs in propositions – that is, object-language beliefs – but also beliefs about propositions” (List and Pettit 2011, 63). As List and Pettit have shown (2011, 109–113), participants in a collective agent will unavoidably have incentives to misrepresent their own preferences in conditions involving ideologically structured disagreements where the contending parties are oriented to achieving or avoiding certain outcomes – as is sometimes the case where serious moral disagreements arise. In contexts where what ultimately matters is how well the relevant group or collective ends up faring, “team reasoning” that takes advantage of orientation towards the collective flourishing of the group can help it reach a collectively optimal outcome (Sugden 1993, Bacharach 2006; see entry on collective intentionality). Where the group in question is smaller than the set of persons, however, such a collectively prudential focus is distinct from a moral focus and seems at odds with the kind of impartiality typically thought distinctive of the moral point of view. Thinking about what a “team-orientation” to the set of all persons might look like might bring us back to thoughts of Kantian universalizability; but recall that here we are focused on actual reasoning, not hypothetical reasoning.
With regard to actual reasoning, even if individuals can take up such an orientation towards the “team” of all persons, there is serious reason, highlighted by another strand of the Kantian tradition, for doubting that any individual can aptly surrender their moral judgment to any group’s verdict (Wolff 1998).
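One structural reason a group agent must form beliefs about propositions, and not merely tally its members’ object-language beliefs, is the so-called discursive dilemma that figures centrally in List and Pettit’s account. The following three-member scenario is a stock illustration, not drawn from the text above:

```python
# The discursive dilemma: proposition-by-proposition majority voting can
# leave a group with logically inconsistent "beliefs," even though every
# member is individually consistent. A stock three-member example.

def majority(votes):
    """True iff a strict majority of boolean votes is True."""
    return sum(votes) > len(votes) / 2

# Each member holds consistent views: their vote on the conclusion
# "p and q" matches the conjunction of their votes on p and on q.
members = [
    {"p": True,  "q": True,  "p_and_q": True},
    {"p": True,  "q": False, "p_and_q": False},
    {"p": False, "q": True,  "p_and_q": False},
]

# The group view, taken proposition by proposition.
group = {prop: majority([m[prop] for m in members])
         for prop in ["p", "q", "p_and_q"]}
# The group accepts p and accepts q, yet rejects their conjunction:
# the proposition-wise group view is logically inconsistent.
```

A group that is to count as reasoning must therefore be able to notice and correct such inconsistencies in its own attitudes, which is belief about propositions rather than mere aggregation of beliefs in them.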

This does not mean that people cannot reason together, morally. It suggests, however, that such joint reasoning is best pursued as a matter of working out together, as independent moral agents, what they ought to do with regard to an issue on which they have some need to cooperate. Even if deferring to another agent’s verdict as to how one morally ought to act is ruled out, it is still possible that one may licitly take account of the moral testimony of others (for differing views, see McGrath 2009, Enoch 2014).

In the case of independent individuals reasoning morally with one another, we may expect that moral disagreement provides the occasion rather than an obstacle. To be sure, if individuals’ moral disagreement is very deep, they may not be able to get this reasoning off the ground; but as Kant’s example of Charles V and his brother each wanting Milan reminds us, intractable conflict can also arise from disagreements that, while conceptually shallow, are circumstantially sharp. If it were true that clear-headed justification of one’s moral beliefs required seeing them as being ultimately grounded in a priori principles, as G.A. Cohen argued (Cohen 2008, chap. 6), then room for individuals to work out their moral disagreements by reasoning with one another would seem to be relatively restricted; but whether the nature of (clear-headed) moral grounding is really so restricted is seriously doubtful (Richardson 2018, §9.2). In contrast to what such a picture suggests, individuals’ moral commitments seem sufficiently open to being re-thought that people seem able to engage in principled – that is, not simply loss-minimizing – compromise (Richardson 2018, §8.5).

What about the possibility that the moral community as a whole – roughly, the community of all persons – can reason? This possibility does not raise the kind of threat to impartiality that is raised by the team reasoning of a smaller group of people; but it is hard to see it working in a way that does not run afoul of the concern about whether any person can aptly defer, in a strong sense, to the moral judgments of another agent. Even so, a residual possibility remains, which is that the moral community can reason in just one way, namely by accepting or ratifying a moral conclusion that has already become shared in a sufficiently inclusive and broad way (Richardson 2018, chap. 7).

  • Anderson, E. S., 1991. “John Stuart Mill and experiments in living,” Ethics , 102: 4–26.
  • Anderson, E. S. and Pildes, R. H., 2000. “Expressive theories of law: A general restatement,” University of Pennsylvania Law Review , 148: 1503–1575.
  • Arpaly, N. and Schroeder, T., 2014. In praise of desire , Oxford: Oxford University Press.
  • Audi, R., 1989. Practical reasoning , London: Routledge.
  • –––, 2004. The good in the right: A theory of intuition and intrinsic value , Princeton: Princeton University Press.
  • Bacharach, M., 2006. Beyond individual choice: Teams and frames in game theory , Princeton: Princeton University Press.
  • Beauchamp, T. L., 1979. “A reply to Rachels on active and passive euthanasia,” in Medical responsibility , ed. W. L. Robinson, Clifton, N.J.: Humana Press, 182–95.
  • Brandt, R. B., 1979. A theory of the good and the right , Oxford: Oxford University Press.
  • Bratman, M., 1999. Faces of intention: Selected essays on intention and agency , Cambridge, England: Cambridge University Press.
  • Broome, J., 2009. “The unity of reasoning?” in Spheres of reason , ed. S. Robertson, Oxford: Oxford University Press.
  • –––, 2013. Rationality through Reasoning , Chichester, West Sussex: Wiley Blackwell.
  • Campbell, R. and Kumar, V., 2012. “Moral reasoning on the ground,” Ethics , 122: 273–312.
  • Chang, R. (ed.), 1998. Incommensurability, incomparability, and practical reason , Cambridge, Mass.: Harvard University Press.
  • Clarke, S. G., and E. Simpson, 1989. Anti-theory in ethics and moral conservatism , Albany: SUNY Press.
  • Cohen, G. A., 2008. Rescuing justice and equality , Cambridge, Mass.: Harvard University Press.
  • Dancy, J., 1993. Moral reasons , Oxford: Blackwell.
  • –––, 2004. Ethics without principles , Oxford: Oxford University Press.
  • Dewey, J., 1967. The middle works, 1899–1924 , Vol. 14, Human nature and conduct , ed. J. A. Boydston, Carbondale: Southern Illinois University Press.
  • Donagan, A., 1977. The theory of morality , Chicago: University of Chicago Press.
  • Dworkin, R., 1978. Taking rights seriously , Cambridge: Harvard University Press.
  • Engstrom, S., 2009. The form of practical knowledge: A study of the categorical imperative , Cambridge, Mass.: Harvard University Press.
  • Enoch, D., 2014. “In defense of moral deference,” Journal of philosophy , 111: 229–58.
  • Fernandez, P. A., 2016. “Practical reasoning: Where the action is,” Ethics , 126: 869–900.
  • Fletcher, J., 1997. Situation ethics: The new morality , Louisville: Westminster John Knox Press.
  • Frankfurt, H. G., 1988. The importance of what we care about: Philosophical essays , Cambridge: Cambridge University Press.
  • Gert, B., 1998. Morality: Its nature and justification , New York: Oxford University Press.
  • Gibbard, Allan, 1965. “Rule-utilitarianism: Merely an illusory alternative?,” Australasian Journal of Philosophy , 43: 211–220.
  • Goldman, Holly S., 1974. “David Lyons on utilitarian generalization,” Philosophical Studies , 26: 77–95.
  • Greene, J. D., 2014. “Beyond point-and-shoot morality: Why cognitive (neuro)science matters for ethics,” Ethics , 124: 695–726.
  • Habermas, J., 1984. The theory of communicative action: Vol. I, Reason and the rationalization of society , Boston: Beacon Press.
  • Haidt, J., 2001. “The emotional dog and its rational tail: A social intuitionist approach to moral judgment,” Psychological Review , 108: 814–34.
  • Hare, R. M., 1981. Moral thinking: Its levels, method, and point , Oxford: Oxford University Press.
  • Harman, G., 1986. Change in view: Principles of reasoning , Cambridge, Mass.: MIT Press.
  • Held, V., 1995. Justice and care: Essential readings in feminist ethics , Boulder, Colo.: Westview Press.
  • Hieronymi, P., 2013. “The use of reasons in thought (and the use of earmarks in arguments),” Ethics , 124: 124–27.
  • Horty, J. F., 2012. Reasons as defaults , Oxford: Oxford University Press.
  • –––, 2016. “Reasoning with precedents as constrained natural reasoning,” in E. Lord and B. McGuire (eds.), Weighing Reasons , Oxford: Oxford University Press: 193–212.
  • Hume, D., 2000 [1739–40]. A treatise of human nature , ed. D. F. Norton and M. J. Norton, Oxford: Oxford University Press.
  • Hurley, S. L., 1989. Natural reasons: Personality and polity , New York: Oxford University Press.
  • Jonsen, A. R., and S. Toulmin, 1988. The abuse of casuistry: A history of moral reasoning , Berkeley: University of California Press.
  • Kagan, S., 1988. “The additive fallacy,” Ethics , 99: 5–31.
  • Knobe, J., 2006. “The concept of individual action: A case study in the uses of folk psychology,” Philosophical Studies , 130: 203–231.
  • Koenigs, M., 2007. “Damage to the prefrontal cortex increases utilitarian moral judgments,” Nature , 446: 908–911.
  • Kolodny, N., 2005. “Why be rational?” Mind , 114: 509–63.
  • Korsgaard, C. M., 1996. Creating the kingdom of ends , Cambridge: Cambridge University Press.
  • Laden, A. S., 2012. Reasoning: A social picture , Oxford: Oxford University Press.
  • Lance, M. and Little, M., 2007. “Where the Laws Are,” in R. Shafer-Landau (ed.), Oxford Studies in Metaethics (Volume 2), Oxford: Oxford University Press.
  • List, C. and Pettit, P., 2011. Group agency: The possibility, design, and status of corporate agents , Oxford: Oxford University Press.
  • McDowell, John, 1998. Mind, value, and reality , Cambridge, Mass.: Harvard University Press.
  • McGrath, S., 2009. “The puzzle of moral deference,” Philosophical Perspectives , 23: 321–44.
  • McKeever, S. and Ridge, M. 2006., Principled Ethics: Generalism as a Regulative Idea , Oxford: Oxford University Press.
  • McNaughton, D., 1988. Moral vision: An introduction to ethics , Oxford: Blackwell.
  • Mill, J. S., 1979 [1861]. Utilitarianism , Indianapolis: Hackett Publishing.
  • Millgram, E., 1997. Practical induction , Cambridge, Mass.: Harvard University Press.
  • Mikhail, J., 2011. Elements of moral cognition: Rawls’s linguistic analogy and the cognitive science of moral and legal judgment , Cambridge: Cambridge University Press.
  • Nell, O., 1975. Acting on principle: An essay on Kantian ethics , New York: Columbia University Press.
  • Nussbaum, M. C., 1990. Love’s knowledge: Essays on philosophy and literature , New York: Oxford University Press.
  • –––, 2001. Upheavals of thought: The intelligence of emotions , Cambridge, England: Cambridge University Press.
  • Pietroski, P. J., 1993. “Prima facie obligations, ceteris paribus laws in moral theory,” Ethics , 103: 489–515.
  • Prinz, J., 2007. The emotional construction of morals , Oxford: Oxford University Press.
  • Rachels, J., 1975. “Active and passive euthanasia,” New England Journal of Medicine , 292: 78–80.
  • Railton, P., 1984. “Alienation, consequentialism, and the demands of morality,” Philosophy and Public Affairs , 13: 134–71.
  • –––, 2014. “The affective dog and its rational tale: Intuition and attunement,” Ethics , 124: 813–59.
  • Rawls, J., 1971. A theory of justice , Cambridge, Mass.: Harvard University Press.
  • –––, 1996. Political liberalism , New York: Columbia University Press.
  • –––, 1999. A theory of justice , revised edition, Cambridge, Mass.: Harvard University Press.
  • –––, 2000. Lectures on the history of moral philosophy , Cambridge, Mass.: Harvard University Press.
  • Raz, J., 1990. Practical reason and norms , Princeton: Princeton University Press.
  • Richardson, H. S., 1994. Practical reasoning about final ends , Cambridge: Cambridge University Press.
  • –––, 2000. “Specifying, balancing, and interpreting bioethical principles,” Journal of Medicine and Philosophy , 25: 285–307.
  • –––, 2002. Democratic autonomy: Public reasoning about the ends of policy , New York: Oxford University Press.
  • –––, 2004. “Thinking about conflicts of desires,” in Practical conflicts: New philosophical essays , eds. P. Baumann and M. Betzler, Cambridge: Cambridge University Press, 96–117.
  • –––, 2018. Articulating the moral community: Toward a constructive ethical pragmatism , New York: Oxford University Press.
  • Ross, W. D., 1988. The right and the good , Indianapolis: Hackett.
  • Sandel, M., 1998. Liberalism and the limits of justice , Cambridge: Cambridge University Press.
  • Sartre, J. P., 1975. “Existentialism is a Humanism,” in Existentialism from Dostoyevsky to Sartre , ed. W. Kaufmann, New York: Meridian-New American, 345–69.
  • Scheffler, Samuel, 1992. Human morality , New York: Oxford University Press.
  • Schmidtz, D., 1995. Rational choice and moral agency , Princeton: Princeton University Press.
  • Schneewind, J.B., 1977. Sidgwick’s ethics and Victorian moral philosophy , Oxford: Oxford University Press.
  • Schroeder, M., 2011. “Holism, weight, and undercutting.” Noûs , 45: 328–44.
  • Schwitzgebel, E. and Cushman, F., 2012. “Expertise in moral reasoning? Order effects on moral judgment in professional philosophers and non-philosophers,” Mind and Language , 27: 135–53.
  • Sidgwick, H., 1981. The methods of ethics , reprinted, 7th edition, Indianapolis: Hackett.
  • Sinnott-Armstrong, W., 1988. Moral dilemmas , Oxford: Basil Blackwell.
  • Smith, M., 1994. The moral problem , Oxford: Blackwell.
  • –––, 2013. “A constitutivist theory of reasons: Its promise and parts,” Law, Ethics and Philosophy , 1: 9–30.
  • Sneddon, A., 2007. “A social model of moral dumbfounding: Implications for studying moral reasoning and moral judgment,” Philosophical Psychology , 20: 731–48.
  • Sugden, R., 1993. “Thinking as a team: Towards an explanation of nonselfish behavior,” Social Philosophy and Policy , 10: 69–89.
  • Sunstein, C. R., 1996. Legal reasoning and political conflict , New York: Oxford University Press.
  • Tiberius, V., 2000. “Humean heroism: Value commitments and the source of normativity,” Pacific Philosophical Quarterly , 81: 426–446.
  • Vogler, C., 1998. “Sex and talk,” Critical Inquiry , 24: 328–65.
  • Wellman, H. and Miller, J., 2008. “Including deontic reasoning as fundamental to theory of mind,” Human Development , 51: 105–35.
  • Williams, B., 1981. Moral luck: Philosophical papers 1973–1980 , Cambridge: Cambridge University Press.
  • Wolff, R. P., 1998. In defense of anarchism , Berkeley and Los Angeles: University of California Press.
  • Young, L. and Saxe, R., 2008. “The neural basis of belief encoding and integration in moral judgment,” NeuroImage , 40: 1912–20.

agency: shared | intentionality: collective | moral dilemmas | moral particularism | moral particularism: and moral generalism | moral relativism | moral skepticism | practical reason | prisoner’s dilemma | reflective equilibrium | value: incommensurable

Acknowledgments

The author is grateful for help received from Gopal Sreenivasan and the students in a seminar on moral reasoning taught jointly with him, to the students in a more recent seminar in moral reasoning, and, for criticisms received, to David Brink, Margaret Olivia Little and Mark Murphy. He welcomes further criticisms and suggestions for improvement.

Copyright © 2018 by Henry S. Richardson < richardh @ georgetown . edu >

The Stanford Encyclopedia of Philosophy is copyright © 2023 by The Metaphysics Research Lab , Department of Philosophy, Stanford University

Library of Congress Catalog Data: ISSN 1095-5054

Ethical Judgments: What Do We Know, Where Do We Go?

  • Published: 08 August 2012
  • Volume 115, pages 575–597 (2013)

  • Peter E. Mudrack
  • E. Sharon Mason

Investigations into ethical judgments generally seem fuzzy as to the relevant research domain. We first attempted to clarify the construct and determine domain parameters. This attempt required addressing difficulties associated with pinpointing relevant literature, most notably the varied nomenclature used to refer to ethical judgments (individual evaluations of actions’ ethicality). Given this variation in construct nomenclature and the difficulties it presented in identifying pertinent focal studies, we elected to focus on research that cited papers featuring prominent and often-used measures of ethical judgments (primarily, but not exclusively, the Multidimensional Ethics Scale). Our review of these studies indicated a preponderance of inferences and conclusions unwarranted by empirical evidence (likely attributable at least partly to inconsistent nomenclature). Moreover, ethical judgments related consistently to few respondent characteristics or any other variables, emergent relationships may not always be especially meaningful, and much research seems inclined to repetition of already verified findings. Although we concluded that knowledge about ethical judgments seems not to have advanced appreciably after decades of investigation, we suggested a possible path forward that focuses on the content of what is actually being judged as reflected in the myriad of vignettes used in the literature to elicit judgments.

* indicates a study that reported the results of factor analyses of scores on ethical judgments survey items. (R) refers to a study that did little more than replicate Reidenbach and Robin (1990). (H) refers to a study that advanced at least one specific hypothesis concerning connections between ethical judgments and behavioral intentions (or between different indicators of intentions).

Ayers, S., & Kaplan, S. E. (2005). Wrongdoing by consultants: An examination of employees’ reporting intentions. Journal of Business Ethics, 57 , 121–137. (H).

Babin, B. J., & Babin, L. A. (1996). Effects of moral cognitions and customer emotions on shoplifting intentions. Psychology and Marketing, 13 , 785–803. (R).*

Babin, B. J., Griffin, M., & Boles, J. S. (2004). Buyer reactions to ethical beliefs in the retail environment. Journal of Business Research, 57 , 1155–1163.*

Bailey, W., & Spicer, A. (2007). When does national identity matter? Convergence and divergence in international business ethics. Academy of Management Journal, 50 , 1462–1480.

Barnett, T. (2001). Dimensions of moral intensity and ethical decision making: An empirical study. Journal of Applied Social Psychology, 31 , 1038–1057.

Barnett, T., Bass, K., & Brown, G. (1994a). Ethical ideology and ethical judgment regarding ethical issues in business. Journal of Business Ethics, 13 , 469–480.

Barnett, T., Bass, K., & Brown, G. (1996). Religiosity, ethical ideology, and intentions to report a peer’s wrongdoing. Journal of Business Ethics, 15 , 1161–1174.

Barnett, T., Bass, K., Brown, G., & Hebert, F. J. (1998). Ethical ideology and the ethical judgments of marketing professionals. Journal of Business Ethics, 17 , 715–723.

Barnett, T., Brown, G., & Bass, K. (1994b). The ethical judgments of college students regarding business issues. Journal of Education for Business, 69 , 333–338.

Barnett, T., & Vaicys, C. (2000). The moderating effect of individuals’ perceptions of ethical work climate on ethical judgments and behavioral intentions. Journal of Business Ethics, 27 , 351–362.

Barnett, T., & Valentine, S. (2004). Issue contingencies and marketers’ recognition of ethical issues, ethical judgments and behavioral intentions. Journal of Business Research, 57 , 338–346.

Bass, K., Barnett, T., & Brown, G. (1999). Individual difference variables, ethical judgments, and ethical behavioral intentions. Business Ethics Quarterly, 9 , 183–205. (H).

Bateman, C. R., Fraedrich, J. P., & Iyer, R. (2003). The integration and testing of the Janus-headed model within marketing. Journal of Business Research, 56 , 587–596.

Bateman, C. R., Valentine, S., & Rittenburg, T. (2012). Ethical decision making in a peer-to-peer file sharing situation: the role of moral absolutes and social consensus. Journal of Business Ethics . doi: 10.1007/s10551-012-1388-1

Bay, D., & Nikitkov, A. (2011). Subjective probability assessments of the incidence of unethical behavior: The importance of scenario-respondent fit. Business Ethics: A European Review, 20 , 1–11.

Beekun, R. I., Hamdy, R., Westerman, J. W., & HassabElnaby, H. R. (2008). An exploration of ethical decision-making processes in the United States and Egypt. Journal of Business Ethics, 82 , 587–605.

Beekun, R. I., Stedham, Y., Westerman, J. W., & Yamamura, J. H. (2010). Effects of justice and utilitarianism on ethical decision making: A cross-cultural examination of gender similarities and differences. Business Ethics: A European Review, 19 , 309–325.

Beekun, R. I., Stedham, Y., & Yamamura, J. H. (2003a). Business ethics in Brazil and the U.S.: A comparative investigation. Journal of Business Ethics, 42 , 267–279.

Beekun, R. I., Stedham, Y., Yamamura, J. H., & Barghouti, J. A. (2003b). Comparing business ethics in Russia and the U.S. International Journal of Human Resource Management, 14 , 1333–1349.

Beekun, R. I., Westerman, J., & Barghouti, J. (2005). Utility of ethical frameworks in determining behavioral intention: A comparison of the U.S. and Russia. Journal of Business Ethics, 61 , 235–247.

Boyle, B. A. (2000). The impact of customer characteristics and moral philosophies on ethical judgments of salespeople. Journal of Business Ethics, 23 , 249–267.

Brunton, M., & Eweje, G. (2010). The influence of culture on ethical perception held by business students in a New Zealand University. Business Ethics: A European Review, 19 , 349–362. (R).

Bucar, B., Glas, M., & Hisrich, R. D. (2003). Ethics and entrepreneurs: An international comparative study. Journal of Business Venturing, 18 , 261–281.

Buchan, H. F. (2005). Ethical decision making in the public accounting profession: An extension of Ajzen’s theory of planned behavior. Journal of Business Ethics, 61 , 165–181. (H)*.

Carroll, A. B., & Buchholtz, A. K. (2012). Business and society: Ethics, sustainability, and stakeholder management (8th ed.). Mason, OH: South-Western.

Chan, R. Y. K., Wong, Y. H., & Leung, T. K. P. (2008). Applying ethical concepts to the study of “Green” consumer behavior: An analysis of Chinese consumers’ intentions to bring their own shopping bags. Journal of Business Ethics, 79 , 469–481. (H).

Chen, M. F., Pan, C. T., & Pan, M. C. (2009). The joint moderating impact of moral intensity and moral judgment on consumer’s use intention of pirated software. Journal of Business Ethics, 90 , 361–373. (H).

Cherry, J. (2006). The impact of normative influence and locus of control on ethical judgments and intentions: A cross-cultural comparison. Journal of Business Ethics, 68 , 113–132.

Cherry, J., & Fraedrich, J. (2000). An empirical investigation of locus of control and the structure of moral reasoning: Examining the ethical decision-making processes of sales managers. Journal of Personal Selling and Sales Management, 20 , 173–188.

Cherry, J., & Fraedrich, J. (2002). Perceived risk, moral philosophy and marketing ethics: Mediating influences on sales managers’ ethical decision-making. Journal of Business Research, 55 , 951–962.

Cherry, J., Lee, M., & Chien, C. S. (2003). A cross-cultural application of a theoretical model of business ethics: Bridging the gap between theory and data. Journal of Business Ethics, 44 , 359–376. (H).

Chiu, R. K. (2003). Ethical judgment and whistleblowing intention: Examining the moderating role of locus of control. Journal of Business Ethics, 43 , 65–74. (H).

Chiu, R. K., & Erdener, C. B. (2003). The ethics of peer reporting in Chinese societies: Evidence from Hong Kong and Shanghai. International Journal of Human Resource Management, 14 , 335–353. (H).

Christie, R., & Geis, F. L. (1970). Studies in Machiavellianism . New York: Academic Press.

Clark, J. W., & Dawson, L. E. (1996). Personal religiousness and ethical judgments: An empirical analysis. Journal of Business Ethics, 15 , 359–372.*

Cohen, J. R., & Bennie, N. M. (2006). The applicability of a contingent factors model to accounting ethics research. Journal of Business Ethics, 68 , 1–18.

Cohen, J., Pant, L., & Sharp, D. (1993). A validation and extension of a multidimensional ethics scale. Journal of Business Ethics, 12 , 13–26. (R).*

Cohen, J. R., Pant, L. W., & Sharp, D. J. (1995). An exploratory examination of international differences in auditors’ ethical perceptions. Behavioral Research in Accounting, 7 , 37–64. (R).

Cohen, J. R., Pant, L. W., & Sharp, D. J. (1996). Measuring the ethical awareness and ethical orientation of Canadian auditors. Behavioral Research in Accounting, 8 , 98–119. (R).

Cohen, J. R., Pant, L. W., & Sharp, D. J. (1998). The effect of gender and academic discipline diversity on the ethical evaluations, ethical intentions and ethical orientation of potential public accounting recruits. Accounting Horizons, 12 , 250–270.*

Cohen, J. R., Pant, L. W., & Sharp, D. J. (2001). An examination of differences in ethical decision-making between Canadian business students and accounting professionals. Journal of Business Ethics, 30 , 319–336. (R).*

Cole, B. C., & Smith, D. L. (1996). Perceptions of business ethics: Students vs business people. Journal of Business Ethics, 15 , 889–896.

Collins, D. (2000). The quest to improve the human condition: The first 1500 articles published in journal of business ethics. Journal of Business Ethics, 26 , 1–73.

Cruz, C. A., Shafer, W. E., & Strauser, J. R. (2000). A multidimensional analysis of tax practitioners’ ethical judgments. Journal of Business Ethics, 24 , 223–244. (R).*

Dabholkar, P. A., & Kellaris, J. J. (1992). Toward understanding marketing students’ ethical judgment of controversial selling practices. Journal of Business Research, 24 , 313–329.

Davis, M. A., Andersen, M. G., & Curtis, M. B. (2001). Measuring ethical ideology in business ethics: A critical analysis of the ethics position questionnaire. Journal of Business Ethics, 32 , 35–53.*

Dean, D. H. (2004). Perceptions of the ethicality of consumer insurance claim fraud. Journal of Business Ethics, 54 , 67–79.

Ding, C. G., Chang, K., & Liu, N. T. (2009). The roles of personality and general ethical judgments in intention to not repay credit card expenses. Service Industries Journal, 29 , 813–834.

Dornoff, R. J., & Tankersley, C. B. (1975). Perceptual differences in market transactions: A source of consumer frustration. Journal of Consumer Affairs, 97 , 97–103.

Dubinsky, A. J., Nataraajan, R., & Huang, W. Y. (2004). The influence of moral philosophy on retail salespeoples’ ethical perceptions. Journal of Consumer Affairs, 38 , 297–319.

Dunfee, T. W. (2006). A critical perspective of integrative social contracts theory: Recurring criticisms and next generation research topics. Journal of Business Ethics, 68 , 303–328.

Duska, R. (1996). Ethics, law, and the social sciences: Reflections on Robin, King, and Reidenbach. American Business Law Journal, 34 , 301–316.

Eastman, J. K., Eastman, K. L., & Tolson, M. A. (2001). The relationship between ethical ideology and ethical behavioral intentions: An exploratory look at physicians’ responses to managed care dilemmas. Journal of Business Ethics, 31 , 209–224.

Ellis, T. S., & Griffith, D. (2001). The evaluation of IT ethical scenarios using a multidimensional scale. The Data Base for Advances in Information Systems, 32 , 75–84. (R).*

Emerson, T. L. N., Conroy, S. J., & Stanley, C. W. (2007). Ethical attitudes of accountants: Recent evidence from a practitioners’ survey. Journal of Business Ethics, 71 , 73–87.

Eweje, G., & Brunton, M. (2010). Ethical perceptions of business students in a New Zealand university: Do gender, age, and work experience matter? Business Ethics: A European Review, 19 , 95–111.

Eynon, G., Hill, N. Y., & Stevens, K. T. (1997). Factors that influence the moral reasoning abilities of accountants: Implications for universities and the profession. Journal of Business Ethics, 16 , 1297–1309.

Fennell, D. A., & Malloy, D. C. (1999). Measuring the ethical nature of tourism operators. Annals of Tourism Research, 26 , 928–943.

Fleischman, G., & Valentine, S. (2003). Professionals’ tax liability assessments and ethical evaluations in an equitable relief innocent spouse case. Journal of Business Ethics, 42 , 27–44.

Flory, S. M., Phillips, T. J., Jr., Reidenbach, R. E., & Robin, D. P. (1992). A multidimensional analysis of selected ethical issues in accounting. Accounting Review, 67 , 284–302. (R).*

Flory, S. M., Phillips, T. J, Jr., Reidenbach, R. E., & Robin, D. P. (1993). A reply to “A Comment on a Multidimensional Analysis of Selected Ethical Issues in Accounting”. Accounting Review, 68 , 417–421.

Forsyth, D. R. (1980). A taxonomy of ethical ideologies. Journal of Personality and Social Psychology, 39 , 175–184.

Froelich, K. S., & Kottke, J. L. (1991). Measuring individual beliefs about organizational ethics. Educational and Psychological Measurement, 51 , 377–383.

Ge, L., & Thomas, S. (2008). A cross-cultural comparison of the deliberative reasoning of Canadian and Chinese accounting students. Journal of Business Ethics, 82 , 189–211.

Haines, R., Street, M. D., & Haines, D. (2008). The influence of perceived importance of an ethical issue on moral judgment, moral obligation, and moral intent. Journal of Business Ethics, 81 , 387–399.*

Hansen, R. S. (1992). A multidimensional scale for measuring business ethics: A purification and a refinement. Journal of Business Ethics, 11 , 523–534.*

Hebert, F. J., Bass, K. E., & Tomkiewicz, J. (2002). Ethics in family vs nonfamily owned businesses. Psychological Reports, 91 , 952–954.

Henderson, B. C., & Kaplan, S. E. (2005). An examination of the role of ethics in tax compliance decisions. Journal of the American Taxation Association, 27 , 39–72. (H).*

Henthorne, T. L., & LaTour, M. S. (1995). A model to explore the ethics of erotic stimuli in print advertising. Journal of Business Ethics, 14 , 561–569.

Henthorne, T. L., Robin, D. P., & Reidenbach, R. E. (1992). Identifying the gaps in ethical perceptions between managers and salespersons: A multidimensional approach. Journal of Business Ethics, 11 , 849–856.

Herndon, N. C, Jr., Fraedrich, J. P., & Yeh, Q. J. (2001). An investigation of moral values and the ethical content of corporate culture: Taiwanese versus US sales people. Journal of Business Ethics, 30 , 73–85.

Hsu, Y. H., Fang, W., & Lee, Y. (2009). Ethically questionable behavior in sales representatives—An example from the Taiwanese pharmaceutical industry. Journal of Business Ethics, 88 , 155–166.

Hudson, S. (2007). To go or not to go? Ethical perspectives on tourism in an “Outpost of Tyranny”. Journal of Business Ethics, 76 , 385–396.

Hudson, S., Hudson, D., & Peloza, J. (2008). Meet the parents: A parents’ perspective on product placement in children’s films. Journal of Business Ethics, 80 , 289–304.

Hudson, S., & Miller, G. (2005). Ethical orientation and awareness of tourism students. Journal of Business Ethics, 62 , 383–396.

Humphreys, N., Robin, D. P., Reidenbach, R. E., & Moak, D. L. (1993). The ethical decision making process of small business owner/managers and their customers. Journal of Small Business Management, 31 , 9–22. (R).

Jones, W. A, Jr. (1990). Student views of “Ethical” issues: A situational analysis. Journal of Business Ethics, 9 , 201–205.

Jones, T. M. (1991). Ethical decision making by individuals in organizations: An issue-contingent model. Academy of Management Review, 16 , 366–395.

Jones, J. L., & Middleton, K. L. (2007). Ethical decision-making by consumers: The roles of product harm and consumer vulnerability. Journal of Business Ethics, 70 , 247–264.

Jung, I. (2009). Ethical judgments and behaviors: Applying a multidimensional ethics scale to measuring ICT ethics of college students. Computers and Education, 53 , 940–949. (R/H).*

Kaplan, S. E., Samuels, J. A., & Thorne, L. (2009). Ethical norms of CFO insider trading. Journal of Accounting and Public Policy, 28 , 386–400.

Kaynama, S. A., King, A., & Smith, L. W. (1996). The impact of a shift in organizational role on ethical perceptions: A comparative study. Journal of Business Ethics, 15 , 581–590.

Klein, J. G., & Smith, N. C. (2004). Forewarning and debriefing as remedies to deception in consumer research. Advances in Consumer Research, 31 , 759–765.*

Kleiser, S. B., Sivadas, E., Kellaris, J. J., & Dahlstrom, R. F. (2003). Ethical ideologies: efficient assessment and influence on ethical judgments of marketing practices. Psychology and Marketing, 20 , 1–21.*

Knotts, T. L., Lopez, T. B., & Mesak, H. I. (2000). Ethical judgments of college students: An empirical analysis. Journal of Education for Business, 75 , 158–163.

Kohlberg, L. (1976). Moral stages and moralization: The cognitive-developmental approach. In T. Lickona (Ed.), Moral development and behavior (pp. 31–53). New York: Holt, Rinehart and Winston.

Kujala, J. (2001). A multidimensional approach to Finnish managers’ moral decision-making. Journal of Business Ethics, 34 , 231–254.*

Kujala, J., Lamsa, A. M., & Penttila, K. (2011). Managers’ moral decision-making patterns over time: A multidimensional approach. Journal of Business Ethics, 100 , 191–207.*

Kujala, J., & Pietilainen, T. (2007). Female managers’ ethical decision-making: A multidimensional approach. Journal of Business Ethics, 70 , 153–163.*

LaFleur, E. K., Reidenbach, R. E., Robin, D. P., & Forrest, P. J. (1996). An exploration of rule configuration effects on the ethical decision processes of advertising professionals. Journal of the Academy of Marketing Science, 24 (1), 66–76.*

Landeros, R., & Plank, R. E. (1996). How ethical are purchasing management professionals? Journal of Business Ethics, 15 , 789–803.*

Larkin, J. M. (2000). The ability of internal auditors to identify ethical dilemmas. Journal of Business Ethics, 23 , 401–409.

Latif, D. A. (2000). Ethical cognition and selection-socialization in retail pharmacy. Journal of Business Ethics, 25 , 343–357.

LaTour, M. S., & Henthorne, T. L. (1994). Ethical judgments of sexual appeals in print advertising. Journal of Advertising, 23 (3), 81–90.*

LaTour, M. S., Snipes, R. L., & Bliss, S. J. (1996). Don’t be afraid to use fear appeals: An experimental study. Journal of Advertising Research, 36 (2), 59–67.

Lin, C. Y., & Ho, Y. H. (2008). An examination of cultural differences in ethical decision making using the multidimensional ethics scale. Social Behavior and Personality, 36 , 1213–1222. (R).*

Loo, R. (2001). Encouraging classroom discussion of ethical dilemmas in research management: Three vignettes. Teaching Business Ethics, 5 , 195–212.

Loo, R. (2002). Tackling ethical dilemmas in project management using vignettes. International Journal of Project Management, 20 , 489–495.

Loo, R. (2004). Support for Reidenbach and Robin’s (1990) eight-item multidimensional ethics scale. Social Science Journal, 41 , 289–294.*

Marques, P. A., & Azevedo-Pereira, J. (2009). Ethical ideology and ethical judgments in the Portuguese accounting profession. Journal of Business Ethics, 86 , 227–242.

McDonald, G. (2000). Cross-cultural methodological issues in ethical research. Journal of Business Ethics, 27 , 89–104.

McDonald, G., & Pak, P. C. (1996). It’s all fair in love, war, and business: Cognitive philosophies in ethical decision making. Journal of Business Ethics, 15 , 973–996.

McMahon, J. M., & Cohen, R. (2009). Lost in cyberspace: Ethical decision making in the online environment. Ethics and Information Technology, 11 , 1–17.

McMahon, J. M., & Harvey, R. J. (2006). An analysis of the factor structure of Jones’ moral intensity construct. Journal of Business Ethics, 64 , 381–404.*

McMahon, J. M., & Harvey, R. J. (2007a). The effect of moral intensity on ethical judgment. Journal of Business Ethics, 72 , 335–357.*

McMahon, J. M., & Harvey, R. J. (2007b). Psychometric properties of the Reidenbach-Robin multidimensional ethics scale. Journal of Business Ethics, 72 , 27–39.*

Mittal, B., & Lassar, W. M. (2000). Sexual liberalism as a determinant of consumer response to sex in advertising. Journal of Business and Psychology, 15 , 111–127.*

Mudrack, P. E., Bloodgood, J. M., & Turnley, W. H. (2012). Some ethical implications of individual competitiveness. Journal of Business Ethics, 108 , 347–359.

Mudrack, P. E., & Mason, E. S. (2010). The asceticism dimension of the protestant work ethic: Shedding its status of invisibility. Journal of Applied Social Psychology, 40 , 2043–2070.

Nguyen, N. T., Basuray, M. T., Smith, W. P., Kopka, D., & McCulloh, D. (2008). Moral issues and gender differences in ethical judgment using Reidenbach and Robin’s (1990) multidimensional ethics scale: Implications in teaching of business ethics. Journal of Business Ethics, 77 , 417–430.*

Nguyen, N. T., & Biderman, M. D. (2008). Studying ethical judgments and behavioral intentions using structural equations: Evidence from the multidimensional ethics scale. Journal of Business Ethics, 83 , 627–640. (H).*

O’Fallon, M. J., & Butterfield, K. D. (2005). A review of the empirical ethical decision-making literature: 1996–2003. Journal of Business Ethics, 59 , 375–413.

Oumlil, A. B., & Balloun, J. L. (2009). Ethical decision-making differences between American and Moroccan managers. Journal of Business Ethics, 84 , 457–478.

Palau, S. L. (2001). Ethical evaluations, intentions, and orientations of accountants: Evidence from a cross-cultural examination. International Advances in Economic Research, 7 , 351–364. (R).*

Pan, Y., & Sparks, J. R. (2012). Predictors, consequence, and measurement of ethical judgments: Review and meta-analysis. Journal of Business Research, 65 , 84–91. (H).

Patel, C. (2003). Some cross-cultural evidence on whistle-blowing as an internal control mechanism. Journal of International Accounting Research, 2 , 69–96. (R).

Radtke, R. R. (2000). The effects of gender and setting on accountants’ ethically sensitive situations. Journal of Business Ethics, 24 , 299–312.

Rallapalli, K. C., Vitell, S. J., & Barnes, J. H. (1998). The influence of norms on ethical judgments and intentions: An empirical study of marketing professionals. Journal of Business Research, 43 , 157–168. (H).

Razzaque, M. A., & Hwee, T. P. (2002). Ethics and purchasing dilemma: A Singaporean view. Journal of Business Ethics, 35 , 307–326.*

Reichert, T., LaTour, M. S., & Ford, J. B. (2011). The naked truth: Revealing the affinity for graphic sexual appeals in advertising. Journal of Advertising Research, 51 , 436–448.

Reidenbach, R. E., & Robin, D. P. (1988). Some initial steps toward improving the measurement of ethical evaluations of marketing activities. Journal of Business Ethics, 7 , 871–879.

Reidenbach, R. E., & Robin, D. P. (1990). Toward the development of a multidimensional scale for improving evaluations of business ethics. Journal of Business Ethics, 9 , 639–653.

Reidenbach, R. E., Robin, D. P., & Dawson, L. (1991). An application and extension of a multidimensional ethics scale to selected marketing practices and marketing groups. Journal of the Academy of Marketing Science, 19 , 83–92. (R).

Rest, J. R. (1979). Development in judging moral issues . Minneapolis: University of Minnesota Press.

Rest, J. R. (1986). Moral development: Advances in theory and research . New York: Praeger.

Rittenburg, T. L., & Valentine, S. R. (2002). Spanish and American Executives’ ethical judgments and intentions. Journal of Business Ethics, 38 , 291–306.*

Robin, D. P., Gordon, G., Jordan, C., & Reidenbach, R. E. (1996a). The empirical performance of cognitive moral development in predicting behavioral intent. Business Ethics Quarterly, 6 , 493–515.

Robin, D. P., King, E. W., & Reidenbach, R. E. (1996b). The effect of attorneys’ perceived duty to client on their ethical decision making process. American Business Law Journal, 34 , 277–299.*

Robin, D. P., Reidenbach, R. E., & Forrest, P. J. (1996c). The perceived importance of an ethical issue as an influence on the ethical decision-making of ad managers. Journal of Business Research, 35 , 17–28.*

Robin, D. P., Reidenbach, R. E., & Babin, B. J. (1997). The nature, measurement, and stability of ethical judgments in the workplace. Psychological Reports, 80 , 563–580.

Rottig, D., Koufteros, X., & Umphress, E. (2011). Formal infrastructure and ethical decision making: An empirical investigation and implications for supply management. Decision Sciences, 42 , 163–204. (H).

Sarwono, S. S., & Armstrong, R. W. (2001). Microcultural differences and perceived ethical problems: An international business perspective. Journal of Business Ethics, 30 , 41–56.

Schepers, D. H. (2003). Machiavellianism, profit, and the dimensions of ethical judgment: A study of impact. Journal of Business Ethics, 42 , 339–352. (R).*

Schmidt, C. D., McAdams, C. R., & Foster, V. (2009). Promoting the moral reasoning of undergraduate business students through a deliberate psychological education-based classroom intervention. Journal of Moral Education, 38 , 315–334.

Schwepker, C. H., Jr. (1999). Understanding salespeople’s intention to behave unethically: The effects of perceived competitive intensity, cognitive moral development and moral judgment. Journal of Business Ethics, 21 , 303–316. (R).*

Schwepker, C. H., Jr., Ferrell, O. C., & Ingram, T. N. (1997). The influence of ethical climate and ethical conflict on role stress in the sales force. Journal of the Academy of Marketing Science, 25 , 99–108.

Schwepker, C. H., & Good, D. J. (2011). Moral judgment and its impact on business-to-business sales performance and customer relationships. Journal of Business Ethics, 98 , 609–625.

Schwepker, C. H., Jr., & Ingram, T. N. (1996). Improving sales performance through ethics: The relationship between salesperson moral judgment and job performance. Journal of Business Ethics, 15 , 1151–1160.*

Shafer, W. E. (2002). Effects of materiality, risk, and ethical perceptions on fraudulent reporting by financial executives. Journal of Business Ethics, 38 , 243–262.

Shafer, W. E. (2008). Ethical climate in Chinese CPA firms. Accounting, Organizations and Society, 33 , 825–835.*

Shafer, W. E., & Simmons, R. S. (2008). Social responsibility, Machiavellianism and tax avoidance. Accounting, Auditing and Accountability Journal, 21 , 695–720. (H).

Shaw, T. R. (2003). The moral intensity of privacy: An empirical study of webmasters’ attitudes. Journal of Business Ethics, 46 , 301–318.*

Shawver, T. J., & Clements, L. H. (2007). The intention of accounting students to whistleblow in situations of questionable ethical dilemmas. Research on Professional Responsibility and Ethics in Accounting, 11 , 177–191. (R/H).

Shawver, T. J., & Sennetti, J. T. (2009). Measuring ethical sensitivity and evaluation. Journal of Business Ethics, 88 , 663–678. (H).

Simpson, P. M., Brown, G., & Widing, R. E., II (1998). The association of ethical judgment of advertising and selected advertising effectiveness response variables. Journal of Business Ethics, 17 , 125–136.*

Singh, J. J., Vitell, S. J., Al-Khatib, J., & Clark, I., III. (2007). The role of moral intensity and personal moral philosophies in the ethical decision making of marketers: A cross-cultural comparison of China and the United States. Journal of International Marketing, 15 , 86–112. (H).

Singhapakdi, A. (1999). Perceived importance of ethics and ethical decisions in marketing. Journal of Business Research, 45 , 89–99.

Singhapakdi, A., Vitell, S. J., & Franke, G. R. (1999). Antecedents, consequences, and mediating effects of perceived moral intensity and personal moral philosophies. Journal of the Academy of Marketing Science, 27 , 19–36.

Smith, N. C., & Cooper-Martin, E. (1997). Ethics and target marketing: The role of product harm and consumer vulnerability. Journal of Marketing, 61 (3), 1–20.

Smith, N. C., Simpson, S. S., & Huang, C. Y. (2007). Why managers fail to do the right thing: An empirical study of unethical and illegal conduct. Business Ethics Quarterly, 17 , 633–667. (H).*

Snipes, R. L., LaTour, M. S., & Bliss, S. J. (1999). A model of the effects of self-efficacy on the perceived ethicality and performance of fear appeals in advertising. Journal of Business Ethics, 19 , 273–285.*

Sparks, J. R., & Pan, Y. (2010). Ethical judgments in business ethics research: Definition and research agenda. Journal of Business Ethics, 91 , 405–418.

Spicer, A., Dunfee, T. W., & Bailey, W. J. (2004). Does national context matter in ethical decision making? An empirical test of integrative social contracts theory. Academy of Management Journal, 47 , 610–620.

Staw, B. M. (1975). Attribution of the “Causes” of performance: A general alternative interpretation of cross-sectional research on organizations. Organizational Behavior and Human Performance, 13 , 414–432.

Stevenson, T. H., & Bodkin, C. D. (1998). A cross-national comparison of university students’ perceptions regarding the ethics and acceptability of sales practices. Journal of Business Ethics, 17 , 45–55.

Swimberghe, K., Flurry, L. A., & Parker, J. M. (2011a). Consumer religiosity: Consequences for consumer activism in the United States. Journal of Business Ethics, 103 , 453–467.

Swimberghe, K., Sharma, D., & Flurry, L. W. (2011b). Does a consumer’s religion really matter in the buyer-seller dyad? An empirical study examining the relationship between consumer religious commitment, Christian conservatism and the ethical judgment of a seller’s controversial business decision. Journal of Business Ethics, 102 , 581–598.

Tansey, R., Brown, G., Hyman, M. R., & Dawson, L. E., Jr. (1994). Personal moral philosophies and the moral judgments of salespeople. Journal of Personal Selling and Sales Management, 14 (1), 59–74.*

Tansey, R., Hyman, M. R., & Brown, G. (1992). Ethical judgments about wartime ads depicting combat. Journal of Advertising, 21 (3), 57–74.*

Taylor, E. Z., & Curtis, M. B. (2010). An examination of the layers of workplace influences in ethical judgments: Whistleblowing likelihood and perseverance in public accounting. Journal of Business Ethics, 93 , 21–37.

Thoma, S. J., Rest, J. R., & Davison, M. L. (1991). Describing and testing a moderator of the moral judgment and action relationship. Journal of Personality and Social Psychology, 61 , 659–669.

Tsalikis, J., & LaTour, M. S. (1995). Bribery and extortion in international business: Ethical perceptions of Greeks compared to Americans. Journal of Business Ethics, 14 , 249–264.

Tsalikis, J., & Nwachukwu, O. (1991). A comparison of Nigerian to American views of bribery and extortion in international commerce. Journal of Business Ethics, 10 , 85–98.

Tsalikis, J., & Ortiz-Buonafina, M. (1990). Ethical beliefs’ differences of males and females. Journal of Business Ethics, 9 , 509–517.

Tsalikis, J., Seaton, B., & Shepherd, P. L. (2001). Relativism in ethical research: A proposed model and mode of inquiry. Journal of Business Ethics, 32 , 231–246.

Tsalikis, J. B., Seaton, B., & Tomaras, P. (2002). A new perspective on cross-cultural ethical evaluations: The use of conjoint analysis. Journal of Business Ethics, 35 , 281–292.

Tuttle, B., Harrell, A., & Harrison, P. (1997). Moral hazard, ethical considerations, and the decision to implement an information system. Journal of Management Information Systems, 13 (4), 7–27. (R).*

Valentine, S., & Barnett, T. (2007). Perceived organizational ethics and the ethical decisions of sales and marketing personnel. Journal of Personal Selling and Sales Management, 27 , 373–388. (H).

Valentine, S. R., & Bateman, C. R. (2011). The impact of ethical ideologies, moral intensity, and social context on sales-based ethical reasoning. Journal of Business Ethics, 102 , 155–168.

Valentine, S., & Fleischman, G. (2003). Ethical reasoning in an equitable relief innocent spouse context. Journal of Business Ethics, 45 , 325–339.

Valentine, S., Fleischman, G. M., Sprague, R., & Godkin, L. (2010). Exploring the ethicality of firing employees who blog. Human Resource Management, 49 , 87–108. (H).

Valentine, S., & Hollingsworth, D. (2012). Moral intensity, issue importance, and ethical reasoning in operations situations. Journal of Business Ethics, 108 , 509–523.

Valentine, S. R., & Page, K. (2006). Nine to five: Skepticism of women’s employment and ethical reasoning. Journal of Business Ethics, 63 , 53–61.

Valentine, S. R., & Rittenburg, T. L. (2004). Spanish and American professionals’ ethical evaluations in global situations. Journal of Business Ethics, 51 , 1–13. (H).*

Valentine, S. R., & Rittenburg, T. L. (2007). The ethical decision making of men and women executives in international business situations. Journal of Business Ethics, 71 , 125–134.

Verbeke, W., Uwerkerk, C., & Peelen, E. (1996). Exploring the contextual and individual factors on ethical decision making of salespeople. Journal of Business Ethics, 15 , 1175–1187.

Vitell, S. J., Bakir, A., Paolillo, J. G. P., Hidalgo, E. R., Al-Khatib, J., & Rawwas, M. Y. A. (2003). Ethical judgments and intentions: A multinational study of marketing professionals. Business Ethics: A European Review, 12 , 151–171. (H).

Vitell, S. J., & Ho, F. N. (1997). Ethical decision making in marketing: A synthesis and evaluation of scales measuring the various components of decision making in ethical situations. Journal of Business Ethics, 16 , 699–717.

Vitell, S. J., & Muncy, J. (1992). Consumer ethics: An empirical investigation of factors influencing ethical judgments of the final consumer. Journal of Business Ethics, 11 , 585–597.

Vitell, S. J., & Patwardhan, A. (2008). The role of moral intensity and moral philosophy in ethical decision-making: A cross-cultural comparison of China and the European Union. Business Ethics: A European Review, 17 , 196–209.

Wagner, S. C., & Sanders, G. L. (2001). Considerations in ethical decision-making and software piracy. Journal of Business Ethics, 29 , 161–167.*

Westerman, J. W., Beekun, R. I., Stedham, Y., & Yamamura, J. H. (2007). Peers versus national culture: An analysis of antecedents to ethical decision-making. Journal of Business Ethics, 75 , 239–252. (H).

Wimalasiri, J. S., Pavri, F., & Jalil, A. A. K. (1996). An empirical study of moral reasoning among managers in Singapore. Journal of Business Ethics, 15 , 1331–1341.

Wu, C. (2003). A study of the adjustment of ethical recognition and ethical decision-making of managers-to-be across the Taiwan Strait before and after receiving a business ethics education. Journal of Business Ethics, 45 , 291–307.

Yoon, C. (2011a). Ethical decision-making in the internet context: Development and test of an initial model based on moral philosophy. Computers in Human Behavior, 27 , 2401–2409. (R/H).*

Yoon, C. (2011b). Theory of planned behavior and ethics theory in digital piracy: An integrated model. Journal of Business Ethics, 100 , 405–417. (H).*

Yoon, C. (2012). Digital piracy intention: A comparison of theoretical models. Behaviour and Information Technology, 31 , 565–576. (H).*

Zhang, J., Chiu, R., & Wei, L. (2009a). Decision-making process of internal whistleblowing behavior in China: Empirical evidence and implications. Journal of Business Ethics, 88 , 25–41. (H).

Zhang, J., Chiu, R. R., & Wei, L. Q. (2009b). On whistleblowing judgment and intention: The roles of positive mood and ethical culture. Journal of Managerial Psychology, 24 , 627–649. (H).


Author information

Authors and affiliations.

College of Business Administration, Kansas State University, 101 Calvin Hall, Manhattan, KS, 66503, USA

Peter E. Mudrack

Department of Organizational Behaviour, Human Resources, Entrepreneurship, Ethics, Brock University, St. Catharines, ON, Canada

E. Sharon Mason


Corresponding author

Correspondence to Peter E. Mudrack.

Negatively signed relationships occurred here when the direction of scoring was reversed for either judgments or intentions. For example, in the Barnett and Valentine (2004) study, higher “judgments” scores implied stronger beliefs that questionable actions were unethical, and higher “intentions” scores suggested greater tendencies to behave similarly (p. 342). Persons who regarded the activities as wrong (high judgments scorers) thus tended to score low on intentions. Although negatively signed correlations should perhaps be avoided because of the potential confusion they can cause, Barnett and Valentine (2004) were nonetheless unambiguously clear about the precise interpretations of high and low scores. Not all papers reviewed matched this level of clarity. For example, higher scores on ethical judgments have “represented items that are less consistent with the underlying philosophy” (Cruz et al. 2000, p. 228), “a high level of individual moral values” (Herndon et al. 2001, p. 76), “lower ethicality” (Rittenburg and Valentine 2002, p. 297), “higher agreement toward business problems perceived with ethical dilemma” (Sarwono and Armstrong 2001, p. 45), “higher moral value judgment” (Schwepker 1999, p. 306), and “greater generalized ethicality” (Valentine and Fleischman 2003, p. 331). Does greater “consistency” or higher “agreement” suggest that high scorers viewed the activities as more or less appropriate than low scorers? Does “lower ethicality” refer to the action depicted in the vignette or to the respondents themselves? Any ambiguity here makes results more difficult to interpret than they might otherwise be, and than they need to be. For example, Pan and Sparks (2012, p. 86) stated that, contrary to the general pattern of results from most studies, “Conversely, Barnett ( 2001 ) reports a negative relationship between ethical awareness and judgments”.
When discussing the ethical judgments measure, however, Barnett (2001) did not specify the precise meaning of high or low scores (pp. 1044–1045), but did reveal (p. 1048) that “recognizing an issue as having a moral component was associated with judgments that the actions were, indeed, unethical”, which suggests that a low score on ethical judgments meant that respondents viewed the activity as inappropriate. In effect, this negative relationship was consistent with the positive relationships reported in other studies, but did not appear to be interpreted as such.


About this article

Mudrack, P.E., Mason, E.S. Ethical Judgments: What Do We Know, Where Do We Go? J Bus Ethics 115 , 575–597 (2013). https://doi.org/10.1007/s10551-012-1426-z


Received : 05 April 2012

Accepted : 25 July 2012

Published : 08 August 2012

Issue Date : July 2013

DOI : https://doi.org/10.1007/s10551-012-1426-z


  • Ethical judgments
  • Literature review

Wharton Magazine


Show Your Logic

Avoid conflict and build trust by establishing the “why” behind decisions and sharing it with colleagues.


Asking Questions, Unlocking Solutions

How reframing a problem creates value for customers


The Future of Fast Food

Alumni dish on the industry's digital transformation.


On a meteoric rise through the fiercely competitive luxury retail market, high-end handbag brand Anima Iris has been picked up by Nordstrom, Saks Fifth Avenue, and even Beyoncé. With geometric and bold designs, founder Wilglory Tanjong G22 WG22 expresses her ancestry in a fashionable and sustainable way. The bags are made in Senegal by expert craftspeople who have honed their techniques over decades and draw inspiration from centuries of heritage. The leather and other materials are sourced through local African business merchants. Anima Iris is environmentally friendly and employs a zero-waste model that ensures all materials are used and that no two products are the same.


Bilt Rewards

Bilt Rewards launched in 2021 and achieved immediate success in its first year. The startup credit-card rewards program by founder and CEO Ankur Jain W11 makes redeeming points from purchases easy with a unique twist — the card can be used toward rent payments. Jain explains that renters today are living with inflation and rising rent costs, resulting in many who now must pay close to 50 percent of their earned income on rent. Bilt helps this generation build credit while earning rewards that open up affordability in other areas of their lives, such as travel experiences and eventual home ownership.


An organic coconut butter with its early roots in Venture Lab’s Food Innovation Lab can now be found in 1,300 stores, including national chains Sprouts and Wegmans. Couple-turned-business-partners Breanna Golestani WG23 and Jared Golestani WG23 founded Kokada in 2020 to provide a healthier alternative to sugar-laden snacks and spreads typically found at the grocery store. Kokada offers a range of coconut butters that are all peanut-free and sugar-free and designed to be enjoyed as a dip, with a treat, or as part of a meal. The company gives back two percent of all sales to SERVE, a certified NGO based in Sri Lanka, where its ingredients are sourced.


Flagler Health

Developed by Albert Katz WG23 and Will Hu GED19, Flagler Health combines patient data and the power of AI to help physicians recommend treatments to their patients. (“It’s like giving a calculator to a mathematician,” says Katz.) Backed by $6 million in funding, Flagler Health now serves more than 1.5 million patients and recently launched a new product that provides remote patients with exercises to keep joints moving pre- and post-op. The startup made the Poets & Quants “Most Disruptive MBA Startups of 2023” list and was a finalist in Penn’s 2023 Venture Lab Startup Challenge.


Catching Eyes in the Attention Economy

New research shows how to use language to capture audience attention, from word choice to building suspense.


Juggling multiple vendors can be daunting for a small-business owner. Certa, led by CEO Jagmeet Lamba WG07 and CFO Dudley Brundige WG07, streamlines relationships with third-party vendors, making onboarding up to three times faster. The platform itself can reduce IT labor needs, allowing users to create personalized workflows. The company also has its own AI technology — CertaAssist — that can fill out supplier questionnaires, consolidate intake requests, and create data visualizations. Certa’s clients include Uber, Instacart, and Box, whose executives have reported reduced cycle times and operating costs after using the procurement software.


United for a Brighter Future

Dean Erika James reflects on opportunities for the Wharton community to come together and lead.


On the Scene

From Hong Kong to New York, Wharton alumni unite for Impact Tour gatherings, GOLD events, good music, and more.


Sigo Seguros

Spanish remains the most widely used language in the U.S. after English, with an estimated 41 million current speakers. But Hispanic immigrants still face cultural barriers when they arrive in the States. Nestor Hugo Solari G19 WG19 and Júlio Erdos C10 ENG10 G19 WG19 created Sigo Seguros, a bilingual Texas-based car insurance technology company, to better serve this population. “Our differentiated product starts with a deep understanding of our community and its needs,” says Solari. The Spanish-language mobile and web portals, coupled with quick payback periods, are particularly appealing to working-class drivers. The “insurtech” company raised $5.1 million in additional pre-seed funding in 2023.


Supercharge Your Startup

Resources to help you jump-start your venture’s growth


Cancer can bring your life to a screeching halt. Along with the burden of navigating through new medical terminology and uncertainty, a positive diagnosis can generate feelings of loneliness and isolation. CancerIQ was founded by Feyi Olopade Ayodele W05 WG12 to offer a supportive and more strategic solution for health-care providers working with patients in early cancer detection and prevention. As a software platform, CancerIQ offers hyper-personalized care plans and assesses risks in patients by avoiding the one-size-fits-all approach. The tool focuses on early detection with more precise screening. CancerIQ has been implemented in more than 200 clinic locations across the U.S.


Every day seems to bring a new way to send, receive, or manage money. Managing cash flow on numerous platforms has become quite onerous, non? Au contraire . Piere, an AI-powered app founded by Yuval Shmul Shuminer W19, analyzes past transactions to create a customized budget in two taps. It’s a peer-to-peer facilitator (for such tasks as getting reimbursed for a group meal) and a spending tracker in one. Since Intuit shut down its popular Mint budgeting app, Piere is reported to be the ideal successor: News outlets have featured the app as part of the “loud budgeting” social media trend, and financial publications highlight it as a valuable tool for monitoring spending.


Heidi Block WG95 and her family first got hooked on pickleball during the COVID-19 lockdowns, when they played the sport together at home in New Jersey to pass the time and stay active. But when Block couldn’t find apparel specifically designed for pickleball, she decided to make her own. Along with her eldest son, Max, she founded Play-PKL, an online retailer selling premium pickleball equipment and stylish outfits for recreational players. The site also offers tips and lessons for beginning pickleballers.

Ethics and Judgment: Judging What Is More Right and Less Wrong


The following is taken from Lawrence Zicklin’s address to incoming students at this year’s MBA Student Convocation.

When we talk about ethics, we’re not talking about the obvious, or about what is right and wrong. Rather, we’re talking about the subtleties of the gray area and what is more right and less wrong. We’re talking about the moral minefield of a business life, where so many decisions involve not only business judgments, but ethical problems that have the potential to harm both you and your company. You will be making those decisions in that minefield almost every day.

Good ethics inevitably involves good judgment, and that means not taking unnecessary risks. If the risk is high enough, it doesn’t matter what the probability is: you can’t afford it. You only have to be wrong once to destroy your reputation or that of your company. That’s where judgment enters the picture. It’s judgment that separates the mediocre from the superior.

So what is judgment? It’s your capacity to understand a situation early and then take the proper action before it’s too late. Developing good judgment means being curious, listening, and learning from your mistakes. Successful executives are also able to learn from the errors of others and to change their view when the evidence demands change. And judgment isn’t all about experience: you need to apply intelligent reasoning to that experience, or it doesn’t work.

To have good judgment you have to be able to look at the same set of facts as everyone else but draw better conclusions. Remember that judgment comes from looking at a mosaic of incomplete information, at a time that is not of your choosing, while the clock is ticking and the pressure to make a decision is most intense.

Permit me to offer you a few tips:

  • Staying out of trouble is difficult, but getting out of trouble is impossible.
  • Put a special emphasis on working with companies and people of good character.
  • Be tolerant of business errors but be ruthless with people who put your reputation or your company’s reputation in danger.
  • Solving your problems before they become critical will pay dividends.
  • Be sensitive to facts that conflict with your preconceived notions.
  • You have to be disciplined to accept what is, and change your view when what is conflicts with what you would like or expect.
  • Have courage; people respect those who express an honest difference of opinion.
  • Pay attention to the New York Times theory: would you be embarrassed if what you are about to do were on the front page?
  • Finally, make sure you tell the truth and keep your word.

In the end, one of the determinants of your success in school and in life will be your values and your judgment. You have all the talent and ability that you need to succeed; otherwise you wouldn’t be here. So take advantage of what is offered. Refine your values and sharpen your judgment. The rest will take care of itself.


Wharton Now

Profitability for Good


How business is the best chance for solving the world’s problems.


Life Lessons: A CFO's Path to Nonprofit Nirvana

Rick Perkins WG70 on revitalizing the Kimmel Center, the importance of adapting to different bosses, and knowing when to retire


A New Old-Fashioned Leadership

Startups and their founders grab headlines, but established businesses also need visionaries who can find success in the face of shifting business trends and new challengers.

  • Open access
  • Published: 11 May 2023

Moral judgement and decision-making: theoretical predictions and null results

  • Uri Hertz 1 ,
  • Fanli Jia 2 &
  • Kathryn B. Francis   ORCID: orcid.org/0000-0002-3875-8904 3  

Scientific Reports volume 13, Article number: 7688 (2023)


The study of moral judgement and decision making examines the way predictions made by moral and ethical theories fare in real world settings. Such investigations are carried out using a variety of approaches and methods, such as experiments, modeling, and observational and field studies, in a variety of populations. The current Collection on moral judgments and decision making includes works that represent this variety, while focusing on some common themes, including group morality and the role of affect in moral judgment. The Collection also includes a significant number of studies that made theoretically driven predictions and failed to find support for them. We highlight the importance of such null-results papers, especially in fields that are traditionally governed by theoretical frameworks.

Introduction

The study of moral judgement and decision making examines the way people behave and react to social and moral dilemmas. Moral and ethical theories usually provide the foundation for such efforts, supplying important constructs and definitions and even suggesting hypothetical experimental designs. A good example is the differentiation between deontological and utilitarian bases of moral action selection. Characteristically utilitarian approaches look at the overall benefit of each action, while characteristically deontological approaches set principles, prohibiting some actions regardless of their ultimate outcome. Both approaches provide predictions for moral decisions and use hypothetical scenarios such as personal versus impersonal trolley-type problems to illustrate the different predictions. In recent years researchers have been putting such theories to the test in a variety of experimental designs and populations. Translating theoretical hypotheses and constructs into an experimental paradigm or an operational prediction is not trivial. Participants’ individual traits and their cultural and societal contexts introduce variability and nuance into ethical theories. In addition, the technical need to build a robust and reliable experimental design, which can be evaluated using statistical tools, leads researchers to adopt experimental designs from other fields, such as economics and cognitive psychology.

Common themes in the collection

The current Collection invited works that employ a variety of paradigms and analysis tools to experimentally test predictions of moral judgement and decision-making. At the time this Editorial was written, the Collection covered several themes in moral judgment and decision making, with the included research using different experimental approaches.

One common theme regards the deontological-utilitarian response differences mentioned above, studied from different approaches. One study examined whether people tend to trust deontological decision makers more than utilitarians 1 , another looked at the persuasive effect of deontological and utilitarian messages 2 , and yet another examined the way depression affected utilitarian and deontological aspects of moral decisions 3 . Like other works in this Collection, these include experimental designs that relied on vignettes describing such moral dilemmas as the footbridge problem. To study how trust inference of moral decision-makers is moderated by several contextual factors, Bostyn et al. 1 used a behavioral game theory task, the trust game, in which participants transfer some of their money to a trustee in the hope that the trustee will reciprocate; the amount transferred indicates their level of trust. Other studies in this Collection used monetary transactions as a proxy for cooperation and trust, using trust games 4 , the public-goods game 5 , 6 , decisions under risk 7 , variations of the dictator game where participants split money with others 8 , 9 and paradigms in which participants gain money from harm to others 10 , 11 . The use of such different approaches in the study of the same topic is important, as it allows evidence to converge across different studies, each with its own weaknesses and strengths.
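The payoff structure that makes the trust game a measure of trust can be stated concretely. A minimal sketch, assuming a typical parameterization — the fixed endowment and the common tripling of the transferred amount are illustrative assumptions, not details taken from any study in the Collection:

```python
def trust_game_round(endowment: float, sent: float, returned: float,
                     multiplier: float = 3.0) -> tuple[float, float]:
    """Compute (trustor_payoff, trustee_payoff) for one trust-game round.

    `sent` is the trustor's transfer (the behavioral index of trust);
    `returned` is what the trustee sends back out of the multiplied amount.
    Parameter values are illustrative assumptions, not taken from any study.
    """
    if not 0 <= sent <= endowment:
        raise ValueError("transfer must be between 0 and the endowment")
    received = multiplier * sent
    if not 0 <= returned <= received:
        raise ValueError("return must be between 0 and the multiplied transfer")
    trustor_payoff = endowment - sent + returned
    trustee_payoff = received - returned
    return trustor_payoff, trustee_payoff

# A trustor who sends half of a 10-unit endowment and receives 6 back out of
# the tripled 15 ends the round with 11; the trustee keeps 9.
print(trust_game_round(10.0, 5.0, 6.0))  # -> (11.0, 9.0)
```

Because the transfer is multiplied before it reaches the trustee, sending more raises the pair's joint earnings but exposes the trustor to the trustee's discretion — which is exactly why the amount sent serves as a behavioral index of trust.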

Another common theme is the move beyond the single decision maker to examine group and collective effects on moral judgement and decisions. Two studies examined the effect of diffusion of moral responsibility and causality. Hansson et al. 9 studied the hypothesis that voting may lead to diffused responsibility, and through it to more selfish and immoral behavior. Keshmirian et al. 12 studied the moral judgment of individuals who committed moral transgressions on their own or in a group, where causal responsibility may appear to be diffused. Interestingly, no evidence was found for an effect of diffused responsibility on moral behavior, while diffusion was found to affect moral judgment and punishment. In another study, norms and knowledge of other people’s actions were found to affect risk-based decisions concerning others’ wellbeing 7 .

At the time of writing this editorial, the Collection included studies that examine other aspects of moral decisions. Hoenig et al. 5 examined whether political ideology affects moral decisions regarding money allocations, and Holbrook et al. 13 examined cultural differences in moral parochialism judgments. Moreover, Krupp and Maciejewski 14 discussed the evolutionary aspects of self-sacrifice in the context of interactions between sedentary actors and kinship, and Atari et al. 15 examined corpora of everyday discussions and evaluated how many of these are devoted to morality. These works indicate that moral judgment and behavior should be studied not only at the individual level, but also as a collective phenomenon; an emerging property of groups of interacting individuals. Others examined the effects of individual traits and emotions on moral decisions. Yin et al. 3 examined the effects of the emotional and cognitive processes of depression on moral judgement. Du et al. 11 demonstrated that mindfulness training could prevent moral preference decline over time without changing emotional regulation strategies. Díaz and Prinz 16 found that levels of emotional awareness (such as alexithymia) played an important role in moral evaluation while controlling for reasoning.

The rise of null-results in experimentation of moral theories

An important common characteristic in this Collection is the report of null results. A number of studies took up an important theoretical question, used pre-registered experimental designs and sample sizes to tackle it, and reported no evidence supporting their initial hypothesis. For example, Hansson et al. 9 examined in two preregistered experiments whether responsibility diffusion leads voting crowds to behave more selfishly than individuals, and found no evidence for such an effect. Bahník and Vranka 8 studied the effect of moral licensing on bribe taking in a preregistered study, hypothesizing that avoiding a small bribe may lead to an increased likelihood of taking a larger bribe later, and did not find evidence for such a moral licensing effect. Cabrales et al. 4 hypothesized that in a trust game, time constraints may push trustees to be more generous, and therefore that knowledge of a trustee’s time constraints may make participants more likely to trust them. In three experiments, they found no evidence for this effect. Hoenig et al. 5 examined the cooperation levels of left-leaning and right-leaning individuals and found that left-leaning individuals tended to cooperate more only on decisions that involved equality, and did not differ from right-leaning individuals on decisions where outcomes were dispersed unequally. Bocian et al. 17 manipulated when moral information was presented, expecting that the manipulation would moderate the impact of liking bias on moral judgment. However, the results of their preregistered Study 2 did not support this hypothesis. Finally, in a registered report, Bostyn et al. 1 tested the way contextual features affect trust in deontological and utilitarian decision makers, following Everett et al. 18 . They found no evidence of an overall effect that people trust deontological decision makers more than utilitarian ones.

These results, obtained by studying thousands of participants on multiple platforms and using multiple experimental designs, make an important contribution to the literature on moral judgement and decision making. While all of these studies relied on sound theoretical principles, their null results help delineate the limits of those theories’ predictive power. As we argued above, carrying out experiments involves making practical decisions about populations, experimental designs and manipulations, and statistical analyses. This means that experimenters must deal with more nuanced, complex, and context-dependent effects than the more abstract and context-independent settings in which theoretical predictions are made. Experimental evidence, both in support of and against theoretical predictions, is important in the process of refining key moral theories; it enables researchers to investigate the circumstances under which they operate as well as their limits.

Traditionally, null results are less likely to be published, partly because of the editing and review process, but also because of the self-censoring processes by which authors are less likely to finalize and submit such projects for publication 19 . As demonstrated here, the process of preregistration, and especially registered reports, ensures that these projects are published and shared with the relevant academic communities. This is important, as null results are informative and can greatly contribute to the literature and theoretical development of the field. It is also important to highlight these results and encourage other researchers to experimentally test their theoretical predictions without fearing the cost of obtaining no evidence to support them. This is especially important in the field of moral decision making, which relies heavily on moral and ethical theory, and where experimentation can greatly inform broad societal problems.

Bostyn, D. H., Chandrashekar, S. P. & Roets, A. Deontologists are not always trusted over utilitarians: Revisiting inferences of trustworthiness from moral judgments. Sci. Rep. 13 , 1665 (2023).


Balafoutas, L. & Rezaei, S. Moral suasion and charitable giving. Sci. Rep. 12 , 20780 (2022).

Yin, X., Hong, Z., Zheng, Y. & Ni, Y. Effect of subclinical depression on moral judgment dilemmas: A process dissociation approach. Sci. Rep. 12 , 20065 (2022).

Cabrales, A., Espín, A. M., Kujal, P. & Rassenti, S. Trustors’ disregard for trustees deciding quickly or slowly in three experiments with time constraints. Sci. Rep. 12 , 12120 (2022).

Hoenig, L. C., Pliskin, R. & De Dreu, C. K. W. Political ideology and moral dilemmas in public good provision. Sci. Rep. 13 , 2519 (2023).

Miranda-Rodríguez, R. A., Leenen, I., Han, H., Palafox-Palafox, G. & García-Rodríguez, G. Moral reasoning and moral competence as predictors of cooperative behavior in a social dilemma. Sci. Rep. 13 , 3724 (2023).


Jiang, Y., Marcowski, P., Ryazanov, A. & Winkielman, P. People conform to social norms when gambling with lives or money. Sci. Rep. 13 , 853 (2023).

Bahník, Š & Vranka, M. No evidence of moral licensing in a laboratory bribe-taking task. Sci. Rep. 12 , 13860 (2022).

Hansson, K., Persson, E. & Tinghög, G. Voting and (im)moral behavior. Sci. Rep. 12 , 22643 (2022).

Siegel, J. Z., van der Plas, E., Heise, F., Clithero, J. A. & Crockett, M. J. A computational account of how individuals resolve the dilemma of dirty money. Sci. Rep. 12 , 18638 (2022).

Du, W., Yu, H., Liu, X. & Zhou, X. Mindfulness training reduces slippery slope effects in moral decision-making and moral judgment. Sci. Rep. 13 , 2967 (2023).

Keshmirian, A., Hemmatian, B., Bahrami, B., Deroy, O. & Cushman, F. Diffusion of punishment in collective norm violations. Sci. Rep. 12 , 15318 (2022).

Holbrook, C. et al. Moral parochialism and causal appraisal of transgressive harm in Seoul and Los Angeles. Sci. Rep. 12 , 14227 (2022).

Krupp, D. B. & Maciejewski, W. The evolution of extraordinary self-sacrifice. Sci. Rep. 12 , 90 (2022).

Atari, M. et al. The paucity of morality in everyday talk. Sci. Rep. 13 , 5967 (2023).

Díaz, R. & Prinz, J. The role of emotional awareness in evaluative judgment: Evidence from alexithymia. Sci. Rep. 13 , 5183 (2023).

Bocian, K., Szarek, K. M., Miazek, K., Baryla, W. & Wojciszke, B. The boundary conditions of the liking bias in moral character judgments. Sci. Rep. 12 , 17217 (2022).

Everett, J. A. C., Pizarro, D. A. & Crockett, M. J. Inference of trustworthiness from intuitive moral judgments. J. Exp. Psychol. Gen. 145 , 772–787 (2016).


Franco, A., Malhotra, N. & Simonovits, G. Publication bias in the social sciences: Unlocking the file drawer. Science 345 , 1502–1505 (2014).



Author information

Authors and affiliations.

Department of Cognitive Sciences, University of Haifa, 3498838, Haifa, Israel

Department of Psychology, Seton Hall University, South Orange, NJ, 07079, USA

School of Psychology, Keele University, Keele, Staffordshire, ST5 5BG, UK

Kathryn B. Francis


Corresponding author

Correspondence to Uri Hertz.

Ethics declarations

Competing interests.

The authors declare no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article

Hertz, U., Jia, F. & Francis, K.B. Moral judgement and decision-making: theoretical predictions and null results. Sci Rep 13, 7688 (2023). https://doi.org/10.1038/s41598-023-34899-x

Published: 11 May 2023


Professional Ethics: Moral Judgments Essay

Although there is no complete list of adequacy criteria for moral judgments, moral judgments must nevertheless satisfy certain requirements.

First and foremost, moral judgments should be logical: there should be a logical connection between the established standard and the conduct being judged. A moral judgment should be supported by relevant reasons and evidence, and it should be consistent with the other judgments one accepts. It is likewise critical to avoid making exceptions for oneself.

Secondly, a moral judgment should have a factual basis; it cannot be formed in isolation from the relevant circumstances. Forming a moral judgment therefore requires reliable and comprehensive information that bears directly on the judgment’s content. The accuracy of that information is as important as its credibility.

Finally, moral judgments should rely on common moral principles. Every moral judgment appeals to some standard, and such standards generally reflect moral principles. Credible moral judgments are based on sound moral principles – that is, principles that cannot be shattered by skepticism or criticism. A reliable moral judgment can also be based on considered moral beliefs, formulated upon relevant reflection.

Before evaluating utilitarianism, one should understand some points that might lead to confusion and misapplication.

First, every action should be evaluated from at least two perspectives. Suppose, for instance, that there are two possible ways of acting and only one may be chosen. It is essential to evaluate the potential outcomes from both the positive and the negative standpoint. Let us assume that the first act is likely to make ten people happy. At this point, it is vital to consider how many people will become unhappy. Let us suppose that there will be one unhappy person as the outcome of the proposed act. As a result, the net value of the positive impact for this act is nine. Another act, in its turn, is apt to make five people happy and three people unhappy.

Its net value, in this case, is two. While making a decision, it will be most rational to choose the first alternative, as it has the higher net value. Otherwise stated, the decision-making process should not be based entirely on the positive prospects – it is essential to evaluate every decision from both its positive and negative sides.
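The net-value comparison described above can be sketched in a few lines of Python; the act names and the happy/unhappy counts below are simply the hypothetical figures from the example:

```python
def net_value(happy, unhappy):
    # Net positive impact of an act: people made happy minus people made unhappy.
    return happy - unhappy

# The two hypothetical acts from the example: (people made happy, people made unhappy)
acts = {"first act": (10, 1), "second act": (5, 3)}

# The most rational choice under this rule is the act with the highest net value.
best = max(acts, key=lambda name: net_value(*acts[name]))
print(best, net_value(*acts[best]))  # → first act 9
```

Choosing by net value rather than by the count of happy people alone is exactly the point of the paragraph above: the second act still makes five people happy, but it is outweighed once its three unhappy people are subtracted.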

Secondly, every decision should be based on a careful evaluation of the degree of impact. Otherwise stated, it is critical to foresee how strongly a given act might influence another person.

Thirdly, it should be recognized that any action might be morally justified under certain conditions. Hence, while making decisions, it is crucial to consider the given circumstances critically and evaluate the outcomes in light of them.

What are the differences between the two approaches, Utilitarian and Libertarian?

The Utilitarian approach advocates for social welfare; it claims that social well-being is of primary moral value. The Libertarian approach, in its turn, focuses on advocating for the right to property. It is based on the promotion of so-called negative rights – that is, rights such as being left alone and not being compelled to act. From this perspective, it has more negative connotations than the Utilitarian approach.

The Utilitarian approach, by contrast, proclaims an overall right to property, social welfare, and education, turning social welfare into a common obligation. In other words, this objective is supposed to be pursued even if doing so requires neglecting the rights of particular individuals.

Hence, it might be concluded that the Utilitarian approach advocates for the community, whereas the Libertarian approach advocates for the individual.


IvyPanda. (2021, April 26). Professional Ethics: Moral Judgments. https://ivypanda.com/essays/professional-ethics-moral-judgments/


7 Ways to Improve Your Ethical Decision-Making

  • 03 Aug 2023

Effective decision-making is the cornerstone of any thriving business. According to a survey of 760 companies cited in the Harvard Business Review, decision effectiveness and financial results correlated at a 95 percent confidence level across countries, industries, and organization sizes.

Yet, making ethical decisions can be difficult in the workplace and often requires dealing with ambiguous situations.

If you want to become a more effective leader, here’s an overview of why ethical decision-making is important in business and how to be better at it.

The Importance of Ethical Decision-Making

Any management position involves decision-making.

“Even with formal systems in place, managers have a great deal of discretion in making decisions that affect employees,” says Harvard Business School Professor Nien-hê Hsieh in the online course Leadership, Ethics, and Corporate Accountability. “This is because many of the activities companies need to carry out are too complex to specify in advance.”

This is where ethical decision-making comes in. As a leader, your decisions influence your company’s culture, employees’ motivation and productivity, and business processes’ effectiveness.

It also impacts your organization’s reputation—in terms of how customers, partners, investors, and prospective employees perceive it—and long-term success.

With such a large portion of your company’s performance relying on your guidance, here are seven ways to improve your ethical decision-making.

1. Gain Clarity Around Personal Commitments

You may be familiar with the saying, “Know thyself.” The first step to including ethics in your decision-making process is defining your personal commitments.

To gain clarity around those, Hsieh recommends asking:

  • What’s core to my identity? How do I perceive myself?
  • What lines or boundaries will I not cross?
  • What kind of life do I want to live?
  • What type of leader do I want to be?

Once you better understand your core beliefs, values, and ideals, it’s easier to commit to ethical guidelines in the workplace. If you get stuck when making challenging decisions, revisit those questions for guidance.

2. Overcome Biases

A bias is a systematic, often unconscious inclination toward a belief, opinion, perspective, or decision. It influences how you perceive and interpret information, make judgments, and behave.

Bias is often based on:

  • Personal experience
  • Cultural background
  • Social conditioning
  • Individual preference

It exists in the workplace as well.

“Most of the time, people try to act fairly, but personal beliefs or attitudes—both conscious and subconscious—affect our ability to do so,” Hsieh says in Leadership, Ethics, and Corporate Accountability .

There are two types of bias:

  • Explicit: A bias you’re aware of, such as ageism.
  • Implicit: A bias that operates outside your awareness, such as cultural conditioning.

Whether explicit or implicit, you must overcome bias to make ethical, fair decisions.

Related: How to Overcome Stereotypes in Your Organization

3. Reflect on Past Decisions

The next step is reflecting on previous decisions.

“By understanding different kinds of bias and how they can show themselves in the workplace, we can reflect on past decisions, experiences, and emotions to help identify problem areas,” Hsieh says in the course.

Reflect on your decisions’ processes and the outcomes. Were they favorable? What would you do differently? Did bias affect them?

Through analyzing prior experiences, you can learn lessons that help guide your ethical decision-making.

4. Be Compassionate

Decisions requiring an ethical lens are often difficult, such as terminating an employee.

“Termination decisions are some of the hardest that managers will ever have to make,” Hsieh says in Leadership, Ethics, and Corporate Accountability. “These decisions affect real people with whom we often work every day and who are likely to depend on their job for their livelihood.”

Such decisions require a compassionate approach. Try imagining yourself in the other person’s shoes, and think about what you would want to hear. Doing so allows you to approach decision-making with more empathy.

5. Focus on Fairness

Being “fair” in the workplace is often ambiguous, but it’s vital to ethical decision-making.

“Fairness is not only an ethical response to power asymmetries in the work environment,” Hsieh says in Leadership, Ethics, and Corporate Accountability. “Fairness–and having a successful organizational culture–can benefit the organization economically and legally as well.”

It’s particularly important to consider fairness in the context of your employees. According to Leadership, Ethics, and Corporate Accountability, operationalizing fairness in employment relationships requires:

  • Legitimate expectations: Expectations stemming from a promise or regular practice that employees can anticipate and rely on.
  • Procedural fairness: Concern with whether decisions are made and carried out impartially, consistently, and transparently.
  • Distributive fairness: The fair allocation of opportunities, benefits, and burdens based on employees’ efforts or contributions.

Keeping these aspects of fairness in mind can be the difference between a harmonious team and an employment lawsuit. When in doubt, ask yourself: “If I or someone I loved was at the receiving end of this decision, what would I consider ‘fair’?”

6. Take an Individualized Approach

Not every employee is the same. Your relationships with team members, managers, and organizational leaders differ based on factors like context and personality types.

“Given the personal nature of employment relationships, your judgment and actions in these areas will often require adjustment according to each specific situation,” Hsieh explains in Leadership, Ethics, and Corporate Accountability.

One way to achieve this is by tailoring your decision-making based on employees’ values and beliefs. For example, if a colleague expresses concerns about a project’s environmental impact, explore eco-friendly approaches that align with their values.

Another way you can customize your ethical decision-making is by accommodating employees’ cultural differences. Doing so can foster a more inclusive work environment and boost your team’s performance.

7. Accept Feedback

Ethical decision-making is susceptible to gray areas and often met with dissent, so it’s critical to be approachable and open to feedback.

The benefits of receiving feedback include:

  • Learning from mistakes.
  • Having more opportunities to exhibit compassion, fairness, and transparency.
  • Identifying blind spots you weren’t aware of.
  • Bringing your team into the decision-making process.

While such conversations can be uncomfortable, don’t avoid them. Accepting feedback will not only make you a more effective leader but also help your employees gain a voice in the workplace.

Ethical Decision-Making Is a Continuous Learning Process

Ethical decision-making doesn’t come with right or wrong answers—it’s a continuous learning process.

“There often is no right answer, only imperfect solutions to difficult problems,” Hsieh says. “But even without a single ‘right’ answer, making thoughtful, ethical decisions can make a major difference in the lives of your employees and colleagues.”

By taking an online course, such as Leadership, Ethics, and Corporate Accountability, you can develop the frameworks and tools to make effective decisions that benefit all aspects of your business.

Ready to improve your ethical decision-making? Enroll in Leadership, Ethics, and Corporate Accountability—one of our online leadership and management courses—and download our free e-book on how to become a more effective leader.

Ethical Subjectivism Vs Ethical Relativism

This essay examines contrasting perspectives in moral philosophy: ethical individualism (ethical subjectivism) and cultural relativism. Ethical individualism posits that morality is subjective, shaped by individual beliefs and experiences, while cultural relativism argues that moral truths are culturally contingent. Both perspectives acknowledge moral diversity but differ in their emphasis on individual versus collective influences on morality. Critics raise concerns about moral solipsism and moral nihilism, respectively, within these frameworks. Despite their differences, both offer valuable insights into the complexities of moral judgment and the diversity of moral beliefs in human societies.

In the vast terrain of moral philosophy, two distinct vantage points stand out: ethical individualism and cultural relativism. These perspectives offer contrasting lenses through which we perceive the intricacies of morality, diverging in their approaches to the origins of moral truths and the foundations of ethical judgment.

Ethical individualism, akin to a solitary voyager charting their moral compass, asserts that the essence of morality resides within the individual. Here, moral truths are not fixed stars guiding all travelers but rather ephemeral constellations shaped by individual beliefs and sentiments.

Each person becomes a moral cartographer, mapping out their own terrain of right and wrong, shaped by the contours of personal experiences, cultural upbringing, and emotional resonances. In this realm, moral landscapes are as varied as the individuals who traverse them, with no single beacon illuminating the path to righteousness.

In contrast, cultural relativism paints a broader canvas, where the brushstrokes of morality are colored by the collective hues of society and culture. Here, morality is not a solitary quest but a communal endeavor, shaped by the norms, values, and traditions of a particular cultural milieu. What is deemed virtuous or vice-laden is not etched in cosmic stone but etched in the collective consciousness of a society, evolving alongside its cultural tapestry. In this worldview, moral judgments are not absolute but contextual, shifting like sands in the desert of cultural diversity.

Despite their disparate vistas, both ethical individualism and cultural relativism converge on the acknowledgment of moral diversity and the elusiveness of absolute moral truths. They both cast doubt on the notion of a universal moral code, recognizing instead the kaleidoscopic array of moral perspectives that populate our moral landscape. Yet, they diverge in their emphasis, with ethical individualism shining the spotlight on the inner sanctum of individual subjectivity, while cultural relativism casts its gaze outward, towards the collective mosaic of cultural norms and practices.

Critics of ethical individualism raise concerns about the moral solipsism it may engender, where individuals are adrift in a sea of subjective preferences, with no moral compass to guide them. Without a shared moral framework, they argue, moral discourse devolves into a cacophony of conflicting voices, devoid of moral authority or common ground.

Similarly, critics of cultural relativism sound the alarm against the specter of moral nihilism it may invoke, where all moral judgments are rendered moot in the face of cultural diversity. Without a universal moral standard to arbitrate between conflicting cultural practices, they argue, we are left morally impotent, unable to condemn even the most egregious violations of human dignity and rights.

In navigating the turbulent waters of moral philosophy, it becomes apparent that both ethical individualism and cultural relativism offer valuable insights into the labyrinthine nature of morality. While they may diverge in their emphases and implications, they both beckon us to grapple with the complexities of moral judgment and the ever-shifting contours of moral truth.

Cite this page

Ethical Subjectivism Vs Ethical Relativism. (2024, Apr 29). Retrieved from https://papersowl.com/examples/ethical-subjectivism-vs-ethical-relativism/



  • Open access
  • Published: 18 April 2024

Research ethics and artificial intelligence for global health: perspectives from the global forum on bioethics in research

  • James Shaw 1,13,
  • Joseph Ali 2,3,
  • Caesar A. Atuire 4,5,
  • Phaik Yeong Cheah 6,
  • Armando Guio Español 7,
  • Judy Wawira Gichoya 8,
  • Adrienne Hunt 9,
  • Daudi Jjingo 10,
  • Katherine Littler 9,
  • Daniela Paolotti 11 &
  • Effy Vayena 12

BMC Medical Ethics volume 25, Article number: 46 (2024)

Background

The ethical governance of Artificial Intelligence (AI) in health care and public health continues to be an urgent issue for attention in policy, research, and practice. In this paper we report on central themes related to challenges and strategies for promoting ethics in research involving AI in global health, arising from the Global Forum on Bioethics in Research (GFBR), held in Cape Town, South Africa in November 2022.

Methods

The GFBR is an annual meeting organized by the World Health Organization and supported by the Wellcome Trust, the US National Institutes of Health, the UK Medical Research Council (MRC) and the South African MRC. The forum aims to bring together ethicists, researchers, policymakers, research ethics committee members and other actors to engage with challenges and opportunities specifically related to research ethics. In 2022 the focus of the GFBR was “Ethics of AI in Global Health Research”. The forum consisted of 6 case study presentations, 16 governance presentations, and a series of small group and large group discussions. A total of 87 participants attended the forum from 31 countries around the world, representing disciplines of bioethics, AI, health policy, health professional practice, research funding, and bioinformatics. In this paper, we highlight central insights arising from GFBR 2022.

Results

We describe the significance of four thematic insights arising from the forum: (1) Appropriateness of building AI, (2) Transferability of AI systems, (3) Accountability for AI decision-making and outcomes, and (4) Individual consent. We then describe eight recommendations for governance leaders to enhance the ethical governance of AI in global health research, addressing issues such as AI impact assessments, environmental values, and fair partnerships.

Conclusions

The 2022 Global Forum on Bioethics in Research illustrated several innovations in ethical governance of AI for global health research, as well as several areas in need of urgent attention internationally. This summary is intended to inform international and domestic efforts to strengthen research ethics and support the evolution of governance leadership to meet the demands of AI in global health research.

Introduction

The ethical governance of Artificial Intelligence (AI) in health care and public health continues to be an urgent issue for attention in policy, research, and practice [ 1 , 2 , 3 ]. Beyond the growing number of AI applications being implemented in health care, capabilities of AI models such as Large Language Models (LLMs) expand the potential reach and significance of AI technologies across health-related fields [ 4 , 5 ]. Discussion about effective, ethical governance of AI technologies has spanned a range of governance approaches, including government regulation, organizational decision-making, professional self-regulation, and research ethics review [ 6 , 7 , 8 ]. In this paper, we report on central themes related to challenges and strategies for promoting ethics in research involving AI in global health research, arising from the Global Forum on Bioethics in Research (GFBR), held in Cape Town, South Africa in November 2022. Although applications of AI for research, health care, and public health are diverse and advancing rapidly, the insights generated at the forum remain highly relevant from a global health perspective. After summarizing important context for work in this domain, we highlight categories of ethical issues emphasized at the forum for attention from a research ethics perspective internationally. We then outline strategies proposed for research, innovation, and governance to support more ethical AI for global health.

In this paper, we adopt the definition of AI systems provided by the Organization for Economic Cooperation and Development (OECD) as our starting point. Their definition states that an AI system is “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy” [ 9 ]. The conceptualization of an algorithm as helping to constitute an AI system, along with hardware, other elements of software, and a particular context of use, illustrates the wide variety of ways in which AI can be applied. We have found it useful to differentiate applications of AI in research as those classified as “AI systems for discovery” and “AI systems for intervention”. An AI system for discovery is one that is intended to generate new knowledge, for example in drug discovery or public health research in which researchers are seeking potential targets for intervention, innovation, or further research. An AI system for intervention is one that directly contributes to enacting an intervention in a particular context, for example informing decision-making at the point of care or assisting with accuracy in a surgical procedure.

The mandate of the GFBR is to take a broad view of what constitutes research and its regulation in global health, with special attention to bioethics in Low- and Middle- Income Countries. AI as a group of technologies demands such a broad view. AI development for health occurs in a variety of environments, including universities and academic health sciences centers where research ethics review remains an important element of the governance of science and innovation internationally [ 10 , 11 ]. In these settings, research ethics committees (RECs; also known by different names such as Institutional Review Boards or IRBs) make decisions about the ethical appropriateness of projects proposed by researchers and other institutional members, ultimately determining whether a given project is allowed to proceed on ethical grounds [ 12 ].

However, research involving AI for health also takes place in large corporations and smaller scale start-ups, which in some jurisdictions fall outside the scope of research ethics regulation. In the domain of AI, the question of what constitutes research also becomes blurred. For example, is the development of an algorithm itself considered a part of the research process? Or only when that algorithm is tested under the formal constraints of a systematic research methodology? In this paper we take an inclusive view, in which AI development is included in the definition of research activity and within scope for our inquiry, regardless of the setting in which it takes place. This broad perspective characterizes the approach to “research ethics” we take in this paper, extending beyond the work of RECs to include the ethical analysis of the wide range of activities that constitute research as the generation of new knowledge and intervention in the world.

Ethical governance of AI in global health

The ethical governance of AI for global health has been widely discussed in recent years. The World Health Organization (WHO) released its guidelines on ethics and governance of AI for health in 2021, endorsing a set of six ethical principles and exploring the relevance of those principles through a variety of use cases. The WHO guidelines also provided an overview of AI governance, defining governance as covering “a range of steering and rule-making functions of governments and other decision-makers, including international health agencies, for the achievement of national health policy objectives conducive to universal health coverage.” (p. 81) The report usefully provided a series of recommendations related to governance of seven domains pertaining to AI for health: data, benefit sharing, the private sector, the public sector, regulation, policy observatories/model legislation, and global governance. The report acknowledges that much work is yet to be done to advance international cooperation on AI governance, especially related to prioritizing voices from Low- and Middle-Income Countries (LMICs) in global dialogue.

One important point emphasized in the WHO report that reinforces the broader literature on global governance of AI is the distribution of responsibility across a wide range of actors in the AI ecosystem. This is especially important to highlight when focused on research for global health, which is specifically about work that transcends national borders. Alami et al. (2020) discussed the unique risks raised by AI research in global health, ranging from the unavailability of data in many LMICs required to train locally relevant AI models to the capacity of health systems to absorb new AI technologies that demand the use of resources from elsewhere in the system. These observations illustrate the need to identify the unique issues posed by AI research for global health specifically, and the strategies that can be employed by all those implicated in AI governance to promote ethically responsible use of AI in global health research.

RECs and the regulation of research involving AI

RECs represent an important element of the governance of AI for global health research, and thus warrant further commentary as background to our paper. Despite the importance of RECs, foundational questions have been raised about their capabilities to accurately understand and address ethical issues raised by studies involving AI. Rahimzadeh et al. (2023) outlined how RECs in the United States are under-prepared to align with recent federal policy requiring that RECs review data sharing and management plans with attention to the unique ethical issues raised in AI research for health [ 13 ]. Similar research in South Africa identified variability in understanding of existing regulations and ethical issues associated with health-related big data sharing and management among research ethics committee members [ 14 , 15 ]. The effort to address harms accruing to groups or communities as opposed to individuals whose data are included in AI research has also been identified as a unique challenge for RECs [ 16 , 17 ]. Doerr and Meeder (2022) suggested that current regulatory frameworks for research ethics might actually prevent RECs from adequately addressing such issues, as they are deemed out of scope of REC review [ 16 ]. Furthermore, research in the United Kingdom and Canada has suggested that researchers using AI methods for health tend to distinguish between ethical issues and social impact of their research, adopting an overly narrow view of what constitutes ethical issues in their work [ 18 ].

The challenges for RECs in adequately addressing ethical issues in AI research for health care and public health extend beyond what a straightforward survey of ethical considerations can capture. As Ferretti et al. (2021) contend, existing capabilities of RECs adequately cover certain issues in AI-based health research, such as the common occurrence of conflicts of interest in which researchers who accept funds from commercial technology providers are implicitly incentivized to produce results that align with commercial interests [ 12 ]. However, other features of REC review require reform to adequately meet ethical needs. Ferretti et al. outlined weaknesses of RECs, both longstanding and novel to AI-related projects, proposing a series of directions for development that are regulatory, procedural, and complementary to REC functionality. The work required on a global scale to update the REC function in response to the demands of research involving AI is substantial.

These issues take on greater urgency in the context of global health [ 19 ]. Teixeira da Silva (2022) described the global practice of “ethics dumping”, in which researchers from high-income countries bring ethically contentious practices to RECs in low-income countries as a strategy to gain approval and move projects forward [ 20 ]. Although not yet systematically documented in AI research for health, the risk of ethics dumping in AI research is high. Evidence is already emerging of practices of “health data colonialism”, in which AI researchers and developers from large organizations in high-income countries acquire data from LMICs to build algorithms while avoiding stricter regulations [ 21 ]. This specific practice is part of a larger collection of practices that characterize health data colonialism, involving the broader exploitation of data, and of the populations they represent, primarily for commercial gain [ 21 , 22 ]. As an additional complication, AI algorithms trained on data from high-income contexts are unlikely to apply in straightforward ways to LMIC settings [ 21 , 23 ]. In the context of global health, there is widespread acknowledgement of the need not only to enhance the knowledge base of REC members about AI-based methods internationally, but also to pursue the broader shifts required to strengthen their capabilities to address these and other ethical issues associated with AI research for health [ 8 ].

Although RECs are an important part of the story of the ethical governance of AI for global health research, they are not the only part. The responsibilities of supra-national entities such as the World Health Organization, national governments, organizational leaders, commercial AI technology providers, health care professionals, and other groups continue to be worked out internationally. In this context of ongoing work, examining issues that demand attention and strategies to address them remains an urgent and valuable task.

The GFBR is an annual meeting organized by the World Health Organization and supported by the Wellcome Trust, the US National Institutes of Health, the UK Medical Research Council (MRC) and the South African MRC. The forum aims to bring together ethicists, researchers, policymakers, REC members and other actors to engage with challenges and opportunities specifically related to research ethics. Each year the GFBR meeting includes a series of case studies and keynotes presented in plenary format to an audience of approximately 100 people who have applied and been competitively selected to attend, along with small-group breakout discussions to advance thinking on related issues. The specific topic of the forum changes each year, with past topics including ethical issues in research with people living with mental health conditions (2021), genome editing (2019), and biobanking/data sharing (2018). The forum is intended to remain grounded in the practical challenges of engaging in research ethics, with special interest in low resource settings from a global health perspective. A post-meeting fellowship scheme is open to all LMIC participants, providing a unique opportunity to apply for funding to further explore and address the ethical challenges that are identified during the meeting.

In 2022, the focus of the GFBR was “Ethics of AI in Global Health Research”. The forum consisted of 6 case study presentations (both short and long form) reporting on specific initiatives related to research ethics and AI for health, and 16 governance presentations (both short and long form) reporting on actual approaches to governing AI in different country settings. A keynote presentation from Professor Effy Vayena addressed the topic of the broader context for AI ethics in a rapidly evolving field. A total of 87 participants attended the forum from 31 countries around the world, representing disciplines of bioethics, AI, health policy, health professional practice, research funding, and bioinformatics. The 2-day forum addressed a wide range of themes. The conference report provides a detailed overview of each of the specific topics addressed while a policy paper outlines the cross-cutting themes (both documents are available at the GFBR website: https://www.gfbr.global/past-meetings/16th-forum-cape-town-south-africa-29-30-november-2022/ ). As opposed to providing a detailed summary in this paper, we aim to briefly highlight central issues raised, solutions proposed, and the challenges facing the research ethics community in the years to come.

Our primary aim in this paper is thus to present a synthesis of the challenges and opportunities raised at the GFBR meeting and in the planning process, followed by our reflections as a group of authors on their significance for governance leaders in the coming years. We acknowledge that the views represented at the meeting and in our results are a partial representation of the universe of views on this topic; however, the GFBR leadership invested a great deal of resources in convening a deeply diverse and thoughtful group of researchers and practitioners working on themes of bioethics related to AI for global health, including those based in LMICs. We contend that it remains rare to convene such a strong group for an extended time, and we believe that many of the challenges and opportunities raised demand attention for more ethical futures of AI for health. Nonetheless, our results are primarily descriptive and are thus not explicitly grounded in a normative argument. In the Discussion section we make an effort to contextualize our results by describing their significance and connecting them to broader efforts to reform global health research and practice.

Uniquely important ethical issues for AI in global health research

Presentations and group dialogue over the course of the forum raised several issues for consideration, and here we describe four overarching themes for the ethical governance of AI in global health research. Brief descriptions of each issue can be found in Table  1 . Reports referred to throughout the paper are available at the GFBR website provided above.

The first overarching thematic issue relates to the appropriateness of building AI technologies in response to health-related challenges in the first place. Case study presentations referred to initiatives where AI technologies were highly appropriate, such as in ear-shape biometric identification to more accurately link electronic health care records to individual patients in Zambia (Alinani Simukanga). Although important ethical issues were raised with respect to privacy, trust, and community engagement in this initiative, the AI-based solution was appropriately matched to the challenge of accurately linking electronic records to specific patient identities. In contrast, forum participants raised questions about the appropriateness of an initiative using AI to improve the quality of handwashing practices in an acute care hospital in India (Niyoshi Shah), which led to gaming of the algorithm. Overall, participants acknowledged the dangers of techno-solutionism, in which AI researchers and developers treat AI technologies as the most obvious solutions to problems that in actuality demand much more complex strategies to address [ 24 ]. However, forum participants agreed that RECs in different contexts have differing degrees of power to raise issues of the appropriateness of an AI-based intervention.

The second overarching thematic issue related to whether and how AI-based systems transfer from one national health context to another. One central issue raised by a number of case study presentations related to the challenges of validating an algorithm with data collected in a local environment. For example, one case study presentation described a project that would involve the collection of personally identifiable data for sensitive group identities, such as tribe, clan, or religion, in the jurisdictions involved (South Africa, Nigeria, Tanzania, Uganda and the US; Gakii Masunga). Doing so would enable the team to ensure that those groups were adequately represented in the dataset, so that the resulting algorithm would not be biased against specific community groups when deployed in that context. However, some members of these communities might desire to be represented in the dataset, whereas others might not, illustrating the need to balance autonomy and inclusivity. It was also widely recognized that collecting these data is an immense challenge, particularly when historically oppressive practices have led to a low-trust environment for international organizations and the technologies they produce. It is important to note that in some countries, such as South Africa and Rwanda, it is illegal to collect information such as race and tribal identities, re-emphasizing the importance of cultural awareness and of avoiding “one size fits all” solutions.

The third overarching thematic issue is related to understanding accountabilities for both the impacts of AI technologies and governance decision-making regarding their use. Where global health research involving AI leads to longer-term harms that might fall outside the usual scope of issues considered by a REC, who is to be held accountable, and how? This question was raised as one that requires much further attention, with laws varying internationally regarding the mechanisms available to hold researchers, innovators, and their institutions accountable over the longer term. However, it was recognized in breakout group discussion that many jurisdictions are developing strong data protection regimes related specifically to international collaboration for research involving health data. For example, Kenya’s Data Protection Act requires that any internationally funded projects have a local principal investigator who will hold accountability for how data are shared and used [ 25 ]. The issue of research partnerships with commercial entities was raised by many participants in the context of accountability, pointing toward the urgent need for clear principles related to strategies for engagement with commercial technology companies in global health research.

The fourth and final overarching thematic issue raised here is that of consent. The issue of consent was framed by the widely shared recognition that models of individual, explicit consent might not produce a supportive environment for AI innovation that relies on the secondary uses of health-related datasets to build AI algorithms. Given this recognition, approaches such as community oversight of health data uses were suggested as a potential solution. However, the details of implementing such community oversight mechanisms require much further attention, particularly given the unique perspectives on health data in different country settings in global health research. Furthermore, some uses of health data do continue to require consent. One case study involving South Africa, Nigeria, Kenya, Ethiopia and Uganda suggested that when health data are shared across borders, individual consent remains necessary when data are transferred from certain countries (Nezerith Cengiz). Broader clarity is necessary to support the ethical governance of health data uses for AI in global health research.

Recommendations for ethical governance of AI in global health research

Dialogue at the forum led to a range of suggestions for promoting ethical conduct of AI research for global health, related to the various roles of actors involved in the governance of AI research broadly defined. The strategies are written for actors we refer to as “governance leaders”, those people distributed throughout the AI for global health research ecosystem who are responsible for ensuring the ethical and socially responsible conduct of global health research involving AI (including researchers themselves). These include RECs, government regulators, health care leaders, health professionals, corporate social accountability officers, and others. Enacting these strategies would bolster the ethical governance of AI for global health more generally, enabling multiple actors to fulfill their roles related to governing research and development activities carried out across multiple organizations, including universities, academic health sciences centers, start-ups, and technology corporations. Specific suggestions are summarized in Table  2 .

First, forum participants suggested that governance leaders, including RECs, should remain up to date on recent advances in the regulation of AI for health. Regulation of AI for health advances rapidly and takes on different forms in jurisdictions around the world. RECs play an important role in governance, but only a partial role; it was deemed important for RECs to acknowledge how they fit within a broader governance ecosystem in order to more effectively address the issues within their scope. Not only RECs but also organizational leaders responsible for procurement, researchers, and commercial actors should commit to remaining up to date about the relevant approaches to regulating AI for health care and public health in jurisdictions internationally.

Second, forum participants suggested that governance leaders should focus on ethical governance of health data as a basis for ethical global health AI research. Health data are considered the foundation of AI development, being used to train AI algorithms for various uses [ 26 ]. By focusing on ethical governance of health data generation, sharing, and use, multiple actors will help to build an ethical foundation for AI development among global health researchers.

Third, forum participants believed that governance processes should incorporate AI impact assessments where appropriate. An AI impact assessment is the process of evaluating the potential effects, both positive and negative, of implementing an AI algorithm on individuals, society, and various stakeholders, generally over time frames specified in advance of implementation [ 27 ]. Although not all types of AI research in global health would warrant an AI impact assessment, this is especially relevant for those studies aiming to implement an AI system for intervention into health care or public health. Organizations such as RECs can use AI impact assessments to boost understanding of potential harms at the outset of a research project, encouraging researchers to more deeply consider potential harms in the development of their study.

Fourth, forum participants suggested that governance decisions should incorporate the use of environmental impact assessments, or at least the incorporation of environmental values when assessing the potential impact of an AI system. An environmental impact assessment involves evaluating and anticipating the potential environmental effects of a proposed project to inform ethical decision-making that supports sustainability [ 28 ]. Although a relatively new consideration in research ethics conversations [ 29 ], the environmental impact of building technologies is a crucial consideration for the public health commitment to environmental sustainability. Governance leaders can use environmental impact assessments to boost understanding of potential environmental harms linked to AI research projects in global health over both the shorter and longer terms.

Fifth, forum participants suggested that governance leaders should require stronger transparency in the development of AI algorithms in global health research. Transparency was considered essential in the design and development of AI algorithms for global health to ensure ethical and accountable decision-making throughout the process. Furthermore, whether and how researchers have considered the unique contexts into which such algorithms may be deployed can be surfaced through stronger transparency, for example in describing what primary considerations were made at the outset of the project and which stakeholders were consulted along the way. Sharing information about data provenance and methods used in AI development will also enhance the trustworthiness of the AI-based research process.

Sixth, forum participants suggested that governance leaders can encourage or require community engagement at various points throughout an AI project. It was considered that engaging patients and communities is crucial in AI algorithm development to ensure that the technology aligns with community needs and values. However, participants acknowledged that this is not a straightforward process. Effective community engagement requires lengthy commitments to meeting with and hearing from diverse communities in a given setting, and demands a particular set of skills in communication and dialogue that are not possessed by all researchers. Encouraging AI researchers to begin this process early and build long-term partnerships with community members is a promising strategy to deepen community engagement in AI research for global health. One notable recommendation was that research funders have an opportunity to incentivize and enable community engagement with funds dedicated to these activities in AI research in global health.

Seventh, forum participants suggested that governance leaders can encourage researchers to build strong, fair partnerships between institutions and individuals across country settings. In a context of longstanding imbalances in geopolitical and economic power, fair partnerships in global health demand a priori commitments to share benefits related to advances in medical technologies, knowledge, and financial gains. Although enforcement of this point might be beyond the remit of RECs, commentary from governance leaders can encourage researchers to consider stronger, fairer partnerships in global health over the longer term.

Eighth, it became evident that new forms of regulatory experimentation are necessary given the complexity of regulating a technology of this nature. In addition, the health sector has a series of particularities that make it especially complicated to generate rules that have not been previously tested. Several participants highlighted the desire to promote spaces for experimentation, such as regulatory sandboxes or innovation hubs in health. These spaces can have several benefits for addressing issues surrounding the regulation of AI in the health sector, such as: (i) increasing the capacities and knowledge of health authorities about this technology; (ii) identifying the major problems surrounding AI regulation in the health sector; (iii) establishing possibilities for exchange and learning with other authorities; (iv) promoting innovation and entrepreneurship in AI in health; and (v) identifying the need to regulate AI in this sector and to update other existing regulations.

Ninth and finally, forum participants believed that the capabilities of governance leaders need to evolve to better incorporate expertise related to AI in ways that make sense within a given jurisdiction. With respect to RECs, for example, it might not make sense for every REC to recruit a member with expertise in AI methods. Rather, it will make more sense in some jurisdictions to consult with members of the scientific community with expertise in AI when research protocols are submitted that demand such expertise. Furthermore, RECs and other approaches to research governance in jurisdictions around the world will need to evolve in order to adopt the suggestions outlined above, developing processes that apply specifically to the ethical governance of research using AI methods in global health.

Research involving the development and implementation of AI technologies continues to grow in global health, posing important challenges for ethical governance of AI in global health research around the world. In this paper we have summarized insights from the 2022 GFBR, focused specifically on issues in research ethics related to AI for global health research. We summarized four thematic challenges for governance related to AI in global health research and nine suggestions arising from presentations and dialogue at the forum. In this brief discussion section, we present an overarching observation about power imbalances that frames efforts to evolve the role of governance in global health research, and then outline two important opportunity areas as the field develops to meet the challenges of AI in global health research.

Dialogue about power is not unfamiliar in global health, especially given recent contributions exploring what it would mean to de-colonize global health research, funding, and practice [ 30 , 31 ]. Discussions of research ethics applied to AI research in global health contexts are deeply infused with power imbalances. The existing context of global health is one in which high-income countries primarily located in the “Global North” charitably invest in projects taking place primarily in the “Global South” while recouping knowledge, financial, and reputational benefits [ 32 ]. With respect to AI development in particular, recent examples of digital colonialism frame dialogue about global partnerships, raising attention to the role of large commercial entities and global financial capitalism in global health research [ 21 , 22 ]. Furthermore, the power of governance organizations such as RECs to intervene in the process of AI research in global health varies widely around the world, depending on the authorities assigned to them by domestic research governance policies. These observations frame the challenges outlined in our paper, highlighting the difficulties associated with making meaningful change in this field.

Despite these overarching challenges of the global health research context, there are clear strategies for progress in this domain. Firstly, AI innovation is rapidly evolving, which means approaches to the governance of AI for health are rapidly evolving too. Such rapid evolution presents an important opportunity for governance leaders to clarify their vision and influence over AI innovation in global health research, boosting the expertise, structure, and functionality required to meet the demands of research involving AI. Secondly, the research ethics community has strong international ties, linked to a global scholarly community that is committed to sharing insights and best practices around the world. This global community can be leveraged to coordinate efforts to produce advances in the capabilities and authorities of governance leaders to meaningfully govern AI research for global health given the challenges summarized in our paper.

Limitations

Our paper includes two specific limitations that we address explicitly here. First, it is still early in the development of applications of AI for use in global health, and as such, the global community has had limited opportunity to learn from experience. For example, far fewer case studies, which detail experiences with the actual implementation of an AI technology, were submitted to GFBR 2022 for consideration than expected. In contrast, many more governance reports were submitted, which detail the processes and outputs of governance activities that anticipate the development and dissemination of AI technologies. This observation represents both a success and a challenge. It is a success that so many groups are engaging in anticipatory governance of AI technologies, exploring evidence of their likely impacts and governing technologies in novel and well-designed ways. It is a challenge that there is little experience to build upon of the successful implementation of AI technologies in ways that limit harms while promoting innovation. Further experience with AI technologies in global health will contribute to revising and enhancing the challenges and recommendations we have outlined in our paper.

Second, global trends in the politics and economics of AI technologies are evolving rapidly. Although some nations are advancing detailed policy approaches to regulating AI more generally, including for uses in health care and public health, the impacts of corporate investments in AI and political responses related to governance remain to be seen. The excitement around large language models (LLMs) and large multimodal models (LMMs) has drawn deeper attention to the challenges of regulating AI in any general sense, opening dialogue about health sector-specific regulations. The direction of this global dialogue, strongly linked to high-profile corporate actors and multi-national governance institutions, will strongly influence the development of boundaries around what is possible for the ethical governance of AI for global health. We have written this paper at a point when these developments are proceeding rapidly, and as such, we acknowledge that our recommendations will need updating as the broader field evolves.

Ultimately, coordination and collaboration between many stakeholders in the research ethics ecosystem will be necessary to strengthen the ethical governance of AI in global health research. The 2022 GFBR illustrated several innovations in ethical governance of AI for global health research, as well as several areas in need of urgent attention internationally. This summary is intended to inform international and domestic efforts to strengthen research ethics and support the evolution of governance leadership to meet the demands of AI in global health research.

Data availability

All data and materials analyzed to produce this paper are available on the GFBR website: https://www.gfbr.global/past-meetings/16th-forum-cape-town-south-africa-29-30-november-2022/ .

Clark P, Kim J, Aphinyanaphongs Y. Marketing and US Food and Drug Administration clearance of artificial intelligence and machine learning enabled software in and as medical devices: a systematic review. JAMA Netw Open. 2023;6(7):e2321792.


Potnis KC, Ross JS, Aneja S, Gross CP, Richman IB. Artificial intelligence in breast cancer screening: evaluation of FDA device regulation and future recommendations. JAMA Intern Med. 2022;182(12):1306–12.

Siala H, Wang Y. SHIFTing artificial intelligence to be responsible in healthcare: a systematic review. Soc Sci Med. 2022;296:114782.

Yang X, Chen A, PourNejatian N, Shin HC, Smith KE, Parisien C, et al. A large language model for electronic health records. NPJ Digit Med. 2022;5(1):194.

Meskó B, Topol EJ. The imperative for regulatory oversight of large language models (or generative AI) in healthcare. NPJ Digit Med. 2023;6(1):120.

Jobin A, Ienca M, Vayena E. The global landscape of AI ethics guidelines. Nat Mach Intell. 2019;1(9):389–99.

Minssen T, Vayena E, Cohen IG. The challenges for regulating medical use of ChatGPT and other large language models. JAMA. 2023.

Ho CWL, Malpani R. Scaling up the research ethics framework for healthcare machine learning as global health ethics and governance. Am J Bioeth. 2022;22(5):36–8.

Yeung K. Recommendation of the council on artificial intelligence (OECD). Int Leg Mater. 2020;59(1):27–34.

Maddox TM, Rumsfeld JS, Payne PR. Questions for artificial intelligence in health care. JAMA. 2019;321(1):31–2.

Dzau VJ, Balatbat CA, Ellaissi WF. Revisiting academic health sciences systems a decade later: discovery to health to population to society. Lancet. 2021;398(10318):2300–4.

Ferretti A, Ienca M, Sheehan M, Blasimme A, Dove ES, Farsides B, et al. Ethics review of big data research: what should stay and what should be reformed? BMC Med Ethics. 2021;22(1):1–13.

Rahimzadeh V, Serpico K, Gelinas L. Institutional review boards need new skills to review data sharing and management plans. Nat Med. 2023;1–3.

Kling S, Singh S, Burgess TL, Nair G. The role of an ethics advisory committee in data science research in sub-Saharan Africa. South Afr J Sci. 2023;119(5–6):1–3.


Cengiz N, Kabanda SM, Esterhuizen TM, Moodley K. Exploring perspectives of research ethics committee members on the governance of big data in sub-Saharan Africa. South Afr J Sci. 2023;119(5–6):1–9.

Doerr M, Meeder S. Big health data research and group harm: the scope of IRB review. Ethics Hum Res. 2022;44(4):34–8.

Ballantyne A, Stewart C. Big data and public-private partnerships in healthcare and research: the application of an ethics framework for big data in health and research. Asian Bioeth Rev. 2019;11(3):315–26.

Samuel G, Chubb J, Derrick G. Boundaries between research ethics and ethical research use in artificial intelligence health research. J Empir Res Hum Res Ethics. 2021;16(3):325–37.

Murphy K, Di Ruggiero E, Upshur R, Willison DJ, Malhotra N, Cai JC, et al. Artificial intelligence for good health: a scoping review of the ethics literature. BMC Med Ethics. 2021;22(1):1–17.

Teixeira da Silva JA. Handling ethics dumping and neo-colonial research: from the laboratory to the academic literature. J Bioethical Inq. 2022;19(3):433–43.

Ferryman K. The dangers of data colonialism in precision public health. Glob Policy. 2021;12:90–2.

Couldry N, Mejias UA. Data colonialism: rethinking big data’s relation to the contemporary subject. Telev New Media. 2019;20(4):336–49.

World Health Organization. Ethics and governance of artificial intelligence for health: WHO guidance. Geneva: World Health Organization; 2021.

Metcalf J, Moss E. Owning ethics: corporate logics, silicon valley, and the institutionalization of ethics. Soc Res Int Q. 2019;86(2):449–76.

Office of the Data Protection Commissioner, Kenya. Data Protection Act [Internet]. 2021 [cited 2023 Sep 30]. https://www.odpc.go.ke/dpa-act/ .

Sharon T, Lucivero F. Introduction to the special theme: the expansion of the health data ecosystem: rethinking data ethics and governance. Big Data Soc. 2019;6(2):2053951719852969.

Reisman D, Schultz J, Crawford K, Whittaker M. Algorithmic impact assessments: a practical framework for public agency accountability. AI Now Institute; 2018.

Morgan RK. Environmental impact assessment: the state of the art. Impact Assess Proj Apprais. 2012;30(1):5–14.

Samuel G, Richie C. Reimagining research ethics to include environmental sustainability: a principled approach, including a case study of data-driven health research. J Med Ethics. 2023;49(6):428–33.

Kwete X, Tang K, Chen L, Ren R, Chen Q, Wu Z, et al. Decolonizing global health: what should be the target of this movement and where does it lead us? Glob Health Res Policy. 2022;7(1):3.

Abimbola S, Asthana S, Montenegro C, Guinto RR, Jumbam DT, Louskieter L, et al. Addressing power asymmetries in global health: imperatives in the wake of the COVID-19 pandemic. PLoS Med. 2021;18(4):e1003604.

Benatar S. Politics, power, poverty and global health: systems and frames. Int J Health Policy Manag. 2016;5(10):599.


Acknowledgements

We would like to acknowledge the outstanding contributions of the attendees of GFBR 2022 in Cape Town, South Africa. This paper is authored by members of the GFBR 2022 Planning Committee. We would like to acknowledge additional members Tamra Lysaght, National University of Singapore, and Niresh Bhagwandin, South African Medical Research Council, for their input during the planning stages and as reviewers of the applications to attend the Forum.

This work was supported by Wellcome [222525/Z/21/Z], the US National Institutes of Health, the UK Medical Research Council (part of UK Research and Innovation), and the South African Medical Research Council through funding to the Global Forum on Bioethics in Research.

Author information

Authors and Affiliations

Department of Physical Therapy, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada

Berman Institute of Bioethics, Johns Hopkins University, Baltimore, MD, USA

Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, USA

Department of Philosophy and Classics, University of Ghana, Legon-Accra, Ghana

Caesar A. Atuire

Centre for Tropical Medicine and Global Health, Nuffield Department of Medicine, University of Oxford, Oxford, UK

Mahidol Oxford Tropical Medicine Research Unit, Faculty of Tropical Medicine, Mahidol University, Bangkok, Thailand

Phaik Yeong Cheah

Berkman Klein Center, Harvard University, Bogotá, Colombia

Armando Guio Español

Department of Radiology and Informatics, Emory University School of Medicine, Atlanta, GA, USA

Judy Wawira Gichoya

Health Ethics & Governance Unit, Research for Health Department, Science Division, World Health Organization, Geneva, Switzerland

Adrienne Hunt & Katherine Littler

African Center of Excellence in Bioinformatics and Data Intensive Science, Infectious Diseases Institute, Makerere University, Kampala, Uganda

Daudi Jjingo

ISI Foundation, Turin, Italy

Daniela Paolotti

Department of Health Sciences and Technology, ETH Zurich, Zürich, Switzerland

Effy Vayena

Joint Centre for Bioethics, Dalla Lana School of Public Health, University of Toronto, Toronto, Canada


Contributions

JS led the writing, contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. JA, CA, PYC, AE, JWG, AH, DJ, KL, DP, and EV each contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper.

Corresponding author

Correspondence to James Shaw.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Shaw, J., Ali, J., Atuire, C.A. et al. Research ethics and artificial intelligence for global health: perspectives from the global forum on bioethics in research. BMC Med Ethics 25, 46 (2024). https://doi.org/10.1186/s12910-024-01044-w


Received: 31 October 2023

Accepted: 01 April 2024

Published: 18 April 2024

DOI: https://doi.org/10.1186/s12910-024-01044-w


  • Artificial intelligence
  • Machine learning
  • Research ethics
  • Global health

BMC Medical Ethics

ISSN: 1472-6939


Essay On Ethical Judgements

According to Hunt and Vitell (1986), ethical judgment is the process of considering several alternatives and choosing the most ethical one. In my opinion, ethical judgments are the moral principles that justify the values of certain behaviors. Ethical judgments can be very subjective, because people draw on their own cultural backgrounds, religious beliefs, personal perspectives and life experiences to make them. The question arises: how do we justify the ways ethical judgments limit methods in art and natural science? How the judgments are justified, and who justifies them, produces different results in limiting methods in art and natural science. Most ethical judgments reduce the number of methods available for the production of knowledge; a few, however, expand the methods used to create art and explore natural science. People usually use reasoning to address ethical judgments, but personal emotions also affect their judgments.

Differences in ethics and values lead to different results in making judgments. The ethics and values of some Muslim citizens lead them to believe it is ethical and necessary for women to be covered, because women need to keep themselves chaste by concealing their beauty. This risks losing the chance to appreciate the beauty of the human body and limits fashion design as a method of art. Moreover, Muslim women will be hurt emotionally if their bodies are considered unethical and illegal, and they will be ashamed of their own bodies. Many non-Muslim people also believe that such dress codes are uncomfortable during hot summers, dangerous for women doing work, and restrict the chance to show personality through clothing. This Muslim ethical judgment on clothing runs into conflict in other... ... middle of paper ... ...t if I drew Mary in an elegant appearance with a sacred surrounding, even with Asian features, she would be appropriate and interesting.

Thus, I tried to express my respect and admiration by drawing a graceful Mary. The goal was to create art without breaking ethical judgments, and in Asian Catholics' minds it is definitely ethical to paint an elegant Asian Mary. From the two examples, ethical judgment can reduce the influence and creation of art, but sometimes it also encourages people to think of alternative methods of making art, especially if the justifications for ethical judgments are considered. The way we justify ethical judgments will affect whether the judgment limits the methods in art and the natural sciences or not. Different people will have different opinions about ethical judgments; thus, how they restrain or expand methods in art and natural science varies.

Uustal Ethical Decision Making Model Essay

We have one resident in the long-term facility who has stage four cancer of the spinal cord, and he has been suffering from intense pain. Every time I enter his room, he cries and implores God to lessen his suffering. He has prescriptions for hydromorphone 8 mg every 4 hours PRN, oxycodone 5 mg every 6 hours, and a 50 mcg fentanyl patch changed every third day. Even after all scheduled and PRN medications are given, his pain level remains the same as before. When I see that patient, I feel like giving the highest dose of medicine as well as alternative pain-management therapy so that he can have some comfort, but ethically I have no right to do that. He is on hospice care, yet he has no comfort at all. Following are the nine steps of the Uustal ethical decision-making model.

Veiled Intentions: Don T Judge A Muslim Girl

Ever pass by a Muslim woman in a hijab at the mall or park and think how oppressive and restraining her culture must be? Maysan Haydar, a New York social worker who practices the Muslim tradition of veiling, believes otherwise. In her article, “Veiled Intentions: Don’t Judge a Muslim Girl by Her Covering,” Haydar reflects on her experiences as a Muslim living in an American culture, where showing more skin is the “norm.” Haydar speaks specifically to a crowd that unconsciously makes assumptions about certain Muslim practices, in hopes of sharing the truth behind them. Haydar suggests that, contrary to popular belief, not all Muslim women cover themselves strictly as an “oppressive” religious practice, but that some women, like herself, find

Gaut's Argument

In Gaut’s essay, “The Ethical Criticism of Art”, he addresses the relevance of an art piece’s ethical value when making an aesthetic evaluation. His key argument revolves around the attitudes that works of art manifest such that he presents the following summary “If a work manifests ethically reprehensible attitudes, it is to that extent aesthetically defective, and if a work manifests ethically commendable attitudes, it is to that extent aesthetically meritorious”. In direct contrast with formalists, who divine a work’s merit through an assessment of its style and compositional aspects, Gaut states that any art piece’s value requires a pro tanto judgement. This pro tanto position allows for pieces considered stylistic masterpieces, to be

Ethical Judgements Essay

Our perception of moral judgments sometimes affects the ways in which knowledge is produced. In these two areas of knowledge, the natural sciences and the arts, the ways of knowing are different as is the nature of the knowledge produced. Likewise, ethical judgments may or may not limit knowledge in these areas but in different ways. Ethical judgments may lead to questioning the means by which some scientific knowledge is produced. Significant, meaningful works of art are produced only when the artist is able to transmit an emotion to the spectator, reader or listener effectively. This is why powerful emotional reactions to a work of art sometimes produce strong and often opposing ethical judgements which can limit the artist’s opportunities to produce knowledge.

Ethics And Ethics Essay

Ethics is a branch of philosophy that deals with the moral principles and values that govern our behavior as human beings. It is important in the human experience that we are able to grasp the idea of our own ethical code in order to become the most sensible human beings. But in that process, can ethics be taught to us? Or later in a person’s life, can he or she teach ethics the way they learned it? It is a unique and challenging concept because it is difficult to attempt to answer that question objectively because everybody has his or her own sense of morality. And at the same time, another person could have a completely different set of morals. Depending on the state of the person’s life and how they have morally developed vary from one human

Ethical Decision Making Essay

In the profession of Dental Hygiene, ethical dilemmas are nearly impossible to avoid, and most hygienists at some point in their professional life will have to face and answer ethical questions. Some ethical conflicts the dental hygienist may encounter can be quite complex and an obvious answer may not be readily available. In the article Ethical Decision Making, Phyllis Beemsterboer suggests an ethical decision-making model can aide the dental hygienist in making appropriate decisions when confronted with an ethical situation, and that the six-step model can serve dental hygienists in making the most advantageous ethical decision (2010).

Ethics : Approaching Moral Decisions

Some of the deficiencies in the way cultural relativism addresses moral problems, according to Holmes, are that it remains impractical, that its verdicts change depending on where you live, and that it asks people merely to tolerate different cultures. As a professional business person, I agree with Holmes's analysis. Letting others' perceptions or beliefs override our own personal beliefs would be contradicting ourselves. It is important to stand up for our beliefs and to help educate others on ethical issues. Over time we can make a difference in the world by modeling moral beliefs and ethics.

Ethical Ethics Essay

Ethics are influenced by many factors, including family, peers, past experiences, religion, and circumstances. People decide whether something is ethical, and whether it is right or wrong, based on these influences. Family shapes our ethics because we absorb ideas about what is ethical from watching how our family members act. Peers matter as well: the classmates and friends who surround us gradually shape our beliefs about right and wrong. Past experiences also guide these decisions, because people weigh what has happened to them before. Religion plays a role too, since a person's religious beliefs usually inform his or her sense of right and wrong. Finally, people sometimes change their ethical judgments depending on the situation they are in.

The term ethics originates from the Greek word ethikos, later translated into Latin as moralis, so it is easy to see the link between ethics and morals. When we refer to irresponsible behaviour, we call it immoral or unethical. The focus is on the character and mannerisms of a person. Ethics is unselfish: it balances what is good for one's self with what is good for others. An action is therefore unethical if the person doing it is concerned only about the self and not about the good of the other.

Ethical Theories Essay

Ethical theories are a way of finding solutions to ethical dilemmas using moral reasoning or moral character. The overall classification of ethical theories involves finding a resolution to ethical problems that are not necessarily answered by laws or principles already in place but that achieve justice and allow for individual rights. There are many different ethical theories and each takes a different approach as to the process in which they find a resolution. Ethical actions are those that increase prosperity, but ethics in business is not only focused on actions, it can also involve consequences of actions and a person’s own moral character.

The Production of Knowledge in both the Arts and the Natural Sciences

Based on this creator-centric definition, one may claim that art is purely a form of individual expression, and therefore creation of art should not be hindered by ethical consideration. Tattoos as pieces of artwork offer a great example of this issue. However, one may take it from the viewer’s perspective and claim that because art heavily involves emotion and the response of a community after viewing it, the message behind what is being presented is what should actually be judged. To what extent do ethical judgements limit the way the arts are created?... ...

Ethical Judgments Limit the Pursuit of Knowledge

This essay will show that ethical considerations do limit the production of knowledge in both art and natural sciences and that such kind of limitations are present to a higher extent in the natural sciences.

Law And Morality Essay

The relationship between law and morality has been argued over by legal theorists for centuries, and the debate is constantly being readdressed as new cases raise important moral and legal questions. This essay will explain the nature of law and morality and how they are linked.

Ethics and Scientific Research

Ethics is the study of moral values and the principles we use to evaluate actions. Ethical concerns can sometimes stand as a barrier to the development of the arts and the natural sciences. They hinder the process of scientific research and the production of art, preventing us from arriving at knowledge. This raises the knowledge issues of: To what extent do moral values confine the production of knowledge in the arts, and to what extent are the ways of achieving scientific development limited due to ethical concerns? The two main ways of knowing used to produce ethical judgements are reason, the power of the mind to form judgements logically, and emotion, our instinctive feelings. I will explore their applications in various ethical controversies in science and arts as well as the implications of morals in these two areas of knowledge.

Essay On Consensus And Consensus

I am going to illustrate my reservations with the examples of natural science and ethics. Natural science uses reason and sense perception as ways of knowing. A verified hypothesis becomes a theory and, in the even longer run, turns into a law, which can be dismissed only once proven wrong by the scientific method. In ethics we



Guest Essay

I Thought the Bragg Case Against Trump Was a Legal Embarrassment. Now I Think It’s a Historic Mistake.


By Jed Handelsman Shugerman

Mr. Shugerman is a law professor at Boston University.

About a year ago, when Alvin Bragg, the Manhattan district attorney, indicted former President Donald Trump, I was critical of the case and called it an embarrassment. I thought an array of legal problems would and should lead to long delays in federal courts.

After listening to Monday’s opening statement by prosecutors, I still think the district attorney has made a historic mistake. The prosecution’s vague allegation about “a criminal scheme to corrupt the 2016 presidential election” has me more concerned than ever about its unprecedented use of state law and its persistent avoidance of specifying an election crime or a valid theory of fraud.

To recap: Mr. Trump is accused in the case of falsifying business records. Those are misdemeanor charges. To elevate it to a criminal case, Mr. Bragg and his team have pointed to potential violations of federal election law and state tax fraud. They also cite state election law, but state statutory definitions of “public office” seem to limit those statutes to state and local races.

Both the misdemeanor and felony charges require that the defendant made the false record with “intent to defraud.” A year ago, I wondered how entirely internal business records (the daily ledger, pay stubs and invoices) could be the basis of any fraud if they are not shared with anyone outside the business. I suggested that the real fraud was Mr. Trump’s filing an (allegedly) false report to the Federal Election Commission, and that only federal prosecutors had jurisdiction over that filing.

A recent conversation with Jeffrey Cohen, a friend, Boston College law professor and former prosecutor, made me think that the case could turn out to be more legitimate than I had originally thought. The reason has to do with those allegedly falsified business records: Most of them were entered in early 2017, generally before Mr. Trump filed his Federal Election Commission report that summer. Mr. Trump may have foreseen an investigation into his campaign, leading to its financial records. He may have falsely recorded these internal records before the F.E.C. filing as consciously part of the same fraud: to create a consistent paper trail and to hide intent to violate federal election laws, or defraud the F.E.C.

In short: It’s not the crime; it’s the cover-up.

Looking at the case in this way might address concerns about state jurisdiction. In this scenario, Mr. Trump arguably intended to deceive state investigators, too. State investigators could find these inconsistencies and alert federal agencies. Prosecutors could argue that New York State agencies have an interest in detecting conspiracies to defraud federal entities; they might also have a plausible answer to significant questions about whether New York State has jurisdiction or whether this stretch of a state business filing law is pre-empted by federal law.

However, this explanation is a novel interpretation with many significant legal problems. And none of the Manhattan district attorney’s filings or today’s opening statement even hint at this approach.

Instead of a theory of defrauding state regulators, Mr. Bragg has adopted a weak theory of “election interference,” and Justice Juan Merchan described the case, in his summary of it during jury selection, as an allegation of falsifying business records “to conceal an agreement with others to unlawfully influence the 2016 election.”

As a reality check: It is legal for a candidate to pay for a nondisclosure agreement. Hush money is unseemly, but it is legal. The election law scholar Richard Hasen rightly observed, “Calling it election interference actually cheapens the term and undermines the deadly serious charges in the real election interference cases.”

In Monday’s opening argument, the prosecutor Matthew Colangelo still evaded specifics about what was illegal about influencing an election, but then he claimed, “It was election fraud, pure and simple.” None of the relevant state or federal statutes refer to filing violations as fraud. Calling it “election fraud” is a legal and strategic mistake, exaggerating the case and setting up the jury with high expectations that the prosecutors cannot meet.

The most accurate description of this criminal case is a federal campaign finance filing violation. Without a federal violation (which the state election statute is tethered to), Mr. Bragg cannot upgrade the misdemeanor counts into felonies. Moreover, it is unclear how this case would even fulfill the misdemeanor requirement of “intent to defraud” without the federal crime.

In stretching jurisdiction and trying a federal crime in state court, the Manhattan district attorney is now pushing untested legal interpretations and applications. I see three red flags raising concerns about selective prosecution upon appeal.

First, I could find no previous case of any state prosecutor relying on the Federal Election Campaign Act either as a direct crime or a predicate crime. Whether state prosecutors have avoided doing so as a matter of law, norms or lack of expertise, this novel attempt is a sign of overreach.

Second, Mr. Trump’s lawyers argued that the New York statute requires that the predicate (underlying) crime must also be a New York crime, not a crime in another jurisdiction. The district attorney responded with judicial precedents only about other criminal statutes, not the statute in this case. In the end, the prosecutors could not cite a single judicial interpretation of this particular statute supporting their use of the statute (a plea deal and a single jury instruction do not count).

Third, no New York precedent has allowed an interpretation of defrauding the general public. Legal experts have noted that such a broad “election interference” theory is unprecedented, and a conviction based on it may not survive a state appeal.

Mr. Trump’s legal team also undercut itself for its decisions in the past year: His lawyers essentially put all of their eggs in the meritless basket of seeking to move the trial to federal court, instead of seeking a federal injunction to stop the trial entirely. If they had raised the issues of selective or vindictive prosecution and a mix of jurisdictional, pre-emption and constitutional claims, they could have delayed the trial past Election Day, even if they lost at each federal stage.

Another reason a federal crime has wound up in state court is that President Biden’s Justice Department bent over backward not to reopen this valid case or appoint a special counsel. Mr. Trump has tried to blame Mr. Biden for this prosecution as the real “election interference.” The Biden administration’s extra restraint belies this allegation and deserves more credit.

Eight years after the alleged crime itself, it is reasonable to ask if this is more about Manhattan politics than New York law. This case should serve as a cautionary tale about broader prosecutorial abuses in America — and promote bipartisan reforms of our partisan prosecutorial system.

Nevertheless, prosecutors should have some latitude to develop their case during trial, and maybe they will be more careful and precise about the underlying crime, fraud and the jurisdictional questions. Mr. Trump has received sufficient notice of the charges, and he can raise his arguments on appeal. One important principle of “ our Federalism ,” in the Supreme Court’s terms, is abstention , that federal courts should generally allow state trials to proceed first and wait to hear challenges later.

This case is still an embarrassment, in terms of prosecutorial ethics and apparent selectivity. Nevertheless, each side should have its day in court. If convicted, Mr. Trump can fight many other days — and perhaps win — in appellate courts. But if Monday’s opening is a preview of exaggerated allegations, imprecise legal theories and persistently unaddressed problems, the prosecutors might not win a conviction at all.

Jed Handelsman Shugerman (@jedshug) is a law professor at Boston University.



COMMENTS

  1. A Framework for Ethical Decision Making

Ethics Resources. A Framework for Ethical Decision Making. This document is designed as an introduction to thinking ethically. Read more about what the framework can (and cannot) do. We all have an image of our better selves—of how we are when we act ethically or are “at our best.” We probably also have an image of what an ethical ...

  2. Right and Wrong in the Real World

    At the outset, we need to recognize—and take seriously—the difficulties inherent in these judgments. The interesting ethical questions aren't those that offer a choice between good and evil—that's easy—but pit good versus good, or bad versus even worse. Take, for example, the case of our friend walking out the door wearing that ...

  3. Ethical Judgments: What Do We Know, Where Do We Go?

A reasonable approach to begin the process of identifying and retrieving relevant literature is to consult review papers. For example, O'Fallon and Butterfield (2005, Table 3) compiled and summarized an extensive body of research dealing with judgment in the context of ethics. However, the pertinence of some of this research to the construct of ethical judgments as defined earlier is ...

  4. How Do We Make Ethical Decisions? An Essay

    He concluded that ethical action is the result of four psychological processes: (1) moral sensitivity (recognition), (2) moral judgment (reasoning), (3) moral focus (motivation), and (4) moral character (action). Moral Sensitivity. The first step in moral behavior requires that the individual interpret the situation as moral.

  5. PDF Practical Cognitivism: An Essay on Normative Judgment

    We have no choice but to uphold some ethical standards and resist others. We couldn't coherently reject all ethical standards and still go on living a remotely human life. Being ethical is a way of responding to the legitimate needs and interests of other people, who are just as real and just as important as you are. Being ethical makes us happy.

  6. What is Ethical Judgement?

This is the process and basis for what we can call “ethical judgment.” Judgment on an ethical issue will usually depend on two things: values and priorities. Values are the things that we hold important for our sense of who we are. They are expressed in statements such as “human life and dignity should be protected,” or “cheating is ...

  7. Thinking Ethically

    This approach to ethics assumes a society comprising individuals whose own good is inextricably linked to the good of the community. Community members are bound by the pursuit of common values and goals. The common good is a notion that originated more than 2,000 years ago in the writings of Plato, Aristotle, and Cicero.

  8. Moral Reasoning

    Importantly intermediate, in this respect, is the set of judgments involving so-called "thick" evaluative concepts - for example, that someone is callous, boorish, just, or brave (see the entry on thick ethical concepts). These do not invoke the supposedly "thinner" terms of overall moral assessment, "good," or "right."

  9. PDF Ethical Judgments: What Do We Know, Where Do We Go?

papers from the time period of their review (identified below) that we were able to locate by means of the SSCI (Pan and Sparks (2012) also left out several pertinent ... as “ethical judgments”, were labels that conveyed little to no indication of applicability to judgments in the context of ethics. Beekun et al. (2003a, p. 276), for ...

  10. PDF Professional Judgement in Ethical Decision-Making: Dialogue and ...

    The four moral intensity dimensions are: seriousness of the consequences, social consensus on what is right or wrong, temporal immediacy (now or later), and proximity (how close the counsellor is to the client). In addition, Cottone (2001) expands on this process of decision-making by offering a social constructivist model of ethical decision ...

  11. Why Moral Judgments Can Be Objective

    In this essay I defend the objectivity of ethical judgments by deploying a neo-Aristotelian naturalism by which to keep the "is-ought" gap at bay and place morality on an objective footing. I do this with the aid of the ideas of Ayn Rand as well as, but only by implication and association, those of Martha Nussbaum and Philippa Foot ...

  12. Ethics and Judgment: Judging What Is More Right and Less Wrong

    Developing good judgment means being curious, listening, and learning from your mistakes. Successful executives are also able to learn from the errors of others and change their view when the evidence demands change. And judgment isn't all about experience: you need to apply intelligent reasoning to that experience, or it doesn't work.

  13. Ethics

    The term ethics may refer to the philosophical study of the concepts of moral right and wrong and moral good and bad, to any philosophical theory of what is morally right and wrong or morally good and bad, and to any system or code of moral rules, principles, or values. The last may be associated with particular religions, cultures, professions, or virtually any other group that is at least ...

  14. The Foundation of Moral and Ethical Judgment Essay

    Philosophers explore such ethical problems as the sources of moral notions and norms and the standards of morals in society. For example, some philosophers see morals as a set of norms dictated to humanity from above. The source of morals, according to John Locke, is divine. According to Immanuel Kant, human morals have a non ...

  15. What Is Ethical Dilemma?

    Ethical dilemmas stand as intricate labyrinths of morality, where individuals tread carefully to reconcile conflicting principles amid the ebb and flow of life's challenges. ... This inherent tension complicates the decision-making process and underscores the need for nuanced judgment. Ethical dilemmas extend beyond individual ...

  16. PDF Ethical Judgment

    short-answer or essay tests to assess students' knowledge of strategies to push oneself. Have students write reports, based on observations or ... Ethical judgment is reasoning about the possible actions in the situation and judging which action is most ethical. A person making an ethical judgment uses reason

  17. Moral judgement and decision-making: theoretical predictions ...

    The study of moral judgement and decision making examines the way predictions made by moral and ethical theories fare in real world settings. Such investigations are carried out using a variety of ...

  18. Professional Ethics: Moral Judgments

    The accuracy of the information used while forming a moral judgment is as important as its credibility. Finally, moral judgments should rely on common moral principles. Hence, every moral judgment is based on some standards, and those standards generally reflect moral principles. Credible moral judgments should be based on the so-called sound ...

  19. PDF Ethical judgments: what do we know, where do we go?

    Ethical judgments refer to individual determinations of the appropriateness of a course of action that could possibly be interpreted as wrong (Reidenbach and Robin, 1990; Robin, Reidenbach, and ... likely have cited seminal papers in which these measures first appeared. Much research reviewed by O'Fallon and Butterfield (2005; e.g., Wu, 2003 ...

  20. 7 Ways to Improve Your Ethical Decision-Making

    7. Accept Feedback. Ethical decision-making is susceptible to gray areas and often met with dissent, so it's critical to be approachable and open to feedback. The benefits of receiving feedback include: Learning from mistakes. Having more opportunities to exhibit compassion, fairness, and transparency.

  21. How do clinical psychologists make ethical decisions? A systematic

    This research assessed ethical intention (an ethical decision that is made, which can be hypothetical rather than a completed action) in comparison with the Theory of Planned Behaviour. The model suggests that decision-makers' reported attitudes (such as a positive evaluation of a course of action) and subjective norms (such as social pressure ...

  22. Ethical Subjectivism Vs Ethical Relativism

    In the vast terrain of moral philosophy, two distinct vantage points stand out: ethical individualism and cultural relativism. These perspectives offer contrasting lenses through which we perceive the intricacies of morality, diverging in their approaches to the origins of moral truths and the foundations of ethical judgment.

  23. Professional judgement and decision-making in social work

    Welcome to the second of two inter-related special issues. The first focused upon risk in social work (Whittaker & Taylor, 2017), and this special issue focuses upon professional judgement and decision-making. It consists of eight articles across a range of countries and settings that examine key issues relevant to practitioners and managers as well as researchers and policy ...

  24. Research ethics and artificial intelligence for global health

    The ethical governance of Artificial Intelligence (AI) in health care and public health continues to be an urgent issue for attention in policy, research, and practice. In this paper we report on central themes related to challenges and strategies for promoting ethics in research involving AI in global health, arising from the Global Forum on Bioethics in Research (GFBR), held in Cape Town ...

  25. Essay On Ethical Judgements

    According to Hunt and Vitell (1986), ethical judgment is the process of considering several alternatives and choosing the most ethical one. In my opinion, ethical judgments are the moral principles that justify the values of certain behaviors. Ethical judgments can be very subjective for ...

  26. Situation Ethics Essay Plan Flashcards

    Flashcard prompts include: Does situation ethics provide a helpful method of moral decision-making? Can an ethical judgement about something being good, bad, right or wrong be based on the extent to which, in any given situation, agape is best served? And is Fletcher's understanding of agape really religious, or does it mean nothing more than ...

  27. Algorithmic Judicial Ethics by Keith Swisher :: SSRN

    This article explores these algorithmic developments in criminal courts across the country and makes four contributions: (1) a survey and preliminary application of judicial ethics to this development; (2) a preliminary moral argument, informed by related judicial ethics and legal standards, suggesting that judges should use these algorithmic ...

  28. Opinion

    Mr. Shugerman is a law professor at Boston University. About a year ago, when Alvin Bragg, the Manhattan district attorney, indicted former President Donald Trump, I was critical of the case and ...