
Chapter 5 - Moderation

In the application of moral principles, moderation is the key.

There are many people who do not drink alcohol. Abstinence certainly ensures that one does not overdo it, nor have to face the morning-after effects of a couple of bottles of wine the night before. But wine is not all bad - medical research has suggested that a glass of wine a day can be beneficial. The inability to sustain moderation is the problem.

Moderation also comes from the toleration of fallibility: the recognition that the real world does not allow the existence of perfect states or qualities. This is especially true for well-being. Since there are a myriad of qualities that go into a full determination of well-being and they are often interrelated, there is no way of achieving the highest possible condition for even one of these qualities.

As an example of this, consider the tradeoff between freedom and justice. Absolute justice can be considered accomplished when every action that creates an imbalance between the parties involved is rectified in order to restore that balance. Absolute freedom, on the other hand, is the ability to act unfettered by any restraints placed upon one's actions by other persons. To achieve absolute justice in an imperfect world, each action would have to be answered with some sort of rectification, since no act can be perfectly balanced. But this means that every act would be constrained, and there would be no freedom at all. Conversely, absolute freedom would mean that no act, no matter how unjust, would be constrained. The best place to be is where the combined amount of freedom and justice is maximized, and this balance point is not necessarily at the 50-50 level. Given some measure of how much freedom and justice can be given up or attained, it may be possible to have 95% of the possible justice by giving up only 10% of one's freedom, while giving up 10% of one's justice might yield only 92% of one's freedom. In that case, the better choice may be to settle for 95% justice and 90% freedom.
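This balance-point reasoning can be sketched numerically. The tradeoff curve below is entirely hypothetical - its exponent is chosen only so that giving up about 10% of freedom buys roughly 95% of the possible justice, as in the example - and the combined measure is a simple sum.

```python
# Hypothetical freedom/justice tradeoff: the curve and the combination
# rule are illustrative assumptions, not measured quantities.

def justice_attained(freedom_given_up: float) -> float:
    """Diminishing-returns curve: the first sacrifices of freedom buy
    justice quickly; later ones buy less and less. The exponent is tuned
    so that giving up ~10% of freedom yields ~95% of possible justice."""
    return 1.0 - (1.0 - freedom_given_up) ** 28

def combined_well_being(freedom_given_up: float) -> float:
    """Combined measure: remaining freedom plus attained justice."""
    return (1.0 - freedom_given_up) + justice_attained(freedom_given_up)

# Search the whole range of tradeoffs for the balance point.
best_value, best_sacrifice = max(
    (combined_well_being(x / 100), x / 100) for x in range(101)
)
# Under these assumptions the balance point lands near 10-12% of
# freedom given up, not at the 50-50 level.
```

The particular curve does not matter; any curve with diminishing returns in each direction produces an interior balance point rather than an extreme one.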

Morality has similar properties, although there is no equivalent to Planck's constant to measure the effect. Given any two dimensions of well-being, the attempt to maximize one of these will, once we pass a certain point, ultimately lead to a decrease in the other. Once we pass the balance point for these two dimensions, the combined measure of well-being will decrease, eventually dropping off towards zero.

This trade-off occurs both for the person and for society. The attempt to maximize the health of a person will, if taken too far, lead to a decrease in that person's happiness, sense of security and wealth. A society made maximally secure against enemies, internal and external, gets out of balance in terms of its honesty, happiness and efficiency. Even the effort of striking a balance itself, being a separate (though hardly independent) dimension of well-being, cannot be maximized. That is, the effort involved in "hill-climbing" serves to lower the hill. It is better to expend a moderate amount of energy to "get close": although a higher hill may be out there, the effort spent striving to reach it will lower it.

One case where this tradeoff is seen in its simplest and most abstract form is the two forms of the Golden Rule. "Do unto others as you would have them do unto you" is the positive injunction. Its negative is "first, do no harm". The trade-off between the two is usually a probabilistic one. That is, if one does not allow for a reasonable risk of something harmful happening in order to accomplish a good, the available choices of action are circumscribed to the most limited positive gains. Neither the ultra-conservative path nor actions that recklessly endanger the other person are called for. To function in a way that maximizes the chance of the best well-being, a moderate risk should be accepted, or even encouraged.

There is a balance between good and harm in most situations, and it is not just the result of risk. Although the injunction "first, do no harm" is often applied to the field of medicine, both medications and surgery incorporate both health and harm in their effects. The cut of the surgeon's knife destroys healthy flesh before the illness is treated. The benefits of a medication come with side effects that must be accepted. Although the benefit and loss have some randomness, this is secondary. Primary is the fact that almost all medical treatments carry both benefit and harm. Moderation demands that both be taken into account.

Because it is usually recognized that there can be no gain without some loss, the two different injunctions are actually implicit ways of weighting the negative effects above the positive, or vice versa. "Do unto others" works as a positive precept because, in the daily affairs of mankind, doing a good thing for someone else carries little risk of a negative outcome. Stepping aside for someone who is obviously in a hurry does not make one that much later, but can certainly help that person who is late for an appointment. On the other hand, in the medical sphere, where harm can mean a loss of function or even of life, "First, do no harm" implicitly weighs the benefits of good outcomes, which can sometimes be limited at best, against the costs of bad outcomes, which can sometimes be disastrous.

In general, every moral action is a maximization problem of some kind. People are urged to live moderately. This advice applies to the single person alone, but it applies just as well to any action, no matter how many and what type of entities are involved. But just as often people are urged to seek the highest quality. Yes, but at what cost? The advice to strive for the best works only in cases where an unspoken simplifying assumption is close to true: that the costs of each alternative are the same.

This simplification is also found in epistemology - the philosophy of knowledge. Epistemology is a necessary foundation for the types of moral action being discussed here, since one cannot act to maximize well-being unless one knows what well-being is and how to improve on it. This requires knowledge and the ability to learn.

The simplifying assumption that the costs of each choice are similar underlies one of the basic concepts of epistemology - Ockham's Razor. The razor as originally expressed states that "entities are not to be multiplied beyond necessity". This means that one should not make more assumptions than the minimum needed. This principle of parsimony urges us, when trying to model some process, to choose the simplest model from a set of equivalent models. This principle was the reason why the Copernican planetary system was considered superior to the Ptolemaic one and to a third competing system by Tycho Brahe. This third system postulated that the planets circled the sun, but that the sun circled a fixed earth. Even though the three systems were computationally equivalent, the Copernican system had fewer entities in the form of epicycles.

The problem with Ockham's Razor is that in real life the models are hardly ever equivalent in explanatory power. Given a list of observations to explain, a simple listing of all of the observations is the most accurate representation of what has happened. Unfortunately, it has no predictive power, since it claims that there are no other cases than the ones we have seen - a prediction that is almost certainly wrong. At the other extreme, the model that says that all observations are possible is easily the simplest model. It too has no predictive power, since it claims that any observation whatsoever can happen.

Somewhere between the most detailed and the simplest model lies a wide range of models. Some of these models make simplifying assumptions that enable certain details in the observations to be ignored, thus producing a simpler explanation than the complete listing. If the details that were ignored are extraneous, then the accuracy is the same as that of the complete listing and Ockham's Razor applies - the model that leaves out the extraneous details is preferable. But it is more likely that the model, due to its simplifications, gives up some accuracy, so that the cases it claims are in the set being modeled are not quite the observations themselves, but items that resemble the observations with minor, perhaps random, differences. Further simplifications can be made to this model, resulting in a series of simpler models of decreasing accuracy.
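This spectrum of models can be made concrete with a toy sketch. The observations and the intermediate "interval" model below are invented for illustration; the point is only how the three kinds of model trade completeness against generality.

```python
# Toy illustration of the spectrum of models: observations are
# hypothetical daily noon temperatures.
observations = [18.2, 19.1, 17.8, 18.9, 18.4]

# Model 1: the complete listing -- perfectly accurate, but it claims
# no cases exist other than the ones already seen.
def exact_listing(x: float) -> bool:
    return x in observations

# Model 2: the "anything is possible" model -- simplest of all, but it
# accepts every value, so it predicts nothing.
def anything_goes(x: float) -> bool:
    return True

# Model 3: an intermediate model that ignores extraneous detail:
# "noon temperatures fall between 17 and 20 degrees".
def interval_model(x: float) -> bool:
    return 17.0 <= x <= 20.0

# A new, previously unseen observation: the listing wrongly rejects it,
# the interval model accepts it, and "anything goes" accepts it along
# with absurd values like -40.
new_observation = 18.6
```

The interval model gives up the ability to reproduce each observation exactly in exchange for the ability to say something about cases not yet seen.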

This process can also go in the other direction. Starting with the model that claims anything is acceptable, restrictions can be placed that remove whole classes of data from consideration as part of the set to be modeled. This process can even eliminate all of the observed data, replacing it with a set of archetypal values that are close to it. These predictive values may each differ slightly from the real observations, but they may have statistical properties that match those of the observed data, something that may be more important than the ability to model each data value exactly.

This tradeoff can be considered purely in terms of the descriptive power of the models - the ability to summarize the data that has already been seen. But it is usually considered even more important that a model have predictive power. The two abilities are usually related: the ability to summarize the observed data is often due to the model's capturing some underlying principle that governed how the observations came to be in the past and that works just as well in the future.

An example of this progression of models is the series of astronomical theories created by Kepler, Newton and Einstein. Kepler's model postulated the motion of the planets in elliptical orbits. Newton postulated a general force of gravity that predicted the motion of the planets just as well, but was also able to predict the motions of comets, which were not explained by Kepler. Newtonian physics was also applicable to objects outside of the solar system, such as other galaxies. Einstein's General Theory of Relativity was able to explain the precession of the orbit of Mercury, an observed deviation that Newton's theory could not account for. Einstein's model was also able to predict the existence of gravitational lensing, a phenomenon that was only observed recently.

In the tradeoff between simplicity and accuracy, it is sometimes not necessary to use the most accurate model. In calculating the orbits of man-made satellites, it is sufficient to perform the calculations using Newtonian gravity; it is not necessary to take general relativistic effects into account.

The tradeoff between simplicity and accuracy requires a means of calculating the tradeoff. What is an acceptable accuracy? What is the cost of making a more precise calculation?

If the accuracy and costs can be translated into probabilistic terms, then techniques such as Bayesian models of learning can be used. Bayes theorem says that the probability of one event times the conditional probability of a second event (given that the first event occurred) is the same value as the probability of the second event times the conditional probability of the first event assuming that the second event has occurred: P(A)*P(B|A)=P(B)*P(A|B).

For a made-up example, assume that a person has a 25% chance of having lung cancer if that person smokes, and the percentage of people who smoke is 15%. Assume that if the person does not smoke, then the chance of getting lung cancer is 2%. Then out of 2000 people, 34 of the 1700 nonsmokers and 75 of the 300 smokers will get lung cancer. This gives a probability of getting lung cancer at 109/2000. Given the 109 cases of lung cancer, the chances that one of these people is a smoker are 75/109. The probability of getting lung cancer times the probability of being a smoker if you have lung cancer is P(LC)*P(S|LC)=(109/2000)*(75/109)=75/2000. The probability of being a smoker times the probability of having lung cancer if you smoke is P(S)*P(LC|S)=(300/2000)*(75/300)=75/2000.
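The arithmetic of this made-up example can be checked exactly using rational numbers:

```python
from fractions import Fraction

# Verifying the worked lung-cancer example with exact fractions.
total = 2000
smokers = total * Fraction(15, 100)            # 300 smokers
nonsmokers = total - smokers                   # 1700 nonsmokers
lc_smokers = smokers * Fraction(25, 100)       # 75 smokers with lung cancer
lc_nonsmokers = nonsmokers * Fraction(2, 100)  # 34 nonsmokers with lung cancer
lc_total = lc_smokers + lc_nonsmokers          # 109 cases in all

p_lc = lc_total / total              # P(LC)   = 109/2000
p_s = smokers / total                # P(S)    = 300/2000
p_s_given_lc = lc_smokers / lc_total # P(S|LC) = 75/109
p_lc_given_s = lc_smokers / smokers  # P(LC|S) = 75/300

# Bayes' theorem: both products reduce to 75/2000.
assert p_lc * p_s_given_lc == p_s * p_lc_given_s
```

Using `Fraction` rather than floating point keeps the two sides exactly equal, as the theorem requires.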

Bayesian theory is applied to learning theory by computing the chance of generating a given model out of the universe of possible models by a random hypothesis generator. This is the first event. The second event is the data itself: how likely is the data set, assuming the model is the right one? A measure of how good a model is at explaining the data is the probability of the model given the data: P(M|D). Since we have the equation P(D)*P(M|D)=P(M)*P(D|M), the value P(M|D) equals P(M)*P(D|M)/P(D). That is, it is the probability of the model, times the probability of the data given the model, divided by the probability of generating the data set out of the universe of possible data sets by a random data set generator. Given two models M and N, we compare the value P(M)*P(D|M)/P(D) against P(N)*P(D|N)/P(D). Since the denominator P(D) is the same in both cases, it can be ignored. So model M is better than model N if the value P(M)*P(D|M) is greater than P(N)*P(D|N).
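This comparison can be sketched with a minimal example. The two coin models, the equal priors, and the flip data below are all made up for illustration; the point is only that comparing P(M)*P(D|M) against P(N)*P(D|N) is enough, since P(D) cancels.

```python
# Minimal sketch of Bayesian model comparison with hypothetical data.

def likelihood(p_heads: float, flips: str) -> float:
    """P(D|M): probability of the observed flip sequence under a model
    that says heads come up with probability p_heads."""
    prob = 1.0
    for flip in flips:
        prob *= p_heads if flip == "H" else 1.0 - p_heads
    return prob

data = "HHTHHHHTHH"                  # 8 heads, 2 tails (invented data)
prior_fair, prior_biased = 0.5, 0.5  # P(M) and P(N): equal priors

score_fair = prior_fair * likelihood(0.5, data)      # P(M)*P(D|M)
score_biased = prior_biased * likelihood(0.8, data)  # P(N)*P(D|N)

# P(D) is common to both scores, so comparing them directly suffices.
better = "biased" if score_biased > score_fair else "fair"
```

With 8 heads in 10 flips, the biased-coin model wins the comparison even though both models started with the same prior.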

Jerome Feldman later generalized this technique by replacing Bayesian statistics with a calculation that involves two functions: the cost of the complexity of a hypothesis, C(M), and the error rate of the hypothesis, E(M|D). As the complexity of the models increases, C(M) increases. Similarly, the error rate increases as the models get worse in their predictive ability. These two values are combined, but not necessarily by multiplication. Whatever combination function X is used, it must be increasing in each argument. That is, if A is greater than B, then for any C, A X C is greater than B X C, and if D is greater than E, then for any F, F X D is greater than F X E. Therefore, given a pair of models M and N, M is the better model if the value C(M) X E(M|D) is less than C(N) X E(N|D). This means that the best hypothesis to express a set of data is the one with the smallest combination of complexity and error rate.
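A small sketch of this scheme follows. The three candidate models with their complexity costs and error rates are hypothetical numbers, and addition is used as one admissible combination function (it is increasing in each argument, as required).

```python
# Hypothetical complexity costs and error rates for three candidate
# models of the same data, echoing the spectrum discussed earlier.
models = {
    "complete listing": {"complexity": 10.0, "error": 0.0},
    "interval summary": {"complexity": 2.0,  "error": 0.5},
    "anything goes":    {"complexity": 0.5,  "error": 5.0},
}

def combine(complexity: float, error: float) -> float:
    """One admissible combination function X: a plain sum, which is
    increasing in each argument. Multiplication would also qualify."""
    return complexity + error

# The best hypothesis minimizes C(M) X E(M|D).
best_model = min(
    models,
    key=lambda m: combine(models[m]["complexity"], models[m]["error"]),
)
```

Under these invented numbers the intermediate model wins: the complete listing pays too much in complexity and the trivial model pays too much in error.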

This way of coming up with the best model for a set of data was also developed independently by Ayn Rand as the Epistemological Razor: "Concepts are not to be multiplied beyond necessity - the corollary of which is: nor are they to be integrated in disregard of necessity". In this case, the more the concepts are multiplied, the more complicated the model. Integration simplifies the model. Rand recognizes that Ockham's Razor is one-sided. If the model is kept simple, it is usually at the cost of accepting an incompleteness in the explanation that leads to erroneous results. The correct balance is maintained as a tradeoff between the integration of the concepts and the necessity to model the data accurately.

These concepts have their application in morality in cost-benefit analysis. Every action that can be taken has a certain cost to the actor that reduces some part of their well-being. This cost must be taken into account when considering the benefit of the proposed action. Sometimes the most advantageous alternative comes at too high a price.

The benefits and costs of any moral act become more complicated to determine as the number of dimensions of well-being increases. The study of different alternatives for addressing poverty can consider actions that benefit the poor's financial state, but these actions can also affect their educational level, their happiness and self-esteem, their health, their risks of failure and even how well they fit into society. And the costs of the alternatives may be more than the money that would be spent to address the problem. The complexity of a solution can delay its implementation, increase the expertise required of the implementers and the number of people involved, and impose changes on society as a whole. Some techniques may not even be directly comparable - their comparison might only be made by reference to a third approach, which could be a combination of the two. Even if the combined approach fares worse than the two pure approaches, it might reveal which method is comparatively better. So, in terms of the Epistemological Razor, considerations should not be ignored in disregard of necessity.

But when we make choices that are not the best possible in a particular case, there is usually one of two reasons for this: the first is not having good evaluation functions to guide one's choice, and the second is putting an inappropriate amount of effort into finding and implementing the best choice. Both of these types of failure can be due either to a well-meaning but unfortunate selection of methodology for making the choice, or to a premeditated decision to select a method that stints in its effort to choose the best result. Whether or not the action is morally questionable depends on the degree of volition involved and the effort made to arrive at a methodology that works and to apply that methodology.

Consider a person who decides to rob a bank. The simple thing to say is that they are making a bad moral decision, but it is more interesting to ask what they were thinking. Typically, their thought process involves only a few of the entities involved in their decision - themselves, who win, and the bank, which loses. Their evaluation does not adequately judge the effort that society will make to try to catch them. Such a person also does not usually judge the chances of success very well, believing that the odds of getting caught are low if they succeed. This makes them more likely to try again: the typical criminal does not stop at one bank robbery. Nor do they usually consider the consequences of becoming an outlaw. Even if not caught, there is a chance that they could be identified, making it hard for them to fit into society. They also have the problem of accounting for their newfound wealth. A sober assessment of the costs and benefits of their actions alone should be enough to show that crime does not pay. Leaving aside any desire to lead a righteous, upstanding life, rational self-interest would argue against bank robbery as a profession.

Now consider another person who is not considered generous. A lack of charity is not a crime - certain people are just not giving by nature. It is not possible to make a blanket claim that everyone gains through giving to others. Some people, such as those who grew up never having enough, would feel a keen sense of loss if they were required to give what was theirs to someone less fortunate. But for most people, the gains that result from a reasonable amount of charity are worth more to them than that amount of money spent on themselves. It is important, though, to be aware of one's own sense of values. If being viewed as a miser actually turns out to matter to a person's sense of well-being, then a well-functioning ability to make moral decisions should identify this as the choices are being made, not just later, as that person finds out how they are viewed by the community.

Although I am arguing that a moral injunction to be charitable is not a universal law applying equally to all persons, it is important to note that, since humans are social creatures, the welfare of those around us matters to our finding ourselves in a society we find comfortable. Therefore, although it does not reach unanimity, the need to be generous is almost universal. This is another case of having an appropriate evaluation function. Moral injunctions such as the need to be charitable could be stated with conditions such as "don't bother if it hurts you to do so and you get nothing out of it", but these cover such a small number of exceptions to the general rule that the Epistemological Razor defaults to Ockham's original rule: the qualifications are not added because they are not necessary - they add a complication that to most people is not helpful.

This analysis applies to societies also. During the Great Depression, the New Deal was instituted because society decided that the gain in overall welfare of the populace was worth the cost in extra taxes even though that might result in a slower recovery. Societies make these types of tradeoffs all the time. The decision to go to war is usually an analysis that includes other alternatives. The society believes that these alternatives just do not yield a benefit, or result in a net loss, whereas a war, it is believed, can be won at a reasonable cost. These debates are usually framed in very emotional terms, with different sides arguing for different conclusions. In these cases, the factions do not agree on a method of evaluating the costs and benefits. To be able to reach any understanding, it is important to lay bare these differences in values.

One of society's ongoing debates concerns the different approaches to helping the needy. Some people look at the children in need and urge that every family be given the resources it requires to raise each child in at least a minimally acceptable manner. Other groups look at this welfare and warn that it leads to a loss of independence and initiative in the families who receive these gifts. Both sides have a tendency to overgeneralize, making claims that seem to imply that the needy are all poor helpless victims, or all people who have lost the will to earn a living for themselves. The Epistemological Razor demands that, to reach a meeting of minds on this issue, both sides expand the number of identifiable groups to that which is necessary to account for the different cases that make up the population of the poor and needy. There are certainly children in need whose parents, despite their best efforts, are not able to care for them, as well as those whose parents are not fulfilling their obligations. It is not helpful to argue away the existence of either group - they both exist. There are also those who are mentally ill or incapacitated, and those who have fallen upon hard times. A successful evaluation of this problem must identify the different classes up to the level of necessity, in that each class requiring a different method to address its situation has to be identified before the solution to its problem is implemented. This means that no group or viewpoint is correct by itself. Each is right for those it has correctly identified, but wrong for all those it has not.

This construction of appropriate ways of evaluating moral actions is the primary task of moral living. How many dimensions of well-being to consider? How many entities to consider - who must be included and who can be left out? What degree of aggregation?

The second kind of error that we make is caused by our failure to put appropriate effort into finding the best choice, or by inappropriate effort in implementing that choice. Even if we had the best possible evaluation function, allowing us to make discriminating decisions that accurately measure costs and benefits, we would still make poor decisions. These are operational errors.

Some operational errors are due to the fact that making the decision itself involves costs, in terms of the time and effort involved in decision making. Sometimes this tradeoff is simply due to impatience, or to a lack of ability on our part that makes decision-making hard. Sometimes it is a willful decision to cut corners in the expectation that this will not adversely affect us. Sometimes it is simply that we cannot spare the time and effort to apply to decision making, having to use our efforts to implement a solution even though we know that this choice is far from the best.

Sometimes this lack of effort in looking for a good decision comes from temperament. Some people are just the type who are comfortable with what has worked for them before. This is not associated with a particular viewpoint. Both the stodgy conservative and the knee-jerk liberal are people who rely on the same approach that has worked for them in the past.

Examples of this type of behavior can be easily seen in the decisions that people make about their own welfare. These are not usually considered moral choices because they usually affect only the person who takes the action, but in the sense that moral actions affect well-being, and the well-being of that single person is at stake, these are moral choices also.

Examples abound. One is the tendency to quickly turn to vitamins instead of more nutritious food. This is an error of seeking out the simplest answer to the question of eating right instead of putting effort into choosing a diet that is healthy. Another common situation is where individuals choose a field of study or even a career based on an incomplete or superficial analysis of what makes them happy or what they are good at. Sometimes a lifetime commitment is made on the basis of a subjective observation that they are good at certain subjects at school and good money can be made in a job using those skills.

The most common way in which people put too little time and effort into making good decisions is the tendency to follow the crowd with fads. This is the situation where each individual takes the number of people making the same decision to mean that the decision is right for them, without stopping to analyze their own unique situation deeply enough. To be sure, following a current fad may simply be the right thing to do for that person, even if, until it became a fad, the person was not aware of it. Sometimes following a fad may be appropriate simply because by doing so the individual will fit in, accommodating to society in a way that is important to that person, or, conversely, because it is just not worth the effort to be different. But quite often it is a lack of imagination, suppressing the person's individuality in choosing the common course. This accepts the simple answer in place of what could be a more satisfying one.

The opposite of not making enough effort to reach a decision is making too much effort. Sometimes this is due to a psychological dysfunction in the individual: they tend towards anxiety, or are obsessive by nature. This causes them to delay and agonize over decisions that would come easily to a normal person. It can, though, come about not through an inner turmoil but instead a mistaken impression that if one puts out an extra effort, this will result in a choice that has more quality. Up to a point this is true. But there is a point of diminishing returns. The amount of effort should be comparable to the differential gain that the effort yields.

The problem of putting in the right amount of effort is itself one of moderation. To determine the appropriate amount, the first approximation is to apply oneself to the same extent as others do: after all, people in similar circumstances should, all things being equal, expend similar amounts of time and effort making up their minds. From this starting point, the effort can be adjusted according to individual results. If a person is not making good choices, or is expending too much effort making them compared to the value of the outcome, then that person should adjust the amount of effort they put into decision making. It is a curious paradox that even this second-guessing can be done with too little or too much effort, and itself requires a balance.

The other type of operational error is a failure to put enough effort into implementing a choice once it has been made. Even the best decision suffers from not putting enough into it. Obviously, trying halfheartedly will result in an outcome that is not as effective as the analysis led us to expect. But also many moral choices are being simultaneously made by many members of a community. Evil often comes not from making decisions that we know to be wrong, but from a more banal reason: we put less effort into our choices expecting the effort of others to make up the slack.

This second type of moral failing can be seen in many ways. For example, it is common to see dishes of extra pennies beside cash registers that are available for people to use for change for purchases that are a penny or two above a round number. These dishes usually work because the value of the pennies and the nuisance of having them are not great enough to result in people taking out more, on average, than they put in. On the other hand, unless the community involved is especially unselfish, a similar dish of quarters in a laundromat to be used for those caught short when putting in that final load to dry, will eventually be left empty. Although people consider themselves well-meaning, the cost of replenishing the dish when the person has a few quarters to spare is too high to offset the gain of taking one to finish the load.

This lack of effort can be dehumanizing in its effects. Although there was great evil in the decisions of the Nazi government in setting up and implementing the extermination of European Jewry, the lack of effort of each individual German in opposing these actions leveraged the effort of the evil few.

Sometimes the result can be more mundane. The rate of overweight and obesity in this country is growing. Losing weight is a combination of eating right and exercising more. It requires effort both to maintain vigilance in making the right eating decisions and to put a vigorous enough effort into a workout to produce results.

A social problem where insufficient effort is put into implementing what are known to be adequate solutions is the care and treatment of the mentally ill. In the last few generations, the discovery of effective medication has emptied the mental hospitals of all but the most difficult or violent cases. But too often these people have been turned out into the streets to fend for themselves. The medications they have been given go only part way, and society has lacked the will to provide the resources required to address the problem adequately.

Is it possible to put too much effort into a solution? Typically this happens when a poor decision has been made, but due to pride or willfulness the decision is pressed forward well beyond the time when it would be more appropriate to step back and reassess. It has been experimentally determined in psychology that this inability to give up appropriately on a failing decision is a trait in human beings. We tend to value the effort that we have put into a poor choice in and of itself. This makes it harder to admit defeat and go back to think about a better way of tackling the problem.

One pervasive failing that combines insufficient contemplation of the better solution with insufficient effort to implement it is poor parenting. A common problem is the case where the parent wants a certain type of behavior changed - for a child to do something they ought to, or to stop doing something they should not. Instead of finding an effective way to make the change happen, the parent simply orders the child to change, accompanied by threats of punishment if the command is not obeyed. There are very few cases where giving a command under threat of punishment is the optimum way to effect change. It is certainly the easiest on the parent - it requires no thought to come up with. It is much harder to think about why the behavior is happening in the first place and to decide how to correct the situation in a more effective way. Compared to an analytical approach to parenting, giving commands is the easiest action to take.

The analysis of the situation sometimes shows that the problem comes directly from the parent's behavior in the first place. Children are influenced by the behavior of the people around them. Ordering a child not to eat unhealthfully is ineffective if the adults do not do this themselves. There is a special paradox in ordering a child not to use force or threats against other children. The very act of commanding this behavior belies the command.

Arriving at a good approach to parenting requires more observational skill than many parents have, or take the time to use. A child cannot be told to eat everything on their plate if the amount they are given is more than is appropriate for them. Sometimes standards of behavior are set that are beyond the child's ability to meet. It is essential to know what a child is capable of before setting expectations.

The moral failings discussed here can be well-meaning but ultimately incorrect choices, or they can be the result of deliberate evil. What is the difference? Evil is usually considered to have a volitional component: a deliberate choice to do what one knows is wrong. Much of the evil in the world comes from acts of dehumanization, as we covered in the previous chapter - the deliberate turning away from the Golden Rule. What deliberate acts of immoderation can be considered acts of evil?

One of the most prominent ways that evil arises is through acts of destruction rather than construction. Tearing down is always easier than building up, yet both can serve as equalizers. If one person is better off than another, this may have come about through luck, a better starting point, or simply greater capacity or ability. Given a better start in life or the benefit of good luck later on, the advantage may persist throughout a lifetime. Given a difference in abilities or looks, the gap may even widen through no fault of the person left behind. Since reaching equality by building up is unlikely, the evil person turns to destruction to force those ahead back down.

Sometimes this inequality results not in acts of destruction but in giving up. Defeatism is not bad in itself; it is the absence of good. Giving up is sometimes the right choice, when one has done all one could. But it must be selective - it cannot apply to all of life while there is still energy for more. Every life comes to an end, and at that time it is appropriate to let go. But for someone in the prime of life, actively embracing surrender cannot be considered virtuous.

The other way that evil arises is when either the benefits or the costs are unequally distributed. When the benefits exceed a person's fair share, as in acts of thievery, swindling, or embezzlement, the nature of the evil is usually obvious. It is often less obvious when the act of evil is an unequal distribution of the costs. One can sometimes tell whether a person failed to pull their weight out of laziness or out of incapacity, at least if the effort is made in the open and that person's previous capability is known for comparison. That is a lack of effort in implementation. What is especially hard to determine is when there has been a lack of effort in deciding on an appropriate course of action. This can be due to deliberate mental laziness or an unwillingness to do the work of gathering the facts needed for a good decision. But since this mostly occurs inside a person's head, there is little evidence on which to base a judgment.

The difference between good and evil is the difference between symbiosis and parasitism. In many cases an act of evil results in both unequal benefits and unequal effort. Since the basic equation of moderation is the tradeoff between costs and benefits, these two kinds of inequality are in effect the same: each produces the same imbalance between what is given and what is received.

Most religious traditions classify the sins and failings that human nature is capable of. I will use the Buddhist three poisons to illustrate how these sins are either a consequence of dehumanization or of a lack of moderation. Although the classification used here and the Buddhist classification are different, they seem to cover the same failings.

The first poison is termed aversion. It is essentially dehumanization in its causes. Aversion leads to failings such as hatred, fear, hostility, envy, and jealousy. It is caused by people's failure to follow the Golden Rule and to treat others the way they expect to be treated.

The second poison is ignorance or delusion. This leads to failings such as sloth, foolishness, doubt, confusion, and boredom. It can readily be seen as caused by a lack of effort in making decisions and carrying them out. The effort to dispel ignorance is amply rewarded in increased effectiveness and improved results.

The last poison is desire and ill will. The resultant evils are lust, anger, greed, pride, and arrogance. Initially caused by dehumanization, it is the easy choice of destruction that leads to these failings. Sometimes, instead of destruction, it comes from an unwillingness to put in the effort required to get the desired result.

Despite our best efforts, problems constantly arise in making moral choices. The question is whether there is some way to improve our ability to make moral choices. A more detailed analysis of this question will have to wait for a later chapter, when it will be discussed as part of reasoning. But we can make some preliminary remarks.

First, there is the need to avoid complacency. However hard we try to make good decisions, the fact that we ourselves and the world around us are constantly changing means that there never will be a fixed, unchanging moral code, or even a fixed set of methods for arriving at the best moral code for a particular time.

Second, moral decisions are made through a combination of careful observation, logical reasoning, and gut feeling. Similarly, to improve the decisions we make, we need reliable data about our past efforts and the willingness to use both reasoning and intuition to do better.

Third, we learn from our mistakes. Being correct time and again strengthens our confidence in our methods, but only our mistakes give us the knowledge we need to fix the imperfections we inevitably have. Too much success can make us overconfident, and that makes it harder to change when change is needed.

Finally, perfection is impossible. Strive as we might to reach a golden mean, doing too much or too little is part of the imperfection of life. We must also recognize that we have to err on both sides to be in balance: a person who chronically errs on one side or the other is guaranteed to do worse than one who balances their mistakes. In learning to do better, we face this problem as well. It is unavoidable that in correcting our mistakes we will sometimes build into our new methods special circumstances from our past that no longer apply, or, conversely, leave out some detail that would make our decisions even better. It is best not to make too big a fuss about it, and to recognize that the ideal is a figment of our imagination.

This chapter has been about the virtue of moderation. It is a basic fact of life that all things suffer from too much or too little. This is true even when trying to do good. Attempting to achieve a state of perfection in any one aspect of our lives means that other aspects will suffer. In our moral choices, we must strive to follow the Golden Rule while at the same time avoiding harm to others through our efforts.

In moral decision making, our analysis should be guided not by simplicity alone but by the Epistemological Razor: a balance between simplicity and necessity. This can be done by weighing the costs our actions will demand against the benefits that accrue.

In making these types of tradeoffs we will make mistakes, both in making the decisions and in implementing them. These mistakes can be innocent errors, or they can be the result of deliberate destructiveness, of pushing the costs onto others, or of taking more than our share. Most sins and failings come down to these kinds of errors, if they are not acts of dehumanization.

Although it is possible to learn to make better decisions, learning is as prone to error as the decisions themselves, since moderation is a balancing act, not a state of grace.