
Chapter 9 - Reasoning

A workable methodology for arriving at moral precepts requires more than logic.

The goal of this chapter is to formalize and define how to reason about what is the good. Much of moral reasoning is involved with the search for the rules that determine what is good, then deriving the consequences of these rules for particular situations. Essentially, this is the process of deriving conclusions from a set of axioms. But we have seen that the limits of logic are found in its undecidability. This comes out of the ability of logic to be self-referential, leading to infinite loops or self-contradictions. This is a problem especially when the entities being reasoned about are themselves capable of complex reasoning, including the ability to learn from the actions of others and to respond to those actions in ways that aim for different goals.

It is worthwhile to retain, as much as possible, an objective, scientific approach to morality, because that approach has worked so well in other fields. In fields of knowledge such as physics and chemistry, and even in areas of biology that look at the workings of life apart from human psychology, formal logic is adequate for reaching conclusions about almost all problems of interest. This holds for the hard sciences and for the applied sciences as well. Besides reasoning about what is observable in physics and chemistry, logic helps to determine what is healthful in medicine and what is workable and practical in engineering.

But once the ability to consciously reason is put into the mix, paradoxes can arise that make it hard to reach workable conclusions on the basis of logic alone. These self-referential paradoxes can have moral implications. I am indebted to the book Paradoxes from A to Z by Michael Clark for some of the examples in this chapter and the next.

To give a simple example of a logical paradox, consider the Barber Paradox, first expressed by Bertrand Russell. A village passes a law that every resident male who does not shave himself can only be shaved by the village barber, who must also be a male resident. The barber also cannot shave anyone who shaves himself. If the barber does not shave himself, then by the first part of the law no other male in the village can shave him. But the barber cannot shave himself either, because that would break the second part of the law. Therefore the barber must either break the law, go unshaved, or be shaved by someone who is not a male resident of the village. It is, of course, possible to add amendments to the law that forbid other groups to shave the barber or forbid him to go unshaved, and the accumulating restrictions eventually lead to a law that refers to no admissible situation. Although there are arcane ways to get around this problem, in effect making it a pseudoparadox, situations such as this can have moral implications.
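The contradiction can be made explicit in a standard first-order rendering (a sketch; the predicate name is my own). Writing S(x, y) for "x shaves y" and b for the barber, the law asserts:

```latex
\forall x \,\bigl( S(b, x) \leftrightarrow \lnot S(x, x) \bigr)
```

Instantiating x = b gives S(b,b) \leftrightarrow \lnot S(b,b), a contradiction, so no such barber can exist. This is the same structure as Russell's paradox about the set of all sets that do not contain themselves.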

Can these types of paradoxes arise in morality? It is certainly possible. A practical example of a paradox is the way a placebo works. It is known that placebos, pills containing no medication, can actually cure people. People with psychological disorders, for example, respond positively to placebos more than one quarter of the time. The only problem is that placebos work only if you believe in them. Should I believe that the pill cures me or not?

A related moral problem arises for a person who believes in voodoo. If a person claims to have put a curse on someone, is it morally wrong if the claimant does not really believe the claim? After all, the belief of the person issuing the curse is not the source of the injury. As in the placebo example, it is the belief of the recipient that determines whether the curse is effective, and that belief is not under the control of the person making the claim. The claimant could therefore be held responsible for an action that is not under their control, since the injury is the product of the mind of the person who believes they are being cursed. There is a case to be made that there is no moral wrong if the statement that there is a curse was made in jest, even if it was taken seriously by the person who was the object of the jest.

This is not a strictly logical paradox, though. It may be possible to create a true logical paradox by setting up a situation where, for example, two stock-picking computer programs go head to head for the same stock, with each program given full access to the programs and data available to the other machine. Given that a buy order for the stock will raise its price, it is to the advantage of each machine to go second. In reality, neither would act at all if both machines were to simulate the computations of the other as a means of making the decision, since each simulation would invoke the other without end. So a true logical paradox is hard to achieve in reality.

But the rise of pseudo-paradoxes is troublesome enough in itself. For example, consider the case of a legal guardian taking power over a person. The guardian is required to protect that person in general, but one of the things being protected is the ability of the person to do as they please. The actions of the guardian in themselves violate this protection, even when the guardian's actions are intended to optimize the person's freedom of choice.

Another example comes from those religions where humility is a virtue. Someone who claims to know the will of god is a sinner because they are guilty of hubris. But someone who does not know the will of god is a sinner because they do not know how to act with righteousness. Either way, it is impossible to completely avoid sin.

In general, the determination of what is moral is based on the determination of what is best for the well-being of the entity involved. But because moral entities such as humans are conscious, evaluating well-being must take that consciousness into account. We have seen that these moral arguments are subject to paradoxes arising from the self-awareness of the entities, which means that there is no objective measure of well-being. In essence, a purely objective logical approach would have to contend with an unlimited variety of recursive functions for computing well-being. To claim a complete measure of well-being would be, in effect, to claim a solution to the halting problem, or at least to give the appearance of that violation.

Because these logical paradoxes can arise, a universal pronouncement such as 'Thou shalt not kill' can immediately be contradicted by a counterexample, no matter how killing is defined. A typical response to this conundrum is to postulate that the absolute truth exists at a higher level, a "meta-level" that contains the non-contradicted essence of the principle. This metalogic could be a methodology from which we build our own moral system relative to our own well-being. Such a system must be open to error and interpretation and remain in a continual state of development. If the universe is immune, this would allow us to build a moral system in the context of a learning system that transcends the limitations of a purely recursion-theoretic process.

But this metalogic would lead to a morality that is relative to each entity, once the individual's well-being is superimposed upon it. This implies that there would be no practical foundation we all agree on, only an abstract higher level. In contrast to the absolute morality of Western philosophy and religion, the ensuing metalogic is more like Taoism - we all start from an arbitrary place, trying to view the Tao in different ways, as best we can. In this methodology, we hypothesize an axiom set to reason our way to decisions, and then learn from our mistakes and refine our axioms. But there is no single absolute morality - each moral code is relative to our own situation.

The viewpoint of limited consistency then suggests that this attempt has not brought us higher; instead it has pushed us to a different place in the conceptual web, where a new set of inconsistencies immediately arises. Therefore, there is no ultimate level - there is just an infinite series of successive reformulations of principles, each incrementally differing from the previous one, ad infinitum. Eventually, you just have to stay with the principle 'Thou shalt not kill' as is, contradictions and all.

Because there is no absolute truth to the practice of morality, it is sometimes unavoidable to speak in contradictions. This is the essence of the Taoist way - the recognition that any duality is inaccurate. Thus a universal pronouncement such as 'Thou shalt not kill' can immediately be contradicted by a counterexample. This means that universality is not absolute, at least in the way it is observed, analyzed and applied, and that all concepts are, to some extent, oxymorons. This leads to the intellectual equivalent of the golden mean. No philosophy is absolutely correct, but since people have many characteristics in common, it is possible to define an evaluation of well-being, and an ethics that uses it, that is by and large correct. No philosophy can be completely correct, but there are many that are close.

Along with the observation that the ability to learn avoids the problems of a fixed absolute system, the presumption of an adaptable relative morality makes the process of developing a moral viewpoint adaptable itself. This flexibility is actually a source of strength. A philosophical system that claims absolute consistency is fragile. An example is the moral philosophy of the more extreme followers of Ayn Rand - accept it all or reject it all. This leads to a brittle system. The same problem can arise with Roman Catholicism and its insistence on Papal infallibility for pronouncements made ex cathedra. An absolute moral statement of this kind runs the risk of demanding that a believer who agrees with 99% of the dogma be excommunicated over the remaining 1%. The Taoist has a different metaphor for how truth behaves: bamboo. Bamboo bends with the storm, while the tall, strong tree eventually falls.

Another strength of departing from a strict requirement of consistency is that it allows for the compartmentalization of reasoning domains. This allows for reasoning in a context. The conventional opinion is that this is undesirable, but consistency limited to a context can actually be a virtue. Typically, emotions and subjectivity provide the context for making relative choices. This shifts the focus to what is relevant or most important in a particular context. As an example of how the lack of compartmentalization can lead to trouble, consider the problem of dogmatic reasoning. When someone holds a dogma, they will force all new facts into it. This happens because the person has extended a context of limited consistency into an inappropriate universal outlook. In effect, the relativistic fallacy is addressed by deciding that living with fallacies, if handled in a common-sense manner, is not so bad after all. Indeed, in real life, that is the norm.

[Law of the excluded middle is impractical]

This brings us to the Law of the Excluded Middle. The law states that in logical reasoning there are no gray areas: something is either true or false. The identity of an object is fixed - A is A, not B. I would claim that the Aristotelian law of the excluded middle, although a good first approximation, is not an absolute basis for understanding reality.

To continue using Taoism as an example of an alternative way of looking at reasoning, the Taoist knows that it is impossible to see good without evil - humans are always seeing in opposites. But this belies the essential nature of reality. Good and evil are not a duality inherent in the universe itself. Instead, they are an artifact of the way we reason. Such dualities come about because our neurons work this way: they either fire or they don't.

It is not hard to imagine why the Law of the Excluded Middle is so pervasive: logic is a tool of the mind for making sense of the world, and it has been remarkably successful. Therefore the mechanism by which logic is applied is superimposed upon reality. Since all reasoning is mediated through this mechanism, it appears to be an essential aspect of reality itself.

This does not deny that absolute laws or rules exist. For example, there are absolute laws like "1+1=2". They are absolute because they exist in an absolute universe built around the laws of logic and mathematics. But that does not mean that this world actually exists. This absolute universe is a mental construct. It is part of the universe of logical statements that gives rise to paradoxes. We can admit the existence of absolutes and therefore come up against paradoxes. Even the postulation of an absolute rule denying all absolutes has an exception in the absolute world of absolutes. Since this absolute rule exists in this world of absolutes, it becomes an example of Russell's paradox. In reality, 1+1=2 is true only down to the Planck scale, due to the uncertainty principle. Every instantiation in reality of the notion of a quantity is open to the natural uncertainty of the universe.

If the universe is made of integers, then the law of the excluded middle follows. Whatever is an integer, an n-tuple, or the like has a boundary, and this boundary forces everything into a state of being either that integer or not. It even tries to force the real numbers into infinite representations built from digits, allowing a countable number of real numbers, such as pi, to be represented by a finite algorithm with an infinite computation. Turing's representation of functions works because you name a function with a finite number of states. But each naming has to stop at some level - the process could go on forever, but unlike the expansion of pi, each new rule makes the function more unpredictable.

An attempt to address this incompatibility leads to the development of the dialectic. Dialectic is a natural psychological process. It starts with the false assumption that the Excluded Middle actually exists. The synthesis of thesis and antithesis happens when people finally realize that neither theory is completely correct and truth is somewhere in between.

In a certain sense, quantum mechanics seems to imply the existence of integers. In this picture, the world has an indeterminate existence until it is measured, then it collapses into one of a finite number of states. But even this attempt is transient. Further observations and interactions with the world bring the uncertainties back. Randomness seems to imply more than quantumness. It makes uncertainty the base state of reality, and a measurement a transient attempt to pin this randomness down, an attempt that lasts only until the next interaction. We do have the ability to set up situations such as Schrodinger's cat, which is alive or dead depending on a single quantum event, but that is an outlier - a fixed event in a world of more or less probabilistic events - the exception, not the rule.

Instead of demanding that reality conform to the laws of thought, we can admit that logic may not be universally applicable. As an example of where this type of reasoning has entered Western thought, consider the Complementarity Principle of the Copenhagen school of physics. Complementarity shows that some phenomena, such as the electron, manifest themselves in mutually exclusive forms of existence, such as a wave and a particle. Instead of forcing the physical phenomenon to be absolutely one manifestation or the other, possibly relegating the alternative manifestation to an inferior role, complementarity maintains that the two are equally valid and that neither should be preferred over the other in any universal sense; the preference applies only in the context of the phenomenon.

Complementarity can also hold for the identification of moral characteristics. Any action can be any combination of good and evil, right and wrong, helpful and destructive, and in the real, nonideal world, every action is all of the above if you look long enough. The Taoist would argue that it is better to think of the person as part of a river of experience. The flow is sometimes good and sometimes evil, but any direction depends on the flow of the river and the goal the person is trying to achieve.

The ultimate lack of universality of the Law of the Excluded Middle may be especially relevant if, in fact, the world is immune, as was discussed in the previous chapter. The truth may be that no matter how many laws we have and how well they work, reality is ultimately unknowable.

Related to the law of the excluded middle is the law of identity. Having each property be either true or false makes us identify an object with its properties and exclude other objects for which the association of one or more of these properties is false. Thus the law of identity gives rise to definitions. Definitions of words are in turn defined by other words. This gives rise to semantic nets. A semantic net is a collection of facts and concepts that are related to other concepts. For example, the fact 'a whale is a marine mammal' is a fact in which the object whale is related to the concept mammal by an ISA link, with a link to the concept marine indicating that it is a qualifier of this ISA link. The concepts mammal and marine each have links to other concepts, such as water and animal.
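As a concrete sketch, the whale fact can be stored as qualified links in a small Python structure. The link names and the qualifier slot follow the description above; the representation itself is an illustrative assumption, not a standard library.

```python
# A minimal semantic net: each fact is a link between concepts, and a link
# can carry a qualifier, as in 'a whale ISA mammal, qualified by marine'.
links = [
    # (subject, link_type, object, qualifier)
    ("whale",  "ISA",     "mammal", "marine"),
    ("mammal", "ISA",     "animal", None),
    ("marine", "RELATED", "water",  None),
]

def neighbors(concept):
    """Return every concept directly connected to the given one."""
    out = set()
    for subj, _link, obj, qual in links:
        if concept in (subj, obj, qual):
            out.update(c for c in (subj, obj, qual) if c and c != concept)
    return out

print(neighbors("whale"))  # {'mammal', 'marine'}
```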

Seen this way, definitions are operations in a network of words and concepts. Definitions are the local contexts in the web. As new definitions are acquired, it is impossible to have a perfectly consistent knowledge space a priori. New facts must be checked against the old.

The attempt to preserve the Law of the Excluded Middle sometimes leads to taking excessive pains to preserve an absolute consistency. This has two avoidable costs. The first is that the attempt to preserve consistency forces an inflexibility upon the conceptual system that makes it difficult, if not impossible, to incorporate new or changed knowledge that contradicts some aspect of the cognitive structure that has already been established. The second cost is the simple expenditure of time and effort that it takes to preserve this consistency. As the number of concepts grows linearly, the number of interconnections that must be checked grows quadratically. Eventually, there comes a point where this effort must be curtailed or the system cannot learn in a reasonable time.

In computer science the breakdown of this effort arises in the creation and maintenance of semantic nets. The addition of each new fact requires comparing it to every other fact in the net. This is simple for the second concept, but the one millionth concept costs a million times more. For example, if the fact that mammals have four legs has already been entered into the semantic net, consistency would have to be re-established by making an exception for marine mammals, or by indicating that whales have only vestigial legs. This effort can be limited by requiring consistency only with the objects in the semantic net that are within a fixed number of linkages of the new fact. Consistency among facts about whales, mammals and animals may be required, but consistency with facts about plants and bacteria may be waived, even though certain properties shared by bacteria and whales may present consistency problems. Both bacteria and whales can have individual facts attached to them about how they react in the presence of oxygen, and each could be a statement inconsistent with the other, while a general statement about how animals react to oxygen might be consistent with both. If the number of connections between one object and its neighbors is limited, then the requirements on consistency can be bounded, even as the number of concepts grows without bound.
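A sketch of this bounding in Python (the graph and hop limit are illustrative assumptions): a new fact about whales is checked only against concepts reachable within a fixed number of links, so distant concepts like bacteria are never examined.

```python
from collections import deque

# Concepts and their direct links; distant parts of the net are never
# visited when checking a new fact about 'whale'.
graph = {
    "whale":  ["mammal", "marine"],
    "mammal": ["animal", "has_four_legs"],  # the fact needing an exception
    "marine": ["water"],
    "animal": ["bacteria"],                 # beyond the hop limit below
}

def concepts_to_check(start, max_hops):
    """Breadth-first search, collecting concepts within max_hops links."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_hops:
            continue  # do not expand past the consistency horizon
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return seen

print(concepts_to_check("whale", max_hops=2))
# 'bacteria' is absent: consistency with it is waived, bounding the cost.
```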

This points out that although formal definitions are a useful tool, they cannot be absolutely applicable. Formal definitions cannot take the place of common usage, because common usage implicitly enshrines the recognition of which consistencies are important. If a consistency is not established explicitly, that does not necessarily mean it is unimportant, but precedence must be given to the explicit consistencies over the implicit. This means that ultimately we have to prefer a utilitarian definition of our concepts over one of formal rigor.

This is an important distinction to make. The creation and use of knowledge is a practical art, subject to usage and not required to preserve mathematical rigor. The rigor of abstract mathematical concepts is a luxury that can be allowed in the creation of a formal representation of knowledge about the real world, just as a map is a formal representation of a territory. But the map can never be the territory - it can only be an abstraction of it. Inconsistencies can arise from the selection of which details are abstracted away in the creation of the formal map.

Letting go of the claim to universality of formal logical validity opens us up to belief systems that contain contradictions - a situation that, in logic, leads to the complete breakdown of the system. How can one prove the correctness of a claim that violates the excluded middle, thus leading to these kinds of contradictions? The answer is, you can't - not logically. One has to use a non-logical method, or extend or alter logic in some way.

A simple fix is to preserve logic for the most part, while limiting the context of logical conclusions. In that case, for every use of logic we must define a local context and require soundness within that context. Any conclusion that is drawn is then not absolute but relative to a particular universe of discourse. This can reduce the problem of paradoxes, especially by limiting the universe of discourse in time as well as in space. In that case, reference to a single entity can be qualified by a conditional such as 'knowing then what I know now', where what you know can be determined well enough to draw the necessary conclusion. Most paradoxes implicitly assume some sort of infinite regression, a situation that has not been shown to exist in reality. Providing some limit that removes infinities from consideration, if done carefully, eliminates most paradoxes.

Most other paradoxes come about by presuming a limit, although an undefined one. Also, most countable infinities - that is, quantifications over every number that could potentially be written down - can be bounded by existential quantification: instead of saying 'for all t' you can often reformulate this as 'there exists an s such that for all t less than s'. Let's consider how to convert an infinite regression into this other form.

Consider Zeno's paradox of Achilles and the tortoise. The tortoise gets a head start of 8 feet. In one second, Achilles covers half the gap, 4 feet, while the tortoise has barely moved. A half second later, Achilles has covered 2 feet more. Dividing each interval in half, the regression goes on forever, without the sum ever amounting to the starting distance from Achilles to the tortoise. But given any finite interval, there is a specified time at which Achilles crosses that interval, and for any such interval, there is an approximate time at which Achilles passes the tortoise, which serves as an upper bound to this time. Now the question is to determine the smallest such interval. Of course, if there is no limit on the smallest time, this infinite regress has merely been replaced by a fruitless search for an infinitely small infinitesimal.
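With these numbers (a reconstruction; the original figures appear garbled), the infinitely many sub-intervals sum to a finite distance and a finite time, which is why the regress is harmless:

```latex
\underbrace{4 + 2 + 1 + \tfrac{1}{2} + \cdots}_{\text{distance}}
  \;=\; \sum_{n=1}^{\infty} \frac{8}{2^{n}} \;=\; 8 \text{ feet},
\qquad
\underbrace{1 + \tfrac{1}{2} + \tfrac{1}{4} + \cdots}_{\text{time}}
  \;=\; \sum_{n=0}^{\infty} \frac{1}{2^{n}} \;=\; 2 \text{ seconds}.
```

In the bounded form, 'for all intervals' becomes 'there exists a time s = 2 seconds by which the entire 8-foot gap is closed'.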

In most realistic cases, there is a recognizable limit. For this example, the Planck length - the distance covered by a photon in one Planck time - serves as the shortest meaningful distance, and the Planck time as the shortest meaningful time. For human decision making, the shortest reasonable time is often the time it takes for a neuron to fire. For the longest time, the current age of the universe is acceptable, or twice the typical human lifespan when thinking of an individual. This makes most infinities of interest into finite universes.

But this bounding can lead to unacceptably large spaces, even though they are finite. For example, it is possible to calculate the number of ideas every possible person on earth could ever have, given the size and age of the earth and the physical characteristics of typical humans. In such cases, we may have to resort to heuristics to find contradictions and resolve them.

It is not even necessary to rely strictly on logic. People use other methods of decision making, such as reliance on intuition or the emotions. Emotional reasoning does not require the excluded middle. Emotional reactions to people and objects overlap - they are not mutually exclusive. Emotions have their limits, though. They lead to simple, unverifiable results, which can be shown to have limited validity as later experience comes along. This is because the emotions are built on past personal experience that may or may not generalize to future experience. Emotional reasoning must be extended by experimentation and learning to moderate the limits of past experience. Evolution built emotions into us because of their survival value, but provided the rational abilities to ride above them, a later development that extends and increases the applicability of our first impressions.

An analysis of the limits of logic does not necessarily lead to modified formal systems of logic such as paraconsistency and dialetheism. There is a difference between attempting to change the formal rules of logic and recognizing the limits of logic. To use an analogy, the creation of paraconsistent logic is like modifying a simple tool to make it useful in more situations. The ultimate goal of paraconsistent logic appears to be to modify the rules of logic to better match the way the world works. In effect, it takes the map and tries to make it more like the territory. But this can only go so far before the map becomes too unwieldy and loses its value as a map.

It is preferable instead to recognize that, since logic is a mental construct, there is no guarantee that it is universally applicable to the world around us, and that any attempt to make it so will lead to failure. It is better to qualify the use of logic by trying to identify its limits. In those situations where the Law of the Excluded Middle is too restrictive, logical reasoning can still bring us part way to a solution. Sometimes it takes more than one tool to finish a job.

[Statistical reasoning as an alternative]

As an objective method of reaching conclusions that can serve as an alternative to purely rational reasoning, without the problems of emotional reasoning, probably the best method known to date is the statistical analysis of populations of similar cases. There are a variety of reasons why using statistics to make ethical judgments is worthwhile. For example, even though individual cases are subject to incompleteness problems, this is often moderated in populations, because individual actions tend to average out. Even when individuals are acting in a manner that makes individual analysis difficult, the statistical analysis of populations can bring out trends. Other advantages of statistics come from the regularity of populations. The Central Limit Theorem allows us to use the standard normal distribution in the limit, even though the analysis of smaller populations can be more complicated. Even in cases where the statistics cannot be effectively analyzed in detail, they can usually exhibit some noneffective but approximate order. This helps us in the formation and modification of hypotheses. As evidence accumulates, the probability that our confidence in a true hypothesis tends toward certainty will itself tend toward certainty.
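A minimal simulation sketch of this regularity (the uniform population and the sample sizes are arbitrary choices for illustration): means of samples from even a decidedly non-normal population cluster ever more tightly around the true mean as the samples grow.

```python
import random
import statistics

random.seed(42)  # reproducible illustration

def sample_mean(n):
    """Mean of n draws from a non-normal (uniform) population."""
    return statistics.mean(random.uniform(0, 1) for _ in range(n))

for n in (1, 10, 100, 1000):
    means = [sample_mean(n) for _ in range(2000)]
    print(f"sample size {n:4d}: spread of sample means = "
          f"{statistics.stdev(means):.4f}")
# The spread shrinks roughly as 1/sqrt(n), and the distribution of the
# means approaches the normal curve, whatever the population looked like.
```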

Statistics are especially helpful where the Law of the Excluded Middle is not practicable, even though the analysis of probability distributions is almost always done with the tools of standard truth-valued logic. Fuzzy logic is one of the formalisms that tries to marry graded, probability-like truth values to logic.

Because the world is so complicated, you cannot be absolutely right or wrong unless you are omniscient. This means that a practical analysis cannot classify things absolutely. Thus something like statistics is needed to add approximations to the analysis and to reach conclusions that, although not absolute, still have practical validity.

An example of this in physics is the three-body problem. Even in well-known fields such as gravitation, approximate analysis is necessary. Although the two-body problem has a simple elliptical solution, adding just a single third body makes the problem unsolvable in closed form, except in the simplest of cases, such as the Lagrange points.

Three-body problems arise in morality as often as they do in physics, but in an even more complicated, multidimensional form. Since human beings are moved by forces and interests that do not obey simple equations like the inverse-square law, it is impossible to determine most common day-to-day moral positions from first principles. It is necessary to make simplifying assumptions, or to rely on observational studies or simulations. Thus the need for statistics.

But even this is not enough to be usefully right or wrong. A practical use of statistics requires that one balance the cost of a false positive against the cost of a false negative. These costs are in turn offset by the benefits of the correct positive and negative choices. A false positive is an erroneous determination that an object has a certain property when it does not. A false negative is the opposite error: the property is present but has been missed. To take an example from medical diagnosis, a false positive is a test that shows signs of cancer in a patient although no cancer is present. A false negative is to miss a potentially dangerous cancerous tumor.
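A sketch of the balancing act in Python (all costs and probabilities here are hypothetical numbers, chosen only for illustration): act when the expected cost of acting falls below the expected cost of waiting.

```python
def expected_costs(p_condition, cost_false_positive, cost_false_negative):
    """Expected per-case cost of treating versus not treating."""
    treat = (1 - p_condition) * cost_false_positive  # needless treatment
    wait = p_condition * cost_false_negative         # missed condition
    return treat, wait

# Hypothetical: a missed tumor is 20 times as costly as a needless biopsy.
treat, wait = expected_costs(p_condition=0.3,
                             cost_false_positive=1.0,
                             cost_false_negative=20.0)
print("treat" if treat < wait else "wait")  # treat: 0.7 < 6.0
```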

An advantage of statistics is that it provides a measurable frame of reference for a relative morality. Statistics triangulates moral relativity. Although taking many measurements does not produce an absolute morality, it does determine the relative frame of reference of the moral act. It quantifies both the frequency of a moral action and its cost.

Another advantage of a statistical approach is that it gives a way of handling the inevitable gray areas that arise in practical ethics. Logical analysis has no clean way of handling what is known as the 'heap paradox'. Given a heap of sand made up of 1,000,000 grains, if you remove one grain of sand, you still have a heap. But a single grain of sand is not a heap. Not even ten grains of sand are a heap. At what point does a collection of grains become a heap? This decision is made in the context of how a heap is defined for the case at hand. One can then set two limits. A number of grains above the upper limit will certainly be considered a heap. A number of grains below the lower limit will certainly not. In between, we can assign a probability, or just be vague and say that this is 'possibly' a heap.
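The two-limit scheme is easy to sketch in code (the bounds are illustrative assumptions, since a heap has no fixed definition): certainty outside the limits, a graded value in between.

```python
LOWER, UPPER = 100, 10_000  # below: surely not a heap; above: surely a heap

def heapness(grains):
    """Degree to which a pile of grains counts as a heap, from 0.0 to 1.0."""
    if grains <= LOWER:
        return 0.0
    if grains >= UPPER:
        return 1.0
    # Linear interpolation across the vague middle region.
    return (grains - LOWER) / (UPPER - LOWER)

for n in (10, 1_000, 5_000, 1_000_000):
    print(f"{n:>9} grains -> heapness {heapness(n):.2f}")
```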

Statistical reasoning brings its own paradoxes also. One of these is Simpson's paradox. Consider the case of two hospitals, A and B, with differing fatality rates for diseases X and Y. Assume that 350 patients with disease X were admitted to hospital A and 7 of them die; that is a 2% fatality rate. Out of 650 patients with disease Y, 15 die. Although there are about twice as many fatalities, there are also almost twice as many patients, so the fatality rate is 2.3%, slightly worse than that of disease X. On the other hand, of 650 cases of X admitted to hospital B, 40 die - a 6.2% fatality rate. Of 350 cases of Y, 28 die - a fatality rate of 8%, the worst of all. It appears that no matter which hospital a patient with disease Y is admitted to, they fare worse. But consider the fatalities over both hospitals. In both cases there were 1000 cases of X and 1000 cases of Y admitted in total. The total fatalities for disease X are 7 + 40 = 47 (4.7%), compared to 15 + 28 = 43 (4.3%) for disease Y. When both hospitals are considered together, patients with disease Y come out better. How can this be?

The answer is that the patients with disease X are mostly being admitted to the hospital with the higher overall fatality rate. In hospital B, 1000 people are admitted and 68 of them die, while hospital A records only 22 fatalities among its 1000 admissions. The lower overall fatality rate for disease Y reflects the fact that most of those patients are going to the hospital with the better outcomes. This may reflect the better quality of the care, or it may be that the people being admitted to hospital B are sicker than the patients in hospital A. There just is not enough information available to explain the discrepancy. But it brings up the point that statistics can be incomplete, and this can lead to paradoxical situations.
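The whole example can be checked in a few lines of Python, which makes the reversal easy to see:

```python
# (deaths, admissions) per hospital and disease, from the example above.
data = {("A", "X"): (7, 350), ("A", "Y"): (15, 650),
        ("B", "X"): (40, 650), ("B", "Y"): (28, 350)}

for (hospital, disease), (died, admitted) in sorted(data.items()):
    print(f"hospital {hospital}, disease {disease}: {died / admitted:.1%}")

for disease in ("X", "Y"):
    died = sum(d for (_, dz), (d, _) in data.items() if dz == disease)
    admitted = sum(n for (_, dz), (_, n) in data.items() if dz == disease)
    print(f"pooled, disease {disease}: {died / admitted:.1%}")
# Disease X looks better within each hospital (2.0% vs 2.3%, 6.2% vs 8.0%)
# yet worse pooled (4.7% vs 4.3%): X patients cluster in the worse hospital.
```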

Another type of statistical paradox comes from the problem of false positives versus false negatives in statistical analysis. Clark refers to this as the Xenophobic paradox. I will use his medical example of false diagnosis. Consider a disease that occurs in 10% of the population and can be identified with 80% accuracy using a particular diagnostic test. Suppose the population is tested, and an individual is told that the test shows they have the disease. It turns out that there is only about a 30% chance that this is in fact true. Out of 100 people being tested, 90 are disease free, but since the test has a 20% error rate, there are 18 false positives. Of the 10 people with the disease, there are 2 false negatives, where the disease is not identified, and 8 true positives that the test correctly identifies. So there are 18 + 8 = 26 positive tests, but since the false positives outnumber the true positives, only 8/26, or about 30%, of those who test positive actually have the disease. This is not a problem with statistics per se, but with the incapacity of human intuition to correctly judge the chance of a false positive when the underlying condition is rare.
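The 30% figure is just Bayes' rule, and the arithmetic is compact enough to verify directly:

```python
prevalence = 0.10  # 10% of the population has the disease
accuracy = 0.80    # the test is right 80% of the time, either way

true_positive = prevalence * accuracy                # 8 per 100 tested
false_positive = (1 - prevalence) * (1 - accuracy)   # 18 per 100 tested
posterior = true_positive / (true_positive + false_positive)
print(f"P(disease | positive test) = {posterior:.1%}")  # ~30.8%
```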

The paradox of the ravens is another statistical paradox, this one involving confirmation. To treat the claim that all ravens are black as a scientific hypothesis, we would look for observations that confirm or disconfirm it. The claim logically translates to 'nothing that is not black is a raven'. Therefore the observation of a white swan would logically confirm the hypothesis, even though it says nothing about ravens. The problem seems to arise from the fact that the space of nonravens is so much larger than the space of ravens, so the observation of a nonblack nonraven says very little about ravens. It does give some confirmation, but the amount is vanishingly small because the population of nonravens is so much larger than the set of ravens. In a sense, this is related to the heap paradox. It takes a heap of observations of nonravens to balance any direct observations of ravens. One could, in principle, survey the universe of all things and find that every nonblack object is also a nonraven, but that is an inefficient way to test the hypothesis. It takes a great many observations of nonravens to amass the confirmatory weight of a few direct observations of ravens.

Some statistical paradoxes can be downright strange, even when they are explained. Consider a case where there are three adjacent regions being explored for oil. It is known that oil is down there, but not well enough to determine whether it lies under region A, B or C. Assume you choose to drill in region A, but before you start, a competitor drills in region B and comes up dry. Does it make sense then to switch your drilling to region C?

Paradoxically, it does seem to make sense to switch to C. Region A had only a 1/3 chance of holding the oil. But, knowing that region B came up empty, switching to C now appears to give you a 50-50 chance of getting the oil, and therefore a higher probability of success.

The trick to understanding this paradox lies in the nature of what it means to switch. Consider that, instead of simply switching to C, you decide to switch based on the flip of a fair coin. Then half the time the 'switch' takes you not to C but back to your original choice A. The act of flipping the coin and choosing on that basis is the same kind of act as switching to C, yet half the time it returns you to A. In that case you have made a second choice of A, this time with the better likelihood of success. The paradox is resolved by identifying the psychological quirk that switching from A to A is not usually considered a switch, but here it is a fresh choice, and it is the fresh choice that changes your chances.

There are other statistical paradoxes, some of them esoteric. Bertrand's chord paradox points out the problem of sampling: even when you are randomly selecting samples from a population, there may be a bias in the way the samples are selected that can lead to vastly different results.

It may seem counterproductive to abandon pure logic as the predominant reasoning tool because of the paradoxes that arise in its application to moral reasoning, only to advocate as an alternative reasoning with probability, which has its own set of paradoxes, some of them equally intractable. The answer is that we are not leaving logic behind so much as adding statistical reasoning to our toolchest. Sometimes logic is sufficient for arriving at an acceptable answer to a moral problem. But, for the reasons given here, we cannot rely solely on logic to arrive at an answer. On the other hand, the weaknesses of statistical reasoning are sometimes amenable to logical analysis. Therefore, we are at our best advantage when we look at a given moral problem with an eye to which tool would give the best answer, and pick accordingly. People have a tendency to create false universals. In this case, no one tool works for every case. Sometimes we choose one and sometimes the other.

[Definitions]

To return to the topic of definitions, probability theory gives us a different way of looking at how definitions are created and used. In a formal system of absolutes, definitions are the axiom set that forms the starting point of reasoning. In an absolute sense, the definition consists of the properties that the concept does or does not contain. Using a statistical approach instead, definitions are given by consensus usage. Seen as a statistical process, the meanings of words are operational and based on common usage - they are not absolute. Meaning comes from observable phenomena, and observations are not black and white.

Because of this, definitions grow and change as knowledge grows and changes, but within limits. The definition attached to a word depends on the population that uses the word. In fact, the definition can differ across populations. A word has no absolute definition outside of the population using it. This flexibility has limits. It is common for people to attach different meanings to the same words, and some of those differences can lead to violent disagreements. These disagreements must be resolved democratically. An individual or group may shift a definition, but cannot go too far, or others will not agree to follow the change.

Keith Stanovich, in his book 'How to Think Straight About Psychology', which inspired and informed some of the thinking in this chapter and the next, points out that the creation and use of definitions in scientific reasoning fits this statistical approach. He notes that meaning changes as the operational definition changes, since this changes the relation of the definition to the observable phenomena. This means that definitions depend on consensus, because consensus is required for sharing observations.

A consensus approach means that the definition contains the properties that are most often deemed to occur. This set of properties forms a boundary, and the objects fitting the definition tend to fall inside it. If a property fits only to some extent, or is completely absent, the object may still fit the definition, provided the rest of the properties fit the consensus. A definition is therefore useful to the degree that the objects falling under it fit well within its boundary.
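A small sketch of consensus-style matching (the bird properties and the 75% threshold are invented for illustration): an object fits a definition when most, though not necessarily all, of the consensus properties hold.

```python
BIRD = {"has_feathers", "lays_eggs", "flies", "has_beak"}  # consensus set

def fits(properties, prototype, threshold=0.75):
    """True if enough of the prototype's properties hold for the object."""
    return len(properties & prototype) / len(prototype) >= threshold

penguin = {"has_feathers", "lays_eggs", "has_beak", "swims"}  # cannot fly
print(fits(penguin, BIRD))  # True: 3 of 4 consensus properties hold
```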

Since any axiom set is incomplete, definitions must grow and change. A definition is thus a kind of generalization. It starts from pattern matching. Logically, this type of definition is self-referential: as you learn more, the definition changes, which extends the set of items matching the pattern as the generalization changes. Some items can later be dropped.

Definitions are, and always have been a social construct. They have meaning relative to the society that created and uses the definition. As the society changes, the members of that society use the words in different ways, thereby changing the definitions by usage. This allows for the meaning of words to change in small increments as time goes on. But if a single person uses a word in a significantly different manner, that person will not be understood unless a coherent subgroup is built around that changed usage. Once that subgroup grows either in size or influence, the new definition will become, sometimes suddenly, the current meaning of the word.

It has often been remarked that philosophical disputes are quite often disputes over definitions rather than over something more substantive. A dispute may claim to be about the different ways a certain group should act in a certain situation, but because the definitions differ, the applicable groups could differ as well. It might even be possible for the different parties in the dispute to agree on the rightness of a certain action once they agree on the same group. But since the definitions are different, the underlying agreement about actions can be obscured.

One way to resolve these differences is to distinguish the qualities denoted by the definition from the qualities expressed in the word's connotation. The distinction between the qualities an object must have to fit a definition and the other qualities the word connotes is well known. Connotations that have become associated with a definition can suffer both from not being sufficiently universal and from being applicable only at the present time. They are subject to the problem of overfitting in learning. We must be aware of the emotional content of definitions. The emotions recognize and associate properties with the definition they denote, but the emotions act as a selector in a search space - a preliminary choice that the more rational parts of the brain must judge, upon reflection, to be worth keeping or discarding. The emotions also pull in a framework of related things that carry a similar emotional load.

But attempting to reduce a definition to a formal construct robs the definition of its power. The practical use of definitions cannot be reduced to a formal logical game. The reason we define things is precisely to provide that framework for reacting to the thing defined. Therefore, the emotional content cannot be dispensed with. It just must be used judiciously.

[An answer to Hume]

A statistical approach to moral reasoning also has a bearing on Hume's observation that one cannot derive an 'ought' from an 'is'. Hume's argument is that morality has a deontological basis - morality is not supposed to be contingent upon achieving any type of goal. Therefore rationality does not rule the passions - you cannot derive an 'ought' from an 'is'.

Hume's approach to morality seems to lead to a moral code that is essentially arbitrary. He says that "Where a passion is neither founded on false suppositions, nor chuses means insufficient for the end, the understanding can neither justify nor condemn it. It is not contrary to reason to prefer the destruction of the whole world to the scratching of my finger. It is not contrary to reason for me to chuse my total ruin, to prevent the least uneasiness of an Indian or person wholly unknown to me. It is as little contrary to reason to prefer even my own acknowledged lesser good to my greater, and have a more ardent affection for the former than the latter."

Hume goes on to say "Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them." He is right - rationality does not rule the passions. In fact, it depends on them for the objects of reason. But then we must ask where the passions come from. Both humans and animals have passions, and passions are the driving force behind any entity with the power to make decisions. It could also be argued that an Artificial Intelligence, even a simple one, could be imbued with passions. The source of the passions in a human or animal is their genetic makeup. In a computer program, they are part of the goals that the system is programmed to accomplish. That is, the passions make up an entity's nature. Without passions, an entity would have no character at all - it might process sensations the same way it processes food, but nothing would come of them.

But where does what we ought to do come from? Does it come from the realm of what God intends - that is, from some external agent that defines the goal of the entity? It cannot simply be that we do what is right because we want to; we also do what is wrong because we want to. The oughts arise out of our evolutionary nature - out of our programming. The male peacock ought to strut. We ought to raise a finger if it improves our survival. Sociobiology shows that oughts can be social.

What this means is that what we ought to do actually does come from what is, but the 'is' that matters is the nature of the entity for which the question is asked. The oughts arise out of the nature of the entity and can be determined by an inspection of its makeup - its nature and its needs. These needs are somewhat different for each individual, but can be given generally by a statistical analysis of the population, and can be further refined by looking at populations with qualities similar to those of the entity in question. Therefore, what an entity ought to do comes from the nature of what it is. This determination is made not just by the analysis of the individual in isolation, but as part of a population of similar entities.

This also addresses the mind-body problem. There is no such thing as a pure mentality - there is no pure mind. Similarly, for any entity capable of moral action, there is no pure body. Mind is not an epiphenomenon, but emergent behavior. It is not a separate duality, but a process that arises out of, and is embodied in, a physical manifestation. It is not directly observable in the sense that physical properties are observable. It is inferable by the use of empathy - the analogical reasoning that compares the nature of the observer to that of the observed. Here again, a statistical approach can be useful. Generalization over a set of populations can arrive at the essential characteristics of the mind.

[What is the good?]

The matter of definitions brings us back to the question of what is the good. The good was treated as almost axiomatic in the first chapter. This is so, but the good is different in different times and different places. More important than being axiomatic, it should have measurable dimensions. What is considered best for the well-being of an individual must be cast in a form on which a statistical analysis can be performed.

This does not entirely remove qualitative features, such as asking whether a person is happy or not. These qualities cannot be meaningfully quantified, but they can be compared to previous states to give a simple better-than or worse-than answer to the subjective judgment. These types of comparisons are easily amenable to analysis, either by logical means or by statistics.

But it is not possible to give an absolute definition of well-being that applies for all time. Definitions are reached by consensus and change when the consensus changes. Five hundred years ago, the state of one's immortal soul was more important than the condition of the body. This has changed. With the changes in technology since the Industrial Revolution, there are a variety of factors that go into an estimation of well-being that never existed before, and there will be more to come that we cannot imagine today. It is not possible to abstract a core set of qualities that are timeless, because we are not omniscient. As knowledge increases, the dimensions by which well-being is measured will increase also, and any attempt to collapse this growth in dimensions into a few basic qualities runs the risk of producing a set of qualities that are increasingly irrelevant. It is like the construction of language. The letters of the alphabet had an original meaning when they were first devised, but as language developed, those meanings became irrelevant. And there is no way to come up with a fixed set of qualities in the way that phonemes describe the words of a language, because human welfare is not predefined the way the vocal tract is.

Again, it is important to remember that this statement of well-being must be used in a relative sense. Measures of well-being that are tied to recent developments, such as access to electricity, are changeable and may become irrelevant when electricity is superseded or universally available. Other measures of well-being, such as happiness or satisfaction, are more timeless, but are measured differently at different times. Basic parameters such as life and death, or even expected lifespan, have a more timeless component. But even these are capable of change, for example, if true immortality is achieved.

Since we discussed the mathematical aspects of learning in the previous chapter, we can apply the concept of learning to the problem of the relative applicability of definitions of the good. Is it possible to create a procedure that enumerates over all moral codes and looks for improvements or generalizations that increase the applicability of a moral code to more times and places? Such a procedure can push out the boundaries of applicability, but it will never reach absolute generality. Assume the universe is immune. Then the procedure will not halt on an absolute rule, since the actors themselves are learning as well.

In summary, the measure of well-being used to define the good changes over time and distance relative to the local consensus. No dimension of this measurement is absolutely timeless, but some have a greater degree of universality than others. It is not true, however, that the more timeless a dimension is, the more important it is. The relative importance is determined by the consensus across the span of space and time over which one is comparing well-being. And some of the qualities applicable only to the region of interest may be very important indeed.