Inductive reasoning

Inductive reasoning, also known as induction, is a kind of reasoning that constructs or evaluates general propositions that are derived from specific examples. Inductive reasoning contrasts with deductive reasoning, in which specific examples are derived from general propositions.

Definition

The philosophical definition of inductive reasoning is much more nuanced than a simple progression from particular or individual instances to broader generalizations. Rather, the premises of an inductive logical argument indicate some degree of support (inductive probability) for the conclusion but do not entail it; that is, they suggest truth but do not ensure it. In this manner, it is even possible to move from generalizations to individual instances.

In its most familiar form, inductive reasoning consists of inferring general principles or rules from specific facts. A well-known laboratory example of inductive reasoning works like a guessing game. The participants are shown cards that contain figures differing in several ways, such as shape, number, and color. On each trial, they are given two cards and asked to choose the one that represents a particular concept. After they choose a card, the researcher says "right" or "wrong."[1]

Though many dictionaries define inductive reasoning as reasoning that derives general principles from specific observations, this usage is outdated.[2]

Description

Inductive reasoning is probabilistic; it only states that, given the premises, the conclusion is probable.

A statistical syllogism is an example of inductive reasoning:

  1. 90% of humans are right-handed.
  2. Joe is a human.
  3. Therefore, the probability that Joe is right-handed is 90% (so, if we are required to guess, we will choose "right-handed" in the absence of any other evidence).
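
A minimal Python sketch (not part of the original article) makes the parenthetical remark in step 3 concrete: with no other evidence, guessing "right-handed" maximizes expected accuracy, and the 0.9 figure is simply the proportion from premise 1.

    # Expected accuracy of each guess about Joe, given only that 90% of
    # humans are right-handed and that Joe is a human (a toy illustration).
    p_right_handed = 0.9

    expected_accuracy = {
        "right-handed": p_right_handed,          # correct with probability 0.9
        "not right-handed": 1 - p_right_handed,  # correct with probability 0.1
    }

    best_guess = max(expected_accuracy, key=expected_accuracy.get)
    print(best_guess)   # right-handed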

As a stronger example:

100% of life forms that we know of depend on liquid water to exist.
Therefore, if we discover a new life form it will probably depend on liquid water to exist.

This argument could have been made every time a new life form was found, and would have been correct every time. While it is possible that a life form that does not require water will be discovered in the future, in the absence of other factors (e.g. evidence that the new life form is from another planet), the conclusion is probably correct, as it has been in the past.

As a result, the argument may be stated less formally as:

All life forms that we know of depend on liquid water to exist.
Therefore, all life depends on liquid water to exist.

Inductive vs. deductive reasoning

Unlike deductive arguments, inductive reasoning allows for the possibility that the conclusion is false, even if all of the premises are true.[3] Instead of being valid or invalid, inductive arguments are either strong or weak, which describes how probable it is that the conclusion is true.[4]

A classical example of an incorrect inductive argument was presented by John Vickers:

All of the swans we have seen are white.
Therefore, all swans are white.

Note that this definition of inductive reasoning excludes mathematical induction, which is a form of deductive reasoning.

Induction

Inductive reasoning has been criticized by thinkers as diverse as Sextus Empiricus[5] and Karl Popper.[6]

The classic philosophical treatment of the problem of induction was given by the Scottish philosopher David Hume. Hume highlighted the fact that our everyday habits of mind depend on drawing uncertain conclusions from our relatively limited experiences rather than on deductively valid arguments. For example, we believe that bread will nourish us because it has done so in the past, even though there is no guarantee that it will do so. Hume argued that it is impossible to justify inductive reasoning: specifically, it cannot be justified deductively, so our only option is to justify it inductively. Since this is circular, he concluded that it is impossible to justify induction.[7]

However, Hume then stated that even if induction were proved unreliable, we would still have to rely on it. So instead of a position of severe skepticism, Hume advocated a practical skepticism based on common sense, where the inevitability of induction is accepted.[8]

Bias

Inductive reasoning is also known as hypothesis construction because any conclusions made are based on educated predictions.[citation needed] As with deductive arguments, biases can distort the proper application of inductive arguments, thereby preventing the reasoner from forming the most logical conclusion based on the available clues. Examples of these biases include the availability heuristic, confirmation bias, and the predictable-world bias.

The availability heuristic causes the reasoner to depend primarily upon information that is readily available. People have a tendency to rely on information that is easily accessible in the world around them. For example, in surveys, when people are asked to estimate the percentage of people who died from various causes, most respondents choose causes that have been most prevalent in the media, such as terrorism, murder, and airplane accidents, rather than causes such as disease and traffic accidents, which are technically "less accessible" to the individual because they are not emphasized as heavily in the world around them.

The confirmation bias is based on the natural tendency to confirm rather than to deny a current hypothesis. Research has demonstrated that people are inclined to seek solutions to problems that are more consistent with known hypotheses rather than attempt to refute those hypotheses. Often, in experiments, subjects will ask questions that seek answers that fit established hypotheses, thus confirming these hypotheses. For example, if it is hypothesized that Sally is a sociable individual, subjects will naturally seek to confirm the premise by asking questions that would produce answers confirming that Sally is in fact a sociable individual.

The predictable-world bias revolves around the inclination to perceive order where it has not been proved to exist. A major aspect of this bias is superstition, which derives from the inability to acknowledge that coincidences are merely coincidences. Gambling, for example, is one of the most obvious forms of predictable-world bias. Gamblers often begin to think that they see patterns in the outcomes and therefore believe that they are able to predict outcomes based upon what they have witnessed. In reality, however, the outcomes of these games are difficult, if not impossible, to predict. The perception of order arises from wishful thinking. Since people constantly seek some type of order to explain or justify their beliefs and experiences, it is difficult for them to acknowledge that the perceived or assumed order may be entirely different from what they believe they are experiencing.[9]

Types

Generalization

A generalization (more accurately, an inductive generalization) proceeds from a premise about a sample to a conclusion about the population.

The proportion Q of the sample has attribute A.
Therefore:
The proportion Q of the population has attribute A.
Example

There are 20 balls—either black or white—in an urn. To estimate their respective numbers, you draw a sample of four balls and find that three are black and one is white. A good inductive generalization would be that there are 15 black and five white balls in the urn.
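
The arithmetic of the urn example can be made explicit. The following is a minimal Python sketch (not part of the article): it simply scales the sample proportion up to the 20-ball population.

    # Inductive generalization for the urn example: estimate the urn's
    # composition from the four-ball sample by scaling the sample proportion.
    population_size = 20
    sample = ["black", "black", "black", "white"]      # the four balls drawn

    black_fraction = sample.count("black") / len(sample)        # 0.75
    estimated_black = round(black_fraction * population_size)   # 15
    estimated_white = population_size - estimated_black         # 5

    print(estimated_black, estimated_white)   # 15 5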

How much the premises support the conclusion depends upon (a) the number in the sample group, (b) the number in the population, and (c) the degree to which the sample represents the population (which may be achieved by taking a random sample). The hasty generalization and the biased sample are generalization fallacies.

Statistical syllogism

A statistical syllogism proceeds from a generalization to a conclusion about an individual.

A proportion Q of population P has attribute A.
An individual X is a member of P.
Therefore:
There is a probability which corresponds to Q that X has A.

The proportion in the first premise would be something like "3/5ths of", "all", "few", etc. Two dicto simpliciter fallacies can occur in statistical syllogisms: "accident" and "converse accident".

Simple induction

Simple induction proceeds from a premise about a sample group to a conclusion about another individual.

Proportion Q of the known instances of population P has attribute A.
Individual I is another member of P.
Therefore:
There is a probability corresponding to Q that I has A.

This is a combination of a generalization and a statistical syllogism, where the conclusion of the generalization is also the first premise of the statistical syllogism.

Argument from analogy

The process of analogical inference involves noting the shared properties of two or more things, and from this basis inferring that they also share some further property:[10]

P and Q are similar in respect to properties a, b, and c.
Object P has been observed to have further property x.
Therefore, Q probably has property x also.

Analogical reasoning is very frequent in common sense, science, philosophy and the humanities, but sometimes it is accepted only as an auxiliary method. A refined approach is case-based reasoning. For more information on inferences by analogy, see Juthe, 2005.

Causal inference

A causal inference draws a conclusion about a causal connection based on the conditions of the occurrence of an effect. Premises about the correlation of two things can indicate a causal relationship between them, but additional factors must be confirmed to establish the exact form of the causal relationship.

Prediction

A prediction draws a conclusion about a future individual from a past sample.

Proportion Q of observed members of group G have had attribute A.
Therefore:
There is a probability corresponding to Q that other members of group G will have attribute A when next observed.

Bayesian inference

As a logic of induction rather than a theory of belief, Bayesian inference does not determine which beliefs are a priori rational, but rather determines how we should rationally change the beliefs we have when presented with evidence. We begin by committing to a prior probability for a hypothesis based on logic or previous experience, and when faced with evidence, we adjust the strength of our belief in that hypothesis in a precise manner using Bayesian logic.
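
As a minimal illustration of such an update, consider the following Python sketch (not drawn from the article; the two hypotheses, the prior values, and the observed coin flips are purely illustrative). It applies Bayes' rule once per observation: multiply each prior belief by the likelihood of the evidence under that hypothesis, then renormalize.

    # Bayesian updating over two illustrative hypotheses about a coin:
    # "fair" says P(heads) = 0.5, "biased" says P(heads) = 0.8.
    hypotheses = {"fair": 0.5, "biased": 0.8}   # P(heads | hypothesis)
    prior = {"fair": 0.9, "biased": 0.1}        # assumed prior beliefs

    def update(beliefs, flip):
        """Return the posterior over hypotheses after one flip ('H' or 'T')."""
        likelihood = {h: p if flip == "H" else 1 - p for h, p in hypotheses.items()}
        unnormalized = {h: beliefs[h] * likelihood[h] for h in beliefs}
        total = sum(unnormalized.values())
        return {h: w / total for h, w in unnormalized.items()}

    beliefs = prior
    for flip in "HHHHHT":          # the observed evidence
        beliefs = update(beliefs, flip)
    print(beliefs)                 # belief in "biased" has grown from 0.1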

Inductive inference

Around 1960, Ray Solomonoff founded the theory of universal inductive inference, the theory of prediction based on observations; for example, predicting the next symbol based upon a given series of symbols. This is a formal inductive framework that combines algorithmic information theory with the Bayesian framework. Universal inductive inference is based on solid philosophical foundations[11] and can be considered as a mathematically formalized Occam's razor. Fundamental ingredients of the theory are the concepts of algorithmic probability and Kolmogorov complexity.
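
The full theory cannot be reproduced in a few lines, but the Occam-style weighting at its core can be caricatured. The following Python sketch is only a toy (it is not Solomonoff's actual construction and is not from the article): three hand-picked hypotheses, each with an assumed description length in bits, are kept if they are consistent with the observed bits and weighted by 2 raised to the negative of that length, so that simpler consistent hypotheses dominate the prediction of the next bit.

    # Toy Occam-weighted prediction (illustrative only, NOT Solomonoff induction):
    # each hypothesis generates a bit sequence and carries an assumed
    # description length; consistent hypotheses get weight 2**(-length).
    hypotheses = [
        # (name, assumed description length in bits, i-th bit of the sequence)
        ("all zeros",   2, lambda i: 0),
        ("all ones",    2, lambda i: 1),
        ("alternating", 3, lambda i: i % 2),
    ]

    def predict_next(observed):
        """Return weights over the next bit, using only consistent hypotheses."""
        weights = {0: 0.0, 1: 0.0}
        for name, length, bit_at in hypotheses:
            if all(bit_at(i) == b for i, b in enumerate(observed)):
                weights[bit_at(len(observed))] += 2 ** (-length)
        total = sum(weights.values())
        return {b: w / total for b, w in weights.items()} if total else None

    print(predict_next([0, 1, 0, 1]))   # only "alternating" fits; it predicts 0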

References

  1. Carlson, N. R. & Heth, C. D. (2009). Psychology: The Science of Behavior. Toronto: Pearson Education Canada.
  2. "Deductive and Inductive Arguments", Internet Encyclopedia of Philosophy: "Some dictionaries define 'deduction' as reasoning from the general to specific and 'induction' as reasoning from the specific to the general. While this usage is still sometimes found even in philosophical and mathematical contexts, for the most part, it is outdated. For example, …"
  3. Vickers, John. "The Problem of Induction". The Stanford Encyclopedia of Philosophy.
  4. Herms, D. "Logical Basis of Hypothesis Testing in Scientific Research" (PDF).
  5. Sextus Empiricus, Outlines of Pyrrhonism. Trans. R. G. Bury, Harvard University Press, Cambridge, Massachusetts, 1933, p. 283.
  6. Popper, Karl R.; Miller, David W. "A proof of the impossibility of inductive probability". Nature 302 (1983), 687–688.
  7. Vickers, John. "The Problem of Induction" (Section 2). Stanford Encyclopedia of Philosophy. 21 June 2010.
  8. Vickers, John. "The Problem of Induction" (Section 2.1). Stanford Encyclopedia of Philosophy. 21 June 2010.
  9. Gray, Peter. Psychology. New York: Worth, 2011. Print.
  10. Baronett, Stan (2008). Logic. Upper Saddle River, NJ: Pearson Prentice Hall. pp. 321–325.
  11. Rathmanner, Samuel; Hutter, Marcus. "A philosophical treatise of universal induction". Entropy 13(6): 1076–1136, 2011.