How Do We Know What We Know?

What is a human being? What is life? Can science give us reliable answers to such questions? The electricity of life. The meaning of human consciousness. Are we alone? Are the traditional contests between science and religion still relevant? Does the word "spirit" still hold meaning today?

Steve Smith
Guest

How Do We Know What We Know?

Unread post by Steve Smith » Fri May 16, 2008 9:36 am

WHAT IS ACTUALLY THE CASE?
Mel Acheson

Everything I know I've read in a book. You may then ask, How is this knowing different from reading? I see the words; I understand the sentences; I make sense of the ideas; I comprehend what the author is proposing. But is the proposal actually the case? How do I know it is or isn't?

The same question arises with the philosophy of physics. In its most simplistic form, that philosophy assumes knowing is looking and knowing more is looking more closely. At first look, this appears to be the case. But looking more closely at looking and knowing reveals surprises and raises doubts.

Edwin Land, the inventor of the Polaroid camera, photographed an arrangement of flowers with black-and-white transparencies. One transparency was taken in yellow light, another in orange light. He then projected the images simultaneously, each in the same light with which it was taken. The audience expected to see a yellow-orange bouquet of flowers. They saw instead reds and blues and greens and purples, as well as yellow and orange. Perception of color is not a simple response of the eye to each wavelength of light but a complex of activities that converge on a judgment of color.

After cataract surgery was perfected, many people who had been blind from birth were suddenly able to see. But they saw only senseless patches of color. The doctors were surprised to learn the patients had to learn to see. It took great effort for the patients to make the patches make sense. Not only did they have to learn to interpret the new ocular stimuli, they had to reinterpret the old stimuli of touch and hearing and smell and taste. They weren't simply adding knowledge to what they already knew; they had to learn to know a different and unfamiliar world. Some gave up, closed their eyes, and retreated to their familiar world of sensation and interpretation that omitted the new ocular stimuli.

Most people learn to see in the first few weeks of life. By the time they've learned to speak and can tell someone about the experience, they've forgotten it. The linking of stimuli and concepts comes to be taken for granted, the composite nature of perception is overlooked, and people assume that sensory stimuli come pre-assembled into intelligible configurations. Those who become physicists mistake "seeing what's there" for "knowing what's there". This lapse of awareness leads them to reify their preconceptions and to betray their empirical principles for a blind idealism that leaps from fervent faith to foregone conclusions. The irony of modern physics is that the more its theories have achieved, the less its philosophy has been supported by discoveries of how perception and cognition work. (Or maybe, as my astronomy advisor warned me, this just means philosophy is irrelevant.)

Reading is the linking of ideas with ocular stimuli. An astronomer looking at the spectrum of a quasar is also linking ideas with ocular stimuli. How does he know the redshifted lines in the spectrum are those of a superluminous object on the frontier of the observable universe? How do I know the words in the textbook that describe the astronomer's linkage of idea and looking are what the quasar is? What assurance does either of us have that the ideas we link with the particular ocular stimuli we experience are what's happening?

At first look, it appears we can be assured by ideas that have been verified. Many associated stimuli have been linked repeatedly with the same ideas by many investigators, forming a web among several disciplines of interlocking ideas and lookings. The intensity of light decreases as distance increases. The frequency of light decreases as velocity of separation increases. The angular size of an object decreases as distance increases. The angular sizes of galaxies decrease as their luminosities and the frequencies of their light decrease. Therefore quasars must be bright and distant. It all fits together reassuringly.

But this web of verification only confirms that I've understood my reading, made sense of my looking. It doesn't answer the question, Is my understanding and sense-making actually the case? Fantasies also make sense and can be verified. The history of science is the story of linkages that came unlinked. Remember Eijkman and Grijns:

Toward the end of the nineteenth century, Dr. Eijkman proved that beri-beri was caused by a bacterium in rice kernels and could be cured by an antitoxin in the polishings. In the largest case study ever conducted, he ruled out every other imaginable cause. He demonstrated that eating polished rice caused the disease and eating the polishings cured it. He was awarded a Nobel Prize.

Not long after, Dr. Grijns imagined something Dr. Eijkman had not: Perhaps beri-beri was caused not by something in the rice but by something not in the rice. The idea of 'bacterial infection' was severed from the experience of beri-beri and the idea of 'nutritional deficiency' became linked instead.

If sensory stimuli give no IDEA of what's the case, and theories give no ASSURANCE of what's the case, and verification can't PRECLUDE that something else might be the case, how can we KNOW what's the case? After the stimuli and the ideas have been linked, after the observations have been classified and the experiments replicated, after the theory has been formulated and verified, the critical question still guards the door to knowledge: Is it actually the case?

This is a question for judgment. But how are we to judge? To remain scientific, the judgment must arise from the cognitive activities that define science: from sensory observations and intellectual hypotheses. There can be no appeal to the revelation of religion or to the intuition of mystical or spiritual realities, even though creative insights may be revealed or intuited.

Because this judgment arises from and is reflected back into the data and ideas that are judged, it appears to be circular. But ideas have implications that can lead to new data; data contain anomalies that can lead to new ideas. Instead of a closed and static circle of certain knowledge, we have a spiraling process of knowledge production that is inherently uncertain and evolving. Knowing is not simply taking a look as a camera takes a snapshot but a constructive struggle of cognitive artistry.

This view of knowledge as dynamic, provisional, and adaptive provokes another question: What else could be the case? What other theories might make better sense of the same observations? What other observations might verify a bolder theory? Anomalies and impossibilities are the soil in which the answers to these questions grow.

Oliver Sacks notes, in an essay on "Scotoma: Forgetting and Neglect in Science:"

"The first difficulty, the first barrier, lies in one's own mind, in allowing oneself to encounter new ideas and then to bring them into full and stable consciousness, and to give them conceptual form, holding them in mind even if they do not fit, or contradict, one's existing concepts, beliefs, or categories. Darwin remarks on the importance of 'negative instances' or 'exceptions,' and how crucial it is to make immediate note of them, for otherwise they are 'sure to be forgotten.'"

Grijns couldn't find the bacterium that Eijkman had proved must cause beri-beri. This anomaly caused Grijns to doubt what was accepted as secure knowledge: the germ theory of disease. He wondered what else might be the case and came up with nutritional deficiencies. Arp found connections between quasars and nearby galaxies that almost any astronomer can prove are impossible. This anomaly caused Arp to doubt what is currently accepted as secure knowledge: the expanding universe theory of cosmology. He and his colleagues are wondering what else might be the case and are exploring such ideas as mass variability and plasma cosmology. Anomalies and doubts such as these keep knowledge on the move.

The question of what is actually the case is actually defective: To be answered scientifically, it must be asked in the context of human senses, human intelligence, and human judgment—in the context of adaptive knowledge. We can only observe those parts of 'the case' to which our senses and instruments respond, can only hypothesize from insights and inspirations that are circumscribed by history and culture, can only judge as those observations and hypotheses evolve.

The question of what is actually the case must be conceived on a higher level of abstraction than that of the content of particular theories. A more accurate question is, What do WE KNOW is the case? What is actually the case with human knowing of mutable groupings of experiences and ideas? Scientific truth is not written once and for all on the sky, despite its descent from mytho-religious fiat, but in the cognitive functioning of the human brain.

Mel Acheson

Plasmatic
Posts: 800
Joined: Thu Mar 13, 2008 11:14 pm

Re: How Do We Know What We Know?

Unread post by Plasmatic » Fri May 16, 2008 2:10 pm

Ah Mel, he is such a sophisticated skeptic. ;)
"Logic is the art of non-contradictory identification." ... "I am, therefore I'll think."
Ayn Rand
"It is the mark of an educated mind to be able to entertain a thought without accepting it."
Aristotle

junglelord
Posts: 3693
Joined: Mon Mar 17, 2008 5:39 am
Location: Canada

Re: How Do We Know What We Know?

Unread post by junglelord » Fri May 16, 2008 3:01 pm

Neurological development matters: the inability to learn certain things past a hard-wired point in neurological development is well known. In reality, Tarzan would not have been able to speak, because he did not hear language before the point at which the brain (the left temporal lobe) can still acquire it. Past about five years old, if one has not learned language, one cannot process language. The sight issue with blind people is also a sign of the neurological limits of plasticity. Kittens kept in an environment with only horizontal lines for their first two weeks of sight cannot process vertical lines after the neurological pathways have formed. When running in a normal room they run into vertical table legs for life.
:?

Possibly learning new things is hard for a lot of people for many reasons. We have lost the ability to see what is right in front of us.
;)
If you only knew the magnificence of the 3, 6 and 9, then you would have a key to the universe.
— Nikola Tesla
Casting Out the Nines from PHI into Indigs reveals the Cosmic Harmonic Code.
— Junglelord.
Knowledge is Structured in Consciousness. Structure and Function Cannot Be Separated.
— Junglelord

Antone
Posts: 148
Joined: Fri Jun 27, 2008 5:28 pm

Re: How Do We Know What We Know?

Unread post by Antone » Sun Jul 06, 2008 1:04 pm

Steve Smith wrote:Mel Acheson

Everything I know I've read in a book. You may then ask, How is this knowing different from reading? I see the words; I understand the sentences; I make sense of the ideas; I comprehend what the author is proposing. But is the proposal actually the case? How do I know it is or isn't?
Interesting question. And, as with everything else, I believe the answer lies in the fact that knowledge isn't a singular thing--but rather it is an entity that is best defined by reciprocal aspects.

The Broken Clock Analogy
Consider the following saying:
Even a broken clock is right twice a day.
To most of us it seems pretty obvious that a working clock, even if it does not keep very accurate time, is still a lot more accurate than a broken clock with hands that don’t move at all. But ironically, this is only true when you look at the world in relative terms. In a strictly absolute world—one in which there is only [true] or [false] and nothing else is possible—saying, “Something is very close to being right,” is just another way of saying, “But it’s wrong.”

In such an absolute world, it is either [ten o'clock] or it is [not ten o'clock]. And the clock is either [accurate] or it is [not accurate]. There are no subtle distinctions between these two extremes. Thus, in the strictest absolute sense, it doesn't matter if the clock is [1/10,000 of a second] off true time or [six hours] off true time. Neither time is [ten o'clock] so neither time is [absolutely accurate]. Furthermore, since there are only two choices—[right] or [wrong]—neither choice can be understood as being [more right] or [more wrong] than the other.

Ironically, when we understand truth in such a strict, absolute way, a broken clock is [right] more often than any working clock could ever be, even if the working clock keeps exceptionally good time. In fact, the better a clock keeps time the less often it will experience [absolutely accurate time].

For example, if the hands do not move at all, the clock experiences absolute accuracy once every 12 hours, as the correct time passes through the position of the clock's hands. If the clock's hands move exactly one hour every day (meaning that it is a little better at keeping time), then the clock experiences moments of absolute accuracy approximately once every 12 1/2 hours. If the clock's hands move 12 hours every day then the clock experiences absolute accuracy about once every 24 hours. And if the clock's hands move 22 hours every day the clock experiences absolute accuracy about once every six days.

I won’t try to figure times for anything more accurate, but from what we’ve already looked at we can see a distinct pattern: the closer the clock comes to keeping [absolutely accurate time] the longer the [period of time between moments of absolute accuracy]. Or in other words, the more the clock becomes [relatively right] the less often it experiences being [absolutely right].
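
To make the pattern concrete, here is a minimal sketch in Python of the arithmetic above; the function name coincidence_interval and the constant-drift assumption are mine, purely illustrative:

def coincidence_interval(hand_hours_per_day: float, dial_hours: float = 12.0) -> float:
    """Real hours between moments when a drifting clock shows the true time.

    A correctly running clock sweeps 24 display-hours per real day (two laps
    of a 12-hour dial). A clock whose hands advance only hand_hours_per_day
    display-hours per real day drifts by abs(24 - hand_hours_per_day) hours
    per day, and it reads correctly whenever the accumulated drift is a
    whole multiple of the dial length.
    """
    drift_per_day = abs(24.0 - hand_hours_per_day)
    if drift_per_day == 0:
        raise ValueError("a perfectly running clock is always correct")
    return dial_hours * 24.0 / drift_per_day

print(coincidence_interval(0))    # broken clock: 12.0 hours
print(coincidence_interval(1))    # about 12.52 hours ("once every 12 1/2 hours")
print(coincidence_interval(12))   # 24.0 hours
print(coincidence_interval(22))   # 144.0 hours, i.e. about once every six days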

No matter where the hands have stopped, a broken clock experiences absolute accuracy twice a day, every day. These moments of accuracy may be infinitely short in duration, but they nonetheless occur twice each and every day. The only time a clock that is [slightly slow] or [slightly fast] is absolutely accurate is on that [rare occasion when the hands pass through the correct time before once again being inaccurate on the opposite side], and since this is most likely to occur shortly after the clock has been reset, it stands to reason that the more accurately a clock keeps time the less often it will typically experience [absolutely accurate time].

The Compound Nature of Truth
The Analogy of the Broken Clock has two reciprocal aspects at work—the absolute nature of a given thing, and the relative nature of a given thing.

In this case, the reciprocal aspects are used to define the accuracy of a clock, but we can also use these same reciprocal aspects to define the truth of a statement, such as the following:
(AC) This clock keeps accurate time.
It is not possible to simply say that (AC) is either [true] or [false], for which of these values we assign to it is entirely determined by how we interpret the statement. If this statement is meant to reflect the following proposition:
(AAC) This clock keeps absolutely accurate time.
Then no matter what clock we’re talking about, the statement will invariably be [false].

Even the most accurate clock in the world is only capable of keeping time that is accurate to within certain parameters. It may only lose [1/1,000,000] of a second every 10 billion years, but that is not absolutely accurate—and so it is not accurate enough to make proposition (AAC) absolutely true. Thinking in the strictest absolute terms makes the [idea of an accurate clock] entirely meaningless—no such thing can physically exist, and so by extension, proposition (AAC) is necessarily trivial because it will always be false about every possible clock.

We can use the same basic logic to deduce that virtually any statement about any physical object will always be less than absolutely true.

We encounter a similar (if reciprocal) problem when we think in strictly relative terms—and this is true regardless of whether or not we restrict our answer to bi-valent values. For example, if we insist on a [true] or [false] answer, then the proposition:
(RAC) This clock keeps relatively accurate time.
Is true for every clock, since even [a clock with hands that don’t move] occasionally reflects the correct time—and thus expresses some relative degree of accuracy. In fact, there is at least one sense in which it can be understood to be more accurate than a working clock. For any given clock scenario, there is at least some sense in which the clock can be said to keep some [relative degree of accurate time]. And so, once again, our answer becomes trivial. It will invariably be [true], in some possible sense, for every clock we encounter.

Turning it into a Question: One possible way of trying to work around this problem is to turn (AAC) and (RAC) into a relative question, such as:
(QAC) How accurately does this clock keep time?
Now we can give a relatively accurate answer for each specific clock we encounter. For instance we might say, “clock [x] is accurate to within 1.341 second per week,” or “clock [y] is accurate to within 1 minute and seven seconds every ten years.”

The problem with this strategy is that we haven't actually solved anything—we've simply delayed the resolution by moving the problem to a new location. Instead of making a [determination of truth] about a [statement], we're making a [statement] about which a [determination of truth needs to be made]. So once again, we're faced with virtually the same problem as before. If we think in strictly absolute terms, the answer to (QAC) will always be [false], since the answer can no more be absolutely accurate than a [clock can be absolutely accurate]. And, if we think in strictly relative terms, the answer will always be [true], since every answer will have at least some small amount of truth, no matter how far it is from being [absolutely true].

We can try to hide this recursive problem in successive layers of camouflage, but these vicious circles will never lead us away from the original problem, regardless of whether we think in [absolute] or [relative] terms or how deeply we bury it.

Harnessing the Vicious Circles of Reality

Fortunately, the DS theory allows us to rise out of this quagmire by using each reciprocal property to define the other.

For example, when a stranger asks, "What time is it?" I might reply, "It is exactly twelve o'clock." Intuitively, I know that (in the strictest sense) it is not possible for my answer to be [absolutely true]. But I know that in all probability my watch is keeping reasonably accurate time—because I know that I have not forgotten to wind it.

Given this assumption, I might ask myself the relative question, "How accurately is my watch keeping time?" My answer, of course, will only be relatively accurate—but it will be close enough for practical purposes, for I know my watch is typically accurate to within approximately [30 seconds per month]. Since I've recently adjusted the time (using a more accurate clock), I have every reason to assume that, at the very worst, my watch is (in a strictly absolute sense) accurate to within less than 45 seconds of true-time; and since it takes me less than [15 seconds] to say, "It is exactly twelve o'clock," I can assume (with a very high degree of confidence) that my [statement about the time] is [accurate to within 60 seconds].

Now, I have a quantifiable way of defining the precise nature of my statement. Since there are 1440 minutes in a 24 hour period, and the watch is apparently accurate to within one of those minutes, I can say that the watch has an error potential of less than [1 in 1440]. If I convert this error potential to the decimal [.0006944], and let [0.000…] stand for absolute accuracy and [1.000…] stand for absolute inaccuracy, then I can say that [.0006944] stands for the approximate value of my watch's relative accuracy. And clearly, my watch is much, much closer to being [absolutely accurate] than it is to being [absolutely inaccurate].
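
The same bookkeeping can be written out as a short Python sketch; relative_accuracy is my own illustrative name, and the one-minute error bound comes from the paragraph above:

def relative_accuracy(error_minutes: float, period_minutes: float = 24 * 60) -> float:
    """Express a timekeeping error as a fraction of the full 1440-minute day:
    0.0 stands for absolute accuracy, 1.0 for absolute inaccuracy."""
    return error_minutes / period_minutes

print(relative_accuracy(1))   # about 0.0006944 -- far closer to 0.0 than to 1.0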

Thus, (without being trivial) I have a way to evaluate in a relatively-absolute sense the truth of the statement, “It is exactly twelve o’clock.” And I can honestly say that the statement is [absolutely true], because I have (in essence) redefined [what it means to be absolute] in a distinctly [relative] way.

Using the [relative aspect] to define the [absolute aspect] in this way is a tremendous advantage because:
1. It allows us to break the chain of triviality that is inevitable in all [strictly relative] or [strictly absolute] patterns of thinking, and
2. It produces a reasonably accurate way to assign weighted, relative values to the various factors that we must use to make our decisions.

Antone
Posts: 148
Joined: Fri Jun 27, 2008 5:28 pm

Re: How Do We Know What We Know?

Unread post by Antone » Sun Jul 06, 2008 2:13 pm

Steve Smith wrote:Mel Acheson

Everything I know I've read in a book. You may then ask, How is this knowing different from reading? I see the words; I understand the sentences; I make sense of the ideas; I comprehend what the author is proposing. But is the proposal actually the case? How do I know it is or isn't?
Like truth, knowledge is a complex thing. It is best defined by using reciprocal aspects. For instance, there is a distinct difference between knowing and believing. Yet it is quite common for someone to say they know something when what they really mean is that they believe it.

Similarly, there is an important distinction between knowing and understanding. I believe that mainstream physicists know physics principles, but they do not understand them. Electric Universe cosmologists understand cosmology much better than mainstreamers, but even they do not have a complete understanding of their topic--even if they have a complete awareness of all there is to "know" on the subject.

To understand this complex nature of knowledge, I believe it is useful to consider how we learn.

How We Learn
Composite Photography is a technique whereby an ordinary piece of film is exposed with several overlapping images. Each image is of a similar [type of object]. For instance, composite photography is sometimes used to create a composite image of all the members in a given family. Each portrait is taken from the same distance and angle so that when the film is developed the same basic features of each person will appear (as much as possible) in the same place on the film.

The result is that those features that the family members hold in common will be reinforced, while those features that are different will be deemphasized. If the family has very similar features, the resulting image will be relatively sharp and well defined; if their features vary widely, then the image will be a fuzzier blend of all the shades and colors that were exposed to that part of the film. Pictures made with such negatives may look as if they were taken slightly out of focus, but for the most part, if you place the [composite picture] beside [ordinary pictures] of all the family members, you might have to look rather closely to pick out which picture was the composite photograph—particularly if you aren't familiar with the family.

I believe that we use a very similar mental technique to learn the meaning of concepts and the names of objects. Suppose, for instance, that a young child is being raised in a family that has a Persian [cat]. If this is the only [cat] that the child has ever seen, then the collection of images that the child uses to define [what it means to be a cat] is quite sharp—since the images are entirely unmuddied by examples of other [types of cats]. But the child's actual concept of [what it means to be a cat] is still quite vague, because the [one sampling of cat] is insufficient to clarify what it is about this particular animal that makes it a cat. The more cats the child sees the more he learns to distinguish [cat] from [non-cat].
Thus, if the child sees a small [dog] for the very first time, he recognizes that the [dog] is different from the animals he has learned to call [cats], but several of the cats were quite different from one another as well. For instance, to his young mind, the [Siamese cat] that he saw recently may have looked considerably less like the [Persian cat] than this [dog] does.

But when he points at the [dog] and says, "Cat!" and his mother gently corrects him by saying, "No honey, that's a dog," he is forced to make new evaluations about [what a cat is] and [what a cat is not]. The [image of the dog] goes into the [collective whole that is the child's concept of NOT cat], forcing him to reexamine his collective memories and decide what exactly it is about this new animal that prevents it from being a [cat].

This, of course, is all done subconsciously... As the child sees more examples of [cats] the composite image of what it means to be a cat becomes stronger, as the essential qualities are reinforced and the non-essential qualities become more and more blurred. And because this process of composite holography is primarily irrational in nature, it is also largely automatic. It is not something we have to think about doing in order to do. Instead, it is a simple matter of the overlapping images producing a sharper and sharper mental composite. In this way, the child learns to associate certain [groupings of irrational mental images] with a [particular rational term]. And while the term itself may be rational, the composite image created by all the mental images is not.
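
As a loose caricature of this process (not a claim about how the brain actually does it), here is a short Python sketch in which each new example is folded into a running average, the "composite", and a new instance is labelled by whichever composite it most resembles; all names and feature values are invented for illustration:

from math import dist

class Composite:
    """A running average of example feature vectors: shared features are
    reinforced, idiosyncratic ones wash out, like composite photography."""
    def __init__(self, dimensions: int):
        self.mean = [0.0] * dimensions
        self.count = 0

    def add(self, example: list[float]) -> None:
        self.count += 1
        self.mean = [m + (x - m) / self.count for m, x in zip(self.mean, example)]

def closest_label(example: list[float], composites: dict[str, Composite]) -> str:
    """Label a new instance by the nearest composite (prototype)."""
    return min(composites, key=lambda name: dist(example, composites[name].mean))

# Toy features: [ear pointiness, snout length, purring] -- purely illustrative.
cat, dog = Composite(3), Composite(3)
for ex in ([0.9, 0.2, 1.0], [0.8, 0.3, 0.9]):   # two "cats" seen so far
    cat.add(ex)
for ex in ([0.4, 0.8, 0.0],):                   # one "dog", after mother's correction
    dog.add(ex)
print(closest_label([0.85, 0.25, 0.95], {"cat": cat, "dog": dog}))  # -> "cat"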

Perhaps what is most important to recognize is that the [mental images] and the [composite of those mental images] are reciprocal aspects of thinking. We can visualize these reciprocal aspects by imagining a larger circle with lots of smaller circles inside of it. The larger circle represents the composite image that is associated with a single term, such as the word [cat], while the collection of smaller circles represent all the [individual mental images of a cat] that the child has seen.

The larger circle is a singular, abstract thing, which we can represent using the abstraction set
{x:x is y}

where [y] is a unique property or set of properties that only the smaller circles possess.

Keep in mind that the larger circle necessarily includes all of the smaller circles, but it does not include them as individuals—just as a composite photo includes images of all the parts that were used to create it, but is not itself an image of any of the individual parts. The result is that, just as we can add new photos to a composite photo, so too we can mentally add (or subtract) the smaller mental circles from the larger mental circle without changing the basic nature of the larger mental circle.

For instance, if we let (PB) stand for the concept, [parts of my body], then the smaller circles might be given such terms as [my head], [my arms], [my legs], and so forth.

Now then, we can represent the larger circle using the abstraction set:
(PB)a {x:x is a part of my body}
And the smaller circles can be represented by the enumeration set:
(PB)e {[my head], [my right arm], [my left arm], [my torso],…}
As a set, (PB)a contains but the single element, [parts of my body]; while (PB)e contains several different elements.

If we add another element to (PB)e—say by distinguishing for the first time between the [upper arm] and the [fore arm]—we do not change the element that (PB)a contains. Its single element is still [parts of my body].

Thus, in a very real sense, while the abstraction and enumeration sets refer to the same thing, they are also independent of one another.
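
A minimal Python sketch of this distinction (the class names and fields are mine, purely illustrative): the abstraction set is identified by its single defining property, the enumeration set by its explicit members, and distinguishing a new part changes only the latter:

from dataclasses import dataclass, field

@dataclass
class AbstractionSet:
    """(PB)a: defined by a single property, {x : x is a part of my body}."""
    defining_property: str

@dataclass
class EnumerationSet:
    """(PB)e: defined by explicitly listing the elements distinguished so far."""
    elements: set[str] = field(default_factory=set)

pb_a = AbstractionSet("x is a part of my body")
pb_e = EnumerationSet({"my head", "my right arm", "my left arm", "my torso"})

# Distinguishing [upper arm] from [fore arm] for the first time enlarges (PB)e...
pb_e.elements.update({"my upper arm", "my fore arm"})

# ...but leaves (PB)a untouched: its single defining property has not changed.
print(pb_a.defining_property)   # "x is a part of my body"
print(len(pb_e.elements))       # 6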

The mind is compartmentalized into numerous [kinds of mind]. But when we are thinking properly, each [kind of mind] serves as feedback to keep the reciprocal [kind of mind] working at top efficiency. This is true at virtually every possible level: each pair of [reciprocal kinds of mind] shares an intimate and co-dependent relationship, with both aspects being equally important to proper thinking.

Now, each of us has numerous smaller circles (or examples) which we have used to define the various terms we understand. And many of the same terms make use of the same examples. For example, while a [dog] and [cat] may share such characteristics as [four legs] and [fur], a [dog] may be the same color as the [family’s pet turtle], while [Snowball, their pet cat], may be the same color as the [child’s ball]. A child first begins to develop these mental correlations because each time his mother corrects him she is teaching him which mental images belong together in the same composite images. Thus, the holographic whole that is our body of knowledge is a very complex network of interrelated ideas, which we might call a mind set. And because each person forms their personal mind set using [unique instances] and a [unique mental apparatus], they necessarily develop a unique mind set.

Common Sense
At the same time that the child is learning to match names and objects together, they are learning other, subtler relational correlations. This learning occurs in much the same way: by repeatedly encountering similar things (which are nonetheless different) and different things (which are somehow the same), and building from these individual instances a collective, irrational concept that defines the whole picture. We sometimes call this collective body of irrational concepts common sense.

For the most part, these relational rules are so obvious (and in some cases, so subtle) that we rarely stop to think about what they really are. If asked, we would generally be hard-pressed to put them into words—not surprising, given their holistic and irrational nature. Common sense is constructed from the composite "images" of our mental concepts, so just as we can recognize a person's face even though we can't adequately describe what they look like in words, we can generally recognize common sense when we see it, even though we are unable to put it into words. When we try, our words frequently fall far short of the mark.

This helps to explain why, when philosophers try to examine their common sense very closely, it always seems that common sense eventually leads to paradox.

I believe that most paradoxes are produced precisely because we are trying to take the [holistic nature of common sense] and break it into [rational bits] that we can examine as individual elements of the whole. When we do this we commonly produce two apparently incompatible [bits of common sense] which are both equally satisfied by the same scenario. Since neither [bit of common sense] gives rise to a chain of logic that is less appropriate than the other, we are unable to simply discount one bit in favor of the other. But trying to define the scenario strictly in terms of one of these bits (but not the other), invariably leads to one paradox or another.

This idea, that common sense is less than obvious, is not an entirely new thought. For example, Sorensen expresses a very similar idea when he says:
… common sense is reactive. We do not bother to defend (or even think about) the proposition that the future resembles the past until David Hume formulates the problem of induction. Paradoxes illuminate common sense by provoking bits of it into consciousness. As more paradoxes are discovered, more of common sense becomes visible. Without a provocateur, common sense is faceless.
Apparently, common sense is faceless precisely because it is the type of thought which is most hidden from our rational minds, and it is our rational mind that we are typically most aware of. What is not so apparent, however, is why common sense should primarily be a [product of our irrational mind] in the first place.

Modern men are creatures who are dominated by their rational minds. We utilize our reason to puzzle out the consequences of believing in and acting on our common sense notions. In the real world, this is such a crucial skill because humans are weaker and slower than many other animals. Our main advantage is our faculty for applying rational thought to hypothetical situations. This allows us to make decisions based not on [what is happening], but rather on [what might happen]. By making such predictions, we are able to avoid potentially dangerous situations and invent new and better ways to do things.

In this sense, we might think of reason as the [rules by which we apply common sense to create new mental structures], which can in turn be modified by reason to produce newer and yet more subtle concepts. Oftentimes, however, the complex, interrelated, and cyclic nature of this process makes it very difficult to distinguish between what is common sense and what is reason. For example, according to Roy Sorensen,
G. E. Moore admitted that common sense underestimates the distance from the earth to other heavenly bodies.
I would suggest, however, that Sorensen and Moore seem to be confusing common sense with simplistic (or perhaps even flawed) reasoning. I don’t think there is anything at all about common sense that would lend itself to predicting the distance to the planets or the stars.

Here is my idea of what common sense can tell us about the distance of the stars:
1. Because a [small object that is close to us] can appear larger than a [large object that is far away], we can’t possibly determine how far heavenly bodies are from us without knowing the relative size of the object in question, and
2. Because it becomes increasingly difficult to judge the size of an object the further away from us it is, if astronomical objects are very far from us, it will not be possible to accurately judge their relative size or distance.

To me, these are bits of common sense.

Trying to estimate the distance to heavenly bodies (despite these bits of common sense) is an obvious exercise in flawed reason.

This may seem a bit backwards. We commonly suppose that rational thinking is the source that gives accuracy and clarity to our mental ruminations, but just as we saw with the Broken Clock analogy, it is only possible to deal with something in an absolute sense when we have defined [what it is to be absolute] in a relative way. In much the same way, it is only possible to [think rationally] because we have used our irrational minds to define [what it means to be rational]. In other words, we have used our irrational minds to define what [instances] belong to the [terms we have defined].

In Western philosophy, the goal seems to be to express everything in increasingly rational terms. Thus, we rely on our reason to help us figure out what our common sense is telling us—but because these faculties stem from very different ways of thinking, this process is often a complete failure, or it meets with very limited success.

For example, our common sense tells us that every proposition is either [true] or [false]. But, as we’ve seen, this only names one aspect of that particular bit of common sense—the very same application of common sense also tells us that virtually every [ostensibly true] proposition is [both true and false]. So when I say, “It’s exactly twelve o’clock,” the statement is [true] but it is also [false]—each in its own way.

The apparent paradox results because we are not looking at the whole picture of common sense, but only at partial snippets of the whole. As we've seen, treating these incomplete snippets as if they were the whole frequently leads to paradoxical conclusions—it can also sometimes lead to deductions that are the opposite of what they should be. This is not unlike the situation that can occur when you're looking out the front window of a moving car on an exceptionally humid day. You see fat beads of water splattering on the windshield and assume it is raining. But if you change your perspective, by stopping the car (or by looking out the side window), you suddenly realize that (from that perspective, at least) it isn't raining at all.

What prevents these observations from being inconsistent is that they involve a change in our perspective. As Aristotle pointed out, no [single thing] is ever both [true and false] in exactly the same way at exactly the same time. But everything that is true in one way is false in another, as we saw when we changed our perspective from looking out the front window to looking out the side window. By changing something about the specific scenario, and thus our perspective, we change whether it makes sense to believe the claim "It is raining" to be true or false.

Collective Common Sense
What we normally tend to refer to as [common sense] is the [set of more-or-less shared beliefs that are common to the majority of people]. This kind of collective common sense is the opposite of individual [common sense] in many ways. In fact, about the only thing they have in common is that they both involve a [holistic aspect]. Individual [common sense] is holistic because of the [way a person’s irrational mind processes it]. The [collective common sense], on the other hand, is holistic in terms of being a statistical average (so to speak) of the rational ideas of all the individuals in a whole community.

It is very important to keep in mind that these two bodies of belief are not identical. In fact, I would suggest that there are probably more [ways in which they are different] than there are [ways in which they are the same]. Certainly, there are some ways in which they capture the same information: the longer you stick your hand into a flame, the worse you get burned. Experience has taught virtually everyone this same general principle. And it is not difficult to convert this [irrational understanding] into [rational thoughts] that everyone can understand in a very similar way.

Other experiences, however, are not so easy to convert from irrational to rational thought—as we saw with the Broken Clock Analogy, for instance. For example, there was a time when the [collective common sense] held that the [earth was flat]. This was such a commonly held belief that it was rarely if ever questioned. To the rational mind of these ancient people it made a lot of sense: whenever the surface of the earth is free from valleys and mountains (which are just surface features, after all), the earth appears to be flat for as far as we can see.

What [individual common sense] actually tells us, however, is more like the following:
...1. What we can see of the world appears to be more or less flat.
...2. Something that has a particular characteristic at one place will sometimes continue to have that characteristic at another place.
...3. The world extends beyond what we can actually see.
...4. We cannot know for certain what is beyond what we’ve seen.
...5. Sometimes our senses can appear to tell us things that are not true.

Using these elements of [individual common sense], we can make a flawed rational deduction that [the world is flat], but only because we choose to focus on certain parts of the whole. There are actually two rational deductions that we should make from these [common sense] principles:
...1. the world may be flat (1, 2)
...2. it may not be flat (3, 4, 5)

The Purpose of our rational minds is to use Common Sense to Make Deductions: Because of this, we are unlikely to reach both decisions (1) and (2). Instead, we are more likely to choose one of them and ignore the other. And this is why, when we examine our beliefs closely, we often discover that they lead to contradictory and paradoxical conclusions.
Last edited by Antone on Sun Jul 06, 2008 2:50 pm, edited 4 times in total.

Antone
Posts: 148
Joined: Fri Jun 27, 2008 5:28 pm

Re: How Do We Know What We Know?

Unread post by Antone » Sun Jul 06, 2008 2:32 pm

Now we can reexamine the meaning of knowledge.
As I said earlier, there is a distinction between what we [know] and what we [believe].

I think one of the best ways to define this difference is to say that what we "know" are the individual examples of our [individual common sense]... And what we "believe" is the relationship between the individual examples (or the small circles) and the terms we use (or the big circles).

Notice that what we know isn't really knowledge, so much as awareness. I know [what I see when I look at my cat]. I believe that what I see is a [cat].
For example, I may believe that the [color of my cat] is [white], but what I know (or am aware of) is actually a [very light cream color]. Like the broken clock, it is neither [absolutely true] nor [absolutely false] to say that the cat is white. The cat is only [white] by virtue of the fact that I have defined [the color of the cat] as being [white].

What I know (or am aware of) is the [actual color of the cat] that I see.
What I believe is that that color is [white].

What I know, is something that is [impossible for me to fully understand or express].
What I believe is something that is [impossible to be entirely true], precisely because it is not possible for my understanding of it to be fully understood or expressed.

These then are two reciprocal aspects of knowledge.
There are other reciprocal aspects as well, and these other aspects interact with these in complex ways of their own.

StevenO
Posts: 894
Joined: Tue Apr 01, 2008 11:08 pm

Re: How Do We Know What We Know?

Unread post by StevenO » Sun Jul 06, 2008 5:57 pm

I think what we know is what we experienced with our own senses. What we believe is what somebody else told us to be true.
First, God decided he was lonely. Then it got out of hand. Now we have this mess called life...
The past is out of date. Start living your future. Align with your dreams. Now execute.

Muser
Guest

Re: How Do We Know What We Know?

Unread post by Muser » Tue Jul 15, 2008 7:08 am

This is a fascinating question, but it also goes to the depth of understanding.

How we use language and what we think different words/sentence structures mean depends upon our level of understanding of what we were taught about language, the cultural framework, and even the time period we are living in. For example, the word "gay" in the past had more pleasurable, innocent connotations, as can still be heard today in old films, etc. But today the word "gay" refers mainly to "homosexuals and lesbians", and this has a totally different connotation. This shift is not only generational but also cultural, and it refers only to the language we use, in this instance English.

But we also have to realise that the words knowledge, understanding, communication, and information all have slightly different meanings and contexts.

For me, "knowing" can be information and understanding at the same time. The sources for this concept can have come from a variety of places.

There have been times in my life when I have "known" something would happen, or some idea was true, even without the physical supporting evidence to support this idea. Some would say this was actually belief, however, I can believe something as possible, but to be so sure, so absolutely certain about something that I can say "I Know" is something much more. There is an emotional feel about it, but it is more than that. I can't describe such a state in ordinary words. There are no words to describe the difference, just some form of "knowing" that is different to gaining knowledge from an ordinary outside source, from a rational logical conclusion, or a general belief. This "knowing" is something else!!! Once you have experienced such a situation you can no longer say the word, "Know", without comparing the difference.

We do everything by comparison. It is the way our brains work. But this comparison between knowing and "knowing" is something concrete, but cannot be explained in the normal way. Some day science/mathematics may have an answer, but it will never be the same as personally, "knowing" the truth about something.

Knowledge is truth, but it is also more than understanding alone, more than being able to action things in a practical way; all of this, I "know", makes up "wisdom", a word that has become despised in this day and age.

To know is to have wisdom.

whitenightf3
Posts: 49
Joined: Wed Sep 10, 2008 4:30 am

Re: How Do We Know What We Know?

Unread post by whitenightf3 » Fri Sep 12, 2008 12:34 pm

The only way to know anything is through experience and in that sense I am an Empiricist :)
No matter how knowledgeable you become about a subject, you will always come to a point whereby you realize you Know Nothing. Life is a Mystery and it is the Mystery that inspires the AWE!
