
Tag Archives: Philosophy

Sam Harris is a neuroscientist and prominent “new atheist,” who along with others like Richard Dawkins, Daniel Dennett and Christopher Hitchens helped put criticism of religion at the forefront of public debate in recent years. In two previous books, “The End of Faith” and “Letter to a Christian Nation,” Harris argued that theistic religion has no place in a world of science. In his latest book, “Waking Up,” his thought takes a new direction. While still rejecting theism, Harris nonetheless makes a case for the value of “spirituality,” which he bases on his experiences in meditation. I interviewed him recently about the book and some of the arguments he makes in it.

Gary Gutting: A common basis for atheism is naturalism — the view that only science can give a reliable account of what’s in the world. But in “Waking Up” you say that consciousness resists scientific description, which seems to imply that it’s a reality beyond the grasp of science. Have you moved away from an atheistic view?

Sam Harris: I don’t actually argue that consciousness is “a reality” beyond the grasp of science. I just think that it is conceptually irreducible — that is, I don’t think we can fully understand it in terms of unconscious information processing. Consciousness is “subjective”— not in the pejorative sense of being unscientific, biased or merely personal, but in the sense that it is intrinsically first-person, experiential and qualitative.

The only thing in this universe that suggests the reality of consciousness is consciousness itself. Many philosophers have made this argument in one way or another — Thomas Nagel, John Searle, David Chalmers. And while I don’t agree with everything they say about consciousness, I agree with them on this point.

The primary approach to understanding consciousness in neuroscience entails correlating changes in its contents with changes in the brain. But no matter how reliable these correlations become, they won’t allow us to drop the first-person side of the equation. The experiential character of consciousness is part of the very reality we are studying. Consequently, I think science needs to be extended to include a disciplined approach to introspection.

G.G.: But science aims at objective truth, which has to be verifiable: open to confirmation by other people. In what sense do you think first-person descriptions of subjective experience can be scientific?

S.H.: In a very strong sense. The only difference between claims about first-person experience and claims about the physical world is that the latter are easier for others to verify. That is an important distinction in practical terms — it’s easier to study rocks than to study moods — but it isn’t a difference that marks a boundary between science and non-science. Nothing, in principle, prevents a solitary genius on a desert island from doing groundbreaking science. Confirmation by others is not what puts the “truth” in a truth claim. And nothing prevents us from making objective claims about subjective experience.

Are you thinking about Margaret Thatcher right now? Well, now you are. Were you thinking about her exactly six minutes ago? Probably not. There are answers to questions of this kind, whether or not anyone is in a position to verify them.

And certain truths about the nature of our minds are well worth knowing. For instance, the anger you felt yesterday, or a year ago, isn’t here anymore, and if it arises in the next moment, based on your thinking about the past, it will quickly pass away when you are no longer thinking about it. This is a profoundly important truth about the mind — and it can be absolutely liberating to understand it deeply. If you do understand it deeply — that is, if you are able to pay clear attention to the arising and passing away of anger, rather than merely think about why you have every right to be angry — it becomes impossible to stay angry for more than a few moments at a time. Again, this is an objective claim about the character of subjective experience. And I invite our readers to test it in the laboratory of their own minds.

G.G.: Of course, we all have some access to what other people are thinking or feeling. But that access is through probable inference and so lacks the special authority of first-person descriptions. Suppose I told you that in fact I didn’t think of Margaret Thatcher when I read your comment, because I misread your text as referring to Becky Thatcher in “The Adventures of Tom Sawyer”? If that’s true, I have evidence for it that you can’t have. There are some features of consciousness that we will agree on. But when our first-person accounts differ, then there’s no way to resolve the disagreement by looking at one another’s evidence. That’s very different from the way things are in science.

S.H.: This difference doesn’t run very deep. People can be mistaken about the world and about the experiences of others — and they can even be mistaken about the character of their own experience. But these forms of confusion aren’t fundamentally different. Whatever we study, we are obliged to take subjective reports seriously, all the while knowing that they are sometimes false or incomplete.

For instance, consider an emotion like fear. We now have many physiological markers for fear that we consider quite reliable, from increased activity in the amygdala and spikes in blood cortisol to peripheral physiological changes like sweating palms. However, just imagine what would happen if people started showing up in the lab complaining of feeling intense fear without showing any of these signs — and they claimed to feel suddenly quite calm when their amygdalae lit up on fMRI, their cortisol spiked, and their skin conductance increased. We would no longer consider these objective measures of fear to be valid. So everything still depends on people telling us how they feel and our (usually) believing them.

However, it is true that people can be very poor judges of their inner experience. That is why I think disciplined training in a technique like “mindfulness,” apart from its personal benefits, can be scientifically important.

G.G.: You deny the existence of the self, understood as “an inner subject thinking our thoughts and experiencing our experiences.” You say, further, that the experience of meditation (as practiced, for example, in Buddhism) shows that there is no self. But you also admit that we all “feel like an internal self at almost every waking moment.” Why should a relatively rare — and deliberately cultivated — experience of no-self trump this almost constant feeling of a self?

S.H.: Because what does not survive scrutiny cannot be real. Perhaps you can see the same effect in this perceptual illusion:
[Figure: a perceptual illusion in which four partial black circles imply a white square at the center.]
It certainly looks like there is a white square in the center of this figure, but when we study the image, it becomes clear that there are only four partial circles. The square has been imposed by our visual system, whose edge detectors have been fooled. Can we know that the black shapes are more real than the white one? Yes, because the square doesn’t survive our efforts to locate it — its edges literally disappear. A little investigation and we see that its form has been merely implied.

What could we say to a skeptic who insisted that the white square is just as real as the three-quarter circles and that its disappearance is nothing more than, as you say, “a relatively rare — and deliberately cultivated — experience”? All we could do is urge him to look more closely.

The same is true about the conventional sense of self — the feeling of being a subject inside your head, a locus of consciousness behind your eyes, a thinker in addition to the flow of thoughts. This form of subjectivity does not survive scrutiny. If you really look for what you are calling “I,” this feeling will disappear. In fact, it is easier to experience consciousness without the feeling of self than it is to banish the white square in the above image.

G.G.: But it seems to depend on who’s looking. Buddhist schools of philosophy say there is no self, and Buddhist meditators claim that their experiences confirm this. But Hindu schools of philosophy say there is a self, a subject of experience, disagreeing only about its exact nature; and Hindu meditators claim that their experiences confirm this. Why prefer the Buddhist experiences to the Hindu experiences? Similarly, in Western philosophy we have the phenomenological method, an elaborate technique for rigorously describing consciousness. Some phenomenologists find a self and others don’t. With so much disagreement, it’s hard to see how your claim that there’s really no self can be scientifically established.

S.H.: Well, I would challenge your interpretation of the Indian literature. The difference between the claims of Hindu yogis and those of Buddhist meditators largely boils down to differences in terminology. Buddhists tend to emphasize what the mind isn’t — using words like selfless, unborn, unconditioned, empty, and so forth. Hindus tend to describe the experience of self-transcendence in positive terms — using terms such as bliss, wisdom, being, and even “capital-S” Self. However, in a tradition like Advaita Vedanta, they are definitely talking about cutting through the illusion of the self.

The basic claim, common to both traditions, is that we spend our lives lost in thought. The feeling that we call “I”— the sense of being a subject inside the body — is what it feels like to be thinking without knowing that you are thinking. The moment that you truly break the spell of thought, you can notice what consciousness is like between thoughts — that is, prior to the arising of the next one. And consciousness does not feel like a self. It does not feel like “I.” In fact, the feeling of being a self is just another appearance in consciousness (how else could you feel it?).

There are glimmers of this insight in the Western philosophical tradition, as you point out. But the West has never had a truly rigorous approach to introspection. The only analog to a Tibetan or Indian yogi sitting for years in a cave contemplating the nature of consciousness has been a Christian monastic exerting a similar effort praying to Jesus. There is a wide literature on Christian mysticism, of course. But it is irretrievably dualistic and faith-based. Along with Jews and Muslims, Christians are committed to the belief that the self (soul) really exists as a separate entity and that the path forward is to worship a really existing God. Granted, Buddhism and Hinduism have very crowded pantheons, and a fair number of spooky and unsupportable doctrines, but the core insight into the illusoriness of the self can be found there in a way that it can’t in the Abrahamic tradition. And cutting through this illusion does not require faith in anything.

G.G.: Suppose we agree that “spiritual experiences” can yield truths about reality and specifically accept the truth of Buddhist experiences of no-self. Many Christians claim to have direct experiences of the presence of God — not visions or apparitions but a strong sense of contact with a good and powerful being. Why accept the Buddhist experiences and reject the Christian experiences?

S.H.: There is a big difference between making claims about the mind and making claims about the cosmos. Every religion (including Buddhism) uses first-person experience to do both of these things, but the latter pretensions to knowledge are almost always unwarranted. There is nothing that you can experience in the darkness of your closed eyes that will help you understand the Big Bang or the connection between consciousness and the physical world. Look within, and you will find no evidence that you even have a brain, much less gain any insight into how it works.

However, one can discover specific truths about the nature of consciousness through a practice like meditation. Religious people are always entitled to claim that certain experiences are possible — feelings of bliss or selfless love, for instance. But Christians, Hindus and atheists have experienced the same states of consciousness. So what do these experiences prove? They certainly don’t support claims about the unique divinity of Christ or about the existence of the monkey god Hanuman. Nor do they demonstrate the divine origin of certain books. These reports only suggest that certain rare and wonderful experiences are possible. But this is all we need to take “spirituality” (the unavoidable term for this project of self-transcendence) seriously. To understand what is actually going on — in the mind and in the world — we need to talk about these experiences in the context of science.

G.G.: I’m not talking about highly specific experiences of Christ or of a monkey god. I mean simply a sense of a good and powerful spiritual reality—no more, no less. Why accept Buddhist experiences but not experiences like that?

S.H.: I wouldn’t place the boundary between religious traditions quite where you do — because Buddhists also make claims about invisible entities, spiritual energies, other planes of existence and so forth. However, claims of this kind are generally suspect because they are based on experiences that are open to rival interpretations. We know, for instance, that people can be led to feel an unseen presence simply by having specific regions of their brains stimulated in the lab. And those who suffer from epilepsy, especially in the temporal lobe, have all kinds of visionary experiences.

Again, the crucial distinction is between making claims about reality at large and making claims about possible states of consciousness. The former is the province of religious belief and science (though science has standards of intellectual honesty, logical coherence and empirical rigor that constrain it, while religion has almost none). In “Waking Up,” I argue that spirituality need not rest on any faith-based assumptions about what exists outside of our own experience. And it arises from the same spirit of honest inquiry that motivates science itself.

Consciousness exists (whatever its relationship to the physical world happens to be), and it is the experiential basis of both the examined and the unexamined life. If you turn consciousness upon itself in this moment, you will discover that your mind tends to wander into thought. If you look closely at thoughts themselves, you will notice that they continually arise and pass away. If you look for the thinker of these thoughts, you will not find one. And the sense that you have — “What the hell is Harris talking about? I’m the thinker!”— is just another thought, arising in consciousness.

If you repeatedly turn consciousness upon itself in this way, you will discover that the feeling of being a self disappears. There is nothing Buddhist about such inquiry, and nothing need be believed on insufficient evidence to pursue it. One need only accept the following premise: If you want to know what your mind is really like, it makes sense to pay close attention to it.

Gary Gutting is a professor of philosophy at the University of Notre Dame, and an editor of Notre Dame Philosophical Reviews. He is the author of, most recently, “Thinking the Impossible: French Philosophy since 1960,” and is a regular contributor to The Stone.


Jared Diamond, University of California at Los Angeles Medical School
To science we owe dramatic changes in our smug self-image. Astronomy taught us that our earth isn’t the center of the universe but merely one of billions of heavenly bodies. From biology we learned that we weren’t specially created by God but evolved along with millions of other species. Now archaeology is demolishing another sacred belief: that human history over the past million years has been a long tale of progress. In particular, recent discoveries suggest that the adoption of agriculture, supposedly our most decisive step toward a better life, was in many ways a catastrophe from which we have never recovered. With agriculture came the gross social and sexual inequality, the disease and despotism, that curse our existence.

At first, the evidence against this revisionist interpretation will strike twentieth century Americans as irrefutable. We’re better off in almost every respect than people of the Middle Ages, who in turn had it easier than cavemen, who in turn were better off than apes. Just count our advantages. We enjoy the most abundant and varied foods, the best tools and material goods, some of the longest and healthiest lives, in history. Most of us are safe from starvation and predators. We get our energy from oil and machines, not from our sweat. What neo-Luddite among us would trade his life for that of a medieval peasant, a caveman, or an ape?

For most of our history we supported ourselves by hunting and gathering: we hunted wild animals and foraged for wild plants. It’s a life that philosophers have traditionally regarded as nasty, brutish, and short. Since no food is grown and little is stored, there is (in this view) no respite from the struggle that starts anew each day to find wild foods and avoid starving. Our escape from this misery was facilitated only 10,000 years ago, when in different parts of the world people began to domesticate plants and animals. The agricultural revolution spread until today it’s nearly universal and few tribes of hunter-gatherers survive.

From the progressivist perspective on which I was brought up, to ask “Why did almost all our hunter-gatherer ancestors adopt agriculture?” is silly. Of course they adopted it because agriculture is an efficient way to get more food for less work. Planted crops yield far more tons per acre than roots and berries. Just imagine a band of savages, exhausted from searching for nuts or chasing wild animals, suddenly grazing for the first time at a fruit-laden orchard or a pasture full of sheep. How many milliseconds do you think it would take them to appreciate the advantages of agriculture?

The progressivist party line sometimes even goes so far as to credit agriculture with the remarkable flowering of art that has taken place over the past few thousand years. Since crops can be stored, and since it takes less time to pick food from a garden than to find it in the wild, agriculture gave us free time that hunter-gatherers never had. Thus it was agriculture that enabled us to build the Parthenon and compose the B-minor Mass.

While the case for the progressivist view seems overwhelming, it’s hard to prove. How do you show that the lives of people 10,000 years ago got better when they abandoned hunting and gathering for farming? Until recently, archaeologists had to resort to indirect tests, whose results (surprisingly) failed to support the progressivist view. Here’s one example of an indirect test: Are twentieth century hunter-gatherers really worse off than farmers? Scattered throughout the world, several dozen groups of so-called primitive people, like the Kalahari bushmen, continue to support themselves that way. It turns out that these people have plenty of leisure time, sleep a good deal, and work less hard than their farming neighbors. For instance, the average time devoted each week to obtaining food is only 12 to 19 hours for one group of Bushmen, 14 hours or less for the Hadza nomads of Tanzania. One Bushman, when asked why he hadn’t emulated neighboring tribes by adopting agriculture, replied, “Why should we, when there are so many mongongo nuts in the world?”

While farmers concentrate on high-carbohydrate crops like rice and potatoes, the mix of wild plants and animals in the diets of surviving hunter-gatherers provides more protein and a better balance of other nutrients. In one study, the Bushmen’s average daily food intake (during a month when food was plentiful) was 2,140 calories and 93 grams of protein, considerably greater than the recommended daily allowance for people of their size. It’s almost inconceivable that Bushmen, who eat 75 or so wild plants, could die of starvation the way hundreds of thousands of Irish farmers and their families did during the potato famine of the 1840s.

So the lives of at least the surviving hunter-gatherers aren’t nasty and brutish, even though farmers have pushed them into some of the world’s worst real estate. But modern hunter-gatherer societies that have rubbed shoulders with farming societies for thousands of years don’t tell us about conditions before the agricultural revolution. The progressivist view is really making a claim about the distant past: that the lives of primitive people improved when they switched from gathering to farming. Archaeologists can date that switch by distinguishing remains of wild plants and animals from those of domesticated ones in prehistoric garbage dumps.

How can one deduce the health of the prehistoric garbage makers, and thereby directly test the progressivist view? That question has become answerable only in recent years, in part through the newly emerging techniques of paleopathology, the study of signs of disease in the remains of ancient peoples.

In some lucky situations, the paleopathologist has almost as much material to study as a pathologist today. For example, archaeologists in the Chilean deserts found well preserved mummies whose medical conditions at time of death could be determined by autopsy (Discover, October). And feces of long-dead Indians who lived in dry caves in Nevada remain sufficiently well preserved to be examined for hookworm and other parasites.

Usually the only human remains available for study are skeletons, but they permit a surprising number of deductions. To begin with, a skeleton reveals its owner’s sex, weight, and approximate age. In the few cases where there are many skeletons, one can construct mortality tables like the ones life insurance companies use to calculate expected life span and risk of death at any given age. Paleopathologists can also calculate growth rates by measuring bones of people of different ages, examine teeth for enamel defects (signs of childhood malnutrition), and recognize scars left on bones by anemia, tuberculosis, leprosy, and other diseases.

One straightforward example of what paleopathologists have learned from skeletons concerns historical changes in height. Skeletons from Greece and Turkey show that the average height of hunter-gatherers toward the end of the ice ages was a generous 5′ 9” for men, 5′ 5” for women. With the adoption of agriculture, height crashed, and by 3000 B. C. had reached a low of only 5′ 3” for men, 5′ for women. By classical times heights were very slowly on the rise again, but modern Greeks and Turks have still not regained the average height of their distant ancestors.

Another example of paleopathology at work is the study of Indian skeletons from burial mounds in the Illinois and Ohio river valleys. At Dickson Mounds, located near the confluence of the Spoon and Illinois rivers, archaeologists have excavated some 800 skeletons that paint a picture of the health changes that occurred when a hunter-gatherer culture gave way to intensive maize farming around A. D. 1150. Studies by George Armelagos and his colleagues then at the University of Massachusetts show these early farmers paid a price for their new-found livelihood. Compared to the hunter-gatherers who preceded them, the farmers had a nearly 50 per cent increase in enamel defects indicative of malnutrition, a fourfold increase in iron-deficiency anemia (evidenced by a bone condition called porotic hyperostosis), a threefold rise in bone lesions reflecting infectious disease in general, and an increase in degenerative conditions of the spine, probably reflecting a lot of hard physical labor. “Life expectancy at birth in the pre-agricultural community was about twenty-six years,” says Armelagos, “but in the post-agricultural community it was nineteen years. So these episodes of nutritional stress and infectious disease were seriously affecting their ability to survive.”

The evidence suggests that the Indians at Dickson Mounds, like many other primitive peoples, took up farming not by choice but from necessity in order to feed their constantly growing numbers. “I don’t think most hunter-gatherers farmed until they had to, and when they switched to farming they traded quality for quantity,” says Mark Cohen of the State University of New York at Plattsburgh, co-editor, with Armelagos, of one of the seminal books in the field, Paleopathology at the Origins of Agriculture. “When I first started making that argument ten years ago, not many people agreed with me. Now it’s become a respectable, albeit controversial, side of the debate.”

There are at least three sets of reasons to explain the findings that agriculture was bad for health. First, hunter-gatherers enjoyed a varied diet, while early farmers obtained most of their food from one or a few starchy crops. The farmers gained cheap calories at the cost of poor nutrition. (Today just three high-carbohydrate plants — wheat, rice, and corn — provide the bulk of the calories consumed by the human species, yet each one is deficient in certain vitamins or amino acids essential to life.) Second, because of dependence on a limited number of crops, farmers ran the risk of starvation if one crop failed. Finally, the mere fact that agriculture encouraged people to clump together in crowded societies, many of which then carried on trade with other crowded societies, led to the spread of parasites and infectious disease. (Some archaeologists think it was the crowding, rather than agriculture, that promoted disease, but this is a chicken-and-egg argument, because crowding encourages agriculture and vice versa.) Epidemics couldn’t take hold when populations were scattered in small bands that constantly shifted camp. Tuberculosis and diarrheal disease had to await the rise of farming, measles and bubonic plague the appearance of large cities.

Besides malnutrition, starvation, and epidemic diseases, farming helped bring another curse upon humanity: deep class divisions. Hunter-gatherers have little or no stored food, and no concentrated food sources, like an orchard or a herd of cows: they live off the wild plants and animals they obtain each day. Therefore, there can be no kings, no class of social parasites who grow fat on food seized from others. Only in a farming population could a healthy, non-producing elite set itself above the disease-ridden masses. Skeletons from Greek tombs at Mycenae c. 1500 B. C. suggest that royals enjoyed a better diet than commoners, since the royal skeletons were two or three inches taller and had better teeth (on the average, one instead of six cavities or missing teeth). Among Chilean mummies from c. A. D. 1000, the elite were distinguished not only by ornaments and gold hair clips but also by a fourfold lower rate of bone lesions caused by disease.

Similar contrasts in nutrition and health persist on a global scale today. To people in rich countries like the U. S., it sounds ridiculous to extol the virtues of hunting and gathering. But Americans are an elite, dependent on oil and minerals that must often be imported from countries with poorer health and nutrition. If one could choose between being a peasant farmer in Ethiopia or a bushman gatherer in the Kalahari, which do you think would be the better choice?

Farming may have encouraged inequality between the sexes, as well. Freed from the need to transport their babies during a nomadic existence, and under pressure to produce more hands to till the fields, farming women tended to have more frequent pregnancies than their hunter-gatherer counterparts — with consequent drains on their health. Among the Chilean mummies, for example, more women than men had bone lesions from infectious disease.

Women in agricultural societies were sometimes made beasts of burden. In New Guinea farming communities today I often see women staggering under loads of vegetables and firewood while the men walk empty-handed. Once while on a field trip there studying birds, I offered to pay some villagers to carry supplies from an airstrip to my mountain camp. The heaviest item was a 110-pound bag of rice, which I lashed to a pole and assigned to a team of four men to shoulder together. When I eventually caught up with the villagers, the men were carrying light loads, while one small woman weighing less than the bag of rice was bent under it, supporting its weight by a cord across her temples.

As for the claim that agriculture encouraged the flowering of art by providing us with leisure time, modern hunter-gatherers have at least as much free time as do farmers. The whole emphasis on leisure time as a critical factor seems to me misguided. Gorillas have had ample free time to build their own Parthenon, had they wanted to. While post-agricultural technological advances did make new art forms possible and preservation of art easier, great paintings and sculptures were already being produced by hunter-gatherers 15,000 years ago, and were still being produced as recently as the last century by such hunter-gatherers as some Eskimos and the Indians of the Pacific Northwest.

Thus with the advent of agriculture an elite became better off, but most people became worse off. Instead of swallowing the progressivist party line that we chose agriculture because it was good for us, we must ask how we got trapped by it despite its pitfalls.

One answer boils down to the adage “Might makes right.” Farming could support many more people than hunting, albeit with a poorer quality of life. (Population densities of hunter-gatherers are rarely over one person per ten square miles, while farmers average 100 times that.) Partly, this is because a field planted entirely in edible crops lets one feed far more mouths than a forest with scattered edible plants. Partly, too, it’s because nomadic hunter-gatherers have to keep their children spaced at four-year intervals by infanticide and other means, since a mother must carry her toddler until it’s old enough to keep up with the adults. Because farm women don’t have that burden, they can and often do bear a child every two years.

As population densities of hunter-gatherers slowly rose at the end of the ice ages, bands had to choose between feeding more mouths by taking the first steps toward agriculture, or else finding ways to limit growth. Some bands chose the former solution, unable to anticipate the evils of farming, and seduced by the transient abundance they enjoyed until population growth caught up with increased food production. Such bands outbred and then drove off or killed the bands that chose to remain hunter-gatherers, because a hundred malnourished farmers can still outfight one healthy hunter. It’s not that hunter-gatherers abandoned their life style, but that those sensible enough not to abandon it were forced out of all areas except the ones farmers didn’t want.

At this point it’s instructive to recall the common complaint that archaeology is a luxury, concerned with the remote past, and offering no lessons for the present. Archaeologists studying the rise of farming have reconstructed a crucial stage at which we made the worst mistake in human history. Forced to choose between limiting population or trying to increase food production, we chose the latter and ended up with starvation, warfare, and tyranny.

Hunter-gatherers practiced the most successful and longest-lasting life style in human history. In contrast, we’re still struggling with the mess into which agriculture has tumbled us, and it’s unclear whether we can solve it. Suppose that an archaeologist who had visited from outer space were trying to explain human history to his fellow spacelings. He might illustrate the results of his digs by a 24-hour clock on which one hour represents 100,000 years of real past time. If the history of the human race began at midnight, then we would now be almost at the end of our first day. We lived as hunter-gatherers for nearly the whole of that day, from midnight through dawn, noon, and sunset. Finally, at 11:54 p. m. we adopted agriculture. As our second midnight approaches, will the plight of famine-stricken peasants gradually spread to engulf us all? Or will we somehow achieve those seductive blessings that we imagine behind agriculture’s glittering facade, and that have so far eluded us?

Much of Western religious and philosophical endeavor discounts personal experience as unreliable and has thus focused on discovering the truth that lies behind appearances. Buddhist thought, by contrast, has been distrustful of the idea of objective truth and has been more concerned with investigating the process of experience itself. These observations have led to the insight, consistent with recent postmodern approaches to many subjects, that meaning is something created rather than discovered.

–Andrew Olendzki