The Dan Schneider Interview 4: Steven Pinker (first posted 8/25/07)
Dan Schneider's review of Steven Pinker's latest book, The Stuff Of Thought
As I embark on a fourth DSI sojourn with one of today’s leading writers and
thinkers, Steven Pinker- noted cognitive psychologist at Harvard University, I
want to first thank you for agreeing to be queried. Since several other
interviews have occurred in this series, you know that I have striven to make
these not just interviews of the moment, but pieces that retain much, if not
all, of their relevancy should someone read this years later, online,
in a book, or in some yet to be written biography of you. Of course, we will
touch upon your latest work, but I am more interested in the Pinkerian take on
things, not only in your fields of expertise, but on matters a lay reader, who
may have but a vague notion of who you are, would be surprised to hear you opine
on. One of the reasons I think you are one of the leading writers in the
sciences is because you have an excellent and lucid style of writing. You
analogize well, distilling arcane points into explicable nuggets, and you are a
good raconteur. That being stated, let me first allow the newcomer to your work
a chance to read, from the horse’s mouth, who Steven Pinker is, what he does,
and some of the major accomplishments you’ve made, as well as some of the
goals you seek to achieve in your various fields.
SP: Thanks, Dan. I was born in 1954 in the Jewish Anglophone community of Montreal. After getting a bachelor’s in experimental psychology at McGill, I’ve spent most of my life bouncing between Harvard and MIT, with a few intervals in California (Stanford and Santa Barbara). My initial research was in visual cognition – mental imagery, shape recognition, visual attention. But starting in graduate school I cultivated an interest in language, particularly language development in children, which eventually took over my research. I’ve written many experimental papers in language and visual cognition, and, in the 1980s, two highly technical books on language. The first outlined a theory of how children acquire the words and grammar of their mother tongue. The second focused on one aspect of this process, the ability to use different verbs in appropriate sentences, such as intransitive verbs, transitive verbs, and verbs taking various combinations of complements and indirect objects.
After that book, I spent the next fifteen years or so on the distinction between irregular verbs, like bring-brought, and regular verbs like walk-walked. The reason I obsessed over this seemingly small topic is that the two kinds of verbs neatly embody the two processes that make language possible: looking up words in memory, and combining words (or parts of words) according to rules. Among the papers I wrote during this project was a monograph that analyzed 20,000 past-tense forms in children’s speech, concentrating on errors like bringed and holded that reveal children’s linguistic creativity at work.
In 1994 I published the first of five books written for a general audience. The Language Instinct was an introduction to everything you always wanted to know about language, held together by the idea that language is a biological adaptation. This was followed in 1997 by How the Mind Works, which offered a similar synthesis of the rest of the mind—from vision and reasoning to the emotions, humor, and the arts. In 1999 I published Words and Rules: The Ingredients of Language, which presented my thoughts on regular and irregular verbs as a way of explaining how language works in general. In 2002 I published The Blank Slate, which explored the political, moral, and emotional colorings of the concept of human nature. My new book, The Stuff of Thought, is about language as a window into human nature: what tense reveals about the human concept of time, what verbs reveal about causality, what prepositions reveal about our sense of space, what swearing shows about emotion, what innuendo and euphemism show about social relationships.
I also write for the press on various topics relating to language and human nature – my most recent articles have been on the psychology of kinship, the historical decline of violence, and the use of metaphor in politics.
Before we wade into the morass of language and the mind, my wife Jessica once
had to memorize this textbook definition of language: Language is a
purely human and non-instinctive method of communicating ideas, emotions, and
desires by means of a system of voluntarily produced symbols. My query-
is it still applicable today? If not, what more can be added to that definition?
SP: As the author of a book called The Language Instinct, I’d certainly disagree with the “non-instinctive” part. The definition-writer is correct to note that human language, unlike most forms of animal communication, is voluntarily produced (in physiological terms, it is under the control of the cerebral cortex rather than the limbic system), and that the content of the linguistic signals (words, their meanings, and the constructions in which they are assembled) has to be acquired to a much greater extent. But the very fact that language is “purely human” suggests that we humans have to be equipped with an instinct to acquire and use language, since all neurologically normal kids in a normal social environment acquire (and in some cases invent) language, without needing formal lessons.
I first encountered you and your work over a dozen years ago, on the
excellent (albeit now sadly forgotten) PBS tv show Thinking Allowed,
hosted by Jeffrey Mishlove. I actually had the chance to interview Mishlove a
few years ago, on my now defunct Internet radio show Omniversica,
and enjoyed his willingness to go beyond materialism, even if I disagreed with
some of his ideas on the preternatural. That’s an area I will touch on in a
bit, but what struck me most about that three-part series he did with you was
how good a communicator you were- in both the Carl Sagan/Albert Einstein media
savvy way (you and your famous locks) as well as an ability to break down the
most ineffable concepts into morsels the laity can understand. First off, were
you always a Great Communicator (pardons to President Reagan), or was it a
developed skill, needed for your mission? And, though I will delve more deeply
into this in a bit, as well, that ability to communicate also includes a knack
for the written word. Again, natural, or acquired?
SP: That’s very kind. I’ve always enjoyed the challenge of conveying difficult
concepts without dumbing them down. I’ve long been a teacher (I put myself
through college as a math tutor and Jewish Sunday-school teacher), and have long
paid attention to the mechanics of pedagogy – especially the use of language,
visuals, and analogies to get ideas across. Not only has my research touched on
all of these mechanisms (I once was involved in a project on the perception of
graphs, for instance), but I read
style manuals for fun, and like to analyze sentences I admire in other
people’s prose to figure out why they work. I have no way of knowing whether I
inherited any talent at communication -- my parents and siblings are highly
articulate, but of course I grew up with them, so we don’t have an
unconfounded nature-nurture comparison there. But whatever talents I did happen
to be born with, I certainly cultivated, and continue to cultivate.
Many scientists simply do not have a way with words. Einstein’s essays, as
example, are neither engaging nor well phrased, and even Darwin, who could have
moments of clarity, seemed rarely to find concision an ally. Yet, while I think
the creative arts, including fictive writing and poetry, are in a several
decades long down cycle, I think science writing is in a Golden Age since the
mid-1970s or so. From E.O. Wilson, to the essays of Stephen Jay Gould, to Sagan
to Jared Diamond to Martin Rees and Timothy Ferris to Robert Bakker and Jack
Horner, Daniel Dennett, and a few dozen others, the world of science is
bristling not only with ideas, but people who can clarify and excite the public.
Science books often make best seller lists, yet, if that is so, why are
Americans so ignorant on things like abortion, stem cells, evolution, race,
sexuality, and on and on? Is it the old phenomenon of wanting to have the books
on their shelves, as status symbols, but their never being actually read? I
recall the old scene in Annie Hall, where Woody Allen exasperates over
some boob misinterpreting Marshall McLuhan, and pulls out the man himself to
depants the fool. Do you ever encounter folks like that, who think
they know more about ‘Steven Pinker’ and his ideas than you do?
SP: I agree that we’re living in a golden age of science writing. No such list would be complete without Richard Dawkins, and I’d also add John McWhorter and Geoff Pullum in linguistics, Judith Rich Harris in psychology, Steven Landsberg and Robert Frank in economics, and Robert Wright and Matt Ridley in evolutionary psychology, among others.
As for scientific illiteracy, there is a combination of causes: bad science education, the continuing higher prestige of the humanities over the sciences among American and British elites and universities, the ascendancy of romanticism in American popular culture since the 1960s, and the fact that ideas are sociopolitical and moral identity badges as well as true-or-false propositions. Republican politicians distance themselves from evolution not because they are ignorant of it or misunderstand it (probably both are true, but that’s also true of many who profess a belief in evolution). They distance themselves because they identify evolution with amorality and nihilism.
Why is it that science books, when reviewed, are almost always reviewed
solely for their social or political relevance and their rightness or wrongness
on a given issue, rather than their crafted skill with words?
SP: My impression is that the quality of writing is often briefly commented
upon, but that there is little analysis or criticism of what makes the prose
work or not work.
DS: After all, every science text will be outdated in a few decades, but if the writing is great, it should still be read. I think of the great essayist Loren Eiseley, whose supernal prose is as poetic and cogent today as it ever was, even if some terms are outdated. Have you ever read Eiseley? Who were the scientists whose writing, as well as works, inspired you as a child? I recall the How, Why And Wonder Books, as well as the dinosaur paintings of Charles R. Knight.
SP: I only became aware of Eiseley through your recommendation, and have yet to read his books, but I look forward to doing so. As a child and teenager, I devoured the World Book Encyclopedia, the mail-order Time-Life series of books on science (one arrived at our house every month), and heroic biographies of scientists and inventors. As far as stylists are concerned, I loved George Gamow’s whimsy in One, Two, Three, Infinity, Martin Gardner’s economical prose in his Scientific American “Mathematical Games” feature, and an old textbook on invertebrate biology my mother gave me called Animals Without Backbones by Ralph Buchsbaum (I quoted it in The Language Instinct, and I see that it is still in print). In college I discovered the witty prose of Anglo-American analytic philosophers like A. J. Ayer, Gilbert Ryle, W. V. O. Quine, Nelson Goodman, Hilary Putnam, and Jerry Fodor, and then two great stylists in my own field, George Miller and Roger Brown. Roger was one of my graduate school advisors, and I wrote an obituary for him that called attention to his literary style.
Let’s talk of your latest book, The Stuff Of Thought: Language As A Window
Into Human Nature. I posted the first in-depth
online review of it, and I think it’s perhaps your most accessible book.
When I read it my first thought, re: the book and my using it for this
interview, was ‘Pinker just lobbed a grapefruit into my gearhouse.’ Now,
this is a softball metaphor meaning you just threw me a ball that’s easy to
hit a home run off of. Why do our minds think in such ways? Is it an
evolutionary adaptation? If so, what possible benefit can it have?
SP: In SOT (as well as two previous books) I speculate that analogy and metaphor may
be the gift that allows us to apply cognitive abilities that evolved for concrete pursuits (such as dealing with space, time, force, and matter) to more abstract domains, like science, government, and economics. We use the language of space to talk about abstract variables (e.g., the economy rose, my spirits fell), and we use the language of force to talk about steady states and causation of change (e.g., Amy forced herself to go; The bureaucracy won’t budge). Presumably the language reflects the way we think about these phenomena, at least for some of us some of the time. An open question is: for how many of us, and at which times? That’s something I’m now studying empirically, with a graduate student, James Lee.
Let me focus on your latest book for a while, and ask a few questions based upon
specifics that you raise in that book. The book opens with the notion that 9/11
may be considered one or two events (or three or four). Why does how we view
such an event matter? After all, I recall arguing with people, right after it
happened, that the very image of the falling towers would leave a far longer and
more deeply lasting impression than anything else. This video on YouTube
seems to bear me out. People have long forgotten most details, and it all has
blurred into a gray fog. Why is the old cliché, ‘an image is worth a thousand
words,’ so correct?
SP: One reason that the number of events on 9/11 is significant is that it was the subject of a $3.5 billion lawsuit – in particular, over whether the leaseholder was covered for two “events” or just one. That hilarious-but-sad YouTube clip, in which people could not say which month “9/11” happened in, makes a linguistic point – that over time, transparent expressions, such as “9/11,” congeal into rote-memorized sounds, so people stop hearing the “9” in the “9/11.” Much of language is shaped by this process, as I note in SOT and in Words and Rules.
You also mention the last two Presidents’ infamous parsings of meaning- Bush’s
2003 State Of The Union claim about Iraq seeking uranium, where the word in
question was ‘learned,’ and Clinton’s use or abuse of ‘is.’ Which man
abused language more? From a pragmatic standpoint, however, was not Bush’s
abuse of language worse, since thousands (or up to hundreds of thousands) have
died because of what he said? Clinton’s words merely soiled a dress.
SP: Yes, I agree. Also, as I note in the book, Clinton’s notorious discussion of the meaning of is was linguistically sound, whereas Bush’s use of learn was probably mendacious.
Since the purpose of these interviews is to get away from the formulaic
interviews that proliferate online, where the same queries are asked over and
again, let me ask a few questions you pose rhetorically, on pages 4-5 of The
Stuff Of Thought, so that folk who merely think of you as a brainiac with a
thick mop of hair, might get to know a bit more of Steven Pinker, the man,
citizen, and scientist. I quote: ‘Does stem-cell research destroy a
ball of cells or an incipient human? Is the American military incursion into
Iraq a case of invading a country or liberating a country? Does abortion consist
of ending a pregnancy or killing a child? Are high taxes a way to redistribute
wealth or to confiscate earnings? Is socialized medicine a program to protect
citizens’ health or to expand government power?’ To me, questions 1,
2, and 4 present two reasonable views of the same thing. In case 1, the ball of
cells is an incipient human, but the fact that it’s incipient is what matters,
not its destruction. In case 2 we did both; it’s the aftermath, and lack of
planning for it, where the horrors have arisen. And in case 4, both apply, but I
see no harm in redistributing wealth, since the very structure of our system
allows certain people via their work or ingenuity (but most likely via their
connections) to get rich in the first place. Without the system there’d be no
wealth to distribute or redistribute. In case 3, the former claim is so, because
a fetus is not a child. It is not born yet, and few human cultures have ever
recognized conception days, for a very good reason- since the day can rarely be
pinpointed. The final case is an example of the latter option necessarily
following the former. The real question is whether that is a good thing or not,
not whether it is one or the other. To use another metaphor, ‘are you willing
to step in the shit?’ What are your views?
SP: Your elaborations are incisive, and are consistent with my argument later in the book that our ability to frame an issue in different ways does not imply that political debate reduces to a beauty contest between rival frames, or that it is a matter of “mere words.” People can analyze, question, and evaluate rival framings, as you have done here. But I think I will accept your invitation to avoid stepping into the canine biosolids.
You also deal with the idea of identity, and use the examples of William
Shakespeare and Paul McCartney. In my review of the book I opined, ‘No,
he does not dig into that silly canard over whether Shakespeare was gay or not
because he wrote some sonnets from a feminine perspective, but he asks simply
what do the words William Shakespeare mean? When one talks of The Bard, is one
referring to the man commonly thought of as the Bard- the playwright who used
that name (or its differently spelt variations), those thought to be Shakespeare
(such as Francis Bacon, Edward de Vere, Christopher Marlowe, and a galaxy of
others!), a great writer/poet, the author of Hamlet, or whoever it was wielded
the pen?….He claims that whatever the case, the name is still attached to that
‘guy.’ Well, yes and no. Here’s why, and perhaps this is a point of view
that only an artist or creator (as opposed to its antipodes- a scientist, or
discoverer, like Pinker) could have. The name William Shakespeare not only
refers to a human being, and a deceased one, but also that person’s
works….while it may be true to state that William Shakespeare was a great
writer, the author of Hamlet, or really Edward de Vere, that is only true when
using the past tense. When one states that William Shakespeare is….all of that
is false, except for the fact that Shakespeare is the poems and plays
collectively, for when one asks, ‘Have you ever read Shakespeare?’ they are
not asking if you ever saw the moldering tattoo on the dead man’s thigh that
said, ‘Mother.’’ What is your main assertion about appellations,
and do you agree with my idea that the word ‘Shakespeare’ now subsumes the
works as well as the man?
SP: I do agree, and it’s a subtle and interesting point. The famous argument that
names are “rigid designators” (i.e., apply to the entity in the world
originally dubbed with the name, rather than to whatever satisfies some
definition) does not apply when the name is used metonymically (i.e., to refer
to something associated with the usual referent of the name). The word Shakespeare, when used as a shorthand for “the works of William
Shakespeare,” is not a rigid designator, but has a definition.
As an aside, and mentioning people becoming words, or eponyms- such as gerrymander,
bowdlerize, and boycott, is it the uniqueness of a
name that heightens its chances of incidental immortality? As example, I often
thoroughly respond to folk, in email or on blogs, and do so point by point. I
guess I’d call it ‘schneiderizing’ an argument, but it
sounds silly, as my name is not as ‘odd’ to the ear as the three mentioned.
That, and the term to fisk has become accepted as a point by point
refutation (named after a British journalist, Robert Fisk). Yet, I see that as
almost as silly as using my own name. Perhaps that’s due to my being an
American, and Yankees fan, and recall the way Boston Red Sox catcher Carlton
Fisk willed a Game 6 home run with his arms. To me, that’s true fisking.
What do you see as markers of such aborning eponyms?
SP: What makes a new word stick in the language is something of a mystery, as I discuss at length in the chapter on naming. Sometimes successful new words fill a lexical gap – a concept that people need to express, but lack a word for, such as spam – but not always. We still lack a good word for unmarried heterosexual partners, for example. The fame of the referent probably does play a role, as does the sound of the name. But to be honest, no one really knows why some words stick and others don’t.
You also write, ‘While taboo language is an affront to common
sensibilities, the phenomenon of taboo language is an
affront to common sense. Excretion is an activity that every incarnate being
must engage in daily, yet all the English words for it are indecent, juvenile,
or clinical. The elegant lexicon of Anglo-Saxon monosyllables that give the
English language its rhythmic vigor turns up empty-handed just when it comes to
an activity that no one can avoid. Also conspicuous by its absence is a polite
transitive verb for sex- a word that would fit into the frame Adam
verbed Eve or Eve verbed Adam. The simple
transitive verbs for sexual relations are either obscene or disrespectful, and
the most common ones are among the seven words you can’t say on television.’
Simply put, I think it’s one of the best published assaults on the subject
I’ve read, and my query is, ‘What the fuck is wrong with so many people that
they get so bent out of shape about words regarding such basic issues?’
SP: Thanks, Dan. In part, people get upset about taboo words because they get upset about what the words refer to – we don’t like to smell or step in feces, and most of the time we don’t like to see people urinate or copulate or flatulate, so we don’t like to think about these things by hearing words for them, either. But clearly this is only a small part of the story. In the earlier part of this answer, I did use words for each of these things which were not objectionable, like urinate. So another part of the story is that taboo words are produced with the tacit recognition that they are dysphemistic – that is, used to call attention to the disagreeable nature of the referent, used with the precise intention of offending, or both. So sure enough, listeners are offended. Yet another part of the story is that taboo words vary across time and subculture, so that the same word can have very different effects on different speakers. And part of the story – the one your question taps, I think – is that people differ in terms of how much informality they expect, and how much open discussion of sexuality, religion, and other touchy subjects they are prepared to take part in.
To me, it is an excessive indulgence in, or aversion to, taboo language that is
the odd thing. Especially when such occurs online, at blogs that try to censor
and shape discussions. Do you feel that blogs, and the Internet do anything
more than display the utter stupidity, cowardice, and sciolism of humanity?
Online, people do not like it if one a) has an opinion, b) states it, and c) is
correct. Wrongness is forgivable, being right is not. What is it about the
Internet that fosters such ills- the anonymity?
SP: It’s not clear that it’s a phenomenon specific to the internet – just
listen to AM talk radio.
The book is also suffused in culturata- from the pop- such as Seinfeld,
to the obscure- such as urban legends, and I think you are a master at using
examples of accessible tidbits to illustrate your points, whether or not I may
agree with the point. Do you take voluminous notes and record such things? Have
you many scrapbooks? I am never without pen and paper, and constantly write
notes. Or do you simply have the detritus of trivia floating around your head?
SP: I keep a physical file of clippings and cartoons that may be relevant to a given
book. I also have a directory on my computer disk, a “Favorites” category in
my web bookmarks, and a text file in which I tap in allusions or ideas that
could be helpful some day.
You also flay a philosopher named Jerry Fodor, and his idea that things are
conceptually innate, and stand apart from a relation to other things. I agree
that this is false, although, at the other end of the spectrum, I have argued
with others who believe that all is subjective. To me, there are blacks and
whites with a helluva lot of gray between. Not all gray, not black and white.
You seem to take that view. I state, in reviewing your book, ‘….on a
scientific level, the book does something quite amazing: it bridges the chasm
that many Academics have over language itself. Postmodernists believe
language is a circular self-referential trap, while pragmatists believe it lends
insight into what reality is. Pinker’s book seems to posit that that is a
false dichotomy, not because both claims are false, but because both are
fundamentally true.’ Have I stated what
your view is? If not, clarify and expound, please.
SP: This would require another book, but in brief: I think that a language maps onto
internal representations (in a language of thought) that are not the same as the
language itself (e.g., English). I think that those internal representations get
their meaning both from the relationships among the representations (e.g., the
meaning of my concept of “dog” in part comes from its connection to my
concept “animal”) and from the relationship between the representation and
the world (the meaning of “dog” comes from the fact that when my visual
system is in the presence of a dog, I think the thought “dog”). By the way,
Fodor himself has endorsed these positions (indeed, was responsible for first
articulating them) at various points in his career.
In a stinging display of humor, you write this, in ridiculing Fodor’s belief:
‘Fodor correctly notes that history has often vindicated
unconventional ideas- after all, they laughed at Christopher Columbus and Thomas
Edison. The problem is they laughed at Manny Schwartz, too. What, you’ve never
heard of Manny Schwartz? He was the originator and chief defender of the theory
of Continental Drip: that the southern continents are pointy at the bottom
because they dribbled downward as they cooled from a molten state. The point is
that they were right to laugh at Manny Schwartz.’
You demonstrate the Appeal to Authority fallacy here. Is the use of such a
fallacy usually based on an intellectual or ethical lack? What other fallacies
are your pet peeves?
SP: I debated whether to retain that passage, and decided to keep it because Jerry (a former colleague of mine at MIT, and someone I respect a great deal) is himself an avid practitioner of aggressive humor. If he can dish it out, he can take it. But back to your question. The fallacy here is not really the Appeal to Authority, but the opposite fallacy – the Appeal to the Heretic, namely that if someone is a revolutionary who bucks the establishment consensus, that is sufficient reason to believe his claims.
You also state, of Fodor’s idea, ‘….it’s hard to see how an
innate grasp of carburetors and trombones could have been useful hundreds of
thousands of years before they were invented.’ This got me thinking on
an old idea I had, and one which I’ve heard a few times, as the basis for
possible stories- that is the idea of being born ‘out of time.’ As example,
there are doubtlessly living potential blacksmiths and abacus whizzes whose
talents are meaningless today, just as there were potential astronauts or
computer programmers centuries or eons ago, who never got a chance to display
their skills. If such talents are not immanent, what are they? Is the analogy to
a potential drunkard who never tastes alcohol in his life apt?
SP: I have a gentle anthropologist friend who told me, “Every day I thank God that I was not born a Yanomamö tribesman” (and he is an atheist). It’s a great question. Presumably, to the extent that a society defined specialized niches and freedom of choice, people would have gravitated to professions demanding cognate abilities. The programmer might have been a “computer” in the original sense (a guy paid to do sums), or perhaps a bureaucrat who implemented precise laws, or a Latin teacher. Perhaps one of the tragedies of postindustrial society is that certain talents (e.g., being a superb machinist, or seamstress) no longer have such niches.
In your chapter, Cleaving The Air, you write of how people often
mistake chronology for causality. As example, you cite two potential assassins
who try to kill a man, and use this as an example of the ‘counterfactual
theory.’ Please elucidate.
SP: Actually, the counterfactual theory arose to solve the problem that chronology is not causality. I take some
herbs and my cold sore goes away. Does that prove that the herbs cured the cold
sore? No, to show that you’d have to show that had I not taken the herbs (the
counterfactual scenario), the cold sore would have remained.
The dual-assassin thought-experiment, for its part, was intended to make life
difficult for the counterfactual theory. Specifically: two assassins conspire to
take out a dictator at a public rally, with the first one to get a clear shot
firing whereupon the other melts into the crowd. They end up killing him with
simultaneously fired bullets. But if Assassin A hadn’t fired, the dictator
would still be dead, and ditto for Assassin B. Hence, according to the
counterfactual theory, neither one killed him! But that can’t be right. So the
counterfactual theory has problems, too.
To give this a real world grounding, let’s go to the
JFK Assassination, and the ideas of whether Oswald acted alone, or was part
of a conspiracy. Putting aside the facts, and arguing over them, I believe
that’s a false choice. Oswald could have acted alone, yet there could have
also been a conspiracy. His claims of being a patsy may have been true. Suppose
he told others of his plan, in a fit of macho braggadocio, and then some of the
slimy people he hung around with shadowed him, and had assassins in place,
should Oswald miss. Oswald shoots the ‘magic bullet,’ then there’s a
frontal kill shot by another of the gunmen Oswald was unaware of, and Oswald
panics, flees, kills the cop, and looks guilty as hell. Yes, he planned and shot
at Kennedy. Even hit the President, and Governor Connally. But, technically, he
did not kill JFK. Is he guilty of assassinating the President?
SP: There you have it – a possible real-life example. “Multiple sufficient
causes,” it’s sometimes called.
In a sense, though, such an exercise seems akin to the Presidential parsings you
mention. Also, it reminds me of one of Zeno’s Paradoxes- the one where one can
never move because one would have to get halfway to a place, then a quarter of a
way, then an eighth, and so on. Is counterfactualism merely mental masturbation?
SP: You haven’t watched enough Law and Order – courtroom examples pop up all the time. Can a widow of a smoking asbestos miner sue the tobacco company (who will say the asbestos killed him) or the asbestos company (who will say the smoking killed him)?
In that same chapter you mention force-dynamics and morals (or as I prefer, secular
ethics). Please elucidate. In reading of it, it reminded me of the old
canard about how would your life be affected if the whole population of China
disappeared overnight. I have always answered honestly. I’d be taken aback,
shake my head, then do what I gotta do. Yet, so many others, of a PC mindset,
would pontificate on how upset they were. I see that as hypocrisy. Is that
force-dynamics at work?
SP: Yes, the example comes from Adam Smith. I’m not sure if force-dynamics (the
idea that we conceive of causation as the exertion of force by a potent agonist
against a resistant antagonist) is the best explanation here. It probably has
more to do with the triggers for empathy.
In a similar vein, two other arguments on ethics come to mind. One is that I do
not necessarily value human life over other forms of life, or even non-life. As
an example, a few years ago, a
cat I adored ran away. Last year, another
cat I loved died. I still recall when the first cat was lost, how my best
friend could not comprehend my devastation. ‘It’s just a cat,’ he said.
From his perspective, he likely dismissed my grief as anthropomorphizing. Yet,
it was not. I simply valued a being that gave me nothing but joy and love.
Unlike mankind, cats do not steal, lie, cheat, and wantonly murder. Yet, there
are some people- and not just wacky anti-abortionists, who value the slightest
thing human over all else. What are your views on such?
SP: I am not a vegan, whereas I am opposed to murder and slavery, so I must be at least something of a human chauvinist.
One escape hatch would be to argue that humans, because of our social ties, self-consciousness, and ability to anticipate the future, suffer more acutely from murder and slavery than animals do, and that’s why it’s not as bad to kill an animal as to kill a human. But I think that such an argument is not enough to truly justify meat-eating and leather-wearing.
If it isn’t, then either I’m a horribly immoral person (which is certainly
possible) or the human-animal boundary would have to have some moral status. One
could argue that the boundary is a bright line that, on one side, prevents obvious
horrors like infanticide and involuntary euthanasia of the retarded or demented
(who may have cognitive abilities akin to those of animals), while on the other,
still allowing us to swat flies, comb out lice, and poison rats (and perhaps eat
clams, or fish, or chicken, or beef, depending on how widely you spray-paint the
line). I suspect that this is ultimately not a solvable problem, and that
we’ll muddle through with a compromise: on the one hand, animal life deserves
our moral consideration; on the other, the human-animal divide has a place in
moral deliberation as well.
Then there is the old example of, ‘What if a building was burning, and you
could only save a person or the last extant manuscript of the works of William
Shakespeare (or The Mona Lisa, or some other great work of art). Which
would you save?’ Most people say, the person, and likely mean it. Yet, to me,
I would have to weigh the person and the works. Even a good person is likely to
not have a fraction of the cultural impact of a great work of art, especially
over the centuries. Yes, saving Darwin or Galileo or Picasso or Rembrandt, over
their works, is easy, for they can recapitulate most of that stuff. But saving
Larry MacDougall, of MacDougall’s Plumbing? I’m not gonna lie, Larry would
probably die, because nothing he could ever do would likely be as valuable to
human culture as that great work of art. And it’s not because I devalue a
human life, as much as I truly value human creations over human non-creators.
Does that belief make one a cold, calculating proto-Fascist, a Stalinist
wannabe, an über-sensitive lover of all things, or simply a mature, rational
person?
SP: I think I’ll stay away from that one. For one thing, my plumber might be reading this.
To me, the great art that survives always leaves its audience looking upwards;
it forces understanding on the percipient, whereas bad and pretentious art is
hermetic. In my review of The Stuff Of Thought, I bring out that to play
with words is to inevitably play with ideas, yet few seem to see that. Why?
SP: I’m not clear enough about who isn’t seeing what to answer that.
In the chapter, The Metaphor Metaphor, you write of the inability of most
people to separate themselves from themselves with language. And it put me in
mind of the very notion of so-called ‘stream-of-consciousness’ writing, by
writers like James Joyce, Virginia Woolf, and William Faulkner. I’ve long
found this to be bunkum. Not only does the human mind think metaphorically, but
it thinks punctually. Punctuation is not a mere ad hoc device for
the page, but a representation of the mind’s processes. Thus, most
stream-of-consciousness writing fails and feels patently phony. Do you feel
punctuation is generated, and not generative?
SP: An interesting question. You’re certainly right that stream-of-consciousness prose did not catch on as a compelling literary device. We’re all conscious, and we all want to get inside the heads of other people, and consciousness really does seem like a stream (in William James’ original metaphor), so one might have thought that a flow of unpunctuated words would simulate consciousness and be an appealing way to experience another person’s mind. But as you note, it does seem to have been more of a one-shot experiment – perhaps even a gimmick – rather than an enduringly effective medium. The question is why, and I can only guess.
In the history of punctuation itself, there’s a tension between the use of punctuation to indicate prosody (melody and rhythm in speech sound) and to indicate syntax (and the correlated distinctions in semantics and thought). If the former, then wordless thought may indeed be punctuation-free. If the latter, then thoughts would have natural boundaries – perhaps temporal breaks between mental propositions, or parts thereof – that are indeed akin to punctuation, so omitting them is unnatural.
But many other issues are mixed into your question. Are there different modes of thought, some discrete, others more flowing? Does the discrete nature of language make the rendering of stream-of-consciousness in words inherently difficult, so that the reader has to be aware of what the author is trying to achieve to appreciate his or her efforts – which would thereby be a rarefied artistic accomplishment, rather than an easily accessible simulation?
What of most contemporary art? You have been called retro by some avant-garde
types, because you criticize much modern art. In a New Yorker review,
Louis Menand writes, ‘Pinker thinks that modern art is all ideas
because it is only as ideas that he can experience it. In fact, Ofili’s
painting is not ‘smeared in elephant dung,’ and Serrano’s ‘Piss
Christ’ is not ‘a crucifix in a jar of the artist’s urine.’ It’s a photograph
of a crucifix in a jar of urine, and, technically and formally, a rather
beautiful and evocative piece.’ The problem I have with this sort of
review is that it’s all the criticism of intent. Menand simply does not deal
with your writing, only what he feels you believe, and whether or not it’s
good or bad. This is de rigueur in reviewing. Why is that? After all, when I
read the elegant prose of a Loren
Eiseley it matters not if the science is decades out of date. The writing is
what matters.
SP: As far as modern art is concerned, my intent was not so much to criticize it (I like many of the products of Modernism) as to explain a phenomenon, namely that the elite arts are in trouble. Humanities departments are floundering, the contemporary visual art scene is a travesty, and elite art music has become esoteric and marginalized (in an era in which popular music has exploded in creativity). The point of my chapter was to connect this decline to the denial of human nature among 20th-century intellectuals, critics, and elite artists – in particular, to the claim that beauty is a bourgeois social construction, rather than reflecting properties of our perceptual, emotional, and cognitive faculties.
If I were to write the chapter today I would have drawn finer distinctions. I would have distinguished more sharply between modernism and postmodernism; the latter is far more guilty of the denial of human nature. I would have distinguished between the different forms that modernism took in different genres (music, for example, as opposed to fiction, painting, and architecture). And I would have distinguished between the great original works of modernism, which represented admirable creativity, and the products that arose in its decadent phase, when it became a stultifying dogma.
Still, I stand by my main argument – that critically admired yet popularly despised products of 20th-century elite arts such as atonal music, brutalist architecture, postmodernist lit-crit, and grotesque conceptual art are, at least in part, products of the modern denial of human nature and the separation of the arts and humanities from the sciences.
DS: I never practice the ‘criticism of intent,’ nor do I focus solely on the ideas. If the actual craft of wordplay is bad, who cares if the idea is good? In my review of The Stuff Of Thought, I write, ‘Another thing that makes Pinker’s writing so good is that whether or not one agrees with his view, on a moralistic or logical level, one cannot help but be caught up in its argument, for look how plain and lucidly unfolded his argument is. There is no preening intellectually, nor self-congratulatory backpatting. And, finally, while I mentioned his nice inversion of both the human anatomy and grammar, he also distinguishes between taboo language as the thing itself, and the reasons why we invented and use such language. It is in these sly little *pops* that Pinker shows he not only understands the origins of language, but how to subtly use its often hidden ‘tricks,’ such as recapitulating visuals with linguistic tropes, and also using semi-hidden anaphora to induce an almost mesmeric quality before hitting a reader with an idea. What is anaphora? Read a Walt Whitman poem, where every line begins with the same few words or phrase, or read the Biblical ‘begats.’ Anaphora tends to have a mesmeric effect on a reader, lulling him into a sense of complacency so that the turn on to a new idea, theory, or concept is all the more jarring. In effect, anaphora acts as an amplifier to make the proposition all the more memorable in the reader’s mind, and also likely more receptive to it….That Pinker uses such verbal abracadabra, not only here, but throughout this book and his others, is more proof of what separates him, technically and stylistically, from many of the other excellent writers in science’s current Golden Age. Again, you may disagree with Pinker on any or all levels in regards to his scientific claims, but my assertion of his excellent writing is unassailable.’ Now, aside from the fact that I praise the craft, I actually am dealing with the words you write. 
The criticism of intent disallows that, for it presumes an übertext behind what is written, and only Menand can decode that for potential readers. In effect, he’s reviewing a different book from the one you wrote- in this case, The Blank Slate: The Modern Denial Of Human Nature. Yet, when he writes, ‘In fact, [Chris] Ofili’s painting is not ‘smeared in elephant dung,’ and [Andres] Serrano’s ‘Piss Christ’ is not ‘a crucifix in a jar of the artist’s urine.’ It’s a photograph of a crucifix in a jar of urine, and, technically and formally, a rather beautiful and evocative piece,’ we have no idea if he’s quoting you or Tom Wolfe, from Wolfe’s book The Painted Word, which he says you quote. Aside from being grammatically fuzzy as to who he is referencing, he’s playing a semantic game, just as such shock artists as the two mentioned, or Karen Finley- who has smeared herself in her own feces- do, just to get attention. Ofili’s paintings are, indeed, smeared in elephant dung, as well as with it. I have seen them up close. Thus, Menand is being disingenuous, and trying to impute that disingenuity to you. And while it’s true that Serrano’s work is a photo, not the bottle itself: a) we do not know if he is quoting you or Wolfe, b) the difference between the single metaphysical level of the bottle as a Duchampian work, and the photograph, is slight in comparison to its artistic merit, and c) if the crucifix was replaced by a Twizzler, no one would care. The very selection of the crucifix is the giveaway as to the fact that the ‘art’ is simply an idea, and a prank. Yes, one may admire the shade of the urine color, for aesthetic or other reasons, but it’s clear that Menand has an axe, and is wielding it willy-nilly. Thoughts?
SP: The howl of rage in Menand’s review is largely defensive. Menand knows that his home fields, the humanities and arts, are in bad shape, and that the school of thought he identifies with, postmodernism, deserves much of the blame. But he bristles at an outsider making these points, and refuses to hear out the suggestion that an increasing consilience between the arts and sciences points to a way out of the self-inflicted catastrophe. (Brian Boyd, in his American Scholar essay Getting It All Wrong, offers a brilliant critique of this blind spot in Menand.) Rather than coming to grips with the phenomenon I identified, Menand tried to discredit my competence to write about the topic with various “gotchas,” such as the list of works of shocking art that have made recent headlines. Sure, Piss Christ is literally a photograph of a crucifix in urine, not a crucifix in urine, just as the expression “Warhol’s soup cans” does not literally refer to cans of soup but to paintings of cans of soup. I agree that his pedantic harping on this shorthand is a sign of desperation.
As for the photo itself—well, I know a thing or two about the technical side of photography,
and I can tell you that “technically and formally,” Piss Christ is similar to what you can see from serious amateurs in
any issue of Popular Photography. As
you point out, to discuss this work in technical and formal terms, rather than
as an attempt to shock, is disingenuous – if it were a photo of a Twizzler in
a glass of apple juice, no one would ever have heard of it. And what if it were Piss
Martin Luther King? Or Piss Anne
Frank? Or Piss Mohammed?
In a lighter vein, you wax on about the name Steven in science, and then
discourse on the popularity of naming children. Yet, you only touch on,
tangentially, one of the more bizarrely interesting phenomena in this regard-
the naming habits of black Americans. After all, there are not many LaToyas
running about the veldts after wildebeests. To what extent is this similar to
the phenomenon in the 1970s, where blacks declaimed they were ‘descended from
kings’? Most of these American black names have nothing in common with African
names.
SP: There are two phenomena here. One is the romantic connection with Africa that became a source of cultural motifs in African American culture beginning in the late 1960s (dashikis, Afro hair styles, names like Aisha, and so on). That is similar to the vogue for Celtic names among contemporary Irish-Americans, or the fad for Israeli names among Orthodox Jews in the 1970s and 1980s. Another is a trend in which African Americans have been giving their children creative and euphonious new names – Shaquille, Latrelle, LeBron, LaTonya, Chamique, Semeka, and so on. These seem more faux-French than faux-African, and what they illustrate is that trends in baby naming often revolve around sound patterns rather than meanings. (Examples from mainstream white naming patterns include Jennifer, Jenna, Jessica, Jesse in the 1970s, and Lois, Gladys, Doris, Dolores, Glennis in the 1920s). The popular sounds vary across subcultures. Sometimes they sample from a romantic or nationalistic source, but sometimes they just recombine an endemic pool of favored sounds or sound templates.
Ok, let’s take a step back from The Stuff Of Thought, and speak more
generally. Since language is manifestly a major part of your life’s
work, what do you see as causing the devolution of simple and engaging
conversation? Is it emailese, Postmodernism, Political Correctness, video games,
Madison Avenue, hip hop, etc.? Or, is this a cyclic thing, as I think, and there
will have to come a time when people will want to stop reading novels by a
writer simply for his or her social status- ethnicity, sexual preference,
disability, socioeconomic status- and appreciate the written word for its
ability to move alone, by planting abstractions in the mind’s palette?
SP: I’m wary of interpreting social trends, since so few of them are independently documented. Do we really know that conversation has deteriorated, or do we, like so many generations before us, simply assume that civilization has declined, the younger generation is going to pot, and the good old days are gone? I’m with you in deploring certain phenomena – pretentious Pomo gibberish, identity politics in the arts, and the phobia of articulateness among many college-age people (I remember recently being interviewed by the editor of a student newspaper at a major university and being appalled at her unwillingness to frame a single question as a complete grammatical sentence). As you note, this could be cyclical – I hope so. But others of the phenomena you cite may be neutral or even healthy. People command multiple registers, so I don’t think email diction will change the spoken language any more than the advent of the telegram a century ago caused people to omit articles and prepositions or end every sentence with “STOP.” Words infiltrating our language from other subcultures and dialects have always been a source of its richness – whether from sports, jazz, sailing, technology, or hiphop. Literally thousands of words and idioms we now find indispensable came into the language as slang or jargon from some subculture.
Although it’s not PC to admit, I feel that- as with language acquisition, and the
need to learn it by 6 or 7, lest one end up a wild child like Kaspar Hauser- there
is a similar limit to honing one’s talent with words, and if one has not done
so by 30, that’s it: lights out. Also, that writing talent is immanent. You
cannot learn to be a great writer. You are or are not. Someone without the gift
is doomed to failure. I’ve seen this with literally hundreds of wannabe
writers. I know, when I’m in a groove, it’s like the view from the cyborg in
The Terminator films- I can instantly revise and handle multiple drafts
at once. Lesser writers cannot. Also, I’ve seen, historically, how most
writers tend to peak between the ages of 35 and 50. Do you agree with these
views on writing? Have there been studies on writing and other artistic
abilities that have demonstrated that these can be so?
SP: I’m not an expert on the lifespan development of talents in different fields, but certainly writing is more forgiving of the aging process than, say, mathematics, where people really do accomplish their best work in their twenties and early thirties. Great writers are thought to reach their peak later in life than mathematicians and scientists (though I recently read that even with writers there is a peak; you don’t get better and better as you get older and older, alas).
Presumably success in each field depends on a specific tradeoff between raw brain power – CPU speed, so to speak – which declines with age, and the acquisition of an inventory of elements, motifs, and strategies to recombine, which increases with age (in the case of writers, this would be words, idioms, constructions, and turns of phrase). Different fields require different mixtures of computational power and inventory richness, and so the peaks are found at different ages.
I suspect you’re right that good prose style is partly heritable, since everything is partly heritable. And as a teacher, I clearly see some students who just have a way with words from Day 1, and others who, with all the tutoring in the world, still have to struggle to compose a phrase that isn’t clumsy or opaque. As with everything else, the very best writers cultivate whatever talents they are born with, paying attention to good examples, scrutinizing their own works, constantly trying to improve.
Many artists seem to deny their own creativity, pawning it off on God, or some other
force or demiurge. I call this the Divine
Inspiration Fallacy. There is no Muse. For better or worse, it’s all
me, or you, or any artist. Comments on its existence, origins, verity?
SP: With ourselves, it must be because our own thought processes are mostly unconscious, so we have no access to the true source of our ideas. In viewing other people, we see the product, but not the process, of their thoughts. We don’t see the years of apprenticeship and immersion, the crumpled drafts in the wastebasket, the trains of thought that led nowhere, the penultimate attempt that brought the thinker to the threshold of Eureka! As I note in How the Mind Works, careful, fact-driven accounts of the creative process from historians and biographers tend to be deflationary – geniuses engage in a lot of practice, a lot of play, a lot of failed experiments, and a lot of slow incremental progress, rather than being struck by lightning bolts of inspiration.
I maintain that the creative arts are higher than the performing or interpretive
arts, because you are basically starting with less to work with. In short, an
actor interpreting Shakespeare or O’Neill has it much easier than the two
playwrights did in conjuring the drama. Similarly, I posit that writing and
poetry are the two highest general and specific art forms, for writing is wholly
abstract- black squiggles on white that merely represent and must be decoded,
whereas the visual arts are inbred, and one can instantly be moved by a great
photo or painting, while even the greatest haiku will take five or ten seconds
to read and digest. Poetry is the highest form of writing because, unlike
fiction, it needs no narrative spine to drape its art over- it can be a moment
captured, and wholly abstractly, unlike a photo. Do you agree with these views?
If so, why do you think this is so? I would bet that since language (at least
written) is only a six or so thousand year old phenomenon, while sight has been
around for 600 million years or more, that’s a hell of a head start the visual
arts have over writing.
SP: Too hard a question! I wouldn’t disagree, but am not so sure I could support you either. By the way, I’d date language to 60,000 rather than 6,000 years ago (that’s how old our species is, and every human group ever discovered has complex language, regardless of its degree of technological development). But that’s still consistent with your argument.
Speaking of words, and their practice, where have all the great interviewers
like a Mishlove, Phil Donahue, or Bill Buckley gone? In preparing for this
interview, I read many online transcripts, and watched some video interviews,
and I was underwhelmed by both the queries and any real sense of passion on the
interviewers’ parts. One of the things we’ve tried to do with these
interviews is to avoid the canned sort of responses that most interviews- print or
videotaped- indulge in, yet most people find comfort in hearing the expected. On
a tangential note, a similar claim can be made about clichés providing comfort.
Have you ever studied clichés in your work? I would think that, since they are
defined by their numerical frequency, they would be an easy subject to take up.
SP: I think you’re right about the art of interviewing, especially in the popular media. Even a middlebrow with a know-nothing persona like Johnny Carson used to have many scientists and intellectuals on his program, like Carl Sagan, who was such a regular that Carson’s comic impersonation of him (“billlllions and billllions of stars”) was instantly recognizable. The younger, hipper hosts – Leno, Letterman, O’Brien – restrict their interviews to actresses and comedians. Even the more intellectual-friendly hosts—Jon Stewart and Stephen Colbert—do their interviews with an ironic smirk. Again, if we judge the culture by the level of its middlebrow accomplishments, the lack of any contemporary interview format as extended and in-depth as what one used to read in Playboy (the only reason I looked at the magazine, of course) tells you something.
That having been said, my girlfriend and I recently rented DVDs of old Dick Cavett shows from the 1970s, hoping to indulge our nostalgia for an era in which an urbane, witty, literate man could host a popular talk show. What a disappointment! By today’s standards the show (which stretched over 90 minutes, not today’s 60) dragged interminably, with huge expanses of dead air and conversations that went nowhere. The show clearly came from a more leisurely age, which is not necessarily a good thing. In terms of intellectual stimulation, 90 minutes watching Cavett in the 1970s was far more a waste of time than 90 minutes of surfing the Internet, or even the cable dial, today.
Let’s talk about life as a public intellectual. Obviously, I am engaging
you for what you have put forth into the public arena, and to get new spins,
insights, on certain things, and to elicit heretofore unknown opinions. Yet, ‘intellectual’-
as a noun- has suffered many blows since the 1950s, when the Marxists of
Academia were shamed by their folly in support of Stalinist Russia. There was
some redemption when the Left was right on Vietnam and Civil Rights, but
increasingly, just as the Right has gone off the deep end with their obsessions
on homosexuality, abortion, the rise of Right Wing Agitprop Radio, and viewing
America as the Great Savior of the world (hence the Iraq War), the Left has been
every bit as silly, with New Age Charlatans, the Feminazi rise, the embrace of
censorship under the auspices of PC protection for the innocent, and the
demonization of America as the Great Satan.
While the Right Wing has left its mark on Corporate America and Evangelical
Christians, the Left has a stranglehold on Academia. Let me raise the specter of
two names that I think have done grievous damage to the term ‘intellectual.’
The first is likely the more obvious name- linguist Noam Chomsky- a former
colleague of yours at MIT. I am no expert on his scientific work, but I do know
a bit more than the average layman, and while there’s no denying his
eminence in cognitive theory, why would someone like him basically abandon his
eminence and research in a field he is the Elvis Presley of, and spend decades
playing the buffoon on subjects well outside his purview? Certainly, no one
denies him his right as a citizen to speak out politically, but, as I am a
writer, and would put my chops on, say- poetry, against anyone living or dead, I
would never think that that expertise qualifies me as an expert on Bulgarian
politics or the mating habits of bees in Malaysia. So, why do you think Chomsky,
and others in Academia, feel a need to display their sciolism in so many areas
outside their expertise, and why do people take a Chomsky so seriously when he
has proven to be so wrong on so many political issues- from shilling for the
Khmer Rouge and other Communist Totalitarian states to exculpating terrorism to
seemingly agreeing with some Holocaust Deniers? Granted, he’s been right on
many issues domestically, too, but HE’S A LINGUIST, not an expert on
everything. And it seems to me he has wasted his truest gifts in the field he
helped establish, on things he’s accomplished nothing in, and damaged his own
reputation to boot. By contrast, I’ve never read nor heard you opining on such
things. Are you just a mealy-mouthed wimp, or do you learn from others’
mistakes?
SP: Mealy-mouthed wimp, for sure. Though I share your assessment of Chomsky’s political opinions, I diagnose the situation differently. We’re not seeing a case of dilettantism or ignorance – Chomsky commands vast amounts of knowledge in the political fields he writes about, which is one of the reasons he impresses, indeed intimidates, audiences. Nor could he ever be called intellectually lazy – he applies a powerful intellect to advancing his world view. And as he rightly notes, professional credentials should not be the main criterion in evaluating someone’s arguments.
I would say that the problem with Chomsky is rather that with such a clever mind, such impressive erudition, and such formidable rhetorical skills, he has the power to push an idée fixe arbitrarily far. He can wow sycophants, blow off critics as stupid or evil, explain away embarrassing data, and rationalize mistakes at will. Lesser mortals might be humbled by a critic, or embarrassed by a counterexample, or forced into a reassessment by an unpredicted turn of events.
In Chomsky’s case, as I noted in The Blank Slate, we’re seeing a fundamentally romantic view of human nature, in which people naturally cooperate and create without the need for external incentives, until these faculties are stifled by malign social institutions. We also see an all-encompassing moralistic theory of political and historical causation – that world events can be understood as the intended outcomes of a morally odious agent, namely the United States and its allies. Tragedies, well-meaning blunders, painful tradeoffs, human limitations, least bad options, historic changes in contemporary standards of political conduct—none of these play a role in Chomsky’s causal model. Disciplinary expertise and training are beside the point – when you’re determined to advance an all-encompassing theory, intellectual and scholarly power can work to your ultimate disadvantage in terms of providing an accurate rendering of reality.
I mentioned sciolism, and the Internet, Google, and outlets like Wikipedia, have
led to what I’d term a sciolistic dialectic online. Since Wikipedia
and other outlets are so manifestly flawed (again, why should I be considered
adept enough to comment on Bulgarian politics?), what do you see as a solution to
this detrital mass of misinformation? How can the average layman, who wants to
improve his knowledge of whatever subject, possibly distinguish the good and
trustworthy information from the 99.99% of utter garbage out there?
SP: Thanks for teaching me the useful word sciolism. Wikipedia is flawed, to be sure, but I’m rather impressed by how good it is. It is a surprisingly self-healing system whose notorious errors get corrected in minutes, and it is infinitely more useful than, say, the Encyclopedia Britannica. Whereas Wikipedia embodies a collective, distributed intelligence (of the sort that allows market economies to outperform planned economies, or Linux to be less buggy than Windows), articles in the Britannica reflect the quirks of the single academic who has been charged with writing them. Many of the articles are so parochial and oblivious to the background assumptions of laypeople as to be effectively useless. When I have used it to learn things in technical fields I don’t know about, I often find that I can’t understand a word of the Britannica piece. I used to blame myself, until it dawned on me: if I find a Britannica article too hard to understand, who on Earth is it intended for? Also, entrusting a topic to a credentialed expert often means entrusting it to the oldest scholar around, and the one with the most time on his hands – not a good way to get state-of-the-art knowledge about a field of science! Britannica articles in my own field are often written by embittered graybeards who long ago fell out of touch with the advancing front.
A general principle in cognitive psychology is that “statistical prediction” outperforms “clinical prediction.” That is, a statistical aggregate of a lot of data (even when the formula for aggregating them is fairly primitive) has a better track record than a single “expert.” For example, simple formulas and algorithms do better at diagnosing diseases, investing in stocks, and predicting recidivism than doctors, financial analysts, and parole officers, respectively. It wouldn’t surprise me if, contrary to intuition, a large self-regulating community would converge on higher-quality information than a single credentialed expert.
Back to the idea of ‘intellectual.’ The second name I will drop
aggravates me far more than the political nonsense (or naïveté, to be
generous) of Chomsky, and that is the late New Age charlatan, Joseph Campbell.
Granted, without George Lucas’s mind-numbing Star Wars films, no one
would likely have heard of him. Then there were the interviews and PBS
television series with Bill Moyers- another of the Left’s noxious counterparts
to Rush Limbaugh and Ann Coulter, and his wholesale mangling of basic
mythologies. Even worse were some of his ludicrous notions on the monomyth (Yes,
Joe, humans do tell tales that are similar in structure and content!) and
other obvious literary devices. Kurt Vonnegut once parodied Campbell’s nonsense with his own ‘man in a hole’ trope: the hero gets into trouble, the hero gets out of trouble. What aggravates me more about Campbell is that
while Chomsky is both championed and denounced in colleges, Campbell’s New Age
ideas are accepted with little rebuke, which leads to the dumbing down of
culture by the Oprah Winfreys of the world. PC is now about a quarter-century-old phenomenon. You have been around long enough to see its rise. Do you see an end in sight? And while no one denies that there are good aims in multiculturalism, simply put, there are not enough qualified or excellent voices in most fields to warrant many of the changes in curricula.
Anyone who would suggest that, say, a Nikki Giovanni, should have her
‘poetry’ taught in favor of Percy Shelley, is a fool. And what galls me is
that it is always the worst and most politicized hacks whose works are held up
to bump off a Dead White Male from his perch, whilst, in my poetry example, a
great black poet, like a Robert Hayden or James Emanuel, is never seen as an
alternative. This shows that the Old Boys Network is merely under siege from
a new Girls And Boys Network that cares as little for true quality and diversity
as the dinosaurs they seek to displace. When do you believe multicultis will
tire of the mere novelty of exotica and demand true excellence- not just writers
that ‘respect’ them and their tribe?
SP: I wish I knew. It’s generally impossible to predict how long social trends
(like the rise and fall of the name “Steve”) will last. For one thing, PC,
PoMo, and multi-culti got their start in the 1960s, whereas the
counterrevolution only found its legs in the early 1990s (perhaps the kickoff
was the Newsweek cover story which first popularized the old term
“political correctness.”). Perhaps a change will only happen when the cohort
of academics who got tenure in the expansive 1960s retires – as Max Planck
said about science, the field advances funeral by funeral.
My wife, Jessica, has worked in science, and has told me tales of office politics, but even though science has a reputation for being politicized, is it really as bad as the rest of Academia? Does politics determine granting? If so, it seems that the writings of scientists- at least in popular books- are far less politicized than those from the MFA creative writing mills. Is this an accurate assessment?
SP: There’s politics in the sense of who-you-know, and politics in the left-wing-right-wing sense. Both are operative in science funding, unfortunately. But at least they are viewed as bad things that should be eliminated or minimized, whereas I get the sense that within much of the humanities, politics (in the left-right sense) is seen as ineliminable and perhaps even to be welcomed. Scholarship is seen as a means of advancing a salubrious social and political agenda.
How has the Bush regime hurt science funding in America? Recently, some
former Surgeons General testified to Congress re: the politicization of their
work by the current and past administrations. Has your university and/or field
been affected? I would think that studies of the brain and language are less controversial than D&X abortions (misleadingly called partial-birth abortions), stem cells, and studies on homosexuality.
SP: You’re right: the fields I work in are, fortunately, not as vulnerable to contemporary political interference as stem cells or global warming. What damage there is comes more from the left than from the right (though the right does plenty of damage in other fields). “Evolution” is a poison word in grant applications in psychology, not because it will lead to godless humanism but because it will lead to Nazi eugenics. And there is tremendous support for any effort to prove that women are indistinguishable from men.
Since I mentioned homosexuality, I don’t believe I’ve ever heard your views
on the subject. One of the interesting things about research into it is that
many of the top researchers are gay, even though many of the top critics of such
research are straight, and feel that finding a ‘key’ to homosexuality will
somehow lead to genetic efforts to eradicate it. Yet, it seems that a ‘single
point of origin’ for the behavior- or, a smoking gun, seems increasingly
unlikely. Both the gay gene and the gay brain proved
to be rather silly ideas, and no more likely to cause homosexuality than a weak
father figure. Human beings are far more complex a thing than any other living
creature on this planet- even the dolts, therefore if one could actually
pinpoint a cause or causes of any behavior- especially complex things like
sexuality (be it preferences, fetishes, frequency), it’s likely to be
multivalent. That is, one would find dozens of ‘causes’ for any group of a
thousand homosexuals, with many things overlapping, but each person’s ‘real
reason’ being a secret formula. In short, I think the dog is chasing its tail,
and origins are not as important, in such cases, as implications. Thoughts? And,
if so, could homosexuality be a) more closely related to fetishism because b) same-sex sexual play and dominance play are rampant among higher animals?
SP: I think one has to distinguish homosexual behavior, which is no more of a puzzle than any other form of non-procreative sex (such as masturbation), from exclusive homosexuality, that is, the avoidance of, or failure to seek, heterosexual opportunities. The latter really is an evolutionary puzzle, because (at least in men) it is partly heritable (i.e., is affected by the genes, though not necessarily a single “gay gene”), and one would expect that any genes that lead, on average, to fewer offspring than their alternative alleles would quickly be selected out.
At this point there are no good theories on the evolutionary basis of preferential or exclusive homosexuality. One reason is that this is a topic that neither the left nor the right particularly wants to see funded. You’re right that political correctness is pushing its thumb on the other side of the scale this time. In this case, the left likes genetic explanations, and the right hates them, because a genetic basis for homosexuality would seem to imply that gay men can’t be blamed for having made a sinful choice, that they can’t be persuaded out of it through religious counseling, and that they don’t need to be kept from children to prevent them from proselytizing new converts. (Never mind that all of these concerns are non sequiturs.)
When an issue gets politicized, science is the first casualty, and here too advocates have tried to bully researchers away from conclusions that appear not to put their favored groups in a good light. Michael Bailey, perhaps the country’s leading researcher on homosexuality, nearly had his life ruined by nuisance lawsuits, bogus ethics charges, and false personal accusations because he argued that in one kind of male-to-female transsexual (the ones who are attracted to women) the men are motivated by sexual concerns rather than by being women trapped in men’s bodies. (I even got some abuse for writing a nice blurb for Bailey’s book.) So it’s not a topic to which a wave of smart young researchers are going to dedicate their budding careers.
As I mentioned, at this point we don’t have a good theory. E. O. Wilson
suggested that gay men are like “helpers at the nest” and channel their
resources into nieces and nephews rather than offspring – which is clearly
wrong. (Among other things, gay men have been found not to indulge their nieces
and nephews any more than straight men do.) Dean Hamer had the most interesting
suggestion – that the “gay gene” he discovered (still contested), when
passed on to women (2/3 of the time, because it is on the X chromosome) made
women go through menarche at a younger age, resulting
in a lifelong reproductive advantage, which more than compensated for the
disadvantage when the gene is in men. Bailey, following Ray Blanchard, has
suggested that the mother’s immune system is sensitized by male fetuses and
produces antibodies that inactivate testosterone or its receptors in the fetal
brain; his evidence is that men with more older brothers are more likely to be
gay. Another hypothesis, proposed by Gregory Cochran and Paul Ewald, is that
homosexuality is caused by an infectious agent. Yet another is that our
environments have recently changed in such a way that genetically sensitive men who might have been heterosexual in evolutionarily typical environments are
tweaked toward homosexuality today. But no one knows the answer, and no one is
likely to find out any time soon, at least not in the United States.
Whatever the answer is, it may be different for women, because women’s sexuality is so
much more complex and fluid than men’s. Women are far more likely than men to
change sexual orientation during their lifetimes (the LUG or
lesbian-until-graduation phenomenon), to experiment with homosexuality, to
happily do without sex at certain stages
of their lives, and so on. For those reasons I’d predict that homosexuality is
less heritable in women than in men, and less sensitive to other biological
factors like prenatal influences.
On a related score, most mammals seem to have a need to play- i.e.- do
activities that seemingly serve no benefit in terms of seeking food, sex, etc.
What is the cause of play? Could it be that more complex brains simply need to
unwind, and ‘cool down’?
SP: Juvenile play is common in the animal kingdom, and its obvious function is experimentation and practice in preparation for actual encounters with the world. Juvenile predators play at stalking (as in a kitten with a string toy), juvenile prey animals play at dodging and fleeing, juvenile apes engage in play fighting, little boys play with toy weapons, and so on. Peter Gray, the author of the psychology textbook I use in my course and an expert on play, notes that even in grim situations like refugee and concentration camps, children’s play is distinctively practical – while the adults use play as an escape from reality (e.g., card or board games), the children invent macabre reality-based games, like withstanding simulated abuse from pretend guards. With adults, too, a lot of play consists in pushing the outside of the envelope of survival – experiencing controllable doses of ancestral dangers like speed, heights, water, animals, exhaustion, and enemies, presumably to see how far one can go into dangerous territory without crossing the line into genuine harm.
This is not to deny the possibility that some play just consists of pressing the pleasure buttons of the brain. Given that we are technologically clever enough to do all kinds of amazing things, we are bound to be clever enough to short-circuit our pleasure circuitry as well, such as with recreational drugs, music, dance, and other enjoyable pursuits.
And, if it is true that play is a way to cool down a brain, what of dreams?
At least to me, since I only know what my dreams are like, my dreams seem to be
void of symbolism, and are merely the unspoolings of my mind from a day’s
stress. As example, a few months ago, while preparing an early draft of this
interview, I was doing research on you by watching this video
of a lecture you did after the release of The Blank Slate. I watched
the full, nearly two-hour-long video, and that night, both Jessica and I had dreams
with you in it. Jess could not recall her dream, but mine was this: I was
the narrator of a film- perhaps a documentary about you, and possibly a student
of yours. Yet, the Steven Pinker in the dream, while looking as you do, was not
a cognitive psychologist, but an expert on global warming and the melting Arctic
ice cap. There were some other students who accompanied you and me to the Arctic
to do some research. One was a female student who was likely your lover. Then,
after forgetting some parts, the ice caps quickly melted, and all of us returned
to the Arctic shores drenched. However, you were immediately arrested by
government authorities for either plagiarism or tax evasion, spent a few years
in jail, and won the Nobel Prize for something or other. When you got out of
jail, with me narrating the film of this all, you gave a big kiss off to the
media, not unlike the Woody Allen character does in the Martin Ritt film The
Front. Then, I woke.
I see no great Freudian symbolism, and most of my recalled dreams are similar,
where people from my past, or an occasional celebrity, meets me and engages in
rather dull things (save for a recurrent dream of Sharon Stone in fishnet
stockings!). It seems obvious that you were in the dream because I saw
the video not long before I slept, a day or two earlier I’d read a piece in Discover
magazine on the melting of Arctic ice, and the hiking around that occurred in
the dream was because that day Jessica and I had gone hiking in a state park.
If dreams, and their causes, are so pedestrian, why do so many still try to
imbue so much into them? Or are my dreams an exception? Has anyone ever studied
the obsession over dream interpretation rather than the interpretation of dreams
themselves? And does anyone really understand why we dream? Is it the brain’s
way of cooling down, as I suggest?
SP: Yes, I tend to agree that the content of dreams is a screensaver – any old pattern will do. Lots of interesting and important things happen to the brain during sleep, but the actual screenplay of dreams is unlikely to matter.
You raise a good question about the appeal of dream interpretation, which may be
universal, and is always accompanied by a sense of profundity and portent. The
nineteenth century anthropologist Edward Tylor suggested that the experience of
dreaming is a major reason that people everywhere are dualists, believing that
the mind and body can part company. After all, when you’re dreaming, some part
of you is up and about in the world – indeed, a netherworld that follows
inscrutable laws – while your body is in bed the whole time. Much of religion
and mysticism consists of thinking about a mysterious other-world of spirits,
ghosts, and souls, which is often felt to be close to the incorporeal world of dreams.
Let’s move on to eugenics. Despite the infamous misapplications by the
Nazis, as well as White Supremacists in this nation, I think both eugenics and
euthenics are good things, and I feel both hover over all discussions of cloning
and stem cells today. Yet, how can one claim to be for liberty and deny others a
right to choose their kids’ eye color, or sex, or a desire to clone oneself? Do
you feel that the fear of creating a race of supermen is overblown? Would not
Murphy’s Law play a role? Or the Law Of Unintended Consequences?
SP: Yes, I’ve made such an argument before the President’s Council on Bioethics, and in a related Boston Globe op-ed.
On a related note, I cite the two above ‘Laws’ because I feel
‘greatness’ is a random thing. When people have tried to make available the
sperm or eggs of Nobel Laureates or Mensans, the kids turn out to be rather
average. This jibes with the fact that almost all great people, such as Picasso, Newton, Einstein, and, most famously, Thomas Jefferson, have never had any forebears or descendants come close to their achievements. And the few famed
people who’ve had success run in their families- the Adamses, the Darwins, the
Barrymores, have never really had greats in their clans- or, as in the Darwin case, Erasmus was not in a league with his grandson Charles. I call this fact
the Infinity Spike, meaning that the idea that a Master Race could be
engineered- at least intellectually, is folly. Perhaps physical characteristics,
but the chances of two Mensans or Nobel Laureates producing another Michelangelo
or Kurosawa are only negligibly greater than such a person coming from a plumber
and a teacher. Perhaps a three or four out of fifty million chance versus a one and a half to two out of fifty million chance. In short, greatness spikes toward infinity out of
nowhere- there is no predictable bell curve nor progression toward excellence.
What are your thoughts on this posit?
SP: Yes, see above. The great biochemist George Wald, one of the Cambridge lefty scientists of the 1960s, was asked to contribute to Shockley’s sperm bank for Nobel prizewinners. He wrote, “If you want sperm that produces Nobel prizewinners, you should be asking people like my father, a poor immigrant tailor. What have my sperm produced? Two no-good guitarists!”
Of course, today’s women who pay more for sperm from elite college graduates, or
who choose not to bear the child of some low-life from a drunken
one-night-stand, are not being irrational. Intelligence and personality are
heritable, at least statistically. But you’re right that true genius and
other extreme traits are not heritable because of the laws of probability –
what statisticians call regression to the mean. This is related to your Infinity Spike.
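Regression to the mean can be sketched numerically. The simulation below assumes a hypothetical trait that is 50% heritable on an IQ-like scale (mean 100, standard deviation 15); all the numbers are illustrative, not taken from the interview.

```python
import random

# Toy model of regression to the mean: a child inherits half the
# parent's deviation from the population mean, plus fresh chance.
random.seed(1)

HERITABILITY = 0.5
population = [random.gauss(100, 15) for _ in range(100_000)]

def child_of(parent_score):
    # Half the parent's deviation is transmitted; the rest is noise,
    # scaled so children have roughly the same spread as the population.
    noise_sd = 15 * (1 - HERITABILITY ** 2) ** 0.5
    return 100 + HERITABILITY * (parent_score - 100) + random.gauss(0, noise_sd)

# Take the extreme parents: three standard deviations out (score >= 145)
elite_parents = [p for p in population if p >= 145]
children = [child_of(p) for p in elite_parents]
avg_parent = sum(elite_parents) / len(elite_parents)
avg_child = sum(children) / len(children)
print(f"average elite parent:      {avg_parent:.1f}")
print(f"average of their children: {avg_child:.1f}")
```

The children of the extreme group average roughly halfway back toward 100, which is why exceptional parents so rarely have equally exceptional offspring even when the trait is genuinely heritable.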
And, what if my idea about an Infinity Spike is correct- would that mean that
it’s folly to try to ‘breed’ a race of Nobel Laureates?
SP: As far as extreme people are concerned (Hitlers, Einsteins, Mozarts), your “infinity spike” idea is surely right. A countless number of things have to align adventitiously for such an unusual person to arise. First, there might have to be dozens or hundreds or thousands of genes, not just one or two, in an exact combination—what behavioral geneticists call “emergenesis.” Second, even identical twins raised together are nowhere near perfectly correlated, despite their identical genomes and near-identical environments. This shows that there must be an enormous role for chance, either in brain development, unique experiences, or both. Third, the exact time and place in which a baby is born surely matters – a Mozart or Hitler today may not find a niche that allows their peculiar powers to express themselves.
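The arithmetic behind "emergenesis" can be made concrete. The gene count and variant frequencies below are hypothetical, chosen only to show how quickly the odds of an exact multi-gene configuration collapse.

```python
# Back-of-the-envelope sketch of "emergenesis": if an extreme trait
# requires an exact combination of many gene variants, the probability
# of the whole configuration shrinks multiplicatively.
n_genes = 100
freq = 0.5   # assume each required variant is common (50% frequency)

p_exact_combo = freq ** n_genes
print(f"chance of the exact {n_genes}-gene combination: {p_exact_combo:.3e}")

# Even a parent who HAS the full combination passes each variant on
# with probability 1/2 (assuming heterozygosity at each locus), so a
# child's chance of inheriting the entire set collapses again - one
# reason genius does not "breed true."
p_child_inherits_all = 0.5 ** n_genes
print(f"chance a child inherits the full set:       {p_child_inherits_all:.3e}")
```

With these made-up figures the exact combination occurs in fewer than one person per 10³⁰, which is the sense in which an "infinity spike" cannot be engineered by pairing two high-IQ parents.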
And on top of these factual uncertainties, there are the numerous tradeoffs. There may be genes that have side effects, such as increasing the IQ of some of your children by 10 points and leaving others confined by spasms to a wheelchair. There may be genes that are desirable only in optimal doses: a gene that makes a kid bolder might be desirable if his other 20,000 genes would render him pathologically shy, but not if the other 20,000 would render him a reckless maniac.
Then you have to ask how we would get there from here – if there is even a small chance that the first experiments in enhancement would produce a deformed child, how would those experiments ever get done?
And even if all these technical issues were worked out, there would still be the issue of personal preferences. People, probably irrationally, are horrified by genetically manipulated soybeans – can we really be so sure that they would welcome genetically manipulated babies? It’s instructive to look at the predictions about our “inevitable” technological future that were common when I was a child in the 1950s and 1960s -- nuclear-powered automobiles, moving sidewalks in domed cities, meals in a squeeze tube, blowing the Great Barrier Reef to smithereens with nuclear bombs to create new shipping lanes. These are risible today not just because of technical infeasibility or costs but because of changes in values. At the time, convenience and effortlessness trumped everything else in life. Today we put a value on exercise, biodiversity, naturalness, sensory richness, and medical safety that just didn’t figure at the time. The same may be true for designer babies and other Brave-New-World scenarios.
Let’s move on to a different meaning of The Bell Curve. I refer to
the infamous 1994 book (and what some would call a racial screed) by
Richard Herrnstein and Charles Murray. You seem to be sympathetic to it, on a
statistical level, but averse to many of its conclusions. As example, you
believe that IQ is not immanent, but malleable, and cite statistics that show
the IQs of ethnic groups can change over time. To me, this seems rather obvious. As example, about 15 years ago a cousin of mine urged me to join Minnesota’s
Mensa Society, to which she belonged. Well, the experience was disturbing, for
the Mensans were a collection of all the worst stereotypes of nerds, geeks, and
weirdos one could imagine. Furthermore, I had to take one of their IQ tests,
which was dated 1969. The questions were atrociously constructed, and were
manifestly not objective, nor did they measure anything which could be called
creative. To join, one had to achieve the 98th percentile. I got to
the 92nd percentile, but there were a dozen or so questions that I
knowingly gave the ‘wrong’ answer to simply because the premises were false
and/or I disagreed with the ‘correct’ answer I knew they wanted. Two examples: one question asked me to choose the object that went with a cup. The
choices were a spoon, fork, saucer, or table. To me, the natural answer, from my
less than middle class childhood, was a table. Cups go on tables. Of course, I
knew they wanted the answer to be saucer; but that’s simply a cultural bias,
and says nothing of a real intellectual nature. The second example was to link
geometric shapes. They gave a triangle and asked to link it to other geometric forms: a circle, a hexagon, an octagon, and a square. Now, I knew they wanted it linked to the latter three shapes, because all of those shapes were polygons. Yet, to me, since the latter three had an even number of angles, and the triangle an odd number, it could most closely be linked to the circle, since a circle has an infinite number of angles, and infinity can be either an odd or even number.
I think IQ tests merely measure a pedestrian or functionary level of intellect.
What are your thoughts on its efficacy in measuring real human intelligence?
And, the main criticism- amongst seemingly hundreds, of The Bell Curve,
was that it was not multivalent, and did not include different sorts of
intelligence, such as Howard Gardner’s
Seven Intelligences: language, math and logic, musical, spatial, bodily &
kinaesthetic, interpersonal, and intrapersonal. Comments?
SP: The thing about Mensa these days is that it is aimed at people whose social identity hinges on being smart, rather than at people who are smart or people who do interesting things thanks to the fact that they are smart. Hence the wonk stereotype. My sense is that this was different before urbanization, ubiquitous college education, and social stratification by intelligence, at a time when smart people were more scattered across the country and different socioeconomic circles. During my 21 years as a professor at MIT I would often meet students who had been the only smart kid in their small town. Their lives had been miserable – a fat smart girl in a Maine mill town where cheerleaders had all the status, or a reflective introverted boy in a town where only the jocks were respected. The other kids ostracized and persecuted them, and often their own families didn’t understand them, thinking that they were “showing off” when they did well in math or read books. Going to MIT meant that for the first time they were valued for what they were. I imagine that at one time Mensa had a similar function. The upshot, though, is that the IQ tests they administer are fairly crude, and far from the state of the art.
I think you’re wrong about IQ tests in general. They’ve been shown to predict (statistically, of course) a vast array of outcomes that one would guess require intelligence, including success at school, choice of intellectually demanding professions, income (in a modern economy), tenure and publications in academia, and other indicators, together with lower crime rates, lower infant mortality, lower rates of divorce, and other measures of well-being. The idea that IQ tests don’t predict anything in the real world is one of the great myths of the intellectuals.
I’m sympathetic to modular theories of the generic human mind like Howard Gardner’s, but they have nothing to do with individual differences in intelligence. For one thing, the inclusion of “musical” and “bodily and kinesthetic” intelligence is mainly a tactic to morally elevate those traits by rebranding them as forms of “intelligence.” But a great athlete or drummer is not necessarily “intelligent” in the sense that people ordinarily mean by the term. Secondly, though modularity may apply to the universal design specs of the human mind, and may help explain pathologies that selectively affect one faculty (such as from brain damage or a genetic deficit), that has nothing to do with quantitative variation among individuals in the normal range. It’s an empirical fact – massively and repeatedly demonstrated – that people who do well on tests of verbal intelligence also do well on tests of spatial and quantitative intelligence, and vice-versa. The correlation is nowhere near perfect (some people really are better at math, others with words), but it is undoubtedly a positive correlation. General intelligence in this sense is a real phenomenon.
DS: And would not the fact that racial or ethnic strengths wax and wane due to non-biological reasons be manifest? Take a look at the most demotic of all sports- boxing. A century ago, Jews and Irishmen dominated. Then, blacks took over. Now it’s Latinos that dominate. Did Jews and Irishmen suddenly turn wimpy? And if one could quantify any group’s qualities, would they not be in constant flux due to aging, birth, death, and other health conditions? And would not systemic poverty inevitably skew and retard certain groups over others? After all, even in nations less diverse than America, the poorer groups are always at the bottom of the intellectual ladder, even if the same or similar groups dominate in other nations where they are not as impoverished.
SP: Differences between groups may have a different explanation than differences within groups, to be sure. But historical changes of the kind you just mentioned do not show that the relevant ethnic differences are arbitrary. If you exclude blacks from professional basketball, then open the doors and they flood in while the Jews are driven out, that doesn’t mean that on average blacks and Jews are equally good at basketball and that the NBA suddenly became anti-Semitic. Likewise as the media become more global, travel becomes cheaper, and sports become more professional and competitive, the best athletes in a sport will be found in whatever nook or cranny of the planet they inhabit, such as east Africa for marathon running. The fact that an arbitrary exclusion can distort the composition of some set of actors does not imply the converse – that when the exclusions are eliminated, the set will reflect the population in perfect proportion. On the contrary, as the selection becomes fairer and more acute, and the stakes for success in competition become higher, one expects to find that any group with even a slight advantage can crowd out the others.
DS: The Bell Curve leads me into my own ideas on human intellect, from decades of observing the creative and non-creative minds. I first posited this in an essay on the literary critic Harold Bloom: Here is my posit: the human mind has 3 types of intellect. #1 is the Functionary- all of us have it- it is the basic intelligence that IQ tests purport to measure, & it operates on a fairly simple add & subtract basis. #2 is the Creationary- only about 1% of the population has it in any measurable quantity- artists, discoverers, leaders & scientists have this. It is the ability to see beyond the Functionary, & also to see more deeply- especially where pattern recognition is concerned. And also to be able to lead observers with their art. Think of it as Functionary². #3 is the Visionary- perhaps only 1% of the Creationary have this in measurable amounts- or 1 in 10,000 people. These are the GREAT artists, etc. It is the ability to see farther than the Creationary, not only see patterns but to make good predictive & productive use of them, to help with creative leaps of illogic (Keats’ Negative Capability), & also not just lead an observer, but impose will on an observer with their art. Think of it as Creationary², or Functionary³.
I have not, admittedly, ever subjected this idea to rigorous scientific testing (how could one?), but I think that, anecdotally, it holds up. There is simply a
difference between Functionary minds and Creationary minds, and an even bigger
one between merely Creationary minds (your average artist, leader, or scientist)
and those who are truly Visionary. Do you agree with any of this? And is there
any squaring of Gardner’s Seven Intelligences with my idea of Three Intellects?
SP: It does seem plausible, but you’re right that psychometrics has little to say
about it. There has been some work on distinguishing intelligence from
creativity, but much less on visionary genius, and the studies that exist tend to be
more biographical than psychometric. I don’t think they would easily map onto
Gardner’s intelligences, since one can be a functionary, creator, or visionary
in any of them.
Let’s take a Keatsian leap of illogic and segue back to something that you
have expressed an interest in, and take up again in The Stuff Of Thought:
epithets, curses, swears. Back in the early 1970s, the comedian George Carlin
popularized the seven words that the Federal Communications Commission had put
on its banned list for commercial television and radio: shit, piss, fuck,
cunt, motherfucker, cocksucker, and tits. There have been
claimed variants- such as cock and pussy, and some of the words- like piss and
tits, have seemed to cross back over the line to acceptability. Why do all
cultures have a prohibition against certain words? And why do Americans obsess
over sexual terms more than other cultures? What are some of the equivalents of
‘fuck,’ say, in other languages- not in terms of meaning, but in terms of
cultural taboo? And how do curses originate and evolve? Do all such epithets
follow similar trajectories?
SP: To make a long and interesting story short: People everywhere tacitly believe in word magic – the idea that words are not arbitrary labels but are part of the referent’s essence, and can therefore, by the fact of being uttered, impinge on the referent itself. If you don’t believe it, just say “I hope my child will get cancer” aloud, or say “No one in my family has ever had a serious disease or accident” without feeling a strong urge to follow it with “Thank God” or “knock wood.” Taboo words tend to be ones associated with strong negative emotion – awe of deities, fear of death and disease, disgust at bodily secretions, revulsion at depraved sexual acts, contempt for minorities, enemies, and cripples. The specifics obviously vary from culture to culture and from time to time: just look at the fate of damn and bloody in English-speaking countries during the twentieth century, or at my native Québec, where the two main curses are translated as “Chalice!” and “Tabernacle!” It’s not a coincidence that Québec was, until quite recently, a traditional Catholic society, and that as English-speaking countries became more secular in earlier periods, the religious epithets lost their punch and were replaced by sexual ones. There is a rough correlation between a culture’s values and its profanities, though because taboo words can remain taboo simply because everyone treats them as taboo—that is, people recognize that they are intended as releases, or as offensive, or as a way to show that one means business—a taboo word can remain taboo long after its referent has ceased to strike a chord with the speakers.
DS: Curses tend to be words as substitutes for violent actions or thoughts, yet a euphemism is the substitution of one word for another. Why do people use euphemisms? Is the impulse the same as with a curse word? Perhaps the worst sort of euphemisms come in the political sphere, such as pro-choice for pro-abortion, and pro-life for anti-abortion. Yet, many people tend to fall for such nonsense. Wherefore this gullibility? Even more annoying, to me, is how words are twisted upon themselves, such as liberal, conservative, and libertarian. What other areas of human endeavor twist words as much as politics? I’m thinking of the sciences, where minutiae seem to take priority over common sense- such as the naming rules for fossils, which resulted in the wonderfully evocative name Brontosaurus being replaced by the inapt Apatosaurus, or how a handful of astronomers have suddenly decided to demote Pluto from the ranks of planet, and reclassify it a ‘dwarf planet.’ Yet, a dwarf planet is still a planet, just as a human dwarf is still a human. Any thoughts on these lingual gymnastics?
SP: Taboo words are often dysphemisms—words deliberately intended to make listeners think about the disagreeable or emotionally fraught aspect of their referents, as with shit, fuck, and piss. An irate gardener might shout, “Stop your dog from pissing on my roses!”, but a nurse would be unlikely to say “Mrs. Jones, you’ll need to give us a sample of your piss.” Euphemisms do the opposite—they are a way to refer to a fraught entity (which we all must do from time to time, because we are incarnate beings, who get sick, copulate, die, produce waste, and engage in other messy activities) while making it clear to the listener that one has no desire to offend him by making him think that unpleasant thought.
As for the other questions – we’re barely halfway through, and I’m in danger of retyping my entire book into this interview, so let me just say that these are topics that are covered in The Stuff of Thought, in particular in the chapters on metaphor and on naming.
DS: Of course, the worst offense against writing is censorship. Where does this impulse come from? Yet, all sides do it- be it the Christian Right or the Feminist and PC Left. What are your views on both these extremes? And what drives folks to such extremes? Should not moderation be more attractive? And why and how has the Internet- chatrooms and blogs- accelerated this trend toward extremism? Is it the anonymity that the Internet provides?
SP: The primary urge to censor content comes from the desire not to allow people to know arguments or facts that could compromise one’s own claim to expertise or authority. This is further inflamed by the psychology of taboo – the mindset we all are vulnerable to, in which certain ideas are considered not just illogical or false but sinful to think and worthy of punishment. (The psychologist Philip Tetlock has done many experiments showing how prone people are to this mindset – I discuss his work in Slate and in Stuff.)
With taboo words (as opposed to taboo ideas), there’s also a concern about the presuppositions and attitudes that a listener has to entertain just to understand the word or expression. To understand an epithet like nigger or fucking Jew or cunt (as a misogynistic term for a woman) is to be complicit, if only for a moment, in an implied community of speakers who codified the contemptuous attitude into a word or expression. It feels morally corrosive even to hear the word and understand how it is intended. Hence the desire not to hear them, and to prevent people from hearing them.
Of course it’s the very essence of democracy that words and ideas cannot be stifled by force except under very narrowly defined circumstances. And academia can only justify its existence if it is an open forum for ideas, including those that are heterodox at any moment. So wide latitude must be given to the expression of ideas. Still, the policy depends on the context – there’s a difference between privately owned media enforcing a house style (which is not unreasonable) and government censorship (which almost always is).
DS: Let me return to the idea of how words and ideas spread, or, as evolutionary biologist Richard Dawkins terms them, memes. Some memes- such as curse words- tend to spread, while others tend to die. Why do you think some terms prosper and others fail? One example of a failed- or wannabe- meme is the term ‘bright’, for an atheist. Philosopher Daniel Dennett- an atheist- has embraced the term, while you- also an atheist- seem less comfortable with it. Is this so? And why? I believe ‘Bright’ is a weak neologism, and sounds like a bunch of smarty-pants kids wearing beanies with propellers on top, out of touch with reality. It is a contrived and puerile term. Neologisms like bright tend to succeed only when there is a void to fill, not when they are given Madison Avenue-like deliberation. Sans the void, and with plenty of better and more specific options, neologisms die. Bright is a bad term, ill-defined, inappropriate, and superfluous. Even Political Correctness, at least, has some worth in its reality as Left Wing Fascism, rather than what it purports to be. Also, it’s part of the American movement of dishonestly labeling things- be it pro-abortionists and anti-abortionists who call themselves pro-choice and pro-life, or creationists who call themselves Intelligent Designers. Thoughts on the merits of ‘Bright’? And, is Intelligent Design a pseudoscience like that practiced by Nazi and Soviet scientists?
SP: I’ll note that the question of which neologisms succeed and which ones fail is
the topic of one of the chapters in The
Stuff of Thought. In these examples, I think you may be missing a layer of
irony. Political correctness is a
sarcastic term, so its purported use and its use in reality are in fact the
same. The term bright, which is
self-consciously coined and introduced (unlike most successful words, as you
note), is intended to call attention to itself and the circumstances under which
it was coined, rather than as a serious attempt to infiltrate the language. We
live in an age in which belief in God is considered the default, reasonable
state of opinion, and in which most people equate the term atheist with amoral (polls
consistently show that Americans are less likely to vote for an “atheist” as
president than any other disfavored category). By announcing that the concept of
atheist needs positive rebranding, and that it’s a position that intelligent
people are likely to arrive at through reasoning, the movement to introduce the
term is making a meta-statement, a kind of lexicographic guerilla theater.
Whether the word itself catches on as a neutral descriptor is irrelevant.
DS: Here is
another thing I have noticed. While the meme of ‘meme’ has been very
successful in proliferating, the fact is that most times I see someone use
the term, they speak of a meme as if it were a material thing, rather than being
merely a metaphor. The meme’s meme, in other words, has been dumbed down to a
memetic dead end. Do you see irony in this?
SP: I don’t
really understand this – it hasn’t been my experience.
DS: I mentioned
that you are an atheist, so let me ask you this: a few months ago, the ABC
television network aired, on Nightline, a
‘debate’ on God between a former sitcom star, Kirk Cameron, and his vapid
guru, and two almost equally dumb atheists who offer a ‘Blasphemy
Challenge’ online. These nitwits are associated with a bad video called The
God Who Wasn’t There, made by a recovering religion addict. I reviewed
the film, and these folks are as dogmatic as the religiots. Why is it that
whenever I’ve seen videos of you or other intellectuals debating on a topic
such as this, you are usually pitted against the Lowest Common Denominator
representative of dissenting opinion, rather than a serious theologian?
SP: I did have a friendly exchange with the Chief Rabbi of England, as well as a four-way micro-debate in Time magazine with Francis Collins (the head of the Human Genome Project, and an apologist for Christian theology), Michael Behe (the patron saint of Intelligent Design), and a Christian fundamentalist preacher. Dawkins has debated Collins in Time, and Harris has debated the talk-show host Dennis Prager, who’s no dummy, as well as a major Christian religious leader whose name I forget. So it isn’t all farce. Also, while I respect theologians who are students of religious philosophy and history (which is undoubtedly an important field of scholarship), the idea of debating a “serious theologian” about the existence of God is, for me, like debating a “serious astrologer” about the validity of astrology.
DS: Good point.
Religion always seemed designed to be just beyond scientific purview, so that
there is an Oz-like curtain to cover up the homunculus. Do religions rely on
such trickery because they know, in daylight, their claims are essentially
silly? And, despite claims of a worldwide religious revival, I see the opposite,
especially since Y2K. I see 9/11 as an example of radical religion starting its
death throes. It’s so ineffectual for so many that it has to try to grab
attention any way possible. And, I believe there are far more Homer Simpsons
sleeping in the pews than Ned Flanderses marching onward, like Good Christian
Soldiers. Is this disconnect between the reality of a growing irreligiosity in
the world and the alarums about Fundamentalist Islam and Christianity due to
media outlets, like the ABC network, constantly pushing religion into the spotlight?
SP: I agree. Natalie Angier, in her American Scholar article of a few years ago, pointed to data showing that religious belief in America is soft. People tell pollsters they believe in God because they equate the question with “Do you believe in morality and values?” but it doesn’t play much of a role in their lives or beliefs. Greg Paul and Philip Zuckerman make the case even more strongly on Edge.org. In Europe, Canada, Australia/NZ, Japan, and other postindustrial countries, the trend is even stronger. People don’t care about God, and are staying away from churches in droves – especially Catholic churches, which have been decimated by the pedophile priest scandals.
DS: Why is the mainstream media so hostile to rational thought? In the last half a decade, or so, ABC- as example, has aired a steady stream of Christian propaganda, on the so-called life of Jesus, the reality of God, yet only throwaway moments are devoted to rational responses. Jesus Christ, as example, is a figure with ZERO basis in historical reality. There are no contemporaneous mentions of him, despite his living in perhaps the most litigious and recorded area of the world, at the time. It’s decades later when a Josephus interpolates some mention of the man from Nazareth. Even the Roswell Incident has far more historical reality than Jesus Christ. Yet, corporations like ABC persist in upholding the mythos; ABC because its then anchorman, Peter Jennings (it is rumored), became a Born Again Christian in the final years of his life. Why is there such anti-intellectualism in this country? There seems to be, not only in religion, a desire to damn any real cogitation on issues. Is this the ignorant hand of Postmodernism come to cover all subjects, or is it remnant Puritanism?
SP: Religion sells – a Newsweek editor once told me that they cynically put Jesus or Mary on a cover twice a year because it always gooses up readership. (For confirmation, see the hilarious Mother Jones feature Jesus, What A Cover!) Clearly there’s a taboo about discussing the factual basis of religious history in American public forums. I tend to think that the no-nonsense rhetorical tactics of a Sam Harris (like those of David Hume, Bertrand Russell, H. L. Mencken, and others before him) are necessary to bring these issues into the realm of rational discourse.
DS: And what of PoMo? As you have with religion, you have been very critical of Postmodernism. Why? I always laugh when I get emails from deliterate folk who rail against some essay I’ve written on bad writers who hide behind the PoMo cloak- such as David Foster Wallace- claiming a) I don’t understand Postmodernism, or b) the writer (in this case, Wallace) is not a PoMo writer, but a Post-Postmodernist writer. Yet, what could be more PoMo than Po-PoMo? I even recall a Yoko Ono ‘Art’ Show at the Walker Center in Minneapolis, where they displayed such Onovian art as a pencil dot on a blank sheet of paper, and a real green apple on a stand. I knew a singer who actually thought that ‘Ono is deep and ahead of her time.’ To what do you ascribe such gullibility?
SP: Part of it is the feeling of superiority at getting a joke that goes over the heads of the rubes from Peoria. This is of a piece with everyone’s rediscovery of relativism at some point in their intellectual development – the epiphany that other cultures or periods see things differently than the way we do, therefore the world view we take for granted is a parochial and arbitrary prejudice which the enlightened can transcend. It’s basically a confusion between cosmopolitanism, which is good, and relativism, which is bad – bad because it’s self-refuting (if relativism itself is true, and believing it is good, then truth and goodness must exist), and because it’s belied by the success of science.
I must add, though, that to my surprise I really enjoyed a retrospective of Yoko Ono’s art from the 1960s which was featured at the List Gallery at MIT a few years ago. If one projects oneself back to 1966, her art is fresh and witty and thoroughly original. I even understood for the first time how John Lennon, a smart and creative kid who barely escaped the slums of Liverpool, could become infatuated with her. The problem is that there is just enough potential in minimalist conceptual art to support approximately one artist, and the time window in which it was fresh and original lasted for just a couple of years in the early-to-mid-1960s. Since then there have been far too many Yoko Onos, including Yoko Ono herself.
DS: Let’s move on to another subject that has been misconstrued: human violence. I grew up in Queens, New York, in the late 1960s and 1970s. It was a bad neighborhood, and I saw much wanton violence. Yet, in the last few decades, violence in large cities, and nationwide, has decreased dramatically, yet with the increase in media coverage, one might assume the opposite is true. Is violence just a manifestation of ‘the beast’ in us? And what is the price of denying that reality? As example, I loved boxing, as a child, and one of my favorite all-time sports moments came in the early 1980s, during a Monday Night Football game between my beloved New York Giants and the damnable Washington Redskins. Two of my guys- Lawrence Taylor and Leonard Marshall- sacked the arrogant and detestable Redskins quarterback Joe Theismann, and snapped his leg in half. One could hear the snap over the television. While one might ponder why such a thing is memorable to me- and fondly memorable- the fact is that I’ve known many people whose fond memories consist of violent things. Why is this common? Do men feel such things more, because of testosterone? Yet, I overcame my natural inclination to violence, while many others never do. Can you speculate on why some people can and others cannot overcome certain things, such as violence?
SP: I had a chapter on violence in The Blank Slate, and the decline of violence over the millennia will be the topic of my next book (2010 or 2011) – an extension of the New Republic article you mention below.
DS: You seem to agree with my earlier posit on how the media misconstrues violence. Do you feel this is deliberate? If not, what could account for it? Is it just corporate greed, to fan the Lowest Common Denominator flames?
SP: “If it bleeds, it leads,” say the producers of news shows. As you note, we are all fascinated by witnessing staged and simulated violence, even if we abjure it in our behavior. Violence in movies has become more graphic (in the old movies, the bad guys never even bled when they got shot); violent video games have skyrocketed; and we continue to enjoy boxing, hockey, Shakespearean tragedies, and Mel Gibson movies – all during a period in which rates of violence have plummeted. This is one piece of evidence for the main intellectual theme of my career– that there is a timeless and universal human nature, but it is to be found in thought and emotion, not in behavior.
DS: In an essay called A History Of Violence, you state, ‘The
decline of killing and cruelty poses several challenges to our ability to make
sense of the world. To begin with, how could so many people be so wrong about
something so important? Partly, it’s because of a cognitive illusion: We
estimate the probability of an event from how easy it is to recall examples.
Scenes of carnage are more likely to be relayed to our living rooms and burned
into our memories than footage of people dying of old age. Partly, it's an
intellectual culture that is loath to admit that there could be anything good
about the institutions of civilization and Western society. Partly, it's the
incentive structure of the activism and opinion markets: No one ever attracted
followers and donations by announcing that things keep getting better. And part
of the explanation lies in the phenomenon itself. The decline of violent
behavior has been paralleled by a decline in attitudes that tolerate or glorify
violence, and often the attitudes are in the lead. As deplorable as they are,
the abuses at Abu Ghraib and the lethal injections of a few murderers in Texas
are mild by the standards of atrocities in human history. But, from a
contemporary vantage point, we see them as signs of how low our behavior can
sink, not of how high our standards have risen.
The other major challenge posed by the decline of violence is how to
explain it. A force that pushes in the same direction across many epochs,
continents, and scales of social organization mocks our standard tools of causal
explanation. The usual suspects—guns, drugs, the press, American
culture—aren't nearly up to the job. Nor could it possibly be explained by
evolution in the biologist’s sense: Even if the meek could inherit the earth,
natural selection could not favor the genes for meekness quickly enough. In any
case, human nature has not changed so much as to have lost its taste for
violence. Social psychologists find that at least 80 percent of people have
fantasized about killing someone they don't like. And modern humans still take
pleasure in viewing violence, if we are to judge by the popularity of murder
mysteries, Shakespearean dramas, Mel Gibson movies, video games, and hockey.
What has changed, of course, is people’s willingness to act on these fantasies….Man's inhumanity to man has long been a subject for moralization. With the knowledge that something has driven it dramatically down, we can also treat it as a matter of cause and effect. Instead of asking, ‘Why is there war?’ we might ask, ‘Why is there peace?’ From the likelihood that states will commit genocide to the way that people treat cats, we must have been doing something right. And it would be nice to know what, exactly, it is.’
I agree with your second query being the more perplexing. Do you have an answer? Is there really any answer?
SP: In the article, I offer four hypotheses: (1) an effective democratic police and
court system deter violence; (2) the ease of trade, travel, and communication
have put us in positive-sum games in which other people are more valuable alive
than dead; (3) the circle of empathy has expanded because of journalism,
history, realistic fiction, and cosmopolitanism; and (4) life has become more
pleasant and predictable, leading us to value it for ourselves and others. All
could be true, and they could all be manifestations of some general trend toward
moral progress that stems from the moral logic of sociality. But I’ll leave an
exposition of these ideas to the next book, tentatively entitled The
Better Angels of Our Nature.
DS: You have also graphically stated that, on a per capita basis, so-called ‘higher societies,’ i.e.- the First World, Western World, Liberal Democracies, etc., have a far lower rate of violence and murder than tribal societies, even including the last century of the Two World Wars and the ‘Hot Flashes’ of the Cold War- Korea, Vietnam. You use this to debunk the Rousseauvian ideal of the Noble Savage, even claiming that his antithesis, Thomas Hobbes, was right and Jean-Jacques Rousseau was wrong. Claims such as this have gotten you labeled a Right Wing apologist, a Fascist, etc.
Yet, this claim seems to not be so ludicrous, especially when one looks at the
Middle East, where most people still live in tribal societies, where a sense of
commonweal is peregrine. Why has the Noble Savage perdured, when evidence,
especially in the New World, contradicts such? Of course, I refer to growing
evidence of cannibalism in the tribes of the American Southwest, evidence for
mass hunting kills of Stone Age large mammals and deforestation in the American
West, and the odd case of the Kennewick Man.
SP: I keep an eye, probably foolishly, on things people say about me, but as far as
I know I’ve never been labeled a Fascist. Even overt accusations of right-wing
apologetics are pretty uncommon, given my views on evolution, secularism,
humanism, etc. But you’re right that belief in the Noble Savage dies hard. A
large reason is that it is a reaction to the demonization of nonwestern peoples
in the past and the free pass given to Europeans – the whitewashing of
genocides by the conquistadors and other American colonists. It’s also part of
the general romanticism that has characterized post-1960s ideology and culture.
DS: Re: the ‘Fascist’ remark, it’s a common term I see tossed about for all atheists or secular humanists in threads, so not specific to you; albeit people such as you, Dawkins, Dennett, et al. are routinely called that. Speaking of the Kennewick Man, if claims about it hold up, could this mean that the Americas were far more integrated in the world culture than previously thought? Specifically, could there have been Caucasian Americans who predated the Mongoloid ancestral Indians? Also, since there seem to be many legends (and tantalizing hints of fact) of pre-Columbian contact with the Americas- by Phoenicians, Chinese, the Vikings, and even the Welsh- is there linguistic evidence (if not DNA evidence) for such intermingling? I refer to claims of the similarities between Welsh and the language of the Mandan Indians, or the claims of some Central American tribes looking ‘Oriental,’ i.e.- having epicanthic folds.
SP: The linguistic evidence is marginal – without a statistical correction for the
number of similarities you would be expected to find by chance when
cherry-picking words post hoc, one shouldn’t take claims of connections
between remote languages seriously. As for physiognomy, I would think that any
Asian features of Native Americans can readily be explained by the fact that
they are Asians under the conventional
theory – i.e., descendants of Siberians who crossed the Bering isthmus 13,000
years ago. It’s certainly possible that there were pre-Clovis contacts between
the New and Old Worlds, but I had better leave this issue to the archeologists,
and increasingly in the future, to the geneticists.
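Pinker's point about cherry-picking words post hoc can be made concrete with a small simulation. The sketch below uses entirely invented numbers (a toy consonant inventory, a 200-word comparison list, and a three-sense "semantic slop" allowance), so it illustrates the statistical logic only, not any real pair of languages: even between two languages built by pure coin-flipping, dozens of apparent "cognates" show up.

```python
import random

# Monte Carlo sketch of chance resemblances between two unrelated
# "languages". A comparison counts as a hit if a word's initial
# consonant in language A matches that of ANY of several loosely
# synonymous words in language B (the post-hoc cherry-picking that
# Pinker warns about). All parameters are illustrative assumptions.

random.seed(0)

CONSONANTS = "ptkbdgmnslrwjfvzh"   # toy inventory of word-initial consonants
VOCAB_SIZE = 200                   # meanings compared per word list
MATCH_MEANINGS = 3                 # each word may match any of 3 "close" senses

def random_onset():
    """Draw a word-initial consonant for an invented language."""
    return random.choice(CONSONANTS)

def chance_matches(n_words=VOCAB_SIZE, slop=MATCH_MEANINGS):
    """Count spurious 'cognates' in one simulated comparison."""
    hits = 0
    for _ in range(n_words):
        a = random_onset()
        # a hit if any of the semantically 'close' candidates matches
        if any(random_onset() == a for _ in range(slop)):
            hits += 1
    return hits

trials = [chance_matches() for _ in range(1000)]
mean_hits = sum(trials) / len(trials)
print(f"average chance 'cognates' per {VOCAB_SIZE}-word list: {mean_hits:.1f}")
```

With these made-up parameters, roughly one word in six matches by accident, i.e. some thirty "cognates" per list, which is why raw similarity counts between remote languages mean little without a correction for chance.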
DS: Let me opine on some of the things in perhaps your most famed and controversial book, to date. In a review of The Blank Slate I wrote, ‘SP later opines that people fear that if genes have some influence on people, that influence is conflated with total influence. This is easily disproved, & SP does so at some length & with great clarity. But why do people conflate some with total? Probably because of innate human laziness, & the distortions that pervade the media- especially in soundbiting ideas that need speechifying to elucidate thoroughly.’
Is my reason too facile? Is there a more pervasive or deeper reason as to why
people always seem to think in such black and white terms?
SP: In The Stuff of Thought, I note that all-or-none thinking is embedded in language. When we use a noun as a subject or an object, the natural interpretation is that the referent is affected or located in toto. For example, John drank the glass of beer suggests he drank all of it (compare John drank from the glass of beer), and The garden is swarming with bees suggests that all parts of it contain bees (compare Bees are swarming in the garden). This is common across languages (possibly universal), and probably reflects the way thoughts are constructed, with pointlike or bloblike symbols standing for complex entities. I suspect that this habit makes statistical comparisons (such as apportioning variance, or analyzing overlapping bell curves) highly unintuitive.
The phenomenon you mention also reflects the mental anchor points that people begin with, which in the case of 20th-century intuitive psychology was the blank slate. If that’s your starting point, then any deviation from zero- whether 1% or 100%- becomes an equivalent heresy.
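Pinker's remark that overlapping bell curves are highly unintuitive can be illustrated with a short sketch. The numbers below are an invented example (a half-standard-deviation group difference, not any real dataset): even a "real" group difference leaves the two distributions mostly overlapping, which is exactly what all-or-none thinking misses.

```python
from statistics import NormalDist

# Two groups differing by 0.5 standard deviations on some trait
# (an illustrative effect size, not real data).
group_a = NormalDist(mu=100, sigma=15)
group_b = NormalDist(mu=107.5, sigma=15)   # shifted by 0.5 SD

# Overlap coefficient for two equal-variance normals: 2 * Phi(-|d|/2),
# where d is the standardized mean difference.
d = (group_b.mean - group_a.mean) / group_a.stdev
overlap = 2 * NormalDist().cdf(-abs(d) / 2)

# Probability that a random member of B outscores a random member of A:
# the difference B - A is itself normal with the summed variances.
diff = NormalDist(mu=group_b.mean - group_a.mean,
                  sigma=(group_a.variance + group_b.variance) ** 0.5)
p_b_higher = 1 - diff.cdf(0)

print(f"distribution overlap: {overlap:.0%}")        # ~80%
print(f"P(random B > random A): {p_b_higher:.0%}")   # ~64%
```

So a statistically real difference between groups coexists with about 80% overlap between them, and knowing someone's group predicts almost nothing about the individual, which is the statistical picture that pointlike, all-or-none construals of nouns obscure.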
DS: Later, I write: ‘Then again, it may not be so curious since I see 2 modern parallels to twin studies in other endeavors. The 1st is in the relatively hard sciences of cosmology & cosmogony, where the Big Bang theory has held sway despite mounting evidence that does not support many of its conclusions- mainly 1) the conundrum that has bedeviled religiots for eons (updated to): If the Big Bang was the beginning, what came before the Big Bang? & 2) The fundamental absence of a lot of supporting evidence that would be predicted by Big Bang physics- from strings & superstrings to dark matter & dark energy….The 2nd area that much of the twin studies seems to find parallels with is in the belief in Near Death Experiences (NDEs). Let me state that I do not believe NDEs are truly NDEs, but rather the last second panic-modes of a dying brain that are remembered upon improbable revival. & believe me, I’ve had a NDE & believe the latter to be true. That said, there is the classic NDE of a weightless incorporeal essence of you floating toward some bright light where you encounter a Jesus/Buddha/deity & look back on your life surrounded by loved 1s who have died. Unfortunately, this is just not the majority of NDEs. Most NDEs are far more prosaic & range from typical dream-like experiences (such as I had- however bizarre), to downright Hellish nightmares. But only the ‘float to the light’ NDEs are propagated by believers- convenient ‘proof’ of an afterdeath. In this way, NDEs also resemble claims of alien abduction, in that those ‘abductions’ which have occurred in the USA over the last 30 years or so are done by perverse bug-eyed gray dwarves, whereas other cultures report a wild menagerie of extraterrestrial kidnappers.’
Let me work backwards from this: what are your opinions on things such as NDEs, or OBEs (Out Of Body Experiences)? No one has ever produced information gained in OBEs, or past life regressions, that could not have been obtained by ordinary means elsewhere. What of people who have visions of Jesus Christ or the Blessed Virgin Mary (BVMs)? In 1917, there was the infamous Our Lady Of Fatima mass delusion, where some claimed the sun stopped, some claimed a UFO was seen, and others saw Mother Mary. How can so many people be so off-kilter? And what of people who claim to be abducted by aliens and sexually abused and/or experimented upon? Is this Freudianism crawling out of the grave? Has Occam’s Razor- the maxim that the simplest explanation that best fits the known facts is usually correct, fallen to desuetude? It seems that brain studies of recent years are obviating the preternatural claims of all these visions. Similarly, there is not a single case of a supposedly reincarnated person who (via Past Life Regression) has obtained information that could be verified nor that could not be obtained by diurnal means. Also, while it’s true that someone like Uri Geller might be able to bend spoons via mind power alone, since even tyro magicians (much less old pros like James Randi) can do the same, Geller is likely a charlatan (and, indeed, has been proved such on more than one occasion). Why has common sense fallen out of favor?
SP: I don’t understand your criticism of the Big Bang theory. The question of what came before the Big Bang is not “mounting evidence” against it but a permanent conceptual puzzle, whose solution surely is that our folk concept of “before” (and other common-sense notions of time and space) breaks down at boundary and extreme conditions such as the birth of the universe, where we have no right to expect them to apply in the first place. Also, I’m no physicist, but my understanding is that the existence of dark matter and energy does not challenge the idea of the Big Bang (as opposed to the steady-state alternative), and neither does the empirical difficulty of validating string theory.
As for near-death experiences – I completely agree. I don’t think common sense has fallen out of favor. On the contrary, as my former collaborator Paul Bloom argues in his book Descartes’ Baby, common sense is fundamentally dualist: we naturally think that people have minds that can part company with their bodies. Recall our discussion of dreaming above, not to mention the deeply unintuitive nature of death – it’s hard to imagine a person, especially the self, simply ceasing to exist. It’s the materialist view that is unintuitive, and that constantly needs to be reinforced by skeptical science.
DS: Still working backwards, let’s turn to Twin Studies. I am adopted, and in my mid-20s, found my natural father and brother. Yet, I sense that I am different from most people in such studies. While I am not a twin, my natural brother and I have little in common, and after a decade of trying to foster a relationship, both he and my father drifted out of my life. The two of them share with me only a tendency to be able to shuck off adversity. In terms of politics, social
views, finances, etc., we are worlds apart. By contrast, my adopted dad was a
trade unionist, blue collar man with a 5th grade education, and yet
his ethos seems to have slipped into me osmotically. I share much of his view on
life, and empathy for working class, minority, and poor people. Were it not for
his presence I’d likely be in prison. My adopted mom also resonated with me in
encouraging my quests for answers. Even my adopted sister (no blood relation)
shares a far greater sense of the world (although not nearly as intellectual)
with me than any of my blood relatives did. Perhaps, the fact that I am creative
has something to do with skewing the norms in such situations, but I am a bit
more skeptical of twin studies. To me, it seems that most of the similarities
are trivial matters. As in the underreporting of non-float toward the light NDEs,
or the skewed IQ questions, I suspect that much of the claimed similarities
arise from an expectation of such on the part of those conducting the
experiments, with inconvenient discrepancies overlooked. Not that the idea that
identical twins raised apart won’t have obvious similarities. I just find much
of it likely overstated. Have there been dissenting studies?
SP: See our discussion of statistical thinking above – as with all large-scale generalizations, “your mileage may vary.” Also, I wonder how many of your beliefs and values were shared with your peers and other kids in your neighborhood, as well as with your parents. As I point out in the “Children” chapter in The Blank Slate (based largely on Judith Rich Harris’s work), there is a huge three-way confound in most people’s experience between genes, parenting, and peer culture. Your being adopted severs the confound with genes, but unless your family was unusual in your neighborhood (e.g., migrants from another country or culture or class) it’s hard to disentangle parents from peers, and when they do dissociate (as in generational changes like the 1960s, or with immigrants), usually the peer values predominate.
As for twin studies – these are truly robust effects, repeatedly replicated with
multiple converging methods (twins, non-twin siblings, other relatives, and
adoptees), often in massive samples from data-hoarding Scandinavian countries.
They also jibe with many people’s everyday experience. I’ve received many
emails from people with opposite stories to yours, in which they feel an instant
affinity with suddenly discovered biological relatives. And the correlations are
not just in amusing but trivial traits like flushing the toilet before you use it,
or keeping rubber bands around your wrist – they are found in consequential
life outcomes like college attendance and success, income level, vulnerability
to addiction and psychiatric disorders, and likelihood of getting divorced or of
getting into trouble with the law.
DS: Let me skip about a bit, and ask some queries based upon things I’ve gleaned
from doing research online about you. In an
online video interview you quote philosopher Colin McGinn, who claims that
philosophy is the study of things the human mind is incapable of understanding.
Do you agree? And are there limits to human knowledge- certainly individually,
but as a species? Could other sentient beings know the cosmos and its
‘truths’ differently from the way we do?
SP: Yes, I expanded McGinn’s argument in the closing discussion of How the Mind Works. Certainly qualia, or the “hard problem” of consciousness (why first-person subjective experience exists), is a good candidate for a problem that we can pose for ourselves but for which we can’t even imagine a satisfying answer. (This is why Dennett can coherently argue that it’s a pseudo-problem, while courting the incredulity of every sentient agent who, like Descartes, cannot doubt the fact of his own subjective awareness.) (By the way, it’s crucial to distinguish the hard problem from the so-called “easy problem,” namely the cognitive, neural, and evolutionary basis of the conscious/unconscious distinction. Most people, including scientists, confuse them.) The implication is that other species, if they had brains that were not confined to discrete combinatorial reasoning like ours is, might indeed find it child’s play to explain how neural firings observed from the outside could feel like something from the inside.
DS: And what of non-human terrestrial intelligences? Could the oft-touted less than
2% genetic difference between humans and chimpanzees be merely an insignificant
difference in the grand scheme, no matter how much we see it as ennobling us
above all other creatures? And whales seem to have quite a complex
‘vocabulary’- even beyond the great apes? Elephants, in recent years, have
shown surprising complexity in their social structures, and individuation-
unlike social insects, where complexity arises only en masse, not individually.
Could you ever foresee a Planet Of the Apes scenario, where some
‘lesser’ species supplants man as the dominant force on the planet?
SP: It’s a mistake to think that there must be a dominant group of animals
“ruling the earth,” as the old museum exhibits used to say. As E. O. Wilson
has pointed out, if any group of animals is the dominant force on the planet
today, it’s the insects. Intelligence is a gadget that is selected when its
benefits (in particular, outsmarting the defenses of other plants and animals)
outweigh the costs (a big, injury-prone, birth-complicating, metabolically
expensive organ bobbling on top of your neck). And that probably happens only
for certain kinds of organisms in certain ecological circumstances. It isn’t
a general goal of evolution, or else we’d see humanlike intelligence
repeatedly evolving. Since elephants and humans have not been primary ecological
competitors for most of the evolutionary history of the elephant, it’s
unlikely that they’ve been waiting for humans to get out of the way before
getting smarter. It’s more likely that they are at an adaptive plateau in
which still-better brains aren’t worth the cost.
DS: What is the difference between consciousness and sentience?
SP: In How the Mind Works, I used “sentience” as a more reader-friendly
synonym of “qualia” and “the hard problem of consciousness” – see my answer above.
DS: How much of a role does
celebrity play in science- i.e.- how much of Stephen Hawking is his ALS? And to
what degree is your fame- outside of science, dependent upon your moptop?
SP: Certainly Hawking is a brilliant physicist whose standing within physics had
nothing to do with his illness. As for me – yes, it’s the hair.
DS: As a psychologist, are you ever embarrassed by the Lowest Common Denominator Dr.
Phil types in the media?
SP: I’m not familiar enough with Dr. Phil, unfortunately.
DS: Have you ever watched Michael Apted’s The
Up Series documentaries? What are your thoughts on it as a longitudinal
study of human development? How about sociologically? Do you agree with its
epigraph, the Jesuit proverb, ‘Give me a child until he is seven and I will
give you the man.’?
SP: Yes, I enjoyed one of the programs very much. It vividly shows the continuity of personality over the lifespan – and also some of the contingent unpredictability. I wouldn’t agree with the epigraph for at least two reasons. One is that it ignores the roles of genes and chance. The other is that it assumes a critical period for the development of personality and character (up to age seven), whereas development continues well beyond that age (just think of kids who immigrate after the age of seven and assimilate fully), and adolescence is surely a critical stage in that development.
DS: On May 9th, 1961, FCC Chairman Newton Minow famously derided
television as a ‘vast wasteland.’ Manifestly, with hundreds of channels now,
this is even more true, and the Internet only magnifies that verity, with all
its demagoguery and wrong information, Nigerian scams, Viagra and penis
enlargement ads, porno, political hate blogs, scams, online gambling, anonymous
defamations, bile, trolls, etc. Are all knowledge-potential technologies doomed
to the lowest common denominator? If not, are there steps to ameliorate it, or
will time just have to do what it does, level the garbage to dust?
SP: I don’t agree – the Internet is like the printing press or movie cameras in that it doesn’t care what kind of content it disseminates. There is a lot of dreck on the internet because human beings produce and consume a lot of dreck. Remember that half the population is below average in intelligence; ditto for taste and judgment. But the internet is creating a mind-boggling advance in human knowledge and discourse. Not only does it provide instant, searchable access to past issues of journals and magazines, searchable texts of all the great classics, instant access to much of the world’s art and music, and countless quantitative databases, but it provides a forum for the expression of a much wider range of informed opinion than were served by the oligopoly of print-based media with “New York” in their titles. The range of intelligent opinion that you can get on www.artsandlettersdaily.com, www.edge.org, www.slate.com, www.3quarksdaily.com, and so on simply dwarfs what you find on the New York Times op-ed page.
DS: You are, indeed, associated with Edge.org, and last year suggested that the
query they pose to some of the leading thinkers in the world be,
‘What’s your dangerous idea?’ I found it interesting that so many of the
ideas that some considered dangerous were in direct opposition to each other. As
example, one person might claim that God exists and another that there is no
God. Or that we are alone in the universe as a planet that supports life, while
another claims life is everywhere. But, I got the biggest laugh from the fact
that one of those queried was ex-Monkee Mike Nesmith, and his almost comical
response about the nature of time and reality was so ignorant and ballocksed I
had to believe someone slipped his answer in there as a joke. Why did you
suggest that query? And were there any answers, aside from Nesmith’s, that
left you just shaking your head, saying, ‘What in the blue hell is wrong with
so and so?’ If so, which one(s)?
SP: I proposed the question because science is increasingly turning up heterodox ideas and the internet is increasingly blowing their cover. Whether or not we end up giving an open forum to all ideas, we need to think about the issue of when and how to discuss them. This was the subject of a course I taught with Alan Dershowitz at Harvard last semester (“Morality and Taboo”), and of the essay I wrote in connection with that feature that was printed as a preface to the resulting book and in the Chicago Sun-Times.
DS: Your reply to your query was, ‘Groups
of people may differ genetically in their average talents and temperaments.’
Can you expound on this? And why do so many see this as being a racist
sentiment? At the extremes, one would not expect short, squat Eskimos to be able
to run as fast as certain lean and lanky Africans nor Andean Indians who were
born and raised in higher altitudes. Duh!
SP: Rather than expounding on this, I’ll take advantage of the internet and point to my past exposition.
DS: My wife said that her dangerous idea was ‘Why?’ I.e.- why all
and not none? The idea of first causes. If you are familiar
with the great old British television show from the 1960s, The
Prisoner, written and starring Patrick McGoohan, you’ll recall an
episode titled The General, wherein McGoohan’s character, #6, defeats
the titular room-sized supercomputer by asking the unanswerable Möbian
question, ‘Why?’ Nowadays there is talk of quantum computing, and even the
idea that the cosmos is a giant quantum computer. But, is that simple question
McGoohan asked an intellectual quale? Is ‘first cause’ not simply a
theological conundrum? Why this? Why that? Why time? And is the answer likely
what Woody Allen suggests at the end of his great film, Crimes And
Misdemeanors? That the search for reason and meaning will always be
fruitless as long as we look out there, that it is we who invest things with
meaning, so we should be careful what we grant? I.e.- it is the engagement of
the mind with the real that is the source of wonder.
We don’t have a right to expect that there will be an answer to all the
“why” questions we can pose. Anyone who remembers their three-year-old’s
“why” stage, in which every answer was followed reflexively by another
“why?”, can appreciate that not all “why” questions have answers. Some
– like “why is the sky blue?” – only have “how” answers; we can
explain how the fact came about, but not what purpose it serves, since it serves
no purpose at all. (Purpose exists only in the realm of intelligent agents like
humans, and perhaps certain feedback-driven cybernetic processes like human-made
machines and natural selection.) Ultimate “why” questions (why was there a
big bang? why is there something rather than nothing?) are unanswerable by
definition, since regardless of the answer you propose, our inner three-year-old
can always follow up with yet another “why?”
DS: To finish up on this question, my ‘dangerous idea’ is that
there are no truly dangerous ideas, only dangerous actions, and that the placing
of blame on mere ideation removes culpability from the individual and actions-
something the Nuremberg Trials should have obviated. Any thoughts?
SP: In my essay, I follow the great Talmudic tradition of arguing a position as
forcefully as possible and then switching sides. That is, I consider the best
arguments for discouraging (though of course not prohibiting) the airing of
certain ideas. For example, people are responsible for the consequences of their
actions, including their public statements, and publicizing a scientifically
unproven idea that is guaranteed to increase racism in an inflammatory time and
place (e.g., a biological basis for increased ambition among Jews in the Nazi
era in Germany, or for black-white IQ differences in the early civil rights era)
is not morally unproblematic. Also, there are numerous circumstances in which as
individuals we rationally choose to be ignorant. We may choose not to know the
outcome of a football game we have recorded and hope to watch later, or who got
the placebo and who got the drug in a controlled clinical trial, or some
sensitive information that could make us vulnerable to kidnapping or extortion,
or a threat – the proverbial “offer we can’t refuse.” Perhaps the same
is true for collective intellectual discourse.
DS: Let’s turn to social engineering. You’ve been critical of the attempts of
social engineers to micro-manage society, and blame this on the baleful
influence of The Blank Slate ideologues, such as the architect Le Corbusier, who
wanted to level and rebuild Paris, France. Growing up in New York City, we had
our own such ideologue- Robert Moses, who destroyed neighborhoods and
contributed to the government’s redlining and de facto theft of property
wealth from black and minority communities. What are your thoughts on such
macro-attempts to control the populace? Have they all fallen to the dustbin of
history? Or, are there Brasilias still on drafting tables?
SP: Yes, and also Boston’s West End (now home to a high-rise desert) and Scollay
Square (now the brutalist City Hall and “Government Center”). Thankfully,
the “new urbanism” seems to have slowed this kind of thing down, thanks to
its acknowledgment of human needs like green space, intimate places for social
interaction, human scale, and resilient, bottom-up social organization.
DS: I have always felt there is a difference between secular ethics (which are
immanent) and religious morals (which are imposed from on high). What are your
thoughts on the difference? Is there a deeper human set of values that all
share? Also, do humans need to be tricked into acting altruistically? If so, is
a theoretically altruistic political system like Communism possible? Was it
flawed because it did not acknowledge altruism needs to be the medicine slipped
into a piece of candy, and cannot be forced?
SP: I don’t think people literally have to be tricked into altruism, but it’s
clear that some kinds of altruism are more natural than others. People are more
likely to sacrifice for their family and friends than for strangers, and they
are more likely to sacrifice for people they perceive as part of their clan or
extended kin (which is how ethnic groups are perceived, even if genetically
there’s not much basis for the perception) than for an abstraction like
“society.” It’s probably not a coincidence that successful welfare states
tend to take root in ethnically homogeneous societies, and that widespread
immigration tends to threaten their popularity. It’s also not so clear that
Communist collectivization was, either in perception or reality, a kind of
altruism. The inefficiencies of massive central planning, the aggrandizement of
leaders in the cults of personality, the pursuit of ideological dogmas in place
of feedback-guided adaptive policies, the ethnic favoritism, and the massive
corruption of those entrusted with overseeing the collectivization, meant that
even the kernel of altruism that theoretically characterized Marxist social
planning was not feasibly implementable by state force.
DS: How about materialism, and relativism? As example, if I kill you, does it matter
if I kill you because I’m an anti-Semite, I hate atheists, I hate your hair
while I’m bald, I’m a sexual deviant, a terrorist, a druggy looking to mug
you, a hitman assigned to ‘take you out’ because you trespassed against a
Don, or because you stole a girlfriend from me years ago, and I never got over
it? The net result is you’re still dead. Does this not make ‘hate crimes’
silly, since they punish perceived/subjective motive rather than the act itself?
SP: I’m not sure what this has to do with materialism or relativism, but I do think it’s legitimate to consider a perpetrator’s motives when determining criminal punishment. The reason is that the ultimate goal of criminal punishment is deterrence, which very much depends on who is tempted to commit a crime and under what circumstances. That’s why we already have an insanity defense, and already distinguish cold-blooded premeditated murder from manslaughter committed in the heat of passion and involuntary manslaughter that results from a fight or from irresponsible conduct. In The Blank Slate, I suggest that it’s not a coincidence that we assign the most responsibility to those people who would most easily be deterred by a policy of holding such people responsible. Though I share your distaste for laws that would implicitly make a black or a gay or a female life more worthy of protection than a white or a straight or a male one, conceivably one could justify a law that sought to deter a category of crimes that might otherwise have been left too tempting by the rest of the criminal justice system, like picking out a gay person at random for a beating.
DS: How about an afterdeath? Is there a material possibility for consciousness
outside of physical means? Whether or not we speak of ‘ghosts,’ or possible
alien life forms?
SP: No, the evidence points to death after life, not life after death. As far as we
can tell, our own consciousness depends entirely on physiological processes
taking place in our brains. Whether a robot or computer or alien made from
silicon could be conscious in the hard-problem sense is one of those
imponderables we discussed earlier.
DS: Let me pepper you with some questions from observations I’ve had over the
years, and see what your opinions are, and whether any of these things have ever
been studied or documented. In all my years in the arts, I’ve found that men
still dominate, in terms of quality. Even in artistic lean times, like these.
Even my wife agrees. I believe that this is because men take risks, so there is
a higher possible payoff. Women with talent, that I’ve known, tend to be too
emotionally attached to their art, whereas men can objectify it, and say,
‘That sucks,’ and start again. Yet, when I think of great female artists-
such as painter Georgia O’Keeffe or poet Sylvia Plath, they exhibited definite
masculine tendencies- riskiness, aggressiveness, a lack of demureness. Any
thoughts? And outside of the arts, just how much of gender is actually sex?
SP: I’ve ceded the family franchise on the psychology of sex differences to my
sister Susan Pinker, the Globe
and Mail columnist, whose book on the topic will be published early in 2008.
DS: Earlier I mentioned the idea of the ‘gay brain’ or ‘gay gene,’ and it
seems like people always want easy solutions. Yet, people are always obfuscating
terminology. Re: homosexuality, why is it the term ‘homophobia’ is used to
describe people who are not keen on gays. Literally, it means to be fearful of
homosexuals, yet I’ve never met an anti-gay bigot who was afraid of
homosexuals (although I know gays are trying to back-smear those bigots with
being closet cases, and ‘fearing’ their own sexuality). Rather, almost all
people with an anti-gay view have a disgust, queasiness, or ‘yuck’ factor
when thinking of homosexual acts. I think a term like ‘homotaedium’ or
‘homotaediot’ would therefore be more accurate. So why has such an inapt
term as homophobia taken off?
SP: Actually, homophobia literally means “fear of the same.” The sex part got
omitted, presumably because homosexualphobia
is too long and clumsy. This happens all the time – fax for facsimile telegraphy,
British telly for television,
and so on. The phobic suffix can be
used to refer to mere avoidance, as in the chemistry term hydrophobic,
“repelled by water.” As I discuss in Stuff,
there is a lot of caprice in which neologism takes off. Homophobia is more-or-less transparent, which helps, whereas homotaedium
would be opaque to most English speakers (a feeling of tedium when watching
reruns of Will and Grace?). In general, erudite analyses about what term ought
to be used by the public go nowhere. When I was a child I remember some guy
arguing that automobile should be
dropped in favor of autokineton because
auto is Greek and mobile
is Latin. We’re still waiting.
DS: Of course, there can be political reasons that poor word choices are used. In
the mid-1980s, boiled cocaine became known as ‘crack’ and was posited as a
new drug threat, when it had really been around since the 1960s, known as
‘pop.’ It was just that it needed a scarier name since white suburban kids
were now dying from it. Then there are words like liberal, conservative, and
libertarian, although people with those claimed political views rarely embody
them. I.e.- what liberal would ban books, what conservative would ban abortion,
and what libertarian would shill for corporations? Even in the arts, the
bastardization of words goes on. People cannot distinguish between a good or bad
review and a positive or negative one. A good review can be positive or
negative, if it makes its points well, and accurately displays an artwork’s
flaws or strengths without bias. Even in criticism, critics often mistake
subjective like or dislike of a work for objective excellence. Why is this?
SP: Good and bad, like most common words, are polysemous – they have many meanings, when you stop and think carefully about them. So I don’t think it’s correct to say that a good review must mean a well-crafted one as opposed to a favorable one. Adjectives like good tend to modify the aspect of a noun whose variation is most relevant to the context. As I note in the book:
Polysemy is everywhere. A sad movie makes you sad, but a sad person already is sad. When you begin a meal, you eat it (or, if you’re a cook, prepare it), but when you begin a book, you read it (or, if you’re an author, write it). What makes something a good car is different from what makes it a good steak, a good husband, or a good kiss. A fast car moves quickly, but a fast book needn’t move at all (it just can be read in a short time), and a fast driver, a fast highway, a fast decision, a fast typist, and a fast date are all fast in still different ways.
You clearly cannot see anyone's point of view other than your own.
Students in my class will the use of they're to be spelled there - but it is not
Me: That's because it is two different words. Alot and a lot are variants of the same word- not to, too, and two. And you've not even acknowledged the fact that alot is plural, while a single lot is singular. There is no single alot (or a lot). The a- prefix pluralizes the word. Also, a student is not an artist that is changing an art form.
And on it went.
DS: Yes, emailers are often lonely and disgruntled people, but if high school
teachers are this bad and unable to grasp the power of language and ideas, does
that make freaks out of researchers like you and writers like me? Is emailese
fated to displace good writing and readers of depth, thus robbing folks of
inflection and emotions which can defuse the rampant online anger? I also notice
how people like this teacher are the first to point to older examples of things
that were criticized in their day, yet ignore their contemporary equivalents.
I.e.- those critics who damned Walt Whitman, Emily Dickinson, Oscar Wilde, or
the Impressionists, are laughed at by the very people who, in earlier times,
would have been condemning the aforementioned. This reminds me of Thomas
Kuhn’s The Structure Of Scientific Revolutions. Do you agree? And what
are your views on Kuhn’s posits?
SP: Attempts to make spelling and grammar more “logical” tend to ignore a
basic design feature of language (and all other communication systems): there is
always a tradeoff between standardization (an entire community abiding by a
single protocol) and logic. Sure, Betamax was technically better than VHS, the
Dvorak keyboard better than QWERTY, OS/2 better than MS-DOS, and so on, but
there’s also an advantage in using the same code that everyone else is using,
even if it is suboptimal.
In the case of language, this tradeoff is amplified by the fact that no one gets to
design the code from scratch or dictate who uses it. The standardization arises
from countless lateral interactions, like the synchrony in a school of fish, or
the way that young people all decided to wear their baseball caps backwards in
the 1990s. A single influential writer like Shakespeare is, to put it mildly,
rare. And there are numerous criteria for “logic” or “good design” in language,
and they conflict with one another. Should we aim for maximum transparency,
where every syllable stands for a single concept and people can coin neologisms
at will? But this would require six- and seven-syllable words – would it be
better to aim for brevity and efficiency? Should a language have redundancy, so
that a mumbled consonant doesn’t lead to comical misunderstandings? Or no
redundancy, so as to maximize brevity and transparency? It’s because of these
numerous tradeoffs that none of the “perfect languages” of the Enlightenment
(discussed in Umberto Eco’s marvelous book The
Search for the Perfect Language) caught on. A major theme of my own book Words and Rules, as well as The
Stuff of Thought and The Language
Instinct, is that all human languages are shaped by these tradeoffs.
In the case you discuss, I’m not sure I see the rationale behind spelling a lot as alot. I suppose
one could argue that the a is a clitic
that is pronounced as a unit with the following noun and therefore should be
joined to it in spelling (the fact that it has the phonologically conditioned
variant an would support the clitic
analysis). But of course that would mean a loss in transparency, as readers
would no longer see the a and lot
as separate units – the same a as
in a dog, and the same lot as
in lots and a whole lot. The argument could go either way, if there ever was
deliberation over which system to adopt. But the point is that there never was
such a deliberation, and never will be. English has just evolved that way, with
morphological rather than phonological spelling. It’s hard enough when a cadre
of copy-editors, English teachers, and usage mavens try to enforce a standard
that people en masse tend to flout, but a single person trying to do it, no
matter how much logic is behind him, is like ordering the tide back.
DS: Thanks for doing this interview, and let me allow you a closing statement, on whatever you like. Hopefully there may be some seeds you can use here for your next project.
SP: Thanks for your rich and insightful questions. No closing statement – I’ve said enough.