The Dan Schneider Interview 27: Larry Sanger (first posted 2/9/11)

 

 

DS: This DSI is with epistemologist Larry Sanger, a man best known for helping to found the online encyclopedias Nupedia, Wikipedia, and Citizendium. There’s so much good stuff to plumb that much will have to be left out. Nonetheless, I want to delve into your opinions on a plenum of subjects- the philosophic, religious, political, and pop cultural. For those readers to whom your work and your name are unfamiliar, could you please give a précis for the uninitiated on who Larry Sanger is: what you do, what your aims in your career are, major achievements, and your general philosophy, etc.

 

LS:  OK, here is my resume.  I was born Lawrence Mark Sanger in Bellevue, Washington on July 16, 1968, in a Lutheran church-going family, as the youngest of four.  In December 1975 we moved up to Anchorage, Alaska, where my Dad had a job as a marine biologist.  In my teen years I’m afraid I lost my faith entirely and became heavily interested in philosophy.  I went to college from 1986 to 1991 at Reed College in Portland, Oregon (B.A. Philosophy), taking six months off in Munich, Germany, and then a year off processing insurance claims.  I went to graduate school at Ohio State from 1992 until 2000 (Ph.D. Philosophy), again taking a year off, back in Alaska in 1996-7.

  I have had all sorts of menial and weird jobs.  I worked for a short time on a Y2K website in 1998-9 and as an instructor of Irish fiddle in 1999-2000.  Then I moved out to San Diego to start Nupedia and, a year later, started Wikipedia.  A year after that I was out of a job and back in Ohio, and wound up teaching philosophy at local colleges around Columbus.  Then in 2005 I went back to California, this time near Santa Cruz, to help organize some encyclopedia projects (one of my titles was “Director of Encyclopedia Projects”) for something called the Digital Universe Foundation.  All I have to show for that is the Encyclopedia of Earth, which I helped get off the ground but with which I have had nothing to do since 2006.

  In September 2006 I announced the impending launch of a Wikipedia competitor, Citizendium, which means “the Citizens’ Compendium.”  We’ve been developing publicly for a little over four years now.  While working on that I was contacted by a retired businessman and philanthropist, one thing led to another, and he is funding a new educational video project, WatchKnow.  Lately I’ve been spending most of my time on that.  Since Wikipedia grew in the public eye, by the way, I’ve been asked more and more to speak, so I’ve been speaking and writing a fair bit as well.

  As to the aim of my career, sometimes I wish I knew.  One thing I apparently haven’t been aiming at is making a lot of money or a huge popular success, or I wouldn’t have made a lot of the seemingly puzzling career choices I’ve made.  When I was a philosophy student, I wanted to be a philosophy professor, but by 1995 I had started to go to professional conferences and was completely disillusioned about what goes on in academia.  I thought that most of the papers I heard were transparently lame attempts to apply already lame theories, and the whole thing seemed to be just a game or a joke in an effort to get academic jobs and buff vain professional reputations.  I didn’t really want to be part of that sort of system.  Some people might be surprised to hear me say this, since Citizendium has unjustly garnered the reputation of being merely a “Wikipedia for academics,” which doesn’t really describe it well at all.

  I had a reason for wanting to be a philosophy professor.  This is going to sound a little strange, because when I used to ask fellow graduate students why they were studying philosophy, they would tell me with a funny look, “To get a job as a philosophy professor,” as if I were crazy for asking and as if their answer put an end to the matter.  But I saw being a professor as a means to an end of developing a system of philosophy, something that is laughably out of fashion and has been for over a hundred years now.

  I wanted to develop a system of philosophy because I noticed when I was a teenager that the reason that most people are unhappy, and the reason that there are so many problems in the world, is that so many people have so many false thoughts.  I can give you an idea of what I meant.  I noticed acquaintances who had at a very early age become completely strung out on drugs (it was a fairly common problem in Alaska in the 1970s and 80s), apparently because they had gotten the idea from Cheech and Chong or whatever—from the drug culture in general, which had become part of pop culture—that drugs are cool, or a gateway to higher understanding, or whatever.  I thought that it was fairly obvious that they were full of all sorts of false ideas about drugs, which one could (with enough time and patience) explain to be false.  Mind you, as a libertarian, I think that drugs should probably be legal—politically, we should be free to choose or reject them.

  To get back to the point, it seemed to me that people destroyed their lives (or merely made them mediocre) and that entire civilizations had collapsed under the weight of bad ideas.  Like the man said, Ideas Have Consequences.  So I had the—perhaps nutty—idea that I would develop a system of philosophy, and that this system of philosophy would convince a lot of people of a better way of thinking about how the world works (in a broad, philosophical sense) and, consequently, how to live in it.  Becoming a philosophy professor was my means to this end.  Then, as I was saying, in 1995 I became fairly well convinced that becoming a philosophy professor wouldn’t help me particularly in this aim.  I did go on to finish the Ph.D. anyway.

  You might of course wonder if I still have the aim of developing a grand philosophical system that will change the way the world thinks.  The answer in short is “sort of.”  More than ever, the Western world is gasping desperately for a coherent world view to replace the socially unifying and personally stabilizing effects of the Christian world view.  I would like to write and do various things that might help satisfy the need.  For now, let’s just leave it at that.

  Above, I’m afraid I might have made myself sound like an idiot, but I’m trying in a spirit of honesty to convey the flavor of how I thought about things when I was younger.   I was a skeptic even before I had read Socrates, Descartes, or Hume.  In my more sober moments, of course, I never really did have a huge amount of faith that I would ever complete a philosophical system, much less that it would have any sort of social impact as I had imagined.  Still, I thought the pursuit of truth was a worthy goal in any case.

  I can justify my work on Citizendium and WatchKnow (and some of my other projects) in this context; they are attempts to help enlighten the world.  Much more exciting to me than anything else I’ve worked on, however, is the idea for the Collation Project described on Textop.org.

 

DS: What exactly does an epistemologist do? And what exactly is epistemology, in a non-Websterian definition? And how does it differ from such things as ontology?

 

LS: First, let me say that my old dissertation advisers, themselves distinguished epistemologists, would be appalled that I am being made to speak on behalf of epistemology.  I never published anything peer-reviewed in epistemology, unless my dissertation itself counts, and I’m afraid I am, especially since it has been nine years since I earned my Ph.D., very far from being anything like an expert in epistemology (the academic subfield of philosophy, of course).

  Bravely, however, I’ll give you the literal answers of the sort most philosophers would give: an epistemologist is someone familiar with the literature in epistemology, who has published a few things about epistemology, and who probably works as a philosophy professor.

  Perhaps what you wanted me to say, however, was something like this: since epistemologists study knowledge and its standards, they have Deep Thoughts about the passing scene.  They have trenchant analyses of the mistakes of your political foes, exposing them as bigoted dogmatists.  They deconstruct the leading follies of the day.  They explain, very profoundly, how the variety of perspectives on this or that points up the fact that there is no objective knowledge.  Epistemologists don’t generally do these things, or at least, not in their capacity as epistemologists.  Philosophers generally have too much modesty and sense—or more likely, they fear the sneering contempt of their fellow philosophers too much, should they make some blunder—to make any significant comments on the passing scene.  Epistemologists rarely attempt to comment for a wide audience on “the epistemic value” of influential, everyday thoughts and phenomena they encounter.

  Sadly, philosophers generally leave it to other people to comment about what are philosophical matters.  Even biologists, of all people, have gotten into the act, so that the study of ideas is reduced to “memes” and the most famous contemporary atheist is Richard Dawkins.

  Anyway, you probably didn’t want to know all that.  Probably you wanted to know something like a standard definition of epistemology and what it has to do with ontology.  OK, so here goes.  Epistemology is the study of knowledge: what it is (so, something like an analysis), the conditions under which we have it (so, when beliefs are justified or warranted, and what true beliefs are), what the ultimate sources of our knowledge are (our senses, reason, perhaps imagination; but is there anything beyond that, like revelation?), and whether we have any knowledge at all (so, how to respond to skepticism).

  Ontology has been said to be the study of being, or of what things there are.  If you think something exists, then it is in your ontology.  What ontology is mainly, I think, is the study of the most fundamental categories.  For example, sometimes philosophers like to say that objects can be “reduced” or explained in terms of their properties and relations.  Then properties and relations are among the fundamental things, while objects are to be understood only in a derivative way.  Another example is the whole debate between realism and idealism, although these days this is usually cast as a debate between realism and “anti-realism,” “irrealism,” and “pseudo-realism.”  The question at issue is whether objects and properties generally speaking enjoy a fundamental reality, independent of us, or whether their existence is instead ultimately to be explained in terms of various mental or social constructs, such as ideas, consensus of some sort, language, or something like cultural posits.

  The following is an oversimplification, but generally we could say that epistemology differs from ontology in that epistemologists in the analytic tradition take for granted that there are mind-independent objects—or there are if we have knowledge in any ordinary sense—and the puzzle comes in how we know about such objects.  In this sense, ontology is “prior” to epistemology.  But if you drop the realist framework in which much of analytic philosophy has worked, then epistemology and ontology are much more overlapping.  This is why, by the way, those philosophers who have contempt for realism (and a lot do) also tend to have contempt for epistemology as it is pursued by analytic philosophers.

  So, for example, if I am remembering correctly (it’s been a while), Wilfrid Sellars argued against “the givenness” of experience, saying—in line with Kant, by the way—that our experience of something allegedly basic, expressible in a sentence like ‘I am seeing a red patch now’, is informed by concepts which are themselves learned by experience.  To make sense of experience is precisely to apply some concept to it, so how can we get basic concepts from experience?  So there is no “basic” or “given” object of experience.  This then raises not just epistemological questions, about how we know things like the colors of objects, but also ontological questions, like whether it makes any sense to say that the concepts which we apply to our experience are in the things themselves, independent of us.  Perhaps they are things we bring to the world.  If not, then perhaps what appears to exist independently does not, in fact.

  That’s just an example of how epistemology and ontology overlap, to the extent that we drop the realist framework—or even just when anti-realist objections are made against the realist suppositions of mainstream analytic epistemology.

  I was trained as an analytical philosopher.  Continental philosophers have a rather different approach to all this stuff.

 

DS: Is experience itself a ‘real’ thing? Or is it the connectedness the experience brings between ‘reality’ and ‘mind,’ even if that experience is the nudging of one atom against another? In short, how do people know what they know and what they do not know?

 

LS: I’m not quite sure where to begin with that one.  When you put “real” in quotes, you are either using the word in a special sense—and I’m not sure what you mean, in that case—or you are casting aspersions on the notion that there could be much “real” about experience.  Maybe you mean both.  Well, I think that we do have experiences, i.e., we aren’t zombies.  (I might have doubts in some people’s cases!)  As to the second question, I cannot make heads or tails of it, not for lack of trying.  Sorry.  I have my doubts that there is something over and above our mental contents, including experiences, called “the mind.”  Besides, if our experiences exist, they are real too.

  Maybe this will help.  I admit that I am an unreconstructed realist.  All the reading of Kant and Sellars, and taking classes from Robert Kraut and Neil Tennant, didn’t take; it failed to clue me in to the naivete of realism.  I remain more a fan of Thomas Reid and G.E. Moore.  This means that I think we are constantly experiencing things (when we are not in deep sleep, anyway), and occasionally (when we are awake, paying attention, etc.) our experiences are of a mind-, language-, and culture-independent world.  As to how people know what they do of the world, that all depends on the type of knowledge.

  Perceptual knowledge has to be understood one way, knowledge of remembered experiences another, and knowledge of anything complex or abstract yet another way.  If, when you ask “how do people know,” you have some skeptical problem in mind, I generally take those things one at a time.

  But I do think that there are some instances of knowledge that we cannot justify any further, i.e., if we try to offer any further account of a belief’s justification, what we say makes it no more plausible that that belief constitutes knowledge than if we had said nothing at all.  Then, like Reid and Moore, I simply assert that those beliefs are instances of knowledge.  My dissertation is in part a defense of this general view.

 

DS: What prompted your career in epistemology, rather than other branches of philosophy or science?

 

LS: I don’t have a career in epistemology, of course.  Those are very hard to come by, actually.  I mean, very few people are hired as tenure-track professors of philosophy where the area of specialization requested is epistemology. I’m quite sure I would never get such a job, at least not for a long time and not after a lot more writing and publishing.

  If what you mean to ask is why I decided to write my dissertation about epistemology (and otherwise make it one of my academic areas of specialization), and why I’ve pursued a good many knowledge-oriented projects (which I have), that I can say something more interesting about, perhaps.  Basically, since I was a teenager, even before reading Descartes, I had the notion that knowing the truth was the most important thing in life, and that so many errors in life stem from being inadequately acquainted with the truth.  So imagine my callow surprise when, taking my first college classes, I learned that so many people thought that knowledge was impossible, or that it was so completely dependent upon social facts like language and culture that there was no such thing as The Truth we might discover and be enlightened by.  Partly in response to such thoroughgoing skepticism and relativism, it seemed to me—like Descartes, on whom I wrote my senior thesis in college—that the first order of business was to discover what the foundation of knowledge might be.  Of course, along the way I learned very well that there are a lot of people that believe that knowledge has no foundation at all.

 

DS: Thinking on an old idea, and one which I’ve heard a few times, as the basis for possible stories- is the idea of being born ‘out of time.’ As example, there are doubtlessly living potential blacksmiths and abacus whizzes whose talents are meaningless today, just as there were potential astronauts or computer programmers centuries or eons ago, who never got a chance to display their skills. To what degree, then, is talent or knowledge, a part of history, in that it may only be useful in certain times and places? And, if such talents are not immanent, what are they? Is the analogy to a potential drunkard who never tastes alcohol in his life apt?

 

LS: What, are you working on a novel or something?  Well, it seems to me that this question would be better put to a cognitive scientist.  It’s not really a philosophical question.  If you want my completely lay opinion about such matters, I can give it but I can’t guarantee I won’t be making a fool of myself.  It seems obvious to me that there are a lot of talents that “carry over” very well from one activity to another.  This is why there are such a lot of good musicians among those who are in technical fields as well as philosophy.  Music is associated with “analytic” and “technical” ability because music is really an analytic ability as well.  To take another example, I think Leibniz desperately wanted to be a computer programmer; he essentially conceived of a universal, computable language, and he actually invented a mechanical calculator.

  There are many instances of what philosophers call procedural knowledge, or know-how, which we might be inclined to think are “immanent.”  But in one way they certainly are not.  There is no such thing as a person who is naturally talented on precisely the violin.  Perhaps we can say that there is a combination of natural factors that, when combined, make him talented on the violin.  But there is no violin-playing gene, so to speak, because violins weren’t around when the human genome was evolving.

  Anyway, I do think there is such a thing as inborn talent, but never an inborn talent for one artificial skill, if that makes sense.  Exactly what combination of characteristics makes an excellent violinist, I haven’t the first clue, but I’m sure it would carry over to other skills.  People who are prodigies on one instrument are usually able to master other instruments quickly as well.

  All that said, I have recently been thinking that the way we are first introduced to some skill, or subject, can deeply influence both our taste and our aptitude for it.  The perfect teacher for a certain student can perhaps make her brilliant where otherwise, with anybody else, she would be mediocre.

  By the way, if you’re looking for lame novel ideas, Da Vinci probably would have made an excellent astronaut.

 

DS: Science historian Robert Proctor coined a term called agnotology, which is about the love of ignorance, for a variety of reasons. I believe that it is prevalent in modern society, and, as I will get into later, almost a Prime Directive of both the Internet and Wikipedia, which is, as a critic once aptly said (paraphrasing), ‘Merely a condensed version of the Internet, not an encyclopedia.’ Do you agree that it is prevalent in society and, especially online?

 

LS: That’s a very paradoxical thing to say though, isn’t it?  I am no poet, and I take things literally.  To say that anyone loves ignorance is to say that they seek it out.  I doubt that anyone seeks out ignorance either for its own sake or as a means to anything else.  Instead, they frequently avoid knowledge, and most people certainly avoid doing anything like hard work in order to gain knowledge.  No doubt one thing Robert Proctor had in mind was people with religious beliefs which he thought were false or unsupported, people who specifically avoid knowledge that appears to come into conflict with those beliefs.  But I would point out that this is also a common feature of academia: sometimes, the beliefs that most stymie learning are taught in universities, especially education schools.  It is also true of many political beliefs of all ideological stripes: the passion with which political views are held blinds us to facts in conflict with those views.  All this admitted, it seems misleading to identify an attitude that prizes ignorance.  Surely we are all ashamed of our ignorance when it is exposed, which means we do not love it.  What we love is not ignorance, but the things that keep us ignorant: prejudice and easy leisure.

  Recently I’ve been reading Susan Jacoby’s The Age of American Unreason, and before that I had finished John Taylor Gatto’s now-classic indictment of “assembly-line” education, Dumbing Us Down.  If I had to lay the blame for the anti-intellectual attitude prevalent especially in the United States—I can’t really speak for other countries—I would lay it squarely at the door of the American education system.  Aristotle said, “All men by nature desire to know.”  I think perhaps he had in mind little children, who have an amazing, desperate thirst for knowledge.  In my opinion, too many people do not indulge their children’s desire for knowledge by talking to them, reading to them, answering their questions fully, and supplying them amply with books and other learning material.

  Then they send the kids off to school where their early love of knowledge is, in most cases, systematically and permanently stamped out.  How?  They are forced to learn from texts written by committee, which nobody likes.  They are taught en masse with all other students at the same level, which is naturally either too fast or too slow for most of them.  In any case, what they are studying today is frequently not what they are interested in learning today.  The interests they do have are usually ignored.  Their main source of emotional support is not their loving parents but a rowdy bunch of peers, who come to view teachers as “the other” and the teachers’ main concern—getting knowledge—as foreign and contemptible.  All this is what too many of us learn to associate with knowledge while at school.  No wonder it seems like such a chore to many people, instead of a wonderful adventure as it should be.  It’s astonishing anybody has any love of learning after such an experience.

  I also recently finished The Dumbest Generation by Mark Bauerlein, and he makes a pretty compelling case that “kids these days” are reading less and less, and are less intellectually curious, partly because they are constantly connected to each other digitally.  I think there may be something to that, but I don’t know of course.

  All that said, it seems really strange to say that reading and curiosity are in decline in an online world that is primarily text, after all.

  As to Wikipedia, I guess I will have to wait until we get into that in more detail before I comment much.  The Wikipedians themselves seem, for all their faults, to be highly curious and frequently well-informed, even if the know-nothings sometimes make spending time there unbearable.  But I would agree that it is actually a sad consequence that the immediate availability of information about so many things has apparently given people—even some educationists, who ought to know better—the notion that they can depend on Wikipedia and the larger Internet, and that they need not improve their minds generally.

 

DS: Sciolism also reigns online, and the Internet, Google, and Wikipedia have led to what I’d term a sciolistic dialectic online. Since Wikipedia and other outlets are so manifestly flawed (as example, why should I be allowed to comment on contemporary Bulgarian politics?), what do you see as a solution to this detrital mass of misinformation? How can the average layman, who wants to improve his knowledge of whatever subject, possibly distinguish the good and trustworthy information from the 99.99% of utter garbage out there?

 

LS: There is no question that Wikipedia shares with the larger Internet a kind of pretentious mediocrity that I find very annoying.  But I would have you bear in mind that these new media are intensely attractive to narcissistic personalities who have inflated  ideas of the strength of their understanding and the importance of their pronouncements.

  But I doubt that the cause of the sheer bulk of the “utter garbage,” as you put it, is sciolism.  It is rather that people are not imparting information for mass consumption; they are conversing and having fun.  The Internet has two purposes, as I like to say: information and communication.  To be sure, a lot of the information deliberately put up for the sake of information consumption is low-quality, but I think that if you could see an Internet with all the social networking and the pure commentary blog posts removed, the situation would not look as dire as it apparently does to you.

  Still, I entirely agree that those of us who want to use the Internet to find reliable information face a serious practical problem.  What I hope for is, basically, a way for the world to come together to rate resources.  What we need is the creation and adoption of a syndication standard: a standard way to publish our ratings of resources.  Then the ratings from many different people would be able to be meshed together and used in many different ways.  The standard could include not just rating data, but also information about ourselves, and our ratings of other raters.  You can imagine how a robust set of resource ratings, and mutual rater-ratings, seeded with information from experts and other reliable sources, could help the world to identify the most reliable resources in a variety of topics.  It would be very interesting to see where the experts and the general public disagreed in their ratings; this sort of system would allow us to make such comparisons.
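  To make the idea concrete, here is a minimal sketch (in Python) of what one record in such a ratings feed might look like.  No such standard exists yet, so every field name, the 1-5 scale, and the example identifiers below are purely illustrative assumptions, not a real specification.

```python
# Hypothetical sketch of one syndicated rating record -- the schema is
# invented for illustration; no actual standard is being quoted here.
import json

rating_record = {
    "resource": "https://example.org/some-article",  # the thing being rated
    "dimension": "reliability",                      # what the score measures
    "score": 4,                                      # e.g., on a 1-5 scale
    "rater": {
        "id": "alice@example.org",                   # who published the rating
        "self_description": "graduate student, history",
    },
    # Ratings of other raters, so aggregators can weight scores by how
    # much experts or the general public trust each rater.
    "rater_ratings": [
        {"rater_id": "bob@example.org", "trust": 5},
    ],
}

# Because the record is plain, machine-readable data, anyone could publish
# a feed of such records, and any aggregator could mesh feeds from many
# people -- comparing, say, expert ratings against the public's.
print(json.dumps(rating_record, indent=2))
```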

 

DS: What are some of the major reasons that people do not think for themselves, and rely on the information they find online, even when easily disproven? I can list a few: a) a fear of personal accountability and responsibility, b) a fear of our own biases, c) a fear of losing our own biases, d) a fear of being ostracized, e) a fear of having to stand alone, f) a fear of what one’s own conclusions might bear, g) a fear of taking on a large power structure (i.e.- why juries rarely take important stands), h) thinking might require too much time the thinker is unwilling to give, and thereby cause a loss of time, relationships, opportunities, etc. All of these things are obvious to me- are there any others that you can mention? And what of those I’ve listed?

 

LS: Well, I don’t claim to know a lot about the problem you mention, or even if it is such a terrible problem.  I’m sure we’d agree that people have never thought much for themselves.  The popular notion that it is a good thing to be a nonconformist only came into being with the Romantics and entered pop culture in the 1960s, if I’m not mistaken.  Now everybody wants to be a nonconformist.  In any event, humans are so strongly inclined toward conformism that you might call it our natural state.

  If I am reading you correctly, you’re suggesting that the Internet exacerbates or at least enables this sometimes regrettable human tendency.  That may be true—as I said, I don’t know about this so I only conjecture.  I wonder why you are so sure that “people do not think for themselves, and rely on the information they find online, even when easily disproven.”  Certainly, the Internet does feed us with ready-made opinions, and comments in blogs and discussions give the impression that there is quite a bit of blind head-nodding going on, without a lot of personal thinking-through.  Also, there is a certain kind of conformism that I think Internet communities are particularly conducive to, called groupthink.  Ten years ago now, Patricia Wallace wrote a fascinating analysis of these phenomena in The Psychology of the Internet, confirming and explaining the whole idea of online groupthink.  Cass Sunstein also wrote interestingly about this in Infotopia.  It is so remarkable and annoying when we come across it that we might well think that the Internet is little more than a series of echo chambers.  I have said so myself before.  But again, if I am going to be really serious and thoughtful, then I have to admit that we’re dealing with an empirical question, and the impressions of the prevalence of the phenomena which you and I share may turn out to be wrong.

  The explanations you give of intellectual conformity online are all psychological, but there are some interesting social or structural reasons you can give too.  The echo chamber effect, or groupthink, is one such explanation.  These phenomena seem prevalent because we (many of us, anyway) are attracted to places which have a lot of people who think the same way we do.  Even if we are otherwise independent-minded and critical-thinking, we might indulge in a lot more uncritical head-nodding ourselves if we are surrounded by people sharing our views.  I think of political sites like huffingtonpost.com on the one hand and freerepublic.com on the other.  Another structural explanation is that, online, our remarks are permanent and highly public, which makes it easier and more socially rewarding to say “yes” than “no.”  If we come out in disagreement, we are essentially throwing down the gauntlet, inviting a debate, and a lot of people (obviously, the quieter sorts) don’t want to do that.

  Of course, in all this I have not acknowledged the fact that the fuel that fires a lot of Internet communities is disagreement and dissent.  The Internet makes it so much easier to find information from “the other side” of virtually any issue that, for the right person, I imagine it would make it easier to become more reflective and independent-minded.  That, I suppose, is how things ought to be.

 

DS: I also coined a neologism- deliterate. It’s a term I came up with in opposition to illiterate. By deliterate I mean the willful choice not to read great or compelling writing. To avoid the classics in favor of reading blogs. To write in emailese rather than proper grammar. Basically, I claim that deliteracy is far more a problem than illiteracy is. Do you agree?

 

LS: More or less, but perhaps you have overstated the problem.  I certainly see a lot of emailese and IM-ese, but then, these are taking the place of spoken conversations, or are conversations that never would have occurred otherwise.  They are also happening between people who formerly wouldn’t have written nearly as much in their day to day lives.  Just think of the “deliterate” messages you’ve received from people you know.  Do you think that the Internet has made them less able to write well?  I don’t think so.

  The notion that the Internet is making us read less seriously was bruited by Nick Carr in “Is Google Making Us Stupid?” and the discussion that followed, and, before that, by Maggie Jackson in her very thought-provoking book Distracted.  In my own case, I read a lot online and I don’t notice less of an ability to read books.  But Carr and his acquaintances have apparently had a different experience.

  If I had to guess, I would say you’re right that illiteracy isn’t as serious a problem as what you call deliteracy.  But there is another problem: low literacy, or “functional illiteracy.”  This is a more serious problem than illiteracy in the sense of being unable to decode the language at all.  Low literacy refers to the inability to make sense of anything beyond the simplest level, including an average newspaper.  If you haven’t been a college or high school teacher, you may not realize how prevalent low literacy is.  It would surely be wrongheaded to claim that low literacy—that many people cannot comprehend what to you and me is simple English—is a lesser problem than that some people are deliberately choosing blogs over books.  The Internet users who can read books but choose not to don’t pose the same sort of problem to themselves and to society as people who cannot read books in the first place.

  I don’t mean to minimize the problem of deliteracy.  I agree that it’s a problem and a significant one.  I would worry about the children of the non-book-readers.  Obviously, you just cannot get anything like an education from blogs in the way you can from the classics.  I just have to doubt that blog reading—or consumption of various kinds of “instant writing,” like Twitter—will in the end beat out extended writings by brilliant individuals.

 

DS: When I have interviewed science writers, I’ve always mentioned that I see a connection between art and science, as different approaches that use different ends of the same method- i.e.- science uses creativity in service to discovery, while art uses discovery in service to creation. Yet, criticism is more akin to teaching; it’s more didactic than creative. Do you think that one might use a similar equation as that I posit between art and science for the relationship between creation and its explication? It’s also the difference between knowing how to build a gun and aim it with skill. Do you agree? Also, how do creativity and discovery relate to what is known and how it is known?

 

LS: I’m afraid I can’t comment on most of this because I can’t understand what you mean (especially “Do you think that one might use a similar equation as that I posit between art and science for the relationship between creation and its explication?”), and I haven’t had many deep thoughts about art and science.  I’m not a scientist or a philosopher of science.  Sorry.

 

DS: To me, criticism is a hallmark of knowing what is known and unknown, and relates to both discovery and creativity. Oftentimes I hear (mostly from bad critics) that criticism should be analysis, not judgment. But, without judgment, analysis becomes recapitulation, which is mere description. Do you agree?

 

LS: Frankly, I’m flattered but puzzled that you’re interested in my opinions about such things.  I’m not much of an artist or critic, and I haven’t even had much training in aesthetics.

  First let me critique your premise that description, “mere description,” can easily or does typically lack judgment.  Terms of aesthetic description often double as terms of aesthetic evaluation.  Even the word “art” itself can be used descriptively or evaluatively—all the more so for words like “lean,” “expressive,” “Romantic,” “lengthy,” “off-balance,” etc.  It is hard not to find judgmental terms in even the most “clinical” and “objective”-sounding analyses of artworks.

  Maybe what you mean, though, is that the critic ought to give a global judgment of the work, a judgment of the work as a whole, saying it is good, bad, or in between.  Well, I don’t really see that that is always necessary.  With reviews, like movie and music reviews, they’re nice to have.  With Criticism of great Art and Literature, perhaps not.  Don’t global judgments usually say more about the standards or biases of the critic, or about his entirely idiosyncratic reaction, than about the work itself?  If you reply by saying, “Sure, but that’s what’s interesting to me,” I would grant you that.  Art really is to a great extent about the expression of worldviews, and articulating our reactions based on those worldviews can certainly be very interesting.

  Do you consider popular reviews to be criticism?  I would say that the nonexpert public consumers of reviews usually want a judgment, or else they don’t get what they are after first and foremost: a recommendation to go take in some work of art.  A movie review would seem to be missing something without a recommendation to go or not go.  To be sure, there are some pieces of criticism which are not reviews, per se, but attempts to come to grips with a work in some way, in which case analysis might be all that is expected or needed.  Perhaps this is what those critics who say criticism should be “analysis, not judgment” mean—I don’t know.  For complex, difficult works, surely analysis is plenty, because it elucidates a work that would otherwise be completely obscure to the viewer.  I’ve bought some art books lately and have found the accompanying blurbs, mainly descriptive and brief, to be extremely useful in helping me to appreciate the artworks.  That may not be deep, impressive Criticism, but it serves a very useful purpose to me.  Sometimes recapitulation is exactly what we want, because we can’t understand the work without such artificial aids.  Indeed, a lot of modern and contemporary art has been virtually meaningless without the accompanying critical analysis.  The work just doesn’t speak for itself.

  I’ll elaborate on the latter point, and though I think it is a common complaint about the contemporary arts, I don’t know if I’ve got the complaint right.  But here goes.  There seems to be a common strand of art and criticism of the last century or so, in which the artwork is a sort of puzzle, and the game is not to take in, appreciate, and get something important out of an artwork by itself.  Instead, the game is pretty much limited to explaining the artwork, thereby illustrating the critics’ role in the artworld in which they and the artwork are parts.  The artwork in itself, absent any explanation, is a fairly bare, puzzling, or shocking cipher.  In other words, art becomes not a transaction between the artist and the audience (viewer, reader, listener, etc.) but between the artist and the critic, whose interpretation essentially completes and crucially contextualizes the work.  It is as if the artist and critic together are creating the thing of aesthetic concern, namely the-work-and-its-interpretation.  The trouble with this, of course, is that the long, long history of art shows that it is—and I mean serious art, not just popular art—not best treated as a mere game for specialists, but instead as a universal human endeavor from which we can all benefit.  The game of criticism is mainly of interest to academics and pretentious intellectuals, and is irrelevant to most of the rest of us.  One has to wonder when things will change.

 

DS: One of the stereotypes of philosophers is that they tend to see things very black and white, while another is the exact opposite; that they get bogged down in filigrees of minutia with no relevance to the real world. Does either stereotype apply to you?

 

LS: I’m afraid I’m not at all familiar with the first stereotype.  Even though I am an avowed “philosopher of common sense,” I’m obviously much closer to the second stereotype.  (You can see it in these answers!)  When doing hard philosophy I tend to be a logic-chopper like most analytically-trained philosophers, and I certainly get into the minutiae of argument.  I believe “the devil is in the details” and that the world is a complex place.  Constructing a good philosophical theory or argument is sometimes surprisingly difficult, and disentangling the precise mistakes that others make can sometimes be a very tricky business.

  If philosophy is full of issues of little relevance to the real world, it is because the issues concern various implausible and abstruse theories that people invent to make careers for themselves, and which people are going to forget in a generation.  There aren’t many general questions in philosophy that I think are unimportant.

 

DS: Occam’s Razor- the idea that the simplest solution that best fits the known facts is usually the correct solution- would seem to be a good weapon to wield in acquiring knowledge, so why do you think such good (I won’t call it ‘common’ since it seems so rare) sense rarely surfaces?

 

LS: I am not sure exactly what you have in mind here, but I’ll take a stab.  If you have philosophy in mind, and you’re asking why Occam’s Razor is so seldom used in philosophy, I would have you list some instances in which you think it wasn’t used.  Philosophers typically go to great lengths to reduce the number of entities in their ontologies.  If complexity and confusion result, it is not necessarily because their theories multiplied entities beyond necessity but because defending any philosophical theory takes hard and complex argument.  But even the most parsimonious philosophers, to preserve our intuitions, often have to produce flimsy substitutes for excised but necessary entities.

  Your parenthetical remark makes it clear that you were thinking of asking: “Why do you think such common sense rarely surfaces?”  Either you were assuming that use of Occam’s Razor ought to be a matter of common sense—which is plausible, perhaps—or you were wondering why philosophers so often defy common sense.  Well, let me tell you, one of the reasons that philosophers reject common sense is precisely that common sense seems (to the thinking of many philosophers not enamored of common sense) to multiply entities beyond necessity.  Common sense might seem to hold that we have a mind with free will, and a body distinct from the mind, and that in the world there are objects, properties, and relations, which exist in a mind-independent Euclidean space and time; many people think the existence of God is a matter of common sense; and that we know certain simple things, and that certain moral principles hold (like the one against killing babies for kicks), seem to be matters of common sense.  Many philosophers have doubted the existence of some or even many of these things, frequently on grounds that their systems are satisfyingly simple and face fewer of the traditional philosophical problems associated with the entities posited.  That may illustrate the use of Occam’s Razor, but is it good sense?

 

DS: Did you have any heroes in philosophy as you grew up? Or were you attracted to the discipline of ideas? As someone who is a writer, first and foremost, I enjoy the interplay of the abstract word and the real world thing it represents. In poetry, as example, a dictum or stricture such as rhyme can force a word choice upon you that may not have seemed natural sans the stricture. Yet, that stricture, due to its sound, can fundamentally affect the nature and idea of not only the immediate image or line, but the whole poem. Analogously, does the provenance of an idea have that sort of catalyzing effect on the essence of the idea?

 

LS: I guess I will come out and admit that I read a lot of Ayn Rand when I was a teenager, though I never bought into what she had to say lock, stock, and barrel.  I was always a critic.  Later I found Thomas Reid and G.E. Moore to be more to my taste.  Of the great philosophers my favorites were always Aristotle and Descartes.  I studied Hume probably more than anyone, though; even though I disagree with his skepticism, one can certainly learn a lot by studying Hume.  I’m afraid there aren’t any contemporary philosophers that I admire as much as these, although I admire a lot of the style and insight of epistemologists Alston, Goldman, and Plantinga.

  You’re absolutely right to say that the provenance of an idea can deeply influence how an idea is received.  For example, Quine has been widely regarded since the beginning of his career as absolutely brilliant, one of the greatest minds of his generation.  But on my humble view, he sure said a lot of bizarre things which probably wouldn’t have been taken as seriously if it weren’t Quine who had said them.

  Another example would be G.E. Moore again—writing something with which I disagree.  Philosophers generally admit that his Open Question Argument is a non sequitur, but go on to say that it really taps into something important about our moral concepts.  I think that under the name of a lesser philosopher, the particular text in question (from Principia Ethica) would long since have been forgotten.

  Sad to say perhaps, but for whether anyone pays attention to an argument, it deeply matters who makes it.  If you want to understand this phenomenon, it helps to bear in mind that philosophers are constantly reinventing the wheel, reselling old goods under new cover.  Often, the only thing that makes the rebranding notable and appealing is the manufacturer’s reputation for quality workmanship.

 

DS: All words simply denote things that other words can or cannot, therefore all definitions are dependent; so language is ultimately a circular exercise. Thus, is the penetration into real meaning something more mystical? Is it irresolvable? Is what you consider the color red really what I am seeing as red, etc.?

 

LS: Every curious schoolchild has gone through the old dictionary exercise of looking up words, then the words in the definitions, until you find that all the words are defined in terms of other words you’ve already looked up.  Similarly, if you construct a system of definitions, you wind up either with undefined terms or terms “defined (circularly) in context.”  This all seems obvious after a little thought.
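  The point is easy to see in miniature.  Below is a toy sketch (in Python) of the schoolchild’s exercise: a four-word “dictionary,” invented purely for illustration, in which following any chain of definitions must either cycle back to a word already visited or bottom out in an undefined, primitive term—exactly the two outcomes just described.

```python
# Toy dictionary, invented for illustration: each word maps to the words
# used in its definition.  In any finite dictionary, a chain of
# definitions must eventually revisit a word or hit an undefined term.
dictionary = {
    "big": ["large"],
    "large": ["great", "size"],
    "great": ["big"],   # ...and we are back where we started
    "size": [],         # left undefined: a primitive term
}

def trace(word, seen=()):
    """Follow the definition chain until it cycles or bottoms out."""
    if word in seen:
        print(f"{word!r}: circular (already visited)")
    elif not dictionary.get(word):
        print(f"{word!r}: undefined primitive term")
    else:
        print(f"{word!r} is defined via {dictionary[word]}")
        for part in dictionary[word]:
            trace(part, seen + (word,))

trace("big")   # 'big' -> 'large' -> 'great' -> 'big': circular
```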

  But from these obvious things it does not follow that the meanings of words are “dependent” (whatever that might ultimately mean) on circular definitions, or that language itself is “ultimately a circular exercise.”  To say that is to assume that words get their meaning from their definitions.  Granted, we can express and learn the meaning of words with definitions (in some cases), but to say so is not to say that we learn all of language, or the fundamentals of language, from definitions.  We learn the fundamentals of language from context and indexically (people speak and point at the same time).  Language does not get its meaning from definitions, because definitions merely express meaning that is already there.  Exactly wherein the meaning of language does reside is a much-debated question among philosophers of language, but nobody takes seriously the suggestion that a word gets its meaning from a definition.  That puts the cart before the horse.  Rather, definitions are made correct or not based on the meaning of the word.

  To come to the point then, if a system of definitions must have a foundational set of indefinable terms, or else have “circular definitions” in context, that does not establish that language itself is in some way “circular.”

  Maybe I am being needlessly pedantic, though.  (Who, me?)  Certainly many thinkers have found something very puzzling, even mystical, about meaning—how language hooks up with the world.  The meaning of meaning is certainly one of the most mysterious of philosophical puzzles, not just because it is abstruse or technical.  I don’t think we are entirely unable to plumb the depths of meaning, though, just because words can refer to things without our being able to represent the meaning in a non-circular way.  I really think it comes down to clarifying which question you want to ask—which, on my view, is how most philosophical puzzles are best solved.  For example, if what you really want to know is how we learn language without being able to depend on a prior understood language, I refer you to developmental psycholinguistics.  Or if you want to know how we can justify an agreed-upon “translation” of inherently ambiguous indexical terms (a puzzle that exercised, e.g., Quine in Word and Object), I am inclined to go with a nativist solution, saying that we have a sort of in-born conceptual apparatus that makes it possible for us to learn language.

 

DS: I mentioned the dependency of words, but are not ideas dependent, based upon causality? I.e.- without a claim for something, a reactionary movement or idea cannot exist. Correct? If so, does knowledge’s evolution resemble the evolutionary tree that most biologists now believe represents the development of life on earth since its start? Or, is knowledge more like the old Classical idea of a ladder, wherein the accumulation of knowledge heads ever higher, without any digressions or regressions?

 

LS: I’m afraid once again I have to say I’m puzzled by the question.  It seems you’re asking me to offer a theory, or metaphor, for the history of ideas, and you’re giving me two theories to choose from, one of them being an “evolutionary tree” and the other “the old Classical idea of a ladder.”  Without knowing more about the theories, or how you understand the theories, I can’t really offer an opinion.

  I love the history of ideas, but I have never had much in the way of an overarching theory about how they develop historically.  I guess I’m pretty skeptical of the whole idea of broad, overarching patterns in history (which the phrase “knowledge’s evolution” seems to imply), which is why I have never really studied the philosophy of history.

 

DS: Philosophy is ideas, but art is ideas in motion, put to some purpose. I posit this makes it a higher and more difficult pursuit. Agree or not?

 

LS: Is the question who works harder and who has higher-minded notions?  That depends of course on the philosopher and the artist.  There are hard-working philosophers and lazy artists.

  But I suppose you mean that when art and philosophy are pursued in some truly excellent or ideal way, then art is “higher and more difficult” than philosophy.  Well, I suppose it depends on what you mean by ‘higher’ and ‘difficult’.  Obviously, you are using these words in some sense so that something is higher (more important?) and more difficult if it is practical.  I am puzzled about what you could mean there.  Since when did practicality make something more important or more difficult?

  Besides, one can easily debate your premise.  You are admittedly right when you say that poetry involves the application, or expression, of ideas, and philosophy per se does not involve any such application.  But philosophy can be almost immediately used or “applied” in ways that poetry—being art, which exists for its own sake—cannot be.  It makes policy recommendations, it informs critical approaches, it shapes how we live our lives.  What could be more practical?  How can poetry match that?

  Of course, if putting things in motion and to some practical purpose is the touchstone of height and difficulty, then engineering easily has philosophy and poetry beat.

 

DS: Let me turn to more biographical queries. But, before I do, let me delve into those areas you are most known for, your founding and work on online encyclopedias: Nupedia, Wikipedia, and Citizendium. You’ve written many articles, online and off, so I will only recap a few. On your Citizendium blog you have written of your ongoing war to not be marginalized in Wikipedia’s founding. Yet, the actual Wikipedia page on you seems to give you your due. Is this a personal war of words between you and the other co-founder, Jimmy Wales? If so, what prompted this?

 

LS: I might sum the situation up by saying that while, in the first few years after I left Wikipedia, I had been given proper credit by Jimmy Wales for my role in getting the project started, since 2004 he has minimized and downplayed my role.  It was not until relatively recently that I began “fighting back,” so to speak.  At first, I thought Wales was just being forgetful.  Then I thought he was objectionably selfish and selectively remembering, but not telling out-and-out self-serving lies.  I’ve finally and reluctantly, in the last couple years, come to the view that that’s exactly what he’s been doing.  Well, I refuse to facilitate and embolden such vicious behavior by my silence; not only does it do harm to me, it sets a bad example.  I know that in the absence of full context, it looks immodest and sometimes petty, but in context I hope it won’t be so regarded.

  So, yes, I’m afraid I have had a verbal dispute with Jimmy Wales.

  If the Wikipedia page about me now gives me my due, that is partly because I took the time to put together a set of links that made the situation abundantly clear.  Between 2004 and 2006 or so, the articles about me, Wikipedia, Jimmy Wales, etc., downplayed or omitted mention of my role, largely because Wales was the only person speaking on behalf of the project—I was not speaking up so much.

  It’s interesting that you seem to imply that, since the Wikipedia articles now have it more or less right, it is not necessary to have any negative reaction to Wales when, for example, he lies repeatedly about me to Hot Press as he did recently.  If it were general knowledge that Jimmy Wales is a liar, and that he cannot be trusted when speaking about the origins of Wikipedia, then I would agree.  But Wikipedia is not the world’s sole source of information, of course, and many journalists still give Wales a platform allowing him to go to a broader (or at least another!) audience and, among other things, undermine me.

 

DS: With such obvious and voluminous proof, it seems silly for Wales to try to discredit you. Is this more a personal war than a historical one? And how did you meet Wales, decide on Wikipedia, and what was your relationship? And how is your relationship today? Will this feud eventually end up in court?

 

LS: It does indeed seem silly for Wales to try to discredit me.  This is one of the things that distressed me so much—if it was deliberate, it would be in such bad taste, so just plain stupid, and so ultimately self-destructive, that I couldn’t believe it was deliberate.  How it is possible for an otherwise intelligent person to do this I leave up to the psychologists and to people who know Wales better than I do.  I never use the word “war” or “feud.”  It doesn’t feel that way at all to me.  To me, it’s just defending my long-standing reputation against dumbfounding attacks by Wikipedia’s other co-founder, that’s all.

  I met Jimmy Wales online through some online philosophy discussion groups—first, his group about Ayn Rand, then my group about philosophy in general (the “Association for Systematic Philosophy”).  Our first encounters were sometimes friendly, sometimes hostile.  But after a while we got to know each other as acquaintances, and I even met him and another conversationalist in person in 1995.  So we knew each other somewhat, as acquaintances, when he hired me in 2000 to start Nupedia.

  The story of the origin of Wikipedia has been told in so many places that I’ll give the brief canned answer: while working on Nupedia, it became clear that we needed another way to let the general public contribute to the overall encyclopedia-writing project.  On the evening of January 2, 2001, over dinner, a friend told me about wikis.  Without even having seen a wiki, I immediately began thinking of what an encyclopedia project using a wiki would look like.  I made the proposal that night to Jimmy Wales and in a day or two some wiki software was installed for me to begin filling up.  I wrote the original guidelines, made the original announcements, and generally guided everything during the seminal first year of the project.  During this time, Wales was very much a hands-off boss and he did relatively little work on Wikipedia.  Almost all the important decisions were left to me; Wales was busy with bomis.com, I guess.  Then the Internet advertising market collapsed, and Bomis (which funded Wikipedia) lost the ability to pay me.  I was the last of a half-dozen new hires at Bomis to be laid off in late 2001-early 2002.

  Our relationship was certainly cordial and friendly enough for a long time—I wouldn’t say we were ever really friends, because we never socialized significantly.  Today, purely on account of his shabby treatment of me, we are on very poor terms, and I doubt we will ever have a relationship in the future.  He might try to make things seem to be “one big misunderstanding”—that’s his modus operandi, as saying so makes him look like a cool-headed, compassionate mensch—but I am far beyond taking such overtures seriously.  But I very much doubt there will be any reason for this dispute to go to court.

 

DS: If Jimmy Wales approached you and said he had to have you back on the Wikipedia team, what would it take for you to return? What methodological changes would Wikipedia have to make for you to even consider such an unlikely offer?

 

LS: In fact, this happened, or nearly happened, in 2004 or 2005.  If I recall correctly, in response to my essay, “Why Wikipedia must jettison its anti-elitism,” Wales said that he had been considering rehiring me, but that the essay would make it very difficult.  I told him at the time that he could not offer me enough money to come back.

  There is virtually nothing that the Wikimedia Foundation or Jimmy Wales could conceivably do to get me back on board, and I doubt many people there would want me back on board anyway.  I have moral objections to how Wikipedia is constituted as a community—I think it is fundamentally unjust in how it is set up.  Among the problems to correct is that the Foundation would have to forcefully remove all official and unofficial authority from Wales.  But much more importantly, they would have to set up a system of governance at least somewhat similar to what I have set up for the Citizendium, meaning there would have to be a low-key guiding role for subject matter experts, and real names would have to be required for administrators at the very least.  The community would also have to adopt a charter that would be binding on participants and that would create a division of powers.  All of this is very unlikely to happen.

 

DS: This article details the early history of Nupedia and Wikipedia. Could you briefly recap it for those unfamiliar with it? Also, what do you see as the relationship between the three encyclopediae? Are they really Nupedia 1.0, 2.0, and 3.0? If so, what do you envision as online Encyclopedia 4.0 or more?

 

LS: The article is a memoir of my time with Nupedia and Wikipedia.  It starts when I was hired and covers the development of Nupedia, the origin of the idea for Wikipedia, its early months, how it changed after mid-summer 2001, the reasons for the productivity of the project, problems that arose, and what happened to Nupedia in the end; and it has comments at the end (and throughout) about project governance.

  Originally, Wikipedia was to be a “content feeder” project, a side-project, for Nupedia.  When it became obvious that Wikipedia was growing like gangbusters, I spent most of my limited time on it, leaving Nupedia, unfortunately, to wither.  The Citizendium (CZ) is not a combination of the two in any important sense, but instead a development of the general idea of a wiki encyclopedia.  My hope with CZ is that it will, in time, develop into a viable and attractive alternative to Wikipedia.  I’m afraid I don’t have any bright ideas of an encyclopedia beyond CZ—if I did, I would have started that instead.

 

DS: In this article you mention cruft. What exactly is cruft and why is it bad? And does this tie back to Proctor’s term, agnotology, and my term, deliteracy?

 

LS: “Cruft” is a programmer’s term for data or code that serves no useful purpose—garbage.  In programming projects, it’s important to clean up data and code and remove cruft, at least occasionally.  “Cruft” has come to mean anything undesirable online that appears when you’re looking for something else.  It differs from spam in that spam is sent deliberately, while cruft merely accumulates as a by-product of other processes.  Cruft is bad by definition.  It’s a fairly generic term.
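
  (To make the programmer’s sense of the term concrete, here is a minimal, contrived sketch in Python—the names are invented purely for illustration.  The module still runs, but the pieces nothing uses anymore are cruft, exactly the sort of thing a periodic cleanup would sweep out.)

    # A contrived module illustrating "cruft": everything here still runs,
    # but only greet() is actually used anymore.

    LEGACY_GREETING = "Good day"   # cruft: no remaining code reads this constant

    def greet(name):
        """The one function callers still use."""
        return "Hello, " + name + "!"

    def greet_v1(name):            # cruft: superseded by greet(), never called
        return "Hello " + name

    if __name__ == "__main__":
        print(greet("world"))      # a cleanup pass would delete the unused leftovers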

  First, let me say that unnecessary neologisms constitute linguistic cruft, so by commenting on these, I don’t actually endorse them.  But yes—our general tendencies to avoid knowledge that would cause cognitive dissonance and to engage in a lot of meaningless blather rather than serious discourse online are both important sources of cruft.  Those of us who don’t want to be taken in by “agnotology” and “deliteracy” online would do well, as needed, to evaluate critically how we spend our time online.  But I guess people are realizing that more and more these days.

 

DS: You've written against Wikipedia’s anti-elitism (and I’ve mentioned how it’s just a condensed version of the Internet), which is a good thing, but I know people who have worked on both Wikipedia and Citizendium, and they tell me that Citizendium has gone too far in the other direction, holding up the cult of expertise to a point of celebritization, often making the writing of articles paralyzingly slow. Why has Citizendium swung to the other end of the spectrum? Is not a middle ground needed? One where people would need to use real names, but also one where the actual quality of the edits and person’s knowledge are the prime valued thing. I.e.- I have known people in Academia who are total idiots- robots who regurge what they’ve learnt. They have no ability to process and deploy new information without a syllabus. On the other hand, there are auto mechanics and blue collar folk who will know far more than so-called experts. A good example would be my own vast knowledge on poetry, writing, and criticism. Yet, because I am not degreed, nor do I have a syndicated newspaper column, I am not considered to be an ‘expert’ the way Donald Hall, a former Poet Laureate (yet bad poet), and Pauline Kael, a famed film reviewer (and a bad one), are and were. There’s something simply wrong about that, because there is an immanence to greatness, be it in the arts or sciences, that no taking of a course can match. As example, no art school can create a Picasso, and no MFA writing program can create a Whitman. They were so far above their contemporaries because they simply lucked out and the genetic and creative dice favored them. Comments?

 

LS: I happen to have read a comment from you, Dan, in which you say that elitism isn’t really such a bad thing.  But I will not accuse you of contradicting yourself, because one can be an elitist in one respect and an anti-elitist in another respect.  Anti-elitism could be a good thing, but it depends on the context, doesn’t it?  I was referring to Wikipedia’s anti-elitism, which is decidedly bad.

  I am not sure who you have spoken with and why that person thinks there is such a thing as a “cult of expertise” on CZ or that it is particularly “celebritized” there.  This is a sad and, I suspect, frequently deliberate misrepresentation of the work of a lot of good people.  This has necessitated that I write a CZ page, Myths and Facts.  I think the mere involvement of expert editors in the project does not slow down the writing of articles in the slightest way; it is, after all, a wiki, which means you can write as much as you want, on whatever you want, whenever you want.  I can’t comment on your correspondent’s experience because I don’t know the details.

  If you had researched CZ rather better, you might take more seriously the possibility that CZ is the middle ground.  On the one side is Wikipedia, where all too many participants regard expertise as a conflict of interest—I am not making this up—because experts have articles, publicly-defended positions, and employment in the area.  On the other side are traditional edited encyclopedias, in which only experts are assigned to write articles, which are carefully edited by a professional staff and then published and sold.  CZ is far closer to the Wikipedia model than the traditional model.  If you are 13 years old and can write good English, and you agree with our fundamental policies, then you are welcome to write on any articles on CZ on which you can make a positive contribution.  We do not assign articles.  Indeed, experts may become editors, who do not approve edits or articles before they are posted; they work shoulder-to-shoulder with everyone, and may, when absolutely necessary, provide guidance.  Generally we have a different culture, in which people who have made it their life’s work to know about something are generally—not always, not in a knee-jerk fashion—deferred to.

  The “middle ground” as you describe it, “one where the actual quality of the edits and person’s knowledge are the prime valued thing,” is, as far as I know, just the ground we occupy.  Some of our most valued, appreciated contributors are not editors but authors, which is just as one should expect.  Perhaps you are unnecessarily trading on too strong of a notion of expertise yourself, if you think that a role in the system equates to a degree of necessary respect accorded to a person.

  I do agree with you that undegreed, uncredentialed people can be smarter and more knowledgeable than highly degreed and credentialed people.  I also agree that that fact is important for project designers to recognize.  But it looks like we’ll be discussing that further down.

  As to the latter part of your comments, it seems to me that you are conflating two very different notions: on the one hand, there is expertise, which nearly anyone can achieve by dint of hard study or practice, and on the other hand there is greatness, which is something far more mysterious.  There are only rarely roles made in a publishing system for great thinkers, artists, etc.

 

DS: From your Wikipedia page:

  While Wikipedia is perceived to promote consensus and not truth,[103] and verifiability is the inclusion criteria - reporting on what other sources have to say,[104] Citizendium experts have the final say for article content[78] and it is not necessary to cite a source for a content decision on Citizendium.[105] Finally, while vandalization of articles takes up time and effort on the part of Wikipedia's editors to uncover and revert,[106][107] Citizendium presumably aims to prevent vandalism.[108]

 

LS: First let me comment on this quotation.  I’m sure you won’t be surprised to learn that I think my Wikipedia page is frequently wrong or confused.  It is frankly bizarre to me that Wikipedians can blithely offer up such statements in the texts of articles.

  The only bit I want to discuss in detail is this: “it is not necessary to cite a source for a content decision on Citizendium.”  This not only gets our policy badly wrong, it is absurd on its face.  Unlike Wikipedia, we assume that there are good previously existing practices of citation.  We say that where experts in a field would generally require a source in a (specialist) encyclopedia, we also require a source.  We reject the notion that citations ought to be used as bludgeons to shut up the other side in a silly Wikipedia edit war.  We say that if something is so well known in a field that it needs no citation, we don’t need a citation either, and that adding one will look silly (and sometimes does on Wikipedia).  In such a case, it is handy that we in fact sometimes do have a relevant expert on hand, who can be deferred to on the question whether a citation is actually needed.

  If there is a protracted content dispute, even if it is between an author and an editor, then usually our editors act maturely and reasonably.  If an author asks for a source on some point, an editor will typically give one.  Editors who demand anything like unquestioning obedience on CZ rarely last long.

 

DS: This is the thing that Wikipedia has as its biggest problem- it cares more for verifiability than reliability or accuracy. Thus, if an opinion is wrong, but comes from someone ‘credentialed,’ it is considered more valuable than a correct opinion from someone out of the loop. How did that nasty paradigm get started? And, isn’t the whole philosophy of multiple eyes producing quality 100% antithetical to the known lowest common denominator dictum that the more people who assemble to do a task the worse it will be, since the task will only proceed at the level of the worst individual(s) in the group? Since this is manifestly true at Wikipedia, how did that get started and why is it so slow to change?

 

LS: Turning now to your comments, it may or may not be interesting to note that, of Wikipedia’s various fundamental editorial policies, “verifiability” is not one that I do, or would want to, take responsibility for.  This isn’t because I think verifiability is a bad thing—of course it is a good thing—but precisely because it seems to be used as the basis for a bad policy about footnotes or citations.  In short, if somebody can manage to find a source—any source, as long as it is “edited”—then it seems he can make Wikipedia say whatever the source says.  This is absurd, not necessarily because of any abstract dispute over “verifiability” versus “reliability,” but just because all sorts of edited sources are wrong and, on the view of nearly any informed person, some of their particular claims can’t be trusted.  But if you take the view that, functionally speaking, there are or should be no specially privileged sources, then all sources are equal.  So if you can find any source saying something you want to say—and if you look hard enough, you will—then you’ll be able to say that on Wikipedia.  I know, of course, that this is a bit of an oversimplification, but in the broad outlines it’s perfectly true.  On all this, I’m not sure whether I’m agreeing with you or not.  I doubt that Wikipedia’s verifiability policy particularly militates in favor of “credentialed” sources.  The point is that they’ll take nearly any source that isn’t a blog or hobbyist web page—unless some influential admin happens to have an animus against it, I suppose.

  I think what you’re reacting to is the fact that Wikipedia does have a culture of “consensus,” which means that they say they are working toward text that everybody can agree to, at least in a system in which they know that some consensus has to be sought.  Frequently it does work that way, but on issues that matter, it’s often not consensus that prevails but the views of the most influential admins who get involved.  In any case, as I’ve said many times, the policy just doesn’t scale.  Get enough people working together on something, and there will never be anything like a real consensus, particularly when dissenters can’t easily be shut up.  Hence, Wikipedia aims at consensus, but what it gets is endless edit wars with no rational way to resolve them.

  Basically, Wikipedia is conducive to participation by people who love playing the Wikipedia game, which frequently involves talking as if you’re aiming at consensus, while playing all sorts of disingenuous games to get your position accepted as “the consensus” position.  It takes a special sort of person to love that.  I’m not one of them.

 

DS: Does Wikipedia’s logy in changing its failed methodology augur its ultimate failure once the next wave of online technology comes along to provide new informational avenues?

 

LS: It does seem that Wikipedia is ripe for replacement.  But what will replace it?  Maybe Citizendium or something else—that remains to be seen.  And maybe it will enjoy its now-dominant position for a long time.  Many deeply flawed institutions live on for centuries, as you know.  Like it or not, Wikipedia has a chance for long-term hegemony because it’s just so huge.

  On the other hand, if somehow you could organize all the scholars, students, and other intellectuals together in the right way, motivating them to contribute to a new work in the right way, you could replicate Wikipedia in just a few years, I’m sure.  The trick is the organizational part.  Wikipedia has been a popular success precisely because it is, essentially, self-organizing.  And that is, by the way, something I strongly support and tried to design and foster in the early days.  Bottom-up self-organization is a wondrous thing.

 

DS: Other than the problems of credentialism, I’ve been told that Citizendium has similar problems as Wikipedia, except from a different source. Just as Wikipedia experiences ‘edit wars,’ Citizendium is often too fascistic in its approach to articles, wherein one so-called ‘expert’ shuts down any real chance of a demotic approach, or differing viewpoints. Comments?

 

LS: “Often too fascistic”?  That’s just silly.  Who is telling you all these things, and what reason do you and your readers have to believe him or her?

  The Citizendium is not “credentialist.”  If you want to maintain that, don’t just assume it; define “credentialist,” and then find evidence for your view.  Among other things to point out in response, most of our active people are authors who need no credentials, and many of our editors work in areas that make no use of their credentials; and we accept as editors people who lack traditional credentials.  What they must demonstrate, however, is expertise.  Do you have a problem with that?  If you, Dan, are an expert about literature as you say, then surely you ought to be able to prove it one way or another.  If you can’t prove it, then pray tell why should we believe you?  Wouldn’t we be irresponsible to do so?  Anyway, we don’t absolutely require that Literature Editors have a Ph.D. in literature.  In other words, we are reasonable.  But if you just throw around the “credentialist” label, then you are basically throwing in with those confused souls who apparently think that institutions should never recognize expertise officially.

  Anyway, Dan, since you haven’t been in the system, I’d like to know on what grounds you believe CZ is “fascistic.”  I started the system, and I think it is actually the opposite of “fascistic.”  The label is unfair, hyperbolic, stupid, and ridiculous.  Rarely does an editor simply make a decision and demand absolute obedience, cutting off debate and resting upon his or her status as an editor in the system.  That can happen, and has happened, but it is rare, and it rarely has no consequences.  If an author complains to me, I will step in and evaluate the situation.  Sometimes I will overrule the editor—that has happened rarely—but more often, increased public attention to the situation leads to a more “demotic” (to use your word) result.  And suffice it to say that those editors who tend to demand unquestioning obedience, who are not able to work shoulder-to-shoulder with authors as most do, also tend to leave the project after a while.

 

DS: The person who told me of CZ’s ‘fascism,’ and other problems, is a contributor to CZ, and did so only on my guarantee of anonymity. Since this is not a court of law, and I am not being forced to reveal a name under penalty of imprisonment, I’ll honor my promise of secrecy. That said, there are other problems that Citizendium has, that are unrelated to Wikipedia. A source within Citizendium has told me that in two and a half years (as of this writing in spring of 2009), Citizendium has generated only 102 approved articles (with that number remaining static for the last few months). That’s an even slower article creation process than with traditional encyclopediae. Is this a flaw in the Citizendium process?

 

LS: Admittedly, it is certainly disappointing that we don’t approve articles more often.  But it’s not easy.  We recently got a volunteer to help move the process along, so the number of approvals has begun increasing again.

  But don’t confuse article generation with article approval.  CZ generated over 11,000 articles just over two years after becoming public (the first six months of the project were in private).  Most of these were far better in quality, per edit, than Wikipedia articles.  We have nearly 1,000 articles that, in somebody’s opinion, are “developed,” meaning they are ready to begin the approval process.  At this point, the number of approved articles doesn’t reflect much more than that our editors don’t want to spend the time to approve articles, and find it more interesting and rewarding to write articles.  The fact of the matter is that getting three experts who are working together on an article to approve the article, or one expert who is not working on the article to approve it, is not easy.  Experts tend to have high standards, and if they put their name on something generated in an open, public context as CZ is, they want to make sure that it really is up to their standards.

  There’s no “flaw” in the CZ process that leads to this, but there is something lacking: adequate technical tools to automate the approval process.  Frankly, though, I think this will speed things up only a little, because experts will always take their time in putting their public stamp of approval on a piece of text.  In the meantime, we’ll continue to create thousands of new articles and do a service to the general public in that way.

 

DS: Why has Citizendium experienced such turnover among its participating editors? Is it the aforementioned fascistic approach and over-reliance on the cult of expertise?

 

LS: Well, I’m not aware that we have had any more turnover in editors or authors than any ordinary online community.  Every online community I’ve been in has had considerable attrition; perhaps this is something you, or your anonymous inside source, weren’t aware of.  Occasionally people leave loudly, but in my judgment the loudest leavers are not the victims of “fascism” but more often the would-be “fascists” themselves, i.e., the editors who feel that they have not been kow-towed to quite enough.  Anyway, after much consideration, this is not something I worry about very much.

  Let me just underscore that, far from agreeing with the “fascist” label, I would say that we are far less fascist than most online communities, which are set up to let the brownshirts run rampant.  So I find your merely assuming the label to be offensive.  On my view, we are actually set up in a way that can arrive at very thoughtful results.  It’s disappointing to me, Dan, that you didn’t take the time to find other views about CZ.  We’ve got lots of supporters, you know.  I don’t appreciate being asked to give consequence to groundless epithets, without substantive argument, thrown around by anonymous persons who apparently have fundamental disagreements with our project.

 

DS: I would cite this celebritization of expertise as evidence of pop culture’s Lowest Common Denominator seeping into epistemology. I would also see that this leads to the formation of cliques, rather than a meritocracy. This is intimately tied to the failure of print newspapers and major publishing houses, in that their groups of columnists, critics, and authors are people in a ‘circle’ rather than the best available. This in turn leads to folk going to other sources for quality writing and reviews. Do you see links between the celebritization of expertise (ala cable tv news talking heads) as tied to the failure of the modern media?

 

LS: Living in New York City as you do, I’m sure you are closer to this phenomenon than I am.  I don’t really rub elbows with intellectual celebrities much even virtually.  So I can’t claim to know what you’re talking about.  I don’t especially envy what intellectual celebrities there are.  I’d be more inclined to take heart in the fact that any intellectuals have, in our coarse culture, achieved any measure of public attention.

  Now, if what you’re saying is that the same talking heads appear on the talk shows, signing book deals, publishing in famous magazines, etc., and that this is a problem because a lot of those talking heads are mediocre—well, I would agree with you.  I dislike it whenever mediocrity is raised up.  But for what it’s worth, most of the real heavyweights don’t generally feel they have time for popular writing.  To complain that the talking heads form “cliques” (do they?) seems about as justified as the high school complaints that the cheerleaders and the jocks get all the attention.  Sure, but who cares?  I never cared when I was in high school.

  You seem to assume that the talking heads (I won’t call them experts) have become more “celebritized” as the old media have declined.  I guess I don’t know what you mean.  Were they more of a meritocracy in, say, the 1970s?  I don’t know.  It’s common to bemoan an alleged decline of the “public intellectual.”

  Y’know Dan, I have to make an observation.  These questions are not really about what I believe, asking me to reveal what I’m thinking about, regarding topics that are of concern to me; many of them are more about what you believe, and getting my reaction to what you believe.  So if my answers are made dull by my saying, “I don’t really know about that,” that would be why: because you’re getting my feedback on your own thoughts, and it seems you frequently think about stuff I don’t know anything about, or haven’t thought much about.  No offense to you; that’s how everyone is.  I am sure you would be completely speechless if I were to ask you about some of the things I think about—not because I’m brilliant or anything, mind you, but because, like you, I make all sorts of assumptions that would make you go, “Huh?” if I were to ask bare questions that just took those assumptions for granted.

 

DS: First, I last lived in New York in 1991, and second, of course the questions are going to be premised on what I know or believe to be so. Getting your reaction to them, especially if I have no clue as to what that reaction would be, is the point of an interview. Imagine Ted Koppel or Phil Donahue, back in the day, merely asking the obvious, what is known, to a subject. That would be an advertisement, not an interview, and I don’t do ads. I feel this lack of introspection goes beyond the masses, and, indeed, into those celebritized experts. When I interviewed philosopher Daniel Dennett, as example, he seemed almost a blank slate himself, unwilling to take on philosophic subjects beyond that he’s written of. As example, he had appeared on a tv talk show at the end of the century, as a panelist regarding the most influential folk of the last millennium. You recall how many lists were made, no doubt, and this was a classic example of the celebritization of expertise. Anyway, I thought it a great way to dovetail with my interest in mass murderers and despots, since I believe Genghis Khan was overlooked on most lists, with the issue of causality and determinism. Thus, I asked this query:

  That puts me in mind of another Charlie Rose show you did, with Steven Pinker and others, at the turn of the century, on the most influential people of last century. What I found a bit galling was some of the sheer stupidity on that panel- most notably by the President of the Carnegie Institute, Maxine Singer. She equated influence with good morality- an asinine position, yet one which no one, not even you, challenged. I similarly recalled Time magazine having a most important people of the last millennium issue, and leaving off, to my mind, easily the most influential person of the last thousand years, Genghis Khan. My reasoning is that influence comes with time, so the most influential person simply could not be in the last couple of hundred years. Then, there would have to be reach over several spheres. Then, there would be the mind experiment of removing that person and seeing if he or she was merely a part of historic forces, or one of the Great Men of History. Khan fits all of these- even if he was the worst mass killer in human history, up until the 20th Century. He was born early on- the 12th Century, and he took a nomadic Gobi people, with a six thousand year history of no territorial expansion, united the Mongol tribes with the Turkic tribes, and built a nation larger in area than the old Soviet Union- all within two decades- and sans guns or any advanced war materiel. His effect on politics, the arts, religion (his was a secular state), and life was profound. Remove him and the Mongols likely go on as nomads. Then there is no check on Chinese expansionism. Khan forced the Chinese to abandon their junk explorations across the Pacific and likely to the Americas. They hibernated xenophobically as a world power for centuries. The Khanates carved out of his empire, by his descendants, helped establish the Ottoman Empire, which acted as a bulwark against Muslim expansionism into Europe. Without the Ottomans, Islam may have displaced the Papacy, forcing its withdrawal to Scandinavia and a reduced status as a regional Arctic cult. China may have expanded across the Subcontinent, Oceania, and into the Andes and the western half of the Americas, while Europe was Islamized. Moorish Spain and Imam Britain may have then settled the Americas from the east. The Cold War of the last century may not have been between Communism and Capitalism, but between Islam and Sino aggression. Yet, none of that happened because one Mongol named Temujin preferred horseback riding and conquest to life as a scavenger. To me, this omission shows the profound lack of vision many so-called leaders and experts have in their respective fields.

  First, would you agree with my ranking of Genghis Khan as numero uno in influence last eon, for despite his genocidal ruthlessness, he was an organizational genius with a mind that wanted to know seemingly everything? He was arguably also the most amazing figure in human history. If you disagree, why? And why do you think he was so ignored on such lists? Was it simple Eurocentrism? Or something more confounding?

Dennett flippantly replied: ‘I guess I just don’t know enough about Genghis Khan to judge,’ which implied that he a) had no clue that his humor was lacking, b) missed that the question was essentially not about the Mongol warlord, c) did not care about giving a good interview nor about digging a bit deeper into his mind, or d) all of the above. So, let me first ask you if such lack of intellectual engagement is a problem unique to ‘celebrity experts’ like Dennett, systemic in philosophy- which explains why even fewer people are interested in it than poetry, or simply evidence of the greater intellectual apathy of the times? After all, Dennett is a philosopher, not a creative thinker, nor a historian, so why was he even on such a panel in the first place? Secondly, given the points I laid out in the above question, whom would you place in the top spot on such a list, and what are your views of causality and determinism?

 

LS: About this interesting question, let me say that I mean no disrespect to Daniel Dennett in approaching it.  Out of plain old politeness, I would never pretend to speculate why he answered the way he did.  In answering it, therefore, I am not going to use Dennett’s name but instead speak in generalities which may or may not apply to the man himself.  I will say that I knew of Dennett when I was an undergrad, when he was famous only among philosophers.  He’s well respected as a philosopher by philosophers; it is wrong to dismiss him as a “celebrity expert.”

  Your question invites me to analyze or evaluate what is going on in this sort of interview.  So—you asked for it!  I have given many dozens of interviews about my various projects, and about my own life, as uninteresting as that has been.  So let me say that this sort of interview is completely unlike anything your interviewees may have encountered before.  Though your reputation and web traffic benefit, these distinguished and busy people are not paid by you and, if they are like me, they are motivated by a combination of factors: the fact that you have landed interviews with all sorts of famous people, the interesting notion of idly revealing oneself and getting some ideas out there—at great length—and of course probably the most important motivations, those of vanity, self-promotion, and the probably silly conceit that we (most of your interview subjects) actually deserve to be interviewed at such length.  In short, it’s just intriguing to be approached by someone like you with something like this.  For me, frankly, it’s kind of a guilty pleasure.  I should probably be spending my time more gainfully.

  But having to answer such a lot of deep questions is not easy.  I can see that a very busy, famous philosopher would simply answer in any brief way that sounded plausible because he feels that’s all the time he can afford.  Given the circumstances, I think that’s perfectly reasonable, and you have absolutely no grounds for complaint.  Does that mean that a person who initially agrees to the interview and then turns around and gives a bad faith, short answer is not intellectually engaged in general?  Of course not—and it’d obviously be vain of you to assume that.  Instead, it means he has decided his energy for substantial intellectual engagement is better spent elsewhere.

  In short, it is implausible that, when someone does not give you a deep, lengthy answer, it reveals an intellectual apathy.  If you assume that your questions are so fascinating to anyone who does have some “intellectual engagement” that he will be compelled to answer, then you might reasonably conclude that these laconic interviewees are intellectually apathetic.  But—why assume that?  Surely you know that the intellectual table for most of your interviewees is absolutely groaning with tempting tidbits, with which your questions are competing.  (What does it say about me that I’m answering at such great length by comparison?)

  By the way—“a philosopher, not a creative thinker”?  Please.  Philosophy is above all a deeply creative endeavor.

  Who was the most influential person in the second millennium AD?  I think that would have to be someone like Aquinas, Petrarch, Gutenberg, Francis Bacon, or Descartes.  The most influential events of the second millennium were the invention of the printing press and the invention of the scientific method.  Both were, arguably, a result of the rediscovery of Aristotle (so, Aquinas) and the classics generally (so, somebody like Petrarch), which led to the Renaissance.  It was the spirit of the Renaissance that led to both inventions, which in turn led to other fantastic developments such as democracy and the Industrial Revolution.

  This answer is off the cuff, a first stab.  But I also imagine that these figures beat out Genghis Khan (about whom I know even less than you) if the only reasons for including him are the ones you give.  The reasons you give all depend on various questionable assumptions about alternative history, so that Genghis Khan’s place is owing to the fact that he prevented the world from not being the way it is.  I think pride of place ought to go to those who actively made the world the way it actually is.

  As to my views on free will and determinism, I am a compatibilist.  I think most people who think about the whole free will/determinism problem do not give enough attention to the issue of what it means to say that someone acts, or wills, freely.  Basically, when we say we are free, we mean that we are free from some sort of constraint.  When we act freely, from what sort of constraint is our will free?  Basically, any physical compulsion, on the one hand, or anything that significantly impairs our ability to deliberate (such as drink, drugs, and insanity).  Well, it is possible for a free act of ours to be completely determined, i.e., for the act to be subject to an exhaustive explanation in terms of factors outside our control, because merely having causes does not constitute a constraint in the requisite sense.  That’s my take on that.

 

DS: The point is that, once one agrees to do something, remunerated or not, there is an ethical obligation to do it well. I know that may sound revolutionary in this slacker-worthy era, but, as you are a generation or two younger than Dennett, and implicitly recognize this, due to the depth of your replies, it’s a shame he did not. And, I would think anyone of intelligence, be they near or above your level, would be engaged by these queries vis-à-vis the usual 10 Q&A format of most interviews. How do you feel about this celebritization of expertise? By that, I mean the talking head Sunday morning cable news type format that has been co-opted by almost every field. Pro or con, why does Citizendium seem to worship at the altar of ‘expertise,’ defined as a set of credentials, rather than excellence, as I above described, which might come from an auto mechanic over a tenured professor? Is it just a sign of intellectual laziness to assume that a degree in a field equals expertise? And, aside from credentials, what other things are sought to qualify one as an expert or not? I think it’s ridiculous because, in fifty years (and it’s even started now), if someone looks up reviews of, say, some classic old film, they are going to read them all with no bias pro-Roger Ebert vs. me, simply because he wrote for the Chicago Sun-Times and had a tv show, while I had a website. Yet, a Roger Ebert is regularly quoted about films, in Wikipedia, even if demonstrably wrong. Similarly, many well funded websites, like IMDB or Rotten Tomatoes, get automatic linkage from Wikipedia even if they provide nothing of substance to the article. This then just directs more traffic their way (primarily due to advertising), thus reinforcing the fallacy that they are somehow more ‘important,’ which is equated with more ‘correct’ factually. Thoughts?

 

LS: First, a rhetorical question: considering that you interview a lot of “celebrity intellectuals” yourself, don’t you think you are contributing to this alleged celebritization of expertise yourself?  Oh, the irony!

  And this is not actually irrelevant and flip, either.  Why did you choose just these people to interview?  Why didn’t you interview someone without credentials or accomplishments or any easy way to establish that they are worthwhile to interview?  The answer you would give is, probably, just the sort of answer that most TV producers, publishers, and other celebrity-makers would give.

  I’ll give you an example that has annoyed me a bit.  For some time at least, whenever some issue in medical ethics has come up in the news, it has seemed that the expert who was invariably trotted out is Arthur Caplan, as if he were the only medical ethicist out there.  Now, what exactly might you and I find wrong with this?  Of course, I have no beef whatsoever with Caplan.  It just struck me as unfair to the many other fine medical ethicists out there that one guy is made the face of the field.  As far as I’m concerned, it isn’t that an expert is interviewed, but that the media draw upon a very small set of experts—what you call the celebrities.

  You seem to think that the problem is that the media, the book-buying public, and the conference organizers who create the expert celebrities assume that any old person with a Ph.D. will do.  If that is what you think, it’s wrong.  Most Ph.D.s labor in obscurity just like everybody else, and they frequently complain that the popularizers in their field are not, in fact, top-ranked researchers but people who would be unknown but for their pandering to the public.  And if you’re envious of the celebrity experts on TV, I submit that it’s not the fact they need credentials even to get in the door that bothers you; it’s the fact that there are just a few of them.  From my point of view, the laziness of the celebrity-makers comes from their failure to consult their own local experts for the names of other distinguished researchers whom they might interview.

  As for links to IMDB and Rotten Tomatoes, the problem there is not so much the prizing of expertise—after all, both sites feature reviews from the unwashed masses, prominently placed—as the fact that IMDB is a huge and useful database, and Rotten Tomatoes is an extremely useful collection of reviews, from reviewers distinguished, undistinguished, and completely unknown (indeed, anonymous).

  The Internet in general is undermining the “cult of expertise,” by making it so easy to access non-expert opinion.  In general, I don’t think this is a bad thing.  Are you surprised that I say this?  You shouldn’t be.  I’m actually very glad that everyone has a voice online now.  I just think that we haven’t found the best way to organize and access this wealth of information, opinion, and communication.

  But I know I haven’t answered your question entirely yet—I see I’ll be given another chance with your next question.

 

DS: Re: the cult of expertise, why do you think people refuse to evaluate skills and talents objectively and instead seek approbation from institutions; ala the 19th Century French Salonistas or modern Academia, especially when the product such institutions produce(d) is demonstrably bad?

 

LS: I too have my complaints about academia, although my exact criticism might be different from yours.  I think most academics are mediocre at best.  Most of them are terrible writers and couldn’t argue their way out of a paper bag (not that I claim to be any better in my own field of philosophy).  Worse, everywhere I look, I see intellectual dishonesty.  For example, their interpretations of texts require deliberately ignoring the texts themselves, and imposing their own out-of-context notions on cherry-picked bits.  The papers they write are frequently derivative, and their whole outlook is deeply conformist and conventional, ironically so, considering how cutting-edge so many of them fancy themselves as being.  Meaningful, deep criticisms of widely accepted doctrines and practices, so far from being encouraged, are grounds for denial of tenure.  Instead, what is prized is the ability to write articles that extend earlier research, no matter how ridiculous the original research is.  My disgust with this situation explains why I have repeatedly found myself alienated from academia.  I find most articles, books, journals, presentations, and conferences intellectually stifling and boring.  I also find most academics insufferably self-important and vain.

  Partly for this reason, I also have no small amount of resentment for the role of experts in contemporary society.  Again, you might be surprised that I say this—it’s just that I don’t have much opportunity to express this resentment.  But you’ve given me the opportunity, so here goes.  It seems very unfortunate to me that writers and presenters in the media so frequently quote leading experts, professional societies, and committee reports as if they were stating facts when, even within a field, there is usually a variety of opinion, and a variety that changes deeply from generation to generation.  In order to be fully rational, intellectual endeavor must be deeply critical, and therefore deeply suspicious of “leading experts,” alleged consensus, party lines, and the declarations of committees.  Of course, I think we should be critical of ourselves first and foremost, if we want to be fully rational; but when thinking together with others, it becomes almost as important for there to be an atmosphere of intellectual tolerance.  If there is one thing above all that I hate about the use of expert opinion in modern society, it is how it is used to stop debates and set up walls of intellectual intolerance.  I find that deeply offensive, especially when used by those in authority to protect their own hegemony.  This animus of mine, by the way, is my main motivation in articulating and advancing the neutrality policy of Wikipedia, CZ, and my other projects.

  But so far I haven’t commented on your notion that the mechanisms of society should put more energy into evaluating skills and talents “objectively” instead of via credentials.  Well, as I have argued in this essay, as soon as a decision-making entity performs some “objective” evaluation of knowledge, skills, and talents, a new credential is thereby created and is fodder for such facile dismissals as yours.  I would hope that most people who set out to evaluate a person’s work would try to do so in a way that they regard as “objectively” sound; that goes for dissertation committees, which award Ph.D.s, as well as anything less formal and more to your liking.

  A key point badly misunderstood by popular commentators is that the modern love of degrees and other credentials actually evinces contemporary egalitarianism more than it does elitism.  (This is not an original observation on my part, either.)  A degree program allows anybody who manages to jump through some “objective” hoops, such as examinations, to join any number of “guilds” that, formerly, were open only to those who had the right social connections.

  It seems to me that your complaint is actually about the fact that there are still intellectual or professional guilds, “closed shops,” at all, which are consulted by the media and everybody else whenever an expert is called for.  And there, I actually tend to agree with you; I think there should be much less professional licensing, for example, than there is.  But the guilds, and licensing, have their uses.  In our modern society, devoted to mobility and too fast-paced to develop many deep personal relationships, there is bound to be an element of “industrial efficiency” in evaluating expertise.  Did you have a better plan in mind?

 

DS: A few years ago I wrote an exposé of Wikipedia, wherein I detailed many problems with it. I wrote, ‘In three separate entries on the planets of the solar system- which entailed rote information that every school child learnt, one of the etymologies of a planet’s name was dead wrong, the pronunciation of another planet’s name was wrong, and even more shockingly, another one of the planets’ place from the sun was listed in the numerically wrong position- and no, I don’t mean Neptune’s and Pluto’s temporary switch of positions! For several months I kept an eye on both, until finally it was corrected.’ Now, here’s the part that really pissed me off. After I posted my essay, I emailed Wikipedia’s founder, Jimmy Wales, with the three specific instances (BTW- the three planets were, respectively, Venus, Uranus, and Jupiter) I mentioned within, and in a few days the saved changes on the History tabs of those three planets’ pages were wiped clean of the errors and the edits that corrected them, thus making it seem, via the permanent redactions, that my claim was fallacious. By that I mean there was NO RECORD whatsoever that such information was on the pages. I later asked an expert on computer programming how such could be done, and was told Wiki format is the easiest type of programming to manipulate. I realize that such total erasure was necessary for legal reasons during the John Seigenthaler scandal, wherein Wikipedia let gossip and blatant falsehoods almost ruin a man’s reputation, but this wholly invalidates any claim that Wikipedia is open and above board. It also proves again that Wikipedia is more concerned with gossip than factuality, and that, if information can be removed without a trace, then claims of Wikipedia’s accountability are totally false. Wikipedia seems to be the chief purveyor of Internet sciolism. While the release of the collective id is fairly explicable, what do you think propels the sciolism and bigwordthrowingarounding that pervades not only Wikipedia, but so many blogs and chatrooms where people argue vociferously for or against things that are not factually nor historically debatable? Finally, do you think excesses like the Seigenthaler incident presage possible libel litigations that will shut down such anonymous sites like Wikipedia and others?

 

LS: I agree that the lack of transparency on Wikipedia is truly egregious.  Last I heard, it was brazen and widespread; probably somebody should do an exposé just about that particular problem.  The reason that it is possible is quite simply that there are no consequences for improperly deleting records.  That in turn is a result of the fact that there is ultimately nothing like a credible constitutional system in place in Wikipedia-land, despite the desperate need for one.  Instead, the project is still best described as a mixture of anarchy and mob rule.  (I don’t own up to guilt on this point as much as I should.  If anyone could have anticipated the problems, I should have been able to—but I didn’t.)

  The notion of a “constitutional system,” which makes those in authority answerable to the public, is of course contrary to traditional practices both of publishing and of private organizations generally, which Wikipedia naturally shares in, despite itself.  No sane publisher would publish every draft of a work along with the final proof.  Similarly, few enterprises of any sort are so transparent that everything is always on the permanent record.  I’m not saying that Wikipedia (and similar enterprises like the Citizendium) must be 100% transparent all of the time—of course not—but processes and rules should be in place at least for “discovery” of the facts needed for democratic, constitutional oversight.  That’s what Wikipedia is lacking.

  As you can see, I see Wikipedia’s lack of transparency as being a political problem.  I’m sure it has epistemological ramifications, i.e., you’re probably right that it makes Wikipedia marginally less reliable, but that wouldn’t be the main reason I’m concerned with its lack of transparency.

  Wikipedia has been the target of many lawsuit threats and a few have already gone to court.  That’s not a topic I follow closely.  On whether there will be a big influential lawsuit that will change Wikipedia policy, or shut it down, I don’t pretend to have an interesting opinion.  I doubt it anyway.

 

DS: Ever since I did my piece on Wikipedia I have been a Public Enemy there. Perhaps not Number One (is that you?), but definitely in the Top 25 or so. My own Wikipedia page has been vandalized (and protected) numerous times, nominated for deletion more than once, links to essays of mine have been vandalized and deleted (causing a war between my cyberstalkers- and their sockpuppets- including many Wikipedia admins, and my supporters)- then bizarrely called ‘spam,’ even though my website and links all met the stated external links requirements Wikipedia laid out, and even the pages of people I’ve interviewed have been vandalized and nominated for deletion (unsuccessfully). It would not shock me if this interview was not allowed to be linked to your Wikipedia page. Obviously there is something infantilizing about being an editor at Wikipedia (and other blogs- especially those political). I was recently forwarded a link to this page by one of my fans who edits at Wikipedia. It is a complaint by an editor over the vandalizing of a page by an anonymous editor (likely one of my many cyberstalkers), and it aptly shows just how unthinking and zomboid most Wikipedia editors are. I’ve had fans of mine show me places where articles written in foreign languages are linked to English Wikipedia pages, yet my articles (and some other good websites) are delinked just because there is some proprietary editor who decides he or she cannot broach the link, for whatever reason. This particular editor (who is apparently part of the Wikipedia film articles team) obviously did not even read the initial editor’s complaint. Do you think that Wikipedia, along with political blogs, and other Lowest Common Denominator websites, have mainstreamed insanity and pathology into our culture? If so, what is the remedy? If not, what explains such bizarre behavior that people unleash toward others- the anonymity that they could never get in face to face real world arguments? Do you think it’s this anonymity that allows people’s inner MOTIs (Monsters of the Id) to emerge? Was that the reason for full disclosure of real names at Citizendium? And, has it worked?

 

LS: I don’t know or care who is Public Enemy Number One to Wikipedians.  I’m sure they’re divided on the question.  I’d be pretty high on a lot of lists, because the strongest hatred is usually reserved for apostates.  On the other hand, a lot of Wikipedians—I’m delighted to say—don’t care about my opinions about Wikipedia, and seem genuinely grateful to me for having gotten it started.  And I’m relieved to say that I’m not aware of having any “cyberstalkers” any longer, or if I do, I blissfully haven’t noticed them.  Maybe this means that my status in the online world has declined, you know—I can’t brag of cyberstalkers as you can—but I wouldn’t have it any other way.  I’m tremendously satisfied no longer to be personally involved with the insanity that is working on, or caring what happens on, Wikipedia.  I strongly commend a stance of not-caring to you.

  Anyway, there does seem to be something pathological in the way some Wikipedians behave.  But then, there are often different “pathologies” associated with the way people behave in all sorts of different situations—on the highway, in the classroom, in church, and so forth.  You’re suggesting that your situation (which seems awfully tame to me, as Wikipedia horror stories go) indicates that Wikipedia has “mainstreamed insanity and pathology into our culture.”  No, television and Hollywood have done that, haven’t they?

  OK, I’ll be serious now.  Of course Wikipedia’s anonymity, like that of the Internet in general, has enabled some of the worst parts of our nature—the really vain, mean, unthinking, and hostile parts of our nature.  You’ve seen the famous cartoon, of course.  Is Wikipedia worse than other anonymous venues online in this regard?  I’m sorry to say it but, for some, yes.  It’s because, unlike a mere comment or a single rating, the bad actor on Wikipedia is able to affect what is purportedly known about a subject, or a person’s reputation, merely by playing the game in a certain way.  This is very attractive to narcissistic power-mongering types, of which Wikipedia has many—in fact, many of the people in positions of authority on Wikipedia are this type.  But in fairness, most Wikipedians are not that way, and there are some real gems among Wikipedians.  So, if there is a “pathology,” it is not contracted in 100% of cases.  And has Wikipedia “mainstreamed insanity”?  I don’t see why you’d say so; make your case if you think so.  Just because it’s frequently crazy and more influential than it has any business being, how does it follow that it has made its unique brand of insanity “mainstream”?

  (I wouldn’t know about Monsters of the Id.  As a more or less empiricist type, I doubt there is such a thing as an Id, because I don’t believe in things that don’t have a clear and testable description.)

  Avoiding this sort of insanity was certainly one of our main reasons for requiring real names.  And I would like to add that, to a great extent, it has worked very well.  Anyone who has spent much time on CZ recognizes that, in its social and governance model, it is truly a breath of fresh air.  To be sure, CZ too has its share of power-trippers and odd types, but what community, especially what online community, ever lacked them?  But it certainly does not feel pathological, at least not to me.

 

DS: Let me also ask this, re: the notion of verifiability and third party sources. I run this interview series, which- to my knowledge- is the most read in Internet history (if anyone has proof of a more read series, feel free to let me know) and, naturally, information on my interview subjects has landed in their Wikipedia pages. Of course, this has led my many cyberstalkers- including admins at Wikipedia- to vandalize many of these persons’ pages, even though the information is accurate, sourced, and presented by a third party- my website. But because they have an insane hatred of me, they delete information (see the Wikipedia page on zoologist Desmond Morris, where, as of spring of 2009, the article has 7 mb of info, and a tag claiming ‘This biography of a living person does not cite any references or sources. Please help by adding reliable sources. Unsourced or poorly sourced contentious material must be removed immediately, especially if potentially libelous or harmful. (September 2007) Find sources: “Desmond Morris”- news, books, scholar.’ Yet, before it was vandalized in 9/08, it had 14 mb of info, including accurate and sourced information straight from my interview. It’s almost as if this is a joke; as if Wikipedia is actively pissing on the propagation of quality information. Even worse is the Wikipedia page on philosopher Mark Rowlands. Like Morris’s page, this article was vandalized by an admin in February of 2009 who delinked my interview, then also tagged the article as ‘This article needs references that appear in reliable third-party publications. Primary sources or sources affiliated with the subject are generally not sufficient for a Wikipedia article. Please add more appropriate citations from reliable sources.’ Cosmoetica easily fits that bill. Even worse was that, simply because the man was interviewed by me, in August of 2008, an editor nominated the whole article for deletion, despite its sourcing and the man’s stature. And then it turned out the editor was a teenager from the U.K. In what world does a literal child get to decide on the information, and its accuracy, presented to the world, when his top priorities are acne prevention and keeping his masturbation habits secret from his mom? It’s simply insane, no? And, to top it off, Rowlands was agog that his reputation was taking a hit due to the asinine vendetta that a handful of Wikipedians have towards me simply because I actually propound sound knowledge into the world, rather than waste my life 24/7 relentlessly arguing with idiots. Comments?

 

LS: I wouldn’t know what the most-read interview series online is—if it’s Cosmoetica, well done.  I think Edge.org must be giving you a run for your money.  But I do agree with you, of course, that Cosmoetica is a “reliable source” in that it has a lot of useful information about what people have said in response to your questions, duly edited and posted online in an important series.  I see no reason why it should not be used in Wikipedia articles.  Mind you, I don’t know that much about Cosmoetica.

  But as you’ve come out against Wikipedia publicly, all bets are off.  There are a lot of dishonest, irrationally bitter, and mean little people in the world, I’m afraid, and I’m sure such people are not above using Wikipedia to get back at you.  But, frankly, I don’t know why you care so much.  It’s just Wikipedia.  For somebody who has as much contempt for Wikipedia as you evidently do, I don’t see why you should care so much about what goes on there.

  You seem to be upset mainly because, in your opinion, Wikipedia decides on “the information, and its accuracy, presented to the world,” as if Wikipedia were the gatekeeper of all information.  I know it’s highly popular and very widely consulted, but its shortcomings are pretty well understood by now as well—not many people actually regard Wikipedia as anything like a gatekeeper or decider of accurate information, and from what I’ve read, a critical understanding of Wikipedia is becoming more widespread, even among students.

 

DS: Why do you think it’s so difficult to change the culture at Wikipedia? As an example, in the ceaseless edit wars between my stalkers and fans, there are links to pieces by and about me that are added 24/7, from dozens of different IP addresses, from different countries, and I’ve been shown pages where they even tracked edits to someone going from hotel to hotel across the U.S., yet they insist it’s me doing all of this omnipotently. I’ve even had Wikipedia editors send me viruses via ‘dummy pages’ they created, then emailed to me so I would submit a ‘complaint form.’ This was before I knew there was no ‘them’ to Wikipedia. Yet all some fans have ever done is improve articles and source the improvements to something I wrote, online or off. By Wikipedia’s own celebritized standards, I am ‘notable,’ because I have my own page at Wikipedia, have the most popular arts website online, have a long published resume online and off, and have been written about in major news outlets, online and off. Is it just because I’ve succeeded at ‘beating them’ at their own game that these sorts obsess over me? And, as is obvious, I am only one of thousands of such ‘disputed’ persons, things, and facts. It seems to me that the sheer and utter waste of time, cyberspace, and money for all the page versions and bytes is obscene. Yet, as I noted, if Wikipedia is logy toward change and progress, Citizendium is logy toward doing anything at all. There simply has to be a better way for a free and accurate online encyclopedia to be wrought. If so, have you any better options than those you’ve tried?

 

LS: Dan, I get the sense that you’ve been traumatized by Wikipedia, and I am performing therapy for you.  I hope it’s helping.  If indeed there are people on Wikipedia who “obsess” over you, I do feel sorry for you.

  As I said before, you don’t seem to understand the Citizendium system, though you puzzlingly pretend to, and your misunderstanding is the basis of this question.  And Dan—look, you’re obviously a very forthright and outspoken person, so I’m being forthright with you—don’t you recognize that it is just plain rude to ask me to suggest a replacement for a project to which I am still very much committed?

 

DS: Larry, I was in a gang as a teen, worked for a Mob front during those same years, and grew up in a poor New York neighborhood in the Serpico era. That has not traumatized me, so Wikipedia hardly would. Your former association with that website, though, lends itself to a higher than expected number of queries regarding it, just as an interview with the late Carl Sagan would lead to an overabundance of queries on space and astronomical matters. Where do you see Citizendium in ten years? Where do you see Wikipedia in ten years? If neither will be around, or if neither will be as popular as today, what do you think will replace them? Will traditional encyclopedia companies eventually go online?

 

LS: As I gaze into my crystal ball, I perceive that Wikipedia will still be around, and in roughly the form it is in now.  It will probably have a few more million articles, and there will probably still be enough people to maintain them in about the form they are in now.  But I have no idea, really, and I don’t care that much either at this point.

  Some have predicted recently that Wikipedia is on its way out.  I suppose that is possible; people do, after all, give up on massive collective endeavors en masse.  Just look at the flight from most organized religion, for example.  But I think that as long as Wikipedia is used and popular as much as it is, it will have a constituency, so to speak, devoted to “protecting” it.

  The Citizendium will, “in the fullness of time,” gain credibility it now lacks as an alternative to Wikipedia.  When it does, it will begin to accelerate its growth again, and this time it will not stop.  It may never grow as fast as Wikipedia did in its heyday, but I think that in ten years it could easily have on the order of hundreds of thousands of articles.  When it has that many articles, and a self-sustaining community devoted to quality in a way Wikipedia lacks, its articles could well displace Wikipedia as the go-to sources of information.

  Unless, of course, something better comes along.  That’s possible too.  But I don’t see it now.  Certainly not Knol, which has proven itself to be little more than a glorified blogging platform—or any other of the also-rans.

 

  When you ask, “Will traditional encyclopedia companies eventually go online?” I scratch my head because they are all online now.  One of the less-appreciated, in fact, is the Oxford Reference Online.  I think you must mean to ask, “Will they open their contents for free reading and open development?”  Here I will make a prediction: yes, at least one major mainstream encyclopedia will open up.  I’ve spoken with executives at three different encyclopedia publishers (at least), and they have all considered it fairly seriously.  No one has really tried it (Britannica has only made overtures in that direction).  Maybe we’ll see a resurrected Encarta—I wouldn’t be a bit surprised by that.

 

DS: Another ill, online and at Wikipedia, is the bullying that goes on. Do you feel that academics and scholars, who are wary of Wikipedia for its so-called Wild West culture and ‘anyone can edit’ beliefs, have begun to accept Citizendium? If not, why not?

 

LS: I think that, unfortunately, most scholars don’t know about CZ, and they certainly don’t understand that it is largely bully-free.  The reason they don’t, of course, is that we’re still obscure.  There’s really nothing more subtle or deep than that about it.  With time and the steady accumulation of content, I think that will change.  It took a few years for Wikipedia itself to become well-known, you know.

  The problem that keeps most scholars from contributing to any online reference is, of course, that writing encyclopedia articles and other reference material awfully resembles Work, and for that scholars need either money or glory.  So far, most online reference resources offer neither.  There are a blessed few scholars who work out of a desire to teach and to share what is known in their field with the general public—who feel a civic duty to educate.  But there aren’t enough such people to go around, I’m afraid.

 

DS: I mentioned Wikipedia’s obsession with verifiability over factuality, but there are a number of other problems with it. When I saw blatant errors and misstatements about myself on my page and elsewhere and attempted to change them under my own name, I was harassed and told that I had no right to change that information, even though it was 100% false. Why can’t the individual that a Wikipedia article is about be allowed to edit that page when it contains falsehoods? Who else is going to better know the facts? Yes, there is always the fear of self-aggrandizement, but that pales in comparison to the copious amounts of falsehoods on Wikipedia, as the Seigenthaler scandal showed. How has Citizendium addressed this, especially in regard to biographies of living persons? And, the irony is that such open and honest objections, when dismissed, force people to edit anonymously, and sometimes even deliberately seek to vandalize pages. It’s so obviously self-defeating, so why does it persist?

 

LS: Of course, the reason you’re discouraged (it’s now forbidden?) from editing the article about yourself is that you are biased.  Isn’t it in rather bad taste?  I don’t think I’ve edited the Wikipedia article about me since, probably, 2003 or 2004 (I honestly don’t remember; the page used to be my Wikipedia user page).  Certainly not since then.  I have never edited the CZ article about me, period.

  Still, I know just what you mean.  Please don’t think I’m being unsympathetic.  I have a friend whose Wikipedia biography was full of bias and error, and getting the problems corrected required hours and hours.  It was really a headache.  And I can certainly understand if a person feels abused when a pack of mostly anonymous clods start writing what they purport to be the definitive truth (sorry, the “consensus”) about you, when they know virtually nothing about you.  To protect your reputation, you have to go through a bizarre, Kafkaesque negotiation with dim-witted, openly biased, and/or hostile anonymous users who brandish what they pretend to be (and really is not) definitive knowledge of how Wikipedia rules are to be interpreted.

  CZ also does not allow a person to edit the article about himself, but unless he is really a significant public figure, he can request that we not include an article about him.  He can also register as a “topic informant” and submit as many relevant self-interviews and essays as he likes, which we will place on the wiki and take due note of.  CZ is simply a much more polite place and you will find a sympathetic ear for any complaint you have.  This is, again, in large part because we take real-world responsibility for our work.

 

DS: On a tangent, why is original research disallowed at Wikipedia? I realize this is part of their verifiability over factuality problem, but it still has deleterious effects on the dissemination of knowledge. This makes me believe that there is more than meets the eye re: the whole Wikimedia Foundation. Is there anything to this suspicion? Because, surely, the dissemination of facts is not the primary goal of Wikipedia. Comments? Does Citizendium allow original research, even if it is merely a re-evaluation of a work of art, not some new scientific claim?

 

LS: The restrictions on original research began with Nupedia.  It was a rule I originally formulated and advocated for, I believe.  My own arguments for it are complex in their details, but they aren’t that hard to outline.  To wit, encyclopedia articles purport to be summations of what is known on a topic—what is already known.  Purported contributions to knowledge need to be vetted by those who ought to know whether they are credible, and society has had a long succession of mechanisms for doing such vetting.  (Currently, in most developed societies and for most subjects, the mechanism involves publishing research in a peer-reviewed professional journal.  But it depends on the subject.)  If we simply jettison all restrictions on original research, then the process of editing a summation of what passes for knowledge among experts about a subject is pressed into service as a process of determining what should be considered to be known in the first place.  In other words, the Citizendium—or, heaven forfend, Wikipedia—would perform the same sort of vetting, of original attempts to contribute to knowledge, that is done by peer-reviewed journals.  Peer-reviewed journals are already a mess—what a nightmare it would be in the hands of something like Wikipedia.

 

DS: Why not remove the enterprise from the specter of a granting system, and a charity, and let it be a commercial site, subject to the whims of consumers? If the project is deemed as unfair as many people believe, it deserves to fail, no? And what do you think Wikipedia’s long term prospects are vis-a-vis Citizendium’s?

 

LS: I’m puzzled—surely, whether Wikipedia or CZ should be commercial is a very different question from what their long-term prospects are.  I think I addressed that above adequately (for now), so I’ll just explain why the projects are noncommercial.

  The projects are and should be noncommercial, in my mind, for two main reasons.  The first is the obvious reason—that advertisers become stakeholders who feel they ought to have influence in what should be a definitive summation of the state of knowledge of a subject.  You might say that this is a reason for having noncommercial newspapers, and perhaps so; but the Internet makes a noncommercial enterprise possible while high-quality, responsible newspaper reporting basically requires commercial support, like it or not.  (Or if there is a noncommercial way to support it, we haven’t discovered it yet; the Blogosphere will not be replacing newspapers anytime soon.)

  The second reason is that collaborative, online encyclopedia projects are community projects.  They should be collectively controlled by the people who create them.  This can be defended on Lockean grounds, i.e., the people who mix their labor with the bits and bytes ought to be co-owners.  This is really about control as much as anything having to do with profit.  In a commercial enterprise, deeply important governance decisions are in the hands of the owners of the enterprise, and will be made for commercial reasons; and, of course, those considerations may not cohere with the views of the people who actually create the content.  It might be different, by the way, if wikis were very difficult and expensive to set up and manage—because then there might be no other way to support the upkeep of the website than to make it commercial.  But wikis are simple to set up and not expensive to manage.

  Your question suggests that publishing projects will be less “unfair” when they are made to rely on marketability for their livelihood.  But the fact that tabloid newspapers and magazines thrive as well as any puts the lie to that, doesn’t it?  Don’t get me wrong—I’m sympathetic to your suggestion, but it does not seem to be supported by the facts.

 

DS: And what of plain old copy editors? As bad as the factuality of most Wikipedia articles is, their text and formatting are even worse- at a sub-Junior High School newspaper level. Does Citizendium have people who are just professional copy editors, not subject experts?

 

LS: No, but I don’t think we need them.  Our quality of writing is considerably higher than the average writing on Wikipedia—which, I agree, is often appalling.  We have a long document on article mechanics that should help explain our importantly different approach.  For one thing, while Wikipedia is content to have what are essentially grab-bags of unrelated facts, we encourage authors to tie things together into coherent narratives.

  In open projects, adding more and more roles tends to make the project harder to manage, and less open.  We’ve considered adding a special “copy editor” role but ultimately rejected it because the modest potential benefits did not seem justified by the bureaucratic weight.  Considering that our articles do tend to be well-written, I don’t regret this position.

 

DS: Knowledge is not copyrightable. Neither are facts. So why does Wikipedia act as if copying bare bones facts about a subject, if taken from another source, is taboo? Do they simply not understand copyright law? A passage from a Hemingway novel is protected, but not a single dictionary, almanac, or encyclopedia entry. What is the Citizendium policy re: this?

 

LS: We have no policy about this yet and I wouldn’t want to prejudice future decisions about one in an interview like this.

 

DS: Why do you think pet pages are such a problem at Wikipedia- wherein certain editors or admins will disallow any dissenting opinions to be represented on a page? As an example, here is the user page of one of my cyberstalkers. I almost laughed aloud when one of my readers showed me this page. I recalled an old book I’d read of a man who was in the Hitler Youth and had to write of his ten-year-old vision of the Fatherland. If one looks at his edits, you see a classic case of proprietary editing, wherein dissent is not allowed and cyberbullying is the norm. How does Citizendium combat this?

 

LS: Squatting on pages and fighting tooth and nail every change with which one disagrees—especially when those who do this have no regard for the neutrality policy—is certainly one of the most irksome aspects of Wikipedia to me.

  It has happened on CZ, occasionally, to be sure.  To be completely honest, we do not have an effective way to deal with it yet, other than the fact that discussion is conducted at a more mature level, and most Citizens, as we call ourselves, are a lot more reasonable than the cyber-squatters (if you will) you find on Wikipedia.  For a whole variety of reasons—not least of which is that it is very difficult for people to agree when the neutrality policy has been violated—it’s a very difficult problem.  So far we have gotten by on the general good sense of our regulars.  That said, I am convinced that with our soon-to-be-adopted Charter, we will add a mechanism whereby people can be taken to task for more persistent violations of our neutrality policy.

 

DS: What of the puerilization the Internet brings, such as the desire to be popular above all else, yet truly creative people will always be, by definition, unpopular, or loners? Do you think this causes a de facto feedback loop for why so many online addicts are miserable? They are truly lonely, despite their cyberconnectedness, and this dissonance between their real isolation and online popularity causes a sort of psychic motion sickness within them, which may push them to be even more outrageous or vicious online?

 

LS: Creative people are unpopular loners by definition?  Surely not.  I was just listening to a CD about the life of the great composer Mendelssohn (with my family in the car), and it seems Mendelssohn was deeply popular and loved by his friends and family.  He is also one of my favorite composers, so I would call him very creative.  If you think of the whole range of great artists and creative thinkers throughout history, and their lives, I think you’ll see there were a great many popular and social people among them.  Even philosophers with then-unusual views, like Socrates, Descartes, and Hume, were social and popular.

  As to your theory—I’m afraid I don’t understand it (I’d have to hear more), so I can’t really do it justice.  I think Internet addicts are miserable, if they are, because all addicts are miserable insofar as they are missing a deep human need, namely, a need for freedom.  Both to feel and to be free, our habits must cohere with our highest judgments about how we should be living our lives.  If we live too much online, and feel that we should be doing more offline, we will feel to that extent unfree and not in control of our lives.  This is a distressing thing for all of us, when we experience it—or so I imagine.

 

DS: Secondly, why does no one else seem to recognize that the key to a ‘real’ popularity might be excellence vs., say, those websites (in the arts or beyond) that are briefly ‘popular’ due to some passing cultural hiccup? To quote Spinoza: ‘All things excellent are both difficult and rare.’

 

LS: (Warning—pedantry ahead!)  That is one of my very favorite quotes in all of philosophy, although I thought it was more to the effect that all excellent things are as difficult as they are rare—meaning, I guess, that the reason excellent things are rare is that they are difficult; or, more precisely, the rareness of excellent things is a function of their difficulty.  As I recall (I could have this completely wrong), Spinoza’s point was that to do something excellently, one must first have humility, because it is rare really to succeed, and one must also work especially hard if one is to have any hope of achieving it.  Spinoza, by the way, was a great one for the moral and psychological importance of freedom.

  To answer your exact question, I think few would agree with the claim that the key to popularity is excellence, simply because the claim is not true.  In fact, it’s rather obviously not true.  Consider how many excellent things are not terribly popular.  Consider the whole history of Western culture, which enjoys a minuscule popularity compared to the brainless parts of the Internet, television, Hollywood, and popular publishing.  And if your claim is a causal one, then simply look at the websites that are most popular today.  If Facebook, Wikipedia, and YouTube are popular, it certainly is not because they are excellent; it is because they are entertaining, or merely useful.

  By the way, with my latest project, WatchKnow.org, I’m focusing especially on educational videos.  It’s hard to get everything right, but we’re incrementally making all sorts of changes to the system and the database that will, in the end, make a resource that is unusually excellent, I hope.  Not perfect, but definitely above the norm.  I do think that popularity will come, at least within the segment of the population we’re trying to serve.  I’ll be very happy with that, even if we’re never a top 1,000 website.

 

DS: Culture critic Lee Siegel wrote, in his book Against The Machine: Being Human In The Age Of The Electronic Mob, that there are five open supersecrets of the Internet: 1) Not everyone has something meaningful to say. 2) Few people have anything original to say. 3) Only a handful of people know how to write well. 4) Most people will do anything to be liked. 5) ‘Customers’ are always right, but ‘people’ aren’t. The thread in these supersecrets seems to focus more on individuals than groups. Any thoughts re: how this affects the dissemination of knowledge, for these things are, in effect, static?

 

LS: I didn’t read that book, and I am probably not understanding your question in anything like the way you intended it.  I don’t know, but maybe Siegel’s point is that, while the medium of publishing traditionally conferred some credibility, these five facts explain why so much of what is written online is worthless.  Well, as I have said elsewhere, the Internet has two basic functions, communication and information, and what works well as communication usually serves very poorly as information.  But insofar as so much of “crowdsourcing” involves aggregating the results of what are, essentially, communications, the dissemination of knowledge through “crowdsourced” websites is going to be of depressingly low quality.

  High quality in information resources takes more time (and sometimes, money), and requires a commitment to developing reliable information—but it will come.  There is already a stunning array of high-quality free information online.  It’s just difficult to pick it out from all the noise.

 

DS: I’ve railed a bit about the celebritization and cult of expertise. But the opposite is also a problem, and I think a happy medium needs to be found. What annoys me is the idea that elitism is somehow bad. Yes, elitism based upon birth or wealth is not healthy, but based on meritocracy- hell, that’s the whole ‘theory,’ if one will, that America, and the Jeffersonian ideal, were based on. When someone calls me an elitist, I say, ‘Of course. Don’t you want great artists, doctors, leaders, etc.?’ There is this whole notion, expounded by PoMo and PC, that has led to the exaltation of mediocrities (at best) like a Steven Spielberg or Oprah Winfrey, in pop culture, and the rise of idiotocracy in politics that led to 2008’s crowning of the incredibly dumb and profoundly intellectually unqualified Alaskan governor Sarah Palin as the Republican Vice Presidential nominee; not to mention the problems elitism by birth caused with the selection of George Bush over Al Gore in the 2000 Presidential race (and I’m not a Gore fan, I voted for Nader!). Thoughts?

 

LS: “Elitism” seems to be the wrong word for what you’re endorsing, but I understand why you use it.  You’re saying at least that you think that there are differences in merit between different artists, talking heads, politicians, etc.  Of course there are.  I think you’re also saying that those with more merit actually deserve something more—but what?—than those with less merit.  Perhaps, in some cases, only more respect; just that might be offensive enough to tar you with the brush of elitism.

  For myself, I would say that there is an increasingly troubling tendency to extend the once-vanilla and universally (in the U.S.) endorsed standard of egalitarianism—equality before the law—to every feature of life, so that any suggestion that people should be treated or evaluated differently increasingly smacks of “elitism.”  The fact that you, evidently a progressive politically, perceive yourself to be tarred with the “elitist” brush simply because you think that some people are more deserving than others for their merit shows just how far egalitarianism has come in our society.  In its most extreme form, contemporary egalitarianism opposes the whole notion that people can deserve anything at all for any feature in which they are superior to some others.  I’m intrigued by the notion that a lackadaisical attitude toward excellence, inspired by egalitarianism, is what makes it possible for mediocrities to rise to positions of fame and influence.  Maybe there’s something to that, but I’m not enough of a historian to analyze it properly.  After all, in ages past in which the dominant culture was unapologetically elitist in whatever sense you like, didn’t many of the leading thinkers, artists, etc., prove to be as mediocre as they are today?  I don’t know—I’m just saying.

  Obviously, in many fields, gaining long-lived popularity and influence requires a social skill set—involving glad-handing, flattery, avoiding disagreements, getting behind whoever is popular, toeing the party line, avoiding original thinking, not appearing better than your superiors, always being available for superiors while snubbing underlings, etc.—which often has nothing to do with ability in the field.  That’s certainly true in politics.  In a world increasingly constrained by egalitarian sensibilities, I suspect that this skill set will be the mark of the true elite.  That’s why I have never had any designs on trying to join that elite, because I’ve never been able to stomach developing that skill set.

 

DS: Let me turn aside and delve into some biographical material. Are you married? What does your wife do? And how did you meet?

 

LS: I am married, but my wife is a private person and would not want me to answer these questions.

 

DS: When and where were you born? What were some of the major, or defining, issues during your youth, insofar as they affected your career path?

 

LS: I was born July 16, 1968 in Bellevue, Washington, when my family was living in nearby Kirkland.  Later we moved to Redmond (before Microsoft was even thought of) and then in 1975 we moved up to Anchorage, Alaska, where I did most of my schooling and growing up.  My family went to church regularly until I was about 12, and in my teen years I gradually lost my faith.  I had a deep skepticism about everything, but also a great desire for “certain knowledge”—what Descartes wrote about his education and method in Discourse on Method resonated very well with me when I read him in my freshman year in college.  I ended up writing my college thesis about Descartes’ method.

 

DS: Did living in Anchorage, Alaska have any effect on your later life? As that state’s largest city, is it urban? Or was it a large town? Were you active, out in the countryside, or were you reading and studying?

 

LS: When I was there, Anchorage had under or around 200,000 people.  Parts of Anchorage were and are fairly urban, and other parts are very much suburban.  But I guess if you distinguish “cities” from “large towns,” it would count as a large town.  Still, it’s the biggest town for thousands of miles, it’s where many oil company offices are, it’s a major airline and military hub, and of course a jumping-off place for tourists.  So it’s a very interesting place—I kind of miss it, and we could live there, but I’ve decided I can’t stand the darkness in winter and being so far away from everything.

  I was more active as a kid than I am now.  As a teenager I went cross-country skiing a lot in the winter in lots of places around the area.  My Dad is pretty outdoorsy, so my family went fishing quite a bit from the Kenai Peninsula and Prince William Sound on up to the Wasilla area—yes, that Wasilla.  (Some of my family lives there.  Sarah reminds me of some of the girls I knew growing up.)  We also went boating in an inflatable Zodiac in those places as well.  Some of my favorite memories are hiking in the Chugach Mountains with friends.  We hiked up several of the peaks that overlook Anchorage—Flattop, O’Malley, Wolverine, and Near Point.

  As a kid I didn’t study nearly as much as I should have, but I guess I studied more than most.

 

DS: What were some of the cultural touchstones in your life, the things, events, or people who graced your existence with those ‘I remember exactly where I was’ moments?

 

LS: There are probably too many to mention, and most of them are too personal.  You’re inviting me to get confessional, and while I don’t mind sharing my ideas, I am not interested in sharing details of emotional moments in my personal life, and frankly I doubt your readers would be that interested anyway.

 

DS: What did you want to be when you grew up? Who were your childhood heroes and why? Where did you go to high school, and to what college? What were your major areas of study?

 

LS: I had a whole succession of ambitions.  Ages 5-7, an astronaut; 7-10 maybe, a cartographer (I just loved maps); 10-13 or so, an astronomer; 13-17, a novelist; 17-26 or so, a philosophy professor.  After that, I really didn’t know what I wanted to be.  Now I don’t have any grand career ambitions, only some particular ideas for projects I hope to do someday, and books to write, although that requires leisure I don’t have now.

  I don’t recall having any childhood heroes as such.  Maybe I’m just not remembering, but I don’t think so.

  I went to Robert Service High School, in Anchorage, and Reed College, in Portland, Oregon.  When I arrived at Reed I was going to double major in philosophy and psychology, but after taking 5-6 semesters of psychology courses I dropped it and focused on philosophy.

 

DS: Robert Service has a school named after him? Cool! What sort of child were you- a loner or center of attention? Did you get good grades? Were you a mama’s boy, a nerd, or a rebel?

 

LS: As a little kid, I was the “baby” of the family—the youngest and by all accounts a cute kid.  I wasn’t exactly a loner, because I always had some friends, but I was very introverted (still am).  I was occasionally the center of attention, but usually I was happy to be doing my own thing.  I definitely was more of a nerd than a rebel, but I certainly have a bit of the rebel, or at least the crotchety old man, in me.  I’ve never felt that I fit in very well almost anywhere, in any group—again, my friends aside.  I’ve always had my own ideas about things and have usually had no small amount of contempt for conformity, even the sort of “conformity to nonconformity” I detected in my peers at Reed.  In college I quickly realized that the name for what I was, was “intellectual individualist.”  Frankly, I don’t know very many people like myself.  I don’t say that to brag or to confess anything, it’s just a somewhat sad fact.

 

DS: Any siblings? What paths in life have they followed?

 

LS: I wouldn’t want to talk about them, especially without asking their permission; I think it would be a violation of their privacy and an imposition for me even to ask.

 

DS: Any children? What paths have they followed in life? What are their interests and careers? Are any of them writers?

 

LS: We have just one.  My little 3-year-old loves heavy machinery, both for construction and farming, and recently has taken an interest in animals and, especially, pigs and dinosaurs.  He says he wants to be a farmer and a truck driver.  He plays all day long with Legos.  We’ve read a lot to him.  We have a very large book collection, and in the last year or so have taken to reading him several dozen chapter books, including Charlotte’s Web and Pinocchio both twice (the second time at his repeated request).  We’re also on Magic Tree House #21—we often go through one of those in a day; he loves them.  He’s a pretty bright little kid.

 

DS: What of your parents? What were their professions?

 

LS: My Dad was a respected marine and seabird biologist and later an ecotour operator (still is), and my Mom was a homemaker and then secretary.  They didn’t really encourage me in any particular career, as far as I can remember, and they didn’t restrain me too much from philosophy.  I do vaguely recall gentle advice that I ought to pursue some field that I can make a living in.  Obviously, I ignored that advice.  When I have gone into fun things that are seriously unremunerative, like teaching Irish fiddle, the women in my life have had gentle words with me.

 

DS: What was your youth like, both at home and in terms of socializing with other children?

 

LS: I wouldn’t say I had a very happy or a very troubled youth.  My parents got divorced when I was around 11 and that affected me a lot, but I was loved, I had friends, I had plenty to do, we were never hard up for cash.  Maybe my biggest regret about my childhood is that I was very disappointed even at the time with my education at public schools—even Anchorage public schools, which were supposed to be some of the best in the country, at the time anyway (I don’t know anything about how they are now).  I was a pretty bright kid but I just couldn’t abide the sort of slow-motion busywork and the “How do you feel about this?” sort of drivel that was imposed on us in the 1970s and 80s.  To avoid such nonsense is really why I’m going to homeschool my boy (and any future children) and try to give him every advantage when it comes to education.

  I had some great friends throughout my youth—I’m sorry I lost touch.  We spent hours and hours riding bikes and playing games.  As a teenager I was pretty geeky.  I was rarely the leader in any situation because I was usually the shortest kid in my class, and in junior high I was the kind of kid whose gym sweats were pulled down.  I think I got some respect in high school by being a state-ranked debater and orator, as well as doing pretty well in cross-country running and skiing.

 

DS: Do you consider yourself a social or cultural critic?

 

LS: Yes, at least on Internet and education technology issues.  I feel very grateful that my opinion is sought out on such issues from time to time.  It has given me a perspective on journalism and cultural criticism that I wouldn’t otherwise have.

 

DS: On my list of most influential books in my life, I would include Alex Haley’s The Autobiography Of Malcolm X; Walt Whitman’s Leaves Of Grass; Loren Eiseley’s autobiography All The Strange Hours; Leonard Shlain’s Art And Physics; and Betty Smith’s novel A Tree Grows In Brooklyn. What books would you put on such a list as mine above?

 

LS: I guess it would go in a series based on the periods of time in my life.  Before my teen years, I’m sure the Bible was the most important book in my life; as a little kid, I read a lot of it and took it very seriously.  After that, I remember that in my early teen years The Lord of the Rings really fired my imagination, and Catcher in the Rye inspired my style and outlook as a would-be novelist in my mid-teens.  Then in my later teen years and in college, I was very much taken by Descartes’ Discourse on Method and Ayn Rand’s The Fountainhead.  In graduate school, I discovered Thomas Reid’s Inquiry into the Human Mind and Essays on the Intellectual Powers of Man, as well as essays by G. E. Moore, and I guess they most deeply influenced my philosophical development and style.

  Since graduate school I have read a lot of books, but none of them have really had an impact on me the way those books did before I turned 30.  Well, I guess Eric Raymond’s essay “The Cathedral and the Bazaar,” and his other essays, helped form my early thinking on the Internet and Internet communities—Raymond has really gotten short shrift since he has come out against some of the dogmas of Web 2.0.  Also, very recently I’ve read Mark Bauerlein’s The Dumbest Generation and Maggie Jackson’s Distracted, and they have helped me to think about some of the deep social issues regarding Internet communities.  As to fiction, since my youth, I’m not sure it has affected me all that much.  I really loved Dostoyevsky’s The Idiot and many other Russian novels when I read them in high school and college.  I also read a bunch of Walter Scott novels.  I admit I haven’t read very much contemporary literature.  Some of it (like Bulgakov’s Master and Margarita) is interesting, but contemporary literature too frequently reflects a cynical and nihilistic, not to mention annoyingly pretentious, sense of life which is totally foreign to me.  I’ve also evolved from a would-be teen novelist, an artistic type who appreciated fancy language, to a philosopher with a plainer, unadorned style who can barely stand the sort of precious language novelists often indulge in.  I read novels for moral inspiration, ideas, and entertainment, but nothing seems to just whack me over the head anymore.  I guess it’s mostly because I have my own ideas now.

 

DS: The Idiot is very underrated, and I’ve always wondered why Crime And Punishment gets so much attention from Dostoevsky fans, when The Idiot is as good or better. On a tangent, there was a mid-1990s film called Smoke, with Harvey Keitel, about a Brooklyn cigar shop owner who chronicles his street corner every morning, by taking a photo of it at exactly the same time. His character later says something to the effect that his cigar shop is no less interesting nor real than the top of Mount Everest. It just requires a change in the percipient, not the locale. In fact, I would guess that an alien explorer would likely find the cigar shop infinitely more fascinating than the cold lifeless vista Everest provides. Agree or not? And what does that say about the degree and depth of how we observe and integrate knowledge based upon biases for or against something?

 

LS: Well, I would certainly agree that daily pictures of the street corner would be more interesting to look at than daily pictures of the view from Everest.  That’s because ultimately the human is more interesting to me—and most of us—than the natural world.  Now, a single picture of the view from Everest would be more interesting to me than a single picture of the street corner.  The view from Everest is dramatic, unique, rare.  A single view of the street corner would be generic and boring—just another street corner.  What would make repeated viewings of the passing scene on a corner interesting would be the more mundane but still interesting human dramas that would unfold.  Repeated viewings from Everest, however, would quickly become tedious simply because the repetition would devalue, rather than enhance, its drama and uniqueness.

  On this analysis, the interesting difference between the two views lies simply in where we can find drama—the human becomes more dramatic when familiar, the natural less so.  I’m not sure what it might say about “the degree and depth of how we observe and integrate knowledge based upon biases for or against something.”

 

DS: What are your views on religion? Do you believe in gods or not? Are you an atheist or agnostic? What links do you see between philosophy and religion? Is myth merely expired religion, and religion myth alive? Do you see religion spawning from the same human wellspring as art?

 

LS: I don’t believe in God, or any gods, although I can respect theism.  I believe I have an obligation to myself not to believe anything that I do not really understand, that is, anything that is not rooted, however indirectly, in my experience.  The main trouble I have with theism is not the strength of the arguments for it—in fact, having reviewed them many times, I find the arguments are not bad at all—but that I cannot really understand what people mean when they speak of God.  God is supposed to be a spirit which is all-knowing, all-powerful, and all-loving; but what is a spirit?  I am supposed to be able to intuit this by being acquainted with my own spirit.  But along those lines, I think I am acquainted only with my own perceptions and thoughts, which is what I mean by my mind.  I think that mental events “supervene” on physical events in some way; without the physical, there is no mental.  The notion of a disembodied spirit, therefore, makes little sense to me.  It makes even less sense, given anything with which I am acquainted, for such a spirit to create or even affect anything except something in my body, such as a movement of my arm—much less to create anything ex nihilo, as God is said to do.

  I do not claim that God does not exist, because I have no evidence of that.  But I also do not claim that there are no invisible, weightless, soundless, floating pink elephants outside my window, either.  I just don’t care that much about the proposition, that’s all, apart from the fact that so many people have believed it.

  I have two reasons for respecting theism and theists more than some of the more strident atheistic intellectuals these days.  One is that I am, I like to think, modest.  I know that I do not know everything, and there are many people who are far more intelligent and well-educated than I am or ever will be, who believe that God exists.  Perhaps they have insight and perspective that, if I could become acquainted with it, would change my mind.  And I am not at all impressed with the reply to this that, essentially, those people were merely partaking in a tradition of theism; one could just as well say that their detractors are partaking in a tradition of atheism.  Both claims are idle and prove little.  The other reason is that many theists ground their belief, as they say, on personal experience of God.  One of the major modern formulations of the argument from religious experience was elaborated by one of my philosophical heroes, the late William Alston.  Direct experience of God, too, is how Saul of Tarsus became Paul the Apostle—arguably, the deepest of the writers and thinkers behind the New Testament.  If I sincerely believed that I, too, had had a direct experience of God, then that might (I imagine) form the basis of a God concept, on which I might hang the rest of a personal theology.

  As to the links between religion and philosophy, I think religion has mostly inspired philosophers to make sense of itself, and to render consistent and rational the dictates of dogma.  I think that an areligious philosophy—including an areligious ethics—is perfectly possible.  If some day I sum up my views on matters philosophical, as I would like to do, I will show how.  But of course the main project of ethical theory is precisely this; few ethical theorists rely on God’s dictates to justify or explain their theories.

  I think “myth” is probably overused in a religious context.  I know people say it is a value-neutral term, that to call a religious story a myth is not to imply that it is made-up, but I am not convinced.  The value-neutral usage has not caught on outside of academia, it seems to me.  So, yes, I would agree with the claim that the word ‘myth’ is probably best reserved for religious stories from expired religions.  But this really isn’t something I think or know a lot about.  I’m sure religionists have good reasons for insisting on using the word in their scholarship, but it’s hard to fault non-scholars for taking offense at the usage, which sounds intolerant, disrespectful, or dismissive to them.

  Does religion spawn from the same wellspring as art?  Well, perhaps; it depends on what you think the “wellsprings” of religion and art are, and surely they both have many.  I guess I agree with one common view, that religion answers a need for a sense of safety or groundedness in the face of the uncertainty of life and the certainty of death.  I have not read about the lives of many great artists, and I don’t pretend to know what inspires great artists.  In my own feeble attempts at various kinds of art, from time to time, I suppose I am inspired to create something that expresses deep truths about the human condition, whether to entertain or to demonstrate my clever grasp of profundities.  So, while I have heard the suggestion made before that art and religion have a similar source, I guess I can’t agree with it.  Death and uncertainty are, after all, only two aspects of the human condition.  I would sooner say that art and science come from the same wellspring.

 

DS: Since God concepts are obviated by simply asking ‘Who made God?’- because the answer could always be ‘He always was,’ which is the same answer one can give re: ‘What made the cosmos?’- thereby making God a superfluity, why does such a belief persist?

 

LS: I am not a specialist in philosophy of religion, but I’ve taught it.  I find it very interesting how atheistic intellectuals, especially scientists like Dawkins recently, have made all sorts of glib pronouncements about religion when, really, I doubt they know much about it.  They think they’re being clever, when they’re really just reheating very old arguments.  It is very glib to say such things as “God concepts are obviated by simply asking ‘Who made God?’”  Any graduate student in philosophy of religion or in divinity would be able to respond quickly and easily.  I say this not without some sympathy for your view, because I am an agnostic myself.

  Without getting into the technical details of arguments, let me try to explain why so many theists are not convinced that your sort of argument is as clever as you apparently think it is.  Basically, you’re saying that God is “superfluous,” because whatever explanatory work God does is work that the universe itself can be said to do.  But in saying that, you are at bottom simply begging the question.  Theists who accept God for explanatory reasons do so precisely because they believe that the universe, the material universe we encounter, is not the sort of thing that could come into being by itself, or organize itself in various ways (not just biologically).  They find this to be quite as obvious and intuitive as you find the contrary, and if they do not heap scorn upon you for your ignorance (which some of them do, in their own way), it is only because they are more polite than you are.  When religious philosophers ask, “Why is there something rather than nothing?” and “Why do we find the universe so fine-tuned for the beauty and order, physical and biological, we see around us?” they see answers such as “there is no reason” and “maybe we’re just in one possible universe that happens to be fine-tuned in just the right ways” as meaningless cop-outs.

  I’m not siding with them; I’m just saying that the issue is not really as clear-cut as it might seem to you.  Simply, there is a lot more to say.

 

DS: Have you ever read Pascal Boyer’s Religion Explained? A pal of mine recommended the book to me, but it was not well written and its ideas were dubious. Basically, Boyer’s explanation boils down to the fear-and-the-bush analogy. If there are two people, and there is a mysterious rustling behind the bush, the person who is fearful and immediately runs away is likely to pass on more of his genes to the next generation because, while the brave person may be braver, if there was a saber-toothed tiger behind the bush, the brave person is dead, and bravery is weeded out. Similarly, religious people and beliefs dominate because fear is good for spreading one’s genes, and beliefs in the supernatural are fear-based. While fear is no doubt a part of religion- i.e.- the fear of death, Boyer’s is too simplistic an approach. Thoughts on the idea, and on religion’s provenance?

 

LS: Such speculation strikes me as silly to consider at great length, because it is not falsifiable and, just as you say, simplistic.  But what do I know?  Again, I don’t specialize in this stuff.

  The sorts of explanations of religion that appeal to me are those that we can see at work today and in ourselves.  In other words, to explain religion as a phenomenon, it seems to me it is very important that one give a psychological explanation of why so many of us are motivated to accept the supernatural.

  My approach is, I suppose, similar to that of the existentialists (though, otherwise, I doubt I have much sympathy for existentialism).  People accept religion because the thought of falling back on themselves, of confronting a faceless, impersonal Nature with their own paltry resources, of having limitless, essentially unconstrained choices, fills them with terror.  There’s so much we could get wrong that could result in tragedy.  Religion stems not merely from the fear of death, but from fear of both the unknown and the difficulty and uncertainty of the rational thought needed to confront the unknown.  Being plugged into a higher power (or powers) of some sort, along with an accompanying world view, gives one a very comforting sense of understanding and belongingness.  Being able to pray sincerely, or otherwise go into a state of mind in which one is convinced one is better connected to the deeper currents underlying the universe, also makes us feel less lonely and alienated.

  As you can see, I don’t really have any original thoughts on that subject.

 

DS: Are there any major areas of philosophy that you think have been wrongheaded, since the earliest times they were proposed? What are they and why?

 

LS: By “areas of philosophy” I assume you mean whole subdisciplines, like metaphysics, epistemology, and ethics.  No, I think all of the widely recognized ones are perfectly sensible and worthwhile, including more recent ones such as feminist philosophy and philosophy of sport.  Some people are quite hostile toward epistemology, but that was my own area of specialization.  And then, of course, the Vienna Circle and Natural Language philosophers were opposed to metaphysics on grounds that it is not falsifiable, but I have always thought that it was primarily a study of our fundamental concepts, so it is not a count against it if it is not empirically falsifiable.  I do think, of course, that some common approaches to various subdisciplines of philosophy have been wrongheaded since the earliest times they were proposed, but I don’t think that’s what you asked.

 

DS: Do you belong to any political party, and what are your views on such current politicized matters as euthanasia, abortion, gay marriage, and stem cell research?

 

LS: Becoming associated with any particular party or ideological position would compromise what I do as an organizer of online community projects which are committed to neutrality.  I can say that I am an individualist and pro-liberty, which is something that has, occasionally, been associated with both major U.S. parties.

 

DS: How political is philosophy, internally, within the domain, as far as careerism?

 

LS: I don’t know.  I am pretty tone-deaf when it comes to career and office-type politics.  I have read and heard things, but I take gossip and conspiracy theories with a grain of salt.  If I was ever the victim of academic politics myself, I wouldn’t know it, and I probably wouldn’t want to know it either.  It’s a game that always makes me nervous and which I tend to ignore as much as possible.  I’m sure I’ll never go far because of this quirk of mine—oh well, we all have our limitations.

 

DS: Do you have a philosophic or epistemic bête noire? Who is he or she, and what is the source of your dispute?

 

LS: This is a very tall order.  I don’t want to offend anyone, quite frankly, and without going into quite a bit of detail—book-length detail—I couldn’t explain my negative views on various figures without offending a lot of philosophers who might happen to read this.  I just don’t want to get into it without taking a lot more space to explain, qualify, and defend my views.

 

DS: Is knowledge just data, or the search for answers? And what is wisdom- the ability to apply knowledge? What then is meaning? Do we simply graft it from the ether? Do we all determine it? Is that then not solipsism? Am I entitled to say that Albert Einstein or Abraham Lincoln led lives of more meaning than Nancy Slowowicz, a pole dancer from Newark, New Jersey? And more importantly, am I correct to say it?

 

LS: Come on, that’s too many questions to answer adequately in a book, let alone a single interview, let alone one “question” in one part of an interview.  I don’t even understand how your questions suggest a single coherent narrative I might offer that more or less covers them all.  Perhaps you are suggesting that wisdom is what gives our lives meaning.

  You ask questions about three different general topics here: knowledge, wisdom, and the meaning of life.  I’ll take up each very briefly.

  As any epistemologist will tell you, the word “knowledge” has multiple meanings, depending on context.  The concept studied most by philosophers is called factual or propositional knowledge, the sort of knowledge we mean when we say, “I know she is at work, because I just saw her there.”  We distinguish knowledge, in this sense, from data because data can be false while knowledge is true (there is no “false knowledge” in most senses of the term).  Many people further distinguish knowledge from “wisdom,” with the idea that wisdom is some sort of premium-grade knowledge.  Perhaps, perhaps not.  I think of wisdom as the understanding of broad principles that inform vast numbers of other topics about how we think about life and the universe, and especially about how we live our lives.

  The question of the meaning of life is different.  I offered an answer in a commencement speech at my alma mater, Reed College.  In brief, I think that our lives are given meaning by the positive impact we can have on human life—our own but also the lives of others.  Einstein’s and Lincoln’s lives had more meaning than the pole dancer’s insofar as they had a deeper, more positive impact on more people’s lives.  This isn’t to say that the pole dancer’s life was meaningless; how she lives her life has the greatest impact, first and foremost, on herself, and secondarily on her immediate family and friends.  Like George Bailey in It’s a Wonderful Life (one of my favorite films, by the way), she might have a profoundly positive impact on a relatively small circle of people.  Even if she goes off and lives as a hermit, somehow making herself divinely happy, she can still have had a meaningful life for having done so.

               

DS: I ask this because I have come to the conclusion that 99.99% of people are mere placeholders- i.e.- they are the genetic go-betweens connecting the great people who push human life, society, and culture forward. Think of all the people who claim to want to sacrifice for their children, but for what? So that their children can sacrifice for their children who can repeat the process ad nauseam? No, whether they realize it or not, they are doing it in the hopes of being part of a lineage that will affect something deeper. Agree or not?

 

LS: Disagree.  Yours appears to be a deeply collectivist insight, one that, it chagrins me to say, I associate with fascism and the “great man” theory of history.  Boy, do you have a few things to learn about value theory; thinking this way will generate a number of radically wrong conclusions.  Notwithstanding my views on the meaning of life, I am very much an individualist.  We can have the deepest impact on our own lives.  There are exceptions, as with people who—admirably—choose lives of self-sacrifice.  But simply by improving our minds and bodies, taking in the beauty of the world, living as pleasant a life as we can, or even (should God turn out to exist) having a certain relationship to God, we give our own lives a sort of base-line meaning, a kind of meaning accessible even to children who entirely lack responsibility for others.

  Let me put it another way.  I do not live my life in order to support some great man, or woman; I live my life, first and foremost, for myself, because I am the person whose life I can have the greatest impact on.  But if in virtue of my relationships and position in the world I can positively affect others, then my life gains meaning that way as well; and sometimes, the benefit I can offer humankind can be great enough for me to give up my own.  This ought to be rare, but it explains why we praise heroic soldiers, firefighters, and the like.  My life can have meaning in this way without any reference to “the great people.”  Indeed, the suggestion that my life lacks meaning except as a “genetic go-between” connecting “great people” deeply misunderstands the value of human life.  To say so is to imply that my life is expendable, can be treated cavalierly, and ultimately that I have no rights unless I am one of those “great people.”  This is how 20th century collectivist dictators treated their rank and file citizens.

 

DS: As an artist, for example, there are manifest examples of this urge that crop up. Sometimes young wannabe writers email me and ask me why I write, and I usually say that in ten thousand years, on some starship ten thousand light years away, I want some sentient being, human or not, who may be lonely on some interstellar freighter, to seek to alleviate his tedium by searching the Encyclopedia Galactica, to stumble across my work- read a poem or story or essay, and say to himself, ‘Ah, that ancient earthling- he knew!’ What it was I knew is no matter, but I want that power to awaken another being to something greater, deeper, more lasting. To me, there’s no other reason to write, create art, or pursue any endeavor, save to bring pieces of your life and knowledge to others, so they can benefit intellectually or emotionally. Can there be a deeper or more profound concept of immortality? After all, when we speak of Shakespeare, we do not usually refer to the guy stiff under Avon, but to the ideas and feelings his art ushers forth. Is this why you pursue philosophy?

 

LS: Do you have children?  Well, yes, there is a deeper and more profound concept of immortality: it is the sense in which we all live on through our children, as well as through our students and through any on whom we can have a positive impact.  I know this probably sounds very trite, but there’s a reason why it’s trite.  Given the choice between allowing my children to live and thrive, and writing the Deepest and Most Profound Work of Art in History, I would instantly and without another thought choose the former.  Anyone who feels otherwise either does not have children, or should not have them.

  I am not saying this in order to set up a false dichotomy between child rearing and (other) creative endeavors, but to address the premise behind your quite interesting remarks; your premise is that the highest or most important “immortality,” or impact we can have on other sentient beings, lies not in immediate personal relationships but instead in abstract, historical relationships.  To be sure, the Shakespeares of the world have had an impact on humanity that is deeper and more meaningful than the ordinary parent’s; but that hardly means that the impact of the parent on the child is, somehow, trivial or unimportant.

  The ability to have an impact on many people, or across the generations, is surely a grand thing, and one can’t fault people for seeking that sort of influence.  But great harm is done when mediocrities who fancy they can have that sort of influence neglect the direct impact they have on those around them.  I don’t mean to accuse you, in particular, of course.  I guess this is why, though I wish I had time to develop a system of philosophy, I haven’t done so yet.  I wish I did have time; maybe eventually I will be able to.

 

DS: I maintain that the creative arts are higher than the performing or interpretive arts, because you are basically starting with less to work with. In short, an actor interpreting Shakespeare or O’Neill has it much easier than the two playwrights did in conjuring the drama. Similarly, I posit that writing and poetry are the two highest general and specific art forms, for writing is wholly abstract- black squiggles on white that merely represent and must be decoded, whereas the visual arts are inbred, and one can instantly be moved by a great photo or painting, while even the greatest haiku will take five or ten seconds to read and digest. Poetry is the highest form of writing because, unlike fiction, it needs no narrative spine to drape its art over- it can be a moment captured, and wholly abstractly, unlike a photo. Do you agree with these views? If so, why do you think this is so? I would bet that since language (at least written) is only a six or so thousand year old phenomenon, while sight has been around for 600 million years or more, that’s a hell of a head start the visual arts have over writing.

 

LS: As I understand it, you’re ranking the different arts based on how much structure they provide; the less structure there is, the higher the art form.  The assumption seems to be that structure removes the necessity of inspiration and thought, and without structure, the inspiration or thought behind a work of art is greatest, greatness being conferred by profound inspiration and thought.  Well, that’s an interesting theory.  Not being an aesthetician, I haven’t given much thought to the ranking of the arts.  Maybe you’re onto something.  All I would observe is that there are some mighty trivial poets and some mighty profound filmmakers.  But is the average poem more profound or “higher” than the average novel or the average painting?  I don’t know; I guess I don’t even know what the proposition means in the first place.  Maybe.

 

DS: I feel ‘greatness,’ or the ability to more deeply affect the human condition, is a random thing. When people have tried to make available the sperm or eggs of Nobel Laureates or Mensans, as example, the kids turn out to be rather average. This gibes with the fact that almost all great people, such as Pablo Picasso, Isaac Newton, Einstein, and most famously- Thomas Jefferson, have never had any forebears or descendants come close to their achievements. And the few famed people who’ve had success run in their families- the Adamses, the Darwins, the Barrymores, have never really had any greats in their clans, or- as in the Darwin case, Erasmus was not in a league with his grandson Charles, a great man by any measure. I call this fact The Infinity Spike, meaning that the idea that a Master Race could be engineered- at least intellectually, is folly. Perhaps physical characteristics, but the chances of two Mensans or Nobel Laureates producing another Michelangelo or Akira Kurosawa are only negligibly greater than such a person coming from a plumber and a teacher. Perhaps a three or four out of fifty million chance versus a one and a half to two out of fifty million chance. In short, greatness spikes toward infinity out of nowhere- there is no predictable bell curve or progression toward excellence. What are your thoughts on this posit? And does this increase or decrease the desire for meaning to the individual?

 

LS: I should think that great persons are extremely rare by definition.  Therefore, it is extremely unlikely that any two of them will appear in the same family.  It seems to me that you’re simply being impressed by some rather trivial consequences of statistics.

  You don’t believe that greatness is somehow required for meaningfulness, do you?  What a sad and utterly wrongheaded view.
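
[Editorial note: Sanger’s statistical point can be made concrete with a few lines of Python. This is a minimal sketch; the rate of “greatness” and the family size below are illustrative assumptions of the editor’s, not figures from the interview:

    # If "greatness" strikes at random at a tiny rate p, the chance of two
    # greats in one family scales like p squared, i.e., far rarer still.
    p = 1e-6       # assumed rate: roughly one "great" per million people
    family = 10    # assumed: ten relatives across a few generations

    p_none = (1 - p) ** family
    p_exactly_one = family * p * (1 - p) ** (family - 1)

    print(f"{1 - p_none:.2e}")                  # at least one: ~1.0e-05
    print(f"{1 - p_none - p_exactly_one:.2e}")  # at least two: ~4.5e-11

Whatever the true rate, the second figure is dwarfed by the first, which is all the “trivial consequences of statistics” require.]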

 

DS: No, I don’t, but greatness obviously has an ameliorative effect on the chances of producing meaning, does it not? I.e.- meaning is more likely to emanate from a Darwin or Mozart than Joe the Plumber. Let me digress to one of the hallmarks of both modern Political Correctness and Postmodernism, the idea that all is subjective. I argue this is manifest folly, and that anyone even arguing such a point cannot believe it, for if they truly did, there would be no rationale to argue the point. Agree or not?

 

LS: Yes, I agree, largely for the reason you give.  I, like many philosophy instructors, often had to disabuse my undergraduates of such simplistic thinking.  And don’t get me wrong—this isn’t simply dogmatism on my part.  The point is that the thinking behind “everything is relative” is simply sloppy.  This is something easily admitted even by those philosophers who want to get their students to believe in some more sophisticated form of relativism.

 

DS: Oftentimes I have argued with other artists who use the ‘art is truth’ canard, or the ‘all art is subjective’ nonsense, that, ‘Only bad artists claim all art is subjective.’ Logically, if all is subjective, then there’s no reason to do a damned thing in this life. Yet, just as a single drop of blood would de-purify, say, the Pacific Ocean- were it wholly pure water, so does one objective fact objectify a subjective universe, for anything then can be related or parallaxed to or against it. In writing, as example, clichés are greatly numerically repeated images or groups of words that are placed together in greatly numerically repeated situations. Thus, there is nothing subjective about a manifest cliché like ‘bleeding heart.’ Only if a writer somehow subverts that, out of the context of emotional sorrow, and perhaps uses that phrase in a poem or story about someone literally stabbed or shot in the heart, might that term be annealed or wholly subverted. Do you agree or not?

 

LS: First, I’d say that art itself is neither objective nor subjective; claims made about art can be objectively or subjectively true or false.  Of course it is possible to give sensible operational definitions of terms like “cliché,” and proceed to identify the clichés objectively.  In writing, you might be able to program software to identify clichés.  In this connection a classic essay that even I can remember (it’s been a long time) is Kendall Walton’s “Categories of Art.”  He carefully describes just how different properties, or descriptions (like “clichéd”), can be said to be “true of” artworks.
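
[Editorial note: the software Sanger gestures at can be sketched in a few lines of Python. The phrase list is a hypothetical stand-in; a real tool would presumably derive its list from corpus frequency counts rather than by hand:

    import re

    # Assumed hand-made list; "cliche" here operationally means "on the list."
    CLICHES = ["bleeding heart", "tip of the iceberg", "at the end of the day"]

    def find_cliches(text):
        """Return each listed cliche that appears in the text, ignoring case."""
        lowered = text.lower()
        return [c for c in CLICHES
                if re.search(r"\b" + re.escape(c) + r"\b", lowered)]

    print(find_cliches("At the end of the day, he was just a bleeding heart."))
    # -> ['bleeding heart', 'at the end of the day']

A fixed list is, of course, the crudest operationalization of “cliché,” and it cannot detect the subverted, recontextualized usage Schneider describes above.]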

 

DS: If we realize that objectivity has limits- real, material, or philosophic, is not that as good as no limits because we’ve ‘accepted’ the field of play, so to speak? It’s just that the field has shrunk from infinite to not quite infinite.

 

LS: I’m sorry, but I just can’t parse this question.

 

DS: Is a thing real only if it is material? Are not desiderata and emotions ‘real’ then?

 

LS: It sounds like you’re asking what is in my ontology, i.e., whether I extend it beyond material objects, to include “desiderata” and “emotions.”  This is a huge topic and quickly becomes technical.  I don’t think you’d find my answers to be very interesting, and it would take too long to make them clear.

 

DS: To what do you attribute the lack of introspection in modern society? Is American or Western culture simply as shallow as many of its detractors claim? In the arts, PC and Postmodernism have certainly aided in the ‘dumbing down’ of culture.

 

LS: Well, do you think people were more introspective in days gone by?  I doubt it.  Introspectiveness has been rare in all societies and in all ages.

  I think television numbs the brain, and the tendency of people to spend their reading time online, separated into silos discussing with like-minded people, makes it hard to become more reflective.

  It’s true that there is a kind of practical-mindedness and pragmatism about Americans, especially about midwesterners, that makes them very uncomfortable with much theoretical reflection of any sort, and this includes navel-gazing.  There are probably historical reasons for this—we are still seeing the cultural remnants of an unsophisticated frontier society.

  I would also chalk up unreflectiveness to the cult of “cool,” something I will eventually have to write an essay about.  One can imagine how the classic “cool” character, like James Dean, might have been philosophical.  “Cool” in this sense means someone who is not bound by societal conventions and calmly, even courageously chooses his own path.  Now it means something almost opposite to that: a person is cool who is the best exemplar of the most popular trends.  The “cool” has grown into a veritable movement, which seems stronger than ever; my impression is that young people find it harder than ever to express views contrary to whatever the mainstream is.  This strongly militates against self-reflection and introspection.  A person who is really serious about understanding himself, society, and the universe in general cannot be cool in the normal sense.  The sad thing is that as mass media has come to dominate culture, and as the cult of cool has come to dominate mass media, reflectiveness has become rare and, blogs and tweets notwithstanding, more often hidden than shared.

  To be sure, forthright, earnest reflectiveness might push too easily against topics that are verboten to the PC mindset, and for academics and artists, this too can make reflection (ironically) more difficult as well.

  All that said, in some ways, we are more reflective as a society than we have ever been.  The intolerance of the mavens of PC notwithstanding, there are more topics open to discussion than ever before, and the rise of the Internet means that there is certainly no shortage of talk, period.  It’s just that this is rarely very profound, intellectually honest talk.  Rather, I see a lot of posturing—posturing is one of our leading intellectual vices, these days.

                       

DS: If illusion has all the hallmarks of reality, is it not reality? Is the mind the only ‘real’ thing in the cosmos? Is there such a thing as the cosmos beyond the mind?

 

LS: Illusion doesn’t have all the hallmarks of reality, or else we would not be able to distinguish it from reality, which we can.  Generally, real things exhibit a kind of perceptual permanence, while illusions do not.  If you change your standpoint, the lighting, or other perceptual constraints, an illusion disappears, while reality remains.

  The terms “illusion” and “reality,” as I might teach them to my four-year-old boy, can be operationally defined fairly easily.  Recently, my little boy surprised me by observing that a car we passed, which seemed to be going backwards, was not actually going backwards, and he then said, “That’s an illusion.”  Philosophers of a certain idealistic stripe (such as F. H. Bradley or Leibniz) pressed such words as “illusion” and “reality” into service to mean whatever was really permanent, whatever could never be changed.  Hence they would say things such as that the universe as a whole is the only thing that is real, that “reality is one” as Bradley had it, and its various changeable parts enjoy no objective, mind-independent existence.  Instead, we merely have different subjective experiences of an “absolute” reality, which itself can be regarded as the sum total of possible experiences.

  This way of thinking has always struck me as being pretty obviously wrong, and despite having studied a number of idealist philosophers, I could never understand the appeal.  The best explanation of the semi-permanence of objects in our experience is a mind-independent reality; and our best theories of physical objects do not include their being constructed, somehow, by minds.

 

DS: I earlier mentioned philosopher Daniel Dennett, so let me digress to his ideas on the function of the brain and mind. Could consciousness be an epiphenomenal synergy brought on by rote processes in the mind- a de facto accident, as Dennett’s Multiple Drafts Model suggests? Also, re: consciousness, how does its provenance affect, or not, the way we approach knowledge seeking?

 

LS: Epiphenomenalism holds little attraction for me.  It does not really address the hard problem about consciousness and the mind.  If we say that the mind is an epiphenomenon, we have simply specified some properties of the mind (we say it is a result of the body, without effect itself); but there is still then the mystery of what the mind itself is.  My view on this can be stated briefly but, I fear, cryptically.  I think “the mental” (mental properties, mental events) seems like a different “sort” of thing because it is a property of the brain.  This means that, at bottom, the mind-body problem is a specialized application of the problem of universals.  The mind is mysterious in about the same way, and for the same reasons, that universals are mysterious.  My view is that properties, and mental stuff, certainly do exist; it’s simply that they are basic ontological categories, not further to be explained.  The furniture of the world has to be analyzed semantically into some basic elements, and two elements—things and their properties—are not apt to be further analyzable.  Our minds are properties of our bodies, and so when we attempt to explain what sort of “thing” our minds (or consciousness, or qualia, or whatever) are supposed to be, we are committing what philosophers call a “category mistake.”  Minds (consciousness, etc.), being properties of the physical, have all the mysteriousness that inheres in properties.

  Offhand, I’m not sure how the solution to the problem of consciousness affects epistemological questions.  Little of epistemological importance seems to rest on how one solves the problem.  But, well, maybe I’m wrong…

 

DS: Are memories, essentially, the sum of a life? If so, if one deliberately or, by dint of time’s passage, has memories that change, does that change the life? Or, would that be dependent upon whether or not there is free will? If there is free will, then there is no predetermination, and the past is as open to interpretation as the future. If there is predetermination, then all is fixed, and even if someone loses their memories to Alzheimer’s Disease, it does not essentially change their life or person, correct?

 

LS: Only a radical subjectivist would say that a life is no more than the memories, as if the actual actions and events that shaped a life, in the past, were not part of the life.  A life is something that enjoys existence over time.  So, of course, if we misremember things later, that doesn’t change what happened to us.  I’m not sure what this is supposed to have to do with free will.  Your claim, “If there is free will, then there is no predetermination, and the past is as open to interpretation as the future,” is simply balderdash; the last part doesn’t follow in the slightest.  The past might be “open to interpretation” regardless of the existence of free will, but surely the past cannot be changed by whim; or is that really what you’re saying?  If that’s what you’re saying, I don’t see why you don’t take that as a reductio of your puzzling claim that memories are “the sum of a life.”  And you also need to distinguish between determinism and predetermination (or predestination).

 

DS: If you are familiar with UFO lore, you know that many people who claim to be abductees of extraterrestrial sexual experimenters only recall their traumas long after the fact. This is akin to the now verified False Memory Syndrome that has exculpated those falsely accused of sexual abuse rings, Satanic torture, and a myriad of other bizarre crimes. You must know of the work of the late psychiatrist John Mack, and his work with claimed alien abductees. He grew to believe in the mythos. So, if memories can change, is the past in any way mutable? And, is the past safer than the present or the future because we know how it turned out?

 

LS: I am not familiar with UFO lore, and I really don’t have much of a desire to get acquainted with it, because it all seems pretty nutty to me.  “If memories can change, is the past in any way mutable?”  No.  Why would you think it is?  You who just got done inveighing against subjectivism in art, the one place where subjectivism makes a modicum of sense, suddenly find subjectivism plausible when it comes to the past?  Really?

 

DS: You must know of the Libet and Kornhuber experiments. What exactly were they, and how do they relate to the idea of free will, consciousness, and the way knowledge is acquired?

 

LS: Must I?  Sorry, I’m not familiar with them.  I see from a quick web search that they had something to do with cognitive psychology—well, I haven’t studied the field (except a few books, maybe) since 1989 or so…

  You know, far be it from me to tell you how to conduct an interview, but generally in interviews one picks questions that the interviewee actually knows something about, instead of something that one personally is interested in.

 

DS: As I am not clairvoyant, how could I possibly know what you know, unless I assume that all that you know you have written about at length? I take a general set of questions in a knowledge field, and proceed from there. It’s up to the reader to decide, ‘Ah, it’s interesting that Sanger, of all people, might not know of, nor care about, A, B, or C.’ For me to do so would be presumptuous of both your and the reader’s states and interests. No? I think what a person does not know or does not care about can often say more than what they do. And what exactly is free will? It certainly cannot be limitless choices, but the ability to select from more than one choice. After all, I can choose to fly under my own arm power, but that does not make it so. So why do so many people grossly misunderstand the concept?

 

LS: You seem to be supposing that (1) you have stated some clear theory of free will, which I don’t agree you’ve done, and that (2) I agree with this theory, which is impossible as I don’t know what the theory is.  So, without addressing the cogitations you have surrounded the basic question with, I’ll have a go at explaining a theory of what free will is.

  I believe that when we say the will, or action, is free, in order to give good sense to the concept, we have to be distinguishing it from unfree willing or acting.  What is it about the will, then, in virtue of which we call it free?  Not simply the ability to choose from among various options, because the ability to choose can itself be constrained by insanity, inebriation, drugs, hypnotism, etc.  The court system regards as exculpatory certain situations in which a person is said not to be acting freely.  What, then, confers freedom when such exculpatory situations (e.g., insanity) are lacking?  Quite simply, the proper functioning of judgment.  As long as we are able to act as we choose, and our ability to deliberate on our choice is not impaired by insanity or other such freedom-impairing conditions, then we act freely.  Of what are we free?  Not of causation—how could we act without causation?  We are free of constraints upon our “sovereign judgment.”  You might say that this does not do it, because our judgment can be determined by prior circumstances, or whatever; but I don’t care, because what we mean when we say we act or choose freely is quite simply that we act without any impairment to our capacity for rational deliberation.  This is, on reflection, the answer we should expect, after all; it is precisely our capacity for rational deliberation which is said to make us human, to give us dignity, to make us competent adults, and even to confer rights.  That freedom in this sense is compatible with determinism does not bother me in the least.

 

DS: On a tangent, why can’t folk think for themselves? Why do they buy into religions and philosophies and –isms rather than thinking out their own takes on life?

 

LS: Thinking is very hard, and there are many influences that pressure us toward conformity.

 

DS: Does not the idea of an all-knowing, all-powerful God mandate predestination and the illusion of free will? After all, that God, if all knowing, would know all things at all times forever. So, free will obviates the Christian concept of God, right?

 

LS: Not at all.  There is a lot to say about this old problem, but I’ll just say this.  Suppose my theory of free will is correct.  Suppose that a God knows everything that I will do, and might have set all events in motion that, more or less, deterministically (pace concerns about chaos and subatomic indeterminism) cause me to decide a certain way.  Nevertheless, I am at the very least the sum of my free decisions, i.e., the decisions which I can and sometimes do deliberate upon without the influence of things like drugs or insanity.  I can therefore be praised or blamed for what I do, because through deliberation I am free to change myself; the fact that God knows my decisions in advance, and even set events in motion that ultimately explain my decisions, does not matter.  My deliberations are part of what make me me, and nothing more is needed to give me freedom.

  By the way, far from obviating the Christian concept of God, it is Christianity above all other religions that, quite famously, celebrates the freedom of human beings, indeed making the freedom to choose to follow Jesus the fundamental saving feature of human nature.  I say this as an agnostic who has taught philosophy of religion and was raised Lutheran.  It’s all about freedom.  The devotion to the freedom of individual conscience is why Western civilization has been the freest in world history, and why a staunchly Protestant country, the United States, was founded on the idea of (what philosophers call negative) freedom.

 

DS: Are determinism and predestination different? Determinism is a fixed result based on the past, and predestination a fixed result in the future, with no bearing on what you do now or did in the past. But, is not that just a minor cavil? Are not both destroyed by chaos theory? If a random pop of an elemental particle can make me go left or right, are we confusing material cause with immaterial influence? And, does determinism ascribe meaning to actions that lack matter and meaning?

 

LS: Determinism says that prior events deterministically cause everything that happens in the world.  Predestination says that God caused every detail of what happens in the world.  So, yes, they are different.  It’s possible to argue that predestination is one kind of deterministic causation.

  I don’t know much about chaos theory, but it isn’t my understanding that chaos theory is actually contrary to determinism or predestination per se.  On at least one interpretation, anyway, the fact that systems behave chaotically does not mean that a God would not be able to do all the calculations needed to predict everything perfectly.
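
[Editorial note: Sanger’s reading is the standard one; classical chaotic systems are fully deterministic, merely sensitive to initial conditions. A minimal sketch in Python, using the logistic map (the editor’s choice of illustration, not Sanger’s):

    def trajectory(x0, steps, r=4.0):
        """Iterate the logistic map x -> r*x*(1-x), chaotic at r = 4."""
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    a = trajectory(0.2, 50)
    b = trajectory(0.2, 50)          # identical start: identical trajectory
    c = trajectory(0.2 + 1e-10, 50)  # perturbed start: the tail diverges

    print(a == b)                    # True: the dynamics are deterministic
    print(abs(a[-1] - c[-1]))        # a sizable gap, grown from a 1e-10 nudge

A calculator with perfectly exact initial data (Sanger’s hypothetical God) would face no such divergence, since the error that chaos amplifies is never introduced.]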

 

DS: Are we our own ultimate influence? That is to say, is not our free will the ultimate- if not artificer, of our lives, then the ultimate reactor? Is influence a subtler agent on decision-making than prior shared commonalities?

 

LS: I’m sorry, I just can’t grok this question.  You use a lot of jargon (“ultimate influence,” “ultimate artificer,” “ultimate reactor,” “prior shared commonalities”) that must mean something to you, but it doesn’t mean anything very precise to me.

 

DS: One of the stereotypes of philosophers is that they tend to see things very black and white, while another is the exact opposite; that they get bogged down in filigrees of minutiae with no relevance to the real world. Are either correct, both, or neither?

 

LS: Do philosophers have a stereotype of seeing things in black and white?  That’s a new one to me.  Now, it is true that philosophers occasionally endorse radical theories that would seem very “black and white” to most people.  But as a class, I find philosophers to be some of the most open-minded and non-committal sorts of people there are.  They do certainly get bogged down in minutiae, but to be fair, one must add that, often, these minutiae have deep consequences for questions that actually matter.  The most important arguments of every philosopher of consequence will appear to be hair-splitting to the uninitiated.  I do have a related complaint, however, about contemporary philosophers; they too infrequently go beyond the minutiae to talk about the things that really matter.  There are few philosophers who are inclined to old-fashioned system-building, because doing so would have them make too many claims that they feel they cannot adequately defend (or rather, which they have not “earned the right” to make).

 

[Editorial note: this interview was conducted in five parts, chronologically, as it is written, over a 2½ year span. It’s interesting to note that I asked this exact same query in an earlier section, above, and accidentally repeated it, and Sanger’s reply was different. Were it essentially the same reply I’d have edited out one of the q&a’s, but its difference may be of some value, so I let them both stand, reflecting Sanger’s moods and opinions at each interval in time.]

 

DS: Is ‘why?’ the ultimate query? If so, what is the ultimate answer? Is it ‘why not?’ Is it ‘because!’? Or is that just super-simplistic philosophic bullshit that someone is better off simply saying ‘so?’ to? And is ‘so?’ the best and/or safest reply to any philosophic query?

 

LS: Pseudo-philosophical bullshit seems like the best description of all of this.

 

DS: Do you view religious morality (that imposed from without) as different from secular ethics (that immanent)? After all, some moralities justify the killing of infidels, but no ethics do. What are your thoughts on the difference? Is there a deeper human set of values that all share? Also, do humans need to be tricked into acting altruistically?

 

LS: As usual, you make some assumptions that I don’t.  I would not distinguish religious from secular ethics based on whether or not it is “imposed from without,” or independent of human choice.  It is possible to specify a secular ethics that is objective, and this ultimately is the task of ethicists when they weigh the merits of utilitarianism and deontological ethics.  Subjective judgment about moral principles forms no part of the common arguments for these views, I think.

  One thing rarely noticed by critics of religion is that secular morality, too, can be used to justify all sorts of immoral action.  This is especially and famously true of utilitarianism.  No philosophically serious critique of religious ethics per se should be based on how religious principles have been historically applied or misapplied.  Besides, there are stronger objections to religious ethics than that.

  Is there a deep set of values that all share?  Perhaps, perhaps not.  I’m not really sure that the outcome of anthropological study of morality ultimately matters very much.  After all, societies can be morally flawed just as people are, and the fact that they endorse some morally horrific principles does not mean that a more correct view of the question is, for that reason, less plausible or less objectively correct.

  I think some people do seem to need, intellectually or even emotionally, the grounding of religion in order to accept moral principles.  They really do take seriously Dostoevsky’s deeply flawed dictum that “if God is dead, everything is permitted.”

 

DS: Are not all rights manmade fictions? Useful fictions and beneficial ones, but nonetheless fictions. After all, if they were not fictions, a real alien species would be bound to respect our ‘rights.’ Unlike the Aliens that use humans as gestation pods, or Star Trek’s Borg, which see all other beings as fodder for their culture, it’s simply silly not to recognize rights as fictions. I recall the old Twilight Zone episode, To Serve Man, in which aliens ended war and poverty and hunger, only so we could become entrées for their meals.

 

LS: Rights are no more “manmade fictions” than any number of other abstract concepts.  On my view, there are certain objective obligations we owe one another, such as the obligation not to take one another’s life (under virtually all conditions); and this obligation is the semantic basis (i.e., it’s what we mean) for our talk of rights.

  The fact that a person, or an alien, does not recognize our rights does not mean that we don’t have them.  I’m surprised you don’t see that immediately.  Since when did the existence of rights depend on their being universally acknowledged?  The very notion of rights was formulated precisely because the rights of human beings were so often not acknowledged.

 

DS: You know, we’ve spoken of your career, but, what exactly has been your claim to immortality? If no Plato’s Cave, is there a Sanger’s Theorem, or the like?

 

LS: I have no such claim to immortality, and even if I did, it would be the height of immodesty for me to pretend that anything I have ever said would make me “immortal.”  One ought to leave such determinations to others, of course.

 

DS: Now, on to some questions I ask almost all my interview subjects. I started these interviews because so many interviews, online and in print, are atrocious. They are merely vehicles designed to pimp a book or other product- film, CD, etc. One of the things I’ve tried to do with these interviews is avoid the canned sort of responses that most interviews- print or videotaped, indulge in, yet most people find comfort in hearing the expected. Why are the readers and the interviews so banal? Where have all the great interviewers like a Phil Donahue, Dick Cavett, David Susskind, or Bill Buckley gone? Only Charlie Rose, on PBS, is left. Is conversation, which an interview is merely a rigorous form of, dying?

 

LS: I guess my explanation would be economic: there is less of a supply of in-depth interviews because there is less demand.  So why is there less demand?  I’m sure I don’t know, but if I had to take a guess, it would be a very common complaint, that the pace of life has increased so much.  Many of us, sadly, do not seem to have much time for conversation anymore.  So much that is new is happening, and it is now all so easily accessible, that we are distracted as a society by the enormous amount of creativity flowing in nearly every sector of life.

  But if there are still brief interviews, why are they so banal?  I think that we, as a society, have become less reflective.  It isn’t exactly that we are becoming more stupid, I think; it is again more due to the fact that we are so driven by our careers, and filling up every other moment with media and entertainment.  Coming up with deep insights requires creativity, which requires that we slow down and allow ourselves to let our intellects flow in a little less directed fashion, making mistakes.  We have to take time to think, and when we do not, neither our questions nor our answers will be very incisive.

 

DS: I believe that artists are fundamentally different, intellectually, than non-artists, and that the truly great artists are even more greatly different from the average artists than the average artist is from the non-artist. Let me quote from an essay I did on Harold Bloom, the reactionary critic who champions the Western Canon against Multiculturalism: ‘….the human mind has 3 types of intellect. #1 is the Functionary- all of us have it- it is the basic intelligence that IQ tests purport to measure, & it operates on a fairly simple add & subtract basis. #2 is the Creationary- only about 1% of the population has it in any measurable quantity- artists, discoverers, leaders & scientists have this. It is the ability to see beyond the Functionary, & also to see more deeply- especially where pattern recognition is concerned. And also to be able to lead observers with their art. Think of it as Functionary². #3 is the Visionary- perhaps only 1% of the Creationary have this in measurable amounts- or 1 in 10,000 people. These are the GREAT artists, etc. It is the ability to see farther than the Creationary, not only see patterns but to make good predictive & productive use of them, to help with creative leaps of illogic (Keats’ Negative Capability), & also not just lead an observer, but impose will on an observer with their art. Think of it as Creationary², or Functionary³.’ In the sciences, this dynamic is applicable. When I interviewed Steven Pinker, he seemed to feel IQ was good at predicting success in life- at least socially, academically, and career-wise. But my 3 Intellects posit is something I don’t think he fully got the point of. That is that creativity is wholly ‘outside’ the axis of IQ. In other words, there could be someone with an IQ of 180, and a Functionary Mind, vs. someone with a 120 IQ, and while the 180 may be better suited for test taking, the 120 IQ, with a Creationary or Visionary Mind, will be able to understand concepts at a deeper level. IQ measures narrow problem solving, but is utterly useless in regards to creativity. To use an analogy: think of vision tests. In a real world sense, this is akin to the Functionary being able to see on the 20/20 scale, while the Creationary might be able to see on the first scale- yet only at 20/50, but also be able to see in other light- X-rays or infrared, etc. Then, with the Visionary, not only are the two sorts of sight available, but also the ability to see around corners, through steel, etc. In a scientific sense, the Functionary might be represented by your typical person working in the sciences, the Creationary by someone along the lines of a Madame Curie, or Nicolaus Copernicus, who can discover great ideas, but which are logical extensions of prior paradigms. The Visionary, however, might be able to make even greater leaps- such as Hutton reaching far beyond Bishop Ussher, Darwin’s and Wallace’s ability to transcend Lamarckism, Newton’s development of a new mathematics- calculus, etc. What are your thoughts on this? Are there current philosophers who might be considered visionaries in a hundred or more years? Who are they? Is there one discipline of philosophy that lends itself more to creative or visionary thought? And, if you are copacetic with such a system, where on the scale would you place yourself?

 

LS: You seem to associate great achievement with some unusual traits of personality or character, at least one of which you label as “creativity.”  But sometimes, I am very sure, the difference between ordinary achievement and Great achievement is plain old luck.  Isn’t it possible for a person to have a really whiz-bang idea, more or less by accident?  Now, to be sure, there have surely been true geniuses, truly great minds, people like Aristotle, Shakespeare, and Newton, whose influence was not a matter of luck but a direct result of the tremendous qualities of their minds.  But—without going on for too long—I simply wonder whether there really is anything about the minds or characters of the most famous and influential people in history, features they all had in common, that explains why they became famous and influential.  Fame is capricious and greatly subject to accidents of history and culture.

  You find it significant that Steven Pinker was not taken with your rather idiosyncratic trichotomy.  With all due respect, you’re an interesting thinker, but you’re no Steven Pinker (and neither am I).  If he did not glom onto your notion, it might simply be because he did not find it very compelling or explanatorily rich; it might not be because of some failure of insight on his part.

  In a loose way, given your descriptions, we can recognize certain people as being “functionaries,” “creationaries,” and “visionaries.”  My accountant is a functionary; successful, prominent CEOs are “creationaries,” and the “great” people of history are “visionaries.”  So far, that’s simply to press common English words into specific service.  But if you’re doing anything interesting with this trichotomy, what you’re offering up is a take on what makes creators and visionaries different, or special.  Yet all you say here, as far as I can tell, is that the creators “see beyond” and “see more deeply” and “see patterns,” and you say the visionaries see even farther, make “predictive and productive” use of patterns.  You further imply that there is some sort of exponential relationship between your categories, but it seems this only makes us stand more in awe of the Great Visionaries.  Well, if this is your theory of genius, all I can say is that there is not much to it, and it needs more careful development: leave it to an analytically trained philosopher to say that.  But what more, really, can I say?

  So let’s put your theory of genius aside, and talk about the question whether IQ matches up with being a visionary.  Surely not.  I wouldn’t be surprised if Pinker was correct that IQ is an excellent predictor of income and social standing (though we can all think of counterexamples, of course).  But when I think of some great philosophers and artists, for example, I am very sure indeed that many of them did not have “genius-level IQs.”  One can become an influential, even a historically prominent thinker and artist simply by doing work that is clear, precise, an excellent example of a type; sometimes “frighteningly smart” people are really terrible at stating their thoughts clearly, and as a result they don’t have much influence.  Most frighteningly smart people, of course, die in relative obscurity, like most of the rest of us do.

  If you were to give some non-trivial definitions of the three terms, I would go farther and guess that most “visionaries” die in relative obscurity as well, and that a great many of the most famous people in history were not visionaries at all but either functionaries or creationaries.

  As to your last couple questions here: “Is there one discipline of philosophy that lends itself more to creative or visionary thought?”  I doubt it.  I don’t know what you understand by “discipline of philosophy” (do you mean subjects such as ethics, metaphysics, and aesthetics?), but in any event, there is no branch of philosophy, or any philosophical movement, that strikes me as being more visionary than another.  You might think that analytic philosophy is “functionary” but there have been some really profound analytic philosophers (they simply aren’t appreciated by people who don’t understand analytic philosophy).  And then there are some very “high-flying” sorts of philosophers, idealists and cryptic essay-writers, who strike me as being blowhards with illusions of profundity.  “And, if you are copacetic with such a system, where on the scale would you place yourself?”  Are you really inviting me to declare whether I am “functionary,” “creationary,” or “visionary”?  Anything I could say would sound either vain or falsely modest, of course.

 

DS: Many artists seem to deny their own creativity, pawning it off on God, or some other force or demiurge. I call this the Divine Inspiration Fallacy. There is no Muse. For better or worse, it’s all me, or you, or any artist. Do philosophers suffer from this, as well? Comments on its existence, origins, verity?

 

LS: There must be exceptions, but I can’t recall coming across a philosopher of repute who claimed to be directly inspired by the divine.  That’s only for artists and prophets, it seems.  Even Socrates, who claimed to have a “daemon” who advised him, said that it had only a negative role, restraining him from foolish action and speech; it did not give him deep thoughts.  Surely there were religious philosophers who felt guided by the Holy Spirit, but I guess I never studied enough medieval philosophy to be able to think of an example.  (Now I’m just revealing my ignorance of religious philosophy, I’m afraid.)

  It occurs to me that Plato and Christian philosophers found the truth to be something immanent in God (as when John says “the word was with God, and the word was God”), and so insofar as they articulate the truth, they are in touch with the divine.  But to say so is not to say that whatever they happened to write was divinely inspired.

  Anyway, there is one main reason that philosophers generally do not claim divine inspiration: if a philosopher does so, then his thoughts are not just incontrovertibly true but blessed by God; few philosophers are so immodest, and indeed they would often have been received as heretical if they made that claim.  With artists, it is different: artists who say their source of creativity is divine are not saying that their art is divine but that all glory properly goes to God, not to themselves.  So what would be deeply vain for philosophers is actually modest for artists.

  Since I don’t believe that there is any God, I don’t think there is any divine source of inspiration.  The connections within the human brain are unfathomably complex; what goes on in artistic creation is far more complex than is revealed by any surface description.

 

DS: On a philosophic level, do you see any criteria as wholly objective? Or, is it all a philosophic exercise- i.e.- a single drop of objectivity objectifies a whole ocean’s worth of subjectivity, the way a single drop of blood would literally make an ocean of pure water impure?

 

LS: Criteria of what?  I wish I could understand what you’re going on about here, but I can’t.  Suppose you mean criteria of truth.  Well then, sure; there are criteria of truth that are wholly objective, in the sense that those criteria hold good independent of anybody’s mind.  Various laws of logic would count as such objectively true criteria of truth.  But as to this single drop of objectivity in an ocean of subjectivity stuff—sorry, I don’t get it.

 

DS: Agree or not on this bit of PC: if everyone is special, that means no one is.

 

LS: Well, emotionally, I disagree: it’s important that we understand that some people really are more special than others, in certain ways.  Mr. Tommy Peoples is a very special fiddle player, for example, a hell of a lot more so than me, and such distinctions matter in a practical way; they explain why Tommy can get crowds of people to concerts, and I rarely even go to sessions to play, let alone give concerts.  But intellectually, I draw a distinction: the claim is ambiguous because “special” can mean either “imbued with distinct dignity” or “unusual or remarkable.”  I do believe that we all have the dignity and worth that ought to be accorded to all human beings equally; but I don’t believe that all human beings are equally remarkable in all respects.

  This is just a matter of common sense.  Why do we even need to talk about this stuff, except to unconfuse high school students and undergraduates who haven’t figured it out?

 

DS: Any broader ideas on Political Correctness and Multiculturalism, especially any effect they have had on philosophy?

 

LS: I think P.C. and Multiculturalism have partly been—or at least, have reflected—forces for good.  I mean, clearly, it is a good thing that racism is now widely regarded as shameful, for example.  P.C. and Multiculturalism have it right on a wide variety of issues that cretins throughout history have gotten wrong.  Now, I don’t think the credit for our relative enlightenment on such issues actually belongs with the self-righteous, heavy-handed “mavens of P.C.”; I think most educated people recognize and pass on these sentiments now, and long may this sort of ethnic open-mindedness continue.  After all, racial, national, and ethnic tolerance are historically unusual.  If you read a little history you’ll see just how extraordinarily we in democratic societies have changed in our views of our fellow human beings.

  But the way that these ideas have been more recently propagated, by some, has effectively silenced dissent on a much broader range of less-obvious questions.  This has led to self-censorship and creepy groupthink, particularly in the humanities and social sciences, as well as the arts.  People committed to ethnic tolerance can be pretty intellectually intolerant, it turns out, which makes one despair that humanity will ever get it right when it comes to tolerance, generally speaking.

  The impact of this situation on analytic philosophy has been, mercifully, slighter than in other areas of the humanities, probably because analytic philosophy is not so often driven by a political agenda.  It’s not deeply politically relevant how one comes down on questions of universals or even moral theory.  Even in political theory, P.C. and Multiculturalism haven’t made such profound inroads because, well, the issues are often completely orthogonal to issues of P.C. and Multiculturalism.  Of course, the really committed find ways to make even the most tenuously related issues political, but they can take this only so far among analytic philosophers before eyes start rolling.

  In Continental philosophical traditions, things are a lot more political, so my impression is that there is a lot more P.C. nonsense.

 

DS: Ok, this next question is one that Steven Pinker dodged:

  Then there is the old example of, ‘What if a building was burning, and you could only save a person or the last extant manuscript of the works of William Shakespeare (or The Mona Lisa, or some other great work of art). Which would you save?’ Most people say, the person, and likely mean it. Yet, to me, I would have to weigh the person and the works. Even a good person is likely to not have a fraction of the cultural impact of a great work of art, especially over the centuries. Yes, saving Darwin or Galileo or Picasso or Rembrandt, over their works, is easy, for they can recapitulate most of that stuff. But saving Larry MacDougall, of MacDougall’s Plumbing? I’m not gonna lie, Larry would probably die, because nothing he could ever do would likely be as valuable to human culture as that great work of art. And it’s not because I devalue a human life, as much as I truly value human creations over human non-creators. Does that belief make one a cold, calculating proto-Fascist, a Stalinist wannabe, an über-sensitive lover of all things, or simply a mature, rational adult?

Do you agree or not? And, is not this a version of the lame argument that anti-abortionists use- that you could be flushing away the person who cures cancer, unaware that no single person will ever do such a thing, for scientific discovery always has its Marconis and Edisons and Teslas waiting to step in if one of them fails? Thus, is not a decision to save the more valuable item, regardless of pro human bias, the truly enlightened view?

 

LS: Your opinion is not just wrong, it is chilling.  Along with some of the other things you’ve said, this indicates to me that you should indeed probably check with your philosophical therapist on your Fascist tendencies.  I mean, sheesh.

  So why do I have such a definite and adamant opinion in this case?  For one thing, a human life is more valuable than any object, except perhaps objects that are specifically necessary to save other human lives, which would not include the Mona Lisa.  Arguments over abortion are irrelevant here: we are talking about the value of a competent adult.

  I hope you won’t conclude that I undervalue art.  One of art’s most important functions is to get us to internalize values that are deeply important to our happiness—great art has a capacity to change both how we think and feel.  Without it, our lives would be far poorer and we would be less capable of meeting the challenges of life with courage.  In that way, one might argue that art does save lives; it certainly makes our lives richer.  But simply speaking about your specific case, if it comes down to the Mona Lisa versus a plumber, old Mona goes.  She’s been photographed and analyzed to death, I assume.  The value of having the original is minuscule compared to the value of a single human life.

  At bottom all the difficulty comes down to this tiresome old question: can a price be put on a human life?  You apparently think so, because on your view a life is worth less than many great works of art, works that would themselves fetch less than $200 million on the open market.

 

DS: It’s not Fascistic but realistic, and tossing about the F-term is an easy dodge to avoid serious ethical questions. Would you say that the collection of atoms that was Mozart, over the decades of his life, was more important than the end result of that life- his music? If you are, then YOU are valuing an individual’s benefit over that of all the rest of us, and that strikes me as petty, shortsighted, and detached from historical reality. And, going one step further, there is the old canard about what would happen if everyone in China dropped dead, or just disappeared off the face of the earth. Yes, I’d be sad, but, realistically, my life would go on fairly the same as it did, if a bit warier that such a thing could recur elsewhere in the world. I think people who ply the devastation reaction are simply liars trying to seem PC. After all, one need only look at popular tv talk and court shows to see how utterly indifferent most folk truly are to the feelings of even the people they know. Agree or not?

 

LS: I really wonder what impelled you to ask this question.  You ask if I agree; and I guess this is what I am to evaluate: “I’d be sad, but, realistically, my life would go on fairly the same as it did, if a bit warier that such a thing could recur elsewhere in the world.”  Are you asking me to predict what my feelings would be in such a case?  Why should it matter what my feelings would be in such a case?  Would that prove something noble, or shocking, about me?  The issue you raise seems to be whether one should, somehow, feel devastated about such things, and whether such feelings are actually sincere.

  To this I have three reactions.  First, of course someone could be sincerely devastated about the destruction of a country.  To suggest that this would be merely “PC” implies that you do not believe someone could be sincere in their deep sadness over the deaths of strangers.  If you don’t believe that, well, all I can say is that “There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy.”  When I was in college, I observed how the world itself seemed to change as I myself changed (in mood or perspective); and therefore, how extremely different it must be to be a different person, even someone who is in largely the same circumstances as I am.  Then imagine someone from a foreign country, with a different religion, a completely different set of family and societal practices, and so forth.  In a way, there are billions of universes, even if there are similarities and even universal conditions.  So, of course, with such variety in humanity, many people could be sincere in a reaction of devastation to the news of the deaths of many strangers.

  Second, suggesting that feeling devastated would be merely “PC,” and setting yourself up in a defensive stance regarding your own predicted reaction (you could, of course, be quite wrong about what your reaction would be), both suggest that such feelings are, somehow, voluntary.  Well, in general, I don’t think emotional reactions to events are voluntary.  You cannot be faulted for feeling nothing, if that is your feeling.  Of course, there are truly monstrous feelings, but only because they reveal a propensity to truly vicious behavior.  Failing to feel devastated about a disaster reveals little about you, in my opinion.

  It sounds like you had a disagreement with someone over this question.  So, third, in your defense, I would say that even if one says that our emotional reactions say something about the kind of people we are, the failure to feel devastated would not necessarily mean that we are callous or uncaring.  It is a common (and I think rather contemptible) mistake to confuse mere empathy with actual care.  To empathize is to react to a person’s feeling by feeling the same way.  As far as I am concerned, such empathy, like most feelings, is morally neutral.  What is praiseworthy is not empathy but the action of helping, regardless of empathy.  Who is more praiseworthy: a person who feels deeply wounded at a famine in Africa, while doing nothing, or a person who sends money that prevents 100 people from dying, but otherwise feels nothing?  Of course, if you are meeting and interacting with a person, then empathy is more of a moral requisite, but that is due to the personal interaction.

 

DS: Larry, I’ve disagreed with most people about at least a few of the queries asked herein. Why would anyone want to read an interview wherein the interviewer just asks creampuff queries to which he already knows the answers the interviewee will give? Would you want to read that, even if it was this interview, and you didn’t know your own answers? I’d hope not. A few years back I co-hosted an Internet radio show called Omniversica. On one show we spoke with a poet named Fred Glaysher, who- in arguing with my co-host Art Durkee, claimed that, in art, change does not come until some giant- or great artist, comes along, and buries the rest of the wannabes. It’s akin to Thomas Kuhn’s The Structure Of Scientific Revolutions. Is the same true in philosophic precedent?

 

LS: Boy, you artiste types really care about being Great, don’t you?  How sad.  I mean, the chances of your being Great on any measure are vanishingly small—I’m just talking statistics.  You sure are setting yourselves up for disappointment.  Maybe you should study the wonderful old folk musicians and other folk artists, who pursued their art not in order to create Great Art for the Ages, but to entertain themselves, to express themselves, to get joy from crafting something as nice as they could, to enjoy the process of personal improvement.

  Besides, I dare say that Greatness is not all that it’s cracked up to be.  I am not speaking from personal experience or anything—I certainly don’t claim to be Great in any sense at all, and I doubt I ever will, at anything—but if you read their biographies, it seems a lot of Great people were very unhappy and unpleasant sorts of people.  Being Great is no guarantee of happiness, or even that you will have a clearly positive effect on the world.  Indeed, some of the most influential people in terms of the amount of positive effect they have on the world are hardly known at all.  If it came down to it, I am not so star-struck or ambitious regarding Greatness that I would like to be either unhappy, or vicious, in my pursuit of it.  I’d much prefer to have a happy family, a stable income, and to know that I am improving the world as much as I can reasonably be expected to do.

  Anyway, that aside: no, my understanding of philosophical movements is that it’s not usually just one person who works a major change.  It wasn’t, for example, just Descartes who ushered in the Enlightenment; it took Hobbes and Bacon to get it started (among the philosophers), and people like Locke and Leibniz to carry the more free-thinking ways forward in other countries.  Similarly, analytic philosophy is not the child of one Great Man, Russell; his colleagues Moore, Whitehead, and Wittgenstein were also very influential in their own right, and Russell himself was building on work by Frege and other Germans.

 

DS: In human endeavor, what else really matters but excellence? Intent is meaningless, so, of course, be it in my writing or my day job, I strive to be excellent. Don’t you? Have you ever watched Michael Apted’s The Up Series documentaries? What are your thoughts on the series as a longitudinal study of human development? How about sociologically? Do you agree with its epigraph, the Jesuit proverb, ‘Give me a child until he is seven and I will give you the man’?

 

LS: I haven’t seen the documentaries.  Sounds very interesting indeed.  I am not an acute enough observer of the human condition to have an interesting opinion about the Jesuit proverb.

 

DS: At this point in your life, have you accomplished the things you wanted to do? If not, what failures gnaw at you the most? Which of those failures do you think you can accomplish yet?

 

LS: I guess you mean to ask whether I have accomplished the things I have set out to do so far.  Well, of course not; I have set out to do many things that have not come to pass, and I think that’s true of most people.  But I have achieved a fair bit that I’m happy with; I earned a Ph.D. in Philosophy, I got married and have two nice little boys, and I have got some interesting Internet projects started.  That’s all worthwhile—time well spent.

  As you might expect, I am disappointed that Wikipedia did not turn out better, and I am also disappointed that the Citizendium has not grown faster—but it’s still chugging away, so we’ll see!

  Eventually, I would like to write a book of moral advice for children and a systematic exploration of philosophical issues.  I also want to develop the Textop idea (http://www.textop.org), which is perhaps the Internet idea that has had me most excited.  It would just be a huge project.  I suspect that when my sons are old enough to help me with it, I’ll be able to work on it.

 

DS: What is in store, in the next year or two, in terms of your work?

 

LS: This year, the plan is to release a new free reading tutorial program, tentatively called WatchKnow Reader.  We were going to do it in Flash, but we decided instead to code it in HTML5.  It will introduce words systematically in phonetic groupings, with precisely chosen pictures to illustrate both the words and sentences.  It will also highlight the individual words, in fact individual phonemes, as they are being read.  While I have stopped managing WatchKnow.org, I am continuing to advise the people who are; we’ll be trying to improve that site and get the word out about it.
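
[Editorial note: purely as an illustration of the mechanism Sanger describes, and not drawn from any actual WatchKnow Reader code, here is a minimal TypeScript/HTML5 sketch of phoneme-by-phoneme highlighting synced to narration; every name, type, and timing value below is hypothetical.]

    // Sketch of phoneme highlighting synced to audio narration (all names hypothetical).
    interface PhonemeCue {
      text: string;   // the phoneme as displayed, e.g. "sh" in "ship"
      start: number;  // seconds into the recording when the phoneme begins
      end: number;    // seconds into the recording when it ends
    }

    // Wrap each phoneme of a word in its own <span> so it can be styled independently.
    function renderWord(container: HTMLElement, cues: PhonemeCue[]): HTMLSpanElement[] {
      return cues.map((cue) => {
        const span = document.createElement("span");
        span.textContent = cue.text;
        container.appendChild(span);
        return span;
      });
    }

    // Toggle a highlight class on whichever phoneme the narration has reached.
    // requestAnimationFrame is polled here because the audio "timeupdate" event
    // fires only a few times per second, too coarse for phoneme-level precision.
    function syncHighlight(audio: HTMLAudioElement, cues: PhonemeCue[], spans: HTMLSpanElement[]): void {
      const tick = () => {
        const t = audio.currentTime;
        cues.forEach((cue, i) =>
          spans[i].classList.toggle("highlight", t >= cue.start && t < cue.end)
        );
        if (!audio.ended) requestAnimationFrame(tick);
      };
      requestAnimationFrame(tick);
    }

    // Hypothetical usage, with made-up cues for the word "ship" ("sh" + "i" + "p"):
    //   const cues = [{ text: "sh", start: 0.0, end: 0.3 },
    //                 { text: "i",  start: 0.3, end: 0.5 },
    //                 { text: "p",  start: 0.5, end: 0.7 }];
    //   syncHighlight(audioElement, cues, renderWord(wordDiv, cues));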

  If the platform is sound (we just started coding, but so far, so good), I will use it to create a multimedia encyclopedia for preschool- and elementary-aged children.  It’s hard to explain briefly what I’m doing and why.  Suffice it to say that I see some low-hanging fruit: a free educational resource for teaching children to read and to build up both their vocabulary and their general knowledge base.  It has long struck me that the Internet should be delivering far more high-quality free educational resources than one can find.  Of course, there are huge amounts of educational resources of all sorts online; what is in relatively short supply is free content so compelling that I would want to use it with my little boys.  Of course, tastes differ, and things that my boys love might be boring to other kids.  But I suspect a lot of kids will like, and learn much from, the resource I’m working on now.

 

DS: Thanks for doing this interview, Larry Sanger, and let me allow you a closing statement, on whatever you like.

 

LS: I really don’t have anything more to say about myself—I shudder to think how many words I’ve written.  This has been the weirdest interview I’ve ever had.  It’s not really so much a systematic exploration of what I believe as a systematic exploration of my reactions to various, often idiosyncratic, puzzles that you have come up with.  Of course, it’s been pretty revelatory of me, but also of you.  To that extent, it has been like a conversation—but not really a conversation, because that requires the possibility of a back-and-forth.

  I’m very concerned that the whole thing might have been nothing more than an exercise in vanity.  And a pointless exercise, as I’m sure nobody but you or me would have read this whole thing.  Oh well.

 

[Editorial note: I think Sanger’s wrong. This is a very good interview, and with the passage of time I believe those who know of, and care about, Sanger’s work will find it a good tool with which to write their own opinions and books. May it sire many children, online and off.]

 
