Greater Than

Over the centuries, philosophers have attempted to construct irrefutable arguments that prove the existence of God. These arguments have been sorted into various categories: the cosmological argument, the teleological argument (also known as the argument from design), and so on.

My favorite of these attempted proofs is the so-called ontological argument. It essentially goes like this:

God is the greatest of all possible beings.
A being that exists is greater than a being that doesn’t exist.
Therefore, God must exist.

I’ve always loved this argument because it feels like a magic trick: It elegantly and instantly performs a transformation that feels impossible. You know that there’s something shady going on behind the scenes, but you can’t quite figure out what it is.

Well, one thing that makes the trick work is some cleverly camouflaged circular reasoning. If you think about it, the only logical way to find the greatest of all possible beings is to make an inventory of all possible beings and rank them according to greatness. This task is made easier by limiting the inventory to beings that exist, as per the argument’s second premise.

Assuming that one can come up with criteria (beyond mere existence) for evaluating greatness, we just have to look at the scores and see who comes out as #1 in the ranking. If the deity of the Bible actually exists, he or she would be a shoo-in to take the top spot. If not, the top spot would go to some other being. (Who knows? A gas cloud at the edge of the Milky Way? A tree in Pittsburgh?) In other words, the ontological argument for the existence of God only works if you first assume the existence of God.

But it’s also worth taking a look at the second premise. Is it really true that something that exists is greater than something that doesn’t? I’d maintain that there are beings — fleas, or coronaviruses, or Mitch McConnell — whose nonexistence would make the universe better off. There are certainly much worse possible things — such as a godlike, all-powerful but malevolent entity — that are the greater for not existing.

This thought comes up often when I hear my creative friends — writers and musicians and painters — talk about how, because they are artists, whatever they produce has value, and how they’re not being fairly compensated for that value. Even apart from financial considerations, they often insist that the mere act of producing something has value. (One Facebook friend recently posted, “I am claiming my space as a creator.”)

I certainly like to believe that anything I bring into existence is greater than something that doesn’t exist. My believing that, however, doesn’t necessarily make it true. I’m a lover of live music, and one of the reasons I host house concerts is to give talented musicians an opportunity to be paid for their work. But I’ve also heard music that’s so badly performed that it makes me wince, in which case I’d say that its existence has negative value. Fair compensation in that case would be for the musicians to pay me to keep listening.

If something I create has value to me, that’s great. But if I’m to call myself an artist, what I create has to have value to others, and I’m in no position to judge whether that’s the case.

When I was a freelance writer/producer/editor/designer, I always had misgivings about taking my clients’ money. If they were going to pay me, I wanted it to be because they were so pleased with my work that they actively wanted to pay me — not just because we had a contract. Of course I always billed the client for the amount we’d agreed to, because I wasn’t in a position not to do so. But I always felt lucky to get the money, rather than feeling entitled to it. Work doesn’t acquire value simply by virtue of existing; it can only have value if it fills a need that would otherwise have gone unmet.


Dance Academy (2)

(part two of two)

I recently saw a performance (well, four performances — more about that in a moment) by my favorite San Francisco dance company, FACT/SF. The piece, called “Split,” is performed by a single dancer for a single audience member, eight times a night. Four different dancers perform the show in rotation, each with a different, personal interpretation of the choreography. Naturally, then, I went to see it four times.

As you might expect from a piece that’s performed one-on-one, “Split” is largely about identity — or as FACT/SF’s director Charles Slender-White puts it, “the relationship between dissociative episodes and identity formation,” particularly among members of the queer community. In other words, it’s about the experience of finding out that you’re not who you thought you were.

Seeing the show started me on the path of thinking deeply about the nature of identity. “Identity” is a word we use all the time, but it’s not always clear what we mean by it. When I was an undergraduate philosophy major, one of the fields I studied was that of “personal identity,” which addresses questions like “If all the cells in the human body are replaced over a period of seven to ten years, in what sense can I be considered the same individual that I was ten years ago?” But that’s a technical application of the term, and not the way it tends to be used in ordinary conversation.

The news these days is filled with talk about “identity politics,” which is the idea that your membership in a group — particularly a group that has experienced oppression or discrimination — dictates your political agenda. More controversially, it holds that people who are not members of that group cannot understand your life experience, and therefore have no right to speak for you. In this context, identity can be considered simply a collection of categories into which one fits. In any political discussion, I would be considered an old, straight, white, cisgender, Jewish American man.

But does that description really constitute my identity? After all, I didn’t invent those categories. I may have some beliefs about which of them I fit into, but other people — or society at large — may have different beliefs. If neo-Nazis start rounding up Jews, it won’t help for me to tell them that I’ve never practiced Judaism. In practice, they get to decide my identity; I don’t.

I think that if “identity” is to have any real meaning, it would have to be something that’s inherent in me, not something that’s determined by others. And yet, when people talk about their own experience of establishing an identity, they tend to use those same externally defined categories. We’ve all heard people say “I thought I was straight, but I realized that I’m gay.” “I was assigned male at birth, but I’ve always been a woman.” “My light skin makes people think I’m white, but I’m really Black.” Of course these distinctions have real social and political consequences, but fitting into a particular group or category can hardly constitute who one really is.

So I came to ask, what’s my identity? Descriptors like “American” or “male” may apply to me in a political context, but they don’t resonate with me personally. “Straight” may describe who I’m attracted to, but it doesn’t say anything about who I am. “Old” may characterize my body, but not the being that inhabits it.

The more I think about it, the more I realize that real identity is undefinable and indescribable. I am who I am, and nothing more can be said about it.

It occurred to me that this may be why I’ve always had problems with my name. I’ve never much identified with the name Mark Schaeffer (or, for that matter, either Mark or Schaeffer). When I hear myself referred to that way, my immediate mental reaction is, “Who’s that?” So I’ve always sensed that I have the wrong name, but in all these years, I’ve never been able to figure out what the right one is.

I’ve asked friends — some of whom have changed their own names — what they think my “real” name would be. People have offered suggestions, but all of their proposed names felt equally arbitrary. It’s only recently that I’ve come to realize that all names are arbitrary. They’re just labels that we each put on a collection of cells that’s being replaced every seven to ten years. How do I know that I’m the same individual that I was ten years ago? At least I can say, “Well, I have the same name.”

But my true identity — whatever that is — doesn’t have a name, and it doesn’t have categories. Neither does yours. As I stated in my previous post, the dance is the dance. Now I have to add: The dancer is the dancer.


Modus Ponens

Detail from “The Death of Socrates” by Jacques-Louis David, 1787

I’ve never liked it when, in the course of working through a disagreement, someone says to me, “I understand what you’re saying, but I don’t agree.” I find that irritating. “Clearly you don’t understand,” I want to respond, “because if you really understood, you would recognize the obvious truth of what I’m saying.”

That’s just an emotional reaction, of course. I can’t really object when someone claims to understand but disagree, because I find it equally irritating when someone assumes that my disagreeing with them is the result of ignorance or laziness. I once had an English teacher who loved literature, and if I read one of his favorite books and didn’t fall in love with it, he would insist that I hadn’t read the book closely enough. Or there’s the common retort from anyone who disagrees with me on social media — “Educate yourself!” — meaning that if I knew the same facts that this person did, I would automatically have to take their side.

The problem in all of these cases is that arguments are not based solely on facts. Any convincing argument has to start from a set of premises — statements that both the person making the case and the person listening assume to be true. Philosophers consider a valid argument to be one in which the conclusion follows logically from the premises, and a sound argument to be one that is valid and whose premises are demonstrably true. The traditional example taught in philosophy classes is:

All men are mortal.
Socrates is a man.
Therefore, Socrates is mortal.

This argument is both valid and sound. Since both premises are indisputable facts, you have no choice but to accept the conclusion. (If you don’t accept the premise that all men are mortal, I would be perfectly justified — if a bit rude — in telling you to “educate yourself!”)
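For readers who like seeing the machinery spelled out, here is a minimal sketch in Lean 4 of the validity half of that claim. The names (Being, Man, Mortal, socrates) are placeholders I chose for illustration; the point is only that the conclusion follows from the premises by modus ponens, while the truth of the premises has to come from outside the logic.

```lean
-- Minimal sketch (Lean 4): the Socrates syllogism is *valid*.
-- Being, Man, Mortal, and socrates are illustrative placeholder names.
variable (Being : Type) (Man Mortal : Being → Prop) (socrates : Being)

-- Validity: given the premises as hypotheses, the conclusion follows by
-- universal instantiation plus modus ponens. Soundness would additionally
-- require establishing the premises themselves, which logic alone cannot do.
example (h1 : ∀ x, Man x → Mortal x) (h2 : Man socrates) : Mortal socrates :=
  h1 socrates h2
```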

What complicates the matter is that premises don’t necessarily have to take the form of facts. In many cases, premises are values, and values can’t be held to a standard of truth. Let’s say I make this argument:

All lying is immoral.
Socrates has lied.
Therefore, Socrates has done something immoral.

“All lying is immoral” is not a fact; it is a belief. If you and I both hold that belief, then we can agree with the conclusion of the argument. But if you don’t believe that all lying is immoral, there’s no point in my telling you to go out and learn the facts. In your eyes, my argument is valid but not sound. The best either of us can say to the other is, “I understand what you’re saying, but I disagree.”

Serious conflict occurs when something that I consider to be a value is something that you consider to be a fact. For example, let’s say you make this argument:

Interfering with the process of human reproduction is immoral.
Using contraceptives interferes with the process of human reproduction.
Therefore, using contraceptives is immoral.

I might say, “I can’t accept your conclusion, because I don’t believe that interfering with human reproduction is immoral.” And you might reply, “It’s not a belief. It’s simply true.” And if you were in an arrogant state of mind, you might add, “Educate yourself!” Unfortunately, no amount of education is going to persuade me to accept your premise, because values are not provable; and each of us is going to resent the other for questioning our integrity.

I could end with the simple conclusion that telling people to educate themselves is not a constructive contribution to public discourse. But I’m driven to go a step further, and ask: If values and beliefs aren’t facts, how do people come to treat them that way?

I’m always fascinated with people who have strong religious faith. When I ask them why, they’ll say that it’s how they were brought up, or that it’s part of their culture. The question I always want to ask, but avoid asking for fear of being rude, is, “OK, you grew up among people who held these beliefs. But what led you, personally, to accept them?” I was given a full religious indoctrination when I was growing up, and yet none of it stuck, because no one could demonstrate to me that any of it was true. What causes some people to treat God as a fact, and others to consider God an invention?

It’s not a matter of education, because I know highly educated people in both camps. Religious or not, every one of us holds a set of values that we take to be self-evident. If I say, “Hurting people unnecessarily is wrong,” I don’t feel like I’m expressing a belief; I feel like I’m stating a universal truth. Yet that value isn’t any more provable than God is. If you were to ask me why a statement like that feels like so much more than a simple opinion, I really couldn’t give you an answer.


No Man’s Land

My wife Debra and I are not outdoor people. We don’t camp; we don’t go hiking. Camping is what I used to do when I was traveling cross-country and was too poor to afford a hotel room. Now that I can afford a hotel room, why would I want to sleep on the ground?

As for hiking, I never understood the concept. So far as I can tell, hiking is the same as walking, except that you do it someplace that’s dirty, insect-ridden, too hot, and almost always uphill. I understand that hiking allows you to see some beautiful sights, but there are plenty of equally beautiful places that I can get to in my car, and I haven’t yet used them up. For us, visiting the Grand Canyon meant sitting in the El Tovar hotel dining room drinking some really nice red wine, looking out at the canyon at sunset, and saying, “Oooh! Pretty!”

We were amazed to find out that our friends were taking trips in the middle of the COVID-19 pandemic, when everything was closed. If you can’t visit museums, theaters, and restaurants, what’s the point of going anywhere? Still, we knew plenty of people who were traveling to Lake Tahoe, or Palm Springs, or some national park or other. I have no idea what they found to do when they got there.

We actually have lovely scenery right in our backyard. I mean that literally — in our backyard. Apart from regular maintenance, we haven’t done anything to it in the thirty years we’ve lived in our house. It’s all natural dirt paths and trees and vines and moss. Everyone who comes to visit us admires it. We actually admire it as well; it’s very pretty to look at through our kitchen window. We used to have a big party there every summer, to which we’d invite everyone we knew, but we stopped doing that when the amount of work started to exceed the amount of fun. Now we rarely have any reason to go out there at all.

I have to admit that I feel guilty that such a nice yard is going to waste. When someone wants to host a gathering but doesn’t have the space, we say, “Have it in our yard!” But nobody takes us up on it; they probably think they’d be imposing. When apartment-dwelling friends complain that they have no place to make a garden, we say, “Come make a garden in our yard!” But the ground out there is rock-hard, and there’s no spot that gets much sun, so the yard remains garden-free.

When Debra and I moved here, we were running a not-very-lucrative business producing educational and training materials. The only reason we could afford the place was that it was old and in run-down condition. We bought it with the understanding that we could fix it up gradually as we became able to afford the improvements, and that’s what we did. But for the first ten years or so, I remember feeling embarrassed about how shabby the house looked, and how the size of our backyard put it beyond our ability to maintain. The 17th-century philosopher John Locke had said that a person had a natural right to as much land as they could take responsibility for, and by that standard, I felt that I wasn’t earning the privilege of living here. By allowing the yard to become overrun with weeds, and living in a house with peeling paint and rotting wood, I was forfeiting my right of ownership.

That feeling eventually went away as our incomes increased and we were able to put some money into home improvements. Over many years, we completely revamped the house — inside and out — and we began to be able to pay a gardening crew to keep our yard looking neat. We encourage community (in non-pandemic times) by opening our house for concerts, and by inviting friends and neighbors to share in whatever resources we have. It’s been a long time since I felt ashamed to be living here. But I haven’t lost the belief that ownership of property doesn’t come automatically with the signing of a deed; it’s something that needs to be continually earned.

To be honest, the whole idea of owning land is hard for me to grasp. The land was here long before we were, and will be here long after we’re gone, so in what sense can we be said to own it? I think of our land in the same way I think of our cats: We’re not their owners; we’re their guardians — we’ve taken responsibility for them. In return for taking responsibility for a piece of land, I get to decide what’s done with it and who can tread on it. But calling it “my land” feels like just a figure of speech. I can own something that humans have made, but I can’t own something that God made.

I heard a story once that I really like: A stranger encounters two neighbors who are fighting over a piece of land that lies between them.

“I own this land!” says one neighbor.

“No, I own this land!” says the other.

The stranger says, “Why don’t we find out what the land thinks?” As the quarreling neighbors look on, he lowers his ear to the ground and listens; then he nods and gets up again.

“Well, what did the land say?” ask the neighbors.

The stranger replies, “She says that both of you belong to her.”



Ruling Out

To the people who know me, I’m infamous for following rules. It’s not that I naturally defer to authority — I really don’t like being told what to do — but I can usually see why a given rule is in place, and therefore believe that it’s a good idea to follow it.

I had a friend in college who used to pocket fruit in the dining hall to eat later. I reproached him for doing that, since the rule was that the buffet-style food was only to be eaten on the premises.

“What’s the harm?” he said. “Why does it matter whether I eat it now or later?”

“Because if everybody took fruit away with them, the dining hall would run out of fruit. Or they’d have to keep buying extra fruit, which means that the cost of meals would go up.”

“But everybody doesn’t do it,” he said.

“That’s because there’s a rule,” I said. “Besides, what if everybody said, ‘It’s fine if I do it, because nobody else is doing it’? That would result in everybody doing it.”

I never convinced him. He continued to pocket the fruit, and nothing bad happened as a result.

Like the not-taking-away-food rule, most rules exist to remind us that we live in a community. Doing something that benefits me may have adverse repercussions for other people. We tend not to think about other people, which is why effective rules and laws have to include consequences that apply directly to us.

When you think about it, the consequences of most rule-breaking are entirely artificial. If I drive too fast, the real consequence is that I’m putting other people in danger. But that’s too abstract to stop most people (including me, sometimes) from doing it. So the law has to provide a penalty that’s specific to me: If I’m caught speeding, I have to pay a fine. There’s no inherent connection between my rule-breaking and my having to pay a fine; it’s not as if my parting with my money is in any way reducing the danger that I created for other people. And yet we tend to accept this cause-and-effect as perfectly natural.

I have a problem with this when I’m the one making the rules. If I’m teaching a course, I want my students to complete assignments on time, so I impose a penalty for turning work in late. In part, that’s because late work causes real problems for me: I have to make time to grade each straggling assignment, when it would have been much more efficient for me to grade all of them at once.

But what if, as sometimes happens, I’ve procrastinated on grading for a couple of days? That means that if a student turns in an assignment a day or two late, it really doesn’t matter; I can still grade it along with all the others. Yet I still have to deduct points for lateness. I can see why it’s important to do this — I need to be consistent in enforcing rules; I need to teach students that there are consequences for missing deadlines — but penalizing the student in a case like this still bothers my conscience. It feels wrong to create artificial consequences for an action that has no actual consequences.

Looking at my feelings in that situation makes me realize that there’s another aspect to rules and rule-breaking that has nothing to do with consequences: plain old morality.

I’ve never cheated on a test — not out of fear of getting caught, but just because I know it’s wrong. I’m just not the kind of person who cheats on tests, even when it would be possible to do so with no risk of punishment.

Where does this sense of right and wrong come from? As I mentioned in “Only Just,” that question is the only thing that leads me to believe in anything like a deity. But regardless of where it originates, I don’t think it’s something that can be taught, at least not past early childhood.

If I’m trying to persuade my students that plagiarism is a bad thing, it doesn’t help to tell them that it’s just wrong. If they don’t believe that already, nothing I say is going to give them that belief. The only thing I can do is talk about the consequences. And the only consequences I can talk about are artificial ones — failing the course, possible expulsion from school — that are in no way direct results of the act of plagiarism itself. They’re just things that we invented to discourage students from committing an act that we believe to be immoral.

Ideally, doing something wrong would have its own inherent consequences. I guess that’s what makes people believe in karma, or “what goes around comes around.” But those beliefs seem to be used more in judging other people’s actions than in guiding our own.

For practical purposes, the only way our society seems to have of limiting people’s behavior is to make rules and provide artificial consequences for breaking them: fines, or imprisonment, or execution, or — for some people — punishment after death (such as going to hell). How strange it is that the concept of right and wrong has such a prominent place in our culture, and so little power to actually change anything!
