Ruling Out

I’m generally known to be a rule-follower, but that’s only because I can see how most rules make sense. There’s a flip side to that coin, however: If I don’t see how a rule makes sense, and if no one can explain to me why the rule is in place, then I feel no obligation to follow it.

For example, at the community college where I taught digital media courses, there was a rule that we were not supposed to dismiss a class early. I was told that the rationale was that it could be seen as shirking on the part of the faculty member, depriving students of their money’s worth, and granting them academic credit that they hadn’t earned through time in the classroom. But none of that made any sense to me. Dismissing a class early wasn’t the same as requiring students to leave early; anybody who wanted to stay for the remaining time was welcome to, and would receive my full attention. And besides, I’d always told my students that instead of pushing themselves to solve a seemingly insoluble problem, the best thing was to go away, do something else, and then come back to it later, at which time they’d probably be able to solve it quickly and easily. To me, it often seemed that allowing them to forget about digital media for a while would be more productive than continuing to force-feed them with it. So I would ignore the rule and dismiss the class early. The students did fine, and the educational system did not collapse.

Lately, however, I’ve come to realize that my attitude toward rules is sort of a self-centered one. Even if nobody is able to explain a rule in a way that makes sense to me, it doesn’t necessarily mean that there’s something wrong with the rule; it may just mean that I’m asking the wrong people to explain it.

I remember that when I was growing up, my parents used to take my sister and me out for an occasional afternoon at the bowling alley. Both of my parents were pretty good bowlers; I, unsurprisingly, was not. My father was constantly giving me advice to improve my game: Look at the dots on the lane, not at the pins; roll the ball so that it arcs toward the pins rather than traveling in a straight line, and so on. For the most part, I’d compliantly follow his instructions, but there was one step I strongly resisted: the thing he called “following through.”

“Following through” meant that after releasing the ball, I was supposed to continue to swing my arm upward until it reached about chin height. To me, this felt ridiculous. How could anything I would do after releasing the ball possibly affect its trajectory? I think the term “following through” was actually part of the problem — it implied that this was a separate step to be tacked on at the end. I certainly visualized it that way, which of course made it ineffective. After trying it a few times and feeling silly, I refused to do it anymore until my father could explain how the continued motion of my arm could influence a ball that was already partway down the lane.

My father worked at the time as a mechanical engineer, so you’d think he could have given me a quick lesson in physics and momentum. But for whatever reason, he didn’t — probably because the physics of it never occurred to him. He simply kept insisting that following through was part of good form, and I should just do it. I stubbornly declined to do a thing that made no sense to me. Naturally, my ball consistently ambled down the lane instead of hurtling the way my father’s did, and I reluctantly chalked it up to my general lack of athletic ability.

I swear that it wasn’t until I reached adulthood that I worked out what following through was about. I discovered that if I swung my arm in a fluid arc and released the ball midway, the ball traveled with much more speed and precision. Of course the motion of my arm had no effect on the ball after it was released — I had certainly been right about that. What I hadn’t realized, and what my father somehow failed to explain, was that creating the optimal conditions for launching the ball required my arm to keep moving afterward. As they say: Duh!

Recalling this outcome late in life has given me a lesson in humility. If a rule fails to make obvious sense, it’s not necessarily due to a deficiency in the rule; it may just be that I and whoever created the rule are using different frames of reference. I may be right based on the way I’m framing the problem; they may be right within their framing of the problem. The challenge is for each of us to see outside of our own frame of reference and understand the other’s. Until I’m able to do that, I need to do what I was unable to do with my father: give him the benefit of the doubt.


The Second Bite

It was my college roommate Krishna who first introduced me to the miraculous liquid known as scotch, but it was my friend Brad who, a few years later, opened the door to the diverse world of single-malt scotch by offering me my first taste of Laphroaig.

Laphroaig, for the uninitiated, tastes like dirt. There’s a story that during Prohibition, wholesalers were still able to import Laphroaig by having the distillery ship it in containers marked “cleaning fluid.” Supposedly the customs inspector who sampled it concluded that no human being would ever drink this stuff, so he let it go through.

Single-malt whiskies such as Laphroaig are made at a single distillery using local ingredients. As a result, each single malt has its own unique character that can’t be duplicated elsewhere. The most familiar brands of scotch, such as Johnnie Walker or Dewar’s, are blended whiskies whose distillers mix together a variety of single malts to create a product that’s smooth and inoffensive. Single malts, unlike blends, are willing to offend. Laphroaig, for example, is noted for its overpowering flavors of smoke and peat, which some people despise and others (like me) relish.

But “relish” is not a word that I’d use to describe my first sip of Laphroaig. It tasted so unlike the blended whiskies I was accustomed to that I thought the bartender had made a mistake. I imagined that he must accidentally have poured a shot of — well, cleaning fluid. If not for the potential embarrassment of doing so in the classy New York bar that Brad had led me into, I probably would have spit it out.

Then a remarkable thing happened when I took my second sip. Now that I’d been forced to abandon my expectations, and to realize that my previous experiences of drinking scotch were irrelevant, I was able to experience the whisky on its own terms. “Oh!” I remember thinking. “I get it now!”

That incident led to my formulating a theory that I call “the second bite.” The first bite (or sip, or revelation) of something new is about the shock of experiencing something unexpected, of having one’s preconceptions violated. That experience says more about the taster than the thing being tasted. It’s only on the second bite that one can start to appreciate the thing for what it actually is.

Although the second-bite principle applies primarily to food or drink, it’s not exclusive to comestibles. It explains, for example, why I have such an ambivalent reaction to the movie “It’s a Wonderful Life,” which so many people love unreservedly. In the unlikely event that you haven’t seen the film, its protagonist, George Bailey — played by Jimmy Stewart — is in despair, beset by problems beyond his control, and is about to take his own life. An apprentice angel, Clarence, intercedes and takes George on a tour of his hometown, showing how much worse off the town would have been if George had never been born.

The people he encounters don’t recognize George (since, from their point of view, he’d never existed), and he reacts in the way you undoubtedly would if someone with whom you’d been intimate suddenly treated you as a stranger — with shock and distress. But that’s just the first bite: his surprise and confusion in response to an experience he was not prepared for.

In each situation, he is eventually able to take the second bite, observing the state of affairs that his nonexistence has brought about. He sees that Mr. Gower, the respected pharmacist for whom he had worked in his youth, is now a drunken ex-convict who has served time for a child’s death as a result of his dispensing the wrong medication — an error that George, in real life, had prevented. He discovers that his brother Harry, who would have gone on to become a war hero, instead died at age nine in a drowning accident that George, in real life, had saved him from. These and a series of similar incidents are quite moving, and prompt us to reflect on how our own existence might have improved the world in ways we’re unaware of.

But each time, before we get to the interesting part — the lesson learned — we have to sit through the first bite: “Don’t you know me? I’m your friend/neighbor/son/husband…!” followed by George’s anguish at not being recognized. I find myself getting increasingly impatient, which is difficult when the object of one’s impatience is someone as likeable as Jimmy Stewart. “Come on, George,” I want to say. “We get it by now — you’ve never been born. Why can’t you get it?”

All too often, as Frank Capra did in directing “It’s a Wonderful Life,” we concentrate too much on the first bite, which has nothing new to tell us. All first bites are the same. But every second bite is different, and those are the ones we should be paying attention to.


All Alone in the Moonlight

Apparently there’s such a thing as cryptocurrency. I say “apparently” because I’ve never seen any or held any in my hand — doing so is impossible, because cryptocurrency is perceptible only as numbers on a screen. Cryptocurrency appears to be predicated on the idea that anything of which there is a limited supply has value, but in this case, the thing of which there is a limited supply is nothing.

As I understand it, the original idea behind Bitcoin, the first cryptocurrency, was a noble one. Standard currency has value because it’s backed by large institutions, such as governments and corporations, in which we’ve traditionally had faith. As we’ve come to trust these institutions less and less (mostly for good reasons), the aim was to devise a medium of exchange that was independent of any corporate or political entity — in other words, a currency that backs itself. (This independence is underpinned technologically by something called the blockchain, which, fortunately, no user of cryptocurrency is required to understand.)

Contrary to their inventors’ intentions, Bitcoin and its crypto-brethren — perhaps because of their lack of ties to stable institutions — have turned out to be wildly volatile, gaining and losing value unpredictably. As a result, they’ve become virtually unusable as an everyday medium of exchange, and instead have become instruments for investment, like stocks or bonds. Unlike stocks or bonds, however, they do nothing to support anything constructive in the real world.

Why am I blabbering on about cryptocurrency, instead of the personal experiences that usually form the core of this blog? It’s because I’ve lately come to realize that personal experiences are, in themselves, a form of cryptocurrency: after the moment when they happen, those experiences exist only in the form of memories, which — although they may be in limited supply — are essentially nothing.

I, like you, have a reserve of memories locked up in the Fort Knox of my brain. Some of them — the ones from which I can learn lessons — are useful, and those are the ones that I generally write about here. But there are others, equally vivid, that serve no purpose whatever: The smell of the melted-cheese sandwich my mother made in the toaster oven. The colors of the striped polo shirt that I glanced down at while running out the door into my front yard. The feeling of exhilaration I experienced the first time I managed to ride a bicycle without training wheels. The helplessness I felt when my father was mowing the lawn outside while my mother was vacuuming the carpet inside, leaving me no place to go to escape those terrifyingly loud machines.

Memories like those are unsubstantial — they exist (if “exist” is even the right word) only in my head. None of them can be substantiated by anyone other than myself. (For all I know, I might have invented them.) If a solar flare were to suddenly erase all of the hard drives on earth, cryptocurrencies — and any wealth that they embody — would disappear; similarly, when I die and my brain activity comes to an end, my memories will vanish just as completely.

And yet, for no rational reason I can think of, those memories have value to me. I embrace them and caress them, just like the hoarded coins I used to fondle when I was a child. I work each day to make more memories, just as so many of us work to make more money. Why?

I’ve never invested in cryptocurrency, but clearly I’ve invested in memories, which are just as ephemeral. Debra and I are preparing to take a long, expensive European vacation — and what is that if not an investment whose hoped-for return will be added to my mental Fort Knox?

The best spiritual teachings tell us that there is no past and there is no future; all that exists is the present moment. Life, properly lived, is an ongoing succession of present moments, experienced for what they are, with no preconceptions or expectations. Like most flawed humans, I haven’t achieved the ability to experience my life in that way — the best I can hope for is an occasional flash of insight, a fleeting fraction of a second when I can see things as they are. For the rest of the time, I have memories in the bank.


They Don’t Make Centuries Like They Used To

I don’t remember how old I was — clearly not old enough to do the math myself — but I vividly recall having my mind blown when my mother told me that there were people still alive who had been born in the nineteenth century. This must have been in the early 1960s — a time when “a century ago” meant the Civil War and the Wild West. That such an ancient era could be so close to our own seemed unimaginable.

Sixty or so years have passed since then, and “a century ago” now means 1922. Could I be the only one for whom 1922 doesn’t feel nearly as remote as the Civil War? Maybe it’s because we — or at least I — routinely watch films that were made in the 1920s, but life at that time doesn’t seem much different than life now. People drove cars, talked on telephones, and had gas stoves and electric lights. They went to the office, shopped in department stores, listened to recordings, and went to the movies. Granted, making a phone call required talking to an operator, and drivers of automobiles had to share the road with the occasional horse-drawn vehicle, but all in all, if someone were to put me in a time machine and drop me off in the 1920s, I think I’d make out just fine.

Jump ahead a decade to the 1930s, and life feels even more familiar. Cars look like cars instead of horseless carriages, phones can connect to each other without the help of an operator, kitchens are bright with white-enameled appliances, and mass media — in the form of the ubiquitous radio — has entered everyone’s home. Most important, people can suddenly talk.

To clarify, people had been blabbing for thousands of years, but in movies (the most direct way we have of getting to know them), characters didn’t really start talking until the end of the 1920s. I’m guessing that the fact that we can not only see people going about their lives, but hear them as well, is what makes them feel so much like us.

It’s disconcerting to spend time with characters in a 1930s movie — experiencing them not as relics of another time, but as living, breathing people — and suddenly remember that the actors we’re looking at are long gone. Fred Astaire and Ginger Rogers dancing, William Powell and Myrna Loy quipping, Clark Gable and Claudette Colbert bantering — it all feels so immediate. I simply can’t adjust to the fact that the events I’m watching happened ninety years ago. Ninety years is practically a century.

I find myself wondering about the people who were my current age — in their mid-sixties, or older — in the 1960s. They would have been born around or before the turn of the twentieth century. Did the Civil War and Reconstruction feel as recent to them as the 1920s do to me? Would they have felt that if a time machine dropped them somewhere in the 1860s or ’70s, they would be in relatively familiar surroundings? Or does the absence of movies and sound recordings from that time mean that the Civil War era always felt remote to people who didn’t live through it?

I’m fairly certain that for young people today, the 1920s and ’30s do feel like the distant past — probably as distant as the era of Manifest Destiny felt to me when I was young. But I’ll bet that’s largely because they haven’t seen the films that I’ve seen. (In my experience, people under 30 refuse to watch any movie that’s in black-and-white, no matter how excellent it is.)

I developed my interest in silent films when I was still in elementary school, so I realize that I’m an anomaly even in my own generation. But I’d love to find a present-day me-equivalent — someone young who has watched a lot of classic films — and find out what their relationship is to the America of a hundred years ago. Does a century feel as short to them as it does to me?


Growth Experience

[Photo: The kittens at four weeks]

My college roommate Jay, an Iowan, used to tell us that if you stand in a cornfield on a warm, quiet afternoon, you can hear the corn grow. I’m not sure how he knew this — he was from Council Bluffs, an honest-to-goodness city whose residents spend very little time in cornfields — but I have no reason to doubt him. A typical stalk of corn grows almost an inch per day, so it’s easy to imagine that those stiff, proliferating plant cells might make some sort of racket.

I think of that now because Debra and I — as we so often do — have been raising a litter of foster kittens. When they arrived from the city animal shelter, they were only a few days old. They looked like tiny mice, their eyes and ears still sealed, their legs wobbly. We had to bottle-feed them every three hours around the clock. We fretted about whether these fragile creatures would manage to survive without a mama cat to nurture them.

Well, it’s now two months later, and those mice have grown into healthy, affectionate, playful, and adorable kittens. They weigh more than five times what they did when they first arrived. Soon we’ll be returning them to the animal shelter, which will find a loving, permanent home for each of them. None of this is particularly remarkable; and yet I find myself wondering daily: How is this possible?

We used to watch these kittens latch onto a rubber nipple and suck down half a bottle of milk substitute without stopping for breath. Eventually they graduated to lapping up partly-solid gruel, and now they’re noisily feasting on juicy, brown paté spooned straight out of a can. Somehow, those nondescript foodstuffs routinely turned into more kitten. Each kitten’s weight increased by half an ounce or an ounce a day; now the daily weight gain is sometimes as much as two ounces.

I realize that growth is a pretty universal function for living things; we’ve all experienced it ourselves. What makes it feel so miraculous in this case is that it happens so quickly — from a little mound of fluff that fits in the palm of the hand to a fully-formed animal, all in the space of a few weeks. Every time I look at or hold one of these kittens, I realize that it must be growing right now, right in front of me. Cells are multiplying, differentiating, turning into organs and limbs and fur, as I watch. You’d think that if I looked hard enough, just as if I were listening intently in a cornfield, I should be able to observe the process happening. And yet all I see is a kitten doing the normal things — eating, breathing, purring — with no detectable sign of the furious activity underneath.

When I was very young, I used to stare at the hour hand on our kitchen clock, trying to catch it in motion. It clearly had to be moving, since it was in a different position each time I returned to the clock, but it frustratingly always appeared to be standing still. My parents’ explanation that its movement was so slow as to be imperceptible by human eyes was something I refused to believe. It was as unappealing as the idea that the earth was too large for me to see its curvature.

We like to think that our senses allow us to see the world as it is, and perhaps they do — but only the small slice of the world that’s available to us. Just as there are entities that are too small or too large for us to see, there are events that are too fast or too slow, wavelengths that are too short or too long. All of these levels of being exist simultaneously, and there’s no reason to believe that our human perception of reality is any better or any “realer” than any of the others. And yet — for me, at least — the only things that feel true are the things I can experience firsthand. All of the rest is just concepts and abstractions.

Clearly I’m not the only one who has this problem. The world, as we now know, is gradually growing warmer, but at a rate that’s too slow for us to experience through our senses. We know that it’s happening, but not in a way that we can perceive directly, and therefore it’s easy and natural for us to dismiss it as not quite real. Just as we can’t see the cells of a kitten multiplying, we can’t see the molecules of carbon dioxide accumulating in the atmosphere. All we can see is what’s on a larger scale — wildfires, droughts, intense weather events — and ask ourselves, as when we see a kitten somehow getting larger: How is this possible?
